Diverse Conceptions of Program Evaluation

The many evaluation approaches that have emerged since 1960 range from comprehensive models to checklists of actions to be taken. Some authors opt for a comprehensive approach to judging a program, while others view evaluation as a process of identifying and collecting information to assist decision makers. Still others see evaluation as synonymous with professional judgment, where judgments about a program’s quality are based on opinions of experts. In one school of thought, evaluation is viewed as the process of comparing performance data with clearly specified goals or objectives, while in another, it is seen as synonymous with carefully controlled experimental research on programs to establish causal links between programs and outcomes. Some focus on the importance of naturalistic inquiry or urge that value pluralism be recognized, accommodated, and preserved. Others focus on social equity and argue that those involved with the entity being evaluated should play an important, or even the primary, role in determining what direction the evaluation study takes and how it is conducted.

The various models are built on differing—often conflicting—conceptions and definitions of evaluation. Let us consider an example from education.

• If one viewed evaluation as essentially synonymous with professional judgment, the worth of an educational program would be assessed by experts (often in the subject matter to be studied) who observed the program in action, examined the curriculum materials, or in some other way gleaned sufficient information to record their considered judgments.

• If evaluation is viewed as a comparison between student performance indicators and objectives, standards would be established for the curriculum and relevant student knowledge or skills would be measured against this yardstick, using either standardized or evaluator-constructed instruments.

• If an evaluation is viewed as providing useful information for decision making, the evaluator, working closely with the decision maker(s), would identify the decisions to be made and collect sufficient information about the relative advantages and disadvantages of each decision alternative to judge which was best. Or, if the decision alternatives were more ambiguous, the evaluator might collect information to help define or analyze the decisions to be made.

• If the evaluator emphasized a participative approach, he or she would identify the relevant stakeholder groups and seek information on their views of the program and, possibly, their information needs. The data collection would focus on qualitative measures, such as interviews, observations, and content analysis of documents, designed to provide multiple perspectives on the program. Stakeholders might be involved at each stage of the evaluation to help build evaluation capacity and to ensure that the methods used, the interpretation of the results, and the final conclusions reflected the multiple perspectives of the stakeholders.


• If the evaluator saw evaluation as critical for establishing the causal links between the program activities and outcomes, he or she might use random assignment of students, teachers, or schools to the program and its alternatives; collect quantitative data on the intended outcomes; and draw conclusions about the program’s success in achieving those outcomes.

As these examples illustrate, the way in which one views evaluation has a direct impact on the manner in which the evaluation is planned and the types of evaluation methods that are used. Each of the previous examples, when reviewed in detail, might be considered an excellent evaluation. But evaluations must consider the context in which they are to be conducted and used. Each context—the nature and stage of the program, the primary audiences for the study and the needs and expectations of other stakeholders, and the political environment in which the program operates—holds clues to the approach that will be most appropriate for conducting an evaluation study that makes a difference in that context. Therefore, without a description of the context, we cannot even begin to consider which of the examples would lead to the best evaluation study. Nor can we judge, based on our own values, which example is most appropriate. Instead, we must learn about the characteristics and critical factors of each approach so that we can make appropriate choices when conducting an evaluation in a specific context.