Evaluation Becomes a Profession: 1973–1989

This period saw evaluation develop into a distinct field, marked by growth in approaches, in programs to train students to become evaluators, and in professional associations. At the same time, the sites of evaluation began to diversify dramatically, with the federal government playing a less dominant role.

Several prominent writers in the field proposed new and differing models. Evaluation moved beyond simply measuring whether objectives were attained, as evaluators began to consider information needs of managers and unintended outcomes. Values and standards were emphasized, and the importance of making judgments about merit and worth became apparent. These new and controversial ideas spawned dialogue and debate that fed a developing evaluation vocabulary and literature. Scriven (1972), working to move evaluators beyond the rote application of objectives-based evaluation, proposed goal-free evaluation, urging evaluators to examine the processes and context of the program to find unintended outcomes. Stufflebeam (1971), responding to the need for evaluations that were more informative to decision makers, developed the CIPP model. Stake (1975b) proposed responsive evaluation, moving evaluators away from the dominance of the experimental, social science paradigms. Guba and Lincoln (1981), building on Stake’s qualitative work, proposed naturalistic evaluation, leading to much debate over the relative merits of qualitative and quantitative methods. Collectively, these new conceptualizations of evaluation provided new ways of thinking about evaluation that greatly broadened earlier views, making it clear that good program evaluation encompasses much more than simple application of the skills of the empirical scientist. (These models and others will be reviewed in Part Two.)

This burgeoning body of evaluation literature revealed sharp differences in the authors’ philosophical and methodological preferences. It also underscored a fact about which there was much agreement: Evaluation is a multidimensional technical and political enterprise that requires both new conceptualizations and new insights into when and how existing methodologies from other fields might be used appropriately. Shadish and his colleagues (1991) said it well when, in recognizing the need for unique theories for evaluation, they noted that “as evaluation matured, its theory took on its own special character that resulted from the interplay among
problems uncovered by practitioners, the solutions they tried, and traditions of the academic discipline of each evaluator, winnowed by 20 years of experience” (p. 31).

Publications that focused exclusively on evaluation grew dramatically in the 1970s and 1980s, including journals and series such as Evaluation and Program Planning, Evaluation Practice, Evaluation Review, Evaluation Quarterly, Educational Evaluation and Policy Analysis, Studies in Educational Evaluation, Canadian Journal of Program Evaluation, New Directions for Program Evaluation, Evaluation and the Health Professions, ITEA Journal of Tests and Evaluation, and the Evaluation Studies Review Annual. Others that omit evaluation from the title but highlight it in their contents included Performance Improvement Quarterly, Policy Studies Review, and the Journal of Policy Analysis and Management. In the latter half of the 1970s and throughout the 1980s, the publication of evaluation books, including textbooks, reference books, and even compendia and encyclopedias of evaluation, increased markedly. In response to the demands and experience gained from practicing evaluation in the field, a unique evaluation content developed and grew.

Simultaneously, professional associations and related organizations were formed. The American Educational Research Association’s Division H was an initial focus for professional activity in evaluation. During this same period, two professional associations were founded that focused exclusively on evaluation: the Evaluation Research Society (ERS) and Evaluation Network. In 1985, these organizations merged to form the American Evaluation Association. In 1975, the Joint Committee on Standards for Educational Evaluation, a coalition of 12 professional associations concerned with evaluation in education and psychology, was formed to develop standards that both evaluators and consumers could use to judge the quality of evaluations. In 1981, the committee published Standards for Evaluations of Educational Programs, Projects, and Materials. In 1982, the Evaluation Research Society developed a set of standards, or ethical guidelines, for evaluators to use in practicing evaluation (Evaluation Research Society Standards Committee, 1982). (These Standards and the 1995 Guiding Principles, a code of ethics developed by the American Evaluation Association to update the earlier ERS standards, will be reviewed in Chapter 3.) These activities contributed greatly to the formalization of evaluation as a profession with standards for judging the results of evaluation, ethical codes for guiding practice, and professional associations for training, learning, and exchanging ideas.

While the professional structures for evaluation were being formed, the markets for evaluation were changing dramatically. The election of Ronald Reagan in 1980 brought about a sharp decline in federal evaluations as states were given block grants, and spending decisions and choices about evaluation requirements were delegated to the states. However, the decline in evaluation at the federal level resulted in a needed diversification of evaluation, not only in settings, but also in approaches (Shadish et al., 1991). Many state and local agencies began doing their own evaluations. Foundations and other nonprofit organizations began emphasizing evaluation. As the funders of evaluation diversified, the nature and methods of evaluation adapted and changed. Formative evaluations that examine programs to provide feedback for incremental change and improvement and to find the links
between program actions and outcomes became more prominent. Michael Patton’s utilization-focused evaluation, emphasizing the need to identify a likely user of the evaluation and to adapt questions and methods to that user’s needs, became a model for many evaluators concerned with use (Patton, 1975, 1986). Guba and Lincoln (1981) urged evaluators to make greater use of qualitative methods to develop “thick descriptions” of programs, providing more authentic portrayals of the nature of programs in action. David Fetterman also began writing about alternative methods with his book on ethnographic methods for educational evaluation (Fetterman, 1984). Evaluators who had previously focused on policymakers (e.g., Congress, cabinet-level departments, legislators) as their primary audience began to consider multiple stakeholders and more qualitative methods as different sources funded evaluation and voiced different needs. Participatory methods for involving many different stakeholders, including those often removed from decision making, emerged and became prominent. Thus, the decline in federal funding, while dramatic and frightening for evaluation at the time, led to the development of a richer and fuller approach to determining merit and worth.