The Influence of Paradigms on Evaluation Practice.
These philosophical paradigms, and their implications for methodological choices, have influenced the development of different evaluation approaches. Some have argued that paradigms and qualitative and quantitative methods should not be mixed because the core beliefs of postpositivists and constructivists are incompatible (Denzin & Lincoln, 1994). As noted, Reichardt and Rallis (1994) argued and demonstrated that the paradigms were compatible. These and other pragmatists, representing different methodological stances (quantitative and qualitative), disputed the incompatibility argument and urged evaluators and researchers to look beyond ontological and epistemological arguments to consider what they are studying and the appropriate methods for studying the issues of concern. In other words, evaluative and methodological choices should not be based on paradigms or philosophical views, but on the practical characteristics of each specific evaluation and the concepts to be measured in that particular study. Today, there are many evaluators, some of whose approaches will be discussed in subsequent chapters, who skip the arguments over paradigms and prefer a pragmatic approach (Patton, 1990, 2001; Tashakkori & Teddlie, 2003). Howe (1988) and, more recently, Tashakkori and Teddlie (1998) have proposed the pragmatic approach as a paradigm in itself. They see discussions of ontology and epistemology as fruitless and unnecessary and argue that researchers' and evaluators' choice of methods should be based on the questions the evaluator or researcher is trying to answer. They write, "Pragmatist researchers consider the research question to be more important than either the methods they use or the paradigm that underlies the method" (Tashakkori & Teddlie, 2003, p. 21).

It is useful, however, for readers to be familiar with these paradigms because their philosophical assumptions were key influences on the development of different evaluation approaches and continue to play a role in many evaluations and approaches.

Methodological Backgrounds and Preferences

For many years, evaluators differed, and argued, about the use and value of qualitative or quantitative methods, as suggested previously. These methodological preferences were derived from the older paradigms described earlier. That is, the postpositivist paradigm focused on quantitative methods as a better way to obtain objective information about causal relationships among the phenomena that evaluators and researchers studied. To be clear, quantitative methods are ones that yield numerical data. These may include tests, surveys, and direct measures of certain quantifiable constructs such as the percentage of entering students who graduate from a high school to examine a school's success, blood alcohol content for the evaluation of a drunk-driver treatment program, or the number of people who are unemployed to evaluate economic development programs. Quantitative methods also rely on experimental and quasi-experimental designs, or multivariate statistical methods, to establish causality.
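To make the idea of a direct quantitative measure concrete, here is a minimal Python sketch of the graduation-rate example. The cohort sizes are hypothetical and serve only to illustrate how such a measure reduces a construct ("a school's success") to a number.

```python
def graduation_rate(entering, graduated):
    """Percentage of an entering high-school cohort that graduated:
    a simple, directly quantifiable measure of a school's success."""
    return 100.0 * graduated / entering

# Hypothetical cohort: 500 entering students, 430 eventual graduates.
rate = graduation_rate(entering=500, graduated=430)
print(rate)  # → 86.0
```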

Constructivists were more concerned with describing different perspectives and with exploring and discovering new theories. Guba and Lincoln discussed developing "thick descriptions" of the phenomenon being studied. Such in-depth descriptions were more likely to be made using qualitative observations, interviews, and analyses of existing documents. Constructivists also see the benefit of studying causal relationships, but their emphasis is more on understanding those causal relationships than on establishing a definitive causal link between a program and an outcome. Given these emphases, constructivists favored qualitative measures. Qualitative measures are not readily reducible to numbers and include data collection methods such as interviews, focus groups, observations, and content analysis of existing documents.

Some evaluators have noted that the quantitative approach is often used for theory testing or confirmation while qualitative approaches are often used for exploration and theory development (Sechrest & Figueredo, 1993; Tashakkori & Teddlie, 1998). If the program to be evaluated is based on an established theory and the interest of the evaluation is in determining whether that theory applies in a new setting, a quantitative approach might be used to determine if, in fact, the causal mechanisms or effects hypothesized by the theory actually did occur. For example, suppose a reading program based on an established theory is being tried with a younger age group or in a new school setting. The focus is on determining whether the theory works in this new setting to increase reading comprehension as it has in other settings. Students might be randomly assigned to either the new method or the old one for a period of a few months, and then data would be collected through tests of reading comprehension. While qualitative methods could also be used to examine the causal connections, if the focus were on firmly establishing causality, quantitative approaches might be preferred. In contrast, if the evaluator is evaluating an experimental program or policy for which the theory is only loosely developed (for example, a new merit pay program for teachers in a particular school district), a qualitative approach would generally be more appropriate to better describe and understand what is going on in the program. Although a few districts are experimenting today with merit pay, little is known about how merit pay might work in educational settings, and results from other sectors are mixed (Perry, Engbers, & Jun, 2009; Springer & Winters, 2009).
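The randomized reading-program design sketched above can be expressed in a few lines of Python. Everything here is a hypothetical illustration: the function names, group sizes, and comprehension scores are invented for the example, not drawn from any actual study, and a real evaluation would follow the difference in means with an appropriate significance test.

```python
import random
import statistics

def randomly_assign(student_ids, seed=0):
    """Randomly split students between the new reading method and the old one."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(student_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (new-method group, old-method group)

def estimated_effect(new_scores, old_scores):
    """Difference in mean reading-comprehension scores after a few months."""
    return statistics.mean(new_scores) - statistics.mean(old_scores)

# Hypothetical post-test comprehension scores (0-100 scale).
new_method = [78, 82, 75, 90, 85]
old_method = [70, 74, 68, 80, 72]
print(round(estimated_effect(new_method, old_method), 1))  # → 9.2
```

Random assignment is what justifies reading the mean difference causally: with groups formed by chance, preexisting differences between students are, on average, balanced across conditions.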
In this case, it would be important to collect much qualitative data through interviews with teachers, principals, and other staff; observations at staff meetings; content analysis of policy documents; and other methods to learn more about the impact of merit pay on the school environment; teacher retention, satisfaction, and performance; teamwork; teacher-principal relations; and many other issues.

In the beginning years of evaluation, most evaluators' training was in quantitative methods. This was particularly true for evaluators coming from the disciplines of psychology, education, and sociology. The emergence of qualitative methods in evaluation provided new methodologies that were initially resisted by those more accustomed to quantitative measures. Today, however, most evaluators (and researchers) acknowledge the value of mixed methods, and most graduate programs recognize the need to train their students in both, though some may focus more on one method than the other. For researchers, who tend to study the same or a similar subject for most of their careers, intensive training in a few methodologies suited to the types of constructs and settings they study is appropriate. But evaluators study many different programs and policies, containing many different important constructs, over the course of their careers. Therefore, evaluators now recognize the need to have skills in both qualitative and quantitative methods in order to select the most appropriate method for the program and context they are evaluating.
