Provus Discrepancy Evaluation Model
Another approach to evaluation in the Tylerian tradition was developed by Malcolm Provus, who based his approach on his evaluation assignments in the Pittsburgh public schools (Provus, 1971, 1973). Provus viewed evaluation as a continuous information-management process designed to serve as “the watchdog of program management” and the “handmaiden of administration in the management of program development through sound decision making” (Provus, 1973, p. 186). Although his was, in some ways, a management-oriented evaluation approach, the key characteristic of his proposals stemmed from the Tylerian tradition. Provus defined evaluation as a process of (1) agreeing on standards (another term used in place of objectives),1 (2) determining whether a discrepancy exists between the performance of some aspect of a program and the standards set for that performance, and (3) using information about discrepancies to decide whether to improve, maintain, or terminate the program or some aspect of it. He called his approach, not surprisingly, the Discrepancy Evaluation Model (DEM).

1 Although standards and objectives are not synonymous, Provus used them interchangeably. Stake (1970) also stated that “standards are another form of objective: those seen by outside authority figures who know little or nothing about the specific program being evaluated but whose advice is relevant to programs in many places” (p. 185).
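For readers who find it helpful to see the logic spelled out, the three steps can be pictured as a simple comparison routine. The sketch below is our illustration, not a notation Provus ever used; the names (Standard, decide) and all numbers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Standard:
    """Step 1: an agreed-on expectation for one aspect of a program."""
    aspect: str        # e.g., "training hours delivered"
    expected: float    # the level of performance the design calls for
    tolerance: float   # how much shortfall is still acceptable

def discrepancy(standard: Standard, observed: float) -> float:
    """Step 2: the gap between the standard and observed performance."""
    return standard.expected - observed

def decide(standard: Standard, observed: float) -> str:
    """Step 3: use the discrepancy to improve, maintain, or terminate."""
    if discrepancy(standard, observed) <= standard.tolerance:
        return "maintain"
    return "improve or terminate"   # triggers the problem-solving process

# A hypothetical standard of 40 training hours, with 31 actually delivered.
hours = Standard("training hours delivered", expected=40.0, tolerance=5.0)
print(decide(hours, observed=31.0))   # -> improve or terminate
```

The tolerance field reflects the practical point that a trivial gap between standard and performance need not set off corrective action.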
Provus determined that, as a program is being developed, it goes through four developmental stages, to which he added a fifth, optional stage:
1. Definition
2. Installation
3. Process (interim products)
4. Product
5. Cost-benefit analysis (optional)
During the definition, or design, stage, the focus of work is on defining goals and processes or activities and delineating necessary resources and participants to carry out the activities and accomplish the goals. Provus considered programs to be dynamic systems involving inputs (antecedents), processes, and outputs (outcomes). Standards or expectations were established for each stage. These standards were the objectives on which all further evaluation work was based. The evaluator’s job at the design stage is to see that a complete set of design specifications is produced and that they meet certain criteria: theoretical and structural soundness.
At the installation stage, the program design or definition is used as the standard against which to judge program operation. The evaluator performs a series of congruency tests to identify any discrepancies between expected and actual implementation of the program or activity. The intent is to make certain that the program has been installed as it was designed. This is important because studies have found that staff vary as much in implementing a single program as they do in implementing several different ones. The degree to which program specifications are followed is best determined through firsthand observation. If discrepancies are found at this stage, Provus proposed several solutions to be considered: (a) changing the program definition to conform to the way in which the program is actually being delivered, if the actual delivery seems more appropriate; (b) making adjustments in the delivery of the program to conform better to the program definition (through providing more resources or training); or (c) terminating the activity if it appears that further development would be futile in achieving program goals.
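A congruency test amounts to an element-by-element comparison of the design specification against what is observed in the field. The sketch below is hypothetical; the specification entries are invented, and the printed prompt simply echoes alternatives (a) through (c) above.

```python
# Hypothetical design specification and firsthand field observations.
design_spec = {"sessions_per_week": 3, "group_size": 8, "uses_curriculum_x": True}
observed    = {"sessions_per_week": 2, "group_size": 8, "uses_curriculum_x": True}

def congruency_test(spec: dict, actual: dict) -> list:
    """List the elements where actual implementation departs from the design."""
    return [element for element in spec if actual.get(element) != spec[element]]

for element in congruency_test(design_spec, observed):
    # For each discrepancy, staff and evaluator choose among:
    #   (a) revising the definition to match (more appropriate) actual delivery,
    #   (b) adjusting delivery to match the definition (resources, training), or
    #   (c) terminating the activity if further development looks futile.
    print(f"discrepancy in {element}: consider options (a), (b), or (c)")
```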
During the process stage, evaluation focuses on gathering data on the progress of participants to determine whether their behaviors have changed as expected. Provus used the term “enabling objective” to refer to those gains participants should be making if longer-term program goals are to be reached. If certain enabling objectives are not achieved, the activities leading to those objectives are revised or redefined, or the validity of the evaluation data itself is questioned; if it appears that the discrepancy cannot be eliminated, a final option is to terminate the program.
At the product stage, the purpose of evaluation is to determine whether the terminal objectives for the program have been achieved. Provus distinguished between immediate outcomes, or terminal objectives, and long-term outcomes, or ultimate objectives. He encouraged the evaluator to go beyond the traditional emphasis on end-of-program performance and to make follow-up studies, based on ultimate objectives, a part of all program evaluations.
Provus also suggested an optional fifth stage that called for a cost-benefit analysis and a comparison of the results with similar cost analyses of comparable programs. In recent times, with funds for human services becoming scarcer, cost-benefit analyses have become a part of many program evaluations.
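Because the optional fifth stage reduces to comparing benefit-cost figures across comparable programs, it can be summarized in a few lines of arithmetic; all figures below are invented for illustration.

```python
# Hypothetical costs and monetized benefits for two comparable programs.
programs = {
    "program_a": {"cost": 120_000, "benefit": 180_000},
    "program_b": {"cost": 150_000, "benefit": 195_000},
}

for name, p in programs.items():
    ratio = p["benefit"] / p["cost"]   # ratio > 1.0 means a net benefit
    print(f"{name}: benefit-cost ratio = {ratio:.2f}")
# program_a (1.50) yields more benefit per dollar than program_b (1.30)
```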
The Discrepancy Evaluation Model was designed to facilitate the development of programs in large public school systems and was later applied to statewide evaluations by a federal bureau. A complex approach that works best in larger systems with adequate staff resources, its central focus is on identifying discrepancies to help managers determine the extent to which program development is proceeding toward attainment of stated objectives. It attempts to assure effective program development by preventing the activity from proceeding to the next stage until all identified discrepancies have been removed. Whenever a discrepancy is found, Provus suggested a cooperative problem-solving process for program staff and evaluators. The process called for asking the following questions: (1) Why is there a discrepancy? (2) What corrective actions are possible? (3) Which corrective action is best? This process usually required that additional information be gathered and criteria developed to allow rational, justifiable decisions about corrective actions (or terminations). This particular problem-solving activity was a new addition to the traditional objectives-oriented evaluation approach.
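The gating rule described here, under which the activity may not advance until discrepancies at the current stage are resolved, can be sketched as a control loop. Again, this is an assumption-laden illustration rather than anything Provus formalized: the stage names come from the list above, the recorded discrepancies are invented, and resolve() stands in for the three problem-solving questions.

```python
# Stages in the order Provus describes; cost-benefit analysis is optional.
STAGES = ["definition", "installation", "process", "product"]

# Hypothetical record of open discrepancies at each stage.
open_discrepancies = {
    "definition": [],
    "installation": ["training sessions shorter than the design specifies"],
    "process": [],
    "product": [],
}

def resolve(stage: str, issue: str) -> bool:
    """Stands in for the cooperative problem-solving process:
    (1) Why is there a discrepancy? (2) What corrective actions are
    possible? (3) Which corrective action is best?
    Returns False when further development looks futile."""
    print(f"[{stage}] resolving: {issue}")
    return True   # assume a corrective action is found

def run_dem() -> str:
    for stage in STAGES:
        # The program may not advance while discrepancies remain open.
        while open_discrepancies[stage]:
            issue = open_discrepancies[stage].pop()
            if not resolve(stage, issue):
                return f"terminated at the {stage} stage"
    return "all stages completed"

print(run_dem())
```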
Though the Discrepancy Evaluation Model was one of the earliest approaches to evaluation, elements of it can still be found in many evaluations. For example, in Fitzpatrick’s interview with David Fetterman, a developer of empowerment evaluation, about his evaluation of the Stanford Teacher Education Program (STEP), Fetterman uses the discrepancy model to identify program areas (Fitzpatrick & Fetterman, 2000). The fact that the model continues to influence evaluation studies 30 years later is evidence of how these seminal approaches continue to shape the practice of evaluation.