Applying the Consumer-Oriented Approach

A key step in judging a product is determining the criteria to be used. In the consumer-oriented model, these criteria are explicit and are presumably ones valued by the consumer. Although Scriven writes about the possibility of conducting needs assessments to identify criteria, his needs assessments are not formal surveys of consumers to determine what they would like. Instead, his needs assessments focus on a “functional analysis,” which, he writes, is “often a surrogate for needs assessments in the case of product evaluation” (Scriven, 1983, p. 235). By functional analysis, Scriven means becoming familiar with the product and considering what dimensions are important to its quality:

Once one understands the nature of the evaluand, . . . one will often understand rather fully what it takes to be a better and a worse instance of that type of evaluand. Understanding what a watch is leads automatically to understanding what the dimensions of merit for one are—time-keeping, accuracy, legibility, sturdiness, etc. (1980, pp. 90–91)


Thus, his criteria are identified by studying the product to be evaluated, not by previous, extended experience with the product. Standards, developed next, are levels of the criteria to be used in the measurement and judgment process. They are often created or recognized when comparing the object of the evaluation with its competitors. Since the goal is to differentiate one product from another to inform the consumer about quality, standards might be relatively close together when competitors’ performances on a criterion are similar. In contrast, standards might be quite far apart when competitors differ widely. Standards, of course, can be influenced by factors other than competitors, such as safety issues, regulatory requirements, and efficiency factors that provide common benchmarks.

Scriven’s work in product evaluation focused on describing this process and, in part because identifying criteria can be difficult, on developing checklists of criteria for others to use in evaluating products. His product checklist published in 1974 reflects the potential breadth of criteria that he recommends using in evaluating educational products (Scriven, 1974b). This product checklist, which remains useful today, was the result of reviews commissioned by the federal government, focusing on educational products developed by federally sponsored research and development centers and regional educational laboratories. It was used in the examination of more than 90 educational products, most of which underwent many revisions during the review. Scriven stressed that the items in this checklist were necessitata, not desiderata. They included the following:

1. Need: Number affected, social significance, absence of substitutes, multiplicative effects, evidence of need

2. Market: Dissemination plan, size, and importance of potential markets

3. Performance—True field trials: Evidence of effectiveness of final version with typical users, with typical aid, in typical settings, within a typical time frame

4. Performance—True consumer: Tests run with all relevant consumers, such as students, teachers, principals, school district staff, state and federal officials, Congress, and taxpayers

5. Performance—Critical comparisons: Comparative data provided on important competitors such as no-treatment groups, existing competitors, projected competitors, created competitors, and hypothesized competitors

6. Performance—Long-term: Evidence of effects reported at pertinent times, such as a week to a month after use of the product, a month to a year later, a year to a few years later, and over critical career stages

7. Performance—Side effects: Evidence of independent study or search for unintended outcomes during, immediately following, and over the long-term use of the product

8. Performance—Process: Evidence of product use provided to verify product descriptions, causal claims, and the morality of product use

9. Performance—Causation: Evidence of product effectiveness provided through randomized experimental study or through defensible quasi-experimental, ex post facto, or correlational studies


10. Performance—Statistical significance: Statistical evidence of product effectiveness, making use of appropriate analysis techniques, significance levels, and interpretations

11. Performance—Educational significance: Educational significance demonstrated through independent judgments, expert judgments, judgments based on item analysis and raw scores of tests, side effects, long-term effects and comparative gains, and educationally sound use

12. Cost-effectiveness: A comprehensive cost analysis made, including expert judgment of costs, independent judgment of costs, and comparison to competitors’ costs

13. Extended Support: Plans made for post-marketing data collection and improvement, in-service training, updating of aids, and study of new uses and user data

These criteria are comprehensive, addressing areas from need to process to out- comes to cost. Scriven also developed a checklist to use as a guide for evaluating program evaluations, the Key Evaluation Checklist (KEC) (Scriven, 1991c, 2007). It can be found at http://www.wmich.edu/evalctr/checklists/kec_feb07.pdf.
