What are the ethical obligations of evaluators?

We will briefly review the ethical components of the Program Evaluation Standards and the Guiding Principles here. The complete text of both documents is presented in Appendix A.

The Program Evaluation Standards. Before moving into a discussion of the Standards themselves, let us briefly describe how they were developed. When it was appointed in 1975, the Joint Committee on Standards for Educational Evaluation was charged with developing standards that evaluators and other audiences could use to judge the overall quality of an evaluation. Today, 18 academic and professional associations belong to the Joint Committee and oversee the revision and publication of the Standards.3 The Standards have been approved by the American National Standards Institute (ANSI) and not only have served as a model for educational evaluations in the United States and Canada, but have also been adapted for use in other countries and in disciplines beyond education, such as housing and community development (Stufflebeam, 2004a). The first Standards, published in 1981, were designed to address evaluation activities in public schools in the United States. The 1994 revision expanded their purview to other educational settings, including higher education and training in medicine, law, government, corporations, and other institutions.

The developers of the Standards and their revisions make use of an unusual “public standard-setting process” in which evaluators, educators, social scientists, and lay citizens review, field-test, comment on, and validate the standards (Joint Committee, 1994, p. xvii). Daniel Stufflebeam, who has led the development of the Standards, notes that a key step in the early stages in 1975 was the decision to include on the Joint Committee not only professional groups that represent evaluators and applied researchers, but also professional associations that represent school administrators, teachers, counselors, and others who are often clients for educational evaluation (Stufflebeam, 2004a).

3. These include the American Evaluation Association (AEA) and the Canadian Evaluation Society (CES), as well as the American Educational Research Association (AERA), the Canadian Society for the Study of Education (CSSE), the American Psychological Association (APA), the National Council on Measurement in Education (NCME), and many associations concerned with school administration and education, including the National Education Association (NEA), the American Association of School Administrators (AASA), and the Council of Chief State School Officers (CCSSO).


Inclusion of these groups on the Joint Committee led to some contentious discussions about what constituted a good evaluation. However, these discussions helped produce standards that are useful guides for practicing evaluators in designing evaluations and in helping clients and other stakeholders know what to expect from an evaluation. The Standards also play a major role in metaevaluations, or judging the final product of an evaluation. (See Chapter 13 for more on metaevaluations.)

The Joint Committee defines an evaluation standard as “[a] principle mutually agreed to by people engaged in the professional practice of evaluation, that, if met, will enhance the quality and fairness of an evaluation” (Joint Committee, 1994, p. 3). As such, the Standards are important for the reader to consider before we move into a discussion of how evaluations should be conducted, because they communicate what the evaluator should attend to in planning and carrying out an evaluation. They serve as a guide for the evaluator and a means for the evaluator to discuss and reflect on issues critical to the evaluation with clients and other stakeholders.4

The Joint Committee developed 30 standards, which are presented in their entirety in Appendix A. Our attention will be devoted here to the five important attributes of an evaluation under which the 30 standards are organized. The identification of these attributes was, in itself, quite a significant step for the field of evaluation because it signified the major areas of importance in conducting an evaluation. The original four attributes are (1) utility, (2) feasibility, (3) propriety, and (4) accuracy; the 2009 revision of the Standards added a fifth, (5) evaluation accountability. Note that prior to the identification of these areas, it was generally assumed that evaluations should be judged based on their validity, or accuracy, because validity is the primary means for judging the quality of research (Stufflebeam, 2004a). The identification of the other areas reminded evaluators and their clients that evaluation also needed to attend to other issues, because it was being conducted in the field and for different purposes than research.

To articulate the meaning of the original four areas, let us draw from the Joint Committee’s publication of the Standards in 1994.5 Their introduction to each area addresses the following concepts:

Utility standards guide evaluations so that they will be informative, timely, and influential. They require evaluators to acquaint themselves with their audiences, define the audiences clearly, ascertain the audiences’ information needs, plan evaluations to respond to these needs, and report the relevant information clearly and in a timely fashion. . . .

4. The Joint Committee notes that not every standard is relevant to every evaluation. They recognize that the context for individual evaluations differs and, therefore, the nature of the evaluation differs. The evaluator and others should consider which of the standards are most relevant for guiding or judging an individual evaluation.

5. In late 2009, the Joint Committee approved new standards to be published in 2010. We have obtained a prepublication list of the new standards, but the discussion and explanation of these standards are to be published in 2010. Therefore, we present the 2010 standards, but will rely on the previous version for a discussion of the original four categories and their meanings.


Feasibility standards recognize that evaluations usually are conducted in a natural, as opposed to a laboratory, setting and consume valuable resources. Therefore, evaluation designs must be operable in field settings, and evaluations must not consume more resources, materials, personnel, or time than necessary to address the evaluation questions. . . .

Propriety standards reflect the fact that evaluations affect many people in a variety of ways. These standards are intended to facilitate protection of the rights of individuals affected by an evaluation. They promote sensitivity to and warn against unlawful, unscrupulous, unethical, and inept actions by those who conduct evaluations. . . .

Accuracy standards determine whether an evaluation has produced sound information. The evaluation of a program must be comprehensive; that is, the evaluators should have considered as many of the program’s identifiable features as practical and should have gathered data on those particular features judged important for assessing the program’s worth or merit. Moreover, the information must be technically adequate, and the judgments rendered must be linked logically to the data. (Joint Committee, 1994, pp. 5–6)

The identification of these four areas of concern reminds us that evaluation is conducted in the field with the intention of providing sound information to others. The first area emphasizes the importance of use to evaluation and identifies some of the steps the evaluator can take to maximize the likelihood that the evaluation will be used. The identification of feasibility as an area of concern reflects the special considerations that must be made because evaluation takes place in real-world settings with real clients and stakeholders. Procedures must be practical and cost-effective. In addition, for the evaluation to be feasible, the evaluator must consider the context in which the evaluation is conducted, including its political and cultural interests. Accuracy standards reflect concerns with the scope of the study and the means by which data are collected. The means for addressing each of these three areas will be discussed further in subsequent chapters. Utility standards and use are the focus of Chapter 17, in which we discuss research and theories on the use of evaluation and recommend ways to increase use. Feasibility is addressed in Chapter 14, in which we discuss planning and managing the study. Finally, accuracy is examined in Chapters 15 and 16, where we discuss methodological concerns.

Here we will focus on the propriety area because our primary concern in this chapter is with ethical conduct in evaluation. The specific standards listed under propriety in the new 2010 Standards are as follows:

• “P1 Responsive and Inclusive Orientation. Evaluations should be responsive to stakeholders and their communities.” This standard, like many in the 2010 edition, emphasizes the evaluator’s obligation to be responsive to stakeholders and to consider the many different groups who may have interests in the evaluation.

• “P2 Formal Agreements. Evaluation agreements should be negotiated to make obligations explicit and take into account the needs, expectations, and cultural contexts of clients and other stakeholders.” External evaluations generally include a formal agreement, but internal evaluations often do not. The Joint Committee encourages evaluators to develop a formal agreement at the planning stage of each evaluation and to use it as a guide. The guidelines to this standard provide a useful list of the types of information that might be included in a formal agreement.

• “P3 Human Rights and Respect. Evaluations should be designed and conducted to protect human and legal rights and maintain the dignity of participants and other stakeholders.” The rights of human subjects are understood to include issues such as obtaining informed consent, maintaining rights to privacy, and assuring confidentiality for those from whom data are collected. (See the later section on Institutional Review Boards, or IRBs, in this chapter.)

• “P4 Clarity and Fairness. Evaluations should be understandable and fair in addressing stakeholder needs and purposes.” New to the 2010 edition of the Standards is an emphasis on clarity, recognizing that many different audiences and stakeholder groups have interests in the evaluation and must receive results in ways that are understandable to them.

• “P5 Transparency and Disclosure. Evaluations should provide complete descriptions of findings, limitations, and conclusions to all stakeholders, unless doing so would violate legal and propriety obligations.” Government in the early twenty-first century has emphasized transparency, and the wording of this 2010 standard reflects that emphasis, although previous standards have also emphasized disclosing findings to all who are affected or interested, within legal boundaries.

• “P6 Conflicts of Interest. Evaluations should openly and honestly identify and address real or perceived conflicts of interest that may compromise the evaluation.” Conflicts of interest cannot always be totally eliminated. But if evaluators consider potential conflicts of interest and make their values and biases explicit in as open and honest a way as possible, in the spirit of “let the buyer beware,” clients can at least be alert to biases that may unwittingly creep into the work of even the most honest evaluators.

• “P7 Fiscal Responsibility. Evaluations should account for all expended resources and comply with sound fiscal procedures and processes.” This standard, included in all editions, reflects the important fiscal obligations of evaluations and emphasizes that the proper handling of these fiscal responsibilities, like respecting human rights, is part of the propriety of the evaluation.

Note that the Standards emphasize quite a few different issues and, thus, illustrate how ethical concerns cross many dimensions of evaluation and should be considered throughout the study. Traditionally, ethical codes in the social sciences focus on the means for collecting data from others; that is, ensuring informed consent, confidentiality, or anonymity, as appropriate, and dealing with other important issues in protecting the rights of individuals when collecting data from them. These standards indicate that ensuring the rights of human subjects is certainly one very important standard in evaluation. But the propriety standards also communicate other areas of ethical concern for the evaluator, such as being responsive to many stakeholders; considering the cultural and political values that are important to the evaluation; being clear on agreements and obligations in the evaluation, conflicts of interest, and reports of findings and conclusions; and managing fiscal resources appropriately. The standard on formal agreements attests to the fact that evaluations, unlike research, always include other parties and, therefore, misunderstandings can arise. Typically, an evaluation involves a partnership between the evaluator and the client. Putting agreements in writing and following them, or formally modifying them as changes are needed, provides the evaluator and the client with a means for clarifying these expectations. At the beginning of the process, the evaluator and client can talk through their understandings and expectations and put them in writing. This agreement then provides a document to use to monitor these understandings about the evaluation and, thus, can prevent the violation of other propriety standards. Clients, for example, may not be aware of propriety issues such as informed consent or the obligation to disseminate results to others. Formal agreements can work to clarify these concerns. The 2010 Standards’ emphasis on clarity and transparency further highlights the fact that evaluation occurs in the public arena, where democracy requires attention to many different stakeholders.

Take a minute now to read the complete text of all of the Standards in Appendix A to become acquainted with the meaning and intent of each.

The Guiding Principles. The American Evaluation Association’s (AEA) Guiding Principles for Evaluators are elaborations of five basic, broad principles (numbered A–E here to reflect their enumeration in the original document):

A. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries.

B. Competence: Evaluators provide competent performance to stakeholders.

C. Integrity/Honesty: Evaluators display honesty and integrity in their own behavior and attempt to ensure the honesty and integrity of the entire evaluation process.

D. Respect for People: Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other stakeholders.

E. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation (American Evaluation Association, 2004, The Principles section). (See Appendix A for a more complete presentation of the Guiding Principles.)

Systematic inquiry emphasizes the distinction between formal program evaluation and the evaluations conducted in everyday life. Program evaluators, this principle asserts, use specific, technical methods to complete their evaluations. Because no method is infallible, the principle encourages evaluators to share the strengths and weaknesses of the methods and approach with clients and others to permit an accurate interpretation of the work.


The Competence principle makes evaluators aware of the need to practice within their area of expertise and to “continually seek to maintain and improve their competencies, in order to provide the highest level of performance” (American Evaluation Association, 2004, Section B.4). An emphasis on maintaining professional knowledge is a principle common to many professions’ ethical codes, serving to remind their practitioners that their education is ongoing and that they have an obligation to the profession to produce work that maintains the standards and reputation of the field (Fitzpatrick, 1999). The 2004 revision of the Guiding Principles specifically addressed the need for evaluators to be culturally competent in the context of the program they are evaluating. Principle B.2 states:

To ensure recognition, accurate interpretation, and respect for diversity, evaluators should ensure that the members of the evaluation team collectively demonstrate cultural competence. Cultural competence would be reflected in evaluators seeking awareness of their own culturally based assumptions, their understanding of the world views of culturally different participants and stakeholders in the evaluation, and the use of appropriate evaluation strategies and skills in working with culturally different groups. Diversity may be in terms of race, ethnicity, gender, religion, socio-economics, or other factors pertinent to the evaluation context. (American Evaluation Association, 2004, Section B.2)

This new principle reflects the recent attention that AEA and professional evaluators have given to the issue of cultural competence, recognizing that evaluators are often responsible for evaluating programs that serve clients or involve other stakeholders whose cultural experiences and norms differ from those of the evaluator. To evaluate the program competently, the evaluator needs to consider the context of the program and those it serves. The 2010 revision of the Standards also reflects this concern with its emphasis on learning the cultural context. (See the interview with Katrina Bledsoe in the “Suggested Readings” section at the end of this chapter for her description of an evaluation in which the different cultural norms of clients, volunteers, program staff, and managers were critical to evaluating the program and making recommendations for improvement.)
