Discussion 7

Read “Evidence-Based Public Health – A Fundamental Concept for Public Health Practice” by Brownson, et al., 2009, located in the Reading & Study folder in Module/Week 7. Discuss the following points in your initial thread.

  • What is evidence-based public health (EBPH), and why does it matter?
  • Compare and contrast the analytical tools of EBPH (systematic reviews, public health surveillance, economic evaluation, health impact assessment, and participatory approaches).
  • In what ways do systematic reviews provide better evidence on which to base intervention decisions than personal experience? Why should qualitative data from community members be considered in the mix of evidence when planning a community-based intervention?

How does Christianity blend historical reviews and personal experience as credentials of its authenticity?

ANRV370-PU30-10 ARI 15 February 2009 12:1

Evidence-Based Public Health: A Fundamental Concept for Public Health Practice

Ross C. Brownson,1 Jonathan E. Fielding,2 and Christopher M. Maylahn3

1Prevention Research Center in St. Louis, George Warren Brown School of Social Work, Department of Surgery and Alvin J. Siteman Cancer Center, Washington University School of Medicine, Washington University in St. Louis, St. Louis, Missouri 63110; email: [email protected]
2Los Angeles Department of Health Services, Los Angeles, California 90012; School of Public Health, University of California, Los Angeles, California 90095-1772; email: [email protected]
3Office of Public Health Practice, New York State Department of Health, Albany, New York 12237; email: [email protected]

Annu. Rev. Public Health 2009. 30:175–201

First published online as a Review in Advance on January 14, 2009

The Annual Review of Public Health is online at publhealth.annualreviews.org

This article’s doi: 10.1146/annurev.publhealth.031308.100134

Copyright © 2009 by Annual Reviews. All rights reserved

0163-7525/09/0421-0175$20.00

Key Words

disease prevention, evidence-based medicine, intervention, population-based

Abstract

Despite the many accomplishments of public health, a greater attention to evidence-based approaches is warranted. This article reviews the concepts of evidence-based public health (EBPH), on which formal discourse originated about a decade ago. Key components of EBPH include making decisions on the basis of the best available scientific evidence, using data and information systems systematically, applying program-planning frameworks, engaging the community in decision making, conducting sound evaluation, and disseminating what is learned. Three types of evidence have been presented on the causes of diseases and the magnitude of risk factors, the relative impact of specific interventions, and how and under which contextual conditions interventions were implemented. Analytic tools (e.g., systematic reviews, economic evaluation) can be useful in accelerating the uptake of EBPH. Challenges and opportunities (e.g., political issues, training needs) for disseminating EBPH are reviewed. The concepts of EBPH outlined in this article hold promise to better bridge evidence and practice.


Annu. Rev. Public Health 2009.30:175-201. Downloaded from www.annualreviews.org. Access provided by Liberty University on 08/01/16. For personal use only.


EBPH: evidence-based public health

INTRODUCTION

Public health research and practice are credited with many notable achievements, including much of the 30-year gain in life expectancy in the United States over the twentieth century (124). A large part of this increase can be attributed to provision of safe water and food, sewage treatment and disposal, tobacco use prevention and cessation, injury prevention, control of infectious diseases through immunization and other means, and other population-based interventions (34).

Despite these successes, many additional opportunities to improve the public’s health remain. To achieve state and national objectives for improved population health, more widespread adoption of evidence-based strategies has been recommended (19, 57, 64, 109, 119). Increased focus on evidence-based public health (EBPH) has numerous direct and indirect benefits, including access to more and higher-quality information on what works, a higher likelihood of successful programs and policies being implemented, greater workforce productivity, and more efficient use of public and private resources (19, 77, 95).

Ideally, public health practitioners should always incorporate scientific evidence in selecting and implementing programs, developing policies, and evaluating progress (23, 107). Society pays a high opportunity cost when interventions that yield the highest health return on an investment are not implemented (55). In practice, intervention decisions are often based on perceived short-term opportunities, lacking systematic planning and review of the best evidence regarding effective approaches. These concerns were noted two decades ago when the Institute of Medicine determined that decision making in public health is often driven by “crises, hot issues, and concerns of organized interest groups” (p. 4) (82). Barriers to implementing EBPH include the political environment and deficits in relevant and timely research, information systems, resources, leadership, and the required competencies (4, 7, 23, 78).

It is difficult to estimate how widely evidence-based approaches are being applied. In a survey of 107 U.S. public health practitioners, an estimated 58% of programs in their agencies were deemed evidence-based (i.e., using the most current evidence from peer-reviewed research) (51). This finding in public health settings appears to mirror the use of evidence-based approaches in clinical care. A random study of adults living in selected metropolitan areas within the United States found that 55% of overall medical care was based on what is recommended in the medical literature (108). Thacker and colleagues (159) found that the preventable fraction (i.e., how much of a reduction in the health burden is estimated to occur if an intervention is carried out) was known for only 4.4% of 702 population-based interventions. Similarly, cost-effectiveness data are reported for a low proportion of public health interventions.
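The preventable fraction mentioned above can be made concrete with a short calculation. The sketch below is illustrative Python, not code from the article; the function name and the incidence rates are hypothetical. It applies the common definition: the fraction of the health burden averted equals (rate without the intervention − rate with the intervention) divided by the rate without it.

```python
def preventable_fraction(rate_without: float, rate_with: float) -> float:
    """Fraction of the health burden averted by an intervention.

    rate_without: incidence rate in the absence of the intervention
    rate_with:    incidence rate when the intervention is carried out
    """
    if rate_without <= 0:
        raise ValueError("baseline rate must be positive")
    return (rate_without - rate_with) / rate_without

# Hypothetical example: lung cancer incidence falls from 70 to 49 per
# 100,000 person-years under a cessation program.
pf = preventable_fraction(70.0, 49.0)
print(f"preventable fraction = {pf:.0%}")  # prints "preventable fraction = 30%"
```

Thacker and colleagues’ point is that this simple quantity could be computed for only a small minority of the interventions they surveyed, because the required effect estimates were unknown.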

Several concepts are fundamental to achieving a more evidence-based approach to public health practice. First, we need scientific information on the programs and policies that are most likely to be effective in promoting health (i.e., undertake evaluation research to generate sound evidence) (14, 19, 45, 77). An array of effective interventions is now available from numerous sources including the Guide to Community Preventive Services (16, 171), the Guide to Clinical Preventive Services (2), Cancer Control PLANET (29), and the National Registry of Evidence-Based Programs and Practices (142). Second, to translate science to practice, we need to marry information on evidence-based interventions from the peer-reviewed literature with the realities of a specific real-world environment (19, 69, 96). To do so, we need to better define processes that lead to evidence-based decision making. Finally, wide-scale dissemination of interventions of proven effectiveness must occur more consistently at state and local levels (91). This article focuses particularly on state and local public health departments because of their responsibilities to assess public health problems, develop appropriate programs or policies, and assure that programs and policies are effectively implemented in states and local communities (81, 82).

We review EBPH in four major sections that describe (a) relevant background issues, including concepts underlying EBPH and definitions of evidence; (b) key analytic tools to enhance the adoption of evidence-based decision making; (c) challenges and opportunities for implementation in public health practice; and (d) future issues.

EVOLUTION OF THE TENETS OF EVIDENCE-BASED PUBLIC HEALTH

Formal discourse on the nature and scope of EBPH originated about a decade ago. Several authors have attempted to define EBPH. In 1997, Jenicek defined EBPH as the “conscientious, explicit, and judicious use of current best evidence in making decisions about the care of communities and populations in the domain of health protection, disease prevention, health maintenance and improvement (health promotion)” (84). In 1999, scholars and practitioners in Australia (64) and the United States (23) elaborated further on the concept of EBPH. Glasziou and colleagues posed a series of questions to enhance uptake of EBPH (e.g., “Does this intervention help alleviate this problem?”) and identified 14 sources of high-quality evidence (64). Brownson and colleagues described a six-stage process by which practitioners can take a more evidence-based approach to decision making (19, 23). Kohatsu and colleagues broadened earlier definitions of EBPH to include the perspectives of community members, fostering a more population-centered approach (96). In 2004, Rychetnik and colleagues summarized many key concepts in a glossary for EBPH (141). There appears to be a consensus among investigators and public health leaders that a combination of scientific evidence and values, resources, and context should enter into decision making (Figure 1) (19, 119, 141, 151, 152).

In summarizing these various attributes of EBPH, key characteristics include:

  • Making decisions using the best available peer-reviewed evidence (both quantitative and qualitative research),
  • Using data and information systems systematically,
  • Applying program-planning frameworks (that often have a foundation in behavioral science theory),
  • Engaging the community in assessment and decision making,
  • Conducting sound evaluation, and
  • Disseminating what is learned to key stakeholders and decision makers.

Accomplishing these activities in EBPH is likely to require a synthesis of scientific skills, enhanced communication, common sense, and political acumen.

Defining Evidence

At the most basic level, evidence involves “the available body of facts or information indicating whether a belief or proposition is true or valid” (85). The idea of evidence often derives from legal settings in Western societies. In law, evidence comes in the form of stories, witness accounts, police testimony, expert opinions, and forensic science (112). For a public health professional, evidence is some form of data—including epidemiologic (quantitative) data, results of program or policy evaluations, and qualitative data—for uses in making judgments or decisions (Figure 2). Public health evidence is usually the result of a complex cycle of observation, theory, and experiment (114, 138). However, the value of evidence is in the eye of the beholder (e.g., usefulness of evidence may vary by stakeholder type) (92). Medical evidence includes not only research but characteristics of the patient, a patient’s readiness to undergo a therapy, and society’s values (122). Policy makers seek out distributional consequences (i.e., who has to pay, how much, and who benefits) (154), and in practice settings, anecdotes sometimes trump empirical data (26). Evidence is usually imperfect and, as noted by Muir Gray, “[t]he absence of excellent evidence does not make evidence-based decision making impossible; what is required is the best evidence available not the best evidence possible” (119).

Figure 1  Domains that influence evidence-based decision making [from Spring et al. (151, 152)]: decision-making sits at the intersection of the best available research evidence; the environment and organizational context; population characteristics, needs, values, and preferences; and resources, including practitioner expertise.

Figure 2  Different forms of evidence, arranged from most objective to most subjective. Adapted from Chambers & Kerner (37).

  • Scientific literature in systematic reviews
  • Scientific literature in one or more journal articles
  • Public health surveillance data
  • Program evaluations
  • Qualitative data (community members; other stakeholders)
  • Media/marketing data
  • Word of mouth
  • Personal experience

Table 1  Comparison of the types of scientific evidence

  • Type One — Typical data/relationship: size and strength of preventable risk–disease relationship (measures of burden, etiologic research). Common setting: clinic or controlled community setting. Example: smoking causes lung cancer. Quantity: more. Action: something should be done.
  • Type Two — Typical data/relationship: relative effectiveness of public health intervention. Common setting: socially intact groups or community wide. Example: price increases with a targeted media campaign reduce smoking rates. Quantity: less. Action: this particular intervention should be implemented.
  • Type Three — Typical data/relationship: information on the adaptation and translation of an effective intervention. Common setting: socially intact groups or community wide. Example: understanding the political challenges of price increases or targeting media messages to particular audience segments. Quantity: less. Action: how an intervention should be implemented.

Several authors have defined types of scientific evidence for public health practice (Table 1) (19, 23, 141). Type 1 evidence defines the causes of diseases and the magnitude, severity, and preventability of risk factors and diseases. It suggests that “something should be done” about a particular disease or risk factor. Type 2 evidence describes the relative impact of specific interventions that do or do not improve health, adding “specifically, this should be done” (19). There are different sources of Type 2 evidence (Table 2). These categories build on work from Canada, the United Kingdom, Australia, the Netherlands, and the United States on how to recast the strength of evidence, emphasizing the weight of evidence and a wider range of considerations beyond efficacy. We define four categories within a typology of scientific evidence for decision making: evidence-based, efficacious, promising, and emerging interventions. Adherence to a strict hierarchy of study designs may reinforce an inverse evidence law by which interventions most likely to influence whole populations (e.g., policy change) are least valued in an evidence matrix emphasizing randomized designs (125, 127). Type 3 evidence (of which we have the least) shows how and under which contextual conditions interventions were implemented and how they were received, thus informing “how something should be done” (141). Studies to date have tended to overemphasize internal validity (e.g., well-controlled efficacy trials) while giving sparse attention to external validity (e.g., the translation of science to the various circumstances of practice) (62, 71).

Table 2  Typology for classifying interventions by level of scientific evidence

  • Evidence-based — How established: peer review via systematic or narrative review. Considerations for the level of scientific evidence: study design and execution; external validity; potential side benefits or harms; costs and cost-effectiveness. Data source examples: Community Guide; Cochrane reviews; narrative reviews based on published literature.
  • Effective — How established: peer review. Considerations: study design and execution; external validity; potential side benefits or harms; costs and cost-effectiveness. Data source examples: articles in the scientific literature; research-tested intervention programs (123); technical reports with peer review.
  • Promising — How established: written program evaluation without formal peer review. Considerations: summative evidence of effectiveness; formative evaluation data; theory-consistent, plausible, potentially high-reach, low-cost, replicable. Data source examples: state or federal government reports (without peer review); conference presentations.
  • Emerging — How established: ongoing work, practice-based summaries, or evaluation works in progress. Considerations: formative evaluation data; theory-consistent, plausible, potentially high-reaching, low-cost, replicable; face validity. Data source examples: evaluability assessments (a); pilot studies; NIH CRISP database; projects funded by health foundations.

(a) Evaluability assessment: a preevaluation activity, conducted before an evaluation begins, to establish whether a program or policy can be evaluated and what the barriers to its evaluation might be (145).

Understanding the context for evidence. Type 3 evidence derives from the context of an intervention (141). Although numerous authors have written about the role of context in informing evidence-based practice (32, 60, 77, 90, 92, 93, 140, 141), there is little consensus on its definition. When moving from clinical interventions to population-level and policy interventions, context becomes more uncertain, variable, and complex (49). One useful definition of context highlights information needed to adapt and implement an evidence-based intervention in a particular setting or population (141). The context for Type 3 evidence specifies five overlapping domains (Table 3). First, characteristics of the target population for an intervention are defined, such as education level and health history (104). Next, interpersonal variables provide important context. For example, a person with a family history of cancer might be more likely to undergo cancer screening. Third, organizational variables should be considered. For example, whether an agency is successful in carrying out an evidence-based program will be influenced by its capacity (e.g., a trained workforce, agency leadership) (51, 77). Fourth, social norms and culture are known to shape many health behaviors. Finally, larger political and economic forces affect context. For example, a high rate for a certain disease may influence a state’s political will to address the issue in a meaningful and systematic way. Particularly for high-risk and understudied populations, there is a pressing need for evidence on contextual variables and ways of adapting programs and policies across settings and population subgroups. Contextual issues are being addressed more fully in the new realist review, which is a systematic review process that seeks to examine not only whether an intervention works but also how interventions work in real-world settings (134).
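To show how the four-category typology in Table 2 might be operationalized, the sketch below is illustrative Python only: the attribute names and decision rules are simplifications I have drawn from the table's descriptions, not an algorithm given in the article.

```python
def classify_intervention(peer_reviewed: bool,
                          formal_evaluation: bool,
                          systematic_review: bool) -> str:
    """Map how an intervention's evidence was established to a category.

    Simplified rules: a systematic or narrative review with peer review
    -> "evidence-based"; peer review alone -> "effective"; a written
    evaluation without peer review -> "promising"; otherwise ongoing
    work in progress -> "emerging".
    """
    if peer_reviewed and systematic_review:
        return "evidence-based"
    if peer_reviewed:
        return "effective"
    if formal_evaluation:
        return "promising"
    return "emerging"

# A program assessed in a written evaluation report that was never
# peer reviewed falls into the "promising" category:
print(classify_intervention(peer_reviewed=False,
                            formal_evaluation=True,
                            systematic_review=False))  # prints "promising"
```

In practice such a classification also weighs external validity, potential harms, and cost-effectiveness, which this two-line rule set deliberately omits.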

Triangulating evidence. Triangulation involves the accumulation of evidence from a variety of sources to gain insight into a particular topic (164) and often combines quantitative and qualitative data (19). It generally uses multiple methods of data collection and/or analysis to determine points of commonality or disagreement (47, 153). Triangulation is often beneficial because of the complementary nature of information from different sources. Although quantitative data provide an excellent opportunity to determine how variables are related for large numbers of people, these data provide little understanding of why these relationships exist. Qualitative data, on the other hand, can help provide information to explain quantitative findings, or what has been called “illuminating meaning” (153). One can find many examples of the use of triangulation of qualitative and quantitative data to evaluate health programs and policies including AIDS-prevention programs (50), occupational health programs and policies (79), and chronic disease prevention programs in community settings (66).
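A minimal sketch of triangulation in this spirit (illustrative Python; the barrier topics, numbers, and agreement threshold are all hypothetical, not from the article): quantitative survey results are set beside coded themes from community interviews, and each topic is flagged by whether the two evidence sources agree.

```python
# Hypothetical quantitative finding: fraction of survey respondents
# reporting each barrier to participation in a community program.
survey = {"cost": 0.62, "transportation": 0.15, "distrust": 0.08}

# Hypothetical qualitative finding: fraction of coded interview
# passages in which community members raised the same barrier.
interviews = {"cost": 0.55, "transportation": 0.40, "distrust": 0.35}

def triangulate(quant, qual, tolerance=0.2):
    """Label each topic by whether the two evidence sources agree."""
    labels = {}
    for topic in quant:
        gap = abs(quant[topic] - qual[topic])
        labels[topic] = "convergent" if gap <= tolerance else "divergent"
    return labels

print(triangulate(survey, interviews))
# "cost" comes out convergent (both sources rank it high), while
# "transportation" and "distrust" come out divergent -- the qualitative
# data suggest burdens the survey under-captured, the kind of
# "illuminating meaning" the paragraph above describes.
```

Real triangulation is interpretive rather than mechanical, but the sketch shows why points of disagreement between data sources are as informative as points of agreement.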

Audiences for EBPH

There are four overlapping user groups for EBPH (56). The first includes public health practitioners with executive and managerial responsibilities who want to know the scope and quality of evidence for alternative strategies (e.g., programs, policies). In practice, however, public health practitioners frequently have a relatively narrow set of options. Funds from federal, state, or local sources are most often earmarked for a specific purpose (e.g., surveillance and treatment of sexually transmitted diseases, inspection of retail food establishments). Still, the public health practitioner has the opportunity, even the obligation, to carefully review the evidence for alternative ways to achieve the desired health goals. The next user group is policy makers at local, regional, state, national, and international levels. They are faced with macrolevel decisions on how to allocate the public resources of which they are stewards. This group has the additional responsibility of making policies on controversial public issues. The third group is composed of stakeholders who will be affected by any intervention. This includes the public, especially those who vote, as well as interest groups formed to support or oppose specific policies, such as the legality of abortion, whether the community water supply should be fluoridated, or whether adults must be issued handgun licenses if they pass background checks. The final user group is composed of researchers on population health issues, such as those who evaluate the impact of a specific policy or program. They both develop and use evidence to answer research questions.

Similarities and Differences between EBPH and Evidence-Based Medicine

The concept of evidence-based practice is well established in numerous disciplines including psychology (136), social work (58), and nursing (115). It is probably best established in medicine. The doctrine of evidence-based medicine (EBM) was formally introduced in 1992 (53). Its origins can be traced back to the seminal work of Cochrane, which noted that many medical treatments lacked scientific effectiveness (41). A basic tenet of EBM is to deemphasize unsystematic clinical experience and place greater emphasis on evidence from clinical research. This approach requires new skills, such as efficient literature searching and an understanding of types of evidence in evaluating the clinical literature (73). The literature on EBM has grown rapidly, contributing to the formal recognition of EBM. Using the search term “evidence-based medicine,” there were 0 citations in 1991, rising to 4040 citations in 2007 (Figure 3). Even though the formal terminology of EBM is relatively recent, its concepts are embedded in earlier efforts such as the Canadian Task Force for the Periodic Health Examination (28) and the Guide to Clinical Preventive Services (167).

Table 3  Contextual variables for intervention design, implementation, and adaptation

  • Individual: education level; basic human needs (a); personal health history
  • Interpersonal: family health history; support from peers; social capital
  • Organizational: staff composition; staff expertise; physical infrastructure; organizational culture
  • Sociocultural: social norms; values; cultural traditions; history
  • Political and economic: political will; political ideology; lobbying and special interests; costs and benefits

(a) Basic human needs include food, shelter, warmth, safety (104).

Figure 3  Citations for evidence-based medicine.

Important distinctions can be made between evidence-based approaches in medicine and public health. First, the type and volume of evidence differ. Medical studies of pharmaceuticals and procedures often rely on randomized controlled trials of individuals, the most scientifically rigorous of epidemiologic studies. In contrast, public health interventions usually rely on cross-sectional studies, quasi-experimental designs, and time-series analyses. These studies sometimes lack a comparison group and require more caveats when interpreting the results. Over the past 50 years, there have been more than one million randomized controlled trials of medical treatments (157). Many fewer studies have been performed on the effectiveness of public health interventions (19, 128) because they are difficult to design, and often results derive from natural experiments (e.g., a state adopting a new policy compared with other states). EBPH has borrowed the term intervention from clinical disciplines, insinuating specificity and discreteness. However, in public health, we seldom have a single “intervention,” but rather a program that involves a blending of several interventions within a community. Large community-based trials can be more expensive to conduct than randomized experiments in a clinic. Population-based studies generally require a longer time period between intervention and outcome. For example, a study on the effects of smoking cessation on lung cancer mortality would require decades of data collection and analysis. Contrast that with treatment of a medical condition (e.g., an antibiotic for symptoms of pneumonia), which is likely to produce effects in days or weeks, or even a surgical trial for cancer with endpoints of mortality within a few years.

The formal training of persons working in public health is much more variable than that in medicine or other clinical disciplines (161). Unlike medicine, public health relies on a variety of disciplines, and there is not a single academic credential that certifies a public health practitioner, although efforts to establish credentials (via an exam) are now underway. Fewer than 50% of public health workers have any formal training in a public health discipline such as epidemiology or health education (166). This higher level of heterogeneity means that multiple perspectives are involved in a more complicated decision-making process.
