Applications/Interventions
A Merged Model of Quality Improvement and Evaluation: Maximizing Return on Investment

Lynn D. Woodhouse, EdD, MPH1; Russ Toal, MPH1; Trang Nguyen, MD, MPH, DrPH2; DeAnna Keene, MPH1; Laura Gunn, PhD1; Andrea Kellum, MPH3; Gary Nelson, PhD3; Simone Charles, PhD1; Stuart Tedders, PhD, MS1; Natalie Williams, MPH1; William C. Livingood, PhD1,4
Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral sciences contrast with the integration of the QI process within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of the evaluation from an intimidating report-card summative evaluation by external evaluators to an internally engaged program focus on developmental evaluation. The benefits of the merged model to both QI and evaluation are discussed. The use of evaluation-based logic models can help anchor a QI program in evidence-based practice and link process and outputs with longer term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly related to enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation-based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.

Health Promotion Practice, Month XXXX, Vol. XX, No. (X), 1-8. DOI: 10.1177/1524839912474464. © 2013 Society for Public Health Education.

Keywords:
health promotion; evaluation design; evaluation methods; formative evaluation; logic models; outcome evaluation; process evaluation; summative evaluation; quality assurance/quality improvement
1Jiann Ping Hsu College of Public Health, Georgia Southern University, Statesboro, GA, USA; 2Latham, NY, USA; 3Healthcare Georgia Foundation, Atlanta, GA, USA; 4University of Florida/Shands-Jacksonville, Jacksonville, FL, USA

Authors' Note: This article is part of the dissemination component for research conducted with a grant from the Healthcare Georgia Foundation to evaluate community projects.

INTRODUCTION

Quality improvement (QI) is increasingly being viewed as a critical approach to improving health status and overcoming health disparities (Institute of Medicine, 2001). Following challenges to health care to enhance QI (President's Advisory Commission on Consumer Protection and Quality in the Health Care System, 1998), public health is now being challenged to implement QI to improve public health systems and service delivery (Honoré & Scott, 2010). Although the Robert Wood Johnson Foundation has launched a number of initiatives to increase the use of QI strategies to improve state and local public health systems and services, documented improvements for public health are relatively rare and only recently being reported (Dilley, Bekemeier, & Harris, 2012). Health care, on the other hand, has already documented an extensive record of using QI, and health educators have been enjoined to use QI because of the skills they can contribute in health care settings (Hammarth, 2012). We focus on merging QI with evaluation, one of the major roles and responsibilities of health educators. This merging has strategic importance for professionals engaged in developing, implementing, and evaluating health promotion programs: despite the rapidly growing emphasis on QI in public health and health care, QI principles and practices remain scarce in the health education and promotion theory and practice literature, and health promotion professionals have considerable potential to contribute to this national strategic direction.
BACKGROUND

Quality improvement and evaluation are frequently considered to be alternative approaches for monitoring and/or assessing program implementation and impact. An emphasis on third-party evaluation, particularly associated with summative evaluation, differs conceptually and operationally from an emphasis on the integration of the QI process within programs or organizations. The grounding of evaluation in the traditional social and behavioral sciences, with an emphasis on cause-and-effect research, also contrasts with the origins of QI in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control grantee projects serving diverse populations, including many poor and underserved communities in rural and urban Georgia. The prevalence of asthma among children worldwide is increasing, and prevalence among children in the United States is 8.5% (Fifield et al., 2010; Tzeng, Chiang, Hsueh, Ma, & Fu, 2010). In general, asthma prevalence rates among children are higher in Georgia than in many other states in the
nation. Recently collected asthma data showed that Georgia childhood asthma prevalence rates were 11.0% (ages 0-4 years), 10.5% (ages 5-9 years), 15.4% (ages 10-14 years), and 12.0% (ages 15-17 years), compared with the averages for similar age-groups across 38 states in the United States: 5.7%, 10.6%, 10.4%, and 10.1%, respectively (Centers for Disease Control and Prevention, 2011). Reflecting many Deep South health disparities (Goldhagen et al., 2005), Georgia's children have some of the poorest health status in the country. According to Kids Count 2011 (The Annie E. Casey Foundation, 2011), Georgia is ranked the ninth worst state for children and has the fifth lowest high school graduation rate in the country. Twelve of the 20 counties served by the Georgia Childhood Asthma Management Program had high school graduation rates lower than the state average of 64%. The program includes some of the most impoverished counties with the poorest health rankings in Georgia. The proportion of children living in poverty in these counties ranges from 11% to 41%, with a median of 27%. According to the 2011 County Health Outcomes Rankings (University of Wisconsin Population Health Institute, 2011), 9 of the counties have a health ranking above the midpoint, ranging from 80 to 146, with 156 being the worst. Many of these counties have large minority (African American or Hispanic) populations.
STRATEGIES/INTERVENTION APPLICATION
The Healthcare Georgia Foundation launched the Childhood Asthma Management Program through a request-for-proposal process in 2008 to address the growing epidemic of asthma. Applicants were required to propose strategies to control asthma, including objectives for enrollment of participants and reductions in their emergency room (ER) visits for asthma. All five successful applicants had objectives for increasing enrollment in asthma management programs and decreasing ER use, but their intervention approaches had little else in common. The grantees' approaches varied extensively and included school-based programs, public health agency–based programs, community-based lay health worker home visitation, asthma camps, a hospital-based specialty clinic, and primary care–based approaches. At the initial funding, each grantee had its own internal evaluator. The evaluation team at the Jiann Ping Hsu College of Public Health was contracted to conduct the cross-site summative evaluation, which included providing grantee-specific technical support that would facilitate assessment of the overall program's success.
The extensive diversity of the different grantee projects, including major differences in activities and potential to meet projected outcomes, along with the limited background in outcome evaluation of the grantees, led to a major redirection of the cross-site evaluation. During the first 2 years, we moved from a primary emphasis on summative evaluation to formative evaluation combined with QI. This move had primary goals of building grantee organizational capacity and focusing interventions on evidence-based strategies. A primary foundation of our merged model of QI and evaluation was a refocusing of the evaluation from a report card–type summative evaluation by external evaluators to an internally engaged program focus on formative evaluation in collaboration with evaluation team members, providing technical assistance for program improvement. This merged approach of QI and evaluation enhanced both QI and evaluation and is complementary to the concepts of “Developmental Evaluation” (Patton, 1994, 2011). It facilitated the development of a culture that can be simultaneously supportive of program improvement and effective capacity building through evaluation. A major emphasis of this model of formative evaluation was on the development of logic models to clarify and measure program activities and outcomes (Livingood, Winterbauer, McCaskill, & Wood, 2007). This use of evaluation-based logic models increasingly helped anchor the QI program in evidence-based practice and provided linkage between process and outputs. Furthermore, this allowed for linkages with the longer term distal outcomes such as reduced ER use or improved quality of life for children with asthma and their families. 
The use of logic models helped the various individual grantees maintain their unique approach to asthma control while identifying performance measures that (a) complement the scientific evidence for asthma control, (b) link their program activities with intermediate and longer term outcomes, (c) monitor program implementation and effectiveness, and (d) support and enable programs to make adjustments to achieve more optimal results. Building a logic model, with support from evaluators, helped each grantee distinguish which activities were likely to produce a desirable outcome, and examine opportunities for achieving better outcomes that had previously not been considered. However, the logic models were built around each grantee’s priorities, assets, and experience in trying to enhance their asthma program, requiring a tailored approach for each grantee. The major challenge was to clarify the linkage between enrolling children in their programs and actually reducing ER use and improving quality of life
within the context of each grantee's unique activities. For example, most grantees had some form of education for patients and/or clinicians as program activities. However, most lacked clearly articulated outcomes to be directly produced by the education activity, such as change in behavior, especially change in behavior that could be linked to reductions in ER use for asthma. The site evaluators helped each of the grantees work through the linkages of activities to outcomes that would provide the bridge between enrolling children in programs and reducing future asthma attacks that required use of a hospital ER or other acute care services. The site evaluators worked with each grantee to make realistic and measurable changes appropriate to their settings to meet the program objectives. The focus on bridging enrollment of children with demonstrated evidence-based interventions that lead to program-specific reductions in ER use was important for all grantees, including hospital, primary care, school, and community settings, though with varying enthusiasm across the sites. The hospital-based clinic focused on stronger linkages to the ER to identify and actively follow up the highest risk children for enrollment in the program. The school-based programs focused on shifting emphasis to provide updated asthma management education to nurses, teachers, school administrators, and families and to create an "asthma-friendly" school environment (e.g., bus-idling policies). The new emphasis was linked to immediate and intermediate outcomes that were based on scientific evidence. The community-based lay worker home visitations focused on clarifying measures and data collection related to reducing asthma triggers in the home.
The primary care settings adopted the Breakthrough Series (Institute for Healthcare Improvement, 2003), with QI activities that integrated the chronic care model (Group Health Research Institute, 2012) and the most recent Guidelines for the Diagnosis and Management of Asthma (National Heart Lung and Blood Institute, 2007), and focused on training providers and practices to implement the system changes needed to reduce ER use and hospitalizations due to asthma. Each grantee actively engaged in the development of its own logic model and created one or more logic models to link project activities to outcomes that would reduce ER use. The logic model for Children's Healthcare of Atlanta, the hospital-based grantee, is illustrated in Figure 1. This hospital-based program developed and refined the logic model to clearly identify how program activities would lead to short-term outcomes, which in turn would lead to longer term outcomes that would support the goal of improved quality of life.
FIGURE 1 Asthma Logic Model for Children’s Healthcare of Atlanta NOTE: PCP = primary care provider; CHOA = Children’s Healthcare of Atlanta.
Following creation of the logic model(s) for each grantee, evaluation technical assistance shifted to an emphasis on refining performance measures for program activities and outcomes. Many grantees requested and received technical assistance in basic public health measurement science, including data collection, data management, and data analysis. Continued refinement of performance measures also facilitated the development of consistent measures across multiple grantee sites to support the overall summative evaluation. Although few of the grantees had identified measures for assessing performance other than for enrollment at the beginning of their programs, most grantees (80%) were collecting data for a range of outcomes, reflected
in their logic models by the end of the grant cycles. Table 1 provides a summary of performance measures for each of the types of grantees. Merging a QI approach with evaluation had major advantages, particularly related to enhancing the funder’s return on investment. Continuing to evaluate a program that is not achieving or is achieving less than optimal results is of no advantage to funders or the grantees. Emphasis on fidelity to an initial plan to implement an intervention may make sense from a research perspective but is counter to the interest of the funder, communities, and organizations that are more concerned with optimal results. In contrast to an emphasis on third-party evaluation and fidelity to a
TABLE 1
Summative Evaluation Site Reported Status on Measures—July 2012

Site | Emergency Room | Hospital Admissions | Prescription Medicines | Triggers in Home | Asthma Plan | Examples of Policy/System Changes
Hospital based | Yes (third party) | Yes (third party) | Yes (third party) | No | Yes (SR) | School environment, access to records
Public health agency | Yes (third party) | No | Yes (SR) | Yes (SR) | Yes (school nurse administers) | School nurses educate with best practices
Primary care association | No | Yes | Yes (SR) | Yes (SR) | No | Physician and medical practice
Community and school based | Yes (SR) | Yes | Yes | No | Yes (SR) | School tobacco free and bus idling, Georgia Department of Health asthma-friendly schools
University and community based | Yes (SR) | Yes (records) | Yes | Yes (SR) | Yes (conduct visits) | Peak flow meters on site

NOTE: SR = self-report.
planned intervention, QI has a primary emphasis on continually improving an organization's performance, including a focus on results that can be measured. This added emphasis on QI complemented the shift from summative evaluation to formative evaluation, which is much more compatible with capacity building, sustainability, and concepts of Developmental Evaluation as conceptualized by Patton (1994, 2011). By introducing a plan-do-study-act (PDSA) model of QI with evaluation, we (a) used evaluation-based logic models to ground the QI in performance measures based on scientific evidence, (b) enhanced program ownership and engagement in evaluation activities, (c) helped refocus emphasis from summative to formative and developmental evaluation, and (d) increased the role of evaluators in providing technical assistance and support. The recurring nature of PDSA cycles also underscores the ongoing, continuous nature of QI. By focusing in the early phases on the continued development of each grantee's projects, to help build capacity and shift toward evidence-based strategies, rather than simply assessing the success or failure of the initial proposal by the end of the funding cycle, the overall summative evaluation is more likely to result in positive impacts and greater return for funders and grantees. The use of evaluation logic modeling to enhance the scientific base of QI was accomplished by integrating
the logic model with the PDSA cycle illustrated in Figure 2. The use of evaluation logic models is not typically reported as a QI technique. However, it can be very useful for QI, particularly in the Plan stage of the PDSA cycle. The evaluation model's emphasis on scientific links of process to outcomes moves root cause analysis to a more scientific foundation for QI. This can avert a potential pitfall in the Plan phase of the PDSA cycle, in which participants brainstorm root causes and continue trying to improve activities that may be of no value in affecting the ultimate outcomes. Brainstorming, a cognitive process of noncritical identification of all possible alternatives, does not exclude alternatives that have already proved unviable, nor does it prioritize alternatives with demonstrated causality or links to positive outcomes. Evaluation logic modeling, with its emphasis on the scientific links between activities and outcomes, can therefore guide the Plan phase with a stronger scientific foundation than a brainstorming-based root cause analysis. During the Plan phase of the PDSA cycle, logic models help clarify evidence for selected changes, support the use of data to assess status, reassess the relation of activities to performance measures, and adjust activities and performance measures to more effectively produce results. The emphasis on scientific measurements for processes and outcomes associated with logic modeling also strengthens and enhances other phases of the
FIGURE 2 PDSA Interface With Logic Model NOTE: PDSA = plan-do-study-act.
PDSA process. Our use of the PDSA cycle focused on employing data to plan, monitor, and inform decision making, rather than on specific QI techniques. Collection of activity and outcome data is enhanced by the more scientifically grounded (logic model–guided) performance data during the Do stage. Review of performance data is critical during the Study phase of the PDSA cycle, and determining whether performance results were achieved, so that appropriate action can be taken, is particularly enhanced if the data supporting the decision making are sound.
DISCUSSION AND CONCLUSIONS

Some discussion of the relationship of this merged model of QI and evaluation to health behavior theory may be required for many practitioners and researchers to place the merged model within the context of health promotion and education. QI is not considered basic health promotion/education theory, as is reflected in
major health behavior theory texts (Glanz, Rimer, & Viswanath, 2008). This may be in part because of the tendency for health education theory to focus more on individual, psychology-based behavior than sociology-based group behavior (Green, 2006). More specifically, Steckler, Goodman, and Crozier Kegler (2002) noted that organization change theories "have not often been applied, or reported, as foundations for health promotion or public health programs" (p. 356). It also may be in part because of the origins of QI in operational engineering rather than the social and behavioral sciences. Another barrier to QI's inclusion in health education theory may be its response to organizational complexity, which is not as researchable with the traditional linear science (randomized controlled trial) approaches associated with psychology-based health behavior change (Livingood et al., 2012; Walshe, 2007). There may be commonalities between QI and health promotion–based evaluation, such as a focus on both outcome and process measures, and common QI techniques such as
process mapping can be found with some psychology-based behavior change theory (Weinstein & Sandman, 1992). But the consistent use of many QI techniques with different QI approaches, such as the Model for Improvement or Six Sigma (Pyzdek & Keller, 2009), represents approaches to organizational change that have largely escaped the health education/promotion theory and practice literature. Yet QI has continued to emerge within the health care and public health sectors as a fundamental and critical approach to overcoming many of the health-related challenges facing our society (Honoré et al., 2011). Evaluation represents an important opportunity for health educators to use this major approach to organization change for improving health, particularly if a more developmental approach to evaluation is adopted by health promotion and education. Traditional concepts of third-party evaluation pose barriers to developmental evaluation and QI; they are justifiable within the context of the need for "objective" evaluation, particularly with summative evaluation. However, little is accomplished for anyone if the evaluation establishes that a program is unsuccessful, other than to learn from one's mistakes. Integrating evaluation and QI to ensure programs' success is a win for everyone. It may be much more in the interest of program funders (internal or external) to emphasize program staff capacity development and ownership by combining evaluation and QI to maximize the return on their investment rather than keeping the evaluators at arm's length. Similarly, QI can benefit from some of the rigor and accountability of evaluation. Combining the two can optimize the utility of both approaches.
Implications for Health Promotion and Public Health Education

Health educators and other social and behavioral applied public health science professionals have strong foundations in program evaluation, particularly since evaluation was codified as a basic health education responsibility through Role Delineation and Credentialing efforts (National Commission for Health Education Credentialing, 2010). Although QI and evaluation have some similar characteristics, their origins, purposes, and approaches can be distinctly different (Table 2). In particular, QI takes on the role of being more of an intervention (a change strategy), in contrast to assessing the impact of an intervention, as is associated with evaluation. Although evaluation continues to evolve and is not strictly limited to summative evaluation, as can be seen with "developmental evaluation," the concept of evaluation as a change strategy has not typically been reported in the health education or health promotion literature.
TABLE 2
Key Comparisons of Quality Improvement and Summative Evaluation

| Summative Evaluation | Quality Improvement
Origins | Social-behavioral sciences | Management science and engineering
Goal | Assess success or failure at end of project (summative focus) | Continually improve throughout and without end (continuous focus)
Role in using data | Objective review of data by third party | Participant use of data for decision making
Relationship to change | Emphasis on fidelity and consistency | Emphasis on adaptability and modification
Although the application of QI has continued to evolve extensively in the United States for program and organizational management, its use in public health and health promotion is only recently being reported (Honoré, Wright, & Koh, 2012; Riley, Parsons, Duffy, Moran, & Henry, 2010). Although QI originates from operational engineering and management science, the human organization and behavioral dimensions overlap with health education spheres of influence. In particular, the benefits of ownership and commitment associated with participatory decision making that is inherent in QI can also be found in health promotion strategies such as coalition building or stakeholder and community empowerment. This approach is also inherent in community-based participatory research, which is increasingly used for health promotion intervention research. Because health educators, particularly within the broader context of health promotion, have continued to emerge as public health workforce experts in evaluation and intervention planning and implementation, it may be critical for health educators to embrace and adapt QI methods and techniques as part of their tool kits. At the very least, QI principles and techniques should be incorporated into evolving approaches to developmental evaluation. The use of QI by health educators not only complements their role as evaluators and change agents but also gives them the potential to help public health agencies address the challenge of public health accreditation, which has become firmly rooted in QI.
REFERENCES

The Annie E. Casey Foundation. (2011). KIDS COUNT data book: Georgia. Retrieved from http://datacenter.kidscount.org/Databook/2011/OnlineBooks/ForMedia/StateProfiles/KCDB2011_profiles_GA_FINAL-rev.pdf

Centers for Disease Control and Prevention. (2011). Asthma in Georgia. Retrieved from http://www.cdc.gov/asthma/stateprofiles/Asthma_in_GA.pdf

Dilley, J. A., Bekemeier, B., & Harris, J. R. (2012). Quality improvement interventions in public health systems: A systematic review. American Journal of Preventive Medicine, 42(5 Suppl. 1), S58-S71.

Fifield, J., McQuillan, J., Martin-Peele, M., Nazarov, V., Apter, A. J., Babor, T., & Twiggs, J. (2010). Improving pediatric asthma control among minority children participating in Medicaid: Providing practice redesign support to deliver a chronic care model. Journal of Asthma, 47, 718-727.

Glanz, K., Rimer, B. K., & Viswanath, K. (Eds.). (2008). Health behavior and health education: Theory, research, and practice (4th ed.). San Francisco, CA: Jossey-Bass.

Goldhagen, J., Remo, R., Bryant, T., Wludyka, P., Dailey, A., Wood, D., & Livingood, W. (2005). The health status of Southern children: A neglected regional disparity. Pediatrics, 116, e746-e753.

Green, L. W. (2006). Public health asks of systems science: To advance our evidence-based practice, can you help us get more practice-based evidence? American Journal of Public Health, 96, 406-409.

Group Health Research Institute. (2012). The chronic care model. Retrieved from http://www.improvingchroniccare.org/index.php?p=The_Chronic_Care_Model&s=2

Hammarth, A. (2012). Continuous quality improvement and health educators: Capitalizing on commonalities. Health Promotion Practice, 13, 438-443.

Honoré, P. A., & Scott, W. (2010). Priority areas for improvement of quality in public health. Washington, DC: Department of Health and Human Services. Retrieved from http://www.hhs.gov/ash/initiatives/quality/quality/improvequality2010.pdf

Honoré, P. A., Wright, D., Berwick, D., Clancy, C. M., Lee, P., Nowinski, J., & Koh, H. K. (2011). Creating a framework for getting quality into the public health system. Health Affairs, 30, 737-745.

Honoré, P. A., Wright, D., & Koh, H. K. (2012). Bridging the quality chasm between health care and public health. Journal of Public Health Management and Practice, 18, 1-3.

Institute for Healthcare Improvement. (2003). The breakthrough series: IHI's collaborative model for achieving breakthrough improvement (IHI Innovation Series White Paper). Retrieved from http://www.ihi.org/knowledge/Pages/IHIWhitePapers/TheBreakthroughSeriesIHIsCollaborativeModelforAchievingBreakthroughImprovement.aspx

Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.

Livingood, W. C., Sabbagh, R., Spitzfaden, S., Hicks, A., Wells, L., Puigdomenech, S., & Wood, D. L. (in press). Evaluating immunization quality improvement to assess impact on outcomes and culture. American Journal of Preventive Medicine.

Livingood, W. C., Winterbauer, N., McCaskill, Q., & Wood, D. (2007). Evaluating medical home constructs for children with special needs: Integrating theory and logic models. Family & Community Health, 30(4), E1-S15.

National Commission for Health Education Credentialing. (2010). Responsibilities and competencies for health education specialists. Retrieved from http://www.nchec.org/credentialing/responsibilities/

National Heart Lung and Blood Institute. (2007). Guidelines for the diagnosis and management of asthma (EPR-3). Retrieved from http://www.nhlbi.nih.gov/guidelines/asthma/index.htm

Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice, 15, 311-319.

Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

President's Advisory Commission on Consumer Protection and Quality in the Health Care System. (1998). Quality first: Better health care for all Americans. Rockville, MD: Agency for Healthcare Research and Quality. Retrieved from http://archive.ahrq.gov/hcqual/final/

Pyzdek, T., & Keller, P. A. (2009). The six sigma handbook (3rd ed.). New York, NY: McGraw-Hill.

Riley, W. J., Parsons, H. M., Duffy, G. L., Moran, J. W., & Henry, B. (2010). Realizing transformational change through quality improvement in public health. Journal of Public Health Management & Practice, 16, 72-78.

Steckler, A., Goodman, R. M., & Crozier Kegler, M. (2002). Mobilizing organizations for health enhancement: Theories of organization change. In K. Glanz, B. K. Rimer, & F. M. Lewis (Eds.), Health behavior and health education: Theory, research, and practice (3rd ed., pp. 335-361). San Francisco, CA: Jossey-Bass.

Tzeng, L.-F., Chiang, L.-C., Hsueh, K.-C., Ma, W.-F., & Fu, L.-S. (2010). A preliminary study to evaluate a patient-centered asthma education program on parental control of home environment and asthma signs and symptoms in children with moderate-to-severe asthma. Journal of Clinical Nursing, 19, 1424-1433.

University of Wisconsin Population Health Institute. (2011). County health rankings. Retrieved from http://www.countyhealthrankings.org

Walshe, K. (2007). Understanding what works--and why--in quality improvement: The need for theory-driven evaluation. International Journal for Quality in Healthcare, 19(2), 57-59.

Weinstein, N. D., & Sandman, P. M. (1992). A model of the precaution adoption process: Evidence from home radon testing. Health Psychology, 11, 170-180.