Viewpoint

collect a certain amount of data in common. Ideally, such data should provide information about what are thought to be the key outcomes and processes. One of the great weaknesses of the HAZ evaluation was the failure to achieve this. Although such data are often expensive, time-consuming and difficult to collect, it is important that community-based initiatives strive to establish a robust baseline against which to measure changes through time. But that is a necessary rather than sufficient condition for a high-quality evaluation. More qualitative indicators of success, ideally collected from a range of perspectives, are also needed. It is only through an assessment of both forms of evidence that researchers, practitioners and policy makers can learn more about the different elements of 'what works', and use this knowledge to frame future policy and practice development.
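A toy calculation makes concrete what a baseline buys. The sketch below is purely illustrative (Python, with invented figures; neither the data nor the method comes from the HAZ evaluation): with a baseline in place, change in intervention areas can at least be set against the secular trend in comparison areas.

```python
# Toy illustration only: these figures and this method do not appear in
# the HAZ evaluation. It shows what a baseline buys: change in the
# intervention areas can be netted against the trend elsewhere.

# Mean outcome (e.g., % of residents reporting good health) per group
baseline = {"intervention": 62.0, "comparison": 61.5}
follow_up = {"intervention": 66.0, "comparison": 63.0}

def change(group: str) -> float:
    """Change from baseline for one group of areas."""
    return follow_up[group] - baseline[group]

# A naive before/after reading credits the initiative with +4.0 points;
# subtracting the comparison-area trend leaves +2.5.
adjusted = change("intervention") - change("comparison")
print(f"Raw change in intervention areas: {change('intervention'):+.1f}")
print(f"Trend-adjusted change: {adjusted:+.1f}")
```

Without the baseline row, neither number could be computed at all, which is precisely the weakness described above.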

Remember your limitations

Complex community-based initiatives are just that. They do not lend themselves to simple verdicts of good or bad, right or wrong, successful or not. The job of the policy analyst is to illuminate the processes of change and experience that s/he observes. In doing so, s/he can contribute to a collective process of 'enlightenment' about complex change processes in modern social welfare systems. The policy researcher should, therefore, use every evaluation opportunity to throw light on some important aspect of the process, either of her/his own choosing or that which can best be negotiated with the program sponsors and research commissioners. The ethical obligation of the policy researcher is to make a contribution, albeit a modest one, to the social process of understanding and promoting change. It is not to award or withhold a seal of approval to this or that 'flavor of the moment' initiative that might have provided the pretext for a new investigative opportunity.

Conclusion

Almost a decade after the landslide election victory that brought New Labour into power, at least some of the initial enthusiasm for a centralist approach to 'big' government has waned. But the commitment to promoting social justice remains. What has changed is that there appears to be a more sophisticated recognition of the complexity, pervasiveness and durability of the social problems that have to be confronted. At the same time, there is greater awareness of the need to learn lessons from past attempts to promote social change. But a huge amount still needs to be done to ensure that learning is maximized from expensive efforts to promote social change, such as a fairer distribution of population health improvement. In this respect, HAZs should provide a salutary rather than forgotten example of what not to do in the future.

It is often argued that the really significant contributions to strengthening the evidence base for tackling health inequalities will come from well-designed studies that set out to answer clearly defined questions. But this will not be possible unless the interventions are themselves designed with more rigor than is commonplace. How realistic is this? It seems more likely that the frequent urge to intervene in response to the latest crisis or problem to reach the political hit list will continue to be part of the public policy landscape in modern democracies. There is a real danger in these circumstances, especially when the imperative to intervene quickly is combined with unrealistic expectations about the speed with which research findings can be generated, that valuable learning opportunities will be missed.


The challenge when evaluation opportunities arise in this way is to negotiate the best possible research approach, one that acknowledges, inter alia, that incontrovertible measures of impact are not the only useful products that can be generated. The value of throwing light on complex processes in reflective and scholarly ways should not be underestimated, even if it falls short of what is ideally required.

Acknowledgements

This paper is based on research funded by the Department of Health, London. The views expressed are those of the authors and not necessarily those of the Department of Health.

doi:10.1093/eurpub/ckl068
Advance Access published on May 12, 2006

Learning from Policy Failure and Failing to Learn from Policy

Karien Stronks, Onyebuchi A. Arah, Thomas Plochg

Correspondence: Karien Stronks, PhD, Department of Social Medicine, Academic Medical Center, University of Amsterdam, PO Box 22700, 1100 DE Amsterdam, The Netherlands, tel: +31 20 5664892, e-mail: [email protected]

Evaluation studies provide a key source of learning from policy success and failure. Policy interventions and their evaluation are, however, drenched in inescapable complexity. This makes it more difficult to evaluate this kind of intervention with the highly regarded randomized experimental design. Based on the experiences of the Health Action Zones (HAZs), Judge and Bauld outline key elements of a more realistic evaluation framework, which might contribute to a further understanding of complex policy initiatives in the field of public health.1 Their recommendations provide a good basis for the further development of the methodology of evaluation studies. Three additional issues should be mentioned, however. First, what is evidence in health policy? Second, we want to emphasize the importance of the evaluator keeping an open mind during the evaluation process. Third, we believe that the ultimate goal for us as evaluators is to influence health policy, in addition to understanding a policy intervention.


Different types of evidence

In line with Judge and Bauld, we must consider the question 'what is evidence in health policy?' Their paper can be read as a plea for the use of a type of evidence other than that obtained in experimental studies. Implicitly, however, they seem to restrict themselves to evaluation studies that are based on new data collection. But must we always collect fresh data, given the associated costs and burden? There is room, we believe, for the intelligent use of routine data such as linked administrative or monitoring data.2 Moreover, we believe that, in addition to evaluation studies, other types of studies might also contribute to the evidence base for effective health policy. These include the re-analysis of existing epidemiological datasets with the aim of assessing the cause–effect relationships between determinants and health. This knowledge is necessary for evaluating the potential impact of a specific intervention.3
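As a purely hypothetical illustration of what such 'intelligent use of routine data' could look like in practice (the paper prescribes no method, and every record below is invented), two routinely collected administrative sources can be linked on a shared identifier to produce an analysable dataset without any fresh data collection:

```python
# Hypothetical sketch: deterministic linkage of two existing
# administrative sources on a shared person identifier, instead of
# collecting new data. All records are invented for illustration.

# Source 1: a benefits register covering the study areas
benefits_register = {
    "p01": {"area": "HAZ", "on_benefits": True},
    "p02": {"area": "non-HAZ", "on_benefits": False},
    "p03": {"area": "HAZ", "on_benefits": False},
}

# Source 2: hospital-admission counts from routine monitoring data
admissions = {"p01": 2, "p03": 1}

# Linkage on the identifier yields one combined record per person,
# ready for analysis of determinants and outcomes.
linked = [
    {"id": pid, **record, "admissions": admissions.get(pid, 0)}
    for pid, record in benefits_register.items()
]

for row in linked:
    print(row)
```

The design choice is the point: the marginal cost of such an analysis is linkage and governance work, not a new round of fieldwork.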

Theory guided but still open minded

We agree with Judge and Bauld that a conceptual framework of what a policy intervention aims to achieve is extremely useful as a starting point for both the intervention and its evaluation. On the other hand, the evaluator should continue to approach this with an open mind, in order to anticipate results other than those originally aimed for. In this respect, it is debatable whether the HAZs should be considered a policy failure. Ultimately, before we can label policies as failures, we must learn to appreciate the fact that policy interventions are in themselves political tools, which are both means and ends in a society.4,5 HAZs could be seen as a means used by the New Labour government to place public health high up on its political agenda. This might have led to increased public health awareness among policy makers and communities and, therefore, their readiness to pursue effective policies given favorable sociopolitical structures. HAZs could also be seen as an end in themselves, characterized by increased government–community partnerships, where the community is a cornerstone of the 'third way' governance in the UK.5 Therefore, HAZs might be an attempt to reinstall 'the social' in public and social policy for health by way of community partnership and public participation. This political reasoning should partly determine how evaluation end-points are defined, and it calls for a complementary evaluation of the implicit and explicit political processes involved in such policy interventions.

How can evaluators make a difference?

Given the complexity of health policy, evaluators would probably be satisfied if they succeeded in understanding the policy and in supporting the implementation of the intervention from the outset. However, we believe that the aspirations of an evaluator should be set higher. The ultimate aim, in our view, should be to make a difference in terms of influencing health policy. From this viewpoint, the following additional recommendations emerge.

Be honest about the feasibility of a policy: Theoretically, reflection on the value of a proposed policy or intervention might start before the intervention takes place. Such reflection can be fed by an a priori impact assessment of (potential) policy interventions, using interdisciplinary knowledge, theory and thorough-going techniques.6 (A stylized example of such an assessment appears at the end of this section.) This is not an easy job, given, for example, the inherently long lag times for population effects of interventions. In addition, it requires courage on the part of the evaluator to tell policy makers that certain policies might not make a difference. If evaluators want to contribute to evidence-based policy, we feel it is their duty to be realistic about the anticipated effects and value of proposed interventions, and not to be overly optimistic.

Operate in close interaction with policy makers: If the evaluator is to influence health policy, a close interaction between the evaluator and the policy makers is necessary. This increases the chances that the evaluation study meets the expectations of the policy maker and will be used as an input for policy.7 The metaphor of a zip might be used to indicate the added value of such a linkage. The two rows of teeth, representing policy and research, are independent parts, each with its own expertise and agenda. They do, however, have a common starting point, and they reach a common end point, but only if they succeed in being pulled together during the whole process.
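The stylized impact-assessment example promised above follows. It is our illustration, not a method the authors propose: the potential impact fraction (PIF) is one standard tool for projecting, before an intervention is launched, how much disease burden a shift in a risk-factor distribution could remove. Every figure below is invented.

```python
# Stylized a-priori impact assessment using the potential impact
# fraction (PIF): the proportional reduction in disease burden if the
# exposure distribution shifts from p_before to p_after. All numbers
# are invented; the paper itself prescribes no specific technique.

def potential_impact_fraction(p_before, p_after, rr):
    """PIF = (sum(p*RR) - sum(p'*RR)) / sum(p*RR), with prevalence
    lists and relative risks aligned by exposure category."""
    burden_before = sum(p * r for p, r in zip(p_before, rr))
    burden_after = sum(p * r for p, r in zip(p_after, rr))
    return (burden_before - burden_after) / burden_before

# Exposure categories: never smoker, ex-smoker, current smoker
rr = [1.0, 1.3, 2.0]                 # hypothetical relative risks
p_before = [0.50, 0.20, 0.30]        # current prevalence distribution
p_after = [0.50, 0.28, 0.22]         # distribution if the policy works

pif = potential_impact_fraction(p_before, p_after, rr)
print(f"Projected burden reduction: {pif:.1%}")
```

Even such a crude projection makes the honesty duty concrete: if a plausible shift in exposure yields only a few percentage points of burden reduction, policy makers should hear that before the intervention is launched.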

More art than science?

We agree with Judge and Bauld that complexity is inescapable in policy evaluation and that there are no easy answers when facing it. It is useful to search for meaningful and measurable proxies within the complexity. Some reduction of complexity is inevitable, and we must continue to search for ways to achieve reductions that are meaningful and realistic without being overly simplistic. Policy evaluation may therefore turn out to be more of an art than a science, requiring skilled, experienced, open-minded and courageous evaluators.

References

1 Judge K, Bauld L. Learning from policy failure? Health Action Zones in England. Eur J Public Health 2006; Advance Access published May 5, 2006, doi:10.1093/eurpub/ckl068.
2 Sibthorpe B, Dixon J. Rethinking evaluation for policy action on the social origins of health and well-being. In: Eckersley R, Dixon J, Douglas B, editors. The Social Origins of Health and Well-Being. Cambridge: Cambridge University Press, 2001.
3 Kindig D, Day P, Fox DM, et al. What new knowledge would help policymakers better balance investments for optimal health outcomes? Health Serv Res 2003;38:1923–37.
4 Stone D. Policy Paradox: The Art of Political Decision Making. Revised edition. New York: W.W. Norton & Company, 2002.
5 Newman J. Modernising Governance. London: Sage, 2001.
6 Dunn WN. Public Policy Analysis: An Introduction. New Jersey: Pearson Education, 2004.
7 Denis JL, Lomas J. Convergent evolution: the academic and policy roots of collaborative research. J Health Serv Res Policy 2003;8:1–6.

doi:10.1093/eurpub/ckl069
Advance Access published on May 12, 2006