Point of View

Obesity prevention programs demand high-quality evaluations

Boyd Swinburn, Deakin University, Victoria
Colin Bell, Hunter New England Area Health Service, New South Wales
Lesley King, University of Sydney, New South Wales
Anthea Magarey, Flinders University, South Australia
Kerry O'Brien, University of Wollongong, New South Wales
Elizabeth Waters, Deakin University, Victoria

On behalf of the Primary Prevention Group of the Australian Childhood and Adolescent Obesity Research Network

Abstract

Obesity prevention programs are at last under way or being planned in Australia and New Zealand. However, it is imperative that they are well evaluated so that they can contribute to continuous program improvement and add much-needed evidence to the international literature on what does and does not work to prevent obesity. Three critical components of program evaluation are especially at risk when the funding comes from service delivery rather than research sources. These are: the need for comparison groups; the need for measured height and weight; and the need for sufficient process and context information. There is an important opportunity to build collaborative mechanisms across community-based obesity prevention sites to enhance program and evaluation quality and to accelerate knowledge translation into practice and policy.

Key words: Obesity prevention; program evaluation; health promotion. (Aust NZ J Public Health. 2007; 31:305-7) doi:10.1111/j.1753-6405.2007.00075.x

Submitted: October 2006; Revision requested: April 2007; Accepted: May 2007

Correspondence to: Professor Boyd Swinburn, WHO Collaborating Centre for Obesity Prevention, Faculty of Health, Medicine, Nursing, and Behavioural Sciences, Deakin University, 221 Burwood Highway, Burwood, Victoria 3125. Fax: (03) 9244 6640; e-mail: [email protected]

Obesity prevention programs are springing up in response to growing concerns about childhood obesity. This is a very welcome development after more than a decade of inaction since the epidemic was recognised in the mid-1990s. Another welcome development has been the increased emphasis on using evidence to inform public health practice, programs and policies.1 Unfortunately, knowing what works and what does not work for obesity prevention is difficult because the evidence base is so limited and the settings in which interventions have been tested are so few (mainly primary schools).2,3 The Primary Prevention Group of the Australian Childhood and Adolescent Obesity Research Network (ACAORN) is concerned that some obesity prevention programs are being planned or implemented with insufficient priority being placed on appropriate designs, or with insufficient funding for rigorous evaluation. Expensive programs with weak evaluations waste precious resources, fail to contribute to their own quality enhancement, and fail to add much-needed effectiveness evidence to the literature. The same concerns have recently been raised about the United Kingdom (UK) response to childhood obesity by the UK National Audit Office.4 The purpose of this article is to identify the main evaluation components that are at risk in Australasian intervention programs and to propose opportunities to lift the quality of evaluation of obesity prevention programs in the region.


Funding sources and priorities

Interventions tend to be funded either by research agencies (where evidence creation is the primary goal) or by government health agencies (where service delivery is the primary goal). For example, the Pacific OPIC Project (Obesity Prevention in Communities) is a $5.8 million research project in Australia, New Zealand, Fiji and Tonga funded by the Wellcome Trust, the National Health and Medical Research Council and the New Zealand Health Research Council.5 It involves measurements of 15,000 adolescents in intervention and control sites. While this project will be evidence-rich about what worked and what did not, it runs the risk of not being able to convert its programs into sustainable service delivery. On the other hand, evaluations of large projects funded through service agencies tend to be heavily constrained by the usual 10-15% budget allocation for evaluation, and such projects have program designs that aim to maximise on-the-ground delivery. These projects run the risk of not knowing whether the interventions were successful, or why. Providing funding for support and evaluation that is separate from program implementation would not only allow funding from a variety of sources (including research agencies) but would also lift evaluation from a minor afterthought to a major component alongside implementation. This approach is being used in France, where a successful obesity prevention program6 is being rolled out to about 130 municipalities7 using a funding model of one euro per capita from each municipality for on-the-ground programs and one euro per capita from a variety of other sources for support, social marketing and evaluation (J.M. Borys, personal communication).

Program evaluation

The evaluation of community-based interventions is complex because communities themselves are complex and interventions are usually more 'organic' than classic, investigator-controlled clinical trials. Selecting comparison populations is often challenging because of quasi-experimental designs, the effects of clustering, and long intervention durations (usually 2-3 years). Health promotion theories, process evaluation and program logic models are also needed to show how the proposed inputs (interventions) influence the mediators and outcomes.8,9 The community engagement needed for most obesity prevention projects can add the extra dimensions of community interpretation of the findings and sharing of the knowledge gained. Below, we highlight three critical aspects of program evaluation that create a challenge for all obesity prevention programs. While these are fundamental and uncontroversial from a research trial perspective, at the intersection of research and the delivery of health promotion programs (where community-based obesity prevention must sit at this stage) they are at risk of being lost.

The need for comparison groups

Engaging communities and schools to be comparison populations is very difficult and runs counter to a service-delivery philosophy. However, without a non-intervention comparison, it is not possible to know whether any decrease or increase in obesity prevalence in intervention areas represents a positive, negative or null effect. True experimental designs at a population level usually involve cluster randomisation by settings such as schools. Quasi-experimental designs can obtain comparison data from matched settings, regionally representative samples, or other population monitoring data. It is also possible that, within Australia, multiple intervention sites could use pooled comparison data.
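To illustrate why clustering matters when whole schools or communities, rather than individuals, are allocated to intervention or comparison conditions, the standard design effect for cluster designs can be applied (the cluster size and intracluster correlation below are assumed values for illustration, not figures from any of the programs discussed):

    design effect (DEFF) = 1 + (m - 1) x ICC

where m is the average cluster size and ICC is the intracluster correlation coefficient. With, say, 100 students measured per school and an ICC of 0.02, DEFF = 1 + 99 x 0.02 ≈ 3.0, meaning roughly three times as many participants are needed as in an individually randomised trial of the same statistical power.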


The need for measured height and weight

Anthropometry (height, weight and waist) provides the key outcome measures for obesity prevention interventions. Without these, it is not possible to determine obesity prevention effectiveness; self-reported height, weight and behaviours are notoriously prone to bias.10,11 Anthropometry may not be needed for some efficacy interventions aimed at changing specific behaviours (such as television viewing time or fundamental motor skills) or environments (such as school policies or neighbourhood facilities), but it is required if obesity prevention is part of the aim of a project, and for population monitoring related to obesity.12 The anthropometry measurements often take place in a school setting and, while principals and education departments are usually very supportive of these measures, there are understandable sensitivities about measuring children. However, it is the experience of the ACAORN group that when anthropometry is conducted in a private and sensitive manner, the risk of psychological or social discomfort for the child is minimal.
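For reference, measured height and weight yield the body mass index (BMI), the usual primary outcome measure (the numbers below are illustrative only):

    BMI = weight (kg) / height (m)^2

For example, a child weighing 40 kg at a height of 1.40 m has a BMI of 40 / 1.96 ≈ 20.4 kg/m^2. For children, this value is compared against age- and sex-specific reference cut-offs (such as the IOTF cut-offs) or converted to a BMI-for-age z-score, rather than being judged against adult thresholds.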

The need for sufficient process evaluation and contextual information

Process evaluation, which involves analysing the program's implementation and reach across different population subgroups and the related contextual factors, contributes to the interpretation of impact and outcome results.8 The frequent lack of information on implementation reach and dose hampers the ability to compare interventions and to draw conclusions about the effectiveness of strategies to prevent child obesity.13 Contextual information is vital for assessing the applicability of the interventions to other places, populations and implementation conditions.8 Without this information, there is a risk of drawing the false conclusion that there was intervention failure when really there was implementation failure.14
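To make 'reach' and 'dose' concrete (the figures here are hypothetical): if 1,200 of 2,000 eligible children participate, reach is 1,200/2,000 = 60%; if 18 of 24 planned sessions are delivered, the dose delivered is 18/24 = 75%. Reporting such figures, broken down by population subgroup, is what allows intervention failure to be distinguished from implementation failure.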

Illustrative examples

The Fleurbaix-Laventie Ville Santé project is a childhood obesity prevention program in northern France.6 It started in 1992 with baseline anthropometry in two intervention villages and two similar comparison villages. Periodic repeat measurements showed that it took about eight years of intervention to reverse the increasing prevalence of obesity, and after 12 years there was significantly less obesity in the intervention villages than in the control villages. While there is confidence that the intervention was eventually effective in reducing obesity, there was virtually no process evaluation (because of a shoestring budget), so explaining why and how the program worked was not possible.

The French project at least had an outcome evaluation. By contrast, the 'Active After School Communities' program15 is by far Australia's most expensive program to increase physical activity and reduce obesity in children (about $200 million of federal funds over eight years), yet it has no outcome evaluation and minimal process evaluation. We will never know whether this program was effective and whether it warranted this massive investment.

The potential of Australasian programs to create the evidence

In Australia and New Zealand, there are about 20 other substantial community-based prevention programs, either under way or in the planning stages, that have the potential to prevent obesity. However, three are not taking height and weight measurements in intervention and comparison populations, and six are yet to decide. These demonstration projects represent the vital first step before 'proven' interventions for obesity prevention are rolled out and, with full evaluations, they offer an unparalleled opportunity to contribute to the rapid development and quality of the evidence base on obesity prevention. To fully capture this opportunity, there would need to be as much consistency as possible in evaluation approaches and instruments across sites, to facilitate multi-site comparisons and meta-analyses. Collaboration mechanisms would also need to be in place to maximise the interactions between sites and to promote the rapid dissemination of the findings into policy and practice. The Primary Prevention Group of ACAORN is advocating for a Collaboration of Community-based Obesity Prevention Sites (the CO-OPS Collaboration) to be funded to achieve these outcomes.
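As one concrete mechanism for such pooling (a standard meta-analytic approach, not one prescribed in this article), comparable sites could be combined by inverse-variance weighting: if site i reports an effect estimate d_i with standard error SE_i, each site receives weight w_i = 1/SE_i^2, and

    pooled effect = Σ(w_i x d_i) / Σ(w_i)

Consistent outcome measures and instruments across sites are precisely what make the d_i comparable enough for this kind of pooling.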

Conclusions

Given the limited evidence base on obesity prevention, funders of major obesity prevention programs are obliged to support high-quality evaluations. At a minimum, this means the inclusion of height and weight measurements in intervention and comparison groups, analyses of outcomes by demographic variables, and detailed descriptions of the key intervention strategies and their intensity. Within Australasia, there are already a substantial number of whole-of-community childhood obesity prevention programs under way. The funding of a structure like the proposed CO-OPS Collaboration would capture their collective strength, increase the quality and comparability of the program evaluations, and accelerate the translation of what is learned into practice and policy.

References

1. Brownson RC, Baker EA, Leet TL, Gillespie KN. Evidence-Based Public Health. New York (NY): Oxford University Press; 2003.
2. Summerbell C, Waters E, Edmunds L, Kelly S, Brown T, Campbell K. Interventions for preventing obesity in children (Cochrane Review). In: The Cochrane Database of Systematic Reviews, Issue 3, 2005. Oxford (UK); 2005.
3. Doak CM, Visscher TL, Renders CM, Seidell JC. The prevention of overweight and obesity in children and adolescents: a review of interventions and programmes. Obes Rev. 2006;7(1):111-36.
4. National Audit Office. Tackling Child Obesity – First Steps. Report by the Comptroller and Auditor General prepared jointly by the Audit Commission, the Healthcare Commission and the National Audit Office. London (UK): The Stationery Office; 2006.
5. Swinburn BA, Pryor J, McCabe M, Carter R, de Courten M, Schaaf D, et al. The Pacific OPIC Project (Obesity Prevention in Communities) – objectives and design. Pacific Health Dialogue. In press 2007.
6. Borys JM. A successful way of preventing childhood obesity: the Fleurbaix-Laventie Study. In: Proceedings of the 18th International Congress of Nutrition: Nutrition Safari for Innovative Solutions; 2005; Durban, South Africa.
7. EPODE [homepage on the Internet]. Fleurbaix Laventie Ville Santé (FRC): EPODE; 2007 [cited 2007 May]. Together, Let Us Prevent the Obesity of the Children. Available from: http://www.epode.fr/
8. Rychetnik L, Frommer M, Hawe P, Shiell A. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health. 2002;56:119-27.
9. Pawson R. Nothing as practical as a good theory. Evaluation. 2003;9(4):471-90.
10. Bolton-Smith C, Woodward M, Tunstall-Pedoe H, Morrison C. Accuracy of the estimated prevalence of obesity from self reported height and weight in an adult Scottish population. J Epidemiol Community Health. 2000;54(2):143-8.
11. Black AE, Prentice AM, Goldberg GR, Jebb SA, Bingham SA, Livingstone MB, et al. Measurements of total energy expenditure provide insights into the validity of dietary measurements of energy intake. J Am Diet Assoc. 1993;93(5):572-9.
12. Ministry of Health. An Analysis of the Usefulness and Feasibility of a Population Indicator of Childhood Obesity. Wellington (NZ): Ministry of Health; 2006.
13. Thomas M. Obesity prevention programs for children and youth: why are their results so modest? Health Educ Res. 2006;21(6):783-95.
14. Oakley A, Strange V, Bonnell C, Allen E, Stephenson J. Process evaluation in randomised controlled trials of complex interventions. Br Med J. 2006;332:413-16.
15. Healthy, Active Australia Initiatives [background page on the Internet]. Canberra (AUST): Australian Government; 2006 [cited 2006 August]. Active After School Communities Program. Available from: http://www.healthyactive.gov.au/background.htm

