Internal Medicine Journal 2003; 33: 110–118

REVIEW

Outcomes research: what is it and why does it matter?

M. JEFFORD, M. R. STOCKLER and M. H. N. TATTERSALL

1Ludwig Institute for Cancer Research, Heidelberg, Victoria, 2Department of Medicine, Department of Public Health and Community Medicine and the National Health and Medical Research Council Clinical Trials Centre, and 3Department of Cancer Medicine, University of Sydney, New South Wales, Australia

Abstract

Outcomes research is a broad umbrella term without a consistent definition. However, it tends to describe research that is concerned with the effectiveness of public-health interventions and health services; that is, the outcomes of these services. Attention is frequently focused on the affected individual – with measures such as quality of life and preferences – but outcomes research may also refer to the effectiveness of health-care delivery, with measures such as cost-effectiveness, health status and disease burden. The present review details the historical background of outcomes research to reveal the origins of its diversity. The value and relevance of outcomes research, commonly employed research techniques and examples of recent publications in the area are also discussed. (Intern Med J 2003; 33: 110–118)

Key words: cost-benefit analysis, decision support techniques, health-care quality, access and evaluation, outcome and process assessment (health care), quality of life.

Correspondence to: Michael Jefford, Peter MacCallum Cancer Institute, Locked Bag 1, A'Beckett Street, Vic. 8006, Australia. Email: [email protected] Received 24 October 2001; accepted 10 April 2002.

WHAT IS 'OUTCOMES RESEARCH'?

There appears to be no consistent definition of what constitutes health outcomes research. It has been suggested that 'outcomes research' is a difficult term to define because of the breadth of research included under this 'umbrella expression'.1 The US National Library of Medicine does not include the term as a medical subject heading (MeSH).2 Related terms include 'health services research' and 'outcome assessment (health care)'. Health services research is defined as: the integration of epidemiologic, sociological, economic, and other analytic sciences in the study of health services. Health services research is usually concerned with relationships between need, demand, supply, use, and outcome of health services. The aim of the research is evaluation, particularly in terms of structure, process, output and outcome.2

Outcome assessment (health care) is defined as: research aimed at assessing the quality and effectiveness of health care as measured by the attainment of a specified end result or outcome. Measures include parameters such as improved health, lowered morbidity or mortality, and improvement of abnormal states (such as elevated blood pressure).

The term ‘outcomes research’ describes a variety of fields of research that use a variety of methodologies, often with differing aims.1 The US Agency for Healthcare Research and Quality suggests that: outcomes research seeks to understand the end results of particular health care practices and interventions. End results include effects that people experience and care about, such as change in the ability to function. In particular, for individuals with chronic conditions – where cure is not always possible – end results include quality of life as well as mortality. By linking the care that people get to the outcomes they experience, outcomes research has become the key to developing better ways to monitor and improve the quality of care.3

What is outcomes research? An assessment of the historical factors that have shaped the field is required in order to understand the diversity of outcomes research.

HISTORICAL BACKGROUND

In 1913, a Massachusetts surgeon, Ernest Codman, noted that hospitals reported the number of patients treated, but did not indicate whether patients appeared to benefit from treatment.4 He argued that hospitals should produce a report of their treatment results, standardized to allow comparison between hospitals. Furthermore, he suggested that all aspects of hospital practice should be evaluated to ensure that they produced favourable outcomes. This included not only a focus on patient encounters, but also an analysis of all 'products' of the hospital including, for example, the efficiency of administrative procedures and staff and student training. The term 'outcome' was coined by Donabedian, who developed a paradigm for quality assessment, comprising structure, process and outcome.5 He recognized that, while some outcomes (such as death) might be easily recognized and measured, others might be less easily defined and measured. Among the latter he included 'patient attitudes and satisfactions, social restoration and physical disability and rehabilitation'.6 Donabedian suggested that 'outcomes, by and large, remain the ultimate validators of the effectiveness and quality of medical care'. He has continued to guide the assessment of quality.7 In 1973, Wennberg and Gittelsohn reported wide variation in medical practice within Vermont, USA.8 There was considerable variation in resource input, use and expenditure among neighbouring communities, but without obvious adverse sequelae. Wennberg et al. attributed this variation to differences in physicians' diagnostic style and to physicians' beliefs in the efficacy of particular treatments.9,10 In 1982, McPherson et al. confirmed variations in the use of six common surgical procedures in the USA, United Kingdom and Norway.11 There appeared to be a lack of consensus regarding effective medical procedures and techniques. There also seemed to be no clear link between these procedures and favourable results. Chassin et al. queried whether inappropriate use of procedures could explain geographical variations in the use of services.12 They studied the use of coronary angiography, carotid endarterectomy and upper-gastrointestinal-tract endoscopy in a variety of regions. Although they recognized a high incidence of inappropriate use, Chassin et al. concluded that this did not explain variations in the use of services.

Other studies around this time, including further observations by Wennberg et al.,13,14 reported variation in the use of hospital services yet noted that health outcomes were not compromised. These studies emphasized the need for research to determine the causes and implications of differences in practice. Around this time, McNeil et al. reported a study designed to assess preferences for treatment, involving a trade-off between quality and quantity of life, following the diagnosis of laryngeal cancer.15 The study surveyed the attitudes of healthy volunteers and found that, to maintain their voices, approximately 20% of volunteers would choose radiation treatment over surgery. The study was important in that it emphasized the use of outcomes that were patient-focused. In the 1988 Shattuck lecture, Ellwood called for the development of 'outcomes management, a "technology of patient experience"'.16 Reviewing the preceding changes to the US health system, Ellwood stated that 'the most destabilizing consequence of the restructuring of the health system has also been, in my view, the most desirable one: patients, payers, and executives of health care organizations now have both higher expectations and greater power'. The problem, he claimed, was that: too often, payers, physicians, and health care executives do not share common insights into the life of the patient…the problem is our inability to measure and understand the effect of the choices of patients, payers and physicians on the patient's aspirations for a better quality of life. The result is that we have uninformed patients, skeptical payers, frustrated physicians, and besieged health care executives.16

He called for a technology of 'outcomes management', designed to allow all parties to make rational choices based on improved insights into the effects that those choices have upon the lives of patients. Outcomes management, as he described it, involves: (i) increased reliance on standards and guidelines, (ii) the collection of clinical outcome data (as well as patient-completed functional and well-being data), (iii) pooling and analysis of data through the use of centralized databases and (iv) subsequent dissemination of results and modification of guidelines. Based upon the recognition of wide variations in practice, evidence of inappropriate use and the need for health-care evaluation, the US Health Care Financing Administration and the Public Health Service began to assume an active role in providing information to guide medical practice.17

Reflecting upon Ellwood's vision, Relman suggested that the era of assessment and accountability represented the third revolution in medical care, following the 'era of expansion' of health services from the 1940s through the 1960s and the 'era of cost containment' beginning in the 1980s.18 Paralleling these changes was an evolving shift from a provider-centred model of medicine to patient-centred health care.19 Patient-based outcomes and greater accountability therefore became more relevant and important. Available governmental funding further drove outcomes research in the USA. Initial projects funded by the Agency for Health Care Policy and Research were determined on the basis of 'the number of individuals affected, the extent of uncertainty or controversy with respect to the use of a procedure or its effectiveness, the level of related expenditure, and the availability of data'.20 Initially, prospective, randomized studies were discouraged. Projects funded subsequently have encouraged the generation of primary data regarding the effectiveness of interventions, either through randomized trials or prospective longitudinal studies. Thus outcomes research has come to describe a range of activities (Table 1). Lee et al. have proposed a conceptual framework of how outcomes research interfaces with clinical trials.1

Table 1  The scope of outcomes research

Outcomes research may focus on:
  Quality-of-life measures
  Effectiveness
  Cost
  Quality of care
  Patient preferences
  Appropriateness
  Access
  Health status

In areas such as:
  Disease prevention
  Screening
  Drug treatment
  Medical procedures
  Medical practices
  Diagnostic tests
  Guidelines
  Health-care policy

The framework developed by Lee et al. allows us to recognize both overlapping and exclusive elements of these fields at the levels of: (i) research topics, (ii) outcomes, (iii) secondary analysis and (iv) applications. Clinical trials are predominantly designed to assess the efficacy and effectiveness of specific interventions. The goal of outcomes research may include the assessment of quality of care, access and the effectiveness of general health-care strategies. The typical outcomes in clinical trials research are survival, response and adverse event rates. The outcomes in typical outcomes research studies are processes, costs and health-related quality of life. Methods of secondary analysis, such as meta-analysis and decision analysis, may use the same data sets, but are frequently considered to be elements of clinical and outcomes research, respectively. Clinical trials research is aimed primarily at informing clinical decisions, whereas outcomes research is aimed primarily at informing health policy decisions. However, outcomes research and clinical trials research have not developed in isolation. Outcomes research has progressed beyond studies of effectiveness to encompass quality of care, decision analysis and the analysis of administrative databases. Modern clinical trials often include measures of health status, quality of life, resource utilization and cost. Lee et al. recognize that the distinctions between clinical trials research and outcomes research are useful conceptually, but may be oversimplifications in practice. Thus, clinical trials and outcomes research are different, but they complement, rather than compete with, each other. Pronovost and Kazandjian provide some innovative examples of how the concepts of clinical trial design and outcomes research can be integrated to answer questions about quality improvement using modifications of case-control studies and randomized trials.21

THE VALUE OF OUTCOMES RESEARCH

Outcomes research complements clinical trial research. It aims to: (i) provide better information to inform patient decisions, (ii) guide health providers and (iii) inform health policy decisions. In Table 2 we have suggested ways that outcomes research might benefit the individual patient, the health-care practitioner, the health-care organization and the government. These benefits may also be derived from clinical research and other means.

Table 2  Suggested benefits of outcomes research

Consumer
  Increased participation in decision-making
  Increased choice regarding hospital/practitioner/treatment options
  Assurance regarding effectiveness of interventions
  Assessment and development of interventions to improve well-being, not just survival

Health-care provider
  Greater certainty regarding the benefit of an intervention
  Standards/guidelines to guide clinical practice
  Shared responsibility in decision-making
  Protection from malpractice suits (if complying with the above)

Health-care organization management
  Greater use of effective interventions
  Discontinuation of ineffective interventions/practices
  An organizational culture emphasizing quality
  Cost savings as inappropriate use is eliminated (i.e. interventions, medications, hospitalizations)

Government
  Cost savings as inappropriate use is eliminated (i.e. interventions, medications, hospitalizations)
  Greater ability to plan health services
  Only effective pharmaceuticals and services are subsidized
  Target research in areas of greatest potential impact based on examination of databases etc.

EXAMPLES OF RECENT PUBLICATIONS IN OUTCOMES RESEARCH

Below we briefly review four recent publications to illustrate aspects of outcomes research.

Who gets adjuvant treatment for stage II and stage III rectal cancer?
Method: the use of linked databases
Value: greater use of effective interventions

Schrag et al. conducted a retrospective cohort study to examine the relationship between patient characteristics and the use of adjuvant pelvic radiation with or without chemotherapy.22 American patients with stage-II or stage-III rectal adenocarcinoma were identified from the linked Surveillance, Epidemiology, and End Results (SEER) and Medicare databases. Analysis of coding identified episodes of surgery consistent with definitive tumour resection that occurred ≤6 months after diagnosis. Medicare coding identified the appropriate procedural and revenue codes consistent with the use of radiotherapy and chemotherapy. Information was also collected on: (i) type of surgical procedure, (ii) comorbidities and (iii) demographic data (i.e. age, gender, race and median income for geographical area). The study found that only 57% of patients received adjuvant radiation (mostly combined with chemotherapy). The patient characteristic most highly correlated with receipt of radiation was age at diagnosis.

The data suggest that adjuvant radiation is underused. This information would not be found with traditional clinical trials research. This study also identified important areas for further research, for example, the exploration of physicians' and patients' knowledge and attitudes about adjuvant radiation therapy.
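
To make the method concrete, the following sketch shows the general shape of such a linked-database analysis in Python with pandas. The tables, column names and codes are hypothetical stand-ins, not the actual SEER–Medicare file layouts or the coding rules used by Schrag et al.

```python
import pandas as pd

# Hypothetical registry and claims extracts; real SEER-Medicare files use
# different layouts and procedure/revenue code sets.
cases = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "diagnosis_date": pd.to_datetime(["2000-01-10", "2000-03-05", "2000-06-20"]),
    "age_at_diagnosis": [68, 74, 81],
})
claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "service_date": pd.to_datetime(["2000-02-01", "2000-04-15", "2000-04-01", "2001-05-01"]),
    "code": ["RESECTION", "RADIATION", "RESECTION", "RESECTION"],
})

# Link the registry cases to the claims records on patient identifier.
linked = cases.merge(claims, on="patient_id")

# Keep patients with a definitive resection within 6 months of diagnosis,
# mirroring the study's inclusion rule.
within_6_months = (linked["service_date"] - linked["diagnosis_date"]) <= pd.Timedelta(days=183)
resected = linked.loc[(linked["code"] == "RESECTION") & within_6_months, "patient_id"].unique()

# Flag receipt of adjuvant radiation among the resected cohort.
radiated = linked.loc[linked["code"] == "RADIATION", "patient_id"].unique()
cohort = cases[cases["patient_id"].isin(resected)].copy()
cohort["adjuvant_radiation"] = cohort["patient_id"].isin(radiated)

# Proportion treated overall and by age group (age was the characteristic
# most strongly associated with treatment in the study).
print(cohort["adjuvant_radiation"].mean())
age_band = pd.cut(cohort["age_at_diagnosis"], bins=[0, 70, 80, 120])
print(cohort.groupby(age_band, observed=False)["adjuvant_radiation"].mean())
```

The point of the linkage is that neither source alone answers the question: the registry supplies diagnosis, stage and demographics, while the claims supply evidence of treatment actually received.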

Cost-effectiveness of radiofrequency ablation (RFA) for supraventricular tachycardia (SVT)
Method: decision analysis
Value: cost savings, greater use of effective interventions

Cheng et al. used a decision-analysis model to compare the health and economic outcomes of three treatment strategies for patients with SVT: (i) initial RFA, (ii) long-term antiarrhythmic drug therapy or (iii) treatment of acute episodes of arrhythmia with antiarrhythmic drugs.23 Costs were estimated from a major academic centre and from the literature, and treatment effectiveness was estimated from reports of clinical studies. Probabilities of clinical outcomes were also based on published data. Utility data were based upon patient-reported quality of life before
and after RFA. This study concluded that RFA substantially improves quality of life and reduces expenditures when used to treat symptomatic patients. RFA improved quality-adjusted life expectancy by 3.1 quality-adjusted life years (QALY) and reduced lifetime medical expenditure by $US27 900, compared with long-term drug therapy. Long-term drug therapy was more effective and had lower costs compared with episodic drug treatment. The findings were highly robust in sensitivity analyses; that is, similar results were obtained when multiple factors were varied across a wide range of plausible values. This study included a measure of patient preference (utility assessment) and economic analysis and reported measures of marginal effectiveness. Decision analysis provides a rational approach for decision making in areas where definitive data are lacking, and particularly in identifying areas of critical uncertainty.

Completeness of safety reporting in randomized trials
Method: survey
Value: emphasis on quality at a procedural/organizational level

Ioannidis and Lau recently addressed the quality of safety reporting in clinical trials.24 They surveyed 192 randomized studies within seven topics of internal medicine and assessed the adequacy of reporting of adverse effects. Principal measures were: (i) the severity of adverse clinical events and laboratory abnormalities and (ii) the frequency and reasons for withdrawals due to toxic effects. The study found that the quality and quantity of safety reporting varied, but was generally inadequate. The severity of clinical adverse events and laboratory-determined toxicity was adequately defined in only 39% and 29% of trial reports, respectively. Only 46% of trials stated the frequency of specific reasons for discontinuation of study treatment due to toxicity. The authors concluded that the standard of collection, analysis and reporting of safety data in clinical trials lags behind that for efficacy data. They suggested that the outcome measures used in this study could be incorporated into the recommendations of the Consolidated Standards of Reporting Trials statement.25 This study demonstrates the diversity of important outcomes and measures of quality. Measures of quality may be applied to areas beyond the direct patient–practitioner focus.

Randomized trial comparing traditional Chinese medical acupuncture, therapeutic massage, and self-care education for chronic low back pain
Method: overlap between the traditional clinical trial and outcomes research
Value: assessment and development of interventions to increase well-being and increased certainty regarding the benefit of interventions

Cherkin et al. randomly allocated patients with chronic lower-back pain to traditional Chinese medical acupuncture, therapeutic massage or self-care education.26 The primary outcomes were symptoms and dysfunction. Secondary outcomes included disability, health-care use and cost. Patients were assessed 4, 10 and 52 weeks after a 10-week intervention period. At baseline and at each follow-up visit, patients were asked how 'bothersome' back pain, leg pain, numbness and tingling had been in the preceding week, each on a scale from '0' to '10'. Patients also completed the Roland Disability Scale. Automated health-care use data were collected, including a record of all provider visits, medications dispensed, imaging procedures, operations and hospitalizations. With respect to primary end-points, the study found that therapeutic massage was effective for persistent lower-back pain and appeared to confer long-lasting benefit, as assessed at 1-year follow-up. This study exemplifies the interface between traditional clinical trials and outcomes research. The study used patient-focused outcome data, including measures of health perception (symptom panel) and measures of function (disability scale), in a randomized clinical trial. Quality of life is the most important outcome in a clinical scenario such as this. The study also illustrates the application of administrative data to explore important unconventional outcome measures in the rigorous context of a randomized trial.

SOME TECHNIQUES/METHODOLOGIES IN OUTCOMES RESEARCH

Because of the breadth of outcomes research, a discussion of all approaches and methodologies is impossible. However, several approaches are commonly used, and these are briefly discussed below.

Measures used to assess well-being and satisfaction

Clancy and Eisenberg have provided a useful description of the broad aspects of health-related quality of life.27 Health perceptions may be assessed through symptom panels. Several validated symptom measures have entered routine clinical practice. For example, the American Urological Association symptom index scale is used by >80% of practising urologists in the USA.28 Functional measures may be used to assess the impact of health interventions on a range of domains, including physical, mental, social and role function. Preference-based measures convey the meaning that a person places upon their individual health status. Patient satisfaction may also be directly assessed. This may include interpersonal and technical aspects of care. Instruments may be generic or disease-specific.29 Generic instruments allow comparison between different conditions and between different interventions. They may also detect the differential effects that an intervention has on different aspects of health status. Specific instruments focus on: (i) single disease states (such as asthma), (ii) patient groups (such as the frail elderly), (iii) areas of function (such as sleep) or (iv) a particular problem (such as pain).29 Specific instruments are often considered to be more clinically sensible. They may also be more sensitive to changes in specific aspects over time. The validity of many scales has been established. Acknowledging the tension between generalizability and specificity, several groups have recommended a modular approach, in which instruments reconcile these conflicting needs by adding specific items to a generic core.30–33
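
As a small illustration of the modular approach, the sketch below scores a hypothetical questionnaire made up of a generic core plus a disease-specific module; the item names, response range and 0–100 transformation are invented for the example and do not correspond to any of the instruments cited above.

```python
# Illustrative scoring of a modular quality-of-life questionnaire: a generic
# core shared across conditions plus a disease-specific module.
GENERIC_CORE = ["physical_function", "role_function", "social_function", "mental_health"]
LUNG_MODULE = ["dyspnoea", "cough"]  # hypothetical disease-specific items

def scale_score(responses, items, item_range=(1, 4)):
    """Average the item responses and linearly transform to a 0-100 scale."""
    lo, hi = item_range
    values = [responses[item] for item in items if item in responses]
    if not values:
        return None
    mean = sum(values) / len(values)
    return 100 * (mean - lo) / (hi - lo)

patient = {"physical_function": 3, "role_function": 2, "social_function": 4,
           "mental_health": 3, "dyspnoea": 2, "cough": 1}

generic = scale_score(patient, GENERIC_CORE)   # comparable across conditions
specific = scale_score(patient, LUNG_MODULE)   # sensitive to disease-specific change
print(generic, specific)
```

The generic score supports comparisons across conditions, while the module score is more likely to register change in the specific symptoms of interest.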

Economic analysis

Economic analysis includes techniques such as cost-outcome, cost-effectiveness and cost-utility analyses.34 Cost-outcome studies are descriptive and indicate the costs associated with a particular disease or treatment strategy. The lack of information about efficacy and the lack of a suitable comparison often limit the usefulness of such studies. For example, the median cost in Canada of treating women with advanced ovarian cancer with second-line and subsequent chemotherapy was estimated to be approximately $Can37 000; however, costs and outcomes in the absence of such treatment were not available.35 Cost-effectiveness studies compare the incremental cost of one treatment over another with the incremental benefit in terms of a single common outcome.34 For treatments that influence survival, the cost can be expressed in terms of life-years gained. While this can be useful for comparing therapies with similar outcomes, it is not possible to compare interventions with different outcomes using this method.
In addition, it can be difficult to take into account more than one component of a health strategy, such as the adverse effects of drug therapy. Cost-utility analyses account for the possibility that not all outcomes are valued equally by weighting outcomes according to their perceived value (utility). Years in good health are likely to be valued more than years with sickness. For example, gains in survival may be in varying states of imperfect health, due to the effects of the disease or treatment. To account for this, the life-years gained in a particular state may be multiplied by the utility of that health-state to give QALYs. Utility refers to the value attributed to a particular health-state (outcome). Utility measures are an example of a generic measure of health-related quality of life because they provide a summary of quality of life that can be compared across diseases, conditions and populations. Cost-utility analyses are expressed in terms of dollars per QALY gained. For example, the incremental cost in the USA of adjuvant chemotherapy for node-positive breast cancer in pre-menopausal women has been estimated to be about $US10 000 per QALY gained.36
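
The arithmetic behind a cost-utility comparison is straightforward; the sketch below works through an incremental cost per QALY calculation with invented costs, survival gains and utilities (not figures from the studies cited above).

```python
# Worked illustration of a cost-utility comparison using invented numbers:
# incremental cost per QALY gained for a new treatment versus a comparator.
def icer_per_qaly(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost-effectiveness ratio in dollars per QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# New treatment: 4.0 life-years at utility 0.8; comparator: 3.0 life-years at utility 0.7.
qalys_new = 4.0 * 0.8   # 3.2 QALY
qalys_old = 3.0 * 0.7   # 2.1 QALY
print(icer_per_qaly(cost_new=30_000, cost_old=18_000,
                    qalys_new=qalys_new, qalys_old=qalys_old))
# (30 000 - 18 000) / (3.2 - 2.1) ≈ $10 909 per QALY gained
```

Because the denominator is a generic, utility-weighted outcome, the resulting dollars-per-QALY figure can be compared across quite different interventions.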

Decision analysis

Sarasin describes decision analysis as the 'quantitative application of probability and utility theory to decision making under conditions of uncertainty'.37 Decision analysis involves modelling a problem to guide decision-making. The model builds on available clinical information such as: (i) prevalence of disease, (ii) effectiveness of interventions, (iii) incidence of side-effects, (iv) associated costs and (v) outcome measures, such as patient-assigned utility values. Thus, decision analysis is able to link research results, patient preferences and population data. Sensitivity analysis involves varying the assumptions made within the model. For example, the risk of side-effects may be higher (or lower), or the effectiveness of an intervention may be lower (or higher) than in the baseline model. Similarly, patient preferences (reflected in utility values) may vary. Sensitivity analysis allows one or many factors to be adjusted, alone or simultaneously, to determine whether this affects the apparent best choice; that is, how sensitive the decision is to the assumptions on which the model is based.
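
The following sketch shows a minimal two-strategy decision model with a one-way sensitivity analysis, in the spirit described above. All probabilities, utilities and costs are invented, and the net-monetary-benefit rule with a $50 000-per-QALY threshold is an illustrative assumption rather than anything specified in the text.

```python
# A minimal two-strategy decision model with a one-way sensitivity analysis.
# Probabilities, utilities, costs and the willingness-to-pay threshold are all
# invented for illustration; they are not taken from the text or its references.
def expected_outcome(p_success, utility_success=0.9, utility_failure=0.6, cost=0):
    """Expected utility (treated here as a one-year QALY weight) and cost."""
    qalys = p_success * utility_success + (1 - p_success) * utility_failure
    return qalys, cost

def preferred(strategies, willingness_to_pay=50_000):
    """Choose the strategy with the highest net monetary benefit."""
    return max(strategies, key=lambda s: s["qalys"] * willingness_to_pay - s["cost"])

def run(p_success_new):
    q_new, c_new = expected_outcome(p_success_new, cost=6_000)
    q_old, c_old = expected_outcome(0.70, cost=4_000)
    strategies = [
        {"name": "new intervention", "qalys": q_new, "cost": c_new},
        {"name": "usual care", "qalys": q_old, "cost": c_old},
    ]
    return preferred(strategies)["name"]

# One-way sensitivity analysis: vary the effectiveness of the new intervention
# across a plausible range; the preferred strategy switches at about p = 0.83.
for p in (0.70, 0.75, 0.80, 0.85, 0.90):
    print(p, run(p))
```

Varying other inputs (costs, utilities or the threshold) in the same way identifies which assumptions the decision actually hinges on, which is often the most useful output of the exercise.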

Decision analysis can be particularly useful where there is inadequate information available from clinical studies to guide treatment or policy recommendations. Its greatest benefit here is to identify key areas of uncertainty requiring further research. Kassirer et al. also suggest that decision analysis can be used: (i) to address health policy questions about screening and prevention, (ii) to weigh trade-offs between tests and treatments and (iii) for the interpretation of clinical data where there is uncertainty.38

Data sources in outcomes research

The use of databases facilitates the exploration of interventions and outcomes and, thus, the generation of novel insights. A wide variety of health-information sources exists, including administrative databases, clinical databases, disease registries and clinical-trial databases. These datasets may be linked with information from other sources, such as census or electoral roll data. Issues surrounding databases and outcomes research were discussed at the Regenstrief Conference, Measuring Quality, Outcomes, and Cost of Care Using Large Databases, and reported in a supplement to Annals of Internal Medicine in 1997.39

Administrative databases

Administrative data are obtained from health-care administration, member enrolment and the collection of payments for services.40 Producers of such data include federal and state governments and private health insurers. Clinical content is often low and may include only demographic details and coding information concerning diagnoses and procedures. Nevertheless, administrative data are important because they are readily available, inexpensive to acquire, computer-readable and typically cover large populations.

Clinical databases

Clinical databases may include laboratory, radiology, pharmacy and surgery-scheduling systems. They contain detailed clinical information covering a broad population. However, a major disadvantage of these datasets is that the information is often recorded in a non-standardized manner and is difficult to access. Thus, it is often necessary to extract data elements manually. This is expensive, time-consuming and prone to error.41

Disease registers

Disease registers, such as cancer registers, provide incidence data and, frequently, detailed clinical information. They are normally centralized and, with mandatory reporting, offer the potential for thorough, accurate reporting. One disadvantage is the frequent lack of a denominator or reference population.

Clinical trial databases

These contain detailed clinical information regarding trial participants. Because of this, detailed and multivariate analysis within the data set may be possible. Disadvantages include the potential difficulty in generalizing to a broader population and the time and expense required to collect high-quality data.

Censuses

Censuses provide information on the demographic composition of large populations with respect to such variables as age, gender, ethnicity, place of residence, employment and education. Importantly, these data may be used as a denominator for other data (such as cancer incidence) to determine rates. However, collecting census data is expensive, logistically difficult and limited to the collection of a relatively small amount of information.

Linking datasets may overcome the individual limitations of the various data sources. Different information sources may be combined for modelling in decision analysis. Inferences may also be made when different information sources are juxtaposed, which may serve to generate hypotheses or prompt further research.
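
As a simple illustration of why such linkage matters, the sketch below divides registry counts by census population to produce incidence rates; the regions and counts are invented.

```python
# Sketch of using census counts as denominators for registry counts to obtain
# incidence rates, linking the two sources on region. Numbers are invented.
registry_cases = {"Region A": 120, "Region B": 45}             # new cases in one year
census_population = {"Region A": 400_000, "Region B": 90_000}  # resident population

# Crude rate per 100 000 population: cases / population * 100 000.
rates = {
    region: registry_cases[region] / census_population[region] * 100_000
    for region in registry_cases
}
print(rates)  # {'Region A': 30.0, 'Region B': 50.0}
```

In practice the linkage is done on geography, period and demographic strata, and the crude rates would usually be age-standardized before comparison.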

PUBLIC REPORTING OF MEDICAL OUTCOMES

At first glance, the public reporting of health outcomes seems an important social responsibility. In the Bristol case in the United Kingdom, it is possible that more public presentation of outcomes might have avoided deaths.42 The New York State Department of Health collects data on the quality of care provided to patients undergoing coronary artery bypass grafting.43 The publication of surgeon-specific death rates raised many concerns. Many physicians lack the skills to interpret and critically evaluate medical literature,44 which prompts the question: can consumers be expected to appraise outcomes data? Issues such as independent data validation and appropriate risk adjustment may not be adequately addressed. Public reporting may lead to hospitals or physicians avoiding high-risk patients in an attempt to lower risk-adjusted mortality rates. Regarding the public release of information on organizational performance, Epstein suggests that attention be given to determining what constitutes a valid assessment of quality and how to ensure the integrity of reported data.45 He states that 'it has become clear that providing information to consumers in a way that is understandable and allows them actually to use it is at least as formidable a task as developing reliable and valid quality indicators'.45
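
One of the concerns above, risk adjustment, can be illustrated with a small sketch comparing observed deaths with the deaths expected from each provider's case mix; the strata and probabilities are invented for the example.

```python
# Illustration of simple risk adjustment for public reporting: compare each
# provider's observed mortality with the mortality expected from its case mix.
expected_mortality = {"low risk": 0.01, "medium risk": 0.03, "high risk": 0.10}

def observed_to_expected(cases):
    """cases: list of (risk_stratum, died) tuples for one provider."""
    observed = sum(died for _, died in cases)
    expected = sum(expected_mortality[stratum] for stratum, _ in cases)
    return observed / expected  # a ratio > 1 suggests worse-than-expected outcomes

hospital_a = [("high risk", 1), ("high risk", 0), ("medium risk", 0), ("low risk", 0)]
hospital_b = [("low risk", 1), ("low risk", 0), ("low risk", 0), ("low risk", 0)]

print(observed_to_expected(hospital_a))  # 1 death / 0.24 expected deaths ≈ 4.2
print(observed_to_expected(hospital_b))  # 1 death / 0.04 expected deaths = 25.0
```

Without some adjustment of this kind, a provider treating mostly high-risk patients will look worse on crude mortality even when its case-mix-adjusted performance is better, which is one reason crude league tables can mislead.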

THE AUSTRALIAN SITUATION

Australia, like other developed countries, is confronted with the challenge of providing an equitable, efficient, affordable and high-quality health system. Cost containment is necessary. The shift to casemix-based funding has produced gains in throughput and has reduced costs.46 Casemix classification systems have also encouraged inter-hospital comparisons on issues such as the planning of bed and staff numbers, utilization review and funding allocations. What is missing from casemix is an assessment of the quality of care (including the process of care) and cogent measures of patient outcomes. In Australia, the quality of health-care delivery is partly regulated by legislation (i.e. compulsory registration of health-care providers) and also by the requirement for accreditation of health-care facilities. Australia has embraced the importance of alternative outcome measures in the process of approving drugs for listing on the Pharmaceutical Benefits Scheme (PBS). All applications include a cost-effectiveness analysis. The standard of health-related economic evaluations in Australia was questioned by Salkeld et al. in 1995.47 Although Hill et al. suggest that there are significant problems with many pharmacoeconomic analyses, they report that the intensive evaluation process used for the PBS allows for the identification and correction of many problems.48 Although outcomes research in Australia has not been funded as generously as in the USA, Australian outcomes research and resulting publications are increasing. Australian clinical trialists have pioneered and led the incorporation of patient-based measures and economic analyses into clinical trials.49,50

CONCLUSIONS

Outcomes research aims to provide information to all stakeholders (including patients, health-care providers, health-care managers and government) to allow more rational, evidence-based decision-making. Outcomes research complements clinical trials and other more traditional forms of clinical research. Outcomes research also offers the prospect of improving the quality of other aspects of health care by applying a scientific approach to the evaluation of a wider range of clinical, management and organizational problems.

REFERENCES

1 Lee SJ, Earle CC, Weeks JC. Outcomes research in oncology: history, conceptual framework, and trends in the literature. J Natl Cancer Inst 2000; 92: 195–204.
2 National Library of Medicine. Medical Subject Headings [cited 2002 September 18]. Available from: URL: http://www.nlm.nih.gov/mesh/
3 Agency for Healthcare Research and Quality. Outcomes Research Fact Sheet. AHRQ Publication no. 00-P011. Rockville: Agency for Healthcare Research and Quality; 2000. Available from: URL: http://www.ahrq.gov/clinic/outfact.htm
4 Codman EA. The product of a hospital. Surg Gynecol Obstet 1914; 18: 491–6.
5 Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q 1966; 44: 166–206.
6 Kelman HR, Willner A. Problems in measurement and evaluation of rehabilitation. Arch Phys Med Rehabil 1962; 43: 172–81.
7 Donabedian A. The quality of care. How can it be assessed? JAMA 1988; 260: 1743–8.
8 Wennberg JE, Gittelsohn A. Small area variations in health care delivery. Science 1973; 182: 1102–8.
9 Wennberg JE, Gittelsohn A. Variations in medical care among small areas. Sci Am 1982; 246: 120–34.
10 Wennberg JE, Barnes BA, Zubkoff M. Professional uncertainty and the problem of supplier-induced demand. Soc Sci Med 1982; 16: 811–24.
11 McPherson K, Wennberg JE, Hovind OB, Clifford P. Small-area variations in the use of common surgical procedures: an international comparison of New England, England, and Norway. N Engl J Med 1982; 307: 1310–4.
12 Chassin MR, Kosecoff J, Park RE, Winslow CM, Kahn KL, Merrick NJ et al. Does inappropriate use explain geographic variations in the use of health care services? A study of three procedures. JAMA 1987; 258: 2533–7.
13 Wennberg JE, Freeman JL, Culp WJ. Are hospital services rationed in New Haven or over-utilised in Boston? Lancet 1987; 1: 1185–9.
14 Wennberg JE, Freeman JL, Shelton RM, Bubolz TA. Hospital use and mortality among Medicare beneficiaries in Boston and New Haven. N Engl J Med 1989; 321: 1168–73.
15 McNeil BJ, Weichselbaum R, Pauker SG. Speech and survival: tradeoffs between quality and quantity of life in laryngeal cancer. N Engl J Med 1981; 305: 982–7.
16 Ellwood PM. Shattuck lecture – outcomes management. A technology of patient experience. N Engl J Med 1988; 318: 1549–56.
17 Roper WL, Winkenwerder W, Hackbarth GM, Krakauer H. Effectiveness in health care. An initiative to evaluate and improve medical practice. N Engl J Med 1988; 319: 1197–202.
18 Relman AS. Assessment and accountability: the third revolution in medical care. N Engl J Med 1988; 319: 1220–2.
19 Laine C, Davidoff F. Patient-centered medicine. A professional evolution. JAMA 1996; 275: 152–6.
20 Agency for Health Care Policy and Research. Medical Treatment Effectiveness Research. Rockville: Agency for Health Care Policy and Research; 1990.
21 Pronovost PJ, Kazandjian VA. A new learning environment: combining clinical research with quality improvement. J Eval Clin Pract 1999; 5: 33–40.
22 Schrag D, Gelfand SE, Bach PB, Guillem J, Minsky BD, Begg CB. Who gets adjuvant treatment for stage II and III rectal cancer? Insight from surveillance, epidemiology, and end results – Medicare. J Clin Oncol 2001; 19: 3712–8.
23 Cheng CH, Sanders GD, Hlatky MA, Heidenreich P, McDonald KM, Lee BK et al. Cost-effectiveness of radiofrequency ablation for supraventricular tachycardia. Ann Intern Med 2000; 133: 864–76.
24 Ioannidis JP, Lau J. Completeness of safety reporting in randomized trials: an evaluation of 7 medical areas. JAMA 2001; 285: 437–43.
25 Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I et al. Improving the quality of reporting of randomized controlled trials: the CONSORT Statement. JAMA 1996; 276: 637–9.
26 Cherkin DC, Eisenberg D, Sherman KJ, Barlow W, Kaptchuk TJ, Street J et al. Randomized trial comparing traditional Chinese medical acupuncture, therapeutic massage, and self-care education for chronic low back pain. Arch Intern Med 2001; 161: 1081–8.
27 Clancy CM, Eisenberg JM. Outcomes research: measuring the end results of health care. Science 1998; 282: 245–6.
28 Barry MJ, Fowler FJ Jr, O'Leary MP, Bruskewitz RC, Holtgrewe HL, Mebust WK. Measuring disease-specific health status in men with benign prostatic hyperplasia. Measurement Committee of the American Urological Association. Med Care 1995; 33: AS145–55.
29 Guyatt GH, Feeny DH, Patrick DL. Measuring health-related quality of life. Ann Intern Med 1993; 118: 622–9.
30 McHorney CA. Generic health measurement: past accomplishments and a measurement paradigm for the 21st century. Ann Intern Med 1997; 127: 743–50.
31 Cella DF, Tulsky DS, Gray G, Sarafian B, Linn E, Bonomi A et al. The Functional Assessment of Cancer Therapy Scale: development and validation of the general measure. J Clin Oncol 1993; 11: 570–9.
32 Aaronson NK, Ahmedzai S, Bergman B, Bullinger M, Cull A, Duez NJ et al. The European Organization for Research and Treatment of Cancer QLQ-C30: a quality-of-life instrument for use in international clinical trials in oncology. J Natl Cancer Inst 1993; 85: 365–76.
33 Aaronson NK, Bullinger M, Ahmedzai S. A modular approach to quality-of-life assessment in cancer clinical trials. Recent Results Cancer Res 1988; 111: 231–49.
34 Detsky AS, Naglie IG. A clinician's guide to cost-effectiveness analysis. Ann Intern Med 1990; 113: 147–54.
35 Doyle C, Stockler M, Pintilie M, Panesar P, Warde P, Sturgeon J et al. Resource implications of palliative chemotherapy for ovarian cancer. J Clin Oncol 1997; 15: 1000–7.
36 Hillner BE, Smith TJ. Efficacy and cost effectiveness of adjuvant chemotherapy in women with node-negative breast cancer. A decision-analysis model. N Engl J Med 1991; 324: 160–8.
37 Sarasin FP. Decision analysis and its application in clinical medicine. Eur J Obstet Gynecol Reprod Biol 2001; 94: 172–9.
38 Kassirer JP, Moskowitz AJ, Lau J, Pauker SG. Decision analysis: a progress report. Ann Intern Med 1987; 106: 275–91.
39 Anonymous. Measuring quality, outcomes, and cost of care using large databases. Proceedings of the 6th Regenstrief Conference. Marshall, Indiana, 4–6 September 1996. Ann Intern Med 1997; 127: 665–774.
40 Iezzoni LI. Assessing quality using administrative data. Ann Intern Med 1997; 127: 666–74.
41 McDonald CJ, Overhage JM, Dexter P, Takesue BY, Dwyer DM. A framework for capturing clinical data sets from computerized sources. Ann Intern Med 1997; 127: 675–82.
42 Bolsin SN. Professional misconduct: the Bristol case. Med J Aust 1998; 169: 369–72.
43 Chassin MR, Hannan EL, DeBuono BA. Benefits and hazards of reporting medical outcomes publicly. N Engl J Med 1996; 334: 394–8.
44 Anonymous. How to read clinical journals: II: to learn about a diagnostic test. Can Med Assoc J 1981; 124: 703–10.
45 Epstein AM. Rolling down the runway: the challenges ahead for quality report cards. JAMA 1998; 279: 1691–6.
46 Health Solutions Pty Ltd. Independent Assessment of Casemix in Victoria. Melbourne: Health Solutions Pty Ltd; 1994.
47 Salkeld G, Davey P, Arnolda G. A critical review of health-related economic evaluations in Australia: implications for health policy. Health Policy 1995; 31: 111–25.
48 Hill S, Henry D, Mitchell A. Problems in pharmacoeconomic analyses. JAMA 2000; 284: 1922–4.
49 Coates A, Gebski V, Bishop JF, Jeal PN, Woods RL, Snyder R et al. Improving the quality of life during chemotherapy for advanced breast cancer. A comparison of intermittent and continuous treatment strategies. N Engl J Med 1987; 317: 1490–5.
50 Stewart RA, Sharples KJ, North FM, Menkes DB, Baker J, Simes J. Long-term assessment of psychological well-being in a randomized placebo-controlled trial of cholesterol reduction with pravastatin. The LIPID Study Investigators. Arch Intern Med 2000; 160: 3144–52.