
Ophthalmic Epidemiology, 13:81–83, 2006. Copyright © Taylor & Francis Group, LLC. ISSN: 0928-6586. DOI: 10.1080/09286580600611401

INVITED EDITORIAL

A Call for Empirical Research on Best Practices for Economic Evaluation in Ophthalmic Outcomes Research

Kevin D. Frick, PhD, Johns Hopkins Bloomberg School of Public Health, Department of Health Policy and Management, Health Services Research and Development Center, Baltimore, MD, USA
Melissa A. Clark, PhD, Brown University Medical School, Department of Community Health, Providence, RI, USA
Barbara K. Martin, PhD, Johns Hopkins Bloomberg School of Public Health, Department of Epidemiology, Baltimore, MD, USA
Lori L. Grover, OD, FAAO, Johns Hopkins University School of Medicine, Department of Ophthalmology, Baltimore, MD, USA

Received 31 January 2006; accepted 1 February 2006. Correspondence to: Kevin Frick, Johns Hopkins Bloomberg School of Public Health, Department of Health Policy and Management, 624 N. Broadway, Rm. 606, Baltimore, MD 21205, USA. E-mail: [email protected]

The work by Lamoureux and colleagues [1], identifying the smallest number of months of diary data necessary to achieve an unbiased and relatively precise estimate of annual personal costs associated with visual impairment, opens the door to a series of critical research questions aimed at improving economic evaluations in ophthalmic outcomes research.

Economic evaluation is an increasingly common component of ophthalmic outcomes research. Many ophthalmic economic evaluation studies have relied on secondary analyses of administrative data or survey data collected for other purposes [2-5]. A preferred study design would use data collected specifically to estimate the cost of visual impairment. Aside from the work of Lamoureux and colleagues, little research has focused on best research practices for ophthalmic economic evaluations using primary data.

Despite many recommendations, the criteria for best research practices in economic evaluations are not set in stone [6-10]. Two sets of criteria apply regardless of the field of research. First, instruments should measure constructs suggested by the underlying economic theory or by expert recommendations from a variety of global sources. Second, the measures should be valid and reliable. A specific interpretation of validity and reliability can be applied to economic evaluation, in which cost and quality-of-life constructs are annualized. Conceptually, annualization is accomplished by appropriately weighting multiple health-related quality of life or cost measurements to arrive at an annual figure. By convention, a one-year time period is used to summarize quality-of-life measures as quality-adjusted life years (QALYs) and to accumulate costs and QALYs when assigning relative weights to events now and events years in the future. Annualized measures should be unbiased and as precise as possible given the usually limited resources available for data collection.
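Conceptually, the annualization described above amounts to scaling or weighting repeated measurements so that they cover a full year. As a minimal sketch (the function names and figures are our own illustration, not drawn from any of the cited studies), annual costs can be projected from k months of diary data, and a QALY can be computed as a duration-weighted average of utility measurements:

```python
def annualize_costs(monthly_costs):
    """Project annual costs from the first k months of diary data,
    assuming those months are representative of the full year."""
    k = len(monthly_costs)
    return sum(monthly_costs) * 12 / k

def qaly(utilities, durations_months):
    """Duration-weighted average utility over one year (a QALY).
    Each utility applies for its paired duration; durations sum to 12."""
    assert sum(durations_months) == 12, "weights must cover a full year"
    return sum(u * d for u, d in zip(utilities, durations_months)) / 12

# Hypothetical example: 3 months of diary costs scaled to a year,
# and four quarterly utility measurements weighted equally.
annual_cost = annualize_costs([110.0, 95.0, 125.0])        # 330 * 4 = 1320.0
year_qaly = qaly([0.80, 0.70, 0.75, 0.85], [3, 3, 3, 3])   # 0.775
```

The weighting scheme, not the arithmetic, is the substantive design choice; the sections below consider when simple proportional scaling is and is not defensible.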
Even if individual measures of health-related quality of life and costs are valid and reliable, the annualized measures must still be assessed for validity and reliability. Conceptually, this is not unique to economic evaluation. In general survey research, when a scale is formed from valid and reliable individual items, the entire scale still must be assessed for validity and reliability. Unlike general survey research, the process of annualizing health-related quality of life or cost measures combines repeated measures rather than combining multiple items for a scale. The number and frequency of repeated measures necessary for precise outcome estimation has received attention in the literature, but the findings have not yet been applied to annualized outcomes.

In this issue of Ophthalmic Epidemiology, Lamoureux and colleagues have advanced the assessment of best research practices for ophthalmic economic evaluations, focusing on reliability [1]. These authors explored whether researchers must collect an entire year's worth of personal cost data to generate a precise estimate of annual personal costs. The authors had 12 months of diary data and chose to estimate annual costs generalized from the first 1, 3, or 6 months. The annualization process for 3 months of data is shown in Figure 1, where the costs measured in the first 3 months (the vertical heights of the solid rectangles represent what is measured) are assumed to be representative of each 3-month period during the year (the dashed rectangles represent the assumed generalization). A similar process of measuring and generalizing over a 12-month period could be shown for either 1 or 6 months of measurement.

Notably, the authors find that at a population level the estimate of annual personal costs is unbiased even if data are collected for only 1 month. However, 3 months of data are necessary to obtain an estimate whose precision is comparable to that obtained when a year's worth of data are collected. This has important resource implications for future studies of the costs associated with visual impairment: the cost of data collection may be reduced by up to 75% if only 3 months of data need to be collected to describe costs over a 12-month period. These findings suggest that there would be value in additional research on best practices focusing on

other study designs (e.g., randomized controlled trials) and other constructs that are annualized (e.g., health-related quality of life annualized as QALYs). Because Lamoureux and colleagues focused only on the number of repeated measures obtained at short intervals for costs in an observational cohort study, further research on the bias and precision of annualized measures could focus on choosing the ideal frequency of measurements to cover a pre-specified time period for quality of life or costs, as is often the case in randomized controlled trials. Additional research could focus on appropriate recall periods, because few studies will use diary methods comparable to that used by Lamoureux and colleagues. Such research would be broadly applicable: all three design decisions (recall period and the number and frequency of measurements) apply to both types of study designs (observational cohort and randomized trial) and to both constructs (cost and quality of life).

Before addressing how the choices of recall period and the number and frequency of measurements might affect ophthalmic outcomes research, it is useful to understand the basic method of annualizing health-related quality of life measurements. Figure 2 illustrates how data from a health-related quality of life instrument with a 1-day recall period, collected at 3-month intervals, are generalized between observations to calculate QALYs. In this case, the standard assumption is a smooth transition between states. Mathematically, this is equivalent to assuming that each measurement applies from the midpoint between the previous and current observation to the midpoint between the current and next observation.

Of course, in many cases, a cost or cost-effectiveness analysis is a secondary aim, and design choices will be made to facilitate the study's primary objectives

FIGURE 1. Costs measured in the first 3 months (solid rectangles) are assumed to be representative of each 3-month period during the year (dashed rectangles).

FIGURE 2. Health-related quality of life measurements collected at 3-month intervals with a 1-day recall period, generalized between observations to calculate QALYs.

rather than the economic objectives. However, as cost-outcome analyses become increasingly important for decision-making, cost and quality-of-life results may become primary aims. In most cases, when design decisions are to be made to facilitate economic objectives, empirical data are needed. Without empirical data, intuition can guide us only in understanding when design choices make little difference.

To illustrate, we contrast a study of persons with no light perception, where design choices are unlikely to affect bias and precision, with a study of persons with diabetic retinopathy, where design choices are likely to affect bias and precision. A person with no light perception is likely to have little systematic variation in health-related quality of life over time, either within a month or from month to month. This does not mean that the person will experience every day similarly; a person with no light perception will have good days and bad days, as an otherwise healthy person does. The key is that collecting data for this person using a short recall period at infrequent intervals will likely be sufficient to describe annual costs or QALYs. Assuming that a single measurement of health-related quality of life or costs is a relatively precise measure for a long period of time is not unreasonable.

In a study of patients with diabetic retinopathy and associated macular edema, the recall period and the number and frequency of measurements are likely to be important. For a person with macular edema, vision impairment and the resulting function and quality of life are likely to fluctuate over time [11]. The calculation of QALYs or annual costs may be imprecise if it is necessary to assume that a single measure applies for an extended period. The annualized measures are likely to be more precise when the data are collected more frequently over a longer period of time, with a recall period for which the respondent can provide a precise response.
Asking about a very short recall period would avoid measurement error but may limit the opportunity to obtain data about periods of more and less vision impairment. For study subjects whose periods of fluctuating vision impairment are shorter than the recall period, it may be difficult to provide a single response that appropriately summarizes health utility over the entire period.
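The midpoint convention illustrated in Figure 2 can be sketched in code. This is our own illustration of the assumption stated above (each measurement is carried from midpoint to midpoint between observation times); the function name and example values are hypothetical:

```python
def midpoint_qaly(times_months, utilities):
    """Compute a QALY from utility measurements taken at the given times
    (in months, over one 12-month year). Each measurement is assumed to
    apply from the midpoint between the previous and current observation
    to the midpoint between the current and next observation; the first
    and last measurements extend to the start and end of the year."""
    # Interval boundaries: year start, midpoints between observations, year end.
    mids = [(a + b) / 2 for a, b in zip(times_months, times_months[1:])]
    bounds = [0.0] + mids + [12.0]
    durations = [hi - lo for lo, hi in zip(bounds, bounds[1:])]
    return sum(u * d for u, d in zip(utilities, durations)) / 12.0

# Hypothetical example: five measurements at 3-month intervals.
q = midpoint_qaly([0, 3, 6, 9, 12], [0.80, 0.70, 0.75, 0.70, 0.80])  # 0.7375
```

With equally spaced observations, this midpoint weighting gives the same result as linearly interpolating utility between observations, which is the "smooth transition" reading of the same assumption.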


The comparison of these two ophthalmic conditions illustrates that patients with ophthalmic conditions can have fluctuating vision impairment and function, so the choices of recall period and the frequency and number of measurements are likely to affect the precision of the annualized measure. This knowledge alone does not guide the design choices that must be made; either empirical studies like that of Lamoureux and colleagues or simulations will be necessary to guide researchers. Decisions on the three design questions will affect not only the measurement but also the cost of data collection. To produce economic evaluation studies around the globe with the highest quality and greatest efficiency, the work of Lamoureux and colleagues should be extended to focus on quality of life, on the impact of variations in the frequency of interviews and recall periods on the precision of annualized cost and quality-of-life estimates, and on validation in other countries.
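The kind of simulation called for here can be sketched briefly. The following is our own illustrative Monte Carlo, not an analysis from the editorial or from Lamoureux and colleagues: it generates hypothetical monthly costs for a cohort, projects annual costs from the first k months, and compares the spread of the resulting estimates across choices of k. The cost model (a person-specific mean plus month-to-month noise) is an assumption for illustration only:

```python
import random
import statistics

def annual_estimates(k_months, n_people=500, seed=0):
    """Simulate monthly diary costs for a cohort and project annual costs
    from the first k months, scaled by 12/k. Fixed seed: the same simulated
    cohort is used for every k, so designs are compared on identical data."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_people):
        mean = rng.uniform(50, 150)                 # person-specific monthly mean
        monthly = [max(0.0, rng.gauss(mean, 20)) for _ in range(12)]
        estimates.append(sum(monthly[:k_months]) * 12 / k_months)
    return estimates

# Compare bias (cohort mean) and precision (standard deviation) across designs.
for k in (1, 3, 6, 12):
    est = annual_estimates(k)
    print(k, round(statistics.mean(est)), round(statistics.stdev(est)))
```

Under this model the 1-month design is roughly unbiased at the population level but noticeably less precise than longer collection periods, mirroring the pattern Lamoureux and colleagues report; a real simulation study would replace the toy cost model with empirically grounded distributions, including seasonal patterns.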

REFERENCES

[1] Lamoureux EL, Chou SL, Larizza MF, Keeffe JE. The reliability of data collection periods of personal costs associated with vision impairment. Ophthalmic Epidemiol. 2006;13(2):121–126.
[2] Chiang YP, Bassi LJ, Javitt JC. Federal budgetary costs of blindness. Milbank Q. 1992;70(2):319–340.
[3] Frick KD, Foster A, Bah M, Faal H. Analysis of costs and benefits of the Gambian Eye Care Program. Arch Ophthalmol. 2005;123(2):239–243.
[4] Frick KD, Basilion EV, Hanson CL, Colchero MA. Estimating the burden and economic impact of trachomatous visual loss. Ophthalmic Epidemiol. 2003;10(2):121–132.
[5] Baltussen RM, Sylla M, Frick KD, Mariotti SP. Cost-effectiveness of trachoma control in seven world regions. Ophthalmic Epidemiol. 2005;12(2):91–101.
[6] Gold MR, Siegel JE, Russell LB, Weinstein MC, eds. Cost-Effectiveness in Health and Medicine. New York: Oxford University Press; 1996.
[7] Tan-Torres Edejer T, Baltussen R, Adam T, et al., eds. Making Choices in Health: WHO Guide to Cost-Effectiveness Analysis. Geneva, Switzerland: World Health Organization; 2003.
[8] Ramsey S, Willke R, Briggs A, et al. Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report. Value Health. 2005;8(5):521–533.
[9] Weinstein MC, O'Brien B, Hornberger J, et al. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices–Modeling Studies. Value Health. 2003;6(1):9–17.
[10] Motheral B, Brooks J, Clark MA, et al. A checklist for retrospective database studies: report of the ISPOR Task Force on Retrospective Databases. Value Health. 2003;6(2):90–97.
[11] American Optometric Association. Optometric Clinical Practice Guideline: Care of the Patient with Diabetes Mellitus (CPG3). 3rd ed. St. Louis, MO: American Optometric Association; 2002.
