Health Insurance Effects on Preventive Care and Health: A Methodologic Review

Jacob Wallace, BA,1 Benjamin D. Sommers, MD, PhD2,3

From the 1Harvard PhD Program in Health Policy, Harvard University, Cambridge, Massachusetts; 2Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, Massachusetts; and 3Division of General Internal Medicine and Primary Care, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts. Address correspondence to: Jacob Wallace, BA, Harvard PhD Program in Health Policy, 14 Story Street, Cambridge MA 02138. E-mail: [email protected]. This article is part of the supplement issue titled The Use of Economics in Informing U.S. Public Health Policy. http://dx.doi.org/10.1016/j.amepre.2016.01.003

The Affordable Care Act has led to significant gains in insurance coverage and reduced the cost of preventive care for millions of Americans. There is considerable interest in understanding how these changes will impact the use of preventive care services and health outcomes. Obtaining unbiased estimates of the impact of insurance on these outcomes is challenging because of inherent differences between insured and uninsured individuals. This article reviews common experimental and quasi-experimental approaches researchers have used in the past to address this problem, including RCTs, differences-in-differences analyses, and regression discontinuity. In each case, the key assumptions underlying the models are discussed alongside some of the main research findings related to prevention and health. The review concludes with a discussion of how experimental and quasi-experimental methods can be used to study the impact of the Affordable Care Act on preventive care and health outcomes. (Am J Prev Med 2016;50(5S1):S27–S33) © 2016 American Journal of Preventive Medicine. All rights reserved. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Introduction

The passage of the Affordable Care Act (ACA) resulted in new insurance coverage for at least 16 million Americans.1–3 In addition to expanding insurance to previously uninsured individuals, the law also includes several provisions designed to enhance the coverage of preventive services. Most notably, individuals newly covered through the health insurance marketplaces must be provided with an essential health benefits package that includes preventive care. The law also mandates that all private insurance plans and Medicare cover evidence-based preventive services without any cost sharing.4 The ACA defines covered preventive services for three populations: adults, women (including pregnant women), and children. There is considerable interest in understanding how this enhanced coverage of preventive services will impact the use of preventive care and health outcomes. This article reviews some of the key research methods and previous findings on the impact of
coverage on the use of preventive care and health outcomes and discusses how future research might build on this work to examine the effects of the ACA.

Prior evidence on the impact of coverage is mixed, in large part owing to the difficulty of obtaining unbiased estimates given differences between insured and uninsured individuals.5 Simply put, people without insurance differ from those with insurance in terms of sociodemographics, health behaviors, propensity to use health care, and baseline health status. Studies in the literature have dealt with this problem to varying degrees. Cross-sectional observational and longitudinal studies, of which there are hundreds, typically compare outcomes for insured and uninsured individuals and (in some cases) control for covariates such as income, age, gender, or health behaviors. Although they typically document a positive association among coverage, receipt of preventive services, and health, these studies do not adequately address selection bias; even if they adjust for measured differences, many of the potential confounders are either not fully measured or simply not captured at all.6 For this reason, this review focuses on experimental and quasi-experimental methods that use variation in health insurance coverage that is plausibly unrelated to prevention and health.

Experimental Methods

The ideal study of the causal effect of coverage on preventive care and health outcomes would randomly
assign some individuals to receive a particular type of coverage and other individuals to remain uninsured or to receive an alternative type of coverage. This method allows researchers to compare the use of preventive care and health outcomes across groups—one or more treatment groups and a control group—without any selection bias or confounding. There have been only a handful of large-scale, experimental studies of the impact of health insurance in the U.S. The first was the RAND Health Insurance Experiment (HIE), conducted between 1971 and 1982 to evaluate how cost sharing (i.e., how much patients must pay out of pocket for each service) and plan type affect service use (including preventive care) and health outcomes. The experiment found significant decreases in healthcare utilization when individuals faced higher cost-sharing requirements, though the health impacts of these changes were generally small or non-existent for all but the most vulnerable patients in the study.7 A more recent nonrandomized study examining high-deductible health plans offers support for some of the HIE's key findings, observing that patients reduce the use of potentially valuable care (e.g., preventive care) and potentially wasteful care (e.g., imaging) when faced with higher cost sharing.8 These results suggest that the reductions in cost sharing for prevention under the ACA may increase the use of preventive services among those who had insurance prior to the ACA. However, because all participants in the HIE had some form of health insurance, the findings are not directly relevant to the impact of coverage on those newly insured under the ACA.

Fortunately, the Oregon HIE (OHIE) examines the impact of coverage on the previously uninsured. The OHIE was an evaluation of the causal impact of Medicaid on utilization, financial strain, and health outcomes. The study was made possible by Oregon's Medicaid expansion in the 2000s, in which demand outpaced state funding, leading to the creation of a waitlist. In 2008, the state conducted a lottery to determine which individuals on the waitlist could apply to participate in the Medicaid expansion. Even though not all people selected by the lottery enrolled in Medicaid, selection increased the probability of Medicaid coverage by 24.1 percentage points, allowing the researchers to estimate the effect of Medicaid.9 In the study's first year, Medicaid increased the likelihood of being admitted to the hospital, using outpatient care, and using prescription drugs. The researchers also examined four measures of preventive care: cholesterol testing, blood tests for diabetes, mammograms, and Pap tests. Medicaid led to statistically significant increases in each of these measures, with particularly large increases
in the probability of having received a mammogram (60% relative increase) or a Pap test within the last year (45% relative increase). Medicaid also improved self-reported physical and mental health, increasing the probability that people reported being in good or excellent health by 25% and reducing the probability of a positive screen for depression by 10%.10

In the study's second year, the researchers examined a wider set of clinical outcomes and preventive services. Of 21 clinical measures and health outcomes, Medicaid coverage had a statistically significant impact on five measures related to depression and diabetes, including a reduction in the rate of depression and increased rates of new diagnosis of diabetes and current use of medications for diabetes.11 They found no statistically significant impact of Medicaid on other clinical outcomes, including measured blood pressure, cholesterol, and blood sugar control, though whether this reflected a true null effect or, in some cases, an underpowered study design remains a source of debate.12 In addition to clinical outcomes, the researchers examined the use of seven preventive care services. They found that Medicaid coverage led to significant increases in the use of cholesterol screening, Pap tests, prostate-specific antigen screening, and mammography, but did not produce significant changes in receipt of colorectal cancer screening or influenza vaccination.11 The OHIE provides strong evidence that Medicaid coverage increases the use of some preventive services while improving self-reported health. Similar findings were noted in a smaller RCT featuring accelerated health insurance benefits for disabled individuals who did not yet qualify for Medicare coverage: In that study, significant increases were noted in self-reported health, overall physical and mental health (using the Short-Form 36 questionnaire), and healthcare utilization.13 It is unclear whether these two studies can be generalized to newly covered individuals under the ACA, as roughly half are expected to gain private health insurance rather than Medicaid or Medicare.

Although RCTs provide valuable evidence on the causal impact of health insurance, they have limitations. First, time and cost constraints can limit sample size, making it difficult to detect effects on rare or longer-term outcomes (such as mortality), which has been one prominent critique of the OHIE. Second, RCT study procedures, such as intensive follow-up with patients, may limit generalizability because they differ from normal patterns of care. In addition, RCTs typically impact a small population and may not generalize to larger policy changes that may reshape healthcare markets or alter the behavior of doctors, hospitals, and insurers. Finally, in the design of RCTs, researchers face a
trade-off between identifying the desired treatment effect (e.g., what impact does expanding health insurance have on the use of preventive care?) and distinguishing the causal mechanisms (e.g., why does health insurance impact the use of preventive care?).14 Some of these disadvantages are not present in quasi-experimental methods that, under the right conditions, allow researchers to estimate the causal impact of insurance using larger samples—sometimes even population-level data—under more typical real-world circumstances.15,16

Quasi-experimental Methods

Quasi-experimental analyses use natural experiments to identify sources of variation in coverage that are plausibly unrelated to the outcomes being studied (e.g., preventive care and health outcomes). Natural experiments in health insurance can arise from changes in coverage policy, the use of strict cutoffs to determine eligibility (e.g., age or income), or differences in coverage policies across groups of individuals. Though numerous methods have been developed to study natural experiments, this review focuses on two of the most common: differences-in-differences analysis (DD) and regression discontinuity (RD).

Differences-in-Differences Analysis

Perhaps the most common quasi-experimental method used to study the impact of insurance expansions is DD. In the basic setup, researchers observe outcomes for a treatment group impacted by the expansion and a control group not subject to the expansion. The DD estimator is the average difference in the outcome for the treatment group before and after treatment, minus the average difference in the outcome for the control group before and after treatment—hence, the term differences-in-differences (sometimes also called "difference-in-differences" in the literature). For those more familiar with interrupted time series, DD is somewhat analogous to an interrupted time series with a control group, except that in a DD approach the outcome is modeled based on the mean pre-policy versus mean post-policy level, whereas interrupted time series models specify a pre-policy time trend versus a post-policy time trend. Although both approaches typically produce similar results—and some papers present both sets of results—there are relative advantages and disadvantages to each, depending on the underlying data, sample size, and expected evolution of policy effects over time.

There are many variations on DD, but all share the common assumption that trends in the outcomes prior to the expansion must be parallel across the treatment and control groups—in other words, in the absence of the policy change, the two groups would have continued to move in tandem.
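
To make this estimator concrete, the following minimal sketch (in Python) computes the two-group, two-period DD estimate as the coefficient on a group-by-period interaction term in a linear probability model. The data are simulated, and the variable names and effect sizes are illustrative assumptions rather than values drawn from any study reviewed here.

# Minimal differences-in-differences sketch on simulated data.
# All variable names and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20000
treated = rng.integers(0, 2, n)    # 1 = group affected by the expansion
post = rng.integers(0, 2, n)       # 1 = observation after the policy change

# Baseline rate, a fixed group gap, a common time shift, and a true
# policy effect of 5 percentage points on a binary preventive-care outcome.
p = 0.40 + 0.10 * treated + 0.03 * post + 0.05 * treated * post
got_checkup = (rng.random(n) < p).astype(int)
df = pd.DataFrame({"treated": treated, "post": post, "got_checkup": got_checkup})

# The coefficient on treated:post is the DD estimate:
# (treatment group, post minus pre) minus (control group, post minus pre).
dd = smf.ols("got_checkup ~ treated * post", data=df).fit()
print(dd.params["treated:post"])   # should be close to 0.05

The same quantity could be obtained by differencing the four group-by-period means; the regression form simply makes it easy to add covariates and to compute SEs.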

Figure 1. Stylized depiction of a differences-in-differences analysis.

Figure 1 presents a stylized example of outcomes that can be analyzed using a DD model. Notably, the DD model does not require that the treatment and control groups have the same average outcome before the policy/treatment, as long as the pre-policy trends in those outcomes are similar.17 In some cases, researchers relax the common trends assumption and include controls for group-specific time trends. In these models, the assumption is that outcomes would follow the linear, group-specific trend in the absence of intervention ("parallel growth" rather than "parallel trends"). However, because data are noisy and there may be lags between an intervention and the change in an outcome, it can be difficult to distinguish treatment effects from differential trends in these models.18

Such DD methods have been used to study the creation of Medicare in 1965, state Medicaid expansions in the 1980s through the 2000s, Massachusetts' landmark 2006 statewide health reform that served as the model for the ACA, and the 2010 dependent coverage provision of the ACA. To study the impact of Medicare, which was a federal law with uniform eligibility criteria across states, Finkelstein and McKnight19 constructed control groups by exploiting geographic differences in when the law went into effect or by identifying areas where the policy had smaller or larger impacts because of different preexisting coverage rates. In addition, the researchers analyzed trends in coverage and health outcomes for individuals aged younger and older than 65 years around the introduction of Medicare. A concern with defining the treatment and control groups using age was that trends in their outcomes diverged prior to 1965, but reassuringly, each of these approaches led to similar findings, with the authors concluding that the creation of Medicare produced large effects on healthcare utilization and spending but no discernible impact on mortality. Another strand of DD studies examined how the impact
of Medicare coverage differed for previously insured and uninsured adults, showing that previously uninsured adults experienced greater gains in health outcomes with the onset of Medicare coverage and also consumed more services than those previously insured.20–22 In the case of prior Medicaid expansions, control groups were more readily available, as these expansions typically occurred as a result of some states electing to expand and others not doing so, or of new Congressional requirements prompting states that had previously been less generous in their eligibility to catch up to the new federal minimum standards. A series of such DD studies has shown that Medicaid expansions increase the use of primary care, prenatal care, and hospital care,23 with mixed evidence on prenatal care and infant mortality.24–26 State Medicaid expansions among adults were found to improve access to care and self-reported health and to reduce mortality, particularly in low-income counties and minority groups.15

The DD framework has also been used extensively to study the 2006 Massachusetts health reform. Researchers have typically compared Massachusetts with nearby states or New England as a whole, and have documented a wide range of improvements in access to care, self-reported health,27,28 Pap tests,29 primary care, and rates of influenza vaccination,30 but mixed evidence on admissions for ambulatory care-sensitive conditions31,32 and mammography rates.29,33 Another analysis of this policy created a control group of demographically and economically similar counties from outside the state to compare with counties in Massachusetts, before and after the reform. The study found a significant decline in statewide mortality for non-elderly adults, with changes concentrated among causes of death more likely to be amenable to health care, such as heart disease, cancer, and infections.16

Most recently, the dependent coverage provision of the ACA has created an opportunity to evaluate the expansion of private health insurance. The policy, which allowed children to remain on their parents' plans until age 26 years, was implemented beginning in 2010. Using a DD framework, researchers have compared outcomes for young adults impacted by the policy (aged 19–26 years) with control groups that were unaffected because they were too young (aged 16–18 years, and already eligible to remain on their parents' plans) or too old (aged 27–30 years). Although many states already had dependent coverage mandates in place, these mandates were much weaker than the 2010 federal policy.34 The mandate led to large increases in health insurance coverage for young adults, as well as improvements in access to care, affordability, and self-reported health.35–37 The estimated impact on utilization and prevention has
been mixed: One study found increased utilization of hospital care,38 others have shown a decline in emergency department use, especially for outpatient-treatable conditions,39 and a recent study showed improved rates of vaccination against human papillomavirus among young women.40

Overall, recent DD analyses have helped clarify the impacts of coverage on prevention and health using large sample sizes and at much lower cost than operating a social experiment. Though DD analyses require stronger assumptions than RCTs, a useful feature of this study design is that the assumption of parallel trends can be assessed graphically and statistically. DD methods can also be used to evaluate both binary and continuous outcomes. For binary outcomes, nonlinear models may be used in the DD framework, but because of their straightforward interpretation, many researchers favor linear probability models.41 It is important to note, however, that even if parallel trends exist prior to the policy change being studied, this does not rule out the possibility that time-varying confounders in the treatment and control states could have influenced the results. For example, individual behavior may change in anticipation of a new policy. Alternatively, the timing of the policy change may be endogenous—meaning that the policy change itself was driven by confounding factors that may impact the study outcomes. For example, states may expand Medicaid when the economy is strong, which may independently impact health outcomes. Researchers must rule out these and other alternative explanations for a divergence in trends across treatment and control groups after the policy change. Finally, DD papers often use data from many years (or periods) before and after the intervention and examine serially correlated outcomes (e.g., insurance status). Researchers must carefully correct for this serial correlation when calculating SEs. Several approaches have been suggested, including collapsing the data (ignoring the time-series variation) or clustering SEs by group or time (or ideally both).42
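
One minimal way to implement this clustering recommendation is sketched below for a simulated state-by-year panel: a DD regression with state and year fixed effects and SEs clustered at the state level. The column names, dates, and effect size are illustrative assumptions, not taken from any study discussed here.

# Sketch: DD regression on a simulated state-by-year panel with SEs
# clustered by state. All names, dates, and effect sizes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
states = [f"S{i:02d}" for i in range(40)]
expansion_states = set(states[:20])        # half the states "expand"

rows = []
for s in states:
    state_level = rng.normal(0, 0.02)      # fixed differences across states
    for year in range(2004, 2014):
        expanded = int(s in expansion_states)
        post = int(year >= 2010)           # policy begins in 2010
        rate = (0.50 + state_level + 0.01 * (year - 2004)
                + 0.03 * expanded * post   # true effect: 3 percentage points
                + rng.normal(0, 0.02))
        rows.append({"state": s, "year": year, "expanded": expanded,
                     "post": post, "screening_rate": rate})
panel = pd.DataFrame(rows)

# State and year fixed effects absorb fixed group differences and common
# shocks; the coefficient on expanded:post is the DD estimate. SEs are
# clustered by state to address serial correlation within states.
model = smf.ols(
    "screening_rate ~ expanded:post + C(state) + C(year)", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})
print(model.params["expanded:post"], model.bse["expanded:post"])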

Regression Discontinuity

Another quasi-experimental approach involves identifying discontinuities created by policies, which offer an opportunity to identify their impacts. Access to public insurance programs is determined by a set of eligibility criteria; as described in the last section, some of these criteria are discrete—living in an expansion state versus a non-expansion state, for instance—which lends itself naturally to a DD approach.
In other cases, the criteria rely on continuous measures like income. RD designs exploit cutoffs in these continuous measures (called the "running" or "forcing" variable), allowing researchers to identify treatment and control individuals who are similar except for being on one side or the other of the cutoff. These techniques have been used to evaluate the impact of Medicaid by using sharp cutoffs in income eligibility criteria, and to study the impact of Medicare by examining outcomes for individuals just below and above the age 65 years eligibility cutoff. In each case, the researchers must establish that treatment assignment (being on either side of the cutoff) is exogenous for those close to the cutoff, meaning that it does not vary in a systematic way with the outcomes being studied. Put differently, individuals must not be able to manipulate their income or age to determine their treatment status, which would bias the study design.43 RD models adjust for the potential confounding effect of the running variable over the sample range (with linear or polynomial controls or using local linear regression) and then attribute the change in outcomes occurring precisely at the point of the discontinuity to the policy's causal effect. Recent advances provide data-driven criteria for selecting the optimal bandwidth around the cutoff when estimating an RD model with local linear regression.44 Figure 2 presents a stylized depiction of an RD analysis.

Figure 2. Stylized depiction of a regression discontinuity analysis.

The RD approach has been used to examine the effects of Medicare coverage by comparing individuals on either side of the eligibility threshold at age 65 years. RD estimates have shown increases in the use of preventive services related to breast cancer45 and voluntary procedures like knee and hip replacements,46 as well as reductions in mortality.47 Others have used Medicaid eligibility criteria to conduct RD analyses. In Wisconsin, applications for a Medicaid benefit plan exceeded projections, leading the state to announce that applications received after a specific date would be placed on a waitlist.
Researchers constructed a treatment group of individuals whose applications were received just before the cutoff and compared their outcomes with those of a control group of individuals whose applications were received just after the cutoff. The study showed that this rural Medicaid expansion led to increases in outpatient visits and inpatient stays but had no effect on emergency department use.48 These results can be interpreted as causal estimates if assignment to treatment was indeed quasi-random, meaning that individuals who just missed the deadline did not differ significantly from individuals whose applications came in just before the deadline. RD methods have also been used to assess discontinuities in income-related premiums for coverage49 and in Medicaid eligibility guidelines across states.50

The RD design provides strong internal validity, similar to a DD analysis, if the key assumptions for both models are met. For RD models, the key assumptions are testable, and treatment effects ideally are visible in a scatterplot of the outcome against the running variable. The external validity of RD estimates, however, is often more limited, because RD provides an estimate of a local treatment effect at the discontinuity. For example, analyses that use the Medicare eligibility age as the discontinuity produce estimates of the impact of Medicare coverage at age 65 years; these findings may not be informative about the impact of Medicare coverage for older individuals.
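
The sketch below illustrates these mechanics by estimating a sharp RD with local linear regression on simulated data around the age 65 years Medicare threshold. The bandwidth is hand-picked for simplicity rather than selected with the data-driven criteria cited above,44 and the outcome, effect size, and variable names are illustrative assumptions.

# Sketch: sharp regression discontinuity with local linear regression,
# estimated on simulated data around the age-65 eligibility cutoff.
# The 3-year bandwidth and all names/effect sizes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 50000
age = rng.uniform(55, 75, n)                 # running variable
eligible = (age >= 65).astype(int)           # treatment status at the cutoff

# Smooth trend in the outcome plus a true jump of 8 percentage points at 65.
p = 0.30 + 0.01 * (age - 65) + 0.08 * eligible
screened = (rng.random(n) < p).astype(int)
df = pd.DataFrame({"age": age, "eligible": eligible, "screened": screened})

bandwidth = 3.0
local = df[(df["age"] - 65).abs() <= bandwidth].copy()
local["dist"] = local["age"] - 65            # centered running variable

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on eligible is the estimated discontinuity at age 65.
rd = smf.ols("screened ~ eligible + dist + eligible:dist", data=local).fit()
print(rd.params["eligible"])                 # should be close to 0.08

In practice, researchers would pair a specification like this with a density check of the running variable near the cutoff43 and would report sensitivity to alternative bandwidths.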

Conclusions and Future Directions

Numerous high-quality studies have used a wide range of methods to study the impact of health insurance on prevention and health. In contrast to simple cross-sectional or cohort studies, both experimental and quasi-experimental approaches offer methodologic solutions to a key source of confounding: unmeasured or unobserved differences between insured and uninsured individuals. Because the ACA is a policy change (and a very large one at that) rather than a social experiment, researchers will primarily have to rely on quasi-experimental approaches to evaluate its impact. As the ACA was implemented nationally, identifying plausible control groups will be challenging for some components of the law. However, some aspects of the ACA—like the dependent coverage provision—lend themselves to quasi-experimental analyses. This work will continue as data become available for those newly insured through Medicaid expansions and the health insurance marketplaces.

Even though the ACA was federal legislation, the Supreme Court ruling in 2012 rendered the Medicaid expansion a state option. The result has been significant
state-by-state variation in expansion plans—both in whether to expand coverage and in how to do so—with some states pursuing alternative expansions such as Arkansas's so-called private option.51,52 These changes all naturally lend themselves to studying the impact of Medicaid and alternative coverage designs in a DD framework, which some preliminary research has already done.53 Meanwhile, the new criteria for both Medicaid and marketplace coverage include income cutoffs that lead to various discontinuities in eligibility, premiums, and cost-sharing subsidies. Studying these policies via RD methods can yield important insights into the relative impacts of Medicaid, private coverage, and insurance plan design. Finally, RCTs conducted as part of various demonstration projects under the ACA—designed to improve the cost-effectiveness and quality of care—will also be important items in health policy researchers' toolkits, and policymakers would be wise to consider the value of randomization when implementing innovative new approaches to coverage and care delivery, in order to better inform future policy efforts. Taken together, these alternative approaches will be critical to conducting high-quality research, not only to assess the impact of the ACA as a piece of legislation but also to gather new evidence about the broader impacts of insurance on prevention and health.

Publication of this article was supported by the Centers for Disease Control and Prevention. This material is based on work supported by grant DGE 1144152 from the National Science Foundation Graduate Research Fellowship (Mr. Wallace). Dr. Sommers currently serves part-time as an advisor in the Office of the Assistant Secretary for Planning and Evaluation at the U.S. DHHS. The views presented here are those of the authors and do not represent the U.S. DHHS.

References

1. Blumenthal D, Collins SR. Health care coverage under the Affordable Care Act—a progress report. N Engl J Med. 2014;371(3):275–281. http://dx.doi.org/10.1056/NEJMhpr1405667.
2. Sommers BD, Musco T, Finegold K, Gunja MZ, Burke A, McDowell AM. Health reform and changes in health insurance coverage in 2014. N Engl J Med. 2014;371(9):867–874. http://dx.doi.org/10.1056/NEJMsr1406753.
3. Health Insurance Coverage and the Affordable Care Act. U.S. DHHS, Office of the Assistant Secretary for Planning and Evaluation (ASPE). 2015. http://aspe.hhs.gov/health/reports/2015/uninsured_change/ib_uninsured_change.pdf. Accessed February 15, 2016.
4. Sommers BD, Wilson L. Fifty-Four Million Additional Americans Are Receiving Preventive Services Coverage Without Cost-Sharing Under the Affordable Care Act. Washington, DC: U.S. DHHS; 2012.
5. Frakt A, Carroll AE, Pollack HA, Reinhardt U. Our flawed but beneficial Medicaid program. N Engl J Med. 2011;364(16):e31. http://dx.doi.org/10.1056/NEJMp1103168.
6. Levy H, Meltzer D. What do we really know about whether health insurance affects health? In: McLaughlin C, ed. Health Policy and the Uninsured. Washington, DC: Urban Institute Press; 2004:179–204.
7. Newhouse JP. Free for All? Lessons from the RAND Health Insurance Experiment. Cambridge, MA: Harvard University Press; 1993.
8. Brot-Goldberg Z, Chandra A, Handel B, Kolstad J. What Does a Deductible Do? The Impact of Cost-Sharing on Health Care Prices, Quantities, and Spending Dynamics. Cambridge, MA: National Bureau of Economic Research; 2015. http://dx.doi.org/10.3386/w21632.
9. Allen H, Baicker K, Finkelstein A, Taubman S, Wright BJ, Oregon Health Study Group. What the Oregon health study can tell us about expanding Medicaid. Health Aff (Millwood). 2010;29(8):1498–1506. http://dx.doi.org/10.1377/hlthaff.2010.0191.
10. Finkelstein A, Taubman S, Wright BJ, et al. The Oregon health insurance experiment: evidence from the first year. Q J Econ. 2012;127(3):1057–1106. http://dx.doi.org/10.1093/qje/qjs020.
11. Baicker K, Taubman S, Allen H, et al. The Oregon experiment—effects of Medicaid on clinical outcomes. N Engl J Med. 2013;368(18):1713–1722. http://dx.doi.org/10.1056/NEJMsa1212321.
12. Pizer S, Frakt A. Loss of precision with IV. The Incidental Economist. http://theincidentaleconomist.com/wordpress/wp-content/uploads/2013/05/loss-of-precision-with-IV-v3.pdf. Published 2013. Accessed February 15, 2016.
13. Weathers RR 2nd, Stegman M. The effect of expanding access to health insurance on the health and mortality of Social Security Disability Insurance beneficiaries. J Health Econ. 2012;31(6):863–875. http://dx.doi.org/10.1016/j.jhealeco.2012.08.004.
14. Ludwig J, Kling JR, Mullainathan S. Mechanism experiments and policy evaluations. J Econ Perspect. 2011;25(3):17–38. http://dx.doi.org/10.1257/jep.25.3.17.
15. Sommers BD, Baicker K, Epstein AM. Mortality and access to care among adults after state Medicaid expansions. N Engl J Med. 2012;367(11):1025–1034. http://dx.doi.org/10.1056/NEJMsa1202099.
16. Sommers BD, Long SK, Baicker K. Changes in mortality after Massachusetts health care reform: a quasi-experimental study. Ann Intern Med. 2014;160(9):585–593. http://dx.doi.org/10.7326/M13-2275.
17. Angrist J, Pischke J. Mostly Harmless Econometrics. Princeton, NJ: Princeton University Press; 2009.
18. Angrist J, Pischke J. Mastering 'Metrics: The Path from Cause to Effect. Princeton, NJ: Princeton University Press; 2014.
19. Finkelstein A, McKnight R. What did Medicare do? The initial impact of Medicare on mortality and out of pocket medical spending. J Public Econ. 2008;92(7):1644–1668. http://dx.doi.org/10.1016/j.jpubeco.2007.10.005.
20. McWilliams JM, Meara E, Zaslavsky AM, Ayanian JZ. Health of previously uninsured adults after acquiring Medicare coverage. JAMA. 2007;298(24):2886–2894. http://dx.doi.org/10.1001/jama.298.24.2886.
21. McWilliams JM, Meara E, Zaslavsky AM, Ayanian JZ. Use of health services by previously uninsured Medicare beneficiaries. N Engl J Med. 2007;357:143–153. http://dx.doi.org/10.1056/NEJMsa067712.
22. McWilliams JM, Zaslavsky AM, Meara E, Ayanian JZ. Health insurance coverage and mortality among the near-elderly. Health Aff (Millwood). 2004;23(4):223–233. http://dx.doi.org/10.1377/hlthaff.23.4.223.
23. Currie J, Gruber J. Health insurance eligibility, utilization of medical care and child health. Q J Econ. 1996;111(2):431–466. http://dx.doi.org/10.2307/2946684.
24. Currie J, Gruber J. Saving babies: the efficacy and cost of recent expansions of Medicaid eligibility for pregnant women. J Polit Econ. 1996;104(6):1263–1296. http://dx.doi.org/10.1086/262059.
25. Howell EM. The impact of the Medicaid expansions for pregnant women: a synthesis of the evidence. Med Care Res Rev. 2001;58(1):3–30. http://dx.doi.org/10.1177/107755870105800101.
26. Epstein AM, Newhouse JP. The impact of Medicaid expansion for pregnant women on early prenatal care, use of cesarean section, and maternal and infant health outcomes in South Carolina and California. Health Care Financ Rev. 1998;19(4):85–99.
27. Van Der Wees PJ, Zaslavsky AM, Ayanian JZ. Improvements in health status after Massachusetts health care reform. Milbank Q. 2013;91(4):663–689. http://dx.doi.org/10.1111/1468-0009.12029.
28. Courtemanche CJ, Zapata D. Does universal coverage improve health? The Massachusetts experience. J Policy Anal Manage. 2014;33(1):36–69. http://dx.doi.org/10.1002/pam.21737.
29. Sabik LM, Bradley CJ. The impact of near-universal insurance coverage on breast and cervical cancer screening: evidence from Massachusetts. Health Econ. In press. Online February 18, 2015.
30. Miller S. The effect of the Massachusetts reform on health care utilization. Inquiry. 2012;49(4):317–326. http://dx.doi.org/10.5034/inquiryjrnl_49.04.05.
31. Kolstad JT, Kowalski AE. The impact of health care reform on hospital and preventive care: evidence from Massachusetts. J Public Econ. 2012;96(11–12):909–929. http://dx.doi.org/10.1016/j.jpubeco.2012.07.003.
32. McCormick D, Hanchate AD, Lasser KE, et al. Effect of Massachusetts healthcare reform on racial and ethnic disparities in admissions to hospital for ambulatory care sensitive conditions: retrospective analysis of hospital episode statistics. BMJ. 2015;350:h1480. http://dx.doi.org/10.1136/bmj.h1480.
33. Keating NL, Kouri EM, He Y, West DW, Winer EP. Effect of Massachusetts health insurance reform on mammography use and breast cancer stage at diagnosis. Cancer. 2013;119(2):250–258. http://dx.doi.org/10.1002/cncr.27757.
34. Monheit AC, Cantor JC, DeLia D, Belloff D. How have state policies to expand dependent coverage affected the health insurance status of young adults? Health Serv Res. 2011;46(1 Pt 2):251–267. http://dx.doi.org/10.1111/j.1475-6773.2010.01200.x.
35. Chua KP, Sommers BD. Changes in health and medical spending among young adults under health reform. JAMA. 2014;311(23):2437–2439. http://dx.doi.org/10.1001/jama.2014.2202.
36. Sommers BD, Buchmueller T, Decker SL, Carey C, Kronick R. The Affordable Care Act has led to significant gains in health insurance and access to care for young adults. Health Aff (Millwood). 2013;32(1):165–174. http://dx.doi.org/10.1377/hlthaff.2012.0552.
37. Wallace J, Sommers BD. Effect of dependent coverage expansion of the Affordable Care Act on health and access to care for young adults. JAMA Pediatr. 2015;169(5):495–497. http://dx.doi.org/10.1001/jamapediatrics.2014.3574.
38. Akosa Antwi Y, Moriya AS, Simon KI. Access to health insurance and the use of inpatient medical care: evidence from the Affordable Care Act young adult mandate. J Health Econ. 2015;39:171–187. http://dx.doi.org/10.1016/j.jhealeco.2014.11.007.
39. Akosa Antwi Y, Moriya AS, Simon K, Sommers BD. Changes in emergency department use among young adults after the Patient Protection and Affordable Care Act's dependent coverage provision. Ann Emerg Med. 2015;65(6):664–672. http://dx.doi.org/10.1016/j.annemergmed.2015.01.010.
40. Lipton BJ, Decker SL. ACA provisions associated with increase in percentage of young adult women initiating and completing the HPV vaccine. Health Aff (Millwood). 2015;34(5):757–764. http://dx.doi.org/10.1377/hlthaff.2014.1302.
41. Karaca-Mandic P, Norton EC, Dowd B. Interaction terms in nonlinear models. Health Serv Res. 2012;47(1 Pt 1):255–274. http://dx.doi.org/10.1111/j.1475-6773.2011.01314.x.
42. Bertrand M, Duflo E, Mullainathan S. How much should we trust differences-in-differences estimates? Q J Econ. 2004;119(1):249–275. http://dx.doi.org/10.1162/003355304772839588.
43. McCrary J. Manipulation of the running variable in the regression discontinuity design: a density test. J Econom. 2008;142(2):698–714. http://dx.doi.org/10.1016/j.jeconom.2007.05.005.
44. Imbens G, Kalyanaraman K. Optimal bandwidth choice for the regression discontinuity estimator. Rev Econ Stud. 2012;79(3):933–959. http://dx.doi.org/10.1093/restud/rdr043.
45. Decker SL. Medicare and the health of women with breast cancer. J Hum Resour. 2005;40(4):948–968. http://dx.doi.org/10.3368/jhr.XL.4.948.
46. Card D, Dobkin C, Maestas N. The impact of nearly universal insurance coverage on health care utilization: evidence from Medicare. Am Econ Rev. 2008;98(5):2242–2258. http://dx.doi.org/10.1257/aer.98.5.2242.
47. Card D, Dobkin C, Maestas N. Does Medicare save lives? Q J Econ. 2009;124(2):597–636. http://dx.doi.org/10.1162/qjec.2009.124.2.597.
48. Burns ME, Dague L, DeLeire T, et al. The effects of expanding public insurance to rural low-income childless adults. Health Serv Res. 2014;49(suppl 2):2173–2187. http://dx.doi.org/10.1111/1475-6773.12233.
49. Dague L. The effect of Medicaid premiums on enrollment: a regression discontinuity approach. J Health Econ. 2014;37:1–12. http://dx.doi.org/10.1016/j.jhealeco.2014.05.001.
50. Card D, Shore-Sheppard L. Using Discontinuous Eligibility Rules to Identify the Effects of the Federal Medicaid Expansions. Working Paper Series. Cambridge, MA: National Bureau of Economic Research; 2002.
51. Rosenbaum S, Sommers BD. Using Medicaid to buy private health insurance—the great new experiment? N Engl J Med. 2013;369(1):7–9. http://dx.doi.org/10.1056/NEJMp1304170.
52. Status of State Action on the Medicaid Expansion Decision. Kaiser Family Foundation. http://kff.org/health-reform/state-indicator/state-activity-around-expanding-medicaid-under-the-affordable-care-act/. Published 2015. Accessed February 15, 2016.
53. Sommers BD, Gunja MZ, Finegold K, Musco T. Changes in self-reported insurance coverage, access to care, and health under the Affordable Care Act. JAMA. 2015;314(4):366–374. http://dx.doi.org/10.1001/jama.2015.8421.