Infection Control and Hospital Epidemiology
April 2006, Vol. 27, No. 4

original article
Administrative Data Fail to Accurately Identify Cases of Healthcare-Associated Infection

Eileen R. Sherman, MS; Kateri H. Heydon, MS; Keith H. St. John, MS; Eva Teszner, BSN; Susan L. Rettig, BSN; Sharon K. Alexander, BSN; Theoklis Z. Zaoutis, MD, MSCE; Susan E. Coffin, MD, MPH
objective. Some policy makers have embraced public reporting of healthcare-associated infections (HAIs) as a strategy for improving patient safety and reducing healthcare costs. We compared the accuracy of 2 methods of identifying cases of HAI: review of administrative data and targeted active surveillance.

design, setting, and participants. A cross-sectional prospective study was performed during a 9-month period in 2004 at the Children’s Hospital of Philadelphia, a 418-bed academic pediatric hospital. “True HAI” cases were defined as those that met the definitions of the National Nosocomial Infections Surveillance System and that were detected by a trained infection control professional on review of the medical record. We examined the sensitivity and the positive and negative predictive values of identifying HAI cases by review of administrative data and by targeted active surveillance.

results. We found similar sensitivities for identification of HAI cases by review of administrative data (61%) and by targeted active surveillance (76%). However, the positive predictive value of identifying HAI cases by review of administrative data was poor (20%), whereas that of targeted active surveillance was 100%.

conclusions. The positive predictive value of identifying HAI cases by targeted active surveillance is very high. Additional investigation is needed to define the optimal detection method for institutions that provide HAI data for comparative analysis.

Infect Control Hosp Epidemiol 2006; 27:332-337
Public reporting of healthcare-associated infections (HAIs) has emerged as an important issue for those who provide, finance, and consume healthcare.1-7 HAIs are important because they are a serious patient safety issue,8 they have a demonstrated financial impact on the cost of healthcare,9,10 and employer-provided health insurance plans have been restructured so that many workers now bear a larger portion of the burden for their own healthcare expenses.11 In 2002, articles in several prominent newspapers drew public attention to the burden of HAIs.6,7 This focus was subsequently magnified in 2003 when a large consumer advocacy group, Consumers Union, began a campaign called “Stop Hospital Infections” to educate the public about the risk of nosocomial infections and to promote legislation mandating public reporting of HAIs.2 To date, 7 states have passed legislation requiring healthcare facilities to provide data on HAIs to a public agency for analysis, public disclosure, or both. Many other states are considering similar proposals. However, the optimal methods for collecting and analyzing interinstitutional data remain unclear.
Several states have used administrative claims data to provide the public with comparative data on selected healthcare outcomes.12,13 The use of administrative data to identify such outcomes has several advantages. First, an HAI surveillance system based on administrative data would permit the application of common definitions to similar data sets from nonaffiliated institutions. Second, a surveillance system built on administrative codes could minimize the ascertainment bias that might be introduced if participating institutions used diverse case-finding strategies. Finally, because some states currently use administrative data to track other healthcare outcomes,14,15 the use of existing data could expand the number of quality measures monitored without substantially increasing the workload of quality managers. Thus, the use of administrative data sets might provide state agencies with an efficient way of tracking institution-specific rates of HAIs. To our knowledge, no standardized method using administrative claims data to find cases of HAI has been validated. We evaluated the sensitivity and the positive and negative predictive values of 2 methods of identifying HAI cases: review of administrative data and targeted active surveillance.
Ms. Sherman, Mr. St. John, Ms. Teszner, Ms. Rettig, Ms. Alexander, and Dr. Coffin are from the Department of Infection Prevention and Control, and Ms. Heydon, Dr. Zaoutis, and Dr. Coffin are from the Division of Infectious Diseases, Department of Pediatrics, Children’s Hospital of Philadelphia, Philadelphia. Dr. Zaoutis and Dr. Coffin are also from the Department of Pediatrics, and Dr. Zaoutis is also from the Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania School of Medicine, Philadelphia. Received August 9, 2005; accepted December 8, 2005; electronically published March 29, 2006. © 2006 by The Society for Healthcare Epidemiology of America. All rights reserved. 0899-823X/2006/2704-0002$15.00.
table 1. Classification of Healthcare-Associated Infections (HAIs) Used in the Present Study

Classification | Identified by Review of Administrative Data | Identified by Targeted Active Surveillance | Definition
True HAI
  Concordant | Yes | Yes | HAI case identified by both methods
  Discordant
    Surveillance error | Yes | No | HAI case in a patient housed on a targeted unit but missed by active surveillance
    Not targeted | Yes | No | HAI case in a patient housed on a nontargeted unit
    Administrative data failure | No | Yes | HAI case identified by active surveillance but not by administrative data
Misclassified HAI
  Outpatient infection | Yes | No | Infection with documented onset before hospital admission
  No exposure | Yes | No | Laboratory-confirmed infection present, but the patient had no exposure to a surgical or invasive procedure or to a medical device
  No infection | Yes | No | Review of the medical record revealed no laboratory-confirmed infection (for CLABSI or UTI), no radiographic evidence of pneumonia (for VAP), or no clinical evidence of infection (for SSI)

note. CLABSI = central line–associated bloodstream infection; SSI = surgical site infection; UTI = catheter-associated urinary tract infection; VAP = ventilator-associated pneumonia.
methods

Design, Setting, and Participants

We conducted a retrospective, cross-sectional study at the Children’s Hospital of Philadelphia (CHOP), an academic, tertiary care pediatric hospital with 418 patient beds and approximately 22,000 annual hospital admissions. We evaluated the sensitivity and the positive and negative predictive values of identifying HAI cases by review of administrative claims data (defined as the presence on the hospital bill of an infection-specific discharge code from the International Classification of Diseases, 9th Revision, Clinical Modification [ICD-9]) and by performance of targeted active surveillance. All HAIs in patients discharged from January 1, 2004, through September 30, 2004, were eligible for inclusion. At the inception of this study, the Department of Infection Prevention and Control conducted active surveillance for device-related infections (ie, central line–associated bloodstream infection [CLABSI], catheter-associated urinary tract infection [UTI], or ventilator-associated pneumonia [VAP]) and surgical site infections (SSIs) on selected high-risk inpatient units, including the neonatal, pediatric, and cardiac intensive care units and the oncology unit. The Department of Infection Prevention and Control at CHOP is staffed by 3 certified infection control professionals (ICPs), 1 administrative director, and 1 medical director. This study was approved by the Institutional Review Board at CHOP.
Case Finding

We used 2 sources of data to identify HAI cases. First, we reviewed records from the Department of Infection Prevention and Control to identify cases detected by targeted active surveillance. Second, we searched hospital billing data to detect cases in patients with an infection-specific ICD-9 discharge code (Appendix, Table A1, available online). We used infection-specific codes selected by the Pennsylvania Health Care Cost Containment Council for HAI case-finding; this council is an independent, state-funded legislative body that mandated public reporting of HAIs by all acute-care hospitals in the state of Pennsylvania beginning January 1, 2004.

Study Definitions

We defined HAIs according to surveillance definitions from the Centers for Disease Control and Prevention (CDC) National Nosocomial Infections Surveillance System.16 HAIs detected by review of administrative data but not by targeted active surveillance were investigated by chart review. We used a series of categories to designate all cases of HAI (Table 1). First, all cases of HAI that were identified from any source of data were placed into 1 of 2 categories: “true HAIs” or “misclassified HAIs.” A “true HAI” was defined as an HAI that was identified from the medical record by a trained ICP. A “misclassified HAI” was a case of infection detected by review of administrative data but found not to be an HAI when the medical record was reviewed. We designated 2 additional categories for true HAIs: “true HAI, concordant,” for cases found by both case-finding strategies, and “true HAI, discordant,” for cases found by only 1 case-finding strategy.
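To make the billing-data screen described under Case Finding concrete, the sketch below flags admissions whose discharge codes include an infection-specific ICD-9 code. It is an illustration only: the code list is truncated and hypothetical (the study's actual codes appear in the online Appendix, Table A1), and the record layout is invented for the example.

```python
# Hedged sketch of an ICD-9-based billing screen. INFECTION_CODES is
# illustrative and incomplete; it is not the study's actual code list.
INFECTION_CODES = {
    "996.62",  # eg, infection due to vascular device (illustrative)
    "599.0",   # eg, urinary tract infection, site not specified (illustrative)
    "998.59",  # eg, other postoperative infection (illustrative)
}

def flag_admissions(billing_records):
    """Return admission IDs whose discharge codes include any
    infection-specific ICD-9 code."""
    flagged = []
    for record in billing_records:
        if INFECTION_CODES & set(record["icd9_codes"]):
            flagged.append(record["admission_id"])
    return flagged

# Example with invented records:
records = [
    {"admission_id": "A001", "icd9_codes": ["599.0", "V30.00"]},
    {"admission_id": "A002", "icd9_codes": ["486"]},
]
print(flag_admissions(records))  # ['A001']
```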
table 2. Classification of Healthcare-Associated Infections (HAIs) Found by Review of Administrative Data but Not by Targeted Active Surveillance

Values are no. (%) of cases reviewed, by HAI type.

Classification | CLABSI (n=24) | VAP (n=22) | SSI (n=32) | UTI (n=15) | Total (n=93)
True HAI | 2 (8) | 1 (4) | 3 (9) | 3 (20) | 9 (10)
  Not targeted | 1 | 1 | 2 | 3 | 7
  Surveillance error | 1 | 0 | 1 | 0 | 2
Misclassified HAI | 22 (92) | 21 (96) | 29 (91) | 12 (75) | 84 (90)
  Outpatient infection | 7 | 11 | 2 | 4 | 24
  No exposure | 9 | 0 | 18 | 2 | 29
  No infection | 6 | 10 | 9 | 6 | 31

note. Data are from a review of a random sample of 12% of medical records for which a case of HAI was identified by review of administrative data but not by targeted active surveillance. CLABSI = central line–associated bloodstream infection; SSI = surgical site infection; UTI = catheter-associated urinary tract infection; VAP = ventilator-associated pneumonia.
Cases designated as “true HAI, discordant” were further sorted into 3 categories: “surveillance error,” for HAI cases that occurred in a targeted unit but were not identified by active surveillance; “not targeted,” for HAI cases that occurred in a nontargeted unit; and “administrative data failure,” for HAI cases detected by targeted active surveillance but not by review of administrative data. We created 3 additional categories to describe misclassified HAIs. HAIs were classified as “outpatient infections” when they were documented to have existed before the patient was admitted to CHOP. For the purposes of this study, HAIs that occurred in another institution and were present when the patient was admitted to CHOP were also classified as “outpatient infections.” HAIs were classified as “no exposure” if a patient had a laboratory-confirmed infection but had not undergone either a surgical or an invasive medical procedure and had not received treatment with a medical device. HAIs were classified as “no infection” if the medical record contained no documentation of a laboratory-confirmed infection (for CLABSI or UTI), no radiographic evidence of pneumonia (for VAP), and no clinical evidence of an SSI.

Data Collection and Analysis

We investigated cases of HAI that were detected by review of the hospital billing data but not by active surveillance. We used ICD-9 codes to categorize the type of HAI involved (these codes are listed in the Appendix, Table A1, available online). To determine the causes of case misclassification, we drew random samples of approximately 12% of the discrepant records for each type of HAI (CLABSI, VAP, UTI, and SSI). A single trained ICP (E.R.S.) conducted detailed chart reviews using a structured data-collection instrument; the results were reviewed by the medical director (S.E.C.).
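The ~12% chart-review sample could be drawn as below. This is a minimal sketch under stated assumptions: the record identifiers and per-type counts are invented, and the paper does not describe how its random samples were actually generated.

```python
import random

random.seed(2004)  # fixed seed so the illustration is reproducible

# Hypothetical discrepant cases (billing-flagged, not found by surveillance),
# keyed by HAI type; values stand in for medical-record numbers.
discrepant = {
    "CLABSI": [f"MR{n:04d}" for n in range(200)],
    "VAP":    [f"MR{n:04d}" for n in range(200, 380)],
    "SSI":    [f"MR{n:04d}" for n in range(380, 650)],
    "UTI":    [f"MR{n:04d}" for n in range(650, 770)],
}

SAMPLE_FRACTION = 0.12  # approximately 12% of discrepant records per HAI type

# Draw an independent simple random sample within each HAI type.
chart_review_sample = {
    hai_type: random.sample(records, round(len(records) * SAMPLE_FRACTION))
    for hai_type, records in discrepant.items()
}

for hai_type, sample in chart_review_sample.items():
    print(f"{hai_type}: {len(sample)} charts selected for review")
```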
Statistical Analysis

We summarized the frequency of each type of HAI as identified by review of hospital billing data and by active surveillance. We summarized, by frequency and percent, the reasons for case misclassification. We calculated the sensitivity and the positive and negative predictive values of HAI case-finding by review of administrative data or by targeted active surveillance. For these calculations, the reference standard was defined as the number of HAI cases identified by targeted active surveillance plus the estimated number of cases missed by active surveillance. We obtained administrative data from the financial analysis department of the hospital to determine the total number of hospital admissions that occurred during the 9-month study period. All analyses were performed with SAS, version 9.1 (SAS Institute).
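The accuracy measures reduce to standard 2×2-table calculations. The helper below is an illustration in Python (the study itself used SAS 9.1); the function and variable names are ours, not the authors'.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and predictive values from a 2x2 table.

    tp: flagged by the method, HAI present per the reference standard
    fp: flagged by the method, HAI absent per the reference standard
    fn: not flagged, HAI present per the reference standard
    tn: not flagged, HAI absent per the reference standard
    """
    return {
        "sensitivity": tp / (tp + fn),  # share of true HAIs the method finds
        "ppv": tp / (tp + fp),          # share of flagged cases that are true HAIs
        "npv": tn / (tn + fn),          # share of unflagged cases truly without HAI
    }
```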
results

Concordance Between 2 Methods of Identifying HAI Cases

We assessed the concordance between cases of HAI identified by review of administrative data and those detected by targeted active surveillance. Review of billing data identified 943 hospital admissions with HAI; active surveillance detected 232 cases of HAI. Thus, the 2 methods together identified a total of 1,072 cases of HAI; however, only 178 of those cases of HAI were identified by both surveillance methods.

Reasons for Case Misclassification

To assess the ability of review of administrative data to identify cases of HAI that were not identified by active surveillance, we reviewed a random sample of 93 charts of patients with HAIs that were documented by administrative data but not found by active surveillance. Overall, review of medical records identified a true HAI in 10% of cases. However, 90% (84 of 93) of the discrepant cases reviewed were found to have been misclassified (Table 2). Within our sample of cases that underwent medical chart review, 75% of the UTIs, 91% of the SSIs, 92% of the CLABSIs, and 96% of the cases of VAP identified by hospital billing records were found to be misclassified.
table 3. Accuracy of Identification of Healthcare-Associated Infections (HAIs) by Review of Administrative Data or by Active Surveillance, Compared with the Number Identified by a Reference Standard

Variable | HAI Present per Reference Standard (a) (n=306) | HAI Absent per Reference Standard (a) (n=16,873) | Sensitivity (b), % | Positive Predictive Value (c), % | Negative Predictive Value (d), %
Cases identified by review of administrative data | | | 61 | 20 | 99
  HAI present (n=943) | 187 | 756 | | |
  HAI absent (n=16,236) | 119 | 16,117 | | |
Cases identified by targeted active surveillance | | | 76 | 100 | 99
  HAI present (n=232) | 232 | 0 | | |
  HAI absent (n=16,957) | 74 | 16,873 | | |

a. The reference standard was calculated as the number of cases found by targeted active surveillance plus the estimated number of cases missed by targeted active surveillance.
b. Sensitivity was calculated as the proportion of HAI cases identified by the reference standard that were also identified by the specified identification method (ie, by review of administrative data or by targeted active surveillance; for details, see Methods).
c. Positive predictive value was calculated as the probability that an HAI was present according to the reference standard, given that a case was identified by the specified identification method.
d. Negative predictive value was calculated as the probability that no HAI was present according to the reference standard, given that no case was identified by the specified identification method.
We also used chart review to assess the causes of case misclassification in billing data (Table 2). The most common reason for HAI misclassification was that no laboratory-confirmed infection was present (31 [37%] of 84 cases). In addition, many misclassified HAIs occurred in patients with no exposure to medical devices or surgical procedures (29 [35%] of 84 cases). Finally, hospital billing data misidentified many outpatient infections as HAIs (24 [29%] of 84 cases). Review of hospital billing data identified 2 true HAIs that were considered surveillance errors. All other cases of true HAI missed by targeted active surveillance occurred in units that were not targeted for surveillance.

Estimated Accuracy of Active Surveillance

We sought to estimate the accuracy of targeted active surveillance for identification of all HAIs in hospitalized children. To approximate the number of HAIs missed by targeted active surveillance, we extrapolated from our chart-review sample to all HAI cases found by review of administrative data alone. We estimated that 7.5% of HAIs found by review of hospital billing data but missed by active surveillance were probably true HAIs that occurred in units not targeted for surveillance. Similarly, we estimated that 2.2% of the HAIs found by review of hospital billing data but missed by active surveillance were probably true HAIs that were missed because of surveillance errors. Using these calculations, we determined that our targeted active surveillance program missed approximately 74 HAIs during the 9-month study period, a number equivalent to 103 HAIs missed per year. Thus, approximately 22 infections per year would be missed because of surveillance error and 81 infections per year would be missed because the patient was admitted to a unit not targeted for surveillance.
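The 9-month estimate can be reconstructed from the reported figures. The sketch below is our arithmetic, not the authors' published calculation: it applies the chart-review proportions (7 not-targeted and 2 surveillance-error cases among 93 reviewed, ie, 7.5% and 2.2%) to the 765 cases flagged by billing data alone (943 flagged minus 178 concordant).

```python
# Hedged reconstruction of the missed-HAI estimate; the paper reports the
# inputs and the ~74-case result but not the intermediate arithmetic.
flagged_by_billing = 943        # admissions flagged by administrative data
concordant = 178                # also detected by targeted active surveillance
admin_only = flagged_by_billing - concordant   # 765 discordant cases

reviewed = 93                   # discordant charts sampled for review
not_targeted = 7                # true HAIs on units without surveillance
surveillance_error = 2          # true HAIs missed on targeted units

est_not_targeted = admin_only * not_targeted / reviewed        # ~57.6
est_surv_error = admin_only * surveillance_error / reviewed    # ~16.5

print(f"Estimated HAIs missed in 9 months: "
      f"{est_not_targeted + est_surv_error:.0f}")              # ~74
```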
Sensitivity and Positive and Negative Predictive Values of the Two Methods

To evaluate the accuracy of identifying HAI cases by review of administrative data or by targeted active surveillance, we constructed a reference standard that counted the cases identified by active surveillance plus those estimated to have been missed by active surveillance. Compared with the values for the reference standard, review of billing data had a sensitivity of 61% for identifying a case of HAI, a positive predictive value of 20%, and a negative predictive value of 99%. In contrast, targeted active surveillance had a sensitivity of 76% for identifying a case of HAI, a positive predictive value of 100%, and a negative predictive value of 99% (Table 3).
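As a check, plugging the Table 3 counts into the diagnostic_accuracy sketch from the Statistical Analysis section (assumed to be in scope) reproduces the reported figures before rounding.

```python
# Cell counts from Table 3; columns give reference-standard status.
admin = diagnostic_accuracy(tp=187, fp=756, fn=119, tn=16_117)
surveillance = diagnostic_accuracy(tp=232, fp=0, fn=74, tn=16_873)

print(admin)         # sensitivity ~0.61, ppv ~0.20, npv ~0.99
print(surveillance)  # sensitivity ~0.76, ppv 1.00, npv ~0.996
```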
discussion

Our results show that the method of case-finding strongly influences both the number of HAIs found and the accuracy of the case-finding results. We found that targeted active surveillance carried out by trained ICPs found most of the HAIs that occurred in our institution. This program had a sensitivity of 76%, a positive predictive value of 100%, and a negative predictive value of 99%. In contrast, review of administrative data failed to provide accurate data on 4 of the most common HAIs. Most of the cases classified as HAIs by review of administrative data were misclassified. Although review of administrative data had a sensitivity of 61% compared with our reference standard, its positive predictive value for identifying cases of HAI was only 20%.
Like other investigators,17,18 we found that targeted active surveillance by trained ICPs using definitions developed by the CDC was both sensitive and accurate for identifying most cases of HAI. Emori and colleagues18 found that cases of HAI identified by trained ICPs were usually true HAIs. When cases of HAI identified by ICPs were subsequently reviewed by CDC epidemiologists, the positive predictive value differed by type of infection, ranging from 92% for UTIs to 72% for SSIs.

We found that most of the cases of HAI identified by review of hospital billing data were not true HAIs. Other investigators have also reported substantial inaccuracies in identification of cases of HAI by review of administrative data.13,17,19 There are several reasons why hospital billing data are a poor source of infection control information. First, billing codes are typically not assigned by clinicians; therefore, coders may not make use of all of the available clinical data that would be used to assess a possible HAI. In addition, the ICD-9 codes used to generate hospital bills were not designed to differentiate community-acquired infections from HAIs. Thus, the failure of this data source to identify HAIs accurately is not surprising.

Other unresolved issues surround the public reporting of HAIs. First, the appropriate scope of HAI surveillance for public reporting is unclear. The Healthcare Infection Control Practices Advisory Committee recommends that reported outcome measures be limited to those that can be accurately detected.3 In addition, the CDC and other expert groups have recommended that HAI surveillance be limited to high-risk patient populations.20,21 We found that a program of targeted active surveillance identified most HAIs that occurred in our institution and that the most common reason for failure of targeted active surveillance to identify cases of HAI was the location of patients on units not targeted for surveillance because of a low risk of infection. However, the 7 statewide surveillance programs enacted to date differ substantially in targeted patient populations and infection types.22

Another unresolved issue is whether and how institution-specific HAI data should be adjusted for risk.4 The National Nosocomial Infections Surveillance System has emphasized the need for appropriate risk adjustment of interhospital comparative data for specific patient populations.3,20

A final issue is whether public reporting of comparative healthcare outcomes data might have unintended negative consequences. This concern is especially great if the accuracy of the reported data is limited. Narins and colleagues12 found that, because of public reporting of mortality rates associated with treatment by specific physicians, interventional cardiologists in New York State were less likely to perform procedures on patients who might benefit from angioplasty. Other studies have also documented that quality report cards might prompt physicians to refuse to treat high-risk patients.14,15 Thus, the methods and the results of public reporting of healthcare outcomes data need careful study.

Our study had several limitations. First, we were unable to assess the economic consequences of the identified coding errors.
The assignment of infection codes for cases in which no infection was present might have inflated hospital costs. Alternatively, the use of infection codes in lieu of other, more accurate codes might have decreased costs. As payors increase their use of administrative data to assess the quality of care when negotiating hospital contracts, the appropriate and inappropriate use of infection-specific codes may have indirect consequences. Second, we assessed the accuracy of a group of infection-specific codes in identifying 4 types of HAI. Although this group of codes performed poorly as a system for identifying HAIs, some specific codes may have a high positive predictive value for HAIs. Third, we used chart review by an ICP as our reference standard when evaluating the accuracy of administrative data; however, interobserver variability has been observed among trained ICPs.23 Finally, we hypothesize that the accuracy of review of administrative data for identification of HAIs may be influenced by the complexity of patients’ illnesses. Because this study was conducted in a tertiary care children’s hospital, our findings may not be generalizable to all acute-care hospitals.

In summary, we found that the sensitivity of targeted active surveillance was similar to that of review of hospital billing data for identification of cases of HAI among hospitalized children. Importantly, however, review of administrative data misclassified many cases as HAIs. In contrast, targeted active surveillance demonstrated high positive and negative predictive values in identifying cases of HAI. Although administrative data can be easily accessed and analyzed, the usefulness of these data to hospital ICPs and state public-reporting programs is limited by the poor positive predictive value of data review for identifying cases of HAI. Additional study is needed to define the optimal methods of collecting and analyzing interinstitutional data.
Address reprint requests to Susan E. Coffin, MD, MPH, Department of Infection Prevention and Control, Children’s Hospital of Philadelphia, Philadelphia, PA 19104 ([email protected]).
references

1. McCaughey B. Coming clean. New York Times. June 6, 2005:A.
2. Consumers Union. Stop hospital infections. Available at: http://www.stophospitalinfections.org. Accessed August 6, 2005.
3. McKibben L, Horan T, Tokars J, et al. Guidance on public reporting of healthcare-associated infections: recommendations of the Healthcare Infection Control Practices Advisory Committee. Am J Infect Control 2005; 33:217-226.
4. Wong ES, Rupp ME, Mermel L, et al. Public disclosure of healthcare-associated infections: the role of the Society for Healthcare Epidemiology of America. Infect Control Hosp Epidemiol 2005; 26:210-212.
5. Weinstein RA, Siegel JD, Brennan PJ. Infection-control report cards—securing patient safety. N Engl J Med 2005; 353:225-227.
6. Berens M. Unhealthy hospitals. Chicago Tribune. July 21, 2002.
7. Fabregas L. Area hospitals reduce in-house infections. Pittsburgh Tribune-Review. July 24, 2002.
8. Burke JP. Infection control—a problem for patient safety. N Engl J Med 2003; 348:651-656.
9. Elward AM, Hollenbeak CS, Warren DK, Fraser VJ. Attributable cost of nosocomial primary bloodstream infection in pediatric intensive care unit patients. Pediatrics 2005; 115:868-872.
10. Roberts RR, Scott RD 2nd, Cordell R, et al. The use of economic modeling to determine the hospital costs associated with nosocomial infections. Clin Infect Dis 2003; 36:1424-1432.
11. Employer struggles with rising health costs: fewer benefits and employees must contribute more. Lancet 2003; 362:377.
12. Narins C, Dozier A, Ling F, Zareba W. The influence of public reporting of outcome data on medical decision making by physicians. Arch Intern Med 2005; 165:83-87.
13. Romano PS, Chan BK, Schembri ME, Rainwater JA. Can administrative data be used to compare postoperative complication rates across hospitals? Med Care 2002; 40:856-867.
14. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation 1996; 93:27-33.
15. Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. N Engl J Med 1996; 335:251-256.
16. Garner J, Jarvis W, Emori T, Horan T, Hughes J. CDC definitions for nosocomial infections. Am J Infect Control 1988; 16:128-140.
17. Wright SB, Huskins WC, Dokholyan RS, Goldmann DA, Platt R. Administrative databases provide inaccurate data for surveillance of long-term central venous catheter-associated infections. Infect Control Hosp Epidemiol 2003; 24:946-949.
18. Emori T, Edward J, Culver D, et al. Accuracy of reporting nosocomial infections in intensive-care-unit patients to the National Nosocomial Infections Surveillance System. Infect Control Hosp Epidemiol 1998; 19:308-316.
19. Sands KE, Yokoe DS, Hooper DC, et al. Detection of postoperative surgical-site infections: comparison of health plan-based surveillance with hospital-based programs. Infect Control Hosp Epidemiol 2003; 24:741-743.
20. Nosocomial infection rates for interhospital comparison: limitations and possible solutions. Infect Control Hosp Epidemiol 1991; 12:609-621.
21. Release of nosocomial infection data. APIC News 1998; 17:1-5.
22. Edmond M. Where the rubber hits the road: the healthcare epidemiologist as lobbyist. Society for Healthcare Epidemiology of America. Los Angeles, CA; 2005.
23. Gastmeier P, Kampf G, Hauer T, et al. Experience with two validation methods in a prevalence survey on nosocomial infections. Infect Control Hosp Epidemiol 1998; 19:668-673.