
Acute Infections in Primary Care: Accuracy of Electronic Diagnoses and Electronic Antibiotic Prescribing

Jeffrey A. Linder, MD, MPH; David W. Bates, MD, MSc; Deborah H. Williams, MHA; Meghan A. Connolly; Blackford Middleton, MD, MPH, MSc

Abstract

Objective: To maximize effectiveness, clinical decision-support systems must have access to accurate diagnostic and prescribing information. We measured the accuracy of electronic claims diagnoses and electronic antibiotic prescribing for acute respiratory infections (ARIs) and urinary tract infections (UTIs) in primary care.

Design: A retrospective, cross-sectional study of randomly selected visits to nine clinics in the Brigham and Women's Practice-Based Research Network between 2000 and 2003 with a principal claims diagnosis of an ARI or UTI (N = 827).

Measurements: We compared electronic billing diagnoses and electronic antibiotic prescribing to the gold standard of blinded chart review.

Results: Claims-derived, electronic ARI diagnoses had a sensitivity of 98%, specificity of 96%, and positive predictive value of 96%. Claims-derived, electronic UTI diagnoses had a sensitivity of 100%, specificity of 87%, and positive predictive value of 85%. According to the visit note, physicians prescribed antibiotics in 45% of ARI visits and 73% of UTI visits. Electronic antibiotic prescribing had a sensitivity of 43%, specificity of 93%, positive predictive value of 90%, and simple agreement of 64%. The sensitivity of electronic antibiotic prescribing increased over time, from 22% in 2000 to 58% in 2003 (p for trend < 0.0001).

Conclusion: Claims-derived, electronic diagnoses for ARIs and UTIs appear accurate. Although it is narrowing, a large gap persists between antibiotic prescribing documented in the visit note and the use of electronic antibiotic prescribing. Barriers to electronic antibiotic prescribing in primary care must be addressed to leverage the potential that computerized decision-support systems offer in reducing costs, improving quality, and improving patient safety.


J Am Med Inform Assoc. 2006;13:61–66. DOI 10.1197/jamia.M1780.

To determine the accuracy of electronic diagnoses and electronic antibiotic prescribing for acute infections in primary care, we performed a study of acute respiratory infection (ARI) and urinary tract infection (UTI) visits in a practice-based research network.

Affiliations of the authors: From the Division of General Medicine, Brigham and Women's Hospital (JAL, DWB, DHW, BM); Harvard Medical School (JAL, DWB, BM); Clinical Informatics Research and Development, Partners HealthCare System (BM); Yale School of Nursing and Yale University School of Medicine (MAC). This project was supported by grant number R03 HS014420 from the Agency for Healthcare Research and Quality. Dr. Linder is supported by a Career Development Award (K08 HS014563) from the Agency for Healthcare Research and Quality. We thank Joseph C. Chan for his editorial assistance in preparing the manuscript. Correspondence and reprints: Jeffrey A. Linder, MD, MPH, Division of General Medicine, Brigham and Women's Hospital, 1620 Tremont Street, BC-3-2X, Boston, MA 02120; e-mail: . Received for review: 12/20/04; accepted for publication: 09/21/05.

Background

Accurate electronic health information is important for patient care, clinical operations, quality improvement, and research efforts. Increasingly, clinical decision support systems are being designed to assist clinicians in providing high-quality care, but such systems are also dependent on accurate electronic information. Electronic diagnoses and electronic medication information have been shown to be generally accurate for many chronic conditions.1,2 However, less is known about the accuracy of electronic health information for acute conditions in primary care.

Two of the most common acute problems in primary care are ARIs and UTIs. ARIs, including nonspecific upper respiratory tract infections, otitis media, sinusitis, pharyngitis, acute bronchitis, influenza, and pneumonia, are the most common symptomatic reason for seeking care in the United States, accounting for 7% of all ambulatory visits.3 ARIs are the number one reason for antibiotic prescribing in the United States and account for about 50% of all antibiotic prescriptions to adults.4 Although antibiotic prescribing for ARIs has decreased,4 much antibiotic prescribing for ARIs remains inappropriate. Inappropriate antibiotic prescribing does not improve outcomes, but exposes individual patients to the risk of adverse drug events,5-11 increases the prevalence of antibiotic-resistant bacteria,12-14 and increases costs. The total cost, direct and indirect, of ARIs is at least $40 billion per year, with unnecessary antibiotics accounting for over $1.1 billion of the total.15

UTIs represent another common condition in primary care for which physicians commonly prescribe antibiotics. In contrast to ARIs, antibiotics are generally indicated in the treatment of UTIs, and physicians prescribe antibiotics to 60% to 75% of patients diagnosed with UTIs.4,16

However, there has been concern about the type of antibiotic chosen for UTIs, with clinicians increasingly selecting broader spectrum antibiotics. Broader spectrum antibiotics made up 20% of antibiotic prescriptions for UTIs in 1991-1992 and over 40% of antibiotic prescriptions in 1998-1999.4

Electronic health records with integrated clinical decision support systems have the potential to improve antibiotic prescribing in primary care for ARIs and UTIs in a cost-effective, sustainable way.17,18 However, to be effective, clinical decision support systems for ARIs and UTIs must have access to accurate diagnostic and prescribing information. We performed a cross-sectional study from 2000 to 2003 to measure the accuracy of electronic ARI and UTI diagnoses and to measure the accuracy of electronic antibiotic prescribing in a primary care practice-based research network.19

Hypotheses

We had two main hypotheses: (1) that claims-derived, electronic ARI and UTI diagnoses would be generally accurate and (2) that the accuracy of electronic antibiotic prescribing in primary care would be lower than previously shown for long-term medications.

Methods

Setting

The Brigham and Women's Primary Care (BWPC) Practice-Based Research Network (PBRN) includes nine primary care clinics in the greater Boston area. The BWPC clinics have approximately 95 practicing attending physicians, as well as internal medicine residents who provide longitudinal and urgent care. The BWPC clinics include two community health centers, four hospital-based clinics, and three community-based clinics. In 2002, the BWPC clinics provided primary care for over 72,000 adults and children and had over 230,000 patient visits.

Longitudinal Medical Record

The BWPC-PBRN is linked with a common Web-based electronic health record, the Longitudinal Medical Record (LMR). Data contained in the LMR constitute the official patient record for the BWPC-PBRN. The LMR is an internally developed, fully functioning electronic record including typed and dictated notes from primary care and subspecialty clinics; International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)-coded problem lists; medication lists; coded allergies; and laboratory test and radiographic results. Clinicians directly type 65% of BWPC-PBRN notes. Some clinicians have developed personal templates for various problems, but there are no systemwide templates.

BWPC clinicians use the LMR to write prescriptions that can be printed or transmitted to pharmacies electronically. The medication list in the LMR is meant to be both descriptive (a list of all medications the patient is taking and has taken) and prescriptive (new prescriptions are to be written using the LMR). Physicians are strongly encouraged to use the electronic prescribing functionality of the LMR for all prescriptions. To prescribe a medication using the LMR, clinicians select "medications" from a drop-down menu within a patient's electronic chart; type in the first few letters of the medication to be prescribed; select the medication from a list; complete a "prescription pad" screen with dose, frequency, number of pills to be dispensed, and number of refills; and electronically "sign" the prescription, which results in the prescription being printed in the examination room.
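
The "prescription pad" workflow described above maps onto a small structured record. The following is a minimal, illustrative sketch only; the field names are assumptions and do not reflect the LMR's actual data model.

```python
from dataclasses import dataclass


@dataclass
class PrescriptionPadEntry:
    """Fields a clinician completes on the LMR 'prescription pad' screen.

    Illustrative names only; the paper does not describe the LMR's schema.
    """
    patient_id: str
    medication: str       # selected from a pick list after typing the first few letters
    dose: str             # e.g., "500 mg"
    frequency: str        # e.g., "twice daily"
    quantity: int         # number of pills to be dispensed
    refills: int
    signed: bool = False  # electronic "signature"; signing prints the prescription in the exam room
```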

The LMR prescribing module has nine of 14 functional capabilities recently identified in a conceptual model of electronic prescribing: patient selection, medication selection menus, safety alerts, formulary alerts, dosage calculation, medication administration aids, patient education material, data transmission, and alerts for patients' failure to refill.20 The LMR prescribing module lacks the other five functional capabilities: diagnosis selection, in-office dispensing, refill and renewal reminders, corollary orders, and automated questionnaires. During the study period, enhancements to the LMR prescribing module included changes to make it easier to prescribe multiple medications, to select the route of administration, and to print or fax prescriptions, as well as improvements in drug-allergy, drug-pregnancy, drug-drug, and drug-laboratory warnings. Computers running the LMR are available in most examination rooms. The LMR was introduced in eight BWPC clinics in July 2000 and has been in use in all the BWPC clinics since June 2001. The LMR has a downtime of 0.7%.

Data Sources

Partners HealthCare, of which Brigham and Women's is a part, maintains the Research Patient Data Repository (RPDR), which pools inpatient and outpatient encounter data from all Partners HealthCare sites.21 The RPDR identifies claims diagnoses by ICD-9-CM codes and includes information about visit dates, site of care, visit notes, and patient demographics. Diagnosis codes are generated using clinic-specific "superbills," with evaluation and management codes on the front and ICD-9 codes on the back. Clinicians select the appropriate diagnosis code, which is later entered into the system by administrative staff. Diagnosis codes are not derived from the LMR but are electronically available. We linked data derived from the RPDR to data from the LMR, including medication prescribing and provider characteristics.
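
To make the linkage concrete, here is a minimal sketch of joining RPDR encounter data to LMR prescribing data. The column names and the use of pandas are assumptions for illustration; the paper does not describe the actual linkage keys or tooling.

```python
import pandas as pd

# Hypothetical extracts: claims encounters from the RPDR and prescriptions from the LMR.
rpdr_visits = pd.DataFrame({
    "patient_id": [101, 102],
    "visit_date": pd.to_datetime(["2002-03-01", "2002-04-15"]),
    "principal_icd9": ["466.0", "599.0"],
})
lmr_prescriptions = pd.DataFrame({
    "patient_id": [101, 102],
    "rx_date": pd.to_datetime(["2002-03-01", "2002-05-20"]),
    "medication": ["azithromycin", "ciprofloxacin"],
})

# Link each claims-derived encounter to that patient's LMR prescriptions.
linked = rpdr_visits.merge(lmr_prescriptions, on="patient_id", how="left")
print(linked)
```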

Data Extraction

We identified visits made to a BWPC clinic with a principal diagnosis of an ARI or UTI between January 1, 2000, and November 13, 2003, using the RPDR. ARI diagnoses included nonspecific URIs (ICD-9-CM 460, 464, and 465), otitis media (ICD-9-CM 381 and 382), sinusitis (ICD-9-CM 461 and 473), pharyngitis (ICD-9-CM 034.0, 462, and 463), acute bronchitis (ICD-9-CM 466 and 490), pneumonia (ICD-9-CM 481-486), and influenza (ICD-9-CM 487). UTI diagnoses included cystitis (ICD-9-CM 595) and UTI, site not specified (ICD-9-CM 599.0). From this pool of encounters, we randomly selected 1,000 visits, stratified by calendar year and ARI or UTI diagnosis. Because the LMR was not introduced in some clinics until after the start of the study period, we excluded visits for which the LMR was not yet in use. We only included visits for which we could find the corresponding visit note in the LMR.

Although ARIs seem heterogeneous, it is useful to consider them as a group because of significant overlap in pathophysiology, signs, and symptoms. In addition, clinicians may be inclined to use "diagnosis shifting" if they are aware that antibiotic prescribing practices are being monitored.22


For example, when treating a patient with sinus congestion but prescribing antibiotics, a clinician might be more inclined to diagnose the patient with sinusitis, an antibiotic-appropriate diagnosis, instead of nonspecific URI, an antibiotic-inappropriate diagnosis.
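
As a concrete illustration of the visit-selection step, the sketch below tags a principal claims diagnosis as ARI or UTI using the ICD-9-CM groupings listed above. The code ranges come from the text; the function and record handling are hypothetical.

```python
from typing import Optional

# ICD-9-CM prefixes from the Data Extraction section.
ARI_CODES = ("460", "464", "465",                        # nonspecific URI
             "381", "382",                               # otitis media
             "461", "473",                               # sinusitis
             "034.0", "462", "463",                      # pharyngitis
             "466", "490",                               # acute bronchitis
             "481", "482", "483", "484", "485", "486",   # pneumonia
             "487")                                      # influenza
UTI_CODES = ("595",                                      # cystitis
             "599.0")                                    # UTI, site not specified


def classify_principal_diagnosis(icd9: str) -> Optional[str]:
    """Return 'ARI', 'UTI', or None for a principal claims diagnosis code."""
    if icd9.startswith(ARI_CODES):
        return "ARI"
    if icd9.startswith(UTI_CODES):
        return "UTI"
    return None


assert classify_principal_diagnosis("466.0") == "ARI"
assert classify_principal_diagnosis("599.0") == "UTI"
```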

Data Analysis

One author reviewed the visit notes blinded to the encounter diagnosis and electronic antibiotic prescribing. From the notes, we extracted the primary diagnosis responsible for the visit into nine possible categories (nonspecific URI, otitis media, sinusitis, pharyngitis, bronchitis, pneumonia, influenza, UTI, or other diagnosis) and whether an antibiotic was prescribed. We used the first listed diagnosis in the "assessment and plan" as the primary diagnosis. In our definition of antibiotic prescribing, we included "delayed prescriptions," in which the physician instructed the patient to fill the prescription only if symptoms failed to resolve after a specified amount of time. For quality control, we randomly selected 100 visits for duplicate chart abstraction of the diagnosis and antibiotic prescribing by a second author. For the nine potential diagnoses, interobserver agreement was 94%; for antibiotic prescribing, interobserver agreement was 99%. We did not assess the appropriateness of antibiotic prescribing.

We considered data abstracted from the visit notes to be the gold standard. We considered any electronic antibiotic prescription within 30 days of the index visit to be attributable to that visit because (1) documentation can occur after the visit date, (2) prescribing can occur subsequent to the visit (e.g., as a result of telephone follow-up documented in an addendum to the original visit note with the original visit date), and (3) this maximizes the apparent sensitivity of electronic antibiotic prescribing. To assess the accuracy of electronic ARI and UTI diagnoses, we calculated sensitivity, specificity, and positive predictive value.23 To assess the accuracy of electronic antibiotic prescribing, we calculated sensitivity, specificity, positive predictive value, and simple agreement. We used sensitivity as a measure of the completeness and positive predictive value as a measure of the correctness of the electronic diagnoses and electronic antibiotic prescribing.1,24,25 We also examined how the antibiotic prescribing rate changed over time according to electronic antibiotic prescribing and according to the visit note.
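
A minimal sketch of the 30-day attribution rule described above, with hypothetical function and variable names (the paper does not describe its matching code):

```python
from datetime import date, timedelta

ATTRIBUTION_WINDOW = timedelta(days=30)


def attributable_to_visit(visit_date: date, rx_date: date) -> bool:
    """True if an electronic prescription is attributed to the index visit.

    Implements the 30-day rule described above; whether prescriptions dated
    before the visit also counted is not specified, so this sketch counts
    only the visit date and the 30 days after it.
    """
    return visit_date <= rx_date <= visit_date + ATTRIBUTION_WINDOW


# Example: a prescription entered three days after the visit (e.g., after a
# telephone follow-up) is still counted as electronic prescribing for that visit.
assert attributable_to_visit(date(2002, 3, 1), date(2002, 3, 4))
```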

Statistical Analysis

For sample characteristics, we used standard descriptive statistics. Because we hypothesized increasing adoption of electronic antibiotic prescribing, we used a linear trend test to assess changes in electronic antibiotic prescribing over time. To examine interclinician and interclinic effects for statistically significant trends over time, we evaluated models that adjusted for clustering by clinic and by provider using generalized estimating equations.26 We performed all statistical analyses using SAS version 8.02 (SAS Institute, Cary, NC). P values less than 0.05 were considered significant. The Institutional Review Board of Brigham and Women's Hospital approved the study protocol.
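
The published analyses were run in SAS. As a rough analogue for readers working in Python, the sketch below fits a logistic trend model and a clinic-clustered GEE model using statsmodels; the data and variable names are hypothetical, and this approximates rather than reproduces the published models.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical visit-level data; year is coded 0-3 for 2000-2003 and clinic is
# the clustering unit. A real analysis would use one row per sampled visit.
df = pd.DataFrame({
    "e_rx":   [0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
    "year":   [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3],
    "clinic": ["A", "B", "C"] * 4,
})

# Trend in electronic antibiotic prescribing over time (linear term for year).
trend = smf.logit("e_rx ~ year", data=df).fit(disp=False)

# The same trend adjusted for clustering by clinic via generalized estimating equations.
gee = smf.gee("e_rx ~ year", groups="clinic", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(trend.params, gee.params)
```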

Results

Sample Derivation and Characteristics

During the study period, we identified 65,285 visits with a primary diagnosis of an ARI or UTI. From these, we randomly selected 1,000 visits, stratified by calendar year and ARI or UTI diagnosis (Fig. 1). We excluded visits for which the LMR was not in use early in the study period (n = 102) and duplicate encounters (n = 3). We were unable to locate visit notes for 68 encounters and excluded these encounters from the analysis. Women (81%) with a diagnosis of UTI (78%) accounted for most of these missing encounters. This left 827 visits in the final sample for analysis.

Figure 1. Visit flow. ARI = acute respiratory infection; UTI = urinary tract infection; EHR = electronic health record. The random selection was stratified by calendar year and by acute respiratory infection visit or urinary tract infection visit. Of the encounters for which there were no corresponding notes, 55 (81%) were by women and 53 (78%) were for urinary tract infection.

The sample of 827 visits was 79% women and 7% younger than 18 years old, with a mean patient age of 37 years (Table 1). The race/ethnicity of the sample was 43% white, 29% Hispanic, 10% black, and 18% other. Eighty percent of the sample patients spoke English and 16% spoke Spanish as their primary language. Most patients had insurance through a health maintenance organization, private insurance, or Medicaid or the Massachusetts Uncompensated Care Pool System ("free care"). Reflecting the stratified nature of the sampling, 51% of the visits had a primary diagnosis of an ARI and 49% had a primary diagnosis of UTI.

Table 1. Demographic Characteristics (N = 827)

Characteristic                    No.        %
Age, mean yr (SD)                 37 (21)
Women                             654        79
Race and ethnicity
  White                           356        43
  Hispanic                        241        29
  Black                           84         10
  Other                           146        18
Insurance
  HMO                             215        26
  Private                         171        21
  Medicaid or "free care"*        126        15
  Self-pay                        101        12
  Medicare                        59         7
  Other or missing                155        19

HMO = Health Maintenance Organization.
*"Free care" is the Massachusetts Division of Healthcare Finance and Policy Uncompensated Care Pool for low-income uninsured or underinsured residents.

Validity of Electronic Diagnoses

The sensitivity of ARI billing diagnoses ranged from 65% for nonspecific URI to 93% for pharyngitis (Table 2).


Table 2. Diagnostic Accuracy of Encounter Data Compared to the Visit Note for Acute Respiratory Infection and Urinary Tract Infection Visits (N = 827)

Diagnosis                       No.    %     Sensitivity (%)   Specificity (%)   Positive Predictive Value (%)
Nonspecific URI                 109    13    65                98                86
Otitis media                    55     7     89                99                87
Sinusitis                       38     5     86                99                82
Pharyngitis                     132    16    93                96                77
Bronchitis                      54     7     88                98                67
Pneumonia                       23     3     83                100               87
Influenza                       10     1     67                99                20
Acute respiratory infections    421    51    98                96                96
Urinary tract infections        406    49    100               87                85
Total                           827    100

URI = upper respiratory tract infection.

The specificity of ARI diagnoses ranged from 96% for pharyngitis to 100% for pneumonia. The positive predictive value of ARI diagnoses ranged from 20% for influenza to 87% for otitis media and pneumonia. As a group, the ARI diagnoses had a sensitivity of 98%, specificity of 96%, and positive predictive value of 96%. An electronic diagnosis of UTI had a sensitivity of 100%, specificity of 87%, and positive predictive value of 85%.

Electronic Antibiotic Prescribing

According to the visit note, clinicians prescribed antibiotics in 59% of all visits: 45% of ARI visits and 73% of UTI visits. Compared to this gold standard, electronic antibiotic prescribing had a sensitivity of 43%, specificity of 93%, and a positive predictive value of 90% (Table 3). Simple agreement between electronic antibiotic prescribing and antibiotic prescribing according to the visit note was 64%. Among the nine clinics, the number of visits varied from 16 to 210, and agreement between electronic antibiotic prescribing and antibiotic prescribing according to the visit note varied significantly, ranging from 25% to 79% (p < 0.0001).

The sensitivity of electronic antibiotic prescribing increased from 22% in 2000 to 58% in 2003 (p < 0.0001; Fig. 2). The simple agreement between electronic antibiotic prescribing and the visit notes increased from 51% in 2000 to 73% in 2003. Adjustment for clustering by clinic or by clinician did not change these results.

Figure 2. Accuracy over time of electronic antibiotic prescribing compared to the visit note (N = 827). For agreement, p for trend over time < 0.0001. With adjustment for clustering by physician, p = 0.001. With adjustment for clustering by clinic, p = 0.002.

For ARI visits, according to the visit note, there was no significant change in antibiotic prescribing over time (p = 0.99; Fig. 3). Electronic antibiotic prescribing increased significantly over time for ARI visits (from 15% in 2000 to 25% in 2003; p = 0.03), but this became nonsignificant after adjusting for clustering by either clinic or clinician. The sensitivity of electronic antibiotic prescribing for ARIs increased from 26% in 2000 to 54% in 2003.

For UTI visits, according to the visit note, there was no significant change in antibiotic prescribing over time (p = 0.58; Fig. 4). Electronic antibiotic prescribing increased significantly over time for UTI visits (from 16% in 2000 to 48% in 2003; p < 0.0001). Adjustment for clustering by clinic or clinician did not change the significance of these results. The sensitivity of electronic antibiotic prescribing for UTIs increased from 20% in 2000 to 60% in 2003.

Table 3. Antibiotic Prescribing for Acute Respiratory Infections and Urinary Tract Infections According to Electronic Antibiotic Prescribing and the Visit Note*

Electronic Prescribing       Visit Note: Antibiotic        Visit Note: Antibiotic           Row Totals,
                             Prescribed, No. (Column %)    Not Prescribed, No. (Column %)   No. (%)
Antibiotic prescribed        209 (43)                      23 (7)                           232 (28)
Antibiotic not prescribed    276 (57)                      319 (93)                         595 (72)
Column totals, No. (%)       485 (59)                      342 (41)                         827 (100)

*Positive predictive value, 90%; simple agreement, 64%.
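
The summary measures in the footnote follow directly from the cell counts above; a quick check (counts from Table 3):

```python
# Cell counts from Table 3 (electronic prescribing vs. the visit note).
tp = 209  # electronic prescription; note documents an antibiotic
fp = 23   # electronic prescription; note documents no antibiotic
fn = 276  # no electronic prescription; note documents an antibiotic
tn = 319  # no electronic prescription; note documents no antibiotic

sensitivity = tp / (tp + fn)                  # 209/485, about 0.43
specificity = tn / (tn + fp)                  # 319/342, about 0.93
ppv = tp / (tp + fp)                          # 209/232, about 0.90
agreement = (tp + tn) / (tp + fp + fn + tn)   # 528/827, about 0.64
```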

Figure 3. Antibiotic prescribing for acute respiratory infections in primary care over time according to the visit note or to electronic prescribing (n = 421). For visit note, p for trend = 0.99. For electronic prescribing, p for trend = 0.03 (adjusted for clustering by physician, p = 0.23; adjusted for clustering by clinic, p = 0.18).


Figure 4. Antibiotic prescribing for urinary tract infections in primary care over time according to the visit note or to electronic prescribing (n = 406). For visit note, p for trend = 0.58. For electronic prescribing, p for trend < 0.0001 (adjusted for clustering by physician, p < 0.0001; adjusted for clustering by clinic, p = 0.0004).

Discussion

Clinical decision-support systems, to be effective, must have access to accurate diagnostic and prescribing information. We found that electronic, claims-derived diagnosis codes have good accuracy for identifying ARI and UTI visits in our practice-based research network. ARIs as a group would be expected to have better accuracy than individual ARI diagnoses: a wider diagnostic definition leads to increased sensitivity and positive predictive value and decreased specificity. These findings compare well to other studies that examined the accuracy of electronic diagnoses, mostly chronic, that had sensitivities from 40% to 100%, specificities from 91% to 100%, and positive predictive values from 85% to 100%.2,23,25,27-29

In contrast, we identified poor accuracy in electronic antibiotic prescribing for ARIs and UTIs. The specificity (93%) and positive predictive value (90%) of electronic antibiotic prescribing were good. However, the sensitivity was low (43% overall), reflecting a gap between note-documented antibiotic prescribing and electronic antibiotic prescribing. Although there was substantial improvement over time, perhaps because of increased clinician familiarity, improvements in the prescribing module, or encouragement from leadership, at the end of the study period the sensitivity of electronic antibiotic prescribing was still only 58%.

Hogan and Wagner1 performed a systematic review of the accuracy of data in electronic health records and found that medications had a sensitivity (completeness) of between 93% and 100% and a positive predictive value (correctness) of 83%. Thiru et al.2 found that electronic prescribing information in primary care had a sensitivity of between 93% and 100% and a positive predictive value of 100%. A more recent study of the accuracy of medication lists for older Veterans Affairs patients found a sensitivity of 75% and a positive predictive value of 87%.30 These studies generally examined chronic medications and medication lists that were "descriptive" rather than "prescriptive," as ours is. Future work should examine whether there are differences in the use of electronic prescribing between acute and chronic medications within a single network or health system.

Understanding the accuracy of electronic data is important for patient care.31,32 The low sensitivity of electronic antibiotic prescribing represents a safety problem.20 Electronic antibiotic prescribing avoids errors associated with handwritten prescriptions and provides medication interaction checking, allergy checking, medical problem interaction checking, laboratory checking, and prospective monitoring for potential adverse drug events.33-35 The failure of clinicians to use electronic prescribing also limits the potential benefits of clinical decision support; clinicians are not using a key "effector arm" of clinical decision support.36 For example, clinicians not using electronic prescribing will not interact with clinical decision support that recommends penicillin as the antibiotic of choice for group A beta-hemolytic streptococcal pharyngitis or that recommends not prescribing antibiotics for acute bronchitis.37,38

Understanding the accuracy of electronic data is also important for quality improvement and research purposes.31,39 The apparent antibiotic prescribing rate differs depending on whether one examines chart-documented antibiotic prescribing or electronic antibiotic prescribing. In an analysis of the appropriateness of antibiotic prescribing for ARIs and UTIs, electronic antibiotic prescribing would presumably appear "better," with a lower antibiotic prescribing rate; however, this would simply be an artifact of data quality. Similarly, for intervention studies that seek to reduce antibiotic prescribing for ARIs and UTIs, the more easily accessible electronic antibiotic prescribing rate could be misleading: the baseline rate would be too low, and the apparent effectiveness of the intervention would be blunted by an artifactual "floor effect."

This study has limitations that should be considered. First, the study was performed on a sample of visits with a primary billing diagnosis of an ARI or UTI, which constrains our ability to comment on the sensitivity and specificity of the diagnoses. Use of a data set with broader inclusion criteria would likely lead to decreased specificity but presumably increased sensitivity. In addition, the claims diagnoses that we evaluated are used primarily for administrative purposes, not clinical ones. However, claims diagnoses are electronically available and frequently used for quality improvement, profiling, and clinical operations. We are presently implementing an "end-of-visit" system in which the clinician enters the diagnostic code prospectively within the LMR. Second, the visit notes are an imperfect gold standard. Even though the physician documented an antibiotic prescription in the visit note, we did not validate that the physician actually wrote or called in the prescription, that the patient had it filled at a pharmacy, or that the patient took the antibiotic. While the visit note is only a proxy for "the truth," it is an accessible, economical measure of what the clinician intended to do. Similarly, we did not assess whether the documentation supported the diagnosis. Third, we were unable to locate the visit note for some of our randomly selected visits. Most of the missing visits were for women with a diagnosis of UTI and may represent women coming to the clinic only to have a urinalysis performed. Considering that failure of documentation is a marker of poor quality, exclusion of these visits would probably bias the results toward higher sensitivity. Fourth, including electronic antibiotic prescriptions within 30 days of the index visit likely inflated the sensitivity and decreased the specificity of electronic antibiotic prescribing. Despite these last two limitations, we still found a disturbingly low sensitivity for electronic antibiotic prescribing. Finally, this analysis was performed in an urban and suburban PBRN with a single electronic health record and focused only on the prescribing of one class of medication for two acute conditions. Although these results may not generalize to other settings, electronic health records, conditions, and medications, they demonstrate that clinicians, researchers, and clinical leaders need to understand the accuracy of the electronic information that they are using.


Conclusion

For clinical decision-support systems to be effective in improving care for patients with acute problems, the systems need access to accurate diagnoses and must engage clinicians at the time of prescribing. We found that claims-derived, electronic diagnoses for ARIs and UTIs appear accurate. While the specificity of electronic antibiotic prescribing was good, the sensitivity was poor. Although there is a trend toward improvement, a large gap persists between antibiotic prescribing documented in the visit notes and the use of electronic antibiotic prescribing. Barriers to electronic antibiotic prescribing in primary care must be addressed to leverage the potential that computerized decision-support systems offer in reducing costs, improving quality, and improving patient safety.

References


1. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc. 1997;4:342-55.
2. Thiru K, Hassey A, Sullivan F. Systematic review of scope and quality of electronic patient record data in primary care. BMJ. 2003;326:1070-2.
3. Woodwell DA, Cherry DK. National Ambulatory Medical Care Survey: 2002 Summary, No. 346. In: Advance data from vital and health statistics. Hyattsville, MD: National Center for Health Statistics, 2004.
4. Steinman MA, Gonzales R, Linder JA, Landefeld CS. Changing use of antibiotics in community-based outpatient practice, 1991-1999. Ann Intern Med. 2003;138:525-33.
5. Arroll B, Kenealy T. Antibiotics for the common cold. Cochrane Database Syst Rev. 2003;1:1.
6. Lau J, Zucker D, Engels EA, et al. Diagnosis and treatment of acute bacterial rhinosinusitis. Evidence report/technology assessment no. 9. Rockville, MD: Agency for Health Care Policy and Research, 1999.
7. Gandhi TK, Weingart SN, Borus J, et al. Adverse drug events in ambulatory care. N Engl J Med. 2003;348:1556-64.
8. Linder JA, Sim I. Antibiotic treatment of acute bronchitis in smokers: a systematic review. J Gen Intern Med. 2002;17:230-4.
9. Neugut AI, Ghatak AT, Miller RL. Anaphylaxis in the United States: an investigation into its epidemiology. Arch Intern Med. 2001;161:15-21.
10. Gandhi TK, Burstin HR, Cook EF, et al. Drug complications in outpatients. J Gen Intern Med. 2000;15:149-54.
11. Gurwitz JH, Field TS, Harrold LR, et al. Incidence and preventability of adverse drug events among older persons in the ambulatory setting. JAMA. 2003;289:1107-16.
12. Cohen ML. Epidemiology of drug resistance: implications for a post-antimicrobial era. Science. 1992;257:1050-5.
13. Seppala H, Klaukka T, Vuopio-Varkila J, et al. The effect of changes in the consumption of macrolide antibiotics on erythromycin resistance in group A streptococci in Finland. Finnish Study Group for Antimicrobial Resistance. N Engl J Med. 1997;337:441-6.
14. Chen DK, McGeer A, de Azavedo JC, Low DE. Decreased susceptibility of Streptococcus pneumoniae to fluoroquinolones in Canada. Canadian Bacterial Surveillance Network. N Engl J Med. 1999;341:233-9.
15. Fendrick AM, Monto AS, Nightengale B, Sarnes M. The economic burden of non-influenza-related viral respiratory tract infection in the United States. Arch Intern Med. 2003;163:487-94.
16. Huang ES, Stafford RS. National patterns in the treatment of urinary tract infections in women by ambulatory care physicians. Arch Intern Med. 2002;162:41-7.
17. Sim I, Gorman P, Greenes RA, et al. Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc. 2001;8:527-34.
18. Wang SJ, Middleton B, Prosser LA, et al. A cost-benefit analysis of electronic medical records in primary care. Am J Med. 2003;114:397-403.
19. Lindbloom EJ, Ewigman BG, Hickner JM. Practice-based research networks: the laboratories of primary care research. Med Care. 2004;42(4 Suppl):III45-9.
20. Bell DS, Cretin S, Marken RS, Landman AB. A conceptual framework for evaluating outpatient electronic prescribing systems based on their functional capabilities. J Am Med Inform Assoc. 2004;11:60-70.
21. Murphy SN, Chueh HC. A security architecture for query tools used to access large biomedical databases. AMIA Annu Symp Proc. 2002:552-6.
22. Linder JA, Singer DE. Desire for antibiotics and antibiotic prescribing for adults with upper respiratory tract infections. J Gen Intern Med. 2003;18:795-801.
23. Maselli JH, Gonzales R. Measuring antibiotic prescribing practices among ambulatory physicians: accuracy of administrative claims data. J Clin Epidemiol. 2001;54:196-201.
24. Komaroff AL. The variability and inaccuracy of medical data. Proc IEEE. 1979;67:1196-207.
25. Hassey A, Gerrett D, Wilson A. A survey of validity and utility of electronic patient records in a general practice. BMJ. 2001;322:1401-5.
26. Zeger SL, Liang KY. Longitudinal data analysis for discrete and continuous outcomes. Biometrics. 1986;42:121-30.
27. Szeto HC, Coleman RK, Gholami P, Hoffman BB, Goldstein MK. Accuracy of computerized outpatient diagnoses in a Veterans Affairs general medicine clinic. Am J Manag Care. 2002;8:37-43.
28. Woods CR. Impact of different definitions on estimates of accuracy of the diagnosis data in a clinical database. J Clin Epidemiol. 2001;54:782-8.
29. Peabody JW, Luck J, Jain S, Bertenthal D, Glassman P. Assessing the accuracy of administrative data in health information systems. Med Care. 2004;42:1066-72.
30. Kaboli PJ, McClimon BJ, Hoth AB, Barnett MJ. Assessing the accuracy of computerized medication histories. Am J Manag Care. 2004;10:872-7.
31. Brennan PF, Stead WW. Assessing data quality: from concordance, through correctness and completeness, to valid manipulatable representations. J Am Med Inform Assoc. 2000;7:106-7.
32. Stein HD, Nadkarni P, Erdos J, Miller PL. Exploring the degree of concordance of coded and textual data in answering clinical queries from a clinical data repository. J Am Med Inform Assoc. 2000;7:42-54.
33. Wagner MM, Hogan WR. The accuracy of medication data in an outpatient electronic medical record. J Am Med Inform Assoc. 1996;3:234-44.
34. Jha AK, Kuperman GJ, Rittenberg E, Teich JM, Bates DW. Identifying hospital admissions due to adverse drug events using a computer-based monitor. Pharmacoepidemiol Drug Saf. 2001;10:113-9.
35. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003;163:1409-16.
36. Stead WW, Miller RA, Musen MA, Hersh WR. Integration and beyond: linking information from disparate sources and into workflow. J Am Med Inform Assoc. 2000;7:135-45.
37. Snow V, Mottur-Pilson C, Cooper RJ, Hoffman JR. Principles of appropriate antibiotic use for acute pharyngitis in adults. Ann Intern Med. 2001;134:506-8.
38. Snow V, Mottur-Pilson C, Gonzales R. Principles of appropriate antibiotic use for treatment of acute bronchitis in adults. Ann Intern Med. 2001;134:518-20.
39. Prins H, Kruisinga FH, Buller HA, Zwetsloot-Schonk JH. Availability and usability of data for medical practice assessment. Int J Qual Health Care. 2002;14:127-37.