Benchmarking and Quality Improvement: The Harvard Emergency Department Quality Study*

Helen R. Burstin, MD, MPH, Alasdair Conn, MD, Gary Setnik, MD, Donald W. Rucker, MD, MBA, Paul D. Cleary, PhD, Anne C. O’Neil, MPH, E. John Orav, PhD, Colin M. Sox, MD, Troyen A. Brennan, MD, JD, MPH, and the Harvard Emergency Department Quality Study Investigators

PURPOSE: To determine whether feedback of comparative information was associated with improvement in medical record and patient-based measures of quality in emergency departments.

SUBJECTS AND METHODS: During 1-month study periods in 1993 and 1995, all medical records for patients who presented to five Harvard teaching hospital emergency departments with one of six selected chief complaints (abdominal pain, shortness of breath, chest pain, hand laceration, head trauma, or vaginal bleeding) were reviewed for percent compliance with process-of-care guidelines. Patient-reported problems and patient ratings of satisfaction with emergency department care were collected from eligible patients using questionnaires. After reviewing benchmark information, emergency department directors designed quality improvement interventions to improve compliance with the process-of-care guidelines and improve patient-reported quality measures.

RESULTS: In the preintervention period, 4,876 medical records were reviewed (99% of those eligible), 2,327 patients completed on-site questionnaires (84% of those eligible), and 1,386 patients completed 10-day follow-up questionnaires (80% of a random sample of eligible participants). In the postintervention period, 6,005 medical records were reviewed (99% of those eligible), 2,899 patients completed on-site questionnaires (84% of those eligible), and 2,326 patients completed 10-day follow-up questionnaires (80% of all baseline participants). In multivariate analyses adjusting for age, urgency, chief complaint, and site, compliance with process-of-care guidelines increased from 55.9% (preintervention) to 60.4% (postintervention, P = 0.0001). We also found a 4 percentage point decrease (from 24% to 20%) in the rate of patient-reported problems with emergency department care (P = 0.0001). There were no significant improvements in patient ratings of satisfaction.

CONCLUSION: Feedback of benchmark information and subsequent quality improvement efforts led to small, although significant, improvements in compliance with process-of-care guidelines and patient-reported measures of quality. The measures that relied on patient reports of problems with care, rather than patient ratings of satisfaction with care, seemed to be more responsive to change. These results support the value of benchmarking and collaboration. Am J Med. 1999;107:437–449. ©1999 by Excerpta Medica, Inc.
Most health-care reform proposals advocate the measurement and public reporting of quality of care (1–3). Outcome and process measures can inform patient and insurer choices of providers and can provide benchmark data for quality improvement efforts. However, it remains unclear whether institutions can use comparative information in the form of “report cards” to improve quality, for instance by improving compliance with guidelines (4,5).

At the same time, quality improvement based on industrial models has burgeoned in health care. Championed by Berwick (6,7) and others, continuous quality improvement involves cycles consisting of the collection and analysis of data on quality of care, development of interventions, and measurement of quality-of-care data (8). The rapid cycling is intended to lead to constant improvement of process and therefore outcomes.

Those organizing report cards hope that comparative information will provide a basis for prudent purchasing and quality improvement. Collaboration between institutions might lead to greater improvement through sharing information on useful interventions. Little is known, however, about successful quality improvement efforts, especially those involving interinstitutional benchmarking and collaboration. Single hospital studies have shown that continuous quality improvement can improve care (9). Yet there are few reports from hospitals collaborating to improve care for multiple conditions (10,11). Recently, a collaboration of cardiac surgical centers reported on successful efforts to reduce postsurgical mortality (12), and a large group of practices succeeded in improving vaccination rates (13).
* Access the “Journal Club” discussion of this paper at http://www.elsevier.com/locate/ajmselect/
From the Division of General Medicine, Department of Medicine (HRB, TAB, EJO), Brigham and Women’s Hospital, Boston, Massachusetts; Department of Emergency Medicine (AC), Massachusetts General Hospital; Department of Emergency Medicine (GS), Mount Auburn Hospital; Department of Emergency Medicine (DR), Beth Israel Hospital; Department of Health Care Policy (PDC), Harvard Medical School; Departments of Health Policy and Management (AC, ACO’N, TAB) and Biostatistics (EJO), Harvard School of Public Health, Boston, Massachusetts.
An abstract for this manuscript was presented on May 4, 1996, at the national meeting of the Society for General Internal Medicine, Washington, DC. Dr. Burstin was supported in part by Grant HS00062-02 from the Agency for Health Care Policy and Research, Rockville, Maryland.
Requests for reprints should be addressed to Troyen A. Brennan, MD, JD, MPH, Department of Quality Management Services, Brigham and Women’s Hospital, 75 Francis Street, Boston, Massachusetts 02115.
The emergency department is an appropriate site for quality assessment and improvement efforts (14–16). The combination of greater severity of illness, the need for rapid triage and treatment, and limited prior contact with patients creates the potential for poor outcomes (17–20). Supported by our professional liability insurer, the emergency department staffs of the Harvard University–affiliated hospitals compared the care that they provided by collecting benchmarking information. In early 1993, baseline data on compliance with clinical criteria and patients’ reports of care were gathered at each site. One year later, the results of this baseline investigation were provided to quality-improvement teams at each site, which designed their own strategies to improve quality of care (21). In 1995, we repeated data collection at each site to assess the efficacy of these interventions. We report the results of this report card–style benchmarking in emergency departments and discuss collaborative, data-driven quality-improvement efforts.
MATERIAL AND METHODS

Study Design

The Harvard Emergency Department Quality Study was conducted at five urban teaching hospital emergency departments (Brigham and Women’s Hospital, Massachusetts General Hospital, Beth Israel Hospital, New England Deaconess Hospital, and Mount Auburn Hospital). All emergency departments were staffed by a combination of attending and resident physicians, with varying levels of attending supervision; none had an established emergency medicine training program at the time of the study. All physicians and hospitals shared a common, self-funded malpractice insurance program. The emergency department directors, or their designates, and a study team from the Harvard School of Public Health and the Harvard Risk Management Foundation formed the Harvard Emergency Department Quality Study team. Each hospital agreed to participate, and the study was approved by human subjects committees at each institution.

The initial study was conducted from February through May 1993, with the follow-up study conducted from February through May 1995. During a 1-month period in each hospital, patients who presented to the adult emergency departments with the selected chief complaints of abdominal pain, shortness of breath, chest pain, hand laceration, head trauma, or vaginal bleeding were eligible for the study. We chose these complaints because of their prevalence and their potential for medical injury.

We administered on-site questionnaires during study hours, generally between 10 AM and midnight. These
hours were selected after a pilot study determined that they allowed us to capture the greatest proportion of eligible patients. Every third day of the study, research assistants enrolled patients 24 hours per day. Patients were not eligible for the questionnaire portion of the study if they were incapacitated by medical illness, confused or intoxicated, nonpregnant minors, or left the emergency department without being seen. Eligible patients were approached by research assistants, who obtained informed consent. Patients completed an on-site questionnaire and agreed to complete telephone follow-up interviews. The baseline questionnaire was self-administered or interviewer-administered in English or Spanish. A follow-up phone interview occurred approximately 10 days (range, 7 to 12 days) later. There were up to 15 telephone attempts to reach each patient in follow-up. Questionnaires were professionally translated into Spanish using forward and back translation. Patients who did not speak either English or Spanish as their primary language completed the English language form with the assistance of relatives, friends, or hospital translators. The on-site questionnaire asked about sociodemographic characteristics and utilization of primary care, emergency department, and hospital services in the previous year. Patients were also asked about comorbid conditions, including anemia, asthma, arthritis, back problems, cancer (diagnosed in the past 3 years), depression, diabetes, digestive problems, heart trouble, high blood pressure, human immunodeficiency virus infection (HIV) or acquired immunodeficiency syndrome (AIDS), kidney disease, liver problems, stroke, and other major health problems. The follow-up telephone interview assessed patient satisfaction with emergency department care, self-reported problems with the process of care, and discharge instructions. These interviews were conducted in Spanish when necessary.
Compliance with Process-of-Care Guidelines

We developed complaint-specific process-of-care criteria using a multi-attribute utility scale (22). After review of the relevant medical literature, a working group of emergency department directors from the study institutions proposed process-of-care criteria for the six chief complaints. In addition, we reviewed existing guidelines for emergency medicine, such as the American College of Emergency Physicians Chest Pain Protocol. Based on the data required to assess compliance with these criteria, we developed complaint-specific data forms for medical record review, which included site, demographic data, medications, vital signs, relevant history and physical exam findings, laboratory and radiologic data, discharge diagnosis, follow-up instructions, and a complaint-specific measure of patient urgency modified from triage criteria developed by Baker et al (23).
Table 1. Examples of Determination of Compliance with Guidelines

Case 1: 36-year-old woman with lower abdominal pain, last menstrual period greater than 30 days previously, with fever of 102°F, with abdominal guarding.
  Indicated criteria: pregnancy test, pelvic examination (unless dysuria and urinalysis suggestive of urinary tract infection), urinalysis, rectal examination with stool guaiac, and surgical/gynecologic evaluation.
  Record review: pregnancy test positive, pelvic examination with right adnexal tenderness, urinalysis normal, no rectal examination, no surgical or gynecologic evaluation.
  Compliance score: 60% compliance (3/5 indicated criteria done).

Case 2: 25-year-old man with left hand laceration caused by glass bottle.
  Indicated criteria: tetanus status documented, receive tetanus toxoid if greater than 10 years previously (for clean wound); foreign body to be ruled out by wound exploration or hand radiograph; documentation of neurovascular status; with suspected tendon, nerve, or vascular injury on examination, expect hand or orthopedics or plastic surgery consult to be done.
  Record review: tetanus status documented, received tetanus toxoid, no documentation of wound exploration or hand radiograph, no documentation of neurovascular status.
  Compliance score: 40% compliance (2/5 indicated criteria done).

Case 3: 58-year-old woman with pleuritic chest pain for 3 days, associated with shortness of breath, normal vital signs, no history of cardiac disease.
  Indicated standards: electrocardiogram, chest radiograph, arterial blood gas or oxygen saturation determination.
  Record review: electrocardiogram normal, chest radiograph with infiltrate, oxygen saturation of 96%.
  Compliance score: 100% compliance (3/3 indicated criteria done).
This four-level urgency scale ranged from evaluation of a stable medical condition to the need for immediate evaluation of a life-threatening situation (24).

Affiliated faculty in the relevant medical and surgical specialties at the study institutions reviewed the process-of-care criteria and the data forms. After modifications, these were distributed to 30 emergency medicine experts and medical/surgical specialists in the United States. Comments were reviewed by the study panel, and final modifications were made (see Appendix). After undergoing a training session by an investigator (HRB) and receiving a detailed coding manual, physicians used the data form to review emergency department medical records for all patients with the selected chief complaints. The physician-reviewers were unaware of the purpose of the study, were not affiliated with the emergency departments, did not review records from their own institution, and were unaware of the results of phase one when they reviewed records for phase two.

Physician compliance with the process-of-care guidelines was the medical record-based quality measure for this study. For each patient, the applicability of each process-of-care guideline was determined by history, physical exam, and laboratory data. Several guidelines could apply to a single patient, for example, up to nine for a patient with abdominal pain. Physician compliance with
process-of-care guidelines for each patient was calculated as the number of guidelines found to be in compliance divided by the total number of applicable guidelines (Table 1). In 1993, a 5% random sample of emergency department records was blindly reviewed by a different physician-reviewer. Interrater reliability for percent compliance with process-of-care clinical guidelines had an intraclass correlation coefficient of 0.97.
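The per-patient score described above is simply the number of applicable guidelines met divided by the number applicable. The sketch below, in Python, illustrates that calculation for Case 1 of Table 1; the guideline names and data structure are illustrative assumptions, not the study's actual review form.

```python
# Minimal sketch of the per-patient compliance score: guidelines met / guidelines applicable.
# Field names and example data are hypothetical, not drawn from the study database.

def compliance_score(guidelines):
    """guidelines: dict mapping guideline name -> True (met), False (not met),
    or None (not applicable to this patient)."""
    applicable = {name: met for name, met in guidelines.items() if met is not None}
    if not applicable:
        return None  # no guidelines applied; patient contributes nothing to the measure
    return 100.0 * sum(applicable.values()) / len(applicable)

# Case 1 from Table 1: 3 of 5 indicated criteria documented -> 60% compliance
case1 = {
    "pregnancy test": True,
    "pelvic examination": True,
    "urinalysis": True,
    "rectal exam with stool guaiac": False,
    "surgical/gynecologic evaluation": False,
    "ECG": None,  # not indicated for this presentation
}
print(compliance_score(case1))  # 60.0
```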
Patient-reported Problems

Patients were asked to report problems during their emergency department visit using questions from the Picker-Commonwealth Study of patient care, modified for emergency department care (25). These included issues regarding communication, follow-up, medication use, and diagnostic testing. Each question was scored as 0 (no problem) or 1 (a problem), and any patient who reported one or more problems was considered as having had an emergency department visit with a patient-reported problem.
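A minimal sketch of this coding rule, under the same assumptions as the previous example (item values are invented for illustration): each item is scored 0 or 1, and a visit is flagged if any item is 1.

```python
# Sketch of the problem-score coding: any item scored 1 flags the visit as having
# a patient-reported problem. The responses shown are illustrative only.

def visit_had_problem(item_scores):
    """item_scores: iterable of 0/1 responses from one patient's follow-up interview."""
    return int(any(score == 1 for score in item_scores))

print(visit_had_problem([0, 0, 1, 0]))  # 1 (at least one reported problem)
print(visit_had_problem([0, 0, 0, 0]))  # 0
```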
Patient Satisfaction

At the follow-up telephone interview, patients were asked to rate the following items: overall care in the emergency department, courtesy and respect from the staff, completeness of care received, explanation of what was being done, waiting times, and discharge instructions, on a 1 (poor) to 5 (excellent) scale.
Table 2. Hospital-specific Emergency Department Quality-improvement Interventions (Hospitals A to E)

Interventions implemented at one or more of the five hospitals:
Training for all attending physicians on guidelines
Training for all resident physicians on guidelines
Training for nursing staff on guidelines
Guidelines available on paper in emergency department
Guidelines available on clinical information systems
Guidelines integrated into on-line emergency department record
Implemented computerized emergency department record
Computerized nursing progress notes
Enlarged and diversified attending staff
Specific training for staff in head trauma and hand laceration guidelines
Greater resident supervision
Reorganization of emergency department space to reduce wait times
Renovation of emergency department space
Increased on-site interpreter services
Evaluation of waiting time and process flow
Used asthma intervention form
Used chest pain intervention form
Asthma treatment protocol
Increased emergency services technicians
Quality-improvement project for improving speed of admissions
Improvement of registration process
Unit coordinator position added to assist with administrative tasks
Triage process improvement with nurse training
Flow study for expediting hospital admissions
Formal triage policy implemented
Condition-specific discharge instructions provided
New discharge form implemented
The Cronbach’s α for the satisfaction measure was 0.88 for the English version and 0.85 for the Spanish version.
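For readers who want to reproduce this internal-consistency check, the sketch below computes Cronbach's α for a matrix of 1-to-5 ratings. The simulated ratings are a stand-in; the study's survey data are not reproduced here.

```python
import numpy as np

# Cronbach's alpha for a respondents-by-items matrix of satisfaction ratings.
# The ratings below are simulated, not study data.

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = rating items (1-5 scale)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))                           # shared underlying satisfaction
ratings = np.clip(base + rng.integers(-1, 2, size=(200, 6)), 1, 5) # six correlated 1-5 items
print(round(cronbach_alpha(ratings), 2))
```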
Quality Improvement Implementation

After initial data collection in 1993, the emergency department leadership, including physician and nurse leaders, administrators, and risk managers, met monthly to review data. The emergency department staff decided to unblind hospital identity to allow more open sharing of data and collaboration. Provided data included general demographic information, detailed complaint-specific information and percent compliance with process-of-care guidelines, patient satisfaction, and patient-reported problems. Each site was then allowed to design and implement its own quality-improvement efforts. After the
first phase, process-of-care criteria were distributed to all of the emergency departments as clinical guidelines. Sites also focused efforts on problems with medications, discharge instructions, and communication (Table 2). Interventions were generally aimed at dissemination of the clinical guidelines by the emergency department director at each hospital, during teaching sessions and through computer-based applications.
Statistical Methods

We sought to assess the effect of the quality-improvement intervention on percent compliance with process-of-care guidelines, patient satisfaction, and patient-reported problems with care. Each quality measure was assessed for normality. Study phase (before or after intervention) was the predictor variable. Generalized linear models
Table 3. Characteristics of Patients during the Pre-intervention and Post-intervention Periods

Values are number (percent). Columns: Medical Records Pre-intervention (n = 4,931) | Medical Records Post-intervention (n = 6,005) | Patient Surveys Pre-intervention (n = 1,386) | Patient Surveys Post-intervention (n = 2,333)

Hospital
  A: 1,478 (30.0) | 1,813 (30.1) | 432 (31.2) | 628 (26.9)
  B: 1,338 (27.1) | 1,565 (26.1) | 267 (19.5) | 541 (23.2)
  C: 869 (17.6) | 1,012 (16.9) | 270 (19.3) | 465 (19.9)
  D: 1,038 (21.1) | 1,367 (22.8) | 333 (24.0) | 570 (24.4)
  E: 208 (4.2) | 248 (4.1) | 84 (6.0) | 129 (5.5)
Chief complaint*
  Abdominal pain: 1,437 (29.1) | 1,827 (30.4) | 476 (34.3) | 713 (30.6)
  Shortness of breath: 791 (16.0) | 969 (16.1) | 228 (16.5) | 333 (14.3)
  Chest pain: 1,096 (22.2) | 1,298 (21.6) | 337 (24.3) | 518 (22.2)
  Hand laceration: 424 (8.6) | 450 (7.5) | 108 (7.8) | 259 (11.1)
  Head trauma: 903 (18.3) | 1,245 (20.7) | 143 (10.3) | 420 (18.0)
  Vaginal bleeding: 233 (4.7) | 214 (3.6) | 71 (5.1) | 88 (3.8)
  Missing: 47 (1.0) | 2 (0.03) | 23 (1.7) | 2 (0.1)
Urgency level
  1 (lowest): 914 (18.5) | 1,037 (17.3) | 224 (16.2) | 383 (16.4)
  2: 1,036 (21.0) | 1,307 (21.8) | 302 (21.8) | 480 (20.6)
  3: 1,011 (20.5) | 1,174 (19.6) | 300 (21.6) | 466 (20.0)
  4 (highest): 1,856 (37.6) | 2,121 (35.3) | 530 (38.2) | 729 (31.2)
  Missing: 114 (2.3) | 366 (6.1) | 30 (2.2) | 275 (11.8)
Age (years)
  <40: 2,324 (47.1) | 2,696 (44.9) | 658 (47.5) | 1,091 (46.8)
  40–49: 614 (12.5) | 803 (13.4) | 174 (12.5) | 313 (13.4)
  50–59: 446 (9.0) | 630 (10.5) | 125 (9.0) | 252 (10.8)
  60–69: 457 (9.3) | 495 (8.2) | 138 (10.0) | 204 (8.7)
  70+: 933 (18.9) | 1,185 (19.7) | 239 (17.2) | 391 (16.8)
  Missing: 157 (3.2) | 196 (3.3) | 52 (3.8) | 82 (3.5)
* Data were missing for some patients who were enrolled based on hospital logs but who were later determined not to have had one of the six chief complaints that determined eligibility.
were used to examine univariate differences. Linear regression was used to assess the effect of the intervention on the quality measures, adjusting for age, site, urgency, and chief complaint; the residuals from the final models were normally distributed. The predicted mean responses were computed from the linear regression models for the usual patient, based on the covariates in the model. When the patient-based measures were analyzed using logistic regression for dichotomized outcomes, qualitatively similar results were found.
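A hedged sketch of this analytic approach is shown below using the statsmodels formula interface: percent compliance is regressed on study phase with age, site, urgency, and chief complaint as covariates, and a dichotomized patient-based measure is refit with logistic regression as a sensitivity check. The simulated data frame and its variable names (phase, site, urgency, complaint, compliance, any_problem) are placeholders, not the study's datasets.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the record-review dataset; values are invented.
rng = np.random.default_rng(1)
n = 2000
records = pd.DataFrame({
    "phase": rng.choice(["pre", "post"], n),
    "site": rng.choice(list("ABCDE"), n),
    "urgency": rng.integers(1, 5, n),
    "complaint": rng.choice(["abdominal pain", "shortness of breath", "chest pain",
                             "hand laceration", "head trauma", "vaginal bleeding"], n),
    "age": rng.integers(18, 90, n),
})
records["compliance"] = np.clip(
    55 + 5 * (records["phase"] == "post") + rng.normal(0, 15, n), 0, 100)
records["any_problem"] = rng.binomial(
    1, np.where(records["phase"] == "post", 0.20, 0.24))

# Linear model for percent compliance, adjusting for age, site, urgency, and complaint.
ols_fit = smf.ols(
    "compliance ~ C(phase) + age + C(site) + C(urgency) + C(complaint)",
    data=records).fit()
print(ols_fit.params["C(phase)[T.pre]"])   # adjusted pre-vs-post difference

# Sensitivity check: logistic regression for the dichotomized patient-based measure.
logit_fit = smf.logit(
    "any_problem ~ C(phase) + age + C(site) + C(urgency) + C(complaint)",
    data=records).fit(disp=False)
print(logit_fit.params["C(phase)[T.pre]"])
```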
RESULTS

In the pre-intervention phase, 4,876 medical records were reviewed (99% of those eligible), and 2,327 patients completed on-site questionnaires (84% of those eligible), with 1,386 patients completing the 10-day follow-up questionnaires (80% of a random sample of eligible participants). In the postintervention phase, 6,005 medical records were reviewed (99% of those eligible), 2,899 patients completed on-site questionnaires (84% of those
eligible), with 2,326 patients completing the 10-day follow-up questionnaires (80% of all eligible participants). The sampling of all follow-up participants in phase 2 resulted in greater numbers of follow-up questionnaires. Patient age, urgency, chief complaint, and site distribution were similar in the two study phases (Table 3).
Compliance with Clinical Process-of-care Guidelines

In multivariate analyses adjusting for site, age, urgency, and chief complaint, the mean compliance with guidelines for all complaints was 55.9% before the interventions and 60.4% after the interventions (P = 0.0001). The degree of improvement varied significantly by chief complaint (P = 0.0001). For all study sites combined, compliance with guidelines was significantly improved for abdominal pain, shortness of breath, and head trauma, whereas there was no significant change in compliance with guidelines for chest pain, hand laceration, or vaginal bleeding (Table 4). There was intersite variation in improvement in compliance with guidelines (Table 4).
Table 4. Hospital-specific (Hospitals A to E) and Total Compliance with Process-of-care Guidelines

Values are mean percent compliance (95% confidence interval)*. Columns: Pre-intervention | Post-intervention | P Value

All complaints
  A (n = 3,291): 57.2 (55.2–59.2) | 60.3 (58.5–62.1) | 0.02
  B (n = 2,903): 57.4 (55.4–59.4) | 60.2 (58.4–61.9) | 0.04
  C (n = 1,881): 54.5 (52.1–56.9) | 61.7 (59.5–63.9) | 0.0001
  D (n = 2,405): 52.7 (50.5–54.9) | 58.6 (56.6–60.6) | 0.0001
  E (n = 456): 63.4 (58.7–68.1) | 62.6 (58.3–66.9) | 0.83
  Total: 55.9 (54.9–56.9) | 60.4 (59.4–61.4) | 0.0001
Abdominal pain
  A (n = 1,149): 57.2 (53.9–60.5) | 60.0 (56.9–63.1) | 0.23
  B (n = 752): 58.4 (54.1–62.7) | 60.6 (56.5–64.7) | 0.45
  C (n = 499): 53.8 (48.7–58.9) | 62.5 (57.8–67.2) | 0.02
  D (n = 704): 55.4 (50.7–60.1) | 57.9 (54.2–61.6) | 0.42
  E (n = 160): 71.3 (62.7–79.9) | 65.7 (58.3–73.1) | 0.34
  Total: 57.0 (55.0–59.0) | 60.5 (58.7–62.3) | 0.01
Shortness of breath
  A (n = 527): 72.0 (66.5–77.5) | 70.0 (64.9–75.1) | 0.61
  B (n = 384): 31.6 (25.1–38.1) | 52.1 (45.0–59.2) | 0.0001
  C (n = 332): 58.3 (49.1–67.5) | 59.6 (50.4–68.8) | 0.96
  D (n = 417): 37.7 (30.8–44.6) | 54.9 (48.6–61.2) | 0.005
  E (n = 100): 56.3 (35.1–77.5) | 75.6 (54.4–96.8) | 0.29
  Total: 52.1 (48.8–55.4) | 60.9 (57.6–64.2) | 0.0002
Chest pain
  A (n = 636): 65.5 (62.4–68.6) | 61.9 (59.0–64.8) | 0.10
  B (n = 701): 70.7 (67.6–73.9) | 69.3 (66.4–72.2) | 0.54
  C (n = 437): 68.0 (64.5–71.5) | 64.7 (61.2–68.2) | 0.18
  D (n = 503): 61.3 (57.8–64.8) | 63.8 (60.7–66.9) | 0.32
  E (n = 117): 65.9 (59.8–72.0) | 62.9 (56.6–69.2) | 0.51
  Total: 66.7 (65.1–68.3) | 65.0 (63.4–66.6) | 0.13
Hand laceration
  A (n = 176): 58.5 (50.7–66.3) | 57.6 (50.3–64.9) | 0.86
  B (n = 293): 55.0 (49.9–60.1) | 56.9 (50.8–63.0) | 0.65
  C (n = 196): 55.7 (49.0–62.4) | 68.1 (62.2–74.0) | 0.008
  D (n = 178): 66.2 (59.5–72.9) | 67.8 (60.7–74.9) | 0.76
  E (n = 31): 68.9 (56.4–81.4) | 68.2 (58.8–77.6) | 0.94
  Total: 58.7 (55.6–61.8) | 62.6 (59.5–65.7) | 0.09
Head trauma
  A (n = 589): 40.5 (36.6–44.4) | 52.7 (49.4–56.0) | 0.0001
  B (n = 728): 48.3 (44.8–51.8) | 53.4 (50.0–56.7) | 0.04
  C (n = 348): 31.7 (26.8–36.6) | 52.9 (48.6–57.2) | 0.0001
  D (n = 441): 32.0 (28.1–35.9) | 46.2 (42.3–50.1) | 0.0001
  E (n = 42): 24.5 (10.6–38.4) | 46.0 (29.1–62.9) | 0.08
  Total: 40.0 (38.0–42.0) | 51.4 (49.6–53.2) | 0.0001
Vaginal bleeding
  A (n = 206): 64.0 (57.4–70.7) | 70.2 (62.0–78.4) | 0.47
  B (n = 34): 81.4 (63.0–99.8) | 72.9 (56.2–89.6) | 0.52
  C (n = 52): 66.6 (52.3–80.9) | 70.7 (54.4–87.0) | 0.66
  D (n = 152): 73.3 (65.7–80.9) | 73.3 (66.0–80.6) | 0.8
  E (n = 0)†
  Total: 68.6 (64.1–73.1) | 70.2 (65.3–75.1) | 0.64

* Adjusted for age, urgency, and chief complaint. Total also adjusted for site.
† Hospital E had no patients with vaginal bleeding.
Table 5. Emergency Department Quality Improvements in Compliance with Selected Clinical Guidelines

Values are percent compliance with the guideline. Columns: Pre-intervention | Post-intervention | P Value

Shortness of breath. For patients with asthma, expect one to three nebulized β-agonist treatments to be given with clear improvement in lung examination or peak flow before discharge: 59.1 | 74.1 | <0.0001
Chest pain. Expect thrombolytic therapy to be given if pain lasting greater than 30 minutes and less than 6 hours, and ECG with greater than 1 mm ST elevation in two leads (with stated exclusions): 65.3 | 100.0 | 0.0001
Head trauma. Statement in clinical record regarding clinical course or change in clinical status noted in emergency department record: 32.5 | 61.1 | 0.0001
Hand laceration. If suspected tendon, nerve, or vascular injury on examination, expect hand, orthopedic surgery, or plastic surgery to be consulted: 35.7 | 61.1 | 0.0001
Vaginal bleeding. If premenopausal and vaginal bleeding unrelated to menses, expect beta HCG to be done: 57.1 | 83.0 | 0.0001

ECG = electrocardiogram; HCG = human chorionic gonadotropin.
For all chief complaints combined, small, although statistically significant, improvements in compliance were noted at four of the five sites. Protocols regarding evaluation of patients with shortness of breath were very successful at two sites, Hospitals B and D. For abdominal pain and hand lacerations, only Hospital C demonstrated significant improvement in compliance with guidelines. (This was the only site that integrated all clinical guidelines into their computer-based emergency department record.) There was significant improvement in compliance with head trauma guidelines at all sites. Each site undertook efforts to comply with the American College of Emergency Physicians criteria for administration of thrombolytic therapy and attained 100% compliance with the guideline in the postintervention period. Other examples of significant improvement are displayed in Table 5.

We examined the changes in compliance with guidelines that were attributable to improved documentation in the medical record. Using data from four study hospitals’ information systems, we examined compliance for the following criteria among 50 patients in each phase: confirm pregnancy test done for abdominal pain when indicated; confirm chest radiograph done for asthmatic patients when indicated; confirm electrocardiogram done or hospital admission for chest pain when indicated; confirm cervical spine radiographs done for head trauma when indicated; and confirm pregnancy test done for vaginal bleeding when indicated. Of the cases that
were classified as noncompliant by our record review, 19.5% (preintervention) and 17.5% (postintervention) were in compliance by review of data from the information systems.
Change in Patient-reported Problems

In multivariate analyses adjusting for site, age, urgency, comorbidities, and chief complaint, the rate of reported problems decreased from 24% before the interventions to 20% after the interventions, a 17% relative decrease. Significant decreases in patient-reported problems were demonstrated in four of the five sites (Table 6). Several items related to communication demonstrated significant improvement (Table 7). Patients reported that they were less likely to have difficulty getting a message to family or friends after the intervention. Emergency department staff were better at setting expectations, with fewer patients reporting that they did not know how long they would wait. Hospitals had shared information on interpreter services programs, and after an intervention aimed at increasing access to interpreters, there were no patients who reported problems with translation services. After the interventions, fewer patients reported that they did not understand how to take their medications, more were told about the side effects of medications in a way they could understand, and fewer reported that they did not fill their prescriptions (Table 7).

Although we noted some improvement in patient reports of care after the intervention, we found no evidence of improvement in patient ratings of care.
Table 6. Overall Patient-reported Problems and Patient Satisfaction by Hospital and Intervention Period

Values are mean (95% confidence interval)*. Columns: Pre-intervention | Post-intervention | P Value

Patient-reported problems
  Hospital A: 0.27 (0.25–0.29) | 0.21 (0.19–0.23) | 0.0001
  Hospital B: 0.26 (0.24–0.28) | 0.20 (0.18–0.22) | 0.02
  Hospital C: 0.25 (0.23–0.27) | 0.18 (0.16–0.20) | 0.0001
  Hospital D: 0.22 (0.20–0.24) | 0.18 (0.16–0.20) | 0.007
  Hospital E: 0.16 (0.12–0.20) | 0.19 (0.15–0.23) | 0.37
  Total: 0.24 (0.23–0.25) | 0.20 (0.19–0.21) | 0.0001
Patient satisfaction (on 1 to 5 scale)
  Hospital A: 3.5 (3.4–3.6) | 3.7 (3.6–3.8) | 0.01
  Hospital B: 4.1 (4.0–4.2) | 3.8 (3.7–3.9) | 0.05
  Hospital C: 3.8 (3.7–3.9) | 3.9 (3.8–4.0) | 0.21
  Hospital D: 4.0 (3.9–4.1) | 3.7 (3.6–3.8) | 0.009
  Hospital E: 4.3 (4.1–4.5) | 4.1 (4.0–4.2) | 0.06
  Total: 3.8 (3.76–3.84) | 3.8 (3.4–4.2) | 0.11

* Adjusted for age, urgency, and chief complaint. Total also adjusted for site.
Patient satisfaction remained at 3.8 (95% CI: 3.76 to 3.84) on a 1 to 5 scale in both study years (Table 6). There were no significant improvements in patient satisfaction.
DISCUSSION

In this study involving five urban teaching hospitals, we evaluated the effect of interinstitution benchmarking on
several quality measures. Several other studies have examined the quality of care in emergency departments (26–33), in particular patient satisfaction and its relation to issues such as waiting time (34,35) and patient urgency (36); the tools we used to measure quality in this study have also been reported previously (37–41). Many sites are actively integrating such measures into quality-improvement efforts, and there has been some success with improvement strategies (42–44). Our intent was to understand whether collaborative benchmarking could lead to improvements in care.
Table 7. Patient-reported Problems by Intervention Period

Values are mean percent*. Columns: Pre-intervention | Post-intervention | P Value

Communication
  Not able to get message to family/friends: 9.6 | 6.9 | 0.004
  Not told how long you would wait to be seen: 49.4 | 41.3 | 0.0001
  No language assistance, if required: 5.3 | 0.0 | 0.0001
  Did not understand the causes of your illness: 22.4 | 21.4 | 0.55
Medication use
  Did not understand how to take medications: 8.2 | 4.4 | 0.008
  Did not understand side effects of medications: 45.6 | 34.7 | 0.0001
  Did not take medications upon discharge: 12.1 | 8.0 | 0.02
Testing
  Did not understand why tests were done: 17.1 | 16.1 | 0.72
  Did not understand results of tests: 22.9 | 22.7 | 0.92
Follow-up
  Did not understand when to return to emergency department: 21.3 | 18.3 | 0.08
  Did not understand danger signals for return: 25.2 | 24.7 | 0.75
  Did not understand when to resume usual activities: 35.3 | 34.3 | 0.60
  Did not understand when to return to work: 22.1 | 26.2 | 0.03

* Adjusted for age, urgency, site, and chief complaint.
This evaluation simulates health-policy reality. Insurers, encouraged by employers, are now benchmarking on common measures, such as the HEDIS criteria (45). In many metropolitan areas and in some states, purchasers of health care and organizations of health insurers are organizing report-card evaluations of hospitals (5,46). Our initiative is similar to other benchmarking efforts (12,13). The study sites worked with our team to develop clinical guidelines. We then gathered data on baseline compliance with process-of-care criteria and provided detailed feedback to the emergency departments. Next, we collaboratively developed the interventions aimed at improving compliance with guidelines, often using techniques advocated by continuous quality-improvement reports (47). As a final step, we gathered data on these same measures after the interventions.

Our study suggests that benchmarking, feedback, and quality improvements lead to increased compliance with guidelines and fewer patient-reported problems. However, the use of multiple interventions and the absence of randomization does not allow the identification of which initiative led to improvement (48,49). Nor did all of our sites adhere to strict continuous quality improvement techniques, such as rapid cycling (50). However, benchmarking did appear to be useful. Purchasers have several ways to use comparative data to improve care, including “providing technical assistance (such as performance feedback, convening study groups, and consultation on improvement methods)” (51). Our study team acted in just this fashion, and the emergency departments responded well. Indeed, the most gratifying aspect of the study was the willingness of emergency department staff from different hospitals to share care-improvement protocols, discuss potential interventions, and seek explanations for poor performance. The study team has met monthly for more than 5 years, and the collaboration has provided the foundation for the quality-improvement interventions.

Overall, our demonstrated improvements are relatively modest. We noted greater improvement among the surgical complaints (abdominal pain, hand lacerations, head trauma, and vaginal bleeding). Previous experience with critical pathways would indicate that surgical specialties may lend themselves more readily to protocols (52). Previous studies have also suggested that passive dissemination of clinical guidelines rarely produces impressive improvement. For example, there was no measurable change in compliance with evaluation or management of chest pain after the release of recent guidelines (53), and earlier studies demonstrated little change in compliance after recommendations of consensus panels (4,54). By contrast, an educational intervention with a clearly identified opinion leader who disseminated guidelines
led to a significant increase in trials of labor for patients with prior cesarean sections (55), and a study that relied on opinion leaders to improve the use of perioperative antibiotics demonstrated similar effects (56). This suggests that we should have been more active in the use of opinion leaders, conducted shorter cycles of improvement, and used specific interventions. For example, part of the success in reducing the proportion of unfilled prescriptions was based on the collaborative effort to understand prescribing regulations. After confirming that state regulations would allow discharging patients home with medication from the emergency department, several hospitals undertook a new policy that sent patients home with the first few doses of medications.

Other strategies also appear promising. Several institutions integrated the protocols into their hospital information systems, and one institution integrated the guidelines into its automated emergency department record. This institution had the greatest increase in overall compliance, as well as in chief complaint-specific improvements. In another report, we found that attending physician supervision of house officers led to greater compliance with process-of-care guidelines (57). Many of our study sites added attending physician staff between 1993 and 1995, which may have also contributed to greater compliance.

In all of these emergency departments, cost pressures have forced fewer personnel to do more. Without the interventions, we believe that patient-identified problems with care may have increased, which may explain part of our inability to improve overall satisfaction despite the declines in patient reports of problems with care. Patient ratings of satisfaction may be complicated by several patient-level factors, such as expectations or cultural values (58), which prevent substantial improvements in satisfaction. The use of simple “yes or no” questions that allow patients to report potential problems with care may be more objective and less subject to other patient biases. Our data suggest that patient reports of problems may be more sensitive to changes that result from quality improvement.

This study took place at a time of increasing consolidation of the health-care market in Boston, and the study sites are now in two different health-care systems that have become competitive for contracts with insurers. However, these developments did not affect our ability to collaborate.

There are several limitations to this study. First, the results might not be generalizable, as this study occurred in five teaching institutions in the Boston area. However, these institutions are quite different in their settings, personnel, and patients. Second, the process-of-care criteria could lack general applicability, although we did consult with outside experts and use existing guidelines. Our criteria are available, and this report provides benchmark data against which others can compare themselves (Appendix) (24).
Third, this was not a randomized study. Fourth, although some may be surprised by the rate of compliance with guidelines, our results appear to be typical: in a meta-analysis of 23 studies of compliance with guidelines from official organizations, the mean compliance rate was 54.5% (59). Finally, a Hawthorne effect could have occurred (60), as the sites were aware that they were subject to ongoing evaluation and may have adjusted their behavior. However, these effects should have diminished during the relatively long time between the measurements.

In summary, we found that the use of benchmarking followed by quality-improvement efforts resulted in some improvements in emergency department quality of care. The measures that relied on patient reports of problems with care, rather than patient ratings of satisfaction with care, seemed to be more responsive to change. The emergency department directors embraced the pre-intervention results and set out to design site-appropriate interventions that would work best. Providers are open to sharing their strategies for improvement, and the working group continues to develop quality-improvement efforts.
REFERENCES
1. Epstein AM. The outcomes movement—will it get us where we want to go? NEJM. 1990;323:266–270.
2. Starr P. The framework of health care reform. NEJM. 1993;329:1666–1672.
3. Schiff GD, Bindman AB, Brennan TA, for the Physicians for a National Health Program Quality of Care Working Group. A better-quality alternative: single-payer national health system reform. JAMA. 1994;272:803–808.
4. Lomas J, Anderson GM, Domnick-Pierre K, Vayda E, Enkin MW, Hannah WJ. Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. NEJM. 1989;321:1306–1311.
5. Jollis JG, Romano PS. Pennsylvania’s focus on heart attack—grading the scorecard. JAMA. 1997;338:983–985.
6. Berwick DM. Eleven working aims for clinical leadership of health system reform. JAMA. 1994;272:797–802.
7. Berwick D. Quality improvement as an ideal in health care. NEJM. 1989;320:53–56.
8. Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA: Jossey-Bass Publishers; 1996.
9. Rubenstein L, Fink A, Yano EM, Chernof B, Robbins AS. Increasing the impact of quality improvement in health: an expert panel method for setting institutional priorities. Jt Comm J Qual Improv. 1995;21:420–432.
10. Institute for Health Care Improvement. Breakthrough Series Report. 1996.
11. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what will it take to accelerate progress? Milbank Quarterly. 1998;76:593–627.
12. O’Connor GT, Plume SK, Olmstead EM, et al. A regional intervention to improve the hospital mortality associated with coronary artery bypass surgery. JAMA. 1996;275:841–846.
13. Wood D, Halfon N, Donald-Sherbourne C, et al. Increasing immunization rates among inner city African American children. JAMA. 1998;279:29–34.
14. Jenkins DP, Cooke MW, Glucksman EE. Audit of upper limb fracture management in an accident and emergency department. J Accid Emerg Med. 1994;11:105–108.
15. Barboni E, Liva C, Rossi P, Sacher M, Perraro F. Emergency treatment of bronchospastic attacks in an emergency medicine department: effects of a quality improvement project. Quality Assurance in Health Care. 1993;5:123–126.
16. Brook RH, Stevenson RL. Effectiveness of patient care in an emergency room. NEJM. 1970;283:904–907.
17. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients: results of the Harvard Medical Practice Study II. NEJM. 1991;324:377–384.
18. Burstin HR, Lipsitz SR, Brennan TA. Socioeconomic status and risk for substandard medical care. JAMA. 1992;268:2383–2387.
19. Trautlein JJ, Lambert RL, Miller J. Malpractice in the emergency department—review of 200 cases. Ann Emerg Med. 1984;13:709–711.
20. Rusnak RA, Stair TO, Hansen K, Fastow JS. Litigation against the emergency physician: common features in cases of missed myocardial infarction. Ann Emerg Med. 1989;18:1029–1034.
21. Nolan TW. Understanding medical systems. Ann Intern Med. 1998;128:293–298.
22. Gustafson DH, Fryback DG, Rose JH, et al. A decision theoretic methodology for severity index development. Med Decis Making. 1986;6:27.
23. Baker DW, Stevens CD, Brook RH. Patients who leave a public hospital emergency department without being seen by a physician: causes and consequences. JAMA. 1991;266:1085–1090.
24. Sox CM, Burstin HR, Edwards RA, O’Neil AC, Brennan TA. Hospital admissions through the emergency department: does insurance status matter? Am J Med. 1998;150:506–512.
25. Cleary PD, Edgman-Levitan S, Roberts M, et al. Patients evaluate their health care: a national survey. Health Aff (Millwood). 1991;10:254–267.
26. Bursch B, Beezy J, Shaw R. Emergency department satisfaction: what matters most? Ann Emerg Med. 1993;22:586–591.
27. McMillan JR, Younger MS, DeWine LC. Satisfaction with hospital emergency department as a function of patient triage. Health Care Manage Rev. 1986;11:21–27.
28. Krishel S, Baraff LJ. Effect of emergency department information on patient satisfaction. Ann Emerg Med. 1993;22:568–572.
29. Linn LS, Ware JE, Greenfield S. Factors associated with relief from chest pain following emergency care. Med Care. 1980;18:624–634.
30. Karcz A, Holbrook J, Auerbach B, et al. Preventability of malpractice claims in emergency medicine: a closed claims study. Ann Emerg Med. 1990;19:865–873.
31. O’Leary MR, Smith MS, O’Leary DS, Olmstead WW, Curtis DJ, Groleau G, Mabey B. Application of clinical indicators in the emergency department. JAMA. 1989;262:3444–3447.
32. Pierce JM, Kellerman AL, Oster C. “Bounces”: an analysis of short-term return visits to a public hospital emergency department. Ann Emerg Med. 1990;19:752–757.
33. Gratton MC, Salomone JA II, Watson WA. Clinically significant misinterpretations at an emergency medicine residency program. Ann Emerg Med. 1990;19:497–502.
34. Thompson DA, Yarnold PR. Relating patient satisfaction to waiting time perceptions and expectations: the disconfirmation paradigm. Acad Emerg Med. 1995;2:1057–1062.
35. Mowen JC, Licata JW, McPhail J. Waiting in the emergency room: how to improve patient satisfaction. J Health Care Mark. 1993;13:26–33.
36. Hansagi H, Carlsson B, Brismar B. The urgency of care need and patient satisfaction at a hospital emergency department. Health Care Manage Rev. 1992;17:71–75.
37. Nelson CW, Niederberger J. Patient satisfaction surveys: an opportunity for total quality improvement. Hosp Health Serv Adm. 1990;35:409–427.
38. Geigle R, Jones SB. Outcomes measurement: a report from the front. Inquiry. 1990;27:7–13.
39. Albright JM, Panzer RJ, Black ER, Mays RA, Lush-Ehmann CM. Reporting tools for clinical quality improvement. Clin Perform Qual Health Care. 1993;1:227–232.
40. Hargraves JL, Palmer RH, Zapka J, et al. Using patient reports to measure health care system performance. Clin Perform Qual Health Care. 1993;1:208–213.
41. Nelson EC, Batalden PB. Patient-based quality measurement systems. Qual Manag Health Care. 1993;1:208–213.
42. Lammers JC, Cretin S, Gilman S, Calingo E. Total quality management in hospitals: the contributions of commitment, quality councils, teams, budgets, and training to perceived improvement at Veterans Health Administration hospitals. Med Care. 1996;3:463–478.
43. Service you can bank on. Different levels of service and better communication are the key to improvement. Int J Health Care Qual Assur. 1995;8:22.
44. Using patient input in a cycle for performance improvement. Jt Comm J Qual Improv. 1995;21:87–96.
45. NCQA. HEDIS 3.0. Washington, DC: NCQA; 1997.
46. Lejnini MW, Zack A, Richards T, et al. Second Report of the California Hospital Outcomes Project: Acute Myocardial Infarction. Sacramento, CA: Office of Statewide Health Planning and Development; 1996.
47. Berwick DM, Nolan TW. Physicians as leaders in improving health care. Ann Intern Med. 1998;128:289–292.
48. Berwick DM. Harvesting knowledge from improvement. JAMA. 1996;275:877–878.
49. Lee TH. Beyond guidelines—can general internists show the (critical) paths? J Gen Intern Med. 1996;11:174–175.
50. Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA: Jossey-Bass Publishers; 1996.
51. Jencks SF. Can large scale interventions improve care? JAMA. 1997;277:419–420.
52. Pearson SD, Goulard-Fisher D, Lee TH. Critical pathways as a strategy for improving care: problems and potential. Ann Intern Med. 1995;123:941–948.
53. Lewis LM, Lasater LC, Ruoff BE. Failure of a chest pain clinical policy to modify physician evaluation and management. Ann Emerg Med. 1995;25:9–14.
54. Kosecoff J, Kanouse DE, Rogers WH, McCloskey L, Winslow CM, Brook RH. Effects of the National Institutes of Health Consensus Development Program on physician practice. JAMA. 1987;258:2708–2713.
55. Lomas J, Enkin M, Anderson GM, Hannah WJ, Vayda E, Singer J. Opinion leaders versus audit and feedback to implement practice guidelines: delivery after previous cesarean section. JAMA. 1991;265:2202–2207.
56. Everitt DE, Soumerai SB, Avorn J, Klapholz H, Wessels M. Changing surgical antimicrobial prophylaxis practices through education targeted at senior department leaders. Infect Control Hosp Epidemiol. 1990;11:578–583.
57. Sox CM, Burstin HR, Orav EJ, et al. The effect of supervision of residents on quality of care in five university-affiliated emergency departments. Acad Med. 1998;73:776–782.
58. Harpole L, Orav EJ, Hickey M, Posther KE, Brennan TA. Patient satisfaction in the ambulatory setting: influence of data collection methods and sociodemographic factors. J Gen Intern Med. 1996;11:431–434.
59. Grilli R, Lomas J. Evaluating the message: the relationship between compliance rate and the subject of a practice guideline. Med Care. 1994;32:202–213.
60. Miettinen O. Confounding and effect-modification. Am J Epidemiol. 1995;141:1111–1116.
APPENDIX

Clinical Guidelines for Abdominal Pain
● If the patient is a woman, premenopausal or less than 55 years old, and her last menstrual period was more than 30 days ago or she is experiencing lower abdominal pain, expect a pregnancy test to be performed.
● If the patient is a woman with lower abdominal pain, expect a pelvic examination to be performed, unless the patient has dysuria and a urinalysis indicates a urinary tract infection.
● If the patient is pregnant and experiencing lower abdominal pain, expect a pelvic ultrasound or obstetrics consult.
● If the patient is a man with lower abdominal pain, expect a genital examination.
● For all patients with abdominal pain, expect a rectal examination with stool guaiac.
● If blood is noted on rectal examination (positive stool guaiac, melena, or bright red blood per rectum) or the patient is orthostatic, expect hematocrit and hemoglobin tests.
● If the patient has upper abdominal pain without tenderness on palpation; a history of diabetes, hypertension, or coronary artery disease; and is a man older than 25 years, or a woman older than 35 years old, expect an electrocardiogram (ECG).
● If the patient has lower abdominal pain or flank pain, expect urinalysis.
● If the patient has abdominal pain with at least two of the following, expect a surgical consult: 1) abnormal vital signs or temperature greater than 38°C; 2) rebound or guarding on examination; 3) white blood cell count greater than 10,000 cells/mL.

Clinical Guidelines for Asthma
If there is a discharge diagnosis of asthma exacerbation:
● Expect initial peak flow or forced expiratory volume (FEV1) and oxygen saturation (oximetry) or arterial blood gas (ABG) measurement.
● Expect 1 to 3 nebulized β-agonist treatments with clear improvement in lung examination or peak flow before discharge.
● Expect a chest radiograph or empiric treatment with antibiotics if there are symptoms of respiratory infection (productive cough) or temperature greater than 38°C.
● If significantly improved (ie, peak flow greater than 350 L/sec, lungs clear), then steroids on discharge would not be required. In all other cases, expect steroids on discharge.
● If the patient has a history of heart disease, expect an ECG and chest radiograph.

Clinical Guidelines for Chest Pain
History
● If pain is consistent with prior angina or is crushing and substernal, expect an ECG, oxygen, monitor, pain management, chest radiograph, and admission.
● If pain is severe or pressure or substernal or exertional or radiating to jaw, neck, shoulder, or arm, expect an ECG, oxygen, monitor, pain management, and chest radiograph.
● If chest pain is pleuritic, expect a chest radiograph and ABG or oximetry.
● If the patient is a man older than 25 years old, or a woman and older than 35 years old, expect an ECG.
● If chest pain is associated with shortness of breath, expect an ECG and chest radiograph.
● If the patient is taking diuretics or cardiac medications, expect an ECG.
● If the patient has a prior history of a myocardial infarction, coronary artery bypass graft surgery or angioplasty, cocaine use in last 24 hours, or history of positive cardiac diagnostic studies, expect an ECG.
● If the patient has a history of chronic obstructive pulmonary disease and chest pain, expect a chest radiograph.
● If the patient has a history of hypertension, expect an ECG.
● If the patient has diabetes mellitus, expect an ECG.
● If the pain is tearing or radiating to the back, and maximal at onset, expect blood pressures in both arms and a diagnostic study [eg, computed tomographic (CT) scan of the thorax or transesophageal echocardiogram].

Physical Examination
● If the vital signs reveal tachypnea (respiratory rate greater than 24 breaths per minute), expect a chest radiograph, and ABG or oximetry.
● If fever greater than 38°C, expect a chest radiograph and complete blood count.
● If pericardial rub is noted on examination, expect an ECG and chest radiograph.
● If gallop is noted on examination, expect an ECG.
● If abnormal rhythm is noted on examination, expect an ECG, cardiac monitoring, and oxygen therapy.
● If focal dullness or decreased breath sounds are noted on lung examination, expect a chest radiograph and ABG or oximetry.
● If pleural rub is noted on examination, expect a chest radiograph and ECG.
● If rales are noted on examination, expect a chest radiograph and ABG or oximetry.
● If wheezing is noted, expect a cardiac monitor, oxygen, ECG, oximetry, and bronchodilators.

Testing
● If there are acute ischemic changes, expect a cardiac monitor, oxygen, pain management, and admission.
● If pulmonary edema is noted on chest radiograph, expect a cardiac monitor, pain management, oxygen, intravenous access, ECG, ABG or oximetry, and admission.
● If a new infiltrate is noted on chest radiograph, expect ABG or oximetry and antibiotics.

Assessment
● If the patient has unstable angina or suspected or confirmed myocardial infarction, expect cardiac monitor, oxygen, ECG, and admission to a monitored bed.
● If the patient has pericarditis, expect an ECG.
● If the patient has pulmonary embolus, expect cardiac monitor, oxygen, ABG or oximetry, admission, and anticoagulation or thrombolytic therapy.*
● If pain lasts more than 30 minutes and less than 6 hours, and ECG has greater than 1 mm ST elevation in two leads, expect thrombolytic therapy.*

* Exclusions: history of cerebrovascular event, systolic blood pressure greater than 180 mm Hg, diastolic blood pressure greater than 110 mm Hg, recent surgery in last 10 days, known bleeding diathesis.

Clinical Guidelines for Hand Laceration
● Expect all hand lacerations to have tetanus status checked and documented. If the patient cannot recall, or it was more than 10 years ago, expect the patient to be given tetanus toxoid.
● If the patient has a dirty wound, expect tetanus toxoid to be administered if the last toxoid was more than 5 years earlier.
● If a hand laceration requires suturing or packing, expect the wound to be irrigated and explored, and have distal neurovascular and motor function documented.
● If the patient has abnormal distal motor function, expect the tendons to be explored or hand surgery, plastic surgery, or orthopedic surgery to be consulted.
● If the patient has abnormal distal sensation or two-point discrimination, expect the nerves to be explored or hand surgery, plastic surgery, or orthopedic surgery to be consulted.
● If the patient has abnormal hand pulses, expect the wound to be explored or hand surgery, plastic surgery, or orthopedic surgery to be consulted.
● If there is a suspected tendon, nerve, or vascular injury on examination, expect hand surgery, plastic surgery, or orthopedic surgery to be consulted.
● If the laceration is the result of wood or glass, expect wound exploration or radiograph to rule out a foreign body.
● If the laceration is associated with a human or animal bite, a tendon or joint injury, or a fracture, expect antibiotics to be administered.
● If the duration of time since the injury is more than 12 hours, the laceration should not be sutured.

Clinical Guidelines for Head Trauma
● Expect cervical spine films to be done in all cases, unless all the following criteria are met: 1) no complaint of cervical pain; 2) no localized cervical spine tenderness; 3) no subjective or objective findings of spinal cord or nerve root injury (subjective: weakness or paresthesias; objective: motor or sensory deficit); 4) a reliable history, and physical examination, and appropriate responses from patient.
● Expect CT scan of the head to be done if any of the following criteria are met: 1) patient anticoagulated; 2) documented loss of consciousness of any duration or no memory of head trauma; 3) both pupils are not equal, and reactive; 4) any new focal sensory deficit or weakness; 5) ethanol or drug use, history of trauma, and mental status that does not improve after observation; 6) blood in tympanic membrane.
● If a patient with head trauma has Glasgow coma scale less than 13 or a trauma-related intracranial injury on CT scan, expect neurosurgical consult or admission for observation.
● Expect a statement regarding the clinical course or a change in clinical status while in the emergency department to be noted in the emergency department record.
● Expect a patient with head trauma to be given an instruction sheet (“head sheet”) at discharge.

Clinical Guidelines for Vaginal Bleeding
● If vaginal bleeding is associated with first trimester pregnancy, expect Rh testing to be done. If the Rh test is negative, expect RHo (D) immune globulin to be administered.
● If positive β human chorionic gonadotropin (HCG) test, expect ultrasound or quantitative HCG serum level or obstetrics consult.
● If first trimester vaginal bleeding is heavy, associated with moderate to severe vaginal cramping, or examination reveals tender uterus or adnexa or open cervical os, expect gynecologic consultation to be done in the emergency department.
● If the patient is premenopausal and vaginal bleeding is unrelated to menses, a β HCG test must be done.