An Automated Electronic Case Log: Using Electronic Information Systems to Assess Training in Emergency Medicine

Joshua Nagler, MD, Marvin B. Harper, MD, Richard G. Bachur, MD

Abstract
As part of the Outcome Project of the Accreditation Council for Graduate Medical Education, training programs are required to evaluate trainees across six general competencies. Assessment of the patient-care competency by direct observation can be supplemented with a quantification of overall experience through the use of case logs. However, manual entry of information into such registries frequently is incomplete. The authors report on the development of an automated electronic case log as a novel tool for evaluating the experience of individual trainees or an entire training program. Specific examples of use of the case log are provided. The authors use a pediatric emergency medicine fellowship as a paradigm to demonstrate the potential utility across all emergency medicine training programs. In addition, the authors discuss how additional information technologies might be incorporated to further these evaluative efforts in the future.

ACADEMIC EMERGENCY MEDICINE 2006; 13:733–739 © 2006 by the Society for Academic Emergency Medicine

Keywords: competency, ACGME evaluation, training programs, education

In February 1999, the Accreditation Council for Graduate Medical Education (ACGME) published the Outcome Project, an initiative that emphasizes educational outcomes rather than structure-based assessment of postgraduate medical training. More specifically, the project identifies six core competencies as the principal educational goals to be met by all programs. One core competency, patient care (PC), requires interpretation for each medical specialty. Accordingly, the Council of Emergency Medicine Residency Directors held a consensus conference to define the PC competency in the context of emergency medicine (EM) training. Proceedings of this conference were published; a definition of the competency that was specific for physicians training in EM was offered, and potential educational tools for assessment were reviewed.1 In the review, direct observation was identified as the preferred evaluation tool for the EM-specific definition of the PC competency. However, because direct observation as a primary assessment tool can be time consuming, it requires careful balance with patient care, particularly during times of high patient volume or acuity.

From the Division of Emergency Medicine, Department of Medicine, Children’s Hospital (JN, MBH, RGB), Boston, MA. Received December 16, 2005; revision received February 10, 2006; accepted February 12, 2006. Address for correspondence and reprints: Joshua Nagler, MD, Division of Emergency Medicine, Children’s Hospital, 300 Longwood Avenue, Boston, MA 02115. Fax: 617-730-0335; e-mail: [email protected].

© 2006 by the Society for Academic Emergency Medicine doi: 10.1197/j.aem.2006.02.010

In addition, some quantification of the patient care experience also is informative. Case logs offer a summative assessment of the patient care experience for trainees. The development of such registries can be aided through the use of new information technologies. Prior work has documented the successful use of personal digital assistants and other computer systems in tracking clinical experience in primary care,2 internal medicine,3 and surgical training.4 However, self-reporting is labor intensive and therefore difficult to sustain. Studies have estimated that manual entry reflects only 60% of the true numbers of encounters or procedures.5,6 With the advent and widespread use of electronic patient-tracking systems for emergency departments (EDs), information is now available that can be linked with other data to create an automated case log that is less prone to difficulties with entry compliance and error. We describe the use of electronic information systems to create an automated case log that can be used to assess the patient care experience for trainees. We use the experience of our pediatric emergency fellows as a paradigm for how the tool might be used more broadly in EM training.

DEVELOPING AND DEFINING THE CASE LOG

Information from a commercially available electronic patient-tracking system (EMTrack, Version 2.7.4; Cerner Corp, Kansas City, MO), an electronic clinical documentation system (EMStation Version 5.3; VitalWorks [now Cerner Corp.]), and internal electronic registration and billing systems were combined to generate a new database (Figure 1).

We used the unique encounter number for each patient visit to aggregate information across these systems. Specific data elements from each source were included. The patient-tracking system provides information including date of visit, calculated time intervals within the encounter, and triage level. In addition, the patient-tracking system records the trainee (student, resident, or fellow) and attending physician for each patient encounter. If a patient's care was transferred between providers, only the last attending physician and trainee responsible for the patient were entered into the case log database. The electronic clinical documentation system includes the trainee and attending notes, discharge diagnoses, and International Classification of Diseases, Ninth Revision and Current Procedural Terminology codes.7,8 The internal billing and registration systems contain patient demographic data, discharge diagnoses, procedure and visit codes, patient disposition, and the admitting service where applicable.

To demonstrate the concept of the case log as an assessment tool, we provide select examples of how it might be queried to characterize patient care. For the purposes of our analysis, we included all ED visits for patients seen by one of our pediatric EM fellows between July 1, 2003, and June 30, 2004. We excluded patient encounters from shifts at our network community hospitals and during away rotations. The experience of an entire three-year fellowship was estimated by combining the results for each training year. Sample analyses include the total number of patients seen, with subanalysis by triage status (nonurgent vs. urgent or emergent), chief complaint, medical versus surgical chief complaint (designated by the triage nurse), patient disposition, and discharge diagnosis. We use hospital and intensive-care unit (ICU) admissions, as well as exposure to select critical diagnoses, as markers for higher-acuity patients. To assess compliance with published curricular guidelines, we used discharge diagnoses to categorize visits into the following foci: medical, surgical trauma, surgical nontrauma, airway, toxicology, psychiatric, and abuse. For the purposes of tabulation, each diagnosis is included in only one of these categories. To calculate the number of patients seen per hour, the total number of patients seen over the course of the year was divided by the number of scheduled work hours for each trainee. According to our committee on clinical investigation, projects designed to evaluate the educational experience of trainees, including the development of new evaluation tools, are exempt from the requirement for informed consent.

Figure 1. Schematic representation of the automated case log.
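To make the aggregation step concrete, the sketch below shows how extracts from the three systems might be joined on the shared encounter number, and how a simple derived metric such as patients seen per hour could be computed. This is a minimal illustration, not the authors' actual implementation: the file names, column names, trainee identifiers, and hour totals are hypothetical, and it assumes each system can export one row per encounter.

```python
import pandas as pd

# Hypothetical flat-file extracts, one row per encounter, all keyed on
# the unique encounter number shared by the three source systems.
tracking = pd.read_csv("patient_tracking.csv")      # visit date, time intervals, triage level, trainee, attending
documentation = pd.read_csv("clinical_docs.csv")    # notes, discharge diagnoses, ICD-9/CPT codes
billing = pd.read_csv("registration_billing.csv")   # demographics, disposition, admitting service

# Combine the three sources into a single case log database.
case_log = (
    tracking
    .merge(documentation, on="encounter_id", how="left")
    .merge(billing, on="encounter_id", how="left")
)

# Derived metric: patients seen per hour for one trainee, i.e., total
# encounters for the year divided by scheduled clinical hours.
scheduled_hours = 1200  # illustrative value, not a reported figure
fellow_visits = case_log[case_log["trainee_id"] == "FYF1"]
patients_per_hour = len(fellow_visits) / scheduled_hours
```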


SELECTED EXAMPLES OF HOW THE CASE LOG CAN BE USED

Here we provide representative examples of potential educational uses of the electronic case log. This is not meant to be exhaustive but rather is a sampling of the types of analyses that might be performed.

Overall Experience
Using the case log database, we found a total of 46,035 patient encounters in the ED over the academic year. Of these patients, 14,124 (31%) were seen by one of the full-time pediatric EM fellows. For the 2003–2004 academic year, this included four first-year fellows (FYFs), six second-year fellows (SYFs), and three third-year fellows. During the year, FYFs saw a mean of 1,091 patients, SYFs saw 841 patients, and third-year fellows saw 1,733 patients. Therefore, it could be expected that during three years of training, a full-time fellow would be directly involved in the care of approximately 3,665 patients.

Individual Trainee
To demonstrate how the case log can evaluate the experience of individual trainees, we selected data for our FYFs. During the 2003–2004 academic year, the number of patients cared for by each of the FYFs ranged from 855 to 1,558 patients (Figure 2). The case log can objectively identify trainees who are seeing significantly fewer patients (e.g., FYF1) or those seeing a much higher patient volume (e.g., FYF3). Assessing relative patient volume may be a useful screen that prompts further inquiry regarding efficiency, effort, scheduling, or other factors that may contribute to any disparities.

Figure 2. Patient volume for first-year fellows (FYF). FYF1, FYF2, FYF3, and FYF4 = each of the first-year fellows for the 2003–2004 academic year.
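A query of the kind behind Figure 2 reduces to a grouped count over the case log. The sketch below, which continues the hypothetical schema from the earlier example, tallies encounters per first-year fellow and flags volumes far from the group mean; the 25% threshold is arbitrary and shown only to indicate how such a screen might work.

```python
# Encounters per first-year fellow from the merged case log
# (column names continue the hypothetical schema used earlier).
fyf_log = case_log[case_log["trainee_role"] == "first_year_fellow"]
volumes = fyf_log.groupby("trainee_id").size()

# Flag trainees whose annual volume deviates from the group mean by
# more than 25%, as a screen prompting further inquiry into
# efficiency, effort, or scheduling.
mean_volume = volumes.mean()
outliers = volumes[(volumes < 0.75 * mean_volume) | (volumes > 1.25 * mean_volume)]
print(outliers)
```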


Further analyses using triage designations can offer additional insights into the FYF experience. For example, we preferentially assign our FYFs to areas of the ED that will allow them to gain experience with higher-acuity patients as well as those with predominantly surgical complaints. We are able to use the case log to evaluate the efficacy of our team assignments in meeting this curricular goal. Each of the FYFs saw a greater percentage of urgent and emergent patients than were seen on average by all providers in the department (Figure 3A). Similarly, FYFs cared for a disproportionate number of patients triaged with surgical complaints (Figure 3B).

Although there is no single measure of patient acuity, the case log can use proxy markers for exposure to complex or ill patients, including patients requiring admission to the hospital or to an intensive care unit. Table 1 shows that each of our SYFs admitted patients at a rate at or above the overall department admission rate; however, only three of the six fellows cared for a higher percentage of ICU-level admissions. This might guide further training, for example by encouraging certain trainees to actively seek patients who are likely to require ICU-level care or by assigning trainees to teams or shifts that systematically will facilitate these opportunities. We also can demonstrate individual experience with select, relatively common diagnoses that represent higher-acuity patients, as shown in Figure 4. Similarly, the case log might be used to assess exposure to singular, less-frequent diagnoses that are felt to be paramount to training (e.g., nonaccidental trauma in pediatrics or ST-elevation myocardial infarction in adults).

On a practical basis, the case log can facilitate completion of other training requirements. Providing individualized case listings to trainees can be a useful resource for following up patient visits, a curricular requirement in both pediatric EM fellowship and EM residency training. In addition, such listings may be used to select cases to discuss at teaching conferences or as a means for review at regular meetings with program directors.

Training Program
The case log also can help define programwide patient-care experiences, either for internal review or to compare across training programs. For example, Table 2 shows the most frequently encountered chief complaints seen by our pediatric EM fellows during their three years of training. Describing such patterns of exposure can be useful in identifying variability across programs, which may reflect factors such as the size of the institution, its geographic location, base-population demographics, and patterns of referral in the surrounding community. Tabulating illnesses and injuries also can be useful in evaluating programmatic compliance with established curricular guidelines that exist for pediatric and adult EM training.9,10 To demonstrate how this might be achieved, we categorized diagnoses into several of the curricular foci identified in the guidelines of the American Academy of Pediatrics (AAP) Section on Emergency Medicine (Figure 5).

Programs also seek to demonstrate growth in patient care skills with increasing level of training. For example, after correcting for disparate clinical hours, more experienced trainees would be expected to see more (or higher-acuity and more complex) patients on average than would those earlier in their training. Figure 6 displays this expected increase in the number of patients seen per hour with advancing level of training.


Table 1. Second-year Fellows' (SYF) Exposure to Patients Requiring Admission or Intensive Care Unit (ICU)–level Care as Markers for Experience with Patient Acuity

Second-year Fellow   Patients Seen   Patients Admitted, n (%)   ICU Admissions, n (%)
SYF1                    793             171 (21.6)                  6 (3.5)
SYF2                    805             176 (21.9)                 10 (5.7)
SYF3                    841             209 (24.8)                  6 (2.9)
SYF4                    777             149 (19.2)                  3 (2.0)
SYF5                    921             176 (19.1)                 10 (5.7)
SYF6                    911             209 (22.9)                  9 (4.3)
ED totals            46,035           8,672 (18.8)                373 (4.3)

Admission percentage is of total patients seen. ICU admission percentage is of total admissions. SYF1–SYF6 = each of the six second-year fellows; ED totals = all patient visits in the case log for the academic year.
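The proportions reported in Table 1 amount to a handful of grouped calculations over the case log. A minimal sketch, again under the hypothetical schema used above (the disposition and service labels are placeholders):

```python
# Per-fellow counts of patients seen, admissions, and ICU admissions.
syf_log = case_log[case_log["trainee_role"] == "second_year_fellow"]
summary = syf_log.groupby("trainee_id").agg(
    patients_seen=("encounter_id", "size"),
    admitted=("disposition", lambda d: (d == "admit").sum()),
    icu=("admitting_service", lambda s: (s == "ICU").sum()),
)

# Admission percentage is of total patients seen; ICU percentage is of
# total admissions, matching the conventions used in Table 1.
summary["admit_pct"] = 100 * summary["admitted"] / summary["patients_seen"]
summary["icu_pct"] = 100 * summary["icu"] / summary["admitted"]
print(summary.round(1))
```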

AUTOMATED CASE LOG AND EM TRAINING

Emergency medicine training is unique in medical education in its ability to consistently provide supervision at the bedside. Accordingly, the opportunity exists for frequent and meaningful assessment of a trainee's competence in patient care.11 However, faculty are simultaneously involved in teaching, supervision, and the direct care of patients and therefore may not have extended periods of time to observe trainees. A recent study showed that less than 5% of faculty members' time in the ED involved direct observation of trainees.12 In addition, although they are rich in qualitative information, individual observations of patient encounters may not provide an accurate representation of the quantitative experience of trainees.

Figure 3. Distribution of cases for first-year fellows (FYFs) compared with all ED providers. (A) Distribution of cases by triage status (urgent or emergent vs. nonurgent). (B) Distribution by medical versus surgical complaints. FYF1, FYF2, FYF3, and FYF4 = each of the first-year fellows. ED = all patient visits in the emergency department over the 2003–2004 academic year. Dashed line is drawn at baseline percentages for all ED providers for visual comparison with fellows. Numeric labels represent percentages of cases.


Figure 4. Exposure to select medical and surgical diagnoses as marker for experience with patient acuity. Acute surgical diagnoses include bowel obstruction, liver or spleen laceration, open fracture, septic joint, skull fracture or intracranial injury, and appendicitis. Acute medical diagnoses include diabetic ketoacidosis, gastrointestinal bleeding, sepsis or shock, status asthmaticus, meningitis, and status epilepticus.


Table 2. Ten Most Common Chief Complaints for Patients Seen by All PEM Fellows

Chief Complaint                    Cases Seen*
Fever                                  436
Injured extremity                      411
Respiratory distress or wheeze         280
Abdominal pain                         186
Laceration                             183
Vomiting or diarrhea                   177
Cough, URI, or sore throat             144
Head injury                            105
Facial injury (EENT)                    78
Seizure                                 74

URI = upper respiratory infection; EENT = ear, eyes, nose, throat.
* Average number of cases seen during three years of fellowship.

The use of a case log, in addition to other educational assessment tools, allows for evaluation of both process and outcome. Quantification of patient encounters, specific chief complaints or diagnoses, and experience with higher-acuity cases are important objective measures of experience and are part of the ACGME requirements.1 However, reliance on manual recording of individual experience has been shown to lead to underreporting of true experience.5,13 An automated case log system provides a reliable means to report the collective experience of trainees and programs. To our knowledge, no such system previously has been developed or described in the literature. We found that by extracting data from information management systems currently in use in the ED, we were able to create a database that provides valuable information regarding training experience.

Individual Trainee
Quantifying patient encounters provides a broad sense of the experience of trainees. Although numbers alone are insufficient to document competence, it appears intuitive that relative exposure has merit. Patients and families themselves recognize this relationship when they ask, "How many times have you seen [or done] this before?" with an a priori assumption that care is likely to be superior from those with more experience.

Figure 5. Relative exposure to select topics that have been identified by the guidelines of the American Academy of Pediatrics Section on Emergency Medicine.9

Figure 6. Number of patients seen per hour by year of training. Vertical lines represent the range of values for patients seen per hour; the numbers provided indicate the median values.

The automated case log provides a means for such enumeration. However, because not all visits involve similar levels of patient care, we also have demonstrated how subanalysis can be used to further define trainee experience. Similarly, as part of the EM-specific definition of the patient care competency, it is incumbent upon training programs to develop means for evaluating trainees' performance with regard to patient acuity. However, because no single definition of acuity currently exists in this context, assessment can be challenging. We demonstrated how disposition (e.g., the need for admission or for ICU-level care) and several specific diagnoses can be used as surrogate markers in the summative assessment of patient acuity.

Training Program
In addition to benefiting individual trainees, an automated case log has important implications on a programmatic level. Examining experience by presenting complaint or discharge diagnosis can broadly assess clinical exposure in a given program. For the most common topics, there is unlikely to be concern about adequate experience. However, the case log also can be used to evaluate exposure to the infrequent but fundamental diagnoses and topics in EM. Several studies have demonstrated that many housestaff are not meeting recommendations regarding breadth of clinical exposure.14–16 Identifying such clinical gaps is important so that alternative means, including didactics, simulation, or other educational strategies, can be used to ensure knowledge of and comfort with these important but less-frequent exposures in the ED.
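One way such gap screening might be operationalized is sketched below: map discharge diagnoses to curricular categories, then compare each trainee's exposure against program-defined minimums. The mapping and thresholds are placeholders for illustration, not the content of any published guideline.

```python
# Placeholder mapping from discharge diagnosis to a curricular focus;
# a real implementation would cover the full guideline-derived list.
DIAGNOSIS_CATEGORY = {
    "status asthmaticus": "airway",
    "ingestion": "toxicology",
    "suicidal ideation": "psychiatric",
}
MINIMUM_CASES = {"airway": 20, "toxicology": 10, "psychiatric": 10}  # illustrative targets

case_log["category"] = case_log["discharge_diagnosis"].map(DIAGNOSIS_CATEGORY)
exposure = case_log.groupby(["trainee_id", "category"]).size()

# Report trainee-category pairs falling short of the target; categories
# with zero exposure for a trainee would need a separate check, because
# they do not appear in the grouped counts at all.
for (trainee, category), count in exposure.items():
    if count < MINIMUM_CASES.get(category, 0):
        print(f"{trainee}: only {count} {category} cases")
```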


Published consensus regarding which topics are deemed most important to training exists for both adult and pediatric EM. The model for EM training and the recommendations from the Curriculum Committee of the Section of Emergency Medicine of the AAP both offer listings of areas and specific topics to be covered during residency and fellowship, respectively.9,10 We categorized diagnoses into several of the curricular foci identified in the AAP guidelines. Similar assessment could be performed with the topics covered in the model, or with other current or future guidelines for EM training.

Finally, case lists may be used by program directors and others involved in trainee assessment to guide further formative assessment. The database may be used to identify key cases for review through chart-stimulated recall or other means of oral examination. In addition, by quantifying the number of cases shared by each faculty–trainee dyad, feedback responsibilities can be focused on those faculty who have worked most closely with a given trainee. This would allow evaluative approaches to become more targeted and efficient.

We have demonstrated a number of potential uses for an automated case log as an educational tool in the assessment of the ACGME patient care competency. As we further refine this tool, we anticipate potential improvements and additional uses. A similar automated log could be used to assess procedural experience. The Residency Review Committee currently requires reporting of procedures for trainees in both adult and pediatric EM. We currently are developing an automated procedure log to assist in the recording of this mandatory information.

An automated case log might also serve as a useful research tool. Correlations between relative exposure to a given diagnosis or category of disease and performance on in-training examinations, board certification examinations, or ultimately even patient outcomes could be examined with data from a case log. Similarly, prior studies that have assessed trainees' comfort with certain procedures have relied on self-report of prior experience.17 An automated log could provide more objective data for evaluating links between experience and comfort or performance.

Information from the case log also could be used for quality improvement (QI) measures. Prior studies have demonstrated the use of electronic medical records to measure compliance with published guidelines.18 By using the case log to select for certain diagnoses and then searching the electronic medical record, practice patterns, such as test ordering by individuals or groups of physicians, easily could be assessed for adherence to standards of care. Similarly, outcome measures, such as length of stay by diagnosis, repeat visits to the ED, or transfer of patients from an inpatient service to an intensive care unit within a specified period after admission, all are QI metrics that can be extracted readily from the case log.

Finally, additional information systems could be incorporated to further characterize the care provided during individual patient encounters. Laboratory, radiology, and pharmacy data, for example, could be added to currently available information to better define patient encounters. For example, blood gases, chest radiographs, or intravenous therapies such as terbutaline or epinephrine can aid in the assessment of acuity in asthma patients, and incorporating imaging in minor head trauma or antibiotic-prescribing patterns in otitis media can aid in evaluating adherence to practice guidelines.
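As an illustration of that last point, the sketch below joins a hypothetical pharmacy extract to the case log to flag asthma encounters that received systemic beta-agonist therapy, one possible proxy for acuity. The file, column, and medication labels are assumptions, not the authors' implementation.

```python
# Hypothetical pharmacy extract: one row per medication administration,
# keyed on the same encounter number as the case log.
pharmacy = pd.read_csv("pharmacy_administrations.csv")

# Restrict to asthma visits, then keep those that received terbutaline
# or epinephrine as a marker of higher acuity.
asthma = case_log[case_log["discharge_diagnosis"] == "status asthmaticus"]
acute_meds = pharmacy[pharmacy["medication"].isin(["terbutaline", "epinephrine"])]
high_acuity = asthma.merge(acute_meds, on="encounter_id", how="inner")

# Higher-acuity asthma encounters per trainee.
print(high_acuity.groupby("trainee_id")["encounter_id"].nunique())
```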

There are limitations to the automated case log in its current form. First, because the care of a patient often is provided by a team of physicians, sometimes in succession, identifying the trainee most involved in that care may be difficult. The initial iteration of the automated case log linked only the final provider with a patient. However, we recognize that for the purpose of quantifying clinical exposure, it is logical to include all providers who cared for a patient. We therefore recently revised our case log to include all trainees involved in each patient encounter. Using this new strategy, we found an 18% increase in entries for the given academic year, suggesting that multiple providers cared for this subset of patients. Finally, we emphasize that exposure alone does not equate to competence and should be measured in conjunction with other means of demonstrating proficiency. Documenting clinical experience is important; however, it should serve as an adjunct to direct observation, simulation exercises, chart-stimulated recall, and other means of formative assessment in documenting trainee competency in patient care.

CONCLUSIONS

Electronic information systems that commonly are used in the ED can be linked to create an automated case log for trainees. We use the experience of pediatric EM fellows to demonstrate how this model can be used as an educational assessment tool across EM training. In the future, this tool also may be used to define procedural experience, as an educational research tool, and in reviewing patient management behavior in EM.

The authors thank Jeanne Greeno and Michael Durrant of the Information Systems Department (ISD) at Children's Hospital for their assistance in the creation of the automated case log.

References

1. King RW, Schiavone F, Counselman FL, Panacek EA. Patient care competency in emergency medicine graduate medical education: results of a consensus group on patient care. Acad Emerg Med. 2002; 9:1227–35.
2. Alderson TS, Oswals NT. Clinical experience of medical students in primary care: use of an electronic log in monitoring experience and in guiding education in the Cambridge community based clinical course. Med Educ. 1999; 33:429–33.
3. Sequist TD, Singh S, Pereira A, Pearson SD. On Track: a database for evaluating the outpatient clinical experience of internal medicine residency training. Proc AMIA Symp. 2003; 1002.
4. Brouwer R, Kiroff G. Computer-based logbook for surgical registrars. ANZ J Surg. 2002; 72:57–61.
5. Lee JS, Sineff SS, Sumner W. Validation of electronic student encounter logs in an emergency medicine clerkship. Proc AMIA Symp. 2002; 425–9.
6. Langdorf MI, Montague BJ, Bearie B, Sobel CS. Quantification of procedures and resuscitations in an emergency medicine residency. J Emerg Med. 1998; 16:121–7.


7. American Medical Association [AMA]. Current Procedural Terminology: CPT. Standard ed. Chicago, IL: AMA, 2005.
8. Hart AC, Hopkins CA, eds. 2005 ICD-9-CM Professional for Physicians. 6th ed. Salt Lake City, UT: Ingenix, 2004.
9. Curriculum Subcommittee, Section of Emergency Medicine, American Academy of Pediatrics. Pediatric emergency medicine (PEM) fellowship curriculum statement. Pediatr Emerg Care. 1993; 9:60–6.
10. Hockberger RS, Binder LS, Graber MA, et al. The model of the clinical practice of emergency medicine. Ann Emerg Med. 2001; 37:745–70.
11. Jouriles NJ, Emerman CL, Cydulka RK. Direct observation for assessing emergency medicine core competencies: interpersonal skills. Acad Emerg Med. 2002; 9:1338–41.
12. Chisholm CD, Whenmouth LF, Daly EA, Cordell WH, Giles BK, Brizendine EJ. An evaluation of emergency medicine resident interaction time with faculty in different teaching venues. Acad Emerg Med. 2004; 11:149–55.


13. Rowe BH, Ryan DT, Mulloy JV. Evaluation of a computer tracking program for resident-patient encounters. Can Fam Physician. 1995; 41:2113–20.
14. De Lorenzo RA, Mayer D, Geehr EC. Analyzing clinical case distributions to improve an emergency medicine clerkship. Ann Emerg Med. 1990; 19:746–51.
15. Langdorf MI, Strange G, Macneil P. Computerized tracking of emergency medicine resident clinical experience. Ann Emerg Med. 1990; 19:764–73.
16. DelBaccaro MA, Shugerman RP. Pediatric residents in the emergency department: what is their experience? Ann Emerg Med. 1998; 31:49–53.
17. Simon HK, Steele DW, Lewander WJ, Linakis JG. Are pediatric emergency medicine training programs meeting their goals and objectives? A self-assessment of individuals completing fellowship training in 1993. Pediatr Emerg Care. 1994; 10:208–12.
18. Maviglia SM, Teich JM. Using an electronic medical record to identify opportunities to improve compliance with cholesterol guidelines. J Gen Intern Med. 2001; 16:531–7.
