SPECIAL CONTRIBUTION
Outcome Assessment in Emergency Medicine—A Beginning: Results of the Council of Emergency Medicine Residency Directors (CORD) Emergency Medicine Consensus Workgroup on Outcome Assessment

Cherri Hobgood, MD, Susan Promes, MD, Ernest Wang, MD, Risa Moriarity, MD, Deepi G. Goyal, MD
Abstract

This article is designed to serve as a guide for emergency medicine (EM) educators seeking to comply with the measurement and reporting requirements for Phase 3 of the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project. A consensus workshop held during the 2006 Council of Emergency Medicine Residency Directors (CORD) "Best Practices" conference identified specific measures for five of the six EM competencies—interpersonal communication skills, patient care, practice-based learning, professionalism, and systems-based practice (medical knowledge was excluded). The suggested measures described herein should allow for ease in data collection and applicability to multiple core competencies as program directors incorporate core competency outcome measurement into their EM residency training programs.

ACADEMIC EMERGENCY MEDICINE 2008; 15:267–277 © 2008 by the Society for Academic Emergency Medicine

Keywords: outcome measures, assessment, competency, resident training, ACGME Outcomes Project
The objective of this article is to report the results of a consensus workgroup held during the 2006 Council of Emergency Medicine Residency Directors (CORD) "Best Practices" conference. The specific goals of the consensus workgroup were to gather stakeholders to brainstorm potential emergency medicine (EM)-specific resident outcome measures that meet the criteria defined by Phase 3 of the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project. The process and the measures developed were not intended to be all-inclusive, but to provide a starting place for program directors to begin addressing the EM-specific competency-derived outcome measures that integrate with our unique learning environment and clinical care requirements. The measures are also designed to assist residency program directors in meeting all components of Phase 3 of the ACGME Outcome Project.

From the School of Medicine, University of North Carolina (CH), Chapel Hill, NC; the Department of Emergency Medicine, University of California–San Francisco (SP), San Francisco, CA; the Feinberg School, Division of Emergency Medicine, Northwestern Healthcare (EW), Evanston, IL; the Department of Emergency Medicine, University of Mississippi (RM), University, MS; and the Department of Emergency Medicine, Mayo Clinic (DGG), Rochester, MN. Received August 1, 2007; revision received October 31, 2007; accepted November 1, 2007. Address for correspondence: Cherri Hobgood, MD; e-mail: [email protected]. doi: 10.1111/j.1553-2712.2008.00046.x

BACKGROUND

The ACGME Outcome Project,1 initiated in 1999, defined a new conceptual framework for graduate medical education in the United States. The Outcome Project utilizes a set of six core competencies: interpersonal communication skills, medical knowledge, patient care, practice-based learning, professionalism, and systems-based practice. The goal of the core competencies is to focus resident education on high-quality patient care as defined by the Institute of Medicine (IOM) in "Crossing the Quality Chasm."2 These STEEEP goals (Safe, Timely, Effective, Efficient, Equitable, and Patient Centered) underpin the specialty-specific definition of the unique knowledge, skills, and attitudes resulting from specific education in a particular discipline. Implementation of the Outcome Project has been divided by the ACGME into three discrete phases:
Phase 1 (July 2001–June 2002) focused on defining specialty-specific competencies. The EM competencies were developed during a 2002 EM educator consensus conference and were disseminated that year in a series of six articles.3–8 Residencies were expected to incorporate the teaching and learning of these EM-specific competencies into their didactic and clinical curricula.

Phase 2 (July 2002–June 2006) of the project sharpened the focus of the competency definition by linking competencies to assessment tools. The goal was to move beyond simply counting the number of cases the resident was involved in and the procedures performed and toward a discrete assessment of the components of competency—namely, the knowledge, skills, and attitudes needed to competently practice medicine. A number of assessment methods were developed, including record review, checklists, chart-stimulated recall (CSR) oral examination, objective structured clinical examination (OSCE), simulations and models, portfolios, written examination, and 360-degree evaluations (see appendix for an overview, available as an online Data Supplement at http://www.blackwell-synergy.com/doi/abs/10.1111/j.1553-2712.2008.00046.x). Using the guidelines and methods provided in the ACGME toolbox of assessment measures,9 CORD's Standardized Evaluation Group then developed and deployed specific measures of resident performance. An example of an EM-specific tool developed during this time is the Structured Direct Observation Tool (SDOT). Many of these are available for CORD members on the Sharepoint Web site (http://cord.sharepointsite.com).

The goal of the current Phase 3 (July 2006–June 2011) is the full integration of competencies and their assessment with learning and clinical care. The focus is on the development of observable outcome measures that will allow for the assessment of individual and collective resident performance, the use of these identified metrics as the basis for improvement of individual physicians as well as residency programs in general, and the provision of documentation for accreditation review.

How Can Measuring Outcomes Shape the Learning Environment?

The learning environment is more productive when students and faculty agree upon aligned and explicit goals, instruction, and desired outcomes. Criteria-driven outcomes diminish rater subjectivity and increase the likelihood that measurement will be consistent. Learner accountability leads to development of self-assessment and promotes an environment in which feedback is expected and valued. Objective measures provide a consistent set of data by which both residents and faculty can measure progress toward a stated goal. Real-life clinical experiences provide the resident with the necessary contextual relevance of the measure, which, in turn, promotes interest in the material and retention of the teaching. Frequent reflection allows residents to become better at self-assessment. Independent study supplements the general curriculum. By accounting for the fact that residents enter residency with differing backgrounds, skill levels, knowledge bases, aptitudes, abilities, and learning styles, independent study allows for individual focus on areas of perceived need. Characteristics of competency-based teaching and learning are summarized in Table 1.10

Workgroup Methods for Identifying Outcome Measures

All workgroup leaders were selected prior to the conference, assigned a topic area, and briefed on the goals of the project. Each workgroup leader agreed to utilize the framework developed by the 2002 CORD Consensus Conference, in which the specifics of EM competency were defined.3–8 Five of the six competencies were selected for group work. The medical knowledge competency was not addressed because it is defined as a knowledge-based competency, and two excellent outcome measures, the board examination and the in-service training examination, already exist and are used extensively by program directors. We focused on the competencies of communication, systems-based practice, patient care, professionalism, and practice-based learning and improvement.

All conference participants were invited to participate in the workgroup sessions. All participants attended a brief didactic session given by one of the authors (CH), an educator with expertise in assessment and outcome measure development. The session provided background on the ACGME Outcome Project, the specific goals of Phase 3, and the specific tasks to be completed during the small-group work. At the conclusion of the didactic session, participants were divided into five working groups by counting off 1 through 5. Each numbered group then reconvened in a small group room, joined by its specific leader. Each group focused on one competency, with the task of identifying characteristics of the competency that were both important and measurable as outcomes. All workgroups were provided with copies of the publications defining their specific competency3–8 to utilize as an on-site resource for their work. Workgroups were encouraged to first identify existing measures that could be adapted to measurement of EM residency training outcomes. In the absence of such measures, they were asked to brainstorm measures of EM competency-based learning that were felt to be reliable and generalizable and that could be easily implemented.
Table 1. Five Important Characteristics of Competency-based Teaching and Learning10

1. Learning is explicit and clearly aligned with expected competencies.
2. Teaching is criteria-driven, focusing on accountability in reaching benchmarks and, ultimately, competence.
3. Content is grounded in "real-life" experiences.
4. Reflection is focused on fostering the learner's ability to self-assess.
5. Curriculum is individualized, providing more opportunities for independent study.
Specific measures could be defined in such a way as to focus on individual resident performance or on the aggregate performance of the entire group (e.g., at the residency program level), or to measure characteristics of the training environment that impact clinical care (including adequate resources, overcrowding, attending decision-making, etc.). Workgroups arrived at their products in a consensus manner, and disagreements were resolved by individual leaders as part of the group process.

The group noted that these new measures would substantially expand the roles of the program director and program coordinator, roles that have already grown dramatically in the past few years. Thus, a main consideration in the development of these measures was that the process itself not be too burdensome. To improve the likelihood of successful implementation, the group used the following criteria to assess the viability of a potential measure: 1) the measure should provide meaningful feedback for both residents and programs, 2) results of the measure must be reliable, 3) data for the numerator and denominator must be easily attainable, and 4) measurements must be limited in scope.

Results of the Consensus Workgroup

All workgroups successfully developed outcome measures that met the criteria defined by the ACGME and fell within the scope of EM practice. Most participants struggled with the differences between the functional uses of assessment and outcome measures. The ACGME Outcome Project defines assessment as the "process of collecting, synthesizing, and interpreting information to aid decision-making."1 The results of assessments allow educators to make informed decisions about learner knowledge, beliefs, and attitudes. Outcomes are defined as the immediate, short-term, delayed, and long-term results demonstrating that learning goals and objectives have been accomplished.1 The group concluded that assessment tools are too often used as outcome measures and that the two are frequently confused. Despite this confusion, because measuring outcomes for a particular characteristic or skill is so important, it is often necessary to blur the lines and use what little is available. Our recommendations for outcome measurements may occasionally reflect this reality.

The measures presented here are intended only as a guide. They are not intended to be prescriptive, and they do not represent the only options from which an individual residency director can choose when designing a program's approach to competency measurement. Individual programs may wish to adopt some, all, or none of these measures when developing their own institution-specific outcome measures program. At a minimum, we hope these suggestions will assist residency program directors as they begin to form outcome measurement "toolboxes" that can be modified and refined with advances in our clinical specialty. Several questions remain for which the group had no definitive solutions: how these measures must be utilized for a particular program to be in compliance (e.g., the number of measures assessed for each category), the total number of measures applied, the cycle time for repeat measurement, and how the measures should change to reflect increasing learner competency with a condition.
DISCUSSION

Communication Competency

The unique communication skills required of competent emergency physicians (EPs) have been previously defined.3 Building on this previous work, we focused on the outcomes anticipated from practitioners who excel in this specific competency. We also identified high-leverage areas for data collection, possible methods for enhancing the face validity of our measures, and practical tips for implementing data collection. A summary of the measures developed is presented in Table 2.11–14

The most critical communication skill required of all EPs is the ability to rapidly develop a therapeutic relationship with their patients. Outcome measures unique to this skill are numerous, primarily focusing on the patient's perception of the individual physician's communication skills. The group endorsed the concept of using previously validated patient interpersonal communication inventories to measure the success of individual residents at the outset of the therapeutic relationship. These measures include, but are not limited to, the Calgary Communication Inventory,11 the interpersonal skills and communication instrument of Schnabl et al.,12 and the longitudinal communication skills initiative of Rucker and Morrison.13 Data collection methods for these instruments will vary depending upon individual residency program and departmental logistics. Suggested methods other than these, seen in Table 2, include faculty interview of patients following an assessment using the SDOT, resident-directed patient sampling, and exit interview sampling of a random selection of patients at the conclusion of their emergency department (ED) stay. Regardless of the method chosen, care must be taken to mitigate potential sample bias, which can be introduced in a variety of ways, particularly by resident-directed sampling, patient illiteracy, or patient lack of English language proficiency.

Other communication skills important to measure surround the areas of high-risk communications: patients leaving against medical advice (AMA), death notification, and refusal of resuscitation (do not attempt to resuscitate/do not intubate) orders. Although the group easily achieved consensus on the skills that constituted excellence in this competency, difficulty arose in determining practical measurement methods. For example, for patients leaving AMA, some would argue that the best outcome and most desirable communication skill is the ability to effectively convince the person to remain in the ED and continue treatment. Others would state that this outcome is paternalistic, and the only valuable measure is whether the patient received an unbiased communication of the risks and benefits of his or her medical decision. In this scenario, sampling difficulty arises for both the numerator and the denominator. For the numerator, if one selects the percentage of patients who originally planned to leave AMA but declined after communication with their provider, one would miss all those patients who ultimately decided to depart but were adequately informed of the risks and benefits of their decision.
Table 2. Emergency Medicine Relevant Communication Competencies

Condition | Measure | Data Collection Method or Assessment Technique
Therapeutic relationship | Establishment of a therapeutic relationship | Validated patient interpersonal skill inventories,11–13 SP, PR, 360
Therapeutic relationship | Effective communication of care processes: Press Ganey14 scores, patient satisfaction surveys, physician satisfaction scores ("My physician was excellent at informing me about the outcomes of my care") | Press Ganey survey
AMA | Number of AMA | CR
AMA | Physician invites AMA patients to return for recommended treatment | CR, OSCE, SP, S
Death notification | Family satisfaction with resident interpersonal communication skills (may use any validated interpersonal skills inventory) | SP, S
Written communication skills—chart documentation | Physician documentation supports correct level of billing | CR
Leadership of critical care resuscitation team | Physician leadership inventory, CRT | S, DO

AMA = patient leaving against medical advice; CR = chart review; CRT = crew resource training; DO = direct observation; OSCE = objective structured clinical examination; PR = peer evaluation; RR = record review; S = simulation and models; SP = standardized patient assessment; 360 = 360-degree evaluations.
The construction of the denominator for the measure is equally difficult, as most of the discussions that providers have about leaving AMA with patients who ultimately remain in the department are not captured by standard charting methods. In other words, the number of patients who depart AMA (numerator) is known, but the total number of patients who discussed this option with their providers (denominator) is not.

The group discussed another potential measure of best practice in the case of the patient who desires to leave AMA, namely, whether the resident encouraged the patient to return if the condition were to worsen or if the patient were to have a change of mind about seeking treatment. For this measure, written documentation of an invitation to return should be noted on the chart and discoverable by review. It was felt that this measure could easily be collected during standard review of all AMA patient charts.

Death notification is another area of significant risk for all EPs. Assessment tools exist to measure resident competency in this difficult communication encounter.15 Measurement of family member satisfaction with the physician communicating this information can be obtained via a telephone survey call-back after an appropriate time interval, or via a mail survey.

Another key communication skill for EPs is the ability to communicate effectively in writing, particularly through chart documentation. Components of a well-documented chart include a clear, concise description of medical decision-making, as well as an adequate number of history, review of systems, and physical examination items to support correct billing levels. Data for these measures are obtained by chart review.

Physician leadership and conflict resolution skills should also be measured. No known validated instruments exist to measure the specific leadership skills of EPs. Measures may exist in aviation, anesthesia, or crew resource training for components of these skills, but they have yet to be adapted to EM.

Patient Care

Residents need to be evaluated not only for their ability to pick the right intervention for a particular patient complaint, but also for their ability to carry out the appropriate therapeutic intervention. Proposed outcome measures surrounding patient care are highlighted in Table 3. Considering the limited time, personnel, and financial support of residency programs, outcome data should parallel and/or dovetail with the information required for ongoing reporting to regulatory agencies such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and the Centers for Medicare and Medicaid Services (CMS).

Individual resident data, as well as collective residency data, documenting compliance with accepted therapeutic standards can be expressed as percentage metrics. For example, if a patient presents with the chief complaint of chest pain, the metrics that could be documented include compliance with administration of aspirin and beta blockers, as well as the rapid ordering and interpretation of electrocardiograms. Residency programs should identify common EM chief complaints using sources such as The Clinical Practice of Emergency Medicine,16 published clinical policies,17 or national data on chief complaints, most of which, unfortunately, are limited. Steps critical to timely and appropriate diagnosis and management could be identified as metrics for evaluating individual and program-specific outcomes. Metrics must be objective and universally accepted (i.e., not site-specific).
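To make the percentage-metric idea concrete, the following minimal sketch aggregates chart-review abstractions for a chest pain chief complaint into per-resident compliance percentages. It is our illustration rather than a workgroup product: the field names, resident labels, and the convention that missing documentation counts as noncompliant are assumptions; only the aspirin/beta-blocker and 30-minute electrocardiogram targets are drawn from the examples above and Table 3.

```python
# Illustrative sketch only: per-resident percentage-compliance metrics for a
# chest pain chief complaint, computed from chart-review abstractions.
# All field names and sample data are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ChestPainChart:
    resident: str
    aspirin_given: bool            # given, or contraindication documented
    beta_blocker_given: bool       # given, or contraindication documented
    minutes_to_ecg: Optional[int]  # arrival-to-ECG interval; None if undocumented


def compliance_report(charts: list[ChestPainChart], ecg_target_min: int = 30) -> dict:
    """Aggregate per-resident compliance percentages (denominator = charts reviewed)."""
    tallies: dict[str, dict[str, int]] = {}
    for chart in charts:
        t = tallies.setdefault(chart.resident, {"n": 0, "aspirin": 0, "beta_blocker": 0, "ecg": 0})
        t["n"] += 1
        t["aspirin"] += chart.aspirin_given
        t["beta_blocker"] += chart.beta_blocker_given
        # Missing documentation is treated as noncompliant, mirroring the
        # convention used for the core-measure reporting discussed later.
        if chart.minutes_to_ecg is not None and chart.minutes_to_ecg <= ecg_target_min:
            t["ecg"] += 1
    return {
        resident: {m: round(100 * t[m] / t["n"], 1) for m in ("aspirin", "beta_blocker", "ecg")}
        for resident, t in tallies.items()
    }


charts = [
    ChestPainChart("Resident A", True, True, 18),
    ChestPainChart("Resident A", True, False, 41),
    ChestPainChart("Resident B", True, True, None),  # ECG interval not documented
]
print(compliance_report(charts))
# {'Resident A': {'aspirin': 100.0, 'beta_blocker': 50.0, 'ecg': 50.0},
#  'Resident B': {'aspirin': 100.0, 'beta_blocker': 100.0, 'ecg': 0.0}}
```

Run over all charts without grouping by resident, the same aggregation yields the program-level view described above.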
Table 3. Emergency Medicine Relevant Patient Care Competencies

Measure | Data Collection Method or Assessment Technique

Knowledge of proper procedure as defined by preexisting quality assurance programs (e.g., JCAHO, CMS):
- Compliance with medication administration, e.g., aspirin and beta blockers in patients with ACS | RR, S, DO
- Electrocardiogram ordered and interpreted within 30 min of patient arrival | RR
Knowledge of critical components of timely, appropriate diagnosis and management as specified, e.g., in The Clinical Practice of Emergency Medicine or national data on chief complaints:
- Documentation of pulse oximeter reading for patients presenting with shortness of breath | RR
- Administration of oxygen for patients with abnormal pulse oximeter readings | RR, S
- Chest radiograph ordered and properly interpreted in patients with shortness of breath or symptoms consistent with pneumonia | RR, S, CSR
- Urinalysis ordered for patients with lower abdominal or flank pain | RR, S, CSR
- Pregnancy test ordered for all women of childbearing age with abdominal pain | RR, S, CSR
- Vital signs recorded and addressed/treated if abnormal | RR, S
- Serial abdominal examinations performed and documented during a prolonged ED stay for patients with an abdominal pain chief complaint | RR
- Pain documented and treated when present | RR
- Presence or absence of peritoneal signs documented in patients with abdominal pain | RR
- Imaging considered for elderly patients with abdominal pain; if performed, results documented | RR, S, CSR
Universally accepted procedural competencies:
- Endotracheal intubation: documentation that endotracheal tube placement was confirmed by at least two measures; number of attempts and success rate | RR, S, CSR

ACS = acute coronary syndromes; CMS = Centers for Medicare and Medicaid Services; CR = chart review; CSR = chart-stimulated recall; DO = direct observation; JCAHO = Joint Commission on Accreditation of Healthcare Organizations; RR = record review; S = simulation and models.
Outcome measures should not be life-or-death dichotomies, but rather should assess whether the resident's patient care was appropriate and within acceptable norms for EM. An example of a common EM chief complaint is shortness of breath. Measures of appropriate care would include whether the resident obtained, documented, and properly interpreted a pulse oximeter reading and chest radiograph and whether he or she acted upon abnormal results. Another example of a chief complaint, abdominal pain, and its associated appropriate care metrics are elaborated in Table 3. Compliance with these metrics could also be assessed using simulated patient encounters, computer-based simulations, or an oral boards-type setting, many of which already exist in residency programs.

Competency in patient care could be assessed retrospectively by residents performing chart audits using predefined criteria for specific chief complaints. Residents could add these audits to their portfolios along with self-reflection comments, thus enhancing individual academic growth. The program director could gather the data from residents to assess how well the program as a whole teaches patient care related to various chief complaints and make directed educational interventions to correct deficiencies.

In addition to assessing residents' ability to choose the correct procedure for a particular chief complaint, program directors should also assess residents' ability to carry out procedures competently. It is not enough to simply attain a count of completed procedures and to document that number in each resident's semiannual evaluation. Instead, metrics for key procedures should be identified, and residents should be assessed on compliance and complication rates (an example of metrics related to endotracheal intubation that programs might consider can be found in Table 3). Functionally, assessing resident competency with key procedures can be accomplished through a variety of means. Some programs dedicate a day to procedural competency, during which residents are assessed on their ability to perform procedures in a simulated setting. Other programs use checklists to document competency. Regardless of the method used, it is important that key metrics are identified in advance and communicated to the learner and to the faculty assessing the procedural skills.

Practice-Based Learning and Improvement

Practice-based learning refers to the ability to appropriately modify practice based on new literature and patient outcomes and to teach others current medical knowledge and standards. These skills, along with the workgroup's proposed outcome measures, are listed in Table 4.
Table 4. Emergency Medicine Relevant Practice-based Learning and Improvement Competency

Physician Task | Measure | Data Collection Method or Assessment Technique

Analyze and assess practice experience, perform practice-based improvement:
- Impact of PI program | Depends on project goal
- Learner ability to self-reflect, identify deficits, and improve | RR, CSR, P, SP
Locate, appraise, and utilize scientific evidence related to patient health problems:
- Ability to find a specific piece of information | Appraisal of search strategy
- Adherence to evidence-based recommendations from the Cochrane Collaboration and the Agency for Healthcare Research and Quality | RR, CSR
Competency in applying knowledge of study design and statistical methods to appraise medical literature:
- Adherence to the appraisal process, such as described in the JAMA Guides to Medical Literature series | Topic appraisal using EBM techniques
Utilize information technology to enhance learning and improve patient care:
- Number of quantified prescription or order-entry errors | RR
Skilled in facilitating the learning of emergency medicine principles and practice by others:
- Impact of teaching on other practitioners | Teaching evaluations

EBM = evidence-based medicine; CSR = chart-stimulated recall; P = portfolio; PI = performance improvement; RR = record review; SP = standardized patient assessment.
Competence in practice-based learning signifies that one is able to analyze and assess practice experience, reflect upon it, and identify and implement means by which to improve that practice.6 Accurate self-assessment is a critical component of this competency and can be measured by determining a learner's ability to review the care he or she delivered and to identify future improvements for components of care. For instance, through follow-up to identify missed diagnoses, record review to assess adherence to national and local standards, and self-reflection on individual patient encounters via portfolios, a learner's ability to identify and correct suboptimal practice patterns can be assessed. Outcome measures include, but are not limited to, improvements in the metrics outlined in other sections of this article.

Current Residency Review Committee for EM requirements stipulate that "each resident must actively participate in emergency department continuous performance quality improvement (PI) programs."18 A natural extension of this requirement would be the design of outcome measures that evaluate the impact of such a program. Learners at all levels, from medical students to residents, have been found to have an influence on PI initiatives.19 By measuring this influence, one can accurately determine a learner's ability to identify a problem and implement a plan for improvement. One case series describes a cohort of internal medicine residents who identified an overuse of intravenous catheters and then developed an intervention that decreased use from 43% to 27%.20 Because PI projects often impact outcomes involving multiple competencies, these measures may generate results that can be applied across many domains of resident competency acquisition.

Residents must also be able to locate, appraise, and utilize scientific evidence related to the health problems of their patients and of the larger population from which they are drawn. The ability to find pertinent information, to appropriately assess its validity, and to thoughtfully implement it into practice is critical to a practitioner's growth. Outcomes for this skill are tied to the assessment methods used. For example, in assessing one's ability to use tools to find evidence, one could determine the practitioner's ability to find a specific piece of information; assessment of the search technique itself could also be used. Objective assessment of the appraisal and implementation of this evidence is problematic due to the inherent controversies in determining the "gold standard." However, by using objective evidence-based recommendations, such as those collected by organizations like the Cochrane Collaboration and the Agency for Healthcare Research and Quality (AHRQ), one can determine the frequency with which a practitioner deviates from the standard of care for specific diagnoses.

Residents must also show competency in applying knowledge of study design and statistical methods to critically appraise medical literature. Numerous guides exist for systematically applying evidence-based medicine techniques. The inherent subjectivity of the outcomes can be minimized by focusing on the appraisal process rather than on the conclusion. One method of structured appraisal, described in depth, is published in the Users' Guides to the Medical Literature series in the Journal of the American Medical Association (JAMA).21,22 Interpretation of rudimentary statistical tests is included in the board certification process. Another skill is the ability to utilize information technology to enhance learning and improve patient care.
Presumably, the use of information technology should decrease errors. Practitioners must be able to find and use information pertinent to positively impacting patient care. Assessment tools include 360-degree evaluations and practical examinations, which measure the ability to rapidly access pertinent information to guide care.6 Other surrogates for gauging the accuracy of information retrieval could include the examination of errors in prescription writing or order-entry errors, both of which can be quantified.
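As a concrete (and entirely hypothetical) illustration of such quantification, the sketch below converts monthly order counts and flagged errors into a per-resident error-rate trend. Real inputs would come from pharmacy or computerized order-entry logs; the record format shown is our assumption.

```python
# Hypothetical sketch: expressing order-entry errors as a monthly rate per
# resident so that improvement can be tracked over time.

from collections import defaultdict

# (resident, month, orders entered, orders flagged as errors) -- assumed format
monthly_logs = [
    ("Resident A", "2008-01", 220, 9),
    ("Resident A", "2008-02", 240, 5),
    ("Resident B", "2008-01", 180, 4),
]

rates: defaultdict[str, list[tuple[str, float]]] = defaultdict(list)
for resident, month, n_orders, n_errors in monthly_logs:
    rates[resident].append((month, round(100 * n_errors / n_orders, 2)))

for resident in sorted(rates):
    trend = " -> ".join(f"{month}: {rate}%" for month, rate in sorted(rates[resident]))
    print(f"{resident}: {trend}")
# Resident A: 2008-01: 4.09% -> 2008-02: 2.08%
# Resident B: 2008-01: 2.22%
```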
Finally, practice-based learning and improvement means that residents are skilled in facilitating the learning of EM principles and practice by students, colleagues, and other health care professionals. Standard evaluation forms can be used to assess the ability of a practitioner to teach others. To better assess outcomes, however, one would need to determine the impact of the teaching on the other practitioners of the health care team. This can be done in simulated settings using either global or checklist evaluations. Because of the range of specific skills required, several different outcome measures are likely needed to determine the efficiency and accuracy with which one can find and appraise information, apply it to one's practice to maintain the highest standard of care, and disseminate the knowledge to other health care providers.

Professionalism

The workgroup segmented model behaviors of professionalism into those considered most important to patients and their families and those deemed most important to employers and colleagues of EPs. Table 5 highlights the consensus group's proposed measures.

The skills falling under the category of "sensitivity and respect for patients" are: 1) treating patients and family with respect; 2) demonstrating sensitivity to the patient's pain, emotional state, gender, and ethnicity issues; 3) shaking hands with the patient and introducing oneself to the patient and family; 4) showing unconditional positive regard for the patient and family; and 5) being open and responsive to input or feedback from patients and their families. The group agreed that the best assessment methods for evaluating the skills surrounding sensitivity and respect for patients would be the 360-degree evaluation, the Press Ganey Patient Satisfaction survey,14 the SDOT, and any of a number of means to record patient complaints.

The following aspects of professionalism were considered to be important by employers and colleagues: honesty, timely compliance with scheduled requirements, and lack of substance abuse. The group decided that with regard to honesty, outcome measures could include the 360-degree evaluation, the SDOT, patient complaints, and any episode of falsification of medical records. The group noted that lying on the part of physicians is often very difficult to measure.

A number of professional skills fall under "compliance with scheduled requirements," including arriving on time, prepared for work; willingly seeing patients throughout the entire shift; conducting appropriate sign-outs; and punctually completing medical records. The best outcome measures for this skill set are tracking punctuality through time cards or sign-in sheets, reviewing medical records to obtain the number of patients seen per shift or to uncover instances of delinquent charting, and conducting peer evaluations related to sign-outs. Attendance at mandatory meetings and conferences is also an easy outcome to measure by means of a sign-in sheet or roll call.

Appropriate outcome measures regarding substance abuse could include any reported violation of the ED's substance abuse policy and failure to seek treatment when a problem has been identified. Because physician impairment policies vary by state, the standards of each state medical board will dictate specific outcome measures.
Table 5. Emergency Medicine Relevant Professionalism Competency

Physician Task | Measure | Data Collection Method or Assessment Technique

Exhibits professional behaviors toward patients and families:
- Demonstrates sensitivity to patient's pain, emotional state, and gender/ethnicity issues | 360, patient satisfaction surveys, PR
- Shakes hands with patient and introduces himself to patient and family | 360, DO
- Shows unconditional positive regard for patients and families | 360, patient satisfaction surveys, PR
- Remains open/responsive to input/feedback of patients and families | 360, patient satisfaction surveys
Exhibits professional behaviors toward employers and colleagues:
- Honesty | 360, patient complaint
- Arriving to work on time | Time sheets, PR
- Willingly seeing patients throughout entire shift | Chart audit of patients seen in last hour of shift, PR
- Conducting appropriate sign-outs | PR
- Punctually completing medical records (total instances of delinquent charting) | Chart completion audit
- Attending mandatory meetings and conferences | Conference attendance roster audit
- Lack of substance abuse | PR

360 = 360-degree evaluations; DO = direct observation; PR = peer evaluation.
The difficulty in measuring certain aspects of professionalism raises the question of whether these aspects should be measured at all. Assessment and outcome measurement of professionalism are fraught with subjectivity and bias. Group discussion was limited not only in determining which elements of professionalism were most important to measure, but also in deciding which were even possible to measure. For example, it was noted that it is extremely difficult, if not impossible, to measure skills such as recognizing the influence of marketing and advertising, using humor and language appropriately, or properly administering symptomatic care.

Systems-Based Practice (SBP)

Emergency medicine educators can incorporate several measures into their curricula to document progressive improvement with respect to the SBP competency. The proposed measures can be found in Table 6.23–30 Successful outcomes assessment will require the employment of multiple measurement tools and will necessarily vary by institution depending on the relative strengths of each program. The consensus group chose specific criteria for each physician task based on generalizability across programs, acceptance as performance standards based on current guidelines (e.g., AHRQ standards), reliability, validity, and ease of implementation.

The group also identified existing resources that support outcome measures for SBP. Standards of care are available for more than 1,600 diseases on the AHRQ Web site (http://www.ahrq.gov/). Embedded within the site is a link to the National Guideline Clearinghouse (http://www.guideline.gov/), which provides more than 1,800 listings of practice guidelines based on disease, treatment, or quality assessment tools. The AHRQ also has a Web page entirely focused on outcomes and effectiveness (http://www.ahrq.gov/clinic/outcomix.htm).

The Joint Commission on Accreditation of Healthcare Organizations, recently renamed "The Joint Commission," introduced the ORYX31 initiative in February 1997 to integrate outcomes and other performance measurement data into the accreditation process. In addition, ORYX measurement requirements are intended to support Joint Commission-accredited organizations in their quality improvement efforts. In July 2002, accredited hospitals began to collect data on standardized, or "core," performance measures.31 The Hospital Quality Measures currently utilized by the Joint Commission and CMS address acute myocardial infarction (AMI), heart failure, pneumonia, and surgical infection prevention. With respect to EM, the relevant outcomes to be measured for AMI include administration of aspirin and beta blockers, percutaneous transluminal coronary angioplasty within 90 minutes of arrival, or thrombolysis within 30 minutes of arrival. For pneumonia, they include oxygen assessment, blood cultures, antibiotic administration within 4 hours of arrival, and antibiotic choice for intensive care unit (ICU) and non-ICU patients.

One caveat with respect to these measures is that residents cannot control certain aspects of the time-critical events. For instance, time to electrocardiogram (ECG) is institution-dependent, and time to needle from the time of notification is entirely dependent on the invasive cardiologist and the cardiology team framework; therefore, residents can actually be assessed only on timely notification of cardiology. Using the 3-hour window for stroke team activation for tissue plasminogen activator administration, or door-to-needle times for AMI, as examples, a resident's records can be reviewed for timing or documentation of notification of the stroke team after interpretation of the initial head computed tomography (CT) or notification of the catheterization team after interpretation of the initial ECG. However, door-to-needle time as a whole encompasses other institutional factors, such as time to initial ECG and time to arrival of the consulting service. Each of these components is beyond resident control; however, some would argue that these measures could be used as institutional metrics, providing an indicator of the appropriateness of the training environment for graduate medical education.

Other outcomes that could easily be evaluated using record review and checklists, in the case of AMI, include documentation of aspirin and beta-blocker administration. Residents can be evaluated based on their documentation of medication administration in the ED or by out-of-hospital caregivers. If medications were not administered, resident evaluation should be based on documentation of appropriate contraindications. The checklist format allows items to be scored either as binary ("Yes" or "No") or by level of compliance using a Likert-type measurement (total, partial, or incorrect) for each individual parameter. The individual items can then be scored either as a composite (percentage of items performed) or as an all-or-none measurement.32 Missing or incomplete documentation of care is interpreted as not having met the accepted standard.
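The sketch below illustrates the two scoring conventions just described. The checklist items and the half-credit weight assigned to "partial" compliance are our assumptions; the composite and all-or-none logic follows the text.

```python
# Illustrative sketch of checklist scoring: Likert-type compliance levels per
# item, rolled up either as a composite percentage or as all-or-none.
# Item names and the 0.5 weight for "partial" are hypothetical choices.

CREDIT = {"total": 1.0, "partial": 0.5, "incorrect": 0.0}


def composite_score(checklist: dict[str, str]) -> float:
    """Percentage of available credit earned across all checklist items."""
    return round(100 * sum(CREDIT[level] for level in checklist.values()) / len(checklist), 1)


def all_or_none(checklist: dict[str, str]) -> bool:
    """Standard met only if every item was fully performed ('total')."""
    return all(level == "total" for level in checklist.values())


ami_chart = {
    "aspirin given or contraindication documented": "total",
    "beta blocker given or contraindication documented": "partial",
    "ECG obtained and interpreted": "total",
}
print(composite_score(ami_chart))  # 83.3
print(all_or_none(ami_chart))      # False
```

A purely binary checklist is the special case in which only "total" and "incorrect" are used; missing documentation would be recorded as "incorrect," consistent with the convention above.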
Chart-stimulated recall oral examination cases can be tailored to assess resident understanding of specific systems-based issues. Areas of assessment might include the resident's use of clinical decision rules for utilization of diagnostic studies (e.g., NEXUS23 criteria for c-spine clearance) or disposition (e.g., PORT28 score for pneumonia or CIWA29 score for alcohol withdrawal). One outcome measure for the requisite physician skill of multitasking and team management would be time to administration of pain medications.

Core measures for JCAHO and ORYX specify guidelines for performance and outline the way in which quality is to be assessed.31 Using these metrics, a program director can measure individual resident performance and can determine the aggregate performance of the program. The information will yield formative feedback at both the individual and the program levels. Repeat measurement will allow systematic improvement and will provide ample documentation of a systematic approach to improvement for accreditation agencies.

An EM-specific simulation curriculum has been designed to address SBP topics.33 One case involves a patient with a language barrier who suffers from an AMI and who wishes to leave AMA. Another case involves an intoxicated patient with a Level 1 pelvic trauma requiring transport to a specialized facility. SBP issues pertinent to the case include transport protocols,
Table 6. Emergency Medicine Relevant Systems-based Practice Physician Tasks

Physician Task | Measure

Out-of-hospital care:
- Resident discusses relevant information with out-of-hospital providers
- Resident reviews out-of-hospital run sheet
- Documentation of out-of-hospital care (i.e., aspirin and nitroglycerin given in the field)
Modifying factors:
- Resource utilization
- Consultation of interpreter for language barrier
Legal/professional issues:
- Explanation of AMA indications, risks, and benefits
- Explanation of alternative treatments and options
- Documentation of patient capacity for decision-making
- Documentation of invitation to return for recommended treatment
- Documentation of patient handoff at change of shift
Diagnostic studies:
- Consideration of evidence-based decision rules; examples include NEXUS C-spine rules,27 Ottawa ankle rules,28 Ottawa knee rules,29 and Canadian Head CT rules30
- Documentation of deviation from decision rules
- Documentation of procedures
Consultation and disposition:
- Timely notification of cardiac catheterization team for AMI
- Timely notification of stroke team for acute CVA
- Utilization of PSI31 or PORT32 score in CAP for disposition
- CIWA33 score for alcohol withdrawal
Consultant interactions:
- Appropriateness of consultation
- Documentation of indications for consultation
- Timely disposition (admission or discharge)
Prevention and education:
- Appropriate discharge instructions written for understandability at the patient's level
- Discharge instructions document a follow-up provider
- Discharge instructions provide an explanation of medications
- Reasons to return for further care
- Appropriate discharge medications provided for key medical conditions, e.g., steroids/MDI in asthma, antibiotic choice for indication
Multitasking and team management:
- JCAHO ORYX Core measures34
  - AMI: administration of aspirin and beta blockers in AMI; PTCA within 90 min of arrival; thrombolysis within 30 min of arrival
  - Community-acquired pneumonia: oxygen assessment; blood cultures; initial antibiotic administration