BREAKOUT SESSION

Assessing Practice-based Learning and Improvement

David H. Salzman, MD, Douglas S. Franzen, MD, MEd, Katrina A. Leone, MD, and Chad S. Kessler, MD, MHPE

Abstract Assessment of practice-based learning and improvement (PBLI) is a core concept identified in several competency frameworks. This paper summarizes the current state of PBLI assessment as presented at the 2012 Academic Emergency Medicine consensus conference on education research in emergency medicine. Based on these findings and consensus achieved at the conference, seven recommendations have been identified for future research. ACADEMIC EMERGENCY MEDICINE 2012; 19:1403–1410 © 2012 by the Society for Academic Emergency Medicine

An important goal in education is teaching learners how to continue learning once they have finished formal training. “The main thrust of modern adult-educational technology is in the direction of inventing techniques for involving adults in ever-deeper processes of self-diagnosis of their own needs for continued learning, in formulating their own objectives for learning, in sharing responsibility for designing and carrying out their learning activities, and in evaluating their progress toward their objectives.”1 Practice-based learning and improvement (PBLI) has the broadest scope of all the Accreditation Council for Graduate Medical Education (ACGME) core competencies.

From the Department of Emergency Medicine, Northwestern University, Feinberg School of Medicine (DHS), Chicago, IL; the Department of Emergency Medicine, Virginia Commonwealth University Health System (DSF), Richmond, VA; the Department of Emergency Medicine, Oregon Health and Science University (KAL), Portland, OR; the Department of Emergency Medicine, Jesse Brown VA Hospital (CSK), Chicago, IL; and the Department of Emergency Medicine, University of Illinois-Chicago (CSK), Chicago, IL. Received July 10, 2012; accepted July 12, 2012. The list of breakout session participants can be found as the appendix of a related article on page 1486. This paper reports on a workshop session of the 2012 Academic Emergency Medicine consensus conference, “Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success,” May 9, 2012, Chicago, IL. The authors have no relevant financial information or potential conflicts of interest to disclose. Supervising Editor: John Burton, MD. Address for correspondence and reprints: David H. Salzman, MD; e-mail: [email protected].

© 2012 by the Society for Academic Emergency Medicine doi: 10.1111/acem.12026

The ACGME emergency medicine (EM) program requirements list 10 subcompetencies,2 which can be interpreted to cover a range from individuals to entire health care systems and to span a resident’s entire career. The overall goal is to help residents develop lifelong skills to identify areas for improvement in their own practice, through both external feedback and self-reflection, and then to work to improve those areas through self-directed learning, quality improvement, and education of themselves and others. The concept of self-directed improvement based on a self-performance assessment is continued in the assessment of practice performance mandated as part of the maintenance of certification for practicing physicians. Thus, the habits and values emphasized in the PBLI competency serve as a foundation for the rest of a physician’s career.

The wide range of practice-based learning is reflected in the fact that it is not a separate core competency in the CanMEDS framework. All of the elements of the ACGME’s PBLI competency can be mapped to elements of CanMEDS, with the exception of IV.A.5.c).(7) and (9): use information technology to optimize learning/improve patient care. This mapping spans four different elements of the CanMEDS system—the scholar, professional, manager, and communicator roles (see Table 1).

Assessment of PBLI as a whole is difficult because of this broad scope. Evaluation tools for various subsets of the components of PBLI have been described, but assessing all of the elements of PBLI with one tool has proven difficult. Demonstrating “improvement” also requires showing change over time, which many standard assessment tools are not built to measure. This leaves us with a choice: we can use the same tool to assess a learner at several different points in time, provided that the tool is designed to clearly distinguish different levels of performance and is used consistently, or we can use less familiar assessment tools that are designed to demonstrate a learner’s progress over time.


Table 1
ACGME and CanMEDS Competencies Comparison

ACGME PBLI Subcompetencies:
1. Identify strengths, deficiencies, and limits in one’s knowledge and expertise.
2. Set learning and improvement goals.
3. Identify and perform appropriate learning activities.
Corresponding CanMEDS: SCHOLAR—Enabling Competencies: Physicians are able to … 1. Maintain and enhance professional activities through ongoing learning: 1.1. Describe the principles of maintenance of competence; 1.2. Describe the principles and strategies for implementing a personal knowledge management system; 1.3. Recognize and reflect learning issues in practice; 1.4. Conduct a personal practice audit; 1.5. Pose an appropriate learning question; 1.7. Integrate new learning into practice; 1.8. Evaluate the impact of any change in practice; 1.9. Document the learning process.

ACGME PBLI Subcompetencies:
4. Systematically analyze practice using quality improvement methods and implement changes with the goal of practice improvement.
5. Incorporate formative evaluation feedback into daily practice.
Corresponding CanMEDS: MANAGER (“quality assurance and improvement”)—Key Competency 1: Participate in activities that contribute to the effectiveness of their health care organizations and systems; Enabling Competency 1.2: Participate in systemic quality process evaluation and improvement, such as patient safety initiatives. PROFESSIONAL—Enabling Competency 1.2: Demonstrate a commitment to delivering the highest quality care and maintenance of competence.

ACGME PBLI Subcompetencies:
6. Locate, appraise, and assimilate evidence from scientific studies related to their patients’ health problems.
7. Apply knowledge of study design and statistical methods to critically appraise the medical literature.
Corresponding CanMEDS: SCHOLAR—1.6. Access and interpret the relevant evidence. Enabling Competency 2: Critically evaluate medical information and its sources, and apply this appropriately to practice decisions: 2.1. Describe the principles of critical appraisal; 2.2. Critically appraise retrieved evidence in order to address a clinical question; 2.3. Integrate critical appraisal conclusions into clinical care.

ACGME PBLI Subcompetency:
8. Participate in the education of patients, families, students, residents, and other health professionals.
Corresponding CanMEDS: SCHOLAR—Enabling Competency 3: Facilitate the learning of patients, families, students, residents, other health professionals, the public, and others, as appropriate: 3.1. Describe principles of learning relevant to medical education; 3.2. Collaboratively identify the learning needs and desired learning outcomes of others; 3.3. Select effective teaching strategies and content to facilitate others’ learning; 3.4. Demonstrate an effective lecture or presentation; 3.5. Assess and reflect on a teaching encounter; 3.6. Provide effective feedback; 3.7. Describe the principles of ethics with respect to teaching. COMMUNICATOR—Key Competency 3: “Accurately convey relevant information and explanations to patients and families, colleagues and other professionals.”

ACGME = Accreditation Council for Graduate Medical Education; PBLI = practice-based learning and improvement.

In this article we summarize the findings and recommendations of the 2012 Academic Emergency Medicine consensus conference breakout session regarding the ACGME core competency of PBLI. We discuss tools currently available for the assessment of PBLI, the limitations of these instruments, the limitations of the current research, and suggestions for tool development and further research.

METHODS

Before the consensus conference, a MEDLINE search was performed using the initial search term “practice-based learning and improvement.” No articles containing this string were found.

“Practice based learning” yielded 258 references. This was further limited by adding the terms “eval*” or “assess*,” resulting in 188 references. Combining these results with the search term “emergency” resulted in only 18 references. A search of MedEdPORTAL using “practice based learning” yielded 26 results. From these various searches, relevant articles were identified. The bibliographies of relevant articles were reviewed for any additional pertinent references. Based on these available articles, a summary of available assessment tools was developed. Finally, the authors developed a list of seven items to guide a research agenda within the scope of this competency. These seven items were presented and modified during the breakout session on Assessment of Observable Learner Performance.


Historical and Current Methods of Evaluation in EM

In 2002, a consensus conference was convened by the Council of Emergency Medicine Residency Directors (CORD) in conjunction with the American Board of Emergency Medicine (ABEM), the Residency Review Committee (RRC-EM), and the Society for Academic Emergency Medicine (SAEM) to discuss appropriate methods of assessment for each of the ACGME core competencies.3 This group combined some of the subcompetencies of PBLI to create five subcategories. Suggested methods of evaluation for each subcategory were graded on a three-level scale: 1) the most desirable method, 2) the next best method, or 3) a potentially applicable method (see Table 2 for a summary of the recommendations). It is important to note that 1) these were simply recommendations of tools that could potentially be used, 2) portfolios are the only tool listed as grade one for all five subcategories, and 3) global rating scales are mentioned only as part of a 360-degree evaluation. Very little has been published on the actual use of any of these methods to evaluate PBLI in EM.

Four years later, the section on PBLI in the summary of the 2006 CORD consensus workgroup on outcome assessment was essentially an updated version of Hayden’s paper: full of suggestions about ways that PBLI could be evaluated, as opposed to descriptions of how PBLI was actually being evaluated or appraisals of those assessment tools.4 The only reference to an actual assessment was from internal medicine.

Global Ratings

In 2002, Swing5 stated that “global ratings are the most widely used method of assessment in graduate medical education and in EM residencies” based on data from a survey conducted in 1998.


The paucity of publications about assessing PBLI in EM suggests that little has changed since then. Global assessments have been shown to be a valid way of assessing the core competencies in EM.6,7 However, a single global assessment cannot demonstrate improvement. It is important to note that the CORD Standardized Direct Observation Tool (SDOT), a global assessment developed by a CORD task force and used by many programs, does not assess PBLI.8

One potential way to overcome this problem is to use the same tool at different points in time. For example, a global assessment was used to compare the performance of EM interns on an objective structured clinical examination (OSCE) designed to assess the core competencies with their performance in specific core competencies (including PBLI) later in residency, which was also rated with a global assessment tool.9 To use a global assessment this way, a rubric must clearly define the expected levels of performance so that improvement can be clearly demonstrated (i.e., the learner moved from performing at one level to performing at a higher level as defined by the rubric). A series of global assessments stating that a learner is performing at a “satisfactory” level throughout his or her training does not demonstrate improvement.

Evidence-based Medicine

The processes involved in locating, appraising, and incorporating scientific evidence into practice are often discussed as “evidence-based medicine” (EBM). Unlike the rest of PBLI, assessments of the EBM subcomponent of PBLI in EM have been studied and published.

Table 2
Grades of Assessment

PBLI subcategories assessed:
1. Analyze and assess your practice experience and perform practice-based improvement.
2. Locate, appraise, and utilize scientific evidence.
3. Apply knowledge of study design and statistical methods to critically appraise medical literature.
4. Utilize information technology to enhance your education and improve patient care.
5. Facilitate the learning of students, colleagues, and other health care professionals in EM principles and practice.

Grade one methods of assessment (most desirable): —Record review —Chart-stimulated recall —M&M conferences —Portfolios —Portfolio/journal club —EBM exercise —Chart-stimulated recall —Portfolio/journal club —EBM exercise —Multiple-choice questions —Oral/practical exam —360° global rating —Portfolio —Oral/practical exam —Portfolio —Checklist evaluation of bedside teaching

Grade two methods of assessment (next best): —Procedure logs —Case logs —360° global rating —Record review —Multiple-choice question exam —Oral/practical exam —Checklist evaluation of live performance —Computer simulation —Record review —Lecture evaluation —Simulation

Grade three methods of assessment (potentially applicable): —Patient survey questionnaires —Simulators

EBM = evidence-based medicine; EM = emergency medicine; M&M = morbidity & mortality.


A recent survey reported that 29% of EM residency programs use structured critical appraisal instruments as part of teaching EBM,10 but it did not mention whether residents were assessed on their ability to use the instruments. Tools have been developed to assess the ability to locate scientific evidence by tracking searches performed in MEDLINE.11 In another study, residents were taught EBM skills and then assigned to clinical shifts during which they performed formal literature searches related to active management questions. Evidence compiled and presented by these residents influenced a change in management by the primary ED team in 16.3% of cases.12 Such tools have not been directly shown to improve patient outcomes, which is the core of PBLI.

Methods of Evaluation in Other Specialties

Record Reviews. Resident peer review of charts has the potential benefit of serving as an instructional tool: in the process of performing a chart audit, the individual resident becomes aware of practice guidelines and has an opportunity to reflect on and apply the concepts to his or her own patient care.13

A surgical program used a Web-based chart review program to calculate a “batting average” as a way to track improvement. After each consult, each senior resident was asked to document his or her own impression and plan on a Web-based platform prior to discussing the case with an attending physician. Once the final diagnosis or outcome was determined, each resident reviewed his or her initial impression and plan and was awarded 0.5 points for each element that was correct (1.0 if both impression and plan were correct). The program tracks a cumulative and a 3-month moving average, allowing residents to monitor improvement over time14 (a schematic sketch of this scoring appears at the end of this subsection).

A university-based program provided residents with individual reports of their patient characteristics and performance in different categories of preventive medicine and disease management, based on aggregate patient data from a systemwide data warehouse. Comparison reports for residents in the same clinical group, the same year of training, all residents, and faculty were also available, allowing residents to track their performance against a variety of benchmarks. Ninety-four percent of residents indicated that the electronic tool was useful for learning PBLI. However, less than half (46%) thought that the data accurately reflected their practice.15,16

Morbidity and mortality (M&M) conferences are designed to encourage practice improvement through analysis of and reflection on the outcomes of specific cases. Because regular M&M conferences are required by the ACGME as part of the educational program, formalizing and integrating the results of the conference discussion into a postconference quality improvement project could serve as a platform for a PBLI activity. This has not been described in EM but has been documented in other specialties.17–19 Assessment can be done using forms, logs, case summaries, or the development of an improvement plan, implementation of which is then used as evidence that improvement has occurred (see also “Projects”).
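The “batting average” described above is, at heart, simple running arithmetic. The sketch below (in Python) illustrates only that arithmetic; the entry fields, the 90-day window used to approximate a 3-month period, and all names are assumptions made for illustration and are not taken from the Web-based application described in the surgical program.14

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List


@dataclass
class ConsultEntry:
    """One logged consult: the resident's initial call versus the final outcome."""
    seen_on: date
    impression_correct: bool
    plan_correct: bool

    @property
    def score(self) -> float:
        # 0.5 points for a correct impression, 0.5 for a correct plan (1.0 if both)
        return 0.5 * self.impression_correct + 0.5 * self.plan_correct


def batting_average(entries: List[ConsultEntry]) -> float:
    """Cumulative average score over all logged consults."""
    return sum(e.score for e in entries) / len(entries) if entries else 0.0


def recent_average(entries: List[ConsultEntry], as_of: date, window_days: int = 90) -> float:
    """Trailing moving average (~3 months), so recent improvement is visible."""
    cutoff = as_of - timedelta(days=window_days)
    return batting_average([e for e in entries if cutoff <= e.seen_on <= as_of])


# Hypothetical log for one resident
log = [
    ConsultEntry(date(2012, 1, 15), impression_correct=False, plan_correct=False),
    ConsultEntry(date(2012, 2, 20), impression_correct=True, plan_correct=False),
    ConsultEntry(date(2012, 4, 10), impression_correct=True, plan_correct=True),
    ConsultEntry(date(2012, 5, 2), impression_correct=True, plan_correct=True),
]
print(f"cumulative batting average: {batting_average(log):.2f}")
print(f"trailing 3-month average:   {recent_average(log, date(2012, 5, 2)):.2f}")
```

Displaying the cumulative and trailing averages side by side is what makes improvement over time visible: the trailing average rises quickly as recent accuracy improves, even while the cumulative average moves slowly.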

OSCE and Standardized Patients. During an OSCE, examinees rotate through a circuit of stations where they may have to perform an action, answer questions, or interact with a standardized patient, while examiners remain at one station. These two elements allow some control over the variability of patients and examiners, allowing more objective assessment of the examinee.20 The benefit of an assessment using an OSCE is that the learner is provided an “opportunity for the demonstration of skills at Miller’s assessment level of ‘shows how’ rather than testing of knowledge alone.”19 Previous consensus within EM has indicated that standardized patients4 or OSCEs3 could be used as a method of assessing practice-based learning, despite a lack of evidence identifying the reliability or validity of such measurements within EM.

A fellowship program in endocrinology developed an assessment program that used an OSCE format to measure competencies related to quality improvement. However, despite excellent content validity and high inter-rater reliability, the authors found low generalizability, which may have been due to the specificity of the cases developed for the OSCE. They nevertheless encouraged the use of an OSCE as a component of the overall assessment of a learner’s competency in PBLI activities.21 Another limitation of OSCEs is the amount of resources they require to develop and administer. One innovative solution for decreasing the effort and cost of setting up an OSCE is the use of video clips housed on and transmitted via the Internet instead of standardized patients—the objective structured video examination (OSVE).22 This also increases the consistency of patient interactions, but at the cost of decreased realism and flexibility.

Portfolios. Portfolios are a collection of evidence that demonstrates one’s efforts or achievements and can be used to assess progress. “They are inherently practice-based learning since residents must review and consider their practice in order to begin the portfolio.”23 Portfolios may include plans for improvement and, later, evidence that those plans have been implemented and the results thereof. Assembling a portfolio is an exercise in self-reflection because it requires “looking back and analyzing what one has accomplished.”24 Reviewing a portfolio with a mentor is an excellent way for a resident to get formative feedback, “a necessary component of the evaluation of competence.”25 Portfolios have been used throughout education, and in other countries they have been used to track physicians’ continuing professional development.26 Perhaps the most pertinent evidence of the value of the portfolio as an assessment tool is that the ACGME will require documentation of performance-level milestones in an electronic “Advanced Learning Portfolio” as part of the Milestones initiative.27

Portfolios can be extremely inclusive or relatively simple. One internal medicine program noted that its existing assessment tools (which included monthly and 360-degree evaluations, as well as a direct observation tool) did not allow longitudinal assessment of PBLI “with the degree of specificity required to document progress.”28 The program leadership developed a portfolio system that tracked progress in a number of areas. Content included, among other things, samples of clinical documentation, quality improvement projects, action plans from direct observations, evidence-based medicine searches, and critical incidents.


Residents were encouraged to reflect on those items that demonstrated growth, and each resident met with a mentor three times a year to review and discuss the portfolio. At the other end of the spectrum, one surgical residency program described a portfolio that consisted of 12 entries per year, based on cases of the resident’s choice. Each entry asked for a case history, diagnostic studies, a three-item differential diagnosis, management options, three lessons learned, and a brief elaboration of one of those three lessons. Portfolios were evaluated with a rubric. Analysis of 420 entries from 35 residents showed that residents at all training levels “demonstrated reflection and understanding of the topic chosen.”29

There are numerous concerns about the use of portfolios. Perhaps the biggest is that, by its very nature, a portfolio is documentation that there is room for improvement. “Documentation of suboptimal performance might be useful for individualized learning plans and professional development, but it could be detrimental to a physician’s career if used as evidence for medical malpractice lawsuits.”30 It is uncertain whether statutes protecting peer review apply to the content of a portfolio. Maintaining patient privacy and doctor–patient confidentiality are additional concerns. Residents concerned about such issues “may be reluctant to engage in and document true self-reflection and criticism.”29 The same may hold true if portfolios are used for high-stakes assessment. One possible solution is to have a separate formative reviewer (“coach”) and summative reviewer (“evaluator”).28

One of the biggest challenges to portfolio assessment is that the variability and uniqueness of the materials presented (supportive documentation and reflective comments) by each learner make consistent assessment difficult. In one review of portfolio use among trainers of general practitioners in the United Kingdom, “reliability of individual assessors’ judgments (i.e., their consistency) was moderate, but inter-rater reliability did not reach a level that would support a safe summative judgment.”31 Attempting to standardize content defeats the purpose: “Too much specific obligatory content makes portfolios bureaucratic, with the result that they both fail to serve any educational purpose and force learners to search for content outside their direct and lived experiences.” One potential way to increase the reliability of the assessments is to review portfolios through paired discussions instead of individual assessments.31 Lack of faculty experience with portfolio review may also contribute to inconsistency, in addition to being a hurdle for portfolio implementation.24,28,29 However, this can be overcome through faculty development.32
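As a concrete illustration of the structure that such case-based entries impose, the sketch below models one entry and a simple completeness pre-check. The field names and the check are hypothetical assumptions; the published program specified the content of an entry and a scoring rubric, not a data schema.29

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PortfolioEntry:
    """One case-based portfolio entry (12 per year in the surgical program described above).
    Field names are illustrative assumptions, not a published schema."""
    case_history: str
    diagnostic_studies: List[str]
    differential_diagnosis: List[str]  # expected to list three diagnoses
    management_options: List[str]
    lessons_learned: List[str]         # expected to list three lessons
    expanded_lesson: str               # brief elaboration of one chosen lesson


def completeness_gaps(entry: PortfolioEntry) -> List[str]:
    """Hypothetical structural pre-screen run before a mentor applies the actual rubric."""
    gaps = []
    if len(entry.differential_diagnosis) < 3:
        gaps.append("differential diagnosis lists fewer than three items")
    if len(entry.lessons_learned) != 3:
        gaps.append("entry should record exactly three lessons learned")
    if not entry.expanded_lesson.strip():
        gaps.append("no elaboration of a chosen lesson")
    return gaps
```

Automating only the structural check would keep the substantive judgments (reflection, accuracy, depth) with the faculty reviewer, which is where the reliability problems discussed above actually lie.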


Projects. One other potential method to demonstrate improvement is to develop and complete an improvement project. Such projects are used in quality improvement: residents “measure and describe processes and outcomes of care for their own patients, identify places in their own practice that can be changed, apply improvement to their own panel (of patients), and use balanced measures to show changes have improved care.”33 Projects may be identified through self-reflection, developed with assistance from faculty, or even chosen from a menu of existing institutional quality improvement efforts.34 Although development and completion of such projects can be time-consuming, they assess the upper levels of the Miller pyramid.35 As an additional benefit, projects for individual improvement may overlap with the requirement for participation in continuous performance quality improvement programs under systems-based practice.

Evaluation in Other Countries

As previously discussed, practice-based learning is not a separate core competency in the CanMEDS framework. The Scottish Doctor project (http://www.scottishdoctor.org/) similarly has no concept that parallels PBLI, but it includes many of the same components, again mapped across multiple content areas. While methods of assessing the various CanMEDS and Scottish Doctor components have been developed, it would be difficult to tease out the subsections of those assessment tools that correspond to PBLI as a whole while maintaining the validity of the tools. We may be able to borrow assessment ideas from the CanMEDS system, but we will likely have to build new assessment tools from those developed for the relevant subsections, rather than simply modifying existing, published, validated assessments of the CanMEDS roles.

FUTURE DIRECTIONS

Ten years after the CORD working group identified potential ways to assess PBLI in the context of an EM residency program, there is still little published description of the use of any such tools in an EM residency. Numerous opportunities exist for research to further explore the assessment of PBLI. Many other specialties have described methods of assessing PBLI and the advantages and disadvantages of these methods. On the surface, many of these methods seem applicable to EM residents, but whether these methods are, in practice, applicable to EM, or whether they will have the same successes or difficulties, is currently unknown. The most successful approach to demonstrating achievement of this competency will likely be a combination of many different assessment tools that together form a reliable and valid measurement of the learner’s performance.

The assessment of performance and achievement of competency within this area remains challenging because of the lack of validated, reliable tools at the educator’s disposal for evaluating learners. Complicating this further is the fact that “objective assessment of appraisal and implementation of this evidence is problematic due to the inherent controversies in determining the ‘criterion standard.’”4 Thus, considerable effort must be directed not only toward applying these concepts to the assessment of PBLI in EM, but also toward the psychometric analysis of any assessment tool to ensure its validity and reliability and, where appropriate, toward setting defensible standards.

Based on the lack of reliable tools with evidence of validity for assessment of this competency in EM, our review of the literature, and the results of the consensus conference, we propose the following items as a research agenda.


1. Characterize and disseminate the methods currently used by EM programs to assess the PBLI competency.
The paucity of published studies describing the tools EM educators have used to assess this competency highlights the importance of establishing what the other members of our society are currently doing. Descriptions of how programs are currently providing evidence of these required activities, along with each program’s discussion of pros, cons, and areas for potential improvement or further development, would be a good first step toward developing better tools. A complete characterization might reveal tools that are useful to all programs and could help identify areas for future research.

2. Determine the applicability to EM of existing tools used to assess this competency in other specialties, and validate them in this setting.
Other specialties have used a variety of tools, including global ratings, medical record review, OSCEs, portfolios, and projects, to assess the progress of learners through this competency. Research is needed to identify which of these types of assessment are most applicable to EM. Additionally, because these tools have been used for a different group of learners, the validity and reliability of the measurements with regard to EM residents will need to be determined.

3. Develop and validate other reliable processes for assessing this competency in EM.
While many of the potential assessment tools mentioned here seem applicable to PBLI within EM, it is possible that they will not sufficiently assess our group of learners, and new processes might have to be developed to meet this need. Such processes might incorporate the concepts presented below.

4. Develop methods for incorporating the effort expended on required activities, such as patient follow-up logs, M&M conferences, and quality improvement projects, into a formal assessment process.
The existing requirements for follow-up and M&M conferences offer excellent opportunities to publish about the assessment and documentation of such activities, which fall under PBLI. The requirement for participation in ED continuous performance quality improvement programs under the systems-based practice competency might also offer opportunities to monitor improvement at the individual level (in addition to the system level). Research into developing and assessing methods of capturing and documenting improvement resulting from these activities could provide useful tools to assess PBLI. If properly designed, such tools would require minimal effort beyond that already being expended to meet ACGME and RRC requirements.

5. Develop methods that integrate with medical records to gather patient outcome information and provide reporting to individual providers.
As graduate medical education transitions from a competency-based model to one of reaching milestones, having access to actual data about medical decision-making and ultimate patient outcomes will be crucial in demonstrating that residents have reached a specific milestone.

Developing computerized systems that interface with patient records and can harness the data already available in the medical record could benefit learners and instructors by making this information accessible without taking time away from patient care to enter data. Ideally, these systems would be automated and the process could occur in real time. Not only would this be beneficial at the residency training level, but it could also benefit attending physicians by providing patient care metrics useful for the maintenance of certification process. Further investigation should address how to implement such a system effectively and make the process as useful as possible in providing meaningful data to learners and program leadership. While the metrics could potentially be useful, care in a training environment is generally provided by a team, and one potential problem with an automated system is separating an individual’s performance from systemic factors.

6. Investigate the benefits, limitations, validity, reliability, liability, and patient confidentiality issues specific to the use of portfolios for assessment of PBLI within EM.
Portfolios appear to be an ideal tool for assessing PBLI. They are the only well-studied tool for assessing a resident’s ability to self-reflect, one of the core components of PBLI. Portfolios will be an important part of the ACGME Milestones initiative and have been used successfully in other specialties, but they are not widely used in EM. Before embracing portfolios, institutions will have to decide how best to implement them: What content should be included? How will portfolios be used for assessment? How will we assess a resident’s ability to self-assess? Gathering data about why EM residencies are not using portfolios would be a first step. Publishing about experiences with portfolios will help demonstrate their benefits and problems, pointing out further possibilities for future research. Systems will need to be developed that give the learner a medium in which to present his or her work and that encourage reflection, while still maintaining a learning environment that is not punitive to the resident (for documenting suboptimal performance), is protected from discovery, and maintains patient privacy. Furthermore, the portfolio process will need to be usable both by the resident who is compiling the artifacts and by the faculty member who is reviewing the learner’s performance. Prior to full inclusion in a summative assessment of a learner’s performance, the validity and reliability of portfolio assessment should be investigated, as previous research has indicated that “individual assessments are consistent but show only fair inter-rater reliability and are untrustworthy in high-stakes assessment.”31 However, an inability to establish defensible standards should not prohibit the use of portfolios as another element in the toolbox available to assess PBLI. While mechanisms to improve reliability, validity, and usability are being investigated, the portfolio may be better used for formative assessment.


7. Develop reliable ways to assess the EBM subcomponents of PBLI and the correlation between improved EBM skills and patient outcomes.
The ability to locate, critically appraise, and use scientific evidence from the medical literature is an essential part of improving one’s practice. Published data suggest that EBM lends itself to familiar methods of assessment, such as multiple-choice questions or review of focused searches. However, linking practiced EBM techniques with improved clinical outcomes is more difficult. More research is needed to determine and demonstrate the contribution of improved EBM skills to practice improvement.

CONCLUSIONS

Although the practice-based learning and improvement competency seemingly has the largest scope of the core competencies, the body of research published within this domain is arguably the smallest. There are opportunities to develop strategies to integrate and formalize many of the assessments that are currently in use, to provide validity evidence for tools used in other specialties, and to develop new processes for assessment. Even with the transition from the traditional competencies to the Next Accreditation System and the Milestones Project, the themes discussed in this consensus conference article will remain relevant. While perhaps not limited to any single competency, these themes will cut across milestones and provide further opportunities to define levels of progress through residency training. Developing useful methods of assessment will not only help provide feedback to learners on their progression, but may also serve as a mechanism to highlight the importance of these lifelong learning activities throughout an emergency physician’s career.

The authors appreciate the contributions of the attendees at the 2012 Academic Emergency Medicine consensus conference “Education Research In Emergency Medicine: Opportunities, Challenges, and Strategies for Success.”

References

1. Knowles MS. What Is Andragogy? In: The Modern Practice of Adult Education: Andragogy Versus Pedagogy. Englewood Cliffs, NJ: Cambridge Adult Education, 1970.
2. ACGME. Program Requirements for Graduate Medical Education in Emergency Medicine. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/110emergencymed07012007.pdf. Accessed Nov 4, 2012.
3. Hayden SR, Dufel S, Shih R. Definitions and competencies for practice-based learning and improvement. Acad Emerg Med. 2002; 9:1242–8.
4. Hobgood C, Promes S, Wang E, Moriarity R, Goyal D. Outcome assessment in emergency medicine–a beginning: results of the Council of Emergency Medicine Residency Directors (CORD) Emergency Medicine Consensus Workgroup on Outcome Assessment. Acad Emerg Med. 2008; 15:267–77.
5. Swing S. Assessing the ACGME General Competencies: general considerations and assessment methods. Acad Emerg Med. 2002; 9:1278–88.


6. Reisdorff E, Carlson D, Reeves M, Walker G, Hayes O, Reynolds B. Quantitative validation of a general competency composite assessment evaluation. Acad Emerg Med. 2004; 11:881–4.
7. Reisdorff E, Reynolds B, Hayes O, et al. General competencies are intrinsic to emergency medicine training: a multicenter study. Acad Emerg Med. 2003; 10:1049–53.
8. Shayne P, Gallahue F, Rinnert S, Anderson CL, Hern G, Katz E. Reliability of a core competency checklist assessment in the emergency department: the Standardized Direct Observation Assessment Tool. Acad Emerg Med. 2006; 13:727–32.
9. Wallenstein J, Heron S, Santen S, Shayne P, Ander D. A core competency–based objective structured clinical examination (OSCE) can predict future resident performance. Acad Emerg Med. 2010; 17(Suppl 2):S67–71.
10. Carpenter C, Kane G, Carter M, Lucas R, Wilbur LG, Graffeo CS. Incorporating evidence-based medicine into resident education: a CORD survey of faculty and resident expectations. Acad Emerg Med. 2010; 17(Suppl 2):S54–61.
11. Rana G, Bradley D, Lypson M. Validated Ovid Medline Search Assessment Tool. Available at: https://www.mededportal.org/publication/8588. Accessed Oct 1, 2012.
12. Friedman S, Sayers B, Lazio M, Friedman S, Gisondi M. Curriculum design of a case-based knowledge translation shift for emergency medicine residents. Acad Emerg Med. 2010; 17(Suppl 2):S42–8.
13. Paukert J, Chumley-Jones HS, Littlefield J. Do peer chart audits improve residents’ performance in providing preventive care? Acad Med. 2003; 78(10 Suppl):S39–41.
14. Wu BJ, Dietz PA, Bordley J, Borgstrom DC. A novel, web-based application for assessing and enhancing practice-based learning in surgery residency. J Surg Educ. 2009; 66:3–7.
15. Lyman JA, Schorling J, Nadkarni M, May N, Scully K, Voss J. Development of a web-based resident profiling tool to support training in practice-based learning and improvement. J Gen Intern Med. 2008; 23:485–8.
16. Baumgart LA, Bass E, Lyman J, et al. Supporting physicians’ practice-based learning and improvement (PBLI) and quality improvement through exploration of population-based medical data. Proc Hum Fact Ergon Soc Annu Meet. 2010; 54:845–9.
17. Kauffmann RM, Landman MP, Shelton J, et al. The use of a multidisciplinary morbidity and mortality conference to incorporate ACGME general competencies. J Surg Educ. 2011; 68:303–8.
18. Bevis K, Straughn JM, Kendrick J, Walsh-Covarrubias J, Kilgore L. Morbidity and mortality conference in obstetrics and gynecology: a tool for addressing the 6 core competencies. J Grad Med Educ. 2011; 3:100–3.
19. Rosenfeld JC. Using the morbidity and mortality conference to teach and assess the ACGME general competencies. Curr Surg. 2005; 62:664–9.


20. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. BMJ. 1975; 1:447–51.
21. Varkey P, Natt N, Lesnick T, Downing S, Yudkowsky R. Validity evidence for an OSCE to assess competency in systems-based practice and practice-based learning and improvement: a preliminary investigation. Acad Med. 2008; 83:775–80.
22. Simpson D. Objective Structured Video Examination (OSVE) Toolkit for Teaching and Assessing the ACGME Competencies. Available at: https://www.mededportal.org/publication/118. Accessed Sep 23, 2012.
23. O’Sullivan P, Greene C. Portfolios: possibilities for addressing emergency medicine resident competencies. Acad Emerg Med. 2002; 9:1305–9.
24. Driessen E, van Tartwijk J, van der Vleuten C, Wass V. Portfolios in medical education: why do they meet with mixed success? A systematic review. Med Educ. 2007; 41:1224–33.
25. Carraccio C, Englander R. Analyses/literature reviews: evaluating competence using a portfolio: a literature review and web-based application to the ACGME competencies. Teach Learn Med. 2004; 16:381–7.
26. Dornan T, Carroll C, Parboosingh J. An electronic learning portfolio for reflective continuing professional development. Med Educ. 2002; 36:767–9.
27. ACGME. Sowing the Seeds, 2008–2009 Annual Report. Available at: https://www.acgme.org/acgmeweb/Portals/0/PDFs/an_2008–09AnnRep.pdf. Accessed Oct 5, 2012.
28. Donato AA, George DL. A blueprint for implementation of a structured portfolio in an internal medicine residency. Acad Med. 2012; 87:1–7.
29. Webb TP, Merkley TR. An evaluation of the success of a surgical resident learning portfolio. J Surg Educ. 2012; 69:1–7.
30. Nagler A, Andolsek K, Padmore JS. The unintended consequences of portfolios in graduate medical education. Acad Med. 2009; 84:1522–6.
31. Pitts J, Coles C, Thomas P, Smith F. Enhancing reliability in portfolio assessment: discussions between assessors. Med Teach. 2002; 24:197–201.
32. Holmboe E, Ward DS, Reznick RK, et al. Faculty development in assessment: the missing link in competency-based medical education. Acad Med. 2011; 86:460–7.
33. Ogrinc G, Headrick LA, Mutha S, Coleman MT, O’Donnell JO, Miles PV. A framework for teaching medical students and residents about practice-based learning and improvement, synthesized from a literature review. Acad Med. 2003; 78:748–56.
34. Ogrinc G, Headrick LA, Morrison LJ, Foster T. Teaching and assessing resident competence in practice-based learning and improvement. J Gen Intern Med. 2004; 19:496–500.
35. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990; 65(9 Suppl):S63–7.