Journal of Research in Medical Education & Ethics Vol. 7, No. 2, July, 2017, pp-74-84

DOI: 10.5958/2231-6728.2017.00014.2

REVIEW ARTICLE

Assessment of Clinical Competence: Supplementing Existing Tools

Sarika Gupta1*, Rajiv Mahajan2, Tejinder Singh3

*Corresponding author email id: [email protected]

ABSTRACT

As the primary task of medical personnel is to correctly diagnose and appropriately treat a patient, a proper definition of clinical competence and its components is needed to validate the goals of medical education programmes. The Accreditation Council for Graduate Medical Education (ACGME) selected and endorsed a set of competencies to help define the foundational skills every practising physician should possess: patient care, medical knowledge, interpersonal and communication skills, professionalism, practice-based learning and improvement, and systems-based practice. It is evident that assessment drives learning, and what is not assessed is not learnt. The objective of assessing the six core competencies drafted by the ACGME is comprehensive and meaningful assessment of learners. To select assessment tools that measure meaningful aspects of performance, one must have an informed understanding of the strengths and weaknesses of the various assessment tools and techniques. The key concepts in assessing competence are validity, reliability and the utility or usefulness of a given tool for the purpose of assessment. Although tools that focus on specific components of competencies are important in identifying baseline skill sets, equally important is assessment of the learner's ability to put these skills together to perform the expected professional activities. Considering the differing utility indices of different assessment methods, medical educationalists are continuously looking for newer methods of assessing competence. In the end, any assessment method is justified if it enriches learning, boosts the learner's assertiveness and fosters curricular change.

Keywords: Assessment methods, Assessment, Competence, Miller's pyramid, Patient care, Practice-based learning, Professionalism

INTRODUCTION

The primary task of medical personnel is to correctly diagnose and appropriately treat a patient. This necessitates a proper definition of clinical competence and its components to validate the goals of medical education programmes[1]. It also helps ensure that medical personnel achieve the desired minimum level of proficiency. Competence is a broad term involving 'habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values and reflection in daily practice for the benefit of the individual and community being served'[2–4]. The Accreditation Council for Graduate Medical Education (ACGME) identifies six areas of competency that make a clinically competent doctor[5]: patient care, medical knowledge, interpersonal and communication skills (ICS), professionalism, practice-based learning and improvement (PBLI) and systems-based practice (SBP)[2].

1Assistant Professor, Department of Pediatrics, KGMU, Lucknow, Uttar Pradesh, India; 2Professor & Head, Department of Pharmacology, Adesh Institute of Medical Sciences & Research, Barnala Road, Bathinda, Punjab, India; 3Professor of Pediatrics and Professor of Medical Education, Christian Medical College and Hospital, Ludhiana, Punjab, India


Patient Care

Patient care is the capability to provide compassionate, appropriate and effective care for the treatment of health problems and the promotion of health.

Medical Knowledge

Medical knowledge implies demonstration of knowledge of established and evolving biomedical, clinical, epidemiological and social-behavioural sciences, and of their application to patient care.

Interpersonal and Communication Skills (ICS)

ICS is the skill of effective exchange of information and collaboration with patients, their families, the public and health professionals. It includes the ability of a student to work as a leader or consultant to a professional group when the situation requires. Another element is the maintenance of timely and comprehensible medical records.

Professionalism

Professionalism is the demonstration of empathy, integrity, respect for others, responsiveness to patient needs, respect for patient privacy and autonomy, accountability to patients, society and the profession, and sensitivity and unbiased responsiveness to a diverse patient population.

Practice-based Learning and Improvement (PBLI)

PBLI is continuous improvement in patient care based on constant self-evaluation of patient care, integrated with scientific evidence and lifelong learning. It is understood as the ability to execute the following series of steps in a continuous cycle: (1) determine improvement needs, (2) identify and apply an intervention, (3) self-monitor practice behaviour and practise reflectively and (4) measure the impact of the intervention and inform the next cycle[6].

Systems-based Practice (SBP)

SBP is the demonstrated awareness of and responsiveness to the larger context and system of health care, as well as the ability to call effectively on other resources in the system to provide optimal health care. Students are expected to: work effectively in various health care delivery settings and systems relevant to their clinical specialty; coordinate patient care within the health care system relevant to their clinical specialty; incorporate considerations of cost awareness and risk–benefit analysis in patient and/or population-based care as appropriate; advocate for quality patient care and optimal patient care systems; work in inter-professional teams to enhance patient safety and improve the quality of patient care; and participate in identifying system errors and implementing potential systems solutions.

WHY ASSESSMENT OF CLINICAL COMPETENCY

Dr XYZ, a brilliant MBBS graduate, was selected for the post of Medical Officer on the basis of his academic performance and was posted in a primary health centre. As fate would have it, within the first week of his posting, during an emergency night duty, he received two victims of a roadside accident. Though he tried to the best of his abilities, he faced difficulty in managing the cases and had to call upon a senior colleague. The next day, reporting the case to the police again proved challenging for him. On probing by the Senior Medical Officer, he disclosed that though he had been taught about handling medico-legal cases in Forensic Medicine during the second professional year, this was never assessed practically. Even during the emergency posting in internship, skills were not part of the assessment. Throughout the undergraduate course, all stress was laid on assessment of the cognitive domain, and clinical competency was never assessed.


The above scenario may look hypothetical, but it reflects the true current picture of medical education, particularly in India. We all know that assessment drives learning, and what is not assessed is not learnt. If clinical competency is not assessed at all, how are we going to get 'clinically competent' medical graduates?

Miller's pyramid provides a framework of the hierarchical levels of clinical competence within which assessment is done. In Miller's framework, the lowest level of the pyramid is knowledge (knows), followed by concept building and understanding (knows how), competence to perform (shows how) and actual performance (does)[7]. Assessment plays an important role in identifying learning needs and directing learning at each level within this framework[8]. Assessment is also the cornerstone of in-course improvement and corrective measures. Feedback is the single criterion with the greatest impact on learning[9]. This reinforces the role of frequent formative assessment during the course itself; assessment of clinical competence during the course, at various levels commensurate with the level of competence achieved, provides an opportunity for formative assessment, feedback and in-course corrections to the learner[10,11].

ATTRIBUTES OF ASSESSMENT

The attributes of a good assessment tool are validity[12], which in simple terms means 'fit for the purpose' (assessing what the tool is intended to assess), and reliability, that is, the consistency and repeatability of the findings[13]. The further characteristics of educational effect, feasibility and acceptability are added for the appraisal of any assessment method[14,15]. The educational effect of an assessment is the students' inspiration to do well and the direction it gives their study efforts. Feasibility is the measure of affordability and efficiency for the purpose. Acceptability is the extent to which the stakeholders involved in the process (students, faculty, patients and practitioners) approve of the measure[16]. The utility of an assessment tool is a measure of the product of all the attributes mentioned above[17,18]. The weightage of the different attributes for any assessment method depends on the purpose for which the tool is used[19]: for summative purposes, more weightage is given to reliability, whereas for formative purposes, more weightage is given to educational impact.
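Van der Vleuten's utility model[17] is often summarised as a conceptual product of these attributes. One common rendering (a notational sketch rather than a computable formula; the symbols are chosen here for illustration) is

U = R × V × E × A × C

where U is utility, R reliability, V validity, E educational impact, A acceptability and C cost efficiency (feasibility). The multiplicative form conveys that if any one attribute approaches zero, the overall utility of the tool collapses, however strong the others are.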

ASSESSMENT OF THE CORE COMPETENCIES

The objective of assessing the six core competencies drafted by the ACGME is comprehensive and meaningful assessment of learners. The progression to competence is a developmental process marked by the achievement of milestones along the way. Developmental models help in the early identification of slow learners, so that timely intervention and remediation can be done to help the learner towards self-efficacy[20–23] (Table 1). While using an assessment method for an individual competency, one must be aware of the overlap with the assessment of other competencies, the application of the utility model, and alignment with the goals and resources of the institute[24].

Patient Care

The two major approaches to assessing the 'patient care' competency are observation of clinical performance and performance tests[25–30] (Table 2).

Medical Knowledge

This is the competency that faculty identify as the most comfortable to assess[31]. Some of the tools for assessing medical knowledge are elaborated in Table 3.

Table 1: Developmental models for identifying progression to competence

The Dreyfus and Dreyfus model[20]: Describes the progression of skill from novice, through advanced beginner, competent, proficient and expert, to master. Each step is defined by a characteristic behaviour.

Reporter–Interpreter–Manager–Educator (RIME) model of cognitive and skill development[21]: Assesses the student's performance in relation to patient care. At every level, the student needs to be consistently professional and have good interpersonal skills. It sets the framework for assessing procedural skill, in which decision-making is foundational.

Identity development model[22]: Particularly helpful in providing insight into the developmental progression of professionalism and communication skills.

Entrustable professional activities[23]: Represent the integration of the six individual competencies in the clinical context of the professional activities of the specialty, providing a meaningful way of both teaching and assessing overall clinical competence. The word 'entrustable' is an essential element of this framework: it allows the faculty member to determine when the trainee is competent to perform the professional activity without direct supervision and can therefore be entrusted to do so.

Table 2: Tools for assessment of the competency of patient care

Mini-clinical evaluation exercise (mini-CEX)[25]: A facilitator/trained observer directly observes a specific trainee–patient interaction. Strengths: direct observation; immediate contextual feedback. Limitations: difficulty in arranging human resources.

Global ratings[26]: Rating scales, mostly Likert scales, used to assess performance in authentic clinical settings based on multiple observations over time. Strengths: provide formative feedback to learners specific to real-life observation; also assess integrative function. Limitations: require extensive rater training and sensitised faculty; limited direct observation; rater bias; inaccurate recall.

Checklist evaluation[27]: Specific aspects of performance in a real setting are rated by yes/no items. Strengths: provides detailed feedback to students; especially helpful for technical skills assessment. Limitations: difficulty in weighting the scores for various items; expertise needed to develop a valid checklist.

Objective structured clinical examination (OSCE)[28]: Candidates rotate sequentially around a series of structured cases located in 'stations', at each of which specific tasks have to be performed, usually involving a clinical skill such as history taking, examination of a patient or a practical skill. Strengths: allows standardisation across learners, control of the cases and the ability to provide complexity; good validity. Limitations: time and resource intensive; careful planning needed.

Multi-source assessment (360-degree evaluation)[29,30]: Checklist forms completed by different types of assessors, including patients, families, peers, nurses and hospital staff; may include self-assessment. Strengths: triangulation and formative feedback to learners. Limitations: time intensive; high inter-rater variability; staff need training.


Practice-based Learning and Improvement

As PBLI is a complex set of sub-competencies, it requires an assembly of assessment methods rather than a single approach[32]. The best approach is to demonstrate improvement through repeated use of the same assessment tool over the period of training. The various methods for assessing PBLI are summarised in Figure 1.

Interpersonal and Communication Skills

Communication is important in all aspects of patient care. Effective communication leads to patient and physician satisfaction, improved quality of care, better patient compliance with treatment plans, reductions in medical errors and better management of chronic diseases.

Assessment tools for ICS include checklists, patient surveys, simulated patients, video/audiotapes, self-reflection, case discussions, empathy and emotional intelligence scales, role modelling/role play, multi-source evaluations and the objective structured clinical examination[33,34].

The checklist is the most frequently used tool, allowing the observer to rate the performance of a student on a numeric scale. It can serve the dual purpose of both formative and summative assessment[35].

The patient survey is an important tool to assess ICS, as the assessor is personally involved in the interaction and relationship with the physician. A patient can address many components of the physician's ICS, such as professionalism and humanism[34], and the survey complements faculty assessments of a learner. However, this assessment can be biased by the patient's perceived health status.

Other tools used to assess ICS are the Calgary–Cambridge Observation Guides, the Wayne State Medical Interviewing Inventory and the SEGUE (Set the stage, Elicit information, Give information, Understand the patient's perspective, End the encounter) framework[36–38].

Table 3: Tools to assess the competency of medical knowledge[31]

Multiple-choice questions (MCQs): The multiple-choice item consists of two components, a stem and a series of alternatives; the stem can be an incomplete statement or a question. In stand-alone or independent items, the stem is the only stimulus material; in testlet-based or case-based multiple-choice items, additional material is provided as a stimulus. Strengths: readily developed; efficiently computer-scored; sample widely from an extensive volume of knowledge; can be used as formative feedback for both learner and teacher. Limitations: the cueing effect in MCQ items does not replicate the real practice of clinical medicine; chances of ambiguously written test questions; opportunity to guess the correct answer randomly; questionable effectiveness in measuring higher-order thinking.

Essays: A clinical scenario is provided that poses a question or problem. Strengths: assess higher-order cognitive skills. Limitations: difficult scoring; subjective rating; require trained raters.

Short-answer questions: The problem is provided as a short statement. Strengths: assess higher-order cognitive skills. Limitations: difficult scoring; subjective rating; require trained raters.

Figure 1: Tools for assessment of practice-based learning and improvement (self and peer assessment, portfolios, case-based individual essays, case-based care plans, triple jump, tripartite assessment, QIPAT-7, case report poster and group presentation).

Professionalism

Professionalism is a developmental sequence of professional behaviours that mature over time with experience. It can be assessed as individual elements of professionalism (professionalisation, professional conduct, cultural competence, humanism) or as a single construct attempting to assess professionalism globally.

Tools for assessing professionalism include critical incidents, peer assessment, the mini-clinical evaluation exercise and multi-source assessment[39–44]. One specific tool to assess professionalism is the professionalism mini-evaluation exercise, a structured observation tool consisting of 21 check items, each rated on a four-point scale of unacceptable, below expectations, met expectations and exceeded expectations[45].
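To make the structure of such a rating form concrete, the sketch below shows one way a P-MEX-style observation could be recorded and summarised. The item wordings and the 1–4 numeric mapping are illustrative assumptions; the published instrument[45] defines the actual 21 check items.

```python
# Sketch of recording a P-MEX-style structured observation.
# The item wordings and the 1-4 numeric mapping are illustrative
# assumptions; the published instrument[45] defines the actual 21 items.
RATING_SCALE = {
    1: "unacceptable",
    2: "below expectations",
    3: "met expectations",
    4: "exceeded expectations",
}

# Three hypothetical items out of the 21, with the observer's ratings.
observed = {
    "listened actively to the patient": 4,
    "maintained appropriate professional boundaries": 3,
    "completed assigned tasks in a timely fashion": 2,
}

for item, score in observed.items():
    print(f"{item}: {RATING_SCALE[score]}")

mean_rating = sum(observed.values()) / len(observed)
print(f"Mean rating across observed items: {mean_rating:.2f}")
```

Such a numeric summary is only a convenience for tracking trends across encounters; the formative value lies in discussing the individual item ratings with the trainee.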

Systems-based Practice

Assessment of SBP depends on multiple inputs from faculty, peers, families and medical staff who are aware of the behavioural goals. The assessment tools for SBP are chart-stimulated recall, self-reflection, peer assessment, individual patient advocacy and clinic-based projects[46].

The tool of individual patient advocacy is based on the fact that the willingness of a student to assist in the improved care of a patient, by pursuing follow-up of management plans, can be assessed by faculty, nurses or other medical staff. Advocacy for improved systems is demonstrated when residents follow through on their own suggestions for process improvement, report safety concerns and engage in projects to enhance system functioning[47].

Clinic-based projects involve a team of residents, together with a faculty mentor, working on a project intended to enhance the system of patient care. The team is expected to complete the project within a year and present it. Team-based projects allow residents not only to learn about system complexity but also to disseminate system improvements[48].

INNOVATIONS

Considering the differing utility indices of different assessment methods, medical educationalists are continuously looking for newer methods of assessing competence[49].


The innovations to assess clinical competence include multi-method assessment; assessment of clinical reasoning in situations that involve clinical uncertainty; standardised patient (SP) exercises linked to post-encounter probes of pathophysiology and clinical reasoning; exercises to assess use of the medical literature; long-station SP exercises; simulated continuity; teamwork exercises; unannounced SPs in clinical settings; assessments by patients; peer assessment of professionalism; portfolios of videotapes; mentored self-assessment; and remediation based on a learning plan[49]. Some of these methods are described in detail in the following sections.

Simulation

The use of simulation is progressively increasing as a way to assess the cognitive, psychomotor and affective domains together[50]. Simulation is of two types: SP-based and computer-based.

Standardised Patients

An SP is a trained person who precisely depicts the status of a patient with a particular medical problem. On the basis of the interaction between the SP and the student, the SP as well as the teacher assesses the performance over a number of domains, including history taking, physical examination and ICS. Scores are given on a scoring checklist specific to each patient problem. This form of assessment provides students with valuable information on their ability to think critically, their interpersonal skills in working with patients and their ability to diagnose and develop a treatment plan. Reliability and validity improve with an adequate number of standardised cases[51,52].

Computer-based Simulations

Computer-based simulations range from static manikin heads for intubation to elaborate computer-based systems responsive to the trainee's actions. A systematic review of the uses of high-fidelity simulation emphasises the integration of simulations into the curriculum plan[53].

Triple Jump Exercise

The triple jump exercise (TJE)[54] has two variants: clinical and preclinical[55,56]. The clinical TJE consists of three distinct phases or 'jumps'. In the first phase, the trainee interviews and examines a patient under the observation of faculty. The second phase consists of writing up the findings from the patient assessment in the 'SOAP' (subjective data, objective data, assessment, plans) format, documenting evidence from the literature to support decisions, and submitting this document to the faculty. The final phase is a clinical examination conducted by the same faculty, in which students are asked about the pathophysiology, diagnosis and treatment of the patient's problems and to discuss the evidence. The preclinical TJE involves reading a scenario depicting a patient with a health problem, identifying the key issues and writing a researchable question in the PICO format (patient with problem, intervention, comparison and outcome), followed by a literature search, writing the answer and critically appraising the quality of the available evidence. Students receive a score for each jump and a cumulative score. The TJE assesses problem-specific analytical and knowledge-application skills. However, it is time consuming to develop and logistically difficult to implement, requiring considerable numbers of trained faculty.

Peer Assessment

Students assess each other's performance by the use of checklists or rating scales. This is best suited to assessing professionalism, communication and interpersonal skills and health promotion competencies, as the behaviour of trainees varies when they are not observed[57]. It is also useful in problem-based learning and group learning. This tool has raised awareness of professional behaviour and has helped trainees to identify specific behaviours that can be improved[58]. A minimum of eight peer assessments is required to ensure its utility; the level of knowledge and ability of the peers is a limiting factor.


Student Self-assessment

Students appraise their own performance against a set of criteria, using rating scales and narratives reflecting on their performance, on lessons learned and on strategies for improving future performance. It is best suited to the assessment of competencies that do not involve demonstration of highly precise technical skills[59,60]. A threat to validity is incomplete training of the trainee in self-assessment; it is also confounded by the trainee's sense of self-efficacy and self-confidence.

Global Rating

This is a series of scales on which examiners and/or SPs rate the competence of an examinee in a performance-based assessment. The scale typically ranges from 1, indicating that the competency of interest was not achieved successfully, to 5, indicating that most or all aspects of the competency were achieved successfully[61]. It is effective for evaluating competencies related to critical thinking, communication and interpersonal skills and professionalism. Its strength is its suitability for evaluating general behaviours in a variety of settings; its limitation is the potential for subjectivity in ratings.

Multi-method and Longitudinal Assessment

A multi-method assessment consists of direct observation of the student interacting with several patients with different clinical problems; a multiple-choice examination to assess clinical reasoning; an encounter with an SP followed by an oral examination to assess clinical skills; written essays based on literature searches and synthesis of the literature on the basic science or clinical aspects of the diseases the student encountered; and peer assessments to provide insights into interpersonal skills and work habits[62]. Clinical context variation and input from multiple observers provide information on distinct aspects of a trainee's performance. Longitudinal assessment helps in monitoring ongoing professional development.

Key Feature Testing

Key feature testing focuses on critical decision-making in a given clinical setting, identifying the decision-making steps essential to the successful resolution of the problem. This method improves on multiple-choice questions in that it allows more than one correct answer for a given clinical setting[63,64].

Script Concordance Testing

Script concordance testing assesses the organisation of clinical knowledge in the mind of the student. Students are presented with a clinical scenario, and new elements of information are provided in a stepwise fashion. It is a good predictor of clinical reasoning skills. Grading is done by comparing the concordance of the student's responses with those of a panel of experts[65,66].
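The concordance-based grading is often operationalised with aggregate (partial-credit) scoring, as in the construction guidelines cited above[66]: a response earns credit in proportion to the number of panel experts who chose it, normalised so that the modal panel answer earns full credit. A minimal sketch, with an invented expert panel, might look like this:

```python
# Hedged sketch of aggregate scoring for one script concordance test item.
# The panel data are invented for illustration; see Fournier et al.[66]
# for construction guidelines.
from collections import Counter

def sct_item_score(panel_responses, examinee_response):
    """Credit for one item on a -2..+2 Likert scale: the examinee's answer
    earns the fraction of experts who chose it, relative to the modal answer."""
    counts = Counter(panel_responses)
    modal_frequency = max(counts.values())
    return counts.get(examinee_response, 0) / modal_frequency

# Ten hypothetical experts judged how a new finding affects a diagnosis
# (-2 = practically rules it out ... +2 = practically confirms it).
panel = [+1, +1, +1, +1, +1, +2, +2, 0, 0, -1]
print(sct_item_score(panel, +1))  # modal answer -> 1.0 (full credit)
print(sct_item_score(panel, +2))  # 2 experts chose it, modal is 5 -> 0.4
print(sct_item_score(panel, -2))  # chosen by no expert -> 0.0
```

Item scores are summed across the test, so the examinee is rewarded for reasoning that concords with the spread of expert judgement rather than with a single 'correct' answer.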

Computer-based Testing

Computer-based testing assesses the problem-solving skills of the student, who is asked to select appropriate items of history, examination and investigation before making a diagnosis and finalising the management plan. Its advantages are authenticity and the ability to use multimedia in questions, but it is expensive, and the questions can become recallable to students[67].

Video-taped Clinical Encounters

This method offers a rich learning experience for trainees but has several challenges: it requires the informed consent of the patient and a room equipped for video recording, so it is not cost effective, and it requires quality time to review the encounters. However, viewing their own performance provides influential feedback to students[68].


CONCLUSION

In the end, any assessment method is justified if it enriches learning, boosts the learner's assertiveness and fosters curricular change.

REFERENCES

[1] Frank JR, Danoff D. The CanMEDS initiative: implementing an outcomes-based framework of physician competencies. Medical Teacher 2007;29:642–7.
[2] Epstein RM, Hundert EM. Defining and assessing professional competence. Journal of the American Medical Association 2002;287:226–35.
[3] LaDuca A, Taylor DD, Hill IK. The design of a new physician licensure examination. Evaluation & the Health Professions 1984;7:115–40.
[4] Kane M. Model-based practice analysis and test specifications. Applied Measurement in Education 1997;10:5–18.
[5] Accreditation Council for Graduate Medical Education. ACGME outcome project: enhancing residency education through outcomes assessment: general competencies; 1999 [accessed 1 May 2017]. Available from: http://www.acgme.org/outcome/comp/compFull.asp.
[6] Mahajan R, Anshu, Gupta P, Singh T. Practice-based learning and improvement (PBLI) in postgraduate medical training: milestones, instructional and assessment strategies. Indian Pediatrics 2017;54:311–8.
[7] Miller GE. The assessment of clinical skills/competence/performance. Academic Medicine 1990;65:S63–7.
[8] Singh T, Modi JN. Workplace based assessment: a step to promote competency based training. Indian Pediatrics 2013;50:553–9.
[9] Hattie JA. Identifying the salient facets of a model of student learning: a synthesis of meta-analyses. International Journal of Educational Research 1987;11:187–212.
[10] Miller A, Archer J. Impact of workplace based assessment on doctors' education and performance: a systematic review. British Medical Journal 2010;341:c5064.
[11] Kassebaum DG, Eaglen RH. Shortcomings in the evaluation of students' clinical skills and behaviors in medical school. Academic Medicine 1999;74:841–99.
[12] Downing SM. Validity: on the meaningful interpretation of assessment data. Medical Education 2003;37:830–7.
[13] Downing SM. Reliability: on the reproducibility of assessment data. Medical Education 2004;38:1006–12.
[14] van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Medical Education 2005;39:309–17.
[15] Downing SM, Yudkowsky R. Assessment in health professions education. New York: Routledge; 2009.
[16] Norcini JJ, McKinley DW. Assessment methods in medical education. Teaching and Teacher Education 2007;23:239–50.
[17] van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Advances in Health Sciences Education 1996;1:41–67.
[18] van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Medical Education 2005;39:309–17.
[19] Epstein RM. Assessment in medical education. New England Journal of Medicine 2007;356:387–96.
[20] Carraccio CL, Benson BJ, Nixon LJ, Derstine PL. From the educational bench to the clinical bedside: translating the Dreyfus developmental model to the learning of clinical skills. Academic Medicine 2008;83:761–7.
[21] Pangaro L. Investing in descriptive evaluation: a vision for the future of assessment. Medical Teacher 2000;22:478–81.
[22] Forsythe G. Identity development in professional education. Academic Medicine 2005;80:S112–7.
[23] Ten Cate O, Scheele F. Competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Academic Medicine 2007;82:542–7.
[24] Raymond M, Neustel S. Determining the content of credentialing examinations. In: Downing SM, Haladyna TM, editors. Handbook of test development. Mahwah (NJ): Lawrence Erlbaum Associates; 2006.
[25] Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: a method for assessing clinical skills. Annals of Internal Medicine 2003;138(6):476–81.
[26] Ringsted C, Østergaard D, Ravn L, Pedersen JA, Berlac PA, van der Vleuten CP. A feasibility study comparing checklists and global rating forms to assess resident performance in clinical skills. Medical Teacher 2003;25(6):654–8.
[27] Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. Journal of the American Medical Association 2009;302(12):1316–26.
[28] Harden R, Stevenson M, Downie W, Wilson M. Assessment of clinical competence using objective structured examinations. British Medical Journal 1975;1:447–51.
[29] Musick DW, McDowell SM, Clark N, Salcido R. Pilot study of a 360-degree assessment instrument for physical medicine & rehabilitation residency programs. American Journal of Physical Medicine and Rehabilitation 2003;82:394–402.
[30] Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Academic Medicine 2009;84(3):301–9.
[31] Schuwirth LWT, van der Vleuten CPM. Different written assessment methods: what can be said about their strengths and weaknesses? Medical Education 2004;38:974–9.
[32] Lynch DC, Swing SR, Horowitz SD, Holt K, Messer JV. Assessing practice-based learning and improvement. Teaching and Learning in Medicine 2004;16(1):85–92.
[33] Participants in the Bayer–Fetzer Conference on Physician–Patient Communication in Medical Education. Essential elements of communication in medical encounters: the Kalamazoo consensus statement. Academic Medicine 2001;76(4):390–3.
[34] Duffy FD, Gordon GH, Whelan G, Cole-Kelly K, Frankel R, et al. Assessing competence in communication and interpersonal skills: the Kalamazoo II report. Academic Medicine 2004;79(6):495–507.
[35] Chan TM, Wallner C, Swoboda TK, Leone KA, Kessler C. Assessing interpersonal and communication skills in emergency medicine. Academic Emergency Medicine 2012;19(12):1390–402.
[36] Accreditation Council for Graduate Medical Education. The ACGME outcome project: advancing education in interpersonal and communication skills; 2005. Available from: http://www.acgme.org/outcome/implement/interpercomskills.pdf.
[37] Skillings JL, Porcerelli JH, Markova T. Contextualizing SEGUE: evaluating residents' communication skills within the framework of a structured medical interview. Journal of Graduate Medical Education 2010;2(1):102–7.
[38] Makoul G. The SEGUE framework for teaching and assessing communication skills. Patient Education and Counseling 2001;45:23–34.
[39] Whitcomb ME. Fostering and evaluating professionalism in medical education. Academic Medicine 2002;77:473–4.
[40] Lynch D, Surdyk P, Eiser A. Assessing professionalism: a review of the literature. Medical Teacher 2004;26:366–73.
[41] Hickson GB, Federspiel CF, Pichert JW, Miller CS, Gauld-Jaeger J, Bost P. Patient complaints and malpractice risk. Journal of the American Medical Association 2002;287:2951–7.
[42] Cruess RL, McIlroy JH, Cruess S, Ginsburg S, Steinert Y. The professionalism mini-evaluation exercise: a preliminary investigation. Academic Medicine 2006;81:S74–8.
[43] Brinkman WB, Geraghty SR, Lanphear BP, Khoury JC, Gonzalez del Rey JA, DeWitt TG, et al. Effect of multisource feedback on resident communication skills and professionalism. Archives of Pediatrics and Adolescent Medicine 2007;161:44–9.
[44] Musick DW, McDowell SM, Clark N, Salcido R. Pilot study of a 360-degree assessment instrument for physical medicine and rehabilitation residency programs. American Journal of Physical Medicine and Rehabilitation 2003;82:394–402.
[45] Cruess R, McIlroy JH, Cruess S, Ginsburg S, Steinert Y. The professionalism mini-evaluation exercise: a preliminary investigation. Academic Medicine 2006;81(Suppl 10):S74–8.
[46] Guralnick S, Ludwig S, Englander R. Domain of competence: systems-based practice. Academic Pediatrics 2014;14:S70–9.
[47] Oberg CN. Pediatric advocacy: yesterday, today, and tomorrow. Pediatrics 2003;112:406–9.
[48] Delphin E, Davidson M. Teaching and evaluating group competency in systems-based practice in anesthesiology. Anesthesia and Analgesia 2008;106:1837–43.
[49] Epstein RM, Hundert EM. Defining and assessing professional competence. Journal of the American Medical Association 2002;287(2):226–35.
[50] Issenberg SB, McGaghie WC, Petrusa ER, Gordon DL, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher 2005;27:10–28.
[51] Ferrell BG. Clinical performance assessment using standardized patients: a primer. Family Medicine 1995;27:14–9.
[52] van der Vleuten CPM, Swanson DB. Assessment of clinical skills with standardized patients: state of the art. Teaching and Learning in Medicine 1990;2:58–76.
[53] Scalese RJ, Obeso VT, Issenberg B. Simulation technology for skills training and competency assessment in medical education. Journal of General Internal Medicine 2008;23(Suppl 1):46–9.
[54] Smith RM. The triple-jump examination as an assessment tool in the problem-based medical curriculum at the University of Hawaii. Academic Medicine 1993;68:366–72.
[55] Feletti G, Ryan G. Triple jump exercise in inquiry-based learning: a case study. Assessment and Evaluation in Higher Education 1994;19(3):225–34.
[56] Rangachari PK. The TRIPSE: a process-oriented evaluation for problem-based learning courses in the basic sciences. Biochemistry and Molecular Biology Education 2002;30(1):57–60.
[57] Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. Journal of the American Medical Association 1993;269:1655–60.
[58] Asch E, Saltzberg D, Kaiser S. Reinforcement of self-directed learning and the development of professional attitudes through peer and self-assessment. Academic Medicine 1998;73:575.
[59] Kaiser S, Bauer JJ. Checklist self-evaluation in a standardized patient exercise. American Journal of Surgery 1995;169:418–20.
[60] Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Academic Medicine 1991;66:762–9.
[61] Gray J. Global rating scales in residency education. Academic Medicine 1996;71:S55–63.
[62] Epstein RM, Dannefer EF, Nofziger AC, et al. Comprehensive assessment of professional competence: the Rochester experiment. Teaching and Learning in Medicine 2004;16:186–96.
[63] Bordage G, Page G. An alternate approach to PMPs: the key feature concept. In: Hart I, Harden R, editors. Further developments in assessing clinical competence. Montreal: Can-Heal Publications; 1987. pp. 57–75.
[64] Farmer E, Page G. A practical guide to assessing clinical decision-making skills using the key features approach. Medical Education 2005;39:1188–94.
[65] Charlin B, van der Vleuten CPM. Standardized assessment of reasoning in contexts of uncertainty: the script concordance approach. Evaluation and the Health Professions 2004;27:304–19.
[66] Fournier JP, Demeester A, Charlin B. Script concordance tests: guidelines for construction. BMC Medical Informatics and Decision Making 2008;8:18.
[67] Cantillon P, Irish B, Sales D. Using computers for assessment in medicine. British Medical Journal 2004;329(7466):606–9.
[68] Hammoud MM, Morgan HK, Edwards ME, Lyon JA, White C. Is video review of patient encounters an effective tool for medical student learning? A review of the literature. Advances in Medical Education and Practice 2012;3:19–30.

Received: 12.06.2017 Accepted: 24.06.2016