Selection and assessment

Predictive validity of the Dundee multiple mini-interview

Adrian Husbands & Jonathan Dowell
CONTEXT The multiple mini-interview (MMI) is the primary admissions tool used to assess non-cognitive skills at Dundee Medical School. Although the MMI shows promise, more research is required to demonstrate its transferability and predictive validity, for instance relative to other UK pre-admissions measures.

METHODS Applicants were selected for interview based on a combination of measures derived from the Universities and Colleges Admissions Service (UCAS) form (academic achievement, medical experience, non-academic achievement and references) and the UK Clinical Aptitude Test (UKCAT) in 2009 and 2010. Candidates were selected into medical school according to a weighted combination of UKCAT, UCAS form and MMI scores. Examination scores were matched for 140 and 128 first- and second-year students, respectively, who took the 2009 MMIs, and for 150 first-year students who took the 2010 MMIs. Pearson's correlations were used to test the relationships between pre-admission variables, examination scores and demographic variables, namely gender and age. Statistically significant correlations were adjusted for range restriction and were used to select variables for multiple linear regression analysis to predict examination scores.

RESULTS Statistically significant correlations ranged from 0.18 to 0.34, and from 0.23 to 0.50 unrestricted. Multiple regression confirmed that MMIs remained the most consistent predictor of medical school assessments. No scores derived from the UCAS form correlated significantly with examination scores.

CONCLUSIONS This study reports positive findings from the largest undergraduate sample to date. The MMI was the most consistent predictor of success in the early years of medical school across two separate cohorts. The UKCAT and UCAS forms showed minimal or no predictive ability. Further research in this area appears worthwhile, with longitudinal studies, replication of results from other medical schools and more detailed analysis of knowledge, skills and attitudinal outcome markers.
Medical Education 2013; 47: 717–725 doi: 10.1111/medu.12193
Division of Clinical and Population Sciences and Education, University of Dundee, Dundee, UK
Correspondence: Adrian Husbands, Division of Clinical and Population Sciences and Education, University of Dundee, The Mackenzie Building, Kirsty Semple Way, Dundee DD2 4BF, UK. Tel: 00 44 1382 383751; E-mail: [email protected]
INTRODUCTION
It is widely accepted that so-called 'non-cognitive' or 'non-academic' attributes (such as interpersonal skills and moral reasoning) are important for medical school selection in addition to academic achievement.1 The multiple mini-interview (MMI), developed and introduced at McMaster University in 2004, has since been adopted by Dundee as the primary pre-admissions measure for this purpose, and other UK schools are increasingly following suit. MMIs aim to assess a broad array of candidates' personal characteristics through ratings of multiple snapshots of behaviour in an objective structured clinical examination (OSCE)-like rotational approach. This type of interview was first introduced by Eva et al.2 in response to the need for an interview process with robust psychometric properties, which most traditional interviews lack. By testing a larger sample of content with multiple independent interviewers, MMIs can offer a more accurate picture of a candidate's behaviour.3 With compelling evidence on reliability and other satisfactory psychometric properties from the USA, Australia and the UK,2,4–8 MMIs continue to be adopted by medical and dental schools worldwide.

Attention has now shifted to the ability of MMIs to predict performance in medical school and beyond. A number of studies have demonstrated statistically significant and practically relevant relationships with future performance.9–11 Eva et al.11 first showed that MMI scores significantly predicted mean OSCE scores (standardised b = 0.44) among 45 medical students. The same cohort was followed by Reiter et al.,9 who found that MMI scores correlated with a range of clerkship measures (r = 0.28–0.57) as well as licensing examination performance measures (r = 0.37–0.39), and that this relationship did not lose its predictive power after controlling for other variables. Eva et al.10 once again followed the same cohort, together with an additional group of postgraduate residents (n = 22), through another licensing examination and found significant correlations of 0.35 and 0.36, respectively.

Although these studies have successfully demonstrated predictive validity, more research is clearly needed because the majority of this work was based on the same small Canadian cohort. Furthermore, the MMIs used in the predictive validity studies were heavily weighted towards ethical
decision-making, and the authors acknowledge that MMIs developed elsewhere may target different characteristics.10 The body of evidence on the predictive validity of MMIs would therefore benefit from analyses of different, larger cohorts from outside North America.

Although it is certainly beneficial to consider the usefulness of MMIs, the same expectation should be set for all admissions measures.9 It is therefore important to consider the predictive ability of MMIs relative to other pre-admissions measures. Dundee, like most UK medical schools, considers scores derived from Universities and Colleges Admissions Service (UCAS) form components, namely personal statements, references and academic achievement, in addition to an aptitude test in the form of the UK Clinical Aptitude Test (UKCAT).12 UCAS personal statements present a biography of non-academic achievements and work experience, and usually a justification for career choice. Presented as free text, they are challenging to score consistently and are subject to a range of influences, such as social opportunity. Neither personal statements nor references have been shown to predict success in medical school: Ferguson et al.13 and Siu and Reiter14 reviewed predictors of success in medical school and found a lack of evidence that either has any predictive value for subsequent achievement. Wright and Bradley15 also found that scores derived from the personal statement not only failed to predict medical school examination performance, but were also biased towards applicants from more advantaged socio-economic backgrounds. Although the existing data do not favour the use of personal statements or references, there are not enough studies for definitive conclusions.13

The UKCAT (http://www.ukcat.ac.uk) is an intelligence test that 'assesses a range of mental abilities identified by university medical and dental schools as important'.16 It joins other established tests for selection into medical school, such as the Graduate Australian Medical Schools Admissions Test (GAMSAT), the Medical College Admission Test (MCAT; for US candidates) and the BioMedical Admissions Test (BMAT; used by some English medical schools), although it is distinct in that it aims purely to assess aptitude and contains no knowledge-based component. Although the literature suggests that the MCAT, GAMSAT and BMAT each have some success at predicting future performance, this
level of success has not so far been replicated with the UKCAT. Lynch et al.17 examined the predictive validity of the UKCAT at two Scottish medical schools and found that it did not predict Year 1 performance. Similarly, Yates and James18 investigated whether the UKCAT predicted performance during the first 2 years of medical school at Nottingham University and found that it had poor predictive value. To date, only Wright and Bradley15 have presented evidence of predictive validity; they found UKCAT scores to be predictive of Year 1 and 2 knowledge-based examination scores at Newcastle University.

Medical school selection in the UK therefore has to work with a range of markers that seek to assess educational achievement (e.g. school grades), fluid intelligence (aptitude testing), motivation and other reported non-academic achievements (personal statements), and interview scores. The first two are thought to reflect largely cognitive abilities and the latter two non-cognitive abilities, although we are cognisant that this divide is debated and that these instruments are seen as imperfect.

The predictive validity of the Dundee MMIs and other pre-admissions measures can now be evaluated for the first (2009–2010) and second (2010–2011) MMI cohorts, the former of which had, at the time of the study, completed 2 years of medical school. This study extends the findings of Dowell et al.7 and adds to the body of evidence by examining the relationship of MMIs and other pre-admissions measures with performance in medical school examinations. It takes advantage of an appreciably larger sample size, a younger student population and a geographically distinct cohort relative to other published studies. It aims to establish which aspects of the selection process can be justified in terms of predictive validity for knowledge-based and OSCE examinations in early medical school, and whether MMIs are useful in the UK.
METHODS
Admissions tools

Dundee Medical School's admissions process is similar to that of most other UK medical schools, as described by Parry et al.12 UCAS forms were examined first for minimum academic qualifications. In total, 1278 and 1553 applicants applied to the standard 5-year medical course at Dundee and met the minimum academic requirements in 2009 and 2010, respectively. All applications were scored by one experienced member of the medical school's admissions team, with a second member of the team reviewing those who were close to the cut-off point. Numerical scores were assigned to UCAS form components, namely medical work experience and non-academic achievement (both derived from the personal statement), academic qualifications and references. For analysis purposes, a non-academic total score was created, consisting of an aggregate of the references, medical experience and non-academic experience scores. Widening Access markers were also considered for 44 (3.50%) and 53 (3.40%) local applicants in 2009 and 2010, respectively. A combined weighted outcome of UCAS form and UKCAT scores was then used to select candidates for interview.

At the MMI stage, candidates rotated around 10 seven-minute stations. MMI content was developed from a predefined set of non-cognitive attributes determined by the medical school's admissions committee: interpersonal skills and communication (including empathy), logical reasoning and critical thinking, moral and ethical reasoning, motivation and preparation to study medicine, teamwork and leadership, and honesty and integrity. Cronbach's alpha reliabilities for the 2009 and 2010 MMIs were 0.70 and 0.69, respectively. Further details on the development of these MMIs are provided in Dowell et al.7 Offers were then made to candidates based on a combined weighted MMI and pre-interview score, with the MMI score assigned the heavier weighting.

Medical school examinations

Two standardised assessments (written and OSCE) are completed at the end of each semester of Year 1; two are completed at the end of Year 2. Raw percentage scores at first sitting for individual assessments were used. Appendix 1 describes the medical school examinations for each year. There were no appreciable differences in examination content, format or curriculum between the years. Table 1 shows Cronbach's alpha reliabilities for each assessment.

2009 MMI cohort

In total, 452 candidates sat the 2009 MMIs, of whom 147 enrolled in the programme. The MMIs comprised six traditional 'one-to-one' stations and four interactive task-based stations. Admissions and examination scores were matched for 140 of 160 (87.50%) and 128 of 158 (81.00%) Year 1 and Year 2 students, respectively. This represented 95.2% and 87.1% of all enrolled 2009 MMI candidates in Years 1 and 2, respectively.
Table 1 Examination reliabilities

Examination                    2009 cohort    2010 cohort
Year 1
  Semester 1 written           0.87           0.82
  Semester 1 OSCE              0.75           0.78
  Semester 2 written           0.88           0.87
  Semester 2 OSCE              0.78           0.66
Year 2
  Written                      0.91           –
  OSCE                         0.66           –

OSCE = objective structured clinical examination
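For readers unfamiliar with the reliability statistic reported in Table 1 and for the MMIs themselves (0.70 and 0.69), the following is a minimal, hypothetical sketch of how Cronbach's alpha can be computed from a candidates-by-stations (or candidates-by-items) score matrix; the function and the data below are illustrative assumptions and use nothing from the study.

```python
# Illustrative sketch only: Cronbach's alpha from a 2-D score matrix
# (rows = candidates, columns = stations or items). Hypothetical data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                              # number of stations/items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each station/item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example call on made-up ratings for 452 candidates across 10 stations
# (random, unrelated ratings, so alpha will be close to zero here):
rng = np.random.default_rng(1)
print(round(cronbach_alpha(rng.integers(1, 8, size=(452, 10)).astype(float)), 2))
```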
The remaining students had either deferred entry from the previous admissions cycle, withdrawn from medical school or repeated a year. Post hoc power analysis confirmed that the sample sizes were sufficient to achieve 76% and 74% power to detect a correlation effect size of 0.20 at the 0.05 probability level in Years 1 and 2, respectively.

2010 MMI cohort

In total, 477 candidates sat the 2010 MMIs, of whom 150 enrolled in the programme. The MMIs comprised five traditional 'one-to-one' stations and five interactive task-based stations. Admissions and examination scores were matched for 150 of 163 (92.00%) Year 1 students. This represented the entire cohort of enrolled 2010 MMI candidates. As in 2009, the remaining students had either deferred entry from the previous admissions cycle, withdrawn from medical school or repeated a year. Post hoc power analysis confirmed that the sample size was sufficient to achieve 80% power to detect a correlation effect size of 0.20 at the 0.05 probability level.
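As a hedged illustration of how post hoc power figures of this kind can be obtained, the sketch below uses the Fisher z approximation for the power to detect a correlation of a given size. It is not the authors' calculation, and the result depends on conventions (notably one- versus two-sided testing) that the paper does not state.

```python
# Hypothetical sketch: approximate power to detect a correlation r with n
# observations, via the Fisher z transformation. Not the authors' code; the
# one- or two-sided alternative is an assumption the reader can vary.
from math import atanh, sqrt
from scipy.stats import norm

def correlation_power(r: float, n: int, alpha: float = 0.05,
                      two_sided: bool = True) -> float:
    z_r = atanh(r) * sqrt(n - 3)            # Fisher z of r, scaled by its precision
    z_crit = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    return float(norm.cdf(z_r - z_crit))

# Example call for r = 0.20 and n = 140 (illustration only; the paper's own
# figures may come from a different method or convention):
print(round(correlation_power(0.20, 140, two_sided=False), 2),
      round(correlation_power(0.20, 140, two_sided=True), 2))
```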
Consent was collected from applicants for educational research utilising their admissions data, and confirmation was obtained from the University Ethics Committee (UREC 12166) that approval was not required for this analysis of routinely collected data.

Analysis

Data were analysed with SPSS 17.0 for Windows (SPSS, Inc., Chicago, IL, USA). Independent variables were UCAS academic, UCAS non-academic, UKCAT and MMI scores. The rare students with missing data were omitted from statistical analyses involving that variable. Pearson's correlations were used to test relationships between pre-admissions variables, examination scores and demographic variables, namely gender and age. Histograms and plots were used to confirm that the data were linear and normally distributed. Correlations were used to select variables for multiple linear regression analysis to predict examination scores; a significance level of p < 0.05 was required for a variable to be included in a multiple regression model.

Correlations were adjusted for range restriction and are referred to in this study as 'unrestricted' correlations; statistical significance was determined prior to correcting the correlations. This adjustment is common in predictive validity studies and is carried out to counter the underestimation of correlations when the observed sample is not representative of the population of interest.21,22 In the present study, scores from admissions tools were used to select the medical school candidates whose examination results were then compared. The range of scores was therefore restricted in this sample, as medical students' scores on admissions tools will, by definition, be higher than those of the overall applicant pool.
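For readers who want to see the arithmetic behind 'unrestricted' values, the sketch below implements one standard correction for direct range restriction (Thorndike's Case 2), which is consistent with the kind of adjustment described in the cited methodological literature;21,22 the function and the example numbers are illustrative assumptions, not the authors' code or data.

```python
# Illustrative sketch: Thorndike's Case 2 correction for direct range
# restriction. r_obs is the correlation in the selected (enrolled) sample;
# sd_applicants and sd_enrolled are the predictor's standard deviations in
# the full applicant pool and in the enrolled sample. Values are hypothetical.
import math

def unrestrict(r_obs: float, sd_applicants: float, sd_enrolled: float) -> float:
    k = sd_applicants / sd_enrolled   # ratio of spreads (k >= 1 under selection)
    return (r_obs * k) / math.sqrt(1 - r_obs**2 + (r_obs**2) * (k**2))

# Example with made-up spreads: a restricted correlation of 0.27 becomes ~0.35
# when the enrolled sample's spread is three-quarters of the applicant pool's.
print(round(unrestrict(0.27, sd_applicants=1.0, sd_enrolled=0.75), 2))
```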
Forward stepwise multiple regression was performed on each assessment: the predictor with the highest simple correlation with the outcome was selected in step 1; if this predictor significantly improved the model's ability to predict the outcome, it was retained in the model and the program searched for a second predictor with the largest semi-partial correlation with the outcome (step 2). This procedure shows the contribution of each independent variable to the model's ability to predict assessment performance. Where there was only one significant predictor for an assessment, the result of the stepwise regression was equivalent to that of a simple linear regression. Levels of F to enter and F to remove were set to correspond to p levels of 0.05 and 0.01, respectively. To eliminate the possibility of suppressor effects, a backwards elimination was used to confirm that no significant relationships were missed by forward inclusion.23
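The analysis itself was run in SPSS; purely as a sketch of the forward-selection logic just described, the following uses an entry p-value of 0.05 in place of the equivalent F-to-enter threshold. The variable names, data frame and statsmodels-based implementation are assumptions for illustration, not the authors' procedure.

```python
# Hypothetical sketch of forward stepwise selection by entry p-value
# (an analogue of F-to-enter at p = 0.05); backward elimination is omitted.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list[str],
                     p_enter: float = 0.05) -> list[str]:
    selected: list[str] = []
    while True:
        best_p, best_var = 1.0, None
        for var in (c for c in candidates if c not in selected):
            X = sm.add_constant(df[selected + [var]])
            fit = sm.OLS(df[outcome], X, missing='drop').fit()
            if fit.pvalues[var] < best_p:
                best_p, best_var = fit.pvalues[var], var
        if best_var is None or best_p >= p_enter:
            return selected               # no further predictor enters the model
        selected.append(best_var)

# Example (column names are invented):
# forward_stepwise(data, 'sem2_osce', ['mmi', 'ukcat', 'ucas_acad', 'ucas_nonacad', 'age', 'gender'])
```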
The strengths of correlations were compared using Cohen's effect size interpretations (small 0.10, medium 0.30, large 0.50)24 and the US Department of Labor, Employment and Training Administration's guidelines for interpreting correlation coefficients in predictive validity studies ('unlikely to be useful' < 0.11; 'dependent on circumstances' 0.11–0.20; 'likely to be useful' 0.21–0.35; 'very beneficial' > 0.35).25

After consideration of the issues surrounding familywise error corrections as they relate to the multiple comparisons made, we elected not to correct.19,20 Our results should be read in this context, and actual p values are provided to allow the reader to consider the likelihood of type I error.

RESULTS

Table 2 shows Pearson's r correlations between admissions tools and examination scores before (r) and after (ru) correction for range restriction, together with p values; correlations with p < 0.05 were considered statistically significant. Table 3 shows multiple regression statistics for each assessment for which there was a significant correlation with an admissions tool. Model statistics are provided for each regression analysis. Independent variables were admissions tools together with demographic variables, namely age and gender, and were entered into the multiple regression analyses only if they correlated significantly with assessment scores.

2009 MMI cohort: Year 1

Of the 140 matched students, 41.4% were male and 58.6% were female; the average age was 20.80 years (standard deviation [SD] = 2.40). Statistically significant correlations ranged from 0.18 to 0.34, and from 0.24 to 0.43 unrestricted. UKCAT scores showed significant positive correlations with the Semester 1 written examination and OSCE. MMI scores showed significant positive correlations with three of the four examinations, and MMI correlation magnitudes were generally larger than those of all other admissions scores.

Multiple regression analysis revealed statistically significant predictors. UKCAT scores explained 6% of the variance in the Semester 1 written examination, and MMI scores explained 6% of the variance in the Semester 2 written examination. UKCAT and MMI scores together explained 7% of the variance in the Semester 1 OSCE. MMI scores and gender explained 17% of the variance in the Semester 2 OSCE.

Table 2 Correlations of admissions tools and examination scores
                                  UCAS academic        UCAS non-academic    UKCAT                MMI
                                  r     ru    p        r     ru    p        r     ru    p        r     ru    p
2009 Year 1  Semester 1 written   0.07  0.18  0.84     0.03  0.05  0.74     0.25  0.34  0.01     0.14  0.18  0.11
             Semester 1 OSCE      0.05  0.13  0.84     0.07  0.11  0.41     0.18  0.24  0.03     0.19  0.24  0.02
             Semester 2 written   0.10  0.26  0.84     0.02  0.02  0.86     0.14  0.19  0.11     0.26  0.33  0.01
             Semester 2 OSCE      0.02  0.05  0.84     0.02  0.03  0.83     0.01  0.01  0.94     0.34  0.43  0.01
2009 Year 2  Written              0.09  0.23  0.30     0.02  0.02  0.86     0.05  0.07  0.54     0.18  0.23  0.04
             OSCE                 0.11  0.28  0.23     0.05  0.08  0.58     0.12  0.16  0.17     0.27  0.35  0.01
2010 Year 1  Semester 1 written   0.09  0.19  0.27     0.09  0.12  0.29     0.15  0.20  0.07     0.01  0.01  0.35
             Semester 1 OSCE      0.04  0.09  0.59     0.01  0.02  0.87     0.03  0.04  0.68     0.05  0.07  0.55
             Semester 2 written   0.06  0.13  0.48     0.01  0.01  0.93     0.02  0.03  0.84     0.02  0.03  0.76
             Semester 2 OSCE      0.03  0.06  0.71     0.05  0.07  0.55     0.03  0.04  0.70     0.35  0.50  0.00

r = observed correlation; ru = correlation corrected for range restriction. Correlations with p < 0.05 were statistically significant. UCAS = Universities and Colleges Admissions Service; UKCAT = UK Clinical Aptitude Test; MMI = multiple mini-interview; OSCE = objective structured clinical examination
Table 3 Linear regression statistics

                                           Model statistics        Independent variables
Cohort       Assessment           Step     R²    F      p          Predictor  b            Stand b  p
2009 Year 1  Semester 1 written   –        0.06  8.81   0.004      UKCAT      0.36         0.25     0.004
             Semester 1 OSCE      Step 1   0.03  4.81   0.030      UKCAT      9.99 × 10⁻⁵  0.18     0.030
                                  Step 2   0.07  4.75   0.010      UKCAT      9.71 × 10⁻⁵  0.18     0.033
                                                                   MMI        1.79 × 10⁻³  0.18     0.034
             Semester 2 written   –        0.06  9.71   0.002      MMI        3.03 × 10⁻³  0.26     0.002
             Semester 2 OSCE      Step 1   0.11  17.61  0.000      MMI        2.56 × 10⁻³  0.34     0.000
                                  Step 2   0.17  13.78  0.000      MMI        2.61 × 10⁻³  0.34     0.000
                                                                   Gender     0.03         0.23     0.003
2009 Year 2  Written              Step 1   0.05  6.21   0.014      Gender     3.54         0.22     0.014
                                  Step 2   0.09  6.12   0.003      Gender     3.86         0.24     0.007
                                                                   MMI        0.18         0.21     0.018
             OSCE                 Step 1   0.07  9.99   0.002      MMI        0.14         0.27     0.002
                                  Step 2   0.15  10.72  0.000      MMI        0.15         0.30     0.000
                                                                   Gender     2.65         0.27     0.001
2010 Year 1  Semester 2 OSCE      Step 1   0.12  21.02  0.000      MMI        2.00 × 10⁻³  0.35     0.000
                                  Step 2   0.16  13.56  0.000      MMI        2.00 × 10⁻³  0.33     0.000
                                                                   Gender     0.02         0.18     0.021
                                                                   Age*       ns           ns       ns

* Age did not meet the inclusion criteria. Gender was coded as female = 0 and male = 1. ns = not significant; OSCE = objective structured clinical examination; UKCAT = UK Clinical Aptitude Test; MMI = multiple mini-interview
2009 MMI cohort: Year 2

Of the 128 matched students, 40.6% were male and 59.4% were female; the average age was 21.60 years (SD = 2.21). Statistically significant correlations ranged from 0.18 to 0.27, and from 0.23 to 0.35 unrestricted. UCAS and UKCAT scores showed no significant correlations with examination scores. The MMI showed significant positive correlations with both examinations. Multiple regression analysis revealed statistically significant predictors: MMI scores and gender explained 9% of the variance in the written assessment and 15% of the variance in the OSCE.

2010 MMI cohort

Of the 150 matched students, 43.3% were male and 56.7% were female; the average age was 20.80 years (SD = 2.70). There was a lone significant positive relationship, between MMI and Semester 2 OSCE scores (r = 0.35; 0.50 unrestricted). Multiple regression analysis revealed statistically significant predictors: MMI scores and gender
explained 16% of the variance in the Semester 2 OSCE.
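As a reading aid only (this arithmetic is not spelled out in the original), the proportion of variance attributable to a single predictor is the square of its correlation with the outcome, which is how a correlation of 0.35 corresponds to the 12% step 1 figure for this model in Table 3, and an unrestricted 0.50 to 25%:

\[
R^2 = r^2: \qquad 0.35^2 \approx 0.12 \;(12\%), \qquad 0.50^2 = 0.25 \;(25\%).
\]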
DISCUSSION
This study had a number of limitations, namely the length of follow-up, the nature of the outcome markers available and the reduced likelihood of finding positive associations due to range restriction. Follow-up into clinical training is ongoing for these cohorts, when it is hoped that markers of professionalism can be added to the academic outcome measures. It may also prove possible to link these data with more detailed school achievement records, but it remains worthwhile to analyse the predictive validity of the actual scores used in selection in order to assess the utility of the process.

This study presents limited evidence for the validity of the UKCAT, whose scores showed significant positive correlations in only two of 10 assessments
within 1 year. However, these results must be viewed with caution, with potential type I errors as a result of multiple comparisons. It is notable that Semester 1 examinations are more focused on recall of knowledge than those in Semester 2, which test application of knowledge and clinical skills. This may explain the pattern seen with UKCAT scores and is consistent with the results obtained by Wright and Bradley,15 who showed a positive relationship between UKCAT and knowledge-based examination scores, but not OSCE scores, which appeared to diminish over the early years.

Scores derived from the UCAS form appear to be less valid selection tools, showing no significant positive correlations. The lack of statistically significant associations between UCAS non-academic scores and examination scores is consistent with the available evidence, which suggests that references and scores derived from the personal statement are not predictive of medical school outcomes. This calls into question whether their continued use is justified. The lack of significant positive correlations between academic achievement and examination scores was somewhat surprising, but also possibly explicable. Although academic achievement has traditionally been the best predictor of medical school success, its effect may be difficult to detect when most students gain near-maximum scores on this measure. It is therefore possible that continued reliance on measures of academic achievement provides diminishing returns.14 The significant body of published literature demonstrating that measures of academic achievement are the most consistent predictors dictates that further research is necessary before conclusions are drawn from our comparatively limited data.13,26,27 Nevertheless, this highlights the need to review academic scoring in the UK context, especially with the introduction of A* grades at A-level.

However, this study does provide important evidence of the validity of the MMI by demonstrating that it was the most consistent predictor of success in medical school examinations across two separate cohorts and years. MMI scores correlated significantly with six of 10 examination sittings, with magnitudes ranging from 0.24 to 0.50 (unrestricted), accounting for between 5.70% and 25.00% of the variance in students' examination scores. Multiple regression also confirmed that the MMI remained the most consistent predictor of success, accounting for between 5% and 17% of the variance in assessment scores alone or in combination with candidates' gender. It is unsurprising that correlations were lower or absent in the first year, where assessments (even the OSCE) were more highly knowledge orientated. Although the size of these correlations can be described as moderate, it has been asserted that measures with even modest predictive validity can add considerable value to selection systems in which the ratio of applicants to places is large and the importance of sound selection decisions is high.28 After adjustment for range restriction, these coefficients can be described as 'likely to be useful' or 'very beneficial'.25 Correlations were largest for OSCE assessments, perhaps because certain components, such as communication skills or, more generally, an ability to 'perform under pressure', are common to both formats.

Although the results of this research are compelling in favour of MMIs, further research in this area is necessary. The short duration of follow-up is the primary limitation of this study, and continued longitudinal study of these and future cohorts will establish the utility of admissions measures across the medical school years and into postgraduate training. The body of evidence investigating the predictive power of MMIs would also benefit from results from other medical schools, particularly those measuring different non-cognitive attributes. Finally, future work should also investigate the ability of the UKCAT and MMIs to predict the specific cognitive and non-cognitive attributes for which they were designed, such as interpersonal communication skills.

This study demonstrates that it is possible to operate a reliable and valid MMI with the younger student population in the UK. It has demonstrated this with statistically robust assessments and relatively large sample sizes compared with previously published validity studies. It is hoped that, as medical schools worldwide continue to adopt the MMI approach, more evidence will emerge to support its usefulness as a robust component of selection systems and to increasingly refine its format.
Contributors: AH and JD both contributed to the study conception, data analysis, interpretation, drafting and revision of the paper, and approved the final manuscript for publication.
Acknowledgements: the authors thank Ben Kumwenda for assistance with data collection.
Funding: none.
Conflicts of interest: none.
Ethical approval: not required.
REFERENCES

1 Albanese M, Snow M, Skochelak S, Huggett K, Farrell P. Assessing personal qualities in medical school admissions. Acad Med 2003;78:313–21.
2 Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ 2004;38:314–26.
3 Roberts C, Walton M, Rothnie I, Crossley J, Lyon P, Kumar K, Tiller D. Factors affecting the utility of the multiple mini-interview in selecting candidates for graduate-entry medical school. Med Educ 2008;42:396–404.
4 Lemay J-F, Lockyer JM, Collin VT, Brownell AKW. Assessment of non-cognitive traits through the admissions multiple mini-interview. Med Educ 2007;41:573–9.
5 Reiter HI, Salvatori P, Rosenfeld J, Trinh K, Eva KW. The effect of defined violations of test security on admissions outcomes using multiple mini-interviews. Med Educ 2006;40:36–42.
6 Eva K, Reiter H, Rosenfeld J, Norman G. The relationship between interviewers' characteristics and ratings assigned during a multiple mini-interview. Acad Med 2004;79:602–9.
7 Dowell J, Lynch B, Till H, Kumwenda B, Husbands A. The multiple mini-interview in the UK context: 3 years of experience at Dundee. Med Teach 2012;34:297–304.
8 O'Brien A, Harvey J, Shannon M, Lewis K, Valencia O. A comparison of multiple mini-interviews and structured interviews in a UK setting. Med Teach 2011;33:397–402.
9 Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ 2007;41:378–84.
10 Eva KW, Reiter HI, Trinh K, Wasi P, Rosenfeld J, Norman GR. Predictive validity of the multiple mini-interview for selecting medical trainees. Med Educ 2009;43:767–75.
11 Eva K, Reiter H, Rosenfeld J, Norman G. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med 2004;79:S40–2.
12 Parry J, Mathers J, Stevens A, Parsons A, Lilford R, Spurgeon P, Thomas H. Admissions processes for five year medical courses at English schools: review. Br Med J 2006;332:1005–8.
13 Ferguson E, James D, Madeley L. Factors associated with success in medical school: systematic review of the literature. Br Med J 2002;324:952–7.
14 Siu E, Reiter H. Overview: what's worked and what hasn't as a guide towards predictive admissions tool development. Adv Health Sci Educ 2009;14:759–75.
15 Wright SR, Bradley PM. Has the UK Clinical Aptitude Test improved medical student selection? Med Educ 2010;44:1069–76.
16 UKCAT. About the test: what is in the test? http://www.ukcat.ac.uk/about-the-test/what-is-in-the-test/
17 Lynch B, MacKenzie R, Dowell J, Cleland J, Prescott G. Does the UKCAT predict Year 1 performance in medical school? Med Educ 2009;43:1203–9.
18 Yates J, James D. The value of the UK Clinical Aptitude Test in predicting pre-clinical performance: a prospective cohort study at Nottingham Medical School. BMC Med Educ 2010;10:55.
19 O'Keefe D. Colloquy: should familywise alpha be adjusted? Against familywise alpha adjustment. Hum Commun Res 2003;29:431–47.
20 O'Keefe D. It takes a family – a well-defined family – to underwrite familywise corrections. Commun Methods Measures 2007;1:267–73.
21 Sackett P, Yang H. Correction for range restriction: an expanded typology. J Appl Psychol 2000;85:112–8.
22 Wiberg M, Sundström A. A comparison of two approaches to correction of restriction of range in correlation analysis. Pract Assess Res Eval 2009;14:1–9.
23 Menard SW. Applied Logistic Regression Analysis, 2nd edn. Thousand Oaks, CA: Sage 2002.
24 Cohen J. A power primer. Psychol Bull 1992;112:155–9.
25 Department of Labor, Employment and Training Administration (US). Testing and Assessment: An Employer's Guide to Good Practices. Washington, DC: Department of Labor, Employment and Training Administration 1999;26.
26 McManus I, Smithers E, Partridge P, Keeling A, Fleming P. A levels and intelligence as predictors of medical careers in UK doctors: 20 year prospective study. Br Med J 2003;327:139–42.
27 Bore M, Munro D, Powis D. A comprehensive model for the selection of medical students. Med Teach 2009;31:1066–72.
28 McManus C, Woolf K, Dacre J. Even one star at A level could be 'too little, too late' for medical student selection. BMC Med Educ 2008;8:16.

Received 12 August 2012; editorial comments to author 19 October 2012; accepted for publication 4 February 2013
Appendix 1 Description of medical school examinations

Year 1

Semester 1 written: Multiple choice question examination consisting of: anatomy, biomedical (biochemistry, physiology, pharmacology), disease mechanisms (pathology, immunology, microbiology, genetics), psychosocial (public health/behavioural science), safe medical practice (including ethics) and integrated teaching.

Semester 1 objective structured clinical examination: A 50-station, 1 minute per station, 1 question per station assessment of core, clinically relevant anatomy. Utilises pinned prosections, models, osteology specimens and imaging material (plain radiography, contrast studies, computed tomography and magnetic resonance imaging). Questions are designed to test applied anatomical knowledge and understanding within a context of clinical relevance.

Semester 2 written: Extended matching item examination consisting of dermatology, haematology, cardiovascular, psychosocial, ethics and safe medical practice.

Semester 2 objective structured clinical examination: 12 stations of 6 minutes each. Competencies: physical examination, history taking, communication, practical procedures. Situations: cardiovascular, doctors, patients and community (DPaC), haematology, safe medical practice.

Year 2

Written: Endocrinology, gastroenterology, musculoskeletal, anatomy, rheumatology, musculoskeletal, orthopaedics, renal/urology, respiratory.

Objective structured clinical examination: 15 stations of 8 minutes each on: endocrinology, gastrointestinal, DPaC, musculoskeletal (orthopaedics), musculoskeletal (rheumatology), emergency medicine, renal/urology, respiratory.