Benchmarking Image Interpretation Performance: A Multicentre Undergraduate Study

Tatsuhito Akimoto, Dr Chris Wright, Dr Pauline Reeves
Tatsuhito Akimoto is a PhD student at SHU ([email protected])

KEYWORDS: Diagnostic radiography, Preliminary Clinical Evaluation, PCE, student

INTRODUCTION
The SCoR (2013) policy expects radiographers to be able to make reliable decisions on the images they produce and promotes preliminary clinical evaluation (PCE) in favour of the red dot system. Image interpretation has been integrated into all undergraduate degree programmes (Hardy & Snaith, 2009), but its effectiveness has not been assessed. This project aimed to benchmark and compare the PCE competencies of undergraduate diagnostic radiography students from different universities.

METHODOLOGY
All 21 universities in England and Wales delivering diagnostic radiography education were invited to participate; 9 agreed. Final-year students (n=87) at the point of graduation took part. The test contained 30 blind, double-reported MSK images with an equal prevalence of normal and abnormal cases.

RESULTS
Accuracy ranged from 56% to 87% (mean 73.4, SD 8.01). Sensitivity ranged from 47% to 100% (mean 79.6, SD 10.78). Specificity ranged from 20% to 100% (mean 67.1, SD 16.42).
[Figure: Accuracy by university]
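To make the relationship between these measures concrete, the short Python sketch below computes accuracy, sensitivity and specificity from confusion-matrix counts; the counts used are illustrative, not the study data. With equal prevalence of normal and abnormal images, as in this test design, accuracy is simply the mean of sensitivity and specificity, which is consistent with the reported means (79.6% and 67.1% averaging to approximately 73.4%).

```python
# Illustrative sketch: diagnostic decision metrics from confusion-matrix counts.
# The counts below are hypothetical, not data from this study.

def metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Return sensitivity, specificity and accuracy as percentages."""
    sensitivity = tp / (tp + fn)   # proportion of abnormal images correctly called abnormal
    specificity = tn / (tn + fp)   # proportion of normal images correctly called normal
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return {"sensitivity": 100 * sensitivity,
            "specificity": 100 * specificity,
            "accuracy": 100 * accuracy}

# A 30-image test with 15 abnormal and 15 normal cases, as in this study design:
print(metrics(tp=12, fn=3, tn=10, fp=5))
# {'sensitivity': 80.0, 'specificity': 66.66..., 'accuracy': 73.33...}
# With equal prevalence, accuracy = (sensitivity + specificity) / 2.
```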

A weak correlation in accuracy by university was demonstrated (r² = 0.266), highlighting the wide range of graduate performance. One-way ANOVA (with post-hoc Tukey) identified a statistically significant difference in specificity (F(8, 78) = 3.40, p = 0.002) at University A (CI: -47.4 to -4.5).
[Figure: Overall image interpretation performance]
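A minimal sketch of this type of analysis, assuming per-student specificity scores grouped by university; the scores generated below are synthetic placeholders, not the study's data. SciPy's f_oneway provides the omnibus one-way ANOVA and statsmodels' pairwise_tukeyhsd the post-hoc pairwise comparisons.

```python
# Minimal sketch of a one-way ANOVA with post-hoc Tukey HSD, as reported in this study.
# The scores below are synthetic placeholders, not the study's data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
universities = list("ABCDEFGHI")                    # 9 participating universities
scores, groups = [], []
for uni in universities:
    s = rng.normal(loc=67.1, scale=16.4, size=10)   # per-student specificity (%)
    scores.extend(s)
    groups.extend([uni] * len(s))

# Omnibus test: does mean specificity differ between universities?
by_uni = [np.array(scores)[np.array(groups) == u] for u in universities]
f_stat, p_value = f_oneway(*by_uni)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Post-hoc Tukey HSD: which pairs of universities differ?
print(pairwise_tukeyhsd(endog=np.array(scores), groups=np.array(groups), alpha=0.05))
```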

DISCUSSION
In their own way, all universities have integrated image interpretation into their undergraduate degree courses (Hardy & Snaith, 2009), and this research has demonstrated some evidence of the educational impact. Whilst a measure of 'reliability' in decision making is not explicitly defined in the 2013 SCoR policy, other authors have suggested that accuracy should be a minimum of 90% (Wright & Reeves, 2017) for PCE to be of credible value to the referrer.

No student from any university met this accuracy standard. The mean sensitivity of 79.6% suggests a developing ability to correctly identify abnormality, whereas the mean specificity of 67.1% highlights an inability to differentiate normal from abnormal; this low specificity depresses the accuracy score. Students may 'overcall' and identify false positives, perhaps in a bid to avoid missing fractures, or may simply lack the education to differentiate normal variants. The differences between graduates of different universities could be explained by differences in student calibre, but also by differences in the education provided, because each institution defines its own curriculum.

Whilst the red dot abnormality signalling system has enabled radiographers to contribute to diagnosis in accident and emergency services for many years, PCE is a major change because it requires the radiographer to make a decision on every examination they perform, as well as to describe what they see. Wright (2012) identified the need to make reliable decisions before progressing to writing commentary. Higgins & Wright (2016) proposed a 'traffic light' system to encompass these decision states (red = abnormal, green = normal) with an option for 'unsure' (amber). This provides a scaffolding step between making decisions and the more complex task of forming a written opinion. Reliable decision making on every examination performed is a key requirement of PCE, and the evidence suggests that many graduates will need further educational support to develop their skills beyond red dot, to traffic light, before proceeding to PCE, perhaps as part of preceptorship. In addition, a review of undergraduate education is recommended in order to match taught content with the vision of the profession.
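One way to picture the traffic light scaffold is as a three-state decision type sitting between binary red dot signalling and full written commentary. The sketch below is an illustrative model under that reading; the names and structure are hypothetical, not an implementation from Higgins & Wright (2016).

```python
# Illustrative model of the 'traffic light' decision scaffold (Higgins & Wright, 2016).
# Names and structure are hypothetical; the source describes the states, not this code.
from enum import Enum

class TrafficLight(Enum):
    RED = "abnormal"     # abnormality identified
    GREEN = "normal"     # confidently normal
    AMBER = "unsure"     # uncertain: flag for review rather than guess

def signal(confident: bool, abnormal: bool) -> TrafficLight:
    """Map a reader's decision to a traffic light state."""
    if not confident:
        return TrafficLight.AMBER
    return TrafficLight.RED if abnormal else TrafficLight.GREEN

print(signal(confident=True, abnormal=True))    # TrafficLight.RED
print(signal(confident=False, abnormal=True))   # TrafficLight.AMBER
```

The amber state is the point of the scaffold: unlike binary red dot, it gives the reader a legitimate route for uncertainty instead of forcing a possibly unreliable call.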

CONCLUSION
This project is the first to benchmark and compare the image interpretation competencies of radiography students from multiple universities. The vision of the SCoR 2013 policy is commendable and a key driver for the progression of our profession. Whilst image interpretation is now a routine part of undergraduate radiography education, this research suggests that the capability of graduates is below the required standard for reliable decision making, highlighting the need for benchmarking prior to participation in abnormality signalling systems and the need for postgraduate education to meet and maintain standards. A follow-up study is recommended to increase reliability.

REFERENCES
1. The Society and College of Radiographers. Preliminary clinical evaluation and clinical reporting by radiographers: Policy and practice guidance. London: College of Radiographers; 2013.
2. Hardy, M. & Snaith, B. Radiographer interpretation of trauma radiographs: Issues for radiography education providers. Radiography 2009; 15(2): 101-105.
3. Higgins, S. & Wright, C. Traffic Light: An Alternative Approach to Abnormality Signalling. UKRC; June 2016; Liverpool.
4. Wright, C. RadBench: Benchmarking Image Interpretation Performance. UKRC; June 2012; Liverpool.
5. Wright, C. & Reeves, P. Image Interpretation Performance: A Longitudinal Study from Novice to Professional. Radiography 2017; 23(1): e1-e7. http://dx.doi.org/10.1016/j.radi.2016.08.006