Advancing Theory in Healthcare Simulation Instructional Design: The Effect of Task Complexity on Novice Learning and Cognitive Load

by

Faizal Aminmohamed Haji

A thesis submitted in conformity with the requirements
for the degree of Doctor of Philosophy

Institute of Medical Sciences
University of Toronto

© Copyright by Faizal A. Haji 2015

Advancing Theory in Healthcare Simulation Instructional Design: The Effect of Task Complexity on Novice Learning and Cognitive Load

Faizal Aminmohamed Haji
Doctor of Philosophy
Institute of Medical Sciences
University of Toronto
2015

Abstract

Dramatic changes to healthcare systems globally have led to increased use of simulation as a pedagogical tool in health professions education. An impressive evidence base has accrued in support of simulation-based education and training, leaving little doubt that ‘it works’. As a result, scholarship in the field is shifting toward clarifying the features of simulation instructional design that optimize learning outcomes. Many scholars advocate for the use of established instructional frameworks to advance this agenda. In this dissertation, the author employed Cognitive Load Theory (CLT) and related instructional frameworks to investigate the relationship between task complexity, cognitive load (CL), and learning among novices engaged in simulation-based procedural skills training.

Phase one of this research program established the sensitivity of two CL measures (subjective ratings of mental effort and secondary task performance) to predicted differences in load related to learners’ proficiency with a procedural skill and simulation training task complexity. As a result, these measures may be used to track changes in CL during simulation training and distinguish between the CL imposed by different instructional designs.

Phase two operationalized Elaboration Theory (ET) to identify the task conditions that impact the training complexity of a prototypical procedural skill (lumbar puncture). The results of this phase demonstrate the methodological and theoretical advantages of combining a structured instructional design framework with expert consensus (via the Delphi technique) when developing healthcare simulation curricula. The final phase examined the competing effects of task complexity and context similarity on novices’ skills transfer. The results demonstrate that higher task complexity increases load during training, which may impede initial learning and subsequent transfer of skills ‘peripheral’ to the task (e.g. sterility). However, the findings also suggest that other variables (i.e. context or information-processing specificity, and learners’ strategies to manage load) may have important effects on these learning outcomes. At a broader level, the systematic, multi-phased approach employed in this dissertation provides a framework to guide future research in simulation instructional design. Furthermore, the application of CLT in this work exposes strengths and shortcomings in the theory that educators and researchers should be aware of, and highlights avenues for future inquiry.


Acknowledgments

“For a seed to achieve its greatest expression, it must come completely undone. The shell cracks, its insides come out and everything changes. To someone who doesn’t understand growth, it would look like complete destruction.”
- Cynthia Occelli

As I look back on the last four years of my life, there is only one word that captures what the experience of this dissertation has been for me: transformative. My perspective on science and the human pursuit of knowledge has been fundamentally transformed, and with it my understanding of the world and of my own abilities. It is with humility and profound gratitude that I acknowledge those who have set me on this journey and helped me along the way.

To the members of my thesis committee, thank you for graciously sharing your time and knowledge. To Adam Dubrowski, my supervisor, for giving me space and opportunity to grow into an independent investigator. Our interactions have shown me the kind of researcher, teacher and mentor I want to be. To Ruth Childs, for always saying the exact thing that I need to hear in the precise moment that I need to hear it. To Jim Drake, for helping me to keep my research grounded in my clinical roots and for reminding me about the practical relevance of my research. To Nikki Woods, for helping me through a difficult transition, and for always giving me your honest opinion; it is often the thing that I am thinking but too afraid to say. And to Glenn Regehr, thank you for challenging me when I need to be challenged, for supporting me whenever I needed support, and for helping me to understand what it means to ‘engage with research the way a PhD should’. My profound admiration cannot be summarized in a few short sentences, so I will simply say: for being my advisor, mentor, role model, and so much more, thank you.

I also wish to express my sincere appreciation to my clinical and academic network at Western University for all of your support. Sandrine, thank you for always being in my corner, and for helping me to figure out what this thesis needed to be. To Michael Reider and the Clinician Investigator Program, thank you for taking a chance and letting me pursue my graduate studies at a Centre where I could get the training I was seeking. To Sayra Cristancho, Chris Watling, Lorelei Lingard, and the rest of the Centre for Education Research and Innovation, thank you for giving me a place to touch down when I visited and for the unwavering support whenever I asked. And to the neurosurgical program at Western University, thank you for giving me the time to pursue my research and never rushing me to complete my graduate degree, even though at moments this made managing the clinical service difficult.

Over the last four years, I have called The SickKids Learning Institute and The Wilson Centre home, not only because of the physical space these units have provided me to complete my work, but also because it is within their walls that I have grown academically. To Maria, Tina, Ryan, Mahan, Walter and all the Scientists and Fellows at the WC and LI, as well as those from the broader Canadian HPE community (Geoff, Lawrence, and too many others to name), thank you for exposing me to a plethora of research paradigms and for helping me to think both deeply and differently about our field; my academic perspective is all the better for it.

The last four years have also been a time of tremendous personal growth for me. I would like to take a moment to acknowledge those in my family who are no longer with us, for teaching me to value the time we have and for giving me the chance to make the most of the present moment. Safia, Alim, Sumair, Maria, Aleem, Ami, Abha, and the rest of my family: thank you for being my guinea pigs and for believing in me. Mom and Dad: I am where I am today because of all that you have sacrificed. There are no words to express what your support has meant; thank you, for everything. And Rabia: you are my toughest critic and my greatest fan, a source of unconditional support and the love of my life. Thank you for your patience and for always believing in me, even when I doubt myself. You are a better partner than I could have ever hoped for. Finally, I thank God, through whose grace and benevolence all things are possible.

The research included in this dissertation was funded by the Royal College of Physicians and Surgeons of Canada (RCPSC) Medical Education Research Grant (12/MERG-27). I am grateful to the RCPSC for providing additional funding for my research through the Robert Maudsley Fellowship for Studies in Medical Education (2012-2015), and to the Canadian Institutes of Health Research for providing salary support through the Banting and Best Masters Award (2012) and Vanier Canada Graduate Scholarship (2013-2015).


Contributions

Faizal Haji (author) solely prepared this dissertation. FH steered all aspects of this research program, including the design, execution, analysis and writing of the original research studies and publications presented herein. The guidance and assistance provided by co-investigators are acknowledged formally and detailed below:

Dr. Adam Dubrowski (Primary supervisor): provided theoretical and methodological mentorship for all aspects of this work, including provision of office and laboratory space; guidance and co-direction in the planning, execution and analysis of the research studies; and preparation/critical revision of research manuscripts and this thesis document.

Dr. Sandrine de Ribaupierre (Clinical mentor): provided clinical and methodological guidance throughout all phases of this work and assisted in the planning, collection, analysis and writing of research manuscripts included in this dissertation.

Dr. Glenn Regehr (Program Advisory Committee member, research mentor): provided theoretical and methodological guidance throughout all phases of this work, and assisted in the planning, analysis and writing of research manuscripts presented in Chapters 3, 5 and 6. GR also provided office space and co-supervision during the planning and analysis of Chapter 5.

Dr. James Drake (Program Advisory Committee member): provided mentorship and assistance throughout all phases of this work, including the planning and analysis of all research studies, as well as data collection and manuscript preparation for studies presented in Chapters 3 and 4. JD also assisted in the preparation/critical revision of Chapters 3 and 4, and this thesis document.

Dr. Ruth Childs (Program Advisory Committee member): provided mentorship and assistance throughout all phases of this work, including the planning and analysis of all research studies, as well as manuscript preparation for the experiment presented in Chapter 4. RC also assisted in the preparation/critical revision of this thesis document.

Dr. Nicole Woods (Wilson Centre Fellowship supervisor): provided mentorship and laboratory space; assisted in experimental design, data analysis and manuscript preparation for the experiment presented in Chapter 6; and provided guidance in preparing this thesis document.

Rabia Khan: assisted in data collection, analysis, and manuscript preparation for the experiments presented in Chapter 3; in planning, data analysis, and manuscript preparation for the study presented in Chapter 5; and in the preparation of this thesis document.

Gary Ng: assisted in developing the electronic survey platform and in data collection for the study presented in Chapter 5, and in developing the secondary task measure of CL used in Chapter 3.

David Rojas: assisted in data collection, data analysis and manuscript preparation for the experiment presented in Chapter 4.

Jeffrey Cheung: assisted in data collection, data analysis and manuscript preparation for the experiment presented in Chapter 6.

Rob Shegawa: assisted in developing the secondary task measures of CL used in Chapters 3, 4 and 6.

Robert Martin: assisted in data collection for the experiment presented in Chapter 3.


Table of Contents

ACKNOWLEDGMENTS ... iv
CONTRIBUTIONS ... vii
TABLE OF CONTENTS ... ix
LIST OF TABLES ... xiv
LIST OF FIGURES ... xv
LIST OF APPENDICES ... xvi
ABBREVIATIONS ... xvii

CHAPTER 1: Introduction and Literature Review ... 1
1.1 General Introduction and Overview ... 2
1.2 Simulation-based education and training: the state of the science ... 4
1.2.1 The impetus for simulation in health professions education ... 4
1.2.2 The evidence-base supporting simulation in healthcare settings ... 7
1.2.3 Priorities for future inquiry: the research we should be doing in healthcare simulation ... 10
1.2.4 Synthesis ... 13
1.3 Current controversies: the role of ‘fidelity’ in simulation instructional design ... 14
1.3.1 Conceptualizations of fidelity in SBET ... 14
1.3.2 Theoretical arguments supporting the link between fidelity and learning ... 16
1.3.3 Empirical evidence for the link between fidelity and learning ... 21
1.3.4 A re-conceptualization: fidelity as complexity in novice learning ... 26
1.3.5 Synthesis ... 28
1.4 Cognitive load theory: overview and implications for simulation instructional design ... 28
1.4.1 Perspectives on human cognitive architecture ... 29
1.4.2 CLT: an overview ... 36
1.4.3 Measurement of CL ... 41
1.4.4 CLT instructional design principles: part vs. whole and simple-to-complex training ... 45
1.4.5 Synthesis ... 50
1.5 Characteristics of learning and transfer: motor learning and psychology perspectives ... 52
1.5.1 Definition and stages of motor learning ... 52
1.5.2 Measuring learning: differentiating performance during skill acquisition and at retention ... 54
1.5.3 Transfer of learning: a typology ... 55
1.5.4 Synthesis ... 58

CHAPTER 2: Research aims and hypotheses ... 60
2.1 Purpose and aims ... 61
2.2 Overview and hypotheses ... 62
2.3 Significance ... 64

CHAPTER 3: Measuring cognitive load during simulation-based psychomotor skills training: sensitivity of secondary-task performance and subjective ratings ... 66
3.1 Preamble ... 67
3.2 Abstract ... 68
3.3 Introduction ... 69
3.4 Methods ... 73
3.4.1 Participants ... 73
3.4.2 Primary and secondary tasks ... 73
3.4.3 Procedure ... 75
3.4.4 Outcome measures ... 76
3.4.5 Data analysis ... 77
3.5 Results ... 79
3.5.1 Phase 1 ... 79
3.5.2 Phase 2 ... 80
3.6 Discussion ... 82
3.6.1 Discussion of experimental findings ... 82
3.6.2 Limitations and considerations for CL measurement in simulation instructional design research ... 85
3.7 Conclusion ... 87

CHAPTER 4: Measuring cognitive load: performance, mental effort and simulation task complexity ... 88
4.1 Preamble ... 89
4.2 Abstract ... 90
4.3 Introduction ... 91
4.4 Methods ... 93
4.4.1 Participants and randomization ... 93
4.4.2 Primary and secondary tasks ... 94
4.4.3 Experimental protocol ... 96
4.4.4 Outcome measures ... 97
4.4.5 Statistical analysis ... 98
4.5 Results ... 98
4.5.1 Participant demographics ... 98
4.5.2 Knot-tying performance ... 99
4.5.3 Cognitive load ... 100
4.6 Discussion ... 101
4.6.1 Interpretation of experimental findings ... 101
4.6.2 Implications ... 104
4.6.3 Limitations ... 107
4.7 Conclusion ... 108

CHAPTER 5: Operationalizing Elaboration Theory for simulation instructional design: a Delphi study ... 109
5.1 Preamble ... 110
5.2 Abstract ... 111
5.3 Introduction ... 112
5.4 Methods ... 114
5.4.1 Panelist selection ... 114
5.4.2 Item generation ... 115
5.4.3 Survey instrument ... 115
5.4.4 Delphi procedure ... 117
5.4.5 Statistical analysis ... 117
5.5 Results ... 118
5.5.1 Expert panelists ... 118
5.5.2 Item generation ... 119
5.5.3 Summary of Delphi process ... 120
5.6 Discussion ... 125
5.6.1 Identifying conditions impacting complexity of LP for novices to guide scenario development ... 125
5.6.2 Applying ET to simulation instructional design using the SCM-Delphi process ... 126
5.6.3 Limitations and future directions ... 128

CHAPTER 6: Competing effects? The impact of task complexity and context similarity on novices’ simulation-based learning ... 131
6.1 Preamble ... 132
6.2 Abstract ... 133
6.3 Introduction ... 134
6.4 Methods ... 136
6.4.1 Study population ... 136
6.4.2 Simulation scenarios and apparatus ... 137
6.4.3 Experimental design and procedure ... 139
6.4.4 Outcome measurement ... 141
6.4.5 Data analysis ... 142
6.5 Results ... 142
6.5.1 Participant demographics ... 142
6.5.2 Skill acquisition phase ... 143
6.5.3 Retention phase ... 144
6.5.4 Transfer phase ... 147
6.6 Discussion ... 147
6.6.1 Effect of task complexity on skill acquisition and retention ... 148
6.6.2 The impact of task complexity and context similarity on transfer ... 149
6.6.3 Limitations and strengths ... 151
6.6.4 Implications and directions for future inquiry ... 152

CHAPTER 7: General Discussion ... 154
7.1 Preamble ... 155
7.2 Implications for the measurement of CL ... 156
7.2.1 Methodological and practical implications for CL measurement ... 156
7.2.2 Considerations when using subjective rating and secondary task CL measures ... 158
7.3 Implications for designing SBET curricula to manage task complexity ... 162
7.3.1 Combining CLT with specific instructional design frameworks ... 162
7.3.2 Factors contributing to task complexity ... 164
7.4 Implications for current understanding of learning and transfer ... 167
7.4.1 The effect of task complexity and context similarity on skills transfer ... 167
7.4.2 Implications of learners’ strategies to manage CL ... 169
7.5 Implications for the use of CLT in HPE research ... 175
7.5.1 Role of WM in procedural skills training ... 175
7.5.2 Differentiating between intrinsic, extraneous, and germane load ... 177
7.5.3 The utility of CLT in education research: what makes a good theory? ... 180
7.6 Limitations ... 183

CHAPTER 8: Future directions and conclusion ... 188
8.1 Directions for future inquiry ... 189
8.1.1 Measurement of CL ... 189
8.1.2 Disentangling task complexity, context similarity, and information processing specificity ... 190
8.1.3 The role of fidelity in simulation instructional design ... 193
8.2 Concluding statement ... 194

References ... 196
Appendices ... 221


List of Tables

Table 1-1: Arguments in support of simulation in HPE ... Page 5
Table 1-2: ‘Key’ instructional design features identified from simulation research reviews (2005-2013) ... Page 9
Table 1-3: Typologies of simulation fidelity ... Page 15
Table 1-4: CLT design strategies ... Page 46
Table 2-1: Research phases, questions and hypotheses ... Page 61
Table 4-1: Participant demographics by case assignment ... Page 100
Table 5-1: Expert panel demographics ... Page 120
Table 5-2: Internal consistency and panelist-group correlations over Delphi rounds ... Page 122
Table 5-3: Agreement on conditions for the epitome (simple) case of LP ... Page 124
Table 5-4: Panelists’ ranking of conditions increasing LP complexity ... Page 125
Table 6-1: Simulation scenario characteristics ... Page 138
Table 6-2: Baseline, demographic and time to retention data ... Page 144

List of Figures

Figure 1-1: Atkinson and Shiffrin’s ‘multiple store’ model of human memory ... Page 30
Figure 1-2: Cowan’s embedded-process model of working memory ... Page 33
Figure 1-3: CL as a function of instructional design and expertise ... Page 40
Figure 3-1: Primary and secondary task apparatus ... Page 74
Figure 3-2: Experimental design for phases 1 and 2 ... Page 75
Figure 3-3: Novice vs. expert secondary-task performance under single- and dual-task conditions for RRT and SDR ... Page 80
Figure 3-4: Novice primary task performance ... Page 81
Figure 3-5: Novice secondary task performance and mental effort during simulation-based knot-tying training ... Page 82
Figure 4-1: Simple and complex simulation scenarios ... Page 95
Figure 4-2: Experimental design and participant flow ... Page 96
Figure 4-3: Knot-tying performance ... Page 100
Figure 4-4: Cognitive load ... Page 101
Figure 5-1: Dynamic visual display used by participants to rank conditions from ‘adding no complexity’ to ‘adding extreme complexity’ ... Page 116
Figure 6-1: Experimental design and participant flow ... Page 139
Figure 6-2: Participants’ LP performance during skill acquisition, retention and transfer ... Page 145
Figure 6-3: Participants’ CL during skill acquisition, retention, and transfer ... Page 146

List of Appendices

Appendix 1: Secondary-task type and sensitivity of cognitive load measurement in simulation ... Page 221
Appendix 2: Delphi survey item generation data sources ... Page 229
Appendix 3: Delphi survey instructions for expert panel ... Page 231
Appendix 4: Initial conditions for Delphi survey ... Page 233
Appendix 5: Simple, complex and very complex simulation environments ... Page 234
Appendix 6: LP instructional handout ... Page 236
Appendix 7: LP multiple choice questions ... Page 247
Appendix 8: Checklist and GRS for LP performance and communication skills ... Page 251
Appendix 9: Supplemental Data Analyses ... Page 255

Abbreviations

ANOVA: Analysis of Variance
CATLM: Cognitive-Affective Theory of Learning with Media
CBVI: Computer-Based Video Instruction
CJD: Creutzfeldt-Jakob Disease
CL: Cognitive Load
CLT: Cognitive Load Theory
CSF: Cerebrospinal Fluid
EEG: Electroencephalography
EL: Extraneous Load
ET: Elaboration Theory
GL: Germane Load
GRS: Global Rating Scale
HIV: Human Immunodeficiency Virus
HPE: Health Professions Education
ICC: Intra-class Correlation Coefficient
ICSAD: Imperial College Surgical Assessment Device
IL: Intrinsic Load
LP: Lumbar Puncture
LTM: Long-Term Memory
MCQ: Multiple Choice Question
RMSE: Root Mean Square Error
RRT: Recognition Reaction Time
SBET: Simulation-Based Education and Training
SCM: Simplifying Conditions Method
SDR: Signal Detection Rate
SM: Sensory Memory
SP: Standardized Patient
SRME: Subjective Rating of Mental Effort
SRT: Simple Reaction Time
TAP: Transfer-Appropriate Processing
TERP: Task-Evoked Pupillary Response
VAS: Visual Analog Scale
WM: Working Memory

__________________________________________

Chapter 1: Introduction and Literature Review

__________________________________________


1.1 General Introduction and Overview

Simulation, when defined broadly as the “imitation of some real thing, state of affairs, or process” (Rosen, 2008, p. 157), stretches back over centuries and spans many areas of human endeavor (Bradley, 2006). Military applications of simulation have their roots in the invention of chess in the 6th century (Bradley, 2006; Perkins, 2007); the aviation industry has used simulation for personnel training since Link invented the blue box flight trainer in 1929 (Rosen, 2008); and influential events like Three Mile Island (1979) and Chernobyl (1986) led the nuclear power industry to adopt simulation as a means to improve performance and mitigate potentially devastating human error (Bradley, 2006). Simulation in healthcare has a similarly extensive history: acupuncture was taught using life-size bronze statues beginning in 10th-century China, whereas in the mid-18th century surgeons and midwives in Britain, Italy and France developed obstetrical simulators to train students in various procedural skills, including childbirth (Owen, 2012). These examples highlight that educators in the health professions have long recognized that simulation can serve as a useful adjunct to clinical training.

The use of simulation in health professions education (HPE) has increased dramatically in the last 25 years. This is due in part to technological advances, which have facilitated the recreation of human physiological processes and disease states in a manner useful for training, as well as to other factors that have eroded opportunities for clinical instruction (Issenberg et al., 1999). With the expansion of educational applications for simulation in healthcare, there has been parallel growth in research in this area. As healthcare simulation scholarship matures, so too do our questions about what should be the focus of inquiry and how best to advance the science of simulation (Gaba, 2012; Haji et al., 2014a). In particular, leading scholars have called for further research directed at simulation instructional design, in order to identify what works, for whom, and under what circumstances (Cook et al., 2013b; Issenberg et al., 2011). The use of established learning theories and conceptual frameworks is particularly important to this line of inquiry, both to elucidate the mechanisms underlying the instructional designs under investigation and to improve the generalizability of subsequent research findings (Cook et al., 2013b). In this vein, the program of research presented in this dissertation has been conducted using such a theory-based approach. In so doing, the author hopes to advance the science of healthcare simulation and
clarify an important aspect of its instructional design: the relationship between task complexity, cognitive load (CL) and novices’ simulation-based learning.

This dissertation is organized in a ‘paper format’ consisting of a series of self-contained chapters presenting individual research studies that have been submitted or published in various HPE journals (Chapters 3-6). These studies are framed by a series of inter-related research aims, questions, and hypotheses (see Chapter 2); a general discussion of the broader implications and limitations of this work (presented in Chapter 7); and finally the conclusions and directions for future inquiry arising from this program of research (Chapter 8). To place this body of work in an appropriate context for the reader, this introductory chapter provides a synthesis of the literature that has informed the research presented herein, including existing gaps in knowledge and the theoretical frameworks that are the focus of inquiry.

The first section of this introductory chapter begins with an overview of the state of the science in healthcare simulation, including the factors that have led to its growth in HPE, the evidence base that has accumulated supporting its use, and priorities for future research. This is followed by a discussion of the existing controversies in simulation instructional design related to the concept of fidelity, from an operational, theoretical and empirical perspective. The second section ends with a reconceptualization of fidelity as complexity, which serves as one potential explanation for the limited evidence supporting the theorized relationship between fidelity and skills transfer among novice learners, and which is of particular relevance to this dissertation. The third section presents an overview of Cognitive Load Theory (CLT), the principal theoretical framework used throughout this work. Herein, the theorized structure of human cognitive architecture, implications for learning, approaches to measuring CL, and instructional design principles stemming from CLT are reviewed, and existing gaps in the application of CLT to simulation instructional design are presented. The final section provides an overview of the concept of learning and the related constructs of skill acquisition, retention, and transfer, so the reader can understand the rationale behind the outcome measures selected in the research studies subsequently presented.


1.2 Simulation-based education and training: the state of the science

1.2.1 The impetus for simulation in health professions education

In the last two decades, the global landscape of HPE has undergone significant upheaval.

The availability of patients and time for clinical teaching has been substantially reduced by (i) rapid technological advances and improved healthcare delivery that have reduced hospital stays and clinic visits; (ii) new and more complex techniques for investigating, diagnosing, and managing disease; and (iii) ever-increasing costs that propagate the need for efficiency in healthcare delivery (Haji et al., 2014b; Issenberg & Scalese, 2007; Kneebone, 2010; Reznick & MacRae, 2006). Policy reforms that restrict clinical duty hours and the growing emphasis on patient safety have further reduced opportunities for health professions trainees to practice and receive feedback on their performance in the clinical setting (Haji et al., 2013; Haji et al., 2014b; Issenberg et al., 1999; Reznick & MacRae, 2006). As a result, learners are expected to acquire requisite knowledge and skills in less time, and in more complex environments, than ever before (Haji & Steven, 2014).

At the same time, health professions educators have recognized that the inherent diversity and variability associated with the clinical environment can have a substantial impact on its educational value (Issenberg et al., 1999). Clinical training is opportunistic by nature; educators depend on the availability of a sufficient volume of patients with specific disease processes, clinical findings, etc. to ensure that trainees are exposed to enough cases to achieve competence (Reznick & MacRae, 2006). There is widespread concern that with the aforementioned changes in healthcare, today’s trainees simply cannot gain the breadth of experience required to manage the clinical problems they will encounter in practice, particularly with respect to surgical and procedural skills (Bell, 2010; Chikwe, 2004; Kneebone, 2010). Even when there is a sufficient number of cases, clinical training is often unsystematic because educational conditions cannot always be controlled (Gaba, 2007). For instance, a patient’s co-morbidities, the physical location of a clinical encounter (emergency room, clinic, operating theatre, etc.), the time of day, and interruptions due to other medical crises are all factors beyond the educator’s control, but which may have important implications for novice learning. From an educator’s perspective, this limited control means that such training opportunities are at the mercy of ad hoc clinical availabilities rather than being purposively designed experiences that align with learning needs (Ziv et al., 2003).

These constraints, coupled with the recent adoption of educational milestones and competency-based education in the United States, Canada, and other parts of the world (Frank et al., 2010; Nasca et al., 2012; RCPSC, 2014), have renewed the call for novel approaches to training and assessment that can address the limitations of traditional apprenticeship models (Issenberg et al., 1999; McGaghie et al., 2014). As a result, simulation-based education and training (SBET) is now being used as a pedagogical tool to augment clinical teaching and support the assessment of clinical competence in many domains of healthcare (Haji et al., 2014b; Issenberg et al., 2011; Issenberg & Scalese, 2007; Kneebone, 2005). In fact, the tension between ensuring adequate training of health professionals and ethical obligations to provide optimal patient care has led some authors to suggest that simulation is “a necessity” and an “ethical imperative” in our field (Kneebone, 2005; Ziv et al., 2003).

Simulation educators have made various additional arguments to support the use of SBET in the health professions, which are summarized in Table 1-1. In the wake of these arguments, a vast array of simulation modalities has been developed, ranging from basic inanimate bench-top models to highly sophisticated and dynamic systems that respond to user actions (e.g. full-body patient simulators or virtual reality computer systems). These modalities serve a variety of purposes, ranging from task-training individual students on basic technical skills like venipuncture or knot-tying, to immersive scenario-based training of healthcare teams on cognitive and affective skills (e.g. crisis resource management and communication; Issenberg & Scalese, 2007; Kneebone, 2005). In fact, the use of simulation is so widely accepted that the World Health Organization now strongly recommends its use as an educational method to facilitate transformation and scale-up of health professionals’ education and training across the globe (WHO, 2013).

Table 1-1: Arguments in support of simulation in HPE

Patient Safety: Simulation training provides an ideal, ‘no risk’ environment that allows learners to engage in repeated, deliberate practice without putting patients at risk. ‘Pre-training’ using simulation allows learners to delay their first encounter with a real patient until they are at a higher level of clinical or technical proficiency (Gallagher et al., 2005; Kneebone, 2010; Malone et al., 2010; Ziv et al., 2003).

Realism/Authenticity: Simulations can be designed as lifelike representations of complex clinical situations, which allows learners to practice clinical skills under realistic conditions (Beaubien & Baker, 2004; Ziv et al., 2003).

Instructional Control: Simulation can facilitate learning related to specific findings, conditions, complications, procedures, and management situations. Educators can control patient reactions to address specific learning goals in ways that would not be possible in the clinical setting (Gaba, 2007; Issenberg et al., 1999; Ziv et al., 2003).

Variability: Simulation facilitates exposure to various clinical situations, including atypical presentations, rare diseases, critical incidents, near misses, and crises. It also facilitates training of a range of competencies, including technical, cognitive, communication, and teamwork skills (Gaba, 2007; Ziv et al., 2003).

Standardization: Simulators are predictable and thus useful for standardizing training and assessment. They can be used repeatedly with a high degree of consistency between learning or assessment events (Gaba, 2007; Issenberg et al., 1999).

Availability: Simulators can be used on demand to fit curriculum needs, as availability is not constrained by the resources of the clinical environment. Thus training can be structured around the educational goals of the instructor and learner, rather than the clinical problem of the presenting patient (Gaba, 2007; Issenberg et al., 1999).

Learner-centeredness: In simulation, the focus is on the learner’s objectives and needs. Simulation provides a safe and forgiving environment in which errors can be made, and trainees can observe the consequences associated with their actions. Formative assessment can be provided using structured feedback, with the goal of facilitating acquisition and maintenance of clinical competence (Issenberg et al., 2005; McGaghie et al., 2014; Ziv et al., 2003).

A number of definitions of simulation have been put forward in the healthcare literature, most of which reference “devices” or “sets of conditions” that attempt to recreate characteristics of the real world (specifically the clinical environment) or present patient problems “authentically” (Beaubien & Baker, 2004; Issenberg & Scalese, 2007). However, such definitions may be interpreted too narrowly, overemphasizing instructional technology above the educational methods used to facilitate learning. There is a longstanding appreciation in the related field of educational technology that media (including simulation) will never influence learning directly, but rather serve as a vehicle that delivers instruction (Artino & Durning, 2012; Clark, 1994). As articulated by Beaubien and Baker, “like any other tool, the effectiveness of simulation technology depends on how it is used” (2004, p. i55). Thus, this dissertation adopts David Gaba’s definition of simulation as “a technique, not a technology, to replace or amplify real experiences with guided experiences…that evoke or replicate substantial aspects of the real world in a fully interactive fashion” (Gaba, 2007, p. 126).

As emphasized in this definition, the author posits that it is the pedagogical features of a simulation experience (subsequently referred to as the simulation’s instructional design), rather than specific aspects of simulation media or technology, that determine its effectiveness as an educational method. In this context, “instruction” refers to an educator’s manipulation of a trainee’s experience for the purpose of fostering learning. A central aim of this program of research is to investigate how to manipulate training experiences to optimize learning outcomes (the so-called “science of instruction”; see Mayer, 2010). As such, the research aims in this dissertation have been framed to inform the overarching question articulated by Artino and Durning (2012): what are the key instructional methods in healthcare simulation training that positively influence learning and transfer?

1.2.2 The evidence-base supporting simulation in healthcare settings

As our experience with SBET in healthcare settings has grown, so too has the volume of research within the field. In the early stages, published reports were principally descriptions of simulation environments, methods, and technologies, in an attempt to demonstrate the versatility of the method and its applicability to various clinical disciplines (Haji et al., 2014b; Issenberg et al., 1999). More recently, research in the field has been dominated by “justification” studies that compare simulation interventions with no training or traditional educational practice, in an attempt to establish the effectiveness of SBET for various cadres of health professions learners and across multiple training contexts (Cook, 2010; Cook et al., 2008).

Within this justification research, an impressive evidence base has accumulated demonstrating the effect of SBET on various educational and healthcare outcomes. This was eloquently demonstrated in a recent systematic review and meta-analysis by Cook et al. (2011), who reported on the results of a comprehensive synthesis of over 10,000 studies of technology-enhanced simulation involving health professions learners. Their report included 609 studies[i] comparing simulation with no intervention across 35,266 learners; across this large volume of data, the investigators consistently observed large, significant effects (pooled effect size >1.0) in favour of SBET on participants’ knowledge acquisition, time to task completion, technical skills (measured by global ratings or efficiency metrics), and procedural outcomes (e.g. final product analysis or task completion). Moderate, significant effects (pooled effect size 0.5-1.0) were also observed on outcomes related to participants’ clinical behaviour (while delivering care) and patient outcomes (Cook et al., 2011).

[i] The term ‘no intervention’ refers to experimental studies in which the control group receives no formal instruction at all, or pre-post studies in which outcomes are assessed in the same group of learners before and after simulation training (with the “pre” test serving as a ‘no intervention’ control).

In a subsequent review, these investigators further synthesized 92 studies comparing SBET with other instructional approaches, wherein small to moderate effects favouring SBET over non-simulation instruction were observed for learners’ satisfaction, knowledge, time to task completion, technical skill, and procedural outcomes (Cook et al., 2012). Small to moderate effects were also observed for clinical behavior and patient outcomes; however, these were not statistically significant. The results of the latter review suggest that, particularly for technical skills, hands-on practice through SBET may lead to superior gains compared to non-simulation instruction (e.g. lecture, small group discussion, video training), and that the instructional design of the training (rather than the use of simulation per se) likely accounts for some of the observed differences between educational media (Cook et al., 2012).

The findings from these two reports align with other published reviews of healthcare simulation, which corroborate the positive and sustained effects of SBET across a wide range of surgical, procedural, communication, teamwork and decision-making skills (Gurusamy et al., 2008; Kennedy et al., 2013; McGaghie et al., 2014; Merien et al., 2010; Nestel et al., 2011; Sutherland et al., 2006). They also add to the growing body of literature demonstrating that skills learned through SBET are transferrable to the clinical setting (Teteris et al., 2012). This is evidenced by a number of reports detailing that simulation-trained learners demonstrate shorter procedure times, fewer technical errors, improved adherence to practice guidelines and improved clinical performance (when compared with traditionally trained peers) across multiple domains, including minimally invasive and open procedural skills (e.g. laparoscopy, colonoscopy, central line insertion, cataract surgery); algorithm-based tasks (e.g. Advanced Cardiac Life Support); and knowledge-based tasks (e.g. auscultation for cardiac murmur diagnosis) (Dawe et al., 2014; Ma et al., 2011; Madenci et al., 2014; Seymour, 2008; Sturm et al., 2008; White et al., 2012).

Given this extensive evidence, at present there is little doubt that simulation “works”: time spent in simulation training can result in meaningful learning (particularly in comparison to no training, and to a lesser degree in comparison to non-experiential modalities such as lecture and video training), which can translate into improved clinical performance and downstream patient outcomes.
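To make the pooled effect sizes cited above concrete, the short sketch below illustrates how standardized mean differences from individual studies are combined under a random-effects model (the DerSimonian-Laird approach commonly used in meta-analyses of this kind). The five study effect sizes and variances are invented purely for illustration; they do not reproduce the data analyzed by Cook et al. (2011).

# Illustrative random-effects meta-analysis (DerSimonian-Laird pooling).
# The study effect sizes and variances below are hypothetical values for
# demonstration only; they do not reproduce any published analysis.
import math

# (effect size, variance) for five hypothetical simulation studies
studies = [(1.4, 0.05), (0.6, 0.04), (1.2, 0.09), (0.8, 0.06), (1.3, 0.08)]
g = [e for e, _ in studies]
w = [1.0 / v for _, v in studies]  # fixed-effect (inverse-variance) weights

# Q statistic quantifies between-study heterogeneity
g_fixed = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
q = sum(wi * (gi - g_fixed) ** 2 for wi, gi in zip(w, g))

# Tau^2 is the estimated between-study variance (floored at zero)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights add tau^2 to each study's sampling variance
w_re = [1.0 / (v + tau2) for _, v in studies]
g_pooled = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))

print(f"Pooled effect size: {g_pooled:.2f} "
      f"(95% CI {g_pooled - 1.96 * se:.2f} to {g_pooled + 1.96 * se:.2f})")

With these fabricated inputs the script prints a pooled estimate of roughly 1.0. Under Cohen’s conventional benchmarks (0.2 small, 0.5 moderate, 0.8 large), pooled estimates above 1.0, such as those Cook et al. (2011) report against no-intervention controls, represent large effects.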


Notwithstanding these findings, the aforementioned reports fail to inform simulation educators about the key features of SBET that make it effective for learning, as they do not specifically consider what aspects of simulation lead to improved learning outcomes. To address this gap, three seminal reviews published in the last decade have attempted to elucidate, in broad terms, the “best practices” for designing SBET in healthcare settings. The first was a narrative review published by Issenberg et al. (2005) as a part of the Best Evidence in Medical Education Collaboration; the second, a critical narrative review of simulation research published between 2003 and 2009 by McGaghie et al. (2010); and the third, a systematic review and meta-analysis investigating the comparative effectiveness of instructional design features published by Cook et al. (2013b). Table 1-2 provides a comparative summary of the findings from these reports.

Table 1-2: ‘Key’ instructional design features identified from simulation research reviews (2005-2013)

Instructional design features identified, in order of importance as indicated by the investigators:

Issenberg et al. 2005 (systematic review and qualitative synthesis of simulation research, 1969-2003):
1. Feedback
2. Repetitive practice
3. Curriculum integration (a)
4. Range of difficulty
5. Multiple learning strategies
6. Clinical variation (b)
7. Controlled environment (c)
8. Individual learning (d)
9. Clearly defined outcomes or benchmarks for learner achievement (e)
10. Simulator validity (f)

McGaghie et al. 2010 (critical narrative review of simulation research, 2003-2009):
1. Feedback
2. Deliberate practice (g)
3. Curriculum integration (a)
4. Outcome measurement
5. Simulation fidelity (f)
6. Skill acquisition and maintenance (h)
7. Mastery learning (e)
8. Transfer to practice
9. Team training
10. High-stakes testing
11. Instructor training
12. Educational and professional context

Cook et al. 2013 (systematic review and meta-analysis of simulation research published up to 2011):
1. Range of task difficulty
2. Repetitive practice
3. Distributed practice
4. Cognitive interactivity (i)
5. Multiple learning strategies
6. Individualized learning (d)
7. Mastery learning (e)
8. Feedback
9. (More) time spent learning
10. Clinical variation
11. Group practice (inconsistent results)
12. Curriculum integration (a) (insufficient evidence)

Description of features:
(a) ‘Curriculum integration’ refers to the integration of simulation activities with other educational methods (e.g. lecture, problem-based learning, and clinical teaching) within a broader curriculum
(b) ‘Clinical variation’ refers to the presence of a wide variety of clinical conditions
(c) ‘Controlled environment’ relates to the control over the learning environment, so that trainees can make, detect, and correct errors without adverse consequences
(d) ‘Individual learning’ refers to learners being actively involved in reproducible, standardized learning experiences that are tailored to their learning needs
(e) ‘Clearly defined benchmarks’ aligns with the mastery learning model, an approach to competency-based education that is comprised of 7 complementary features designed to ensure a learner achieves a specific threshold of proficiency on all educational objectives, regardless of the training time required to achieve this outcome (cf. Cook et al., 2013a; McGaghie et al., 2014)
(f) In this review, ‘validity’ aligns with the concept of ‘fidelity’ and refers to the degree of realism a simulation provides in approximating clinical situations, principles and tasks (Issenberg et al., 2005)
(g) ‘Deliberate practice’ refers to an educational approach with 9 specific characteristics designed to enhance skill acquisition and retention (cf. Ericsson, 2004; McGaghie et al., 2011)
(h) ‘Skill acquisition and maintenance’ refers to the frequency and timing of SBET and its relationship to skill acquisition and decay
(i) ‘Cognitive interactivity’ is described as training that promotes cognitive engagement of learners using strategies like intentional task sequencing, etc.

While these reviews summarize the best available evidence on simulation instructional design, all these authors have acknowledged that this evidence base is weak (although improving with time), and many important questions remain unanswered (Issenberg et al., 2005; Cook et al., 2013b; McGaghie et al., 2010). For example, the existing literature demonstrates that feedback is an important feature of simulation instructional design, but provides limited guidance for educators regarding how the type, timing, frequency, and delivery of feedback influences simulation-based learning (Cheng et al., 2014; Cook et al., 2013b; Hatala et al., 2014; McGaghie et al., 2010). Similarly, while presenting a “range of task difficulty” is noted to have the largest effect on learning outcomes among the features reviewed by Cook et al. (2013), little is understood about how to align the difficulty of a simulation task with learners’ level of experience, what features to manipulate when increasing task difficulty, how to sequence tasks varying in difficulty to optimize educational outcomes, and when increasing the challenge associated with a simulation task is desirable (or not) within a given learning episode. It is clear from these reviews that significant gaps in our understanding of the instructional design features underpinning SBET still exist, leaving many questions about what works, for whom, and under what circumstances (Cook, 2010; Norman, 2009b).

1.2.3 Priorities for future inquiry: the research we should be doing in healthcare simulation

Many scholars in the field attribute the current state of simulation research to uncoordinated and unfocused “investigator initiated” studies that are overly reliant on describing local uses of simulation, or on comparing it to no-intervention controls or other media to justify its use (Cook, 2010; Cook et al., 2008; Gaba, 2012; Issenberg et al., 2011; Stefanidis et al., 2012a). No-training comparison studies simply demonstrate that time spent learning leads to improved educational outcomes (Cook, 2010; Norman et al., 2012), while media comparisons often
confound instructional design features (e.g. repetitive practice) with the modality under investigation (e.g. simulation, lecture). These study designs fail to clarify whether observed effects are attributable to the modality under investigation, its instructional design, or both, and thus do little to advance the science of simulation or inform educational practice beyond the local context in which the study was conducted (Cook, 2010; Cook et al., 2012; Issenberg et al., 2011). As a result, in recent years leaders in healthcare simulation and HPE have called for a shift away from such “description” and “justification” studies, and towards programs of research that clarify when, for whom, how, and why our interventions do or do not work (Cook, 2010; Cook et al., 2008; 2013b; Eva, 2010; Issenberg et al., 2011; Regehr, 2010). Underlying these calls-to-action is an acknowledgement that medical education in general, and simulation in particular, are complex interventions (Dieckmann et al., 2011; Haji et al., 2014a). Thus, as stated by Eva (2010, p. 4), advancing knowledge in the field requires moving “away from research that is intended to prove the effectiveness of our educational endeavours and towards research that aims to understand the complexity inherent in those activities.” In the simulation context, this requires research that seeks to generate a deeper understanding about the “active ingredients” in simulation-based learning and research environments that bring about desired learning effects (Dieckmann et al., 2011; Haji et al., 2014a).

Given the need for such a shift, it is not surprising that instructional design research emerged as a top priority in healthcare simulation scholarship during two recent agenda-setting meetings convened by the leadership of the Society for Simulation in Healthcare and the Society in Europe for Simulation Applied to Medicine (Dieckmann et al., 2011; Issenberg et al., 2011). Advancing this line of inquiry requires a fundamental change in the design of simulation research, such that future studies move beyond simplistic comparisons between simulation and ‘no intervention’ or ‘traditional practice’ (Cook, 2010). We also need to move beyond comparisons of the presence or absence of specific instructional design features (e.g. feedback) (Weinger, 2010). Instead, future studies need to systematically investigate the effect of one variation in a given instructional design feature against another, as the results of such studies have broad implications for SBET research and practice (Cook, 2010; Cook et al., 2013b). However, even these simulation-versus-simulation comparisons will have limited generalizability if they are not grounded in established theories or conceptual frameworks (Cook et al., 2013b; Dieckmann et al., 2011). Unfortunately, this is a particular weakness in current healthcare simulation research (Bligh & Bleakley, 2006; Issenberg et al., 2011). Thus, as acknowledged in the aforementioned agenda-setting reports, future simulation research needs to be more strictly grounded in established theories of instruction and learning, in order to: (i) elucidate how such theories inform the design and structure of simulation programs (e.g. frequency, timing, type of training); (ii) allow for prediction and hypothesis testing that reveals the mechanisms linking the instructional designs under investigation and the observed outcomes; (iii) inform simulation educators and researchers about the applicability of theoretical concepts and empirical findings about human learning from non-medical domains within HPE settings; and (iv) link related studies together in a meaningful way, based on a shared conceptual foundation and vocabulary (Dieckmann et al., 2011; Issenberg et al., 2011).

One theoretical concept cited in the research agenda from the Utstein-style summit (Issenberg et al., 2011) is particularly germane to this dissertation. Within the theme of instructional design research, the summit attendees articulated the potential value of cognitive and educational psychology theory to illuminate the interaction between learning task complexity and CL imposed on a learner, as well as the subsequent implications of this interaction for learning in the simulated setting. This led to the articulation of the question: how should theories of cognitive load inform the design and structure of simulation programs, courses, and scenarios based on the complexity of tasks required for learners to acquire and maintain [knowledge and skills]? Such questions cannot be addressed in a single study; they require a series of investigations that build upon each other in a sequential and iterative manner (Dieckmann et al., 2011).^ii This underscores the importance of a programmatic approach to simulation research, in which existing theoretical assumptions are critically appraised, tested, and revised based on evidence that accumulates from this process (Cook et al., 2008; Haji et al., 2014a). Acknowledging that simulation is a complex intervention, such a program of research would benefit from a phased approach that involves: (i) identification of theoretical frameworks relevant to the instructional design feature under investigation (as well as competing theoretical perspectives); (ii) modeling of simulation interventions (and comparison interventions) based on these theories and the existing evidence base; (iii) rigorous piloting of these interventions and outcome measures to ensure the active ingredients are well understood and that selected metrics capture the outcomes of interest; and (iv) evaluation of the intervention against a competing, theoretically grounded design (Haji et al., 2014a).

^ii Here, the term “programmatic” refers to a program of research, as opposed to research about an (educational) program.

1.2.4 Synthesis

Dramatic changes to healthcare systems around the world have led to a substantial increase in the use of simulation as a pedagogical tool in the education of health professionals. Parallel growth in healthcare simulation research has been observed; however, the preponderance of studies in the field concentrates on describing or justifying the use of simulation in various clinical contexts, rather than clarifying the instructional features that make it effective for learning (Cook et al., 2008; Haji et al., 2014a). Similarly, while recent high-quality reviews plainly demonstrate that time spent in simulation training can lead to significant gains in educational and healthcare outcomes (Cook et al., 2011), the research studies included in these syntheses are not grounded in established theories of learning. Consequently, the existing body of literature is limited in its ability to explain why simulation works, and how it should be designed to optimize learning (Cook, 2010). In response, a number of leading scholars have called for programmatic lines of inquiry that investigate the “active ingredients” underpinning SBET interventions (Cook et al., 2013b; Dieckmann et al., 2011; Issenberg et al., 2011). In particular, the systematic application of instructional theories to investigate the optimal design of simulation in healthcare settings has been advocated (Issenberg et al., 2011). The research outlined in this dissertation has been developed to reflect such a programmatic approach, drawing on various theories from cognitive psychology, educational psychology, and motor learning to clarify one important aspect of simulation instructional design: the relationship between task complexity, CL, and learning among novices engaged in procedural skills training.


1.3 Current controversies: the role of ‘fidelity’ in simulation instructional design

1.3.1 Conceptualizations of fidelity in SBET

In all fields where simulation is used for education and training, the concept of fidelity is often cited as one of the most important (and one of the most controversial) features of simulation instructional design. The term is generally used in reference to how well a simulation represents or replicates reality (Alessi, 1988; Dieckmann et al., 2007; Maran & Glavin, 2003), and is often spoken of as an intrinsic property of a given simulation system (Liu et al., 2009). Various other terms have been used in the HPE literature to describe the same underlying construct, including (i) validity, referring to “the degree of realism…a simulation provides as an approximation to complex clinical situations, principles, and tasks” (Issenberg et al., 2005, p. 24); (ii) presence, which has been used to compare simulation-based environments to their real-world counterparts (Dieckmann et al., 2007); and (iii) authenticity, which is conceptualized as the degree to which a simulated system faithfully replicates “the diverse and rich contexts of performance” encountered in clinical work environments, with the goal of maximizing this replication so that students experience tasks under the conditions in which they “typically and naturally occur” (Kneebone, 2010, p. i48). At their core, all of these terms refer to the extent to which the appearance and behavior of the simulation system (i.e. the simulator and the associated training environment) match the appearance and behavior of the simulated (clinical) system (Issenberg & Scalese, 2007; Maran & Glavin, 2003).

There appears to be an almost universal drive among learners, educators, and simulation designers to seek out high-fidelity simulation, based on the widespread assumption that training experiences and effectiveness improve in proportion to increases in the level of fidelity of a simulated task (Beaubien & Baker, 2004; Dieckmann et al., 2007; Grierson, 2014; Hamstra et al., 2014; Salas, 2002; Scerbo & Dawson, 2007). This belief is rarely challenged (Hamstra et al., 2014) and appears to permeate all domains where simulation is used as an educational tool, including aviation, nuclear power, military operations, and healthcare (Scerbo & Dawson, 2007). The concept is so pervasive in HPE that the terms simulation and high-fidelity simulation are used almost interchangeably (Beaubien & Baker, 2004). However, many scholars have argued that fidelity is not a one-dimensional construct (Beaubien & Baker, 2004; Norman et al., 2012; Rudolph et al., 2007), and treating it as such sets up a false dichotomy whereby simulations are regarded as either high-fidelity (in so far as they present performance characteristics, contexts, and scenarios that look and feel like the clinical setting) or low-fidelity (in reference to simulations that are less realistic or that reduce to-be-learned skills to simpler constructs or constituent parts) (Beaubien & Baker, 2004; Grierson, 2014). It has been suggested that this dichotomy is too simplistic, as the concept of fidelity encompasses many facets of simulation instructional design, including characteristics of the simulation that mediate sensory impressions, the nature of the learning objectives and task demands, the context or environment of training, and other factors that may influence trainees’ engagement with the learning task (Beaubien & Baker, 2004; Cook et al., 2013b). As such, any simulation can be viewed as high- or low-fidelity depending on which of these facets are emphasized or ignored (Hamstra et al., 2014).

Multiple typologies of simulation fidelity have been proposed in an attempt to address this issue and bring clarity to an otherwise vague concept. These typologies describe various overlapping aspects of fidelity (including physical, visual-audio, equipment, environment, motion, psychological-cognitive, task, and functional considerations), each of which refers to some physical, cognitive, or affective dimension of a simulation system (Allen et al., 2009; Rehmann et al., 1995). A summary of common typologies that have been cited in the healthcare simulation literature is provided in Table 1-3.

Table 1-3: Typologies of simulation fidelity

Miller, 1954: Miller distinguished between two types of fidelity: engineering fidelity, i.e. the degree to which a training device or environment replicates the physical characteristics of the criterion task; and psychological fidelity, i.e. the degree to which the skill or skills required in the real task are captured in the simulation.

Hays & Singer, 1989: Hays and Singer suggest fidelity is “how similar a training situation must be, relative to the operational situation, in order to train most efficiently” (p. 1). Similarity is a function of: (i) physical characteristics (e.g. visual and kinesthetic cues) and (ii) functional characteristics (how stimulus requirements map onto response options in training vs. operational situations).

Allen et al., 1991: Allen et al. argue all fidelity typologies can be subsumed by two dimensions: (i) physical fidelity, i.e. the degree to which a simulation reproduces the physical appearance and activities of the reference system, and (ii) functional fidelity (what a simulation ‘does’), i.e. the degree to which the operational and feedback components of the reference system are present in the simulation.

Rehmann et al., 1995: Rehmann et al. distinguish between (i) perceptual (psychological) fidelity, i.e. the degree to which learners subjectively perceive the simulation to reproduce its real-life counterpart in the operational task situation, and (ii) objective fidelity, i.e. the degree to which the simulation actually reproduces its real-life counterpart in its substance, form, and behavior. The authors further divide objective fidelity into two subcomponents: (a) equipment cues (the duplication of the appearance and feel of operational equipment), and (b) environmental cues (the duplication of the criterion task environment and motion through it).

Dieckmann et al., 2007: Dieckmann et al. adopt Uwe Laucken’s three ‘modes of thinking’ to explore the meaning of simulation realism: (i) the physical mode concerns aspects of the simulation that can be quantified in physical or chemical terms (e.g. texture, shape, duration); (ii) the semantic mode refers to the concepts, information, meaning and their relationships contained in the simulation (i.e. the interpretability of information presented, such as shock represented by an elevated heart rate and low blood pressure on a vital signs monitor); and (iii) the phenomenal mode refers to the emotions, beliefs and metacognitive thoughts of learners engaged in simulation training (and their relation to those experienced during real clinical encounters). The authors contend that semantic and phenomenal realism further delineate the more commonly used concept of psychological fidelity.

Despite the proliferation of these typologies, there is still no unifying definition of the underlying construct of fidelity that is agreed upon by the majority of simulation educators and researchers (Hamstra et al., 2014; Rehmann et al., 1995). This lack of consensus has led to considerable confusion in classifying existing research evidence related to this aspect of simulation instructional design, to the point where some authors have recommended that the term be abandoned entirely, in favour of more precise descriptions of the physical, contextual and functional attributes of simulation training systems (Cook et al., 2013b; Hamstra et al., 2014). Yet others caution against ‘throwing the baby out with the bathwater’, as the essence of the concept of fidelity (the degree of faithfulness, similarity, or overlap that exists between a training and operational setting) “…is still fundamental to understanding the effectiveness that any one simulation might have in preparing learners for clinical performance” (Grierson, 2014, p. 281). While the author agrees with the latter viewpoint, given the aforementioned controversies, fidelity is operationalized in this dissertation using the theoretical concept of ‘context similarity’, in order to align it more closely with one of the hypotheses frequently cited as the basis for its link to simulation-based learning (detailed further in 1.3.2).

1.3.2 Theoretical arguments supporting the link between fidelity and learning

Regardless of the definition of fidelity that is adopted, there appear to be two fundamental arguments offered by simulation designers, educators and researchers to justify the link between higher fidelity and improved transfer of learning:

1. Situating a learning activity in a realistic environment will enhance the perceived realism of the task and facilitate suspension-of-disbelief, which can positively impact a trainee’s engagement with the simulation and subsequently improve learning (Alessi, 1988; Bradley, 2006; Hamstra et al., 2014; Issenberg & Scalese, 2007; La Rochelle et al., 2011).

2. The transfer of skills from the simulation setting to the clinical setting is improved when the (simulation) training and (clinical) practice environments are closely aligned (Koens et al., 2005; La Rochelle et al., 2011; Teteris et al., 2012). As higher fidelity results in greater similarity between simulation and clinical encounters, fidelity should have a strong link to skills transfer (Liu et al., 2009).

While both of these appear to be strongly held beliefs, as noted below the evidence substantiating these arguments in healthcare simulation is weak and, as a result, much controversy remains regarding their importance in simulation instructional design.

The effect of engagement on learning

Proponents of the first argument assert that the emotional content of a learning experience, and its subsequent impact on a trainee’s motivation (i.e. their willingness to invest energy in learning), is an important factor that is often overlooked in HPE (Artino & Durning, 2012; Kneebone, 2005; Koens et al., 2005; La Rochelle et al., 2011; LeBlanc et al., 2015; McConnell & Eva, 2012). Based on the notion that learning must be underpinned by a desire to improve (Kneebone, 2005; McGaghie et al., 2010), these scholars believe that increasing the psychological fidelity of a simulation (e.g. by improving “semantic” and “phenomenal” realism) will positively impact a trainee’s engagement with the learning task. This is based on the idea that higher psychological fidelity facilitates the suspension of disbelief (which encourages learners to behave in the simulation as if it were a real clinical encounter), thereby allowing them to participate in the learning activity in an experientially and emotionally relevant manner (Dieckmann et al., 2007; Issenberg & Scalese, 2007; Kneebone, 2009; Rudolph et al., 2007). Indeed, some have argued that realism only matters in the service of engagement (Rudolph et al., 2007). Along these lines, it has been suggested that contextual factors (including the physical, conceptual, and emotional resemblance of a simulation to its clinical counterpart) engender commitment among learners by increasing emotional involvement, which may influence learning independently of cognitive or task-related factors (Koens et al., 2005). Most obviously, this manifests in increased time trainees invest in learning, but it may also influence the cognitive and metacognitive strategies they bring to the task (e.g. cognitive effort invested toward the learning
material, the effectiveness of information processing, and the use of deeper processing strategies; Artino & Durning, 2012; Koens et al., 2005; La Rochelle et al., 2011). This perspective aligns with constructivist frameworks of education, which posit that trainees actively attempt to make a learning context relevant to their goals and objectives (Hamstra et al., 2014). In this way, simulation fidelity has been hypothesized to impact motivational outcomes that subsequently influence future learning and performance (Artino & Durning, 2012).

There is also evidence for a direct link between emotions and learning from the disciplines of neuroscience and cognitive psychology, particularly related to the impact of emotional valence (positive or negative) and arousal (activating or deactivating) on cognitive processes like perception, memory, attention and reasoning (LeBlanc et al., 2015; McConnell & Eva, 2012; Rudolph et al., 2007). Specifically, positive emotional states are associated with higher creativity, cognitive flexibility, exploration for alternative problem solutions, openness to information, and global processing (i.e. seeing the “big picture”), which may help learners to make associations between learning material and promote active abstraction that improves learning and transfer (McConnell & Eva, 2012). Highly emotional experiences (particularly negative ones) also tend to be remembered well, potentially due to superior memory consolidation and retrieval (i.e. mental rehearsal) of such experiences, which in turn can enhance learning (LeBlanc, 2009; LeBlanc et al., 2015; McConnell & Eva, 2012). Finally, the emotional aspects of simulation may lead to higher arousal, which can stimulate improvisation, deeper cognitive processing, and help to anchor information in learners’ memory (Rudolph et al., 2007).

These findings would seem to support the recent movement towards complex, high-fidelity, immersive simulations (including full-ward scenarios and deteriorating patient protocols), which create highly engaging, emotional learning experiences that reflect the “messy realities” and “unruliness” of clinical practice (Gaba, 2007; Kneebone, 2009; 2010; Rudolph et al., 2007). Such scenarios strive for high contextual realism, in order to challenge trainees to reflect on the personal and interpersonal emotional consequences of simulated clinical events. The hope is that doing so will both increase engagement in the learning activity and better prepare learners to handle stress, anxiety, fear, risk, and distraction in real-life settings (Dieckmann et al., 2007; Grierson, 2014; Kneebone, 2009; 2010). However, it is important to note that higher emotional engagement may actually be a double-edged sword, as it may also lead to excess arousal and stress, which in turn may trigger a regression towards heuristic-based
responses and constrict situational awareness in a manner that is detrimental for learning and performance (Rudolph et al., 2007). In fact, Fraser et al. (2012) recently showed that high arousal is associated with increased CL and worse performance immediately after simulation training on a diagnostic reasoning task. These effects may be further influenced by negative emotions, which have been shown to increase local processing (i.e. a focus on specific details rather than the whole task), reduce the number of problem solutions or strategies that come to mind, and increase anxiety to a point where it overwhelms cognitive resources and reduces performance (cf. the Yerkes-Dodson Law; Koens et al., 2005; McConnell & Eva, 2012). These findings highlight that the role of engagement in simulation-based learning remains a controversial and poorly understood issue.

Context similarity and skills transfer

The second argument used to support the link between fidelity and learning centers on the belief that a trainee’s ability to transfer skills from one situation to another is “strongly tied to the degree to which the new situation is similar to (or different from) the original learning context” (La Rochelle et al., 2011, p. 808). In this dissertation, this concept is referred to as context similarity, with “context” principally referring to the physical dimension (i.e. the physical characteristics of a simulation learning task and the environment in which it is performed), but also inclusive of the semantic dimension (e.g. the alignment between the characteristics of a simulation training scenario and a clinical encounter; Koens et al., 2005). The belief that context is important for transfer of learning is rooted in more than 100 years of research in cognitive and experimental psychology, tracing back to the seminal work of Thorndike and Woodworth and their identical elements theory (Thorndike, 1903; Thorndike & Woodworth, 1901a; 1901b; 1901c). This theory posits that the transfer of knowledge and skills from one context (the ‘trained’ function) to another (the ‘tested’ function) is dependent on the number of identical elements that exist between the two (i.e. that transfer is maximized when the practice and performance contexts are perfectly matched). Although the exact specification of what constitutes an “element” remains elusive (making it difficult to operationalize the concept for SBET), this theory has been used as a foundation for conceptualizations of fidelity in many domains in which simulation is used for skills training (Grierson, 2014). This is evidenced by the fact that attempts are often made to quantify the number of “identical elements” in common
between a simulation training task and its real-world counterpart as an ‘objective’ measure of simulation fidelity (Liu et al., 2009; see the illustrative sketch at the end of this section). The theory of context similarity aligns with an ‘information processing perspective’ of human performance, which conceptualizes three stages to information processing: stimulus identification, response selection, and response execution (Elliott et al., 2010). These processes are mediated by existing mental representations (housed within cortical and subcortical regions of the brain) containing sensory, motor and cognitive information. When stimuli are sensed, they are identified as novel or familiar depending on their similarity to representations already stored in memory (Grierson, 2014). This idea that human memory is ‘associative’ reinforces that learning depends on our prior experience with specific patterns of information, which mediate the recognition of information in our environment, as well as our subsequent selection and execution of actions (Grierson, 2014; Regehr & Norman, 1996). Thus, future skilled performance is highly dependent on the nature of information present during learning (Grierson, 2014). This premise is supported by empirical research related to human memory, which has shown a ‘same context advantage’ for information recall (Koens et al., 2005). For instance, in a widely cited study Godden and Baddeley (1975) demonstrated superior recall of a list of words learned underwater or on land, when participants were tested in the same context in which the words were initially learned. These findings have been interpreted as evidence for the powerful role of contextual cues present in learning environments, which are believed to be encoded in memory alongside targeted knowledge and skills (so-called ‘encoding specificity’), and that “the probability of successful retrieval of the target is a monotonically increasing function of informational overlap between the information present at retrieval and the information stored in memory” (Tulving, 1979, p. 408). Analogous results related to psychomotor and clinical domains of performance similarly suggest that transfer largely depends on the overlap between training and performance conditions (Grierson, 2014; Kulasegaram et al., 2012; Wright, 1996). These theoretical principles have been used to support a number of educational practices, including cognitive apprenticeship, authentic assessment, and situated learning. In each of these examples, authentic training environments are advocated on the premise that learning is inextricably bound to the situation in which it takes place (Brown et al., 1989; Norman et al., 2012; Teteris et al., 2012). Indeed, in the healthcare simulation domain the notion that the “clinical context” modulates learning and transfer has been used by Kneebone and colleagues to
justify methods like “hybrid simulation”, where standardized patients are attached to inanimate models in order to increase the realism of the training environment and facilitate training of technical and non-technical skills in tandem (Kneebone, 2005; 2010; Kneebone et al., 2005; Tun & Kneebone, 2011). The use of authentic learning tasks in this way is expected to help learners coordinate constituent knowledge, skills and attitudes necessary for effective task performance in clinical practice (Scandura, 1973; Tun & Kneebone, 2011; van Merrienboer et al., 2003; van Merriënboer & Sweller, 2010). Proponents of this approach argue that authenticity of the training environment is of paramount importance, as learners do not have the luxury of performing procedural skills in isolation from the complexities of clinical practice and thus simulation must recreate the contextual realities of everyday practice if it is to be an effective adjunct to clinical experience (Kneebone, 2005; 2009). However, as noted below, the evidence base supporting such practices is limited and thus the benefit of increasing simulation fidelity (with its attendant increase in cost) is questionable.
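To make the ‘identical elements’ quantification referenced above concrete, the overlap between a training task and its clinical counterpart can be expressed as a set-based similarity index. The following is a minimal, hypothetical sketch in Python: the element inventories, the choice of the Jaccard index, and the monotone reading of Tulving’s encoding-specificity principle are illustrative assumptions for exposition, not a validated fidelity metric from the studies cited.

    # Hypothetical sketch: quantifying 'identical elements' overlap between a
    # simulation training task and its clinical counterpart. Element names,
    # the Jaccard index, and the interpretation below are illustrative only.

    def jaccard(a, b):
        """Proportion of elements shared by two task descriptions (0 to 1)."""
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    # Invented element inventories for a benchtop LP simulation and a ward LP.
    training_task = {"needle handling", "landmark palpation", "sterile field",
                     "patient positioning", "manometry"}
    clinical_task = {"needle handling", "landmark palpation", "sterile field",
                     "patient positioning", "manometry", "patient communication",
                     "monitor alarms", "assistant coordination"}

    overlap = jaccard(training_task, clinical_task)  # 5 shared / 8 total
    print(f"Element overlap (proxy for fidelity): {overlap:.2f}")

    # Encoding specificity (Tulving, 1979) predicts only an ordinal relation:
    # retrieval success rises monotonically with this informational overlap,
    # so any increasing function of `overlap` preserves the prediction.

Because the theory specifies only a monotonic relationship, the choice of similarity index is itself an open operationalization problem, which is precisely the difficulty with defining an “element” noted above.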

1.3.3 Empirical evidence for the link between fidelity and learning

Despite an apparently strong theoretical foundation, empirical studies investigating the relationship between increasing simulation fidelity and skill acquisition, retention, and transfer among health professions learners have demonstrated conflicting results. In a recent study, Kassab et al. (2011) investigated laparoscopic cholecystectomy performance among experienced participants (>50 procedures performed) and inexperienced participants (24 are recommended for complex skills (Dubrowski, 2005; Schmidt & Lee, 2005).

1.5.3 Transfer of learning: a typology

It may be argued that all learning involves transfer to some trivial extent (Salomon & Perkins, 1989), as to have learned something you have to demonstrate the associated knowledge or skill at some later point, under circumstances that will never be exactly the same as the initial learning condition. However, while both motor learning and psychology theorists contend that retention and transfer must be measured following a retention period (Schmidt & Bjork, 1992), they distinguish between the two in so far as retention examines performance on the same training task, whereas transfer examines performance on a variation of the training task (e.g. change in context) or a different task altogether (Salomon & Perkins, 1989; Schmidt & Lee, 2005). Thus, both skill acquisition and retention are considered distinct from transfer, with the
latter generally referring to the extent to which practice or experience with one task results in a gain or loss in the capability to perform another (more or less related) task (Schmidt & Lee, 2005). Motor-learning theorists generally acknowledge that transfer is difficult and that transfer of learning between tasks is typically small unless the two tasks are very similar (although what this means and the mechanisms behind it remain unclear; see 1.3.2 and 1.3.3 for an overview of related theories) (Schmidt & Lee, 2005). The concept of transfer appears to be more contentious in cognitive psychology, despite over 100 years of research on the topic (Day & Goldstone, 2012). Some theorists argue that there is limited empirical evidence to suggest that transfer of learning exists at all (Detterman, 1993), whereas others claim that it is ubiquitous if we know where and how to look for it (Dyson, 1999; Schwartz et al., 2005). The apparent contradiction can also be seen anecdotally: clinical educators often complain that students don’t transfer their pre-clinical training (e.g. knowledge of anatomy) into the care of patients on the ward (e.g. when performing a procedure; Bolander Laksov et al., 2007; Norman, 2009a), while the reader can likely attest from personal experience that driving a car in a new neighborhood (or even in a different country) can be managed with reasonable proficiency. Some have argued that part of the problem is that transfer (in psychology, motor learning, and indeed in healthcare simulation) is viewed as a unitary phenomenon, when it is not (Salomon & Perkins, 1989). These authors contend that transfer can occur by different routes that, in turn, depend on different mechanisms and cognitive processes (Bransford & Schwartz, 1999; Salomon & Perkins, 1989). Thus, different types of transfer can be articulated, which are framed below according to Broudy’s (1977) types of knowledge.

Broudy’s first category is replicative knowing (“knowing-that”), which refers to the direct recall of information in the manner that it is learned (Broudy, 1977). This conceptualization aligns with Detterman’s classic definition of transfer as “the degree to which behavior will be repeated in a new situation” (Detterman, 1993, p. 4). Thus, the central goal is for learners to reproduce a skilled performance in a different context. This concept is analogous to low road transfer^iv (which is related to the concept of near transfer), and is thought to emerge automatically when two tasks are closely related (e.g. being able to drive a minivan after learning to drive a car) (Bolander Laksov et al., 2007; Teteris et al., 2012). This type of transfer is enhanced with extensive practice of the underlying skill under variable conditions (to facilitate minor and largely unconscious adaptations), whereby it becomes relatively automatic and somewhat flexible (Salomon & Perkins, 1989). Under these circumstances, the close resemblance between the stimulus characteristics of the training and transfer tasks results in the automatic triggering of this well-learned behavior (e.g. performing an LP on the ward after simulation training on the procedure). This close resemblance also makes the replicated behavior suitable in the transfer situation (Salomon & Perkins, 1989).

By contrast, Broudy’s second category of applicative knowing (“knowing-how”) refers to the direct application of prior knowledge and skills to solve problems in new settings (Bransford & Schwartz, 1999; Broudy, 1977). This aligns with the common definition of transfer used in cognitive psychology (particularly in reference to analogical transfer): the use of knowledge acquired in one context to solve a new (dissimilar) problem in another (Bransford & Schwartz, 1999; Eva et al., 1998; Norman, 2009a). Here, the goal is not to reproduce skilled behavior, but to adapt prior learning to be useful in a different, but related situation. For instance, applicative transfer is evoked when a learner uses knowledge and skills (e.g. needle handling, sterile technique, etc.) gained through simulation training in one procedure (e.g. central line insertion) to perform another (e.g. LP). The concept is analogous to high road or far transfer, which are considered to be intentional, conscious processes that involve mindful abstraction, i.e. where the learner makes explicit comparisons between the training and transfer tasks to activate relevant prior knowledge that can be applied in the new situation (Bolander Laksov et al., 2007; Salomon & Perkins, 1989; Teteris et al., 2012). A further distinction is often made between high road transfer that is forward-reaching (where knowledge is abstracted and connections are made at the time of learning) versus backward-reaching (where a learner searches their memory for relevant prior knowledge that can be applied to an ongoing task). The key is that this type of transfer requires active cognitive processing that is often metacognitively guided (Bransford & Schwartz, 1999; Salomon & Perkins, 1989).

Finally, Broudy’s third category of interpretive knowing (“knowing-with”) reflects our cumulative set of knowledge and experiences, which influence our ability to think, perceive, and judge the world around us (even though much of this knowledge is tacit and cannot be recalled on demand; Bransford & Schwartz, 1999; Broudy, 1977). “Knowing-with” refers to the idea that our prior knowledge is not just useful for the direct replication of skilled behavior or application to solve new problems, but that it also shapes our interpretation of novel information in a way that can facilitate learning (Schwartz et al., 2005). This notion has been used to argue that transfer can also be captured in a learner’s preparation for future learning, i.e. their ability to learn in information-rich environments based on past experiences (Bransford & Schwartz, 1999; Schwartz et al., 2005). Through the use of methods like the double transfer design, interpretive transfer investigates how learners “transfer in” prior knowledge to facilitate new learning and subsequently “transfer out” that new learning to perform a related task (Schwartz et al., 2005).

^iv It should be noted that ‘near’ transfer typically refers to the transfer between two similar tasks (e.g. two versions of the same procedure), whereas ‘far’ transfer refers to situations where the two tasks are dissimilar (particularly with respect to their surface features). This distinction is based on the notion of ‘distance’ of transfer, which cannot be easily quantified due to the subjective nature of judgments about ‘similarity’ between tasks. Thus, as Salomon and Perkins note, while the concept of ‘distance of transfer’ is useful in a general sense, it is not a literal idea that can be formally computed (Day & Goldstone, 2012; Salomon & Perkins, 1989).

1.5.4 Synthesis

It is the view of this investigator that a participant’s performance during skill acquisition, retention, and transfer reflects qualitatively distinct phenomena, and each offers specific insights that inform the various research questions posed in this dissertation; thus each is worthy of measurement in its own right. First, in accordance with motor learning theory, skill acquisition is viewed as a three-stage process: a cognitive phase, where the learner focuses on “getting an idea” of how to perform the skill; an associative phase, in which the learner makes small modifications to gradually refine their performance; and an autonomous phase, where extensive practice leads to accurate and consistent performance with minimal cognitive effort (Fitts & Posner, 1967). Importantly, while these stages have been articulated in both the motor learning and simulation literature, their relationship to the CL experienced by a novice learner has not been investigated. However, CLT theorists have proposed that performance during skill acquisition can reflect the CL experienced by learners (Sweller et al., 2011). In this dissertation, procedural skill performance during simulation training (i.e. the skill acquisition phase) is measured to examine how CL varies as a function of the stage of skill acquisition that a learner is theorized to be in. In addition, as motor learning theory contends that instructional design has the largest
impact on learning during the cognitive phase, the investigators have purposefully focused the experimental manipulations at this stage of learning. Second, the investigators also adopt the motor learning view that defines learning as a relatively permanent change in an individual’s capacity for skilled action as a result of practice (Schmidt & Lee, 2005). Following from this, participants’ performance during simulation training is not viewed as a marker of learning per se, given the potential for it to be transiently affected by factors related to practice (fatigue, motivation, etc.). Instead, performance on the training task after a retention period is used as a marker of true learning. Finally, this dissertation draws on Broudy’s concept of replicative knowing in its definition of transfer of learning. Specifically, transfer is defined as a learner’s ability to recall and reproduce procedural knowledge and skills learned during training when the same procedure is performed in a different context. This notion of replicative transfer was selected as an area of focus because it aligns with the current view of transfer in the healthcare simulation literature (i.e. the degree to which knowledge, skills and attitudes learned in the simulated setting are replicated in clinical practice). The most common instructional design strategies that underpin SBET in healthcare settings (deliberate practice, mastery learning, etc.) promote the acquisition and automation of procedural knowledge and skills, which in turn favour the automatic nature of replicative transfer. Indeed, it has been argued that applicative or high-road transfer may not be appropriate in simulation-based training, where skill automation is the goal (Teteris et al., 2012). Moreover, replicative transfer is hypothesized to improve with greater similarity between the stimulus characteristics of the training and transfer tasks. Thus, this type of transfer is particularly germane to the comparison between context similarity and task complexity that is explored in this dissertation.

__________________________________________

Chapter 2: Research aims and hypotheses

__________________________________________

2.1 Purpose and aims

This dissertation applies CLT and related theoretical frameworks to explore the relationship between task complexity, CL and learning among novices during simulation-based training. The investigator has engaged with this line of inquiry using a multi-phase approach that aligns with a recently published simulation instructional design research framework (Haji et al., 2014a). For ease of reference, each phase and its corresponding research questions (RQs), as well as the associated hypotheses and the chapters in which these hypotheses are tested, are summarized in Table 2-1.

Table 2-1: Research phases, questions and hypotheses

Phase 1 - Piloting (outcome measures)
RQ1: Are subjective ratings of mental effort and secondary task performance sensitive to predicted differences in cognitive load arising from variations in: (i) a performer’s level of expertise? (ii) simulation task complexity?
- H1: Experts will experience lower subjective mental effort and higher secondary-task performance compared to novices when performing a psychomotor task (surgical knot-tying), but not when performing the secondary task alone. (Tested in Chapter 3)
- H2: CL will decrease (demonstrated by improved secondary task performance and lower subjective mental effort) as novices become proficient in surgical knot-tying through simulation training. (Tested in Chapters 3 and 4)
- H3: Novices engaged in psychomotor skills training on a ‘complex’ simulation scenario will demonstrate inferior secondary-task performance and higher subjective mental effort compared to peers training on a ‘simple’ scenario. (Tested in Chapter 4)

Phase 2 - Intervention modeling
RQ2: What are the conditions that impact the complexity of a procedural skill, and how much complexity do they add for a novice learner?
- Hypotheses: N/A (Addressed in Chapter 5)

Phase 3 - Evaluation
RQ3: What is the effect of task complexity and context similarity on performance and CL during skill acquisition, at retention, and on transfer among novices engaged in simulation-based procedural skills training?
- H4: Novices training on a simple simulation scenario will demonstrate superior LP performance and lower CL during skill acquisition and at retention compared to peers training on a complex scenario. (Tested in Chapter 6)
- H5: Novices training on the complex scenario will demonstrate superior LP performance and lower CL on transfer to the ‘very complex’ scenario, compared to peers training on the ‘simple’ scenario. (Tested in Chapter 6)

The aim of the first phase of the dissertation (piloting of outcome measures) is to evaluate the applicability of CL measures adapted from educational psychology within the healthcare simulation domain. The studies included in this phase address the question: Are existing measures of CL (i.e. subjective ratings of mental effort and secondary task performance) sensitive to predicted differences in working memory demands arising from variations in (i) a performer’s level of expertise with a simulated task, and (ii) simulation instructional design (specifically simulation task complexity)? [RQ1]

The second phase (intervention modeling) aims to operationalize an established instructional design framework (ET) for healthcare simulation curriculum development, in order to inform the design of simulation training scenarios of varying levels of complexity. The study associated with this phase seeks to identify: What are the conditions that impact the complexity of a procedural skill, and how much complexity do these conditions add for a novice learner? [RQ2]

The third phase (evaluation of simulation instructional design) builds upon the findings of the prior phases, and as such represents the culmination of this dissertation. The aim of this final phase is to compare the effect of simulation training designed based on the principles of CLT with training designed in accordance with a competing theoretical perspective commonly used in healthcare simulation (i.e. the principle of ‘context similarity’). The study linked with this phase is designed to investigate: What is the effect of task complexity and context similarity on performance and CL during skill acquisition, at retention, and on transfer among novices engaged in simulation-based procedural skills training? [RQ3]

2.2 Overview and hypotheses

The studies corresponding to each of the three research phases described above are presented sequentially in the four research chapters to follow (Chapters 3-4 for Phase 1, Chapter 5 for Phase 2, and Chapter 6 for Phase 3). The first research chapter (Chapter 3 - Measuring CL during simulation-based psychomotor skills training: sensitivity of secondary-task performance and subjective ratings) addresses part (i) of RQ1 by utilizing quasi-experimental and cohort designs to test two hypotheses:

Hypothesis 1: Experts will experience lower subjective mental effort and superior secondary-task performance compared to novices when performing a psychomotor task (surgical knot-tying) in which they are highly proficient, but not when performing the secondary task alone.

Hypothesis 2: CL will decrease (demonstrated by improved secondary task performance and lower subjective mental effort) as novices become proficient in surgical knot-tying through simulation training.

Building on the results of this study, the second research chapter (Chapter 4 - Measuring CL: performance, mental effort and simulation task complexity) employs experimental methods to test an additional hypothesis related to RQ1:

Hypothesis 3: Novices engaged in psychomotor skills training on a ‘complex’ simulation scenario will demonstrate inferior secondary-task performance and higher subjective mental effort compared to peers training on a ‘simple’ scenario.

The third research chapter (Chapter 5 - Operationalizing Elaboration Theory for simulation instructional design: a Delphi study) addresses RQ2 by combining the Simplifying Conditions Method (SCM) outlined in ET with survey methodology (the Delphi method) to identify the characteristics of the ‘simplest’ case of a prototypical procedural skill (LP) and the conditions that increase the complexity of the procedure for a novice learner. The results of Chapter 5 are used to construct three simulation scenarios of progressively increasing complexity (a simple, complex, and very complex scenario), which are outlined in the fourth research chapter (Chapter 6 - Competing effects? The impact of complexity and context on novices’ simulation-based learning). The study presented in this chapter addresses RQ3, using experimental methods to test two hypotheses related to the effect of task complexity and context similarity, respectively:
Hypothesis 4: Novices training on a ‘simple’ simulation scenario will demonstrate superior LP performance and lower CL during skill acquisition and at retention compared to peers training on a complex scenario.

Hypothesis 5: Novices training on the ‘complex’ scenario will demonstrate superior LP performance and lower CL on transfer to the ‘very complex’ scenario, compared to peers training on the ‘simple’ scenario.

The penultimate chapter (Chapter 7 – General Discussion) examines the strengths and limitations of this program of research, as well as the implications of the findings at the theoretical, methodological and practical levels. The final chapter (Chapter 8 – Future directions and conclusion) summarizes the results of this dissertation and presents recommendations for future research related to the measurement of CL, the role of fidelity in simulation instructional design, and further investigation of the variables that influence novice learning and transfer explored in this dissertation.

2.3 Significance

The research presented in this dissertation makes a significant contribution to the literature in healthcare simulation, as well as the broader educational psychology literature, in a number of ways. First, the pattern and relationship between CL, skill acquisition, retention and transfer observed in Phases 1 and 3 of this research program help to illuminate the role of novices’ cognitive architecture in mediating learning during simulation, and more broadly the role of WM in motor learning and procedural skills training. Second, these studies generate new insight regarding the effect of task complexity on skill acquisition, retention and transfer for novices engaged in simulation-based procedural skills training. By clarifying this important facet of simulation instructional design, this research provides empirical evidence in support of, or opposition to, the theoretical perspectives of CLT and ET outlined in the introductory chapter, and offers insight on the equivocal relationship between fidelity and simulation-based learning observed in the literature. In this way, the findings from this research program provide empirical evidence to direct the design of simulation curricula, as well as additional questions in simulation instructional design that should be pursued in the future. Third, the methods for devising a
simple-to-complex sequence of training outlined in Phase 2 also provide much-needed guidance for medical educators seeking to use this approach in training other procedural and non-procedural skills. Fourth, this program of research not only draws on the theoretical tenets of CLT but, by measuring CL directly, also generates empirical data to support its role in healthcare simulation research. As a result, based on the validity evidence generated for existing measures of CL in Phase 1, subjective ratings and secondary-task performance may be used to study other instructional designs or procedural and clinical skills. Finally, and perhaps most importantly, the systematic, theory-based approach to investigating simulation instructional design used in this research program can serve as a model for other investigators who wish to study other SBET features (e.g. debriefing and feedback, variability in practice, etc.). Analogous to the ‘science of instruction’ that has evolved in multimedia instructional design research (Mayer, 2010), the use of such a systematic approach can support novel lines of inquiry that will continue to clarify when, how, and why simulation should be used in HPE.


______________________________________________

Chapter 3: Measuring cognitive load during simulation-based psychomotor skills training: sensitivity of secondary-task performance and subjective ratings

__________________________________________

Adapted from: Haji FA, Khan R, Regehr G, Drake J, de Ribaupierre S, Dubrowski A. Measuring cognitive load during simulation-based psychomotor skills training: sensitivity of secondary-task performance and subjective ratings. Advances in Health Sciences Education: Theory and Practice. 2015 [Epub ahead of print]. doi: 10.1007/s10459-015-9599-8.


3.1 Preamble

One of the principal challenges to applying CLT to healthcare simulation is the limited evidence supporting the use of existing measures of load within HPE. In particular, there is limited research that investigates whether existing measures are sensitive to differences in CL that would be predicted based on CLT. As a first step in addressing this gap, the papers presented in Chapters 3 and 4 explore the application of two CL measures commonly used in educational psychology to simulation-based psychomotor skills training: subjective ratings of mental effort and secondary task performance on a stimulus-monitoring task. Collectively, these studies provide foundational evidence to support the use of these measures in the final phase of this dissertation (see Chapter 6). The paper presented in this chapter investigates the first issue pertaining to CL measurement addressed in this dissertation: the sensitivity of subjective rating and secondary task performance measures to theorized differences in CL that arise among performers with varying levels of experience with a psychomotor task. This is studied in two ways: (i) by examining task performance and CL between novice learners and expert surgeons performing a basic psychomotor task, and (ii) by tracking changes in task performance and CL among novice learners as they engage in simulation-based training on the same knot-tying skill. One-handed surgical knot-tying was selected as the psychomotor skill of interest in the two chapters to follow (Chapters 3 and 4) because it has been extensively studied in the surgical education literature and novice skill acquisition on this task has been well documented (Brydges et al., 2006; Jowett et al., 2007; Porte et al., 2007; Xeroulis et al., 2007). As a result, deviations in the observed pattern of CL can be attributed to the measures under investigation, rather than the training task. In addition, a visual stimulus detection task (response to changes in a virtual patient’s heart rate on a vital signs monitor) is used as the secondary task in this study, with recognition reaction time and signal detection rate as the associated performance measures. This task was chosen because it is assumed to require similar (visuospatial) perceptual-cognitive resources as surgical knot-tying, while also simulating an ecologically valid activity that a learner needs to attend to when performing a surgical procedure in the clinical setting. This paper is published in Advances in Health Sciences Education, a journal that aims to disseminate scholarly research that links theory with practice in all aspects of health sciences
education, and has a broad readership of health science professionals, educators and researchers. The discussion arising from this study addresses theoretical, methodological and practical issues related to CL measurement in healthcare simulation, including: (i) the relationship between observed differences in CL and different phases of psychomotor skill acquisition (cognitive, associative and automatic); (ii) the potential for interference between primary and secondary tasks, as well as between cognitive processes within a secondary task; and (iii) the limitations of existing measures of CL.
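For readers unfamiliar with how the secondary-task measures described in this preamble are typically scored, the sketch below derives a signal detection rate and a mean recognition reaction time from time-stamped logs. It is a minimal illustration: the 3-second response window and the sample event times are hypothetical, and the scoring rules used in the study itself may differ.

    # Hypothetical scoring of a visual stimulus-detection secondary task.
    # A 'signal' is a change in the virtual patient's heart rate; a detection
    # is the first response falling within a fixed window after signal onset.

    RESPONSE_WINDOW_S = 3.0  # assumed maximum latency for a valid detection

    signal_onsets = [12.0, 47.5, 80.2, 115.9]  # seconds into the trial
    responses = [13.1, 49.0, 118.4]            # participant key-press times

    latencies = []
    for onset in signal_onsets:
        hits = [r - onset for r in responses
                if 0.0 <= r - onset <= RESPONSE_WINDOW_S]
        if hits:                     # signal detected: keep earliest response
            latencies.append(min(hits))

    sdr = len(latencies) / len(signal_onsets)  # signal detection rate (0 to 1)
    rrt = sum(latencies) / len(latencies) if latencies else float("nan")
    print(f"SDR = {sdr:.2f}, mean RRT = {rrt:.2f} s")  # SDR = 0.75, RRT = 1.70 s

Under this logic, rising CL on the primary task is expected to appear as a lower SDR and a longer RRT, which is the pattern the abstract below reports for novices.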

3.2 Abstract

As interest grows in applying cognitive load theory (CLT) to the study and design of pedagogic and technological approaches in healthcare simulation, suitable measures of cognitive load (CL) are needed. Here, we report a two-phased study investigating the sensitivity of subjective ratings of mental effort (SRME) and secondary-task performance (signal detection rate, SDR, and recognition reaction time, RRT) as measures of CL. In phase 1 of the study, 8 novice learners and 5 expert surgeons attempted a visual-monitoring task under two conditions: single-task (monitoring a virtual patient's heart rate) and dual-task (tying surgical knots on a bench-top simulator while monitoring the virtual patient's heart rate). Novices demonstrated higher mental effort and inferior secondary-task performance on the dual-task compared to experts (RRT 1.76 vs. 0.73, p = .012; SDR 0.27 vs. 0.97, p < …).

Simplifying conditions (retained):

- A patient who is not at an extreme of age (e.g. >65 years or <1 year)
- A calm patient
- A patient positioned upright (sitting position)
- A patient with normal or high CSF pressure
- A procedure performed without individuals present who could increase the anxiety of the novice (e.g. an anxious family member)
- A procedure performed with the help of a knowledgeable assistant
- Use of a larger-gauge, cutting spinal needle (e.g. an 18- or 20-gauge Quincke needle)
- Absence of any time constraints
- A procedure completed with a physical spine model available to facilitate mental visualization
- Having all necessary equipment readily available and within reach
- Performing the task in a comfortable position (e.g. seated, with the patient at an appropriate height)
- A calm novice performing the procedure
- A well-rested novice

Complicating conditions (retained):

- Having an inexperienced assistant, or no assistant
- Using a small-gauge needle (e.g. smaller than 22 gauge)
- Using an atraumatic needle (e.g. a 'pencil-tipped' needle)
- Having limited time to complete the procedure
- Not having a physical spine model available to facilitate mental visualization of the procedure before starting
- Not having all necessary equipment readily available and within reach
- Being in an uncomfortable position (e.g. standing, or with the patient at an inappropriate height)
- An anxious novice performing the procedure
- A fatigued or overworked novice

Simplifying conditions (removed):

- Early stylet removal (rationale: not applicable to all patient populations)

Complicating conditions (removed):

- Dark-skinned patient (rationale: not corroborated by other studies or key informant interviews)
- Too many sterile towels obscuring the midline (rationale: related to individual procedural technique)
- Advancing with stylet in place (rationale: not applicable to all patient populations)


Appendix 5: Simple, complex and very complex simulation environments

1. The 'simple' simulation environment. This scenario was conducted in an office setting that had been converted into a simulation laboratory. A part-task trainer was placed on the desktop and the participant was comfortably seated, with the height of the chair adjusted to their preference. All relevant materials (the simulator, sterile gloves, anesthetic, aseptic solution, LP tray, and trash bin) were easily within arm's reach. The standard room lights were used and the room was quiet during the practice trials. No attempts were made to further contextualize the simulation or the environment.

2. The 'complex' simulation environment. This scenario was also conducted in an office setting; however, the room was contextualized to look like a hospital emergency bay. To do so, the room lights were dimmed, ambient hospital sounds were played in the background, and curtains were strategically placed around the tabletop, which was draped to look like an emergency 'stretcher'. To further increase the realism of the scenario, the part-task trainer was attached to a mannequin that did not move during the scenario. Finally, a vital signs monitor was placed within the participant's field of view, displaying the fluctuating vitals of the patient as described previously. The participant was forced to stand within this cramped space, and while all materials were within reach, they were placed awkwardly (e.g. the 'bed' was too low and the LP tray was placed off to the side).


3. The 'very complex' simulation environment. This scenario was conducted in a high-fidelity simulation laboratory that was contextualized to look like an emergency bay. To make the context of this scenario similar to the complex case, the contextual features of the complex environment were replicated (e.g. dimmed lighting, ambient hospital noise, the same heart rate monitor displaying the patient's vitals, and curtains placed around an emergency 'stretcher'). As in the complex case, the participant had to complete the procedure while standing, with all equipment awkwardly placed (e.g. the bed too low). Finally, the realism was heightened by strapping the part-task trainer to the standardized patient, who moved and became agitated during the scenario as described previously.
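To make the contrasts among the three practice environments easier to scan, the manipulated features can be collected into a simple configuration structure. The sketch below condenses the descriptions above; the key names and groupings are illustrative rather than labels used in the study.

# Illustrative summary of the three practice environments. Feature labels
# are condensed from the appendix text, not taken verbatim from the study.
environments = {
    "simple": {
        "setting": "converted office",
        "lighting": "standard room lights",
        "ambient_sound": "quiet",
        "operator_posture": "seated, chair height adjusted",
        "equipment_layout": "all materials within easy reach",
        "patient": "part-task trainer only",
        "vitals_monitor": False,
        "patient_movement": False,
    },
    "complex": {
        "setting": "office dressed as emergency bay",
        "lighting": "dimmed",
        "ambient_sound": "hospital background noise",
        "operator_posture": "standing in cramped space",
        "equipment_layout": "within reach but awkward (low bed, tray off to the side)",
        "patient": "trainer attached to a stationary mannequin",
        "vitals_monitor": True,
        "patient_movement": False,
    },
    "very_complex": {
        "setting": "high-fidelity lab dressed as emergency bay",
        "lighting": "dimmed",
        "ambient_sound": "hospital background noise",
        "operator_posture": "standing in cramped space",
        "equipment_layout": "within reach but awkward (low bed)",
        "patient": "trainer strapped to a standardized patient",
        "vitals_monitor": True,
        "patient_movement": True,  # patient moves and becomes agitated
    },
}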


Appendix 6: LP instructional handout

Introduction

Lumbar puncture (LP) is a commonly performed procedure in both pediatric and adult patients, for a variety of diagnostic and therapeutic indications. It is important to have a thorough knowledge of the indications, contraindications, pertinent anatomy, and technique for performing an LP, so that the risk of rare but potentially life-threatening complications can be reduced (Ellenby et al., 2006). Today, you will have the opportunity to learn how to perform an LP using simulation-based education, which has been shown to be an effective method for teaching this procedure to novice learners (Brydges, Nair, Ma, Shanks, & Hatala, 2012; Conroy et al., 2010; White et al., 2012).

This handout has been prepared to give you the information you need to safely and effectively perform an LP. It begins by briefly reviewing the indications, contraindications and potential complications of LP. You should quickly review this information as background, as it will help you conceptualize why you are doing the procedure and why it is important to follow the technique outlined in this handout. Next, the pertinent anatomy and the procedural steps required to perform the LP are described in detail. You should go over this section thoroughly, as the focus of today's training session is learning the technique of the procedure (and this is what you will be tested on). The illustrations provided are images of both real patients and the simulator you will be practicing on, to better prepare you for the simulation training.

Part 1: Indications, Contraindications, and Potential Complications (Chen et al., 2000; Ellenby et al., 2006; Schneider, 2007)

• Diagnostic indications: LP is performed to obtain a sample of cerebrospinal fluid (CSF) to aid in the diagnosis of:
o Infectious processes, e.g. viral, bacterial, or fungal meningitis or encephalitis
o Inflammatory conditions, e.g. multiple sclerosis
o Various cancers and metabolic processes
• Therapeutic indications: LP is also performed for spinal delivery of drugs (e.g. chemotherapy, antibiotics, and anesthetic drugs), or to measure, drain, or divert CSF (e.g. in patients with a CSF leak).
• Contraindications: The procedure should be avoided in patients with:
o Cardiorespiratory compromise, as cardiorespiratory failure may occur in the position required for LP
o Raised intracranial pressure with focal neurological signs, or signs of cerebral herniation, as the change in pressure during LP may precipitate or exacerbate herniation
o Abnormal clotting, as LP may cause a spinal hematoma and compression of spinal nerves
o Signs of infection overlying the puncture site, as the LP may seed the infection into the spinal canal (infection can also occur with improper sterile technique)
• Additional complications: In addition to the complications associated with the contraindications listed above (e.g. cardiorespiratory failure, cerebral herniation, spinal hematoma, or spinal infection), LP may cause:
o Post-LP headache: occurs in up to 30-50% of patients. You can minimize this risk by using a smaller-gauge needle and orienting the bevel of the spinal needle toward the patient's side (see the Technique section below) to prevent cutting of the dural fibers, which run longitudinally.
o Nerve root injury: usually occurs when the spinal needle is inserted or angled too far laterally from the midline and pierces an exiting nerve root.
o Intraspinal epidermoid tumours: occur when a small core of epidermal tissue is introduced into the spinal canal. This can be avoided by using a styleted spinal needle when passing through the skin (dermis and epidermis).

Part 2: Pertinent Anatomy (see Cronan & Wiley, 2008; Ellenby et al., 2006; Schneider, 2007)

LP is performed by inserting a spinal needle through the skin and into the subarachnoid space of the lumbar spine. Recall that CSF, which is produced in the ventricles of the brain, circulates around the brain and spinal cord within the subarachnoid space. Thus, by accessing the subarachnoid space using a percutaneous approach, CSF can be sampled and drugs can be injected into the spinal canal.

The LP needle is inserted in the midline, between the spinous processes of the lumbar spine, below the termination of the spinal cord (to avoid injuring the cord itself). The spinal cord usually terminates at the L1 or L2 level in adults, beyond which lie the nerves of the cauda equina. Thus, a lumbar puncture should be performed at the L3-4 or L4-5 interspace (i.e. the space between the L3 and L4, or L4 and L5, spinous processes).

The L3-4 and L4-5 interspaces can be identified by palpating the iliac crests (the bony prominences at the top of the pelvis) and drawing an imaginary line between them. The point at the midline that intersects this line corresponds to the L4 level. Thus, the L3-4 interspace is just above this point, and the L4-5 interspace is just below it.


As the needle is advanced, it passes through the following tissue layers: skin, subcutaneous fat, supraspinous ligament, interspinous ligament, ligamentum flavum, and meninges (specifically the dura and arachnoid). Once the needle passes the meninges, it enters the subarachnoid space, which surrounds the nerves of the cauda equina.

Part 3: Lumbar Puncture Technique (see Chen et al., 2000; Ellenby et al., 2006; Schneider, 2007)

1. Patient selection and consent
As with any invasive procedure, you should begin by confirming that there is a clear indication for the procedure, making sure there are no contraindications, and obtaining informed consent from the patient (including reviewing the steps of the procedure, its benefits, and its potential complications).

2. Gather all necessary equipment
You should gather all the equipment you will need for the procedure. This includes:
• Personal protective equipment (e.g. a gown, mask, cap, and sterile gloves)
• Cleaning solution (e.g. Stanhexidine) and local anesthetic (e.g. 2% lidocaine without epinephrine)
• A sterile LP procedure tray, which includes:
o A 3.5-inch beveled spinal needle with stylet (either 18-gauge [pink hub], 20-gauge [yellow hub] or 22-gauge [black hub])
o A 2-part pressure manometer and 3-way stopcock (image below) for measuring CSF pressure
o Four CSF collection vials (labeled 1-4) with gradations for the total volume collected (1-8 mL)
o 2 sterile drapes (one fenestrated and one non-fenestrated)
o 3 sponge sticks with gauze and a dipping tray for skin prepping
o A 25-gauge, 1-inch needle (orange hub) and a 3 mL syringe for infiltration of local anesthetic
o Sterile gauze and a bandage to place over the puncture site at the end of the procedure


3. Position the patient
• For lumbar puncture, a patient can be placed either in the sitting position or in the lateral recumbent position. Generally, the lateral position is preferred, as this is the only position in which an accurate opening pressure can be obtained.
• The patient should be asked to flex their lower limbs, flex their back as much as possible, and "arch like a cat" in order to increase the distance between the spinous processes.
• If in the lateral position, the patient's spine should be parallel to the edge of the bed, with the shoulders and hips symmetrically positioned.

NOTE: For today's training you can assume that the above three steps have been completed prior to you entering the room. You can also put on your cap, mask, and gown before entering the room.

4. Palpate the anatomical landmarks and select the target site for LP
• Next, select the interspace where you will perform the puncture. Begin by palpating the iliac crests, which you should be able to feel along the patient's side. Next, draw an imaginary line between the iliac crests and identify the point where this line intersects the midline. Then feel for the spinous process that is at or just below this line; this is the L4 spinous process (Figure 1).
• Palpate the interspace above (L3-4) or below (L4-5) the L4 spinous process and select which one to use. Generally, the lower interspace should be used first, so that if the LP is unsuccessful it can be re-attempted at the higher interspace. Once you have identified the target interspace, mark it with a pen or by pressing a thumbnail into the patient's skin at that level (NOTE: no marking pen will be provided for today's training session).

Figure 1: Palpation of landmarks for LP. The figures demonstrate the technique for palpating the landmarks for a lumbar puncture on a simulated patient. Begin by identifying the iliac crests on both sides, which correspond to the L4 level. Draw an imaginary line to the midline to identify the L4 spinous process, then palpate the L3-4 and L4-5 interspaces above and below this point.


5. Prepare for the procedure
• Once the target site has been identified, open the spinal tray in a sterile fashion by holding onto the outside of the drape covering the tray and folding each of the flaps away from you, without touching the inside of the drape or any of the contents of the tray.
• Next, put on your sterile gloves without contaminating them (touching only the inside surface of the glove, not the outside surface that will contact the sterile tray and equipment). Finally, prepare the spinal tray for the procedure (Figure 2):
i. Unscrew the caps on all collection vials and set the vials upright in the correct sequence
ii. Connect the two parts of the manometer together and connect the stopcock to the bottom part of the manometer
iii. Hold the dipping tray off the LP tray while an assistant pours cleaning solution into it
iv. Connect the 25-gauge needle to the 3 mL syringe and, with the help of an assistant, draw local anesthetic into the syringe
v. Check the spinal needle and stylet for proper functioning

Figure 2: Prepped LP tray. Note that the stopcock and manometer have been assembled, the collection vials have been opened and arranged upright in sequence, the cleaning solution has been poured into the dipping tray, and the anesthetic has been drawn into the 3 mL syringe and connected to the 25-gauge infiltrating needle.

6. Prep and drape the patient
• Once you have prepared the tray, cleanse the area where you will be performing the procedure. This is done by dipping the sponge stick into the cleaning solution, then wiping the patient's lumbar area using gradually widening circular motions, beginning at the planned point of needle insertion and moving outward until a 10 cm radius of skin has been prepped. This should be repeated a total of 3 times.
• Once the lumbar area has been prepped, establish a sterile field by placing a non-fenestrated drape on the bed between you and the patient, and a fenestrated drape on the patient's back (with the hole centered over the selected insertion site) in a sterile manner (i.e. holding the drape in a cuffed manner so as not to contaminate the sterile side of the drape or your gloves) (Figure 3).


Figure 3: Draping. Note the position of the fenestrated and non-fenestrated drapes over the patient's lumbar region (right image), creating a sterile field while maintaining access to the intended insertion site. Care should be taken to ensure the drapes cover all surfaces that the operator will need to touch, including the patient's side (e.g. when palpating the iliac crests).

7. Administer local anesthetic
• Over top of the drapes, and without contaminating the sterile field you have established, re-confirm the target needle insertion point by palpation.
• Once you are sure you have the correct site, insert the anesthetic needle under the skin, aspirate to make sure you are not in a blood vessel, and then inject 0.5 cc of the anesthetic solution into the subcutaneous tissue. It is often helpful to freeze as much of the track of the spinal needle as possible; just be sure to aspirate before each injection.

8. Insert and advance the spinal needle into the subarachnoid space
• After anesthetizing the target area, again reconfirm your target entry point by palpation, as the anesthetic can sometimes obscure the landmarks.
• Once this is done, insert the spinal needle with the stylet in place precisely in the midline, at the center of the chosen interspace, angling about 15° toward the patient's head (about the same as aiming for the patient's umbilicus) (Figure 4).
• Hold the needle firmly between the thumb and index finger (with either one or two hands), while using the rest of the fingers (or the other hand) to stabilize the needle or the patient, as needed.
• Orient the bevel of the needle so that the flat portion is facing in the direction of the patient's side (Figure 4).


Figure 4: Angle and bevel orientation of the spinal needle during insertion. Note the 15° angle of the spinal needle (pointing toward the patient's umbilicus), with an insertion site in the center of the interspace (left image). Also note that the bevel of the spinal needle is pointing upward, i.e. toward the patient's side (right image).

• Insert the needle through the skin and subcutaneous tissue in a straight line, advancing with slow, smooth movements.
• Advance the needle through the lumbar ligaments until you feel a loss of resistance (usually occurring as the needle passes the ligamentum flavum) or a "pop" (occurring as the needle passes through the meninges). At this point, stop advancing, remove the stylet, and check for CSF return (i.e. fluid in the hub of the needle).
• If no CSF is seen, replace the stylet and continue to advance the needle in short increments (1-2 mm), stopping periodically to remove the stylet and check for CSF return (Figure 5).
• If an obstruction is felt or the needle is advanced to the hub, slowly withdraw the needle into the subcutaneous space (without removing it from the skin) and redirect it. If CSF is not obtained after 3 attempts at redirecting the needle, remove the needle entirely and attempt insertion again at that interspace or a different one, after rechecking the landmarks. If CSF is still not obtained after 3 insertion attempts, you should call for help from a more senior colleague.

Figure 5: Checking for fluid. Periodically remove the stylet and check for CSF return, especially when a loss of resistance or "pop" is felt. If CSF is not visualized in the hub of the needle, replace the stylet and advance in slow increments (1-2 mm), removing the stylet before each advancement to check for fluid return.


9. Measure CSF (opening) pressure
• To accurately measure CSF opening pressure, attach the stopcock and manometer to the needle hub as soon as fluid appears in the hub of the needle (Figure 6).
• Start by turning the stopcock "off" to the patient (i.e. turn the dial toward the patient); when you are ready to measure the pressure, turn the stopcock "off" to the operator (i.e. turn the dial toward you) (Figure 6).
• Allow fluid to enter the manometer and observe until the meniscus stops rising. Measure the opening pressure by reading the numerical value that corresponds to the level of the meniscus (Figure 6). The manometer is graduated in cm H2O; for example, if the meniscus settles at the 18 cm mark, the opening pressure is 18 cm H2O.

Figure 6: Measuring CSF pressure. As soon as CSF is visualized in the hub of the needle, replace the stylet. Gather the manometer with the stopcock initially turned "off" to the patient and attach it to the needle hub (upper left image). Next, turn the dial "off" to the operator (i.e. toward you) and observe the fluid entering the manometer (lower left image). Wait for the column to stop rising, then measure the pressure by reading the number on the column that corresponds to the level of the meniscus (right image).

10. Collect CSF
• To obtain CSF specimens for laboratory analysis, collect the smallest volume of CSF necessary (usually approximately 4 mL of CSF total, or 1 mL per vial, is sufficient). This can be done by collecting fluid through the manometer (by turning the stopcock "off" toward the manometer, i.e. turning the dial upward) and placing the vials under the open end of the stopcock (Figure 7).
• Alternatively, you can remove the entire stopcock and manometer system after measuring the opening pressure and collect fluid directly from the hub.
• In either case, the fluid should be collected in each vial in sequential order. Once 1 cc of fluid has been collected in a vial, move the vial from under the needle hub or stopcock and screw on the cap with one hand, placing the vial back on the tray in its appropriate position while gathering the next collection vial. Throughout the collection, care should be taken to avoid excess loss of CSF.


Figure 7: CSF sampling. Collect the minimum volume of CSF required by first gathering the CSF in the manometer, placing the vial under the stopcock and turning the dial "off" to the manometer (i.e. upward, as demonstrated above). Next, you can either collect CSF in each of the remaining vials using this method, or remove the manometer and collect the sample directly from the needle hub.

11. Complete the procedure
• When a sufficient sample of CSF has been collected, complete the procedure by replacing the stylet and, with a quick, smooth motion, removing the needle from the spine.
• Use the sterile gauze to apply pressure to the puncture site until no bleeding or leakage is detected. Then place a sterile bandage over the site and clean up the procedure tray.

General points on LP etiquette and maintaining sterility:
• During the procedure, you can demonstrate good communication skills by warning the patient before you do anything that they may feel - e.g. "I'm just going to feel your back to determine where I should perform the puncture", "I'm going to clean the skin on your back, this may feel cold", or "you may feel a small prick now, it's just the freezing going in" - you get the idea.
• You can establish a sterile field by following the directions above, but it is equally important to maintain sterility by practicing good aseptic technique throughout the procedure. This means being vigilant to avoid breaking the sterile field. Here are a few common pitfalls to avoid:
i. Once you put on your sterile gloves, your hands are sterile - so if you touch any surface that is not sterile (e.g. scratching your head or nose), you have contaminated yourself. Even placing your hands at your sides or behind your back would be a breach of sterility, so be careful where your hands are at all times!
ii. The same rule applies to any other sterile materials - if the needle, stylet, manometer, or any other piece of equipment touches a non-sterile surface, that piece of equipment is contaminated. So be careful about where your equipment is at all times!
iii. Sometimes during the procedure your drapes can migrate, exposing an unsterile area - be sure to watch for this and re-drape as needed to maintain the sterile field.

NOTE: During this training session, if you break sterile technique, simply acknowledge the break in sterility and continue as if the breach had not occurred.


References:
1. Ellenby MS, Tegtmeyer K, Lai S, Braner DAV. Videos in clinical medicine: lumbar puncture. N Engl J Med. 2006;355(13):e12.
2. Conroy SM, Bond WF, Pheasant KS, Ceccacci N. Competence and retention in performance of the lumbar puncture procedure in a task trainer model. Simul Healthc. 2010;5(3):133-138.
3. Brydges R, Nair P, Ma I, Shanks D, Hatala R. Directed self-regulated learning versus instructor-regulated learning in simulation training. Med Educ. 2012;46(7):648-656.
4. White ML, Jones R, Zinkan L, Tofil NM. Transfer of simulated lumbar puncture training to the clinical setting. Pediatr Emerg Care. 2012;28(10):1009-1012.
5. Schneider VF. Lumbar puncture. In: Dehn RW, Asprey DP, editors. Essential Clinical Procedures. Philadelphia: Saunders; 2007. p. 191-201.
6. Chen H, Sonnenday CJ, Lillemoe KD. Neurosurgical procedures: lumbar puncture. In: Manual of Common Bedside Surgical Procedures. Lippincott Williams & Wilkins; 2000. p. 181-186.
7. Cronan KM, Wiley JF. Lumbar puncture. In: King C, Henretig FM, editors. Textbook of Pediatric Emergency Procedures. Philadelphia: Williams & Wilkins; 2008. p. 506-515.


Appendix 7: LP multiple choice questions

1. Which of the following images is not appropriately matched to its description?
a. 20-gauge, 2.5-inch spinal needle with stylet
b. 25-gauge needle and 3 mL syringe for local anesthetic
c. Stopcock and manometer for measuring opening pressure
d. Vials for collecting CSF samples
e. All of the above are correct

2. Which of the following is the most appropriate method for prepping a patient's back for a lumbar puncture?
a. Prep once, starting 10 cm from the desired puncture site and working your way in using circular motions
b. Prep three times, each time starting at the desired puncture site and working your way out to a 10 cm radius using circular motions
c. Prep three times, each time starting 10 cm from the desired puncture site and working your way in using circular motions
d. Prep once, starting 10 cm from the desired puncture site and working your way from superior to inferior

3. Which of the following are appropriate levels to perform a lumbar puncture?
a. L1/2 and L3/4
b. L3/4 and L4/5
c. L1/2 and L3/4
d. L4/5 and S1/2
e. All of the above may be used

4. Which of the following describes the appropriate technique for landmarking for a lumbar puncture?
a. Palpate both iliac crests and draw an imaginary line connecting them. The point where this line intersects the midline corresponds to the L4 level, and you should place your needle in the space between the spinous processes directly above or below this (L3/4 or L4/5)
b. Palpate both iliac crests and draw an imaginary line connecting them. The point where this line intersects the midline corresponds to the L3 level, and you should place your needle in the space between the spinous processes directly above or below this (L2/3 or L3/4)
c. Palpate the spinous processes in the midline and find the widest interspace. This corresponds to L4/5, and you should place your needle here or in the space directly above
d. Palpate the spinous processes in the midline and find the widest interspace. This corresponds to L3/4, and you should place your needle here or in the space directly below

5. Which of the following is incorrect about establishing/maintaining a sterile field?
a. The drapes should be set up with the windowed drape over the back and the full drape on the bed below, after the skin has been prepped
b. Prior to injecting the local anesthetic or inserting the spinal needle, you should landmark by palpating the iliac crests over top of the drapes
c. The LP tray should be opened prior to donning sterile gloves
d. It is permissible to allow a sterile surface (e.g. sterile equipment or gloves) to contact a non-sterile surface (e.g. your skin or the bed), so long as it is not for more than 5 seconds

6. Which of the following is an appropriate method to insert the spinal needle?
a. Insert at the midpoint of the desired interspace and angle 15 degrees up (toward the umbilicus), with the bevel pointing to the patient's side
b. Insert near the bottom of the desired interspace and angle 15 degrees downward (toward the feet), with the bevel pointing to the patient's side
c. Insert at the midpoint of the desired interspace, angling 15 degrees down (toward the feet), with the bevel pointing to the patient's feet
d. Insert at the top of the inferior spinous process, angling 15 degrees up (toward the umbilicus), with the bevel pointing to the patient's head

7. Which of the following is an appropriate technique to advance the spinal needle?
a. Advance the needle through the skin and subcutaneous tissues with the stylet in place until you hit an obstruction or the needle hub hits the skin, then remove the stylet and check for fluid return. If no fluid returns, replace the stylet and withdraw the needle, reinserting at a revised angle
b. Remove the stylet, advance the needle through the skin and subcutaneous tissues until a "pop" is felt, then check for fluid return. If no fluid returns, replace the stylet and withdraw the needle, reinserting at a revised angle without the stylet
c. Advance the needle through the skin and subcutaneous tissues with the stylet in place, then remove the stylet and check for fluid return. If no fluid returns, replace the stylet and advance the needle in 1-2 mm increments until a "pop" is felt, periodically removing the stylet to check for CSF return. If an obstruction is felt, withdraw the needle to just below the skin and redirect at a new angle
d. Advance the needle through the skin and subcutaneous tissues with the stylet in place, then remove the stylet and check for fluid return. If no fluid returns, withdraw the needle completely and reinsert without the stylet

8. Which of the following is the appropriate technique to measure opening pressure?
a. Before inserting the spinal needle, attach the stopcock and manometer to the needle with the stopcock turned toward the patient, so that once CSF is obtained the opening pressure can be measured immediately
b. Once the needle is advanced into the subarachnoid space and CSF is acquired, immediately attach the stopcock and manometer to the spinal needle, with the stopcock turned toward you to allow for measurement of opening pressure
c. Once the needle is advanced into the subarachnoid space and CSF is acquired, attach the stopcock and manometer to the spinal needle, with the stopcock turned upward to allow for measurement of opening pressure
d. Once the needle is advanced into the subarachnoid space and CSF is acquired, collect CSF in all four vials, then attach the stopcock and manometer to the spinal needle with the stopcock turned toward you, to allow for measurement of opening pressure

9. Which of the following is an appropriate method for CSF collection?
I. From the stopcock, by turning the dial upward and placing the collecting tube below
II. From the needle hub, by detaching the stopcock and placing the collecting tube below
a. I only
b. II only
c. Both I and II
d. Neither I nor II

10. Which of the following is the appropriate sequence of steps for performing LP?
a. (i) Position the patient; (ii) Open the LP tray; (iii) Don sterile gloves/gown; (iv) Set up the tray; (v) Landmark the desired level; (vi) Prep the patient; (vii) Place sterile drapes; (viii) Landmark again, then insert the spinal needle to obtain CSF; (ix) Landmark again, then infiltrate with local anesthetic; (x) Measure opening pressure; (xi) Collect CSF; (xii) Remove the needle with stylet in place and apply a bandage
b. (i) Position the patient; (ii) Landmark the desired level; (iii) Open the LP tray; (iv) Don sterile gloves/gown; (v) Set up the tray; (vi) Prep the patient; (vii) Place sterile drapes; (viii) Landmark again, then infiltrate with local anesthetic; (ix) Landmark again, then insert the spinal needle to obtain CSF; (x) Measure opening pressure; (xi) Collect CSF; (xii) Remove the needle with stylet in place and apply a bandage
c. (i) Position the patient; (ii) Landmark the desired level; (iii) Open the LP tray; (iv) Set up the tray; (v) Don sterile gloves/gown; (vi) Place sterile drapes; (vii) Prep the patient; (viii) Landmark again, then infiltrate with local anesthetic; (ix) Landmark again, then insert the spinal needle to obtain CSF; (x) Measure opening pressure; (xi) Collect CSF; (xii) Remove the needle with stylet in place and apply a bandage
d. (i) Landmark the desired level; (ii) Position the patient; (iii) Don sterile gloves/gown; (iv) Open the LP tray; (v) Set up the tray; (vi) Prep the patient; (vii) Place sterile drapes; (viii) Landmark again, then infiltrate with local anesthetic; (ix) Landmark again, then insert the spinal needle to obtain CSF; (x) Measure opening pressure; (xi) Collect CSF; (xii) Remove the needle with stylet in place and apply a bandage


Appendix 8: Checklist and GRS for LP performance and communication skills

A8-1: LP Critical Actions Checklist (adapted from Brydges et al. 2012 and Lammers et al. 2005)

1. Locate the target
_____ Locate the iliac crests by palpation
_____ Draw an imaginary line between the iliac crests
_____ Identify the L3-4 or L4-5 interspace (must spend >2 seconds identifying the interspace)

2. Prepare the spinal tray
_____ Open the tray with sterile technique
_____ Put on sterile gloves without contamination
_____ Pour Betadine into the well on the tray
_____ Unscrew the caps on the 3-4 tubes
_____ Set up the tubes in the correct sequence
_____ Connect the stopcock to the bottom part of the manometer
_____ Connect the two parts of the manometer together
_____ Attach the large needle to the syringe
_____ Withdraw anesthetic without contamination

3. Cleanse and drape the skin
_____ Place sponge stick into cleaning solution
_____ Wipe the skin in a circular motion from the target area to about a 10 cm radius
_____ Repeat the preceding steps with the other two sponge sticks
_____ Place a sterile towel between the hip and the bed
_____ Place the fenestrated sterile towel on the patient's back with the hole centered over the target area

4. Anaesthetize the skin
_____ Reconfirm the target by palpation
_____ Insert the needle into subcutaneous tissue
_____ Aspirate for blood
_____ Inject the anesthetic solution, aspirating first to confirm the needle is not in a blood vessel

5. Insert the spinal needle
_____ Reconfirm the target by palpation
_____ Place the needle in the center of the interspace (give credit unless obviously too close to a spinous process)
_____ Angle the needle toward the umbilicus (give credit unless obviously incorrect or aiming toward the feet)
_____ Hold the needle properly (between the thumb and index finger using one or two hands, with the other fingers/hand maintaining needle and patient stability as needed)

6. Advance the needle
_____ Insert the needle into the skin slowly and smoothly
_____ Orient the bevel of the needle laterally (so as to split, not cut, the fibers)
_____ Advance the needle past the subcutaneous tissue in a straight line
_____ Remove the stylet and check for fluid
_____ Reinsert the stylet (unless fluid is obtained)
_____ Advance the needle further until a pop is felt or an obstruction prevents further movement (if needed)
_____ Remove the stylet and check for fluid (if needed)
_____ If there is an obstruction and no fluid, withdraw the needle and advance again, repositioning the needle or using a different intervertebral space

7. Measure CSF pressure
_____ Attach the manometer/stopcock to the needle hub
_____ Turn the stopcock valve until the dial is parallel with the manometer
_____ Allow the fluid to fill the manometer until the meniscus stops rising
_____ Measure the CSF opening pressure correctly
_____ Turn the stopcock valve toward the needle
_____ Remove the stopcock/manometer from the needle hub (can be before or after collecting CSF samples)

8. Collect spinal fluid
_____ Place the first tube under the stopcock/needle
_____ Collect CSF (1 mL per vial)
_____ Screw the cap on the first tube with one hand, and place the tube upright in the slot in the tray
_____ Avoid excessive fluid loss (uncollected fluid dripping for less than 5 seconds)

9. Terminate the procedure
_____ Withdraw the needle with stylet in place
_____ Apply sterile bandage/gauze over the puncture site

General notes:
- Participants may only redirect the needle three times; after that, the needle must be removed and re-inserted
- No marking pen will be provided
- Participants will only be able to reinsert the needle three times before the trial is terminated
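For readers interested in how an instrument like the checklist above can be turned into a score, the following sketch illustrates one conventional approach: the percentage of critical actions completed. The item names and the simple proportion rule are illustrative assumptions, not the scoring procedure reported in this thesis.

# A minimal sketch of proportion-based checklist scoring (illustrative
# item names; not the thesis's actual analysis).
completed = {
    "locate_iliac_crests": True,
    "identify_interspace": True,
    "open_tray_sterile": True,
    "don_gloves_without_contamination": False,
    "orient_bevel_laterally": True,
}

def checklist_score(items):
    """Return the percentage of checklist items marked complete."""
    return 100.0 * sum(items.values()) / len(items)

print(f"{checklist_score(completed):.0f}%")  # 80%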


A8-2: LP GRS (from Cheung et al., 2012)

Each item is rated on a 5-point scale; descriptors anchor scores of 1, 3, and 5 (scores of 2 and 4 are intermediate).

Preparation for procedure
1 = Did not organize equipment well; had to stop the procedure frequently to prepare equipment
3 = Equipment generally organized; occasionally has to stop and prepare items
5 = All equipment neatly organized, prepared, and ready for use

Patient interaction
1 = Little to no rapport established; patient is unaware of the steps of the procedure; no verbal comforting to alleviate patient anxiety
3 = Rapport is generally established; patient is aware and informed of most steps in the procedure; patient anxiety is alleviated adequately using verbal comforting
5 = Strong rapport is established and maintained throughout the procedure; patient is well informed of all relevant steps of the procedure; patient anxiety is consistently alleviated using verbal comforting

Asepsis
1 = Practice of proper aseptic technique not generally apparent; many errors in aseptic technique made throughout the procedure
3 = Generally practices proper aseptic technique; occasional errors in aseptic technique made during the procedure
5 = Excellently demonstrates proper aseptic technique; few or no errors in aseptic technique made during the procedure

Respect for tissue
1 = Frequently used unnecessary force on tissue or caused damage
3 = Careful handling of tissue, but occasionally caused unintentional damage
5 = Consistently handled tissues appropriately with minimal damage

Time and motion
1 = Many unnecessary movements
3 = Efficient time/motion, but some unnecessary movements
5 = Clear economy of movement and maximum efficiency

Instrument handling
1 = Repeatedly makes tentative and awkward movements through inappropriate use of instruments
3 = Competent with instruments, but occasionally makes awkward or stiff movements
5 = Fluid movements with instruments and no awkwardness

Flow of procedure
1 = Frequently stopped the procedure and seemed unsure of the next move
3 = Demonstrated some forward planning, with reasonable progression of the procedure
5 = Obviously planned course of procedure, with effortless flow from one move to the next

Knowledge of procedure
1 = Deficient knowledge
3 = Knew all important steps of the procedure
5 = Demonstrated familiarity with all aspects of the procedure

Overall performance
1 = Very poor
3 = Competent
5 = Clearly superior


A8-3: Communication GRS (from Hodges et al. 2003)

SIN _________ SCENARIO ____________

Each scale is rated from 1 to 5; descriptors anchor scores of 1, 3, and 5 (scores of 2 and 4 are intermediate).

OVERALL ASSESSMENT OF THE KNOWLEDGE AND SKILLS DEMONSTRATED IN THE SCENARIO
1 = Responds inappropriately and ineffectively to the task, indicating a lack of knowledge and/or undeveloped interpersonal and communication skills
3 = Responds effectively to some components of the task, indicating an adequate knowledge base and some development of interpersonal and communication skills
5 = Responds precisely and perceptively to the task, consistently integrating all components

GLOBAL RATING SCALES
Circle the rating which best reflects your judgment of the student's performance in the following categories:

RESPONSE TO PATIENT'S FEELINGS AND NEEDS (EMPATHY)
1 = Does not respond to obvious patient cues and/or responds inappropriately
3 = Responds to the patient's needs and cues, but not always effectively
5 = Responds consistently in a perceptive and genuine manner to the patient's needs and cues

DEGREE OF COHERENCE IN THE INTERACTION
1 = No recognizable plan to the interaction, the plan does not demonstrate cohesion, or the patient must determine the direction of the interaction
3 = Organizational approach is formulaic and minimally flexible, and/or control of the interaction is inconsistent
5 = Superior organization, demonstrating command of cohesive devices, flexibility, and consistent control of the interaction

VERBAL EXPRESSION
1 = Communicates in a manner that interferes with and/or prevents understanding by the patient
3 = Exhibits sufficient control of expression to be understood by an active listener (patient)
5 = Exhibits command of expression (fluency, grammar, vocabulary, tone, volume and modulation of voice, rate of speech, pronunciation)

NON-VERBAL EXPRESSION
1 = Fails to engage, frustrates, and/or antagonizes the patient
3 = Exhibits enough control of non-verbal expression to engage a patient willing to overlook deficiencies such as passivity, self-consciousness, or inappropriate aggressiveness
5 = Exhibits finesse and command of non-verbal expression (eye contact, gesture, posture, use of silence, etc.)


Appendix 9: Supplemental Data Analyses

Data on signal detection rate (SDR) for participants training on a simple vs. complex knot-tying task (see Chapter 4 for complete study details). There was no significant difference in SDR between the groups (simple vs. complex; F(1,26) = 0.797; p > 0.5; f = 0.14). These results likely reflect a ceiling effect in the SDR metric, as the majority of participants in both groups achieved 100% accuracy on the signal-detection secondary task from the outset of training.
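The ceiling effect described above can be illustrated with a simple computation of the proportion of participants achieving perfect detection in each group. The values below are invented for demonstration only; they are not the study's data.

# Illustrative ceiling-effect check: proportion of participants at 100%
# signal detection rate (SDR) per group (made-up values, not study data).
sdr_by_group = {
    "simple":  [1.0, 1.0, 0.95, 1.0, 1.0, 0.90, 1.0],
    "complex": [1.0, 0.85, 1.0, 1.0, 1.0, 1.0, 0.95],
}

for group, scores in sdr_by_group.items():
    at_ceiling = sum(1 for s in scores if s == 1.0) / len(scores)
    print(f"{group}: {at_ceiling:.0%} of participants at 100% SDR")

When most participants sit at the maximum possible score, between-group differences are compressed toward zero, which is one reason a non-significant F test on SDR should be interpreted cautiously.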