Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2013
Enhancing Infantry Training by Optimizing the Zone of Proximal Development

Roberto K. Champney, Christina K. Padron, Ruben Ramirez-Padron, Kay M. Stanney
Design Interactive Inc., Oviedo, Florida
Roberto.Champney, Christina.Padron, Ruben.Ramirez, [email protected]

Stephanie Lackey
University of Central Florida, Institute for Simulation and Training, Orlando, Florida
[email protected]

Peter Squire
Office of Naval Research, Washington, DC
[email protected]

Jason H. Wong
Naval Undersea Warfare Center, Newport, RI
[email protected]
ABSTRACT
A key challenge facing Infantry training systems is the lack of integrated instructional capabilities. Most existing training systems provide a venue for performing operational tasks yet lack the critical elements that distinguish practice from training: appropriate instructional methods and strategies, individualized diagnosis of performance breakdowns, directed feedback, and remedial part-task training. Without the presence of an instructor, feedback is mostly limited to knowledge of results. Further, without the presence of a trained instructor, the appropriate types of pedagogical techniques suitable for the types of skills and issues observed are not optimized. This is a challenge as training is extended from schoolhouses to units where small unit leaders serve as instructors. While leaders possess the highest task expertise, they may not be trained in identifying and applying the most appropriate pedagogical strategies to increase training effectiveness and efficiency. One approach to mitigate this gap is to embed supportive pedagogical strategies in training systems to assist instructors. This paper describes the integration of a pedagogically informed instructor support capability, the Enhanced Instructor System (EIS), within the Augmented Immersive Team Trainer (AITT), an augmented reality training system under development that is suitable for training artillery call for fire skills. The EIS design is based on the zone of proximal development (ZPD) approach, which pinpoints areas of skill development where trainees benefit most from instructional support (e.g., scaffolding). While the ZPD concept is straightforward, its application in simulation-based skill training presents unique challenges, including: 1) how to operationally assess and identify the ZPD for skills in a simulation environment and 2) how to select the most appropriate scaffolding strategy given the observed performance. The theoretical foundations of ZPD and its assessment, challenges to instantiating this approach in simulation training systems, and approaches to overcome these challenges are discussed.
ABOUT THE AUTHORS

Dr. Roberto Champney is a Senior Research Associate II at Design Interactive, Inc. His work focuses on field analysis, design and evaluation of interactive and training systems, and emotion in design. Dr. Champney holds a Ph.D. in Industrial Engineering and Management Systems from the University of Central Florida.

Ms. Christina Kokini Padron is a Research Associate II at Design Interactive, Inc., working on the design, development, and evaluation of virtual training tools for the Office of Naval Research (ONR) and the Army Research Laboratory (ARL), including requirements elicitation, system interaction design and specification, and usability evaluation for innovative training management systems. She holds a Master’s degree from Penn State
University in Industrial Engineering with a Human Factors Option, where her research focused on the direct effect of contextual characteristics on perceived usability. She also holds a Bachelor’s degree from Purdue University in Industrial Engineering.

Mr. Ruben Ramirez-Padron is a Software Engineer II at Design Interactive, Inc. Mr. Ramirez-Padron holds a Master’s degree in Computer Engineering from the University of Central Florida, and has a research background in novelty detection, statistical modeling, kernel methods for pattern analysis, and online machine learning. At Design Interactive, Inc., he has been involved in proposing, implementing, and validating intelligent data analysis methods and AI/machine learning techniques for R&D projects.

Dr. Kay Stanney is President and Founder of Design Interactive, Inc., a woman-owned small business focused on human-systems integration founded in 1998. She has over 20 years of experience in the design, development, and evaluation of human-interactive systems. Dr. Stanney received a B.S. in Industrial Engineering from the State University of New York at Buffalo, after which she spent three years working as a manufacturing/quality engineer for Intel Corporation in Santa Clara, California. She received her Master’s and Ph.D. in Industrial Engineering, with a focus on Human Factors Engineering, from Purdue University. She was a professor in the Industrial Engineering and Management Systems Department at the University of Central Florida from 1992 to 2008.

Dr. Stephanie Lackey is the Director of the Applied Cognition & Training in Immersive Virtual Environments (ACTIVE) Lab at the University of Central Florida’s (UCF) Institute for Simulation and Training (IST). She earned her Master’s and Ph.D. degrees in Industrial Engineering and Management Systems with a specialization in Simulation, Modeling, and Analysis from UCF. Dr. Lackey joined UCF-IST’s ACTIVE Lab in 2008 and assumed the role of Lab Director in 2010. Dr. Lackey applies her experience in advanced predictive modeling to the field of human performance in order to develop methods for improving human performance in simulation-based and immersive training environments and human-robot interfaces.

Dr. Peter Squire is the Program Manager for the Human Performance Training and Education (HPT&E) Thrust within the Office of Naval Research (ONR) Expeditionary Maneuver Warfare & Combating Terrorism Department. Prior to joining ONR, he led the Human System Integration Science and Technology efforts at the Naval Surface Warfare Center Dahlgren. His research interests include Human-Robot Interaction, Attention, Working Memory, and Neuroergonomics. He received his B.S. with Honors in Computer Science from Mary Washington College, and his Ph.D. and M.A. in Human Factors and Applied Cognition from George Mason University.

Dr. Jason Wong is a Human Factors Scientist with the Naval Undersea Warfare Center, where he explores the cognitive and training aspects of complex systems. Additionally, he is the Technical Direction Agent for the Decision Making and Expertise Development investment area within the Office of Naval Research Expeditionary Maneuver Warfare & Combating Terrorism Department. Dr. Wong received his Ph.D. in Human Factors and Applied Cognitive Psychology from George Mason University, where he focused on theoretical and applied issues of visual attention and working memory.
INTRODUCTION
In fact, the precise nature of practice and its relationship to learning outcomes has been largely ignored or misunderstood… practice may be a complex process, not simply task repetition… [what is needed is] a framework for delineating the conditions that might enhance the utility and efficacy of practice in training.
Salas and Cannon-Bowers (2001, pp. 480-481)

A key challenge facing Infantry training systems is the lack of integrated instructional capabilities. Most existing training systems provide a venue for performing operational tasks yet lack the critical elements that distinguish practice from training: appropriate instructional methods and strategies, individualized diagnosis of performance breakdowns, directed feedback, and remedial part-task training. Without the presence of a trained instructor, the appropriate types of pedagogical techniques and assessment methods suitable for the types of skills and issues observed (and unobserved) are not optimized. Further, without the presence of an instructor, feedback is mostly limited to knowledge of results and there may be limited guidance in terms of remediation (Schatz, Champney, Lackey, Oaks & Dunne, 2011). This is a challenge as training is extended from schoolhouses to units where small unit leaders are the instructors. While leaders possess the highest task expertise, they may not be trained in identifying and applying the most appropriate pedagogical strategies to increase training effectiveness and efficiency. One approach to mitigate this gap is to embed supportive pedagogical strategies in training systems to assist instructors.

BACKGROUND
The purpose of the Augmented Immersive Team Trainer (AITT) is to enable immersive training for Infantry in conducting artillery call for fire tasks. Augmented reality capabilities merge realistic virtual entities (e.g., vehicles, buildings, combatants, and weapon effects) with the real-world environment. The resulting combined scenery is then presented to trainees in a head-mounted display (HMD) or tool props such as target range finders or binoculars. The AITT system is intended to be utilized outdoors, where the real terrain is leveraged and a virtual battlefield is overlaid, such that live fire training can be simulated on ranges or in operational urban environments (e.g., a rooftop inside a base). By taking advantage of augmented reality, the presentation of psychological, physical, and functional fidelity cues (Stone, 2012) can be enhanced in order to provide a compelling presentation of real-world operational environments.
The use of augmented reality is anticipated to enhance the realism of training environments, while at the same time reducing the cost of such training by providing an experience somewhat comparable to the operational environment without the expenses commonly associated with live training (e.g., ordnance, aircraft, and fuel use). In addition, pre-training in such high fidelity environments may allow trainees to be better prepared to take advantage of culminating live-training events (Roman & Brown, 2008).

The use of high-tech capabilities, such as augmented reality, does not guarantee that positive or efficient training will occur. In order to ensure that the opportunities for training provided by any training system are optimized, the training venue must include instructional support capabilities. Without such support, simulation-based training may even result in negative training, where practice in the training environment actually degrades real-world performance (Schatz et al., 2011; Houck & Thomas, 1991; Stottler et al., 2002). Specifically, instructors must be supported with a framework that assists in formulating the best training packages and in making objective assessments of task performance such that the most pressing training issues are addressed. The Zone of Proximal Development (ZPD) approach (Vygotsky, 1978, 1934/1986) can provide such a framework, one that delineates the conditions that enhance the utility and efficacy of simulation-based training.

ENHANCING TRAINING BY OPTIMIZING THE ZONE OF PROXIMAL DEVELOPMENT
The AITT Enhanced Instruction System (AITT-EIS)

To provide instructional support, the AITT system must have functionality that enables it to process human performance data, interpret these data with regard to desired training objectives, and provide a diagnosis from which to propose a suitable remediation strategy. Within the AITT system, these instructional capabilities will be provided by an embedded Enhanced Instruction System (EIS), which will provide assessment, diagnosis, and remediation support.

One challenge that exists when supporting instruction is that of selecting a mechanism for determining when and what support to provide. Often training systems simply present a summary of results or “scores,” which are nothing more than scenario statistics. Field observations by the authors have found that instructors have little use for this type of support, since it offers data that must still be processed and often overwhelms the instructor with too much information. For this reason it was determined that the EIS should provide processed data that is readily actionable (e.g., recommendations rather than scores) and also provide the ability to act on the recommendations (e.g., integrated remediation capabilities). In order to support such capabilities, the EIS must be integrated with a sound assessment mechanism that can determine the level of knowledge and skills possessed by a trainee. One approach that has been used in education to address such assessment has been the application of ZPD theory (Vygotsky, 1978, 1986).

Vygotsky’s ZPD is the cognitive development “zone” that provides the most efficient and effective learning gains. ZPD is achieved when the level of scenario challenge is optimized such that the trainee at any given level of competence (low/novice to high/expert) is only able to perform effectively with assistance (see Figure 1). It is here, the area of “potential” where cognitive functions are maturing, that the greatest opportunity for accelerating learning exists. If the training scenario is too difficult, the trainee will experience anxiety. If it is too easy, the trainee will experience boredom. ZPD theory suggests that instruction must focus on that narrow area of skill development between what a trainee can do alone and what s/he cannot do even with assistance (Häll, 2013). In essence, the ZPD lies just above the comfort zone for a trainee and represents an area where the trainee can make appropriate connections and leap to the next level of understanding and ability if assistance is provided.

Training packages that are directed at the ZPD have been demonstrated to enhance performance. For example, Kok (2011) found that students (children between 5 and 6 years old) in a cognitive teaching program designed to sustain ZPD development performed significantly better on post-testing than students in a traditional learning program (improvements of 7% to 52% across various metrics of cognitive function). While ZPD theory is relatively straightforward, its application in a simulation environment is more challenging, especially with regard to how to transform the theory into an operationally effective strategy. For example, how is ZPD identified in practice, and once identified, what kind of support should be provided (Luckin, 2010)? These are the central questions this effort sought to address.
Figure 1. Conceptual Diagram of the Zone of Proximal Development
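To make the zones of Figure 1 concrete, the following minimal Python sketch classifies a scenario/trainee pairing into the regions the figure depicts. The numeric thresholds, the assist_gain parameter, and all names are illustrative assumptions, not values taken from the AITT-EIS.

```python
from enum import Enum

class Zone(Enum):
    BOREDOM = "too easy: trainee succeeds alone, little learning"
    COMFORT = "performable alone: consolidation, limited growth"
    ZPD = "achievable only with assistance: optimal learning"
    ANXIETY = "too hard even with assistance: frustration"

def classify_zone(challenge: float, competence: float,
                  assist_gain: float = 1.0) -> Zone:
    """Classify a scenario/trainee pairing against the ZPD concept.

    `challenge` and `competence` are assumed to share a common scale;
    `assist_gain` is the extra capability scaffolding is assumed to add.
    The thresholds below are illustrative only.
    """
    if challenge <= competence - 1.0:          # well below current ability
        return Zone.BOREDOM
    if challenge <= competence:                # within unassisted ability
        return Zone.COMFORT
    if challenge <= competence + assist_gain:  # reachable only with help
        return Zone.ZPD
    return Zone.ANXIETY                        # beyond even assisted ability

print(classify_zone(challenge=1.5, competence=1.0))  # -> Zone.ZPD
```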
Identifying the Zone of Proximal Development

To be able to identify a trainee’s ZPD, one must first formulate a way to assess the trainee’s current level of ability and understanding of targeted knowledge and skills. The level of knowledge and skill capabilities can be inferred by assessing a trainee’s performance on knowledge comprehension tests or tasks that require a particular level of competence (i.e., mastery/expertise). In other words, if a concept or task requires a given level of competence and the trainee is able to successfully complete this challenge without any assistance, then s/he must possess at least this level of expertise. Given the potential that a trainee is able to accomplish tasks at this level of mastery by chance, one must provide multiple opportunities to demonstrate proficiency (see the sketch at the end of this section).

One approach to accomplish this is via Item Response Theory (IRT), which refers to a group of approaches that are used to measure psychological and educational constructs (Weiss & Yoes, 1991; Baker & Kim, 2004). IRT states that human capabilities can be estimated by a person’s responses to relevant items on a test, which can be subjective or behavioral. In essence, one can estimate a trainee’s capability by utilizing 1) a test item of complexity X, 2) for which it is assumed that a person with level Y of understanding of the construct 3) would have probability Z of answering the item correctly (Champney, Stanney, Woodman, Kokini & Lackey, 2012).

In the case of simulation-based training, such as that targeted by the AITT-EIS, this implies that to assess a trainee’s capability one would need: 1) a training scenario of a predetermined level of complexity (e.g., execute mission under foggy conditions); 2) for which a trainee from the target population with a given level of proficiency on specific measures of performance (e.g., hit the target within 100 meters); 3) would have a specific probability of completing the scenario successfully (based on prior history of performance on that scenario by trainees with similar capability levels). Having a number of assessments like this would enable the system to estimate the level of competence of a trainee for not only one but multiple constructs within a single scenario. Given the challenge of estimating complexities in scenario-based training, one possible approach to crafting scenarios of different complexities is to utilize standardized training requirements (e.g., Marine Training & Readiness Manuals), which tend to progress in complexity. The AITT-EIS system uses IRT to estimate a trainee’s current level of competence; this approach is described next.
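The need for multiple opportunities can be illustrated with a quick calculation: a single success says little about true proficiency, whereas repeated successes narrow the uncertainty. The sketch below uses a Wilson score interval as one way to express that uncertainty; the choice of interval and the example numbers are assumptions for illustration, not part of the published approach.

```python
import math

def wilson_interval(successes: int, attempts: int, z: float = 1.96):
    """95% Wilson score interval for a trainee's true success rate,
    estimated from repeated opportunities on items of one difficulty."""
    if attempts == 0:
        return (0.0, 1.0)
    p = successes / attempts
    denom = 1 + z**2 / attempts
    center = (p + z**2 / (2 * attempts)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / attempts
                                   + z**2 / (4 * attempts**2))
    return (center - half, center + half)

# One success in one attempt is consistent with luck; five of six is not.
print(wilson_interval(1, 1))   # wide interval: roughly (0.21, 1.0)
print(wilson_interval(5, 6))   # narrower: roughly (0.44, 0.97)
```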
Using Item Response Theory to Identify the Zone of Proximal Development

As described in the preceding section, a combination of scenarios, metrics, and historical data from trainees sharing similar competency levels is required to utilize IRT to estimate ZPD under operational conditions. This ZPD assessment process takes place across two key phases containing four steps. Phase 1 focuses on Model Building: 1) Define Performance Model and 2) Calibrate Model. Phase 2 focuses on Conducting Training: 3) Identify Capability Level and 4) Interpret ZPD Status. The first phase is necessary to create the baseline model against which future performance is to be judged (i.e., a trainee of characteristics X is expected to have probability Y of succeeding at challenge A). The second phase uses the developed baseline model to judge the capability of a given trainee during training. Once the ZPD is determined, a remediation plan (i.e., scaffolding strategy) can be determined and performance changes can be monitored (see Figure 2; a structural sketch of this flow follows the figure). Scaffolding is an instructional strategy by which support (e.g., task help) is provided by the instructor to the trainee in order to assist the learning process. Scaffolding can be of different types and is usually faded (i.e., less scaffolding is provided) as the trainee develops competence.
Figure 2. Process Flow Diagram for ZPD Approach
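As a structural sketch of the flow in Figure 2, the fragment below captures the two phases and four steps as plain data structures. Class names, metric names, and parameter values are hypothetical placeholders; steps 3 and 4 are elaborated in the sections (and sketches) that follow.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceModel:
    """Step 1 (Define Performance Model): training objectives decomposed
    into operational metrics (names here are illustrative)."""
    objectives: dict[str, list[str]] = field(default_factory=dict)

@dataclass
class CalibratedModel:
    """Step 2 (Calibrate Model): per-metric two-parameter IRF
    coefficients (a, b), fit from baseline data on representative trainees."""
    model: PerformanceModel
    irf_params: dict[str, tuple[float, float]] = field(default_factory=dict)

# Phase 1 (Model Building) happens once, offline:
perf = PerformanceModel({"call_for_fire": ["target_location_error",
                                           "time_to_first_round"]})
calibrated = CalibratedModel(perf, {"target_location_error": (1.0, -0.5),
                                    "time_to_first_round": (2.0, 1.0)})

# Phase 2 (Conducting Training) reuses `calibrated` on every session to
# estimate capability (step 3) and interpret ZPD status (step 4); those
# two steps are sketched in the sections that follow.
print(calibrated.irf_params)
```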
1. Define Performance Model. First, in order to apply IRT to simulation scenarios, a set of training objectives must be defined. For each of these training objectives, a number of measures of performance (MOPs) must be determined that are intended to assess the different components of the training objective construct. Yet these are usually somewhat abstract and require further refinement into more specific, detailed operational performance metrics. This decomposition of objectives into operational metrics defines the operational world that the training system intends to target.

2. Calibrate Model. Second, once an operationally informed Performance Model is available, it is then necessary to obtain a baseline with representative trainees of the target population. This is done to develop probability models of the target population’s ability to succeed in the challenges posed by a scenario. Typically, training data would need to be collected from a set of representative trainees and that data used to compute the parameters of the IRT model. This calibration procedure may be lengthy and require a significant number of data points to compute. This may become an operational issue given the limited availability of warfighters who may be recruited to build the sample. In turn, a two-tiered approach may be utilized to build the data and utilize the system. First, a limited-sample pilot study may be utilized to provide a rough estimate of the item response functions (IRFs; defined below) from the target population (which may include trainees of varying capability levels as well as experts). While performance from experts would support understanding “how to get it right,” novice data is necessary to understand the probabilities of trainees “getting it right.” The calibration of the model is done by comparing performance against the “right” answer, given that there is a threshold for correct task performance (e.g., hit the target within 100 meters and within 1 minute of mission start). Nonetheless, not everyone has the same probability of completing the task successfully; experts may have a much higher probability than students. While it has been reported in the literature that IRT does not require a representative sample to properly calibrate the model (Hambleton & Jones, 1993), the drawback in that case is that the sample size should be larger than the size of representative samples used in classical test theory. The more conservative
approach is to utilize a representative sample of individuals sharing the same characteristics (e.g., those undergoing the same type of course). These data can be utilized to seed the system’s database and commence operational use. Second, after a certain minimum number of trainees have passed through, the system can perform a calibration process that will further tune the IRT framework to the trainee population. As the amount of data collected increases, so will the reliability of the IRFs. This calibration process may be done once or could be repeated periodically, if desired, to keep the system adjusted to changes in the trainee population.

Since every scenario represents unique conditions with varying levels of complexity, a probability curve must be developed for each metric in each scenario. For each metric, a curve, termed an Item Response Function (IRF), is constructed, which determines the probability that a trainee with a particular capability level provides a correct response (i.e., performs correctly on the task metric). There are several models available to develop IRFs. The approach most suitable for the EIS is the two-parameter logistic model given in Equation 1:

$$P(\theta) = \frac{1}{1 + e^{-a(\theta - b)}} \qquad (1)$$
In the equation, θ denotes a trainee’s training objective capability, b denotes the item difficulty parameter, and a denotes the discriminative parameter. The discriminative parameter indicates how well the item can differentiate between performers of different capabilities, particularly near the location of the difficulty parameter. The value of P(θ) estimates the probability that a trainee of a particular capability level will perform correctly on the task metric associated with the IRF. Figure 3 depicts two instances of this IRF model used to represent two different conditions. The curves in the figure have different values for the difficulty parameter b (-0.5 and 1) and different values for the discriminative parameter a. The IRF with the greater discriminative parameter (a) has a greater slope and thus varies faster in the neighborhood of the difficulty parameter (b). This implies that differences in performance between any two trainees would be more likely to be found through the IRF having the greater discriminative parameter, provided their capabilities are near the difficulty parameter. Note, however, that the greater the value of the discriminative parameter of an IRF, the lesser its discrimination power when trainees’ capabilities are located on the same side of its difficulty parameter and relatively far away from it. The two-parameter model is calibrated so that the trade-off between discrimination power near the value of the difficulty parameter and at the extremes of the IRF is optimized to better differentiate between trainees’ capabilities.
Figure 3. Two instances of the two-parameter logistic IRF model. The red IRF has discriminative parameter a = 1; the green IRF has a = 2.
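A minimal sketch of Equation 1 and of one plausible calibration procedure follows. The irf function mirrors the two-parameter logistic model directly; the maximum-likelihood fit shown is a standard way to estimate (a, b) from pass/fail data, but the paper does not prescribe a specific fitting method, and the pairing of a-values with b-values from Figure 3 is assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def irf(theta, a, b):
    """Two-parameter logistic IRF (Equation 1): probability that a
    trainee of capability `theta` passes an item with difficulty `b`
    and discrimination `a`."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# The two illustrative curves from Figure 3 (parameter pairings assumed):
for a, b in [(1.0, -0.5), (2.0, 1.0)]:
    print(f"a={a}, b={b}: P(theta=0) = {irf(0.0, a, b):.3f}")

# --- Illustrative calibration (step 2) ---------------------------------
# Assumes baseline trainees' capabilities are approximately known (e.g.,
# from expert/novice grouping); joint estimation of capabilities and item
# parameters would be the fuller treatment. Fit (a, b) by maximizing the
# Bernoulli log-likelihood of the observed pass/fail outcomes.
rng = np.random.default_rng(0)
true_a, true_b = 1.5, 0.3
thetas = rng.normal(0.0, 1.0, size=500)            # baseline sample
passed = rng.random(500) < irf(thetas, true_a, true_b)

def neg_log_likelihood(params):
    a, b = params
    p = np.clip(irf(thetas, a, b), 1e-9, 1 - 1e-9)  # avoid log(0)
    return -np.sum(np.where(passed, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0])
print("estimated (a, b):", fit.x)   # close to (1.5, 0.3) for n = 500
```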
Further, given that a single test item is not sufficient to estimate a trainee’s level of competence with confidence, one must utilize multiple opportunities to estimate capability. As such, a composite score is required, which is built from all the IRFs associated with the same training objective. This is done by summing the IRFs to obtain a Test Response Function (TRF) for that training objective. Consequently, a TRF gives the expected number of items (from a set of related items) to be passed successfully by a trainee having a certain level of competence. A TRF curve can be normalized by dividing it by the number of underlying IRFs, so that it provides the expected performance as the percentage of measures on which the trainee will be successful given his/her training objective capability. This normalization allows for an easy side-by-side comparison of different TRFs, which can be made against historical response data by similar trainees under the same conditions (e.g., scenarios). In addition, the normalized TRFs allow for a direct comparison between capabilities, such that it is easier to predict which capabilities are expected to have the lowest performance associated with them.

3. Identify Capability Level. Third, once expected probability curves have been developed (i.e., IRFs and TRFs), it is possible to use those curves to judge the capabilities of new individuals attempting the same challenges (i.e., the scenarios developed). Given a new trainee’s performance on a set of metrics, one can estimate the capability score (i.e., skill level) for a given training objective by mapping the trainee’s actual performance (percentage of successful metrics) to a training objective capability score using the corresponding TRF curve. Figure 4 demonstrates a TRF curve formed by summing the two IRFs depicted in Figure 3. To determine the capability score (i.e., skill level) for the training objective, actual performance is located on the y-axis and the corresponding capability score is found on the x-axis using the mapping provided by the TRF curve (e.g., in Figure 4, the capability score corresponding to failing only one of the two metrics, i.e., 0.5 performance on the y-axis, is equal to 0.5 on the x-axis).
Figure 4. Training Objective Capability Score determination from TRF
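The following sketch builds the normalized TRF from the two Figure 3 IRFs and inverts it numerically to recover a capability score from observed performance, reproducing the Figure 4 example. The bisection-based inversion is an implementation choice assumed here; the paper specifies only the mapping, not how it is computed.

```python
import numpy as np

def irf(theta, a, b):
    """Two-parameter logistic IRF (Equation 1)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# IRF parameters for one training objective (values from Figure 3,
# parameter pairings assumed for illustration).
ITEMS = [(1.0, -0.5), (2.0, 1.0)]

def normalized_trf(theta):
    """Expected fraction of the objective's metrics passed at `theta`:
    the sum of the IRFs divided by the number of IRFs."""
    return sum(irf(theta, a, b) for a, b in ITEMS) / len(ITEMS)

def capability_from_performance(observed_fraction, lo=-4.0, hi=4.0):
    """Step 3: invert the (monotone increasing) TRF by bisection, mapping
    observed performance on the y-axis to a capability score on the x-axis."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if normalized_trf(mid) < observed_fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A trainee passing 1 of the 2 metrics (0.5 on the y-axis) maps to a
# capability score of about 0.5, as in the Figure 4 example.
print(round(capability_from_performance(0.5), 2))  # -> 0.5
```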
4. Interpret ZPD Status. Fourth, once a training objective capability score is known, it must be interpreted in terms of ZPD status. This is done by taking into consideration the capability score computed in the prior step and noting whether or not scaffolding (i.e., help/assistance) was applied during the task. Note that the training objective capability score represents the number of standard deviations above or below a mean training objective capability of 0. This results in a categorization scheme by which ZPD status can be interpreted based on the capability score and whether assistance was provided or not. The combination of these two variables allows for identification of the ZPD.
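A hedged sketch of step 4 appears below. The paper states that ZPD status is derived from the combination of the capability score (in standard deviation units) and whether assistance was provided, but does not publish the cut points or category labels; those used here are assumptions for illustration only.

```python
def zpd_status(capability_score: float, assisted: bool) -> str:
    """Illustrative ZPD interpretation. `capability_score` is in standard
    deviations about a population mean of 0; the 0.0 cut point and the
    category labels are assumptions, not the paper's published scheme."""
    if capability_score >= 0.0:
        # Performing at or above the population mean...
        return ("within ZPD: assisted success, begin fading scaffolds"
                if assisted else
                "above ZPD: unassisted success, increase challenge")
    # Performing below the population mean...
    return ("below ZPD: struggling even with assistance, reduce challenge"
            if assisted else
            "entering ZPD: unassisted struggle, provide scaffolding")

print(zpd_status(0.5, assisted=True))   # assisted success -> fade scaffolds
```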
Determining Appropriate Scaffolding

Estimating the trainee’s level of competence and ZPD status allows the EIS to assess future training needs, particularly whether scaffolding will be needed and, if so, which type of scaffolding is most appropriate. If the trainee has a low level of competence, s/he will be highly challenged without scaffolding. Scaffolding should be provided in the “area of potential,” where the trainee has the potential to achieve given assistance, after which the trainee should become comfortably challenged. As the trainee’s competence grows, scenarios can become increasingly more challenging, scaffolding should be faded, and the trainee should begin to show more self-regulatory behaviors during training (e.g., planning, monitoring [e.g., identifying adequacy of information, monitoring progress toward goals], using effective strategies, and handling task difficulties and demands; Azevedo et al., 2011). Once trainee competence is high even when training scenarios are highly challenging, then competence has been developed and new training challenges should be sought.

The AITT-EIS will provide different levels of scaffolding based on the trainee’s current level of competence as determined by the ZPD process, as shown in Figure 5 below. The characteristics and application of these scaffolds are informed by a scaffolding framework (still under development) that prescribes the appropriate scaffold to address a particular competency level (e.g., for each competency level a distinct type of scaffold would be applied). In addition, the design of the scaffolds would require input from subject matter experts to ensure that they are operationally valid. For instance, it is ideal that a scaffold be faded and that the level of support be reduced as trainees develop their competency. This implies that initially a novice may be provided with a very supportive strategy (e.g., provide the trainee with a pre-brief containing the Call for Fire procedure steps), then progress to a strategy where they are prompted for data (e.g., prompt them to calculate the Observer Target factor [OT factor]), and finally conclude with a less supportive strategy where they are prompted only when mistakes are made (e.g., prompt the trainee to check the OT factor calculation).
Figure 5. Scaffolding Approach
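The fading progression described above can be sketched as a simple competency-to-scaffold mapping. The Call for Fire examples are taken from the text; the numeric bands and the three-level granularity are assumptions, since the underlying scaffolding framework is still under development.

```python
def select_scaffold(capability_score: float) -> str:
    """Illustrative scaffold fading for the call-for-fire task, following
    the progression described above; the numeric bands are assumptions."""
    if capability_score < -1.0:
        # Novice: highly supportive strategy.
        return "pre-brief the full Call for Fire procedure steps"
    if capability_score < 0.5:
        # Developing: prompt for required data during the task.
        return "prompt trainee to calculate the Observer Target (OT) factor"
    # Proficient: intervene only on error.
    return "prompt trainee to check the OT factor calculation on mistakes"

# As competence grows across sessions, the scaffold fades:
for score in (-1.5, 0.0, 1.0):
    print(score, "->", select_scaffold(score))
```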
SUMMARY
The use of simulation-based training systems is continuously expanding, contributing benefits such as increased training venues and reduced training burden. Nonetheless, without the application and integration of appropriate instructional strategies within the training system, there is the risk that training would be suboptimal: at best, the technology is not used optimally, and at worst, its use results in negative training. The application of instructional theories, such as Vygotsky’s ZPD, and psychometric theories, like IRT, to military training systems promises to be a step in the right direction. This, coupled with the ability to tailor different integrated scaffolds that match a trainee’s ZPD, should result in a training program that can be optimized to address a trainee’s knowledge and skill gaps.

Further, it may also be possible to estimate a person’s level of expertise/mastery by judging their observed success compared to the success of other similar individuals (e.g., below, the same as, or above). One may provide further granularity by identifying how far one’s successes fall below or above the baseline curve. In the approach
being used in this effort, good or bad task performance does not necessarily imply high or low competence; it is the assessment against the expected curve (TRF) that may be used to estimate mastery relative to other similar individuals (expected probabilities). The end goal of this effort is to develop a ZPD-based framework that delineates the conditions that enhance the utility and efficacy of practice in training, such that 1) instructors can better use simulation training capabilities and 2) pre-training in these simulation environments allows trainees to be better prepared to take advantage of culminating live-training events. Further development of this approach and operational testing will allow demonstration of its capabilities in producing training efficiencies and increased effectiveness. This approach will become operational in the upcoming development of the EIS and be integrated with the AITT system.

ACKNOWLEDGEMENTS

This material is based upon work supported in part by the Office of Naval Research (ONR) under contract N00014-12-C-0216. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views or the endorsement of ONR.
REFERENCES

Azevedo, R., Cromley, J.G., Moos, D.C., Greene, J.A., & Winters, F.I. (2011). Adaptive content and process scaffolding: A key to facilitating students’ self-regulated learning with hypermedia. Psychological Test and Assessment Modeling, 53(1), 106-140.

Baker, F.B., & Kim, S.H. (2004). Item response theory: Parameter estimation techniques. New York: Dekker.

Champney, R.K., Stanney, K.M., Woodman, M., Kokini, C., & Lackey, S. (2012). Enhancing infantry training via pedagogically informed strategies. Proceedings of the 2012 Fall Simulation Interoperability Standards Organization (SISO) Simulation Interoperability Workshop (SIW), September 10-14, 2012.

Häll, L.O. (2013). Developing educational computer-assisted simulations: Exploring a new approach to researching learning in collaborative health care simulation contexts. Unpublished doctoral dissertation. Umeå Universitet, Pedagogiska Institutionen.

Hambleton, R.K., & Jones, R.W. (1993). Comparison of classical test theory and item response theory and their applications to test development. Educational Measurement: Issues and Practice, 12(3), 38-47.

Harris, D. (1989). Comparison of 1-, 2-, and 3-parameter IRT models. Educational Measurement: Issues and Practice, 8(1), 35-41.

Houck, M.R., & Thomas, G.S. (1991). Training evaluation of the F-15 advanced air combat simulation (ADA241675). Dayton, OH: Human Resources Directorate, Aircrew Training Research Division.

Kok, S.Y. (2011). Developing children's cognitive functions and increasing learning effectiveness: An intervention using the Bright Start cognitive curriculum for young children. Durham theses, Durham University. Available at Durham E-Theses Online: http://etheses.dur.ac.uk/625/.

Luckin, R. (2010). Re-designing learning contexts: Technology-rich, learner-centred ecologies. London: Routledge.

Roman, P.A., & Brown, D.G. (2008). Games – just how serious are they? Interservice/Industry Training, Simulation and Education Conference (I/ITSEC), Paper No. 8013, December 2008, Orlando, FL.

Salas, E., & Cannon-Bowers, J.A. (2001). The science of training: A decade of progress. Annual Review of Psychology, 52, 471-499.

Schatz, S., Champney, R., Lackey, S., Oaks, C., & Dunne, R. (2011). Evaluating training effectiveness: Instructional support software from squads to schoolhouses. In Proceedings of the Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) Annual Meeting. Orlando, FL.

Stone, R.J. (2012). Human factors guidance for designers of interactive 3D and games-based training systems. Birmingham, UK: Human Factors Integration Defence Technology Centre, University of Birmingham.

Stottler, R.H., Jensen, R., Pike, B., & Bingman, R. (2002). Adding an intelligent tutoring system to an existing training simulation. Proceedings of the 2002 Interservice/Industry Training, Simulation and Education Conference.

Vygotsky, L.S. (1934/1986). The development of scientific concepts in childhood. In A. Kozulin (Ed.), Thought and language. Cambridge, MA: MIT Press.

Vygotsky, L.S. (1978). Mind in society. Cambridge, MA: Harvard University Press.

Weiss, D.J., & Yoes, M.E. (1991). Item response theory. In R.K. Hambleton & J.N. Zaal (Eds.), Advances in educational and psychological testing: Theory and applications (pp. 69-95). New York, NY: Kluwer Academic/Plenum Publishers.