Simulation-Based Training: Applying lessons learned in aviation to surface transportation modes
Submitted by: Beth Blickensderfer, Ph.D., Dahai Liu, Ph.D., and Angelica Hernandez, Embry-Riddle Aeronautical University
June 30, 2005
This effort is the final deliverable under the project – Simulation-based training: Applying lessons learned in aviation to surface transportation modes.
Executive Summary

After reviewing the literature regarding simulation for aviation training and the literature on the use of simulation in surface transportation, a number of lessons learned become apparent.

Lesson Learned 1: Simulation has been proven to be an effective educational and instructional tool. In tests of flight simulator training effectiveness, trainees develop knowledge and skills in simulated systems as well as they do in the actual systems (Hays, Jacobs, Prince, & Salas, 1992). The simulator is an excellent classroom, as the learner is able to make mistakes and learn from them (Duncan & Feterle, 2000). The instructor is allowed to focus on teaching and not operating the vehicle. Additionally, many simulators have the capability to collect performance measures during the training scenarios that can help assess competencies and deficiencies. Less research has examined the effectiveness of training via simulation in the surface transportation domain than in aviation, yet considerable research support has appeared. It is likely that the results regarding simulator effectiveness for aviation training will generalize to the surface transportation domain, but simulation must be used wisely. Users should consider the competencies needed to perform the task and the capabilities of the simulator. Not all simulators are appropriate for training all competencies. Furthermore, not all competencies require simulation for effective training.

Lesson Learned 2: Simulators increase safety and reduce training costs. As noted in our review of the aviation literature, two main benefits of using simulation for training are increased safety during training and reduced training costs. In terms of safety, using simulators for training enables individuals to practice in conditions that would be too dangerous to practice in actual situations (for example, aircraft engine failures, accidents, and other emergencies). The same is true for driver training and will likely be a major benefit of using simulation in surface transportation training. Regarding cost, aviation simulation saves aircraft fuel, reduces aircraft maintenance costs, and keeps aircraft available for revenue-producing activities. In the case of automobiles, buses, and trucks, although training via simulation would conserve fuel, the cost savings are most likely not as great as they are in aviation. Indeed, considerable driver training can occur in the actual vehicles at low cost. Training train operators, on the other hand, may yield significant cost savings as well as simpler logistics than training in actual trains.

A benefit related to both safety and cost is that simulation can be used to give trainees experience with unusual events. Unusual events are just that—unusual. Despite their rare occurrence, they can prove deadly in aviation as well as in surface transportation. Simulation offers drivers the opportunity to experience these events and learn how to perform effectively in these unusual situations (Down, Petford, & McHale, 1982). Consider driver training. Driving around in the real world, the driver may not encounter many, if any, hazardous or emergency situations. Using simulation, the scenario can be scripted to include a variety of hazards and emergencies. Thus, not only will simulation
training give driver trainees the opportunity to master the knowledge and skills necessary to perform effectively in hazardous situations, but it will also do so in a safe environment.

Lesson Learned 3: Simulation alone does not equal training. Simulation is a tool for trainers to use (Salas, Bowers, & Rhodenizer, 1998). Simply experiencing a simulated environment is not effective training (Salas et al., 1998). Simulation must be used in a thoughtful, well-planned manner that includes identification of training needs, proper design of scenarios, appropriate performance measurement, and feedback to the learner (Oser et al., 1999). The same principles apply in surface transportation as well (Uhr et al., 2003).

Lesson Learned 4: Simulation is one variable in the "big picture" of training effectiveness. Training effectiveness is a complex problem (Cannon-Bowers et al., 1995; Colquitt et al., 2000; Baldwin & Ford, 1998). Training method (e.g., use of simulation) is one variable involved. Numerous other variables also exist, including trainee characteristics, work environment characteristics, and the transfer environment. Simulation training will not solve every training challenge for any domain.

Lesson Learned 5: The Scenario Based Training model (Oser et al., 1999) is one method to ensure simulation is used appropriately. Aviation training researchers advocate the scenario based training model as a way to use simulation effectively. While a few papers have appeared in the surface transportation training literature regarding effective use of simulation (Uhr et al., 2003; Nagata & Kuriyama, 1983; Walker & Bailey, 2002; Down et al., 1982), limited advice exists regarding how to use simulation effectively in this domain. Fortunately, the basic principles of the Oser et al. (1999) model apply to surface transportation and, if advocated in the surface field, can help instructors to use driving simulator systems most effectively. The Oser et al. approach is based on basic principles of learning. This approach guides training designers to 1) identify the task/mission and the knowledge, skills, and abilities involved; 2) design scenarios to include events which allow the trainee to develop and practice the specific knowledge, skills, and abilities identified; 3) design performance measures to enable the trainer to assess performance; and 4) ensure specific feedback is given to the trainee.

Lesson Learned 6: Effective human performance measurement is crucial both for simulation validation and for assessing skill development. As new simulators are developed, validation must occur. Validation should occur not only from the engineering/system performance standpoint but also from the human performance perspective (Hays & Singer, 1989). For example, when examining whether performance in a simulator equals performance in the real-world task, accurate, reliable human performance measures are essential to understand the human interactions with the system. Without such measures, it will be impossible to quantify training transfer. Both objective and subjective measurement approaches exist. Careful time and attention should be paid to developing and selecting the appropriate measures to ensure a well-rounded assessment of skills.
Lesson Learned 7: Simulation fidelity is an important concept that needs to be understood. Simulation fidelity is the degree to which a device can replicate the actual environment, or how "real" the simulation appears and feels (Alessi, 1998; Gross et al., 1999). Simulation fidelity is composed of a number of dimensions, including psychological and cognitive factors as well as the more obvious physical factors (e.g., visual, auditory, motion, etc.). Numerous researchers are devoted to studying fidelity issues regarding aviation training, such as how to define fidelity, fidelity dimensions, measuring fidelity, and the relationship between fidelity and training effectiveness, yet questions still remain. In terms of surface transportation, limited study exists on the relationship between fidelity and performance in surface transportation tasks. For comparable skills (e.g., control vs. perceptual vs. decision making), it is expected that the findings from fidelity research in aviation should generalize to surface transportation. However, surface transportation researchers should use these findings as a springboard for their own domain-specific research.

Lesson Learned 8: The relationship between simulation fidelity and training effectiveness is not a positive, linear relationship. The simulation industry pushes for higher and higher levels of physical fidelity. Indeed, as simulation technology continues to evolve, simulations come ever closer to being exact replicas of the real-world environment. At the current time, high fidelity translates into high financial cost, and many questions remain regarding the cost-benefit trade-offs of using high physical fidelity simulations for aviation training. Research indicates that high fidelity is not necessary to train certain skills (Jentsch & Bowers, 1998; Koonce & Bramble, 1998). In terms of surface vehicle driver training, training control tasks such as braking will require a high level of physical fidelity. On the other hand, it is likely that for other skills (e.g., risk assessment training), a lower level of physical fidelity will be adequate (Fisher et al., 2002). Thus, an expensive, high fidelity simulator is not always required to fulfill training needs. However, more research is needed to identify the exact relationship between fidelity and training effectiveness.

Lesson Learned 9: Motion fidelity is not always necessary. Motion fidelity is the extent to which a simulator replicates the motion cues actually felt during flight (Kaiser & Schroeder, 2003). In terms of aviation, motion appears to add very little to training effectiveness (Garrison, 1985; Ray, 1996). While it is likely that these results generalize to surface transportation to some degree, the knowledge and skills required for effective driving differ somewhat from aviation (e.g., consider driving a vehicle over bumpy terrain), and motion is likely needed to train certain skills. Thus, additional domain-specific research is needed.

Lesson Learned 10: Establishing a standard classification system for different types of simulations can facilitate collaboration within the simulation industry. In aviation, levels of simulation are specifically defined, with certifications and regulations regarding the fidelity necessary for training certain skills (e.g., Level A, Level B, and Level C).
Using a classification system of this nature has provided industry and academia with common terminology to use in simulation design and evaluation (i.e., everyone is using the same terminology to refer to the same concepts). In comparing the current
simulation work in aviation to that of surface transportation, aviation has specific standards, but a simulation classification schema is not apparent in the surface transportation industry.

Lesson Learned 11: Many opportunities to use simulation exist—be creative! The aviation industry has moved beyond using simulation only for pilot training to also using it to train air traffic controllers. In addition, simulations are helping to design airport layouts, assess traffic problems, and teach ground workers airport navigation. In surface transportation, researchers have begun to use simulation to assess road design (Godley et al., 1997). Indeed, the use of simulation seems limited only by our imagination.

As is the case in aviation training, the surface simulation industry faces great challenges and also opportunities to make the roads safer, more efficient, and more enjoyable. As technology develops, many driving operations become easier and require less effort. Unfortunately, these same technologies can introduce new opportunities for human error. The simulation industry must stay abreast of technological advances to produce up-to-date, effective training in a cost-efficient manner. Many of the questions we posed in this report cannot be answered in a simple sentence, nor will the answers come overnight. Instead, continued, fundamental research remains the key to understanding the human interaction with the vehicle.
Table of Contents

Executive Summary
Section I: Simulation for Aviation Training
    Simulation and Aviation
    The History of Flight Simulation and Simulator Technology
    Simulation and Flight Training
    Standards and Regulations
    Simulation in Air Traffic Control
    Simulation Fidelity
    Scenario Based Training
        The SBT Cycle
        Previous Uses of SBT
        Example: Using SBT in General Aviation
    Training Effectiveness: An Overview
    Transfer of Training: Terms and Concepts
        A Model of Factors Affecting the Transfer of Training
    Research Methods
        Human Performance Measurement
            Objective Measures
            Subjective Measures
        How to Choose Measures
        Experimental Design
            Forward Transfer
            Backward Transfer Experiment
            Quasi-Transfer of Training
    Conclusions and Future Direction
    Bibliography
Section II: Use of Simulation for Surface Transportation
    Review of Current Driver's Education and Training
    Modeling Driving Behavior
    Training Effectiveness and Goals
    Simulators, Research, and Classifications
    Driving Psychology
        Attention Deficits and the Distracted Driver
        Decision Making and Scenario-based Training
        Motivation
        Cognition and Perception
        Driver Types and Experience
    Driving Technology
    Road and Environment Design
    Bikes, Trucks, Trains and Professional Driving
    Appendix: Extensive Reviews of Selected Surface Transportation Articles
Section III: Lessons Learned and Recommendations for Future Research
    Lessons Learned
    Potential Research Topics
    Bibliography
Section I Simulation for Aviation Training: A Review
Submitted by: Beth Blickensderfer, Ph.D., Dahai Liu, Ph.D., and Angelica Hernandez, Embry-Riddle Aeronautical University
May 13, 2005
This effort was the first deliverable under the project – Simulation-based Training: Applying lessons learned in aviation to surface transportation modes.
Simulation and Aviation

Since the conception of manned flight, pilot training has been an important issue. Flying is a difficult, complex task that requires considerable attentional resources from the pilot as well as lengthy training (Wickens, Lee, Liu & Becker, 2004). Flight training devices have long been a significant resource for minimizing both the costs and the dangers associated with learning to fly. The use of these devices has also supported training programs in other areas of aviation, such as air traffic management. Utilizing the latest computers and technology, simulation has also contributed significantly to training research by providing a controlled environment where experiments may be conducted to determine the effectiveness of various training methods.

Simulation is defined as the representation of components of an environment through some medium in order to achieve an objective (Hays & Singer, 1989). In the case of flight training, the objective is to use the simulator to allow students to practice the fundamentals of flying an aircraft without flying an actual aircraft. Aircraft simulators range from desktop computers to full-size replicas of specific makes, models, and series of airplane cockpits; the latter include all flight management software and hardware as well as an out-of-the-cockpit view of the environment (FAA, 1991). The Federal Aviation Administration (FAA) is the chief authority for certifying these devices for training purposes in the United States.

This report will cover the scope of simulation as it has been used in the aviation field. First, the history of simulators and Flight Training Devices (FTDs) is reviewed, followed by the benefits realized in both flight education and air traffic management. Next, how simulation fits into a training program (i.e., scenario-based training) is discussed. Finally, basic principles of training effectiveness are discussed, along with how simulation fits into training effectiveness as a whole.

The History of Flight Simulation and Simulator Technology

The evolution of flight simulators has been primarily technologically driven and spans over 80 years. The humble start of flight training devices can be traced back to the Wright Brothers. Along with developing their aircraft, they also devised means to train students to fly it (Lansdaal, Lewis & Bezdek, 2004; Caro, 1988). They placed students in aircraft center sections set on mounts. The instructor or another student would move the section about to demonstrate how the aircraft moved in relation to the controls. Other simulators placed aircraft sections on pylons on top of hills or on windy sections of fields. The wind would blow over the control surfaces and the device would react much like an airplane in flight (Koonce & Bramble, 1998). Other than these examples of training devices, pilot training was conducted by trial and error in real aircraft or replicas. Piloting skills were basically self-taught, and instructors played a minimal role (Caro, 1988).

Edwin Link created the first true aircraft simulator around 1929. At the age of 23, Link was working in his father's organ and piano factory in Binghamton, New York. There he acquired knowledge about pumps, valves, and bellows, which he would later utilize in building his flight trainer ("Ed Link – Father," 2004). His passion for
aviation led him to develop a ground-based flight trainer that would train pilots in instrument-based flight. Link received a patent for the Link Aeronautical Trainer, which is also known as the "Blue Box" trainer. This trainer used organ pumps and bellows to move around and looked very much like a miniature plane with stubby wings and tail (Bezdek, Mays & Powel, 2004). Later upgrades included control and flight instruments. It was capable of simulating roll, pitch, yaw, and turbulence. The trainer also provided a setting to practice navigation and instrument flight, and it could also show pilots how to fly at night and during bad weather (Thomas, 2004). The device caught the eye of the US Army, which had been looking for a new method to train its pilots who flew airmail missions. During the early years of World War II, the airmail service had been suffering from large numbers of casualties. The Army hoped that improved training would reduce the number of accidents (Bezdek, Mays & Powel, 2004). The Link trainers were used in a variety of military flight training programs through the end of WWII. The Link Aviation company went on to build additional simulators and continues to do so to this day as part of L-3 Communications ("Ed Link - Father," 2004).

The first rudimentary trainers achieved fairly low fidelity (i.e., an imperfect replica of the physical environment; see the section on fidelity later in this report). A main drawback of these early trainers was that they lacked displays that would show the outside world. Hence the early simulators could only be used for instrument training. Some of the early trainers, like the Link trainer, tried to solve this by painting horizontal lines on the walls that the simulator faced. Other programs painted scenery on the walls to simulate the environment the plane flew through. These scenes were called "cycloramas" (Thomas, 2004).

As research on human learning progressed, the inclusion of visual cues and images became a more prominent issue. Link Aviation produced the first simulator with a basic visual system: the Celestial Navigation Trainer. This trainer was built to train aircrew to fly night missions over the Atlantic using only the stars as a guide (Thomas, 2004). The stars were on a roll of film that would move according to how the airplane should be moving in space. The biggest drawback of this trainer was that it could only move along one flight path (Thomas, 2004).

Another pioneer in early simulation was Rudy Frasca. Frasca was the first to argue that a simulator did not need motion cues in order to provide adequate pilot training (Garrison, 1985). His simulators were built with instruments that were able to exhibit appropriate readings in response to control inputs from the pilot. Frasca argued that trainers with tilting landscapes (e.g., Flightmatic) gave the impression of motion even if there was none (Garrison, 1985). Today, Frasca International flight simulators have developed a reputation for realism, reliability, and affordability and have become a household name in the industry.

By far the biggest milestone in flight simulator technology was the advent of computers and video technology. With the addition of these technologies, the training devices were renamed "simulators." Around 1951, the Massachusetts Institute of Technology developed the first interactive computerized flight simulator (Lewis & Bezdek, 2004). Computers could now plot the course of simulation trainers as they "navigated" along an imaginary flight path.
Additionally, television made dynamic imaging systems possible. The pilot could now look out of the fictitious cockpit, see visual scenes, and believe that s/he was flying around in actual space. Early on, these scenes were shown on Cathode-Ray Tube (CRT) displays. Later, the scenes were projected on large screens or walls to provide a wider panoramic view (Thomas, 2004). To provide scenes, training facilities created large, detailed model boards that were miniature recreations of the terrain around an airport. A camera, directed by the simulator, would "fly" around the board just like an airplane in space. These large model boards provided very good resolution and were used for a variety of training maneuvers, including training for space shuttle missions (Thomas, 2004; Caro, 1988).

Until the latter half of the 20th century, flight-training simulators were considered mostly a novelty. For many years, the military had been the greatest driving force of simulation technology. Citing cost factors and impracticality, many airlines and smaller aviation groups often refused to use them altogether. As more and more evidence of the benefits of simulator training became apparent, the commercial sector also became interested (Lewis & Bezdek, 2004), and during the fuel crisis of the 1970s, simulators became widely used staples in aviation (Garrison, 1985). Indeed, simulators became known as a low-cost alternative for training. Not only did the simulators not require fuel or maintenance time, but they were also readily available for students to use (Garrison, 1985).

In the 1980s, visual systems took another leap forward as more advanced processors and visual programs were developed. The old model boards were massive and difficult to change, and video screens were limited by resolution problems and images that could only be photographed along a preset track (Thomas, 2004; Caro, 1988). CRTs were also not very bright and had small displays, which were often fatiguing for viewers. Along came Computer Generated Imagery (CGI). CGI provided high fidelity pictures of the ground from the perspective of the aircraft. At first these computerized images were low in resolution, but further advances in computers made the images increasingly realistic (Thomas, 2004). Typically, only the highest fidelity simulators were built to resemble the most advanced aircraft, such as the large passenger planes. These simulators also were able to feature the highly complex avionics and flight management technology that was being developed (Garrison, 1985). Motion technology also advanced and included the capability to provide up to six degrees of freedom. The motion technology, however, is generally used only in high fidelity simulation systems. Highly complex military flight trainers were developed to simulate difficult maneuvers encountered by fighter pilots (Thomas, 2004).

Most recently, Head-Up Display (HUD) and Helmet-Mounted Display (HMD) technology has been developed (Thomas, 2004). Computer systems can now project information on a windscreen or a helmet visor. This technology allows the pilot to view the information displayed on a transparent surface (Thomas, 2004). In addition, Virtual Reality (VR) technology attempts to provide an immersive experience for the user to the point where the user ceases to perceive the natural environment and instead experiences the computer-generated environment as the actual environment (Logan, 1995).
This virtual world generally includes the use of HMDs, stereophonic headphones and motion sensors. The user becomes fully immersed in the
experience and is also able to interact with the environment (Logan, 1995). Table 1 provides a list of other VR-related technologies being developed.

Table 1. Virtual Reality Technologies

Hi/Full Fidelity Simulation (mockups, simulators): simulates a workspace with working controls and dials
Telepresence (video): the user conducts events from different locations
Augmented Reality (text, clear displays): information is overlaid onto a display
Desktop-VR (LCD or monitor): 3D graphics are displayed on a screen
Bar Code Hotel/CAVE (LCD shutter glasses): visuals are projected onto a wall or screen that the user can interact with
Projected Reality (video and projector): user actions are reflected onto a screen with a mix of video and computer graphics

(Logan, 1995)
Flight simulation technology has also increased in flexibility and affordability. As mentioned before, computer technology enabled simulators to virtually replicate the actual cockpit environment. Additionally, computers have enabled flight simulators to be run on conventional computer platforms at a low cost (Koonce & Bramble, 1998). Hence, flight simulator software can be purchased by any ordinary citizen and run on a home computer. For example, Microsoft Flight Simulator is a desktop simulation program. This and other software, along with realistic control joysticks and pedals, provide potential pilots the opportunity to fly different types of aircraft over different types of terrain (Koonce & Bramble, 1998). Although the first desktop flight simulator programs were marketed as games, their use as training devices has also been of interest (Koonce & Bramble, 1998; Stout, Salas, Merket, & Bowers, 1998; Bowers, Salas, Prince & Brannick, 1992).

The history of flight training devices and simulators is deeply entwined with that of flight and flight training programs. Simulators have evolved from simple airplane sections, to replicas and model boards, and finally to fully automated, complex machines with exemplary visual projection systems. More than technological toys for pilots to play in, simulators have benefited training programs in many ways. Next, the role of simulators in aviation training will be explored in greater depth.

Simulation and Flight Training

The primary task of a pilot is to navigate in three-dimensional space as well as to maintain a flight path, follow procedures, and be alert for problems. A traditional flight education is conducted mainly in real aircraft with an instructor pilot guiding the students
along (Garrison, 1985; Wickens, Lee, Liu & Becker, 2004). Classroom instruction and use of simulators also occur, but these are more prevalent in advanced skill training programs. Flight training simulators have proven to be effective educational and instructional tools (Hays, Jacobs, Prince & Salas, 1992). Two main benefits of their use are safety and cost.

Simulators increase safety because they can train pilots under conditions that would be too dangerous to train in actual instruction (Duncan & Feterle, 2000). For example, pilots who need to develop the knowledge and skills necessary to fly during stormy weather no longer need to wait for a cloudy day; they can practice in simulated conditions in the Flight Training Device (FTD) with no threat to themselves or the aircraft. The simulator is an excellent classroom because the student is able to make mistakes and learn from them (Duncan & Feterle, 2000; Garrison, 1985). The instructor pilot also is allowed to focus on teaching and not flying the aircraft. Additionally, many simulators have the capability to collect data during the training scenarios that can be reviewed later by the student and instructor.

Along with the safety-related benefits, simulators also result in cost savings for training facilities. Typically, considerable funds are spent on maintaining, fueling, and operating aircraft for training purposes (Caro, 1988). Simulators, on the other hand, reduce costs by providing a training tool that can be used, at minimal operational cost, at any time of day under any conditions. Since simulators are not exposed to environmental elements, they often require very little maintenance and upgrading (Caro, 1988; Garrison, 1985). While maintenance costs may increase as the complexity and fidelity of the simulator increase, not all flight simulators need to be highly complex. Current training facilities are looking towards inexpensive simulator packages to instruct basic beginner skills (Pohlman & Edwards, 1983). Finally, when an aircraft is grounded (not operational), it interferes with training schedules. This type of delay increases training costs. Using simulators avoids this type of expense.

In addition to practicing piloting ("stick and rudder") skills in a simulator, pilots and crew can also practice other flight-related skills. Pohlman and Edwards (1993) studied the usefulness of desktop simulators for training procedural tasks. In their studies, students learned how to program a weapons system on an F-16 faster in a simulator than in a conventional training setting. Simulators, more specifically centrifuges, have also proven helpful in training pilots to sustain excessive gravitational (G) forces (Smit & Van Patten, 1992).

Crew coordination and decision-making training are two other important types of skills practiced in simulators. The simulator is a great environment for this type of training because high physical fidelity is not needed to produce effective results (Connolly, Blackwell & Lester, 1989; Duncan & Feterle, 2000). Cockpit Resource Management (CRM) training has been a vital safety concern in most airlines (Caro, 1988). This type of training focuses on how aircrews work together and interact with one another, as well as how to make effective use of all resources available to them (Duncan & Feterle, 2000).
CRM programs have been implemented successfully via simulator-based training programs (Stout et al., 1998), and simulation-based training has also been effective in the initial stages of pilot training (Duncan & Feterle, 2000).
Finally, simulation has begun to be used to train pilots of Unmanned Aerial Vehicles (UAVs). UAVs are an increasingly visible component of military operations. In recent military operations, they have accomplished surveillance, reconnaissance, and tactical munitions deployment (Smith & Smith, 2002). From a station far away from these drones, a pilot is in charge of maneuvering the vehicle. These "remote pilots" must acquire some of the same flight and maneuvering skills that traditional pilots possess. A few differences in training needs exist between UAV pilots and pilots of manned aircraft. Pilots of manned aircraft can "fly by the seat of their pants," whereas remote pilots, who are never inside the actual UAV, cannot rely on motion cues and must rely on other information (Smith & Smith, 2000). Currently, initial UAV pilot training is accomplished using small remote-controlled airplanes. Next, pilots learn how to control small-scale UAVs before finally practicing with full-scale UAV drones (Smith & Smith, 2000). At each level of training, the remote pilots are often forced to learn and relearn the flight dynamics of the individual UAVs. This learning and relearning process is counterproductive and may lengthen the time and costs of the programs. UAV equipment is also very expensive and hard to replace. Using UAVs for initial training may become unfeasible for some programs because of the risk of damage and limited availability. Smith and Smith (2000) report on a UAV simulation-based training device currently in development by Smith Enterprises. The Virtual Reality Flight Trainer (VRFT) provides hands-on experience for remote pilots and builds the skills necessary to transition to full-scale UAVs. The VRFT and similar products may compensate for the training limitations of current remote pilot training programs; they will also be less costly and more easily available for UAV pilots in training (Smith & Smith, 2000).

Standards and Regulations

The FAA has served as a regulatory force behind the use of simulators in aviation training. The administration has provided guidelines for training programs as well as standards that individual simulators must meet in order for pilots to receive credit for their training (FAA, 1991). Each simulator is classified into one of a series of levels (A through D, and Level 6), which represent the complexity of the simulation systems. Not all simulators are required to have the highest and most advanced technologies; each training program may acquire and certify simulators at different levels to suit its training needs. It should be noted that the level classification is not a continuum; that is, a Level A simulator does not become a Level D simulator over time. The defining characteristic is that simulators at all levels are able to replicate certain aspects of an aircraft, and that training provided in the simulator is effective (FAA, 1991). Certification of simulators not only establishes their effectiveness but also allows students to receive credit towards their pilot license and/or aircraft rating. Table 2 describes the different levels according to FAA regulations; this information is also found in Federal Aviation Regulations Part 121 (Title 14, Appendix H) as well as Advisory Circular AC 120-40B. In addition to the requirements listed in Table 2, simulators may be adjusted and configured for different kinds of training programs.
Note that visual requirements are also mandated for both Level C and D simulators. These simulators are high in fidelity and
must be able to simulate all weather conditions and various day or night conditions. This includes appropriate coloring, appropriate lighting, and properly oriented displays (FAA, 2005). The simulators used for flight training in foreign countries are usually held to the U.S. standards and regulations. The International Civil Aviation Organization (ICAO) has also suggested international standards. Taking its cue from the FAA, the ICAO published the Manual of Criteria for the Qualification of Flight Simulators, which provides standards for simulators used by the countries that abide by ICAO standards (Ray, 2000).

The FAA has also started to investigate the use of Personal Computer-based Aircraft Training Devices (PCATDs) for flight training. Currently, only a few select software packages have been approved for training purposes (Hampton & Moroney, 1998). Programs such as Jeppesen's FS 200 and programs by Aviation Teachwares Elite have been certified by the FAA and have been shown to provide adequate training for beginner students. These software programs provide even more cost-effective alternatives to the large, complex, expensive simulators (Hampton & Moroney, 1998). Certification guidelines for these programs are contained in Advisory Circular 140-45A. These low fidelity simulators have also been used to provide training on other flight tasks such as ground operations, decision-making, and crew resource management (Hampton & Moroney, 1998). Industry and training facilities may gain from cooperative efforts in building PCATDs. Flight Safety International has partnered with Microsoft (maker of the popular flight simulator program Microsoft Flight Simulator) to develop training products to teach basic flight procedures and ATC communications (Hampton & Moroney, 1998). Although PCATDs have just begun to receive attention from researchers and the aviation industry, more research is needed to guide their development and design.

In addition to teaching students how to operate an aircraft, simulators are used to train other personnel involved in aviation-related domains. For example, air traffic controllers have also benefited from the use of simulators. This will be discussed next.
Table 2. FAA Simulator Levels

Level A
  Training requirements: restricted instruction in specific maneuvers and landings; Line Oriented Flight Training (LOFT)
  Simulator requirements: not all components need to be operational; 3-degree-of-freedom motion system

Level B
  Training requirements: training and checking permitted; night take off and landing training; landing proficiency checks
  Simulator requirements: aerodynamics modeling; simulation of ground handling and effects; 3-degree-of-freedom motion system; performance assessment software

Level C
  Training requirements: transitional training from one aircraft to another; initial and upgrade pilot-in-command training; initial and upgrade second-in-command training
  Simulator requirements: must represent windy conditions in three dimensions, stopping forces for wet or icy conditions, and braking failures; 6-degree-of-freedom motion system or better; adequate aircraft noises; minimal lag times; expanded computer capacity, accuracy, and resolution; achieves all previous level requirements

Level D
  Training requirements: extended training not covered in Level C
  Simulator requirements: advanced motion systems; aerodynamic modeling (low altitude flight, improper attitudes, accident scenarios); realistic noises; operational radar; achieves all previous level requirements

Level 6
  Requirements: must satisfy Level A requirements; non-visual simulator

(FAA, 1991; FAA, 2005)
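The level structure above can also be expressed as a simple lookup. The following Python sketch is illustrative only: the level names and training credits are paraphrased from Table 2, and the covers() helper is invented for this example rather than drawn from any FAA tool or regulation.

```python
# Illustrative sketch only: the level names and training credits below are
# paraphrased from Table 2, not drawn from an official FAA data source.

TRAINING_CREDITS = {
    "Level A": {"restricted maneuver instruction", "line oriented flight training"},
    "Level B": {"training and checking", "night takeoff and landing",
                "landing proficiency checks"},
    "Level C": {"transition training", "initial/upgrade pilot in command",
                "initial/upgrade second in command"},
    "Level D": {"extended training beyond Level C"},
    "Level 6": {"non-visual training"},
}

def covers(level: str, activity: str) -> bool:
    """Return True if the table credits the given simulator level for the activity."""
    return activity in TRAINING_CREDITS.get(level, set())

if __name__ == "__main__":
    print(covers("Level B", "night takeoff and landing"))  # True
    print(covers("Level A", "night takeoff and landing"))  # False
```

A training department could use a mapping of this kind to check, during curriculum planning, whether a candidate device level is credited for a planned activity; the sketch deliberately does not model cumulative credit across levels, since the table does not state it.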
Simulation in Air Traffic Control

The use of simulation has also expanded to include air traffic management (ATM) training. The benefits of using simulators for ATM are similar to those for flight training: lower operating costs, increased safety, and decreased training time in the operational environment (Bruce, 1991). Traditional ATM training begins at an FAA-approved education program where students must pass pre-employment tests that measure their abilities to learn controller duties. If they are successful, the students go to the FAA Academy in Oklahoma City, where they receive their preliminary training to become controllers (Bruce, 1991). Trainee controllers do not, however, receive facility-specific training until they are assigned to a particular location. Once assigned to a specific facility, controllers begin learning about the procedures and airspace of that region (Bruce, 1991).

There are two major concerns with traditional ATM training methods. One problem relates to the amount of time and effort it takes to train a controller. It takes about three years to fully certify a controller (Bruce, 1991). With increases in demand for certified controllers expected in the future, this long training period may lead to staffing shortages. Additionally, if drastic changes are made to the airspace and associated procedures, retraining can take considerable time (Bruce, 1991). The other concern with ATM training programs is the use of antiquated technology. Current ATM workstations employ 30-year-old technology. Not surprisingly, the training programs for these workstations use training methods and technologies that are just as old. The computer-based ATM training devices in use are often poorly designed, do not motivate the students to excel, and generally do not employ current guidelines regarding instruction (Bruce, 1991). Limited use of simulators and computer-based trainers occurs. Overall, training facilities have yet to model flight-training programs and expand their use of simulation.

Simulation has entered the air traffic management field on a limited scale, however. Air traffic simulators range from small software packages that are easily installed on a personal computer to large-scale full simulations of entire towers (Bruce, 1991; Jones & Grandillo, 1996). Smaller systems, such as the Canadian Airspace Management Simulator (CAMSIM), provide radar-like views of the airspace (Jones & Grandillo, 1996). Larger full-scale simulators, equipped with large window-like screens that show the view from the actual tower, replicate the control tower environment almost exactly (Jensesn, 200s; Howard-Jones, 1999). Controllers view simulations of various scenarios involving different amounts of airspace saturation, weather differences, and changes in airport procedures (De Repentigny, 2003). In addition to depicting traffic in the air, simulators also can depict ground-based events. For example, consider runway incursions. Runway incursions are accidents or potential accidents involving violations of separation between airplanes and other objects such as ground vehicles and other aircraft (De Repentigny, 2003). Since this has become a very important safety concern, ATM training facilities have looked to simulation to provide additional training regarding this matter. Finally, simulators have also been used to validate design changes to airport layouts and to address current traffic problems.
Just as with flight simulators, ATM simulators have also proven to provide numerous research possibilities. Simulation enables researchers to apply experimental controls and, thus, to test and validate new ATM concepts (Heesbeen, Hoekstra & Clari, 2003). For instance, the National Aerospace Laboratory developed a desktop simulation tool, named "Traffic Manager," suited for most ATM concept research and development. This program allows individual experiments to be conducted as well as allowing other simulators to be networked together in order to study traffic pattern training (Heesbeen, Hoekstra & Clari, 2003). This last benefit has also become a feature of full-scale ATM simulators.

In summary, flight simulator technology has advanced in leaps and bounds over many decades. Today, flight simulation has become an integral part of flight training. The FAA realized the potential that these simulators can bring to flight training and has certified these devices for the training of pilots for commercial aviation. Indeed, simulators have become a cost-effective addition to aviation training. However, the extent to which simulators can replace traditional aviation training programs is still unclear. More research is needed. Also, simulation-based training may be an underutilized tool that could improve the already long and labor-intensive training program for air traffic controllers. Again, more research is needed. Overall, additional research could reveal the actual savings that simulation can produce, as well as resolve issues such as the level of fidelity necessary or the role of motion. The next part of this report will define and discuss simulation fidelity.

Simulation Fidelity

Simulation fidelity is an umbrella term defined as the extent to which the simulation replicates the actual environment (Gross et al., 1999; Alessi, 1998). In aviation, simulation fidelity refers to the extent to which a flight training device looks, sounds, responds, and maneuvers like a real, live aircraft. Many researchers have offered definitions of fidelity. Table 3 lists a number of different aspects of fidelity and how they relate to each other.

As shown in Table 3, physical fidelity is the degree to which the simulator physically resembles the aircraft (Allen, 1986). Not to be confused with the broader term of simulation fidelity, this term specifically concerns the physical properties of the simulation experience. To be considered high in physical fidelity, the simulator must have high visual-audio fidelity: the look, sound, feel, and, in some cases, smell of the real aircraft. This can be accomplished through high-tech visualization and projection systems that can depict computerized out-of-cockpit views of the airspace, as well as audio systems that can project sound in three-dimensional space. As mentioned, higher fidelity systems may not always be needed to produce effective training results, but these elements can also be isolated to explore other research issues (Pohlman & Edwards, 1983). For example, studies may be conducted on how the smell of the aircraft may or may not affect performance. In addition to high visual-audio fidelity, high physical fidelity requires proper equipment fidelity. Equipment fidelity refers to the extent to which a simulator can emulate or replicate the equipment being used, which includes all software and hardware components of the system (Zhang, 1993). Occasionally it may
become unfeasible or costly to use actual equipment for the simulation; in this case, a replica or substitute may be used instead. It is important, however, to maintain a certain degree of equipment fidelity for training purposes. Reed (1996) encountered this problem when trying to simulate an aerial gunner station of a helicopter. During the simulation, the actual weapons system appeared to interfere with the simulation equipment and therefore had to be replaced by a replica. Using similarly shaped and weighted equipment in combination with parts of the actual weapons system, Reed (1996) was able to preserve some equipment fidelity without endangering the training effectiveness.

Table 3. Fidelity Definitions

Simulation fidelity (Gross et al., 1999; Alessi, 1998): degree to which the device can replicate the actual environment, or how "real" the simulation appears and feels
Psychological-cognitive fidelity (Kaiser & Schroeder, 2003): degree to which the device replicates psychological and cognitive factors (e.g., communication, situational awareness)
Task fidelity (Zhang, 1993; Roza, 2000; Hughes & Rolek, 2002): replication of the tasks and maneuvers executed by the user
Functional fidelity (Allen, 1986): how the device functions, works, and provides the same stimuli as the actual environment
Physical fidelity (Allen, 1986): degree to which the device looks, sounds, and feels like the actual environment
Visual-audio fidelity (Rinalducci, 1996): replication of visual and auditory stimuli
Equipment fidelity (Zhang, 1993): replication of actual equipment hardware and software
Motion fidelity (Kaiser & Schroeder, 2003): replication of motion cues felt in the actual environment
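Because fidelity has several distinct dimensions, it can be useful to think of a device's fidelity as a profile rather than a single score. The short Python sketch below is illustrative only: the dimension names follow Table 3, but the FidelityProfile class, the 0-to-1 rating scale, and the example values are assumptions made for illustration, not a published metric.

```python
# Illustrative sketch: the dimension names follow Table 3; the 0-to-1 rating
# scale and the example values are assumptions made for illustration only.
from dataclasses import dataclass

@dataclass
class FidelityProfile:
    visual_audio: float             # replication of visual and auditory stimuli
    equipment: float                # replication of hardware and software
    motion: float                   # replication of motion cues
    task: float                     # replication of tasks and maneuvers
    functional: float               # how the device responds to user inputs
    psychological_cognitive: float  # workload, stress, attention demands

    def physical(self) -> float:
        """Rough summary of the physical dimensions (visual-audio, equipment, motion)."""
        return (self.visual_audio + self.equipment + self.motion) / 3

# A desktop trainer might rate high on task and functional fidelity but low on motion.
desktop = FidelityProfile(visual_audio=0.6, equipment=0.4, motion=0.0,
                          task=0.8, functional=0.8, psychological_cognitive=0.5)
print(f"physical fidelity (rough average): {desktop.physical():.2f}")
```

The point of the profile view is simply that a device can be "high fidelity" on some dimensions and "low fidelity" on others, which is exactly the distinction the fidelity research discussed in this section turns on.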
Another aspect of physical fidelity is motion. Motion fidelity is the extent to which a simulator replicates the motion cues felt during flight (Kaiser & Schroeder, 2003). Current technology has not been able to produce full motion capabilities in simulators; however, simulators can produce anywhere from three to six degrees of freedom. These motions provide pilots cues about the roll, pitch, and yaw of the aircraft. Motion does have its limitations; for example, the brain can often be tricked into sensing motion where there is none. Motion, it seems, adds very little to overall training effectiveness (Ray, 1996; Garrison, 1985). It is important to investigate which situations may be best suited for the addition of motion cues and/or which tasks require motion cues
in order to be executed properly. For example, military fighter pilots often rely heavily on motion cues to perform complicated maneuvers in jet airplanes (Thomas, 2004).

In addition to physical fidelity, another major factor in overall simulation fidelity is psychological and cognitive fidelity. Psychological and cognitive fidelity is the extent to which psychological and cognitive factors are replicated within the simulation (Kaiser & Schroeder, 2003). It is the level to which the simulation engages the user cognitively in the same manner that the actual equipment would. The cockpit is a demanding environment, requiring the pilot to constantly monitor flight systems, look for failures, and maintain the flight plan (Wickens, Lee, Liu & Becker, 2004). A simulator must require the same attentional resources from the pilot as well as produce appropriate psychological effects such as stress, workload, or interpersonal factors. Research concerned with cognitive and psychological fidelity focuses on those factors that hamper or improve performance. For example, it is known that too much stress may cause critical performance decrements in flight. If, in a simulator, a user experiences symptoms of stress similar to those felt in the operational setting, some psychological fidelity has been achieved.

Task fidelity is the degree to which a simulator replicates the tasks involved in the actual environment (Zhang, 1993; Roza, 2000; Hughes & Rolek, 2002). Flight training devices must act like actual airplanes; therefore, a pilot must be able to "fly" the simulator just as he or she would fly an aircraft. This means that all tasks that have to be executed in an aircraft must be done in the same fashion in the simulator. The extent to which these tasks are simulated may be an issue for future research. Not all tasks may need to be replicated for all training exercises, and some may be isolated to investigate specific problems or issues. For example, when training for cockpit communication, the tasks involving communication must be exactly the same as those in the operational setting. Other tasks (e.g., stick and rudder tasks) do not need to be the same.

Much like task fidelity, functional fidelity is also important. This can be described as how the simulator reacts to the tasks and commands being executed by the pilot (Allen, 1986). Not only does the pilot have to react and fly as he or she would in a real aircraft, but the simulator must also react and maneuver like a real aircraft. This ensures that, for instance, when the pilot pulls back on the yoke, the simulator reacts as if it were pitching up. This fidelity, along with task fidelity, is essential for training effectiveness.

Thus, simulator fidelity is a complex subject that includes a number of factors. It is important to understand that these dimensions are not mutually exclusive; there is a large degree of overlap. Although expensive, high fidelity devices are used for advanced pilot training, research has shown that high fidelity may not be necessary to produce effective training results (Connolly, Blackwell & Lester, 1989; Duncan & Feterle, 2000; Hays et al., 1992). Future research may need to focus on the specific knowledge and skills to be trained to determine exact fidelity requirements. At this point, it is important to understand that while simulation is a training tool, it will not by itself result in effective training.
In other words, an individual will not acquire the necessary knowledge, skills and attitudes simply by experiencing a simulation. It is essential that the simulation experience occur in a structured manner. Used in a structured manner, a simulator becomes part of “simulation-based training” also known as “scenario based training.” This will be discussed next.
Scenario Based Training

The idea underlying scenario-based training (SBT) is to give the learner the opportunity to acquire the knowledge and skills necessary for task performance via a simulated "real-world" operational environment. The required competencies (e.g., operator knowledge, skills, and attitudes) are trained within the scenarios as well as upon completion via instructor feedback (Oser, Cannon-Bowers, Salas, & Dwyer, 1999a). Thus, active learning, extensive practice, and feedback are the cornerstones of SBT (Oser et al., 1999a).

The SBT Cycle

SBT is more than learners (e.g., pilots) simply practicing simulated tasks (e.g., flight). Frequently, by the time SBT begins, the use of lectures, books, and class discussions is complete. Thus, at this point in SBT, the scenario is the curriculum. Because of this, it is essential that SBT offer highly structured training which links learning objectives, exercise design, performance measurement, and feedback (Dwyer, Oser, Salas & Fowlkes, 1999a). As shown in Figure 1, the process begins with the instructor establishing the desired skills to train based on the training objectives (Oser et al., 1999a). Training objectives generally arise from a need for improvement or because unfamiliar equipment is introduced. Based on the required skills, the instructor plans scenario events, which will give the learner the opportunity to practice the desired skills. If planned carefully and correctly, the events ensure that trainees practice all of the targeted skills until all training objectives are met (Cannon-Bowers, Burns, Salas & Pruitt, 1998).
The next step in the process is performance assessment (Oser et al., 1999a). Ideally, criteria are established which link closely to scenario events and assess whether the learner has developed the necessary knowledge and skills. Once the learning objectives, scenario events, and performance criteria are established, it is time for the learner to experience the scenario. During this time, the instructor monitors the learner’s performance and uses the performance criteria to judge the learner’s performance. The final step in the SBT cycle is feedback/debrief. In this step, the instructor gives specific, detailed feedback to the learner. Ideally, the instructor links the feedback to specific events and learning objectives. This helps the learner to interpret the feedback and, in turn, to integrate the feedback into his/her subsequent performance.
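To make the linkage between objectives, events, measures, and feedback concrete, the following Python sketch walks through one pass of the cycle. It is a minimal illustration of the structure described above, not an implementation of the Oser et al. (1999a) method; the Scenario and ScenarioEvent classes and their fields are invented for this example.

```python
# Minimal, illustrative sketch of the SBT cycle described above; the class and
# field names are invented for this example, not taken from Oser et al. (1999a).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScenarioEvent:
    description: str             # e.g., "unforecast icing en route"
    objective: str               # the learning objective the event exercises
    criterion: str               # observable performance criterion for the instructor
    met_criterion: bool = False  # recorded while the trainee flies the scenario

@dataclass
class Scenario:
    title: str
    events: List[ScenarioEvent] = field(default_factory=list)

    def debrief(self) -> List[str]:
        """Return feedback lines tied to specific events and their objectives."""
        notes = []
        for e in self.events:
            outcome = "met" if e.met_criterion else "did not meet"
            notes.append(f"{e.description}: trainee {outcome} criterion "
                         f"'{e.criterion}' (objective: {e.objective})")
        return notes

# One scenario event per targeted skill, then event-linked feedback at debrief.
scenario = Scenario("Night cross-country with weather deviation")
scenario.events.append(ScenarioEvent(
    description="unforecast icing en route",
    objective="in-flight decision making",
    criterion="diverts or changes altitude within two minutes of the icing cue"))
scenario.events[0].met_criterion = True
for line in scenario.debrief():
    print(line)
```

The key design point the sketch illustrates is traceability: every event carries its objective and criterion with it, so the debrief can tie each piece of feedback back to a specific event and learning objective, as the cycle described above requires.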
In sum, the SBT method offers a structured approach, using practice and feedback, to train learners the way they will perform on the job.

Previous Uses of SBT

Scenario based training has been applied in a variety of domains; however, it is difficult to discern the degree to which the tightly linked SBT cycle described above was followed. Furthermore, only limited data are available. Nevertheless, a quick review of some of the domains in which a version of SBT has appeared is warranted. To begin, results indicate that SBT has high face validity—the learners believe it to be an effective training method. For example, stressful and realistic scenarios were presented in a training program for firearm safety with the goal of increasing the mental readiness of probation officers (Scharr, 2001). In a post-training evaluation questionnaire, 97% of the respondents felt the training was effective "to a great extent" (Scharr, 2001). Additionally, Lowry (2000) found that probation officers trained with SBT were much more likely to rate the training as excellent than were officers trained with other methods.

Next, training cost savings have also been reported. Using a method similar to SBT, Stewart, Dohme & Nullmeyer (2002) examined the effectiveness of pilot training for rotary wing aircraft. Training times and costs were reduced as a result, and the transfer of training ratio was found acceptable. These results are not surprising; as Cannon-Bowers et al. (1998) described, training time and cost are reduced as a result of including only those events that exercise the targeted skills. In another study, a network of simulators was used to train military service personnel (e.g., U.S. National Guard, U.S. Marine Corps, U.S. Army and U.S. Air Force) around the world in a joint military exercise. In this exercise, Dwyer et al. (1999) tested the use of performance measurement tools designed to link specifically to events and, thus, to learning objectives. Although only case studies were used, the results indicated that the SBT approach was a success. Finally, while no data were reported, scenario based training has been used in the medical field to allow doctors to practice techniques such as sigmoidoscopy (Kneebone et al., 2003). Using a simulated patient, participants were able to improve their procedural skills without risking harm to a live patient.

Thus, SBT has been used in a few domains with some successful results. It should be noted that while minimal research exists on SBT as a whole, SBT is composed of proven training methods including practice, performance measures, and feedback. Additional research on SBT effectiveness, however, would enable training designers to take maximal advantage of the method while avoiding pitfalls. The next section of this paper describes the introduction of the technologically advanced aircraft (TAA) in general aviation and the potential use of SBT to train pilots to use the new system.

Example: Using SBT in General Aviation

Since the 1970s, aircraft manufacturers have attempted to counteract the negative effects of increasingly complex aircraft and traffic-congested skies (NASA Facts Online).
This effort began with the use of cathode ray tube screens in some aircraft to reduce the number of instruments and displays within the cockpit (NASA Facts Online). Electronic Flight Instrument Systems (EFIS) (i.e., the "glass cockpit"; technologically advanced aircraft) appeared a short time later (Mulder, van Paassen, & Mulder, 2004). While EFIS systems differ depending on the manufacturer, the systems currently available for general aviation include complete flight/navigation instrumentation in glass cockpit technology. Additionally, featuring 3D synthetic vision and highway-in-the-sky on an LCD display, the flight display system relays primary flight information as one integrated picture. The 3D synthetic vision enables pilots to see the terrain and future paths, while the highway-in-the-sky "creates a 3-D tunnel for all enroute and instrument procedures" and reduces pilot workload in executing flight plans and instrument approaches (Chelton Flight Systems, n.d.).

In commercial aviation, the new technology engendered initial concerns with automation -- e.g., how to keep the pilot active in the process and able to explain what the automation was doing (Mulder et al., 2004). Over time, however, accident statistics have shown an increased level of safety, and research has helped determine the proper balance between automation and pilots. Presently, most in the aviation community would agree that the glass cockpit has "increased pilot safety by reducing pilot workload at peak times, yet kept the pilot 'in the loop' at all times to maintain situational awareness" (NASA Facts Online, 2000).

With the advent of the TAA in general aviation, industry, government, and academia are collaborating to develop training guidelines to transition pilots experienced with traditional aircraft to the new aircraft. These guidelines are called the FAA/Industry Training Standards (FITS). The overall goal is to ensure that pilots learn how to fly a small jet aircraft "safely, competently and efficiently" (FAA/Industry Training Standards, 2004). Specific goals include training higher order thinking, procedural knowledge, and related knowledge, skills, and attitudes to reduce the number of aviation accidents attributed to pilot error.

FITS incorporates the SBT methodology. A number of reasons exist as to why SBT was selected. Most importantly, scenario based training (SBT) is generally used for training complex tasks or interactions (Loftin, Wang, & Baffes, 1989). Complex tasks are those that require high-level cognitive processing skills such as aeronautical decision-making and situational awareness. Similar to the complex operational environments described by Oser et al. (1999b), aviators must continually identify cues and patterns in their environment, assess and make inferences about their status, make judgments about their next course of action, implement actions, and evaluate the impact of these actions. Training must therefore address successful performance of such complex processes and enhance the associated higher order skills. As SBT replicates the critical aspects of the operational environment, while also offering carefully planned events and feedback, it seems a logical choice to train such skills. In contrast, traditional aviation training typically requires pilots to practice one flight task at a time until that task is mastered. The tasks are not integrated into a seamless activity until the training is near completion.
While this method may be appropriate for the mastery of some simple tasks, such as learning how to control the rudder, it is likely too limited to effectively train for complex tasks. The complex interactions between pilots, instrumentation, and ATC that arise when flying a TAA may
require an alternative training method. Thus, attention has focused on using SBT to train pilots in general aviation to operate the TAA (FAA/Industry Training Standards, 2004). Additionally, as the TAA is introduced to general aviation, instructors will be tasked with training aviators who have a wide range of prior experience. Again, SBT seems a good fit--a wide range of operators may benefit from SBT, from novices without any specialized knowledge to expert users who wish to familiarize themselves with a new product (Loftin et al., 1989). To summarize, one method in which to use simulation effectively is to follow the Oser et al. (1999a) model of scenario-based training. This model links together task requirements, scenario events, performance measures, and feedback. SBT is currently being tested as a way to train general aviation pilots to fly technologically advanced aircraft. We now turn to a discussion of training effectiveness in general.

Training Effectiveness: An Overview

Since simulation is used for training, it is important to discuss training effectiveness in general; this section of the report is intended to educate the reader on that topic. To accomplish this, we first review the concept of "transfer of training" and discuss the factors involved. While multiple models of training effectiveness exist, we describe a fairly basic model of training transfer/effectiveness. Next, we discuss research methods relating to training effectiveness.

Transfer of Training: Terms and Concepts

The ultimate goal of training is for the learner to transfer what was learned in training to the actual real world setting. Transfer of training (ToT) refers to the process of applying knowledge, skills, and attitudes learned from training programs to real world situations and the maintenance of these knowledge, skills, and attitudes over time on the job (Baldwin and Ford, 1988). Transfer of training can be classified into two types: positive transfer and negative transfer (Chapanis, 1996).

Positive Transfer: Positive transfer occurs when an individual correctly applies knowledge, skills, and/or attitudes learned (for example, in simulation) to a different setting (in the case of aviation, this would be the real flight). Positive transfer is the goal of any type of training. In this report, when we refer to transfer of training, we mean positive transfer of training unless otherwise specified.

Negative Transfer: Negative transfer occurs when existing knowledge and/or skills (from previous experiences) impede proper performance in a different task and/or environment. For example, a skilled typist on a QWERTY keyboard would have difficulties using, or learning how to use, a non-QWERTY keyboard such as a Dvorak keyboard. Negative transfer develops in at least two related ways: system design changes and a mismatch between a training system and the actual task. First, system design changes (e.g., controls, software menus, etc.) can create one type of negative transfer--habit interference. Specifically, the task performer has experience performing the task set up in one manner and has developed a certain degree of automaticity. If a design change occurs, it is likely the task performer will revert back to performing the
task according to the previous system (i.e., "habit interference"). Avoiding habit interference should be a major design goal (Chapanis, 1996). In the case of training, if the training system procedures do not match those necessary in the transfer environment, negative transfer is likely to occur. Consider a pilot who is trained in a simulator to pull back on the yoke to lower the nose of the aircraft (when, in fact, pulling back on the yoke raises the nose); the pilot will likely do the same in a real flight situation. In sum, negative transfer occurs when the trainee reacts to the transfer stimulus incorrectly but as s/he had practiced.

A Model of Factors Affecting the Transfer of Training

A number of different authors have proposed models of training effectiveness (e.g., Cannon-Bowers et al., 1995; Colquitt et al., 2000). A full review of these is beyond the scope of this report. For the purpose of this review we selected the Baldwin and Ford (1988) model of transfer of training. Baldwin and Ford (1988) conducted a review of research on transfer of training and proposed a theoretical framework of the transfer process (see Figure 2). This section will review this model with an emphasis on 1) research accomplished since publication of the model and 2) research done in simulation and/or aviation studies. As shown in Figure 2, the model depicts transfer of training in terms of training input factors, training outcomes, and conditions of transfer.

Training Input Factors. Starting with the left side of Figure 2, the "Training Input" factors include training design, trainee characteristics, and work-environment characteristics. The model depicts each of these three input factors as having a direct influence on learning in the training environment. In addition, the model also connects trainee characteristics and work-environment characteristics directly with transfer performance. Thus, those factors are thought to exert a direct influence on performance in the transfer setting.

In terms of trainee characteristics, Baldwin and Ford found a number of research studies suggesting that these characteristics affect training transfer efficiency. Numerous individual differences exist, including motivation, attitudes, and ability (e.g., cognitive and physical). In an analysis specifically targeting flight simulation, Auffrey, Mirabella and Siebold (2001) argued that goal setting, planning, motivation, and attitudes were key factors in training effectiveness. Consider motivation: the trainee must put forth effort to learn. Therefore, a prerequisite for transfer is the trainee's motivation to successfully complete the training--to acquire the new skills and knowledge. In terms of ability, the trainee must also have the raw ability to improve his or her skills. For example, consider a pilot who is working on shortening his/her take-off distance. He/she must analyze the situation and realize the need to increase flaps and increase speed to gain more lift. Therefore, the trainee must possess the cognitive ability to understand the change in elements necessary to accomplish the performance improvement. These are just two examples of individual characteristics that relate to training effectiveness. Overall, more research on the impact of trainee characteristics on training effectiveness is needed.

Secondly, the training design/method plays a role in transfer of training. Baldwin and Ford proposed four basic principles for the design of training to facilitate transfer of training.
Of particular relevance to simulation-based training are the principles of identical elements and the stimulus response relationship. The first approach, “identical
- 27 elements” dates back to the turn of the century. E.L. Thorndike (1903), put forth a notion that is still held by simulator designers today. Thorndike argued that transfer between the first task (simulation) and the second task (real world) would occur if and only if the first task contained specific component activities that were held by the second task. This approach is entirely dependent on the presence of shared identical elements (Thorndike, 1903). Training Inputs
Figure 2: A Model of Training Transfer (adapted from Baldwin and Ford, 1988). [Figure: boxes for Training Inputs (Trainee Characteristics, Training Design, Work Environment) feed into Training Outputs (Learning and Retention), which in turn feed into Conditions of Transfer (Generalization and Maintenance).]

Although outside the realm of flight simulation, an illustration of this principle can be found in athletics. If an individual plays softball and then tries out for the baseball team, s/he will be better off than the person who has only previously played golf. All three sports share common elements (e.g., a ball, a striking implement, a grass playing field). However, softball shares far more elements with baseball than golf does (number of players, bases, scoring, umpires, uniforms, fences, dugouts). The more elements that are shared between the two environments, the better the transfer. Therefore, softball is a better form of simulation than golf for the training of baseball skills. This approach parallels the idea that simulators should duplicate the real world situation to the greatest degree possible.

Another principle for training design is framed in terms of "stimulus and response." In this respect, the idea is to examine the extent to which similarities exist between the stimulus representation and the response demands of the training and those of the transfer task (Osgood, 1949). This perspective does not demand a duplication of exact elements. In contrast, the notion is that transfer of training can be obtained using training tasks
and/or devices that do not duplicate the real world exactly but that do maintain the correct stimulus-response relationship. Consider once again the example of athletics. Golf transfers quite well to tennis, as the golf swing (low to high) and the tennis stroke (low to high) are similar. Even though the sports themselves are quite different (racquets vs. golf clubs and holes), the stimulus-response sequence for each of the sports is quite similar.

In terms of simulation based training, Hays et al. (1992) conducted a meta-analysis of relevant flight simulation research in order to identify important characteristics for simulator transfer of training. Simulator design and training context were included in the findings, and both fall under the training design section of the Baldwin and Ford (1988) model. Lathan et al. (2003) also argued that fidelity is a primary issue in simulator transfer of training effectiveness—the higher the fidelity, the better the transfer. Higher fidelity, however, does not mean "advanced technology" or physical fidelity. Instead, it should be considered an indication of the degree to which the simulation includes appropriate stimulus-response relationships. In addition to the design of the simulator itself, many other variables exist in terms of training methodology (Goldstein, 1993). Regarding flight simulation, Auffrey, Mirabella and Siebold (2001) argued that instructional factors include instruction specificity, prior knowledge and experience, active use of knowledge, similarity (fidelity), practice, and elaboration methods.

The third input factor in the model is work-environment characteristics. This refers to the overall organizational support for the learner, such as supervision, sponsorship, and subsequent reward for the training/skill development. It also refers to the skill involved (i.e., task difficulty). That is, some tasks are easy and their skills require little effort to transfer, while other tasks are difficult and the skills required to perform them are difficult to transfer and maintain (Hays et al., 1992; Lathan et al., 2003; Simon and Roscoe, 1984; Blaiwes, Puig and Regan, 1973). In their review regarding flight simulation, Auffrey, Mirabella and Siebold (2001) agree that the work environment is an important factor in training transfer. Additionally, Awoniyi, Griego and Morgan (2002) investigated the effect of various work environment factors on training transfer. These authors discussed the importance of person-environment fit. Briefly, the notion is that transfer will depend on the degree of fit between a worker and the particular work environment. In other words, two workers may attend training and acquire equivalent knowledge and skills; however, depending on the degree of person-environment fit in the organization, one worker may show a significantly higher degree of transfer than the other. Five dimensions were studied: supervisory encouragement, resources, freedom, workload pressure, and creativity support. Results indicated that person-environment fit has a positive relationship with transfer of training and can be a moderate predictor of it.

Training Outputs. The middle of the model, "Training Outputs," focuses on the actual outputs of training or, in other words, the amount of learning that occurred during training and the amount retained after the training program was completed. As noted, the training outputs depend on the three input factors described above.
In turn, the amount learned during training will have a direct influence on ultimate training transfer.
Conditions of Transfer. Finally, the right side of the model, "Conditions of Transfer," refers to the post-training environment. At this stage, the learner is back in the actual work environment. Conditions of transfer include the real-world conditions surrounding the use, generalization, and maintenance of the knowledge and skills learned in the training program, the degree to which the learner used the knowledge and skills in the transfer setting, and the length of time the learner retained the knowledge and skills. Little research has been done in this area, particularly in the aviation domain. One example is a study of assertiveness training for pilots (Smith-Jentsch, Salas & Brannick, 2001). This study investigated the combined effects of trainee characteristics, team leader support, and team climate on training transfer. A multi-dimensional measurement of team transfer was applied. Results of their investigation showed a strong effect of transfer climate on transfer performance.

In summary, transfer of training is the combined result of input factors (characteristics of the trainee, the training design, and the work environment), the amount learned in training, and the conditions surrounding the transfer setting. Some of these factors are better understood than others. Simulation is one sub-factor in this complex problem. Even with the best simulation available, if the other variables are not present in the appropriate manner, training will not be effective – it will not result in positive transfer. Thus, simulation is one part of the whole picture. Additional research is needed to further understand the differential impacts and the interaction effects of the variables. We now turn to a discussion of the methodologies underlying transfer of training studies.

Research Methods

How do researchers study training effectiveness? In other words, how do designers validate their simulator as a training tool? It sounds simple enough: simply compare performance on the job before training and after training. Unfortunately, it is not that simple, and because of the complex nature of the problem, many unknowns still exist. Indeed, transfer of training research has been inconclusive due to differences in concepts and definitions, differences in theoretical orientations, and methodological flaws. Perhaps the greatest problem is that transfer of training is extremely difficult to measure (in terms of accurately determining whether transfer has occurred and how much has transferred). Furthermore, transfer is often measured as a product rather than as a process (Auffrey, Mirabella and Siebold, 2001). Two major issues are involved: experimental design and human performance measurement. A full review of these topics is beyond the scope of this report; however, we offer some key terminology and issues.

Human Performance Measurement

Human performance (e.g., job performance) can be assessed in many ways. The most common approaches are objective performance measures and subjective judgments (Spector, 2003).
Objective Measures

Objective measures are accounts of various behaviors (e.g., number of errors) or the results of job behaviors (e.g., total passengers transported). Thus, objective measures are data that objectively reflect the trainee's performance level. An obvious problem with objective measurements is that they cannot give any insight into cognitive-related aspects of performance, such as workload, situational awareness, motivation, and attitudes. These variables are vital to transfer of training assessment because they are among the factors that affect transfer of training.

Subjective Measures

Another approach is to use subjective measures. Subjective measures are ratings of some sort given by an expert. Examples include surveys and questionnaires. Overall, Hays et al. (1992) found that subjective performance measures were more sensitive to transfer of training effects than were objective measures. For example, Xiao (1996) developed a six-item questionnaire to measure three aspects of training transfer (efficiency, quality, and productivity). Numerous other examples of subjective ratings are available in the literature (for a summary see Spector, 2003).

How to Choose Measures

Selecting performance measures depends largely on the task. Indeed, it is rare for performance measures to be used in different tasks and domains without at least some modification. Training requirements are key (Hammerton, 1966). If the purpose of training is to teach solely physical skills, then objective measurement can be used as the primary tool; if cognitive training is involved, subjective performance assessment tools need to be applied in order to capture the cognitive skill learning and transfer. The notion of multiple measures also exists (Kirkpatrick, 1996; Hammerton, 1966; Kraiger, Ford, & Salas, 1993).

Additionally, in terms of choosing what measures of transfer are appropriate, a deeper look at the transfer process is in order. While most transfer of training research focuses on the immediate results of the training instruction, Foxon (1993) proposed that transfer does not occur immediately but instead occurs in a series of stages through which trainees pass. In other words, Foxon proposed a process model. The Foxon (1993) model (see Figure 3) fits into the "generalization and maintenance" portion of the Baldwin and Ford (1988) model. According to Foxon (1993), the degree of transfer increases progressively across these stages while the risk of transfer failure decreases. Although this process model was proposed for organizational training programs, it is likely generalizable to any kind of training, including simulation training.
Figure 3. A model of the training transfer process (adapted from Foxon, 1993). [Figure: transfer unfolds over elapsed time through the stages of intention to transfer, initiation, partial transfer, conscious maintenance, and unconscious maintenance, moving from a high risk of transfer failure toward acceptable and ultimately optimal transfer.]
The Foxon (1993) model has several implications. One concerns transfer of training measurement (Auffrey et al., 2001). While traditional transfer of training measurement captures a single value of transfer, a process approach enables several measurement points, so that transfer can be estimated continuously and other information, such as transfer speed, can also be obtained. This is similar to the notion of multiple measures of learning (e.g., Kraiger et al., 1993), but the Foxon (1993) model also emphasizes time. Combining the ideas from Kraiger et al. (1993) with the Foxon (1993) model suggests a need for multiple measures spread over multiple points in time.
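As a simple illustration of what measuring transfer at multiple points in time might look like, the following sketch (in Python, with entirely hypothetical scores and time points) estimates a rough "transfer speed" as the slope of repeated transfer measurements. It is only an illustration of the multi-measurement idea, not a procedure taken from Foxon (1993) or from this report.

from statistics import mean

# (weeks since training, observed transfer score on a 0-100 scale); hypothetical data
observations = [(1, 35.0), (4, 52.0), (8, 61.0), (12, 66.0)]


def transfer_speed(points):
    """Least-squares slope: change in transfer score per week of elapsed time."""
    xs = [t for t, _ in points]
    ys = [s for _, s in points]
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in points)
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den


print(f"Estimated transfer speed: {transfer_speed(observations):.2f} points/week")

In practice, the choice of measurement occasions and of the transfer metric itself would follow from the training objectives and the measures discussed above.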
Table 4a. Variables used in the transfer of training formulas. Trials/time to transfer, or trials to criterion (TTC), is the number of training trials that a subject attempted for a given task before reaching the criterion level of proficiency on that task. The formulas in Table 4b use the following variables:

Yc: the control group's TTC for training conducted in the aircraft
Yx: the experimental group's TTC for training conducted in the aircraft
X: the experimental group's TTC for training conducted in the simulator
F: the mean performance on the first simulator training trial
L: the mean performance on the last simulator training trial
T: the mean performance on the first post-transfer trial
C: the mean first-trial performance in real-situation training
S: the stable performance in real-situation training
Table 4b. Transfer of Training Formulas

Percent Transfer = ((Yc − Yx) / Yc) × 100
  Measures the saving of time or trials in aircraft training to criterion that can be achieved by use of a ground simulator.

Transfer Effectiveness Ratio (TER) = (Yc − Yx) / X
  Measures the efficiency of the simulation by calculating the saving of aircraft trials (or time) per trial (or unit of time) spent in the simulator.

First Shot Performance (FSP) = (F − T) / (F − L)
  Measures how much training is retained on first transference to the real situation.

Training Retained (TR) = (C − T) / (C − S)
  Measures how much training is retained on the first post-transfer trial from the simulator compared with that gained from real world training.

(Taylor et al., 1997; Taylor et al., 1999; Hammerton, 1966)
Finally, as shown in Table 4, Taylor et al. (1997) and Taylor et al. (1999) present a number of different calculations that can be utilized with objective or subjective data. These include percent transfer (the saving of time or trials in the aircraft gained by using a ground simulator), the transfer effectiveness ratio (the efficiency of the simulation), first shot performance (how much training is retained on first transference to the real situation), and training retained (how much training is retained on the first post-transfer trial from the simulator compared with that gained in the real world). In addition to performance measures, experimental design is another crucial issue in determining the degree of transfer of training that occurred. This will be discussed next.
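Before turning to experimental design, a brief illustration may help. The following minimal sketch (in Python) shows how the Table 4 ratios might be computed from study data; the variable names follow Table 4a, but the trial counts, performance scores, and function names are hypothetical illustrations of ours, not values or code from the cited studies.

def percent_transfer(yc, yx):
    """Saving of aircraft trials (or time) to criterion, expressed as a percentage."""
    return (yc - yx) / yc * 100


def transfer_effectiveness_ratio(yc, yx, x):
    """Aircraft trials saved per simulator trial invested."""
    return (yc - yx) / x


def first_shot_performance(f, l, t):
    """Training retained on the first transfer to the real situation."""
    return (f - t) / (f - l)


def training_retained(c, s, t):
    """First post-transfer performance relative to what real-world training achieves."""
    return (c - t) / (c - s)


if __name__ == "__main__":
    # Hypothetical example: the control group needed 20 aircraft trials to criterion;
    # the simulator-trained group needed 12 aircraft trials after 10 simulator trials.
    print(percent_transfer(yc=20, yx=12))                    # 40.0
    print(transfer_effectiveness_ratio(yc=20, yx=12, x=10))  # 0.8
    # Hypothetical mean performance scores: first simulator trial 40, last simulator
    # trial 90, first post-transfer trial 75; real-world first trial 45, stable level 88.
    print(first_shot_performance(f=40, l=90, t=75))          # 0.7
    print(training_retained(c=45, s=88, t=75))               # approx. 0.70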
Experimental Design

Numerous training evaluation studies exist in the literature. Three types include the "forward transfer study" (i.e., a predictive validation study), the "backward transfer study" (i.e., a concurrent validation study), and the "quasi-experimental" study (an approach similar to a construct validation study).

Forward Transfer

In the case of aviation, the classic experiment is to compare two matched groups of pilot trainees. One group (the control group) would be trained using only actual aircraft and the other (the experimental group) would receive its training via a simulator (Hays & Singer, 1989). The transfer environment is the actual aircraft. The pilots' performances after training (e.g., TTC, flight technical performance, etc.) are captured, measured, and compared. This type of study is referred to as a forward transfer study (Kaempf and Blackwell, 1990; Darken et al., 1998; Dohme, 1992) because it follows the direction of transfer--first the simulation, then the transfer environment. In some situations, however, this type of study can be expensive, time consuming, or impossible to complete. For instance, for rare and dangerous events such as engine failure, turbulence, or severe weather conditions, a forward transfer study is nearly impossible in a practical sense.

Backward Transfer Experiment

To overcome some of the technical difficulties inherent in forward transfer studies, some researchers use backward transfer (Kaempf and Blackwell, 1990). In a backward transfer experiment (i.e., a concurrent validation study), currently proficient performers perform tasks on the job and performance measures are taken. Next, the same performers perform the tasks in the simulation, and their performance in the simulation is compared with their performance on the job. The logic of a backward transfer experiment lies in the following assumption: "if the aircraft proficient aviators cannot perform the flying tasks successfully in the simulator, the poor performance is attributed to deficiencies in the simulators" (Kaempf and Blackwell, 1990). Possible simulator deficiencies include: the cues are different from the actual environment, the controls are different from the actual environment, and flying the simulator requires skills that are not required to fly the aircraft (Kaempf and Blackwell, 1990). Backward transfer experiments use real pilots' performance in simulators only to predict forward transfer effectiveness for a particular simulator. If a low degree of backward transfer occurs, it implies that there are deficiencies in the simulator. Unfortunately, a high degree of backward transfer is not necessarily an indication of a high degree of forward transfer; it may simply be that the pilots in the study were exemplary at getting the simulator to perform how they desired. Table 5 illustrates the formula involved in backward transfer experiments. Results from Kaempf
and Blackwell (1990) indicate that inexpensive backward transfer studies may be employed to predict forward transfer for specific tasks.
Table 5. Index of Backward Transfer (B)

B = (1/N) × Σ (Ai / Si), summed over subjects i = 1 to N

where:
i = subject
N = total number of subjects
Ai = the mean of subject i's OPR scores for the last two trials in the aircraft
Si = subject i's OPR score during the second simulator check ride

A value of B less than 1 indicates that performance in the simulator was substantially below that in the aircraft.

(Kaempf & Blackwell, 1990)
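For concreteness, a minimal sketch (in Python) of computing the Table 5 index follows; the function name and the OPR scores are hypothetical, not data from Kaempf and Blackwell (1990).

def backward_transfer_index(aircraft_scores, simulator_scores):
    """B = (1/N) * sum(Ai / Si); see Table 5 for the interpretation of B."""
    if len(aircraft_scores) != len(simulator_scores):
        raise ValueError("each subject needs one aircraft and one simulator score")
    ratios = [a / s for a, s in zip(aircraft_scores, simulator_scores)]
    return sum(ratios) / len(ratios)


# Example: mean aircraft OPR scores and second-check-ride simulator OPR scores
# for three hypothetical proficient pilots.
print(backward_transfer_index([85.0, 90.0, 88.0], [82.0, 91.0, 86.0]))  # approx. 1.02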
Quasi-Transfer of Training

Lintern, Taylor, Koonce, Kaiser and Morrison (1997) and Stewart and Dohme (2002) completed quasi-transfer of training experiments (i.e., similar to construct validation studies). In this method, transfer of training is compared between one configuration of a simulator and another configuration of the same device. Quasi-transfer research is intended for basic research on training effectiveness principles and theories. Benefits include considerable savings in experimental cost and time, provided we assume that the quasi-transfer methodology is a valid portrayal of the actual transfer setting.

In summary, after reviewing the literature, a few points are quite clear. First, perhaps the most difficult issue in studying transfer of training is the multitude of variables that influence training effectiveness and, in turn, training transfer (Baldwin and Ford, 1988; Colquitt, LePine & Noe, 2000). Indeed, a methodical approach to testing the influence of these variables independently and in interaction with each other is essential. Additional research is needed before an exact understanding of the relative importance of the different variables is achieved. In addition, when accomplishing the studies, attention must focus on measurement—what to measure and how to measure it. Appropriate measurement is absolutely crucial.

Conclusions and Future Direction

The use of simulation in aviation training is an exciting topic. Engineering technologies have given birth to all sorts of aircraft simulators. Additionally, training research and development has provided guidance on how to make the most of these training devices. After reviewing the literature regarding the use of simulation in aviation training, we offer the following observations:
1. Training is a crucial topic for the field of aviation. In the aviation domain, on a daily basis thousands of individuals perform highly complex tasks (e.g., flying and air traffic management). Effective human performance of these tasks can mean the difference between life and death, and training plays an important role in ensuring that task operations are successful. Indeed, aviation is one of a few domains where training is of utmost importance for optimal performance and safety.

2. Simulation is rapidly becoming a primary training tool in the field of aviation. It enables trainees to develop the knowledge, skills, and attitudes necessary for task performance in a cost effective, safe environment. The effectiveness of simulator training has been verified, and use of flight simulators has become a standard method to train pilots.

3. Technological developments continue to push simulation closer and closer to exact mimics of reality. The rapid development of computing technology has made simulations increasingly realistic. With high fidelity simulations available, the validity of simulation training will continue to increase.

4. Considerable research exists regarding variables that influence training effectiveness. These include characteristics of the trainee, characteristics of the training design, and characteristics of the work environment. Simulation is one aspect of this complex problem. It is essential to link simulation technology to research on training effectiveness. If this does not occur, we will not make the most of simulation technology.

5. The Scenario Based Training model describes how to use simulation for effective training. As new technologies continue to be introduced into aircraft, the flight control tasks change and, at times, become more complex. This, in turn, requires pilots to continually update their skills. Research indicates that SBT (e.g., Oser et al., 1999) is one training approach that fosters complex problem solving skills as well as basic skills. By using SBT, pilots can develop individual task skills while also developing skills to handle the interactions among tasks and unexpected events. Simulation alone will not teach these skills, but surrounding simulation with the other aspects of the SBT model is a powerful approach.

Despite the progress in simulation technology and the understanding of training effectiveness, many questions remain unanswered. These issues were mentioned throughout this manuscript. To briefly recap, questions remain involving the concept of simulation fidelity, such as: What is an effective measurement of simulation fidelity? How do you quantify the notion of simulation fidelity? What is the effect of fidelity on training effectiveness and, ultimately, training transfer? Other issues have to do with the Scenario Based Training model, such as: What is the best method to design and select scenario events? When is the best time to give feedback in a scenario based training cycle? In addition to the simulations themselves, what tools will help to automate the process of scenario based training? For what types of knowledge and skills is scenario-based training best suited? There are also research issues relating to assessing training transfer that need our attention, such as: What methods can be used to measure human performance accurately and non-obtrusively in transfer, real-life performance settings?
Thus, while much has been accomplished, our review of the literature suggests many areas remain unexplored. One obstacle is the slow progress of research on simulation fidelity and its relationship with transfer of training. Overall, however, the field is alive and prospering. Simulation-based training researchers and practitioners are providing the right training tools to the right people at the right time. With continued progress in both engineering technology and training research, simulation training will continue to improve. The purpose of this report was to educate individuals who were unfamiliar with aviation simulation research and practice. It is our hope that the reader now understands the overall picture of simulation training in aviation, as well as the need for and potential benefits from additional research in this vital, exciting area.
- 37 Bibliography Adams, G. H. (1976). Simulators in education and training (A bibliography with abstracts) (NTIS-PS-76/0287/3GA). Springfield, VA: National Technical Information Sevice. Adams, J. A. (1979). On the evaluation of training devices. Human Factors, 21(6), 711720. Advani, S. K. (1999). Optimising simulator motion systems - filling the gaps. In Can Flight Simulation do Everything? Flight Simulation Capabilities, Capability Gaps and Ways to Fill the Gaps; proceedings of the conference, London, United Kingdom, pp. 11.1-11.12. Advani, S.K. & Mulder, J.A. (1995). Achieving high-fidelity motion cues in flight simulation. AGARD FVP Symposium on Flight Simulation: Where are the Challenges, Braunschweig, Germany. Advani, S. K., Nahon, N., Haeck, N. & Albronda, J. (1999). Optimization of six-degreesof-freedom motion systems for flight simulators. Journal of Aircraft, 36(5), 819826. Alessi, S. M. (1988). Fidelity in the design of instructional simulations. Journal of Computer-Based Instruction, 15(2), 40-47. Allen, J. A. (1986). Maintenance training simulator fidelity and individual difference in transfer of training. Human Factors, 28(5), 497-509. Allen, J. A., Hays, R. T., & Buffardi, L. C. (1986). Maintenance training simulator fidelity and individual differences in transfer of training. Human Factors, 28(5), 497-509. Arbak, C. J., Derenski, P. A. & Walrath, L. C. (1993). Simulation considers the human factors. Aerospace America, 31(10), 38-41. Armando, A., Castoldi, P. & Fassi, F. (1992). AM-X Flight Simulator from engineering tool to training device. In the proceeding of the AGARD Conferences: Pilot Simulation Effectiveness, Brussels, Belgium from October 14th-17th 1991. pp 14.114.12. Armstrong Laboratory. (1996, March). The future of selective fidelity in training devices (Report AL/HR-TR-1995-0195). Mesa, AZ: Andrews, D. H., Carroll, L. A., & Bell, H. H. Atkins, R. J., Lansdowne, A. T. G. & Pfister, H. P. (2002). Conversion between control mechanisms in simulated flight: An ab initio quasi-transfer study. Australian Journal of Psychology, 54(3), pp. 144- 149. Auffrey, A. L., Mirabella, A., & Siebold, G. L. (2001). Transfer of training revisited (Report ARI-RN-2001-10). Alexandria, VA: Army Research Institute for the Behavioral and Social Sciences. Awoniyi, E. A., Griego, O. V., & Morgan, G. A. (2002). Person-environment fit and transfer of training. International Journal of Training and Development, 6(1), 2535. Ayres, A. Hays, R. T., Singer, M. J. & Heinicke, M. (1984). An annotated bibliography of abstracts on the use of simulators in technical training (ADA156792, ARI-RP-84-21). Fairfax Virginia: George Mason University.
- 38 Baldwin, T., Ford, J. (1988). Transfer of Training: A Review and Direction for Future Research, Personnel Psychology, No. 41, 1988, 63-105. Bauschat, J. M. (1994). The role of in-flight simulation for the definition of simulation fidelity criteria. In the 19th Annual ICAS Congress Proceedings, Anaheim, CA, 261-271. Beaubien, J. M., Baker, D. P. & Salvaggio, A. N. (2004). Improving the construct validity of line operational simulation ratings: Lessons learned from the assessment center. International Journal of Aviation Psychology, 14(1), 1-17. Bell, P. M. & Freeman, R. (1995). Qualitative and quantitative indices for simulation systems in distributed interactive simulation. In IEEE Proceedings of ISUMANAFIPS, 745-748. Bell, H. H. & Waag, W. L. (1998). Evaluating the effectiveness of flight simulators for training combat skills: A review. International Journal of Aviation Psychology, (8)3, 223-242. Bentley, P., Buck, P., & Robinson, P. (1996). Representing intelligent behavior in simulation. In 2nd annual AIAA Test and Evaluation International Aerospace Forum, London, United Kingdom, pp. 113-120. Beringer, D. B., & Ball, J. D. (2002). A flight simulator usability assessment of a multifunction flight-planning and navigation display for general aviation. In Human factors and ergonomics society 46th annual meeting; Baltimore, MD, pp. 6-10. Beyer, R. (1989). Experimental study towards a future airport ground movement simulator. In Integrated Air Traffic Management Proceedings, Germany (SEE N90-27676 22-04). Bezdek, W., Powell, R., & Mays, D. (2004). The history and future of military flight simulators. AIAA modeling and simulation technologies conference and exhibit, Providence, RI,AIAA Paper 2004-5148. Biegel, J. (1990). An intelligent simulation training system. NASA, Lyndon B. Johnson Space Center, Third Annual Workshop on Space Operations Automation and Robotics (SOAR 1989) (SEE N90-25503 19-59), pp. 579-584, 579-584. Biggs, S. (1996). Real time variable generic database generation. In Training-Lowering the Cost, Maintaining Fidelity: Proceedings from the Royal Aeronautical Society, London, UK, pp.13.1-13.7. Blaiwes, A. S., Puig, J. A., & Regan, J. J. (1973). Transfer of training and the measurement of training effectiveness. Human Factors, 15(6), 523-533. Bowers, C. A., Salas, E., Prince, C. & Brannick, M. (1992). Games teams play: A method for investigating team coordination and performance. Behavior Research Methods, Instraments, and Computers, 24, 503-506. Bradley, D. R. & Abelson, S. B. (1995). Desktop flight simulators: Simulation fidelity and pilot performance. Behavior Research Methods, Instruments, & Computers, 27(2), 152-159. Braun, D., & Thomas Galloway, R. (2004). Universal automated flight simulator fidelity test system. AIAA modeling and simulation technologies conference and exhibit, Providence, RI, AIAA Paper 2004-5269. Brown, J. L. (1976). Visual Elements in Flight Simulation. Aviation, Space and Environmental Medicine, September 1976, 913-924.
- 39 Bruce, D. (1991). Air traffic control simulation training. In the proceedings of the SAE Aerospace Technology Conference and Exposition, Long Beach, CA. Burke, L. A. (1997). Improving positive transfer: A test of relapse prevention training on transfer outcomes. Human Resource Development Quarterly, 8(2), 115-128. Burki-Cohen, J., Boothe, E. M., Soja, N. N., DiSario, R., Go, T., & Longridge, T. (2000). Simulator fidelity: The effect of platform motion. Proceedings of the International Conference Flight Simulation: The Next Decade, London, UK, 23, 1-7. Bürki-Cohen, J., Go, T. H., & Longridge, T. (2001). Flight simulator fidelity considerations for total airline pilot training and evaluation. AIAA Modeling and Simulation Technologies Conference and Exhibit, Montreal, Canada (AIAA 2001-4425). Burki-Cohen, J., Go, T.H., Chung, W. W., Schroeder, J., Jacobs, S., & Longridge, T. (2003). Simulator fidelity requirements for airline pilot training and evaluation continued: An update on motion requirements research. Proceedings of the 12th International Symposium on Aviation Psychology, 1-8. Bürki-Cohen, J., Soja, N. N. & Longridge, T. (1998). Simulator platform motion – The need revised. International Journal of Aviation Psychology, 8(3), 293-317. Bussolari, S. (1991). Real-time control tower simulation for evaluation of airport surface traffic automation. In the 6th Annual Proceedings of the International Symposium on Aviation Psychology, Columbus, OH, pp. 502-507. Caldwell, J. A., Caldwell, J. L., Brown, D. L., & Smith, J. K. (2004). The effect of 37 hours of continuous wakefulness on the physiological arousal, cognitive performance, self-reported mood, and simulator flight performance of F-117A pilots. Military Psychology 16(3), 163-181. Campbell, J. O. & Lison, C. A. (1995). New technologies for assessment training. College Student Journal, 29(1), 26-2.9 Cannon-Bowers, J. A., Burns, J. J., Salas, E. & Pruitt, J. S. (1998). Advanced technology in scenario-based training. In. Cannon-Bowers, J.A &. Salas, E. (Eds.), Making decisions under stress: Implications for individual and team training (pp. 365374). Washington, D.C.: APA. Cannon-Bowers, J.A., Salas, E., Tannenbaum, S. I. & Mathieu, J. E. (1995). Toward theoretically-based principles of training effectiveness: A model and initial empirical investigation. Military Psychology, 7, 141-164. Caro, P. W. (1969). Adaptive training – An application to flight simulation. Human Factors, 11(6), 569-576. Caro, P. W. (1977). Factors influencing simulator training effectiveness in the U.S. air force. (AD-A043119) Pensacola, FL: Seville Research Corporation. Caro, P. W. (1979). The relationship between flight simulator motion and training requirements. Human Factors, 21(4), 493-501. Caro, P. W. (1988). Flight Training and Simulation. In Wiener, E. L. & Nagel, D. C. (eds.), Human Factors in Aviation (pp. 229-261). New York: Academic Press, Inc. Carretta, T. R. & Dunlap, R. D. (1998). Transfer of training effectiveness in flight simulation. Brooks Air Force Base, TX: Air Force Research Laboratory.
- 40 Carver, T. (1991). Training transfer: A question of quality. The Royal Aeronautical Society Conference, London, UK, 1.1-1.6. Chapanis, Alphonse (1996). Human Factors in Systems Engineering. New York: John Wiley and Sons, Inc. Chappell, S. L. & Sexton, G. A. (1986). Advanced concepts flight simulation facility. Applied Ergonomics, 17(4), pp. 252-256. Chelton Flight Systems. (n.d.) Flightlogic features/benefits. Retrieved January 31, 2005, from http://www.cheltonflightsystems.com/Prod_cert_features.html Cheng, E. W. L. Ho, D. C. K. (2001). A review of transfer of training studies in the past decade. Personnel Review, 30(1), 102. Cho, Y. & Nagati, M. G. (1995). Flight Simulator Modeling using neural networks with spin flight test data. In AIAA 33rd Aerospace Sciences Meeting and Exhibit proceedings, Reno, NV, January 9-12, 1995, AIAA paper 95-0560. Cohen, M. M. (1992). Transfer of training for aerospace operations: How to measure, validate, and improve it. In 6th Annual Workshop on Space Operations Applications and Research (SOAR ’92) Proceedings, Houston, TX, pp. 482-487. Colquitt, J. A., LePine, J. A. & Noe, R. A. (2000). Toward an integrated theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85(5), 678-707. Control tower simulator will combat potential traffic problems (1998). AIAA Student Journal, 36(2), pp. 6, 7. Connolly, T. J., Blackwell, B. B. & Lester, L. F. (1989). A simulator –based approach to training in aeronautical decision making. Aviation, Space and Environmental Medicine, 60, 50-52. Connoly-Gomez, C., & Goettl, B. P. (1995). Investigating tradeoffs between practice and observation in automated instruction. In the 39th Human Factors and Ergonomics Society Annual Meeting, San Diego, CA, pp. 1340-1344. Cormier S. M. & Hagman J. D. (1987). Transfer of learning. Contemporary Research and Application, London: Academic Press. Damos, D. L. (1991). Examining transfer of training using curve fitting: A second look. The International Journal of Aviation Psychology, 1(1), 73-85. Darken, R. P. & Banker, W. P. (1998). Navigating in natural environments: A virtual environment training transfer study. Proceedings of VRAIS, 12-19. Dennis, K. A. & Harris, D. (1998). Computer-based simulation as an adjunct to ab initio flight training. International Journal of Aviation Psychology, 8(3), 261-270. Dohme, J. (1992). Transfer of training and simulator qualification or myth and folklore in helicopter simulation (N93-30687). In the NASA/FAA Helicopter Simulator Workshop Proceedings, 115-121. Driskell, J. E., Copper, C. & Willis, R. P. (1992). Effect of over learning on retention. Journal of Applied Psychology, 77(5), 615-622. Duncan, J. C., & Feterle, L. C. (2000). The use of personal computer-based aviation training devices to teach aircrew decision-making, teamwork, and resource management. In Proceedings of IEEE 2000 National Aerospace and Electronics Conference, Dayton, OH, pp. 421-426. Dwyer, J.D., Oser, R.L., Salas, E., & Fowlkes J.E. (1999). Performance measurement in
- 41 distributed environments: Initial results and implications for training. Military Psychology, 11(2), 189-215. Ed Link – Father of Aviation (2004). Retrieved on November 11, 2004 from: http://inventors.about.com/cs/inventorsalphabet/a/ed_limk.htm FAA/Industry Training Standards. (2004). FITS Curriculum Recognition Criteria. Retrieved January 28, 2005, from http://learn.aero.und.edu/pages.asp?PageID=13814 Farrell, M. J., Arnold, P., Pettifer, S., Adams, J., Graham, T. & MacManamon, M. (2003). Transfer of route leanring from virtual to real environments. Journal of Experimental Psychology: Applied, 9(4), 219-227. Fanjoy, R. O. & Young, J. P. (2001). The Transfer of Flight Training Proceedures from an advanced airline flight simulator to the classroom. Collegiate Aviation Review, 19(1), pp.41-48. Federal Aviation Administration (1991). Airplane Simulator Qualification (Advisory Circular 120-40B). Washington, DC: U.S. Government Printing Office. Federal Aviation Administration (2005). PART 121 – Operating Requirements: Domestic, Flag, and Supplemental Operations, Appendix H –Advanced Simulation (FAR Part 121, Title 14, Subpart X, Appendix H). Retrieved from Government Printing Office website on April 6, 2005 at: Http://ecfr.gpoaccess.gov Ferguson, S. W., Clement, W. F., Hoh, R. H. & Cleveland, W.B. (1985). Assessment of simulation fidelity using measurements of piloting technique in flight: Part II. 41st Annual Forum of the American Helicopter Society, Ft. Worth, TX, 1-23. Ferranti, M. J. & Silder, S. H. (1992). Full mission simulation: A view into the future. In the proceeding of the AGARD Conferences: Pilot Simulation Effectiveness, Brussels, Belgium, pp 15.1-15.10. Fernie, A. (1996). Improvements in area of interest helmet mounted displays. In Training-Lowering the Cost, Maintaining Fidelity: Proceedings of the Royal Aeronautical Society, London, UK, pp. 21.1-21.4. Field, E. J., Armor, J. B., & Rossitto, K. F. (2002). Comparison of in-flight and ground based simulations of large aircraft flying qualities., In AIAA Atmospheric Flight Mechanics Conference and Exhibit, Monterey, CA, AIAA 2002-4800. Fink, C., & Shriver, E. (1978). Simulators for maintenance training: Some issues, problems and areas for future research (Tech. Rep. No. AFHRL-TR-78-27). Lowery Air Force Base, CO: Air Force Human Resources Laboratory. Fitzgerald, T. R. & Brunner, M. T. (1992). Use of high fidelity simulation in the development of an F/A-18 active ground collision avoidance system (AD-P006 855). In the proceeding of the AGARD Conferences: Pilot Simulation Effectiveness, Brussels, Beligium, pp. 8.1-8.13. Ford, J. K. (1997). Transfer of training: The criterion problem. Applied Psychology, 46(4), 349-354. Fortin, M. (1989). Cost/Performance trade-offs in visual simulation. The Royal Aeronautical Society Conference on Flight Simulation Assessing the Benefits and Economics, London, UK, 19.1-19.15. Foxon, M. (1993). A process approach to the transfer of training. Australian Journal of Educational Technology, 9(2), 130-143.
- 42 Garavaglia, P. L. (1993). How to ensure transfer of training. Training and Development, 47(10), 63-68. Garrison, P. (1985). Flying Without Wings: A Flight Simulation Manual. Blue Ridge Summit, PA: TAB Books, Inc. Gawron, V. J., Bailey, R. & Lehman, E. (1995). Lessons learned in applying Simulators to crew station evaluation. International Journal of Aviation Psychology, 5(3), pp.277-290. Gawron, V. J., & Reynolds, P. A. (1995). When in-flight simulation is necessary. International Journal of Aviation Psychology, 5(3), 411-415. Gentner, D., Loewenstein & Thompson, L. (2003). Learning and transfer: A general role for analogical encoding. Journal of Educational Psychology, 95(2), 393-408. Goettl, B. P. & Shute, V. J. (1996). Analysis of part task training using backward transfer technique. Journal of Experimental Psychology: Applied, 2(3), pp. 227-249. Goldstein. I. L. (1993). Training in organizations. Pacific Grove, CA: Brooks-Cole. Gopher, D., Weil, M. & Barket, T. (1994). Transfer of skill from computer game trainer to flight. Human Factors, 36(3), 386-405. Groen, E. L., Hosman, R. J., & Dominicus, J. W. (2003). Motion fidelity during a simulated takeoff. 2003 AIAA modeling and simulation technologies conference and exhibit, Austin, TX, AIAA Paper 2003-5680. Gross D. C & Freeman R. (1997). Measuring fidelity differentials in HLA simulations. Fall 1997 Simulation Interoperability Workshop. Gross D. C., Pace D., Harmoon, S., & Tucker, W. (1999). Why fidelity? Spring 1999 Simulation Interoperability Workshop. Hammerton, M. (1963). Transfer of training from a simulated to a real control situation (Transfer of training from simulated to real control situation). Journal of Experimental Psychology, 66, pp. 450-453. Hammerton, M. (1966). Factors affecting the use of simulators for training. Proceedings of the IEE Conference, 113(11), 1881-1884. Hampton, S., & Moroney, W. F. (1998). Personal computer based training devices – reality or fiction in the USA? In Low Cost Simulation - New opportunities in flight training; proceedings of the conference, London, United Kingdom, pp. 4.1-4.9. Hawkes, R., Rushton, S., & Smyth, M. (1995). Update rates and fidelity in virtual environments. Virtual Reality: Research, Applications & Design, 1(2), 1-15. Hays, R. T., Jacobs, J. W., Prince, C. & Salas, E. (1992). Flight simulator training Effectiveness: A meta-analysis. Military Psychology, 4(2), 63-74. Hays, R.T., & Singer, M.J. (1989). Simulation fidelity in training system design. New York: Springer-Verlag. Heesbeen, B., Hoekstra, J., & Clari, M. V. (2003). Traffic manager traffic simulation for validation of future ATM concepts. 2003 AIAA modeling and simulation technologies conference and exhibit, Austin, TX, AIAA Paper 2003-5368. Hess, R. A. & Siwakosit, W. (2001). Assessment of flight simulator fidelity in multiaxis tasks including visual cue quality. Journal of Aircraft, 38(4), 607-614. Holladay, C. L. & Quinones, M. A. (2003). Practice variability and transfer of training: the role of self-efficacy generality. Journal of Applied Psychology, 88(6), 10941103.
Holzapfel, F., Sturhan, I., & Sachs, G. (2004). Pilot-in-the-loop flight simulation - A low-cost approach. AIAA Modeling and Simulation Technologies Conference and Exhibit, Providence, RI, AIAA Paper 2004-5156.
Howard-Jones, T. (1999). New-generation ATC simulators designed to provide quicker, more cost-effective training. ICAO Journal, 54(3), 22-24, 34.
Howells, P. B. (1996). The use of distributed interactive simulation and synthetic environments in civil aviation training. In The Progress and Direction of Distributed Interactive Simulations: Proceedings of the Conference, London, United Kingdom, pp. 5.1-5.8.
Huff, E. M., & Nagel, D. C. (1975). Psychological aspects of aeronautical flight simulation. American Psychologist, 30(3), 426-439.
Huffaker, D. A., & Calvert, S. L. (2003). The new science of learning: Active learning, metacognition, and transfer of knowledge in e-learning applications. Journal of Educational Computing Research, 29(3), 325-334.
Hughes, T., & Rolek, E. (2003). Fidelity and validity: Issues of human behavioral representation requirements development. In Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA.
Jacobs, R. S. (1975). Simulator motion as a factor in flight simulator training effectiveness. Washington, DC: U.S. Department of Health, Education and Welfare, National Institute of Education.
Jacobs, R. S., & Roscoe, S. (1977). Simulator cockpit motion and the transfer of initial flight training. Dissertation Abstracts International, 37(10-B), 5398.
Jensen, D. (2002). ATC simulation - USAF takes a giant step. Avionics Magazine, 26(8), 17-20.
Jentsch, F., & Bowers, C. A. (1998). Evidence of the validity of PC-based simulations in studying aircrew coordination. International Journal of Aviation Psychology, 8(3), 243-260.
Johnson, S. D. (1995). Transfer of learning. The Technology Teacher, 54, 33-34.
Jones, D., & Grandillo, R. (1996). Distributed simulation for air traffic control simulators and flight simulators. In The Progress and Direction of Distributed Interactive Simulations: Proceedings of the Conference, London, United Kingdom, pp. 8.1-8.12.
Jorna, P. G. A. M., Van Kleef, E. R. A., & De Boer, W. P. (1992). Aircraft simulation and pilot proficiency: From surrogate flying towards effective training. In Proceedings of the AGARD Conference: Pilot Simulation Effectiveness, Brussels, Belgium, pp. 10.1-10.8.
Kaempf, G. L., & Blackwell, N. J. (1990). Transfer-of-training study of emergency touchdown maneuvers in the AH-1 flight and weapons simulator (Research Report 1561). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Kaiser, M. K., & Schroeder, J. A. (2003). Flights of fancy: The art and science of flight simulation. In M. A. Vidulich & P. S. Tsang (Eds.), Principles and Practice of Aviation Psychology (pp. 435-471). Mahwah, NJ: Lawrence Erlbaum Associates.
Kelly, P. R. (1998). Transfer of learning from computer simulation as compared to a laboratory activity. Journal of Educational Technology Systems, 28(4), 345-351.
Kim, W., & Schenker, P. (1992). A teleoperation training simulator with visual and kinesthetic force virtual reality. In Human Vision, Visual Processing, and Digital Display III: Proceedings of the Meeting, San Jose, CA, pp. 560-569.
Kinkade, R., & Wheaton, G. (1972). Training device design. In H. Van Cott & R. Kinkade (Eds.), Human engineering guide to equipment design (pp. 668-699). Washington, DC: Department of Defense.
Kirkpatrick (1996). Evaluation. In R. L. Craig (Ed.), The ASTD Training and Development Handbook. New York: McGraw-Hill.
Kneebone, R. L., Nestel, D., Moorthy, K., Taylor, P., Bann, S., Munz, Y., et al. (2003). Learning the skills of flexible sigmoidoscopy - the wider perspective. Medical Education, 37(1), 50-58.
Koonce, J. M., & Bramble, W. J. (1998). Personal computer-based flight training devices. International Journal of Aviation Psychology, 8(3), 277-292.
Kozak, J. J., Hancock, P. A., Arthur, E. J., & Chrysler, S. T. (1993). Transfer of training from virtual reality. Ergonomics, 36(7), 777-784.
Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78, 311-328.
Kurts, D., & Gainer, C. (1992). The use of a dedicated testbed to evaluate simulator training effectiveness. In Proceedings of the AGARD Conference: Pilot Simulation Effectiveness, Brussels, Belgium, pp. 11.1-11.9.
Lahiri, A. (2005). Significance of qualitative handling assessment approach towards motion system requirements for flight simulators. Retrieved February 24, 2005, from www.faa.gov/nsp/documents/arnab_lahiri.rtf
Lansdaal, M., Lewis, L., & Bezdek, W. (2004). The history of commercial simulators and the Boeing 777 systems integration lab. AIAA Modeling and Simulation Technologies Conference and Exhibit, Providence, RI, AIAA Paper 2004-5150.
Lapiska, C., Ross, L., & Smart, D. (1993). Flight simulation: An overview. Aerospace America, 31(8), 14-17.
Lee, A. T., & Bussolari, S. R. (1989). Flight simulator platform motion and transport pilot training. Aviation, Space, and Environmental Medicine, February 1989, 136-140.
Lehmer, R. D., & Chung, W. W. Y. (1999). Image dynamic measurement system (IDMS-2) for flight simulation fidelity verification. American Institute of Aeronautics and Astronautics, 137-143.
Lintern, G. (1991). An informational perspective on skill transfer in flight training. Human Factors, 33, 251-266.
Lintern, G., & McMillan, G. (1993). Transfer for flight simulation. Aviation Instruction and Training, 130-162.
Lintern, G., Roscoe, S. N., Koonce, J. M., & Segal, L. D. (1990). Transfer of landing skills in beginning flight training. Human Factors, 32(3), 319-327.
Lintern, G., Roscoe, S. N., & Sivier, J. E. (1990). Display principles, control dynamics, and environmental factors in pilot performance and transfer of training. Human Factors, 32, 299-317.
Lintern, G., Taylor, H. L., Koonce, J. M., Kaiser, R. H., & Morrison, G. A. (1997). Transfer and quasi-transfer effects of scene detail and visual augmentation in landing training. International Journal of Aviation Psychology, 7(2), 149-169.
Liu, D., & Vincenzi, D. A. (2004). Measuring simulation fidelity: A conceptual study. In Proceedings of the 2nd Human Performance, Situation Awareness and Automation Conference (HPSAA II), Daytona Beach, FL (pp. 160-165). Mahwah, NJ: Lawrence Erlbaum Associates.
Logan, A. I. (1995). Training beyond reality. In Symposium on Space Mission Operations, Saint-Hubert, Canada, pp. 159-168.
Longridge, T., Ray, P., Boothe, E., & Bürki-Cohen, J. (1996). Initiative towards more affordable flight simulators for U.S. commuter airline training. In Training - Lowering the Cost, Maintaining Fidelity: Proceedings from the Royal Aeronautical Society, London, UK, pp. 2.1-2.16.
Loftin, R. B., Wang, & Luibaffes (1989). AIAA Computers in Aerospace Conference, 7th, Technical Papers, Part 2 (A90-10476 01-59) (pp. 581-588). Monterey, CA: American Institute of Aeronautics and Astronautics.
Lowry, K. D. (2000). United States probation/pretrial officers' concerns about victimization and officer safety training. Federal Probation, 64(1), 51-55.
Mania, K., et al. (2003). Fidelity metrics for virtual environment simulations based on spatial memory awareness states. Presence: Teleoperators and Virtual Environments, 12(3), 296-310.
Matheny, W. G. (1975). Investigation of the performance equivalence method for determining training simulator and training methods requirements (AIAA Paper 75-108). 13th Annual AIAA Aerospace Sciences Meeting. Hurst, TX: Life Sciences, Inc.
Mattoon, J. S. (1994). Designing instructional simulations: Effects of instructional control and type of training on developing display-interpretation skills. International Journal of Aviation Psychology, 4(3), 189-209.
McCloy, T. M., & Koonce, J. M. (1981). Sex differences in the transfer of training of basic flight skills. Proceedings of the 25th Annual Meeting of the Human Factors Society, 654-655.
McCloy, T. M., & Swiney, J. F. (1982). Gender differences and cognitive abilities in the transfer of training of basic flying skills. Proceedings of the 26th Annual Meeting of the Human Factors Society, 602-604.
Mikula, J. (2004). Modeling and simulation. Aerospace America, 42(12), 80.
Mogford, R. H. (1997). Mental models and situational awareness in air traffic control. International Journal of Aviation Psychology, 7(4), 331-341.
Moraal, J. (1981). Evaluating simulator validity. In Proceedings of the Conference: Manned Systems Design: Methods, Equipment, and Applications, Freiburg im Breisgau, West Germany, pp. 411-426.
Morris, A., & Van Hemel, P. E. (1992). The evaluation of simulator effectiveness for the training of high-speed, low-speed, tactical flight operations. In Proceedings of the AGARD Conference: Pilot Simulation Effectiveness, Belgium, pp. 17.1-17.11.
Moshell, J. M., & Hughes, C. E. (1996). The Virtual Academy: A simulated environment for constructionist learning. International Journal of Human-Computer Interaction, 8(1), 95-110.
Mulder, J. A., van Paassen, M. M., & Mulder, M. (2004). Perspective guidance displays show the way ahead. Journal of Aerospace Computing, Information, and Communication, 1(11), 428-431.
Mumenthaler, M. S., O'Hare, R., Taylor, J. L., Friedman, L., & Yesavage, J. A. (2001). Influence of the menstrual cycle on flight simulator performance after alcohol ingestion. Journal of Studies on Alcohol, 64(2), 422.
NASA Facts Online. (2000). Glass cockpit fact sheet (FS-2000-06-43-LaRC). Retrieved January 31, 2005, from http://oea.larc.nasa.gov/PAIS/Glasscockpit.html
Nataupsky, M., Waag, W. L., Weyer, D. C., McFadden, R. W., & McDowell, E. (1979). Platform motion contributions to simulator training effectiveness: Study III - Interaction of motion with field of view (AFHRL-TR-79-25). Williams Air Force Base, AZ: Air Force Human Resources Laboratory, Flying Training Division.
Nemire, K., Jacoby, R. H., & Ellis, S. R. (1994). Simulation fidelity of a virtual environment display. Human Factors, 36(1), 79-93.
Noble, C. (2002). The relationship between fidelity and learning in aviation training and assessment. Journal of Air Transportation, 7(3), 34-54.
Olofsson, S. (1976). Training and evaluation role for ATC simulators. Airports International, February-March 1976, 16-18.
Ortiz, G. A. (1994). Effectiveness of PC-based flight simulation. International Journal of Aviation Psychology, 4(3), 285-291.
Oser, R. L., Cannon-Bowers, J. A., Salas, E., & Dwyer, D. J. (1999). Enhancing human performance in technology rich environments: Guidelines for scenario-based training. In E. Salas (Ed.), Human/Technology Interactions in Complex Systems (pp. 175-202). US: Elsevier Science/JAI Press.
Oser, R. L., Gualtieri, J. A., Cannon-Bowers, J. A., & Salas, E. (1999b). Training team problem solving skills: An event-based approach. Computers in Human Behavior, 15, 441-462.
Osgood, C. E. (1949). The similarity paradox in human learning: A resolution. Psychological Review, 56, 132-143.
Parrish, R. V., McKissick, B. T., & Ashworth, B. R. (1983). Comparison of simulator fidelity model predictions with in-simulator evaluation data (Technical Paper 2106). Hampton, VA: NASA Langley Research Center.
Payne, T. A. (1982). Conducting studies of transfer of learning: A practical guide (AFHRL-TR-81-25). Williams Air Force Base, AZ: Operations Training Division.
Pisanich, G. (2000). PC flight simulators - don't call them games anymore. In Flight Simulation - The Next Decade: Proceedings of the Conference, pp. 22.1-22.3. London, United Kingdom: Royal Aeronautical Society.
Pohlman, D. L., & Edwards, B. J. (1983). Desk top trainer: Transfer of training of an aircrew procedural task. Journal of Computer-Based Instruction, 10(3/4), 62-65.
Pouliot, N. A., Gosselin, C. M., & Nahon, M. A. (1998). Motion simulation capabilities of three-degree-of-freedom flight simulators. Journal of Aircraft, 35(1), 9-18.
Povenmire, H. K., & Roscoe, S. N. (1973). Incremental transfer effectiveness of a ground-based general aviation trainer. Human Factors, 15(6), 534-542.
Ramesh, R., & Cheickna, S. (1990). A model for instructor training analysis in simulation-based flight training. IEEE Transactions on Systems, Man, and Cybernetics, 20(5), 1070-1080.
Ray, P. A. (1996). Quality flight simulation cueing - why? AIAA Flight Simulation Technologies Conference, San Diego, CA, 138-147.
Ray, P. A. (2000). Flight simulator standards - A case for review, revalidation and modernization. In Flight Simulation - A Decade of Regulatory Change: Proceedings of the Conference (pp. 12.1-12.11). London, United Kingdom.
Ray, P. A. (2000). Is today's flight simulator prepared for tomorrow's requirements? In Flight Simulation - The Next Decade: Proceedings of the Conference, London, United Kingdom, pp. 16.1-16.8.
Rayman, R. B. (1982). Negative transfer: A threat to flying safety. Aviation, Space, and Environmental Medicine, 53(12), 1224-1226.
Reed, E. T. (1996). The aerial gunner and scanner simulator: "Affordable virtual reality training for aircrews." In Training - Lowering the Cost, Maintaining Fidelity: Proceedings from the Royal Aeronautical Society, London, UK, pp. 18.1-18.15.
Rehmann, A. J., Mitman, R. D., & Reynolds, M. C. (1995). A handbook of flight simulation fidelity requirements for human factors research (DOT/FAA/CT-TN95/46). Wright-Patterson AFB, OH: Crew System Ergonomics Information Analysis Center.
Rhodenizer, L. (1998). It is not how much you have but how you use it: Towards a rational use of simulation to support aviation training. International Journal of Aviation Psychology, 8(3), 197-208.
Rinalducci, E. (1996). Characteristics of visual fidelity in the virtual environment. Presence, 5(3), 330-345.
Ringenbach, D. P., Louton, S. E., & Gleason, D. (1992). Use of simulation in the USAF Test Pilot School curriculum. In Proceedings of the AGARD Conference: Pilot Simulation Effectiveness, Brussels, Belgium, pp. 13.1-13.7.
Roenker, D., Cissell, G., Ball, K., Wadley, V., & Edwards, J. (2003). Speed-of-processing and driving simulator training result in improved driving performance. Human Factors, 45(2), 218-233.
Roessingh, J. J. M. (2005). Transfer of manual flying skills from PC-based simulation to actual flight - Comparison of in-flight measured data and instructor ratings. International Journal of Aviation Psychology, 15(1), 67-90.
Rogalski, J. (1995). From real situations to training situations: Conservation of functionalities. In J. Hoc, P. C. Cacciabue, & E. Hollnagel (Eds.), Expertise and Technology: Cognition and Human-Computer Cooperation (pp. 125-139). Hillsdale, NJ: Lawrence Erlbaum Associates.
Rolfe, J. M. (1982). Determining the training effectiveness of flight simulators: Some basic issues and practical developments. Applied Ergonomics, 13(4), 243-250.
Rolfe, J. M. (1991). Transfer of training. The Royal Aeronautical Society Conference, London, UK, pp. 1.1-1.8.
Roza, M. (2000). Fidelity considerations for civil aviation distributed simulations. AIAA Modeling and Simulation Technologies Conference and Exhibit, Denver, CO.
Roza, M., et al. (2001). Defining, specifying and developing fidelity referents. Proceedings of the 2001 European Simulation Interoperability Workshop, London, UK.
Sadlowe, A. R. (Ed.). (1991). PC-Based Instrument Flight Simulation: A First Collection of Papers. New York: American Society of Mechanical Engineers.
Salas, E., & Cannon-Bowers, J. A. (2001). The science of training: A decade of progress. Annual Review of Psychology, 52, 471-499.
Sauer, J., Hockey, G. R., & Wastell, D. G. (2000). Effects of training on short and long term skill retention in a complex multiple task environment. Ergonomics, 43(12), 2043-2063.
Scharr, T. M. (2002). Interactive video training for firearms safety. Federal Probation, 65(2), 45-51.
Schilling, L. J., & Mackall, D. A. (1993). Flight research simulation takes off. Aerospace America, 31(8), 18.
Schneider, W. (1985). Training high-performance skills: Fallacies and guidelines. Human Factors, 27(3), 285-300.
Schricker, B. C., et al. (2001). Fidelity evaluation framework. Proceedings of the IEEE 34th Annual Simulation Symposium, Seattle, WA, 21.
Schroeder, J. A., & Chung, W. W. Y. (2000). Simulator platform motion effects on pilot-induced oscillation prediction. Journal of Guidance, Control, and Dynamics, 23(3), 438-444.
Sebrechts, M. M., Lathan, C., Clawson, D. M., Miller, M. S., & Trepagnier, C. (2003). Transfer of training in virtual environments: Issues for human performance. In L. Hettinger & M. Haas (Eds.), Virtual and Adaptive Environments: Applications, Implications, and Human Performance Issues. Mahwah, NJ: Lawrence Erlbaum Associates.
Shihaleyev, V. N., & Miheyev, Y. V. (1996). Methods and criteria of simulators versus aircraft similarity evaluations. In Training - Lowering the Cost, Maintaining Fidelity: Proceedings from the Royal Aeronautical Society, London, UK, pp. 12.1-12.4.
Simon, C. W., & Roscoe, S. N. (1984). Application of a multifactor approach to transfer of training research. Human Factors, 26(5), 591-612.
Smith, J., & Van Patten, R. E. (1992). G-tolerance and spatial disorientation: Can simulation help us? In Proceedings of the AGARD Conference: Pilot Simulation Effectiveness, Brussels, Belgium, pp. 12.1-12.7.
Smith, R. C., & Melton, C. E. (1976). Effects of ground trainer use on the psychological and physiological states of students in private pilot training (FAA-AM-76-2). Oklahoma City, OK: FAA Civil Aeromedical Institute.
Smith, G. R., Sr., & Smith, G. R., Jr. (2000). A virtual reality flight trainer for the UAV remote pilot. In Proceedings of the Helmet- and Head-Mounted Displays V Conference, Orlando, FL, pp. 224-233.
Smith-Jentsch, K. A., Salas, E., & Brannick, M. T. (2001). To transfer or not to transfer? Investigating the combined effects of trainee characteristics, team leader support, and team climate. Journal of Applied Psychology, 86(2), 279-292.
Spears, W. (1985). Measurement of learning and transfer using curve fitting. Human Factors, 27, 251-266.
Stark, E. A. (1982). Simulation technology and the fixation phase. Aviation, Space, and Environmental Medicine, October 1982, 984-991.
Stewart, J. E., II, Dohme, J. A., & Nullmeyer, R. T. (2002). U.S. Army initial entry rotary-wing transfer of training research. International Journal of Aviation Psychology, 12(4), 359-375.
Sticha, P. J. (1987). Models of procedural control for human performance simulation. Human Factors, 29(4), 421-432.
Stoffregen, T. A., Nelson, G. K., Pagulayan, R. J., & Bardy, B. G. (1999). Realism and reality in flight simulation. 10th International Symposium on Aviation Psychology Proceedings, Columbus, OH, pp. 1224-1229.
Stout, R. J., Salas, E., Merket, D. C., & Bowers, C. A. (1998). Low-cost simulation and military aviation team training. In Proceedings of the American Institute of Aeronautics and Astronautics Modeling and Simulation Conference, Orlando, FL.
Szczepanski, C. (2003). Simulator technology in optimising the human-automated system interface (ADA422307). Warsaw, Poland: Warsaw University of Technology & Poland Institute of Aeronautics and Applied Mechanics.
Tannenbaum, S. I., Cannon-Bowers, J. A., Salas, E., & Mathieu, J. E. (1993). Factors that influence training effectiveness: A conceptual model and longitudinal analysis (Tech. Report 93-011). Albany, NY: US Naval Training Systems Center.
Taylor, H. L., Lintern, G., Hulin, C. L., Talleur, D. A., Emanuel, T. W., & Phillips, S. I. (1999). Transfer of training effectiveness of a personal computer aviation training device. International Journal of Aviation Psychology, 9(4), 319-335.
Taylor, H. L., Lintern, G., & Koonce, J. M. (1993). Quasi-transfer as a predictor of transfer from simulator to airplane. Journal of General Psychology, 120(3), 257-276.
Taylor, J. L., O'Hare, R., Mumenthaler, M. S., & Yesavage, J. A. (2000). Relationship of CogScreen-AE to flight simulator performance and pilot age. Aviation, Space, and Environmental Medicine, 71(4), 373-380.
Teunissen, R. (1999). The future of simulation and training. In Can Flight Simulation Do Everything? Flight Simulation Capabilities, Capability Gaps and Ways to Fill the Gaps: Proceedings of the Conference, London, United Kingdom, pp. 6.1-6.11.
Thomas, T. G. (2004). From virtual to visual and back? In AIAA Modeling and Simulation Technologies Conference and Exhibit, Providence, RI, AIAA Paper 2004-5146.
Thompson, D. R. (1988-89). Transfer of training from simulators to operational equipment - Are simulators effective? Journal of Educational Technology Systems, 17(3), 213-218.
Thorndike, E. L., & Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review, 8, 247-261.
Thorndike, E. L. (1903). Educational psychology. New York: Lemcke & Buechner.
Tydeman, R., & Kullar, I. (1995). Test and acceptance of flight simulators. In Flight Simulation Technology, Capabilities and Benefits: Proceedings of the Conference, London, United Kingdom, pp. 15.1-15.10.
Valverde, H. H. (1973). A review of flight simulator transfer of training studies. Human Factors, 15(6), 510-523.
Vandervliet, G., Miller, C., & Croisetiere, M. P. (1992). Shipboard mission training effectiveness of the Naval Air Test Center's V-22 Government Test Pilot Trainer. In Proceedings of the AGARD Conference: Pilot Simulation Effectiveness, Brussels, Belgium, pp. 20.1-20.12.
Veltman, J. A. (2002). A comparative study of psychophysiological reactions during simulator and real flight. International Journal of Aviation Psychology, 12(1), 33-48.
Vermeulen, R. C. M. (2002). Narrowing the transfer gap: The advantages of "as if" situations in training. Journal of European Industrial Training, 26(8/9), 366-374.
Vince, J., & Jarvice, K. (1996). Virtual reality in flight training: Assessing the situation. In Training - Lowering the Cost, Maintaining Fidelity: Proceedings from the Royal Aeronautical Society, London, UK, pp. 15.1-15.8.
Vreuls, D., & Obermayer, R. W. (1985). Human-system performance measurement in training simulators. Human Factors, 27, 214-250.
Warr, P., & Bunce, D. (1995). Trainee characteristics and the outcomes of open learning. Personnel Psychology, 48, 347-375.
Weitzman, D. O., Fineberg, M. L., & Compton, G. L. (1978). Evaluation of a flight simulator (Device 2B24) for maintaining instrument proficiency among instrument-rated pilots (Technical Paper 298). Alexandria, VA: Army Research Institute for the Behavioral and Social Sciences.
Weitzman, D. O., Fineberg, M. L., Gade, P. A., & Compton, G. L. (1979). Proficiency maintenance and assessment in an instrument flight simulator. Human Factors, 21(6), 701-710.
Westra, D. P., Lintern, G., Sheppard, D. J., Thomley, K. E., Mauk, R., Wightman, D. C., & Chambers, W. S. (1986). Features for a carrier landing: A field transfer study (A269961). Orlando, FL: Essex Corporation.
White, A. D., & Rodchenko, V. V. (1999). Motion fidelity criteria based on human perception and performance. AIAA Modeling and Simulation Technologies Conference and Exhibit, Portland, OR, pp. 485-493 (AIAA Paper 99-4330).
Wick, R. L., Billings, C. E., Gerke, R. J., & Chase, R. C. (1974). Aircraft simulator transfer problem (AMRL-TR-74-68). Wright-Patterson Air Force Base, OH: Aerospace Medical Research Laboratory.
Wickens, C. D., Lee, J. D., Liu, Y., & Becker, S. E. G. (2004). Introduction to Human Factors Engineering. Upper Saddle River, NJ: Pearson Prentice Hall.
Wilks, B., De Repentigny, L., & Pearson, G. (2003). Using simulation as an effective runway incursion prevention strategy. In Proceedings of the 2003 AIAA Modeling and Simulation Technologies Conference and Exhibit, Austin, TX, AIAA Paper 2003-5599.
Williams, G. W., Lawrence, K., & Weeks, R. (2004). Modeling and simulation technologies: Reconfigurable flight simulators in modeling and simulation. Edwards AFB, CA: 412th Test Wing (NTIS No. PR04163; ADA426650).
Williams, K. W., & Blanchard, R. E. (1995). Development of qualification guidelines for personal computer-based aviation training devices (DOT/FAA/AM-95/6). Oklahoma City, OK: FAA Office of Aviation Medicine.
Williges, B. H., Roscoe, S. N., & Williges, R. C. (1973). Synthetic flight training revisited. Human Factors, 15(6), 543-560.
Wilson, P. N., & Forman, N. (1997). Transfer of spatial information from virtual to real environment. Human Factors, 39(4), 526-531.
Wolfe, J. (1998). New developments in the use of simulations and games for learning. Journal of Workplace Learning, 10(6/7), 310.
Xiao, J. (1996). The relationship between organizational factors and the transfer of training in the electronics industry in Shenzhen, China. Human Resources Development Quarterly, 7(1), 55-74.
Yesavage, J. A., Dolhert, N., & Taylor, J. L. (1994). Flight simulator performance of younger and older aircraft pilots: Effects of age and alcohol. Journal of the American Geriatrics Society, 42(6), 577-582.
Yu, Zhi-gang (1997). Inquiry into concepts of flight simulation fidelity. In S. Sivasundaram (Ed.), First International Conference on Nonlinear Problems in Aviation and Aerospace, USA, 679-685.
Zeyada, Y., & Hess, R. A. (2003). Computer-aided assessment of flight simulator fidelity. Journal of Aircraft, 40(1), 173-180.
Zhang, B. (1993). How to consider simulation fidelity and validity for an engineering simulator. American Institute of Aeronautics and Astronautics, 298-305.
Section II Use of Simulation for Surface Transportation Training: An annotated bibliography
Submitted by: Beth Blickensderfer, Ph. D. Dahai Liu, Ph. D. Angelica Hernandez Embry Riddle Aeronautical University
June 30, 2005
This effort was the second deliverable under the project -- Simulation-based Training: Applying lessons learned in aviation to surface transportation modes.
Review of Current Driver's Education and Training

Baldi, S., Baer, J. D., & Cook, A. L. (2005). Identifying best practices states in motorcycle rider education and licensing. Journal of Safety Research, 36(1), 19-32.
Minimal research has been conducted on the effectiveness and development of motorcycle rider education, even though education and licensing methods play a key role in reducing the number of accidents and injuries. This study investigates the rider education and licensing practices of various states to determine the best practices needed to construct an adequate training program. The review identifies three major components of a good program: effective administration, rider education, and licensing. States with excellent training program elements tended to have the lowest motorcycle fatality rates. The article describes 13 best training practices and suggests that states may benefit from reevaluating their current practices.

Harano, R. M., & Peck, R. C. (1972). The effectiveness of a uniform traffic school curriculum for negligent drivers. Accident Analysis and Prevention, 4, 13-45.
This study evaluated the effectiveness of using a uniform traffic school curriculum for the training of problematic drivers. As part of their sentence for a traffic violation, participants were randomly assigned to either a new traffic school curriculum or a traditional one; participants already enrolled in traffic schools with a traditional curriculum were also selected. Results showed a reduction in traffic accidents and violations for male participants trained with the new curriculum; there was no significant treatment effect for females. Some biases were introduced in the selection of participants, however, and the participants' previous driving records may have influenced the results. Future efforts may focus on modifying course content.

Hatakka, M., Keskinen, E., Gregersen, N. P., Glad, A., & Hernetkoski, K. (2002). From control of the vehicle to personal self-control; broadening the perspectives to driver education. Transportation Research Part F, 5(3), 201-215.
This paper provides guidelines and goals for the future development of driver training and education programs. The "Goals for Driver Education" (GDE) framework provides a hierarchical model of driver behavior along four levels: goals for life and skills for living, goals and context of driving, mastering traffic situations, and vehicle maneuvering. Two main conclusions may be drawn. First, driver education programs should address the motivational aspects of learning how to drive. Second, pedagogical methods should be evaluated to ensure that training goals are reached. The framework can be used to evaluate a training program as well as to frame training research questions.
Knapp, K., Walker, D., & Wildon, E. (2002). Challenges and strategies for local road safety training and technology transfer. Retrieved March 4, 2005, from the US Department of Transportation, Federal Highway Administration: http://safety.fhwa.dot.gov/training/challenges.htm
Increases in automobile accidents and fatalities require a unified response from both local and national government agencies. Effective road safety training is needed, and new technologies need to be assessed to see whether they may benefit current programs. This paper identifies some of the challenges involved in reforming the safety training system and suggests initiatives for improvement.

Lund, A. K., & Williams, A. F. (1985). A review of the literature evaluating the defensive driving course. Accident Analysis and Prevention, 17(6), 449-460.
Defensive driving courses have been created in an effort to reduce the number of driving offenses and accidents. This paper reviews 14 studies that investigated the effects of defensive driving programs. Overall, the studies suggest that defensive driving may not reduce the number of accidents but may reduce traffic violations by about 10 percent. There is little evidence that these programs reduce accident rates or change driver behavior. Future efforts may need to focus on reforming defensive driving programs or identifying other factors that contribute to accident rates.

Marek, J. M., & Sten, T. (Eds.). (1977). Traffic and the driver. Springfield, IL: Charles C. Thomas Books.
This book presents a review of driver education programs and how they affect traffic safety. Two questions are posed: first, is there a case for driver education at all? Second, what kind of driver education would best improve safety on the roads? The book not only dissects the content and methods involved in driver education programs but also evaluates their effectiveness. Also included are accident rate tables and a taxonomy of the literature on driving.
Roach, G., Taylor, M. A. P., & Dawson, D. (1999). A comparison of South Australia's driver-licensing methods: Competency-based training vs. practical examination. Transportation Research Part F, 2(2), 69-80.
In Australia, there are two main methods by which drivers may obtain a driver's license: completing a competency-based course or passing a practical examination. This study compared 267 participants who had received their license through either method. Participants completed surveys about their driving attitudes, behaviors, and experiences. Results showed that both licensing methods produced drivers with similar behaviors and attitudes; however, those who had completed the practical exam had higher self-ratings of skill than those who took the course. Licensing method was not found to be a good predictor of crash or driving offense rates. More research is needed to investigate the relationship between self-perceptions of safety and driving skill.

Modeling Driving Behavior

Anceaux, F., Pacaux, M., Halluin, N., Rajaonah, B., & Popieul, J. (2003). A methodological framework for assessing driving behavior. In L. Dorn (Ed.), Driver behavior and training (pp. 203-211). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
This chapter presents a study that aims to model the speed and distance management strategies that drivers use. Forty participants were given a variety of scenarios in a simulator to see how they would react. Using the Dynamic Situation Management model, driver cognitive activities were broken down into information elaboration activities, identification and diagnostic activities, and decision-making and implementation activities. Future work based on this framework may be used to develop technology to support speed and distance management, such as automated cruise control.

Al-Shihabi, T., & Mourant, R. R. (2003). Toward more realistic driving behavior models for autonomous vehicles in driving simulators. Transportation Research Record, No. 1843, 1-20.
See Appendix for detailed review.

Bellet, T., & Tattegrain-Veste, H. (1997). A framework for modeling driving knowledge. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 1 (pp. 371-378). Brookfield, USA: Ashgate.
See Appendix for detailed review.
Fuller, R. (1990). Learning to make errors: Evidence from a driving task simulation. Ergonomics, 33(10/11), 1241-1250.
Human error can be defined as a mismatch between what an operator does and what the system demands the operator do. Errors can originate in the design of the system or in the actions (or lack thereof) of the operator, and they may also develop over time through ongoing processes. Designers of driving interfaces must carefully manage error within the design of the system so that errors become hard to commit, easy to detect, and easy to manage. This paper investigates the phenomenon of human error in driving using simulation, with special emphasis on how errors may arise from faulty learning experiences. The author suggests that research efforts may be further aided by installing "black boxes," or in-car measurement devices, that can record driving behavior parameters in real time.
Harris, D., & Smith, F. (1997). What can be done versus what should be done: A critical evaluation of the transfer of human engineering solutions between application domains. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 1 (pp. 339-346). Brookfield, USA: Ashgate.
See Appendix for detailed review.

McInally, S. (2003). R3 - A model for motorcycle rider skill development. In L. Dorn (Ed.), Driver behavior and training (pp. 35-43). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
Motorcycle riding is a complex and risky activity that demands considerable attentional resources from the rider. Riders have to balance their perception of risk against their motivation to reach their destination, and this balance may be influenced by their skill level, experience, and motivation. This chapter describes a model of motorcycle riding behavior. The R3 model holds that three elements are always in play during a rider's journey: Risk, Reaction, and Review. A rider first perceives risk in the environment, develops an appropriate reaction, and then reviews his or her decision to see whether it was appropriately chosen and carried out. The model is used to describe how a rider may build up experience and skill over time.
Wickens, C. D., Lee, J. D., Liu, Y., & Becker, S. E. G. (2004). Introduction to human factors engineering. Upper Saddle River, NJ: Pearson Prentice Hall.
This book is an introductory review of human factors psychology. The topic of transportation is also covered, providing an introduction to transportation safety as it relates to the field of human factors. The task of driving is broken down into subtasks that a driver must execute in order to maintain safe control of the vehicle. Hazards, visibility, and the driving environment are discussed, along with how they may affect driver performance. Public transportation and transit receive extensive coverage, as do principles and guidelines for improving the transportation system.

Training Effectiveness and Goals

Allen, R. W., Park, G., Cook, M., Rosenthal, T. J., Fiorentino, D., & Viirre, E. (2003). Novice driver training results and experience with a PC-based simulator. In Second International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Park City, Utah, 165-170.
See Appendix for detailed review.

Baron, M. L., & Williges, R. C. (1975). Transfer effectiveness of a driving simulator. Human Factors, 17(1), 71-80.
This study aimed to determine the transfer effectiveness of an open-loop simulation using passive instruction in a driver education program. Forty high school students received various amounts of simulation or film-only training, and their performance was compared with that of licensed drivers. At first, licensed drivers performed better than students who had received six hours of simulation training; later in the experiment, however, the simulation group performed better. The implications of these findings may extend to the improvement of driver education and the use of simulation.

Brock, J. F., Jacobs, C., & Buchter, R. (2001). Design of a guidebook for the acquisition and use of driving simulators for training transit bus operators. In 2001 Driving Assessment Proceedings, Iowa City, Iowa, 177-182.
The Transportation Research Board has attempted to develop guidelines to help company managers determine whether simulators can benefit their training programs and, if so, what kind of simulators they should acquire. This paper provides a comparative analysis of three types of simulators: open-loop video simulators, low-end simulators, and mid-range simulators. These devices can help not only to train novice operators but also to retrain more experienced operators. Simulators may be compared based on the elements required by the training program, such as high fidelity, instructor stations, and different types of training.
Decina, L. E., Gish, K. W., Staplin, L., & Kirchner, A. H. (1996). Feasibility of new simulation technology to train novice drivers (DTNH22-95-C-05104). Washington, DC: National Highway Traffic Safety Administration/The Scientex Corporation.
This project examined the feasibility of using current simulation technology and other electronic devices to train driving skills. The report provides an extensive review of driver training issues and identifies training curriculum elements. A comparison is made of the different types of simulators and simulation technology. Recommendations include incorporating training elements such as hazard anticipation, motivation, and visual scanning techniques. The personal computer appears to be the most practical and viable device to use for training purposes. Further research may focus on human factors issues related to trainee feedback, fidelity, training effectiveness, and strategic learning.

Dorn, L. (Ed.). (2003a). Driver behavior and training. Aldershot, Hampshire, UK: Ashgate Publishing Limited.
The papers in this book were gathered from the First International Conference on Driver Behavior and Training in 2003. At the time, little research had been conducted on driver training as it relates to driver safety and accident risk. The book covers topics such as professional driver training, driver health and fatigue, vehicle technology, and driver training.

Dorn, L. (2003b). Hazard awareness and police driving performance. In Driver behavior and training (pp. 17-25). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
Police drivers see themselves as safer drivers who are more aware of hazards and of how to handle or minimize the consequences of those hazards. This report compares the performance of police-trained drivers to that of non-police-trained drivers to see whether their hazard awareness skills translate into safer performance. Fifty-four police-trained drivers and 52 non-police-trained drivers drove through three simulated conditions (rural roads, "link" sections that transition from rural to urban roads, and urban roads) and were instructed to perform an overtaking maneuver. Overall, police drivers displayed greater caution than the non-police-trained group by slowing their speed more and positioning themselves in the lane where the greatest visibility was achieved. This study lends credibility to the professional driver programs undertaken by law enforcement departments.

Dorn, L., & Barker, D. (2005). The effects of driver training on simulated driving performance. Accident Analysis and Prevention, 37(1), 63-69.
See Appendix for detailed review.
Down, S., Petford, J., & McHale, J. (1982). Learning techniques for driver training. International Review of Applied Psychology, 31, 511-522.
See Appendix for detailed review.

Falkmer, T., & Gregersen, N. P. (2003). The TRAINER project - The evaluation of a new simulator-based driver training methodology. In L. Dorn (Ed.), Driver behavior and training (pp. 317-330). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
This paper reports on the TRAINER project, which developed a driver training methodology based on computer-based interactive simulator technology. The methodology focused on experience building and on the risk awareness of experienced students. The paper is a case study of the project and covers the overall behavioral framework used, the design of the simulator, and the scenarios used. Training benefits have been shown using this simulator, and future work on improving training effectiveness is suggested.

Ivancic, K., & Hesketh, B. (2000). Learning from errors in a driving simulator. Ergonomics, 43(12), 1966-1984.
Two experiments investigated the effect of "error training" on driving performance and training transfer. Participants either were allowed to make errors during a driving simulation or were merely given examples of errors in driving skills in a classroom setting. In experiment one, participants who were allowed to make errors during simulation achieved better transfer to a regular driving test and learned more strategies for coping with error-producing driving scenarios. In experiment two, there was insufficient evidence to support the superiority of allowing participants to commit their own errors over giving them examples on analogous tests. The findings are discussed in depth and may provide trainers with a method for incorporating error training into driver education programs.

Janes, B. S., & Flyte, M. G. (1999). Information needs and strategies of older drivers for navigation on unfamiliar routes: A methodological approach. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 3 (pp. 297-304). Brookfield, USA: Ashgate.
This field study aimed to identify the type of information that older drivers require in order to navigate unfamiliar routes and roads. Younger and older drivers were instructed to navigate through two city routes with which they were unfamiliar. Some participants received maps of the routes they were driving and some did not. Cameras recorded the participants' progress, and surveys were given at the end of the experiment. Results focused on how information needs change as well as how timing affected those needs.
Strayer, D. L., & Drews, F. A. (2003). Simulator training improves driver efficiency: Transfer from the simulator to the real world. In Second International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Park City, Utah, 190-193.
See Appendix for detailed review.

Uhr, M. B. F., Felix, D., Williams, B. J., & Krueger, H. (2003). Transfer of training in a driving simulator: Comparisons between reality and simulation. In L. Dorn (Ed.), Driver behavior and training (pp. 339-348). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
Driving simulators have often been viewed with skepticism regarding their overall usefulness in driving programs. The aim of this study was to compare the training effectiveness of a simulator and a real driving environment. Forty-four inexperienced participants were asked to learn a specific driving maneuver in one of the two learning environments. Results showed equal learning success for both training conditions. Although the simulation was simplistic, adequate transfer from the simulator to the real environment was shown. The study highlighted the importance of providing feedback to the trainee in order to develop an effective training regimen.

Simulators, Research, and Classifications

Blana, E., & Golias, J. (2002). Differences between vehicle lateral displacement on the road and in the fixed-base simulator. Human Factors, 44(2), 303-313.
This experiment investigates the differences in lateral displacement while driving on curves or straight roads in an actual automobile versus a driving simulator. Two hundred participants were instructed to drive a rural route, half in an automobile and half in the driving simulator. Speed and lateral position were measured using video cameras. Results showed that real-road drivers had higher displacement than simulator drivers; however, these differences decreased as drivers accelerated and drove on a curve. Changes in lateral position may be due to differences in distance perception. More realistic driving simulators may minimize these differences in lateral displacement.
Brock, J. F., Jacobs, C., Van Cott, H., McCauley, M., & Norstrom, D. M. (2001). Simulators and bus safety: Guidelines for acquiring and using transit bus operator driving simulators (TCRP Report 72). Washington, DC: National Research Council, Transportation Research Board.
The purpose of this study is to highlight the problems facing transit bus operator training programs. The transit community faces a shortage of well-trained bus operators while at the same time trying to maintain optimum safety standards and practices. Simulation technology may provide one tool for training programs to produce highly qualified drivers. However, because this technology is relatively new, no guidelines exist to help trainers understand and use it effectively. This report provides an extensive review of simulators and simulation research as applied to training, and gives guidelines for acquiring and using driving simulators for training purposes.

Fupei, X., Bin, X., Haojie, D., Wei, Q., Dan, Z., Wan, W., & Feng, G. (2000). Virtual automobile training simulator. International Journal of Virtual Reality, 4(4), 1-11.
This paper presents a case study of the YF Active 3D Automobile Driving Simulator, developed by Nanjing University in China. Vivid representation of the driving environment was achieved using virtual reality (VR) technology and artificial intelligence technology. The report covers the basic structure and architecture of the system and describes the various software models used.

Hoskovec, J., & Stikar, J. (1992). Training of driving skills with or without simulators? Studia Psychologica, 34(2), 183-187.
See Appendix for detailed review.

Nagata, M., & Kuriyama, H. (1983). Development and application of a driving simulator for training. International Journal of Vehicle Design, 4(1), 23-35.
This paper provides a case study of the development of a driving simulator for the purpose of driver training. Some of the drawbacks of simulator development have to do with the problems of transfer of learning and training effectiveness; simulators are often designed as driving experiences rather than as learning tools. The report chronicles not only the software and hardware development but also the development of the training curriculum.

Schori, T. R. (1970). Driving simulation: An overview. Behavioral Research in Highway Safety, 1(4), 236-249.
See Appendix for detailed review.
Simulators: What will be their role in the future of driver training? (1992). Driver/Education, 2(3), 6-8.
See Appendix for detailed review.

Smith, K. U., Kaplan, R., & Kao, H. (1970). Experimental systems analysis of simulated vehicle steering and safety training. Journal of Applied Psychology, 54(4), 364-376.
This study describes a series of experiments on the role of feedback in automobile design and driving. The road, the automobile, and the driver form an interlocking system controlled by the steering and control movements the driver makes. The automobile is seen as part of the driver's exoskeleton, which must receive adequate feedback in order to navigate through the environment properly. The experiments tested the effects of delays in steering feedback on speeding and steering errors. Delays in steering feedback sharply increased the errors made and prevented the driver from learning steering performance.

Walker, A., & Bailey, S. (2002). Integrating simulation into driver training. International Railway Journal and Rapid Transit Review, 42(4), 32-33.
Training programs for train drivers within the railway industry are both expensive and lengthy. In the UK it takes a trainee about eleven months to complete basic training, which may cost the company up to 40,000 pounds. Practical training is typically conducted with real trains and, hence, exposes trainees to few emergency situations. Simulation may provide great benefits to the railway industry if applied correctly. Training programs will need to be developed that maximize effectiveness and make use of current simulation technology. Instructors and designers will need not only to identify the skills and abilities necessary to become a proficient driver but also to develop concrete performance measures.

Why are simulators not catching on with trainers? (2003). Driver/Education, 12(4), 6-7.
This article reviews the criticisms and shortcomings of driving simulators. Two reasons are given for why simulators are not catching on with trainers: first, the antiquated equipment being used by current training programs; second, the fact that high-fidelity simulators are very expensive. The role of simulators in training is still viable and should be explored further.
Driving Psychology

Groeger, J. A., & Rothengatter, J. A. (1998). Traffic psychology and behavior. Transportation Research Part F, 1(1), 1-9.
Driving is a complicated task performed by many people every day. This article gives a brief overview of traffic psychology topics, including perception, cognition, and the social aspects of driving. Knowledge of the kinds of factors that can affect driver performance will aid designers in creating devices, roadways, and other countermeasures that enhance the driving experience and improve safety. Topics covered include driver education and training, law enforcement, vehicle and road design, and driver improvement.

Rothengatter, J. A., & Bruin, R. A. (1987). Road Users and Traffic Safety. Wolfeboro, New Hampshire: Van Gorcum.
This book was compiled from reports presented at the 1987 Conference on Road Safety. It discusses current issues in driver safety such as accident rates, problematic drivers, driver training, and theoretical accident forecasts.

Attention Deficits and the Distracted Driver

McCarley, J. S., Vais, M. J., Pringle, H., Kramer, A. F., Irwin, D. E., & Strayer, D. L. (2004). Conversation disrupts change detection in complex traffic scenes. Human Factors, 46(3), 424-436.
This study presents the findings from two experiments that focused on the effect of conversation on driving performance, specifically the effects of conversation on visual scanning and the driver's ability to detect meaningful objects in the visual scene. Results showed that phone conversations using hands-free devices can impair the driver's ability to detect change in the driving scene; attentive listening tasks, however, did not affect performance. These findings support the phenomenon of "change blindness," in which a person may fail to detect subtle changes to what he or she is looking at. The findings may be applicable to the design of displays and devices in an automobile.
Peters, G. A., & Peters, B. J. (2002). Automotive vehicle safety. London: Taylor & Francis.
This book provides a wealth of information concerning driver safety and a basis for designing products, systems, processes, and services that better meet the demands of automotive safety. More specifically, the book illustrates the problems of the distracted driver and advises on how environments and technology can be developed to minimize performance decrements. Other topics include risk assessment and perception, simulation applications, accident analysis and crash testing, and the future of automotive safety.

Regan, M. A., Deery, H. A., & Triggs, T. J. (1998). Training for attentional control in novice car drivers: A simulation study. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, 1452-1456.
Novice drivers between the ages of 18 and 25 are overrepresented in automobile accidents. Some researchers have suggested that this is due to novices' limited and underdeveloped ability to control their attention while driving. This study investigates whether novices can be trained to exhibit better control over their attention. Seventy-two participants of various experience levels were split into two groups. One group received "variable priority" (VP) training, a method developed by Gopher (1992) to enhance a person's attentional control. After the training trials, the group that had received VP training performed significantly better than the group that had not.

Spence, C., & Driver, J. (1999). Multiple resources and multimodal interface design. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 3 (pp. 305-312). Brookfield, USA: Ashgate.
Recent research has suggested that using cellular telephones while driving may increase the risk of having an accident. Drivers have only a limited amount of attentional resources while driving, and those resources must be allocated across the sensory modalities. This paper illustrates the problems and difficulties drivers may encounter while operating multimodal interfaces such as cell phones and navigation displays.

Young, M. S., & Stanton, N. A. (1997). Automotive automation: Effects, problems and implications for driver mental workload. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 1 (pp. 347-354). Brookfield, USA: Ashgate.
See Appendix for detailed review.
Decision Making and Scenario-based Training

Deloreme, D., & Marin-Lamellet, C. (1997). Investigation of cognitive process when driving: How drivers cope with ambiguity and time pressure. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 1 (pp. 379-385). Brookfield, USA: Ashgate.
See Appendix for detailed review.

United States Air Force (1976). Mayday! Mayday! (AF Pamphlet 12714). Washington, DC: Department of the Air Force, U.S. Government Printing Office.
This book presents the reader with various emergency situations a driver may encounter, lists two or three possible actions for each, and lets the reader turn the page to see the correct answer once he or she has chosen one. An emergency situation is one in which the driver must take action to avoid hitting another person or object, or to avoid being hit by another driver or object. The book was originally intended to be part of the Air Force's Driver Education Program and may benefit driver training programs at any level.

Motivation

Brooks, P., & Arthur, J. (1997). Computer-based motorcycle training: The concept of motivational fidelity. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 1 (pp. 403-410). Brookfield, USA: Ashgate.
See Appendix for detailed review.

Cognition and Perception

Anceaux, F., Pacaux, M., Halluin, N., Rajaonah, B., & Popieul, J. (2003). A methodological framework for assessing driving behavior. In L. Dorn (Ed.), Driver behavior and training (pp. 203-211). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
This chapter presents a study that aims to model the speed and distance-management strategies that drivers use. Forty participants were given a variety of scenarios in a simulator to see how they would react. The authors use the Dynamic Situation Management model, which breaks down driver cognitive activities into information elaboration activities, identification and diagnostic activities, and decision-making and implementation activities. Future work based on this framework may be used to develop technology to support speed and distance management, such as automated cruise control.
Lee, H. C., Lee, A. H., & Cameron, D. (2003). Validation of a driving simulator by measuring the visual attention skill of older adult drivers. American Journal of Occupational Therapy, 57(3), 324-328.
See Appendix for detailed review.

Matthews, G., & Desmond, P. A. (1997). Underload and performance impairment: Evidence from studies of stress and simulated driving. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, v. 1 (pp. 355-361). Brookfield, USA: Ashgate.
See Appendix for detailed review.

Mills, K. C., Parkman, K. M., Smith, G. A., & Rosendahl, F. (1999). Prediction of driving performance through computerized testing: High-risk driver assessment and training. Transportation Research Record, 1689, Paper No. 99-0356.
This report investigates the relationship between cognitive screening tests and driving performance. Thirty-five North Carolina highway patrol cadets were given computerized cognitive assessment tests that measured their divided attention skills. Five months later, the cadets were given extensive driver training and practical examinations covering high-speed pursuits. Results indicated that decision-making skills are more important to driving than simple scanning or target identification techniques. The study supports using computer-based tests as screening and predictive devices for driving skill.

Ruhr, R. H., & Schinauer, T. (2003). Optical flow fields and visual attention in car driving. In L. Dorn (Ed.), Driver behavior and training (pp. 213-222). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
The optical flow is the area within the driver's visual scene that contains information pertinent and essential to driving and steering the vehicle. This chapter presents a study that investigates attention shifts as they relate to optical flow and the focus of expansion, the point in the optical field where things start to expand and move toward the viewer. The study manipulated the location of the focus of expansion to either the front or the back and measured how quickly participants could detect colored dots that emerged in their viewing field. Results showed differences in reaction times depending on where the focus of expansion was placed; participants responded faster when the focus was in the back. Concerning driving, the authors indicate that attention may be shifted more quickly from the back to the front than the opposite, and that attention should be directed far into spatial depth in order to perform safely.
Wierda, M. (1991). Man and his wheel: Cognitive and perceptual aspects. In Proceedings of the Conference Strategic Highway Research Program and Traffic Safety on Two Continents, Gothenburg, Sweden, 63-73.
Visual information is vitally important to the driving task; where to look is just as important as what we are looking at. Even though driver education teaches people where to look and how often, little is known about the actual visual behavior of drivers, especially the differences between experienced and novice drivers. This paper gives a framework for understanding the visual process in driving. The model is broken down into visual tasks, algorithmic accounts of the tasks, and an implementation of those tasks. The author also suggests that simulators may be very useful tools for driver training.

Wilde, G. J. S., Gerszke, D., & Paulozza, L. (1998). Risk optimization training and transfer. Transportation Research Part F, 1(1), 77-93.
This article describes various measurement techniques for assessing risk optimization while driving. In order to navigate through traffic, a driver must balance the amount of risk he or she is willing to take against the motivation for achieving a goal (i.e., reaching a destination). In this study, participants performed a series of tasks, with known costs and benefits, that measured risk-taking behaviors. Participants were also instructed on how to improve their risk-taking strategies in order to maximize their scores. Results showed that risk-optimization training and feedback helped participants become better risk-takers and that these skills transferred to other, more general driving skills.

Driver Types and Experience

Evans, L. (1991). Traffic Safety and the Driver. New York: Van Nostrand Reinhold.
The nature of injuries, fatalities, and property damage due to traffic accidents is discussed in this book. The book also reports on the different factors affecting driver safety, such as age, sex, risk perception, and the role of alcohol in accidents. Also discussed are environmental/roadway factors, traffic laws, restraints, and protective devices.

Fisher, D. L., Laurie, N. E., Glaser, R., Connerney, K., Pollatsek, A., Duffy, S., & Brock, J. (2002). Use of a fixed-base driving simulator to evaluate the effects of experience and PC-based risk awareness training on drivers' decisions. Human Factors, 44(2), 287-302.
See Appendix for detailed review.
- 68 Jamson, H, (1999). Curve negotiation in the Leeds Driver Simulator: the role of driver experience. In Harris, D. (ed.) Engineering Psychology and Cognitive Ergonomics, v. 3 (351-358). Brookfield, USA: Ashgate. Novice drivers are unable to navigate through curves as well as expert drivers. Novices tended to weave and swerve more in the lane and made more over corrections. This may be due to underdeveloped timing and driving skills. Using the Leeds Driving Simulator this study demonstrates the differences between novices and experts behavior in curves. Janes, B. S. & Flyte, M. G. (1999). Information needs and strategies of older drivers for navigation on unfamiliar routes: A methodological approach. In Harris, D. (ed.) Engineering psychology and cognitive ergonomics, v. 3 (297-304). Brookfield, USA: Ashgate. This field study aimed at identifying the type of information older drivers require in order to navigate through unfamiliar routes and roads. Younger and older drivers were instructed to navigate through two city routes they were unfamiliar with. Some participants received maps of the routes they were driving through and some did not. Cameras videotaped the participant’s progress and surveys were given at the end of the experiment. The experiment focused on how information needs changes as well as how timing affected these needs. Roenker, D. L., Cissell, G. M., Ball, K. K. Wadley, V. G. & Edwards, J. D. (2003).Speed-of-Processing and driving simulator training result in improved driving performance. Human Factors, 45(2), 218-233. See Appendix for detailed review Schuster, D. H. (1975). A pilot study of the use of simulator in retraining problem drivers. Journal of Safety Research, 7(3), 141-143. Hazardous drivers are threat to the safety of everyone on the road. This study attempted to see if these problematic drivers can benefit from simulation training using the three types of simulation: 1) the driving simulator provided by the manufacturer; 2) the simulator enhanced by student feedback; and 3) the simulator enhanced by feedback and with an added stress factor to the training. Although driving records did not appear to change significantly after training, accident rates did decline slightly. Future research may focus on whether or not simulation can aid drivers in learning how to avoid accidents.
- 69 Driving Technology Ben-Yaacov, A., Maltz, M. & Shinar, D. (2002). Effects of an in-vehicle collision avoidance warning system on short and long term driving performance. Human Factors, 44(2), 287-302. See Appendix for detailed review Comte, S. (1999). Speed management: targeting the road, vehicle, or driver? In Harris, D. (ed.) Engineering psychology and cognitive ergonomics, v. 3 (321-328). Brookfield, USA: Ashgate. Four systems that help drivers reduce their speed while taking a curve are compared in this study. Performance was evaluated by the way drivers approached a curve and negotiated through it. The systems included: an advisory system, a variable message sign system, automated speed control system, and normal driving condition system. The automatic speed control system surpassed performed better at monitoring and altering driver behavior then the rest.. Graham, R., Aldridge, L., Carter, C. & Lansdown, T. C. (1999). The design of in-car speech recognition interfaces for usability and user acceptance. In Harris, D. (ed.) Engineering psychology and cognitive ergonomics, v. 3 (313-320). Brookfield, USA: Ashgate. Speech recognition software has the potential to improve the safety of the driving environment. Automatic Speech Recognition (ASR) may improve the usability of in-car systems used for entertainment, communication, and navigation. This paper reviews the human factors issues related to speech recognition systems such as computer dialogue design and operational structure. Driver preferences and trends are reported as well as safety implications. Harms, L. (1996).Automatic train control and the task of train drivers: the case of intermitten information transmission. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (421-426). Brooksfield, USA: Ashgate. See Appendix for detailed review Harris, D. & Smith, F. (1997). What can be done versus what should be done: a critical evaluation of the transfer of human engineering solutions between application domains. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (339-346). Brooksfield, USA: Ashgate. See Appendix for detailed review
- 70 Schmidt, E. & Toebing, G. (2003). Computer-based driver status monitoring of fatigue. In Dorn, L. (ed.) Driver behavior and training (109-112). Aldershot, Hampshire, UK: Ashgate Publishing Limited This paper discusses the recent advances made in driver monitoring technology. Most devices measure variables such as blink rate or eye-lid movement. However, one main drawback is that these devices often require invasive equipment or are not practical for the driving tasks (ex. Question and answer techniques). Further advancements and algorithm development may strengthen the correlation between fatigue detection and driver performance data. Young, M. S. & Stanton, N. A. (1997). Automotive automation: Effects, problems and implications for driver mental workload. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (347-354). Brooksfield, USA: Ashgate. See Appendix for detailed review Young, M. S. & Stanton, N. A. (1999). Pay attention 007! How will future vehicle technology affect drivers of all skill levels? In Harris, D. (ed.) Engineering psychology and cognitive ergonomics, v. 3 (289-296). Brookfield, USA: Ashgate. The current paper investigates the effect of automation on various skill levels of driver expertise. Previous research has indicated that mental workload measures might fluctuate depending on the level of skill. For example, as a person becomes an expert activities and tasks become more automatic and their attention demands decrease. This study argues that as automation is introduced driver mental workload (MWL) may decrease across the board. The study used four levels of skill and four levels of automation. The study does support an interaction between skill level and automation use, although all groups performed similarly. Further work may focus on how each type of driver may react specifically to critical scenarios and situations.
- 71 Road and Environment Design Godley, S. T., Fildes, B. N. & Triggs, T. J. (1997). Perceptual countermeasures to speed related accidents. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (387-394). Brooksfield, USA: Ashgate. See Appendix for detailed review Ismail, R. B. (2003). The effects of road predictability on driving performance. In Dorn, L. (ed.) Driver behavior and training (113-124). Aldershot, Hampshire, UK: Ashgate Publishing Limited. This chapter explores the effect of roadway predictability on driving performance. Sixty-four drivers were assigned to drive during predictable and less predictable driving scenes for approximately half and hour. Drivers were to report if they had any “time gaps” or times where they became distracted with irrelevant thoughts. Results indicated that during predictable conditions drivers not only experienced more “time gaps” but also made more errors. This may be partly due to feelings of monotony or boredom induced by driving in predictable scenes. Findings from this study may be used to support road designs that offer the driver unpredictable scenes. Bikes, Trucks, Trains and Professional Driving Ambrose, W. G. (1971). The Santa Fe Railway locomotive simulator and coordinated engineer’s training program (71-RR-3 paper/DOTL TF.A72). New York: American Society of Mechanical Engineers. See Appendix for detailed review Barnaville, P. (2003). Professional driver training. In Dorn, L. (ed.) Driver behavior and training (371-380). Aldershot, Hampshire, UK: Ashgate Publishing Limited. The current study examines the reason why companies adopt training programs for their company car drivers. Professional driving courses have become more popular and accepted in the corporate world. The author used questionnaire, telephone surveys as well as personal assessment of training programs. Results indicate that adopting a holistic approach to safety throughout the company can enhance company driver safety. Future developments may extend training to that of teams, fleets, as well as different kinds of drivers.
- 72 DSL Dynamic Science Limited (1988). Locomotive training simulator (DSL Reference: 492/DD/210). Montreal, Quebec, Canada: Department of Transportation: Transportation Systems Center. This study is designed to report on the feasibility and cost-effectiveness of incorporating a simulator capability into a train operator training program. The objectives of this study are: to identify, define and analyze the target railroads' requirements for a simulator in terms of their engineering training, retraining, safety and research needs. This detailed report covers background information of training, principles related to locomotive simulator design and recommendations and conclusions. Meyer, G. Slick, R., Westra, D., Noblot, N. & Kuntz, L. (2001). Virtual truck driver training and validation: Preliminary results for range and skid pad. In 2001 driving assessment proceedings, Iowa City, Iowa. See Appendix for detailed review Pierowicz, J. A. & Robin, J. (2001). Re-Assessment of driving simulators for the training, testing and licensing of commercial vehicle drivers. In 2001 driving assessment proceedings, Iowa City, Iowa. See Appendix for detailed review Strayer, D. L. & Drews, F. A. (2003). Simulator training improves driver efficiency: Transfer from the simulator to the real world. In the proceedings of the second international driving symposium on human factors in driver assessment, training and vehicle design, Park City, Utah, 190-193. See Appendix for detailed review Vidal, S. & Borkoski, V. (2000). MTA New York transit’s bus simulator: A successful public/private partnership. In Bus and Paratransit Conference Proceedings, Houston, TX, 239-241. See Appendix for detailed review Wright, V. & Forman, R. (1980). Bus driver training simulator assessment (DOT-TSCUMTA-80-20). Washington, DC: US Department of transportation. Simulation has become an increasingly important tool in driving, highway, and vehicle design research. The benefits of this technology have just begun to be understood for professional driver training programs. This paper presents findings from a feasibility study of the use of simulators to train bus drivers. Benefits and drawbacks are discussed as well as costs and training needs. It is the opinion of this report, at this time, that although simulators may fulfill important training needs, the costs of these highly complicated devices are too high to justify purchase.
- 73 -
Appendix: Extensive Reviews of Selected Surface Transportation Articles
- 74 Allen, R. W. Park, G., Cook, M. Rosenthal, T. J., Fiorentino, D. & Viirre, E. (2003). Novice driver training results and experience with a pc-based simulator. In the proceedings of the second international driving symposium on human factors in driver assessment, training and vehicle design, Park City, Utah, 165-170. Background: • This article presents research on a transfer of training study conducted to see if simulation is a viable and effective option for the training of novice drivers. • Traditional driver's ed typically involves classroom instruction and real world driving • Experience is gained with age however, accidents rates are still high • The PC simulation here trains drivers in psychomotor and cognitive skills to novice drivers IV: Simulation configuration/visual display format (one monitor, 3 monitors, panoramic views) DV: Driver proficiency and performance: accident rates, speeding, road edge incursions, and time to collision in the sim. Participants: 111 High school students
Method: Students experienced driver training under the 3 visual conditions over 6 trials.
Findings/Results/Discussion: • Performance improved overall, but some differences existed amongst visual display method. Single monitors performed most poorly. • Training simulators are generally robust for all types of training environments. • Sim training system can be run outside of the laboratory in a practical setting
- 75 Al-Shihabi, T. & Mourant, R. R. (2003). Toward more realistic driving behavior models for autonomous vehicles in driving simulators. Transportation Research Record, No. 1843, 1-20. Background: • Drivers in real world environments are highly variable and display many different behavior and driving styles. • Using simple models for autonomous vehicles may not provide an adequate representation of the real life driving environment: • Sense of presence affected, drivers are not engaged in sim • Sim is too predictable • Sim cannot support too many scenarios • This framework not only provides a basis for modeling drivers of autonomous vehicles in simulation but also can be used to model driving behavior of actual people. Model/Ideas Presented: • Michon (1985) describes driving as a hierarchical control structure with emphasis on the cognitive aspect, divided into 3 levels of control: o Route planning-strategy o Maneuvering and control o And operational control (low level control of vehicle) • Decision making in driving is usually modeled through 3 approaches: o Rule based, State based (computers codes behavior into states) o Probilistic models • Driver Framework, composed of four categories: o Perception - encodes perceptual information, qualitative description of environment. Describes some elements in thresholds, (example: risky drivers may think their environment is less risky then conservative one). o Emotion/Motivations- governed by risk avoidance models-homeostasis models. High discomfort or perception of risk may lead to changes in driver performance. Doesn't dictate decisions, but desire to improve one's state influences decisions. Governs type of emotion, intensity, how it is induced, and social rules associated with it. o Decision Making - Inference engine that decides where to go how to maneuver. Serves the emotional needs of driver as well as maintains safe action. o Decision Implementation - Control end of the spectrum. Decides when to implement decisions made by decision-making process. Translates decision into control activities such as accelerate/decelerate and steer. Alertness also influences how these decisions are implemented.
- 76 Conclusions: • Model may be adapted for different kinds of drivers: aggressive/timid, drunk/fatigued • Most extreme profiles can be detected. • Increase in realism associated with using this model in simulation. • More research needs to be conducted.
- 77 Ambrose, W. G. (1971). The Santa Fe Railway locomotive simulator and coordinated engineer’s training program (71-RR-3 paper/DOTL TF.A72). New York: American Society of Mechanical Engineers. Background: o The railway system is having a shortage of employees to be trained as locomotive engineers. o Many programs have aimed at shortening training time. o The Santa Fe railroad recently started using a simulator to train its engineers.
Model/Ideas Presented: o Traditional training was typically on the job training, and involved a lot of observation and little actual interaction with the train. o The railway simulator provides both visual scenes, audio, and hydraulically controlled motion systems. o Sim compensates for all experience levels, beginners are exposed to controls, responses to the controls instruments and displays. o Sim can train for complex maneuvers, breaking, curves, and emergency situations.
Conclusions: further research is needed, and the sim may be used in a variety of applications outside of training: o Track profile design o Accident investigation o Development of new locomotive technology
- 78 Bellet, T. & Tattegrain-Veste, H. (1996). A framework for modeling driving knowledge. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (371378). Brooksfield, USA: Ashgate. Background: o The aim of this report is to provide a framework for evaluating new technologies in the transportation area. o Simulation models developed in the 70s and 80s are for the most part conceptual and descriptive in nature, heuristic; During the 90s computer simulation took over focusing on predictive models or cognitive models. Model/Ideas Presented: o Performance simulation models – predict behavior, links situation parameters and driver behavior, cognition is though of as a “black box” where inputs come in and reactions go out. o Cognitive simulation models – Simulate internal states and processes involved in human cognitions. (i.e. decision making, anticipation, planning), the driver is a natural info processing system. o Involve detailed descriptions and functional architecture of cognitive processes o Cognitive Simulation Model Driver (COSMODRIVER) – example of cognitive sim model. o Composed of several models, which correspond to the major cognitive activities. o Driver’s decision-making model is not an objective view of the world but their own representation of the environment. o Drivers use functional space representation to get info from the environment and adjust their behavior accordingly. Environment is split into zones, which describe the initial state (beginning situation), a set of local final states (end results of actions), and actions (activities in zones ex. Turning). o Frame theory (Minsky, 1975)– used mainly in artificial intelligence modeling, frames are complex data structures which describe knowledge stored in memory. o Frames have slots, which accept input values received from environment. o Frames describe objects, situations, events, and sequences of events, actions, or actions. o Driving frame – structure of knowledge, which allows the driver to realize an objective. o Infrastructure modeling – can divide up an environment (traffic, intersection) into several zones that certain information will be scanned for so that some driving actions may be executed. o Driver path modeling- driver navigates through a series of zones where visual information must be gathered and drawn upon. Conclusions: Using this framework drivers can map out the world in front of them. This framework can be used to describe a variety of scenarios in traffic.
- 79 Ben-Yaacov, A., Maltz, M. & Shinar, D. (2002). Effects of an in-vehicle collision avoidance warning system on short and long term driving performance. Human Factors, 44(2), 287-302. Purpose: o 30% of all vehicle accidents are caused by drivers failing to maintain proper distance from cars in front of them. o Measures generally used to measure following rates are time to collision (TTC) and temporary headway (TH)- time it takes for the following car to reach the position of the lead car. o Drivers, despite their experience tend to underestimate their headway, which may be due to the difficulty in perceiving other object’s motion in relation to your own movement. o In vehicle collision avoidance items may be good tools uses for training drivers to maintain a safe following distance. o This report hypothesizes that drivers have a poor TH, and the in dash collision avoidance systems can aid and train them to keep a good TH. IV: Use of In-vehicle collision avoidance warning DV: Maintaining TH, error rates, calculated over 7 trials Participants: 30; 25-30 yrs old, at least had 5 years driving experience.
Method: Cars were to follow traffic rules; alarm would sound once driver came to close. System was also set to malfunction at various reliability rates. Findings/Results/Discussion: o Hypothesis confirmed, drivers are bad TH estimators. o TH can be improved through the use of collision avoidance alarms and were able to keep their training even after 6 months after the experiment. o Warning didn’t have to be perfect. o Motivation may be a future concern.
- 80 Brooks, P. & Arthur, J. (1997). Computer-based motorcycle training: the concept of motivational fidelity. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (403-410). Brooksfield, USA: Ashgate. Background: • Motorcycle riders within the first years of riding have been identified as most at risk for accidents. • Accidents happen because of their own errors or the errors of others. • This paper examines motivational aspects for motorcycle training. Model/Ideas Presented: • Risk taking is the low sensitivity to danger. • Rothengatter (1998) argued that risk perception is secondary to risk apprehension, normal driving, and pleasure. • Reward is valued and validated by society and the norms of younger people • Simulation has the potential to provide successful training programs from an economical, logical, and safety standpoint. • This report conducted interviews with many motorcycle riders to check their motivational reasons for learning how to ride, driving style, beliefs about other drivers, and hazards. Findings: • Younger drivers were motivated by the excitement factors and risk-taking factors. • Older more experienced drivers thought the younger drivers were risky and that they lacked proper knowledge to handle risky situations. • Motorcycle training was thought to have as "a good start" but advanced training lacked in appeal. • Advanced training for motorcycle use was often unavailable or participants had no knowledge of its existence. Conclusions • Experience is viewed as "selective risk taking." • Training should focus more on hazard detection. • Motorcycle training must address the motivation to take risks by the riders, and incorporate hazard detection. • Advanced training must be able appealing.
- 81 Deloreme, D. & Marin-Lamellet, C. (1997). Investigation of a cognitive process when driving: How drivers cope with ambiguity and time pressure. In Harrid, D. (ed), Engineering psychology and cognitive ergonomics, v. 1. (379-385). Brookfield, USA: Ashgate. Purpose: • The decision making process is one of the major cognitive activities of driving, there tends to be a trade off between driving objectives and acceptance of risks. • Homeostasis theory (Wilde, 1998): a driver will increase or decrease his/her perception of risk in order to satisfy another motive (i.e. save time). • Other factors include: time pressure, information format, uncertainty, and initial mood. • The driving situation requires a dynamic decision task-driver must anticipate consequences of decision in little time. • Hypothesis: Drivers are a product of their understanding; make more errors when ambiguous information is presented. IV: experience (old vs. young) and familiarity with driving route
DV: driver errors and subjective opinions made by the driver
Participants: 14 males, 14 females, old and young
Method: Participants were exposed to 11 intersections where information was deemed confusing or ambiguous. Drivers were in an instrumented car with a real environment.
Findings/Results/Discussion: • Time pressures lead drivers to commit errors when the information presented is unclear. • Experienced users’ performance may decrease in ambiguous scenes.
- 82 Dorn, L. & Barker, D. (2005). The effects of driver training on simulated driving Performance. Accident Analysis and Prevention, 37(1), 63-69. Purpose: o The effectiveness of driver training on road safety is a controversial issue, statistical data yield ambiguous results on its relation to accidents, and often training is short lived with little staying power. o Hazard perception contributes more to accidents than driving skills or knowledge. o Increased focus on this may improve accident rates and training. o Accident rates, however, may not be a good indicator of driver training effectiveness: reliability problems, accident usually have my factors, and accidents are rare events. o This study compares effects of professional driver training (hazard training) that are part of UK police training.
IV: experience: hazard training vs. no hazard training DV: scenario completion time, speed, lane position, lane position variability Participants: 54; 36 mean yrs of age,
Method: Participants were exposed to several scenarios in a driving simulator. o Scenes included: rural roads, link roads (straight no traffic), and urban areas o Participants also got a practice run.
Findings/Results/Discussion: o No difference between scenario completions between the two groups. o Professional driving courses do affect simulated performance. Those who had experienced hazard training exhibited safer driving behavior. o Trained drivers also exhibited less risky behaviors. o Findings suggest that safer driver training may be retained and practice over time.
- 83 Down, S., Petford, J. & McHale, J. (1982). Learning techniques for driver training. International Review of Applied Psychology, 31, 511-522 Purpose: • This report covers a pilot study done in the UK about the training of “heavy goods vehicle” (HGV) drivers. • Trainees must develop and be able to demonstrate proficiency in breaking, driving in traffic, forward observation and planning, safety, and driving courteously. • Although most drivers develop adequate psychomotor skills to drive the heavy trucks they exhibit poor proficiency in driving during hazardous conditions. Method: Driver trainees went through a road course twice in which a certain hazard was placed to test the driver. Instructors rated each trainee before and after each test run. Training sessions were tape recorded and examined later. Findings/Results/Discussion: • Errors did drop after the second run. • Instructor style varied between instructors who did excessive talking and minimal talkers; also between instructor “teaching” and “telling” o Less effective instructors talked more. o Less effective instructors gave more orders. o Effective instructors dealt with errors in a conceptual way. o Effective instructors used: reviews, opinions, and questions better than other group. • Instructor style makes a difference for driver training.
- 84 Fisher, D. L., Laurie, N. E., Glaser, R., Connerney, K., Pollatsek, A., Duffy, S. & Brock, J. (2002). Use of fixed-base driving simulator to evaluate the effects of experience and pc-based risk awareness training on drivers’ decisions. Human Factors, 44(2), 287-302. Purpose: • Younger drivers have always had high automobile accident rates, the only remedy that has been proposed is to increase driver training. However, this has seen little success. • NHTSA reports that young driver’s lack of experience; increase risk taking and immaturity may be a key factor in the high accident rates. PC simulators may be able to address training issues related to risky behaviors. • This report addresses risk training for young drivers, aims to determine: o Can this training make young drivers more cautious? In low risk conditions? In high-risk conditions? o IS risk perception only affecting scenarios where breaking and turning are needed. IV: Training (using simulator or traditional training program) DV: velocity, break pressure, distance traveled (ex. To break). Participants: 45 students between the ages of 16-17
Method: Participants were exposed to regular driver training and one grouped received risk awareness training through a PC sim.
Findings/Results/Discussion: • Trained drivers tended to break more frequently and drove more slowly. • Results may prove difficult to generalize to total driving public. • Transfer effects were shown from PC to sim, this indicates that transfer to real road may occur. • PC risk training increases driver’s awareness of risks one to two weeks after training is completed.
- 85 Godley, S. T., Fildes, B. N. & Triggs, T. J. (1997). Perceptual countermeasures to speed related accidents. In Harrid, D. (ed), Engineering psychology and cognitive ergonomics, v. 1. (379-385). Brookfield, USA: Ashgate. Background: • Speeding contributes to a large portion of road accidents, perceptual countermeasures, which include gravel or road markings, are investigated here to see if some speed problems may be remedied by them. • Generally people tend to underestimate how fast they are going when they are driving. • Major roads tend to have greater underestimation than smaller ones. • Prolonged driving also decreases person sensitivity to speed. • Distorting the drivers perception by painting traversing lines, distorts their perception of speed. Model/Ideas Presented: • Perceptual countermeasures (PCM) are low cost markings to decrease speed. They change the driver's perception about how fast they are going. • Example: painted traverse lines, rumble lines, or gravel. • Implementation of these lines reduced speed related accidents by half. • Painting the road so that appears slimmer has also resulted in decrease in speed. • Simulation is a good tool for measuring effectiveness of new road designs.
- 86 Harms, L. (1997). Automatic train control and the task of train drivers: The case of intermittent information transmission. In Haris, D. (eds.), Engineering psychology and cognitive ergonomics, v. 1. (421-426). Brookfield, USA: Ashgate. Background: • Automation is most often introduced for efficiency and safety purposes by increasing cognitive demands and decreasing physical ones. • Automatic Training Protection (ATP) equipment has been used in Scandinavian countries to minimize human error by allowing the train to detect its own signals and problems. • These devices not only transmit signal information but also calculate adequate operating speeds and actions. • Drivers, however, sometimes ignore the automation, which may lead to more accidents. Conclusions: • New automated systems need to include human/machine interface in the design. • Neglect of this important issue still leads to accidents, such as when operators chose to ignore the automation. • Drivers ultimately have the responsibility for safe operation.
- 87 Harris, D. & Smith, F. (1996). What can be done versus what should be done: a critical evaluation of the transfer of human engineering solutions between application domains. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (339-346). Brooksfield, USA: Ashgate. Background: • Engineering solutions must take an approach that considers not only machine aspects of a design but also the human component. • This paper focuses on how engineers and designers may use interaction models to come up with new technological products. • Also makes inferences between aviation and driving. Model/Ideas Presented: • The five Ms: Man (size and shape), Machine, Medium (environment), Mission (goal), and Management (performance standards, training). o Later more detailed is added and evolves into Edward’s SHELL model o Systems approach to viewing things • In driving “management” is not present; people dictate their own performance. • Aircraft and automobiles have similar automated technologies as well as procedural differences: o Drivers get licenses for life and they are valid across many types of cars, pilots have type ratings and have much tighter regulations. o High variability makes user centered design tough for car designers o Planes-commercial products, Cars-consumer products. o Aircraft have much less chance of encountering objects in space then cars; cars have more time critical tasks. o Driver is not as removed from automation as in an advanced aircraft. o Driving is “head up, eyeballs out” task, where flying may rely on many head down displays and instruments. o Driving is a more social medium, where change can be rapid and drivers need to be aware of other’s actions. Conclusions o Five-M model is a framework from which to evaluate the desirability of technology. o Just because no rules exist prohibiting new technology doesn’t mean it should be implemented.
- 88 Hoskovec, J. & Stikar, J. (1992). Training of driving skills with or without simulators? Studia Psychologica, 34(2), 183-187. Background: • Simulators are useful in road transportation especially for training for hazardous conditions. • Criticism for the value and usefulness of sim for driver training usually are due to lack of knowledge and finances. • Studies using Czech simulators yielded some success for transferring skills from a simulator to an actual car.
Model/Ideas Presented: • This article talks about some driving simulators being used for research purposes: • National Advanced Driving Simulator - Iowa Univ. • AT 90 • Other simulators used abroad
Conclusions • Simulators will allow researchers to put different types of drivers (young, old, risk-takers, timid drivers) in traffic scenarios.
- 89 Lee, H. C., Lee, A. H. & Cameron, D. (2003). Validation of a driving simulator by measuring the visual attention skill of older adult drivers. American Journal of Occupational Therapy, 57(3), 324-328. Background: • Older drivers are increasing in numbers and will continue to do so. • Driving ability does deteriorate with age: cognitive, mental, memory, visual attention and physical abilities. • Challenge is to develop a way to identify those who are highest at risk. • Current study investigates the validity of using simulation in order to test the elderly visual attention in a driving setting and how their performance is affected over time. IV: age, gender DV: visual attention skill date Participants: 129 licensed drivers 60 yrs old or older
Method: Quasi-experimental method. Drivers drove in a simulated course of 45 minutes. At a prescribed distance red triangles would flash on the screen indicating that the person should apply their turn signals.
Findings/Results/Discussion: • Visual attention skill decreases with age. • Sims prove to be a useful tool at testing the validity of using sims as measurement tools for performance. • Gender was not significant.
- 90 Matthews, G. & Desmon, P. A. (1997). Underload and performance impairment: Evidence from studies of stress and simulated driving. In Harris, D. (ed), Engineering psychology and cognitive ergonomics v.1 (355-361). Brooksfield, USA: Ashgate. Ideas presented: • Stress has the ability to change behavior for the better or worse. • Temperature may cause the driver to speed up in order to get somewhere quicker to be out of the heat/cold. • Coping behaviors while driving may cause the driver to become timid, aggressive, speeding up, slowing down. • Driver stress/vulnerability stems from: aggression, dislike of driving, or hazard monitoring. • Dislike tended to impair lateral control of the vehicle or heading control. • Fatigue generates symptoms of tension: performance deteriorates on straight roads and not curves. Conclusions • In order to improve driver performance either the driver has to change or the car has to change. • Managing anxiety, and aggression may help drivers drive better. • Devices must be sensitive enough to detect variations in workload.
- 91 Pierowicz, J. A. & Robin, J. (2001). Re-Assessment of driving simulators for the training, testing and licensing of commercial vehicle drivers. In the proceedings of Driving Assessment 2001 Conference, Iowa City, Iowa. Model/Ideas Presented: • • • •
• •
Federal Highway Administration published a report in 1996 describing simulator technology. From this, it is inferred that simulators may be mature enough for validation research. Computer powered simulators have increased the capabilities and versatility of training simulators. This report describes initial efforts made by Veridian Engineering and the federal highway administration to provide validation tools. Research efforts are in three parts: • Addresses forward transfer of training • Addresses advanced capabilities of sim • Addresses trainee's post training driving record System driver training tools and scenarios were tested. This report contains an in-depth table that compares four driving simulators across a number of factors such as: fidelity, maneuvering, motion, instructor role, and visuals.
- 92 Meyer, G. Slick, R., Westra, D., Noblot, N. & Kuntz, L. (2001). Virtual truck driver training and validation: Preliminary results for range and skid pad. In the proceedings of the 2001 Driving Assessment Proceedings, Iowa City, Iowa. Background: • Carnegie Mellon Driver training and Safety Institute has implemented truck driver-training curriculum that involves classroom teaching, real life road training, and simulations training. • The simulator provides180-degree horizontal and 45-degree angle vertical visual scene and provides basic training on traffic and hazard driving. • Validation and transfer effectiveness measures are usually lacking in simulation studies. • This report reports on skid pad training, 16% of all large truck fatal accidents are caused on slippery roads. Model/Ideas Presented: • True Transfer of Training experiments present best and ultimate information for sim validation. • Convergent validity may provide evidence for the use of sims in training. • In Training for backing maneuvers, using a simulator has shown to show some transfer. Although the control (non sim using) group performed better at first, transfer was still shown. • Sims and Skip pad training can provide training for emergency or hazard avoidance procedures, especially if the simulator can mimic the real world effectively.
- 93 Roenker, D. L., Cissell, G. M., Ball, K. K. Wadley, V. G. & Edwards, J. D. (2003). Speed-of-Processing and driving simulator training results in improved driving performance. Human Factors, 45(2), 218-233. Purpose: • • • •
Older drivers have high accident rates; older driver performance is related to visual processing speed and field of view test. Present investigation seeks to see if speed of processing training will improve driving performance. Useful field of view (UFOV)- area from which one can extract visual information in a single lane with out eye or head movements, related to localization of targets. Size of this is related to speed of processing; older drivers tend to have a smaller UFOV. - Also related to accidents. • Older drivers may benefit from speed task training. • UFOV impairment leads to faulty detection of periphery targets
IV: training (speed of processing training or none), and timing DV: driver breaking time, hazardous driving scores,
Participants: 456 licensed drivers, average age 69. All measured for visual acuity, contrast sensitivity, and UFOV tests. Method: • Training was given to those who have some difficulty in acuity and speed of processing. • Training was conducted on a touch screen monitor. • Driver performance was assessed in a simulated driving course; at various intervals red dots would appear which would signal the driver to break.
Findings/Results/Discussion: • Reaction times were reduced, even after 18 months after training, and therefore enhance performance. • Strengths and weaknesses addressed: • Each course was driven twice to develop more realistic measure • 3 point rating scale was used (unsafe, somewhat unsafe, and safe) • Two raters were used • Large number of measures where taken • Ceiling effects may have contributed to the success of some participants
- 94 Schori, T. R. (1970). Driving simulation: An overview. Behavioral Research in Highway Safety, 1(4), 236-249. Background: • This report reviews simulator technology as of 1970, also provides a brief classification of sims used. • Approach at designing sims: For training vs. For research o Most commonly used sims: Atena Drivotrainer & Allstate Good Driver trainer. o Has visual display and individual car cockpits for trainees, completely programmed. Model/Ideas Presented: • Advantages to using sims in research: o Can simulate hazard conditions and rare events (i.e. accidents) o Can control extraneous variables and manipulate independent ones o Measurement of behavior is easier o Time savings • Part task trainers may also focus on specific driving issues whereas whole task simulators respond to changing visual scenes and inputs from drivers and provide more realistic simulations. • Simulator classification (4 categories): o Real car, real environment - Using instrumented car, almost perfect measurements can be collected in real time. o Simulated car in real environment - Car operates in real environment but movement is restricted (ex. Car strapped to a bus, second break pedal). o Real car, real motion in simulated world - Using test courses. o Real or simulated car in a simulated environment – Standard car or mockup, visual scene presented through windshield (most commonly used in research • Research sims- generally classified by visual display: o Moving vs. fixed based o Television – Visual scene presented through projection, camera flies, in response to driver input, on a model board to provide the scene. o Point light sim – Scaled down model, automobile and a translucent screen, model is shown as shadows on screen. o Direct optical viewing simulator – A car displaced in front of a screen and the screen magnified an image of a two-lane straight “road.” Can measure changing lanes. o Moving picture sim- Car and two projectors projecting a scene to the front and back of the car. Film is sped up or slowed down depending on speed. Scene is fully programmed however. Conclusions: Each simulator provides its own uses and limitations; television is more versatile and desirable (as of 1970).
- 95 Simulators: What will be their role in the future of driver training? (1992). Driver/Education, 2(3), 6-8. Background: • Estimated 10 million drivers have been trained using simulators around the world. • Sims range from the L301 (students use pedals and steering wheels to indicate reactions to film scenes projected on a screen) to the L300 VMT (trains truck drivers in a fairly realistic simulated environment).
Model/Ideas Presented: • Expert views on simulation effectiveness are mixed • Some feel that it is pointless to use sims for instructing beginners, that they offer little difference in training rate and safety than real life training. • Accidents happen because of expectancy violations. • Sims have little value for teaching driving psychomotor skills. • Measurement problems are an issue to deal with.
- 96 Strayer, D. L. & Drews, F. A. (2003). Simulator training improves driver efficiency: Transfer from the simulator to the real world. In the proceedings of the Second international driving symposium on human factors in driver assessment, training and vehicle design, Park City, Utah, 190-193. Purpose: The purpose of this study was to see if simulators can be used to teach drivers to conserve fuel while driving. • Simulation may prove to be a low cost effective and safe method to train and improve driver performance. • This study aims to see if training will transfer into the real world. IV: training (sim or no sim) and driving skill level (novice or advanced) DV: Fuel consumption
Participants: 40 employees of a local truck company
Method: Drivers completed a 2-hour fuel management-training course that was part lecture, part computer training and part simulation.
Findings/Results/Discussion: • Simulation did increase fuel efficiency by 2%, however, information on how long training effects last were not immediately available. • Effects of training usually have diminishing returns after 6 months. • Drivers with the poorest fuel efficiency pre-training score benefited the most from this training. • Age and job satisfaction may be an issue worth looking into. • Simulation training may be an effective means of reducing operator costs and training costs for the trucking industry.
- 97 Vidal, S. & Borkoski, V. (2000). MTA New York transit’s bus simulator: A successful public/private partnership. In the Bus and Paratransit Conference proceedings, Houston, TX, 239-241. Background: • The bus industry is in need of developing standards for training, retraining, and certification of bus operators in North America. • NYC transit, in conjunction with private company has built and designed a training simulator for bus drivers that use state of the art technology. o Covers both rural and urban settings o Utilizes computer based imaging and VR o More R& D needed so that it can become industry standard. Model/Ideas Presented: • This sim can also be used as a tool for accident investigation and recreation • Previously sim technology was not used effectively for the transportation industry due to its functionality problems and cost factors. • NYC has taken the lead to investigate if current products or new inventions may aid their training programs already in place. • Sim has been very successful and technology has been opened up and offered to other markets. • Post studies are being conducted on overall training effectiveness with focus on accident reduction. • To improve current training practices, the transit industry may benefit from partnerships with private companies.
- 98 Young, M. S. & Stanton, N. A. (1997). Automotive automation: Effects, problems and implications for driver mental workload. In Haris, D. (eds.), Engineering psychology and cognitive ergonomics, v. 1. (421-426). Brookfield, USA: Ashgate.
Background: • Cars of the future will take great advantage of automation technology and may not have to rely much on driver input. Model/Ideas Presented: • Automation may create workload problems: too much or too little. And may prove hazardous in dangerous environments. • Transition from operator to passive passenger may be difficult for future drivers and cars. • Workload is reduced when car is automated.
Conclusions • New technologies may not be the cure for all driving problems. • Automation creates workload issues that must be addressed by car designers of the future.
- 99 -
Section III Using Simulation for Surface Transportation Training: Lessons Learned and Recommendations for Future Research
Beth Blickensderfer, Ph.D. Dahai Liu, Ph.D. Angelica Hernandez Embry-Riddle Aeronautical University
June 30, 2005
This effort was the third deliverable under the project-- Simulation-based Training: Applying lessons learned in aviation to surface transportation modes
- 100 Using Simulation for Surface Transportation Training: Applying Lessons Learned from the Literature After reviewing the literature regarding simulation for aviation training and reviewing the literature on use of simulation in surface transportation, a number of lessons learned become apparent. Lesson Learned 1: Simulation has been proven to be an effective educational and instructional tool. In tests of flight simulator training effectiveness, trainees develop knowledge and skills in simulated systems as well as they do in the actual systems (Hays, Jacobs, Prince, & Salas, 1992). The simulator is an excellent classroom, as the learner is able to make mistakes and learn from them (Duncan & Feterle, 2000). The instructor is allowed to focus on teaching and not operating the vehicle. Additionally, many simulators have the capability to collect performance measures during the training scenarios that can help assess competencies and deficiencies. Not as much research has occurred regarding the effectiveness of training via simulation in the surface transportation domain compared to the aviation field, yet considerable research support has appeared. It is likely that the results regarding simulator effectiveness for aviation training will generalize to the surface transportation domain, but simulation must be used wisely. Users should consider the competencies needed to perform the task and the capabilities of the simulator. Not all simulators are appropriate for training all competencies. Furthermore, not all competencies require simulation for effective training. Lesson Learned 2: Simulators increase safety and reduce training costs. As noted in our review of the aviation literature, two main benefits of using simulation for training are increased safety during training and reduced training costs. In terms of safety, using simulators for training enables individuals to practice in conditions that would be too dangerous to train in actual situations (for example, aircraft engine failures, accidents, and other emergencies). This is also true when training driving and will likely be a major benefit of using simulation in surface transportation training. Regarding cost, aviation simulation saves aircraft fuel, aircraft maintenance costs, and keeps aircraft available for revenue producing activities. In the case of automobiles, buses, and trucks, although training via simulation would conserve fuel, the cost savings are most likely not as great as they are in aviation. Indeed, considerable driving training can occur in the actual vehicles at low cost. Training train operators, on the other hand, may benefit from significant cost savings as well as benefiting from the simpler logistics of training via simulators rather than actual trains. A benefit related to both safety and cost is that simulation can be used to give trainees experiences with unusual events. Unusual events are just that—unusual. Despite their rare occurrences, they can prove deadly in aviation as well as in surface transportation. Simulation offers the opportunity for drivers to experience these and learn how to perform effectively in these unusual situations (Down, Petford, & McHale, 1982). Consider driver training. Driving around in the real world, the driver may not encounter many, if any, hazardous or emergency situations. Using simulation, the scenario can be scripted to include a variety of hazards and emergencies. Thus, not only will simulation
- 101 training give driver trainees the opportunity to master the knowledge and skills necessary to perform effectively in hazardous situations, but also it will do so in a safe environment. Lesson Learned 3: Simulation alone does not equal training. Simulation is a tool for trainers to use (Salas, Bowers, & Rhodenizer, 1998). Simply experiencing a simulated environment is not effective training (Salas et al., 1998). Simulation must be used in a thoughtful, well-planned manner that includes identification of training needs, proper design of scenarios, appropriate performance measurement, and feedback to the learner (Oser et al., 1999). The same principles apply in surface transportation as well (Uhr et al., 2003). Lesson Learned 4: Simulation is one variable in the “big picture” of training effectiveness. Training effectiveness is a complex problem (Cannon-Bowers et al., 1995, Colquitt et al., 2000, Baldwin and Ford, 1998). Training method (e.g., use of simulation) is one variable involved. Numerous other variables also exist including trainee characteristics, work environment characteristics, and the transfer environment. Simulation training will not solve every training challenge for any domain. Lesson Learned 5: The Scenario Based Training model (Oser et al., 1999) is one method to ensure simulation is used appropriately. Aviation training researchers advocate using the scenario based training model to use simulation effectively. While a few papers have appeared in the surface transportation training literature regarding effective use of simulation (Uhr et al. 2003; Nagata & Kuriyama, 1983; Walker & Bailey, 2002; Down et al. 1982), limited advice exists regarding use simulation effectively in this domain. Fortunately, the basic principles of the Oser et al (1999) model apply to surface transportation and, if advocated in the surface field, can help instructors to use driving simulator systems most effectively. The Oser et al. approach is based on basic principles of learning. This approach guides training designers to 1) identify the task/mission and the knowledge, skills, and abilities involved; 2) design scenarios to include events which allow the trainee to develop and practice the specific knowledge, skills, and abilities identified; 3) design performance measures to enable the trainer to assess performance; and 4) ensure specific feedback is given to the trainee. Lesson Learned 6: Effective human performance measurement is crucial both for simulation validation and assessing skill development. As new simulators are developed, validation must occur. Validation should occur not only from the engineering/system performance standpoint but also from the human performance perspective (Hays & Singer, 1989). For example, when examining whether performance in a simulator equals performance in the real-world task, accurate, reliable human performance measures are essential to understand the human interactions with the system. Without such measures, it will be impossible to quantify training transfer. Both objective and subjective measurement approaches exist. Careful time and attention should be paid to developing and selecting the appropriate measures to ensure a well-rounded assessment of skills.
- 102 Lesson Learned 7: Simulation fidelity is an important concept that needs to be understood. Simulation fidelity is the degree to which a device can replicate the actual environment or how “real” the simulation appears and feels (Alessi, 1998; Gross et al., 1999). Simulation fidelity is composed of a number of dimensions including psychological and cognitive factors as well as the more obvious physical factors (e.g., visual, auditory, motion, etc.). Numerous researchers are devoted to studying fidelity issues regarding aviation training such as how to define fidelity, fidelity dimensions, measuring fidelity, and the relationship between fidelity and training effectiveness, yet questions still remain. In terms of surface transportation, limited study exists on the relationship between fidelity and performance in surface transportation tasks. For comparable skills (e.g., control vs. perceptual vs. decision making), it is expected the findings from fidelity research in aviation should generalize to surface transportation. However, surface transportation researchers should use these findings as a springboard for their own domain specific research. Lesson Learned 8: The relationship between simulation fidelity and training effectiveness is not a positive, linear relationship. The simulation industry pushes for higher and higher levels of physical fidelity. Indeed, as simulation technology continues to evolve, simulations come ever closer to being exact replicas of the real world environment. At the current time, high fidelity translates as high financial cost, and many questions remain regarding the cost-benefit trade-offs of using high physical fidelity simulations for aviation training. Research indicates that high fidelity is not necessary to train certain skills (Jentsch & Bowers, 1998; Koonce & Bramble, 1998). In terms of surface vehicle driver training, training control tasks such as braking will require a high level of physical fidelity. On the other hand, it is likely that for other skills (e.g., risk assessment training), a lower level of physical will be adequate (Fisher et al., 2002). Thus, an expensive, high fidelity simulator is not always required to fulfill training needs. However, more research is needed to identify the exact relationship between fidelity and training effectiveness. Lesson Learned 9: Motion fidelity is not always necessary. Motion fidelity is the extent to which a simulator replicates the motion cues actually felt during flight (Kaiser & Schroeder, 2003). In terms of aviation, motion appears to provide very little to training effectiveness (Garrison, 1985; Ray, 1996). While it is likely that these results generalize to surface transportation to some degree, the knowledge and skills required for effective driving differ somewhat from aviation (e.g., consider driving a vehicle over bumpy terrain), and motion is likely needed to train certain skills. Thus, additional domain specific research is needed. Lesson Learned 10: Establishing a standard classification system for different types of simulations can facilitate collaboration within the simulation industry. In aviation, levels of simulation are specifically defined with certifications and regulations regarding necessary fidelity for training certain skills (e.g., level A, level B, and level C). 
Using a classification system of this nature has provided industry and academia with common terminology to use in simulation design and evaluation (i.e., everyone is using the same terminology to refer to the same concepts). In comparing the current
- 103 simulation work in aviation to that of surface transportation, aviation has specific standards, but a simulation classification schema is not apparent in the surface transportation industry. Lesson Learned 11: Many opportunities to use simulation exist—be creative! The aviation industry has moved beyond using simulation only for pilot training to also using it to train air traffic controllers. In addition, simulations are helping to design airport layouts, assess traffic problems, and teach ground workers airport navigation. In surface transportation, researchers have begun to use simulation to assess road design (Godley et al., 1997). Indeed, use of simulation seems limited only by our imagination.
Using Simulation to Address Surface Transportation Training Needs: Potential Research Topics
In this final section of the report, we offer some potential research topics for using simulation to address surface transportation challenges.

1) Development of an accident description database.
The National Transportation Safety Board (NTSB) analyzes every aviation accident in great depth and recommends interventions. The surface transportation industry lacks this type of in-depth accident analysis (i.e., identification of the causal factors behind automobile accidents). Without such data, it is difficult to identify training needs and to consider how simulation can help. Establishing a database would help capture and understand the training requirements for surface transportation. The accident analyses feeding the database would likely not require as high a level of detail as those for aviation accidents. This research could be done as a proof-of-concept program at the city or county level.
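To make the idea concrete, the following is a minimal sketch, in Python with SQLite, of the kind of record such a city- or county-level database might hold. The table and field names are hypothetical illustrations added by the editor, not a proposed standard or an existing schema.

import sqlite3

# Hypothetical, minimal schema for a proof-of-concept accident description
# database; all table and field names are illustrative assumptions.
conn = sqlite3.connect("accidents.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS accident (
        accident_id          INTEGER PRIMARY KEY,
        occurred_at          TEXT,  -- ISO 8601 date/time
        location             TEXT,  -- intersection, road segment, milepost
        vehicle_types        TEXT,  -- e.g., 'automobile', 'bus', 'truck'
        weather              TEXT,  -- e.g., 'clear', 'rain', 'fog'
        severity             TEXT,  -- e.g., 'property damage', 'injury', 'fatal'
        contributing_factors TEXT,  -- e.g., 'divided attention; excessive speed'
        narrative            TEXT   -- free-text description by the investigator
    )
""")

# Example record: a fictitious entry used only to illustrate the schema.
conn.execute(
    "INSERT INTO accident (occurred_at, location, vehicle_types, weather, "
    "severity, contributing_factors, narrative) VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("2005-03-14T17:42", "SR-46 & Main St.", "automobile", "rain",
     "injury", "divided attention; wet pavement",
     "Driver looked away to adjust the radio and struck a stopped vehicle."),
)
conn.commit()
conn.close()

Even a simple structure of this kind would let researchers query recurring contributing factors and translate them into candidate training scenarios.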
2) Identification of knowledge, skills, abilities, and other characteristics necessary for driving.
Another approach to identifying training needs is to perform task analyses and competency identification for the tasks of interest. As a closed-loop control task, driving involves cognitive, psychomotor, and decision-making skills and knowledge, such as managing divided attention, performing under time pressure, resolving ambiguous situations, assessing risk, managing speed and distance, negotiating curves, visual scanning, wayfinding/navigation, monitoring the environment, maneuvering and control, and estimating time to collision. Once the competencies have been identified, a systematic simulation-based approach could be used to develop performance measures, demonstrate the impact of the different skills on driving performance, and, in general, validate the competencies. The findings, in turn, will enable organizations to establish training priorities and select appropriate simulations for their training needs. Furthermore, the findings would guide simulation developers regarding what types of performance measures, event databases, and instructor tools their simulators need.
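As a small illustration of how identified competencies might be tied to simulator-logged performance measures, the Python sketch below pairs a few of the driving competencies listed above with hypothetical candidate measures. The mapping is an editorial assumption for illustration only, not a validated taxonomy.

# Hypothetical mapping of driving competencies to candidate simulator-logged
# performance measures; illustrative only, not a validated taxonomy.
COMPETENCY_MEASURES = {
    "divided attention": ["secondary-task response time", "lane-keeping variance"],
    "risk assessment": ["following distance", "speed choice in reduced visibility"],
    "speed and distance management": ["time headway", "time to collision"],
    "visual scanning": ["mirror-glance frequency", "hazard fixation latency"],
    "maneuvering and control": ["steering reversal rate", "braking smoothness"],
}

def measures_for(competency):
    """Return the candidate measures associated with a given competency."""
    return COMPETENCY_MEASURES.get(competency.lower(), [])

if __name__ == "__main__":
    for skill, measures in COMPETENCY_MEASURES.items():
        print(f"{skill}: {', '.join(measures)}")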
3) Development of specifications for driving simulators from the scenario-based training perspective.
Oser et al. (1999) offer guidelines for the design of training systems for technology-rich environments. Work is needed to tailor these guidelines to driving simulators (Brock et al., 2001). What features, performance measures, and instructor support tools are essential? What types of scenarios, together with what training media, are most effective for which types of tasks (perceptual, judgment, or decision making)? Once the guidelines exist, government agencies can assess whether vendors have followed them. This will help ensure that government-purchased simulators fulfill training needs.
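One way such specifications could be made concrete is as a machine-readable scenario definition that pairs scripted events with the performance measures an instructor station must log. The Python sketch below is a hypothetical example of that idea; the class names, fields, and values are editorial assumptions, not proposed requirements.

from dataclasses import dataclass, field

# Hypothetical scenario specification for scenario-based driver training;
# all names and values are illustrative assumptions, not requirements.
@dataclass
class ScriptedEvent:
    time_s: float      # when the event is triggered within the scenario
    description: str   # e.g., "pedestrian steps into crosswalk"
    target_skill: str  # competency the event is meant to exercise

@dataclass
class TrainingScenario:
    name: str
    environment: str                              # e.g., "urban, wet pavement, dusk"
    events: list = field(default_factory=list)    # scripted events in the scenario
    measures: list = field(default_factory=list)  # data the simulator must log

scenario = TrainingScenario(
    name="Urban hazard anticipation",
    environment="urban, wet pavement, dusk",
    events=[
        ScriptedEvent(45.0, "lead vehicle brakes hard", "speed and distance management"),
        ScriptedEvent(120.0, "pedestrian steps into crosswalk", "hazard detection"),
    ],
    measures=["brake response time", "minimum time to collision", "lane position"],
)

print(f"{scenario.name}: {len(scenario.events)} scripted events")

A specification written at this level of detail would give procurement agencies something checkable: a vendor either supports the required events and measures or it does not.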
4) Decision-making training.
Research indicates that decision-making is a crucial skill for effective driving (Mills et al., 1999). Recognition-primed decision making (Klein, 1998) is one theory of how people make decisions. The basic premise is that, over time, people acquire a variety of experiences with a given task, and these experiences with specific situations guide their decision-making in similar situations in the future. When encountering an unfamiliar situation, a person needs time to decide what to do; when encountering a situation similar to one experienced previously, he or she can recognize it and quickly know how to respond. This theory seems highly applicable to driver training. In particular, accidents happen when an unusual event occurs. If people are trained to recognize these unusual events and instantly know the correct reaction, lives could be saved and the related traffic congestion avoided. Some research has been done on the notion of "experience" training in surface transportation (Dorn & Barker, 2005). More research is needed to examine how trainees develop decision-making skills and to develop and test the efficacy of training interventions based on recognition-primed decision-making theory.
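To illustrate the recognition-primed idea in simulation terms, the toy Python sketch below matches a newly encountered situation against a small library of previously trained situations by cue overlap and retrieves the associated response. It is a deliberately simplified, editorial illustration of the theory, not an implementation of Klein's (1998) model, and the cue sets and responses are invented for the example.

# Toy illustration of recognition-primed decision making: match a new
# situation against trained exemplars by cue overlap and retrieve the
# response associated with the closest match.
TRAINED_SITUATIONS = [
    ({"wet road", "lead vehicle braking", "short headway"},
     "brake firmly, steer straight"),
    ({"tire blowout", "highway speed"},
     "grip wheel, ease off throttle, avoid hard braking"),
    ({"fog", "tail lights ahead"},
     "slow down, increase following distance"),
]

def recognize(situation_cues):
    """Return the trained response whose cue set best overlaps the new situation."""
    best_response = "no trained match: fall back to deliberate reasoning"
    best_overlap = 0
    for cues, response in TRAINED_SITUATIONS:
        overlap = len(cues & situation_cues)
        if overlap > best_overlap:
            best_response, best_overlap = response, overlap
    return best_response

print(recognize({"wet road", "short headway", "lead vehicle braking"}))

In training terms, the point is that scripted simulator exposure to rare events is what populates the "library" the trainee later draws on.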
5) Fidelity in driving simulators.
Many of the same issues regarding simulation fidelity in aviation apply to simulation in surface transportation. Fidelity in surface transportation simulation is under-researched, and crucial questions remain unanswered: What are the relevant dimensions of fidelity for a driving simulation? What is an effective measure of simulation fidelity? How can the notion of simulation fidelity be quantified? What are appropriate levels or categorizations of driving simulations? What is the effect of fidelity on training effectiveness and transfer? Domain-specific research in this area will help the surface transportation field. Additionally, the findings will likely generalize to other fields and thus help move the whole simulation industry forward.
6) Measures of driver performance.
Driver performance can be categorized roughly into objective measures (e.g., response time to certain events, deviation from the desired route) and subjective measures (e.g., mental workload, fatigue, stress level, and comfort). Valid performance metrics are crucial for designing and evaluating training programs. Work is needed to develop accurate, reliable measures of driver performance. In particular, automated performance assessment tools are needed.
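As an illustration of the kind of automated assessment tool envisioned here, the Python sketch below computes two common objective measures, brake response time to a scripted event and mean lateral deviation, from a hypothetical simulator log. The log format, sampling rate, and variable names are editorial assumptions for illustration only.

# Hypothetical simulator log: (time_s, lane_offset_m, brake_pedal_pressed)
# samples; the format is an assumption used only for this illustration.
log = [
    (0.0, 0.10, False), (0.5, 0.12, False), (1.0, 0.20, False),
    (1.5, 0.35, False), (2.0, 0.30, True),  (2.5, 0.15, True),
]
hazard_onset_s = 1.0  # time the scripted hazard (e.g., lead vehicle braking) appears

def brake_response_time(log, hazard_onset_s):
    """Seconds from hazard onset until the first brake application afterward."""
    for t, _, braking in log:
        if t >= hazard_onset_s and braking:
            return t - hazard_onset_s
    return None  # trainee never braked during the logged window

def mean_lateral_deviation(log):
    """Average absolute lane offset (m) over the logged window."""
    offsets = [abs(offset) for _, offset, _ in log]
    return sum(offsets) / len(offsets)

print("Brake response time (s):", brake_response_time(log, hazard_onset_s))
print("Mean lateral deviation (m):", round(mean_lateral_deviation(log), 3))

Automated scoring of this kind would free the instructor to focus on teaching while still producing consistent, trainee-by-trainee records.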
7) Investigating driver attitudes.
Along with knowledge and skills, driver attitudes about risk and other matters play a role in driving performance (Wilde et al., 1998; Fisher et al., 2002). Additional research on this topic is needed. Which attitudes support or undermine effective driving performance? How can these attitudes be measured? What is the best method to change ineffective attitudes? Does attitude play a more important role than skill in driving? What is the impact of various interventions (road design, vehicle design, and training) on driving-related attitudes? Can driving attitudes be altered through simulation-based training (e.g., from aggressive to non-aggressive driving), and, if so, what training approach should be used?
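One conventional way attitudes are measured is with Likert-style questionnaire items, some of them reverse-scored. The short Python sketch below shows such scoring; the items, scale, and scoring scheme are hypothetical illustrations, not a validated driving-attitude instrument.

# Hypothetical 5-point Likert items for a driving-attitude scale; the items
# and reverse-scoring scheme are illustrative assumptions, not a validated
# instrument.
ITEMS = [
    ("Taking small risks makes driving more enjoyable.", True),    # reverse-scored
    ("I always leave a large following distance in poor weather.", False),
    ("Speed limits are mostly advisory.", True),                   # reverse-scored
]

def score_responses(responses):
    """Score 1-5 responses so that higher totals reflect safer attitudes."""
    total = 0
    for (_, reverse), answer in zip(ITEMS, responses):
        total += (6 - answer) if reverse else answer
    return total

# Example: one respondent's answers to the three items above.
print("Attitude score:", score_responses([2, 5, 1]))  # higher = safer attitudes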
8) Using simulation to test the impact of new technologies on driving performance.
A host of new technologies are being developed for surface transportation. Which will produce the biggest performance improvements? What are the possible problems with these technologies? How can these systems be designed to best facilitate driving performance? Can advanced technology eliminate training needs? Examples of research on these topics are listed below; additional work is needed.
Fatigue detection (Schmidt & Toebing, 2003)
Designing for uncertainty (Ismail, 2003)
Using voice recognition in the vehicle (Graham & Aldridge, 1999)
Navigation systems (Janes & Flyte, 1999)
Effects of automation on performance (Young & Stanton, 1997)
Impact of collision avoidance warning systems (Ben-Yaacov, Maltz, & Shinar, 2002)
Road design (Groeger & Rothengatter, 1998)

Other possible uses of simulation in surface transportation:
to examine perceptual capabilities (Lee et al., 2003)
to examine driver workload
to support accident investigation and re-creation (Vidal & Borkoski, 2000)
to examine traffic patterns and efficiency

As is the case in aviation training, the surface transportation simulation industry faces great challenges and great opportunities to make the roads safer, more efficient, and more enjoyable. As technology develops, many driving operations become easier and require less effort. Unfortunately, these same technologies can introduce new opportunities for human error. The simulation industry must stay abreast of technological advances to produce up-to-date, effective training in a cost-efficient manner. Many of the questions posed here cannot be answered in a simple sentence, nor will the answers occur
overnight. Instead, continued, fundamental research remains the key to understanding the human interaction with the vehicle.
Bibliography
Alessi, S. M. (1988). Fidelity in the design of instructional simulations. Journal of Computer-Based Instruction, 15(2), 40-47.
Al-Shihabi, T., & Mourant, R. R. (2003). Toward more realistic driving behavior models for autonomous vehicles in driving simulators. Transportation Research Record, 1843, 1-20.
Baldwin, T., & Ford, J. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63-105.
Ben-Yaacov, A., Maltz, M., & Shinar, D. (2002). Effects of an in-vehicle collision avoidance warning system on short- and long-term driving performance. Human Factors, 44(2), 287-302.
Brock, J. F., Jacobs, C., Van Cott, H., McCauley, M., & Norstrom, D. M. (2001). Simulators and bus safety: Guidelines for acquiring and using transit bus operator driving simulators (TCRP Report 72). Washington, DC: National Research Council, Transportation Research Board.
Cannon-Bowers, J. A., Salas, E., Tannenbaum, S. I., & Mathieu, J. E. (1995). Toward theoretically-based principles of training effectiveness: A model and initial empirical investigation. Military Psychology, 7, 141-164.
Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Toward an integrated theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85(5), 678-707.
Dorn, L., & Barker, D. (2005). The effects of driver training on simulated driving performance. Accident Analysis and Prevention, 37(1), 63-69.
Down, S., Petford, J., & McHale, J. (1982). Learning techniques for driver training. International Review of Applied Psychology, 31, 511-522.
Duncan, J. C., & Feterle, L. C. (2000). The use of personal computer-based aviation training devices to teach aircrew decision-making, teamwork, and resource management. In Proceedings of the IEEE 2000 National Aerospace and Electronics Conference, Dayton, OH, pp. 421-426.
Fisher, D. L., Laurie, N. E., Glaser, R., Connerney, K., Pollatsek, A., Duffy, S., & Brock, J. (2002). Use of a fixed-base driving simulator to evaluate the effects of experience and PC-based risk awareness training on drivers' decisions. Human Factors, 44(2), 287-302.
Garrison, P. (1985). Flying without wings: A flight simulation manual. Blue Ridge Summit, PA: TAB Books.
Godley, S. T., Fildes, B. N., & Triggs, T. J. (1997). Perceptual countermeasures to speed related accidents. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, Vol. 1 (pp. 387-394). Brookfield, USA: Ashgate.
Graham, R., Aldridge, L., Carter, C., & Lansdown, T. C. (1999). The design of in-car speech recognition interfaces for usability and user acceptance. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, Vol. 3 (pp. 313-320). Brookfield, USA: Ashgate.
Groeger, J. A., & Rothengatter, J. A. (1998). Traffic psychology and behavior. Transportation Research Part F, 1(1), 1-9.
Gross, D. C., Pace, D., Harmon, S., & Tucker, W. (1999). Why fidelity? Proceedings of the Spring 1999 Simulation Interoperability Workshop.
Hays, R. T., Jacobs, J. W., Prince, C., & Salas, E. (1992). Flight simulator training effectiveness: A meta-analysis. Military Psychology, 4(2), 63-74.
Hays, R. T., & Singer, M. J. (1989). Simulation fidelity in training system design. New York: Springer-Verlag.
Ismail, R. B. (2003). The effects of road predictability on driving performance. In L. Dorn (Ed.), Driver behavior and training (pp. 113-124). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
Janes, B. S., & Flyte, M. G. (1999). Information needs and strategies of older drivers for navigation on unfamiliar routes: A methodological approach. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, Vol. 3 (pp. 297-304). Brookfield, USA: Ashgate.
Jentsch, F., & Bowers, C. A. (1998). Evidence of the validity of PC-based simulations in studying aircrew coordination. International Journal of Aviation Psychology, 8(3), 243-260.
Kaiser, M. K., & Schroeder, J. A. (2003). Flights of fancy: The art and science of flight simulation. In M. A. Vidulich & P. S. Tsang (Eds.), Principles and practice of aviation psychology (pp. 435-471). Mahwah, NJ: Lawrence Erlbaum Associates.
Klein, G. (1998). Sources of power: How people make decisions. London: MIT Press.
Koonce, J. M., & Bramble, W. J. (1998). Personal computer-based flight training devices. International Journal of Aviation Psychology, 8(3), 277-292.
Lee, H. C., Lee, A. H., & Cameron, D. (2003). Validation of a driving simulator by measuring the visual attention skill of older adult drivers. American Journal of Occupational Therapy, 57(3), 324-328.
Mills, K. C., Parkman, K. M., Smith, G. A., & Rosendahl, F. (1999). Prediction of driving performance through computerized testing: High-risk driver assessment and training. Transportation Research Record, 1689, Paper No. 99-0356.
Nagata, M., & Kuriyama, H. (1983). Development and application of a driving simulator for training. International Journal of Vehicle Design, 4(1), 23-35.
Oser, R. L., Cannon-Bowers, J. A., Salas, E., & Dwyer, D. J. (1999). Enhancing human performance in technology rich environments: Guidelines for scenario-based training. In E. Salas (Ed.), Human/technology interactions in complex systems (pp. 175-202). US: Elsevier Science/JAI Press.
Ray, P. A. (1996). Quality flight simulation cueing: why? AIAA Flight Simulation Technologies Conference, San Diego, CA, 138-147.
Salas, E., Bowers, C. A., & Rhodenizer, L. (1998). It is not how much you have but how you use it: Toward a rational use of simulation to support aviation training. The International Journal of Aviation Psychology, 8(3), 197-208.
Schmidt, E., & Toebing, G. (2003). Computer-based driver status monitoring of fatigue. In L. Dorn (Ed.), Driver behavior and training (pp. 109-112). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
Uhr, M. B. F., Felix, D., Williams, B. J., & Krueger, H. (2003). Transfer of training in a driving simulator: Comparisons between reality and simulation. In L. Dorn (Ed.), Driver behavior and training (pp. 339-348). Aldershot, Hampshire, UK: Ashgate Publishing Limited.
Vidal, S., & Borkoski, V. (2000). MTA New York transit's bus simulator: A successful public/private partnership. In Bus and Paratransit Conference Proceedings, Houston, TX, 239-241.
Walker, A., & Bailey, S. (2002). Integrating simulation into driver training. International Railway Journal and Rapid Transit Review, 42(4), 32-33.
Wilde, G. J. S., Gerszke, D., & Paulozza, L. (1998). Risk optimization training and transfer. Transportation Research Part F, 1(1), 77-93.
Young, M. S., & Stanton, N. A. (1997). Automotive automation: Effects, problems and implications for driver mental workload. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics, Vol. 1 (pp. 347-354). Brookfield, USA: Ashgate.