JDMS Applications

Visual modality research in virtual and mixed reality simulation

Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 1–19 © 2015 The Society for Modeling and Simulation International DOI: 10.1177/1548512915569742 dms.sagepub.com

Jonathan Stevens1, Peter Kincaid2, and Robert Sottilare1

Abstract

Military organizations worldwide continue to invest heavily in research, development, and fielding of virtual and mixed reality simulations and simulators for training. However, the future fiscal environment will be challenging for both simulation and the US Army as a whole; thus, wise design decisions must be made when developing virtual simulations for training. To optimize the effectiveness of these simulations, developers must employ trade-off analysis and scientific methods to derive empirical evidence, ensuring that the simulation under development meets the training requirements while still adhering to cost and schedule constraints. This paper focuses on the task of employing the optimal visual modality in virtual and mixed reality simulations. It reviews the literature on training simulation benefits and taxonomy, and examines the training efficacy of virtual and mixed reality simulation. Major concepts and applications of virtual and mixed reality simulation training efficacy are discussed. A key component of virtual simulation, visual modality, is examined through a literature review, and recommendations for visual display design parameters are provided.

Keywords

Visual display, simulation, mixed reality, presence, transfer, immersion, head-mounted displays, performance

1. Introduction

The United States Army, along with military organizations across the globe, continues to invest heavily in the development and fielding of virtual and mixed reality (MR) simulations and simulators for training, with the goal of reducing training and development cost and schedule while optimizing trainee performance and learning effectiveness. Even in light of this goal, the current and future fiscal environment will be challenging for both simulation1,2 and the US Army as a whole.3 This makes it essential that wise acquisition decisions are made based upon empirical evidence and best practices. Simulation developers must employ trade-off analysis and the scientific method to derive empirical evidence, ensuring that the simulation under development is optimized to meet the training requirements while still adhering to cost and schedule constraints. This paper specifically focuses on the task of employing the optimal visual modality in virtual simulations and simulators. While virtual simulations may be composed of visual, aural, haptic, and

olfactory modalities, it is the visual modality that dominates the user experience. In order to assist simulation developers with the aforementioned task, this paper reviews literature on training simulation benefits and establishes a taxonomy to demonstrate the relationship, influence, and interaction of simulation concepts. Training efficacy of virtual and MR simulation is also examined along with major concepts and applications of virtual and MR simulation training efficacy. A key component of virtual simulation, visual modality, is examined through a literature review and

1 Army Research Laboratory (ARL), USA
2 University of Central Florida (UCF), USA

Corresponding author:
LTC Jonathan Stevens, PhD, Army Research Laboratory (ARL), Orlando, Florida, USA.
Email: [email protected]

Downloaded from dms.sagepub.com by guest on February 18, 2015


recommendations for the design of visual displays are provided.

2. Method

Three search strategies were employed to conduct a thorough and relevant literature review. First, relevant studies were identified through computer searches of Google Scholar, EBSCO, Dissertation Abstracts, and the University of Central Florida's OneSearch database. Keyword searches included the following: simulation-based training (SBT), MR, virtual reality (VR), training efficacy, transfer, fidelity, immersion, presence, simulator sickness (SS), head-mounted displays (HMDs), and comparison of HMDs and flat screen displays. Second, the reference lists of pertinent studies were examined. Third, manual searches were conducted on various journals that produced relevant studies over the previous 10 years. As a result of our search strategy, we organized our study into discrete sections that lead to our examination of prior research comparing performance between HMDs and screen-based visual systems in virtual training. Those preceding sections include examinations of the benefits of simulation for training, the taxonomy of reality, the training efficacy of virtual environments, and HMDs.

3. Benefits of simulation for training

Much has been written about the benefits of simulation in support of various domains, including medical4 and military combat training.5 However, a US Government Accounting Office (GAO) report6 calls for ''better performance and cost data'' in order to fully assess the impact of SBT. Simulation is defined as ''the imitative representation of one system or process by means of the functioning of another'',7 while a simulator is ''a device that enables the operator to reproduce or represent under test conditions phenomena likely to occur in actual performance''.7 Generally, the goal of simulation for training is to provide ''increased performance effectiveness at the same or lesser cost''.8 The US Army has defined three classes of simulation for training: live, virtual, and constructive.9 Live simulations involve real people operating real systems; virtual simulations involve real people operating simulated systems in simulated environments; and constructive simulations involve simulated people (or a representation of the behaviors of real people) operating simulated systems. While the three classes periodically overlap in training, leading to the term ''blended training'', this paper is primarily focused on virtual simulation.

Orlansky et al.8 list the advantages and disadvantages associated with the use of simulation for training. The use of ''simulation'' in this context may be interpreted as ''virtual simulation''.

Advantages of simulation:8

• significant cost reductions achieved by not procuring and using actual equipment;
• substitutes for the use of real equipment in training;
• allows training of high-risk tasks that would otherwise not be possible to train on;
• provides instructional feedback to trainees;
• reduces wear-and-tear on actual equipment, thereby extending its service life;
• protects the environment;
• protects the operational security of the tactics, techniques, and procedures of using units;
• saves time by allowing rapid scenario development, set-up, and execution;
• saves on the use of fuel, munitions, and support facilities;
• allows novices and experts to train through the use of customizable scenarios and difficulty levels;
• may serve as a mission rehearsal tool for units through the use of geo-specific or geo-typical terrain and scenarios;
• facilitates testing and validation of plans prior to execution;
• enables joint training of Department of Defense (DoD) forces and international training with coalition forces.

Disadvantages of simulation:8

• Simulation requires funding to procure, use, and sustain.
• It may induce simulator sickness in trainees through the use of visual modalities and motion platforms.
• Models and simulations must accurately portray the performance of actual systems in order to avoid negative habit transfer, decreased believability, and ineffective training. This requires costly, actual data on weapons, systems, and platforms to create accurate engineering models.
• Limited fidelity can reduce training effectiveness.

Lele10 determined that, due to the benefits of simulation (VR in particular), militaries throughout the world will continue to expand their use of this technology. As an example, Gourley11 describes a recent, major US Army training event in which three-fourths of the training units participated through the use of simulation. McGaghie et al.12 reviewed the use of simulation in medical training over the last 40 years and provided a list of 12


best practices. In contrast, Psotka13 illustrates how the educational community has been far more resistant to adopting simulation in its curriculum than the military, despite the major technological advancements made in simulation, increasingly intuitive user interfaces, and widespread adoption of simulation by the public.13 Psotka implores the educational community to adopt this capability, in large part due to his assessment that the US military has employed simulation very effectively. While one-on-one human tutoring is a very effective method of learning,14,15 it is highly inefficient in terms of the resources required. SBT is an effective and efficient alternative to this approach, provided the right instructional strategy is employed.16 In an attempt to improve both the effectiveness and efficiency of military training simulators, Vogel-Walcutt et al.17 organized a framework of instructional strategies that can be used within the context of military simulators. The authors determined that an effective method of utilizing military simulations is one in which the trainee's expertise level, point in the training cycle, and type of knowledge to be trained are factored into the selection of the instructional strategy. This is based largely on Vygotsky's Zone of Proximal Development.18 Using this information, the trainer can select the optimal instructional strategy found within the authors' framework. Aguinis and Kraiger,19 in their 10-year review of the training and development literature, found extensive research testifying to the benefits of SBT for aviation. Their literature review indicated that SBT has resulted in better safety rates and improved team-based performance for aviators and aviation crews. Kincaid and Westerlund20 highlight that the aviation community has over 50 years of experience employing virtual simulations for training; in some instances simulators have replaced live aircraft in training.
Dourado and Martin21 describe the benefits of simulation for flight training as reduced time to proficiency, increased safety, decreased training costs, and less pollution. Furthermore, the authors illustrate the need for better pilot training as future aircraft become ever higher performing, proposing their Dynamic Flight Simulator (DFS) as a potential future solution. SBT has countless other military applications and continues to expand. Smith22 detailed the chronology of games and simulations used for training and foresees the continued proliferation of simulation for military training due to the ubiquity of computers. Cane et al.23 conducted a qualitative study to examine whether particular simulation technologies could be mapped to certain military tasks; this research will influence how future military simulations are developed. To further illustrate the expanding breadth of simulation for military training, Summers24 described how simulation prevents environmental damage by reducing the amount of sonar training the US Navy conducts in the live environment. Butavicius et al.25 examined the efficacy of a VR parachute training simulator and determined that there was no performance difference between the simulation and non-simulation groups during live jumping operations. However, a benefit was realized: the simulation group required less corrective action during live airborne operations.

Figure 1. Reality–Virtuality continuum.27

4. Taxonomy of reality

Milgram and Kishino26 define VR as ''an environment in which the participant-observer is totally immersed in, and able to interact with, a completely synthetic world''. The ''real'' environment is the live environment. Milgram and Colquhoun27 developed a Reality–Virtuality (RV) continuum (Figure 1) in which the environment a trainee resides in is determined by the extent to which a computer model is necessary to render the image of the environment. Following this logic, the real environment is not modeled at all, whereas a completely virtual environment is completely modeled. Everything between the two poles of this continuum is considered MR, which is also treated as a subset of VR. MR can be further decomposed into augmented reality (AR) and augmented virtuality (AV). According to Sherman and Craig,28 AR is a ''type of virtual reality in which synthetic stimuli are registered with and superimposed on real-world objects''. In AR, the live environment is augmented with virtual data and graphics, while in AV, the virtual environment is augmented with real data and images. Figure 2 depicts the MR axis. Schnabel et al.29 further stratify the R–V continuum (Figure 3). The authors classify three additional forms of MR: amplified reality, mediated reality, and virtualized reality. Amplified reality, between the real environment and AR, according to Falk et al.30 ''enhances publicly available properties of a physical object, by means of using embedded computational resources''. Mediated reality, similar to AR, was described by Starner et al.31 and can best be described as altering a video input stream to augment one's perception of reality through wearable computing. This is similar to fused reality, described by Bachelder32 as the combination of live video and VR, which he developed for helicopter crew training. Virtualized reality, in between AV and VR,


Figure 2. Mixed reality axis.27

Figure 3. Order of reality.29

virtualizes real-world scenes, enabling the end-user to move about freely in the virtual world. The major difference between virtual and virtualized reality is that virtualized reality virtualizes real-world scenes, while VR generally models them.33 In addition to visual modality, Schnabel et al.29 further classify reality along two dimensions of user interaction. The first dimension measures the level of action and perception the user experiences with his environment. The second dimension measures the level of interaction the user experiences with real entities. These dimensions are depicted in Figure 4. Of pertinence to this review is the classification of MR display environments. According to Milgram and Kishino,26 there are six classes of MR display environments, which were later updated to 10.34 The original six classes are as follows:

1. monitor-based (non-immersive) video displays (i.e. ''window-on-the-world'');
2. video displays as in Class 1, but using immersive HMDs rather than Window-on-the-World (WoW) monitors;
3. HMDs equipped with a see-through capability, with which computer-generated graphics can be optically superimposed;
4. same as Class 3, but using video, rather than optical, viewing of the ''outside'' world;
5. completely graphic display environments to which video ''reality'' is added;
6. completely graphic but partially immersive environments (e.g. large screen displays) in which real physical objects in the user's environment play a role in the computer-generated scene.

Figure 4. Classification of reality.29
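Purely as an illustration (the identifiers below are our own shorthand, not Milgram and Kishino's labels), the six display classes could be encoded as an enumeration when tagging candidate systems in a survey or trade-off analysis:

```python
from enum import Enum

class MRDisplayClass(Enum):
    """Milgram and Kishino's six original MR display classes.

    The member names are abbreviated shorthand for illustration only.
    """
    MONITOR_WOW = 1                # non-immersive "window-on-the-world" video
    HMD_VIDEO = 2                  # immersive HMD video instead of a WoW monitor
    HMD_OPTICAL_SEETHROUGH = 3     # graphics optically superimposed on the world
    HMD_VIDEO_SEETHROUGH = 4       # video, rather than optical, view of the world
    GRAPHIC_PLUS_VIDEO = 5         # graphic environment with video "reality" added
    GRAPHIC_PARTIAL_IMMERSIVE = 6  # e.g. large screens with real physical props

# A trade-off analysis row might then record, for example (hypothetical system):
system = {"name": "see-through trainer",
          "display": MRDisplayClass.HMD_OPTICAL_SEETHROUGH}
```

Tagging systems this way makes it straightforward to group study results by display class when comparing, for instance, Class 1 monitors against Class 2 HMDs.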

Milgram and Colquhoun27 developed the global taxonomy of MR display integration (Figure 5). The three dimensions of the taxonomy are the R–V continuum (previously described), congruence, and centricity. Congruence is the level of natural response shown in the user's display space in reaction to the user's input; higher congruence implies more direct and natural user control through the device's interface (e.g. pulling a real trigger is more congruent than a drop-down menu for emulating a precise surgical procedure). Centricity refers to the user's viewpoint: an egocentric centricity represents a first-person view, whereas an exocentric centricity represents a third-person world view. As the technology has matured, MR has become more widespread, as has the corresponding literature. Fotouhi-Ghazvini et al.35 extended Milgram's definition of MR to mobile games due to the present-day ubiquity of mobile devices. The authors developed a questionnaire for designers of Mobile Educational Mixed Reality Games (MEMRGs), which results in qualitative ratings of the game's degree of mobility, degree of context, and degree of communication. Pimenta and Santos36 have proposed a taxonomy for three-dimensional (3D) displays, as this visual capability is rapidly evolving and proliferating. The five proposed classes of 3D displays are organized by display shape and number of views offered. Kersten-Oertel et al.37 analyzed the use of MR in image-guided surgery, using a visualization taxonomy organized by data, visualization processing, and view (DVV).

Figure 5. Global taxonomy of mixed reality display integration.27

5. Training efficacy of virtual environments

The goal of a virtual simulation is generally to maximize the degree of skill transfer of the trainee from the training environment to the real or operational environment. The higher the degree of transfer, the more successful a training system is considered.38

5.1 Transfer

The literature on transfer is abundant. Transfer can be described as the application of knowledge, skills, and abilities acquired during training to the environment in which they are normally used.38,39 Successful transfer can also be thought of as the individual applying the specific set of knowledge and skills learned in training to a variety of tasks or job skills in the real environment.40 Farr40 states that the processing of information is what moves raw data along so that it is encoded and transferred to long-term memory. This acquisition and later retention of knowledge and skills is transfer. He further elaborates that the degree of original learning (initial mastery) is the key predictor of retention (a component of transfer). In addition, overtraining of learners (extra repetitions, increased difficulty level, increased level of stress, etc.) is critical in order to transfer knowledge and skills more effectively.

Transfer can be positive, negative, or nil.38 Positive transfer occurs when learning in the training simulator improves performance in the real environment; negative transfer occurs when simulator learning degrades real-world performance; nil transfer has no effect. Measuring transfer in SBT is difficult and thus not commonplace. The Army does not typically employ an objective approach to measuring transfer in SBT41 and has been criticized for this practice.6 Schout et al.42 conducted a 28-year literature review of the use of surgical simulation and concluded that few objective studies have measured whether skills trained on a surgical simulator successfully transferred to patient care. Sotomayor and Proctor43 found minimal empirical research regarding the training effectiveness of games.

The literature does, however, contain SBT transfer-related analyses, studies, and experimentation. Salas et al.44 outline numerous qualitative and quantitative methodologies used to measure the degree of transfer in SBT. Notably for this effort, the authors describe a quantitative method known as event-based measurement, a general approach to designing simulation scenarios and performance measurement tools that are interconnected to the tasks selected for training. The Simulation Module for Assessment of Resident's Targeted Event Responses (SMARTER) is an example of an individual event-based measurement tool used in healthcare simulations,45 while TARGETs (Targeted Acceptable Responses to Generated Events or Tasks) was a team performance event-based measurement SBT tool.46 Roscoe and Williges47 developed the Transfer Effectiveness Ratio (Equation (1)), which quantifies time saved in real-world training as a function of time spent in simulator training, where Yc is the control group's real-world learning time, Yx is the experimental group's real-world learning time, and X is the experimental group's simulator learning time. The US military has historically used the transfer effectiveness ratio when attempting to quantify transfer.48

TER = (Yc - Yx) / X                                                  (1)
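When tabulating study results, Equation (1) can be computed directly; the following sketch is illustrative (the function and variable names are ours, not from Roscoe and Williges):

```python
def transfer_effectiveness_ratio(yc, yx, x):
    """Transfer Effectiveness Ratio, TER = (Yc - Yx) / X.

    yc: control group's real-world learning time (no simulator)
    yx: experimental group's real-world learning time
    x:  experimental group's simulator learning time
    """
    if x <= 0:
        raise ValueError("simulator learning time must be positive")
    return (yc - yx) / x

# Illustrative numbers (not from the paper): if the control group needs
# 10 h of aircraft time, and the experimental group needs only 6 h after
# 8 h in the simulator, TER = (10 - 6) / 8 = 0.5, i.e. each simulator
# hour saved half an hour of aircraft time.
```

A positive ratio indicates positive transfer, zero indicates nil transfer, and a negative value indicates negative transfer.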

The literature also contains experiments and analyses conducted to empirically demonstrate transfer from a virtual to a real environment. Harrington49 found, in a comparison of real versus virtual learning, that while ''real'' provides the superior overall learning environment (as measured by transfer), virtual learning conducted before real-world learning provides value as a primer, and virtual learning reinforces real-world learning when it comes after. Blow50 describes numerous empirical experiments the United States Army's Aviation Center of Excellence has conducted at its Flight School XXI simulation facility to demonstrate that transfer from a virtual simulator to a real aircraft occurs. Notably, Blow cites the example of a senior, non-aviator Colonel being trained in Flight School XXI virtual simulators and then passing his check ride in a real aircraft on his very first flight. In a medical experiment, Seymour et al.51 validated the transfer of training from VR to the operating room, specifically for the laparoscopic cholecystectomy procedure. The authors demonstrated that the VR training improved performance when compared to a group of traditionally trained residents. Hays et al.52 conducted a meta-analysis (247 articles, 26 experiments) of flight simulator training efficacy and concluded, by and large, that simulator training combined with aircraft training was more effective than aircraft training alone. It is costly, from a resource and monetary standpoint, to assess transfer of training by employing the above methodology (virtual simulator performance compared to live performance). In many cases, it is not possible to measure the transfer of training in a live system, whether due to cost, safety concerns, resource availability, or numerous other constraints. Therefore, a common approach is to measure transfer, or the degree of learning, in the simulator itself. Finkelstein's experiment53 attempted to optimize

task retention (and reduce learning decay) in a virtual environment by manipulating design guidelines. Another example is the use of FSXXI virtual simulators to train rotary wing aviators on auto-rotation tasks (which can never be performed in an actual aircraft). A final example is that of the Army’s Expeditionary Warrior Experiment’s (AEWE’s) methodology whereby various immersive virtual training systems were evaluated. At the AEWE, transfer was subjectively measured by the experiment’s participants themselves in a series of post-mission feedback questionnaires.54 Interestingly, for virtual worlds, Thomas and Brown55 describe the term ‘‘conceptual blending’’, whereby transfer is not even measured since one’s experiences are simultaneously occurring in both the virtual world and live environment because of the depth and richness of the experience. This concept may gain traction as AR training systems blur the line between real and virtual.

5.2 Fidelity

Fidelity is defined as the degree to which the virtual environment is indistinguishable from the real environment56 or the degree of similarity between a simulator and the environment being simulated.57 Fidelity, in the context of simulation, can be decomposed into physical fidelity and functional fidelity. Physical fidelity is ''the degree to which the physical simulation looks, sounds, and feels like the operational environment in terms of the visual displays, controls, and audio as well as the physics models driving each of these variables''.38 Functional fidelity is ''the degree to which the simulation acts like the operational equipment in reacting to the tasks executed by the trainee''.38 While it is generally assumed that the higher the simulator's fidelity, the greater the degree of transfer,58 Salas et al.59 conducted a meta-analysis of high-fidelity simulations and discovered numerous examples in which higher-fidelity simulations had little to no effect on the degree of transfer to the trainee. Indeed, Hays and Singer58 posited that a simulator need not be an exact representation of the real world to be an effective training device. Feinstein and Cannon60 argue that the ''slavish pursuit of fidelity can be devastating to the learning process'' because there is an inverse relationship between fidelity and learning. In essence, humans develop techniques to simplify their decision-making because their environment is so complex, and the authors articulate that effective simulations model that adjustment. Stewart et al.61 analyzed Army helicopter simulator use and found that ''the prevailing institutional belief is that the simulator, in order to be training effective, must replicate the aircraft....but this is not supported by scientific evidence''. Summers posits that ''equating


high levels of physical fidelity with training efficacy has a long history and remains commonplace''.24 While trainees (in particular aviators) demand high-fidelity simulations, the program manager must operate within the constraints of cost, schedule, and performance. Jentsch and Bowers62 looked at 10 years of fidelity research on PC-based flight simulations and determined there was significant evidence to conclude that they can serve as an alternative to high-fidelity flight simulators for aircrew coordination, provided the simulators made appropriate use of their inherent hardware and software with respect to the tasks to be trained. ''There is a relationship between fidelity and transfer, but it is not as simple as 'more is better'''.38 Thorpe63 refers to this as ''selective fidelity'', whereby design decisions are made based upon the tasks to be trained. Borgvall57 advances that fidelity should be assessed iteratively during the simulator design process in order to effectively allocate resources and avoid breaching cost parameters. Bennet et al.64 state that simulator fidelity requirements should be driven by the simulator's objectives. Summers24 states that transfer is maximized when the relationship between fidelity, the user, the task, and the environment is collectively taken into consideration. In essence, the literature indicates that the highest level of fidelity is not necessary, but that the simulation must possess a sufficient level of fidelity where needed to train the tasks that have been selected.
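Selective fidelity can be made concrete as a requirements table in which each trained task drives its own fidelity targets. The sketch below is only an illustration of that idea; the tasks, dimensions, and numbers are our invented examples, not values from the literature:

```python
from dataclasses import dataclass

@dataclass
class FidelityRequirement:
    """Per-task fidelity targets on a coarse 1 (low) to 5 (high) scale.

    Illustrative only; real values would come from a task analysis.
    """
    visual: int
    motion: int
    controls: int

# Selective fidelity: spend fidelity budget only where the trained task
# actually needs it, rather than maximizing every dimension.
requirements = {
    "aircrew coordination": FidelityRequirement(visual=2, motion=1, controls=3),
    "emergency procedures": FidelityRequirement(visual=3, motion=2, controls=5),
    "low-level navigation": FidelityRequirement(visual=5, motion=3, controls=3),
}
```

A table like this makes the trade-off explicit during design reviews: a requested fidelity increase must be traceable to a task that needs it.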

5.3 Immersion

Immersion is ''a psychological state characterized by perceiving oneself to be enveloped by, included in, and interacting with an environment that provides a continuous stream of stimuli and experiences''.65 Sherman and Craig28 describe immersion as the sensation of being present in, not just observing, the environment. Slater and Wilbur66 describe immersion as a function of technology, primarily concerned with the display of information to the participant. Their aspects of immersion are as follows:

• inclusive: the extent to which physical reality is shut out;
• extensive: the range of sensory modalities accommodated;
• surrounding: the extent to which VR is panoramic rather than limited to a narrow field;
• vivid: the resolution within a particular modality.

Immersive systems involve first-person experience, non-symbolic interaction, and the involvement of more than one perceptual channel (i.e. visual, auditory, haptic, olfactory, etc.) by using specific peripheral devices.67 Witmer and Singer65 posit that an HMD can serve as this peripheral device and can potentially provide the isolation required to generate immersion. Bailenson et al.68 state that the two characteristics of an immersive virtual environment are unobtrusive tracking technology and minimization of real-world sensory information. Ragan et al.69 state that immersion is a function of the simulator's technology, not the user's experience in the virtual environment. While the literature often uses the concepts of immersion and presence interchangeably, we define immersion as ''the objective level of fidelity of the sensory stimuli produced by a virtual reality system''.69 Thus, immersion is primarily associated with the technology of the simulation (physical), while presence is associated with the trainee's subjective experience in the simulation (mental). This is congruent with Dalgarno and Lee's70 definition that immersion is reliant upon the system's technical capabilities. In their analysis of 10 years' worth of empirical studies, Mikropoulos and Natsis67 found three studies that compared different levels of immersion in virtual environments and concluded that ''immersion compared to a desktop system has a great advantage only when the content to be learned is complex, 3D and dynamic, on the condition that students do not need to communicate with the 'outside' world''. McMahan et al.71 conducted a literature review in which the relationship between the level of immersion and task performance in a virtual environment was evaluated. Their report indicated conflicting results throughout the literature. The factors of immersion evaluated were field of view (FOV), display type, and head tracking capability, and these offered no obvious correlation to a user's performance. Furthermore, their own experiment produced conflicting results, with low-immersion subjects performing equal to high-immersion subjects; the immersion treatments were display type and method of locomotion.
Conversely, Ragan et al.,69 while acknowledging that there has been little empirical evidence showing higher immersion produces higher transfer, demonstrated that the level of immersion positively influenced subjects’ ability to perform a memorization task in a virtual environment. Immersion was adjusted through subjects’ FOV. Dalgarno and Lee70 examined different immersion levels (3D versus 2D) in virtual learning environments and concluded that very few empirical studies existed that examined the effects of higher immersion. They concluded that most evidence to date is anecdotal (if it exists at all) and recommend further research. Witmer and Singer65 developed an Immersive Tendency Questionnaire (ITQ) to measure the individual’s tendency and ability to be immersed. Thus, it is a predictor variable. While Witmer and Singer claim a positive correlation between immersion tendencies and level of presence, some research conflicts with this position,72 while others support it.73 While the ITQ has inspired many


Table 1. Factors hypothesized to contribute to a sense of presence.65

Control factors: Degree of control; Immediacy of control; Anticipation of events; Mode of control; Physical environment modifiability
Sensory factors: Sensory modality; Environmental richness; Multimodal presentation; Consistency of multimodal information; Degree of movement perception; Active search
Distraction factors: Isolation; Selective attention; Interface awareness
Realism factors: Scene realism; Information consistent with objective world; Meaningfulness of experience; Separation anxiety/disorientation

research efforts involving the concept of immersion, the literature fails to support that more immersion results in more transfer. Alexander et al.38 state that the relationship between an individual’s level of immersion and the subsequent degree of transfer is still uncertain.

5.4 Presence

Presence is defined as ‘‘the subjective experience of being in one place or environment, even when one is physically situated in another’’.65 Thus, presence is a mental concept,74 sometimes referred to as ‘‘mental immersion’’. Presence is commonly described as a user’s sense ‘‘of being there’’ or the ‘‘suspension of disbelief’’.75 Witmer and Singer65 listed four categories of factors (Table 1) thought to influence the degree of presence an operator experiences. Lackey and Reinerman76 describe 12 system factors that can affect a participant’s level of presence. Lombard and Ditton77 state that various visual display attributes, such as image quality, image size, viewing distance, motion and color, dimensionality, and camera techniques, affect a person’s sense of presence.

Similar to immersion, Witmer and Singer65 developed a Presence Questionnaire (PQ) to quantitatively measure an individual’s degree of presence. While the PQ is a widely used instrument, it has not been universally accepted,68,78 and it was later refined once more data became available.79 Alternative presence measurement tools have been created, such as the Place Probe, which blends qualitative and quantitative approaches to measuring presence.80 The ITC-Sense of Presence Inventory (ITC-SOPI) is another common alternative to Witmer and Singer’s PQ.81

Higher presence has also anecdotally been associated with a higher degree of transfer. In their analysis of 10 years’ worth of empirical studies, Mikropoulos and Natsis67 conclude that the relationship is unclear, with few examples. Persky et al.82 found no relationship between presence and transfer in their experiment: the authors were able to create higher presence in a virtual environment in one group compared to another, but both groups achieved the same level of transfer. Moreno and Meyer83 conducted an

experiment whereby they used treatments (HMD-walking, HMD-sitting, desktop) that created more immersion, and thus greater presence, but resulted in no subsequent gain in retention or transfer. Slater78 questioned whether a correlation between presence and task performance exists at all. Whitelock et al.84 increased subjects’ sense of presence via enhanced audio but found no corresponding learning gain; they also found that higher presence may lead to student cognitive overload. Selverian and Sung,85 in their review of 17 research studies involving presence and learning, did not conclusively find a relationship between presence and learning outcomes. Summers24 states that presence is of ‘‘limited use in developing a simulation-based training system’’ because of its subjectivity and imprecision of measurement.

Conversely, some studies have found that higher presence resulted in higher transfer. In Winn et al.’s86 experiment, participants performed a complex 3D task in a simulation under either a HMD or a desktop treatment; the authors found that the higher-immersion (HMD) treatment resulted in both higher presence and better performance. Mikropoulos87 empirically concluded that subjects achieved a high level of presence in both HMD and wall-projection treatments but performed more successfully in the HMD (egocentric) condition. Higher presence may also induce higher levels of anxiety in VR, which was deemed beneficial when using VR to address social phobias88 as well as to prepare candidates for stressful job interviews.89

The literature is rife with examples of experiments that achieve a high level of presence with correspondingly lower levels of fidelity and immersion. Mikropoulos and Strouboulis90 conducted an experiment in which high presence was created equally in groups donning HMDs (more immersive) and in those using wall screens (less immersive).
Dede91 describes a handheld AR prototype (low-immersion, cell-phone-like application) that generated high presence levels amongst students. Banos et al.92 conducted an experiment with three visual treatments (HMDs, projector screen, and desktop monitor) and two affective states (emotional and neutral) and determined that subjects’ sense of presence was more influenced by affective state than immersion. In other words, presence


Stevens et al.


Table 2. Potential factors associated with simulator sickness in virtual environments.95

Individual: Age; Concentration level; Ethnicity; Experience with real-world task; Experience with simulator (adaptation); Flicker fusion frequency threshold; Gender; Illness and personal characteristics; Mental rotation ability; Perceptual style; Postural stability
Simulator: Binocular viewing; Calibration; Color; Contrast; Field of view; Flicker; Inter-pupillary distance; Motion platform; Phosphor lag; Position-tracking error; Refresh rate; Resolution; Scene content; Time lag (transport delay); Update rate (frame rate); Viewing region
Task: Altitude above terrain; Degree of control; Duration; Global visual flow; Head movements; Luminance level; Method of movement; Rate of linear/rotational acceleration; Self-movement speed; Sitting vs. standing; Type of application; Unusual maneuvers; Vection

Table 3. Symptoms of simulator sickness.98

Nausea: General discomfort; Increased salivation; Sweating; Nausea; Difficulty concentrating; Stomach awareness; Burping
Oculomotor discomfort: General discomfort; Fatigue; Headache; Eyestrain; Difficulty focusing; Difficulty concentrating; Blurred vision
Disorientation: Difficulty focusing; Nausea; Fullness of head; Blurred vision; Dizzy (eyes open); Dizzy (eyes closed); Vertigo
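As an illustration, the Table 3 checklist can be aggregated in a few lines of code. This is a minimal sketch: the item-to-subscale groupings follow Table 3 (including the items that appear in two subscales), ratings map none/slight/moderate/severe to 0–3, and the subscale conversion weights are those commonly reported for the instrument; they should be verified against Kennedy et al.98 before any operational use.

```python
# Illustrative SSQ scoring sketch; not the authors' own tooling.
RATING = {"none": 0, "slight": 1, "moderate": 2, "severe": 3}

# Item groupings per Table 3 (some symptoms load on two subscales).
NAUSEA = ["general discomfort", "increased salivation", "sweating", "nausea",
          "difficulty concentrating", "stomach awareness", "burping"]
OCULOMOTOR = ["general discomfort", "fatigue", "headache", "eyestrain",
              "difficulty focusing", "difficulty concentrating", "blurred vision"]
DISORIENTATION = ["difficulty focusing", "nausea", "fullness of head",
                  "blurred vision", "dizzy (eyes open)", "dizzy (eyes closed)",
                  "vertigo"]

def ssq_scores(responses):
    """responses: dict mapping symptom name -> rating word; unrated items count as 'none'."""
    def raw(items):
        return sum(RATING[responses.get(item, "none")] for item in items)
    n, o, d = raw(NAUSEA), raw(OCULOMOTOR), raw(DISORIENTATION)
    # Conversion weights commonly reported for the SSQ (verify against ref. 98).
    return {"nausea": n * 9.54,
            "oculomotor": o * 7.58,
            "disorientation": d * 13.92,
            "total_severity": (n + o + d) * 3.74}

print(ssq_scores({"nausea": "moderate", "headache": "slight"}))
```

Administering this before and after a simulator session, as described in the text, yields the pre/post comparison typically reported.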

was more affected by content design than factors of immersion.92 Villani et al.89 were able to induce a higher degree of presence in subjects conducting a job interview virtually versus in a live setting. Alexander et al.38 posited that the manipulation of Witmer and Singer’s presence factors is well within the control of the simulator developer. Slater and Wilbur66 write, ‘‘if we can find important factors that contribute to presence, then this can guide the future of the technology’’. ‘‘Characteristics of VR, such as immersion and presence, are important factors that contribute to learning and need further exploration ... but factors connected with presence have not been studied extensively since 2003’’.67

Recall that the primary purpose of a training simulator is to optimize transfer to the user. Whether this is done in a high- or low-immersion simulator, inducing a high or low degree of presence, is secondary. What matters most is employing the combination that transfers to the simulation user most efficiently and effectively.

5.5 Simulator sickness

SS is defined as ‘‘the unwanted side effects and aftereffects that may result from using simulators, but does not result

from similar use of the actual equipment’’.93 SS was first reported in 1957 in a helicopter simulator.94 There is no unanimous agreement on the cause of SS,93,95–97 as multiple theories (cue conflict, ecological theory) and beliefs exist. SS is thought to originate from a conflict between the body’s visual and proprioceptive systems, meaning there is a disconnect in the brain’s positional memory.97 Kolasinski95 did, however, identify potential factors (Table 2), grouped into three major categories, involved in SS in virtual environments, and posited that no single factor can be identified as the cause. Thus, according to Kolasinski,95 a combination of the characteristics of the individual, the simulator, and the task determines the extent of an operator’s SS. The Simulator Sickness Questionnaire (SSQ), developed by Kennedy et al.,98 is the most common method of measuring SS99 and is considered the ‘‘gold standard’’.97 The SSQ is a self-reported checklist of 16 symptoms (Table 3) associated with SS. Each symptom is rated on a four-level scale (none, slight, moderate, severe): a rating of none is assigned a value of zero, while severe is assigned a value of three. The symptom scores are then aggregated by subscale (Nausea, Oculomotor Discomfort, Disorientation), followed by a Total Severity score. It has become common practice to
administer the SSQ before and after a simulator iteration. However, Young et al.100 showed that participants who knew they were being tested for SS reported higher SS than those who did not know the intention of the SSQ.

The consequences of SS are two-fold: it not only distracts from training in the simulator but may also lead to the transfer of adaptation techniques to the real world,95 although Johnson96 investigated the latter and found no documented instances. The literature is abundant on the topic of SS and research continues to identify its root causes. Brooks et al.99 developed a model that identified, with 90% accuracy, participants who would not be able to complete their driving simulation study due to SS. They also found that older participants were more susceptible to SS.99 Sharples et al.101 conducted a series of experiments comparing the effect of four different VR display modalities (HMD, desktop, standard projection screen, and reality theater) on SS and found that HMD use induced SS more than the other three modalities. Although few other comparisons exist, Sharples et al.101 report their findings are congruent with one previous experiment. Van Emmerick et al.102 conducted an experiment that found subjects’ SS increased as their external and internal fields of view became more congruent; subjects viewed a simulation on a display within which a virtual restricted FOV (a simulated HMD) was presented. The authors also determined that the level of SS declined with repeated exposures to the simulation. Moss et al.103 conducted a series of experiments involving HMD use and determined that the level of SS increased as the duration of exposure increased, suggesting a temporal relationship. Moss and Muth104 found no relationship between HMD latency and SS but did determine that occluded peripheral vision in a HMD (use of eyecups) resulted in higher SS amongst experiment participants. This latter conclusion is intuitive in that the use of eyecups (occlusion) is more immersive than non-occluded HMDs. Kaufmann et al.94 investigated whether a handheld projector induced SS in participants and determined it was only a minor problem. Classen et al.97 conducted a literature review on SS in driving simulators and identified possible client factors (age, gender), context factors (refresh rates, etc.), and activity factors (speed, etc.) that may contribute to or increase the degree of SS in driving simulators.

6. Head-mounted displays

Figure 6. US Army aviation head-mounted display block diagram.106

HMDs are becoming more common in military virtual simulators as well as in real, tactical equipment. Melzer and Moffit105 define a HMD as ‘‘an image source and collimating optics in a head mount’’. According to Bayer et al.,106 four major elements compose a HMD (Figure 6):

• Image source: the display device that generates the information imagery that is optically presented to the user’s eyes, and upon which sensor imagery is reproduced.
• Relay optics: transfer the information at the image source to the eyes; typically a sequence of optical elements (mostly lenses) that terminates with a beam-splitter (combiner).
• Mounting platform: serves as an attachment point and must provide the stability to maintain the critical alignment between the user’s eyes and the HMD viewing optics.
• Head-tracker: couples the head orientation, or line-of-sight, with that of the trainee’s sensor(s), weapons, and simulated equipment. The user’s directional line-of-sight must be recalculated continuously so that synthetic imagery remains correlated with it.
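As a toy illustration of the head-tracker element, the sketch below recomputes a line-of-sight vector from a tracked yaw and pitch each frame. Real trackers report full six-degree-of-freedom poses through vendor-specific APIs; none of that is modeled here, and the coordinate convention (x forward, z up) is an arbitrary choice for the example.

```python
# Minimal line-of-sight update sketch; illustrative only, not a vendor API.
import math

def line_of_sight(yaw, pitch):
    """Unit view vector for head yaw and pitch in radians (x forward, z up)."""
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

# Looking straight ahead, then 90 degrees to the left:
print(line_of_sight(0.0, 0.0))          # forward along the x-axis
print(line_of_sight(math.pi / 2, 0.0))  # along the y-axis
```

A renderer would apply this recalculated vector every frame so the synthetic scene stays registered with where the user is actually looking.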

A typical classification of HMDs is by presentation mode: monocular HMDs present one image to one eye; biocular HMDs present the same image, from the same perspective, to both eyes; and binocular (stereoscopic) HMDs present two images (one for each eye) from two different sensors.106 While stereoscopic viewing is becoming more prevalent in simulation, it is not without disadvantages, such as increased eye-strain.107 Another classification of HMDs is immersive versus see-through.108 Immersive HMDs block the direct real-world view, whereas see-through HMDs augment the real-world view with synthetic imagery.


While the goal of HMD design is generally to maximize FOV and resolution, in reality this is a trade-off process unique to the specific simulator being developed. For example, Arthur109 looked at the effect that HMD FOV had on subjects’ performance of tasks in a virtual environment. He found that wide-FOV HMDs were better for walking and searching in virtual environments, but found no FOV effect on distance estimation or spatial memory. However, wide-FOV HMDs are generally more expensive than their reduced-FOV counterparts, hence the necessity for a task analysis and trade-off process. Melzer et al.110 presented various options for the design of HMDs based upon the user, the required tasks, and the environment; the authors list key requirements, under 13 domains, necessary to ensure an effective HMD experience. Bowman et al.111 outline the use of simulation to inform decision makers which HMDs (as well as other display options) will provide the best cost/benefit ratio for training a particular task; FOV, field of regard, size, resolution, latency, brightness, and refresh rate are some of the attributes used in their analysis.

The literature is ample on the employment and development of HMDs. Bayer et al.106 outline the military, commercial, and consumer applications of HMDs. Within the military domain, Haar112 highlights the use of HMDs in flight simulation (rotary wing and fixed wing) as well as mission rehearsal. Moss and Muth104 offer two critical guidelines for the use of HMDs: they should not occlude the user’s external peripheral vision, and users should not stand freely when using them. Hodgson et al.113 describe a wearable VR simulation that employs a wireless HMD and is not constrained by infrastructure. Kiyokawa114 describes future trends and trade-offs of HMDs in mixed and AR applications.

A primary difference between HMDs and traditional monitor displays used in virtual training is the user interface.
Ruddle et al.115 state that when using a traditional monitor display ‘‘people receive feedback on their movements from visual changes in the displayed scene and the motor actions of their fingers on the interface devices ... by contrast, the visual feedback that people receive when using immersive displays (HMDs) is supplemented by vestibular and kinesthetic feedback from their changes of direction’’. Bowman et al.111 refer to this as interaction fidelity.
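The kind of cost/benefit comparison Bowman et al.111 describe can be framed as a simple weighted-scoring exercise. The sketch below is purely illustrative: the attribute weights, candidate ratings, and prices are invented for the example, and in practice both weights and ratings would come from a task analysis rather than be assumed.

```python
# Hypothetical display trade-off sketch; all numbers are invented placeholders.
from dataclasses import dataclass

@dataclass
class DisplayOption:
    name: str
    cost_usd: float
    ratings: dict  # attribute -> 0..10 benefit rating for the target task

# Weights express how much each display attribute matters for the task trained.
WEIGHTS = {"fov": 0.30, "resolution": 0.25, "latency": 0.20,
           "field_of_regard": 0.15, "refresh_rate": 0.10}

def benefit_per_dollar(option):
    """Weighted benefit score divided by unit cost."""
    benefit = sum(WEIGHTS[a] * option.ratings[a] for a in WEIGHTS)
    return benefit / option.cost_usd

hmd = DisplayOption("wide-FOV HMD", 12000,
                    {"fov": 9, "resolution": 6, "latency": 5,
                     "field_of_regard": 9, "refresh_rate": 7})
desktop = DisplayOption("desktop monitor", 800,
                        {"fov": 3, "resolution": 8, "latency": 8,
                         "field_of_regard": 2, "refresh_rate": 8})

for opt in (hmd, desktop):
    print(f"{opt.name}: {benefit_per_dollar(opt):.5f} benefit/US$")
```

Under these invented numbers the cheaper display wins on benefit per dollar, which mirrors the trade-off question the section raises; changing the weights to favor FOV-heavy tasks could reverse the ranking.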

7. Research comparing performance between head-mounted displays and screen-based visual systems in virtual training

This section reviews recent research comparing the effect on performance of different visual modalities utilized in a virtual environment. Unfortunately, there is not a large body of published, empirical results derived from experimentation; in fact, ‘‘we are far from a complete understanding of the effects of display fidelity ... because controlled experiments are difficult’’.111 Where experimentation has been conducted, most empirical results fail to show a training benefit from HMDs compared to more traditional visual displays. Based on this, the generally higher cost of HMDs may not be justified, given the lack of a proven corresponding training benefit relative to a lower-cost visual modality.

Barnett and Taylor116 found no significant performance differences between subjects wearing a HMD and those utilizing a computer desktop interface (standard computer and keyboard) when training for dismounted hostage-rescue operations. In their experiment, subjects executed ‘‘crawl and walk’’-level hostage-rescue training via a man-worn ensemble (HMD, simulated rifle, and body-worn sensors), a computer desktop interface, or a live walkthrough. The ‘‘run’’ phase of the experiment was conducted in a live environment, using simulated air munitions and cardboard cut-out targetry. Barnett and Taylor found no significant difference in performance (degree of transfer) between the HMD and desktop computer treatments. However, SS was higher in the HMD group; in fact, the authors highlight that the man-wearable ensemble used for the HMD treatment was far costlier, more cumbersome, and contributed to a loss of training time due to the frequent pauses required to address subjects’ SS symptoms.

Bowman et al.111 created a simulation to empirically evaluate the effects that different levels of display immersion have on performance in simulation. In essence, the authors created a high-end VR simulation to emulate the display characteristics of AR and VR systems. Independent variables examined included FOV, field of regard (FOR), rendering, latency, and stereoscopy.
In the first three experiments conducted using this emulation technique, the authors found that higher display immersion resulted in better performance of a memorization task, that interface familiarity was more important than display fidelity (immersion) in performance of a psychomotor task, and that amplification of head rotations in VR led to decreased performance in a visual scan task. The authors’ research into the effects of display fidelity in VR and AR simulation is ongoing.

Taylor and Barnett117 found no significant performance differences between subjects executing a cognitive task (recollection) in one of two visual treatments: HMDs or a standard computer desktop interface. In their experiment, subjects were trained on two tasks: selecting a fighting position and employing hand grenades. Subjects were trained using one of three treatments: a man-worn ensemble (HMD, simulated rifle, and body-worn sensors), a computer desktop interface, or passively viewing PowerPoint slides. Upon
completion of the training, subjects watched an avatar in a virtual environment perform the two aforementioned tasks. Subjects were then tasked to evaluate correct and incorrect procedures executed by the avatar, using their recollection of training as the basis for their assessment. The authors found no difference in performance amongst the three groups, with the exception of SS symptoms manifesting in the HMD group.

Piryankova et al.118 conducted an experiment and determined that participants underestimated distances in three external screen display treatments (flat panel, semi-spherical, and cabin). This correlates with the authors’ literature review and prior research highlighting that subjects have long underestimated distance while wearing HMDs in a virtual environment.118 Their study focused on an egocentric point of view.

Lev and Reiner119 found that user performance in a complex surgical procedure was better in a more immersive condition (3D visual display) than in a less immersive, 2D display condition. The authors posit that the more complex the task, the more significant the effect of the visual display on performance.

Barnett and Taylor120 conducted a usability assessment of a wearable simulation compared to a traditional desktop-based simulation. Both systems were evaluated by eight users, following the authors’ prescribed methodology. The wearable simulation consisted of a man-worn ensemble (HMD, simulated rifle, man-worn sensors, and central processing unit (CPU)), while the traditional desktop simulation consisted of a computer, monitor, joystick, and mouse. The study determined that the wearable interface (in large part due to the negative interaction between the HMD and simulated weapon) was not only difficult to use but also susceptible to negative training as users attempted to compensate for the equipment’s input/output (I/O) conflicts.
The authors conclude that the additional cost of the wearable simulation may not be justified based upon the usability concerns discovered. In Taylor and Barnett’s capstone report121 of their three previous studies of a prototypical wearable simulation for dismounted soldiers, the authors conclude that the wearable simulation did not provide any statistically significant training benefit when compared to a traditional desktop computer simulation. The major variable between the two conditions was the visual modality employed. Of key salience, the authors illustrate that while ‘‘new and innovative technologies are always coveted by users’’, that does not necessarily translate into greater training effectiveness.

Recent visual display research demonstrates that this domain is expanding but fails to conclusively show that higher-immersion visual displays (HMDs, etc.) provide a clear training benefit over lower-immersion displays (traditional computer monitor screens). Kwon et al.122

conducted an experiment and determined that participants experienced the same degree of anxiety in a virtual job interview setting while donning a HMD as while viewing a large liquid crystal display (LCD) screen; both visual treatments created higher anxiety levels than either a laptop screen or an audio-only treatment. Riecke et al.123 tested whether different display sizes affected participants’ egocentric distance perception in VR and found no significant difference among the HMD, computer monitor, and large-screen conditions when FOV was held constant and actual photographs (instead of computer renderings) were employed. Jacobson124 found students learned more effectively in a digital dome (similar to a CAVE) treatment than with a desktop computer in an educational VR setting. Ngoc and Kalawsky125 compared pilot performance in a landing task and found that participants performed better with a single-screen display with head tracking than with a triple-screen display without head rotation capability. Tichon and Wallis126 found participants trained in a lower-immersion train-driving simulator (flat screen) made fewer errors than those trained in a higher-immersion simulator (curved wide screen).

Santos et al.127 conducted an experiment comparing participants’ performance in navigating a virtual environment. The primary treatment was level of immersion: a HMD-based VR system or a traditional desktop computer (monitor and joystick). The authors determined that the desktop treatment resulted in superior performance over the HMD treatment (within-groups experimental design); however, the HMD treatment created a greater sense of enjoyment for the user. In addition, the authors’ literature review revealed six other studies comparing user performance (primarily navigation and search tasks) in a HMD versus a traditional desktop monitor configuration.
Of those six experiments, two resulted in superior performance by the HMD treatment group, two determined that the desktop group performed better, and two concluded there was no significant difference in performance between the two visual modalities.

Knerr,93 in his compilation of experiments involving dismounted soldiers and immersive simulation from 1997 to 2005, describes perhaps the first experiment involving dismounted soldiers and wearable simulation in which the effectiveness of HMDs was assessed. The Dismounted Warrior Network (DWN) was evaluated in four different configurations, one utilizing a HMD, while the other three employed a projection screen as their visual modality. According to Knerr, the DWN experiments found that a simulated weapon was difficult to employ in conjunction with a HMD. Knerr,93 partially based on his assessment of others’ experiments, concluded that


... desktop or laptop simulators should be considered as an alternative to fully-immersive simulators which use large rear-projection or HMDs. Fully-immersive head-tracked displays are more effective, particularly when performing spatially-oriented tasks and acquiring spatial knowledge, but based on a limited number of experiments, this difference does not appear to be large, and may not justify the increased cost.

Lathrop and Kaiser128 conducted an experiment comparing subject performance in searching and navigation tasks in a virtual environment. The treatments were a desktop interface (monitor and mouse) and a more immersive interface (HMD and mouse). The authors determined that search capability was better with the HMD but that there was no difference in subjects’ ability to maintain orientation.

Knerr129 performed a research analysis of then-current issues involving the use of virtual simulation, concluding that there is a lack of empirical evidence of the effectiveness of immersive simulations (those using HMDs or projector-based capabilities as their primary visual modality). Knerr’s research indicated three focal points:

• while advances in computer processing power and graphics rendering increased, there was no corresponding decrease in the cost of various interface devices (HMDs, simulated weapons, etc.);
• head-tracked visual displays (both projector and HMDs) appear to make a slight difference in performing spatially oriented tasks and acquiring spatial knowledge;
• based on both of the above, ‘‘do these interface devices contribute enough to training effectiveness to justify their cost’’?129

Ntuen and Yoon130 conducted an experiment comparing performance between subjects using a computer monitor and joystick and subjects donning a HMD (without head-tracking capability) and using a joystick. The experiment compared subjects’ performance in three fixed-wing aircraft navigation tasks utilizing Microsoft Flight Simulator 2000: Professional Edition, coupled with a customized hardware package that the authors named the Virtual Environment Training System (VETS). The authors found equal performance between subjects trained on the computer monitor and those trained with the HMD. Similar to other experiments, the authors questioned the wisdom of procuring a high-cost HMD, since no appreciable performance gain was achieved when compared to a traditional monitor.

Jacquet131 conducted an experiment comparing task retention between 16 soldiers learning an M1 tank

maintenance task in an immersive virtual environment and 16 different soldiers learning the same task via a traditional desktop simulation system. The study was commissioned, in part, by a US Army project management office asking a pointed question: would it be a wise business decision to upgrade its fielded, traditional desktop simulation to a more immersive, 3D virtual simulation? The author found no statistically significant performance difference between the groups in the execution of the maintenance task on a live tank. Jacquet concluded that upgrading the existing simulation may not be worth the additional cost of US$10,000 (per system) required for the HMD, tracking system, and other required components.

Patrick et al.132 conducted an experiment whereby participants’ spatial knowledge of a virtual environment was measured. Performance (knowledge) was compared between three groups of participants: those wearing HMDs, those utilizing a large projection screen, and those utilizing a traditional desktop monitor. No statistical performance difference was found between the HMD and projection screen groups, nor between the HMD and desktop monitor groups; subjects were equally adept at navigating the virtual environment. The authors concluded that the projection screen, in particular, may be as effective as a HMD in executing virtual spatial tasks while costing significantly less.

Ruddle et al.115 conducted an experiment that compared the performance of participants navigating a virtual environment using HMDs versus a desktop monitor. The study found that HMD-wearing subjects navigated a virtual building quicker than those utilizing a traditional computer monitor. The authors suggest that a primary reason for the difference in navigation speed is the ‘‘tunnel vision’’ effect of the desktop monitor: HMD-wearing participants were less likely to travel past their destination, whereas desktop navigators course-corrected more often. However, the experiment also determined that there was no statistical difference in the virtual distance travelled by the two groups.

Bochenek133 looked at the effect of different visual modalities during the product design process, which was a team task. While executing a notional Critical Design Review, participants utilized four different visual modalities to conduct the review. Of significance, Bochenek’s design teams were able to detect more design errors utilizing the traditional computer monitor versus the HMD; however, those teams detected the errors faster when wearing the HMDs.

Manrique134 compared the performance of Air Force Reserve Officer Training Corps students playing a commercial virtual gaming environment called VR Space Duel. The objective of the game was to destroy an enemy
spaceship in deep space while maneuvering one’s own ship with a joystick. The major independent variable was display modality: HMD or computer monitor. Manrique determined that there was no significant performance difference between the visual modality groups while playing this commercial game.

Singer et al.135 conducted an experiment that assessed subjects’ ability to acquire spatial knowledge while training in a virtual environment. The three treatments were a stereoscopic HMD (with head-tracking capability) and a treadmill for locomotion; a stereoscopic HMD with gaze direction controlled via a joystick; and a baseline treatment consisting of standard 2D paper maps. The authors concluded that the most spatial knowledge was gained via the first treatment, followed by the second, and finally the third, more traditional method of training map-reading.

Waller et al.56 conducted an experiment comparing the performance of three groups of subjects in a maze-walking assignment. The groups were trained either with 2D paper maps, with a traditional desktop monitor and joystick, or with a HMD (with a six-degrees-of-freedom tracker) and joystick. The test was then executed in a live setting, using the authors’ maze. There was no significant performance difference between the desktop configuration group and those using the HMD.

Johnson and Stewart136 conducted an experiment whereby 30 Army aviators utilized a joystick and one of three visual modalities (wide-FOV HMD, narrow-FOV HMD, or wide-screen displays) to navigate and explore a virtual representation of an Army airfield. Participants were later tested on spatial navigation at the actual airfield. The authors determined the visual display condition had no significant effect on learning, as all three groups performed equally at the live site.
‘‘These results argue against a widespread belief held by proponents of VE technology that a 3D HMD with wide FOV and high resolution is best for learning visual-spatial information’’.136

Deisinger et al.137 examined within-participant differences in performance across visual modalities in an interactive environment. Participants used an HMD, a screen projector display, and a traditional computer monitor in the simulation. The authors determined that performance, measured by task completion time, was degraded in the HMD condition; participants had difficulty reading numbers on distant objects while wearing the HMD.

Lampton et al.138 compared task performance between groups of subjects using different display types in a virtual environment (HMDs versus computer monitors). Across a wide range of visual and psychomotor tasks, visual acuity was statistically worse for the HMD group than for the monitor group. The authors note that this may be attributed to the resolution difference between the two modalities, which would have been substantial given the state of HMD technology at the time.

Singer et al.139 examined the effects of HMD display mode (stereoscopic versus monoscopic) and head-coupling (dynamic versus fixed view). Forty-eight participants, in four groups corresponding to the four treatments, were assessed on five simple, separate tasks. With the exception of one task (distance estimation), the authors found no performance difference based on stereoscopic or head-coupling condition.
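The group comparisons summarized above generally reduce to a between-subjects hypothesis test on a performance measure such as task completion time. As an illustrative sketch only (the data below are hypothetical and not drawn from any cited study), Welch's t statistic, which does not assume equal group variances, can be computed with the Python standard library:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical task-completion times in seconds for two display groups
hmd     = [52.1, 48.7, 55.3, 60.2, 49.9, 53.4]
monitor = [50.8, 47.2, 54.1, 58.9, 51.0, 52.6]

t, df = welch_t(hmd, monitor)
print(f"t = {t:.3f}, df = {df:.2f}")  # t ≈ 0.358, df ≈ 9.97
```

In practice the resulting t would be compared against a t distribution with the computed degrees of freedom (here, clearly non-significant), typically via a statistics package rather than by hand; the sketch is meant only to make the comparison logic of the reviewed studies concrete.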

8. Discussion

In this study, we examined prior research comparing the effect on human performance of HMDs versus traditional screen-based visual systems in virtual training. Our overarching goal was to present a thorough and current examination of the topic so as to assist simulation developers in achieving the optimal design for their requirements. Before that, we examined the benefits of SBT and the taxonomy of reality; studied the training efficacy of virtual environments, including the concepts and research on transfer, fidelity, immersion, presence, and SS; and discussed key components and studies of HMDs.

There are numerous advantages and disadvantages associated with using simulation for training. Among the key advantages are reduced cost, increased safety, and less time required to reach proficiency, along with many others outlined in the literature. The goal of virtual simulation used for training is to maximize the degree of transfer to the trainee. The literature contains many examples demonstrating that transfer does occur when using a virtual simulation, and many methodologies have been constructed to measure the degree of transfer. This paper is primarily interested in determining whether there is a more efficient and less costly visual modality to employ in order to achieve that transfer.

The literature indicates that the highest level of fidelity is not necessary to maximize transfer, but that the simulation or simulator must possess a sufficient level of fidelity, where needed, to train the selected tasks. The topics of immersion and presence are well addressed in the literature, and much effort has been devoted to measuring both concepts. However, it is not clear whether greater immersion and greater presence actually lead to more transfer; the literature is conflicting.

HMDs are becoming more prevalent in virtual simulation. However, the literature clearly indicates that HMDs induce SS symptoms more often than traditional visual modalities (e.g. flat-screen LCDs, curved projection screens, or domes), and SS disrupts training. Whether HMDs actually increase immersion and presence is not decisively established in the literature, and even if true, more immersion and presence have not been shown to decisively increase transfer. Most empirical results fail to show a training benefit from HMDs compared with more traditional visual displays. Based on this, the generally higher cost of HMDs may not be justified, given the lack of a proven corresponding training benefit relative to a lower-cost visual modality. However, minimal research has been conducted on the use of different visual displays by real soldiers performing true kinesthetic tasks in fielded VR/MR simulators.

In a resource-constrained environment, wise design decisions must be made when developing simulations for training. Employing the correct visual modality in a training simulation is a challenging design issue that must be addressed by the development team. Familiarity with the scientific research on the effect that different visual modalities have on human performance, the goal of this paper, should assist developers in employing the correct visual modality in virtual simulation.

Recommendations for future research include well-designed experimentation, with an appropriate population, examining the effect that different visual modalities have on human performance in simulation. With the increased commercialization and lower cost of HMDs, this type of experimentation is significantly more feasible than in the past. Recommended future research should also include an assessment of the degree of transfer from the simulated to the live environment. The optimization of that degree of transfer is, after all, the overarching goal of SBT.
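Such assessments of the degree of transfer are often summarized with the percent-transfer and transfer effectiveness ratio (TER) indices associated with the transfer-measurement literature cited earlier (Roscoe and Williges47). A minimal sketch, using hypothetical trials-to-criterion counts chosen only for illustration:

```python
def percent_transfer(control_trials, experimental_trials):
    """Percent transfer: savings in live trials-to-criterion for the
    simulator-trained group relative to the live-only control group."""
    return 100.0 * (control_trials - experimental_trials) / control_trials

def transfer_effectiveness_ratio(control_trials, experimental_trials,
                                 simulator_trials):
    """TER: live trials saved per simulator trial invested."""
    return (control_trials - experimental_trials) / simulator_trials

# Hypothetical trials-to-criterion values (illustrative only)
control = 20       # live-only group
experimental = 12  # simulator-first group, live trials to criterion
sim_trials = 10    # simulator trials given to the experimental group

print(percent_transfer(control, experimental))                          # 40.0
print(transfer_effectiveness_ratio(control, experimental, sim_trials))  # 0.8
```

A TER below 1.0, as here, does not by itself argue against the simulator: the comparison becomes an economic one when a simulator trial costs far less than a live trial, which is precisely the cost-versus-benefit trade-off this paper raises for visual modality selection.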

9. Recommended practices and conclusions

This paper reviewed the general benefits of virtual and MR simulations for training. Virtual simulation as a training tool is widely accepted as an alternative to live training, as evidenced by its ubiquity in military organizations worldwide. However, the authors also recognize that individual simulations vary widely in their effectiveness and in the transfer of skills to the operational environment. As such, simulation developers must ensure that their designed virtual simulation is effective. Employing the scientific method during design, specifically in the domain of visual modality, is a solid first step toward ensuring that the simulation is an effective training transfer mechanism.

This paper also reviewed the literature on the taxonomy of reality. The selection of a modality on the R–V continuum depends largely on the task under training, the interaction between the trainee and the environment when performing that task, and the fidelity of the environment required to support reasonable trainee interaction. In other words, it may not be necessary to have the most interactive and faithful representation of the real environment if the task under training does not require it. The level of immersion and presence supported by the VR or MR environment may not be crucial to training the essential elements of the task. While immersion and presence may contribute to the trainee's level of engagement, it is unclear whether they are essential to learning effectiveness outcomes (e.g. skill acquisition and retention). Furthermore, it is important for simulation developers to consider cost and undesirable side-effects when selecting technologies that interface the trainee to the VR or MR environment. One such side-effect is SS, which may be induced by cue conflict, perceptible display refresh rates, or even limitations in FOV.

In conclusion, recent research and experimentation comparing the effect on performance of different visual modalities in a virtual environment has generally failed to show a training benefit from employing HMDs versus traditional visual displays, such as computer monitors. Based on this, simulation developers should critically examine the task(s) their simulation is designed to train and match that requirement to the appropriate visual solution. More immersive visual displays (such as HMDs) may be no more effective than less immersive displays (such as screen-based displays), given the lack of a proven corresponding training benefit in the scientific literature.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Declaration of conflicting interest

The authors declare that there is no conflict of interest.

References

1. Insinna V. Simulation, gaming sector plagued by fiscal challenges. National Defense, July 2013, pp. 22–23.
2. Insinna V. Defense simulation firms turn to commercial sector for inspiration. National Defense, February 2014, pp. 20–21.
3. Thompson M. Reshaping the Army: a force built to fight the Cold War is now battling changes to its size, shape and mission. Time, 4 November 2013, pp. 34–40.


4. Lateef F. Simulation-based learning: just like the real thing. J Emerg Trauma Shock 2010; Oct–Dec: 348–352.
5. Lo C. Total immersion: military training gets real. army-technology.com, http://www.army-technology.com/features/featuretotal-immersion-military-training-gets-real-battlefield-combat/ (1 August 2013).
6. United States Government Accountability Office. Army and Marine Corps training: better performance and cost data needed to more fully assess simulation-based efforts. Washington, DC, 2013.
7. Merriam-Webster. The Merriam-Webster Dictionary (2013), http://www.merriam-webster.com/dictionary/simulation (2 August 2013).
8. Orlansky J, Dahlman C, Hammon C, et al. The value of simulation for training. Institute for Defense Analyses, Alexandria, 1994.
9. Hodson DD and Hill RR. The art and science of live, virtual and constructive simulation for test and analysis. J Defens Model Simulat 2013; 11: 77–89.
10. Lele A. Virtual reality and its military utility. J Ambient Intell Humanized Comput 2013; 4: 17–26.
11. Gourley S. Network integration evaluation 14.1. Army: The Magazine of the Association of the United States Army, January 2014, pp. 42–47.
12. McGaghie W, Issenberg S, Petrusa E, et al. A critical review of simulation-based medical education research: 2003–2009. Med Educ 2010; 44: 50–63.
13. Psotka J. Educational games and virtual reality as disruptive technologies. Educ Tech Soc 2013; 16: 69–80.
14. Bloom BS. The 2 sigma problem: the search for methods of group instruction as effective as one-to-one tutoring. Educ Res 1984; 13: 4–16.
15. VanLehn K. The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educ Psychol 2011; 46: 197–221.
16. Vogel-Walcutt J, Carper T, Bowers C, et al. Increasing efficiency in military learning: theoretical considerations and practical applications. Mil Psychol 2010; 22: 311–339.
17. Vogel-Walcutt J, Fiorella L and Malone N. Instructional strategies framework for military training systems. Comput Hum Behav 2013; 29: 1490–1498.
18. Vygotsky LS. Mind in society: the development of higher psychological processes. Cambridge, MA: Harvard University Press, 1980.
19. Aguinis H and Kraiger K. Benefits of training and development for individuals and teams, organizations, and society. Annu Rev Psychol 2009: 451–474.
20. Kincaid J and Westerlund K. Simulation in education and training. In: proceedings of the 2009 winter simulation conference, 2009.
21. Dourado A and Martin C. New concept of dynamic flight simulator. Aero Sci Tech 2013; 30: 79–82.
22. Smith R. The long history of gaming in military training. Simulat Gaming 2010; 41: 6–19.
23. Cane S, McCarthy R and Halawi L. Ready for battle? A phenomenological study of military simulation studies. J Comput Inform Syst 2010; 50: 33–40.
24. Summers J. Simulation-based military training: an engineering approach to better addressing competing environmental, fiscal, and security concerns. J Wash Acad Sci Spring 2012: 9–12.
25. Butavicius M, Vozzo A, Braithwaite H, et al. Evaluation of a virtual reality parachute training simulator: assessing learning in an off-course augmented feedback training schedule. Int J Aviat Psychol 2012; 22: 282–298.
26. Milgram P and Kishino F. A taxonomy of mixed reality visual displays. IEICE Trans Inform Syst 1994; E77-D: 1321–1329.
27. Milgram P and Colquhoun H. A taxonomy of real and virtual world display integration. In: Mixed reality: merging real and virtual worlds. 1999, pp. 5–30.
28. Sherman W and Craig A. Understanding virtual reality – interface, application, and design. San Francisco, CA: Morgan Kaufmann, 2003.
29. Schnabel M, Wang X, Secihter H, et al. From virtuality to reality and back. In: International Association of Societies of Design Research, Hong Kong, 2007.
30. Falk J, Redstrom J and Bjork S. Amplifying reality. In: 1st international symposium on handheld and ubiquitous computing, Karlsruhe, Germany, 1999.
31. Starner T, Mann S, Rhodes B, et al. Augmented reality through wearable computing. Presence Teleoperator Virtual Environ 1997; 386–398.
32. Bachelder E. Helicopter aircrew training using fused reality. Hawthorne, CA: Systems Technology Inc., 2006.
33. Kanade T, Narayanan P and Rander P. Virtualized reality: concepts and early results. In: IEEE workshop on the representation of visual scenes, 1995.
34. Stothard P, Squelch A, van Wyk E, et al. Taxonomy of interactive computer-based visualisation systems and content for the mining industry - part 1. In: first international future mining conference and exhibition 2008 proceedings, Sydney, 2008.
35. Fotouhi-Ghazvini F, Earnshaw R, Moeini A, et al. From e-learning to m-learning – the use of mixed reality games as a new educational paradigm. Int J Interact Mobile Tech 2011; 5: 17–25.
36. Pimenta W and Santos L. A comprehensive taxonomy for three-dimensional displays. In: WSCG 2012 communication proceedings, 2012.
37. Kersten-Oertel M, Jannin P and Collins D. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imag Graph 2013; 37: 98–112.
38. Alexander A, Brunye T, Sidman J, et al. From gaming to training: a review of studies on fidelity, immersion, presence, and buy-in and their effects on transfer in PC-based simulations and games. In: DARWARS training impact group, Woburn, MA, 2005.
39. Muchinsky P. Psychology applied to work. Belmont, CA: Wadsworth, 2000.
40. Farr M. The long-term retention of knowledge and skills: a cognitive and instructional perspective. Institute for Defense Analyses, Alexandria, 1986.
41. Insinna V. Army, Marine Corps look for better data on simulator effectiveness. National Defense, December 2013, pp. 26–28.


42. Schout B, Hendrikx A, Scheele FBB, et al. Validation and implementation of surgical simulators: a critical review of present, past, and future. Surg Endosc 2010; 24: 536–546.
43. Sotomayor T and Proctor M. Assessing combat medic knowledge and transfer effects resulting from alternative training treatments. J Defens Model Simulat 2009; 6: 121–134.
44. Salas E, Rosen M, Held J, et al. Performance measurement in simulation-based training: a review and best practices. Simulat Gaming 2009; 40: 328–376.
45. Rosen M, Salas E, Silvestri S, et al. A measurement tool for simulation-based training in emergency medicine: the Simulation Module for Assessment of Resident Targeted Event Responses (SMARTER) approach. Simulat Healthc J Soc Med Simulat 2008; 3: 170–179.
46. Fowlkes J, Lane N, Salas E, et al. Improving the measurement of team performance: the TARGETs methodology. Mil Psychol 1994; 6: 47–61.
47. Roscoe S and Williges B. Measurement of transfer of training. In: Aviation psychology. Ames, IA: Iowa State University Press, 1980, pp. 182–193.
48. Fletcher J. Education and training technology in the military. Science 2009; 323: 72–75.
49. Harrington M. Empirical evidence of priming, transfer, reinforcement, and learning in the real and virtual trillium trails. IEEE Trans Learn Tech 2011; 4: 175–186.
50. Blow C. Flight school in the virtual environment. United States Army Command and General Staff College, Fort Leavenworth, Kansas, 2012.
51. Seymour N, Gallagher A, Roman S, et al. Virtual reality training improves operating room performance. Ann Surg 2002; 236: 458–464.
52. Hays R, Jacobs J, Carolyn P, et al. Flight simulator training effectiveness: a meta-analysis. Mil Psychol 1992; 4: 63–74.
53. Finkelstein N. Charting the retention of tasks learned in synthetic virtual environments. University of Central Florida, Orlando, 1999.
54. U.S. Army Evaluation Center. Army expeditionary warrior experiment - bold quest 12. Aberdeen Proving Ground, MD, 2013.
55. Thomas D and Brown J. Why virtual worlds can matter. Int J Learn 2009; 37–49.
56. Waller D, Hunt E and Knapp D. The transfer of spatial knowledge in virtual environment training. Presence Teleoperators Virtual Environ 1998; 7: 129–143.
57. Borgvall J. Generating recommendations for simulator design through pilot assessment of prototype utility. Mil Psychol 2013; 25: 244–251.
58. Hays R and Singer M. Simulation fidelity in training system design: bridging the gap between reality and training. New York: Springer, 1998.
59. Salas E, Bowers C and Rhodenizer L. It is not how much you have but how you use it: toward a rational use of simulation to support aviation training. Int J Aviat Psychol 1998; 8: 197–208.
60. Feinstein A and Cannon H. Constructs of simulation evaluation. Simulat Gaming 2002; 33: 425–440.
61. Stewart J, Johnson D and Howse W. Fidelity requirements for Army aviation training devices: issues and answers. U.S. Army Research Institute, Arlington, Virginia, 2008.
62. Jentsch F and Bowers C. Evidence for the validity of PC-based simulations in studying aircrew coordination. Int J Aviat Psychol 1998; 8: 243–260.
63. Thorpe J. Trends in modeling, simulation & gaming: personal observations about the past thirty years and speculation about the next ten. In: the interservice/industry training, simulation & education conference (I/ITSEC), 2010.
64. Bennet W, Schreiber B, Portrey A, et al. Challenges in transforming military training: research and application of advanced simulation and training technologies and methods. Mil Psychol 2013; 25: 173–176.
65. Witmer B and Singer M. Measuring presence in virtual environments: a presence questionnaire. Presence 1998; 7: 225–240.
66. Slater M and Wilber S. A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. Presence Teleoperators Virtual Environ 1997; 6: 603–616.
67. Mikropoulos T and Natsis A. Educational virtual environments: a ten-year review of empirical research (1999–2009). Comput Educ 2011; 46: 769–780.
68. Bailenson J, Yee N, Blascovich J, et al. The use of immersive virtual reality in the learning sciences: digital transformations of teachers, students and social context. J Learn Sci 2008; 17: 102–141.
69. Ragan E, Sowndararajan A, Kopper R, et al. The effects of higher levels of immersion on procedure memorization performance and implications for educational virtual environments. Presence 2010; 19: 527–543.
70. Dalgarno B and Lee M. What are the learning affordances of 3-D virtual environments? Br J Educ Tech 2010; 10–32.
71. McMahan R, Bowman D, Zielinski D, et al. Evaluating display fidelity and interaction fidelity in a virtual reality game. Visual Comput Graph 2012; 18: 626–633.
72. Johns C, Nunez D, Daya M, et al. The interaction between individuals' immersive tendencies and the sensation of presence in a virtual environment. In: proceedings of the 6th Eurographics conference on virtual environments, 2000.
73. Bulu ST. Place presence, social presence, co-presence, and satisfaction in virtual worlds. Comput Educ 2012; 58: 154–161.
74. Nalbant G and Boston B. Interaction in virtual reality. In: 4th international symposium of interactive medical design (ISIMD), 2006.
75. Slater M, Usoh M and Steed A. Depth of presence in virtual environments. Presence 1994; 3: 130–144.
76. Lackey S and Reinerman L. Immersion: concepts and definitions. Orlando, FL, 2013.
77. Lombard M and Ditton T. At the heart of it all: the concept of presence. J Comp Mediat Commun 1997; 3.
78. Slater M. Measuring presence: a response to the Witmer and Singer presence questionnaire. Presence 1999; 8: 560–565.
79. Witmer B, Jerome C and Singer M. The factor structure of the presence questionnaire. Presence 2005; 14: 298–312.


80. Smyth M, Benyon D, McCall R, et al. Patterns of place – a toolkit for the design and evaluation of real and virtual environments. In: Ijsselsteijn W, Biocca F and Freeman J (eds) Handbook of presence. Mahwah, NJ: Lawrence Erlbaum, 2006.
81. Lessiter J, Freeman J, Keogh E, et al. A cross-media presence questionnaire: the ITC-sense of presence inventory. Presence Teleoperators Virtual Environ 2001; 10: 282–297.
82. Persky S, Kaphingst K, McCall C, et al. Presence relates to distinct outcomes in two virtual environments employing different learning modalities. Cyberpsychol Behav 2009; 12: 263–268.
83. Moreno R and Mayer R. Learning science in virtual reality multimedia environments: role of methods and media. J Educ Psychol 2002; 94: 598–610.
84. Whitelock D, Romano D, Jelfs A, et al. Perfect presence: what does this mean for the design of virtual learning environments? Educ Inform Tech 2000; 5: 277–289.
85. Selverian M and Sung HH. In search of presence: a systematic evaluation of evolving VLEs. Presence Teleoperators Virtual Environ 2003; 12: 512–522.
86. Winn W, Windschitl M, Fruland R, et al. When does immersion in a virtual environment help students construct understanding? In: proceedings of the international conference of the learning sciences, Mahwah, NJ, 2002.
87. Mikropoulos T. Presence: a unique characteristic in educational virtual environments. Virtual Reality 2006; 10: 197–206.
88. Price M, Mehta N, Tone E, et al. Does engagement with exposure yield better outcomes? Components of presence as a predictor of treatment response for virtual reality exposure therapy for social phobia. J Anxiety Disord 2011; 25: 763–770.
89. Villani D, Repetto C, Cipresso P, et al. May I experience more presence in doing the same thing in virtual reality than in reality? An answer from a simulated job interview. Interact Comput 2012; 24: 265–272.
90. Mikropoulos T and Strouboulis V. Factors that influence presence in educational virtual environments. Cyberpsychol Behav 2004; 7: 582–591.
91. Dede C. Immersive interfaces for engagement and learning. Science 2009; 323: 66–69.
92. Banos R, Botella C, Alcaniz M, et al. Immersion and emotion: their impact on the sense of presence. Cyberpsychol Behav 2004; 7: 734–741.
93. Knerr B. Immersive simulation training. United States Army Research Institute, Arlington, Virginia, 2007.
94. Kaufmann B, Kozeny P, Schaller S, et al. May cause dizziness: applying the simulator sickness questionnaire to handheld projector interaction. In: proceedings of the 26th annual BCS interaction specialist group conference on people and computers, 2012.
95. Kolasinski E. Simulator sickness in virtual environments. U.S. Army Research Institute, Alexandria, 1995.
96. Johnson D. Introduction to and review of simulator sickness research. U.S. Army Research Institute, Arlington, VA, 2005.
97. Classen S, Bewernitz M and Shechtman O. Driving simulator sickness: an evidence-based review of the literature. Am J Occup Ther 2011; 65: 179–188.
98. Kennedy R, Lane N, Berbaum K, et al. Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. Int J Aviat Psychol 1993; 3: 203–220.
99. Brooks J, Goodenough R, Crisler M, et al. Simulator sickness during driving simulation studies. Accid Anal Prev 2010; 42: 788–796.
100. Young S, Adelstein B and Ellis S. Demand characteristics in assessing motion sickness in a virtual environment: or does taking a motion sickness questionnaire make you sick? Visual Comput Graph 2007; 13: 422–428.
101. Sharples S, Cobb S, Moody A, et al. Virtual reality induced symptoms and effects (VRISE): comparison of head mounted display (HMD), desktop and projection display systems. Displays 2008; 29: 58–69.
102. van Emmerik M, de Vries S and Bos J. Internal and external fields of view affect cybersickness. Displays 2010; 41: 169–175.
103. Moss J, Austin J, Salley J, et al. The effects of display delay on simulator sickness. Displays 2011; 32: 159–168.
104. Moss J and Muth E. Characteristics of head-mounted displays and their effects on simulator sickness. Hum Factors 2011; 53: 308–319.
105. Melzer J and Moffitt K. Head-mounted displays: designing for the user. New York: McGraw-Hill, 1997.
106. Bayer M, Rash C and Brindle J. Introduction to helmet-mounted displays. Helmet Mounted Displ Sens Percep Cognit Iss 2009; 47–108.
107. Leroy L, Fuchs P and Moreau G. Visual fatigue reduction for immersive stereoscopic displays by disparity, content, and focus-point adapted blur. IEEE Trans Ind Electron 2012; 59: 3998–4004.
108. Rolland J and Hua H. Head-mounted display systems. Encyclop Opt Eng 2005; 1–13.
109. Arthur K. Effects of field of view on performance with head-mounted displays. Dissertation, University of North Carolina at Chapel Hill, 2000.
110. Melzer J, Brozoski F, Letowski T, et al. Guidelines for HMD design. In: helmet-mounted displays: sensation, perception and cognition issues, 2009, pp. 805–848.
111. Bowman D, Stinson C, Ragan E, et al. Evaluating effectiveness in virtual environments with MR simulation. In: the interservice/industry training, simulation & education conference (I/ITSEC), 2012.
112. Haar R. Virtual reality in the military: present and future. In: 3rd Twente student conference on IT, 2005.
113. Hodgson E, Bachmann E, Waller D, et al. Virtual reality in the wild: a self-contained and wearable simulation system. In: 2012 IEEE virtual reality workshops (VR), 2012, pp. 157–158.
114. Kiyokawa K. Trends and vision of head mounted display in augmented reality. In: 2012 international symposium on ubiquitous virtual reality (ISUVR), 2012.
115. Ruddle R, Payne S and Jones D. Navigating large-scale virtual environments: what differences occur between
helmet-mounted and desk-top displays? Presence Teleoperators Virtual Environ 1999; 8: 157–168.
116. Barnett JS and Taylor G. How simulator interfaces affect transfer of training: comparing wearable and desktop systems. United States Army Research Institute, Ft. Belvoir, Virginia, 2012.
117. Taylor G and Barnett J. Training capabilities of wearable and desktop simulator interfaces. U.S. Army Research Institute, Arlington, Virginia, 2011.
118. Piryankova I, de la Rosa S, Kloos U, et al. Egocentric distance perception in large screen immersive displays. Displays 2013; 34: 153–164.
119. Lev D and Reiner M. Is learning in low immersive environments carried over to high immersive environments? Adv Hum Comput Interact 2012; 2012: 1–8.
120. Barnett J and Taylor G. Usability of wearable and desktop evaluation. United States Army Research Institute, Arlington, Virginia, 2010.
121. Taylor G and Barnett J. Evaluation of wearable simulation interface for military training. Hum Factors 2013; 672–690.
122. Kwon J, Powell J and Chalmers A. How level of realism influences anxiety in virtual reality environments for a job interview. Int J Hum Comput Stud 2013; 71: 978–987.
123. Riecke B, Behbahani PA and Shaw C. Display size does not affect egocentric distance perception of naturalistic. In: proceedings of the 6th symposium on applied perception in graphics and visualization, 2009.
124. Jacobson J. Digital dome versus desktop computer in a learning game for religious architecture. In: annual meeting of the American Educational Research Association, Denver, CO, 2010.
125. Ngoc LL and Kalawsky R. Evaluating usability of amplified head rotations on base-to-final turn for flight simulation training devices. In: 2013 IEEE virtual reality (VR), 2013.
126. Tichon J and Wallis G. Stress training and simulator complexity: why sometimes more is less. Behav Inform Tech 2010; 29: 459–466.
127. Santos BS, Dias P, Pimentel A, et al. Head-mounted display versus desktop for 3D navigation in virtual reality: a user study. Multimedia Tools Appl 2009; 41: 161–181.
128. Lathrop W and Kaiser M. Acquiring spatial knowledge while traveling simple and complex paths with immersive and nonimmersive interfaces. Presence 2005; 14: 249–263.
129. Knerr B. Current issues in the use of virtual simulations for dismounted soldier training. U.S. Army Research Institute, Orlando, 2006.
130. Ntuen C and Yoon S. Assessment of human interaction with virtual environment training technology. Air Force Research Laboratory, Mesa, AZ, 2002.
131. Jacquet C. A study of the effects of virtual reality on the retention of training. University of Central Florida, Orlando, 2002.
132. Patrick E, Cosgrove D, Slavkovie A, et al. Using a large projection screen as an alternative to head-mounted displays for virtual environments. In: CHI 2000, 2000, pp. 478–485.
133. Bochenek G. Comparative analysis of virtual 3-D visual display systems - contributions to cross-functional team collaboration in a product design review environment. University of Central Florida, Orlando, 1998.
134. Manrique F. The effects of display type and spatial ability on performance during a virtual reality simulation. Arizona State University, Wright Patterson Air Force Base, Ohio, 1998.
135. Singer M, Allen R, McDonald D, et al. Terrain appreciation in virtual environments: spatial knowledge acquisition. U.S. Army Research Institute, Alexandria, VA, 1997.
136. Johnson D and Stewart J. Use of virtual environments for the acquisition of spatial knowledge: comparison among different visual displays. Mil Psychol 1999; 11: 129–148.
137. Deisinger J, Cruz-Neira C, Riedel O, et al. The effect of different viewing devices for the sense of presence and immersion in virtual environments: a comparison of stereoprojections based on monitors, HMDs, and screen. Adv Hum Fact Ergon 1997; A: 881–884.
138. Lampton D, Gildea J, McDonald D, et al. Effects of display type on performance in virtual environments. U.S. Army Research Institute, Alexandria, Virginia, 1996.
139. Singer M, Ehrich J, Cinq-Mars S and Papin J-P. Task performance in virtual environments: stereoscopic versus monoscopic displays and head-coupling. U.S. Army Research Institute, Alexandria, 1995.

Author biographies

Dr Jonathan Stevens is a retired Lieutenant Colonel of the United States Army and a former Research Program Manager of the Army Research Laboratory (ARL). LTC Stevens has over 22 years of military experience as an Infantry and Acquisition Corps officer. He earned his PhD in Modeling and Simulation from the University of Central Florida.

Dr Peter Kincaid is a Graduate Research Professor at the University of Central Florida (UCF) and the co-director of the Modeling and Simulation graduate program at UCF. His areas of research include training systems analysis, instructional technology, and human factors. He has 30 years of applied research and university teaching experience.

Dr Robert Sottilare is the Chief Technology Officer and leads adaptive tutoring research at the US Army Research Laboratory's SFC Paul Ray Smith Simulation & Training Technology Center in Orlando, Florida. For nearly 30 years, his research has spanned modeling, simulation, and training sciences. For the past 10 years his focus has been on improving the usability and effectiveness of adaptive computer-based tutoring methods in support of self-regulated learning for both individuals and teams.
