Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2012

Enhancing Realism in Desktop Interfaces for Dismounted Infantry Simulation

Dr. James Templeman
U.S. Naval Research Laboratory
Code 5582, Immersive Simulation
Washington, DC 20375-5337
[email protected]

Ms. Patricia Denbrook
DCS Inc.
6719 White Post Rd.
Centreville, VA 20121
[email protected]

ABSTRACT

A new user interface for training dismounted infantrymen in decision-making and team coordination has been developed at the Naval Research Laboratory and evaluated by the Office of Naval Research (ONR) Code 30's Demonstration & Assessment team. Since decisions are judged by the outcome of the actions they give rise to, the validity of virtual training depends on the realism with which actions are simulated. We distinguish between physical realism, required for training sensory-motor skills, and behavioral realism, required for cognitive training. Behavioral realism indicates how closely users' actions in simulation resemble their actions in real life. Avatars represent users in simulation; thus giving users greater control over their avatars enhances the training of tactics. This insight led us to extend desktop simulators. Gamepads are the controller of choice for first-person-shooter console games, familiar to many Marines. We added an inexpensive head tracker and sliding pedals to a gamepad interface. We mapped view control and aiming to the user's head rotation, and leaning the upper body to head translation. The user steps by sliding the pedals back and forth, and crouches by pressing the pedals down. The gamepad directs the avatar's course and heading. The interface engages the user's head, hands, and feet to precisely control the avatar's body. This new control, called 'Pointman', has been integrated into Virtual Battlespace 2. Pointman underwent a series of tests, involving fire teams and later squads of Marines. The interface evolved based on user feedback. Practice drills were developed to expedite learning the interface. A Military Utility Assessment of Pointman was conducted by the D&A team at the Marine Corps Base Hawaii Simulation Center. A squad of seasoned Marines was trained to use Pointman and then applied it in training exercises. Marine feedback, in the form of surveys and interviews, was collected. The assessment concluded that Pointman provided realistic movement and utility as a training system.

ABOUT THE AUTHORS

Dr. Jim Templeman is head of the Naval Research Laboratory's Immersive Simulation Section. He received a D.Sc. in computer science from George Washington University in 1992, with minors in neurobiology and perceptual psychology, and has over twenty-seven years of experience designing and testing advanced user interfaces. He has worked in the field of dismounted infantry training simulations for twelve years. He invented Gaiter, an interface in which the user walks in place to move through the virtual world, and Pointman, a seated infantry simulator combining body- and device-driven control over the user's avatar. Dr. Templeman was technical manager of ONR's VIRTE CQB for MOUT program and Navy SBIRs for improved tracking and HMDs. He is an associate editor of the journal Presence.

Patricia Denbrook is a software professional with over thirty years' experience in user interface development. She has worked with NRL's Immersive Simulation Section since 1996 as the software architect and lead developer of user interfaces for virtual training systems, including Gaiter and Pointman. She integrated Pointman with VBS2, a combined arms simulator that is part of the USMC's Deployable Virtual Training Environment. Ms. Denbrook received an M.S. degree in computer science from George Washington University in 1979.


THE NATURE OF VIRTUAL SIMULATION

A virtual simulator is a system that uses input devices, computer simulation, and sensory feedback to model the users' interaction with the real world. It attempts to replicate what would happen in real-world situations. Although it may be used for the analysis of new tactics and technology, it is primarily used to provide training and mission rehearsal. Virtual simulators have proven invaluable for training the pilots and crews of combat vehicles. They make team training readily available anytime, anywhere. Virtual simulators offer a convenient means for delivering a wide range of scenarios and allow people to train on any terrain by simply uploading the appropriate digital models.

The Challenge of Dismounted Infantry Simulation

The design of user interfaces for vehicle simulators is straightforward because a similar set of control devices can be used in simulation as are used in the actual vehicle. Dismounted infantry simulation poses a greater challenge. An infantryman can apply his natural senses and mobility because he is directly exposed to the environment, rather than sequestered inside a vehicle. An infantryman makes direct contact with his environment and moves through it, propelled by his own limbs. He must make the most of his senses and direct them to see the enemy before he is detected. An infantryman is not heavily armored, so he must rely on available cover and concealment for protection. It is not good enough to simply hide behind cover; he must be able to peer out and shoot from behind it, and move from cover to cover while minimizing his exposure and maximizing his situational awareness. Tactical movements such as pie-ing a corner involve moving with respect to cover and concealment. People move differently than vehicles and change their posture in complex ways. Rather than controlling a vehicle, the user of a dismounted simulation controls an avatar that represents his body in the virtual world. The user interface primarily allows the user to control his avatar and sense the virtual world from the avatar's perspective.


The avatar exists within the simulation. The user interacts with the virtual world through his avatar.

Varieties of Realism

We have developed a variety of user interfaces for dismounted infantry simulation over the past decade. Our goal has always been to produce a realistic simulator; but what does that actually mean? Early on we sought to achieve a high level of physical realism, in which the user's physical actions in simulation match those in real life. To achieve physical realism the user's body parts are tracked so that the avatar directly reflects the user's motion. The difficulty with this approach is that the user does not directly sense, physically contact, or move through the virtual terrain (Templeman, et al. 1999). Later we shifted our approach to provide more effective control over the avatar. We discerned a second level of realism in avatar control: with behavioral realism the user controls the avatar to act in simulation as he would in real life. This approach gave us the freedom to substitute control actions for natural actions, but still requires a user interface that provides a high level of facility in controlling the avatar to act as the user would in the real world. While physical realism is required for training sensory-motor skills, behavioral realism is sufficient for training cognitive skills, including tactical decision-making and team coordination (Mautone, et al. 2006).

Throughout the course of development we often considered the question: what is wrong with simply adopting a user interface derived from conventional desktop first-person-shooter (FPS) games for training tactical decision-making and team coordination skills? The answer hinges on how well the actions carried out by infantrymen in virtual simulation match the actions they would have performed in the real world, given a similar scenario. Will the infantry team be able to carry out the tactical decisions they make with enough realism to get a sense of how things would have gone in a real-world mission had they made the same decisions?


In training exercises, as in real-world missions, decision-making occurs within an OODA loop, an ongoing process of observe, orient, decide, and act (Boyd, 1986). Each new decision must be acted upon, and the result of a sequence of decisions is judged in terms of the outcome of the actions it gave rise to. A training simulator provides a means for realizing the outcome of tactical decisions. The realism with which actions are simulated determines the accuracy of the simulated outcome of tactical decisions. A comprehensive and effective means of controlling the users' avatars is as important to ensuring the validity of an infantry simulation as it is to ensuring its usability. Thus, to raise the bar in dismounted infantry simulators, we sought to make them more realistic in terms of allowing users to perform combat-relevant actions with greater fidelity.

Example: Performing an Ambush

Consider a single limitation on the user's control over the avatar in conventional FPS games, the inability to use cover and concealment effectively, and how this alone can undermine the training experience. Take the task of employing a squad to set up and execute an ambush. The leader must select a suitable site, choose the formation to be employed, and assign men to their positions. Each squad member must decide how best to utilize the cover available at his position. The team remains hidden while keeping a lookout for opposing forces moving towards them. To make effective use of cover and concealment, each team member must adjust his posture and location to minimize exposure, and look out from and shoot from behind cover while exposing no more of himself than necessary.

What will be learnt from this exercise if, because team members are unable to control their avatars to effectively use cover and concealment, they suffer substantially more casualties than in a comparable real-world engagement? Would each casualty conclude that his position provided inadequate cover? Since the user interface imposes the same limitations on all trainees, the impact of the misrepresentation will be amplified. Will the team shy away from similar ambush sites in the future? The team leader might conclude that he chose a bad ambush site or deployed his forces in the wrong formation, when the failure was due not to these decisions but to the limitations of the user interface.

The OODA loop is cyclical. Errors in representing user actions cascade over time, diverging further from reality as the mission plays out. In the above example, even if the team leader realized that his plan may have been valid and his men's ineffective use of cover was an artifact of the user interface, he will still be faced with having to take further action to deal with the failure of the plan to work properly in simulation (e.g., retreating with casualties).


Effects of Lack of Behavioral Realism

The less realistic a simulator is, the more trainees will discount the tactical mistakes they make using it, and take their successes with a grain of salt. If the simulation fails to convince trainees that it accurately represents their actions, how can they gain confidence from such training? On the other hand, the more relevant behaviors the interface allows a trainee to perform, the more responsible he becomes for carrying out his duties. This creates positive feedback: a behaviorally realistic interface encourages the user to take it more seriously (Bailenson, et al. 2006), which leads to more realistic performance, making each training exercise more meaningful.

The user interface impacts the training experience on two levels: at the level of outcomes and at the level of practice. As illustrated in the ambush example, the user interface may change the outcome of decisions made over the course of a mission. But even without altering an outcome, the interface impacts the fidelity with which users get to practice carrying out mission-related tactics, techniques, and procedures (TTPs). Each exercise presents an opportunity to practice selecting and applying appropriate TTPs. This is of value even when it does not impact a mission's outcome. For example, a squad must always maintain 360-degree security while moving through a non-benign environment. It is important to practice vigilance even when the squad does not encounter threats.

The validity of a dismounted infantry simulator depends on the fidelity of its user interface. How bad is it to train tactical decision-making with limited control over the user's avatar? It all depends on what happens over the course of each training mission. If fine control over the actions employed has no bearing on running a particular scenario, then it makes no difference. But one cannot know the fidelity needed in advance, because we do not know what decisions the team will make as the exercise unfolds. Other elements, such as more lifelike behavior of computer-driven characters and deformable virtual terrain, also contribute to the realism of simulation. While the user may interact with a number of these elements over the course of an exercise, the user is constantly engaged in controlling his avatar. While there is a certain amount of mystery associated with the behavior of other characters, a user is intimately aware of how his own avatar responds to his control, and consciously and subconsciously discounts the training experience offered by a simulator that degrades his ability to perform mission-critical actions.


DEVELOPING A NEW USER INTERFACE

Body-Tracked Simulators

Originally, we sought to create a high-fidelity infantry simulator to allow the user to control his avatar in as natural a manner as possible. We tracked the rotation and translation of the user's major body segments, an instrumented rifle prop, and a head-mounted display (HMD). The avatar's posture constantly reflected the user's posture, so the user simply turned his head and body, and lined up the rifle to change the view, heading, and aim. A means of allowing the user to move through a large virtual terrain while remaining in a small physical space was needed: in our Gaiter system (Templeman, et al. 1999), the user stepped in place to move through the virtual world. The movement of the legs controlled the direction and extent of each virtual step taken. (A common alternative used in other body-tracked simulators is a thumbstick mounted on the rifle prop to direct the course and speed of locomotion.)

Although this approach appeared promising at first, problems arise when the user's interaction with the virtual world resembles natural interaction, but is not close enough for a person to apply his real-world skills and reflexes. Using natural actions to control the avatar without truly realistic sensory feedback disrupts the user's ability to perform tasks. A case in point involves aiming: the user holds and manipulates a physical rifle prop, but only sees a rendered image of the rifle in the HMD. It is not technically possible to fully align the image of the rifle with the physical prop. There is inherent delay between when the tracker detects the positions of the HMD and rifle, and when the rifle's image can be depicted at that position in the HMD against the background of the current state of the simulated environment. Another mismatch is due to the failure to align the rendered image on the optical display with the position of the user's eyes, in order to present a correct perspective view of the virtual world. People wear HMDs in different ways due to the shape of their heads, what feels comfortable, and how their movement jostles the HMD. Misalignment between how the physical rifle is held and how the virtual rifle appears to be held is also problematic, since the user is forced to compensate by misaligning the physical rifle's aim in order to lay the virtual rifle's sights on the target. Rapid target engagement relies on the practiced ability to snap the rifle from a ready position into an aim with the sights nearly aligned on the target, yet these skills acquired on a shooting range cannot be applied in the virtual simulator. Worse yet, repeated practice at misaligning the physical rifle to get the virtual gunsights on target undermines the vital skill of engaging actual threats.
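To give a sense of the magnitudes involved, a back-of-the-envelope calculation follows. All of the numbers (latency, head-turn rate, sight offset, ranges) are illustrative assumptions for the sketch, not measurements from our system or from any particular HMD or tracker.

```python
import math

# Illustrative values only (assumptions, not measurements).
latency_s = 0.050          # end-to-end tracker-to-display delay
head_turn_deg_s = 120.0    # a brisk scanning turn of head and rifle together
sight_offset_m = 0.02      # rendered sight drawn 2 cm off the prop's sight
eye_to_sight_m = 0.5       # distance from shooting eye to rear sight
target_range_m = 25.0

# While the shooter turns, the rendered rifle lags the physical prop by:
lag_deg = head_turn_deg_s * latency_s                                 # 6.0 deg

# To lay the *virtual* sights on target, the shooter must hold the
# *physical* prop off-axis by roughly:
comp_deg = math.degrees(math.atan2(sight_offset_m, eye_to_sight_m))   # ~2.3 deg

# Carried back to a real rifle, that learned compensation displaces the
# point of aim at range by:
miss_m = target_range_m * math.tan(math.radians(comp_deg))            # ~1.0 m

print(f"transient lag {lag_deg:.1f} deg; "
      f"learned offset {comp_deg:.1f} deg = {miss_m:.1f} m miss at 25 m")
```

Even with these modest assumed values, the shooter is training a multi-degree aiming offset that would translate into a miss of roughly a meter at 25 m on a real range, which is the negative-training concern described above.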


Some body-tracked simulators limit which aspects of the user's motion are reflected by the avatar. They gather just enough information to know when and how to transition the avatar from one posture or motion to another. These simulators provide the same sorts of stereotyped avatar motion control found in desktop and console interfaces. For example, some dismounted infantry simulators rely on inertial sensors for tracking. Since inertial sensors only report orientation, the true positions of the HMD and rifle prop are not known. In such systems, when the user shoulders the rifle prop (detected by a pressure sensor in the stock) the virtual rifle is forced into sight alignment in the user's view. The user obtains virtual sight alignment even when the gunsight on the prop is translated out of alignment with the shooting eye. This makes it easier to aim in simulation than in the real world. If a rifleman attempted to rapidly engage an actual threat by snapping the rifle up just as he did in simulation, he may find it translated so far out of alignment that he cannot look through the sight. He would then have to reposition the rifle to line up the sight, but there may not be time to do so. This interface provides a different form of negative training. Why employ a realistic rifle prop for training if the user's real-world skills cannot be brought to bear in simulation, and practicing with it degrades the user's ability to rapidly engage real-world targets? Negative training that would compromise a shooter's ability to engage threats must be ruled out.

Back to the Drawing Board

Our R&D group invested a lot of time and effort in designing, implementing, and testing body-tracked simulators. There was a great deal that we liked about the naturalness of that user interface, but the vital abilities of properly aiming a rifle, gracefully pie-ing a corner, or rapidly diving into cover were not among them. This led us back to the drawing board. We reevaluated existing user interfaces for FPS games and considered how we could extend them to include the advantages of body-tracked simulators such as Gaiter. A more abstract interface would help avoid negative training and free us to adopt a better way to simulate stepping.


Requirements

The actions involved with infantry tactics are often divided along the lines of 'Look-Move-Shoot' (Templeman, et al. 2008). Perception is active. Looking entails directing the eyes and ears to pick up information. An infantryman must freely look around to detect and engage threats in a 360°/180° battlefield. Movement involves turning the body, and stepping or crawling to translate the body in any direction. Shooting encompasses weapon manipulations ranging from holding the weapon during movement to firing it. Looking and shooting overlap in terms of aiming, and interact with movement when different shooting positions (standing, kneeling, sitting, and prone) are adopted. The Look-Move-Shoot paradigm also applies to the use of cover and concealment. Rapid movement between cover, precise movement near cover, and the ability to quickly look out and shoot from behind cover while minimizing exposure to enemy fire are critical.

Conventional desktop and console game controllers provide some level of control over those actions. These interfaces have a limited number and range of input channels, due to being operated exclusively by the hands. The mouse or the right thumbstick of a gamepad turns the avatar's body and rotates the head up and down. Keys or the left thumbstick direct the course and speed. Keys or buttons generate animations for looking to the side, leaning the upper body right or left, shifting between firing positions, and raising or lowering a weapon. These conventional input devices provide a means of controlling the avatar's motion and posture, and through them the user directs his avatar to perform the combat-related actions described above, as summarized in the sketch below.

We learnt from working with Gaiter that body-tracked user interfaces can enhance the user's control over the posture and motion of his avatar, and that an infantryman's posture is critical to executing TTPs. The input devices we added to Pointman enable us to provide a similar level of control in a seated interface.
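The conventional binding set just described can be tabulated schematically. The sketch below is a generic illustration, not any particular title's control scheme; note that only two channels are continuous, while every posture change is a discrete animation trigger.

```python
# A generic desktop/console FPS binding set, as described above. The
# labels are illustrative; they are not taken from any specific game.
BINDINGS = [
    # (input,                     controls,                      kind)
    ("mouse / right thumbstick",  "turn body, pitch head",       "continuous"),
    ("keys / left thumbstick",    "course and speed",            "continuous"),
    ("lean key or button",        "canned upper-body lean L/R",  "discrete"),
    ("stance key or button",      "stand / crouch / prone",      "discrete"),
    ("weapon key or button",      "raise / lower the weapon",    "discrete"),
]

for device, action, kind in BINDINGS:
    print(f"{device:26s} -> {action:28s} [{kind}]")
```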

Figure 1. Setup of Pointman Input Devices

Pointman: Involving the Head, Hands, & Feet

We developed a new desktop user interface for dismounted infantry simulation, called Pointman. It uses a set of three input devices: a head tracker, gamepad, and rudder pedals. The additional input from the head and feet offloads the hands from having to control the entire avatar and allows for a more natural assignment of control. Together, the three devices offer twelve independent channels of control over the avatar's posture.


The NaturalPoint TrackIR 5 head tracker registers the translation and rotation of the user's head (Figure 2). The avatar's upper body posture is linked to directly reflect the tracked movement of the user's head. The rotation (yaw, pitch, roll) of the avatar's head follows that of the user's head. The view changes accordingly, so that the user simply turns his head to look around. When aiming a weapon, its sights remain centered in the field of view, so that turning the head also adjusts the aim. (This resembles physical aiming in the sense that a rifleman moves his head and rifle together as a unit by maintaining a consistent cheek-weld to index the weapon.) The user can aim as precisely as he can hold his head on target.

The translational (x, y, z) readings of the head tracker allow Pointman to fully couple the avatar's head movements to those of the user. The head translates as the upper body leans forward or to the side. Hunching the head down by flexing the spine is also registered by the head's translation, and the avatar adopts a matching posture. Leaning forward and hunching are used to duck behind cover. Rising up and leaning to the side are used to look out and shoot from behind cover.

Figure 2. Head Tracker based Upper Body Control (diagram: rotating the head, 3-DoF yaw-pitch-roll, turns the head and aim; translating the head, 3-DoF roll-pitch-hunch, leans the torso)

The gamepad used with Pointman, the Sony Dual Shock 3, includes a pair of tilt sensors. The tilt of the gamepad controls how the virtual rifle is held (Figure 3). The user tilts the gamepad down to lower the rifle, and tilts the gamepad up to continuously raise the rifle through a low ready into an aim and then to a high ready. This allows users to practice muzzle discipline. The user lowers the virtual weapon to avoid muzzle-sweeping friendly or civilian characters, minimize collisions when moving through tight spaces, and avoid leading with the rifle when moving around cover. Once the rifle is raised into an aim, the user's head motion aligns the sight picture. The user rolls the gamepad (tilting it side to side) to cant the weapon.

Figure 3. Tilt Gamepad to Vary Weapon Hold (diagram: tilting the gamepad, 2-DoF pitch-roll, raises and cants the rifle)

The CH Products rudder pedals slide back and forth, and also move up and down like an accelerator pedal. The user slides the pedals back and forth to move the avatar by stepping (Figure 4).

Figure 4. Slide Pedals to Step (diagram: sliding the pedals, 1-DoF displacement, steps)

When the user presses down on the pedals the avatar bends its legs to lower its postural height (Figure 5). Thus the user can continuously go from standing tall to a low squat. If he pushes the pedals down with them apart, the avatar takes a kneeling position. Gamepad buttons are used to transition between the three discrete postures: standing, prone, and sitting. When prone, the user crawls by sliding the pedals and can move continuously from a high to a low prone (from hands-and-knees to belly-on-the-ground) by depressing the pedals.

Figure 5. Depress Pedals to Lower Postural Height (diagram: pressing the pedals, 1-DoF height, lowers the body)

Pointman retains the use of the thumbsticks to control the avatar's course and heading (Figure 6). The left stick sets the stepping direction and is a positional control. The right stick provides rate control over turning the avatar's body: the avatar turns in the direction the stick is deflected; the further the stick is deflected to either side, the faster the avatar turns.

Figure 6. Thumbsticks Direct Heading and Course (diagram: right stick, 1-DoF yaw, turns the body; left stick, 1-DoF yaw, directs the course)
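Taken together, the device channels above amount to a per-frame mapping from raw readings onto the avatar's posture. The sketch below is our schematic reading of that mapping, not the actual Pointman/VBS2 implementation; the type names, units, and tuning constants (TURN_RATE_MAX, STRIDE_M) are illustrative assumptions.

```python
"""Schematic per-frame mapping of Pointman's three input devices onto the
avatar's posture. Illustrative only; names and constants are assumptions."""
from dataclasses import dataclass
import math

TURN_RATE_MAX = 120.0   # assumed deg/s of body turn at full stick deflection
STRIDE_M = 0.75         # assumed avatar travel for one full pedal stroke

@dataclass
class Devices:
    # NaturalPoint TrackIR 5: 6-DoF head pose.
    head_yaw: float; head_pitch: float; head_roll: float   # degrees
    head_x: float; head_y: float; head_z: float            # meters from rest
    # Gamepad: tilt sensors plus two thumbsticks.
    pad_tilt: float; pad_cant: float                       # -1 .. +1
    stick_course: float                                    # course angle, deg
    stick_turn: float                                      # -1 .. +1 (rate)
    # CH Products pedals: sliding and pressing.
    pedal_slide: float                                     # -1 .. +1
    pedal_press: float                                     #  0 .. 1

@dataclass
class Avatar:
    heading: float = 0.0; x: float = 0.0; y: float = 0.0
    head_yaw: float = 0.0; head_pitch: float = 0.0; head_roll: float = 0.0
    lean_side: float = 0.0; lean_fwd: float = 0.0; hunch: float = 0.0
    crouch: float = 0.0; leg_phase: float = 0.0
    rifle_raise: float = 0.0; rifle_cant: float = 0.0

def update(a: Avatar, d: Devices, prev_slide: float, dt: float) -> None:
    # Head rotation -> view and aim (positional; when aiming, the sights
    # stay centered in the view, so this also lays the weapon).
    a.head_yaw, a.head_pitch, a.head_roll = d.head_yaw, d.head_pitch, d.head_roll
    # Head translation -> lean and hunch (the pelvis stays in the chair).
    a.lean_side, a.lean_fwd = d.head_x, d.head_y
    a.hunch = max(0.0, -d.head_z)
    # Gamepad tilt -> weapon hold: lowered -> low ready -> aim -> high ready.
    a.rifle_raise, a.rifle_cant = d.pad_tilt, d.pad_cant
    # Right stick -> rate control over turning the body.
    a.heading += d.stick_turn * TURN_RATE_MAX * dt
    # Pedals -> stepping (positional): the change in slide this frame moves
    # the avatar along the course set by the left stick.
    a.leg_phase = d.pedal_slide
    travel = abs(d.pedal_slide - prev_slide) * STRIDE_M / 2.0
    course = math.radians(a.heading + d.stick_course)
    a.x += travel * math.sin(course)
    a.y += travel * math.cos(course)
    # Pressing the pedals bends the legs, from standing tall to a low squat.
    a.crouch = d.pedal_press
```

The code makes the division of labor explicit: every channel except body turning copies an input position directly into the posture, while turning integrates a rate from the right stick.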


Correspondence

The more natural assignment of control allows for a more natural correspondence between the user's and the avatar's movements: the head turns to look, the torso leans to duck or lean around corners, the reciprocal motion of the sliding foot pedals mimics stepping, and the hands tilt the gamepad to raise and lower the weapon. This correspondence allows us to tap into the user's natural ability to perform coordinated actions with different parts of the body.

Head Coupled View Control

Head coupled viewing is commonly used with head-mounted displays (HMDs) to allow the user to look in any direction. The HMD is tracked and an image of the virtual world as seen from the perspective of the user's avatar is rendered on the display. Desktop interfaces can also use head tracking to control the view. When the user's head turns, his avatar's head turns in the same direction, and an image of what the avatar would see looking in that direction is displayed on the monitor (Figures 7 and 8). This approach works well in practice, as can be seen in the popular use of the NaturalPoint TrackIR head tracker for controlling viewing in a wide variety of desktop simulators. Head coupled view control is taken further in the Pointman interface. In a seated interface, 6-DoF tracking of the head is sufficient to lean and hunch the avatar's upper body independently of the head's orientation, because the user's pelvis remains in the chair.
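As a concrete sketch of head-coupled viewing on a fixed display, the camera used to render the monitor image can be built directly from the avatar's tracked head angles. The function below is illustrative and not drawn from the Pointman source; the numpy formulation and the pitch gain value of 1.8 are assumptions (the gain anticipates the pitch amplification discussed under Figure 8).

```python
import numpy as np

def view_matrix(head_yaw_deg, head_pitch_deg, eye_height_m, pitch_gain=1.8):
    """Build the avatar's view (camera) matrix from tracked head angles.

    pitch_gain amplifies the user's physical pitch so he can look straight
    up or down in the virtual world while still facing the monitor; 1.8 is
    an illustrative value, not the one Pointman uses.
    """
    yaw = np.radians(head_yaw_deg)
    pitch = np.radians(head_pitch_deg * pitch_gain)
    # Rotations about the vertical (yaw) and lateral (pitch) axes.
    ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [ 0,           1, 0          ],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rx = np.array([[1, 0,              0             ],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    r = ry @ rx                               # head orientation in world space
    t = np.array([0.0, eye_height_m, 0.0])    # avatar eye position
    # A view matrix is the inverse of the head's world transform.
    m = np.eye(4)
    m[:3, :3] = r.T
    m[:3, 3] = -r.T @ t
    return m
```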

Figure 7. Head Coupled Viewing - Fixed Display: The avatar's head is linked to the user's head motion and the view that would be seen by the avatar is displayed on the monitor. This figure shows the nominal case of looking straight ahead. (Diagram panels: the physical world, showing the user's head and his view of the scene; the virtual world, showing the avatar's head and view.)


Figure 8. This figure shows what happens in the virtual world and on the screen when the user tilts his head to look up. (Same diagram layout as Figure 7.)

Although Pointman can be used with an HMD, we generally prefer a desktop display. When a desktop display is used to perform head coupled viewing, the pitch of the user's head (tilting up and down) is typically amplified to allow the user to look straight up or down in the virtual world while still being able to see the screen. Combining this with turning the avatar's body using a thumbstick allows the user to look in any direction (360°/180°: around, above, and below).

Positional Control

The additional input devices also allow different forms of control to be adopted. A positional control uses the position of an input device or tracked segment of the user's body to determine the position of an output element (Zhai, 1995). For our purposes the output element will be some aspect of the avatar's body or a virtual object held by the avatar. The term 'position' may refer to translation or rotation. In contrast, a rate control uses the position of an input element to determine the rate of change in position (speed of motion) of an output element.

Positional controls are well suited for controlling the avatar's posture. They provide direct, continuous control over the different postural elements. The user controls the avatar's posture with the same dexterity and precision as he can physically position his head, hands, and feet. This incorporates a natural means of trading off speed and accuracy of movement into the simulation, allowing users to attain more realistic levels of performance. The user constantly feels the avatar's posture through his body. He can stop moving at any point along the arc of motion and know how his avatar is posed without the need for visual feedback.
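The distinction is easy to state in code. A minimal sketch follows, assuming normalized inputs; the limit and rate constants are illustrative, not Pointman's actual tuning values.

```python
# Positional control: the output position follows the input position.
def positional_lean(head_x_m, max_lean_m=0.25):
    """Avatar lean tracks the user's head offset directly; stop the head,
    and the avatar stops in exactly that pose (max_lean_m is an assumed
    mechanical limit)."""
    return max(-max_lean_m, min(max_lean_m, head_x_m))

# Rate control: the input position sets a speed; the output integrates it.
def rate_turn(heading_deg, stick_x, dt, max_rate_deg_s=120.0):
    """Avatar heading keeps changing while the stick is held deflected;
    centering the stick stops the turn but leaves the heading wherever it
    ended up (max_rate_deg_s is an assumed tuning constant)."""
    return heading_deg + stick_x * max_rate_deg_s * dt
```

Eleven of Pointman's twelve channels behave like the first function: stop moving the input and the avatar stops in exactly that pose. Only body turning behaves like the second.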


Positional control allows the user to move the avatar by continuously varying its posture. This is fundamentally different from either discrete control over the avatar's posture or rate control over its motion. With discrete or rate controls the user presses a button or pushes a stick and the simulation plays out a predefined motion sequence. With positional control over posture, the user animates the avatar by moving his body. To implement this level of control we had to move away from the way conventional game control software works, where the game plays out an animation in response to an input. Instead we directly couple the position of the input to the avatar's posture. It feels more like controlling a puppet or inhabiting the avatar than like driving the avatar through a vehicle-like control interface.

The use of the pedals to simulate stepping illustrates how effective positional control can be. The sliding pedals capture the reciprocal nature of leg motion: sliding the right pedal forward swings the avatar's right leg forward, past its left leg. The avatar moves by continuously varying the postures of its legs. The user can stop moving with the legs apart or together, and feels their separation at every moment. The user can vary the stride length and cadence of stepping to control the speed of motion. The user can also make fine adjustments in the avatar's position by sliding small amounts. This allows for precise movement in and around cover.

Pointman provides continuous control over the following motions, typically supported in conventional first-person-shooter (FPS) games:

1. Turning the body (view & aim)
2. Pitching the head (view & aim)
3. Directing the course
4. Moving the body by stepping
5. Leaning the upper body to the side
6. Lowering the body
7. Raising the rifle

Pointman also provides positional control over the following motions, rarely supported in FPS games:

8. Yawing the head independently of the body
9. Canting the head
10. Leaning the upper body forward and back
11. Hunching the torso
12. Canting the rifle

This gives the user twelve independent channels of control over the avatar's posture, eleven of which are positional controls. Pointman provides rate control over turning the body and positional control over all the other motions. In FPS games, gamepads typically provide positional control over only the avatar's course, and mouse-keyboard interfaces provide positional control over only the heading and the pitch of the avatar's head.

This level of control over the avatar's posture allows the following actions to be performed with greater behavioral realism:

- Direct the view: knowing where you are looking, and turning your head to search and assess.
- Aim: as precisely as you can hold your head directed on target.
- Adjust the rifle hold: to avoid muzzle-sweeping friendlies, avoid disclosing your presence by protruding the muzzle out from concealment, and lower the weapon to move through tight spaces.
- Step: to move precisely in tight spaces and with respect to cover and concealment, varying speed continuously over a wide dynamic range (without need for a 'turbo' button), with the ability to stop immediately at any moment (with feet together or apart).
- Lean and hunch: to duck behind cover and rise up or lean out from cover/concealment to look and shoot.

Training Required to Use the Interface

The time required to learn a new user interface is a major concern, especially if the system is not used on a regular basis. Ease of learning often goes hand-in-hand with ease of use. Interfaces that require users to associate button presses with a large set of essential actions are not well suited for infrequent users, who are apt to forget which button invokes which action. Pointman employs distinct motor actions that resemble their real-world counterparts. It is obvious which actions are controlled by head or foot movements. Users have a good idea of how the interface works when they return to it, and progressively gain confidence in using it.

We experimented with making the turning thumbstick a positional control and changing the way the course changes as the avatar turns, to provide better support for pie-ing corners (Templeman, et al. 2007). We found that experienced console gamers required a great deal of practice to unlearn the rate-controlled turning that they were familiar with and proficient at using. It is far easier to learn a new interface when the user does not have to overcome highly ingrained habits in order to use a familiar device in a new way.

Integration with Virtual Battlespace 2

Virtual Battlespace 2 (VBS2) is a combined arms training simulator used by the USMC, US Army, and NATO allies. In particular, VBS2 is used in the Marine Corps DVTE (Deployable Virtual Training Environment) program. Bohemia Interactive Systems (BIS), the creators of VBS2, worked closely with NRL to tightly integrate the Pointman interface with VBS2 and to allow Pointman to fully and continuously control the posture of the user's avatar. The detailed articulation of the user's avatar is made visible to other squad members running in a networked simulation. Pointman-enhanced VBS2 (VBS-Pointman) also supports the operation of a wide range of small arms and additional forms of mobility, including climbing, swimming, and mounted roles (driver, passenger, and gunner) using the full complement of manned vehicles.

TESTING & ASSESSMENT

VBS-Pointman was first shown at the FITE (Future Immersive Training Environment) JCTD (Joint Capability Technology Demonstration) as an excursion demonstration. Since then it has undergone extensive testing to evaluate and refine enhancements to the user interface. Testing was performed at the Simulation Center at Camp Lejeune and at The Basic School (TBS) in Quantico.

Military Utility Assessment

ONR Code 30's Demonstration & Assessment team performed a full Military Utility Assessment (MUA) of the Pointman user interface in September 2011. The assessment was held at the 3-MEF Simulation Center, Marine Corps Base Hawaii. A squad of Marines with combat experience from Golf Company, 2nd Battalion, 3rd Marine Regiment participated in the study. The MUA focused on the ability of the Pointman user interface to replicate real-world movement in a virtual environment and to enhance the virtual training experience for Marine infantrymen.

The primary data used to inform the assessment was subjective user feedback solicited from the 2/3 Marines. This was collected using automated surveys and interviews. Objective data was gathered by the assessment team throughout the MUA and documented via logs used to capture actual system performance times and events. Upon completion of the study all the data was compiled and analyzed by the D&A team to arrive at a conclusion regarding Pointman's realism and training value. A description of the study, the analysis of the collected data, and its conclusions were included in the final report (Office of Naval Research, Code 30, 2012).


The study found that the Marines adapted easily to the Pointman input devices. They gave the head tracker a 97% approval rating for the ability it provided to precisely control the movements of their avatar's head and torso. They liked that the gamepad gave them a familiar means of controlling their course and heading, that it enabled them to fluidly control the level of the weapon, and that it integrated well with the head tracker for aiming, scanning, and engaging targets. The Marines greeted the foot pedals with guarded skepticism at the start of the Pointman training sessions, but after completing the first training session their feedback was overwhelmingly in favor of the pedals (92% either strongly agreed or agreed they were easy to use). In characterizing Pointman, the Marine Squad Leader observed that the "foot pedals allowed better, smaller and more precise avatar movement." The foot pedals also proved durable over the course of the MUA, withstanding five days of continuous use by Marines in combat boots with no breaks or failures.

The graphs in Figure 9 summarize the Marines' assessment of the Pointman user interface based on their answers to the surveys. The vertical axis shows the number of times Marines selected a particular response to the question posed. The horizontal axis shows the response categories, ranging from "Strongly Agree" to "Strongly Disagree". In terms of Pointman's ability to replicate real-world movement, the Marines' responses to the survey questions clearly indicated that Pointman allowed them to realistically (1) control viewing, (2) perform tactical movements, (3) control the virtual rifle, (4) utilize cover, and (5) control the avatar's posture. With respect to Pointman's utility in enhancing the training system, based on Marine feedback and the analysts' observations it was determined that (6) the Pointman user interface was comfortable and easy to use and (7) the training the Marines received was useful and effective. Pointman demonstrated (8) high utility for training individual Marines and small units up to the fire-team level, and (9) the Marines saw the potential in Pointman as a training system for squad-level training. Unfortunately, Pointman was unable to fully demonstrate its suitability for training small units at the squad level during the MUA. This was due to a memory buffer overflow condition which caused simulation performance issues and system crashes when running large multi-player scenarios.

[Figure 9 comprises nine bar graphs, one per survey statement: (1) Marines felt Pointman allowed them to realistically control viewing; (2) perform realistic tactical movements; (3) realistically control the virtual rifle; (4) realistically utilize cover; (5) realistically control the avatar's posture; (6) Marines felt Pointman was comfortable, easy to use, and enhanced simulation; (7) Marines felt they received adequate training to use Pointman; (8) Marines felt Pointman was useful for individual training; (9) Marines felt Pointman was useful for small unit training. Each graph plots response counts (response level) against response categories ranging from "Strongly Agree" to "Strongly Disagree".]

Figure 9. The Marine Squad's Responses to the Assessment Questions

The MUA report concluded that Pointman provided realistic movement and utility in enhancing the training system. The Pointman user interface allowed the Marines to control their avatars to move realistically and enhanced their ability to utilize cover more effectively. Training utility for individual Marines and small units was also demonstrated. The report recommended transitioning the Pointman enhancements to VBS2 to increase its realism and efficacy as a virtual training aid.

Since the MUA we have worked with Bohemia Interactive Systems to resolve the memory-related scalability issues which affected Pointman's suitability for training at the squad level. Recent testing using Marines from Quantico TBS has confirmed that VBS-Pointman is now fully capable of running large squad-level missions on complex terrains. We have also streamlined the Pointman installation and have developed a standard training curriculum with training objectives and a demonstration to be included with the system.


CONCLUSION

Pointman allows users to control their avatars to act in a more realistic way, but makes no attempt to train real-world motor skills. It abstracts how the user controls the avatar: the user's actions do not match every motion the avatar makes in detail. Pointman is a more realistic simulation interface in the sense that it lets the user control the avatar to perform a wider range of combat-relevant actions. This enhanced behavioral realism supports the training of cognitive skills, including tactical decision-making and team coordination.

After extensive testing and evaluation, culminating in the recently completed Military Utility Assessment, the Pointman Dismounted Infantry Simulation Interface has been recommended for transition into VBS2 to enhance the training of Marine infantry units up to the squad level. NRL is continuing to work with ONR and the USMC towards this goal.


ACKNOWLEDGEMENTS

We would like to thank: NRL for the 6.2 base funding to initially develop Pointman; ONR's Rapid Technology Transition program office for supporting the integration of Pointman with VBS2; Bohemia Interactive Systems for their assistance in integrating with VBS2; and ONR's Human Performance, Training, and Education S&T thrust area managers for their support in refining, demonstrating, and assessing Pointman.

REFERENCES

Bailenson, J.N., Yee, N., Merget, D., & Schroeder, R. (2006). "The Effect of Behavioral Realism and Form Realism of Real-Time Avatar Faces on Verbal Disclosure, Nonverbal Disclosure, Emotion Recognition, and Copresence in Dyadic Interaction," Presence 15, 359-372.

Boyd, J. (1986). Patterns of Conflict (slide set). http://www.ausairpower.net/JRB/poc.pdf

Office of Naval Research, Code 30: Expeditionary Maneuver Warfare & Combating Terrorism Dept. (2012). Pointman Dismounted Infantry Simulation Interface Military Utility Assessment Report (FOUO).


Mautone, T., Spiker, A., & Karp, R. (2006). Conventional Training Versus Game-Based Training. ONR Technical Report by Anacapa Sciences Inc., Santa Barbara, California. p. 9.

Templeman, J.N., Denbrook, P.S., & Sibert, L.E. (1999). "Virtual Locomotion: Walking In Place Through Virtual Environments," Presence 8, 598-617.

Templeman, J.N., Sibert, L.E., Page, R.C., & Denbrook, P.S. (2007). "Pointman - A Device-Based Control for Realistic Tactical Movement." Proceedings of 3DUI 2007, pp. 163-166.

Templeman, J.N., Sibert, L.E., Page, R.C., & Denbrook, P.S. (2008). "Designing User Interfaces for Training Dismounted Infantry," in D. Nicholson, D. Schmorrow, and J. Cohn (eds.), The PSI Handbook of Virtual Environments for Training and Education. Westport, CT: Greenwood Publishing Group.

Zhai, S. (1995). Human Performance in Six Degree of Freedom Input Control. Ph.D. Thesis, University of Toronto. http://etclab.mie.utoronto.ca/people/shumin_dir/papers/PhD_Thesis/Chapter2/Chapter23.html
