The Mixed-Initiative Experimental Testbed for Collaborative Human Robot Interactions

Daniel Barber, Sergey Leontyev, Bo Sun, Larry Davis, and Denise Nicholson
ACTIVE Laboratory, Institute for Simulation and Training
University of Central Florida, 3100 Technology Parkway, Orlando, FL 32826 USA
[email protected], [email protected], [email protected], [email protected], [email protected]

Jessie Y.C. Chen
U.S. Army Research Laboratory
12423 Research Parkway, Orlando, FL 32826 USA
[email protected]

ABSTRACT
Current military forces increasingly rely upon unmanned systems. Training mixed teams of soldiers and robotic agents can be accomplished using specialized virtual environments or fabricated real-life structures. However, there are still gaps in the technology and methodologies needed to support human-robot teams, and the environments presently used are not reconfigurable or extendable. The Mixed-Initiative Experimental (MIX) Testbed was developed to support training of mixed-initiative teams, experimentation with new training methods, and exploration of team composition and robot capabilities. The testbed combines the Joint Architecture for Unmanned Systems (JAUS) and the High Level Architecture (HLA) to create a system in which virtual robotic entities operate within an HLA-based simulation environment. In this paper we present the design and implementation of the MIX Testbed and describe sample scenarios for experimentation and training.

KEYWORDS: Distributed simulation, Human-Robot Interaction, HLA, JAUS, Mixed-Initiative Teams.


1. INTRODUCTION
The military is required to operate in a continuously growing, networked environment. For a team to operate effectively within a Net-Centric Operating Environment (NCOE), it must be highly coordinated, multidisciplinary, and able to use and interact with computer-based resources. Team members may also be required to operate in a distributed manner due to physical separation. Multiple teams working to achieve a common goal define distributed operations (DO). The DO concept subdivides traditional military units into smaller groups connected by a dynamic, robust communications network. These units can then operate independently or as part of larger teams, with the NCOE as the common link. In [1], the Joint Chiefs of Staff define the capabilities, tasks, performance criteria, and measurement standards for the NCOE concept and recommend the use of simulation to test and train in the NCOE. The key to successful testing and training is to provide experiential learning in context; for a NCOE specifically, the learning must occur in a distributed, team-oriented configuration.

In 2003, the Department of Defense received a congressional mandate stating that "It shall be a goal of the Armed Forces to achieve the fielding of unmanned, remotely controlled technology such that… by 2015, one-third of the operational ground combat vehicles are unmanned." [2] This mandate compels military teams to include robotic entities.

Although there has been much research on human team performance [3][4], questions remain about how to optimize operational systems and performance support for these "mixed-initiative" teams. Mixed-initiative interaction refers to a flexible interaction strategy in which each agent (human or computer) contributes what it is best suited to do at the most appropriate time [5]. Distributed, multi-user simulation is one modality that can be used for training mixed-initiative teams. The ultimate goal of a distributed simulation is to enable multiple users to share a virtual environment without being physically co-located. In some instances, users participate within a shared simulation using a heterogeneous mix of simulators, and the participants may be human or robotic entities.

Although existing distributed multi-user simulations can be used for training within distributed operations, they do not include direct operation of, or interaction with, unmanned systems. For example, a semi-automated forces (SAF) program can generate several unmanned vehicles (UVs) following pre-planned routes, but there are no interfaces for users to change these plans or acquire live video feeds using the same interface that will be used in the field. Second, tools for simulating unmanned systems do not typically include interactions with humans or cannot work with existing distributed simulation environments. Finally, additional experimentation tools are needed to measure the workload and performance of human operators within the training environment, and existing platforms do not include them.

The work presented here discusses on-going research at the University of Central Florida's Institute for Simulation and Training, sponsored by the Army Research Laboratory, targeted at training mixed-initiative teams. It presents the design and implementation of the Mixed-Initiative Experimental (MIX) Testbed, which involves robots and humans for training in combined arms exercises. What makes MIX different from existing tools is the ability to seamlessly combine virtual robots that support the Joint Architecture for Unmanned Systems (JAUS) [8], [9] with existing High Level Architecture (HLA) [6], [7] based distributed simulations. In addition to simulated vehicles within existing environments, MIX has an Operator Control Unit for the operation of multiple unmanned vehicles during mixed-initiative exercises and training.

Section 2 of the paper provides the design of the MIX Testbed and its goals. Section 3 describes the implementation of the main components of MIX: the Multi-Operator Team Training Immersive Virtual Environment (MOT2IVE), the Unmanned Vehicle Simulator (UVSIM), and the Operator Control Unit (OCU). Section 4 describes scenarios in which MIX can be used for HRI training. Section 5 presents final discussion and future work.

2. DESIGN
To meet the challenge of training mixed-initiative teams and to fill the gaps in existing environments, MIX is comprised of three main components: an existing HLA-based distributed simulation (a federate), an unmanned vehicle simulator, and an operator control unit for unmanned vehicles. A distributed simulation is used to allow semi-automated forces, unmanned vehicles, and human participants to collaborate. Using the High Level Architecture (HLA) [6], [7], a general-purpose architecture for simulation, a federation (a collection of simulations) consisting of standalone federates is created. Different interfaces are incorporated into these federates for coordination within the simulation and for operator control.

2.1. Distributed Simulation Design
Each federate within the distributed simulation represents a different type of collaborator, such as simulated forces or a bridge to simulations supporting other protocols. The primary architecture for this simulation is HLA. Simulated unmanned vehicles incorporate a different interface for interactions with the simulation and remote command operations. The interface method for control of unmanned vehicles is the Joint Architecture for Unmanned Systems (JAUS), an open architecture defined for the research, development, and acquisition of unmanned systems [8], [9]. JAUS is an emerging standard for unmanned vehicles backed by the Society of Automotive Engineers (SAE). Using JAUS, human or autonomous operators are capable of interacting with both simulated and real UVs.

A JAUS/HLA Bridge is used to provide UVs with information from the distributed simulation and, conversely, to provide the distributed simulation with information from UVs. Figure 1 shows the connection between a real UV, a simulated UV, an operator control unit (OCU), and the other HLA-based simulators that comprise the overall simulation. With this architecture in place, the JAUS/HLA Bridge makes it possible to incorporate entities and applications supporting JAUS without any additional modifications.

Figure 1 - MIX Testbed Architecture
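To illustrate the bridging concept, the following is a minimal C++ sketch; it is not the actual MIX Testbed or JAUS/HLA Bridge code, and the type and field names are illustrative assumptions. It shows how a JAUS-style global pose report from a vehicle might be mapped into an HLA-style entity update that other federates could consume.

    #include <cstdint>
    #include <iostream>
    #include <string>

    // Illustrative stand-in for a JAUS-style global pose report
    // (field names are assumptions, not the normative JAUS layout).
    struct JausGlobalPose {
        uint16_t sourceSubsystem;   // which unmanned vehicle sent the report
        double   latitudeDeg;
        double   longitudeDeg;
        double   elevationM;
        double   yawRad;
    };

    // Illustrative stand-in for the attributes an HLA federate would
    // publish for an entity via the runtime infrastructure.
    struct HlaEntityState {
        std::string entityId;
        double      latitudeDeg;
        double      longitudeDeg;
        double      elevationM;
        double      headingRad;
    };

    // The bridge's job: map messages arriving on the JAUS side into
    // attribute updates on the HLA side, so UVs appear as ordinary entities.
    HlaEntityState BridgePoseToHla(const JausGlobalPose& pose) {
        HlaEntityState state;
        state.entityId     = "UV-" + std::to_string(pose.sourceSubsystem);
        state.latitudeDeg  = pose.latitudeDeg;
        state.longitudeDeg = pose.longitudeDeg;
        state.elevationM   = pose.elevationM;
        state.headingRad   = pose.yawRad;
        return state;
    }

    int main() {
        JausGlobalPose pose{42, 28.586, -81.199, 35.0, 1.57};
        HlaEntityState state = BridgePoseToHla(pose);
        std::cout << state.entityId << " at (" << state.latitudeDeg << ", "
                  << state.longitudeDeg << ")\n";  // the update would be sent to the RTI here
        return 0;
    }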

Within this distributed simulation, different training scenarios (such as combined arms missions) are performed. Simulated vehicles within the environment can be used for reconnaissance or other assistance based on the capabilities of the vehicle. All live and virtual participants must coordinate as a mixed team of humans and robotic vehicles to perform a given exercise.

3. IMPLEMENTATION

3.1. MOT2IVE
The Multi-Operator Team Training Immersive Virtual Environment (MOT2IVE) is a training and testing environment that provides distributed simulation capabilities. The MOT2IVE system is a deployable suite of simulators that run on identical, commercially available laptop computers. It is a three-dimensional, networked, HLA environment in which individual users participate in shared training scenarios. Each simulation can run on a single laptop, and more users may be added simply by increasing the number of laptops. The MOT2IVE system is also reconfigurable; any of the simulations may be run on any of the laptops, which allows for a large variety of training configurations.

The MOT2IVE system was developed for the Marine Corps' Deployable Virtual Training Environment and currently supports fire support team (FiST) and close air support (CAS) training. In its basic configuration, the system has one laptop dedicated to the Dismounted Infantry Virtual After Action Review System (DIVAARS) and one laptop dedicated to "master" simulation control running the Combined Distributed Mission Training System (CDMTS). Non-human entities within the simulation are instantiated by CDMTS and controlled using the Joint Semi-Automated Forces (JSAF) system. Every human participant also has voice communication through Marine Digital Voice (MDV), a voice-over-IP system.

The team positions that can be trained or tested include mortar and artillery Forward Observers (FO), Forward Air Controllers (FAC), FiST Leaders, and drivers/pilots and gunners for the following vehicles: AH-1, AV-8B, UH-1, CH-53, LAV, AAV, and M1A1. Forward Observers are trained or tested in the environment using the Forward Observer PC Simulator (FOPCSIM). Forward Air Controllers are trained or tested using the Forward Air Controller PC Simulator (FACPCSIM). FiST leaders are trained or tested with the Combined Arms Planning Tool (CAPT).

The capabilities of the training and testing simulators are encapsulated within the General Simulator (GENSIM). GENSIM may function as any of the previously mentioned vehicles or human team members. It also supports voice interaction and adjustments to the simulation environment (such as display resolution).

3.2. Simulated Unmanned Vehicle (UVSIM)
Within MIX, an unmanned vehicle simulator (UVSIM) is used to generate ground or air unmanned vehicles that support JAUS as a common interface. JAUS is a high-level interface domain for unmanned vehicles. It is a component-based, message-passing architecture that specifies standard fields and formats for communication among unmanned systems [8]. By using JAUS as the primary method for directly interacting with a simulated unmanned vehicle, it is possible to create a single operator control station that can send commands to, and receive information from, any type of unmanned system. The same operator can therefore be in charge of air, ground, or even underwater vehicles using a standard set of messages describing the desired behaviors of those assets.

Although JAUS is not required for simulation and control of an unmanned asset, its use is important because it describes procedures for command of unmanned systems and the expected behavior of those commands. Use of JAUS has also been mandated under several government programs, including the Joint Ground Robotics Enterprise [10]. It is therefore critical that MIX support JAUS, so that training users with simulated unmanned systems matches the use of real unmanned systems.

The unmanned vehicle simulation, while controlled via JAUS, is presented in the networked simulation via HLA using the JAUS/HLA Bridge application. This allows the vehicle to be seen by and to interact with other HLA federates, making it an integrated component of the distributed simulation without any additional modification to UVSIM.
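As an illustration of why a common message set matters, the sketch below uses hypothetical types (not the normative JAUS message layout or any particular implementation) to show a single control station issuing the same waypoint command to a ground asset and an air asset; only the destination address differs.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Illustrative JAUS-style component address (subsystem.node.component).
    struct Address {
        uint16_t subsystem;
        uint8_t  node;
        uint8_t  component;
    };

    // Illustrative "set global waypoint" command shared by all vehicle types.
    struct SetGlobalWaypoint {
        double latitudeDeg;
        double longitudeDeg;
        double desiredSpeedMps;
    };

    // A control station needs only one send routine because every vehicle,
    // ground or air, accepts the same command format.
    void SendWaypoint(const Address& to, const SetGlobalWaypoint& cmd) {
        std::cout << "To " << to.subsystem << "." << int(to.node) << "."
                  << int(to.component) << ": go to (" << cmd.latitudeDeg
                  << ", " << cmd.longitudeDeg << ") at "
                  << cmd.desiredSpeedMps << " m/s\n";
    }

    int main() {
        std::vector<Address> assets = {
            {10, 1, 33},   // e.g., a simulated ground vehicle
            {11, 1, 33}    // e.g., a simulated air vehicle
        };
        SetGlobalWaypoint wp{28.59, -81.20, 5.0};
        for (const Address& a : assets) SendWaypoint(a, wp);  // same message to both
        return 0;
    }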

UVSIM is implemented in C++ using Open Scene Graph [11] and the JAUS++ library created by the ACTIVE Laboratory. JAUS++ is an open source C++ implementation of JAUS that provides a complete message set, example components, and a vehicle simulator. UVSIM can be run in a distributed manner, with multiple vehicles on a single laptop or across multiple networked machines, as shown in Figure 2.

Figure 2 - UVSIM

Each unmanned vehicle supports a common set of components, which include a primitive driver, global vector driver, global waypoint driver, global pose sensor, and visual sensor. The primitive driver component is used for direct control of a vehicle's driving components; for example, on a ground vehicle like the XUV, the primitive driver controls the gas pedal, brake, and steering wheel. The global vector driver is used to make a platform follow a specific yaw, pitch, roll, or elevation. The global waypoint driver receives waypoint commands and navigates the unmanned vehicle to a specific waypoint or through a mission of multiple waypoints. The global pose sensor provides position and attitude information about the vehicle. Finally, the visual sensor component generates a live video feed from any cameras a robot may have. In addition to live video from the robot, the visual sensor can perform Reconnaissance, Surveillance, and Target Acquisition (RSTA) scans. A RSTA scan generates a panoramic view of the scene between two compass headings and identifies targets within the scene (see Figure 3).
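A minimal sketch of the kind of logic a global waypoint driver performs is shown below. The flat-earth coordinates, arrival radius, and class names are assumptions for illustration, not the UVSIM implementation: the driver computes a bearing to the active waypoint and advances to the next one once the vehicle is within an arrival tolerance.

    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Waypoint { double x, y; };          // local coordinates, meters
    struct Pose     { double x, y, yawRad; };  // supplied by the global pose sensor

    class WaypointDriver {
    public:
        explicit WaypointDriver(std::vector<Waypoint> mission)
            : mission_(std::move(mission)) {}

        bool Done() const { return index_ >= mission_.size(); }

        // Returns the heading to steer toward the active waypoint and
        // advances to the next waypoint once within the arrival radius.
        double DesiredHeading(const Pose& pose) {
            if (Done()) return pose.yawRad;          // hold current heading
            const Waypoint& wp = mission_[index_];
            double dx = wp.x - pose.x;
            double dy = wp.y - pose.y;
            if (std::sqrt(dx * dx + dy * dy) < kArrivalRadiusM)
                ++index_;                            // waypoint reached, move on
            return std::atan2(dy, dx);               // bearing to the waypoint
        }

    private:
        static constexpr double kArrivalRadiusM = 3.0;  // assumed tolerance
        std::vector<Waypoint> mission_;
        std::size_t index_ = 0;
    };

    int main() {
        WaypointDriver driver({{10, 0}, {10, 10}});
        Pose pose{0, 0, 0};
        double heading = driver.DesiredHeading(pose);  // steer toward first waypoint
        (void)heading;
        return 0;
    }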

Figure 3 - Example RSTA Image

Using UVSIM it is possible to simulate multiple ground or air unmanned vehicles that can be controlled remotely via a joystick, follow pre-planned routes, and provide live video streams and target identification.

3.3. Operator Control Unit
The Operator Control Unit (OCU) interface is used by human participants to operate unmanned vehicles and is modeled after the Tactical Control Unit developed by the Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) [12]. All available unmanned assets are presented to the user through a graphical user interface. With JAUS as its primary method of communication, the OCU can interact with any unmanned vehicle that supports JAUS, whether real or virtual. The OCU can be easily configured through XML settings files: GUI elements such as dialog panels and buttons can be modified, and the events they invoke can be reassigned.

Using the OCU, the operator can identify all vehicles on the network and query information such as position, orientation, velocity, the waypoint currently being traveled to, and other capabilities. It is also possible to send different commands and missions to an unmanned vehicle. A human interface device (HID) is used for direct manual operation of an unmanned vehicle; input from the device is converted to JAUS messages that are transmitted directly to the asset. During teleoperation, live video from the unmanned vehicle is provided to the user for navigation, in addition to a top-down "god's-eye" view of the map.

When not directly driving a vehicle, the user can plan a series of GPS waypoints for a vehicle to navigate by building a mission plan. Along this mission plan, the user can designate that a RSTA scan be performed at a waypoint and how large the scan area should be. Once a mission plan is constructed, the user can send it to an unmanned asset and monitor its performance while it follows the designated route (Figure 4). Mission plans can also be saved and reused in future exercises.

Figure 4 - Operator Control Unit (OCU)
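The description above suggests that a mission plan is essentially an ordered list of GPS waypoints, some of which carry RSTA scan instructions. The sketch below captures that structure with hypothetical field names; the OCU's actual XML schema is not given in the paper.

    #include <iostream>
    #include <string>
    #include <vector>

    // One stop along a planned route. If performRstaScan is true, the vehicle
    // sweeps its camera between the two compass headings at this waypoint.
    struct MissionWaypoint {
        double latitudeDeg;
        double longitudeDeg;
        bool   performRstaScan;
        double scanStartHeadingDeg;   // scan area boundaries, degrees (assumed units)
        double scanStopHeadingDeg;
    };

    struct MissionPlan {
        std::string vehicleName;            // which unmanned asset receives the plan
        std::vector<MissionWaypoint> route;
    };

    int main() {
        MissionPlan plan;
        plan.vehicleName = "SimulatedXUV-1";
        plan.route.push_back({28.591, -81.201, false, 0.0, 0.0});     // transit leg
        plan.route.push_back({28.594, -81.198, true, 45.0, 135.0});   // scan here
        std::cout << plan.vehicleName << " has " << plan.route.size()
                  << " waypoints\n";   // the plan would then be sent as JAUS messages
        return 0;
    }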

When a RSTA scan is sent back to the OCU from a UV, the user can switch to the RSTA screen of the OCU. This screen maintains a list of all RSTA scans generated. For each RSTA image, it is possible to select identified targets and add markers to the overhead map (Figure 5).

Figure 5 - RSTA Screen

As a companion interface to the OCU for experimentation, the System Indicator and Voice Communications window is available to create secondary and tertiary multitasking roles for the user (Figure 6). Reminiscent of the system gauge interface used in the experiments of Dixon and Wickens [13], the System Indicator panel produces quantitative data on the user's response time during multitasking. The Voice Communication panel allows the user to answer prerecorded questions qualitatively through a text field. Full-duplex communication between the OCU and the System Indicator allows scripted audio and visual cues to be triggered not only temporally but also spatially via event triggering (such as when an unmanned vehicle reaches a waypoint), making for a wide range of testbed configurations.

Figure 6 - System Indicator
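As a rough illustration of the quantitative data the System Indicator can produce, the following hypothetical sketch (not the MIX implementation) timestamps a scripted cue when it is presented and computes the user's response time when the cue is acknowledged.

    #include <chrono>
    #include <iostream>
    #include <string>

    using Clock = std::chrono::steady_clock;

    // Records when a scripted audio/visual cue is shown and how long the
    // user takes to acknowledge it during multitasking.
    class CueTimer {
    public:
        void CuePresented(const std::string& cueName) {
            cueName_ = cueName;
            presentedAt_ = Clock::now();
        }

        // Returns response time in milliseconds; would be written to the log.
        long long UserResponded() const {
            auto elapsed = Clock::now() - presentedAt_;
            return std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
        }

    private:
        std::string cueName_;
        Clock::time_point presentedAt_;
    };

    int main() {
        CueTimer timer;
        timer.CuePresented("system-gauge-drift");   // e.g., triggered when a UV reaches a waypoint
        // ... user notices the cue and presses the acknowledge button ...
        std::cout << "response time: " << timer.UserResponded() << " ms\n";
        return 0;
    }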

4. SCENARIOS
With all the major components of MIX combined, many different scenarios can be generated for training and experimentation with mixed-initiative teams. The immediate use of unmanned vehicles for reconnaissance tasks is obvious. Using the OCU and UVSIM, users working within a mixed-initiative team can plan multiple routes for both ground and air vehicles to traverse. The live video feeds and aided target recognition available with these vehicles provide instant target information that can be relayed to participants within a training scenario.


Using the MIX Testbed, experiments can be conducted to identify how participants use unmanned vehicles within mixed-initiative teams. For example, experiments can determine how many unmanned vehicles can be monitored before performance begins to decrease. Due to the distributed nature of MIX, it is possible to run multiple OCU applications, so a second operator can monitor the performance of unmanned vehicles being controlled by other OCU stations. Experiments can also be performed to determine the best procedures for transferring control of unmanned assets between operators.

Not only can unmanned vehicles be operated by users to supply information to other members of the mixed-initiative team, but the ability to bridge HLA simulations with JAUS-compliant systems also allows developers to improve the behaviors of unmanned vehicles. Using JSAF it is possible to generate different scenarios for interactions between unmanned vehicles and simulated entities, allowing developers to improve the intelligence of unmanned systems. Since JAUS is the primary method for control of and communication with an unmanned vehicle, developers can export behaviors developed for a simulated vehicle to a real vehicle without making any changes.

The OCU offers extensive logging capabilities for use in performance analysis. In addition to logging user-generated interactions, the OCU also captures screen shots to produce a video log. This video log is used to correlate the actions (or inactions) of a user with what was presented on screen. Actions performed using the System Indicator companion interface are also logged as user events.
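A small sketch of how timestamped user events might be correlated with the screenshot video log is shown below; the structures and field names are assumptions for illustration only, not the OCU's logging format.

    #include <algorithm>
    #include <cmath>
    #include <string>
    #include <vector>

    // One user-generated interaction captured by the logger.
    struct UserEvent {
        double      timeSec;     // simulation time of the action
        std::string description; // e.g., "sent mission plan to UV-2"
    };

    // One frame of the screen-capture video log.
    struct ScreenFrame {
        double      timeSec;
        std::string imagePath;   // path of the saved screenshot
    };

    // Find the screenshot closest in time to an event, so an analyst can see
    // what the operator was looking at when the action occurred.
    const ScreenFrame& FrameForEvent(const UserEvent& ev,
                                     const std::vector<ScreenFrame>& frames) {
        return *std::min_element(frames.begin(), frames.end(),
            [&](const ScreenFrame& a, const ScreenFrame& b) {
                return std::fabs(a.timeSec - ev.timeSec) <
                       std::fabs(b.timeSec - ev.timeSec);
            });
    }

    int main() {
        std::vector<ScreenFrame> frames = {{0.0, "f0.png"}, {1.0, "f1.png"}, {2.0, "f2.png"}};
        UserEvent ev{1.2, "selected RSTA target"};
        const ScreenFrame& f = FrameForEvent(ev, frames);
        (void)f;  // f.imagePath is "f1.png", the frame nearest the event
        return 0;
    }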

5. DISCUSSION
The current implementation of MIX demonstrates the ability to introduce basic unmanned assets within a distributed simulation. These assets can be controlled by multiple users through the JAUS architecture. The ability to control multiple UVs simultaneously, or to have a single UV controlled by multiple users, is part of what enables mixed-initiative teams. A problem that remains is task automation and delegation among team members: some tasks are better suited to human team members, while others are better suited to robotic team members. The MIX Testbed provides an opportunity to explore the formation of mixed-initiative teams and the automated allocation of tasks and resources.

The UVSIM application can simulate different types of ground and air vehicles within a distributed simulation, and all of these assets provide a minimum level of functionality that can be used by the OCU. The OCU enables the creation of plans for UVs that are completely automated or accomplished through teleoperation. Because the mission plans (which include entities) are loaded dynamically from XML, the capability exists to adapt a scenario in real time. This could be accomplished by monitoring the performance of human operators and then making changes based upon what is observed. The capability for dynamic scenarios also points to the possibility of providing real-time mitigations to improve training or performance [14]. If a user is participating in a training exercise, mitigations may help correct errors and prevent negative training consequences. Moreover, if a user operating multiple UVs simultaneously becomes overloaded, mitigating the situation may lead to better performance [15][16]. Real-time mitigations involving robotic control are a topic for future investigation.
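One way such real-time adaptation could work is sketched below under assumptions not stated in the paper: the workload proxy (mean response time to recent cues) and the threshold are made up, and the mitigation simply hands one vehicle back to autonomy or to another operator when the operator appears overloaded.

    #include <iostream>
    #include <numeric>
    #include <vector>

    // A made-up workload proxy: average response time (seconds) to recent cues.
    double MeanResponseTime(const std::vector<double>& recentResponsesSec) {
        if (recentResponsesSec.empty()) return 0.0;
        double sum = std::accumulate(recentResponsesSec.begin(),
                                     recentResponsesSec.end(), 0.0);
        return sum / recentResponsesSec.size();
    }

    // If the operator appears overloaded, hand one vehicle back to autonomy
    // (or to another operator) as a mitigation.
    int MitigatedVehicleCount(int assignedVehicles, double meanResponseSec) {
        const double kOverloadThresholdSec = 4.0;   // assumed threshold
        if (meanResponseSec > kOverloadThresholdSec && assignedVehicles > 1)
            return assignedVehicles - 1;
        return assignedVehicles;
    }

    int main() {
        std::vector<double> responses = {3.5, 5.2, 6.1};      // operator slowing down
        int vehicles = MitigatedVehicleCount(3, MeanResponseTime(responses));
        std::cout << "operator now controls " << vehicles << " vehicles\n";
        return 0;
    }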

Although MIX provides the functionality for performing human-robot interactions, future work is planned to provide additional means of experimentation. For example, the aided target recognition of RSTA scans generated by a UV is currently 100% accurate. Changes will be made to make the reliability of target recognition variable, allowing experiments in which participants perform tasks with vehicles at different levels of reliability.

Not only can target recognition reliability be varied, but the reliability of communication with a UV can also be altered. Future work will include the ability for experimenters to create areas or timed events within a scenario where communication with one or more UVs is lost. Work needs to be done to identify what the behavior of a UV should be if it loses communication, and what impact this has on a participant's expectations and evaluation of a UV.

In conclusion, the MIX Testbed provides a complete environment in which mixed-initiative teams can train. Users can interact within the environment as part of a FiST team supported by unmanned vehicles. There are also opportunities to add features for generating realistic scenarios that include communication loss and varying reliability of autonomous vehicles.

ACKNOWLEDGEMENTS The authors acknowledge Don Kemper, Lee Sciarini and Carl Anglesea for their contributions regarding the effort described here. Due to export control restrictions, the authors are unable to publish some screen images. This work is supported, in part, by the Office of Naval Research and also by the US Army Research Laboratory under Cooperative Agreement W911NF-06-2-0041. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ARL or the US Government. The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

REFERENCES

[1] Joint Chiefs of Staff, "Net-Centric Operational Environment Joint Integrating Concept," Version 1.0, Washington, DC: Office of Primary Responsibility, 2005; http://www.dtic.mil/futurejointwarafare/concepts/netcentric_jic.pdf.

[2] "The National Defense Authorization Act for Fiscal Year 2001," Public Law 106-398, Section 220, 2000.

[3] E. Salas, D.E. Sims, and C.S. Burke, "Is There a Big Five in Teamwork?" Small Group Research, vol. 36, no. 5, 2005, pp. 555-599.

[4] J.E. Driskell, P.H. Radtke, and E. Salas, "Virtual Teams: Effects of Technological Mediation on Team Performance," Group Dynamics: Theory, Research, and Practice, vol. 7, no. 4, 2003, pp. 297-323.

[5] M. Hearst (editor), J. Allen, C. Guinn, and E. Horvitz, "Mixed-Initiative Interaction: Trends & Controversies," IEEE Intelligent Systems, 1999, pp. 14-23.

[6] IEEE, "1516-2000 IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Framework and Rules," 2000; http://shop.ieee.org/ieeestore/Product.aspx?product_no=SH94882.

[7] J.S. Dahmann, "High Level Architecture for Simulation," Proceedings of the 1st International Workshop on Distributed Interactive Simulation and Real-Time Applications (DIS-RT), 1997, p. 9.

[8] JAUS Working Group, "Joint Architecture for Unmanned Systems (JAUS) Reference Architecture," Version 3.0, March 2007; http://www.jauswg.org.

[9] AS-4 Unmanned Systems Committee, "AIR5664 – JAUS History and Domain Model," March 2006; http://www.sae.org/technical/standards/AIR5664.

[10] Department of Defense, "Joint Ground Robotics Enterprise," January 2008; http://www.jointrobotics.com.

[11] Open Scene Graph, "Open Scene Graph," January 2008; http://www.openscenegraph.org.

[12] General Dynamics, "Tactical Control Unit (TCU)," January 2008; http://www.gdrs.com/robotics/capabilities/scalable.asp.

[13] S.R. Dixon and C.D. Wickens, "Automation Reliability in Unmanned Aerial Vehicle Control: A Reliance-Compliance Model of Automation Dependence in High Workload," Technical Report AHFD-03-17/MAAD-03-1, University of Illinois, Aviation Research Lab, Savoy, IL, 2003.

[14] D. Nicholson, K. Stanney, S. Fiore, L. Davis, C. Fidopiastis, N. Finkelstein, and Arnold, "An Adaptive System for Improving and Augmenting Human Performance," in Foundations of Augmented Cognition: Proceedings of the Augmented Cognition International Conference 2006 (San Francisco, CA, October 15-17, 2006), Vol. 2, D. Schmorrow, K. Stanney, and L. Reeves, Eds., Strategic Analysis, Inc., Arlington, VA, 2006, pp. 215-222.

[15] J.Y.C. Chen and P.I. Terrence, "Effects of Tactile Cueing on Concurrent Performance of Military and Robotics Tasks in a Simulated Multi-Tasking Environment," Ergonomics, in press.

[16] J.Y.C. Chen and C.T. Joyner, "Concurrent Performance of Gunner's and Robotics Operator's Tasks in a Multi-Tasking Environment," Military Psychology, in press.
