“Seamless Autonomy”: Removing Autonomy Level Stratifications

Douglas A. Few∗, David J. Bruemmer†, Curtis W. Nielsen†, William D. Smart∗

∗ Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63130, United States
{dafew,wds}@cse.wustl.edu

† Idaho National Laboratory, Idaho Falls, ID 83415-3779, United States
{david.bruemmer,curtis.nielsen}@inl.gov
Abstract—The dispatching of robots into mission-critical environments is becoming more and more commonplace as hardware evolves to the level of ruggedness these scenarios demand. Despite the advances in hardware platforms, control strategies that support effective human-robot interaction lag behind. Researchers at the Idaho National Laboratory (INL) and Washington University in St. Louis have been working to bridge the gap between current robotic hardware readiness and the limited usability of the systems that control it. In 2007 the INL successfully deployed commercial off-the-shelf (COTS) robots targeted at military and hazmat-team use (Figure 1), outfitted with an intelligence payload, in a series of chemical, biological, radiological, nuclear, explosive (CBRNE) detection exercises with domain experts from the US Navy Explosive Ordnance Disposal Command and the US Army's Chemical School at Ft. Leonard Wood, Missouri. This paper examines the primitive behaviors that comprise the intelligent navigation payload used in the exercises. It also discusses “Seamless Autonomy”, a robot control strategy that blends the user's knowledge of the task requirements with the robot's interpretation of the local environment, providing a more appropriate task allocation between human and robot. Seamless autonomy simplifies the user's interaction with the system by removing the need for the user to understand the individual behaviors or when they should be used; instead, the user is able to think in terms of task goals.
I. INTRODUCTION

An increasing number of researchers from the fields of human factors, cognitive science, and robotics are working to develop HRI methods for remotely operating mobile vehicles in hazardous environments (see Murphy [16] for an overview). Casper et al. present a post-hoc analysis of the rescue efforts at the World Trade Center in September 2001, where robots were used for the first time to assist in real, un-staged search and rescue operations [5]. Burke et al. present a field study on HRI in an urban search and rescue training task [4]. Yanco et al. [20] present an analysis of the 2002 American Association for Artificial Intelligence (AAAI) Robot Rescue Competition, where robot systems competed in a mock search and rescue operation. In each study, the authors noted how difficult it was for operators to navigate because they could not understand the robot's position and/or perspective within the remote environment. Collectively, these findings suggest that to move beyond the limitations discussed in the literature, we must have interface methods that more effectively promote a shared understanding of the environment and task. Researchers at Washington University in St. Louis and the Idaho National Laboratory (INL) have developed a set of new and innovative tools for performance-driven
Fig. 1. FMI Talon and iRobot Packbot with Intelligence Payload
interactions between autonomous robots and human operators, peers, and supervisors. This research is aimed at developing a control architecture that interleaves multiple levels of human intervention into the functioning of a robotic system, which will, in turn, learn to scale its own level of initiative to meet whatever level of input is handed down. For a robotic system to gracefully accept a full spectrum of intervention possibilities, interaction issues cannot be handled merely as augmentations to a control system. Instead, opportunities for operator intervention must be incorporated as an integral part of the robot's intelligence. The robot must be imbued with the ability to accept different levels and frequencies of intervention seamlessly. Moreover, for autonomous capabilities to evolve, the robot must be able to recognize when help is needed from an operator and/or other robots and to learn from these interactions. The effort has led to novel approaches to several mobile robot challenges and a number of objective findings from frequent human-robot interaction experiments throughout the design cycle. One significant finding is that robot autonomy can increase task performance on a variety of metrics, from human workload to an overall task performance composite that includes time to completion, joystick usage, and navigational errors [3], [7]. This notion runs counter to the commonly held assertion that increases in autonomy result in a trade-off between task performance and human workload [11], [19]. Despite the demonstrated performance gains offered by mixed-initiative control, however, operators often experience confusion and/or frustration regarding how and when to change the autonomy mode. In cases where robot operators are presented with a choice, they consistently choose interaction modes that are easy to understand [15].
The result of that choice is often a struggle with the joystick as operators try to command the robot in terms of mission goals while the robot refuses, based on the obstacle avoidance algorithm's response to environmental features. To alleviate human workload and confusion we propose seamless autonomy. Seamless autonomy uses metaphors that frame robot tasking in task-centric terms the human already understands. Each seamless autonomy icon is linked to a specific payload capability: an eyeball icon indicates where the human wants video, a nose icon indicates where the human wants the robot to perform a chemical "sniffing" behavior, and a hand icon indicates where in 3D space the user would like the robot's end effector placed. Everything else, including the movement of the robot platform and arm joints, path planning, and obstacle avoidance, is left to the robot behaviors. The CBRNE exercises provided the opportunity to investigate the value of the new control strategy in the hands of domain experts as they remotely deployed robot systems in search of chemical and radiological sources.

II. SYSTEM DESIGN

The CBRNE experiments are the latest in a series of human-robot interaction experiments performed by the INL and Washington University, exploring the efficacy of combining robust primitive behaviors into intelligent control strategies [8]. The past experiments provide compelling evidence that what is needed is not further stratification of our dynamic autonomy systems, as Baker et al. suggest [1], but rather a combinatorial approach in which primitive capabilities are wrapped into new interaction methods. The remainder of this section articulates the details of the primitive behaviors and representation strategies that enable the new interaction method deployed during the CBRNE exercises.

A. Guarded Motion and Obstacle Avoidance

Potential field obstacle avoidance enjoys popularity in the mobile robotics community for its ease of implementation and its conceptual simplicity. However, most seasoned behavior developers understand first-hand the limitations of the approach; see Koren and Borenstein for a formal analysis [13]. In particular, potential field obstacle avoidance methods are subject to:
1) Trap situations due to local minima (cyclic behavior)
2) No passage between closely spaced obstacles
3) Oscillations in the presence of obstacles
4) Oscillations in narrow passages

To mitigate these performance detriments, the INL Robotic Intelligence Kernel (RIK) uses an egocentric approach to guarded motion and obstacle avoidance. Range data from the robot's available range sensors (e.g., laser, IR, and/or sonar) are clustered into bins representing regions around the robot. Each range bin is discretized into a set R = {r_1, r_2, ..., r_n} of range abstractions using the discretizing function r_i = f(G, C, X_i), where G is a geometric compensation for the robot's physical shape, C is an acceleration/deceleration coefficient determined from the robot's kinematic design, and X_i is the robot's state at time i. By accounting for the robot's physical size, acceleration/deceleration model, and current state, the calculated event horizon is such that the robot maintains the ability to stop within inches of any detected obstruction. Changes in the robot's translational and rotational speed occur by applying the family of events in R to a decision tree that calculates the actuator state transition from time t_i to t_{i+1} by minimizing the probability of collision consistent with user input or task goals.

Our egocentric approach provides predictability and ensures minimal interference with the operator's control of the vehicle. If the robot is being driven near an obstacle rather than toward it, the guarded motion will not stop the robot, but may slow its speed consistent with the requirement of being able to stop prior to collision if the user input or task demands change by the next time step. In contrast, in potential field methods of obstacle avoidance, the robot state is determined by the convergence of fields into a vector representing the path to the goal. As Koren and Borenstein point out [13], the arrangement of obstacles can produce a local minimum that causes the robot to get caught in a cyclic pattern, making no progress toward the goal. Regardless of the obstacle avoidance method, complex environments can pose serious challenges. With our egocentric approach the robot tracks its state over time. If it is determined that progress is not being made, or that the robot is caught in a cyclical series of event-based states, the algorithm can trigger new events that more aggressively attempt to extricate the robot from the cycle, given the past states and an understanding of the uncertainty model for each of the robot's sensors. With potential field methods the robot state depends on the convergence of fields attributed to the world, and it is not clear how to cleanly manipulate the world so that the potentials correspond to a vector that extricates the robot from the tight spot.

The final benefit of the egocentric approach to obstacle avoidance is that other perceptions can be fed into the set R, making it easy to derive sensor-driven detection and avoidance algorithms based on the same obstacle avoidance strategies. For instance, radiological detection sensors can be saturated if exposed to radiation levels greater than their maximum detection range, and a saturated sensor must wait some indeterminate period before it can once more provide accurate readings. Using our method of obstacle avoidance, radiation readings greater than the sensor-saturation threshold can be mapped to events in R, causing the robot to navigate away from saturation conditions and preserve sensor data quality.
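To make the range-abstraction idea concrete, the following Python sketch clusters range readings into egocentric bins and scales the commanded speed so the robot can always stop short of an obstruction. It is an illustration of the approach described above, not the RIK implementation; the bin count, geometric compensation, and deceleration coefficient are assumed values.

```python
import math

N_BINS = 16          # illustrative: one bin per 22.5 degrees around the robot
ROBOT_RADIUS = 0.35  # G: assumed geometric compensation for the platform (m)
DECEL_COEFF = 1.5    # C: assumed acceleration/deceleration coefficient

def bin_ranges(readings):
    """Cluster (bearing, distance) returns from laser/IR/sonar into egocentric
    bins, keeping the closest return in each bin. Other perceptions (e.g. a
    saturated radiation sensor) could be written into the same bins."""
    bins = [float("inf")] * N_BINS
    for bearing, dist in readings:
        idx = int(((bearing % (2 * math.pi)) / (2 * math.pi)) * N_BINS) % N_BINS
        bins[idx] = min(bins[idx], dist)
    return bins

def discretize(distance, speed):
    """r_i = f(G, C, X_i): map a raw distance to a range abstraction using the
    robot's size (G), a stopping-distance model (C), and its current speed (X_i)."""
    event_horizon = ROBOT_RADIUS + DECEL_COEFF * speed  # distance needed to stop
    if distance <= event_horizon:
        return "stop"
    if distance <= 2.0 * event_horizon:
        return "slow"
    return "clear"

def guarded_speed(requested_v, heading_bin, bins, speed):
    """Scale the requested translational speed so the robot can stop short of
    any obstruction in its direction of travel."""
    state = discretize(bins[heading_bin], speed)
    if state == "stop":
        return 0.0
    if state == "slow":
        return 0.5 * requested_v
    return requested_v
```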
B. Waypoint Navigation

User navigation intentions are specified by the operator in terms of waypoints. The operator places one or more iconic waypoints in the graphical interface, and the RIK determines the best low-level commands to cause the robot to follow the specified path. Currently, we are able to have the robot go to a single waypoint, follow a path of waypoints, patrol a region with a perimeter specified by a set of waypoints, retro-traverse along a previously-taken path, and use a path planning algorithm to return a list of waypoints through a cluttered environment. Our previous studies [7], [8] have shown waypoints to be an effective abstraction for use in a variety of mobile robot control tasks.

The abstraction of waypoints is particularly well-suited to our purposes. It allows the operator to control the robot at the task level, without having to be aware of the low-level details of the system. This is important, since we cannot expect users of robot control systems to be experts in robotics. In fact, not only can we expect them not to have a model of, for example, the kinematics of the robot, we can often expect them to have the wrong model. It might be reasonable, for instance, to assume that the robot steers like a car (with Ackerman kinematics) if a car is the only system the operator is familiar with. Using waypoints as a control abstraction allows us to sidestep this problem by giving the operator an interface that is intuitive, relates to their own experiences in moving about the world, and is easy to connect to the underlying path-planning and navigation routines. Since waypoints can be placed and moved arbitrarily, the operator does not have to worry about the non-holonomic nature of most robots. This allows her to concentrate on the task rather than on the details of implementation. For most of the scenarios that we are considering, we argue that fine control over the low-level systems of the robot is not only unnecessary, but actually harms performance [7].

Once placed, waypoints can be annotated with additional icons requesting specific robot actions. For example, dragging the icon of a nose to a specific waypoint might signal the activation of a chemical sensor there. Again, the user is dealing with an abstraction that they are, presumably, familiar with from the traditional desktop computer interface metaphor. Waypoint navigation is also well-studied in robotics, allowing us to implement a robust path-planning and navigation system to move the robot along the specified path. Since the whole length of the path is known in advance, in contrast to direct tele-operation where only the current and past actions are known, we can calculate sequences of low-level actions that are optimal in terms of time, distance, or some other metric of concern. Again, our previous studies have shown that this approach increases task efficiency and reduces unwanted events such as collisions with the environment [7].

The placement of the waypoints also allows us to select the most appropriate mechanism to control the robot's movements. If a single waypoint is specified close to the current location, perhaps a simple PD controller is best. If a sequence of waypoints is given through an already-mapped area, A∗ path-planning might be appropriate. Since we can autonomously determine the best underlying mechanisms to apply, we do not need to ask the operator for assistance. This allows us to smoothly and continuously alter the level of autonomy in the system in response to the needs of the task, a process that we call seamless autonomy (see subsection D).
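The mechanism selection described above might look roughly like the following sketch, in which a single nearby waypoint is handled by a simple PD heading controller and a longer sequence through a mapped area is handed to a planner. The distance threshold, gains, and function names are illustrative assumptions rather than the system's actual API.

```python
import math

NEARBY_THRESHOLD = 1.5  # m; assumed cutoff for "close enough for a simple controller"

def heading_pd(robot_pose, waypoint, prev_err, kp=1.2, kd=0.3):
    """Simple PD controller on heading error toward a single nearby waypoint."""
    rx, ry, rtheta = robot_pose
    wx, wy = waypoint
    err = math.atan2(wy - ry, wx - rx) - rtheta
    err = math.atan2(math.sin(err), math.cos(err))  # wrap error to [-pi, pi]
    return kp * err + kd * (err - prev_err), err

def select_controller(robot_pose, waypoints, map_available):
    """Pick the underlying control mechanism from waypoint placement;
    the operator never has to make this choice."""
    rx, ry, _ = robot_pose
    wx, wy = waypoints[0]
    if len(waypoints) == 1 and math.hypot(wx - rx, wy - ry) < NEARBY_THRESHOLD:
        return "pd_controller"
    if map_available:
        return "a_star_planner"   # plan through the already-mapped area
    return "waypoint_follower"    # fall back to sequential waypoint following
```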
Fig. 2. 3D Robot Control Interface
C. 3D Visualization

True teamwork requires a shared model of the environment and task, and the lack of an effective shared model has been a significant impediment to having humans and intelligent robots work together. To support a dynamic sharing of roles and responsibilities, the RIK employs a representation that allows both the human and the robot to reason spatially about the world and to understand the spatial perspective of the other. Understanding the perspective of the robot allows the human to predict robot behavior; understanding the perspective of the human enables the robot to interpret and infer intentionality from human tasking. Rather than depend solely on the transmission of live video images, the INL Operator Control Interface (OCI) utilizes the open graphics library (OpenGL) to create a 3D, computer-game-style representation of the real world, constructed on-the-fly, that promotes situation awareness and efficient tasking. The virtual 3D component has been developed by melding technologies from the INL, Brigham Young University [18], and Stanford Research Institute (SRI) International [10]. Data for the dynamic representation are gathered using scanning lasers, sonar, and infrared sensors to create a clear picture of the environment even when the location is dark or obscured by smoke or dust.

Figure 2 presents the 3D display presented to operators. The representation shows not only walls and obstacles but also other things that are significant to the operator. The operator can insert items (people, hazardous objects, etc.) from a pull-down menu to annotate what was seen and where. If video is available from the robot it can be projected into the 3D representation, and still shots from the video can be “dropped” at the corresponding location. While not required for operation, if floor plans or overhead imagery are available they can be placed as a ground layer in the OCI. In this way, the representation is a collaborative workspace that supports virtual and real elements supplied by both the robot and the operator. The 3D representation also maintains the size relationships of the actual environment, helping the operator to understand the relative position of the robot in the real world.
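As a rough illustration of the collaborative workspace described above, the following sketch shows one possible data structure that holds robot-sensed geometry alongside operator-supplied annotations in a common frame. The class and field names are assumptions made for illustration; they are not the OCI's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Annotation:
    """An operator-inserted item (person, hazardous object, video still)
    placed at a position in world coordinates."""
    kind: str                          # e.g. "person", "hazardous_object", "video_still"
    position: Tuple[float, float, float]
    note: str = ""

@dataclass
class SharedWorldModel:
    """Collaborative workspace: robot-sensed geometry plus human-supplied elements."""
    occupied_cells: set = field(default_factory=set)    # (x, y) cells from laser/sonar/IR
    annotations: List[Annotation] = field(default_factory=list)
    floor_plan: str = ""                                 # optional a priori ground layer

    def add_scan(self, cells):
        """Fuse newly sensed occupied cells into the map built on-the-fly."""
        self.occupied_cells.update(cells)

    def annotate(self, kind, position, note=""):
        """Let the operator drop an item into the shared representation."""
        self.annotations.append(Annotation(kind, position, note))
```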
Fig. 3. Task Input Profiles: user input required to reach a goal under (a) teleoperation, where constant input is required along the robot's path, (b) shared control, where the user supplies a task input path but must remain attentive, and (c) seamless autonomy, where the user specifies the task input path, further input is optional, and the robot's planned path is displayed.
D. Seamless Autonomy

Seamless autonomy replaces the teleop, guarded-teleop, shared control, and autonomous mode stratifications typical of many shared-control robotic systems with a single interaction method. It dynamically introduces and concludes the use of primitive robot behaviors based on the task specification and the progress being made towards task completion. With seamless autonomy the operator no longer provides direct input to the robot; instead, the operator controls an icon within the robot control interface to specify goals to the robot. The robot leverages its algorithmic capabilities (e.g., obstacle avoidance, guarded motion, simultaneous localization and mapping (SLAM), path planning, and waypoint navigation) to reason about how best to meet the task objectives given the environmental constraints.

Decoupling the human input from the robot actuators provides a number of advantages. For one, the control of an icon in a graphical user interface is holonomic and not subject to the non-holonomic constraints of many physical robot designs. Moreover, control of a "target" icon is not subject to the latencies typical of remotely deployed robot systems, so the operator can easily express the task goals without the over-commitment or over-compensation errors typical of controlling systems with long or inconsistent latency periods [14]. The speed and acceleration parameters of the target are easily adjusted as dictated by task demands or user preference. For instance, when engaged in an exploratory mission it is beneficial to interact with the robot in a "decoupled teleop" fashion by adjusting the target control parameters to match the kinematic parameters of the robot. As the robot travels through the environment and builds up a map on-the-fly, the operator can make decisions while controlling the target between the robot and the boundaries of the map. If operating within a representation, either by building the map or by loading an a priori map, operators can adjust the target control parameters to levels consistent with their preference.

Constraining the target kinematics to those of the robot is not necessary, as the robot recognizes when the distance to the target exceeds a dynamic path-planning threshold. Once the threshold is exceeded, the robot utilizes a path-planning algorithm to dynamically plan a path from its current location to the target location. The user interface is then populated with a series of waypoints consistent with the path plan to the target. Representing the path plan as a waypoint list allows the human operator to see the reasoned intent of the robot based on the input it has received.
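A minimal sketch of this target-decoupling logic follows: the operator moves only a target icon, and when the robot-to-target distance exceeds the path-planning threshold the robot plans a path and pushes the resulting waypoints back to the interface. The threshold, replan period, and the planner/interface objects are assumptions for illustration, not the fielded system's parameters.

```python
import math
import time

PLAN_THRESHOLD = 2.0   # m; assumed distance beyond which the planner takes over
REPLAN_PERIOD = 1.0    # s; assumed interval for periodic replanning

class SeamlessAutonomyLoop:
    def __init__(self, planner, interface):
        self.planner = planner        # hypothetical planner with a plan(start, goal) method
        self.interface = interface    # hypothetical UI object that displays waypoint lists
        self.last_plan_time = 0.0

    def step(self, robot_xy, target_xy):
        dist = math.hypot(target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1])
        now = time.monotonic()
        if dist > PLAN_THRESHOLD and now - self.last_plan_time > REPLAN_PERIOD:
            # Beyond the threshold: plan a path and expose the robot's intent as waypoints.
            waypoints = self.planner.plan(robot_xy, target_xy)
            self.interface.show_waypoints(waypoints)
            self.last_plan_time = now
            return waypoints
        # Within the threshold: "decoupled teleop" -- head directly for the target icon.
        return [target_xy]
```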
As the robot progresses, the path plan is periodically updated and resent to the user interface. Periodic replanning provides robustness against dynamic obstacles in the environment and against static differences between the real world and the representation. Seamless autonomy allows the operator to control the robot in a far-forward manner while viewing the robot's intentions as it progresses toward the task objective.

Folding path planning into seamless autonomy control further supports ease of use by removing control constraints imposed by physical barriers. When directly controlling a robot via teleoperation or guarded teleoperation, operators must steer the robot along a rectilinear, or taxicab, path, avoiding obstacles and occlusions to reach the task goal (Figure 3a). Shared control strategies can offer a reduction in input [15] but still have a comparatively low "neglect tolerance" [6], [12], as the operator must remain attentive to the system to prevent the robot from taking a wrong turn (Figure 3b). Seamless autonomy users experience a higher "interaction efficiency" [6], since the iconic means of tasking allows operators to control the system independent of physical barriers (Figure 3c): tasking along the Euclidean path to the goal requires less user input than tasking with respect to the constraints imposed by the environment. Furthermore, neglect tolerance is increased by the articulation of the robot's intent as visualized waypoints in the user interface.

The final, and perhaps most beneficial, advantage of decoupling human input from direct robot control is that it results in a proportional-derivative (PD) control that dampens the effect of input noise and user misdirection. Imagine a robot being deployed using the decoupled-teleop method of control often used during exploration of unknown environments (refer to Figure 4). The robot is represented as R = [r_x, r_y, r_θ] and the target as T = [t_x, t_y]. Since we are performing an exploration task, we limit the target control parameters to the velocity limit of a typical small unmanned ground vehicle in both the X and Y directions. Suppose that, through communications lag, input noise, or simply errant user input, the target T is positioned off the operator's desired vector of travel for some time t_errant. With straight teleoperation the rotational command sent to the robot would be, in the worst case, the rotational velocity limit vel_rot applied for the full duration t_errant, i.e. a rotation of vel_rot * t_errant. With a naive implementation of seamless autonomy, the command s communicated to the robot is s = arctan(t_y − r_y, t_x − r_x). The effect of the errant input is reduced because any noise or errant rotational input is distributed over the distance between the robot and the target rather than applied directly to the robot's actuators, as happens with traditional methods of robot interaction.
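The damping argument can be illustrated numerically: an errant lateral displacement of the target changes the commanded heading only by the arctangent of the offset over the robot-target distance, whereas direct teleoperation applies the rotational command to the actuators for the whole errant interval. The numbers below are made up for illustration.

```python
import math

def seamless_heading(robot, target):
    """Heading command s = arctan(t_y - r_y, t_x - r_x) toward the target icon."""
    rx, ry, _ = robot
    tx, ty = target
    return math.atan2(ty - ry, tx - rx)

# Illustrative numbers: robot at the origin facing +x, target 5 m straight ahead.
robot = (0.0, 0.0, 0.0)
nominal = seamless_heading(robot, (5.0, 0.0))

# An errant input nudges the target 0.5 m sideways; the commanded heading shifts
# by only ~5.7 degrees because the error is spread over the robot-target distance.
errant = seamless_heading(robot, (5.0, 0.5))
print(math.degrees(errant - nominal))    # ~5.7

# Under direct teleoperation the same errant interval would apply the rotational
# velocity limit for its whole duration, e.g. 60 deg/s for 0.5 s = 30 degrees.
vel_rot, t_errant = math.radians(60.0), 0.5
print(math.degrees(vel_rot * t_errant))  # 30.0
```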
Fig. 4. PD Controller: under decoupled teleop the robot at [R_x, R_y] drives toward the target at [T_x, T_y] along the heading s = arctan(T_y − R_y, T_x − R_x); errant input displaces the target from the desired direction, and once the target exceeds the path-planning threshold, dynamic path planning takes over (seamless autonomy).
III. EXPERIMENTS

We have discussed many of the advantages of imparting task goals to the robot through seamless autonomy control; the following two CBRNE exercises, conducted in 2007, provided the opportunity to test the system's merit in real-world scenarios.

A. Radiological Source Detection

The first CBRNE experiment was designed for participants to detect and localize radiological sources within a large industrial facility located in the INL Critical Infrastructure Test Range Complex (CITRC). There were 19 participants in this experiment, divided into three groups of subject matter experts (SMEs) depending on their expertise in hazardous emergency response, robotics, and radiation knowledge:
• Explosive Ordnance Disposal (EOD) Soldiers: these participants had robot and domain training, including 1-2 years of regular, hands-on use of the Packbot and Talon robots in Iraq and/or Afghanistan.
• Weapons of Mass Destruction Civil Support Team (WMD-CST) personnel: these participants had domain training in radiological emergency response, but no previous robot training.
• Nuclear Engineers (NE): these participants had general domain training (expert knowledge about radiation dispersion), but no emergency response training and no prior robot training.

The experiment compared three modes of human-robot interaction using an iRobot Packbot instrumented with an AMP-50 radiation detector and the intelligent navigation payload: guarded teleoperation with audible Geiger-Mueller-style clicking proportional to the radiation level; guarded teleoperation with the 3D interface and a graphical radiation-level representation positionally located within the robot's map; and seamless autonomy with the same graphical radiation-level representation positionally located in the map as in the second condition.
The experiment was designed such that each participant used each of the three modes of human-robot interaction in a counter-balanced manner to mitigate overall learning effects. Participants were told that their task was to find two Cesium-137 radiation sources hidden in the environment and then return to the door from which they started.

B. Chemical Source Detection

The second experiment was designed for participants to detect and localize the source of an ammonia chemical hazard within underground bunkers at Ft. Leonard Wood, MO. Participants were from the Edgewood Chemical and Biological Center and consisted of 10 individuals with training in emergency response, half of whom also had significant prior robot training. This experiment compared the INL's Packbot with the Army's state-of-the-art CBRNE Unmanned Ground Vehicle (CUGV). The CUGV is a traditional iRobot Packbot EOD that has been augmented to support a variety of hazard sensors. The CUGV is operated through PuckMaster six-degree-of-freedom input devices and a variety of levers and switches; the robot is fully teleoperated, with the exception that the arm can move into predetermined positions at the click of a button. Its control interface has been augmented over a traditional Packbot interface to show the readings of the various sensors attached to the robot.

The INL robot was the same robot used in the radiological source detection experiment, with the exceptions of upgraded tracks and a RAE Systems MultiRAE Plus chemical sensor in place of the radiation sensor. The look and feel of the control interface was similar to the third condition of the first experiment: the operator was given a real-time map of the obstacles, the hazard chemical readings, and video from the robot. The robot was controlled using seamless autonomy and the target icon described in this paper.

The experiment was designed so that each participant used both the INL Packbot and the CUGV, in a simple and a complex environment, in a counter-balanced manner to mitigate overall learning effects. Participants were told that they had complete control of the robot and that it was up to them to decide how much to let the robot do and how much to do themselves. Operators were told that emphasis should be placed on minimizing the time to find the source and exit the bunker.

C. Results

A complete analysis of the described experiments can be found in Nielsen et al. [17] and Bruemmer et al. [2]. The following highlights from the formal analysis demonstrate that the improved "input efficiency" and "neglect tolerance", the removal of the latency typical of remotely deployed systems, and the PD-style buffering against errant or noisy control input improved overall performance when compared to the systems being fielded in hazardous environments today:
• 100% of participants described the use of both systems as a positive experience.
• 77% of participants preferred Seamless Autonomy to standard teleoperation.
• 92% of participants thought Seamless Autonomy would be useful in the field.
• Seamless Autonomy participants completed like tasks in 50% of the time required to complete identical tasks using the current state-of-practice system.
• 81 collisions with the environment occurred using the CUGV and teleoperation.
• 0 collisions with the environment occurred using Seamless Autonomy.

IV. DISCUSSION
Recently there has been a trend towards dynamic-autonomy robotic systems that provide multiple interaction methods from which system operators can choose [9], [1], [11]. In this paper we have presented the technical details of a behavior-based robot control architecture that merges algorithm, abstraction, and visualization using a new approach to human-robot interaction. The new method removes the need for the human to select a level of interaction and instead allows the system to initiate and orchestrate the appropriate behaviors in response to user-specified intentions. Whereas traditional approaches to dynamic or adjustable autonomy make the human responsible for correctly integrating the components of intelligence present in the system, seamless autonomy allows the robot and interface intelligence to accomplish this. Our research expresses a commitment to performance-enhancing human-robot interaction strategies composed under the behavior-based methodology, using primitive behaviors to structure elaborate systems and reduce the complexity of the associated control. Finally, we evaluated our control strategy in realistic environments, engaging subject matter experts in human-subject experiments with detailed experimental design and analysis.

ACKNOWLEDGMENTS

The authors would like to thank R. Scott Hartley from the INL for logistical support, Maj. Werkmeister and Sgt. Jones of the Maneuver Support Center (MANSCEN) for their help in organizing the experiment, and Maj. Ugarte, Steve Pranger, and Tom Anderson from TRAC Monterey for assisting with the experiment plan and the data analysis.

REFERENCES

[1] M. Baker, R. Casey, B. Keyes, and H. A. Yanco. Improved interfaces for human-robot interaction in urban search and rescue. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, vol. 3, pages 2960-2965, Oct. 2004.
[2] D. J. Bruemmer, C. W. Nielsen, and D. I. Gertman. How training and experience affect the benefits of autonomy in a dirty-bomb experiment. In Proceedings of the 2008 Conference on Human-Robot Interaction (HRI 2008).
[3] David J. Bruemmer, Douglas A. Few, Ronald L. Boring, Julie L. Marble, Miles C. Walton, and Curtis W. Nielsen. Shared understanding for collaborative control. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 35(4):494-504, 2005.
[4] J. L. Burke. Moonlight in Miami: A field study of human-robot interactions in the context of an urban search and rescue disaster response training exercise. Master's thesis, Department of Psychology, University of South Florida, 2003.
[5] J. Casper and R. R. Murphy. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 33(3):367-385, June 2003.
[6] Jacob W. Crandall and M. L. Cummings. Developing performance metrics for the supervisory control of multiple robots. In HRI '07: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pages 33-40, New York, NY, USA, 2007. ACM.
[7] Douglas A. Few, David J. Bruemmer, and Miles C. Walton. Dynamic leadership for human-robot teams. In RO-MAN 06: The 15th IEEE International Symposium on Robot and Human Interactive Communication, pages 333-334, 2006.
[8] Douglas A. Few, Christine M. Roman, David J. Bruemmer, and William D. Smart. "What does it do?": HRI studies with the general public. In RO-MAN 2007: The 16th IEEE International Symposium on Robot and Human Interactive Communication, pages 744-749, Aug. 2007.
[9] Terrence Fong, Charles Thorpe, and Charles Baur. Robot as partner: Vehicle teleoperation with collaborative control. In Alan C. Schultz and Lynne E. Parker, editors, Multi-Robot Systems: From Swarms to Intelligent Automata. 2002.
[10] D. Fox, J. Ko, K. Konolige, B. Limketkai, D. Schulz, and B. Stewart. Distributed multirobot exploration and mapping. Proceedings of the IEEE, 94(7):1325-1339, July 2006.
[11] M. Goodrich, D. Olsen, J. Crandall, and T. Palmer. Experiments in adjustable autonomy. In Proceedings of the IJCAI Workshop on Autonomy, Delegation and Control: Interacting with Intelligent Agents (IJCAI 2001), 2001.
[12] Dan R. Olsen Jr. and Stephen Bart Wood. Fan-out: Measuring human control of multiple robots. In CHI '04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 231-238. ACM, 2004.
[13] Y. Koren and J. Borenstein. Potential field methods and their inherent limitations for mobile robot navigation. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, pages 1398-1404, Apr. 1991.
[14] Jason P. Luck, Patricia L. McDermott, Laurel Allender, and Deborah C. Russell. An investigation of real world control of robotic assets under communication latency. In HRI '06: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pages 202-209, New York, NY, USA, 2006. ACM.
[15] Julie L. Marble, David J. Bruemmer, Douglas A. Few, and Donald D. Dudenhoeffer. Evaluation of supervisory vs. peer-peer interaction with human-robot teams. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04) - Track 5, page 50130.2, Washington, DC, USA, 2004. IEEE Computer Society.
[16] R. R. Murphy. Human-robot interaction in rescue robotics. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 34(2):138-153, May 2004.
[17] Curtis W. Nielsen, David I. Gertman, David J. Bruemmer, R. Scott Hartley, and Miles C. Walton. Evaluating robot technologies as tools to explore radiological and other hazardous environments. In Proceedings of the American Nuclear Society Emergency Planning and Response, and Robotics and Security Systems Joint Topical Meeting, Albuquerque, NM, 2008. In press.
[18] C. W. Nielsen and M. A. Goodrich. Comparing the usefulness of video and map information in navigation tasks. In Proceedings of Human-Robot Interaction (HRI '06), Salt Lake City, UT, 2006.
[19] B. Trouvain, H. L. Wolf, and F. E. Schneider. Impact of autonomy in multi-robot systems on teleoperation performance. In Proceedings of the AAAI Workshop on Multi-Robot Systems, 2003.
[20] H. A. Yanco, J. L. Drury, and J. Scholtz. Beyond usability evaluation: Analysis of human-robot interaction at a major robotics competition. Human-Computer Interaction, 19:117-149, 2004.