Proceedings of the 44th Hawaii International Conference on System Sciences - 2011
Network-Centric Control for Multirobot Teams in Urban Search and Rescue

Michael Lewis
University of Pittsburgh, Pittsburgh, PA 15260
[email protected]

Katia Sycara
Carnegie Mellon University, Pittsburgh, PA 15213
[email protected]
Abstract

Theories from Network-Centric Warfare (NCW) distinguish between network-centric and platform-centric orientations to sensing and control. In platform-centric control the operator(s) access and focus on data through platforms they control, a robot's camera, for example. In network-centric control the operator(s) do not access data through platforms but instead through the network, without regard to where the information originated. The conventional human role in sophisticated information-gathering systems is usually as both commander (of platforms) and perceiver. Human sensory and perceptual capabilities, however, far outstrip our abilities to process information, frequently making our perceptual function more critical to the mission. The use of human operators as "perceptual sensors" is standard practice for both UAVs and ground robotics, where humans are called upon to "process" camera video to find targets and assist in navigation. This paper traces the evolution of an experimental human-multirobot system for Urban Search and Rescue from a platform-centric implementation, in which operators navigated robots while searching for victims in streaming video, to a network-centric version in which they search recorded video asynchronously. We describe two lines of research leading to our current design. In the first we identify the navigation task as both more difficult and the limiting factor in extending control to larger robot teams. In the second we found a few strategically selected images viewed asynchronously roughly comparable in performance to continuously viewed streaming video.
1. Introduction

Alberts et al. [1] introduced the distinction between network-centric and platform-centric orientations to help explain the advantages they felt might accrue to highly networked military forces. In conventional forces a platform and its sensors operate with relative independence. A pilot, for example, would be able to see very close aircraft through his canopy, nearby aircraft through onboard radar, and hear with less
precision about more distant aircraft over his radio. A commander directing the mission would see only what was available through her own sensors plus reports coming in from other aircraft. In a network-centric version the aircraft could share their targets so each pilot would see what everyone else was sensing as well as data from his own instruments. This sharing of data cuts the bond between platform and information, allowing the commander to focus on the mission rather than "who reported what". The pilots still must fly their planes; however, this local platform-specific task is now divorced from the commander's problem of maintaining global situation awareness. Controlling multirobot teams for Urban Search and Rescue (USAR) poses similar problems. Robots must be navigated through the environment to collect data about victim locations and clear areas. The operator supervising the robots then uses these data to direct rescue efforts. Although these tasks are functionally distinct, they are typically combined, with an operator both giving navigational commands and viewing video on a robot-by-robot basis. Foraging tasks [2] such as USAR are particularly suited to this form of control because robots can operate relatively independently of one another as they search different areas. This allows a round-robin style of control in which the operator interacts with each robot in turn. Most multirobot USAR research [3,4,5] has followed this control regime, which has been described by the Neglect Tolerance (NT) model [6]. In the NT model, the period following the end of human intervention but preceding a decline in performance below a threshold is considered time during which the operator is free to perform other tasks. If the operator services other robots over this period, the measure can also provide an estimate of the number of robots that might be controlled. This regime is clearly platform-centric in that both navigation and visual search are performed by individual robots. The operator of the multirobot team has the additional difficulty of needing to continually switch between robots to practice this form of control.
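Under the NT model this estimate is often summarized as a fan-out: the number of robots one operator can serve is roughly the neglect time (how long a robot stays productive after an interaction) divided by the interaction time, plus one for the robot currently being serviced. A minimal sketch, using illustrative timing values rather than anything measured in our studies:

```python
def fan_out(neglect_time_s: float, interaction_time_s: float) -> float:
    """Estimate how many robots one operator can service in round-robin
    control under the Neglect Tolerance model: while one robot is being
    serviced, each other robot must coast on its own for no longer than
    its neglect time."""
    return neglect_time_s / interaction_time_s + 1

# Hypothetical values: a robot stays productive for ~60 s after an
# interaction, and each servicing interaction takes ~15 s.
if __name__ == "__main__":
    print(f"Estimated team size: {fan_out(60.0, 15.0):.1f} robots")  # -> 5.0
```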
At each switch he must reorient himself to the new robot's location and pose in order to make sense of what appears in the camera and how it fits into the global situation.

Figure 1. Multirobot Control System (MrCS)

Although humans are commonly viewed as the commanders, planners, and managers of complex human-machine systems, they could probably be better used playing other roles. Humans are more noteworthy for their sensory feats (10^10 audition / 10^13 vision dynamic range) than for immediate memory (3 bits) or controlled processing rates (3-4 bits/sec) [7]. Despite substantial advances in speech recognition, for example, human listeners are still needed when free speech must be extracted from a noisy environment. This pattern is repeated for cluttered visual environments, where human operators are routinely used to review and identify targets in live video feeds and/or archival photographs, or to pick faint signals off of radar, sonar, and other displays. For mobile platforms, however, this perceptual strength has usually been treated as secondary to the task of navigating and driving robots and UAVs. In a series of experiments we have examined how the multirobot foraging task might be automated to take advantage of human strengths and compensate for human weaknesses. One set of experiments investigates separating robot navigation from perceptual search, first by comparing
full and part task performance and then by automating path planning. The other experiments explore the effects on performance of performing the perceptual search task using recorded rather than live video.
2. Network-centric vs. platform-centric

The distinction between network-centric and platform-centric orientations was introduced by Alberts et al. [1] as an essential part of a doctrine of Network-Centric Warfare (NCW). According to Alberts' theory, pervasive networking of military information systems would lead to common situation awareness (SA) and emergent behavior in which military units would come to "self-synchronize". Self-synchronization would occur when units sharing the same information and understanding of the commander's intent would spontaneously coordinate in response to commonly sensed changes in the environment. Because self-synchronization could occur near the leaves of the command hierarchy, it could eliminate delays in the chain of command, leading to a more agile and effective response. Developing and maintaining common SA was believed to depend on shifting from a platform-centric orientation in which operators depended primarily on
sensors and effectors on their own platform, with low-bandwidth (such as voice) connections to other platforms, to a network-centric orientation in which the information on their displays came from all parts of the network. Because information would be both more detailed and of greater scope than that available from a single platform's sensors, uncertainty should be reduced, leading to more effective choices of action. Critics of this doctrine have argued that information overload would likely outweigh any benefits [8,9,10], that lack of pedigree would make networked information less trusted [8,9], and that operators would be less aware of their own platform and situation [9]. The case for a network-centric orientation is probably strongest for automated systems where issues involving human teamwork and social interaction [9,10] do not arise. In our system, where operators must search for victims in video coming from multiple robots, information load is already at a maximum and is reduced when information is fused in a single display. The separation of video from the platform in which it originated and the loss of SA for individual robots, however, pose potential problems for a network-centric design.
3. USARSim & MrCS The experiments reported in this paper were conducted using the USARSim robotic simulation with simulated Pioneer P2-AT robots performing Urban Search and Rescue (USAR) foraging tasks. USARSim is a highfidelity simulation of urban search and rescue (USAR) robots and environments developed as a research tool for the study of human-robot interaction (HRI) and multi-robot coordination. USARSim supports HRI by accurately rendering user interface elements (particularly camera video), accurately representing robot automation and behavior, and accurately representing the remote environment that links the operator’s awareness with the robot’s behaviors. USARSim uses Epic Games’ UnrealEngine2 [11] to provide a high fidelity simulator at low cost and also serves as the basis for the Virtual Robots Competition of the RoboCup Rescue League. Other sensors including sonar and audio are also accurately modeled. Validation data showing close agreement in detection of walls and associated Hough transforms for a simulated Hokuyo laser range finder are described in [12]. The current UnrealEngine2 integrates MathEngine’s Karma physics engine [13] to support high fidelity rigid body simulation. Validation studies showing close agreement in behavior between USARSim models and real robots being modeled are reported in [14,15,16,17] as well as agreement for a
variety of feature extraction techniques between USARSim images and camera video reported in Carpin et al. [18]. MrCS (Multi-robot Control System), a multi-robot communications and control infrastructure with accompanying user interface(s) developed for experiments in multirobot control and RoboCup competition [19], was used in these experiments. MrCS provides facilities for starting and controlling robots in the simulation, displaying multiple camera and laser output, and supporting inter-robot communication through Machinetta, a distributed multi-agent coordination infrastructure. Figure 1 shows the elements of the baseline version of MrCS. The operator selects the robot to be controlled from the colored thumbnails at the top of the screen. To view more of the selected scene shown in the large video window, the operator uses pan/tilt sliders to control the camera. The current locations and paths of the robots are shown on the Map Data Viewer (bottom right). Robots are tasked by assigning waypoints on one of the maps or through a teleoperation widget (upper right). The victim marking task requires the operator to first identify a victim from a video window, identify the robot and its pose on the map, and then locate corresponding features near the victim's location in order to mark it on the laser-generated map.
4. Human vs. automated path planning

In looking for ways to increase the human span of control over robot teams, we have conducted a series of experiments examining how aspects of the USAR task might be automated. The foraging task can be decomposed into exploration and perceptual search subtasks, corresponding to navigating an unknown space and searching for targets by inspecting and controlling onboard cameras. An initial study [20] investigated the scaling of performance with the number of robots for operators performing either the full task or only one of the subtasks, in order to identify limiting factors. If either of the subtasks scaled at approximately the same rate as the full task while the other scaled more rapidly, the first subtask could be considered the factor limiting performance. If performance were equal across the conditions or the subtasks scaled at the same rate, automation decisions would need to be based on other factors. The logic of our approach depends upon the equivalence among these three conditions. In the fulltask condition operators used waypoint control to explore an office-like environment. When victims were detected using the onboard cameras, the robot was stopped and the operator marked the victim on the map and returned to exploration. Equating the
exploration subtask was relatively straightforward. Operators were given the instruction to explore as large an area as possible with coverage judged by the extent of the laser rangefinder generated map. Because operators in the exploration condition did not need to pause to locate and mark victims the areas they explored should be slightly greater than in the fulltask condition. Developing an equivalent perceptual search condition is more complicated. The operator’s task resembles that of the payload operator for a UAV or a passenger in a train, in that she has no control over the platform’s trajectory but can only pan and tilt the cameras to find targets. The targets the operator has an opportunity to acquire, however, depend on the trajectories taken by the robots. If an autonomous path planner is used, robots will explore continuously likely covering a wider area than when operated by a human (where pauses typically occur upon arrival at a waypoint). If human generated trajectories are taken from the fulltask condition, however, they will contain additional pauses at locations where victims were found and marked providing an undesired cue. Instead, we have chosen to use canonical trajectories from the exploration condition since they should contain pauses associated with waypoint arrival but not those associated with identifying and marking victims. As a final adjustment, operators in the perceptual search condition were allowed to pause their robots in order to identify and mark the victims they discover.
4.1 Experiment 1

Forty-five paid participants, 15 in each of the three conditions, took part in the experiment. A large USAR environment previously used in the 2006 RoboCup Rescue Virtual Robots competition [19] was selected for use in the experiment. The environment was a maze-like hall with many rooms and obstacles, such as chairs, desks, cabinets, and bricks. Victims were evenly distributed within the environment. A second, simpler environment was used for training. The experiment followed a between-groups repeated measures design with the number of robots (4, 8, 12) defining the repeated measure. Participants in the fulltask condition performed the complete USAR task. In the subtask conditions they performed variants of the USAR task requiring only exploration or perceptual search. Participants in the fulltask condition followed instructions to use the robots to explore the environment and locate and mark on the map any victims they discovered. The exploration condition differed only in instructions: these operators were instructed to explore as much of the environment as possible without any requirement to locate or mark victims. From examination of area coverage, pausing, and other characteristics of trajectories in the fulltask and exploration conditions, a representative trajectory was selected from the exploration data for each size of team. In the perceptual search condition operators retained control of the robots' cameras while the robots followed the representative trajectory except when individually paused by the operator. The three conditions are summarized below:

Full task: search & exploration
Exploration: discover & map new regions
Perceptual search: identify targets in visual field

Data were analyzed using a repeated measures ANOVA comparing fulltask performance with that of the subtasks. Where a measure was inappropriate for some subtask (area covered for the perceptual search condition, for example), comparisons are pairwise rather than tripartite. Number of robots had a significant effect on every dependent measure collected (the least affected was "N switches in focus", F(2,54) = 12.6, p < .0001). The task/subtask comparisons also had effects in almost every case.

Figure 2. Victims found as a function of N robots

As shown in Figure 2, search performance in the perceptual search condition improved monotonically, albeit shallowly, while fulltask performance peaked at 8 robots and then declined, resulting in a significant N robots x Task interaction, F(2,56) = 8.45, p = .001. As Figure 3 shows, coverage for the fulltask and exploration conditions was nearly identical at 4 and 8 robots but diverged slightly at 12 robots (between groups F(1,27) = 11.43, p = .002; fulltask x N robots F(2,54) = 4.15, p = .021), with perceptual search participants continuing to improve while those in the fulltask condition declined. Workload (Figure 4) increased monotonically with N robots in all groups but was substantially lower, F(1,27) = 21.17, p < .0001, in the perceptual search condition.
Figure 3. Area explored as a function of N robots

Figure 4. Workload as a function of N robots (NASA-TLX)

While Peruch [21] and others have argued that user control of a robot is needed to develop adequate situation awareness and survey knowledge to perform foraging tasks, our data suggest active control may actually be detrimental for high-workload tasks such as multirobot control and monitoring. Our data show that exploration performance was very similar in the fulltask and exploration conditions as N robots increased, indicating that the cognitive resources freed up by eliminating perceptual search in the exploration condition provided very little advantage to operators who were already devoting the lion's share of their efforts to exploration. Victim-finding performance in the perceptual search condition, by contrast, increased as robots were added. This improvement was, if anything, underestimated in our experiment due to a ceiling effect caused by the relatively small size of the map and the number of victims available for rescue. This interpretation is borne out by the workload ratings, which show that perceptual search operators found themselves relatively unloaded even when monitoring video streams from 12 robots. Fulltask and exploration operators, by contrast, reported being heavily loaded even with 4 robots and approached the top of the scale when confronted with 12. This initial experiment showed that:
• Exploration was the limiting subtask
• Advantages of eliminating exploration dominated any decrement in situation awareness

4.2 Experiment 2

Experiment 2 compared teams of two operators controlling 24 robots, with either 12 robots assigned to each operator or control over the 24 shared. The robots were controlled either manually or by an automated path planner demonstrated in [22] to generate paths that did not differ significantly in area covered but had higher tortuosity, as measured by fractal dimension, than humanly generated trajectories. Sixty teams (120 participants) were run in the experiment. While the division of responsibilities within teams affected performance, the comparisons of concern to this paper involve manual and automated path planning.
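Tortuosity measured by fractal dimension can be estimated, for example, with a box-counting pass over a logged trajectory; the sketch below is a generic illustration of the technique, not the specific estimator used by the planner in [22], and the sample trajectories are hypothetical.

```python
import numpy as np

def box_counting_dimension(xy: np.ndarray, n_scales: int = 8) -> float:
    """Estimate the fractal (box-counting) dimension of a 2-D trajectory.

    xy: array of shape (N, 2) with waypoint or odometry samples.
    Counts occupied grid cells at several box sizes and fits the slope of
    log(count) against log(1/box_size); a nearly straight path scores
    ~1.0, while a highly tortuous, space-filling path approaches 2.0.
    """
    xy = np.asarray(xy, dtype=float)
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    extent = float(max(maxs - mins)) or 1.0
    sizes = extent / np.logspace(1, n_scales, num=n_scales, base=2.0)
    counts = []
    for s in sizes:
        cells = np.floor((xy - mins) / s).astype(int)
        counts.append(len({tuple(c) for c in cells}))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return float(slope)

# Example: a gently curving path vs. the same path with random jitter.
t = np.linspace(0, 10, 500)
smooth = np.column_stack([t, np.sin(t)])
jittery = smooth + np.random.default_rng(0).normal(0, 0.2, smooth.shape)
print(box_counting_dimension(smooth), box_counting_dimension(jittery))
```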
Figure 5. Victims Found

The 2x2 ANOVA for marked victims (Figure 5), comparing team organization and autonomy, found a main effect for autonomy, F(1,56) = 13.436, p = .001, replicating the advantage for perceptual search found in the first experiment, but this time for automatically generated paths. A measure of search quality, victims per region explored (Figure 6), also found a significant advantage for autonomy, F(1,56) = 7.138, p = .01. A slightly different picture emerges from looking more closely at operator behavior. To measure the operator behavior linked to victim observations, the events and timing involved in the detection and eventual marking of victims were annotated in the data. The 2x2 ANOVA for the time from when a victim is displayed in the camera to its being successfully marked by the operator (Display to Mark) shows a main effect favoring autonomy, F(1,56) = 4.750, p = .034 (Figure 7).
Figure 6. Victims per region explored
Figure 7. Display to Mark

Figure 8. Select to Mark

The ANOVA for Select to Mark, which measured the time from when the operator selected the robot to successfully marking the victim (Figure 8), by contrast, found an effect favoring manual control, F(1,56) = 7.440, p = .009. The display-to-mark times are much shorter for the autonomous conditions, suggesting that when these operators do detect victims they detect them earlier, leading to a shorter time to marking. Autonomous operators, however, miss almost twice as many of the victims that appear in their cameras, perhaps the result of attempting to monitor continuously moving robots which may have multiple victims in view. Select-to-mark time is much shorter in the manual conditions, going as low as 14 sec in the shared-pool group, approximately one third of the 41 seconds required for autonomous operators controlling dedicated robots. Some of these differences appear related to teleoperation times, which were substantially longer in the autonomous condition. Teleoperation, by our earlier analysis, serves primarily to assist operators in regaining SA by helping them locate a selected robot, determine its location and orientation, and match landmarks between the camera and the map. These data suggest that operators in the autonomous path planning condition had, in fact, poorer SA than those choosing paths themselves. Although missing fewer of the victims appearing in their thumbnails, operators in the manual condition had fewer opportunities, as their robots were often idle at terminal waypoints while those in the autonomous condition moved continuously, explaining the latter's advantage on the overall performance measures. This account supports our earlier conjecture that reduced cognitive load may mask poorer survey knowledge (SA) in the autonomous condition. From a network- vs. platform-centric perspective these experiments support performance improvements from shifting to a network-centric information architecture. The foraging task was divided into two subtasks. The platform-centric exploration task was automated, leading to better system performance as the operator was freed to devote more effort to the perceptual tasks of finding and marking victims. As Experiment 2 shows, however, the task of locating and marking victims remained platform-centric in that the operator must orient herself with respect to the robot's pose and location in order to figure out where the detected victim is in the environment so as to mark it on the map.
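For reference, the Display-to-Mark and Select-to-Mark measures reduce to simple differences between annotated timestamps; a minimal sketch, with hypothetical event records standing in for our actual annotation format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VictimEvents:
    """Timestamps (seconds) annotated for one victim encounter."""
    displayed: float            # victim first visible in a camera frame
    robot_selected: float       # operator selects the robot showing it
    marked: Optional[float]     # victim marked on the map, None if missed

def display_to_mark(ev: VictimEvents) -> Optional[float]:
    return None if ev.marked is None else ev.marked - ev.displayed

def select_to_mark(ev: VictimEvents) -> Optional[float]:
    return None if ev.marked is None else ev.marked - ev.robot_selected

# Hypothetical encounters: the second victim is never marked (a miss).
log = [VictimEvents(10.0, 25.0, 51.0), VictimEvents(30.0, 42.0, None)]
marked = [e for e in log if e.marked is not None]
print(sum(display_to_mark(e) for e in marked) / len(marked))   # mean D2M
print(sum(select_to_mark(e) for e in marked) / len(marked))    # mean S2M
print(1 - len(marked) / len(log))                              # miss rate
```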
5. Synchronous vs. asynchronous viewing

When considering the foraging task, exploration needs to be more than simply moving the robot to different locations in the environment. For search, the process of acquiring a specific viewpoint or set of viewpoints containing a particular object is of greater concern. Because search relies on moving a viewpoint through
the environment to find and better view target objects, it is an inherently ego-centric task. Multi-robot search presents the additional problem of assuring that areas the robot has "explored" have been thoroughly searched for targets. This requirement directly conflicts with the navigation task, which requires the camera to be pointed in the direction of travel in order to detect and avoid objects and steer toward the goal. These difficulties are accentuated by the need to switch attention among robots, which may increase the likelihood that a view containing a target will be missed. In earlier studies [23,24] we demonstrated that search success in these tasks is directly related to the frequency with which the operator shifts attention between robots, and hypothesized that this might be due to victims missed while servicing other robots. To combat these problems of attentive sampling among cameras, incomplete coverage of searched areas, and difficulties in associating camera views with map locations, we are investigating the potential of asynchronous control techniques, previously used out of necessity in NASA applications, as a solution to multi-robot search problems. Due to limited bandwidth and communication lags in interplanetary robotics, camera views are closely planned and executed. Rather than transmitting live video and moving the camera about the scene, photographs are taken from a single spot with plans to capture as much of the surrounding scene as possible. These photographs, taken with either an omnidirectional overhead camera (the camera faces upward toward a convex mirror reflecting 360°) and dewarped [25,26], or stitched together from multiple pictures from a pan-tilt-zoom (ptz) camera [27], provide a panorama guaranteeing complete coverage of the scene from a particular point. If these points are well chosen, a collection of panoramas can cover an area to be searched with greater certainty than imagery captured with a ptz camera during navigation. For the operator searching within a saved panorama, the experience is similar to controlling a ptz camera in the actual scene, a property that has been used to improve teleoperation in a low-bandwidth, high-latency application [28]. In the modification of the MrCS shown in Figure 9 we merge map and camera views as in [29]. The operator directs navigation from the map being generated, with panoramas of a chosen extent being taken at the last waypoint of a series. Due to the time required to pan the camera to acquire views to be stitched, operators frequently chose to specify incomplete pans.
Figure 9. MrCS Asynchronous Panorama mode
The panoramas are stored and accessed through icons showing their locations on the map. In the Panorama interface the thumbnails are blanked out, with the arcs at the viewing icons indicating the viewing angles available. The operator can find victims by asynchronously panning through these stored panoramas as time becomes available. When a victim is spotted, the operator uses landmarks from the image and corresponding points on the map to record the victim's location. By changing the task from a forced-pace one, with camera views that must be controlled and searched on multiple robots continuously, to a self-paced task in which only navigation needs to be controlled in real time, we hoped to provide a control interface that would allow more thorough search with lowered mental workload. The reductions in bandwidth and communications requirements [30] are yet another potential advantage offered by this approach.
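Stitching the ptz frames captured at a waypoint into a stored panorama can be done with off-the-shelf tools; the sketch below uses OpenCV's stitching API purely as an illustration, not the pipeline actually used in MrCS, and the file names are hypothetical.

```python
import cv2

def build_panorama(frame_paths):
    """Stitch overlapping ptz frames, captured at one waypoint, into a
    single panorama the operator can pan through later."""
    frames = [cv2.imread(p) for p in frame_paths]  # frames assumed to exist
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

# Hypothetical frames swept left-to-right at the robot's final waypoint.
pano = build_panorama(["pan_000.png", "pan_030.png", "pan_060.png"])
cv2.imwrite("waypoint_panorama.png", pano)
```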
In an initial experiment reported in [31] we compared performance for operators controlling 4-robot teams at a simulated USAR task using either streaming or asynchronous video displays. Search performance was somewhat better using the conventional interface, with operators marking victims slightly more accurately. This superiority, however, might have occurred simply because streaming video users had the opportunity to move closer to victims, thereby improving their estimates of distance when marking the map. A contrasting observation was that the frequency of shifting focus between robots, a practice we have previously found related to search performance [32], was correlated with performance for streaming video participants but not for participants using asynchronous panoramas. Because operators using asynchronous video did not need to constantly switch between camera views to avoid missing victims, we hypothesized that for larger team sizes, where forced-pace search might exceed the operator's attentional capacity, asynchronous video might offer an advantage. In a follow-on experiment we compared panoramas with the streaming video condition from Experiment 1, with teams of 4, 8, and 12 robots, to test our hypothesis that advantages might emerge for larger team sizes.
Figure 10. Victims Found

Figure 11. Area Covered

Data were analyzed using a repeated measures ANOVA comparing streaming video performance with that of asynchronous panoramas. On the performance measures, victims found (Figure 10) and area covered (Figure 11), the groups showed nearly identical performance, with victim identification peaking sharply at 8 robots, accompanied by a slightly less dramatic maximum for search coverage. The near parity of the panorama condition supports our continued interest in the approach. Although the thumbnails remained blank, allowing operators no other view of the environment than those they requested, and the terminal panoramas were incomplete, without guarantees of visual coverage, they nevertheless allowed operators to perform almost as well as those with full information from streaming video. By moving the focus of the perceptual search task from the ego-centric camera mounted on a robot to a globally referenced icon on a map, the panorama interface moves in the direction of network centrism. The operator, however, still must choose where to send the robot and where to take the panorama, and views of panoramas are always referenced from a single robot and pose.

6. A Network-centric MrCS

Figure 12. Image queue interface: operator views adjacent frames in "film strips"
The current incarnation of the MrCS differs drastically from its predecessors by taking a fully network-centric approach. The new MrCS automates both path planning and image acquisition to produce a system in which the operator performs perceptual search without being required to switch viewpoints among robots or correlate location and pose with map positions. The goal of path planning in the experimental system has been changed from maximizing coverage in building a laser map to maximizing visual coverage, to increase the chances of detecting victims. As robots move through the environment their video is stored in a database. As frames arrive they are sorted by "freshness" to identify those that contain new views of the environment and to find older frames which now offer the greatest visual coverage. These frames are presented in prioritized queues for viewing by the operator. Simple geometry using the laser scans indexed with the frames allows the operator to mark victims directly on the viewing pane, rather than performing the mental gymnastics that would be needed to mark them on the map now that the association between robot and view has been broken. In the current test environment, after a search is complete the top ten frames in the queue account for more than 70% of the map, while the top 100 put coverage over 99%.
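The prioritized queue behaves like a greedy maximum-coverage selection over the laser-indexed footprints of the frames, and the victim-marking geometry amounts to projecting a laser range along the clicked bearing from the robot's pose. The sketch below illustrates both ideas with hypothetical data structures; it is not the actual MrCS implementation.

```python
import math
from typing import Dict, List, Set, Tuple

Cell = Tuple[int, int]  # discretized map cell (row, col)

def prioritize_frames(footprints: Dict[str, Set[Cell]]) -> List[str]:
    """Order frames so each successive frame adds the most map cells not
    yet covered by frames earlier in the queue (greedy set cover)."""
    remaining = dict(footprints)
    covered: Set[Cell] = set()
    queue: List[str] = []
    while remaining:
        frame_id, cells = max(remaining.items(),
                              key=lambda kv: len(kv[1] - covered))
        if not cells - covered:          # nothing new left to show
            break
        queue.append(frame_id)
        covered |= cells
        del remaining[frame_id]
    return queue

def victim_position(robot_pose: Tuple[float, float, float],
                    bearing_rad: float,
                    laser_range_m: float) -> Tuple[float, float]:
    """Project the laser range at the clicked bearing (relative to the
    robot's heading) into world coordinates so a victim can be marked
    directly from the viewing pane."""
    x, y, theta = robot_pose
    return (x + laser_range_m * math.cos(theta + bearing_rad),
            y + laser_range_m * math.sin(theta + bearing_rad))

# Hypothetical footprints: overlapping frames add little and sink in rank.
footprints = {
    "f1": {(0, 0), (0, 1), (1, 0)},
    "f2": {(0, 1), (1, 1)},
    "f3": {(5, 5), (5, 6), (6, 5), (6, 6)},
}
print(prioritize_frames(footprints))             # ['f3', 'f1', 'f2']
print(victim_position((2.0, 3.0, 0.0), 0.5, 4.0))
```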
A second screen provides the map and thumbnails of the old MrCS to allow the operator to monitor and intervene when robots become stuck or encounter other unexpected problems.

This paper describes the evolution of the MrCS from a platform-centric system to a network-centric one. Our data show that the automation of path planning provided substantial increases in performance. Benefits from automating platform-centric aspects of the perceptual search task are not yet known. While Alberts' hypothesis that network centrism will lead to increases in performance seems plausible, it is not yet proven. Enhancing global SA is likely to be valuable in many situations, but there are others where a lack of local awareness or confusion over provenance may lead to difficulties. We hope that experimental implementations such as ours can provide concrete tests and help make evident the contradictions, problems, and benefits of the approach.
7. References

[1] Alberts, D., Garstka, J., & Stein, F. (1999). Network-centric Warfare: Developing and Leveraging Information Superiority, Command and Control Research Program.
[2] Y. Cao, A. Fukunaga, and A. Kahng. Cooperative mobile robotics: Antecedents and directions, Autonomous Robots, 4, 1-23, 1997.
[3] D. R. Olsen and S. B. Wood. Fan-out: measuring human control of multiple robots, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM Press, Vienna, Austria, 2004, p. 231-238.
[4] C. M. Humphrey, C. Henk, G. Sewell, B. Williams, and J. A. Adams. Evaluating a scaleable Multiple Robot Interface based on the USARSim Platform, Human-Machine Teaming Laboratory, 2006.
[5] B. Trouvain, C. Schlick, and M. Mevert. Comparison of a map- vs. camera-based user interface in a multi-robot navigation task, in Proceedings of the 2003 International Conference on Robotics and Automation, 2003, p. 3224-3231.
[6] J. W. Crandall, M. A. Goodrich, D. R. Olsen, and C. W. Nielsen. Validating human-robot interaction schemes in multitasking environments. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 35(4):438-449, 2005.
[7] C. Wickens and J. Hollands. Engineering Psychology and Human Performance, Prentice Hall, 1999.
[8] Bonner, M. & French, H. (2004). Design myopia: Some hidden concerns in implementing network-centric warfare, Australian Army Journal 2(1), 141-148.
[9] Bolia, R., Vidulich, M., & Nelson, T. (2006). Unintended consequences of the network-centric decision making model: Considering the human operator, 2006 Command and Control Research and Technology Symposium.
[10] Ali, I. (2006). Is NCW information sharing a double edged sword? – Voices from the battlefield, Human Factors in Network-Centric Warfare Symposium.
[11] (UE2) UnrealEngine2, http://udn.epicgames.com/Two/rsrc/Two/KarmaReference/KarmaUserGuide.pdf, accessed February 5, 2008.
[12] S. Carpin, J. Wang, M. Lewis, A. Birk, and A. Jacoff. High fidelity tools for rescue robotics: Results and perspectives, RoboCup 2005 Symposium, 2005.
[13] MathEngine, MathEngine Karma User Guide, http://udn.epicgames.com/Two/KarmaReference/KarmaUserGuide.pdf, accessed May 3, 2005.
[14] M. Lewis, S. Hughes, J. Wang, M. Koes, and S. Carpin. Validating USARsim for use in HRI research, Proceedings of the 49th Annual Meeting of the Human Factors and Ergonomics Society, Orlando, FL, 2005, 457-461.
[15] C. Pepper, S. Balakirsky, and C. Scrapper. Robot Simulation Physics Validation, Proceedings of PerMIS'07, 2007.
[16] B. Taylor, S. Balakirsky, E. Messina, and R. Quinn. Design and Validation of a Whegs Robot in USARSim, Proceedings of PerMIS'07, 2007.
[17] M. Zaratti, M. Fratarcangeli, and L. Iocchi. A 3D Simulator of Multiple Legged Robots based on USARSim, RoboCup 2006: Robot Soccer World Cup X, Springer, LNAI, 2006.
[18] S. Carpin, T. Stoyanov, Y. Nevatia, M. Lewis, and J. Wang. Quantitative assessments of USARSim accuracy, Proceedings of PerMIS 2006.
[19] S. Balakirsky, S. Carpin, A. Kleiner, M. Lewis, A. Visser, J. Wang, and V. Ziparo. Toward heterogeneous robot teams for disaster mitigation: Results and performance metrics from RoboCup Rescue, Journal of Field Robotics, 2007.
[20] H. Wang, M. Lewis, P. Velagapudi, P. Scerri, and K. Sycara. How search and its subtasks scale in N robots, 2009 Human-Robot Interaction Conference, ACM.
[21] Peruch, P., Vercher, J., & Gauthier, G. (1995). "Acquisition of Spatial Knowledge through Visual Exploration of Simulated Environments," Ecological Psychology 7(1): 1-20.
[22] Chien et al. [to be added]
[23] Wang, J. & Lewis, M. (2007a). "Human control of cooperating robot teams," 2007 Human-Robot Interaction Conference, ACM.
[24] Wang, J. & Lewis, M. (2007b). "Assessing coordination overhead in control of robot teams," Proceedings of the 2007 International Conference on Systems, Man, and Cybernetics.
[25] Murphy, J. (1995). "Application of Panospheric Imaging to a Teleoperated Lunar Rover," in Proceedings of the 1995 International Conference on Systems, Man, and Cybernetics.
[26] Shiroma, N., Sato, N., Chiu, Y., & Matsuno, F. (2004). "Study on effective camera images for mobile robot teleoperation," in Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Okayama, Japan.
[27] Volpe, R. (1999). "Navigation results from desert field tests of the Rocky 7 Mars rover prototype," The International Journal of Robotics Research, 18, pp. 669-683.
[28] Fiala, M. (2005). "Pano-presence for teleoperation," Proc. Intelligent Robots and Systems (IROS 2005), 3798-3802.
[29] Ricks, B., Nielsen, C., & Goodrich, M. (2004). Ecological displays for robot interaction: A new perspective. In International Conference on Intelligent Robots and Systems, IEEE/RSJ, Sendai, Japan.
[30] Bruemmer, D., Few, A., Walton, M., Boring, R., Marble, L., Nielsen, C., & Garner, J. (2005). Turn off the television: Real-world robotic exploration experiments with a virtual 3-D display. Proc. HICSS, Kona, HI.
[31] Velagapudi, P., Wang, J., Wang, H., Scerri, P., Lewis, M., & Sycara, K. (2008). Synchronous vs. Asynchronous Video in Multi-Robot Search, Proceedings of the First International Conference on Advances in Computer-Human Interaction (ACHI'08).
[32] Scerri, P., Xu, Y., Liao, E., Lai, G., Lewis, M., & Sycara, K. (2004). "Coordinating large groups of wide area search munitions," in D. Grundel, R. Murphey, and P. Pardalos (Eds.), Recent Developments in Cooperative Control and Optimization, Singapore: World Scientific, 451-480.