VISUALIZATION OF MULTIPLE ROBOTS DURING TEAM ACTIVITIES
Curtis M. Humphrey, Stephen M. Gordon, Julie A. Adams
Department of Electrical Engineering and Computer Science
Vanderbilt University, Nashville, Tennessee, USA 37235
As robotic systems encompass larger numbers of individual robotic agents, interface design must provide better visual representations that account for factors affecting the human operator's situational awareness. This work investigates three robotic team visualizations via an evaluation with sixteen participants, with and without prior robotics experience. The team visualizations varied in how much information was displayed: only individual robots, individual robots connected via a semitransparent team shape, or a solid team shape. The evaluation results revealed that the two visualizations utilizing a team shape were used more frequently than the visualization displaying only individual robots.
INTRODUCTION
Interacting with multiple robots is a complicated task for a single individual to undertake. Maintaining situational awareness of multiple robots, their sensors, states, and current actions is difficult for a single operator who is expected to complete several additional tasks unrelated to the robots (Adams, 2002). Therefore, multiple robot control is typically accomplished with one operator controlling one robot or with many operators controlling a single robot (Burke et al., 2004). As robotic teams become more common, a cost-effective approach would be one that enables a single person to manage a large robotic team, eventually expanding to multiple teams. Adams (1995) was one of the first to demonstrate that a single human could teleoperate up to four mobile robots. More recently, Parasuraman et al. (2005) demonstrated that Playbook™, a delegation-style interface, permits interaction with up to eight simulated robots. Daily et al. (2003) developed an augmented reality technique for an improved interface in environments where the operator is embedded with real robots. Baker et al. (2004) centered activity and attention on a video window for use in urban search and rescue situations. Kaminka and Elmaliach (2006) evaluated an ecological, connected-line team representation in their socially-attentive display. The lines connected as many as three robots and provided better feedback regarding team status than raw images did.
One manner of reducing the high demands placed on the operator is to develop interfaces that facilitate situational understanding. Envarli and Adams (2005) investigated employing colors to convey information regarding multiple robot team associations, team status, and individual robot status. The work presented in this paper developed and evaluated three multiple robot team visualization styles to determine which visualization was most frequently used during a supervisory task. The paper summarizes the apparatus, outlines the testing method, and provides the evaluation results and discussion.
APPARATUS
The chosen visualizations were based on the results of a questionnaire completed by 27 undergraduate students. The students were asked to draw representations of two different types of objects (circles and isosceles triangles) in four identical configurations. The participants' drawings used slightly more line-based shapes than spline-based shapes (110 vs. 94); thus, the three visualizations incorporated into this evaluation were line based. The three team visualizations varied by the level of individual robot detail that they provided. The team name was placed at the center of the visualization in all three instances. The individual team visualization provided no connection between the individual robots in a team, as shown in Figure 1.a. The semitransparent team visualization connected the visible individual robots via a semitransparent shape, as shown in Figure 1.b.
Figure 1: The team visualization styles: (a) Individual robot, (b) semitransparent, and (c) solid.
The final team visualization provided a solid team shape in which the individual robots were not visible, as shown in Figure 1.c. A single user interface was designed that permitted the display of combinations of all three team visualizations. The interface required the human to act as a supervisor (Sheridan, 1992) who was responsible for monitoring the robotic teams. The interface displayed four teams of four individual robots during each trial. The robots were simulated using Player/Stage (Gerkey et al., 2001). During each trial all teams autonomously navigated between waypoints while maintaining a safe distance from all obstacles. Each team had its own set of waypoints and team goals.
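The paper does not specify how the team shapes were constructed from the robot positions. The following is a minimal sketch of one plausible approach, assuming the shape is the convex hull of the team's robot positions and using matplotlib purely as a stand-in for the interface's actual rendering toolkit; both are our assumptions, not the authors' implementation.

```python
# Sketch of the three team visualization styles (assumed convex-hull shape).
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import ConvexHull

def draw_team(ax, positions, team_name, color, style):
    """Draw one team in the requested style.

    style: 'individual' (robots only), 'semitransparent' (robots plus a
    see-through team shape), or 'solid' (opaque shape hiding the robots).
    """
    positions = np.asarray(positions, dtype=float)
    if style in ("semitransparent", "solid"):
        hull = ConvexHull(positions)  # shape construction is an assumption
        alpha = 0.4 if style == "semitransparent" else 1.0
        ax.fill(positions[hull.vertices, 0], positions[hull.vertices, 1],
                color=color, alpha=alpha)
    if style != "solid":  # individual robots are hidden by the solid shape
        ax.plot(positions[:, 0], positions[:, 1], "o", color=color)
    # The team name is always placed at the center of the visualization.
    cx, cy = positions.mean(axis=0)
    ax.text(cx, cy, team_name, ha="center", va="center")

fig, ax = plt.subplots()
draw_team(ax, [(0, 0), (2, 0.5), (1.5, 2), (0.2, 1.8)], "T0", "tab:blue",
          "semitransparent")
plt.show()
```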
The interface had three primary areas: the tab display, the main window, and the worldview, as shown in Figure 2. The main window is the large area to the left in Figure 2 and provides a focused bird's eye view of the environment. The smaller windows to the main window's right are, from top to bottom: the camera view, the tab display, and the worldview. The camera view was not implemented for this evaluation. The tab area permitted the operator to access information related to all teams, a single team, or individual robots. The all tab, see Figure 3.a, provides each team's color, status from 0 to 10, and distance to the next goal. When a team is selected, the main window centers on that team and the tab display switches to the team tab. The team tab, as shown in Figure 3.b, provides each team member's status and the identifier of the last completed goal. The color boxes to the left of each robot's name (e.g. T0R0 represents a robot's name) visually indicate the robot's status. Light green indicates successful task execution, darker colors indicate that the robot is encountering problems, and red indicates an inability to complete the task. Selection of an individual robot changes the tab display to the individual tab, see Figure 3.c.
Figure 2: The multiple robot team interface consisting of the focused main window, camera view, tab control, and the worldview.
Figure 3: The tab control displays: (a) all teams, (b) individual team with member robots, and (c) individual robot.
The individual tab provides the selected robot's status, position, and distance to the next goal. The color box, as in the team tab, is a visual representation of the status, in this case an individual robot's status. The worldview, on the bottom right of Figure 2, provides an environmental overview depicting each team's position relative to the other teams. The worldview scales automatically so that all teams are displayed simultaneously. A bounding box depicts the main window's viewing area relative to the entire working environment. The main window scrolls left, right, up, and down. Static world obstacles (e.g. walls and columns) are represented as black shapes in Figure 2. A left mouse button click on a team selects the team, opens the selected team's tab, and changes the team visualization as defined for the evaluation trial. The team visualization also changes temporarily when the mouse hovers over a team.
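The exact mapping from the 0-10 status value to the color box is not given in the paper. The sketch below assumes 10 is nominal and 0 is failure, and interpolates linearly from light green to red (the intermediate tones come out darker, loosely matching the "darker colors indicate problems" description); both the scale direction and the ramp are our guesses.

```python
# Sketch: mapping a robot's status value to a color-box color (assumed ramp).
def status_color(status: float) -> tuple:
    """Return an (R, G, B) color for a status in [0, 10]; 10 = nominal."""
    t = max(0.0, min(status, 10.0)) / 10.0  # 1.0 = nominal, 0.0 = failed
    light_green = (144, 238, 144)
    red = (200, 0, 0)
    return tuple(round(red[i] + t * (light_green[i] - red[i]))
                 for i in range(3))

# e.g. robot T0R0 executing its task successfully vs. unable to complete it
assert status_color(10) == (144, 238, 144)  # light green
assert status_color(0) == (200, 0, 0)       # red
```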
METHOD
Participants
Sixteen uncompensated participants evaluated the three team visualizations. Eight participants had prior robotics experience but no experience with this system, while the remaining participants had no robotics experience. Six of the sixteen participants were avid video game players, defined as someone who plays games daily. All participants were at least 18 years old, had at least a high school education, and had normal or corrected-to-normal vision.
Experimental Design
A within-subjects design incorporating one interface with two presentations was employed. The interface interactions and layout, as well as the actual tasks, were identical across the two presentation trials. The visualization trials were counterbalanced across the participant pools. The first interface presentation (Presentation A) began in the semitransparent visualization and transitioned to/from the individual visualization. The second interface presentation (Presentation B) began in the solid visualization and transitioned to/from the semitransparent visualization. The transition between visualizations occurred for the duration of a mouse hover over a team or when the left mouse button was clicked on a team. All trial tasks could be performed with any of the visualization methods via the tab display, the main window, or some combination thereof. The independent variables for this evaluation included: the visualization presentations, participants' prior robotics experience, color blindness, and experience with certain computer game genres. The dependent variables included: mouse clicks, the time visualizations were active, and subjective questionnaire feedback. The participants completed the post-trial questionnaire after each trial and the post-experimental questionnaire at the end of the session.

Procedure
The participants first completed a consent form and a demographic questionnaire. The participants then received a general orientation and training that lasted five minutes. The training included:
1. The environment layout, including object names.
2. The team or individual robot selection capability.
3. The main window navigation.
4. The tab display usage.
5. The robot and team status information provided by visual and textual feedback.
6. The team visualization transitions.
Each interface trial lasted five minutes, at which point the trial was stopped. At the start of each trial the participants interacted with the system at will. The system paused three times, at 1.5, 2.5, and 3.5 minutes. During each system pause the participant received a task. The tasks were:
1. Select robot 2 from team 3.
2. Select the robot closest to the column.
3. Verbally confirm or deny the accuracy of the following statement: "You have been told by someone that one of the robots near the right-most bench is having trouble."
The participants were instructed to complete each task as quickly as possible and then to interact with the system at will until another task was provided. After the trial, the participants completed the post-trial questionnaire. The participants then completed the second trial with the remaining presentation. The tasks, the system pause time intervals, and the post-trial questionnaire were identical across trials. The participants completed the post-experimental questionnaire after completing the second post-trial questionnaire.
RESULTS
The primary purpose of the evaluation was to determine which team visualization was used most frequently during a supervisory task. The results were analyzed to determine the level of engagement and the frequency with which each visualization was active. The number of times the team visualization changed determined the level of engagement for each presentation. Presentation A had a mean of 63.50 visualization changes (SD = 45.53), while Presentation B's mean was 48.31 (SD = 24.13). A paired one-tailed Student's t-test across presentations found that Presentation A had significantly more visualization changes than Presentation B (t(30) = 2.18, p = 0.04). The percentage of time a visualization was used is presented in Figure 4. The percentages were calculated by dividing the time a visualization was active by the total time it could have been used, across all participants and all trials. The individual visualization had a mean usage of 19.03% (SD = 10.83%). The semitransparent visualization was used on average 61.86% of the time (SD = 12.44%), while the solid visualization was used 57.25% of the time (SD = 22.42%). A paired two-tailed Student's t-test across visualization styles found that the semitransparent and solid visualizations were used significantly more than the individual visualization (semitransparent: t(30) = 6.32, p < 0.01; solid: t(30) = 4.86, p < 0.01). There was no statistical difference between the semitransparent and solid visualizations. There were also no significant differences based on order effects, robotics experience, or gaming experience. Therefore, the participants preferred visualizations with a team shape over the individual robot visualization.
Figure 4: Visualization usage overall with 5% confidence intervals.
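For concreteness, the following is a minimal sketch of the paired, two-tailed t-test used above to compare usage across visualization styles, implemented with scipy; the usage arrays are hypothetical placeholders, not the study's data.

```python
# Sketch: paired two-tailed t-test on per-trial visualization usage fractions.
import numpy as np
from scipy import stats

# Fraction of trial time each visualization was active, one value per trial.
# These numbers are hypothetical placeholders, not the reported data.
semitransparent = np.array([0.62, 0.55, 0.70, 0.58])
individual = np.array([0.20, 0.15, 0.25, 0.18])

t, p = stats.ttest_rel(semitransparent, individual)  # paired, two-tailed
print(f"t = {t:.2f}, p = {p:.3f}")
```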
The post-trial questionnaire results were inconclusive regarding the visualization preferences. The post-experimental questionnaire asked participants to rank the two presentation trials, with Presentation A assigned the value one and Presentation B assigned the value three. The participants indicated which presentation was preferred. The results indicated a slight preference for Presentation A, which used both the individual and semitransparent visualizations (M = 1.88, SD = 0.98). The participants' prior usage of robots did not significantly alter their preferences, engagement, or visualization frequency. However, the participants' background experience with video games (i.e. real-time strategy, first-person shooter, and hack-n-slash role-playing games) did have a significant effect on their presentation preference: the participants who played daily preferred Presentation A (M = 1.33, SD = 0.82) over Presentation B (M = 2.20, SD = 0.92), t(14) = 2.30, p = 0.04, determined via a two-sample, unequal-variance, one-tailed Student's t-test. This
preference could be because both visualizations used in Presentation A (individual and semitransparent) are used in video games, whereas the solid visualization used in Presentation B has, to the authors' knowledge, never been used in video games.
DISCUSSION
Most current multiple robot interfaces present the robots as individual entities; this evaluation has provided preliminary results for visualizing multiple robots within a team as a single entity. The individual robot visualization was employed approximately one-third as frequently as the team-oriented visualizations. Therefore, the semitransparent and solid visualizations were preferred during the supervisory task. There appears to be a preference for the semitransparent visualization based upon the post-experimental preference ratings; however, the post-trial questionnaires did not indicate a clear preference. Team visualizations will not be applicable to all multiple robot tasks, as some tasks will require explicit knowledge of individual robot activities; however, such abstractions should prove useful when supervising very large numbers of robots from a bird's eye view. A possible limitation relates to the ordering of the team visualizations within the presentations: the semitransparent visualization was the default in Presentation A but not in Presentation B. It appears that this had no effect on the results; however, we intend to investigate this issue further. Future evaluations will compare the existing semitransparent and solid visualizations along with additional team visualizations. Such evaluations will incorporate larger task sets and varying environmental conditions. The usefulness of the visualizations when teams overlap one another will also be evaluated. The purpose of such evaluations will be to determine the appropriate visualization techniques for presenting multiple teams of multiple robots to a human operator. The general hypothesis is that team-based visualizations will increase an operator's situational awareness, permitting him or her to supervise a larger number of mostly autonomous robots.
REFERENCES
Adams, J.A. (1995). "Human management of a hierarchical system for the control of multiple mobile robots." Ph.D. Dissertation, Department of Computer and Information Sciences, University of Pennsylvania.
Adams, J.A. (2002). "Critical considerations for human-robot interface development." 2002 AAAI Fall Symposium, Tech. Report FS-02-03, pp. 1-7.
Baker, M., Casey, R., Keyes, B., and Yanco, H.A. (2004). "Improved Interfaces for Human-Robot Interaction in Urban Search and Rescue." IEEE International Conference on Systems, Man and Cybernetics, pp. 2960-2965.
Burke, J.J., Murphy, R.R., Coovert, M., and Riddle, D. (2004). "Moonlight in Miami: A field study of human-robot interaction in the context of an urban search and rescue disaster response training exercise." International Journal of Human-Computer Interaction, 19(1-2), pp. 85-116.
Daily, M., Cho, Y., Martin, K., and Payton, D. (2003). "World Embedded Interfaces for Human-Robot Interaction." 36th Annual Hawaii International Conference on System Sciences, pp. 125-126.
Envarli, I.C. and Adams, J.A. (2005). "Task Lists for Human-Multiple Robot Interaction." 14th IEEE International Workshop on Robot and Human Interactive Communication, pp. 119-124.
Gerkey, B.P., Vaughan, R.T., Stoy, K., Howard, A., Sukhatme, G.S., and Matarić, M.J. (2001). "Most valuable player: A robot device server for distributed control." IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1226-1231.
Kaminka, G.A. and Elmaliach, Y. (2006). "Experiments with an Ecological Interface for Monitoring Tightly-Coordinated Robot Teams." IEEE International Conference on Robotics and Automation, pp. 200-205.
Parasuraman, R., Galster, S., Squire, P., Furukawa, H., and Miller, C. (2005). "A Flexible Delegation-Type Interface Enhances System Performance in Human Supervision of Multiple Robots: Empirical Studies with RoboFlag." IEEE Transactions on Systems, Man, and Cybernetics - Part A, 35(4), pp. 481-493.
Sheridan, T.B. (1992). Telerobotics, Automation, and Human Supervisory Control. MIT Press, Cambridge, MA.