PROCEEDINGS of the HUMAN FACTORS AND ERGONOMICS SOCIETY 51st ANNUAL MEETING—2007
Recovery from Automation Error after Robot Neglect

Daniel N. Cassenti
Human Research and Engineering Directorate
U.S. Army Research Laboratory
Aberdeen Proving Ground, MD 21005

Automation is a robot’s ability to control itself without operator intervention. Advances in automation have resulted in a phenomenon called robot neglect (Crandall & Goodrich, 2003), in which a robot operator infrequently attends to an automated robot. Robot neglect becomes a problem when automation fails while the robot operator is not attending to the robot. This work outlines a set of guidelines for aiding an operator faced with robot neglect followed by automation failure. The guidelines focus on how video replay of the incidents leading up to the failure may help the operator overcome inattention and regain situation awareness. Complications with this approach in past research are addressed.
INTRODUCTION

Advances in robot automation (i.e., programs that allow a robot to control itself) create new problems that require solutions. Automation cannot be perfect and often results in failure (Rovira, McGarry, & Parasuraman, 2002). Consider a robot that uses automation to navigate an environment. In its path may be a rut or hole that its visual sensors fail to detect. This obstruction can cause the robot to fall even if its navigational automation reliably avoids obstructions above ground level. The robot operator represents a fail-safe for automated robots: in situations where the robot is automated, at least one operator must be assigned to monitor it. The better the automation, the less likely the robot is to make an error. However, better automation also increases the operator’s confidence in the automation (measured by neglect tolerance) and makes robot neglect more likely (Crandall & Goodrich, 2003). Robot neglect occurs when an operator ignores a robot for long periods of time, trusting that automation will function properly. When an automated robot confronts a situation that it cannot handle, there must be contingencies to regain the operator’s attention during times of neglect (Parasuraman, Sheridan, & Wickens, 2000). Goodrich (2004) offers hope for the effectiveness of this strategy by suggesting that
“attention management is useful in helping people manage the robot in a timely manner.” Recovery from error after robot neglect requires steps to help the operator reconstruct an accurate picture of the events leading to the error (Parasuraman et al., 2000). Robot neglect is analogous to an interruption during a complex task. Trafton, Altmann, Brock, and Mintz (2003) found that resumption of an interrupted complex task is fast and accurate when cues appear that help the individual regain situation awareness. The next section reviews a video replay strategy aimed at providing cues that quickly and accurately restore situation awareness after robot neglect.

VIDEO REPLAY

According to Vidulich, Dominguez, Vogel, and McMillan (1994), situation awareness is the “continuous extraction of environmental information, integration of this information with previous knowledge to form a coherent mental picture in directing further perception and anticipating future events.” This definition suggests that video is the best medium for extracting situation awareness about a remote robot. Therefore, recovery of situation awareness should include video information (Steinfeld, 2004). Recovery from robot neglect should also include the circumstances that led to the problem because
the circumstances may aid the operator in solving the problem. The ideal would be for the robot to transmit continual video of its surroundings and for this information to be recorded on the operator’s computer for quick playback. However, frame rate limitations of video transmission prevent this solution (Darken, Kempster, & Peterson, 2001). A transmission frame rate limits the precision, scope, and quantity of video that can be transmitted at one time. If the video covers too much space, then the precision of what it presents will be lower and it will be more difficult for the operator to distinguish what he or she sees on the screen. If the video presents a more detailed and precise picture, then it must also restrict the field of view. To cope with frame rate limits, up-to-date video imagery from a robot is restricted: it is either presented in a grainy manner or represents only a narrow field of view from directly in front of the robot. A problem with a robot may originate from any direction in the robot’s environment. For example, consider a robot used for military surveillance. An operator may not see the projectile that knocks the robot over and disrupts its navigation apparatus. In such a situation it would be advantageous to have numerous camera angles so that the operator may piece together what happened. In addition, other sensory information such as gyroscope data or sound recordings may be useful, though these considerations are outside the scope of this paper. Frame rate limitations apply when video transmits in real time. However, a robot with reliable automation does not need constant real-time transmission. While automation is operating successfully, a single camera that transmits with a field and precision that fit frame rate limitations can provide the operator with enough information to monitor the robot. Full information to the
operator is only necessary when automation results in error. The current proposal is that robots with strong automation capabilities record video from multiple camera angles and with wide fields as they progress through their tasks. If a problem occurs during task execution, then the robot will transmit stored video leading up to the problem. The robot may transmit this data at a speed that fits within the frame rate limitation of transmission. The receiving computer can then recalculate how much time the set of video images took to record and replay the video clip in real time (or at any operator-requested speed). Multiple camera angles may also provide vital information about the robot’s current situation. For example, if the robot is caught in a ground indentation, the operator will have the opportunity to view different angles to formulate a plan to extract it. Two problems confront this solution. The first is that if the robot transmits high-precision, wide-field images from multiple cameras at a speed that fits frame rate limitations, then the information will take some time to reach the operator. Fast recovery is not possible if all the stored information must first be transmitted. To address this problem, the interface provided to the operator should contain parameter adjustments: operators could request one camera angle at a time and restrict the precision of the pictures to a desired level. This strategy would decrease the time needed to transmit the information. For example, if a robot used for navigation falls off of its transportation apparatus, the operator may view the angle of the camera that sends up-to-date, frame-rate-limited imagery and infer the way in which the robot fell. Two likely causes of the fall would be a projectile or an obstacle. To check for a possible projectile, the operator may request the recordings from the side of the robot opposite to where it fell. To check for a possible terrain issue, the operator
may request a camera angle that includes forward and ground-level information.

STORAGE LIMITATIONS

The second problem with the strategy to record and later transmit multiple-angle, large-field, and high-precision video is the storage limitation of robots. Processors and memory storage have become smaller with newer technology, but video is a storage-expensive type of media. Dimitrova, Sethi, and Rui (2000) described storage capacity limits as the greatest obstacle to a video replay option for robots. A strategy for working around the storage capacity problem involves reducing the amount of information that is stored at one time. The solution cannot depend on transmitting the extra information, because the available frame rate will already be used by the single camera that transmits information for operator monitoring of the current situation. The solution also cannot involve substantial additions of storage hardware: robots in the field are trending toward smaller sizes, and these smaller robots are stealthier and can penetrate areas of interest through smaller openings. The amount of video time stored by the robot should therefore be restricted. Only the last several seconds of uninterrupted task execution should be stored in the robot’s memory. The number of seconds should be enough to indicate what stopped task execution, if this can be determined from a remote location. Implementing this solution is a challenge because automation errors are largely unpredictable; the video cameras must record continuously in order for the last several seconds of the circumstances surrounding the automation failure to be caught on camera. The solution involves first-in, first-out memory (U.S. Patent No. 4,507,760, 1985). Storage buffers will only have the capacity to hold several seconds’ worth of video. When new information comes into the
system, it will be stored, and the information recorded several seconds before the current information will be dropped from the buffer. The first-in, first-out principle will therefore constantly update the buffers to always contain the latest information without sacrificing storage space. The first-in, first-out principle saves necessary storage space, but alone it cannot solve the storage capacity problem. If the robot gets into a situation where automation fails and the solution to the error requires operator intervention, then under the current plan the robot will record over the old video while it is stuck in the situation. The new information would not be useful to the operator in diagnosing the problem. A robot employing this strategy must therefore freeze the buffer when it produces an error. This requires error-detection algorithms (see Donald, 1987). These detections will be imperfect, but they allow a way around the problem of storage limitations. To implement error detection, robots should possess algorithms that project the plan forward. When there is any delay in what the robot should do next, the robot will freeze the recordings so that no more information records over the old information while the robot is delayed. The robot should also issue an audible warning to call the operator’s attention to the error, so that the operator wastes minimal time in regaining situation awareness from the camera that continues to transmit. Some errors do not require operator intervention because the robot works around the issue on its own. For example, a robot may deviate from a navigational goal by not advancing after hitting a rock, but continued pressure from the transportation mechanisms may force the rock out of the way. If the robot gets back to its plan, the freeze on video recording should end, allowing the robot to continue the task. Figure 1 outlines the overall proposal for overcoming robot neglect.
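The first-in, first-out buffer and its freeze behavior can be sketched in a few lines of code. This is a minimal illustration of the principle, not an existing robot API; the class name, capacity figures, and frames-as-integers representation are all assumptions made for the example.

```python
from collections import deque

class VideoBuffer:
    """Illustrative first-in, first-out buffer for the last few seconds of video.

    Capacity is seconds of retained video times the camera frame rate.
    A deque with `maxlen` drops the oldest frame automatically whenever
    a new one arrives, which is exactly the first-in, first-out update
    described in the text.
    """

    def __init__(self, seconds, fps):
        self.frames = deque(maxlen=seconds * fps)
        self.frozen = False

    def record(self, frame):
        # While frozen, incoming frames are discarded so the moments
        # before the failure are not recorded over.
        if not self.frozen:
            self.frames.append(frame)

    def freeze(self):
        """Called by error detection when the robot's plan stalls."""
        self.frozen = True

    def resume(self):
        """Called when the robot works around the problem on its own."""
        self.frozen = False

# Hypothetical usage: 5 seconds retained at 2 frames per second.
buf = VideoBuffer(seconds=5, fps=2)
for i in range(20):          # frames 0..19 stream in during the task
    buf.record(i)
buf.freeze()                 # automation error detected
buf.record(99)               # discarded; evidence of the failure is preserved
# buf.frames now holds frames 10..19, the seconds leading up to the error
```

The design choice mirrors the text: the buffer never grows, so storage cost is fixed, and the freeze is the only mechanism needed to keep the pre-failure video intact while the robot is stuck.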
[Figure 1 is a flowchart: Auto. Fail → Detect Error → Buffer Freeze → Warn, then either Robot-Initiated Correction → Resume Buffer, or Operator-Initiated Correction via Info Request.]

Figure 1. Proposed sequence for overcoming robot neglect followed by automation failure. The sequence is: automation failure, error detection, video buffer freeze, transmission of a warning signal, and either the robot fixes its own problem or the operator requests information and begins operator error correction.
FUTURE WORK

The hypotheses discussed here outline an approach to overcoming robot neglect combined with automation failure. Several more research objectives need to be met before implementing this approach. First, the amount of time that the video buffers record needs to be set through empirical research. Important considerations in these studies include the type of robot mission, how many camera angles are enough to cover potential causes of automation failure, and the level of image precision that needs to be recorded. The results of the empirical studies should be tempered by engineering concerns about how many cameras can be affixed to a robot used for a given function, how much overlap the camera fields can accommodate in case one camera malfunctions, and the amount of storage space that can be made available to the robot. In addition to empirical and engineering studies defining the video buffer parameters, other research must consider the implementation of each of the projected recovery mechanisms, including a user-friendly interface for setting video feedback preferences and design issues for attaching cameras to the robot.

CONCLUSIONS

Robot neglect is a problem when automation fails. If an operator neglects a robot and trusts that automation will complete certain objectives on its own, then the operator will be unaware of the events that led to the failure and might be at a disadvantage when figuring out how to solve the problem. The proposed solution is video replay of the events leading up to the automation failure. This solution includes multiple camera angles to capture many sources of error and a robot interface that allows the operator to select the camera angle and precision necessary for identifying the cause of error. To work around storage capacity limits, I suggest employing the first-in, first-out principle to constrain the amount of storage necessary. Though more research is needed, this solution represents a first step toward recovery from automation failure after robot neglect.
ACKNOWLEDGEMENTS

This project was funded by the U.S. Army’s Robotics Collaboration Army Technology Objective. The author wishes to thank Mike Barnes, Troy Kelley, Don Headley, and Linda Pierce of the U.S. Army Research Laboratory’s Human Research and Engineering Directorate and three anonymous reviewers for their critiques of previous drafts of this article.

REFERENCES

Crandall, J. W., & Goodrich, M. A. (2003). Measuring the intelligence of a robot and its interface. In Proceedings of PERMIS.

Darken, R. P., Kempster, K., & Peterson, B. (2001). Effects of streaming video quality of service on spatial comprehension in a reconnaissance task. In Proceedings of the Interservice/Industry Training, Simulation & Education Conference.

Dimitrova, N., Sethi, I., & Rui, Y. (2000). Media content management. In M. R. Syed (Ed.), Design and management of multimedia information systems: Opportunities and challenges. Hershey, PA: Idea Publishing Group.

Donald, B. R. (1987). Error detection and recovery in robots. New York: Springer-Verlag.

Fraser, A. G. (1985). U.S. Patent No. 4,507,760. Washington, DC: U.S. Patent and Trademark Office.

Goodrich, M. A. (2004). Using models of cognition in HRI evaluation and design. In Proceedings of the AAAI 2004 Fall Symposium Series.

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 30, 286-297.

Rovira, E., McGarry, K., & Parasuraman, R. (2002). Effects of unreliable automation on decision making in command and control. In Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. 428-432).
Steinfeld, A. (2004). Interface lessons for fully and semi-autonomous mobile robots. In Proceedings of the IEEE International Conference on Robotics & Automation, New Orleans, LA.

Trafton, J. G., Altmann, E. M., Brock, D. P., & Mintz, F. E. (2003). Preparing to resume an interrupted task: Effects of prospective goal encoding and retrospective rehearsal. International Journal of Human-Computer Studies, 58, 583-603.

Vidulich, M., Dominguez, C., Vogel, E., & McMillan, G. (1994). Situation awareness: Papers and annotated bibliography (Report No. AL/CF-TR-1994-0085). Wright-Patterson Air Force Base, OH: Armstrong Laboratory, Air Force Materiel Command.