Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems
Making Service Robots Human-Safe

V. J. Traver, A. P. del Pobil, M. Pérez-Francisco
Robotic Intelligence Laboratory, Edificio TI, Jaume-I University, E-12080 Castellón, Spain
{vtraver,pobil,mperez}@inf.uji.es

Abstract

This paper first reviews the literature on the very important but neglected issue of safety in service-robot applications. Most of the few existing approaches are limited because they lack flexibility. We argue that human-tailored safety schemes are greatly needed, and propose two strategies along the lines in which we think human-safe robots should be conceived. A fast method to derive the robot's incremental movements, given a workspace force applied to the robot, constitutes the basis of the robot behaviors that underlie the proposed strategies. Our simulations show that our strategies are safe and human-friendly, and exhibit real-time performance. Finally, we discuss the many ways in which our work can be extended.
1 Introduction
With the advances in computer science and robotics, robots are now more applicable to our everyday life, and much research is currently being done in what has been termed service robotics. Service robots demand a different design philosophy from conventional industrial robots [19] because, unlike their counterparts in industrial fields, robots in service fields cannot be physically isolated from human beings, but must coexist and cooperate [16] with them, while sharing a working area [2]. As a result, robots become a great danger to nearby people and adequate safety measures are called for [16]. As Carlisle put it [3], imagine a robot playing with your child, or feeding a quadriplegic. While many authors recognize the importance of human-safe robots [23, 18, 6, 16], only very few researchers address the problem to some extent. Thus, safety is one of the most poorly addressed areas in robotics, despite being critical for the progress of many other applications [3]. Therefore, it is this insufficient
effort devoted to this issue of utmost importance that motivates our work. Several kinds of safety could be considered, but we focus on the avoidance of the mechanical injury that robots can inflict on people. This paper is organized as follows. In subsection 1.1 a report on related work is given. Our safety strategies are presented in section 2. In section 3 experiments are described. Finally, in section 4 we discuss our current work and outline future research lines.
1.1 Related Work
Some government directives include guidelines for robotics safety [1], but they are mostly concerned with the causes and prevention of accidents with industrial robots. In [6] European safety standards for machinery are presented and the difficulties in validating the safety of computer-controlled machinery, particularly that used in robotics, are discussed. However, theoretical studies [4, 6] are of limited interest. The International Organization for Standardization (ISO) strictly recommends that industrial robots be isolated from operators and workers. However, in many fields in the near future, robots will have to work beside people [18]. Thus, not allowing people to enter a robot's workspace is not a proper solution. Some researchers simply cover the robot with soft material to achieve both impact-force attenuation and contact stability, keeping within the human pain tolerance limit [18, 8]. When the system detects a contact, a naive stop control is activated. Tadokoro et al. predict future human positions and change a robot arm's trajectories so as to minimize the danger [19]. The simulations yield good results, but with limitations regarding the human model (a 2D point) and the robot trajectories. In their subsequent work [20, 12], the simpler case of a mobile robot is considered. Furthermore, due to its large uncertainty, people's motion is, by and large, difficult to predict. After a motivating introduction to the need for and future of safety,
Rechsteiner et al. [16] stick to the problem of computing a dynamically adjustable 3D "separating surface" between the robot and the person, and do not address the problem of the robot's actions. The work by Baerveldt [2] consists of a computer vision system on the robot that detects and locates a human being. The robot arm's speed is limited according to its closeness to the person, but the robot's path is not changed to avoid the person. To ensure safety, Heinzmann et al. adopt a robot with all joints force-controlled [7]. Ikuta et al. propose a general evaluation method of safety for human-care and medical robots [8]. They define a danger index to quantitatively evaluate the effectiveness of each safety strategy. Nevertheless, the strategies considered are those aimed at minimizing human injury after a human-robot collision, by studying "static" aspects chosen at the design phase, while our interest lies in "dynamic" aspects controlled at the task-execution phase. As opposed to physical augmentation, Wakita et al. propose an intelligent augmentation of the robot. While the idea of the former is to reduce the physical impact of a robot on a human body, the latter seeks to prevent and avoid undesirable person-robot contacts, with the key point being information sharing, i.e., the robot informs the user of its intended motion before it takes place. Without this information, the person is more likely to be frightened by the robot's motion [22]. The limited research carried out on human safety may be due to the fact that this subject is viewed as uninteresting by researchers [3], or because of the vagueness of what adequate safety means [6, 7, 8]. On the other hand, people who worked on it in the past [2, 16] have not dealt with it since.
2 The Proposed Strategies
The low-level solutions proposed in the literature are too limited (e.g., simple emergency stops). Higher-level approaches tend to focus only on the human side (e.g., gesture recognition) or on the robot side (e.g., sensitive skins), neglecting the other part. Hence, smarter solutions are lacking which pay attention to the relationship between human beings, robots and the environment. Our design philosophy consists of enriching traditional collision-detection schemes with problem-specific knowledge. Thus, a person is considered a special, living "obstacle", about which we (can) have information. From this valuable knowledge, strategies with capabilities such as recognizing the human action and understanding its intention can be developed, so as
to make the robot behave in a natural and human-friendly way. Two different strategies have been designed: the elusive and the ergonomic. The first owes its name to the fact that the robot behavior mostly consists of moving away from the person, who will feel at ease as (s)he will not be frightened by the robot's motion. The second takes a variety of robot and human information into account when deciding its motion. Before describing these strategies, we briefly present a collision detection framework used by the authors in the past, along with some recent enhancements. Pieces of this method are used by our strategies. The collision detection algorithm itself could be regarded as a lower-level strategy, in which little use is made of a priori and on-line knowledge about our particular obstacle (the person).
2.1 The Collision Detection Algorithm
We use a hierarchical spherical representation for each solid we want to study. Starting with only one bounding sphere, we add more spheres so that the approximate representation fits the shape of the object better. The underlying data structure is a tree whose root node is a single sphere enclosing the whole object. Each node holds a representation: the deeper the level, the more refined the representation. For a detailed explanation of the representation please refer to the companion paper [13]. At each time instant, we begin by studying the collision between the spheres at the first level of the sphere tree of each element. The elements whose spheres do not collide are collision-free, because the spheres completely enclose the solids. The other ones (i.e., the elements whose spheres collide) are studied again, with the colliding spheres replaced by more refined spheres. This process is repeated until the collisions between spheres disappear or a termination condition is reached. The complexity of this algorithm is O(m^2), i.e., quadratic in the number m of spheres. This is usually not a problem in our case, because the number of spheres is always kept low: at the first level only one sphere per solid is used, while at deeper levels the intersection test is constrained to the spheres of the objects in the scene that are closest to one another. However, if the number of objects to be studied is high, this algorithm becomes too slow. To improve the complexity of the algorithm, we sort the spheres by their lower boundaries along each of the three coordinate axes. Two spheres can intersect only if, along each axis, the lower boundary of one of them is below the upper boundary of the other.
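For illustration, the hierarchical test can be sketched as follows (in Python; the class layout, function names and the maximum refinement depth are our own assumptions, not the implementation of [13]):

    from dataclasses import dataclass, field

    @dataclass
    class SphereNode:
        center: tuple                                   # (x, y, z) sphere center
        radius: float                                   # sphere radius
        children: list = field(default_factory=list)    # more refined sub-spheres

    def spheres_overlap(a, b):
        """Two spheres overlap when the squared distance between their centers
        is smaller than the squared sum of their radii."""
        d2 = sum((ca - cb) ** 2 for ca, cb in zip(a.center, b.center))
        return d2 < (a.radius + b.radius) ** 2

    def trees_collide(a, b, max_depth=6, depth=0):
        """Hierarchical test: refine only the pairs of spheres that overlap.
        Returns True when a collision cannot be ruled out before max_depth."""
        if not spheres_overlap(a, b):
            return False                # bounding spheres are separate: collision-free
        if depth >= max_depth or (not a.children and not b.children):
            return True                 # termination condition reached while still colliding
        kids_a = a.children or [a]      # replace colliding spheres by their refinements
        kids_b = b.children or [b]
        return any(trees_collide(ca, cb, max_depth, depth + 1)
                   for ca in kids_a for cb in kids_b)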
Then the algorithm only compares those spheres in the ordered list that are really close. This reduces the number of sphere-sphere collision checks, at the expense of the cost of sorting. Thus, the complexity is O(m log m) to sort the spheres, O(m) to find out which spheres may collide, and O(k) for the collision checks themselves, where k is the number of spheres that are really close, which is situation-dependent. Additionally, a parallel version has also been developed, achieving a significant speed-up [15].
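The sorting step can be realized as a sweep-and-prune pass along one axis; the sketch below (continuing the SphereNode sketch above, with our own naming and data layout) returns the k candidate pairs whose x-intervals overlap:

    def close_pairs_1d(spheres):
        """Candidate pairs whose intervals along the x axis overlap.
        Each sphere s spans [s.center[0] - s.radius, s.center[0] + s.radius]."""
        intervals = sorted((s.center[0] - s.radius, s.center[0] + s.radius, i)
                           for i, s in enumerate(spheres))       # O(m log m) sort
        active, pairs = [], []
        for lo, hi, i in intervals:                               # O(m) sweep
            active = [(h, j) for h, j in active if h >= lo]       # drop expired intervals
            pairs.extend((j, i) for h, j in active)               # O(k) overlapping pairs
            active.append((hi, i))
        return pairs

Only the pairs surviving this pruning on all three axes need the exact sphere-sphere distance test.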
Figure 1: Robot states and transitions between them

2.2 The Elusive Robot
The robot and the person are both approximated by the spherical representation outlined above. The pair of closest spheres between them provides an approximate collision vector and a human-robot distance d_hr (the subindex r denotes the robot and h the human being), which is used to adjust the robot's speed: the smaller d_hr is, the slower the robot moves. However, to make this simple strategy a little less conservative, we try to move the robot in the direction opposite to the person's motion, which helps avoid a collision even when the person is the one "guilty" of it. This behavior can be viewed as a reflex action of human beings: when we detect an imminent (danger of) collision, we move somehow in an attempt to avoid it. The robot "evading" the nearby person thus has a two-fold interpretation: (1) a way to increase human safety, and (2) a reflex self-defense movement. Our approach draws its inspiration from the potential field method, initially proposed by Khatib [9] and widely used since then. Our collision vector acts as a repulsive force in the workspace W. As a result, we need a way to map a 3D force F in W, applied at a point P on the robot, to the n-dimensional configuration space C, with n being the number of degrees of freedom of the robot. We have devised a computationally inexpensive method to do such a mapping [21]. For a given robot R, we denote by M_R(F, P) the function which, given a force F in W acting on a point P in W, computes and applies the joint offsets, found as explained in [21]. This function is used on several occasions in both the elusive and the ergonomic strategies. With it, we also obtain the contribution rho_i in [-1, 1] of each joint i to moving P according to F. If (S_r, S_h) is the pair of closest spheres, with S_i = (C_i, R_i) the sphere centered at C_i with radius R_i, i in {r, h}, then, when we want the robot to move away from the person, we apply M_R(C_r - C_h, C_r).
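The actual mapping M_R(F, P) is the one described in [21]; as a stand-in illustration only, a Jacobian-transpose mapping (an assumption of ours, not the paper's method) yields joint offsets and per-joint contributions rho_i of the kind used here:

    import numpy as np

    def joint_offsets_from_force(jacobian, force, gain=0.01):
        """Map a 3D force F in the workspace W, applied at a point P of the robot,
        to offsets of the n joints.  A Jacobian-transpose mapping is assumed here
        purely for illustration; the paper relies on the method of [21] instead.
        jacobian: 3 x n linear Jacobian of P;  force: 3-vector in W."""
        dq = gain * (jacobian.T @ np.asarray(force, dtype=float))  # n joint offsets
        peak = np.max(np.abs(dq))
        rho = dq / peak if peak > 0.0 else dq   # per-joint contribution in [-1, 1]
        return dq, rho

    # Elusive reflex: push the robot away from the person along C_r - C_h,
    # with the force applied at the robot's closest sphere center C_r.
    def elusive_offsets(jacobian_at_Cr, C_r, C_h):
        force = np.asarray(C_r, dtype=float) - np.asarray(C_h, dtype=float)
        return joint_offsets_from_force(np.asarray(jacobian_at_Cr, dtype=float), force)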
After deviating from the original pre-planned path because of its interaction with the environment (namely, the human action), the robot should move back to this path, in an iterative fashion. These are individual components of a global collision-avoidance strategy. We blend them all together by means of a number of states. Besides the states, we have some parameters that allow us to tune the strategy. These parameters are distance thresholds and time-outs, which specify the transitions between states. The states and the transitions are briefly described here. When the robot is moving along the original path it is in the PATH state. As the human gets closer, the robot slows down. If robot and human get too close, the speed will be very low and the robot enters the STOPPED state. When certain conditions are met, the avoidance strategy fires, which happens within the AVOID state. Finally, the robot goes back to the planned path when it is in the BACK state. Figure 1 depicts these states and the possible transitions. The parameters used are the following. Two distance thresholds, d1 and d2, define when the robot should react to the human in terms of how close they are. We also define some timers (counters) with their associated time-outs: t_W is the maximum time the robot will be (waiting) in the STOPPED state. Another counter, t_A, is used to set how far the robot should get away from the human (i.e., how long it will stay in the AVOID state). We also estimate how much the robot has deviated from the nominal path so as to give it enough time to return to it (t_B controls the time to be spent in the BACK state). By assigning different values to the parameters, the robot may exhibit different behaviors. For example, if t_W is high, the robot will have a "passive" attitude, as if it wanted to let the environment evolve before taking a decision. On the contrary, with a low t_W the robot will behave more "nervously". This is very important, as it is known how sensitive human beings are to nearby motion [14]. Hence, with different parameters, we can endow the robot with different "characters", thus provoking different feelings in the person.
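The state logic can be sketched as follows; the transition conditions and the default parameter values are illustrative assumptions of ours, and figure 1 remains the reference for the actual transitions:

    from enum import Enum, auto

    class State(Enum):
        PATH = auto()
        STOPPED = auto()
        AVOID = auto()
        BACK = auto()

    class ElusiveController:
        """PATH / STOPPED / AVOID / BACK arbitration driven by the human-robot
        distance d_hr, the two thresholds d1 > d2, and the time-outs t_W (wait),
        t_A (avoid) and t_B (back).  Default values are illustrative only."""

        def __init__(self, d1=1.0, d2=0.4, t_W=50, t_A=30, t_B=40):
            self.d1, self.d2 = d1, d2
            self.t_W, self.t_A, self.t_B = t_W, t_A, t_B
            self.state, self.timer = State.PATH, 0

        def speed_factor(self, d_hr):
            # full speed beyond d1, proportional slowdown between d1 and d2, stop below d2
            return min(1.0, max(0.0, (d_hr - self.d2) / (self.d1 - self.d2)))

        def step(self, d_hr):
            self.timer += 1
            if self.state is State.PATH and d_hr < self.d2:
                self.state, self.timer = State.STOPPED, 0   # too close: stop and wait
            elif self.state is State.STOPPED and self.timer > self.t_W:
                self.state, self.timer = State.AVOID, 0     # waited long enough: move away
            elif self.state is State.AVOID and self.timer > self.t_A:
                self.state, self.timer = State.BACK, 0      # done avoiding: return to the path
            elif self.state is State.BACK and (self.timer > self.t_B or d_hr > self.d1):
                self.state, self.timer = State.PATH, 0      # back on the nominal path
            return self.state, self.speed_factor(d_hr)

A "passive" or "nervous" character is obtained simply by choosing a larger or smaller t_W in this sketch.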
2.3 The Ergonomic Robot
The elusive strategy may create a sense of relief in the person, but it does not take into account information regarding the robot, the human being or the task context. With the ergonomic strategy, we do consider the robot and the person. Although not considered here, the environment and the context are important because the same safety strategy may make sense under some conditions but not under others [7]. Additionally, in the light of a task context, the rich and complex human action may be easier to interpret. Different robots may be dangerous to different degrees, and a given robot is more or less threatening depending on its motion and its velocity. To take account of this information, we compute the danger of a given robot posture. Undoubtedly, for the same speed, the further the end-tip is from the robot base, the more dangerous the robot arm is, as it is more likely that a nearby person be hit. We compute the posture danger, Gamma(q), by measuring how far apart T, the tool center point, is from the vertical axis v passing through the robot's base. As we know the maximum value of this dangerousness, we normalize it to [0, 1]. When we need to decrease the robot posture danger, we move the robot tip inwards, in a contracting way, as follows. We compute d, the vector joining T to its projection on the straight line having v as director vector. The desired motion can then be achieved by applying M_R(d, T). However, this motion is not always the most appropriate. If the robot is very close to the person, and the robot arm is (almost) outstretched, it may be better for the robot to "bypass" the person by moving over him or her. To that end we apply an upward force, raising the robot end-effector: this time we apply M_R(u, T), where u is a vertical vector pointing upwards. However, we have to check whether the human head can easily be passed over, by computing the distance d_TH between the robot end-effector and the human head. Both when reducing the robot posture danger and when raising the robot arm, the joints not taking part in such motions may still contribute to moving the robot arm towards its goal. We know how much a joint i can contribute to the intended motion by means of rho_i, evaluated within the M_R(., .) function. Thus, 1 - |rho_i| is a measure of the remaining contribution available for moving the robot's joint i towards its goal. Clearly, what the person is viewing, or what (s)he is looking at, gives us an important clue about his or her awareness of the robot: the robot's motion is more dangerous if the person is not aware of its presence. Thus, our instantaneous danger evaluation also takes this into account
by using the "look" vector l (figure 2) to quantify to what extent the person may be aware of the robot.

Figure 2: The look vector

Moreover, the facing direction (i.e., the direction the person's body is facing), combined with the look direction, gives even more information, e.g., whether the person is looking in the direction (s)he is walking. Besides the human-robot distance, we also compute the rate at which this distance changes, to account for how fast the robot and the person are approaching one another: the faster, the greater the danger. We blend all this information together by computing an overall danger index eta as a function G of the instantaneous values described above (the human-robot distance d_hr and its rate of change, the posture danger Gamma(q), and the look vector l).
Many definitions of the function G are possible, but it can simply be a weighted sum of all the factors, so that we can give more importance to some of them. Based on this index and on the relative weight of each term, a particular robot behavior is fired. Hence, some arbitration function G_i, depending on a subset of these factors, is associated with each behavior i. For instance, the "contraction behavior" is a function of d_hr and Gamma(q), but does not take the look vector l into account. Additionally, some kind of memory is necessary. For instance, by monitoring l we can guess that the person is aware of the robot. This condition should be memorized for a while; otherwise, as soon as the person turns his or her visual attention away from the robot, l will contribute more than necessary to a larger eta.
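As a concrete reading of the weighted-sum option, the sketch below computes Gamma(q) from the tool center point and blends the normalized factors into eta; the weights, the normalizing constants and the awareness term derived from l are illustrative assumptions, not values from the paper:

    import numpy as np

    def posture_danger(tcp, base, reach_max):
        """Gamma(q): horizontal distance of the tool center point T from the
        vertical axis through the robot base, normalized to [0, 1] by the
        maximum horizontal reach of the arm."""
        dx, dy = tcp[0] - base[0], tcp[1] - base[1]
        return min(1.0, float(np.hypot(dx, dy)) / reach_max)

    def danger_index(d_hr, d_hr_rate, gamma, awareness,
                     w=(0.4, 0.2, 0.2, 0.2), d_max=2.0, v_max=1.0):
        """Overall danger index eta as a weighted sum of normalized factors:
        closeness, approach speed, posture danger Gamma(q), and lack of
        awareness (awareness in [0, 1], derived from the look vector l)."""
        closeness = 1.0 - min(1.0, d_hr / d_max)            # 1 when touching, 0 when far
        approach = min(1.0, max(0.0, -d_hr_rate) / v_max)   # only approaching motion counts
        unaware = 1.0 - awareness
        factors = np.array([closeness, approach, gamma, unaware])
        return float(np.dot(np.asarray(w), factors))

An arbitration function G_i for a particular behavior would use the same factors restricted to its own subset; e.g., the contraction behavior would combine only the closeness term and Gamma(q).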
3 Experimental Results
Our final goal is a real-time system working with real robots, but we are currently performing computer-graphics simulations with Jack, a software package developed at the Center for Human Modeling and Simulation at the University of Pennsylvania (http://www.cis.upenn.edu/~hms/jack.html). Additionally, we use MotionStar, a real-time motion capture system developed by Ascension Technology Corporation (http://www.ascension-tech.com). Jack's motion can then be driven by the tracker system to obtain realistic human actions. As an example of the ergonomic strategy, figure 3 shows how a Puma robot avoids hitting the person by modifying its trajectory. The person is within the robot's workspace, raising his torso, as if he were taking something from the chair. The robot is made to move between two subgoals, covering a wide area that includes the person, with the arm fully stretched horizontally. However, when it encounters the man, it gently moves the arm upwards (this is the contractive motion explained above) before reaching the human body, and smoothly moves towards its next goal after having passed by the person. Note that, even while the robot's shoulder and elbow rise, the waist keeps moving towards its goal joint value. This is so because, as explained above, we compute how much each joint can contribute to a given movement. In this case it is detected that the shoulder and elbow joints can contribute a lot to the upward motion (hence, the corresponding robot elements rise), while the waist cannot (hence, it can keep contributing to reaching the goal, though moving more slowly). Through the experiments we have observed that this kind of behavior is interesting and looks quite natural (i.e., it resembles what people do to avoid obstacles while moving towards a goal). In some experiments, the robot has exhibited an active attitude, acting collaboratively rather than merely stopping when the person approaches it. The computations needed for these safety strategies are kept to a minimum, as the reader can readily appreciate (no complex computations are performed in C-space, for instance), so that real-time performance can be achieved both in the graphic simulations now and in real-world applications in the future. However, we have also experienced some problems, such as local minima for particular values of the parameters and in certain configurations of the robot and the person. Overcoming this is part of the future work outlined below.

Figure 3: The robot avoiding the human being

4 Conclusion and Further Work
The fact that robots are entering human environments gives rise to challenging issues, with safety playing a role of paramount importance. We have presented here the elusive and the ergonomic strategies, which are our initial efforts to make service robots not only human-safe, but also human-friendly. The few works addressing the problem do so at a very low level, or propose the use of special robots or strategies devised at design time. We believe that human-safe strategies should bear in mind the special characteristics of human beings; to date, however, this aspect has virtually been ignored. We have found that an incredibly wide range of disciplines, besides robotics, can do their bit in the design of human-safe service robots. At the same time, they are a source of inspiration to foster this young research field. Non-verbal communication [17] deserves special attention, as it could be applied to make the human action [11, 10] meaningful for the robot. Exploiting the implicit human input, as well as the bi-directionality of human-robot communication [14], also seem promising ideas for human-tailored service robots. At a higher level, models of the human's and the robot's emotional state [14, 5], or its recognition, would be of interest too. Other areas we should look at include psychology, human-machine interaction, computer graphics, ergonomics, human factors, and the more topical fields of virtual or augmented reality [22]. Finally, invaluable feedback will be gained from human subjects experiencing our robot strategies (in virtual or real robot environments).
Acknowledgments

This paper describes research done at the Robotic Intelligence Laboratory, Jaume-I University. Support for this laboratory is provided in part by Generalitat Valenciana under project GV97-TI-05-8, by CICYT under project TAP98-0450, and by Fundació Caixa-Castelló under project P1B97-06.
References

[1] Department of Energy (DOE) OSH Technical Reference, Chapter 1: Industrial Robots. Web page http://tis.eh.doe.gov/docs/osh-tr/ch11.html.
[2] A.-J. Baerveldt. A safety system for close interaction between man and robot. In SAFECOMP'92: Safety of Computer Control Systems, pages 25-30. International Federation of Automatic Control, 1992.
[3] B. Carlisle. On creating a humanoid. Notes on a plenary talk at ICAR'97, 1997.
[4] J. Elliot, S. Brooks, and P. Hughes. A framework for enhancing the safety process for advanced robotic applications. In Achievements and Assurance of Safety: Proceedings of the 3rd Safety-Critical Systems Symposium, pages 131-152, 1995.
[5] S. C. Gadanho and J. Hallam. Exploring the role of emotions in autonomous robot learning.
[6] S. P. Gaskill and S. R. G. Went. Safety issues in modern applications of robots. Reliability Engineering and System Safety, 53:301-307, 1996.
[7] J. Heinzmann et al. Smart interface + safe mechanisms = human friendly robots. In IARP Intl. Workshop on Humanoid and Human Friendly Robotics, Oct. 1998. http://wwwsyseng.anu.edu.au/rsl.
[8] K. Ikuta and M. Nokata. General evaluation method of safety for human-care robots. In IEEE Intl. Conf. on Robotics and Automation, pages 2065-2072, Detroit, Michigan, May 1999.
[9] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The Intl. Journal of Robotics Research, 5(1):90-98, 1986.
[10] D. Kortenkamp, E. Huber, and R. P. Bonasso. Recognizing and interpreting gestures on a mobile robot. In Proceedings of AAAI-96 (13th National Conf. on AI), pages 915-921, Aug. 1996.
[11] Y. Kuniyoshi and H. Inoue. Qualitative recognition of ongoing human action sequences. In 13th Intl. Conf. on Artificial Intelligence, 1993.
[12] Y. Manabe, M. Hattori, S. Tadokoro, and T. Takamori. Generation of home robots movement based on prediction of human actions (a model of human actions by a Petri net and prediction of human acts). In CESA'96 IMACS Multiconference, pages 210-215, Lille, France, July 1996.
[13] B. Martínez, A. P. del Pobil, and M. Pérez-Francisco. A hierarchy of detail for fast collision detection. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, Takamatsu, Japan, 2000.
[14] T. Nakata, T. Sato, and T. Mori. Expression of emotion and intention by robot body movement. In Intelligent Autonomous Systems 5 (IAS-5), San Juan, Puerto Rico, 1998. http://www.ics.t.u-tokyo.ac.jp/~shizuka/index.html.
[15] M. Pérez-Francisco, A. P. del Pobil, and B. Martínez-Salvador. Very fast collision detection for practical motion planning. Part II: The parallel algorithm. In IEEE Intl. Conf. on Robotics and Automation, 1998.
[16] M. Rechsteiner, M. Thaler, and G. Troester. Implementation aspects of a real time workspace monitoring system. In Fifth Intl. Conf. on Image Processing and its Applications, Heriot-Watt University, Edinburgh, UK, July 1995.
[17] T. Sato, Y. Nishida, J. Ichikawa, Y. Hatamura, and H. Mizoguchi. Active understanding of human intention by a robot through monitoring of human behaviour. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, volume 1, pages 405-414, 1994.
[18] K. Suita, Y. Yamada, N. Tsuchida, K. Imai, H. Ikeda, and N. Sugimoto. A failure-to-safety "Kyozon" system with simple contact detection and stop capabilities for safe human-autonomous robot coexistence. In IEEE Intl. Conf. on Robotics and Automation, pages 3089-3096, 1995.
[19] S. Tadokoro et al. Stochastic prediction of human motion and control of robots in the service of human. In IEEE Intl. Conf. on Systems, Man and Cybernetics, pages 503-508, 1993.
[20] S. Tadokoro, M. Hayashi, Y. Manabe, Y. Nakami, and T. Takamori. On motion planning of mobile robots which coexist and cooperate with human. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, pages 518-523, 1995.
[21] V. J. Traver, A. P. del Pobil, and M. Pérez-Francisco. Smart safe strategies for service robots interacting with people. In The 6th Intl. Conf. on Intelligent Autonomous Systems (IAS-6), Venice, Italy, July 2000.
[22] Y. Wakita, S. Hirai, T. Hori, R. Takada, and M. Kakikura. Realization of safety in a coexistent robotic system by information sharing. In IEEE Intl. Conf. on Robotics and Automation, pages 3474-3479, Leuven, Belgium, May 1998.
[23] N. Yamasaki and Y. Anzai. Active interface for human-robot interaction. In IEEE Intl. Conf. on Robotics and Automation, pages 3103-3109, 1995.