A Safe-Control Paradigm For Human-Robot Interaction

J. Heinzmann ([email protected]) and A. Zelinsky ([email protected])
The Australian National University, Canberra ACT 0200, Australia
Abstract. This paper introduces a new approach to controlling a robot manipulator in a way that is safe for humans in the robot's workspace. Conceptually, the robot is viewed as a tool with limited autonomy. The limited perception capabilities of automatic systems prohibit the construction of failsafe robots with the capabilities of people. Instead, the goal of our control paradigm is to make interaction with a robot manipulator safe by making the robot's actions predictable and understandable to the human operator. At the same time, the forces the robot applies to its environment with any part of its body have to be controllable and limited. Experimental results are presented of a human-friendly robot controller that is under development for a Barrett Whole Arm Manipulator robot.
Keywords: Human-Robot Interaction, Human-Robot Interfaces, Visual Interfaces
1. Introduction

If robotics technology is to be introduced into the everyday human world, the technology must not only operate efficiently, executing complex tasks such as house cleaning or putting out the garbage; it must also be safe and easy for people to use. Executing complex tasks in unstructured and dynamic worlds is an immensely challenging problem. Researchers have realized that to create robotic technology for the human world, the most efficient form is the humanoid robot form, as people have surrounded themselves with artifacts and structures that suit dual-armed bipeds, e.g. a staircase. However, merely giving a robot the human shape does not make it friendly to humans. Present-day humanoid robots, instead of stepping over a person lying on the ground, would most probably walk on the person without falling over! What constitutes a human-friendly robot? Firstly, human-friendly robots must be easy to use and have natural communication interfaces. People naturally express themselves through language, facial gestures and expressions. Speech recognition in controlled situations, using a limited vocabulary with minimal background noise interference, is now possible and can be included in human-robot interfaces. However, vision-based human-computer interfaces have only recently begun to attract considerable attention [see the proc. of the International Conference on Automatic Face and Gesture Recognition]. With a visual interface a robot could recognize facial gestures such as "yes" or "no",
© 1999 Kluwer Academic Publishers.
Printed in the Netherlands.
jirs99.tex; 14/09/1999; 11:45; p.1
as well as being able to determine the user's gaze point, i.e. where the person is looking. The ability to estimate a person's gaze point is most important for a human-friendly robot. For example, a robot assisting the disabled may need to pick up items that attract the user's gaze. Our goal is to build "smart" interfaces for use in human-robot applications. The second requirement for a human-friendly robot is that it must possess a high-integrity human safety system that ensures the robot's actions never result in physical harm. One way to solve the human safety problem is to build robots that could never harm people, by making the mechanisms small, light-weight and slow moving. However, this results in systems that can't work in the human world, i.e. that won't be able to lift or carry objects of any significance. The only alternative is to build systems that are strong and fast, yet still human-friendly. Current commercial robot manipulator technology is human-unfriendly. This is because these robots only use point-to-point control. Point-to-point control is quite dangerous in dynamic environments where situations and conditions can unexpectedly change. If an object unexpectedly blocks a planned point-to-point path, the object could be smashed out of the robot's path or the robot could sustain considerable damage. Clearly, a compliant force-based controller is needed in such situations. However, the technology for force-based control of robots is not yet readily available. The current practice is to add force sensors to existing robot manipulators, usually at the tool plate. This allows force control of only the robot's end point, which means the robot could still easily collide with unexpected objects with other parts of the manipulator without noticing. To ensure safety, a Whole Arm Manipulation (WAM) approach must be adopted, in which all the robot's joints are force controlled. Another danger that must be guarded against is software failure. A framework is needed to prevent a robot from continuing to work if a computer hangs or sends erroneous commands. In this paper we describe our recent results in building a safe control architecture and scheme for our Barrett WAM robot arm manipulator.
2. Smart Interfaces + Safe Mechanisms = Human-Friendly Robots

The human-robot interface for our human-friendly robot consists of a visual face tracking system. The system uses a monocular camera and a hardware vision system to track various facial features such as the eyes, the eyebrows, the mouth and the ears. Based on this information, the 3D position and orientation of the head is calculated.
Figure 1. A) The three layers of the visual face tracking system: hardware bitmap correlation and template tracking; a software Kalman filter network relating the 2-D measured positions and search window positions to the relative feature positions obtained by affine projection of the 3-D model; and software 3-D pose estimation. B) Facial features tracked by the system. The triangles denote the planes used for the 3D pose estimation.
Figure 1A shows the three layers of the face tracking system. On the lowest level, the Fujitsu color tracking vision system (CTRV) performs template correlations in real time. The system is capable of correlating 200 color 8×8 pixel templates at video frame rate, using the sum of absolute differences (SAD) between the pixels of the correlation kernel and the image. The system tracks 19 facial features simultaneously, marked with boxes in Figure 1B. Most of the computational resources are used for error detection and correction. On the second layer, the results of the template correlations are merged using a network of Kalman filters. The Kalman filters exploit the positioning constraints of the features in the projection. These 2D constraints define the relative positions of the features within the image plane. They are derived from the 3D model in the third layer. Using geometric constraints between the features together with the probabilistic merging mechanism of the Kalman filter makes the feature tracking robust and allows for partial occlusions of the face. The tracking is dominated by those features which are tracking well, while the positions of badly tracking or occluded features are estimated from the geometric model. On the highest level, the 3D pose of the head is derived using feature triplets. The algorithm is based on an extended version of the Huttenlocher and Ullman method (Huttenlocher and Ullman, 1990), which uses three points to recover the 3D pose of an object assuming affine projection. By merging the results from at least two feature triplets in different spatial planes, the robustness of the estimate is increased, the sensitivity to tracking errors is decreased, and the ambiguity of the solution for a single feature triplet is resolved. This facial pose estimation system will be used as an interface for our human-friendly robot.
A person will be able to select different objects on a table with his/her gaze and command the robot using facial gestures to pick up the selected object and hand it over to the
operator. Facial gestures can also be used to correct errors when an incorrect object has accidentally been selected. A visual interface is appropriate for a human-friendly robot: it allows the operator to communicate with the robot in a natural and intuitive way. Technical details of the different aspects of the system can be found in (Heinzmann and Zelinsky, 1998) and (Zelinsky and Heinzmann, 1998).
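The probabilistic merging in the second layer can be illustrated with a scalar sketch: each feature's measured position is fused with the position predicted from the geometric model, and the measurement is weighted down when its template correlation score is poor, so occluded features fall back to the model prediction. The variance heuristic, threshold and scale below are invented for illustration; they are not the system's actual implementation.

```python
def kalman_merge(model_pred, model_var, measurement, sad_score,
                 sad_threshold=5000.0):
    """Fuse a feature position predicted from the head model with a
    template-correlation measurement (illustrative sketch only).

    A poor SAD correlation score inflates the measurement variance,
    so badly tracking or occluded features are pulled towards the
    model prediction instead of the unreliable measurement."""
    # Heuristic: variance grows quadratically once SAD exceeds the threshold.
    meas_var = 1.0 + 100.0 * max(0.0, sad_score / sad_threshold - 1.0) ** 2
    gain = model_var / (model_var + meas_var)        # scalar Kalman gain
    fused = model_pred + gain * (measurement - model_pred)
    fused_var = (1.0 - gain) * model_var
    return fused, fused_var
```

With a good correlation score the fused estimate follows the measurement; with a very poor score it stays near the model prediction.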
3. A Safety Philosophy for Human-Friendly Robots

Firstly, we should define the safety requirements for a human-friendly robot and the implications these requirements have on the design of such systems. The goal of this development is to construct robots that are able to interact with humans, and in particular that are safe for a human to be inside the robot's workspace while the robot is active. This of course runs counter to the current philosophy of safety in robot applications. The most common solution to human safety is to separate robots and humans, for good reasons. Industrial robots usually work inside cages that keep persons from entering the robot's workspace. Most accidents with robots are caused by people overriding the safety mechanisms and entering the work cell while the robot is in operation. The robot can easily injure the person without even noticing: the robot's position-based controller will consider the resistance only as a disturbance and push harder to reach the set position! This control strategy is not human-friendly by any standards. The safety goal for the design of a human-friendly robot is expressed in the following definition:

SAFETY GOAL 1. A human-friendly robot must be designed such that it can be operated without posing a threat when humans are inside the robot's workspace.

According to this design objective, neither a voice-controlled nor a humanoid robot is human-friendly per se unless it can be operated safely with people. This observation guides us towards practical problems. It is quite easy to make a safe robot by designing its hardware so that it cannot possibly cause harm to a person: by making it small, light-weight, flexible, round, padded and weakly actuated. However, the tasks which such a robot could perform would be quite limited. A
robot that is capable of performing everyday tasks has to be provided with actuators that are sufficiently strong. This usually makes the robot capable of harming humans. This does not necessarily mean that human-friendly robots have to be weak and useless toys. Instead, the research challenge in human-friendly robotics is to develop control strategies for potentially dangerous machines that achieve the safety goal. In the following discussion we assume that the goal of the design is to control a potentially harmful robot in a human-friendly way.

3.1. The Perception Problem

Industrial work cells are highly structured so that all objects are located in precisely defined positions. This is the only way to ensure that the robot's internal model of its environment does not diverge from reality. Introducing humans into the workspace of robots makes the environment unstructured. People may introduce new, unknown objects or remove objects from the environment, and the people themselves are dynamic objects that can influence the environment beyond the robot's perception. A perfectly safe robot would have to respond to any situation in a way that minimizes the possible threat to any person in its environment. This would require the robotic system not only to acquire all relevant information about its environment, but also to draw the correct conclusions from this information: an impossible task with present and foreseeable technology. Presently the perception of robotic systems is limited; in particular, the information that can be acquired reliably is restricted essentially to the state information of the robot. Information about the robot's environment is usually limited to sensor data from laser, ultrasonic, vision and other sensors. Even this information is not reliable enough to base critical safety decisions on (double reflections or absorption with ultrasonic sensors, false results from vision algorithms, etc.).
The sensors can measure certain physical properties of the environment, but the higher-level context of a situation can not be reliably determined by a computer system using this data.

SAFETY GOAL 2. In an unstructured environment which may contain humans, any action autonomously taken by the robot must be safe even when the robot's sensor information about the environment is false.

This requirement implies major limitations on the actions available to the robot. However, we later restrict this requirement to actions initiated autonomously by the robot. As reliable data available to the robot we include the robot's own state information: joint positions and cartesian positions, velocities and torques. These parameters provide the basis for restricting the actions taken by the robot.

3.2. Safety Strategy

With this basic state information, the threat potential of the two most important dangers posed to people by a robotic system can be determined:

1. The kinetic energy of the robot.

2. The forces applied by the robot after contact with an external object.

It would be a reasonable safety strategy to limit both threats in a way that prevents the robot from doing any serious harm to a person. However, when a person is interacting with the robot it is impossible to guarantee these limits. Most robots are backdrivable to some degree, i.e. the joints can be moved by external forces. For this reason many robots are equipped with brakes that prevent gravity from backdriving the motors when the robot is powered down. For human-robot interaction in particular, it is sensible to choose a mechanical system that can be easily backdriven by a person. A hardware design which is not backdrivable at all makes it impossible for a person to interact with the robot directly by pushing and pulling the manipulator. In a highly backdrivable system, however, a person can apply external forces that accelerate the robot to kinetic energies beyond the preset safety limits. On the other hand, the torques the robot needs to slow itself down when accelerated too much by external forces may exceed the preset limits for the forces that can be applied by the robot. Whatever action the robot takes, it is possible that its safety limits are violated. The characteristics of the system must be defined carefully in order to achieve safe operation.

3.3. Safety Characteristics

The responsibilities between a human operator and a robot control system need to be distributed with the low perception capabilities of robotic systems in mind.
Many technical and potentially dangerous systems are controlled exclusively by humans with the support of simple, easy-to-understand and predictable technical control systems, e.g. cars, aircraft, construction-site machinery. The autonomy of such systems is limited to the extent that the system can achieve the task
flawlessly while the overall behavior of the system remains understandable and controllable to the human operator. An everyday example of such a technical system is the motor vehicle, a potentially dangerous technical system for both the driver and other persons in the environment. However, most people feel comfortable and safe whilst driving cars or walking along streets. The human driver controls the overall behavior of the car. Various electronic and mechanical systems provide specialized functions such as clutch and gear handling in automatics and anti-lock braking systems. These mechanisms work autonomously, without even requiring the driver to know how they are implemented or what state they are in. The driver specifies the overall behavior of the system using the steering wheel, brake and throttle. Recently, systems for autonomous highway driving have been developed (Thomanek and Dickmanns, 1996; Jochem et al., 1993). These systems are designed as extensions to existing mass-produced cars. A similar approach is appropriate for the research area of human-friendly robots. Before high-level autonomous behaviors of robots can be developed, the basic safety characteristics for human-friendly robots must be investigated and practicable strategies must be found to control such systems. Building on these fundamental strategies, higher levels of autonomy can be implemented. As stated previously, developing the concept of safe autonomous motion of human-friendly robots is the primary target of this research. Similar to other technical systems, a human-friendly robot should have the following characteristics:
- Ease of control by a human operator

- Provision of a certain degree of autonomy which makes operating the system easier

- Autonomous actions that are predictable to the operator and understandable from the commands defining the overall behavior

- Autonomous actions that are not threatening to the operator or other people
Similar to motor vehicles, the central coordinator in the system is the human operator. Instead of trying to build the perfectly safe car, the driver is made responsible for the safe operation of the system. Various safety measures are built into cars to protect people in cases of human error, e.g. safety belts and air bags.
A similar philosophy applies to human-friendly robots. Instead of trying to build a system that is failsafe, the system should be controllable and predictable for the human operator. As with motor vehicles, autonomy should be added in order to make it easier for the human operator to control the system. However, all autonomous actions taken by the system have to be safe in any situation, given the perceptual limitations. An example of such a system is the anti-lock braking system: the driver does not control the braking force in the vehicle, only the desired rate of deceleration of the car. This makes the control of a car much easier during an emergency stop. In fact, human-friendly robots can be compared to motor vehicles in another respect. A major problem with the introduction of autonomous driving devices into motor vehicles is the liability problem. Car manufacturers can hardly afford to be held liable in cases where, despite all safety measures, an accident still occurs due to negligence or improper use of the vehicle by the driver. The same problem arises with robots intended for human interaction. Only if the operator has sufficient control over the system and is fully responsible for all of a robot's actions is "outside of the lab" use of the robot feasible. Once a basic robotic system is available which is safe to operate with people, higher levels of autonomy can be added. Like an anti-lock braking system that makes steering during heavy braking easier, new modules can be developed that add autonomy to the robot.

3.4. Basic Behavior

For a manipulator system, the safety requirements listed previously translate into certain mechanical and control specifications. If a person is to control a robot's actions and is also responsible for its safe operation, the operator must be able to restrain the robot from any particular motion and must be able to force the robot to execute any motion at any time.
Thus, the mechanical system must be easily backdrivable, and the control system has to allow for backdrivability. The position controllers found in industrial robots are not suitable for this task. Instead, force control is required for a human-friendly robot. The basic behavior of the robot should be to compensate for its own weight at all times. The forces required to compensate for the gravitational effects can be quite significant, higher than the forces the control system is allowed to apply to the environment autonomously. Since the forces bringing the robot into zero-gravity never affect the environment, they can be considered failsafe and not subject to safety limitations.
Figure 2. The Barrett WAM (Whole Arm Manipulator) robot with 7DOF, equipped with a Barrett Hand with 4DOF.
A backdrivable robot in Zero-G floats passively. When not pushed or pulled by external forces, the robot remains motionless in its current configuration. The operator can pull or push the robot at any link to induce certain motions or to place it in a certain configuration. This behavior differs significantly from systems that use a force-torque (FT) sensor in the wrist and allow compliant motion only when forces are applied to the hand. Such systems can not sense forces that are applied to the robot away from the FT sensor, for example on the main part of the robot's body. This can pose safety risks, since the robot's hand may not be within the operator's reach at all times, leaving the operator without control over the robot's motions. When pulling a robot in Zero-G, the operator feels only inertial and frictional effects. When the robot is pushed, it keeps floating until the kinetic energy is dissipated by friction or the robot collides with an obstacle. This basic behavior is perceived as passive and non-threatening by persons interacting with the robot. The robot does not prevent the operator from accelerating it to speeds that could be harmful if the robot did collide with a human. The predictability and controllability of the system still make the robot safe to operate with humans in its workspace, since it is the responsibility of the user to use the robot in a safe manner.
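The two dangers identified in Section 3.2, kinetic energy and applied forces, can be monitored from the robot's own state information alone. The sketch below is a minimal illustration of such a check; the diagonal inertia approximation, torque limits and energy limit are invented placeholders, not WAM parameters.

```python
def safety_threats(joint_vels, joint_inertias, applied_torques,
                   energy_limit=10.0, torque_limits=(60.0, 60.0, 30.0, 30.0)):
    """Evaluate the two basic threats from state information alone:
    (1) kinetic energy, here from joint velocities and an assumed
    diagonal inertia approximation, and (2) the joint torques the
    robot applies, which on a low-friction transmission bound the
    contact forces.  All numeric limits are illustrative."""
    kinetic_energy = 0.5 * sum(i * w * w
                               for i, w in zip(joint_inertias, joint_vels))
    energy_ok = kinetic_energy <= energy_limit
    torques_ok = all(abs(t) <= lim
                     for t, lim in zip(applied_torques, torque_limits))
    return kinetic_energy, energy_ok, torques_ok
```

A supervisory loop would call such a check every control cycle and trigger the emergency shutdown when either flag is false.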
4. A Testbed System

The basic Zero-G behavior for safe human-robot interaction has been implemented on a Barrett Technology WAM (Whole Arm Manipulator) arm, shown in Figure 2. The arm is a commercial version of the original MIT arm built by Townsend and Salisbury (Townsend and Salisbury, 1993). The WAM was designed as a whole arm manipulator: it allows objects to be manipulated not only with the 3-fingered hand, but with any part of the arm, including the upper and lower links. This design idea is also appropriate for human-friendly robots, where contacts with any part of the manipulator have to be considered, as well as manipulation of the robot at any part by the operator. The controller of the robot allows direct access to the motor torques, which is vital for the development of force control strategies. However, the robot software has few other features; therefore the Zero-G and position-force control software, software safety systems, visualisation tools, homing functions etc. had to be developed.

4.1. Safe Mechanical Design

The joints of the WAM are driven by cable transmissions instead of gears. The advantage of the cable drive technology is that there is no backlash, which is present in gear trains. Even more importantly, the friction in the transmissions is very low. This has two important implications:

- Each joint of the robot is easily backdrivable. Thus, people interacting with the robot can change the configuration of the robot simply by pushing the links or the end tip of the robot. Forces applied to the end tip or any other part of the robot are reflected back to the motors with very low loss in the transmission. Thus, the robot is much more 'sensitive' than gear-driven robots to external forces applied anywhere on the robot. Depending on the control strategy, the robot may resist the external manipulation by applying greater joint torques, or it can let the person change the pose without inhibiting the externally imposed motions.
- Torques applied to the motors are transmitted to the joints with negligible loss. This implies that the forces the robot applies to its environment with any of its parts can be derived with high accuracy from the actual motor commands. The usually strict dividing line between actuators and sensors becomes blurred, and the two merge into one.
The WAM is also a light-weight design. Almost all of the parts in the arm are made of aluminum and magnesium, significantly reducing the weight of the arm. The weight of the robot's components from the shoulder joint onward is only 15 kg. This allows the robot to move quickly with low kinetic energy. The WAM arm, with its low-friction transmission and the ability to directly control the current fed to the motors, is well suited for safe human-robot interaction.

4.2. Safety Hardware

Besides the mechanical design of the manipulator itself, our system contains electronic safety measures. The amplifiers for the joint actuators of the WAM are digitally controlled and limit the current output to the motors. To prevent the maximum current being applied in the case of a catastrophic software failure, the currents are limited to appropriate values. The tradeoff to be made here is between safety considerations and usability of the robot. The robot must be able to overcome its own weight and the weight of all possible payloads that it needs to manipulate. The output currents needed for this may be large enough to enable the robot to reach velocities that can not be considered safe, in particular if the motion is exaggerated by gravity effects. However, limiting the current does provide some protection from extreme robot motions. This is particularly important during the development of the control software. The joint velocities are continuously monitored by an analog circuit. If any of the joint velocities exceeds the preset limit, the robot is shut down. This emergency shutdown procedure can also be issued by software if necessary. It disables all amplifiers and short-circuits the motor power wires with a solid-state contactor. This procedure also works as an electric brake for the motors. When a shutdown occurs, the robot gently falls down without posing any real danger to people inside the robot's workspace.
However, this safety measure is meant as a last fall-back in case all software safety mechanisms fail. Shutting down the amplifiers disables any software control of the robot. Thus, the falling robot can not reduce its velocity using its actuators and may still cause unpleasant collisions. Just like a safety belt in a motor vehicle, the emergency shutdown prevents the worst consequences if an accident is already occurring. During normal operation the emergency shutdown should never be activated.
Figure 3. Safety software architecture for the control of the WAM robot. (Diagram components, top to bottom: robot programming layer; dynamics/space conversion; Safety Envelope, limiting the additional motor torques; Zero-G gravity compensation and torque ripple compensation, which can be disabled; safety heartbeat; velocity guard; external forces (humans) and obstacles acting on the robot.)
4.3. Safe Software

The control software for the robot is structured in a hierarchical way, shown in Figure 3. At the lowest layer, a passive module monitors the velocities of the elbow and the hand in cartesian space. The hardware velocity limit only monitors raw joint velocities; however, the same joint velocity can result in significantly different velocities of the robot's links depending on the kinematic configuration of the robot. In order not to limit the usability of the robot by rigorous hardware limits, those limits have been set to values that may result in cartesian space velocities that violate the safety limits. Thus, the cartesian space velocities must also be monitored, by a velocity guard module which shuts the robot down if a dangerous situation arises. Since the maximum cartesian space velocity of a long and slender link is always reached at one of its end points, it is sufficient to monitor the cartesian positions of the elbow joint and of the hand. If the velocity of either of these points exceeds the preset safety margin, the hardware safety circuit shuts down the robot. On the same level, the safety "heartbeat" module monitors the output of the control program. Its main purpose is to shut down the robot in case the control software does not respond, e.g. when the control software in higher levels is incorrect. The module is triggered by a watchdog timer: unless the control software resets the watchdog timer, the robot is shut down. This routine is regarded as the "heartbeat" of the robot.
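The velocity guard and the safety heartbeat described above can be sketched as follows. The limit value, timeout and interfaces are illustrative assumptions rather than the actual WAM controller code.

```python
import time

class VelocityGuard:
    """Passive monitor for the cartesian speeds of the elbow and hand,
    the fastest points of the two long links.  Calls the shutdown
    routine when either speed exceeds the margin (limit is a
    placeholder value)."""
    def __init__(self, limit_m_per_s=1.5, shutdown=None):
        self.limit = limit_m_per_s
        self.shutdown = shutdown or (lambda: None)

    def check(self, elbow_vel, hand_vel):
        speeds = [sum(v * v for v in p) ** 0.5 for p in (elbow_vel, hand_vel)]
        if max(speeds) > self.limit:
            self.shutdown()
            return False
        return True

class Heartbeat:
    """Watchdog timer: unless the control loop calls reset() within the
    timeout, expired() reports failure and the robot should be shut down."""
    def __init__(self, timeout_s=0.05, now=time.monotonic):
        self.timeout = timeout_s
        self.now = now
        self.last = now()

    def reset(self):
        self.last = self.now()

    def expired(self):
        return self.now() - self.last > self.timeout
```

The `now` parameter is injected so the watchdog can be driven by the real-time clock in operation and by a fake clock in tests.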
On the next level, two modules are responsible for eliminating the effects of gravity and the torque ripple of the motors. Torque ripple is the effect of the motor's torque output changing with rotor position as a result of the motor's geometry; for torque control this effect has to be compensated for in software. The Zero-G module generates the motor torques to counteract the gravitational forces acting on the links, based on the current position together with the robot's kinematic model. The torque outputs of both modules are compensation torques and can not affect the robot's environment. Therefore, these torques are not subject to safety restrictions. Above the Zero-G level, the Safety Envelope provides the lowest-level programming interface. Its main purpose is to limit the torques that can be added to the outputs of the Zero-G level by any controller implemented above it. These limits are not static but change according to the current joint and cartesian space velocities: since the robot is already floating in Zero-G, constant torques could accelerate the robot beyond the safe limits. In contrast to the other limits discussed previously, the software limits in the Safety Envelope are meant to be reached during normal operation of the robot, and even exceeding them will not cause a shutdown of the robot. Above the Safety Envelope, any control strategy can be implemented to control the robot. The limitations in the Safety Envelope will always guarantee safe operating conditions. For convenience, a module may be implemented which offers a programming interface with modified robot dynamics or cartesian space control rather than the joint space control offered by the Safety Envelope. Since we only want to permit the robot to move slowly when acting autonomously, the control program may ignore dynamic effects like centrifugal and Coriolis forces. However, when moving slowly the effects of stiction and friction become significant and have to be considered, either in a conversion module hiding the effects or in the controller itself. The control algorithm used on the top level can be any algorithm used for conventional robots; the Safety Envelope always guarantees safe and human-friendly operation. To achieve a point-to-point motion, a PID controller can be used. If the robot is in a configuration far away from the goal configuration, the PID controller outputs large torques. These will be clipped in the Safety Envelope to levels that accelerate the robot slowly towards its goal configuration. At any time a person can interfere with the arm, change its configuration or block its motion, but eventually the robot will reach its goal if its path is not permanently blocked. To evade permanent obstacles and joint limits, and to exploit the redundancy of the manipulator, well-known path planning algorithms must be applied.
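A minimal sketch of the Safety Envelope's velocity-dependent torque clipping follows, assuming a simple linear limit law; the actual limit law and constants of the WAM controller are not specified here. The allowed torque shrinks as a joint speeds up in the direction of the commanded torque, so a constant command cannot accelerate the floating arm indefinitely, while braking torque remains available.

```python
def safety_envelope(requested_torques, joint_vels, tau_max=8.0, damping=4.0):
    """Clip the torques a higher-level controller may add on top of the
    Zero-G compensation (illustrative limit law, invented constants).

    For a joint moving at velocity w, torque in the direction of
    motion is reduced by damping * w, down to zero; torque opposing
    the motion (braking) stays available up to tau_max."""
    clipped = []
    for tau, w in zip(requested_torques, joint_vels):
        upper = min(tau_max, max(0.0, tau_max - damping * w))
        lower = max(-tau_max, min(0.0, -tau_max - damping * w))
        clipped.append(min(max(tau, lower), upper))
    return clipped
```

With this law a PID controller far from its goal saturates at the envelope limit and the arm accelerates gently, exactly the behavior described above for point-to-point motions.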
4.4. Zero-G Implementation

The Zero-G module has been successfully implemented on the WAM. To reduce the computational complexity, the 7 DOF are split into two groups: the 4 degrees of freedom of the base and the 3 degrees of freedom of the wrist. The derivation of the joint torques required to compensate for the gravitational effects on the first 4 joints of the robot is described below. The four Denavit-Hartenberg transformation matrices are

\[
T_{01} = \begin{bmatrix}
\cos t_1 & -\sin t_1 & 0 & 0 \\
\sin t_1 & \cos t_1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
T_{12} = \begin{bmatrix}
\cos t_2 & -\sin t_2 & 0 & 0 \\
0 & 0 & -1 & 0 \\
\sin t_2 & \cos t_2 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
\[
T_{23} = \begin{bmatrix}
\cos t_3 & -\sin t_3 & 0 & 0 \\
0 & 0 & 1 & L_3 \\
-\sin t_3 & -\cos t_3 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
T_{34} = \begin{bmatrix}
\cos t_4 & -\sin t_4 & 0 & 0 \\
0 & 0 & -1 & 0 \\
\sin t_4 & \cos t_4 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

where $t_1 \dots t_4$ are the joint positions. Assuming that the center of gravity of the first link lies in the xy-plane of frame 2 at $(x_2, y_2, 0)$, its height $h_1$ is

\[
h_1 = \begin{pmatrix} 0 & 0 & 1 & 0 \end{pmatrix}
T_{01} T_{12}
\begin{pmatrix} x_2 \\ y_2 \\ 0 \\ 1 \end{pmatrix}
= \sin(t_2)\, x_2 + \cos(t_2)\, y_2
\]

Similarly, the height $h_2$ of the center of gravity of the second link can be derived, assuming it lies within the xy-plane of frame 4 at $(x_4, y_4, 0)$:

\[
h_2 = \begin{pmatrix} 0 & 0 & 1 & 0 \end{pmatrix}
T_{01} T_{12} T_{23} T_{34}
\begin{pmatrix} x_4 \\ y_4 \\ 0 \\ 1 \end{pmatrix}
= (\sin(t_2)\cos(t_3)\cos(t_4) + \cos(t_2)\sin(t_4))\, x_4
+ (-\sin(t_2)\cos(t_3)\sin(t_4) + \cos(t_2)\cos(t_4))\, y_4
+ \cos(t_2)\, L_3
\]

The potential energy $U$ of the manipulator is

\[
U = m_1 h_1 + m_2 h_2 \tag{1}
\]

where $m_1$ and $m_2$ are the masses of the first and second link respectively. Using Lagrange's method, the joint torques $\tau_1 \dots \tau_4$ compensating for gravity are the partial derivatives of $U$ with respect to $t_1 \dots t_4$. For compactness, $\sin(t_i)$ and $\cos(t_i)$ are abbreviated $s_i$ and $c_i$:

\begin{align*}
\tau_1 &= \frac{\partial U}{\partial t_1} = 0 \\
\tau_2 &= \frac{\partial U}{\partial t_2} = m_1 x_2 c_2 - m_1 y_2 s_2 + m_2 x_4 (c_2 c_3 c_4 - s_2 s_4) - m_2 y_4 (c_2 c_3 s_4 + s_2 c_4) - m_2 L_3 s_2 \\
\tau_3 &= \frac{\partial U}{\partial t_3} = -m_2 x_4\, s_2 s_3 c_4 + m_2 y_4\, s_2 s_3 s_4 \\
\tau_4 &= \frac{\partial U}{\partial t_4} = m_2 x_4 (c_2 c_4 - s_2 c_3 s_4) - m_2 y_4 (s_2 c_3 c_4 + c_2 s_4)
\end{align*}

Figure 4. The example configurations for the Zero-G measurements, A) with both links horizontal and the lower arm at 90° to the upper arm, B) with the upper arm vertical and the lower arm horizontal.
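As a sanity check, the closed-form height expressions $h_1$ and $h_2$ can be compared against the matrix products they were derived from. The snippet below transcribes the four transformation matrices into NumPy; the joint angles, center-of-gravity coordinates and the link length L3 are arbitrary test values, not the WAM's actual parameters:

```python
import numpy as np

L3 = 0.55  # hypothetical link length [m]

def T01(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def T12(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [0, 0, -1, 0], [s, c, 0, 0], [0, 0, 0, 1.0]])

def T23(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [0, 0, 1, L3], [-s, -c, 0, 0], [0, 0, 0, 1.0]])

def T34(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [0, 0, -1, 0], [s, c, 0, 0], [0, 0, 0, 1.0]])

z = np.array([0.0, 0.0, 1.0, 0.0])       # picks out the height coordinate
t1, t2, t3, t4 = 0.3, 0.7, -0.4, 1.1     # arbitrary test configuration
x2, y2, x4, y4 = 0.12, 0.05, 0.30, 0.02  # arbitrary CoG coordinates

h1 = z @ T01(t1) @ T12(t2) @ np.array([x2, y2, 0, 1.0])
h2 = z @ T01(t1) @ T12(t2) @ T23(t3) @ T34(t4) @ np.array([x4, y4, 0, 1.0])

s2, c2 = np.sin(t2), np.cos(t2)
s3, c3 = np.sin(t3), np.cos(t3)
s4, c4 = np.sin(t4), np.cos(t4)
print(np.isclose(h1, s2*x2 + c2*y2))
print(np.isclose(h2, (s2*c3*c4 + c2*s4)*x4 + (-s2*c3*s4 + c2*c4)*y4 + c2*L3))
```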
In the regressor form, the matrix containing the trigonometric functions is separated from the vector of unknowns, which contains the products of the masses and the coordinates of the centers of mass:

\[
\begin{pmatrix} \tau_1 \\ \tau_2 \\ \tau_3 \\ \tau_4 \end{pmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
c_2 & -s_2 & c_2 c_3 c_4 - s_2 s_4 & -c_2 c_3 s_4 - s_2 c_4 & -s_2 \\
0 & 0 & -s_2 s_3 c_4 & s_2 s_3 s_4 & 0 \\
0 & 0 & c_2 c_4 - s_2 c_3 s_4 & -s_2 c_3 c_4 - c_2 s_4 & 0
\end{bmatrix}
\begin{pmatrix} m_1 x_2 \\ m_1 y_2 \\ m_2 x_4 \\ m_2 y_4 \\ m_2 L_3 \end{pmatrix}
\]
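The regressor form lends itself directly to numerical parameter identification: build the regressor for a few configurations, stack the equations, and solve by least squares. The parameter values and joint angles below are made up for illustration. Note that $m_1 y_2$ and $m_2 L_3$ enter the torques only through the common factor $-s_2$ in $\tau_2$, so only their sum is identifiable; the minimum-norm least-squares solution nevertheless reproduces the compensation torques exactly, which is all the Zero-G module needs:

```python
import numpy as np

def regressor(t2, t3, t4):
    """Regressor matrix from the equation above. Rows correspond to
    joints 1..4, columns to [m1*x2, m1*y2, m2*x4, m2*y4, m2*L3]."""
    s2, c2 = np.sin(t2), np.cos(t2)
    s3, c3 = np.sin(t3), np.cos(t3)
    s4, c4 = np.sin(t4), np.cos(t4)
    return np.array([
        [0.0, 0.0, 0.0, 0.0, 0.0],
        [c2, -s2, c2*c3*c4 - s2*s4, -c2*c3*s4 - s2*c4, -s2],
        [0.0, 0.0, -s2*s3*c4, s2*s3*s4, 0.0],
        [0.0, 0.0, c2*c4 - s2*c3*s4, -s2*c3*c4 - c2*s4, 0.0],
    ])

# Hypothetical "true" parameters (illustrative, not the WAM's):
phi_true = np.array([0.8, 0.1, 0.5, 0.05, 1.2])

# Simulated torque measurements in two distinct configurations.
configs = [(0.5, 0.8, 1.1), (1.2, -0.4, 0.6)]
W = np.vstack([regressor(*c) for c in configs])
tau = W @ phi_true

# Minimum-norm least-squares estimate of the unknown vector.
phi_hat, *_ = np.linalg.lstsq(W, tau, rcond=None)

# The predicted compensation torques match even in an unseen
# configuration, despite the non-unique parameter split.
t_new = (0.9, 0.3, -0.7)
print(np.allclose(regressor(*t_new) @ phi_hat,
                  regressor(*t_new) @ phi_true))
```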
The vector of unknowns can be determined experimentally from joint torque measurements in at least two different configurations of the arm. Figure 4 shows two configurations which contain sufficient information. Least-squares minimisation can be used to determine the unknown vector from those measurements.

Figure 5 shows a sequence of frames from a 5-second sequence of the robot in Zero-G mode. The motion of the robot is not precalculated at any point; only the velocity guard, the TR compensation and the Zero-G module are active. In the first image the robot is being pushed by a person, and it keeps floating over the next four snapshots until it bumps into the foam rubber tabletop. When the person pulls the robot back by its upper link, note that despite the low velocity the inertial effects flex the lower arm. The arm floats again and is finally caught by the person in the last frame.

Figure 5. Image sequence from a Zero-G experiment

4.5. Future Experiments

The next step in the development of our testbed system is to complete all modules required for the software architecture described above, in particular the Safety Envelope and a position controller. This will provide the robot with the capability of slow, autonomous motion. We will then connect the face tracking system described above to the robot. A person will be able to select one of a variety of objects on the robot's table by looking at the object. The vision system will report the gaze point to the robot controller, and the robot can be commanded using facial gestures to pick up the object at the designated position. If the object localization was not successful, facial gestures can be used for error correction, e.g. by shaking the head when the robot is about to pick up the wrong object.

The restrictions that guarantee safety under normal conditions prevent the robot from lifting heavy objects. When heavy objects are to be manipulated in a safe manner, the Zero-G model must be adapted to the new robot configuration. The robot with the payload has to be
gravity compensated before the controller can be allowed to move the object safely. There is also the risk of the robot losing its grip on the object; in this case the Zero-G model changes abruptly, which needs to be detected by the controller.

4.6. Application Areas

In the long term, human-friendly robots could become multipurpose helping hands for people. This requires the robots to be safe and easy for people to use. The gaze-direction interface can only be one of many interfaces that allow the operator to communicate with the robot in a more natural way than with today's robots. More specialized applications require a smaller diversity of commands and simpler control strategies. Such applications include hook-holding devices for operating theatres or heavy-object handling devices for building sites or machine construction. Before robots were used for welding in car manufacturing, the welding clamps were suspended on "Zero-G" devices to ease their handling. Human-friendly robots could provide a similar but more versatile service, which would allow their application in other manufacturing or construction areas. Helping hands could also be developed for disabled or paralyzed persons, allowing them to control a robot manipulator safely using only gaze and facial gestures.
5. Conclusion

We have presented a new approach to the control of human-friendly robots. The basic philosophy is to make the robot a tool with limited autonomy. Instead of aiming at a perfectly safe, self-responsible system, the robot should be viewed as a sophisticated tool. Since the human is responsible for using the robot in a safe way, the actions of the robot have to be intuitively understandable and predictable. The control of the robot must consider the forces applied by the robot to its environment, and thus force control of the robot is imperative. The overall behavior of the robot must be perceived as non-threatening by a human operator. The presented control scheme achieves these design goals. The base of the control scheme is formed by the Zero-G module. The Safety Envelope ensures that the safety characteristics are not altered by any control scheme implemented on top of it. At a later stage, the visual interface for gaze-point detection and gesture recognition will be connected to the robot, allowing a person to gaze-control the robot's motions.
References

Heinzmann, J. and A. Zelinsky: 1998, '3-D Facial Pose and Gaze Point Estimation using a Robust Real-Time Tracking Paradigm'. In: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, pp. 142-147.

Huttenlocher, D. P. and S. Ullman: 1990, 'Recognizing solid objects by alignment with an image'. International Journal of Computer Vision 5(2), 195-212.

Jochem, T., D. Pomerleau, and C. Thorpe: 1993, 'MANIAC: A Next Generation Neurally Based Autonomous Road Follower'. In: Proceedings of the International Conference on Intelligent Autonomous Systems: IAS-3.

Thomanek, F. and E. D. Dickmanns: 1996, 'Autonomous Road Vehicle Guidance in Normal Traffic'. Lecture Notes in Computer Science 1035, 499-516.

Townsend, W. T. and J. K. Salisbury: 1993, 'Mechanical Design for Whole-Arm Manipulation'. In: Robots and Biological Systems: Toward a New Bionics?, pp. 153-164.

Zelinsky, A. and J. Heinzmann: 1998, 'A novel visual interface for human-robot communication'. Advanced Robotics 11(8), 827-852.