In: Proc. of Intelligent Autonomous Systems(IAS-4) March 1995, pp.365-372

An Advanced Telerobotic Control System for a Mobile Robot with Multisensor Feedback

I-Shen Lin, Frank Wallner, Rüdiger Dillmann
Institute for Real-Time Computer Systems & Robotics, University of Karlsruhe, D-76128 Karlsruhe, Germany.

Abstract. This paper presents an advanced telerobotic control system for a mobile robot with multisensor feedback. A telecontrol concept for various degrees of cooperation between a human operator and a mobile robot is described. With multiple sensors on board, the robot at the remote site can adjust its path while continuously accepting commands from the human operator. Interactive modelling, which allows the modelling of an unknown environment and makes landmarks known to the robot, is introduced. A graphical user interface and a 3-D animation system are important elements in teleoperation; they are integrated into this system to support the operator in task analysis, off-line teaching and on-line monitoring. Experiments performed with the mobile robot PRIAMOS are discussed.

Key Words. Mobile Robot, Telerobotics, Multisensor, Man-Machine Systems

1. INTRODUCTION

Telerobotics has been an active research field in robotics for many years (Sheridan, 1992). Many research groups have applied this technique in outer space (Kim, 1993; Hirzinger, 1993) or in nuclear power plants. However, most papers investigate a telemanipulator whose base is fixed in space and which is remotely controlled by an operator to repair or replace machine units (Backes et al., 1991). Only a few papers discuss the teleoperation of mobile robots. (Chatila et al., 1991) discusses a remotely controlled "intervention robot", which accepts task commands from a ground operator and moves autonomously on Mars. The operator acts mainly as a mission planner and supervisor. In (Fournier et al., 1988) a computer-aided teleoperation that uses a master/slave approach to bilaterally control a remote mobile robot is described. The integration of a supervising human operator and different on-board sensor systems opens a broad field of applications that cannot be covered by a fully autonomous system. The recognition and decision capabilities of a human are, at present, much better than those of an intelligent robot. With visual and other sensor feedback, an operator can control the gross motion of a remote robot. The various on-board sensor systems, on the other hand, adjust the behavior of the robot according

Figure 1 The mobile system PRIAMOS

to its surrounding situation. This kind of cooperation outmatches a purely autonomous or purely teleoperated robot. An intervention task that must solve a problem in a short time, or monitoring in an unknown environment, demands such a system. Furthermore, the interaction between the human operator and the robot provides precious knowledge for teaching the robot. The operator can work with the robot to interactively build a model of the world. The landmarks for navigation and their semantics are taught to the robot by the operator during remote control. The robot accumulates this knowledge and gradually increases its autonomy in subsequent operation.

Figure 2 The control environment MARS (left) and PRIAMOS' world model hierarchy (right)

This paper presents an advanced telerobotic system for mobile robots with multisensor feedback that considers different degrees of cooperation between an operator and a mobile robot, in order to achieve the best integration of both sides. As experimental testbed the mobile robot PRIAMOS (Dillmann et al., 1993), which is equipped with different types of sensors, is employed. The mobile robot control environment MARS¹ allows the task analysis of the telerobotic concept to be tested, including simulation of multiple mobile robots, and is used for on-line monitoring. Besides being used in classical application areas of teleoperated systems, the system described provides a tool for programming the robot by demonstration. Such a way of programming is of special interest for systems that will operate in non-industrial environments under the supervision of inexperienced users (Kaiser et al., 1994).

2. THE MOBILE SYSTEM PRIAMOS

PRIAMOS (Fig. 1) is a holonomic mobile system with three degrees of freedom, i.e., motion in the longitudinal and transversal directions and rotation around the center of the vehicle. This is accomplished by the use of four Mecanum wheels, each one driven separately. The servos in use enable the robot, which features a length of 900 mm, a width of 650 mm, a height of 750 mm, and a weight of 300 kg, to move at a maximum speed of 3 km/h. PRIAMOS is equipped with a structured light

¹ Multi Agent Robot Control System

sensor for obstacle detection as well as with a ring of 24 ultrasonic range sensors. While the structured light sensor observes the front area, which is the main driving direction, the positions and orientations of the ultrasonic sensors are chosen to minimize blind regions near the robot. As a third sensor system, an active stereo vision head is mounted on a platform with motor-controlled tilt and turn. It consists of two cameras whose zoom and focus, as well as the vergence, are motorized. A passive camera mounted at the front of the robot provides an overview of the scene. Besides observing the structured light pattern, it is also useful in teleoperation to point out interesting regions to the active stereo vision head and to support triangulation-based scene reconstruction. All three cameras are connected to an on-board image processing system which provides a vector description of images at video rate.
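The holonomic motion of a four-Mecanum-wheel base can be illustrated with the standard inverse kinematics relating body velocity to wheel speeds. The sketch below is a generic textbook formulation under one common sign convention; the wheel radius, the half-length/half-width values (taken loosely from the dimensions quoted above) and the function name are illustrative assumptions, not PRIAMOS parameters.

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.1, l=0.45, w=0.325):
    """Inverse kinematics for a four-Mecanum-wheel base (one common
    sign convention; all numeric parameters are assumed values).

    vx, vy -- longitudinal / transversal body velocity [m/s]
    wz     -- rotation rate about the vehicle center [rad/s]
    r      -- wheel radius [m]
    l, w   -- half length / half width of the wheel base [m]
    Returns (front_left, front_right, rear_left, rear_right)
    wheel angular velocities [rad/s].
    """
    k = l + w
    return ((vx - vy - k * wz) / r,
            (vx + vy + k * wz) / r,
            (vx + vy - k * wz) / r,
            (vx - vy + k * wz) / r)

# Pure forward motion drives all four wheels equally; pure sideways
# motion drives diagonally opposite wheel pairs in opposite senses.
```

With this relation, the three commanded degrees of freedom (vx, vy, wz) map directly onto the four independently driven wheels.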

2.1 Control Architecture

PRIAMOS refines and adapts its world model on several levels of abstraction. These levels of abstraction find their equivalents in PRIAMOS' control architecture, where they serve as the basic information for subtasks executed on each layer of control. PRIAMOS' control architecture is structured both horizontally and vertically. On each vertical layer, a navigation and a perception module, linked by means of the world model, are located. Figure 2 shows the levels of abstraction of the world representation. On the highest level, the world model has a topological structure. Path segments are described as edges and intersections as nodes in a graph. This allows simple path planning over longer distances without the need to take geometric details into account. For

the generation of the topological map and precise path planning in the vicinity of the robot, exact geometrical information is needed. On an intermediate level, this information is represented by means of a dynamic local model of the perceivable environment, which integrates recent sensor measurements as well as predictions generated on the basis of a global geometrical model. User access to PRIAMOS is provided by means of the control environment MARS (Fig. 2, left). MARS has been developed following a strictly modular approach. Higher-level modules run as independent UNIX processes and communicate via sockets. Lower-level modules that have to take stronger real-time constraints into account, such as the collision-avoidance module, are realized on the robot's internal OS-9-based computer systems.

2.2 Motion Control

The motion controller of PRIAMOS is the low-level interface to either the higher navigation layers or the operator. A set of basic commands called elementary operations (EOs) is defined in the motion controller (Fig. 3). The EOs are classified into four groups. Position control commands send an absolute or relative goal position to the robot's position controller. They can be executed sequentially, or a new command terminates the previous one and causes a smooth transition of the current motion to the new goal. When teleoperating the robot, the velocity control commands provide the interface to the operator input. Telecommands are directly passed to the underlying velocity controller and immediately executed. A problem is caused by the varying time-delay in transmission. A telecommand is executed until a new command has been received. If several commands are received, only the latest command is executed. Additional commands are used to dynamically update the robot's internal position estimation and to return information about external and internal sensors.
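The "latest command wins" rule described above can be sketched as a minimal command buffer: commands arriving faster than the controller cycle simply overwrite each other, so the controller only ever sees the most recent one. The class and method names are illustrative, not PRIAMOS code.

```python
class TelecommandChannel:
    """Sketch of the latest-command-wins rule for velocity telecommands:
    if several commands arrive before the controller cycle runs, only the
    most recent one is executed (illustrative, not the original code)."""

    def __init__(self):
        self._latest = None

    def receive(self, cmd):
        # Older pending commands are simply overwritten.
        self._latest = cmd

    def next_command(self):
        # The controller cycle consumes at most one command per tick.
        cmd, self._latest = self._latest, None
        return cmd

# Two telecommands arrive between controller cycles; only the second
# reaches the velocity controller.
chan = TelecommandChannel()
chan.receive(("direct_control_rel", 0.2, 0.0, 0.0))
chan.receive(("direct_control_rel", 0.0, 0.1, 0.0))
```

This keeps the robot responsive to the operator's most recent intent even when the radio link delivers bursts of delayed commands.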

3. SYSTEM CONFIGURATION

Fig. 4 gives an overview of the configuration of the presented telerobotic system. The global system consists of two subsystems: the local operator control and monitoring station and the remote mobile robot control system. At the local site an operator continuously receives information from computer graphics and from live video transmitted via the video link. He uses his perception, planning and control capabilities to influence the remote mobile robot and thus closes the outer feedback loop of the telerobotic control system. The commands issued by the operator (telecommands) are mixed with the multisensor feedback on the robot to take the time-delay into account and to assure a correct response to the world.

Figure 3 Elementary operations (EOs) of the motion controller:
- Motion EOs (position control): the sequential commands move_abs_seq and move_rel_seq and the interrupting commands move_abs and move_rel, with parameters (x, y, orient, v_t, v_r) and (dx, dy, dorient, v_t, v_r) respectively; the stop commands stop_move and delete_all.
- Motion EOs (velocity control): direct_control_abs in world coordinates and direct_control_rel in robot coordinates, both with parameters (vx, vy, vorient, a_t, a_r).
- Sensor EOs: get_pos, get_vel and get_pos_error for internal sensors; get_us_data for external sensors.
- Correction EOs (position correction): set_pos_abs (x, y, orient) and set_pos_rel (dx, dy, dorient).

The system design is based on the following considerations:

1. Degraded perception from the single video feedback at the operator site and the time-delay between operator input and robot execution are two main problems in teleoperation. Therefore the robot does not act only as a passive transporter. It must also make its own decisions according to the external situation while under user guidance. The module "cooperation strategy" in Fig. 4 is the kernel that mixes the telecommands and the multisensor feedback.

2. Different control modes should be provided to facilitate various degrees of cooperation between operator and robot. Depending on the situation, the operator can directly control the robot or just monitor the autonomous execution of the robot and intervene when necessary.

Figure 4 The system configuration of the telerobotic system

3. The system should accumulate its own knowledge and let the robot gain more autonomy after the guidance of a human operator. Aspects such as interactive modelling and landmark teaching are stressed.

4. Graphical simulation and animation systems are important elements for task analysis and for predictive display in case of time-delay. The same graphical user interface is also a convenient tool for task specification, visual display of sensor information and monitoring.

Active Vision System KASTOR: Object recognition and scene reconstruction are possible with this stereo vision system. During teleoperation the operator can control all degrees of freedom of the cameras to get a panoramic view of the remote world.

These considerations have been taken into account in the design of the telerobotic control system and will be discussed in the following sections.

4. MULTISENSOR SYSTEM

The baseline for achieving the partial autonomy desired at the remote side is information about the world. Therefore the robot was equipped with various sensors. The sensor data are processed on the robot in order to achieve real-time reaction to external events and to reduce the communication traffic via the radio link. The information reported to the operator station is mainly symbolic and indispensable for the teleoperation. In this section the sensor systems presently on PRIAMOS and their features are introduced.

Ultrasonic Sensors: The robot is equipped with a ring of 24 ultrasonic sensors. A feature of this ultrasonic sensor system is its flexibility: it can be operated in different measurement modes. The activation of sensors (or sensor groups) can be altered on-line. It is therefore possible to get more precise and timely information about regions of interest.

Figure 5 Configuration of the structured light sensor (light stripes projected using cylinder lenses)

Structured Light: Fig. 5 illustrates another sensor system on the robot. Diode lasers with special optics on the two front corners of the robot project laser lines onto the floor. The intersections of the laser lines with the floor or an obstacle are observed by the front camera.

To be able to distinguish the laser lines from the light of surrounding sources, the camera is equipped with an interference filter whose transmission peak corresponds to the wavelength of the laser light. A configuration with two crossing lines allows the supervision of a wide area with a minimum number of light sources and maximum inspection of the area in front of the driving direction. Besides being used to detect the structured light, the third camera serves as the monitoring camera, which sends live video to the operator station. It also supports the binocular matching process of the active vision head.
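The obstacle-detection principle behind the structured light sensor can be sketched as follows: on a flat floor the laser stripe appears at a known image row in each column, so a column whose observed stripe row deviates noticeably indicates an obstacle. The function name, the per-column representation and the pixel tolerance are assumptions for illustration, not the actual image-processing pipeline.

```python
def detect_obstacles(observed_rows, floor_rows, tol=3):
    """Toy structured-light check: floor_rows[i] is the image row where
    the laser stripe should appear in column i on a flat floor;
    observed_rows[i] is the detected stripe row (or None if the stripe
    was not found). A deviation of more than `tol` pixels flags an
    obstacle in that column (all numbers illustrative)."""
    return [obs is not None and abs(obs - ref) > tol
            for obs, ref in zip(observed_rows, floor_rows)]

# Columns where the stripe is lifted well above its floor position
# are flagged as obstructed.
flags = detect_obstacles([240, 241, 260, None, 239], [240] * 5)
```

In practice the deviation of the stripe also encodes the height of the obstacle via triangulation, but the binary check above already suffices for collision avoidance.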

5. INTEGRATION OF HUMAN AND MULTISENSOR

The integration of a human operator and multisensor information opens a wide field of applications. This scheme compensates the shortcomings of a purely teleoperated or purely autonomous system. The sensors can be used actively, to assist the operator during teleoperation, or passively, to record the situations and actions on the robot under user guidance. The operator can cooperate with the sensors to model an unknown world. The following sections discuss these topics in more detail.

5.1 Control Modes for Telerobotics

In a traditional teleoperated system, an operator must continuously operate and monitor the remote robot. "Telepresence" emphasizes making full use of information feedback to let the operator feel the remote environment and control the robot intuitively. However, the bandwidth of data transmission, variable time-delays and the need for the full concentration of the operator impose limitations on this scheme. Therefore the sensors on the robot should play an active role. They not only provide the remote status to the operator, but also actively affect the behavior of the robot to reflect situations at the remote site which may not be perceived by the operator. With the help of the sensor systems mentioned above, four flexible control modes are provided.

Direct Control: the operator uses a suitable input device, such as a 6-D mouse, to control the movement of the vehicle and the active camera head on it. Safety measures are needed to compensate for the degraded perception and the time-delay; emergency stop and collision avoidance are two examples. The strategies for merging human commands with the reflexive capabilities of the robot are based on the ultrasonic sensors and the structured light.

Traded Control: this mode provides alternating control of the robot. The operator assumes direct control in critical situations. Control is handed to the robot in a non-critical environment or when visual feedback is poor. The sensor feedback allows the operator to assess the remote situation and makes safe and continuous operation possible.

Shared Control: in this mode, robot and human each control specific degrees of freedom. For example, when moving through a corridor, the robot keeps its orientation and distance with respect to the side walls while the operator controls the forward movement. This mode relieves the operator from control details and lets him concentrate on the goals of the teleoperation. An experiment on shared control is discussed at the end of this paper.

Supervisory Control: the operator acts as a supervisor. He performs high-level planning and monitors the robot's execution. He may have to interrupt the execution in dangerous situations or help the robot execute its task, e.g. by indicating the location of a landmark. In this mode the robot has the highest degree of autonomy.

Supervisory control is an example of cooperation on the task level: the task planning relies mostly on the operator, while the robot is responsible for task execution under human supervision. Under traded control the operator decides when to switch control. The other control modes mentioned are based on cooperation on the servo level: the telecommands from the operator are mixed with sensor feedback to control the robot. The degree of autonomy increases from direct to supervisory control. The choice of an adequate control mode depends on the task to be executed. This flexibility guarantees both efficient and reliable control.

5.2 Interactive Environment Modelling

An environment model is essential for a mobile robot to plan a path to a goal position. In the field of teleoperation, a world model provides a global view of a scene, which greatly assists the operator. However, a model is not always available. The integration of human and sensor information provides an opportunity to interactively build a world model. A human can easily identify objects, their geometric features and the topological relations between them, while this is extremely difficult for the robot. The sensors, on the other side, provide visual feedback and numerical data. The short-term memories of the operator are kept in a computer and then integrated into a global map. To model a general 3-D world from a pair of images, a matching process is necessary. An operator can fix broken edges, find corresponding points in both images and group edges that belong to an object, all of which are still very difficult to do automatically. In a man-made building, the three orthogonal directions are distinguished from other directions. Assuming a calibrated camera, it is possible to interactively obtain a world model. The perspective transformation from the world coordinate system to the image plane is

I = M W    (1)

\begin{pmatrix} uw \\ vw \\ sw \\ w \end{pmatrix} =
\begin{pmatrix}
m_{11} & m_{12} & m_{13} & m_{14} \\
m_{21} & m_{22} & m_{23} & m_{24} \\
m_{31} & m_{32} & m_{33} & m_{34} \\
m_{41} & m_{42} & m_{43} & m_{44}
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}    (2)

The transformation matrix M is obtained from the camera calibration (Weckesser and Hetzel, 1994). In a man-made environment a human can directly recognize the intersection of a wall and the floor. On a flat floor the z-coordinate of this intersection is known through the calibration and the camera position on the robot. With the help of eq. (2) the other two coordinates (x, y) in the world can be solved from the 2-D image. Other geometric features, e.g., orthogonality and parallelism, help to derive the main structure of many objects in the scene. Such an interactive model is easy to build and combines sensor precision with human intelligence. For many navigation tasks it is not necessary to know a scene completely. The robot can navigate relative to some known features and reactively avoid unknown obstacles.
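Solving for (x, y) with known z amounts to a 2x2 linear system: dividing the image-coordinate rows of the calibration matrix by the homogeneous row eliminates w, leaving two equations in x and y. The sketch below applies Cramer's rule under the assumption that rows 1, 2 and 4 of a 4x4 matrix M yield u·w, v·w and w respectively; the function name and the example matrix are illustrative, not the original implementation.

```python
def floor_point_from_image(u, v, z, M):
    """Recover world coordinates (x, y) of an image point (u, v) whose
    height z is known (e.g. a wall/floor intersection on a flat floor).
    M is a 4x4 calibration matrix whose rows 0, 1 and 3 produce u*w,
    v*w and w (an assumed convention for this sketch)."""
    # From u = (row0 . X)/(row3 . X) and v = (row1 . X)/(row3 . X)
    # with X = (x, y, z, 1), collect the x- and y-terms on the left:
    a11 = M[0][0] - u * M[3][0]; a12 = M[0][1] - u * M[3][1]
    a21 = M[1][0] - v * M[3][0]; a22 = M[1][1] - v * M[3][1]
    b1 = u * (M[3][2] * z + M[3][3]) - (M[0][2] * z + M[0][3])
    b2 = v * (M[3][2] * z + M[3][3]) - (M[1][2] * z + M[1][3])
    det = a11 * a22 - a12 * a21   # Cramer's rule on the 2x2 system
    return ((b1 * a22 - a12 * b2) / det,
            (a11 * b2 - b1 * a21) / det)
```

This is the operator-assisted step described above: the human clicks the wall/floor intersection in the image, and the known floor height fixes the remaining two world coordinates.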

The fusion of data from the camera and the other sensors (ultrasonic sensors and structured light) and the integration of local maps into a global map are the next steps towards a more precise world model. To make subsequent autonomous navigation in the modelled environment possible, the robot must use natural landmarks to locate itself. A landmark pattern can be identified and segmented by the operator. During modelling, the robot is informed about the structure of the landmarks and about which sensors can be used to recognize them. By such an approach, a model of the world and semantic information sufficient to navigate in it are taught to the robot. Interactively modelling an environment significantly reduces the complexity of installing the robot in a new environment.

6. GRAPHICAL USER INTERFACE AND SIMULATION

Since a telerobotic system involves the interaction between humans and robots, a friendly user interface is necessary to assist the human operator. Both 2-D and 3-D graphics are valuable to support teleoperation. This section discusses the use of the graphical user interface and the simulation system.

6.1 Task Analysis

To teleoperate a mobile robot in a critical environment, a lot of time must be spent preparing the task, training the operator and finding the optimal cooperation modes for different situations. Before actually executing a task, a graphical user interface can help the user to specify his intention and to display the commands and their expected consequences on the screen. In this way the user can interactively generate and modify a plan. On the graphical user interface an operator can specify a sequence of movements and actions by clicking or dragging the mouse on the screen. The underlying task and geometric planners then find a sequence of movements and actions that fulfills the task. The simulation system simulates the movement of the robot and displays the results on a 3-D animation workstation. On the animation system different viewpoints can be set to simultaneously observe the robot and its relations to the world. Possible collisions with obstacles are checked and the corresponding commands are prevented from execution, in order to avoid collisions in the real world. During this task analysis step, the optimal use of the sensors is an important aspect. In addition, the strategies of cooperation in the shared control mode and the special regions for traded control are investigated in this phase. The goal is to take advantage of both the robot and the operator to achieve optimal performance. Care must be taken not to overload the operator with too much information and too many control options. As additional help, a task editor is available to support task specification by interactively modifying a task plan. With this task editor the operator can also define a sequence of actions as a macro. These macros can be retrieved later to constitute a complete task plan. The concept of "telesensorprogramming" (Hirzinger, 1993) is applied in the task analysis. Due to the errors in the dead-reckoning and in the world models, the sensor patterns must be used by the robot to assure an accurate relation to the world. Thus, in the supervisory control mode an operator specifies a gross movement. The fine movement of the robot is controlled by its external sensors to reach the required sensor values, e.g. relative distances to side walls as they have been specified.

6.2 On-line Monitoring

The status of the remote robot and its sensor values are monitored on the graphical interface. Even though the cameras on the robot provide different views of the environment, they cannot cover the whole scene. The graphical display of the sensor history and of the free space around the robot is of great help in the telerobotic control system. Because the task has already been analyzed in simulation, a predictive display that is overlaid on the live video is a useful tool to reduce the difficulties caused by communication delay.

Another advantage of the graphical user interface is that the operator can directly correct the robot position by observing the actual sensor data in the world model in case the robot loses its orientation.

7. EXPERIMENTS

The global hardware architecture of PRIAMOS consists of two multiprocessor VME-bus systems, one in charge of vehicle control and the other of image processing. The real-time operating system OS-9 is used on these boards to assure real-time response to external events and user commands. The MARS control environment consists of a Sun workstation cluster and a Silicon Graphics workstation. A radio link connects MARS and PRIAMOS. The live video from the camera is transmitted to the Silicon Graphics workstation via a video sender-receiver set. Image processing can be done either on-board or on the remote workstation. A 6-D mouse is used as the input device to alternately control the robot or the camera head. The processes in MARS communicate with each other using the public-domain software PVM², which simplifies the communication of processes in a network of heterogeneous Unix computers. A message system called RobMsg was developed in the PRIAMOS project in order to unify the protocol of message passing among all processes and hardware architectures in MARS and PRIAMOS.

7.1 Direct Control and Monitoring

Fig. 6 shows a screen dump from the Silicon Graphics workstation. PRIAMOS was directly controlled to move around in a laboratory. The robot and its world are displayed in 3-D graphics using the animation system "Kavis", which was developed in our institute to assist teleoperation. The robot is surrounded by walls and tables. The distances measured by the ultrasonic sensors are dynamically displayed to visually provide the operator with information about the local environment. For the sake of clarity, only 8 of the 24 sensor cones are displayed in this figure. Presently, the time between consecutive commands received by PRIAMOS over the radio link varies from 140 ms to a maximum of 1500 ms when the radio link is busy. The time-delay from user input to robot execution justifies the need for local intelligence on the robot side. An operator at the control station can only command

² Parallel Virtual Machine, from the Oak Ridge National Laboratory in the USA

the main moving direction and speed of the robot. A successful execution needs the local behavior of the robot, which uses its sensors to change the moving velocity according to the environment. The large and variable time-delays also make kinesthetic feedback useless: because the force the operator feels arrives on average 0.8 s late, he could not stably operate the remote robot (Paul and Funda, 1991).
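Given the 140-1500 ms command latency reported above, a natural safety measure on the robot side is a command watchdog: if no fresh telecommand arrives within some timeout, the velocity set-point decays to zero. The sketch below shows this idea; the timeout value, class structure and names are assumptions, not the PRIAMOS implementation.

```python
import time

class CommandWatchdog:
    """Fail-safe velocity set-point under variable command latency:
    with no fresh telecommand within `timeout` seconds, the vehicle
    is commanded to stop (structure and timeout are illustrative)."""

    def __init__(self, timeout=2.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock          # injectable clock eases testing
        self.last_cmd = (0.0, 0.0, 0.0)   # (vx, vy, w)
        self.last_time = clock()

    def update_command(self, vx, vy, w):
        self.last_cmd = (vx, vy, w)
        self.last_time = self.clock()

    def setpoint(self):
        if self.clock() - self.last_time > self.timeout:
            return (0.0, 0.0, 0.0)  # fail safe: stop the vehicle
        return self.last_cmd
```

Combined with the on-board collision avoidance, this keeps a stale or interrupted radio link from leaving the robot driving blind.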

Figure 6 The monitoring of a remote robot with sensor feedback in a 3-D animation system

7.2 Shared Control of a Mobile Robot

A mobile robot usually moves around a building with rooms and walls. The operator has to guide the robot along a wall to monitor or explore the environment. Shared control in such an environment consists of the robot keeping its distance and orientation with respect to a side wall while being remotely commanded to move ahead. Fig. 7 shows a record of a typical trajectory of the robot under shared control. The robot has three degrees of freedom. Two of them, the lateral position y and the orientation, are controlled by the ultrasonic sensors, which keep the clearance to the wall between 72.5 and 102.5 cm and the orientation parallel to the wall (within ±10°). The x-component is remotely guided by the operator.

At the beginning, PRIAMOS adjusted itself to the right wall. Then it avoided an obstacle in its way by moving around it. After that it returned automatically to the right wall and kept a clearance of 90 ± 3 cm to it. The dotted trajectory is the odometry feedback to the monitoring station. A marker at the rear of the robot plotted the actual trajectory (the solid line in Fig. 7) on the floor. Due to wheel slip, the robot changed its orientation randomly during the course, but this was corrected automatically by the ultrasonic sensors on the right side of the robot. It can be observed that the odometry delivered unreliable data: the final error in the y-direction is 20.5 cm after moving 9 meters ahead.

Figure 7 Degree-of-freedom sharing while moving along a wall

The experimental result illustrates the usefulness of degree-of-freedom sharing in teleoperation. As opposed to direct control, the operator is relieved from the irrelevant control details of the robot and concentrates on the task to be fulfilled. This experiment also indicates the use of external sensors (in this case, ultrasonic sensors) to correct the position of the robot.
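The degree-of-freedom sharing in this experiment can be sketched as one control step: the operator supplies the forward speed, while the lateral velocity and turn rate are servoed from the right-hand sonar readings. The dead band (72.5-102.5 cm) matches the text; the proportional gains, the sign convention (positive vy taken as "toward the wall") and the function name are illustrative assumptions.

```python
def shared_control_step(vx_operator, wall_dist, heading,
                        d_lo=0.725, d_hi=1.025, k_y=0.5, k_w=1.0):
    """One step of shared control along a side wall (sketch):
    vx_operator -- forward speed commanded by the operator [m/s]
    wall_dist   -- sonar distance to the side wall [m]
    heading     -- angle relative to the wall [rad], 0 = parallel
    Returns the body velocity command (vx, vy, w)."""
    # Lateral correction only outside the dead band around the wall;
    # positive vy is taken to point toward the wall (assumed convention).
    if wall_dist < d_lo:
        vy = k_y * (wall_dist - d_lo)   # too close: negative vy, move away
    elif wall_dist > d_hi:
        vy = k_y * (wall_dist - d_hi)   # too far: positive vy, move closer
    else:
        vy = 0.0
    w = -k_w * heading                  # keep the heading parallel to the wall
    return (vx_operator, vy, w)
```

Inside the dead band the operator's forward command passes through untouched, which is exactly why the robot in Fig. 7 can detour around an obstacle and then settle back to the wall on its own.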

8. CONCLUSIONS

An advanced telerobotic control system was presented in this paper. The mobile robot PRIAMOS and the multisensor system it is equipped with were introduced. Emphasis is laid on the cooperation between the human operator and the sensors of the remote robot. Different control modes are described and implemented on this system in order to execute tasks flexibly. The experimental results demonstrate the use of graphics to monitor the robot status. The importance of shared control is illustrated by an experiment. Time-delay imposes a great difficulty on teleoperation. Attempts will be made to reduce the time-delay and to reliably control the robot under time-delay. An approach to interactively build a geometric and semantic model of an unknown environment with the help of a human operator was presented. Further research will be concerned with the refinement of this idea.

ACKNOWLEDGEMENT

The authors would like to thank A. Arbelo, H. Schaude and P. Weckesser for their contributions to the work described. This work was performed at the Institute for Real-Time Computer Systems and Robotics, Prof. Dr.-Ing. U. Rembold and Prof. Dr.-Ing. R. Dillmann, Department of Computer Science, University of Karlsruhe, 76128 Karlsruhe, Germany.

REFERENCES

Backes, Paul G., Kam S. Tso and Thomas S. Lee (1991). A local-remote telerobot system for time-delayed traded and shared control. In: IEEE International Conference on Robotics and Automation. pp. 243-248.

Chatila, R., R. Alami and G. Giralt (1991). Task-level programmable intervention autonomous robots. In: Mechatronics and Robotics I (P.A. MacConaill, P. Drews and K.-H. Robrock, Eds.). pp. 77-87. IOS Press.

Dillmann, R., J. Kreuzinger and F. Wallner (1993). The control architecture of the mobile system PRIAMOS. In: Proc. of the 1st IFAC International Workshop on Intelligent Autonomous Vehicles.

Fournier, R., P. Gravez and M. Dupont (1988). Computer aided teleoperation of the Centaure remote controlled mobile robot. In: Proc. Int. Symposium Teleoperation and Control. pp. 97-105.

Hirzinger, G. (1993). Multisensory shared autonomy and tele-sensor programming - key issues in space robotics. Robotics and Autonomous Systems 11, 141-162.

Kaiser, M., A. Giordana and M. Nuttin (1994). Integrated acquisition, execution, evaluation and tuning of elementary skills for intelligent robots. In: Proceedings of the 2nd IFAC Symposium on Artificial Intelligence in Real Time Control (AIRTC '94). Valencia, Spain.

Kim, Won S. (1993). Graphical operator interface for space telerobotics. In: IEEE International Conference on Robotics and Automation. pp. 761-768.

Paul, R.P. and J. Funda (1991). Efficient control of a robotic system for time-delayed environments. In: Proceedings of the 5th Int. Conf. on Advanced Robotics. pp. 219-224.

Sheridan, Thomas B. (1992). Telerobotics, Automation, and Human Supervisory Control. The MIT Press, London, England.

Weckesser, P. and G. Hetzel (1994). Photogrammetric calibration methods for an active stereo vision system. In: Intelligent Robot Systems. pp. 430-436.
