Development of an autonomous rough-terrain robot

Alina Conduraru and Ionel Conduraru
Gheorghe Asachi Technical University of Iasi, B-dul D. Mangeron 61-63, 700050 Iasi, Romania
[email protected]

Emanuel Puscalau
Technical University of Civil Engineering Bucharest, Lacul Tei Bvd. 122-124, RO 020396, sector 2, Bucharest, Romania
[email protected]

Geert De Cubber, Daniela Doroftei and Haris Balta
Royal Military Academy, Unmanned Vehicle Centre, Av. De La Renaissance 30, B-1000 Brussels, Belgium
[email protected]
Abstract—In this paper, we discuss the development process of a mobile robot intended for environmental observation applications. The paper describes how a standard tele-operated Explosive Ordnance Disposal (EOD) robot was upgraded with electronics, sensors, computing power and autonomous capabilities, such that it is able to execute semi-autonomous missions, e.g. for search & rescue or humanitarian demining tasks. The aim of this paper is not to discuss the details of the navigation algorithms (as these are often task-dependent), but rather to concentrate on the development of the platform and its control architecture as a whole.
I. INTRODUCTION

A. Problem Statement

Mobile robots are increasingly leaving the protected lab environment and entering the unstructured and complex outside world, e.g. for applications such as environmental monitoring. However, recent events like the Tohoku earthquake in Japan, where robots could in theory have helped a lot with disaster relief but were hardly used at all in practice [3], have shown that there exists a large discrepancy between the robotic technology developed in science labs and the use of such technology on the terrain. The rough outside world poses several constraints on the mechanical structure of the robotic system, on the electronics and the control architecture, and on the robustness of the autonomous components. The main factors to take into consideration are [2]:
• Mobility on difficult terrain and different soils
• Resistance to rain and dust
• Capability of working in changing illumination conditions and in direct sunlight
• Capability of dealing with unreliable communication links, requiring autonomous navigation capabilities
In this paper, we present a robotic system which was developed to deal with these constraints. The platform is to be used as an environmental monitoring robot for two main application areas: humanitarian demining (when equipped with a ground penetrating radar and a metal detector) and search and rescue (when equipped with human victim detection sensors).

B. Platform Description

Taking into account the different constraints and tasks for outdoor environmental monitoring applications, a robotic system was developed, as shown in Figure 1.
Fig. 1. The robotic system, consisting of the Teodor base UGV, a quadrotor UAS, and an integrated active Stereo/Time-Of-Flight Depth sensing system
The base vehicle of this unmanned platform is a Telerob Teodor Explosive Ordnance Disposal (EOD) robot [6]. We chose to use a standard EOD robot platform for several reasons:
• As a platform, it has proven its usefulness in dealing with rough terrain.
• Recycling a standardized platform is a good means of saving costs, as rescue or demining teams do not have the financial resources to buy expensive dedicated platforms.
• The rugged design of the platform makes it capable of handling unfriendly environmental conditions.
An important drawback of the standard Teodor platform is that it does not feature any autonomous capabilities. As discussed in [2], end-users of these systems do require the robotic systems to have autonomous capabilities, e.g. for entering semi-collapsed structures, where communication lines may fail. On the other hand, the end-users also want to retain at all times the capability to remotely control the robotic systems. For this reason, a hybrid control architecture, sketched in Figure 2 and explained in section II, was developed, giving the user the choice between direct tele-operation and autonomous operation.
In order to provide data input to the autonomous control system, an active depth sensing system was integrated on the platform. This 3D sensing system consists of a time-of-flight (TOF) camera and a stereo camera, mounted on a pan-tilt unit. This active vision system - further discussed in section IV-B - provides the required input for the terrain traversability and path negotiation algorithms. Finally, the last component of the unmanned system is a quadrotor-type helicopter, able to land on top of the ground robot. The idea of using this unmanned aerial vehicle (UAV) is to pair the advantage of an UAV (the possibility to obtain a good overview of the environment from above) with the advantage of an UGV (the possibility of interacting on the terrain). The control system of the quadrotor is integrated in the global control architecture, making this robotic system an integrated UAV / UGV. The remainder of this paper is organized as follows: Section II discusses the global control architecture. Section III focuses on the remote-operation functionalities, whereas section IV discusses the autonomous capabilities.

II. GLOBAL CONTROL ARCHITECTURE

The global control architecture is shown in Figure 2. As can be clearly noticed from Figure 2, the architecture provides multiple levels of control:
1) The first (bottom) layer is the hardware layer, consisting of the robots (UGV + UAV) themselves and the different installed devices (sensors).
2) In a second layer, a series of drivers provide interfaces to these devices.
3) In a third, abstract sensing layer, information is transferred at a higher level through data fusion and command decomposition algorithms. It is also here that the remote control interface can be found.
4) In a fourth and final layer, the robot intelligence modules and algorithms can be found.
It must be noted that, in Figure 2, the boxes with a green background represent ROS (Robot Operating System) modules. This software architecture was chosen as a base system to develop all autonomous capabilities upon, due to the large repository of pre-existing material which can be put to good use on this robot system. As can also be noted from Figure 2, there are two means of controlling the robotic system: tele-operation (white boxes / left side) and autonomy (green boxes / right side). In the following sections, we detail each of these possibilities.
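To make the layering concrete, the sketch below shows what a driver-layer node could look like in ROS. This is our own illustrative example, not the actual implementation: the node name and hardware interface are assumptions, and only /cmd_vel follows the usual ROS convention for velocity commands.

```python
#!/usr/bin/env python
# Illustrative driver-layer node (ROS 1 / rospy): it receives motion requests
# from the intelligence or remote-control layer and would forward them to the
# robot hardware. Node name and hardware coupling are hypothetical.
import rospy
from geometry_msgs.msg import Twist

def on_cmd_vel(msg):
    # A real base driver would translate this into the platform's low-level
    # motion commands; here we only log the requested velocities.
    rospy.loginfo("linear=%.2f m/s, angular=%.2f rad/s",
                  msg.linear.x, msg.angular.z)

if __name__ == "__main__":
    rospy.init_node("ugv_base_driver")               # hypothetical node name
    rospy.Subscriber("/cmd_vel", Twist, on_cmd_vel)  # common ROS velocity topic
    rospy.spin()
```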
Fig. 2. Global Control Architecture. Green boxes represent ROS nodes.

III. REMOTE-OPERATION FUNCTIONALITIES

Mobile robot tele-operation requires an intuitive human-machine interface (HMI), which is flexible and efficient. The design and implementation of such a HMI is a difficult task, in particular when the mobile robot is used for operations or interventions in complex environments, where both safety and precision need to be assured. In most cases, mobile robots are equipped with sensors which are capable of providing an impressive volume of data to the user. This massive data stream risks causing a cognitive overload for the human operator when transferred unfiltered. The HMI must ensure that the operator is presented a comprehensive overview, which presents all required sensor data and all meaningful control modalities, but not too much.
Fig. 3. The control panel for remote robot operation, subdivided in 8 zones
For this application of remote operation, LabView was chosen as a design methodology. Following the LabView design formalism, the remote operation module is built up as a virtual instrument. The virtual instrument provides a solution for the integration of all the control elements into a unitary system which is compact and has a high degree of mobility. Combining these virtual instruments in the LabView graphical programming environment significantly reduces the development time and the solution validation time. Figure 3 shows the control panel which is presented to the remote human operator. As shown in Figure 3, the front panel is composed of eight areas, integrating the tools used by the operator to control the robot movement:
1) Here, a connection with the robot can be established and commands can be sent to the robot. Commands are transmitted by the computer over a serial port interface in the example format MV150RT150 (see the sketch after this list). The characters MV identify the type of command (a linear displacement), and the following three characters represent the value of the MV command. The characters RT again form a character string, identifying the rotation command for the robot, with a value determined by the following three characters.
2) Here, a connection to a (remote controlled) joystick or gamepad can be made. When this is turned on, the operator can use the gamepad connected to the computer to control the robot.
3) In this area, the user can select the speed of the different actuators.
4) Here, the user can control the robot using a mouse or keyboard, in the absence of a joystick or gamepad.
5) On this panel, the current movement speed and turning velocity are displayed.
6) In this area, the position of the robot is shown.
7) Here, a history of all previous commands is shown.
8) Finally, the camera image is streamed, such that the user has a view of the robot environment.
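The command format of zone 1 can be illustrated with a short sketch (not the authors' LabView implementation): it assembles a displacement/rotation string such as MV150RT150 and writes it to a serial port. The port name and baud rate below are assumptions.

```python
# Hedged sketch of the serial command format described above; the actual
# remote-operation module is a LabView virtual instrument.
import serial  # pyserial

def make_command(linear, rotation):
    # 'MV' + three-character linear value, 'RT' + three-character rotation value
    return "MV{:03d}RT{:03d}".format(linear, rotation)

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # assumed port settings
    port.write(make_command(150, 150).encode("ascii"))        # sends "MV150RT150"
```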
IV. AUTONOMOUS FUNCTIONALITIES

A. Requirements

Unmanned vehicles used for environmental monitoring require autonomous capabilities for the following reasons:
• During indoor operations, they must be able to cope with a loss of communication. In this case, they must be able to perform a high-level task (e.g. searching for survivors of a collapse) without human operator intervention.
• The complexity of the unstructured environment makes it difficult for remote human operators to assess the traversability of the terrain. Therefore, the robotic systems should be equipped at least with some local obstacle avoidance capacity.
Autonomous reasoning requires the correct assessment (or "understanding") of the environment by the robot's artificial intelligence. A first step in this process is perception. As can be noted from Figure 1, the proposed unmanned environmental monitoring system is equipped with an active 3D sensing system, consisting of a stereo camera and a time-of-flight camera, mounted on a pan-tilt unit. Note that the TOF camera used here is capable of working in outdoor conditions (also in heavy sunlight), unlike modern consumer-grade depth sensors, which is a requirement for environmental monitoring applications.

B. 3D Sensing

The objective of this 3D sensing system is to provide high-quality and real-time dense depth data by combining the advantages of a time-of-flight camera and a stereo camera. Individually, both the stereo and the time-of-flight sensing systems suffer from restrictive usage constraints:
• A stereo system has difficulties with reconstruction in untextured areas.
• A TOF camera has a very limited resolution (here: 200 x 200 pixels).
These limitations are also visible in Figure 4, showing the 3D view of both the TOF and the stereo camera. It can be noted that the stereo-based reconstruction features some holes where reconstruction was not possible due to a lack of texture, which causes the left-to-right matching to fail. The TOF-based reconstruction on the right of Figure 4 is dense. However, it features only a limited resolution and a limited field of view, as depicted by the red rectangle in Figure 4a. To lift these disadvantages, we propose a data fusion approach which combines the TOF-based and stereo-based reconstruction results in real time.
Fig. 4. Visualisation of the points provided by the depth cameras: (a) stereo-based reconstruction, (b) TOF-based reconstruction.
As both stereo and TOF depth sensors provide similar types of output (a depth map and/or a 3D point cloud), it is possible to perform this data fusion in a straightforward manner using standard ICP approaches. Whereas classical ICP approaches can be notoriously slow (which would be a problem in the envisaged application), this problem can be circumvented in this case, as both sensors are rigidly attached to each other, so there is a good initial guess for the translation and rotation between both point clouds. The result of this data fusion operation is a clean and high-resolution 3D reconstruction, serving as input for subsequent data processing algorithms, notably for traversability analysis.
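As a rough illustration of this fusion step (the paper does not prescribe a specific library), the sketch below uses Open3D's standard point-to-point ICP, with the known sensor-to-sensor transform T_init as initial guess; the function and variable names are ours.

```python
# Illustrative TOF/stereo point cloud fusion via standard ICP (Open3D).
import open3d as o3d

def fuse_clouds(stereo_cloud, tof_cloud, T_init, max_dist=0.05):
    """Align the TOF cloud to the stereo cloud and merge them.
    T_init: 4x4 transform from the fixed mounting of both sensors."""
    result = o3d.pipelines.registration.registration_icp(
        tof_cloud, stereo_cloud, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    tof_aligned = tof_cloud.transform(result.transformation)
    return stereo_cloud + tof_aligned  # combined high-resolution cloud
```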
C. Terrain Traversability Analysis

Traversability estimation is a challenging problem, as the traversability is a complex function of both the terrain characteristics, such as slopes, vegetation, rocks, etc., and the robot mobility characteristics, i.e. locomotion method, wheels, etc. It is thus required to analyze in real time the 3D characteristics of the terrain and to pair this data with the robot capabilities. The methodology towards stereo and time-of-flight based terrain traversability analysis extends our previous work on stereo-based terrain classification approaches [1], [4]. Following this strategy, the RGB data stream from the stereo camera is segmented to group pixels belonging to the same physical objects. From the depth data streams of the TOF and stereo cameras, the v-disparity [5] is calculated to estimate the ground plane, which leads to a first estimation of the terrain traversability. From this estimation, a number of pixels are selected which have a high probability of belonging to the ground plane (low distance to the estimated ground plane). The mean a and b color values in the Lab color space of these pixels are recorded as c.

The presented methodology then classifies all image pixels as traversable or not by estimating for each pixel a traversability score which is based upon the analysis of the segmented color image and the v-disparity depth image. For each pixel $i$ in the image, the color difference $\|c_i - c\|$ and the obstacle density of the segment to which the pixel belongs are calculated. The obstacle density $\delta_i$ is here defined as $\delta_i = \frac{\#\{o \in A_i\}}{\# A_i}$, where $o$ denotes the pixels marked as obstacles (high distance to the estimated ground plane) and $A_i$ denotes the segment to which pixel $i$ belongs. This allows us to define a traversability score $\tau_i = \delta_i \, \|c_i - c\|$, which is used for classification. The classification is done by setting up a dynamic threshold as a function of the measured distance. Indeed, as the error on the depth measurement increases with the distance, it is required to increase the tolerance on the terrain classification as a function of the distance. An important issue when dealing with data from a time-of-flight sensor is the correct assessment of erroneous input data and noise. Therefore, the algorithm automatically detects regions with low intensities and large variances in distance measurements and marks these as "suspicious". Using the traversability data, it is possible to steer the robot around non-traversable obstacles and execute high-level tasks.
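A compact sketch of the per-pixel scoring described above is given next (the notation is ours; the segmentation, ground-plane estimation and dynamic threshold are assumed to be computed elsewhere):

```python
# Sketch of the traversability score tau_i = delta_i * ||c_i - c||.
import numpy as np

def traversability_scores(ab_image, segments, obstacle_mask, c_ref):
    """ab_image: HxWx2 a/b channels of the Lab image; segments: HxW segment labels;
    obstacle_mask: HxW booleans (pixels far from the estimated ground plane);
    c_ref: mean (a, b) colour of the ground-plane pixels."""
    color_diff = np.linalg.norm(ab_image - np.asarray(c_ref), axis=2)  # ||c_i - c||
    scores = np.zeros(segments.shape, dtype=float)
    for label in np.unique(segments):
        region = segments == label
        delta = obstacle_mask[region].mean()         # obstacle density of the segment
        scores[region] = delta * color_diff[region]  # tau_i per pixel
    return scores
```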
D. Implementation & Results

In order to have all the physical subsystems working together autonomously, and to achieve a control framework like the one depicted in Figure 2, we need to integrate all capabilities in a suitable framework. Here, we used ROS (Robot Operating System), an open-source meta-operating system for robots. One of the main advantages of ROS is that it contains a lot of pre-made packages and libraries, providing access to sensors and actuators, next to a whole set of data processing and control algorithms. As an example, the low-level hardware drivers for the pan-tilt unit, stereo camera, TOF camera and quadrotor are available in ROS. As such, we only needed to develop a low-level driver for the robot, supporting our custom serial commands. Both the stereo camera and the TOF camera publish ROS topics in the PointCloud format towards the Integrated 3D Reconstruction module, which combines the data and sends the unified 3D data to a Terrain Traversability Estimation node, outputting a map of the environment indicating the traversable and non-traversable areas.

V. FUTURE WORK & CONCLUSION

In this paper, we discussed the development process of a robotic system for environmental monitoring in search & rescue and demining applications. The system consists of an outdoor-capable UGV equipped with an active 3D sensing system and an UAS. By combining both types of vehicles, it is possible to rapidly get a good overview of the situation in the environment and to perform a life-saving task (finding and rescuing victims or detecting land mines). It is clear that this is still a work in progress; e.g., the robotic system does not yet contain task-specific sensors (human victim detector / mine detector). Also, a GPS system still needs to be integrated in the system for localisation purposes. From a research point of view, the objective is to completely integrate the UAS in the control system, such that the UAS will also assist in mapping the (traversability of the) environment, helping the UGV to navigate.

ACKNOWLEDGMENT

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements number 285417 (ICARUS) and 284747 (TIRAMISU).

REFERENCES

[1] G. De Cubber, D. Doroftei, H. Sahli and Y. Baudoin, Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera, RGB-D Workshop on 3D Perception in Robotics, 2011.
[2] D. Doroftei, G. De Cubber and K. Chintanami, Towards collaborative human and robotic rescue workers, Human Friendly Robotics, 2012.
[3] K. Richardson, Rescue robots - where were they in the Japanese quake relief efforts?, Engineering and Technology Magazine, vol. 6, nr. 4, 2011.
[4] G. De Cubber, Multimodal Terrain Analysis for an All-terrain Crisis Management Robot, in Proc. IARP HUDEM Workshop on Humanitarian Demining, 2011.
[5] R. Labayrade and D. Aubert, In-vehicle obstacles detection and characterization by stereovision, Int. Workshop on In-Vehicle Cognitive Comp. Vision Systems, 2003.
[6] telerob GmbH, EOD Robot tEODor - Product Description, http://www.xtek.net/assets/DOL/PDF/302601.pdf.