sensors Article
A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots

Tae Hyeon Nam 1, Jae Hong Shim 2,* and Young Im Cho 3

1 Department of Mechatronics Engineering, Graduate School, Korea Polytechnic University, Si-Heung City 15073, Korea; [email protected]
2 Department of Mechatronics Engineering, Korea Polytechnic University, Si-Heung City 15073, Korea
3 Department of Computer Engineering, Gachon University, Sung-Nam 13120, Korea; [email protected]
* Correspondence: [email protected]

Sensors 2017, 17, 2730; doi:10.3390/s17122730
Received: 23 October 2017; Accepted: 22 November 2017; Published: 25 November 2017
Abstract: Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green Blue-Depth) sensor and a 2D laser sensor mounted on an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the depth data of the RGB-D sensor, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed.

Keywords: SLAM; localization; ground robot; aerial robot; cooperation; low cost sensor; indoor
1. Introduction

When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a Simultaneous Localization and Mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the robot's current location on the map. For example, recently commercialized cleaning robots featuring on-board location recognition functions establish the location of the device as a reference point when it is turned on. The robot then draws a map with the information collected from built-in sensors (e.g., vision sensor, distance sensor), while simultaneously estimating its own location in real time to distinguish between areas that have been cleaned and areas that still need to be cleaned.

A variety of sensors, such as distance and vision sensors, are used in SLAM. Distance sensors typically include laser scanners, infrared scanners, ultrasonic sensors, LIDAR, and RADAR, whereas vision sensors include stereo cameras, mono cameras, omnidirectional cameras, and Kinect [1–4]. Distance sensors can easily obtain distance information about objects, but the type of obtainable data is limited. In contrast, vision sensors, which have recently been in the spotlight, can acquire a variety of information, such as distance and object information, through processing imagery; however, most of this information requires significant computation power. Generally, in the field of mobile robotics, the information obtained by the sensors is integrated with odometry (e.g., encoders,
inertial sensors), but research on performing SLAM with only vision sensors has also been actively carried out [5–9].

Various methodologies have been studied to resolve a variety of issues, including running the SLAM process in wide environments, handling the uncertainty of sensor observations, and achieving real-time execution. Representative methods are those based on the Kalman filter (KF) [10–12], the particle filter (PF) [13,14], graphs [15,16], and bundle adjustment (BA) [17]. The KF is difficult to apply to nonlinear systems because it assumes system linearity. Accordingly, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) are primarily used for typical nonlinear systems. The EKF is prone to divergence if the system is incorrectly designed, but it demonstrates favorable performance in navigation and nonlinear state estimation, such as in GPS. The PF expresses location uncertainty as groups of particles, referred to as samples. If the number of samples is sufficient, it is more accurate than the EKF or UKF, but issues can arise when the number is insufficient. There are also methods that simultaneously use the PF and the KF, with Rao-Blackwellized particle filter-based SLAM being representative of them.

Recently, SLAM studies have been carried out through cooperation of unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to achieve more accurate mapping by improving situational awareness and environment interaction [18–20]. A UGV may carry substantial payloads and can actively interact with its environment. In a search and rescue mission, a UGV can be sent to locations that are too dangerous for humans. It may reduce the time required to reach victims by removing the need to secure the area first. However, the operator only receives limited information about the UGV's surroundings owing to its low viewpoint. Other objects at the height of the robot may block the operator's view, increasing the difficulty of navigating within hazardous environments. A UAV, on the other hand, can provide a situational assessment of the environment through its ability to cover large areas quickly with its bird's-eye view. These data enable global navigation of the UGV in potentially unknown and challenging terrain. Therefore, working in a heterogeneous team of UAVs and UGVs enhances the capabilities of the robots to support human rescue teams. If map generation and location recognition are performed using the UAV, the entire map can be generated without being significantly affected by the features of the terrain. However, a UAV is not suitable for carrying out tasks such as lifesaving and obstacle removal, while a UGV is. For map generation and location recognition, the UGV is considerably affected by terrain features and, especially in case of a disaster, it is very difficult to build a useful map owing to the existence of non-drivable areas. Therefore, in recent years, research on methods of sharing location and map information using the complementary features of the two robots has been conducted [21–23].

Previous work on collaborative tasks between aerial and ground robots often involved performing the entire perception process on one robot and guiding the other through the mission. For example, a flying robot can track a ground robot using a QR code [24], visual markers [25,26], or edge detection [27].
Other works used a camera on a ground robot to track LEDs on a flying robot [28,29]. Additionally, many approaches are based on collaborative navigation in large outdoor environments with at least partial availability of GPS [30,31]. These approaches are therefore not applicable in indoor scenarios or in locations without GPS, such as forests or urban street canyons. In addition, even if a GPS signal is available, the position is often inaccurate and provides no heading direction. On the other hand, methods to globally localize planetary rovers in the absence of GPS exist for space exploration. For this purpose, visually detectable landmarks [32] or elevation maps generated from the rover's surroundings are matched to a global map. The elevation maps are either searched for topographic peaks [33] or compared directly using a zero-mean normalized cross-correlation [34].
In the context of UGV–UAV collaboration, Rudol [35] presents a system in which a UGV guides a UAV to navigate in an indoor environment by using an LED cube structure pattern attached to the UAV. In [36], the authors propose a cooperative control framework for real-time task generation and allocation for a hierarchical UAV/UGV platform. Michael et al. [37] introduced a collaborative mapping method using a 3D laser scanner for investigating the damage caused by an earthquake. However, as it used the point cloud of a 3D laser scanner, the vast amount of data led to lengthy map generation; moreover, when the scanner is mounted on a UAV, it has very high power consumption. Oleynikova et al. [38] proposed real-time localization of a UAV with a UGV using inertial and vision sensors; to synchronize the initial positions of the two robots, the UAV performs takeoff and landing on top of the UGV. Early work on cooperative localization and mapping extended the well-studied single-robot SLAM problem to multiple robots. Vidal-Calleja et al. [39] proposed multi-robot visual mapping with heterogeneous landmarks in GPS-denied and unexplored environments. Images captured from a UGV were matched to those from a satellite or high-flying vehicle: the warped ground images are matched to the satellite map using descriptor-based feature matching techniques. For feature matching, they used a PF; however, if the data are limited, the result can be inaccurate.

The data format changes depending on the type of sensor used to generate a map for cooperation of the two robots. Usually, the majority of 3D sensors, such as 3D LRFs, are very costly. When raw 3D point cloud data are used, real-time location recognition and navigation become difficult as the volume of data increases and more computation is needed. On the other hand, when using 2D laser scanner sensors, it takes less time to create a map, but there are instances in which the information needed for the mobile robot's autonomous navigation is not fully incorporated, because the scanner can only recognize obstacles at a certain height. There can also be instances in which the scanner recognizes navigable slopes as obstacles, as it cannot adequately incorporate the needed height information.

In this paper, we propose a method for accurate map generation and real-time location recognition of a mobile robot in indoor environments by combining the advantages of 3D and 2D maps through cooperation between a UAV and a UGV. We used an RGB-D (Red Green Blue-Depth) sensor, which is a low-cost 3D sensor, and installed it on a UAV to generate a 3D map. To reduce the computational burden for real-time localization, the 3D map was transformed into a 2D map, and the depth values included in the 3D map were used to estimate the heights of obstacles. By including the height information in the 2D map, a 2.5D elevation map was generated. Moreover, to improve the accuracy of the generated 2.5D map, a low-cost 2D LRF sensor was used, and the position values of the 2D LRF were combined with the location information obtained by the RGB-D sensor. Through a series of experiments, it was demonstrated that the method using a 2.5D map built by a 2D LRF sensor and an RGB-D sensor installed on a UAV effectively performs the mobile robot's localization and navigation tasks.
2. Comparison of RGB-D and LRF SLAM on a UGV

This section describes the implementation of the RGB-D SLAM and LRF SLAM that are usually used for mobile robots in indoor environments, and compares the results of each SLAM method. Figure 1 presents the Turtlebot 2 from Yujin Robot, which was used as the mobile robot in the experiment. The height of the robot is 50 cm. A 2D LRF and an RGB-D sensor were installed on the robot, at heights of 57 cm and 45 cm, respectively. The LRF sensor (URG-04LX from HOKUYO) has a measurable angle of 240 degrees, a distance resolution of 1 mm, and an angular resolution of 0.3515 degrees. The RGB-D sensor has a measurable angle of 57 degrees, a distance resolution of 2.5 to 4.8 mm, and an angular resolution of 0.097 degrees.
Figure 1. Ground robot used in experiment.
Figure 2 shows the designed experimental environment. The robot navigated back and forth on a 10-m section of the straight black line. The location errors were calculated from the difference between its actual location and the location estimated by SLAM after navigation. As shown in the figure, obstacles such as a sloping hill, a tree, and two boxes were installed in the experimental environment. The experiment shown in Figure 2 was carried out in an indoor environment whose dimensions were 15 m (length) × 1.5 m (width) × 2.5 m (height).
Figure 2. Experimental environment.
This study used the ICP (Iterative Closest Point) algorithm for the location recognition and mapping of the robot using its sensors. The ICP algorithm generates a map via point-to-point matching: the Euclidean distance between the data measured in real time by the sensors and the previous data is used to find the closest matched pairs. The scan matching program using the ICP algorithm makes a one-to-one comparison of N reference data points and N new data points. During the matching process, scan matching is performed if the location and rotation errors are larger than a threshold value. If the errors are smaller than the threshold value and the scans are determined to be identical, the scan matching concludes. For scan matching, the rotation matrix (R) and translation matrix (t) should be found, and the error function E is calculated as in Equation (3) [40,41].
The following are the two data sets for scan matching and the error function E:

X = \{x_1, x_2, \ldots, x_n\}    (1)

P = \{p_1, p_2, \ldots, p_n\}    (2)

E\{R, t\} = \frac{1}{N_p} \sum_{i=1}^{N_p} \| x_i - R p_i - t \|^2    (3)

Here, N_p is the number of data points, and x_i, p_i are the matching points. The centers of the data sets must be found to calculate the errors in the data values:

\mu_x = \frac{1}{N_x} \sum_{i=1}^{N_x} x_i    (4)

\mu_p = \frac{1}{N_p} \sum_{i=1}^{N_p} p_i    (5)

The deviations of the data values are calculated from the differences between the data and the centers of the data sets, as shown in Equations (6) and (7):

X' = \{ x_i - \mu_x \} = \{ x_i' \}    (6)

P' = \{ p_i - \mu_p \} = \{ p_i' \}    (7)

Next, the SVD (Singular Value Decomposition) is used to find the optimal solutions for the rotation matrix and the translation matrix. Below is the matrix W built from the matching pairs:

W = \sum_{i=1}^{N_p} x_i' {p_i'}^T    (8)

W = U \begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix} V^T    (9)

Here, U and V are 3 × 3 orthogonal matrices, and \sigma_i are the singular values of matrix W, which satisfy \sigma_1 \ge \sigma_2 \ge \sigma_3. If rank(W) = 3, the optimal solution for E\{R, t\} is calculated using the formulas below:

R = U V^T    (10)

t = \mu_x - R \mu_p    (11)

E\{R, t\} = \sum_{i=1}^{N_p} \left( \| x_i' \|^2 + \| p_i' \|^2 \right) - 2(\sigma_1 + \sigma_2 + \sigma_3)    (12)
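As a concrete illustration of Equations (1)–(12), the following Python sketch (not taken from the paper) computes one SVD-based alignment step for two scans whose points are assumed to be already matched index-to-index; in a full ICP loop the correspondences would be re-estimated and the step repeated until the error falls below the threshold.

```python
import numpy as np

def icp_svd_step(X, P):
    """One point-to-point alignment step following Equations (1)-(12).

    X, P : (N, 3) arrays of matched points; row i of P corresponds to row i of X.
    Returns the rotation R, translation t, and residual error E{R, t}.
    """
    # Centers of the two data sets, Equations (4) and (5)
    mu_x = X.mean(axis=0)
    mu_p = P.mean(axis=0)

    # Deviations from the centers, Equations (6) and (7)
    Xd = X - mu_x
    Pd = P - mu_p

    # Cross-covariance matrix of the matched pairs, Equation (8)
    W = Xd.T @ Pd          # 3 x 3

    # Singular value decomposition, Equation (9)
    U, sigma, Vt = np.linalg.svd(W)

    # Optimal rotation and translation, Equations (10) and (11)
    R = U @ Vt
    t = mu_x - R @ mu_p

    # Residual alignment error, Equation (12)
    E = np.sum(Xd ** 2) + np.sum(Pd ** 2) - 2.0 * sigma.sum()
    return R, t, E
```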
Figure 3 shows the 2D map generated using the ICP algorithm and the 2D LRF sensor. Objects over a certain height were recognized as obstacles, but objects of relatively lower height, such as the boxes, were not. Because the 2D LRF SLAM uses the mobile robot's encoder values as odometry and a slope was located on the path, a location error of approximately 30 cm on average was observed. Table 1 shows the location recognition errors according to the number of the mobile robot's round trips along the path of Figure 2.
Figure 3. 2D map information generated by using the 2D LRF sensor.

Table 1. Location recognition errors according to the number of the mobile robot's navigation in case of 2D LRF SLAM.

Navigation Trial    Location Recognition Error
1                   32 cm
2                   32 cm
3                   31 cm
4                   31 cm
5                   27 cm
6                   32 cm
7                   28 cm
8                   30 cm
9                   33 cm
10                  32 cm
Average             30.8 cm

For the SLAM using the 2D LRF, instances occurred in which the robot was unable to recognize obstacles of lower height, so SLAM was also performed using an RGB-D camera to overcome this limitation. Figure 4a shows a 3D map generated by using an RGB-D camera installed on the mobile robot under an experimental environment identical to that of Figure 2. Figure 4b shows the navigation trajectory of the mobile robot, which was driven 10 times under the same experimental environment. Table 2 lists the location recognition errors based on the robot's navigation trials applying the RGB-D SLAM. The location recognition errors were measured as the differences between the actual location and the estimated location of the robot after SLAM-based navigation.

In the case of RGB-D SLAM, the robot was able to recognize obstacles of lower height. Further, as the RGB-D SLAM does not rely solely on the wheel encoder information, it was able to reduce errors during the slope navigation by using visual odometry information for correction. Despite the corrections, however, an error of approximately 5–10 cm occurred due to vibrations of the robot and camera movement during the slope navigation. Moreover, due to the 3D mapping, there was a significant increase in data computation, and the processing speed was about 3–4 fps (frames per second).
Figure 4. (a) 3D map made by an RGB-D camera of the mobile robot; (b) navigation trajectory of the UGV.

Table 2. Location recognition errors according to the number of the mobile robot's navigation in case of RGB-D SLAM.

Navigation Trial    Location Recognition Error    Computing Speed (FPS)
1                   5 cm                          3
2                   6 cm                          3
3                   8 cm                          3
4                   5 cm                          4
5                   7 cm                          3
6                   5 cm                          3
7                   9 cm                          3
8                   8 cm                          4
9                   6 cm                          3
10                  7 cm                          4
Average             6.6 cm                        3.3

Through the two experiments, this study verified that topography and the presence of obstacles cause location recognition errors in LRF and RGB-D SLAM using a mobile robot. Thus, when using a mobile robot for mapping, a navigation algorithm that allows avoidance of obstacles is absolutely needed. To address these issues, this study performed SLAM using an aerial robot, which is virtually unaffected by geographic features, and an RGB-D camera that includes information on various obstacles. Figure 5 shows the DJI F450 series aerial robot used in this study.
Figure 5. Aerial robot used in this study.
Figure 6a,b show the mapping information and navigation trajectory generated using an RGB-D camera and the aerial robot. Table 3 lists the location errors and the computing speed based on the robot's navigation number within the experimental environment.
Figure 6. (a) 3D map made by an RGB-D camera of the UAV; (b) navigation trajectory of the UAV.

Table 3. Location recognition errors and computing speed according to the number of the UAV's navigation with RGB-D SLAM.

Navigation Trial    Location Recognition Error    Computing Speed (FPS)
1                   2 cm                          3
2                   2 cm                          2
3                   4 cm                          3
4                   1 cm                          4
5                   3 cm                          3
6                   2 cm                          3
7                   1.6 cm                        3
8                   2 cm                          3
9                   1 cm                          3
10                  2 cm                          3
Average             2.1 cm                        3
This study conducted location recognition using the aerial robot's IMU sensor and visual odometry to generate a map with the aerial robot and RGB-D sensor. It verified that the aerial robot was not affected by geographic features, unlike the mobile robot, and thus reduced the location recognition errors. However, owing to the large amount of computation, map generation took a long time, at a speed of about 3 fps. Even though the processing speed of the 2D LRF SLAM is relatively high, there were instances in which obstacle recognition issues occurred. Considerably large location recognition errors also occurred during map generation with the mobile robot, owing to geographic features such as the slope. To overcome these shortcomings, this study used an aerial robot, which has a relatively free range of motion, and an RGB-D sensor that can obtain 3D depth information. However, real-time location recognition was difficult when using 3D map information to detect obstacles of various features because of the large amount of computation required. To resolve these issues, this study proposes a 2.5D mapping method, which incorporates the height information contained in the 3D map built by the RGB-D camera installed on the aerial robot into a 2D map drawn with the 2D LRF.
3. 2.5D Elevation Mapping

An elevation map is a 2.5D map that incorporates height information into a 2D map. Recently, elevation maps have been used to reduce the amount of information and improve the processing speed of SLAM algorithms by converting the height information of 3D maps into 2D maps. In [42], elevation mapping that uses imagery was proposed; however, as a large amount of pixel data is involved, generating an elevation map during real-time navigation becomes difficult. Further, when using a 3D laser scanner to obtain 3D information, in many instances the sensor price is high and it is difficult to generate a 2D elevation map during real-time navigation [43,44]. Another method involves installing a sonar sensor and a distance sensor on an aerial robot and generating the map by stacking 2D scan information based on height [14]. In such an instance, the height measurement error of the sensor leads to errors in the map information. Accordingly, this study used a low-cost RGB-D sensor (Xtion from ASUS) to generate an elevation map. In addition, this study used a 2D LRF to increase the accuracy of the robot's location estimation and the 2D matching of the elevation map.

This study used an occupancy grid map to generate the 2D map [45]. An occupancy grid map is a grid block map, which is the most typical map concept used for generating 2D maps. Each block expresses, in white and black, the geographic features and obstacles surrounding the robot detected by the robot's sensors. Figure 7 is an image that simply expresses an occupancy grid map.
Figure 7. Occupancy grid map.
Figure 7 depicts laser-range sensor information as an occupancy grid. The black blocks, white blocks, and grey blocks respectively indicate occupied area, unoccupied area where navigation is possible, and unexplored area.
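As a concrete illustration of this representation (not part of the paper), an occupancy grid can be stored as a small integer array in which each cell holds one of the three states; the particular encoding below is an assumption of ours.

```python
import numpy as np

OCCUPIED, FREE, UNKNOWN = 1, 0, -1

# A 5 x 5 grid: everything starts unexplored, then a wall segment and the
# free space in front of it are filled in from a hypothetical laser scan.
grid = np.full((5, 5), UNKNOWN, dtype=np.int8)
grid[0, :] = OCCUPIED        # black cells: occupied by an obstacle
grid[1:4, 1:4] = FREE        # white cells: navigable area
print(grid)
```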
Described below is the process that generates the 2D occupancy grid map, estimates the height of an obstacle from the sensor data, and incorporates the height information into the 2D grid map [46,47]. As shown in Equation (13), an occupancy grid map is determined by multiplying the probability values of its cells:

P(M \mid X_{0:t}, Z_{0:t}) = \prod_{n} p(m_n \mid X_{0:t}, Z_{0:t})    (13)

Here, X_{0:t} is the robot's location over time, and Z_{0:t} is the sensor measurement at each location. The value of each block m_n is calculated using Equations (14) and (15):

p(m_n \mid X_{0:t}, Z_{0:t}) = 1 - \frac{1}{1 + e^{l_{t,n}}}    (14)

l_{t,n} = l_{t-1,n} + \log \frac{p(m_n \mid x_t, z_t)}{1 - p(m_n \mid x_t, z_t)} - \log \frac{1 - p(m_n)}{p(m_n)}    (15)
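A minimal sketch of the per-cell update of Equations (14) and (15); the function names and the default prior value are illustrative choices of ours, and the inverse sensor model p(m_n | x_t, z_t) is assumed to be supplied by the caller.

```python
import math

def update_cell_log_odds(l_prev, p_meas, p_prior=0.5):
    """Log-odds update of one grid cell m_n, following Equation (15) as printed.

    l_prev  : previous log-odds value l_{t-1,n} of the cell
    p_meas  : p(m_n | x_t, z_t), the inverse sensor model for the current scan
    p_prior : p(m_n), the prior occupancy of the cell (0.5 means "no prior bias")
    """
    return (l_prev
            + math.log(p_meas / (1.0 - p_meas))
            - math.log((1.0 - p_prior) / p_prior))

def cell_probability_from_log_odds(l):
    """Recover the occupancy probability of Equation (14) from the log-odds value."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```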
Here, p(m_n) is the prior value of the block m_n, and p(m_n \mid X_{0:t}, Z_{0:t}) is the probability of the block m_n calculated using the sensor values Z_{0:t} and the robot's locations X_{0:t} over time.

To incorporate height information into the generated 2D grid map, this study used the Kalman-filter-type formulas given in Equations (16) and (17) to estimate obstacle height:

\mu_{0:t} = \frac{\sigma_t^2 \mu_{0:t-1} + \sigma_{0:t-1}^2 h_t}{\sigma_{0:t-1}^2 + \sigma_t^2}    (16)

\sigma_{0:t}^2 = \frac{\sigma_{0:t-1}^2 \sigma_t^2}{\sigma_{0:t-1}^2 + \sigma_t^2}    (17)
Here, h_t is the measured value of the sensor, and \sigma_t^2 is the variance of the noise included in that measurement. The formulas above are used to calculate the height value \mu_{0:t} and the variance of the height error \sigma_{0:t}^2.

Next, an elevation map is generated through the probability model described below to include the height values of Equation (16) in the map M:

P(M \mid X_{0:t}, Z_{0:t}, \mu_{0:t}, \sigma_{0:t}^2) = \prod_{n} p(m_n \mid X_{0:t}, Z_{0:t}, \mu_{0:t}, \sigma_{0:t}^2)    (18)

p(m_n \mid X_{0:t}, Z_{0:t}, \mu_{0:t}, \sigma_{0:t}^2) = 1 - \frac{1}{1 + e^{l_{t,n}}}    (19)

l_{t,n} = l_{t-1,n} + \log \frac{p(m_n \mid x_t, z_t, \mu_{0:t}, \sigma_{0:t}^2)}{1 - p(m_n \mid x_t, z_t, \mu_{0:t}, \sigma_{0:t}^2)} - \log \frac{1 - p(m_n)}{p(m_n)}    (20)

A 2D grid map was generated as the map M, using the robot locations X_{0:t} over time and the 2D LRF sensor values Z_{0:t} recorded at those locations. The KF-type formulas were used to add the height information, namely the calculated height value \mu_{0:t} and its variance \sigma_{0:t}^2. Moreover, when navigating, the mobile robot must be able to recognize slopes as navigable areas and stairs as obstacles by distinguishing between the two. The robot obtains the height change of an obstacle from the current height value \mu_{0:t} and the previous height value \mu_{0:t-1}, which are used to distinguish slopes from stairs. As shown in Equation (21), a cell is recognized as an obstacle when the threshold angle \phi is exceeded, and is ignored when the change is equal to or less than the threshold:

f(x) = \begin{cases} \text{Recognition}, & \mu_{0:t} - \mu_{0:t-1} > \phi \\ \text{Ignore}, & \mu_{0:t} - \mu_{0:t-1} \le \phi \end{cases}    (21)
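The height filter of Equations (16) and (17) and the decision rule of Equation (21) can be sketched as follows; the function names are ours, not the authors'.

```python
def update_height(mu_prev, var_prev, h_t, var_t):
    """Kalman-filter-type height update, Equations (16) and (17).

    mu_prev, var_prev : height estimate and its variance up to time t-1
    h_t, var_t        : new height measurement and its noise variance
    """
    denom = var_prev + var_t
    mu = (var_t * mu_prev + var_prev * h_t) / denom   # Equation (16)
    var = (var_prev * var_t) / denom                  # Equation (17)
    return mu, var

def is_obstacle(mu_t, mu_prev, phi):
    """Equation (21): treat a cell as an obstacle only if the height step
    between consecutive estimates exceeds the threshold phi."""
    return (mu_t - mu_prev) > phi
```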
An aerial robot was flown to generate the elevation map. Data from the RGB-D sensor and the 2D LRF sensor are gathered to calculate the robot's current location and to correct location recognition errors. Scan matching with the LRF scan information was then used to correct errors in the location estimate. With the obstacle height calculated from the RGB-D depth information, the slope of the obstacle was also calculated; when the slope exceeds the critical-angle threshold, it is recognized as an obstacle, and otherwise it is ignored. In addition, another process was needed to project the height information obtained with the RGB-D sensor onto the LRF-generated grid map. It is difficult to generate 2D map information when using only a 3D RGB-D sensor, and the RGB-D sensor used in this study was not suitable for scan matching because of its small angle of view. To overcome this shortcoming, a 2D grid map was generated with the LRF sensor, and the height information obtained by the RGB-D sensor was mapped onto the 2D map.

Described below is the process of using the height information measured by the RGB-D sensor to redefine the probability values of the grid map generated earlier by the 2D LRF sensor, and of mapping the 3D coordinates into 2D coordinates [48]. P_c in Equation (22) is the depth value obtained through the RGB-D sensor:

P_c = (x_c, y_c, z_c)^T    (22)

The estimated distance l_p of a depth pixel is:

l_p = \sqrt{x_c^2 + y_c^2 + z_c^2}    (23)

The following equations define the probability value s of each cell of the 2D grid map:

P(s(l)) = P_{occ}(l) + \left( \frac{k}{\Delta l_p \sqrt{2\pi}} + 0.5 - P_{occ}(l) \right) e^{-\frac{1}{2} \left( \frac{l - l_p}{\Delta l_p} \right)^2}    (24)

P_{occ}(l) = \begin{cases} p_{min}, & \text{if } 0 < l < l_p \\ 0.5, & \text{if } l > l_p \end{cases}    (25)

Here, l and k are the distance and angle measured by the LRF sensor, respectively. The occupancy function P_{occ} compares the distance measured through the RGB-D sensor with the distance measured through the LRF sensor and redefines the probability value s of each cell of the 2D grid map. If the distance measured through the RGB-D sensor is smaller than that through the existing LRF sensor, the probability is defined as 0.5. The reason for this is that a grid cell is marked as a navigable area if its value is higher or lower than 0, and as an unverified area if the value is 0. The function L(s(l)) updates the grid map through a log function and the probability value redefined above:

L(s(l)) = \ln \left( \frac{P(s(l))}{1 - P(s(l))} \right)    (26)
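Under the same caveat, a minimal sketch of Equations (22)–(26); p_min, Δl_p, and k follow the symbols in the equations, while the default value chosen for p_min and all function names are placeholders of ours.

```python
import math

def depth_range(pc):
    """Equations (22)-(23): range l_p of a depth pixel P_c = (x_c, y_c, z_c)."""
    xc, yc, zc = pc
    return math.sqrt(xc * xc + yc * yc + zc * zc)

def p_occ(l, l_p, p_min=0.2):
    """Equation (25): occupancy prior from comparing the LRF range l with the
    RGB-D range l_p (0.5 when the RGB-D range is the smaller of the two)."""
    return p_min if 0.0 < l < l_p else 0.5

def cell_probability(l, k, l_p, delta_lp, p_min=0.2):
    """Equation (24): redefined probability of a grid cell."""
    occ = p_occ(l, l_p, p_min)
    gauss = math.exp(-0.5 * ((l - l_p) / delta_lp) ** 2)
    return occ + (k / (delta_lp * math.sqrt(2.0 * math.pi)) + 0.5 - occ) * gauss

def log_update(p_s):
    """Equation (26): log-odds form used to update the grid map."""
    return math.log(p_s / (1.0 - p_s))
```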
Finally, a 2.5D elevation map is generated by projecting the object height information, calculated using the RGB-D sensor, onto the 2D grid map information obtained by the LRF sensor. Figure 8 is a flow chart of the algorithm used for generating the 2.5D elevation map.
Figure 8. Flow chart for creating the 2.5D elevation map.
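As a structural summary only, the flow of Figure 8 might be organized as below; every helper name is a placeholder standing in for the corresponding flow-chart block, and the earlier sketches (ICP alignment, log-odds update, height filter, depth projection) are assumed to sit behind them.

```python
def build_elevation_map_step(grid, pose_estimator, lrf_scan, depth_frame, phi):
    """One pass of the 2.5D elevation mapping loop outlined in Figure 8.

    Every helper called here is a placeholder for the matching flow-chart
    block; none of the names come from the paper itself.
    """
    # 1. Predict the aerial robot pose and refine it by ICP scan matching.
    pose = pose_estimator.predict()
    pose = pose_estimator.correct_with_scan(pose, lrf_scan)

    # 2. Update the 2D occupancy grid from the LRF scan (Equations (13)-(15)).
    grid.update_occupancy(pose, lrf_scan)

    # 3. Update per-cell height estimates from the RGB-D depth (Equations (16)-(17)).
    height_changes = grid.update_heights(pose, depth_frame)

    # 4. Keep only cells whose height step exceeds the threshold angle (Equation (21)).
    obstacle_cells = [cell for cell, (mu_t, mu_prev) in height_changes.items()
                      if mu_t - mu_prev > phi]

    # 5. Project the RGB-D height information onto the LRF grid (Equations (22)-(26)).
    grid.project_heights(pose, depth_frame, obstacle_cells)
    return grid
```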
4. Experiments and Discussion

For accurate, real-time location recognition, this study used the aerial robot of Figure 5, which can explore a vast indoor area without being affected by geographic features, and built the 2.5D elevation map proposed in Section 3, as shown in Figure 9. Through a series of experiments, we checked the map building time and computing speed of the algorithm by comparing the 3D map generated by the RGB-D sensor with the proposed 2.5D elevation map. Furthermore, this study also intended to verify that the proposed 2.5D map could provide information on various obstacles by incorporating height information, comparing the result with that of the 2D map generated using an LRF sensor.

To compare the accuracy of map generation and location recognition of the RGB-D SLAM, the LRF SLAM, and the algorithm proposed in this paper, the robot navigated back and forth over a fixed distance (10 m) in the experimental environment shown in Figure 10a. After the navigation, the error between the robot's actual location and the estimated location was calculated. Moreover, as shown in Figure 10b, the obstacles included a tree, fire extinguisher, tin pail, ramp, and stairs.

Figure 11 shows the map generated by using the 2D LRF sensor installed on the aerial robot, whose flying height was set to 1 m. The results of the map generation show that the tree,
stairs, and ramp that were located higher than the flying height of the aerial robot were recognized as obstacles, but the objects located lower than that were not recognized. Figure 12 represents the 3D map generated using the RGB-D sensor of the aerial robot, showing that it was possible to recognize even the ramp, stairs, and objects of lower height, unlike with the 2D LRF SLAM. Table 4 lists the location recognition errors and computing speeds of the 2D LRF SLAM and the RGB-D SLAM. It shows that the 2D LRF SLAM was faster in its average computing speed (26 fps) than the RGB-D SLAM (3–4 fps). The average location recognition errors of the 2D LRF SLAM and the RGB-D SLAM were 2.3 cm and 4.6 cm, respectively.
Figure 9. Relationship between the aerial robot and the ground robot in this study.
Figure 10. (a) Experiment environment; (b) layout of objects in the experimental environment.
Figure 11. 2D map built using the LRF sensor.
Figure 12. 3D map created using the RGB-D sensor of the aerial robot.
Table 4. Comparison of 2D LRF SLAM and RGB-D SLAM.

Navigation Trial    Location Recognition Error        Computing Speed (FPS)
                    2D LRF        RGB-D               2D LRF        RGB-D
1                   3 cm          4 cm                25            3
2                   2 cm          5 cm                27            4
3                   2 cm          5 cm                25            3
4                   2 cm          5 cm                25            3
5                   3 cm          5 cm                25            3
6                   2 cm          4 cm                24            3
7                   3 cm          4 cm                26            3
8                   2 cm          5 cm                25            3
9                   2 cm          5 cm                24            3
10                  2 cm          4 cm                25            3
Average             2.3 cm        4.6 cm              25.1          3.1
Figures 13 and 14 respectively show the height information of the 3D map built using the RGB-D sensors and the 2D grid map generated using the LRF sensor.
Figure 13. Height information of 3D map obtained using RGB-D sensors.
Figure 14. 2D grid map information generated using 2D LRF.
Figure 15 shows the 2.5D elevation map generated with the proposed algorithm by combining the height information of Figure 13 and the 2D grid map of Figure 14. Table 5 presents the location recognition errors and computing speeds of the SLAM based on the proposed 2.5D elevation map. The 2.5D elevation map-based SLAM showed a location recognition error of approximately 3 cm, similar to that of the 2D LRF-generated map, and a computing speed of approximately 19 fps, which is six times faster than that of the RGB-D SLAM-based map.

Next, a navigation experiment was conducted to check whether the tasks of avoiding the obstacles and travelling up the slope were possible when the mobile robot used the proposed SLAM algorithm. The black line of Figure 15 shows the traveling trajectory of the mobile robot using the 2.5D elevation map. Figure 16 separately shows the grid map of the 2.5D elevation map so that the navigation results presented above can be seen more clearly. The red dot and green line respectively refer to the destination and the actual navigation route traveled by the mobile robot.
Figure 15. 2.5D elevation map generated through mapping of height information of 3D map and 2D grid map.
Table 5. Location recognition errors and computing speeds of the proposed 2.5D elevation map-based SLAM.

Navigation Trial    Location Recognition Error    Computing Speed (FPS)
1                   3 cm                          19
2                   2 cm                          20
3                   3 cm                          19
4                   3 cm                          18
5                   3 cm                          19
6                   3 cm                          19
7                   2 cm                          20
8                   3 cm                          19
9                   3 cm                          19
10                  3 cm                          19
Average             2.8 cm                        19.1
Figure 16. Mobile robot's navigation results using the topographical information of elevation map on the 2D grid map.
The results of the navigation show that the robot avoided and circumvented the lower parts of the stairs that were not recognized in the 2D map generated using the LRF sensor. The robot was also able to reach its destination by confirming the navigable areas of the ramp and taking that route.

If a moving obstacle enters the sensor's working area, its presence can be determined by comparing the similarity between the current frame and the previous one using the 2D LRF sensor attached to the mobile robot; however, it is not registered as an obstacle during map renewal, since the sensor cannot treat it as the closest stable point. Nevertheless, if the obstacle is moving but stays in place for at least 10 frames, it is recognized as an obstacle and marked as an impassable point on the renewed map.
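A minimal sketch of this frame-persistence rule, assuming a simple per-cell counter; the 10-frame threshold is the value quoted above, while the class and its interface are purely illustrative.

```python
from collections import defaultdict

class ObstaclePersistenceFilter:
    """Mark a grid cell impassable only after it has been observed as occupied
    for a minimum number of consecutive frames (10 in the text above)."""

    def __init__(self, min_frames=10):
        self.min_frames = min_frames
        self.counts = defaultdict(int)

    def update(self, occupied_cells):
        """occupied_cells: set of (row, col) cells hit by the 2D LRF this frame.
        Returns the set of cells to mark as impassable on the renewed map."""
        for cell in list(self.counts):
            if cell not in occupied_cells:
                del self.counts[cell]          # the object moved away; reset its count
        for cell in occupied_cells:
            self.counts[cell] += 1
        return {c for c, n in self.counts.items() if n >= self.min_frames}
```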
The results of Tables 4 and 5 are summarized in Table 6 to compare the performance of the existing methods with that of the one proposed in this paper. The proposed method did not show much difference in location recognition error compared to the 2D LRF SLAM and the RGB-D SLAM, but its computing speed was about six times faster than that of the RGB-D SLAM.

Table 6. Comparison of 2D LRF SLAM, RGB-D SLAM and the proposed method.

Experimental Results    Location Recognition Error (cm)                          Computing Speed (FPS)
                        2D LRF SLAM   RGB-D SLAM   The Proposed Method           2D LRF SLAM   RGB-D SLAM   The Proposed Method
Average                 2.3           4.6          2.8                           25.1          3.1          19.1
Ratio                   0.5           1            0.61                          8.1           1            6.2
In order to show the expandability of the proposed localization method, we carried out an experiment in an indoor environment with a more complicated shape, such as an L-shaped corridor. As shown in Figure 17, the experimental environment includes obstacles of various heights, including a tree, chair, stairs, slope, and fire extinguisher. The length of the moving path of the robot was 60 m, and the path includes an L-shaped section leading to the straight course. In order to measure the localization recognition error, we calculated the error as the difference between the real location of the robot and the estimated value after running the robot from the starting point to the goal.
Figure 17. (a) Experimental environment layout with wide, complex indoor configuration; (b) view from 1; (c) view from 2; (d) view from 3.
When the map was generated by using the 2D LRF, as shown in Figure 18, some obstacles were detected and others were not, depending on the flying altitude of the aerial robot. The aerial robot, which operated at a height of 1 m, detected tall obstacles (tree, upper stairs, upper part of
the slope, garbage can), but not the lower ones. For the 2D LRF SLAM, the location recognition error was about 2~3 cm and the computing speed was about 25~30 FPS, which is quite fast.
Figure 18. 2D map built using the LRF sensor.
When the 3D map was generated with the RGB-D sensor, as shown in Figure 19, all obstacles were detected regardless of their height. The 3D RGB-D SLAM showed a localization recognition error of about 3~4 cm, which was similar to that of the 2D LRF SLAM, but a slow computing speed of about 2~3 FPS.
Figure 19. 3D map created using the RGB-D sensor of the aerial robot.
We generated the 2.5D elevation map shown in Figure 20 for the experimental environment of Figure 17. Unlike the 2D map information shown in Figure 18, it could detect all obstacles, regardless of their height. It could also incorporate the height information of the obstacles, like the 3D map drawn by the RGB-D sensor shown in Figure 19. As shown in Table 7, the 2.5D elevation map-based SLAM showed a location recognition error of 2~3 cm and a computing speed of
Sensors 2017, 17, 2730
20 of 24
regardless of the height of the obstacle. Also, it could incorporate height information of the obstacle 202.5D of 24 by the RGB-D sensor as shown in Figure 19. As shown on Table 7, the Elevation Map-based SLAM showed 2~3 cm location recognition error and the computing speed of approximately approximately 19 19 fps, fps, which which is is slower slower than than that that of of 2D 2D LRF LRF SLAM SLAM (26 (26 fps), fps), but but approximately approximately seven seven times faster than thanthat that RGB-D SLAM. Therefore, according 7, the methodology times faster of of thethe RGB-D SLAM. Therefore, according to Tableto7,Table the methodology proposed proposed by this paperthat showed that it maintains localization and computing speed by this paper showed it maintains localization recognitionrecognition errors and errors computing speed at a certain at a certain level, regardless of the size of the application area. level, regardless of the size of the application area. Sensors 2017, 2370 drawn like the 3D17,map
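To make the projection step concrete, the following sketch (a simplified illustration under assumed inputs, not the authors' implementation; the function and parameter names are hypothetical) shows how height values from an already registered RGB-D point cloud can be projected onto a 2D grid so that each cell keeps the tallest obstacle observed above it, which is the essence of a 2.5D elevation map.

```python
import numpy as np

def build_elevation_map(points_xyz, resolution=0.05, size_m=20.0):
    """Project 3D points (N x 3, metres, map frame) onto a 2D grid and keep
    the highest z value per cell -- a simple 2.5D elevation map.
    Cells that receive no points remain at -inf (unknown)."""
    cells = int(size_m / resolution)
    elevation = np.full((cells, cells), -np.inf)
    # Convert x, y coordinates to grid indices (map origin at the grid centre).
    idx = np.floor(points_xyz[:, :2] / resolution).astype(int) + cells // 2
    valid = (idx >= 0).all(axis=1) & (idx < cells).all(axis=1)
    for (ix, iy), z in zip(idx[valid], points_xyz[valid, 2]):
        if z > elevation[ix, iy]:
            elevation[ix, iy] = z  # store the tallest obstacle seen in this cell
    return elevation

# Example: two points over the same cell -> the cell keeps the larger height
emap = build_elevation_map(np.array([[1.00, 2.00, 0.45], [1.02, 2.01, 0.80]]))
```

A traversability check for the ground robot could then compare each cell's stored height against the robot's clearance before path planning on the 2D grid.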
Figure 20. 2.5D elevation map generated by the proposed method.

Table 7. Comparison of 2D LRF SLAM, RGB-D SLAM and the proposed method in the experimental environment of Figure 17.
Navigation    Location Recognition Error (cm)             Computing Speed (FPS)
Trial         2D LRF SLAM  RGB-D SLAM  2.5D Elev. SLAM    2D LRF SLAM  RGB-D SLAM  2.5D Elev. SLAM
1             3            4           3                  25           2           19
2             4            3           2                  27           3           18
3             3            4           3                  27           2           18
4             2            4           3                  25           2           18
5             3            3           3                  30           4           19
6             2            4           3                  25           3           22
7             3            4           2                  24           2           19
8             2            3           3                  28           2           19
9             3            5           3                  24           3           18
10            3            4           3                  25           2           19
Average       2.8          3.8         2.8                26           2.5         18.9
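As a quick arithmetic cross-check of the averages reported in Table 7 (using the per-trial computing speeds as tabulated above; this is a sketch, not the authors' evaluation code), the snippet below recomputes the mean frame rates and the speed ratio between the proposed method and the RGB-D SLAM.

```python
lrf_fps  = [25, 27, 27, 25, 30, 25, 24, 28, 24, 25]   # 2D LRF SLAM
rgbd_fps = [2, 3, 2, 2, 4, 3, 2, 2, 3, 2]             # RGB-D SLAM
elev_fps = [19, 18, 18, 18, 19, 22, 19, 19, 18, 19]   # 2.5D elevation SLAM

mean = lambda v: sum(v) / len(v)
print(mean(lrf_fps), mean(rgbd_fps), mean(elev_fps))  # 26.0, 2.5, 18.9
print(mean(elev_fps) / mean(rgbd_fps))                # ~7.6, i.e., roughly seven times faster
```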
Figure 21 shows the traveling trajectory of the mobile robot using the 2.5D elevation map. The red dot and the green line refer to the destination and the actual navigation route traveled by the mobile robot, respectively. The robot was able to reach its destination without colliding with obstacles.
Figure 21. Mobile robot's navigation results using the topographical information of the elevation map on the 2D grid map.
5. Conclusions

This paper proposes a mobile robot localization technique through the cooperation of a UGV and a UAV. A map was generated using an RGB-D sensor and a 2D LRF sensor installed on an aerial robot. The generated map was used for the mobile robot's autonomous navigation and path generation. The existing 2D LRF SLAM is unable to detect certain obstacles owing to their height, while the RGB-D SLAM requires large amounts of data to be computed for 3D mapping, making the use of real-time algorithms difficult. To resolve these issues, this study used both methods simultaneously and proposed a new 2.5D elevation-mapping method that incorporates height information from 3D maps into a 2D map.

The effectiveness of the proposed method was validated through a series of experiments. The proposed 2.5D map-based SLAM demonstrated location recognition accuracy similar to that of the 2D LRF SLAM. It was also shown to be able to recognize obstacles lower in height, such as the stairs and the ramp, that were not recognized by the 2D LRF SLAM. With a computing speed approximately seven times faster than that of the RGB-D SLAM, it sheds light on the increased feasibility of real-time localization.

The results presented are encouraging and demonstrate the potential of collaborative robotic systems cooperating to provide accurate, real-time localization of mobile robots on SLAM-generated maps.

Future research is needed to generate a single, consolidated 2.5D map for larger indoor environments, such as multi-level buildings. By utilizing a number of aerial robots designed to explore different areas of a larger indoor environment, each aerial robot may generate 2.5D maps through the proposed localization methodology. The generated maps could then be combined to form a consolidated map covering the whole area of a large indoor environment.
Acknowledgments: This research was supported by the Priority Research Centers Program and the Intelligent Smart City Convergence Platform Project through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1A6A1A03015562, NRF-20151D1A1A01061271).

Author Contributions: Jae Hong Shim proposed the research topic. Tae Hyeon Nam developed the algorithms and performed the experiments; Jae Hong Shim and Young Im Cho analyzed the experimental results and wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.