Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, August 2015

Robot Calibration of Sensor Poses and Region Based Odometry Using Offline Optimisation of Map Information

Yanming Pei and Lindsay Kleeman
Department of Electrical and Computer Systems Engineering
Monash University, Clayton, Victoria 3800, Australia
{yanming.pei & lindsay.kleeman}@monash.edu

Abstract – The calibration of mobile robot odometry and sensor extrinsic parameters can significantly improve the accuracy of robot mapping and localisation. This paper reports on the simultaneous calibration of wheeled robot odometry and its onboard sensor extrinsic parameters. The calibration is achieved by offline optimisation of map quality using a particle swarm optimisation algorithm. The approach takes advantage of a state-of-the-art map quality metric for 2D occupancy grid maps and uses this quality measurement as the fitness value for the particle swarm optimisation algorithm. No ground truth map is required. Since odometry calibration depends on the floor surface type, the paper improves on previous calibration efforts by employing a novel floor colour classification system. Floor classification is based on a Support Vector Machine that achieves greater than 98% precision and recall on a testing dataset consisting of six different floors. We demonstrate the benefit of our proposed floor classification system in the calibration of odometry and sensor extrinsic parameters through real-world experiments. Furthermore, the consistency of our calibration method is validated experimentally with different datasets.

Index Terms – Particle swarm optimisation, robot odometry calibration, floor classification, colour sensor, SVM.

I. INTRODUCTION

Accurate localisation and sensing abilities are important for an autonomous mobile robot. In particular, a wheeled robot commonly uses odometry pose estimation for mapping and localisation. Robot odometry requires proprioceptive sensors, such as wheel encoders, and their calibration parameters, such as the effective wheel radii. However, those odometry parameters require frequent calibration since they depend on tyre pressure, wear, robot load and floor surface type [1]. Researchers have proposed various strategies to calibrate the odometry parameters. UMBmark [2] is an early and popular odometry calibration method because of its simplicity; however, the large square robot trajectory it requires is often impractical. Larsen et al. [3] and Martinelli et al. [4] proposed methods to calibrate the odometry parameters based on an extended Kalman filter (EKF). Censi et al. [5] were the first to simultaneously calibrate the odometry parameters and the sensor extrinsic parameters of mobile robots using an offline maximum-likelihood method. They assumed it was possible to estimate the sensor's egomotion (e.g. by scan matching with a laser rangefinder), which is not a requirement for our method. In 2013, Maye et al. [1] first proposed an online algorithm to calibrate the robot odometry and sensor extrinsic parameters, which detected and locked unobservable parameters in the parameter space and only updated the parameters in the observable directions. However, this method works for a robot with a single sensor observing distinct landmarks with known data associations at each time step. Our method can work with multiple sensors and does not require data associations, which are often not robust in practice.

When a sensor is mounted on a robot, the position and orientation of the sensor in robot coordinates are examples of extrinsic parameters. Levinson and Thrun [6] addressed extrinsic parameter calibration by defining an energy function on point clouds that penalised points far from the surfaces constructed from other measurements of a 3D laser rangefinder. They applied a grid search to optimise this cost function. Grid search methods are suitable for low-dimensional problems, but are inefficient for optimisation over many dimensions, such as the 12 dimensions in this paper. Similar to our approach, Sheehan et al. [7] used the Rényi quadratic entropy as a cost function to measure the degree of organisation of a point cloud, which was linked to the crispness of the point cloud of a 3D laser scanner. Their aim was to optimise extrinsic parameters. However, these papers do not calibrate the robot odometry parameters simultaneously. This is important since improved odometry can help improve sensor position estimation on a robot. We also allow for differences in the odometry calibration on different floor surfaces.

For floor surface classification, Best et al. [8] classified different terrains using a hexapod robot and the proprioceptive position sensors within its leg servos. A vision-based floor segmentation algorithm developed by Posada et al. [9] can suffer from low illumination and an inability to classify the floor immediately below the robot due to the camera position on top of the robot. We use a colour sensor with its own light source mounted under the robot in this paper. In the literature, the colour of an object has been widely used as a feature in fire-like object detection systems [10] and vehicle identification systems [11]. This paper introduces a colour sensor to classify floor surfaces with high accuracy and almost immediate notification of floor surface type changes.


This paper takes a novel approach to calibrating odometry and extrinsic sensor parameters by sensing and segmenting the environment into different floor surface types. The dependency of odometry calibration on the floor surface is often noted in the literature [12-14], but this is the first simultaneous sensor pose and odometry calibration approach to take direct account of this dependency. Our approach performs floor surface classification based on a Support Vector Machine (SVM) for a wheeled robot in order to continuously and simultaneously calibrate odometry and sensor parameters on different surfaces. The system can detect a transition to a new floor region immediately, which benefits not only the calibration of odometry and sensor extrinsic parameters but also the mapping process, because the system actively adapts the odometry parameters to estimate the robot pose more accurately. The state-of-the-art map information quality metric for occupancy grid (OG) maps is used as the fitness value in the calibration process, which uses particle swarm optimisation (PSO). To the authors' knowledge, this is also the first paper to apply PSO to odometry and sensor pose calibration problems. Note that the map information quality measure does not require any ground truth map.

II. RELATED WORK

A. Quality Metric for Occupancy Grid Maps

The Occupancy Grid (OG) map is a common representation of a map built from laser scan measurements and is useful for robot navigation and for measuring map quality [16]. The map consists of grid cells modelled as discrete random variables with two outcomes, occupied and free. The entropy of the information in an OG map cell can be calculated as below [15]:

$H(m_{ij}) = -p(m_{ij}) \log_2 p(m_{ij}) - \bar{p}(m_{ij}) \log_2 \bar{p}(m_{ij})$   (1)

where $p(m_{ij})$ denotes the occupancy probability of the cell with index $\langle i, j \rangle$ of the map $m$ and $\bar{p}(m_{ij}) = 1 - p(m_{ij})$. Stachniss et al. [17] used the entropy of the information in a grid map as a map quality metric. Blanco et al. [15] proposed the map information (MI) metric, $I(m)$, and showed that it better represents the quality of an OG map. The MI of an OG map $m$ is defined as:

$I(m) = \begin{cases} \sum_{\forall i,j} \left(1 - H(m_{ij})\right) / N & \text{(bits/cell)} & \text{if } N > 0 \\ 0 & \text{otherwise} \end{cases}$   (2)

where $N$ is the total number of cells that have been observed. The MI metric is largely independent of the grid resolution for the commonly used cell sizes between 1 cm and 10 cm [15]. This metric can be explained intuitively by noting that the better the alignment between observations (in our case, laser scans) in the map, the higher the MI value. For example, a cell with probability 0.5 contributes zero to the MI, whilst a probability close to 1 or 0 contributes nearly 1. The poses of the onboard sensors used for mapping and the odometry parameters of the robot affect how well the observations align with each other, and this alignment is what the MI measures.
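To make the metric concrete, below is a minimal sketch (ours, not the authors' implementation) of equations (1) and (2) over an occupancy grid stored as a NumPy array of probabilities, with a boolean mask marking observed cells:

```python
import numpy as np

def map_information(p_occ, observed):
    """Map information I(m) in bits/cell, after Blanco et al. [15], eq. (2).

    p_occ    : 2D array of occupancy probabilities p(m_ij)
    observed : boolean mask of cells updated by at least one observation
    """
    p = np.clip(p_occ[observed], 1e-9, 1.0 - 1e-9)  # guard against log2(0)
    if p.size == 0:
        return 0.0
    # Per-cell entropy, eq. (1): H = -p*log2(p) - (1-p)*log2(1-p)
    h = -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)
    return float(np.sum(1.0 - h) / p.size)
```

A map of entirely unknown cells (all probabilities at 0.5) scores 0 bits/cell, while a crisp, well-aligned map with cells near 0 or 1 approaches 1 bit/cell.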

We adopt a sensor model similar to [28] to calculate $p(m_{ij})$, which is initialised to the uniform prior of 0.5 and updated by a ray-casting operation that determines the occupancy probability of each cell along the beam between the laser origin and the measured point. We refer readers to [16] and [28] for a detailed explanation of this sensor model and its calculation. Experimentally, we found that log likelihood values of $l_{occ} = 1.2224$ for an occupied cell and $l_{free} = -0.585$ for a free cell, corresponding to probabilities of 0.7 and 0.4 for occupied and free cells, worked well. This selection is the same as in [28], but in order to use bits/cell as the unit for MI we adopt the base 2 logarithm instead of the natural logarithm.
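As a sanity check on these constants, $\log_2(0.7/0.3) \approx 1.2224$ and $\log_2(0.4/0.6) \approx -0.585$. A minimal sketch of the implied base-2 log-odds cell update follows (the ray-casting itself is omitted, and the function names are ours):

```python
import math

L_OCC = math.log2(0.7 / 0.3)   # ~  1.2224: cell containing a beam endpoint
L_FREE = math.log2(0.4 / 0.6)  # ~ -0.585:  cell traversed by a beam

def update_cell(log_odds, hit):
    """Accumulate base-2 log odds for one cell along a cast ray."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds):
    """Recover p(m_ij) from base-2 log odds (prior 0.5 <=> log odds 0)."""
    return 1.0 - 1.0 / (1.0 + 2.0 ** log_odds)
```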

B. Mobile Robot Model

An ActivMedia Pioneer2 DX H-8 differential drive robot was used in this work. Therefore, the standard differential drive wheeled robot kinematic model proposed by Chong and Kleeman [18] is deployed here, with the robot pose at time step $k$ given by:

$\Delta L(i) = \left[\omega_r(i) R_r \Delta t(i) + \omega_l(i) R_l \Delta t(i)\right] / 2$   (3)

$\Delta\theta(i) = \left[\omega_r(i) R_r \Delta t(i) - \omega_l(i) R_l \Delta t(i)\right] / B$   (4)

$\begin{bmatrix} \theta(k) \\ x(k) \\ y(k) \end{bmatrix} = \begin{bmatrix} \theta(0) + \sum_{i=1}^{k} \Delta\theta(i) \\ x(0) + \sum_{i=1}^{k} \Delta L(i) \cos\left[\theta(i-1) + \Delta\theta(i)/2\right] \\ y(0) + \sum_{i=1}^{k} \Delta L(i) \sin\left[\theta(i-1) + \Delta\theta(i)/2\right] \end{bmatrix}$   (5)

$R_l$ and $R_r$ are the radii of the left and right wheels and $B$ is the separation distance between the centres of the wheels; these are the odometry parameters. $i = 1, 2, \ldots, k$ denotes the indices of the sampling times for the wheel rotation speeds, and $\omega_r(i)$ and $\omega_l(i)$ are the right and left wheel rotation speeds over the time interval $\Delta t(i)$.
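This kinematic model lends itself to direct dead reckoning. A minimal sketch of equations (3)-(5), with names of our choosing:

```python
import math

def integrate_odometry(theta0, x0, y0, samples, R_l, R_r, B):
    """Integrate the robot pose with eqs. (3)-(5) of Chong and Kleeman's model.

    samples     : iterable of (omega_r, omega_l, dt) wheel-speed measurements
    R_l, R_r, B : odometry parameters (wheel radii and wheel separation)
    """
    theta, x, y = theta0, x0, y0
    for omega_r, omega_l, dt in samples:
        dL = (omega_r * R_r * dt + omega_l * R_l * dt) / 2.0      # eq. (3)
        dtheta = (omega_r * R_r * dt - omega_l * R_l * dt) / B    # eq. (4)
        x += dL * math.cos(theta + dtheta / 2.0)                  # eq. (5)
        y += dL * math.sin(theta + dtheta / 2.0)
        theta += dtheta
    return theta, x, y
```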

C. Laser Scan Projection

Using multiple laser rangefinders can reduce the chance of divergence, ambiguity and inaccuracy in the process of scan matching, as explained in previous work [19]. However, most mapping algorithms are designed to work with only one laser rangefinder. In our case we project the measurements of two Hokuyo URG-04LX laser rangefinders onto a virtual laser rangefinder located at the origin of the robot coordinate frame, as shown in Fig. 1. Each rangefinder has a field of view (FOV) of 240° and valid range measurements from 0.02 m to 4 m. The angular resolution is approximately 0.352° and one laser scan contains 681 laser beams. The virtual laser rangefinder has a FOV of 360° with 1024 laser beams and the same angular resolution.

Assume the two laser rangefinders are placed in the same horizontal plane. The coordinate frames $\{O\}$, $\{R\}$, $\{L_1\}$ and $\{L_2\}$ are the world frame, the robot frame, the frame of Laser 1 and the frame of Laser 2, respectively. The relative pose of a frame $\{B\}$ with respect to a frame $\{A\}$ is denoted by the symbol ${}^A\xi_B$.

Figure 1. Schematic of the laser rangefinders on the robot. The red and green sectors show the maximum measurement ranges of Laser 1 and Laser 2, respectively.

The laser scan of Laser 1 at time step $t$ is described as $S_1(t) = \left(x_1, y_1, \theta_1, \{r_i^1(t), \alpha_i^1(t)\}_{i=1}^{n}\right)$, where $x_1, y_1, \theta_1$ are the position and orientation of Laser 1 in $\{R\}$, and $\{r_i^1(t), \alpha_i^1(t)\}_{i=1}^{n}$ describes the $n$ range measurements $r_i^1(t)$ at bearings $\alpha_i^1(t)$, $i = 1, 2, \ldots, n$, with $n$ the total number of laser beams in one scan. Analogously, we define the laser scan of Laser 2 as $S_2(t) = \left(x_2, y_2, \theta_2, \{r_i^2(t), \alpha_i^2(t)\}_{i=1}^{n}\right)$. Note that we omit the time step label for the relative pose symbols. Let $r_i^{Laser1}$ and $\alpha_i^{Laser1}$ denote the virtual range measurements and bearings seen from the central virtual laser based on the measurements of Laser 1, as in [20]:

$r_i^{Laser1} = \sqrt{\left(r_i^1 \cos(\theta_1 + \alpha_i^1) + x_1\right)^2 + \left(r_i^1 \sin(\theta_1 + \alpha_i^1) + y_1\right)^2}$   (6)

$\alpha_i^{Laser1} = \mathrm{atan2}\left(r_i^1 \sin(\theta_1 + \alpha_i^1) + y_1,\; r_i^1 \cos(\theta_1 + \alpha_i^1) + x_1\right)$   (7)

$r_i^{Laser2}$ and $\alpha_i^{Laser2}$ are calculated in the same way. Before scan projection, two scan processing steps are performed: a median filter and laser segmentation. After the scan projection, linear interpolation is performed. Where the two lasers have no common coverage, the final range measurements are taken directly from the corresponding laser; in the common area, the shorter range measurement is chosen.
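A minimal sketch of equations (6)-(7) for one physical scan, together with the shorter-range merge rule described above (the binning is ours and simplified; the paper's interpolation step is omitted):

```python
import math

def project_scan(ranges, bearings, x, y, theta, n_virtual=1024):
    """Project one laser's beams into bins of a 360-degree virtual scanner
    located at the robot origin, using eqs. (6) and (7)."""
    virtual = [float('inf')] * n_virtual
    for r, a in zip(ranges, bearings):
        px = r * math.cos(theta + a) + x            # beam endpoint in {R}
        py = r * math.sin(theta + a) + y
        r_v = math.hypot(px, py)                    # eq. (6)
        a_v = math.atan2(py, px)                    # eq. (7)
        k = int((a_v % (2 * math.pi)) / (2 * math.pi) * n_virtual) % n_virtual
        virtual[k] = min(virtual[k], r_v)           # keep the shorter range
    return virtual

def merge(scan1, scan2):
    """Combine two projected scans; in common areas the shorter range wins."""
    return [min(r1, r2) for r1, r2 in zip(scan1, scan2)]
```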

D. PSO Algorithm

Inspired by swarm intelligence in a population of simple agents interacting with their neighbours and with the whole environment, Kennedy and Eberhart [21] first introduced PSO as a computer model of flocking birds in 1995. Researchers have since used it as a metaheuristic for finding optima in high-dimensional problems. In contrast to gradient descent optimisation methods, it does not require a continuous or differentiable fitness function. A particle has several properties, namely position, velocity, fitness value, pbest and gbest [22]. The position of the $i$th particle is a potential solution to the problem and is denoted $X_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]^T$, where $D$ is the dimension of the problem. The velocity of the $i$th particle is the displacement used to calculate the next position and is denoted $V_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]^T$. The fitness value is the value of the objective function evaluated at the particle's position. The pbest $P_i = [p_{i1}, p_{i2}, \ldots, p_{iD}]^T$, also called the personal best, is the best position found so far by the corresponding particle itself. The gbest $G = [g_1, g_2, \ldots, g_D]^T$ is the global best position achieved so far by all the particles in the swarm. The following equations update the velocity and position of the $i$th particle from time step $t$ to $t+1$ [22]:

$V_i^{t+1} = \omega V_i^t + c_1 R_{i1}^t \otimes (P_i^t - X_i^t) + c_2 R_{i2}^t \otimes (G - X_i^t)$   (8)

$X_i^{t+1} = X_i^t + V_i^{t+1}$   (9)

where $\omega$ is the inertia coefficient that prevents explosion of the particle velocities. The cognitive acceleration coefficient $c_1$ and the social acceleration coefficient $c_2$ weight the influences of the pbest and gbest in this update. $R_{i1}^t$ and $R_{i2}^t$ are independent random vectors uniformly distributed in the range $[0, 1]$, and $\otimes$ denotes element-wise vector multiplication. This is the basic PSO algorithm.
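A minimal sketch of this basic PSO loop, implementing equations (8) and (9); the coefficient values shown are common defaults from the PSO literature, not the paper's, and the paper itself uses the SPSO-2011 variant [22]:

```python
import numpy as np

def pso(fitness, lo, hi, n_particles=40, n_generations=40,
        w=0.7298, c1=1.49618, c2=1.49618, seed=0):
    """Maximise `fitness` over the box [lo, hi] using eqs. (8)-(9)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, (n_particles, lo.size))        # positions X_i
    V = np.zeros_like(X)                                   # velocities V_i
    P = X.copy()                                           # pbest P_i
    p_fit = np.array([fitness(x) for x in X])
    G = P[np.argmax(p_fit)].copy()                         # gbest G
    for _ in range(n_generations):
        r1 = rng.uniform(size=X.shape)                     # R_i1
        r2 = rng.uniform(size=X.shape)                     # R_i2
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)  # eq. (8)
        X = np.clip(X + V, lo, hi)                         # eq. (9), confined
        f = np.array([fitness(x) for x in X])
        better = f > p_fit                                 # update pbest
        P[better], p_fit[better] = X[better], f[better]
        G = P[np.argmax(p_fit)].copy()                     # update gbest
    return G, p_fit.max()
```

In this paper's setting, `fitness` would build the OG map from a candidate 12-dimensional parameter vector and return its MI (see Section IV-C).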

III. FLOOR SURFACE CLASSIFICATION

The floor surface detection system consists of a colour light-to-frequency converter and an Arduino UNO microcontroller in hardware, and a classification algorithm based on an SVM [23].

A. Hardware Design of Colour Sensing

A TCS230 [24] colour sensor is used to measure the intensity of four channels corresponding to 16 phototransistors: 4 red, 4 blue, 4 green and 4 unfiltered. The floor is illuminated by 4 white LEDs, one from each side of a square, as in Fig. 2. A microcontroller converts the frequency-encoded intensity values from the TCS230 to integer format and outputs them over a USB connection to a laptop running the Robot Operating System (ROS) [25]. As shown in Fig. 7, the colour sensor module is fixed beneath the robot, 10 mm above the ground. This setup is designed to reduce interference from other light sources and unwanted reflected light.

Figure 2. Hardware schematic of the floor surface classification system.

B. Floor Surface Classification with a SVM

Based on the open-source LIBSVM library [26], we implemented our classification algorithm in both MATLAB and C++ with the ROS ml_classifier package. The feature vector contains the intensities of the red, green and blue components of the light reflected from the floor. A median filter and a down-sampling process are applied before the light intensity data are fed into the SVM. Fig. 3 depicts the effects of these two pre-processing steps. We used the suggested radial basis function (RBF) kernel [26], because the linear and sigmoid kernel functions behave like the RBF kernel for certain kernel parameters, and polynomial kernels have more numerical difficulties than the RBF kernel.

Figure 3. Light intensity detection results when the robot travelled from Carpet 1 to Lino (appearance shown in Fig. 4). (a) Original results. (b) Results after a median filter (window size 25) and down-sampling (every 80 samples).
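A minimal sketch of the described pre-processing (median filter with window 25, then keeping every 80th sample); the array layout is our assumption:

```python
import numpy as np
from scipy.signal import medfilt

def preprocess(rgb, window=25, step=80):
    """Median-filter each colour channel, then down-sample.

    rgb : array of shape (n_samples, 3) with red, green, blue intensities
    """
    filtered = np.column_stack([medfilt(rgb[:, c], window) for c in range(3)])
    return filtered[::step]  # keep every `step`-th feature vector
```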

Six different floors, shown in Fig. 4, were selected as the target floors for training and testing. The training dataset is shown in Fig. 5. A completely different dataset was collected along a different trajectory on those floors. The classification results are shown in Table I.

Figure 4. Surfaces where the training data were collected. (a) Carpet 1 (b) Lino (c) Carpet 2 (d) Marble (e) Carpet 3 (f) Blue Rubber.

Figure 5. The training data of the SVM classifier. Each class has 414 feature vectors, which comprise the intensities of the red, green and blue components of the light from surfaces (a)-(f) of Fig. 4, respectively.

TABLE I. CONFUSION MATRIX OF THE VALIDATION TEST

Floors        (a)    (b)    (c)    (d)    (e)    (f)    Precision (%)
(a)           390      0      6      0      0      0     98.5
(b)             0    396      0      0      0      0    100
(c)             0      0    396      0      0      0    100
(d)             0      0      0    395      0      1     99.7
(e)             0      0      0      0    396      0    100
(f)             0      0      0      0      0    396    100
Recall (%)    100    100   98.5    100    100    100

(a) Carpet 1 (b) Lino (c) Carpet 2 (d) Marble (e) Carpet 3 (f) Blue Rubber

Table I shows greater than 98% precision and recall for all classes. The main classification error occurred when the robot was running on Carpet 1, which was occasionally misclassified as Carpet 2. The likely reason is that both are dark carpets with little light reflection, which is also apparent in Fig. 5. Nevertheless, our surface classification system has shown promising results. The ROS implementation also achieves a high success rate with the same training and test datasets.
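A minimal training and prediction sketch in the spirit of the described pipeline; it uses scikit-learn's LIBSVM-backed SVC rather than the authors' MATLAB/C++ code, and the file names are assumed:

```python
import numpy as np
from sklearn.svm import SVC

# rgb_train: (n, 3) median-filtered, down-sampled RGB intensities
# labels:    (n,) floor class ids, e.g. 0..5 for surfaces (a)-(f)
rgb_train = np.load("floor_features.npy")      # assumed file names
labels = np.load("floor_labels.npy")

clf = SVC(kernel="rbf", gamma="scale", C=1.0)  # RBF kernel, as in the paper
clf.fit(rgb_train, labels)

def classify(rgb_sample):
    """Return the floor class for one pre-processed RGB feature vector."""
    return int(clf.predict(rgb_sample.reshape(1, -1))[0])
```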

IV. CALIBRATION METHOD

The calibration is divided into three phases: training an SVM, dataset collection, and calibration on multiple floors.

A. Training a SVM

For the floor classification method to work well, floors should ideally be of even colour. The classification success rate is higher when more data are used to train the SVM, so the robot is driven along a trajectory covering a greater area of the floor than is used in the calibration phase. In addition, the training data should be taken after the median filter and down-sampling steps. These pre-processed data are fed into the C++ implementation of the SVM for floor classification.

B. Dataset Collection

A robot velocity of less than 0.1 m/s is desired, since this reduces non-systematic errors (e.g. wheel slippage). The trajectory needs to contain sufficient diversity to avoid rank deficiency, as stated in [1]. Examples of rank-deficient motion are a pure straight line, a rotation on the spot, or a constant-curvature arc. A combination of two or more of these motions is not rank deficient and allows all parameters to be observed. Furthermore, long datasets are not suitable for offline calibration, since our method requires the dataset to be replayed repeatedly until the PSO algorithm reaches a suitably high MI value. Experimentally, we found a dataset of around 350 seconds to be a good choice on the two testing floors, Carpet 1 and Blue Rubber. An enclosed environment with most parts visible to the laser rangefinders was chosen, primarily for efficiency reasons. This allows full use of the range information in building an OG map, whose quality, measured in MI, is most affected by the calibration parameters of the odometry and the poses of the laser rangefinders.

C. Calibration on Multiple Floors

The calibrated parameters are the odometry parameters on the different floors and the extrinsic parameters of the exteroceptive sensors. The fitness value of the PSO algorithm is the MI value of the OG map built using only odometry information and the laser extrinsic parameters, without any laser scan matching. The suggested swarm size of 40 [22] was adopted. We use every 100th laser scan together with the odometry pose to build the OG map. Standard PSO 2011 (SPSO-2011) is adopted because it performed well against 28 varied test functions, as reported in [27]. The details of SPSO-2011 are explained in [22]. A flowchart of our calibration method is shown in Fig. 6.

Figure 6. Flow chart of calibrating odometry and sensor extrinsic parameters with PSO on multiple floors.
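A sketch of how such a fitness evaluation could be wired together from the pieces above; the 12-dimensional parameter layout follows Section V, while `build_og_map` stands in for the dead-reckoning, scan-projection and ray-casting pipeline and is injected as an assumed callable:

```python
def make_fitness(dataset, build_og_map, map_information):
    """Wrap the mapping pipeline as a PSO fitness function.

    dataset         : logged wheel speeds, scans and floor-class labels
    build_og_map    : callable (dataset, laser1, laser2, odom) -> (p_occ, observed),
                      standing in for the mapping pipeline (not shown here)
    map_information : the MI metric of eq. (2), sketched earlier
    """
    def fitness(params):
        # params: [x1, y1, th1, x2, y2, th2,        laser extrinsics
        #          Rl1, Rr1, 1/B1, Rl2, Rr2, 1/B2]  odometry per floor
        laser1, laser2 = params[0:3], params[3:6]
        odom = {"carpet1": params[6:9], "blue_rubber": params[9:12]}
        # The floor classifier selects which odometry parameter set is
        # active at each time step during the replay.
        p_occ, observed = build_og_map(dataset, laser1, laser2, odom)
        return map_information(p_occ, observed)
    return fitness
```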

V. EXPERIMENTS

A. Experiment Settings

To evaluate our proposed calibration method, we recorded two different datasets, Dataset 1 and Dataset 2, one after the other, on Carpet 1 and Blue Rubber in an enclosed environment, shown in the left image of Fig. 7. The right image of Fig. 7 depicts the robot crossing the boundary between the two floors. Since every 100th laser scan is used to build the OG map, Dataset 1 and Dataset 2 contribute 35 and 36 laser scan measurements to their maps, respectively. We replay the datasets at twice the original ROS message publish rate via the rosbag command-line tool.

Figure 7. Left: The environment used to collect our datasets, approximately 4 m × 5 m in size. Right: The robot crossing the boundary of two floors, Carpet 1 and Blue Rubber.

Let $R_{l1}$, $R_{r1}$ and $l_1$ be the left wheel radius, the right wheel radius and the reciprocal of the separation distance $B_1$ on Carpet 1, and $R_{l2}$, $R_{r2}$ and $l_2$ the corresponding parameters on Blue Rubber. Let $x_1$, $y_1$ and $\theta_1$ be the extrinsic parameters of Laser 1 and $x_2$, $y_2$ and $\theta_2$ those of Laser 2. These 12 parameters form the calibration parameter vector in this work. Based on a preliminary estimate, we confine each calibration parameter to the range listed in Table II when performing the PSO algorithm. For the cell size of the OG map we choose the commonly used side length of 2.5 cm. As suggested by [22], 30 generations of the PSO algorithm are sufficient for a good optimisation result in most practical problems. We found little improvement after 40 generations of PSO; therefore we use 40 generations in all the following experiments. On a Lenovo Z580 laptop with a four-core Intel i5 CPU running at 2.50 GHz and 4 GB of RAM, 40 generations of 40 particles in a PSO take approximately 14 hours of computation time. The time is proportional to the number of particles and the number of generations in the PSO.

TABLE II. CONFINEMENT OF EACH CALIBRATION PARAMETER

       x1 (m)   y1 (m)   θ1 (rad)   x2 (m)   y2 (m)   θ2 (rad)
Min     0.06    -0.145   -0.8154    -0.22    0.125    2.2
Max     0.08    -0.125   -0.7754    -0.18    0.145    2.4

       Rl1 (m)  Rr1 (m)  l1 (m⁻¹)   Rl2 (m)  Rr2 (m)  l2 (m⁻¹)
Min     0.093    0.093    2.765      0.093    0.093    2.765
Max     0.099    0.099    2.84       0.099    0.099    2.84

B. Influence of Floor Classification

We validated the influence of the floor classification in a series of experiments. First, we calibrated the odometry and laser pose parameters with the correct floor classification from our proposed floor classification system. Then we calibrated the parameters with a wrong floor classification that separated the robot trajectory into two parts: the first 17 laser scans assigned to Carpet 1 and the remaining 18 laser scans to Blue Rubber. Fig. 8 shows the map information evolution during these two comparison experiments; the MI of the OG map built with our proposed floor classification method is higher. Fig. 9 shows the evolution of the mean MI value of the 40 particles in each generation of the PSO algorithm, indicating that most particles approached better estimates. Finally, we applied the parameters calibrated in the two experiments above to Dataset 2 to validate the benefits of successful floor classification. Fig. 10 summarises these experiments. The MI values of the OG maps in Fig. 10 (c) and (d) are 0.9234 bits/cell and 0.9207 bits/cell, respectively.

Figure 8. Map information during the simultaneous calibration of odometry and laser pose parameters vs. the number of PSO generations, using our proposed floor classification method and a wrong floor classification.

Figure 9. The evolution of the mean map information value of the 40 particles in each generation of the PSO algorithm.

Figure 10. Mapping results showing the influence of floor classification. The trajectories labelled red and blue correspond to the classified floor types, Carpet 1 and Blue Rubber, respectively. Green dots depict the robot positions for the laser scans used to build the OG maps, and yellow lines depict the relative pose between consecutive laser scans. (a) OG map built with the parameters calibrated using our proposed floor classification method on Dataset 1. (b) OG map built with the parameters calibrated using a wrong floor classification on Dataset 1, as a comparison. (c) OG map built with the parameters from (a) on Dataset 2. (d) OG map built with the parameters from (b) on Dataset 2.

C. Consistency Validation on Two Datasets

We validated the consistency of our proposed PSO-based offline simultaneous calibration of odometry and sensor extrinsic parameters by calibrating the parameters on the two different datasets, Dataset 1 and Dataset 2. The map built with the parameters calibrated on Dataset 2 is shown in Fig. 11. Table III gives the calibration results for the two datasets, which are consistent for each parameter. We define $S(i)$ in (10) to test the calibration consistency, where $p_1(i)$ and $p_2(i)$ denote the $i$th dimension of the calibration parameter vector obtained using Dataset 1 and Dataset 2, respectively. Fig. 12 shows the consistency between the calibration results in Table III.

$S(i) = \left|p_1(i) - p_2(i)\right| \,/\, \left[\left(p_1(i) + p_2(i)\right)/2\right]$   (10)

Figure 11. OG map built with the calibrated parameters based on our proposed floor classification method using Dataset 2.

Figure 12. Relative calibration differences based on Dataset 1 and Dataset 2 using equation (10).
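A one-line sketch of the relative difference measure of equation (10) over the 12-dimensional parameter vectors:

```python
import numpy as np

def consistency(p1, p2):
    """Relative difference S(i) of eq. (10) for each calibration parameter."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return np.abs(p1 - p2) / ((p1 + p2) / 2.0)
```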

TABLE III. CALIBRATION RESULTS USING DATASET 1 AND DATASET 2

            x1 (m)      y1 (m)      θ1 (rad)    x2 (m)      y2 (m)     θ2 (rad)
Dataset 1   0.070456    -0.13935    -0.799576   -0.203149   0.133731   2.2913
Dataset 2   0.0677955   -0.134264   -0.799708   -0.206218   0.136393   2.3024

            Rl1 (m)     Rr1 (m)     l1 (m⁻¹)    Rl2 (m)     Rr2 (m)    l2 (m⁻¹)
Dataset 1   0.0945349   0.0960021   2.79122     0.0961968   0.0973374  2.80303
Dataset 2   0.0951182   0.0963673   2.7887      0.0963655   0.0973481  2.79807

VI. CONCLUSION AND FUTURE WORK

In this paper, we proposed a novel offline simultaneous calibration of robot odometry and sensor extrinsic parameters using the PSO algorithm on multiple floors. A state-of-the-art map quality metric for OG maps, Map Information [15], was chosen as the fitness value of the PSO algorithm. Experimentally, we have shown that calibration with the colour floor classification system is more accurate for each parameter than calibration with an arbitrary classification. Note that 12 parameters are used in both the arbitrary and colour-classified cases to achieve a fair comparison, since a comparison with a single floor region would involve optimising over just 9 parameters and would therefore disadvantage the single-region case. Furthermore, the calibration method has been successfully validated for consistency between two independent datasets.

Our PSO-based calibration framework could be applied to calibrate odometry and extrinsic sensor parameters for 3D laser scanners or Kinect sensors by extending the MI metric to 3D occupancy grid maps built with OctoMap [28].

REFERENCES

[1] J. Maye, P. Furgale, and R. Siegwart, "Self-supervised calibration for robotic systems," in IEEE Intelligent Vehicles Symposium (IV), 2013.
[2] J. Borenstein and L. Feng, "UMBmark – A method for measuring, comparing, and correcting dead-reckoning errors in mobile robots," Technical Report UM-MEAM-94-22, University of Michigan, 1994.
[3] T. Larsen, M. Bak, N. Andersen, and O. Ravn, "Location estimation for an autonomously guided vehicle using an augmented Kalman filter to autocalibrate the odometry," in Proceedings of the First International Conference on Multisource-Multisensor Information Fusion (MMIF), 1998.
[4] A. Martinelli, N. Tomatis, A. Tapus, and R. Siegwart, "Simultaneous localization and odometry calibration for mobile robot," in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2003.
[5] A. Censi, L. Marchionni, and G. Oriolo, "Simultaneous maximum-likelihood calibration of odometry and sensor parameters," in Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 2008.
[6] J. Levinson and S. Thrun, "Unsupervised calibration for multi-beam lasers," in Experimental Robotics, Springer Berlin Heidelberg, 2014.
[7] M. Sheehan, A. Harrison, and P. Newman, "Self-calibration for a 3D laser," The International Journal of Robotics Research, vol. 31, no. 5, 2012.
[8] G. Best, P. Moghadam, N. Kottege, and L. Kleeman, "Terrain classification using a hexapod robot," in Proceedings of Australasian Conference on Robotics and Automation (ACRA), 2013.
[9] L. Posada, K. Narayanan, F. Hoffmann, and T. Bertram, "Floor segmentation of omnidirectional images for mobile robot visual navigation," in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010.
[10] S. Verstockt, P. Lambert, and R. Van de Walle, "State of the art in vision-based fire and smoke detection," in Proceedings of International Conference on Automatic Fire Detection (AUBE), 2009.
[11] C. Lin, C. Yeh, and C. Yeh, "Real-time vehicle color identification for surveillance videos," in Proceedings of International Conference on Electronics, Communications and Computers (CONIELECOMP), 2014.
[12] R. Kümmerle, G. Grisetti, and W. Burgard, "Simultaneous calibration, localization, and mapping," in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011.
[13] N. Roy and S. Thrun, "Online self-calibration for mobile robots," in Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 1999.
[14] L. Kleeman, "Advanced sonar and odometry error modeling for simultaneous localisation and map building," in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2003.
[15] J. L. Blanco, J. A. Fernandez-Madrigal, and J. Gonzalez, "A novel measure of uncertainty for mobile robot SLAM with Rao-Blackwellized particle filters," The International Journal of Robotics Research, vol. 27, no. 1, 2008.
[16] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.
[17] C. Stachniss, G. Grisetti, and W. Burgard, "Information gain-based exploration using Rao-Blackwellized particle filters," in Proceedings of Robotics: Science and Systems (RSS), 2005.
[18] K. Chong and L. Kleeman, "Accurate odometry and error modelling for a mobile robot," in Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 1997.
[19] F. Tungadi and L. Kleeman, "Multiple laser polar scan matching with application to SLAM," in Proceedings of Australasian Conference on Robotics and Automation (ACRA), 2007.
[20] A. Diosi and L. Kleeman, "Laser scan matching in polar coordinates with application to SLAM," in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2005.
[21] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of IEEE International Conference on Neural Networks (ICNN), 1995.
[22] M. Clerc, "Standard particle swarm optimization," 2012.
[23] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, 1995.
[24] Texas Advanced Optoelectronic Solutions Inc., "Programmable color light-to-frequency converter," 2003. [Online]. Available: http://www.pobot.org/IMG/pdf/tcs230_datasheet.pdf
[25] M. Quigley et al., "ROS: an open-source Robot Operating System," in ICRA Workshop on Open Source Software, vol. 3, no. 3.2, 2009.
[26] C. Chang and C. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 27, 2011.
[27] M. Zambrano-Bigiarini, M. Clerc, and R. Rojas, "Standard particle swarm optimisation 2011 at CEC-2013: A baseline for future PSO improvements," in Proceedings of IEEE Congress on Evolutionary Computation (CEC), 2013.
[28] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, "OctoMap: An efficient probabilistic 3D mapping framework based on octrees," Autonomous Robots, vol. 34, no. 3, 2013.