Likelihood-field-model-based Vehicle Pose Estimation with Velodyne

Tongtong Chen, Bin Dai, Daxue Liu, Hao Fu
College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, China
Email: [email protected]

Jinze Song
Graduate School, National University of Defense Technology, Changsha, China
Email: [email protected]
Abstract—Dynamic vehicle tracking is an important module for Autonomous Land Vehicle (ALV) navigation in outdoor environments. The key step for a successful tracker is to accurately estimate the pose of the tracked vehicle. In this paper, we present a novel real-time vehicle pose estimation algorithm based on a likelihood field model built on Velodyne LIDAR data. The likelihood field model is used to weight the particles, representing potential poses, that are drawn around the location of the target vehicle. Importance sampling, sped up with the Scaling Series algorithm, is then exploited to choose the best particle as the final vehicle pose. The performance of the algorithm is validated on data collected by our own ALV in various urban environments.
I. INTRODUCTION
Self-driving in an unknown urban environment is a high-risk task for an ALV. To achieve this goal, the ALV needs to thoroughly understand the environment: not only the existence of other vehicles, but also the poses of oncoming or crossing vehicles. Based on this understanding, it can then plan the right path while avoiding the dynamic vehicles, especially at intersections. As vehicle pose estimation is a critical pre-processing step in dynamic vehicle tracking, it has become a hot research area recently. A typical urban scene with two dynamic vehicles at a crossroads is shown in Fig. 1.
Fig. 1. A typical urban scene at a crossroads. The yellow points are ground, and the other colors represent different objects. The two green cuboids represent two dynamic vehicles moving 20 meters away, and the arrows indicate their orientations.
For every Velodyne LIDAR scan (the set of points collected during one revolution of the Velodyne LIDAR), we assume that ground segmentation [1],[2], object clustering [3] and vehicle recognition have already been performed. Given a cluster of three-dimensional points recognized as a vehicle, the points themselves hardly provide any direct constraints on the pose. Inspired by the view-dependent adaptive matched filter in [5], this paper presents a novel real-time likelihood-field-based vehicle measurement model to estimate vehicle poses. The model is coupled with the Scaling Series importance sampling algorithm, so that it can also estimate the poses of vehicles that are fully occluded in the xy plane by other objects, as shown in Fig. 8.

The rest of the paper is structured as follows. In the next section, an outline of the related work is given. Section 3 details the steps of the vehicle pose estimation algorithm. Some experimental results on data collected in various urban environments are shown in Section 4, where our algorithm is also compared with the approach in [12]. Finally, conclusions and an outline of future work are presented in Section 5.

II. RELATED WORK
With LIDAR sensors becoming standard equipment on ALVs, much research on vehicle pose estimation from point clouds has been carried out during the past decades. Generally, these approaches can be roughly divided into two groups: feature-based methods [6],[7],[8],[9],[10] and measurement-model-based methods [11],[5],[12],[13],[14],[15].

Darms et al. [6] extracted edge targets, interpreted as parts of a box model representing the vehicle's pose, from the raw laser data. Himmelsbach et al. [7] fitted the dominant line of the segment points in the xy plane with the RANSAC algorithm to determine the orientations of the objects. Keat et al. [8] extracted line segments from raw laser data with clustering and segmentation techniques, and then exploited Bayesian programming to obtain the vehicle's pose from these line segments. Mertz et al. [9] extracted line and corner features for each segment to estimate the vehicle's pose; they began with a trial fit, then discarded the 20% worst points and refitted the remaining points to improve robustness. Morris et al. [10] exploited the RANSAC algorithm to fit an edge or "L-shape" to the three-dimensional points hitting the surfaces of a vehicle to extract its pose, and visibility constraints were used to make the "L-shape" fitting more efficient. However, as stated in [5], extracting the edge features of vehicles encounters a number of ambiguities on cluttered and noisy data.

Measurement-model-based methods mainly utilize cuboids or rectangles to represent the vehicles; optimization or sampling techniques are then exploited to find the most probable vehicle pose from the segment points. Barrois et al. [11] employed a cuboid to represent the vehicle, then used the Levenberg-Marquardt and Downhill-Simplex algorithms to minimize the polar distance between the scene points obtained from stereo and the visible sides to calculate the best vehicle pose. Morris et al. [5] took view-dependent self-occlusion effects into account, and used four rectangles to represent the inside, the outside, and the two visible sides of a vehicle, respectively. A gradient-based optimization technique is used to maximize the integrals of the segment points over the four rectangles to obtain the vehicle pose. Petrovskaya et al. [12] estimated the vehicle pose with an importance sampling technique, where the score of each particle is calculated with a pre-defined rectangular vehicle measurement model. Based on the same measurement model, Wojke et al. [13] incorporated temporal and geometric cues to reduce the sampling range, improving efficiency and precision. In [15], Liu et al. proposed a modified measurement model for a tilted two-dimensional laser range finder, where both temporal and motion cues are adopted to generate vehicle hypotheses that are then evaluated by the modified model.

The optimization-based approaches may get trapped in a local optimum because of improper initialization. On the other hand, the measurement model proposed in [12] and its variants [13],[14],[15] can only detect the vehicles closest to the ALV; they cannot estimate the poses of vehicles that are fully occluded in the xy plane by other vehicles in crowded urban environments. This issue motivates the approach presented in this paper.
III. POSE ESTIMATION ALGORITHM
The Velodyne HDL-64E S2 LIDAR is commonly used on ALVs due to its accurate, high-frequency three-dimensional measurements. In this section, we combine the advantages of the likelihood-field-based vehicle measurement model and the Scaling Series algorithm into a novel vehicle pose estimation algorithm. Compared with [12], our algorithm can also estimate the poses of vehicles that are fully occluded in the xy plane. Compared with [5], which uses a gradient-based optimization technique to calculate the pose, our algorithm is insensitive to the initial values and consistently obtains good results in real time. We detail our algorithm below.

A. Likelihood-field-based Vehicle Measurement Model

Though the likelihood field model lacks a plausible physical explanation [16], it has been widely used in robotics. In this paper, we utilize a rectangular shape to represent a vehicle and construct the following vehicle measurement model based on the likelihood field.
Fig. 2. Measurement model computation. (a) The four geometric regions of the model: the green and blue rectangles are the two visible surfaces; the gray rectangle is the inside region of the vehicle, and the red region is the outside. (b) The vehicle coordinate system o_V x_V y_V with its origin at the geometric center of the rectangle. In this paper, the width and length of the model are fixed constants, and the width of the visible surfaces is set to ∆.
Given a vehicle's pose X = {x, y, θ}, with width W and length L, and a cluster of measurements Z = {q_1, q_2, ..., q_n}, where q_i = (x_i, y_i, z_i) is represented by its Euclidean coordinates in the LIDAR coordinate system, we need to define the measurement likelihood p(Z|X). Following [5], we model each measurement q_i = (x_i, y_i, z_i) as a two-dimensional Gaussian distribution p(x, y) in the xy plane, disregarding its z_i coordinate:

$$p(x, y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{(x - x_i)^2}{2\sigma^2}\right)\exp\left(-\frac{(y - y_i)^2}{2\sigma^2}\right), \quad (1)$$
where σ² is the variance representing the measurement noise. Our measurement model is illustrated in Fig. 2. The gray and red rectangles represent the inside and the outside of the vehicle respectively, while the green and blue ones are the visible length and width surfaces. For every edge of the rectangular shape, we define the direction pointing from the edge's center toward the origin o_L as its orientation vector, and the direction perpendicular to the edge and pointing away from the vehicle as its normal vector. If the angle θ_i between the orientation vector and the normal vector is less than π/2, the edge is a visible surface. To begin with, a rectangular shape representing the vehicle's pose X is positioned in the LIDAR coordinate system o_L x_L y_L. For a vehicle in this configuration, we expect most measurements to fall on the visible surfaces (green and blue rectangles), some measurements to fall inside the vehicle (gray rectangle), and few to fall outside the vehicle in the red rectangle, since we believe there are no objects in the immediate vicinity of a moving vehicle. In this paper, we assume that each measurement is independent.
Thus, the measurement model of a vehicle that contains n measurements in Z can be expressed as follows:

$$p(Z|X) = \prod_{i=1}^{n} p(q_i|X). \quad (2)$$
Each measurement q_i's likelihood is modeled as the exponential of the weighted sum of the integrals of q_i^V over the four rectangles in the vehicle coordinate system o_V x_V y_V:¹

$$p(q_i|X) = \eta_i \exp\Big(\alpha \sum_{j=0}^{3} c_j\, I(q_i^V, R_j)\Big), \quad (3)$$

$$q_i^V = \begin{pmatrix} x_i^V \\ y_i^V \end{pmatrix} = \underbrace{\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}}_{T_{x,y,\theta}} \begin{pmatrix} x_i - x \\ y_i - y \end{pmatrix},$$

where q_i is a measurement in the LIDAR coordinate system o_L x_L y_L, and q_i^V is the corresponding point in the vehicle coordinate system o_V x_V y_V obtained with the transformation T_{x,y,θ} given above with (3). η_i is a normalization constant. R_0, R_1, R_2, R_3 are the visible length rectangle, the visible width rectangle, and the inside and outside areas of the vehicle, respectively (the green, blue, gray and red areas in Fig. 2); c_0, c_1, c_2, c_3 are the corresponding integral weights over these rectangles. As we expect most measurements to fall on the visible rectangles or inside the vehicle, c_0, c_1, c_2 are chosen greater than zero, while c_3 is less than zero. The scaling factor is $\alpha = 1/\sqrt{\sum_{k} \iint_{R_k} c_k^2\, dx\, dy}$, and $I(q_i^V, R_j)$ denotes the integral of the Gaussian of measurement q_i^V over the rectangle R_j in the vehicle coordinate system.

Let I_A, I_B, I_C, I_D represent the integrals of measurement q_i^V over the quarter planes extending to positive infinity from the corner positions A = (x_a^V, y_a^V), B = (x_b^V, y_b^V), C = (x_c^V, y_c^V), D = (x_d^V, y_d^V), respectively, in the vehicle coordinate system. Taking the integral I(q_i^V, R_0) over the visible length rectangle R_0 as an example:

$$I(q_i^V, R_0) = I_D + I_B - I_A - I_C, \qquad I_A = \frac{1}{4}\left[1 + \operatorname{erf}\left(\frac{x_i^V - x_a^V}{\sqrt{2}\sigma}\right)\right]\left[1 + \operatorname{erf}\left(\frac{y_i^V - y_a^V}{\sqrt{2}\sigma}\right)\right], \quad (4)$$

with I_B, I_C, I_D defined analogously using the corners B, C, D.
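To spell out the step behind (4): since the Gaussian in (1) is separable, the integral over the quarter plane extending from a corner, say A, factorizes into two one-dimensional tail integrals,

$$I_A = \left[\int_{x_a^V}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x - x_i^V)^2}{2\sigma^2}}\,dx\right]\left[\int_{y_a^V}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(y - y_i^V)^2}{2\sigma^2}}\,dy\right],$$

and each factor is a Gaussian tail, $\frac{1}{2}\big[1 + \operatorname{erf}\big(\frac{\mu - a}{\sqrt{2}\sigma}\big)\big]$ with $\mu$ the point coordinate and $a$ the corner coordinate, which yields the erf products in (4). The signed sum $I_D + I_B - I_A - I_C$ then recovers the rectangle integral by inclusion-exclusion over the four corner quarter planes.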
The integrals I(q_i^V, R_1), I(q_i^V, R_2), I(q_i^V, R_3) can be calculated in the same way. Note that the measurement model in this paper is built directly on the original measurements; it avoids the virtual ray casting step of [12]. It can therefore estimate the poses of vehicles that are farther away from the ALV, where [12] fails.

¹For every target vehicle, we construct a vehicle coordinate system o_V x_V y_V at the geometric center of the rectangle, with the x and y axes pointing forward and to the right, respectively.
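For concreteness, the per-point likelihood (1)-(4) can be evaluated in closed form with the standard error function. The following is a minimal C++ sketch under that reading; the helper names (quarterPlaneIntegral, rectangleIntegral, pointLogLikelihood) and the corner-based Rect type are our own illustrative choices, not names from our implementation.

```cpp
#include <cmath>
#include <array>

// An axis-aligned rectangle in the vehicle frame o_V x_V y_V,
// described by its lower-left and upper-right corners.
struct Rect { double x_min, y_min, x_max, y_max; };

// Integral of the 2D Gaussian centered at (qx, qy) over the quarter
// plane [cx, +inf) x [cy, +inf); cf. the erf factors in Eq. (4).
static double quarterPlaneIntegral(double qx, double qy,
                                   double cx, double cy, double sigma) {
  const double s = std::sqrt(2.0) * sigma;
  return 0.25 * (1.0 + std::erf((qx - cx) / s))
              * (1.0 + std::erf((qy - cy) / s));
}

// Integral over a rectangle by inclusion-exclusion over its four
// corner quarter planes, as in Eq. (4).
static double rectangleIntegral(double qx, double qy,
                                const Rect& r, double sigma) {
  return quarterPlaneIntegral(qx, qy, r.x_min, r.y_min, sigma)
       - quarterPlaneIntegral(qx, qy, r.x_max, r.y_min, sigma)
       - quarterPlaneIntegral(qx, qy, r.x_min, r.y_max, sigma)
       + quarterPlaneIntegral(qx, qy, r.x_max, r.y_max, sigma);
}

// Unnormalized log-likelihood of one LIDAR point under pose (x, y, theta).
// rects[0..3] are R0..R3 (visible length/width surfaces, inside, outside)
// and c[0..3] the corresponding weights; alpha as defined after Eq. (3).
double pointLogLikelihood(double xi, double yi,
                          double x, double y, double theta,
                          const std::array<Rect, 4>& rects,
                          const std::array<double, 4>& c,
                          double alpha, double sigma) {
  // Transform the point into the vehicle frame (Eq. (3)).
  const double dx = xi - x, dy = yi - y;
  const double xv =  std::cos(theta) * dx + std::sin(theta) * dy;
  const double yv = -std::sin(theta) * dx + std::cos(theta) * dy;
  double sum = 0.0;
  for (int j = 0; j < 4; ++j)
    sum += c[j] * rectangleIntegral(xv, yv, rects[j], sigma);
  return alpha * sum;  // log of Eq. (3), dropping the constant eta_i
}
```

Summing pointLogLikelihood over all q_i in Z gives the logarithm of (2) up to the constant normalizers η_i; working in log space avoids the numerical underflow of multiplying many small probabilities.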
B. Annealing Process of the Measurement Model

In order to speed up the pose estimation algorithm, we combine the likelihood-field-based vehicle measurement model with the Scaling Series algorithm. In the combined approach, the most important step is to iteratively anneal the measurement model from artificially relaxed to realistic [12]. The process is illustrated in Fig. 3. The green and blue rectangular shapes are the visible surfaces of the vehicle, the gray one is the area inside the vehicle, and the red one represents the free area outside the vehicle. The left image shows the most relaxed measurement model, while the right shows the realistic one. To begin with, we inflate the normal width ∆ of the visible surfaces by 1 meter on both the inside and the outside of the measurement model, so the width of the relaxed visible rectangles is ∆ + 2. The visible rectangles are thereby expanded to fully occupy the free area around the target vehicle, as shown in the left image of Fig. 3. With this most relaxed vehicle measurement model, implausible particles located more than 1 meter away from the real target vehicle are already removed. As the inflated width of the visible surfaces iteratively anneals from ∆ + 2 down to ∆, the high-likelihood region shrinks until the actual vehicle pose is obtained (right image of Fig. 3).

C. The Scaling Series Algorithm for Pose Estimation

The Scaling Series algorithm was first proposed by Petrovskaya [4] to handle the tactile localization problem under large uncertainty in real time. In this paper, the pose estimation of vehicles in crowded urban environments also involves large uncertainty: several meters in location and 180 degrees in orientation. Thus, the Scaling Series algorithm is utilized to improve the computational efficiency. The detailed pose estimation algorithm for one vehicle is shown in Algorithm 1.

Algorithm 1 The Scaling Series Algorithm for Pose Estimation
Input: Z = {q_1, q_2, ..., q_n}, M, ∆, N, L
Output: X* = (x*, y*, θ*) // position (x*, y*), orientation θ*
1:  (C_x^mb, C_y^mb, θ^mb) = minAreaRect(Z)
2:  w = 1.0, θ = π/2, zoom = 1/√2, V_0 = ((C_x^mb, C_y^mb, θ^mb), L/2, θ)
3:  for n = 1 : N do
4:      χ_n = Rejection_Sampling(V_{n-1}, M)
5:      ω_n = Compute_Normalized_Weights(χ_n, 2w + ∆, Z)
6:      χ_n = Prune(χ_n, ω_n)
7:      V_n = Union_Delta_Neighborhoods(χ_n, w, θ)
8:      w = w · zoom, θ = θ · zoom
9:  end for
10: χ = Rejection_Sampling(V_N, M)
11: ω = Compute_Normalized_Weights(χ, ∆, Z)
12: X* = Best_Particle(χ, ω)
Fig. 3. The annealing process of the vehicle measurement model. The green, blue, gray and red areas represent the visible length surface, the visible width surface, the region inside the vehicle, and the region around the vehicle, respectively. The left image shows the most relaxed vehicle measurement model, which is iteratively annealed to the realistic one in the right image.
The algorithm takes as input a cluster of three-dimensional measurements Z = {q_1, q_2, ..., q_n}, the number of iterations N, the normal width ∆ of the visible rectangles in the realistic vehicle measurement model, the number of particles M per neighbourhood, and the length L of the target vehicle; it outputs the vehicle's pose X*. The first step is to find the rotated rectangle of minimum area enclosing the two-dimensional xy-plane projections of the cluster Z (Line 1); its center and rotation angle are C_x^mb, C_y^mb, θ^mb. The initial sampling region is an ellipsoid V_0 centered at (C_x^mb, C_y^mb, θ^mb), with sampling radii L/2 for location and θ for orientation. Lines 3 to 9 describe the annealing of the measurement model. The function Rejection_Sampling in Line 4 draws a particle set χ_n uniformly from V_{n-1}: for every ellipsoid neighbourhood, M particles are drawn; the detailed implementation can be found in [4]. For these particles, we apply the inflated likelihood-field-based vehicle measurement model with the relaxed visible-rectangle width 2w + ∆ to calculate their weights, and then prune the particles with low weights (functions Compute_Normalized_Weights and Prune in Lines 5 and 6). For each remaining particle, the function Union_Delta_Neighborhoods in Line 7 constructs an ellipsoid neighbourhood centered on it, with sampling radii w for location and θ for orientation. The last step of each iteration anneals the radius w and the orientation θ in Line 8. After the annealing process, Lines 10 and 11 draw the final particle set χ and calculate the weights of these particles with the normal vehicle measurement model of Section III-A. We choose the particle with the highest weight as the final pose X* of the target vehicle (function Best_Particle in Line 12). Fig. 4 illustrates an example of vehicle pose estimation with this algorithm.
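To make the control flow of Algorithm 1 concrete, a simplified C++ sketch follows. It assumes the pointLogLikelihood helper above wrapped into a logLikelihood function that assembles R0-R3 for a given visible-surface width; minAreaRectPose stands in for a minimum-area enclosing rectangle routine such as OpenCV's cv::minAreaRect; uniform sampling in each survivor's neighbourhood replaces the full rejection-sampling and union-of-neighbourhoods bookkeeping of [4]; and the pruning threshold is our own assumption.

```cpp
#include <cmath>
#include <random>
#include <vector>
#include <algorithm>

struct Point2d  { double x, y; };
struct Particle { double x, y, theta; double log_w = 0.0; };

// Assumed helpers (not shown): minAreaRectPose wraps a minimum-area
// enclosing rectangle routine (Line 1); logLikelihood builds R0..R3 for
// the given visible-surface width and sums the per-point log-likelihoods
// of Section III-A over Z.
Particle minAreaRectPose(const std::vector<Point2d>& Z);
double logLikelihood(const std::vector<Point2d>& Z,
                     const Particle& p, double surface_width);

Particle scalingSeriesPose(const std::vector<Point2d>& Z,
                           int N, int M, double delta, double L) {
  const double kPi = 3.14159265358979323846;
  std::mt19937 rng{42};
  auto uni = [&rng](double r) {
    return std::uniform_real_distribution<double>(-r, r)(rng);
  };

  double w = 1.0, th = kPi / 2.0, zoom = 1.0 / std::sqrt(2.0);
  double posR = L / 2.0, angR = th;             // radii of V_0 (Line 2)
  std::vector<Particle> survivors{minAreaRectPose(Z)};

  for (int n = 0; n < N; ++n) {                 // Lines 3-9
    std::vector<Particle> samples;
    for (const Particle& p : survivors)         // Line 4 (simplified sampling)
      for (int m = 0; m < M; ++m)
        samples.push_back({p.x + uni(posR), p.y + uni(posR),
                           p.theta + uni(angR)});
    double best = -1e300;
    for (Particle& s : samples) {               // Line 5: relaxed width 2w+delta
      s.log_w = logLikelihood(Z, s, 2.0 * w + delta);
      best = std::max(best, s.log_w);
    }
    survivors.clear();                          // Line 6: prune low weights
    for (const Particle& s : samples)           // (threshold is an assumption)
      if (s.log_w > best - 3.0) survivors.push_back(s);
    posR = w; angR = th;                        // Line 7: survivor neighbourhoods
    w *= zoom; th *= zoom;                      // Line 8: anneal
  }

  for (Particle& s : survivors)                 // Lines 10-11: realistic model
    s.log_w = logLikelihood(Z, s, delta);
  return *std::max_element(survivors.begin(), survivors.end(),  // Line 12
      [](const Particle& a, const Particle& b){ return a.log_w < b.log_w; });
}
```

Note how the relaxed width 2w + ∆ in Line 5 is exactly the inflated visible-surface width of Section III-B: it shrinks toward ∆ as w decays by a factor of 1/√2 per iteration.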
Fig. 5. Our ALV used for the experiments in various scenes. It is equipped with a Velodyne HDL-64E S2 LIDAR, a NovAtel SPAN-CPT GPS-aided inertial navigation system and several cameras.
IV. EXPERIMENTAL EVALUATION AND ANALYSIS
We have evaluated the performance of the proposed algorithm on data collected by our own ALV in various urban environments. We implemented the algorithm in C++ with the Point Cloud Library (PCL) [17]. Our ALV is a modified Toyota Land Cruiser, as shown in Fig. 5, equipped with one Velodyne HDL-64E S2 LIDAR, one NovAtel SPAN-CPT GPS-aided inertial navigation system, and other sensors such as cameras and laser range finders. The Gaussian-process-based ground segmentation of [1] is adopted to remove the ground points, and the remaining obstacle points are clustered into objects with the Radially Bounded Nearest Neighbour (RBNN) graph of [3].
Fig. 4. An example of the annealing results of the Scaling Series algorithm for vehicle pose estimation. The top-left image shows the urban scene with a dynamic vehicle, and the bottom-left shows the pose estimation result (red rectangle). The blue rectangles represent the particles during the annealing process.
The Global Fourier Histogram descriptor of [18] is utilized to recognize the vehicles. In our implementation, the parameters are empirically set to M = 20, N = 8, ∆ = 0.4, L = 4.8, W = 1.8.
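For reference, the following struct mirrors these empirical settings; the struct and field names are our own illustration, not part of a released implementation.

```cpp
// Empirical parameter settings of Section IV (names are illustrative).
struct PoseEstimatorParams {
  int    M     = 20;    // particles drawn per neighbourhood
  int    N     = 8;     // annealing iterations of the Scaling Series
  double delta = 0.4;   // normal width of the visible surface rectangles (m)
  double L     = 4.8;   // fixed model length (m)
  double W     = 1.8;   // fixed model width (m)
};
```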
TABLE I. THE STATISTICAL ERRORS OF OUR ALGORITHM AND PETROVSKAYA'S METHOD [12]

                               Position error        Orientation error
                               mean (m)   var        mean (rad)   var
Our Algorithm                  0.0886     0.0051     0.0344       0.0017
Petrovskaya's Method [12]      0.2409     0.0089     0.0506       0.0044
Fig. 7 presents qualitative pose estimation results in four urban environments, with green cuboids representing the vehicles' poses. Our algorithm obtains good performance in most situations, including crowded environments. From the top row, one can see that our likelihood-field-based vehicle measurement model naturally handles over-segmentation: even though the points hitting one vehicle may be clustered into different objects, our algorithm still estimates the correct pose of the vehicle, as for the vehicles in the red rectangles.³
In order to evaluate our vehicle pose estimation algorithm quantitatively, we conducted an experiment in an open square containing only one static vehicle as the target. Our ALV drove around the target vehicle clockwise and counter-clockwise with different radii, and we manually labelled the pose of the target vehicle in every scan (a total of 2000 scans). Fig. 6 shows the position and orientation errors of our algorithm (red) compared with Petrovskaya's method [12]² (blue), and the statistical results are presented in Table I. From the table, one can see that the mean position error (0.0886 m) of our algorithm is about one third of that (0.2409 m) of Petrovskaya's method [12]. The mean orientation error of our algorithm is 0.0344 rad, while that of Petrovskaya's method is 0.0506 rad. Additionally, in 94.7% of the scans the orientation error of the estimated vehicle pose is less than 5 degrees with our algorithm, against 86.4% with Petrovskaya's method [12].

²Only the vehicle detection algorithm of [12], without tracking, is employed in the comparison with our algorithm.
Fig. 6. The position and orientation errors of the pose estimates of our algorithm (red) compared with those of [12] (blue) over 2000 scans collected in an open square containing only one static vehicle.
The bottom-right image of Fig. 7 illustrates a crossroads scenario where our ALV is turning left. Our algorithm correctly estimates the poses of six cars. However, for the two big buses in the left part, our algorithm does not work, because of the fixed width and length of the vehicle measurement model. Fig. 8 shows another crowded urban scene at a crossroads, where many vehicles are present and some are fully occluded in the xy plane by others, such as the vehicles in the four rectangles. Although no virtual rays (yellow lines in the right image) hit these vehicles, they still have some measurements. With Petrovskaya's measurement model [12] or its variants [13],[15], the poses of these vehicles cannot be estimated; in contrast, our algorithm estimates them successfully. This confirms the superiority of the likelihood-field-based vehicle measurement model presented in this paper.
³Each color represents an object; a cuboid containing different colors indicates over-segmentation.
Fig. 7. Some qualitative experimental results in the urban environments. The yellow points are ground points, while other colors represent different objects. The green cuboids represent the poses of the vehicles estimated with our algorithm. The white circles mark 10-meter range intervals around our ALV.
V. CONCLUSION AND FUTURE WORK
In this paper, we proposed a novel likelihood-field-based vehicle measurement model and combined it with the Scaling Series algorithm to estimate the poses of target vehicles. Our algorithm naturally handles the situation where target vehicles have measurements but are fully occluded by other objects in the xy plane. Additionally, the Scaling Series algorithm improves the computational efficiency. Experimental results on data collected by our own ALV in various urban environments show that our algorithm obtains promising performance.
In the future, we will combine the likelihood-field-based vehicle measurement model with a particle filter to track dynamic vehicles. We believe that this combination can also obtain good results.
ACKNOWLEDGMENT

This work is supported by the National Natural Science Foundation under Grants 61375050 and 61075043.
Fig. 8. Another crowded urban environment at a crossroads. Unlike the algorithms in [12],[13],[15], our algorithm can estimate the poses of the vehicles that are fully occluded in the xy plane by other objects (the four vehicles in the red, green, blue and purple rectangles).
REFERENCES

[1] T. Chen, B. Dai, R. Wang, and D. Liu, "Gaussian-process-based real-time ground segmentation for autonomous land vehicle," Journal of Intelligent and Robotic Systems, vol. 76, no. 3-4, pp. 563-582, 2014.
[2] B. Douillard, J. Underwood, N. Kuntz, V. Vlaskine, A. Quadros, P. Morton, and A. Frenkel, "On the segmentation of 3D LIDAR point clouds," in International Conference on Robotics and Automation (ICRA), 2011, pp. 2798-2805.
[3] K. Klasing, D. Wollherr, and M. Buss, "A clustering method for efficient segmentation of 3D laser data," in International Conference on Robotics and Automation (ICRA), 2008, pp. 4043-4048.
[4] A. Petrovskaya, "Towards dependable robotic perception," Ph.D. dissertation, Stanford University, 2011.
[5] D. D. Morris, R. Hoffman, and P. Haley, "A view-dependent adaptive matched filter for LADAR-based vehicle tracking," in 14th International Conference on Robotics and Applications, 2009, pp. 1-9.
[6] M. Darms, P. Rybski, and C. Urmson, "Classification and tracking of dynamic objects with multiple sensors for autonomous driving in urban environments," in IEEE Intelligent Vehicles Symposium (IV), 2008, pp. 1197-1202.
[7] M. Himmelsbach and H. J. Wuensche, "Tracking and classification of arbitrary objects with bottom-up/top-down detection," in IEEE Intelligent Vehicles Symposium (IV), 2012, pp. 577-582.
[8] C. T. M. Keat, C. Pradalier, and C. Laugier, "Vehicle detection and car park mapping using laser scanner," in International Conference on Intelligent Robots and Systems (IROS), 2005, pp. 2054-2060.
[9] C. Mertz, L. E. Navarro-Serment, R. MacLachlan, P. Rybski, A. Steinfeld, A. Suppe, C. Urmson, N. Vandapel, M. Hebert, and C. Thorpe, "Moving object detection with laser scanners," Journal of Field Robotics (JFR), vol. 30, no. 1, pp. 17-43, 2013.
[10] D. D. Morris, P. Haley, W. Zachar, and S. McLean, "LADAR-based vehicle tracking and trajectory estimation for urban driving," in Proceedings of the Association for Unmanned Vehicle Systems International (AUVSI), 2008.
[11] B. Barrois, S. Hristova, C. Wohler, F. Kummert, and C. Hermes, "3D pose estimation of vehicles using a stereo camera," in IEEE Intelligent Vehicles Symposium (IV), 2009, pp. 267-272.
[12] A. Petrovskaya and S. Thrun, "Model based vehicle tracking for autonomous driving in urban environments," in Robotics: Science and Systems (RSS), 2008, pp. 25-28.
[13] N. Wojke and M. Haselich, "Moving vehicle detection and tracking in unstructured environments," in International Conference on Robotics and Automation (ICRA), 2012, pp. 3082-3087.
[14] T. D. Vu and O. Aycard, "Laser-based detection and tracking moving objects using data-driven Markov chain Monte Carlo," in International Conference on Robotics and Automation (ICRA), 2009, pp. 3800-3806.
[15] Z. Liu, D. Liu, and T. Chen, "Vehicle detection and tracking with 2D laser range finders," in 6th International Congress on Image and Signal Processing (CISP), 2013, pp. 1006-1013.
[16] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: The MIT Press, 2005.
[17] R. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in International Conference on Robotics and Automation (ICRA), 2011, pp. 1-4.
[18] T. Chen, B. Dai, D. Liu, and J. Song, "Performance of global descriptors for Velodyne-based urban object recognition," in IEEE Intelligent Vehicles Symposium (IV), 2014, pp. 667-673.