Experimental Comparison of Extended Kalman and Particle Filter in Mobile Robotic Localization

Julián Ángel
Universidad de Los Andes, IMAGINE laboratory
Carrera 1 No. 18-10, Bogotá D.C., Colombia
[email protected]
IEEE Student Member

Fernando De La Rosa Rosero
[email protected]
IEEE Professional Member

Abstract—This paper presents an implementation and comparison of odometry and probabilistic algorithms for the mobile robot localization problem in indoor environments. The hardware and software tools used for the experiments are briefly described. In addition, an architecture is proposed to ease the development of the applications used to implement the algorithms and to obtain the results needed to compare them.

Index Terms—Bayes Filter, Kalman Filter, Mobile Robots, Non-deterministic problems, Particle Filter, Odometry.

I. INTRODUCTION

From the moment students take their first steps at the university, teachers give them the theory they need to solve engineering problems. At some point, however, they have to put that theory into practice, and they realize that everything they know is not enough to solve the problem. At that moment they finally understand the saying: in theory, there is no difference between theory and practice; in practice, there is. The robotics field is no exception to this reality. Researchers in the field have investigated different solutions to problems present in robotics without validating their solutions in practice. One of the most important topics in robotics is mobile robot localization in indoor environments, which has been studied and tested in works such as [1], [2], [3], [6], [9], among others. However, those investigations rarely discuss the problems and practical considerations that have to be taken into account to implement the probabilistic filters. The scope of this paper is to make a practical comparison among the traditional approach, which is simply using odometry, the Extended Kalman Filter and the Particle Filter. This paper is structured as follows: section II introduces the localization problem in mobile robots; section III briefly introduces the algorithms used throughout the experiments; section IV shows the models used to implement the algorithms, including the odometry model; the remaining sections describe the hardware and software tools used for the experiments, the experimental environment setup, the practical results and their analysis, and the conclusions and future work.
II. LOCALIZATION PROBLEM

Estimating the robot position, represented by the configuration p, in an environment is known in mobile robotics as the localization problem. Three kinds of localization problem can be distinguished [1]: local, where the robot knows its initial position in a map; global, where the robot knows the map but does not know its initial position, so the position has to be estimated; and kidnapping, where the robot does not know the map, so it has to reconstruct the map and localize itself. The most common solution for this problem is odometry. Nevertheless, this technique is inaccurate because it does not take into account external factors such as the terrain on which the robot executes its movements, physical differences between robots, and so on. As a consequence, researchers have been looking for other techniques that allow better estimates, for example the use of landmarks, which in turn requires sophisticated techniques to identify them.

III. PROBABILISTIC FILTER ALGORITHMS

Many problems in the real world are non-deterministic. This means that the result of the problem cannot be known before solving it, because each time the problem is solved it may give a different result. Thus, it is necessary to find methods that model the variability inherent to some robotic problems, such as robot motion and sensor measurements. Probability theory has been used to account for the uncertainty in a model. It allows associating a belief with each variable, so it becomes possible to use algorithms that solve the problem and produce a new belief over the variables from a prior belief, the control actions and the measurements obtained after those control actions. We consider below the basic methods that apply probability theory.

A. BAYES FILTER

The algorithm used as the base for the other algorithms is known as the Bayes filter. The new belief is calculated recursively from the prior belief, the current control actions and the current measurements. A pseudo-algorithm of the Bayes filter can be seen in Table I, where p_t is the position vector, u_t is the control vector and z_t is the measurement vector.
1: Algorithm Bayes_Filter(bel(p_{t-1}), u_t, z_t):
2:   for all p_t do
3:     bel_bar(p_t) = ∫ p(p_t | u_t, p_{t-1}) bel(p_{t-1}) δp_{t-1}
4:     bel(p_t) = η p(z_t | p_t) bel_bar(p_t)
5:   end
6:   return bel(p_t)

TABLE I
Pseudo-algorithm of the Bayes filter [1].
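To make the recursion in Table I concrete, the following is a minimal C++ sketch of a discrete (histogram) Bayes filter over a one-dimensional grid of positions. The motion and measurement probabilities used here are illustrative values chosen for the example, not the models used in the experiments.

#include <cstdio>
#include <vector>

// Minimal discrete Bayes filter over a 1-D grid (illustrative models only).
int main() {
    const int N = 10;
    std::vector<double> bel(N, 1.0 / N);             // uniform prior belief bel(p_0)

    const double pMove = 0.8, pStay = 0.2;           // simple motion model p(p_t | u_t, p_{t-1})
    const int z = 3;                                 // observed cell
    const double pHit = 0.6, pMiss = 0.4 / (N - 1);  // simple measurement model p(z_t | p_t)

    // Prediction step (line 3 of Table I): integrate the motion model over the prior.
    std::vector<double> belBar(N, 0.0);
    for (int p = 0; p < N; ++p) {
        belBar[p] += pStay * bel[p];
        const int next = (p + 1 < N) ? p + 1 : p;    // the last cell acts as a wall
        belBar[next] += pMove * bel[p];
    }

    // Correction step (line 4 of Table I): weight by the measurement likelihood and normalize.
    double eta = 0.0;
    for (int p = 0; p < N; ++p) {
        bel[p] = ((p == z) ? pHit : pMiss) * belBar[p];
        eta += bel[p];
    }
    for (int p = 0; p < N; ++p) bel[p] /= eta;

    for (int p = 0; p < N; ++p) std::printf("bel(%d) = %.3f\n", p, bel[p]);
    return 0;
}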
1: Algorithm Extended_Kalman_Filter(p_{t-1}, Σ_{t-1}, u_t, z_t):
2:   p̄_t = g(u_t, p_{t-1})
3:   Σ̄_t = G_t Σ_{t-1} G_t^T + V_t M_t V_t^T
4:   K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^{-1}
5:   p_t = p̄_t + K_t (z_t - h(p̄_t, 0))
6:   Σ_t = (I - K_t H_t) Σ̄_t
7:   return p_t, Σ_t

TABLE II
Pseudo-algorithm of the EKF [1].
1:  Algorithm Particle_Filter(P_{t-1}, u_t, z_t):
2:    P̄_t = P_t = ∅
3:    for(m = 1; m <= M; m++)
4:      sample p_t^[m] ~ p(p_t | u_t, p_{t-1}^[m])
5:      w_t^[m] = p(z_t | p_t^[m])
6:      P̄_t = P̄_t + <p_t^[m], w_t^[m]>
7:    end
8:    for(m = 1; m <= M; m++)
9:      draw i with probability ∝ w_t^[i]
10:     add p_t^[i] to P_t
11:   end
12:   return P_t

TABLE III
Pseudo-algorithm of the particle filter [1].
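A minimal C++ sketch of one iteration of Table III is given below for a one-dimensional state. The Gaussian motion noise and measurement likelihood are placeholders standing in for the kinematic and sensor models of section IV.

#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// One particle filter update for a 1-D state (placeholder models).
std::vector<double> particleFilterStep(const std::vector<double>& particles,
                                       double u, double z, std::mt19937& rng) {
    const double motionSigma = 0.05;              // assumed motion noise
    const double measSigma = 0.2;                 // assumed measurement noise
    std::normal_distribution<double> motionNoise(0.0, motionSigma);

    std::vector<double> propagated(particles.size());
    std::vector<double> weights(particles.size());
    for (std::size_t m = 0; m < particles.size(); ++m) {
        // Line 4 of Table III: sample p_t^[m] from the motion model.
        propagated[m] = particles[m] + u + motionNoise(rng);
        // Line 5 of Table III: weight the particle with the measurement likelihood.
        const double d = z - propagated[m];
        weights[m] = std::exp(-0.5 * d * d / (measSigma * measSigma));
    }

    // Lines 8-11 of Table III: draw particles with probability proportional to the weights.
    std::discrete_distribution<std::size_t> draw(weights.begin(), weights.end());
    std::vector<double> resampled(particles.size());
    for (std::size_t m = 0; m < particles.size(); ++m)
        resampled[m] = propagated[draw(rng)];
    return resampled;
}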
Fig. 1. Possible robot position in a map.
B. EXTENDED KALMAN FILTER

The Kalman Filter (KF) relies on the following assumptions: the prior and posterior distributions are unimodal Gaussians, and the system in which the KF is used is linear. However, few truly linear systems exist in the real world, so the KF cannot be applied in many cases. In order to use the KF in non-linear systems, it was extended into the Extended Kalman Filter (EKF), which removes the linearity assumption by linearizing the system with a Taylor expansion. A pseudo-algorithm of the EKF can be seen in Table II, where p_t is the position vector, u_t is the control vector, z_t is the measurement vector and Σ_t is the covariance matrix. The computational complexity of the EKF is O(k^2.37 + n^2), where k is the measurement dimensionality and n is the state dimensionality. In the frame of localization, the EKF can only be used in local problems, because it can only represent the estimated pose as a unimodal Gaussian, figure 1.
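The update in Table II maps almost line by line onto a few matrix operations. The C++ sketch below (using the Eigen library) performs one EKF step for a three-dimensional pose with a three-dimensional measurement; the prediction g(u_t, p_{t-1}), the expected measurement h(p̄_t, 0), the Jacobians G_t, H_t, V_t and the noise matrices M_t, Q_t are assumed to be supplied by the motion and sensor models.

#include <Eigen/Dense>

// One Extended Kalman Filter step following Table II (3-D pose, 3-D measurement).
// The caller evaluates the motion and sensor models and passes in the results.
void ekfStep(Eigen::Vector3d& p, Eigen::Matrix3d& Sigma,  // state mean and covariance
             const Eigen::Vector3d& z,                    // measurement z_t
             const Eigen::Vector3d& gPred,                // g(u_t, p_{t-1})
             const Eigen::Matrix3d& G,                    // Jacobian of g w.r.t. the state
             const Eigen::Matrix3d& V,                    // Jacobian of g w.r.t. the control
             const Eigen::Matrix3d& M,                    // control noise covariance M_t
             const Eigen::Vector3d& hPred,                // h(p_bar_t, 0)
             const Eigen::Matrix3d& H,                    // Jacobian of h w.r.t. the state
             const Eigen::Matrix3d& Q)                    // measurement noise covariance Q_t
{
    // Lines 2-3 of Table II: propagate the mean and covariance through the motion model.
    p = gPred;
    Sigma = G * Sigma * G.transpose() + V * M * V.transpose();

    // Lines 4-6 of Table II: Kalman gain, then correct the mean and covariance.
    const Eigen::Matrix3d K = Sigma * H.transpose() * (H * Sigma * H.transpose() + Q).inverse();
    p = p + K * (z - hPred);
    Sigma = (Eigen::Matrix3d::Identity() - K * H) * Sigma;
}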
C. PARTICLE FILTER

The particle filter (PF) uses a finite set of random particles to sample a probability density. The PF has two principal advantages: it can be used in non-linear systems, and it can approximate any type of probability density. However, the computational complexity of the algorithm is high: the PF complexity is O(m^2 + a), where m is the number of particles and a is a term that depends on how the particles are drawn in the implementation. A pseudo-algorithm of the PF can be seen in Table III, where P_t is the set of particles representing possible positions, u_t is the control vector and z_t is the measurement vector. In the frame of localization, the PF can be used in both local and global problems. The particles are propagated using a kinematic model and then weighted using a sensor model.

IV. MODELS

A. Kinematic

The kinematic model of a mobile robot describes the motion of the robot without considering external factors that can influence it. This model depends on the robot's constraints, for example whether the robot is holonomic or not. The model used in this paper considers only the center of gravity of the robot, which avoids having to know the shape of the robot. It also assumes a holonomic, differential-wheeled robot, figure 2. The variables used to describe the robot position with respect to the scene frame are x, y and θ, where θ describes the robot orientation. Equation 1 is the kinematic model for a differential-wheeled robot such as the one shown in figure 3.
p^G_{t+1} = [x^G_{t+1}, y^G_{t+1}, θ^G_{t+1}]^T = [x^G_t + d·cos(θ_t + Δθ), y^G_t + d·sin(θ_t + Δθ), θ^G_t + Δθ]^T    (1)
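A direct C++ transcription of equation (1) might look as follows; d and Δθ are assumed to be the traveled distance and heading change reported by the odometry.

#include <cmath>

// Robot configuration <x, y, theta> expressed in the scene frame.
struct Pose {
    double x, y, theta;
};

// Equation (1): advance a distance d along the new heading and turn by dTheta.
Pose propagate(const Pose& p, double d, double dTheta) {
    Pose next;
    next.x = p.x + d * std::cos(p.theta + dTheta);
    next.y = p.y + d * std::sin(p.theta + dTheta);
    next.theta = p.theta + dTheta;
    return next;
}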
1: Algorithm fiction_model(z_t, p_t):
2:   q = det(2πΣ)^(-1/2) · exp(-(1/2) (z_t - p_t)^T Σ^(-1) (z_t - p_t))
3:   return q

TABLE V
Fiction sensor model, where z_t is the measurement vector, p_t is the position vector, and Σ is the covariance matrix.
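The likelihood of Table V can be written directly in C++. The sketch below uses the Eigen library for the 3×3 covariance; the actual value of Σ is a tuning parameter, as discussed in section VII.

#include <cmath>
#include <Eigen/Dense>

// Table V: q = det(2*pi*Sigma)^(-1/2) * exp(-1/2 (z - p)^T Sigma^(-1) (z - p)).
double fictionModel(const Eigen::Vector3d& z, const Eigen::Vector3d& p,
                    const Eigen::Matrix3d& Sigma) {
    const double kTwoPi = 2.0 * 3.14159265358979323846;
    const Eigen::Vector3d diff = z - p;
    const double mahalanobis = diff.dot(Sigma.inverse() * diff);  // (z-p)^T Sigma^-1 (z-p)
    const Eigen::Matrix3d scaled = kTwoPi * Sigma;                // 2*pi*Sigma
    const double norm = 1.0 / std::sqrt(scaled.determinant());
    return norm * std::exp(-0.5 * mahalanobis);
}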
Fig. 2. Differential wheeled robot.
1: Algorithm beam_range_finder_model(z_t, p_t, m):
2:   q = 1
3:   for(k = 1; k <= K; k++)
4:     compute z_t^k* for the measurement z_t^k using ray casting
5:     p = z_hit · p_hit(z_t^k | p_t, m) + z_short · p_short(z_t^k | p_t, m) + z_max · p_max(z_t^k | p_t, m) + z_rand · p_rand(z_t^k | p_t, m)
6:     q = q · p
7:   return q

TABLE IV
Pseudo-algorithm of the beam range finder model [1].

The robot takes as its reference the position it has at the moment the application establishes communication with it, and this position cannot be changed until the application disconnects and connects again. As a consequence, all commands and measurements have to be converted from that initial position to the scene frame.

A. CASE 1: Circular Trajectory, Using Just Odometry

This case was conceived as a first approximation to find possible problems in implementing the EKF while using MATLAB and developing applications for robots made by Mobile Robots Inc. The application consisted in following a circular trajectory without using sensors to gather information from the environment, so the z_t vector was obtained from the odometry sensor. The environment used to test this case can be seen in figure 5. The first step was to become familiar with calling MATLAB from C++; this was done by calling from the application a MATLAB program that generates the points of a circle. This program takes as arguments the number of points used to discretize the circle and the radius, and it returns an array with all the points of the circle. The second step was to implement the EKF in MATLAB and then export it as a library to be used from C++. This step was very useful because it allowed testing the EKF before using it in the application. The final step was to develop and run the application on the robots.

B. CASE 2: Square Trajectory, Using Just Odometry

The trajectory selected for this case was a square; figure 6 shows the environment for this case. The reasons for selecting this trajectory were: first, the robot has to travel a long distance, 3.9 meters; second, the robot has to turn through angles of 90 degrees. These two factors make the error introduced in the robot motion grow more than in the circular trajectory. The PF was implemented in MATLAB and tested in this environment. As in the previous case, the odometry sensor was used as the measurement vector. The waypoints were obtained from a file whose structure can be seen in table VI. The first line specifies the number of points the robot has to travel through. The second line specifies the initial position of the robot, where the first value represents the x axis, the second the y axis and the third θ, all of them with respect to the scene frame.
C. CASE 3: Using the Fiction Sensor

In this case the fiction sensor was introduced to give the robot information about where its initial position is located with respect to the scene frame, which in this case is the position < 0, 0 >. The execution environment and file format are the same as in the previous case.
1:  4
2:  0 0 0
3:  3600 0
4:  3600 3900
5:  0 3900
6:  0 0

TABLE VI
File structure used to give the applications the points that the robot has to follow.
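A minimal C++ sketch of reading this file format is shown below; the structure and function names are illustrative, and the values are in millimeters as in Table VI.

#include <fstream>
#include <string>
#include <vector>

// Waypoint in the scene frame, in millimeters.
struct Waypoint {
    double x, y;
};

// Reads the file of Table VI: number of points, initial pose (x y theta),
// then one waypoint per line. Returns false if the file cannot be parsed.
bool readTrajectory(const std::string& path,
                    double& x0, double& y0, double& theta0,
                    std::vector<Waypoint>& points) {
    std::ifstream in(path);
    int n = 0;
    if (!(in >> n >> x0 >> y0 >> theta0)) return false;
    points.resize(n);
    for (int i = 0; i < n; ++i)
        if (!(in >> points[i].x >> points[i].y)) return false;
    return true;
}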
n       Odometry                              EKF
        Error in X (mm)   Error in Y (mm)     Error in X (mm)   Error in Y (mm)
1       41                283                 44                261.49
2       33                276                 47                267.5
3       41                273                 40                308.8
4       36                272                 31.4              300
5       35                276                 42                283
6       41                280                 18.5              291.1
7       26                274                 14.9              280.5
8       16                276                 17.6              293.73
9       23                285                 15.75             284.5
10      24                281                 13                290
11      28                281                 16                297.81
12      29                277                 23                285.879
Mean    31.08             277.8               26.9              285.87
Standard deviation  8.0955  4.108             13.025            13.92

TABLE VII
Experimental errors obtained in case 1. These errors are the difference between the real position and the estimated position.
Fig. 6. The two red lines are the scene framework. The red dots are where the robot is supposed to finish after executing this path.
VII. PRACTICAL RESULTS AND ANALYSIS

A. CASE 1

The results obtained in case 1 can be seen in table VII. Regarding the parameter settings, runs one and two correspond to an EKF with coefficients 30, 0.8, 30, 30, 0.5 and covariance matrix value 0.000001. Runs three and four use 30, 0.9, 50, 50, 0.8 and 0.001. Runs five and six use 25, 0.8, 15, 15, 0.3 and 0.001. Runs seven and eight use 25, 0.8, 15, 15, 0.3 and 0.000001. Run nine uses 45, 0.8, 10, 15, 0.3 and 0.000001. Run ten uses 15, 0.5, 10, 10, 0.2 and 0.000001. Run eleven uses 40, 0.9, 20, 20, 0.4 and 0.01. Run twelve uses 5, 0.3, 5, 5, 0.2 and 0.01. The results show that odometry and the EKF do not differ significantly from each other. This makes sense because the EKF is not taking advantage of external information to estimate the robot position. However, these results yield the best coefficients and covariance matrix value for the EKF, which are 30, 0.8, 30, 30, 0.5 and 0.000001, respectively.

B. CASE 2

The EKF was implemented using the best coefficients obtained from the previous case. The results obtained in case 2 can be seen in tables VIII and IX. The results show that odometry, the EKF and the PF do not differ significantly from each other. In addition, the error in distance increased because of the long distance that the robot had to travel. This result was expected because the long distance allows the robot to increase its velocity, so the robot slides significantly when trying to stop.

n       Odometry                              EKF
        Error in X (mm)   Error in Y (mm)     Error in X (mm)   Error in Y (mm)
1       850               633                 758.75            532.7
2       872               639                 868               573
3       812               573                 947               599
4       924               583                 823.7             564
5       923               580                 855               565
6       908               593                 925               655.2
7       893               634                 829               565
8       785               579                 780.7             582.7
9       899               592                 848.8             511.7
10      940               589                 939.7             604
Mean    880.6             599.5               858.5             575.21
Standard deviation  50.890  25.509            62.568            39.436

TABLE VIII
Experimental errors obtained in case 2. These errors are the difference between the real position and the estimated position.
n       PF
        Error in X (mm)   Error in Y (mm)
1       855               264.9
2       880               534
3       804               490
4       796.62            328.8
5       940.04            324
6       831               246
7       846.1             351.8
8       746               288.99
9       923.75            411.7
10      886.5             287
Mean    850.901           352.71
Standard deviation  59.584  96.647

TABLE IX
Experimental errors obtained in case 2. These errors are the difference between the real position and the estimated position.

C. CASE 3

The results obtained in case 3 can be seen in table X. The results show that the EKF and the PF reduce the position error significantly when a sensor gives the algorithms information about the environment. It is important to notice that the EKF shows better results than the PF. This could be caused by the fiction model used in the PF, which could be inaccurate for the kind of measurements taken.

n       EKF                                   PF
        Error in X (mm)   Error in Y (mm)     Error in X (mm)   Error in Y (mm)
1       0.26              0.15                12.2              24.68
2       10.02             16.8                14.4              2
3       0.16              20                  18.83             4.65
4       0.44              75                  35.228            160
5       6.83              0.1                 1.69              4.5
6       7                 73                  28.46             9.35
7       10                22                  8.31              28.79
8       5                 65                  35.8              263.62
9       0.11              41.8                43.66             124.86
10      0.01              34.89               8.171             6.66
Mean    3.983             34.874              20.6749           62.911
Standard deviation  5.127  28.18              14.196            89.89

TABLE X
Experimental errors obtained in case 3. These errors are the difference between the real position and the estimated position. The odometry results can be seen in table VIII.

VIII. CONCLUSIONS AND FUTURE WORKS

A. Conclusions

The proposed architecture was very useful in the implementation of the applications because only a few lines of code had to be modified. The principal changes in the code concerned the matrix representation that MATLAB uses in C++, because the dimensionality of some matrices changes from the EKF to the PF. The architecture also allowed modifying the classes medicion and control without any impact on the applications that were developed. The use of a sensor that provides external information is very important to reduce the errors introduced by the different effects that affect the robot motion. However, finding the correct values for the model and algorithm variables is not an easy task; those variables depend on the characteristics of the robot and of the environment where the algorithm is going to be tested.

B. Future Works

Future work on this project is to give the robot a map of its environment and use the algorithms in a local localization problem. The idea is that the robot has to execute a predefined trajectory, and the error is measured as the difference between the real position and the position estimated by each algorithm. The robot to be used for this experiment is the Pioneer P3-DX, fabricated by Mobile Robots
Inc. This robot is equipped with a laser range finder, which provides very accurate measurements of the environment.

REFERENCES
[1] S. Thrun, Probabilistic Robotics. MIT Press, 2005.
[2] G. Welch and G. Bishop, An Introduction to the Kalman Filter. University of North Carolina at Chapel Hill, 2001.
[3] R. Siegwart and I. R. Nourbakhsh, Introduction to Autonomous Mobile Robots. MIT Press, 2004.
[4] P. J. McKerrow, Introduction to Robotics. Addison-Wesley, 1991.
[5] J. J. Craig, Introduction to Robotics: Mechanics and Control. Pearson Prentice Hall, 2005.
[6] R. Negenborn, Robot Localization and Kalman Filters: On Finding Your Position in a Noisy World. Utrecht University, 2003.
[7] I. Rekleitis, Cooperative Localization and Multi-Robot Exploration. McGill University, 2003.
[8] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, February 2002.
[9] W. Hu, T. Downs, G. Wyeth, M. Milford, and D. Prasser, "A Modified Particle Filter for Simultaneous Robot Localization and Landmark Tracking in an Indoor Environment," University of Queensland, Australia.