Position Estimation for a Mobile Robot using Data Fusion

E. Stella, G. Cicirelli, A. Distante
Istituto Elaborazione Segnali ed Immagini - C.N.R.
E-mail: [email protected]
Abstract
This paper describes a position estimation technique based on the fusion of data obtained by two independent subsystems in a mobile robot navigation context. The first subsystem is a self-location system composed of an onboard camera, an onboard image processing unit and artificial landmarks; the second is a dead-reckoning subsystem based on odometry. The robot navigation system integrates the position estimated by the vision subsystem with the position estimated by odometry using a Kalman filter framework.

1 Introduction

Navigation is a fundamental problem in the development of autonomous mobile robots. A method of navigating in an indoor environment is to follow a preplanned path, given as subgoals that are points on the path. For this purpose the autonomous mobile robot must know its current position. The most widely used position estimation methods are dead-reckoning methods and self-location methods. The former, which use various internal sensors such as gyroscopic sensors, acceleration sensors and rotary encoders attached to the wheels (odometers), have long been examined [4, 3]. These methods estimate the position relative to the previous one, so on long routes it is impossible to avoid the accumulation of error caused by wheel slippage or roughness of the floor. Self-location methods, on the contrary, are based on an absolute estimate of the position using visible external landmarks. In this case the position error is bounded, but detection of the external references or landmarks and real-time position fixing may not always be possible. Combining the two approaches can therefore provide a more accurate positioning system. In fact, the use of several sensors may increase the precision or reliability of the results and, in certain cases, allows information to be extracted that would be difficult to acquire using only one sensor. Fusing the outputs of a given sensory module with those of other independent sensory modules can minimize the effect of sensor noise and thus reduce the uncertainty of the resulting measurements.

Data fusion [1] for location estimation has been used in the robotics literature by a number of authors. For example, in [2] and [4] the locating system has two independent measuring subsystems: a dead-reckoning system and a machine vision system. The vision systems described are different: in [2] the absolute location of the vehicle is estimated using an onboard camera and bar-code-like signposts along the route, while in [4] a technique is developed that uses naturally occurring features as landmarks; these are fixed objects in the world which are listed in a database. Data coming from the two subsystems are combined by an extended Kalman filter.

In this paper an accurate and simple navigation system is described. This system also consists of a vision subsystem and a dead-reckoning subsystem. The first is based on a straightforward and original technique that determines the location of the center of projection of the camera and the orientation of the optical axis using three artificial landmarks whose positions in the environment are known, capitalizing on the excellent angular resolution of CCD cameras. The second uses odometers as internal sensors. The position estimates provided by the two sensor systems are combined, at every step, by means of a linear Kalman filter (LKF). Our navigation strategy is based on the fact that the path the vehicle will follow is known a priori.

This paper is organized as follows: section 2 presents our model of the odometry uncertainty, section 3 describes the vision module for camera location, and section 4 explains the fusion of the data obtained by the two modules through the LKF. Finally, the experimental results and conclusions are summarized in sections 5 and 6.
2 Odometer Estimation Model

The two wheels of our vehicle are equipped with encoders. At each sampling period $\Delta T$ an estimate of the angular displacement of each wheel during the period ($\Delta\theta_R$ and $\Delta\theta_L$, respectively) is provided. The corresponding translation $S$ and rotation $\Delta\theta$, measured with respect to the mid-point of the wheels' axis, are given by:

$$ S = R(\Delta\theta_R + \Delta\theta_L)/2 $$
$$ \Delta\theta = R(\Delta\theta_R - \Delta\theta_L)/B $$

where $R$ is the wheel radius and $B$ is the wheel baseline. In a two-dimensional space, the location of the mobile robot can be represented by the vector $P_t = (x_t, y_t, \theta_t)$, where $x_t$, $y_t$ and $\theta_t$ are, respectively, the position and orientation of the robot in a world reference system. Using the values $S$ and $\Delta\theta$, it is possible to write the theoretical estimation equation for $P$, which expresses $P_{t+1}$, at time $t + \Delta T$, as a function of $P_t$ at time $t$:

$$
P_{t+1} = P_t + \Delta P =
\begin{bmatrix} x_t + S \cos(\theta_t + \Delta\theta) \\ y_t + S \sin(\theta_t + \Delta\theta) \\ \theta_t + \Delta\theta \end{bmatrix}
= F(P_t, S, \Delta\theta)
$$

This equation approximates the trajectory as a sequence of constant-curvature segments of length $S$. It requires that the curvature of the trajectory, given by $\Delta\theta / S$, be constant during $\Delta T$ and small with respect to the vehicle translational and rotational accelerations. The estimated state $\hat{P}_{t+1}$ is given by:

$$ \hat{P}_{t+1} = P_{t+1} + E_{t+1} $$

but

$$
\hat{P}_{t+1} = \hat{P}_t + \widehat{\Delta P} = P_t + E_t + \widehat{\Delta P} =
\begin{bmatrix} x_t + E_t^x + (S + E_S)\cos(\theta_t + \Delta\theta + E_{\Delta\theta}) \\
 y_t + E_t^y + (S + E_S)\sin(\theta_t + \Delta\theta + E_{\Delta\theta}) \\
 \theta_t + E_t^\theta + \Delta\theta + E_{\Delta\theta} \end{bmatrix}
= F(P_t + E_t, S + E_S, \Delta\theta + E_{\Delta\theta})
$$

Using the first-order Taylor expansion, this equation becomes:

$$
\hat{P}_{t+1} = P_{t+1} + E_{t+1} = F(P_t, S, \Delta\theta) + \frac{\partial F}{\partial P} E_t + \frac{\partial F}{\partial S} E_S + \frac{\partial F}{\partial \Delta\theta} E_{\Delta\theta}
$$

and it follows that:

$$
E_{t+1} = E_t + E_S \begin{bmatrix} \cos(\theta_t + \Delta\theta) \\ \sin(\theta_t + \Delta\theta) \\ 0 \end{bmatrix}
+ E_{\Delta\theta} \begin{bmatrix} -S\sin(\theta_t + \Delta\theta) \\ S\cos(\theta_t + \Delta\theta) \\ 1 \end{bmatrix}
= E_t + E
$$

where $E$ is assumed to be white Gaussian noise with zero mean and covariance matrix $Q_t$. The error covariance matrix is defined by:

$$
C^P_{t+1} = E[E_{t+1} E_{t+1}^T] = \underbrace{E[E_t E_t^T]}_{C^P_t} + E[E_t E^T] + E[E E_t^T] + \underbrace{E[E E^T]}_{Q_t}
$$

Under the assumption of white noise the following relations hold:

$$ E[E_t E^T] = E[E E_t^T] = 0 $$

and we obtain:

$$ C^P_{t+1} = C^P_t + Q_t . $$

$Q_t$ is modeled by a diagonal matrix whose terms are:

$$ q_{11} = \sigma_S |S \cos\theta_t| \qquad q_{22} = \sigma_S |S \sin\theta_t| \qquad q_{33} = \sigma_{\Delta\theta} |\Delta\theta| $$

where $\sigma_S$ and $\sigma_{\Delta\theta}$ represent, respectively, the standard deviations of $S$ and $\Delta\theta$, which are determined experimentally (see section 5).
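To make the model concrete, the following Python sketch shows one way the state and covariance propagation described above could be implemented; the function name propagate_odometry and the argument layout are our own illustrative choices, not part of the paper.

import numpy as np

def propagate_odometry(P, C, d_theta_R, d_theta_L, R, B, sigma_s, sigma_dtheta):
    """One step of the odometer estimation model.

    P: current state (x, y, theta); C: current 3x3 error covariance C_t^P;
    d_theta_R, d_theta_L: wheel angular displacements over the sampling period;
    R: wheel radius; B: wheel baseline;
    sigma_s, sigma_dtheta: experimentally determined standard deviations.
    """
    x, y, theta = P
    S = R * (d_theta_R + d_theta_L) / 2.0        # translation
    d_theta = R * (d_theta_R - d_theta_L) / B    # rotation

    # State update P_{t+1} = F(P_t, S, d_theta)
    P_next = np.array([x + S * np.cos(theta + d_theta),
                       y + S * np.sin(theta + d_theta),
                       theta + d_theta])

    # Covariance update C_{t+1} = C_t + Q_t, with Q_t diagonal
    Q = np.diag([sigma_s * abs(S * np.cos(theta)),
                 sigma_s * abs(S * np.sin(theta)),
                 sigma_dtheta * abs(d_theta)])
    C_next = C + Q
    return P_next, C_next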
3 Visual Self-Location Technique

The vision locating subsystem estimates the absolute location of the vehicle by localizing the center of projection of the camera and the orientation of the optical axis; it uses an onboard camera and three artificial landmarks (infrared LEDs) positioned in the environment. In order to apply the self-location technique it is important to have good visibility conditions. Let $P_1(X_1, Y_1, Z_1)$, $P_2(X_2, Y_2, Z_2)$ and $P_3(X_3, Y_3, Z_3)$ be the known landmark positions in the world reference system $O_w$ (fig. 1), and let $V(X_V, Y_V, Z_V)$ be the unknown coordinates of the center of projection (CP) of the camera.

Figure 1: 3D camera geometry
If some basic assumptions are respected (for example, the three landmarks projected onto the ground plane must remain three separate points) and the context of autonomous robot navigation is considered, it is necessary to determine only the location of the CP in the $X_w Z_w$ plane. Then let $L_1(X_1, Z_1)$, $L_2(X_2, Z_2)$, $L_3(X_3, Z_3)$ and $CP(X_{CP}, Z_{CP})$ be the projections onto the plane $X_w Z_w$ of $P_1$, $P_2$, $P_3$ and $V$ (fig. 2). If the visual angle $\gamma$ is assumed to be known, then the triangle with vertices $L_1$, $L_2$ and $CP$ can be circumscribed by a circle whose equation is determined by imposing that:

1. the circle passes through landmarks $L_1$ and $L_2$;

2. its radius satisfies the relation
$$ \frac{d}{\sin\gamma} = 2R $$
where $d$ is the distance between $L_1$ and $L_2$ and $R$ is the radius of the circle.

Figure 2: 2D camera geometry

The visual angles $\gamma$ and $\beta$ can be estimated using the excellent angular resolution of CCD cameras; in fact

$$ \gamma_1 = \arctan\left(\frac{u_1}{f}\right) \qquad \gamma_2 = \arctan\left(\frac{u_2}{f}\right) \qquad \gamma_3 = \arctan\left(\frac{u_3}{f}\right) $$

where $f$ is the focal length of the camera and $u_1$, $u_2$, $u_3$ are the distances, along the x-axis of the image plane, of the projections of the landmarks from the image conjugate point (the intersection of the optical axis with the image plane); then

$$ \gamma = \gamma_2 - \gamma_1 \qquad \beta = \gamma_3 - \gamma_2 . $$

So the conditions to estimate the circle $C_1$ (fig. 2) are completely defined and are obtained by solving the following system of equations:

$$
\begin{cases}
(X_1 - X_c)^2 + (Z_1 - Z_c)^2 = R^2 \\
(X_2 - X_c)^2 + (Z_2 - Z_c)^2 = R^2 \\
\dfrac{\sqrt{(X_1 - X_2)^2 + (Z_1 - Z_2)^2}}{\sin\gamma} = 2R
\end{cases}
\qquad (1)
$$

System (1) can be used for the triangle with vertices $L_2$, $L_3$ and $CP$ too (the angle $\beta$ is assumed known), to determine the equation of the circle $C_2$ (fig. 2). The intersection of the two circles $C_1$ and $C_2$ consists of two points: the landmark $L_2$ and the unknown $CP$. Hence the coordinates of the $CP$ can be evaluated (for details see [6]).

In order to estimate the error variance, a sample of calculated $CP$s is considered. To do this it is sufficient to increase the number of landmarks in the environment ($N > 3$) and to consider, at every position of the path, all possible combinations of three landmarks, each of which yields an estimate of the $CP$ by applying the described algorithm. The total number of combinations is given by $\binom{N}{3}$. An estimate of the error variance is obtained by:

$$ \hat{\sigma}^2 = \frac{\sum_{i=1}^{N} (CP_i - \overline{CP})^2}{N - 1} $$

where $(CP_1, \ldots, CP_N)_k$ is the sample of measured $CP$s at the $k$-th position on the path and $\overline{CP}$ is the sample mean.
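As an illustration, the following Python sketch implements this self-location step under the stated assumptions (planar landmark coordinates, measured image offsets u1, u2, u3 and a known focal length f); the helper names and the side parameter, which selects on which side of a landmark pair the circle center lies, are our own additions, not taken from the paper.

import numpy as np

def circle_through_chord(A, B, angle, side=+1):
    """Circle through A and B on which the chord AB subtends the visual
    angle `angle` (d / sin(angle) = 2R). `side` (+1/-1) selects on which
    side of AB the center is placed; it must be chosen consistently with
    the geometry of camera and landmarks (an assumption of this sketch)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(B - A)
    R = d / (2.0 * np.sin(angle))
    mid = (A + B) / 2.0
    n = np.array([-(B - A)[1], (B - A)[0]]) / d      # unit normal to the chord
    h = np.sqrt(max(R**2 - (d / 2.0)**2, 0.0))       # center offset from midpoint
    return mid + side * h * n                         # circle center

def reflect_point(P, c1, c2):
    """Reflect point P across the line through c1 and c2."""
    P, c1, c2 = (np.asarray(v, float) for v in (P, c1, c2))
    u = (c2 - c1) / np.linalg.norm(c2 - c1)
    v = P - c1
    return c1 + 2.0 * np.dot(v, u) * u - v

def self_locate(L1, L2, L3, u1, u2, u3, f, side=+1):
    """Estimate the camera center of projection CP in the Xw-Zw plane
    from three landmarks and their image-plane offsets u1, u2, u3."""
    g1, g2, g3 = np.arctan2(u1, f), np.arctan2(u2, f), np.arctan2(u3, f)
    gamma, beta = abs(g2 - g1), abs(g3 - g2)          # visual angles
    c1 = circle_through_chord(L1, L2, gamma, side)    # center of circle C1
    c2 = circle_through_chord(L2, L3, beta, side)     # center of circle C2
    # Both circles pass through L2; the second intersection point (the CP)
    # is the reflection of L2 across the line joining the two centers.
    return reflect_point(L2, c1, c2)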
4 Fusion of multisensory data

Sensory fusion can be performed by employing a suitable Kalman filter model in order to obtain an optimal estimate of the vehicle position and orientation. Kalman filters are well-known tools in the theory of stochastic dynamic systems which can be used to improve the quality of estimates of unknown quantities [5]. We therefore integrate, step by step, the two measures of vehicle position and orientation supplied by the odometers and by the visual self-location procedure, to compute a new estimate which filters the previous ones to gain an "optimal" prediction of the actual position and orientation. In the Kalman filter literature the measures provided by the odometers and by the self-location module constitute the measure model and are used to obtain statistically optimal estimates of the "internal" state of the system. The internal state is represented by the position and orientation of the vehicle and constitutes the system model. The system and measure models characterize two stochastic processes linked by a suitable functional constraint and error distribution.

Indeed, suppose that $S_k = (x_k, y_k, \theta_k)^T$ is the unknown position and orientation of the vehicle in the environment, expressed in the absolute reference system at time $t_k$. Because the path is planned in advance, the rototranslational matrix $\Phi_k$ can be evaluated, and at time $t_{k+1}$ the state of the vehicle is:

$$ S_{k+1} = \Phi_k S_k $$

This equation represents the system model; the measure model is the following:

$$ z_k = \begin{bmatrix} z_{ok} \\ z_{vk} \end{bmatrix} = H_k S_k + e_k $$

where $z_{ok} = S_k + \varepsilon_k$ is the estimate of the position supplied by the odometers, $z_{vk} = S_k + \delta_k$ is that supplied by the self-location module, and $H_k$ is the transformation matrix. The corresponding errors $\varepsilon_k$ and $\delta_k$ are assumed Gaussian and uncorrelated:

$$ \varepsilon_k \sim N(0, Q_{ok}) \qquad \delta_k \sim N(0, Q_{vk}) \qquad e_k \sim N(0, R_k) \qquad E[(\varepsilon_k\ \delta_k)^T(\varepsilon_j\ \delta_j)] = 0 $$

$Q_{ok}$ is the error covariance matrix defined in the dead-reckoning model ($Q_t$), and $Q_{vk}$ is the error covariance matrix obtained by the self-location module through the variance $\hat{\sigma}^2$. $R_k$ is the covariance matrix of $e_k$, defined by:

$$ R_k = \begin{bmatrix} Q_{ok} & 0 \\ 0 & Q_{vk} \end{bmatrix} $$

Summarizing, the complete equations of the filter model can be written as follows:

System model: $S_k = \Phi_{k-1} S_{k-1}$
Measure model: $z_k = H_k S_k + e_k$, with $e_k \sim N(0, R_k)$
Initial conditions: $E[S_0] = \hat{S}_0$, $\mathrm{cov}[S_0] = P_0$
State estimate extrapolation: $S_k^- = \Phi_{k-1} S_{k-1}^+$
Error covariance extrapolation: $P_k^- = \Phi_{k-1} P_{k-1}^+ \Phi_{k-1}^T$
State estimate update: $S_k^+ = S_k^- + K_k [z_k - H_k S_k^-]$
Error covariance update: $P_k^+ = [I - K_k H_k] P_k^-$
Kalman gain matrix: $K_k = P_k^- H_k^T [H_k P_k^- H_k^T + R_k]^{-1}$
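The following Python sketch illustrates one predict/update cycle of this filter, stacking the odometric and visual estimates into a single measurement vector; the function name kalman_step and the way H_k and R_k are assembled (two identity blocks and a block-diagonal covariance) follow the measure model above but are otherwise our own illustrative choices.

import numpy as np

def kalman_step(S_prev, P_prev, Phi, z_o, z_v, Q_o, Q_v):
    """One cycle of the linear Kalman filter described above.

    S_prev, P_prev: previous state estimate S_{k-1}^+ and covariance P_{k-1}^+;
    Phi: rototranslational matrix Phi_{k-1} (known, since the path is preplanned);
    z_o, z_v: position/orientation estimates from odometry and self-location;
    Q_o, Q_v: their 3x3 error covariance matrices.
    """
    I3 = np.eye(3)
    # Measure model: both sensors observe the full state directly.
    H = np.vstack([I3, I3])                      # 6x3
    R = np.block([[Q_o, np.zeros((3, 3))],
                  [np.zeros((3, 3)), Q_v]])      # 6x6
    z = np.concatenate([z_o, z_v])               # 6-vector

    # State estimate and error covariance extrapolation
    S_minus = Phi @ S_prev
    P_minus = Phi @ P_prev @ Phi.T

    # Kalman gain, state update and covariance update
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
    S_plus = S_minus + K @ (z - H @ S_minus)
    P_plus = (I3 - K @ H) @ P_minus
    return S_plus, P_plus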
5 Experimental results

Several experiments have been carried out to study the validity of the proposed integration approach. The methods described in the preceding sections have been tested on a TRC LABMATE mobile base equipped with a COHU 6710 CCD camera installed on top of an RS232 remote-controlled pan-tilt motorized head. The processing unit is a 68040 (33 MHz) VME board installed on the vehicle. The self-locator uses an ELTEC VME frame grabber, also installed on the vehicle. The vehicle is also supplied with two odometers for the measurement of position. For the experiments a vehicle speed of 500 mm/s has been considered. Initially the translational and rotational error standard deviations $\sigma_S$ and $\sigma_{\Delta\theta}$ have been calculated off-line, considering a translation $S = 1000$ mm and a rotation $\Delta\theta = 10$ deg, obtaining $\sigma_S = 12.32$ and $\sigma_{\Delta\theta} = 1.02$. The positional errors (fig. 4) are estimated as the Euclidean distance between the true path and the path calculated, respectively, by the odometer system, the vision system and the Kalman filter. To determine the true positions of the vehicle (true path) we have used a D-50 NIKON TOTAL STATION (theodolite), which permits the coordinates of a point in space to be estimated with an error of 5 mm. Fig. 3 and Table 2 show the estimated paths of the two systems (odometers and vision) and the filter estimates with respect to the theoretical and true paths. We can see that the filter estimates approach, at every position, the more accurate estimate, which in this particular case is that of the odometer system.

Figure 3: Experimental results. Theoretical path, true path and the paths estimated by the odometers, the vision system and the filter (coordinates in mm).

Figure 4: Positional errors (in mm) of the odometer, vision and filter estimates at each position along the path.
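As a small aside, the summary statistics reported in Table 1 below can be read as the mean and standard deviation of these Euclidean positional errors; a minimal Python sketch, assuming the paths are stored as (x, y) arrays in mm, could be:

import numpy as np

def positional_error_stats(true_path, estimated_path):
    """Mean Euclidean positional error and its standard deviation.

    true_path, estimated_path: arrays of shape (n, 2) with the (x, y)
    coordinates, in mm, of the vehicle at each position of the path.
    """
    errors = np.linalg.norm(np.asarray(estimated_path) - np.asarray(true_path), axis=1)
    return errors.mean(), errors.std(ddof=1)   # mean error and sample standard deviation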
           Odometer    Vision     Filter
E (mm)       98.948    313.813     98.141
σ (mm)       36.635    179.410     35.102

Table 1: Estimates of the mean error and the corresponding standard deviation.

Theoretical (Px, Py) | Real (Px, Py) | Odometer (Px, Py) [σx, σy] | Vision (Px, Py) [σx, σy] | Filter (Px, Py) [σx, σy]
(0.00, 0.00) | (-17.09, 22.86) | -- | (103.17, 175.93) [195.40, 76.24] | --
(0.00, 1000.00) | (-38.26, 1007.80) | (-0.00, 999.00) [110.94, 0.1] | (148.70, 1332.82) [312.28, 77.50] | (36.04, 999.00) [92.17, 0.04]
(-500.00, 1866.02) | (-501.59, 1718.07) | (-501.83, 1860.07) [151.56, 78.76] | (-305.89, 1953.94) [311.13, 90.86] | (-464.00, 1864.62) [76.34, 0.2]
(-1000.00, 2732.05) | (-1092.03, 2702.95) | (-997.10, 2729.39) [183.53, 110.92] | (-889.46, 2877.52) [201.96, 81.04] | (-960.18, 2730.69) [66.55, 0.4]
(-1000.00, 3732.05) | (-1089.01, 3681.34) | (-987.41, 3725.96) [214.49, 111.42] | (-1077.06, 3726.97) [272.96, 100.65] | (-968.44, 3730.76) [61.90, 0.64]
(-2000.00, 3732.05) | (-2086.86, 3666.62) | (-1983.40, 3728.33) [214.56, 157.29] | (-1851.78, 3706.74) [191.29, 236.62] | (-1959.13, 3730.78) [56.80, 0.61]
(-2000.00, 2732.05) | (-2080.45, 2687.48) | (-1967.33, 2733.46) [241.55, 157.92] | (-2227.40, 2947.74) [162.62, 147.43] | (-1987.61, 2731.22) [52.34, 0.36]
(-2000.00, 1732.05) | (-2072.40, 1691.18) | (-1963.08, 1732.47) [265.86, 158.08] | (-1854.77, 1958.11) [110.51, 59.65] | (-1965.47, 1733.19) [46.57, 0.15]
(-1500.00, 866.02) | (-1550.01, 845.60) | (-1457.70, 873.50) [285.14, 176.72] | (-1372.60, 1105.24) [159.12, 75.50] | (-1459.04, 865.02) [44.16, 0.05]
(-1673.64, -118.78) | (-1738.60, -124.01) | (-1625.57, -107.00) [305.67, 182.46] | (-1252.75, 449.07) [165.64, 94.78] | (-1608.04, -117.18) [42.26, 0.21]

Table 2: Theoretical, real and estimated positions (Px, Py), in mm, at each position of the path; for the odometer, vision and filter estimates the corresponding standard deviations (σx, σy) are given in square brackets. Odometer and filter estimates are not reported for the first position.

6 Conclusions

In this paper we have discussed a navigation strategy based on the use of a Kalman filter to integrate the measurements of the robot's position provided by the robot locator (odometers) and by the visual self-locator. Experimental results indicate that this approach can supply a statistically optimal estimate of the internal state of the vehicle. In future research we plan to consider a self-location system based on natural landmarks in order to reduce uncertainty.
References
[1] M. A. Abidi and C. Gonzales, Data Fusion in Robotics and Machine Intelligence, Academic Press, 1992.

[2] S. Murata and T. Hirose, "On-board locating system using real-time image processing for a self-navigating vehicle", IEEE Transactions on Industrial Electronics, vol. 40, no. 1, February 1993.

[3] C. M. Wang, "Location estimation and uncertainty analysis for mobile robots", in Proc. 1988 IEEE Int. Conf. on Robotics and Automation, Philadelphia, PA, pp. 1230-1235.

[4] F. Chenavier and J. L. Crowley, "Position estimation for a mobile robot using vision and odometry", in Proc. 1992 IEEE Int. Conf. on Robotics and Automation, Nice, France.

[5] A. Gelb, Applied Optimal Estimation, Cambridge, MA: MIT Press, 1984.

[6] E. Stella and A. Distante, "Self-location of a mobile robot by estimation of camera parameters", Internal Report, revised for Robotics and Autonomous Systems, Elsevier.