Self-Location of a Mobile Robot with Uncertainty by Cooperation of a Heading Sensor and a CCD TV Camera

E. Stella, G. Cicirelli, A. Distante
Istituto Elaborazione Segnali ed Immagini - C.N.R.
Via Amendola, 166/5 - 70126 Bari (Italy)
E-mail:
[email protected]
Abstract

Goal-oriented navigation of a mobile robot by landmark-based techniques is a straightforward and suitable approach. Most methods give an estimate of the position and orientation of the vehicle but are often unable to provide a good estimate of the uncertainty in the measurement. That information is useful in applications where multisensor fusion is required. Our method determines the vehicle location and the associated uncertainty by a visual landmark-based method, when the vehicle orientation is obtained from a heading sensor. The approach is straightforward and suitable for real-time performance on general-purpose hardware. Experimental results are provided by the implementation on our autonomous mobile vehicle SAURO.

Keywords: Camera Localization, Navigation, Autonomous Mobile Vehicle, Self-Location.

The research is partially supported by Progetto Speciale Robotica (Comitato Scienze Fisiche) of the C.N.R.

1. Introduction

Self-location is the capability of an autonomous mobile vehicle to determine its position in an environment. Visual approaches to self-location are suitable for navigating a mobile robot in indoor contexts. Two classes of methods can be considered:

Absolute techniques: approaches that provide an estimate of the location and orientation of the vehicle using artificial or natural landmarks whose positions in the environment are known a priori [11, 4, 7].

Relative techniques: methods where only the relative motion of the vehicle (rotation and translation) is estimated, so for goal-oriented applications the absolute location of the starting point must be known. These techniques, however, do not require any a-priori knowledge about the environment [6, 1, 2].

Methods in both classes achieve good performance (in terms of execution time) and provide an acceptable estimate of the vehicle location and orientation. Most of them, however, have a fundamental drawback: they do not provide an estimate of the uncertainty in the vehicle location and orientation. In recent years a common tendency of robotics researchers has been to analyze mobile-vehicle problems within a multisensor fusion framework, in which the estimation of the uncertainty of the sensor measurements plays an important role. In this paper we propose a straightforward approach to self-location of a mobile robot by localizing the center of projection (CP) of the camera when the orientation of its optical axis is given. Using a heading sensor (we used a compass) the orientation of the vehicle can be recovered, so three landmarks are enough to estimate the location of the camera and the associated uncertainty. The experimental setup consists of three beacons (infrared LEDs) used as landmarks placed at known positions in the environment. Exploiting the excellent angular resolution of CCD cameras, the visual angles are evaluated and, as described in sect. 2, the location of the CP and the orientation of the optical axis are recovered. The visual angle of a landmark is the angle formed by the ray from the CP to the landmark and the optical axis (see fig.1). The technique used to estimate the visual angle is similar to [3]. We have used artificial beacons in order to speed up the search for the landmarks in the image, but natural features could be considered as well.

The technique is described in section 2, the estimation of the uncertainty is explained in sect. 3, the experimental setup is described in sect. 4 and, finally, the results on the self-location of a mobile robot are shown.
Figure 1. 3D camera geometry
2. CP self-location technique
Let $P_1(X_1, Y_1, Z_1)$ and $P_2(X_2, Y_2, Z_2)$ be two landmarks in the world reference system $O_w$ (fig.1), and let $V(X_v, Y_v, Z_v)$ be the unknown coordinates of the center of projection (CP) of the camera. In the context of autonomous robot navigation only the location of the CP in the $X_w$-$Z_w$ plane is relevant, so the localization problem reduces to a two-dimensional one. Let $L_1(X_1, Z_1)$, $L_2(X_2, Z_2)$ and $CP(X_{cp}, Z_{cp})$ be the projections onto the $X_w$-$Z_w$ plane of $P_1$, $P_2$ and $V$ (fig.2). If the heading angle $\alpha$ is assumed to be known (for example obtained by a heading sensor: gyro, compass, etc.) and the visual angles $\gamma_1$ and $\gamma_2$ can be recovered, then the CP position is uniquely determined as the intersection of the two straight lines $r_1$ and $r_2$ passing through the landmarks $L_1$ and $L_2$, respectively. Consider the equation of a straight line:
$$ z = m x + p \qquad (1) $$
where m, the slope of the straight line, can be estimated as:
$$ m = \tan(\alpha + \gamma_i), \qquad i = 1, 2 \qquad (2) $$
while $p$, the intercept of the straight line, is obtained by imposing that the line passes through the corresponding landmark. Summarizing, $r_1$ can be estimated by solving the system of two equations:
$$ \begin{cases} Z_1 = m X_1 + p \\ m = \tan(\alpha + \gamma_1) \end{cases} \qquad (3) $$

Analogously, the straight line $r_2$ can be estimated, and the CP location is then obtained by solving the system formed by the equations of the two straight lines. The visual angles $\gamma_1$ and $\gamma_2$ can be estimated exploiting the excellent angular resolution of CCD cameras. Referring to fig.2:
Figure 2. 2D camera geometry
$$ \gamma_1 = \arctan\!\left(\frac{u_1}{f}\right), \qquad \gamma_2 = \arctan\!\left(\frac{u_2}{f}\right) \qquad (4) $$
where $f$ is the focal length of the camera, and $u_1$ and $u_2$ are the distances along the x-axis (on the image plane) of the projections of the landmarks from the principal point (the intersection of the optical axis with the image plane). Since $u_1$ and $u_2$ in (4) are pixel coordinates referred to the CCD chip (not to the frame grabber), their estimation requires the scale factor. In fact, as shown in [10]:
$$ u_k = \left(p_{kx} - C_x\right)\,\frac{d_x\,N_{cx}}{s_x\,N_{fx}} \qquad (5) $$
where $p_{kx}$ is the pixel x-coordinate referred to the frame grabber, $C_x$ is the principal point x-coordinate, $s_x$ is the scale factor accounting for the different sampling frequencies of the TV camera and of the frame grabber ADC, $N_{cx}$ is the number of CCD elements in the x-direction, $N_{fx}$ is the number of pixels per image line in the frame grabber, and $d_x$ is the center-to-center distance of two adjacent CCD elements in the x-direction. While $d_x$, $N_{cx}$ and $N_{fx}$ are parameters provided by the TV camera and frame grabber manufacturers, the principal point, focal length, scale factor and lens distortion are camera intrinsic parameters that can be recovered only by camera calibration [10, 8, 9, 5]. In our experiments, we used the technique described by Tsai in [10] to perform camera calibration.
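To make equations (1)-(5) concrete, the following Python sketch chains them together: it converts a frame-grabber pixel coordinate into a metric CCD coordinate (5), derives the visual angle (4), and intersects the two landmark rays (1)-(3) to recover the CP. All numeric parameter values are hypothetical placeholders, not the calibration of the SAURO setup, and the sign conventions of the angles are an assumption.

```python
import math

# Hypothetical intrinsic parameters; in the paper these come from Tsai's
# calibration [10]. The values below are placeholders for illustration only.
f   = 6.0      # focal length [mm]
Cx  = 372.0    # principal point x-coordinate [frame-grabber pixels]
sx  = 1.0      # scale factor (camera vs. frame-grabber sampling frequency)
Ncx = 756.0    # number of CCD sensor elements along x
Nfx = 744.0    # number of pixels per line sampled by the frame grabber
dx  = 0.0086   # center-to-center spacing of adjacent CCD cells along x [mm]

def visual_angle(p_kx):
    """Eqs. (5) and (4): frame-grabber x-coordinate -> visual angle gamma [rad]."""
    u_k = (p_kx - Cx) * dx * Ncx / (sx * Nfx)   # metric coordinate on the CCD chip
    return math.atan2(u_k, f)

def cp_location(L1, L2, p1x, p2x, alpha):
    """Intersect the rays r1, r2 through landmarks L1, L2 (eqs. (1)-(3)).

    L1, L2   : (X, Z) landmark coordinates in the world frame [mm]
    p1x, p2x : image x-coordinates of the two landmark projections [pixels]
    alpha    : heading angle of the optical axis from the compass [rad]
    """
    g1, g2 = visual_angle(p1x), visual_angle(p2x)
    m1, m2 = math.tan(alpha + g1), math.tan(alpha + g2)  # slopes, eq. (2)
    q1 = L1[1] - m1 * L1[0]                              # intercepts, eq. (3)
    q2 = L2[1] - m2 * L2[0]
    if abs(m1 - m2) < 1e-9:
        raise ValueError("rays are (nearly) parallel: CP is not observable")
    x_cp = (q2 - q1) / (m1 - m2)                         # intersection of r1 and r2
    z_cp = m1 * x_cp + q1
    return x_cp, z_cp

# Example call with illustrative numbers (landmarks in mm, pixels, heading in rad):
# print(cp_location((-2000.0, 4000.0), (1500.0, 4200.0), 250.0, 480.0, math.radians(5.0)))
```

Note that the intersection is ill-conditioned when the two rays are nearly parallel, which is consistent with the growth of the error observed for distant landmarks in sect. 4.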
3. Uncertainty estimation

In order to estimate the uncertainty we use some geometric considerations. Three landmarks are used and, for each pair of them, an estimate of the location is obtained, so that globally three estimates are recovered.
Figure 3. Uncertainty evaluation geometry
In fig.3 these three estimates are labelled $A(X_A, Z_A)$, $B(X_B, Z_B)$ and $C(X_C, Z_C)$. Considering the triangle $ABC$ and $X_{min} = \min(X_A, X_B, X_C)$, $X_{max} = \max(X_A, X_B, X_C)$, $Z_{min} = \min(Z_A, Z_B, Z_C)$, $Z_{max} = \max(Z_A, Z_B, Z_C)$, the rectangle $DEFG$ enclosing $ABC$ is determined. The rectangle has its sides parallel to the axes of the world reference system and its vertices are $D(X_{min}, Z_{min})$, $E(X_{max}, Z_{min})$, $F(X_{max}, Z_{max})$ and $G(X_{min}, Z_{max})$. The uncertainty is represented by the ellipse circumscribed to the rectangle $DEFG$, as drawn in fig.3. For multisensor fusion in a framework such as the Extended Kalman Filter, independent standard deviations of the TV camera location in the X and Z directions may be preferred. We therefore propose to take the center of the ellipse as the location estimate and the axis lengths as the standard deviations in the X and Z directions, respectively. The uncertainty in the TV camera orientation is not directly connected to the visual location method but depends exclusively on the uncertainty of the heading sensor; it can be estimated on a statistical basis using repeated readings of the heading sensor at known angles.
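A minimal Python sketch of this construction, assuming the three pairwise estimates A, B and C have already been computed (e.g. with a routine such as cp_location in the previous sketch). The sqrt(2) scaling of the semi-axes is one possible choice for an axis-aligned ellipse passing through the rectangle vertices, as drawn in fig.3.

```python
import math

def uncertainty_ellipse(estimates):
    """Geometric uncertainty from the three pairwise CP estimates (fig.3).

    estimates : list of three (X, Z) tuples, the intersections A, B and C.
    Returns (center, (sigma_x, sigma_z)), where the sigmas are the ellipse
    semi-axes used as per-axis standard deviations.
    """
    xs = [e[0] for e in estimates]
    zs = [e[1] for e in estimates]
    x_min, x_max = min(xs), max(xs)          # rectangle DEFG bounding triangle ABC
    z_min, z_max = min(zs), max(zs)
    center = ((x_min + x_max) / 2.0, (z_min + z_max) / 2.0)
    # Semi-axes of the axis-aligned ellipse circumscribed to DEFG. Assumption:
    # they are the rectangle half-sides scaled by sqrt(2), so that the ellipse
    # passes through the rectangle vertices D, E, F, G.
    sigma_x = (x_max - x_min) / 2.0 * math.sqrt(2.0)
    sigma_z = (z_max - z_min) / 2.0 * math.sqrt(2.0)
    return center, (sigma_x, sigma_z)

# Example with illustrative estimates [mm]:
# print(uncertainty_ellipse([(1031.3, -3133.6), (1100.2, -3250.1), (980.7, -3190.4)]))
```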
Figure 4. View of environment and position of landmarks
4. Experimental results

The method for self-location has been tested on our mobile robot SAURO, shown in fig.5. SAURO is based on a TRC LABMATE mobile base controlled by an on-board VME-bus based computer. The imaging system is a VME-board IPP ELTEC frame grabber. The processing is performed on a Motorola 68040 board.
Figure 5. Our mobile vehicle SAURO
The TV camera used is an ADIMEC MX5 mounted on an RS232-controlled pan-tilt motorized head. The acquired images are 744 × 574 pixels. The heading sensor used is a KVH C100 fluxgate compass connected via RS232 to the Motorola 68040 board. The C100 compass provides on-board firmware that estimates correction coefficients to compensate the sensor readings for the magnetic influence of the metallic components of the mobile vehicle. The heading angle $\alpha$ of the TV camera (the orientation of the optical axis, referring to fig.1) is thus recovered, using the compass, as the sum of two angles:

$$ \alpha = \beta + \omega \qquad (6) $$

where $\beta$ is the angle between the Z-axis of the world reference system and the magnetic NORTH direction detected by the compass, while $\omega$ is the current pan angle of the pan-tilt head.

Figure 6. a) Experimental results of self-location on a path. On the path the positions where self-location is applied are shown and the uncertainty is represented by ellipses.

In order to verify the reliability of self-location in a navigation context, SAURO was asked to perform a path to a goal position. The path consists of the segments shown in fig.6 (Real locations). The navigation strategy is as follows:
1. The vehicle starts from an unknown position with an unknown orientation.
2. Using the pan-tilt head the landmarks are searched for. Then, by self-location the mobile vehicle estimates its position and orientation and relative uncertainty in the world reference system.
3. The rotation and translation needed to reach the next stop position are evaluated, and then the vehicle moves on.
Landmark distance   Real location (x, z)     Estimated location (x, z)   Uncertainty (x, z)
9163                -1821.94, -1407.02       -2276.45, -1042.86          575.53, 534.89
8200                 -980.62, -3237.75       -1480.38, -3474.57          792.28, 318.33
6515                  820.69, -1673.60         451.32, -1241.69          602.09, 550.34
5800                 1488.13, -2000.12        1145.80, -1645.14          578.90, 537.85
5736                 1493.37, -3356.80        1031.33, -3133.62          477.56, 380.31
4244                 3051.68, -2097.77        2809.92, -1747.35          363.59, 443.90
3249                 4049.47, -3665.40        3825.59, -3799.81          314.08, 259.31
3236                 4581.36, -4811.84        4487.82, -4978.70          395.61, 347.06

Table 1. Path performed by SAURO. The focal length of the camera was 6 mm. All data are in mm.
The described procedure is repeated until the vehicle reaches the goal. The positions estimated by self-location are compared with the real positions in fig.6, while the same data are tabulated in tab.1. The robot location was verified using a NIKON D-50 TOTAL STATION (theodolite), which can estimate the coordinates of a point in space with an error of 5 mm. The D-50 was also used to determine the 3D locations of the landmarks in the world reference system. Fig.6 shows that the error in self-location depends on the distance from the landmarks: the nearer the landmarks, the smaller the error in the robot location. The axes of the ellipses represent the standard deviations of the error made in determining the camera location by the visual self-location method. Fig.6 also shows that the error grows with both the distance from the landmarks and the view angle of the camera. In fact, the distance, like the view angle, modifies the appearance of the landmarks on the image plane of the camera, increasing the unreliability of the visual angle estimation. The self-location estimation time is about 40 ms, but the global time depends on the time taken by the pan-tilt head to move so that the camera can see all the landmarks. The speed of the head is 300 deg/sec. In order to reduce the global time, the head starts to move while the vehicle moves from one path position to the next. This is possible because the path positions are known a priori. Furthermore, no specific configuration of the landmarks is required for self-location, so the LEDs are placed in the environment to maximize their visibility. However, in a real context, in order to reduce the self-location error or to handle unforeseen landmark occlusions, we have chosen to use several setups of landmarks. Since the path is known, at each position the vehicle can choose the setup of landmarks that will produce the lowest uncertainty in the camera localization.
5. Conclusions

This paper describes a new method for camera location suitable for mobile robot navigation in indoor environments, running on a 68040 computer. Very promising results have been obtained from tests on our SAURO mobile robot. The next step in our research is to substitute natural landmarks for the LEDs and to develop a mobile robot capable of operating as a transportation system in any indoor environment.
References

[1] A. Kosaka, A. Kak. Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties. CVGIP: Image Understanding, 56(3), November 1992.
[2] D. J. Kriegman, E. Triendl, T. O. Binford. Stereo vision and navigation in buildings for mobile robots. IEEE Trans. on Robotics and Automation, 5(6):792-803, 1989.
[3] E. Krotkov. Mobile robot localization using a single image. Proc. of IEEE Conf. on Robotics and Automation, 1989.
[4] E. Stella, A. Distante. Self-location of a mobile robot by estimation of camera parameters. Robotics and Autonomous Systems - Elsevier, 15:179-187, 1995.
[5] P. A. Beardsley et al. The correction of radial distortion in images. Technical Report 1896/91, Department of Engineering Science, Univ. of Oxford (UK).
[6] H. P. Moravec. The Stanford Cart and the CMU Rover. Proceedings of the IEEE, pages 872-884, July 1983.
[7] J. H. Kim, H. S. Cho. Experimental investigation for a mobile robot's position by linear scanning of a landmark. Robotics and Autonomous Systems - Elsevier, 13(1), June 1994.
[8] J. Weng, P. Cohen, M. Herniou. Camera calibration with distortion models and accuracy evaluation. IEEE PAMI, 14(10), 1992.
[9] R. K. Lenz, R. Y. Tsai. Techniques for calibration of scale factor and image center for high accuracy 3D vision metrology. Proc. IEEE Int. Conf. Robotics Automat., March 1987.
[10] R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robotics Automat., RA-3(4), 1987.
[11] T. D'Orazio, A. Distante, G. Attolico, L. Caponetti, E. Stella. A vision algorithm for mobile vehicle navigation. SPIE Proceedings on Advances in Intelligent Robotic Systems, November 1991.