Vision-Based Tracking Control for Mobile Robots*

Ricardo Carelli, Carlos M. Soria and Beatriz Morales
Instituto de Automática, Universidad Nacional de San Juan
Av. San Martín Oeste, 5400 San Juan, Argentina
{rcarelli, csoria, bmorales}@inaut.unsj.edu.ar

* This work is partially supported by ANPCyT and CONICET, Argentina.
Abstract— This work presents a control strategy that allows a follower robot to track a target vehicle moving along an unknown trajectory with unknown velocity. It uses only artificial vision to establish both the robot's position and orientation relative to the target, so as to maintain a specified formation at a given distance. The control system is proved to be asymptotically stable at the equilibrium point, which corresponds to the navigation objective. Experimental results with two robots, a leader and a follower, are included to show the performance of the proposed vision-based tracking control system.
Index Terms— mobile robots, nonlinear systems, vision systems, tracking.

I. INTRODUCTION
Mobile robots are mechanical devices capable of moving in an environment with a certain degree of autonomy. Environments can be classified as known, when the motion can be planned beforehand, or partially known, when uncertainties call for some form of on-line trajectory planning. Autonomous navigation is associated with the capability of capturing information from the surrounding environment through external sensors, such as vision, distance or proximity sensors. Although distance sensors, such as ultrasonic and laser sensors, are the most commonly used, vision sensors are becoming widely applied because of their ever-growing capability to capture information. In [1], methods are presented to localize the robot and detect obstacles using a visual representation of the trajectory to follow, based on a sequence of images. In [2], image processing is used to detect perspective lines and to guide the robot along a corridor's center line, using a simple control law without a stability proof. In [3], the ceiling perspective lines are employed for robot guidance, though the work also lacks a demonstration of system stability. Other authors have proposed using the concept of optical flow to guide the robot along the corridor's center line. In [4], two video cameras are used, mounted on the robot's sides. The controller computes the optical flow and compares the apparent velocities of the image patterns from both cameras. In [5] and [6], a camera is used to guide the robot along the corridor's center line, or parallel to a wall. A control algorithm combining vision-based perspective lines and optical flow is presented in [7], including the stability proof of the control
system. In [8], perspective lines are used to determine the absolute orientation of the robot within the corridor. In recent years, there has been an increasing use of omnidirectional cameras, which capture images from all directions, for navigation and obstacle avoidance. The work in [9] describes the use of a catadioptric panoramic camera with a spherical mirror for navigation tasks. A review of the research done so far is given in [10]. In [11], the authors present a framework for formation control of nonholonomic robots using omnidirectional cameras, based on the switching of simple controllers. The work in [12] addresses the problem of tracking a moving person based on an appearance model learned by a neural network from visual information. In contrast with the previously cited references, the present work develops a controller that allows the robot to position itself with respect to a mobile target detected through vision. This allows tracking a target vehicle by means of a virtual link, thus attaining a specified formation. The paper is organized as follows. In Section 2, the model of the mobile robot is presented. Section 3 describes the visual measurement of the target position and orientation. Section 4 presents the development of the proposed nonlinear controller, including the stability proof and considerations on its robustness. Representative experimental results are given in Section 5 and, finally, some conclusions are drawn in Section 6.

II. MOBILE ROBOT MODEL
The unicycle-type mobile robot can be described by the following kinematic equations:

$\dot{x} = v\cos\varphi, \quad \dot{y} = v\sin\varphi, \quad \dot{\varphi} = \omega$   (1)

where $(x, y)$ are the Cartesian coordinates of the robot position, $\varphi$ is the robot heading or orientation angle, and $v$ and $\omega$ are the robot's linear and angular velocities. The nonholonomic constraint for model (1) is

$\dot{y}\cos\varphi - \dot{x}\sin\varphi = 0$   (2)

which states that the velocity vector is tangent to any feasible trajectory of the robot. The reference point of the robot is assumed to be the midpoint between the two driven wheels. Let $v_1$ and $v_2$ denote the linear speeds of the left and right wheels, respectively. The linear and angular velocities of the robot can then be expressed as $v = (v_1 + v_2)/2$ and $\omega = (v_1 - v_2)/L$, where $L$ is the distance between the two driven wheels (Fig. 1).
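As a quick illustration of model (1) and the wheel-speed mapping, the following minimal Python sketch integrates the kinematics with a forward-Euler step. The function names, wheelbase and speeds are illustrative assumptions, not values from the paper.

```python
import math

def unicycle_step(x, y, phi, v, omega, dt):
    """One forward-Euler step of the unicycle kinematics (1)."""
    x += v * math.cos(phi) * dt
    y += v * math.sin(phi) * dt
    phi += omega * dt
    return x, y, phi

def wheels_to_body(v1, v2, L):
    """Wheel speeds to body velocities: v = (v1 + v2)/2, omega = (v1 - v2)/L,
    following the paper's sign convention (v1: left wheel, v2: right wheel)."""
    return (v1 + v2) / 2.0, (v1 - v2) / L

# Example: drive a gentle arc for 1 s at 100 Hz.
x, y, phi = 0.0, 0.0, 0.0
v, omega = wheels_to_body(0.22, 0.18, L=0.3)   # m/s, 0.3 m wheelbase (illustrative)
for _ in range(100):
    x, y, phi = unicycle_step(x, y, phi, v, omega, 0.01)
```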
Fig. 1. Geometric description of the mobile robot.
III. VISUAL MEASUREMENT OF THE TARGET POSTURE
The control aim is that the mobile robot follow a target vehicle moving with unknown motion in the working area. The robot is equipped with a fixed, forward-looking vision camera. This camera captures the image of a pattern mounted on the target vehicle, which features four marks on a square of known side length $E$ [m]. To ease the calculations, and without loss of generality, the height of the horizontal median of this square is made to coincide with the height of the camera's image center. The pattern projected onto the robot's camera image appears with a perspective distortion, as represented in Fig. 2. The positions of the pattern's marks on the image are expressed in pixels as $(x_i, y_i)$, with $i = A, B, C$ and $D$. These variables are considered as the image features.
Fig. 2. Image of the pattern's marks.
From these image features, as measured by the vision system, it is possible to compute the posture of the target vehicle $(x_T, z_T, \varphi_T)$, measured on a coordinate system $(X_C, Z_C)$ associated to the camera. Fig. 3 shows a diagram of the horizontal projection of the vision system, with the posture of the target vehicle in camera coordinates, from which the following expressions are obtained:

$x_T = \dfrac{x_R + x_L}{2}, \quad z_T = \dfrac{z_R + z_L}{2}, \quad \varphi_T = \cos^{-1}\!\left(\dfrac{x_R - x_L}{E}\right)$   (3)

where $(x_L, z_L)$ and $(x_R, z_R)$ are the camera-frame coordinates of the left and right vertical mark pairs of the pattern. By resorting now to the inverse perspective model, expressions are obtained to compute these coordinates as functions of the measurements supplied by the vision system:

$z_L = \dfrac{fE}{y_A - y_C}, \quad z_R = \dfrac{fE}{y_B - y_D}, \quad x_L = \dfrac{z_L\,x_A}{f}, \quad x_R = \dfrac{z_R\,x_B}{f}$   (4)

where $f$ is the focal length of the camera.

Fig. 3. Location of the target with respect to the camera.

Now, using the variables of (3) and (4), and assuming that the camera is mounted at the robot center, the relative posture $(\varphi, \theta, d)$ of the mobile robot with respect to the target vehicle can be calculated. The relative posture is defined as shown in Figs. 3 and 4, and its variables are calculated by the following expressions:

$\varphi = \tan^{-1}\!\left(\dfrac{x_T}{z_T}\right), \quad \theta = \varphi + \varphi_T, \quad d = \sqrt{x_T^2 + z_T^2}$   (5)

Fig. 4. Relative position between the target and the follower robot.
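The measurement chain (3)-(5) is straightforward to prototype. Below is a minimal Python sketch; the mark pairing (A, C as the left vertical pair, B, D as the right one) follows the reconstruction above and should be checked against the actual pattern layout, and all names are illustrative.

```python
import numpy as np

def target_posture(marks, E, f):
    """Posture of the target from the four mark centroids, following (3)-(5).

    marks: dict mapping 'A', 'B', 'C', 'D' to image coordinates (x, y) [pixels],
    with A/C assumed to be the left vertical mark pair and B/D the right one;
    E: pattern side length; f: focal length [pixels]. Units of E set the units of d.
    """
    # Inverse perspective (4): depth of each vertical mark pair from its image
    # height, then lateral position from the pinhole relation x = z * x_img / f.
    zL = f * E / abs(marks['A'][1] - marks['C'][1])
    zR = f * E / abs(marks['B'][1] - marks['D'][1])
    xL = zL * marks['A'][0] / f
    xR = zR * marks['B'][0] / f

    # Target posture in camera coordinates (3).
    xT = (xL + xR) / 2.0
    zT = (zL + zR) / 2.0
    phiT = np.arccos(np.clip((xR - xL) / E, -1.0, 1.0))

    # Relative posture (5).
    phi = np.arctan2(xT, zT)       # bearing of the target in the camera frame
    theta = phi + phiT
    d = np.hypot(xT, zT)
    return phi, theta, d
```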
IV. VISUAL SERVO CONTROL OF THE MOBILE ROBOT

A. Controller

Figure 5 defines the control structure used in this work, where H represents the relationship between the variables defining the relative posture of the follower robot with respect to the target vehicle and the image features.

Fig. 5. Control structure.

The control objective is defined as follows: assuming that the target vehicle moves along an unknown trajectory, with unknown velocity as well, make the follower robot keep a desired distance $d_d$ to the target vehicle while pointing to it (that is, $\varphi_d = 0$), using only visual information (Fig. 4). More specifically, the control objective can be expressed as

$\lim_{t\to\infty} e(t) = \lim_{t\to\infty} (d_d - d) = 0, \quad \lim_{t\to\infty} \tilde{\varphi}(t) = \lim_{t\to\infty} (\varphi_d - \varphi) = 0$   (6)

The evolution of the posture of the follower robot relative to the target vehicle is stated through the time derivatives of the two error variables. The variation of the distance error is given by the difference between the projections of the target vehicle velocity and the follower robot velocity on the line connecting both vehicles, that is,

$\dot{e} = -v_T\cos\theta + v\cos\varphi$   (7)

Likewise, the variation of the angle error $\tilde{\varphi}$ has three terms: the angular velocity of the follower robot, and the rotational effects of the linear velocities of both robots, which can be expressed as

$\dot{\tilde{\varphi}} = \omega - v_T\dfrac{\sin\theta}{d} + v\dfrac{\sin\varphi}{d}$   (8)

The following nonlinear controller is proposed:

$v = \dfrac{1}{\cos\varphi}\left(v_T\cos\theta - f_e(e)\right), \quad \omega = -f_\varphi(\tilde{\varphi}) + v_T\dfrac{\sin\theta}{d} - v\dfrac{\sin\varphi}{d}$   (9)

where $f_e(e), f_\varphi(\tilde{\varphi}) \in \Omega$, with $\Omega$ the set of functions that meet the following definition: $\Omega = \{f : \mathbb{R} \to \mathbb{R} \;/\; f(0) = 0$ and $x f(x) > 0 \;\forall x \neq 0\}$. In particular, the following functions are considered: $f_e(e) = k_e \tanh(\lambda_e e)$ and $f_\varphi(\tilde{\varphi}) = k_\varphi \tanh(\lambda_\varphi \tilde{\varphi})$. These functions prevent the control actions from saturating. The variables $(\theta, \varphi, d)$ used by this controller, as given by (5), are calculated from the image captured by the vision system. By combining (7), (8) and (9), the closed-loop system is obtained:

$\dot{e} = -f_e(e), \quad \dot{\tilde{\varphi}} = -f_\varphi(\tilde{\varphi})$   (10)

B. Stability analysis

Considering the system of (10), with its single equilibrium point at the origin, the Lyapunov candidate function

$V = \dfrac{e^2}{2} + \dfrac{\tilde{\varphi}^2}{2}$   (11)

has a time derivative on the system trajectories given by

$\dot{V} = -e f_e(e) - \tilde{\varphi} f_\varphi(\tilde{\varphi})$   (12)

Since $f_e(e), f_\varphi(\tilde{\varphi}) \in \Omega$, the asymptotic stability of the equilibrium is concluded, that is, $e(t) \to 0$ and $\tilde{\varphi}(t) \to 0$ as $t \to \infty$.
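A compact implementation of the control law (9) with the saturating functions $f(x) = k\tanh(\lambda x)$ might look as follows. This is a sketch under the sign conventions reconstructed above, with distances in mm and speeds in mm/s to match the gains reported in the experiments; all function names are illustrative.

```python
import math

def f_sat(x, k, lam):
    """Saturating function f(x) = k*tanh(lam*x); f(0) = 0 and x*f(x) > 0,
    so f belongs to the set Omega."""
    return k * math.tanh(lam * x)

def tracking_control(phi, theta, d, vT, dd,
                     ke=200.0, lam_e=0.005, kphi=10.0, lam_phi=0.1):
    """Control law (9). Distances in mm and speeds in mm/s to match the
    experimental gains; phi_d = 0, so the heading error is phi_tilde = -phi.
    Valid for |phi| < pi/2 (target visible in front of the camera) and d > 0.
    """
    e = dd - d                      # distance error, eq. (6)
    phi_tilde = -phi                # heading error
    v = (vT * math.cos(theta) - f_sat(e, ke, lam_e)) / math.cos(phi)
    w = (-f_sat(phi_tilde, kphi, lam_phi)
         + vT * math.sin(theta) / d
         - v * math.sin(phi) / d)
    return v, w
```

Note that $v$ enters the expression for $\omega$, so it must be computed first; the tanh shape bounds the feedback contributions by $k_e$ and $k_\varphi$ respectively.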
It should be noted that the controller of (9) requires knowledge of the linear velocity $v_T$ of the target vehicle. This variable has to be estimated from the available visual information. By approximating the derivative of the distance error in (7) by the discrete difference between successive measurements, and considering a 0.1 s sampling period, the target velocity can be estimated as

$\hat{v}_T = \dfrac{(d_k - d_{k-1})/0.1 + v\cos\varphi}{\cos\theta}$   (13)

C. Robustness to the $v_T$ estimation error

Considering that the target velocity is estimated as $\hat{v}_T$, the closed-loop equations (10) become

$\dot{e} = -(v_T - \hat{v}_T)\cos\theta - f_e(e)$   (14)

$\dot{\tilde{\varphi}} = -f_\varphi(\tilde{\varphi}) - (v_T - \hat{v}_T)\dfrac{\sin\theta}{d}$   (15)
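The discrete estimate (13) can be implemented as a simple one-sample differentiator. A minimal sketch, assuming distance measurements $d_k$ arrive every $T = 0.1$ s (class and method names are illustrative):

```python
import math

class TargetSpeedEstimator:
    """One-sample estimate (13) of the target speed from successive distances."""

    def __init__(self, T=0.1):
        self.T = T          # sampling period [s]
        self.d_prev = None

    def update(self, d, v, phi, theta):
        if self.d_prev is None:
            self.d_prev = d
            return 0.0      # no history yet: assume a static target
        d_dot = (d - self.d_prev) / self.T   # finite-difference distance rate
        self.d_prev = d
        # eq. (13): vT_hat = (d_dot + v*cos(phi)) / cos(theta)
        return (d_dot + v * math.cos(phi)) / math.cos(theta)
```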
By taking the Lyapunov candidate function (11), written as $V = e^2/2 + \tilde{\varphi}^2/2 = V_1 + V_2$, and denoting by $\Delta v_T = v_T - \hat{v}_T$ the error between the real target velocity $v_T$ and its estimate $\hat{v}_T$, the time derivative of $V$ is

$\dot{V} = e\dot{e} + \tilde{\varphi}\dot{\tilde{\varphi}} = -e f_e(e) - \tilde{\varphi} f_\varphi(\tilde{\varphi}) - e\,\Delta v_T\cos\theta - \tilde{\varphi}\,\Delta v_T\dfrac{\sin\theta}{d}$   (16)

which splits into

$\dot{V}_1 = -e f_e(e) - e\,\Delta v_T\cos\theta$   (17)

$\dot{V}_2 = -\tilde{\varphi} f_\varphi(\tilde{\varphi}) - \tilde{\varphi}\,\Delta v_T\dfrac{\sin\theta}{d}$   (18)
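As a sanity check on the split of (16) into (17) and (18), the algebra can be verified symbolically; a small sketch using sympy (all symbol names are illustrative):

```python
import sympy as sp

e, phi_t, dvT, th, d = sp.symbols('e phi_tilde Delta_vT theta d')
fe, fphi = sp.Function('f_e'), sp.Function('f_phi')

# Closed-loop error dynamics (14)-(15) with estimation error Delta_vT.
e_dot = -dvT * sp.cos(th) - fe(e)
phi_dot = -fphi(phi_t) - dvT * sp.sin(th) / d

# V = e^2/2 + phi_tilde^2/2  =>  V_dot = e*e_dot + phi_tilde*phi_dot, eq. (16).
V_dot = sp.expand(e * e_dot + phi_t * phi_dot)
V1_dot = -e * fe(e) - e * dvT * sp.cos(th)                     # eq. (17)
V2_dot = -phi_t * fphi(phi_t) - phi_t * dvT * sp.sin(th) / d   # eq. (18)
assert sp.simplify(V_dot - (V1_dot + V2_dot)) == 0
```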
A sufficient condition for (17) to be negative is

$e f_e(e) > |e\,\Delta v_T|$   (19)

For $f_e(e)$ in saturation, this reduces to

$k_e > |\Delta v_T|$   (20)

and, for a small error in the linear range of $f_e(e)$, to $k_e \lambda_e |e| > |\Delta v_T|$, which guarantees that $e(t)$ converges to the ball $B_e$ of radius $\delta_e$:

$e(t) \to B_e$ as $t \to \infty$, with $\delta_e = \dfrac{|\Delta v_T|}{k_e \lambda_e}$   (21)

Now, a sufficient condition for (18) to be negative, for $f_\varphi(\tilde{\varphi})$ in saturation and replacing $d = d_d - \delta_e$, is $k_\varphi > |\Delta v_T|/(d_d - \delta_e)$, and for $f_\varphi(\tilde{\varphi})$ in the linear range:

$k_\varphi \lambda_\varphi |\tilde{\varphi}| > \dfrac{|\Delta v_T|}{d_d - \delta_e}$   (22)

which implies that the heading error $\tilde{\varphi}$ converges to the ball $B_\varphi$:

$\tilde{\varphi}(t) \to B_\varphi$ as $t \to \infty$, with $\delta_\varphi = \dfrac{|\Delta v_T|}{k_\varphi \lambda_\varphi (d_d - \delta_e)} = \dfrac{k_e \lambda_e |\Delta v_T|}{k_\varphi \lambda_\varphi (k_e \lambda_e d_d - |\Delta v_T|)}$   (23)

assuming that the following condition is fulfilled:

$k_e > \dfrac{|\Delta v_T|}{\lambda_e d_d}$   (24)
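To make the bounds concrete, the ball radii (21) and (23) can be evaluated numerically using the gain values reported in the experiments of Section V; the estimation error $\Delta v_T$ below is an assumed, illustrative value.

```python
ke, lam_e = 200.0, 0.005     # distance-loop gains [mm/s, 1/mm]
kphi, lam_phi = 10.0, 0.1    # heading-loop gains
dd = 500.0                   # desired distance [mm]
dvT = 20.0                   # assumed estimation error [mm/s] (illustrative)

assert ke > dvT / (lam_e * dd)                         # condition (24) holds
delta_e = dvT / (ke * lam_e)                           # eq. (21): 20.0 mm
delta_phi = dvT / (kphi * lam_phi * (dd - delta_e))    # eq. (23): ~0.042 rad
print(delta_e, delta_phi)
```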
V. EXPERIMENTS
For the experimental evaluation of the proposed control algorithm, experiments were carried out with two Pioneer 2DX mobile robots (Fig. 6). Each robot has its own control system. The vision system includes a PXC200 frame grabber that captures the images from a SONY EV-D30 camera mounted on the follower robot. These images are transmitted from the follower robot to a Pentium II 400 MHz PC, which is in charge of processing the images and calculating the corresponding control actions. Figure 7(a) shows the image captured by the camera; this image is processed, resulting in the image shown in Fig. 7(b). From this image, the centroids of the four projected pattern marks are calculated and used to compute the variables $(\theta, \varphi, d)$ used by the controller. Finally, the computed control actions are sent by a transmitter to the follower robot. For the experimental evaluation, the following parameter values were used: $k_e = 200$, $\lambda_e = 0.005$, $k_\varphi = 10$ and $\lambda_\varphi = 0.1$. The follower robot has to follow the target robot keeping a desired distance $d_d = 0.50$ m and $\varphi_d = 0$. Figure 8 shows the evolution of the distance between both robots. Figure 9 shows the evolution of the angles $\varphi$ and $\theta$. From these figures, the accomplishment of the control objective can be verified. Fig. 10 depicts the control actions that are calculated and sent to the follower robot. The estimation of the leader robot's velocity is shown in Fig. 11, and the trajectory of the follower robot is depicted in Fig. 12.

Fig. 6. Robots used in the experiment.

Fig. 7. (a) Image captured by the robot's camera. (b) Image processed to find the centroid of each mark.

Fig. 8. Evolution of the distance to the target.
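The processing pipeline described above maps naturally onto a simple 10 Hz loop. The sketch below combines the pieces sketched in Sections III and IV; grab_frame, find_mark_centroids and send_velocities are placeholders for hardware-specific code, and the pattern size E and focal length f are illustrative.

```python
T = 0.1  # sampling period [s], as in eq. (13)
estimator = TargetSpeedEstimator(T)
v_cmd = 0.0

while True:
    frame = grab_frame()                    # placeholder: frame-grabber read
    marks = find_mark_centroids(frame)      # placeholder: thresholding + centroids
    # E in mm and f in pixels so that d comes out in mm.
    phi, theta, d = target_posture(marks, E=200.0, f=800.0)   # illustrative E, f
    vT_hat = estimator.update(d, v_cmd, phi, theta)
    v_cmd, w_cmd = tracking_control(phi, theta, d, vT_hat, dd=500.0)
    send_velocities(v_cmd, w_cmd)           # placeholder: radio link to the robot
```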
Fig. 9. Evolution of angles $\varphi$ and $\theta$.

Fig. 10. Desired control actions.

Fig. 11. Estimated velocity of the target.

Fig. 12. Trajectory followed by the follower robot.

VI. CONCLUSIONS

This work has presented a nonlinear, vision-based controller for the navigation of mobile robots following a target vehicle. The controller has been designed with variable gains that avoid saturation of the control actions. It has been proved that the resulting control system is asymptotically stable. The robustness to the error in the estimation of the target vehicle's velocity has also been analyzed. The experiments have demonstrated that the proposed control system accomplishes the control objective with good performance. Future work will be oriented to developing coordination tasks between robots on the basis of the presented visual control strategy.
ACKNOWLEDGMENT
The authors acknowledge ANPCyT and CONICET for partially supporting this work.
REFERENCES

[1] Y. Matsumoto et al., "Visual navigation using view-sequenced route representation," Proc. IEEE Int. Conf. on Robotics and Automation, Minneapolis, Minnesota, pp. 83-88, 1996.
[2] R. Frizera Vassallo et al., "Visual navigation: combining visual servoing and appearance based methods," SIRS'98, Int. Symp. on Intelligent Robotic Systems, Edinburgh, Scotland, 1998.
[3] Z. Yang and W. Tsai, "Viewing corridors as right parallelepipeds for vision-based vehicle localization," IEEE Trans. on Industrial Electronics, vol. 46, no. 3, 1999.
[4] J. Santos-Victor et al., "Divergent stereo in autonomous navigation: from bees to robots," Int. Journal of Computer Vision, vol. 14, pp. 159-177, 1995.
[5] A. Dev et al., "Navigation of a mobile robot on the temporal development of the optic flow," Proc. IEEE/RSJ/GI Int. Conf. on Intelligent Robots and Systems IROS'97, Grenoble, pp. 558-563, 1997.
[6] R. Carelli et al., "Algoritmos estables para la navegación de robots móviles en pasillos usando flujo óptico" (Stable algorithms for mobile robot navigation in corridors using optical flow), VIII Reunión de Trabajo en Procesamiento de la Información y Control, pp. 79-7 to 86-7, 1999.
[7] R. Carelli, C. Soria, O. Nasisi and E. Freire, "Stable AGV corridor navigation with fused vision-based control signals," Proc. 28th Conf. of the IEEE Industrial Electronics Society, IECON, Sevilla, Spain, Nov. 5-8, 2002.
[8] S. Segvic and S. Ribaric, "Determining the Absolute Orientation in a Corridor using Projective Geometry and Active Vision," IEEE Trans. on Industrial Electronics, vol. 48, no. 3, 2001.
[9] J. Gaspar and J. Santos-Victor, "Visual path following with a catadioptric panoramic camera," SIRS'99, 7th Int. Symp. on Intelligent Robotic Systems, 1999.
[10] T. Boult et al., "Omnidirectional Video Applications," VAST Lab, Lehigh University, Bethlehem, PA, USA.
[11] A. Das, R. Fierro, V. Kumar, J. Ostrowski, J. Spletzer and C. Taylor, "A Framework for Vision Based Formation Control," IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 813-825, 2002.
[12] G. Cielniak, M. Miladinovic, D. Hammarin, L. Göransson, A. Lilienthal and T. Duckett, "Person Identification by Mobile Robots in Indoor Environments," Proc. Omnivis 2003, Fourth IEEE Workshop on Omnidirectional Vision, Madison, Wisconsin, June 21, 2003.