Proceedings of the 2002 IEEE International Conference on Robotics & Automation Washington, DC • May 2002

A Decoupled Image Space Approach to Visual Servo Control of a Robotic Manipulator

Robert Mahony
Dep. of Eng., Australian Nat. Uni., ACT, 0200 Australia.
email: [email protected]

Tarek Hamel
Cemif-SC FRE-CNRS 2494, 40 rue du Pelvoux, 91020 Evry, France.
email: [email protected]

Francois Chaumette
Irisa/Inria Rennes, Campus Universitaire de Beaulieu, 35042 Rennes, France.
email: [email protected]

Abstract

An image-based visual servo control is presented for a robotic manipulator. The proposed control design addresses visual servo of 'eye-in-hand' type systems. Using a novel representation of the visual error, based on a spherical representation of target centroid information along with a measure of the rotation between the camera and the target, the control of position and orientation is decoupled. A non-linear gain introduced into the orientation feedback kinematics prevents the target image from leaving the visual field. Semi-global convergence of the closed loop system is proved.

1 Introduction

Visual servo algorithms have been extensively developed in the robotics field over the last ten years [7, 19]. Most visual servo control has been developed for serial-link robotic manipulators with the camera typically mounted on the end-effector [12]. Visual servo systems may be divided into two main classes [15]: Position-based visual servo (PBVS) involves reconstruction of the target pose with respect to the robot and results in a Cartesian motion planning problem. Problems with this approach include the necessity of an accurate 3D model of the target, sensitivity to camera calibration errors, poor robustness of pose estimation and the tendency for image features to leave the camera field of view during the task. Image-based visual servo (IBVS) treats the problem as one of controlling features in the image plane, such that moving features to a goal configuration implicitly results in the task being accomplished [7]. Feature errors are mapped to actuator inputs via the inverse of an image Jacobian matrix. Features can be derived from image features such as points, lines and circles. Issues associated with optimal selection of features and partitioned control, where some task degrees of freedom are controlled visually and others by position or force [13, 2], have been considered in depth. IBVS avoids many of the robustness and calibration problems associated with PBVS; however, it has its own problems [3]. Foremost in the classical approach is the need to determine depth information for each feature point in the visual data. Various approaches have been reported, including estimation via partial pose estimation [15], adaptive control [17] and estimation of the image Jacobian using quasi-Newton techniques [11, 18]. More recently, there has been considerable interest in hybrid control methods whereby translational and rotational control are treated separately [15, 6]. Hybrid methods, however, share with the PBVS approach a difficulty in keeping all image features within the camera's field of view. Contemporary work [16, 4] aims to address these issues.

In this paper we propose an image-based visual servo control for a robotic manipulator. The model considered is that of an 'eye-in-hand' type configuration, where the camera is attached to the end effector of the robotic arm. The approach taken is based on recent work by the authors [9] in which an image based error is introduced for which the dynamics have certain passivity-like properties. The visual error representation used is based on a spherical representation of target centroid information. Using this representation it is possible to express the position and orientation kinematics of the visual error as a partially decoupled system. An exponentially stable feedback law is proposed for stabilization of the position. The feedback law for the orientation kinematics is modified via a simple non-linear gain to ensure that the target image never leaves the field of view of the camera.

0-7803-7272-7/02/$17.00 © 2002 IEEE

2 Image Based Errors

In this section an image based error measurement for a visual servo task is introduced. Let I = {E_x, E_y, E_z} denote a right-hand inertial frame. Let the vector ξ = (x, y, z) denote the position of the focal centre of the camera in the inertial frame. Let A = {E_1^a, E_2^a, E_3^a} be a (right-hand) body fixed frame for the camera. The orientation of the camera is given by the rotation R : A → I. Let P represent a stationary point target visible to the camera, expressed in the camera frame. The image point observed by the camera is denoted p and is obtained by rescaling onto the image surface S of the camera (cf. Figure 1)¹

    p = (1/r(P)) P.   (1)

¹The notation |x| represents the norm of any vector x ∈ R^n.

Following the approach introduced in [10] we consider a camera with a spherical image plane. Thus

    r(P) = |P|.   (2)

Let V ∈ A denote the linear velocity and Ω ∈ A denote the angular velocity of the camera, both expressed in the camera frame. The dynamics of an image point for a spherical camera of image surface radius unity are (see [9, 8])

    ṗ = -Ω × p - (1/r) π_p V,   (3)

where π_p = (I_3 - p p^T) is the projection π_p : R^3 → T_p S^2, the tangent space of the sphere S^2 at the point p ∈ S^2.

Figure 1: Image dynamics for spherical camera image geometry.

For a given target the centroid is defined to be

    q := Σ_{i=1}^n p_i,   (4)

where the target consists of n points {P_i}. Using centroid information is an old technique in visual servo control [1, 14, 21]. Among the advantages of using the target centroid as a visual feature one has that it is not necessary to match observed image points directly to desired features and that the calculation of an image centroid is a highly robust procedure. These properties are expected to ensure strong practical robustness of the proposed control design. Note that the centroid defined above contains more information than a classical definition. This may be seen from the fact that the norm of q is related to the average depth of the target points.

Consider the case of a target comprising a finite number of point targets. Recalling Eq. 2 it may be verified that

    q̇ = -Ω × q - Q V,  where  Q = Σ_{i=1}^n (1/r(P_i)) π_{p_i}.   (5)

The visual servo task considered is that of positioning a camera relative to a stationary (in the inertial frame) target. In addition to visual information, inertial information is explicitly used in the error formulation. In particular, it is assumed that the orientation of the camera with respect to the target is known a priori in the inertial frame. For example, one may wish to position the camera above the target or to the north of the target. Note that the centroid of an image contains no information relative to the orientation that can be used to position the camera. If purely centroid information is used then additional image or inertial information must be incorporated into the error to fully specify the camera orientation.

Formally, let b ∈ I denote the desired inertial direction for the visual error. The norm of b encodes the effective depth information for the desired limit point. Define

    b* := R^T b ∈ A

to be the desired target vector expressed in the camera fixed frame. Since b* ∈ A it inherits dynamics from the motion of the camera

    ḃ* = -Ω × b*.

The image based error considered is the difference between the measured centroid and the target vector expressed in the camera frame

    δ := q - b*.   (6)

The image error kinematics are

    δ̇ = -Ω × δ - Q V.   (7)

To regulate the full pose of the camera, it is necessary to consider an additional error criterion for the orientation. Note that any rotation of the camera that alters the observed image centroid may be controlled using centroid information. However, the image centroid is invariant under rotation around the axis of the centroid. Thus, the best that can be achieved using the centroid feature is to orient the camera such that the image centroid lies in a desired direction. To control the remaining degrees of freedom one must use additional information, either from the image or a prior inertial direction linearly independent from b (see Section 3.1).
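The spherical projection, centroid, and the matrix Q above are straightforward to compute numerically. The following Python sketch (function names and the example target geometry are our own illustrative choices, not from the paper) implements Eqs. 1-5 for a finite point target:

```python
import numpy as np

def project_sphere(P):
    """Spherical projection p = P / r(P), with r(P) = |P| (Eqs. 1-2)."""
    return P / np.linalg.norm(P)

def centroid(points):
    """Unnormalized image centroid q = sum_i p_i (Eq. 4)."""
    return sum(project_sphere(P) for P in points)

def interaction_matrix(points):
    """Q = sum_i (1/r(P_i)) pi_{p_i}, with pi_p = I_3 - p p^T (Eq. 5)."""
    Q = np.zeros((3, 3))
    for P in points:
        r = np.linalg.norm(P)
        p = P / r
        Q += (np.eye(3) - np.outer(p, p)) / r
    return Q

# Example: four target points roughly 2 m in front of the camera.
pts = [np.array([x, y, 2.0]) for x in (-0.1, 0.1) for y in (-0.1, 0.1)]
q = centroid(pts)
Q = interaction_matrix(pts)
```

For any target whose observed directions p_i are not all parallel, the matrix Q built this way is symmetric positive definite, which is exactly the property exploited by the control design of Section 3.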

In general it is desired that the image centroid is close to the focal axis of the camera. This is particularly important if there is a risk that the image may leave the active field of view of the camera during the servo manoeuvre. For the sake of simplicity we choose to stabilise the orientation in order that the image centroid converges to the focal axis of the camera. Choose the camera frame A such that the focal axis of the camera is co-linear with the third coordinate axis. Let {e_1, e_2, e_3} denote the unit vectors of the coordinate frame A. The visual error used for the camera orientation is the angle θ between the centroid and the focal axis

    cos θ = q^T e_3 / |q| = q_0^T e_3  and  sin θ = |q_0 × e_3|,   (8)

where the normalized centroid is defined to be q_0 := q/|q|. In practice, it is most convenient to work directly with the cosine of θ. Define

    α := cos θ = q_0^T e_3.

Thus, the goal of the servo algorithm is to drive δ → 0 asymptotically and stabilize α → 1.

3 Kinematic Control Design

In this section a Lyapunov control design is given for the kinematic control of the visual error based on the visual errors introduced in Section 2.

The kinematics of an image based visual servo system are the first order dynamics of the visual features δ and α. The derivative of α is

    α̇ = (d/dt) cos θ = e_3^T (q̇/|q| - q q^T q̇/|q|^3) = -Ω^T (q_0 × e_3) - (1/|q|) e_3^T π_{q_0} Q V.   (9)

Note that the angular velocity Ω is three dimensional; however, only the component in the direction q_0 × e_3, which rotates the focal axis in the direction of the observed centroid, contributes to the first order dynamics of α. Thus, the kinematics considered are given by Eqn's 7 and 9. The inputs to these equations are V and Ω.

Define a storage function S

    S = (1/2) |δ|^2.   (10)

Taking the time derivative of S and substituting for Eq. 7 yields

    Ṡ = -δ^T Q V.   (11)

Note that Eq. 11 is independent of the angular velocity Ω.

The matrix Q is not exactly known; however, it is known to be positive definite. Thus, a choice V = δ is sufficient to stabilise S. The kinematic control chosen for Eq. 11 is

    V = k_δ δ.   (12)

Lemma 3.1 Consider the system defined by Eq. 7 and let k_δ > 0 be a positive constant. Assume that the image remains in the camera field of view for all time. Then, the closed loop system Eq. 7 along with control Eq. 12 exponentially stabilises the visual error δ.

Proof: Recalling the derivative of S and substituting the control input V by its expression yields

    Ṡ = -k_δ δ^T Q δ.

Since Q is a positive definite matrix, classical Lyapunov theory guarantees that δ converges exponentially to zero.

To control the kinematics of the visual feature α the storage function considered is

    T := (1/2)(1 - α)^2 = (1/2)(1 - cos θ)^2.   (13)

Following a classical Lyapunov control design the natural choice of Ω is

    Ω = -k_θ (q_0 × e_3).   (14)

Thus,

    Ṫ = -(1 - α) α̇ = -k_θ (1 - α)(1 - α^2) + (1 - α)(1/|q|) e_3^T π_{q_0} Q V.   (15)

Lemma 3.2 Consider the rotational dynamics defined by Eq. 9 with the control law Eq. 14. Let V be an exponentially stable signal

    |V| ≤ B e^{-λt}   (16)

for positive constants B, λ > 0. Assume that |q| > ε > 0 is bounded away from zero and that ||Q|| < Λ < ∞ is bounded. Then α → 1 exponentially. As a consequence θ → 0 exponentially.

An unfortunate possibility with the above control design is that the target image may leave the field of view of the camera during the evolution of the closed loop system. To avoid such a situation it is necessary to modify the orientation control design. In particular, if the closed loop system evolves such that the image is tending to leave the field of view, high gain dynamics are introduced in the orientation kinematics to keep the target in the image plane.
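The decoupled closed loop of Eqs. 7 and 9 under the controls of Eqs. 12 and 14 can be checked numerically. The Python sketch below is illustrative only: it treats Q as a constant positive definite matrix (in reality Q varies with the target geometry), and the gains and initial conditions are arbitrary choices of ours, not values from the paper.

```python
import numpy as np

def simulate(k_delta=1.0, k_theta=2.0, dt=1e-3, steps=8000):
    """Euler-integrate the closed-loop error and centroid kinematics
    (Eqs. 5 and 7) under V = k_delta * delta (Eq. 12) and
    Omega = -k_theta * (q0 x e3) (Eq. 14)."""
    e3 = np.array([0.0, 0.0, 1.0])
    Q = np.diag([0.8, 1.0, 1.2])        # stand-in constant positive definite Q
    delta = np.array([0.5, -0.3, 0.4])  # initial visual error
    q = np.array([0.2, 0.4, 1.5])       # initial (unnormalized) centroid
    for _ in range(steps):
        V = k_delta * delta                       # translational control, Eq. 12
        q0 = q / np.linalg.norm(q)
        Omega = -k_theta * np.cross(q0, e3)       # orientation control, Eq. 14
        delta = delta + dt * (-np.cross(Omega, delta) - Q @ V)   # Eq. 7
        q = q + dt * (-np.cross(Omega, q) - Q @ V)               # Eq. 5
    alpha = float((q / np.linalg.norm(q)) @ e3)
    return delta, alpha
```

Running `simulate()` shows both halves of the design acting together: δ decays exponentially under the translational control while Ω rotates the centroid direction q_0 onto the focal axis e_3, driving α toward 1, mirroring Lemmas 3.1 and 3.2.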

For a given initial condition it is possible to choose a gain k_θ > 0 sufficiently large such that the exponential stability of the orientation dynamics dominates the camera attitude kinematics and the camera points towards the target centroid regardless of the linear dynamics of the camera. This approach is undesirable as it leads to high gain input over the full dynamics of the system and reduces system robustness as a consequence. In this paper a non-linear gain (linked to a barrier function potential) is used to limit the divergence of the image centroid from the focal axis of the camera.

Let the maximum allowable value for θ be denoted θ_m > 0. The constraint θ < θ_m limits the possible orientation of the camera to a positive cone around the centroid. Set α_m = cos(θ_m). The approach taken is to choose the control to be

    Ω := -(k_θ / (α - α_m)) (q_0 × e_3).   (17)

The barrier term 1/(α - α_m) ensures that the control input for the orientation dynamics dominates any finite perturbation due to the linear velocity V if the orientation approaches the limit. Recalling the definition of the storage function T, one has

    Ṫ = -(k_θ / (α - α_m))(1 - α)(1 - α^2) + (1 - α)(1/|q|) e_3^T π_{q_0} Q V.

Lemma 3.3 Consider the rotational dynamics defined by Eq. 9 with the control law Eq. 17. Let V be an exponentially stable signal

    |V| ≤ B e^{-λt}

for positive constants B, λ > 0. Let θ_m > θ(0) (α(0) > α_m). Assume that |q| > ε > 0 is bounded away from zero and that ||Q|| < Λ < ∞ is bounded. Then α > α_m for all time and α → 1 exponentially.

3.1 Stabilisation of remaining orientation with reference to visual data

In this subsection an auxiliary control is proposed that will regulate the remaining degree of freedom in the attitude of the camera using an additional error criterion.

To control the remaining orientation of a camera via an error in image space it is necessary to consider an error criterion that depends on another vector direction fixed in the camera frame. A key observation for the proposed control design is that such an error is chosen and minimized after the regulation of the visual errors δ and θ respectively. Thus, the best that can be achieved is to orient the camera within the set δ, θ = 0 and the auxiliary error must be chosen with this in mind. The control objective considered is to align e_1 as closely as possible with a new visual 'feature' vector computed from image measurements

    q_a = Σ_{i=1}^n a_i p_i,   (18)

where the a_i are a set of real constants. It is not required that a_i > 0, and differences between observed points may be used to generate the visual feature vector q_a. Define σ to be the error between the visual feature direction q̄_a (q̄_a = q_a/|q_a|) and the camera frame direction e_1:

    σ = q̄_a - e_1.   (19)

Taking the derivative of σ yields

    σ̇ = -Ω × q̄_a - (1/|q_a|) π_{q̄_a} Q_a V,  where  Q_a = Σ_{i=1}^n (a_i / r(P_i)) π_{p_i}.
T

Suggest Documents
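A scalar sketch makes the effect of the barrier term concrete. Assuming the barrier-modified gain takes the form k_θ/(α - α_m) discussed in the text, the dominant α kinematics reduce to α̇ = k(α)(1 - α²) - d(t), where d(t) stands in for the linear-velocity coupling term of Eq. 9 and is bounded by B e^{-λt}. The Python sketch below (all numerical values are illustrative choices of ours) integrates this scalar model under a worst-case disturbance:

```python
import math

def barrier_gain(alpha, alpha_m, k_theta=1.0):
    """Nonlinear gain with barrier term 1/(alpha - alpha_m); grows without
    bound as alpha approaches the field-of-view limit alpha_m (cf. Eq. 17)."""
    return k_theta / (alpha - alpha_m)

def simulate_alpha(alpha0=0.9, alpha_m=0.5, k_theta=1.0, B=2.0, lam=1.0,
                   dt=1e-4, steps=80000):
    """Scalar sketch of the alpha kinematics under the barrier control:
    alpha_dot = k(alpha) * (1 - alpha**2) - d(t), with d(t) a worst-case
    exponentially decaying perturbation |d(t)| <= B * exp(-lam * t)."""
    alpha, t = alpha0, 0.0
    min_alpha = alpha0
    for _ in range(steps):
        d = B * math.exp(-lam * t)   # worst-case disturbance pushing alpha down
        k = barrier_gain(alpha, alpha_m, k_theta)
        alpha += dt * (k * (1.0 - alpha**2) - d)
        min_alpha = min(min_alpha, alpha)
        t += dt
    return alpha, min_alpha
```

Although the disturbance initially overpowers the nominal gain and drives α down, the barrier term stiffens the control before α can reach α_m; once the disturbance decays, α converges to 1, mirroring the claim of Lemma 3.3.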