Lecture Notes on Information Theory Vol. 2, No. 3, September 2014
Adaptive Robust Vision-Based Tracking Control of Quadrotor

Hamed Jabbari Asl
Young Researchers and Elite Club, Ilkhchi Branch, Islamic Azad University, Tabriz, Iran
Email: [email protected]
Abstract—In this paper, the Image-Based Visual Servo (IBVS) approach is considered for 3D translational motion control of an underactuated Unmanned Aerial Vehicle (UAV), the quadrotor. Taking into account the low quality of accelerometer data, the objective of this paper is to design an IBVS controller using only the information of rate gyroscopes and a camera. The kinematics and dynamics of the UAV are expressed in terms of visual information, which makes it possible to design dynamic IBVS controllers without the linear velocity information obtained from accelerometers. An adaptive robust controller is proposed to deal with uncertainties in the dynamics of the system related to the motion of the target and to the unknown depth information of the image. Simulation results are presented to validate the designed controllers.

Index Terms—visual servo, quadrotor, UAV, optic flow, adaptive robust control

I. INTRODUCTION

The vision sensor is a reliable and low-cost system that can be fused with IMU data to provide useful translational velocity information, and it can also be used effectively to localize a vehicle with respect to its environment. In the last decade, vision systems have received great attention for aerial vehicles, and many applications have been reported for these robots, including ego-motion estimation [1], pose estimation [2], SLAM [3], automatic landing [4], positioning [5], and obstacle avoidance [6]. Some of these applications need a control strategy based on vision information. Vision-based control of robots mainly includes two major approaches. One approach is Position-Based Visual Servoing (PBVS), where control is performed in Cartesian space, based on 3D information of the workspace reconstructed from 2D image data. The other approach is Image-Based Visual Servoing (IBVS), in which control is based on the dynamics of image features in the image plane. IBVS does not need 3D reconstruction and is therefore computationally simpler than PBVS. The approach is also robust with respect to camera calibration errors; however, it still needs the depth information of the image. Implementing the IBVS approach on dynamic underactuated UAV systems makes the controller design more challenging. Passivity properties of spherical image moments are used in [5], [7], [8] to design full-dynamic IBVS controllers for the cascade dynamics of a quadrotor helicopter. However, the selected image features do not provide satisfactory behavior of the robot along the vertical axis, which was addressed and slightly improved by rescaled spherical image moments in [9]. To overcome this conditioning problem, the author proposed a method in [10] that utilizes perspective image moments, with the control based on their dynamics in an oriented image plane. In all the mentioned IBVS works, it is assumed that the translational velocity of the vehicle is available from the IMU system. In [11], the optic flow of image features is used as a cue for the translational velocity, and the dynamics of the system are expressed in terms of the dynamics of the optic flow measurement. However, the designed full-dynamic IBVS controller suffers from the conditioning problem mentioned above. An observer-based method was presented in [12], where two nonlinear observers estimate the translational velocity and the attitude of the system. That approach uses the backstepping method, which increases the complexity of the design procedure, and it still needs to estimate the attitude of the vehicle from image information, which requires prior geometric knowledge of the observed object and visual tracking of image features. Furthermore, these works only consider a stationary object.

This paper presents a dynamic IBVS controller for controlling the 3D translational motion of a quadrotor helicopter tracking a moving target. The control law makes use of perspective image features reconstructed in a suitably defined virtual image plane. These features were used in earlier work by the author [13] and provide efficient trajectories in both the image and Cartesian spaces. The kinematics and dynamics of the system are presented in terms of the dynamics of the velocity of the image features, which makes it possible to design dynamic IBVS controllers without using linear velocity information. The controller is robust with respect to uncertainties in the dynamics of the system related to the motion of the target and to the unknown depth information of the image. The stability analysis shows the convergence of the errors to a small, adjustable compact set.
II. EQUATIONS OF MOTION OF THE ROBOT

This section describes the kinematics and the dynamics model of the quadrotor helicopter. The models are similar

Manuscript received May 19, 2014; revised September 15, 2014. ©2014 Engineering and Technology Publishing. doi: 10.12720/lnit.2.3.232-237
to those introduced in the literature [14]. Two coordinate frames are considered for describing the equations of motion of the quadrotor, which is equipped with a down-looking camera; they define the motion of the camera. They include an inertial frame $\{O_i, X_i, Y_i, Z_i\}$ and a body-fixed frame $\{O_b, X_b, Y_b, Z_b\}$ attached to the center of mass of the robot. The center of the body frame is located at position $\xi = (x, y, z)$ with respect to the inertial frame, and its attitude is given by the orthogonal rotation matrix $R$, depending on the three Euler angles $\phi$, $\theta$ and $\psi$ denoting, respectively, the roll, the pitch, and the yaw. Considering $V \in \mathbb{R}^3$ and $\Omega = [\Omega_1\ \Omega_2\ \Omega_3]^T$ respectively as the linear and angular velocities of the robot in the body-fixed frame, the kinematics of the quadrotor as a 6-DOF rigid body are

$$\dot{\xi} = RV \tag{1}$$

$$\dot{R} = R\,\mathrm{sk}(\Omega) \tag{2}$$

The notation $\mathrm{sk}(\cdot)$ denotes the skew-symmetric matrix such that for any vector $b \in \mathbb{R}^3$, $\mathrm{sk}(\Omega)\,b = \Omega \times b$, where $\times$ is the vector cross-product. The relation between the time derivatives of the Euler angles and the angular velocities is given by

$$\begin{bmatrix}\dot{\phi}\\ \dot{\theta}\\ \dot{\psi}\end{bmatrix} = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta\\ 0 & \cos\phi & -\sin\phi\\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{bmatrix}\begin{bmatrix}\Omega_1\\ \Omega_2\\ \Omega_3\end{bmatrix} \tag{3}$$

The dynamics of a general 6-DOF rigid body, with mass $m$ and constant symmetric inertia matrix $J \in \mathbb{R}^{3\times3}$ around the center of mass and with respect to the body frame, are as follows:

$$\dot{V} = -\mathrm{sk}(\Omega)V + F \tag{4}$$

$$J\dot{\Omega} = -\mathrm{sk}(\Omega)J\Omega + \Gamma \tag{5}$$

where $F \in \mathbb{R}^3$ and $\Gamma \in \mathbb{R}^3$ are respectively the force and the torque vectors with respect to the body frame, which determine the specific dynamics of the system. The inertia matrix is diagonal, $J = \mathrm{diag}(J_{xx}, J_{yy}, J_{zz})$, when $O_b$ coincides with the body principal axes of inertia. The quadrotor actuators generate a single thrust input, $U_1$, and full actuation of the torque, $\Gamma = [U_2\ U_3\ U_4]^T$, which demonstrates the underactuated dynamics of the system. The force input $F$ in (4) is

$$F = -\frac{1}{m}U_1 E_3 + gR^T e_3 \tag{6}$$

where $E_3 = e_3 = [0\ 0\ 1]^T$ are the unit vectors in the body-fixed frame and the inertial frame, respectively. From (3) and (6), it is clear that the translational dynamics are independent of the yaw angle, and hence the yaw dynamics can be controlled independently of the translational dynamics.

For visual servoing of the quadrotor, the full dynamics of the system are sometimes taken into account in designing the controller, e.g. [7], [10]. Another approach is to separate the control problem into inner-loop and outer-loop control. The inner loop, which uses inputs from rate gyroscopes acquired at a high data rate, regulates the torque inputs to track a desired orientation. This desired orientation is defined by the outer loop through an IBVS scheme. Time-scale separation and high-gain arguments can be used to ensure stability of the whole system [15]. In this paper, only the control of the translational dynamics is considered, and it is assumed that a suitable high-gain inner-loop controller provides the desired torque for the attitude of the system.

III. IMAGE DYNAMICS

Commonly used image formations for visual servoing are the perspective and spherical projections. The author has proposed a method in [10] that uses appropriately defined perspective image moments reprojected onto an oriented image plane, called the virtual plane, using only the roll and pitch angles of the robot. Based on the selected image features in the new image plane, it is possible to design a full-dynamic image-based controller for the underactuated UAV while preserving a good behavior of the robot in Cartesian space. In this paper, we use the same imaging method for IBVS control and implement the image features presented in [13], which are as follows:

$$q_x = \frac{u_g}{\lambda}\,q_z, \qquad q_y = \frac{n_g}{\lambda}\,q_z, \qquad q_z = \sqrt{\frac{a^*}{a}} \tag{7}$$

where $u_g$ and $n_g$ are the coordinates of the center of gravity of the target in the oriented image plane, $\lambda$ is the focal length of the camera, and $a^*$ is the desired value of $a$, which is defined as follows:

$$a = \mu_{20} + \mu_{02} \tag{8}$$

where $\mu_{ij}$ are the centered moments of the target in the virtual image plane. Knowing that $z\sqrt{a} = z^*\sqrt{a^*}$, where $z^*$ is the normal distance of the camera from the target in the desired position, the dynamics of the features defined in (7) can be written as follows [16]:

$$\begin{bmatrix}\dot{q}_x\\ \dot{q}_y\\ \dot{q}_{zD}\end{bmatrix} = -\mathrm{sk}(\dot{\psi}e_3)\,q - \frac{1}{z^*}v + \frac{1}{z^*}\nu_1 \tag{9}$$

in which $q = [q_x\ q_y\ q_z]^T$ is the vector of image features defined in (7), $v(t) = [v_x\ v_y\ v_z]^T$ is the linear velocity of the camera frame expressed in the virtual frame, $\nu_1(t) = [d_x\ d_y\ d_z]^T$ is the velocity of the moving point in the virtual frame, and $q_{zD}$ can be an arbitrary value that will be defined properly to produce the image-space error in Section IV.
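As a concrete illustration of (7)–(8), the features can be computed from a small set of tracked target points whose coordinates have already been reprojected onto the virtual plane. The following Python sketch is not the paper's implementation: the function name is arbitrary, and point-set moments stand in for whatever moment computation the tracker provides.

```python
import numpy as np

def image_features(points_un, a_star, lam):
    """Compute q = (q_x, q_y, q_z) of Eq. (7) from N target points
    given as (u, n) coordinates in the virtual image plane."""
    ug, ng = points_un.mean(axis=0)            # centre of gravity (u_g, n_g)
    d = points_un - np.array([ug, ng])
    mu20, mu02 = np.sum(d[:, 0]**2), np.sum(d[:, 1]**2)
    a = mu20 + mu02                            # Eq. (8)
    qz = np.sqrt(a_star / a)                   # q_z = sqrt(a*/a) = z / z*
    return np.array([ug * qz / lam, ng * qz / lam, qz])

# Four vertexes of a centred square target, seen at the desired distance
pts = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
a_star = 8.0                                   # desired value of a for this view
q = image_features(pts, a_star, lam=213.0)     # -> (0, 0, 1) at the goal
```

Moving the camera twice as far from the target halves all image coordinates, so $a$ drops by a factor of four and $q_z$ doubles, consistent with $q_z = z/z^*$.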
IV. IBVS USING VELOCITY OF IMAGE FEATURES
In this section, a dynamic IBVS scheme for the 3D translational motion control of quadrotor helicopters is presented which is robust with respect to (i) the unknown depth value $z^*$, and (ii) the motion of the target. The task is to move the camera, which is attached to the quadrotor, so as to match the observed image features with predefined desired image features obtained from an object. The object is considered to have 3D translational motion and yaw rotation. For example, the object can be a wheeled mobile robot moving on flat ground, or a quadrotor flying with small variations in the roll and pitch angles. These kinds of object motion satisfy our assumption of a planar image target. It is also possible to control the yaw rotation of the robot through visual information, as done in [10], but here it is assumed that the yaw rotation is controlled by IMU data in order to have a stable velocity. To design the controller, one first needs to define an error vector in image space. In order to simplify the design procedure in the sequel, the desired image features are considered as

$$q_d = [q_{xd}\ q_{yd}\ q_{zd}]^T = [0\ 0\ q_{zd}]^T \tag{10}$$

It is common in visual servoing of a quadrotor to have the observed object in the center of the image plane. This task, for our selected image features (7), leads to $q_{xd} = q_{yd} = 0$. As done in [10], it is also possible to consider another projection operator such that, for any desired features in the image plane, the desired features in the virtual image plane become equal to (10). Now the image error for the translational motion control of the robot is defined as follows:

$$\delta = q - q_d \tag{11}$$

Using (11) and assigning $q_{zD} = q_z - q_{zd}$, the derivative of the image error vector will be

$$\dot{\delta} = -\mathrm{sk}(\dot{\psi}e_3)\,\delta - \frac{1}{z^*}v + \frac{1}{z^*}\nu_1 \tag{12}$$

In order to write the full translational dynamics of the image features in the virtual plane, the translational dynamics of the robot, (4), are used, expressed in the virtual frame. Then one has

$$\dot{v} = -\mathrm{sk}(\dot{\psi}e_3)\,v + f \tag{13}$$

where $f$ is defined as follows:

$$f = \bar{R}\,F \tag{14}$$

with $\bar{R}$ denoting the rotation from the body-fixed frame to the virtual frame.

In order to design a controller for the dynamics presented by (12) and (13), a measurement of the velocity $v$ is required. Since our objective is to design the IBVS controller without using the linear velocity of the robot obtained from accelerometers, the time derivative of the image features will be used as the velocity information. This information, $\dot{q}$ and hence $\dot{\delta}$, can be computed by directly measuring the optic flow of each target point in the image sequence. A large literature exists on methods for the robust computation of optic flow [17], [18]. Now the dynamics of the system can be rewritten based on the available visual measurements. By time-differentiating (12) and substituting (13) into it, the dynamics of the system can be presented in terms of the dynamics of the visual features as follows:

$$\ddot{\delta} = -2M_1\dot{\delta} - M_2\delta - \frac{1}{z^*}f + \frac{1}{z^*}D \tag{15}$$

where

$$M_1 = \mathrm{sk}(\dot{\psi}e_3) \tag{16}$$

$$M_2 = \mathrm{sk}(\ddot{\psi}e_3) + \mathrm{sk}(\dot{\psi}e_3)\,\mathrm{sk}(\dot{\psi}e_3) \tag{17}$$

and $D = M_1\nu_1 + \dot{\nu}_1$.

In many applications, the information related to the motion of the target, $\nu_1$ and $\dot{\nu}_1$, is unknown. Hence, in this paper, these terms are considered as unknown bounded values. Another uncertainty in (15) is $z^*$, the vertical distance of the object from the robot in the desired position. Our proposed IBVS scheme is robust with respect to these uncertainties. We now design an IBVS tracking controller for the translational motion of the robot by considering known bounds for the derivatives of the yaw angle of the robot. The controller compensates for the unknown motion of the target in the image plane through an adaptive robust scheme, such that no prior information is required about the bound of the velocity of the target motion. To design the controller, a new state variable is defined as follows:

$$s = \dot{\delta} + \beta\delta \tag{18}$$

where $\beta$ is a positive constant. Using (15), one has

$$\dot{s} = -2M_1 s + \beta s + \left(2\beta M_1 - M_2 - \beta^2 I\right)\delta - \frac{1}{z^*}f + \frac{1}{z^*}D \tag{19}$$

Now, assuming that $\|D\|$ has a maximum but unknown value $D_M$, the following theorem is stated.

Theorem 1. Consider the system defined by (15) with input $f$. Assume that $\|M_1\| \le \kappa_1$, $\|M_2\| \le \kappa_2$, and let $z_{max}$ be a bound on the maximum value of $z^*$. The following IBVS controller is proposed:

$$f = k_p\delta + k_r s + u \tag{20}$$

where the control gains satisfy the following conditions:

$$k_p > \frac{z_{max}}{2\beta}\left(2\beta\kappa_1 + \kappa_2 + \beta^2\right), \qquad k_r > z_{max}\left(\beta + \frac{2\beta\kappa_1 + \kappa_2 + \beta^2}{2}\right) \tag{21}$$

and $u$ is defined as follows:

$$u = \frac{s\,\hat{D}_M^2}{\|s\|\hat{D}_M + \epsilon} \tag{22}$$

in which $\epsilon$ is a positive constant and $\hat{D}_M$ is the estimated value of $D_M$, obtained through the following adaptation law:
$$\dot{\hat{D}}_M = \mathrm{Proj}\left(\gamma\left(\|s\| - \sigma\hat{D}_M\right)\right) \tag{23}$$

where $\gamma$ and $\sigma$ are positive constants and $\mathrm{Proj}(\cdot)$ is a projection operator similar to that used in [13], which ensures that the estimated parameter remains positive. Then the system states $\delta$ and $s$ are Uniformly Ultimately Bounded (UUB) and converge exponentially to a small ball containing the equilibrium point of the system, $\delta = s = 0$.

Proof: Consider the following Lyapunov function:

$$L = \frac{1}{2}\frac{k_p}{z^*}\delta^T\delta + \frac{1}{2}s^Ts + \frac{1}{2\gamma z^*}\tilde{D}_M^2 \tag{24}$$

in which $\tilde{D}_M = \hat{D}_M - D_M$. Using (18) and (19), the time derivative of (24) will be

$$\dot{L} = s^T\left(-2M_1 s + \beta s + \left(2\beta M_1 - M_2 - \beta^2 I\right)\delta - \frac{1}{z^*}f + \frac{1}{z^*}D\right) + \frac{k_p}{z^*}\delta^T\left(s - \beta\delta\right) + \frac{1}{\gamma z^*}\tilde{D}_M\dot{\hat{D}}_M \tag{25}$$

Substituting the controller (20) into (25) and knowing that $M_1$ is a skew-symmetric matrix, which has the property $x^TM_1x = 0\ \forall x$, one has

$$\dot{L} = -\left(\frac{k_r}{z^*} - \beta\right)s^Ts + s^T\left(2\beta M_1 - M_2 - \beta^2 I\right)\delta - \beta\frac{k_p}{z^*}\delta^T\delta - \frac{1}{z^*}s^Tu + \frac{1}{z^*}s^TD + \frac{1}{\gamma z^*}\tilde{D}_M\dot{\hat{D}}_M \tag{26}$$

Using the bounds defined for the norms of the matrices $M_1$ and $M_2$, (26) can be bounded as follows:

$$\dot{L} \le -\left(\frac{k_r}{z^*} - \beta\right)\|s\|^2 + \left(2\beta\kappa_1 + \kappa_2 + \beta^2\right)\|s\|\|\delta\| - \beta\frac{k_p}{z^*}\|\delta\|^2 - \frac{1}{z^*}s^Tu + \frac{1}{z^*}\|s\|D_M + \frac{1}{\gamma z^*}\tilde{D}_M\dot{\hat{D}}_M \tag{27}$$

Then, by using the fact that for any $x_1, x_2 \in \mathbb{R}^n$ and $k > 0$, $k\|x_1\|\|x_2\| \le \frac{k}{2}\|x_1\|^2 + \frac{k}{2}\|x_2\|^2$, one can write (27) as follows:

$$\dot{L} \le -A_1\|\delta\|^2 - B_1\|s\|^2 - \frac{1}{z^*}s^Tu + \frac{1}{z^*}\|s\|D_M + \frac{1}{\gamma z^*}\tilde{D}_M\dot{\hat{D}}_M \tag{28}$$

where

$$A_1 = \beta\frac{k_p}{z^*} - \frac{2\beta\kappa_1 + \kappa_2 + \beta^2}{2}, \qquad B_1 = \frac{k_r}{z^*} - \beta - \frac{2\beta\kappa_1 + \kappa_2 + \beta^2}{2} \tag{29}$$

On the other hand, since $\hat{D}_M$ is positive, one has

$$-\frac{1}{z^*}s^Tu = -\frac{1}{z^*}\frac{\|s\|^2\hat{D}_M^2}{\|s\|\hat{D}_M + \epsilon} = -\frac{1}{z^*}\|s\|\hat{D}_M + \frac{1}{z^*}\frac{\epsilon\|s\|\hat{D}_M}{\|s\|\hat{D}_M + \epsilon} \le -\frac{1}{z^*}\|s\|\hat{D}_M + \frac{\epsilon}{z^*} \tag{30}$$

Then one will have

$$\dot{L} \le -A_1\|\delta\|^2 - B_1\|s\|^2 + \frac{\epsilon}{z^*} - \frac{1}{z^*}\|s\|\tilde{D}_M + \frac{1}{\gamma z^*}\tilde{D}_M\dot{\hat{D}}_M \tag{31}$$

Now, substituting (23) into (31), one will have

$$\dot{L} \le -A_1\|\delta\|^2 - B_1\|s\|^2 + \frac{\epsilon}{z^*} - \frac{\sigma}{z^*}\tilde{D}_M\hat{D}_M \tag{32}$$

Using completion of squares, one has

$$-\tilde{D}_M\hat{D}_M = -\tilde{D}_M\left(\tilde{D}_M + D_M\right) \le -\frac{1}{2}\tilde{D}_M^2 + \frac{1}{2}D_M^2 \tag{33}$$

which gives

$$\dot{L} \le -A_1\|\delta\|^2 - B_1\|s\|^2 - \frac{\sigma}{2z^*}\tilde{D}_M^2 + \frac{\epsilon}{z^*} + \frac{\sigma D_M^2}{2z^*} \tag{34}$$

This can be written as

$$\dot{L} \le -\eta_1 L + \eta_2 \tag{35}$$

where $\eta_1 = \min\left\{\dfrac{2A_1 z^*}{k_p},\ 2B_1,\ \gamma\sigma\right\}$ and $\eta_2 = \dfrac{\epsilon}{z^*} + \dfrac{\sigma D_M^2}{2z^*}$. Solving (35), one has

$$L(t) \le L(0)e^{-\eta_1 t} + \frac{\eta_2}{\eta_1}\left(1 - e^{-\eta_1 t}\right) \tag{36}$$

According to the conditions specified in (21), $\eta_1$ is always positive, and hence the system states are UUB and converge to a ball containing the equilibrium point ($\delta = s = 0$) of the system. The exponential decay rate $\eta_1$ and the final error of the system, which depends on $\eta_2$, can be adjusted by the control gains $k_p$, $k_r$, $\beta$, $\epsilon$, $\sigma$ and $\gamma$. ∎

Since the robot is underactuated, the input $f$ cannot be assigned directly. However, once the desired $f$ is available, it is always possible to extract the thrust magnitude, $U_1$, and the desired attitude [19], namely the roll and pitch angles, which are already assumed to be controlled with a high-gain inner-loop controller.

Remark 1. Equation (23) is a leakage-like adaptation law [20]. This law can improve the robustness of the adaptive scheme in the presence of disturbances or unmodeled dynamics.

V. SIMULATION RESULTS

In this section, MATLAB simulations are presented to evaluate the performance of the proposed visual servo controller. In the simulations, the camera frame rate is set at 50 Hz and the sampling time for the rest of the system is 10 ms. The robot is assumed to start in a hover position with the desired object in the field of view of the camera. The visual information consists of the coordinates of four points corresponding to the four vertexes of a rectangle, which are used to calculate the image features defined in (7). The vertexes of the rectangle with respect to the inertial frame are located at $(0.25, 0.2, 0)$ m, $(0.25, -0.2, 0)$ m, $(-0.25, 0.2, 0)$ m and $(-0.25, -0.2, 0)$ m.
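The control and adaptation laws of Theorem 1 are straightforward to implement once the image error and its optic-flow-based derivative are available. The following Python sketch shows one update step of (18), (20), (22) and (23); the gain values mirror the simulation section below, while the step size `dt`, the value of `eps`, and the clipping at zero (a simple stand-in for the $\mathrm{Proj}(\cdot)$ operator) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def control_step(delta, delta_dot, D_hat, kp, kr, beta, eps, gamma, sigma, dt):
    """One update of the adaptive robust IBVS law (18), (20), (22), (23).
    delta, delta_dot: image error and its optic-flow-based derivative;
    D_hat: current estimate of the disturbance bound D_M."""
    s = delta_dot + beta * delta                           # Eq. (18)
    u = s * D_hat**2 / (np.linalg.norm(s) * D_hat + eps)   # Eq. (22)
    f = kp * delta + kr * s + u                            # Eq. (20)
    # Leakage-like adaptation (23); clipping at zero stands in for Proj(.)
    D_hat_new = max(D_hat + dt * gamma * (np.linalg.norm(s) - sigma * D_hat), 0.0)
    return f, D_hat_new

# Example: error along the x feature axis, gains from the simulation section
f, D_next = control_step(np.array([0.1, 0.0, 0.0]), np.zeros(3),
                         D_hat=0.5, kp=4.0, kr=10.0, beta=0.1,
                         eps=0.1, gamma=0.01, sigma=0.1, dt=0.02)
```

The returned $f$ is then converted into a thrust magnitude $U_1$ and desired roll and pitch angles for the inner-loop attitude controller, as discussed after the proof.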
These points are projected through perspective projection onto a digital image plane with the focal length divided by the pixel size (identical in both $u$ and $n$ directions) equal to 213 and the principal point located at (80, 60). The dynamic parameters of the robot are selected as $m = 2$ kg, $g = 9.81$ m·s⁻² and $J = \mathrm{diag}(0.0081, 0.0081, 0.0142)$ kg·m². The parameter $z_{max}$ is selected to be 10 m. The inverse dynamics of the quadrotor [21] are used to compute the desired attitude of the robot in order to provide the force inputs for the translational motion designed in (20). High-gain proportional-derivative controllers are used to control the desired roll and pitch angles of the robot. Since the presented approach requires the velocity of the image features, numerical differentiation is used to compute it, which provides an appropriate estimate when the visual data are available at a high frame rate. This is an advantage with respect to those approaches which use translational optic flow as the linear velocity information.

In the simulation, the robot's initial position is at $(0.2, 0.1, 9)$ m with respect to the inertial frame, and the desired image features are obtained at $(0, 0, 7)$ m. The observed target is assumed to move on the flat ground, following a circle of radius 1 m at a constant angular velocity. In order to simulate the case of low-quality rate gyroscopes, white noise with a covariance of $10^{-4}$ is added to the angular velocity information of the robot. The control gains, chosen to satisfy condition (21), are set as $k_p = 4$, $k_r = 10$ and $\beta = 0.1$. The parameters $\sigma$ and $\gamma$ are selected to be 0.1 and 0.01 respectively, and the initial value of $\hat{D}_M$ is set to 0.5. The desired yaw velocity is considered to be zero, which is controlled through a proportional controller.

Fig. 1 shows the trajectories of the image points in the virtual image plane. The norms of the error signals are illustrated in Fig. 2. The trajectories of the translational motion of the robot in Cartesian coordinates and in a 3D illustration are shown in Fig. 3 and Fig. 4, respectively.

Figure 1. Object points trajectories in the virtual image plane.
Figure 2. Time evolution of the norm of the error vectors.
Figure 3. Time evolution of the quadrotor Cartesian coordinates.
Figure 4. Trajectory of the motion of the robot and the object in a 3D environment.

VI. CONCLUSION

In this paper, an IBVS control scheme has been developed for the translational motion of a quadrotor helicopter flying over a moving target. Since the common IMU systems used in robotics applications do not provide linear velocity of adequate quality, the main objective was to use the velocity of image features as the linear velocity cue. To design the controller, the mathematical model of the system is expressed in terms of the velocity of the image features, which makes it possible to design a dynamic IBVS controller for this vehicle. The proposed controller is robust with respect to uncertainties in the dynamics of the system related to the motion of the target and to the unknown depth information of the image. The approach guarantees that the states of the system remain UUB. Simulation results show a satisfactory response of the presented visual servo approach in both the image and Cartesian spaces of the robot. Our future work is to design an observer-based visual servo controller to compensate for the lack of accuracy in the measurement of the velocity of image features when the visual information is noisy or available at a low rate.

REFERENCES
[1] O. Shakernia, Y. Ma, T. Koo, T. John, and S. Sastry, "Landing an unmanned air vehicle: Vision based motion estimation and nonlinear control," Asian Journal of Control, vol. 1, pp. 128–145, 1999.
[2] E. Altug, J. Ostrowski, and R. Mahony, "Control of a quadrotor helicopter using visual feedback," in Proc. IEEE International Conference on Robotics and Automation, vol. 1, 2002, pp. 72–77.
[3] F. Caballero, L. Merino, J. Ferruz, and A. Ollero, "Vision based odometry and SLAM for medium and high altitude flying UAVs," Journal of Intelligent & Robotic Systems, vol. 54, pp. 137–161, 2009.
[4] S. Saripalli, F. Montgomery, and G. Sukhatme, "Visually-guided landing of an unmanned aerial vehicle," IEEE Transactions on Robotics and Automation, vol. 19, pp. 371–381, 2003.
[5] T. Hamel and R. Mahony, "Visual servoing of an under-actuated dynamic rigid-body system: An image-based approach," IEEE Transactions on Robotics and Automation, vol. 18, pp. 187–198, 2002.
[6] A. Beyeler, J. Zufferey, and D. Floreano, "Vision-based control of near-obstacle flight," Autonomous Robots, vol. 27, pp. 201–219, 2009.
[7] T. Hamel and R. Mahony, "Image based visual servo control for a class of aerial robotic systems," Automatica, vol. 43, pp. 1975–1983, 2007.
[8] N. Guenard, T. Hamel, and R. Mahony, "A practical visual servo control for an unmanned aerial vehicle," IEEE Transactions on Robotics, vol. 24, pp. 331–340, 2008.
[9] O. Bourquardez, R. Mahony, N. Guenard, F. Chaumette, T. Hamel, and L. Eck, "Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle," IEEE Transactions on Robotics, vol. 25, pp. 743–749, 2009.
[10] H. Jabbari, G. Oriolo, and H. Bolandi, "Dynamic IBVS control of an underactuated UAV," in Proc. IEEE International Conference on Robotics and Biomimetics, 2012, pp. 1158–1163.
[11] R. Mahony, P. Corke, and T. Hamel, "Dynamic image-based visual servo control using centroid and optic flow features," Journal of Dynamic Systems, Measurement, and Control, vol. 130, 2008.
[12] F. Le Bras, T. Hamel, R. Mahony, and A. Treil, "Output feedback observation and control for visual servoing of VTOL UAVs," International Journal of Robust and Nonlinear Control, vol. 21, pp. 1008–1030, 2011.
[13] H. Jabbari Asl, G. Oriolo, and H. Bolandi, "An adaptive scheme for image-based visual servoing of an underactuated UAV," International Journal of Robotics and Automation, vol. 29, pp. 92–104, 2014.
[14] A. Tayebi and S. McGilvray, "Attitude stabilization of a VTOL quadrotor aircraft," IEEE Transactions on Control Systems Technology, vol. 14, pp. 562–571, 2006.
[15] S. Bertrand, T. Hamel, and H. Piet-Lahanier, "Stability analysis of an UAV controller using singular perturbation theory," in Proc. 17th IFAC World Congress, 2008, pp. 5706–5711.
[16] H. Jabbari Asl and H. Bolandi, "Robust vision-based control of an underactuated flying robot tracking a moving target," Transactions of the Institute of Measurement and Control, vol. 36, pp. 411–424, 2014.
[17] M. Srinivasan and S. Zhang, "Visual motor computations in insects," Annual Review of Neuroscience, vol. 27, pp. 679–696, 2004.
[18] J. Zufferey and D. Floreano, "Fly-inspired visual steering of an ultralight indoor aircraft," IEEE Transactions on Robotics and Automation, vol. 22, pp. 137–146, 2006.
[19] A. Abdessameud and A. Tayebi, "Global trajectory tracking control of VTOL-UAVs without linear velocity measurements," Automatica, vol. 46, no. 6, pp. 1053–1059, 2010.
[20] P. Ioannou and J. Sun, Robust Adaptive Control. Englewood Cliffs, NJ: Prentice-Hall, 1996.
[21] A. Das, F. Lewis, and K. Subbarao, "Backstepping approach for controlling a quadrotor using Lagrange form dynamics," Journal of Intelligent and Robotic Systems, vol. 56, pp. 127–151, 2009.
Hamed Jabbari Asl received the M.Sc. degree in control engineering from Tabriz University in 2007. From 2011 to 2012, he was a visiting scholar at the Department of Computer, Control, and Management Engineering, Sapienza University of Rome. He is currently with the Young Researchers and Elite Club, Ilkhchi Branch, Islamic Azad University, Tabriz, Iran.