J Intell Robot Syst (2011) 61:103–118 DOI 10.1007/s10846-010-9472-1

Stabilization and Trajectory Tracking of a Quad-Rotor Using Vision

L. R. García Carrillo · E. Rondon · A. Sanchez · A. Dzul · R. Lozano

Received: 1 February 2010 / Accepted: 1 September 2010 / Published online: 12 October 2010 © Springer Science+Business Media B.V. 2010

Abstract We propose a vision-based position control method, with the purpose of providing some level of autonomy to a quad-rotor unmanned aerial vehicle. Our approach estimates the helicopter X-Y-Z position with respect to a landing pad on the ground. This technique allows us to measure position variables that are difficult to compute with conventional navigation systems, such as inertial sensors or the Global Positioning System, in urban or indoor environments. We also present a method to measure the translational speed in a local frame. The control strategy implemented is based on a full state feedback controller. Experimental results validate the effectiveness of our method.

Keywords Aircraft control · Local positioning · Visual tracking · Sensor fusion

This work was partially supported by Mexico's National Council of Science and Technology (CONACYT) and the Research Center for Advanced Studies—Cinvestav.

L. R. García Carrillo (B) · E. Rondon: Heudiasyc, UMR CNRS 6599, Compiègne University of Technology, Compiègne, France. e-mail: [email protected], [email protected]
A. Sanchez: Robotics and Advanced Manufacturing Division, Research Center for Advanced Studies, Cinvestav, Ramos Arizpe, Coahuila, México. e-mail: [email protected]
A. Dzul: Research Studies and Postgraduate Division, Laguna Technological Institute, Torreón, Coahuila, México. e-mail: [email protected]
R. Lozano: Heudiasyc UMR 6599, UTC CNRS, Compiègne, France; LAFMIA UMI 3175, Cinvestav, Coahuila, México. e-mail: [email protected]

1 Introduction

Flight control of unmanned aerial vehicles (UAVs) has become a major focus of research at both governmental and private levels, because of their great potential for performing civilian and military applications without putting human lives at stake. Common tasks of these vehicles include search and rescue operations, inspection and sensing of remote areas, hazardous material recovery, real-time forest fire monitoring, surveillance of sensitive areas (borders, ports, oil pipelines), etc. UAVs can fly autonomously or semi-autonomously and, in addition, they are expendable or recoverable [1]. When performing surveillance or inspection, rotary-wing aerial vehicles, like quad-rotor helicopters, present advantages over conventional fixed-wing aircraft. Basically, this advantage lies in their ability to perform vertical takeoff and landing in limited spaces, hover above interesting targets, perform longitudinal and lateral flight, and enter places otherwise inaccessible to humans. Indeed, real-time embedded stabilization of a small quad-rotor helicopter has already been achieved [2, 3].

A basic requirement for a UAV is robust autonomous navigation and positioning, which can be carried out by using vision-based control techniques. Computer vision has become an important capability in the field of mobile robots. It is currently used in the feedback control loop as a cheap, passive and information-abundant sensor, usually combined with an Inertial Measurement Unit (IMU), to provide robust relative attitude information and to enable autonomous positioning and navigation. Also, note that an on-board vision system complements a sensor suite that includes a GPS, which provides position information relative to an inertial frame but fails indoors or in noisy environments. On-board computer vision systems provide information obtained, for example, from the detection of landmarks. This information allows the UAV to estimate its position with respect to such landmarks. Once the UAV knows its position, a control strategy can be implemented in order to reach a desired position.

Several research works have aimed to control the flight of UAVs by using cameras as position sensors [4–6]. A vision algorithm for visual navigation and landing of a helicopter is presented in [7]. For the purpose of estimating the location and orientation of a helicopter landing pad, in [8] the authors use the projections of parallel lines. With this approach, the vision sensor estimates the position of the camera relative to the landmark, but it cannot estimate its velocity, which is important information for controlling the position of the UAV. An approach based on optical flow techniques has been applied to the real-time stabilization of an eight-rotor UAV in [9]. Systems with more than one camera have also been studied; for example, in [10] the authors use a system based on two cameras (one on board, one off board) in order to perform autonomous hover, take-off and landing of a quad-rotor helicopter. Also, a two-camera system is presented in [11], with the


Fig. 1 The four-rotor aircraft experimental platform

drawback that none of the cameras is embedded on the UAV. In [12], a stereo vision system combined with a multi-sensor suite is proposed. The disadvantage of on-board stereo vision systems is that they increase the payload of the UAV. Omnidirectional visual sensors are also used on UAVs. In [13], a video sequence captured by an omnidirectional camera mounted on an aerial vehicle is analyzed in order to compute the vehicle's attitude.

Given the diversity of vision systems, we have decided in this paper to use a very simple vision system; our contribution is based on the combination of computer vision techniques and a simple control method applied to the position stabilization of a quad-rotor UAV. The computer vision approach consists of a landmark detection and tracking algorithm. It is conceived to estimate the position and linear velocity of the UAV with respect to a landing pad on the ground, by using a calibrated onboard monocular camera. In order to show the effectiveness of the vision algorithm, we use a simple control strategy based on proportional-derivative (PD) controllers for the altitude and yaw, and basic full state feedback controllers for the pitch and roll dynamics. Real-time experiments are performed on a quad-rotor UAV platform that has been built at the University of Technology of Compiègne (UTC). The developed prototype is shown in Fig. 1. Experimental results validate that the vision-based position control method ensures that the UAV reaches the desired position relative to the landing pad.

This document is organized as follows. Section 2 presents the fundamental equations of motion of a quad-rotor UAV. In Section 3, we present the configuration of the vision system. The proposed control strategy for hovering flight is presented in Section 4. The UAV characteristics, mechanical configuration and embedded electronics are presented in Section 5. In order to validate the effectiveness of the vision-based position control system, several experiments have been performed and the results are shown in Section 6. Finally, concluding remarks are presented in Section 7.

2 Quad-Rotor Model

A quad-rotor helicopter is an underactuated dynamic vehicle with four input forces (the thrust provided by each propeller) and six output coordinates (full spatial motion). It can be considered a vertical takeoff and landing (VTOL) vehicle, able to move omnidirectionally and to fly in hover.


The quad-rotor behavior is controlled by varying the angular speed of the rotors. Each rotor produces a thrust and a torque, whose combination generates the main thrust, the yaw torque, the pitch torque, and the roll torque acting on the rotorcraft. The front and rear rotors rotate counter-clockwise while the left and right rotors rotate clockwise, canceling gyroscopic effects and aerodynamic torques in stationary trimmed flight. Vertical motion is controlled by the collective throttle input, that is, the sum of the thrusts of the four motors. As shown in Fig. 2, forward/backward motion is achieved by controlling the differential speed of the front and rear motors; this causes the quad-rotor to tilt around the corresponding axis, generating a pitch angle. Left/right motion is achieved by controlling the differential speed of the right and left motors, tilting around the corresponding axis and producing a roll angle. Finally, yaw motion exploits the two sets of rotors spinning in opposite directions: a yaw angular displacement is obtained by increasing (or decreasing) the speed of the front and rear motors while decreasing (or increasing) the speed of the lateral motors. Since this is done keeping the total thrust constant, the altitude remains unchanged.
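As a minimal sketch of this thrust/torque mixing, the following C++ fragment maps four rotor thrusts to the collective thrust and the three torques. The arm length, the drag-to-thrust ratio and the sign conventions are illustrative assumptions, not the platform's actual values.

```cpp
// Minimal sketch of the thrust/torque mixing described above.
// Arm length, drag-to-thrust coefficient and sign conventions are assumptions.
#include <array>

struct Commands { double thrust, tauRoll, tauPitch, tauYaw; };

// Rotor thrusts: f[0]=front, f[1]=rear (counter-clockwise),
//                f[2]=left,  f[3]=right (clockwise).
Commands mix(const std::array<double, 4>& f, double armLength, double dragToThrust)
{
    Commands c;
    c.thrust   = f[0] + f[1] + f[2] + f[3];                 // collective throttle
    c.tauPitch = armLength * (f[0] - f[1]);                  // front/rear differential
    c.tauRoll  = armLength * (f[3] - f[2]);                  // right/left differential
    c.tauYaw   = dragToThrust * (f[0] + f[1] - f[2] - f[3]); // opposite spin directions
    return c;
}
```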

Fig. 2 Visual system setup


2.1 Dynamic Model

The quad-rotor representation used in this paper is shown in Fig. 2. The dynamic model of this aerial vehicle is obtained by representing the quad-rotor as a combination of two PVTOLs, each of them with an independent torque control. The position of the vehicle's center of gravity with respect to the inertial frame is denoted by $\xi = [x\;y\;z]^T \in \mathbb{R}^3$, and the three Euler angles (roll, pitch and yaw), which represent the orientation of the vehicle, are expressed as $\eta = [\phi\;\theta\;\psi]^T \in \mathbb{R}^3$. The model of the full rotorcraft dynamics is obtained from the Euler–Lagrange equations [14]:

$$m\ddot{x} = -(\sin\theta)\,u + (\cos\theta\cos\psi)\,\varepsilon\,\tilde{\tau}_\theta \qquad (1)$$
$$m\ddot{y} = (\cos\theta\sin\psi)\,u + (\cos\psi\cos\phi)\,\varepsilon\,\tilde{\tau}_\phi \qquad (2)$$
$$m\ddot{z} = (\cos\theta\cos\phi)\,u - mg \qquad (3)$$
$$\ddot{\psi} = \tilde{\tau}_\psi \qquad (4)$$
$$\ddot{\theta} = \tilde{\tau}_\theta \qquad (5)$$
$$\ddot{\phi} = \tilde{\tau}_\phi \qquad (6)$$

where $u$ is the main thrust directed out of the top of the aircraft, $x$ and $y$ are coordinates in the horizontal plane, $z$ is the vertical position, and $\tilde{\tau}_\psi$, $\tilde{\tau}_\theta$ and $\tilde{\tau}_\phi$ are the yawing, pitching and rolling moments respectively, which are related to the generalized torques $\tau_\psi$, $\tau_\theta$, $\tau_\phi$. The parameter $\varepsilon$ is a small coefficient which characterizes the coupling between the rolling moment and the lateral acceleration of the aircraft, and $g$ denotes the gravitational constant.
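For simulation purposes, the model (1)–(6) can be integrated directly. The sketch below uses a forward-Euler step; the mass, coupling coefficient and time step are placeholder values, not the identified parameters of the platform.

```cpp
// Illustrative forward-Euler integration of Eqs. (1)-(6). The numeric values of
// m, epsilon and the time step are placeholders, not identified parameters.
#include <cmath>

struct State {
    double x, y, z, xd, yd, zd;        // position and linear velocity
    double phi, theta, psi;            // roll, pitch, yaw
    double phid, thetad, psid;         // angular rates
};

void step(State& s, double u, double tauPhi, double tauTheta, double tauPsi,
          double m = 0.8, double eps = 0.01, double g = 9.81, double dt = 0.01)
{
    // Translational accelerations, Eqs. (1)-(3)
    double xdd = (-std::sin(s.theta) * u
                  + std::cos(s.theta) * std::cos(s.psi) * eps * tauTheta) / m;
    double ydd = ( std::cos(s.theta) * std::sin(s.psi) * u
                  + std::cos(s.psi) * std::cos(s.phi) * eps * tauPhi) / m;
    double zdd = std::cos(s.theta) * std::cos(s.phi) * u / m - g;

    // Euler integration of positions and velocities
    s.xd += xdd * dt;  s.yd += ydd * dt;  s.zd += zdd * dt;
    s.x  += s.xd * dt; s.y  += s.yd * dt; s.z  += s.zd * dt;

    // Rotational dynamics, Eqs. (4)-(6): the moments set the angular accelerations
    s.phid += tauPhi * dt;   s.thetad += tauTheta * dt;  s.psid += tauPsi * dt;
    s.phi  += s.phid * dt;   s.theta  += s.thetad * dt;  s.psi  += s.psid * dt;
}
```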

3 Vision System

The UAV position control depends on the knowledge of its X-Y-Z coordinates with respect to a well-known reference frame. With the adequate application of computer vision techniques, a camera system can be used to provide this position feedback, which the controller requires in order to generate the control inputs that stabilize the vehicle over a desired position. Aiming at this goal, this section presents the vision system that we propose. It consists of a quad-rotor UAV with an on-board calibrated camera, a landing pad or marker placed on the ground, and a vision algorithm running on a ground station PC. The purpose of the vision system is to estimate the quad-rotor position with respect to the landing pad, in order to stabilize the rotorcraft at a desired position.

3.1 Visual System Setup

The system presented in Fig. 2 can be described as follows:

– A quad-rotor aerial vehicle with a body-fixed frame (Xh, Yh, Zh), assumed to be located at its center of gravity. Zh represents the yaw axis and points upwards; Xh and Yh are the pitch and roll axes.
– An onboard camera pointing downwards, with a reference frame expressed by (Xc, Yc, Zc). When moving, the camera surveys the scene passing below the quad-rotor. Since Xc–Yc and Xh–Yh are considered parallel planes, the visual information collected by the camera can be used to stabilize the vehicle.
– The landing pad, a rectangle formed by four circles of known coordinates, painted on a high-contrast background and placed underneath the rotorcraft. The coordinate frame (Xlp, Ylp, Zlp) represents the inertial reference frame.


The planes formed by (Xh–Yh) and (Xlp–Ylp) are considered to be parallel because the rotorcraft is assumed to be in hover flight. In order to estimate the UAV position relative to the landing pad, a method for computing the extrinsic parameters of the camera must be applied at every image frame.

3.2 Vision Algorithm for Camera Pose Estimation Using Planar Homographies

The vision algorithm implemented in our approach estimates a homography H for each view of the landing pad. The action of the homography can be expressed as [15]:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = s\,M\,[r_1\; r_2\; r_3\; t]\begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = s\,M\,[r_1\; r_2\; t]\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \qquad (7)$$

where $[x, y, 1]^T$ represents the landing pad position in the camera image, $s$ is an arbitrary scale factor, $M \in \mathbb{R}^{3\times 3}$ is the camera matrix, $r_1, r_2, r_3 \in \mathbb{R}^{3\times 1}$ are the extrinsic rotation parameters, $t \in \mathbb{R}^{3\times 1}$ is the extrinsic translation vector, $[X, Y, 0, 1]^T$ is the real landing pad position, and $H = s\,M\,[r_1\; r_2\; t]$ is the homography matrix. The homography matrix H combines two parts: the physical transformation (which locates the observed object plane) and the projection (the camera intrinsic matrix). At every instant while the aerial vehicle is hovering, it is possible to compute this homography matrix using the a priori knowledge of the positions of the four circle centroids [16]. Using this estimated transformation, and because the intrinsic camera matrix is constant and was previously identified by an off-line calibration, we are able to calculate the camera extrinsic parameters, and therefore the vehicle's position.

An erroneous detection of the landing pad circles must be discarded, since it would provide an erroneous position estimate. To this end, we check the parallelism of the lines mapped from the landing pad. Figure 2 shows that the four circles of the landing pad are positioned at the corners of a rectangle. The line between the two upper corners and the line joining the two lower corners must satisfy a parallelism constraint; the same restriction is checked for the line joining the two left corners and the line between the two right corners. Parallelism verification is based on the slope of a line:

$$m_{slope} = \frac{y_f - y_i}{x_f - x_i} \qquad (8)$$

where $i$ and $f$ stand for the initial and final coordinates respectively. Thus, the slope $m_{up}$ of the upper line must be almost equal to the slope $m_{lo}$ of the lower line, while the slope $m_{le}$ of the left line must be almost equal to the slope $m_{ri}$ of the right line:

$$|m_{up} - m_{lo}| < \epsilon_s \quad \text{and} \quad |m_{le} - m_{ri}| < \epsilon_s \qquad (9)$$

Here, $\epsilon_s$ stands for a small tolerance helping to determine whether the lines are parallel. For every detection of the four circles, the previous verification ensures that a good planar homography can be estimated, resulting in a good computation of the camera extrinsic parameters.
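As an illustration of this step, the following OpenCV-based sketch checks the parallelism constraint (8)–(9), estimates the homography (7) from the four circle centroids and recovers the extrinsic parameters. The function name, the corner ordering convention (upper-left, upper-right, lower-left, lower-right), the tolerance value and the use of double-precision matrices are our own assumptions, not the paper's exact implementation.

```cpp
// Sketch of the landing-pad pose estimation: parallelism check, homography
// estimation and extrinsic recovery. Ordering and tolerance are assumptions.
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>
#include <cmath>

static double slope(const cv::Point2f& a, const cv::Point2f& b)
{
    return (b.y - a.y) / (b.x - a.x);            // Eq. (8), non-vertical lines assumed
}

// img: detected centroids ordered upper-left, upper-right, lower-left, lower-right.
// pad: corresponding metric coordinates of the circle centers on the pad (Z = 0).
// M  : 3x3 intrinsic matrix (CV_64F) from the off-line calibration.
bool estimatePose(const std::vector<cv::Point2f>& img,
                  const std::vector<cv::Point2f>& pad,
                  const cv::Mat& M, cv::Mat& R, cv::Mat& t,
                  double tol = 0.05)
{
    // Parallelism verification, Eq. (9)
    if (std::fabs(slope(img[0], img[1]) - slope(img[2], img[3])) > tol) return false;
    if (std::fabs(slope(img[0], img[2]) - slope(img[1], img[3])) > tol) return false;

    cv::Mat H = cv::findHomography(pad, img);     // Eq. (7): H = s M [r1 r2 t]
    cv::Mat A = M.inv() * H;                      // s [r1 r2 t]
    double s = 1.0 / cv::norm(A.col(0));          // scale so that ||r1|| = 1
    cv::Mat r1 = s * A.col(0), r2 = s * A.col(1);
    t = s * A.col(2);                             // camera-to-pad translation (up to sign)
    cv::Mat r3 = r1.cross(r2);                    // complete the rotation matrix
    cv::hconcat(std::vector<cv::Mat>{r1, r2, r3}, R);
    return true;
}
```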


Since the vehicle is in hover flight, the image of the landing pad observed by the camera presents some displacement. To track this displacement, an optical flow computation is performed. Different approaches are available for computing optical flow [17]. We have implemented the pyramidal Lucas–Kanade algorithm [18] in combination with a feature-detecting algorithm. Our method provides an accurate estimation of the motion field since it does not take into account the areas outside the landing pad, where the motion field cannot be accurately determined.

In order to track the four circle centroids, several tasks must be performed. First, a region of interest is defined around each of the circles, using its radius. The most representative features over each of these regions are selected as features to track; these features usually lie on the circle perimeters. Once this group of features has been identified, a tracking process is performed over the entire image. The optical flow is estimated from the displacements of the tracked features. The next step consists of using the optical flow values to estimate the new positions of the circle centroids. If we want to use this tracked landing pad position in the homography computation, it is also necessary to validate the parallelism of the lines mapped from the tracked landing pad circles, i.e., Eqs. 8 and 9 must also be satisfied. Although the circle tracking process is performed on each new image, this data is not used for computing the extrinsic parameters of the camera unless the measured positions of the four circle centroids are not available or do not satisfy the parallelism condition.

Under the assumption of no moving objects or obstacles in the visual field of the camera, and assuming rigid body motion, the optical flow can be expressed as

$$\overline{OF}_x^d = \bar{V}_{OF_x}^d + K_{xy}^x\,\omega_x - K_{x^2}\,\omega_y + K_y^x\,\omega_z$$
$$\overline{OF}_y^d = \bar{V}_{OF_y}^d + K_{y^2}\,\omega_x - K_{xy}^y\,\omega_y - K_x^y\,\omega_z \qquad (10)$$

with

$$\bar{V}_{OF_x}^d = -f\,\frac{V_x}{Z} + K_x^x\,\frac{V_z}{Z}$$
$$\bar{V}_{OF_y}^d = -f\,\frac{V_y}{Z} + K_y^y\,\frac{V_z}{Z} \qquad (11)$$

where $\overline{OF}_x^d$ and $\overline{OF}_y^d$ are the optical flows in the image coordinate system, $\bar{V}_{OF_x}^d$ and $\bar{V}_{OF_y}^d$ are the relative velocities of the vehicle in the image coordinate system, the $K_j^i$ are known scale factors depending on the intrinsic parameters of the camera, and $\omega_x$, $\omega_y$ and $\omega_z$ are the UAV angular velocities. Due to the attitude control law, the optical flow induced by rotations of the engine is considered negligible with respect to the optical flow produced by the translational movement (as we will see in Section 6). Therefore, the optical flow can be simply expressed as

$$\overline{OF}_x^d = \bar{V}_{OF_x}^d + \upsilon_1$$
$$\overline{OF}_y^d = \bar{V}_{OF_y}^d + \upsilon_2 \qquad (12)$$

where $\upsilon_1$ and $\upsilon_2$ are noise signals. Each circle centroid's image displacement is estimated by tracking the landing pad circles in the image. We compute a relative velocity as the average value of the optical flow produced by the displacement of the four circle centroids. A noise signal is added in the process, which takes into account the small changes in attitude due to the position correction, as follows:

$$\rho_x^{k+1} = \rho_x^k + T\,\bar{V}_{OF_x}^d + \nu_x$$
$$\rho_y^{k+1} = \rho_y^k + T\,\bar{V}_{OF_y}^d + \nu_y \qquad (13)$$

where $\rho_x^k$ and $\rho_y^k$ represent the circle centroid position, $T$ is the sampling period, and $\nu_x$ and $\nu_y$ are noise signals.
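The sketch below illustrates this tracking step with OpenCV: features are selected inside a region of interest around each circle, tracked with the pyramidal Lucas–Kanade algorithm, and the average displacement gives the relative velocity of Eq. (12), which also propagates the centroid positions as in Eq. (13). Window sizes, feature counts and function names are illustrative assumptions.

```cpp
// Sketch of the optical-flow based tracking of the landing-pad circles.
// Parameter values (feature count, window size, pyramid levels) are assumptions.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

// Returns the mean optical flow (pixels/frame) over the landing-pad features.
cv::Point2f meanPadFlow(const cv::Mat& prevGray, const cv::Mat& currGray,
                        const std::vector<cv::Rect>& circleRois)
{
    std::vector<cv::Point2f> prevPts, currPts;
    for (const cv::Rect& roi : circleRois) {
        std::vector<cv::Point2f> corners;
        cv::goodFeaturesToTrack(prevGray(roi), corners, 10, 0.01, 3);
        for (cv::Point2f& c : corners)
            prevPts.push_back(c + cv::Point2f(roi.x, roi.y));   // back to image coordinates
    }
    if (prevPts.empty()) return cv::Point2f(0.f, 0.f);

    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err,
                             cv::Size(21, 21), 3);              // pyramidal Lucas-Kanade

    cv::Point2f flow(0.f, 0.f);
    int n = 0;
    for (size_t i = 0; i < prevPts.size(); ++i)
        if (status[i]) { flow += currPts[i] - prevPts[i]; ++n; }
    return n > 0 ? flow * (1.0f / n) : flow;
}

// Eq. (13): propagate a centroid estimate with the measured flow over one period T.
cv::Point2f propagateCentroid(const cv::Point2f& rho, const cv::Point2f& vOF, float T)
{
    return rho + T * vOF;
}
```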

4 Control Strategy

In this section, we present in detail the implementation of a full state feedback control method applied to the quad-rotor model.

4.1 Altitude and Yaw Control

The vertical position can be controlled by using the following control input:

$$u = \frac{m\,(s_z + g)}{\cos\theta\cos\phi} \qquad (14)$$

where $m$ is the mass of the quad-rotor, $g$ denotes the gravity constant and

$$s_z = k_{pz}\,e_z - k_{vz}\,\dot{z} \qquad (15)$$

where $e_z = z_d - z$ is the altitude error, with $z_d$ the desired altitude, and $k_{pz}$ and $k_{vz}$ are positive constants. Thus, for the altitude dynamics, $s_z$ is a PD controller. The yaw angular position can be controlled by applying

$$\tilde{\tau}_\psi = -k_{p\psi}\,e_\psi - k_{v\psi}\,\dot{\psi} \qquad (16)$$

where $e_\psi = \psi_d - \psi$ is the yaw error, $\psi_d$ is the desired yaw angle, and $k_{p\psi}$ and $k_{v\psi}$ denote the positive constants of a PD controller. Indeed, introducing Eqs. 14–16 into Eqs. 1–6, and provided that $\cos\theta\cos\phi \neq 0$, we obtain

$$m\ddot{x} = -m\,\frac{\tan\theta}{\cos\phi}\,(s_z + g) + \cos\theta\,\varepsilon\,\tilde{\tau}_\theta \qquad (17)$$
$$m\ddot{y} = m\tan\phi\,(s_z + g) + \cos\phi\,\varepsilon\,\tilde{\tau}_\phi \qquad (18)$$
$$\ddot{z} = k_{pz}\,e_z - k_{vz}\,\dot{z} \qquad (19)$$
$$\ddot{\psi} = -k_{p\psi}\,e_\psi - k_{v\psi}\,\dot{\psi} \qquad (20)$$

The control parameters $k_{p\psi}$, $k_{v\psi}$, $k_{pz}$ and $k_{vz}$ should be carefully chosen to ensure a stable, well-damped response in the vertical and yaw axes [14]. From Eqs. 19 and 20 it follows that $\psi \to \psi_d$ and $z \to z_d$.
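A minimal sketch of the altitude and yaw loops (14)–(16) is given below, assuming the state $(z, \dot{z}, \phi, \theta, \psi, \dot{\psi})$ is available from the sensors. The gain structure follows the equations above; the lower bound on $\cos\theta\cos\phi$ and the yaw error sign convention are our own assumptions rather than details from the paper.

```cpp
// Sketch of the altitude thrust (14)-(15) and yaw PD torque (16).
// Gain values, the cos-product floor and the yaw sign convention are assumptions.
#include <cmath>
#include <algorithm>

struct AltYawGains { double kpz, kvz, kppsi, kvpsi; };

// Eqs. (14)-(15): collective thrust from the altitude PD term s_z.
double thrustInput(double z, double zDot, double zDes,
                   double phi, double theta,
                   double m, double g, const AltYawGains& k)
{
    double sz = k.kpz * (zDes - z) - k.kvz * zDot;
    double c  = std::cos(theta) * std::cos(phi);
    c = std::max(c, 0.5);                 // avoid dividing by a small value (assumption)
    return m * (sz + g) / c;
}

// Eq. (16): yaw PD torque. The error sign is chosen here so that the loop
// psi'' = tau drives psi toward psiDes (our assumption on the convention).
double yawTorque(double psi, double psiDot, double psiDes, const AltYawGains& k)
{
    double ePsi = psi - psiDes;
    return -k.kppsi * ePsi - k.kvpsi * psiDot;
}
```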


4.2 Roll Control

Note from Eqs. 15 and 19 that $s_z \to 0$. For a time $T$ large enough, $s_z$ and $\psi$ are arbitrarily small; therefore, Eqs. 17 and 18 reduce to

$$m\ddot{x} = -m\,g\,\frac{\tan\theta}{\cos\phi} + \cos\theta\,\varepsilon\,\tilde{\tau}_\theta \qquad (21)$$
$$m\ddot{y} = m\,g\tan\phi + \cos\phi\,\varepsilon\,\tilde{\tau}_\phi \qquad (22)$$

We first deal with the subsystem given by Eqs. 6 and 22. Under hovering conditions, $|\phi|$ is small, so the difference $\tan\phi - \phi$ is arbitrarily small and we can consider $\phi \approx \tan\phi$. Therefore, the subsystem 6–22 reduces to

$$\ddot{y} = g\,\phi + \frac{\varepsilon}{m}\,\tilde{\tau}_\phi \qquad (23)$$
$$\ddot{\phi} = \tilde{\tau}_\phi \qquad (24)$$

which can be seen as a system of four integrators in cascade:

$$\dot{y}_1 = y_2 \qquad (25)$$
$$\dot{y}_2 = y_3 + \varepsilon'\,\tilde{\tau}_\phi \qquad (26)$$
$$\dot{y}_3 = y_4 \qquad (27)$$
$$\dot{y}_4 = \tilde{\tau}_\phi \qquad (28)$$

where $y_1 = y$, $y_2 = \dot{y}$, $y_3 = \phi$, $y_4 = \dot{\phi}$, and $\varepsilon' = \frac{\varepsilon}{m}$. Now, we propose the following control law:

$$\tilde{\tau}_\phi = -k_{0y}\,y_1 - k_{1y}\,y_2 - k_{2y}\,y_3 - k_{3y}\,y_4 \qquad (29)$$

where $k_{0y}$, $k_{1y}$, $k_{2y}$, $k_{3y}$ are positive constants, and the characteristic polynomial of the closed-loop system is

$$p^4 + (k_{3y} + \varepsilon' k_{1y})\,p^3 + (k_{2y} + \varepsilon' k_{0y})\,p^2 + k_{1y}\,p + k_{0y} = 0 \qquad (30)$$

The roots of Eq. 30 must have negative real part in order to satisfy the Hurwitz stability criterion, which is obtained when:

– $k_{0y},\; k_{1y},\; k_{2y},\; k_{3y} > 0$;
– $(k_{2y} + \varepsilon' k_{0y})(k_{3y} + \varepsilon' k_{1y}) - k_{1y} > 0$;
– $(k_{2y} + \varepsilon' k_{0y})(k_{3y} + \varepsilon' k_{1y}) - k_{1y} > k_{0y}\,\dfrac{(k_{3y} + \varepsilon' k_{1y})^2}{k_{1y}}$.
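These inequalities can be evaluated numerically before selecting the gains. The sketch below simply codes the three conditions listed above for given gains and a given $\varepsilon'$; the gain values in the example are placeholders chosen only to illustrate the call, not the flight gains reported in Section 6.

```cpp
// Sketch: evaluate the stability conditions for the closed-loop polynomial (30).
// The example gains at the bottom are placeholders, not the experimental gains.
#include <iostream>

bool gainsAreHurwitz(double k0, double k1, double k2, double k3, double epsPrime)
{
    double a3 = k3 + epsPrime * k1;     // coefficient of p^3
    double a2 = k2 + epsPrime * k0;     // coefficient of p^2

    bool positive   = k0 > 0 && k1 > 0 && k2 > 0 && k3 > 0;
    bool secondCond = a2 * a3 - k1 > 0;
    bool thirdCond  = a2 * a3 - k1 > k0 * a3 * a3 / k1;
    return positive && secondCond && thirdCond;
}

int main()
{
    // Placeholder gains and coupling value, purely for illustration.
    std::cout << std::boolalpha
              << gainsAreHurwitz(1.0, 20.0, 30.0, 10.0, 0.01) << std::endl;
}
```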

4.3 Pitch Control

Consider now the subsystem given by Eqs. 5 and 21. Assuming a small angle $|\theta|$, such that the difference $\tan\theta - \theta$ is arbitrarily small, we have $\theta \approx \tan\theta$. Therefore, the subsystem 5–21 reduces to

$$\ddot{x} = -g\,\theta + \frac{\varepsilon}{m}\,\tilde{\tau}_\theta \qquad (31)$$
$$\ddot{\theta} = \tilde{\tau}_\theta \qquad (32)$$


which can be seen as a system of four integrators in cascade:

$$\dot{x}_1 = x_2 \qquad (33)$$
$$\dot{x}_2 = x_3 + \varepsilon'\,\tilde{\tau}_\theta \qquad (34)$$
$$\dot{x}_3 = x_4 \qquad (35)$$
$$\dot{x}_4 = \tilde{\tau}_\theta \qquad (36)$$

where $x_1 = x$, $x_2 = \dot{x}$, $x_3 = \theta$, $x_4 = \dot{\theta}$, and $\varepsilon' = \frac{\varepsilon}{m}$. Following the same procedure as for the roll control, the proposed controller is given by

$$\tilde{\tau}_\theta = -k_{0x}\,x_1 - k_{1x}\,x_2 - k_{2x}\,x_3 - k_{3x}\,x_4 \qquad (37)$$

where the k’s are chosen in order to satisfy the Hurwitz stability criterion.

5 System Configuration

The proposed vision algorithm and the controller presented above have been tested on a system composed of a four-rotor aircraft platform, a ground station personal computer (PC), and a high frequency (HF) video link (onboard transmitter, on-ground receiver).

5.1 Aerial Vehicle

The quad-rotor experimental platform shown in Fig. 1 was developed at the UTC, France. Some of its characteristics are summarized in Table 1. The onboard electronics is composed of two interconnected cards: the first board is the control unit, while the second one deals with the motor drivers. The control unit card performs the essential tasks of sensing, communicating and stabilizing the UAV attitude during flight. The board properties can be summarized as follows.

– Processor: a Texas Instruments TMS320F2812 DSP module is used to process the data from the different sensing devices and to compute the control algorithm, which sends the control inputs in the form of four PWM signals to the motor drivers.
– Inertial sensors: a MIDG II INS/GPS from Microbotics Inc. is used to measure the angular position of the rotorcraft. We also use three additional gyros to measure the angular velocity at a higher rate.

Table 1 Characteristics of the rotorcraft

Parameter                  Value
Diameter between rotors    40 cm
Weight                     800 g
Autonomy                   15 min
Power                      12 V, 2,200 mAh Li-Po Pro-TroniK battery
Motor                      Booster 1200 Brushless


Fig. 3 The electronics on board. Left side: electronic card with the DSP, rate gyros, IMU connections, atmospheric pressure sensor and wireless modem. Right side: signal conditioner board







– Atmospheric pressure sensor: a Freescale MPXH6115A pressure sensor is used, with an appropriate amplifier circuit, to measure the altitude of the vehicle over an appropriate sensing range.
– Battery voltage measurement circuit: this circuit provides the actual voltage level of the supply battery. This information is used for several purposes: to perform a safe landing and turn-off before the battery is over-discharged (avoiding accidents), and as an input to the preprocessing stage of the incoming measurements from the atmospheric pressure sensor.
– Wireless link: an XBee ZB ZigBee PRO radio modem is used to link the ground station and the aerial vehicle. This communication link can be used to introduce external control inputs, send the sensor information to the ground station, etc.

The second board contains:



– Signal conditioner: in this stage, each control input of the four motors is decoupled from the rest of the electronic systems. The PWM signals are also filtered and conditioned.

The control unit board and the signal conditioner board are shown in Fig. 3.

Fig. 4 The experimental platform: UAV and ground station


Fig. 5 The UAV vision system: CTDM-5351 camera on board and the 4-Antenna Diversity System Receiver

5.2 Ground Station

The ground station consists of a desktop PC, a Cyborg-X flight-simulator joystick, an XBee ZB ZigBee PRO radio modem and a Diversity video receiver system. Using this station, the user can send information to the aerial vehicle. Different flying modes can be chosen: manual control, altitude stabilization using the pressure sensor, vision-based position hold, and reactive navigation. The quad-rotor includes some safety features, such as an emergency stop switch and a check that verifies that the thrust command on the ground station is at zero before starting the motors. The ground station receives and saves all the information needed to debug and analyze the flight experiments and results. The complete system can be seen in Fig. 4.

5.3 Vision System Components

The UAV vision system is shown in Fig. 5. Our quad-rotor is equipped with a high-definition CTDM-5351 camera pointing downwards, with a resolution of 640 × 480 pixels. The camera is connected to a 200 mW micro video and audio HF transmitter. Images from the vision system are recovered on the ground by a 4-Antenna Diversity System receiver, which is connected to the ground station PC through a USB frame grabber. Video is transmitted at a rate of 30 Hz and is intended for diverse tasks, such as altitude estimation using landing marks, navigation and obstacle avoidance. The vision algorithms also run on the ground station PC and are programmed in Visual C++ using OpenCV functions [19].
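A rough sketch of how such a ground-station loop could be organized with OpenCV is shown below: grab a frame from the USB frame grabber, run the vision pipeline, and send the resulting commands over the wireless modem. The capture index, the sendToVehicle() helper and the exit condition are hypothetical placeholders; the actual ground-station software of the platform is not reproduced here.

```cpp
// High-level sketch of the ground-station processing loop (assumptions throughout).
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Hypothetical helper: serialize and transmit the commands over the XBee link.
void sendToVehicle(double thrust, double tauPhi, double tauTheta, double tauPsi);

int main()
{
    cv::VideoCapture grabber(0);                 // USB frame grabber (index assumed)
    cv::Mat frame, gray;

    while (grabber.read(frame)) {                // ~30 Hz video stream
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // 1. Detect the four circles and estimate the pose (Section 3.2).
        // 2. Track features and compute the translational velocity (Section 3).
        // 3. Compute the control inputs (Section 4) from the estimated state.
        // 4. Transmit the result to the embedded controller, e.g.:
        // sendToVehicle(u, tauPhi, tauTheta, tauPsi);

        if (cv::waitKey(1) == 27) break;         // stop on Esc
    }
    return 0;
}
```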

6 Experimental Application

Using our position estimation algorithm, the vision system provides the UAV position feedback, while the embedded inertial electronics provide the attitude data. In order to verify the performance of the vision-based position control method, we conducted a set of experiments, which are described next.


Fig. 6 Euler angles (pitch θ, roll φ and yaw ψ, in degrees) versus samples, experimental results

Fig. 7 Velocities (Vx, Vy, Vz) versus samples, experimental results

Fig. 8 Position errors (X, Y and eZ, in cm) versus samples, experimental results


Fig. 9 The quad-rotor helicopter stabilized over the landmark in a desired position

The initial vehicle position, with the UAV located exactly on top of the landing pad, is used as the desired X-Y position reference for the rest of the experiment; in addition, the desired Z is fixed at 150 cm and the desired yaw at 5 degrees. The parameter values used are: $k_{pz} = 0.68$, $k_{vz} = 1.6$, $k_{p\psi} = 38$, $k_{v\psi} = 1{,}350$, $k_{0y} = 1$, $k_{1y} = 2$, $k_{2y} = 38$, $k_{3y} = 1{,}400$, $k_{0x} = 1$, $k_{1x} = 2$, $k_{2x} = 38$, and $k_{3x} = 1{,}350$. Notice that the attitude stabilization control system always runs at a higher frequency in order to guarantee that the Euler angles stay close to zero (hover). Figures 6, 7 and 8 show the results obtained for the experiment; here, every ten samples correspond to 1 s. Notice that the pitch and roll angles remain in the interval (−1.5, 1.5) degrees. Therefore, it can be concluded that the position control adds only small changes to the attitude of the rotorcraft in order to bring the position to the desired one. This is an important property because the position controller runs at a lower rate than the attitude controller, so a smooth position control is necessary to ensure the stability of the vehicle. A picture of the quad-rotor during a real-time experiment is shown in Fig. 9. It can be seen that the UAV maintains its position relative to the landmark.

7 Conclusions and Future Work

7.1 Conclusions

A vision algorithm for detection and tracking of a landing pad was proposed and tested in a real-time application. A control strategy based on full state feedback was reviewed and applied in these experiments, aiming to provide some level of autonomy to a quad-rotor UAV. The monocular vision system was used to estimate the X-Y-Z position of the aerial vehicle with respect to the landing pad using a homography estimation technique. Optical flow is also obtained from the vision system, in order to estimate the translational speed of the UAV.


A control algorithm was implemented onboard to stabilize the UAV's attitude; it runs at a higher rate than the image processing and ensures that the Euler angles of the vehicle remain very close to zero, which considerably simplifies all the tasks concerning image processing (position estimation, optical flow). The experiment was successfully performed indoors, showing that the quad-rotor was stabilized at a selected X-Y-Z position above the landing pad. The attitude of the vehicle was not significantly perturbed by the control input used to correct the UAV X-Y-Z position, and the vehicle's velocity also remained very close to zero. A video of the experiments can be seen at http://www.youtube.com/watch?v=SQlSXruTnj0.

7.2 Future Work

Future work will focus on improving the quad-rotor platform to increase its robustness in the presence of wind gusts, and on performing autonomous UAV takeoff and landing tasks, indoors as well as outdoors.

References

1. Reinhardt, J.R., James, J.E., Flannagan, E.M.: Future employment of UAVs: issues of jointness. Joint Force Q. 22, 36–41 (1999)
2. Salazar, S., Escareno, J., Lara, D., Lozano, R.: Embedded control system for a four rotor UAV. Int. J. Adapt. Control Signal Process. 21(2–3), 189–204 (2007)
3. Kendoul, F., Lara, D., Fantoni-Coichot, I., Lozano, R.: Real-time nonlinear embedded control for an autonomous quadrotor helicopter. AIAA J. Guid. Control Dyn. 30(4), 1049–1061 (2007)
4. Saripalli, S., Montgomery, J., Sukhatme, G.: Vision-based autonomous landing of an unmanned aerial vehicle. In: IEEE International Conference on Robotics and Automation, pp. 2799–2804 (2002)
5. Romero, H., Benosman, R., Lozano, R.: Stabilization and location of a four rotor helicopter applying vision. In: American Control Conference, pp. 3930–3936. Minneapolis, USA (2006)
6. Salazar, S., Romero, H., Lozano, R., Castillo, P.: Modeling and real-time stabilization of an aircraft having eight rotors. J. Intell. Robot. Syst. 54(1–3), 455–470 (2009)
7. Saripalli, S., Montgomery, J., Sukhatme, G.: Visually-guided landing of an unmanned aerial vehicle. IEEE Trans. Robot. Autom. 19(3), 371–381 (2003)
8. Yang, Z.F., Tsai, W.H.: Using parallel line information for vision-based landmark location estimation and an application to automatic helicopter landing. Robot. Comput.-Integr. Manuf. 14(4), 297–306 (1998)
9. Romero, H., Salazar, S., Lozano, R.: Real-time stabilization of an eight-rotor UAV using optical flow. IEEE Trans. Robot. 25(4), 809–817 (2009)
10. Altug, E., Ostrowski, J., Taylor, C.: Control of a quadrotor helicopter using dual camera visual feedback. Int. J. Rob. Res. 24(5), 329–341 (2005)
11. Rondon, E., Salazar, S., Escareno, J., Lozano, R.: Vision-based position control of a two-rotor VTOL miniUAV. J. Intell. Robot. Syst. 57(1–4), 49–64 (2010)
12. Achtelik, M., Bachrach, A., He, R., Prentice, S., Roy, N.: Autonomous navigation and exploration of a quadrotor helicopter in GPS-denied indoor environments. In: Robotics: Science and Systems Conference (2008)
13. Demonceaux, C., Vasseur, P., Pégard, C.: Omnidirectional vision on UAV for attitude computation. In: IEEE International Conference on Robotics and Automation (2006)
14. Castillo, P., Dzul, A., Lozano, R.: Real-time stabilization and tracking of a four-rotor mini rotorcraft. IEEE Trans. Control Syst. Technol. 12(4), 510–516 (2004)
15. Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media (2008)
16. Horaud, R., Conio, B., Leboulleux, O., Lacolle, B.: An analytic solution for the perspective 4-point problem. Comput. Vis. Graph. Image Process. 47, 33–44 (1989)
17. Barron, J.L., Fleet, D.J., Beauchemin, S.S.: Performance of optical flow techniques. Int. J. Comput. Vis. 12(1), 43–77 (1994)
18. Bouguet, J.Y.: Pyramidal Implementation of the Lucas Kanade Feature Tracker—Description of the Algorithm. Intel Corporation—Microprocessor Research Labs (2002)
19. Open Computer Vision Library. http://sourceforge.net/projects/opencvlibrary/. Accessed September 2009