AIAA 2007-6746
AIAA Guidance, Navigation and Control Conference and Exhibit 20 - 23 August 2007, Hilton Head, South Carolina
Rapid Motion Estimation of a Target Moving with Time-Varying Velocity

Vladimir N. Dobrokhodov, Isaac I. Kaminer, Kevin D. Jones, Ioannis Kitsios
Naval Postgraduate School, Monterey, CA 93943

Chengyu Cao, Lili Ma, Naira Hovakimyan, and Craig Woolsey
Dept. of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA 24061

This paper describes the development of a vision-based motion estimation and target tracking system for a small unmanned air vehicle (SUAV) equipped with an inertially stabilized gimballed camera. The work concentrates on the design of a new rapid motion estimation algorithm for a ground target moving with time-varying velocity. The capability to estimate target motion for tracking significantly improves the operational utility of an inexpensive tactical SUAV. This work extends previous results in which a SUAV simultaneously tracked a ground target moving at constant speed and estimated its motion (position and velocity). In this paper, we allow for time-varying unknown target velocity. The target velocity estimation problem is formulated such that the recently developed L1 rapid estimator can be applied. The estimator uses two real-time measurements: the target position in the camera frame, provided by image processing software, and the relative altitude above the target, provided by an external geo-referenced database. Simulations show that the proposed algorithm is effective at tracking a non-cooperating target moving with unknown velocity, despite repeated out-of-frame events. The paper also describes the development of a Hardware-in-the-Loop simulation, reflecting a realistic tactical scenario, that is intended to provide further validation in advance of flight tests.
I. Introduction
Refs. 1–5 have reported theoretical and experimental results on a vision-based tracking and motion estimation system for a small unmanned aerial vehicle (SUAV) tasked to follow a ground target. The original hardware setup, presented in Fig. 1, was motivated by the requirement to reduce the burden on the two operators who must coordinate the control of the UAV and its gimballed camera in a typical reconnaissance scenario. In the system that was developed, the UAV flies autonomously along a predefined search pattern, while a gimbal operator on the ground may select a target of interest using a joystick that steers the onboard gimballed camera. Real-time video, along with the UAV-gimbal telemetry, is transmitted to the ground station wirelessly. Once the target is selected, the UAV and the camera automatically track it, and the system performs real-time estimation of the target's unknown velocity using the UAV-gimbal telemetry and the extracted target position on the image plane. The system consists of airborne and ground components connected in real time over wireless links. The airborne component consists of a SUAV equipped with an autopilot coupled to a real-time embedded CPU that is responsible for coordinated control of the SUAV and the gimballed camera. The ground component includes a set of communication links (both video and control) as well as an Automatic Target Tracking (ATT) computer that provides vision-based feedback. An important feature is the system's ability to perform autonomous tracking and motion estimation of a moving target even in the presence of target loss events. The initial design assumes that the ground target moves with unknown but constant velocity. The algorithm is based on measurements that include (1) the UAV's altitude above the target, provided by an external geo-referencing database using the Perspective View Nascent Technologies (PVNT) system, (2) the coordinates of the target in the camera frame, provided by image tracking software, and (3) the position of the gimballed camera in the inertial frame, computed from the UAV's position and attitude along with the gimbal orientation relative to the UAV.
Figure 1. Flight test bed setup.
This work extends the previous results 1–5 by relaxing the assumption that the target velocity is constant. Furthermore, the target velocity estimate is used in the UAV control law to improve target tracking performance. The problem is cast in a framework appropriate for application of the L1 adaptive estimator, which ensures fast convergence of the estimated parameters. This approach differs from the commonly used Extended Kalman Filter (EKF) methods reported, for example, in Refs. 6, 7. The paper is organized as follows. Section II describes the problem of vision-based ground target tracking. Estimation of the target's unknown time-varying velocity is presented in Sec. III. Section IV describes how the velocity estimate is incorporated into the guidance law. Out-of-frame events, which are typical and problematic in vision-based tracking applications, are addressed in Sec. V. The laboratory setup and simulation results are described in Sec. VI. Conclusions are given in Sec. VII.
II. Problem Formulation
Figure 2 represents the 3D and 2D kinematics of the vision-based target tracking problem for a UAV equipped with a gimballed camera (Ref. 2). The control objective is to coordinate the motion of the UAV and gimbal with respect to a moving target, using the UAV's turn rate and the gimbal's pan rate as control inputs, so that the UAV orbits the target at a predefined, desired horizontal range while the target remains in the center of the image captured by the UAV's onboard camera (Ref. 2).

Let {I} be the inertial reference frame, {B} be the UAV's body-fixed frame, and {C} be the gimballed-camera frame. The origin of {C} is the gimbal's center of rotation, which is assumed to coincide with the origin of {B}. Let ${}^I_C R$, ${}^I_B R$, and ${}^B_C R$ be the coordinate transformations from {C} to {I}, from {B} to {I}, and from {C} to {B}, respectively. These transformations may be constructed from available measurements. Let $p_b(t)$ denote the position of the UAV with respect to the origin of {I}, expressed in {I}. Also, let $p_c(t) = [x_c(t), y_c(t), z_c(t)]^\top$ denote the position of the target with respect to the origin of {C}, expressed in {C}. Define the vector

$$p(t) = [p_x(t), p_y(t), p_z(t)]^\top = {}^I_C R \, p_c(t). \tag{1}$$

With these definitions, the inertial position of the target is

$$p_t(t) = p_b(t) + p(t), \tag{2}$$

which implies that

$$\dot p(t) = -\dot p_b(t) + \dot p_t(t). \tag{3}$$
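To make the frame composition in (1)-(2) concrete, the following is a minimal Python sketch, assuming a Z-Y-X (yaw-pitch-roll) Euler sequence for both the UAV attitude and the gimbal angles; the paper does not specify the convention, and all numerical values are illustrative placeholders, not flight data.

```python
import numpy as np

def euler_to_dcm(phi, theta, psi):
    """Rotation from the rotated frame to the reference frame
    (aerospace Z-Y-X convention, assumed here for illustration)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth*cps, sph*sth*cps - cph*sps, cph*sth*cps + sph*sps],
        [cth*sps, sph*sth*sps + cph*cps, cph*sth*sps - sph*cps],
        [-sth,    sph*cth,               cph*cth]])

# illustrative UAV attitude and gimbal tilt/pan angles [rad]
I_B_R = euler_to_dcm(0.05, -0.02, 1.2)   # {B} -> {I}, from autopilot attitude
B_C_R = euler_to_dcm(0.0, -0.6, 0.3)     # {C} -> {B}, from gimbal angles
I_C_R = I_B_R @ B_C_R                    # {C} -> {I}, composed transformation

p_b = np.array([100.0, 50.0, -200.0])    # UAV position in {I} (NED), example
p_c = np.array([10.0, -5.0, 250.0])      # target position in {C}, example
p   = I_C_R @ p_c                        # eq. (1)
p_t = p_b + p                            # eq. (2): inertial target position
```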
[Figure: panels (a) 3D view and (b) 2D view of the UAV-target geometry, showing the frames {I}, {B}, {C}; the vectors $p_b$, $p = {}^I_C R\, p_c$, and $p_t$; the UAV heading $\psi$, target course $\psi_t$, and bearing $\eta$; the LOS angles $\lambda$, $\lambda_g$, $\lambda_p$; and the velocities $V_g$ and $V_t$.]
Figure 2. Relative kinematics of UAV-target motion.
Assuming that the center of the target is detected by an image processing algorithm, and using a pin-hole camera model, the following two measurements can be obtained:

$$\begin{bmatrix} u(t) \\ v(t) \end{bmatrix} = \frac{1}{z_c(t)} \begin{bmatrix} x_c(t) \\ y_c(t) \end{bmatrix}, \tag{4}$$

where, without loss of generality, the camera's focal length is assumed to be one. The components u(t) and v(t) represent the pixel-scale coordinates of the center of the target extracted from the image plane. Let h(t) denote the relative altitude of the UAV above the target. Letting φ(t) and θ(t) represent the (known) roll and pitch Euler angles for the rotation matrix ${}^I_C R$, the relative altitude is

$$h(t) = -x_c(t)\sin\theta(t) + y_c(t)\sin\varphi(t)\cos\theta(t) + z_c(t)\cos\varphi(t)\cos\theta(t). \tag{5}$$

Summarizing (4) and (5), we have the following three measurements:

$$\begin{bmatrix} u(t) \\ v(t) \\ h(t) \end{bmatrix} = \begin{bmatrix} x_c(t)/z_c(t) \\ y_c(t)/z_c(t) \\ -x_c(t)\sin\theta(t) + y_c(t)\sin\varphi(t)\cos\theta(t) + z_c(t)\cos\varphi(t)\cos\theta(t) \end{bmatrix} \triangleq g(p_c(t)). \tag{6}$$

Provided that $-u(t)\sin\theta(t) + v(t)\sin\varphi(t)\cos\theta(t) + \cos\varphi(t)\cos\theta(t) \neq 0$, the vector $g(p_c(t))$ of output functions may be inverted to obtain

$$p_c(t) = \begin{bmatrix} x_c(t) \\ y_c(t) \\ z_c(t) \end{bmatrix} = \frac{h(t)}{-u(t)\sin\theta(t) + v(t)\sin\varphi(t)\cos\theta(t) + \cos\varphi(t)\cos\theta(t)} \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix}. \tag{7}$$

Therefore, p(t) can be calculated from (1) and (7). Let ρ(t) denote the horizontal range between the UAV and the target. Let $V_{uav}(t)$ be the UAV's speed and let $V_g(t)$ be the projection of $V_{uav}(t)$ onto the horizontal plane. Denoting the UAV flight path angle by γ(t), one has $V_g(t) = V_{uav}(t)\cos\gamma(t)$. Let $V_t(t)$ be the velocity of the target in the horizontal plane and $V_{th}(t)$ be the rate of change of target elevation. The kinematic equations for a UAV tracking a target can be written as (Refs. 1–4):

$$\dot\eta(t) = -\frac{V_g(t)\cos\eta(t) - V_t(t)\cos[\psi_t(t) - (\psi(t) - \eta(t))]}{\rho(t)} + \dot\psi(t), \tag{8a}$$

$$\dot\rho(t) = -V_g(t)\sin\eta(t) + V_t(t)\sin(\psi_t(t) - (\psi(t) - \eta(t))), \tag{8b}$$

$$\dot p(t) = -\begin{bmatrix} V_g(t)\sin\psi(t) \\ V_g(t)\cos\psi(t) \\ V_{uav}(t)\sin\gamma(t) \end{bmatrix} + \begin{bmatrix} V_t(t)\sin\psi_t(t) \\ V_t(t)\cos\psi_t(t) \\ V_{th}(t) \end{bmatrix}. \tag{8c}$$
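As a concrete illustration of the measurement inversion in (7), the following minimal Python sketch recovers $p_c$ from $(u, v, h)$ and the known roll/pitch angles; the numerical values are illustrative only.

```python
import numpy as np

def invert_measurements(u, v, h, phi, theta):
    """Recover the camera-frame target position p_c from the image-plane
    coordinates (u, v) and relative altitude h, per eq. (7). phi and theta
    are the (known) roll and pitch Euler angles of the rotation I_C_R."""
    denom = (-u*np.sin(theta) + v*np.sin(phi)*np.cos(theta)
             + np.cos(phi)*np.cos(theta))
    if abs(denom) < 1e-9:
        raise ValueError("degenerate geometry: eq. (7) is not invertible")
    return (h / denom) * np.array([u, v, 1.0])

# example: target slightly off image center, UAV 200 m above the target
p_c = invert_measurements(u=0.05, v=-0.02, h=200.0, phi=0.1, theta=-0.05)
```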
The control objective is to regulate the horizontal range ρ(t) between the UAV and the target to some desired value ρ_d by controlling the UAV's turn rate ψ̇(t). For simplicity, we consider the case where ρ_d is constant. The relative altitude h(t) between the UAV and the target is assumed to be known. Notice that h(t) is not regulated in this paper; UAV altitude may readily be controlled by a conventional autopilot.
III. Estimation of Target's Unknown Time-Varying Velocity
Omitting the third component of equation (8c) gives

$$\dot x(t) = -\phi(t) + \omega(t), \quad x(t) \triangleq \begin{bmatrix} p_x(t) \\ p_y(t) \end{bmatrix}, \quad \phi(t) \triangleq V_g(t)\begin{bmatrix} \sin\psi(t) \\ \cos\psi(t) \end{bmatrix}, \quad \omega(t) \triangleq V_t(t)\begin{bmatrix} \sin\psi_t(t) \\ \cos\psi_t(t) \end{bmatrix}, \tag{9}$$

where $V_g(t)$ and $\psi(t)$ are the UAV's ground speed and yaw angle, which are available from onboard measurements. The target position relative to the UAV, expressed in {I}, can be calculated from (1) and (7) as

$$p(t) = \begin{bmatrix} p_x(t) \\ p_y(t) \\ p_z(t) \end{bmatrix} = {}^I_C R\, p_c(t) = \frac{h(t)}{-u(t)\sin\theta(t) + v(t)\sin\varphi(t)\cos\theta(t) + \cos\varphi(t)\cos\theta(t)}\; {}^I_C R \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix}. \tag{10}$$

III.A. Rapid Estimator
A recently-developed rapid estimator can be applied to estimate ω(t) defined in (9). In the following, only the essential details of the adaptive estimation method are provided. More details may be found in Refs. 8, 9. Consider the following system dynamics:

$$\dot x(t) = A_m x(t) + \omega(t), \qquad x(0) = x_0, \tag{11}$$

where $x(t) \in \mathbb{R}^n$ is the system state vector (assumed to be measurable) and $A_m$ is a known $n \times n$ Hurwitz matrix. The forcing term $\omega(t) \in \mathbb{R}^n$ is a vector of unknown time-varying signals or parameters. Let

$$\omega(t) \in \Omega, \tag{12}$$

where Ω is a known compact set. Suppose that ω(t) is uniformly bounded and continuously differentiable with uniformly bounded derivative. That is,

$$\|\omega(t)\| \le \mu_\omega < \infty \quad \text{and} \quad \|\dot\omega(t)\| \le d_\omega < \infty, \qquad \forall\, t \ge 0, \tag{13}$$

where $\mu_\omega$ and $d_\omega$ are constants. The estimation objective is to design an adaptive estimator that provides fast estimation of ω(t). The key components of the adaptive estimator are the state predictor, the adaptive law, and a low-pass filter that is applied in the estimation step.

State Predictor: Define the state predictor

$$\dot{\hat x}(t) = A_m \hat x(t) + \hat\omega(t), \qquad \hat x(0) = x_0, \tag{14}$$

which has the same structure as the system in (11), but with the state vector x(t) and unknown signal vector ω(t) replaced by their estimates, denoted by hats.

Adaptive Law: The estimate ω̂ evolves according to the following dynamics:

$$\dot{\hat\omega}(t) = \Gamma_c\, \mathrm{Proj}(\hat\omega(t), -P\tilde x(t)), \qquad \hat\omega(0) = \hat\omega_0, \tag{15}$$

where $\tilde x(t) = \hat x(t) - x(t)$ is the error between the state predictor and the system state, $\Gamma_c \in \mathbb{R}^+$ determines the adaptation rate, chosen sufficiently large to ensure fast convergence, and P is the solution of the algebraic Lyapunov equation $A_m^\top P + P A_m = -Q$ for some choice of matrix Q > 0.
Estimation: Switching momentarily to the Laplace domain, define

$$\omega_r(s) = C(s)\,\omega(s), \qquad \omega_e(s) = C(s)\,\hat\omega(s), \tag{16}$$

where C(s) is a diagonal matrix whose i-th diagonal element $C_i(s)$ is a strictly proper, stable transfer function with low-pass gain $C_i(0) = 1$. Based on the results in Refs. 8, 9, let

$$C_i(s) = \frac{c}{s + c}, \tag{17}$$

where c is a positive constant. The fast adaptive estimator ensures that $\omega_e(t)$ estimates the unknown signal ω(t) with the final precision

$$\|\omega_e(t) - \omega(t)\|_{L_\infty} \le \|\omega_e(t) - \omega_r(t)\|_{L_\infty} + \|\omega_r(t) - \omega(t)\|_{L_\infty} \le \frac{\gamma_c}{\sqrt{\Gamma_c}} + \|1 - C(s)\|_{L_1}\, \|\omega(t)\|_{L_\infty}, \tag{18}$$

where $\|\cdot\|_{L_\infty}$ denotes the $L_\infty$-norm of a signal and

$$\gamma_c = \sqrt{\frac{\omega_m}{\lambda_{\min}(P)}}\; \|C(s)H^{-1}(s)\|_{L_1}, \tag{19}$$

where $H(s) = (sI - A_m)^{-1}$ and $\|\cdot\|_{L_1}$ denotes the $L_1$ gain of a stable system, and where

$$\omega_m = 4\mu_\omega^2 + 2\mu_\omega d_\omega \frac{\lambda_{\max}(P)}{\lambda_{\min}(Q)}, \tag{20}$$

where $\mu_\omega$ and $d_\omega$ are defined in (13).

III.B. Estimation of Target's Time-Varying Velocity
With x(t) and ω(t) defined as indicated in equation (9), estimates of $V_t(t)$ and $\psi_t(t)$ (denoted by $\hat V_t(t)$ and $\hat\psi_t(t)$, respectively) may be obtained through the following steps, as outlined in the previous section:8

• State Estimator:
$$\dot{\hat x}(t) = A_m \tilde x(t) - V_g(t)\begin{bmatrix} \sin\psi(t) \\ \cos\psi(t) \end{bmatrix} + \hat\omega(t), \qquad \tilde x(t) = \hat x(t) - x(t). \tag{21}$$

• Adaptive Law:
$$\dot{\hat\omega}(t) = \Gamma_c\, \mathrm{Proj}(\hat\omega(t), -P\tilde x(t)), \tag{22}$$
with $\Gamma_c$ sufficiently large.

• Low-Pass Filter:
$$\omega_e(s) = C(s)\,\hat\omega(s), \qquad C(s) = \frac{c}{s + c}, \tag{23}$$
with c > 0.

• Extraction of $\hat V_t(t)$ and $\hat\psi_t(t)$:
$$\hat V_t(t) = \sqrt{\omega_{e1}^2(t) + \omega_{e2}^2(t)}, \qquad \hat\psi_t(t) = \tan^{-1}\!\left(\frac{\omega_{e1}(t)}{\omega_{e2}(t)}\right). \tag{24}$$
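The following is a minimal discrete-time Python sketch of the steps (21)-(24), not the authors' implementation. The projection operator is replaced by simple clipping onto a box `w_max` standing in for the set Ω, the adaptation gain is scaled well below the value used in Sec. VI so that explicit integration at this step size stays stable (consistent with the sampling-rate remark below), and the semi-implicit update order is a numerical convenience; all names and values are illustrative.

```python
import numpy as np

dt    = 1e-3                 # sample time; a larger Gamma needs faster sampling
A_m   = -np.eye(2)           # Hurwitz design matrix
P     = 0.5 * np.eye(2)      # solves A_m^T P + P A_m = -Q with Q = I
Gamma = 1e4                  # adaptation rate (scaled down for explicit Euler)
c     = 50.0                 # filter bandwidth: C(s) = c/(s + c)
w_max = 25.0                 # box replacing the projection set Omega

x_hat, w_hat, w_e = np.zeros(2), np.zeros(2), np.zeros(2)

def estimator_step(x, Vg, psi):
    """One semi-implicit Euler step of (21)-(24); Proj replaced by clipping."""
    global x_hat, w_hat, w_e
    x_til = x_hat - x                                            # prediction error
    w_hat = np.clip(w_hat - dt*Gamma*(P @ x_til), -w_max, w_max) # eq. (22)
    x_hat = x_hat + dt*(A_m @ x_til
                        - Vg*np.array([np.sin(psi), np.cos(psi)])
                        + w_hat)                                 # eq. (21)
    w_e = w_e + dt*c*(w_hat - w_e)                               # eq. (23)
    Vt_hat   = np.hypot(w_e[0], w_e[1])                          # eq. (24)
    psit_hat = np.arctan2(w_e[0], w_e[1])  # four-quadrant form of tan^-1
    return Vt_hat, psit_hat
```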
From (18) it follows that the final estimation precision $\omega_e(t) - \omega(t)$, and the transient time needed to achieve it, can be arbitrarily reduced by increasing the bandwidth of C(s). Increasing the bandwidth of C(s) requires a faster sampling rate for the system measurements and a larger $\Gamma_c$, which in turn demands faster computation.
III.C. Performance Bound
This section presents the performance bound from noise in the measurements to the estimation error. Assume that, given the system dynamics (9), the measured state x(t) is corrupted by noise so that the measurement is $x_n(t) = x(t) + n(t)$. Assume that n(t) is uniformly bounded and continuously differentiable with uniformly bounded derivative. That is,

$$\|\dot n(t)\| \le \mu_n < \infty \quad \text{and} \quad \|\ddot n(t)\| \le a_n < \infty, \qquad \forall\, t \ge 0, \tag{25}$$

where $\mu_n$ and $a_n$ are constants. Then from (9) we have

$$\dot x_n(t) = -\phi(t) + \omega(t) - \dot n(t). \tag{26}$$

Define $n_d(s) = C(s)\, s\, n(s)$. Thus, the final estimation precision becomes

$$
\begin{aligned}
\|\omega_e(t) - \omega(t)\|_{L_\infty} &\le \underbrace{\|\omega_e(t) - (\omega_r(t) - n_d(t))\|_{L_\infty}}_{B_1} + \underbrace{\|\omega_r(t) - \omega(t)\|_{L_\infty}}_{B_2} + \underbrace{\|n_d(t)\|_{L_\infty}}_{B_3} \\
&\le \underbrace{\frac{\bar\gamma_c}{\sqrt{\Gamma_c}}}_{B_1} + \underbrace{\|1 - C(s)\|_{L_1}\|\omega(t)\|_{L_\infty}}_{B_2} + \underbrace{\|sC(s)\|_{L_1}\sqrt{\mathrm{tr}\big( (G^\top(p_c(t))\,G(p_c(t)))^{-1}\big)}\;\|n_o(t)\|_{L_\infty}}_{B_3},
\end{aligned} \tag{27}
$$

where

$$\bar\gamma_c = \sqrt{\frac{\bar\omega_m}{\lambda_{\min}(P)}}\; \|C(s)H^{-1}(s)\|_{L_1} \tag{28}$$

with

$$\bar\omega_m = 4(\mu_\omega + \mu_n)^2 + 2(\mu_\omega + \mu_n)(d_\omega + a_n)\frac{\lambda_{\max}(P)}{\lambda_{\min}(Q)}, \tag{29}$$
in which – no (t) in (27) denotes the noise in the measurements of [u(t), v(t), h(t)]> , – G(pc (t)) denotes the Jacobian matrix of g(pc (t)) with respect to pc (t), and – tr denotes the trace of a matrix, and – µω , dω , µn , and an are defined in (13) and (25), respectively. Considering (27), the noise introduces an extra term B3 and also shows up in B1 . Though increasing Γc can make B1 arbitrarily small, this has no effect on B3 . We also notice that we cannot choose C(s) to make both B2 and B3 small simultaneously. When the noise n(t) is not negligible compared to the true measurements x(t), the bandwidth of the low-pass filter C(s) cannot be chosen arbitrarily large as in the ideal case without noise. This is because ksC(s)kL1 in B3 grows much faster compared with k1 − C(s)kL1 in B2 , as shown in Fig. 3. In summary, bounded measurement noise leads to bounded estimation error, as expected. Notice that the first two terms in the bound (27) can be arbitrarily reduced by increasing Γc and the bandwidth of C(s), while the third term bears close affinity to the classical positional dilution of precision (PDOP) metric that is commonly used in navigation systems to determine a lower bound on the achievable error covariance as a function of geometry of the underlying navigation problem 4, 10 . The smaller the PDOP, the more accurate the navigation solution is. Using our notation, the PDOP for the problem at hand can be written as q PDOP = tr(G> (pc )G(pc ))−1 . (30)
Figure 3. Plots of $\|1 - C(s)\|_{L_1}$ (a) and $\|sC(s)\|_{L_1}$ (b) vs. c.
IV. Guidance Law Design
Notice that the dynamics in (8b) can be rewritten as

$$
\begin{aligned}
\dot\rho(t) &= -V_g(t)\sin\eta(t) + V_t(t)\sin[\eta(t) + (\psi_t(t) - \psi(t))] \\
&= -V_g(t)\sin\eta(t) + V_t(t)[\sin\eta(t)\cos(\psi_t(t) - \psi(t)) + \cos\eta(t)\sin(\psi_t(t) - \psi(t))] \\
&= \underbrace{[-V_g(t) + V_t(t)\cos(\psi_t(t) - \psi(t))]}_{\rho_s(t)}\sin\eta(t) + \underbrace{[V_t(t)\sin(\psi_t(t) - \psi(t))]}_{\rho_c(t)}\cos\eta(t) \\
&= \beta_1(V_t(t), \psi_t(t))\,\sin\big(\eta(t) + \beta_2(V_t(t), \psi_t(t))\big),
\end{aligned} \tag{31}
$$

where

$$\beta_1(V_t(t), \psi_t(t)) = \mathrm{sign}(\rho_s(t))\sqrt{\rho_s^2(t) + \rho_c^2(t)}, \qquad \beta_2(V_t(t), \psi_t(t)) = \tan^{-1}\!\left(\frac{\rho_c(t)}{\rho_s(t)}\right). \tag{32}$$
The guidance law developed in Ref. 2, modified to account for estimation of the target's velocity, is

$$\dot\psi(t) = \frac{V_g(t)\cos\eta(t) - \hat V_t(t)\cos[\hat\psi_t(t) - (\psi(t) - \eta(t))]}{\rho(t)} - k_2\big(\eta(t) - \eta_d(\hat V_t(t), \hat\psi_t(t))\big), \tag{33a}$$

$$\eta_d(\hat V_t(t), \hat\psi_t(t)) = \sin^{-1}\!\left(\frac{-k_1(\rho(t) - \rho_d)}{\beta_1(\hat V_t(t), \hat\psi_t(t))}\right) - \beta_2(\hat V_t(t), \hat\psi_t(t)), \qquad \left|\frac{-k_1(\rho(t) - \rho_d)}{\beta_1(\hat V_t(t), \hat\psi_t(t))}\right| \le 1 - \epsilon, \tag{33b}$$

where $k_i > 0$ (for i = 1, 2) are design gains and ε is a small positive constant. In (33), the measurements of $V_g(t)$ and ψ(t) are available from the onboard autopilot, and ρ(t) can be computed from $[u(t), v(t), h(t)]^\top$ as

$$\rho(t) = h(t)\sqrt{u^2(t) + v^2(t)}. \tag{34}$$

Proposition 1. When $V_t(t)$ and $\psi_t(t)$ are unknown, the fast estimator in (21)-(23), coupled with the guidance law (33), ensures that the tracking error $\rho(t) - \rho_d$ is globally uniformly ultimately bounded as $t \to \infty$.

Proof. Let

$$V(t) = \frac{(\rho(t) - \rho_d)^2}{2} + \frac{[\eta(t) - \eta_d(V_t(t), \psi_t(t))]^2}{2} \tag{35}$$
be the candidate Lyapunov function. Then

$$
\begin{aligned}
\dot V(t) &= (\rho(t) - \rho_d)(\dot\rho(t) - \dot\rho_d) + (\eta(t) - \eta_d(V_t(t), \psi_t(t)))(\dot\eta(t) - \dot\eta_d(V_t(t), \psi_t(t))) \\
&= (\rho(t) - \rho_d)\,\beta_1(V_t(t), \psi_t(t))\sin[\eta(t) + \beta_2(V_t(t), \psi_t(t))] \\
&\quad + (\eta(t) - \eta_d(V_t(t), \psi_t(t)))\Bigg[\frac{V_t(t)\cos(\psi_t(t) - (\psi(t) - \eta(t))) - \hat V_t(t)\cos(\hat\psi_t(t) - (\psi(t) - \eta(t)))}{\rho(t)} \\
&\qquad\qquad - k_2\big(\eta(t) - \eta_d(\hat V_t(t), \hat\psi_t(t))\big) - \dot\eta_d(V_t(t), \psi_t(t))\Bigg] \\
&= (\rho(t) - \rho_d)\,\beta_1(V_t(t), \psi_t(t))\big\{\sin[\eta_d(V_t(t), \psi_t(t)) + \beta_2(V_t(t), \psi_t(t))] \\
&\qquad + \sin[\eta(t) + \beta_2(V_t(t), \psi_t(t))] - \sin[\eta_d(V_t(t), \psi_t(t)) + \beta_2(V_t(t), \psi_t(t))]\big\} \\
&\quad + (\eta(t) - \eta_d(V_t(t), \psi_t(t)))\Bigg[-k_2\big(\eta(t) - \eta_d(V_t(t), \psi_t(t))\big) + k_2\big(\eta_d(\hat V_t(t), \hat\psi_t(t)) - \eta_d(V_t(t), \psi_t(t))\big) \\
&\qquad\qquad + \frac{V_t(t)\cos(\psi_t(t) - (\psi(t) - \eta(t))) - \hat V_t(t)\cos(\hat\psi_t(t) - (\psi(t) - \eta(t)))}{\rho(t)} - \dot\eta_d(V_t(t), \psi_t(t))\Bigg].
\end{aligned} \tag{36}
$$

It can be verified from the fast estimator that $\hat V_t(t)$ and $\hat\psi_t(t)$ are bounded, which implies that $\eta_d(\hat V_t(t), \hat\psi_t(t))$ is also bounded. Let

$$|\eta_d(\hat V_t(t), \hat\psi_t(t)) - \eta_d(V_t(t), \psi_t(t))| \le M_0, \qquad |\beta_1(V_t(t), \psi_t(t))| \le M_1,$$
$$\left|\frac{V_t(t)\cos(\psi_t(t) - (\psi(t) - \eta(t))) - \hat V_t(t)\cos(\hat\psi_t(t) - (\psi(t) - \eta(t)))}{\rho(t)}\right| \le M_2, \tag{37}$$

where $M_0$, $M_1$, and $M_2$ are positive constants. Next, consider

$$\dot\eta_d(V_t(t), \psi_t(t)) = \frac{1}{\sqrt{1 - \dfrac{k_1^2(\rho(t) - \rho_d)^2}{\beta_1^2(V_t(t), \psi_t(t))}}}\cdot \frac{-k_1(\dot\rho(t) - \dot\rho_d)\beta_1(V_t(t), \psi_t(t)) + k_1(\rho(t) - \rho_d)\dot\beta_1(V_t(t), \psi_t(t))}{\beta_1^2(V_t(t), \psi_t(t))} - \dot\beta_2(V_t(t), \psi_t(t)). \tag{38}$$

It can be shown that $\dot\beta_1(V_t(t), \psi_t(t))$ and $\dot\beta_2(V_t(t), \psi_t(t))$ are bounded. Straightforward algebraic manipulations lead to the following upper bound:

$$|\dot\eta_d(V_t(t), \psi_t(t))| \le M_3(k_1) + M_4(k_1)|\rho(t) - \rho_d|, \tag{39}$$

where $M_3$ and $M_4$ are positive functions of $k_1$. From equations (37) and (39), $\dot V(t)$ becomes (with $\eta_d$ shorthand for $\eta_d(V_t(t), \psi_t(t))$):

$$
\begin{aligned}
\dot V(t) &\le -k_1(\rho(t) - \rho_d)^2 - k_2(\eta(t) - \eta_d)^2 + (M_1 + M_4(k_1))\,|\rho(t) - \rho_d|\,|\eta(t) - \eta_d| \\
&\qquad + (M_0 k_2 + M_3(k_1) + M_2)\,|\eta(t) - \eta_d| \\
&= -\frac{k_1}{2}(\rho(t) - \rho_d)^2 - \left(\frac{k_2}{2} - \left(\frac{M_1 + M_4(k_1)}{\sqrt{2k_1}}\right)^2\right)(\eta(t) - \eta_d)^2 \\
&\qquad - \left(\sqrt{\frac{k_1}{2}}\,|\rho(t) - \rho_d| - \frac{M_1 + M_4(k_1)}{\sqrt{2k_1}}\,|\eta(t) - \eta_d|\right)^2 \\
&\qquad - \left(\sqrt{\frac{k_2}{2}}\,|\eta(t) - \eta_d| - \frac{M_0 k_2 + M_3(k_1) + M_2}{\sqrt{2k_2}}\right)^2 + \left(\frac{M_0 k_2 + M_3(k_1) + M_2}{\sqrt{2k_2}}\right)^2 \\
&\le 0, \qquad \forall\; \big\|[\rho(t) - \rho_d,\; \eta(t) - \eta_d]^\top\big\|_2 \ge r/\sqrt{d},
\end{aligned} \tag{40}
$$

where

$$r = \frac{M_0 k_2 + M_3(k_1) + M_2}{\sqrt{2k_2}}, \qquad d = \min\left\{\frac{k_1}{2},\; \frac{k_2}{2} - \left(\frac{M_1 + M_4(k_1)}{\sqrt{2k_1}}\right)^2\right\}, \tag{41}$$

when $k_1$ and $k_2$ are selected to satisfy $k_1 > 0$ and $k_2 > (M_1 + M_4(k_1))^2/k_1$. We thus conclude that the system given by (8a) and (31) is globally uniformly ultimately bounded under the controller in (33). This completes the proof.
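To make the guidance law concrete, the following Python sketch evaluates (32)-(33) at the velocity estimates. It is illustrative only: the gains follow Sec. VI, ε is a placeholder, the asin argument is saturated to enforce (33b), β₂ uses the principal-value arctangent (consistent with the sign factor in β₁), and a small guard protects the division when ρ_s approaches zero.

```python
import numpy as np

def guidance(rho, eta, psi, Vg, Vt_hat, psit_hat,
             rho_d=30.0, k1=3.0, k2=30.0, eps=0.05):
    """Turn-rate command psi_dot of eq. (33), evaluated at the velocity
    estimates; rho itself would come from eq. (34): rho = h*sqrt(u**2 + v**2)."""
    # eq. (32) at the estimates; with Vg > Vt, rho_s stays negative in practice
    rho_s = -Vg + Vt_hat * np.cos(psit_hat - psi)
    rho_c = Vt_hat * np.sin(psit_hat - psi)
    if abs(rho_s) < 1e-6:                   # guard the principal-value arctan
        rho_s = -1e-6
    beta1 = np.sign(rho_s) * np.hypot(rho_s, rho_c)
    beta2 = np.arctan(rho_c / rho_s)
    # eq. (33b): desired bearing, argument clipped to magnitude 1 - eps
    arg = np.clip(-k1 * (rho - rho_d) / beta1, -1.0 + eps, 1.0 - eps)
    eta_d = np.arcsin(arg) - beta2
    # eq. (33a): commanded turn rate
    return (Vg*np.cos(eta) - Vt_hat*np.cos(psit_hat - (psi - eta))) / rho \
           - k2 * (eta - eta_d)
```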
V. Out-Of-Frame Event
The practical problem consists of determining the relative position and velocity of the moving target with respect to the UAV using IMU, GPS, and camera measurements, complemented by the altitude above the target provided in real time by the PVNT system.2 During numerous flight tests the image tracking software lost track of the target on a regular basis, primarily due to dynamic changes in lighting conditions and radio-frequency interference in the video and control links.2 This prompted the following question: can the filtering solution maintain stability in the presence of tracking loss events? The ideas presented in Refs. 2, 4 are used in this paper to derive an adaptive estimator that provides estimates of target motion using the process model (9) in the presence of such events. Following the development in Refs. 2, 4, define the tracking loss as a binary signal

$$s(t) := \begin{cases} 0, & \text{out-of-frame event at time } t, \\ 1, & \text{camera tracks the target at time } t. \end{cases} \tag{42}$$

For a given binary signal s(t) and $t > \tau > 0$, let $T_s(\tau, t)$ denote the length of time in the interval $(\tau, t)$ during which $s(t) = 0$. Then, formally,

$$T_s(\tau, t) := \int_\tau^t (1 - s(l))\, dl.$$
The signal s(t) is said to exhibit brief tracking loss if $T_s(\tau, t) \le T_0 + \alpha(t - \tau)$, $\forall\, t \ge \tau \ge 0$, for some $T_0 \ge 0$ and $\alpha \in [0, 1]$. Note that α represents an upper bound on the ratio $(T_s(\tau, t) - T_0)/(t - \tau)$, i.e., the total time the target is lost on a given interval as a fraction of the interval duration. When the target is out of frame, the measurement of $x(t) = [p_x(t), p_y(t)]^\top$ is unavailable. We treat the estimates ω̂(t) of the unknown parameters as constants during the out-of-frame interval. That is, referring to (22), we let $\dot{\hat\omega}(t) = 0$ when s(t) = 0, which is equivalent to assuming $\tilde x(t) = 0$ during the out-of-frame event. Suppose that the measurements become available again at time instant $t_i$. The initial state of the predictor is then reset to $\hat x(t_i) = x(t_i)$. In the presence of target loss events, the state estimator and adaptive law in equations (21) and (22) become:
• State Estimator:
$$\dot{\hat x}(t) = s(t)\,A_m \tilde x(t) - V_g(t)\begin{bmatrix} \sin\psi(t) \\ \cos\psi(t) \end{bmatrix} + \hat\omega(t), \qquad \tilde x(t) = \hat x(t) - x(t). \tag{43}$$

• Adaptive Law:
$$\dot{\hat\omega}(t) = \Gamma_c\, \mathrm{Proj}(\hat\omega(t), -s(t)\,P\tilde x(t)). \tag{44}$$
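A sketch of this modification follows, building on the Sec. III.B sketch (it reuses `dt`, `A_m`, `P`, `Gamma`, `c`, `w_max`, `x_hat`, `w_hat`, `w_e` defined there). The binary signal s gates both the error feedback and the adaptation, so ω̂ is frozen while the target is out of frame, and the predictor is reset on re-acquisition; as before, this is an illustrative discretization, not the authors' code.

```python
import numpy as np

def estimator_step_oof(x, Vg, psi, s, s_prev):
    """One step of (43)-(44). s = 0 during an out-of-frame event: x_til is
    treated as 0 and w_hat is frozen, while the predictor keeps propagating
    with the known forcing term (x is ignored; pass the last value). On
    re-acquisition (s_prev = 0, s = 1) the predictor resets: x_hat = x(t_i)."""
    global x_hat, w_hat, w_e
    if s == 1 and s_prev == 0:
        x_hat = x.copy()
    x_til = (x_hat - x) if s == 1 else np.zeros(2)
    w_hat = np.clip(w_hat - dt*Gamma*s*(P @ x_til), -w_max, w_max)  # eq. (44)
    x_hat = x_hat + dt*(s*(A_m @ x_til)
                        - Vg*np.array([np.sin(psi), np.cos(psi)])
                        + w_hat)                                    # eq. (43)
    w_e = w_e + dt*c*(w_hat - w_e)      # the filter runs throughout
```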
The low-pass filter in (23), the extraction of the unknown parameters in (24), and the controller design in (33) remain the same. To derive the performance bound from the measurement noise to the estimation error in the presence of an out-of-frame event, consider the three bounds $B_1$, $B_2$, and $B_3$ defined in (27). Of these, only $B_1$ is affected by the out-of-frame event. First, we characterize the maximal error between $\omega_e(t)$ and $\omega_r(t)$ due to the temporary target loss. Suppose

$$\|\hat\omega(t)\|_\infty \le B_{\hat\omega}, \qquad \|\omega(t)\|_\infty \le B_\omega. \tag{45}$$

Then,

$$\|\omega_e(t) - \omega_r(t)\|_\infty \le B_{\hat\omega} + B_\omega. \tag{46}$$
Suppose that at time instant $t_i$ the measurements become available again, that is, $s(t_i)$ changes from 0 to 1. The initial value $\omega_e(t_i)$ can be written as

$$\omega_e(t_i) = \omega_r(t_i) + (\omega_e(t_i) - \omega_r(t_i)). \tag{47}$$
Notice that the initial value $\omega_r(t_i)$, along with the input ω̂(t), results in the performance bound given in (27). The initial-value mismatch $\omega_e(t_i) - \omega_r(t_i)$ causes an extra exponentially decaying term $(B_{\hat\omega} + B_\omega)e^{-ct}$. In summary, the performance bound in the presence of an out-of-frame event becomes

$$\|\omega_e(t) - \omega(t)\|_\infty \le (B_{\hat\omega} + B_\omega)e^{-ct} + \frac{\bar\gamma_c}{\sqrt{\Gamma_c}} + \|1 - C(s)\|_{L_1}\|\omega(t)\|_{L_\infty} + \|sC(s)\|_{L_1}\sqrt{\mathrm{tr}\big( (G^\top(p_c(t))\,G(p_c(t)))^{-1} \big)}\,\|n_o(t)\|_{L_\infty}, \tag{48}$$

where $\bar\gamma_c$ is defined in (28). Compared with (27), the bound in the presence of out-of-frame events has an additional exponentially decaying term. This decaying term can be reduced by choosing a larger filter bandwidth c. However, as discussed for (27), when the noise is not negligible, how large c can be chosen is restricted.
VI. Experimental Setup and Simulations
This section describes the experimental setup that will be used to test the proposed estimation and guidance law. Results from numerical simulation are also presented.

The original flight system, presented in Fig. 1, uses a gimballed camera driven by two high-speed digital servos. Though multiple flight tests have demonstrated satisfactory reliability and efficiency, the system has shown limited capability in tracking objects, especially at the beginning of the tracking process, when the feedback control experiences oscillatory transients. The flight system was therefore enhanced by installing an inertial stabilization unit on the gimballed camera, resulting in significant improvements in both line-of-sight (LOS) stabilization and accuracy. The hardware design of the newly-enhanced gimbal unit integrates the following additional components:

• a RISC ATMEGA-169 8-bit micro-controller running at 8 MHz,
• a custom-designed circuit board mounted on the camera body, integrating:
  – a two-axis (IDG300) and a one-axis (ADXRS300) rate gyro,
  – a band-pass signal conditioning filter that removes gyro DC drift and high-frequency noise,
  – a 16-bit analog-to-digital (A/D) converter (ADS8344, 100 Hz sampling rate) connected to the micro-controller,
• two high-speed digital servos for positioning the pan and tilt gimbal rings.

The schematic diagram of the gimbal controller is shown in Fig. 4. The central element of the architecture is the RISC micro-controller, which implements the gimbal control by integrating the LOS rate measurements with the gimbal reference commands sent from the onboard PC104 computer.
Figure 4. Schematic diagram of the gimbal inertial stabilization system.
The straightforward LOS inertial stabilization algorithm consists in subtracting the UAV's Euler rates, measured in the inertial frame, from the gimbal reference commands:

$$\begin{bmatrix} {}^C\varphi_{cmd,CI} \\ {}^C\vartheta_{cmd,CI} \\ {}^C\psi_{cmd,CI} \end{bmatrix} = \begin{bmatrix} {}^C\varphi_{ref\,cmd,CI} \\ {}^C\vartheta_{ref\,cmd,CI} \\ {}^C\psi_{ref\,cmd,CI} \end{bmatrix} - \begin{bmatrix} K_1 & 0 & 0 \\ 0 & K_2 & 0 \\ 0 & 0 & K_3 \end{bmatrix} \begin{bmatrix} {}^C\dot\varphi_{CI} \\ {}^C\dot\vartheta_{CI} \\ {}^C\dot\psi_{CI} \end{bmatrix}, \qquad \begin{bmatrix} {}^C\dot\varphi_{CI} \\ {}^C\dot\vartheta_{CI} \\ {}^C\dot\psi_{CI} \end{bmatrix} = T\left( \begin{bmatrix} {}^Cp_{CI} \\ {}^Cq_{CI} \\ {}^Cr_{CI} \end{bmatrix} - {}^C_B R \begin{bmatrix} {}^Bp_{BI} \\ {}^Bq_{BI} \\ {}^Br_{BI} \end{bmatrix} \right), \tag{49}$$
where
– ${}^Cp_{CI}$, ${}^Cq_{CI}$, and ${}^Cr_{CI}$ denote the camera-to-inertial angular rates, resolved in the {C} frame,
– ${}^Bp_{BI}$, ${}^Bq_{BI}$, and ${}^Br_{BI}$ denote the body-to-inertial angular rates, resolved in the {B} frame,
– ${}^C\dot\varphi_{CI}$, ${}^C\dot\vartheta_{CI}$, and ${}^C\dot\psi_{CI}$ denote the camera-to-inertial Euler angle rates,
– T denotes an angular kinematics matrix relating angular rates to the derivatives of Euler angles,
– ${}^C_B R$ denotes the transformation matrix from the body frame to the camera frame, and
– $K_i$ (for i = 1, 2, 3) denote the feedback gains.

The gimbal unit is a low-cost two-axis system enclosing the Sony FCB-IX11A block camera. The entire gimbal, including the camera and the stabilization system shown in Fig. 5, is housed in a 10 cm spherical case, with the gimbal's lower half exposed under the fuselage.
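A minimal sketch of the rate-correction step in (49) is given below; the gain values and example rates are placeholders, not the flight values, and `T` and `C_B_R` are assumed to be supplied by the attitude/gimbal kinematics.

```python
import numpy as np

def stabilized_gimbal_cmd(ref_cmd, gyro_C, omega_B, C_B_R, T,
                          K=(0.5, 0.5, 0.5)):
    """Eq. (49): correct the reference gimbal commands with the sensed
    camera-to-inertial Euler-angle rates, scaled by feedback gains K.
    gyro_C : rates from the gimbal-mounted gyros, resolved in {C}
    omega_B: UAV body rates from the autopilot IMU, resolved in {B}
    C_B_R  : rotation from {B} to {C};  T: rates -> Euler-angle derivatives"""
    euler_rates_CI = T @ (gyro_C - C_B_R @ omega_B)
    return np.asarray(ref_cmd) - np.diag(K) @ euler_rates_CI

# example call with placeholder values
cmd = stabilized_gimbal_cmd(ref_cmd=np.zeros(3),
                            gyro_C=np.array([0.02, -0.01, 0.05]),
                            omega_B=np.array([0.01, 0.00, 0.04]),
                            C_B_R=np.eye(3), T=np.eye(3))
```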
Figure 5. Inertially stabilized gimbaled camera.
A Hardware-in-the-Loop (HITL) simulation setup was developed to support the experimental component of this effort. Recall that the gimballed camera's inertial stabilization system rejects the impact of UAV motion on the camera. Moreover, it is reasonable to assume that the gimbal dynamics are much faster than the UAV/target relative motion. Thus, for the purpose of validating the algorithm, the UAV guidance and target tracking tasks may be separated. In the HITL simulation, the UAV with gimbal is statically suspended above a simulated target, which is driven by two servo actuators as indicated in Fig. 6. Although the experimental simulation effort is ongoing, the theoretical results have been verified through numerical simulation. Numerical simulation results are presented for the following four cases:

• unknown constant target velocity, Fig. 7;
• unknown time-varying target velocity, Fig. 8;
• unknown constant target velocity in the presence of an out-of-frame event, Fig. 9;
• unknown time-varying target velocity in the presence of an out-of-frame event, Fig. 10.

In Figs. 7-10, panel (a) shows the tracking of ρ(t) (blue) to ρ_d (red); panel (b) shows the estimate of the target's unknown velocity (dashed blue) along with the true value (red), where the yaw angle ψ_t(t) is plotted in degrees; and panel (c) shows the 2D trajectories of the UAV (blue) and the target (red). The thin blue lines connecting the UAV and target trajectories show the corresponding line of sight from the UAV to the target at periodic intervals. From Figs. 7-10, it can be seen that fast estimation is obtained for both constant and time-varying target velocity, and good tracking of ρ(t) to ρ_d is achieved. Figures 9 and 10 show estimates in the presence of an out-of-frame event for unknown constant and time-varying target velocities, respectively. It can be observed that, when s(t) changes from 0 to 1, the estimate of the target's velocity exhibits a transient deviation. This is due to the difference between x(t) and x̂(t) when the target is suddenly re-acquired.
[Figure: block diagram of the HITL setup — UAV dynamics with natural modeling of lighting and out-of-frame events; the UAV + gimbal LOS; and the simulated relative dynamics of the target ($\omega_{tg}$). The HITL simulation comprises the UAV dynamics, the target dynamics, and the ATT software.]

Figure 6. Laboratory setup.
In the simulations, $10^{-2}$% uniform noise is added to the measurements of $(\eta(t), \rho(t), u(t), v(t))$ and to the UAV velocity measurements $(V_g(t), \psi(t))$. The initial conditions, estimation parameters, and control parameters are as follows:

• Initial Conditions: $h(t) = 200$, $\rho_d = 30$, $\rho(0) = 10$, $\eta(0) = -\pi/6$, $V_g(t) = 30$, $\psi(0) = \pi/6$.
  – Constant Target Velocity: $V_t(t) = 10$, $\psi_t(t) = \pi/3$.
  – Time-Varying Target Velocity: $V_t(t) = 12 + 10\sin(\frac{\pi}{12}t)$, $\psi_t(t) = \sin(\frac{\pi}{12}t)$.
• Estimation Parameters: $A_m = -I_{2\times 2}$, $P = \frac{1}{2}I_{2\times 2}$, $\Gamma_c = 2 \times 10^8$, $c = 50$, $\hat\omega(0) = (0, 0)$.
• Control Parameters: $k_1 = 3$, $k_2 = 30$.
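A compact closed-loop Python sketch of the time-varying-velocity case follows, wiring together the kinematics (8)-(9), the estimator of Sec. III.B, and the guidance law (33) via the `guidance()` function sketched in Sec. IV. It is illustrative only (noise-free, not the authors' simulation code); in particular, Γ_c is scaled well below 2 × 10⁸ so that simple explicit integration at this step size remains stable.

```python
import numpy as np

dt, Tf = 1e-3, 15.0
Vg, rho_d, k1, k2 = 30.0, 30.0, 3.0, 30.0
Gamma, c, w_max = 1e4, 50.0, 25.0          # Gamma scaled down for this dt
rho, eta, psi = 10.0, -np.pi/6, np.pi/6    # initial conditions from above
x = np.zeros(2)                            # relative position [p_x, p_y]
x_hat, w_hat, w_e = x.copy(), np.zeros(2), np.zeros(2)

for k in range(int(Tf / dt)):
    t = k * dt
    # true target motion: the time-varying case above
    Vt, psit = 12 + 10*np.sin(np.pi*t/12), np.sin(np.pi*t/12)
    w_true = Vt * np.array([np.sin(psit), np.cos(psit)])
    phi = Vg * np.array([np.sin(psi), np.cos(psi)])
    # estimator (21)-(24): semi-implicit Euler, Proj replaced by clipping
    x_til = x_hat - x
    w_hat = np.clip(w_hat - dt*Gamma*0.5*x_til, -w_max, w_max)   # P = I/2
    x_hat = x_hat + dt*(-x_til - phi + w_hat)                    # A_m = -I
    w_e = w_e + dt*c*(w_hat - w_e)
    Vt_h, psit_h = np.hypot(w_e[0], w_e[1]), np.arctan2(w_e[0], w_e[1])
    # guidance law (33), using the guidance() sketch of Sec. IV
    psi_dot = guidance(rho, eta, psi, Vg, Vt_h, psit_h, rho_d, k1, k2)
    # plant kinematics (8a), (8b), (9)
    eta_dot = -(Vg*np.cos(eta) - Vt*np.cos(psit - (psi - eta)))/rho + psi_dot
    rho_dot = -Vg*np.sin(eta) + Vt*np.sin(psit - (psi - eta))
    x = x + dt*(-phi + w_true)
    rho, eta, psi = rho + dt*rho_dot, eta + dt*eta_dot, psi + dt*psi_dot

print(f"range error {rho - rho_d:+.2f} m, speed error {Vt_h - Vt:+.2f} m/s")
```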
2D Horizontal Range
2D Trajectories
80
35 rhod
100
rho
60
30
80
ψ t in degree
40
UAV Trajectory
60 25
End
20 40
0
20
Vt
20
-20
Target Trajectory
0
15
-40
Start -20
10
0
5
10
15
(a) Relative 2D Distance: ρ(t)
-60
0
5
10
15
-20
(b) Estimation of Target’s Velocity
0
20
40
60
80
100
120
140
(c) 2D Trajectories
Figure 7. Target is undergoing a constant velocity.
VII. Concluding Remarks
A recently-developed fast adaptive estimator is applied to the problem of tracking a ground vehicle with unknown time-varying velocity. The estimated velocity is used in the guidance law to maintain a desired 2D horizontal range between the follower (a UAV) and the target. Simulation results for both constant and time-varying target velocities demonstrate the effectiveness of the estimation and control algorithms.
Figure 8. Target is undergoing a time-varying velocity.
Figure 9. Target is undergoing a constant velocity: in the presence of out-of-frame event.
Out-of-frame events, which are typical and problematic in vision-based applications, are also considered for both cases of unknown constant and time-varying target velocities. Future investigations include coupling the fast estimator with other controllers, such as the one developed in Ref. 2. Characterizing the performance bound from measurement noise to the tracking error in the presence of out-of-frame events will also be studied.
Acknowledgments

This work was sponsored in part by ARO Grant #W911NF-06-1-0330, ONR Grant #N00014-06-1-0801, and AFOSR MURI subcontract F49620-03-1-0401.
References

1. I. Wang, V. Dobrokhodov, I. Kaminer, and K. Jones, "On vision-based target tracking and range estimation for small UAVs," in Proc. of AIAA Conf. on Guidance, Navigation, and Control, San Francisco, CA, Aug. 2005.
2. V. Dobrokhodov, I. Kaminer, K. Jones, and R. Ghabcheloo, "Vision-based tracking and motion estimation for moving targets using small UAVs," in Proc. of AIAA Conf. on Guidance, Navigation, and Control, Keystone, CO, Aug. 2006.
3. I. Kaminer, W. Kang, O. Yakimenko, and A. Pascoal, "Application of nonlinear filtering to navigation system design using passive sensors," IEEE Trans. on Aerospace and Electronic Systems, vol. 37, no. 1, pp. 158–172, Jan. 2001.
4. J. Hespanha, O. Yakimenko, I. Kaminer, and A. Pascoal, "Linear parametrically varying systems with brief instabilities: An application to integrated vision/IMU navigation," IEEE Trans. on Aerospace and Electronic Systems, vol. 40, no. 4, pp. 889–902, July 2004.
5. A. Pascoal, I. Kaminer, and P. Oliveira, "Navigation system design using time-varying complementary filters," IEEE Trans. on Aerospace and Electronic Systems, vol. 36, no. 4, pp. 1099–1114, Oct. 2000.
Figure 10. Target is undergoing a time-varying velocity: in the presence of out-of-frame event.
6. C. Prevost, A. Desbiens, and E. Gagnon, "Extended Kalman Filter for state estimation and trajectory prediction of a moving object detected by an unmanned aerial vehicle," in American Control Conference, New York, NY, July 2007, pp. 1805–1810.
7. G. Ivey and E. Johnson, "Investigation of methods for target state estimation using vision sensors," in Proc. of AIAA Guidance, Navigation and Control Conf., Atlanta, GA, Aug. 2005, AIAA-2005-6067.
8. L. Ma, C. Cao, N. Hovakimyan, C. Woolsey, and W. Dixon, "Fast estimation for range identification in the presence of unknown motion parameters," submitted to Integrated Computer-Aided Engineering, 2007.
9. L. Ma, C. Cao, N. Hovakimyan, C. Woolsey, and W. Dixon, "Fast estimation for range identification in the presence of unknown motion parameters," in International Conference on Informatics in Control, Automation and Robotics, Angers, France, May 2007.
10. J. Zhu, "Calculation of geometric dilution of precision," IEEE Trans. on Aerospace and Electronic Systems, vol. 28, no. 3, pp. 893–895, July 1992.