OPTICAL NAVIGATION SYSTEMS

Takayuki Hoshizaki∗, Dominick Andrisani II†
Purdue University, School of Aeronautics & Astronautics Engineering, West Lafayette, IN 47907-2023

Aaron Braun‡, Ade Mulyana§ and James Bethel¶
Purdue University, School of Civil Engineering, West Lafayette, IN 47907-2051

ABSTRACT

A new navigation system has been developed in which an imaging sensor is used as an aid to an airborne integrated INS (Inertial Navigation System) and GPS (Global Positioning System) system. We name it the tightly coupled INS/GPS/EO (Electro Optical) system. In the current configuration, the EO measures the image position of a single ground object through a camera on board an aircraft. As the term "tightly coupled" indicates, the INS, GPS and EO are integrated using a single Kalman filter which estimates aircraft states, sensor biases and ground object coordinates simultaneously. As a consequence, aircraft yaw angle determination, a weak point common to INS/GPS systems, is greatly improved. Furthermore, the INS/GPS/EO system can also focus on known stationary ground objects (control points), resulting in improved navigation accuracy.

INTRODUCTION

Motivation

The motivation for this study is the need for increased accuracy in the aircraft navigation system and in the determination of the precise location of objects on the ground. Also, we would like to prove the hypothesis that increased accuracy in navigation will result if the navigation and geo-positioning problems are solved simultaneously. Finally, we would like to investigate the benefits of focusing on known ground points (control points). We intend to prove the hypothesis that focusing on control points will result in an increase in navigation accuracy. These hypotheses are based on the assumption that focusing on any stationary object on the ground gives: (1) aircraft velocity information that is useful in navigation; and (2) aircraft angular velocity information that can be used to help calibrate, in real time, the rate gyros of the INS.

By aircraft navigation system we mean an integrated Inertial Navigation System (INS) which is tightly coupled to the Global Positioning System (GPS). We denote the integrated product as the INS/GPS [5]. Numerous companies manufacture these products, e.g., Litton and Microbotics. Systems that determine the location of objects on the ground from an aircraft are often physically separate electro-optical systems which take input from the INS/GPS. The ground object geo-positioning system on the Predator, made by Westcam and General Atomics, is an example.

∗ Ph.D. Candidate
† Associate Professor, AIAA Senior Member
‡ Ph.D. Candidate
§ Ph.D. Candidate
¶ Associate Professor

Literature Review

The use of imaging as an aid to INS-based navigation has traditionally been studied by way of terrain matching methods, in which aerial images are matched to an on-board digital elevation map [19]. When integrating navigation and photogrammetry‡, the first related problem is "observer motion estimation via visual cues", where the camera is the only sensor used. A variety of methods are reported to solve this problem: Heeger and Jepson [12] use a least squares formulation; Soatto et al. (1996) [21] and Soatto and Perona (1997) [22] use an analysis of topological manifolds to propose the Implicit Extended Kalman Filter (Implicit EKF); Gurfil and Rotstein [9] use an Implicit EKF to estimate aircraft states.

The second related problem is "object motion estimation via visual cues". There are two distinct approaches to this problem: optical flow methods and feature based methods. Optical flow methods have the advantage that they are free from object recognition problems [1]. A feature based method is taken in Broida et al. [3], where several feature marks on an object are tracked by a stationary camera to estimate the object's states. In their method, a batch process gives initial estimates based upon the first few images, and then the IEKF (Iterated Extended Kalman Filter) recursively estimates the object's states. This paper adopts the idea of a two-stage estimator, where the first stage quickly locates the ground object and passes this to the second stage, thereby improving the initialization of the second-stage estimator.

The integration of observer motion estimation and object position estimation, especially using knowledge from the INS, is seldom reported. Hagen and Heyerdahl [11] propose to use image position measurements with a digital elevation map for absolute positioning without using an INS; their IEKF tightly couples six DOF aircraft state estimation with the position estimation of more than one stationary ground object. Hafskjold et al. [10] combine the technique used in Hagen and Heyerdahl [11] with an INS technique. While these methods, which depend on a digital elevation map, achieve successful results, our final goal is to use geographical information given by imaged ground objects as a navigation aid, instead of using pre-stored terrain data.

‡ Photogrammetry is defined as the process of deriving metric information about an object through measurements made on photographs of the object [18].

TIGHTLY COUPLED INS/GPS/EO SYSTEM

Figure 1 shows the layout of the tightly coupled INS/GPS/EO system. The simulation of the tightly coupled system consists of the aircraft model, the INS, the GPS, the imaging sensor (CCD camera), and the single Kalman filter. The mechanism of each component is briefly described in the following subsections.

Fig. 1 Layout of the tightly coupled INS/GPS/EO. [Block diagram: the exact aircraft states drive the IMU (aircraft accelerations and angular velocities), the GPS receiver (pseudorange and pseudorange rate), and the camera (image position). Bias-corrected IMU outputs are integrated in the Navigation Equation (position, velocity, orientation). Measurement residuals, weighted by the Kalman gain, feed state and covariance corrections back from the Kalman filter.]

Ellipsoidal-Earth Based Aircraft Dynamics

To simulate the actual flight trajectory as realistically as possible, we use an ellipsoidal-Earth model based on WGS-84 [6]. The dynamics of the aircraft are described by six DOF equations of motion [24], using a Cessna 182 as the aircraft model. The turbulence model used in the simulation is given by MIL-SPEC 1797A [7].

Inertial Navigation System

The Inertial Measurement Unit (IMU) in the INS, which consists of three accelerometers and three rate gyros, measures accelerations and angular velocities. These imperfect measurements contain small errors, often specified by a scale factor, nonlinearity, bias, and white noise [13, 16, 23]. For simplicity, only a bias modeled as a Markov process, plus white noise, is assumed as the error in both the accelerometers and the rate gyros. Based on the imperfect sensor outputs, the Navigation Equation is used to estimate aircraft velocity, position, and orientation. The most commonly used navigation equation, based on the NED (North-East-Down) coordinate system, is used in this study [25].
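For illustration, a minimal sketch of this error model for a single sensor axis: a first-order Gauss-Markov bias plus additive white noise. The code is our own, not the authors'; the example numbers echo the gyro bias σ of Table 3 and the 60 second bias time constant given in the appendix.

```python
import numpy as np

def simulate_sensor_error(n_steps, dt, sigma_b, tau, sigma_w, rng=None):
    """Bias (first-order Gauss-Markov) plus white noise for one sensor axis."""
    rng = rng or np.random.default_rng()
    beta = 1.0 / tau                 # reciprocal of the bias time constant
    q = 2.0 * sigma_b**2 * beta      # driving PSD that keeps Var(bias) = sigma_b^2
    bias = np.empty(n_steps)
    bias[0] = sigma_b * rng.standard_normal()
    for k in range(n_steps - 1):
        bias[k + 1] = bias[k] * (1.0 - beta * dt) \
                      + np.sqrt(q * dt) * rng.standard_normal()
    return bias + sigma_w * rng.standard_normal(n_steps)

# Example: gyro bias sigma of 0.003 deg/hr (Table 3), 60 s time constant.
deg_per_hr = np.pi / 180.0 / 3600.0
gyro_err = simulate_sensor_error(6000, 0.01, 0.003 * deg_per_hr, 60.0, 1e-6)
```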

Global Positioning System

The absolute measurements of aircraft position and velocity are obtained using a GPS receiver on board the aircraft. For this study, a single frequency GPS receiver is simulated, which measures pseudoranges and pseudorange rates using the satellite broadcast ephemerides in real-time mode [15].

The Electro Optical System


Imaging Geometry

Figure 2 describes the imaging geometry for a frame camera.∗ "For purposes of geometry and mathematical modeling, the camera lens is represented by a single point, called the perspective center, even though the lens assembly is composed of many optical elements [18]." The focal length for a fixed-focus lens is the distance along the optical axis from the perspective center to the image plane. The image coordinate system is defined on the positive image plane, since the projected view is reversed on the negative image plane, which is less convenient to work with. In Figure 2, the image points t1, t2 and t3 correspond to the ground objects T1, T2 and T3, respectively. C denotes the origin of the Image coordinate system.

∗ A frame camera is an imaging sensor model which defines how to reconstruct the bundle of rays connecting image points and the corresponding ground objects. See Mikhail et al. [18] for more details.

Fig. 2 Imaging geometry for a frame camera. [Diagram: perspective center L with focal length f along the optical axis; negative image plane behind L and positive image plane in front of it, with origin C and offsets x0, y0; image points t1, t2, t3 and corresponding ground objects T1, T2, T3.]

Image Position Measurements

Let the point t be one of the image points, and let the point T be the corresponding ground object point. Suppose that the image coordinates of the image point t are (x, y, 0). The image coordinates of the perspective center L are, from Figure 2, (x0, y0, f), where x0 and y0 are small offsets. Then

$$\vec{p}^{\,Lt}_c = \vec{Ct} - \vec{CL} = \begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix}$$

where the subscript c denotes that the vector is described in Image coordinate unit vectors. Since the LOS (Line of Sight) vector $\vec{LT}$ is an extension of the vector $\vec{Lt}$, the following vector equation holds:

$$\vec{p}^{\,Lt}_c = k\,\vec{p}^{\,LT}_c$$

where k is a scale factor. Letting $\vec{X}_T = [X_T, Y_T, Z_T]^T$ and $\vec{X}_L = [X_L, Y_L, Z_L]^T$ be the position vectors of the points T and L, respectively, described in the ECEF (Earth Centered Earth Fixed) coordinate system,

$$\vec{p}^{\,Lt}_c = k\,T_{ce}\,\vec{p}^{\,LT}_e$$

or

$$\begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix} = k \begin{bmatrix} U \\ V \\ W \end{bmatrix}$$

where

$$\begin{bmatrix} U \\ V \\ W \end{bmatrix} = T_{ce}\begin{bmatrix} X_T - X_L \\ Y_T - Y_L \\ Z_T - Z_L \end{bmatrix}$$

Note that $T_{ce}$ is the transformation matrix from the ECEF to the Image coordinate system. Letting $x_c = x - x_0$ and $y_c = y - y_0$,

$$\begin{bmatrix} x_c \\ y_c \\ -f \end{bmatrix} = k \begin{bmatrix} U \\ V \\ W \end{bmatrix} \quad (1)$$

Substituting $k = -f/W$, which is given by the third row, into the first and second rows, we obtain the image position measurement equations [18]:

$$x_c = -f\,\frac{U}{W} \quad (2)$$

$$y_c = -f\,\frac{V}{W} \quad (3)$$

In the image simulation, white noise corresponding to one pixel is added to the x and y image coordinates to simulate image inaccuracy.
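As a concrete sketch of Eqs. (1)-(3) and the noise injection (our illustration; the rotation T_ce is assumed given, and the pixel σ matches Table 4):

```python
import numpy as np

def image_coordinates(X_T, X_L, T_ce, f, sigma_c=5e-6, rng=None):
    """Project an ECEF ground point onto the positive image plane, Eqs. (1)-(3),
    and add white noise corresponding to one pixel (sigma_c, as in Table 4)."""
    rng = rng or np.random.default_rng()
    U, V, W = T_ce @ (np.asarray(X_T) - np.asarray(X_L))
    xc = -f * U / W                     # Eq. (2)
    yc = -f * V / W                     # Eq. (3)
    return (xc + sigma_c * rng.standard_normal(),
            yc + sigma_c * rng.standard_normal())
```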

Tightly Coupling

In the tightly coupled INS/GPS/EO navigation system, the single integrated Kalman filter receives the measurements of pseudoranges and pseudorange rates from the GPS receiver, and ground object image coordinates from the imaging sensor. The residuals between these measurements and their estimates from the Kalman filter are used to compute correction signals. The state equation and the output equation of the Kalman filter are given by:

$$\delta\dot{\vec{x}} = F\,\delta\vec{x} + G\,\vec{v}$$

$$\delta\vec{z} = H\,\delta\vec{x} + \vec{w}$$

where $\delta\vec{x}$ consists of 20 elements: three orientation errors, three velocity errors (north, east and down components), three position errors (longitude, latitude and altitude), three rate gyro biases, three accelerometer biases, the GPS receiver clock bias and drift, and three ground object coordinate errors. The matrix F consists of the linearized navigation equation, the linear bias dynamics and the stationary ground object dynamics. The matrix H is obtained by linearizing the pseudorange, pseudorange rate, and image position equations. A least squares batch process and the IEKF (Iterated Extended Kalman Filter [14]) are used to allow large initial errors in the ground object position estimates, and to reduce linearization errors in the image position equations.
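For concreteness, one possible indexing of the 20-element error state in code; the slice names are our own illustration, not the paper's notation:

```python
# Index slices of the 20-element error state (illustrative layout).
ATT   = slice(0, 3)    # delta alpha, beta, gamma: orientation angle errors
VEL   = slice(3, 6)    # delta vN, vE, vD: velocity errors (NED)
POS   = slice(6, 9)    # delta lambda, phi, h: position errors (geodetic)
BGYRO = slice(9, 12)   # B_wx, B_wy, B_wz: rate gyro biases
BACC  = slice(12, 15)  # B_ax, B_ay, B_az: accelerometer biases
CLOCK = slice(15, 17)  # b, d: GPS receiver clock bias and drift
OBJ   = slice(17, 20)  # delta XT, YT, ZT: ground object coordinate errors
N_STATES = 20
```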


While the nonlinear navigation equation is numerically integrated at a high frequency, it is assumed that the small perturbation error states $\delta\vec{x}(t)$ propagate linearly. Every second, the aircraft states, sensor biases, and ground object coordinates are updated with a set of correction signals from the Kalman filter [2, 4, 8, 20].
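A minimal sketch of this propagate-and-correct scheme, assuming a first-order discrete transition matrix Φ ≈ I + FΔt and the standard Kalman correction; function and variable names are illustrative:

```python
import numpy as np

def propagate(P, F, GQGt, dt):
    """Error covariance propagation with a first-order transition matrix,
    Phi ~ I + F*dt, consistent with the linear perturbation assumption."""
    Phi = np.eye(F.shape[0]) + F * dt
    return Phi @ P @ Phi.T + GQGt * dt

def correct(x_hat, P, dz, H, R):
    """One 1 Hz Kalman correction from the measurement residual dz."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_hat + K @ dz                 # corrected states and biases
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new
```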

SIMULATION

To investigate the performance of the INS/GPS/EO system and the benefits of focusing on a control point, simulations are conducted according to the following three scenarios: Scenario I, the traditional INS/GPS navigation system; Scenario II, the INS/GPS/EO system focusing on a single unknown ground object; Scenario III, the INS/GPS/EO system focusing on a control point whose location is known with an accuracy of σ = 0.1 m.

In each scenario, 30 ensemble experiments are performed, changing the initial conditions of the actual aircraft trajectories, the actual sensor biases, and the true ground object coordinates, as well as the random noise driving the wind gusts and navigation sensor biases. The nominal trajectory in the simulations is approximately a straight path to the north, at an altitude of 6096 meters (20000 feet) and a velocity of 61 m/s (200 ft/s), with a 60 second flight duration, starting from a point on the equator. The mean location of the ground objects is 1829 m (6000 ft) north and 3048 m (10000 ft) west of the starting point of the aircraft. The nominal flight trajectory and the mean location of the ground objects form a good aircraft/ground object geometry, since the direction of the line of sight changes substantially during the simulation. Figure 3 shows the 30 different actual flight trajectories and the 30 different true ground object locations in the local coordinate system, where one flight trajectory paired with one ground object location is used in each experiment. In the local coordinate system, the x, y and z axes are defined along the Easterly, Northerly and Upward directions, respectively, with the origin located on the equator. The following subsections explain the detailed simulation configurations.

Fig. 3 The local coordinate system and the aircraft/ground object geometry. [3-D plot: 30 actual flight trajectories near 6000 m altitude and 30 true ground object locations near the origin; axes xlocal (Easting), ylocal (Northing) and zlocal (Upward), in meters.]

I. INS/GPS

The traditional INS/GPS system is simulated to estimate the aircraft states in this scenario. The highest accuracy achievable in the year 2001 is used for the sensor performance (see Tables 2 and 3), where it is assumed that the GPS receiver uses P code and the INS uses ring laser gyros. The update of the estimates in the Kalman filter is performed at 1 Hz, as the GPS receiver's measurements are obtained. Figure 4 shows the time histories of yaw angle estimation errors resulting from this scenario. Theoretical 2p boundaries and twice the ensemble average are superimposed on the ensemble experiments, where p² is the theoretical variance as computed by the IEKF and the ensemble average is the RMS value of the 30 experiments. Note that the initial errors are scarcely reduced with this configuration. Also note that the p value and the ensemble average are in good agreement, especially at the end of the simulation. The simulation results for the other aircraft states are summarized in Table 1, but only with the p values and ensemble averages after 60 seconds.

Fig. 4 Aircraft yaw angle estimation errors resulting from the INS/GPS system. [Time histories of δyaw (rad), scale ×10^-3, over 0-60 s for the 30 experiments, with the 2p boundaries and twice the ensemble RMS superimposed.]
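The comparison of the 2p boundaries with twice the ensemble RMS amounts to a filter consistency check; a minimal sketch, with hypothetical array names, is:

```python
import numpy as np

# yaw_err: (30, 61) array of yaw estimation errors, one row per experiment;
# p_yaw:   (61,) theoretical standard deviations reported by the IEKF.
def consistency_curves(yaw_err, p_yaw):
    ensemble_rms = np.sqrt(np.mean(yaw_err**2, axis=0))  # RMS over the 30 runs
    return 2.0 * ensemble_rms, 2.0 * p_yaw               # curves to superimpose
```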

II. INS/GPS/EO with an Unknown Ground Object

The tightly coupled INS/GPS/EO system is simulated in this scenario using the same INS/GPS performance as in Scenario I and an imaging sensor with 5 µm measurement noise. The imaging sensor always bore-sights the single unknown ground object precisely, assuming the use of a gimbaled image tracking system. During the 60 second flight, the imaging sensor keeps taking images of the ground object at 1 Hz, at the same time as the GPS receiver catches the signals, for a total of 61 images. The batch process uses the first 20 images to obtain the initial estimates of the ground object location. The INS/GPS/EO based on an IEKF works on the remaining 41 images, starting with the initial ground object location estimates given by the batch process. Hence, the navigation system is equivalent to the INS/GPS system during 0 ∼ 19 seconds.

Figure 5 shows the time histories of yaw angle estimation errors resulting from this scenario. Note that the yaw angle estimation errors are dramatically reduced as soon as the INS/GPS/EO system picks up the ground object after 20 seconds. The error size after 60 seconds is approximately 20 times smaller than the results from Scenario I shown in Figure 4.

Fig. 5 Aircraft yaw angle estimation errors resulting from the INS/GPS/EO system focusing on a single unknown ground object. [Time histories of δyaw (rad), scale ×10^-3, over 0-60 s, with the 2p boundaries and twice the ensemble RMS superimposed.]

III. INS/GPS/EO with a Control Point

The INS/GPS/EO system focuses on the same single ground object as in Scenario II in each experiment; in this scenario, however, its location is known with 0.1 meter accuracy from the beginning of the simulation. Hence, the tightly coupled mode is active throughout 0 ∼ 60 seconds.

Figure 6 shows the time histories of yaw angle estimation errors resulting from this scenario. Note that the yaw angle estimation errors are dramatically reduced right after the simulations start. The error size after 60 seconds is approximately 30 times smaller than the results from Scenario I shown in Figure 4, and 1.4 times smaller than the results from Scenario II shown in Figure 5.

Fig. 6 Aircraft yaw angle estimation errors resulting from the INS/GPS/EO system focusing on a single 0.1 m-accuracy control point. [Time histories of δyaw (rad), scale ×10^-3, over 0-60 s, with the 2p boundaries and twice the ensemble RMS superimposed.]

Overall Comparison

The aircraft velocity, position and orientation estimation errors at the end of the simulations are summarized in Table 1 with respect to the local coordinate system. In each cell of the table, the first value is the ensemble average δ(60) at time = 60 sec and the second is the p value p(60) at time = 60 sec. Note that Simulation III always gives the best accuracy among the three simulations. Comparing the three orientation accuracies in the first column, we can see that the yaw angle estimation error is significantly larger than the errors of the other two angles in the INS/GPS system. However, as we have seen in the time histories, yaw angle estimation accuracy is significantly improved, by a factor of 20 in Scenario II and by a factor of 30 in Scenario III compared with Scenario I, resulting in the same order of accuracy as the other two angles. Also notice that the aircraft position accuracy in Simulation III is two times better than in Simulations I or II.

Table 1 Performance comparison between Simulations I, II and III; each cell lists δ(60) / p(60).

                I: INS/GPS              II: INS/GPS/EO (U.G.O.)   III: INS/GPS/EO (C.P.)
vx (m/s)        0.0065 / 0.0073         0.0040 / 0.0049           0.0059 / 0.0043
vy (m/s)        0.0049 / 0.0059         0.0048 / 0.0050           0.0040 / 0.0043
vz (m/s)        0.010 / 0.010           0.0091 / 0.0072           0.0066 / 0.0066
xac (m)         0.42 / 0.45             0.43 / 0.45               0.24 / 0.22
yac (m)         0.43 / 0.38             0.43 / 0.38               0.27 / 0.29
zac (m)         0.81 / 0.75             0.77 / 0.70               0.32 / 0.32
roll (rad)      3.1×10^-5 / 2.9×10^-5   2.6×10^-5 / 2.4×10^-5     2.5×10^-5 / 2.2×10^-5
pitch (rad)     2.6×10^-5 / 2.9×10^-5   2.4×10^-5 / 2.7×10^-5     2.3×10^-5 / 2.3×10^-5
yaw (rad)       2.1×10^-3 / 1.9×10^-3   9.1×10^-5 / 8.9×10^-5     6.7×10^-5 / 6.6×10^-5

(U.G.O. = unknown ground object; C.P. = control point.)

CONCLUSIONS

The following conclusions assume a good aircraft/ground object geometry, with a straight flight trajectory and 61 images taken along the flight trajectory at 1 Hz:

1. Tightly coupling the EO with the INS/GPS system greatly benefits aircraft yaw angle determination. Yaw angle accuracy becomes more than 20 times better by focusing on an unknown ground object, and 30 times better by focusing on a control point, compared with an ordinary INS/GPS navigation system;

2. Focusing on a control point with the tightly coupled INS/GPS/EO system gives two times better aircraft positioning accuracy than the ordinary INS/GPS system;

3. Focusing on a control point with the tightly coupled INS/GPS/EO system gives two times better aircraft positioning accuracy than when focusing on an unknown ground object.

APPENDIX

The mathematical background for this study is described in this appendix.

Coordinate Systems

Firstly, the coordinate systems are introduced. They are represented by specific subscripts in the equations, as described below:

i : Earth Centered Inertial Coordinate System (ECI)
e : Earth Centered Earth Fixed Coordinate System (ECEF)
n : North East Down Coordinate System (NED)
b : Aircraft Body Fixed Coordinate System (Body)
c : Image Coordinate System

Fig. 7 Coordinate systems. [Diagram: ECI axes Xi, Yi, Zi (Xi toward Aries); ECEF axes Xe, Ye, Ze; NED axes xn (North), yn (East), zn (Down); Body axes xb, yb, zb; aircraft position P, Earth center O; geodetic latitude φ, longitude λ, and celestial longitude λc.]

6 DOF Ellipsoidal-Earth Based Aircraft Equations of Motion

The set of equations used to generate the aircraft trajectories is as follows:

$$\frac{{}^e d}{dt}\,{}^e\vec{v}^{\,P}_b = \frac{\vec{F}_b}{m} - \left({}^i\vec{\omega}^{\,b}_b + T_{bi}\,{}^i\vec{\omega}^{\,e}_i\right)\times{}^e\vec{v}^{\,P}_b + T_{bn}\,\vec{g}_n$$

$${}^i\dot{\vec{\omega}}^{\,b}_b = -J_b^{-1}\left({}^i\vec{\omega}^{\,b}_b \times J_b\,{}^i\vec{\omega}^{\,b}_b\right) + J_b^{-1}\,\vec{M}_b$$

$$\frac{{}^i d}{dt}\,\vec{p}^{\,OP}_i = T_{ib}\,{}^e\vec{v}^{\,P}_b + {}^i\vec{\omega}^{\,e}_i \times \vec{p}^{\,OP}_i$$

$$\dot{\lambda}_c = \frac{v_E}{(R_E + h)\cos\phi} + \omega_e$$

$$\dot{\phi} = \frac{v_N}{R_N + h}$$

$$\dot{E}_1 = P + \frac{\sin E_1 \sin E_2}{\cos E_2}\,Q + \frac{\cos E_1 \sin E_2}{\cos E_2}\,R - \frac{\cos E_3 \cos\phi}{\cos E_2}\,\dot{\lambda}_c + \frac{\sin E_3}{\cos E_2}\,\dot{\phi}$$

$$\dot{E}_2 = \cos E_1\,Q - \sin E_1\,R + \cos\phi\,\sin E_3\,\dot{\lambda}_c + \cos E_3\,\dot{\phi}$$

$$\dot{E}_3 = \frac{\sin E_1}{\cos E_2}\,Q + \frac{\cos E_1}{\cos E_2}\,R + \left(\sin\phi - \frac{\sin E_2 \cos E_3 \cos\phi}{\cos E_2}\right)\dot{\lambda}_c + \frac{\sin E_2 \sin E_3}{\cos E_2}\,\dot{\phi}$$

where

${}^e\vec{v}^{\,P}_b$ : velocity vector with respect to an observer fixed in the ECEF coordinate system, described in Body coordinate unit vectors

$\vec{F}_b$ : aerodynamic force + thrust force, described in Body coordinate unit vectors

${}^i\vec{\omega}^{\,b}_b = [P, Q, R]^T$ : angular velocity vector from the ECI to the Body coordinate system, described in Body coordinate unit vectors

${}^i\vec{\omega}^{\,e}_i = [0, 0, \omega_e]^T$ : angular velocity vector from the ECI to the ECEF coordinate system, described in ECI coordinate unit vectors

$\vec{p}^{\,OP}_i$ : position vector of the aircraft, described in ECI coordinate unit vectors

$T_{bi}$ : transformation matrix from the ECI to the Body coordinate system

$\vec{g}_n = [0, 0, g_n]^T = \dfrac{\vec{F}_g}{m} - {}^i\vec{\omega}^{\,e}_i \times \left({}^i\vec{\omega}^{\,e}_i \times \vec{p}^{\,OP}_i\right)$ : local gravity vector, described in NED coordinate unit vectors, where $\vec{F}_g$ is the gravity force vector

$J_b$ : inertia matrix of the aircraft

$\vec{M}_b$ : moment vector applied to the aircraft, described in Body coordinate unit vectors

$E_1, E_2, E_3$ : Euler roll, pitch and yaw angles, respectively

$R_N = \dfrac{a(1 - e^2)}{(1 - e^2\sin^2\phi)^{3/2}}$ : meridian radius of curvature for the ellipsoidal Earth

$R_E = \dfrac{a}{(1 - e^2\sin^2\phi)^{1/2}}$ : East-West radius of curvature for the ellipsoidal Earth

The above equations are based on the standard derivation which can be found in Stevens and Lewis [24], with the time-rate changes of geodetic coordinates described in Titterton and Weston [25], and using the ellipsoidal Earth model WGS-84 [6].

NED-Coordinate Based Navigation Equation

The set of equations used to reproduce the aircraft states, with the aircraft accelerations and angular velocities as input, is as follows:

$$\dot{T}_{nb} = T_{nb}\,{}^n\Omega^{\,b}_b$$

$$\frac{{}^e d}{dt}\,{}^e\vec{v}^{\,P}_n = -\left(2\,{}^i\vec{\omega}^{\,e}_n + {}^e\vec{\omega}^{\,n}_n\right)\times{}^e\vec{v}^{\,P}_n + \vec{g}_n + T_{nb}\,\vec{a}_b$$

$$\dot{\lambda} = \frac{v_E}{(R_E + h)\cos\phi}$$

$$\dot{\phi} = \frac{v_N}{R_N + h}$$

$$\dot{h} = -v_D$$

where

$T_{nb}$ : transformation matrix from the Body to the NED coordinate system

$${}^n\Omega^{\,b}_b = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$

$${}^n\vec{\omega}^{\,b}_b = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} = {}^i\vec{\omega}^{\,b}_b - T_{bn}\left({}^i\vec{\omega}^{\,e}_n + {}^e\vec{\omega}^{\,n}_n\right)$$

${}^e\vec{v}_n = [v_N, v_E, v_D]^T$

$\vec{a}_b = \dfrac{\vec{F}_b}{m}$ : accelerometer outputs

${}^i\vec{\omega}^{\,b}_b$ : rate gyro outputs

$E_1, E_2, E_3$ : Euler roll, pitch and yaw angles, computed as

$$E_1 = \tan^{-1}\frac{T_{nb}(3,2)}{T_{nb}(3,3)}$$

$$E_2 = \tan^{-1}\frac{-T_{nb}(3,1)}{\sqrt{1 - T_{nb}(3,1)^2}}$$

$$E_3 = \tan^{-1}\frac{T_{nb}(2,1)}{T_{nb}(1,1)}$$

The purpose of the Navigation Equation is to reconstruct the aircraft states from the measurements of the accelerometers and rate gyros attached to the aircraft Body coordinate system. The above equations are standard derivations which can be found in Titterton and Weston [25].
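As an illustrative sketch of these relations (our code, with a simple Euler-step update of the DCM differential equation and the Euler angle extraction above; re-orthonormalization of the DCM is omitted):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of w = [wx, wy, wz]."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def dcm_step(T_nb, w_nb_b, dt):
    """One Euler integration step of T_nb_dot = T_nb * Omega."""
    return T_nb + T_nb @ skew(w_nb_b) * dt

def euler_from_dcm(T_nb):
    """E1 (roll), E2 (pitch), E3 (yaw) from the body-to-NED matrix,
    using the extraction formulas above (1-based indices made 0-based)."""
    E1 = np.arctan2(T_nb[2, 1], T_nb[2, 2])
    E2 = np.arctan2(-T_nb[2, 0], np.sqrt(1.0 - T_nb[2, 0]**2))
    E3 = np.arctan2(T_nb[1, 0], T_nb[0, 0])
    return E1, E2, E3
```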

Kalman Filter State Equation

The linearized navigation equation and the linear equations of the sensor biases, clock bias and ground object stationary dynamics are combined to yield the 20-element state equations used in the Kalman filter:

$$\delta\dot{\vec{x}} = F(\hat{\vec{x}})\,\delta\vec{x} + G\,\vec{v} \quad (4)$$

$$\delta\vec{z} = H(\hat{\vec{x}})\,\delta\vec{x} + \vec{w} \quad (5)$$

where the Kalman filter's states (20-state Kalman filter) are

δx⃗ = [ δα, δβ, δγ : orientation angle errors;
  δvN, δvE, δvD : velocity errors;
  δλ, δφ, δh : position errors in geodetic coordinates;
  Bωx, Bωy, Bωz : rate gyro biases;
  Bax, Bay, Baz : accelerometer biases;
  b, d : the clock bias and drift;
  δXT, δYT, δZT ]ᵀ : ground object position errors

and the process noise is

v⃗ = [ vωx, vωy, vωz : rate gyro white noises with the PSD value Qω;
  vax, vay, vaz : accelerometer white noises with the PSD value Qa;
  vBωx, vBωy, vBωz : rate gyro bias white noises with the PSD value 2σBω²βω;
  vBax, vBay, vBaz : accelerometer bias white noises with the PSD value 2σBa²βa;
  vb, vd ]ᵀ : clock bias and drift white noises with the PSD values Sb and Sd

in which
• σBω and σBa are the σ values of the rate gyro and accelerometer biases specified by the sensor performance, and
• βω and βa are the reciprocals of the time constants tω and ta; 60 seconds is used for both tω and ta.

State matrices:

$$F(\hat{\vec{x}})_{20\times20} = \begin{bmatrix} F_{1,\,9\times9} & G_{1,\,9\times6} & 0_{9\times2} & 0_{9\times3} \\ 0_{6\times9} & F_{2,\,6\times6} & 0_{6\times2} & 0_{6\times3} \\ 0_{2\times9} & 0_{2\times6} & F_{3,\,2\times2} & 0_{2\times3} \\ 0_{3\times9} & 0_{3\times6} & 0_{3\times2} & F_{4,\,3\times3} \end{bmatrix}$$

$$G_{20\times14} = \begin{bmatrix} G_{1,\,9\times6} & 0_{9\times6} & 0_{9\times2} \\ 0_{6\times6} & G_{2,\,6\times6} & 0_{6\times2} \\ 0_{2\times6} & 0_{2\times6} & G_{3,\,2\times2} \\ 0_{3\times6} & 0_{3\times6} & 0_{3\times2} \end{bmatrix}, \qquad G_{1,\,9\times6} = \begin{bmatrix} -T_{nb} & 0_{3\times3} \\ 0_{3\times3} & T_{nb} \\ 0_{3\times3} & 0_{3\times3} \end{bmatrix}$$

in which
• F1 and G1 are given by linearizing the Navigation Equation,
• F2 and G2 define the bias dynamics (a Markov process is assumed),
• F3 and G3 define the clock bias dynamics, and
• F4 is a 3 × 3 zero matrix, since the ground object is assumed to be stationary (the block layout is sketched below).
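As an illustration only (the detailed entries of F1, F2, F3, G2 and G3 are not reproduced here), the block layout can be assembled as:

```python
import numpy as np

def assemble_F_G(F1, F2, F3, G1, G2, G3):
    """Place the sub-blocks into the 20x20 F and 20x14 G of Eqs. (4)-(5)."""
    F = np.zeros((20, 20))
    F[0:9, 0:9] = F1            # linearized Navigation Equation
    F[0:9, 9:15] = G1           # sensor biases drive the navigation errors
    F[9:15, 9:15] = F2          # Markov bias dynamics
    F[15:17, 15:17] = F3        # clock bias and drift dynamics
    # rows/cols 17:20 stay zero: F4 = 0 (stationary ground object)

    G = np.zeros((20, 14))
    G[0:9, 0:6] = G1            # gyro and accelerometer white noise
    G[9:15, 6:12] = G2          # bias-driving white noise
    G[15:17, 12:14] = G3        # clock bias/drift white noise
    return F, G
```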

Linearized measurements:

$$\delta\vec{z} = \begin{bmatrix} \rho_{GPS_1} - h_{1_1}(\hat{\vec{x}}) \\ \vdots \\ \rho_{GPS_k} - h_{1_k}(\hat{\vec{x}}) \\ \dot{\rho}_{GPS_1} - h_{2_1}(\hat{\vec{x}}) \\ \vdots \\ \dot{\rho}_{GPS_k} - h_{2_k}(\hat{\vec{x}}) \\ x_{c\,camera} - h_3(\hat{\vec{x}}) \\ y_{c\,camera} - h_4(\hat{\vec{x}}) \end{bmatrix}, \qquad H(\vec{x}) = \begin{bmatrix} H_{1_1}(\vec{x}) \\ \vdots \\ H_{1_k}(\vec{x}) \\ H_{2_1}(\vec{x}) \\ \vdots \\ H_{2_k}(\vec{x}) \\ H_3(\vec{x}) \\ H_4(\vec{x}) \end{bmatrix}, \qquad \vec{w} = \begin{bmatrix} w_{\rho_1} \\ \vdots \\ w_{\rho_k} \\ w_{\dot{\rho}_1} \\ \vdots \\ w_{\dot{\rho}_k} \\ w_{x_c} \\ w_{y_c} \end{bmatrix}$$

in which
• ρGPSi is the pseudorange measurement given by the GPS receiver,
• ρ̇GPSi is the pseudorange rate measurement given by the GPS receiver,
• k is the number of visible satellites,
• xc camera and yc camera are the image coordinate measurements given by the imaging sensor,
• hi, i = 1 ∼ 4, are the nonlinear measurement equations,
• Hi, i = 1 ∼ 4, are the linearized measurement equations,
• wρi is the white pseudorange measurement noise specified by the σ value σρ (see Table 2),
• wρ̇i is the white pseudorange rate measurement noise specified by the σ value σρ̇ (see Table 2), and
• wxc and wyc are the white image coordinate measurement noises specified by the σ value σc (see Table 4).

The parameters used in the simulations are summarized as follows:

Sensor Performance

Table 2 GPS performance.

Name                            Notation   Value     Unit
Pseudorange (σ)                 σρ         6.6       m
Pseudorange Rate (σ)            σρ̇        0.05      m/s
Clock bias white noise (PSD)    Sb         0.009     m²/Hz
Clock drift white noise (PSD)   Sd         0.0355    (m/s)²/Hz

Table 3 INS performance.

Name                            Notation   Value     Unit
Rate Gyro Bias (σ)              σBω        0.003     deg/hr
Random Walk (√PSD)              √Qω        0.0015    (deg/hr)/√Hz
Accelerometer Bias (σ)          σBa        25×10^-6  g
Random Walk (√PSD)              √Qa        50×10^-6  g/√Hz

Table 4 Imaging sensor performance.

Name                            Notation   Value     Unit
White Noise (σ)                 σc         5×10^-6   m

Initial Error Sigmas

The initial error sigma values used in producing the actual aircraft trajectories and biases are summarized in Table 5. The initial values of the covariance matrix in the Kalman filter are summarized in Table 6. Note that the Kalman filter knows only the mean values of the initial conditions and the covariances of the initial errors.

Table 5 Initial error sigma values.

Notation              Value                            Unit
[δα, δβ, δγ]0         according to [δE1, δE2, δE3]0    rad
[δE1, δE2, δE3]0      [0.0001, 0.0001, 0.002]          rad
[δvN, δvE, δvD]0      [0.02, 0.02, 0.02]               m/s
[δλ, δφ]0             [1.57×10^-7, 1.57×10^-7]         rad
δh0                   1                                m
[Bωx, Bωy, Bωz]0      [σBω, σBω, σBω]                  rad/s
[Bax, Bay, Baz]0      [σBa, σBa, σBa]                  g
b0                    √Sb                              m
d0                    √Sd                              m/s

Table 6 Initial error covariance values.

Notation                Value                      Unit
P0(δα, δβ, δγ)          [δα0², δβ0², δγ0²]         rad²
P0(δvN, δvE, δvD)       [δvN0², δvE0², δvD0²]      (m/s)²
P0(δλ, δφ)              [δλ0², δφ0²]               rad²
P0(δh)                  δh0²                       m²
P0(Bωi); i = x, y, z    σBω²                       (rad/s)²
P0(Bai); i = x, y, z    σBa²                       g²
P0(b, d)                [Sb, Sd]                   m²/Hz, (m/s)²/Hz

Batch Mode Least Squares Initializer

Since the Kalman filter requires reasonably accurate initial ground object coordinates, we initialize the filter after a separate batch process has analyzed the first few images, as described below. Eq. (1) can be written as

$$\frac{1}{k_1}\,M_1\begin{bmatrix} x_{c_1} \\ y_{c_1} \\ -f \end{bmatrix} = \begin{bmatrix} X_T - X_{L_1} \\ Y_T - Y_{L_1} \\ Z_T - Z_{L_1} \end{bmatrix}$$

where $M = T_{ce}^T$ and the subscript 1 represents "the 1st image". Looking at the detailed elements,

$$\frac{1}{k_1}\begin{bmatrix} m_{11_1} & m_{21_1} & m_{31_1} \\ m_{12_1} & m_{22_1} & m_{32_1} \\ m_{13_1} & m_{23_1} & m_{33_1} \end{bmatrix}\begin{bmatrix} x_{c_1} \\ y_{c_1} \\ -f \end{bmatrix} = \begin{bmatrix} X_T - X_{L_1} \\ Y_T - Y_{L_1} \\ Z_T - Z_{L_1} \end{bmatrix}$$

or

$$\frac{1}{k_1}\left(m_{11_1} x_{c_1} + m_{21_1} y_{c_1} - m_{31_1} f\right) = X_T - X_{L_1} \quad (6)$$

$$\frac{1}{k_1}\left(m_{12_1} x_{c_1} + m_{22_1} y_{c_1} - m_{32_1} f\right) = Y_T - Y_{L_1} \quad (7)$$

$$\frac{1}{k_1}\left(m_{13_1} x_{c_1} + m_{23_1} y_{c_1} - m_{33_1} f\right) = Z_T - Z_{L_1} \quad (8)$$

The last equation gives

$$\frac{1}{k_1} = \frac{Z_T - Z_{L_1}}{m_{13_1} x_{c_1} + m_{23_1} y_{c_1} - m_{33_1} f} \quad (9)$$

Substituting this into Eqs. (6) and (7), we have

$$\frac{m_{11_1} x_{c_1} + m_{21_1} y_{c_1} - m_{31_1} f}{m_{13_1} x_{c_1} + m_{23_1} y_{c_1} - m_{33_1} f}\,(Z_T - Z_{L_1}) = X_T - X_{L_1} \quad (10)$$

$$\frac{m_{12_1} x_{c_1} + m_{22_1} y_{c_1} - m_{32_1} f}{m_{13_1} x_{c_1} + m_{23_1} y_{c_1} - m_{33_1} f}\,(Z_T - Z_{L_1}) = Y_T - Y_{L_1} \quad (11)$$

Letting

$$c_{11} = \frac{m_{11_1} x_{c_1} + m_{21_1} y_{c_1} - m_{31_1} f}{m_{13_1} x_{c_1} + m_{23_1} y_{c_1} - m_{33_1} f}, \qquad c_{21} = \frac{m_{12_1} x_{c_1} + m_{22_1} y_{c_1} - m_{32_1} f}{m_{13_1} x_{c_1} + m_{23_1} y_{c_1} - m_{33_1} f}$$

Eqs. (10) and (11) reduce to

$$c_{11}(Z_T - Z_{L_1}) = X_T - X_{L_1}, \qquad c_{21}(Z_T - Z_{L_1}) = Y_T - Y_{L_1}$$

or

$$X_T - c_{11} Z_T = X_{L_1} - c_{11} Z_{L_1}, \qquad Y_T - c_{21} Z_T = Y_{L_1} - c_{21} Z_{L_1}$$

Rearranging them,

$$\begin{bmatrix} 1 & 0 & -c_{11} \\ 0 & 1 & -c_{21} \end{bmatrix}\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix} = \begin{bmatrix} X_{L_1} - c_{11} Z_{L_1} \\ Y_{L_1} - c_{21} Z_{L_1} \end{bmatrix}$$

or equivalently, Ax = b. Since there are three unknowns, (XT, YT, ZT), we need at least three data. One image gives two data; therefore we need at least two images. In general, when we have i images, the above equation becomes

$$\begin{bmatrix} 1 & 0 & -c_{11} \\ 0 & 1 & -c_{21} \\ 1 & 0 & -c_{12} \\ 0 & 1 & -c_{22} \\ \vdots & \vdots & \vdots \\ 1 & 0 & -c_{1i} \\ 0 & 1 & -c_{2i} \end{bmatrix}\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix} = \begin{bmatrix} X_{L_1} - c_{11} Z_{L_1} \\ Y_{L_1} - c_{21} Z_{L_1} \\ X_{L_2} - c_{12} Z_{L_2} \\ Y_{L_2} - c_{22} Z_{L_2} \\ \vdots \\ X_{L_i} - c_{1i} Z_{L_i} \\ Y_{L_i} - c_{2i} Z_{L_i} \end{bmatrix}$$

The least squares solution is given by

$$x = \begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix} = (A^T A)^{-1} A^T b \quad (12)$$

This convenient method allows ground object locations to be estimated roughly without initial estimates.
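A numerical sketch of Eq. (12), assuming the per-image coefficients c1i, c2i and the perspective center coordinates are already available:

```python
import numpy as np

def batch_initialize(c1, c2, XL, YL, ZL):
    """Least squares ground object position from i images, Eq. (12).
    c1, c2: sequences of c_{1i}, c_{2i}; XL, YL, ZL: perspective centers (ECEF)."""
    rows_A, rows_b = [], []
    for c1i, c2i, xl, yl, zl in zip(c1, c2, XL, YL, ZL):
        rows_A.append([1.0, 0.0, -c1i]); rows_b.append(xl - c1i * zl)
        rows_A.append([0.0, 1.0, -c2i]); rows_b.append(yl - c2i * zl)
    A, b = np.array(rows_A), np.array(rows_b)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # solves (A^T A)^-1 A^T b
    return x                                   # [X_T, Y_T, Z_T]
```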

REFERENCES

[1] Ballard, D. H. and Kimball, O. A., "Rigid Body Motion from Depth and Optical Flow," Computer Vision, Graphics, and Image Processing, vol. 22, April 1983, pp. 95-115.
[2] Britting, K. R., Inertial Navigation Systems Analysis, Wiley Interscience, New York, NY, 1971.
[3] Broida, T. J., Chandrashekhar, S. and Chellapa, R., "Recursive 3-D Motion Estimation from a Monocular Image Sequence," IEEE Transactions on Aerospace and Electronic Systems, vol. 26, no. 4, July 1990, pp. 639-656.
[4] Brown, R. G. and Hwang, P. Y. C., Introduction to Random Signals and Applied Kalman Filtering, John Wiley & Sons, Inc., New York, NY, 1997.
[5] Buechler, D. and Foss, M., "Integration of GPS and Strapdown Inertial Subsystems into a Single Unit," Navigation: Journal of The Institute of Navigation, vol. 34, no. 2, Summer 1987, pp. 140-159.
[6] Department of Defense, World Geodetic System 1984, Its Definition and Relationships with Local Geodetic Systems, National Imagery and Mapping Agency Technical Report, 1984.
[7] Department of Defense, Military Standard for Flying Qualities of Piloted Aircraft, 1797A.
[8] Gelb, A., Applied Optimal Estimation, M.I.T. Press, Cambridge, MA, 1974.
[9] Gurfil, P. and Rotstein, H., "Partial Aircraft State Estimation from Visual Motion Using the Subspace Constraints Approach," Journal of Guidance, Control, and Dynamics, vol. 24, no. 5, September-October 2001, pp. 1016-1028.
[10] Hafskjold, B. H., Jalving, B., Hagen, P. E. and Gade, K., "Integrated Camera-Based Navigation," Navigation: Journal of The Institute of Navigation, vol. 53, no. 2, 2000, pp. 237-245.
[11] Hagen, E. and Heyerdahl, E., "Navigation by Images," Modeling, Identification and Control, vol. 14, no. 3, 1993, pp. 133-143.
[12] Heeger, D. J. and Jepson, A. D., "Subspace Methods for Recovering Rigid Motion I: Algorithm and Implementation," International Journal of Computer Vision, vol. 7, no. 2, 1992, pp. 95-117.


[13] IEEE, IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Laser Gyros, IEEE Std. 647-1995, 1995.
[14] Jazwinski, A. H., Stochastic Processes and Filtering Theory, Academic Press, Inc., New York, NY, 1970.
[15] Kaplan, E. D., Understanding GPS: Principles and Applications, Artech House, Norwood, MA, 1996.
[16] Lawrence, A., Modern Inertial Technology, Springer-Verlag, New York, NY, 1992.
[17] Martinez, P. and Klotz, A., A Practical Guide to CCD Astronomy, Cambridge University Press, Cambridge, United Kingdom, 1998.
[18] Mikhail, E. M., Bethel, J. S. and McGlone, J. C., Introduction to Modern Photogrammetry, John Wiley & Sons, Inc., New York, NY, 2001.
[19] Rodriguez, J. J. and Aggarwal, J. K., "Matching Aerial Images to 3-D Terrain Maps," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 12, December 1990, pp. 1138-1149.
[20] Rogers, R. M., Applied Mathematics in Integrated Navigation Systems, AIAA Education Series, AIAA, Reston, VA, 2000.
[21] Soatto, S., Frezza, R. and Perona, P., "Motion Estimation via Dynamic Vision," IEEE Transactions on Automatic Control, vol. 41, no. 3, March 1996, pp. 393-413.
[22] Soatto, S. and Perona, P., "Recursive 3-D Visual Motion Estimation Using Subspace Constraints," International Journal of Computer Vision, vol. 22, no. 3, 1997, pp. 235-259.
[23] Stieler, B. and Winter, H., Gyroscopic Instruments and Their Application to Flight Testing, AGARDograph No. 160, vol. 15, NATO/AGARD, 1982.
[24] Stevens, B. L. and Lewis, F. L., Aircraft Control and Simulation, John Wiley & Sons, Inc., New York, NY, 1992.
[25] Titterton, D. H. and Weston, J. L., Strapdown Inertial Navigation Technology, Peter Peregrinus Ltd., Stevenage, Herts., England, U.K., 1997.
