A DSP EMBEDDED OPTICAL NAVIGATION SYSTEM

K. Gunnam(1), D.C. Hughes(2), J.L. Junkins(3), N. Kehtarnavaz(4)

(1) Department of Electrical Engineering, (2) NASA Commercial Space Center for Engineering, (3) Department of Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA
(4) Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
Email: [email protected]

ABSTRACT

Six degrees of freedom (6DOF) data estimation has a wide range of applications in navigation, proximity operations, manufacturing, and robotic control. This paper presents an optimal and computationally efficient 6DOF estimation algorithm using Modified Rodrigues Parameters. The analytical results of the estimation algorithm and its computational results on a low-power floating-point DSP, the TMS320VC33, are presented. A new approach involving frequency division multiplexing of the beacons for a vision-based navigation system, with demodulation on a low-power fixed-point DSP, the TMS320C55x, is introduced in order to improve the accuracy of the sensor measurements and the reliability of the system.

1. INTRODUCTION

Several research efforts have been carried out in the field of 6DOF estimation for rendezvous and proximity operations. The VISion-based NAVigation (VISNAV) system [1] developed at Texas A&M University aims to achieve better accuracy in 6DOF estimation using a simpler yet robust approach. It uses measurements from a Position Sensitive Diode (PSD) sensor to calculate 6DOF estimates of the sensor location and orientation. The PSD sensor generates four currents whose imbalances are linearly proportional to the azimuth and elevation of the light source with respect to the sensor. The individual currents, which depend on the intensity of the light, can be kept nearly constant by using feedback amplitude control of the light source's output power to accommodate variable received energy due to range dependence and other factors. With four or more light sources (beacons) at known positions in the target frame, the 6DOF data of the sensor attached to the chase frame can be calculated. An electromagnetic signature is given to each beacon (an array of LEDs) by modulating it at a distinct frequency to distinguish target energy from ambient optical energy such as sunlight. The beacons can be operated at a single frequency in time division multiplexing (TDM) mode or at multiple distinct frequencies in frequency division multiplexing (FDM) mode for identification purposes. Demodulation of the sensor currents to recover the signal strengths is done via analog circuitry in the present TDM version of VISNAV. In the new version of VISNAV, FDM is employed along with demodulation on a fixed-point Digital Signal Processor (DSP). A floating-point DSP computes the 6DOF estimates from the PSD normalized voltages, which are in turn calculated from the sampled demodulated currents.

Experimental results show that a beacon's line-of-sight vector can be determined with an accuracy of approximately one part in 2000 of the sensor field-of-view (90-degree cone) angle at a distance of 30 m, with an update rate of 100 Hz. Six-degree-of-freedom navigation accuracies of better than 2 mm in translation estimates and better than 0.01 degrees in orientation estimates at rendezvous, updated at 100 Hz, are routinely possible by solving the colinearity equations for four or more beacons. Processing these 6DOF estimates with a Kalman filter permits estimation of relative motion velocity and acceleration with heretofore unachievable accuracy. Autonomous rendezvous and docking operations will be enabled as a direct consequence of precise, reliable, high-bandwidth proximity navigation. The beacon electrical power requirement is less than 15 W for a range of 25 m. The VISNAV system has several advantages: small sensor size, wide sensor field of view, no time-consuming image processing, relatively simple electronics, and very small errors in the 6DOF data, of the order of 2 mm in position estimates and 0.01 degrees in attitude estimates at rendezvous.
Among other competitive systems, Differential GPS [6] is limited to mid-range accuracies and lower bandwidth and requires complex infrastructure; other sensing systems [3,4,5] based on lasers, CCDs, and high-speed cameras require complex image processing and target identification, with associated occasional failures. These systems typically have slower update rates, limited by camera frame rates and image processing. Different applications of VISNAV, such as formation flying and aerial refueling, are discussed in [7,8], and more information can be found at the VISNAV Lab website [10].

2. SYSTEM DESCRIPTION

The PSDs are fast relative to even high-speed cameras, having rise times of about five microseconds. This permits light sources to be structured in the frequency domain and radar-like signal processing methods to be used to discriminate target energy in the presence of highly cluttered ambient optical scenes. If a single beacon is excited by a sinusoidal oscillator operating at a frequency f_c, the emitted light induces sinusoidal currents of frequency f_c at the four terminals of the PSD sensor. The closer the incident light centroid is to a particular terminal, the larger the portion of the current that flows through that terminal. The normalized voltages computed from the PSD imbalance currents due to the i-th target,

V_y = K (I_right − I_left) / (I_right + I_left),
V_z = K (I_up − I_down) / (I_up + I_down),

are mapped to the horizontal and vertical displacement estimates (y_i, z_i) of that target beacon's image spot with respect to the sensor frame. This mapping compensates primarily for lens-induced distortion and may employ Chebyshev polynomials or some other interpolation technique.
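As a minimal sketch of the imbalance-to-voltage computation (the current values below are hypothetical, chosen only to illustrate the sign convention):

```python
def psd_normalized_voltages(i_right, i_left, i_up, i_down, k=1.0):
    """Normalized PSD voltages from the four demodulated terminal
    currents; the imbalance ratios track the image-spot centroid."""
    v_y = k * (i_right - i_left) / (i_right + i_left)
    v_z = k * (i_up - i_down) / (i_up + i_down)
    return v_y, v_z

# Hypothetical demodulated currents (in microamps) for one beacon:
vy, vz = psd_normalized_voltages(1.2, 0.8, 1.0, 1.0)
# Spot centroid is offset toward the "right" terminal: vy > 0, vz = 0.
```

Because the ratios are normalized by the total current, the result is insensitive to the overall received intensity, which is what allows the feedback amplitude control described in the introduction to work with a simple linear mapping.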

2.1. System Equations

The sensor electronics, the ambient light sources, and the inherent properties of the sensor produce noise in the PSD measurements. This noise can be modeled as zero-mean Gaussian noise. The measurement model h_i(x) is

h_i(x) = h̃_i(x) + v_i,    (1)

where

x = [L : O]^T    (2)

is the state vector of the sensor, L = [X_c Y_c Z_c]^T is the position vector, and O is the orientation vector, which depends on the attitude parameters used. h̃_i(x) is the ideal measurement model and is a function of y_i(x) and z_i(x):

h̃_i(x) = F(y_i(x), z_i(x)),    (3)

and v_i is the Gaussian measurement noise with covariance

R_i = E{v_i v_i^T}.    (4)

The use of a Gaussian Least Squares Differential Correction (GLSDC) algorithm to determine the states, attitude and position, gives the best geometric solution in the least-squares sense upon convergence through iterations [1]. The position and attitude estimates are refined through iterations of GLSDC as it minimizes the weighted sum of squares

J = (1/2) (h − h̃)^T W (h − h̃),    (5)

where W is the weighting matrix and

W_{i,j} = 1 / R_{i,j},   R_{i,j} = E{v_i v_j^T}.    (6)

[Figure 1: Sensor Geometry — the PSD sensor and lens, fixed in the chase spacecraft, image the active beacons (Beacon 1 ... Beacon N) fixed in the target spacecraft.]

Here V_y, V_z are the normalized voltages; X_i, Y_i, Z_i are the known object-space coordinates of the target light source (i-th beacon); X_c, Y_c, Z_c are the unknown object-space coordinates of the sensor attached to the chase vehicle; C is the unknown direction cosine matrix of the image-space coordinate frame with respect to the object-space coordinate frame; F_y, F_z are the lens calibration maps; f is the focal length of the sensor lens; and y_i, z_i are the ideal image spot centroid coordinates in the image-space coordinate frame for the i-th target light source.

From the geometric colinearity equations, the ideal image centroid coordinates are given by [1]:

y_i = g_{y_i}(X_i, Y_i, Z_i, X_c, Y_c, Z_c, C)
    = −f [C_21(X_i − X_c) + C_22(Y_i − Y_c) + C_23(Z_i − Z_c)] / [C_11(X_i − X_c) + C_12(Y_i − Y_c) + C_13(Z_i − Z_c)],

z_i = g_{z_i}(X_i, Y_i, Z_i, X_c, Y_c, Z_c, C)
    = −f [C_31(X_i − X_c) + C_32(Y_i − Y_c) + C_33(Z_i − Z_c)] / [C_11(X_i − X_c) + C_12(Y_i − Y_c) + C_13(Z_i − Z_c)].    (7)
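A direct transcription of the colinearity projection is short enough to sketch. This is illustrative only; it assumes the convention that the first row of C lies along the sensor boresight, with rows two and three giving the image-plane axes:

```python
import numpy as np

def ideal_image_coords(C, f, beacon, sensor):
    """Ideal image-spot centroid (y_i, z_i) from the colinearity
    equations: rows 2 and 3 of C project the beacon-to-sensor
    baseline, normalized by the row-1 (boresight) component."""
    d = np.asarray(beacon, float) - np.asarray(sensor, float)
    den = C[0] @ d
    return -f * (C[1] @ d) / den, -f * (C[2] @ d) / den

# Identity attitude, beacon on the boresight axis: the image spot
# falls at the sensor-frame origin.
y, z = ideal_image_coords(np.eye(3), 0.02, [10.0, 0.0, 0.0], [0.0, 0.0, 0.0])
```

Moving the beacon off-axis or rotating C away from identity shifts (y, z) away from the origin, which is exactly the signal the GLSDC iteration inverts.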

An alternate representation of the above equations can be written in unit line-of-sight vector form as [2]

b_i3 = C r_i,    (8)

where

b_i3 = (1 / sqrt(f^2 + y_i^2 + z_i^2)) [−f  y_i  z_i]^T    (9)

are the sensor-frame unit vectors, and

r_i = (1 / d_i) [(X_i − X_c)  (Y_i − Y_c)  (Z_i − Z_c)]^T    (10)

are the object-frame unit vectors, with

d_i = sqrt((X_i − X_c)^2 + (Y_i − Y_c)^2 + (Z_i − Z_c)^2).    (11)

We have choices to make in the coordinate representation of C, the direction cosine matrix, in terms of different attitude parameterizations such as Euler angles and Modified Rodrigues Parameters (MRPs). MRPs are derived from quaternions and yield better results in terms of linearity, since they linearize like quarter-angles instead of the quaternions' half-angles [2]. The modified Rodrigues parameters are defined as

p = e tan(Φ/4),    (12)

where e = [e_1 e_2 e_3]^T is the rotation axis and Φ is the principal rotation angle. The direction cosine matrix, a 3×3 matrix, in terms of the modified Rodrigues parameters is [2]

C = I_{3×3} + [8 (p×)^2 − 4 (1 − p^T p)(p×)] / (1 + p^T p)^2,    (13)

with

p× = [ 0  −p_3  p_2 ;  p_3  0  −p_1 ;  −p_2  p_1  0 ].    (14)

2.2. Measurement Models

We can use the y_i, z_i parameters in (7), or the normalized parameters in (9), as the parameters of the measurement model in (1). These two models will be referred to for convenience as MRP-B2 and MRP-B3n, respectively, in this paper. The simulations showed that the model MRP-B2 gives less accurate results, as the parameter f carries important information when the sensor is very near the target, even though it achieves computational savings of up to 20% compared to the model MRP-B3n. We propose a new measurement model, referred to as MRP-B2n in this paper, which uses only two parameters but normalizes them to obtain better convergence:

h_i = (1 / sqrt(f^2 + y_i^2 + z_i^2)) [y_i  z_i]^T.    (15)

Since f is constant, there is no need to carry it as a parameter, and this eliminates redundancy in the calculations: the measurement sensitivity matrix H in (23) becomes 2N × 6 instead of 3N × 6, where N is the number of beacons, and the weighting matrix W in (5) becomes 2N × 2N instead of 3N × 3N. The model can be represented as

h_i(x) = D r_i,    (16)

where

D_{j,k} = C_{j+1,k},   j = 1, 2;  k = 1, 2, 3.    (17)

This new model MRP-B2n achieves the same convergence results as MRP-B3n while achieving computational savings of up to 20% compared to MRP-B3n. The simulation results are shown in Table 1.

The measurement sensitivity matrix for the i-th beacon, H_i, is obtained by partial differentiation of the measurement model with respect to the state vector x:

H_i = ∂h_i/∂x = [ ∂h_i/∂L   ∂h_i/∂O ],    (18)

∂h_i/∂L = −D {I_{3×3} − r_i r_i^T} / d_i,    (19)

∂h_i/∂O = [4 / (1 + p^T p)^2] [S] {(1 − p^T p) I_{3×3} − 2[p×] + 2 p p^T},

where

S = [ s_3  0  −s_1 ;  −s_2  s_1  0 ],   s = [s_1 s_2 s_3]^T = C r_i.    (20)

The actual measurement matrix for N beacons using (15) is

b = [h_1^T ... h_N^T]^T.    (21)

The estimated ideal measurement matrix, using the estimated position and orientation in the colinearity equation (16), is

b̃ = [h̃_1^T ... h̃_N^T]^T,    (22)

and the measurement sensitivity matrix is

H = [H_1^T; H_2^T; ...; H_N^T]^T.    (23)

2.3. GLSDC Algorithm

The initial state before iteration is

x̂_{k,0} = x̂_{k−1}   (if k = 0, guess x̂_{k,0}).    (24)

Iterate using the following procedure, where P_{k,i} is the covariance and H_{k,i} = H in (23) at the i-th iteration of the k-th time step, and W_k = W in (5), b_k = b in (21), and b̃_k = b̃ in (22) at the k-th time step:

P_{k,i} = (H_{k,i}^T W_k H_{k,i})^{−1},
Δx̂_{k,i} = P_{k,i} H_{k,i}^T W_k (b_k − b̃_{k,i}),    (25)
x̂_{k,i+1} = x̂_{k,i} + Δx̂_{k,i}.

Stop iterating when one of these conditions is met:
1. The states are no longer improved by the iteration.
2. The number of iterations reaches the allowable limit.

This Gaussian Least Squares Differential Correction algorithm is exceedingly robust when four or more beacons are measured, except near certain geometric conditions that are rarely encountered. An extended Kalman filter can be applied to smooth these 6DOF estimates and to obtain the velocities and accelerations of the sensor. The combination of this GLSDC algorithm and a Kalman filter to obtain the complete navigation solution is more robust, across different initial errors and geometric conditions, than a sequential estimation process using a simple Kalman filter or a predictive filter.
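The GLSDC loop above can be sketched end to end in a few lines. This is an illustrative stand-in rather than the flight code: the beacon layout and true state are made up, W is taken as identity, and a forward-difference Jacobian replaces the analytic H of (18)-(20); the MRP-to-DCM conversion follows (13)-(14).

```python
import numpy as np

def mrp_to_dcm(p):
    """Direction cosine matrix from modified Rodrigues parameters, eq (13)."""
    p = np.asarray(p, float)
    px = np.array([[0.0, -p[2], p[1]],
                   [p[2], 0.0, -p[0]],
                   [-p[1], p[0], 0.0]])
    s = p @ p
    return np.eye(3) + (8.0 * px @ px - 4.0 * (1.0 - s) * px) / (1.0 + s) ** 2

def measurements(x, beacons):
    """Stacked unit line-of-sight measurements b_i3 = C r_i, eq (8)."""
    C = mrp_to_dcm(x[3:])
    rows = [C @ ((b - x[:3]) / np.linalg.norm(b - x[:3])) for b in beacons]
    return np.concatenate(rows)

def glsdc(x0, beacons, b_meas, iters=25, eps=1e-6):
    """Iterative differential correction of eq (25), with W = I and a
    numerical sensitivity matrix in place of the analytic H."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        resid = b_meas - measurements(x, beacons)
        H = np.zeros((resid.size, 6))
        for j in range(6):  # forward-difference columns of H
            dx = np.zeros(6)
            dx[j] = eps
            H[:, j] = (measurements(x + dx, beacons) - measurements(x, beacons)) / eps
        step = np.linalg.solve(H.T @ H, H.T @ resid)
        x = x + step
        if np.linalg.norm(step) < 1e-12:  # stop when states no longer improve
            break
    return x

# Four beacons in the target plane (hypothetical layout), a true state,
# and noise-free synthetic measurements:
beacons = np.array([[0.0, 1.0, 1.0], [0.0, -1.0, 1.0],
                    [0.0, 1.0, -1.0], [0.0, -1.0, -1.0]])
x_true = np.array([5.0, 0.2, -0.1, 0.02, -0.01, 0.03])  # [Xc Yc Zc p1 p2 p3]
b_meas = measurements(x_true, beacons)
x_hat = glsdc(np.array([4.5, 0.0, 0.0, 0.0, 0.0, 0.0]), beacons, b_meas)
```

With noise-free measurements, the iteration recovers the true position and MRPs to roughly the accuracy of the numerical Jacobian; with noisy measurements, the W = R^{-1} weighting and the stopping conditions of Section 2.3 apply as described.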

[Figure 2: Signal Processing Schematic]

2.4. Considerations for FDM

If the beacons are operated sequentially, the beacon measurements can be modeled as

b_i = D r_i + v_i + ε_i = b̃_i(x) + v_i + ε_i,    (26)

where

ε_i = H_i Δx_i    (27)

and Δx_i is the change in position and orientation from the ideal position and orientation, respectively, due to sensor relative movement that may have occurred while the i-th beacon measurement was being taken. If FDM is used for the beacons, the error ε, which is significant when the sensor relative velocity exceeds 10 m/s, can be eliminated from the beacon measurements. In the TDM case, the low-pass filter (LPF) after demodulation needs a pass band of about N × 100 Hz (100 Hz being the 6DOF data update rate), whereas in FDM it needs only 100 Hz; this yields an N-factor improvement in the signal-to-noise power ratio (SNR) of the beacon currents. To achieve the same N-factor SNR improvement in TDM mode, the peak power of the LED beacons would have to be increased by N so that the average power matches the FDM mode; this may not be a viable solution, as the increased peak power requirement leads to a large number of LEDs in the beacons.

3. SIGNAL PROCESSING

Various estimation algorithms, such as GLSDC with the attitude represented in Euler angles [1] (referred to as Euler-2 in this paper) and in MRPs, have been developed and implemented on the floating-point DSP TMS320VC33 (VC33). In addition, different sensor distortion correction techniques, such as the Chebyshev polynomial and nearest-neighbor approaches, are also implemented on the VC33. For the new approach of operating beacons in FDM mode, a low-power fixed-point DSP TMS320C55x is used for beacon separation and demodulation, as in Figure 2. A synchronous analog-to-digital converter samples the sensor's four currents and feeds the TMS320C55x (C55x). Each current has frequency components corresponding to the frequencies of the different beacons.
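The beacon-separation idea can be sketched on synthetic data. The carrier spacing here matches the 0.5 kHz VISNAV channel plan and the sample rate follows the f_s = 2 f_MAX note in Table 2; the amplitudes, noise level, and block-averaging "LPF" are illustrative simplifications:

```python
import numpy as np

fs = 210_000                         # sample rate: 2 * f_MAX = 210 kHz
t = np.arange(int(0.05 * fs)) / fs   # 50 ms of one simulated PSD terminal current

# Two beacon carriers 0.5 kHz apart, each scaled by its (here constant)
# beacon-current amplitude, plus additive sensor noise:
a1, a2 = 0.7, 0.3
x = a1 * np.cos(2 * np.pi * 48_500 * t) + a2 * np.cos(2 * np.pi * 49_000 * t)
x = x + 0.01 * np.random.default_rng(0).standard_normal(t.size)

def sync_demod(x, fc, fs, block):
    """Synchronous demodulation: mix with the beacon carrier, then
    low-pass by block averaging (a crude stand-in for the 100 Hz LPF)."""
    mixed = 2.0 * x * np.cos(2 * np.pi * fc * np.arange(x.size) / fs)
    n = x.size // block * block
    return mixed[:n].reshape(-1, block).mean(axis=1)

# Averaging 2100-sample blocks gives a 100 Hz output rate at fs = 210 kHz;
# each output sample estimates one beacon's carrier amplitude.
amp1 = sync_demod(x, 48_500, fs, 2100)
amp2 = sync_demod(x, 49_000, fs, 2100)
```

Mixing shifts the selected beacon's component to DC while the neighboring carrier lands at the 0.5 kHz channel spacing, outside the 100 Hz output band, so the averaging step separates the beacons as well as demodulating them.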
The carrier frequencies for the 8-beacon case are chosen to start at 48.5 kHz, with an inter-channel separation of 0.5 kHz, in order to distinguish them from low-frequency background noise. The current components due to each beacon can be modeled analogously to an amplitude-modulated carrier with a 100 Hz bandwidth, except that here the instantaneous amplitude of the carrier conveys the important information. Techniques similar to AM demodulation with multi-rate processing [9] can therefore be used on the C55x DSP to demodulate the sensor currents; the different methods are contrasted in Table 2. The output SNR of the demodulation system can be derived easily by noting that its noise performance is determined mainly by the low-pass filter (LPF) following the demodulator. Heterodyning mixes out-of-band noise into the band of interest and thereby decreases the SNR by a factor of 2. The C55x communicates the demodulated beacon currents to the VC33 for the subsequent navigation solution. The VC33 communicates the navigation solution to the flight control computer through a serial data port and also sends commands through a wireless link to the microcontroller to provide amplitude feedback control of the beacon intensity. With this scheme, there is no need for analog demodulation circuitry. The new system is easily reprogrammed and is insensitive to temperature variations and aging effects, since the majority of the signal processing is done in the digital domain. This novel design is both simpler and more accurate.

4. SIMULATION RESULTS

To test the 6DOF estimation algorithm, a trajectory is simulated as shown in Figure 3; the simulated VISNAV sensor measurements are calculated from the ideal forward model of the colinearity equations, and noise is added to the measurements for different SNRs.

Figure 3: Simulated flight trajectory and the beacon locations on the chase craft.

Table 1: 6DOF Estimation

GLSDC     Flops   Robustness   VC33 clock cycles
Euler-2   0.8M    Low          32e3
MRP-3n    1M      High         49e3
MRP-2n    0.8M    High         39e3
MRP-2     0.8M    Low          39e3

VC33 results are for the 8-beacon case, for a single iteration.

Table 2: Demodulation Methods

Method                   SNRo (dB)   SNRo(1)    Nc(2)
Synchronous Demod (SD)   79.9        SNR_L      0.6
Heterodyning & SD        77.0        SNR_L/2    0.2
Envelope Demod (ED)      79.5        SNR_L      >1
Heterodyning & ED        76.9        SNR_L/2    0.4

(1) SNR_L = SNR_i · f_MAX / f_LPF, with f_MAX = 105 kHz, f_LPF = 100 Hz, SNR_i = 50 dB (so SNR_L = 80 dB), f_s = 2 f_MAX, N = 8.
(2) On a 200 MHz C55x, the number of clocks available per second is N_ca = 200e6; N_cn is the number of clocks needed per second; N_c = N_cn / N_ca.
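The SNR_L figure in the footnote is just the LPF processing gain applied to the input SNR; a quick check in decibels, using the footnote's values:

```python
import math

snr_i_db = 50.0                    # input SNR from the Table 2 footnote
f_max, f_lpf = 105_000.0, 100.0    # pre-demodulation vs. post-LPF noise bandwidth

# Narrowing the noise bandwidth from f_MAX to f_LPF buys the ratio as gain:
snr_l_db = snr_i_db + 10.0 * math.log10(f_max / f_lpf)

# Heterodyning folds out-of-band noise into the band of interest,
# costing the factor of 2 (about 3 dB) seen in the SD vs. heterodyne rows:
snr_het_db = snr_l_db - 10.0 * math.log10(2.0)
```

This reproduces the quoted SNR_L of about 80 dB and the roughly 3 dB penalty separating the heterodyning rows from their direct-demodulation counterparts.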

5. CONCLUSIONS

An optimal and computationally efficient 6DOF estimation algorithm using Modified Rodrigues Parameters is presented. This algorithm is exceedingly robust when there are four or more line-of-sight measurements, except near certain geometric conditions that are rarely encountered. A new method for operating the beacons and demodulating the beacon currents for the VISNAV system is introduced.

6. REFERENCES

[1] J.L. Junkins, D.C. Hughes, K.P. Wazni, and V. Pariyapong, "Vision-Based Navigation for Rendezvous, Docking, and Proximity Operations," 22nd Annual AAS Guidance and Control Conference, February 1999, AAS 99-021.
[2] J.L. Junkins and H. Schaub, Analytical Mechanics of Aerospace Systems, January 1999.
[3] R. Howard, T. Bryan, M. Book, and R. Dabney, "The Video Guidance Sensor - A Flight Proven Technology," 22nd Annual AAS Guidance and Control Conference, February 1999, AAS 99-025.
[4] P. Calhoun and R. Dabney, "A Solution to the Problem of Determining the Relative 6 DOF State for Spacecraft Automated Rendezvous and Docking," Proceedings of SPIE Space Guidance, Control, and Tracking II, pp. 175-184, 17-18 April 1995, Orlando, Florida.
[5] S.D. Lindell and W.S. Cook, "Closed-loop Autonomous Rendezvous and Capture Demonstration Based on Optical Pattern Recognition," 22nd Annual AAS Guidance and Control Conference, February 1999, AAS 99-027.
[6] M. Mokuno, I. Kawano, and T. Kasai, "Experimental Results of Autonomous Rendezvous Docking on Japanese ETS-VII Satellite," 22nd Annual AAS Guidance and Control Conference, February 1999, AAS 99-022.
[7] R. Alonso, J.-Y. Du, D. Hughes, J.L. Junkins, and J.L. Crassidis, "Relative Navigation for Formation Flight of Spacecraft," Proceedings of the Flight Mechanics Symposium, NASA Goddard Space Flight Center, Greenbelt, MD, June 2001.
[8] J. Valasek, J. Kimmett, D. Hughes, K. Gunnam, and J. Junkins, "Vision Based Sensor and Navigation System for Autonomous Aerial Refueling," AIAA-2002-3441, 1st AIAA Unmanned Aerospace Vehicles, Systems, Technologies, and Operations Conference, Portsmouth, VA, 20-22 May 2002.
[9] P.P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice Hall, Englewood Cliffs, NJ, 1993.
[10] VISNAV Lab Website: http://jungfrau.tamu.edu/~html/VisionLab/
