"Vision-Based Navigation for Rendezvous, Docking and Proximity Operations," John L. Junkins, Declan C. Hughes, Karim P. Wazni, Vatee Pariyapong, Aerospace Engineering Department, Texas A&M University
Dynamics and Control of Space Structures: 4th International Conference, College of Aeronautics, Cranfield University, May 24-28, 1999
AAS 99-021
VISION-BASED NAVIGATION FOR RENDEZVOUS, DOCKING AND PROXIMITY OPERATIONS

John L. Junkins, Declan C. Hughes, Karim P. Wazni, Vatee Pariyapong
Center for Mechanics and Control, Aerospace Engineering Department, Texas A&M University

A novel approach for vision sensing and vision-based proximity navigation of spacecraft is presented. We have recently invented a new sensor which utilizes area Position Sensing Diode (PSD) photodetectors in the focal plane of an omni-directional camera. These analog detectors inherently centroid incident light, from which a line of sight vector can be determined. PSDs are relatively fast compared to even high speed cameras, having rise times of about five microseconds. This permits light sources to be structured in the frequency domain and utilization of radar-like signal processing methods to discriminate target energy in the presence of even highly cluttered ambient optical scenes. We have developed the basic concepts, designed first generation vision sensors based on this approach, and carried out proof-of-concept experimental studies. Our results show that a beacon's line of sight vector can be determined with an accuracy of one part in 5,000 (of the sensor field of view angle) at a distance of 30m with an update rate of 50 Hz. In practice, measured directions toward four or more beacons would typically be used. We have also developed an associated six degree of freedom navigation algorithm that is readily applicable to rendezvous, docking, and proximity operations; we have verified that this algorithm is robust and compatible with real-time, on-board computational constraints. This paper summarizes analytical, computational, and laboratory experimental results supporting the efficacy and practicality of this approach.
1 Introduction
In rigid body dynamics, the determination of a body frame (position and orientation) in a given reference frame is possible if at least three points of the body frame are known in the given frame. The vision based navigation (VISNAV) system proposed in this paper implements this fundamental truth. Target lights are fixed in a frame A (embedded in the target spacecraft), with known positions in this frame, and an optical sensor is attached rigidly to a frame B (embedded in the chase spacecraft). The sensor computer then orchestrates the lights, turning them on alternately, and measures angles toward their lines of sight every time it detects them. Consequently, with a modest amount of computation, making use of a Gaussian Least Squares Differential Correction process, one is able to recover the relative position and orientation of frames A and B. Using an extended Kalman filter, we can also derive optimal estimates of rates and accelerations of relative linear and angular motion. The system described in this paper comprises an optical sensor of a new kind combined with specific light sources (beacons) in order to achieve a selective or "intelligent" vision. The sensor is made up of a Position Sensing Diode (PSD) placed in the focal plane of a wide-angle lens. When the rectangular silicon area of the PSD is illuminated by energy from a beacon
focused by the lens, it generates electrical currents in four directions that can be processed with appropriate electronic equipment. The current imbalances generated are almost linearly proportional to the location of the centroid of the light beam on the PSD area. While the individual currents depend on the intensity of the light, their imbalance is only weakly dependent on the light intensity. The idea behind the concept of intelligent vision is that the PSD can be made to see only specific light sources, thanks to frequency domain structuring of the target lights and some relatively simple analog signal processing. This is achieved using modulation/demodulation. The light is produced by LEDs modulated at an arbitrary frequency, while the currents generated are driven through an active filter set to that very same frequency. Calculating the current imbalances then yields two analog signals directly related, in a quasi-linear fashion, to the coordinates locating the centroid of that light's energy distribution on the PSD, and therefore to the incident angle of this light on the wide-angle lens. However, as mentioned earlier, different intensities produce different currents. The optimal solution lies in using feedback amplitude control of the light source output power. Variations of this VISNAV system are described in our recent Patent Disclosure [1].
2 Hardware
The VISNAV hardware prototype consists of a set of beacons each radiating bursts of light modulated at a carrier frequency of 38.4 kHz, a small position sensing diode (PSD) `camera' that senses these light packets, and controlling electronics for both the beacons and the PSD sensor. Figure 1 shows a schematic of a VISNAV configuration using three beacons, and figure 2 is a schematic flow diagram that indicates the major electronic hardware subsystems. The sensor Digital Signal Processor (DSP, see top LHS of fig. 2) decides which beacon is to be switched on and at what intensity, and this information is then relayed in serial form to the beacon controller via an infrared or radio beam. The beacon controller then commands the appropriate light via an analog switch, and the latter radiates light amplitude modulated at the carrier frequency. If this energy source is within the 90 degree field of view of the PSD sensor, then four currents are generated at the terminals of the PSD, each also varying sinusoidally at 38.4 kHz (see fig. 1). The sensor electronics processes these currents, removing the 38.4 kHz carrier, and passes the filtered results back to the DSP. From the imbalance in these signals, the direction of the incident light (and therefore of the corresponding beacon) can be determined. Once four or more sets of beacon data have been collected, the navigation algorithm running on the DSP can compute the current position and attitude of the sensor with respect to the object space frame of reference in the target spacecraft. The overall data update rate is 50 Hz for the current spacecraft VISNAV experiment; this is sufficient for controlling most anticipated proximity operations. We now describe some features of the VISNAV hardware and software.
2.1 PSD Sensor
The PSD is a single silicon photodiode with an active area of 20mm × 20mm. This diode is reverse voltage biased in order to obtain the necessary signal bandwidth (currently approximately 100 kHz). Four leads are attached, two to each side of the semiconductor diode. When photons strike the PSD sensor active area, electrical currents are generated that flow through the four terminals. The closer the incident light centroid is to a particular terminal, the larger the portion of current that flows through that lead. Comparison of these four currents then determines the centroid location of the incident light. With reference to figure 1, the following (unitless) normalized voltages are defined:

Vy = K (Iright − Ileft) / (Iright + Ileft)                               (1)

Vz = K (Iup − Idown) / (Iup + Idown)                                     (2)
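As a concrete sketch, equations (1)-(2) amount to a few lines of arithmetic. The function below is illustrative rather than the sensor firmware, and the example currents are invented numbers:

```python
# Sketch (not the authors' code): normalized PSD voltages from the four
# terminal currents, per equations (1)-(2). K = 1 ohm as in the paper.
def normalized_voltages(i_right, i_left, i_up, i_down, k=1.0):
    """Return (Vy, Vz): quasi-linear measures of the light-centroid position."""
    vy = k * (i_right - i_left) / (i_right + i_left)
    vz = k * (i_up - i_down) / (i_up + i_down)
    return vy, vz

# A centroid nearer the "right" terminal drives Vy positive:
vy, vz = normalized_voltages(i_right=3.0, i_left=1.0, i_up=2.0, i_down=2.0)
# vy = 0.5, vz = 0.0
```

Note that scaling all four currents by a common intensity factor leaves (Vy, Vz) unchanged, which is the weak intensity dependence the text describes.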
[Figure 1 (schematic): image space axes (x, y, z) on the PSD sensor attached to body B; the two-dimensional PSD with terminal currents I_up, I_down, I_left, I_right; the wide-angle pinhole lens at (Xc, Yc, Zc, φ, θ, ψ); and beacons 1-3 at object space locations (Xi, Yi, Zi), attached to body A with object space origin (X0, Y0, Z0).]

Figure 1: PSD Sensor Pin-hole Model
where K is a constant of value 1 ohm. The closer the incident light centroid is to the right, the greater Iright and the lesser Ileft, and so Vy increases. The maximum value of Vy is reached when Ileft drops to zero, in which case Vy becomes +1. The minimum value of Vy is similarly seen to be −1. In practice (because we restrict the field of view so that the entire point spread function of the incident light lies on the detector) the light centroid never fully reaches either extreme of full right or full left, and Vy is found to vary between approximately ±0.63, giving a full normalized voltage range of about 1.26. Figure 1 has a pin-hole representation of the wide angle lens used: light approaching from the left falls on the right-hand side and generates a positive Vy. In this way Vy is an indication of the angle the incident light beam makes about the object space z axis. Similarly, Vz is determined by the angle that the incident light beam makes about the object space y axis. To a first order approximation Vy and Vz are proportional to (y, z), the light beam centroid location on the PSD, and so computing these normalized voltages constitutes a beam centroid measurement; this relationship is more linear the closer the focused light beam falls to the PSD center. However, to obtain the maximum accuracy from such a PSD one must measure with high precision the nonlinear relationship between (Vy, Vz) and (y, z), and this leads to the calibration procedure outlined later in this paper.
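Under the first-order linear model just described (which the full calibration procedure replaces), the voltages map to a focal-plane centroid and hence a line-of-sight vector. The sketch below assumes the paper's stated numbers (20mm active area, |V| ≈ 0.63 at the edges, 12mm focal length) and a simple pinhole geometry; it is an illustration, not the calibrated mapping:

```python
import math

# Sketch under a first-order linear PSD model: map normalized voltages to a
# focal-plane centroid and a unit line-of-sight vector. Assumed constants:
# 20 mm square active area, |V| <= 0.63 at the edges, 12 mm focal length.
V_MAX, HALF_WIDTH_MM, FOCAL_MM = 0.63, 10.0, 12.0

def line_of_sight(vy, vz):
    y = (vy / V_MAX) * HALF_WIDTH_MM      # centroid y on the PSD [mm]
    z = (vz / V_MAX) * HALF_WIDTH_MM      # centroid z on the PSD [mm]
    v = (FOCAL_MM, -y, -z)                # pinhole model: image is inverted
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)        # unit vector, x along boresight

los = line_of_sight(0.0, 0.0)             # zero voltages -> boresight
```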
2.2 Wide Angle Lens
A wide angle lens is used to collect light from a cone of angle 90 degrees and focus the incident energy onto the PSD. Currently a single-piece 18mm diameter aspheric lens is used that has a focal length of 12mm. A new lens with a wider diameter of approximately 21mm and a shorter focal length of 11mm is presently being tested; this should capture more light and keep more of this light on the PSD active area (to improve signal-to-noise ratios beyond a 35m range). Fresnel lenses may also be used, as only the PSD incident light centroid is of interest, and optical clarity (as required by a conventional camera) is not needed. A narrow bandpass color filter (centered on the color of the beacon energy, 670nm in our case) is placed behind the lens and just in front of the PSD in order to reject most ambient light and protect the silicon PSD from harmful light energy densities. This also helps to reduce noise from ambient light sources.
[Figure 2 (schematic): the sensor electronics — position sensing diode (PSD), bandpass filter, rectifier, lowpass filter, and DSP microprocessor — producing the sensor coordinates (Xc, Yc, Zc, φ, θ, ψ); a frequency-shift-keyed (FSK) serial data stream linking the sensor DSP to the target microprocessor and beacon controller, whose analog switch and 38.4 kHz oscillator drive the light emitting diode (LED) arrays of beacons #1 ... #n, each with a monitoring photodiode.]

Figure 2: VISNAV Electronic Schematic
2.3 Beacons
Figure 3 shows a schematic diagram of one of the VISNAV beacons. Here each beacon is an array of light emitting diodes (LEDs) radiating energy over nearly a hemisphere and driven with the same current, varying sinusoidally at the carrier frequency. The number of LEDs in this array will depend upon cost, the type of LED, the required system signal-to-noise ratio, the maximum operating distance, and so on. The beacons in the following simulations are assumed to radiate at a power level of approximately 1W, and this requirement can be satisfied by one hundred 10mW LEDs mounted in an area of 4 square inches or less. Suitable LEDs are widely available and inexpensive, making this approach attractive. The peak-to-peak value of the applied LED current is determined by the sensor DSP unit and set by the beacon controller. Current control is used because LEDs provide a light intensity response that is approximately linearly related to the LED current, whereas the voltage-to-light-intensity relationship is very nonlinear, and much energy would be wasted in harmonics of the carrier frequency. We have found a further refinement useful: adding a photodiode that feeds back a signal proportional to the emitted light intensity allows better control of the actual light intensity, with minimal harmonic generation. This technique is standard in laser diode drivers, where accurate light intensity control is very important for reliable operation. A typical beacon LED wavelength would be 670nm (red) or 940nm (IR). A silicon PSD has a maximum response at a wavelength of approximately 940nm; however, in some cases visible LEDs might be preferred. The practical wavelength (λ) range for our detector is about 450nm ≤ λ ≤ 950nm.
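The intensity feedback described here (and in the beacon orchestration section, which targets a peak PSD current near 70% of amplifier saturation) can be sketched as a simple proportional update. The function, its gain, and the ramp-up rule are hypothetical illustrations, not the flight logic:

```python
# Sketch (hypothetical controller, not the flight code): one step of beacon
# intensity feedback. The commanded drive level is nudged so the peak PSD
# current sits near a target fraction of the amplifier saturation level.
def next_intensity(commanded, peak_psd, saturation, target_frac=0.7, gain=0.5):
    """Return an updated beacon drive level from the last observed peak current."""
    if peak_psd <= 0:
        return commanded * 2.0          # no signal seen: ramp up (assumed rule)
    error = target_frac * saturation - peak_psd
    return max(0.0, commanded * (1.0 + gain * error / saturation))

# Peak current above the 70% target -> the drive level is reduced:
new = next_intensity(commanded=1.0, peak_psd=0.9, saturation=1.0)
# new = 0.9
```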
2.4 Beacon Orchestration
The sensor DSP unit controls the sequence of beacon lights by sending a two-byte package of data to the beacon controller via an infrared or radio data link (see fig. 2). One byte determines which beacon is to be turned on next (therefore up to 255 different beacons may be used with the current system; extra bytes of data can easily be added as required), whilst the other contains a one-byte integer approximation to the maximum PSD current level induced the last time that the corresponding beacon was activated. If a beacon output power level is too high, then one or more of the four PSD signal transimpedance amplifiers may saturate and the incident light centroid cannot be accurately determined. To prevent this, feedback control is used to hold the beacon light intensity at a level that results in a maximum PSD current at approximately 70% of the transimpedance amplifier input saturation level; this is also important for optimizing the system signal-to-noise ratio. The beacon controller compares the "maximum PSD current" byte to the beacon control intensity the last time that beacon was on, and determines whether this intensity should be increased, decreased, or left unchanged the next time the beacon is activated. If, for example, eight beacons are to be illuminated in sequence each time new position and attitude data are computed (at, say, a rate of 100 Hz), then each beacon is turned on for 10/8 ms. This power modulation feature automatically accommodates variable received energy due to range variations, viewing angle dependence, atmospheric conditions, and off-nominal beacon performance.

[Figure 3 (schematic): a multi-LED beacon, consisting of an LED array, a power supply, a current driver, and 38.4 kHz sine modulation.]

Figure 3: Multi-LED Beacon

2.4.1 Modulation and Demodulation
Whether a VISNAV system is used in a laboratory or on-orbit, there is likely to be a large amount of ambient light at short wavelengths and low carrier frequencies, due to the sun and its reflections, incandescent or discharge tube lights, LCD and cathode ray tube displays, etc. In many cases this ambient energy would swamp a relatively small beacon signal, and the PSD centroid data would mostly correspond to this unwanted background light. In order for the beacon light to dominate the PSD response, all energy except that centered on the color wavelength of the beacon is greatly reduced by an optical color filter, and furthermore a 38.4 kHz sinusoidal carrier is applied to each beacon control current. The resulting PSD signal currents then vary sinusoidally at approximately the same frequency (depending on the movement of the PSD sensor with respect to the individual beacons). These currents are converted to voltages by transimpedance amplifiers and then passed to bandpass filters, also centered at 38.4 kHz, that reject virtually all of the lower frequency background ambient light. The frequency 38.4 kHz was chosen as a compromise between rejecting discharge tube light (which may have significant components up to this frequency and even higher) and minimizing PSD/amplifier noise, which tends to increase at higher frequencies. We found, for our current implementation, that 100 kHz is the upper limit for practical modulation. The 38.4 kHz carrier frequency is a parameter that may be changed to suit a particular configuration and background noise environment. However, it is believed that this frequency is well suited both to laboratory experimentation and to on-orbit implementation. We have confirmed that the modulation/demodulation scheme leads to a high degree of insensitivity to variations in ambient lighting conditions, and it is key to making the PSD sensing approach practical.

2.4.2 Microcomputer
Currently a Pentium II 266 MHz desktop computer is being used to test the six DOF position and attitude, and Kalman filter, algorithms operating on the PSD current imbalance signals. The operating system is Windows NT, and test code is written as Matlab M-files or in LabWindows C format. However, a digital signal processing (DSP) computer has been selected and will replace the desktop computer in the next generation of our VISNAV experiments. The current VISNAV prototype experimental system is designed with a data update rate of 50-100 Hz, and this information can be fed via an RS-232 serial link to, e.g., a spacecraft control computer.
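The demodulation chain described above (bandpass, rectifier, lowpass in figure 2) can be reduced to its essence in a few lines: for a pure sinusoid, full-wave rectification followed by averaging recovers the carrier amplitude. The sketch below is a digital illustration of that analog chain; the sample rate is an assumed number chosen to give a whole number of samples per carrier cycle:

```python
import math

# Sketch (illustrative, not the analog hardware): recover the amplitude of a
# clean 38.4 kHz carrier by full-wave rectification and averaging. For a pure
# sinusoid, mean(|x|) = (2/pi) * amplitude, so we invert that factor.
F_CARRIER = 38_400.0
F_SAMPLE = 960_000.0                      # assumed: 25 samples per carrier cycle

def demodulate(samples):
    """Estimate carrier amplitude from rectified-and-averaged samples."""
    mean_abs = sum(abs(s) for s in samples) / len(samples)
    return mean_abs * math.pi / 2.0

n = 25 * 384                               # a whole number of carrier cycles
sig = [2.0 * math.sin(2.0 * math.pi * F_CARRIER * k / F_SAMPLE) for k in range(n)]
amp = demodulate(sig)                      # close to the true amplitude, 2.0
```

In the real system the preceding bandpass stage is what rejects the low-frequency ambient light, so only energy near 38.4 kHz reaches this rectify-and-average step.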
3 Six DOF Position and Attitude Real-Time Navigation Algorithm
When PSD image centroid data have been digitized from four or more beacons (and therefore their respective directions with respect to the sensor implicitly determined), a Gaussian Least Squares Differential Correction (GLSDC) algorithm is used to compute current sensor position and attitude values consistent with the beacon data.
3.1 Sensor Model
Figure 1 shows object space (X, Y, Z) and image space (x, y, z) coordinate frames for the PSD sensor and beacons. The ideal (noiseless) Object to Image Space Projective Transformation can then be written as follows:

yi = gyi(Xi, Yi, Zi, Xc, Yc, Zc, φ, θ, ψ)
   = y0 − f [C21(Xi − Xc) + C22(Yi − Yc) + C23(Zi − Zc)] / [C11(Xi − Xc) + C12(Yi − Yc) + C13(Zi − Zc)]   (3)

zi = gzi(Xi, Yi, Zi, Xc, Yc, Zc, φ, θ, ψ)
   = z0 − f [C31(Xi − Xc) + C32(Yi − Yc) + C33(Zi − Zc)] / [C11(Xi − Xc) + C12(Yi − Yc) + C13(Zi − Zc)]   (4)

i = 1, 2, ..., N                                                          (5)
As mentioned above, the voltages are passed through a nonlinear calibration function to obtain calibrated image centroids (y, z) consistent with these ideal projection equations. Notice that the x-axis is along the sensor boresight. A similar model and estimation process is described in [2] for a spacecraft star camera application; this approach has been adopted for numerous spacecraft, including Clementine, NEAR and MSX. The Cjk entries above are the elements of the direction cosine matrix that describes the image space orientation with respect to the object space, in this case in 3-2-1 Euler angle form:

C = [Cjk(φ, θ, ψ)]                                                        (6)

  = [ 1    0    0 ] [ cθ  0  −sθ ] [  cφ  sφ  0 ]
    [ 0   cψ   sψ ] [  0  1    0 ] [ −sφ  cφ  0 ]
    [ 0  −sψ   cψ ] [ sθ  0   cθ ] [   0   0  1 ]                         (7)

  = [  cθcφ             cθsφ            −sθ  ]
    [ −cψsφ + sψsθcφ    cψcφ + sψsθsφ   sψcθ ]
    [  sψsφ + cψsθcφ   −sψcφ + cψsθsφ   cψcθ ]                            (8)

c(·) ≡ cos(·),  s(·) ≡ sin(·)                                             (9)
(φ, θ, ψ) ≡ 3-2-1 Euler angles, right-hand rule;
(φ, θ, ψ) ≡ (θ1, θ2, θ3) ≡ (yaw, pitch, roll) (YPR)                       (10)
where:

- Xc, Yc, Zc are the unknown object space location of the sensor/spacecraft B (principal point)
- φ, θ, ψ are the unknown object space orientation of the sensor/spacecraft B as 3-2-1 Euler angles
- Cjk(φ, θ, ψ) are the coefficients of the direction cosine matrix that rotates the inertial frame into the body frame
- Xi, Yi, Zi are the known object space (spacecraft A) location of the ith beacon (light source)
- yi, zi are the PSD image space measurements for the ith beacon
- f is the known focal length of the wide-angle lens

The sensor location and orientation variables comprise six independent unknowns (Xc, Yc, Zc, φ, θ, ψ), and therefore at least six independent PSD measurements are required, so that at least three beacon fixes are needed (there is one y and one z PSD measurement per beacon). Since the sensor is fixed in the chase spacecraft (B) and the beacons are fixed in the target spacecraft (A), it is obvious that (Xc, Yc, Zc, φ, θ, ψ) constitute the 6DOF position of B relative to A. Equations 3 and 4 are nonlinear in the six unknowns. A Gaussian Least Squares Differential Correction (GLSDC) algorithm may be applied in order to determine these unknowns, given the PSD measurements and corresponding object space beacon locations. The algorithm is an iterative technique, but convergence is shown to be very fast and reliable in this problem, provided four or more targets are measured and the lateral extent of the beacon array is at least 10% of the sensor field of view. Other variations of this technique that remove this limitation are also being developed.
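The colinearity equations (3)-(4), with the direction cosine matrix of equation (6), can be sketched directly in code. The function names and numeric values below are illustrative assumptions, not the authors' implementation:

```python
import math

# Sketch of the object-to-image projective transformation, eqns (3)-(4),
# with the 3-2-1 Euler direction cosine matrix of eqn (6). Symbols follow
# the paper; the numeric values below are illustrative only.
def dcm_321(phi, theta, psi):
    cf, sf = math.cos(phi), math.sin(phi)
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(psi), math.sin(psi)
    return [
        [ct * cf,                 ct * sf,                -st],
        [-cp * sf + sp * st * cf, cp * cf + sp * st * sf, sp * ct],
        [sp * sf + cp * st * cf,  -sp * cf + cp * st * sf, cp * ct],
    ]

def project(beacon, cam, angles, f, y0=0.0, z0=0.0):
    """Image coordinates (y_i, z_i) of one beacon; x is the boresight."""
    C = dcm_321(*angles)
    d = [beacon[k] - cam[k] for k in range(3)]
    num_y = sum(C[1][k] * d[k] for k in range(3))
    num_z = sum(C[2][k] * d[k] for k in range(3))
    den = sum(C[0][k] * d[k] for k in range(3))
    return y0 - f * num_y / den, z0 - f * num_z / den

# A beacon on the boresight maps to the principal point:
y, z = project(beacon=(10.0, 0.0, 0.0), cam=(0.0, 0.0, 0.0),
               angles=(0.0, 0.0, 0.0), f=0.012)
# y = z = 0
```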
3.2 PSD Laboratory Calibration Procedure
In order to use the sensor, one needs to know the mapping between the normalized voltages (Vy, Vz) returned by the sensor and the location of the light centroid on the image plane (PSD silicon area). A method has been designed to compute this mapping, based on Implicit Least Squares, and will be reported in detail in a subsequent paper. Here we concentrate on the least squares and Kalman filter processes necessary for real-time navigation. The projection of eqns. 3 and 4, known as the colinearity equations, represents the ideal case of a pin-hole camera model. However, in practice the lens and PSD detector nonlinearities cause any camera to depart from this ideal model. We elect to absorb all non-ideal effects into a calibration process that is implicitly constrained to be consistent with eqns. 3 and 4, which will be inverted in real-time to obtain the navigation estimates (φ, θ, ψ, Xc, Yc, Zc). The ideal laboratory calibration process would place the camera (φi, θi, ψi, Xi, Yi, Zi) at many known positions relative to an array of targets located at (Xj, Yj, Zj) and determine only the nonlinear mapping of the measured voltage imbalances (Vy, Vz)ij into the corresponding known (yij, zij) consistent with eqns. 3, 4. Unfortunately, we must also consider the realistic uncertainty in the camera positions (φi, θi, ψi, Xi, Yi, Zi) and target locations (Xj, Yj, Zj), since we can only achieve certain levels of precision in the laboratory. This gives rise to a nonlinear estimation process which can be solved; we outline our formulation below. Let {Φij(Vy, Vz)}, i, j = 1, ..., n, be a complete set of basis functions (we used orthogonal Chebyshev polynomials) used to approximate lens distortion as bivariate vector functions, e.g.:

f(Vy, Vz) ≈ Σ_{i=1}^{n} Σ_{j=1}^{n} fij Φij(Vy, Vz)                      (11)
Thus, our goal is to find two sets of calibration coefficients:

a = [a11, ..., a1n, ..., an1, ..., ann]^T                                (12)

and

b = [b11, ..., b1n, ..., bn1, ..., bnn]^T                                (13)

such that the following equations hold for any measurement:

y = Σ_{i=1}^{n} Σ_{j=1}^{n} aij Φij(Vy, Vz)                              (14)

z = Σ_{i=1}^{n} Σ_{j=1}^{n} bij Φij(Vy, Vz)                              (15)
and are compatible with the colinearity eqns. 3, 4 over several hundred laboratory experiments. We find that n ≈ 7 is usually adequate. To capture the essence of this least squares process, define:

Φ(Vy, Vz) = [Φ11(Vy, Vz), ..., Φ1n(Vy, Vz), ..., Φn1(Vy, Vz), ..., Φnn(Vy, Vz)]^T   (16)

p = [a^T, b^T, (φi, θi, ψi, Xci, Yci, Zci, Xi, Yi, Zi)_{i=1,M}]^T        (17)

Fi(p, Vyi, Vzi) = Φ(Vyi, Vzi)^T a − gyi(Xi, Yi, Zi, Xci, Yci, Zci, φi, θi, ψi)   (18)

Gi(p, Vyi, Vzi) = Φ(Vyi, Vzi)^T b − gzi(Xi, Yi, Zi, Xci, Yci, Zci, φi, θi, ψi)   (19)

R = [F1, G1, ..., Fi, Gi, ..., FM, GM]^T                                 (20)

Fi, Gi are residuals indicating how well the choice of p simultaneously satisfies the compatibility of the colinearity equations and the lens distortion curve-fitting associated with the ith measurement. M is the number of measurements used for the calibration. We can now formulate the calibration problem as: find the unknown vector p that minimizes the weighted least squares magnitude of the residual vector R. We note that, except for a and b, all the other quantities in p are known with a certain accuracy and can be assigned realistic standard deviations, which are used to weight the elements of R. This nonlinear calibration problem can be solved via iteration, subject to the requirement that a sufficient number of well separated measurements distributed over the entire field of view are available. Thereafter, e.g. during navigation, only the calibration vectors a and b are needed to map (Vy, Vz) into (y, z) consistent with the colinearity equations in a least squares sense.
3.3 Gaussian Least Squares Differential Correction Algorithm
Here we discuss the least squares algorithm that is required for real-time navigation, assuming that the above calibration process has been previously completed. Let:

X = [Xc, Yc, Zc, φ, θ, ψ]^T,   G = [gy1, gz1, ..., gz4]^T                (21)

ΔG = [ỹ1 − gy1, z̃1 − gz1, ..., z̃4 − gz4]^T                               (22)

A = [ ∂gy1/∂Xc  ∂gy1/∂Yc  ∂gy1/∂Zc  ∂gy1/∂φ  ∂gy1/∂θ  ∂gy1/∂ψ ]
    [ ∂gz1/∂Xc  ∂gz1/∂Yc  ∂gz1/∂Zc  ∂gz1/∂φ  ∂gz1/∂θ  ∂gz1/∂ψ ]
    [    ...       ...       ...       ...      ...      ...   ]
    [ ∂gz4/∂Xc  ∂gz4/∂Yc  ∂gz4/∂Zc  ∂gz4/∂φ  ∂gz4/∂θ  ∂gz4/∂ψ ]          (23)

W = diag(1/σ²y1, 1/σ²z1, ..., 1/σ²z4)                                    (24)

wii = 1 / [(i, i)th measurement covariance]                              (25)
- X is the sensor/spacecraft position/attitude
- G is the PSD output predicted by X and the sensor/beacon model
- ΔG is the difference between the actual noisy PSD measurements (with the calibration corrections of equations 14 and 15 applied) and the predicted values (using the colinearity equations 3 and 4)
- A is the Jacobian matrix of all first order differentials for the sensor/beacon model
- W is the inverse of the measurement error covariance matrix

The problem is to compute an estimate for X, given ΔG, that minimizes a weighted sum of the elements of the error ΔG. The GLSDC algorithm provides an iterative solution as follows:

ΔXk = Pk⁻¹ Akᵀ W ΔGk                                                     (26)

Xk+1 = Xk + Pk⁻¹ Akᵀ W ΔGk                                               (27)

Pk = Akᵀ W Ak                                                            (28)

- Pk is the X information matrix, and its inverse provides an estimate of the derived data covariances
- Xk ⇒ Gk and A; the PSD measurements ⇒ ΔGk; and, with W, ⇒ Xk+1, etc.
- This is an iterative scheme, beginning with some estimate X0 at time zero. The algorithm converges quickly in practice, and the initial X0 estimate may be set to the most recent value computed in order to start the next iteration.

Note that GLSDC outputs, essentially, a geometric best estimate of position and orientation. The state history can be better estimated by imposing a dynamical model. One efficient way to do this is by using a Kalman filter.
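To make the iteration of equations (26)-(28) concrete, here is a sketch on a deliberately simplified planar problem (estimating a sensor position from ranges to four beacons) rather than the paper's 6-DOF model; only the measurement model g and its Jacobian A would change for the full problem. All numbers are invented:

```python
import numpy as np

# Sketch of the GLSDC loop of eqns (26)-(28) on a planar toy problem:
# estimate sensor position (Xc, Yc) from ranges to four known beacons.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def g(X):
    """Predicted measurements: range to each beacon."""
    return np.linalg.norm(beacons - X, axis=1)

def jacobian(X):
    """A = dg/dX: unit vectors from each beacon toward X."""
    d = X - beacons
    return d / np.linalg.norm(d, axis=1)[:, None]

X_true = np.array([3.0, 4.0])
meas = g(X_true)                                # noiseless for clarity
W = np.eye(len(meas))                           # unit weights

X = np.array([8.0, 8.0])                        # poor initial guess
for _ in range(10):
    A = jacobian(X)
    dG = meas - g(X)
    P = A.T @ W @ A                             # eqn (28)
    X = X + np.linalg.solve(P, A.T @ W @ dG)    # eqns (26)-(27)
# X converges to (3, 4); inv(P) estimates the solution covariance
```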
4 Kalman Filtering
In order to improve the accuracy of the results provided by the Gaussian Least Squares Differential Correction, the system dynamics are imposed by using an extended Kalman filter. The state vector for our purposes is defined as:

X = (x, y, z, φ, θ, ψ, ẋ, ẏ, ż, ω1, ω2, ω3, ẍ, ÿ, z̈, ω̇1, ω̇2, ω̇3)^T     (29)

or

X = (x1, x2, x3, ..., x18)^T                                             (30)

where:
x, y, z : coordinates of the camera principal point relative to A, (x, y, z) ≡ (Xc, Yc, Zc)
φ, θ, ψ : 3-2-1 Euler angles of B relative to A
ω1, ω2, ω3 : angular velocity components expressed in the body B frame

The dynamical model assumes piecewise constant accelerations between measurements, which is a reasonable assumption for a 50 Hz sampling rate and sufficiently small velocities, as is the case in the example shown below. The system is then described by the following state equation:

Ẋ = f(X) = (f1(X), ..., f18(X))^T                                        (31)

where the nontrivial derivatives are given by [3]:

(φ̇, θ̇, ψ̇)^T = B⁻¹ (ω1, ω2, ω3)^T,  with

B⁻¹ = (1/cθ) [ 0    sψ      cψ   ]
             [ 0    cθcψ   −cθsψ ]
             [ cθ   sθsψ    sθcψ ]                                       (32)

and d/dt(ẍ, ÿ, ..., ω̇3) = 0. Thus we assume that the physical motion is adequately modeled by piecewise constant linear and angular acceleration. In practice the process noise covariance (wk below) is tuned so that the fading memory of the Kalman filter is consistent with this assumption for a 20 ms sample interval. The measurable outputs are chosen to be the outputs of the GLSDC (i.e. x, y, z, φ, θ, ψ). The measured states are simply position and orientation:
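The kinematic relation of equation (32) translates directly into code. This sketch is illustrative; as with all 3-2-1 Euler formulations it is singular at θ = ±90 degrees, where cos(θ) = 0:

```python
import math

# Sketch of eqn (32): 3-2-1 Euler-angle rates from body angular velocity.
# Singular at theta = +/- 90 deg (cos(theta) = 0), as usual for Euler angles.
def euler_rates(phi, theta, psi, w1, w2, w3):
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(psi), math.sin(psi)
    phi_dot = (sp * w2 + cp * w3) / ct
    theta_dot = cp * w2 - sp * w3
    psi_dot = w1 + (st * sp * w2 + st * cp * w3) / ct
    return phi_dot, theta_dot, psi_dot

# Pure body-x angular rate at zero attitude: only psi (roll) changes.
rates = euler_rates(0.0, 0.0, 0.0, 0.1, 0.0, 0.0)
# rates = (0.0, 0.0, 0.1)
```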
Z = (z1, z2, z3, z4, z5, z6)^T = (x1, x2, x3, x4, x5, x6)^T              (33)

We can now put these equations into a suitable form for a Kalman filter implementation:

Xk+1 = Φk Xk + wk                                                        (34)

Zk = H Xk + vk                                                           (35)
The subscript k denotes a vector at time tk.
Φk : state transition matrix between times tk and tk+1.
wk : process noise (white, zero-mean) between tk and tk+1, with associated covariance matrix Qk.
vk : measurement noise (white, zero-mean, uncorrelated with the process noise), with associated covariance matrix Rk.

H = [ I(6×6) | 0(6×12) ]                                                 (36)
To implement the filter, the state equation is integrated numerically between each measurement time, using a 4th order Runge-Kutta algorithm, in order to update the states about the predicted trajectory (see [4]). The initial estimates are assumed to be poorly known, and therefore "large" standard deviations are used to populate the initial error covariance matrix P0.

[Figure 4: the true spacecraft trajectory plotted against the main, lateral, and height axes, with the 50 Hz sampling rate noted and a correctly oriented spacecraft icon shown every 20 seconds along the flight path.]

Figure 4: Simulation Flight Path and Beacon Locations
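The predict/update structure of equations (34)-(36) can be sketched as one cycle of a linear Kalman filter. To keep it readable, this uses a two-state (position, velocity) analogue of the full 18-state model; the transition matrix, noise covariances, and initial values are illustrative tuning numbers, not the paper's:

```python
import numpy as np

# Sketch of eqns (34)-(36) as one predict/update cycle of a linear Kalman
# filter, on a 2-state (position, velocity) analogue of the 18-state model.
dt = 0.02                                   # 50 Hz update interval
Phi = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (eqn 34)
H = np.array([[1.0, 0.0]])                  # only position measured (eqn 36 idea)
Q = 1e-4 * np.eye(2)                        # process noise covariance (assumed)
R = np.array([[1e-2]])                      # measurement noise covariance (assumed)

def kf_step(x, P, z):
    x_pred = Phi @ x                        # propagate state (w_k = 0 in the mean)
    P_pred = Phi @ P @ Phi.T + Q            # propagate covariance
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)   # update with measurement (eqn 35)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), 10.0 * np.eye(2)        # "large" initial covariance, as in P0
x, P = kf_step(x, P, z=np.array([1.0]))
# the large prior pulls the position estimate close to the measurement
```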
5 Simulation Results
In the following simulation the VISNAV system is used to simulate navigation during a 100m-to-zero approach of body B to body A (see figure 1). The position and orientation data update rate is 50 Hz, and for the sake of simplicity the camera image frame is assumed to be the same as the body B frame. Figure 6 is another view of the same scenario, where body A is the spacecraft on the left hand side, and body B (not shown) is attached to the PSD sensor on the right hand side. In this picture seven beacons are shown; however, only the four lying closest to the spacecraft outer rim are used in the simulation. Figure 4 shows a sample three dimensional trajectory of the spacecraft approaching a docking port near a set of beacon lights (the asterisks). A correctly oriented vehicle icon is printed once every twenty seconds along the flight path. Note that the three axes are not drawn to a common scale. The target lights are within a volume of 6m × 5m × 3m close to the drawing origin, and the initial distance from the origin is approximately 96m. The PSD measurements were conservatively assumed to have a standard deviation of 1/3000 of the focal plane dimension, which corresponds to an angular resolution of 90/3000 ≈ 0.03 deg; it is assumed that this figure applies when the vehicle is within 35m of the object space origin, which is close to the beacons. For distances greater than 35m the available beacon light energy falls off as 1/(distance²), and so too does the PSD data signal-to-noise ratio.
[Figure 5 (two panels): the main, lateral and vertical component trajectories (MLV) and the yaw, pitch and roll trajectories (YPR), each plotted against main-axis distance from −100 m to 0; the smooth curves are the true motion.]

Figure 5: True Motion and GLSDC Simulation Estimates
[Figure 6 (schematic): object space axes (X, Y, Z) on spacecraft A and PSD image space axes (x, y, z) on the sensor attached to spacecraft B.]

Figure 6: Approaching a Spacecraft
5.1 GLSDC Results
Figure 5 shows both the true and the GLSDC estimated position and orientation of B relative to A as a function of distance along the main (X) axis (normal to the docking port on A). As the vehicle approaches the beacons, they more completely span the PSD active area and the triangulation problem becomes better conditioned. The simulation shows very good results, with displacement errors of the order of 10m or less at a range of 96m, reducing to errors of millimeters as the vehicle reaches its landing destination just in front of the beacons. The orientation errors also start out relatively large, at around 10 degrees, and reduce to values less than 6/100ths of a degree at rendezvous. In figures 7, 8, and 9 the GLSDC estimated data standard deviations are compared to the final estimate errors. The one-sigma standard deviations are represented by the dashed envelope lines, lying symmetrically above and below the zero error axis. In each case there is a very good correspondence between the GLSDC standard deviation estimates and the actual navigation data errors.
[Figure 7 panels: 'Main Axis Standard Deviation and Convergence Error' and 'Lateral Standard Deviation and Convergence Error', each showing the final convergence error and the std. dev. envelope versus main-axis distance.]
Figure 7: Main and Lateral Axis Standard Deviations and Convergence Errors
[Figure 8 panels: 'Vertical Standard Deviation and Convergence Error' and 'Yaw Standard Deviation and Convergence Error', each showing the final convergence error and the std. dev. envelope versus main-axis distance.]
Figure 8: Vertical Axis and Yaw Standard Deviations and Convergence Errors
emphasize that Figures 7 through 9 show geometric least squares estimates; these errors can be reduced by approximately one order of magnitude by using a dynamic Kalman estimation process, as shown below. Figure 10 also shows the six GLSDC-estimated standard deviations, this time plotted logarithmically along the vertical axis in order to better demonstrate the wide range of values, the residual errors becoming very small inside a 20 m range. For the three positions, the main X axis estimate contains the least noise everywhere along the trajectory, its standard deviation being approximately half the lateral and vertical values. This is a consequence of all the targets lying close to a plane. A similar result is seen for the three spacecraft Euler angles, where the roll angle estimate contains the least noise. An interesting feature of these two plots is a 'kink' in the data around the 35 m mark on the main axis. At this point the standard deviation slopes (with respect to the main X axis) increase as the distance increases, since the beacons are assumed to be unable to provide enough power to bring the maximum PSD signal current to 70% of its saturation level, and thus outside
[Figure 9 panels: 'Pitch Standard Deviation and Convergence Error' and 'Roll Standard Deviation and Convergence Error', each showing the final convergence error and the std. dev. envelope versus main-axis distance.]
Figure 9: Pitch and Roll Standard Deviations and Convergence Errors
[Figure 10 panels: 'Main, Lateral, and Vertical Convergence Standard Deviations' (log10 m) and 'Yaw, Pitch, and Roll Convergence Standard Deviations' (log10 deg), versus main-axis distance.]
Figure 10: Logarithms of GLSDC Estimation Standard Deviations
35 m the signal to noise ratio degrades; this nonlinear effect is included in our simulations. Figure 11 shows the number of iterations required for the GLSDC algorithm to reach its convergence criterion at each time step. The criterion was that the maximum six-DOF parameter change after an iteration be less than 0.5%. The maximum number of GLSDC iterations required was eight in all cases but the first, where 16 iterations were used. The current algorithm using four beacons requires on the order of 3,000 floating point (FP) operations for each iteration, and thus 24,000 FP operations for an eight-iteration computation. The results so far indicate that fewer iterations may suffice to obtain fully satisfactory accuracy. It is expected that eight or more beacons would be used in a practical application, with some of the beacons placed at much wider spacings than used in this simulation. This will raise the maximum computation requirement to the order of 48,000 FP operations and also provide better estimation accuracy at large distances due to better conditioning of the triangulation problem. A modest additional computation capacity must also be allotted for PSD sensor linearization and Kalman filtering of the
[Figure 11 panel: 'Number of Gaussian L.S. Diff. Update Algorithm Iterations' versus main-axis distance.]
Figure 11: GLSDC Iterations
GLSDC estimates. It may also be the case that many redundant beacons will be desirable, say 32, with only the 'best' eight beacons selected at each measurement cycle. One figure of merit for an individual beacon would be the magnitude of the largest PSD signal current produced in response; another would be a large angular displacement off the sensor boresight. At 50 Hz, 48,000 FP operations per cycle translates to about 2.5 million FP operations per second, that is, 2.5 MFLOPS, well within modern microprocessor capabilities. Some other observations on this VISNAV application are:
• The steady decline in data estimate errors as the vehicle approaches the rendezvous zone results in greater accuracy when it is needed most.
• Simulations so far using three target lights have shown that the model Jacobian (A) matrix can become singular for certain vehicle and target light configurations; however, this did not happen with four targets in a non-planar arrangement. This issue needs to be studied and fully understood.
• The initial position/orientation estimate errors at the start of the example simulation were of the order of 10 m and 10 degrees respectively, and it is assumed that the first estimates would be obtained from another type of sensor such as GPS. However, it should also be possible to establish a good initial estimate with the described sensor using a modified algorithm. In any event, initial estimate errors of 30 m and 30 degrees can easily be accommodated.
• This simulation assumed that the beacon locations were known with perfect accuracy in the object (target spacecraft A) frame of reference. Determining beacon location accuracy specifications (assuming zero noise) will require analyzing the forward model when beacon location errors are introduced.
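The beacon-selection idea described above can be sketched as follows. The scoring function and its weighting are our own assumptions for illustration; the paper only suggests the two figures of merit (peak PSD signal current and off-boresight angle).

```python
# Illustrative sketch: from many redundant beacons, keep the 'best' few for
# each measurement cycle, scoring each candidate by its peak PSD signal
# current and its angular offset from the sensor boresight.
def select_beacons(beacons, n_best=8):
    """beacons: list of (beacon_id, signal_current_amps, off_boresight_deg)."""
    def score(b):
        _, current, off_axis = b
        # Favor strong signals (good SNR) and wide angular spread
        # (better conditioning of the triangulation problem).
        return current * (1.0 + off_axis / 45.0)
    return sorted(beacons, key=score, reverse=True)[:n_best]

# 32 hypothetical candidates with increasing current and off-axis angle:
candidates = [("b%d" % i, 1e-6 * (i + 1), 5.0 * i) for i in range(32)]
best = select_beacons(candidates)
```

In a real implementation the weighting would need tuning, since a very large off-boresight angle also reduces the received energy.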
5.2 Kalman Filter Results and Discussion
The performance of the filter was evaluated using the results from the previous example. As can be seen from Figures 12, 13, and 14, the best estimates converge in less than 10 s to the true trajectory, with an accuracy much greater than that of the raw measurements. In fact, the errors in the translation coordinates during the last 40 s are down to the order of 1 cm, while the attitude coordinates generally remain below 0.05 deg. We also note that the velocities are recovered with reasonable precision, terminally within 0.1 m/s on average.
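The dynamic estimation step layered on top of GLSDC can be sketched with a minimal constant-velocity Kalman filter run per axis at the 50 Hz update rate, treating each GLSDC position estimate as a noisy measurement and recovering a smoothed position plus a velocity estimate. This is our own construction with assumed noise levels, not the paper's filter.

```python
import numpy as np

dt = 1.0 / 50.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # only position is measured
Q = np.diag([1e-6, 1e-5])               # process noise covariance (assumed)
R = np.array([[0.05 ** 2]])             # GLSDC position noise (assumed 5 cm)

def kf_step(x, P, z):
    """One predict/update cycle; state x = [position, velocity]."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a 1 m/s approach starting 96 m out, from noisy position fixes.
rng = np.random.default_rng(0)
x, P = np.array([-96.0, 0.0]), np.diag([1.0, 1.0])
for k in range(1, 1001):                          # 20 s of data
    true_pos = -96.0 + 1.0 * k * dt
    z = np.array([true_pos + rng.normal(0.0, 0.05)])
    x, P = kf_step(x, P, z)
```

Because the filter carries a velocity state, it recovers rate information that the memoryless GLSDC solution cannot provide, which is what makes the estimates useful for control feedback.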
[Figure 12 panels: X, Y, Z position components versus time, true versus Kalman estimates.]
Figure 12: Kalman Filtered Positions
[Figure 13 panels: translational (Vx, Vy, Vz) and angular (Wx, Wy, Wz) velocity components versus time, true versus Kalman estimates.]
Figure 13: Kalman Filtered Velocities
Figure 15 shows both the state errors and their standard deviations predicted by the Kalman filter. Good agreement is seen between the two sets of curves, except for the x-coordinate, indicating that the Kalman filter "tuning" is fairly good but can likely be improved. The first three sets of plots (Figs. 12-14) cover the entire time span; the fourth (Fig. 15) covers only the last 40 seconds in order to have a reasonable scale for the steady state comparison between the actual errors and the corresponding standard deviation estimates.
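A simple version of the consistency check underlying Figure 15 is worth stating: if the filter's reported standard deviations are well tuned, roughly 68% of the actual errors should fall inside the 1-sigma envelope. The sketch below uses synthetic stand-in data, not the paper's results.

```python
import numpy as np

# Synthetic filter-consistency check: compare actual errors against the
# filter's reported 1-sigma envelope. For a well-tuned filter with Gaussian
# errors, about 68% of samples land inside the envelope.
rng = np.random.default_rng(1)
predicted_sigma = 0.01 * np.ones(2000)       # filter's reported 1-sigma
errors = rng.normal(0.0, 0.01, size=2000)    # actual estimation errors

inside = np.mean(np.abs(errors) <= predicted_sigma)
```

A fraction much below 0.68 suggests the filter is overconfident (predicted sigmas too small), much above suggests it is underconfident, which is the kind of diagnosis the x-coordinate discrepancy above points toward.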
6 Conclusion
The simulation, calibration and noise tests to date indicate that a VISNAV system based upon our analog vision sensor concept can perform well in rendezvous, docking and proximity operations,
[Figure 14 panels: X, Y, Z, Yaw, Pitch, Roll coordinates versus time, GLSDC error versus Kalman error.]
Figure 14: GLSDC and Kalman Filtered Position/Attitude Coordinates
[Figure 15 panels: X, Y, Z, Yaw, Pitch, Roll errors and predicted standard deviations versus time over the final 40 s.]
Figure 15: Kalman Filtered Data Standard Deviations
providing very accurate six degree of freedom data and fast (50 Hz) data update rates. This approach has many attractive features, including:
• very small sensor size
• very wide sensor field of view
• no time consuming and costly CCD signal processing required
• excellent rejection of ambient light interference under a wide variety of operating conditions
• easy configuration for a variety of applications
• relatively simple electronic circuits
• modest DSP requirements
• the GLSDC algorithm produces position/attitude estimates and the associated covariance matrix
• Kalman filtering results in significantly smoother position/attitude data, plus rate estimates useful for control feedback.
Six DOF sensing applications that can be explored, other than spacecraft docking and maneuvering, include in-flight aircraft refueling operations, vertical takeoff and landing navigation for aircraft, human body motion capture, and robot arm end effector position/attitude sensing. Future work will center on building a DSP-based model that can be flight tested; this effort will include complete analysis of the 2D PSD calibration problem, miniaturization of some of the electronic circuitry (using surface mount technology), further optimization of the system signal to noise ratios, and analysis of system failure modes.