Closed Loop Control for Autonomous Approach and Placement of Science Instruments by Planetary Rovers*

Terry Huntsberger, Yang Cheng, Ashley Stroupe, and Hrand Aghazarian
Mobility and Robotic Systems Section
Jet Propulsion Laboratory
Pasadena, CA, 91109, USA
{terry.huntsberger, yang.cheng, ashley.stroupe, hrand.aghazarian}@jpl.nasa.gov

Abstract - The underlying motive of “follow the water” in the search for evidence of past or present life on Mars has led NASA to deploy increasingly sophisticated robotic missions to the planetary surface. Opportunity and Spirit, the current pair of MER (Mars Exploration Rovers) on Mars for over a year, have both discovered evidence of past surface water. Optimal use of mission resources, such as ground planning time and surface operation duration, for increased science data return becomes critical with each advancement in the capabilities of the onboard instrument suites. This paper presents a novel, end-to-end, fully integrated system being developed at JPL (Jet Propulsion Laboratory) that is called SCAIP (Single Command Approach and Instrument Placement). SCAIP enables a rover to autonomously travel to a designated science target from an extended distance away, and precisely place an instrument on that target with a single command without additional human interaction. The results of some experimental studies with a rover in terrestrial settings and using imagery returned from MER are also described.

Index Terms – Planetary surface rovers, autonomous science, closed loop control.
I. INTRODUCTION

Since 1997, JPL has successfully landed three rovers on the surface of Mars: the Sojourner rover in 1997 and the two MER, Spirit and Opportunity, in 2003. The Sojourner rover had a two-degree-of-freedom instrument deployment device (IDD) for its alpha proton X-ray spectrometer. Each MER has a five-degree-of-freedom IDD with a suite of four instruments at the end-effector: a microscopic imager (MI), a rock abrasion tool, a Mössbauer spectrometer, and an alpha proton X-ray spectrometer. The MI is a fixed focus camera with a touch-rod for surface contact sensing. An advanced technology prototype of the MER called the FIDO (Field Integrated Design and Operations) rover is shown in Fig. 1, with the IDD in contact with a rock during a recent field trial. FIDO was used for algorithm development and training of MER science personnel for ground operations [1, 2, 3].

Currently, MER uses at least three Martian days (sols) after selection of a science target from panoramic mast-mounted imagery to approach and place an instrument on the target. The first sol is used to traverse a path to a safe stand-off position using a path planned by the operations team on Earth.

Fig. 1. FIDO rover placing an instrument on a science target during the MER/FIDO field trials in the desert at Gray Mountain, AZ in May 2002.

At the end of the traverse, an image of the science target is taken using the wide field-of-view stereo hazard avoidance cameras (HazCams) body-fixed to the rover. The HazCams have a useful range for stereo height map generation of about one to two meters. This image is analyzed to determine if another short drive is needed to bring the target within the work volume of the IDD; such a path would be uploaded to the rover and executed on the second sol. At the end of the second sol traverse, the target is imaged again using the HazCams, and an IDD arm trajectory is planned for the third sol to bring the instrument in contact with the science target. In the case of the MI, the uplinked command also includes image acquisition. Cutting this three sol period to a single sol potentially maximizes science data return to Earth and makes the most efficient use of ground operations planning time.

During 2001, members of the Mobility and Robotic Systems Section at JPL developed a closed loop system, illustrated in Fig. 2, called SCAIP (Single Command Approach and Instrument Placement) [4, 5]. SCAIP is potentially a key technology for precision placement operations for the upcoming 2009 Mars Science Laboratory (MSL) mission. The system controller closed the loop between the rover state and the relative target position through tracking of feature points using a visual odometry algorithm running onboard the rover [6, 7].

The research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

[Fig. 2 flowchart: Sol 1: acquire panorama & designate science goal, then (1) drive to stand-off position using interest points; Sol 2: (2) hand off goal position from NavCams to HazCams; (3) plan final approach path; (4) drive to final offset position & acquire HazCam image; (5) arm path planning with collision checks; safety check: if safe, (6) place instrument & acquire science data; if not, (7) determine safe substitute placement goal.]

Fig. 2. SCAIP sequence of operations showing all major algorithmic elements. Component technologies described in this paper are highlighted and the step numbers are shown to the left.

For the 2001 effort, long range navigation cameras were used for the initial approach to the target, and the final approach was done based on wheel odometry only. The primary sources of error in the SCAIP sequence are shown in Table I. The largest errors are due to wheel slippage and inaccuracies in stereo-derived height maps of the terrain. These uncertainties occur at two places in the SCAIP sequence, the Hand-off Goal Position and the Drive to Final Offset Position steps (Steps 2 and 4 in Fig. 2), since localization errors up to that point can be on the order of 20%.


TABLE I
SOURCES AND SIZE OF ERRORS FOR SCAIP SEQUENCE

Source | Size of Error | Remedy
Wheel slippage, hazard avoidance, mast joint compliance | On the order of 20% of distance traversed for hard-packed soil, up to 100% in fine, loose sand | Visual odometry potentially reduces error to 1-2% of the distance traversed
Mast joint compliance, small inaccuracies in knowledge of relative rover geometry | 1-2 cm cumulative for a MER-sized rover | HIPS potentially reduces error to 2-5 mm for a MER-sized rover
Inaccuracies in stereo-derived height map of terrain | On the order of 3-5 cm for a standoff position of 2 m | Adjust path during traverse
Wheel slippage | On the order of 20% of distance traversed for hard-packed soil, up to 100% in fine, loose sand | Adjust path during traverse using visual odometry coupled with HIPS
IDD joint compliance, small inaccuracies in knowledge of relative rover geometry, inaccuracies in stereo-derived height map of terrain | 1-2 cm cumulative, with the majority in the stereo-derived height map of terrain | Sizes of bounding polygons are over-estimated to give a margin of safety, with the downside that some targets are deemed not safely reachable
IDD joint compliance, inaccuracies in stereo-derived height map of terrain, fixed focus of some science instruments | 1-2 cm cumulative, with the majority in the stereo-derived height map of terrain | HIPS can be used to derive new camera models as the IDD placement operation is being done; closed loop control of IDD with instrument focus feedback
IDD joint compliance, inaccuracies in stereo-derived height map of terrain | 1-2 cm cumulative, with the majority in the stereo-derived height map of terrain | Polygonal model of target derived from stereo data can be used to limit IDD placement to planar patches

The closed-loop control between the IDD, which will ultimately perform the instrument placement, and the rover plant is broken at these two steps. If a single coordinate system is used for the design of a closed-loop controller for these steps, the potential error is minimized. Such a representation, based on an IDD-centered coordinate system called HIPS (Hybrid Image/Plane Stereo), has been developed at JPL [8, 4]. Another source of compromised science return lies with the fixed focus MI. Errors in knowledge of the height and surface topology of the science target derived from stereo imagery at an extended distance (~10 meters) can lead to out-of-focus MI imagery.

Preliminary experimental studies suggest that the HIPS manipulation approach yields placement accuracies on the order of 1-2 mm in position and less than a degree in orientation [8, 4]. HIPS is coupled with an autofocus algorithm [4, 5] recently developed at JPL to automatically close the loop between IDD movement and image focus.

The next section gives an overview of the SCAIP system, followed by a discussion of related work. Next, the results of some experimental studies using terrestrial technology rovers and imagery returned from MER are described. Finally, a section on conclusions and current directions closes the paper.

II. SCAIP SEQUENCE OVERVIEW

The SCAIP sequence process flow is shown in Fig. 2. The science target selection process occurs in sol 1 (same as MER) through an analysis of a stereo panorama acquired by the NavCams mounted on a mast. Dense stereo range map generation is good to about twenty meters with these cameras. The target position in 3-D and the surface normal in the global coordinate frame are then uploaded to the rover. All of the operations then occur autonomously, with exception checking at each step. The nominal sequence follows the six steps labeled on the left of Fig. 2, with an exception step 7 required if a safety check fails. These are:

(Step 1) The rover drives to an offset position (~2 meters) from the science target using the mast-mounted narrow field-of-view (~45 degrees) NavCams while updating the science target localization with respect to tracked feature points. An extended Kalman filter algorithm is used to fuse the wheel encoder, IMU, and estimated position from the feature point tracking and to update the rover state during the traverse [9, 10, 8, 11]; a minimal sketch of this fusion is given below. A hazard detection/avoidance algorithm such as MER GESTALT [12] is active during this portion of the traverse.

(Step 2) At the offset position (usually about two meters from the designated target), the science target position is handed off to the wide field-of-view (~125 degrees) HazCams through a coordinate transformation of the HIPS IDD-centered frame.

(Steps 3 & 4) The tracked feature points and science target are now mapped into the IDD-centered frame using HIPS for the final traverse to the target. At the end of the short traverse, a stereo pair is taken with the HazCams to determine the IDD target for the instrument placement operation.

(Step 5) A safe path is planned for the IDD using an onboard version of the MER collision-checking algorithm that verifies there are no collisions with the rover infrastructure, itself, or the terrain prior to instrument placement [13].

(Step 6) Finally, the IDD is placed at a safe position over the target and incrementally stepped to the surface along the surface normal.

(Step 7) In the event that a safe path is not found, the rover can either call home or plan a safe path to an alternate position on the target surface using an algorithm being developed at the NASA Ames Research Center [14, 15].

Details of Steps 1-4 and 6 of the sequence are described next. Details of Step 5 can be found in [13], and of Step 7 in [14, 15].
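To make Step 1's fusion concrete, here is a minimal Python sketch in the spirit of the onboard filter. It is our illustration, not flight code: the state is reduced to a planar position, the predict/update helpers are hypothetical names, and the noise values are invented; the actual extended Kalman filter [9, 10, 8, 11] fuses wheel encoders, an IMU, and feature-track position estimates over a much richer state.

```python
import numpy as np

# Minimal Kalman skeleton for the Step 1 state update: propagate the pose
# with wheel odometry, then correct it with a visual-odometry position fix.
# State x = (x_pos, y_pos); all values below are illustrative assumptions.

def predict(x, P, delta_odo, Q):
    """Propagate the state by the wheel-odometry displacement delta_odo.
    Q models uncertainty growth (slip can reach ~20% of distance, or worse
    in loose sand, per Table I)."""
    return x + delta_odo, P + Q

def update(x, P, z_vo, R):
    """Correct the state with a visual-odometry position estimate z_vo.
    R reflects VO accuracy, typically 1-2% of distance traversed."""
    H = np.eye(2)                      # position observed directly
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z_vo - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One traverse step: 0.5 m commanded along x, but VO sees 0.42 m (slip).
x, P = np.zeros(2), np.eye(2) * 1e-4
x, P = predict(x, P, np.array([0.5, 0.0]), Q=np.diag([0.01, 0.01]))
x, P = update(x, P, np.array([0.42, 0.01]), R=np.diag([1e-4, 1e-4]))
print(x)  # the estimate is pulled strongly toward the VO measurement
```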

(Step 1) Drive to Stand-off Position

Accumulated errors due to wheel slippage, deviations from the nominal route due to hazards, and/or compliance in the mast joints can compromise the localization of the rover during the traverse to the stand-off position. For example, wheel odometry in the field has shown 20% error or more in traverses of just a few meters, even on level terrain, depending on the surface characteristics. This step relies on tracking features in the vicinity of the science target [16, 17] and then localizing the target relative to these features. Possible feature invariants include the Harris corner detector [18, 6, 20], grayscale patches [21], and line features derived from connected output of the Harris operator. The visual odometry algorithm [6, 7] currently being used successfully on the Martian surface by Spirit and Opportunity uses the Förstner interest operator [22] to generate feature points, which are tracked between frames. The rover state is then updated using the estimated change in position and orientation. Typical errors in localization are within 1% to 2% of the total distance traversed. The designated science target is localized with respect to the feature points, with its position relative to the rover updated after each frame; a sketch of this relative update is given after this paragraph. This approach eliminates problems associated with traditional target tracking methods, such as changes in lighting and a possible lack of distinguishable features on the science target. The visual odometry technique is computationally expensive because best results are obtained with good coverage of the surrounding environment through a large number of matched feature points; affine parameter estimation methods [23] offer a predictive solution to this problem. The affine parameter estimation algorithm illustrated in Fig. 3 uses three co-planar points in two image frames to estimate a homography transform [24] between the points, followed by a determination of the affine parameter for off-plane points. This parameter can then be used to predict where all of the feature points should be in the current frame based on their position in the previous frame. Some initial runs using the technique are described in the Experimental Studies section.
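A minimal sketch of the relative update follows, assuming the tracked features have 3-D positions from stereo in two successive rover frames. It estimates the inter-frame rigid motion with the standard SVD (Kabsch) solution and carries the target position through that motion; the function name and all coordinates are illustrative, and the onboard visual odometry [6, 7] is considerably more elaborate.

```python
import numpy as np

# Localize the target relative to tracked features instead of tracking the
# target itself: recover the rigid motion of the feature cloud between two
# rover frames, then move the target estimate by the same motion.

def rigid_motion(prev_pts, curr_pts):
    """Least-squares R, t with curr ~ R @ prev + t (Kabsch via SVD)."""
    cp, cc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (prev_pts - cp).T @ (curr_pts - cc)        # 3x3 cross-covariance
    D = np.diag([1.0, 1.0, 1.0])
    U, _, Vt = np.linalg.svd(H)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against mirroring
    R = Vt.T @ D @ U.T
    return R, cc - R @ cp

# Feature points (3-D, from stereo) seen in two successive rover frames.
prev = np.array([[1.0, 0.2, 0.0], [1.5, -0.3, 0.1],
                 [2.0, 0.4, 0.0], [2.5, -0.1, 0.2]])
a = np.deg2rad(5.0)                                # small heading change
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
curr = prev @ Rz.T + np.array([-0.4, 0.0, 0.0])    # rover moved 0.4 m

R, t = rigid_motion(prev, curr)
target_prev = np.array([3.0, 0.0, 0.1])            # target, previous frame
print(R @ target_prev + t)                         # target, current frame
```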


Fig. 3. Illustration of affine parameter estimation using two image frames taken during a traverse. The technique derives the homography transform between three co-planar points (shown as a connected triangle in each frame), and then determines a single affine parameter using a single non co-planar point and the epipolar points in each image. The affine parameter can then be used to predict where all of the feature points seen in Frame 1 should appear in Frame 2.
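A sketch of the planar part of this prediction is given below, assuming OpenCV is available. Three co-planar tracked points fix a plane-induced affine transform between frames, which then seeds the local search windows for the remaining features; the additional affine parameter for off-plane points [23, 24] is omitted, and all pixel coordinates are invented for illustration.

```python
import numpy as np
import cv2

# Fit a planar transform to three co-planar tracked points and use it to
# predict where the remaining features should land in the next frame,
# shrinking the correlation search from the whole image to a small window.

tri_f1 = np.float32([[120, 310], [400, 295], [260, 180]])  # plane, frame 1
tri_f2 = np.float32([[112, 305], [391, 291], [253, 176]])  # plane, frame 2

A = cv2.getAffineTransform(tri_f1, tri_f2)    # 2x3 affine transform

# Predict other tracked features in frame 2 from their frame-1 positions.
feats_f1 = np.float32([[200, 250], [330, 220], [150, 330]])
pred_f2 = cv2.transform(feats_f1.reshape(-1, 1, 2), A).reshape(-1, 2)
print(pred_f2)    # seed a local matcher here instead of a global search
```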

(Step 2) Hand-off Goal Position

Up to this point, all coordinate transforms have been done in the rover-centric frame. Ultimately, greater accuracy can be obtained using an IDD-centered frame for the reasons mentioned in the Introduction. The HIPS approach generates camera models through visual sensing of fiducial marker(s) on the manipulator's end-effector, as shown in Fig. 4. This process generates a mapping between the image plane coordinates of feature points and the IDD joint angles, all referenced to the base reference frame of the IDD. This is done for the NavCams (used for navigation in Step 1) and the HazCams (used for instrument placement in Step 6). For the critical problem of the transfer of control from the mobility system to the manipulation system, the fact that the front HazCam and NavCam camera models are specified relative to a common reference frame, and have been generated using the rover-mounted manipulator that will ultimately perform the instrument placement, greatly increases the reliability and accuracy of the feature tracking hand-off process.

(Steps 3 & 4) Final Path Plan & Traverse

The feature point generation and tracking capabilities of the visual odometry algorithm from Step 1 are used to determine 3-D coordinates of the image plane "fiducials," since true fiducials will not be present in natural images. HIPS is then used to update the rover position and orientation and to determine the designated science target position in the IDD-centered frame. Some initial runs in the lab using the technique are described in the Experimental Studies section. A sketch of the frame hand-off follows.
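Because both camera models are referenced to the IDD base frame, the hand-off itself reduces to a single homogeneous transform. The sketch below is schematic, with made-up transform values rather than HIPS calibration data:

```python
import numpy as np

# Step 2 hand-off in miniature: with NavCam and HazCam models both
# referenced to the IDD base frame, passing the goal between camera
# systems is one 4x4 transform, with no hop through the rover frame.

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R, translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Pose of the NavCam frame in the IDD base frame (illustrative values).
T_idd_nav = make_T(np.eye(3), np.array([0.0, 0.45, 1.2]))

target_nav = np.array([0.1, 0.0, 2.0, 1.0])   # target in NavCam frame (m)
target_idd = T_idd_nav @ target_nav           # same target, IDD base frame
print(target_idd[:3])   # the HazCam model, also IDD-referenced, takes over
```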

(Step 6) Instrument Placement & Image Acquisition

The basic principle of HIPS is the generation of camera models through direct visual sensing of the manipulator's end-effector using estimation, and the subsequent use of these models to position the manipulator at a target location specified in the image planes of a stereo camera pair using stereo correlation and triangulation. In-situ estimation and adaptation of the manipulator/camera models in this method accounts for changes in the system configuration, ensuring consistent precision for the life of the mission [25]. With the target range computed via stereo triangulation, the inverse kinematic model is used to solve for the joint rotations of the manipulator that place the end-effector at the desired target location. Finally, during the terminal approach to a target by the manipulator, additional samples of the fiducial(s) located on the end-effector are used to refine the camera models in the region close to the terminus of the manipulator motion [8, 4].

The microscopic imager (MI) used on the current planetary surface rovers is a fixed focus instrument. The IDD is positioned at an offset relative to the stereo-derived surface height, and then moved toward the selected target position in steps of 3 mm. For hard surfaces such as rock, the touch-rod sensor on the device should be in contact for the image that is in focus. However, for soft surfaces such as terrain, this might not be the case if the touch-rod sinks into the surface before tripping. Since communication bandwidth is restricted for science data delivery back to Earth, an autonomous instrument placement sequence would ideally analyze the images as they are acquired and determine which one has the optimal focus. We have developed a closed-loop method for optimal IDD placement that is based on a wavelet texture index previously developed for automatic target recognition [21]. The index given in (1) is derived from the wavelet coefficients sampled at multiple scales, orientations, and local neighborhood sizes:

$$TS(a,b) = \frac{1}{L}\sum_{i=0}^{L}\frac{\sum_{j \in n}\sum_{k \in m} C_i(a+j,\,b+k)}{m \cdot n}, \qquad (1)$$

where L is the number of levels in the transform, n and m are the dimensions of the local sampling neighborhood centered at pixel (a,b) at each level of the transform, and C_i is the wavelet coefficient at level i. Higher values of the index indicate better image focus. The wavelet decomposition [26] is used for image compression onboard the spacecraft prior to downlink to Earth in order to optimize communication bandwidth. An example of a three-level wavelet decomposition is shown in Fig. 5, where positive, negative, and zero coefficients are represented by white, gray, and black respectively. As the IDD steps toward the science target, the wavelet texture index is calculated and the decision for the next step is fed back into the controller. This process continues until the optimal focus image is found, or the touch-rod makes contact with the surface. Since most surfaces are not of uniform height, the use of the index can be tuned for bringing sub-areas of the image into focus and/or deriving depth from focus.
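A rough Python rendering of the index in (1) follows, assuming the PyWavelets package. It departs from the equation in two deliberate ways, both assumptions of the sketch: the sampling neighborhood is taken to be the entire image, and coefficient magnitudes are averaged so that positive and negative coefficients do not cancel in the scalar focus score.

```python
import numpy as np
import pywt

# Wavelet texture index as a focus measure: sharper imagery produces larger
# detail coefficients at every level, hence a higher score.

def texture_index(image, levels=3, wavelet="haar"):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    score = 0.0
    for detail in coeffs[1:]:              # skip the approximation band
        for band in detail:                # horizontal, vertical, diagonal
            score += np.abs(band).mean()   # neighborhood mean of |C_i|
    return score / levels                  # 1/L factor from (1)

# Sanity check: a blurred copy of a textured image should score lower.
rng = np.random.default_rng(0)
sharp = rng.random((256, 256))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4.0   # 2x2 box blur
print(texture_index(sharp) > texture_index(blurred))  # True
```

In an approach sequence, this index would be evaluated on each MI frame as the IDD steps in, with the step at the peak score kept as the in-focus image.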

Fig. 4. Calibration procedure for the HIPS (Hybrid Image/Plane Stereo) technique. Image plane coordinates of the cue on the end-effector in both images of the stereo camera pair are mapped to arm joint angles. The end-effector work volume is sampled and a camera model is then generated for use in recovery of 3-D coordinates of points subsequently seen in the image planes.
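The calibration loop in Fig. 4 can be caricatured as fitting a projection model to fiducial samples. The sketch below fits a linear 3x4 camera matrix by the standard direct linear transform (DLT) to synthetic correspondences between 3-D fiducial positions (known from arm kinematics) and their image observations; HIPS itself estimates richer models and refines them in situ near the motion terminus [8, 4], so every name and value here is an assumption.

```python
import numpy as np

# DLT fit of a 3x4 projection matrix P from matched pairs of a 3-D point X
# (fiducial pose in the IDD base frame) and its 2-D image observation u.

def fit_camera_dlt(X, u):
    """Fit P with u ~ projection of [X;1]; needs >= 6 correspondences."""
    rows = []
    for (x, y, z), (px, py) in zip(X, u):
        Xh = np.array([x, y, z, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -px * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -py * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)            # null-space (up-to-scale) solution

# Synthetic check: sample a small work volume with a known camera.
P_true = np.hstack([np.eye(3), [[0.1], [0.0], [1.0]]])
X = np.random.default_rng(1).uniform(-0.3, 0.3, (10, 3)) + [0.0, 0.0, 1.5]
Xh = np.hstack([X, np.ones((10, 1))])
proj = (P_true @ Xh.T).T
u = proj[:, :2] / proj[:, 2:3]             # perspective divide

P_fit = fit_camera_dlt(X, u)
P_fit /= P_fit[-1, -1]                     # fix the arbitrary scale
print(np.allclose(P_fit, P_true / P_true[-1, -1], atol=1e-6))  # True
```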

III. RELATED WORK

Past work at JPL developed and demonstrated a single command sequence for autonomous approach to specified remote science targets and instrument arm placement on those targets in the Arroyo Seco at JPL [8, 5]. A 13-step algorithm was developed to autonomously track features in the vicinity of science targets using a combination of cross correlation and homographic transforms [24], and the relative position of the science target was updated during the traverse to mitigate errors in localization. Full onboard arm collision-checking was included for safe instrument placement. Eleven runs were performed, with an average autonomous approach of 5.9 meters with active obstacle avoidance and an average instrument arm placement error of 7.5 cm (1.3% of distance traveled). This baseline algorithm is currently being ported to MER for ground testing and possible use on Spirit and Opportunity beginning in September of 2005. The results of the studies are shown in Fig. 6, with the instrument arm contact positions overlaid as crosses on the HazCam image of the target used for arm trajectory planning. The designated target is at the center of a 1 cm radius circle shown in red in Fig. 6. None of the runs were within a 1 cm radius of the designated target, and the average approach distance was only 5.9 meters (no traverses longer than 6.2 meters) as compared to 10 meters; both requirements were in the original 2009 MSL reference mission.

Fig. 5. Three-level wavelet decomposition of a MER MI image displayed in the Mallat format [26], with vertical, horizontal, and diagonal detail subbands. The positive, negative, and zero valued wavelet coefficients are displayed in white, gray, and black respectively.

Fig. 6. Instrument placement results from 11 trial runs of the prior algorithm [4, 5], with crosses at the positions where the instrument arm made contact overlaid on the short-range image used for arm trajectory planning. Instrument arm contact positions for the SCAIP effort will all lie within the 1 cm radius red circle centered on the designated target.

Prior [27] and ongoing work by the QSS Group, Inc. at the NASA Ames Research Center developed and demonstrated a planner/scheduler integrated with an auto-approach and instrument placement algorithm that analyzes the science target for the optimal safe deployment based on surface normals [28, 14, 15]. The system was demonstrated on the K9 technology rover in the ARC MarsScape testbed environment using a multiple science target scenario. No attempt was made to quantify placement errors, and the average approach distance was less than 5 meters. In addition, the runs were done without active obstacle avoidance. Closest to the SCAIP approach is recent work using interest operators in a multi-scale approach over multiple frames [29, 30]. Another approach was investigated by a team at JPL for autonomous tracking and retrieval of small rock samples [31, 32, 33]. In this work, the targets were relatively easy to discriminate from the background and the approach distances were all less than 5 meters.

IV. EXPERIMENTAL STUDIES

We performed a number of experimental studies for various portions of the SCAIP system. These included runs in the JPL MarsYard with the FIDO rover and with MER HazCam imagery acquired by Spirit for the feature point tracking used in Step 1, runs in the sandpit in JPL Building 82 with the SRR (Sample Return Rover) for the HIPS-based traverse used in Step 4, and analysis of MER MI imagery acquired by Opportunity for the wavelet-based texture index auto-focusing used in Step 6.

The main issue of importance for SCAIP Step 1 is the detection and tracking of feature points in outdoor environments. In order to demonstrate the visual odometry feature extraction and tracking with MER rover imagery, a pair of HazCam images taken by Spirit on Sol 490 while on the final approach to a science target was analyzed offline. The first image of the sequence, with all of the motion vectors for the tracked feature points overlaid, is shown in Fig. 7; the start point of each vector is shown in white and the end point in black, connected by a dotted line. The motion of the rover is computed from the vectors that are consistent with the global motion of the rover (aligned vectors); the others are discarded. The estimated motion was 0.462 meters for a commanded traverse of 0.6 meters, giving a slip rate of 23%.

In order to test the feature point prediction algorithm for SCAIP Step 1, the FIDO rover was set up about 10 meters from a target position in the JPL MarsYard and commanded to traverse the distance to the target in 20 cm steps while acquiring mast-mounted NavCam stereo images at each step. The rover position was updated using wheel odometry, and the mast was repointed toward the target based on this update. Two consecutive frames taken with the mast-mounted FIDO NavCams are shown in Fig. 8. The visual odometry algorithm was used to generate three tracked feature points for the derivation of the homography transform between frames; these are shown in red in each frame in Fig. 8.

Fig. 7. Initial left HazCam image of the MER Sol 490 science target approach. Overlaid on the image are the tracked feature points found by the visual odometry algorithm, with the previous image location shown as a white dot and the current tracked point shown as a black dot at each end of the tracking vectors.

Fig. 9. Final left HazCam image of the test approach sequence taken with the SRR in JPL Building 82. Overlaid on the image are the tracked feature points found by the visual odometry algorithm, with the previous image location shown as a black dot and the current tracked point shown as a white dot at each end of the tracking vectors.

The affine parameter was then determined based on any of the other tracked feature points, labeled in green in Fig. 8, and was then used to predict the position of all of the other feature points in Frame 2 from their position in Frame 1. The maximum difference between the predicted points and the visual odometry matched points was (0.77 pixels, 1.01 pixels), with a standard deviation of (0.36 pixels, 0.43 pixels), in the tangent and lateral directions relative to the rover frame respectively.

In the experimental study for SCAIP Step 4, the SRR was set up about 2 meters from a target rock and commanded to traverse the distance to the rock in 25 cm steps. The final left image frame taken with the body-fixed SRR HazCams is shown in Fig. 9, with an overlay of the tracked features determined by the visual odometry algorithm (the white dot at one end of each line is the current position and the black dot at the other end is the previous position). We used these feature point image plane coordinates as input to the HIPS algorithm that had been calibrated for the SRR HazCams.

Fig. 8. Two consecutive frames from the FIDO left NavCam taken in the JPL MarsYard. The feature points were identified with the visual odometry algorithm. The three feature points labelled in red were used to generate the homography transform between frames, and the affine parameter was used to predict the positions of the feature points shown in green.

Since the work volume of the SRR IDD is limited to about 65 cm in front of the rover, and the HIPS camera models were all generated based on fiducial sampling within this volume, position errors from 2 meters away at the start of the sequence are expected to be large, with increasing refinement as the target enters the work volume. At the start of the traverse, the maximum difference between the HIPS-derived and the visual odometry matched points was (77.0 cm, 25.0 cm, 10.1 cm), with a standard deviation of (36.0 cm, 10.5 cm, 4.3 cm), in the tangent, lateral, and height directions relative to the rover frame respectively. At the end of the traverse, the maximum difference between the HIPS-derived and the visual odometry matched points was (3.8 cm, 6.5 cm, 4.1 cm), with a standard deviation of (3.2 cm, 2.5 cm, 1.3 cm), in the same directions.

In the experimental study for SCAIP Step 6, MER MI images downlinked from Opportunity on the surface of Mars were analyzed offline using the wavelet autofocus index. The images were acquired on Sol 10 and Sol 15 by positioning the IDD at a safe position from the target and following a trajectory toward the target along the surface normal in 5 steps of 3 mm each. The first and last MI images (spatial resolution of 1024 x 1024 pixels) from three of the targets are shown on the left and right of each row in Fig. 10. The first and third rows are from terrain targets, and the middle row is from a rock face. The wavelet autofocus index for each of the 5 steps on the approach trajectory for three representative science targets is displayed in Fig. 11, where the horizontal axis is the step number and the vertical axis is the autofocus index value. The peak value indicates which MI image is in focus in each of the sequences. In all cases, the position with the peak value matched the one determined by the scientists on Earth.

Fig. 11. Graph of wavelet autofocus index versus position relative to a designated target for MI imagery from three representative targets downlinked by MER Opportunity. The highest peak on each curve indicates the optimal focus position along the IDD trajectory.

Fig. 10. First and last MI images taken in 5 steps of 3 mm each and downlinked from MER Opportunity for three representative science targets (Piedmont, Stone Mountain, and Tarmac).

V. CONCLUSIONS & CURRENT DIRECTIONS

A closed-loop system called SCAIP for autonomous acquisition of science data by planetary surface rovers was presented. This system is initialized with a single command from Earth containing a designated science target position in a global coordinate frame. The rover then autonomously traverses to the target from an extended distance away (on the order of 10 meters) and places the instrument on the designated target position. The loop is closed between the rover plant and the target position using a visual odometry algorithm that tracks feature points, not necessarily on the designated target, to estimate and update the rover state. A hand-off mechanism using HIPS between the long-range NavCams and the short-range HazCams used for IDD arm trajectory planning was described. This hand-off mechanism closes the loop between the rover plant and the IDD-centered coordinate frames. For fixed focus instruments such as the MER MI, an algorithm was described that closes the loop between the IDD movement and the quality of the in-focus science imagery returned.

The experimental studies demonstrated that an algorithm for feature point prediction based on affine parameter estimation addresses the computational load of the visual odometry technique by potentially speeding up the point matching process with no loss of accuracy. For the final rover approach prior to IDD placement, an algorithm based on HIPS for precision positioning of the designated science target within the IDD work volume demonstrated good accuracy for a 2 meter traverse. Finally, the wavelet texture autofocus index was demonstrated to autonomously match human performance in determining the optimal IDD placement on a science target. We are currently running more tests to validate the performance of the system in terms of placement accuracy, sensitivity to environmental factors such as terrain and lighting variations, and sensitivity to science target geometry. We anticipate running a set of 50 field tests of the system using the FIDO rover in mid-summer of 2005.

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their close reading of the manuscript and helpful comments. The authors would also like to thank Eric Baumgartner and Matt Robinson for the HIPS code used for the precision approach study of SCAIP.

REFERENCES

1. P.S. Schenker, E.T. Baumgartner, L.I. Dorsky, P.G. Backes, H. Aghazarian, J.S. Norris, T.L. Huntsberger, Y. Cheng, A. Trebi-Ollennu, M.S. Garrett, B.A. Kennedy, A.J. Ganino, R.E. Arvidson, and S.W. Squyres, “FIDO: A Field Integrated Design & Operations rover for Mars surface exploration,” in Proc. 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS'01), Montreal, Canada, 2001.
2. E. Tunstel, T. Huntsberger, H. Aghazarian, P. Backes, E. Baumgartner, Y. Cheng, M. Garrett, B. Kennedy, C. Leger, L. Magnone, J. Norris, M. Powell, A. Trebi-Ollennu, and P. Schenker, “FIDO Rover Field Trials as Rehearsal for the NASA 2003 Mars Exploration Rovers Mission,” in Proc. 9th Intl. Symp. on Robotics & Applications, 5th World Automation Congress, Orlando, FL, pp. 320-327, 2002.
3. E. Tunstel, T. Huntsberger, A. Trebi-Ollennu, H. Aghazarian, M. Garrett, C. Leger, B. Kennedy, E. Baumgartner, and P. Schenker, “FIDO Rover System Enhancements for High-Fidelity Mission Simulations,” in Proc. 7th International Conference on Intelligent Autonomous Systems (IAS-7), Marina del Rey, CA, pp. 349-356, 2002.
4. T. Huntsberger, Y. Cheng, E.T. Baumgartner, M. Robinson, and P.S. Schenker, “Sensory Fusion for Planetary Surface Robotic Navigation, Rendezvous, and Manipulation Operations,” in Proc. International Conference on Advanced Robotics (ICAR'03), University of Coimbra, Portugal, 2003.
5. T. Huntsberger, H. Aghazarian, Y. Cheng, E.T. Baumgartner, E. Tunstel, C. Leger, A. Trebi-Ollennu, and P.S. Schenker, “Rover Autonomy for Long Range Navigation and Science Data Acquisition on Planetary Surfaces,” in Proc. IEEE International Conf. on Robotics and Automation (ICRA2002), Washington, DC, pp. 3161-3168, 2002.
6. C.F. Olson, “Probabilistic self-localization for mobile robots,” IEEE Trans. Robotics and Automation, vol. 16, no. 1, pp. 55-66, 2000.
7. C.F. Olson, L.H. Matthies, M. Schoppers, and M.W. Maimone, “Rover navigation using stereo ego-motion,” Robotics and Autonomous Systems, vol. 43, pp. 215-229, 2003.
8. E.T. Baumgartner, P.S. Schenker, C. Leger, and T.L. Huntsberger, “Sensor-fused navigation and manipulation from a planetary rover,” in Proc. SPIE Symposium on Sensor Fusion and Decentralized Control in Robotic Systems, 3523, Boston, MA, pp. 58-66, 1998.
9. E.T. Baumgartner, H. Aghazarian, and A. Trebi-Ollennu, “Rover Localization Results for the FIDO Rover,” in Proc. SPIE Symposium on Sensor Fusion and Decentralized Control in Autonomous Robotic Systems IV, 4571, Newton, MA, pp. 34-44, 2001.
10. E.T. Baumgartner, H. Aghazarian, A. Trebi-Ollennu, T.L. Huntsberger, and M.S. Garrett, “State estimation and vehicle localization for the FIDO rover,” in Proc. SPIE Symposium on Sensor Fusion and Decentralized Control in Robotic Systems III, 4196, Boston, MA, pp. 329-336, 2000.
11. B.D. Hoffman, E.T. Baumgartner, T. Huntsberger, and P.S. Schenker, “Improved rover state estimation in challenging terrain,” Autonomous Robots, vol. 6, no. 2, pp. 113-130, 1999.
12. S.B. Goldberg, M.W. Maimone, and L. Matthies, “Stereo Vision and Rover Navigation Software for Planetary Exploration,” in Proc. IEEE Aerospace 2002, Big Sky, Montana, USA, pp. 2025-2036, 2002.
13. C. Leger, “Efficient Sensor/Model Based On-Line Collision Detection for Planetary Manipulators,” in Proc. IEEE Int. Conference on Robotics and Automation (ICRA'02), Washington, DC, pp. 1697-1703, 2002.
14. L. Pedersen, “Science target assessment for Mars rover instrument deployment,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 817-822, 2002.
15. L. Pedersen, R. Sargent, M. Bualat, M. Deans, C. Kunz, S. Lee, and A. Wright, “Single cycle instrument deployment for Mars rovers,” in Proc. International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS'03), Nara, Japan, 2003.
16. F. Conticelli, B. Allotta, and P. Khosla, “Image-based visual servoing of nonholonomic mobile robots,” in Proc. 38th IEEE Conf. Decision and Control, Phoenix, AZ, pp. 3496-3501, 1999.
17. Y. Ma, J. Kosecká, and S.S. Sastry, “Vision Guided Navigation for a Nonholonomic Mobile Robot,” IEEE Trans. Robotics and Automation, vol. 15, no. 3, pp. 521-536, 1999.
18. G. Chesi, D. Prattichizzo, and A. Vicino, “A visual servoing algorithm based on epipolar geometry,” in Proc. IEEE International Conference on Robotics & Automation (ICRA'01), Seoul, Korea, pp. 737-742, 2001.
19. D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-Based Object Tracking,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, 2003.
20. M. Knapek, R.S. Oropeza, and D.J. Kriegman, “Selecting promising landmarks,” in Proc. IEEE International Conference on Robotics & Automation (ICRA'00), San Francisco, CA, pp. 3771-3776, 2000.
21. F. Espinal, T.L. Huntsberger, B. Jawerth, and T. Kubota, “Wavelet-based fractal signature analysis for automatic target recognition,” Optical Engineering, Special Section on Advances in Pattern Recognition, vol. 37, no. 1, pp. 166-174, 1998.
22. Förstner and Pertl, Photogrammetric Standard Methods and Digital Image Matching, 1986.
23. Z.-Y. Zhang and H.-T. Tsui, “Relative Affine Depth: Structure from Motion by an Uncalibrated Camera,” in Proc. ACCV'98, Hong Kong, 1998.
24. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, UK, 2000.
25. G. Flandin, F. Chaumette, and E. Marchand, “Eye-in-hand/Eye-to-hand Cooperation for Visual Servoing,” in Proc. IEEE International Conference on Robotics & Automation (ICRA'00), San Francisco, CA, pp. 2741-2746, 2000.
26. S. Mallat, “A theory for multiresolution signal decomposition: The wavelet representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, pp. 674-693, 1989.
27. D. Wettergreen, H. Thomas, and M. Bualat, “Initial results from vision-based control of the Ames Marsokhod rover,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'97), Grenoble, France, pp. 1377-1382, 1997.
28. M.C. Deans, C. Kunz, R. Sargent, and L. Pedersen, “Terrain model registration for single cycle instrument placement,” in Proc. 7th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS'03), Nara, Japan, 2003.
29. E. Malis, “Visual servoing invariant to changes in camera intrinsic parameters,” Proc. IEEE, pp. 704-709, 2001.
30. E. Malis, F. Chaumette, and S. Boudet, “2-1/2-D Visual Servoing,” IEEE Trans. Robotics and Automation, vol. 15, no. 2, pp. 238-250, 1999.
31. M.W. Maimone, I. Nesnas, and H. Das, “Autonomous rock tracking and acquisition from a Mars rover,” in Proc. 5th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS'99), Noordwijk, Netherlands, pp. 329-334, 1999.
32. I.A.D. Nesnas, M.W. Maimone, and H. Das, “Autonomous vision-based manipulation from a rover platform,” in Proc. IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA'99), pp. 351-356, 1999.
33. I. Nesnas, M.W. Maimone, and H. Das, “Rover maneuvering for autonomous vision-based dexterous manipulation,” in Proc. IEEE International Conf. Robotics and Automation (ICRA'00), pp. 2296-2301, 2000.