A crater detection and identification algorithm for autonomous lunar landing

Sébastien Clerc*, Marc Spigai**, Vincent Simard-Bilodeau***

* Thales Alenia Space, Cannes La Bocca, France (Tel: +33 492 926 052; e-mail: [email protected])
** Thales Alenia Space, Toulouse, France (e-mail: [email protected])
*** U. Sherbrooke and NGC Aerospace, Sherbrooke, Canada (e-mail: [email protected])

Abstract: Future space exploration missions, and in particular robotic lunar landing missions, will require high-precision autonomous navigation. A key element of the navigation chain is the direct determination of the camera pose in the terrain reference frame by detecting and identifying visual features. For space applications, and especially for lunar landing, craters are useful features. A segmentation-based method is introduced to automatically detect craters in an image. Finally, a method to identify the detected craters in a reference database is presented.

Keywords: Interplanetary Spacecraft, Computer Vision, Navigation Systems.

1. INTRODUCTION

Future space exploration missions, and in particular robotic lunar landing missions, will require high-precision autonomous navigation with a performance better than 100 meters at touch-down (Brady et al., 2007; Neveu et al., 2010). A possible approach to reach such accuracy is to compare terrain images acquired by an on-board camera with a reference on-board terrain model built from the results of prior orbital reconnaissance missions.

Comparing whole image frames has been proposed for automatic airplane landing (Gonçalves et al., 2009). This approach is not well suited to a planetary landing mission, because it requires a large amount of on-board memory to store the image database. The preferred approach is instead to use a catalogue of geo-referenced features. These features could be distinctive texture elements in the image (Mourikis et al., 2009; Pham et al., 2008) or high-level terrain features such as craters or boulders. Impact craters are the best candidates for this problem because they are more distinctive than other geographic landmarks. Impact craters are created by the collision of small asteroids and comets drifting inside the Solar System. On Earth, the atmosphere prevents the smallest asteroids from reaching the surface, and erosion tends to erase the impact of the larger ones over time. However, the surface of small bodies such as the Moon is covered with a large number of craters of all sizes. Craters smaller than a few kilometres in diameter ("simple" craters) are remarkably similar: they are circular with a flat bottom and a rim size proportional to their diameter. The circular shape makes them appear similar whatever the Sun azimuth; a variation of the Sun elevation modifies only the respective sizes of the shaded and sunlit parts. These distinctive features can be used to develop an automatic detection algorithm.

The present paper describes: 1) a new segmentation-based autonomous crater detection algorithm; 2) a new RANSAC-based feature identification algorithm. Compared to existing algorithms for these problems, the authors believe that the proposed approaches are faster and simpler to implement on-board. They rely on well-established methods of image processing not used previously in this context.
The paper is organized as follows. The first part explains briefly the reference lunar landing mission and the overall vision-based navigation concept. The second part presents the proposed crater detection method and some application examples on real crater images. The third part analyzes quantitatively and qualitatively the performance of the proposed crater detection method and compares it to another state-of-the-art method. The fourth part addresses the crater identification problem, i.e. the matching of each detected crater with a geo-referenced crater database; this section includes a brief literature survey, the proposed crater identification method and some results showing its efficiency and robustness. Finally, the last part of the paper presents the conclusions of this work and addresses possible extensions to other autonomous navigation problems.

2. THE LUNAR LANDING MISSION

2.1 Pin-point Landing and Navigation Accuracy Requirement

Most space exploration missions so far have reached a landing accuracy of the order of some tens to one hundred kilometres, an impressive performance considering the overall distance travelled. However, there is a clear interest in gaining at least one order of magnitude in accuracy, in order to reach specific points of interest, either scientific (e.g. a geologic feature on Mars) or operational (e.g. a permanent lunar base). A major objective for a landing mission on the Moon is the North or South pole. Because of the very small inclination of the Moon's rotation axis, the Sun remains always near the horizon there. Some craters have parts of their rim almost constantly illuminated and their bottom constantly in the shade. The recent LCROSS mission has brought some conclusive elements on the presence of water ice inside such craters. Their rims could therefore be a very interesting place to establish a long-term lunar base. The required accuracy to land on these narrow crater rims is of the order of 100 meters.

2.2 Vision-based Navigation

Such accuracy is not within the reach of current inertial navigation technologies. Their performance is indeed limited by the Inertial Measurement Unit (IMU) integration error, the initial position determination error (by the Earth control station) and the uncertainty on the Moon's gravity field. While the first source of error can be somewhat reduced by using more accurate sensors, there is little prospect of significantly reducing the other contributors. Vision-based navigation using on-board cameras is an attractive solution to this problem. Cameras are passive sensors with limited mass and power requirements, which are attractive characteristics for the design of an autonomous vehicle. However, the extraction of the useful information from the images requires complex algorithms. This is especially a problem for space vehicles using radiation-hardened flight computers with very limited processing power. There are two types of visual measurements. The first type, called "tracking", consists in estimating the change of pose between two successive camera images. As is well known, only the rotation and the direction of the translation vector (not its magnitude) can be recovered from a pair of images.
Therefore, the tracking measurement must be fused with information from other sensors and/or from a dynamical model (e.g. using a Kalman filter) in order to recover the full 6-dof motion. This strategy can reduce to a large extent the navigation error due to the un-calibrated IMU biases and gravity knowledge errors. The strategy has however some drawbacks: it cannot reduce the initialization error, its accuracy decreases with the travelled distance, and it is accurate only at low altitude (below 2 km). The second type, called "reckoning", consists in estimating the camera position and attitude with respect to the terrain using an on-board database of terrain-referenced surface features. Contrary to the tracking case, a single image is enough to perform visual reckoning. In addition, it provides a dramatic improvement in terms of navigation accuracy compared to tracking-only navigation, but it is much more demanding in terms of processing time. A complete navigation algorithm for lunar landing has been described in a previous paper (Simard-Bilodeau et al., 2010).

Numerical investigations reported in that paper have shown that the required 100 m precision at touch-down can be reached provided that: the reckoning measurements are performed every 5 seconds at least; five or more features are identified in the camera frame with a precision of 5 pixels; and the feature position on the terrain is known with an error of 50 meters at most. This provides the requirements for the vision reckoning algorithm described in the present paper.

2.3 Reference Mission Timeline

This paragraph aims at providing some orders of magnitude for a typical lunar landing mission, to better understand the characteristics of the on-board navigation algorithm. The reader is referred to Brady et al. (2007) or Neveu et al. (2010) for a more detailed presentation. A typical lunar landing mission starts from a low-periapsis orbit (15 km), with a relative velocity of more than 2 km/s. A set of high-thrust engines (500 N or more) is turned on to reduce the velocity of the lander. Smaller thrusters (20 N typically) are used to control the lander attitude, thereby controlling the direction of the thrust vector. The attitude of the lander with respect to the surface changes as the trajectory becomes progressively vertical. The final part of the trajectory (last 100 s) can be dedicated to the autonomous detection and avoidance of surface hazards (rocks, slopes, dark areas) in order to ensure a safe landing. A typical landing sequence lasts around 900 s and covers 800 km of ground range, see Fig. 1.

Fig. 1: Typical Lunar Landing Mission Scenario (figure not to scale).

It is foreseen to acquire and process images for navigation during the whole descent trajectory. This means that the reference database must cover a long tract of the Moon's surface, and that the algorithm must be robust to varying viewing conditions, i.e. changes in the lighting source orientation (due to the change of longitude and latitude), the altitude and the orientation of the camera.

3. SEGMENTATION-BASED CRATER DETECTION

The crater detection principle relies on the fact that a crater appears as a pair of dark and bright patches inside a globally elliptical area (see Fig. 2). The segment joining the centers of the bright and dark areas is roughly aligned with the Sun direction, and its length is similar to the diameter of each patch.
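As an illustration, the dark/bright pairing test implied by this principle can be sketched as follows. This is a minimal sketch, not the authors' implementation: the patch representation, tolerance values and function names are assumptions introduced here.

```python
import math

def pair_bright_dark(bright, darks, sun_azimuth_rad,
                     pos_tol=0.5, size_tol=0.5):
    """Try to pair a bright patch with a dark patch along the Sun direction.

    bright/darks: dicts with 'center' (x, y) and 'diameter'.
    The expected dark-patch location is one bright-diameter away from the
    bright centroid, along the Sun azimuth (tolerances are illustrative).
    """
    bx, by = bright["center"]
    d = bright["diameter"]
    # End-point of the search segment, in the Sun direction.
    ex = bx + d * math.cos(sun_azimuth_rad)
    ey = by + d * math.sin(sun_azimuth_rad)
    for dark in darks:
        dx, dy = dark["center"]
        close = math.hypot(dx - ex, dy - ey) <= pos_tol * d
        similar = abs(dark["diameter"] - d) <= size_tol * d
        if close and similar:
            return dark  # candidate crater: merge bright and dark patches
    return None
```

Because each bright patch triggers at most one directed lookup, a spatial index over the dark patches would keep this step linear in the number of patches, in line with the complexity claim made later in the text.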

Fig. 2: Steps of the Segmentation-based Algorithm.

The algorithm starts with a K-means segmentation of the image (see e.g. Duda et al., 2001). This algorithm is widely used in image processing applications and is well known for its robustness and efficiency. The number of segmentation levels must however be chosen carefully. Extensive experiments on synthetic and real images show that choosing 5 levels is optimal for Moon and asteroid images, which are characterized by sharp shadows (no diffuse light) and similar surfaces (regolith). Only the brightest and darkest connected objects are kept for analysis and labelled. At this stage, very large or very small objects can be removed to speed up further processing. The next step consists in finding pairs of bright and dark objects. At this point the knowledge of the Sun direction is used: the azimuth provides the direction of the dark-bright alignments and the elevation provides the expected ratio between the sizes of the dark and bright patches. Only a rough knowledge of the Sun direction is required (better than 45° typically), which can easily be achieved by inertial navigation alone. From the center of each bright object, a segment is constructed in the direction of the Sun, with a length equal to the object diameter. If the other end-point of the segment belongs to a dark object of similar size, a candidate crater is created by merging the dark and bright patches. In order to improve the detection probability, a second, similar search is performed starting from the dark objects not yet associated. It is important to note that the complexity of this step of the algorithm is linear (proportional to the number of bright or dark objects) and not quadratic (proportional to the product of these numbers).

Next, the second-order moments (x², y² and xy) of the paired objects are computed, and the ellipse having the same moments can be determined. If the object area is lower than a given fraction of the ellipse area, the object is rejected as a wrong detection. A final step consists in removing craters that are too close to the boundaries of the image, and therefore probably cropped. Even though cropped craters can often be correctly detected (see Fig. 3), the algorithm is not able to estimate their center and diameter correctly, and this would induce errors subsequently. The performance of the algorithm on real images is illustrated in Fig. 3.
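The moment-based ellipse fit can be sketched as follows. This is a sketch under the standard assumption that, for a uniformly filled ellipse, the variance along a principal axis equals one quarter of the squared semi-axis; the function name and return convention are introduced here for illustration.

```python
import math

def ellipse_from_moments(pixels):
    """Fit an ellipse with the same second-order moments as a pixel blob.

    pixels: list of (x, y) coordinates belonging to the blob.
    Returns (center, (a, b) semi-axes, orientation in radians).
    """
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    # Central second-order moments (covariance of the blob).
    mxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    myy = sum((y - cy) ** 2 for _, y in pixels) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Eigenvalues of the 2x2 moment matrix give the principal variances.
    tr, det = mxx + myy, mxx * myy - mxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    theta = 0.5 * math.atan2(2 * mxy, mxx - myy)
    # Uniform ellipse: variance = (semi-axis)**2 / 4, hence 2*sqrt.
    return (cx, cy), (2 * math.sqrt(l1), 2 * math.sqrt(max(l2, 0.0))), theta
```

The area-ratio rejection test described above then amounts to comparing the blob pixel count n with the fitted ellipse area pi*a*b.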


Fig. 3: Crater Detection on Real Images. a and c: Moon, b: asteroid Phoebe, d: Mars (all images are courtesy of ESA).

4. PERFORMANCE AND ROBUSTNESS ASSESSMENT

In order to determine the accuracy and robustness of the crater detection algorithm, a set of synthetic images has been prepared with the planetary scene generator PANGU (Dubois-Matra et al., 2009), developed by the University of Dundee under an ESA contract. This software is able to generate images of the same terrain under different lighting conditions, and to determine the complete list of visible craters in the image as well as the parameters of the ellipses formed by the crater rims in the image frame. More precisely, 2 points of view (nadir view and 45° slant view) and 3 Sun elevations (77.5°, 22.5° and 2.5°) have been considered, providing a set of 6 images. The very low Sun elevation of 2.5° is representative of illumination conditions at the Moon poles.

Table 1. Segmentation-based Crater Detection Performances (errors are in pixels at 1 sigma).

View   Craters  Sun        Detected  False   Position  Position  Size
       in FoV   elevation            alarms  bias      noise     error
Nadir  77       high       33        2       0.98      1.53      1.07
                low        38        1       0.68      1.13      0.42
                very low   11        2       0.54      1.20      0.58
Slant  75       high       31        0       0.97      1.15      0.55
                low        36        2       0.87      1.59      0.63
                very low    9        0       0.59      1.16      0.44

This allows the determination of the detection rate, the probability of false alarms and the accuracy of the ellipse coefficient (crater rim) estimation. A typical detection performance result is depicted in Fig. 4. Statistical results for the set of images are gathered in Table 1.
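As a quick sanity check on Table 1 (the counts below are copied from the table; the dictionary layout is just for illustration), the detection rate per case is the ratio of detected craters to craters visible in the field of view:

```python
# Detected craters vs. visible craters, from Table 1
# (nadir: 77 visible, slant: 75 visible).
visible = {"nadir": 77, "slant": 75}
detected = {
    ("nadir", "high"): 33, ("nadir", "low"): 38, ("nadir", "very low"): 11,
    ("slant", "high"): 31, ("slant", "low"): 36, ("slant", "very low"): 9,
}
# Detection rate per view/elevation case.
rates = {k: n / visible[k[0]] for k, n in detected.items()}
```

The rates sit around 40-50% for the high and low Sun elevations and drop sharply for the very low elevation, consistent with the findings discussed in the text.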

The main findings of these tests are as follows:

- The detection probability is of the order of 40%, except at very low Sun elevation, in part because cast shadows reduce the number of visible craters in the field of view.
- The false alarm rate is quite small (one or two false craters per image at most).
- There is a systematic position bias of less than 1 pixel, clearly correlated in magnitude and direction to the Sun elevation and azimuth. This leaves room for a possible improvement of the localization performance by modelling or estimating the bias.
- The standard deviation of the position error is a little above 1 pixel. The worst-case performance over all simulated cases is below 5 pixels.

The crater size, determined as the geometric mean of the major and minor semi-axes, is estimated with a precision typically better than one pixel. Surprisingly, neither the position nor the size estimation errors appear to be correlated with the crater size. It is noted that there are physical limits to the detection probability and false alarm rate regardless of the performance of the crater detection algorithm. First, some craters may be completely shaded by the terrain relief if the Sun is very low, or occluded in case of slant view. Conversely, a very low Sun incidence may allow the detection of shallow craters which may not be visible with a higher Sun elevation. The navigation method should in any case be robust to un-detected craters and false detections.

Fig. 4: Crater Detection Performances for the Nadir/Low Sun Elevation Case. Top: detected craters (green), ground truth (blue) and false alarms (red). Bottom: crater position error (the correlation of the error with the Sun direction is obvious).

Other algorithms for crater detection have been proposed in the literature. A first class of approaches uses the Hough transform, see e.g. Weismuller et al. (2007). The Hough transform is a convenient tool to detect circles in an image, and extensions to ellipses exist. In spite of improvements to make the method faster and more robust, this approach remains more complex and sensitive to noise than the proposed one. A more attractive approach, using edges of the image, has been proposed by Cheng and Miller (2003). The principle of this algorithm is similar to the proposed method, but craters are detected by grouping their dark- and bright-side edges. Checks on edge curvature and size allow removing false detections. Finally, direct ellipse fitting is performed on the edges belonging to the same crater. Another edge-based crater detection method is presented in Meng et al. (2008).

Using edges rather than grey levels of the image has some advantages. In particular, it reduces the sensitivity to lighting conditions. On the other hand, small craters are hard to detect due to noise. In addition, extracting and manipulating edges is more complex than manipulating image patches. Finally, the number of tuning parameters of the edge-based crater detection is larger.

Table 2. Edge-based Crater Detection Performances (errors are in pixels at 1 sigma).

View   Craters  Sun        Detected  False   Position  Position  Size
       in FoV   elevation            alarms  bias      noise     error
Nadir  77       high       18        1       0.56      1.13      1.02
                low        26        2       0.54      0.75      0.71
                very low   15        3       0.32      1.46      1.63
Slant  75       high       18        0       0.35      1.59      0.90
                low        24        1       0.53      1.08      0.75
                very low   12        4       0.22      2.58      0.96

The performance of the Cheng and Miller algorithm has been analyzed with the same image samples. The results reported in Table 2 show that the segmentation-based crater detection has better or equal performance. Since computational cost is a critical aspect and the performances are similar, the authors consider that the segmentation-based crater detection is better suited for space applications.

5. RANSAC-BASED CRATER IDENTIFICATION

Once the craters have been detected, it remains to identify each of them in the reference database, in order to use this information (image and terrain coordinates of the detected craters) as measurements for the navigation algorithm. Using a priori information on the camera pose, it is possible to narrow the search within the crater database. Nevertheless, for most of the detected craters, there are several matching candidates in the database. Each possible match between a detected crater and a database crater can be considered as information which allows determining the camera pose, i.e. the 6-dof transform from the reference frame to the current camera frame. Correct matches will lead to a consistent estimation of this motion, while incorrect matches can be viewed as outliers which need to be removed. The identification problem can therefore be expressed as a problem of robust pose estimation. This approach allows treating false detections and false matches in a similar way. Once the craters are correctly identified, the estimated lines-of-sight to each crater can be processed in a navigation filter (see Simard-Bilodeau et al., 2010 for details).

The complexity of the crater identification step depends directly on the hypotheses made about a priori information on the camera pose. Weismuller et al. (2007) assume a nadir view with a good knowledge of the altitude. Therefore, only 3 degrees of freedom must be reconstructed, namely a translation parallel to the image plane and a rotation about the roll axis. Their algorithm reconstructs the pose from pairs of matched crater positions, and selects the best pose as the one with the highest number of consistent matches. Cheng and Miller (2003) consider a small 6-dof transform between a reference image and the current image. A triplet of randomly matched craters is used to build the homography between the images. The number of other matches consistent with this homography is used to select the best candidate.
The second approach is obviously more computationally intensive, but also more general. The complexity and robustness of the algorithm depend directly on the number of crater matches used to reconstruct the pose: the probability of finding a set of 3 correct matches among all possible matches is indeed considerably lower than that of finding only 2 correct matches. This work proposes a variant of the algorithm of Cheng and Miller using N ≥ 2 crater matches. First, the rotation between the current and the reference frame is assumed to be arbitrary but known with a good accuracy. This seems to be an appropriate assumption for the lunar landing case, since a high-precision gyro initialized by star-tracker measurements can be used to determine the terrain-relative attitude with a good precision (typically better than 0.5°). The problem is therefore restricted to the determination of the translation vector. This vector can be determined with N = 2 correct crater matches. Instead of using homographies, which are limited to planar geometries, the 3D translation vector is directly computed using a least-squares approach. An a priori knowledge of the camera position is assumed to be available thanks to the navigation filter. Let f(x) = u be the perspective projection transforming terrain coordinates into image coordinates for the nominal camera position. If xi is the terrain-referenced position of the i-th crater of the database, then f(xi) is the nominal position of the crater in the image. Let Ji = ∂f/∂x(xi) be the corresponding 2 x 3 Jacobian matrix. The unknown 3D translation error dx of the camera is such that:

Ji · dx = dui,   (1)

where dui is the translation between the actual and nominal positions of crater i in the image. As a pre-processing step, one needs to build, for each crater i of the database, a list of all detected craters j whose image coordinates uj are within a user-defined distance of the nominal position f(xi). The number of craters in the list is typically of the order of 10. A match between the i-th database crater and the j-th detected crater determines a constraint on the translation vector:

Ji · dx = uj - f(xi).   (2)

A RANSAC-type algorithm is used to determine the best consistent set of i-j matches:

- Step 1: select a random set of N ≥ 2 crater matches. The matches have to be bijective (a database crater can match at most one detected crater and vice-versa).
- Step 2: evaluate the translation vector dx by solving in a least-squares sense the set of N constraints (2).
- Step 3: evaluate the number of matches consistent with this choice of N matches, i.e. for which the residual of equation (2) is lower than 20 pixels. If the number of consistent matches is lower than or equal to N, go to Step 1.
- Step 4: improve the dx estimate by including all consistent matches from Step 3.
- Step 5: improve the match selection by retaining only residuals lower than 5 pixels.
- Step 6: if the number of matches is larger than that of the current best estimate, update the current best estimate. If more than 50% of the craters have been matched, stop; else go to Step 1.

The performance of the algorithm depends weakly on the choice of N, but the computational cost increases with N. In practice, N = 2 is recommended.

Table 3. RANSAC-based Crater Identification Performances (errors are in meters at 1 sigma).

View   Craters  Sun        Iterations  Lateral  Altitude  Bad
       in FoV   elevation              error    error     matches
Nadir  77       high       362         1.96     0.72      0
                low        265         0.62     0.59      0
                very low   131         0.48     0.64      0
Slant  75       high       355         3.51     1.25      0
                low        400         3.30     1.18      0
                very low   147         3.16     1.32      0
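To make the procedure concrete, the steps above can be sketched in numpy on a synthetic nadir camera. This is a minimal sketch, not the flight implementation: the focal length, tolerances and candidate-search radius are illustrative values, the terrain-relative attitude is taken as the identity, and the bijectivity check of Step 1 and the 50%-matched early exit of Step 6 are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
FL = 1000.0  # focal length in pixels (illustrative value)

def project(x, cam):
    """Pinhole projection of terrain point x from camera position cam."""
    r = x - cam
    return FL * r[:2] / r[2]

def jacobian(x, cam):
    """2x3 Jacobian Ji of the image coordinates w.r.t. camera position."""
    r = x - cam
    return -FL * np.array([[1.0 / r[2], 0.0, -r[0] / r[2] ** 2],
                           [0.0, 1.0 / r[2], -r[1] / r[2] ** 2]])

def ransac_translation(db_xyz, det_uv, cam_nominal, n_sample=2,
                       coarse_tol=20.0, fine_tol=5.0, max_iter=500):
    """Estimate the camera translation error dx from crater matches."""
    nominal_uv = [project(x, cam_nominal) for x in db_xyz]
    jacs = [jacobian(x, cam_nominal) for x in db_xyz]
    # Pre-processing: candidate matches (i, j) close in the image plane.
    cands = [(i, j) for i in range(len(db_xyz)) for j in range(len(det_uv))
             if np.linalg.norm(det_uv[j] - nominal_uv[i]) < 200.0]
    resid = lambda i, j, dx: np.linalg.norm(
        jacs[i] @ dx - (det_uv[j] - nominal_uv[i]))
    best_dx, best_inl = None, []
    for _ in range(max_iter):
        # Step 1: random sample of candidate matches.
        sample = [cands[k] for k in rng.choice(len(cands), n_sample,
                                               replace=False)]
        # Step 2: least-squares solution of the stacked constraints (2).
        A = np.vstack([jacs[i] for i, _ in sample])
        b = np.hstack([det_uv[j] - nominal_uv[i] for i, j in sample])
        dx = np.linalg.lstsq(A, b, rcond=None)[0]
        # Step 3: count coarse inliers; discard unsupported hypotheses.
        inl = [(i, j) for i, j in cands if resid(i, j, dx) < coarse_tol]
        if len(inl) <= n_sample:
            continue
        # Step 4: refine dx with all coarse inliers.
        A = np.vstack([jacs[i] for i, _ in inl])
        b = np.hstack([det_uv[j] - nominal_uv[i] for i, j in inl])
        dx = np.linalg.lstsq(A, b, rcond=None)[0]
        # Step 5: keep only tight matches.
        inl = [(i, j) for i, j in inl if resid(i, j, dx) < fine_tol]
        # Step 6: keep the best hypothesis seen so far.
        if len(inl) > len(best_inl):
            best_dx, best_inl = dx, inl
    return best_dx, best_inl
```

As a side note, the standard RANSAC analysis supports the preference for N = 2: if a fraction w of the candidate matches is correct, the expected number of samples before drawing an all-correct set grows like 1/w^N, so sampling triplets instead of pairs is considerably more expensive.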

The simulations (see Table 3 for statistics and Fig. 5 for a typical result) demonstrate that the identification algorithm is robust to a translation uncertainty representing 7.5% (3 sigma) of the altitude. For the camera parameters assumed here, this represents an uncertainty of ±170 pixels on the crater position in the image. The algorithm was tested with 100 randomly generated translation vectors for each of the 6 test images. The algorithm converged successfully in all cases, without any bad association. The translation vector is reconstructed with a performance better than 4 meters for the slant view cases, and 2 meters for the nadir view cases.

Fig. 5: Typical result of the RANSAC-based crater identification method. Green: correctly identified craters and translation vectors dui; yellow: rejected craters; black: rejected translation matches dui.

6. CONCLUSIONS

A method which allows the direct determination, under suitable assumptions, of the camera pose in the world reference frame has been presented. This problem, called "reckoning", is not often addressed in the computer vision literature, whereas a considerable amount of work has been devoted to the "tracking" problem. It requires not only low-level image processing, but also complex vision tasks such as detecting and identifying objects. However, the authors believe that this is a key technology to allow long-term navigation of autonomous vehicles. Although the crater-based navigation method is clearly limited to space applications, some of the ideas and concepts highlighted in this work could prove useful for GPS-free navigation of terrestrial unmanned vehicles.

REFERENCES

Brady T., Schwartz J. and Tillier C. (2007). System Architecture and Operational Concept for an Autonomous Precision Lunar Landing System, 30th Annual AAS Guidance and Control Conf., Breckenridge, Co., U.S.A., February 3-7.
Neveu D., Hamel J.-F., Christy J. and de Lafontaine J. (2010). Next Lunar Lander: Descent & Landing GNC Analysis, Design and Simulations, 33rd Annual AAS Guidance and Control Conf., Breckenridge, Co., U.S.A., February 6-10.
Gonçalves T., Azinheira J.R. and Rives T. (2009). Vision-based Automatic Approach and Landing for an Aircraft using a Direct Visual Tracking Method, 6th Int. Conf. on Informatics in Control, Automation and Robotics, ICINCO'09, Milan, Italy.
Mourikis A.I., Trawny N., Roumeliotis S.I., Johnson A., Ansar A. and Matthies L. (2009). Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing, IEEE Transactions on Robotics, Vol. 25, No. 2.
Pham B.V., Lacroix S. and Devy M. (2008). Landmarks Constellation Based Position Estimation for Spacecraft Pinpoint Landing, 10th Workshop on Advanced Space Technologies for Robotics and Automation, Noordwijk, Netherlands, November 11-13.
Duda R.O. et al. (2001). Pattern Classification, John Wiley & Sons, second edition.
Dubois-Matra O., Parkes S. and Dunstan M. (2009). Testing and Validation of Planetary Vision-based Navigation Systems with PANGU, Int. Symp. on Space Flight Dynamics, Toulouse, France.
Cheng Y. and Miller J.K. (2003). Autonomous Landmark Based Spacecraft Navigation System, 13th Annual AAS/AIAA Space Flight Mechanics Meeting, Ponce, Puerto Rico.
Meng D., Yun-Feng C. and Qing-xian W. (2008). Autonomous Crater Detection from Planetary Images, Int. Conf. on Innovative Computing, Information and Control, Dalian.
Weismuller T., Caballero D. and Leinz M. (2007). Technology for Autonomous Optical Planetary Navigation and Precision Landing, AIAA Space 2007 Conf., Long Beach, Calif., USA.
Simard-Bilodeau V., Clerc S., de Lafontaine J. and Drai R. (2010). A Vision-Based Algorithm for Pin-point Landing, 33rd AAS Guidance and Control Conf., Breckenridge, Co., USA.
