ISSFD

TESTING AND VALIDATION OF PLANETARY VISION BASED NAVIGATION SYSTEMS WITH PANGU

Olivier Dubois-Matra(1), Steve Parkes(2), Martin Dunstan(3)

(1) ESA / ESTEC, Keplerlaan 1, P.O. Box 29, 2200 AG Noordwijk, The Netherlands; Tel.: +31 71 565 3442; [email protected]
(2) Space Technology Center, School of Computing, University of Dundee, Dundee, DD1 4HN, Scotland, UK; Tel.: +44 1382 385194; [email protected]
(3) Space Technology Center, School of Computing, University of Dundee, Dundee, DD1 4HN, Scotland, UK; Tel.: +44 1382 385194; [email protected]

ABSTRACT

Missions using vision-based navigation systems require test and validation means which include the representation of virtual planetary landscapes. This paper describes the PANGU software (Planet and Asteroid Natural scene Generation Utility) developed for ESA by the University of Dundee, and its use so far in ESA projects for the development and validation of vision-based navigation systems. Upcoming improvements to PANGU, which will make new applications possible, are also presented.

1. INTRODUCTION

Vision-based navigation systems are often considered for exploration missions. Terrain-related sensors can help to achieve accurate navigation by observing natural scenery and, in the case of landing, provide hazard avoidance. Potential applications include orbital rendezvous and sample capture, precision landing, rovers, flybys and initial orbit insertion manoeuvres. Other applications of such navigation systems are formation flying and future space transportation systems.

Several research and technology studies have been, and are still, underway at the European Space Agency to test and validate autonomous navigation systems. A common issue for these projects is the need for test and validation means, whether on a completely virtual simulator or with a hardware-in-the-loop test bench. This in turn requires the capability to generate on request realistic virtual images of the surface of a planet, asteroid or spacecraft. This paper describes the PANGU software (Planet and Asteroid Natural scene Generation Utility) and its use in ESA projects for the development and validation of vision-based navigation systems.

2. THE PANGU SOFTWARE

2.1. Origins

The origin of PANGU traces back to the ESA-supported LunarSim study in 1998. The resulting tool was able to render previously generated Lunar-like cratered surfaces, either from existing digital elevation maps (DEM) or from a fractal surface. Rendering was done with POV-Ray, a freeware ray-tracing tool. There were several limitations, however: only one level of resolution existed, meaning that increasing the resolution of the complete terrain (several kilometers on a side) would lead to unrealistic memory requirements; geological features were limited to craters, and their appearance was not sufficiently realistic; light source and reflectance models were limited; finally, several minutes were required to produce a single picture.

Based on the feasibility demonstrated by LunarSim, and with the goal of improving on the issues raised, the development of a successor software package called PANGU was decided. In addition, the scope of applications was extended to Mercury in order to support the BepiColombo mission.

2.2. Description

The architecture of the current version of PANGU (ver. 2.70) is shown in Fig. 1. PANGU has two main functions. The first is the generation of realistic planetary surfaces. The user can start from scratch and create a fractal landscape as a basic surface, then improve it by adding geological features. Alternatively, the user can import a Digital Elevation Model (DEM) created from existing data and start from there.

[Fig. 1 block diagram: a Surface Generator GUI supplies surface parameters, crater and boulder lists, and crater/boulder model parameters to the Surface Generator, which outputs a surface DEM and surface polygons; Make Texture and Make Shadows derive a texture map and a shadow map from the DEM and illumination parameters; the Surface Viewer combines these with camera model parameters, a flight path and camera coordinates (from an Image Generator GUI or the socket interface) to produce the output image.]

Fig. 1. PANGU architecture

As the planetary bodies of interest were initially the Moon and Mercury, greater emphasis has been placed on boulder and crater generation. A sand dune model for Mars is also present (see Sect. 3.2). The main characteristics of craters are their size and their age, which determine the way in which PANGU adds them to the terrain and their final appearance. For example, old craters will have smooth features, be put in place first, and then be covered by younger craters. Craters can be randomly distributed on the surface or have user-specified distributions. Boulders can be distributed independently of craters or, on the contrary, have a distribution relative to craters, which is generally more realistic. Once the features are generated, a PANGU model is created, which is an open polygon mesh.

An important feature of PANGU is the possibility to create a hierarchical (or layered) terrain. This is a way to deal with various levels of terrain definition, as required for instance in a landing scenario. Several layers, all centered on the middle of the initial DEM, can be superposed. Each has half the side length of the one below, but twice the resolution. The resulting mesh model can be seen in Fig. 2. PANGU also allows for the creation of closed surfaces such as planets and asteroids. For the latter, the basic shape can be randomly generated.
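The layered-terrain construction can be made concrete with a small sketch (illustrative only, not PANGU code): each layer halves the side length and doubles the resolution of the one below, so every layer carries the same number of cells while detail concentrates around the centre.

```python
def layer_geometry(base_side_m, base_cells, n_layers):
    """Side length and cell size (both in metres) for each layer of a
    hierarchical terrain: every layer covers half the side length of
    the one below it at twice the resolution, so the number of cells
    per side stays constant."""
    layers = []
    side = float(base_side_m)
    cell = side / base_cells      # cell size of the coarsest layer
    for _ in range(n_layers):
        layers.append((side, cell))
        side /= 2.0               # half the side length...
        cell /= 2.0               # ...at twice the resolution
    return layers

# Example: a 500 km per side terrain with 12 layers, the kind of model
# used in the NPAL simulations described later in the paper.
layers = layer_geometry(500_000.0, 1024, 12)
```

With these numbers the innermost layer spans about 244 m at roughly 24 cm per cell, which is why the scheme suits a landing scenario: resolution grows as the camera descends towards the centre.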

Fig. 2 : Example of a multi-layered PANGU model mesh structure

The second main function of PANGU is its capability to view, or render, the terrain. The user, either a person through a viewer application or a client program through a TCP/IP interface, can request the image of the terrain for a given point of view. The viewer is based on the Open Graphics Library (OpenGL) interface for performance and portability. This is a significant improvement on previous versions (although it must be stressed that, as of today, PANGU does not run in real time). Before the image rendering itself, an intermediate step is the creation of a shadow map. The shadow map represents the umbra and penumbra on the landscape for a given position of the light source in the sky. A shadow map is computed once for a given scene and is then reused for each change in the camera point of view.

Fig. 3 : Example of a PANGU-generated Moon landscape with craters, boulders with a crater-related distribution, and shadows (rendering with POV-Ray).
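The shadow-map step can be illustrated with a simplified, umbra-only sketch on a square heightfield (the actual PANGU shadow map also models penumbra; the function and parameter names here are illustrative): a cell is in shadow if any terrain sample along the ray towards the light source rises above that ray.

```python
import math

def shadow_map(heights, cell_size, sun_azimuth_deg, sun_elevation_deg):
    """Umbra-only shadow map for a square heightfield: a cell is
    shadowed when terrain along the ray towards the sun rises above
    that ray. Simplified sketch; the real tool also computes penumbra."""
    n = len(heights)
    az = math.radians(sun_azimuth_deg)
    el = math.radians(sun_elevation_deg)
    dx, dy = math.cos(az), math.sin(az)   # horizontal direction towards the sun
    slope = math.tan(el)                  # height the ray gains per metre travelled
    shadowed = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            t = cell_size                 # march away from the cell, one cell at a time
            while True:
                x = j + dx * t / cell_size
                y = i + dy * t / cell_size
                if not (0 <= x < n and 0 <= y < n):
                    break                 # ray left the grid: the cell is lit
                if heights[int(y)][int(x)] > heights[i][j] + slope * t:
                    shadowed[i][j] = True # blocked: the cell is in shadow
                    break
                t += cell_size
    return shadowed
```

Because the map depends only on the terrain and the light direction, it is computed once per scene and reused for every camera viewpoint, exactly the property the paper highlights.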

Another option in the viewer is the addition of atmospheric fog or dust, either of a global nature (the whole atmosphere is fogged, with density following a linear or exponential extinction profile) or local planar (fog/dust is limited to below a horizontal plane).

As mentioned above, the viewer can also run in server mode. A protocol allows a client program, either on the same PC or on a different one, to command via TCP/IP a camera position and attitude to a PANGU program acting as the server, and to query the resulting picture (it is also possible to query other data, such as the elevation of the camera, of the point of the surface in the line of sight, or of any other point on the surface). The PANGU viewer can also be sequenced with a flight file, which allows taking a sequence of pictures along a trajectory. Since PANGU only generates "perfect" images as seen from a pinhole camera, it is up to the user to provide a realistic model of picture noise and distortion. In addition to visual images, PANGU also provides a LIDAR model and a radar altimeter model.
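The client-server interaction just described can be sketched as follows. Everything here is an assumption for illustration: the ASCII command names, the length-prefixed reply and the default port are invented, not the real PANGU protocol, which is a binary protocol documented with the tool.

```python
import socket
import struct

class PanguClientSketch:
    """Sketch of a client for a PANGU-style image server. Command
    names, message layout and port are illustrative placeholders."""

    def __init__(self, host="localhost", port=10363):
        # 10363 is a placeholder default, not necessarily PANGU's port.
        self.sock = socket.create_connection((host, port))

    def set_viewpoint(self, x, y, z, yaw, pitch, roll):
        """Command the camera position (metres) and attitude (degrees)."""
        self._send(f"VIEWPOINT {x} {y} {z} {yaw} {pitch} {roll}")

    def get_image(self):
        """Request the rendered image for the current viewpoint."""
        self._send("GET_IMAGE")
        size = struct.unpack(">I", self._recv_exact(4))[0]
        return self._recv_exact(size)

    def _send(self, text):
        self.sock.sendall(text.encode() + b"\n")

    def _recv_exact(self, n):
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("server closed the connection")
            buf += chunk
        return buf
```

A GNC simulator would call `set_viewpoint` with the propagated camera state at each step and feed the returned image into the image processing chain, adding its own camera noise and distortion model as the paper notes.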

3. PREVIOUS APPLICATIONS

3.1. NPAL/VisNAV

NPAL (Navigation for Planetary Approach and Landing) was an ESA project led by EADS Astrium to develop a vision-based landing system based on feature point extraction and tracking. The system included a camera; an image processing device producing the tracked points, called the FEIC (Feature Extraction Integrated Circuit); and navigation software which hybridizes the tracked points with inertial data to produce an estimate of the motion with respect to the ground. Mercury, Moon and Mars missions (under parachute or with propulsion) were considered.

PANGU was introduced first as a generator of sequences of images. The images generated were to be used for developing the image processing software which extracts and tracks reference navigation points on the landscape. In order to validate the PANGU images, a comparison was made with available Apollo images. The comparison was not of a visual, subjective nature but was based instead on local radiometry: the image processing algorithm extracts information from small windows of the images (typically 7x7 pixels). Image texture is thus the main factor in the algorithm's performance. Under this criterion the PANGU images proved to be similar to the Apollo images.

PANGU was also part of the simulation environment VBNAT (Vision Based Navigation Analysis Tool). VBNAT is a Simulink™-based closed-loop, non-real-time simulation. PANGU was then used in closed-loop, non-real-time simulation with retargeting to a different landing point. Typically, terrains of 500 km per side with 12 layers were used for a model. NPAL went up to Technology Readiness Level 4 (breadboard validation in laboratory). A follow-up program named PLGTF (Precision Landing Ground Test Facility) will test the system in a representative environment: it will be carried on an unmanned helicopter for mock landing manoeuvres over the Moroccan desert. Both NPAL and PLGTF have become the precursors to the VisNAV project, whose goal is to produce an engineering qualification model of the vision-based navigation camera [1].
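The local-radiometry criterion can be illustrated with a toy per-window texture statistic: the standard deviation of intensity in each non-overlapping window. This is a hypothetical stand-in measure for illustration, not the actual NPAL metric, which is not detailed in the paper.

```python
def window_texture(image, size=7):
    """Per-window texture measure: standard deviation of pixel
    intensity over each non-overlapping size x size window of a 2-D
    image (list of rows). A simple stand-in for the local-radiometry
    statistics used to compare rendered and real images."""
    h, w = len(image), len(image[0])
    stats = []
    for top in range(0, h - size + 1, size):
        row = []
        for left in range(0, w - size + 1, size):
            pix = [image[top + i][left + j]
                   for i in range(size) for j in range(size)]
            mean = sum(pix) / len(pix)
            var = sum((p - mean) ** 2 for p in pix) / len(pix)
            row.append(var ** 0.5)
        stats.append(row)
    return stats
```

Comparing the distribution of such per-window statistics between a PANGU rendering and an Apollo photograph of similar terrain is one way to judge whether a tracker that feeds on 7x7 windows will behave comparably on both.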

3.2. LiGNC

The LiGNC study (LIDAR-based GNC for Automatic Rendezvous and Safe Landing), also led by Astrium within the frame of the Aurora program, evaluated the possibility of using a scanning LIDAR (Laser Imaging Detection and Ranging) as the main sensor to support a Mars sample return mission [2]. The emphasis was on the two critical phases: the powered landing, with generation of 3D maps for hazard avoidance purposes, and the orbital rendezvous with a passive canister containing the samples. In both cases, Monte Carlo campaigns were run on Matlab/Simulink 6-DOF simulators (a version of VBNAT in the landing case), in which PANGU provided exact LIDAR range data from the sensor to the surface (planet or canister) to a LIDAR physical model.

The PANGU LIDAR model (a standard feature of PANGU) is designed to simulate a system with a full scan of up to 1 second. Since the LIDAR sensor may have moved during that time, the client to the PANGU server has to specify the linear and angular motion of the sensor with respect to the target at the middle of the scan. In the orbital rendezvous case, in addition to the Lambertian reflectance model, corner cubes attached in a rigid lattice around the canister, with a specific reflectance model, can be implemented. The physical LIDAR model in LiGNC received the images from PANGU, generated errors and distortions, and added Lambertian reflection images and eventually corner cube images.

In the landing case, Martian terrains were also created with PANGU. On this occasion, the possibility to build a surface from MOLA (Mars Orbiter Laser Altimeter) data, and thus to display realistic large-scale features, was introduced, as was a barchan dune field model. In the end, both the rendezvous and landing GNC systems proved successful based on the simulation campaigns.
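The mid-scan motion convention can be sketched as a constant-rate extrapolation of the sensor pose to each beam's emission time. This is a simplified, yaw-only sketch under the assumption of constant rates, not the actual PANGU LIDAR model.

```python
def beam_pose(mid_pos, mid_yaw_deg, lin_vel, yaw_rate_deg_s, t_beam_s, scan_s=1.0):
    """Sensor pose at one beam's emission time, extrapolated at
    constant linear and angular rates from the pose specified at the
    middle of the scan (t = scan_s / 2)."""
    dt = t_beam_s - scan_s / 2.0     # time offset from the mid-scan epoch
    pos = tuple(p + v * dt for p, v in zip(mid_pos, lin_vel))
    yaw = mid_yaw_deg + yaw_rate_deg_s * dt
    return pos, yaw
```

Beams fired before mid-scan get a negative offset and beams fired after get a positive one, which is why the client only needs to supply the single mid-scan pose plus rates rather than a pose per beam.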

4. PANGU IMPROVEMENT AND FUTURE USES

The PANGU improvement programs already in progress (one major update and two specialization modules) are designed to address the limitations of the current version and to allow support of several of the planetary missions under development at ESA.

4.1. PANGU 3.0

The new version of PANGU (3.0), to be released in September 2009, includes the following enhancements:

- a Graphical User Interface to facilitate the creation, analysis and viewing of PANGU surface models (Fig. 4). Crater and boulder fields can be easily visualized and modified, as can individual craters and boulders. DEM and albedo maps can also be examined and modified;

Fig. 4 : Example of crater & boulder placement with the GUI

- importation of DEMs in several formats, such as PDS (MOLA) and TIFF;
- images larger than the viewing screen can be rendered;
- more realistic appearance of boulders and craters, and more realistic distributions of these features on the surface. The boulders can be textured and of different colors. Light reflection effects are also enhanced (zero-phase opposition effect);
- the capability to fill holes and sharpen features on existing DEMs (useful in cases like DEMs of the Moon's South Pole, where large tracts of terrain are concealed from view or radar);
- shadows can now be dynamic, i.e. they move over the surface, either because sufficient time elapses for the light source to move (e.g. the Sun), because the shadow is cast by a moving object (e.g. a lander or a rover), or because the landscape is itself in motion (asteroid).

Fig. 5 : Example of shadow casting by a rover

- more than one dynamic object can be controlled (e.g. a simulation with a lander and its falling heat shield);
- a simple, flat model of moving dust devils is implemented, since they might be detrimental to vision-based navigation during Mars landings;
- the sky is more realistic: various colors are available; the Earth, Moon and planets can be rendered, as can stars of various sizes and colors;
- an experimental feature of PANGU 3.0 is the inclusion of rover operations. It starts with the importation of a CAD rover model and, through some manipulations, the creation of a PANGU rover model. The client can then, through the socket interface, drive all the rover joints (wheels, mast, arm) and request position information, as well as query images from several cameras at different locations on the rover (Fig. 6).

Fig. 6 : View from different sensors on the same rover

- new terrain analysis tools are available. These will improve on the third main purpose of PANGU, after terrain creation and rendering, which is terrain analysis for mission design. The tools include a boulder coverage tool (surface coverage by boulders above a given size), an illumination tool and a communication tool (respectively Sun and Earth visibility over a given period from a point on the surface, typically for Moon South Pole missions). A visualization tool for terrain roughness will also be included.

Two types of missions will benefit most from the new version of PANGU. The first is the NEXT Lunar Lander project, where the three main functions of PANGU all come into play: South Pole models can be created with the help of the GUI and the associated tools, incorporating existing and upcoming data from Earth-based and spacecraft-carried sensors (LRO, Kaguya…). The analysis tools will help in the selection of suitable sites for landing simulation. Finally, the PANGU viewer will be used in all aspects of the navigation system validation: off-line characterization, simulation in a high-fidelity environment, and simulation with optically stimulated hardware in the loop. In another domain, the ExoMars mission will be a primary candidate to use the experimental rover simulation.
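The statistic computed by the boulder coverage tool can be approximated with a simple sketch. The assumptions here (circular boulder footprints, overlaps ignored) belong to this sketch, not necessarily to the PANGU tool itself.

```python
import math

def boulder_coverage(boulder_diameters_m, terrain_area_m2, min_diameter_m):
    """Fraction of the terrain area covered by boulders at or above a
    given diameter. Each footprint is treated as a disc; overlaps are
    ignored, so dense fields are slightly overestimated."""
    covered = sum(math.pi * (d / 2.0) ** 2
                  for d in boulder_diameters_m if d >= min_diameter_m)
    return covered / terrain_area_m2
```

A mission designer would sweep `min_diameter_m` over the sizes hazardous to a given lander and reject candidate sites where the coverage fraction exceeds the hazard budget.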

4.2. Asteroid simulation

PANGU can already generate and render closed-volume objects (as opposed to surfaces) such as planets and asteroids (see Fig. 7). However, the two main limitations of the current version of PANGU for asteroid missions are the low level of detail on the surface at close distance and the fixed lighting conditions, which are unrealistic given the often high rotation rate of asteroids. An add-on package to PANGU 3.0, to be released mid-2010, will address these issues. Rendering of surfaces and shadows will be made faster (for example by ignoring parts of the asteroid outside the sensor field of view), and adding boulders to an asteroid surface will become possible. The client software will be able to pilot via TCP/IP not only the movements of the main asteroid, but also those of accompanying bodies (a useful feature given the known asteroids with satellites of their own), and all these objects (including the spacecraft) will cast dynamic shadows on the others. This new PANGU feature could be used for the preparation of asteroid missions (such as the CHILON project [3]).

Fig. 7 : PANGU-generated asteroid

4.3. Virtual Spacecraft Image Generation

PANGU's focus has so far been on the rendering of natural objects, but rendezvous or formation flying, if visual navigation is to be employed, also requires that artificial surfaces be represented with a certain level of realism. For this, another package (Virtual Spacecraft Image Generator) will also be released in 2010. It will offer the possibility to import a CAD/3D model of a spacecraft (a STEP model, for example); provided that the different parts of the spacecraft are identified within the model file, the tool will determine for each part its visual appearance (e.g. solar panels identified as such will have the color, aspect and reflectance of solar cells). Among the possible surfaces will be solar cells, optical solar reflectors (OSR) and multi-layer insulation (MLI) (Fig. 8).
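The part-to-appearance mapping described above can be sketched as a keyword lookup from CAD part names to material properties. All names and numeric values below are illustrative placeholders, not the tool's actual material database.

```python
# Illustrative material table: part-name keyword -> (RGB color, specular reflectance).
# Placeholder values, not the Virtual Spacecraft Image Generator's database.
MATERIALS = {
    "solar_panel": ((0.05, 0.05, 0.25), 0.30),  # dark blue, moderately specular
    "osr":         ((0.80, 0.80, 0.85), 0.90),  # optical solar reflector: near-mirror
    "mli":         ((0.85, 0.70, 0.20), 0.60),  # gold multi-layer insulation
}
DEFAULT = ((0.50, 0.50, 0.50), 0.10)            # plain grey for unidentified parts

def appearance_for(part_name):
    """Pick a material from keywords found in the CAD part name,
    falling back to a neutral default when no keyword matches."""
    key = part_name.lower()
    for keyword, material in MATERIALS.items():
        if keyword in key:
            return material
    return DEFAULT
```

This is why the paper stresses that the parts must be identified within the model file: the mapping can only assign a solar-cell or OSR appearance to a part whose name (or metadata) reveals what it is.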

Fig. 8 : Spacecraft model with shadows and an OSR surface mirroring the landscape below.

The High integrated multi-range rendezvous control system & Autonomous RendezVous and Docking GNC test facility (HARVD) is an ESA project led by EADS Astrium to develop a complete vision-based, generic and autonomous rendezvous system. Its development is oriented around two test cases: Mars sample canister recovery in orbit, and docking with a non-cooperative geostationary telecom satellite. The use of PANGU is already planned for the closed-loop GNC simulations.

5. CONCLUSION

PANGU has already been used successfully at ESA for testing and validation of vision-based navigation systems. Version 3.0 will provide increased performance, a more ergonomic interface and more realistic features, as well as several analysis tools. The asteroid and virtual spacecraft packages will allow missions other than Entry, Descent & Landing to rely on virtual planetary landscapes for testing of their GNC systems. Extending the use of PANGU into mission planning could also be considered with new features such as the illumination tool or the boulder coverage tool.

6. ACKNOWLEDGEMENTS

PANGU is produced by the University of Dundee, UK under contract from the European Space Agency. PANGU is available via ESA free of charge, but without any support from either ESA or the University of Dundee, to companies and universities in Europe for space applications only.

7. REFERENCES

[1] Flandin G., Polle B., Frapard B., Vidal P., Philippe C. and Voirin T., Vision Based Navigation for Planetary Exploration, 32nd Annual AAS Guidance & Control Conference, Breckenridge, CO, January 30 to February 4, 2009.

[2] Kerambrun S., Frappard B., Silva N., Ganet M. and Cropp A., Autonomous RendezVous System: the HARVD Solution, 7th International ESA Conference on Guidance, Navigation and Control Systems, 2-5 June 2008, Tralee, Ireland.

[3] Gil-Fernandez J., Gardenas-Gorgojo R., Prieto-Llanos T., Graziano M. and Drai R., Autonomous GNC Algorithms for Rendezvous Missions to Near-Earth-Objects, AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Honolulu, Hawaii, August 18-21, 2008.
