DEVELOPMENT OF A LUNAR ASTRONAUT SPATIAL ORIENTATION AND INFORMATION SYSTEM (LASOIS)

Ron Li1, Shaojun He1, Boris Skopljak1, Jinwei Jiang2, Pingbo Tang1, Alper Yilmaz2, Martin Banks3, and Charles Oman4

1 Mapping and GIS Laboratory, The Ohio State University, Dept. of Civil & Env. Eng. & Geodetic Science, 470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210-1275
2 Photogrammetric Computer Vision Laboratory, The Ohio State University, Dept. of Civil & Env. Eng. & Geodetic Science
3 Visual Space Perception Laboratory, University of California, Berkeley
4 Man Vehicle Laboratory, Massachusetts Institute of Technology
[email protected] [email protected]
ABSTRACT

In future lunar exploration, spatial disorientation may become an increasingly critical issue for astronauts as the area of exploration increases from several kilometers in the Apollo missions to over one hundred kilometers from the main base station in future landed missions. To address this problem, the Mapping & GIS Laboratory at The Ohio State University, working with partners, is developing a Lunar Astronaut Spatial Orientation and Information System (LASOIS) designed to provide astronauts with continuous navigation updates. Due to specific environmental conditions on the lunar surface (lack of familiar landmarks, ambiguous depth and shading cues, etc.), a multiple-sensor approach is proposed to overcome challenges to astronaut orientation. In this approach, data from on-suit sensors (stereo cameras, a MEMS Inertial Measurement Unit (IMU), and foot-mounted pressure sensors) and from off-suit sensors (Lunar Reconnaissance Orbiter Camera) are integrated through an Extended Kalman Filter (EKF). The Zero Velocity Update (ZUPT) technique is used to compensate for distance errors, while data from the vision sensors are used to compensate for IMU gyro drift. The spatial information generated by this integrated-data approach is provided to astronauts through a wrist-mounted OLED (Organic Light-Emitting Diode) interface. Extensive field tests incorporating all of the above-mentioned sensors were performed in a lunar-like environment at Moses Lake, WA. When compared to GPS-derived ground truth, the trajectory generated by the developed system showed a closure error of 6 m for a total traverse of 107 m (a relative closure error of 5.6%). By integrating additional sensor systems (tactical-grade IMU, radio-frequency identification beacons, star tracker) and improving data-processing algorithms, it is expected that this system will ultimately achieve a relative closure error of less than 2%.
INTRODUCTION

In previous human space missions, astronauts experienced occasional disorientation in conditions of reduced gravity (Oman, 2007). Apollo mission astronauts had difficulty navigating on the lunar surface due to a lack of familiar landmarks, loss of aerial perspective, and the fact that the sense of balance is affected by ambiguous depth cues (Oman, 2007; Mellberg, 1997). Spatial disorientation limited astronauts' capabilities for completing scientific tasks and exposed them to serious risks. Therefore, it is highly desirable to develop technologies to enhance the spatial-orientation capabilities of astronauts operating on the lunar surface. The aim of this research is to reduce astronaut disorientation by providing integrated spatial orientation and navigation information in both a global and a local framework (Li et al., 2009). Due to the specific environmental conditions on the lunar surface (absence of a regular dipolar magnetic field, low gravity, no Earth-like atmosphere, and lack of a sufficient number of artificial orbital satellites for a full GPS constellation), developing a navigation system poses a great challenge. Sensors commonly used on Earth for human or robotic navigation are non-existent or ineffective in the lunar environment. The lunar surface can be a hostile environment for most sensor systems due to large variations in surface temperature, lunar dust, and space radiation. Therefore, the reliability of the navigation system is critical, and the failure of any part of the system
should not cause complete system failure. If some components in the system do fail, back-up solutions should exist such that astronauts are provided with uninterrupted navigation information to enable them to safely return to the lunar outpost. Supported by the National Space Biomedical Research Institute (NSBRI) through a NASA grant, the Mapping and GIS Laboratory at The Ohio State University, working with partners at UC Berkeley, MIT and the NASA Glenn Research Center, is currently developing a Lunar Astronaut Spatial Orientation and Information System (LASOIS) to provide continuous spatial orientation and navigation information to astronauts and thereby reduce spatial disorientation (Li et al., 2010). The LASOIS system includes a hardware component (integrated sensor network) for data acquisition and a software component integrating multiple algorithms for data processing, integration, and display. This system integrates multiple sensors, some of which are multi-functional, in order to enable multiple back-up navigation solutions and to improve the reliability of this system in the hostile, energy-constrained environment found on the lunar surface.
LITERATURE REVIEW OF GENERAL AND LUNAR ASTRONAUT NAVIGATION

In three of the six previous Apollo landing missions, a navigation system was developed for the LRV (Lunar Roving Vehicle). This system consisted mainly of a directional gyro unit, a wheel pulse odometer, and a sun-shadow device. Astronauts used the sun-shadow device to obtain the initial azimuth. During the rovers' exploration traverses, the gyro and wheel odometer worked together to obtain the position and attitude of the rover. The achieved navigation accuracy was 350 m in maximum position error and 200 m in closure error (Smith and Mastin, 1973). This navigation system worked only for rovers; it could not be applied to astronauts' EVA suits. Since that time, a variety of sensors have been examined to provide pedestrian navigation solutions when GPS signals are not available (Rantakokko et al., 2007). In order to overcome the limitations of using a single type of sensor, several research studies have investigated approaches for integrating multiple sensors to deliver more accurate navigation information. Inertial navigation systems (INS) have been used extensively in navigation for a long time (Smith, 1986). As their price and mass have decreased, inertial measurement units (IMUs) have become an increasingly popular tool, particularly as an addition to GPS in urban canyons or for use in places where one cannot completely rely on GNSS (Rantakokko et al., 2007). Researchers have been investigating the use of IMUs for pedestrian navigation in places where GPS signals are denied (e.g., indoors, tunnels, urban canyons, forests) (Grejner-Brzezinska et al., 2001; Faulkner and Chestnut, 2008). In these studies, results from the IMU as the primary sensor were aided by other methods and instruments. ZUPT (Zero Velocity Update) is often used to overcome bias in the IMU signal (Ojeda and Borenstein, 2007; Faulkner and Chestnut, 2008; Godha and Lachapelle, 2008; Foxlin, 2005). Some studies have focused on instruments that can help remove the IMU bias. The most commonly used instruments include magnetometers or digital compasses (Grejner-Brzezinska et al., 2008; Foxlin, 2005) and vision sensors (Veth and Raquet, 2006). On the lunar surface a magnetometer is not applicable because there is no dipolar magnetic field (Oman, 2007), so in this work heading information is obtained from computer vision cameras. Other instruments investigated include radio-frequency identification (RFID) (Pradhan et al., 2009), barometers (Grejner-Brzezinska et al., 2008), and even the human body itself (Moafipoor et al., 2008). In the fields of computer vision and photogrammetry, a number of studies have explored the use of vision data for robotic and human navigation. Implementation of vision-based navigation systems has been studied in a number of research projects, including Bayoud (2006), Jirawimut and Prakoonwit (2003), and Matthies et al. (2007). Visual odometry and bundle-adjustment technologies have been developed and applied by the Mapping and GIS Laboratory to overcome wheel slippage, azimuthal angle drift, and other navigation errors during the Spirit and Opportunity rovers' traverses across the Martian surface (Li et al., 2005). Developing a robust navigation solution for the tough working environment on the lunar surface is challenging. The lunar surface environment lacks resources readily available for fixing problems in a sensor network. In addition, exploration traverses can be several kilometers long in future landed lunar missions.
Hence, integrating more types of sensors and characterizing their performance is necessary for obtaining a flexible and robust astronaut navigation system for future lunar missions. Building on previous studies, this research aims to identify related sensing technologies and data-integration techniques that enable a more precise and robust navigation system and thereby significantly improve the safety of lunar exploration for astronauts. The resulting LASOIS system integrates multiple sensors and can operate in several navigation scenarios, depending on the EVA mission and the astronauts' objectives.
THE LASOIS APPROACH

The integrated sensor network for LASOIS incorporates data from multiple types of sensors, including orbital satellite sensors, on-suit sensors, and ground-based sensors (Figure 1). Orbital sensors, such as the LROC (Lunar Reconnaissance Orbiter Camera), capture high-resolution imagery for extensive regions of the lunar surface (particularly areas for landings and traverses). On-suit sensors of our current prototype system include an IMU (Inertial Measurement Unit) mounted on the heel of an astronaut boot (right or left), one step sensor mounted on the bottom of the same boot (Figure 2), and a pair of stereo vision sensors. The IMU measures both the acceleration and the angular change rate of the astronaut at a high frequency (150 Hz). The step sensor provides step pressure information, counts the astronaut's strides, and indicates the zero-velocity phases. The stereo vision sensors capture stereo images of the landing site that are used to compute a local DEM (which can be matched with the orbital DEM) as well as the astronauts' trajectories by tracking ground features. The ground-based sensor network can be extended to include multiple beacon transmitters on the ground as well as receivers mounted on the astronauts' suits. Beacon-based sensing can provide astronauts with absolute position information at a lower frequency than that of the IMU and vision sensors. As a safety precaution, a star-tracking system is being investigated as a tool that astronauts could use to return to the lunar outpost by determining azimuth from star observations in emergency situations.

Figure 1. Conceptualization of the integrated sensor network for LASOIS.
Initial Orientation and Localization

Because no dipolar magnetic field exists on the Moon, capturing the initial orientation information for astronauts from the direction of magnetic force is not possible. In the Apollo 15, 16, and 17 missions, a sun-shadow device mounted on the interface of the LRV (Lunar Roving Vehicle) was used to determine the initial orientation (Smith and Mastin, 1973). However, this method requires the LRV to be parked, completely stationary, and facing the sun, which is impractical for astronauts operating on foot on the lunar surface. The LASOIS system instead incorporates a method for initial orientation and localization through ground-to-orbital DEM matching. The on-suit stereo cameras can obtain panoramic images at high resolution, from which a high-resolution DEM (Digital Elevation Model) can be generated. This high-resolution ground-based DEM is defined in a local coordinate system (such as one based on the position of the first stereo image pair). On this DEM, the relative positions of distant ground features, such as craters and mountain peaks, can be computed. Lunar orbiters such as the LRO, on the other hand, have already produced a high-resolution, global DEM of the lunar surface that is geo-referenced in the lunar body-fixed coordinate system. By matching terrain features and landmarks seen in both the ground and orbital DEMs, the ground DEM can be registered to the global DEM, and from this registration the initial orientation and position of an astronaut can be determined.
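As a rough illustration of how such a DEM-to-DEM registration could be set up, the sketch below exhaustively searches translation offsets that maximize the normalized cross-correlation between the small ground DEM and a window of the orbital DEM. This is a simplified stand-in for the actual LASOIS matching algorithm (rotation search and feature-based landmark matching are omitted), and all function and variable names are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized elevation patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register_ground_dem(orbital_dem, ground_dem, search_radius=50):
    """Exhaustively search translation offsets (in DEM cells) that best align
    the small ground-based DEM with the large geo-referenced orbital DEM.
    A full matcher would also search azimuth; that is omitted here for brevity."""
    gh, gw = ground_dem.shape
    oh, ow = orbital_dem.shape
    best = (-1.0, 0, 0)
    # Center the search window in the orbital DEM (e.g., around a predicted landing site).
    r0, c0 = (oh - gh) // 2, (ow - gw) // 2
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + gh > oh or c + gw > ow:
                continue
            score = ncc(orbital_dem[r:r + gh, c:c + gw], ground_dem)
            if score > best[0]:
                best = (score, r, c)
    return best  # (correlation score, row offset, column offset)
```

In practice the azimuth of the ground DEM would also have to be recovered, for example from matched landmark pairs, since the initial heading is exactly what the registration is intended to provide.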
IMU and Step Sensor

After the initial orientation and localization of the astronaut, the on-suit sensors continuously collect data that are integrated to generate navigation information. The IMU, which is the primary on-suit sensor, is composed of gyros and accelerometers that measure the angular change rate and acceleration of the part of the body on which the device is mounted. The major problem in using an IMU, especially a low-grade IMU, is that it suffers from serious drift in navigation. In order to make a low-grade IMU feasible for navigation, ZUPT is employed to remove the bias in the IMU signal (Ojeda and Borenstein, 2007). If an instrument or a method can detect the stationary phases during which the part of the body carrying the IMU is not moving, the knowledge that the velocity of the foot (and hence the IMU) is zero during these periods can be used to correct the drift in the IMU signal.
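To make the idea concrete, here is a minimal, one-axis sketch (hypothetical names, gravity assumed already removed) of how detected stationary phases can be used to reset the drift that accumulates when accelerations are integrated to velocity. It illustrates the principle only; the actual LASOIS implementation uses the error-state EKF described below.

```python
import numpy as np

def zupt_velocity(accel, stationary, dt):
    """Integrate foot-mounted accelerations to velocity, zeroing the
    accumulated drift whenever the foot is flagged as stationary.
    accel:      (N,) accelerations along one axis, gravity removed
    stationary: (N,) boolean flags from the step (pressure) sensor
    dt:         sample interval in seconds (e.g., 1/150 s for a 150 Hz IMU)"""
    v = np.zeros_like(accel, dtype=float)
    for k in range(1, len(accel)):
        v[k] = v[k - 1] + accel[k] * dt
        if stationary[k]:
            # The true velocity is zero here, so whatever remains is sensor drift.
            v[k] = 0.0
    return v
```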
Figure 2. Placement of the IMU and step sensor.
Based on the ZUPT concept, we used pressure sensors to accurately detect the zero velocity phases. Various types of pressure sensors have been developed for mounting on shoe soles for use in medical applications (Sazonov et al., 2005). Previous studies have used step sensors to try to detect foot contact intervals in order to estimate different locomotion models and to determine the best parameters for step length determination as an input to a neural network (Grejner-Brzezinska et al., 2008). The ZUPT method has been applied to integrate data from the IMU with data from the pressure sensors mounted in the shoe sole under the subject's heel. When pressure is applied to the pressure sensor, a peak signal is detected and the ZUPT period is recognized at that point. This is the calibration point since it is known that the observer’s foot is stationary (at the point of highest pressure). Pressure information depends, of course, on the sensitivity of the pressure sensor used. Based on the experimental analysis, a threshold around the peak value has been determined. The configuration of the sensor hardware is
shown in Figure 2. Figure 3 shows the overall concept of using an EKF (Extended Kalman Filter) for implementing ZUPT. The system implements ZUPT through an EKF whose state has 15 elements: δP (position errors), δψ (attitude errors), δv (velocity errors), δω (gyro biases), and δa (accelerometer biases). The three additional observations in the EKF are the horizontal and vertical velocity components when the foot is stationary, i.e., when the speed is zero in all three directions. By synchronizing the IMU and step sensor data, we obtain zero-velocity information from the step pressure sensors, which is then used to update the IMU information. The EKF estimates the biases in the accelerometer and gyro signals, and these biases are then removed from the new incoming signals. In this way, the velocity, attitude, and position integrated from the incoming signal become essentially bias-free. When the foot is moving, the bias calculated by the latest EKF update is applied to the signal in the navigation equation.
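A sketch of how the stance detection and the zero-velocity measurement update described above might look in code is given below. The pressure threshold, state ordering, and noise values are illustrative assumptions, not the tuned values used in LASOIS.

```python
import numpy as np

def detect_zupt(pressure, frac=0.8):
    """Flag samples as stationary when heel pressure is within `frac` of the
    peak value observed so far (a simple stand-in for the experimentally
    tuned threshold around the pressure peak described in the text)."""
    peak = np.maximum.accumulate(pressure)
    return pressure >= frac * peak

# Zero-velocity observation for a 15-element error-state EKF.
# Assumed state order: position errors 0-2, velocity errors 3-5,
# attitude errors 6-8, gyro biases 9-11, accelerometer biases 12-14.
H_zupt = np.zeros((3, 15))
H_zupt[:, 3:6] = np.eye(3)          # observe the three velocity error states
R_zupt = (0.01 ** 2) * np.eye(3)    # assumed measurement noise, (m/s)^2

def zupt_update(x, P, v_pred):
    """EKF measurement update with the zero-velocity pseudo-measurement:
    since the true velocity is zero, the predicted velocity is itself a
    direct observation of the velocity error states."""
    z = v_pred                        # observed velocity error
    y = z - H_zupt @ x                # innovation
    S = H_zupt @ P @ H_zupt.T + R_zupt
    K = P @ H_zupt.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(15) - K @ H_zupt) @ P
    return x, P
```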
The error states estimated by the EKF ($\delta\omega$, $\delta a$, $\delta v$, $\delta\psi$, $\delta P$) are fed back into the navigation equations at each epoch:

Revised turn rate: $\omega'_k = \omega_k - \delta\omega$
Revised acceleration: $a'_k = a_k - \delta a$
Attitude update: $C^n_{b,k} = f(C^n_{b,k-1}, \omega'_k, \delta\psi)$
Velocity update: $V_k = V_{k-1} + C^n_{b,k} a'_k \Delta t + \delta v$
Position update: $P_k = P_{k-1} + V_k \Delta t + \delta P$

Figure 3. Diagram of the integration of data from the IMU and step sensor through ZUPT.

The notation used in the diagram in Figure 3 is:
$\omega_k$: turn rate at time k
$a_k$: acceleration at time k
$\Delta t$: sample interval
$C^n_{b,k}$: attitude matrix from the body frame to the navigation frame at time k
$V_k$: velocity at time k
$P_k$: position at time k
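Read as code, these updates amount to a simple strapdown mechanization with the EKF-estimated errors fed back at every step. The sketch below mirrors the equations one-to-one under a small-angle assumption for the attitude correction; gravity compensation is omitted for brevity and all names are illustrative.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector, so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def mechanize(C, V, P, omega, a, dt, d_omega, d_a, d_v, d_psi, d_P):
    """One epoch of the Figure 3 mechanization with EKF feedback.
    C: body-to-navigation attitude matrix; V, P: velocity and position;
    omega, a: raw gyro and accelerometer samples;
    d_*: EKF-estimated gyro bias, accel bias, velocity, attitude and position errors."""
    omega_c = omega - d_omega                    # revised turn rate
    a_c = a - d_a                                # revised acceleration
    # Attitude update: propagate with the corrected turn rate (first order),
    # then remove the estimated attitude error (small-angle approximation).
    C = C @ (np.eye(3) + skew(omega_c) * dt)
    C = (np.eye(3) - skew(d_psi)) @ C
    V = V + C @ a_c * dt + d_v                   # velocity update (gravity term omitted)
    P = P + V * dt + d_P                         # position update
    return C, V, P
```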
Figure 4. Velocity over 17 seconds from IMU signals with (right) and without (left) ZUPTs.
Figure 4 shows the result of IMU signal integration using ZUPT. In the graph on the left, the velocity reconstructed from an industrial-grade IMU without ZUPT can be seen to drift in all three directions; after only 17 seconds, the error exceeds 4 m/s. With the ZUPT technique over the same time period, the velocity error is reduced to 0.1 m/s. The ZUPT-based approach removes the error in velocity and in two of the attitude angles, roll and pitch. However, it cannot remove the error in yaw, which means that drift still exists in the heading direction of traverses reconstructed with ZUPTs (El-Sheimy, 2004). In order to remove this remaining heading error, additional instruments are required. Most dead-reckoning methods use magnetometers, radio signals (AOA, angle of arrival), or GPS as external observations to the Inertial Navigation System (INS) in order to compensate for the gyro bias drift in heading. Given the specific lunar environmental conditions for astronaut navigation, a method using visual feature tracking has been developed in this research. This method is described in the following section.
Vision Sensors

Two cameras installed on a stereo bar were calibrated for use as a stereo camera system that continuously captures features in the environment. The indoor calibration provides the geometric relationships between the navigation cameras, which are later used for geo-referencing of astronaut positions. The tracked features are used as landmarks for tracking the location of the astronaut. If at least three features are captured in each image of the stereo pair, the 3-D locations of these features can be determined by intersection and then used to resect the location of the astronaut. Prior to visual navigation, an on-site calibration of the stereo cameras is performed using metric planes and orthogonal vanishing points (Li et al., 2010). For feature detection, we apply a scale-invariant feature transform (SIFT) to each stereo image pair, matching the images using a multidimensional appearance model. Figure 5 shows this process performed on data collected during field tests in a lunar-like environment at Moses Lake, WA. Matched features are shown in the same color in the left and right images. The appearance model generated for each feature considers both color and texture in a window centered at the feature location. To reduce the computational complexity, we generate orientation histograms computed from gradient magnitude and direction at multiple image scales. This method accommodates changes in scale and orientation and provides robust feature matches (Lowe, 2004).
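For illustration, a minimal sketch of SIFT detection and ratio-test matching of the kind described above, written against the OpenCV Python API (assuming opencv-python 4.4 or later, where SIFT is available in the main module). It is not the authors' implementation; in particular, the grid-based thinning of features visible in Figure 5 is omitted, and the function name and parameters are hypothetical.

```python
import cv2

def match_stereo_features(left_img, right_img, ratio=0.75):
    """Detect SIFT keypoints in both grayscale images of a stereo pair and
    keep correspondences that pass Lowe's ratio test (Lowe, 2004)."""
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left_img, None)
    kp_r, des_r = sift.detectAndCompute(right_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des_l, des_r, k=2)
    good = [pair[0] for pair in candidates
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    # Return matched point coordinates in the left and right images.
    pts_l = [kp_l[m.queryIdx].pt for m in good]
    pts_r = [kp_r[m.trainIdx].pt for m in good]
    return pts_l, pts_r
```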
To avoid redundant feature matching at each image acquisition, a visual tracking of the features is performed. This tracking process keeps the number of potential mismatches to a minimum and further reduces the computational complexity. Due to its simplicity (a two-frame differential method for optical flow estimation), we perform this visual tracking based on an estimation of the optical flow of the feature from a window centered on the feature itself. The motion of the feature is iteratively estimated by minimizing any differences in appearance between the feature windows acquired in the past and those from newly acquired images (Lucas and Kanade, 1981; Shi and Tomasi, 1994). The quality of the optical flow is evaluated by checking the epipolar line in the stereo images and by performing an affine transformation check between two consecutive frames. Using the set of matched features and the results of the navigation camera pair calibration, we can estimate the exterior orientation of the navigation camera stereo image pair using photogrammetric techniques, which exploit a collinearity equation framework. This approach provides simultaneous localization of the astronaut and mapping of 3-D features, which are registered to orbital- and ground-based imagery for geo-referencing the astronaut with respect to precisely known locations such as a lander, an outpost, or a beacon.

Figure 5. Feature matching on a stereo image pair. The grid on the right image is used for uniform feature distribution.
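A possible shape for the per-frame tracking step is sketched below, using the pyramidal Lucas-Kanade tracker available in OpenCV and keeping only points whose flow solution converged. The epipolar and affine consistency checks mentioned above are not reproduced here, and the window size and pyramid depth are illustrative.

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray, prev_pts):
    """Track features from one frame to the next using pyramidal
    Lucas-Kanade optical flow (Lucas and Kanade, 1981)."""
    prev_pts = np.asarray(prev_pts, dtype=np.float32).reshape(-1, 1, 2)
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1          # keep only successfully tracked points
    return prev_pts[ok].reshape(-1, 2), next_pts[ok].reshape(-1, 2)
```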
Integration of IMU and Vision Sensor Data

The bias of the IMU signal in azimuth (the heading direction) is not removed by ZUPTs, and this bias causes the reconstructed position of the astronaut to deviate increasingly from the real position in the horizontal plane. In this research, heading information from the vision sensors is used to address this issue. Because visual odometry reconstructs the camera position at each frame, the heading information (the slope "a" of a straight line representing the vertical projection of the traverse onto the horizontal XY plane) at frame t can be derived using Equation 1 from the camera positions $(x_{t-1}, y_{t-1})$ and $(x_{t+1}, y_{t+1})$ in the traverse:
$a = \dfrac{y_{t+1} - y_{t-1}}{x_{t+1} - x_{t-1}}$.    (1)
However, naïve application of this equation cannot generate an accurate heading direction for the astronaut because the result is very sensitive to local data noise. To overcome this limitation, we instead derive the heading by fitting a straight line to n consecutive points, setting n equal to the average number of image frames during one step. After the heading information is derived from the camera imagery, it is employed as an additional observation to refine the navigation information from the IMU. Mathematically, the EKF for ZUPT described in Figure 3 is extended to include the heading information in the observation model. The 15 states in the EKF remain the same, but the number of additional observations increases from three to four: the three velocities plus the heading information. In this way, the azimuth of the IMU is constrained in addition to the three velocities and the two attitude angles, pitch and roll.

Figure 6. LASOIS conceptual flowchart.
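A sketch of the line-fit heading derivation described above: the last n visual-odometry positions are projected onto the XY plane, a straight line is fitted through them, and the line's direction is converted to an azimuth that could then be appended as the fourth EKF observation. The principal-component fit (which, unlike the raw slope, also handles near-vertical lines) and all names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def heading_from_vo(xy, n=10):
    """Estimate the heading (azimuth of motion in the XY plane) from the last n
    visual-odometry camera positions by fitting a straight line through them."""
    pts = np.asarray(xy[-n:], dtype=float)
    centered = pts - pts.mean(axis=0)
    # Direction of largest spread = direction of the fitted line.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Resolve the 180-degree ambiguity using the net displacement over the window.
    if np.dot(direction, pts[-1] - pts[0]) < 0:
        direction = -direction
    return np.arctan2(direction[1], direction[0])   # heading angle in radians
```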
Information Delivery

Displaying navigation information to the astronaut in real time is a very challenging task due to the special visual cues on the lunar surface (Ehrlich et al., 1998). During field tests, information was delivered to the field-test observer on an OLED (Organic Light-Emitting Diode) display produced by Honeywell, Inc. The physical size of the display is 4 x 2.5 inches. The display is connected via a VGA cable to a processor that stores and processes the data. Figure 7 shows a test subject checking navigation information as he performs a traverse in a lunar-like environment at the sand dunes of Moses Lake, WA.
EXPERIMENTAL TESTING RESULTS
Figure 7. Observer checking the navigation data display while crossing sand dunes during field tests.
To test the performance of the LASOIS prototype, a field experiment was conducted at Moses Lake, WA during the summer of 2009. The sand dunes near Moses Lake present a lunar-like environment. In this experiment, GeoEye-1 images were used to simulate the LROC imagery needed for generating an orbital DEM. The developed orbital-to-ground DEM matching algorithm matched a DEM generated from the ground-based on-suit vision-sensor data to a DEM generated from the GeoEye-1 orbital imagery to identify the initial position and orientation of the test subject. Results of single-source and integrated navigation from the main sensors (step sensor, IMU, and stereo cameras) are shown in Figures 8, 9 and 10. Table 1 shows statistics on the reconstructed trajectories. Overall, the approach integrating data from the IMU, step sensor, and vision sensors achieved a relative closure error of 5.6% over a trajectory of 107 m (shown in Figure 10). The 3-D error is not shown here, but it is on the order of 2-3% of the overall trajectory.
Figure 8. Trajectory comparison: trajectory constructed using the IMU and step sensor integration for ZUPT (solid cyan line) compared to ground truth (dotted red line).
Figure 9. Trajectory comparison: trajectory reconstructed using stereo cameras (solid green line) compared to ground truth (dotted red line).
Figure 10. Trajectory comparison: trajectory reconstructed using the IMU and step sensor integration (ZUPT), integrated with the stereo cameras (solid blue line), compared to ground truth (dotted red line).

Trajectory for 107 m            IMU      Vision   Integration
Average Distance Error (m)      1.17     0.97     0.78
Closure Error (m)               11       9        6
Relative Closure Error          10.2%    8.4%     5.6%

Table 1. Statistical information for trajectories reconstructed using different integration methods. IMU represents the trajectory from the combined IMU and step sensor solution; Vision represents the trajectory reconstructed by the stereo cameras; and Integration represents the integrated result.
As expected, the integrated navigation approach gives the best performance results, proving that a combination of low-cost sensors can result in a satisfactory navigation solution. In the future, a better IMU will be used and the vision-based localization algorithm will be improved. A more accurate calibration of the low-grade IMU and vision sensors is expected to produce more accurate trajectory results.
CONCLUSIONS

In this research, we have developed an approach for integrating data from multiple orbital, ground, and on-suit sensors to improve the accuracy, flexibility, and robustness of an astronaut navigation system for lunar exploration, the LASOIS. Initial results show that the positioning accuracy improves drastically when an EKF is used to integrate data from the different sensors. Specifically, we found that data from a low-cost IMU generally have larger errors in the yaw (heading) direction (these errors depend strongly on the grade of the IMU and on gyro bias), and that this limitation can be overcome by integrating heading information from other sensors (in this case, a pair of stereo cameras). As the experimental results show, integration of the IMU with other sensors (such as pressure sensors for ZUPT and stereo cameras for attitude determination) delivers navigation results with an overall relative closure error of 5.6%, whereas using the IMU alone results in a much larger closure error for the rectangular traverse.
FUTURE WORK

Several approaches for improving the current prototype are under way, involving both the hardware system and the data-processing strategies. On the hardware side, a tactical-grade IMU (Honeywell HG1900) with more stable and reliable gyro information will be integrated into the system, and a ground beacon system will be implemented for broadcasting the locations of geo-referenced beacons to local regions. A lunar-feasible radio navigation input will be implemented to augment the inertial navigation system. In addition, we are investigating the feasibility of adapting the vision sensors as a star tracker by rotating them upward and developing algorithms for orienting and locating the astronaut by star tracking if all other sensors fail. All of these new sensors, and the new ways of using existing sensors, can further improve the robustness and performance of the system. Another aspect of hardware improvement is to miniaturize the system by implementing a single-board computer for processing the sensor information in order to provide a real-time navigation package for astronauts. On the software side, we are investigating how to effectively use the geometric constraints that exist in the sensor system to improve the computational efficiency of the data-processing algorithms. Examples are an epipolar geometry constraint
for the cameras, as well as more accurate calibration procedures for determining the boresight and lever-arm alignments. With these hardware and software improvements, we expect to achieve an overall relative closure error of 2% or better for long traverses in future landed lunar missions. More experiments will be conducted to further characterize the prototype system. First, the performance of the LASOIS system under different types of astronaut locomotion will be analyzed. Second, we plan to conduct a large number of long-distance traverses, using more challenging test cases, to further understand the systematic behavior of the integrated sensor network.
ACKNOWLEDGEMENTS

This work is supported by the National Space Biomedical Research Institute through NASA NCC 9-58.
REFERENCES

Bayoud, F., 2006. Development of a robotic mobile mapping system by vision-aided inertial navigation: A geomatics approach, PhD Thesis, École Polytechnique Fédérale de Lausanne.
Ehrlich, S.M., D.M. Beck, J.A. Crowell, T.C.A. Freeman, and M.S. Banks, 1998. Depth information and perceived self-motion during simulated gaze rotations, Vision Research, 38: 3129-3145.
El-Sheimy, N., 2004. Inertial Techniques and INS/DGPS Integration, ENGO 623 Course Notes, Department of Geomatics Engineering, University of Calgary, Canada.
Faulkner, T. and S. Chestnut, 2008. Impact of Rapid Temperature Change on Firefighter Tracking in GPS-denied Environments Using Inexpensive MEMS IMUs, www.geonav.ensco.com/reports/ION_NTM2008.pdf (last date accessed: 10 February 2010).
Foxlin, E., 2005. Pedestrian tracking with shoe-mounted inertial sensors, IEEE Computer Graphics and Applications, 25(6): 38-46.
Godha, S. and G. Lachapelle, 2008. Foot mounted inertial system for pedestrian navigation, Measurement Science and Technology, 19: 075202 (9pp).
Grejner-Brzezinska, D.A., Y. Yi, and C. Toth, 2001. Bridging GPS Gaps in Urban Canyons: Benefits of ZUPT, Navigation, 48(4): 217-225.
Grejner-Brzezinska, D.A., C. Toth, and S. Moafipoor, 2008. Performance Assessment of a Multi-sensor Personal Navigator Supported by an Adaptive Knowledge Based System, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B5.
Jirawimut, R. and S. Prakoonwit, 2003. Visual Odometer for Pedestrian Navigation, IEEE Transactions on Instrumentation and Measurement, 52(4): 1166-1173.
Li, R., S.W. Squyres, R.E. Arvidson, J. Bell, L. Crumpler, D.J. Des Marais, K. Di, M. Golombek, J. Grant, J. Guinn, R. Greeley, R.L. Kirk, M. Maimone, L.H. Matthies, M. Malin, T. Parker, M. Sims, L.A. Soderblom, J. Wang, W.A. Watters, P. Whelley, and F. Xu, 2005. Initial Results of Rover Localization and Topographic Mapping for the 2003 Mars Exploration Rover Mission, Photogrammetric Engineering & Remote Sensing, 71(10): 1129-1142.
Li, R., S. He, P. Tang, B. Skopljak, A. Yilmaz, J. Jiang, M.S. Banks, and C. Oman, 2010. Development of the Astronaut Navigation and Information System, Proceedings of the 41st Lunar and Planetary Science Conference, The Woodlands, TX, March 1-5.
Li, R., B. Wu, S. He, B. Skopljak, A. Yilmaz, J. Jiang, M.S. Banks, C. Oman, K.B. Bhasin, J.D. Warner, and E.J. Knoblock, 2009. LASOIS: Enhancing the Spatial Orientation Capabilities of Astronauts on the Lunar Surface, Proceedings of the 40th Lunar and Planetary Science Conference, The Woodlands, TX, March 23-27.
Li, R., B. Wu, S. He, B. Skopljak, A. Yilmaz, J. Jiang, M.S. Banks, C. Oman, K.B. Bhasin, J.D. Warner, and E.J. Knoblock, 2010. Reducing Spatial Disorientation Risks in Manned Missions, NASA Human Research Program Investigators' Workshop, Abstract.
Lowe, D., 2004. Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60(2): 91-110.
Lucas, B. and T. Kanade, 1981. An Iterative Image Registration Technique with an Application to Stereo Vision, International Joint Conference on Artificial Intelligence, pp. 674-679.
Matthies, L., M. Maimone, A. Johnson, Y. Cheng, R. Willson, C. Villalpando, S. Goldberg, A. Huertas, A. Stein, and A. Angelova, 2007. Computer Vision on Mars, International Journal of Computer Vision, 75(1): 67-92.
Mellberg, W.F., 1997. Moon Missions, 114-116.
Moafipoor, S., D. Grejner-Brzezinska, and C. Toth, 2008. A Fuzzy Dead Reckoning Algorithm for a Personal Navigator, Navigation, ION Journal, 55(4): 241-255.
Oman, C.M., 2007. Spatial Processing in Navigation, Imagery and Perception, 209-247.
Ojeda, L. and J. Borenstein, 2007. Non-GPS Navigation for Security Personnel and First Responders, Journal of Navigation, 60: 391-407.
Pradhan, A., E. Ergen, and B. Akinci, 2009. Technological Assessment of Radio Frequency Identification Technology for Indoor Localization, Journal of Computing in Civil Engineering, ASCE, 23(4): 230-238.
Rantakokko, J., P. Handel, F. Eklof, B. Boberg, M. Junered, D. Akos, I. Skog, H. Bohlin, F. Neregard, F. Hoffman, D. Andersson, M. Jansson, and P. Stenumgaard, 2007. Positioning of emergency personnel in rescue operations, Technical Report TRITA-EE:037, Stockholm.
Sazonov, E.S., T. Bumpus, S. Zeigler, and S. Marocco, 2005. Classification of plantar pressure and heel acceleration patterns using neural networks, Proceedings of the IEEE International Joint Conference on Neural Networks, Vol. 5, pp. 3007-3010.
Shi, J. and C. Tomasi, 1994. Good Features to Track, IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600.
Smith, E. and W. Mastin, 1973. Lunar Roving Vehicle Navigation System Performance Review, NASA Technical Note TN D-7469, http://history.nasa.gov/alsj/19740003321_1974003321.pdf (last date accessed: 10 February 2010).
Smith, S.G., 1986. Developments in inertial navigation, Journal of Navigation, 39: 401-414.
Veth, M. and J. Raquet, 2006. Fusion of Low-Cost Imaging and Inertial Sensors for Navigation, Proceedings of ION GNSS-2006, Fort Worth, TX.