S.Y. Harmon, “Knowledge Based Position Location on Mobile Robots,” Proc. 1985 IEEE International Conf. on Industrial Electronics, Controls, and Instrumentation, San Francisco, CA, 18-22 November 1985, pp174-179.
KNOWLEDGE-BASED POSITION LOCATION ON MOBILE ROBOTS

S.Y. Harmon
Code 442, Naval Ocean Systems Center
San Diego, CA 92152
ABSTRACT This paper discusses the application of techniques resulting from artificial intelligence research to the problem of position location for mobile robots in both indoor and outdoor environments. Knowledge-based techniques have been demonstrated to be most useful for position location by matching sensor perceptions of local features with a map of the region. In addition, knowledge-based techniques have shown promise for merging several independent estimates of absolute position.
INTRODUCTION The basic purpose of every mobile robot is to transit from one place to another. The process of navigating to the goal requires two functions: position location and route finding. Developments in artificial intelligence have influenced the design and implementation of both of these functions on mobile robots. Clearly, techniques for knowledge representation, automated planning and sensor data understanding impact path finding. However, the relation of knowledge-based techniques to the position location problem is less obvious. This paper considers various knowledge-based concepts that assist the position location of mobile robots in both indoor and outdoor environments.

The application of computing-intensive knowledge-based techniques to the problem of position location might seem like overkill. After all, there are several electronic navigation aids that do not require the complexity of knowledge-based techniques [1, 2], and several techniques have been developed for automated vehicle location that could be applied to mobile robots [3]. In fact, position location is one of the most important and difficult problems in mobile robotics today. The accuracy of position measurements greatly affects the complexity of the robot's control strategy [4]. Furthermore, although many sensors exist which provide direct access to position information, each of these has limitations that make it unsuitable for a large range of situations. Knowledge-based position location techniques simply add to an expanding bag of tricks that increase mobile robot capabilities by guaranteeing continuous access to accurate position information.

A mobile robot can determine its position either by monitoring its motion after leaving a known location or by locating itself relative to external reference points with known positions. Vehicle motion tracking includes both dead reckoning systems and inertial navigation systems. External references can be either intentional or unintentional navigation references. All of these techniques have been used or proposed for mobile robot position location.
By far the largest number of past and present mobile robots locate their positions using various vehicle motion and orientation sensors for dead reckoning [5-12]. Early simulation studies for a Martian rover determined that dead reckoning navigation was vastly superior to inertial navigation and was adequate for effective planetary rover guidance over many kilometers of travel if daily position updates were available from an independent source [11]. In spite of the modern alternatives, dead reckoning navigation has recently been recommended for a mobile robot for military applications [13]. However, only one of the mobile robot efforts cited above explored knowledge-based techniques in support of dead reckoning position location. In this case, stereovision obtained information about the robot's motion [9]. These studies indicate that visual navigation is presently quite fragile [9]. In one series of experiments, a stereovision system was found to estimate robot translation poorly (and always short) and to estimate rotation most accurately [14]. While this work shows the promise and versatility of visual sensors, more extensive use of knowledge-based techniques for dead reckoning is not foreseeable.

Dead reckoning systems can provide extremely accurate position estimates for very low cost, but they integrate their errors over time and are thus unsuitable for long distance navigation. Unfortunately, all vehicle motion monitoring approaches to position location suffer from this same inherent problem. Wheel skid and spin contribute significantly to this drift in robots that derive all motion information from the locomotion system [12]. Doppler speed sensing systems offer some relief from this contribution to position error [12, 15] but they are not panaceas. While cost effective for some applications, long-term precision for either dead reckoning or inertial systems requires prohibitively expensive components for most applications [16]. In short, if a mobile robot is to operate over long distances (or short distances many times) then it needs position information from some vehicle-independent source.

External references for position location have many different forms. The only requirements for an external reference to be useful in position location are that its absolute location be known to the desired accuracy (if it is to be used to provide absolute position), that it be unambiguously observable (many a ship has gone down by being lured onto the rocks by fake lighthouses) and that the robot's position relative to the reference can be determined with the desired accuracy. External references designed specifically for navigation can be passive or active and either fixed location sites or orbiting satellites. Active beacons can use any part of the electromagnetic or acoustic spectrum. Fixed navigational references offer considerable near-term opportunity for implementation in factory environments. These have demonstrated their utility in several mobile robot implementations [4, 16, 17]. While the precision of external reference systems is better than vehicle monitoring systems over the long term and some external reference systems are excellent for the short term (e.g., painted marks on the floor), any form of beacons or reflectors amounts to structuring the world and, thus, does not provide a universal solution (e.g., beacons can be obscured by other objects in the world) [16]. In many cases it is
not economical to navigate using references precisely located in the environment specifically for position location. In these cases, references of opportunity must be used. Uncooperative references include celestial sources (e.g., stars, Sun), distinct landmarks and recognizable topographical or structural features. Several authors have suggested using landmarks or local terrain features for position location [18-28]. Determining location from the relative positions of local features requires matching them to a map, so this position location technique is called feature matching. The widest application of knowledge-based techniques to position location has been in those approaches using uncooperative references for feature matching.

Each of the techniques discussed above has its own limitations and error sources. These sources are often independent or only weakly coupled, so position information from several different sensors can be combined to improve the accuracy of the overall position estimate under a wide range of conditions. The fusion of different position estimates is one of the most promising but least explored applications of knowledge-based techniques to mobile robot position location.
FEATURE MATCHING TECHNIQUES If the robot has knowledge of the absolute location of several recognizable structural features in the environment then, by determining its location relative to those features, it can compute its own absolute position. This approach to position location has several advantages. Accuracy is limited only by the knowledge of landmark positions and by the accuracy with which the robot can determine its position relative to those landmarks. This accuracy should not drift with distance traveled or operating time. This approach does not require the environment to be structured in any way (no beacons, reflectors or satellites) although the area of interest must be minimally surveyed and contain sufficient recognizable features. Feature matching is computationally expensive but it is also a perfect complement to the computationally cheaper dead reckoning. Feature matching also takes advantage of sensors normally required for the path finding process (e.g., vision and ranging) so there need be no extra sensor expense for this capability. However, enhanced sensor orienting accuracy may be required for very distant features (e.g., celestial objects).

Several researchers have suggested feature matching using environmental information from widely different sensors [20, 25, 27]. Some of these efforts have explored locating indoor robots by matching features gained from acoustic ranging sensors [19, 21, 25] or laser rangefinders [22, 27] to maps of room interiors. Others have suggested using visual landmark recognition for navigation in outdoor domains [18, 20]. One of the most interesting feature navigation techniques has been suggested for a planetary rover [24]. In this technique, the robot position is computed from landmark position information observed by an orbiting satellite and from the robot's observation of the orbiter position. Position location using feature matching is considered the most general and complex of all navigation techniques for mobile robots [28].
Position location from feature matching is usually done in two steps. First, surrounding features are recognized and their relative locations identified using the robot's sensors. Then that set of collected features is matched to a known map, and the robot's absolute position is deduced from this correspondence. Three elements are necessary for feature matching: a map representing at least the known absolute positions of the features and possibly other characteristics to assist feature recognition, a map representing the sensor perception of the surrounding features and their positions relative to the robot, and a technique for matching the perception map to the known map. The two maps should use the same representation to ease this matching problem. This discussion of feature matching considers the issues of map representation, feature location and various algorithms for map matching.
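To make the two-step procedure concrete, the following is a minimal illustrative sketch, not an implementation from any of the cited systems. It assumes features are points perceived in robot-relative coordinates with a type label, searches correspondences against map features of the same type by brute force, and deduces the robot's absolute pose from the best least-squares rigid transform; the data structures, feature types and brute-force search are assumptions made for the example.

```python
import itertools
import math

def fit_pose(pairs):
    """Least-squares 2D rigid transform (heading + translation) mapping
    robot-relative observations onto known map positions."""
    n = len(pairs)
    ox = sum(p[0][0] for p in pairs) / n
    oy = sum(p[0][1] for p in pairs) / n
    mx = sum(p[1][0] for p in pairs) / n
    my = sum(p[1][1] for p in pairs) / n
    sxx = sxy = 0.0
    for (ax, ay), (bx, by) in pairs:          # (observation, map point)
        sxx += (ax - ox) * (bx - mx) + (ay - oy) * (by - my)
        sxy += (ax - ox) * (by - my) - (ay - oy) * (bx - mx)
    theta = math.atan2(sxy, sxx)              # robot heading in the map frame
    c, s = math.cos(theta), math.sin(theta)
    tx = mx - (c * ox - s * oy)               # robot position in the map frame
    ty = my - (s * ox + c * oy)
    return (tx, ty, theta)

def residual(pose, pairs):
    """Sum of squared distances between transformed observations and map points."""
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    return sum(((c * ax - s * ay + tx) - bx) ** 2 + ((s * ax + c * ay + ty) - by) ** 2
               for (ax, ay), (bx, by) in pairs)

def match_features(observed, known_map):
    """Brute-force correspondence search: try assignments of observed features
    to map features of like type and keep the best-fitting pose."""
    if not observed:
        return None
    best = None
    for assignment in itertools.permutations(known_map, len(observed)):
        if any(o["type"] != m["type"] for o, m in zip(observed, assignment)):
            continue                           # match only features of like type
        pairs = [(o["pos"], m["pos"]) for o, m in zip(observed, assignment)]
        pose = fit_pose(pairs)
        err = residual(pose, pairs)
        if best is None or err < best[1]:
            best = (pose, err)
    return best
```

Exhaustive correspondence search is shown here only because it is the simplest thing to write down; the constraints discussed later in this paper exist precisely because such a search becomes intractable for realistic numbers of features.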
Map Representations A good map representation is key for effective position location using feature matching techniques. The choice of representation determines what features can be represented and what algorithms can be used for map matching. The representation must be designed to be augmented with new but uncertain sensor information. Unfortunately, most internal map representations for mobile robots were developed to facilitate route planning more than position location [5-7, 9, 29, 30].

Map representations vary considerably depending upon whether the robot is working indoors or out. Virtually all of the work with indoor mobile robots has assumed a simple two dimensional world populated by polygonal objects. In this world, features are either polygon sides or vertices. In spite of these commonalities, several different map representations have been developed for indoor robots including uniform cells, line segments and object frames. The simplest of the indoor representations is a grid of cells, each labeled with the probability that it is occupied. This representation is particularly useful for easily combining multiple sonar range measurements [19]. The cell representation has also been used in such vintage indoor robots as SHAKY [5] and JASON [6]. Sonar mapping measurements have also been represented as connected line segments [21, 25]. In these representations space is divided into convex regions with line segments representing object surfaces. One of the most sophisticated indoor map representations is used in the robot HILARE [8, 22, 28]. Object vertices are the primitive features in this representation. These features are grouped into object frames (i.e., all the vertices for one object into the same frame). HILARE's world space is organized into a hierarchy of more complex frames containing objects, rooms and frontiers. This hierarchy represents geometrical, topological and semantic information. Geometrical information is constructed from sensor perceptions and the remaining information is inferred from the geometrical model [28].

Several map representations have been developed for less structured outdoor environments. In spite of the need for three dimensional representation [18, 31], all of these techniques have, like their indoor counterparts, assumed a two dimensional
world. The representations vary from simple property-less features to complex landmark databases. The simplest of the map representations for outdoor environments was developed for position location of a robot submersible using sonar sensors [26]. In this representation, features are simply sonar returns for which range and bearing can be computed. A known map of these features would describe where sonar returns were expected. Early work in landmark navigation evolved from efforts to develop a mobile robot for planetary exploration [23, 24]. In this work, map features represented mountain peaks and craters [23]. The landmark database is the most complex outdoor representation that has been proposed [20]. This database supports visual landmark recognition for an autonomous ground vehicle. It represents such landmark properties as size, position and a list of boundary points that can be correlated to the edge map from a visual image [20].

Brooks warns against the dangers of using simplified two dimensional representations of the three dimensional world [18]. For instance, algorithms developed using inadequate map representations could suffer significant time performance degradation when applied to the real world's complexity, and instability could arise in maintaining the correspondence between the perceived environment and the internal map over slight changes in position, orientation or illumination. When the mapping between the world model and the perceived representation becomes unstable, matching new perceptions with the partial world model becomes difficult and polygonal representations become hopelessly fragmented. He suggests using a relational map that is rubbery and stretchy instead of the typical rigid two-dimensional representation. This relational map stores primarily the geometric relationships between map primitives, thus easing the matching with uncertain sensor perceptions [18].

All known measurement techniques have sources of error. These sensor errors produce uncertainty in perception. Since position location is based upon measurement, it too is concerned with uncertainty. Both sensors and maps are sources of uncertainty. Most investigators have represented uncertainty in their sensor maps. In cell representations, cell labeling represents the probability that a cell is believed to be occupied (p > 0) or not (p < 0) [19]. In segment maps, time and observation dependent values for each segment represent uncertainty [21]. In HILARE's hierarchical representation each object property has an associated measurement uncertainty and these values are combined to compute the uncertainties associated with the parent frames [28]. Toward greater generality at the expense of simplicity, uncertainty manifolds have been proposed to represent sensor-induced uncertainty [18].
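As an illustration of the cell labeling convention mentioned above, here is a minimal sketch of an occupancy grid, not the implementation of [19]: each cell holds a signed value, positive when believed occupied and negative when believed empty, updated as successive sonar readings arrive. The grid size, resolution, update increments and the crude single-ray beam model are all assumptions made for the example.

```python
import math

GRID = 50                     # 50 x 50 cells
CELL = 0.1                    # 0.1 m per cell
grid = [[0.0] * GRID for _ in range(GRID)]   # 0 = unknown, >0 occupied, <0 empty

def clamp(v):
    return max(-1.0, min(1.0, v))

def integrate_sonar(x, y, bearing, rng, empty_inc=-0.05, occ_inc=0.3):
    """Fold one sonar range reading (taken from robot pose x, y, bearing) into
    the grid: cells along the beam before the return become more 'empty', and
    the cell at the return range becomes more 'occupied'."""
    steps = int(rng / CELL)
    for k in range(steps + 1):
        cx = int((x + k * CELL * math.cos(bearing)) / CELL)
        cy = int((y + k * CELL * math.sin(bearing)) / CELL)
        if not (0 <= cx < GRID and 0 <= cy < GRID):
            return
        inc = occ_inc if k == steps else empty_inc
        grid[cy][cx] = clamp(grid[cy][cx] + inc)

# Example: three overlapping readings reinforce belief in an obstacle ~2 m ahead.
for b in (-0.05, 0.0, 0.05):
    integrate_sonar(2.5, 2.5, b, 2.0)
```

The appeal of this representation, as noted above, is exactly this ease of combination: every new range reading simply nudges cell values, and no explicit object model has to be maintained.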
Feature Location It is not surprising that, like map representations, feature location approaches vary from indoor robots to outdoor robots. Indoor robots which use feature matching for position location use primarily ranging sensors to collect feature information. They have also had much simpler environments to represent and match. In the outdoor
robots, landmarks were located using sonar [26], vision [20, 23] or a laser rangefinder [24]. While the outdoor environment offers opportunity for complexity, only one of the outdoor feature location approaches used a sophisticated capability for feature recognition [20].

Features in indoor environments were either object segments [19, 21, 25] or object vertices [27, 28]. This division depended upon the sensor used for range measurements. Those modeling the world as connected line segments used acoustic sensors with low angular resolution for feature location. Those efforts that represent objects as connected vertices used laser rangefinders with very high angular resolution. In indoor environments the process of locating a feature is simply a matter of pointing the sensor and interpreting the range measurement, although vertex finding operations may be necessary if laser rangefinders are used. This simplicity is sufficient for structured indoor environments. Some complexity does arise when combining sensor measurements, and it is usually necessary to build segment maps [21] or object models [27, 28] from several different sensor measurements.

The choice of features varied widely among approaches to locating outdoor landmarks. The simplest feature is just a sensor return (i.e., no information which would distinguish one feature from another other than its position relative to the robot) [26]. This approach was devised for underwater navigation using information from only a sonar sensor. The complexity of this problem arises from the large number of features within the sensor's field of view. Researchers in planetary rovers limited feature types to just mountain peaks and craters [23, 24]. Only one approach has been proposed for guiding a mobile robot from several different types of features [20]. This approach uses visual landmark recognition for position location. In this approach, a selector module identifies from previous position location estimates a set of landmarks that should be observable by the robot. These landmark options are presented to a finder module that orients the camera and adjusts the focus properly. The finder then directs the matcher to locate possible landmark positions in an image. The matcher detects image edges, matches landmark templates using a generalized Hough transform, and interprets the uncertainty of the matches. False peaks in Hough space are reduced by using a measure of gradient direction informativeness to eliminate uninformative sources of spurious patterns. The finder then locates the possible landmark positions from a set of candidates provided by the matcher using geometric constraint propagation. After the landmarks are consistently located by the finder, the selector then computes the vehicle's actual position and the new position uncertainty [20].
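For readers unfamiliar with the generalized Hough transform used by the matcher just described, the following is a minimal illustrative sketch rather than the algorithm of [20]. It assumes edge points are supplied as (x, y, gradient direction) tuples; the bin count and accumulator structure are arbitrary choices, and the gradient-direction informativeness weighting mentioned in the text is not shown.

```python
import math
from collections import defaultdict

N_BINS = 36                                   # gradient direction bins (10 degrees each)

def direction_bin(phi):
    return int((phi % (2 * math.pi)) / (2 * math.pi) * N_BINS) % N_BINS

def build_r_table(template_edges, ref_point):
    """Offline step: for each template edge point, store the offset to the
    landmark reference point, indexed by quantized gradient direction."""
    table = defaultdict(list)
    rx, ry = ref_point
    for x, y, phi in template_edges:
        table[direction_bin(phi)].append((rx - x, ry - y))
    return table

def vote(image_edges, r_table):
    """Online step: every image edge point votes for the reference-point
    locations consistent with its gradient direction; accumulator peaks are
    candidate landmark positions."""
    acc = defaultdict(int)
    for x, y, phi in image_edges:
        for dx, dy in r_table[direction_bin(phi)]:
            acc[(x + dx, y + dy)] += 1
    return acc

def best_candidates(acc, top=5):
    return sorted(acc.items(), key=lambda kv: -kv[1])[:top]
```

Peaks in the returned accumulator correspond to candidate landmark positions in the image; the informativeness measure mentioned above would down-weight votes from very common gradient directions to suppress false peaks.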
Map Matching Once a set of features has been identified, these must be matched to a map. This problem can be generalized to one of finding the correspondence between two intersecting sets of objects. This general problem is well known in artificial intelligence and it can be extremely difficult if there are a large number of features
in the observed set, on the map or both. Several different techniques have been developed to solve the map matching problem for both indoor and outdoor situations. Map matching can be as difficult a problem indoors as outdoors. In spite of the variety of map matching techniques, they all use some type of search to determine the best estimate of correspondence and they all constrain the search in various ways.

Several different constraints have been used for map matching. The simplest is to match only features of like type (mountain peaks to peaks and craters to craters) [23, 26]. Geometric constraints are very common in map matching [23, 25, 26]. These constraints eliminate feature matches which do not have a consistent orientation or position relative to other features or which do not produce a robot position estimate that is consistent with other estimates. Prediction models can also be used to constrain landmark matches by identifying the feature matches that are possible if the robot moved in the way measured by its motion monitoring sensors [28]. In addition, measurements of absolute robot orientation can even further constrain the search in the same way [23, 27]. These constraints remain useful in spite of considerable sensor errors in estimating robot position and orientation. In fact, estimates of error can be used to determine how loosely to apply geometric constraints [21, 23, 25]. Another way to constrain the matching search is to match reduced resolution sensor data first, then gradually increase the resolution as potential matches and the accompanying search space are decreased [19, 27].

The order of evaluation of the constraints can greatly affect the accuracy and speed of a match [25, 26]. The simplest and fastest running constraints should be executed first. This approach organizes the processing so that the least processing is done on the largest data set. Only after the size of the data set has been decreased as much as possible should more computationally expensive constraints be applied. Usually the simplest constraints are those that use the least information [26]. If successful, the constraint process should have reduced the data set to the point where it contains only the most ambiguous matches. This process would then have prepared the data set to be examined by the extremely powerful but very slow search process. If the search space has been reduced enough by the constraints then the search process can be a simple linear search. If the matches are inexact, least squares [27] or chi square [23] fitting can be employed to produce a single consistent estimate.

Another approach to matching is to assume a trial transformation between the sensor map and the known map and to compute the degree of match. If the match is poor then a new transform is proposed until a match is found. This technique works best if applied to maps organized into a hierarchy of reduced resolutions [19] (a sketch of this hypothesize-and-score search appears after the accuracy discussion below). In another technique, the possible feature matches are organized into either trees [25] or graphs [26] representing the correspondence between sensor and map data. The nodes and links in these structures must be pruned first using constraints; then they can be searched using well-known tree or graph search algorithms. Still other authors have even considered the problem of matching perceptions to existing maps using backward reasoning techniques [18] and graph matching techniques [32].

Feature matching has proven to be surprisingly accurate. Experiments with indoor
robots have produced absolute position errors of less than 0.2 m and orientation errors of less than three degrees [19, 25]. Early simulations of feature matching on Earth and planetary data indicated an average absolute position error of approximately 250 m and an average orientation error of less than one degree [23]. Not surprisingly, the accuracy of the position estimates improved with an increased number of sensed features. The researchers felt that this performance could be improved by an order of magnitude by improving the accuracy of the feature position measurements [23].
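The hypothesize-and-score matching approach referenced above can be sketched as follows. This is an illustration under assumed step sizes, search ranges and a Gaussian-style match score, not the method of [19]: a candidate robot pose is proposed, the sensor features are transformed by it, the degree of match against the known map is scored, and the search is refined around the best candidate at successively finer resolution.

```python
import math

def score(pose, observed, known_map, sigma=0.3):
    """Degree of match: each transformed observation contributes according to
    its distance to the nearest known map feature."""
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    total = 0.0
    for ax, ay in observed:                       # robot-relative feature positions
        wx, wy = c * ax - s * ay + tx, s * ax + c * ay + ty
        d = min(math.hypot(wx - mx, wy - my) for mx, my in known_map)
        total += math.exp(-(d * d) / (2 * sigma * sigma))
    return total

def coarse_to_fine_match(observed, known_map, prior_pose, span=2.0, levels=3):
    """Search an (x, y, heading) neighborhood around a prior pose estimate,
    halving the step size at each level, and return the best-scoring pose."""
    best = prior_pose
    step = span / 4
    for _ in range(levels):
        bx, by, bth = best
        candidates = [(bx + i * step, by + j * step, bth + k * 0.1)
                      for i in (-2, -1, 0, 1, 2)
                      for j in (-2, -1, 0, 1, 2)
                      for k in (-2, -1, 0, 1, 2)]
        best = max(candidates, key=lambda p: score(p, observed, known_map))
        step /= 2
    return best
```

The coarse-to-fine refinement mirrors the reduced-resolution-first strategy described above: poor transforms are discarded cheaply at coarse resolution before the finer, more expensive evaluations are made.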
POSITION LOCATION FUSION A recurring theme found throughout this paper is that no one source of position information is sufficient for a mobile robot, whether for indoor or outdoor environments. Several investigators have suggested using multiple sensor sources for position location [22, 23, 27]. Multiple sensor sources can crosscheck each other (e.g., odometry correlated with an infrared beacon navigation system). In addition, high cost sensors can be used for multiple purposes, thereby getting more for the money [22].

HILARE uses three separate modules for position location to overcome the limitations of any single system. Absolute position can be obtained by triangulating from infrared beacons placed strategically in the environment. However, the beacons are not always available, so HILARE dead reckons using optical shaft encoders on the drive wheels between beacon sightings. In addition, HILARE can deduce its position by matching range information from either its laser or its ultrasonic range sensors to room maps [22]. The odometry measurements of position are used for instantaneous path control with corrections and updates from the other sources. All sources of position information communicate that data in absolute coordinates to make the merging of position estimates from independent sources easier [28]. Position estimates can be combined by choosing the best estimate from the most accurate sensor and by weighted averaging of consistent measurements from the same sensor while taking into account the associated uncertainties [28].

While merging position estimates from different sensor sources seems like a good idea, the mechanism to support this exchange is not so obvious in the distributed computing environment of a modern mobile robot. Several modules could consume absolute position information and the best position estimate should be the value used by all. Consumers should just be confident that the position estimate that they use is the best available, and the source of the estimate should be invisible. Creating this ideal situation presents a significant problem that is closely related to as yet unresolved distributed database issues. However, one approach for merging the information from multiple sensor sources has been demonstrated [33]. Conceptually, this approach is similar to blackboard systems implemented for various expert systems with distributed knowledge sources on mainframe computing environments. The sources of intersecting information write their value for a particular property onto the blackboard together with a common measure of
the confidence of the value's correctness when they see that their value has a higher confidence than the value that existed previously on the blackboard. In this way, all evaluation of confidences is done closest to the sensor source and only the best value for a property exists on the blackboard at any one time. This blackboard system is actually distributed over several processors that are loosely coupled through a local area network. Each processor has a copy of the blackboard that is kept consistent with the other blackboard copies by intelligent communications interface systems [33]. This concept provides a convenient and proven mechanism by which to integrate multiple estimates of the position of a mobile robot.
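The following is an illustrative sketch of the two fusion ideas discussed in this section, not the implementation of [28] or [33]: independent absolute-position estimates are combined by weighting each with its uncertainty, and a value is only posted to the shared blackboard when its confidence exceeds what is already there. The field names, the use of variance as the confidence measure, and the inverse-variance weighting rule are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    x: float          # absolute position, metres
    y: float
    variance: float   # uncertainty of this source, m^2 (lower = more confident)
    source: str

def fuse(estimates):
    """Weighted average of consistent estimates, weighting each by 1/variance."""
    wsum = sum(1.0 / e.variance for e in estimates)
    x = sum(e.x / e.variance for e in estimates) / wsum
    y = sum(e.y / e.variance for e in estimates) / wsum
    return Estimate(x, y, 1.0 / wsum, "fused")

blackboard = {}

def post_position(est):
    """Blackboard-style update: a source overwrites the posted position only
    when its estimate is more confident than the one already on the board."""
    current = blackboard.get("position")
    if current is None or est.variance < current.variance:
        blackboard["position"] = est

# Example: beacon triangulation and feature matching agree; odometry is noisier.
post_position(fuse([Estimate(4.9, 2.1, 0.04, "beacons"),
                    Estimate(5.1, 2.0, 0.09, "feature matching"),
                    Estimate(5.6, 2.4, 0.50, "odometry")]))
```

Keeping the confidence evaluation with each source and letting consumers read only the posted best value reflects the design intent described above: the origin of the estimate stays invisible to the modules that use it.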
FUTURE WORK Knowledge-based techniques have been applied to several different aspects of the position location problem. However, in spite of this help and the capability of modern electronic navigation systems, the most sophisticated mobile robots of today and for many years to come have only a small fraction of the navigational capability of a Boy Scout tenderfoot. Knowledge-based techniques will continue to advance in theory and implementation and will continue to expand the capabilities of mobile robots. Map matching techniques should be improved to use a wider selection of navigational cues to increase their robustness and versatility. In addition, the robot's sensors could be used to collect much more navigational information from the surrounding environment. For instance, direction can be inferred from such cues as the orientation of drifting dust or snow, the direction in which clouds first appear, the side of trees where leaves have fallen, the direction tree branches are permanently swayed, the direction the preponderance of flowers in a field are facing and the side of a solitary tree on which the moss grows thickest [34]. These cues and more are used by experienced human orienteers and, like other forms of expertise, should be incorporated into mobile robots requiring this level of navigational capability. Knowledge-based techniques make it possible to accomplish this very difficult task.
REFERENCES
[1] G.E. Beck, ed., Navigation Systems: A Survey of Modern Electronic Aids, Van Nostrand Reinhold, London, UK, 1971.
[2] R.R. Hobbs, Marine Navigation 2: Celestial and Electronic, Naval Institute Press, Annapolis, MD, 1981.
[3] S. Riter & J. McCoy, “Automatic Vehicle Location – An Overview,” IEEE Trans. on Vehicular Technology, VT-26 (1), February 1977, pp7-11.
[4] R. Berry, “Sensors for Mobile Robots,” Proc. 3rd Conf. on Robot Vision and Sensory Controls, Cambridge, MA, November 1983, pp584-588.
[5] N.J. Nilsson, “A Mobile Automaton: An Application of Artificial Intelligence Techniques,” Proc. Int. Joint Conf. on Artificial Intelligence, Washington, DC, May 1969, pp509-520.
[6] M.H. Smith et al., “The System Design of JASON, A Computer Controlled Mobile Robot,” Proc. Int. Conf. on Cybernetics & Society, IEEE, New York, NY, September 1975, pp72-75.
[7] A.M. Thompson, “The Navigation System of the JPL Robot,” Proc. 5th Int. Joint Conf. on Artificial Intelligence, Cambridge, MA, August 1977, pp749-757.
[8] G. Giralt, R. Sobek & R. Chatila, “A Multi-Level Planning and Navigation System for a Mobile Robot: A First Approach to HILARE,” Proc. 6th Int. Joint Conf. on Artificial Intelligence, Tokyo, Japan, August 1979, pp335-337.
[9] H.P. Moravec, “Rover Visual Obstacle Avoidance,” Proc. 7th Int. Joint Conf. on Artificial Intelligence, Univ. of British Columbia, Vancouver, BC, Canada, August 1981, pp785-790.
[10] H.M. Chen & C.N. Shen, “Surface Navigation System and Error Analysis for Martian Rover,” Proc. 4th Hawaii Conf. on Systems Science, Honolulu, HI, January 1971, pp661-663.
[11] R.A. Lewis, “Roving Vehicle Navigation Subsystem Feasibility Study,” Proc. 3rd Hawaii Conf. on Systems Science, Honolulu, HI, January 1970, pp316-319.
[12] S. Tsugawa, T. Hirose & T. Yatabe, “An Intelligent Vehicle with Obstacle Detection and Navigation Functions,” Proc. IEEE Conf. on Industrial Electronics, Controls, and Instrumentation, Tokyo, Japan, October 1984, pp303-308.
[13] E.B. Cynkin et al., “Robotic Vehicle Terrain-Navigation Subsystem: Conceptual Design Phase,” Unmanned Systems, 2 (4), Spring 1984, pp12-19.
[14] C. Thorpe, L. Matthies & H. Moravec, “Experiments & Thoughts on Visual Navigation,” Proc. Conf. on Robotics & Automation, 85CH2152-7, IEEE, New York, NY, March 1985, pp830-835.
[15] S.Y. Harmon, “USMC Ground Surveillance Robot (GSR): A Testbed for Autonomous Vehicle Research,” Proc. 4th Univ. of Alabama, Huntsville Conf. on Robotics, Huntsville, AL, April 1984, np.
[16] M. Julliere, L. Marce & H. Place, “A Guidance System for a Mobile Robot,” Proc. 13th Int. Symp. on Industrial Robots, Chicago, IL, April 1983, p13:58-68.
[17] G. Bauzil, M. Briot & R. Ribes, “A Navigation Subsystem Using Ultrasonic Sensors for the Mobile Robot HILARE,” Proc. 1st Conf. on Robot Vision and Sensory Controls, Stratford/Avon, UK, April 1981, pp47-58.
[18] R.A. Brooks, “Visual Map Making for a Mobile Robot,” Proc. Conf. on Robotics & Automation, 85CH2152-7, IEEE, New York, NY, March 1985, pp824-829.
[19] H.P. Moravec & A. Elfes, “High Resolution Maps from Wide Angle Sonar,” Proc. Conf. on Robotics & Automation, 85CH2152-7, IEEE, New York, NY, March 1985, pp116-121.
[20] F.P. Andresen et al., “Visual Algorithms for Autonomous Navigation,” Proc. Conf. on Robotics & Automation, 85CH2152-7, IEEE, New York, NY, March 1985, pp856-861.
[21] J.L. Crowley, “Dynamic World Modeling for an Intelligent Mobile Robot,” IEEE J. of Robotics and Automation, RA-1 (1), March 1985, pp31-41.
[22] G. Giralt, R. Chatila & M. Vaisset, “An Integrated Navigation and Motion Control System for Autonomous Multisensory Mobile Robots,” Proc. 1st Int. Conf. on Robotics Research, Bretton Woods, NH, August/September 1983, np.
[23] R.C.A. Lewis, “A Computerized Landmark Navigator,” Proc. 4th Hawaii Conf. on Systems Science, Honolulu, HI, January 1971, pp649-651.
[24] R.E. Janosko & C.N. Shen, “Consecutive Tracking of Landmarks by an Active Satellite and then by a Martian Roving Vehicle,” Proc. 3rd Hawaii Conf. on Systems Science, Honolulu, HI, January 1970, pp324-327.
[25] D. Miller, “Two Dimensional Mobile Robot Positioning Using Onboard Sonar,” Proc. IEEE 1984 Pecora IX Symp., Sioux Falls, SD, October 1984, pp362-369.
[26] C. Thorpe, “Underwater Landmark Identification,” Proc. 2nd ASME Computer Engineering Conf., San Diego, CA, August 1982, pp1-6.
[27] C.J. Zhao et al., “Location of a Vehicle with a Laser Range Finder,” Proc. 3rd Conf. on Robot Vision and Sensory Controls, Cambridge, MA, November 1983, pp82-87.
[28] R. Chatila & J.P. Laumond, “Position Referencing and Consistent World Modeling for Mobile Robots,” Proc. Conf. on Robotics & Automation, 85CH2152-7, IEEE, New York, NY, March 1985, pp138-145.
[29] D. Keirsey et al., “Autonomous Vehicle Control Using AI Techniques,” Proc. 7th Int. Computer Software & Applications Conf., 83CH1940-6, IEEE, New York, NY, November 1983, p173.
[30] D.T. Kuan et al., “Automatic Path Planning for a Mobile Robot Using a Mixed Representation of Free Space,” Proc. 1st Conf. on Artificial Intelligence Applications, 84CH2107-1, IEEE, New York, NY, December 1984, pp70-74.
[31] S.Y. Harmon, “Comments on Automated Route Planning in Unknown Natural Terrain,” Proc. Int. Conf. on Robotics, 84CH2008-1, IEEE, New York, NY, March 1984, p571.
[32] J.P. Laumond, “Model Structuring and Concept Recognition: Two Aspects of Learning for a Mobile Robot,” Proc. 8th Int. Joint Conf. on Artificial Intelligence, Karlsruhe, FRG, 8-12 August 1983, pp839-841.
[33] S.Y. Harmon et al., “Coordination of Intelligent Subsystems in Complex Robots,” Proc. 1st Conf. on Artificial Intelligence Applications, 84CH2107-1, IEEE, New York, NY, December 1984, p64.
[34] C. Ormond, Complete Book of Outdoor Lore, Harper & Row, New York, NY, 1964.