[Accepted by AAAI 1991 Fall Symposium on Sensory Aspects of Robotic Intelligence]

Determining Mobile Robot Actions which Require Sensor Interaction

John Budenske
Research and Technology Center
Alliant Techsystems
[email protected]

Maria Gini†
Department of Computer Science
University of Minnesota
[email protected]

Abstract

In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Our contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. We propose to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as an interface to high-level planning processes.

1. Introduction

To be useful in the real world, robots need to be able to move safely in unstructured environments and achieve their given tasks despite unexpected changes in their environment or failures of some of their sensors. The variability of the world makes it impractical to develop very detailed plans of action before execution, since the world might change before execution begins and thus invalidate the plan. In this paper we explain how to determine at execution time the sensor and actuator commands necessary to execute a given abstract goal. The abstract goal is a single, high-level goal of the form that could be produced by a classical planning system. The method we propose works for any sensor/actuator existing on the robot when given a sufficient description of that sensor/actuator. In other words, the method is hardware independent up to requiring a "sufficient description" of the hardware.


Executing a given abstract-goal requires some amount of information from the external world. Without such information (i.e., the location of objects to manipulate) a goal cannot be executed. If the robot knew at planning time what relevant information is needed, the processing to execute the abstract-goal could be greatly reduced. The relevant information depends not only on the goal, but also on the current world state and the current sensor availability. Attempting to determine, prior to execution, all possible configurations of relevant information across the domain of abstract-goals, external world states, and sensor availability would be an endless task. However, to be executed, a plan must somehow explicitly specify the details of how to utilize the sensors and actuators. These "utilization-details" determine how the sensor/actuator performs in the environment, and thus different environment situations may require different programming of "utilization-detail-values" for the same abstract-goal. If the current environment is not what was planned for, then the selected sensor/actuator detail values may not be appropriate, and thus the plan may not achieve its goal. The central problem is defining how to translate the given abstract-goal into the necessary sensor/actuator utilization-details. The abstract-goal contains information about how the robot needs to interact with the world only implicitly. In performing this translation, the use of the robot's sensors and actuators must be determined and explicitly stated within the primitive commands. We call this translation process "Explication".
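To make the distinction concrete, consider a purely illustrative rendering (in Common Lisp, the implementation language of Section 5) of an abstract-goal and of the explicit utilization-detail-values it must eventually be translated into; every parameter name and number below is hypothetical and is not part of the system described here.

    ;; Illustrative only: the same task before and after translation.
    ;; An abstract-goal carries no explicit sensor/actuator detail:
    (defparameter *abstract-goal* '(move-through door-1))

    ;; Utilization-detail-values make the sensor/actuator usage explicit.
    ;; All names and numbers are hypothetical.
    (defparameter *utilization-detail-values*
      '((:actuator drive-wheels   :command :forward :speed 0.10)        ; m/s, close quarters
        (:sensor   sonar-ring     :active-elements (0 1 23))            ; forward-facing sonars only
        (:sensor   ir-proximity   :active-elements (front-left front-right))
        (:process  detect-doorway :min-opening 0.70)))                  ; meters

Different environment situations would call for different values in the second form, while the first form stays unchanged.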

Explication is difficult because of the need to adapt to the environment as the command is being executed. Thus, it is dependent on the current situation, and so it must occur at execution time.

† This work was funded in part by the NSF under grants NSF/CCR-8715220 and NSF/CDA-9022509, and by the AT&T Foundation.

The Explication process consists of:
(a) determining the relevant information from the abstract-goal and current world state as sensed by the robot's sensors;
(b) selecting the source of the relevant information;
(c) collecting the relevant information;
(d) decomposing the abstract-goal into sensor/actuator utilization-details (primitive commands and control parameters);
(e) executing primitive sensor/actuator/processor commands;
(f) detecting and resolving conflicts which occur between sub-goals;
(g) mapping (feedback control) the relevant information into the sensor/actuator utilization-detail-values;
(h) monitoring the relevant information to detect goal accomplishment and error conditions.
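The following is a minimal, self-contained sketch (in Common Lisp, the implementation language mentioned in Section 5) of how steps (a) through (h) might be combined into a single execution-time loop. The stub functions are hypothetical placeholders included only so the sketch runs; they do not correspond to the LSAT interface.

    ;; Hypothetical placeholder functions, included only so the sketch runs.
    (defun determine-relevant-info (goal world) (declare (ignore goal)) world)          ; (a)
    (defun select-sources (needs world) (declare (ignore world)) needs)                 ; (b)
    (defun collect-info (sources) sources)                                              ; (c)
    (defun decompose-goal (goal info) (declare (ignore info)) (list goal))              ; (d)
    (defun execute-primitives (detail-values) (format t "executing ~a~%" detail-values)); (e)
    (defun resolve-conflicts (details) details)                                         ; (f)
    (defun map-info-to-values (details info) (declare (ignore info)) details)           ; (g)
    (defun monitor (goal info) (declare (ignore goal info)) :accomplished)              ; (h)

    ;; One execution-time loop combining the steps.
    (defun explicate (abstract-goal world-state)
      (loop
        (let* ((needs         (determine-relevant-info abstract-goal world-state))
               (sources       (select-sources needs world-state))
               (info          (collect-info sources))
               (details       (resolve-conflicts (decompose-goal abstract-goal info)))
               (detail-values (map-info-to-values details info)))
          (execute-primitives detail-values)
          (case (monitor abstract-goal info)
            (:accomplished (return :accomplished))
            (:error        (return :error))
            (otherwise     nil)))))

The unconditional loop reflects the feedback character of steps (g) and (h): information is re-collected and utilization-detail-values are reselected until monitoring reports success or an error.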

Though defining the Explication process is the crux of the problem, there exist subproblems which must be addressed. Primarily, there are three other "bodies of knowledge" whose content, structure, and interaction must be defined in order for the knowledge used by the Explication process to be fully defined. We call these bodies of knowledge the Sensor/Actuator Utilization-Detail-Knowledge, the Sensor/Actuator Processing Knowledge, and the Dynamic World State Knowledge. We describe them later in the paper.

2. Relevant Research

Several authors have described methods for moving robots. Lumelsky [Lum86] has studied the problem of reaching a given goal position in a completely unknown but static environment. Moravec [Mor88] models the space with a grid and uses sensory data to compute the probability that the space in each element of the grid is occupied. Since the map is continuously updated, changes in the environment are slowly reflected by changes in the map. The map is used for navigation and can be integrated with more accurate sensor knowledge (for example, stereo vision). Khatib [Kha86] uses artificial potential fields to repel the robot away from obstacles and to attract it to the goal. Artificial potential fields have been used in a variety of ways as the low-level mechanism for navigation and path planning [Pay90]. When the tasks given to the robot are more complex than just reaching a given position while avoiding obstacles, sensor processing becomes more difficult.

Although sensing strategies can be developed at planning time and added to an existing plan [Doy86], it is preferable to reason about sensors during the development of the plan [Hut90] or to plan the sensing strategies dynamically when current information about the world becomes available [Hut89]. Firby [Fir87] proposes a planning system that can react to changes and take advantage of opportunities. Albus and his group [Lum89] have developed a hierarchical, highly-synchronized control structure for the sensors which processes the sensor data as they flow through the structure. The highest-level decisions are carried out in the top module, and the longest planning horizons exist there as well. The system, thus, is a large, highly-synchronized, static configuration of routines and subroutines, which is imposed not only on the inter-module structure, but also on the control and data paths and the overall goal decomposition strategy. The flexibility necessary to solve problems which use different orderings/hierarchies of the same sub-modules is compromised by the static control and data paths. Recent research efforts have produced a variety of alternatives to classical planning. The most popular and successful approach is the subsumption architecture [Bro86], which avoids planning completely by using layers of behaviors. The key idea is that layers of a control system can run in parallel to each other and interact when needed. Each individual layer corresponds to a level of behavioral competence. Though the distributed method of control allows many behaviors to run in parallel, it also precludes centrally controlling the robot to achieve planned complex tasks. Georgeff [Geo87] has implemented a planning and control system initially in simulation, and then on the SRI FLAKEY mobile robot. This system is based on a functional decomposition of high-level control into primitives, and is being extended to integrate reactive behaviors into the control structure. The REX system [Ros89] decomposes high-level goals into mobile robot commands, and then converts them into hardware logic designs. Reactivity is achieved by embedding the planning system within a reactive control system (called Situated Automata Theory). Henderson [Hen84, Hen85] combines a declarative description of the sensors with procedural definitions of sensor control. This research, termed "Logical Sensors", consists of logical/abstracted views of sensors and sensor processing, much as logical I/O is used to insulate the user from the differences of I/O devices and computer operating systems. It provides a coherent and efficient data/control interface for the acquisition of information from many different types of sensors. The specifics of the implementation can change (i.e., changing sensors or sensor processing algorithms) without affecting the symbolic-level control system. This allows for greater sensor-system reconfiguration, both to provide greater tolerance for sensor failure and to support incremental development of additional sensing and processing devices. Work has been done to extend the concept to include actuator control [All87]. Lyons [Lyo89] proposes a formal model of the computation process of sensory-based robots. Our work is closely related to the Logical Sensors and to the Perception and Motor Schemas by Arkin [Ark90]. Motor schemas are reactive, low-level planning components, which can be goal-driven, and thus are very similar to the Logical Sensors. The emphasis of that research is on robot navigation via potential field manipulation. Control is achieved by a single control module selecting from a single layer of motor schemas.

3. Our Approach to the Problem

The main difficulty in defining the Explication process is in the need to react to a dynamic world. This not only forces the Explication process to occur at execution time, but also forces the determination of the sensor/actuator utilization-details and their values to occur then as well. Determining utilization-details requires knowledge, and this knowledge must be sufficient to describe the characteristics of the sensors, actuators, and processes. Our approach focuses on Explication and on developing sufficient descriptions of sensor, actuator, and process utilization-details. The first step of the approach was to determine the structure of each of the bodies of knowledge, and then to define their content type, as well as the interrelationships between them. From there we are building a framework for Explication which will interact with the sufficient descriptions of the sensor/actuator entities, which we have termed Logical Sensors/Actuators (LSA). The framework is being implemented in an object-oriented programming environment, resulting in a flexible robotic sensor/control system which we call the Logical Sensor Actuator Testbed (LSAT). As we determine the structure and content of the bodies of knowledge, we are also developing a framework for Explication. The framework consists of the knowledge, mechanisms, and structures used by the Explication process, which include:

1) knowledge about sensors, actuators, and sensor/actuator processing;
2) mechanisms for combined goal- and event-driven control; and
3) mechanisms for matching sensor data to internal goal descriptions and to error conditions.
One of the purposes of the framework is to organize and differentiate between the domain-dependent and domain-independent mechanisms and structures. This will allow us to implement a platform-independent testbed. Primarily, the framework is a collection of reconfigurable components and flexible component structures which can be combined to achieve abstract-goals through execution. Before describing how the Explication process is performed, we will examine the bodies of knowledge that are needed.

3.1. Sensor/Actuator Utilization-Detail-Knowledge

Utilization-details are the set of variables in the entire sensor/actuator system. This includes the available sensors and actuators, their primitive commands, their associated processing components, and their control parameters. Knowing which sensors are available is important since multiple sensors, actuators, and their processing components can have overlapping data and control. Thus, a selection between them must occur. Also, with many different possible processing components and subcomponents, their functionality can be reconfigured, through the system's calling structure and control parameters, so as to produce various results. This is necessary since different situations will require different configurations for the same abstract-goal. The Explication process reasons about the identification of the utilization-details which compose the given abstract-goal, and then selects the corresponding utilization-detail-values based on the current situation. Such identification and selection requires a description of the function and capabilities of all utilization-details. The description will constitute "knowledge" about utilization-details, which is combined with knowledge about the dynamic world state and used by the Explication process to identify utilization-details and select utilization-detail-values. The Explication process combines utilization-detail-knowledge with the current situation so as to reason about the identification of utilization-details and the selection of utilization-detail-values. Successful accomplishment of an abstract-goal depends on correctly determining the utilization-details and their values. Determining correct values often means using up-to-date information. Thus, simple mappings of utilization-detail-values to abstract-commands will not work, since up-to-date information from the current situation is also needed. We are in the process of analyzing sensor process knowledge to derive a list of the sensor and actuator process and control characteristics, which describe the different types of sensors, actuators, processes, parameters, control effects, and process results (including knowledge about which utilization-detail-value provides what processing results). The relationship between a process's utilization-details and how different utilization-detail-values produce different process results must be determined and categorized (e.g., an evaluation of an algorithm's parameter ranges). This categorization can then be used to describe a process's control and its results, and thus constitutes the utilization-detail-knowledge. The utilization-detail-knowledge is of the form: "for each process, there are a number of utilization-details, each having a number of utilization-detail-values related to a processing result". This knowledge needs to be structured to be useful to the Explication process, which operates with the format: "to achieve this goal, I need this result". The "result" is the connecting piece of knowledge between Explication and utilization-detail-knowledge. We are using a hypothesize-and-test approach in the lab on an actual mobile robot to further investigate these relationships and to develop utilization-detail-knowledge.

3.2. Sensor/Actuator Processing Knowledge

Sensor/actuator processing knowledge is the knowledge about how to process sensor data and how to drive sensors and actuators. This procedural knowledge, usually encoded into the low-level processing algorithms, is needed to derive the utilization-detail-knowledge and the knowledge about the dynamic world state. The difficulty in determining the structure of this knowledge lies in the hierarchical nature of the Explication process and how it interacts with the sensor/actuator processing. To derive this knowledge, an analysis of current sensor/actuator research is being performed to determine viable candidates which can be implemented on an actual robot. Candidate sensors are analyzed to understand how they work, what data types are used, what parameter types are needed, what results are output, etc. The knowledge derived from collecting, analyzing, implementing, and, ultimately, understanding them constitutes the sensor/actuator processing knowledge, and this knowledge is being used in the derivation of the utilization-detail-knowledge and the dynamic world state knowledge. Though its content is not the main interest of this research, the structure of this knowledge is relevant, since this body of knowledge interacts with the other bodies of knowledge.

3.3. Dynamic World State Knowledge

The Dynamic World State is the conceptual structure for raw and processed sensor data produced by the extraction of relevant and up-to-date information from the external world. Its purpose is to provide current data about the state of the external world. The structure of the dynamic world state must allow for the many different types of raw and processed sensor data, such as images, features, one-dimensional signals, movement of entities, stable versus unstable states of entities, logical assumptions and derivations about the data, etc. The structure can be viewed as a scratch pad of information about the external world, some of which is relevant to the current abstract-goal and some of which is extraneous. A definition of "what makes information up-to-date" must be derived. For example, information about a wall may remain true for a long time compared to information about a door, which can easily be opened and closed, and so different types of information will have different methods for determining whether the information is up-to-date or not. The content and structure of the Dynamic World State are based on sensor processing capabilities. They can be viewed as a collection of functional data providers which form a functional abstraction between how the data is used and how it is collected and processed. This allows some of the collected data to be buffered and attached to the sensor processing module which produced it, while other data will be collected upon request. In order for this functional abstraction to interact with the Explication process, knowledge about the data collection process is necessary, and so must be attached to each of the functions. This knowledge about the characteristics of each functional data provider is structured in much the same way as the sensor/actuator utilization-detail-knowledge described above.

3.4. Explication

Explication from an abstract-goal to utilization-details consists of the subproblems outlined in Section 1. Though each of these subproblems is individually solvable, the real challenge is in combining them into a system which accomplishes a given abstract-goal. Determining the relevant information for a given abstract-goal depends on what the goal's informational needs are and on what sensors and processes are available. Informational needs refer to what data is necessary to allow selection and control decisions to be made about sensors and processes. A given abstract-goal implies a set of possible utilization-details (sensors and control parameters which can be used to accomplish it). This set must be reduced to the final utilization-detail-values, and those values depend primarily on the current world state and the current availability of sensors/actuators and processes. Once the relevant information has been determined, it must be collected. This problem turns into a sub-Explication problem where "collection of data item 'x'" is itself an abstract-goal subject to Explication. Decomposition is the reduction from an abstract-goal to a set of possible utilization-details. We are determining how to obtain the decomposition through lab analysis of the relationships between abstract-goals and utilization-details. We then identify a reduced set of utilization-detail-values whose combined activity strives to achieve the abstract-goal. This analysis also includes sensor/actuator process knowledge, and studies the relationship between external world states and utilization-details. Our analysis is yielding rules which represent the relationships between utilization-detail-value selections, achievement of abstract-goals, and possible world state changes. Feedback control mapping is selecting the final utilization-detail-values based on current sensor data. At times the selection of a utilization-detail-value may correspond to the selection of a sensor/actuator process which performs a sub-Explication (at a more detailed level) in order to accomplish that process. Also, the collection of relevant information often results in more detailed levels of Explication. This is how the hierarchical levels of Explication evolve. We also have to take into consideration that the dynamic world requires the system to be reactive. This forces the system to constantly collect information and use it to continuously reselect utilization-detail-values. This implies a looping (or feedback control mapping) mechanism between the collection of information and the selection of utilization-detail-values. Taken together, the overall process of Explication is one where, at each point in the hierarchy, there is some mixture of process calls, feedback control mapping, and sub-Explication. At each point in the hierarchy, this mixture is different depending on the goal we are currently explicating. This means we need a common, general process definition with a common means of communication (calling and subcalling) which allows the varied mixtures of Explication.
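As a rough illustration of how decomposition might consult utilization-detail-knowledge of the form described in Section 3.1 ("for each process, there are a number of utilization-details, each having a number of utilization-detail-values related to a processing result"), the sketch below stores such entries for one hypothetical process and retrieves them by the result a goal needs; the entries themselves are invented for illustration and are not LSAT data.

    ;; Illustrative utilization-detail-knowledge for one hypothetical process.
    ;; Each detail entry relates possible values to the processing result they provide.
    (defparameter *utilization-detail-knowledge*
      '((:process sonar-ring
         :details ((:detail :firing-rate     :values (:slow :fast)
                    :result :coarse-object-map)
                   (:detail :active-elements :values (:single :all)
                    :result :single-bearing-range)))))

    ;; Explication operates with the format "to achieve this goal, I need this result":
    ;; given the needed result, return the processes and details that can provide it.
    (defun details-for-result (needed-result knowledge)
      (loop for entry in knowledge
            append (loop for detail in (getf entry :details)
                         when (eq (getf detail :result) needed-result)
                           collect (list (getf entry :process) detail))))

    ;; e.g. (details-for-result :single-bearing-range *utilization-detail-knowledge*)

The "result" field plays the connecting role described in Section 3.1: it is what ties a goal's informational needs to the utilization-details that can satisfy them.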

For example, an abstract-goal for turning the robot towards a moving object (due to the simple nature of the process) could be defined as a feedback control mapping with very little knowledge used within decomposing, determining, and selecting. In comparison, an abstract-goal for moving the robot through a doorway can contain detailed knowledge on decomposing, determining, detecting/resolving conflicts, etc. Explication knowledge decomposes it into sub-abstract-goals such as finding the doorway and moving towards it, and into feedback control mappings such as sensor-sweeping an area, searching for the doorway until it is found, and progressively moving through the doorway until clearing it.

4. An Example

To illustrate our approach, we will use moving up to a door and passing through it as an example of an abstract-goal which the LSAT system will execute (i.e., goal = "Move-through Door-1"). In order to execute this abstract-goal, we assume that a set of mid-level LSAs has been implemented. The set would include a Move-to-vicinity, which moves the robot into a designated vicinity; an Obstacle-avoidance, which detects and avoids obstacles as the robot is moving; and a Proximity-safety, which upon detection of a close obstacle (within millimeters) takes control of the robot to guide it away from the obstacle until the Move-to-vicinity and Obstacle-avoidance can return to guiding without error. Also available are a Detect-doorway, a Verify-doorway, a Close-quarters-move (which guides the robot through the door opening), and monitoring behaviors which detect the passing through the doorway (Detect-through-doorway) and which detect if something suddenly blocks the opening as the robot passes through it (Close-quarters-obstacle-detection). Each of these mid-level LSAs further decomposes into low-level sensor processing, sensor control, robot command generation, and actuator driver LSAs. For brevity of the example, we will not discuss the low-level details unless they become pertinent to the example. We assume that the robot in the example has two types of sensors. The first is a ring of 24 Polaroid acoustic sensors which have a range of six inches to thirty-five feet. These sensors encircle the robot. The second is a set of infra-red, single-beam proximity sensors which can detect the existence of (but not the range to) an object up to thirty inches away. These sensors are mounted on the corners of the robot facing forward, to the sides, and to the rear (8 sensors in all).
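A "sufficient description" of this hardware could, as a rough sketch, be recorded declaratively as follows; the field names are hypothetical, and ranges are converted to inches (thirty-five feet = 420 inches).

    ;; Hypothetical declarative description of the example robot's sensors.
    (defparameter *sensor-descriptions*
      '((:name sonar-ring
         :type :acoustic :count 24 :geometry :ring      ; Polaroid sensors encircling the robot
         :min-range 6 :max-range 420 :units :inches     ; six inches to thirty-five feet
         :returns :range)
        (:name ir-proximity
         :type :infra-red :count 8 :geometry :corners   ; facing forward, to the sides, and rear
         :max-range 30 :units :inches
         :returns :existence-only)))                    ; detects existence, not range

Knowledge of this kind is what the utilization-detail-knowledge of Section 3.1 draws on when particular sensors are selected for a goal.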

When presented with the abstract-goal, the system first determines what information is initially needed. The abstract-goal contains a movement command "Move-through" and an object "Door-1". The system then sends the abstract-goal to a Move-through LSA, which will then control the execution for that goal. The overall strategy for moving through a location is to first identify the general vicinity, which can then be searched for the exact location. The abstract-goal is analyzed to determine what information is relevant to accomplishing the goal. It is determined that there is an object to detect (a door), and that there is an approximate vicinity in which to search for it (from the World-State-Knowledge). Since there is an object involved, the robot will search for the specified object. This type of command also involves movement, and so a movement LSA is needed to propel the robot to and through the door. Movement over a long distance will require the use of the Obstacle-avoidance LSA, while a shorter distance would only require the Proximity-safety LSA. A decomposition is then selected which includes approaching the vicinity of the doorway (Move-to-vicinity), in parallel with searching for and detecting the doorway (Detect-doorway). Also in parallel would be the Proximity-safety and Obstacle-avoidance LSAs, which would protect the robot from collisions. Once a door-like object is found, all four of these LSAs are deactivated since they will now get in the way. Next, a Verify-doorway LSA is activated to make sure that the doorway truly is a doorway, that it is open, and that the robot could fit through it. The process of verifying the doorway would involve moving the robot next to the doorway, within the range of the proximity sensors, and moving the robot back and forth in front of the opening to verify its existence. Based on the movements, a precise measurement of the opening size can be calculated, and the position of the door frame can be determined. This can be used to line the robot up with the doorway for passing through it. Since the Verify-doorway LSA would only be using the proximity sensors nearest the doorway, an augmented Proximity-safety LSA is used to detect possible collisions during the forward and backward movements. The augmentation of this LSA is through the selection of utilization-details which designate which sensors to actually use. It is important to note that the switch from the four initial LSAs to the Verify-doorway LSA and the augmented Proximity-safety LSA is all controlled by the Move-through LSA.

Within this LSA, the execution of the lower LSAs is monitored, and upon completion of that phase of the task, actions are taken by the Move-through LSA to switch to and monitor the next phase. In a hierarchically similar fashion, the Verify-doorway LSA controls additional sub-LSAs for monitoring the sensors, detecting the door frame, and moving the robot in the desired maneuvers. On all levels, Explication is achieved through each individual LSA performing monitoring, decomposition, selection, etc., and through their control of lower LSAs via their parameters (utilization-details and values). Once the door frame is verified, the Verify-doorway LSA is deactivated and the Close-quarters-move LSA is activated, which in turn activates and controls three parallel sub-LSAs. These three will: 1) keep the robot in line with the door opening as it slowly moves the robot forward through the doorway, 2) monitor the detection of the doorway frame passing each of the proximity sensors mounted on the sides of the robot, and 3) use the front sonar and proximity sensors to monitor for any obstacles in the robot's path. Initially, the robot would be positioned in front of the door frame such that the front sonars would detect the frame, but as the robot passes through the door frame, the sonar beam would become clear of the frame and could then be used to detect potential obstacles in the robot's path. Synchronization of this sensor usage would occur through feedback control mapping of the sensor data into processes which detect each of these occurrences. Such detection triggers the change in control of the LSAs. In attempting to execute the given abstract-goal, the system goes through the process of Explication. At times, movement of the robot is used in conjunction with the sensors to further collect information on the environment (i.e., verification of the doorway's existence). The selection of sensors is based on knowledge about them (utilization-detail-knowledge), in this case information on the range and location of the sensors on the robot. Reactivity to the external world is maintained through constantly managing the sensors, collecting data, and using that data to further guide the robot through its execution. This example can be augmented to include additional LSAs for monitoring potential errors and for recovering from those errors upon their detection.
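The phase switching that the Move-through LSA performs in this example could be approximated, very roughly, by a table of phases, each naming the LSAs run in parallel and the event that ends the phase; the event names and the activate-lsa/deactivate-lsa calls are hypothetical stand-ins, not LSAT code.

    ;; Hypothetical sketch of the Move-through LSA's phase switching.
    (defun activate-lsa (lsa)   (format t "activate ~a~%" lsa))      ; stub
    (defun deactivate-lsa (lsa) (format t "deactivate ~a~%" lsa))    ; stub

    (defparameter *move-through-phases*
      '((:activate (move-to-vicinity detect-doorway obstacle-avoidance proximity-safety)
         :until :door-like-object-found)
        (:activate (verify-doorway augmented-proximity-safety)
         :until :door-frame-verified)
        (:activate (close-quarters-move)        ; itself controls three parallel sub-LSAs
         :until :through-doorway)))

    (defun run-phases (phases wait-for-event)
      "Activate each phase's LSAs, wait for its terminating event, then switch."
      (dolist (phase phases :accomplished)
        (let ((lsas (getf phase :activate)))
          (mapc #'activate-lsa lsas)
          (funcall wait-for-event (getf phase :until))
          (mapc #'deactivate-lsa lsas))))

For instance, (run-phases *move-through-phases* #'print) would step through the three phases, printing the event each phase waits on; in the real system the wait would be performed by the monitoring LSAs described above.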

5. Implementation of the LSAT Framework

The LSAT framework is being implemented on a SUN SPARC 4/330 computer which interacts with a TRC Labmate mobile robot over three separate serial lines. The system is being written in C and Lucid Common Lisp with the Common Lisp Object System (CLOS) and the LispView windowing system. An object-oriented approach has been used in the implementation, where an object corresponds to a Logical Sensor/Actuator (LSA) entity. The LSAT framework encompasses the use of logical entities because of the basic need to view the functions of sensor processing abstractly, apart from the actual implementation. The LSAT architecture also incorporates hierarchical structures, because many sensor processing methods performed by a mobile robot are characterized as a hierarchy of sensors and software. For example, the process of deriving a map of the objects surrounding the robot via a ring of sonar sensors is a hierarchical system. First, there are the individual sensors and their drivers, where each signal has to be processed and controlled. Then, the sensors are combined into a structure called a ring. The resulting readings are fused together to derive locations of probable objects. Then, these probabilities are fused into a temporal collection of sonar-ring readings along with the navigation readings to produce a current probability distribution of object locations. This probability distribution can be processed to produce a map of object locations suitable for navigation/path-planning [Mor88]. Many of the sensor processing steps include the combination of sensor data from multiple sub-sensor sources (either temporally or from separate sensors). Also, there are times when information is only needed from a single sensor source (i.e., reading from just one sonar sensor in the ring to determine the location of a doorway), or different higher-level processes will be using the same lower-level processes. Using the sonar sensors to find moving objects, as opposed to using them to produce a map of object locations in the environment, will require different sensor processing but many similar sub-sensor sources. Thus, the LSAT framework allows the natural hierarchical structure of robot sensor processing to be modeled within the Explication process. Within the LSAT framework, we have developed five classes of LSA which are used to implement the framework's objects. The first is the SENSOR class, which takes raw/processed data as input and outputs data which is further processed (i.e., sensor processing).

The next class is the DRIVER, which accepts as input multiple commands for the actual hardware/drivers. This class acts as an interface to the hardware, performs command scheduling and minor command-conflict resolution, and routes major conflicts to its controlling LSA. Another class is the GENERATOR, which accepts sensor data as input and outputs a command meant for a DRIVER. This class can be viewed as a low-level, feedback-control looping mechanism between a sensor and the actuator. The MATCHER class is much like the SENSOR class in that it takes as input sensor data and processes them for output. The difference is that it also takes as input a description of a goal or error situation, and the processing consists of matching the goal/error to the input sensor data. The output is simply a measurement of the matching process. The last class is the CONTROLLER. This class accepts processed data from any other class, as well as commands/parameter-values from other CONTROLLERs higher up in the hierarchy. The output is the control commands/parameter-values to its sub-LSAs. CONTROLLERs are the main entities used to implement Explication. They control the other LSAs, and manipulate them through the process of Explication. The process of Explication has been defined as a set of functions which correspond (almost one-to-one) to the subproblems of Explication (described earlier). We have developed an algorithm which combines these functions into a complete process and are implementing and experimenting with it on the Labmate robot.

6. Current Results and Conclusions

Currently, we are implementing the process of Explication within the LSAT framework. We are working on the Move-through example as our first major milestone for abstract-goal achievement. As we implement the process of Explication, we expect to improve upon our design. Though still in the initial stages of implementation, we have already discovered some interesting characteristics of the LSAT framework and Explication. As the Explication process is implemented, we have found that many of the LSAs (especially the higher-level ones) are becoming mini expert systems. They contain a great deal of knowledge on how to achieve a specific goal, including recovering from errors. A great deal of the knowledge is in a rule-like form, and we utilize a rule execution engine design which has been augmented to deal with the specifics of Explication (i.e., collecting up-to-date information and interacting with sensors).
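A minimal CLOS rendering of the five LSA classes described in Section 5 might look like the following; the slot names are hypothetical, and only the roles described above are captured in the documentation strings.

    ;; Hypothetical CLOS sketch of the five LSA classes.
    (defclass lsa ()
      ((name :initarg :name :reader lsa-name)))

    (defclass sensor-lsa (lsa) ()
      (:documentation "Takes raw/processed data as input; outputs further-processed data."))

    (defclass driver-lsa (lsa) ()
      (:documentation "Hardware interface: schedules commands, resolves minor conflicts, routes major ones up."))

    (defclass generator-lsa (lsa) ()
      (:documentation "Low-level feedback loop mapping sensor data to a DRIVER command."))

    (defclass matcher-lsa (lsa)
      ((goal-description :initarg :goal-description :accessor goal-description))
      (:documentation "Matches sensor data against a goal or error description; outputs a match measurement."))

    (defclass controller-lsa (lsa)
      ((sub-lsas :initarg :sub-lsas :initform nil :accessor sub-lsas))
      (:documentation "Accepts data and parameter values from higher CONTROLLERs; controls its sub-LSAs through their utilization-details and values."))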

Another interesting result is that we are finding the lowest-level sensor and actuator driver LSAs to be fairly abstract and thus transparent to the hardware (robot and sensors). We are looking into adding another robot to the domain to determine the flexibility of interacting with multiple types of hardware. We are also considering expanding into a multi-CPU configuration. We are evaluating the benefits of using a set of VME-based processors running VxWorks (from Wind River Systems) to execute some of the lower-level LSAs, and using the Sun processor to execute the higher-level ones. The development of the LSAT framework and Explication will provide greater insight into the use of sensors for accomplishing intelligent behavior in mobile robots. We hope to identify how abstract plans should be executed, and what sensor, actuator, and processing knowledge is necessary to achieve them.

References

[All87] Peter K. Allen, "A Framework for Implementing Multi-Sensor Robotic Tasks," Proceedings of the DARPA Image Understanding Workshop, February 1987, pp. 392-398.

[Ark90] Ronald C. Arkin, "The Impact of Cybernetics on the Design of a Mobile Robot System: a Case Study," IEEE Trans. on Systems, Man, and Cybernetics, Vol 20, N 6, 1990, pp. 1245-1257.

[Bro86] R. A. Brooks, "A Robust Layered Control System for a Mobile Robot," IEEE Journal of Robotics and Automation, Vol RA-2, N 1, 1986, pp. 14-23.

[Doy86] R. J. Doyle, D. J. Atkinson, R. S. Doshi, "Generating Perception Requests and Expectations to Verify the Execution of Plans," Proc. AAAI-86, Philadelphia, August 1986, pp. 81-87.

[Fir87] R. J. Firby, "An Investigation into Reactive Planning in Complex Domains," Proc. Sixth National Conference on Artificial Intelligence, Seattle, July 1987, pp. 202-206.

[Geo87] M. Georgeff and A. Lansky, "Reactive Reasoning and Planning," Proc. Sixth National Conference on Artificial Intelligence, Seattle, July 1987, pp. 677-682.

[Hen84] T. Henderson and E. Shilcrat, "Logical Sensor Systems," Journal of Robotic Systems, Vol 1, N 2, 1984, pp. 169-193.

[Hen85] T. Henderson, C. Hansen, B. Bhanu, "The Specification of Distributed Sensing and Control," Journal of Robotic Systems, Vol 2, N 4, April 1985, pp. 387-396.

[Hut89] S. A. Hutchinson and A. Kak, "Planning Sensing Strategies in a Robot Work Cell with Multi-Sensor Capabilities," IEEE Trans. on Robotics and Automation, Vol RA-5, N 6, 1989, pp. 765-783.

[Hut90] S. Hutchinson and A. Kak, "Spar: A Planner that Satisfies Operational and Geometric Goals in Uncertain Environments," AI Magazine, Vol 11, N 1, 1990, pp. 31-61.

[Kha86] O. Khatib, "Real Time Obstacle Avoidance for Manipulators and Mobile Robots," International Journal of Robotics Research, Vol 5, N 1, 1986.

[Lum86] V. J. Lumelsky and A. A. Stepanov, "Dynamic Path Planning for a Mobile Automaton with Limited Information on the Environment," IEEE Trans. on Automatic Control, Vol AC-31, N 11, 1986, pp. 1058-1063.

[Lum89] R. Lumia, J. Fiala, A. Wavering, "The NASREM Robot Control System Standard," Robotics & Computer-Integrated Manufacturing, Vol 6, N 4, 1989, pp. 303-308.

[Lyo89] D. M. Lyons and M. A. Arbib, "A Formal Model of Computation for Sensory-Based Robotics," IEEE Trans. on Robotics and Automation, Vol RA-5, N 3, 1989, pp. 280-293.

[Mor88] H. P. Moravec, "Sensor Fusion in Certainty Grids for Mobile Robots," AI Magazine, Vol 9, N 2, Summer 1988, pp. 61-74.

[Pay90] D. W. Payton et al., "Plan Guided Reaction," IEEE Trans. on Systems, Man, and Cybernetics, Vol 20, N 6, 1990, pp. 1370-1382.

[Ros89] S. Rosenschein and L. Pack Kaelbling, "Integrating Planning and Reactive Control," Proc. NASA Conference on Space Telerobotics, Vol 2, G. Rodriguez and H. Seraji (eds), JPL Publ. 89-7, Jet Propulsion Laboratory, Pasadena, CA, pp. 359-366.