Knowledge, Execution, and What Sufficiently Describes Sensors and Actuators [Accepted by the 1992 SPIE Conference on Mobile Robots VII]
John Budenske and Maria Gini
Department of Computer Science, University of Minnesota
4-192 EE/CSci Building, 200 Union Street SE, Minneapolis, MN 55455
ABSTRACT

To be useful in the real world, robots need to be able to move safely in unstructured environments and achieve their given tasks despite unexpected environmental changes or failures of some of their sensors. The variability of the world makes it impractical to develop very detailed plans of actions before execution, since the world might change before execution begins and thus invalidate the plan. We propose to generate the very detailed plan of actions needed to control a robot at execution time. This allows us to obtain up-to-date information about the environment through sensors and to use it in deciding how to achieve the task. Our theory is based on the premise that proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of execution. In our approach we produce the detailed plan of primitive actions and execute it using an object-oriented approach, in which primitive components contain domain-specific knowledge as well as knowledge about the available sensors and actuators. These primitives perform signal and control processing and serve as an interface to high-level planning processes. This paper addresses the issue of what knowledge needs to be available about sensors, actuators, and processes in order to integrate their usage and control them during execution. The proposed method of execution works for any sensor/actuator existing on the robot, given such knowledge.

I. KNOWLEDGE ABOUT UTILIZING SENSORS/ACTUATORS

A very large part of our civilized world is unstructured and dynamic. This fact has been one of the leading obstacles to the large-scale introduction of robots into everyday life. Though there are factories and buildings in which highly structured environments have been introduced for easy integration of robots, the ratio of unstructured to structured environments will always be high. Thus, society's utilization of robotics will depend on the ability of robots to move safely in these unstructured and dynamic environments. This means that robots must achieve their given tasks despite unexpected changes in their environment or failures of their sensors.

The variability of the world makes it impractical to develop very detailed plans of actions before execution (the world might change before execution begins and thus invalidate the plan). Attempting to determine, prior to execution, everything needed to achieve a goal across the domain of abstract-goals, external world states, and sensor availability would be an endless task. It makes more sense to develop an abstract, detail-less plan, before execution, of what generally should be accomplished, and then make the necessary details explicit during execution. Of course, unstructured and dynamic environments require the robot to remain reactive to environment changes, and so "directed control from a plan" must give way to "directive reaction" within the robot. Thus, we are faced with a problem: how can a robot execute an abstract plan so as to accomplish goals and still remain reactive to its environment? Our approach to this problem has been to determine, at execution time, the sensors, actuators, and processes necessary for accomplishing an abstract plan, as well as the commands necessary to control these entities. Depending on information sensed from the environment, the flow of data from sensors into actuators is continuously reconfigured.
Our approach is based on the premise that proper use of knowledge increases the robustness of plan execution without reducing the ability to react to the environment [3,4,7]. The central dilemma is defining how to transform the given abstract-goal into an explicit representation of what is necessary to achieve the goal. We call this transformation process Explication and have discussed its implementation in previous papers [5,6].
In developing the concept of Explication, a number of sub-problems quickly became apparent:

- What information is relevant to accomplishing the goal and thus needs to be sensed?
- What actuation details need to be utilized to accomplish the goal?
- How is sensed information used to aid the generation of actuator commands?
- How does the system know when an error has occurred, and when the goal has been accomplished?
- What knowledge needs to be known about sensors, actuators, and processes in order to be able to integrate their utilization into a complete system, and then control the system for success?
Many of these questions have been addressed in the aforementioned papers. In this paper we concentrate on the last of the above questions. In order to be able to utilize a sensor, actuator, or process, knowledge about its usage must be available. This knowledge must cover the entity's functionality, control characteristics, and limitations. We refer to this knowledge as a Sufficient Description of an entity and define it to be all of the knowledge about the entity which is required by the Explication process to utilize the entity in accomplishing goals. First, we discuss the concept of Explication, and then present the contents of a Sufficient Description and our current results for an indoor mobile robot.

II. EXPLICATION

We view a plan as a collection of partially ordered abstract-goals (such as "Move Through Doorway"), much like what a classical planning system would produce. Executing an abstract-goal is difficult because the abstract-goal implicitly represents a large collection of information and details. To be executed, a plan must explicitly specify the details of how to utilize the sensors and actuators. For the same abstract-goal, different environment situations may require different sensors, different programming and control of the sensors, and different strategies. Explication is also difficult because of the need to adapt to the environment as the command is being executed. Because of this dependency on the current situation, Explication must occur at execution time.

In order to further understand the Explication process, we have broken it down into subprocesses. Explication for different abstract-goals is accomplished through different combinations of the subprocesses. The Explication process consists of:

(a) determining the relevant information from the abstract-goal and current world state as sensed by the robot's sensors;
(b) decomposing the abstract-goal into sub-goals;
(c) selecting the source of the relevant information;
(d) collecting the relevant information and executing primitive commands on sensors or actuators;
(e) detecting and resolving conflicts which occur between sub-goals;
(f) using the information collected in a feedback control loop to control the sensors and actuators;
(g) monitoring the relevant information to detect goal accomplishment and error conditions.
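Taken together, the subprocesses suggest a control skeleton of the following shape. This is only a sketch under assumed names: each function below is a hypothetical stand-in for one of the subprocesses (a)-(g), and, as discussed next, a real abstract-goal combines the subprocesses in its own way.

    ;; Hypothetical sketch of one Explication cycle for a single abstract-goal.
    ;; Each helper stands in for one of the subprocesses (a)-(g); none of these
    ;; names come from the actual LSAT implementation.
    (defun explicate (abstract-goal world-state)
      (let* ((rins      (determine-relevant-information abstract-goal world-state)) ; (a)
             (sub-goals (decompose-goal abstract-goal))                             ; (b)
             (sources   (select-information-sources rins)))                        ; (c)
        (loop
          (let ((info (collect-information sources)))                              ; (d)
            (resolve-conflicts sub-goals)                                          ; (e)
            (control-sensors-and-actuators info)                                   ; (f)
            (case (monitor info abstract-goal)                                     ; (g)
              (:accomplished (return :success))
              (:error (return :failure)))))))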
Though each of these subproblems is individually solvable, the real challenge is in combining them. For example, an abstract-goal of turning the robot towards a moving object could be defined as a feedback control loop with little knowledge used in decomposing, determining, and selecting. In comparison, an abstract-goal of moving the robot through a doorway can involve detailed knowledge for decomposing, determining, and detecting and resolving conflicts. Explication knowledge decomposes it into sub-abstract-goals of finding the doorway and moving towards it, as well as feedback control mappings for sweeping an area with the sensors, searching for the doorway until it is found, and progressively moving through the doorway until clearing it.

Two concepts are crucial for Explication. First is the notion of Relevant Information Need (RIN). Explication must deal with the question: "what information (both sensed perception and internal state) is relevant and thus is needed in order to execute this abstract-goal?" Explication views such information, and thus the need for it, abstractly. Implicit goal abstraction is replaced by explicit informational-need abstraction. To put it another way, the abstract goal "how to achieve X" is replaced by two things: a) explicit knowledge on "how to achieve" using the sensors and actuators, and b) an explicit representation of the needed information. This representation uses abstraction to reduce dependency on the source of the information. Abstract informational needs can then be met by abstract information providers (i.e. sensors). We have utilized the concept of "Logical Sensor" [12,13] to be such an abstract information provider, and we have expanded it to allow other logical entities.
Equally crucial is the second concept of Utilization-Detail. Explication must deal with the question: "what details of using sensors, actuators, and processes are necessary in order to execute this abstract-goal?" Again, abstraction is used to explicitly represent the need for detailed control of sensors and actuators. Control primitives are the robot commands and parameters for both discrete and continuous actions. Extending the Logical Sensor concept to other logical entities accounts for this explicit abstraction.

Since the abstraction of a goal is hierarchical, Explication uses the above two concepts at each level of the hierarchy. On a single level of abstraction (i.e. for a single abstract-goal), Explication uses knowledge to explicitly represent the information and detail needs. These needs are met by Logical Sensors and other logical entities. These entities either map directly to the corresponding information/details, or they each apply Explication hierarchically, so that more knowledge is used to explicitly determine further information and detail needs.

We propose a framework for Explication, called Logical Sensors/Actuators (LSA). The framework is being implemented in an object-oriented programming environment, resulting in a flexible robotic sensor/control system which we call the Logical Sensor Actuator Testbed (LSAT). Primarily, the framework is a collection of reconfigurable components and flexible component structures which can be combined to achieve abstract-goals through execution. The framework consists of the knowledge, mechanisms, and structures used by the Explication process (see Figure 1). The concept of Sufficient Description is included as one type of knowledge.
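Since the framework is implemented in an object-oriented environment (Section III), the common shape of a LSA entity can be pictured as a class whose slots hold its information needs (RINs) and its control parameters (Utilization-Detail). The following CLOS sketch is illustrative only; the class and slot names are our assumptions, not the actual LSAT definitions.

    ;; Illustrative CLOS sketch of a generic LSA entity; all names are assumptions.
    (defclass lsa ()
      ((name          :initarg :name :reader lsa-name)
       (sub-lsas      :initform '() :accessor lsa-sub-lsas
                      :documentation "Entities spawned hierarchically by Explication.")
       (rin-sources   :initform (make-hash-table :test #'eq)
                      :accessor lsa-rin-sources
                      :documentation "Relevant Information Need -> provider entity.")
       (result-params :initform '() :accessor lsa-result-params
                      :documentation "Utilization-Detail: symbolic control parameters.")
       (output        :initform nil :accessor lsa-output)))

    ;; Every logical entity exposes the same interface, so one information
    ;; provider can be swapped for another without changing the consumer.
    (defgeneric lsa-step (entity)
      (:documentation "Run one cycle of the entity's primary behavior."))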
III. IMPLEMENTATION OF LSAT

The LSAT framework is being implemented on a SUN SPARC 4/330 computer which interacts with a TRC Labmate mobile robot over two separate serial lines. The system is written in C and Lucid Common Lisp with the Common Lisp Object System (CLOS) and the LispView windowing system. An object-oriented approach has been used in the implementation, where an object corresponds to a Logical Sensor/Actuator (LSA) entity. Within the LSAT framework, we have developed five classes of LSAs:

1. SENSOR: Takes raw or partially processed data as input and outputs further processed data.
2. DRIVER: Accepts as input multiple commands for individual hardware/drivers, acts as an interface to the hardware, performs command scheduling, resolves minor command conflicts, and routes major conflicts to its controlling LSA.
3. GENERATOR: Accepts sensor data as input and outputs a command meant for a DRIVER. This class can be viewed as a low-level, feedback-control looping mechanism between a sensor and the actuator.
4. MATCHER: Much like the SENSOR, only it also takes as input a description of a goal or error situation. The processing consists of matching the goal/error to the input sensor data.
5. CONTROLLER: Accepts processed data from any other class, as well as commands and parameter-values from other CONTROLLERs higher up in the hierarchy, and outputs the control commands and parameter-values (referred to earlier as Utilization-Detail) to its sub-LSAs. CONTROLLERs are the main entities used to implement Explication. They control the other LSAs and manipulate them through the process of Explication.

In addition to the five LSA classes, another object class is used to represent collections of knowledge about physical entities in the robot's environment. This class, called LANDMARK, contains information on the expected characteristics of the real entity, such as its expected location in the world, size, etc. Any information which might be needed by the system, or which is derivable within the system by other LSAs, can be contained in this object. For example, when a LSA for detecting doors detects a specific doorway, the LANDMARK object corresponding to the physical doorway can be updated with the last known location of the doorway. When the system later needs the location of the doorway and the door detection LSA is unable to detect it, the corresponding LANDMARK object can be accessed for an estimated location to be used until the doorway detection LSA can lock on and start tracking the doorway. The interface to LANDMARK objects is exactly the same as to any other LSA, and thus swapping a LANDMARK object's output with that of a LSA is the same as swapping between two LSAs' outputs.

The implementation of Explication has been defined as a set of functions which correspond to the subproblems of Explication described earlier. We have developed an algorithm which combines these functions into a complete process and are experimenting with it on the Labmate robot. One of the purposes of the framework is to differentiate the domain-dependent and domain-independent mechanisms. This will allow us to implement a platform-independent testbed, and to perform empirical studies on the use of knowledge during execution and on the identification of relevant information, utilization-details, and the contents of a Sufficient Description. From this, we have derived a "template" for each LSA whose "slots" must be filled in with knowledge. Filling in this template constitutes deriving the Sufficient Description of a given LSA, and it is our belief that only through empirical research can such knowledge be developed.

IV. AN EXAMPLE OF EXPLICATION

To illustrate our approach, we will use an example derived from our lab experiments. We have been developing the knowledge necessary for a robot to robustly find and pass through an arbitrary doorway. This abstract-goal is called MoveThrough(DOOR1), where DOOR1 is the designated door to move through. In order to explain the execution of this abstract-goal, we assume that a set of mid-level LSAs has been implemented.
This set would include: a MoveToVicinity, which moves the robot into a designated vicinity; a Sit-N-Spin, which spins the robot around in one place; a Circle, which circles the robot around a designated position at a designated radius; a SimpleMove, which simply moves the robot from one position to another (utilizing the following LSAs); a SearchWall->Door, which resorts to looking for walls in order to find doorways in them; a GoToXY and a Pounce, which utilize different methods to move the robot; an Avoid, which detects and avoids obstacles as the robot is moving; and a ProximitySafe, which upon detection of a close obstacle (within millimeters) takes control of the robot to guide it away from the obstacle until the SimpleMove, GoToXY, Pounce, and Avoid can return to guiding without error. Also available are a DetectDoor, a VerifyDoor, a MonitorDoor, and a CloseQuartersMove, which guides the robot through the door opening. Each of these mid-level LSAs further decomposes into low-level sensor processing, sensor control, robot command generation, and actuator driver LSAs. For brevity we will not discuss the low-level details.

When presented with the abstract-goal, the system first determines what information is initially needed. The abstract-goal contains a movement command, "MoveThrough", and an object, "DOOR1". The system sends the abstract-goal to a MoveThrough LSA, which will then control the execution for that goal.
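Anticipating the phase structure described next, this hand-off can be pictured as a CONTROLLER subclass of the generic entity sketched earlier. Again, the names (current-phase, activate, and the phase keywords) are hypothetical and not the actual LSAT interface.

    ;; Hypothetical CONTROLLER sketch: a MoveThrough LSA receives the
    ;; abstract-goal and coordinates the phases shown in Figures 2 through 5.
    (defclass controller-lsa (lsa)
      ((abstract-goal :initarg :goal :reader controller-goal)))

    (defclass move-through (controller-lsa) ())

    (defmethod lsa-step ((c move-through))
      ;; Each phase activates its own sub-LSAs and deactivates the last phase's.
      (ecase (current-phase c)          ; current-phase is an assumed helper
        (:search (activate c '(move-to-vicinity detect-door)))
        (:verify (activate c '(verify-door)))
        (:pass   (activate c '(monitor-door close-quarters-move)))))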
The overall strategy for moving through a location is to first identify the relative vicinity, which can then be searched for the exact location. The MoveThrough coordinates three main phases: 1) moving to the vicinity and searching for the doorway; 2) verifying detected doorways; and 3) moving through the doorway (see Figure 2). The initial abstract-goal is analyzed to determine what information is relevant to accomplishing it. It is determined that there is an object to detect (a door), and that the object has an approximate vicinity in which to search for it (from a LANDMARK object corresponding to DOOR1). Since there is an object involved, the robot will search for the specified object.
This type of command involves movement, and so a movement LSA is needed to propel the robot to and through the door. Movement over any distance can require many different types of maneuvers, and knowledge about which to use based on the situation is required. Thus a decomposition is selected which includes approaching the vicinity of the doorway (MoveToVicinity) in parallel with searching for and detecting the doorway (DetectDoor). Also, a LANDMARK object for DOOR1 is activated, along with a LSA for determining the type of doorway being dealt with (DoorwayTyper). The doorway type is important in the selection of other LSAs and their control parameters, including the DetectDoor. Doors which open in versus doors which open out will appear differently to many of the sensor processing LSAs, and thus a prediction of the door type allows more specific processing to occur. Doors in corners and at the ends of hallways pose processing challenges as well. Finally, a controller for the sonar sensors (ProximitySonar) spawns sub-LSAs of 24 Range-Psonar LSAs and a driver to coordinate their activity (ProximitySonarDrive). This will provide input data for other LSAs. This is shown in Figure 3. (Note that in Figures 3 through 5 the letter at the front of each LSA oval corresponds to the first letter of that LSA's class.)

The sub-LSAs of SimpleMove include the Avoid, GoToXY, Pounce, and ProximitySafe LSAs, which propel the robot as well as protect it from collisions. The ProximitySafe LSA contains knowledge about what type of input is needed (ProximityInfraRed). If the LSAs necessary to provide these data are not yet created, the ProximitySafe LSA will initiate their creation. Since the MoveThrough LSA already created the ProximitySonar LSA, the Avoid determines that its input needs can be resolved by using the ProximitySonar LSA. Once a door-like object is found, the MoveToVicinity (and its sub-LSAs) are deactivated. The DetectDoor LSA is kept to aid in monitoring the other phases. Note that the DetectDoor output is piped to the DoorwayTyper in case the type of door detected is different from what was expected. This would lead to further investigation of the door, to determine whether the right doorway has been found or whether the door in the doorway is positioned so as to confuse the detector algorithm.

In the second phase, shown in Figure 4, a VerifyDoor LSA is activated to make sure that the doorway truly is a doorway, that it is open, and that the robot can fit through it. It initially sets up a DoorSize LSA to monitor the size calculations from other LSAs to determine how much clearance there is for the robot to pass through. If there is a large clearance, the robot engages simple, yet imprecise, LSAs for propelling itself through the doorway. If it is a close fit, additional investigation must occur to verify the size and to better align the robot to the opening. The latter process of verifying the doorway involves three sub-phases: a) moving the robot next to the doorway, within the range of the proximity sensors (Touch and Perpendicular); b) moving the robot back and forth in front of the opening to verify its existence and size (Slide); and c) centering the robot on the doorway (Slide and Perpendicular). Both proximity and sonar sensors are used. Based on the movements within the Slide LSA, a more precise measurement of the doorway opening size can be calculated, and the position of the door frame can be determined. This can be used to line the robot up with the doorway.

Each of the sub-phases of VerifyDoor is accomplished through the coordination and control of each of the sub-LSAs. At different points in time during a single sub-phase, each sub-LSA is performing a specific behavior. Subtle changes in behaviors can greatly influence the performance within a sub-phase, and changes in behaviors are controlled by the VerifyDoor LSA through selection of Utilization-Detail values. Thus, VerifyDoor coordinates all activities associated with achieving the goal of verifying that the doorway is truly in front of the robot. It is important to note that the switch from the initial LSAs to the VerifyDoor LSA is controlled by the MoveThrough LSA. Within this LSA, the execution of the lower LSAs is monitored, and upon completion of that phase of the task, actions are taken by the MoveThrough LSA to switch to, and monitor, the next phase.

Once the door frame is verified (assuming it is a small one), the VerifyDoor LSA is deactivated and two new LSAs are activated to complete the achievement of the overall MoveThrough goal (shown in Figure 5). The first is a monitor LSA, MonitorDoor, which uses the sonar sensors to monitor the robot's placement and movement through the doorway. It also detects any new obstacles which may appear. This serves as a check on the progress of the second new LSA, CloseQuartersMove. Upon activation, the CloseQuartersMove LSA activates and controls a number of parallel sub-LSAs, which accomplish these sub-goals: (1) propel the robot through the doorway to a predicted "other side" (Propel); (2) keep the robot in line with the door opening as the robot moves through the doorway, and monitor for obstacles in the forward path (FrontAlignment); (3) monitor the detection of the doorway frame passing each of the side-mounted proximity sensors (FramePassings); (4) resolve various types of mis-alignments of the robot to the door frame (Shift); (5) monitor the movement of the robot to differentiate between pauses (used by Shift and Propel) and low-velocity stagnation (MoveDetect). Often at low velocities, the wheel motor drives will freeze up and require a reset of the velocity register to resolve.

Initially, the robot is positioned in front of the door frame such that the front sonars detect the frame, but as the robot passes through the door frame, the sonar beam clears the frame and can then be used to detect potential obstacles in the robot's path. Synchronization of this sensor usage occurs through feedback control mapping of the sensor data into processes which detect each of these occurrences. Such detection triggers the change in control of the LSAs. In the lab experiments, we identified both relevant information needs and Utilization-Details which are abstracted by the goals. Within the goal of CloseQuartersMove, there are a number of implicit relevant information needs.
These include: the location of the left and right sides of the door frame, the size of the door frame, the location relative to other structures (such as walls), whether anything else is obstructing the path, how the robot is aligned to the door frame during the movement, and at what points in time the various sensors pass the door frame. All of this information is needed to aid in the selection of various Utilization-Details and thus in achieving the goal. Within CloseQuartersMove, each relevant information need is explicitly, and yet abstractly, identified and then matched to a LSA which will provide the information. As informational needs are resolved, Utilization-Details are identified as necessary for the execution of the goal. Again, these implicit details are explicitly identified. Some of the important details are: steering directions, speed, acceleration, synchronization of turning versus position within the door frame, and selection of other LSAs to collect data or control sub-processes (such as obstacle avoidance or door frame detection). Reactivity to the external world is maintained through constantly managing the sensors, collecting data, and using those data to further guide the robot through its execution. Often, feedback control loops with very short data-flow paths are needed. This example can be augmented to include additional LSAs for monitoring potential errors, and for recovery from those errors upon their detection. At times, movement of the robot is used in conjunction with the sensors to further collect information on the environment (i.e. verification of the doorway's existence). The selection of sensors is based on knowledge about them, in this case information on the range of each sensor and its location on the robot. This corresponds to the Sufficient Description of a LSA mentioned earlier. Next, the concept of a Sufficient Description is further explored, and the above example is extended to illustrate how it would be used.
V. CONTENTS OF A SUFFICIENT DESCRIPTION

The contents of a Sufficient Description for a LSA have been developed empirically as the LSAT has been implemented. Though the research on Sufficient Descriptions of LSAs is not as far along as that on Explication, their utilization within Explication is well under way. From our results we have developed a template in which Sufficient Description knowledge is grouped into six categories (see Figure 6).
Selection Descriptions:
    Primary-Effects: LM-MOVEMENT
    Side-Effects: SLOW-MOVEMENT, PROPELLING
    Appropriate-Usage-Context: move-through, move-under-table, close-quarters-move, push-object, traverse-hallway
    Inappropriate-Usage-Context: avoid-fast-obstacles, cross-open-area
    Selection-Utility-Function: none

Results Descriptions:
    Output-Type: ppgo-commands
    Output-Format: L-drv-LM-format
    Output-Resolution: 5mm
    Output-Accuracy: 75%
    Output-StDev: (-10mm . +10mm)
    Output-Repeatability:
    RIV-set: init, reset, ready, shutdown, goal, null-input, waiting, propelling, aiming, idle?

Needed Information Descriptions (for each RIN):
    Number-RINs: 3
    Name: Bump-Detected
        Expected-Data-Type: binary
        Contemporaneous-Level: immediate
        Source-Selection-Function: n/a
        Source-Verification-Function: n/a
        Mapping-to-Input: input_bump_source
    Name: Move-Detected
        Expected-Data-Type: binary
        Contemporaneous-Level: immediate
        Source-Selection-Function: n/a
        Source-Verification-Function: n/a
        Mapping-to-Input: input_move_source
    Name: Polar-Direction
        Expected-Data-Type: angle
        Contemporaneous-Level: immediate
        Source-Selection-Function: n/a
        Source-Verification-Function: n/a
        Mapping-to-Input: input_dir_source

Utilization (Result-Parameters) Descriptions (for each RP):
    Number-RPs: 3
    Name: Speed
        Type: velocity
        Units: mm/sec
        Value-Ranges: slow, moderate, fast
        Value-Increments: faster, slower
        Default-Value: slow
        Result-Effects-Mapping: (Table (slow . 25) (moderate . 40) (fast . 60) (faster . +5) (slower . -5))
    Name: Aim
        Type: direction
        Units: degrees*100
        Value-Ranges: right, left, straight
        Default-Value: straight
        Result-Effects-Mapping: (Table (right . +50) (left . -50) (straight . 0))
    Name: ForeBack
        Type: direction
        Units: n/a
        Value-Ranges: forward, backward
        Default-Value: forward
        Result-Effects-Mapping: (Function setforeback forebackparmlist)

Integration Descriptions:
    Interactions-to-Monitor-For: obstacle-avoid
    Errors-to-Monitor-For: bump, front-align
    Time-to-Execute-One-Cycle: 0.01sec
    Hardware-Resources-Required: none
    Limitations: none

Execution Methods Descriptions:
    Method of Initialization
    Method of Resetting
    Method of Shutdown
    Method of Fault Checking
    Method of Returning RIV
    Method of Accessing Result Data
    Method of Primary Behavior

Figure 6: Contents of a Sufficient Description for the LSA L-gen-propel
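One way to picture how such a template becomes machine-usable is to store each filled-in Sufficient Description as a structured object that CONTROLLERs can query during selection. The encoding below is an illustrative assumption (not the actual LSAT representation), using a few of the L-gen-propel values from Figure 6.

    ;; Illustrative encoding of part of Figure 6 as a nested property list.
    (defparameter *l-gen-propel-sd*
      '(:selection (:primary-effects (lm-movement)
                    :side-effects (slow-movement propelling)
                    :appropriate-usage-context
                     (move-through move-under-table close-quarters-move
                      push-object traverse-hallway)
                    :inappropriate-usage-context
                     (avoid-fast-obstacles cross-open-area))
        :rins ((:name bump-detected   :mapping input_bump_source)
               (:name move-detected   :mapping input_move_source)
               (:name polar-direction :mapping input_dir_source))))

    ;; A CONTROLLER's selection query over a library of such descriptions:
    (defun candidate-p (sd effect context)
      (let ((sel (getf sd :selection)))
        (and (member effect  (getf sel :primary-effects))
             (member context (getf sel :appropriate-usage-context)))))

    (candidate-p *l-gen-propel-sd* 'lm-movement 'close-quarters-move) ; => non-NIL (a match)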
The first category, Selection Descriptions, contains the knowledge needed by a CONTROLLER to decide whether this LSA would suffice (and, later, would be best) as a source to satisfy a RIN. This knowledge is very much like that used in automated planning to select operators to expand a plan network. The next category is the Results Descriptions, which pertain to knowledge about the data which the LSA outputs. Some of this information is also used by CONTROLLERs in the selection of LSAs.

The Needed Information Descriptions are the knowledge required for each of the "data inputs" of the LSA. Input data for a LSA are viewed as an Informational Need. The CONTROLLER will use this knowledge during its determination of Relevant Informational Needs at its level (i.e. a CONTROLLER's RINs will include the RINs corresponding to sub-LSA data inputs). More importantly, the LSA itself uses this knowledge during its execution as a guide in collecting its input data from its indicated sources, as well as in determining when its RINs are unsatisfied (i.e. no source named, or the source LSA has become faulty). For example, the Source-Verification-Function is used to interact with a selected source LSA to determine whether the output data from that LSA are valid for use. The Source-Selection-Function will be used by the CONTROLLER to determine which of two acceptable LSAs would "best" satisfy a RIN. Currently this research is not investigating how to select the best sensor (including sensor positioning), but only acknowledges the need for such knowledge within the Sufficient Description [1,11].

The Utilization Descriptions pertain to knowledge about the input parameters of the LSA used to control its behavior. Utilization-Detail knowledge covers the acceptable ranges of input values and attempts to describe the correspondence between the numerical parameter values used by the actual algorithms within the LSA and the symbolic parameter values used by the CONTROLLER to manipulate the LSA. The Integration Descriptions are the knowledge used to ensure that the instantiation of the LSA will proceed correctly and that potential interactions are checked and monitored for. Limitations refers to miscellaneous information on what interactions may reduce the capabilities of the LSA.

For example, across multiple LSAs for generating movement commands, it is difficult for a CONTROLLER to contain knowledge about what velocities are acceptable for each individual LSA and what effects each desired velocity has. Thus, it is better for the CONTROLLER to communicate symbolically with the sub-LSA. Instead of passing down a specific velocity (i.e. 50 mm/sec), relative values are passed, such as "slow", "moderate", or "fast". The LSA then decides on the actual numerical value. If, during execution, the CONTROLLER desires faster movement, it just passes another relative symbolic value, "faster", to the active LSA, and the LSA will adjust its velocity calculation appropriately. Thus, knowledge about specific calculations and adjustments to velocity is held within the sub-LSA, and the CONTROLLER only requires knowledge about which abstract, symbolic parameter values to use for a specified type of Utilization-Detail.

The last category is the Execution Methods Descriptions, which are procedural knowledge about what needs to be done within a LSA in order to achieve a specific internal state. These are the algorithms for initialization, resetting, shutting down, and performing the primary behavior of the LSA.
These methods conceptually constitute the working "guts" of the LSA. The contents of a Sufficient Description may vary, or emphasize different aspects, from one LSA type to another. For example, a DRIVER LSA's Results Descriptions are not as important as a SENSOR LSA's, since DRIVERs do not "output" to other LSAs; they only communicate with the hardware. It is important to note that Sufficient Descriptions of CONTROLLER LSAs must also be gathered. Many CONTROLLERs will utilize other CONTROLLERs as sub-LSAs and thus will need knowledge about the sensors, actuators, and processes that the sub-CONTROLLER utilizes. Since much of this information depends on the selected sub-sub-LSAs of the sub-CONTROLLER, CONTROLLERs must have mechanisms to progressively collect and update their RIN and Utilization-Detail knowledge as they determine sub-LSAs. These mechanisms are still in development.

VI. AN EXAMPLE OF USING THE SUFFICIENT DESCRIPTION

The example given in Figure 6 is for a LSA of the class L-Generator, subclass L-gen-propel (which corresponds to Propel in the previous example). L-gen-propel takes in information about directions to move and about obstacles in its path, and generates actuator commands for moving the robot forward or backward. The Sufficient Description of L-gen-propel is used by other LSAs to determine how to interact with L-gen-propel. To further illustrate this example, assume that one of the abstract-goals the system is attempting to accomplish is to move the robot directly through a narrow doorway, and that the robot is already aligned to the passageway. This particular abstract-goal corresponds to a L-Controller LSA of the sub-class L-ctrl-close-quarters-move (corresponding to CloseQuartersMove). This LSA attempts to move the robot slowly through the narrow passage, and to monitor the movement for mis-alignments, obstacles, and proper progression.

In applying its Explication knowledge to the problem, L-ctrl-close-quarters-move determines that it requires the robot to move forward (this is interpreted as a need for a LSA which will produce velocity/steering actuator commands). This need can be satisfied by selecting a LSA based on features that map to knowledge within the Sufficient Descriptions of all LSAs in the system. The L-ctrl-close-quarters-move queries the library of LSAs for one which has a behavior, or Primary-Effect, of LM-MOVEMENT. Since there are a number of LSAs which deal with moving the robot (LM-MOVEMENT), other features (such as Side-Effects, Appropriate-Usage-Context, and Inappropriate-Usage-Context) are used to further narrow the choice to a single LSA. L-gen-propel is designed to move the robot slowly through an area of potential obstacles where no contacts are expected. Thus, SLOW-MOVEMENT is a desirable behavior. Also, both the abstract-goals of close-quarters-move and move-through fall within the Appropriate-Usage-Context.

Once L-gen-propel has been selected, L-ctrl-close-quarters-move will extract information about the LSA. For example, L-gen-propel requires three types of input data: Bump-Detected, Move-Detected, and Polar-Direction. All of these become additional Relevant Information Needs of L-ctrl-close-quarters-move, and sources for each input need are determined and selected through their Sufficient Descriptions (in the same manner in which L-gen-propel was selected). Upon selection, data-flow paths are established between the corresponding LSAs' outputs and L-gen-propel. L-ctrl-close-quarters-move "hooks" the source LSAs into L-gen-propel through the slot Mapping-to-Input (i.e. the name of the LSA selected for Bump-Detected is placed in L-gen-propel's slot input_bump_source). When L-gen-propel requires its input data, it accesses them directly from the LSAs which are deriving them. L-ctrl-close-quarters-move utilizes L-gen-propel's Sufficient Description to determine what other information is relevant and where it is needed. In the same manner, the slots Errors-to-Monitor-For and Interactions-to-Monitor-For also provide indications as to other LSAs which may be needed to aid in monitoring goal accomplishment. Thus, the Sufficient Description of L-gen-propel aids in determining information relevant to accomplishing the goal.

There are two further ways in which the Sufficient Description is used by other LSAs to interact with L-gen-propel. Both deal with the control of the LSA during its execution. The first is in utilizing the Execution Methods (Initialization, Resetting, etc.). As mentioned before, these are procedural knowledge mechanisms which all LSAs must have, and they are used extensively during LSA interactions. The second is in interacting with the Result-Parameters of the LSA. Simply put, L-ctrl-close-quarters-move's knowledge about moving through the door passageway includes the fact that most LM-MOVEMENT LSAs have Result-Parameters dealing with Speed, Aim, and ForeBack, as well as knowledge about potential values for them. The Value-Ranges and Value-Increments slots are accessed to identify allowable values for these parameters. As the LSA is executing and propelling the robot through the passageway, L-ctrl-close-quarters-move monitors the progression and adjusts the parameter values which it has knowledge about.
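For instance, the Speed parameter's Result-Effects-Mapping from Figure 6 can be read as a small lookup table in which absolute symbols select a velocity and relative symbols adjust the current one. The sketch below assumes this reading; set-speed and the velocity variable are our names, not LSAT's.

    ;; Sketch of interpreting L-gen-propel's Speed mapping (values from Figure 6).
    (defparameter *speed-table*
      '((slow . 25) (moderate . 40) (fast . 60) (faster . +5) (slower . -5)))

    (defvar *velocity* 25)  ; mm/sec; the numeric value stays private to the LSA

    (defun set-speed (symbolic-value)
      "Map a CONTROLLER's symbolic Speed value onto the LSA's actual velocity."
      (let ((entry (cdr (assoc symbolic-value *speed-table*))))
        (case symbolic-value
          ((faster slower) (incf *velocity* entry))  ; relative adjustment
          (t (setf *velocity* entry)))))             ; absolute setting

    (set-speed 'slow)    ; *velocity* => 25
    (set-speed 'faster)  ; *velocity* => 30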
Within L-gen-propel, any type of mapping function or table could be used, including ones based on other parameters. Thus, the Sufficient Description can abstract the numerical values needed in the execution of actuator commands away from the types of interactions needed between L-ctrl-close-quarters-move and L-gen-propel. A more manageable interface across the parameters results.

VII. RELATED RESEARCH

There have been a number of approaches to combining planning and reacting for a mobile robot [7,8,9,10,17,18,19]. Though such research yields knowledge concerning sensors and actuators, none of these approaches has focused on determining what sensor/actuator/process knowledge is necessary for effective utilization. Other work has been done in developing schema-based computational processes for sensory-based robotics [3,15]. Murphy [16] extends Arkin's action-oriented perception paradigm to have the process of sensor fusion contribute to the automatic configuration, exception handling, and execution of perceptual objectives and sensing strategies. This objective is comparable to Explication, and both approaches embrace many of the same philosophies about the importance of sensing in determining actions.

There has also been a great deal of research in sensor selection. Hager and Mintz [11] use decision theory for deriving a set of sensor observations which will yield desired sensor data. Many others [1,14] are investigating sensor planning techniques for determining the locations and motions required for multiple vision sensors in order to view (monitor) the features of interest during task execution. This research is similar to the selection of a source of information (i.e. a LSA). Though such selection mechanisms are important, they are not emphasized in the LSAT research as much as how the knowledge about (and required by) the selection mechanisms relates to the other components of a Sufficient Description.
Henderson [12,13] combines a declarative description of the sensors with procedural definitions of sensor control. This research, termed "Logical Sensors", consists of logical, abstracted views of sensors and sensor processing, much as logical I/O is used to insulate the user from the differences among I/O devices and operating systems. Work has been done to extend the concept to include actuator control [2]. In both, the development of a Logical Sensor includes determining knowledge about how to use sensors, much the same as the knowledge content of Sufficient Descriptions.

VIII. CONCLUSION

Currently, we are implementing the process of Explication within the LSAT framework. We are working on the MoveThrough example as our first major milestone for abstract-goal achievement. Since the robot only utilizes dead-reckoning, proximity infra-red, and proximity ultrasonic sonar sensors, the amount of information obtained from sensing is usually not rich enough to allow confident and complete decision making from a single frame or collection of data. Thus, multiple collections from multiple viewpoints are required. Such a problem domain provides a rich series of decisions, relevant information collections, actuator manipulations, and knowledge utilizations for the robotic system to progress through in order to achieve the goal. Thus, this problem domain exercises the LSAT system's capabilities and allows us to experiment with building robustness in the accomplishment of goals through the acquisition of knowledge.

Within the lab experiments, the robot is able to perform all the standard maneuvers to achieve the MoveThrough goal for standard doorways. A great deal of Utilization-Detail knowledge has been built into the current system. This includes the knowledge to perform the standard maneuvers for seeking, verifying, and passing through a standard doorway. Since the utility of this theory rests on its ability to resolve non-standard and unusual situations through the application of knowledge, current efforts are concentrating on building and structuring this knowledge. We have implemented knowledge for monitoring and resolving situations in which the doorway is difficult to find, as well as situations in which the doorway is too small to pass through. We are also expanding the knowledge base on the types of doorways. Currently the robot deals with doorways of standard size which are not in unusual situations (i.e. located in a corner of a room, or with obstacles directly in front of or on the other side of the doorway). As knowledge is added to monitor for and resolve unusual situations, the robustness of goal accomplishment will increase.

Another area where the role of knowledge is expanding is the coordination of the sub-LSAs. Though we anticipated that conflicts would occur between many of the LSAs during execution, we are finding that applying knowledge to anticipate and avoid such situations, as opposed to detecting and resolving them, has been easier to do and has resulted in better system performance.

One difficult issue is the responsiveness/reactivity of the system. The system is able to customize data-flow paths so as to reduce the amount of processing on a single path, thus increasing reactivity for that feedback control loop. Unfortunately, all LSAs, and thus all data-flow paths, are resident on a single SUN SPARCstation, and so reducing single data-flow paths without reducing the total number of LSAs does not eliminate the processing bottleneck. Until mechanisms are built into the system to utilize multiple processors (i.e. on a Sun network, or on multi-processor hardware), knowledge concerning the recognition of time-critical activities is being inserted. Thus, when an activity requires high reactivity, the system can temporarily shut down LSAs which are not critical to that activity.

As we have been building the library of LSAs used by Explication, we have been empirically developing our definition of Sufficient Description. This research is truly work in progress, as we have not yet compiled extensive results on how the use of the knowledge within a Sufficient Description enhances the system's operation. We have only compiled "templates" of what knowledge is currently required for Explication to utilize a LSA, and attempted to formalize them into an understandable explanation. We are finding this approach to be useful not only for mobile robotics, but also for areas in sensor management and process control. An appropriate characterization of the problems to which this approach applies is any problem where there are innumerable "processing dataflows" and the selection of the best one depends on a dynamic environmental situation, so that a dynamic "processing dataflow" reconfiguration capability is required. We have determined that the development of the LSAT framework (including the derivation of knowledge about Informational Needs, Utilization-Details, Sufficient Descriptions, and Explication) has provided us greater insight into the use of sensors for accomplishing intelligent behavior in mobile robots. We hope that, by identifying how abstract plans should be executed, we will also identify what sensor, actuator, and processing knowledge is necessary to achieve them.
ACKNOWLEDGMENTS

We would like to thank John Fischer, Chris Smith, Karen Sutherland, and Erik the Red for their help and efforts. This work was funded in part by the NSF under grants NSF/CCR-8715220 and NSF/CDA-9022509, and by the AT&T Foundation. Though John Budenske is currently an employee of Paramax Systems Corporation (a Unisys company), all research described in this paper was performed at the Robotics and Artificial Intelligence Laboratory of the Department of Computer Science, University of Minnesota.

REFERENCES

1. S. Abrams and P. K. Allen, "Sensor Planning in an Active Robotic Work Cell," Proceedings of the DARPA Image Understanding Workshop, January 1992, pp 941-950.
2. P. K. Allen, "A Framework for Implementing Multi-Sensor Robotic Tasks," Proceedings of the DARPA Image Understanding Workshop, February 1987, pp 392-398.
3. R. C. Arkin, "The Impact of Cybernetics on the Design of a Mobile Robot System: a Case Study," IEEE Transactions on Systems, Man, and Cybernetics, Vol 20, N 6, 1990, pp 1245-1257.
4. R. A. Brooks, "A Robust Layered Control System for a Mobile Robot," IEEE Journal of Robotics and Automation, Vol RA-2, N 1, 1986, pp 14-23.
5. J. Budenske and M. Gini, "Determining Mobile Robot Actions which Require Sensor Interaction," Working Notes of the AAAI Fall Symposium on Sensory Aspects of Robotic Intelligence, Monterey, 1991, pp 117-124.
6. J. Budenske and M. Gini, "Achieving Goals Through Interaction with Sensors and Actuators," IEEE/RSJ International Conference on Intelligent Robots and Systems, Raleigh, 1992, pp 903-908.
7. R. J. Firby, "An Investigation into Reactive Planning in Complex Domains," Proc. Sixth National Conference on Artificial Intelligence, Seattle, July 1987, pp 202-206.
8. R. J. Firby, "Building Symbolic Primitives with Continuous Control Routines," Proc. Artificial Intelligence Planning Systems, College Park, Maryland, June 1992, pp 62-69.
9. E. Gat, "Integrating Reaction and Planning in a Heterogeneous Asynchronous Architecture for Mobile Robot Navigation," SIGART Bulletin, Vol 2, N 4, 1991, pp 70-74.
10. M. Georgeff and A. Lansky, "Reactive Reasoning and Planning," Proc. Sixth National Conference on Artificial Intelligence, Seattle, July 1987, pp 677-682.
11. G. Hager and M. Mintz, "Computational Methods for Task-Directed Sensor Data Fusion and Sensor Planning," International Journal of Robotics Research, Vol 10, 1991, pp 285-313.
12. T. Henderson and E. Shilcrat, "Logical Sensor Systems," Journal of Robotic Systems, Vol 1, N 2, 1984, pp 169-193.
13. T. Henderson, C. Hansen, and B. Bhanu, "The Specification of Distributed Sensing and Control," Journal of Robotic Systems, Vol 2, N 4, April 1985, pp 387-396.
14. S. A. Hutchinson and A. Kak, "Planning Sensing Strategies in a Robot Work Cell with Multi-Sensor Capabilities," IEEE Transactions on Robotics and Automation, Vol RA-5, N 6, 1989, pp 765-783.
15. D. M. Lyons and M. A. Arbib, "A Formal Model of Computation for Sensory-Based Robotics," IEEE Transactions on Robotics and Automation, Vol RA-5, N 3, 1989, pp 280-293.
16. R. R. Murphy and R. C. Arkin, "SFX: An Architecture for Action-Oriented Sensor Fusion," IEEE/RSJ International Conference on Intelligent Robots and Systems, Raleigh, 1992, pp 1079-1086.
17. D. W. Payton, J. K. Rosenblatt, and D. M. Keirsey, "Plan Guided Reaction," IEEE Transactions on Systems, Man, and Cybernetics, Vol 20, N 6, 1990, pp 1370-1382.
18. S. Rosenschein and L. Pack Kaelbling, "Integrating Planning and Reactive Control," Proc. NASA Conference on Space Telerobotics, Vol 2, G. Rodriguez and H. Seraji (eds), JPL Publ. 89-7, Jet Propulsion Laboratory, Pasadena, CA, pp 359-366.
19. R. Simmons, "An Architecture for Coordinating Planning, Sensing, and Action," DARPA Workshop on Planning, Scheduling, and Control, November 1990, pp 292-297.