Functional Representation and Reasoning for Reflective Systems

Eleni Stroulia and Ashok K. Goel
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
[email protected], [email protected]

This paper appears in Applied Artificial Intelligence: An International Journal, Vol. 9, No. 1, pp. 101-124, 1995. A version of this paper was presented at the AAAI-93 Workshop on Reasoning About Function, held in Washington D.C. on July 11, 1993. This work has been supported by the National Science Foundation (research grant IRI-92-10925), the Office of Naval Research (research contract N00014-92-J-1234), and the Advanced Research Projects Agency. In addition, Stroulia's work has been supported by an IBM graduate fellowship.

Abstract
Functional models have been extensively investigated in the context of several problem-solving tasks such as device diagnosis and design. In this paper, we view problem solvers themselves as devices, and use structure-behavior-function models to represent how they work. The model representing the functioning of a problem solver explicitly specifies how the knowledge and reasoning of the problem solver result in the achievement of its goals. We then employ these models for performance-driven reflective learning. We view performance-driven learning as the task of redesigning the knowledge and reasoning of the problem solver to improve its performance. We use the model of the problem solver to monitor its reasoning, assign blame when it fails, and appropriately redesign its knowledge and reasoning. This paper focuses on the model-based redesign of a path planner's task structure. It illustrates model-based reflection using examples from an operational system called Autognostic.
1 Introduction

AI has put considerable effort into constructing theories of comprehension of how physical devices work. Physical devices are teleological artifacts: they have been created in order to accomplish the purposes of their designers. They are subject to the causality of physical processes and principles: their designers designed them in such a way that the causal interactions among their components result in the accomplishment of their intended functions. Functional models exploit this relation between function and causality to enable more economical reasoning about physical devices. A functional model of a device explicitly represents the device function, and uses the device function as an encapsulation mechanism for abstracting and organizing the complex internal causal processes of the device. Such models have been widely used for several tasks, such as diagnosis, design, and prediction, and in several domains, such as mechanical and electrical devices (Sticklen and Chandrasekaran 1989, Umeda, Takeda, Tomiyama, and Yoshikawa 1990, Abu-Hanna, Benjamins, and Jansweijer 1991, Franke 1991, Gero, Lee and Tham 1991, Goel 1991, Price and Hunt 1992, Stroulia, Shankar, Goel, and Penberthy 1992, Kumar and Upadhyaya 1992, Forbus 1993). But AI problem solvers are designed artifacts too! Although not physical, they are devices with intended functions, structural elements, and complex internal processes. And, just like physical devices, problem solvers can fail in delivering their intended functions, which may necessitate their diagnosis and redesign. Viewing problem solvers as abstract devices offers two benefits. First, it suggests that functional models may provide a scheme for representing "how a problem solver works", that is, how the internal processes (methods, algorithms) of the problem solver result in its intended functions (tasks, goals). Second, it suggests
that performance-driven learning may be viewed as a model-based redesign task. When the performance of the problem solver is suboptimal, the learning goal is to modify the problem solver's knowledge and reasoning in such a way that it does not fail under similar situations in the future. This leads us to the following hypothesis: if a problem solver has a functional model of its own reasoning process, then, when the problem solver fails at some task, it can use the model as a guide for redesigning itself appropriately by reflecting upon its reasoning. In this paper, we describe how a problem solver's knowledge and reasoning can be represented in terms of a functional model, and how failure-driven reflective learning can be viewed as a model-based redesign task. We illustrate the process of learning using examples from the Autognostic system, a reflective path planner. In particular, we show how Autognostic can reorganize its spatial knowledge about its microworld and modify its task structure for path planning by reflecting on its own suboptimal performance. These two learning tasks, namely, the reorganization of world knowledge and the modification of the task structure, have often proved to be especially difficult in past work on reflective learning. Autognostic indicates that functional models of problem solving can provide a computational mechanism for addressing some of the issues that arise in these learning tasks.
2 Reflection and Performance Improvement

There is always room for improving problem-solving performance. All intelligent agents, natural or artificial, are bound to produce a suboptimal solution to some problem, or even fail to solve a problem. Such failures are opportunities for learning, and an intelligent agent can improve itself by reflecting upon them. The importance of reflection to learning and performance improvement has been established by a long line of psychological research (e.g., Flavell 1971, Piaget 1971, Kluwe 1982, Baker and Brown 1984). The main result of this research is that reflection upon instances of failed problem solving enables the problem solver to reformulate the course of its own "thinking" so that, under similar circumstances in the future, it does not fail in a similar way. The goal of our work is to develop a computational model of reflection as a self-adaptive mechanism that enables improvement of problem-solving performance. Towards this goal, we address two interleaving issues:
1. a language for describing problem solving such that it can enable reflection, and
2. a process for reflection which uses the description of the problem solver to reason about its performance and redesign it to improve its performance.
2.1 A Language for Modeling Problem Solving
Previous AI research has suggested functional analysis as the appropriate level for describing intelligent behavior. Newell (1982), for example, distinguished the level at which goals and knowledge are specified from lower levels at which they are represented in symbolic structures. He advocated that knowledge should be characterized functionally, in terms of what goals it helps to accomplish, and not structurally, in terms of particular objects and relations. Marr (1982) too emphasized the need to separate the computational theory embodied in an intelligent system from its implementation. Chandrasekaran (1987) proposed that problem-solving tasks fall into major classes called generic tasks. All tasks that are instances of the same generic task can be solved by the same family of methods. This led him to the use of task structures, that is, task-method-subtask decompositions, as a way of analyzing and building intelligent systems and modeling cognition (Chandrasekaran 1989). The work of Clancey (1985), McDermott (1988), and Steels (1990), as well as the KADS framework (Wielinga, Schreiber and Breuker 1992), also shares this functional perspective. We adopt Chandrasekaran's task structures as the framework for describing problem solving. A problem-solving task in this framework is specified by the information it takes as input and the information it produces as output. A task can be accomplished by one or more methods, each of which decomposes it into a set of simpler subtasks. A method is specified by the subtasks it sets up, the control it exercises over their processing, and the knowledge it uses. The subtasks into which a method decomposes a task can, in turn, be accomplished by other methods, or, if the appropriate knowledge is available, they can be solved directly.
The reasoning process of a problem solver is thus described in terms of a recursive decomposition of its overall task into methods and subtasks. To formally describe the task structure of a problem solver, we use the language of structure-behavior-function (SBF) models. SBF models (Gero, Lee and Tham 1991, Goel 1989) are a generalization of the Functional Representation language (Sembugamoorthy and Chandrasekaran 1986), originally developed for modeling physical devices. Adapting this language for modeling problem solvers, we express tasks as transitions between information states, where the transitions are annotated by a set of conceptual relations describing how the task's input relates to its output, and act as indices to the methods that can accomplish them. Methods are expressed as partially ordered sequences of state transitions which specify in greater detail how they accomplish the task for which they are applicable. Tasks that are solved directly, without further decomposition, index the program modules that accomplish them.
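To make these ingredients concrete, the following Python sketch shows one way such a task-structure description might be encoded. It is our own illustration under assumed data structures (Task, Method, and predicate-style conceptual relations over a dictionary of information values), not the representation actually used in Autognostic.

from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Task:
    name: str
    inputs: list[str]                                    # names of input information types
    outputs: list[str]                                   # names of output information types
    conceptual_rels: list[Callable[[dict], bool]] = field(default_factory=list)
    methods: list[Method] = field(default_factory=list)  # empty list plus a procedure => leaf task
    procedure: Optional[Callable[[dict], dict]] = None   # the structural element of a leaf task

@dataclass
class Method:
    name: str
    applicability: Callable[[dict], bool]                # condition under which the method applies
    subtasks: list[Task] = field(default_factory=list)   # partially ordered; a plain sequence here

# The top-level route-planning task of Section 3, with the conceptual relations named there:
route_planning = Task(
    name="route-planning",
    inputs=["initial-location", "goal-location"],
    outputs=["path"],
    conceptual_rels=[
        lambda s: s["path"][0] == s["initial-location"],   # equal(initial-node(path), initial-location)
        lambda s: s["path"][-1] == s["goal-location"],     # equal(final-node(path), goal-location)
    ],
)

In such an encoding, a leaf task would carry a procedure and no methods, while a non-leaf task would list one or more methods indexed by its conceptual relations.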
2.2 A Process Model for Reflection
The comprehension of its problem solving in terms of an SBF model enables a problem solver to improve its performance
1. by specifying a "road map" for problem solving, which allows the problem solver to monitor its progress on a specific problem,
2. by specifying "correctness" criteria for the output of each subtask, so that, when it fails, the problem solver can assign blame for its failure to those subtasks whose output is inconsistent with their corresponding criteria, and
3. by guiding the problem solver to consistently redesign its own problem solving and thus improve its performance.
While solving a specific problem, the problem solver can use the model of its reasoning to monitor its progress. It can record which method it uses for its task, in which specific order it performs the resulting subtasks, which methods it invokes for their respective accomplishment, and what their corresponding results are. The model provides the problem solver with expectations regarding the information states it goes through as it solves the problem. For example, each information state should be related to the preceding one according to the conceptual relations of the task responsible for this transition. While monitoring its reasoning, the problem solver may notice that some of these expectations fail and recognize this failure as an opportunity for learning. Alternatively, it may complete its reasoning, produce a solution, and learn from the world that another solution would have been preferable. This is yet another opportunity for learning. After a failure, the problem solver can use the record of its failed reasoning process and the model of its problem solving to assign blame for the failure to some element(s) of its task structure and propose modifications which can potentially remedy the problem. The model explains how each subtask contributes to the accomplishment of the overall task, and thus it can help localize the failure of the overall process to some specific element. The task-structure view of problem solving gives rise to a taxonomy of generic learning tasks, each one corresponding to a different type of potential cause of failure that the problem solver can identify. After having decided on a learning task, the problem solver can redesign its task structure according to the modifications suggested by this task. The modifications may be simple, e.g., integrating a new fact into its world knowledge, or more complex, e.g., introducing a new task into its task structure. In any case, the semantics of the language of SBF models guide the modification process to produce a valid task structure. The modification is not guaranteed to result in an improved problem solver. However, it is evaluated through subsequent problem solving. If the problem that triggered the modification can now be solved and the appropriate solution produced, this is strong evidence that the modification was appropriate. Otherwise, the problem solver may try another modification, if the blame assignment has suggested multiple alternatives, or it may try to evaluate why the modification did not bring the expected results, if it has a model of its reflection process. Reflection, after all, is a reasoning task, and as such it can be modeled in terms of SBF models, just like any other task.
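At a very high level, the monitor / assign-blame / redesign / evaluate cycle just described can be summarized by the sketch below. It is our own abstraction: the callables passed in are placeholders for the processes discussed in the rest of the paper, and the policy of undoing an unsuccessful modification before trying the next alternative is an assumption of the sketch rather than a commitment of Autognostic.

from typing import Any, Callable, Iterable, Optional

def reflective_episode(
    solve: Callable[[Any], Any],                        # run the problem solver, return its result
    expectations_hold: Callable[[Any], bool],           # check the result (and its trace) against the SBF model
    assign_blame: Callable[[Any, Any], Iterable[Any]],  # (result, feedback) -> candidate modifications
    apply_modification: Callable[[Any], None],
    undo_modification: Callable[[Any], None],
    problem: Any,
    feedback: Optional[Any] = None,                     # a preferred solution supplied by the world
) -> Optional[Any]:
    """One reflective episode: monitor, and on failure assign blame and redesign."""
    result = solve(problem)
    if expectations_hold(result) and (feedback is None or result == feedback):
        return result                                    # nothing to learn from this episode
    for modification in assign_blame(result, feedback):
        apply_modification(modification)
        retry = solve(problem)                           # evaluate the redesign on the same problem
        if expectations_hold(retry) and (feedback is None or retry == feedback):
            return retry                                 # the modification is judged appropriate
        undo_modification(modification)                  # assumption: revert and try the next candidate
    return None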
3 Router: A Case Study

In this paper, we use Router (Goel and Callantine 1992, Goel et al. 1993), a path-planning system, as a case study. Router's task is to find a path given an initial and a goal location in a physical space. Router knows a model-based method and a case-based method that can accomplish this task. When presented with a problem, it chooses one of them based on a set of heuristics which evaluate the applicability and utility of each method on the problem at hand. Router's world model consists of a set of streets in the navigation space and the intersections between them. The streets are grouped into neighborhoods and the neighborhoods are organized in a space-subspace hierarchy. High-level neighborhoods describe large spaces and contain knowledge of major streets. At the next level, a neighborhood is decomposed into several subneighborhoods, each describing a smaller space and containing knowledge of additional minor streets (see Figure 4). When Router uses the model-based method to solve a problem, it first finds the neighborhoods of the initial and goal intersections. If the two intersections belong in the same neighborhood, Router performs a local search to find a path between them. Otherwise, it decomposes the overall problem into a set of subproblems, each of which can be solved locally within some neighborhood. It then synthesizes the partial solutions to find the desired path and stores it in memory. Router's memory contains previously solved path-planning problems. For each problem in its memory, Router knows the initial and goal intersections, the neighborhoods they belong to, and a path between them. When Router uses the case-based method for a problem, it first retrieves from its memory a path which connects locations in the neighborhoods of the current initial and goal locations. Then, it connects the current initial location to the retrieved path's starting point, and the end point of the retrieved path to the current goal location. Finally, it stores the new problem in its memory for future use. Router was developed independently of this work. It does not contain a functional model of its problem solving, nor does it reflect on it. We have since developed the Autognostic system, which has an SBF model of Router's reasoning and is capable of the reflection process sketched above. Autognostic and Router, together, constitute a reflective reasoner and learner.
3.1 Autognostic's SBF Model of Router's Path Planning
Figure 1 depicts part of Autognostic's SBF model of Router's reasoning. The SBF model specifies the information transformations that occur as Router's problem solving progresses. Each information state is depicted as a rectangular box and contains the information available at that state. Each transformation is depicted by a double arrow, and is annotated by the description of the task which accomplishes it. The task description is shown below the double arrow of the corresponding transformation. Router's task is to produce a path from an initial location to a goal location. The route-planning task transforms the input information state, info-state-1, which contains the initial and goal locations, to the output information state, info-state-5, which contains the path between them. A set of conceptual relations characterize the output with respect to the input: the initial node of the produced path is the same point as the initial location, and the final node of the path is the same point as the goal location. Finally, the model specifies that route planning is accomplished by the route-planning method. The route-planning method, shown in the first dashed-line box in Figure 1, decomposes the task into a sequence of simpler subtasks, elaboration, retrieval, search and storage, each one corresponding to a state transformation in this box. At info-state-2, the elaboration subtask has produced the two neighborhoods in which the initial and goal locations belong, i.e., initial-zone and goal-zone. The conceptual relations of the elaboration subtask specify that the initial location belongs in the initial zone, and the goal location belongs in the goal zone. The model also specifies that the elaboration subtask is an instance of the generic task, hierarchical classification, which makes possible the transfer of problem-solving methods from this generic task to its instance. The elaboration subtask is a "leaf" task, that is, it is directly solved by the invocation of elaboration-func, a structural element of the Router system. The next information transformation, from info-state-2 to info-state-3, is accomplished by the retrieval subtask. This task recalls from the planner's memory a path which connects locations spatially close to the current initial and goal locations. The search subtask, which transforms info-state-3 to info-state-5, produces the desired path, and, subsequently, the storage subtask stores it in memory
for future reuse. The search task can be accomplished by three different methods: the intrazonal, the model-based, and the case-based method. In Figure 1, only the intrazonal search method is shown, in the bottom dashed-line box. The SBF model explicitly specifies that this method is applicable only when the goal location also belongs in the initial zone. In this sequence, the initialization-of-search subtask transforms info-state-3 to info-state-4; at this state, Router has set up its current-location to the initial intersection, and initialized its temporary-path to contain only this intersection. info-state-4 is repeatedly transformed by the increase-of-path subtask, which incrementally adds additional intersections to the temporary path. As defined by the conceptual relations of this subtask, all the nodes in the temporary path belong in the initial zone. The task is repeated until the length of the temporary path exceeds N. If, at some point, the current location equals the goal location, then info-state-4 is transformed to info-state-5 and Router outputs the temporary path as the desired path. Figure 2 depicts part of Autognostic's meta-model of Router's world knowledge. Router's world is described in terms of several different types of objects, such as intersections, neighborhoods, streets and paths. The types of information that Router reasons about in its path-planning process are instances of these objects. The objects in Router's world are related through relations, such as belongs-in, which relates intersections to neighborhoods, and covers, which relates neighborhoods with other neighborhoods. Finally, some relations are related to each other with domain constraints. In summary, the SBF model of a problem solver's reasoning describes its task structure. It captures the interdependencies between the different subtasks the problem solver performs, its problem-solving methods, and its world knowledge. The SBF model is non-deterministic; for any task there may be several methods, the order of the subtasks in a method is only partial, and some subtasks may sometimes be unnecessary. The model represents the possible sequences of elementary information-processing transformations, i.e., the transformations performed by the leaf subtasks, that the problem solver can perform to solve a problem, and the conditions under which they are appropriate. It partially describes the intermediate information states the problem solver goes through, with conceptual relations that connect each state with previous ones. Finally, it explains how each elementary transformation contributes to the accomplishment of the overall task in terms of the higher-level subtasks it serves.
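To illustrate how the intrazonal method's condition and the conceptual relation of increase-of-path constrain the search, the sketch below is a toy reconstruction over assumed data structures (an adjacency dictionary and a set of zone members). It is not Router's code, and the recursive control is only an approximation of Router's repeated increase-of-path transformation.

def intrazonal_search(neighbors, zone, initial, goal, n_bound=20):
    """neighbors: dict mapping each intersection to its adjacent intersections;
    zone: the set of intersections belonging to the neighborhood being searched."""
    def extend(temporary_path):
        current = temporary_path[-1]
        if current == goal:                       # condition: equal(current-location, goal-location)
            return temporary_path
        if len(temporary_path) > n_bound:         # stop once length(temporary-path) exceeds N
            return None
        for nxt in neighbors.get(current, []):
            # conceptual relation of increase-of-path: every node belongs in the initial zone
            if nxt in zone and nxt not in temporary_path:
                found = extend(temporary_path + [nxt])
                if found is not None:
                    return found
        return None
    # initialization-of-search: the temporary path contains only the initial intersection
    return extend([initial])

# e.g. intrazonal_search({"A": ["B"], "B": ["A", "C"], "C": ["B"]}, {"A", "B", "C"}, "A", "C")
# returns ["A", "B", "C"]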
3.2 An Architecture for Reflective Learning
Figure 3 diagrammatically depicts Autognostic's architecture for reflective reasoning and learning (Stroulia and Goel 1992). Solid-line tilted boxes depict tasks, dashed-line boxes depict knowledge, double arrows depict control flow, simple arrows depict information flow, and double-headed arrows depict access and use of knowledge. Although Figure 3 describes the integration of Router and Autognostic as a reflective reasoner, the language of SBF models and Autognostic's reflection process are independent of Router and path planning. In this architecture, the reasoner has both reasoning and meta-reasoning capabilities. At the reasoning level, the reasoner (Router) has domain knowledge about the specific navigation space in which it solves problems. It also has a memory of previous path-planning problems and their solutions. Router uses its knowledge to plan paths in its navigation space. At the meta-reasoning level, the reasoner (Autognostic) has an explicit SBF model of its (Router's) reasoning, in terms of which it comprehends its tasks, methods and knowledge. Figures 1 and 2 depict in greater detail parts of this SBF model. Autognostic uses its SBF model of Router to monitor its reasoning, assign blame to some element of its task structure when it fails, redesign it, and thus improve its performance.
4 A Taxonomy of Learning Tasks

The task-structure view of problem solving gives rise to a taxonomy of learning tasks and to a corresponding taxonomy of potential causes of failure (Stroulia and Goel 1992). In this section, we describe each one of these causes of failure and the corresponding learning tasks, using examples from Router. For illustration, Figure 4 shows a part of Router's world model necessary for following the examples. The small circles in the model are initial or goal intersections for the example problems we discuss.
4.1 Knowledge Errors
The failure of the problem solver to solve a problem may be due to errors in its knowledge about the world. Its knowledge may be incorrect or incomplete. Alternatively, its knowledge may be inappropriately organized: although it may have all the facts necessary to produce the correct solution for a problem, it may not be able to access them when needed to produce the desired solution. In addition, the language it uses for representing the world, its ontology, may not be expressive enough for capturing all the information relevant to its task.
Incomplete and Incorrect World Knowledge
Router is presented with the problem of going from (10th & hemphill) to (10th & atlantic), for which it produces the path (10th hemphill SouthEast ferst-1) (hemphill ferst-1 East atlantic) (ferst-1 atlantic North 10th).[1] Although the above path is correct, it is suboptimal. A better path is provided as feedback to Autognostic: (hemphill 10th East atlantic). Router produces a suboptimal path because it believes that 10th is a one-way street, with West as its only valid direction. The feedback informs Autognostic that East is also a valid direction for 10th. Autognostic must infer that the failure is due to the incorrect description of 10th in Router's world model, and must update the model to represent 10th as a two-way street.
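The kind of repair Autognostic must make here can be pictured with the small sketch below; the representation of a street's valid directions as a dictionary of sets is our own assumption for illustration, and the segment layout follows the quadruple format of footnote 1.

def assimilate_directions(street_directions, feedback_path):
    """street_directions: dict mapping a street name to the set of its valid directions;
    feedback_path: list of (from_street, on_street, direction, to_street) segments."""
    repairs = []
    for _from_street, on_street, direction, _to_street in feedback_path:
        if direction not in street_directions.setdefault(on_street, set()):
            street_directions[on_street].add(direction)
            repairs.append((on_street, direction))
    return repairs

# Example from the text: Router believed 10th was one-way, with West as its only direction.
model = {"10th": {"West"}}
assimilate_directions(model, [("hemphill", "10th", "East", "atlantic")])
assert model["10th"] == {"West", "East"}     # 10th is now represented as a two-way street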
Incorrect Knowledge Organization
In Router's neighborhood organization, the same space is described by several neighborhoods at different levels of spatial aggregation. Thus, two intersections may both belong to more than one neighborhood, and the problem of finding a path between them may be solvable in more than one neighborhood. For example, in the case of going from (10th & center) to (dalney & ferst-1), the problem can be solved both in za and z1. If solved in z1 the resulting path is (10th center East atlantic) (atlantic 10th South ferst-1) (ferst-1 atlantic West dalney); if solved in za the path is (10th center East dalney) (10th dalney South ferst-1). The second path is preferable, because it is shorter. Autognostic should reorganize Router's neighborhood hierarchy in such a way that (i) conflicts of the above type, where a pair of intersections belong in more than one neighborhood at the same time, do not occur, and (ii) each pair of intersections belongs to the neighborhood in which the solution of the problem of finding a path between them is the easiest. In this example, Autognostic has to decrease the detail of information in neighborhood z1 in order to make the problem solvable only in za. This can be achieved by removing the intersection (10th & center) from neighborhood z1.
Incorrect Knowledge Representation
Each intersection in Router's world model is described with two attributes: the names of the two streets that meet at the intersection, (street-name-1 street-name-2). When presented with the problem of going from (10th & northside) to (mcmillan & turner), Router returns the path (northside 10th East curran) (10th curran South 6th-1) (curran 6th-1 East mcmillan) (6th-1 mcmillan North turner). Although correct, the above path is suboptimal. A better path is provided as feedback to Autognostic, i.e., (northside 10th East hemphill) (10th mcmillan South turner). Router's concept of intersection is that it is a location where two streets meet, and thus, at each intersection, it has only two choices of direction in which to go. Thus, Router misses the opportunity of turning onto mcmillan when at (10th & hemphill). In a path, the ending intersection of one segment is the same as the beginning intersection of the next segment. Thus, the feedback informs Autognostic that (10th & hemphill) and (10th & mcmillan) are the same intersection; thus mcmillan street passes through (10th & hemphill). This fact, however, cannot be expressed in the language currently used for representing Router's world model. Autognostic has to modify the representation of intersections so that it does not limit the number of streets that meet at the intersection to two. Essentially, Autognostic has to generalize Router's concept of intersection. Only then will Router be able to correctly represent the intersection (10th & hemphill & mcmillan).

[1] A path is a sequence of path segments, where the ending intersection of a path segment is the beginning intersection of the next. A path segment is a quadruple whose first, second and fourth elements are names of streets, and whose third element is the relative direction between the beginning and ending intersections of the segment. The path segment begins at the intersection of the first and second streets and ends at the intersection of the second and fourth streets. The segment lies on the second street. For instance, the segment (tech-parkway ponders South marietta) lies on ponders, begins at (tech-parkway & ponders), and ends at (ponders & marietta). (ponders & marietta) is South of (tech-parkway & ponders).
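The generalization of Router's concept of intersection can be pictured with the following sketch, in which an intersection is the set of streets that meet there rather than a fixed pair; the helper names and the subset check at the end are our own illustration, not Autognostic's repair.

def intersection(*streets: str) -> frozenset:
    """A generalized intersection: the set of streets that meet there (two or more)."""
    return frozenset(streets)

def segment_endpoints(segment):
    """Per footnote 1, a segment (a, b, direction, c) lies on street b, starts at (a & b), ends at (b & c)."""
    a, b, _direction, c = segment
    return intersection(a, b), intersection(b, c)

# With the generalized representation, (10th & hemphill & mcmillan) is expressible, and the
# endpoint of the first feedback segment is recognizable as part of that merged intersection.
merged = intersection("10th", "hemphill", "mcmillan")
_start, end = segment_endpoints(("northside", "10th", "East", "hemphill"))
assert end <= merged      # (10th & hemphill) is contained in the merged intersection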
4.2 Method Errors
The failure of the problem solver may be due to some error in the way it uses its problem-solving methods. The problem solver may not know which is the most appropriate method for a given problem, or it may not know all the subtasks that a method sets up for the task, or it may not know the correct order in which these tasks should be performed.
Erroneous Choice of Method
Router is presented with the problem of going from (10th & atlantic) to (ponders & marietta). Router retrieves from its memory (atlantic ferst-1 West hemphill) (hemphill ferst-1 West ferst-2) (ferst-1 ferst-2 West ponders) (ferst-2 ponders South north). This path connects two intersections which belong in the same neighborhoods as the current initial and goal intersections. Router chooses the case-based method over the model-based method only when it has a path that matches the current problem on at least one intersection. Thus, for this problem, Router chooses the model-based method and produces the path (10th atlantic South ferst-1) (atlantic ferst-1 West hemphill) (hemphill ferst-1 West ferst-2) (ferst-1 ferst-2 South ponders) (ferst-2 ponders South north) (north ponders South marietta).
This path contains the path that Router had retrieved from its memory: the model-based method produced the path that the case-based method would also have produced. The task structure of the search task is simpler in the case-based method than in the model-based method. Thus, Autognostic infers that Router's cost of producing this path would have been lower if the case-based method had been used instead of the model-based method. In this example, Autognostic has to change Router's method-selection heuristics so that, in the future, a path connecting the neighborhoods of the problem locations will be deemed useful as the basis for the solution of the current problem.
Incorrect or Incomplete Set of Subtasks
Router is presented with the problem of going from (atlantic & ferst-1) to (ferst-2 & ponders). From its memory, Router retrieves (atlantic ferst-1 West hemphill) (hemphill ferst-1 West ferst-2) (ferst-1 ferst-2 West ponders) (ferst-2 ponders South north). Since the retrieved path begins at the current initial location, Router chooses the case-based method to solve the problem. The resulting path, (atlantic ferst-1 West hemphill) (hemphill ferst-1 West ferst-2) (ferst-1 ferst-2 West ponders) (ferst-2 ponders South north) (north ponders North ferst-2), is suboptimal because it traverses the same street, ponders, twice in two different directions.
This error occurs because the overall path is a composition of two independently derived paths. Although neither the paths stored in memory nor the paths produced by the model-based method are redundant, their composition may be. This error can be recognized and corrected after the overall path is produced and before it is stored in memory. Autognostic needs to add a new subtask to the set of subtasks that the case-based method sets up for the search task, i.e., evaluation-in-terms-of-redundancy. This new subtask will post-process the composed path to eliminate redundant segments from it.
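One plausible form of such post-processing is sketched below; it operates on the sequence of intersections a path visits and splices out any loop. It is our own illustration of the idea, not the evaluation-in-terms-of-redundancy subtask itself.

def eliminate_redundancy(nodes):
    """nodes: the sequence of intersections a composed path visits, in order."""
    cleaned = []
    for node in nodes:
        if node in cleaned:
            # splice out the cycle back to the earlier visit of this intersection
            cleaned = cleaned[: cleaned.index(node) + 1]
        else:
            cleaned.append(node)
    return cleaned

# e.g. eliminate_redundancy(["A", "B", "C", "B", "D"]) returns ["A", "B", "D"]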
Incorrect Ordering of the Subtasks
The model-based method decomposes the search task into (i) finding a high-level neighborhood which covers both the initial and goal neighborhoods, (ii) finding a common intersection, int1, between the initial and the common neighborhood, (iii) finding a common intersection, int2, between the goal and the common neighborhood, (iv) finding a path between int1 and int2, (v) finding a path between the initial intersection and int1, (vi) finding a path between int2 and the goal intersection, and (vii) composing the paths that tasks (iv), (v), and (vi) produced. If Router were required to start executing its path as soon as possible, then, with this ordering of its subtasks, it could start executing only after both subtasks (iv) and (v) were completed. However, if it reordered these two subtasks, it could start executing just after the first of them, the path from the initial intersection to int1, was completed. As the requirements on its processing change, Autognostic should learn to adapt Router's task structure to meet these requirements if possible.
4.3 Task Errors
Another kind of potential cause of failure is the incorrect understanding of the information transformation accomplished by a task. For example, the problem solver may not know all the input information that is necessary for the accomplishment of a task, or it may not know that an existing task can produce, in addition to its current output, some other information, or, finally, it may not have a correct understanding of the conceptual relations between the input and the output of a task.
Incorrect Task Input
When Router uses the model-based method for the search task, it has to find common intersections between the high-level neighborhood and the two lower-level ones. Its subtask, identification-of-common-intersection, takes as input two neighborhoods and produces as output a common intersection between them. For example, to connect (5th & fowler) with (holly & 10th), Router has to select an intersection common between zc and z1 and an intersection common between z1 and zb. Router chooses as int1 (atlantic & ferst-1) and produces the path (fowler 5th West atlantic) (5th atlantic North ferst-1) (ferst-1 atlantic North 10th) (atlantic 10th East holly). However, if a better path, (5th fowler North 8th-3) (8th-3 fowler North 10th) (fowler 10th West holly), is provided as feedback to Autognostic, then it should infer that the feedback path could have been produced if Router had chosen as int1 (fowler & 8th-3). The cause of Router's failure to select the appropriate common intersection is that the identification-of-common-intersection subtask ignores the actual initial and goal locations of the problem. To remedy this problem, Autognostic has to modify this task in order to include in its input the initial and goal locations. Moreover, it has to add to its conceptual relations that, if possible, the output common intersection should lie on the same street as one or both of these two locations.
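A minimal sketch of the repaired subtask follows; it assumes the intersections-as-street-sets representation of the earlier sketch, and the preference criterion it implements (share a street with the initial or goal location when possible) is the conceptual relation proposed above.

def identify_common_intersection(zone_a_intersections, zone_b_intersections, initial, goal):
    """The two zone arguments are sets of intersections (each a frozenset of street names);
    initial and goal are the actual problem locations, newly added to the task's input."""
    common = zone_a_intersections & zone_b_intersections
    reference_streets = set(initial) | set(goal)
    preferred = [ix for ix in common if ix & reference_streets]  # shares a street with initial or goal
    candidates = preferred if preferred else list(common)
    return candidates[0] if candidates else None

# In the example, (fowler & 8th-3) shares fowler with the initial location (5th & fowler),
# so it would be preferred over (atlantic & ferst-1) as int1.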
Incorrect Task Output
Let us consider a situation where Router is required to interact with a scheduler, which needs to assign delivery tasks to agents. The scheduler needs to provide each agent with a route for its delivery task, and it also needs an estimate of the time to traverse the route in question in order to predict when the agent will be available again for a new assignment. Router already produces paths, but with a slight adaptation it could also produce an estimate of the time to traverse them. In this situation, in order to accommodate the information needs of the scheduler, Autognostic should "redefine" Router's path-planning task in order to include in its output another type of information, i.e., the time to traverse the produced path. Such a modification also implies modification of the problem-solving methods that accomplish the path-planning task in order to include a task which will produce the additional output information.
Incorrect Task Relations
Router's definition of "path planning" is the production of a path such that its initial node will be the initial location of the problem and its final node will be the goal location of the problem. In a highly connected world, there may be several paths that fit this relation, and some may be preferable to others. If, for example, Router is required to produce "fast paths", Autognostic has to restrict the conceptual relations of the path-planning task to allow only the production of paths which traverse more highway segments than small-street segments. Because the path-planning task gets accomplished by a method, a new subtask is needed in the task structure to prefer highways to small streets when searching for a path, and thus ensure that the output conforms with the modified conceptual relation. If the modified task were directly achieved by a programming module, reprogramming of that module would be required.
5 Using the SBF Model of Problem Solving for Learning

In this section, we describe how Autognostic uses its functional model of Router to reflect upon its reasoning and redesign it by reorganizing its world knowledge and modifying its task structure. We illustrate this process by examining a specific example in some detail. In this example, feedback from the world informs Autognostic that Router's solution to the given problem is not optimal. Autognostic uses its SBF model of Router's reasoning to identify the cause of this failure, and proposes two alternative modifications to Router. The first modification calls for a reorganization of a portion of Router's world knowledge and the second calls for the insertion of a new task in Router's task structure. We describe both modifications and show how they improve Router's performance.
5.1 Monitoring
Router is presented with the problem of connecting (10th & center) with (dalney & ferst-1). Autognostic monitors Router's planning process and records its trace. The trace is an instantiation of the part of the SBF model of Router's reasoning which explains the sequence of steps executed to solve the problem at hand. In this example, the trace is the instantiation of the part of the SBF model depicted in Figure 1. This part of the SBF model is instantiated with the values (10th & center) for initial-location, (dalney & ferst-1) for goal-location, z1 for both initial-zone and goal-zone, and (center 10th East atlantic) (10th atlantic South ferst-1) (atlantic ferst-1 West dalney) for the output path.
5.2 Assigning Blame
The path that Router produced is correct. However, it is suboptimal because it is longer than the path
(center 10th East dalney) (10th dalney South ferst-1) which is presented to Autognostic as feedback from the
world. Autognostic uses this feedback, the trace of Router's reasoning on the specific problem, and the SBF model of Router's problem solving to identify the cause of its failure. This model-based blame-assignment process is a heuristic search through the task structure of Router's problem solving, guided by the feedback and the problem-solving trace. The blame assignment regresses the desired solution and its properties back through the SBF model and identifies modifications to the problem solver's knowledge or task structure that could result in the production of the desired solution. Initially, Autognostic uses its model of Router's world knowledge to parse the desired solution in order to assimilate all the information it contains. More generally, the feedback-assimilation step aims at identifying information conveyed by the desired solution which is missing from or conflicting with the problem solver's world knowledge. In this example, from its meta-model Autognostic knows that a path refers to a list of intersections. Thus, given the desired path, Autognostic evaluates whether all the intersections it contains already belong in Router's world model. For this example, this is indeed true. If there were intersections in the desired path which Router did not know, then Autognostic would infer that the cause of Router's failure was its incomplete knowledge of the world. If the assimilation step proceeds without errors, i.e., all the information conveyed by the desired solution is already known to the problem solver, Autognostic tries to identify modifications which could potentially enable Router to produce the desired solution using the same methods that led to the failed solution before. It is possible that the task structure allows the production of several solutions to the same problem, one of which is more desirable than the others. In this case, the appropriate modification is the introduction of some new task in the task structure which, using some type of world knowledge, will distinguish between the possible alternatives and will "steer" the problem-solving process towards the correct solution. Another possibility is that, due to errors in its world knowledge, the problem solver may occasionally draw incorrect inferences although its underlying problem-solving mechanism is correct. In this case, the appropriate modification is to modify the world knowledge so that it does not support incorrect inferences. In this example, based on the conceptual relation of the increase-of-path subtask, i.e., all nodes of the path belong in the initial zone, and the desired path, Autognostic realizes that in order for the desired path to be produced by the same task decomposition that led to the failed solution, the initial-zone should
have been za instead of z1. At this point, Autognostic shifts the focus of its blame assignment towards identifying the causes that led to the suboptimal value z1 for the initial-zone instead of the desired za. The initial zone is produced by the elaboration subtask, and from the conceptual relations of this subtask, Autognostic infers that both values, za and z1, are "legal", i.e., both values are consistent with the conceptual relations (correctness criteria) of the subtask. This leads Autognostic to two possible modifications that can remedy the problem: (i) the modification of the domain relation belongs-in to make the value z1 "illegal", essentially the exclusion of the intersection (10th & center) from the neighborhood z1, and (ii) the insertion of a selection task which, given the two alternative values for the initial zone, will result in the selection of the desired za. Often, the desired solution cannot be produced with the same task decomposition that led to the failed solution. Different methods produce solutions of different qualities, and often the desired solution exhibits qualities characteristic of a method other than the one originally used for problem solving. Autognostic recognizes this "incompatibility" between the desired solution and the method used for problem solving in terms of conceptual relations of the method's task that the desired solution does not meet. If, for example, the desired path did not belong in a single neighborhood, i.e., there was no neighborhood in which all the intersections of the path were mentioned, then it would be in conflict with the conceptual relation of the increase-of-path subtask. In this case, Autognostic would try to infer an alternative method which would not involve this subtask but could produce this solution. This stage of blame assignment aims at improving the problem solver's criteria for selecting among its methods. Figure 5 shows Autognostic's blame-assignment algorithm for identifying the cause of the problem solver's failure to produce the desired solution. It shows in detail the second stage of the blame-assignment process described above, but does not elaborate on the feedback-assimilation and the exploration-of-alternative-methods stages.
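The second stage of this process can be abstracted as follows; the attributes producer_of, state_before, and conceptual_rels are assumptions in the spirit of the task-structure sketch of Section 2.1, and the returned tags merely name the repair alternatives discussed in the text.

def assign_blame_undesired_value(task_structure, trace, info, preferred_value):
    """Localize the failure for an information item whose actual value differs from the
    value required by the desired solution (cf. Figure 5)."""
    producer = task_structure.producer_of(info)            # the subtask that produced `info`
    state = dict(trace.state_before(producer))
    state[info] = preferred_value
    if all(rel(state) for rel in producer.conceptual_rels):
        # Both the actual and the preferred value are "legal" for the producer: either restrict
        # the domain knowledge that admitted the actual value, or insert a selection task.
        return [("modify-domain-relation", producer, info),
                ("insert-selection-task", producer, info)]
    # Otherwise the preferred value violates some relation of the producer: regress the blame
    # into the producer's own inputs that the violated relations mention.
    return [("regress-into-inputs", producer, info)]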
5.3 Redesigning the Problem Solver
Knowledge Reorganization
Router's neighborhoods are concepts used to organize its knowledge of the navigation space into small localities in order to enable efficient search. Thus, the particular set of intersections belonging in a neighborhood is not defined by the state of the world but rather by the needs of problem solving. Autognostic can therefore implement the above modification without any consequences for the correctness of its world model. In this example, the exclusion of the intersection (10th & center) from the set of intersections of neighborhood z1, belongs-in(?X z1), will make z1 impossible as a value for the information initial-zone. Thus the alternative value za will be used and the desired path will be produced. When Autognostic modifies a relation, it also propagates its consequences to other relations which are interrelated with it. As shown in Figure 2, there is a crossing constraint between the relations belongs-in and street-in-a-zone. If an intersection is deleted from a neighborhood, then it should also be deleted from the descriptions of all the streets in this neighborhood that pass through it. Thus, the exclusion of (10th & center) from belongs-in(?X z1) also means that center street is removed from the set of streets intersecting with 10th street in zone z1.
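The reorganization and its propagation can be pictured with the sketch below; the dictionary layout for belongs-in and street-in-a-zone is our own assumption for illustration.

def exclude_intersection(belongs_in, street_in_zone, zone, intersection):
    """belongs_in: dict mapping a zone to its set of intersections;
    street_in_zone: dict mapping (street, zone) to the set of streets crossing it in that zone;
    intersection: the set of street names meeting at the intersection being excluded."""
    belongs_in[zone].discard(intersection)
    # crossing constraint: the streets of the excluded intersection no longer cross each other in this zone
    for street in intersection:
        others = set(intersection) - {street}
        key = (street, zone)
        if key in street_in_zone:
            street_in_zone[key] -= others

# Example from the text: excluding (10th & center) from z1 also removes center from the
# streets intersecting 10th in z1.
belongs_in = {"z1": {frozenset({"10th", "center"})}}
street_in_zone = {("10th", "z1"): {"center", "atlantic"}, ("center", "z1"): {"10th"}}
exclude_intersection(belongs_in, street_in_zone, "z1", frozenset({"10th", "center"}))
assert "center" not in street_in_zone[("10th", "z1")]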
Insertion of a Selection Task
The motivation behind inserting a selection task after the elaboration task in Router's task structure is to enable Router to reason about the possible values for initial-zone and select the most appropriate one. As shown in Figure 2, the SBF model of the problem solver's reasoning specifies the type of world object for each type of information in the task structure. Moreover, for each type of world object, the model specifies the domain relations applicable to it. Autognostic uses this knowledge, along with the specific values (actual and preferred) of the information type to be selected, to discover a relation which can be used to differentiate between these values. If there is such a relation, then Autognostic can use it as a conceptual relation for the new task to be inserted in the task structure. In our example, Autognostic knows that the covers relation applies to neighborhoods, of which the information initial-zone is an instance. Given the actual and the alternative values for the initial-zone, z1 and za respectively, Autognostic discovers that covers(z1 za) holds. It then hypothesizes that this relation can be used as a differentiating criterion between possible alternative values for the initial-zone. Thus, in the set of subtasks of the route-planning method, it inserts, after elaboration, the selection-after-elaboration subtask, with input initial-zone, output selected-initial-zone, and conceptual relation covers(initial-zone selected-initial-zone). Given a specific path-planning problem, the new task reasons about the possible values of the initial-zone in the context of this problem, and selects the most specific one. In order for the task structure to be consistent after this modification, Autognostic needs to perform some more modifications: (i) change the syntactic type of the initial-zone from a single zone element to a list of elements, (ii) change (reprogram) the function elaboration-func to return a list of values instead of a single one, and (iii) modify the elements of the task structure which originally came after the elaboration task to use the information type selected-initial-zone instead of initial-zone. Autognostic can autonomously perform modifications (i) and (iii) but not (ii), which is currently performed by a human programmer at the suggestion of Autognostic.
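The two steps involved, discovering a differentiating relation and using it in the inserted subtask, can be sketched as follows; the relation table and the "most specific" selection rule are our own rendering of the behavior described above.

def discover_differentiating_relation(relations, actual, preferred):
    """relations: dict mapping a relation name to the set of (x, y) pairs for which it holds."""
    for name, pairs in relations.items():
        if (actual, preferred) in pairs:
            return name                                   # e.g. covers, since covers(z1, za) holds
    return None

def selection_after_elaboration(candidate_zones, covers_pairs):
    """Select the most specific zone: one that does not cover any other candidate."""
    for zone in candidate_zones:
        if not any((zone, other) in covers_pairs for other in candidate_zones if other != zone):
            return zone
    return candidate_zones[0]

# Example from the text: elaboration now returns a list of zones, and the new subtask selects za.
relations = {"covers": {("z1", "za")}}
assert discover_differentiating_relation(relations, "z1", "za") == "covers"
assert selection_after_elaboration(["z1", "za"], relations["covers"]) == "za"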
5.4 Evaluating the Modified Problem Solver
After Router's task structure is modified, Autognostic evaluates the appropriateness of the revision by presenting Router with the problem that led to the failure before. As Router solves the same problem, Autognostic goes back to its monitoring task. In our example, Router produces the desired path the second time, after both modifications. Thus, any one of these modifications can be evaluated as successful. Had Router failed once again to deliver the desired path, Autognostic would have another learning opportunity, and it would repeat its blame-assignment-and-learning task.
6 Discussion

In this section, we briefly compare our work with related lines of research: reflective reasoning and functional models of abstract devices. Reflective reasoning has received much attention in AI research on learning (e.g., Davis 1977, Davis 1980, Mitchell et al. 1989, Carbonell, Knoblock and Minton 1989, Kuokka 1990, Freed, Krulwich, Birnbaum and Collins 1992, Ram and Cox 1992, Wielinga, Schreiber and Breuker 1992). Much of the earlier AI work has used reasoning traces for reflection instead of functional models of reasoning. Also, this work has focused on learning tasks such as acquiring new world knowledge (Davis 1977, Davis 1980), learning heuristics for method (or operator) selection (Mitchell et al. 1989, Kuokka 1990), acquiring new concepts such as the concept of a 'fork' in chess (Freed, Krulwich, Birnbaum and Collins 1992), or reorganizing the concepts in memory (Ram and Cox 1992). Modifying the task structure of a reasoner has proved to be a more difficult problem because, like the reorganization of knowledge, it can have non-local consequences. For example, if a new task is inserted in the task structure of a problem solver, the task insertion might result in a recharacterization of other tasks in the task structure. This is illustrated by the example described above, where the insertion of the selection-after-elaboration task in Router's task structure results in a recharacterization of the elaboration task. Reasoning traces by themselves do not in general provide sufficient guidance in propagating the changes caused by a modification to the task structure or the organization of knowledge. In fact, reasoning traces seem incapable of enabling even the suggestion of changes to the task structure of a reasoner. Autognostic uses functional models of problem solving in conjunction with reasoning traces to address some of these issues. It uses the SBF models together with the reasoning traces for the tasks of monitoring, blame assignment, and redesign. This enables Autognostic to handle the kinds of learning tasks that earlier work had addressed, such as acquiring new world knowledge, learning new criteria for method selection, acquiring new problem-solving concepts, and reorganizing the concepts in memory. The SBF models in particular enable it to handle learning tasks involving certain kinds of changes to the task structure of the problem solver. Functional reasoning about abstract devices has also received some attention in AI research (e.g., Allemang 1990, Weintraub 1991, Johnson 1993). Allemang, for example, has used the Functional Representation framework to model computer programs, and has shown how such models can be used for program verification. Weintraub has used the Functional Representation framework to model the operation of a diagnostic system, and has shown how such a model can be used for assigning blame when the knowledge-based system produces an incorrect diagnosis. Autognostic too uses the SBF model for blame assignment, but its process
for assigning blame is different from Weintraub's. In his work, the information state transitions associated with each task of the knowledge-based system are annotated by associative rules which indicate the likely sources of error. Moreover, the knowledge-based system being diagnosed has a deterministic task structure, i.e., each task is accomplished by a single method. In contrast, Autognostic permits the applicability of multiple methods for accomplishing the same task. It uses the derivational trace of problem solving in conjunction with the SBF model to identify the sources of error. In Johnson's work, the functional model of a student's potential problem-solving behavior is used by a tutoring system to provide help. She monitors the student's actions, uses the functional model to form hypotheses about the method being used by the student, and thus provides context-sensitive help. Autognostic's monitoring of Router's behavior is similar to the tutor's monitoring of the student. Autognostic takes the idea of modeling problem solvers a step further: it is able to redesign the problem solver after identifying the causes of its failure. Its SBF model provides the vocabulary for indexing repair plans that correspond to different types of failure causes. In addition, the semantics of the model enable the modification of the planner in a manner that maintains the consistency of problem solving. Both Router and Autognostic are operational systems. We have evaluated Autognostic for several learning tasks in Router's task domain, for example, the acquisition of new world knowledge, certain kinds of reorganization of the world knowledge, and some kinds of modifications to the task structure along the lines described above. In order to evaluate the generality of Autognostic's language for describing how a problem solver works and its process for reflective reasoning, we are presently using it in the task domain of engineering design.
7 Conclusions

In this paper, we described how a problem solver's knowledge and reasoning can be modeled in the language of structure-behavior-function models within the framework of task-method-subtask structures. The SBF model of a problem solver describes its reasoning process as a non-deterministic sequence of information transformations, through which the task's output information is produced from its input. For each information transformation, the model makes explicit its contribution to the overall accomplishment of the task and the conditions under which it is appropriate. We also described a process model for reflective reasoning. We showed how the reflective process uses the SBF model of the problem solver's reasoning for three tasks:
In monitoring, the SBF model of the problem solver provides a language for interpreting the problem solver's reasoning steps and generates expectations regarding their outcomes.
In blame assignment, the SBF model of the problem solver, along with the trace of the reasoning on a specific problem, enables Autognostic to localize the cause of the failure to some element of the problem solver's task structure. Moreover, the language of SBF models provides a vocabulary for indexing redesign methods.
In redesign, the task-structure framework for problem solving gives rise to a taxonomy of learning tasks, i.e., modifications to the problem solver's reasoning that can improve its performance. Moreover, the semantics of the language of SBF models enable the modification of the problem solver in a manner that maintains the consistency of problem solving.
We showed how Autognostic's functional models of Router's knowledge and reasoning enable it to reorganize Router's world knowledge and modify its task structure by inserting new tasks. Although we described the functional models and the reflective processes in the context of a particular problem solver, a planner called Router, neither the models nor the processes are particular to Router or to path planning.
References

[1] Abu-Hanna, A., Benjamins, V.R., and Jansweijer, W.N.H. 1991. Device Understanding and Modeling for Diagnosis. IEEE Expert, 6(2):26-32.
[2] Allemang, D. 1990. Understanding Programs as Devices. PhD Thesis, The Ohio State University.
[3] Baker, L., and Brown, A.L. 1984. Metacognitive skills of reading. In D. Pearson (ed.), A Handbook of Reading Research, New York: Longman.
[4] Carbonell, J.G., Knoblock, C.A., and Minton, S. 1989. Prodigy: An Integrated Architecture for Planning and Learning. In Architectures for Intelligence, Lawrence Erlbaum.
[5] Chandrasekaran, B. 1987. Towards a functional architecture for intelligence based on generic information processing tasks. In Proc. of the Tenth IJCAI, 1183-1192, Milan.
[6] Chandrasekaran, B. 1989. Task Structures, Knowledge Acquisition and Machine Learning. Machine Learning, 4:341-347.
[7] Clancey, W.J. 1985. Heuristic Classification. Artificial Intelligence, 27(3):289-350.
[8] Davis, R. 1977. Interactive transfer of expertise: Acquisition of new inference rules. Artificial Intelligence, 12:121-157.
[9] Davis, R. 1980. Meta-Rules: Reasoning about Control. Artificial Intelligence, 15:179-222.
[10] Flavell, J.H. 1971. First discussant's comments: What is memory development the development of? Human Development, 14:272-278.
[11] Franke, D.W. 1991. Deriving and Using Descriptions of Purpose. IEEE Expert, April.
[12] Freed, M., Krulwich, B., Birnbaum, L., and Collins, G. 1992. Reasoning about performance intentions. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, 7-12.
[13] Forbus, K. 1993. Qualitative Reasoning about Function: A Progress Report. In Working Notes of the Reasoning About Function Workshop, AAAI-93, pp. 31-36.
[14] Gero, J.S., Lee, H.S., and Tham, K.W. 1991. Behaviour: A Link between Function and Structure in Design. In Proceedings of the IFIP WG5.2 Working Conference on Intelligent CAD, pp. 201-230.
[15] Goel, A. 1989. Integration of Case-Based Reasoning and Model-Based Reasoning for Adaptive Design Problem Solving. PhD Thesis, The Ohio State University.
[16] Goel, A. 1991. A Model-Based Approach to Case Adaptation. In Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, Chicago, August 7-10, 1991, pp. 143-148, Hillsdale, NJ: Lawrence Erlbaum.
[17] Goel, A. and Callantine, T. 1992. An Experience-Based Approach to Navigational Path Planning. In Proceedings of the IEEE/RSJ International Conference on Robotics and Systems, Raleigh, North Carolina, July 7-10, 1992, IEEE Press, Volume II, pp. 705-710.
[18] Goel, A., Gomez, A., Callantine, T., Donnellan, M., and Santamaria, J. 1993. From Models to Cases: Where do cases come from and what happens when a case is not available? In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society, Boulder, Colorado, 1993, pp. 474-480.
[19] Johnson, K. 1993. Exploiting a Functional Model of Problem Solving for Error Detection in Tutoring. PhD Thesis, The Ohio State University.
[20] Kluwe, R.H. 1982. Cognitive Knowledge and Executive Control: Metacognition. In D.R. Griffin (ed.), Animal Mind - Human Mind, Springer-Verlag, Berlin.
[21] Kumar, A. and Upadhyaya, S. 1992. Framework for Function-Based Diagnosis. Technical Report, State University of New York at Buffalo, Buffalo, NY 14260.
[22] Kuokka, D.R. 1990. The Deliberative Integration of Planning, Execution, and Learning. Technical Report CMU-CS-90-135, Computer Science, Carnegie Mellon University.
[23] Marr, D. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman, San Francisco.
[24] McDermott, J. 1988. Preliminary steps toward a taxonomy of problem-solving methods. In Automating Knowledge Acquisition for Expert Systems, Sandra Marcus (ed.), Kluwer Academic Publishers.
[25] Mitchell, T., Allen, J., Chalasani, P., Cheng, J., Etzioni, O., Ringuette, M., and Schlimmer, J. 1989. Theo: A Framework for Self-Improving Systems. In Architectures for Intelligence, Lawrence Erlbaum.
[26] Newell, A. 1982. The Knowledge Level. Artificial Intelligence, 19(2):87-127.
[27] Piaget, J. 1971. Biology and Knowledge. Chicago: University of Chicago Press.
[28] Ram, A. and Cox, M.T. 1992. Introspective Reasoning Using Meta-Explanations for Multistrategy Learning. To appear in Machine Learning: A Multistrategy Approach IV, R.S. Michalski and G. Tecuci (eds.), Morgan Kaufmann, San Mateo, CA.
[29] Price, C.J. and Hunt, J.E. 1992. Automating FMEA through multiple models. In Research and Development in Expert Systems VIII, I. Graham (ed.), pp. 25-39.
[30] Sembugamoorthy, V. and Chandrasekaran, B. 1986. Functional Representation of Devices and Compilation of Diagnostic Problem-Solving Systems. In Experience, Memory and Problem Solving, J. Kolodner and C. Riesbeck (eds.), Lawrence Erlbaum, Hillsdale, NJ, pp. 47-73.
[31] Steels, L. 1990. Components of Expertise. AI Magazine, 11(2):30-49.
[32] Sticklen, J. and Chandrasekaran, B. 1989. Integrating classification-based compiled level reasoning with function-based deep level reasoning. In Causal AI Models: Steps Towards Applications, W. Horn (ed.), Hemisphere Publishing Corporation, pp. 191-220.
[33] Stroulia, E., Shankar, M., Goel, A., and Penberthy, L. 1992. A Model-Based Approach to Blame Assignment in Design. In Proc. Second International Conference on AI in Design.
[34] Stroulia, E. and Goel, A. 1992. An Architecture for Incremental Self-Adaptation. In Proc. of the ML-92 Workshop on Computational Architectures for Supporting Machine Learning and Knowledge Acquisition.
[35] Umeda, Y., Takeda, H., Tomiyama, T., and Yoshikawa, H. 1990. Function, Behaviour, and Structure. In Applications of Artificial Intelligence in Engineering, Vol. 1, Design, Proceedings of the Fifth International Conference, Gero (ed.), Springer-Verlag, Berlin, pp. 177-193.
[36] Weintraub, M. 1991. An Explanation-Based Approach to Assigning Credit. PhD Thesis, The Ohio State University.
[37] Wielinga, B.J., Schreiber, A.Th., and Breuker, J.A. 1992. KADS: A modelling approach to knowledge engineering. Knowledge Acquisition, 4(1). Special issue "The KADS approach to knowledge engineering".
Figure Captions

Figure 1: Fragment of Autognostic's model of Router's problem solving
Figure 2: Fragment of Autognostic's meta-model of Router's world knowledge
Figure 3: The Architecture of a Reflective Reasoner and Learner
Figure 4: Router's world
Figure 5: Autognostic's Blame Assignment Algorithm
[Figure 1 appears here. It shows the route-planning task, with its input information (initial location, goal location), its output information (path), and its conceptual relations equal(initial-node(path), initial-location) and equal(final-node(path), goal-location). The route-planning method decomposes the task into the subtasks elaboration (an instance of hierarchical classification), retrieval, search, and storage; the search subtask can be accomplished by the intrazonal, model-based, or case-based method, and the intrazonal method, applicable when the goal location belongs in the initial zone, decomposes search into the initialization-of-search and increase-of-path subtasks.]
Figure 1: Fragment of Autognostic's model of Router's problem solving
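For concreteness, the fragment depicted in Figure 1 could be written down as data roughly as follows. The dictionary-and-lambda encoding, the key names, and the regress helpers are hypothetical conventions introduced here purely for illustration; they are not Autognostic's actual representation.

    # Hypothetical encoding (illustration only; not Autognostic's representation)
    # of the route-planning fragment of Figure 1.  A task lists the methods that
    # can accomplish it and the conceptual relations between its output and input
    # information; a method lists its subtasks and its applicability condition.
    # A conceptual relation records both a test ("holds") of whether the relation
    # is satisfied and a "regress" function mapping a preferred output value back
    # to the input value it implies.
    route_planning_model = {
        "route-planning": {
            "name": "route-planning",
            "by_methods": ["route-planning-method"],
            "conceptual_rels": [
                # equal(initial-node(path), initial-location); a path is assumed
                # to be a sequence of node names.
                {"name": "initial-node-equal", "output": "path", "input": "initial-location",
                 "holds": lambda path, loc: path[0] == loc,
                 "regress": lambda path: path[0]},
                # equal(final-node(path), goal-location)
                {"name": "final-node-equal", "output": "path", "input": "goal-location",
                 "holds": lambda path, loc: path[-1] == loc,
                 "regress": lambda path: path[-1]},
            ],
        },
        "route-planning-method": {
            "subtasks": ["elaboration", "retrieval", "search", "storage"],
        },
        "search": {
            "name": "search",
            "by_methods": ["intrazonal-method", "model-based-method", "case-based-method"],
        },
        "intrazonal-method": {
            # UNDER-CONDITION: belongs-in(goal-location, initial-zone); a zone is
            # assumed to be a collection of locations.
            "condition": lambda goal_location, initial_zone: goal_location in initial_zone,
            "subtasks": ["initialization-of-search", "increase-of-path"],
        },
    }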
[Figure 2 appears here. It relates the information types of the path-planning task (initial location, goal location, initial zone, goal zone) to the domain objects in which they are instantiated (intersection, street, neighborhood, path), the relations that apply to these objects (starts-in, crossing, covers, street-in-zone), and the constraints, such as belongs-in, that apply to the relations.]
Figure 2: Fragment of Autognostic's meta-model of Router's world knowledge
[Figure 3 appears here. At the reasoning level, the path planner maps problems to solutions using its domain knowledge, that is, the world model and the path memory. At the reflective meta-reasoning level, the SBF model of path planning, which describes the planner's tasks (such as elaborate, retrieve, search, adapt, store, and find-LCZ), its methods, its world model, and its path memory, supports three processes: monitoring, which records a problem-solving trace; blame assignment, which uses the trace and the feedback to produce causes of failure and suggested modifications; and redesign, which applies the learning methods.]
Figure 3: The Architecture of a Reflective Reasoner and Learner
[Figure 4 appears here. It depicts the map of named streets and intersections (e.g., ferst, hemphill, techwood, home-park, 10th), organized into the zones Z1 and ZA through ZE, over which Router plans its paths.]
Figure 4: Router's world
ASSIGN-BLAME-UNDESIRED-VALUE(i, v_actual(i), v_preferred(i), t, PS-trace)

  Assimilate the feedback value v_preferred(i) for information i.
  Evaluate whether the conceptual relations of task t hold true for the feedback value v_preferred(i).

  If there are some conceptual relations CR_false of task t, relating information i to some of the
  input information of task t, input(t), which do not hold true for the preferred value v_preferred(i),
  then for all rel in CR_false, rel(i, i_in) with i_in in input(t):
      If i_in is not in input(t_problem), i.e., i_in is not part of the overall task input,
      then infer potential alternative values of i_in:
          If the regression of v_preferred(i) suggests v_preferred(i_in) =/= v_actual(i_in),
          then move the focus of the blame-assignment process:
              ASSIGN-BLAME-UNDESIRED-VALUE(i_in, v_actual(i_in), v_preferred(i_in), t', PS-trace),
              where t' = produced-by(i_in);
          else include in Suggested-Modifications:
              Make-Relation-True: include the tuple (v_preferred(i), v_actual(i_in)) in the relation rel,
              so that the feedback value is no longer in violation, and
              Modify-Task-Semantics: substitute rel with a relation rel' for which the tuple
              (v_preferred(i), v_actual(i_in)) is true.
  else, task t can possibly produce both values, v_actual(i) and v_preferred(i), for information i;
      include in Suggested-Modifications:
          Make-Relation-False: exclude the tuple (v_actual(i), v_actual(i_in)) from the relation rel,
          so that the actual value is no longer "legal", and
          Insert-Selection-Task: insert a task to select among the alternative values for i,
          v_actual(i) and v_preferred(i).

  If task t is a leaf task, then exit.

  If task t has a prototype, then identify errors in the prototype task t' = is-a(t),
  of which task t is an instance:
      ASSIGN-BLAME-UNDESIRED-VALUE(i, v_actual(i), v_preferred(i), t', PS-trace).

  If task t was accomplished by a method M, then refine the focus of the blame-assignment process:
      ASSIGN-BLAME-UNDESIRED-VALUE(i, v_actual(i), v_preferred(i), t', PS-trace),
      where t' is in subtasks(M) and t' = produced-by(i).

  If Suggested-Modifications contains only modifications of type Modify-Task-Semantics,
  then explore the applicability of alternative methods.

Figure 5: Autognostic's Blame Assignment Algorithm
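The control structure of the algorithm in Figure 5 can also be rendered in executable form. The following Python sketch is an illustration only, not Autognostic's implementation: it reuses the hypothetical dictionary encoding sketched after Figure 1, its trace accessors (values, overall_inputs, produced_by, method_used) are assumptions introduced here, and it omits the final step of exploring the applicability of alternative methods.

    # Minimal sketch of the blame-assignment regression (illustration only; not
    # Autognostic's code).  `tasks` maps task names to task dictionaries in the
    # hypothetical format of the Figure 1 sketch; `trace` is a dictionary with
    # the recorded values, the overall task inputs, the producer of each piece
    # of information, and the method used for each task.
    def assign_blame_undesired_value(i, v_actual, v_preferred, task, trace, tasks, suggestions):
        # Conceptual relations of the task that relate i to one of its inputs
        # and do NOT hold for the preferred (feedback) value.
        false_rels = [r for r in task.get("conceptual_rels", [])
                      if r["output"] == i
                      and not r["holds"](v_preferred, trace["values"][r["input"]])]

        if false_rels:
            for rel in false_rels:
                i_in = rel["input"]
                if i_in not in trace["overall_inputs"]:
                    # i_in is produced internally: infer the value it should have had.
                    v_in_pref = rel["regress"](v_preferred)
                    v_in_act = trace["values"][i_in]
                    if v_in_pref != v_in_act:
                        # Move the focus of blame to the task that produced i_in.
                        producer = tasks[trace["produced_by"][i_in]]
                        assign_blame_undesired_value(i_in, v_in_act, v_in_pref,
                                                     producer, trace, tasks, suggestions)
                    else:
                        # The input value was fine, so the relation itself (the
                        # task semantics) is suspect and should be revised.
                        suggestions.append(("make-relation-true", rel["name"],
                                            (v_preferred, v_in_act)))
                        suggestions.append(("modify-task-semantics", task["name"], rel["name"]))
        else:
            # The task can legally produce both values: suggest forbidding the
            # actual value and inserting a task to select among the alternatives.
            suggestions.append(("make-relation-false", task["name"], v_actual))
            suggestions.append(("insert-selection-task", task["name"], (v_actual, v_preferred)))

        if task.get("leaf"):
            return suggestions                        # leaf task: stop regressing
        if task.get("prototype"):
            # The error may lie in the prototype task of which this task is an instance.
            assign_blame_undesired_value(i, v_actual, v_preferred,
                                         tasks[task["prototype"]], trace, tasks, suggestions)
        method = trace["method_used"].get(task["name"])
        if method:
            # Refine the focus: descend into the method's subtask that produced i.
            subtask = trace["produced_by"].get(i)
            if subtask in method["subtasks"]:
                assign_blame_undesired_value(i, v_actual, v_preferred,
                                             tasks[subtask], trace, tasks, suggestions)
        return suggestions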