In D. B. Leake & E. Plaza (Eds.), Case-Based Reasoning Research and Development: Second International Conference on Case-Based Reasoning (pp. 211-222). Berlin: Springer-Verlag.

An Explicit Representation of Reasoning Failures

Michael T. Cox
Computer Science Department, Carnegie Mellon University
Pittsburgh, PA 15213-3891
[email protected]

Abstract. This paper focuses upon the content and the level of granularity at which representations for the mental world should be placed in case-based explainers that employ introspective reasoning. That is, for a case-based reasoning system to represent thinking about the self, about the states and processes of reasoning, at what level of detail should one attempt to declaratively capture the contents of thought? Some claim that a mere set of two mental primitives is sufficient to represent the utterances of humans concerning verbs of thought such as “I forgot his birthday.” Alternatively, many in the CBR community have built systems that record elaborate traces of reasoning, keep track of knowledge dependencies or inference, or encode much metaknowledge concerning the structure of internal rules and defaults. The position taken here is that a system should instead capture enough detail to represent causally a common set of reasoning-failure symptoms. I propose a simple model of expectation-driven reasoning, derive a taxonomy of reasoning failures from the model, and present a declarative representation of the failure symptoms that have been implemented in a CBR simulation. Such representations enable a system to explain reasoning failures by mapping from symptoms of the failures to the causal factors involved.

1 Introduction

An early tenet of artificial intelligence is that reasoning about the world is facilitated by declarative knowledge structures that represent salient aspects of the world. An intelligent system can better understand and operate in such a represented world than in one in which knowledge is encoded procedurally or implicitly. The system may inspect and manipulate such structures, it can be more easily modified and maintained, and such representations provide computational uniformity. Likewise, if a system is to reason about itself and its own knowledge (for instance, when reasoning about its mistakes as a precondition to learning), explicit representations of its reasoning can facilitate the process. The goal of this paper, therefore, is to outline a declarative representation of reasoning failure and to posit a level of granularity for such representations. A case-based explainer can thus reason about its own memory system when it forgets, and can reason about its knowledge and inferences when it draws faulty conclusions. A case-based learner can then use such explanations as it attempts to repair its knowledge in memory.

To represent reasoning failures declaratively is to create a second-order representation. That is, the representation is not about the world; rather, it is a representation about the reasoner who reasons about the world. For example, a case-based explainer such as AQUA (Ram, 1994) uses abstract patterns of causality (i.e., abstract cases) to explain and understand events in the world of an input story. The explanation it generates about why a character in the story performs a particular action is a first-order representation. However, given that AQUA’s case-base is not complete (or necessarily consistent), such explanations may fail to predict the future actions of the character.

When the expectation generated by an explanation then fails, the task of the learner is to examine a trace of the reasoning that generated the explanation and to explain the failure. The Meta-AQUA system (Cox, 1996) uses second-order representations to explain how and why its story-understanding component generates faulty explanations. In this sense, these cases are explanations of explanation failure; hence, such representations are called meta-explanations.

The choice of story understanding as the reasoning task is somewhat arbitrary; the representations could be applied equally well to learning from failures of planning or design. This is possible because we derive the classes of reasoning failure from a simple, task-independent model of expectation-driven reasoning. However, because the aim of this paper is to explain and to catalog the kinds of failure representations used by the system, we explain neither the story-understanding process that generates reasoning failures nor the learning process that uses these representations to learn from such failures. Instead, a companion paper in this volume (Cox, 1997) sketches the performance and learning tasks and, moreover, empirically evaluates the usefulness of these representations to the learning system. Further details of the implementation can be found in Cox (1996) and Cox and Ram (1995).

The focus of this paper is upon the representations that allow a system to explain a reasoning failure as a precursor to learning. Section 2 begins by discussing the alternatives when representing forgetting, a variant of an impasse failure. Section 3 then proposes a simple model of expectation-driven reasoning, derives a taxonomy of failure symptoms from the model, and presents a representation for each class. Five types of reasoning-failure symptoms are presented: contradiction, unexpected success, impasse, surprise, and false expectation. Section 4 concludes with a brief discussion.

2 Representing Forgetting: An Example

In order to use representations of mental terms effectively, a system should consider the structure and semantics of the representation. For instance, it is not useful simply to possess a predicate such as “forget” or “remember” when trying to understand the retrieval failure of memory item M:

    Forget(John, M) ≡ ¬Remember(John, M)

The non-occurrence of a mental event is not well represented by the simple negation of a predicate representing an event that did occur. Because the predicates involve memory, it is helpful to propose instead the existence of two contrasting sets of axioms: the background knowledge (BK) of the person, P, and the foreground knowledge (FK) representing the currently active propositions of the person. A logical interpretation of forget is then expressed in (1):

    (1) ∃M (M ∈ BK_P) ∧ (M ∉ FK_P)

With such a representation, one can also express the proposition that the person P knows that he has forgotten something; that is, the memory item, M, is on the tip of person P’s tongue. P knows that M is in his background knowledge, but cannot retrieve it into his foreground knowledge:

    (2) ∃M ((M ∈ BK_P) ∈ FK_P) ∧ (M ∉ FK_P)
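As a minimal illustration of this set-based reading (a sketch only; the set names and helper functions are hypothetical, not part of any implementation discussed here), interpretations (1) and (2) amount to membership tests over two stores:

    # Background knowledge (BK): everything the agent has encoded in long-term memory.
    # Foreground knowledge (FK): propositions currently active (working memory).
    # Names and helpers are illustrative assumptions, not Meta-AQUA code.

    def forgotten(m, BK, FK):
        """Interpretation (1): m is known (in BK) but not currently retrieved (not in FK)."""
        return m in BK and m not in FK

    def tip_of_tongue(m, BK, FK):
        """Interpretation (2): the belief 'm is in my BK' is itself active in FK,
        yet m itself cannot be brought into FK."""
        return ("in-BK", m) in FK and m not in FK

    BK = {"Ms-birthday"}
    FK = {("in-BK", "Ms-birthday")}            # "I know that I know it ..."
    assert forgotten("Ms-birthday", BK, FK)    # ... but it will not come to mind
    assert tip_of_tongue("Ms-birthday", BK, FK)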

To add these interpretations is to add content to the representation, rather than simply semantics. This content is the part of the representation that determines an ontological category (i.e., what ought to be represented), and it begins to provide epistemological commitments (e.g., that the sets BK and FK are necessary representational distinctions). However, meaning is also determined by the inferences a system can draw from a representation, and as it stands the forget predicate provides little assistance to a reasoning system when trying to understand or explain what happens when it forgets some memory item. This is not to say that logic cannot represent such a mental “non-event”; rather, it simply illustrates the difficulty of constructing an adequate representation of forgetting.

An alternative approach was undertaken by Schank, Goldman, Rieger and Riesbeck (1972) in order to specify representations for all verbs of thought in support of natural language understanding. They wished to represent what people say about the mental world, rather than all facets of a complex memory and reasoning model. They therefore used only two mental ACTs, MTRANS (mental transfer of information from one location to another) and MBUILD (mental building of conceptualizations), along with a few support structures such as MLOC (mental locations, e.g., working memory, central processor and long-term memory).¹ As a consequence, the Schank et al. representation of forgetting is as depicted in Figure 1.

Fig. 1. CD representation of forgetting. o = mental object or conceptualization; R = Recipient; CP = Central Processor; LTM = Long-Term Memory.

John does not mentally transfer a copy of the mental object, M, from the recipient case of John’s long-term memory to his central processor. Such a representation does provide more structure than the predicate forms above, and it supports inference (e.g., if M was an intention to do some action, as opposed to a fact, then the result of such an action was not obtained; Schank, 1975, p. 60). But the CD formalism cannot distinguish between the case in which John forgot because M was not in his long-term memory² and a case of forgetting due to missing associations between the cues in the environment and the indexes with which M was encoded in memory. It does not provide enough information to explain the failure fully.

¹ Schank et al. (1972) actually referred to working memory as immediate memory and to the central processor as a conceptual processor. I have used some license to keep the terms in contemporary language. Moreover, Schank et al. used a third primitive ACT, CONC, which was to conceptualize or think about something without building a new conceptualization, but Schank (1975) dropped it. For the purposes of this paper, however, the differences do not matter.

² I am ignoring the issue of whether human memory is ever really lost. A computer, however, can certainly delete memories.

An alternative representation exists for such mental phenomena based upon Explanation Pattern (XP) theory (Cox & Ram, 1995; Ram, 1993; Schank, 1986; Schank, Kass, & Riesbeck, 1994). A Meta-Explanation Pattern (Meta-XP) is a directed graph whose nodes represent mental states and processes (see Figure 2). Enables links connect states with the processes for which they are preconditions, results links connect a process with its result, and initiates links connect two states. Numbers on the links indicate relative temporal sequence. Furthermore, attributes and relations are represented explicitly in these graphs. For instance, the Truth attribute of the expectation E in Figure 2 has the value outFK (the interpretation of this value will be explained presently). This relation is represented explicitly by a node marked Truth having domain E and co-domain outFK.³

³ This is isomorphic to mathematical functions that have a domain and a range.
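One way to make this graph vocabulary concrete is as a small typed structure. The sketch below is an illustrative assumption: the class and field names are my own, not the Meta-AQUA data structures; it simply encodes the nodes and links of the forgetting Meta-XP described above.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        attributes: dict = field(default_factory=dict)   # e.g. {"Truth": "outFK"}

    @dataclass
    class Link:
        kind: str          # "mentally-enables" | "mentally-results" | "mentally-initiates"
        src: Node
        dst: Node
        order: int         # relative temporal sequence (the numbers on the links)
        negated: bool = False   # the "strike bar": the result did NOT occur

    # Nodes of the forgetting Meta-XP (cf. Figure 2).
    G, C = Node("goal"), Node("cues")
    I, M = Node("memory-index"), Node("memory-item")   # what retrieval was given / sought
    retrieval = Node("memory-retrieval")
    new_input = Node("new-input")                      # later input from the environment
    E = Node("expected-outcome", {"Truth": "outFK"})   # never entered foreground knowledge
    A = Node("actual-outcome")
    failure = Node("retrieval-failure")

    links = [
        Link("mentally-enables", G, retrieval, 1),
        Link("mentally-enables", C, retrieval, 1),
        Link("mentally-results", retrieval, E, 2, negated=True),   # no expectation produced
        Link("mentally-initiates", E, failure, 3),                 # E being outFK signals failure
        Link("mentally-results", new_input, A, 4),                 # further input reveals A
    ]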

Fig. 2. Meta-XP representation of forgetting. A = actual; E = expected; G = goal; C = cues; M = memory item; I = memory index.

The Meta-XP structure of Figure 2 shows that memory retrieval could not produce an expected outcome (hence the strike bar at 2), whereas further input reveals the actual outcome (at 4).

More formally, the structure represents a memory-retrieval attempt, enabled by goal, G, and cues, C, that tried to retrieve some memory object, M, given an index, I, but that did not result in an expectation (or interpretation), E, which should have been equal to some actual item, A. The fact that E is out of the set of beliefs with respect to the reasoner’s foreground knowledge (i.e., is not present in working memory) initiates the knowledge that a retrieval failure has occurred. This representation captures an entire class of memory failures: failure due to a missing index, I; failure due to a missing object, M; failure because of a missing retrieval goal, G;⁴ or failure due to not attending to the proper cues, C, in the environment. Such a representation allows the system to reason about these various causes of forgetting; it can inspect the structural representation of a memory failure and pose hypotheses about the reasons behind the failure. This ability facilitates explanation because it allows a system to map the failure symptoms to the faults that caused the failure during self-diagnosis and then to generate a learning goal. For example, if the explanation of forgetting is that the item M is missing, then the goal to acquire a new case is licensed. If the explanation is that I is missing, then a memory-reorganization goal is appropriate instead (see Cox & Ram, 1995).
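A minimal sketch of this symptom-to-fault-to-goal mapping follows; the fault and goal names are illustrative paraphrases, not the implemented vocabulary of Meta-AQUA.

    # Hypothetical mapping from the explained cause of forgetting to a learning goal.
    LEARNING_GOALS = {
        "missing-memory-item":    "acquire a new case for M",
        "missing-index":          "reorganize memory (index M under the cues C)",
        "missing-retrieval-goal": "learn to generate the retrieval goal G in this context",
        "unattended-cues":        "learn to attend to the relevant cues C",
    }

    def post_learning_goal(explained_fault: str) -> str:
        """Map a self-diagnosed fault behind a retrieval failure to a learning goal."""
        return LEARNING_GOALS.get(explained_fault, "explain the failure further")

    # Forgetting explained by a missing index licenses memory reorganization:
    print(post_learning_goal("missing-index"))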

However, to represent reflective thoughts about reasoning, complete representations of all inferences and of the memory processes that generate them, along with a complete enumeration of all knowledge dependencies, are not required. Humans certainly cannot maintain a logically complete and consistent knowledge base, nor can they perform full dependency-directed backtracking (Stallman & Sussman, 1977) or reason maintenance for belief revision (Doyle, 1979); rather, they depend on failures of reasoning and memories of past errors to indicate where the inconsistencies in their knowledge lie. That is, as knowledge is locally updated, memory will often become globally inconsistent and partially obsolete. It is at the point at which a system (either human or machine) attempts to reuse obsolete information that the inconsistency becomes most apparent.⁵ People often do know when they err: if their conclusions contradict known facts, if plans go wrong, or if they forget (even if they cannot remember the forgotten item). Representations should support these types of self-knowledge, and it is at this level of granularity that an adequate content theory of mental representations can be built.

For the above reasons, capturing the full level of detail concerning mental activity is not necessary, and CD’s two primitive mental ACTs are not sufficient to constitute an adequate representational system for expressing the states and mechanisms of the mental world. Rather, what is needed is a vocabulary that can minimally express the qualitative causal relationships involved in reasoning, yet concurrently support the explanation of failure in sufficient detail that the system can decide what to learn. That is, representational granularity is determined functionally.

⁴ The agent never attempted to remember. For instance, the reasoner may have wanted to ask a question after a lecture was complete, but failed to do so because he never generated a goal to remember. Alternatively, the agent may know at the end of the lecture that he needs to ask something, but cannot remember what it was; this second example is the case of a missing index.

⁵ Glenberg, Wilkinson and Epstein (1982/1992) have shown that self-comprehension of text can be an illusion (i.e., people often do not accurately monitor their own level of text comprehension), and they speculate that it is at the point where reading comprehension fails that humans are alerted to the need for improvement.


3 Representing Reasoning Failure

To support introspective reasoning, a representation should have a level of detail that reflects the causal structure and content of reasoning failures. One of the most basic mental functions is to compare one’s expectations with environmental feedback (or, alternatively, with a “mental check” of conclusions) to detect when the potential for improvement exists. The reasoner calculates some expected outcome (E) and compares it with the actual outcome (A) that constitutes the feedback. When reasoning is successful, E is equal to A. A reasoning failure is defined as an outcome other than what is expected (or the lack of some outcome or appropriate expectation). Given this simple comparison model, a logical matrix can be drawn depending on the values of the expected and actual outcomes (see Table 1). The expected outcome may or may not have been produced; thus the expected-outcome node, E, either exists or does not exist. Likewise, the actual-outcome node, A, may be present or it may not.

Table 1: Taxonomy of failure symptoms

                               ∃E (expectation exists)              ¬∃E (expectation does not exist)
∃A  (actual exists)            Contradiction; Unexpected Success    Impasse; Surprise
¬∃A (actual does not exist)    False Expectation                    Degenerate (N/A)

Given a mismatch between the expected outcome and the actual outcome when both exist, two types of failure can result. A contradiction occurs when a positive expectation conflicts with the actual result of either reasoning or action in the world. An unexpected success occurs when the reasoner did not believe that reasoning would be successful, yet it was nonetheless. Alternatively, failure may happen when no expectation is generated prior to some outcome. That is, an impasse occurs when the reasoner cannot produce a solution or understanding prior to being given it, whereas a surprise occurs when an actual outcome demonstrates that the reasoner should have attempted a solution or prediction, but did not. Finally, a false expectation is the case in which a reasoner expects some positive event, but none occurs (or in which a solution is attempted for a problem having no solution). The degenerate case represents the condition in which no expectation is generated and no outcome presents itself. This paper presents a declarative representation for each of the first five classes of reasoning failure.

In our ontology, mental processes are nodes labeled Cognize, and can be refined to either an inference process, a memory-retrieval process, or an I/O process.⁶ Expected outcomes come from one of these three basic processes. That is, an intelligent agent can form an expectation by remembering, by inferential reasoning (logical or otherwise), or by another agent’s communication.
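The comparison model behind Table 1 can be read procedurally. The sketch below is one such reading under stated assumptions: the function and its arguments are hypothetical, and the extra flags that separate contradiction from unexpected success and impasse from surprise are my own shorthand for distinctions the text draws.

    from typing import Optional

    def classify_symptom(expected: Optional[object],
                         actual: Optional[object],
                         attempted_expectation: bool = True) -> str:
        """expected/actual are None when the corresponding outcome node does not exist."""
        if expected is not None and actual is not None:
            if expected == actual:
                return "successful prediction"          # no failure
            if expected == "will-fail":                 # agent anticipated its own failure
                return "unexpected success"
            return "contradiction"                      # a positive expectation was violated
        if expected is None and actual is not None:
            # No expectation preceded the outcome.
            return "impasse" if attempted_expectation else "surprise"
        if expected is not None and actual is None:
            return "false expectation"                  # anticipated outcome never arrives
        return "degenerate"                             # nothing expected, nothing observed

    # A blocked retrieval, noticed when the answer is later supplied, is an impasse:
    assert classify_symptom(None, "the-answer") == "impasse"
    # Expecting a launch that never happens is a false expectation:
    assert classify_symptom("launch", None) == "false expectation"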

The state terms used to identify reasoning-failure constructions represent the vocabulary labels that compose meta-explanations. I propose two types of commission-error labels. Inferential expectation failures typify errors of projection; they occur when the reasoner expects an event to happen in a certain way, but the actual event is different or missing. Incorporation failures result from an object or event having some attribute that contradicts a restriction on its values. In addition, I propose four omission-error labels. Belated prediction occurs after the fact: some prediction that should have occurred did not, but only in hindsight is this observation made. Retrieval failures occur when a reasoner cannot remember an appropriate piece of knowledge; in effect, a retrieval failure is a memory failure. Construction failure is similar, but occurs when a reasoner cannot infer or construct a solution to a problem. Input failure is error due to the lack of some input information. Finally, successful prediction represents the condition whereby expectations agree with actual outcomes; it often labels a state that should have occurred, but did not. Combinations of these labels are used to represent each reasoning-failure type listed in Table 1.

3.1 Contradiction

Figure 3 illustrates the representation for contradiction. Some goal, G, and context or cues, C, enable some cognitive process to produce an expected outcome, E. A subsequent cognitive mechanism produces an actual outcome, A, which, when compared to E, fails to meet the expectation. This inequality of actual outcome with expected outcome initiates the knowledge of contradiction. If the right-most Cognize node is some inferential process, then the failure becomes an expectation failure and the node C represents the context, whereas if the process is a memory function, the contradiction is labelled an incorporation failure and C represents memory cues. An incorporation failure occurs when an input concept does not meet a conceptual category. For example, an agent may be told that a deceased individual came back to life (which is false), or a novice student may have a conceptual memory failure when told that the infinite series .9999... is equivalent to 1.0 (which is true). These examples contradict the agent’s concept of mortality and the naïve concept of number, respectively. Both inferential expectation failure and incorporation failure are errors of commission: some explicit expectation was violated by later processing or input.

3.2 Unexpected Success

Figure 4 contains a Meta-XP representation of an unexpected success, a failure similar to contradiction. However, instead of E being violated by A, the expectation is that the violation will occur, yet it does not. That is, the agent expects not to be able to perform some computation (e.g., create a solution to a given problem), yet succeeds nonetheless. In such cases the right-most Cognize node will be an inferential process.

⁶ This is in keeping with Schwanenflugel, Fabricius, Noyes, Bigler and Alexander (1994), who analyzed folk theories of knowing. Subject responses during a similarity-judgement task decomposed into inference, memory, and I/O clusters through factor analysis. In the scope of this paper, however, I will ignore I/O processes.

Fig. 3. Meta-XP representation of contradiction. A = actual; E = expected; G = goal; C = context or cues.

Fig. 4. Meta-XP representation of unexpected success. A = actual; E = expected; G = goal; C = context or cues.

If it is a memory process instead, the failure represents an agent that does not expect to be able to remember some fact, for example during a memory test, yet at test time, or upon further mental elaboration of the cues, the agent remembers it. See the experimental studies of feeling-of-knowing judgements (i.e., judgements of future recognition of an item that was not recalled during some memory test; e.g., Krinsky & Nelson, 1985) and judgements of learning (i.e., judgements at rehearsal time as to future memory performance; e.g., Nelson & Dunlosky, 1991). As with the representation of contradiction, during an unexpected success the agent expects one outcome (failure), yet another occurs (success).

3.3 Impasse

Figure 5 represents a class of omission failures that includes forgetting as discussed earlier. If the right-most Cognize node is a memory-retrieval process, then the Meta-XP indeed represents forgetting: the impasse is a memory process that fails to retrieve anything. If the node is an inferential process, however, then the impasse is equivalent to the failures recognized by Soar (Newell, 1990): a blocked attempt to generate the solution to a goal. Thus, a construction failure occurs when no plan or solution is constructed by the inference process. In either case, the node E is not in the set of beliefs with respect to the foreground knowledge of the system (i.e., it was not brought into or created within working memory).
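The specialization of the impasse structure by the type of its right-most Cognize node can be summarized in a small dispatch; the process-type strings below are assumptions for this sketch.

    # Specializing the impasse Meta-XP by the type of its right-most Cognize node.
    # Process-type strings are illustrative; the failure labels come from Section 3.
    def impasse_label(rightmost_cognize: str) -> str:
        if rightmost_cognize == "memory-retrieval":
            return "retrieval failure"      # forgetting: nothing was retrieved into FK
        if rightmost_cognize == "inference":
            return "construction failure"   # no plan or solution was built
        raise ValueError("I/O processes are not treated in this paper")

    assert impasse_label("memory-retrieval") == "retrieval failure"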

Fig. 5. Meta-XP representation of impasse. A = actual; E = expected; G = goal; C = context or cues.

3.4 Surprise

Figure 6 represents a class of failures rarely treated in any AI system. A surprise is an omission error instantiated when a hindsight process reveals that some expectation was never generated. The explanation is that there was never a goal, G2, to create the expectation, either through remembering or inferring: some earlier process with goal, G1, failed to generate the subsequent goal. When the node A is generated, however, the system realizes that the expectation is missing. This error, by definition, is a missing expectation discovered after the fact.

Both false expectation and surprise are closely related in structure. As is apparent in the figures, they share the incorrectly anticipated Successful Prediction node and also the node labeled Belated Prediction. Semantically, they both have a passive element (i.e., non-occurrences of A and E, respectively).

3.5 False Expectation

A false expectation is an erroneously generated expectation, or one that proved false. For example, a spectator may expect to see the launch of a space shuttle while at Cape Canaveral, but engineers abort the launch; the spectator experiences a false expectation when the launch time comes and goes with no takeoff. A novice theoretical computer scientist might expect that she has a solution to the Halting Problem, not knowing that Turing proved many years ago that no such solution is possible. Note that, unlike the second example, the first is outside the reasoner’s control. As seen in Figure 7, the representation of false expectation anticipates an actual event (A1) that never occurs or cannot be calculated. Instead, another event (A2) causes the reasoner to realize the error through hindsight. It is not always evident what this second event may be, however; sometimes it is a very subtle event associated with just the passage of time, so there is no claim here that the second event is a conscious one. In this sequence, the reasoner realizes that the anticipated event is out of the set of beliefs with respect to the FK, and will remain so.
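To make the structural kinship between these two omission symptoms concrete, they can be summarized as records over the shared vocabulary labels; the field names are illustrative, not drawn from the implementation.

    # Surprise and false expectation share the incorrectly anticipated Successful
    # Prediction node and the Belated Prediction produced by hindsight; they differ
    # in which outcome node is the non-occurrence. Field names are illustrative.
    surprise = {
        "missing-node": "expected outcome (E)",   # no goal G2 ever generated E
        "trigger": "arrival of the actual outcome A",
        "shared-labels": ("successful prediction", "belated prediction"),
    }
    false_expectation = {
        "missing-node": "actual outcome (A1)",    # the anticipated event never occurs
        "trigger": "a later event A2 noticed in hindsight",
        "shared-labels": ("successful prediction", "belated prediction"),
    }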

Fig. 6. Meta-XP representation of surprise. A = actual; E = expected; G1, G2 = goals; C = context or cues.

4 Conclusion

If a case-based explanation system is to reason about its reasoning failures effectively, it needs to represent the kinds of mental symptoms and faults it is likely to encounter so that these can be manipulated explicitly. Yet only enough representational detail need be provided that the system can explain its own failures and thereby learn from them. That is, the representations must have causal and relational components that identify those factors that explain how and why a failure occurred. A knowledge structure called a Meta-Explanation Pattern provides these attributes.

The Meta-XP representations presented here fall into a taxonomy of five failure symptoms: contradiction, unexpected success, impasse, surprise, and false expectation. The types of failure are derived from the simple assumption that reasoning involves the comparison of expected outcomes to actual outcomes.

Despite the difficulty of formulating a complete representation of mental events, the effort promises to aid a system when reasoning about itself or other agents, especially when trying to explain why its own or another’s reasoning goes astray.

Fig. 7. Meta-XP representation of false expectation. A = actual; E = expected; G = goal; C = context or cues.

Furthermore, even though the CD representation of mental terms leaves much detail unrepresented, the original goal of Schank and his colleagues some twenty-five years ago to represent the mental domain is still an ambitious and crucial one. If future research can more fully specify a vocabulary for such representations, then these domain-independent terms can help many different intelligent systems reason in complex situations where errors occur.

Acknowledgments

The Air Force Office of Scientific Research supported this research under grant number F49620-94-1-0092. I also thank the anonymous reviewers for their insights.

References

1. Cox, M. T. (1997). Loose coupling of failure explanation and repair: Using learning goals to sequence learning methods. This volume.
2. Cox, M. T. (1996). Introspective multistrategy learning: Constructing a learning strategy under reasoning failure (Doctoral dissertation, Technical Report GIT-CC-96-06). College of Computing, Georgia Institute of Technology, Atlanta. (Available at ftp://ftp.cc.gatech.edu/pub/ai/ram/git-cc-96-06.html)
3. Cox, M. T., & Ram, A. (1995). Interacting learning-goals: Treating learning as a planning task. In J.-P. Haton, M. Keane & M. Manago (Eds.), Advances in case-based reasoning: Second European Workshop, EWCBR-94 (pp. 60-74). Berlin: Springer-Verlag.
4. Doyle, J. (1979). A truth maintenance system. Artificial Intelligence, 12, 231-272.
5. Glenberg, A. M., Wilkinson, A. C., & Epstein, W. (1992). The illusion of knowing: Failure in the self-assessment of comprehension. In T. O. Nelson (Ed.), Metacognition: Core readings (pp. 185-195). Boston: Allyn and Bacon. (Original work published 1982)
6. Krinsky, R., & Nelson, T. O. (1985). The feeling of knowing for different types of retrieval failure. Acta Psychologica, 58, 141-158.
7. Nelson, T. O., & Dunlosky, J. (1991). When people's judgements of learning (JOLs) are extremely accurate at predicting subsequent recall: The "delayed-JOL effect." Psychological Science, 2(4), 267-270.
8. Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
9. Ram, A. (1993). Indexing, elaboration and refinement: Incremental learning of explanatory cases. Machine Learning, 10, 201-248.
10. Ram, A. (1994). AQUA: Questions that drive the understanding process. In R. C. Schank, A. Kass, & C. K. Riesbeck (Eds.), Inside case-based explanation (pp. 207-261). Hillsdale, NJ: Lawrence Erlbaum Associates.
11. Schank, R. C. (1975). Conceptual information processing. Amsterdam: North-Holland Publishing.
12. Schank, R. C. (1986). Explanation patterns. Hillsdale, NJ: Lawrence Erlbaum Associates.
13. Schank, R. C., Goldman, N., Rieger, C. K., & Riesbeck, C. (1972). Primitive concepts underlying verbs of thought (Stanford Artificial Intelligence Project Memo No. 162). Stanford, CA: Stanford University, Computer Science Dept. (NTIS No. AD744634)
14. Schank, R. C., Kass, A., & Riesbeck, C. K. (1994). Inside case-based explanation. Hillsdale, NJ: Lawrence Erlbaum Associates.
15. Schwanenflugel, P. J., Fabricius, W. V., Noyes, C. R., Bigler, K. D., & Alexander, J. M. (1994). The organization of mental verbs and folk theories of knowing. Journal of Memory and Language, 33, 376-395.
16. Stallman, R. M., & Sussman, G. J. (1977). Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence, 9, 135-196.
