An Implemented System for Metaphor-Based Reasoning, With Special Application to Reasoning about Agents

John A. Barnden
School of Computer Science
University of Birmingham
Birmingham B15 2TT
United Kingdom
[email protected], (+44) (0)121-414-3816

March 5, 1998

SUBMISSION TO: International Workshop on Computation for Metaphors, Analogy and Agents

KEYWORDS: metaphor-understanding algorithms; metaphor of mind; reasoning about agents' mental states; personification metaphor; uncertain reasoning

ABSTRACT: An implemented reasoning system called ATT-Meta is sketched. It performs a type of metaphor-based reasoning. Although it relies on built-in knowledge of specific metaphors, where a metaphor is a conceptual view of one topic as another, it is very flexible in allowing novel manifestations of those metaphors in discourse. The flexibility is partly the result of semantic agnosticism with regard to metaphor, in other words not insisting that metaphorical utterances should necessarily have clearly identifiable meanings. Also, the metaphorical reasoning is fully integrated into a general framework for uncertain reasoning, enabling the system to cope with major sources of uncertainty in metaphor-based reasoning. The research has focused on metaphors for mental states (even though the algorithms are more generally applicable), and as a result has some important connections with the study of agent descriptions in natural language discourse, multi-agent scenarios, personification of non-agents, and reasoning about agents' metaphorical thoughts. The system also leads naturally to an approach to chained metaphor.


1 Introduction and Overview of ATT-Meta

First, some terminology. A metaphorical utterance is one that manifests (instantiates) a metaphor, where a metaphor is a conceptual view of one topic as another. Here I broadly follow Lakoff (e.g., Lakoff 1993). An example of a metaphor is the view of the mind as a three-dimensional physical region. (We call this metaphor MIND AS PHYSICAL SPACE.) Notice that, under this terminology, a metaphor is the view itself, as opposed to some piece of natural language that manifests the view. Such a piece of language might be "John believed in the recesses of his mind that ...," in the case of MIND AS PHYSICAL SPACE. When a metaphor is manifested in an utterance, the topic actually being discussed (John's mind, in the example) is the tenor, and the topic it is metaphorically cast as (physical space, in the example) is the vehicle.

The ATT-Meta reasoning system is aimed at the reasoning needed to extract useful information from metaphorical utterances in mundane natural language discourse. It is not currently capable of dealing with novel metaphors (rather, it has pre-given knowledge of a specific set of metaphors), but it is specifically designed to handle novel manifestations of the metaphors it does know about. Its knowledge of any given metaphor consists mostly of a relatively small set of very general "conversion rules" that can convert information about the vehicle into information about the tenor, or vice versa. The degree of novelty the system can handle in a manifestation of a metaphor is limited only by the amount of knowledge it has about the vehicle and by the generality of the conversion rules. Note also Lakoff & Turner's (1989) persuasive claims that, even in poetry, metaphorical utterances are mostly manifestations of familiar, well-known metaphors, albeit that the manifestations may be highly novel and that metaphors can be mixed in novel ways.

ATT-Meta is merely a reasoning system, and does not itself deal with natural language input directly. Rather, a user supplies hand-coded logic formulae that are intended to couch the literal meaning of small discourse chunks (two or three sentences). This will become clearer later in the paper.

The ATT-Meta research has concentrated on a specific type of metaphor, namely metaphors for mental states (and processes), such as MIND AS PHYSICAL SPACE. However, care has been taken to ensure that the principles and algorithms implemented are not restricted to this special case. The present paper will mainly use mental-state metaphors in examples, but the examples can readily be adapted to other types of metaphor.

There are many mental-state metaphors apart from MIND AS PHYSICAL SPACE. Some are as follows: IDEAS AS PHYSICAL OBJECTS, under which ideas are cast as physical objects that have locations and can move about (either outside a person, or inside a person's mind conceived of as a space); COGNITION AS VISION, as when understanding, realization, knowledge, etc. is cast as vision; IDEAS AS INTERNAL UTTERANCES, which is manifested when a person's thoughts are described as internal speech or writing (internal speech is not literally speech); and MIND PARTS AS PERSONS, under which a person's mind is cast as containing several subagents with their own thoughts, emotions, etc. Many real-discourse examples of manifestations of metaphors for mental states and processes can be found in the author's databank on the web (http://www.cs.bham.ac.uk/~jab/ATT-Meta/Databank).
The special case of mental states does have particular relevance to the current workshop, because of the workshop's interest in the subject of intelligent agents and societies of agents. There are many points of contact with this subject:

(a) Mundane discourses, such as ordinary conversations and newspaper articles, often use metaphor in talking about the mental states/processes of agents (mainly people). Indeed, as with many abstract topics, as soon as anything subtle or complex needs to be said, metaphor is practically essential.

(b) One commonly used metaphor for mental states, MIND PARTS AS PERSONS, casts the mind as containing a small society of sub-agents. Thus, research into multi-agent situations can contribute to the study of metaphor for mental states, as well as vice versa (cf. point (a)).

(c) One important research topic in cognitive science is self-deception (see, e.g., Mele 1997), and, as I have argued elsewhere (Barnden 1997a), metaphor for mental states (including MIND PARTS AS PERSONS) can make a strong contribution to this area.

(d) Metaphors for mental states and processes are strongly connected to metaphors for communication between agents, such as the conduit metaphor.

(e) Even when an agent X's mental states and processes are not themselves metaphorically described, X itself may be thinking and reasoning metaphorically about something. Note that this claim respects the idea that a metaphor is a conceptual view that can be manifested in many different ways other than natural language, such as in visual art, action, and thought. Thus, there is a need for reasoning about agents' metaphorical thoughts.

(f) Non-agents are often metaphorically cast as agents, i.e. personified, in mundane discourse. Either implicitly or explicitly, this raises the prospect of the non-agent having mental states. An example is the sentence "My car doesn't want to start this morning." To contrast this with (e), we can call this reasoning about metaphorical agents' thoughts.

Unusually for detailed technical treatments of metaphor, the ATT-Meta project has given much attention to the question of uncertainty in reasoning. (The work of Hobbs 1990 is the only other approach that gives comparable attention to uncertainty.) Metaphor-based reasoning introduces special types of uncertainty. Given an utterance, it is often not certain what particular metaphors, or variants of them, are manifested. But even given that a particular metaphor is manifested, its implications for the tenor are themselves uncertain, and may conflict with other lines of reasoning about the tenor (e.g., John's mind). Note also that non-metaphorical lines of reasoning about the tenor are likely to be uncertain in practice. Lastly, a special source of uncertainty is that the understander's knowledge about the vehicle of the metaphor (e.g., physical space) is itself uncertain. For instance, mundane physical objects that are not close together generally do not physically interact in any direct sense, but they may do so.

ATT-Meta's treatment of uncertainty in metaphorical processing is completely integrated into its treatment of uncertainty in general. ATT-Meta deals only in qualitative measures of uncertainty, as opposed to, say, probabilistic measures. This is in part a simplification imposed to make the project more manageable, and in part reflects a claim that qualitative uncertainty is more appropriate for some purposes, notably some aspects of natural language understanding. Arguing this matter is beyond the scope of the current paper (but see Barnden 1997b).

The plan of the rest of the paper is as follows. Section 2 presents the fundamental principles on which ATT-Meta's metaphor-based reasoning works. Section 3 very briefly sketches ATT-Meta's basic reasoning facilities, irrespective of metaphor. Section 4 explains how the principles in Section 2 are realized within the framework of Section 3. Section 5 comments briefly on ATT-Meta's facilities for reasoning about agents' beliefs and reasoning, again irrespective of metaphor.
Section 6 then combines the information from Sections 4 and 5 to indicate briefly how ATT-Meta could deal with chained metaphor, with reasoning about agents' metaphorical thoughts (see (e) above), and with reasoning about metaphorical agents' thoughts (see (f) above). Section 7 concludes.

This short paper cannot convey much detail about ATT-Meta. Further detail of the system and the attendant research can be found in Barnden (1997b; in press; submitted) and Barnden et al. (1994a, 1994b, 1996).


2 ATT-Meta's Metaphor-Based Reasoning: Principles

Notoriously, metaphorical utterances can be difficult if not impossible to paraphrase in non-metaphorical terms. Equally, it can be difficult if not impossible to give them internal meaning representations that are not themselves metaphorical. Consider, for instance, "One part of John was insisting that Sally was right." This manifests the metaphor of MIND PARTS AS PERSONS, where furthermore the mentioned part engages in natural language utterance (the insistence), so that we also have IDEAS AS INTERNAL UTTERANCES being applied to John. I claim that we simply do not know enough about how the mind works to give a full, definite, detailed account of what was going on in John's mind according to the sentence. After all, what non-metaphorical account can be given of some "part" of John "insisting" something? Rather, the utterance connotes such things as that John had reasons both to believe that Sally was right and to believe the opposite. This particular connotation arises from the observation that someone generally insists on something only when someone else has stated the opposite (although there are other possible scenarios). So, the sentence suggests that some other "part" of John stated, and therefore probably believed, that Sally was not right. Then, because of the thoughts of the two sub-agents within John (the two parts), we can infer that John had reasons to believe the mentioned things about Sally.

Some investigators may wish to call such an inference the underlying meaning of the utterance, or at least to claim that it is part of the meaning. The ATT-Meta research project has refrained from this step, which is after all only terminological, and only explicitly countenances literal meanings for metaphorical utterances. (The literal meaning of the above utterance is the ridiculous claim that John literally had a part that literally insisted that Sally was right.) However, the project presents no objection to the step. Thus, we can say that ATT-Meta is "semantically agnostic" as regards metaphor. (The approach is akin to, but less extreme than, that of Davidson 1979, which can be regarded as semantically "atheist.")

ATT-Meta's approach is one of literal pretence. A literal-meaning representation for the metaphorical input utterance is constructed. The system then pretends that this representation, however ridiculous, is true. Within the context of this pretence, the system can do any reasoning that arises from its knowledge of the vehicles of the metaphors involved. In our example, it can use knowledge about interaction within groups of people, and knowledge about communicative acts such as insistence. As a result of this knowledge, the system can infer that the explicitly mentioned part of John believed (as well as insisted) that Sally was right, and that some other, unmentioned, part of John believed (as well as stated) that Sally was not right.

Suppose now that, as part of the system's knowledge of the MIND PARTS AS PERSONS metaphor, there is the knowledge that if a "part" of someone believes something P, then the person has reasons to believe P. The system can now infer both that John had reasons to believe that Sally was right and that John had reasons to believe that Sally was not right. The key point here is that the reasoning from the literal meaning of the utterance, conducted within the pretence, links up with the just-mentioned knowledge.
That knowledge is itself of a very fundamental, general nature, and does not, for instance, rely on the notion of insistence or any other sort of communicative act. Any line of within-pretence inference that linked up with that knowledge could lead to conclusions that John had reasons to believe certain things. This is the way in which ATT-Meta can deal with novel manifestations of metaphors. There is no need for it to have any knowledge of how insistence by a "part" of a person maps to some non-metaphorically describable feature of the person. Equally, an utterance that described a part as doing things from which it can be inferred that the part insisted that Sally was right would also lead to the same inferences as our example utterance (unless it also led to contrary inferences by some route).

In sum, the ATT-Meta research has taken the line that it is a mistake to focus on the notion of the underlying meaning of a metaphorical utterance, and has concentrated instead on the literal meaning and the inferences that can be drawn from it. This approach is the key to being able to deal flexibly with metaphorical utterances.

3 ATT-Meta's Basic Reasoning

ATT-Meta is a rule-based reasoning system that manipulates hypotheses (facts or goals). In ATT-Meta, at any time any particular hypothesis H is tagged with a certainty level, one of certain, presumed, suggested, possible or certainly-not. The last just means that the negation of H is certain. Possible just means that the negation of H is not certain but no evidence has yet been found for H itself. Presumed means that H is a default: i.e., it is taken as a working assumption, pending further evidence. Suggested means that there is evidence for the hypothesis, but it is not strong enough to enable H to be a working assumption.

ATT-Meta applies its rules in a backchaining style. It is given a reasoning goal, and uses rules to generate subgoals. Goals can of course also be satisfied by provided facts. When a rule application supports a hypothesis, it supplies a level of certainty to it, calculated as the minimum of the rule's own certainty level and the levels picked up from the hypotheses satisfying the rule's condition part. When several rules support a hypothesis, the maximum of their certainty contributions is taken.

When both a hypothesis H and its negation ¬H are supported to a level of at least presumed, conflict-resolution takes place. The most interesting case is when both hypotheses are supported to level presumed. The system attempts to see whether one hypothesis has more specific evidence than the other, so that it can downgrade the certainty level of the other hypothesis. Specificity comparison is a commonly used heuristic for conflict-resolution in AI (e.g., Delgrande & Schaub 1994, Hunter 1994, Loui 1987, Loui et al. 1993, Poole 1991, Yen et al. 1991), although serious problems remain in coming up with adequate and practical heuristics. ATT-Meta's specificity comparison depends on what facts H and ¬H rely on and on derivability relationships between the hypotheses supporting H and ¬H. More detail on the specificity comparison can be found in Barnden (submitted).
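
To make the level-combination scheme concrete, the following minimal Python sketch shows how the min/max propagation just described might be coded. The level names are taken from the paper; everything else (function names, data structures) is hypothetical illustration, not ATT-Meta's actual code, and certainly-not is handled simply as certainty of the negation.

    from enum import IntEnum

    class Level(IntEnum):
        # Qualitative certainty levels, ordered from weakest to strongest.
        # ("Certainly-not" for H is represented as CERTAIN for H's negation.)
        POSSIBLE = 1    # negation not certain, but no evidence yet for H itself
        SUGGESTED = 2   # some evidence, too weak for a working assumption
        PRESUMED = 3    # default: taken as a working assumption
        CERTAIN = 4

    def rule_support(rule_level, condition_levels):
        # A rule application supplies the minimum of its own certainty level
        # and the levels of the hypotheses satisfying its condition part.
        return min([rule_level, *condition_levels])

    def combine_support(contributions):
        # When several rules support a hypothesis, take the maximum.
        return max(contributions, default=Level.POSSIBLE)

    # Example: two rules support hypothesis H; the stronger contribution wins.
    h_level = combine_support([
        rule_support(Level.PRESUMED, [Level.CERTAIN, Level.PRESUMED]),  # PRESUMED
        rule_support(Level.CERTAIN, [Level.SUGGESTED]),                 # SUGGESTED
    ])
    print(h_level.name)  # PRESUMED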

4 ATT-Meta's Metaphor-Based Reasoning: Implementation

Section 2 referred to reasoning taking place "within a pretence" that a metaphorical utterance was literally true. To implement this, ATT-Meta constructs a computational environment called a metaphorical pretence cocoon. The representation of the literal meaning of the utterance, namely that a part PJ of John insisted that Sally was right, is placed as a fact L inside this cocoon. Corresponding to this, outside the cocoon, the system has a hypothesis (a fact) SL that it itself (the system) is pretending that L holds. Also, the system has the fact, outside the cocoon, that it is pretending that PJ is a person.

As usual, the system has a goal, such as the hypothesis that John believes that Sally is right (recall the example in Section 2). Assume the system has a rule that if someone X has reasons to believe P then, presumably, X believes P. (This is a default rule, so its conclusion can be defeated.) Thus, one subgoal that arises is that John had reasons to believe that Sally was right. Now, in Section 2 we referred to the system's knowledge about the MIND PARTS AS PERSONS metaphor. The mentioned knowledge is couched in the following rule:

IF I (the system) am pretending that part Y of agent X is a person AND I am pretending that Y believes Q THEN (presumably) X has reasons to believe Q.

Of course, this is a paraphrase of an imagined, formally expressed rule. We call this a conversion rule, as it maps between pretence and reality. Because of the subgoal that John had reasons to believe that Sally was right, the conversion rule leads to the setting up of the subgoal that the system is pretending that PJ (the mentioned part of John) believes that Sally is right. This subgoal is itself outside the cocoon, but it automatically leads to the subgoal, within the cocoon, that PJ believes that Sally is right. This subgoal can then be inferred (as a default) from the hypothesis that PJ stated that Sally was right, which itself can be inferred (as a default) from the existing within-cocoon fact that PJ insisted that Sally was right. Notice carefully that these last two steps are entirely within the cocoon and merely use commonsense knowledge about real-life communication.

As well as the original goal (John believed that Sally was right), the system also looks at the negation of this, and hence indirectly at the hypothesis that John has reasons to believe that Sally was not right. This subgoal gets support in a rather similar way to the above process, but it involves richer reasoning within the cocoon.
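
To illustrate how a conversion rule bridges the cocoon boundary, here is a deliberately simplified Python sketch of the example above. It forward-chains to a fixed point for brevity, whereas ATT-Meta itself backchains from goals and attaches certainty levels; all predicate and constant names are invented for illustration only.

    REALITY = "reality"
    COCOON = "pretence:MIND_PARTS_AS_PERSONS"

    facts = {
        (COCOON, ("insists", "PJ", "sally-right")),   # literal meaning, inside cocoon
        (REALITY, ("pretend-person", "PJ", "John")),  # system pretends PJ is a person
    }

    def within_cocoon_rules(facts):
        # Commonsense rules about communication, applied inside the pretence.
        new = set()
        for env, f in facts:
            if env == COCOON and f[0] == "insists":
                new.add((COCOON, ("states", f[1], f[2])))    # insisting implies stating
            if env == COCOON and f[0] == "states":
                new.add((COCOON, ("believes", f[1], f[2])))  # stating implies believing (default)
        return new

    def conversion_rule(facts):
        # IF pretending part Y of X is a person AND pretending Y believes Q,
        # THEN (presumably) X has reasons to believe Q.
        new = set()
        for env, f in facts:
            if env == COCOON and f[0] == "believes":
                for env2, g in facts:
                    if env2 == REALITY and g[:2] == ("pretend-person", f[1]):
                        new.add((REALITY, ("has-reasons-to-believe", g[2], f[2])))
        return new

    # Chain to a fixed point.
    while True:
        new = (within_cocoon_rules(facts) | conversion_rule(facts)) - facts
        if not new:
            break
        facts |= new

    print((REALITY, ("has-reasons-to-believe", "John", "sally-right")) in facts)  # True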

4.1 Uncertainty in ATT-Meta's Metaphorical Reasoning

ATT-Meta incorporates a handling, at least partial, of all the types of uncertainty in metaphor-based reasoning that were mentioned in Section 1. First, the system can be unsure whether a metaphor holds, by having presumed, for instance, as the level of certainty for a fact like the one above to the effect that the system pretends that part PJ of John is a person. This fact is then potentially subject to defeat in the ordinary way. Secondly, notice the "presumably" in the above conversion rule, indicating that its certainty level is presumed. Thus, the rule is only a default rule. It is possible for there to be evidence that is strong enough (e.g., specific enough) to defeat a conclusion made by the rule. Conversely, although there may be evidence against the conclusion of the rule, it may be weak enough to be defeated by the evidence for that conclusion. Thus, whether a piece of metaphorical reasoning overrides, or fails to override, other lines of reasoning about the tenor is a matter of the peculiarities of the case at hand. Some authors (e.g., Lakoff 1994) assume that in cases of conflict tenor information should override metaphor-based inferences, but it appears that such assumptions are based on an inadequate realization of the fact that tenor information can itself be uncertain. Finally, the reasoning within the cocoon is itself usually uncertain, since commonsense knowledge rules are usually uncertain.
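
The interplay of defaults just described can be shown with a toy sketch of conflict resolution between two presumed hypotheses. The subset test below is a crude stand-in for ATT-Meta's actual specificity comparison (which considers the facts relied on and derivability relationships; see Section 3), and the fact names are invented.

    def resolve_conflict(evidence_for_h, evidence_for_not_h):
        # Return the surviving presumption ("H", "not-H"), or None if neither
        # side's evidence is strictly more specific than the other's.
        if evidence_for_h > evidence_for_not_h:       # proper superset: more specific
            return "H"
        if evidence_for_not_h > evidence_for_h:
            return "not-H"
        return None                                   # both hypotheses downgraded

    # Suppose the metaphor-based argument for H ("John believed Sally was
    # right") rests on one fact, while a tenor-based counter-argument rests
    # on that fact plus a further, more specific one.
    for_h = {"part-of-john-insisted-sally-right"}
    against_h = {"part-of-john-insisted-sally-right", "john-asserted-sally-wrong"}
    print(resolve_conflict(for_h, against_h))  # not-H: more specific evidence wins

Note that on this picture neither metaphor-based nor tenor-based reasoning automatically wins; the outcome depends on the evidence in the particular case, in line with the point made against Lakoff (1994) above.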

5 ATT-Meta's Reasoning about Agents' Beliefs and Reasoning

ATT-Meta has facilities for reasoning non-metaphorically about the beliefs and reasoning acts of agents, including where those beliefs and acts are about the beliefs and reasoning of further agents, and so forth. Although ATT-Meta can reason about beliefs in an ordinary rule-based way, its main tool is simulative reasoning (e.g., Creary 1979, Konolige 1986 [but called "attachment" there], Haas 1986, Ballim & Wilks 1991, Dinsmore 1991, Hwang & Schubert 1993, Chalupsky 1993 and 1996, Attardi & Simi 1994; see also related work in philosophy and psychology in Carruthers & Smith 1996, Davies & Stone 1995). In attempting to show that agent X believes P from the fact that X believes Q, the system puts P as a goal and Q as a fact in a simulation cocoon for X, which is a special environment that is meant to reflect X's own reasoning processes. Reasoning from Q to P in the cocoon is alleged (by default) to be reasoning by X. The reasoning within the cocoon can involve ordinary rule-based reasoning and/or simulation of other agents. In particular, the reasoning can be uncertain. Also, the result of the simulation of X is itself uncertain: even if the simulation supports the hypothesis that X believes P, ordinary rule-based reasoning may support the negation more strongly.
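
As an illustration only, the following sketch shows the shape of simulative reasoning: the system's own rules are re-run inside a simulation cocoon, and the inference done there is ascribed, defeasibly, to the simulated agent. The rule content and all names are hypothetical; certainty levels are omitted.

    def commonsense_rules(fact):
        # The system's own inference rules, reused inside the simulation.
        if fact == ("sunday",):
            yield ("people-wake-late",)

    def simulate(agent, agent_beliefs, goal):
        # Reason within a simulation cocoon for `agent`; inference done here
        # is ascribed (by default, hence defeasibly) to the agent itself.
        frontier = set(agent_beliefs)
        while True:
            new = {c for f in frontier for c in commonsense_rules(f)} - frontier
            if not new:
                return goal in frontier
            frontier |= new

    # Does Mary believe people wake late, given that she believes it is Sunday?
    print(simulate("Mary", {("sunday",)}, ("people-wake-late",)))  # True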

6 Interesting Nestings

In fact, simulation cocoons operate very similarly to metaphorical pretence cocoons. Just as simulation cocoons can be nested within each other, to get the effect of reasoning about X's reasoning about Y's reasoning about ..., so metaphorical pretence cocoons can be nested within each other, and either way round with respect to simulation cocoons. We now look briefly at the uses for this.

Nesting of metaphorical pretence cocoons within each other provides a treatment of chained metaphor. Consider the sentence "The thought hung over him like an angry cloud" (adapted from a real-text example). The thought is metaphorically cast as a cloud, and the cloud is in turn metaphorically cast as an animate being (because only animate beings can literally be angry). In ATT-Meta, this would be handled by having a metaphorical cocoon for the second of those two metaphorical steps nested within a cocoon for the first. That is, within the pretence that the thought is a cloud there is a further pretence that the cloud is a person.

Embedding of a metaphorical pretence cocoon within a simulation cocoon handles a major aspect of point (e) in Section 1, namely reasoning about agents' metaphorical reasoning. This would be needed for dealing with one interpretation of the sentence "Mary believed that the thought hung over John like an angry cloud" (although another interpretation is that the metaphor here is used only by the speaker, and not by Mary).

Conversely, embedding of a simulation cocoon within a metaphorical pretence cocoon handles a major aspect of point (f) in Section 1, namely reasoning about metaphorical agents' reasoning, as required for sentences like "My car doesn't want to wake up because it thinks it's Sunday." From the fact that the car thinks it's Sunday, we might want to infer that the car thinks people needn't wake up until some relatively late time. (That thought would then be a reason for not wanting to wake up.) The car's alleged reasoning would occur within a simulation cocoon for the car, embedded within a metaphorical pretence cocoon for the pretence that the car is a person; a sketch of this nesting appears below.

These three types of nesting have not been experimented with yet, although the current algorithms in the system should be adequate to handle them once some minor generalizations have been effected.
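
The following sketch (again with invented names, and ignoring certainty levels) shows the car example's nesting: a simulation cocoon for the car embedded inside a pretence cocoon in which the car is a person. A conclusion reached inside the simulation surfaces, one level up, as a belief ascribed to the car within the pretence.

    # Environments are represented as tuples of nested contexts.
    pretence = ("pretence", "CAR_AS_PERSON")
    simulation = pretence + (("simulation", "car"),)   # nested inside the pretence

    facts = {
        (simulation, ("sunday",)),   # within the pretence, the car "thinks" it's Sunday
    }

    def rules(env, fact):
        # Commonsense rule, usable in any environment: on Sunday, people wake late.
        if fact == ("sunday",):
            yield (env, ("people-wake-late",))
        # Simulation boundary: a conclusion inside the car's simulation becomes,
        # one level up (inside the pretence), a belief of the car.
        if env == simulation:
            yield (pretence, ("believes", "car", fact))

    while True:
        new = {c for env, f in facts for c in rules(env, f)} - facts
        if not new:
            break
        facts |= new

    print((pretence, ("believes", "car", ("people-wake-late",))) in facts)  # True

A further conversion step (not shown) would then relate the car's within-pretence beliefs to real-world expectations about the car, such as its not "wanting" to start.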

7 Conclusion

ATT-Meta performs a very flexible type of metaphor-based reasoning, allowing for novel manifestations of the metaphors it knows about. Its metaphor-based reasoning is fully integrated into a framework for qualitative uncertain reasoning, and as a result it is able to cope with major sources of uncertainty in metaphor-based reasoning. The research has focused on metaphors for mental states (even though the algorithms are more generally applicable), and as a result has some important connections with the study of language about agents, multi-agent scenarios, personification of non-agents, and reasoning about agents' metaphorical thoughts.


Bibliography

Attardi, G. & Simi, M. (1994). Proofs in context. In J. Doyle, E. Sandewall & P. Torasso (Eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference, pp. 15-26. (Bonn, Germany, 24-27 May 1994.) San Mateo, CA: Morgan Kaufmann.

Ballim, A. & Wilks, Y. (1991). Artificial believers: The ascription of belief. Hillsdale, NJ: Lawrence Erlbaum.

Barnden, J.A. (1997a). Deceived by metaphor. Behavioral and Brain Sciences, 20(1), pp. 105-106. Invited commentary on A.R. Mele's "Real Self-Deception."

Barnden, J.A. (1997b). Simulation and uncertainty in reasoning about agents' beliefs. Memoranda in Computer and Cognitive Science, No. MCCS-97-310, Computing Research Laboratory, New Mexico State University, Las Cruces, NM 88003. Invited submission to a special issue of Artificial Intelligence and Law, ed. E. Nissan.

Barnden, J.A. (in press). An AI system for metaphorical reasoning about mental states in discourse. In Koenig, J-P. (Ed.), Conceptual Structure, Discourse, and Language II. Stanford, CA: CSLI/Cambridge University Press.

Barnden, J.A. (submitted). Conflict-resolution in an implemented system for uncertain belief reasoning and metaphor-based reasoning. 6th International Conference on Principles of Knowledge Representation and Reasoning (KR'98), Trento, Italy, 2-5 June 1998.

Barnden, J.A., Helmreich, S., Iverson, E. & Stein, G.C. (1994a). An integrated implementation of simulative, uncertain and metaphorical reasoning about mental states. In J. Doyle, E. Sandewall & P. Torasso (Eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference, pp. 27-38. (Bonn, Germany, 24-27 May 1994.) San Mateo, CA: Morgan Kaufmann.

Barnden, J.A., Helmreich, S., Iverson, E. & Stein, G.C. (1994b). Combining simulative and metaphor-based reasoning about beliefs. In Procs. 16th Annual Conference of the Cognitive Science Society (Atlanta, Georgia, August 1994), pp. 21-26. Hillsdale, NJ: Lawrence Erlbaum.

Barnden, J.A., Helmreich, S., Iverson, E. & Stein, G.C. (1996). Artificial intelligence and metaphors of mind: within-vehicle reasoning and its benefits. Metaphor and Symbolic Activity, 11(2), pp. 101-123.

Carruthers, P. & Smith, P.K. (Eds) (1996). Theories of theories of mind. Cambridge, UK: Cambridge University Press.

Chalupsky, H. (1993). Using hypothetical reasoning as a method for belief ascription. J. Experimental and Theoretical Artificial Intelligence, 5(2&3), pp. 119-133.

Chalupsky, H. (1996). Belief ascription by way of simulative reasoning. Ph.D. Dissertation, Department of Computer Science, State University of New York at Buffalo.

Creary, L.G. (1979). Propositional attitudes: Fregean representation and simulative reasoning. Procs. 6th Int. Joint Conf. on Artificial Intelligence (Tokyo), pp. 176-181. Los Altos, CA: Morgan Kaufmann.

Davidson, D. (1979). What metaphors mean. In S. Sacks (Ed.), On Metaphor, pp. 29-45. Chicago: University of Chicago Press.

Davies, M. & Stone, T. (Eds) (1995). Mental simulation: evaluations and applications. Oxford, UK: Blackwell.


Delgrande, J.P. & Schaub, T.H. (1994). A general approach to specificity in default reasoning. In J. Doyle, E. Sandewall & P. Torasso (Eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference, pp. 146-157. (Bonn, Germany, 24-27 May 1994.) San Mateo, CA: Morgan Kaufmann.

Dinsmore, J. (1991). Partitioned representations: a study in mental representation, language processing and linguistic structure. Dordrecht: Kluwer Academic Publishers.

Haas, A.R. (1986). A syntactic theory of belief and action. Artificial Intelligence, 28, pp. 245-292.

Hobbs, J.R. (1990). Literature and cognition. CSLI Lecture Notes, No. 21, Center for the Study of Language and Information, Stanford University.

Hunter, A. (1994). Defeasible reasoning with structured information. In J. Doyle, E. Sandewall & P. Torasso (Eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference, pp. 281-292. (Bonn, Germany, 24-27 May 1994.) San Mateo, CA: Morgan Kaufmann.

Hwang, C.H. & Schubert, L.K. (1993). Episodic logic: a comprehensive, natural representation for language understanding. Minds & Machines, 3(4), pp. 381-419.

Konolige, K. (1986). A deduction model of belief. London: Pitman. Los Altos, CA: Morgan Kaufmann.

Lakoff, G. (1993). The contemporary theory of metaphor. In A. Ortony (Ed.), Metaphor and Thought, 2nd edition, pp. 202-251. New York and Cambridge, UK: Cambridge University Press.

Lakoff, G. (1994). What is metaphor? In J.A. Barnden & K.J. Holyoak (Eds), Advances in Connectionist and Neural Computation Theory, Vol. 3: Analogy, Metaphor and Reminding. Norwood, NJ: Ablex Publishing Corp.

Lakoff, G. & Turner, M. (1989). More than cool reason: a field guide to poetic metaphor. Chicago: University of Chicago Press.

Loui, R.P. (1987). Defeat among arguments: a system of defeasible inference. Computational Intelligence, 3, pp. 100-106.

Loui, R.P., Norman, J., Olson, J. & Merrill, A. (1993). A design for reasoning with policies, precedents, and rationales. In Fourth International Conference on Artificial Intelligence and Law: Proceedings of the Conference, pp. 202-211. New York: Association for Computing Machinery.

Mele, A.R. (1997). Real self-deception. Behavioral and Brain Sciences, 20(1).

Poole, D. (1991). The effect of knowledge on belief: conditioning, specificity and the lottery paradox in default reasoning. Artificial Intelligence, 49, pp. 281-307.

Yen, J., Neches, R. & MacGregor, R. (1991). CLASP: Integrating term subsumption systems and production systems. IEEE Trans. on Knowledge and Data Engineering, 3(1), pp. 25-32.

