Context and Relevance: A Pragmatic Approach

Hamid R. Ekbia and Ana G. Maguitman

Center for Research on Concepts and Cognition
Center for Research on Artificial and Natural Intelligence
Computer Science Department, Indiana University
Bloomington, IN 47405-7104
Phone: (812) 855-6965 / Fax: (812) 855-6966
{hekbia, anmaguit}@cs.indiana.edu
Abstract. In recent years, AI research has started to take seriously the role of context in developing models. This trend is heralded by a departure from logic-based views and their over-emphasis on the formal aspects of thought processes. The departure, however, has not been complete: some fundamental assumptions of formal logic are still maintained in the new approaches. Similarly, "relevance" remains a major challenge for AI research. This paper outlines an alternative proposal that takes context and relevance as intertwined aspects of thought and intelligence. We argue for a pragmatic approach, which shows better promise than formal logic in dealing with such issues.
1 Introduction

Issues of context and relevance arise repeatedly in science and philosophy. Most often, however, they are dealt with separately as unrelated topics, or, if treated together, within the general framework of logic. This paper brings the two together under the assumption that they are not only conceptually linked, but also co-constituted. Furthermore, we shall argue, all the alternative approaches taken within the logical tradition fail to address context and relevance for fundamentally similar reasons: namely, what we call the "formality assumptions" of logic.

We begin with an outline of a pragmatic approach to context and relevance, inspired by the views of Dewey, in section 2. In sections 3 and 4 we provide a synopsis of logical approaches to these issues, and explain why the logical tradition cannot support the criteria suggested by the pragmatic view. The last section points out some of the implications of this view for AI.

Our main goal in this paper is to discuss the limits of the logical approach rather than to abandon or rebuke logic-oriented views more generally. It needs to be stressed that the view that we advocate here, far from being an elaborated comprehensive scheme, is meant to suggest an alternative way of thinking about context and relevance. As these issues are propounded in many places in science and philosophy, a multidisciplinary approach strongly suggests itself. Thus, throughout, we have also mentioned similar intuitions in other areas that seem to be close to our own.
2 The Pragmatic View of Context and Relevance

    We grasp the meaning of what is said in our language not because appreciation of context is unnecessary but because context is inescapably present. [9]

Dewey looked upon modern philosophers as moving between two extremes: those who claim that knowledge and reality are constructed out of discrete and independent elements, and those who claim that everything is so interrelated that reality and knowledge are ultimately a single whole. Arguing against both extremes, Dewey believed that they suffer from the same fallacy, neglect of context: "the greatest single disaster which philosophic thinking can incur" [5]. More than fifty years of AI practice demonstrates the aptness of a similar assertion about AI: neglect of context is the greatest single disaster that AI practice has incurred thus far.

As a remedy for this situation in philosophy, Dewey proposed a notion of context which, for ease of reference, we call the "pragmatic" view. Context, according to him, has two components (see Figure 1): (i) background, which is both spatial and temporal, and is ubiquitous in all thinking; and (ii) selective interest, which conditions the subject matter of thinking. The background is that part of context that "does not come into explicit purview, does not come into question; it is taken for granted" [9]. According to Dewey, this is because background context, or rather some part of it, cannot be an object of examination: "If everything were literally unsettled at once, there would be nothing to tie those factors that, being unsettled, are in process of discovery and determination" (ibid).

This simple argument highlights a major difference between the logicist and pragmatist views. Formal logic, let us recall, is strictly constrained to "explicit" forms of knowledge representation. Nothing in logic, in other words, could be taken for granted as such, whether it is part of the context or part of the subject matter. This is one explanation of why traditional AI (GOFAI) models have a hard time dealing with context. In these models, every fact with a remote chance of relevance has to be codified and explicitly represented. Since there is, in principle, no limit to relevant facts, one is always susceptible to landing on one of two extremes: incorporating too many or too few facts about the situation; hence the "problem of relevance" in AI. To complicate matters even further, it is not clear which facts remain unchanged as a result of interaction with the environment; hence the notorious "frame problem" of AI.
2.1 Background Context

Background context, as we said, has both spatial and temporal aspects. The spatial aspect, according to Dewey, "covers all contemporary setting within which a course of thinking emerges" (ibid). The temporal aspect, in turn, is intellectual as well as existential. The existential background is an important notion for Dewey: it is part of the material means that contribute to the possibility of a thought process.
    context
      |-- background
      |     |-- spatial
      |     `-- temporal
      |           |-- intellectual
      |           `-- existential
      `-- selective interest

    Fig. 1. The aspects of context
A full grasp of this notion requires a discussion of "situations", which we will take up shortly. The intellectual background, on the other hand, can be social (traditions), individual (mental habits), theoretical (science), etc. Thinking, in other words, always takes place in a background of social and cultural settings, and within an individual frame of mind. In physical science, for instance, Aristotelian physics and Ptolemaic astronomy were for centuries the taken-for-granted background of all special inquiries. Next came the Newtonian background, then relativity and quantum mechanics, and so on.

This much about background context may be old hat in AI, as in other places. Philosophers of language, in discussing indexical content, have taught us that any speech act happens within a context that crucially belongs to here and now [37, 44]. The meaning of the sentence "I have to finish the book in two months before I leave town" is as much dependent on who utters it as it is on the time and place of the utterance. Similar issues have been addressed in AI, especially in knowledge representation and language processing, although, as we shall see later, the recognition of the issues has not always resulted in the most feasible solutions. Part of the reason for the failure, we believe, is the neglect of the existential aspect of context mentioned above. More important, however, is the neglect of the second major component of context in Dewey's account, namely "selective interest" (or bias). Both of these are related to Dewey's notion of "situation".
2.2 Situations, Selective Bias, and Relevance

The notion of a problematic or indeterminate situation plays a central role in Dewey's thought. Thinking, according to Dewey, is a process of inquiry in which a confused, obscure, or conflicting situation is transformed into a determinate one. While this claim might be agreeable to many, the core idea behind it might not necessarily be so: namely, the idea that situations are not doubtful only in a "subjective" sense, but also in an "objective" or non-subjective way: "It is the situation that has these traits [confusion, ambiguity, conflict, etc.]. We are doubtful because the situation is inherently doubtful" (ibid). In other words, the indeterminacy is first and foremost in the situation, not in us, an assertion that may not find many supporters among the believers in mentalistic theories of cognition.
Dewey also used the notion of "situation" to talk about relevance: "The existence of the problematic situation to be resolved exercises control over the selective discrimination of relevant and effective evidential qualities as means" [10]. Relevance, Dewey concluded, is not inherent but accrues to natural qualities in virtue of the special function they perform in inquiry. What determines relevance, furthermore, is the impress of the individual inquirer on the context:

    There is selectivity (and rejection) found in every operation of thought. There is care, concern, implicated in every act of thought. There is someone who has affection for some things over others; when he becomes a thinker he does not leave his characteristic affection behind. As a thinker, he is still differentially sensitive to some qualities, problems, themes. [9]

"Material considerations", as Dewey calls them, represent the total effect of existential background and selective bias on thought processes. They reveal the close relation between "context" and "relevance" at once. They also happen to be the same considerations that, we claim, have been traditionally ignored in AI.

How so? The answer should be sought in a philosophical and scientific tradition that, in the name of "objectivism", has tended to undermine the individual and "subjective" aspects of thought processes, especially when it comes to scientific inquiry. However, as Dewey notes, there is bad and good subjectivity, and confounding them has been the source of major mistakes in science and philosophy. While "subjectivism" deserves its ill repute and should be avoided as such, being subjective is not entitled to a similar treatment. "Interest, as the subjective, is after all equivalent to individuality and uniqueness", and individuality "is not a part or constituent of subject matter, but as a manner of action it selects subject matter and leaves a qualitative impress on it" [9]. In other words, subjectivity only influences the "context of inquiry", as opposed to the "context of use", which has a rather intersubjective character. Scientific inquiry, for instance, may be devoid of individual influence as far as its final products are concerned. It is, to be sure, also a product of the social and communal activity of scientists. Nevertheless, the process of inquiry that leads to those products is, at some level, an individual activity. We should, thus, "distinguish between science as a conclusion of reflective inquiry, and science as a ready-made body of organized subject matter" (ibid). Any thought process or linguistic expression, by the same token, involves both a context of inquiry and a context of use. Attending to the latter, as many linguists and philosophers do, should not come at the cost of the former.

GOFAI, in its pursuit of becoming an "objective" science, tended to neglect "material considerations". We believe that the same attitude still continues to prevail, by and large, in many parts of AI. In planning, for instance, despite the advent of new models (like interleaved planning), AI research still follows the general scheme of plan generation, execution, and explicit recognition as the main vehicle for interaction between the system and the environment. As Suchman [46] has noted, the problem of interaction between individuals, on this view, is
to recognize the actions of others as the expression of their underlying plans, and the research problem is to formulate a set of inference rules for mapping between actions and plans. By the same token, interaction with the environment, according to this view, would involve the ability to detect and recognize the context explicitly and to revise one's plans according to some reasoning scheme. What motivates this approach, we suggest, is a logicist point of view.

But what in logic, it might be asked, dictates such views? Why should it not be possible for logic, in other words, to embrace contextual dependence in the sense outlined by Dewey? The answer to the above question is multifarious: e.g., adherence to explicit representations, universal meanings, etc. Such attributes, as we argue elsewhere [13], are not fully characteristic of human behavior. Specifically, the pragmatic approach suggests the following:

1. Context, most often, is not explicitly identifiable.
2. There are no sharp boundaries among contexts.
3. The logical aspects of thinking cannot be isolated from material considerations.
4. Behavior and context are jointly recognizable.

Points 1 and 2 follow from Dewey's discussion of "background" context, and point 3 follows from his notions of "situation" and "selective bias". The fourth point will be discussed in more detail below. Logic, we are going to argue, fails on all of these accounts. The reason, we believe, is what could roughly be called the "formality assumptions" of logic: the requirement of having sharp boundaries among things in a messy world that does not necessarily live up to such mandates.
3 The Logical View of Context

    "Environment" is not something around and about human activities in an external sense; it is their medium, or milieu, in the sense in which a medium is intermediate in the execution or carrying out of human activities, as well as being the channel through which they move and the vehicle by which they go on. ([11], original emphases)

Setting up absolute separations between mind and nature, subject and object, inner and outer, and so on, is another fallacy of modern philosophy to which Dewey referred (ibid). In this section, using the CYC project [26] as an example, we want to show how the above fallacy is manifested in AI practice.

For some time, under the mandates of formal logic, context was either neglected in AI or considered a triviality that could be given due attention if and when necessary. Occasional concessions as to its significance were also made, with a strong residual commitment to formal logic, basically leading to similar outcomes. The continuous challenges and recurring problems that the CYC project has faced in dealing with issues of context can only be understood in this light.
As late as 1991, for instance, Lenat and Feigenbaum, responding to Smith's [42] critique of the context-insensitive notion of meaning adopted for CYC, had this to say in rebuttal: "Use-dependent meaning does not imply that we have to abandon the computational framework of logic" [25]. The implied moral was to stick to the "universalized" meanings of logical terms and sentences (as opposed to the use-dependent meanings of natural language utterances), and to maintain consistency at the cost of context. This turned out to be an illusive approach to the question of meaning. It took some years for the CYC group to learn that "it was foolhardy to try to maintain consistency in one huge flat CYC knowledge base" [23]. A new scheme was thus adopted: breaking up the knowledge base into hundreds of contexts and "microtheories", each of which was internally consistent but could, in principle, be in contradiction with other contexts. Presumably, this scheme did not have the difficulties of the previous one, but it suffered from other problems [24].

In consideration of problems like the above, Lenat [24] has proposed yet another approach to context, which he calls "Dimensions of Context Space". This approach, as its name implies, treats context as a space with many dimensions. For reasons that should become clear throughout this paper, though, we doubt that the new scheme has a better chance of success than the previous two. We believe that some of the reservations expressed by McCarthy and Buvac [31], albeit on purely formal grounds, support our suspicion. The main reason, in our opinion, is the thesis that context is either directly given, or else should be inferred in a deductive fashion. Haugeland [17] calls the view behind this thesis the "inferential" view of context dependence, which he characterizes as follows:

1. An instance of I, in context C, would be (or count as) an R.
2. Here is an instance of I; and it is in context C.
3. So, here is an R.

As Haugeland notes, however, this presumes "that C and I are identifiable as such independently, and that the recognition of R is then just drawing a conclusion, not really a recognition at all". Haugeland argues that this is usually not the case in human cognition. What happens, rather, is that "context-informed phenomena (...) are recognized for what they are, quite apart from any independent recognition of the context or of anything which is 'in' the context" (ibid, original emphasis). In other words, the phenomenon and the context are recognized jointly, not as separate entities one happening inside the other. A behavior such as a smile, for instance, could be understood either as reassuring or as a cautionary gesture, depending on the circumstances in which it is made. It is not the case that one first recognizes a smile and then interprets it as either reassuring or cautionary. Nor is it the case that one first detects the context as such and then interprets the smile accordingly. The context determines what the smile means as much as the smile defines and reinforces the context. Haugeland calls this the "joint recognizability of instance-cum-context" (op. cit.).

Although Haugeland's discussion was focused on pattern recognition, its basic intuitions can be carried over to other domains. Stalnaker [44], for instance,
has argued for a similar interaction between content and context in a linguistic environment: "First, context influences content, ... But second, the contents that are expressed influence the context: speech acts affect the situation in which they are performed." As Stalnaker points out, his account is radical compared to those that only consider the influence in one direction, namely from context to content.

A more radical step in this direction should crucially involve dropping the formality assumption with regard to phenomenon and context. It is only then that one could get rid of unbridgeable dichotomies between a behavior and its embedding context, between an utterance and its use, or among different contexts. As an example of such an attempt in another domain, let us mention the work of Oyama [36] in developmental biology. Having picked up the old question of the origin of "form" and "information" in biological systems, usually framed in traditional dichotomies of nature/nurture, genes/environment, adaptation/development, function/structure, instruction/selection, etc., Oyama has proposed an alternative approach to the question. She finds the so-called "interactionist" approaches, which have been in vogue for some time in developmental biology, unsatisfactory mainly because of their commonly shared "preformationist" attitude toward information: namely, that information "exists before its utilization or expression". By disposing of old dichotomies, Oyama's "constructive interactionism", we believe, has succeeded in shedding new light on an old question. We are basically advocating a similar approach in AI.
4 The Logical View of Relevance

The issue of relevance appears repeatedly in AI, and its meaning seems intuitively clear. Broadly speaking, a piece of information is said to be relevant if it is of consequence to the matter at hand, or if it interferes with some of our beliefs or actions. Thus, we can describe the "problem of relevance" as the problem of identifying and properly using all the information that should exert an influence on our beliefs, goals, or plans.

The problem of relevance has received an admittedly distinct treatment from context in AI. Here, unlike with context, the research community has been rather aware of the outstanding issues. Although the challenge is mostly met under the guise of the celebrated "frame problem" (which, by most accounts, is but a subspecies of the relevance problem), it comes up in various other places in AI, e.g., perception and representation [20], analogical reasoning [40], case-based reasoning [21], and knowledge and inference [24]. "Relevance" is also ubiquitous elsewhere in science and philosophy, such as in cognitive psychology (e.g., in the perception of similarity [34]) and in the philosophy of science (e.g., in theory and explanation [47]). The formal framework of logic has arguably been dominant in most of these places.

In the following subsections, we want to review some of the formal treatments of relevance in AI and the surrounding disciplines. Our goal is to show that these
accounts, although they improve upon some aspects of classical logic, are still insufficient for capturing relevance in human cognition. The reason, we believe, is that they are still constrained to explicit representation, and that they focus solely on the syntactic aspects of relevance. As we have seen, there are material considerations involved in any thought process, and, we are going to argue, these cannot be captured by purely formal and syntactic considerations.
4.1 The Logic of Relevance

The notion of relevant implication has received special treatment in symbolic logic. It has been suggested, for instance, that in a formal theory of "entailment" a sentence like A → B should be considered false if A does not have a bearing on the validity of B. Such a notion of relevance is then used to represent a special form of implication, more rigorous than material implication, in which the consequent of a rule is expected to be proved "from" the antecedent. The seminal idea of relevant implication, also known as rigorous implication, was introduced in 1956 by Ackermann [1]. Anderson and Belnap [2, 3], following the lead of Ackermann, proposed "The Calculus of Entailment". In this calculus, implication is constrained by conditions of relevance and necessity: inferring A → B means that A is used in the proof of B. For instance, a formula like A → (B → B) is always true in classical logic, but it is not universally valid in the logic of relevance, since A is not a reason to conclude B → B. On the other hand, a formula like (A → B) → ((B → C) → (A → C)) is valid in the logic of relevance, since the step from B → C to A → C can be made given the assumption A → B. This gave rise to a technique for keeping track of the steps used in a proof, which restricts the application of the deduction theorem to those cases in which the antecedent is relevant to the consequent (see the sketch at the end of this subsection).

But "proof" is a purely syntactic notion. The logic of relevance, although it was a move in the right direction (by dispensing with some unintuitive notions of implication), has not addressed the crux of relevance, which is neither always explicit nor necessarily syntactic.
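To make the use-tracking idea concrete, the following minimal sketch (our own illustration in Python, not code from the relevance-logic literature, and only a crude approximation of the calculus) tags every derived formula with the set of hypothesis labels actually used in its derivation; the implication-introduction step then refuses to discharge a hypothesis that was never used. This is exactly what blocks A → (B → B) while admitting (A → B) → ((B → C) → (A → C)).

```python
from dataclasses import dataclass
from itertools import count

# A judgement pairs a formula (a plain string here) with the set of
# hypothesis labels actually used in deriving it.
@dataclass(frozen=True)
class Judgement:
    formula: str
    deps: frozenset

_fresh = count()

def hypothesis(formula):
    """Assume `formula`; it depends on its own fresh label."""
    label = next(_fresh)
    return Judgement(formula, frozenset([label])), label

def split_implication(s):
    # Crude parser for "(A -> B)"; sufficient for this demo only.
    a, b = s.strip()[1:-1].split(" -> ", 1)
    return a, b

def modus_ponens(impl, ante):
    """From A -> B and A, derive B; dependency sets accumulate."""
    a, b = split_implication(impl.formula)
    if ante.formula != a:
        raise ValueError("antecedent does not match")
    return Judgement(b, impl.deps | ante.deps)

def discharge(label, hyp_formula, concl):
    """Introduce hyp_formula -> concl.formula, but only if the
    hypothesis was really used: the relevance condition."""
    if label not in concl.deps:
        raise ValueError("irrelevant antecedent: hypothesis never used")
    return Judgement(f"({hyp_formula} -> {concl.formula})",
                     concl.deps - {label})

# B -> B is derivable: the hypothesis B is (trivially) used to obtain B.
jB, lB = hypothesis("B")
print(discharge(lB, "B", jB).formula)               # (B -> B)

# A -> (B -> B) is rejected: A plays no role in deriving B -> B.
jA, lA = hypothesis("A")
try:
    discharge(lA, "A", discharge(lB, "B", jB))
except ValueError as err:
    print("rejected:", err)

# (A -> B) -> ((B -> C) -> (A -> C)) is accepted: every hypothesis is used.
jAB, lAB = hypothesis("(A -> B)")
jBC, lBC = hypothesis("(B -> C)")
jA2, lA2 = hypothesis("A")
jC = modus_ponens(jBC, modus_ponens(jAB, jA2))      # derive C from all three
step = discharge(lA2, "A", jC)                      # (A -> C)
step = discharge(lBC, "(B -> C)", step)             # ((B -> C) -> (A -> C))
print(discharge(lAB, "(A -> B)", step).formula)
```

The point of the sketch is merely that the relevance condition is bookkeeping over proofs, i.e., a purely syntactic device, which is precisely the limitation discussed above.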
4.2 Default Reasoning

AI has long dealt with the challenge of modeling inferential processes in the face of incomplete or inconsistent information. Default reasoning has emerged as a mechanism for drawing conclusions when there is conflicting information that justifies the derivation of differing conclusions. Default reasoning has also been defined as a mechanism that allows drawing conclusions when more information could prove to be relevant, and therefore when the inference is corrigible. The well-known "qualification problem" arises when we need to make explicit the absence of every potentially relevant factor that can interfere with our conclusion.

All approaches to formalizing default reasoning, even when they diverge in other respects, share the basic principle of maintaining tentative conclusions as long as there is no relevant factor that dictates the opposite. Accordingly, in the
framework of "defeasible reasoning" proposed by Pollock [38], the acceptance of a belief is based on a mechanism for verifying that a reason that supports the belief remains undefeated after going through a process of justification. In this process of justification, rebutting defeaters and undercutting defeaters play the role of blocking factors: they are the relevant factors that can drive an agent to retract certain beliefs. In McDermott and Doyle's nonmonotonic logic [32, 33], Reiter's default logic [39] and Moore's autoepistemic logic [35], the consistency test can be seen as the verification of the nonexistence of relevant information interfering with a certain inference. In all these logics there is a general pattern of inference that allows one to "assume A in the absence of relevant information that dictates the contrary", usually interpreted as "if A can be consistently assumed, then assume A". At the same time, circumscription (McCarthy [29, 30]) allows one to conjecture that "... no relevant object exists in certain categories except those whose existence follows from the statement of the problem and common sense knowledge" [30]. The above approaches are also known as extensional approaches to default reasoning. They are able to go beyond the deductively valid conclusions, but they are also able to determine whether there exists relevant information interfering with the generation of certain conclusions.

In approaches to default reasoning based on conditional or intensional interpretations [15, 16, 8, 28, 22, 6], the notion of relevance appears when the incorporation of some relevant condition into the antecedent of a conditional results in a "more exceptional" (or less normal) situation, in which case the previously maintained conclusion must be retracted, or a conclusion that was omitted before must be incorporated. If a conditional is true, it is natural to expect that a new conditional that results from adding new irrelevant information to the antecedent remains true. However, the basic conditional systems are too cautious, and do not allow one to keep conclusions in the presence of new information, even when this information is irrelevant to the conditional. The problem of explicitly adding all the properties that are irrelevant to a conditional is known as the irrelevance problem or "inverse qualification problem" [6].

In the situation calculus, a special notion of relevance naturally appears in the first attempts to give a solution to the frame problem. The frame problem states the necessity of determining which aspects of a certain state are not changed after an action takes place; in other words, which true facts will remain true, and which false facts will remain false, once an action has been completed. The frame problem can thus be stated as the problem of determining relevance relations between actions and properties of situations. Among the first proposals for representing such relevance relations was the one presented by Sandewall [41], with the introduction of the "unless" primitive.

All of the above schemes are meant to give a more realistic account of reasoning. Each one of them, however, invokes a purely syntactic notion in order to do this: e.g., "consistency" in the case of extensional systems, and "normality" in the case of intensional systems. They also rely solely on explicit representations. For example, an extensional default reasoning system can have rules like "if you reach the front door, and it is consistent to assume (or believe) that
you can open the door, then open the door" (a toy rendering of this pattern is sketched at the end of this subsection). Obviously, several rules are needed to determine whether the door is open or not, but they are far fewer than an exhaustive enumeration of irrelevant features such as the material of the door, its color, or the patterns of flickering shadows on it. An intensional default reasoning system can represent the same information as follows: "In the normal state of affairs, if you reach the front door, then open the door", where the normal state of affairs is somehow introduced to the system beforehand. It is also assumed that the properties of the objects with which we are dealing remain unchanged unless there is some relevant factor that dictates the contrary.

Default reasoning was meant to address the qualification, inverse qualification, and frame problems. It should be noticed, however, that all the proposed methods rely on the assumption that all meaning is context-insensitive, and can thus be fixed a priori in the knowledge base.
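As a rough illustration of the extensional pattern "if A can be consistently assumed, then assume A", here is a minimal sketch of the door example (our own toy rendering: the literal names are invented, and the fixed-point loop approximates a single extension of normal defaults rather than implementing Reiter's logic in full). The consistency test plays the role of checking for relevant interfering information.

```python
# A toy extensional default reasoner over propositional literals.
# Literals are strings; "~p" stands for the negation of "p". The
# consistency test is naive: assume p unless ~p is already in the KB.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent_to_assume(lit, kb):
    """Justification check: lit may be assumed if its negation is absent."""
    return negate(lit) not in kb

def apply_defaults(kb, defaults):
    """Fixed-point application of normal defaults given as
    (prerequisite, consequent) pairs, i.e. 'prereq : consequent / consequent'."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for prereq, consequent in defaults:
            if (prereq in kb
                    and consequent not in kb
                    and consistent_to_assume(consequent, kb)):
                kb.add(consequent)      # tentative, default conclusion
                changed = True
    return kb

# The door example from the text, with invented literal names:
defaults = [("at_front_door", "can_open_door"),
            ("can_open_door", "open_door")]

print(apply_defaults({"at_front_door"}, defaults))
# includes "can_open_door" and "open_door"

print(apply_defaults({"at_front_door", "~can_open_door"}, defaults))
# the explicitly represented, relevant fact "~can_open_door" blocks both defaults
```

The second call shows the division of labor described above: one explicitly represented relevant fact blocks the default, while the irrelevant features of the door (its material, color, the flickering shadows) never need to be mentioned. What the mechanism does not do, of course, is decide which facts deserve to be represented in the first place.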
4.3 Computational Relevance

The notion of relevance has also been analyzed as an efficiency factor, giving rise to what is known as computational (ir)relevance. In automatic problem solving, as well as in deductive databases, special attention has been paid to the implementation of mechanisms that are guided by principles of computational relevance. In this sense, the traditional mechanisms of inference have been modified to profit from the notion of relevance, pre-selecting from the search space the information that is either useful for certain goals or within certain precision limits.

The notion of computational (ir)relevance underlies resolution mechanisms as simple as the set-of-support strategy, where resolution is forced to take into account only the resolvents that are relevant to the goal. In some bottom-up evaluation techniques, used especially in deductive databases, the computation of an answer is made more efficient by calculating a relevant subset of the database model rather than the complete model (see the sketch below). Examples of such techniques are naive evaluation, semi-naive evaluation, magic sets [7, 4], and other techniques in which the evaluation of the query is done after constructing trees or graphs [27]. These techniques allow the removal of irrelevant formulas and ignore useless resolution paths. It is the incorporation of relevance as an efficiency factor that allows the shift from blind search to guided search, as well as the creation of simpler theories with better computational properties.

A theoretical analysis of the notion of computational (ir)relevance has been presented by Subramanian and Genesereth [45]. In their work, besides introducing definitions of computational (ir)relevance in terms of complexity, they propose a logic of (ir)relevance focused on the metatheoretic computation of (ir)relevant assertions. The use of special data structures like indexed lists, trees, and graphs, which make it possible to organize information, is another manifestation of the notion of relevance as an efficiency factor. In such cases the information is arranged in such a way that the pieces relevant to a specific plan can be easily identified and retrieved.
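To illustrate relevance as an efficiency factor, the following sketch (our own toy example: the relation names are invented, and it captures only the coarse idea of restricting evaluation to predicates on which the query depends, not the magic-sets transformation itself) computes, before any fact is touched, the subset of the rule base that is relevant to a query.

```python
# Propositional-style Datalog rules: head predicate -> list of rule bodies,
# each body a list of predicates. Just enough structure to show the filter.
rules = {
    "reachable": [["edge"], ["reachable", "edge"]],   # transitive closure
    "expensive": [["price", "threshold"]],            # unrelated relation
}

def relevant_predicates(query_pred, rules):
    """Predicates the query can depend on: reachability in the
    rule-dependency graph, computed before any fact is evaluated."""
    relevant, frontier = set(), [query_pred]
    while frontier:
        p = frontier.pop()
        if p in relevant:
            continue
        relevant.add(p)
        for body in rules.get(p, []):
            frontier.extend(body)
    return relevant

print(relevant_predicates("reachable", rules))
# contains "reachable" and "edge"; "price", "threshold" and the
# "expensive" rule are irrelevant to the query and can be ignored
# during bottom-up evaluation.
```

The filter makes the search more efficient, but it presupposes that the query and the rule base are already given; in that sense it determines what is reachable, not what is relevant.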
The methods discussed here are aimed at improving the performance of an inference mechanism by discarding what is irrelevant or by facilitating access to relevant information. Computational relevance, as we see, deals only with efficiency; having stayed within the framework of logic (e.g., resolution), it basically begs the question of determining relevance.

Generally speaking, all of the variations on logic that we have discussed in this section, as important as they are in formalizing thought processes, remain committed to solely explicit forms of representation and to the formality assumptions of classical logic. Specifically, they assume the independence of syntax and semantics in the way understood by Fodor [14]:

    What makes syntactic operations a species of formal operations is that being syntactic is a way of not being semantic. Formal operations are the ones that are specified without reference to such semantic properties of representations as, for example, truth, reference, and meaning.

GOFAI systems were famously built on a similar assumption. The idea was to equip the system with enough rules to make it possible for it to engender the right set of behaviors without paying attention to the embedding environment; whence the dictum: "you take care of the syntax, and the semantics will take care of itself." In most of these systems, the syntactic rules were dictated by the normative constraints of formal logic. Since these systems did not perform well in handling real-world tasks, however, alternative logics of the kind discussed in this section were sought and created. These formal alternatives have once again demonstrated that logic as a discipline does not deal with the question of how to get the interpretation function; it just assigns it. In order to get the interpretation, we suggest, one has to look in other directions. These directions could be as variegated as the number of ways human beings interact with the world: actions, beliefs, emotions, tastes, goals, plans, etc. In this paper, we have tried to show that the pragmatic approach can provide one such direction, that of acting.
5 Conclusion: Implications for AI

The pragmatic tradition highlights the action-oriented nature of intelligence. Despite the lay connotations of the word, the term "action" is to be understood in a broad sense that includes reasoning behavior as well. Action understood in such broad terms, we suggest, is the key to dealing with issues of context and relevance. At the heart of intelligence lies the ability to find out what is relevant in any given situation, and to act accordingly. This understanding of intelligence is, in fact, shared by many other writers within AI and cognitive science. Hofstadter, for instance, describes the core of intelligence as "the ability to adapt to different domains, to spot the gist of situations amidst a welter of superficial distractors" [19].

Our aim in this paper was to discover the limits of the logicist approach in dealing with issues of context and relevance. We have argued that the existing
alternatives to formal logic have not parted company with the formality assumptions of classical logic: e.g., the assumption that there are sharp boundaries between behavior and its embedding context, the assumption that syntax and semantics are independent, and so on. Because of this, logic-based systems cannot fulfill the pragmatic requirements outlined in this paper. The work of Sperber and Wilson [43], for instance, who have tried to come up with a psychologically plausible account of context and relevance in speech communication, provides one example of the limits of the logicist framework.

In contrast to the logicist framework, the pragmatic approach focuses our attention on aspects of thinking and intelligence that are ubiquitous, but have been traditionally ignored in logic. Most importantly, it highlights the twin aspects of context, namely "background" and "selective bias". In doing so, it also provides a new perspective for dealing with issues of relevance, and shows how context and relevance are co-constituted. Furthermore, by giving rise to an interactive notion of intelligence, it opens up the possibility of the joint recognition of the system and the environment. Although some phenomenologists like Dreyfus [12] have also emphasized this point by invoking the notion of "being-in-a-situation", they do not seem to give enough importance to the active role of human judgment in making sense of situations. Dreyfus says: "Human experience is only intelligible when organized in terms of a situation in which relevance and significance are already given."

Other implications of the pragmatic view for AI also suggest themselves. "Coordination management", as Smith [42] has described it, arises in many places in AI, and the pragmatic approach can complement the logicist one by focusing on this aspect of behavior. Take the notion of search, for instance. Inferential and computational relevance might provide useful notions here: compared to classical logic, they allow a more informed and efficient way of doing search (note that it is a common premise that any AI problem is a search problem). Determining the search space, however, has been taken for granted in most formal approaches to AI. Pragmatic relevance, unlike inferential and computational relevance, has mainly to do with selecting the search space rather than with the search process itself.
Acknowledgments

The authors wish to thank Brian C. Smith and Rasmus G. Winther for commenting on an earlier version of this paper, and an anonymous reviewer for helpful suggestions.
References

1. Ackermann, W. Begründung einer strengen Implikation. The Journal of Symbolic Logic 21, pages 113-128 (1956).
2. Anderson, A. R., and Belnap, N. D., Jr. The Pure Calculus of Entailment. The Journal of Symbolic Logic 27 (1), pages 19-52 (1962).
3. Anderson, A. R., and Belnap, N. D., Jr. Entailment: The Logic of Relevance and Necessity. Princeton University Press (1975).
4. Beeri, C., and Ramakrishnan, R. On the Power of Magic. Journal of Logic Programming 10, pages 255-299. Elsevier Science Publishers (1991).
5. Bernstein, R. J. (ed.) On Experience, Nature, and Freedom: Representative Selections (John Dewey). Bobbs-Merrill Co., Indianapolis (1960).
6. Boutilier, C. Conditional Logics for Default Reasoning and Belief Revision. PhD thesis, Department of Computer Science, University of British Columbia (1992).
7. Das, S. K. Deductive Databases and Logic Programming. Addison-Wesley Publishing Company (1992).
8. Delgrande, J. P. A Logic for Representing Default and Prototypical Properties. In Proc. IJCAI-87, pages 423-429. Milan, Italy (1987).
9. Dewey, J. Context and Thought. In Richard Bernstein (ed., 1960), pages 88-110 (1931).
10. Dewey, J. Logic: The Theory of Inquiry. In John Dewey: The Later Works, 1925-1953, vol. 12. J. A. Boydston (ed.). Southern Illinois University Press, Carbondale (1991).
11. Dewey, J. Common Sense and Science. In John Dewey: The Later Works, 1925-1953, vol. 16. J. A. Boydston (ed.). Southern Illinois University Press, Carbondale (1991).
12. Dreyfus, H. L. What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press, Cambridge (1992).
13. Ekbia, H. R. Forever On the Threshold: The Case of CYC. Forthcoming (2001).
14. Fodor, J. A. Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology. Behavioral and Brain Sciences 3, pages 63-109 (1980).
15. Gabbay, D. Theoretical Foundations for Non-monotonic Reasoning in Expert Systems. In Logics and Models of Concurrent Systems, pages 439-459. Springer-Verlag (1985).
16. Ginsberg, M. L. Counterfactuals. Artificial Intelligence 30, pages 35-79. Elsevier Science Publishers (1986).
17. Haugeland, J. Pattern and Being. In his Having Thought: Essays in the Metaphysics of Mind. Harvard University Press, Cambridge (1993/98).
18. Haugeland, J. Mind Design II: Philosophy, Psychology, Artificial Intelligence. A Bradford Book, MIT Press, Cambridge (1997).
19. Hofstadter, D. R. Le Ton beau de Marot: In Praise of the Music of Language. Basic Books, New York (1997).
20. Hofstadter, D. R., and the Fluid Analogies Research Group. Fluid Concepts and Creative Analogies. Basic Books, New York (1995).
21. Kolodner, J. L., and Leake, D. B. A Tutorial Introduction to Case-Based Reasoning. In David Leake (ed.), Case-Based Reasoning: Experiences, Lessons, and Future Directions. AAAI Press/MIT Press, Menlo Park/Cambridge (1996).
22. Kraus, S., Lehmann, D., and Magidor, M. Nonmonotonic Reasoning, Preferential Models and Cumulative Logics. Artificial Intelligence 44, pages 167-207. Elsevier Science Publishers (1990).
23. Lenat, D. B. From 2001 to 2001: Common Sense and the Mind of HAL. In David G. Stork (ed.), Hal's Legacy: 2001's Computer As Dream and Reality. MIT Press, Cambridge, Mass. (1997).
24. Lenat, D. B. The Dimensions of Context Space. CYCORP web page at www.cyc.com (1998).
25. Lenat, D. B., and Feigenbaum, E. A. On the Thresholds of Knowledge. Artificial Intelligence 47 (1-3), pages 185-250 (1991).
26. Lenat, D. B., and Guha, R. V. Building Large Knowledge-Based Systems. Addison-Wesley, Reading (1990).
27. Levy, A. Y. Irrelevance Reasoning in Knowledge Based Systems. PhD thesis, Computer Science Department, Stanford University (1993).
28. Makinson, D. General Theory of Cumulative Inference. In Proceedings of the Second International Workshop on Non-Monotonic Reasoning, Lecture Notes in Artificial Intelligence 346, pages 1-18. Springer-Verlag (1989).
29. McCarthy, J. Circumscription: A Form of Non-Monotonic Reasoning. Artificial Intelligence 13, pages 27-39. Elsevier Science Publishers (1980).
30. McCarthy, J. Applications of Circumscription to Formalizing Common-Sense Knowledge. Artificial Intelligence 28 (1), pages 89-116. Elsevier Science Publishers (1986).
31. McCarthy, J., and Buvac, S. Formalizing Context (Expanded Notes). Technical Note STAN-CS-TN-94-13, Stanford University (1994).
32. McDermott, D., and Doyle, J. Non-Monotonic Logic I. Artificial Intelligence 13, pages 41-72. Elsevier Science Publishers (1980).
33. McDermott, D. Non-Monotonic Logic II. Journal of the Association for Computing Machinery 29 (1), pages 33-57. ACM (1982).
34. Medin, D. L., Goldstone, R. L., and Gentner, D. Respects for Similarity. Psychological Review 100 (2), pages 254-278 (1993).
35. Moore, R. C. Semantical Considerations on Nonmonotonic Logic. Artificial Intelligence 25 (1), pages 75-94. Elsevier Science Publishers (1985).
36. Oyama, S. The Ontogeny of Information: Developmental Systems and Evolution (2nd ed.). Duke University Press (2000).
37. Perry, J. Indexicals, Contexts and Unarticulated Constituents. Available at http://www-csli.stanford.edu/~john/context/context.html (1998).
38. Pollock, J. L. Defeasible Reasoning. Cognitive Science 11, pages 481-518 (1987).
39. Reiter, R. A Logic for Default Reasoning. Artificial Intelligence 13, pages 81-132. Elsevier Science Publishers (1980).
40. Russell, S. J. The Use of Knowledge in Analogy and Induction. Morgan Kaufmann Publishers (1989).
41. Sandewall, E. An Approach to the Frame Problem and its Implementation. Machine Intelligence 7, pages 195-204. Edinburgh University Press (1972).
42. Smith, B. C. The Owl and the Electric Encyclopedia. Artificial Intelligence 47 (1-3), pages 251-288 (1991).
43. Sperber, D., and Wilson, D. Relevance: Communication and Cognition. Harvard University Press, Cambridge, MA (1986).
44. Stalnaker, R. C. Context and Content. Oxford Cognitive Science Series, Oxford University Press (1999).
45. Subramanian, D., and Genesereth, M. R. The Relevance of Irrelevance. In Proc. IJCAI-87, pages 423-429. Milan, Italy (1987).
46. Suchman, L. Plans and Situated Actions. Cambridge University Press, UK (1987).
47. van Fraassen, B. C. The Scientific Image. Clarendon Press, Oxford (1980).