PHILOSOPHICAL PSYCHOLOGY, VOL. 15, NO. 3, 2002
The tripartite model of representation

PETER SLEZAK
ABSTRACT Robert Cummins [(1996) Representations, targets and attitudes, Cambridge, MA: Bradford/MIT, p. 1] has characterized the vexed problem of mental representation as “the topic in the philosophy of mind for some time now.” This remark is something of an understatement. The same topic was central to the famous controversy between Nicolas Malebranche and Antoine Arnauld in the 17th century and remained central to the entire philosophical tradition of “ideas” in the writings of Locke, Berkeley, Hume, Reid and Kant. However, the scholarly, exegetical literature has almost no overlap with that of contemporary cognitive science. I show that the recurrence of certain deep perplexities about the mind is a systematic and pervasive pattern arising not only throughout history, but also in a number of independent domains today such as debates over visual imagery, symbolic systems and others. Such historical and contemporary convergences suggest that the fundamental issues cannot arise essentially from the theoretical guise they take in any particular case.
… if men had been born blind philosophy would be more perfect, because it would lack many false assumptions that have been taken from the sense of sight. (Galileo Galilei, 1610)

Mental representation: "the topic for some time now"

Robert Cummins (1996, p. 1) has recently characterized the vexed problem of mental representation as "the topic in the philosophy of mind for some time now." However, this remark is something of an understatement. In fact, the same topic was central to the famous controversy between Antoine Arnauld and Nicolas Malebranche in the 17th century, and also central to the entire philosophical tradition of "ideas" in the writings of Locke, Berkeley, Hume, Reid and Kant. This pattern of recurrence is a striking fact. However, the cognitive science literature has almost no overlap with that of the history of early modern philosophy. This mutual neglect is remarkable in view of the intimate connection of their concerns. I am concerned here to reveal something of the rich and mutually illuminating connections between these disjoint literatures. In principle, such mutual illumination can make a valuable and perhaps novel contribution both to contemporary cognitive science and also to the scholarship of early modern philosophy.

Peter Slezak, Program in Cognitive Science, School of History & Philosophy of Science, University of New South Wales, Sydney, NSW 2052, Australia, email: [email protected]

The possibility of mutual benefit is even
more evident when we notice that the parallels extend beyond merely having common concerns. That is, not only the problems of the 17th century, but the same solutions are being rehearsed today at the forefront of research in cognitive science.

Descartes' déjà vu: Edelman's Traité de l'homme

A preliminary indication of the modern relevance of early philosophy may be seen in Edelman's (1998) work on perception. Despite its concern with the latest theories of perception, the central problem is stated in terms identical with that of the entire tradition of writers on "ideas" since the 17th century. Edelman writes: "Advanced perceptual systems are faced with the problem of securing a principled (ideally, veridical) relationship between the world and its internal representation." Edelman's bold new solution "is a call for the representation of similarity instead of representation by similarity." However, this might have been taken verbatim from Descartes's Traité de l'homme (1662/1972) or Dioptrics (1637/1985) where he said "the problem is to know simply how [images] can enable the soul to have sensory perceptions of all the various qualities of the objects to which they correspond—not to know how they can resemble these objects" (Descartes, 1637/1985, pp. 1, 165). In the same vein as Edelman, Meyering (1997) points out that, despite its advocates today (Wright, 1993), resemblance cannot be analyzed without circularity. As we will see, this issue arises inescapably as part of a deeper problem concerning the nature of representation.

The tripartite schema

In a recent article, Bechtel (1998, p. 299) states the essentials of a modern theory of representation: "There are … three interrelated components in a representational story: what is represented, the representation, and the user of the representation."

Z: System Using Y → Y: Representation → X: Thing Represented

Bechtel's schema articulates a tripartite conception of ideas as representatives intervening between the mind and the world. As we will see, among the problematic assumptions, Bechtel's diagram (modified here) and discussion crucially fail to distinguish internal and external representations (see Abell & Currie, 1999). Importantly, Bechtel's conception in this regard is not idiosyncratic, but accurately reflects an almost universal conception in cognitive science (Dennett, 1978a; Lloyd, forthcoming; Newell, 1986, p. 33; Rumelhart & Norman, 1983). As we will note presently, the same tripartite conception in the case of the pictorial theory of images inherently involves the same assimilation of internal and external representations, and thereby encourages the illegitimate postulate of a user or external observer—the notorious homunculus. I will suggest that the same tacit assimilation of external and internal representations is at the heart of Searle's (1980) "refutation" of symbolic AI and also leads to the doctrine that we think "in" language (Carruthers, 1996; Slezak, 2002). The assimilation just noted in Bechtel will also be seen in the seemingly unrelated problem of consciousness and the mind–body problem (Place, 1956). The
tripartite scheme appears obvious and innocuous enough, though it has been remarkably fraught with dif!culties. Indeed, the inescapability and ubiquity of this picture in one form or another is apparent from the fact that Bechtel’s diagram is a variant of the scheme which we see throughout the long history of the subject. “The vision of all things in God” Thus, for example, nothing could seem more remote from modern theories in cognitive science today than Malebranche’s (1712/1997) 17th century doctrine of “the vision of all things in God”—the theory that ideas are objects of our perception that exist in God’s mind. On the contrary, however, despite the theological trappings, it is instructive to recognize the profound af!nity of Malebranche’s views with those at the very forefront of theorizing today in psychology and arti!cial intelligence: Malebranche’s theory is just Bechtel’s tripartite model (Nadler, 1992), and the modern problem of representation is how to avoid the notorious dif!culties clearly articulated by his critic Arnauld (1683/1990). Although these parallels need to be defended with detailed argument and exegesis, it is signi!cant that Fodor has occasionally made the af!nities explicit. Recently he suggests that his own Representational Theory of the Mind (RTM) may be understood on the model of the classical empiricist conception: Just for the purposes of building intuitions, think of mental representations on the model of what Empiricist philosophers sometimes called “Ideas”. That is, think of them as mental particulars endowed with causal powers and susceptible of semantic evaluation. (Fodor, 1998, p. 7) In this light, it is hardly surprising that modern problems might be simply the reinvention of old problems in a new guise. Fodor endorses the classical conception of ideas though he rejects a conception of representation by means of resembling images. He says “To a !rst approximation, … the idea that there are mental representations is the idea that there are Ideas minus the idea that Ideas are images” (1998, p. 8). Despite such disclaimers, we will see that there is particular irony in the fact that the problem for images may be, at a deeper level, the problem for Fodor’s RTM as well. Independently of Fodor’s explicit allusion, his conception of a representational theory of mind has always been evocative of traditional accounts. Thus, it may be just a fac¸on de parler, but Fodor’s (1978) analysis of propositional attitudes has consistently been expressed in terms of “relations between organisms and internal representations” which are “sentence-like entities” (1978, p. 198), that is, “formulae in an Internal Representational System” (1978, p. 194) and whose intentional contents refer to things in the world. Fodor earlier explained this idiom by the same analogy with traditional theories: This is, quite generally, the way that representational theories of the mind work. So in classical versions, thinking of John (construed opaquely) is a
relation to an “idea”—viz., to an internal representation of John. (1978, p. 200) Fodor speaks of internal representations as the “immediate objects” of beliefs, thereby revealing the close similarity of his theory with the classical Lockean theory of ideas as the “immediate objects” of perception. This conception of internal representations as being in a relation to a person is an explicit tripartite scheme which Fodor takes to specify “a priori” conditions on propositional attitudes. This may be, at best, an awkward locution and, at worst, encouraging a notoriously problematic theory. It is important to acknowledge that Fodor and Bechtel, like most theorists, are fully aware of the fatal problem lurking here in principle. However, awareness of the problem in principle does not necessarily preclude falling victim in practice. For example, we will see that proponents of pictorial imagery have been repeatedly charged with committing the homunculus error. Notwithstanding their advocates’ protestations of innocence [1] and full awareness of the hazards, there are grounds for seeing pictorial theories as problematic in the traditional manner. The charge is that the representational format cannot be made to work without tacitly invoking the very abilities it is supposed to explain (Pylyshyn, 1973, 1978, 1981, in press; Slezak, 1992, 1995, in press). The error is not con!ned to imagery and is made unwittingly by failing to notice that the accessing mechanisms cannot perform their function on their own in view of the particular properties ascribed to the representation. As Bechtel (1998, p. 299) notes, a process which uses a representation as “stand in” must be coordinated with the format of the representation. However, the nature of the format may be such as to require a user which is not merely a process in an innocuous sense. Speci!cally, taking internal representations to be too closely modeled on our external representational artifacts clearly risks requiring the “user” to share our relevant perceptual and cognitive abilities, thereby begging the question in the traditional manner. We will see that the assimilation of internal and external representations in just this way is frequently made as an explicit doctrine. Adverting to the virtues of computational models which ultimately discharge their homunculi and pay back their loan on intelligence (Dennett, 1978b) is not suf!cient as a plea of innocence to these charges (Kosslyn et al., 1979). As Rorty (1979, p. 235) has put it, there is no advance in replacing the little man in the head by a little machine in the head. In particular, I will argue later that the common appeal to an internal symbolic language analogous to a formal system appears to be guilty of the same charge. The dispute, then, is about whether the theoretical models succeed in avoiding the well-known dif!culty despite their authors’ intentions. Thus, although rejecting the charge of being “ontologically promiscuous” (1978, p. 179), Fodor’s locution may be symptomatic of the deep dif!culties which pervade the problem of representation. Signi!cantly, Fodor says that his conception corresponds precisely with the view that psychologists have independently arrived at. To the extent that Fodor is correct in this observation, not only philosophers have been prey to the deeply compelling mistakes of theorizing about the mind. It is no accident that Gibson’s “ecological” approach, like the closely related
“situated cognition,” are theories of direct realism which have been proposed as alternatives to the representationalism of computational theories. This is merely one form in which the Malebranche–Arnauld debate is being rehearsed today. This celebrated debate is described by Nadler (1989) as a debate between an “object theory” of ideas and an “act theory,” respectively. He explains … the object theory of ideas involves a commitment to a representationalist or indirect realist theory of perception, such as Malebranche (and, on the traditional reading, Locke) put forth. An act theory of ideas, on the other hand, forms the core of Arnauld’s perceptual direct realism. If ideas are representational mental acts [rather than entities], then they can put the mind in direct cognitive contact with the world—no intervening proxy, no tertium quid, gets in the way. (1989, p. 6) As Nadler (1989, p. 6) points out, Malebranche’s “vision in God” is a “theologization of cognition” according to which the contents of our own thoughts are dependent upon their divine source in the mind of God. However, although Malebranche’s theological and epistemological concerns are woven together, the threads may be separated and his doctrine of ideas identi!ed as the familiar, compelling and widely held theory until the present time. Although there is room for scholarly dispute [2], most commentators share a reading of Malebranche according to which ideas are intermediaries or proxies representing external objects and intervening between the mind and the world. This same “representative theory of perception” has been more familiar as John Locke’s “veil of ideas” in the tradition referred to as the “way of ideas.” On this view, ideas are internal mental objects of some kind toward which the mind’s operations are directed. Nadler echoes Bechtel, describing Malebranche’s theory as assuming that there are three elements in the normal perception or knowledge of the world (Nadler, 1989, p. 81). As Arnauld explained in his critique, Malebranche “regards this representation as being actually distinct from our mind as well as from the object” (1683/1990, p. 63). A crucial and frequently quoted passage from Malebranche himself explains: Thus, it does not see them by themselves, and our mind’s immediate object when it sees the sun, for example, is not the sun, but something that is intimately joined to our soul, and this is what I call an idea. Thus, by the word idea, I mean here nothing other than the immediate object, or the object closest to the mind, when it perceives something, ie., that which affects and modi!es the mind with the perception it has of an object. (1712/1997, p. 217) Situated cognition: the “canonical” cottage cheese case Signi!cantly, John Yolton has expressed a hope that from the study of early thinkers “we may be able to understand how we can have representation (cognitivity) and realism too” (1996, p. x). This is, of course, a comment on the perennial problem
posed by the tripartite scheme. Yolton's remarks on earlier thinkers are apt to describe the central problem of theories today:

The pivotal concept for the accounts of perceptual acquaintance in the seventeenth and eighteenth centuries is that of objects present to the mind. Depending on how that concept was interpreted, those accounts moved between an indirectness of knowledge (because only a representative, proxy object can be present to the mind) and a strong direct realism where the object known was, in some way, itself present to or in the mind. (1984, p. 6)

Recent proponents of "situated cognition" have been complaining of exactly the same indirect, mediated conception in computational theories of cognition, recognizing that these embody essentially the Locke–Malebranche scheme of representations intervening between mind and world. For example, Greeno (1989) unknowingly echoes Arnauld:

I am persuaded … that in normal activity in physical and social settings, we are connected directly with the environment, rather than connected indirectly through cognitive representations. … An individual in ordinary circumstances is considered as interacting with the structures of situations directly, rather than constructing representations and interacting with the representations. (1989, p. 290)

Greeno cites the Weight Watcher who had studied calculus but nevertheless answers a question about a daily allotment of cottage cheese by means of a simple, directly physical, operation dividing up a portion of cheese, rather than by any symbolic computation such as a multiplication on fractions. Ergo, reasoning is not symbolic but "situated." The Weight Watcher case is supposed to illustrate the thesis that the person's actions are somehow unmediated by mental representations [3]. The cause for Greeno's concern is the modern version of Locke's view: "It is evident that the mind knows not things immediately, but only by the intervention of the ideas it has of them" (Locke, 1690, Book IV, Chapter IV).

Deux Cartésiens: plus ça change, plus c'est la même chose

It is amusing to notice how Malebranche's attempt to articulate this picture is echoed today by Fodor. Malebranche wrote:

I think everyone agrees that we do not perceive objects external to us by themselves. We see the sun, the stars and an infinity of objects external to us; and it is not likely that the soul should leave the body to stroll about the heavens, as it were, in order to behold all these objects. (1712/1997, p. 217)

Fodor writes in the same vein:

It is, to repeat, puzzling how thought could mediate between behavior and
the world … The trouble isn’t—anyhow, it isn’t solely—thinking that thoughts are somehow immaterial. It’s rather that thoughts need to be in more places than seems possible if they’re to do the job that they’re assigned to. They have to be, as it were, “out there” so that things in the world can interact with them, but they also have to be, as it were, “in here” so that they can proximally cause behavior. … it’s hard to see how anything could be both. (Fodor, 1994a, p. 83) Malebranche and Arnauld are not chosen for mention here at random or merely in retrospect for their current interest. As Gaukroger (1990) has noted, Malebranche’s Search after truth was “the most in"uential philosophical treatise of the second half of the seventeenth century, eclipsed only at the end of that century by Locke’s Essay” (1690/1964, p. 1). In particular, Malebranche’s doctrines were at the center of a famous controversy with Antoine Arnauld whose treatise On true and false ideas (1683) was a reply to Malebranche. Indeed, this debate was not only a major e´ve´nement intellectuel of its time, as Moreau (1999) has recently described it, but one whose echoes may be heard throughout the subsequent centuries of speculation about the mind. Moreau’s (1999) recent book-length study in French is perhaps the !rst devoted to the dispute as such, and attests to its importance as an intellectual cause ce´le`bre in the 17th century [4]. Nadler writes that, following the !rst round with Arnauld’s critique of Malebranche, For the next decade, until Arnauld’s death in 1694, these two men engaged in a public debate that attracted the attention of intellectual circles throughout Europe. Sides were taken in articles, reviews and letters in the foremost journals of the day, and the issues were debated by others as hotly as they were by the primary combatants themselves. … it remains … one of the most interesting episodes in seventeenth-century intellectual history. (1989, p. 2) Nadler adds that the debate is indispensable for understanding the central philosophical issues of the period, and this is especially true in relation to the work of Descartes. As the title of Moreau’s (1999) book indicates, Malebranche and Arnauld were, despite their differences, !rst and foremost Deux Carte´siens. Arnauld insisted that his conceptions were faithful to those of Descartes and, as Nadler notes, “Arnauld would remain committed to la pense´e carte´sienne for the rest of his life” (1989, p. 34). Though differing over the doctrine of ideas and perceptual acquaintance, both accepted the fundamental principles of Descartes’s philosophy (Nadler, 1989, p. 59). Arnauld’s view takes on special interest today since his critique of Malebranche constitutes a way out of the analogous problematic conceptions of modern cognitive science.
Precursors: pointless exercise?

À propos of historical reflections, with some justice, Stephen Gaukroger (1996) in his landmark intellectual biography of Descartes has described as a "pointless exercise" the efforts to show the extent to which Descartes, for example, was a precursor of modern cognitive science. However, in some cases we may discern something more than fortuitous, independent reinvention. There is a more interesting kind of recurrence which deserves attention because it is a manifestation of deeper, and therefore more illuminating, causes—a chronic malaise whose recurrence is symptomatic of deep pathology. Noting anticipations of current theories is likely to be revealing in both directions: precursors of cognitive science provide an independent, extensive source of insight into contemporary issues and, conversely, are themselves elucidated in novel ways unavailable to traditional scholarship. (For preliminary steps in this direction, see Yolton, 1984, 1996, 2000; Slezak, 1999, 2000.) Thus, beyond merely noting the parallels, I would like to offer some preliminary diagnosis of the malaise and its etiology along the lines of Arnauld's defense of Descartes' view against Malebranche.

Tables & chairs: bumping into things

From Yolton's statement of earlier concerns, we can see their relevance to contemporary issues:

From the scholastics' intelligible species, through the Cartesian's objective reality, to Berkeley's and Hume's talk of ideas as the very things themselves, we see writers on perception striving for some way to say that we perceive physical objects. … One of the ways in which some of the writers tried to preserve the accuracy, if not the directness, of perceptual awareness was by talking of a conformity or agreement between ideas and objects; otherwise they said ideas represent objects. (1996, pp. 1–2)

This is, of course, just the modern problem of intentionality or "psychosemantics" which Cummins describes as just that of saying "in some illuminating way, what it is for something in the mind to represent something" (1996, p. 1). Despite the seeming simplicity of the phenomenon, the burgeoning literature attests to the fact that there is a consensus, at least, on Fodor's judgment that "of the semanticity of mental representations we have, as things now stand, no adequate account" (1985b, p. 28). Typically, Stalnaker, too, says, "There is little agreement about how to do semantics, or even about the questions that define the subject of semantics" (1991, p. 229). Likewise, B.C. Smith confesses, "It should be admitted that how this all works—how symbols 'reach out and touch someone'—remains an almost total mystery" (1987, p. 215). In a report on the state and prospects of interdisciplinary cognitive science, Fodor (1985a) joked that philosophers are notorious for having been prey to absurd, eccentric worries such as the "fear that there is something fundamentally unsound
about tables and chairs.” Nevertheless, he optimistically contrasted such “mere” philosophical worries with those that occasionally turn out to be “real,” as with the representational character of cognition. Triumphantly, Fodor points to the fact that today, unlike other proprietary concerns, this problem is no longer just a philosophers’ preoccupation because its solution has become of general importance as a precondition of progress in several disciplines of cognitive science. However, there is an acute unintended irony in Fodor’s contrast, because the problem of representation at the forefront of cognitive science today is, in fact, identical with the philosophical anxiety about tables and chairs. In various more or less independent domains, cognitive scientists have simply rediscovered the very same sterile conundrums which have kept philosophers busy since Descartes. We see a revealing clue to this commonality in Jackendoff’s (1992, p. 161) question which is a reductio ad absurdum of contemporary symbolic, computational theories: In view of the “internalism” and “narrow” syntactic character of computational symbols, Jackendoff asks facetiously: “Why, if our understanding has no direct access to the real world, aren’t we always bumping into things?” Jackendoff’s satire is evocative of Samuel Johnson’s famous response to Berkeley’s “ingenious sophistry”: “I refute it thus,” he said, that is, by kicking a stone. In both cases, appealing to bumping into things, the responses bring into relief the way in which classical and modern theories entail a disconnection of the mind and the world. The suggestive parallel between Jackendoff and Johnson is no accident. Jackendoff captures precisely the paradox charged against Locke and also Malebranche, who Nadler (1992, p. 7) says “is often portrayed by his critics as enclosing the mind in a ‘palace of ideas,’ forever cut off from any kind of cognitive or perceptual contact with the material world.” Of course, Berkeley’s idealism is just the worry about the reality of tables and chairs, and Berkeley’s reaction to Locke’s “ideas” is analogous to Fodor’s reaction to Simon’s symbols—“methodological solipsism.” Seemingly isolating thought in a realm of its own, the representations intervene between mind and world—two items whose systematic connections with each other become mysterious. The traditional problem, rediscovered in cognitive science, is how to make sense of the relation between these three elements—mind, representation and world—seemingly essential to any model of cognition. The “philosophick topick” of ideas In his recent book, Yolton (1996, p. 43) mentions the anonymous author of a pamphlet written in 1705 titled Philosophick essay concerning ideas who says, “There is hardly any Topick we shall meet with that the Learned have differed more about than that of Ideas.” It is a remarkable fact that little has changed in this regard concerning the “Topick” in dispute, the underlying reasons for the problem and the solutions adopted [5]. Although the terminology of ef"uvia, essences, modes and substances has been replaced by information processing jargon, the essential issues are unchanged. Thus, Palmer’s (1978) article on “Fundamental aspects of cognitive representation” says “Anyone who has attempted to read the literature related to cognitive representation quickly becomes confused—and with good reason. The
!eld is obtuse, poorly de!ned, and embarrassingly disorganized.” After enumerating a dozen distinct conceptions Palmer adds “These are not characteristics of a scienti!c !eld with a deep understanding of its problem, much less its solution” (1978, p. 259). The situation does not appear to have improved in the two decades since Palmer wrote. It is no accident that Palmer’s lament and his litany echo Yolton’s anonymous author because the theoretical disarray, like the doctrines themselves, are not unrelated. Suf!cient evidence of this is the fact that the 18th century author’s analysis of the problem and its causes remains appropriate today. … in considering the Mind, some men do not suf!ciently abstract their Thoughts from Matter, but make use of such Terms as can properly relate to Matter only, and apply them to the Mind in the same Sense as they are spoken of Matter, such as Images and Signatures, Marks, and Impressions, Characters and Notes of Things, and Seeds of Thoughts and Knowledge. (quoted in Yolton, 1956/1993, p. 96) Translated into current terminology, this is an insightful diagnosis of the latest disputes concerning representation in cognitive science today. It is, in fact, a re-statement of Arnauld’s orthodox Cartesian view which insists that mental representations cannot be properly characterized in terms taken too directly from those apt for our external, material representations—the problem of “original” versus “derived” intentionality. “Malebranchean Theatre”? Dennett’s (1991) reference to a “Cartesian Theater” has given wide currency to this term and thereby served to draw attention to the supposed provenance of a conception which is, indeed, at the heart of philosophical puzzles about the mind and consciousness. Indeed, the related mistakes of the “Theater” and the homunculus are at the heart of much theorizing in cognitive science. However, fully acknowledging the value of Dennett’s analysis, it remains that his terminology, at least, perpetuates an historical solecism. Conceding that Dennett was not concerned with exegetical, scholarly niceties, it remains important to correct a serious error of misattribution. The “Theater” in question is more appropriately ascribed to Malebranche than to Descartes. Although “Malebranchean Theater” does not have the same pleasing sonority, there is good philosophical reason to correct the usage besides mere historical pedantry. It is important to recognize that a commitment to the picture of an inner person observing a scene on the stage of consciousness is independent of, and does not follow directly from, dualism. Dennett recognizes this in his talk of “Cartesian materialism” (1991, p. 107) which he says is “the view you arrive at when you discard Descartes’ dualism but fail to discard the imagery of a central (but material) Theater where ‘it all comes together’.” However, Dennett seems to blame Descartes for holding this “Theater” conception together with, or directly as a consequence of, his dualism. Nevertheless, contrary to Dennett’s implication, while undeniably a Cartesian dualist, Descartes was emphatically not a Cartesian materialist as well. That is, he was not guilty of the Theater fallacy in this
sense. On the contrary, despite positing the anatomical convergence of nerve !laments in the pineal gland, Descartes did not subscribe to the picture of an observer in the problematic “Theater” because he explicitly argued against positing concomitant representations of a kind which would require the notorious homunculus. Descartes’ “ghost in the machine” is not this observer, but a posit based on entirely different, independent considerations—namely, the “Cogito” argument and the limitations of machines (Discourse V). The rationale for Descartes’ immaterial soul is quite different and independent of the “Theater” conception which he explicitly repudiates in his Dioptrics (1637) and Treatise of man (1662). Quite apart from the evidence of Descartes’ own texts, ample support for these ascriptions is found in Arnauld’s writings which articulate an act-theory as an alternative to Malebranche’s representative “veil of ideas” (see Nadler, 1989, pp. 34, 118, 126, footnote 36). Arnauld saw this “direct perception” view as faithful to Descartes, and Descartes says in correspondence that Arnauld “has entered further than anyone else into the sense of what I have written” (AT III, 331). Thus, Arnauld’s Cartesian position adopts precisely Dennett’s stance against the “Malebranchean Theater.” Reinventions: synchronic and diachronic If Malebranche and Arnauld anticipated contemporary concerns about representation in cognitive science, then it is clear that the current theoretical problem has nothing to do with the theoretical framework of symbolic, computational approaches as universally assumed. It is particularly signi!cant, then, that the recurrences of interest here are found not only throughout history but in seemingly unrelated domains of cognitive science today. This recurrence of essentially the same dispute in widely varying contexts con!rms that the underlying problem does not arise essentially from the special features of any one of them. Given a seductive mistake concerning representation as such, multiple, seemingly independent, reinventions are just what we would expect to !nd. I will presently suggest that we may discern the same underlying problem at the heart of notorious disputes such as the “Imagery Debate,” Searle’s Chinese Room conundrum, the thinking-in-languag e debate and a number of others which have been prominent and recalcitrant. No representations? The “cognitive revolution” of the 1960s was characterized by a renewed recognition of the indispensability of internal representations following their repudiation by Skinnerian behaviorism. There is considerable irony in recent approaches which appear to reject internal representations once again (Brooks, 1991; Clark & Toribio, 1994; Freeman & Skarda, 1990; Greeno, 1989; van Gelder, 1998). Notwithstanding Eliasmith’s (1996) claim, these views are not plausibly seen as a return to behaviorism since, strictly speaking, they do not reject internal representations at all (see Markman & Dietrich, 2000). Nevertheless, these approaches and their rhetoric are symptoms of the profound dif!culties posed by the phenomena. Particularly in view
of the revolutionary hype associated with the latest fashions, it is sobering to notice that Arnauld’s (1683/1990) critique of Malebranche exactly pre!gures these recent attacks on representational theories. It is no coincidence that Arnauld’s treatise On true and false ideas is concerned to repudiate what he describes as “imaginary representations,” saying, “I can, I believe, show the falsity of the hypothesis of representations” (1683/1990, p. 77) for “one must not make use of alleged entities of which we have no clear and distinct idea in order to explain the effects of nature, whether corporeal or spiritual” (1683/1990, p. 65). Illusions & misrepresentation: “curious and melancholy fact” In seeking to understand the persistence and recalcitrance of the problems of intentionality, it is instructive to examine one facet of the issue which reveals the seductiveness of the mistake. The problem of misrepresentation has arisen for causal or co-variation theories of intentional content (Dretske, 1986; Fodor, 1994a) since these theories seem to be unable to capture the way a mismatch might arise between a representation and the world. If a mentalese token “mouse” might be caused not only by mice but also by shrews, then the symbol must ipso facto mean “shrew” and cannot be in error. It seems not to have been noticed that this modern philosophical problem of misrepresentation is a variant of the well-known classical “Argument from Illusion” (Reynolds, 2000) which was employed in support of Locke’s “ideas” and A.J. Ayer’s (1940) sense-data as the immediate objects of perception. The parallel should not be surprising since, after all, an illusion in the relevant sense (that is, an hallucination) is precisely a misrepresentation. The problem of misrepresentation, then, appears to be one of the loose threads which may be pulled to unravel the rest of the tangled ball (see Slezak, forthcoming). Responding to Ayer (1940), Austin (1962, p. 61) remarked on the “curious” and “melancholy fact” that Ayer’s position on sense-data echoes that of Berkeley. It is an even more melancholy fact today that Fodor’s “real” problems of representation also echo Berkeley. Questions of veridicality for Locke’s ideas and Ayer’s sense-data arose from precisely the same assumptions as Fodor’s—namely, the assumption of being able to compare representations and the world. The earlier Fodorian passage from Malebranche is followed by a paragraph that explicitly articulates the “Argument from Illusion”: It should be carefully noted that for the mind to perceive an object, it is absolutely necessary for the idea of that object to be actually present to it—and about this there can be no doubt; but there need not be any external thing like that idea. For it often happens that we perceive things that do not exist, and that even have never existed—thus our mind often has real ideas of things that have never existed. When, for example, a man imagines a golden mountain, it is absolutely necessary that the idea of this mountain really be present to his mind. When a madman or someone asleep or in a high fever sees some animal before his eyes, it is certain that what he sees is not nothing, and that therefore the idea of this animal really
does exist, though the golden mountain and the animal have never existed. (1712, p. 217)

Illusions in this sense are cases in which the correspondence between representations and world fails—misrepresentations of exactly the sort relevant to the contemporary puzzle for symbolic, computational accounts of cognition. In the modern case, as posed by Fodor (1994a) and Dretske (1986), the problem is, given causation between these elements, how to explain the possibility of illusion; in the classical case the problem is, given illusion, how to explain causation. The modern problem of misrepresentation arises because causal or correlational theories don't appear to permit a distinction between true and false representations. If a dog causes a representation of "cat" in mentalese, on the causal account it must ipso facto count as meaning "dog" and is, therefore, not a mistaken representation of cat. Conversely, the classical Argument from Illusion starts from the other end, as it were. Beginning with the distinction between true and false representations, the Argument recognizes that these cannot both be correlated with an external reality, and concludes that in both veridical and non-veridical cases there must be some other object of direct perception, the "idea" or sense-datum (see Reynolds, 2000). In view of these analogies, therefore, I suggest it is no coincidence that Fodor's (1980) "methodological solipsism" is strongly evocative of a Berkeleyan idealism. The Malebranche–Locke argument for representative ideas recognizes that illusions cannot be caused in the usual way by external objects—essentially Fodor's puzzle expressed in reverse: Fodor argues that, if ideas are caused by external objects, we can't have illusions. The parallels here appear to be more than superficial or terminological [6]. My claim is not that the problem of misrepresentation and the Argument from Illusion are directed towards the same ends, but only that they arise from an identical conceptual scheme and are mirror-images of one another: The classical argument asserts: if there are illusions, then there is no direct connection or correlation with the external world (i.e. there must be intermediate objects of perception); conversely, Fodor's argument asserts: if there is a direct connection (i.e. causal correlation) with the external world, then there can be no illusions. These are equivalent contrapositives: if we take "I" = illusion and "C" = correlation, then the Malebranche–Locke proposition is [I → ¬C] and the Fodor–Dretske proposition is [C → ¬I]. In passing, we may note that a degree of confusion has been introduced in these discussions by the failure to distinguish crucially different kinds of "illusion." An illusion in the sense relevant to the argument concerning ideas, sense-data or representations is, strictly speaking, hallucination. However, certain other phenomena commonly referred to as "illusions" in this context such as mirages or bent sticks in water are not illusory at all in an important sense. These are veridical perceptions of the light patterns entering the eye unlike cognitive errors such as the Müller–Lyer illusion. Richard Gregory (1997), for example, has explicitly assimilated these phenomena, but no theory of cognitive processes could explain the "illusion" in the case of mirages and seemingly bent sticks due to refracted light. Gregory's mistake in this regard is interesting and perhaps no mere mistake. Assuming that our
knowledge of the actual conditions in the world must be used in characterizing mental representations is precisely the seductive error which I am concerned to expose in its various guises. A stick appearing bent in water is a case of “the world gone wrong” in just the sense of this felicitous phrase used by Fodor, as we will see presently, Truth conditions as explanatory? In both the case of misrepresentation and that of illusion the puzzle arises from a commitment to the tripartite conception in which representations intervene between the mind and the world and are somehow correlated with it. In particular, the questions of veridicality for Locke’s ideas arose from the impossibility of any comparison between representations and the world, except from the perspective of an independent outside observer. As Berkeley recognized, the very distinction between true and false ideas cannot be made without comparing representations and the world. Of course, this perspective is unavailable to the mind itself. Correspondingly, an explanatory theory cannot make tacit appeal to such a perspective without committing the homunculus error. This means that the veridicality or otherwise of mental representation does not serve an explanatory role and is, therefore, not a legitimate part of a theory of mind. In Berkeley’s idealist response to this problem we can see the precursor to Fodor’s problem arising from a commitment to truth conditions for mental representations. Securing the veridical connection between representations and the world through causation simply binds them in such a way as to preclude error and thus causation functions for Fodor in the way that a mysterious correspondence worked for Locke. Of course the problem of explaining error and that of explaining truth are two sides of the same coin. Accordingly, the puzzle of misrepresentation is symptomatic of fundamental problems in the conception of mental representations as semantically evaluable. Fodor is emphatic about the centrality of truth preservation for the computational RTM. Regarding the fact that mental processes tend to preserve semantic properties like truth Fodor says This is, in my view, the most important fact we know about minds; no doubt it’s why God bothered to give us any. A psychology that can’t make sense of such facts as that mental processes are typically truth preserving is ipso facto dead in the water. (Fodor, 1994a, p. 9) Fodor’s dilemma arises from the fact that content doesn’t appear to supervene on mental processes and, therefore, “semantics isn’t part of psychology” (Fodor, 1994a, p. 38). My point, then, is of course not that solipsism is true; it’s just that truth reference and the rest of the semantic notions aren’t psychological categories. (1980, p. 253) It seems that we can’t do psychology with the semantic notions, but we can’t do
psychology without them either. This formulation of Fodor’s dilemma is reminiscent of a remark by Dennett in a quite different context in which he explained: … psychology without homunculi is impossible. But psychology with homunculi is doomed to circularity or in!nite regress, so psychology is impossible. (Dennett, 1978b, p. 123) My suggestion is that Fodor’s and Dennett’s dilemmas appear to be the same because at root the puzzle of semantics is a version of the homunculus problem. Just as Dennett (1978a, p. 122) pointed out that nothing is intrinsically a representation of anything but only for someone who is the interpreter, so nothing is a misrepresentation for the same reason. That is, Fodor’s current problem of misrepresentation might be accounted for by noting that it arises from the demand for tacitly adopting the stance of external interpreter: The very problem itself cannot be coherently formulated except in terms of judgments which are not part of the explanatory enterprise. The veridicality of representations is not a property which can play any role in the functioning of representations or the explanation of them. Like the picture on a jigsaw puzzle, the meaning of representations conceived as semantically evaluable in this way is for our own bene!t and not intrinsic to the arrangements of interlocking components. The sense in which a mental representation does its work is not one which requires judgment of its truth-value since this is only possible from the point of view of an observer, the theorist, for whom the representation is construed as an external symbol. The very concern with misrepresentation arises from tacitly adopting a questionable assumption endorsed by Davidson (1975) that having a belief requires also having the concept of belief, including the concept of error. Davidson says “someone cannot have a belief unless he understands the possibility of being mistaken, and this requires grasping the contrast between truth and error—true and false belief” (1975, p. 22). However, it seems that animals might have beliefs even if they are unable to know that they have them and re"ect on their truth-value. A cat can surely be correct in thinking that a mouse is in a certain hole without having the concepts of belief and truth. The judgment of truth or error in a belief must be distinguished from merely having a belief which is true or false. We as theorists may judge true and false beliefs (just as we may judge pictorial resemblance) since these are meta-linguistic or second-order beliefs, but truth and error are not intrinsic properties of representations as such, only to the judgments made about them. Cat psychology must be possible without invoking cat epistemology. It is not a big leap from misrepresentation and illusion to notice that images are a species of the same genus. Imagery involves illusory or non-veridical experiences of exactly the sort required for the classical argument for sense-data. The proverbial Pink Elephant of inebriated apprehension is a visual image par excellence, not relevantly different from Malebranche’s golden mountain or subjects’ imaginings in the celebrated experiments of Shepard and Metzler (1971) and Kosslyn (1994). As we will see presently, of course, if my conjectured parallel is warranted, it is perhaps no surprise that the imagery debate has been among the most persistently intractable
disputes in cognitive science also arising from the theorist doing the work of the theory. Twin Earth Putnam’s (1975) Twin Earth puzzles, too, seem to be an unnoticed variant on the problem of misrepresentation we saw earlier. In the familiar scenario, instead of my Twin Earth double, we may substitute myself after having been unknowingly transported to Twin Earth. There, like my twin in the original story, I will refer to XYZ as “water.” However, on this variation of the original scenario my term “water” now fails to refer correctly rather than being a correct reference with a term having a different meaning. Since my twin and I are identical, the two scenarios must also be indistinguishable. That is, the problem of “wide” and “narrow” meaning is just the problem of misrepresentation in another guise. Instead of thinking of Twin Earth, then, we may imagine alternatively that on this earth, God might have switched all H2O to XYZ without my knowledge. Instead of taking the original Twin Earth story as showing that my twin must mean something other than “water,” we may equally conclude that my use of the term is simply in error when the worlds have been surreptitiously switched. The Twin Earth scenario is, indeed, simply another way of telling Dretske’s (1986) story of the magnetic micro-organisms which are fooled into “thinking” that up is down. Or, in a different case, as Fodor (in Millikan, 1991, p. 161) has put it, “it’s not the frog but the world that has gone wrong when a frog snaps at a bee-bee.” Undoubtedly, if the world is suf!ciently perverse, or it is contrived to alter things in certain ways, our concepts may accidentally “fail” to refer in the usual manner. It is not clear why such possibilities should be of interest to a theory of representation for their description depends on knowledge from a “God’s Eye” perspective available to the theorist. Whether the liquid substance is really XYZ or H2O is known only to the external omniscience of the theorist and has no explanatory role in a theory of representation. In this sense, the philosophical concern with misrepresentation is analogous to the spurious assimilation of mirages and seemingly bent sticks to genuine cognitive illusions, as noted earlier. In both cases, the actual truth about the world is invoked irrelevantly to explain cognition. Philosophers as three-year-olds? Ironically, the mistake I am indicating is not unknown in cognitive science: In the cases of interest here, philosophers are like the three-year-olds and autistics in the much-discussed “false belief task” of Wimmer and Perner (1983) (see Carruthers & Smith, 1996; Davies & Stone, 1995a,b). Like three-year-olds, philosophers fail to discount what they know to be the truth about the world in their “theory of mind.” The surreptitious switching of XYZ for H2O, bee-bees for "ies or magnetic “up” for “down” are ways of making “the world go wrong” precisely analogous to switching the candy while the child is looking in the “false belief” paradigm. Knowing how the world really is, philosophers truly ascribe false belief, just as the three-year-olds
falsely ascribe true belief. In both cases, belief attributions are independent of any facts about the believer, depending instead on irrelevant external facts about the world. In these cases, the believer’s state of mind can remain !xed and yet the beliefs can be made to change from true to false by manipulating the world. The child, like the philosopher naively takes this possibility to be relevant to a “theory of mind” in ascribing mental representations. Justi!ed true belief? In case the foregoing analogy may be thought far-fetched or merely whimsical, it is perhaps worth noting en passant that yet another notorious philosophical puzzle may be seen to be merely a version of the same problem. Gettier (1963) paradoxes may be seen as a species of misrepresentation in which the world conspires to make a proposition true for reasons which are entirely independent of a person’s grounds for believing it. In these cases the problem can only be described because the theorist knows the truth about circumstances which make a belief accidentally true, even though the actual circumstances are irrelevant to the agent’s own reasons for believing the proposition. The Gettier cases are structurally identical with those of misrepresentation and Twin Earth because the truth or falsity of the mental representation (i.e. the state of the world) is varied independently of the agent’s belief-!xing mechanisms. Such considerations in all cases should be irrelevant to the problem of understanding mental representation. The moral of the Gettier cases, like that of misrepresentation, is that the only sensible, and perhaps the only possible, theory of knowledge is one that invokes justi!cations and not truth from a “God’s Eye” perspective [7]. Any adequate, or even complete, account of a person’s psychology would have to invoke only the relation of beliefs to available evidence and not their actual, ultimate truth-value. The world can mislead us in various ways, giving us good reasons for things that may be false, bad reasons for things that may be true and good reasons for things that may be true for other reasons. None of this should occasion philosophical anxieties for those interested in psychology. Does the speedometer of a bicycle misrepresent when the bike is ridden on rollers and not moving? Once again, it is the world that has gone wrong, known to us as external observers. However, psychology has no obligation to explain why the world may go wrong. Thus, conceivably, one might contrive things so that Cabernet Sauvignon replaced the usual liquid in someone’s veins. However, such a possibility is of no more theoretical concern for medical science than Dretske’s (1986) disoriented microbes are of interest to cognitive science. Idea-objects Bechtel’s (1998, p. 299) re-statement of the tripartite model in Malebranche’s terms makes explicit the widely held assumptions which are the potential source of the dif!culties in understanding representation [8]. In particular, Bechtel’s assimilation of internal and external representations is acknowledged where he lists the sorts of “high-level” representations which have been postulated by cognitive scientists.
These include “concepts that might designate objects in the world or linguistic symbols, !gures and diagrams which we can use in reasoning and problem solving” (1998, p. 305). Bechtel suggests that if cognition does require such higher-level representations, “the most plausible analysis is that such representations are built upon these low-level representations and perhaps inherit their content from them” (Bechtel, 1998, p. 306). However, the dif!culty is that the distinction between !gures or diagrams, on the one hand, and representations operating in the frog’s retina, for example, is not simply a matter of higher and lower “levels” in an unproblematic sense. Linguistic symbols, !gures and diagrams which we use in reasoning and problem solving, as Bechtel says, are obviously “used” in a sense which is different and precisely inappropriate for internal mental representations. “Higher” concepts of this kind could not inherit their content from low-level concepts mentioned because the difference here is not one of level, but of a kind which precisely de!nes the distinction between original and derived intentionality. In this regard Bechtel’s account accurately re"ects the assumptions built in to the foundational notion of symbolic computation, as Allan Newell explains: The idea is that there is a class of systems which manipulate symbols, and the de!nition of these systems is what’s behind the programs in AI. The argument is very simple. We see humans using symbols all the time. They use symbol systems like books, they use !sh as a symbol for Christianity, so there is a whole range of symbolic activity, and that clearly appears to be essential to the exercise of mind. (1986, p. 33) This passage is striking for the explicitness with which Newell assimilates internal mental representations with our external communicative symbols. The assimilation of representations of radically different kinds appears, then, to be among the foundational assumptions of cognitive science. It was self-consciously articulated in Dennett’s (1978a) review of Fodor’s (1975) important work The language of thought which was the philosophical manifesto for the classical symbolic approach to cognition: What is needed is nothing less than a completely general theory of representation, with which we can explain how words, thoughts, thinkers, pictures, computers, animals, sentences, mechanisms, states, functions, nerve impulses, and formal models (inter alia) can be said to represent one thing or another. (1978, p. 91) The hoped-for uni!cation is to be achieved by showing that these seemingly heterogeneous items are all, in fact, variants of a common, underlying scheme. Dennett makes this explicit, explaining: It will not do to divide and conquer here—by saying that these various things do not represent in the same sense. Of course that is true, but what is important is that there is something that binds them all together, and we need a theory that can unify the variety. (1978a, p. 91). The pictorial account of imagery is perhaps the clearest example of taking our
external artifacts as the model for internal representations. More generally, the dif!culties arise from an equivocation on the notion of “understanding” which can mean interpreting a meaningful representation as intelligible, or explaining it as in science. We will see presently that this con"ation is evidently at the heart of Searle’s (1980) Chinese Room scenario, for Searle asks whether he, as homunculus in the system, can understand the symbols. This criterion should be irrelevant to the question of whether a system has “original” intentionality, but Searle’s mistake is not his alone. That is, the Chinese Room scenario accurately captures the orthodox assumptions of the Simon–Newell “physical symbol system hypothesis.” Searle’s argument is, therefore, best understood, not as a refutation of “strong” AI, but as a reductio ad absurdum of the widely held assumptions on which AI and cognitive science are based. I will suggest that this standard formalist or logicist view conceived on the model of an uninterpreted logical calculus is problematic as an account of the way an intelligent system is related to the external world by depending on the intentions of the external user who supplies the interpretation of the meaningless formal symbols (see Birnbaum, 1991; Rosenschein, 1985). Embracing this conception, Nilsson (1987, 1991), like Newell above, explicitly invokes our external symbolic artifacts such as books to defend his view against the “proceduralist” position that representations are to be used by the system itself rather than understood or ascribed meaning by the designer. Undoubtedly recognizing the problems with invoking an intelligent user or understander, most theorists would not knowingly embrace such an account, but their intentions may be inconsistent with the actual properties of their model by virtue of assimilating external and internal representations [9]. It is not dif!cult to !nd leading theorists explicitly endorsing this assimilation. Thus, Rumelhart and Norman (1983) wrote: We de!ne a symbol to be an arbitrary entity that stands for or represents something else. By “entity” we mean anything that can be manipulated and examined … Humans also use external devices as symbols, such as the symbols of writing and printing, electronic displays or speech waves. (p. 78) In a recent statement for an encyclopaedia entry on representation Dan Lloyd explains: Humans are representing animals, and we have built a world crammed with representations of many kinds. Consider, for example, the number and variety of pictorial representation: paintings, photographs, moving pictures, line drawings, caricatures, diagrams, icons, charts, graphs, and Maps. Add the variety of linguistic representations in signs, titles, texts of all kinds, and especially spoken words and sentences … Human life, in short, is largely a cycle of making and interpreting representations. (Lloyd, forthcoming) By contrast, Block (1986) recognizes that “The representation on the page must be read or heard to be understood, but not so for the representation in the brain” (p. 83). However, despite making this distinction, Block’s discussion appears to
lapse into the characteristic error. Block asks “what it is to grasp or understand meaning?” (p. 82). Of course, we don’t grasp or understand the meaning of our own mental representations, we just have them. In Dennett’s (1978b) felicitous phrase, the representations must understand themselves. Arnauld’s words appear to directly address theorists today:

To say that our ideas and our perceptions (taking these to be the same thing) represent to us the things that we conceive and that they are their images, is to say something completely different from saying that pictures represent their originals and are the images of them, or that spoken or written words are the images of our thoughts. For in the case of ideas we mean that the things we conceive are objectively in our mind and in our thought. And this way of being objectively in the mind is so peculiar to the mind and to thought, since it is what specifically gives them their nature, that one seeks in vain anything similar outside the mind and thought. As I have already remarked, what has thrown the question of ideas into confusion is the attempt to explain the way in which objects are represented by ideas by analogy with corporeal things, but there can be no real comparison between bodies and minds on this question. (Arnauld, 1683/1990, p. 66)
Symbols & Searle: the meaning of meaning

I have been suggesting that a crucial equivocation on distinct meanings of “meaning” has led to the postulation of symbols having meaning in an observer-relative sense in which a representation is necessarily apprehended and understood by someone. Cummins (1996) clearly points to this mistake of construing internal representations as if they may function through being understood. The question of the meaning of mental representations regularly conflates whether representations are intelligible with whether they are explainable. Searle (1980) trades directly on this confusion by asking whether an intelligent understander can interpret the symbols which are the substrate of thought. But however we might explain intentionality, it cannot depend on whether anyone can “understand” the goings-on in a machine or a head in the sense of apprehending them. The only sense in which these goings-on are to be understood is the quite different sense of scientific explanation. It is no accident that the same pernicious equivocation has bedeviled longstanding disputes in the social sciences between subjectivist advocates of verstehen as a method and “positivist” advocates of erklären (Slezak, 1990; Winch, 1957). However, understanding qua participant is not the same as understanding qua scientist [10]. Undeniably, Searle’s (1980) Chinese Room demonstrates that computational symbols are meaningless in the former sense, but this is no more problematic than the meaninglessness, in this sense, of action potentials or synaptic activations. Predictably, the problem may be seen arising in debates over connectionist systems where, in this case, it is the alleged absence of “explicit” symbols that is in dispute.
However, the criterion is taken to be whether symbols may be “directly read off” or “immediately grasped” (Ramsey et al., 1991). The obvious question is: by whom?

Searle’s conundrum is remarkably evoked by Glanvill’s response in 1661 to Descartes’ own version of a coding or information processing theory of perception: “But how is it, and by what Art doth the soul read that such an image or stroke in matter … signifies such an object? Did we learn such an Alphabet in our Embryostate?” (quoted in Yolton, 1984, p. 28). Echoing Searle, Glanvill suggests that the soul must learn the quality of objects from the “motions of the filaments of nerves” by analogy with the way in which a person learns to understand a language, for otherwise “the soul would be like an infant who hears sounds or sees lips move but has no understanding of what the sounds or movements signify, or like an illiterate person who sees letters but ‘knows not what they mean’” (1984, p. 28). It is significant that, unknowingly, Yolton also evokes Searle in the Chinese Room when discussing Locke’s conception in terms of a “perspective box” or camera obscura. Yolton asks “Was there some temptation to think of our awareness being like the face at the perspective box scanning the images on the wall of the box?” (1984, p. 127).

Arnauld’s act-theory of direct realism

Arnauld proposed a “direct perception” account against Malebranche’s indirect, object-mediated theory. For Arnauld, who combats the tripartite view, ideas are not distinct entities but just those very activities of the mind which are essentially representative per se.

Since God desired our mind should know bodies, and that bodies should be known by our mind, it was undoubtedly simpler for him to render our mind capable of knowing bodies immediately, that is, without representative entities distinct from perceptions … and bodies capable of being known immediately by our mind, rather than leaving the soul powerless to see them otherwise than by means of certain representative entities. (VFI, 222–3, quoted in Nadler, 1989, p. 97)

Arnauld insists that enclosing the mind in a “palace of ideas,” as Berkeley was to do, is an absurd conclusion to draw from the representative theory. Specifically, as we have seen, Arnauld diagnoses the absurdity as due to a mistake or false analogy between “being present to the mind” in the sense of having ideas, thinking or perceiving, on the one hand, and being present to the eyes in seeing, on the other. Of course, this is the assimilation of internal and external representations we have seen. That is, “seeing” with the mind or soul is confused with seeing with the eyes or body. Arnauld argues that philosophers have tried to explain how we think or perceive with the mind—mental vision or la vue spirituelle—by analogy with optical vision or true seeing [11] with the eye—la vue corporelle. This is the same insight expressed in the epigraph from Galileo. As Nadler points out, Arnauld insists that the problem with this analogy is that it rests on false assumptions: “One must not base one’s reasoning about the mental act of perception on observations of, or beliefs about, the physiological processes which constitute bodily seeing.”
Nadler (1989, p. 93) construes Arnauld’s distinction between mental seeing and bodily vision as arising from an adherence to a strict Cartesian dualism. However, Nadler appears to be making a mistake similar to the one we have already noted in Dennett’s diagnosis of the “Cartesian Theater.” Arnauld, like Descartes, avoids the problems inherent in representative ideas qua intervening, apprehended entities because he has a better theory of perceptual and intellectual activity—namely, one in terms of mental processes which are themselves inherently referential. This avoids the necessity of any central observing homunculus and the Theater by avoiding a conception of representative entities which require an intelligent perceiver to contemplate them. Contrary to Nadler, this conception is entirely independent of dualism “which rules out any such analogies” between mental and corporeal vision. Arnauld’s view is more subtle. The analogy is ruled out not because of any dualism but because Arnauld conceives mental activity as itself essentially representative and thereby dispenses with ideas as surrogate objects to be observed by the mind’s eye: “… I do not see any need for this alleged ‘representative entity’ in order to know any object, be it present or absent” (VFI, 221, quoted in Nadler, 1989, p. 96).
Not much of a revolution?

By contrast with the usual hype, Chomsky recently expressed skepticism regarding the radical novelty of the so-called Cognitive Revolution, saying “it wasn’t all that much of a revolution in my opinion” (1996, p. 1). Chomsky suggests that the same convergence of disciplinary interests had taken place in the 17th century in what he calls “‘the first cognitive revolution,’ perhaps the only real one” (p. 1). Chomsky (1966) began his Cartesian linguistics by quoting Whitehead, who said that the recent history of intellectual life may be accurately described as “living upon the accumulated capital of ideas provided … by the genius of the seventeenth century.” Chomsky was concerned to show that a return to classical concerns and appreciation of their parallels with contemporary developments is valuable in helping to advance the study of language [12]. I have been suggesting that, with some interesting differences, Chomsky’s point holds for recent speculation about the mind outside of linguistics as well. Indeed, Nadler (1992, p. 73) notes that it is both “strange and not a little embarrassing” that the Malebranche–Arnauld debate should remain largely ignored or misunderstood by philosophers and historians alike. The same goes for cognitive scientists. Thus, Chomsky’s example may be usefully followed in relation to these different issues, such as the hotly contested representational theories of the mind. However, the differences from the case of linguistics are revealing. Though equally neglected in cognitive science, not only the good ideas of the 17th century are being rediscovered: we are not only reinventing the same theories and reliving the same debates, but also rehearsing the same notorious mistakes.
Imagery: the pictorial theory

The “Imagery Debate” is perhaps the most remarkable modern duplication of 17th century controversies. In this re-enactment, among the dramatis personae Pylyshyn plays Arnauld against Kosslyn’s Malebranche. Kosslyn (1994) claims to have resolved the debate in favor of his “pictorial” theory, but there remain grounds for skepticism. The mise-en-scène is faithful even to the extent of the acrimony of the disputes. More importantly, the central error identified by Arnauld of ascribing corporeal properties to mental ones is exactly the one charged by Pylyshyn (1973, 1978, 1981, in press) against Kosslyn.

Despite its computational and neuroscience trappings, Kosslyn’s (1994) pictorial account of imagery takes mental images to represent by virtue of a relation of resemblance to their objects and by virtue of actually having the spatial properties which they represent. Furthermore, “depictive” representations in a “visual buffer” are taken to have the specific function of permitting a re-inspection of images by the higher visual apparatus. There is said to be an “equivalence” between imagery and perception according to which the “higher” cognitive processing apparatus for visual perception is simply applied to an alternative input other than the retina—namely, the visual buffer. Thus, on the pictorial account, a mental image is conceived to be a “surrogate percept” (Pinker & Finke, 1980). In this way, an image may be “reprocessed as if it were perceptual input … thereby accomplishing the purposes of imagery that parallel those of perception” (Kosslyn, 1987, p. 155). Of course, this is an implicit endorsement of the tripartite conception of mental representation in imagery.

Dennett (1991) takes this pictorial theory of imagery to be a paradigm example of the “Theater” misconception and, not surprisingly, this “quasi-perceptual” model has been repeatedly charged with the error of importing a homunculus. The charge is vigorously rejected on the grounds that “the theory is realized in a computer program” (Kosslyn et al., 1979, p. 574), but undischarged homunculi can lurk in computational models just as easily as in traditional discursive theories (see Slezak, 1995, in press). Thus, Kosslyn et al. (1989) offer a diagram of the visual imagery system which is a profusion of interconnected boxes and arrows. The box labeled “visual buffer” contains another box labeled “attention window” which is left unexplained. This box is, in fact, the observer in the “theater” which is the source of the traditional problem. The elaborate diagram is reducible to the same tripartite schema we have seen in Malebranche.

Significantly, following Descartes, Arnauld explicitly pointed to the seductive error of taking pictures as an appropriate model of mental representation (Arnauld, 1683/1990, p. 67), and he cites the camera obscura as an erroneous model for imagery. In a revealing misunderstanding, Kosslyn (1980, p. 30) has charged alternative propositional or “tacit knowledge” theories (Pylyshyn, 1973) with being “no imagery” accounts, but denying pictorial images is not to deny imagery per se. Rather, in an Arnauldian spirit, the denial of a pictorial format for representations is actually to deny the problematic homunculus and its pseudo-explanation.
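The point that undischarged homunculi can lurk in computational models can be given a concrete, if deliberately crude, form. The following sketch is hypothetical and is not Kosslyn’s program or architecture: a “visual buffer” is filled with a depictive array, an “attention window” selects a region of it, and the recognitional work is then handed to an unanalyzed routine that plays exactly the role of the inner observer.

    # Hypothetical sketch: where a homunculus can hide in a
    # "depictive buffer plus attention window" style of model.
    import numpy as np

    def render_image(size=32):
        """Fill a 'visual buffer' with a depictive array: a bright square on a dark field."""
        buffer = np.zeros((size, size))
        buffer[8:16, 8:16] = 1.0
        return buffer

    def attention_window(buffer, row, col, width=8):
        """Select a region of the buffer for 're-inspection'."""
        return buffer[row:row + width, col:col + width]

    def interpret(window):
        """The unexplained step: 'seeing' what the selected region depicts.
        The ability the theory was supposed to explain is packed into this routine."""
        return "square" if window.mean() > 0.5 else "background"

    buffer = render_image()
    print(interpret(attention_window(buffer, 8, 8)))   # prints 'square'

The program runs, but the work of recognizing what the buffer depicts has merely been relocated into the interpret routine, the analogue of the box “left unexplained” in the flow diagrams. Realizing a theory as a program discharges a homunculus only if such routines are themselves given a non-question-begging analysis.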
In effect, as we see, the debate between Malebranche and Arnauld is being replayed throughout the history of speculation about the mind and in contemporary cognitive science. Thus, one need not go back as far as the 17th and 18th centuries to discover the same concerns. F.C. Bartlett’s (1932) theory of schemata in his book Remembering was a reaction to theories of “fixed, lifeless and fragmentary traces” or images which are merely “reduplicative,” capable only of being re-excited. Reminiscent of Arnauld’s rejection of “superfluous entities” and his view “that idea and perception are the same thing,” Bartlett wishes to substitute a cognitive process for objects which are pictorial or “reduplicative traces” (1932, p. 215) [13].

Phenomenological fallacy

Kosslyn claims to have clinched the debate about imagery by appealing to the findings of neurophysiology and neuroanatomy [14]. Topographically organized regions of cortex or “retinotopic mapping” are said to “support depictive representations,” that is, pictures in some sense. Thus, for example, a monkey may be given a visual stimulus like a dartboard to look at. If the brain tissue is treated in a certain way, it can be shown to have a likeness of the dartboard “etched” on the cortex. The result was anticipated and perfectly understood by one psychologist 30 years before:

At some point the organism must do more than create duplicates … The need for something beyond and quite different from copying is not widely understood. Suppose someone were to coat the occipital lobes of the brain with a special photographic emulsion which, when developed, yielded a reasonable copy of a current visual stimulus. In many quarters this would be regarded as a triumph in the physiology of vision. Yet nothing could be more disastrous … (Skinner, 1963, p. 285)

Skinner was acutely sensitive to the source of homunculus pseudo-explanations even if his behaviorist remedy is no longer attractive.

Kosslyn’s TV screen metaphor reveals the link between seemingly unrelated problems in cognitive science. For example, in the classic statement of materialism, U.T. Place (1956) argued that the implausibility and rejection of materialism as a solution to the mind–body problem is based on the qualitative features of subjective experience. Although these features have recently been supposed to constitute the “hard” problem of consciousness (Chalmers, 1996), Place suggested that they are the source of the “phenomenological fallacy.” Anticipating Dennett (1991), Place wrote, this is “the mistake of supposing that when the subject describes his experience, how things look, sound, smell, taste, or feel to him, he is describing the literal properties of objects and events on a particular sort of internal cinema or television screen.”

Thinking in language

On the face of it, the persistent doctrine that we think “in” language is not obviously connected with the others we have considered, such as pictorial imagery.
Nevertheless, these two theories are variations on the same theme. Symptomatic is the fact that both depend on a deep intuitive, introspective appeal. Just as we seem to be looking at pictures when we imagine visually, so we appear to talk to ourselves when we think. Indeed, Carruthers (1996), who seeks to revive what he acknowledges to be an unfashionable doctrine, explicitly bases his argument against Fodor’s Language of thought on such evidence of introspection. This is the evidence that we sometimes find ourselves in a silent monologue, talking to ourselves in our natural language, sotto voce, as it were. However, in a neglected article, Ryle (1968) suggested that the very idea that we might think “in” language is unintelligible, and the undeniable experience of talking to ourselves cannot support any claim about the vehicles of thought. It is significant that Ryle mentions en passant, among the equally problematical cases, that in which we claim to see things in our “mind’s eye”—taken to involve mental pictures of some kind. Ryle’s comparison and his warning are unwittingly confirmed by Carruthers (1996, 1998), who explicitly invokes Kosslyn’s pictorial account of imagery as support for his own analogous theory. In doing so, however, Carruthers only brings into relief the notorious difficulties of his own model, which relies on a representational format—sentences of natural language—which is, like pictures, paradigmatically the kind requiring an external intelligent observer (see Slezak, 2002).

Still more processing: triadic or dyadic?

As we have seen, the traditional difficulty, rediscovered in various forms today, arises from the problematic three-part relation between world, ideas and consciousness. Cognition without these basic features seems inconceivable, and yet they lead to seemingly intractable difficulties. In its essentials, the Malebranche–Locke account may be captured in the following schematic diagram:

External World → Ideas → Consciousness

The commonality between this schema and modern ones is clearly revealed in a diagram of Ulric Neisser (1976). In its essentials the diagram may be represented as follows:

                                 Storage        Storage            Storage
                                    ↕              ↕                  ↕
External World → Retinal image → Processing → More processing → Still more processing → Consciousness

Neisser’s schema obviously abstracts from the details of any specific account, but despite its mildly whimsical character, it purports to be a serious generic sketch of information processing theories. For example, the welter of boxes and arrows in Kosslyn’s diagram reduces to Neisser’s picture, which has the virtue of not disguising its essential commitment to the problematic third element, “consciousness.” One early attempt to avoid the difficulties inherent in this account is in the reactions of Locke’s critic John Sergeant:
He never found a satisfactory answer to the question of the nature of ideas, but he was convinced that they functioned to deny the mind direct access to things by restricting it to some kind of third entity. Cognition was thus made to consist in a triadic relation involving the knowing mind, the object or referent, and the ideas by means of which the mind came to know things. Sergeant wished to reduce the process of knowing to a dyadic relation consisting only of the knowing mind and the object known. (quoted in Yolton, 1956/1993, p. 103)

No ideas?

However, given a problematic three-part relation, there are only a few ways to get a dyadic relation. First, eliminating the middle term, ideas or representations, permits two construals. On one view, which Yolton describes as “wildly impossible” but which has nonetheless been explicitly held, an object is itself somehow literally present to, or in, the mind. This is the view we saw parodied by Malebranche saying that the soul does not stroll in the heavens among the stars.

External World → Consciousness

No external world?

Of course, a dyadic relation can also be obtained by dropping one of the other relata instead of the middle one. Thus, we might eliminate the external world to get Berkeley’s idealism.

Ideas → Consciousness

Berkeley’s strategy has not been popular recently among cognitive scientists, though it is precisely the paradox of Fodor’s “methodological solipsism.”

No consciousness?

Of course, we have another choice besides getting rid of the world or getting rid of the representations. We can get rid of consciousness! Securing a direct connection between cognition and the world can also be achieved by dispensing with the agent [15].

External World → Ideas

Despite its counter-intuitiveness, in some respects this option is preferable to the others. Of course, the agent as an element in models of perception is the locus of the potential homunculus. The paradoxical rejection of consciousness is equivalent to the first of these three options, being the same as rejecting representations when these are conceived in certain ways. Therefore, expressed less paradoxically, the Gibsonian or “situated” case against representations may be best understood, not as an outright rejection of an internal mental medium as such, but rather as the
Arnauldian rejection of a certain particular conception of representations which are explanatorily question-begging.
Conclusion

Thirty years ago in his book Psychological explanation, Fodor (1968, p. vii) remarked: “I think many philosophers secretly harbor the view that there is something deeply (i.e. conceptually) wrong with psychology, but that a philosopher with a little training in the techniques of linguistic analysis and a free afternoon could straighten it out.” Today, the suspicion of deep conceptual problems at the heart of cognitive science is perhaps more clearly seen to be justified, although Fodor’s joke was intended to reflect as much upon philosophy as upon psychology. Notoriously, deep conceptual problems at the heart of philosophy have been no more dispelled than those in psychology. I have been suggesting that by adopting a broader perspective we may see why the sorry fortunes of the two disciplines have been inextricably linked.
Acknowledgements

This paper has benefited greatly from the most helpful comments and criticisms of the editor, W. Bechtel, and two anonymous referees of Philosophical Psychology. Versions of the material have been presented at the Department of Philosophy, University of Sydney, the Sixteenth Annual Meeting of the Japanese Cognitive Science Society, Tokyo, August 1999, the Annual Meeting of the Australasian Association for History, Philosophy & Social Studies of Science, Melbourne, June 2001, and the Twenty Third Annual Conference of the Cognitive Science Society, Edinburgh, August 2001. I am particularly grateful for the comments of Stephen Gaukroger, Ron Giere, Pat Langley, Robert Nola, Zenon Pylyshyn and John Sutton.

Notes

[1] Kosslyn writes with evident annoyance: “Once and for all, the ‘homunculus problem’ is simply not a problem. We thought this would be obvious given that the theory is realized in a computer program, but it seems necessary to address this complaint again” (Kosslyn et al., 1979, p. 574).
[2] The scholarly, exegetical niceties need not concern us here. However, Nadler (1992, p. 8) points out that the standard reading presented here is mistaken and even a caricature, though it has been almost universally held among commentators including Arnauld, Locke, Leibniz, Berkeley and Reid.
[3] The “situated” critique confuses conscious calculation with the “sub-personal” use of symbols and their purely causal, functional role in cognition (see Slezak, 1999). As Pasnau says: “Surely, any modern direct-realist theory of perception will allow causal intermediaries between object and percipient: no one would dream of denying the title of direct realism to a theory of perception merely because it tolerates causal intermediaries” (Pasnau, 1997, p. 300).
[4] Above all, Moreau expresses irritation at the view which has been widely held, especially among “nos contemporains anglo-saxons,” that this affair is at base nothing other than one of opposing temperaments—“Arnauld-la-teigne” against “Malebranche-le-grognon”—that is, Arnauld-the-nuisance against Malebranche-the-grouch (1999, p. 16). In fact, numerous great figures of the time became embroiled in the debate, such as Leibniz who followed it closely (see Nadler, 1989, p. 5).
[5] In his earlier work on Locke and the way of ideas, Yolton (1956/1993) gives a longer passage from the same author which is worth repeating here as testimony to the curious persistence of the puzzle of ideas, albeit in different guises: “… like Men blundering in the dark, they feel after them to find them; some catch at them under one Appearance, some under another; some make them to be Material, others Spiritual; some will have them to be Effluvia, from the Bodies they Represent, others Totally Distinct Essences; some hold them to be Modes, others Substances; some assert them All to be Innate; others None: So that one would think there must needs be a very great Intricacy in that which has given Rise, not only to such a Variety but also such a Contradiction of Opinions” (quoted in Yolton, 1956/1993, p. 96).
[6] I am grateful to an anonymous referee for criticisms which have helped me to try clarifying this argument.
[7] This moral seems to extend in an obvious way to the broader concern with knowledge in philosophy of science. Although it cannot be pursued here, another deep parallel may be seen in the more or less distinct literatures of perceptual realism and scientific realism. Here too the problem of truth and error arises for realism in the form of the so-called “pessimistic meta-induction” from history—analogous to the Argument from Illusion. It is no accident that Mach’s instrumentalism was harshly criticized in Lenin’s (1927) Materialism and empirio-criticism as betraying science by deserting to a Berkeleyan idealism. See also Popper (1963a,b) on Berkeley as precursor to Mach and Einstein. See also Zahar (1981).
[8] Among the few explicit attempts to articulate the relevant distinctions here, see Cummins (1996, p. 87) on the distinction between “meaning” and “meaning-for”—the latter described by Cummins as “a three-place relation between a representation, a concept, and a cognitive system.” See discussion on p. 130.
[9] Conversely, it may be that researchers’ actual theories or programs are exemplary and free of problems, but only their “meta-theoretical” analyses suffer from the fatal flaws. I am grateful to Pat Langley and Ron Giere for this point.
[10] Chomsky (1962) has drawn attention to the way in which traditional grammars produce an illusion of explanatory completeness while, in fact, they have serious limitations from a scientific, explanatory point of view. The apparent success of traditional grammars depends on being “paired with an intelligent and comprehending reader.” This is another version of the homunculus fallacy because it is just this ability of the intelligent, comprehending reader that the theory is supposed to explain.
[11] Nadler (1989) oddly chooses to put scare quotes around the terms referring to the physiological processes of corporeal vision, i.e. with the eyes, as opposed to what he calls “true seeing” by the mind. This seems clearly to reverse the expectation, since surely it is seeing with the eyes which is the normal case and mental vision the metaphorical or analogical use of the terms.
[12] But see skeptical discussion of Chomsky’s claims and references in Buroker (1996, p. ix).
[13] An Arnauldian account is seen more recently in philosophy in the guise of so-called “adverbial” accounts of experience. See Tye (1984), who says “having a visual experience is a matter of sensing in a certain manner rather than sensing a peculiar immaterial object” (1984, p. 196).
[14] He adds “to the satisfaction of most people” (1994, p. vii). The exceptions Kosslyn has in mind are philosophers who are presumably immune to rational persuasion. In dismissive remarks he explains, “I fully expect philosophers to continue to debate the matter; after all, that is their business” (1994, p. 409).
[15] This point has been made in exactly the same terms by John Sutton (1998).
References

ABELL, C. & CURRIE, G. (1999). Internal and external pictures. Philosophical Psychology, 12, 429–445.
ARNAULD, A. (1683/1990). On true and false ideas, with introductory essay by S. GAUKROGER (Trans.). Manchester: Manchester University Press.
AUSTIN, J.L. (1962). Sense and sensibilia. Oxford: Oxford University Press.
AYER, A.J. (1940). The foundations of empirical knowledge. London: Macmillan.
BARTLETT, F.C. (1932). Remembering: a study in experimental and social psychology. Cambridge: Cambridge University Press.
BECHTEL, W. (1998). Representations and cognitive explanations: assessing the dynamicist's challenge in cognitive science. Cognitive Science, 22, 295–318.
BIRNBAUM, L. (1991). Rigor Mortis: a response to Nilsson's "Logic and artificial intelligence." Artificial Intelligence, 47, 57–77.
BLOCK, N. (1986). Advertisement for a semantics for psychology. Midwest Studies in Philosophy, X, 615–678.
BROOKS, R.A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.
BUROKER, J.V. (1996). Introduction. In A. ARNAULD & P. NICOLE, Logic or the art of thinking, J.V. BUROKER (Ed., Trans.). Cambridge: Cambridge University Press.
CARRUTHERS, P. (1996). Language, thought and consciousness. Cambridge: Cambridge University Press.
CARRUTHERS, P. (1998). Thinking in language? Evolution and a modularist possibility. In P. CARRUTHERS & J. BOUCHER (Eds) Language and thought: interdisciplinary themes. Cambridge: Cambridge University Press.
CARRUTHERS, P. & SMITH, P.K. (1996). Theories of theories of mind. Cambridge: Cambridge University Press.
CHALMERS, D. (1996). The conscious mind. Oxford: Oxford University Press.
CHOMSKY, N. (1962). Explanatory models in linguistics. In E. NAGEL, P. SUPPES & A. TARSKI (Eds) Logic, methodology and philosophy of science. Stanford: Stanford University Press.
CHOMSKY, N. (1966). Cartesian linguistics. New York: Harper & Row.
CHOMSKY, N. (1996). Power and prospects: reflections on human nature and the social order. Sydney: Allen & Unwin.
CLARK, A. & TORIBIO, J. (1994). Doing without representing? Synthese, 101, 401–431.
CUMMINS, R. (1996). Representations, targets and attitudes. Cambridge, MA: Bradford/MIT.
DAVIDSON, D. (1975). Thought and talk. In S. GUTTENPLAN (Ed.) Mind and language. Oxford: Clarendon Press.
DAVIES, M. & STONE, T. (Eds) (1995a). Folk psychology. Oxford: Blackwell.
DAVIES, M. & STONE, T. (Eds) (1995b). Mental simulation. Oxford: Blackwell.
DENNETT, D.C. (1978a). A cure for the common code. In Brainstorms. Montgomery, VT: Bradford Books.
DENNETT, D.C. (1978b). Artificial intelligence as philosophy and as psychology. In Brainstorms. Montgomery, VT: Bradford Books.
DENNETT, D.C. (1991). Consciousness explained. London: Penguin.
DESCARTES, R. (1662/1972). Treatise of man, commentary by T.S. HALL (Trans.). Cambridge, MA: Harvard University Press.
DESCARTES, R. (1637/1985). Dioptrics. In The philosophical writings of Descartes, 2 vols, J. COTTINGHAM, R. STOOTHOFF & D. MURDOCH (Trans.). Cambridge: Cambridge University Press.
DRETSKE, F. (1986). Misrepresentation. In R.J. BOGDAN (Ed.) Belief: form, content and function. Oxford: Oxford University Press.
EDELMAN, S. (1998). Representation is representation of similarities. Behavioral and Brain Sciences, 21, 449–498.
ELIASMITH, C. (1996). The third contender: a critical examination of the dynamicist theory of cognition. Philosophical Psychology, 9, 441–463.
FODOR, J.A. (1968). Psychological explanation. New York: Random House.
FODOR, J.A. (1975). The language of thought. New York: Crowell.
FODOR, J.A. (1978). Propositional attitudes. The Monist, 61, 4.
FODOR, J.A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3, 63–109.
FODOR, J.A. (1985a). A presentation to the National Science Foundation Workshop on Information and Representation. In B.H. PARTEE, S. PETERS & R. THOMASON (Eds) Report of Workshop on Information and Representation. Washington, DC: NSF System Development Foundation.
FODOR, J.A. (1985b). Fodor's guide to mental representation. Mind, Spring, 55–97.
FODOR, J.A. (1994a). The elm and the expert: mentalese and its semantics. Cambridge, MA: MIT.
FODOR, J.A. (1994b). Concepts: a potboiler. Cognition, 50, 95–113.
FODOR, J.A. (1998). Concepts: where cognitive science went wrong. Oxford: Oxford University Press.
FREEMAN, W.J. & SKARDA, C.A. (1990). Representations: who needs them? In J.L. MCGAUGH, N. WEINBERGER & G. LYNCH (Eds) Brain organization and memory: cells, systems and circuits. Oxford: Oxford University Press.
GALILEI, G. (1610/1983). The starry messenger. In S. DRAKE (Ed.) Telescopes, tides & tactics. Chicago: University of Chicago Press.
GAUKROGER, S. (1990). The background to the problem of perceptual cognition. In A. ARNAULD, On true and false ideas, S. GAUKROGER (Trans.). Manchester: Manchester University Press.
GAUKROGER, S. (1996). Descartes: an intellectual biography. Oxford: Oxford University Press.
GETTIER, E.L. (1963). Is justified true belief knowledge? Analysis, 23, 121–123.
GREENO, J.G. (1989). Situations, mental models and generative knowledge. In D. KLAHR & K. KOTOVSKY (Eds) Complex information processing: the impact of Herbert A. Simon. Hillsdale, NJ: Lawrence Erlbaum.
GREGORY, R.L. (1997). Knowledge in perception and illusion. Philosophical Transactions of the Royal Society London B, 352, 1121–1128.
JACKENDOFF, R. (1992). Languages of the mind. Cambridge, MA: Bradford/MIT.
KOSSLYN, S.M. (1980). Image and mind. Cambridge, MA: Harvard University Press.
KOSSLYN, S.M. (1987). Seeing and imagining in the cerebral hemispheres: a computational approach. Psychological Review, 94, 148–175.
KOSSLYN, S.M. (1994). Image and brain: the resolution of the imagery debate. Cambridge, MA: MIT.
KOSSLYN, S.M., PINKER, S., SMITH, G.E. & SCHWARTZ, S.P. (1979). On the demystification of mental imagery. The Behavioral and Brain Sciences, 2, 535–581.
KOSSLYN, S.M., SOKOLOV, M.A. & CHEN, J.C. (1989). The lateralization of BRIAN: a computational theory and model of visual hemispheric specialization. In D. KLAHR & K. KOTOVSKY (Eds) Complex information processing: the impact of Herbert A. Simon. Hillsdale, NJ: Lawrence Erlbaum.
LENIN, V.I. (1927). Materialism and empirio-criticism: critical notes concerning a reactionary philosophy. London: Martin Lawrence.
LLOYD, D. (forthcoming). Representation. In Macmillan encyclopedia of cognitive science. London: Macmillan.
LOCKE, J. (1690/1964). An essay concerning human understanding, A.D. WOOZLEY (Ed.). London: Collins.
MALEBRANCHE, N. (1712/1997). The search after truth, T.M. LENNON & P.J. OLSCAMP (Eds, Trans.). Cambridge: Cambridge University Press.
MARKMAN, A.B. & DIETRICH, E. (2000). In defense of representation. Cognitive Psychology, 40, 138–171.
MEYERING, T.C. (1997). Representation and resemblance: a review essay of Richard A. Watson's "Representational ideas from Plato to Patricia Churchland." Philosophical Psychology, 10, 221–230.
MILLIKAN, R.G. (1991). Speaking up for Darwin. In B. LOEWER & G. REY (Eds) Meaning in mind. Oxford: Basil Blackwell.
MOREAU, D. (1999). Deux Cartésiens: la polémique entre Antoine Arnauld et Nicolas Malebranche. Paris: Vrin.
NADLER, S. (1989). Arnauld and the Cartesian philosophy of ideas. Manchester: Manchester University Press.
NADLER, S. (1992). Malebranche and ideas. Oxford: Oxford University Press.
NEISSER, U. (1976). Cognition and reality. New York: Freeman.
NEWELL, A. (1986). The symbol level and the knowledge level. In Z. PYLYSHYN & W. DEMOPOULOS (Eds) Meaning and cognitive structure. Norwood, NJ: Ablex.
NILSSON, N.J. (1987). Commentary on McDermott. Computational Intelligence, 3, 202–203.
NILSSON, N.J. (1991). Logic and artificial intelligence. Artificial Intelligence, 47, 31–56.
PALMER, S.E. (1978). Fundamental aspects of cognitive representation. In E. ROSCH & B. LLOYD (Eds) Cognition and categorization. Hillsdale, NJ: Lawrence Erlbaum.
PASNAU, R. (1997). Theories of cognition in the later Middle Ages. Cambridge: Cambridge University Press.
PINKER, S. & FINKE, R. (1980). Emergent two-dimensional patterns in images rotated in depth. Journal of Experimental Psychology: Human Perception and Performance, 6, 244–264.
PLACE, U.T. (1956). Is consciousness a brain process? British Journal of Psychology, 47, 44–50.
POPPER, K.R. (1963a). A note on Berkeley as precursor of Mach and Einstein. In Conjectures and refutations. London: Routledge & Kegan Paul.
POPPER, K.R. (1963b). Three views concerning human knowledge. In Conjectures and refutations. London: Routledge & Kegan Paul.
PUTNAM, H. (1975). The meaning of "meaning." In K. GUNDERSON (Ed.) Language, mind and knowledge: Minnesota studies in the philosophy of science, Vol. 7. Minneapolis: University of Minnesota Press.
PYLYSHYN, Z. (1973). What the mind's eye tells the mind's brain: a critique of mental imagery. Psychological Bulletin, 80, 1–24.
PYLYSHYN, Z. (1978). Imagery and artificial intelligence. In C.W. SAVAGE (Ed.) Minnesota studies in the philosophy of science, Vol. IX. Minneapolis: University of Minnesota Press.
PYLYSHYN, Z. (1981). The imagery debate. In N. BLOCK (Ed.) Imagery. Cambridge, MA: MIT.
PYLYSHYN, Z. (in press). Mental imagery: in search of a theory. Behavioral and Brain Sciences.
RAMSEY, W., STICH, S. & GARON, J. (1991). Connectionism, eliminativism and the future of folk psychology. In W. RAMSEY, S. STICH & D. RUMELHART (Eds) Philosophy and connectionist theory. Hillsdale, NJ: Lawrence Erlbaum.
REYNOLDS, S.L. (2000). The Argument from Illusion. Noûs, 34, 604–621.
RORTY, R. (1979). Philosophy and the mirror of nature. Princeton: Princeton University Press.
ROSENSCHEIN, S.J. (1985). Formal theories of knowledge in AI and robotics. New Generation Computing, 3, 345–357.
RUMELHART, D.E. & NORMAN, D.A. (1983). Representation in memory. Center for Human Information Processing, Technical Report CHIP 116, University of California, San Diego.
RYLE, G. (1968). A puzzling element in the notion of thinking. In P.F. STRAWSON (Ed.) Studies in the philosophy of thought and action. Oxford: Oxford University Press.
SEARLE, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–424.
SHEPARD, R.N. & METZLER, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701–703.
SKINNER, B.F. (1963). Behaviorism at fifty. Science, 140, 951–958.
SLEZAK, P. (1990). Man not a subject for science? Social Epistemology, 4, 327–342.
SLEZAK, P. (1992). When can images be reinterpreted: non-chronometric tests of pictorialism. In Proceedings of 14th Conference of the Society for Cognitive Science. Hillsdale, NJ: Lawrence Erlbaum.
SLEZAK, P. (1994). Situated cognition: empirical issue, paradigm shift or conceptual confusion. In Proceedings of 16th Conference of the Society for Cognitive Science. Hillsdale, NJ: Lawrence Erlbaum.
SLEZAK, P. (1995). The "philosophical" case against visual imagery. In P. SLEZAK, T. CAELLI & R. CLARK (Eds) Perspectives on cognitive science. Norwood: Ablex.
SLEZAK, P. (1999). Situated cognition: empirical issue, paradigm shift or conceptual confusion? In J. WILES & T. DARTNALL (Eds) Perspectives on cognitive science, Vol. 2. Norwood, NJ: Ablex.
SLEZAK, P. (2000). Descartes' startling doctrine of the reverse-sign relation. In S. GAUKROGER, J. SCHUSTER & J. SUTTON (Eds) Descartes' natural philosophy. London: Routledge.
SLEZAK, P. (2002). Thinking about thinking: language, thought & introspection. Language and Communication, 22, 353–373.
SLEZAK, P. (in press). Mental imagery: déjà vu all over again? Behavioral and Brain Sciences.
SLEZAK, P. (forthcoming). Images, illusions, mistakes & misrepresentations: the world gone wrong. In P. STAINES, H. CLAPIN & P. SLEZAK (Eds) Representation in mind. Westport, CT: Praeger.
SMITH, B.C. (1987). The correspondence continuum. CSLI Report 87–71.
STALNAKER, R. (1991). How to do semantics for the language of thought. In B. LOEWER & G. REY (Eds) Meaning in mind. Oxford: Basil Blackwell.
SUTTON, J. (1998). Philosophy and memory traces: Descartes to connectionism. Cambridge: Cambridge University Press.
TYE, M. (1984). The adverbial approach to visual experience. The Philosophical Review, 93, 195–225.
VAN GELDER, T. (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences, 21, 615–665.
WIMMER, H. & PERNER, J. (1983). Beliefs about beliefs: representation and constraining function of wrong beliefs in young children's understanding of deception. Cognition, 13, 103–128.
WINCH, P. (1957). The idea of a social science. London: Routledge.
WRIGHT, E. (Ed.) (1993). New representationalisms. London: Avebury.
YOLTON, J.W. (1956/1993). Locke and the way of ideas. Bristol: Thoemmes Press.
YOLTON, J.W. (1984). Perceptual acquaintance from Descartes to Reid. Minneapolis: University of Minnesota Press.
YOLTON, J.W. (1996). Perception and reality: a history from Descartes to Kant. Ithaca: Cornell University Press.
YOLTON, J.W. (2000). Replies to my fellow symposiasts. In S. GAUKROGER, J. SCHUSTER & J. SUTTON (Eds) Descartes' natural philosophy. London: Routledge.
ZAHAR, E. (1981). Second thoughts about Machian positivism: a reply to Feyerabend. British Journal for the Philosophy of Science, 32, 267–276.