Models as Parts of Distributed Cognitive Systems

RONALD N. GIERE
Department of Philosophy, Center for Philosophy of Science, University of Minnesota, Minneapolis, MN 55455, USA
[email protected]

Abstract:

Recent work on the role of models in science has revealed a great many kinds of models performing many different roles. In this paper I suggest that one can find much unity among all this diversity by thinking of many models as being components of distributed cognitive systems. I begin by distinguishing the relevant notion of a distributed cognitive system and then give examples of different kinds of models that can be thought of as functioning as components of such systems. These include both physical and abstract models. After considering several objections, I conclude by locating distributed cognition within larger movements in contemporary cognitive science.

1.

INTRODUCTION

Recent work on the role of models in science has revealed a great many kinds of models performing different roles. In this paper I will suggest that one can find much unity among all this diversity by thinking of many models as being components of distributed cognitive systems. I will begin by distinguishing the relevant notion of a distributed cognitive system and then give examples of different kinds of models that can be thought of as functioning as components of such systems. After considering several objections, I will conclude by locating distributed cognition within larger movements in contemporary cognitive science.

2.

DISTRIBUTED COGNITION

Distributed cognition is much more than merely distributed processing, which has been around for a generation. There are a number of avenues by
which one can reach distributed cognition. Here is one that leads directly to the role of models. During the early 1980s, James McClelland, David Rumelhart and the PDP Research Group, based mainly in San Diego, began exploring the capabilities of networks of simple processors thought to be at least somewhat similar to neural structures in the human brain. They discovered that what such networks do best is recognize and complete patterns in input provided by the environment. The generalization to human brains is that humans recognize patterns through the activation of prototypes embodied as changes in the states of groups of neurons induced by sensory experience. But if something like this is correct, we face a big problem. How do humans do the kind of apparently linear symbol processing required for such fundamental cognitive activities as using language and doing mathematics? Their suggestion, tucked away near the end of a long paper in Volume 2 of their massive 1986 publication, was that humans do the kind of cognitive processing required for these linear activities by creating and manipulating external representations. These latter tasks can be done well by a complex pattern matcher. Consider the following simple example (McClelland, et al., 1986, v. 2, 44-48). Try to multiply two three-digit numbers, say 456 × 789, in your head. Few people can perform even this very simple arithmetical task. Figure 1 shows how many of us learned to do it.

Figure 1. Multiplying two three-digit numbers

This process involves an external representation consisting of written symbols. These symbols are manipulated, literally, by hand. The process involves eye-hand motor coordination and is not simply going on in the head of the person doing the multiplying. The person’s contribution is (1) constructing the external representation, (2) doing the correct manipulations in the right order, and (3) supplying the product of any two single-digit integers, which educated humans can do easily from memory.
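The pencil-and-paper procedure can be sketched as a short routine that manipulates the written digits much as a person does, producing one partial product per digit of the multiplier. This is only an illustrative sketch; the function name and the list-of-partials layout are mine, not the paper's:

```python
def long_multiply(x, y):
    """Multiply x by y as done on paper: form one partial product
    for each digit of y, taken right to left, each shifted one
    column further left, then sum the rows."""
    partials = []
    for shift, digit in enumerate(reversed(str(y))):
        # one written row: x times a single digit of y, with the
        # column shift realized as trailing zeros
        partials.append(x * int(digit) * 10 ** shift)
    return partials, sum(partials)

rows, total = long_multiply(456, 789)
# rows: [4104, 36480, 319200]; total: 359784, the result in Figure 1
```

The point of the original example survives in the code: the person supplies only single-digit products and bookkeeping, while the written rows carry everything else.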

Notice that this example focuses on the process of multiplication; the task, not the product, and not knowledge of the answer. Of course, if the task is done correctly, one does come to know the right answer, but the focus is on the process rather than the product. The emphasis is on the cognitive system instantiating the process rather than cognition simpliciter. Now, what is the cognitive system that performs this task? Their answer was that it is not merely the mind-brain of the person doing the multiplication, nor even the whole person doing the multiplication, but the system consisting of the person plus the external physical representation. It is this whole system that performs the cognitive task, that is, the multiplication. The cognitive process is distributed between a person and an external representation.

The external representation for this problem can be regarded as a model of a multiplicative process in several senses of the term. It is a model in the sense employed in logic and mathematics in that it instantiates various arithmetical operations. It is also a model in the sense that it represents these arithmetical operations. Furthermore, it is a model in the sense of a working model of a physical process. In the process of performing the multiplication, the person literally constructs the model on paper, using it to help perform the cognitive task at hand.

Of course, many people no longer perform multiplication using pencil and paper. They use an electronic calculator or a computer. In this case, the necessary arithmetical models are built into the machine, either as hardware or as software. The general situation, however, is the same as before. Calculators and computers are designed so as to be easily operated by a human with a pattern-matching brain. In particular, a person plus a computer constitutes a powerful cognitive system that can carry out cognitive tasks far beyond the capabilities of an unaided human.
On the other hand, a person alone is already a cognitive system with many capabilities, such as face recognition, that currently are little improved by adding a computer. Here one is tempted to ask whether a computer by itself is a cognitive system. From the point of view of distributed cognition, I think we should not count solitary computers as cognitive systems. My reasons turn on general issues to which I will return later in this paper.

3.

DIAGRAMMATIC REASONING

Among those emphasizing the role of models in science, diagrams are typically regarded as models and diagrammatic reasoning as a prominent kind of model-based reasoning. Figure 2 provides an example of mathematical
reasoning that makes essential use of diagrams. The two diagrams in Figure 2 embody a famous proof of the Pythagorean theorem.

Figure 2. Diagrams used to prove the Pythagorean theorem
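The proof the two diagrams embody can be stated compactly: each diagram decomposes the same large square, of area (a + b)², into four copies of the right triangle plus one or two smaller squares.

```latex
(a+b)^2 = c^2 + 4\cdot\tfrac{1}{2}ab \quad \text{(left diagram)}
\qquad
(a+b)^2 = a^2 + b^2 + 4\cdot\tfrac{1}{2}ab \quad \text{(right diagram)}
```

Subtracting the common triangle term 2ab from both equations gives c² = a² + b².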

Note first that the areas of the two larger squares are identical, and, indeed, equal to (a + b)². The area of the single smaller square in the left diagram is equal to c², while the areas of the two smaller squares in the right diagram are a² and b² respectively. All of the triangles must be of equal area, being right triangles with legs a and b. In the left diagram, the area of the smaller square, c², is equal to the area of the larger square minus the areas of the four triangles. But similarly, in the right diagram, the combined area of the two smaller squares, a² plus b², equals the area of the larger square minus the areas of the same four triangles. So it follows that c² equals a² plus b².

In this reasoning, crucial steps are supplied by observing the relative areas of squares and triangles in the diagrams. Very little propositional knowledge is assumed, for example, that the area of a right triangle is proportional to the product of the lengths of its two legs. The logicians Jon Barwise and John Etchemendy (1996) call this reasoning ‘heterogeneous inference’ because it involves both propositional and visual representations. The issue is how further to characterize this reasoning. Here there are a number of views currently held.

A very conservative view is expressed in the following passage written by a contemporary logician. Referring to a geometrical proof, he writes:

[The diagram] is only an heuristic to prompt certain trains of inference; … it is dispensable as a proof-theoretic device; indeed, … it has no proper place in the proof as such. For the proof is a syntactic object consisting only of sentences arranged in a finite and inspectable array. (Tennant 1986)

This position simply begs the question as to whether there can be valid diagrammatic proofs, or, indeed, even heterogeneous inference. Even granting that any diagrammatic proof is dispensable in the sense that it can be reconstructed as a syntactic object, it does not follow that diagrams can have “no proper place” in a proof. Only by assuming the notion of a “proper proof” to be essentially syntactic can one make this so.

Herbert Simon had a great interest in diagrams and took a more moderate position. He long ago (1978) claimed that two representations, for example, propositional and diagrammatic, might be “informationally equivalent” but not “computationally equivalent.” By this he meant that there could be a method of computation that made the same information more accessible or more easily processed in one representational form rather than another. There remained, however, an important respect in which Simon’s position was still relatively conservative, as illustrated in the following passage:

Understanding diagrammatic thinking will be of special importance to those who design human-computer interfaces, where the diagrams presented on computer screens must find their way to the Mind’s Eye, there to be processed and reasoned about. (Simon, 1995, xiii)

Here Simon seems to be saying that all the cognition goes on in the head of the human. Others are more explicit. A very common approach to diagrammatic reasoning in the AI community is to treat diagrams simply as perceptual input that is then processed in various ways (Chandrasekaran, et al., 1995). Here the most conservative approach is first to translate the diagram into propositional form and then proceed with processing in standard ways (Wang, Lee and Hervat, 1995). Figure 3 shows diagrammatic reasoning as understood in standard artificial intelligence.

Figure 3. Diagrammatic reasoning in standard artificial intelligence

From the standpoint of distributed cognition, treating a diagram merely as input to a standard logic-based computer misses most of what is important about diagrammatic reasoning. What is interesting about diagrammatic reasoning is the interaction between the diagram and a human with a fundamentally pattern-matching brain. Rather than locating all the cognition in the human brain, one locates it in the system consisting of a human together with a diagram. It is this system that performs the cognitive task, for example, proving the Pythagorean theorem. The system can fairly easily perform this task whereas the human alone might well not be able to do it at all. A large part of what makes the combined system more powerful is that the geometrical relationships are embodied in the diagrams themselves. Extracting the desired relationships from the diagrams is far easier (especially for a pattern-matcher) than attempting to represent them internally. Figure 4 shows diagrammatic reasoning as distributed cognition. As will become even clearer below, Figure 4 is oversimplified in that it pictures a brain in isolation from a body and without any means of physically interacting with the diagram.

Figure 4. Diagrammatic reasoning as distributed cognition

From this perspective, there is at least one clear way of studying diagrammatic reasoning within the framework of cognitive science. This is to investigate the characteristics of diagrams that most effectively interact with human cognitive capabilities. Stephen Kosslyn’s Elements of Graph Design (1994) provides a prototype for this kind of study. Unfortunately, he devotes only two pages explicitly to diagrams, but one of his examples shows the kind of thing one can do. The example concerns diagrams that show how pieces fit together in an artifact, as in Figure 5.

Figure 5. Good versus bad design of diagrams based on cognitive principles

The relevant fact about human cognition is that humans process shape and spatial relationships using different neural systems, so the two types of information do not integrate very well. The lesson is that, in a diagram, the representations of the pieces should be near to each other as in the lower diagram. My guess is that, in addition to such general design principles, effective design of diagrams differs significantly for different sciences. Thus, features making for cognitively effective diagrams in molecular biology may be quite different from features effective in high-energy physics.

4.

REASONING WITH PICTORIAL REPRESENTATIONS

One cannot draw a sharp distinction between diagrams and pictorial representations. At one extreme we find highly idealized line drawings such as the following picture of a diatomic molecule indicating three degrees of freedom, spinning, rotating, and vibrating. Such a drawing differs little from a diagram.

Figure 6. Idealized picture of a diatomic molecule

At the other extreme are actual photographs, for example, of a volcano. In between we find a vast variety of such things as X-ray photographs of broken bones, telescopic pictures of distant nebulae, magnetic resonance images of brains, scanning electron microscope pictures of human tissues, magnetic profiles of the ocean floor, and so on. Understanding pictorial reasoning as distributed cognition makes it possible to treat pictorial reasoning as more or less continuous with diagrammatic reasoning. The cognition is in the interaction between the viewer and the picture, as when geologists take magnetic profiles of the sea floor as indicators of sea-floor spreading (Giere, 1996). We need not imagine geologists as first forming a mental representation of the picture and reasoning with it. They can reason directly with the external representation of the phenomenon.

5.

REASONING WITH PHYSICAL MODELS

From a distributed cognition perspective, there is again little fundamental difference between reasoning with diagrams and reasoning with physical models. A diagram is, after all, a kind of physical model, only two- rather than three-dimensional. With some physical models one can work with the relevant parts of the world itself, the world being its own best model. A simple example is the problem of fitting slightly different lids on a number of slightly different screw-top jars. Suppose a person cannot tell by simple inspection which lid fits which jar. There is, however, an obvious effective procedure for solving the problem using the same perceptual skills. Pick any lid and try it on the jars in any order until one finds the jar that fits. Then pick another lid and do likewise until all the jars are fitted with their appropriate lids. One need not construct a representation of the whole problem, whether internal or
external. One simply interacts with the world itself in an appropriate manner and the world guides one’s search for the correct solution.

In a more normal case, particularly in the sciences, a physical model is constructed to be an external representation of some aspect of the world. A well-known example is James Watson’s model of DNA made from metal cutouts held together by a pole and clamps. By fiddling with the physical model so as to fit the base pairs inside a double-helical backbone, Watson came up with the right structure. Watson with his physical model turned out to be a more effective cognitive system than Rosalind Franklin with her X-ray pictures and hand-drawn diagrams.

Physical models provide what is probably the best case for understanding model-based reasoning as an example of distributed cognition. Here it is very clear that one need not be performing logical operations on an internal representation. It is sufficient to perform and observe appropriate physical operations on an external physical representation. The interaction between a person and the model is physical as well as perceptual.
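The jar-and-lid procedure above can be sketched as a search loop in which the world answers each trial. The sketch is mine, not the paper's: the `fits` predicate stands in for the physical act of trying a lid, and all names are illustrative.

```python
def match_lids(lids, jars, fits):
    """Pair each lid with its jar by exhaustive physical trial:
    pick a lid, try jars until one fits, set that pair aside,
    and repeat. No representation of the whole problem is built."""
    pairing = {}
    remaining = list(jars)
    for lid in lids:
        for jar in remaining:
            if fits(lid, jar):  # the 'world' answers the trial
                pairing[lid] = jar
                remaining.remove(jar)
                break
    return pairing

# toy world: a lid and a jar fit when their diameters agree
lids = [("lid", d) for d in (6.1, 5.9, 6.0)]
jars = [("jar", d) for d in (6.0, 6.1, 5.9)]
result = match_lids(lids, jars, lambda l, j: l[1] == j[1])
```

Nothing in the loop represents the whole problem; like the person at the counter, it only keeps track of which jars are already capped.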

6.

REASONING WITH ABSTRACT MODELS

Abstract models provide what is probably the most difficult case for understanding model-based reasoning as an example of distributed cognition. It is not clear in what sense an abstract model can be external. Nor is it clear how a person can interact with an abstract model. Yet many, if not most, models used in the sciences are abstract models. Think particularly of models in quantum physics or cosmology. So some account of abstract models is needed.

The only alternative to regarding abstract models as external is to regard them as internal. There are, however, many reasons for rejecting this alternative. One reason is that we could not really refer to the model of anything, because it is to be expected that every individual involved will have a somewhat different internal model. We would have to speak of A’s model, B’s model, etc. An even stronger reason is that it is highly implausible to suppose that all the details of the complex abstract models of contemporary science could be represented in the form of a mental model. Even experts, when asked to solve a difficult problem, typically proceed first to create an external representation of the problem in the form of diagrams or equations. The suggested account of this process is that the expert is using the external representations in order to reconstruct aspects of the abstract model relevant to the problem at hand. This no doubt requires some simpler mental models. It also requires acquired skills in constructing the sort of models in question.

Yet, even if we agree that abstract models are in some sense external, there remains a question of just what this means. This question threatens to lead us into the arid land of the philosophy of mathematics, where one worries about what numbers might be. I think we would do well to avoid this detour and take a safer route. As with our understanding of time, we traffic in abstract models every day without worrying about what they are.

Consider plans and planning, well-known topics in cognitive science. Here abstract models are simply taken for granted. Suppose three friends are planning a party. The planned party is, at least in part, really an abstract model of a party. It is assigned a date and time in the future, and potential guests may even be designated in a written list. We know the party starts out as an abstract entity, a mere possibility, because it may in fact never materialize. Now I suppose almost everyone who has thought about the problem agrees that the ability to use language has a lot to do with the human ability to make plans, which is to create abstract entities, which we may take to be abstract models. This is enough for present purposes.

Here we already have a plausible answer to the question of how humans interact with abstract models. They do so by using language, both ordinary and highly technical. The three friends in my example build up their model of the party by talking about it. Moreover, they can reason about it as well, realizing, for example, that the number of potential guests has become too large for the intended space. It does not follow, however, that the possible party is itself in any way propositional, a mere linguistic entity. My three friends are not talking about what they are saying; they are talking about a possible party.

However important language might be in developing abstract models, it is not the only means for doing so. Diagrams, pictures and physical models may also be used to characterize aspects of an abstract model.
Watson’s physical model of DNA, for example, also served the purpose of specifying some features of an abstract model of DNA such as the pitch of the helix and the allowable base pairs. Other features of the physical model, such as being made partly of tin, have no counterpart in the abstract model.

7.

OBJECTIONS

Before moving on to more general considerations I will pause to consider two objections to the whole idea of distributed cognition.

7.1

The Individuation Problem

Once we allow that cognition extends beyond the boundaries of individuals, where do we stop (Clark 1997, Ch. 10)? How do we individuate a cognitive system from its surroundings or, indeed, from other cognitive systems? Imagine two developmental biologists standing in front of a chalkboard in an office talking and drawing pictures of a possible regulatory mechanism. From a distributed cognition point of view, the cognitive system developing models of the mechanism (both pictorial and abstract) includes at least the two scientists together with the sketches on the board. But does it include the chalkboard itself or, more questionably, the air in the office that sustains their lives and makes possible their verbal communication?

As will be noted below, the idea of distributed cognition is typically associated with the thesis that cognition is embodied. In more standard terms, one cannot abstract the cognition away from its physical implementation. Adopting this point of view, we should say that the chalkboard and the air in the room are indeed part of this distributed cognitive system.

But now suppose that our two biologists work in different parts of the world and communicate electronically through their respective computers. Suppose part of the transmission is by means of satellite. One might easily agree that their computers are part of the cognitive system, but what about the transmission lines and the satellite? Yes, these too should count as part of the relevant cognitive system. But there are many things that clearly are not, for example, their automobiles and their homes, the supermarket down the street, and so on. It may sometimes be difficult to draw a sharp boundary between what is or is not part of a particular cognitive system, but that is true of many things.
It is, I think, more interesting to distinguish those parts of a cognitive system that differentially affect the quantity or quality of the intended output from those parts that merely sustain the system, making it possible for it to do anything at all. Most likely the models considered by our two biologists would be no different if they were sitting together with a large sheet of paper rather than standing in front of a chalkboard. I suspect it is this fact that fuels the initial feeling that the chalkboard is not part of the relevant cognitive system. But one must be careful not to discount too quickly the relevance of various components of a cognitive system. One might initially think that the satellite does not differentially affect the output of the system, but this is at least questionable. If our two biologists had to communicate by regular mail rather than by near instantaneous satellite transmission, this might well influence which model they eventually pursue in their subsequent work.

7.2

Minds

In countenancing extended cognition, are we not thereby committed to the existence of something like “extended minds” or even “group minds”? And are these not fairly disreputable ideas? Now it is true that some advocates of distributed cognition are tempted by the notion of distributed minds (Clark 1997, Ch. 9). The presupposition behind this move seems to be that cognition is necessarily a property of minds. So, where there is cognition, there is also a mind. But I do not think that one is forced to take this route. I do not see why one cannot maintain an ordinary conception of mind, which restricts it to creatures with brains, that is, humans and perhaps some animals. Much cognition does, therefore, involve minds. Perhaps we could also agree that at least all of our currently existing cognitive systems have a human somewhere in the system. So every cognitive system has a mind somewhere. But as a concept in cognitive science, cognition need not be restricted exclusively to places where there are also minds. There is no compelling reason why we cannot extend the scientific concept of cognition to places where there are no minds. This is a question for which I do not think there is now a determinate answer. Rather, the question will be settled, if it is ever settled at all, in terms of considerations regarding how best further to develop a science of cognition.

8.

CONNECTIONS

Although I have focused on just one source of recent interest in distributed cognition, the idea is much more widespread, as one can learn from Andy Clark’s recent Being There: Putting Brain, Body, and World Together Again (1997). Lucy Suchman’s Plans and Situated Actions (1987), The Embodied Mind by Varela, Thompson, and Rosch (1993), and Edwin Hutchins’ Cognition in the Wild (1995) are among earlier influential works. A common theme in these works is that the human brain evolved primarily to coordinate movements of the body, thereby increasing effectiveness in activities such as hunting, mating, and rearing the young. Evolution favored cognition for effective action, not for contemplation. Cognition is embodied. This point of view argues against there being a single central processor controlling all activities and for there being many more specialized processors. Central processing is just too cumbersome for effective action in the wild. Thus, an emphasis on distributed cognition goes along with a rejection of strongly computational approaches to cognition. In fact, some
recent advocates of Dynamic Systems Theory have gone so far as to argue against there being any need at all for computation as the manipulation of internal representations (Thelen and Smith 1994).

Andy Clark, in particular, adopts the notion of ‘scaffolding’ to describe the role of such things as diagrams and arithmetical schemas. They provide support for human capabilities. So the above examples involving multiplication and the Pythagorean theorem are instances of scaffolded cognition. Such external structures make it possible for a person with a pattern-matching and pattern-completing brain to perform cognitive tasks that would otherwise be out of reach.

The most ambitious claim made on behalf of distributed cognition is that language itself is an elaborate external scaffold supporting not only communication, but thinking as well (Clark 1997, Ch. 10). During childhood, the scaffolding is maintained by adults who provide instruction and oral examples. This is seen as analogous to parents supporting an infant as it learns to walk. Inner speech develops as the child learns to repeat instructions and examples to itself. Later, thinking and talking to oneself (often silently) make it seem as though language is fundamentally the external expression of inner thought, whereas, in origin, just the reverse is true. As argued by Vygotsky (1962) already in the 1920s, the capacity for inner thought expressed in language results from an internalization of the originally external forms of representation. There is no ‘language of thought’. Rather, thinking in language is a manifestation of a pattern-matching brain trained on external linguistic structures (see also Bechtel, 1996).

This distributed view of language implies that cognition is not only embodied, but also embedded in a society and in a historically developed culture. The scaffolding that supports language is a cultural product (Clark 1997).
These views have recently been developed under the title of ‘cognitive linguistics’ or ‘functional linguistics’. Some advocates of cognitive linguistics, such as Tomasello, emphasize comparative studies of apes and humans (Tomasello 1996). The most striking claim is that the major difference between apes and human children is socialization. Of course there are some differences in genotype and anatomy, but these are surprisingly small. The bonobo, Kanzi, raised somewhat like a human child by Duane Rumbaugh and Sue Savage-Rumbaugh (1993), is said to have reached the linguistic level of a two-year-old human child. Tomasello argues that what makes the difference between a natural and an enculturated chimpanzee is developing a sense of oneself and others as intentional agents. Natural chimpanzees do not achieve this, but enculturated ones can. From the standpoint of computational linguistics, there has always been a question of how the neural machinery to support the necessary computations
could possibly have evolved. From the standpoint of cognitive linguistics, this problem simply disappears. Language is not fundamentally computational at all, but the product of a pattern-matching neural structure, which biologically could evolve, supported by an elaborate scaffolding of social interaction within an established culture. Since I am myself sympathetic with these ways of thinking, it should now be clear why I earlier expressed reluctance to regard a solitary computer as a cognitive system. A computer may be embodied, but I am reluctant to call it a cognitive system because it fails to have the right kind of body for interacting with the world in appropriate ways. Syntactic inputs and outputs seem to me not to be enough. On the other hand, some present-day robots in some environments might be counted as cognitive systems roughly at the level of insects or other simple organic creatures (Brooks, 1994). Chimps surely should be counted as cognitive systems and even sometimes as parts of distributed cognitive systems.

9.

CONCLUSION

In conclusion, I would merely like to reiterate that the question as to exactly what is to be counted as a cognitive system or a distributed cognitive system seems not to have a determinate answer. Various groups in the cognitive sciences and related fields have strong interests in making the answer come out one way or another. It is not guaranteed that a general consensus will ever be achieved, and it is possible that the question will eventually cease to be of any interest to anyone. In the meantime, individuals and groups remain free to promote whichever answer they think best.

REFERENCES

Barwise, J., and Etchemendy, J., 1996, Heterogeneous logic, in: Logical Reasoning with Diagrams, G. Allwein and J. Barwise, eds., Oxford University Press, New York, pp. 179-200.
Bechtel, W., 1996, What knowledge must be in the head in order to acquire knowledge?, in: Communicating Meaning: The Evolution and Development of Language, B. M. Velichkovsky and D. M. Rumbaugh, eds., Lawrence Erlbaum, Mahwah, NJ, pp. 45-78.
Brooks, R., 1994, Coherent behavior from many adaptive processes, in: From Animals to Animats 3, D. Cliff et al., eds., MIT Press, Cambridge, MA.
Chandrasekaran, B., Glasgow, J., and Narayanan, N. H., 1995, Diagrammatic Reasoning: Cognitive and Computational Perspectives, MIT Press, Cambridge, MA.
Clark, A., 1997, Being There: Putting Brain, Body, and World Together Again, MIT Press, Cambridge, MA.
Giere, R. N., 1996, Visual models and scientific judgment, in: Picturing Knowledge: Historical and Philosophical Problems Concerning the Use of Art in Science, B. S. Baigrie, ed., University of Toronto Press, Toronto, pp. 269-302.
Hutchins, E., 1995, Cognition in the Wild, MIT Press, Cambridge, MA.
Kosslyn, S. M., 1994, Elements of Graph Design, Freeman, New York.
McClelland, J. L., Rumelhart, D. E., and the PDP Research Group, eds., 1986, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, Cambridge, MA.
Savage-Rumbaugh, S., et al., 1993, Language comprehension in ape and child, Monographs of the Society for Research in Child Development, 58, Nos. 3-4.
Simon, H. A., 1978, On the forms of mental representation, in: Perception and Cognition: Issues in the Foundations of Psychology, C. W. Savage, ed., Minnesota Studies in the Philosophy of Science, vol. 9, University of Minnesota Press, Minneapolis, pp. 3-18.
Simon, H. A., 1995, Foreword, in: Diagrammatic Reasoning: Cognitive and Computational Perspectives, B. Chandrasekaran et al., eds., MIT Press, Cambridge, MA, pp. x-xiii.
Suchman, L., 1987, Plans and Situated Actions, Cambridge University Press, Cambridge, UK.
Tennant, N., 1986, The withering away of formal semantics, Mind and Language, 1:302-318.
Thelen, E., and Smith, L., 1994, A Dynamic Systems Approach to the Development of Cognition and Action, MIT Press, Cambridge, MA.
Tomasello, M., 1996, The cultural roots of language, in: Communicating Meaning: The Evolution and Development of Language, B. M. Velichkovsky and D. M. Rumbaugh, eds., Lawrence Erlbaum, Mahwah, NJ, pp. 275-307.
Varela, F. J., Thompson, E., and Rosch, E., 1993, The Embodied Mind, MIT Press, Cambridge, MA.
Vygotsky, L. S., 1962, Thought and Language, MIT Press, Cambridge, MA.
Wang, D., Lee, J., and Hervat, H., 1995, Reasoning with diagrammatic representations, in: Diagrammatic Reasoning: Cognitive and Computational Perspectives, B. Chandrasekaran et al., eds., MIT Press, Cambridge, MA, pp. 501-534.
