The Concept of Representation in Cognitive Neuroscience

Alfredo Pereira Jr.
Instituto de Biociencias, UNESP (Universidade Estadual Paulista), Campus de Rubiao Junior, 18600-000, Botucatu, Sao Paulo, Brasil
Email: [email protected]

Abstract

This article discusses the possible representational nature of two brain cognitive functions: perceptual and executive. Assuming the Newellian definition of representational processes as those that establish an isomorphic relation between two structures, I claim that perceptual processes generate only a partial correspondence (between stimuli properties and brain states) and therefore should not be properly conceived as representational. On the other hand, executive processes encompass the combination of copies (i.e., representations) of perceptual patterns, generating new patterns that subserve behavior. In summary, I criticize the notion of perceptual representations, and propose that brain representational processes are related to executive functions, having a pragmatic dimension.

Are Perceptual Processes Representational?

Different philosophical concepts of representation have been proposed in the last centuries. For the sake of simplicity, here I assume that an operation of representation refers to a relation between two or more systems, where the state or organization produced in one system represents the state or organization previously obtained in another system. Following the proposal of Newell (1990), I understand the relation of representation as an isomorphism between two structures, such that information about properties of the representing structure should allow an inference about the represented structure. In the study of perception, a cognitive system B could be said to represent an external stimulus A, or more precisely a certain state or class of states of B would be assumed to represent a state or class of states of A. In this case, B represents A by means of "internally" reproducing (i.e., with its own resources) properties of A.

This assumption leads to the following questions: does the representation relation imply that B reproduces all properties of A? Or does B reproduce only the "essential" properties of A? Or isn't the extent to which B reproduces A's properties crucial for the relation of representation? If we want to assure that from the knowledge of the state obtained by B an observer would be able to identify A, then the first alternative—B representing all properties of A—would be the most adequate. Imagine a B-system that reproduces only three properties of an A-system: being yellow, round and hot. These properties are not sufficient for the identification of A, since they allow A to be (among other things) the sun or the yolk of a boiled egg. However, such an exhaustive conception of the representation relation is not effectively applied in neuroscience and empirical studies of cognition, for the reason that the set of all properties of an empirical system is too large to be enumerated, or may even be considered infinite. The second possibility, namely B reproducing only the "essential" properties of A, brings into the scene the insoluble problem of defining a general "ontology" (a theory about the properties of all real and possible beings) shared by both systems. This general ontology would be necessary to verify whether the properties reproduced by B are necessary and sufficient to identify A. However, biological systems have limited resources, and a particular perspective relative to the stimuli. The properties that they reproduce are those that fall within such limitations and particular perspective. Such constraints suggest that B reproduces only the properties sufficient for the identification of A, given the present cognitive resources of B. This is a fully acceptable conception of a cognitive relationship between two systems, but may it properly be called a "representational" relation? If the properties of B come into play (as they must, if we take into account the existence of selective biases, including attentional processes), the resulting cognition will be a synthesis of properties of A and the properties of B that guide the choice. If B meets A in a different context, or with a different internal background, the resulting cognition will be different. In this case, perceptual processes would not be properly representational, but would instead generate a partial correspondence between A and B.

A possibility still open for the representationalist view of perception would be to take into account all the reasonable contexts where B meets A, and all the reasonable backgrounds that B may have, and to define an invariant set of properties of A that B will reproduce in all situations. For the sake of argument, we can presume that such a definition would be possible in principle. In this case, a distinction is established between the set of all the modifications of B that are elicited by A, and the subset of modifications of B that are invariant under variation of B's contextual conditions and cognitive backgrounds. The conclusion would be that the invariant subset constitutes the representation of A by B. The straightforward objection is that a neuroscientist measuring a B-system would not be able to distinguish the invariant subset from all the other characteristics that come from B itself. The problem also appears in the discussion of the notion of "informational content". Dretske (1981) required that the informational state of the receptor system have a "lawlike" correlation with the informational state of the source, and at the same time that the receptor's previous knowledge about the source play a role in the specification of the message. Possibly such a dissociation between what comes from the source and the previous knowledge of the receptor can be precisely achieved only in formal models.
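The yellow/round/hot example above can be made concrete with a small sketch (the objects and property sets below are invented for illustration; Python is used here only as a notation): a B-system that reproduces only a few properties of A leaves several distinct A-systems compatible with its state, so knowledge of B's state does not license an inference to a unique A, which is what Newell's isomorphism criterion would require.

```python
# Hypothetical A-systems and their properties (illustrative data only).
things = {
    "sun":        {"yellow", "round", "hot", "luminous"},
    "egg_yolk":   {"yellow", "round", "hot", "edible"},
    "billiard_1": {"yellow", "round", "hard"},
}

# The three properties that the B-system reproduces "internally".
reproduced = {"yellow", "round", "hot"}

# Which A-systems are compatible with B's state?
candidates = [name for name, props in things.items()
              if reproduced <= props]      # subset test
print(candidates)   # ['sun', 'egg_yolk'] -- B's state does not identify A
```

The ambiguity disappears only if B reproduces a property set that exactly one candidate possesses, which is the exhaustive conception the text argues is empirically unworkable.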
Empirical cognitive sciences deal with states of B-systems where properties of A-systems are inextricably entangled with properties of B itself. The reasoning seems to imply the conclusion that empirically studied cognitive systems, when perceiving an external stimulus, produce partial representations of the stimulus. However, the very idea of a partial representation is self-contradictory: if some system B partially represents a system A, then it is actually representing a system C that has the same state/organization as the subsystem of A that it represents. Therefore the right conclusion should be that recognition processes are not properly representational. Such a conclusion would help explain why the use of the term "representation" has led to various ambiguities in the history of the cognitive and brain sciences. I propose to replace the idea of perceptual representations with the idea of a partial correspondence between properties of stimuli and brain states. "Partial correspondence" is a technical term used in the semantic approach to the philosophy of science proposed by Da Costa and French (1990): a partial correspondence between two structures is an isomorphism between parts of the structures that leaves the other parts unrelated. Such an idea of partial correspondence is consistent with current studies of perceptual processes, which assume only a probabilistic correlation between properties of stimuli and neuronal activity: "…the experimenter chooses some particular time dependent sensory stimulus… and then examines the spike trains produced in response to repeated presentations of this stimulus. Since there is no unique response, the most we can say is that there is some probability of observing each of the different possible responses. This is a conditional probability distribution… We can describe the spike train in terms of the arrival times of each spike… signals are chosen from some probability distribution… the actual functional form of this distribution embodies all of the structure in the world, such as the persistence of sensory qualities over time and the smoothness of motion… the most complete description of the neuron in the sensory world would be to give the joint distribution of signals and spike trains… This distribution measures the likelihood that, in the course of an experiment or in the life of the animal, we will observe both the stimulus… and the spike train" (Rieke et al. 1997, pp. 21–22). A large number of neuroscientific studies of perception have focused on the visual system.
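The probabilistic picture quoted from Rieke et al. can be sketched numerically (the stimulus labels and spike counts below are synthetic, invented purely for illustration): repeated presentations of the same stimulus yield different responses, so the most one can extract is a conditional distribution of responses given the stimulus, not a unique stimulus-to-response mapping.

```python
from collections import Counter

# Synthetic spike counts over repeated presentations of two stimuli.
trials = {
    "grating": [3, 4, 3, 5, 4, 3, 4, 4, 3, 5],
    "blank":   [0, 1, 0, 0, 2, 1, 0, 1, 0, 0],
}

def conditional_distribution(responses):
    """Estimate P(response | stimulus) from repeated presentations."""
    counts = Counter(responses)
    n = len(responses)
    return {r: c / n for r, c in sorted(counts.items())}

for stimulus, responses in trials.items():
    print(stimulus, conditional_distribution(responses))
# There is no unique response; each possible response has a probability.
```

In this picture the brain state stands in a partial, probabilistic correspondence with the stimulus rather than mirroring it, which is all the "partial correspondence" reading of perception requires.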
A methodological assumption frequently made is that the mapping of patterns from the stimulus onto primary cortical areas is isomorphic to retinal locations (such an isomorphism is usually called "retinotopy"). Results of single-cell measurements in non-human primates have revealed a columnar specialization in the primary visual cortex, where the features of the stimulus are assumed to be encoded. A second perceptual operation, recognition, is considered to be performed through the dorsal and ventral pathways leading to associative areas, respectively the posterior parietal cortex (recognition of spatial location, or "where" an object is located) and the inferotemporal cortex (recognition of form, or "what" is being seen) (see Ungerleider and Haxby 1994; and a discussion in Milner and Goodale 1995). The large number of studies of the "what" and "where" pathways has not led to a consensual understanding of how recognition processes are performed in these associative areas. The theoretical paradigm obtained from studies of primary sensory cortex—retinotopic mapping—doesn't seem to apply to them. An important result obtained by Tanaka et al. (1990) is that, contrary to the model assumed for primary areas, cell assemblies in the inferotemporal cortex selectively respond to complex forms (the paradigmatic case being face recognition) instead of simple patterns. However, Tanaka and his group have developed a model (see Tanaka 1993) that attempts to reconcile the data obtained in the inferotemporal cortex with the columnar model based on the primary visual cortex. In this model, inferotemporal columns are assumed to be selective for entire objects, but not for particular features. This approach inspired computational models where the correspondence between the stimulus patterns and the "second order" patterns recognized by the associative cortex is conceived as a "representation of prototypes". However, the idea seems to be logically equivalent to the self-contradictory notion of "partial representation", and would better be replaced by "partial correspondence". Despite the progress in the study of the visual system in the last decades, it is usually agreed that our present knowledge doesn't support strong claims about the cognitive functions of the visual network. The problems observed in the study of recognition in the ventral pathway were summarized by Young (1995): "the functional details of ascending, descending, local and callosal interactions in this pathway are presently patchy. Similarly, the functional relations between ventral stream areas and other structures, some of which appear necessary for recognition and discrimination performance, are uncertain. As for how cells in this area participate in object recognition, information on processing stimuli other than faces is difficult to interpret" (p. 473). In this situation, the claim that perception is representational goes far beyond the available evidence. Alternatively, the more modest claim of a mere partial correspondence, besides being epistemologically attractive, also accounts for the adaptive aspect of perception and for the limitations and biases of biological cognitive systems.


Representations in the Brain

The distinction between perceptual recognition and representation is fundamental for the epistemology of cognitive neuroscience. Recognized patterns are formed in the perceptual subsystems of the brain, and become available as matrices for further combination. They are not copies or representations of stimulus patterns; they constitute a "synthesis" of stimulus patterns and endogenous patterns resulting from genetic determination and learning. A formal account of such a "synthesis" is given by Adaptive Resonance Theory, developed by S. Grossberg and collaborators (see e.g. Carpenter et al. 1992). However, the concept of representation remains central in the epistemology of cognitive neuroscience. My claim that perception is not representational only changes the domain of application of the concept. The logical, generative capacities of the animal brain derive from processes of combination and recombination of perceived patterns. In order to perform logical operations upon such patterns, the brain has to copy (i.e., to represent) them. Therefore, in this view representational processes are internal to the brain. The copying and recombination of copies is carried out by the "executive" system, which is constituted by the "associative" areas of the neocortex (prefrontal, posterior parietal, and inferotemporal cortex), a large part of the limbic system (hippocampus and cingulate gyrus), and projections from subcortical structures (thalamus and basal ganglia). The prefrontal cortex is connected with all these structures, having a coordinative role in many cognitive functions, as proposed by neuroimaging studies that have confirmed Baddeley's model of working memory (Baddeley 1986; D'Esposito and Grossman 1996). In primates the executive system encompasses the frontal-limbic complex, but in other mammals with an undeveloped prefrontal cortex it corresponds basically to the hippocampal system.
I propose to use the term representation in the epistemology of cognitive neuroscience exclusively for processes of copying and recombination of electrochemical patterns internal to the brain, thus avoiding sterile discussions about the brain representing the external world. Following this proposal, I will use the concept of representation where it really seems to be necessary and adequate, i.e. to account for the internal manipulation of information that makes possible executive functions such as logical inference and the planning activity that supports goal-directed behavior. Representational processes in this sense are not formal or "symbolic" processes, for two strong reasons: the isomorphic relation is established between electrochemical patterns, and such representations are necessary for the performance of logical inferences subservient to the planning and control of behavior. Peschl and Riegler (1999) make a similar point, in the Introduction to this volume, by criticizing the assumption of the "linguistic transparency" of neuronal activity. I will follow this line of reasoning by proposing the idea of an electrochemical (non-linguistic) "code", leaving the discussion of the pragmatic dimension of brain representations for the next section. Scientific descriptions of informational processes usually include a reference to a code. In the study of perceptual processes, neuroscientists often refer to informational patterns reaching peripheral sensors and being converted into a "central code". A famous paragraph that illustrates this statement is found in the discussion of the results obtained by Lettvin et al. (1959) from the study of the optic nerve of the frog: "What are the consequences of this work? Fundamentally, it shows that the eye speaks to the brain in a language already highly organized and interpreted, instead of transmitting some more or less accurate copy of the distribution of light on the receptors" (p. 251). Although the notion of a code originated in linguistic studies (such as deciphering lost languages or encoding secret messages for military purposes), by now it should be clear that the notion has broader applications. Many biologists—including Maturana, one of the authors of that influential study of the frog's visual system—have worried about an abusive use of terms like "information" and "code".
Nevertheless, no satisfactory substitute has been proposed—the same situation observed in molecular genetics, concerning the concepts of "genetic information" and "genetic code". Instead of eliminating the concept or creating another term for the same concept, I propose to discuss a biologically sound notion of "code". Of course, biological codes are not linguistic in the sense of a natural language. Neither are they representational symbols, like the notation used in formal logic and computer programming. However, the analogy with computer codes is useful to illustrate the importance of the notion of a code. In the computational paradigm, a code is defined as a set of instructions implemented in a material system that allows the reproduction and combination of informational patterns. The necessity of referring to a code is clearly visible in descriptions of the working of computers; e.g., messages are recorded on disks following a set of instructions that allow the message both to be transferred to and retrieved from the disk. It is a tough task to understand what computers do by looking at their structure only; such understanding is much easier when the observer knows the program that the machine is running. Analogously, understanding what neurons collectively do (e.g., understanding recordings of the electrical activity of neurons) would become easier with the use of good theories about how neuronal activity processes information. In the context of neuroscience, an adequate concept of code is required, since the ideas of "instruction" or "program", as well as "implementation", are bound up with the engineering procedures that constitute an essential part of computer science. In the biological approach to codes, it should be emphasized that a neuronal code—if it exists—must have been built by evolution, like the genetic code. Products of evolution are strongly determined by the initial and boundary conditions of the ecological domain where and when the evolutionary events happened. This historical origin makes a purely physical description of biological codes difficult, if not impossible. Even if one attempts to construct a physical explanation of the origin of biological codes, the explanation will be based on theoretical hypotheses about possible evolutionary scenarios, instead of precise initial and boundary conditions.
One way to overcome the difficulty of a physical explanation of biology is to assume that evolution "engineered" (of course, without any prior purpose) complex signaling systems that control diverse biological functions, and then to proceed to study such systems. Cognitive functions in the brain are likely to be supported by a complex and specialized signaling system that can be scientifically studied. I propose the idea that biological codes are combinatorial systems of signals, having a chemical nature in all cells and, in neurons only (due to well-known membrane specializations), an electrical nature as well. The key word in the definition is "combinatorial", implying two properties: stability of the elements and compositionality.


Elemental stability means that the chemical patterns that count as elementary signals in the system are thermodynamically stable configurations (or "attractors") of macromolecules, able to engage in stereotyped patterns of reactivity. In the case of a neuronal code, electrical patterns of activity in single neurons or in local populations are also assumed to be decomposable into elementary types of signals. Compositionality means that complex combinations of signals generate biological functions that can be reliably described as the product of the properties of the components. This is a controversial issue, because in self-organizing systems—such as cells and neuronal networks—the interactions of the parts of the system are likely to produce emergent properties that cannot be reduced to the intrinsic properties of the parts. The assumption of a neuronal code implies that the complex functions of neuronal networks could be reliably explained in terms of simpler neuronal functions. In this sense, the existence of a neuronal code implies that neuronal activity has detectable regularities, such that the combination of such and such transmitters and receptors in such and such synapses is likely to produce such and such patterns of electrical activity, putatively supporting such and such cognitive functions. A philosophical objection that can be made to the idea of a neuronal code is that it would imply the existence of a "homunculus" or a "phantom", an ethereal entity in the brain that "reads" the messages encoded by neuronal networks. Such an interpretation is by no means necessary. Even if the idea that neurons encode information is accepted, the receiver of the information is assumed to be another part of the brain and not an immaterial entity. In other words, the neuronal code would be useful for one part of the nervous system to encode information for other parts to receive.
At the end of the process, besides numerous feedback pathways related to cognitive processes like attention and consciousness, there is only the production of behavior. Of course, once actions are performed, the same organism perceives their effects and interprets its own internal states that generated those effects. The existence of external feedback loops (called "reafference") by which an organism perceives and interprets its own actions is likely to be related to diverse cognitive phenomena, including intentionality and consciousness. However, the presence of a "homunculus" is not required at any moment of this process.
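The combinatorial notion of a code proposed above, stable elementary signals plus a compositional rule, can be given a minimal sketch (the signal labels, weights, and threshold rule are all invented for illustration): one part of a system encodes a combination of elementary signals, another part decodes it into an outcome, and no homunculus appears anywhere in the loop.

```python
# Toy combinatorial code: stable elementary signal types (hypothetical labels)
# and a compositional rule (here, summation of signed weights).
WEIGHTS = {"glu": +1, "gaba": -1, "ach": 0}

def encode(pattern):
    """One subsystem emits a combination of elementary signals."""
    return tuple(sorted(pattern))

def decode(signal):
    """Another subsystem reads the combination and produces an outcome;
    the outcome is fully determined by the components of the signal."""
    net = sum(WEIGHTS[s] for s in signal)
    return "fire" if net > 0 else "silent"

msg = encode(["glu", "glu", "gaba"])
print(decode(msg))   # 'fire': the outcome follows from the components
```

The "receiver" here is just `decode`, another part of the same system, which mirrors the point in the text: accepting a neuronal code commits one only to part-to-part signaling, not to an immaterial reader.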


Another important aspect of the study of neuronal coding is the methodological distinction between its electrical and chemical aspects. The majority of studies have focused on electrical patterns, while few researchers have looked into the chemical aspects of the code (although a large quantity of data is available from neurobiochemistry). Two reasons why most efforts have been concentrated on the electrical aspect are that it is simpler and that the technology for data acquisition is available, e.g., single-cell recordings (with invasive electrodes), optical imaging of neuronal tissue, and multi-channel EEG (electroencephalography) and MEG (magnetoencephalography). The technology for chemical analysis of the synapse in the living brain is only beginning to be developed and may bring new contributions in the future; at this moment, the scope of this kind of research is restricted to data obtained in vitro. In fact, both lines of study should converge, because the electrical activity of the neuron is controlled by chemical reactions at the synapses. However, since an electrical pattern may be produced by different chemical combinations, the chemical aspect of the neuronal code is likely to be more complex than the electrical one.
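The closing observation, that one electrical pattern may be produced by different chemical combinations, amounts to a many-to-one mapping, which can be sketched as follows (the chemical states and the threshold read-out are invented for illustration): when several distinct chemical states yield the same electrical record, the electrical record alone cannot be inverted to recover the chemical state.

```python
# Hypothetical synaptic states: (glutamate level, GABA level) labels.
chemical_states = [
    ("glu_high", "gaba_low"),
    ("glu_mid",  "gaba_off"),
    ("glu_high", "gaba_off"),
]

def electrical_pattern(state):
    """Toy read-out: net excitation at or above threshold yields a burst."""
    excitation = {"glu_high": 2, "glu_mid": 1}.get(state[0], 0)
    inhibition = {"gaba_low": 1}.get(state[1], 0)
    return "burst" if excitation - inhibition >= 1 else "quiet"

produced = {s: electrical_pattern(s) for s in chemical_states}
print(produced)
# All three chemical states map to 'burst': the electrical record alone
# cannot distinguish them, so the chemical side carries more structure.
```

This is only an analogy, but it makes explicit why the chemical aspect of a neuronal code would be at least as complex as the electrical one.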

The Pragmatic Dimension of Brain Representations

Previously recognized patterns are recombined, generating composite patterns that support goal-directed behavior. The combined patterns produced by the executive system are pragmatic representations, in the sense that they represent actual and possible actions of the organism in the environment. The represented objects are not abstract or "universal" sets of properties, but objects of actual or possible actions. In retrospect, it is remarkable that such a concept of pragmatic representations was discussed by one of the fathers of the artificial intelligence paradigm, Allen Newell. In his William James lectures (Newell 1990), after presenting the above-mentioned concept of representation, he discussed the pragmatic dimension of representation in brains and machines. First he mentioned the "analogic" view of representation: "consider a task for the organism that requires a representation—that is, that requires that the organism produces a response function that depends on aspects of the environment that are not immediately available. Then, the problem for the organism (or for evolution, it makes no difference) is to find the right kind of material with the right properties for encoding and decoding and the right dynamics for transformation… However, there is a difficulty with this approach. The more capable the organism (ultimately ourselves), the more representational schemes are needed" (Newell 1990, p. 60). This difficulty would lead to the emergence of computational systems: "there exists an alternative path to obtain highly complex adaptive systems, which can be called the Great Move. Instead of moving toward more and more specialized materials with specialized dynamics to support an increasingly great variety and intricacy of representational demands, an entirely different turn is possible. This is the move to using a neutral, stable medium that is capable of registering variety and then composing whatever transformations are needed to satisfy the representation laws" (p. 61). The electrochemical signaling system of the brain, in neuronal networks with feedforward and feedback architectures that combine patterns to support behavior, would fit perfectly into Newell's picture. However, Newell's approach still fails to recognize the nature of pragmatic representations. He does not reject the idea of a representational relation between the organism and the environment. Although he mentions the role of selective attention (Newell 1990, p. 63), he is still referring to the representation of the "external world". A second Great Move is necessary, in order to focus on the representation of the intended actions of the organism in the world. Andy Clark, following the proposals of researchers in robotics (Brooks 1991) and computational neuroscience (Ballard 1991), proposed a new step in this direction.
He began by claiming that "to the extent that the biological brain does trade in anything usefully described as 'internal representations', a large body of those representations will be local and action-oriented rather than objective and action-independent" (Clark 1996). This approach may help to solve difficulties met in classical AI studies, because "the classical emphasis neglects the pervasive tendency of human agents to actively structure their environments in ways that will reduce subsequent computational loads" (p. 150). Classical approaches "first create a full, objective world model and then define a costly procedure that (e.g.) takes the model as input and generates food-seeking actions as output. Instead, the system's early encodings are already geared toward the production of appropriate action" (p. 152).


Epistemological Implications

In summary, I have proposed an epistemological distinction between perceptual recognition and representational processes in the brain. Perceptual recognition would better be described in terms of the generation of partial correspondences between brain states and the structure of selected parts of the environment. Representational processes would be limited to the copying and recombination of perceptual patterns, generating new and more complex patterns that support executive functions. This distinction may help answer the title question of this volume, "does representation need reality?". In one respect, brain representations need reality, since they are built from recognized patterns that partially correspond to the structure of stimuli. Environmental stimulation is not a mere "perturbation", but the material from which biological beings construct their "proper worlds" (Uexküll 1934). Different organisms pick up different parts of a common environment to construct representations of their actions in that environment. In another respect, brain representations don't need reality, since representations don't need to have a global correspondence (a general isomorphism) with the structure of the environment. Different organisms construct different representations of a common environment, based on genetic, attentional and pragmatic factors. This view runs contrary to realism in the sense of a "direct perception" view (see Gibson 1979), but is not anti-realist, because it assumes a partial correspondence between stimuli and recognized patterns. I agree with the criticism of the classical, "referential" concept of representation, as discussed by Peschl and Riegler (1999) in the Introduction to this volume. Representations constructed by the brain are likely to be "system-relative" and not determined by the environmental state. In neuroscientific research this proposition is supported by the fact that correlations between neuronal activity and properties of stimuli have been found mostly for neurons at the periphery of the nervous system and in sensory cortex. Recordings of the activity of central neurons usually cannot be interpreted in terms of properties of stimuli. Such a limitation has led neuroscientists to model large recurrent neuronal networks—as in the "working memory" model—where brain subsystems are assumed to operate upon the products of the activity of other brain subsystems.


The concept of representation used in such models cannot be the classical referential one, although the majority of neuroscientists do not seem to have a clear idea of an adequate substitute. Whether recurrent architectures imply some kind of "operational closure" (Maturana and Varela 1979), or anti-realist conclusions, are questions open to debate. The notion of "self-referential representation", mentioned by Peschl and Riegler (1999), may be misleading if interpreted in the sense that brain states could refer to themselves. On the contrary, brain representations seem to be hetero-referential, in the sense that they usually refer to the world external to the brain. This is one of the possible cognitive implications of the philosophical concept of intentionality proposed by Brentano and Husserl. Brain representations are (almost) always representations of something external to the central nervous system. Even fictitious representations are localized outside the brains that represent them (of course there are exceptions, e.g. when a healthy person imagines a tumor in his/her own brain). My first claim is that representations are constructed internally to the brain; on this claim we agree. The second claim is that they refer to, or are about, the world external to the brain. It is not that they represent the world external to the brain. Brains are embedded in organisms. Organisms must carry out actions in the environment in order to survive; brains represent the actions to be performed. Actions are not represented in an empty space. Therefore, together with the representation of the actions comes the reference to the world where the actions are likely to occur (the situation may be regarded as similar to Heidegger's "Being in the World"). Discussions in cognitive science usually assume that representations should be understood in the sense of a copy or reproduction of properties of stimuli.
In my view, representations constructed by brains are not copies of the environment, but quite partial (in both senses of the word) re-creations of it, i.e. non-exhaustive re-creations shaped by the particular perspective of the organism. In this sense, I believe my claims are not contradictory. How can hetero-referentiality be reconciled with constructivism, i.e. the idea that representations are constructed by the brain and not determined by the environment? The solution is likely to include the hypothesis that brain representations are closely related to the planning and control of actions, and therefore should be referred to the space and time of action. A detailed discussion of this theme extends beyond the limits of this article.

Acknowledgments

I thank Prof. Stephan Chorover (MIT), FAPESP (SP, Brasil), and Paul Cisek (for a series of notes on the Psyche discussion list).

References

Baddeley, A. D. (1986) Working Memory. Oxford: Oxford University Press.
Ballard, D. H. (1991) Animate Vision. Artificial Intelligence 48: 57–86.
Brooks, R. A. (1991) New Approaches to Robotics. Science 253: 1227–1232.
Carpenter, G. A., Grossberg, S., Markuzon, N., Reynolds, J. H. & Rosen, D. B. (1992) Attentive Supervised Learning and Recognition by an Adaptive Resonance System. In: Carpenter, G. A. & Grossberg, S. (eds.) Neural Networks for Vision and Image Processing. Cambridge: MIT Press.
Clark, A. (1996) Being There. Cambridge: MIT Press.
Da Costa, N. C. A. & French, S. R. D. (1990) The Model-Theoretic Approach in the Philosophy of Science. Philosophy of Science 57: 248–265.
D'Esposito, M. & Grossman, M. (1996) The Physiological Basis of Executive Functions and Working Memory. The Neuroscientist 2: 345–352.
Dretske, F. (1981) Knowledge and the Flow of Information. Cambridge: MIT Press.
Gibson, J. J. (1979) The Ecological Approach to Visual Perception. Boston: Houghton-Mifflin.
Lettvin, J. Y., Maturana, H., McCulloch, W. & Pitts, W. (1959) What the Frog's Eye Tells the Frog's Brain. In: McCulloch, W., Embodiments of Mind. Second Printing (1989) Cambridge: MIT Press.
Maturana, H. R. & Varela, F. J. (1979) Autopoiesis and Cognition: The Realization of the Living. Boston: Reidel.
Milner, A. D. & Goodale, M. (1995) The Visual Brain in Action. Oxford: Oxford University Press.
Newell, A. (1990) Unified Theories of Cognition. Cambridge: Harvard University Press.
Peschl, M. & Riegler, A. (1999) Does Representation Need Reality? In: Riegler, A. & Peschl, M. (eds.) Understanding Representation in the Cognitive Sciences. New York: Plenum Press (this volume).
Rieke, F., Warland, D., Steveninck, R. R. & Bialek, W. (1997) Spikes. Cambridge: MIT Press.
Tanaka, K., Saito, H., Fukada, Y. & Moriya, M. (1990) Integration of Form, Texture, and Color Information in the Inferotemporal Cortex of the Macaque. In: Iwai, E. & Mishkin, M. (eds.) Vision, Memory and the Temporal Lobe. New York: Elsevier.
Tanaka, K. (1993) Neuronal Mechanisms of Object Recognition. Science 262: 685–688.
Uexküll, J. von (1934) A Stroll Through the Worlds of Animals and Men. In: Schiller, C. H. & Lashley, K. S. (1957) Instinctive Behavior. New York: International Universities Press.
Ungerleider, L. G. & Haxby, J. V. (1994) 'What' and 'Where' in the Human Brain. Current Opinion in Neurobiology 4: 157–165.
Young, M. P. (1995) Open Questions about the Neural Mechanisms of Visual Pattern Recognition. In: Gazzaniga, M. (ed.) The Cognitive Neurosciences. Cambridge: MIT Press.
