How Are Neural Signals Related to Each Other and to the World?

Christoph von der Malsburg

Abstract: The core of this paper is a discussion of how the physical signals of the nervous system acquire significance and meaning on the basis of relationships with each other and with the environment. Signal relations are discussed in terms of coherence (defined as lack of resistance), prediction, intentionality, inner reality and meaning. The original and most basic type of signal relation has the form of temporal correlations on coarser or finer time scales, and all other relations must ultimately be built up by the brain from this basis. Consequently, consciousness is the state of coherence of the brain in terms of signal relationships within it and with the environment. In this view, consciousness is of critical functional importance and far from a superfluous epiphenomenon. Understanding the mechanisms by which meaningful signal relationships are established in the brain is of great importance for the future of information technology.

Trying to understand the nervous system to the point of being able to reproduce its function in artifacts is a venture of great scientific importance. The study of consciousness may be far removed from this venture, yet I think it is time this link is made. In my professional specialty of computer modelling of vision, for instance, it has become clear in recent years that individual functional components (such as for the detection of edges or motion or stereo depth from visual input in natural environments) cannot be made to function reliably and that only integrated systems of many such interlinked sub-systems will be able to interpret visual scenes in a meaningful and reliable way (for two examples see Steinhage, 2000, and Triesch and von der Malsburg, 2001). I argue here that the issues of consciousness, symbol meaning and intentionality are all intimately linked with the problem of sub-system integration in the brain. It may be expected that the link that I am pointing to here can, if taken seriously, lead to important mutual benefits for all fields involved.

Correspondence: Christoph von der Malsburg, Institut für Neuroinformatik, Ruhr-Universität Bochum, D-44780 Bochum, Germany.

Journal of Consciousness Studies, 9, No. 1, 2002, pp. 47–60


Resistance, Coherence and Prediction

Our organism is exchanging a constant flood of signals and interactions with the surrounding world via arrays of afferent and efferent signal pathways. Patterns of signals and their statistical structure are all that we can know about the world, a thought captured so well in Plato's cave metaphor. To decipher this code, our brain relies heavily on active experiments, manipulating the environment and controlling our location and orientation, thereby selecting and influencing sensory input in reliable and reproducible ways. It has long been recognized that this action–perception cycle is an important aspect of the way in which our interaction with the world enables us to learn and perceive (in the context of vision it was especially J.J. Gibson, 1950, who emphasized the importance of the subject's active involvement; see also Held and Hein, 1963, and the more recent movement of 'active vision', Blake & Yuille, 1992).

This interaction with the world varies according to the resistance that we encounter.[1] In the literal sense, resistance is the opposition of the world to an attempted movement of our limbs or a tool, leading to failure to complete an intended movement. In a generalized sense, resistance occurs as failure to complete an intended action or as discrepancy between sensory signals and internal predictions, and in its worst form, resistance takes the form of pain and damage inflicted upon us. It is one important purpose of the brain to avoid or reduce the resistance it experiences.

Important for this purpose is the ability to predict events, or potential events contingent upon the moves at our disposal. It is amazing how precise and detailed our brain's expectations are. We are reminded of this when there is an unexpected discrepancy (such as when our hand shoots up when trying to lift an object that is much lighter than expected). Partly, these expectations are created on the basis of fixed laws. Thus, when turning our eyes we expect the images on our retinae to move accordingly. A precise prediction of image motion must be based on the 'law' governing the relation between image and eye rotation, and on knowledge of the actual motion state, presumably made available in the form of an efference copy of the motor command responsible for the eye movement. As soon as a prediction turns out to be imprecise (as when the eye is passively moved by a finger pressing against the eyelid, causing a discrepancy between the efference copy and the actual eye position) we are alerted and sense what I am calling resistance.[2] Generally speaking, signal prediction always necessitates internal models of fixed laws and regularities and a representation of the actual situation surrounding us.[3] (A toy sketch of this prediction-and-comparison step is given after the footnotes below.)

[1] I am borrowing the term resistance from theory of science (Pickering, 1995).

[2] In the example, the discrepancy between actual and expected retinal image motion is interpreted as movement of the environment.

[3] In reality, those two form a continuum, laws and regularities just relying on more slowly varying or constant aspects of the environment. In the eye-movement example, the law governing the relationship between efference copy and image movement is defined by the oculo-motor system, the properties of which might change over time — in which case the brain would have to take note of this as just another change in the situation.
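To make the prediction-and-comparison step concrete, here is a minimal sketch (my own illustration, not a model from the paper): an efference copy of the eye-movement command is used to predict the resulting image shift, and any discrepancy beyond a tolerance is reported as resistance. The gain, the tolerance and the sign convention are hypothetical placeholders.

```python
def predicted_image_shift(efference_copy_deg, gain=1.0):
    """Predict the retinal image shift (degrees) from an efference copy of the
    eye-movement command; the gain stands in for the internal 'law' relating
    command size to image motion (assumed here to be 1)."""
    return -gain * efference_copy_deg  # image assumed to move opposite to the eye

def resistance(efference_copy_deg, measured_shift_deg, tolerance=0.1):
    """Discrepancy between measured and predicted image shift.
    A discrepancy above the tolerance signals resistance: either the internal
    law is wrong or the situation has changed, e.g. the eye was moved
    passively or the environment itself moved (cf. footnote 2)."""
    error = measured_shift_deg - predicted_image_shift(efference_copy_deg)
    return error if abs(error) > tolerance else 0.0

# A normal 5-degree saccade shifts the image by -5 degrees: no resistance.
print(resistance(5.0, -5.0))   # 0.0
# The eye is pressed passively (no command), yet the image shifts: resistance.
print(resistance(0.0, -3.0))   # -3.0
```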


Coherence is the opposite of resistance. A coherent system is one that is structured so as to produce very little resistance, to be largely right in its predictions. Coherence with the environment means success in terms of correct prediction of actual or possible events, such that the organism can pursue its opportunities and avoid damage by making proper choices from among the contingencies. Inside the nervous system, coherence means agreement between signals where they converge.

Seeing our organism and the environment as a network of interacting signals, one may classify elementary events either as causes or as effects. Causes are 'spontaneous' events that are not themselves caused by other events in the network but are due to some extraneous influence or are created spontaneously in some more proper sense of that word. Effects are regular consequences downstream from causes, produced by the regular transmissions and interactions within the network.[4] Causes, being spontaneous, cannot be predicted or anticipated. However, the various effects of the same cause, being produced by the regular signal pathways in the system, stand in regular relationships to each other, and some of them can be used to predict others. Thus, the ultimate basis for all predictions and for all coherence in a system is the existence of alternate signal pathways and causal chains that start from an original event and converge, possibly after several transformations, on a common destination, where one of them may serve to predict others.

In the eye movement example, the relevant cause is the spontaneous command to shift gaze (or the peripheral visual event that triggered it). One signal pathway is the oculo-motor pathway that executes the command, creating movement of the retinal image, which is then transmitted to the visual system. A second pathway transmits the efference copy to the visual system, where it is used to predict the image shift (or the arrival of the original visual stimulus in the fovea). An extensively studied experimental paradigm is the vestibulo-ocular reflex (the compensation of head rotation by eye movement to stabilize the retinal image, Anastasio, 1995), where resistance, in the form of residual image movement, can be induced by lesion of one of the pathways involved (Burt & Flohr, 1991), a discrepancy which is subsequently reduced by plasticity mechanisms (Burt & Flohr, 1991), as modelled by Anastasio (1995).

Our brain is a vast array and network of events, signals and signal interactions. Whole arrays of signals are constantly compared and checked against each other to produce coherence. There is a temporal and a spatial aspect to these signal arrays. Under the temporal aspect, the decisive issue is whether two signals are temporally correlated beyond what can be expected on the basis of the individual signal statistics, one of the signals predicting the other (a toy calculation below illustrates this criterion). Under the spatial aspect, arrays of simultaneously arriving signals may or may not fit the patterns of signals they impinge upon. Spatio-temporal arrays of signals can be considered as complex symbols which express the underlying events and causes.

[4] The classification cause–effect depends, of course, on the system under consideration. When focusing on a small part of our brain, many of the signals impinging on it would have to be interpreted as causes, whereas under more global purview they could be recognized as effects themselves.
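The temporal criterion just stated, correlation in excess of what the individual signal statistics predict, can be illustrated with a toy calculation (my own; the rates, the number of bins and the reading of one bin as roughly one millisecond are arbitrary assumptions). Two 'effect' trains derived from a common 'cause' show far more coincidences than two independent trains of the same rates would.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 100_000                      # e.g. 1 ms bins

def coincidences(a, b):
    """Observed coincidence count of two binary trains, and the count expected
    if they were independent signals with the same individual rates."""
    observed = int(np.sum(a & b))
    expected = a.mean() * b.mean() * len(a)
    return observed, expected

cause = rng.random(n_bins) < 0.02     # a 'spontaneous' cause event train

# Two effects of the same cause, reached over different pathways, each mixed
# with independent background activity.
effect_1 = cause | (rng.random(n_bins) < 0.01)
effect_2 = cause | (rng.random(n_bins) < 0.01)

obs, exp = coincidences(effect_1, effect_2)
print(f"observed: {obs}, expected if independent: {exp:.0f}")
# The observed count far exceeds the independence prediction, so one train
# can serve to predict the other: the signature of a shared cause.
```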


Different subsystems of our brain (or indeed of our whole organism and the environment, if one adopts a more comprehensive view) have different symbol structures, the same event getting expressed in very different codes. In order to be able to bring symbols into contact and check them against each other, the signal pathways between subsystems must translate between the different codes, this translation being performed by networks serving as transducers and interfaces between subsystems.

Much of the description of coherence that I have just given can also be applied to the web of signals and interactions that are exchanged within a rock when it is hit by another rock. The ensuing internal vibrations are transmitted over diverging and converging pathways, molecular motions at one place being the result of converging signals and local properties, and the whole array of motions can certainly be called coherent.[5] A very fundamental difference from the brain, though, resides in the fact that any motion within the rock is a passive reaction to arriving signals, and that there is nothing that could be interpreted as a prediction. Although one may be speaking only of a tiny shift in signal timing when turning a reaction into a prediction, it amounts to a tremendous functional difference (as any stock-market trader knows), and the transition is very non-trivial to achieve. Another way in which an animal differs from a rock is, of course, its physical activity, which makes signal coherence so much more interesting and difficult to achieve.

[5] It is a surprisingly creative exercise to test ideas about consciousness on rocks, as was brought home to me by my colleague Armand R. Tanguay, Jr. in a series of inspiring discussions.

Establishment of Coherence

Identifying the mechanisms for the establishment of coherence in our brain is a very complex problem and will remain a research subject for some time to come. Embedded in it are many long-standing problems, including those of learning and epistemology. The mechanisms that create coherence in my brain at this moment in time reach back over several time scales, spanning logogeny, learning, ontogeny and phylogeny. In the process, the organism has to adapt to the environment and its regularities, and the different parts of the organism and the brain have to adapt to each other. It is not an option to imagine this adaptation to be akin to a copying mechanism that transfers the structure of the environment into the brain, or the structure of one part of the brain to another. The different subsystems speak very different languages, and only at the interfaces between them can they be compared at all, in terms of local resonance in time and over different sub-channels. We will have to have recourse to general mechanisms of self-organization to understand how organisms steadily increase the level of coherence they achieve. Brain modelling and information technology are only just now beginning to realize the importance of understanding the general principles and mechanisms for the integration of different subsystems; for early examples see Steinhage (2000) and Triesch and von der Malsburg (2001).

Logogeny is the process by which the current state of our brain is organized. We experience it as the process of concentrating on an issue, of directing our attention at something, of getting into a definite context, of tuning in with the subject matter of current interest.


As these terms already imply, logogeny is a process of selection and specialization to a narrow context. The environment we are immersed in at a given moment always has embedded in it a large number of themes, processes and aspects that evolve side by side and without predictable relationships between them. It is possible to establish definite signal relations and coherence only at the price of narrowing attention down to narrow functional components of the situation. We do that by directing our gaze at a definite object, by shutting out all sounds but one, by concentrating on one task at a time. This selection process takes place on a hierarchy of time scales and accompanying scales of contextual breadth. On a slower time scale we may set ourselves a more comprehensive task, which we may pursue over minutes or longer. Correspondingly, the appropriate context is of a wider scale, and we collect and activate all the facts that are relevant to that wider context. On the short end of the hierarchy we direct our attention in rapid sequence at very narrow patterns, particular objects or even parts of objects, all the while coordinating the different subsystems of our brain and body with each other, so as to be able to establish repeatable relationships in terms of exchanged signals. The definition of coherence ultimately has to rely, in a circular way, on the ability of different subsystems to establish signal relationships reliably in the presence of changing contexts. Progress with the nature of coherence will only be possible on the basis of better knowledge of the mechanisms of self-organization by which it is established in the brain.

A typical context may be my sitting down at the breakfast table in the morning. My brain immediately activates all my expectations and habits connected with that activity. A narrower context is my pouring tea. For a brief moment, my eyes look for the teapot; my visual system recognizes it and its parts and sends out signals to other regions of the brain specifying the whereabouts of the handle, complete with information on shape, position and pose, so that the motor part of my brain can fall into a pattern that controls my arm and hand to grasp it. At the same time, some parts of my brain have more information on the teapot, sending signals that it is probably full and correspondingly heavy, and that it is hot (of which latter fact the visual system has evidence in the form of rising steam, for instance), warning my hand accordingly. As soon as my hand touches the handle, other signals in my somato-sensory system, confirming expectations and giving more precise information, help to organize the grasp on a more detailed level.

Attention is a well-studied phenomenon in the psychological, and more recently also in the neurophysiological, literature (for an excellent brief guide to the literature on both aspects see Duncan, 1999). A much-investigated aspect of it is the restriction and focusing of the mind's function to a narrow aspect of a situation. Less thoroughly studied are the mechanisms by which the brain manages to select thematically coherent aspects of a situation, especially the question of how different modalities and sub-systems of the brain manage to concentrate on thematically related signals (for a typical model see Deco and Zihl, 2001). The coordination between subsystems is not the work of a central agent with superior insight who knows each subsystem's task and structure and dishes out commands to make them do the right things at the right time.[6]


Coordination between sub-systems is rather the result of the direct exchange of signals between them, the signals from one sub-system telling others of the particular context it addresses, and attracting them in turn to focus on the same, expressing that context in their own way and language. Logogenesis is a process of self-organization, in which signals are exchanged and exert mutual attraction between sub-systems towards compatible contexts, at the same time suppressing non-compatible ones (a toy sketch of such mutual attraction is given below). Many such interactions conspire to let the system as a whole gravitate towards a well-defined context in which there is optimal coherence and resonance between arriving signals and the states they are predicting. Part of this process of homing in on a coherent state is the suppression of non-corresponding signals, signals which cannot be coordinated reliably with each other and which, by definition, then belong to different contexts and are incoherent with each other.

Each sub-system of the brain has only a limited view of the situation, and on the basis of its perspective alone there is much ambiguity as to the precise subject matter. Other modalities and sub-modalities of the brain have to add their own aspects to focus the context further and further before it can be called a well-defined subject matter. Thus, when the visual system sends out signals describing the shape of a teapot there is still ample ambiguity as to what this is about, and it needs other parts of the brain to express that I am sitting at the breakfast table, that my cup is empty, that I haven't had my cup of tea yet, that the next action to take is pouring tea, and so on.

Logogeny, the process by which brain states, or rather short brain state histories, are organized, has to rely on the presence of structures and interactions that have been built up by evolution and learning. During learning we establish coherence with those particular aspects of our environment that generalize from situation to situation but that have not, or could not have, been foreseen by evolution. To date, the mechanisms of learning are still somewhat obscure. A fundamental issue concerns the principles by which the brain selects from the sensory input those sub-patterns that are significant in the sense of having a chance of ever occurring again in the future. Significant patterns are buried in a sea of others that are completely accidental and will never recur. The idea of merely watching out for repeating patterns in the input stream, which is the principle underlying Neural Network learning, fails in view of the enormous numbers of potentially significant patterns in natural sensory input and the ensuing totally unrealistic learning times required to collect the necessary statistics (Geman et al., 1992). Learning necessitates the prior existence of mechanisms for the detection of those candidate temporal and spatial signal constellations that are likely to be significant. Some such mechanisms are obvious, others may still be unknown.

[6] This is not to say that there aren't 'planning agents', subsystems that have information on regular sequences of actions to be taken. But these planning agents themselves are invoked by appropriate signals, rather than being ultimate movers, and they have very limited information on the details necessary to execute their commands.
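As an illustration of such mutual attraction, here is a sketch written loosely in the spirit of the democratic integration cited above (Triesch and von der Malsburg, 2001), but with a simplified, hypothetical update rule rather than the published equations: each subsystem's weight is pulled towards its agreement with the jointly formed estimate, so that subsystems expressing the same context reinforce each other while a discordant one is gradually suppressed.

```python
import numpy as np

def integrate(estimates, weights):
    """Joint estimate as a reliability-weighted combination of the subsystems."""
    return float(np.dot(weights, estimates))

def update_weights(estimates, weights, joint, rate=0.05):
    """Pull each weight towards that subsystem's agreement with the joint
    estimate (simplified, hypothetical rule), then renormalize."""
    agreement = np.exp(-np.abs(estimates - joint))  # 1.0 means perfect agreement
    weights = (1 - rate) * weights + rate * agreement
    return weights / weights.sum()

# Three 'subsystems' reporting on the same quantity; the third is off-context.
estimates = np.array([1.0, 1.1, 4.0])
w = np.ones(3) / 3
for _ in range(200):
    joint = integrate(estimates, w)
    w = update_weights(estimates, w, joint)
print(w.round(3))   # the discordant subsystem's weight shrinks towards zero
```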


The first and probably most striking signal correlations are produced by our own actions, such as when we move our eyes and cause image motion, or when we flail our limbs and thereby produce correlated sensations in different modalities. It is important to know and predict these signal correlations in order to be able to distinguish events against the background of an otherwise static environment. Coherent objects are an important origin of recurring patterns, especially in the visual sense. When they move they stand out from the background of other patterns on the basis of the common motion of all their parts. This mechanism for the identification of coherent objects has been demonstrated to be present in infants at least shortly after birth (Spelke, 1987). More generally, the most fundamental indication of causal connectedness is temporal signal correlation. The mechanisms of classical conditioning, reviewed in Mackintosh (1983), are well known to pick up those events that by their timing have predictive value for other events of interest. Also at the level of individual synaptic connections, the prediction of a postsynaptic impulse by an arriving presynaptic spike leads to synaptic strengthening, whereas if that spike arrives just after the postsynaptic discharge, the synapse is weakened (Zhang et al., 1998; a minimal sketch of such a timing rule is given below). Let it be remarked at this point that learning doesn't just concern the detection of external signal constellations but equally the adaptation of one part of the brain to others. In this it is important to detect the existence of remote, not directly observable, neural connections on the basis of the temporal correlations they impress on arriving signals.

In addition to these rather general mechanisms for significance detection, evolution must have endowed us with very specific schemas for the detection of environmental patterns that are important for survival and reproduction. Well-known examples are the gosling's innate schema for defining and detecting its mother, the mouse-catching schema of the kitten, or the face schema in the human infant. An innate schema must be simple, general and of enough detail to actually fire in relevant situations and thus identify the intended significant patterns to be picked up by a learning mechanism.

Learning is a self-feeding process. For one, all those signal aspects that can already be predicted successfully form the backdrop against which unexpected deviations stand out as resistance, in the form of discrepancy from prediction, and thus as significant events. And then, to the extent that new significant patterns are just new combinations of already known ones, they can be detected as such. Thus, after seeing and segmenting a number of objects on the basis of the common motion of their parts, the nervous system is not only able to learn their shape, but will moreover be able to statistically pick up other feature relations likely to link points of the same object (the so-called Gestalt laws), on the basis of which eventually even immobile objects can be detected and separated from the ground (for a model of such a process see Prodöhl et al., 2001). Finally, the ability to learn self-amplifies in that the recognition of new examples relevant to a theme must be based on recognition mechanisms, which are themselves improved by learning. As learning feeds on itself, all we have to come equipped with at birth is a critical mass sufficient to start the process.
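The timing-dependent rule just described, strengthening when the presynaptic spike precedes and thus predicts the postsynaptic discharge and weakening for the reverse order, can be written down in a few lines. The amplitudes and the time constant are illustrative choices of mine, not values taken from Zhang et al. (1998).

```python
import math

def weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Synaptic weight change as a function of dt = t_post - t_pre (ms).
    dt > 0: the presynaptic spike arrived first and predicted the discharge,
    so the synapse is strengthened; dt < 0: it arrived just after, so the
    synapse is weakened. Parameters are illustrative only."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

for dt in (-40, -10, -2, 2, 10, 40):
    print(f"dt = {dt:+4d} ms  ->  dw = {weight_change(dt):+.4f}")
# Correlations on a fine time scale are thus picked up and stabilized:
# connections whose activity predicts the postsynaptic response grow.
```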


Learning must have the general form of attractor dynamics: as soon as the brain detects some new form of signal coherence it must do everything to hold on to it, by changing its connectivity pattern to stabilize it and to lay the basis for its revival as soon as the situation demands it. On the lowest level, Hebbian plasticity has this property of stabilizing temporal signal correlations once they are detected. On a higher level, there seem to be central evaluation mechanisms that detect discord and resistance on the one hand and unusual new patterns of coherence (the flash of a new idea) on the other, and that have the means, via modulatory fibre plexuses, to influence the whole system or large enough parts of it so as to suppress incoherent states and favour coherent ones, by strengthening appropriate fibre arrangements. Some relevant experimental studies are reviewed in Singer and Artola (1994).

Consciousness

Our most immediate experience is that of our inner reality. For Descartes, the great doubter, his own thought process was the only thing of this world which he felt he could take for granted. To a very large extent, the riddle of consciousness is the riddle of what it is that lends the process of our mind this quality of intensive reality. What is this experienced reality, and how is it established?

A thing that is real isn't just there. For it to be real we demand that it be connected to other things. It needs to be of consequence, it should make a difference elsewhere. Whenever we are in doubt about the reality of a sensation, we make reality checks by invoking independent tests. Usually, it is just our different senses with which we cross-examine. When in doubt about a visual sensation, we try to touch the object, or we move ourselves in order to find out whether the phenomenon changes the way it should according to our experience. We never doubt the reality of a phenomenon when there are several independent manifestations of its existence. As soon as some of the obvious tests break down and meet resistance, we have debunked the sensation as a mere illusion, as of only partial truth. Thus, all of our evidence for reality is in the form of coherence in our mental system.

This is true when dealing with the reality of external phenomena, but it is also true of phenomena internal to our mind. In fact, the attempt to distinguish between external and internal phenomena is very difficult or impossible to maintain. What we experience immediately is to a great extent our inner reality. Even for a very concrete object we hold in our hand, the overwhelming share of all the activity that relates to its reality has been created in our mind itself.[7] Much of what we naively experience as directly accessible external reality — the object in my hand, for instance — is the result of a complicated process of construction (as yet largely obscure to science). The fleeting images on our retinae, the individual tactile impressions on our hands, do not add up in any simple way to a description of the object scrutinized. This must rather be constructed as a collage of present and past information, and the whole can only be described as a pattern of coherence between the different agents of our mind taking an interest in the object.

[7] I am obviously subscribing here to the naive realism with which we all lead our daily lives, leaving totally aside philosophers' qualms about the ultimate cause behind our sensations. Compare Putnam's 'internal realism' (1990).


It is to be suspected that the bits of 'sensation' that are experienced as most real are active constructions of our mind serving as tertium comparationis to establish coherence between different modalities. Thus, our concrete 'sensation' of the smooth, curved surface of an object is a set of assumptions that serves to create coherence between images in our eyes and sensations in our hand; the perceived stereo depth profile of the surface is a set of assumptions necessary to establish coherence between the two retinal images (as modelled convincingly in Becker and Hinton, 1993). Of all the possible sets of assumptions that could be considered concerning the current situation, only very specific ones are capable of establishing seamless coherence across the whole system of our senses, our motor commands, and the various inner agents of our mind concerned with the situation. It is these sets of assumptions that we take to be the reality of the situation.

The strict separation of the outer and inner world is not a very useful concept for our purposes here. When focussing on one part of our brain, the rest of it is as much environment as the environment proper. While the current state of our mind evolves, individual parts of it have to make hypotheses about the state of others in order to be able to maintain coherence. These hypotheses qualify as having reality only if resistance is indeed avoided and, wherever they meet, alternate signal pathways can come to agree with each other, some of them predicting others correctly. As a consequence, even when we are day-dreaming and aren't linking ourselves to the current physical situation surrounding us, we experience that process as one endowed with the quality of reality, one part of the mind describing a reality that refers to other parts of the mind. Our mind is a marvel in this sense of being able to create coherent states quickly and reliably, and we catch ourselves rather rarely with inconsistencies. But we shouldn't fool ourselves into believing that this coherence is a simple, natural and unavoidable property of our mind, rather than the result of a very complex process of organization.

Inner reality is not a static state of affairs but a rapidly evolving historical process. While experiencing this process as it evolves we are inclined to believe that if time were subdivided into finer and finer slices, we would hit a scale, the 'psychological present' of one or a few tenths of a second (James, 1890), at which a snapshot of the state of our brain could still be conceived of as a fixed symbol for a coherent situation with its own inherent reality. I would like to contest that view. I submit that even on that time scale coherence, inner reality and consciousness are only defined on the basis of a temporal history, resolvable perhaps down to a few thousandths of a second, in terms of which different sub-processes are or aren't coherent with each other, and thus do or don't relate to — predict, are in tune with — each other. Without the evidence of coherent history, the state of our brain would be a dead, passive, uninterpretable, incoherent, non-conscious entity. That consciousness requires time to be established becomes evident when a rapid sequence of captivating external events (like an unfolding traffic accident) doesn't permit our mind to 'come to its senses', to engage all relevant agents and submodalities (see Marcel, 1983, for a line of experiments where insufficient viewing time prevents conscious perception).


In such situations it is restricted parts of our nervous system that react in a reflexive way, uninformed by what other subsystems would have to say if only given time. We experience such states of mind as sub-conscious.

As the situation changes, we have to change inner variables along with it to maintain coherence and to avoid resistance: we have to update our inner reality to stay in tune with the environment. It is this staying-in-tune that lends the symbols of our mind the quality of intentionality. They manifestly refer to outer things because they are, through the signal pathways of our nervous system, causally related to those outer things. The relation of intentionality is not in need of an external observer to be diagnosed; the individual brain does that itself continuously, each individual such relation constantly being surveyed by a web of other, indirect relationships. The riddle of intentionality is not so much the riddle of what it is, but rather of how it is established.

This brings us to the issue of the meaning of symbols. One of the very mysterious aspects of the mind–body relationship, and indeed of consciousness, is the question of how physical states of our brain, seen as symbols, acquire meaning. In its most acute form the problem might be exemplified like this. Assume, as is claimed by some researchers, that there is a colour area in our brain which is solely responsible for our awareness of colour. Destroy the area and we cannot even dream colour any longer. The conclusion seems forced on us that the activity in a certain group of neurons is responsible for, say, the sensation of a certain hue of blue in a certain region of our visual field. How can that be? Under the microscope or the electrode the colour area and the neurons in it look just like others. Of course, there is nothing blue about those neurons — we have long abandoned J. Müller's concept of 'specific irritabilities'.

I think that we are tricked into seeing a riddle here by our habit of mistaking symbols for the meaning they stand for. A word reaching our ear immediately conjures up concrete meaning, and so we develop the habit of confusing the word with the meaning. In reality the word just triggers an avalanche of reactions in our brain, reaching many of its subsystems, activating symbols and symbol constellations there, which themselves cause tertiary reactions in still other parts of the system. None of those symbols has any meaning when taken as an isolated entity; the meaning is solely in the system of relationships between them.

The matter may be illuminated by the following analogy. We associate dollar bills with value. Now, where does this value originate? Look at the dollar bill in your hand. There is really nothing there of value in itself. What you see is just printed paper. A moment's thought makes it clear that monetary value is established as the result of a very complex (and very hard to understand!) system of beliefs in the heads of all those participating in the economic system. The day after a monetary crash the green stuff, looking exactly the same as before, would have lost its value! By the same token, we shouldn't 'stare at the dollar bill' when trying to understand the meaning of the physical signals in our brain. It is vast systems of relationships between those signals that represent meaning. See it as a question and answer game. Activating a neural signal is tantamount to a question: what does this signal mean? And the answer is more neural signals, which themselves are just more questions.


The position I am expressing here has a long history. It was implicit in John Dewey's Functional Psychology of the late nineteenth century, and in William James' pragmatism, which held that the meaning of a word lies only in the circumstances of its employment and its importance only in its success or failure. Similarly, there is Wittgenstein's insistence on language games in his Philosophical Investigations (1953). The attitude is also a pervasive trend in modern mathematics, which insists on the relative importance of mappings between entities over those entities themselves.

One may feel this to be an empty game and look for some grounding, for some final set of symbols which infuse the whole network with ultimate meaning (see Lycan, 1999, for a number of references to attempts at the symbol grounding problem). One may be tempted to take peripheral signals, afferent and efferent, to be this ultimate set of symbols with meaning in themselves. But then, these signals too are only messengers for more 'symbols', for elementary events in the external world, which for us acquire their significance only through relationships. What is a flicker of light in our retina? It means nothing without being put in context with other such signals, with information about eye position, about an array of other visual signals, about patterns that we have seen in the past. Thus, peripheral signals are really the least appropriate for purposes of symbol grounding, being in need of profound and complex re-working before anything of more lasting significance is derived.[8]

More promising candidates are those symbols, far removed from the periphery, which are constructed by the system to mediate between different peripheral modalities, those tertia comparationis already referred to. They are taken by us as manifestations of reality and thus come closest to having meaning in themselves. But, of course, they must be represented by the signals of neurons which, again, look essentially like all other neural signals, and an external observer, even one equipped with the ultimate cerebroscope, would not find any specific name tags or labels attached to them but would have to trace back the connections of those signals to others to arrive at their interpretation.

This back-tracking is a very difficult operation indeed. It wouldn't help much to try and follow fibre connections, which one by one are much too insignificant and confusing. A better chance lies in following signals as they propagate through the network. No static print-out of the cerebroscope would do as a basis for this signal tracing; temporal signal relations are required for that. Not even a print-out of a stretch of brain history would do, since temporal correlations cannot be reliably deduced from a single instance. It would be necessary to do a sequence of appropriate experiments, just as we do within our mind when trying to trace the meaning of some symbol. Thus, the working brain itself in its entirety is required (or a working replica of it in all physical detail, if that were possible) to assign meaning to symbols.

[8] This is expressed by Gibson, when saying 'We understand well enough that a visual stimulus is neither an object nor an experience of that object, but something which stands between them. What we have failed to understand is that this stimulus need not look like either its cause, the object, or its effect, the experience. It need only be a specific correlate of both.' (Gibson, 1950, p. 116.)


Backtracking like this is what neurophysiology actually does when it records post-stimulus time histograms, looking for temporal correlations of neural signals with stimuli in order to assign symbolic meaning to neurons. Grappling with grave technical difficulties in the modelling of brain functions, I arrived some time ago at the conclusion (von der Malsburg, 1981) that the time scale of the meaning-instilling temporal correlations to be considered in the brain must extend down to a resolution of around 3 milliseconds, and that the bulk of those correlations must concern signal fluctuations produced within the brain, not imported into it by afferent sensory signals. The functional significance of those correlations is to dynamically bind more elementary symbols (ultimately single neurons) into groupings as the situation demands. The lack of such dynamical binding in previous theories is now generally referred to as 'the binding problem'.

Curiously, a very heated discussion has sprung up recently about the status of the binding problem and the temporal correlations that help to solve it in the brain. In a special issue of the journal Neuron, ten articles were devoted to the discussion, some of them reviewing experimental evidence in favour of the idea (Singer, 1999; Gray, 1999; Wolfe and Cave, 1999), some violently arguing against the concept (notably Shadlen and Movshon, 1999). The latter, besides rightly asking for more experimental evidence, argue (as already in Shadlen & Newsome, 1994) that if neural signals are Poissonian and independent of each other, the putative binding correlations cannot be statistically extracted from them. As pointed out in my own contribution to the Neuron issue (von der Malsburg, 1999), this argument is circular and faulty: if the circuits of the brain are structured appropriately, clear-cut and easily detected signal correlations will ensue (a toy demonstration of this point is given below).[9]

In any case, it should be realized how difficult the assignment of meaning to neurons must be. An assertion is of scientific value only if it is not tied to a specific context but is repeatable, in other laboratories and on other subjects. Certain signal relations in the brain may thus qualify for scientific study, being reproducible under a wide variety of contexts. Others, however, are the fleeting products of specific contexts and individual idiosyncrasies (for a heroic effort to assign meaning to single neural responses see Ohl et al., 2001). It is well known that words can assume specific semantic meanings in the context of specific sentences and situations, meanings that are totally unique, unprecedented and perhaps never to recur. The mind itself takes note of such inflections of meaning and may draw important consequences from them, and thus they are certainly not lost or insignificant. On the contrary, it is this historic aspect of our mind's working that is the basis for all flexibility and creativity in dealing with unique and novel situations, and it is thus essential to our success at survival and reproduction. And, as argued in von der Malsburg (1997), it is the web of meaningful relations pervading all parts of the brain that endows it with consciousness.

One of the undertones of the ongoing discussion sees consciousness as nothing but icing on a cake, as if our brain could well function without it.

[9] Part of my original proposal (von der Malsburg, 1981) was that neural connections can quickly respond to signal correlations to organize the circuits that in turn support highly structured signal correlations.
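The following toy demonstration (my own, with arbitrary numbers) illustrates the point: if circuitry imposes fast common fluctuations on the units belonging to one group, simple pairwise correlations recover the grouping at fine temporal resolution, so the mutual independence assumed in the counter-argument is precisely what structured circuits do not satisfy.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 20_000   # think of bins a few milliseconds wide

def unit(group_drive, p_participate=0.8, p_private=0.01):
    """A spike train that joins its group's fast common fluctuations on a
    fraction of occasions and adds private background spikes of its own."""
    joins = rng.random(n_bins) < p_participate
    noise = rng.random(n_bins) < p_private
    return (group_drive & joins) | noise

# Two contexts, each binding three units by shared fine-time structure.
drive_a = rng.random(n_bins) < 0.03
drive_b = rng.random(n_bins) < 0.03
units = [unit(drive_a) for _ in range(3)] + [unit(drive_b) for _ in range(3)]

# Pairwise correlations: within-group pairs stand out, across-group pairs do not.
corr = np.corrcoef(np.array(units, dtype=float))
print(corr.round(2))
# The blocks of high correlation recover which units are currently bound
# together; the grouping is carried by the relations between the signals,
# not by any signal taken in isolation.
```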


Thus, Searle (1980), in his Chinese Room gedankenexperiment, assumes there could be a system able to answer questions about natural language text on the basis of a set of purely formal rules, without understanding what the text is about, and he concludes from this that computers, programmed in the spirit of Artificial Intelligence, may well be able to reproduce the observable function of our brain, but that they would never come close to being conscious, that they would always lack the ability to produce anything that could be interpreted as meaning. It may well be that we will be unable to produce machines that have the equivalent of consciousness, at least for some time to come, but that will only be because we are unable to produce fully functional brain equivalents with, for instance, the capability of translating natural language. It is very naive to believe that the meaning of a sentence, that is, its set of relationships to the intended situation, could be rendered in another language without first capturing it.

One of the difficulties of the consciousness discussion is the lack of a clear definition. The situation may be compared to the problem of defining life. Of the latter we clearly have but one example, based on DNA and a watery soup, and we thus lack the possibility of comparing different types of life to discover traits common to all of them. Consequently, the best we can do is describe life in terms of all the traits that are common to Life on Earth. In relation to consciousness the situation is not so clear. At the present moment in history there is broad agreement that all humans are endowed with consciousness. We can speak and interact with them in a rich way and can thus manifestly link much of the conscious structure of our own mind in a coherent way with and through their minds, the better the closer we are in language and value system. About animals, opinions are already split, and it is a matter of definition whether we want to extend the phenomenon to them. We might just define consciousness to be a unique human possession and include in its definition uniquely human traits, such as our particular set of senses, language, or that large part of our mind that is concerned with the self — a detailed description of our own person, its intentions, values, merits and so on as embedded in the situation and the social context.[10]

The discussion about the definition of consciousness, if it is ever to come to a conclusion acceptable to science, will have to get out of this quagmire of opinion and come to a clear separation of scientific fact from mere matters of opinion. Limiting myself to the science aspect, I predict that the ultimate scientific definition of consciousness will not insist on specific human traits such as language or self-consciousness but will be based on the criterion of functionality and thus, as I have argued, on the ability of brains to establish signal coherence in the presence of active change. Should self-consciousness and language turn out to be necessary consequences of that, so be it, but I doubt it.

Our present electronic artefacts are certainly not conscious. At an annoying rate we catch them in gross inconsistencies with the situation. These 'bugs', this resistance, are left to the human user to detect and mend. The annoyance will go away only when our electronic servants are developed to the point of becoming tuned to (aware of!) the situations at hand, so as to react to signals appropriately and reduce resistance.

[10] Jaynes (1976) has argued that in previous historical times this infatuation with the self wasn't yet in fashion, and is ready to conclude that humans weren't conscious at that time!


References

Anastasio, T.J. (1995), 'Vestibulo-ocular reflex: Performance and plasticity', in The Handbook of Brain Theory and Neural Networks, ed. M.A. Arbib (Cambridge, MA: MIT Press).
Becker, S. and Hinton, G. (1993), 'A self-organizing neural network that discovers surfaces in random dot stereograms', Nature, 355, pp. 161–3.
Blake, A. and Yuille, A. (ed. 1992), Active Vision (Cambridge, MA: MIT Press).
Burt, A. and Flohr, H. (1991), 'Role of the visual input in recovery of function following unilateral vestibular lesion in the goldfish', Behav. Brain Res., 42, pp. 201–25.
Deco, G. and Zihl, J. (2001), 'Top-down selective visual attention: A neurodynamical approach', Visual Cognition, 8, pp. 119–40.
Duncan, J. (1999), 'Attention', in The MIT Encyclopedia of the Cognitive Sciences, ed. R.A. Wilson and F.C. Keil (Cambridge, MA: MIT Press).
Geman, S., Bienenstock, E. and Doursat, R. (1992), 'Neural networks and the bias/variance dilemma', Neural Computation, 4, pp. 1–58.
Gibson, J.J. (1950), The Perception of the Visual World (Cambridge, MA: The Riverside Press).
Gray, C.M. (1999), 'The temporal correlation hypothesis of visual feature integration: Still alive and well', Neuron, 24, pp. 31–47.
Held, R. and Hein, A. (1963), 'Movement-produced stimulation in the development of visually guided behavior', J. Comp. Physiol. Psychol., 56, pp. 872–6.
James, W. (1890), The Principles of Psychology (New York: Dover).
Jaynes, J. (1976), The Origin of Consciousness in the Breakdown of the Bicameral Mind (Boston, MA: Houghton-Mifflin).
Lycan, W. (1999), 'Intentionality', in The MIT Encyclopedia of the Cognitive Sciences, ed. R.A. Wilson and F.C. Keil (Cambridge, MA: MIT Press).
Mackintosh, N.J. (1983), Conditioning and Associative Learning (Oxford: Clarendon Press).
Marcel, A. (1983), 'Conscious and unconscious perception: An approach to the relations between phenomenal experience and perceptual processes', Cogn. Psychol., 15, pp. 238–300.
Ohl, F.W., Scheich, H. and Freeman, W.J. (2001), 'Change in pattern of ongoing cortical activity with auditory category learning', Nature, 412, pp. 733–8.
Pickering, A. (1995), The Mangle of Practice: Time, Agency, and Science (Chicago University Press).
Prodöhl, C., Würtz, R.P. and von der Malsburg, C. (2001), 'Learning the Gestalt rule of collinearity from object motion', submitted to Neural Computation.
Putnam, H. (1990), Realism with a Human Face (Cambridge, MA: Harvard University Press).
Searle, J.R. (1980), 'Minds, brains and programs', Behavioral and Brain Sciences, 3, pp. 417–57.
Shadlen, M.N. and Movshon, J.A. (1999), 'Synchrony unbound: A critical evaluation of the temporal binding hypothesis', Neuron, 24, pp. 67–77.
Shadlen, M.N. and Newsome, W.T. (1994), 'Noise, neural codes and cortical organization', Curr. Opin. Neurobiol., 4, pp. 569–79.
Singer, W. (1999), 'Neuronal synchrony: A versatile code for the definition of relations?', Neuron, 24, pp. 49–65.
Singer, W. and Artola, A. (1994), 'Plasticity of the mature neocortex', in Cellular and Molecular Mechanisms Underlying Higher Neural Functions, ed. A.I. Selverston and P. Ascher (Chichester: Wiley).
Spelke, E.S. (1987), 'Where perceiving ends and thinking begins: The apprehension of objects in infancy', in Perceptual Development in Infancy, ed. A. Yonas (Hillsdale, NJ: Lawrence Erlbaum).
Steinhage, A. (2000), 'Nonlinear attractor dynamics: A new approach to sensor fusion', in Sensor Fusion and Decentralized Control in Robotic Systems II: Proceedings of SPIE, Volume 3839, ed. P.S. Schenker and G.T. McKee, pp. 31–42.
Triesch, J. and von der Malsburg, C. (2001), 'Democratic integration: Self-organized integration of adaptive cues', Neural Computation, 13, pp. 2049–74.
von der Malsburg, C. (1981), 'The correlation theory of brain function', Internal Report 81–2, MPI Biophysical Chemistry. Reprinted in Models of Neural Networks II, ed. E. Domany, J.L. van Hemmen and K. Schulten (Berlin: Springer).
von der Malsburg, C. (1997), 'The coherence definition of consciousness', in Cognition, Computation and Consciousness, ed. M. Ito, Y. Miyashita and E.T. Rolls (Oxford: Oxford University Press).
von der Malsburg, C. (1999), 'The what and why of binding: The modeler's perspective', Neuron, 24, pp. 95–104.
Wittgenstein, L. (1953), Philosophical Investigations (Oxford: Blackwell).
Wolfe, J.M. and Cave, K.R. (1999), 'The psychophysical evidence for a binding problem in human vision', Neuron, 24, pp. 11–17.
Zhang, L.I., Tao, H.W., Holt, C.E., Harris, W.A. and Poo, M.-M. (1998), 'A critical window for cooperation and competition among developing retinotectal synapses', Nature, 395, pp. 37–44.

Paper received August 2000
