Embodiment as Behavioural Plasticity

Chris Harris & Faith Budge
Plymouth Institute of Neuroscience, University of Plymouth, Plymouth PL4 8AA

"Brains are us; everything that we see, feel, think, and do is a product of our brains' activities" (Pinel)

1. Introduction

Embodiment is a fashionable concept used in disciplines as diverse as sociology, psychology, neuroscience, environmental science and robotics. There is some commonality across disciplines, in that embodiment is loosely understood as a dynamic, triadic interaction between mind/brain, body and environment. However, a concrete definition of the term remains elusive. Most writers do not even define 'embodiment', but assume implicitly that the reader will understand the term. The sociologist Simon Williams does define it, describing it as a state "in which the body is largely taken-for-granted in the normal course of everyday life" (1996). Whilst on one level we do not disagree with this definition, we propose in this paper that embodiment is more concrete than Williams suggests. We argue that it is not just a subjective or socio-cultural phenomenon but also a physiological state of the brain, which can be used to better understand self-other distinctions and to explain certain clinical disorders. We have all marvelled at the abilities of champion sports players, who exhibit remarkably effortless dexterity and timing, can plan sequences of movements, and can predict an opponent's move. Consummate musicians not only play their instruments with astounding fidelity, but are also in complete oneness with the rest of the orchestra (which is made up of likewise accomplished individuals). What do these individuals have in common? They have all developed a high degree of control over their own actions and the ability to predict the behaviours of others. To the observer, their behaviour appears in perfect harmony with the external world. We consider these individuals to be highly 'embodied'. Of course, these experts are extremes. We are all embodied most of the time, and most of us have experienced that peculiar sense of skill where things just work right, often unconsciously. Most of us take for granted that we can walk and talk.
Most of us also have the ability to become embodied with mechanical extensions to our bodies. When we learn to drive a car (or ride a bicycle), the car is at first an external device that has to be deliberately interacted with. But with experience, the car becomes part of us, as if an extension of our body, and controlling it becomes as natural as reaching for a cup of tea. This occurs because our brains contain internal models that enable us to act as embodied beings.

2. Internal Models

We propose that the genetic code endows the human organism not only with the usual structures to sustain life and develop sensation and action, but also with neural structures that have the ability to develop, learn, and maintain internal models. Internal models are physiological assemblies of neurons whose responses approximate the responses of actual behavioural effectors, such as musculo-skeletal systems, and also of other neural processes. Internal models predict the outcome of a neural activity or 'command' (feed-forward systems) without waiting for the consequences to be registered (feedback systems) (see Wolpert 1998a & 1998b). Thus, feed-forward internal models predict the sensory consequences of a neural command. Sensory consequences may include vision, touch, hearing, proprioception, etc.; crucially, action per se can only be known through its sensory consequences. Similarly, internal models can also predict the sensory consequences of external stimuli (which are really only neural activities by another name). In principle, this allows the brain to predict the behaviour of the external world. Internal models are approximations of the functions they attempt to represent; they are dynamic, and they are constantly being updated – they are simply 'models'. Fundamentally, they enable the organism to distinguish between self and non-self, not because self is intrinsically worthy of neural representation, but because the distinction allows the organism to model and predict external events. If properly trained, these models allow the host to be embodied and to survive a vast array of hostile and social environments. However, through various disorders, whether acquired,
congenital, inherited or due to extreme environments, these models may be temporarily or permanently disrupted, leading to states of disembodiment. We start from the viewpoint of human action.

Blind Man's Bluff

Action and sensation are fundamentally intertwined. Although sensors (touch, vision, hearing, etc.) inform the brain about the external world, they are inherently ambiguous for organisms that physically interact with the environment. This problem is particularly important for the survival of organisms that need to know how other organisms move in the world (e.g. predator-prey interactions). If I were blindfolded, the sensation of touching an object, such as my pet cat, could be caused by the cat touching me, or by my reaching out to touch the cat. The sensation is identical, yet I know the difference because I 'know' my own action (more precisely, I have an expectation of the sensory consequences of my reaching!). If you take my hand and place it on the cat, I can sense that my arm is moving (although I am not the agent), but I still know that the cat did not come to me: it was out there somewhere, possibly moving, possibly not – I would use my judgement about the context and your intentions to estimate the scenario. If I had a disease that made my arm twitch or move without my knowledge (at any level), and my hand happened to touch the cat, I would think that the cat had come to me. However, if I knew I had made a twitch, even though I had no intention of making the move, I would infer that the cat was nearby and my arm had touched it. If I expected to touch the cat when I reached out, but felt nothing, I might deduce that the cat had moved (cats do move). But if I reached out to touch the wall and felt nothing, I might deduce that I was in the wrong place (walls don't move).
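The idea that we 'know' our own action only through an expectation of its sensory consequences can be sketched as a minimal forward model. The linear 'arm', its gain, and the learning rate below are illustrative assumptions, not anything specified in the text:

```python
# A minimal forward model: given a motor command, predict the sensory
# consequence without waiting for feedback, and use the prediction
# error to calibrate the model (all numbers are illustrative).
class ForwardModel:
    def __init__(self, learning_rate=0.1):
        self.gain = 0.5           # initial (inaccurate) estimate of the effector
        self.learning_rate = learning_rate

    def predict(self, command):
        # Predicted sensory consequence of the command (feed-forward).
        return self.gain * command

    def update(self, command, observed):
        # The mismatch between expected and observed consequences
        # drives calibration of the model.
        error = observed - self.predict(command)
        self.gain += self.learning_rate * error / command

def arm(command):
    # The "real" effector, which the brain can never observe directly.
    return 0.9 * command

model = ForwardModel()
for trial in range(200):
    command = 0.5 + (trial % 10) * 0.1   # varied, self-generated commands
    model.update(command, arm(command))

# The model's gain converges on the effector's true gain (0.9), so its
# predictions now match the sensory consequences of the brain's own actions.
```

Once calibrated, any remaining mismatch between prediction and sensation can be attributed to the world (the cat moved) rather than to the self, which is the distinction the blindfold example turns on.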
How we interpret our sensations (perception) is complicated, and depends on our 'sense' of our 'own' action, our expectation (intention), our knowledge of the real world (context), and our sense of ownership of movement (agency). Does taking off the blindfold make things easier? Not if you are an infant!

Spatial Constancy & Visuo-Spatial Grounding

Consider the image of an 'object' formed on the retina of a human infant (this does not require the infant to have knowledge of objects or 'objectness'). The location of the image on the retina gives information about the location of the object, but only relative to the visual axis of the eye. To work out where the object really is in external (allocentric) space requires the brain to 'know' where the eyes are pointing. Also, many objects move in the real world (such as mother), which causes their images to move across the retinas. But eyes also move (and quite prolifically in infancy), which causes the images of even stationary objects to move. Thus, retinal image motion is also ambiguous: it could be caused by movement of the object, by movement of the eyes, or both. Yet, amazingly, when we move through the world, our percept is of a stationary world through which we move (rather than of the world streaming past us). Usually, we have no difficulty in detecting when an object is really moving. Clearly, the only way we can distinguish between object motion and self motion (or combinations of both) is by 'knowing' about our own motion. Crucially, the brain needs to know the sensory consequences of its own motor commands. This is the crux of embodiment. The problem is that there is no absolute way of knowing the consequences of self-action, and the brain must learn, or 'calibrate', self-motion and self-location. This is a problem of 'grounding' (a rather simple example of 'embodiment').
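The whole-field grounding cue can be sketched as a toy attribution rule: image motion that covers most of the visual field coherently is attributed to self-motion, anything else to the world. The coherence measure and threshold here are illustrative assumptions:

```python
# Toy attribution rule: if most local image motions agree with the
# field's mean direction, attribute the motion to the self; otherwise
# to an external object. The 0.8 threshold is an illustrative assumption.
def attribute_motion(motion_vectors, coherence_threshold=0.8):
    """motion_vectors: list of (dx, dy) local image motions."""
    if not motion_vectors:
        return "none"
    mean_dx = sum(v[0] for v in motion_vectors) / len(motion_vectors)
    mean_dy = sum(v[1] for v in motion_vectors) / len(motion_vectors)
    # Count vectors whose direction agrees with the mean direction.
    agree = sum(1 for (dx, dy) in motion_vectors
                if dx * mean_dx + dy * mean_dy > 0)
    coherence = agree / len(motion_vectors)
    return "self-motion" if coherence >= coherence_threshold else "object-motion"

# The whole scene drifting rightward: can only be caused by self-motion.
whole_field = attribute_motion([(1, 0)] * 95 + [(-1, 0)] * 5)   # "self-motion"
# A small patch moving against a stationary background: an object.
small_patch = attribute_motion([(0, 0)] * 90 + [(1, 0)] * 10)   # "object-motion"
```

A real visual system would of course use far richer motion statistics, but the sketch captures the logic: coherent full-field motion is a spatial invariant against which self-generated signals can be calibrated.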
It has often been suggested that proprioception (sensing muscle stretch through specialised stretch receptors) or efference copy (using the motor command itself) tells the brain about its own action, thereby grounding the system. However, at least for the infant, these self-generated signals are themselves inaccurate and need calibrating. Indeed, any self-generated action, such as waving the hands around (which infants like to do), is not helpful, because the infant has yet to learn the consequences of its motor commands. Clearly, the brain needs to calibrate (ground) its maps against a real-world spatial invariant. The most likely candidate (for a seeing infant) is that the visual system locks onto the movement of the whole visual scene, which never actually moves. In other words, if the retina detects a large area of the visual scene moving in a coherent direction, this can only be caused by self-motion – not by real-world movement. For the eyes, this is only half the problem, since eye movements can be caused by contracting the eye muscles or by moving the head (either as head movements or body movements). However, this too can be disambiguated, by the vestibular system, which detects movements of the head. This is consistent with studies that have shown precocious oculomotor reflexes from birth (we will elaborate on this elsewhere).

Feed-forward Models & Predictive Control

In addition to having spatial maps, the brain needs considerable knowledge about the dynamical responses of musculo-skeletal systems. Thus, the internal representation is not simply a static map of where limbs and eyes are in space, but a dynamical 'model' of how to make body parts move. Such dynamical models are
usually called feed-forward models (to distinguish them from feedback control). A fundamental feature of feed-forward models is that they inherently estimate what will happen when a given control command is issued. That is, they make predictions of the consequences of a behavioural command. These predictions have four key roles:

1. The prediction can be used to control the movement. For example, the brain decides to make an eye movement (a saccade) to look at an interesting object. Saccades are far too fast for visual feedback. Instead, the brain has an internal model of how the eye moves; it sends a command to the model and uses feedback from the model output (the prediction) to control the trajectory.

2. The prediction can be compared to what really happens. Any errors in the prediction can be used to update the internal model so that future predictions become more accurate. In engineering jargon, this is known as adaptive control. In other words, the brain uses its predictions to learn how the body works.

3. When the internal model is accurate, the brain can correctly measure the external world. By correctly assigning sensory information to the external world, the brain can then build up models of how external objects move. Thus, by accurately predicting where my eye is pointing, my brain can learn to predict the trajectory of a cricket ball. If my brain had a very poor model of where my eye was pointing, it would not be possible to predict the cricket ball, because there would be uncertainty in how to attribute the retinal image.

4. Feed-forward models allow the brain to try out different strategies without actually acting them out. This allows the individual to learn without making catastrophic errors.
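Role 1 above can be sketched as a control loop driven by feedback from the internal model rather than from (too-slow) sensory feedback. The plant, the gains, and the step size are all illustrative assumptions:

```python
# Sketch of role 1: control a fast movement (a saccade) using feedback
# from the internal model's prediction, never consulting the senses
# during the movement. Gains and step size are illustrative assumptions.
def saccade_with_internal_feedback(target, model_gain=1.0, plant_gain=1.0,
                                   step=0.1, max_steps=1000):
    predicted = 0.0   # the model's running estimate of eye position
    actual = 0.0      # true eye position (unavailable during the movement)
    steps = 0
    while abs(target - predicted) > 1e-6 and steps < max_steps:
        command = step * (target - predicted)   # drive on *predicted* error
        predicted += model_gain * command       # model's forecast of the effect
        actual += plant_gain * command          # what the eye actually does
        steps += 1
    return predicted, actual

# With an accurate model (gains equal), the eye lands on target.
pred, act = saccade_with_internal_feedback(10.0)
# With an inaccurate model, the movement ends with an unsensed error
# that adaptive control (role 2) would then have to repair.
pred2, act2 = saccade_with_internal_feedback(10.0, plant_gain=0.9)
```

Note that in the second case the model believes the eye is on target (`pred2` reaches 10) while the eye has actually stopped short, which is exactly the situation role 2 exists to detect and correct.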

Context

Actions and the predicted consequences of actions depend on the current context of the behaviour. Different contexts may require different internal models. Thus, touching my cat and touching the wall may have different sensory consequences (cats move but walls usually do not). However, we do not lose internal models when we enter a new context; rather, we either learn a new model or switch to an already-learnt model. As a simple example, walking around with a new spectacle prescription may be quite disorientating at first, but gradually we learn a new internal model and are no longer disorientated. When we take the new spectacles off, we usually do not become disorientated again but instantly switch back to the former model.

Intention

'Intention' has often been used to describe wilful, voluntary behaviour. It is poorly defined, but connotes the idea of an organism preparing and executing an action that the organism expected to make (as opposed to an involuntary movement). We propose that a movement was 'intended' if the expected sensory consequences actually came about. 'Intention' is therefore inherently a retrospective concept, since the action needs to occur before the subject (or anyone else) can determine its intentionality. The brain does generate preparatory neural activity long before a movement actually occurs, but we are reluctant to call these signals intentional (even if the action does eventually occur). Intention is a tricky and perhaps not very useful concept, though many have an intuitive feel for it. We stress, however, that the 'intended action' is really the 'intended (expected) sensory consequences of the action'.

Embodiment & Disembodiment

We can now define an organism as 'embodied' when its internal models of how the body behaves match the actual way the body behaves. When internal models are veridical, external events can be accurately ascribed to being external, and the distinction between 'self' and 'other' actions is accurate.
It may seem paradoxical, but only by having very good models of one's own behaviour is it possible to interact with the environment and with other 'selves' with predictive precision. Put loosely, we are embodied when our intended actions occur with their expected consequences. We say an organism is 'disembodied' when internal models are not veridical. This leads to confusion between self and other action. In reality, internal models are only ever approximate (hence the term 'model'), so there will always be some imperfection in them. Thus embodiment is not an all-or-nothing state, but is graded from well embodied to very disembodied. A key point is that the degree of embodiment is not a subjective or affective state, but an objective measure of how well internal models match the physical world (although it may be difficult to measure in practice). Of course, disembodiment may often be accompanied by affective states (such as anxiety) and heightened self-awareness (paying attention to one's own actions), but we argue that these are not necessary associations.
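This graded notion can be made concrete with a toy score. The particular formula below is our illustrative assumption; the point is only that the measure is objective and continuous, not all-or-nothing:

```python
# Sketch: embodiment as a graded, objective measure of how well an
# internal model's predictions match the body's actual behaviour.
# The score formula is an illustrative assumption, not from the text.
def embodiment_score(predicted, actual):
    """1.0 = perfectly embodied; approaches 0 as the internal model
    diverges from the body's actual behaviour."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    mean_error = sum(errors) / len(errors)
    return 1.0 / (1.0 + mean_error)

# A nearly veridical model scores high; a poor one scores low, but
# neither is simply "embodied" or "disembodied".
good_model = embodiment_score([1.0, 2.0, 3.0], [1.0, 2.1, 2.9])
poor_model = embodiment_score([1.0, 2.0, 3.0], [0.2, 3.5, 1.0])
```

Any monotone function of prediction error would do equally well here; the choice of scale is arbitrary, which is one reason such a measure may be difficult to pin down in practice.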


Adaptive Recovery & Error Attribution

In a normal healthy nervous system, internal models are constantly being updated to match any fluctuations in body performance (random fluctuations, fatigue, illness, development, ageing, etc.). Errors between predicted and actual outcomes are used to modify internal models, minimising future errors and making predictions more accurate. Thus, there is a dynamic recovery response to a 'disembodying' event. However, a mismatch between prediction and actual outcome could be caused by unexpected changes in the external world or by a change in internal dynamics. There is an inherent ambiguity, and it is not clear how the brain should decide correct attribution. Three possibilities are: 1) errors that violate grounding signals must be caused by internal changes; 2) consistent errors that occur in more than one context are unlikely (although not impossible) to be caused by external changes, and are more likely to be caused by internal changes; 3) other sensory information may cue an external change. It should be recognised that this is a difficult area to analyse without a clear understanding of the way the brain handles errors. We give a simple, well-studied example of the adaptiveness of saccadic eye movements, which appears to be straightforward – but is not!

Example – Saccade Adaptive Control

In this classic laboratory experiment, eye movements are recorded while a participant is presented with single visual targets at various locations. In a typical experiment, the participant fixates a central fixation target. At some random time, the fixation target is extinguished and a peripheral target is switched on. When the participant makes a saccade, the experimenter surreptitiously displaces the peripheral target during the saccade, thereby introducing an unexpected fixation error at the end of the saccade. Initially, the participant will make a second 'corrective' saccade to the new target location.
However, after a few hundred trials, the participant will make a primary saccade to the final target location without the need for a corrective saccade. The participant has 'learnt' to make a larger or smaller saccade than would normally be needed for the initial target location. Importantly, and perhaps surprisingly, participants are completely unaware of the experimental manipulation. Subjectively, the percept is simply that the target has jumped from its central location to a peripheral location, and it feels no different from the experiment with no target displacement. In a novel variation on this paradigm, Bahcall & Kowler (1999) asked participants to localise the displaced peripheral target after adaptation, and found that participants placed the target at the initial peripheral target location. This spatial constancy was clearly contrary to the investigators' expectations (and presumably others', since it was published in Nature), and they invoked a complex argument of high-level efference (intention) copy to explain the results. However, this result is quite consistent with the error being attributed to an inaccurate internal model. The target displacement occurred during the primary saccade, a period when sensory information does not usually reach perception (saccadic suppression). Therefore, we argue, participants have no cues that the target has been displaced, and the brain assumes the default (grounding) condition that the target has really remained stationary. The visual error that occurs after the primary saccade is therefore attributed to an inaccurate internal model. The adaptive process then adjusts the command and the internal model until the displaced target location is fixated by the primary saccade. Throughout the experiment, the retinal error is not attributed to the external world, so the actual target displacement is perceptually 'ignored' and spatial constancy prevails.
If the experiment is repeated with the target displacement occurring well after the end of the initial saccade, participants make a sequence of two saccades, to the initial and the displaced target locations. Adaptation does not occur, and participants are aware of the target displacement. This interpretation also explains why we are not aware of the small dysmetria (undershoot bias) that healthy individuals show when attempting to fixate a peripheral target. Even patients with excessive degrees of saccade dysmetria (e.g. caused by cerebellar disease) are not aware of their dysmetria, nor do they perceive targets jumping around while they make corrective saccades. Thus, efference copy is not driving perception, as is commonly assumed; rather, perception is determining the efference copy.
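The adaptation loop in this example can be simulated in a few lines. Only the paradigm itself (a peripheral target stepped back during the primary saccade) follows the description above; the gain-update rule, learning rate, and target distances are illustrative assumptions:

```python
# Simulate the intrasaccadic-step paradigm: the target appears at 10 deg,
# is stepped back to 8 deg during the primary saccade, and the unnoticed
# post-saccadic error is attributed to the internal model, driving gain
# adaptation. Update rule and learning rate are illustrative assumptions.
def adapt_saccades(initial_target=10.0, stepped_target=8.0,
                   trials=300, learning_rate=0.02):
    gain = 1.0                               # initially accurate for 10 deg
    for _ in range(trials):
        landing = gain * initial_target      # primary saccade endpoint
        error = stepped_target - landing     # post-saccadic retinal error
        # The error is blamed on the model, not the world, so the
        # command gain is adjusted rather than the percept.
        gain += learning_rate * error / initial_target
    return gain

adapted_gain = adapt_saccades()
# After a few hundred trials the gain has shrunk towards 0.8, so the
# primary saccade lands near the stepped target and no corrective
# saccade is needed - while the displacement itself goes unperceived.
```

In this toy version, delivering the step well after the saccade would correspond to the error being attributed externally instead, in which case the gain would be left untouched and no adaptation would occur, matching the control condition described above.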

3. Disorders of Embodiment

"I have no knowledge of myself as I am, but merely as I appear to myself." (I. Kant)

In general, creating internal models, keeping them updated, and attributing errors are brain functions, and can therefore be damaged in neurological disease. Detecting and attributing error is a brain function – it is not a real
world issue. Thus, brain disease could destroy internal models, or affect the way errors are detected, and may even create errors where none really exist (from an objective observer's viewpoint). Excessively inaccurate internal models will lead to disembodiment. However, failure to detect error, or mis-attribution of error to the external world, could leave the individual in a state of 'false embodiment'. On the other hand, mis-attributing error to the internal model could lead to a state of false disembodiment. For example, in the previous saccade example, external errors (albeit induced by a devious experimenter) were misattributed to inaccuracy of the internal model rather than to external perturbations. The brain is therefore 'falsely disembodied'; that is, it assumes there is an error in the internal model when there is not. We suggest that error misattribution may underlie many of the strange cognitive/perceptual disorders that can follow brain damage or psychiatric disorders (see also Blakemore et al., 2003). We now consider some examples.

Embodiment with Disorder

We stress that disembodiment is not synonymous with disease/illness/disorder. Many symptoms do not affect internal models (although they may draw attention to the body). Other disorders may be temporarily disembodying, but (a new) embodiment may ensue after adaptation, such as following amputation (although not always). Congenital disorders are, however, particularly intriguing examples of embodiment with disorder. Consider the example of congenital nystagmus (CN). CN is a spontaneous oscillation of the eyes with an onset in the first few months of life, and is usually a life-long abnormality. Even though the eyes are wobbling (often quite dramatically), affected individuals usually do not see stationary objects (or themselves) as oscillating, even though the retinal images are themselves oscillating. This is not a mis-attribution of error; the brain has clearly developed an embodied internal model of its own eye movements (although, amazingly, it cannot prevent the oscillations). More astonishing is that when an object's image is stabilized on the retina, CN patients experience the object as oscillating!

Disembodiment

In contrast to CN, acquired nystagmus (AN) does lead to disembodiment. AN occurs with certain neurological diseases that result in the person's eyes constantly oscillating, but unlike in CN the person will also perceive the world as constantly moving. They can therefore never predict where any object in their visual field will be located, and have great difficulty distinguishing between object motion and self motion. There is clearly an error in the brain's internal model. Unfortunately, there is no cure that would enable the model to be updated, and the condition can become so disembodying that sufferers may even commit suicide. Of course, there are many other examples of disorders that lead to disembodiment, especially those that result in sudden loss of motor function or sensation (e.g. following stroke, MS, paralysis, etc.).

False Embodiment

False embodiment occurs when a patient believes she is embodied but clearly is not, as judged by others. The person intends to make an action and believes that the action was made, but in fact the intended action did not occur. If prediction errors are recognized by the brain, they are attributed to the external world. This seems bizarre, but numerous neurological conditions appear to be of this type, including anosognosia, phantom limb, and involuntary movements, as the following examples demonstrate.

Example 1: Anosognosia

Anosognosia is a general term used to describe a problem of insight or awareness pertaining to illness. Anosognosia is often related to other conditions such as neglect, self-deception, denial and confabulation, but different studies have found significant levels of dissociation between these, indicating that anosognosia is a specific and independent impairment of awareness (see Jehkonen et al., 2000; Ghika-Schmid et al., 1999; and others). It usually arises in relation to pathologies that result in loss of motor function (motor neglect / extinction / paralysis / amputation), but also occurs in patients with abnormal gain of function (dyskinesias – involuntary movement disorders such as tics, tremor, myoclonus, seizures, etc.). It is usually a temporary condition, occurring after an acute event such as a stroke and lasting up to about three months. In other conditions, such as abnormal involuntary movements or dementing illnesses (e.g. Alzheimer's disease), it persists over time. Even when the deficit is pointed out, an anosognosic patient will still fail to recognize that there is a problem at all. A classic example would be a patient who fails to acknowledge that her left leg is
completely paralyzed, attempts to stand on the leg, and then cannot understand why she has fallen over. She may try to explain away her deficit by claiming that it is caused by some other problem, such as rheumatism or arthritis. When being examined, such patients may say that they have moved the limb when it has remained still, appear distracted, be hostile to talking about the subject ('I don't feel like talking about this or moving my arm right now'), or accuse the examiner of trying to trick them. They attempt to rationalize their experiences by attributing the cause to external agents. Vallar et al. (2003) have suggested that it is an "unawareness of a deficit of intention, or movement planning component, rather than, or in addition to, unawareness of a primary deficit". Support for this hypothesis comes from evidence that patients will say that they have made a movement even when there is no muscular evidence to support this. This model is based on the premise that because there is no feed-forward prediction and no feedback to show that the movement has been executed, there does not appear, to the patient, to be a problem. This is a clear example of false embodiment, where an internal model is not updated in spite of catastrophic failure. Prediction errors are not recognized; when they are pointed out, patients attribute the error to the external world.

Example 2: Involuntary Movements

There are important connections between awareness of involuntary movements and emergent properties pertaining to a sense of self, such as self-awareness, self-ownership and self-agency. The two latter concepts are normally one and the same, part of willed, voluntary action, but in the case of involuntary movements they come apart: I can know that I have moved my arm (ownership), but I do not claim agency, as I do not feel that I caused or controlled the arm movement (see Gallagher, 2000).
Similarly, someone with anosognosia may claim neither to be true, even though the arm may have moved. Conversely, they may claim both without the arm moving at all. In all cases, self-awareness mechanisms are impaired. Issues relating to the sense of agency and ownership have been extended to explain involuntary cognitive processes such as occur in schizophrenia. Here the patient will claim that their thoughts are not their own, but are being placed there by some external agent. They may also claim that somebody else is causing their movements, especially their lips when speaking: they acknowledge that it is their lips that are moving, but claim that the words produced are made by someone else. This, it has been suggested, is caused by impairments in forward models, which "could cause a lack of attenuation of the sensory consequences of self-produced actions, which would therefore be indistinguishable from externally generated sensations, hence causing a confusion between the self and the other" (Blakemore et al., 2003). Such patients also experience delusions in the form of paranoid episodes, in which they believe an external agency is attempting to harm them in some way, or experience verbal auditory hallucinations. Frith (1992) claims that this could be a breakdown in self-monitoring: patients find it difficult to distinguish between their own thoughts and those voiced by others (or those they have imagined to be voiced by others), but normally attribute them to external sources. There is thus a fundamental error in the differentiation between self and other – internal and external.

False Disembodiment

False disembodiment occurs when prediction errors are incorrectly attributed to internal models, rather than to the external world. Physiological examples are less clear, but somatoform conditions (conversion disorders, hypochondriasis) may reflect a heightened sensitivity to errors that are attributed to internal models.
Usually, however, such errors are residual normal errors and cannot be reduced any further; the constant awareness of errors may then lead to anxiety. Anorexia nervosa appears to be a condition of false disembodiment, in which a person erroneously believes they are overweight and attributes the error to the internal model, thus leading to further weight reduction.

4. Embodiment at the Social Level

"A world that can be explained even with bad reasons is a familiar world." (A. Camus)

Thus far, we have talked about embodiment/disembodiment at a rather physiological level. We now propose that the human brain also builds internal models at the social level. Although other individuals constitute part of the 'environment', their behaviours are vastly more complex than the physical environment, as are our own interactions with them. In order to cope with (and promote) this complexity, the primate brain has evolved many specialised functions, such as language, face cells for the recognition of human faces, mirror neurons which help us to imitate and understand the
actions of others, theory of mind, and specific brain circuitry for self-responsibility and self-control in order to promote social coherence. There are possibly many other functions (with as yet unknown neural substrates), such as religiosity, love, monogamy, etc. This 'social brain' supports survival and progeny in social (and antisocial) groups (see Cacioppo et al., 2002, and Churchland, 2002). We propose that, even with this advanced neurosociological kit, the brain still needs to develop predictive models of its own and others' social behaviours. And just as before, prediction errors can arise and need to be attributed to the external (others) or to the internal (self) model. This leads to a variety of interesting propositions:

• An internal model of one's own social behaviour = identity. The brain maintains multiple internal models that are context dependent = multiple identities. If the correct model is chosen to match our environmental/social context, then we are embodied.

• Internal models are influenced by social/cultural/historical factors. Society is not static, and neither are internal models. They will change over time due to different experiences, age, etc. They will need to be updated in response to events that happen to us personally, or to external events in the world that impinge upon us in some way.

• We may become socially disembodied through changes in the social environment (e.g. emigration, political upheaval, war, etc.) or through personal changes (divorce, ageing, illness, loss of wealth, etc.).

• As before, error attribution is not straightforward. We have all witnessed individuals who, at one extreme, tend to blame others for unpredicted social events, and others who, at the opposite extreme, blame themselves for untoward events.

• Illness that may or may not be disembodying at the physiological level may be disembodying at the social level. For example, it is difficult to form an embodied identity in social environments that do not accept (or appear to accept) physical/physiological disabilities (such as CN).

• Internal models enable us to evolve socially and culturally. By continually updating these models, successive generations build upon what went before. This is a key form of differentiation between humans and animals (see Churchland, 2002).

• Embodiment issues may occur at the cultural level (where the individual brain is replaced by many brains). For example, the internal model of coalition forces in Iraq was one of liberator, but predictions of the expected response were not met, leading to error and a continuing attribution problem.

Self – a brief comment

In much of the sociological literature, the concept of disembodiment has often been connected to the concept of self – particularly a loss of self (see Nochi, 1998, for example). When people talk of losing their sense of self, what in fact are they losing? Is it only when we become disembodied that we need to hold on to a belief in a stable self? Kempen (1998) argues that the self is an evolutionary device that was, and is, essential to one’s survival (you look after your/self); maybe it is only when we are threatened, by poor health for example, that we fall back on this ‘selfing process’ as a survival mechanism. We propose that awareness of self becomes heightened at times of personal crisis, not lost. There cannot be a consistent sense of self to lose, as our experiences change constantly. We agree with Varela’s claim that there could be no fluidity of experience if there were a static self; the self is simply “a convenient way of referring to a series of mental and bodily events and formations, that have a degree of causal coherence and integrity through time” (Varela, 1991, p.124). We ‘lose’ and replace aspects of ‘ourselves’ all the time – we are dynamic beings. We argue that a true loss of self would occur only through diseases such as Alzheimer’s and other dementia-type conditions, where internal models cannot be updated due to the degeneration of the brain.

Conclusion

The purpose of this article has been to outline how the concept of embodiment can be explained in terms of dynamic internal models. That is, we propose that meaningful ‘embodied’ interaction with the external world (be it physical or social) requires accurate internal models of one’s own actions in order to build accurate models of external phenomena. It may seem paradoxical, but the distinction between one’s own actions and those of others is necessary for seamless interaction with the environment, in which the actions of an organism and those of its environment seem indistinguishable. We argue (as do many others) that ‘self’ emerges from this distinction, not because ‘self’ is inherently important, but because of the physical unity of the organism. The most dramatic example of this is tool use. The practised use of tools is not an interaction between the organism and the tool, but an interaction between the tool and the


environment – the tool becomes part of (becomes embodied in) the organism, just like a hand. Indeed, we argue that from the brain’s viewpoint there is no fundamental distinction between, say, a knife and a hand – just different internal models are learnt. The terms ‘embodiment’, and particularly ‘disembodiment’, have been used to describe virtually any subjective and/or affective state of the body. We believe that this overuse detracts from the value of the terms. Instead, we propose that (dis)embodiment and false (dis)embodiment are objective states of internal models. As also described by others (Blakemore et al., 2002), this allows us to explain a variety of apparently bizarre neuropsychiatric conditions, but it also follows that not all disease states (regardless of severity) are necessarily disembodying. Nor do we accept the notion that merely attending to one’s body is ‘disembodying’. Only if (excessive) attention to one’s body’s performance leads to changes in internal models, or to incorrect error attribution, would we invoke embodiment issues. We extend our argument to the social level, where all individuals develop internal models of their own (identity) and others’ social actions. This is vastly more complicated, but we argue that the same basic principles apply. Individuals may be (dis)embodied or falsely (dis)embodied depending on the accuracy of their internal models. Error attribution is required; it leads to constantly changing internal models that cope with changing social environments, and it may lead to inappropriate social behaviour if errors are misattributed. Clearly, much more needs to be explored, but we hope that this restrained definition of embodiment may prove to be a useful tool in the future.

References

Aglioti, S., Smania, N., Manfredi, M. & Berlucchi, G. (1996). Disownership of left hand and objects related to it in a patient with right brain damage. Neuroreport, 8(1), 293-296.
Bahcall, D.O. & Kowler, E. (1999). Illusory shifts in visual direction accompany adaptation of saccadic eye movements. Nature, 400(6747), 864-866.
Blakemore, S.J., Oakley, D.A. & Frith, C.D. (2003). Delusions of alien control in the normal brain. Neuropsychologia, 41, 1058-1067.
Blakemore, S.J., Wolpert, D.M. & Frith, C.D. (2002). Abnormalities in the awareness of action. Trends in Cognitive Sciences, 6(6), 237-242.
Cacioppo, J. (Ed.) (2002). Foundations in Social Neuroscience. Cambridge, MA: MIT Press.
Churchland, P. (2002). Brain-Wise: Studies in Neurophilosophy. Cambridge, MA: MIT Press.
Frith, C. (1992). The Cognitive Neuropsychology of Schizophrenia. UK: Lawrence Erlbaum Associates.
Gallagher, S. (2000). Philosophical conceptions of the self: implications for cognitive science. Trends in Cognitive Sciences, 4(1), 14-21.
Gallagher, S. (2000a). Self-reference and schizophrenia: a cognitive model of immunity to error through misidentification. In Zahavi, D. (Ed.), Exploring the Self: Philosophical and Psychopathological Perspectives on Self-experience. Amsterdam: John Benjamins.
Jehkonen, M., Ahonen, J.P., Dastidar, P., Laippala, P. & Vilkki, J. (2000). Unawareness of deficits after right hemisphere stroke: double dissociations of anosognosias. Acta Neurologica Scandinavica, 102(6), 378-384.
Kempen, H. (1998). Mind as body moving in space: bringing the body back into self-psychology. In Stam, H. (Ed.), The Body and Psychology. London: Sage.
Marin, L. & Oullier, O. (2001). When robots fail: the complex processes of learning and development. Behavioral and Brain Sciences, 24(6), 1067-1068.
Nochi, M. (1998). “Loss of self” in the narratives of people with traumatic brain injuries: a qualitative analysis. Social Science and Medicine, 46(7), 869-878.
Pinel, J. & Edwards, M. (1998). A Colorful Introduction to the Anatomy of the Human Brain. Boston: Allyn & Bacon.
Vallar, G., Bottini, G. & Sterzi, R. (2003). Anosognosia for left-sided motor and sensory deficits, motor neglect, and sensory hemiattention: is there a relationship? Neural Control of Space Coding and Action Production, 142, 285-297.
Varela, F., Thompson, E. & Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
Williams, S.J. (1996). The vicissitudes of embodiment across the chronic illness trajectory. Body and Society, 2(2), 23-47.
Wolpert, D.M. & Kawato, M. (1998a). Multiple paired forward and inverse models for motor control. Neural Networks, 11, 1317-1329.
Wolpert, D.M., Miall, C. & Kawato, M. (1998b). Internal models in the cerebellum. Trends in Cognitive Sciences, 2(9), 338-347.

