Journal of Physiology - Paris 103 (2009) 286–295


Social cognitive neuroscience and humanoid robotics

Thierry Chaminade a,*, Gordon Cheng b

a Mediterranean Institute for Cognitive Neuroscience (INCM), Aix-Marseille University – CNRS, 31 Chemin Joseph Aiguier, 13402 Marseille Cedex, France
b Department of Electrical Engineering and Information Technology, Cluster of Excellence "Cognition for Technical Systems – CoTeSys", Barer Str. 21, Technical University Munich, 80290 Munich, Germany
* Corresponding author. E-mail address: [email protected] (T. Chaminade).

Keywords: Robotic; Humanoid; Human; Cognition; Neuroscience; Social interactions

Abstract

We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions that is based on the finding that the cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, both at the behavioral and neural levels. We will first review important aspects of this framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Humanoid robots are robots whose appearance resembles that of a human body, in our case a robot with two legs, two arms and a head attached to a trunk. Because of this anthropomorphism, they provide relevant testbeds for hypotheses pertaining to human cognition. The phrase "understanding the brain by creating the brain" was coined to synthesize how humanoid robots and computational neuroscience could contribute to progress in naturalizing human psychology and the underlying neurophysiology (Asada et al., 2001; Brooks, 1997; Cheng et al., 2007; Kawato, 2008). Here, we will discuss the application of this adage to the investigation of social interactions, on the premise that robots provide testbeds for hypotheses pertaining to natural social interactions. The distinction we wish to make here is with past approaches that focused on behavior synthesis as the core of "cognition" (Arkin, 1998; Atkeson et al., 2000; Brooks, 1997) but, although said to be "biologically-inspired", had little direct input from the biological sciences. In contrast, we wish to bring forward a direct connection between "humanoid robotics" and "social cognitive neuroscience", in an endeavor to gain:

1. a better understanding of human–human and human–machine social interactions (Chaminade, 2006; Chaminade and Decety, 2001);


2. a deeper understanding of the brain functions involved in these interactions (Chaminade et al., 2007);
3. better engineering guidelines for building machines suitable for human interactions (as suggested by Cheng et al., 2007).

In this review, we will provide examples of how robots can be used to test hypotheses pertaining to human social neuroscience, both in behavioral (Section 3.1) and neuroimaging (Section 3.2) experiments, but also of how social cognitive neuroscience can provide insights for developing socially competent humanoid robots (Section 4.1). First, we will present a brief history of humanoid development. The last decade has seen the emergence of increasingly autonomous humanoids, and eventually of androids. Honda's humanoids P2, in 1996, followed by P3 in 1997 and ASIMO in 2000 (Hirai et al., 1998; Sakagami et al., 2002), were among the first humanoids to walk on their own legs and feet (Fig. 1), and eventually to climb stairs and navigate autonomously; they stunned the world by going public: human-like robots were on their way from fiction to reality. SONY produced QRIO (Fig. 1) for entertainment purposes (Nagasaka et al., 2004), and the Humanoid Robotics Project investigated practical applications of humanoid robots (HRP series) cooperating with humans (Hirukawa et al., 2004). Fundamental developments in humanoid research also started with bipedal walking, as early as the mid-1960s (Waseda Lower-Limb series), and then turned to humanoids as the embodied platforms necessary for certain applications, with actuators and sensors approximating human motor and sensory processes in order to simulate human 'intelligence' (Brooks, 1997).


Fig. 1. Center: SONY humanoid robot QRIO (photo courtesy of SONY). Clockwise from left, bottom: HONDA humanoid robots P3 and ASIMO (Advanced) (photo courtesy of HONDA); Infanoid (photo courtesy of Hideki Kozima); ATR humanoid robot DB (co-developed with SARCOS during the JST Kawato Dynamic Brain Project; photo courtesy of Stefan Schaal); CB (co-developed with SARCOS during the Computational Brain Project; photo by Jan Moren, courtesy of Gordon Cheng).


The use of humanoids to "understand the brain" is now at the core of many projects, such as RobotCub, a European project investigating human cognition, and in particular developmental psychology, through the realization of iCub, a humanoid robot the size of a 3.5-year-old child (Sandini et al., 2004). The humanoid robots DB and CB, produced in two projects headed by Mitsuo Kawato, were used in some of the studies reported here. In the ERATO project, the robotics group, led by Dr. Stefan Schaal and in collaboration with the research company SARCOS (Hollerbach and Jacobsen, 1996), developed a humanoid robot called DB (Dynamic Brain), replicating a human body given the robotics technology of the mid-1990s (Fig. 1). It was followed by the ICORP Computational Brain Project, in which Dr. Gordon Cheng, again in collaboration with SARCOS, developed a new humanoid robot called CB (Computational Brain; Cheng et al., 2007), more accurate in reproducing the human body than DB (Fig. 1). Because they reproduce part of the human appearance, humanoids provide testbeds for hypotheses pertaining to natural social interactions. They are used to research how a global human-like appearance influences our perception of other agents, in comparison to real humans or, at the other end of the spectrum, industrial robotic arms. This is even more true of androids, a specific type of humanoid that attempts to reproduce the human appearance not only in its global shape but also in its fine-grained details. Interestingly, the acceptability of androids in everyday applications has been described by the "Uncanny Valley of Eeriness" hypothesized by the Japanese roboticist Masahiro Mori (Mori, 1970). While one would expect the social acceptance of robots to increase with anthropomorphism, the "uncanny valley" hypothesis postulates that artificial agents attempting, but imperfectly, to impersonate humans, as is the case of androids, induce a negative emotional response (MacDorman and Ishiguro, 2006; Mori, 1970).

While this hypothesis has proved impractical, as neither anthropomorphism nor emotional response easily lends itself to being described by a one-dimensional variable, understanding the cognitive mechanisms underlying the feeling of uncanniness one experiences when facing an android will be invaluable to understanding human social cognition; this is one of the objectives of the emerging field of android science (MacDorman and Ishiguro, 2006). Androids indistinguishable from humans in terms of form, motion and behavior, a goal not unlike the Total Turing Test proposed by Stevan Harnad (Harnad, 1989), would be invaluable for research by providing fully controlled partners in experimental social interactions. While the artificial conversational abilities at the core of the original Turing Test (Turing, 1950), including language, semantics and symbolism, are beyond the scope of the present article, the concept of a robot "passing" a Total Turing Test highlights the possible outcomes of bidirectional exchanges between robotic developments and research in human cognition. The goal of this review is not to provide definitive answers about optimized robot design in the form of a series of guidelines for roboticists, but to present an overview, based on our own work, of how robotics and the cognitive sciences can work together towards the goal of developing social humanoids. We will rely on one theoretical framework that fueled our work, the hypothesis of motor resonance, which pertains to embodied social interactions with a focus on actions. After a section describing this framework, a second part will present pertinent experimental results obtained using robotic devices, and a last part will attempt to derive guidelines for improving the social competence of interacting humanoids based on this framework.

2. Motor resonance in social cognition

Theories of social behavior using concepts of resonance have flourished in the scientific literature following the finding that the same neural structures show an increase of activity both when executing a given action and when observing another individual executing the same action (Blakemore and Decety, 2001; Gallese et al., 2004; Rizzolatti et al., 2001). Neuropsychological findings using action production, perception, naming and imitation hinted, in the early 1990s, that limb praxis and gesture perception share parts of their cortical circuits (Rothi et al., 1991). Similarly, in language, the motor theory of speech perception claimed, on the basis of experimental data, that the objects of speech perception are not sounds but the phonetic gestures of the speaker, whose neural underpinnings are motor commands (Liberman and Mattingly, 1985). We will refer to these processes under the heading of motor resonance, which is defined, at the behavioral and neural levels, as the automatic activation of motor control systems during the perception of actions.

2.1. Neurophysiology of resonance

Mirror neurons offered the first physiological demonstration that motor resonance has validity at the cellular level. Mirror neurons are a type of neuron found in the macaque monkey brain and defined by their response, as recorded by single-cell electrophysiological recordings. First reported in 1992 by Giacomo Rizzolatti's group in Parma (di Pellegrino et al., 1992), they were officially named "mirror neurons" in a 1996 Cognitive Brain Research report as "a particular subset of F5 neurons [which] discharge[s] when the monkey observes meaningful hand movements made by the experimenter" (Gallese et al., 1996). The importance of this discovery stems from the known function of area F5, a premotor area in which neurons discharge when monkeys execute distal goal-directed motor acts such as grasping, holding or tearing an object.


Comparing the various reports, it is reasonable to assume that around 20% of recordable neurons in this area have mirror properties in a loose sense, while a lower percentage, around 5%, shows action specificity, i.e. the same action is the most efficient at causing the neuron to fire both when the monkey observes it and when it executes it. These neurons are activated both during the execution of a given goal-directed action and during the observation of the same action made in front of the monkey. The human physiological data, obtained with the brain imaging techniques that emerged in recent decades, such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), magnetoencephalography (MEG) and transcranial magnetic stimulation (TMS), support a conclusion expected on the basis of the mirror neuron literature in the macaque monkey: premotor cortices, originally considered to be exclusively concerned with motor control, are also active during the observation of actions in the absence of any action execution (Chaminade and Decety, 2001). What remains unknown is whether the same brain regions, and a fortiori the same neurons, would be activated by the observation and the execution of the same action throughout the premotor system, or whether this specificity is limited to a small percentage of ventral premotor neurons. In other words, are all premotor regions activated in response to the observation of action populated with mirror neurons? Irrespective of the answer to this question, accumulating human neuroimaging data confirm in humans what mirror neurons demonstrated beyond doubt in macaque monkeys at the cellular level: the neurophysiological bases for the perception of other individuals' behaviors make use of the neurophysiological bases for the control of the self's behavior. An intriguing trend in human cognitive research is that this resonance is not limited to the observation of object-directed hand actions, as mirror neuron responses are, but generalizes to a number of other domains of cognition. For example, an fMRI study investigated touch perception by looking for an overlap between being touched and observing someone else being touched (Keysers and Perrett, 2004). An overlap of activity was found in the secondary somatosensory cortex, a brain region involved in integrating somatosensory information with other sensory modalities. Another study reported activity in the primary somatosensory cortex during the observation of touch (Blakemore et al., 2005). Thus, there is a resonance for touch, by which observation of someone else being touched recruits the neural underpinnings of the feeling of touch. In the same vein, observation of the expression of disgust activates a region of the insula that is also activated during the feeling of disgust caused by a nauseating smell (Wicker et al., 2003). Empathy for pain also makes use of resonance, in the anterior cingulate cortex (Singer et al., 2004). Taken together, these findings led to the hypothesis that a generalized resonance between oneself and other selves, or social resonance, underlies a number of social behaviors, including action-related behaviors such as action understanding (Chaminade et al., 2001) and imitation (Rizzolatti et al., 2001), but also more general social processes such as empathy and social bonding.
In summary, the mirror neurons studied in the macaque monkey provided a very specific example of a more general mechanism of human cognition, namely the fact that the neuronal structures used when we experience a mental state, including but not limited to the internal representation of an action, are also used when we perceive other individuals experiencing the same mental state. Recent examples support a generalization of motor resonance to other domains of cognition, such as emotions and pain, that can be transferred between interacting agents, hence the term social resonance.
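To make the notion of overlap concrete, the following minimal Python sketch illustrates the kind of conjunction analysis used in such studies: a voxel counts as shared only if it is suprathreshold both in an "experience" map and in an "observation" map (a minimum-statistic conjunction). The maps, dimensions and threshold are hypothetical placeholders, not data or code from the studies cited above.

import numpy as np

# Sketch of an overlap analysis of the kind used to demonstrate resonance:
# voxels active BOTH when experiencing a state and when observing another
# individual experiencing it. All values below are random placeholders.
rng = np.random.default_rng(0)
z_experience = rng.normal(size=(64, 64, 32))   # z-map: e.g. being touched
z_observation = rng.normal(size=(64, 64, 32))  # z-map: e.g. seeing someone touched

z_threshold = 3.1  # roughly p < 0.001 one-tailed, an illustrative choice

# Minimum-statistic conjunction: a voxel survives only if the smaller of its
# two z-values exceeds the threshold, i.e. it is active in both conditions.
overlap = np.minimum(z_experience, z_observation) > z_threshold
print(f"{overlap.sum()} voxels active during both experience and observation")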

2.2. Resonance in social interactions

Motor resonance is evident in behaviors like action contagion (contagion of yawning, for example), motor priming [the facilitation of the execution of an action by seeing it done (Edwards et al., 2003)] and motor interference [the hindering effect of observing incompatible actions during the execution of actions (Kilner et al., 2003)]. But does the motor resonance described in a laboratory environment have a significant impact in everyday life? The chameleon effect was introduced to describe the unconscious reproduction of "postures, mannerisms, facial expressions and other behaviors of one's interacting partner" (Chartrand and Bargh, 1999). Subjects unaware of the purpose of the experiment interacted with an experimenter performing one of two target behaviors, rubbing the face or shaking the foot. Analysis of the subjects' behavior showed a significant increase in the tendency to engage in the same action. This effect can easily be experienced in face-to-face interactions, when one crosses one's arms or legs only to see one's partner swiftly adopt the same posture. In addition, this imitation makes the imitating partner more likable, even when one is not aware of being imitated (Chartrand and Bargh, 1999). This mimicry has been described as a source of empathy (Decety and Chaminade, 2003), so that motor resonance offers a parsimonious system to bond automatically with conspecifics. The main function classically attributed to resonance is action understanding. The most convincing argument to date comes from neuropsychology, the study of cognitive impairments following brain lesions. It was recently reported that premotor lesions impair the perception of biological motion presented using point-light displays (Saygin, 2007). Therefore, not only are premotor cortices activated during the perception of action, but their lesion also impairs the perception of biological motion, demonstrating that they are functionally involved in the perception of action. Another function frequently associated with resonance is imitation. Imitation covers a continuum of behaviors ranging from simple, automatic and involuntary action contagion to intentional imitation and emulation (Byrne et al., 2004). It is extensively used as a diagnostic tool in the neuropsychology of apraxia. Research on the neural bases of imitation supports the intervention of motor resonance in several types of imitative behaviors. At the automatic level, observing an action that shares features with an action present in the observer's repertoire primes the production of the same action (Brass et al., 2000). Using fMRI to investigate the neural substrate of this phenomenon, Iacoboni et al. (1999) showed increased activity in the inferior frontal gyrus when subjects' actions were primed by action observation compared to the other conditions of action execution. This region involved in human motor priming is putatively the homologue of macaque monkey area F5, where mirror neurons were first reported. A study of voluntary imitation aimed at disentangling the brain representations of the goal of an action and of the means to achieve this goal demonstrated a bilateral involvement of the inferior parietal lobule in imitation, irrespective of the feature of the action being imitated (Chaminade et al., 2002); this brain region is the possible human homologue of macaque monkey area PF, where mirror neurons were also reported (Rizzolatti and Craighero, 2004).
The same regions were also active in another fMRI experiment, in which subjects naive to guitar playing learned to play by observing an expert (Vogt et al., 2007). Regions in the inferior parietal lobule and ventral premotor cortex were more active when subjects observed actions in order to reproduce them later than during action observation without an instruction to imitate. These results suggest that observed actions were internally simulated in order to parse them into elementary components that could be reproduced later. Altogether, these results support the engagement of structures involved in motor resonance in increasingly complex forms of imitation, from motor priming to action imitation to imitative learning.


3. Resonance applied to humanoid robotics

Motor resonance is a well-studied phenomenon central to the understanding of social behaviors (Decety and Chaminade, 2003). The methods developed to investigate it have been extended to investigate how humans react to anthropomorphic artificial agents such as humanoid robots. The underlying assumption is that the measure of resonance indicates the extent to which an artificial agent is considered a social interactor.

3.1. Behavioral experiments

In an experimental paradigm developed to investigate motor interference, volunteers were asked to raise their fingers in response either to a symbolic cue appearing on a nail or to a movement of the finger of a visually presented hand (Brass et al., 2000). The two cues could be present on the same finger (congruent cues) or on different fingers (incongruent cues). In the latter case, there were two conflicting cues and only one was relevant for the volunteers. It was found that the observation of an incongruent finger movement hindered the response to the symbolic cue – i.e. increased the time needed to respond – but that the reverse effect – i.e. the symbolic cue hindering the response to the finger movement – was very small. In other words, when responding to a symbolic cue, the response is hindered by the observation of an incompatible action and facilitated by a compatible one. In this paradigm, producing an action similar to an observed action is a prepotent response that must be inhibited in order to execute the correct response. To summarize, as a consequence of motor resonance, the perception of another individual's actions influences the execution of actions by the self: observing an action facilitates the execution of the same action (motor priming) and hinders the execution of a different action (motor interference). These behavioral effects can be investigated experimentally to provide objective measures of the magnitude of motor resonance depending on the nature of the agents.

3.1.1. Motor priming with a robotic hand

Motor priming can be conceived of as a form of "automatic imitation" consequent on motor resonance. In other words, observing an action facilitates ("primes") the execution of the same action. In experimental terms, responses that are primed by observation are faster and more accurate. This effect was investigated with two actions, hand opening and hand closing, in response to the observation of a hand opening and closing, the hand being either a realistic human hand or a simple robotic hand having the appearance of an articulated claw with two opposing fingers (Press et al., 2005). Volunteers in the experiment were required to make a prespecified response (to open or to close their right hand) as soon as a stimulus appeared on the screen. Response time was recorded and analyzed as a function of the content of the stimulus, either a human or a robotic hand, in a posture congruent or incongruent with the prespecified movement (e.g. an open or closed hand when the prespecified action is opening the hand). Results showed an increased response time in incongruent compared to congruent conditions, in response to both the human and the robotic hand, suggesting that the motor priming effect was not restricted to human stimuli but generalized to robotic stimuli (Press et al., 2005).
As with the motor interference measure, the size of the effect, taking the form of the time difference between responses to incongruent and congruent stimuli, was larger for human stimuli (30 ms) than for robotic stimuli (15 ms).
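As an illustration, the priming effect is simply the difference between mean response times to incongruent and congruent stimuli, computed separately for each agent. The following minimal Python sketch uses fabricated response times for illustration only; the values are not the data of Press et al. (2005).

import numpy as np

# Fabricated response times (ms), 40 trials per condition, chosen only to
# mimic the pattern reported above (a larger effect for the human hand).
rng = np.random.default_rng(1)
rt = {
    ("human", "congruent"): rng.normal(350, 25, 40),
    ("human", "incongruent"): rng.normal(380, 25, 40),
    ("robot", "congruent"): rng.normal(355, 25, 40),
    ("robot", "incongruent"): rng.normal(370, 25, 40),
}

for agent in ("human", "robot"):
    # Priming effect: mean RT(incongruent) minus mean RT(congruent).
    effect = rt[(agent, "incongruent")].mean() - rt[(agent, "congruent")].mean()
    print(f"{agent}: priming effect = {effect:.0f} ms")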


A follow-up experiment tested whether the effect is better explained by a bottom-up process due to the overall shape of the stimulus or by a top-down process caused by the knowledge that humans, unlike robotic devices, are intentional (Press et al., 2006). Human hands were modified by the addition of a metal-and-wire wrist, and were perceived as less intentional than the original hands. Nevertheless, in the priming experiment no significant differences were found between the priming effect of the original and of the robotized human hand, in favor of the bottom-up hypothesis that the overall hand shape, and not its description as a human or robotic hand, drives the priming effect.

3.1.2. Motor interference with the humanoid robot DB

We investigated the motor resonance elicited by the humanoid robot DB. DB is a 30-degrees-of-freedom (hydraulically actuated), human-sized (1.85 m) anthropomorphic robot with legs, arms, fingerless hands, a head and a jointed torso. These human-like features were central to the experiments described in detail below. This series of experiments (Chaminade et al., 2005; Oztop et al., 2005b) was initiated by Kilner et al.'s (2003) study of motor interference when facing a real human being or an industrial robotic arm. Volunteers in this study produced a vertical or horizontal arm movement while watching another agent in front of them producing a spatially congruent (i.e. vertical when vertical, horizontal when horizontal) or a spatially incongruent (horizontal when vertical and vertical when horizontal) movement. The interference effect, measured by the increase of the variance in the movement, was found when volunteers watched an arm movement spatially incompatible with the one they were producing – e.g. vertical versus horizontal (Fig. 2; Kilner et al., 2003). Interestingly, Kilner et al.'s study did not find any interference effect using an industrial robotic arm moving at a constant velocity, suggesting at first that motor interference was specific to interactions between human agents. The original experimental paradigm was adapted to investigate how humanoid robots interfere with humans (Fig. 2). In these experiments, subjects performed rhythmic arm movements while observing either a human agent or the humanoid robot DB, standing approximately 2 m away from them, performing either congruent or incongruent 0.5 Hz rhythmic arm movements. The robot was programmed to track the end-point Cartesian trajectories of rhythmic top-left to bottom-right and top-right to bottom-left reaching movements involving elbow, shoulder and some torso movements, by commanding the right arm and torso joints of the robot. The experimenter listened to a 1 Hz beep on headphones to keep the beat constant. Subjects were instructed to be in phase with the other agent's movements. During each 30-s trial, the kinematics of the endpoint of the subject's right index finger was recorded with a motion capture device. The variance of the executed movements was used as a measure of the motor interference caused by the observed action. Briefly, each individual movement was segmented from the surrounding movements by identifying endpoints using 3D curvature. Trajectories were projected onto a vertical and a horizontal plane. The signed area of each movement is defined as the deviation from the straight line joining the start and end of each segmented movement. The variance of this signed area within a trial provides an estimate of the amount by which this curvature changes between individual movements, and was divided by the mean absolute signed area of the trial to normalize the data.
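The following Python sketch illustrates this measure as described above: the signed area of each segmented, plane-projected movement is computed with the shoelace formula (which implicitly closes the path with the straight chord joining its endpoints), and the variance of these areas, normalized by their mean absolute value, indexes interference. The trajectories are random placeholders, not recorded data, and the segmentation step is assumed to have been done already.

import numpy as np

def signed_area(trajectory):
    # Signed area enclosed between a movement path and the straight chord
    # joining its start and end points, via the shoelace formula (the cyclic
    # wrap-around term closes the polygon along the chord).
    x, y = trajectory[:, 0], trajectory[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def interference_index(movements):
    # Variance of the signed areas across the movements of one trial,
    # divided by the mean absolute signed area to normalize the data.
    areas = np.array([signed_area(m) for m in movements])
    return areas.var() / np.abs(areas).mean()

# Hypothetical usage: 15 segmented movements per trial, each a (samples, 2)
# array already projected onto one plane (placeholder random walks here).
rng = np.random.default_rng(2)
trial = lambda: [np.cumsum(rng.normal(size=(100, 2)), axis=0) for _ in range(15)]
congruent, incongruent = trial(), trial()

# The ratio r compared across agents in the experiments described below.
r = interference_index(incongruent) / interference_index(congruent)
print(f"interference ratio r = {r:.2f} (r > 1 indicates interference)")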
In a first experiment (Oztop et al., 2005b), trajectories were derived from motion capture of the same movements performed by the human control of the experiment. We found (Fig. 2) that, in contrast to the industrial robotic arm, the humanoid robot executing movements based on motion-captured data caused a significant change in the variance of the movement depending on congruency (Oztop et al., 2005b).



Fig. 2. Top: factorial design showing the four canonical conditions of the motor interference experiment: horizontally, the spatial congruency between the volunteer's and the tested agent's movements; vertically, the human control and the agent being tested, in this case the humanoid robot DB. Bottom: summary of the results from the three experiments described in the text. Bars represent the ratio between the variance for incongruent and congruent movements (error: standard error of the mean). Effect of appearance: results are given for three agents, an industrial robot on the left (Kilner et al., 2003), a humanoid robot with biological motion at the center (Oztop et al., 2005b) and a human on the right (Kilner et al., 2003; Oztop et al., 2005b). Effect of motion: the humanoid robot DB displays artificial (ART) or biological (BIO) motion (Chaminade et al., 2005). Effect of visibility: the humanoid robot displays artificial (ART) or biological (BIO) motion while its body is visible or hidden by a cloth (unpublished observations).

The ratio between the variance in the incongruent and in the congruent conditions increases from the industrial robotic arm (r = 1, i.e. no increase in the incongruent condition, as reported in Kilner et al. (2003)) to the human (r ≈ 2), both in our study and in Kilner et al.'s. The new result was that a humanoid robot triggers an interference effect, albeit weaker than a human (r ≈ 1.5). In a follow-up experiment, we investigated the effect of the movement kinematics on the interference. The humanoid robot moved either with biological motion based, as previously, on recorded trajectories, or with artificial motion implemented as a 1-degree-of-freedom sinusoidal movement of the elbow. We found a significant effect of the factors defining the experimental conditions.

The increase in the incongruent conditions was only significant when the robot movements followed biological motion (Chaminade et al., 2005); a similar trend for artificial motion was not significant. The ratio that could be calculated on the basis of the results was, in the case of biological motion, comparable to the ratio reported in the previous experiment (r ≈ 1.3). Note the importance of having internal controls, in this case human agents, to compare the ratios within groups. A final experiment assessed whether seeing the full body or only body parts of the other agent influences motor resonance (Chaminade, Franklin, Oztop and Cheng, unpublished results). The interference effect could be due merely to the appearance of the agent, which would predict a linear increase of the ratio between the variance for incongruent and congruent movements with anthropomorphism.


Alternatively, it could be influenced by the knowledge we have about the nature of the other agent. Current knowledge on motor resonance, as well as the previous results, including the reproduction of the doubling of variance in our and in Kilner et al.'s (2003) experiments, favors the former hypothesis of a purely bottom-up (i.e. perceptual and automatic) process. To test whether appearance was the main factor, we covered the body and face of both agents, the human and the humanoid robot, with a black cloth leaving just the moving arm visible, and compared the results of the interference paradigm between covered and uncovered agents. Preliminary results indicate that the variance is only increased when the body is visible, implying that motor interference cannot be measured in the absence of body visibility. This suggests that arm movements, from either a human or a humanoid robot, do not by themselves provide sufficient cues about the nature of the agent being interacted with to elicit motor resonance (a bottom-up effect of the stimulus), and that knowledge about the appearance of the agent being interacted with is not sufficient to elicit motor resonance either (a top-down effect of knowledge). These results confirm the conclusions of the motor priming experiment described previously, in favor of a bottom-up effect due to the appearance of the robotic device. Overall, these accumulating results confirm the validity of using motor interference and motor priming as metrics of motor resonance, a possible proxy for social competence, with humanoid robots. First, motor resonance is an important aspect of social cognition, particularly important in the automatic and unconscious perception of other agents. Second, the effects of motor resonance on behavior can be measured objectively, as movement variance or reaction time. Third, existing results strongly suggest that the effect is modulated by the appearance of the agent being tested. And finally, these interference effects have been shown to increase with the realism of the stimulus.

3.2. Neuroimaging experiments

Motor resonance has been extensively studied with neuroimaging in humans, and it is possible to adapt similar approaches to the perception of anthropomorphic robots.

3.2.1. Neuroimaging of grasping movements with a robotic hand

Neuroimaging experiments comparing the observation of humans versus robots have so far yielded mixed results. In a PET study, subjects were presented with grasping actions performed by a human or by a robotic arm. The authors report that the left ventral premotor activity found in previous experiments of action observation responded to human, but not robot, actions (Tai et al., 2004). However, the results of a more recent fMRI study indicate that a robotic arm and hand do elicit motor resonance, in the form of increased activity, in regions activated by the execution of actions, during the observation of object-directed actions compared to simple movements (Gazzola et al., 2007). Furthermore, the trend is towards increased activity in response to robot compared to human stimuli, though this increase is not reported as significant. How can we reconcile these two sets of results? One possibility, that the discrepancy stems from the different techniques used in the two experiments, PET and fMRI, cannot explain the dramatic reversal of the results.
Another possibility derives from differences in the anthropomorphism of the robotic arms and hands used by the two groups, but in the absence of a figure representing the robotic arm used by Tai et al., it is difficult to draw conclusions. It is enough to acknowledge here that, according to both reports, the robotic arms and hands and their motions were not intended to be realistic. The interpretation proposed by Gazzola et al., that the repetition of stimuli reduced activity in these areas, also seems questionable, as both robot and human stimuli underwent the same repetition procedure in Tai et al.'s experiment.


Note that the absence of motor interference when the body is hidden, reported in the previous section, is not relevant to understanding the present data, as the stimuli consisted of object-directed actions in both experiments, in contrast to the meaningless arm movements used in the interference experiments. Another source of discrepancy between the two studies comes from the experimental instructions. Indeed, instructions can have significant effects on the brain structures involved in a given cognitive task. This has been clearly shown in an fMRI study in which subjects interacted with the same random program but were led to believe that their partner varied in anthropomorphism (Krach et al., 2008). Regions involved in mentalizing were more active when subjects believed they were interacting with a human compared to an unintentional, artificial agent. This highlights the importance of the experimental setting, in particular when using artificial agents. While it is the robot embodiment that is manipulated in both the Tai et al. and the Gazzola et al. studies, their instructions do differ. In the first report, "subjects were instructed to carefully observe the human (experimenter) or the robot model", while in the second, "subjects were instructed to watch the movies carefully, paying particular attention to the relationship between the agents and the objects". We will propose in the next part that differences between these instructions, in particular the focus on the goal of the actions, can explain the discrepancies in the results.

3.2.2. Neuroimaging of a humanoid robot's actions

The preliminary results partially presented here derive from an international collaboration (Thierry Chaminade, Sarah-Jayne Blakemore, Chris D. Frith from UCL, UK; Massimiliano Zecca, Silvestro Micera, Paolo Dario, Atsuo Takanishi from "RoboCasa", Japan; Giacomo Rizzolatti, Vittorio Gallese, Maria Alessandra Umiltà from Università di Parma, Italy; manuscript in preparation) aimed at investigating the involvement of motor resonance during the observation of a humanoid robot. Using fMRI, local brain activity was recorded while participants observed video clips of human and humanoid robot facial expressions of emotions, and rated either the emotion ("how much emotion is in the video", explicit task) or the movement ("how much motion is in the video", implicit task). The humanoid robot used for this experiment, WE-4RII, has 59 degrees of freedom (DOFs), 26 of which were specifically used for controlling the facial expressions executed in this experiment, plus 5 DOFs in the shoulders, important for the squaring or shrugging gestures used in the expression of emotions. A subset of the facial Action Units (AUs, described in Ekman and Friesen, 1978) was chosen for a simplified but realistic reproduction of the facial expressions of emotion used in this experiment (Itoh et al., 2004). We were particularly interested in activity in the left ventral premotor cortex, a region involved in motor resonance that was found in the main effect of action observation. There was a significant interaction between the subjects' task (implicit or explicit) and the agent displaying the stimulus (human or robot). Fig. 3 illustrates the source of this effect. The signal increase between the implicit and explicit tasks was mainly driven by the robot.
This increased response to robot stimuli when subjects rated the emotionality of the stimulus supports a modulation by the task of the motor resonance system's response to the humanoid robot. Our interpretation is grounded in the postulate that one function of the motor resonance processes taking place in the inferior frontal cortices is to automatically extract the goal of observed human actions (Rizzolatti and Craighero, 2004). Bottom-up processes would then be automatic when perceiving human stimuli, and would show little to no modulation by the task, as is the case here in response to human stimuli.



Fig. 3. Top: location of the left ventral premotor cluster in which brain activity was analyzed. Bottom: graphs presenting brain activity in response to the human (white) and robot (grey) agents presented on the right, depending on the task (error bars represent the standard error of the mean). Note the larger increase between the implicit and explicit tasks for robot than for human stimuli.

In contrast, robot stimuli would not be processed automatically, because the system has no existing representation of robots' actions – as is the case when subjects rated the movement (implicit task). The large increase of activity in the left inferior frontal cortex during the presentation of robot stimuli when the task is to explicitly judge emotion can be understood as forcing the perceptual system to process robot stimuli as goal-directed, anthropomorphic actions: when the task is to explicitly rate the emotion, the subjects' attention is directed towards the goal of the action, the emotion. The interaction between task and agent would thus derive from an interaction between bottom-up processes, influenced by the nature of the agent (automatic for the human, not for the robot), and top-down processes, depending on the object of attention. If this interpretation is correct, motor resonance towards artificial agents would be enhanced when the agents' actions are explicitly processed as actions, and not as mechanical movements, by the perceiver. This finding offers an interesting solution to the issue raised in the previous section: by asking subjects to pay "particular attention to the relationship between the agents and the objects", Gazzola et al. oriented their subjects' attention to process the robot's movements as transitive goal-directed actions, hence reinforcing a top-down activation of motor resonance. In contrast, Tai et al.'s instruction to "carefully observe" the agent did not impose focusing attention on the goal of the action, hence relying exclusively on bottom-up processes to activate motor resonance, processes that are reduced towards humanoid robots. An important conclusion with regard to the social competence of humanoid robots therefore relates to the way they are perceived, either as mechanical devices or as goal-directed agents, which would be influenced by the expectations of the observer.
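For concreteness, the task-by-agent interaction reported above amounts to a difference of differences over the four condition means, as in the sketch below; the activity values are placeholders standing in for the cluster averages of Fig. 3, not the actual values.

# Illustrative computation of the interaction between task (implicit/explicit)
# and agent (human/robot) from four hypothetical condition means (e.g. percent
# signal change in the left ventral premotor cluster).
activity = {
    ("human", "implicit"): 0.30, ("human", "explicit"): 0.35,
    ("robot", "implicit"): 0.10, ("robot", "explicit"): 0.40,
}
task_effect_robot = activity[("robot", "explicit")] - activity[("robot", "implicit")]
task_effect_human = activity[("human", "explicit")] - activity[("human", "implicit")]
interaction = task_effect_robot - task_effect_human  # > 0: increase driven by the robot
print(f"interaction contrast = {interaction:+.2f}")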

4. Resonance and humanoid robot design

While robots appear to be pertinent tools to investigate motor resonance, the last part of this review focuses on the complementary question: can social cognitive neuroscience, and in the present focus the concept of resonance, be used to enhance the social competence of humanoid robots? While complete achievements are scarce, two lines of investigation are described here: can we build "resonating" robots, and could the "uncanny valley" hypothesis be explained by the concept of resonance?

4.1. Robots resonating with humans

Artificial anthropomorphic agents such as humanoid and android robots are increasingly present in our societies, and everyday use of robots is becoming accessible, as with the example of the Kokoro company's Simroid, a feeling and responsive android patient used as a training tool for dentists, or the robotic companions being introduced for use with children (Tanaka et al., 2007) or elderly people. For these robots to interact optimally with humans, it is important to understand humans' reactions to these artificial agents in order to optimize their design. Studies have addressed the issues of the form (DiSalvo et al., 2002) and functionalities (Breazeal and Scassellati, 2000; Kozima and Yano, 2001) a humanoid robot should have in order to be socially accepted. Both types of approach have mostly relied on subjective assumptions, such as the need for human traits. It was thus proposed that the design of consumer-product humanoids should balance human-ness, facilitating social interaction, with an amount of robot-ness, so that the observer does not develop false expectations about the robot's emotional capabilities, and product-ness, so that the user feels comfortable using the robot; guidelines on how to achieve this balance were provided (DiSalvo et al., 2002). For example, the face should be wider than tall to look less anthropomorphic, but have a nose, a mouth, eyes and eyelids. But anthropomorphism is not limited to the robot's appearance and motion: interactive robots' behaviors also matter for interacting with humans. Robot–human interactions are massively unidirectional at present. As increasingly complex and autonomous humanoid platforms become available, we believe that including human-like motor resonance in their behavior would significantly improve the social competence of their interactions. We recently demonstrated the feasibility of such an approach (Chaminade et al., 2008; Oztop et al., 2005a). Our hypothesis was that synchronized sensory feedback of executed actions could drive Hebbian learning in associative brain networks, forming motor resonance networks from which contagion of behaviors could emerge. This scenario was inspired by the theoretical proposal that motor resonance networks can result from Hebbian learning of associations between visual and motor representations of actions (Keysers and Perrett, 2004), as well as by developmental psychology observations that synchronized action and sensory feedback are available to neonates during motor babbling with their hands (Heyes, 2001). In our system, a simple associative network linked a robotic hand and a simple visual system consisting of a camera. During a training phase, the network was fed simultaneously by the motor commands sent to the robotic hand to perform gestures and by the visual feedback of the robotic hand. During a testing phase, the system was presented with the same or new hand postures, or with hand postures from a human agent.
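The following Python sketch illustrates the principle of this architecture under stated assumptions: during "motor babbling", a weight matrix accumulates Hebbian co-activations of motor commands and the visual features they produce, and is later read out to map an observed posture back onto a motor command. The dimensions, the learning rule and the toy "physics-plus-camera" mapping are our illustrative choices, not the implementation of Chaminade et al. (2008).

import numpy as np

rng = np.random.default_rng(3)
n_motor, n_visual, n_gestures, lr = 10, 50, 200, 0.01

# Fixed mapping, unknown to the learner, from a commanded posture to the
# visual features the camera extracts from the resulting hand image.
P = rng.normal(size=(n_visual, n_motor))

# Training phase: motor commands and their synchronized visual feedback
# arrive together; the associative weights store their co-activations.
W = np.zeros((n_motor, n_visual))
for _ in range(n_gestures):
    motor = rng.random(n_motor)          # command for one babbled gesture
    visual = np.tanh(P @ motor)          # visual feedback of the own hand
    W += lr * np.outer(motor, visual)    # Hebbian co-activation update

# Testing phase: an observed posture (own, new, or another agent's hand) is
# mapped back onto a motor command; executing it amounts to unintended imitation.
observed = np.tanh(P @ rng.random(n_motor))
recalled_motor = W @ observed
print(recalled_motor.round(2))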


Our results indicated that some features of human behaviors, such as the ability to perform new actions (i.e. actions not present in the repertoire formed by training) by imitation, can emerge from this connectionist associative network (Chaminade et al., 2008). Similar results were obtained with a non-anthropomorphic robotic arm (Nadel et al., 2004). As is the case for behaviors derived from motor resonance, this imitation is unconscious, in the sense that the system was not designed in order to imitate, but to reproduce the ontogenic origin of the resonance system, in order to test whether this reproduction is sufficient to bootstrap a key behavior making use of motor resonance, imitation. Building on this "proof of concept", a similar associative learning scheme could be used with humanoid robots to develop a realistic architecture with full-body motor resonance abilities at the core of the robotic platform, akin to providing the robot with a sensorimotor body schema. This architecture could subtend realistic human behaviors. For instance, studies of natural interactions between humans have demonstrated that, as a consequence of motor resonance, interacting agents align their behaviors (Schmidt and Richardson, 2008): two persons walking together in the street synchronize their step frequencies unconsciously (Courtine and Schieppati, 2003), and crowds applaud synchronously when one person starts clapping at the end of a show (Neda et al., 2000). As bidirectionality is a hallmark of social interactions, implementing bidirectional coordination of behaviors in humanoid robots by incorporating a motor resonance framework into the platform may lead to dramatic improvements in their social abilities, though such a conclusion awaits demonstration.
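As a toy illustration of such alignment, two interacting agents can be caricatured as bidirectionally coupled phase oscillators: with sufficient coupling, their movement frequencies lock, as in unconscious step or applause synchronization. This Kuramoto-style sketch is our illustrative choice, not a model taken from the cited studies; the frequencies and coupling strength are arbitrary.

import numpy as np

dt, K = 0.01, 0.8                            # time step (s), coupling strength
w1, w2 = 2 * np.pi * 1.0, 2 * np.pi * 1.15   # natural frequencies (rad/s)
th1, th2 = 0.0, np.pi / 2                    # initial phases

for _ in range(int(60 / dt)):                # one minute of interaction
    d1 = w1 + K * np.sin(th2 - th1)          # each agent is pulled toward
    d2 = w2 + K * np.sin(th1 - th2)          # the other's phase
    th1, th2 = th1 + d1 * dt, th2 + d2 * dt

# With K = 0 the phase difference drifts; with this K it settles to a constant.
print(f"final phase difference: {np.angle(np.exp(1j * (th1 - th2))):.2f} rad")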
4.2. Motor resonance and the uncanny valley

The "Uncanny Valley of Eeriness" hypothesis has served for years as a guideline to avoid realistic anthropomorphism in robotic designs for commercial use. This hypothesis postulates that artificial agents imperfectly attempting to impersonate humans induce a negative emotional response (MacDorman and Ishiguro, 2006; Mori, 1970). As Toshitada Doi, an official representative commenting on the design of Sony's humanoid robot QRIO, explained: "We suggested the idea of an 'eight year-old space life form' to the designer – we did not want to make it too similar to a human. In the background, as well, lay an idea passed down from the man whose work forms the foundation of the Japanese robot industry, Masahiro Mori: 'the valley of eeriness'. If your design is too close to human form, at a certain point it becomes just too ... uncanny. So, while we created QRIO in a human image, we also wanted to give it a little bit of a 'spaceman' feel." Nowadays, though, people like David Hanson, founder of Hanson Robotics, build realistic anthropomorphic robots under the assumption that the uncanny valley is an illusion caused by the poor quality of aesthetic designs (Hanson, 2005), not an insurmountable limit. A speculative explanation of the uncanny valley could be derived from the motor resonance framework described here. Results from the previous section support the hypothesis that the neural network subtending resonance results from Hebbian learning of associations between visual and motor representations of actions (Chaminade et al., 2008; Keysers and Perrett, 2004). The simultaneous experience of "doing an action" and of "perceiving an action" during human development is responsible for establishing resonance networks, which are in turn used to imitate and understand others' actions. The actual processes engaged when we understand a perceived action can be described as a competition between various representations of action. The selection of one representation among many would rely on iteratively reducing an error term (called a prediction error) between accumulating evidence about the perceived action and the predictions derived from competing existing representations of actions in the resonance network.
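The sketch below makes this speculation concrete in a single step, standing in for the iterative accumulation just described: each stored representation is scored by its prediction error against the percept, the best-matching one is selected, and the residual error of the winner is the quantity we speculate tracks eeriness. The templates, the error measure and the "flaw" parameter are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(4)
templates = rng.normal(size=(5, 20))   # stored representations of known actions

# A perceived action: basically human (template 2), but with android-like
# deviations in form and motion controlled by a single "flaw" parameter.
flaw = 0.6
observed = templates[2] + flaw * rng.normal(size=20)

# Competition: mean squared prediction error of each template against the
# percept; the representation with the lowest error is selected.
errors = ((templates - observed) ** 2).mean(axis=1)
winner = int(np.argmin(errors))
residual = errors[winner]              # unexplained input, larger for androids

print(f"selected action {winner}, residual prediction error {residual:.2f}")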


Let us speculate about the balance between the strength of the representation of an action and the prediction error when perceiving the same action performed by a human, a humanoid and an android robot. As the resonance network has been trained by the observation of human actions, the internal representation of the observed action will be selected by the reduction of the prediction error in the case of a human agent. We proposed previously that bottom-up processes are reduced in the case of humanoids, so that the driving inputs to the resonance system are weaker than in the case of humans. Top-down control of resonance to interpret the robot's movements as actions yields a larger error as a consequence of the mechanical (i.e. not human) appearance, as suggested by fMRI: when subjects have to process actions explicitly, in Gazzola et al. as well as in the results described in Section 3.2.2, we observe a trend towards increased activity in response to robot compared to human actions. In the case of contemporary androids, the realistic human appearance triggers bottom-up processes as for real humans. But while the stimulus does select the representation of the correct action, the motor resonance system is unable to match its predictions with the incoming information because of the android's imperfections in form and/or motion. The ensuing large prediction error signal could give rise to the feeling of eeriness: the realistic albeit imperfect impersonation of a human acting does not "fit" existing representations of human actions. We have preliminary fMRI data supporting this interpretation. In an international collaboration to investigate the perception of androids (Ayse Saygin, Thierry Chaminade, Chris Frith and Jon Driver from UCL, UK; Hiroshi Ishiguro and colleagues, Osaka University, Japan; manuscript in preparation), we recorded brain responses to the perception of actions performed by the android Repliee Q2, by the human after whom the android was modeled, and by a humanoid robot obtained by removing the android's cover skin. Repetition priming was used to isolate regions specifically responding to each agent's actions. The main results come from comparing the repetition priming results for the three agents: while the human and the robot activate a limited number of circumscribed regions, the effect for the android is much larger and widespread across the cortex, as expected from an error signal: the inability to minimize the error recruits numerous cognitive processes in order to make sense of the input. In this interpretation of the uncanny valley, as we get closer to human appearance, the perceptual system, tuned by design for recognizing human actions, becomes particularly sensitive to the tiniest flaws in the android's form and motion. Such a view supports David Hanson's argument that the uncanny valley is mainly a result of poor aesthetic design. This is a particularly important line of development in the perspective of the design of companion robots.
By extension, it is likely that a similar "uncanniness" will result from the imperfect social behaviors of anthropomorphic robots. For example, the addition of random micro-behaviors in Hiroshi Ishiguro's latest android has proven beneficial in keeping it from falling into the uncanny valley (Minato et al., 2004). Thus, our proposal to design anthropomorphic social behaviors based on motor resonance may contribute to making robots that interact with humans more acceptable.



5. Conclusions

The fields of humanoid robotics and of social cognition can both benefit from mutual exchanges. Robots provide tools to investigate the parameters modulating both behavioral and neural markers of motor resonance. Using the humanoid robot DB, we have shown that human-like appearance and motion are sufficient to elicit motor resonance. Investigating the brain response to the emotion-expressing robotic upper torso WE-4RII, we have proposed that while resonance is primarily a perceptual (i.e. automatic) process when perceiving humans, it may be more susceptible to the attention of the observer when perceiving robots. This result could be useful for framing users' expectations and increasing robots' acceptability. Finally, we have shown that resonance could inspire epigenetic robotics, in particular the implementation of a body schema. These reciprocal influences between social cognitive neuroscience and humanoid robotics thus promise a better understanding of man–robot interactions that will ultimately lead to increasing the social acceptance of future robotic companions.

References

Arkin, R.C., 1998. Behavior-Based Robotics. MIT Press, Cambridge, MA.
Asada, M., MacDorman, K.F., Ishiguro, H., Kuniyoshi, Y., 2001. Cognitive developmental robotics as a new paradigm for the design of humanoid robots. Robot. Auton. Syst. 37, 185–193.
Atkeson, C.G., Hale, J.G., Pollick, F., Riley, M., Kotosaka, S., Schaal, S., Shibata, T., Tevatia, G., Ude, A., Vijayakumar, S., Kawato, M., 2000. Using humanoid robots to study human behavior. IEEE Intell. Syst. 15, 46–56.
Blakemore, S.J., Decety, J., 2001. From the perception of action to the understanding of intention. Nat. Rev. Neurosci. 2, 561–567.
Blakemore, S.J., Bristow, D., Bird, G., Frith, C., Ward, J., 2005. Somatosensory activations during the observation of touch and a case of vision–touch synaesthesia. Brain 128, 1571–1583.
Brass, M., Bekkering, H., Wohlschlager, A., Prinz, W., 2000. Compatibility between observed and executed finger movements: comparing symbolic, spatial, and imitative cues. Brain Cogn. 44, 124–143.
Breazeal, C., Scassellati, B., 2000. Infant-like social interactions between a robot and a human caretaker. Adapt. Behav. 8, 49–74.
Brooks, R.A., 1997. The Cog project. J. Robot. Soc. Jpn. 15, 968–970.
Byrne, R.W., Barnard, P.J., Davidson, I., Janik, V.M., McGrew, W.C., Miklosi, A., Wiessner, P., 2004. Understanding culture across species. Trends Cogn. Sci. 8, 341–346.
Chaminade, T., 2006. Acquiring and probing self–other equivalencies – using artificial agents to study social cognition. Paper presented at: 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Reading, UK.
Chaminade, T., Decety, J., 2001. A common framework for perception and action: neuroimaging evidence. Behav. Brain Sci. 24, 879–882.
Chaminade, T., Meary, D., Orliaguet, J.P., Decety, J., 2001. Is perceptual anticipation a motor simulation? A PET study. NeuroReport 12, 3669–3674.
Chaminade, T., Meltzoff, A.N., Decety, J., 2002. Does the end justify the means? A PET exploration of the mechanisms involved in human imitation. Neuroimage 15, 318–328.
Chaminade, T., Franklin, D., Oztop, E., Cheng, G., 2005. Motor interference between humans and humanoid robots: effect of biological and artificial motion. Paper presented at: International Conference on Development and Learning, Osaka, Japan.
Chaminade, T., Hodgins, J., Kawato, M., 2007. Anthropomorphism influences perception of computer-animated characters' actions. Soc. Cogn. Affect. Neurosci. 2, 206–216.
Chaminade, T., Oztop, E., Cheng, G., Kawato, M., 2008. From self-observation to imitation: visuomotor association on a robotic hand. Brain Res. Bull. 75, 775–784.
Chartrand, T.L., Bargh, J.A., 1999. The chameleon effect: the perception–behavior link and social interaction. J. Pers. Soc. Psychol. 76, 893–910.
Cheng, G., Hyon, S.-H., Morimoto, J., Ude, A., Hale, J., Colvin, G., Scroggin, W., Jacobsen, S., 2007. CB: a humanoid research platform for exploring neuroscience. Adv. Robot. 21, 1097–1114.
Courtine, G., Schieppati, M., 2003. Human walking along a curved path. I. Body trajectory, segment orientation and the effect of vision. Eur. J. Neurosci. 18, 177–190.
Decety, J., Chaminade, T., 2003. When the self represents the other: a new cognitive neuroscience view on psychological identification. Conscious. Cogn. 12, 577–596.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., Rizzolatti, G., 1992. Understanding motor events: a neurophysiological study. Exp. Brain Res. 91, 176–180.
DiSalvo, C., Gemperle, F., Forlizzi, J., Kiesler, S., 2002. All robots are not created equal: the design and perception of humanoid robot heads. Paper presented at: 4th Conference on Designing Interactive Systems, London, UK.

Edwards, M.G., Humphreys, G.W., Castiello, U., 2003. Motor facilitation following action observation: a behavioural study in prehensile action. Brain Cogn. 53, 495–502.
Ekman, P., Friesen, W.V., 1978. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA.
Gallese, V., Fadiga, L., Fogassi, L., Rizzolatti, G., 1996. Action recognition in the premotor cortex. Brain 119 (Pt 2), 593–609.
Gallese, V., Keysers, C., Rizzolatti, G., 2004. A unifying view of the basis of social cognition. Trends Cogn. Sci. 8, 396–403.
Gazzola, V., Rizzolatti, G., Wicker, B., Keysers, C., 2007. The anthropomorphic brain: the mirror neuron system responds to human and robotic actions. Neuroimage 35, 1674–1684.
Hanson, D., 2005. Expanding the aesthetic possibilities for humanlike robots. Paper presented at: IEEE Humanoid Robotics Conference, special session on the Uncanny Valley, Tsukuba, Japan.
Harnad, S., 1989. Minds, machines and Searle. J. Exp. Theor. Artif. Intell. 1, 5–25.
Heyes, C., 2001. Causes and consequences of imitation. Trends Cogn. Sci. 5, 253–261.
Hirai, K., Hirose, M., Haikawa, Y., Takenaka, T., 1998. The development of Honda humanoid robot. Paper presented at: IEEE International Conference on Robotics and Automation, Leuven, Belgium.
Hirukawa, H., Kanehiro, F., Kaneko, K., Kajita, S., Fujiwara, K., Kawai, Y., Tomita, F., Hirai, S., Tanie, K., Isozumi, T., et al., 2004. Humanoid robotics platforms developed in HRP. Robot. Auton. Syst. 48, 165–175.
Hollerbach, J.M., Jacobsen, S.C., 1996. Anthropomorphic robots and human interactions. Paper presented at: First International Symposium on Humanoid Robots, Waseda, Japan.
Iacoboni, M., Woods, R.P., Brass, M., Bekkering, H., Mazziotta, J.C., Rizzolatti, G., 1999. Cortical mechanisms of human imitation. Science 286, 2526–2528.
Itoh, K., Miwa, H., Matsumoto, M., Zecca, M., Takanobu, H., Roccella, S., Carrozza, M.C., Dario, P., Takanishi, A., 2004. Various emotional expressions with emotion expression humanoid robot WE-4RII. Paper presented at: First IEEE Technical Exhibition Based Conference on Robotics and Automation (TExCRA '04), Tokyo, Japan.
Kawato, M., 2008. From 'understanding the brain by creating the brain' towards manipulative neuroscience. Philos. Trans. Roy. Soc. Lond. B Biol. Sci. 363, 2201–2214.
Keysers, C., Perrett, D.I., 2004. Demystifying social cognition: a Hebbian perspective. Trends Cogn. Sci. 8, 501–507.
Kilner, J.M., Paulignan, Y., Blakemore, S.J., 2003. An interference effect of observed biological movement on action. Curr. Biol. 13, 522–525.
Kozima, H., Yano, H., 2001. A robot that learns to communicate with human caregivers. Paper presented at: First International Workshop on Epigenetic Robotics, Lund, Sweden.
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., Kircher, T., 2008. Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3, e2597.
Liberman, A.M., Mattingly, I.G., 1985. The motor theory of speech perception revised. Cognition 21, 1–36.
MacDorman, K.F., Ishiguro, H., 2006. The uncanny advantage of using androids in cognitive and social science research. Interact. Stud. 7.
Minato, T., Shimada, M., Ishiguro, H., Itakura, S., 2004. Development of an android robot for studying human–robot interaction. In: Innovations in Applied Artificial Intelligence. Springer, Berlin, pp. 424–434.
Mori, M., 1970. The valley of eeriness (in Japanese). Energy 7, 33–35.
Nadel, J., Revel, A., Andry, P., Gaussier, P., 2004. Toward communication: first imitations in infants, low-functioning children with autism and robots. Interact. Stud. 5, 45–74.
Nagasaka, K., Kuroki, Y., Suzuki, S., Itoh, Y., Yamaguchi, J., 2004. Integrated motion control for walking, jumping and running on a small bipedal entertainment robot. Paper presented at: IEEE International Conference on Robotics and Automation, New Orleans, LA.
Neda, Z., Ravasz, E., Brechet, Y., Vicsek, T., Barabasi, A.L., 2000. The sound of many hands clapping. Nature 403, 849–850.
Oztop, E., Chaminade, T., Cheng, G., Kawato, M., 2005a. Imitation bootstrapping: experiments on a robotic hand. Paper presented at: 5th IEEE-RAS International Conference on Humanoid Robots, Osaka, Japan.
Oztop, E., Franklin, D., Chaminade, T., Cheng, G., 2005b. Human–humanoid interaction: is a humanoid robot perceived as a human? Int. J. Humanoid Robot. 2, 537–559.
Press, C., Bird, G., Flach, R., Heyes, C., 2005. Robotic movement elicits automatic imitation. Brain Res. Cogn. Brain Res. 25, 632–640.
Press, C., Gillmeister, H., Heyes, C., 2006. Bottom-up, not top-down, modulation of imitation by human and robotic models. Eur. J. Neurosci. 24, 2415–2419.
Rizzolatti, G., Craighero, L., 2004. The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192.
Rizzolatti, G., Fogassi, L., Gallese, V., 2001. Neurophysiological mechanisms underlying the understanding and imitation of action. Nat. Rev. Neurosci. 2, 661–670.
Rothi, L.J.G., Ochipa, C., Heilman, K.M., 1991. A cognitive neuropsychological model of limb praxis. Cogn. Neuropsychol. 8, 443–458.
Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N., Fujimura, K., 2002. The intelligent ASIMO: system overview and integration. Paper presented at: IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland.
Sandini, G., Metta, G., Vernon, D., 2004. RobotCub: an open framework for research in embodied cognition. Paper presented at: IEEE International Conference on Humanoid Robots, Los Angeles, CA.

Saygin, A.P., 2007. Superior temporal and premotor brain areas necessary for biological motion perception. Brain 130, 2452–2461.
Schmidt, R.C., Richardson, M.J., 2008. Dynamics of interpersonal coordination. In: Fuchs, A., Jirsa, V. (Eds.), Coordination: Neural, Behavioural and Social Dynamics. Springer-Verlag, Heidelberg.
Singer, T., Seymour, B., O'Doherty, J., Kaube, H., Dolan, R.J., Frith, C.D., 2004. Empathy for pain involves the affective but not sensory components of pain. Science 303, 1157–1162.
Tai, Y.F., Scherfler, C., Brooks, D.J., Sawamoto, N., Castiello, U., 2004. The human premotor cortex is 'mirror' only for biological actions. Curr. Biol. 14, 117–120.


Tanaka, F., Cicourel, A., Movellan, J.R., 2007. Socialization between toddlers and robots at an early childhood education center. Proc. Natl. Acad. Sci. 104, 17954–17958.
Turing, A., 1950. Computing machinery and intelligence. Mind 59, 433–460.
Vogt, S., Buccino, G., Wohlschlager, A.M., Canessa, N., Shah, N.J., Zilles, K., Eickhoff, S.B., Freund, H.J., Rizzolatti, G., Fink, G.R., 2007. Prefrontal involvement in imitation learning of hand actions: effects of practice and expertise. Neuroimage 37, 1371–1383.
Wicker, B., Keysers, C., Plailly, J., Royet, J.P., Gallese, V., Rizzolatti, G., 2003. Both of us disgusted in my insula: the common neural basis of seeing and feeling disgust. Neuron 40, 655–664.