Robots can be perceived as goal-oriented agents

Alessandra Sciutti 1, Ambra Bisio 1,2, Francesco Nori 1, Giorgio Metta 1,3, Luciano Fadiga 1,4 & Giulio Sandini 1

1 Robotics, Brain and Cognitive Sciences Dept., Istituto Italiano di Tecnologia, Genova
2 Department of Experimental Medicine, Section of Human Physiology, University of Genova
3 Center for Robotics and Neural Systems, Plymouth University, Plymouth
4 Section of Human Physiology, University of Ferrara, Ferrara
Understanding the goals of others is fundamental for any kind of interpersonal interaction and collaboration. From a neurocognitive perspective, intention understanding has been proposed to depend on the involvement of the observer's motor system in the prediction of the observed actions (Nyström et al. 2011; Rizzolatti & Sinigaglia 2010; Southgate et al. 2009). An open question is whether a similar understanding of the goal, mediated by motor resonance, can occur not only between humans but also with humanoid robots. In this study we investigated whether goal-oriented robotic actions can induce motor resonance by measuring the appearance of anticipatory gaze shifts to the goal during action observation. Our results indicate a similar implicit processing of human and robotic actions and suggest the use of anticipatory gaze behaviour as a tool for the evaluation of human-robot interaction.

Keywords: Humanoid robot; motor resonance; anticipation; proactive gaze; action understanding
1. Introduction

The ability to understand others' actions and to attribute mental states and intentionality to them is crucial for the development of a theory of mind and of the ability to interact and collaborate. Indeed, the comprehension of the goal-directed nature of the actions performed by other people is something that we learn very early in infancy, already during our first year of life (e.g. Falck-Ytter et al. 2006; Kanakogi & Itakura 2011; Woodward 1998).
One of the main difficulties in investigating goal understanding, especially in preverbal children, is to tap into this mechanism without relying on high-level cognitive evaluations or complex linguistic skills. A backdoor to action comprehension has traditionally been the measurement of gaze behaviour in habituation paradigms and during action observation. The analysis of gaze has been used to assess action planning (e.g. Johansson et al. 2001) and action understanding both in adults (e.g. Ambrosini et al. 2011a; Flanagan & Johansson 2003) and in preverbal children (e.g. Falck-Ytter et al. 2006; Gredebäck et al. 2009a; Kanakogi & Itakura 2011; Rosander & von Hofsten 2011; Woodward 1998), sometimes even making it possible to differentiate between implicit and explicit levels of social cognition (e.g. Senju et al. 2009). Gaze can therefore represent an important tool to communicate implicit or covert understanding of other agents' goals.

One specific aspect of gaze behaviour is its anticipatory nature. When humans observe others performing goal-directed actions, they shift their gaze to the target of the movement before it is completed (Flanagan & Johansson 2003). Interestingly, Flanagan and Johansson (2003) showed in a manipulation task that when the hands moving an object could not be seen, observers' gaze no longer anticipated object motion but started tracking it, becoming reactive rather than predictive. It has therefore been suggested that anticipatory gaze is linked to a motor resonance mechanism: the anticipatory shift of the gaze toward the goal of actions performed by someone else would be part of the action plan covertly activated by the observation. In fact, the anticipatory looking measured during the observation of others' actions reflects the same visuo-manual coordination exhibited during our own action execution, where gaze is directed anticipatorily to the relevant landmarks of the act, e.g. obstacles to be avoided, targets to be reached, or places where objects are going to be grasped or released (Johansson et al. 2001).

Much evidence confirms that the perception of others' actions depends on motor system activation (Stadler et al. 2012; Stadler et al. 2011; Urgesi et al. 2010) and is based on a matching between the observed act and the observer's motor representation of the same action: the observer would implement covert action plans corresponding to the action executed by the other agent (the direct matching hypothesis; Rizzolatti & Craighero 2004; Rizzolatti et al. 2001). This link between perception and action, known as motor resonance, has been indicated as the behavioural expression of the mirror neuron system (MNS; Rizzolatti et al. 1999), which is activated both when individuals perform a given motor act and when they observe others performing the same motor act (in monkeys: Gallese et al. 1996; Rizzolatti et al. 1996; in humans: see Fabbri-Destro & Rizzolatti 2008; Rizzolatti & Craighero 2004 for reviews). The activation of a set of neurons both during execution and observation of an action would provide a common
description of one's own and others' behaviours, thus allowing for the anticipation of others' goals.

The hypothesis that anticipatory gaze depends on motor resonance has been supported by several studies on young infants (Falck-Ytter et al. 2006; Gredebäck & Kochukhova 2010a; Gredebäck & Melinder 2010a; Kanakogi & Itakura 2011). For example, Falck-Ytter et al. (2006) tested the assumption that, if anticipation is the result of a direct match between the observed action and the observer's motor repertoire, anticipation of a given action can occur only after the infant has mastered that specific action. The authors confirmed this hypothesis by demonstrating that only 12-month-old children, who were able to perform a transport action, exhibited anticipatory gaze shifts when observing others transporting an object, while younger children, who were unable to grasp and transport, tracked the actor's movement. The fact that the tendency to anticipate others' goals with the gaze emerges together with the development of the infant's own motor ability to execute the same action has been proven also for the observation of more structured actions, such as solving a puzzle (Gredebäck & Kochukhova 2010b), feeding another person (Gredebäck & Melinder 2010b) or moving objects into multiple target positions (Gredebäck et al. 2009b). Recent research has further lowered the minimum age at which infants anticipate the action goals of others with their gaze: infants as young as 6 months of age anticipatorily shift their gaze to the target of a reach-to-grasp, an action that they have already mastered (Kanakogi & Itakura 2011). In addition, children who perform an action more efficiently are also more proficient at anticipating the goal of similar acts executed by others (Gredebäck & Kochukhova 2010a; Kanakogi & Itakura 2011). These findings support the theory that goal anticipation is facilitated by a matching process that maps the observed action onto one's own motor representation of that action.

A recent Transcranial Magnetic Stimulation (TMS) study provided further support for a motor-based origin of anticipatory gaze shifts during action observation. Elsner et al. (2012) used TMS to disrupt activity in the "hand region" of the primary motor cortex and found that such selective disruption caused a delay in predictive gaze shifts relative to trials without TMS stimulation. This finding indicates a functional connection between the activation of the observer's motor system and anticipatory gazing, as stimulation of the motor cortex was shown to directly impact the ability to predict others' goals with the gaze.

This relationship between anticipatory gaze shifts and motor resonance implies that the occurrence of goal anticipation with the eyes depends on a match between the observed behaviour and the observer's motor repertoire. It can be interpreted as an indication that the observer has implicitly (or motorically) recognized the actor as an agent who shares the same actions and the same goals.
This recognition becomes particularly relevant in the domain of human-robot interaction. Monitoring anticipatory gaze behaviour during the observation of a robot could become a measure of whether the motor resonance mechanism at the basis of the interpretation of actions extends to robots as well. This may be useful in building robots whose behaviour is easily predictable by the user and that elicit natural confidence in their acting.

Whether, and under which conditions, motor resonance can be evoked also by the observation of non-human agents is still an open issue (see Chaminade & Cheng 2009; Sciutti et al. 2012 for reviews). Although the first neuroimaging studies (Perani et al. 2001; Tai et al. 2004) excluded activation of the mirror neuron system, and hence motor resonance, when the action was performed by a virtual or non-biological agent, subsequent studies (Chaminade et al. 2010; Gazzola et al. 2007; Oberman et al. 2007a; Shimada 2010) have indicated that robotic agents evoke MNS activity similar to that evoked by humans (or even stronger; Cross et al. 2011). Also in the context of behavioural experiments, a few studies found either an absence of, or a quantitative reduction in, resonance during the observation of robotic agents (Kilner et al. 2003; Oztop et al. 2005; Press et al. 2005), while other researchers observed conditions where the motor resonance effect was the same for human and non-human agents (Liepelt et al. 2010; Press et al. 2007). In summary, robotic agents can, to a certain degree, evoke motor resonance, as a function of their shape, the context in which they are immersed, and the way they move. While at the neurophysiological level MNS activation seems to be present even for markedly non-biological stimuli (i.e. when the non-biological agent moves with non-biological kinematics; e.g. Cross et al. 2011; Gazzola et al. 2007), some behavioural effects require a higher degree of human resemblance also in terms of robot motion (Chaminade & Cheng 2009).

In the context of assessing humans' implicit perception of the robot, analyzing the occurrence of anticipatory gaze shifts to the robotic action goal could tell us something more: not only whether a resonance mechanism can be activated by an artificial visual model, but also whether it can be exploited by the observer to predict the goal of a non-human agent, as happens during human action observation. Previous studies failed to find anticipatory gaze shifts toward the spatial destination of an object moving by itself (Falck-Ytter et al. 2006; Flanagan & Johansson 2003), even when the object movement followed biological rules and the target position was unambiguous. Adult observers exhibited anticipatory gaze behaviour in the presence of a non-biological agent when the latter could be interpreted as a tool they could use (a mechanical claw), while anticipation was not exhibited by young infants (4 to 10 months old), who were not as familiar with that tool (Kanakogi & Itakura 2011).
In this work we evaluated whether the observation of a robotic actor can evoke anticipatory gaze shifts in the observer. In particular, we replicated an "object transport" task similar to the one described by Falck-Ytter et al. (2006), replacing the human action demonstrator with a humanoid robot (the iCub robotic platform; Metta et al. 2010). We considered two alternative hypotheses. Either the human subject would implicitly recognize the robot as an agent and motorically match its action with his or her own, thus anticipating with the gaze the goal of the robot's action; or, alternatively, the robot would be perceived just as a very complex moving object rather than as a goal-oriented agent. In the latter case we would expect a significant reduction of the predictive gaze shift to the target and a stronger tendency to track the moving object. The results would also provide hints to robot designers for improving the robot's overall behaviour.
2. Methods

2.1 Subjects

Ten right-handed subjects (2 women and 8 men; M = 31 years, SD = 13) took part in the experiment. All subjects were healthy, with normal or corrected-to-normal vision, and did not present any neurological, muscular or cognitive disorder. The participants gave informed consent prior to testing. All experiments were conducted in accordance with legal requirements and international norms (Declaration of Helsinki, 1964).

2.2 Action demonstrators

2.2.1 The human demonstrator

In the "human" condition, a human demonstrator repeatedly presented a grasp-and-transport action. The person who acted as model was a woman and was the same in all the experiments. She had previously been trained to make movements at a steady, slow pace, and her movement trajectory and timing were recorded prior to the experiment to program the robot's motion (see below for details).

2.2.2 The humanoid robot

In the main experimental condition we used the humanoid robot iCub as action demonstrator and made it repeatedly perform grasp, transport and release actions in front of the observer. iCub is a humanoid robot developed as part of the EU project RobotCub. It is approximately 1 m tall, with the appearance of a 3.5-year-old child (Metta et al. 2010; Sandini et al. 2007).
Its hands have nine degrees of freedom each, with five fingers, three of which are independently driven. All motors are placed remotely in the forearm and the hands are completely tendon driven. As we wanted to use the robot's right arm and hand to produce grasping, release and transport movements, we commanded only the right arm and the torso joints to generate the movement. To generate the transport movement, the robot had to track the end-point Cartesian trajectories captured from human motion. The grasp and release actions were instead realized with a fast, position-controlled, stereotyped closing and opening of the fingers. To produce the robot's hand trajectories, we recorded the human transport movement at 250 Hz by means of an infrared marker positioned on the hand of a human actor (Optotrak Certus System, NDI). We then downsampled the data by a factor of 5 and roto-translated them to match the coordinate system of the iCub hand, so that the points of the trajectory belonged to the workspace of the robot. The end-effector coordinates were then transformed into torso and arm joint angles by solving the inverse kinematics through nonlinear constrained optimization (Pattacini et al. 2010). A velocity-based controller was used to track the transformed trajectories. Tracking was satisfactory for our purposes, as the robot movements looked human-like to a human observer (Figure 1C).
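As a minimal illustration of the trajectory retargeting step just described, the following Python sketch downsamples a captured trajectory by a factor of 5 and applies a roto-translation into the robot frame. The rotation matrix R and translation vector t are hypothetical placeholders; in the actual setup they would come from the calibration between the Optotrak and iCub reference frames.

```python
import numpy as np

def retarget_trajectory(raw_xyz, R, t, downsample=5):
    """Map a hand trajectory captured at 250 Hz into the robot frame.

    raw_xyz    : (N, 3) array of marker positions (Optotrak coordinates)
    R, t       : (3, 3) rotation and (3,) translation of the assumed
                 Optotrak-to-iCub roto-translation
    downsample : keep one sample out of `downsample` (250 Hz -> 50 Hz)
    """
    sampled = raw_xyz[::downsample]   # downsample by a factor of 5
    return sampled @ R.T + t          # roto-translate every point

# Example with placeholder calibration values (not the actual ones):
R = np.eye(3)                          # hypothetical rotation
t = np.array([-0.3, 0.0, 0.1])         # hypothetical translation (metres)
trajectory = retarget_trajectory(np.random.rand(250, 3), R, t)
```

The resulting Cartesian points would then be passed to the inverse-kinematics solver and the velocity-based controller (Pattacini et al. 2010).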
Figure 1. Experimental setup. (A) Pictures of the setup: subjects wearing an Eyelink II helmet sat in front of the action plane, with their chin positioned on a chin rest. (B) Schema of the subjective view of the participant with, superimposed, the rectangular zones representing the Areas of Interest (AOI) used for the analysis and (in blue) sample trajectories of the robotic hand. The superimposed graphical information was not visible to the subject.
2.3 Experimental paradigm

Subjects sat comfortably on a chair at about 75 cm from the action plane, with their chin positioned on a chin rest. They wore an Eyelink II helmet, equipped with a scene camera located at the centre of the forehead. The scene
was a table top on which an object (a little plush octopus) and a vase (the target) were placed at a distance of about 40 cm from each other. The work area was delimited by two vertical bars, which also held the four infrared markers needed by the Eyelink system to compensate for head movements. At the beginning of each trial the object was placed in a predefined starting position on one side of the scene. Then the demonstrator, either a human actor or the robot iCub, grasped the object with the right hand, transported it, and released it into the target (Figure 2).
Subject’s view
Scene view
Robot experiment
Figure 2. Experimental procedure. Sample transporting action in the robot (left) or human (right) condition, from the external (top row) or the subject's (bottom row) point of view. The rectangles representing the Areas of Interest and the cross indicating gaze fixation were not visible during the experiment.
During the whole movement the eyes of the demonstrator were hidden from view, to avoid providing any additional cue about the action goal besides object motion. A screen behind the demonstrator provided a uniform background. To replicate the setup described in Falck-Ytter et al. (2006) and maximize the probability of obtaining gaze proactivity, as suggested by the results of Eshuis et al. (2009), we attached a little toy to the vase. The toy produced a sound at the arrival of the object. The Eyelink system recorded gaze binocularly at 250 Hz and projected the gaze position in real time onto the video recorded by the scene camera. The camera was arranged so as to oversee the working plane. An alignment procedure ensured that the gaze and camera images were correctly superimposed. Before each recording session a standard 9-point calibration procedure was performed on the movement plane. The calibration was then validated and repeated if the average error was larger than 0.8 visual degrees (the average was computed for each eye separately, and calibration was repeated for both eyes even if only one exceeded the threshold). In addition, a correction procedure for depth was applied to map
eye motion onto the camera scene also in the presence of fixations outside the calibration plane. All these procedures were performed through the SceneLink application (SR Research) provided with the Eyelink system.

The experiment consisted of two sessions. During the first session the demonstrator was the iCub robot, which repeated the grasp, transport and release action 8 times, from the same starting position to the vase, with biological motion (see Section "The humanoid robot"). During the second session the robotic demonstrator was replaced by the experimenter, who repeated the same task. The robotic demonstrator was always presented first, to avoid subjects being induced to perceive the robot as human because of pre-exposure to the human actions. The robotic movement was recorded on the video while, simultaneously, the coordinates of the end effector were saved to a file.

2.4 Data analysis

The analysis was mainly based on videos recording both the actors' movements and the observers' gaze position, automatically overlaid by the Eyelink software. Each video (720 × 480 pixels, 29.97 fps) was manually segmented into parts (one for each transport movement) in Adobe Premiere 6.5. The Eyelink Data Viewer software was used to analyze gaze movements in detail and to define Areas of Interest (AOI), which were afterwards overlaid on the video. We defined three AOIs (90 × 126 pixels each, corresponding to about 9.3 × 13.0 visual degrees): one covering the object's starting position (hand start), one covering the end position of the hand before it released the object (hand stop), and one covering the vase (goal area).

Gaze was measured during each movement of the object to the target. Data were included in the analysis if subjects fixated the goal area at least once up to 1000 ms after the object disappeared into the vase. Two subjects never looked at the goal area, either keeping an almost stable fixation during the whole experiment or continuously tracking the demonstrator's hand without ever looking at the object. Their data were therefore discarded from all subsequent analyses, which were conducted on a total of 8 subjects.

To compute gaze anticipation or delay, the timing of the subjects' fixation shift to the goal area was compared to the arrival time of the object. If gaze arrived at the goal area before the object, the trial was considered predictive (positive values). To evaluate the amount of anticipation, for each trial we computed the proportion of anticipation, i.e. the difference between the arrival times of the object and of gaze on the target, divided by movement duration. Movement duration was computed as the time between the object's exit from the hand start area and its entrance into the goal area. While in the human condition the arrival at the hand stop area was almost simultaneous with the entrance of the object into the vase (goal area), the opening of
the robotic hand required a longer time. To compensate for this difference in the timing of the release action between the human and the robot hand, in the robotic condition the arrival time of the object into the goal area was replaced by the time when the hand stopped over the vase (hand stop). It should be noted that this criterion tends to reduce the estimate of anticipation in the robotic condition, as gaze needs to arrive at the goal area before the hand stops in order to be counted as anticipatory.

To statistically compare the degree of anticipation during human and robot observation, the proportion of anticipation and the percentage of anticipatory trials in the two conditions were subjected to paired-sample t-tests. Moreover, one-sample t-tests on the amount of anticipation were performed to determine whether gaze behaviour was significantly different from tracking (i.e. zero anticipation, corresponding to a simultaneous arrival of gaze and object at the goal). To evaluate the relation between the anticipatory behaviour exhibited by subjects in the human and the robot conditions, a regression analysis was performed on both the percentage of anticipatory trials and the amount of anticipation in the two conditions. In addition, the percentage of variation in anticipation between conditions was computed for each subject as 100 × (1 − proportion_anticipation_robot / proportion_anticipation_human). Lastly, to assess whether the amount of anticipation changed over multiple presentations of the transport action because of a habituation effect, a linear fit of the proportion of anticipation as a function of repetition number was performed for each condition and subject.
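For concreteness, the following Python sketch shows how the two measures defined above could be computed. All timestamps and values are hypothetical; only the formulas come from the text.

```python
def proportion_of_anticipation(t_obj_leaves_start, t_obj_at_goal, t_gaze_at_goal):
    """Anticipation as a proportion of movement duration (one trial).

    t_obj_leaves_start : time the object exits the 'hand start' AOI
    t_obj_at_goal      : time the object enters the 'goal area' (in the
                         robotic condition this is replaced by the time
                         the hand stops over the vase, as described above)
    t_gaze_at_goal     : time of the subject's fixation shift to the goal area

    Positive values mean gaze reached the goal before the object
    (a predictive trial).
    """
    movement_duration = t_obj_at_goal - t_obj_leaves_start
    return (t_obj_at_goal - t_gaze_at_goal) / movement_duration

def percentage_of_variation(prop_ant_robot, prop_ant_human):
    """Per-subject variation in anticipation between conditions; positive
    values indicate a decrease in anticipation with the robot as actor."""
    return 100 * (1 - prop_ant_robot / prop_ant_human)

# Hypothetical trial: object takes 2.8 s, gaze arrives 0.7 s early -> 0.25
prop = proportion_of_anticipation(0.0, 2.8, 2.1)
is_predictive = prop > 0
```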
3. Results
The first aim of this study was to verify which kind of gaze behaviour is associated with the observation of goal-directed actions performed by a humanoid robot. The robotic movement lasted around 3 s (M = 2.8 s, SD = 0.09 s). On average subjects showed an anticipatory behaviour, with a mean of 70% anticipatory trials (M = 70%, SD = 33%) and with gaze anticipating the actor's hand by almost 30% of trial duration on average (M = 27%, SD = 25%; see Figure 3A). If the robot had been perceived as an inanimate device, we would have expected a tracking behaviour, with the eyes of the observer following the movement of the hand (Flanagan & Johansson 2003). This would have translated into negative or near-zero values of the measured anticipation (see Data analysis in the Methods section). Instead, average anticipation (normalized by movement duration) was significantly greater than 0 (one-tailed one-sample t-test, t(7) = 3.05, p = 0.009), indicating the presence of proactive gaze behaviour, at least in the majority of subjects.
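The test against tracking behaviour can be reproduced as follows; the per-subject proportions below are illustrative placeholders, not the actual data, and the snippet assumes SciPy 1.6 or later for the alternative argument.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject proportions of anticipation (N = 8):
prop_ant = np.array([0.55, 0.40, 0.35, 0.30, 0.20, 0.15, -0.05, 0.25])

# One-tailed one-sample t-test against 0 (0 = tracking behaviour):
t_stat, p_value = stats.ttest_1samp(prop_ant, 0.0, alternative='greater')
print(f"t({len(prop_ant) - 1}) = {t_stat:.2f}, one-tailed p = {p_value:.3f}")
```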
Figure 3. Experimental results. Gaze behaviour during the observation of robotic (A) and human (B) actions. The percentage of anticipatory trials (trials in which gaze arrived at the target before the actor's hand) is plotted against anticipation measured as a proportion of movement duration. Different small symbols represent different subjects. The larger sphere indicates the population average. Error bars correspond to standard errors. The dashed line indicates zero anticipation, approximately corresponding to a tracking gaze behaviour.
Some individuals, however, showed a tracking behaviour. To understand whether the absence of anticipation was due to the presence of the robot as actor, we analyzed the proportion of anticipation and the percentage of anticipatory trials during the observation of a human actor executing actions similar to the ones previously performed by the robotic demonstrator. In Figure 3B the percentage of anticipatory trials and the amount of anticipation (in proportion to movement duration) are shown for each subject in this "human" condition. The pattern was similar to the one measured for the observation of robotic actions: those few subjects who assumed a tracking behaviour during robot observation did the same during human observation. This suggests that the disappearance of anticipation was not due to the presence of a robotic artefact as demonstrator, but rather to other factors that modulate gazing behaviour also during human action observation. This finding replicates the results of Gesierich et al. (2008), who showed that about half of their sample tended to exhibit a tracking behaviour during action observation (moving virtual blocks on a computer screen; see Discussion). To assess whether the presence of a robotic demonstrator caused a quantitative difference in gazing strategy with respect to a human actor, we compared the percentage of anticipatory trials in the "human" and the "robot" conditions. Though a tendency toward increased prediction appeared in the human condition, no significant difference was present (paired-sample t-test, t(7) = 1.43, p = 0.194).
Analyzing in more detail the relationship between anticipation in the human and the robot conditions, we found a strong correlation between subjects' behaviours in the two tasks. The linear regression of the amount of anticipation in the human versus the robot condition was highly significant (p = 0.003), with a slope not significantly different from 1 (0.89, 95% confidence interval: [0.43–1.36]; adjusted R2 = 0.75). Additionally, the percentage of anticipatory trials was also highly correlated between the two conditions (p = 0.018, slope = 0.78, 95% confidence interval: [0.18–1.37]; adjusted R2 = 0.57). Hence, subjects adopted a similar gazing behaviour during the observation of both actor types, as participants with a higher tendency to anticipate during human actions also tended to anticipate more than others during the robotic ones.

To assess whether the presence of a robotic actor produced a quantitative reduction in anticipatory behaviour, we evaluated the individual variation in anticipation between the human and the robot conditions (i.e. 100 × (1 − proportion_anticipation_robot / proportion_anticipation_human); see Data analysis). By considering the percentage of variation with respect to the "human" condition, we compensated for inter-individual differences in the natural tendency to anticipate, independently of the nature of the actor. A positive number would indicate that the introduction of the robot as actor determined a decrease in anticipation. The average percentage of variation was not significantly different from zero (one-sample t-test, t(7) = −0.529, p = 0.613), indicating that robot observation did not significantly modify the natural anticipatory behaviour exhibited during human action observation. Additionally, we ran 10000 iterations of a bootstrap simulation (Efron & Tibshirani 1993) on the percentages of variation. With this resampling technique we aimed at approximating the distribution of the average of this parameter from our sample. More precisely, on each iteration the percentages of variation were independently resampled with replacement to form a new 8-element sample, which was then averaged. Of the 10000 average percentages of variation thus computed, only a minority (27%) corresponded to a decrease in anticipation larger than 20% of that measured in the "human" condition, providing no evidence in favour of a reduction in anticipation associated with robot observation.
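A minimal sketch of this bootstrap procedure in Python is given below; the eight percentages of variation are hypothetical placeholders standing in for the measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject percentages of variation (N = 8), standing in
# for 100 * (1 - prop_anticipation_robot / prop_anticipation_human):
pct_variation = np.array([-15.0, 5.0, 30.0, -40.0, 10.0, -5.0, 25.0, -20.0])

n_iter = 10000
# On each iteration, resample the 8 values with replacement and average:
resamples = rng.choice(pct_variation, size=(n_iter, pct_variation.size))
boot_means = resamples.mean(axis=1)

# Fraction of bootstrap averages corresponding to a decrease in
# anticipation larger than 20% of the "human" condition value:
frac_large_decrease = np.mean(boot_means > 20.0)
print(f"{100 * frac_large_decrease:.1f}% of bootstrap means exceed a 20% decrease")
```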
We also wanted to be sure that the similarity between human and robot observation did not derive from a habituation effect, i.e. did not depend on a progressive reduction in anticipatory gaze associated with the repeated exposure to similar stimuli. This phenomenon would have particularly influenced the human condition, because it was always presented last. To check whether there was any learning or habituation effect, we linearly fitted the proportion of anticipation with respect to repetition number for each subject in the robot and human conditions separately. No significant trend of change in anticipation as a function of repetition number emerged for any subject, in either the robot or the human condition (all p > 0.05, with a slope not significantly different from 0 in a one-sample t-test, p = 0.854 in the "robot" condition and p = 0.166 in the "human" one, and an average adjusted R2 of M = 0.09, SD = 0.14 in the "robot" condition and M = 0.08, SD = 0.11 in the "human" condition).

Another possible confound derived from the fact that the timing of the human action was more variable than the robotic one and generally shorter: average human movement duration was around 2.5 s (M = 2.4 s, SD = 0.4 s), while the average robot movement lasted a little less than 3 s (M = 2.8 s, SD = 0.09 s). To compensate for a possible effect of this difference on the amount of anticipation between human and robotic action observation, we linearly fitted the proportion of anticipation over movement duration for all trials in the human condition, for each subject. Then, we extrapolated the proportion of anticipation in the human condition for a trial duration corresponding to the average robotic movement duration for that subject. Lastly, we replaced the anticipation measured in the "human" condition with this corrected estimate. The results are plotted in Figure 4. As emerges clearly from the figure, also after this correction no difference in gazing behaviour appears when the actor is a human or a humanoid robot. Indeed, replicating the previous analysis on the corrected data yielded similar results, with an average percentage of variation between the "human" and "robot"
conditions not significantly different from 0 (t(7) = −0.757, p = 0.474 in a one-sample t-test) and a strong correlation between the anticipation exhibited in the two conditions (p = 0.007 for the linear regression, with a slope not significantly different from 1: 0.95, 95% confidence interval [0.37–1.52]; adjusted R2 = 0.68). Thus, our results suggest that motor resonance, in the form of anticipatory gaze behaviour, occurs during humanoid robot observation as much as during human agent observation.

Figure 4. Human-Robot comparison. Amount of anticipation (measured as a proportion of movement duration). (A) Single subjects' proportion of anticipation during the observation of robotic actions plotted against the corresponding proportion of anticipation during the observation of human actions. "Human" values have been corrected for movement duration differences between the robotic and human conditions (see text for details). Error bars represent within-subject standard errors. Different symbols represent different subjects. The dashed line is the identity line: if a data point lies below it, the proportion of anticipation for that subject is higher in the "human" condition than in the "robot" one. (B) Box plots of the anticipation proportion in the "robot" and the "human" actor conditions. Each box is determined by the 25th and 75th percentiles of the distribution, and the whiskers by the 5th and 95th percentiles. The small squares indicate the sample averages, while the horizontal lines represent the medians.
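The per-subject duration correction described above amounts to a linear extrapolation; a sketch with hypothetical single-subject data follows.

```python
import numpy as np

def duration_corrected_anticipation(human_durations, human_props, robot_mean_duration):
    """Fit the proportion of anticipation against movement duration over a
    subject's human-condition trials, then evaluate the fit at the average
    duration of that subject's robot trials (the correction described above)."""
    slope, intercept = np.polyfit(human_durations, human_props, deg=1)
    return slope * robot_mean_duration + intercept

# Hypothetical single-subject trials (durations in s, anticipation proportions):
durations = np.array([2.1, 2.3, 2.4, 2.5, 2.6, 2.2, 2.4, 2.7])
props = np.array([0.30, 0.28, 0.33, 0.35, 0.27, 0.26, 0.31, 0.40])
corrected = duration_corrected_anticipation(durations, props, robot_mean_duration=2.8)
```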
4. Discussion
When we observe someone performing an action, we usually interpret the action as goal-directed. This "goal-centric" understanding is also reflected in the way we move our eyes: if we look at someone fetching a bottle of wine, our gaze will tend to shift anticipatorily to the glass into which the wine will be poured. Such an anticipatory mechanism is triggered by action observation and, in particular, by an automatic matching between the observed act and our own motor repertoire: the actor is "motorically" interpreted as an agent who shares with us similar motor representations and thus similar goals (e.g. drinking a glass of good wine). Since anticipatory eye movements belong to the motor programme associated with action execution, they are similarly activated when we merely "resonate" with the actions of others.

Much evidence exists in favour of a tight link between gaze and action control. For instance, gaze has been shown to signal the intention behind one's action as much as the action itself, as demonstrated by the similar neural response evoked in an observer witnessing a gazing action or a reach-to-grasp action toward the same object (Pierno et al. 2006). Gaze movements therefore become a backdoor for accessing the action-processing mechanisms of our partners. The eyes represent a communication channel during interaction, telling us not only where others are focusing their attention, but also providing a direct connection to the very basic mechanisms of mutual understanding.

In this study we monitored subjects' gaze during the observation of a humanoid robot performing a goal-directed action, to evaluate whether a similar implicit mutual understanding can occur also between humans and robots. More precisely, the question was whether a robotic model is able to induce motor resonance as a human actor would or whether, on the contrary, it is perceived as a non-goal-oriented, self-moving object, which evokes neither mirror neuron system activation nor anticipatory gaze. Our results show that subjects exhibited the tendency to anticipatorily shift their gaze toward the action goal similarly whether the actor was a human or a robot. The humanoid robot is therefore implicitly interpreted as a goal-oriented, predictable agent, able to evoke the same motor resonance as a human actor.
Before accepting this conclusion, one needs to examine whether alternative explanations are equally plausible. One alternative explanation is that, since the spatial goal of the action was evident, the anticipation of the action goal could have occurred without any need for the activation of the motor resonance mechanism subserving anticipatory gaze. However, subjects were not asked to concentrate on (or anticipate) the action goal, but simply to look at the action. Previous studies have shown that, even in the presence of unambiguous spatial goals, automatic anticipatory gaze shifts to the goal occur significantly more often when an action (specifically, an action belonging to the motor repertoire of the observer) is witnessed, while otherwise a tracking behaviour is more prominent (e.g. Falck-Ytter et al. 2006 in infants; Flanagan & Johansson 2003 in adults). Therefore, we are confident that the occurrence of anticipatory gaze shifts to the action goal in the presence of the robot's action represents evidence in favour of the activation of a direct matching mechanism between the human observer and the robot, as has been proved by neurophysiological studies of human observation (Elsner et al. 2012). Hence, at least at the level of motor matching, the robot is perceived almost as a conspecific, i.e. as a goal-oriented agent sharing a common motor vocabulary.

Another possibility is that subjects showed anticipatory gaze behaviour because they misunderstood the experimenter's instructions, as if their task were to gaze as soon as possible at the action goal. We are however inclined to exclude this alternative because, as the spatial target of the action was clear from the beginning of the movement, gaze would then have arrived at the goal immediately from the beginning of action presentation and would not have shown the variability reported here. However, we recognize that in some cases a misunderstanding of the task requirements might actually have occurred. In fact, although subjects on average presented anticipatory gaze behaviour, some individuals followed the demonstrator's hand movements all the time, irrespective of the agent's nature (human or robot). This result seems to confirm the hypothesis formulated by Gesierich et al. (2008) in relation to an experiment in which they monitored eye movements during the observation of a virtual block-stacking task on a computer. They failed to measure anticipation in about half of their sample of subjects and suggested that this could have been due to a misunderstanding of the task. The experimental setup and the calibration procedure may in fact have led subjects to understand that eye movements were the relevant element of the study, and thus to erroneously infer that their task was to track the moving effector. This explanation seems to be confirmed by the behaviour of one of our subjects, who was discarded from the analysis because during action observation his gaze never entered the goal area, remaining always aligned with the demonstrator's (human or robot) hand. To check whether he behaved this way because of a lack of motor resonance or because of a misunderstanding of the task, we analyzed his gaze behaviour not only during the transport-to-target action but
also in the phase of putting the object back in the start position. Interestingly, in this case, which to the subject did not appear as part of the task but rather as a functional movement necessary to begin the next trial, the subject's gaze arrived at the target object in advance 75% of the time, with an anticipation of about 32% of the whole trial duration. This suggests that false beliefs about the task can indeed modulate or even cancel the natural anticipatory behaviour that subjects would have shown in an ecological situation. The progressive improvement of eye-tracking devices, which are becoming less invasive and require faster (or a posteriori) calibration procedures, may simplify the design of more ecological testing situations, in which subjects are not induced to monitor their own gazing behaviour during the experiment. However, the majority of our subjects were not confused by the task instructions and exhibited a clear anticipation for both human and robot action presentations.

A further possible explanation of the results could be that subjects were somehow forced to perceive the robot as a human agent because it replicated an action previously performed by a human. To avoid this potential issue we always presented the robot as the first demonstrator, so that no immediate association between the observed action performed by the robot and a human agent could be made. This conservative choice ensured that anticipation could not be ascribed to the attribution of human-like behaviours to the robot because of previous exposure to human actions. However, it did not allow testing for an effect of the order of presentation of the two agents. Future research should measure whether witnessing robot actions has a significant impact on the subsequent observation of human actions, and vice versa. It should also be noted that the robot was physically present, executing the action in front of the subjects' eyes. Such a concrete presence made it clear to the subjects that the robot was a real artefact and not just an animated character or a human actor in disguise, which could in turn have evoked a response similar to human observation by analogy. Therefore, all precautions were taken to avoid a high-level, cognitive "humanization" of the robotic platform. Of course, we cannot exclude that some subjects nevertheless explicitly attributed a human-like nature to the iCub; however, such an explicit (non-motor) attribution would not have automatically led to the occurrence of anticipatory gaze shifts (Gredebäck & Melinder 2010a).

Another hypothesis is that subjects became habituated to the multiple presentations and reduced their attention to the stimuli, and possibly their anticipatory gaze shifts. In that case, the anticipation measured in the human condition would be lower than it really is, because that condition was always performed last. Indeed, the repetitive presentation of exactly the same action has been suggested by some authors to inhibit the firing of the mirror neuron system
(e.g. Gazzola et al. 2007), one of the neural mechanisms connected to the occurrence of anticipatory gaze (Elsner et al. 2012). However, in our case we can reject this hypothesis, because no trend of habituation was observed in either condition for any subject. Indeed, no clear change in anticipatory gaze shifts emerged as a function of the number of repetitions, suggesting that habituation did not play a relevant role in this experiment. Moreover, the test was conducted with live presentations of the stimuli, rather than videos. Probably the slight variations always present from movement to movement kept the resonance mechanism active throughout the repetitions.

In summary, during the observation of a humanoid robot performing a goal-directed action, such as transporting an object into a container, subjects anticipatorily gazed at the goal of its action the same way they would during the presentation of a human action, suggesting that the robot was implicitly interpreted as a goal-oriented agent and not just as a complex moving object. This occurrence of anticipatory gaze shifts implies a motor-based understanding of the action goal, which does not require inferential or teleological reasoning but is rather based on an implicit, covert matching between the observer's and the agent's actions (Elsner et al. 2012; Flanagan & Johansson 2003). Such motor-based mutual understanding constitutes one of the principal bases of human social interaction abilities (Gallese et al. 2004; Oberman et al. 2007b), and there is evidence that the manifestation of such resonance, e.g. when two agents imitate each other, can lead to an increased acceptance and sense of comfort in the interaction (Chartrand & Bargh 1999), an increased sense of closeness to other people, and even to more prosocial behaviours (van Baaren et al. 2003). These results are therefore promising for HRI, in that they suggest that interaction with robots can be based on the same basic implicit social mechanisms in which human-human interaction is rooted.

Several findings suggest that multiple subtle cues play a fundamental role in eliciting mutual understanding, both in interactions between humans and between humans and robots, ranging from the way the robot moves (biological motion; e.g. Chaminade et al. 2005; Kupferberg et al. 2011), to robot appearance (humanoid shape; e.g. Moriguchi et al. 2010) and robot social signals (gaze behaviour, autonomous movements; e.g. Itakura et al. 2008; Sciutti et al. 2013). In the current study we focused on the role of the agent's nature (human versus robot), controlling all other parameters, i.e. maintaining in both conditions a biological velocity profile of the motion, hiding the actor's gaze direction from the subjects' view, and comparing a human agent with a humanoid robot. Future research will be needed to determine which parameters actually make a robot more or less likely to engage implicit social mechanisms, possibly disentangling the roles of the robot's motion and form (as already suggested by Oztop et al. 2005; Chaminade & Cheng 2009).
This study introduces the measurement of anticipatory gaze behaviour as a powerful tool to understand which elements of a robotic implementation let the robot be perceived as an interactive agent rather than a mechanical tool. In addition to motor resonance, several other factors affect a person's perception of the robot, e.g. attention, emotional state, action context, previous experience and cultural background. However, measuring motor resonance through the monitoring of anticipatory gaze behaviour could be an important source of information about the unconscious perception of robot behaviour, as such resonance plays a basic role in human interactions (Gallese et al. 2004). Therefore, we suggest that combining this measure with physiological (Dehais et al. 2011; Rani et al. 2002; Wada et al. 2005) and qualitative information (Bartneck et al. 2009; Kamide et al. 2012) would provide a comprehensive description of HRI, encompassing the conscious judgment of the human agent about the robot as well as the quantification of his or her automatic response. This ensemble of techniques could therefore represent an innovative test of the basic predisposition to interaction with humanoid robots, useful also for providing guidelines on how to build new interactive robots. Indeed, the question of how robots are perceived by humans is becoming ever more central: the progressive introduction of robots into a wide range of common applications, for instance home appliances, entertainment, security or rehabilitation, is reducing the engineers' control over who will interact with the robot and how. Consequently, the design of a robot has to take into account its interactive skills and the impact that its behaviour has on its human partners. The monitoring of anticipatory gaze could tell us under which conditions humans unconsciously interpret robots as predictable interaction partners, sharing their same action representations and their same goal-directed attitude.
Acknowledgments

The authors would like to thank Marco Jacono for his help in building the setup and preparing the experiments. The work has been conducted in the framework of the European projects ITALK (Grant ICT-FP7-214668) and POETICON++ (Grant ICT-FP7-288382).
References

Ambrosini, E., Costantini, M., & Sinigaglia, C. (2011a). Grasping with the eyes. Journal of Neurophysiology, 106, 1437–1442.
Bartneck, C., Kulic, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1, 71–81.
Chaminade, T., & Cheng, G. (2009). Social cognitive neuroscience and humanoid robotics. Journal of Physiology, Paris, 103, 286–295.
Chaminade, T., Franklin, D., Oztop, E., & Cheng, G. (2005). Motor interference between humans and humanoid robots: Effect of biological and artificial motion. In International Conference on Development and Learning (pp. 96–101).
Chaminade, T., Zecca, M., Blakemore, S.-J., Takanishi, A., Frith, C.D., Micera, S., Dario, P., Rizzolatti, G., Gallese, V., & Umiltà, M.A. (2010). Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS One, 5, e11577.
Chartrand, T.L., & Bargh, J.A. (1999). The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology, 76, 893–910.
Cross, E.S., Liepelt, R., de C. Hamilton, A.F., Parkinson, J., Ramsey, R., Stadler, W., & Prinz, W. (2011). Robotic movement preferentially engages the action observation network. Human Brain Mapping, 33, 2238–2254.
Dehais, F., Sisbot, E.A., Alami, R., & Causse, M. (2011). Physiological and subjective evaluation of a human-robot object hand-over task. Applied Ergonomics, 1–7.
Efron, B., & Tibshirani, R.J. (1993). An introduction to the bootstrap. New York, NY: Chapman & Hall.
Elsner, C., D'Ausilio, A., Gredebäck, G., Falck-Ytter, T., & Fadiga, L. (2012). The motor cortex is causally related to predictive eye movements during action observation. Neuropsychologia, 51, 488–492.
Eshuis, R., Coventry, K.R., & Vulchanova, M. (2009). Predictive eye movements are driven by goals, not by the mirror neuron system. Psychological Science, 20, 438–440.
Fabbri-Destro, M., & Rizzolatti, G. (2008). Mirror neurons and mirror systems in monkeys and humans. Physiology (Bethesda), 23, 171–179.
Falck-Ytter, T., Gredebäck, G., & von Hofsten, C. (2006). Infants predict other people's action goals. Nature Neuroscience, 9, 878–879.
Flanagan, J.R., & Johansson, R.S. (2003). Action plans used in action observation. Nature, 424, 769–771.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119(Pt 2), 593–609.
Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends in Cognitive Sciences, 8, 396–403.
Gazzola, V., Rizzolatti, G., Wicker, B., & Keysers, C. (2007). The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. Neuroimage, 35, 1674–1684.
Gesierich, B., Bruzzo, A., Ottoboni, G., & Finos, L. (2008). Human gaze behaviour during action execution and observation. Acta Psychologica, 128, 324–330.
Gredebäck, G., & Kochukhova, O. (2010a). Goal anticipation during action observation is influenced by synonymous action capabilities, a puzzling developmental study. Experimental Brain Research, 202, 493–497.
Gredebäck, G., & Melinder, A. (2010a). Infants' understanding of everyday social interactions: A dual process account. Cognition, 114, 197–206.
Gredebäck, G., Stasiewicz, D., Falck-Ytter, T., von Hofsten, C., & Rosander, K. (2009a). Action type and goal type modulate goal-directed gaze shifts in 14-month-old infants. Developmental Psychology, 45, 1190–1194.
Itakura, S., Ishida, H., Kanda, T., Shimada, Y., Ishiguro, H., & Lee, K. (2008). How to build an intentional android: Infants' imitation of a robot's goal-directed actions. Infancy, 13, 519–532.
Johansson, R.S., Westling, G., Bäckström, A., & Flanagan, J.R. (2001). Eye-hand coordination in object manipulation. The Journal of Neuroscience, 21, 6917–6932.
Kamide, H., Mae, Y., Kawabe, K., Shigemi, S., & Arai, T. (2012). A psychological scale for general impressions of humanoids. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 4030–4037).
Kanakogi, Y., & Itakura, S. (2011). Developmental correspondence between action prediction and motor ability in early infancy. Nature Communications, 2, 341.
Kilner, J.M., Paulignan, Y., & Blakemore, S.J. (2003). An interference effect of observed biological movement on action. Current Biology, 13, 522–525.
Kupferberg, A., Glasauer, S., Huber, M., Rickert, M., Knoll, A., & Brandt, T. (2011). Biological movement increases acceptance of humanoid robots as human partners in motor interaction. AI & Society, 26, 339–345.
Liepelt, R., Prinz, W., & Brass, M. (2010). When do we simulate non-human agents? Dissociating communicative and non-communicative actions. Cognition, 115, 426–434.
Metta, G., Natale, L., Nori, F., Sandini, G., Vernon, D., Fadiga, L., von Hofsten, C., Rosander, K., Lopes, M., Santos-Victor, J., Bernardino, A., & Montesano, L. (2010). The iCub humanoid robot: An open-systems platform for research in cognitive development. Neural Networks, 23, 1125–1134.
Moriguchi, Y., Minato, T., Ishiguro, H., Shinohara, I., & Itakura, S. (2010). Cues that trigger social transmission of disinhibition in young children. Journal of Experimental Child Psychology, 107, 181–187.
Nyström, P., Ljunghammar, T., Rosander, K., & von Hofsten, C. (2011). Using mu rhythm desynchronization to measure mirror neuron activity in infants. Developmental Science, 14, 327–335.
Oberman, L.M., McCleery, J.P., Ramachandran, V.S., & Pineda, J.A. (2007a). EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomputing, 70, 2194–2203.
Oberman, L.M., Pineda, J.A., & Ramachandran, V.S. (2007b). The human mirror neuron system: A link between action observation and social skills. Social Cognitive and Affective Neuroscience, 2, 62–66.
Oztop, E., Franklin, D., Chaminade, T., & Cheng, G. (2005). Human-humanoid interaction: Is a humanoid robot perceived as a human? International Journal of Humanoid Robotics, 2, 537–559.
Pattacini, U., Nori, F., Natale, L., Metta, G., & Sandini, G. (2010). An experimental evaluation of a novel minimum-jerk Cartesian controller for humanoid robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1668–1674).
Perani, D., Fazio, F., Borghese, N.A., Tettamanti, M., Ferrari, S., Decety, J., & Gilardi, M.C. (2001). Different brain correlates for watching real and virtual hand actions. Neuroimage, 14, 749–758.
Pierno, A.C., Becchio, C., Wall, M.B., Smith, A.T., Turella, L., & Castiello, U. (2006). When gaze turns into grasp. Journal of Cognitive Neuroscience, 18(12), 2130–2137.
Press, C., Bird, G., Flach, R., & Heyes, C. (2005). Robotic movement elicits automatic imitation. Cognitive Brain Research, 25, 632–640.
Press, C., Gillmeister, H., & Heyes, C. (2007). Sensorimotor experience enhances automatic imitation of robotic action. Proceedings of the Royal Society B: Biological Sciences, 274, 2509–2514.
Rani, P., Sims, J., Brackin, R., & Sarkar, N. (2002). Online stress detection using psychophysiological signal for implicit human-robot cooperation. Robotica, 20(6), 673–686.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Rizzolatti, G., Fadiga, L., Fogassi, L., & Gallese, V. (1999). Resonance behaviors and mirror neurons. Archives Italiennes de Biologie, 137, 85–100.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131–141.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2, 661–670.
Rizzolatti, G., & Sinigaglia, C. (2010). The functional role of the parieto-frontal mirror circuit: Interpretations and misinterpretations. Nature Reviews Neuroscience, 11, 264–274.
Rosander, K., & von Hofsten, C. (2011). Predictive gaze shifts elicited during observed and performed actions in 10-month-old infants and adults. Neuropsychologia, 49, 2911–2917.
Sandini, G., Metta, G., & Vernon, D. (2007). The iCub cognitive humanoid robot: An open-system research platform for enactive cognition. In 50 years of artificial intelligence (pp. 358–369). Berlin, Heidelberg: Springer.
Sciutti, A., Bisio, A., Nori, F., Metta, G., Fadiga, L., Pozzo, T., & Sandini, G. (2012). Measuring human-robot interaction through motor resonance. International Journal of Social Robotics, 4(3), 223–234.
Sciutti, A., Del Prete, A., Natale, L., Burr, D.C., Sandini, G., & Gori, M. (2013). Perception during interaction is not based on statistical context. In IEEE/ACM Proceedings of the Human Robot Interaction Conference 2013 (in press).
Senju, A., Southgate, V., White, S., & Frith, U. (2009). Mindblind eyes: An absence of spontaneous theory of mind in Asperger syndrome. Science, 325, 883–885.
Shimada, S. (2010). Deactivation in the sensorimotor area during observation of a human agent performing robotic actions. Brain and Cognition, 72, 394–399.
Southgate, V., Johnson, M.H., Osborne, T., & Csibra, G. (2009). Predictive motor activation during action observation in human infants. Biology Letters, 5, 769–772.
Stadler, W., Ott, D.V., Springer, A., Schubotz, R.I., Schütz-Bosbach, S., & Prinz, W. (2012). Repetitive TMS suggests a role of the human dorsal premotor cortex in action prediction. Frontiers in Human Neuroscience, 6.
Stadler, W., Schubotz, R.I., von Cramon, D.Y., Springer, A., Graf, M., & Prinz, W. (2011). Predicting and memorizing observed action: Differential premotor cortex involvement. Human Brain Mapping, 32, 677–687.
Tai, Y.F., Scherfler, C., Brooks, D.J., Sawamoto, N., & Castiello, U. (2004). The human premotor cortex is 'mirror' only for biological actions. Current Biology, 14, 117–120.
Urgesi, C., Maieron, M., Avenanti, A., Tidoni, E., Fabbro, F., & Aglioti, S.M. (2010). Simulating the future of actions in the human corticospinal system. Cerebral Cortex, 20, 2511–2521.
van Baaren, R.B., Holland, R.W., Steenaert, B., & van Knippenberg, A. (2003). Mimicry for money: Behavioral consequences of imitation. Journal of Experimental Social Psychology, 39, 393–398.
Wada, K., Shibata, T., Musha, T., & Kimura, S. (2005). Effects of robot therapy for demented patients evaluated by EEG. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1552–1557).
Woodward, A.L. (1998). Infants selectively encode the goal object of an actor's reach. Cognition, 69, 1–34.
Authors’ addresses Alessandra Sciutti Istituto Italiano di Tecnologia Robotics, Brain and Cognitive Sciences Department Via Morego 30 16163 Genova Italy E-mail:
[email protected] Francesco Nori Istituto Italiano di Tecnologia Robotics, Brain and Cognitive Sciences Department Via Morego 30 16163 Genova Italy E-mail:
[email protected] Giorgio Metta Istituto Italiano di Tecnologia Robotics, Brain and Cognitive Sciences Department Via Morego 30 16163 Genova Italy Plymouth University Center for Robotics and Neural Systems (CRNS) A311 Portland Square PL4 8AA Plymouth Devon, United Kingdom E-mail:
[email protected] Giulio Sandini Istituto Italiano di Tecnologia Robotics, Brain and Cognitive Sciences Department Via Morego 30 16163 Genova Italy E-mail:
[email protected]
Ambra Bisio University of Genova Department of Experimental Medicine, Section of Human Physiology Viale Benedetto XV, 3 16132, Genova Italy Istituto Italiano di Tecnologia Robotics, Brain and Cognitive Sciences Department Via Morego 30 16163 Genova Italy E-mail:
[email protected] Luciano Fadiga University of Ferrara Section of Human Physiology Via Fossato di Mortara 17/19 44100 Ferrara Italy Istituto Italiano di Tecnologia Robotics, and Cognitive Sciences Department Via Morego 30 Italy E-mail:
[email protected]
Authors’ biographical notes Alessandra Sciutti received her Ph.D. in Humanoid Technologies from the University of Genoa (Italy) in 2010. After an experience of one year at the Robotics Lab of the Rehabilitation Institute of Chicago, she is currently working as a Post Doc at the RBCS Department of the Italian Institute of Technology. Her research activity mainly concerns the study of the action-perception link both for prediction and for human-human(oid) interaction. Ambra Bisio received her Ph.D. in Humanoid Technologies from the University of Genoa (Italy) in 2011. She spent one year as a Postdoc at the Robotics Brain and Cognitive Sciences (RBCS) department of the Italian Institute of Technology (IIT) investigating action-perception mechanisms in healthy adults and patients with dementia during human-human and human-robot interaction. She is currently working as Postdoc in the Department of Experimental Medicine, Section of Human Physiology at the University of Genoa where she approaches the study of motor resonance mechanism at behavioural and neurophysiological level by mean of motion capture and Transcranial Magnetic Stimulation techniques. Francesco Nori received his Ph.D. in Control and Dynamical Systems from the University of Padova (Italy) in 2005. After a visiting period at the University of California, Los Angeles, in 2006 he moved to the University of Genoa and started his Postdoc at the laboratory for integrated advanced robotics, beginning a fruitful collaboration with Prof. Metta and Prof. Sandini. In 2007 he moved to the IIT where he currently holds a Team Leader position. During the past 6 years, he has significantly contributed to the development of the iCub humanoid platform with a focus on hardware and control algorithm development. Giorgio Metta is senior scientist at the IIT since 2006 and assistant professor at the University of Genoa since 2005, where he teaches courses on anthropomorphic robotics and intelligent systems for the bioengineering curricula. He holds a MS with honours (in 1994) and Ph.D. (in 2000) in electronic engineering both from the University of Genoa. From 2001 to 2002 he was postdoctoral associate at the MIT AI-Lab. Giorgio Metta’s research activities are in the field of biologically motivated and humanoid robotics, and in particular in life-long developing artificial systems that show some of the abilities of natural systems. Luciano Fadiga is currently full professor of Human Physiology at the University of Ferrara and senior researcher at the RBCS department of the Italian Institute of Technology. He has been senior researcher at the University of Parma since 1992, assistant professor at the University of Parma since 1997 and associate professor of Human Physiology at the University of Ferrara (2000–2005). He has a long experience in electrophysiology and neurophysiology in monkeys (single neurons recordings) and humans (Transcranial Magnetic Stimulation, study of spinal excitability, brain imaging, and recording of single neurons in awake neurosurgery patients). Giulio Sandini is full professor of Bioengineering at the University of Genoa and Director of Research at the Italian Institute of Technology where he leads the Robotics, Brain and Cognitive Sciences Department. 
His past experience includes research activities in neuroscience and clinical labs in Italy (Scuola Normale in Pisa) and abroad (Harvard Medical School in Boston) as well as coordination of research groups involved in robotics and bioengineering at the University of Genoa’s LIRA-Lab.