
Biol Cybern (2014) xxxx DOI xxxx

Prospects

Emotions in Robot Psychology

V. Nitsch · M. Popp

Abstract In his famous thought experiments on synthetic vehicles, Valentino Braitenberg stipulated that simple stimulus-response reactions in an organism could evoke the appearance of complex behavior, which, to the unsuspecting human observer, may even appear to be driven by emotions such as fear, aggression and even love (Braitenberg 1984). In fact, humans appear to have a strong propensity to anthropomorphize: driven by our inherent desire for predictability, we quickly discern patterns, cause-and-effect relationships and, yes, emotions in animated entities, be they natural or artificial. But might there be reasons that we should intentionally “implement” emotions in artificial entities, such as robots? How would we proceed in creating robot emotions? And what, if any, are the ethical implications of creating “emotional” robots? The following article aims to shed some light on these questions with a multidisciplinary review of recent empirical investigations into the various facets of emotions in robot psychology.

Keywords Robotics · Emotions · Psychology · Neuroscience · Braitenberg

_______________________________________________
This article forms part of a special issue of Biological Cybernetics entitled “Structural Aspects of Biological Cybernetics: Valentino Braitenberg, Neuroanatomy, and Brain Function”. The final publication is available at http://link.springer.com.

V. Nitsch · M. Popp
Human Factors Institute
Universität der Bundeswehr München
Werner-Heisenberg-Weg 39
85577 Neubiberg, Germany
Email: [email protected]

1 Introduction

A philosophical debate on the emotional capacities of machines has raged for centuries. Common counterarguments state that emotions are idiosyncratic experiences, too nuanced for a machine to fully grasp in their intensity and complexity. On the other hand, Braitenberg (1984) succinctly demonstrated with his thought experiments that complexity, intentionality and even emotionality can be ascribed to behavior that is, in fact, governed by simple stimulus-response mechanisms.

With advances in psychology, neuroscience, computer science and engineering, machine capacities continuously increase and expand. In addition, a paradigm shift is taking place from machine-centered to increasingly human-centered approaches to technological development. The field of robotics is currently particularly affected by this paradigm change. Traditionally, robots have been employed mostly in the manufacturing industry. These robots are automated and highly specialized, need to be physically separated from their human co-workers due to safety concerns, and communicate with humans only by taking orders via programming code or remote control. As robots, by virtue of technological advances coupled with decreasing production costs, become more ubiquitous and increasingly pervade not only our professional but also our private lives, a new breed of robot is set to supersede the traditional robot: the social robot. The goal of the social robot is to autonomously or semi-autonomously “interact and communicate with humans by following the behavioral norms expected by the people with whom the robot is intended to interact” (Bartneck and Forlizzi 2004, p. 592). These robots may assist users in their work or domestic chores, or simply provide entertainment value. A considerable body of work is even dedicated to therapeutic uses of social robots (see Dautenhahn and Billard 1999; Fong, Nourbakhsh and Dautenhahn 2003, for reviews on the functions of social robots).

Is there room for emotions in the development of social robots? The following article will discuss the need for and potential applications of emotions in social robotics from a multi-disciplinary perspective.
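Braitenberg's argument can be made concrete in a few lines of code. The following is a minimal, purely illustrative simulation of his vehicles 2a (“fear”: each light sensor excites the motor on the same side, so the vehicle speeds up and turns away from a light source) and 2b (“aggression”: the connections are crossed, so the vehicle turns toward the source and charges at it). The geometry, constants and function names are our own assumptions, not taken from Braitenberg (1984).

```python
import math

LIGHT = (0.0, 0.0)  # position of a single light source

def intensity(px, py):
    """Light intensity falling off with squared distance (illustrative)."""
    d2 = (px - LIGHT[0]) ** 2 + (py - LIGHT[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, wiring="fear", dt=0.1, base=0.05, axle=0.2):
    """Advance a differential-drive vehicle by one time step."""
    # Two light sensors, mounted ahead of the axle and angled left/right.
    left_s = intensity(x + 0.1 * math.cos(heading + 0.5),
                       y + 0.1 * math.sin(heading + 0.5))
    right_s = intensity(x + 0.1 * math.cos(heading - 0.5),
                        y + 0.1 * math.sin(heading - 0.5))

    if wiring == "fear":        # vehicle 2a: uncrossed excitatory wiring
        left_m, right_m = base + left_s, base + right_s
    else:                       # vehicle 2b ("aggression"): crossed wiring
        left_m, right_m = base + right_s, base + left_s

    # Standard differential-drive kinematics.
    v = (left_m + right_m) / 2.0
    omega = (right_m - left_m) / axle
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + omega * dt)

# A "fearful" vehicle started near the light steadily drives away from it.
pose = (1.0, 0.5, 0.0)
for _ in range(200):
    pose = step(*pose, wiring="fear")
print("final distance from light: %.2f" % math.hypot(pose[0], pose[1]))
```

Nothing in these few dozen lines represents fear; yet, watched from above, one wiring flees the light and the other attacks it, which is precisely the anthropomorphic reading Braitenberg pointed out we can hardly avoid.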


2 Emotions and Human-Robot Interaction

The engineering of social robots proves challenging. Not only are there numerous technological challenges to be solved, such as navigation in unknown spaces and the manipulation of unknown objects, but robot psychology also becomes increasingly important. In order for social robots to interact closely and intuitively with humans, the robots need to predict human intentions and actions and display behavior that is appropriate to the context. Moreover, intuitive interaction requires the development of new forms of communication between humans and robots. However, to date, we have little knowledge of how humans interact with artificial social entities, much less of what constitutes intuitive communication with them. Hence, by far most research and engineering efforts on emotions in robotics have focused on the use of emotions to facilitate human-robot interaction.

2.1 Human Emotions

Although definitions vary, in the psychology literature (human) emotions are commonly described as “fairly brief but intense experiences” (Eysenck and Keane 2000, p. 489), whereas in the artificial intelligence literature the term is frequently used in a broader sense that also includes preferences and affective appraisals. Evolutionary views of human emotions have characterized them as adaptations for the rapid production and recognition of emotion cues, in particular facial cues, with principally communicative purposes (Craig, Vaidyanathan, James and Melhuish 2010). Most famously, Ekman and Friesen (1978) stipulated the existence of six primary emotions that are expressed through facial expressions: happiness, sadness, anger, fear, disgust, and surprise (see Figure 1). In their view, all human expressions can be viewed as combinations of these expressions. Evidence from cross-cultural studies of facial expressions of emotion seems largely to corroborate this view (Power and Dalgleish 1997).
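Ekman and Friesen's scheme is commonly operationalized through their Facial Action Coding System (FACS), which decomposes expressions into numbered action units (AUs). As an illustration of the idea that expressions are combinations of a small set of components, the sketch below matches an observed AU set against frequently cited AU prototypes for the six basic emotions; the exact prototype sets vary across publications, and the matching function is our own addition.

```python
# Frequently cited FACS action-unit prototypes for the six basic emotions.
# These sets are illustrative; the literature is not fully consistent.
BASIC_EMOTION_PROTOTYPES = {
    "happiness": {6, 12},                 # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},              # inner brow raiser, brow lowerer,
                                          # lip corner depressor
    "surprise":  {1, 2, 5, 26},           # brow raisers, upper lid raiser, jaw drop
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},           # brow lowerer, lid tightener, lip tightener
    "disgust":   {9, 15, 16},             # nose wrinkler, lip depressors
}

def closest_emotion(observed_aus):
    """Score each prototype by Jaccard overlap with the observed AU set."""
    observed = set(observed_aus)
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(BASIC_EMOTION_PROTOTYPES,
               key=lambda e: jaccard(BASIC_EMOTION_PROTOTYPES[e], observed))

print(closest_emotion({6, 12}))        # -> happiness
print(closest_emotion({1, 2, 5, 26}))  # -> surprise
```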


In addition to facial cues, humans have evolved a sensibility to numerous other emotion indicators, including body posture (e.g. Coulson 2004) and vocal cues (e.g. Scherer 1979). Emotions are a central part of human life, as they affect not only communication with our peers and other species, but also our self-perception and our understanding of the world (Eysenck and Keane 2000; Hogg and Vaughan 2002). Hence, if social robots are to become an integral part of our lives, investigating the various aspects that are impacted by emotions is paramount to their success.

2.2 Robot Emotion Simulation

Since human emotion cues have been found to be very efficient in conveying information to other humans, various attempts have been made to equip robots with the ability to project emotion cues as an intuitive form of human-robot communication. Common examples of emotion cues that may be simulated by a social robot include humanoid facial expressions, such as furrowed eyebrows. For example, the FLOBI humanoid head utilizes so-called babyface cues, consisting of large round eyes and a small nose and chin (see Figure 2). Emotions are primarily conveyed through different configurations of eyebrow, eyelid and lip movements. In addition, four LEDs project diffuse white or red light onto the cheek surface, thus creating an impression of blushing (Lütkebohle, Hegel, Schulz et al. 2010).

Figure 1 Facial expressions of the six basic emotions according to Ekman and Friesen (1978). © Paul Ekman.

Figure 2 Facial expressions of emotions of the FLOBI head. Source: Lütkebohle, Hegel, Schulz et al., 2010.

Projector-based implementations of facial expressions are increasingly gaining in popularity; here, a video stream of an animated face is typically retro-projected onto a semi-transparent facial mask (see Figure 3). Such retro-projected animations of facial expressions have numerous advantages over traditional mechatronic faces, including greater versatility in expressions and cost-effective construction.


Figure 3 Retro-projected robot face displaying surprise (left) and fear (right). Source: Delaunay, de Greeff and Belpaeme, 2009.

Less commonly employed are animalistic emotion cues such as tail-wagging or purring. Famous examples of animalistic social robots that aim to interact with humans through emotion cues are Kismet, Leonardo and Paro. Kismet was developed as part of the Sociable Machines Project at the Massachusetts Institute of Technology (MIT) (Breazeal 2002). The robot head can convey various emotion cues through changes in gaze, head orientation, eyelids, eyebrows, lips and ears (see Figure 4).

Figure 4 Kismet with six expressions of emotion. Source: Breazeal, 2002.

Leonardo, Kismet's successor at MIT, possesses, according to its developers, near human-like facial expression capabilities and can also use its upper body to gesticulate (see Figure 5, left) (Breazeal, Buchsbaum, Gray et al. 2005). The seal robot Paro can move its head, tail, front paws and eyelids (see Figure 5, right). Paro has been used extensively for therapeutic purposes such as pain management (Shibata, Mitsui, Wada et al. 2001) and depression therapy (Wada, Shibata, Saito et al. 2002).

2.3 Human anthropomorphism

It has been suggested that robots which use explicit emotion cues have the advantage of exploiting human tendencies to anthropomorphize and thereby motivate


Figure 5 Left: Leonardo. Character design copyright Stan Winston Studio; image copyright MIT Media Lab. Right: seal robot Paro. Source: Wada, Shibata, Saito et al., 2005.

humans to apply previously learned patterns of interaction with fellow human beings to their interactions with robots, hence increasing the predictability of both human and robot during an interaction. In fact, humans appear to be quite adept at applying known human-human interaction patterns to human-robot interaction. Studies suggest, however, that physical emotion cues are not a necessary prerequisite for this to occur. For instance, in a series of experiments, MacDorman, Minato, Shimada et al. (2005) investigated the gaze behavior of a small number of Japanese students during an interview. One group of students was interviewed by a human interviewer, while two groups were questioned by an android robot that closely resembled the human interviewer. In one of the two groups that interacted with the android, participants were told that the robot was remotely controlled by another human, while the students in the other group believed the robot to act completely autonomously. During the interview, gaze direction and the amount of time that students spent looking in a particular direction were recorded. Although the results are not necessarily conclusive due to the small and varying sample sizes, they indicate that the students often cast their gaze downwards when talking to the human interviewer and the remotely controlled android, which, considering the cultural context, could be interpreted as typical social behavior. Hence, these results would indicate that this particular learned interaction pattern applies to human-human as well as to human-robot interaction. However, the results further show that participants who believed the robot was acting autonomously lowered their gaze much less, suggesting that the extent to which learned behavior is applied to a human-robot interaction depends on factors other than physical appearance alone.

Further evidence suggests that human behavior may transfer from human-human to human-robot interaction without the explicit evocation of human emotions. For instance, in an as yet unpublished study recently conducted at the Universität der Bundeswehr München in Germany, 27 individuals took part in an experiment investigating human-robot negotiation behavior using an experimental paradigm known as the ultimatum game. In this two-player paradigm, one person (the provider) must


make another person (the recipient) an offer on how to divide a certain amount of money. If the recipient accepts the offer, the money is divided accordingly between the two players. If, on the other hand, the recipient refuses the offer, neither player receives any money. Decades of research on this paradigm have found that, instead of aiming to maximize profits, players act in a fairness-oriented manner: they tend to make and expect “fair” offers (usually around 40% of the total sum), whilst rejecting and even punishing offers that are considered unfair (usually around 20% or less). In the study, half of the participants negotiated with a fair robot partner, whereas the other half negotiated with an unfair partner (only offers of 50% were accepted).
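The payoff structure of the game is simple enough to state in code. The sketch below is a hypothetical simulation of the fairness-oriented behavior described above, not the software used in the study; the offer distribution and acceptance threshold are illustrative stand-ins for the unreported experimental parameters.

```python
import random

def ultimatum_round(total, offer_fraction, accept_threshold):
    """One round: the provider offers a share of the pot; the recipient
    accepts it only if it reaches the threshold.  A rejected offer pays
    neither player."""
    offer = offer_fraction * total
    if offer >= accept_threshold * total:
        return total - offer, offer     # (provider payoff, recipient payoff)
    return 0.0, 0.0

# A "fair" provider in the sense described above: offers scattered around
# 40% of the pot, facing a recipient who rejects offers below 20%.
payoffs = [ultimatum_round(10.0,
                           offer_fraction=random.gauss(0.4, 0.05),
                           accept_threshold=0.2)
           for _ in range(1000)]
mean_recipient = sum(r for _, r in payoffs) / len(payoffs)
print("mean recipient payoff: %.2f" % mean_recipient)  # close to 4.0
```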

Figure 6 Aldebaran's NAO Next Gen Robot negotiating for money. Image copyright Universität der Bundeswehr München.

The results showed that, even though participants were well aware that the robot had no use for the money and assumed that the robot was acting autonomously, they made and expected fair offers when interacting with the robot. Interestingly, although there were no significant differences in the amounts of money offered and accepted by the human and robot negotiation partners within each fairness condition, the human partner was considered significantly less fair than the robot. Moreover, participants indicated no emotional responses to the unfair behavior of the robot. This finding implies that the human tendency to anthropomorphize is not necessarily linked to the experience of emotion. Hence, it seems that behavior and expectations during human-human interaction may transfer (at least to some degree) to human-robot interaction, even when no particular emotion is evoked in humans.


2.4 Induction of Human Emotions

Besides using emotion cues to increase the predictability of human behavior, roboticists are also interested in inducing emotions to improve the quality of the human-robot interaction. As Oberman, McCleery, Ramachandran and Pineda (2007) stated, by invoking different emotions in humans, social robots “could potentially tap into the powerful social motivation system inherent in human life, which could lead to more enjoyable and longer lasting human–robot interactions” (p. 2195).

Evidence suggests that robots are well capable of eliciting emotions in humans. For example, Craig, Vaidyanathan, James and Melhuish (2010) presented participants with various stimuli depicting digital facial expressions of the BERT2 humanoid robot and examined the event-related responses during two implicit emotion recognition experiments to determine the modulation of the face-specific N170 brain response component by robot facial expressions. It was found that the stimuli evoked the N170 response component and that responses to different digital facial expressions could be discriminated, demonstrating that robot facial expressions evoke brain activity similar to that evoked by human emotional expressions. In fact, there is recent neuroscientific evidence linking robot activities to the activation of the human mirror neuron system, an area of the human brain that was previously believed to be specific to biological actions (Tai, Scherfler, Brooks et al. 2004). Gazzola, Rizzolatti, Wicker and Keysers (2007) showed participants still pictures and videos of a human arm and a robotic arm performing maneuvers of varying complexity and asked them to perform the actions themselves. Using functional magnetic resonance imaging (fMRI), the authors found that watching human and robotic actions, be they simple or complex, activated a significant section of the brain areas involved in the motor execution of similar actions, in particular the temporal, parietal and frontal areas typically considered to compose the mirror neuron system.

A number of studies have found that specific emotions can be induced in humans even without simulating explicit emotion cues such as facial expressions. For instance, Wendt, Popp, Karg and Kühnlenz (2008) aimed to invoke four different emotion states (stress, boredom, surprise and perplexity) in a LEGO building task, during which a robot arm hands the human different building blocks (see Figure 7). The emotions were induced through different movement speeds and patterns of the robot arm. Stress and boredom were generated by varying the handover interval from the normal working condition of 5 seconds to a shortened interval of 3 seconds (stress) or a stretched interval of 35 seconds (boredom). Surprise was induced by handing over an unexpected building block, while a sensation of perplexity was intended to be evoked by unexpectedly changing the handover position. Physiological measurements tracked changes in biosignals to ascertain physiological correlates


of emotions during this prototypical human-robot interaction. While the physiological correlates of surprise and perplexity were not clearly distinguishable from those obtained in a baseline measurement, the results did show distinguishable levels of physiological arousal in the stress and boredom conditions, indicating that simple variations in robot behavior can induce different emotion states in humans.
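The experimental manipulation itself reduces to a handover schedule. The sketch below encodes the intervals reported by Wendt et al. (2008) as a hypothetical robot-side controller; the enum, function and stubbed handover routine are our own illustrative assumptions, not the authors' code.

```python
import time
from enum import Enum

class TargetEmotion(Enum):
    NEUTRAL = "neutral"   # normal working condition
    STRESS = "stress"     # shortened handover interval
    BOREDOM = "boredom"   # stretched handover interval

# Handover intervals (in seconds) as reported for the LEGO building task.
HANDOVER_INTERVAL_S = {
    TargetEmotion.NEUTRAL: 5.0,
    TargetEmotion.STRESS: 3.0,
    TargetEmotion.BOREDOM: 35.0,
}

def run_handover_schedule(condition, n_blocks, hand_over_block, wait=time.sleep):
    """Hand over n_blocks at the interval prescribed by the condition."""
    interval = HANDOVER_INTERVAL_S[condition]
    for i in range(n_blocks):
        hand_over_block(i)
        wait(interval)

# Example with a stubbed handover and no real waiting:
run_handover_schedule(TargetEmotion.STRESS, 3,
                      hand_over_block=lambda i: print("handing block", i),
                      wait=lambda s: None)
```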

Figure 7 The robot arm handed a human a LEGO building block with different speed and movement patterns to induce emotions. Top: Instead of handing the LEGO block to the human, the robot arm quickly moves to an unexpected position to induce perplexity. Bottom: The robot arm moves very slowly to induce boredom. Image copyright Universität der Bundeswehr München.

Others have sought to induce human emotions that would increase acceptance of social robots. For example, Wendt and Berg (2009) implemented nonverbal humorous behavior in a service robot whose task it was to bring human participants various items (e.g. slippers, food). In the humorous condition, the robot was dressed as a human butler and behaved in a playful manner (see Figure 8). For example, every time the robot was on its way to fetch the requested item, it pretended to slip over a tiger fur lying on the floor. In the last turn of the humor condition, the robot circumnavigated the fur, paused for a moment, turned around towards the fur, and displayed joy over its success by playing a cheerful jingle and “clapping” its grippers. In a control condition, the robot wore no clothes and simply fetched the desired items. A survey showed that the nonverbal humorous behavior made the robot appear more human-like and more entertaining. Interestingly, although the humorous robot was considered less reliable, it was nevertheless preferred to the non-humorous robot. The authors suggested that humorous behavior may be used to mitigate a robot's technological deficiencies.

Figure 8 Nonverbal humorous behavior displayed by the Funbot. Source: Wendt and Berg, 2009.

It seems, however, that the relationship between emotional behavior and robot acceptance is more complex. In a number of surveys, Goetz et al. found that people expect a robot to look and act appropriately for different tasks (Goetz and Kiesler 2002; Goetz, Kiesler and Powers 2003). For example, a robot that performs in a playful manner would be preferred for a fun game, but a serious robot would be preferred for a serious exercise regime. Some critics have argued that, since most people do not yet have experience in interacting with social robots, many studies on emotions during human-robot interaction are tainted by a novelty effect that can quickly dissipate with increasing exposure. To date, however, very few studies have investigated long-term interactions with social robots. Consequently, it is presently difficult to assess the long-term effects of the intentional induction of emotions in humans with regard to the acceptance of robots.

2.5 Undesired Emotions: the danger of the uncanny valley

While studies indicate that robots which resemble humans are generally more effective than less human-like machines in evoking reactions that have been learned during years of socialization with other humans, some have pointed out that anthropomorphizing may raise unrealistic expectations that might negatively impact the human-robot interaction (Dautenhahn 2004). Moreover, it has been suggested that even though robot likeability increases with increasing approximation to human beings, there may be a point at which likeability drops sharply. This effect has become known as the uncanny valley (Mori 1970). Based on observed reactions to lifelike prosthetic hands, Mori hypothesized that humans are sensitive to perceived


imperfections in entities that closely resemble humans. Movement was stipulated to exacerbate the uncanny valley effect (see Figure 9).
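Mori sketched the uncanny valley as a curve rather than an equation, so any formula is necessarily an invention. The short plot below renders one such purely illustrative function, in which affinity rises with human-likeness, dips sharply just before full human-likeness, and then recovers; the dip location and depth are arbitrary choices, not fitted to data.

```python
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 400)
# Linearly rising affinity with a Gaussian "valley" near high human-likeness.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.003)

plt.plot(likeness, affinity)
plt.xlabel("human-likeness")
plt.ylabel("affinity (arbitrary units)")
plt.title("Illustrative uncanny valley curve (not Mori's data)")
plt.show()
```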

Figure 9 The uncanny valley. Source: MacDorman, 2006.

Not surprisingly, the uncanny valley hypothesis has been investigated thoroughly by the robotics community, in particular by scientists who work with android robots that imitate humans very closely, with mixed results. For instance, MacDorman and Ishiguro (2006) presented 45 participants with a set of 11 images that morphed from a photograph of the humanoid robot Qrio to one of the Philip K. Dick android by Hanson Robotics to one of Philip K. Dick himself (see Figure 10).

Figure 10 Morphed images from the robot Qrio (left) to the Philip K. Dick android (center) to the real Philip K. Dick (right). Source: MacDorman and Ishiguro, 2006.

Participants were asked to rate the images on nine-point scales ranging from very mechanical to very humanlike and from very strange to very familiar. They were subsequently asked to rate perceived eeriness on a ten-point scale ranging from slightly eerie to extremely eerie. The results show that images with middle scores on the mechanical-to-human dimension also received the highest eeriness ratings. Although the results do not replicate Mori's uncanny valley, they do indicate a non-linear relationship between humanlike appearance and feelings of unease. In a subsequent study by the same authors, 56 participants were asked to rate video clips of various robots in action on the same dimensions. The results showed no consistent relationship between perceived human-likeness and eeriness. Since the presented video clips varied widely in the activities that were shown, with some robots even using speech, the authors suggest that negative affective reactions to humanlike robots may be influenced by a wide range of


factors in addition to physical appearance.

Some have suggested a neurological basis for the uncanny valley effect: the negative affect is theorized to be a response to the violation of the brain's predictions. Specifically, it is hypothesized that the brain continuously generates predictions about our perceived surroundings based on our previous lifetime experience. Hence, when we observe an agent that we classify as human, we expect this agent to behave accordingly. If our predictions are violated, a feeling of unease is elicited (Saygin, Chaminade, Ishiguro et al. 2012). Empirical evidence provides some support for this hypothesis. For instance, Saygin et al. (2012) explored the selectivity of the human action perception system (APS) using functional magnetic resonance imaging (fMRI). During the fMRI scans, participants viewed a series of short video clips of a human, an android and a robot carrying out various actions (see Figure 11). While the human looked and moved naturally, the android looked natural but moved mechanically, and the robot looked and moved mechanically.
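As a toy rendering of this predictive-coding account, unease can be modeled as the mismatch between how human-like an agent looks and how human-like it moves. The humanness scores below are invented for illustration; only the ordering of the three agents follows the study design.

```python
def mismatch_unease(appearance_humanness, motion_humanness):
    """A toy 'prediction violation' score in [0, 1]."""
    return abs(appearance_humanness - motion_humanness)

agents = {
    "human":   (1.0, 1.0),  # looks and moves naturally
    "android": (0.9, 0.2),  # looks natural, moves mechanically
    "robot":   (0.1, 0.2),  # looks and moves mechanically
}
for name, (appearance, motion) in agents.items():
    print(name, "->", mismatch_unease(appearance, motion))
# Only the android produces a large mismatch, in line with the hypothesis
# that appearance-motion incongruence drives the negative affect.
```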

Figure 11 Images of the robot, android and human shown to participants during fMRI scans. Source: Saygin, Chaminade, Ishiguro et al., 2012.

The results showed similar levels of APS activity for the human and robot videos, but stronger effects for the android, indicating distinctive responses of the APS to mismatches between appearance and motion.

Overall, however, the evidence supporting the uncanny valley hypothesis is tentative at best. At the very least, it seems likely that this effect would be mitigated by multiple factors, including movement and speech. Moreover, if the negative affect were indeed caused by a mismatch of expectation and observation, it would seem highly likely that the effect of the uncanny valley would quickly dissipate with increased experience in interacting with humanlike robots.

2.6 Emotion Recognition

It has been suggested that communication with robots would be more intuitive if robots not only conveyed emotional content but also recognized emotion cues in humans and adapted their behavior accordingly (e.g. Craig, Vaidyanathan, James and Melhuish 2010). To date, only a few attempts have been made to incorporate emotion recognition abilities in robots. Most of these focus on the detection and classification of human facial expressions.
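One common route, mentioned in the next paragraph, is to train a support vector machine on extracted facial features. The sketch below is a hypothetical stand-in rather than any published system; it assumes that feature extraction (e.g. landmark displacements or action-unit intensities) happens elsewhere and uses random data in its place.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

# Random stand-ins for facial feature vectors and their emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = rng.integers(0, len(EMOTIONS), size=600)

# Standardize features, then fit a kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

predicted = clf.predict(X[:1])
print("predicted emotion:", EMOTIONS[predicted[0]])
```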


For more than two decades, robotics researchers have employed various algorithmic approaches, such as neural networks (e.g. Kobayashi and Hara 1992), hidden Markov models (e.g. Cohen, Garg and Huang 2000) and support vector machines (e.g. Michel and El Kaliouby 2003), with the aim of classifying human facial cues of emotions, typically those of the six basic emotions proposed by Ekman and Friesen (1978). As yet, few approaches seem to reach the accuracy, much less the range, of human emotion detection capabilities.

Although humans are considerably less skilled at detecting emotions from voice tone without emotional context information, roboticists have also striven to equip robots with such abilities. This task is particularly difficult due to considerable inter-individual differences in voice tone and voice modulation. Park, Kim and Oh (2009) developed a speech emotion recognition algorithm for service robots that managed to correctly classify 97% of emotional speech samples when presented with two emotions, 80% with three and 65% with five emotions, a classification performance which, according to the authors, was comparable to that of human participants.

It is difficult to discern the extent to which the ability to recognize human emotions actually contributes towards more intuitive interaction, since any evaluation of an interaction would necessarily also consider the robot's responses to the detected emotions. Furthermore, since humans are extremely adaptive, it is entirely possible that humans could learn to communicate just as well with robots that possess the ability neither to project nor to recognize emotions. It seems plausible, however, to assume that human-robot communication would become more intuitive the closer it approximates human-human communication, in particular if social robots are to be considered peers rather than machines.

3 Robot emotion models

For many years, the view prevailed that our emotions are simply a relic of our evolutionary development, which interferes with our rationality and has become superfluous in a modern society in which an individual's drives to fight, gather food and procreate are no longer essential to the survival of humankind. In recent years, however, these notions have been revised as increasing evidence has indicated that emotions are, in fact, an integral part of rational behavior (Gadanho and Hallam 2001). The idea that a specific part of the brain is responsible for the production of emotions has been convincingly disproved. Instead, it seems that specific brain areas are involved in the processing of specific emotions. For example, LeDoux (1996, 2002) and others have demonstrated that activities in specific nuclei of the amygdala correlate with specific aspects of fear.


However, the amygdala is also involved in other (non-emotional) brain functions, as are other brain structures that have been linked with emotional activity, such as the hypothalamus, the prefrontal cortex and the brain stem neuromodulatory centers (Fellous 2004). Hence, neuroscientific evidence suggests a close link between emotion and cognition. For instance, emotions have been found to be a prerequisite for establishing long-term memories (McGaugh 2000), to promote memory recall (Kenealy 1997) and to have important implications for attention allocation: fear tends to focus attention on details, whereas positive affect promotes viewing the bigger picture (Arbib and Fellous 2004). Other studies have found emotions to influence perception (Cytowic 1993; Derryberry and Tucker 1994; Niedenthal, Setterlund and Jones 1994) and reasoning (Bechara, Damasio, Tranel and Damásio 1997; Damásio 1994; LeDoux 1998).

Although definitions of emotions vary widely, most researchers would agree that emotions are not an exclusively human property. In fact, neuroscientific and psychological evidence indicates that emotions are essential to producing efficient and adaptive behavior in humans as well as non-human primates (Birbaumer and Schmidt 2006). Hence, it has been speculated that robotic cognitive systems may also benefit from the integration of emotions, particularly with regard to context adaptiveness.

Most commonly, emotions in robots are conceived as simple mechanisms for behavior selection. For example, Arbib and Fellous (2004) suggested that robot emotions may be used principally to organize and trigger perceptual and motor schemas by adjusting their relative weighting, depending on the respective context. Malfaz and Salichs (2004) employed emotions in a supervisory role for behavior selection by monitoring changes in certain emotion states and establishing emotion goals, e.g. happiness, that trigger corresponding behaviors. Similarly, Kato and Arita (2004) implemented anger, anxiety, contentment, excitement and sadness as so-called behavior modulations that trigger different movement patterns in a mobile robotic platform. Some efforts have been made to integrate the concept of emotions into robot control architectures in order to affect robot perceptual and cognitive processes. For example, Gadanho and Hallam (2001) integrated an emotion model comprising four basic emotions (happiness, sadness, fear and anger) into a reinforcement-learning control architecture. In a series of experiments, this emotion-based architecture was found to be useful in drawing attention to environmental aspects that are relevant to the current emotional state, in providing a reinforcement function that specifies the relative importance of certain situations, and in detecting significant changes in the environment by monitoring corresponding changes in the emotional system state.
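The behavior-selection role that these architectures assign to emotions can be sketched in a few lines. The program below is a minimal illustration loosely inspired by the ideas cited above, not a reimplementation of any of them: scalar emotion intensities are updated by crude appraisals of events and then bias the choice among competing behaviors. All names and numbers are our own assumptions.

```python
# Scalar emotion intensities in [0, 1].
EMOTIONS = {"happiness": 0.3, "sadness": 0.0, "fear": 0.0, "anger": 0.0}

# How strongly each emotion weights each candidate behavior.
BEHAVIOR_BIAS = {
    "explore": {"happiness": +0.6, "fear": -0.8},
    "retreat": {"fear": +0.9, "happiness": -0.3},
    "rest":    {"sadness": +0.5},
}

def appraise(event):
    """Crude appraisal: map an event to emotion increments."""
    table = {
        "collision":    {"fear": +0.4, "happiness": -0.2},
        "goal_reached": {"happiness": +0.5, "sadness": -0.2},
        "no_progress":  {"sadness": +0.3, "anger": +0.2},
    }
    for emotion, delta in table.get(event, {}).items():
        EMOTIONS[emotion] = min(1.0, max(0.0, EMOTIONS[emotion] + delta))

def select_behavior():
    """Pick the behavior with the highest emotion-weighted score."""
    def score(behavior):
        return sum(weight * EMOTIONS[emotion]
                   for emotion, weight in BEHAVIOR_BIAS[behavior].items())
    return max(BEHAVIOR_BIAS, key=score)

appraise("collision")     # fear rises, happiness falls
print(select_behavior())  # -> "retreat"
```

In the same spirit, the emotion state can double as a reinforcement signal, rewarding actions that raise happiness and penalizing those that raise fear or sadness, which is essentially the role the emotion model plays in the reinforcement-learning architecture described above.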


4 Conclusion

Roboticists envision a near future in which ubiquitous social robots interact closely and intuitively with humans, as servants and perhaps even as life-long companions. Since emotions are a vital part of human social life, it seems only logical to consider their involvement in the development of social robots. Social robots can employ a variety of facial, verbal and/or behavioral cues to exploit our inherent propensity to anthropomorphize, and can use advanced signal processing techniques to recognize our emotions, thus rendering the human-robot interaction more predictable, intuitive and, yes, perhaps even enjoyable. In addition to equipping robots with the ability to express, induce and recognize emotions in humans, efforts have also been made to integrate emotions into their cognitive architecture. Although it is debatable whether our experience of emotions can ever be truly artificially recreated, efforts have been made to at least approximate emotion functionality in robots in an effort to increase their cognitive efficiency and behavioral adaptability.

How far can we go in equipping robots and other artificial agents with emotions? Can robots not only look happy or use an appraisal of an emotion to control perception sensors and memory storage, but actually feel happy? Can one feel happiness without also feeling sadness? At which point would it be morally irresponsible to experiment with sentient robots? Although there are a number of research groups concerned with robot ethics, it seems that this line of questioning is largely confined to philosophical debates rather than the scientific literature on social robots or artificial intelligence. As coordinated multidisciplinary efforts to define, assess and induce human and robot emotions become more frequent and sophisticated, however, it seems only a matter of time until these questions are brought into focus. As Craig et al. (2010, p. 652) pointed out: “There is a tremendous potential for mutually beneficial future research in this area between psychology, neuroscience and robotics. Reciprocal influences between social cognitive neuroscience and humanoid robotics promise a better understanding of social interactions that will ultimately lead to increasing the social acceptance of future robotic companions.”

Is it necessary or even sensible to purposefully involve the concept of emotions in the development of social robots? Braitenberg might have argued that any form of behavior might appear complex, intelligent and even emotional to us, even if it is governed by simple stimulus-response mechanisms. If we grant the veracity of this argument, perhaps the more poignant question would be: Even if we wanted to, could we avoid the involvement of emotions?

References

Arbib MA, Fellous JM (2004) Emotions: from brain to robot. Trends Cogn Sci 8(12):554–561.

Bartneck C, Forlizzi J (2004) A design-centered framework for social human-robot interaction. In: Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), pp 591–594.

Bechara A, Damasio H, Tranel D, Damásio AR (1997) Deciding advantageously before knowing the advantageous strategy. Science 275:1293–1295.

Birbaumer N, Schmidt RF (2006) Biologische Psychologie, 6th edn. Springer, Heidelberg.

Braitenberg V (1984) Vehicles: Experiments in synthetic psychology. MIT Press, Cambridge, MA.

Braitenberg V (2004) Vehikel. Experimente mit künstlichen Wesen. Lit Verlag (Wissenschaftliche Paperbacks, 26), Münster.

Breazeal C (2002) Designing Sociable Robots. MIT Press, Cambridge, MA.

Breazeal C, Buchsbaum D, Gray J, Gatenby D, Blumberg B (2005) Learning from and about others: Towards using imitation to bootstrap the social understanding of others by robots. Artif Life 11(1-2):31–62.

Cohen I, Garg A, Huang TS (2000) Emotion recognition from facial expressions using multilevel HMM. In: Neural Information Processing Systems (Vol. 2).

Coulson M (2004) Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. J Nonverbal Behav 28(2):117–139.

Craig R, Vaidyanathan R, James C, Melhuish C (2010) Assessment of human response to robot facial expressions through visual evoked potentials. In: Proceedings of the 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp 647–652.

Cytowic RE (1993) The man who tasted shapes. Abacus, London.

Damásio AR (1994) Descartes' error—Emotion, reason and the human brain. Picador, London.

Dautenhahn K (2004) Robots we like to live with?! - a developmental perspective on a personalized, life-long robot companion. In: Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), pp 17–22.

Dautenhahn K, Billard A (1999) Bringing up robots or—the psychology of socially intelligent robots: From theory to implementation. In: Proceedings of the Third Annual Conference on Autonomous Agents, pp 366–367.

Delaunay F, de Greeff J, Belpaeme T (2009) Towards retro-projected robot faces: an alternative to mechatronic and android faces. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp 306–311.

Derryberry D, Tucker DM (1994) Motivating the focus of attention. In: Niedenthal PM, Kitayama S (eds) The heart's eye. Academic Press, New York, pp 167–196.

Ekman P, Friesen WV (1978) Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press, Palo Alto, CA.

Eysenck MW, Keane MT (2000) Cognitive psychology: A student's handbook, 4th edn. Psychology Press, United Kingdom.

Fellous JM (2004) From human emotions to robot emotions. In: Architectures for Modeling Emotion: Cross-Disciplinary Foundations, American Association for Artificial Intelligence, pp 39–46.

Fong T, Nourbakhsh I, Dautenhahn K (2003) A survey of socially interactive robots. Robot Auton Syst 42(3):143–166.

Gadanho SC, Hallam J (2001) Robot learning driven by emotions. Adapt Behav 9(1):42–64.

Gazzola V, Rizzolatti G, Wicker B, Keysers C (2007) The anthropomorphic brain: the mirror neuron system responds to human and robotic actions. Neuroimage 35(4):1674–1684.

Goetz J, Kiesler S (2002) Cooperation with a robotic assistant. In: Proceedings of the CHI'02 Conference on Human Factors in Computing Systems, New York, USA, pp 578–579.

Goetz J, Kiesler S, Powers A (2003) Matching robot appearance and behaviour to tasks to improve human-robot cooperation. In: Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp 55–60.

Hogg MA, Vaughan GM (2002) Social Psychology, 3rd edn. Prentice Hall, UK.

Kato T, Arita T (2004) A robotic approach to emotion from a selectionist perspective. In: Proceedings of the 9th International Symposium on Artificial Life and Robotics, pp 601–604.

Kenealy PM (1997) Mood-state-dependent retrieval: The effects of induced mood on memory reconsidered. Q J Exp Psychol-A 50:290–317.

Kobayashi H, Hara F (1992) Recognition of six basic facial expressions and their strength by neural network. In: Proceedings of the IEEE International Workshop on Robot and Human Communication, pp 381–386.

LeDoux JE (1996) The Emotional Brain. Simon and Schuster, New York.

LeDoux JE (1998) The Emotional Brain. Phoenix, London.

LeDoux JE (2002) Synaptic self: How our brains become who we are. Penguin Books, UK.

Lütkebohle I, Hegel F, Schulz S, Hackel M, Wrede B, Wachsmuth S, Sagerer G (2010) The Bielefeld anthropomorphic robot head “Flobi”. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp 3384–3391.

MacDorman K, Minato T, Shimada M, Itakura S, Cowley S, Ishiguro H (2005) Assessing human likeness by eye contact in an android testbed. In: Proceedings of the XXVII Annual Meeting of the Cognitive Science Society, Stresa, Italy.

MacDorman KF, Ishiguro H (2006) The uncanny advantage of using androids in cognitive and social science research. Interact Stud 7(3):297–337.

Malfaz M, Salichs MA (2004) A new architecture for autonomous robots based on emotions. In: Proceedings of the Fifth IFAC Symposium on Intelligent Autonomous Vehicles, Lisbon, Portugal.

McGaugh JL, Cahill L, Ferry B, Roozendaal B (2000) Brain systems and the regulation of memory consolidation. In: Bolhuis JJ (ed) Brain, Perception, Memory: Advances in Cognitive Neuroscience. Oxford University Press, pp 233–251.

Michel P, El Kaliouby R (2003) Real time facial expression recognition in video using support vector machines. In: Proceedings of the 5th International Conference on Multimodal Interfaces, pp 258–264.

Mori M (1970) The uncanny valley. Energy 7(4):33–35.

Niedenthal PM, Setterlund MB, Jones DE (1994) Emotional organization of perceptual memory. In: Niedenthal PM, Kitayama S (eds) The heart's eye. Academic Press, New York, pp 87–113.

Oberman LM, McCleery JP, Ramachandran VS, Pineda JA (2007) EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomputing 70(13-15):2194–2203.

Park JS, Kim JH, Oh YH (2009) Feature vector classification based speech emotion recognition for service robots. IEEE T Consum Electr 55(3):1590–1596.

Popp M (2011) Geschichte vom klugen Roboter: Bericht eines Elektropsychologen in zehn Sätzen. In: Hosp I, Schüz A, Braitenberg Z (eds) Tentakel des Geistes: Begegnungen mit Valentin Braitenberg. Raetia/Arunda, pp 142–149.

Power M, Dalgleish T (1997) Cognition and emotion: from order to disorder. Psychology Press, Hove, UK.

Saygin AP, Chaminade T, Ishiguro H, Driver J, Frith C (2012) The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Soc Cogn Affect Neurosci 7(4):413–422.

Scherer KR (1979) Nonlinguistic vocal indicators of emotion and psychopathology. In: Emotions in Personality and Psychopathology, pp 493–529.

Shibata T, Mitsui T, Wada K, Touda A, Kumasaka T, Tagami K, Tanie K (2001) Mental commit robot and its application to therapy of children. In: Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (Vol. 2), pp 1053–1058.

Tai YF, Scherfler C, Brooks DJ, Sawamoto N, Castiello U (2004) The human premotor cortex is mirror only for biological actions. Curr Biol 14:117–120.

Wada K, Shibata T, Saito T, Tanie K (2002) Analysis of factors that bring mental effects to elderly people in robot assisted activity. In: Proceedings of the IEEE International Conference on Intelligent Robots and Systems.

Wada K, Shibata T, Saito T, Sakamoto K, Tanie K (2005) Psychological and social effects of one year robot assisted activity on elderly people at a health service facility for the aged. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp 2785–2790.

Wendt C, Berg G (2009) Nonverbal humor as a new dimension of HRI. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp 183–188.

Wendt C, Popp M, Karg M, Kühnlenz K (2008) Physiology and HRI: Recognition of over- and underchallenge. In: Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp 448–452.