Contextual Recognition of Robot Emotions
Jiaming Zhang and Amanda J.C. Sharkey
Neurocomputing and Robotics Group, Department of Computer Science, University of Sheffield, U.K.
Presenter: Jiaming Zhang @ Sheffield Hallam University
31st August, TAROS 2011
Contents • Introduction • Hypotheses • Method • Conclusion
Introduction Why should we explore the influence of the surrounding emotional context on human perception of a robot’s simulated emotions?
Can we create convincing and believable robotic facial expressions? • With the help of the FACS (Facial Action Coding System) (Ekman and Friesen, 1978; 2002) • And Russell’s circumplex model (Russell, 1997; Posner et al. 2005) • Successful examples include: MIT’s Kismet (Breazeal, 2002) and Vrije Universiteit Brussel’s Probo (Goris et al. 2008)
Emotion      Action Units
Happiness    6+12
Sadness      1+4+15
Surprise     1+2+5B+26
Fear         1+2+4+5+20+26
Anger        4+5+7+23
Disgust      9+15+16
Contempt     R12A+R14A
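The FACS prototypes in the table above can be encoded as a simple lookup. A minimal sketch (the names `FACS_PROTOTYPES` and `shared_aus` are illustrative, not from the slides); it also shows why some pairs, such as fear and surprise, are easily confused: they share several action units.

```python
# FACS prototypes from the table, kept as strings so that intensity
# suffixes ("5B") and laterality prefixes ("R12A") survive intact.
FACS_PROTOTYPES = {
    "happiness": ["6", "12"],
    "sadness":   ["1", "4", "15"],
    "surprise":  ["1", "2", "5B", "26"],
    "fear":      ["1", "2", "4", "5", "20", "26"],
    "anger":     ["4", "5", "7", "23"],
    "disgust":   ["9", "15", "16"],
    "contempt":  ["R12A", "R14A"],
}

def shared_aus(a: str, b: str) -> set[str]:
    """Action units that two prototype expressions have in common."""
    return set(FACS_PROTOTYPES[a]) & set(FACS_PROTOTYPES[b])
```

For example, `shared_aus("fear", "surprise")` returns `{"1", "2", "26"}`: three overlapping action units, which is one concrete reason such expressions are ambiguous on a robot face.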
Can the perception of human facial expressions be influenced by the surrounding context? • Both the human face and the surrounding context make important contributions to emotion judgments (Niedenthal et al. 2006) • In more detail, the way in which facial and context cues were combined (congruent or incongruent with each other) affected observers’ judgments of facial expressions
Can the perception of an avatar face be affected by emotional congruence? • Hong et al. (2002)’s study showed that the avatar’s facial expressions were better recognized when they matched the context (the avatar voices) than when they did not • Mower et al. (2008; 2009)’s studies demonstrated that observers’ recognition of avatar facial expressions was influenced by the surrounding context (the avatar voices), and that the emotion conveyed by the voice could override that expressed by the face
Why should we investigate the effect of the surrounding context on the recognition of simulated robotic facial expressions? • Empirical research has shown that the facial emotional expressions of both human beings and computer-simulated avatars are susceptible to the surrounding context • Contextual influences on the recognition of human emotional expressions are stronger when the expressions are ambiguous (Niedenthal et al. 2006) • Studies have shown that robot facial expressions are more ambiguous than human emotional expressions
Hypotheses Can the recognition of robotic facial expressions be contextual? And to what extent?
Hypotheses • Primary hypothesis- Hypothesis 1 (H1): When there is a surrounding emotional context, the robot emotions are more easily recognised when they are congruent with that context than when they are not. • Additional hypothesis- Hypothesis 2 (H2): When a robot’s expressions are not appropriate given the context, subjects’ judgments of those expressions will be more affected by the context than the expressions themselves.
Method Interaction design for two experiments, and their procedures and results
Interaction Design • In two between-subjects experiments, the FACS was adapted to program a robotic head to display either a sequence of Positive Affect (mainly joy and surprise) or a sequence of Negative Affect (mainly sadness, anger and disgust) (see Fig. 1) • In both experiments, the context (recorded BBC World News in the first experiment, and pictures selected from the International Affective Picture System (IAPS) (Bradley and Lang, 2007) in the second experiment) and the presentation of emotional facial expressions occurred simultaneously
Fig. 1. Joy (top left), Surprise (top right), Fear (middle left), Sadness (middle right), Anger (bottom left), and Disgust (bottom right) of the robot head.
Procedures of the first experiment Warm-Up: subjects viewed six different static facial expressions and filled in a questionnaire about how they perceived the robot’s emotions
Conditions 1–4: a 2×2 between-subjects design crossing the valence of the news (positive vs. negative) with the robot’s displayed affect (Positive vs. Negative Affect)
Two of the groups thus received congruent emotional information from the news and the robot head (Group 1 and Group 4), whilst two received incongruent, or conflicting information (Group 2 and Group 3).
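The congruence structure of the four groups can be sketched in a few lines. This is a minimal sketch, and the pairing assumed for Groups 2 and 3 (which valence of news each heard) is illustrative; the slides only confirm that Groups 1 and 4 were congruent and Groups 2 and 3 were not.

```python
# Assumed 2x2 assignment of news valence x robot affect to the four groups.
# (The Group 2/3 ordering is an assumption for illustration.)
GROUPS = {
    1: ("positive news", "positive affect"),
    2: ("positive news", "negative affect"),
    3: ("negative news", "positive affect"),
    4: ("negative news", "negative affect"),
}

def is_congruent(group: int) -> bool:
    """A group is congruent when news valence and robot affect valence match."""
    news, robot = GROUPS[group]
    return news.split()[0] == robot.split()[0]
```

Under this assignment, `{g for g in GROUPS if is_congruent(g)}` yields `{1, 4}`, matching the congruent/incongruent split described above.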
Responses: After listening to the recorded BBC News, each subject was asked to answer the following questions:
1. As a total impression, please select what kind of News you think you were listening to from the following given choices. ___________
   A: Positive News  B: Neutral News  C: Negative News
2. As a total impression, please select what kind of emotion (affect) you think the robot was feeling from the following given choices. ___________
   A: Positive Affect  B: Neutral Affect  C: Negative Affect
3. As a detailed impression, please select what emotions you think the robot was feeling from the following given choices. ___ (you can choose more than one option)
   A: Joy  B: Sadness  C: Anger  D: Fear  E: Disgust  F: Surprise
Fill in the Brief Mood Introspection Scale (BMIS) (Mayer and Gaschke, 1988)
Results of the first experiment Seventy-two subjects (38 male, 34 female; mean age 26.82) of various nationalities participated in this experiment. A Chi-square test for independence: χ2 (1, n=72) = 22.562, p < .001