Traveller: An Interactive Cultural Training System controlled by User-Defined Body Gestures

Felix Kistler1, Elisabeth André1, Samuel Mascarenhas2, André Silva2, Ana Paiva2, Nick Degens3, Gert Jan Hofstede3, Eva Krumhuber4, Arvid Kappas4, and Ruth Aylett5

• Experience-based role play with virtual agents
• Educate young adults (18-25) in cultural sensitivity



• Affective Goal: make users aware that their rules for interpreting appropriate behavior may be incorrect
• Cognitive Goal: make users understand general differences between cultures

• Interactive storytelling environment created in Unity3D (http://unity3d.com)
• Two in-game action types:
  a) Navigation
  b) Dialogue (e.g., choosing between the options 1) Yes, 2) No, 3) Maybe)

• Story: Users adopt the role of a character who has not traveled much for most of his life. The scenario starts in the café of the character’s grandmother, where he receives a letter from his deceased grandfather. In this letter, the grandfather, who liked to travel the world, promises his grandson a “lost treasure” to be found on a journey through different countries. In each country, the grandson has to interact with locals in so-called critical incidents to progress. To be successful, the users have to select the correct interaction options depending on the agents’ simulated synthetic culture. The journey leads through three different countries, each with about three scenes.

• FAtiMA agent architecture [1] with a Social Importance (SI) Dynamics model and culturally adaptive behavior [2]; a hypothetical sketch of this coupling follows below:
  • Virtual characters simulate different synthetic cultures defined along three Hofstede dimensions [3]: power distance, individualism vs. collectivism, and masculinity vs. femininity
  • Characters assign the user a different SI level depending on his or her chosen actions
  • Depending on the SI level, the characters adapt their behavior and synthesize emotions
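
As an illustration only (the actual model is specified in [2]), the coupling between user actions, the SI level, and the characters’ reactions can be pictured roughly as in the following Python sketch; all names, values, and the update rule are hypothetical and not taken from the FAtiMA implementation:

    # Hypothetical sketch: a character tracks a Social Importance (SI) level
    # for the user and derives its reaction from it. Names, values, and the
    # update rule are illustrative, not the model from [2].

    SI_EFFECTS = {              # how much a user action raises or lowers SI
        "greet_politely": +5,
        "interrupt_elder": -15,
        "offer_gift": +10,
    }

    class Villager:
        def __init__(self, power_distance=80):
            self.si_level = 50                      # neutral starting value
            self.power_distance = power_distance    # Hofstede dimension [3]

        def observe(self, user_action):
            delta = SI_EFFECTS.get(user_action, 0)
            # in a high power-distance culture, disrespect weighs heavier
            if delta < 0 and self.power_distance > 60:
                delta *= 2
            self.si_level = max(0, min(100, self.si_level + delta))

        def react(self):
            if self.si_level < 30:
                return "refuses to help and shows anger"
            if self.si_level < 60:
                return "answers curtly"
            return "helps gladly and expresses joy"

    npc = Villager()
    npc.observe("interrupt_elder")
    print(npc.si_level, npc.react())    # 20 refuses to help and shows anger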

• Process for creating user-defined gestures [4], adapted for full-body gestures [5] (the agreement computation is sketched after this list):
  • 22 users run through the first three scenes of the story, which include ten in-game actions
  • Interaction is disabled, but the users spontaneously invent 251 body gestures when the system displays the possible actions in text fields
  • Gesture candidates are chosen according to the agreement between the users
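
The poster does not spell out how this agreement is quantified; assuming the agreement score from Wobbrock et al. [4] is computed per in-game action, a minimal Python sketch (function and variable names are ours, not from the cited papers) could look like this:

    from collections import Counter

    def agreement_score(proposals):
        """Agreement score as defined by Wobbrock et al. [4] for one referent
        (here: one in-game action). `proposals` is the list of gesture labels
        suggested by the participants; identical proposals form one group.
        Returns a value in (0, 1]; 1.0 means everyone proposed the same gesture."""
        total = len(proposals)
        groups = Counter(proposals)
        return sum((size / total) ** 2 for size in groups.values())

    # Example: 22 participants propose gestures for the action "yes"
    yes_proposals = ["head nod"] * 18 + ["thumbs up"] * 3 + ["wave"]
    print(round(agreement_score(yes_proposals), 2))   # 0.69 -> "head nod" wins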

• Gesture candidates integrated with the FUBI framework [6] (an illustrative recognizer sketch follows below):
  • Gestures are defined in XML and visualized as symbols
  • Symbols on the screen represent the interaction options
  • A symbol has to be performed as visualized to select its option
  • Static images = postures that should be held briefly
  • Animations/arrows = motions that need to be mimicked
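
To make the distinction between the two symbol types concrete, the following Python sketch shows one way such recognizers could behave; the class names, thresholds, and the simplified swipe check are hypothetical and do not reflect FUBI’s actual API, which defines its recognizers declaratively in XML:

    HOLD_TIME = 0.5  # seconds a static posture must be held (assumed value)

    class PostureOption:
        """Static symbol: selected once the posture has been held briefly."""
        def __init__(self):
            self.held_since = None

        def update(self, posture_matches, now):
            if not posture_matches:
                self.held_since = None
                return False
            if self.held_since is None:
                self.held_since = now
            return now - self.held_since >= HOLD_TIME

    class MotionOption:
        """Animated symbol: selected when the tracked hand roughly mimics the
        templated motion (simplified here to a horizontal swipe)."""
        def __init__(self, direction=+1, min_distance=0.3):
            self.direction = direction          # +1 = right, -1 = left
            self.min_distance = min_distance    # metres (assumed value)

        def update(self, hand_x_start, hand_x_now):
            return (hand_x_now - hand_x_start) * self.direction >= self.min_distance

    # Holding a matching posture for half a second selects its option
    option = PostureOption()
    print(option.update(True, now=0.0))   # False, just started holding
    print(option.update(True, now=0.6))   # True, held long enough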

• Implemented gesture candidates and the related in-game actions (in parentheses):

  • head nod (yes)
  • head shake (no)
  • sit down (sit at bar and wait)
  • step forward (approach group/supervisor)
  • turn away (leave the bar)
  • point to front (ask guard to talk to supervisor)
  • tap on shoulder (ask permission)
  • arms out (ask for directions, ask about supervisor)

• Swipe menu for more complex dialogue actions (a state-machine sketch follows below):
  1. The “arms out” gesture opens the swipe menu if available
  2. Stretching the hand in front of the shoulder activates the menu (the center circle turns yellow)
  3. A swipe gesture in the direction of the desired menu option selects it
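
Read as a small state machine, these three steps could be implemented roughly as follows; the state names, options, and the direction mapping in this Python sketch are illustrative and not taken from the actual system:

    # Hypothetical state machine for the swipe menu described above
    CLOSED, OPEN, ACTIVE = "closed", "open", "active"

    class SwipeMenu:
        def __init__(self, options):
            self.options = options    # e.g. dialogue actions for the current scene
            self.state = CLOSED

        def on_arms_out(self):
            # 1. the "arms out" gesture opens the menu if it is available
            if self.state == CLOSED and self.options:
                self.state = OPEN

        def on_hand_stretched_forward(self):
            # 2. stretching the hand in front of the shoulder activates the menu
            #    (the center circle is highlighted)
            if self.state == OPEN:
                self.state = ACTIVE

        def on_swipe(self, direction):
            # 3. a swipe towards an option selects it and closes the menu
            mapping = self.options_by_direction()
            if self.state == ACTIVE and direction in mapping:
                self.state = CLOSED
                return mapping[direction]
            return None

        def options_by_direction(self):
            directions = ["left", "up", "right", "down"]
            return dict(zip(directions, self.options))

    menu = SwipeMenu(["ask politely", "demand an answer", "leave"])
    menu.on_arms_out()
    menu.on_hand_stretched_forward()
    print(menu.on_swipe("up"))    # -> "demand an answer"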

References:
[1] J. Dias, S. Mascarenhas, and A. Paiva. FAtiMA Modular: Towards an agent architecture with a generic appraisal framework. In Proc. of the Int. Workshop on Standards for Emotion Modeling, 2011.
[2] S. Mascarenhas, R. Prada, A. Paiva, N. Degens, and G. J. Hofstede. Can I ask you a favour? A relational model of socio-cultural behaviour. In Proc. AAMAS 2013. Springer Berlin/Heidelberg.
[3] G. Hofstede, G. J. Hofstede, and M. Minkov. Cultures and Organizations: Software of the Mind: Intercultural Cooperation and Its Importance for Survival. McGraw-Hill, New York, 2010.
[4] J. O. Wobbrock, M. R. Morris, and A. D. Wilson. User-defined gestures for surface computing. In Proc. CHI 2009. ACM, New York.
[5] F. Kistler and E. André. User-defined body gestures for an interactive storytelling scenario. In Proc. INTERACT 2013.
[6] F. Kistler, B. Endrass, I. Damian, C. Dang, and E. André. Natural interaction with culturally adaptive virtual characters. Journal on Multimodal User Interfaces, 2012.

Affiliations:
1 Human Centered Multimedia, Augsburg University, Universitätsstr. 6a, D-86159 Augsburg, Germany (www.hcm-lab.de)
2 INESC-ID / Instituto Superior Técnico, Porto Salvo, Portugal
3 Wageningen University, Wageningen, Netherlands
4 Jacobs University, Bremen, Germany
5 Heriot-Watt University, Edinburgh, United Kingdom

Acknowledgment:

This work was funded by the European Commission within the 7th Framework Programme, in the eCute project (grant agreement FP7-ICT-257666).
