Interaction with a Virtual Character through Performance Based Animation

Qiong Wu, Maryia Kazakevich, Robyn Taylor, and Pierre Boulanger

Advanced Man-Machine Interface Laboratory
Department of Computing Science, University of Alberta
T6G 2E8 Edmonton, Alberta, Canada
{qiong,maryia,robyn,pierreb}@cs.ualberta.ca
Abstract. While performance based animation has been widely used in film and game production, we apply a similar technique for recreational and artistic performance purposes, allowing users to experience real-time, natural interaction with a virtual character. We present a real-time system that allows a user to interact with a synthetic virtual character animated from the performance of a dancer who is hidden behind the scenes. The virtual character responds to the user as if she can “see” and “listen” to the user. By presenting to the observing audience only the virtual character animated by our system, within an immersive virtual environment, we create natural interaction between the virtual world and the real world.

Key words: synthetic characters, virtual reality, motion capture, advanced man-machine interfaces, behavioral systems, interaction techniques, immersive entertainment, artistic installations
1 Introduction
As computer generated graphics gain sophistication and become part of our culture through our exposure to films, video games, and interactive applications, one of the great challenges of our time is to achieve seamless interaction between the virtual world and the real world. For example, natural and intuitive responses from virtual characters can dramatically improve the end-user experience in applications such as video games, communication tools, and live performances. Such online applications require the virtual character to efficiently and accurately interpret the user’s intention and to respond accordingly and naturally. We have created a system that enables one-to-one, real-time interaction between a user and a virtual character, a character animated through the live performance of a real actor in a remote location. The virtual character can “see” and “listen” to the user through a camera and a microphone installed in our system, and “respond” in real time, driven by performance based animation. Such an environment allows the user to interact naturally with a virtual character as if interacting with a real person, by giving audio, visual, and even vocal abilities to the character controlled by the hidden performer.
Such a system may serve various purposes, ranging from the artistic (choreographed animation may be driven by a dancer) to the recreational (users may be entertained by figuring out the intelligence behind the system), or it may serve as a way to enhance live performances (the performer may perform together with a virtual character).
2 Related Work
There exists extensive research addressing interfaces that allow users to interact with virtual characters. For game applications, the novel interfaces that have been proposed make use of limited input resources, such as a simple mouse [1], a keyboard [2], video cameras [3], Nintendo Wiimotes [4], or a pressure sensor [5]. In these systems, once the input is interpreted, a pre-defined animation is played and cannot be interrupted until the next input is read. A similar idea is also applied as a means of interaction in live performances such as music driven animations, which strive to reduce the barrier between the musician and the virtualized environment. Such mediated performance practice usually parameterizes vocal [6], movement [7], and/or keyboard [8] input, extracts meaningful features, and maps them to pre-defined character behaviors. Such “new media” performance mixes traditional performance with digital content, bringing new multimedia elements to the performance itself. Using live input to trigger pre-defined animations usually permits only limited variation, however, and therefore lacks a real feeling of interaction for the user.
3 Real-time Interactive Dancing System
The idea of our system is to achieve real interaction between a user and a virtual character by having a hidden performer/dancer control the character. The performer is invisible to the user but is able to watch the user through a surveillance camera, and performs according to the user’s intentions. The character is then animated exactly as the performer moves, via a motion capture system and an animation engine. Unaware of the performer’s existence, the audience experiences real interaction with a virtual character, bringing a new form of multimedia to the live performance itself. The system is composed of the following three parts:
– Sensing Interface: To achieve natural interaction, the interface to the virtual character is hands free. We use a high-end visualization system capable of displaying stereoscopic images to present the life-sized, three-dimensional character. A five-walled, life-sized immersive display environment (called a CAVE) allows the user to experience the virtual world inhabited by the believable character. The CAVE is also equipped with a microphone, so that the user may not only dance with the character but also speak to it.
– Motion Capture: We use NaturalPoint’s OptiTrack motion capture system to capture the performer’s motion. Six infrared cameras are synchronized and connected to a computer. The dancer wears a special black suit with optical markers attached to it, so that when the dancer moves within the capture volume the system can compute the position and orientation of the markers as well as of the skeleton joints. In total, 24 skeleton joints are used in our system to control the virtual character.
– Animation Engine: Animation is rendered using the programmable game engine Virtools. We programmed a plug-in, the “Emily NATNET BB” building block, in Virtools (as shown in Fig. 1). This plug-in completes three main tasks. First, acting as a client, it reads real-time data (each joint’s position and orientation) from the motion capture server (Arena) using the NatNet transport (a customized streaming SDK from NaturalPoint). Second, it retargets the captured joints to the skeleton joints of the character, which is modeled in Maya and exported into Virtools together with its skeleton. Third, it updates the character’s motion; a minimal sketch of this per-frame loop is given after Fig. 1. As one can see from Fig. 2, the character’s animation is synchronized in real time with the motion capture data.
Fig. 1. The building block in Virtools that sets up the animation engine
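To make the plug-in’s three tasks concrete, the following is a minimal C++ sketch of its per-frame loop. Everything in it is a simplified, hypothetical stand-in: receiveLatestPose, retargetJoint, and setCharacterJoint are placeholders for the actual NatNet SDK and Virtools SDK calls, not their real APIs.

```cpp
// Minimal sketch of the per-frame logic of the "Emily NATNET BB" plug-in.
// All types and functions here are simplified, hypothetical stand-ins; the
// real plug-in is built on the Virtools SDK and NaturalPoint's NatNet SDK.
#include <array>
#include <cstddef>

// One joint sample as streamed by the motion capture server.
struct JointSample {
    float px, py, pz;      // position in capture-volume coordinates
    float qx, qy, qz, qw;  // orientation as a quaternion
};

constexpr std::size_t kNumJoints = 24;  // 24 skeleton joints drive the character
using Pose = std::array<JointSample, kNumJoints>;

// Task 1: read the latest frame from Arena over NatNet (placeholder stub).
bool receiveLatestPose(Pose& out) { (void)out; return false; }

// Task 2: map a captured joint onto the corresponding joint of the
// Maya-modeled skeleton, correcting for axis conventions and bone
// offsets (placeholder stub that passes the sample through).
JointSample retargetJoint(const JointSample& src, std::size_t joint) {
    (void)joint;
    return src;
}

// Task 3: write the retargeted joint into the character's skeleton
// inside the animation engine (placeholder stub).
void setCharacterJoint(std::size_t joint, const JointSample& local) {
    (void)joint; (void)local;
}

// Called once per rendered frame by the building block.
void updateCharacter() {
    Pose captured;
    if (!receiveLatestPose(captured))
        return;  // no new sample: keep the character's previous pose
    for (std::size_t j = 0; j < kNumJoints; ++j)
        setCharacterJoint(j, retargetJoint(captured[j], j));
}
```

Polling once per rendered frame and keeping the previous pose when no new sample has arrived is what keeps the character’s animation synchronized with the capture stream without stalling the renderer.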
Using Virtools, we may further program computer generated animations based on the motion of the character. For example, a clapping motion by the character may elicit sparkles at the clapping position, based on motion detection. Such compositing of real performance based animation with computer generated content has been widely used in film and game post-production, yet most people never have the chance to interact in real time with a believable virtual world like those currently seen in films or games.
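As an illustration of such a motion-triggered effect, a clap can be detected directly from the streamed joint positions. The sketch below is hypothetical: emitSparkles stands in for the engine’s particle-system call, and the distance threshold is an assumed tuning value rather than one taken from our implementation.

```cpp
// Hypothetical sketch of a motion-triggered effect: when the character's
// hands come together (a clap), emit a particle burst at the contact point.
#include <cmath>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Placeholder for the particle-system call provided by the animation engine.
void emitSparkles(const Vec3& at) { (void)at; }

// Called each frame with the current positions of the two hand joints.
void detectClap(const Vec3& leftHand, const Vec3& rightHand) {
    static bool handsWereApart = true;
    const float kClapThreshold = 0.12f;  // metres; an assumed tuning value

    const bool handsTogether = distance(leftHand, rightHand) < kClapThreshold;
    if (handsTogether && handsWereApart) {
        // Trigger only on the apart-to-together transition, and spawn the
        // effect at the midpoint between the hands.
        emitSparkles({ (leftHand.x + rightHand.x) * 0.5f,
                       (leftHand.y + rightHand.y) * 0.5f,
                       (leftHand.z + rightHand.z) * 0.5f });
    }
    handsWereApart = !handsTogether;
}
```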
4 Future Work
It is our goal that the user can experience believable and natural interaction with the virtual character. This requires that the character be equipped with as many human-like senses as possible, through which the user can interact with it. The current system only allows the user to see the life-sized, three-dimensional character performing in an interactive way within an immersive environment.
Fig. 2. Performance based animation: (a) the OptiTrack software Arena, which captures the motion data; (b) the animation rendered using Virtools
We may add a haptic device so the user can touch the character, and a speaker so the user can hear the character. A face capture system may be added to capture the performer’s facial expressions, adding more life to the character, and the virtual world may be enriched with a more complex cognition layer that responds to both motion and facial expression. We aim to create a system in which the user can experience real-time interaction with virtual characters, like the Na’vi in an immersive Pandora environment in the movie Avatar, rather than just watching static screens.
References

1. Zhao, P., van de Panne, M.: User interfaces for interactive control of physics-based 3D characters. In: Symposium on Interactive 3D Graphics and Games, pp. 87–94 (2005)
2. Thorne, M., Burke, D., van de Panne, M.: Motion doodles: an interface for sketching character motion. ACM Trans. on Graphics 23(3), pp. 424–431 (2004)
3. Chai, J., Hodgins, J.K.: Performance animation from low-dimensional control signals. ACM Trans. on Graphics 24(3), pp. 686–696 (2005)
4. Shiratori, T., Hodgins, J.K.: Accelerometer-based user interfaces for the control of a physically simulated character. ACM Trans. on Graphics 27(5), pp. 123:1–123:9 (2008)
5. Yin, K., Pai, D.: FootSee: an interactive animation system. In: ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 329–338 (2003)
6. Levin, G., Lieberman, Z.: In-situ speech visualization in real-time interactive installation and performance. In: Proceedings of the 3rd International Symposium on Non-Photorealistic Animation and Rendering, pp. 7–14. ACM Press (2004)
7. Camurri, A., Coletta, P., Ricchetti, M., Volpe, G.: Expressiveness and physicality in interaction. Journal of New Music Research 29(3), pp. 187–198 (2000)
8. Taylor, R., Torres, D., Boulanger, P.: Using music to interact with a virtual character. In: The International Conference on New Interfaces for Musical Expression (2005)