A Multi-Touch System for 3D Modelling and Animation

Benjamin Walther-Franks, Marc Herrlich, and Rainer Malaka
Research Group Digital Media TZI, University of Bremen, Germany
Abstract. 3D modelling and animation software is typically operated via single-pointer input, imposing a serialised workflow that seems cumbersome in comparison to how humans manipulate objects in the real world. Research has brought forth new interaction techniques for modelling and animation that utilise input with more degrees of freedom or employ both hands to allow more parallel control, yet these are separate efforts across diverse input technologies and have not been applied to a usable system. We developed a 3D modelling and animation system for multi-touch interactive surfaces, as this technology offers parallel input with many degrees of freedom through one or both hands. It implements techniques for one-handed 3D navigation, 3D object manipulation, and time control. This includes mappings for layered or multi-track performance animation that allow the animation of different features across several passes or the modification of previously recorded motion. We show how these unimanual techniques can be combined for efficient bimanual control and propose techniques that specifically support the use of both hands for typical tasks in 3D editing. A user study showed that even inexperienced users can successfully use our system for a more parallel and direct modelling or animation process.

Keywords: modelling, animation, multi-touch, bimanual interaction
1 Introduction
The features and rendering power of 3D modelling and animation software have continuously expanded over the last decades to produce ever better results. Yet such systems are still largely operated with one-handed single-pointer input, i.e., mouse or tablet, controlling only two degrees of freedom (DOF) at a time. This stands in contrast to how humans often employ both hands in real-world tasks for a more parallel workflow, and it has motivated many researchers to develop new 3D navigation, modelling, and animation strategies that explore high DOF input [3, 16] or bimanual control [23, 1]. Of these, the most interesting and recent are multi-touch 3D control techniques [8, 17]. Multi-touch interactive surfaces offer additional DOF compared to keyboard/mouse interfaces. The direct mapping between movements on the multi-touch display and the virtual scene, as well as the possibility to make large fluid movements, increases the feeling for proportion, spatial relationships, and timing.
However, these novel interaction techniques are typically designed, developed, and evaluated only in the limited scope of prototypical implementations. They are rarely integrated into working systems or discussed in combination with other techniques. In this paper, we address the problem of developing a 3D modelling and animation system for multi-touch interactive surfaces that integrates concepts for more parallel, direct, and expressive modelling and animation. We have identified and met four challenges:

– Integrating features into a legacy system. Rather than constructing a system from scratch, how can we extend existing software without being restricted in our design space?
– Finding mappings for unimanual control. How can multi-touch input through one hand be used to navigate space, manipulate objects, and control timing?
– Developing strategies for bimanual control. How can unimanual controls be combined for symmetric or asymmetric control of space, space and view, or space and time? How can we leverage the extra freedom of the second hand to address typical modelling or animation problems?
– Bringing 3D layered performance animation to interactive surfaces. The direct coupling of input to output makes interactive surfaces predestined for performative animation approaches. How can mappings for multi-track performance animation be transferred to the surface?

After discussing related work, we describe how we extended the open source 3D package Blender with multi-touch input and parallel event handling. We then describe the system's functionality: unimanual multi-finger gestures for feature manipulation, view navigation, and timeline control. We further explain how users can assign each hand to the control of one of these aspects in quick succession or even simultaneously. We present three new bimanual techniques, auto-constraints, object pinning, and view pinning, that we developed to tackle specific problems in modelling and performance animation. Finally, we present a user study which illustrates that even inexperienced users can successfully use our system for more parallel and direct modelling and animation.
2 Related Work

2.1 Multi-Touch Input for 2D and 3D Manipulation
The general problem of 2D and 3D manipulation on multi-touch interactive surfaces has been approached in many different ways, although not in the context of 3D modelling and animation. Techniques based on the number of fingers used include one-finger selection and translation and two-finger scale and rotate [22]. Employing least-squares approximation to derive rigid 2D affine transformations from finger movements [12] is the technique commonly used in the “ubiquitous” image browser. More recently, physics-based approaches have been explored, incorporating multi-touch input into a physics engine in order to simulate grasping behaviour using virtual forces [21, 20].
Work in this area often tries to leverage the benefits of the direct-touch paradigm, yet direct touch is not always desirable. Limited reach (input space too small), limited precision (input space too coarse), or object occlusion can all be reasons for indirect control on interactive surfaces [4, 13]. Researchers have further investigated multi-touch techniques for 3D manipulation on interactive surfaces. Various multi-finger techniques have been proposed for translation and rotation in 3D space with different DOF and constraints [7, 8, 10]. The approximation approach for deriving affine transformations in the 2D domain has also partly been extended to 3D, using a non-linear least-squares energy optimisation scheme to calculate 3D transformations directly from the screen-space coordinates of the touch contacts, taking into account the corresponding object-space positions [17]. While some of these techniques at least theoretically allow unimanual operation, none of them were explicitly designed to be used with one hand.

2.2 Bimanual Interaction
For real-world tasks humans often use both hands in an asymmetric manner, meaning that the non-dominant hand (NDH) provides a frame of reference for the movements of the dominant hand (DH) [6]. Asymmetric bimanual interaction using two mice, controlling the camera with the NDH and object manipulation with the DH, was shown to be 20% faster than sequential unimanual control for a 3D selection task [1]. The same study also explored symmetric bimanual interaction for 3D selection, controlling the camera and the object at the same time, and found that symmetric bimanual interaction imposes a slightly higher cognitive load on inexperienced users than asymmetric interaction. To what extent this applies to multi-touch interaction has not yet been investigated. Researchers have explored the use of two pointers to perform various operations in 3D desktop applications, finding that the best techniques were those with physical intuition [23]. Comparing direct-touch input to indirect mouse interaction, the former has been found to be better suited for bimanual control, due to the cognitive overhead of controlling two individual pointers indirectly [5]. Others have found that although users naturally tend to prefer bimanual interaction in the physical realm, in the virtual domain unimanual interaction is prevalent [19].

2.3 Performance Animation
Performance animation, also known as computer puppetry, has a tradition of complex custom-built input hardware for high DOF input [3, 16, 18]. Multi-track or layered performance animation can split the DOF across several passes [16]. Dontcheva et al. identify several input-scene mappings and extend this technique to redoing or adjusting existing animation passes [2]. First steps toward more standardised input hardware were made by Neff et al., who explored configurable abstract mappings of high DOF content for layered performance animation that could be controlled with 2 DOF via mouse [15]. So far, the possibility of high DOF input through multi-touch interactive surfaces has only been explored for 2D performance animation [14].
3 System
For our multi-touch modelling and animation system we built on the open source 3D modelling tool Blender. This allowed us to start with a very powerful and feature-complete modelling and animation tool chain. In order to integrate multi-touch input, we had to adapt many operations and parts of the internal event system. We use a version of multi-finger chording [11] to map unimanual multi-finger gestures to existing 2 DOF operators.

Blender features a complete environment for 3D modelling and animation, but its architecture currently does not feature any multi-point processing. For our multi-touch extension we employ the OSC-based TUIO protocol [9]; we require the multi-touch tracker to provide TUIO messages over a UDP port.

For the registration of unimanual multi-finger gestures, our system uses temporal and spatial thresholds to form finger clusters. If a new finger input is within one fifth of the screen width and height of the average position of an existing finger cluster, it is added to the cluster and the average position is updated. Clusters can contain up to four fingers. Once a cluster has existed for 120 ms, the system issues a multi-finger event. Each cluster is assigned a unique id, which it keeps until destruction. The registration event and every subsequent move or destruction event carries this id. Adding a finger to or removing one from the cluster does not change the gesture; this makes continuous gestures resistant to tracking interruptions or relaxation of touch pressure. The cluster remains until the last of its fingers is removed. To reduce errors caused by false detections or by users accidentally touching the surface with more fingers than intended, touches are also filtered by a minimum lifetime threshold, currently set to 80 ms.

Significant modification of the Blender event architecture was necessary to enable multi-point input. Operators are the entities responsible for executing changes in the scene. For continuous manipulation such as view control or spatial transformations, Blender normally employs modal operators that listen for new input until cancelled. To enable the simultaneous use of multiple operators in different screen areas, we had to abandon modality as a concept for operators and dialogs, as exclusive global ownership of input and application focus conflicts with parallel multi-touch interaction. We introduced local input ownership by assigning to each operator the id of the event that invoked it. An operator only accepts input from events with the id it was assigned. Thus, continuous input is paired to a specific operator and cannot influence prior or subsequently created operators, as these are in turn paired with other input events via their ids. Global application focus had to be disabled accordingly. This also means that optimisations such as restricting ray casts for picking to specific areas are no longer possible. Furthermore, interpreting touch input generally requires a more global approach, because single touches can only be interpreted correctly with knowledge of all currently available and/or active operations and other past and current touches, imposing a significant processing overhead compared to single-pointer input.
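The following is a minimal sketch of this clustering scheme, grounded in the thresholds above; the class names, registry structure, and normalised coordinate space are our own assumptions, not Blender internals, and touches below the 80 ms minimum lifetime are assumed to be filtered before they reach the registry.

```python
import itertools
import time

CLUSTER_RADIUS = 0.2   # 1/5 of screen width/height (normalised coordinates)
CLUSTER_DELAY = 0.120  # a cluster must exist 120 ms before its event fires
MAX_FINGERS = 4

_ids = itertools.count()

class Cluster:
    def __init__(self, now):
        self.id = next(_ids)    # unique id, kept until destruction
        self.fingers = {}       # touch id -> (x, y)
        self.created = now
        self.center = (0.0, 0.0)
        self.announced = False  # True once the multi-finger event was issued

    def update_center(self):
        xs, ys = zip(*self.fingers.values())
        self.center = (sum(xs) / len(xs), sum(ys) / len(ys))

class ClusterRegistry:
    def __init__(self):
        self.clusters = []

    def add_touch(self, touch_id, x, y, now=None):
        now = time.monotonic() if now is None else now
        for c in self.clusters:
            cx, cy = c.center
            # spatial threshold: within 1/5 screen width and height
            if (len(c.fingers) < MAX_FINGERS
                    and abs(x - cx) < CLUSTER_RADIUS
                    and abs(y - cy) < CLUSTER_RADIUS):
                c.fingers[touch_id] = (x, y)
                c.update_center()  # adding a finger does not change the gesture
                return c
        c = Cluster(now)
        c.fingers[touch_id] = (x, y)
        c.update_center()
        self.clusters.append(c)
        return c

    def remove_touch(self, touch_id):
        for c in self.clusters:
            if touch_id in c.fingers:
                del c.fingers[touch_id]
                if c.fingers:
                    c.update_center()        # gesture survives finger changes
                else:
                    self.clusters.remove(c)  # destroyed with its last finger
                return

    def poll_events(self, now=None):
        # issue the multi-finger registration event once a cluster is 120 ms old
        now = time.monotonic() if now is None else now
        for c in self.clusters:
            if not c.announced and now - c.created >= CLUSTER_DELAY:
                c.announced = True
                yield ("register", c.id, len(c.fingers), c.center)
```

Continuous move and destruction events would carry the same cluster id, which is what pairs the gesture to its operator under the local input ownership described above.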
Fig. 1. Blending of frames of temporal sequences showing unimanual continuous multi-finger gestures for view control: two-finger pan, three-finger rotate, four-finger dolly move
4 Multi-Finger Mappings for Unimanual Control
In order to allow asymmetric and symmetric control for maximum parallelisation, we make all basic controls usable with one hand. In the following, we describe how we map multi-finger gestures to object, camera, and time control.

4.1 Unimanual Object Manipulation
Object selection and translation are among the most common tasks in 3D modelling. We used the simplest gestures for these tasks: one-finger tapping for selection and one-finger dragging for translation. Indirect translation is useful in many cases: for the manipulation of very small objects, of many objects in close proximity (with possible overlap), or for ergonomic reasons on large interactive surfaces. Our system allows users to either touch an object and immediately move their finger for direct translation, or just tap the object to select it and then move a single finger anywhere in the 3D view on the multi-touch surface for indirect translation.
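A minimal sketch of this select-and-translate logic follows; the scene, picking, and object interfaces are hypothetical stand-ins, not Blender's actual operators.

```python
# Sketch of tap-to-select / drag-to-translate; all names are illustrative.

class Translate:
    """Moves a target object by the finger's screen-space displacement."""
    def __init__(self, obj, x, y):
        self.obj, self.last = obj, (x, y)

    def on_move(self, x, y):
        dx, dy = x - self.last[0], y - self.last[1]
        self.obj.move_screen_space(dx, dy)  # 2 DOF translation in the view plane
        self.last = (x, y)

def on_touch_down(scene, x, y):
    obj = scene.pick(x, y)  # ray cast under the touch position
    if obj is not None:
        scene.select(obj)
        return Translate(obj, x, y)  # direct: the object follows the finger
    if scene.selected is not None:
        # touch on empty space with an active selection: indirect translation,
        # finger motion is applied as a relative offset to the selected object
        return Translate(scene.selected, x, y)
    return None  # nothing hit, nothing selected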
4.2 Manipulating Dynamic Objects
Even with input devices supporting the control of many DOF, performance animators often cannot animate all features concurrently. Further, animators will sometimes want to re-do or adjust recorded animation. Multi-track or layered animation can help to split DOF across several passes or to adjust animation recorded in previous passes [2, 15, 16]. We follow Dontcheva et al.'s approach of absolute, additive, and trajectory-relative mappings for layered performance animation [2].
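As an illustration of these three mapping types, consider the following sketch; the per-frame arrays, function names, and the 2D trajectory frame are our simplifications, not the exact formulation of Dontcheva et al.

```python
import numpy as np

def absolute(existing, performed):
    # the new performance replaces the recorded channel outright
    return performed.copy()

def additive(existing, performed):
    # the performance is layered as an offset on top of the recorded motion,
    # relative to the performer's starting pose
    return existing + (performed - performed[0])

def trajectory_relative(positions, offsets):
    # offsets are applied in a moving frame aligned with the recorded
    # trajectory (2D here: tangent/normal of the path)
    out = positions.copy()
    for i in range(len(positions)):
        tangent = positions[min(i + 1, len(positions) - 1)] - positions[max(i - 1, 0)]
        n = np.linalg.norm(tangent)
        t = tangent / n if n > 0 else np.array([1.0, 0.0])
        normal = np.array([-t[1], t[0]])
        out[i] = positions[i] + offsets[i][0] * t + offsets[i][1] * normal
    return out
```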
4.3 Unimanual Camera Control
As changing the view is a common task in 3D modelling, the employed gestures should be as simple and robust to detect as possible. Furthermore, gestures should be usable with either hand and regardless of handedness, and must not conflict with object manipulation gestures. Additionally, view control should work without using special areas or the like, in order to reduce screen clutter and to facilitate the feeling of direct interaction and control. In our system, camera/view control is implemented via two-, three-, and four-finger gestures (see section 3). There is no “optimal” mapping between multi-finger gestures and view control functions. We found good arguments for different choices, and they are to a certain extent subject to individual preferences. One important measure is the frequency of use of a certain view control: one could argue that the more commonly used functions should be mapped to the gestures requiring less user effort, i.e., fewer fingers. A different measure is how common a certain mapping is in related applications. In our experience, changing the distance of the view camera to the scene is the least used view control. We thus decided to map it to the four-finger gesture: users dolly the camera by moving four fingers vertically up or down. Two fingers are used for panning the view and three fingers for rotation (see figure 1).
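A minimal sketch of this finger-count dispatch follows, assuming a hypothetical view object with pan/orbit/dolly methods; these names are ours, not Blender's.

```python
# Sketch of the cluster-size -> view-operation mapping; illustrative only.

class View:
    def pan(self, dx, dy):
        ...  # translate the camera parallel to the view plane

    def orbit(self, dx, dy):
        ...  # rotate the camera around the view centre

    def dolly(self, dy):
        ...  # vertical movement moves the camera towards/away from the scene

def on_view_gesture(view, cluster_size, dx, dy):
    # one-finger input is reserved for object selection and translation
    if cluster_size == 2:
        view.pan(dx, dy)    # two-finger pan
    elif cluster_size == 3:
        view.orbit(dx, dy)  # three-finger rotate
    elif cluster_size == 4:
        view.dolly(dy)      # four-finger dolly (vertical component only)
```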
4.4 Unimanual Time Control
For time control we employ a timeline. We transfer multi-finger gestures into the time domain, reusing gestures from the space domain: one finger allows absolute positioning of the playhead, enabling scrubbing along the timeline; two- and three-finger gestures move the frame window displayed in the timeline forward or backward in time; and four-finger gestures expand or contract the scale of the frame window. We disabled indirect control for the timeline, as this would make absolute jumping to selected frames impossible.
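A sketch of this timeline mapping under the same caveats; the Timeline class, normalised input coordinates, and the window arithmetic are illustrative assumptions.

```python
# Sketch of the timeline gestures; x and dx are normalised to the timeline width.

class Timeline:
    def __init__(self, start=1, end=250):
        self.playhead = start
        self.window = (start, end)  # frame range currently shown

    def frame_at(self, x):
        # absolute mapping from a normalised x position to a frame number
        lo, hi = self.window
        return round(lo + x * (hi - lo))

def on_timeline_gesture(tl, cluster_size, x, dx):
    lo, hi = tl.window
    if cluster_size == 1:
        tl.playhead = tl.frame_at(x)            # scrubbing: absolute positioning
    elif cluster_size in (2, 3):
        shift = round(dx * (hi - lo))
        tl.window = (lo - shift, hi - shift)    # move the frame window in time
    elif cluster_size == 4:
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        half *= (1.0 - dx)                      # expand/contract the window scale
        tl.window = (round(mid - half), round(mid + half))
```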
5 Bimanual Control for View, Space, and Time
We now show how unimanual controls for view, space, and time can be combined for more parallel, efficient bimanual control. We also present three techniques that exploit bimanual interaction to further support common 3D modelling and performance animation tasks: auto-constraints, object pinning, and view pinning.

5.1 Bimanual Object Manipulation
Our system enables simultaneous translation of several objects. This allows quite natural 2 DOF control of separate features with a finger of each hand. For example, an animator can pose two limbs of a character simultaneously rather than sequentially. While this is beneficial for keyframe animation, it is central to performance animation. Thus, a bimanual approach to layering animation theoretically halves the required animation tracks.

Auto-Constraints To support users in performing docking operations, we implemented a technique we term auto-constraints, which leverages bimanual interaction. The user can select one object to act as an anchor, which can be moved freely with one hand. When a second object is moved concurrently with the other hand, the movement of this second object is automatically constrained in a way that helps to align the two (figure 2). In our current implementation we use the axis that connects the centres of both objects (or the geometric centres of the two sets of objects), but an extension to a plane-based constraint would be possible. Currently, auto-constraints are enabled by default during modelling and disabled during animation, as animators usually prefer moving several objects simultaneously and independently over docking.

Fig. 2. Three frames of a temporal sequence (left to right) showing direct symmetric bimanual object manipulation (auto-constraints, axis highlighted for better visibility): both objects are moved simultaneously, one acting as the target object for the other
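A minimal sketch of the axis constraint, assuming world-space centres as float-valued NumPy vectors; the function name and signature are ours.

```python
import numpy as np

def auto_constrained_move(anchor_center, obj_center, delta):
    """Constrain `delta`, the world-space translation of the second object,
    to the axis connecting the two objects' centres."""
    axis = anchor_center - obj_center
    n = np.linalg.norm(axis)
    if n == 0:
        return delta  # degenerate case: centres coincide, leave unconstrained
    axis = axis / n
    return np.dot(delta, axis) * axis  # keep only the component along the axis
```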
5.2 Bimanual Camera and Object Control
Concurrent camera and object control follows Guiard's principle of setting the reference frame in bimanual interaction [6] and has been suggested to improve efficiency as well as to facilitate depth perception via the kinetic depth effect in mouse-based interaction [1]. The requirements of bimanual interaction on a multi-touch display differ somewhat from those of bimanual interaction with indirect input, such as two mice or a touchpad, as independent but simultaneous interaction with each hand can break the direct interaction paradigm. For example, changing the view alters an object's position on the screen. If this object is simultaneously being manipulated, it would move away from the location of the controlling touch. For our system, we developed object pinning to resolve this. Furthermore, we developed view pinning to solve the problem of selecting and manipulating dynamic objects, as can occur in layered performance animation.

Object Pinning While bimanual camera and object control facilitates a parallel workflow and smooth interaction, in a completely unconstrained system this might also lead to confusion and conflicts. By independently changing the view and the controlled object's position freely, users might easily lose track of the orientation of the object and/or camera. Furthermore, users have to incrementally change the view and adjust the object position to get a good sense of the relative positions and distances in the virtual 3D space. A third problem often encountered in 3D modelling is moving a group of “background” objects relative to a “foreground” object. Object pinning is our answer to all three of these problems. By using the screen-space position of the finger touching the object as a constraint and by keeping the distance to the virtual camera, i.e., screen z, constant, the object stays at the same relative position in screen space at the user's finger. Additionally, object pinning enables a “homing in” style of interaction (figure 3) for object docking and alignment. It also enables another level of indirection for transformation operations: by pinning the object to a certain screen-space position, the user can indirectly transform the object in world space by changing the camera. We implemented object pinning for panning and rotating the camera.

Fig. 3. Three frames of a temporal sequence (left to right) showing bimanual camera and object control (object pinning): the DH indirectly controls the object while the NDH simultaneously rotates the view
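A sketch of the pinning update, assuming a hypothetical `camera.unproject(x, y, depth)` helper that maps a screen position and depth back to world space; all names are illustrative.

```python
# Sketch of object pinning: after each camera change, the object is re-placed
# so it stays at the finger's screen position at a constant screen-space z.

def apply_object_pinning(camera, obj, finger_xy, pinned_depth):
    # re-derive the world position from the fixed screen position and depth
    obj.world_position = camera.unproject(finger_xy[0], finger_xy[1], pinned_depth)

def on_view_change(camera, pin):
    # called whenever the NDH pans or rotates the camera while a pin is active
    apply_object_pinning(camera, pin.obj, pin.finger_xy, pin.depth)
```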
View Pinning A typical example of layering the animation of a character is the following: the creation of the trajectory as the first layer, and animation of legs, arms, etc. in subsequent layers [2]. But the more dynamic the trajectory, the harder it becomes to convincingly animate local features from a global reference frame. Thus, explicitly setting a reference frame by aligning the view to it is desirable for multi-track performance animation. View pinning allows easy control of the spatial reference frame by enabling the user to affix the view to the currently selected feature interactively with a multi-finger gesture. In performance animation mode, the two-finger gesture used for panning the view is replaced by view pinning. When the gesture is registered at time t0, the view locks to the feature selected at t0 with the camera-feature offset at t0. When the feature moves, the view moves with it (figure 4). The view stays aligned in this manner, regardless of what is selected and manipulated subsequently, until the animator ends the gesture. The relation to panning the view is maintained: a continuous move of the two-finger input changes the view-feature offset accordingly but keeps it pinned.

Fig. 4. Four frames of a temporal sequence (left to right) showing bimanual asymmetric control of view and space: the NDH fixes a dynamic reference frame relative to the screen for local performance animation by the DH (view pinning)
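A sketch of the view-pinning state, assuming vector-valued positions (e.g., NumPy arrays) and a hypothetical `screen_to_world_vector` helper; names are ours.

```python
# Sketch of view pinning: at gesture start (t0) the camera-feature offset is
# captured; every update moves the camera so it follows the feature.

class ViewPin:
    def __init__(self, camera, feature):
        self.feature = feature
        # offset captured at the moment the two-finger gesture registers (t0)
        self.offset = camera.position - feature.world_position

    def update(self, camera, pan_delta=(0.0, 0.0)):
        # a continuing two-finger move adjusts the offset but keeps the pin
        self.offset += camera.screen_to_world_vector(*pan_delta)
        camera.position = self.feature.world_position + self.offset
```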
5.3 Bimanual Object and Time Control
Bimanual interaction can be a great benefit for time control in classical keyframe animation (as opposed to performative approaches) because it allows rapid switching between frames: by assigning one hand to set the time frame while the other controls space, users can rapidly alternate between time and space control without losing orientation in either dimension, as the hands/fingers act as a kind of physical marker. The scrubbing feature can also support a better overview of the animation. These benefits apply equally to performance animation. However, there are further ways in which bimanual input to the time and space domains can be exploited for the performance approach: with one hand moving continuously through time, the other can act in space simultaneously. This allows fine control of the playhead with one hand, which can move backwards or forwards non-linearly at varying speeds as the user wishes, while the other hand acts in space at the same time.
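As a sketch of this coupling, one input stream drives the playhead while the other supplies poses; the names and the per-tick stream abstraction are our assumptions.

```python
# Sketch of simultaneous time and space control: the NDH stream drives the
# playhead (possibly non-linearly, at varying speed) while the DH stream
# supplies poses for the controlled feature.

def record_while_scrubbing(animation, playhead_frames, dh_poses):
    # both iterables yield one value per UI update tick
    for frame, pose in zip(playhead_frames, dh_poses):
        animation.set_key(frame, pose)  # key the pose at the NDH-chosen frame
```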
6 User Study
We conducted a study to see how people would use our system. Rather than setting tasks, we wanted to observe how users would interact with the system when freely exploring the unimanual and bimanual controls. Thus we designed a session for free 3D modelling and performance animation.

6.1 Setup and Procedure
We tested our system on a tabletop setup. For modelling, we configured the interface to show a 3D view area with button controls on both sides to access legacy functionality. For animation, we configured the interface to show a 3D view area and a horizontal timeline. Six right-handed participants (4 male, 2 female) aged between 23 and 31 years took part in the study. Participants had varying skill levels in modelling and animation, which gave us the opportunity to see how skill levels influence the acceptance of the new input modality. A short verbal introduction was followed by a 5-minute moderated warm-up phase to introduce the participants to the basic modelling functionality. Instructions were given on how to operate individual controls, but not on which hand to use for what, or whether and how to use both hands in combination. This was followed by a 15-minute free modelling phase in which users were asked to model whatever they wanted and to use the tools to their own liking. We repeated the procedure for the performance animation part: once participants had a grasp of the system, they were asked to freely animate a character rig, for which they had 15 minutes.

6.2 Results
Single-touch control for legacy software features worked without problems. The multi-finger gestures for view and time control were detected throughout, with
only one participant encountering problems when fingers were placed too close together for the touch tracking to tell apart. Participants understood the view controls and basic object manipulation well and used them easily. They soon adopted a workflow of quickly switching between view and space control for manipulation in three dimensions; half of the participants did this with a dedicated hand for each control (see below). Selection of occluded and small objects was a problem for many participants. Most were able to overcome this by moving the view closer or around occluding objects. Indirect translation was successfully used to manipulate very small objects or in cluttered arrangements. Generally, inexperienced users had a much harder time comprehending spatial relationships.

For animation, the time controls were used effortlessly to view recorded motion. Five out of six participants explored bimanual control and used it increasingly. All combinations of unimanual controls were employed (view/space, time/space, space/space) without participants receiving any instructions to do so. Three participants operated view or time with a dedicated NDH and space with a dedicated DH. Three used their NDH to pin the view for animating with their DH in character space. One participant even used his NDH to manually operate the playhead whilst concurrently animating a character feature with his DH. Only one participant did not employ any mode of bimanual control.

Auto-constraints were not used as much as anticipated, possibly because we did not pose an explicit alignment task. Object pinning was hardly used; again, this might be due to the fact that we did not construct a specific situation in which this control would prove helpful. However, view pinning for performance animation was easily understood and used, as the benefit of locking the view to a frame of reference was immediately apparent.

Given the short timeframe and their lack of experience in performance animation, participants were able to create surprisingly refined character motion. Multi-track animation was mainly used to animate separate features in multiple passes, less to adjust existing animation. The additive mapping was used after some familiarisation. View pinning was successfully used to enable a more direct mapping in the local character frame of reference, as mentioned above. All participants created fairly complex models and expressive character animations within the short timeframe of 15 minutes each. In general, they stated that they enjoyed using the system.
7 Discussion and Future Work
Our goal was to develop a system that integrates concepts for more parallel, direct, and expressive 3D modelling and animation on multi-touch displays. We now discuss to what extent we have found solutions to the four challenges we derived from this goal: extending a legacy system, finding unimanual controls, establishing bimanual controls, and enabling multi-touch performance animation.

Integrating features into a legacy system. One of the major issues we encountered is that current software packages are not designed for parallel interaction,
as internal event and GUI handling is geared toward single-focus/single-pointer interaction. We presented our modifications to the internal event system to remedy these problems: no strict modality for operators/dialogs, no “shortcuts” such as per-area or per-widget event handling, and context-dependent interpretation of touches, which requires suitable touch aggregation and internal ids for touches and software events/operators.

Finding mappings for unimanual control. We demonstrated how to employ robust, easy to understand, and conflict-free unimanual mappings for view navigation, object manipulation, and timing control. We showed the benefits of using these mappings both for direct and indirect control. It remains to be seen how multi-finger mappings for higher-DOF feature control [10, 8] compare to this.

Developing strategies for bimanual control. Our unimanual mappings also enabled both asymmetric and symmetric bimanual interaction for object manipulation, view, and time control. This results in smoother interaction in the asymmetric case of one hand acting after the other, and it enables true simultaneous interaction for more advanced users. Mappings controlling more than 2 DOF most likely require more mental work on the users' side. The 3D manipulation through alternating 2 DOF control and view change that our system enables potentially provides a good tradeoff between mental load and control. Our user study showed broad acceptance of bimanual combinations. Users automatically took a more parallel approach to the view/space and time/space workflow, with a dedicated hand for each task. For users with a lifetime of training in mainly unimanual operation of computer systems, this is not at all self-evident [19].

Bringing 3D layered performance animation to interactive surfaces. Our user study clearly demonstrated that the direct coupling of input to output is well suited to performance animation. While additive and trajectory-relative control lose some of this directness, view pinning was shown to provide a solution to this problem. With our fully working multi-touch 3D authoring system we have laid the basis for further work in this area.
8 Conclusion
The goal of this work was to bring techniques for more parallel, direct, and expressive modelling and animation into a usable application on interactive surfaces. We addressed this by presenting our working multi-touch system for 3D modelling and animation. We met four challenges that we identified in the course of reaching this goal. We described how we adapted legacy software for parallel multi-touch control. We designed and implemented multi-finger mappings for unimanual manipulation of view, objects, and time. We showed how these can be combined for efficient bimanual control and further presented several new specialised bimanual techniques. Our system also implements real-time performance animation that leverages the directness and expressiveness of multi-touch interaction. Furthermore, we reported on a user study showing that our system is usable even by novice users. Finally, we critically discussed our results and suggested future research directions.
References

1. Balakrishnan, R., Kurtenbach, G.: Exploring bimanual camera control and object manipulation in 3D graphics interfaces. In: Proc. CHI '99. ACM (1999)
2. Dontcheva, M., Yngve, G., Popović, Z.: Layered acting for character animation. ACM Trans. Graph. 22(3) (2003)
3. Esposito, C., Paley, W.B., Ong, J.C.: Of mice and monkeys: a specialized input device for virtual body animation. In: Proc. SI3D '95. ACM (1995)
4. Forlines, C., Vogel, D., Balakrishnan, R.: HybridPointing: fluid switching between absolute and relative pointing with a direct input device. In: Proc. UIST '06. ACM (2006)
5. Forlines, C., Wigdor, D., Shen, C., Balakrishnan, R.: Direct-touch vs. mouse input for tabletop displays. In: Proc. CHI '07. ACM (2007)
6. Guiard, Y.: Asymmetric division of labor in human skilled bimanual action: The kinematic chain as a model. Journal of Motor Behavior 19 (1987)
7. Hancock, M., Carpendale, S., Cockburn, A.: Shallow-depth 3D interaction: design and evaluation of one-, two- and three-touch techniques. In: Proc. CHI '07. ACM (2007)
8. Hancock, M., Cate, T.T., Carpendale, S.: Sticky tools: Full 6DOF force-based interaction for multi-touch tables. In: Proc. ITS '09. ACM (2009)
9. Kaltenbrunner, M., Bovermann, T., Bencina, R., Costanza, E.: TUIO - a protocol for table-top tangible user interfaces. In: Proc. GW 2005. Vannes, France (2005)
10. Martinet, A., Casiez, G., Grisoni, L.: The design and evaluation of 3D positioning techniques for multi-touch displays. In: Proc. 3DUI 2010, pp. 115-118. IEEE (2010)
11. Matejka, J., Grossman, T., Lo, J., Fitzmaurice, G.: The design and evaluation of multi-finger mouse emulation techniques. In: Proc. CHI '09. ACM (2009)
12. Moscovich, T., Hughes, J.F.: Multi-finger cursor techniques. In: Proc. GI '06. Canadian Information Processing Society (2006)
13. Moscovich, T., Hughes, J.F.: Indirect mappings of multi-touch input using one and two hands. In: Proc. CHI '08. ACM (2008)
14. Moscovich, T., Igarashi, T., Rekimoto, J., Fukuchi, K., Hughes, J.F.: A multi-finger interface for performance animation of deformable drawings. In: Proc. UIST '05. ACM (2005)
15. Neff, M., Albrecht, I., Seidel, H.P.: Layered performance animation with correlation maps. In: Proc. EUROGRAPHICS '07 (2007)
16. Oore, S., Terzopoulos, D., Hinton, G.: A desktop input device and interface for interactive 3D character animation. In: Proc. Graphics Interface (2002)
17. Reisman, J.L., Davidson, P.L., Han, J.Y.: A screen-space formulation for 2D and 3D direct manipulation. In: Proc. UIST '09. ACM (2009)
18. Sturman, D.J.: Computer puppetry. IEEE Computer Graphics and Applications 18(1) (1998)
19. Terrenghi, L., Kirk, D., Sellen, A., Izadi, S.: Affordances for manipulation of physical versus digital media on interactive surfaces. In: Proc. CHI '07. ACM (2007)
20. Wilson, A.D.: Simulating grasping behavior on an imaging interactive surface. In: Proc. ITS '09. ACM (2009)
21. Wilson, A.D., Izadi, S., Hilliges, O., Mendoza, A.G., Kirk, D.: Bringing physics to the surface. In: Proc. UIST '08. ACM (2008)
22. Wu, M., Balakrishnan, R.: Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays. In: Proc. UIST '03. ACM (2003)
23. Zeleznik, R.C., Forsberg, A.S., Strauss, P.S.: Two pointer input for 3D interaction. In: Proc. SI3D '97. ACM (1997)