MKM: A global framework for animating humans in virtual reality applications

Franck Multon (1,2), Richard Kulpa (1), Benoit Bideau (1)

(1) M2S, University Rennes 2, av. Charles Tillon, 35044 Rennes, France
(2) Bunraku project, IRISA, Campus de Beaulieu, 35042 Rennes, France

[email protected]

Paper accepted in Presence, November 2007

Keywords

Computer animation, virtual human, motion blending, motion adaptation.

Abstract

Virtual humans are increasingly used in VR applications, but their animation remains a challenge, especially when complex tasks must be carried out in interaction with the user. In many applications involving virtual humans, credible virtual characters play a major role in presence. Motion editing techniques assume that natural laws are intrinsically encoded in prerecorded trajectories and that modifications can preserve them, leading to credible autonomous actors. However, complete knowledge of all the constraints is required to ensure continuity or to synchronize and blend the several actions necessary to achieve a given task. We propose a framework capable of performing these tasks in an interactive environment that can change at each frame, depending on the user's orders. This framework can animate from dozens of characters in real-time under complex constraints to hundreds of characters if only ground adaptation is performed. It offers the following capabilities: motion synchronization, blending, retargeting and adaptation, thanks to an enhanced inverse kinetics and kinematics solver. To evaluate this framework, we have compared the motor behavior of subjects in real and in virtual environments.

Introduction

Virtual reality (VR) generally implies animating believable human-like figures that interact with real users. Some VR applications require numerous characters to move through a virtual environment, such as a team of soldiers or a city in which users can drive cars. Animating such characters raises several technical problems, including automatic motion retargeting, adaptation to the environment, and control. Dynamic models, even if they look promising (Hodgins et al. 1995), cannot solve those problems in interactive environments because they generally entail a computation cost incompatible with interactivity. In contrast, descriptive models are quite fast because they provide the average shape of angular trajectories for well-known motions. Despite their controllability and low computation time, they are generally limited to locomotion (Boulic, Magnenat-Thalmann & Thalmann 1990, Bruderlin & Calvert 1996) and seem difficult to extend to a generic animation model. Motion capture data have also been used to generate new motions from a database of recorded trajectories. Motion graphs (Kovar & Gleicher 2002) were introduced to compute all the possible transitions between the postures recorded in such a database. After a rather long precomputation, this method allows a character to be driven interactively while satisfying complex tasks, such as controlling a boxer to punch various targets (Lee & Lee 2004). However, to ensure realism and achieve a large variety of motions, this method requires recording a very large set of motions, leading to an enormous database. Moreover, the motions stored in the graph generally correspond to a specific skeleton that is not compatible with every virtual human. The same problem holds for methods based on PCA (Principal Component Analysis) (Safonova, Hodgins & Pollard 2004, Glardon, Boulic & Thalmann 2004, Forbes & Fiume 2005), machine learning (Hsu, Pulli & Popovic 2005) and style-based inverse kinematics (Grochow et al. 2004).

An alternative consists in editing captured motions. To this end, displacement maps are now widely used to realistically animate human-like figures of various sizes (Gleicher 1998, Choi & Ko 2000) and under various constraints (Gleicher & Litwinowicz 1998, Lee & Shin 1999). Those techniques generally use inverse kinematics to solve space-time constraints at imposed frames. An iterative process is then used to ensure continuity, which requires knowing all the constraints over the entire animation sequence; this is impossible in interactive environments. Prioritized inverse kinematics and kinetics (Baerlocher & Boulic 2004, Le Callennec & Boulic 2004) make it possible to solve contradictory constraints and to rapidly animate one complete character with many constraints. To reduce computation time, other techniques use efficient inverse kinematics solvers dedicated either to parts of the skeleton (Tolani, Goswami & Badler 2000) or to the entire body (Lee & Shin 1999, Shin et al. 2001). Those methods, though more efficient, are limited to controlling joint positions, without taking balance, dynamics or naturalness into account. Except for simple tasks, it is generally necessary to combine several elementary motions, for example to move through a room while grasping various objects. A common solution is motion blending (Kovar & Gleicher 2003, Mukai & Kuriyama 2005), which computes a weighted sum of several elementary trajectories according to priorities and to the activation/deactivation of constraints. This approach requires dynamic time warping (Witkin & Kass 1995) because the time scales and events of the elementary motions may not correspond, which would otherwise lead to visual artifacts. However, in VR applications it is quite difficult to predict long in advance how the actions will change: the user can request motions to start or stop at any time. As a consequence, a dynamic representation of all the running actions is required; it was first modeled as a stack of actions (Boulic et al. 1997), in which the actions with higher priorities are placed on top. However, when the priority of an action near the bottom of the stack is increased, the whole stack must be restructured, leading to undesirable computation cost.

Commercial packages have been proposed in the past few years to edit captured motions, dealing with motion blending and with adaptation to specific constraints and to various skeletons. In those packages, synchronization is ensured manually by the user. Hence, they are generally dedicated to off-line editing and cannot be used in interactive environments with complex constraints, such as those required in VR. We propose a framework designed to deal with all of those processes in interactive environments by offering an efficient motion representation and efficient constraint solvers. It also provides intuitive and easy-to-use synchronization and blending algorithms. Not only must such a system be easy to use, but it should also make users react as in real situations, to improve presence. We thus present and recall some results obtained with our system for the duel between real handball goalkeepers and virtual opponents.

Overview

The entire framework is based on a morphology-independent representation of motion, presented in (Kulpa, Multon & Arnaldi 2005), that combines Cartesian and angular data and thereby avoids costly inverse kinematics algorithms when retargeting motions. The overall framework is organized as follows (see Figure 1):

[Figure 1 appears here: user orders and the motions Mi feed a Synchronization module producing Param(t); Skeleton Adaptation scales it into L×Param(t); Blending, the constraints solver (taking the user's constraints) and the posture (quaternion) conversion follow, before Rendering.]

Figure 1. Overview of the framework, allowing motion retargeting, synchronization, blending and adaptation to the environment.



• A synchronization module, whose task is to time-scale (dynamic time warping) each selected motion in order to make it compatible with the others for motion blending;

• A method to adapt postures to any kind of human-like figure, based on a representation of posture that is independent from morphology;

• An easy-to-use motion blending algorithm, driven simply by priorities (ranging from 0 to infinity) and states (beginning, active, stopping, inactive), that automatically ensures continuity;

• A novel inverse kinematics and kinetics solver that adapts the resulting posture to constraints (either associated with the motions or defined interactively by the user), which are activated and deactivated continuously to avoid discontinuities in the resulting motion.
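To make the interplay of these modules concrete, here is a minimal, purely hypothetical sketch in Python (all names are invented, and the bodies are stubs rather than the real MKM code) of how a client application could drive such a framework:

    class AnimationEngine:
        """Hypothetical facade over the four modules above (stub only)."""
        def __init__(self, skeleton):
            self.skeleton, self.motions = skeleton, []
        def start(self, motion, priority):
            # a motion enters in the "beginning" state and fades in
            self.motions.append({"motion": motion, "priority": priority, "state": "beginning"})
        def stop(self, motion):
            for m in self.motions:
                if m["motion"] == motion:
                    m["state"] = "stopping"   # fades out, then becomes inactive
        def update(self, dt):
            # per frame: synchronize, adapt skeleton, blend, solve constraints, render
            pass

    engine = AnimationEngine("goalkeeper")
    engine.start("walk", priority=1.0)
    engine.start("grasp", priority=2.0)       # higher priority wins conflicts
    engine.update(1.0 / 30.0)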

As stated above, the whole system is based on a motion representation that allows many enhancements. In Figure 1, Param(t) is filled with values coming from this morphology-independent representation and is provided by each motion at each time step. The skeleton adaptation module simply scales those values by the dimensions of the human-like figure to be animated (yielding L×Param(t)). We recall that L×Param(t) does not contain any direct information on the intermediate joints (such as the knees, elbows and vertebrae), which are only computed in the posture conversion module just before rendering. Let us now recall this representation.

Motion representation

The motion is stored using a normalized representation of the skeleton together with a set of associated constraints (see Kulpa, Multon & Arnaldi 2005 for details). Those constraints are intrinsically linked to the motion, such as foot contacts or the distance between the two hands in bimanual manipulations. Although those constraints are defined off-line, they are adapted and solved in real-time in the virtual environment. Instead of storing joint angles, which are not easy to adapt to new skeletons (requiring iterative processes with many inverse kinematics calls), we store dimensionless data that are independent of the character's dimensions. Thus, retargeting a motion to a new character without any other constraints is simply achieved by scaling those data with the new character's dimensions. Hence, many characters can use the same motion file without complex processes to adapt it to their morphology. Let us consider this description in more detail. First, the human body is subdivided into kinematic subchains that describe parts of the skeleton (see Figure 2). Those kinematic chains are divided into three main parts, as described in (Kulpa, Multon & Arnaldi 2005):

• the normalized segments, composed of only one body segment (such as the hands, the feet, the hips, the clavicle and the scapula); each is stored as the Cartesian position of its extremity in the origin reference frame, divided by the segment's length in the initial skeleton. As each normalized segment is stored this way, we can deal with really different proportions between segments. Hence, it is more suitable than applying angular trajectories, which intrinsically assume equivalent proportions between the original and the target skeletons;

• the limbs with variable length, which encode the upper and lower limbs. In this representation, the intermediate joints (elbows and knees) are not encoded because their position is directly linked to the character's anthropometric properties. Retrieving the position of those intermediate joints for different segment proportions amounts to finding the intersection of two circles in a plane, which can be expressed analytically, as proposed in (Tolani, Goswami & Badler 2000) (a sketch is given after this list);

• and the spine, represented with a spline that can be subdivided into as many segments as desired in the real-time animation module.
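As an aside on the second point, the circle-intersection step admits a direct closed form; the following is a minimal 2D sketch (the real computation works in the limb plane in 3D, which is not handled here):

    import math

    def intermediate_joint_2d(shoulder, wrist, l1, l2):
        """Place the elbow/knee of a two-segment limb (lengths l1, l2) at the
        intersection of two circles, one sketch of the analytic solution cited
        above (Tolani, Goswami & Badler 2000); 2D only, and only one of the
        two mirror solutions."""
        dx, dy = wrist[0] - shoulder[0], wrist[1] - shoulder[1]
        d = max(math.hypot(dx, dy), 1e-9)
        d = min(d, l1 + l2)                        # clamp targets beyond full extension
        a = (l1 * l1 - l2 * l2 + d * d) / (2 * d)  # distance along the shoulder-wrist axis
        h = math.sqrt(max(l1 * l1 - a * a, 0.0))   # offset perpendicular to that axis
        ux, uy = dx / d, dy / d
        return (shoulder[0] + a * ux - h * uy, shoulder[1] + a * uy + h * ux)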

Figure 2. New representation that is independent from the character's anthropometric dimensions.

In this representation, each point of the skeleton can be retrieved relative to the root position. Contrary to classical representations, the instantaneous orientation of the root is divided into two components. The global orientation encodes the global direction of the motion; the local orientation is the additional rotation applied to the global orientation in order to obtain the actual root orientation. During a walk, it represents the pelvis oscillations around the global direction. In order to encode movements without taking anthropometric properties into account, the position of the root is normalized by the leg length. A motion is not limited to a sequence of postures: it is also linked to intrinsic constraints, such as ensuring foot contact with the ground or reaching targets in the environment. All those intrinsic constraints are designed off-line by a user and are stored with the sequence of postures. To model a constraint Ci, several parameters are necessary:

Ci = {CPi, Ti, KCi, Pi, Si}

The first parameter, CPi, is the constrained point. It is attached to a body segment and its position is defined by a 3D local offset from the root of this segment. The next parameter, Ti, is the type of the constraint, among the following: distance between two points (possibly zero, for contacts), orientation of a body segment, and allowed/forbidden area. Depending on the constraint, specific parameters must obviously be added, such as the desired position of the constraint or the dimensions of the restricted area. The next parameter, KCi, defines the kinematic chain associated with the constraint. It allows the user to specify the set of body segments usable to solve the constraint. For example, a constraint C1 could be applied to the right hand and act on all the segments from the hand to the abdomen, while another constraint C2 could involve only the arm and the clavicle. The priority of the constraint, Pi, intuitively indicates the importance of a constraint compared to the others: constraints with low priorities are only satisfied after those with higher priorities. Finally, the user can start and stop constraints, which drives the parameter Si, the state of the constraint (ranging continuously from 0 for deactivated to 1 for fully activated). With this method it is quite simple to make the virtual character react to the user's unpredictable actions by simply tuning priorities. This problem is addressed in the next section. The data describing the posture (using the representation presented above) and the set of constraints are stored together in a common structure called Parami(t) for motion Mi.
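To fix ideas, here is a minimal sketch of how this structure could be laid out in code; the field names mirror the notation above, while the concrete types are our assumptions:

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class ConstraintType(Enum):
        DISTANCE = auto()      # distance between two points (zero for a contact)
        ORIENTATION = auto()   # orientation of a body segment
        AREA = auto()          # allowed/forbidden area

    @dataclass
    class Constraint:
        """One intrinsic constraint Ci = {CPi, Ti, KCi, Pi, Si}."""
        constrained_point: tuple              # CPi: (segment name, 3D local offset)
        ctype: ConstraintType                 # Ti
        kinematic_chain: list                 # KCi: segments usable to solve it
        priority: float                       # Pi, in [0, +infinity)
        state: float = 0.0                    # Si, continuous in [0, 1]
        extras: dict = field(default_factory=dict)  # type-specific data (target, area...)

    @dataclass
    class Param:
        """Parami(t): morphology-independent posture data plus constraints."""
        posture: dict                         # normalized segments, limbs, spine spline
        constraints: list                     # [Constraint, ...]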

Motion synchronization and blending

The main principle of this method is to perform a weighted sum of postures and constraints at each time step (the data are first scaled to fit the new character's dimensions, yielding the L×Param(t) structure in which the intermediate joints, such as the knees, elbows and vertebrae, are still not computed; a small sketch of this weighted sum follows the list below). The main problems thus consist in:

• ensuring compatibility between all the motions selected for motion blending, by analyzing the stances of each of them;

• calculating weights that take the priorities and continuity into account;

• designing an easy-to-use blending algorithm for which the user only has to start and stop motions and provide the system with a single priority for each of them.
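Here is a minimal sketch of the weighted sum itself, assuming the blendable posture data of each motion are flattened into one array; the real weight computation (Ménardais et al. 2004) also enforces continuity and per-kinematic-chain weights, which this simple priority × state weighting ignores:

    import numpy as np

    def blend_params(param_vectors, priorities, states):
        """Per-frame blend of the scaled L×Param(t) vectors of the active motions."""
        w = np.array([p * s for p, s in zip(priorities, states)], dtype=float)
        total = w.sum()
        if total == 0.0:
            raise ValueError("no active motion to blend")
        w /= total                                        # weights sum to one
        return sum(wi * np.asarray(v, dtype=float) for wi, v in zip(w, param_vectors))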

To this end, we have chosen to first synchronize the motions according to their stances, computing and applying dynamic time warping on each of them if required. Each type of stance is encoded as follows: NS for no-support phases, LS (resp. RS) for unipodal left-support (resp. right-support) phases, and DS for double-support phases. Blending two motions m1 and m2 with their corresponding stances s1 and s2 leads to several different cases:

• if s1 = s2, the motions are compatible at this time and their combination leads to a motion with the same stance sb = s1 = s2;

• if an LS is blended with an RS, the result is an incompatibility, encoded with the impossible stance Err. Indeed, a figure that has only its left foot in contact with the ground cannot simultaneously have only its right foot in contact with the ground;

• if both feet are in contact with the ground for m1, we assume that everything is possible for m2, and the resulting stance equals s2. Indeed, when both feet are in contact with the ground, the character can naturally jump (resulting in an NS phase) or lift a foot (resulting in an LS or RS phase).
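These rules translate directly into code; a minimal sketch follows, in which the only case not specified above (NS blended with a single-support stance) is resolved by an explicit assumption:

    from enum import Enum
    from functools import reduce

    class Stance(Enum):
        NS = "no support"      # airborne
        LS = "left support"    # left foot only
        RS = "right support"   # right foot only
        DS = "double support"  # both feet
        ERR = "incompatible"

    def combine(s1, s2):
        """The ⊕ operator on two stances, following the rules above."""
        if Stance.ERR in (s1, s2):
            return Stance.ERR
        if s1 == s2:
            return s1                                # identical stances blend trivially
        if {s1, s2} == {Stance.LS, Stance.RS}:
            return Stance.ERR                        # left-only vs right-only is impossible
        if s1 == Stance.DS:
            return s2                                # double support allows anything
        if s2 == Stance.DS:
            return s1
        # NS with LS/RS is not specified in the text; we assume the supported
        # stance dominates.
        return s2 if s1 == Stance.NS else s1

    def combine_all(stances):
        """sr = s1 ⊕ s2 ⊕ … ⊕ sn for n blended motions."""
        return reduce(combine, stances)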

An operator ⊕ was then introduced to extend those statements to n motions m1..n. The result of this operator belongs to the set {NS, LS, RS, DS, Err}. Hence, the result sr obtained for the blending of n motions is:

sr = s1 ⊕ s2 ⊕ … ⊕ sn

If the result is Err, the motions are not compatible, and dynamic time warping is required for some phases of at least one motion, as described in (Ménardais, Kulpa & Multon 2004). To take interactivity into account, we cannot apply this algebraic relation to the whole sequence but only to the next nk stances (a window sliding over the sequences of stances), assuming the current stance is number k. The following algorithm is then applied iteratively during real-time animation. In this algorithm, we assume that the motions have been synchronized up to step k+nk. The next stance to synchronize is si(k+nk+1) for each motion Mi. This stance must keep the above relation true for all stances j ∈ [k..k+nk+1]. If it does not, the system has to modify the time scale of one or more already synchronized stances si(j), with j ∈ [k..k+nk]. Such a problem occurs only when at least one LS is combined with one RS, the only case that leads to Err. Hence, to solve it, we have to modify the time scales of the motions that exhibit either LS or RS at stance k+nk+1. We assume that motions with high priorities should be less affected by this process than those with low priorities. Consequently, we search for the motion Mj with the highest priority that has either LS or RS (denoted sj(k+nk+1)) at stance k+nk+1, and then change the time scale of the stances ¬sj(k+nk+1) (with ¬LS = RS and ¬RS = LS) of all the remaining motions with lower priorities. Synchronizing the next stance k+nk+1 leads to the following algorithm:

St_result = ⊕i=1:n si(k+nk+1)
If St_result == Err    // only if there are both LS and RS
    // search for the motion with the highest priority that exhibits either LS or RS
    // and modify all motions with lower priority that exhibit the opposite stance
    stance = ∅
    // we assume that motions are stored in increasing order of priority
    For i = n downto 1
        If (stance == ∅) & (si(k+nk+1) ∈ {LS, RS})
            stance = si(k+nk+1)    // Mj found
        else If (si(k+nk+1) == ¬stance)
            TimeScale(si(k+nk))
        end If
    end For
end If
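A runnable counterpart of this pseudo-code, reusing Stance and combine_all from the earlier sketch; the motion container and its time_scale helper are hypothetical stand-ins for the framework's internals:

    def synchronize_next_stance(motions, k, nk):
        """motions are sorted by increasing priority; motion.stances[j] is its
        j-th stance and motion.time_scale(j) enlarges stance j (hypothetical)."""
        idx = k + nk + 1
        if combine_all([m.stances[idx] for m in motions]) is not Stance.ERR:
            return                                    # already compatible, nothing to do
        ref = None                                    # stance of Mj, the highest-priority
        for m in reversed(motions):                   # motion exhibiting LS or RS
            s = m.stances[idx]
            if ref is None and s in (Stance.LS, Stance.RS):
                ref = s                               # Mj found
            elif ref is not None and s == (Stance.RS if ref is Stance.LS else Stance.LS):
                m.time_scale(k + nk)                  # enlarge the previous stance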

Figure 3 illustrates this process for three motions, with nk = 1.

Figure 3. Synchronization of three motions that are not compatible.

In the example of Figure 3, only the first column of stances can be modified because nk = 1. An error occurs at stance k+2. s3 has the highest priority and an LS stance; as a consequence, we have to modify the time scale of all the motions that have a lower priority and an RS stance. We consequently enlarge s2(k+1), and there is no more Err at stance k+2. This method has a limitation for highly dynamic motions. If the character is in double stance in motion M1, we assume that "everything is possible" for motion M2. But if M1 represents the character landing on the ground after a fall, no motion M2 other than absorbing the impact forces is feasible for some duration t, and only then do new motions become possible. For such dynamic cases, the operator should be extended. In practice, however, we encountered few such problematic cases, even though they exist. Moreover, coupling this synchronization module with the motion adaptation technique is coherent because the latter is also limited to kinematic and kinetic constraints. Once all the motions to be blended have compatible stances, the system computes a weight for each motion depending on its priority, its state and the corresponding kinematic chain. Obviously, asking the user to tune those parameters manually is hardly possible, especially in a real-time environment; an automatic calculation of those weights is therefore necessary (see Ménardais et al. 2004 for details). Contrary to motion blending classically applied to angular trajectories, this method operates on the L×Param(t) data structure presented above. The resulting structure must then be adapted to the environment, which leads to a constraint-solving problem.

Constraints solver

All the geometric constraints (such as ensuring foot contact with the ground without sliding) are solved in a common iterative process extended from the one proposed in (Shin et al. 2001). This process decomposes the skeleton into groups for which analytical solutions are available, enhancing performance (see Kulpa, Multon & Arnaldi 2005 for details on the inverse kinematics module). One of the main points is the order in which the groups are used to solve the kinematic constraints: from the lightest groups (the limbs) to the heaviest ones (the trunk). This minimizes the kinetic energy required to solve the constraints and leads to more realistic behaviors. Indeed, it would be totally unrealistic to bend the torso without moving the arms to catch an object placed below the pelvis. We now focus on how this method is adapted to deal with both kinematic and kinetic constraints. The inverse kinetics module is based on the same philosophy as the kinematic one: an iterative process based on analytic solutions for each required group. Hence, as for inverse kinematics, each group has an analytical solution to displace the center of mass (denoted COM) in the desired direction.
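The ordering argument can be made explicit in a few lines; this is a sketch under the assumption that each group object exposes its mass and its analytic solvers (hypothetical names):

    def kinematic_pass(groups, constraints):
        # lightest groups first (limbs before trunk): each correction costs
        # as little kinetic energy as possible
        for g in sorted(groups, key=lambda g: g.mass):
            g.solve_kinematic(constraints)

    def kinetic_pass(groups, com_target):
        # heaviest groups first: a small trunk motion displaces the COM the most
        for g in sorted(groups, key=lambda g: g.mass, reverse=True):
            g.solve_kinetic(com_target)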

Figure 4. COM1 (resp. COM2) is the COM position of segment s1 (resp. s2). COMg is the COM position of the limb.

Figure 4 shows the example of an arm composed of two segments whose lengths are respectively l1 and l2. The COM position of the limb is retrieved by first computing the COM positions of the two segments in the current posture (COM1 and COM2 respectively). These positions are computed from the proximal articulations (the shoulder for an upper arm, the elbow for a forearm) using two percentages, r1 and r2, of the segment lengths. These ratios are taken from anthropometric tables (Zatsiorsky, Seluyanov & Chugunova 1990). The COM position of the whole limb is then computed using another ratio, r3, that depends on the masses of the two segments:

r3 = m2 / (m1 + m2)

To place the COM at position COMg, an analytical solution is computed. Let l be the distance between the proximal point (the shoulder) and the distal point (the wrist) of the limb. Changing the arm configuration can be summed up as two independent operations: a change of its extension (l in Figure 4) and a rotation of the limb (supposed rigid). Hence, to modify the limb's configuration in order to place the COM at the desired position COM'g, the solution consists in first computing a new length l' according to the distance d' between COM'g and the shoulder (see Figure 4). If d' is greater than r3×[(r2l2 + l1) − r1l1] + r1l1, then the limb should be fully extended, so l' is set to the limb's total length. Otherwise, l' is given by:

l' = √((d'² − F) / G)

where

F = l1²[(r1 + r3)² + r3(r2(r1r3 − r1 − r3) + r1(r1r3 − 2(r1 + r3)))]

and

G = r2 r3 (r3 + r1 − r1r3)

Knowing l', it is straightforward to compute the corresponding elbow flexion angle. The limb (in its new configuration) is then simply rotated around the shoulder in order to place the COM at the most convenient position, as a classical CCD algorithm would do.
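Transcribing the formulas above (as reconstructed here) gives a small helper for the limb extension and the resulting elbow flexion; the clamping of edge cases is our addition:

    import math

    def mass_ratio(m1, m2):
        return m2 / (m1 + m2)                                  # r3

    def limb_extension_for_com(d_prime, l1, l2, r1, r2, r3):
        """Shoulder-to-wrist distance l' placing the limb COM at distance
        d_prime from the shoulder, following the formulas above."""
        d_max = r3 * ((r2 * l2 + l1) - r1 * l1) + r1 * l1      # COM distance at full extension
        if d_prime >= d_max:
            return l1 + l2                                     # fully extended limb
        F = l1 * l1 * ((r1 + r3) ** 2
                       + r3 * (r2 * (r1 * r3 - r1 - r3)
                               + r1 * (r1 * r3 - 2 * (r1 + r3))))
        G = r2 * r3 * (r3 + r1 - r1 * r3)
        return math.sqrt(max(d_prime * d_prime - F, 0.0) / G)

    def elbow_flexion(l_prime, l1, l2):
        """Inner elbow angle from the law of cosines, given the new extension l'."""
        c = (l1 * l1 + l2 * l2 - l_prime * l_prime) / (2 * l1 * l2)
        return math.acos(max(-1.0, min(1.0, c)))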

In this iterative inverse kinetics algorithm, we first determine the groups that can be requested to adjust the COM position. The groups that are free of kinematic constraints can be used for the kinetic adaptation, and the user can further restrict adaptation to a subset of these free groups. The selected groups are then considered according to their level in the group hierarchy, from the heaviest (such as the trunk) to the lightest (such as the arms). This strategy can be observed in trained humans: a small movement of the heaviest mass makes the COM move significantly, whereas displacing light masses would require large gestures of numerous body segments to obtain the same effect. One of the main problems in solving both kinematic and kinetic constraints is that the two processes may drive the posture in opposite directions, so that the system would not converge to a solution. In order to minimize the number of required body segments, the groups with the smallest mass are used first for inverse kinematics, and the converse holds for inverse kinetics. In order to solve both the kinematic (Km) and kinetic (Kn) constraints, the two methods are called in a global loop that minimizes both errors at the same time. This loop (see the pseudo-code below) converges to a solution that minimizes a tunable compromise between Km and Kn (see Figure 5).

it = 0
completed = false
Do
    postureKm = kinematicAdaptation()
    postureKn = kineticAdaptation()
    If (ΔkmError_k < thrKm) & (ΔknError_k < thrKn)
        finalPosture = postureKm
        completed = true
    End If
While (it++ < maxIt) & (¬completed)

where maxIt is the maximum number of iterations, and ΔkmError_k and ΔknError_k are the variations of the errors in the kinematic and kinetic constraints respectively, compared against arbitrarily selected thresholds thrKm and thrKn. The algorithm thus ends when neither adaptation modifies the resulting posture any further. It allows the system to ensure that the kinetic constraints are satisfied even if the kinematic constraints are not completely satisfied, as other priority-based inverse kinematics methods do (Le Callennec & Boulic 2004). We assume that kinetic constraints are imposed mainly to maintain balance, and that violating them would lead to unrealistic postures. If the kinematic constraints cannot be satisfied concurrently, it means that they are not reachable in a realistic (balanced) posture, as shown in Figure 5.

Figure 5. Several different postures with two unreachable constraints. On each screenshot, the semi-transparent character also drives its COM to its rest position (high weight for Kn compared to Km). The other character was calculated with a high value of Km compared to Kn, neglecting the control of the COM.

However, this method does not take dynamics into account and may produce artifacts for dynamic motions, but it offers convincing results in many cases. Consider a classical VR use case in which a human carries more or less heavy objects: thanks to the control of the center of mass, the system adapts the posture to the added weight. However, it would not be able to realistically control jumps or other motions involving complex external forces (such as wind or collisions).

Framework application and testing in VR

MKM has been used in several applications including video games, e-learning and virtual reality. In this section, we describe the results obtained in a VR experiment in which a real handball goalkeeper had to stop virtual throws animated with MKM. First, a set of motions was captured with a Vicon-MX motion capture system (Oxford Metrics) composed of 12 cameras sampled at 160 Hz. Reflective markers were placed over standardized anatomical landmarks in order to retrieve the joint centers and the motion of all the body segments. The captured motions were: running with a ball, throwing at various places in a goal in two ways (with and without jumping), and rest motions. All those motions were encoded with the motion representation described in this paper, allowing virtual skeletons of various dimensions to be animated easily and rapidly (see Figure 6).

Figure 6. Motion capture of handball throws and animation of virtual players with different dimensions.

For all the motions, we specified the corresponding constraints: contact of the feet with the ground when necessary, and the position of the wrist to which the ball was attached. After a complete biomechanical analysis of all the collected trajectories, a model of a handball thrower was designed, providing us with a motion for each kind of throw. This model is based on MKM and is controlled through additional constraints. Hence, several operators were implemented through those constraints, such as changing the position of the wrist at ball release, changing the orientation of the trunk, and delaying the ball-release event. A first study demonstrated that the goalkeepers' gestures were similar when stopping virtual thrown balls animated either directly from motion capture or with this model (Bideau et al. 2003). To evaluate this similarity, we computed the correlation between the goalkeeper's arm gestures (the trajectory of the arm's center of mass) in the two situations (real and virtual). We found correlations between 0.96 and 0.98 for the 8 subjects (professional male players), with a very small standard deviation for each subject (about 0.01). Another study showed that the goalkeeper's gestures were affected when at least one of the three modifications above was used (Bideau et al. 2004): the correlation decreased to about 0.80 (from 0.76 to 0.82 for the same 8 subjects). These results are encouraging for using VR to study interactions between humans through interactions with virtual humans animated with MKM. Moreover, as the goalkeepers' gestures are captured, it is also possible to animate the goalkeeper and the thrower concurrently in a common virtual environment, providing a very interesting investigation tool for trainers (see Figure 7).
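For reference, here is a sketch of such a similarity measure, assuming the gestures are sampled as N×3 COM trajectories; the exact statistic used in the studies may differ:

    import numpy as np

    def trajectory_correlation(traj_real, traj_virtual):
        """Pearson correlation between two COM trajectories, averaged over x, y, z."""
        r = [np.corrcoef(traj_real[:, k], traj_virtual[:, k])[0, 1] for k in range(3)]
        return float(np.mean(r))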

Figure 7. The left view depicts a 3D visualization of the complete scene including thrower and goalkeeper (the gestures of the virtual goalkeeper were obtained through motion capture while the actual goalkeeper tried to stop the virtual throws, as in the right view).

Additional tests were carried out to evaluate the performance of MKM when animating several characters in real-time under many constraints. Obviously, computation time depends on the number and the kind of constraints applied to the character. With only one reachable constraint and without controlling the COM position, our system requires 125 µs per character and per time step on a P-IV 2.8 GHz laptop computer. At the other extreme, for two unreachable constraints (which require more iterations) while controlling the center-of-mass position, it requires 1470 µs. As a consequence, on this computer up to 177 characters can be animated at 30 Hz. For all those examples, motion retargeting and ground adaptation were also performed.

Discussion

In this paper we have presented a framework that embeds original algorithms to control human-like figures in interactive environments. The first main contribution of this work is to provide a set of methods based on a representation of motion that is independent from morphology. Thanks to this representation, motion retargeting does not require classical inverse kinematics. Hence, only a minimal database of motions recorded with various actors is required to animate many different characters, without preprocessing such as off-line motion retargeting. Moreover, in VR applications, autonomous characters need to modify their gestures according to the users' orders. Hence, instead of recording numerous motions for various situations, our framework needs only one motion per type of interaction (such as grasping, displacing or punching). An efficient inverse kinematics and kinetics solver modifies the gestures in order to satisfy constraints that can change continuously in the virtual world. This solver enabled us to animate numerous characters of various dimensions at 30 Hz with complex interactive constraints. As a consequence, this framework can be used for crowd simulation or to populate a virtual city with autonomous characters. In this framework, constraints are very intuitive to design and control. In order to grasp an object that moves in the virtual environment, the Cartesian constraint associated with the grasping motion is set to the target's position, and the priority of this constraint is increased continuously to a maximum value as the object must be reached. The autonomous character can thus take objects in many positions and merge other complex motions while preserving balance thanks to the inverse kinetics solver.

Moreover, whether the objects are heavy or light, the system can automatically adapt the posture to preserve balance, by assuming that the mass of the object is added to that of the hand. However, on the one hand, dynamics is not taken into account in this framework: nothing ensures that the resulting motion corresponds to forces and torques that are realistic for humans. On the other hand, the experiments carried out with sports experts demonstrate that the system makes them react realistically to virtual opponents animated with our framework. It could nevertheless be interesting to take dynamics into account in order to make the virtual characters react to external forces in a more convincing way; the main problem consists in finding a method with a computation cost low enough to fit the constraints of many VR applications. To our knowledge, no other framework provides such functionalities (kinematic and kinetic constraint solving, motion retargeting, synchronization and blending) in an interactive environment. Procedural animations are limited to a small set of motions. Motion graphs require a huge database of motions and a lot of manual and automatic preprocessing, and they are limited to reusing motions on characters that have the same anthropometric properties as the original actor. Space-time constraints are not suited to interactive applications because they require knowing all the constraints in advance. Some commercial packages offer part of these functionalities but generally animate a single type of character or cannot deal with complex kinematic and kinetic constraints. We have carried out preliminary experiments to verify that our framework elicits plausible motor behaviors in real users. This work has to be extended to more complex situations because it addresses a fundamental problem in VR: if people do not react as in the real world, it may be a serious problem for many applications, such as education, training or phobia treatment. Testing the motor behaviors of subjects in VR seems to us a very promising approach, and the methods described in this paper are promising for evaluating animation quality. However, this method should be improved to take other parameters into account, not only motor behaviors.

Acknowledgements

The authors wish to thank the reviewers for their constructive comments and suggestions.

References

Baerlocher, P., & Boulic, R. (2004) An inverse kinematic architecture enforcing an arbitrary number of strict priority levels. Visual Computer, 20, 6, 402-417.

Bideau, B., Kulpa, R., Ménardais, S., Multon, F., Delamarche, P., & Arnaldi, B. (2003) Real handball keeper vs. virtual handball player: a case study. Presence, 12, 4, 412-421.

Bideau, B., Multon, F., Kulpa, R., Fradet, L., Arnaldi, B., & Delamarche, P. (2004) Virtual reality, a new tool to investigate anticipation skills: application to the goalkeeper and handball thrower duel. Neuroscience Letters, 372, 1-2, 119-122.

Boulic, R., Magnenat-Thalmann, N., & Thalmann, D. (1990) A global human walking model with real-time kinematic personification. Visual Computer, 6, 6, 344-358.

Boulic, R., Becheiraz, P., Emering, L., & Thalmann, D. (1997) Integration of motion control techniques for virtual humans and avatars realtime animation. Proceedings of the ACM International Symposium VRST, 111-118.

Bruderlin, A., & Calvert, T. (1996) Knowledge-driven, interactive animation of human running. Proceedings of Graphics Interface, 213-221.

Choi, K.J., & Ko, H.S. (2000) Online motion retargeting. The Journal of Visualization and Computer Animation, 11, 5, 223-235.

Forbes, K., & Fiume, E. (2005) An efficient search algorithm for motion data using weighted PCA. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 67-76.

Glardon, P., Boulic, R., & Thalmann, D. (2004) PCA-based walking engine using motion capture data. Proceedings of IEEE Computer Graphics International, 292-298.

Gleicher, M. (1998) Retargetting motion to new characters. Proceedings of the ACM SIGGRAPH annual conference, 33-42.

Gleicher, M., & Litwinowicz, P. (1998) Constraint-based motion adaptation. Journal of Visualization and Computer Animation, 9, 2, 65-94.

Guo, S., & Roberge, J. (1996) A high-level control mechanism for human locomotion based on parametric frame space interpolation. Proceedings of the Eurographics Workshop on Computer Animation and Simulation, 95-107.

Grochow, K., Martin, S.L., Hertzmann, A., & Popovic, Z. (2004) Style-based inverse kinematics. ACM Transactions on Graphics, 23, 3, 522-531.

Hodgins, J., Wooten, W., Brogan, D., & O'Brien, J. (1995) Animating human athletics. Proceedings of the ACM SIGGRAPH annual conference, 71-78.

Hsu, E., Pulli, K., & Popovic, J. (2005) Style translation for human motion. ACM Transactions on Graphics, 24, 3, 1082-1089.

Kulpa, R., Multon, F., & Arnaldi, B. (2005) Morphology-independent representation of motions for interactive human-like animation. Computer Graphics Forum, 24, 3, 343-352.

Kovar, L., & Gleicher, M. (2002) Motion graphs. ACM Transactions on Graphics, 21, 3, 473-482.

Kovar, L., & Gleicher, M. (2003) Flexible automatic motion blending with registration curves. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 214-224.

Le Callennec, B., & Boulic, R. (2004) Interactive motion deformation with prioritized constraints. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 163-171.

Lee, J., & Lee, K.H. (2004) Precomputing avatar behavior from human motion data. Proceedings of the Eurographics/ACM SIGGRAPH Symposium on Computer Animation, 79-87.

Lee, J., & Shin, S.Y. (1999) A hierarchical approach to interactive motion editing for human-like figures. Proceedings of the ACM SIGGRAPH annual conference, 39-48.

Ménardais, S., Multon, F., Kulpa, R., & Arnaldi, B. (2004) Motion blending for real-time animation while accounting for the environment. Proceedings of IEEE Computer Graphics International, 156-159.

Ménardais, S., Kulpa, R., & Multon, F. (2004) Synchronization of interactively adapted motions. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 325-335.

Mukai, T., & Kuriyama, S. (2005) Geostatistical motion interpolation. ACM Transactions on Graphics, 24, 3, 1062-1070.

Safonova, A., Hodgins, J., & Pollard, N. (2004) Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces. ACM Transactions on Graphics, 23, 3, 514-521.

Shin, H.J., Lee, J., Shin, S.Y., & Gleicher, M. (2001) Computer puppetry: an importance-based approach. ACM Transactions on Graphics, 20, 2, 67-94.

Tolani, D., Goswami, A., & Badler, N. (2000) Real-time inverse kinematics techniques for anthropomorphic limbs. Graphical Models, 62, 353-388.

Wang, L.-C.T., & Chen, C.C. (1991) A combined optimization method for solving the inverse kinematics problem of mechanical manipulators. IEEE Transactions on Robotics and Automation, 7, 4, 489-499.

Witkin, A., & Kass, M. (1995) Motion warping. Proceedings of the ACM SIGGRAPH annual conference, 105-107.

Zatsiorsky, V., Seluyanov, V., & Chugunova, L.G. (1990) Methods of determining mass-inertial characteristics of human body segments. In: Contemporary Problems of Biomechanics, Moscow: Mir Publishers, 273-291.

Figure captions

Figure 1. Overview of the framework, allowing motion retargeting, synchronization, blending and adaptation to the environment.
Figure 2. New representation that is independent from the character's anthropometric dimensions.
Figure 3. Synchronization of three motions that are not compatible.
Figure 4. COM1 (resp. COM2) is the COM position of segment s1 (resp. s2). COMg is the COM position of the limb.
Figure 5. Several different postures with two unreachable constraints. On each screenshot, the semi-transparent character also drives its COM to its rest position (high weight for Kn compared to Km). The other character was calculated with a high value of Km compared to Kn, neglecting the control of the COM.
Figure 6. Motion capture of handball throws and animation of virtual players with different dimensions.
Figure 7. The left view depicts a 3D visualization of the complete scene including thrower and goalkeeper (the gestures of the virtual goalkeeper were obtained through motion capture while the actual goalkeeper tried to stop the virtual throws, as in the right view).