Interactive control of physically-valid aerial motion: application to a VR training system for gymnasts

Franck Multon∗ (M2S/Bunraku, Univ. Rennes 2/IRISA), Ludovic Hoyet† (Bunraku, IRISA), Taku Komura‡ (Informatics, Edinburgh Univ.), Richard Kulpa§ (M2S, Univ. Rennes 2)

Abstract

This paper proposes a new method to animate aerial motions in interactive environments while taking dynamics into account. Classical approaches are based on spacetime constraints and require complete knowledge of the motion. In Virtual Reality, however, the user's actions are unpredictable, so such techniques cannot be used. In this paper, we deal with the simulation of gymnastic aerial motions in virtual reality. A user can directly interact with the virtual gymnast thanks to a real-time motion capture system: the user's arm motions are blended with the original aerial motion in order to verify their consequences on the virtual gymnast's performance. A user can thus select an initial motion, an initial velocity vector, an initial angular momentum, and a virtual character. Each of these choices has a direct influence on mechanical quantities such as the linear and angular momentum. We have therefore developed an original method that adapts the character's pose at each time step in order to make these quantities compatible with mechanical laws: the angular momentum is constant during the aerial phase and the linear momentum is determined at take-off. Our method can animate up to 16 characters at 30 Hz on a common PC. In summary, our method solves kinematic constraints, retargets motion, and corrects it to satisfy mechanical laws. The virtual gymnast application described in this paper is promising for helping athletes understand which postures during the aerial phase lead to better performance.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Animation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual Reality; I.3.7 [Computer Graphics]: Applications

Keywords: virtual human, dynamics, interactivity, motion control, sports application

1 Introduction

Motion capture is now widely used to animate human-like figures, although it requires post-processing to adapt the trajectories to various skeletons and environments. In many applications, it is also possible to combine several motions in order to perform complex tasks. However, the physical correctness of the motions is usually affected [Safonova and Hodgins 2005] and, as a result, the final animations look unnatural.

An interesting case study is gymnastic motion, which is mainly governed by dynamics: during aerial motions, the angular momentum is constant and the trajectory of the center of mass (COM) is completely determined. Although these dynamic constraints are very restrictive (no additional forces are possible), gymnasts are able to initiate twists in the air (as in the falling-cat problem) and to control their angular velocity. Purely kinematic models fail to reproduce this kind of behavior accurately.

Models dealing with dynamics are generally based on specific controller design or on optimization. The former leads to long hand-tuning for each new situation, while the latter generally requires a lot of computation time. Imagine that a trainer wants to teach new gymnasts the biomechanics of aerial motions. He wishes to demonstrate that a motion of the arm during a somersault creates a twist. Drawing stick figures on a board or looking for videos to explain this phenomenon is not always convincing, as the students cannot manipulate it. Hence, VR could be a very promising tool for teaching them the "mechanics of gymnasts". Beyond the problem of designing a convincing virtual environment, the animation engine should adjust the captured human motion according to the movements performed by the subject using the VR system: a new morphology (to evaluate the influence of body masses and inertias), additional motions (such as moving an arm or bending the trunk), and modified initial conditions (velocity vector and angular momentum at take-off). With this approach, gymnasts can use our system to understand which parameters (posture and initial velocity) lead to better performance during the aerial phase. A beginner can thus learn how specialists perform complex motions and test them in VR (with natural gestures, without invasive interfaces) without being in danger. Trainers and specialists can also test new figures in VR that could not otherwise be tried safely.

In this paper, we propose a new method that is able to deal with all these constraints in real-time. Hence, a user can interact with a virtual gymnast by modifying his gestures and verifying the consequences on the resulting whole-body motion.

2 Related works

Recently, several works have focused on combining simulation and motion capture in order to make characters react to collisions or pushes [Zordan et al. 2005]. The main idea is to let simulation drive the human skeleton as a passive hierarchy of rigid bodies and concurrently search a database for a compatible motion that can be blended with the simulation. Such methods are designed for dealing with collisions and shocks, and cannot be used to adapt a motion to continuous mechanical constraints.

∗e-mail: [email protected]
†e-mail: [email protected]
‡e-mail: [email protected]
§e-mail: [email protected]

Spacetime constraints were introduced to control human-like figures through an optimization process that takes both kinematics and dynamics into account [Safonova et al. 2004]. The optimization is mainly driven by a cost function that has to be minimized. Consequently, these methods are very sensitive to local minima and suffer from huge computation costs. To avoid the former limitation, some authors have proposed to control simpler dynamic constraints (such as the linear and angular momentum [Liu and Popović 2002] or aggregate-force constraints [Fang and Pollard 2003]) instead of joint torques. Although the computation cost is significantly reduced compared to full-dynamics optimization, these methods are mainly used as a pre-process [Abe et al. 2004; Sulejmanpasic and Popović 2005], designed for a fixed skeleton with a fixed set of body segment parameters (mass and inertia).

Copyright © 2007 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail [email protected]. VRST 2007, Newport Beach, California, November 5–7, 2007. © 2007 ACM 978-1-59593-863-3/07/0011 $5.00

The idea of controlling simpler dynamic constraints instead of driving joint torques has also been explored without optimization. For example, it is possible to scale the forces computed from the original motion capture data in order to satisfy new constraints [Pollard and Behmaram-Mosavat 2000]. Shin et al. [2003] proposed to verify that the Zero Moment Point stays within the base of support throughout the contact phase; the posture is adapted using displacement maps, as is done for purely kinematic constraints [Gleicher and Litwinowicz 1998]. Our method falls into this category. In this paper, we focus on the aerial phase, but we control both the linear and angular momentum. Using displacement maps requires an iterative process that involves a lot of computation and complete knowledge of the motion during the aerial phase. The main idea here is instead to solve the dynamic constraints at each time step, because the interactive environment may change due to the actions of the user (here, his arm motions are added to the current animation in real-time).

3 Overview

The overall process is depicted in figure 1. A user selects a motion capture file from a database of aerial motions. He then chooses a virtual human. Before running the simulation, the user can also provide an initial velocity vector at take-off that is different from the original one.

Figure 1: Overview of the interactive demo.

During the animation, the gestures of the subject are captured with a real-time motion capture system. His arm gestures can thus be added to the original motion in order to see their consequences on the resulting somersaults and twists. The animation engine takes the following steps:

• Motion retargeting (to adapt the motion to the skeleton) and blending (to add the subject's motions to the current pose at each time step), which together compute a pose compatible with all the kinematic constraints. Motion retargeting is performed with the method proposed in [Kulpa et al. 2005]: the motion is stored in a morphology-independent representation whose adimensional data are scaled to fit the character's dimensions. Motion blending is performed by continuously tuning the weights associated with each motion [Boulic et al. 1997].

• The COM dynamics module, which computes the initial angular momentum L_0 (constant over the aerial phase) and the physically-valid trajectory of the COM. The angular momentum is calculated using the first two time steps after take-off:

    L_0 = \sum_{i=1}^{n} \left( I_i \omega_i + m_i \, GG_i \times \dot{GG}_i \right)    (1)

where n is the number of body segments, and m_i, I_i and \omega_i are the mass, inertia and angular velocity of segment i, whose center of mass is denoted by G_i.

• Angular momentum correction, which adapts the motion provided by the retargeting and blending module in order to ensure that the angular momentum is equal to L_0. This module is described in section 4.

• COM correction, which adapts the whole-body translation in order to follow the trajectory computed by the COM dynamics module. Let ∆COM be the difference, at each time step, between the actual COM and the one calculated by the COM dynamics module. To correct the COM position, we simply add −∆COM to the location of the root of the skeleton.

Let us now consider the angular momentum correction module.

4 Angular momentum correction

Imagine that the character's proximal segments (such as the trunk) have larger relative masses than those of the original actor. This leads to larger inertias than those of the actor. As the inertia is larger, the angular velocity of the character should be smaller than that of the actor for the same poses. The aim of the whole-body rotation correction is to take this into account: only the angular velocity is adjusted, while the pose is kept unchanged.

We compute the total angular momentum using equation 1 for the first frame after take-off. Let L be this value. L is supposed to remain constant during the rest of the aerial phase, but it actually changes at each frame of the simulation because of the modifications applied by the user. Let \hat{L} be the actual value of L at each time step (\hat{L} is not constant). In our approach, we wish to keep the whole-body posture unchanged; therefore, only \omega_1, the angular velocity of the root, is adapted. For each body segment i we have \omega_i = \omega_1 + \omega_{i/1}, where \omega_{i/1} is the angular velocity of segment i with respect to the first segment (the root). Let q be the vector containing all the rotational degrees of freedom of the character, and q_i = (\psi_i, \theta_i, \phi_i) the vector containing the rotational degrees of freedom of segment i: q = \{q_i\}_{i=1..N}. Equation 1 becomes:

    L_i = I_i (\omega_1 + \omega_{i/1}) + f_i(q, \dot{q})    (2)

where f_i(q, \dot{q}) = m_i \, GG_i \times \dot{GG}_i.

In our approach, we only wish to tune \omega_1. Consequently, q_{j \neq 1} and its derivative are considered known for every segment except the root q_1. Hence, f_i(q, \dot{q}) can be expressed as a function of q_1 and \dot{q}_1 only. Moreover, as we know q_1 at time t and wish to adapt it at time t + \Delta t, the only unknown in this function is \dot{q}_1. \omega_1 can also be expressed as a function of \dot{q}_1. Equation 2 becomes:

    L_i - I_i \omega_{i/1} = I_i Rw_i \dot{q}_1 + f_i(\dot{q}_1)    (3)

To simplify the computations, we propose to linearize f_i:

    \Delta f_i(\dot{q}_1) = J_i \Delta \dot{q}_1    (4)

with J_i = \partial f_{i\{x,y,z\}} / \partial \{\psi_1, \theta_1, \phi_1\}, which can be obtained numerically with centered finite differences, such as (f_i(\psi_1 + \delta\psi, \theta_1, \phi_1) - f_i(\psi_1 - \delta\psi, \theta_1, \phi_1)) / (2\delta\psi). Equation 3 then becomes:

    L_i = I_i Rw_i \dot{q}_1 + \hat{f}_i(\dot{q}_1) + J_i (\dot{q}_1 - \hat{\dot{q}}_i) + I_i \omega_{i/1}    (5)

where \hat{f}_i(\dot{q}_1) stands for the current value of f_i(\dot{q}_1) (leading to a non-constant value of L). With this equation, the total expected angular momentum becomes:

    L = \sum_{i=1}^{N} \left( I_i \omega_{i/1} - J_i \hat{\dot{q}}_i + \hat{f}_i(\dot{q}_1) \right) + \left( \sum_{i=1}^{N} (I_i Rw_i + J_i) \right) \dot{q}_1    (6)

If we wish to calculate \dot{q}_1(t + \Delta t) according to the imposed angular momentum L, it follows that:

    \dot{q}_1 = A^{-1} (L - B)    (7)

where A = \sum_{i=1}^{N} (I_i Rw_i + J_i) and B = \sum_{i=1}^{N} \left( I_i \omega_{i/1} - J_i \hat{\dot{q}}_i + \hat{f}_i(\dot{q}_1) \right).

As the previous equations involve several mathematical and numerical approximations, taking the result of equation 7 directly may create small discontinuities in the resulting motion. To reduce this effect, we apply a low-pass filter. As we target real-time interactive animations, we have to deal with the compromise between computation time and accuracy: calculating J_i accurately leads to numerous computations that obviously affect the performance of the algorithm, whereas adding a second-order low-pass filter is less time-consuming, although it only approximates what the physical laws state.

5 Interactive VR training system for gymnasts

The method described above runs very fast and does not require complete knowledge of the whole sequence: it simply adapts the motion at each frame. It can therefore be used in an interactive environment where a user modifies the character's gestures. In gymnastics, this could correspond to a change of the arm gestures in order to evaluate the impact on the global performance (the number of somersaults and twists). We have thus designed a VR environment in which a user controls the virtual character's arm motions by naturally moving his own arms (see figure 1).

A VICON-MX real-time motion capture system (a product of Oxford Metrics) is used to measure the displacement of the user's arms. Each subject first performs a "range of motion" trial involving the rotation of all his degrees of freedom. The resulting file is manually labeled so that the system can automatically recognize the markers during the real-time process. The system provides the 3D position of each marker over the network to the arm-gesture reconstruction module. The real-time frame rate is set to 50 Hz and the latency is estimated at 3 ms.

The subject can then perform free arm motions. In this demo, the motion of the rest of the body is not used. Each arm motion is blended into the current animation: the original arm motion is associated with a very low weight while the captured one has a large weight, so the motion of the arm is almost the same as the captured one. In addition, the user can tune the flight duration. The selected original motion is linearly time-warped in order to fit this new flight duration, and the COM trajectory is adapted to follow the physically-valid trajectory for this duration (a parabola with an acceleration equal to gravity).

At any moment before the jump, the user can select a character from a database containing several kinds of characters with various anthropometric properties. The motion is then adapted to the character. First, the trajectories are scaled to fit the character's anthropometric properties, as proposed in [Kulpa et al. 2005]; at this stage, the motion is certainly not valid from the mechanical point of view. Second, the method described in this paper is used to ensure that the angular momentum remains constant by adapting the angular velocity, and the COM position is translated at each frame to follow the required parabola.

6 Results

Once the initial motion is scaled to the character and blended with the user's gestures, the acceleration of the COM is not strictly equal to gravity and the angular momentum is not constant. After correction of the whole-body rotation, we obtain a nearly constant value (see figure 2). The computation time for this demo was about 2 ms per frame for the angular velocity adjustment. This means that up to 16 characters at 30 Hz (without rendering) could be computed concurrently on a Pentium D 3.4 GHz with 2 GB of memory and a Quadro FX 3450 graphics card. In our application, the user's arm gestures are blended with the current animation, so the total angular momentum is affected more strongly than in the previous results. In this paper, we show results based on the modification of a prerecorded somersault composed with a left-arm, a right-arm, and a two-arm motion (see figures 4a, 4b and 4c).

Figure 2: Original (solid line) and modified (circles) angular momentum of a motion retargeted to a character with a different morphology. No other modification is performed. The whole-body rotation is adapted to make the angular momentum constant.

Compared to other approaches, the main approximation of our method is the use of the Jacobian to determine the global angular velocity. This linear approximation could lead to inaccuracies if the current angular momentum is too far from the corrected one (as in inverse kinematics when the constraint is too far away). We tested several differences (potential errors in angular momentum): 0.01, 0.1, 1 and 10 kg·m²·s⁻¹. These differences lead to errors of less than 0.01 in the resulting angular momentum (evaluated with the Jacobian), which is acceptable for our application.
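To make the correction step concrete, the following standalone sketch mirrors the structure of equations 1, 4 and 7 on a hypothetical three-segment model. The segment masses, inertias, offsets and internal velocities are invented for illustration, and the segment kinematics are simplified so that the total angular momentum is an affine function of the root angular velocity; the Jacobian is estimated by centered finite differences (as in equation 4) and the root velocity is recovered with a linear solve (as in equation 7). This is a sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-segment character (all numbers made up for illustration).
masses   = [30.0, 8.0, 8.0]                           # kg
inertias = [np.diag(d) for d in ([1.2, 1.0, 0.4],     # kg.m^2
                                 [0.10, 0.10, 0.02],
                                 [0.10, 0.10, 0.02])]
offsets  = [rng.standard_normal(3) * 0.3 for _ in range(3)]  # GG_i vectors (m)
w_rel    = [rng.standard_normal(3) * 0.5 for _ in range(3)]  # rad/s, vs. root
v_rel    = [rng.standard_normal(3) * 0.2 for _ in range(3)]  # m/s, internal motion

def total_angular_momentum(w_root):
    """Eq. (1): L = sum_i I_i w_i + m_i GG_i x (d/dt GG_i),
    with w_i = w_root + w_rel_i and d/dt GG_i = w_root x GG_i + v_rel_i."""
    L = np.zeros(3)
    for m, I, r, wr, vr in zip(masses, inertias, offsets, w_rel, v_rel):
        w_i = w_root + wr
        v_i = np.cross(w_root, r) + vr
        L += I @ w_i + m * np.cross(r, v_i)
    return L

def jacobian_fd(f, x, eps=1e-5):
    """Centered finite differences, as in eq. (4)."""
    J = np.zeros((3, 3))
    for k in range(3):
        dx = np.zeros(3)
        dx[k] = eps
        J[:, k] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

# Impose a target angular momentum and solve for the root angular velocity,
# mirroring eq. (7): w_root = A^-1 (L_target - B), with B the momentum
# produced by the internal motion alone.
L_target = np.array([5.0, -2.0, 1.0])  # kg.m^2/s, arbitrary target
A = jacobian_fd(total_angular_momentum, np.zeros(3))
B = total_angular_momentum(np.zeros(3))
w_root = np.linalg.solve(A, L_target - B)

print(np.allclose(total_angular_momentum(w_root), L_target))  # True
```

Because the toy momentum is affine in the root velocity, the finite-difference Jacobian is exact and a single solve reaches the target; in the paper's setting the linearization only holds locally, which is why the correction is re-solved at every frame and low-pass filtered.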

Some papers in sports science clearly explain and demonstrate the phenomenon illustrated in our example [Yeadon 1993]: creating a twist during the aerial phase by moving an arm. We also performed experiments with elite trampoline athletes, in collaboration with the French Federation of Gymnastics. The results show that moving the arms can produce half a twist during a somersault on a trampoline (jumps up to 8 meters high), requiring a change of almost 5 kg·m²·s⁻¹ in angular momentum along the twist axis. In this paper, we obtained a quarter twist for floor gymnastics with a maximum change of 9 kg·m²·s⁻¹ along this axis, but the jumps were less than 2 meters high compared to the 8-meter jumps on the trampoline. This result is thus compatible with those obtained during our experiments and with others published in biomechanics [Yeadon 1993].

Figure 4: Somersault composed with a) a fast right-arm motion → large twist, b) a late/slow left-arm motion → negligible twist, c) a fast two-arm motion → no twist.

All the previous results were obtained in the VR environment with a single skeleton. Figure 3 depicts a result obtained with three different skeletons: the user's motions are added to the three characters concurrently. All the characters are initialized with the same original motion but with different initial linear momenta (according to their global size).

Figure 3: Somersault composed with a left-arm motion for three different characters in real-time.

7 Conclusion

The method presented in this paper is a promising way to control the dynamic motion of virtual humans in VR. However, this ongoing work has many perspectives. One of the most important consists in taking the user's whole-body motion into account instead of only the arms. A limitation of our current work concerns take-off and landing: when contact with the ground occurs, the motion is not modified, which could lead to unrealistic accelerations. We are currently working on a method to deal with the connection between the contact and aerial phases. It could consist in coupling motion warping with a search in a database of possible reactions at landing.

We have also developed methods to compensate for a change of angular momentum by recruiting other body segments. For example, a motion of the left arm can be compensated by a symmetric right-arm motion. This is useful if the user wants to keep the current angular velocity while moving the arms. The system will be used in the Department of Sports Sciences of the University of Rennes 2 in order to teach biomechanics in gymnastics.

Acknowledgements

The authors wish to thank Professor Shigeo Morishima of Waseda University, Japan, for some of the motion capture data. We also thank the Conseil Régional de Bretagne, the Conseil Général and Rennes Métropole for their financial support.

References

ABE, Y., LIU, C., AND POPOVIĆ, Z. 2004. Momentum-based parameterization of dynamic character motion. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 173–182.

BOULIC, R., BECHEIRAZ, P., EMERING, L., AND THALMANN, D. 1997. Integration of motion control techniques for virtual human and avatar real-time animation. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, ACM Press, 111–118.

FANG, A., AND POLLARD, N. 2003. Efficient synthesis of physically valid human motion. ACM Transactions on Graphics 22, 3, 417–426.

GLEICHER, M., AND LITWINOWICZ, P. 1998. Constraint-based motion adaptation. Journal of Visualization and Computer Animation 9, 2, 65–94.

KULPA, R., MULTON, F., AND ARNALDI, B. 2005. Morphology-independent representation of motions for interactive human-like animation. Computer Graphics Forum (Eurographics 2005 special issue) 24, 3, 343–352.

LIU, C., AND POPOVIĆ, Z. 2002. Synthesis of complex dynamic character motion from simple animations. ACM Transactions on Graphics 21, 3 (July), 408–416.

POLLARD, N., AND BEHMARAM-MOSAVAT, F. 2000. Force-based motion editing for locomotion tasks. In Proceedings of the IEEE International Conference on Robotics and Automation.

SAFONOVA, A., AND HODGINS, J. 2005. Analyzing the physical correctness of interpolated human motion. In SCA '05: Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, ACM Press, 171–180.

SAFONOVA, A., HODGINS, J., AND POLLARD, N. 2004. Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces. In Proceedings of ACM SIGGRAPH.

SHIN, H., KOVAR, L., AND GLEICHER, M. 2003. Physical touch-up of human motions. In Proceedings of Pacific Graphics.

SULEJMANPASIC, A., AND POPOVIĆ, J. 2005. Adaptation of performed ballistic motion. ACM Transactions on Graphics 24, 1 (January), 165–179.

YEADON, M. 1993. The biomechanics of twisting somersaults. Part III: Aerial twist. Journal of Sports Sciences 11, 209–218.

ZORDAN, V., MAJKOWSKA, A., CHIU, B., AND FAST, M. 2005. Dynamic response for motion capture animation. ACM Transactions on Graphics 24, 3, 697–701.
