Real-time clothed humans for virtual environments

Isaac Rudomin, Jose Luis Castillo, Lourdes Muñoz, and Moises Alencastre

Department of Computer Science, ITESM-CEM, Carretera Lago de Guadalupe Km. 3.5, Atizapán de Zaragoza, Estado de México, C.P. 52926, México
[email protected], [email protected]
http://cic087.cem.itesm.mx/~rudomin/
Abstract. A virtual environment application that includes a clothed human as an avatar is being developed. In this article we describe the overall application, how the real-time clothing simulation module was implemented, and the technical issues that had to be solved to integrate it into a VR application.
1 Introduction
Including real-time simulations of clothed humans in virtual environments is a challenging subject. In addition to the real-time performance that the simulation must deliver, many other things must happen in the environment, and yet the simulation must remain stable and robust.

Incorporating the simulation was work performed for a virtual reality class at ITESM-CEM, and is still work in progress. It was decided that the class project would involve the following scenario: a clothed avatar would be included in an urban scene with a building on a corner containing a gallery. It would be snowing if the user so desired. The scene would be populated by clones, that is, crowds consisting of different Quake 2 [3] characters involved in different activities both outside and inside this building [4]. In the building there would be an elevator that would take the user to other scenarios (see figure 1).

Early in the design process, we decided to develop the application using VR-Juggler [11] and OpenGL-Performer [10] combined with some OpenGL [8] modules (basically the clothed character and the crowds), and to use the two desktop systems we had available:

– A desktop PC: two Pentium III 800 MHz processors (we only use one), 1 GB RAM, Elsa DCC with an Nvidia Quadro Pro chip and 64 MB at 1280x1024, running Red Hat Linux 7.1; and
– An SGI Octane2: R12000 360 MHz processor, 512 MB RAM, Odyssey graphics system with 32 MB at 1280x1024, running IRIX 6.5.

In this document, we explain the technical issues encountered during the implementation of the clothed human and its integration into a virtual reality application. We will discuss the clone subsystem elsewhere.
Fig. 1. VE with clothed human and clones
2 Clothed Human
We have been working on a hybrid geometrical-physical approach based on the use of ellipsoids for collision detection and a spring-mesh description of the garments. The main advantage of a hybrid system is that it can effectively simulate the movement of cloth while remaining relatively easy to implement.

Simulating cloth in real time, however, has to meet several requirements, among them stability, robustness and, of course, speed (see [1], [2], [12]). We need stability because, whatever the conditions during the simulation, the cloth must behave naturally, predictably and correctly. We also require robustness because interactivity means changing conditions, which must be handled properly whatever they are. Speed is an essential aspect of any real-time system.

There are two major obstacles to stable and robust real-time performance: the cloth model and collision detection. Accurate cloth simulation systems are too slow to be used in real-time applications. On the other hand, over-simplified cloth models have problems of their own, such as super-elasticity or high compression. For virtual reality and other interactive simulations, the cloth model must be general enough to yield natural behaviour but simple enough to be simulated at high rates. Collision detection calculations slow down the simulation, most notably when the cloth covers a constantly moving object, as is the case with clothing on a character.

In previous work we presented one approach to the collision detection problem ([6], [7], [13], [14]). A group of implicit ellipsoids is defined that approximates the shape of a human character. A scalar field is generated from those ellipsoids and evaluated at every point of the cloth, after which the points are moved accordingly. Since a different scalar field may be used for every piece of garment, no collision detection calculations are needed between garments, allowing the use of several layers of clothing. We then extended this technique to work at interactive rates. We will briefly summarise how this was done and then, in the next section, how we proceeded to integrate a real-time clothed character module into the VR application described previously.
One of the main optimisations we introduced is the shift from evaluating implicit isosurfaces to computing ellipsoid distances, which is much faster. The ellipsoids used for the approximation are obtained from scaled spheres: a unit sphere is scaled differently along its three axes, and the resulting ellipsoid is then translated and rotated to the desired position and orientation. This method is highly efficient since all the calculations can be done using OpenGL matrix operations, which are normally hardware-accelerated and hence quickly performed.

Let us describe how this optimisation affects the distance and penetration computation. We must first obtain the transformation matrix and its inverse for every ellipsoid in the system. This is only done at simulation start-up and when an ellipsoid (and, therefore, its children) has changed. Then, to find the distance from a point to an ellipsoid, we apply the inverse transformation matrix to the point, scale it by the reciprocals of the ellipsoid's dimensions, and normalise the resulting vector. If the original vector is shorter than the normalised one, penetration has occurred. Finally, we scale the normalised vector by the ellipsoid's dimensions and transform it back with the transformation matrix; the point obtained is the intersection point, from which the distance can be determined. Although this distance is not exact, it is a reasonable approximation and accurate enough for our purposes; penetration, on the other hand, is determined exactly. In conclusion, this method, as opposed to more exact methods (such as Newton's), is fast, deterministic and has no singularities. Consequently, speed and robustness are achieved, which makes it fully appropriate for virtual reality applications. The sketch below illustrates this test.
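Concretely, the test can be sketched as follows. This is a minimal illustration under names of our own choosing (Vec3, Ellipsoid, ellipsoidDistance), not the module's actual code; in the module itself the matrices are produced by OpenGL matrix operations.

```cpp
// Minimal sketch of the point-ellipsoid distance/penetration test.
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

struct Ellipsoid {
    // Rigid transform (rotation+translation) of the scaled sphere and its
    // inverse, column-major as in OpenGL; recomputed only when the ellipsoid moves.
    float M[16], Minv[16];
    Vec3 dims;                    // scale factors along the three axes
};

static Vec3 xform(const float m[16], const Vec3& p) {   // point by 4x4, w = 1
    return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
             m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
             m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
}

// Returns the approximate distance from p to the ellipsoid surface; reports
// penetration exactly; 'surface' receives the approximate intersection point.
static float ellipsoidDistance(const Ellipsoid& e, const Vec3& p,
                               bool& penetrates, Vec3& surface) {
    Vec3 q = xform(e.Minv, p);                               // into the ellipsoid's frame
    q = { q.x / e.dims.x, q.y / e.dims.y, q.z / e.dims.z };  // into unit-sphere space
    float len = length(q);        // (a point at the exact centre would need a guard)
    penetrates = (len < 1.0f);                               // exact inside/outside test
    Vec3 s = { q.x / len, q.y / len, q.z / len };            // project onto unit sphere
    s = { s.x * e.dims.x, s.y * e.dims.y, s.z * e.dims.z };  // back to ellipsoid scale...
    surface = xform(e.M, s);                                 // ...and to world space
    Vec3 d = { surface.x - p.x, surface.y - p.y, surface.z - p.z };
    return length(d);                                        // approximate distance
}
```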
Another important optimisation is the arrangement of ellipsoids into groups and the vertex-ellipsoid association. Group composition is determined by the ellipsoids' proximity and behaviour, and the adjacency between groups must also be indicated. In addition, each vertex is assigned an ellipsoid at start-up, which becomes its parent ellipsoid; this association is used for certain operations described below, which prevent some inconsistencies. Our system works as follows:

1. The character to be dressed is manually approximated using groups of ellipsoids; the ellipsoids are organised in a hierarchy tree and group connections are indicated through an adjacency graph.
2. A triangle mesh representing the clothing is placed over the character (every vertex is taken as a particle and every edge as a spring), and each vertex is initially associated with a given ellipsoid based on distance.
3. The simulation loop starts: if an animation sequence is being played, the ellipsoids are moved to the proper position; if between frames or animations, the position is interpolated.
4. The particles of the mesh are moved when their associated parent ellipsoid moves; the hierarchy tree is applied and the matrices are recalculated.
5. Particles are further adjusted by applying forces and integrating the system. A first-order Euler integration scheme has been working well enough to solve the system:
   – because it is used for minor adjustments rather than the complete simulation, and
   – because we use geometric mechanisms to enforce stiffness.
6. If any ellipsoid is penetrated, the particle position is adjusted.
7. The parent ellipsoid is reassigned if the closest ellipsoid differs from the current parent; distance calculations are performed only for ellipsoids in the parent's group or in an adjacent group.

This ellipsoid approximation greatly simplifies the collision detection problem and makes it very fast to compute. Additionally, different types of cloth adjustment were defined. At the time of writing, two of them had been developed, a gravity adjustment and a shape adjustment, but more behaviours can be added in the future. Gravity adjustment, as its name implies, applies gravity to the garment, producing the effect of cloth draping. Shape adjustment, on the other hand, fits the cloth closely to the character without penetrating it; this type of adjustment is much faster than the previous one since several steps can be discarded, simplifying the calculations.

One of the main advantages of this system is that it works extremely fast even on mid-range computers. It is also completely portable, since it only uses standard C++ and OpenGL code, without hardware-specific extensions or machine-dependent code. It achieves this by optimising the crucial and usually intensive operations described above. Since we intend to apply our system in virtual reality applications, where interaction and stability are more important than accuracy, this extremely simplified system works well. We will not go into further details here, since the results will be published elsewhere; instead we go straight to a sketch of the update loop and then to the numbers.
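Steps 3-7 can be summarised in code. The following is a minimal sketch with hypothetical names (updateCloth, animateEllipsoids, springAndGravityForces, closestEllipsoidNearParent); it reuses Vec3, Ellipsoid and ellipsoidDistance from the previous listing and is not the module's actual code.

```cpp
// Hypothetical per-frame update implementing steps 3-7.
#include <vector>

struct Particle {
    Vec3 pos, vel;
    int parent;                    // index of the parent ellipsoid
};

// Helpers elided for brevity (hypothetical):
void animateEllipsoids(std::vector<Ellipsoid>&, float t);   // steps 3-4: play or
                                                            // interpolate animation,
                                                            // apply hierarchy, matrices
Vec3 springAndGravityForces(const Particle&);               // spring mesh + gravity
int  closestEllipsoidNearParent(const std::vector<Ellipsoid>&, const Particle&);

void updateCloth(std::vector<Ellipsoid>& body, std::vector<Particle>& cloth,
                 float t, float dt) {
    animateEllipsoids(body, t);
    for (Particle& p : cloth) {
        // Step 5: minor adjustment with explicit first-order Euler.
        Vec3 f = springAndGravityForces(p);
        p.vel = { p.vel.x + f.x * dt, p.vel.y + f.y * dt, p.vel.z + f.z * dt };
        p.pos = { p.pos.x + p.vel.x * dt, p.pos.y + p.vel.y * dt,
                  p.pos.z + p.vel.z * dt };

        // Step 6: if the parent ellipsoid is penetrated, project back to its surface.
        bool inside; Vec3 surface;
        ellipsoidDistance(body[p.parent], p.pos, inside, surface);
        if (inside) p.pos = surface;

        // Step 7: reassign the parent, testing only ellipsoids in the parent's
        // group or an adjacent one (group lookup elided here).
        p.parent = closestEllipsoidNearParent(body, p);
    }
}
```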
We analysed the performance of the system on a notebook computer with one Pentium III 650 MHz processor, 128 MB RAM and an S3 Savage/IX graphics card with 8 MB, at 800x600, running Windows 2000 Professional.

Garment / type of adjustment (vertices)   Time on notebook (s)
Blouse 1 / GA (911)                       0.009
Pants 1 / SA (944)                        0.003
Both 1 (1855)                             0.011
Blouse 2 / GA (464)                       0.005
Pants 2 / SA (454)                        0.002
Both 2 (918)                              0.006

Table 1. Clothing adjustment time in seconds (GA = gravity adjustment, SA = shape adjustment)
In table 1 we see some results: the simulation time in seconds for different garments with different adjustment types. It can be noticed that the cloth adjustment process is almost linear, since reducing the number of garment vertices to about half decreased the time in almost the same proportion. The additional time is due mostly to the movement and recalculation of the ellipsoids (approximately 1 ms in these tests). In figure 2 we see images taken from this test.
Fig. 2. Pieces of clothing animated interactively
From this information we can conclude that the simulation time allows real-time calculation even for this relatively large number of vertices per garment, and on a notebook computer. Once we obtained these results, we proceeded to transform the application into a module suitable for inclusion in a virtual reality application. We describe this in the following section.
3 Adaptation of the Cloth Module
Several things had to be done to the cloth module in order to make it independent of the application. The original application was a stand-alone GLUT/OpenGL application written in C. It was decided to rewrite it as C++ classes that use OpenGL for display and matrix operations but are otherwise manageable by a general application or scene graph, such as OpenGL-Performer, without jeopardising the frame rate and stability that had been achieved. OpenGL was chosen as the base graphics library since it is entirely portable and many scene-graph frameworks support it; besides GLUT, Performer and VR-Juggler, other OpenGL-based libraries can be used, such as Open Inventor [9].

Two main classes were developed: one that encapsulates the behaviour of the ellipsoid system and another that does the same for the cloth pieces. These two classes interact with each other; for a single character only one ellipsoid system object is needed, but a cloth object is required for every garment, and all of them reference the same ellipsoid system object. As mentioned above, the ellipsoid system class includes the definition, modification, interface and animation of the ellipsoids that approximate the character's shape. These ellipsoids can be loaded from a file or defined in code, and they support a hierarchical structure in order to mimic the character's movements.
The application can also query the transformation matrix of any of the ellipsoids, so it can place the character's parts appropriately. The character should act as an avatar, walking or running according to the user's instructions (communicated to the system with a joystick). To support this in an application-independent manner, the ellipsoid system class had to include member functions to start or stop an animation, or to change to a different one; only one animation can be played at a time. As a note, integrating a joystick into VR-Juggler required writing some code and recompiling its libraries.

Realistic walking and running cycles had to be provided for the character, and it should be possible to blend these cycles. Two base standing poses were defined (as one-frame animations), and additional animations can be loaded. Animation sequences can be obtained from properly prepared VRML 2 files, which may be produced using animation packages such as Alias Wavefront's Maya. Animation blending was implemented as a linear interpolation (for maximum performance) and can be done between frames or between animations; it is also transparent to the application, which must only call a method that updates the ellipsoid system state according to the current time and the animation being played.

As for the cloth class, considerable care was taken to display the garments properly. The adjustment process and the display process were separated, allowing the cloth to be adjusted once and displayed several times; this is especially useful in stereo viewing. The different cloth adjustment types are implemented as separate class methods, and one can switch from calling one method to another at run time, if so desired. A cloth object references exactly one ellipsoid system object and is adjusted based on that system's current state; in this way, during a frame the ellipsoid system is modified or animated once and then all the garments that reference that system are adjusted accordingly. This organisation of classes and methods makes the cloth module easily adaptable to external OpenGL-based applications, such as scene-graph frameworks or even game engines; a sketch of the resulting interface follows.
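The following is a minimal sketch of that interface; the class and method names are ours for illustration, not the module's actual declarations.

```cpp
// Hypothetical interface of the two classes described above.
#include <string>

class EllipsoidSystem {
public:
    bool loadEllipsoids(const std::string& file);    // or define them in code
    bool loadAnimation(const std::string& vrml2File);
    void startAnimation(int id);                     // only one plays at a time
    void stopAnimation();
    // Updates the system for the current time: plays the animation, linearly
    // interpolating between frames or between animations (blending).
    void update(double time);
    // Lets the application place the character's parts (OpenGL 4x4, column-major).
    const float* transformOf(int ellipsoidId) const;
};

class Cloth {
public:
    explicit Cloth(EllipsoidSystem& system);  // every garment references one system
    // Adjustment and display are separate, so a garment can be adjusted once
    // per frame and drawn several times (e.g. once per eye in stereo viewing).
    void adjustGravity(float dt);              // draping behaviour
    void adjustShape();                        // fast, fits cloth to the body
    void display() const;                      // plain OpenGL drawing
};
```

A frame then consists of one update call on the ellipsoid system, followed by one adjustment call and one or more display calls per garment.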
We tested the module with the following configurations (with and without stereo viewing), achieving good results:

– OpenGL / GLUT,
– OpenGL / VR-Juggler,
– OpenGL / Performer / VR-Juggler, and
– OpenGL / Performer.

4 Integration
In this section we explain the technical issues encountered during the integration of the clothed character module, developed in OpenGL, into the VR application, which was developed with OpenGL-Performer and VR-Juggler as a vjPfApplication. Performer is used for scene-graph management and VR-Juggler for interaction and display.
Most of the original OpenGL code was not modified, but it was necessary to add some functions, called from the OpenGL-Performer code, in order to create an OpenGL node in the Performer scene graph:

– First, the main function of the OpenGL code has to be dropped, as well as any menu or interaction functions.
– The initialisation function and the display function of the OpenGL code have to be identified. It is important to note that the attributes, lights, materials and other OpenGL properties that are usually enabled in the initialisation function have to be moved to the display function. This is because, when OpenGL code runs within Performer, the latter modifies attributes and changes states; therefore, OpenGL attributes and states must be set on every call to the render function, since nothing guarantees that the attributes enabled at initialisation are still in effect.

In order to use VR-Juggler with OpenGL-Performer, we used the pfNav sample application ($VJ_BASE_DIR/samples/pf/pfNav/). We also had to make some changes in the pfNav directory and in the nav directory: in the simplePfNavApp.cpp file we added some lines to include the OpenGL nodes in the scene graph. Details can be obtained from the authors. In figure 3 we show the dressed character within a virtual environment (also see figure 1).
Fig. 3. The clothed character within the VE
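As an illustration of the second point above (state being set in the render function), here is a minimal sketch. Only the GL calls reflect what the module actually does; the Performer callback registration shown in the comment is our assumption, and its exact signature should be checked against the pfNodeTravFuncs documentation.

```cpp
// Sketch: OpenGL state is (re)established on every draw, because Performer
// may have changed it since the previous frame.
#include <GL/gl.h>

static void drawClothedCharacter() {
    // State that a stand-alone GLUT app would set once at start-up:
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    const GLfloat pos[4] = { 0.f, 10.f, 10.f, 1.f };
    glLightfv(GL_LIGHT0, GL_POSITION, pos);
    glEnable(GL_COLOR_MATERIAL);

    // ...then adjust and draw the garments as usual...
}

// Assumed Performer hook (signature not guaranteed): a draw-traversal
// callback attached to a node so the GL code runs in the draw process.
// extern "C" int clothDrawCallback(pfTraverser*, void*) {
//     drawClothedCharacter();
//     return PFTRAV_CONT;
// }
```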
5 Conclusions
The results show that this method is fast, stable and portable. As opposed to other approaches, it performs well on mid-range PCs and does not rely on vendor-specific or expensive hardware; partial OpenGL hardware acceleration is sufficient. The method is suitable for real-time applications, where interaction is constant and accuracy is not the main aspect of the simulation. We were able to integrate this character into virtual environments.
A virtual reality class project with clothed humans was built and tested on both an accelerated desktop PC and an SGI Octane2 using VR-Juggler and OpenGL-Performer.
6 Future Work
Since the project had a tight timeline, many things had to be left out. For the development and integration of clothed characters, the following features (among others) are considered for future work:

– Using a better ODE solver (e.g. implicit Euler) to improve the overall appearance of the cloth, which at times seems elastic or unnatural;
– Further accelerating the garment simulation by taking advantage of hardware techniques such as vertex shaders, thus allowing many clothed characters (or even clothed crowds) inside a virtual environment;
– Improving animation blending without losing performance;
– Using better and more garment designs;
– Perfecting the multi-layer capability of the cloth, which sometimes causes certain artifacts; and
– Better integration with VR-Juggler and Performer.
References

1. Baraff, D. and Witkin, A. Large Steps in Cloth Simulation. Computer Graphics, SIGGRAPH '98 Proceedings, pp. 43.
2. Desbrun, M. et al. Interactive Animation of Structured Deformable Objects. Graphics Interface '99 Proceedings, pp. 1.
3. ID Software, Inc. Quake II. URL: http://www.idsoftware.com/quake2
4. Musse, S.R. and Thalmann, D. A Behavioural Model for Real Time Simulation of Virtual Human Crowds. IEEE Transactions on Visualization and Computer Graphics, 7(2), pp. 152.
5. Osfield, R. et al. OpenSceneGraph. URL: http://www.openscenegraph.org
6. Pérez-Urbiola, R. and Rudomín, I. Multi-Layer Implicit Garment Models. Shape Modeling International '99 Proceedings, pp. 66.
7. Rudomín, I. and Meln, M. Multi-Layer Garments Using Hybrid Models. Visual 2000 Proceedings, pp. 118.
8. Silicon Graphics, Inc., OpenGL ARB. OpenGL 1.2. URL: http://www.opengl.org
9. Silicon Graphics, Inc. Open Inventor. URL: http://www.sgi.com/software/inventor
10. Silicon Graphics, Inc. OpenGL Performer. URL: http://www.sgi.com/software/performer
11. Various authors. VR-Juggler. URL: http://www.vrjuggler.org/
12. Volino, P. and Magnenat-Thalmann, N. Virtual Clothing. Springer Verlag, 2000.
13. Rudomín, I. and Castillo, J. Realtime Clothing: Geometry and Physics. WSCG 2002 Posters, pp. 45-48, ISBN 80-903100-0-1, Plzen, 2002.
14. Rudomín, I. et al. Multilayer Garments Using Isosurfaces and Physics. The Journal of Visualization and Computer Animation, 12(4), 2001. (Special issue: the best papers of Visual 2000, edited by Isaac Rudomín.)