A Dynamic Virtual Environment for Haptic Interaction

Alastair Barrow1, Dr William Harwin2
Department of Cybernetics, University of Reading, UK
ABSTRACT

We present a rigid-body simulation for multi-contact haptic interaction. The simulation is designed to make use of modern multiprocessor machines and the framework for this is discussed. An existing haptic rendering algorithm is extended to: facilitate simple implementation on a number of object types, enable the use of arbitrary objects as haptic cursors and allow multiple object contacts on the same haptic cursor. We also justify the use of hard-constraint based methods for rigid-body dynamics and discuss our implementation.

Keywords: haptic, multi-finger, collision detection, collision response, multimodal.

1 INTRODUCTION
Adding haptic interaction to a dynamic, virtual environment presents new challenges over existing visual simulations, as virtual objects must now behave in a manner that both looks and feels realistic. This is difficult in terms of both the complexity of the software system and the highly computational nature of physically based simulations, especially at haptic update rates. While the problem of real-time multi-body dynamics is an area of active research in the computer games domain, little work has been conducted on the additional requirements of a physical simulation for haptic interaction. In the following we outline the framework for a generic rigid body environment encompassing haptic, graphic and audio feedback. We consider two aspects of the system in detail: collision response, in terms of haptic-object and object-object collisions, and the methods used to accommodate such a computationally expensive real-time simulation.

1.1 Collision Detection for Haptic Environments
In order to provide a response to a collision, whether haptic, audible or otherwise, the collision must first be detected, and a set of collision detection routines forms a fundamental part of any physically based dynamic simulation. Collision detection is a broad field of study and many algorithms exist, each with different advantages and applications. Collision detection in haptic environments is likely to be performed at 1 kHz or greater, which makes it very difficult to haptically render smooth surfaces. The current methods used to generate smooth surfaces, and their associated problems, are as follows:
• On a polygon mesh, the vertex and face normals can be averaged to smooth the transition between faces, but ultimately a very high density of faces is still required to yield a smooth-feeling surface, thus slowing collision detection.
• Complex parametric surfaces such as NURBS can generate a perfectly smooth surface, but current algorithms for haptic interaction are complex, iterative in nature and involve a necessary trade-off between accuracy and speed.
• Constructive Solid Geometry (CSG), that is, creating a complex model out of primitive objects such as spheres, works well for simple models but has not proven efficient for recreating complex models.
• Algorithms exist that give large performance benefits when operating on a convex hull [1, 2]. A convex hull can be imagined as the form a rubber band would take if wrapped around a concave object: all the concave parts would be left out and a purely convex shape would remain. However, not all 3D objects can be completely decomposed into convex parts, e.g. a torus.
As there is no single comprehensive solution, it would be desirable to utilise the benefits of polygon meshes, CSG, convex hulls and the large quantity of other state-of-the-art techniques in collision detection, but the coding effort to implement all the algorithms and data structures efficiently is huge. So, rather than trying to implement a new haptic-specific solution, it makes sense to use a mature collision detection library if one meets the requirements of haptic interaction. A survey of collision detection libraries was conducted to select a suitable candidate from the many currently available. Some of the criteria used were:
• Ability to return the contact point and normal of each collision, a necessity for haptic interaction.
• Level of support and size of user base.
• Quality of feature set.
• Robustness of guaranteed solution times and accuracy.
The library selected was SOLID 3.5 [3]. SOLID has a number of additional benefits:
• It can use polygon meshes and primitive objects, and makes use of the Quickhull algorithm [2] to create convex hulls from meshes.
• It maintains collision scenes, which greatly simplifies handling the different collision detection requirements of haptics, physics and sound.
• It is open source and free for non-commercial use.

1 e-mail: [email protected]
2 e-mail: [email protected]
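As a concrete illustration of the first criterion above, a contact query must report where two objects touch and in which direction. The following sketch shows this for the simplest case of two spheres; the function name and return convention are our own illustrative choices, not SOLID's API:

```python
import math

def sphere_sphere_contact(c1, r1, c2, r2):
    """Test two spheres for overlap; on contact, return the contact
    point, unit surface normal (pointing from sphere 1 towards sphere 2)
    and penetration depth -- the quantities a haptic renderer needs."""
    dx = [b - a for a, b in zip(c1, c2)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist >= r1 + r2 or dist == 0.0:
        return None  # no contact (or degenerate coincident centres)
    normal = [d / dist for d in dx]
    penetration = (r1 + r2) - dist
    # Contact point: midway through the overlap region along the normal.
    point = [c + n * (r1 - 0.5 * penetration) for c, n in zip(c1, normal)]
    return point, normal, penetration
```

A real library returns the same triple for arbitrary convex shapes; the sphere case simply makes the geometry explicit.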
1.2 Collision Response Methods
Similar to collision detection, there are a number of methods that can be used to determine the correct physical response to a collision. Simulating physics is a challenging and computationally demanding task and any method used will need to favour either speed or accuracy. Those that favour speed are often called 'soft-constraint' methods, of which penalty methods are an example. Penalty methods are the basis of most haptic interaction algorithms, such as God Objects [4], and have the advantage that they are conceptually and computationally simple and deal well with objects whose dynamic response is not completely known, such as a haptic interaction point. Soft-constraint methods are prone to instability and typically require parameters to be tuned for different situations. Algorithms preferring accuracy, or 'hard-constraint' methods, calculate the exact response required to prevent any violation of the defined constraints. In a rigid body simulation the primary constraint is that objects may not interpenetrate, and so the solver must consider all points of contact and produce the appropriate forces to prevent any penetration. The advantages of hard-constraint methods are that simulations are very stable, versatile and realistic. The principal drawback is the computational cost. Finding the correct forces requires the solution of a set of nonlinear equations which, in this case, is an O(n³) algorithm in the number of contacts, n. There are no real-time rigid-body algorithms that are guaranteed to always produce the physically correct response. It is possible, however, to generate a realistic and plausible response that complies with physical laws, such as conservation of momentum and energy. The objective, therefore, of a dynamic simulation with haptic interaction is to create both haptic-object and object-object interactions that are as convincingly realistic as possible.

1.2.1 Haptic Rendering Algorithms

When a collision has been detected between a haptic interaction point (HIP) and a virtual object, a normal and frictional force must be calculated, then applied to the HIP and the contacted object. A number of algorithms exist to calculate these forces but the most widely adopted is the God Object (GO) method [4]. A God Object is a point in space that represents the 'virtual' point of contact on the surface of the object being touched. The HIP is allowed to penetrate the object, but a spring force is calculated based on the distance from the HIP to the GO and applied to the HIP, "pulling" it back towards the surface. An extension of the God Object method is the Friction Cone Algorithm (FCA) developed by Melder et al. [5]. Based on the same idea of a virtual point of contact at the surface of an object, it is also able to generate realistic static, dynamic and viscous frictional effects. The Friction Cone Algorithm can accommodate an arbitrary number of HIPs contacting a single object and has been used to provide natural multi-finger manipulation of dynamic objects [5].
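The spring coupling underlying the God Object method can be sketched as follows; the function name and the 0.5 N/mm default (the stiffness figure quoted later for our system) are illustrative, not the authors' implementation:

```python
def god_object_force(hip, go, stiffness=0.5):
    """Penalty spring force on the HIP, pulling it back towards the God
    Object on the surface. Positions in mm, stiffness in N/mm, so the
    result is in N. A minimal sketch, not the full FCA."""
    return [stiffness * (g - h) for g, h in zip(go, hip)]
```

With the HIP 2 mm inside the surface along -z and the GO at the surface, this yields a 1 N restoring force along +z.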
1.2.2 Rigid Body Dynamics for Haptics

The driving force behind most rigid-body dynamics libraries is the computer games industry. The consequence of this is that they are designed to achieve graphical realism for a large number of objects with as little computational and memory overhead as possible. This is commonly accomplished by relaxing the physical laws and requiring the virtual environment to be constructed in a particular way. For example, a recent iterative method based on the projected Gauss-Seidel algorithm, while linear in time and memory footprint, uses an approximate model of motion and friction and requires that interacting objects have mass ratios of less than an order of magnitude to remain stable [6]. This is not a problem for game designers but potentially limits the realism and generality of a haptic simulation. The simulator considered in this paper is principally concerned with multi-finger grasp and manipulation, and as such the accuracy of the equations of motion and the realism of object-object interaction are considered a very high priority. However, the number of dynamic objects in a scene is assumed to be low. The popular haptics framework CHAI3D [7] uses the Open Dynamics Engine (ODE) for its rigid-body dynamics [8]. ODE is an open source dynamics library based on a hard-constraint model intended for game and animation applications. ODE was not used here, primarily in favour of a method tailored specifically for realism in multi-contact haptic grasps, but also because it employs an incomplete friction model (for speed) and has a known instability when using dynamic bodies with non-symmetrical inertia tensors. The only simulator known to the authors to have explicitly considered the problem of physics for haptics is by S. Hasegawa et al. [9]. Hasegawa developed a rigid body haptic simulation based on penalty methods. They solved some of the known problems with penalty methods by considering an area of contact as opposed to individual points. Using this method they were able to implement a fast simulator with a complete friction model that could compute each iteration in linear time, O(n). However, a number of problems can be observed [10]:
• A great deal of interpenetration is permitted.
• Stacks of objects wobble and oscillate greatly before damping takes effect.
• Forces propagate slowly through chains of objects.
Also, it may be more challenging to implement a generic simulator, as it is difficult to automatically select good damping coefficients for different types of interaction. These issues do not seem to adversely affect simple haptic interaction but it is likely that they would detract greatly from the realism of multi-point grasps as described by Melder et al. [5]. The alternative, then, is to customise a hard-constraint model, such as that used by ODE, for maximum realism at haptic update rates. However, Hasegawa identified a number of issues concerning hard-constraint methods for haptic simulation [9]:
• Constraint methods cannot directly incorporate haptic pointers.
• Constraint methods are slow and do not scale well to large numbers of objects.
It is true that it is not possible to directly incorporate a HIP into the constraint model. However, it has been shown that the FCA allows interaction with dynamic objects based upon the integration of forces on an object [5]. If forces such as gravity can be realistically applied to objects manipulated using the FCA then there should be no problem using a constraint based method that uses forces to affect dynamic objects. An algorithm that is O(n³) in computation time will inevitably slow dramatically as the problem size increases. However, our intention is to maintain high realism for a small number of dynamic objects in a scene, and the methods used to maximise this number are discussed later.

1.3 Symmetric Multi-Processor Machines
Modern personal computers are not only increasing in speed but are also beginning to provide multiple processing pipelines, previously the domain of very high-end workstations and clustered computer systems. A "multi-core" CPU is one which combines two or more processors on a single chip. When combined with motherboards that can support two or four CPUs, there are many times the available processing capability of a single fast chip. If a simulation can be structured to make best use of separate processing channels then there are potentially enormous benefits in terms of the level of detail and realism that can be simulated.

2 DESIGN OF A MULTIMODAL SIMULATION
To get maximum use of the available processing hardware the simulator was designed around two principles:
• Fulfil only the minimum requirements of each of the human senses.
• Operate in parallel where possible.

2.1 Target Requirement for Graphical Rendering
The required visual frame rate to create the illusion of smooth motion is normally considered to be above 25 fps, though to achieve coherence with audible and tactile collisions we find this is better raised to 30 fps. However, for the two separate images of a stereo display this must be doubled to 60 fps.

2.2 Target Requirements for Sound Generation
Although sound cards remove the need for direct generation of the sound wave, new sounds, from a collision for example, must be initiated soon enough after the event that there is no perceivable delay. We find that for a system with haptic feedback, i.e. when the user can feel the collision, the minimum update rate for new sounds is 100 Hz.

2.3 Target Requirements for Haptic Rendering
The haptic 'stiffness', which is the spring force pulling the HIP back towards the surface, must be great enough to give a believable representation of a surface. For our system this is approximately 0.5 N/mm. Also, the update rate of the haptic device must be above the human perception of vibration, typically 1 kHz, or fast enough to maintain stability over the full workspace at the required stiffness, whichever is higher. For our system we find this is 2 kHz. The physics model, and thus the update rate of objects in the virtual world, must cater for the most demanding modality, which is clearly haptics. Fortunately, as long as the physics loop time is calculated very accurately, we find that an update rate over 500 Hz is high enough that no increased inertia is perceived when manipulating objects.

3 UTILISING PARALLEL ARCHITECTURES
Due to the sensitivity of the haptic sense to small disturbances, the haptic loop has the highest priority in the system, followed by physics, sound and graphics respectively. As long as each component is guaranteed at least its minimum refresh rate then there should be no discontinuities perceived by the user. As even linear time physics models can take more than 1 ms per frame for a small number of dynamic objects [9], there may not be enough processing time to maintain all the required update rates. Much of the burden on a single processing unit can be removed by using a symmetric multi-processing (SMP) machine and separating tasks into different threads. This holds as long as shared variables can be made thread-safe; for example, guaranteeing one thread will not change the position of an object whilst another is reading it. The simplest and most obvious way to add parallelisation is to give each of the three modalities and the physics loop a separate thread, which gives an immediate performance boost on a multi-processor machine. We will assume that this means there are four separate threads that must share certain data structures, although further parallelisation of the physics thread will be discussed later. The only data that must be shared between the threads is the current position of each object, the current position of each god object and the force applied by each haptic device; any other data is unique to just one thread. To guarantee the shared variables are not read and updated at the same time, a mutual exclusion object (mutex) can be used. In its simplest form a mutex is a boolean value that is changed by what is known as an atomic operation, that is, one requiring a single instruction that therefore cannot be interrupted midway. When a thread wants to access a shared variable it locks the appropriate mutex and unlocks it when finished. If another thread wants to access the data it must first wait for the mutex to become unlocked.
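A minimal sketch of such a mutex-guarded shared structure, written here in Python purely for illustration (the simulator itself is not written in Python, and the class name is our own):

```python
import threading

class SharedObjectState:
    """Object positions shared between the physics and haptics threads,
    guarded by a mutex so a reader never observes a half-written update."""

    def __init__(self, n_objects):
        self._lock = threading.Lock()
        self._positions = [[0.0, 0.0, 0.0] for _ in range(n_objects)]

    def set_position(self, i, pos):
        with self._lock:              # lock held only for the short copy
            self._positions[i] = list(pos)

    def get_position(self, i):
        with self._lock:
            return list(self._positions[i])
```

Keeping the critical section to a single small copy is what makes the blocking time short and deterministic, a property relied upon below.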
The graphics and sound threads need only read access to the shared data structures. They operate so much slower than physics and haptics that when they need to access the shared
[Figure 1: Communication Pathways and Bandwidth. Priority from lowest to highest: graphics loop (60 Hz), sound loop (100 Hz), physics loop (1 kHz), haptics loop (2 kHz), communicating through live and copied shared data.]
data they can request that the faster loop makes a 'thread safe' copy in a separate location. This means that the faster thread need never block waiting on a slower thread. There is no limit to how many processes can read the same variable, so sound and graphics can share one copy. Physics and haptics must share data in both directions but run at different speeds. This adds a small complication, as the haptic loop should preferably never be blocked. Two possible solutions are: to maintain a level of mutex 'buffering', where two copies of the same data are maintained and a mutex points to the one which is presently being updated by its owner; or to make sure that any part of a thread that sets a mutex runs for a very short and deterministic length of time. The second method works well in practice, as the haptic loop need only block whilst the physics loop updates an individual object's position, and the physics loop, being the slower, can be safely blocked while the haptics thread performs individual object-HIP collision tests. The communication pathways and bandwidth used by the different threads in the simulator are shown in figure 1. The bidirectional nature of the physics loop is due to it updating the positions, velocities etc. of objects based on forces from the haptic loop. Similarly, the haptics loop is bidirectional as it must read in the positions of any connected haptic devices as well as render a force to them. When implemented on a multi-processor machine this structure is able to maintain uninterrupted haptics, graphics and sound regardless of a slowdown in the physics loop. Clearly it is preferable that the physics loop never slows down, but if it does the user perceives this as a gradual increase in the inertia of dynamic objects, to the limit where objects seem fixed. Most importantly, haptic interaction remains stable and responsive regardless.
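The first solution, mutex 'buffering', can be sketched as a double buffer in which readers only ever block for the brief index flip; the class and method names are illustrative, not taken from the simulator:

```python
import threading

class DoubleBuffer:
    """Two copies of the shared data: the writer prepares the back copy
    and then atomically flips the front index, so a reader's critical
    section is only the flip check, never a full update."""

    def __init__(self, initial):
        self._buffers = [dict(initial), dict(initial)]
        self._front = 0                       # index readers should use
        self._lock = threading.Lock()

    def publish(self, updates):
        back = 1 - self._front
        # Start from the latest published state, then apply the updates;
        # none of this blocks readers of the front copy.
        self._buffers[back] = dict(self._buffers[self._front])
        self._buffers[back].update(updates)
        with self._lock:                      # brief, deterministic flip
            self._front = back

    def read(self):
        with self._lock:
            front = self._front
        return dict(self._buffers[front])
```

This is the pattern that would let the haptic loop read object state without ever waiting on a long physics update.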
4 HAPTIC RENDERING LOOP
The method selected for haptic rendering was the Friction Cone Algorithm. The reasons for this were as follows:
• It is simple and fast to compute.
• It provides static, dynamic and viscous friction models.
• It scales well for multiple contact points.
• It can be implemented on numerous object types, i.e. meshes, primitives etc.
• It has been proven to work well for natural multi-finger manipulations.

4.1 Adapting the Friction Cone Algorithm
There is a significant quantity of support code required to allow correct polygon transitions when haptically exploring a polygon mesh [11]. A collision detection package like SOLID, however, returns no information about the structure of the
Figure 2 A-D: Placement of Surface Object (SO) and God object (GO) in the modified FCA.
contacted surface other than the normal and point of contact. Fortunately, it turns out that with a small change to the existing FCA this lack of surface information actually greatly simplifies the haptic rendering algorithm. Also, previous FCA implementations have considered only a single point as the HIP/GO. Using a single point of contact means that, conceptually, there will be a disparity between the expected and haptically rendered point of contact. For example, if using a finger connected to a haptic device, the user would expect to feel contact when the extremity of the finger crosses the object's boundary but, using a single-point HIP, no force will be felt until the centre of the finger crosses the boundary. This gives the impression that virtual objects are smaller than they are intended to be and, in a grasp task, thin objects cannot be picked up at all. To remedy this, the simulation is designed, though not required, to use geometric objects as the haptic cursors. For finger interactions, spheres roughly the diameter of the finger tip are used. For more complicated scenarios, such as palm collisions, any object can be assigned to a haptic pointer. However, by using HIPs with a non-zero volume it is now possible for a single HIP to be in contact with more than one object at a time, a situation not considered in previous FCA implementations, though this is a minor adjustment. We will first consider the single object case.

4.2 Implementing the Adapted Friction Cone
The Friction Cone Algorithm is explained in detail elsewhere [5, 11] and so only a brief overview will be given to illustrate how it is modified for the simulation framework. It is also important to note that even though the description assumes SOLID is used for collision detection, the method described will work for any object type and collision detection method that can return the contact points of both colliding objects and the collision normal. Upon first contact with a virtual object, the God Object is positioned at the point on the surface nearest the HIP using the penetration depth. A plane is now generated using the surface normal but it is assumed that no more is known about the contacted surface. At the next iteration it is not clear how to update the GO position. Moving the GO along the surface plane would result in departure from a curved surface and the HIP may have exited to the other side of a thin surface thus allowing “push through” if using it for further collision detection. We solve this by introducing the idea of a Surface Object (SO). The SO assumes the shape of the haptic cursor and collision detection is performed using only the SO. In free space, not in contact with any objects, the SO is mapped to the exact position of the HIP. When a collision between the SO and an object is detected both the GO and the SO are positioned on the edge of the contacted object. As before, a plane is calculated using the surface normal which, at this stage, represents both the Surface Plane and the God Plane (that is, the plane generated at the Surface Object’s position and the plane generated at the God Object’s position). At the next iteration the SO is moved along the SO plane to the crossing point of a vector from the HIP to
the surface plane in the direction of the plane's normal, figure 2A. The SO is then moved down into the plane, in the direction opposite to the normal, by a small amount that is nonetheless large enough to guarantee contact with the surface, figure 2B. Collision detection is then performed with the SO, which will return new contact points and a normal that are used to position the SO on the surface and generate a new plane equation. The radius of the friction circle is now calculated as the distance from the HIP to the SO multiplied by the coefficient of static friction. If the distance from the SO to the GO is less than the friction radius then the GO is left in place. If the distance is greater, the GO is moved to the edge of the friction circle and it enters the dynamic state, figure 2C. The new God Plane is taken as the average of the previous God Plane and the current Surface Plane. In the dynamic state, the friction coefficient used for the friction radius is the coefficient of dynamic friction, which should be smaller than the static coefficient. If at any update the GO is found to be within the friction circle then the static friction state is re-entered. Once the GO is updated, the new HIP position is requested and the algorithm repeats, figure 2D. This cycle continues until either the HIP crosses the God Plane or no collision is returned after moving the SO. If this happens then it is assumed the HIP has left the virtual object, but contact is only broken if the GO lies outside the friction circle; otherwise the conceptual point of contact is still on the edge of the object and contact should remain. When implemented using SOLID for collision detection this method requires little coding and is exactly the same for any object type: mesh, primitive or convex hull.
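The core of the iteration above can be sketched as follows. This is a deliberately simplified, point-wise reading of the adapted FCA: it omits the God Plane averaging and the SO push-down/collision-query step, and all names are illustrative:

```python
import math

def fca_update(hip, so, go, normal, mu_static, mu_dynamic, in_static):
    """One sketched iteration of the adapted FCA: project the HIP onto
    the surface plane through the SO, then keep the GO in place (static
    friction) or drag it to the edge of the friction circle (dynamic).
    Returns (new_so, new_go, now_in_static)."""
    # Signed distance of HIP from the plane through SO with this normal.
    d = sum((h - s) * n for h, s, n in zip(hip, so, normal))
    # New SO: HIP projected onto the plane along the normal.
    new_so = [h - d * n for h, n in zip(hip, normal)]
    depth = -d if d < 0 else 0.0          # HIP penetration depth
    mu = mu_static if in_static else mu_dynamic
    radius = depth * mu                   # friction circle radius
    dist = math.dist(new_so, go)
    if dist <= radius:
        return new_so, list(go), True     # static state: GO stays put
    # Dynamic state: place GO on the friction circle, on the GO-SO line.
    t = 1.0 - radius / dist if dist > 0 else 0.0
    new_go = [g + t * (s - g) for g, s in zip(go, new_so)]
    return new_so, new_go, False
```

The spring force of the GO method is then computed from the HIP-GO separation as before, so sticking (GO fixed) and sliding (GO trailing on the circle edge) produce static and dynamic friction respectively.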
Touching Multiple Objects
To allow multiple HIP contacts there needs to be a way of applying a force to, and feeling the effects of, more than one object. However, if the haptic cursor has volume then it is also possible for the SO to make contact with another object without the HIP being inside that object. In this case the distance between the HIP and GO has no meaning as the force would pull the object and HIP together. Firstly, to allow the HIP to apply full frictional effects to multiple objects, each virtual object stores one SO and GO for every haptic device which, in practice, requires little extra storage. If the HIP is behind an object’s SO Plane, forces are calculated individually, as in 4.2, and added to the total force applied to the HIP. Secondly, to apply forces to objects when the HIP is not inside them, the HIP maintains its own SO. When in free space it is locked to the position of the HIP but when in contact with one or more objects it takes on the average position of the SOs of objects for which the HIP is behind the SO Plane. Now, for objects where the penetration depth of the HIP cannot be used, the penetration depth of the averaged SO is employed instead and forces are applied to both the object and the HIP as in 4.2.
These changes are all that is required to feel and apply forces to multiple contacted objects, though one point should be noted. The God Object represents the 'true' point of contact on an object's surface. It would therefore make more sense to use the averaged God Object position when testing for contact with other objects, rather than the averaged Surface Object, which represents positions that the God Objects have not yet reached. However, if using a high value of static friction, we find that the HIP may have penetrated far into a second object before the GO contacts it, thus applying a sudden impulsive force to the user. By using the averaged SO instead, we get a stable response and, while initial contact with a second surface happens slightly sooner than it should, there are no perceived irregularities even for large frictional values.
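The averaging rule for the HIP's own SO can be sketched as below; the contact representation (a list of per-object SO positions with a behind-the-SO-Plane flag) is our own illustrative choice:

```python
def averaged_surface_object(hip, contacts):
    """The HIP's own SO: locked to the HIP in free space, otherwise the
    mean of the per-object SOs of those objects whose SO Plane the HIP
    is behind. contacts is a list of (so_position, behind_plane) pairs."""
    active = [so for so, behind in contacts if behind]
    if not active:
        return list(hip)          # free space: SO follows the HIP exactly
    n = len(active)
    return [sum(p[i] for p in active) / n for i in range(3)]
```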
5 RIGID BODY DYNAMICS
The FCA provides an effective penalty based coupling to dynamic objects, but in order to create a simulation that both looks and feels as realistic as possible a hard-constraint approach was used for the rigid body dynamics. Of primary concern when using hard-constraint methods for haptics is the length of time required to find a solution; this is considered further in the implementation section.

5.1 Forces and Impulses
If two rigid bodies collide then an exceptionally large force must be applied over a very short time to prevent interpenetration. In a simulation based upon fixed-time-step integration, this can result in an unstable simulation or unrealistic collisions. In order to prevent interpenetration in a stable manner, impulsive forces are often used. An impulse is an instantaneous change in momentum due to an infinite force over zero time. By using one of the collision laws (such as that attributed to Newton) the required velocity after the collision can be calculated and an impulse found that, when applied to both colliding objects, will give the required separation velocity. Many graphical simulators use only forces or only impulses, or treat forces as a set of global constraints while applying impulses to high speed collisions individually to achieve a realistic 'bounce'. We find that the realism of a haptic simulation benefits from the use of both forces and impulses, solved as sets of interdependent constraints.
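For two point masses, Newton's collision law gives the impulse magnitude directly; this sketch ignores rotation and takes the restitution coefficient e as given:

```python
def collision_impulse(v1, v2, n, m1, m2, e):
    """Impulse magnitude from Newton's collision law for two point
    masses: the post-collision relative normal velocity is -e times the
    pre-collision one. n is the unit contact normal from body 1 towards
    body 2; a sketch without rotational terms."""
    # Closing speed along the normal (positive if the bodies approach).
    v_rel = sum((a - b) * c for a, b, c in zip(v1, v2, n))
    if v_rel <= 0:
        return 0.0                # already separating: no impulse needed
    return (1.0 + e) * v_rel / (1.0 / m1 + 1.0 / m2)
```

Applying the impulse as +j·n to body 2 and -j·n to body 1 (divided by each mass) yields the required separation velocity while conserving momentum.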
5.2 Constructing the Problem
The basis of most real-time, hard-constraint algorithms is the solution of what is known as a linear complementarity problem. Due to limited space we will only outline the method used, as it is well defined in the literature [12, 13]. The problem can be stated: for a set of n contacts, find a set of up to n forces that prevent interpenetration, do not hold contacts together and do not add energy to the system. It can be shown [13] that for two points of contact on the same object, the change in acceleration at point a due to a force applied at point b is constant. If we define $\ddot{P}_{(t)}$ and $\ddot{P}_{(t+1)}$ as the relative acceleration at a contact point before and after applying the solution force, respectively, and $W_{ab}$ as the constant relationship between two contact points, then we can construct a system of equations relating all n contact points and forces as:

$$\begin{pmatrix} W_{aa} & \cdots & W_{an} \\ \vdots & \ddots & \vdots \\ W_{na} & \cdots & W_{nn} \end{pmatrix} \begin{pmatrix} F_a \\ \vdots \\ F_n \end{pmatrix} + \begin{pmatrix} \ddot{P}_{a(t)} \\ \vdots \\ \ddot{P}_{n(t)} \end{pmatrix} = \begin{pmatrix} \ddot{P}_{a(t+1)} \\ \vdots \\ \ddot{P}_{n(t+1)} \end{pmatrix} \qquad (0.1)$$
A solution for (0.1) must be found that complies with all the required constraints. There are a number of ways of solving the system of equations, but we use the adapted Dantzig algorithm described by Baraff [13], as it has been shown to extend easily to include a complete model of friction.
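For illustration only, the complementarity conditions behind (0.1) can be solved with a projected Gauss-Seidel iteration, the simpler iterative scheme mentioned in section 1.2.2. The paper itself uses Baraff's adapted Dantzig solver, which also handles friction; this frictionless sketch just makes the conditions concrete:

```python
def solve_contact_lcp(A, b, iterations=200):
    """Projected Gauss-Seidel sketch for the contact LCP implied by
    (0.1): find f >= 0 such that A f + b >= 0 and f_i (A f + b)_i = 0,
    i.e. contact forces only push and vanish at separating contacts.
    A must have positive diagonal entries (true for contact problems)."""
    n = len(b)
    f = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # Residual acceleration at contact i given the current forces.
            r = b[i] + sum(A[i][j] * f[j] for j in range(n))
            # Gauss-Seidel step, projected onto f_i >= 0.
            f[i] = max(0.0, f[i] - r / A[i][i])
    return f
```

Each solved force either cancels the penetrating acceleration at its contact or is clamped to zero where the contact is already separating, which is exactly the complementarity condition stated above.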
5.3 Adding Friction
By considering frictional effects as forces applied at a tangent to each contact normal, these can be included in the formulation of (0.1) as extra constraints. However, this does increase the size of the system, and thus the length of time required to find a solution. This is unavoidable using constraint based methods. Both force and impulse solutions can be adapted to solve the frictional version of (0.1) for static friction, but adding dynamic friction to the force/acceleration problem can sometimes result in an unsolvable system [13]. Our initial implementation generates static friction when solving for forces and dynamic friction when solving for the impulse response. This combination results in believable and realistic haptic interaction.

6 IMPLEMENTATION
We will now briefly sketch how the simulator has been implemented and some of the methods used to increase performance. An example of two finger haptic interaction using the simulator can be seen in figure 3.

6.1 Representing the World
The primary building blocks of the simulation are objects and models. We define an object as a single geometric structure, which can be a primitive, an arbitrary mesh or a convex hull. A model is defined as a set of objects physically linked to each other, either statically or by mechanical constraints such as hinges. Thus a model can be constructed from any number or combination of the basic objects. Each of the modalities may have different requirements of a model; for example, most graphical shadowing methods require a polygon mesh, but haptic interaction might benefit from using a number of primitives. To facilitate the different geometric requirements, up to three separate representations of each model can be used, one for each of graphics, haptics and physics. Having multiple representations of the same object has little effect on the time taken to perform collision tests, as long as some scene handling is performed to prevent unnecessary comparisons.

6.2 Hardware
The simulation was run on a dual 2.8 GHz Xeon machine with 1 GB of RAM. Haptic interaction is achieved through three Phantom 1.5 devices [14], calibrated so that they have the same world-coordinate to simulation-coordinate mapping. Although the simulation has only been tested using the Phantom, its design should allow any force-feedback haptic device to be interfaced.

6.3 Software
The simulation was run under the Windows XP operating system. 3D environmental audio is generated using the software library FMOD [15] and graphical rendering using OpenGL.
7 CONCLUSION
A multimodal virtual reality simulation designed specifically for multi-haptic, multi-object interaction has been implemented. The design and structure of a multimodal simulator was sketched and methods to take advantage of modern multiprocessor machines were discussed. We have introduced an extension to the Friction Cone Algorithm to simplify its use on a variety of object types, extended it to use arbitrarily shaped haptic pointers and allowed contact with multiple objects simultaneously. The physics model used was motivated by comparison with other game-physics engines and the only haptic-specific solution known to the authors. Realism was favoured over performance, but the implemented system was shown to perform well when interacting with up to ten dynamic objects in a scene.

Figure 3: Two finger interaction in a dynamic environment.
6.4 Speeding Up The Physics Loop
The processor-intensive nature of LCP methods is unavoidable, but haptic performance can be achieved for small environments (up to about ten dynamic objects) using the following techniques. First, after the collision detection phase of the rigid-body simulation, we group the contacts into separate collision 'chains'; two chains are separate if they share no common movable object. Each chain is completely independent and can therefore be solved in a separate processing thread, thus making use of a parallel architecture. To do this we maintain a pool of threads, owned by the physics engine, which are blocked on a mutex until given a collision chain to process. Second, we dynamically adjust the frame rate for different collision chains: chains of objects near or in contact with a HIP are given maximum priority and updated at 1 kHz where possible, while objects not in contact with a HIP are updated at 50-100 Hz to maintain visual realism. Finally, although we assume multiple processors, and thus no speed penalty in computing both forces and impulses at the same time, it may be more appropriate to choose one or the other at each time step based on a threshold velocity: if collisions above a certain velocity are involved, an impulse solution should be obtained to provide a crisp response; otherwise a force-based solution should be used to obtain stable and realistic contact forces.
When implemented, the simulator maintained an average rate of 1 kHz for a stack of 6 blocks and 500 Hz for a 10-block stack. This compares favourably with the linear-time, penalty-method simulator [9], which achieves approximately 300 Hz for a ten-block stack on a single 2.8 GHz Pentium 4. As would be expected, linear-time algorithms are far faster for larger problems.
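The chain-grouping step above can be sketched with a union-find over movable-object identifiers; contacts that transitively share a movable object end up in the same chain, and each returned chain could then be handed to a worker thread. All names here are illustrative, and the threading itself is omitted:

```python
# Sketch: partition contacts into independent collision chains.
# A contact is a pair (obj_a, obj_b); None marks a static (immovable) object.
# At least one side of each contact is assumed to be movable.

class DisjointSet:
    """Minimal union-find over movable-object ids."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb


def group_contacts(contacts):
    """Return a list of chains; two chains share no common movable object."""
    ds = DisjointSet()
    # Link the movable objects that touch each other.
    for a, b in contacts:
        if a is not None and b is not None:
            ds.union(a, b)
    # Bucket each contact under the root of its movable object.
    chains = {}
    for a, b in contacts:
        key = ds.find(a if a is not None else b)
        chains.setdefault(key, []).append((a, b))
    return list(chains.values())
```

For example, contacts (1,2), (2,3), (4,None) and (5,6) yield three chains: {1,2,3} forms one chain through the shared object 2, while 4 (resting on a static object) and the pair 5-6 are independent and can be solved concurrently.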
REFERENCES
[1] E. G. Gilbert, D. W. Johnson, and S. S. Keerthi, "A fast procedure for computing the distance between complex objects in three-dimensional space," IEEE Journal of Robotics and Automation, vol. 4, pp. 193-203, 1988.
[2] C. Barber, D. Dobkin, and H. Huhdanpaa, "The Quickhull Algorithm for Convex Hulls," Geometry Center Technical Report GCG53, Univ. of Minnesota, MN, 1993.
[3] The SOLID Collision Detection Library, http://www.dtecta.com/, Last Accessed: February 2006.
[4] C. B. Zilles and J. K. Salisbury, "A Constraint-Based God-object Method for Haptic Display," presented at the International Conference on Intelligent Robots and Systems, 1995.
[5] N. Melder and W. Harwin, "Improved Haptic Rendering for Multi-Finger Manipulation Using Friction Cone based God-Objects," presented at Eurohaptics, 2002.
[6] E. Catto, "Iterative Dynamics with Temporal Coherence," presented at the Game Developers Conference, San Jose, CA, USA, 2005.
[7] CHAI3D, http://www.chai3d.org/, Last Accessed: February 2006.
[8] Open Dynamics Engine, http://www.ode.org/, Last Accessed: February 2006.
[9] S. Hasegawa and M. Sato, "Real-time Rigid Body Simulation for Haptic Interactions Based on Contact Volume of Polygonal Objects," Computer Graphics, vol. 23(3), pp. 529-538, 2004.
[10] S. Hasegawa, "Video Clip of Penalty Based Physics Model," http://sklab-www.pi.titech.ac.jp/~hase/index.en.php, Last Accessed: February 2006.
[11] N. Melder and W. Harwin, "Efficient Arbitrary Polygon Transitions," presented at the Haptics Symposium, Chicago, 2006.
[12] D. Baraff, "Analytical Methods for Dynamic Simulation of Non-penetrating Rigid Bodies," Computer Graphics, vol. 23(3), pp. 223-232, 1989.
[13] D. Baraff, "Fast Contact Force Computation for Nonpenetrating Rigid Bodies," presented at SIGGRAPH 94, Orlando, 1994.
[14] Sensable Technologies, http://www.sensable.com/, Last Accessed: February 2006.
[15] The FMOD Audio Library, http://www.fmod.org/, Last Accessed: February 2006.