
An image-based rugosity mesostructure for rendering fine haptic detail
Víctor Theoktisto†, Marta Fairén, Isabel Navazo
{vtheok, mfairen, isabel}@lsi.upc.edu
Departament de Llenguatges i Sistemes Informàtics
Universitat Politècnica de Catalunya, Barcelona, Spain

Abstract— Rendering a surface's fine haptic features (or mesostructure) by precise geometric modeling in dense triangle meshes requires special structures, equipment, and unrealistically high sampling rates for perceiving detailed, rugged models. Some approaches simulate haptic texture at a lower processing cost, but at the expense of fidelity of perception. We propose a method for rendering local heightfield maps (or displacement textures) and corresponding normals to model mesostructure onto much simpler triangle meshes, relying on fast collision detection followed by rendering of surface detail, modulating the force response using a 2D texture of heightfield altitudes and precomputed normals. We compare our method against a force-mapping implementation for rendering/perceiving the same models with textures using normal maps. The proposed technique allows real-time, accurate perception of fine 3D surface detail for given models at low processing cost and with undiminished sampling rates. Experimental results and user-testing evaluations are presented to show the soundness of the approach, determining practical thresholds in magnitude and frequency between modeling a surface with small triangles or with mesostructured ones. Index Terms— Haptic Rendering

I. INTRODUCTION

Haptic systems provide a unique, bidirectional communication channel between humans and virtual environments, much closer to direct physical manipulation. Haptic interfaces enable direct interaction with computer-generated objects, and when coupled with an intuitive visual display of complex data they raise applications to new levels; these applications include molecular docking, nanomaterials manipulation, surgical training, virtual prototyping, machine assembly and digital sculpting. Haptics drives the development of new algorithms for modeling an object's volume and surface, for volume processing, and for visualizing novel data structures able to encode shape and material properties. From single-point, single-person operation to multi-point, multi-hand and multi-person interaction scenarios, this enticingly rich interactivity is within reach of many computer graphics applications. One interesting application of haptic perception is being able to feel, with a haptic device, variations in texture, roughness or detail of the surface being contacted. Some work has been done in this direction, but without a contrasting study of the results of applying different techniques to the same

† On leave from Universidad Simón Bolívar, Caracas, Venezuela

Fig. 1. Sensing a mesostructure coat placed on top of a regular mesh

model. The general idea, as in visualization, is to simulate roughness and other surface features without having to increase the geometric density of the model (see figure 2). In this paper we compare three possible solutions, analyzing their advantages and disadvantages, the last one being our best solution for detailed haptic perception. The contribution is a specific model for the rendering/perception of mesostructure surface details using heightfield-normal textures mapped onto the objects' underlying coarser geometry (the corresponding term in visual rendering is displacement mapping). We achieve correct correspondence between surface-detail visualization and haptic perception, without compromising sampling and rendering rates. The paper is organized as follows: in section 2 we present recent related work in haptic rendering. Section 3 explains the solutions we have compared, one of them being the approach we present in this paper. In section 4 we detail the testing protocol for measuring users' perception of haptic properties, whereas in section 5 we present and analyze the obtained results, and finally we present relevant conclusions and delineate lines of future work.

II. RELATED WORK

The term haptic rendering is applied to the generation and rendering of haptic virtual objects [1]. Although there has been some work in pseudo-haptics simulating surface properties with common computer mice [2], the use of a specialized haptic interface device producing a force-feedback response is more common. Users can manually navigate, explore and


feel the shape and surface details of virtual objects in the growing field of computer haptics [3]. With device sampling rates standardizing in the 1000 Hz range, as in the Phantom or HapticMASTER devices [4], efficient haptic-interaction techniques may go beyond the simple detection of geometric primitives, towards real-time rendering of arbitrary surfaces of irregular detail, conveying spatial and material properties. All this without forgetting its other role as a user-interaction device for high-level event acquisition, recognition of tactile "icons" and general haptic user interfaces or HUIs [5]. The simplest haptic model uses simple surfaces, a point-based device and a force-collision detection algorithm [6]. A haptic cursor representing a force-feedback device is placed into a 3D environment, and a high-priority event loop checks whether it collides with the surface of an object, after which it produces a repulsing force of varying direction and magnitude, which physically combines with the force exerted by the user on the haptic device, correcting any penetration ruled out by the object's geometry [7]. Using a ray-based rendering algorithm with the same setup allows perceiving torque and force-torque feedback [8], while using a third object as an extended probe also allows texture differentiation and shape perception [9]. There is also the issue of perceiving other object properties, such as friction among objects [10], soft or elastic surfaces [11], [12] such as rubber and sponge, or deformable or breakable objects [13], in which the haptic probe is used to modify the object's geometry transiently or permanently. These allow using the haptic device as an interaction tool, to explore 3D medical images [14], or as a navigational aid for blind users [15].
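The penalty-based force response described above (a repulsing force correcting the probe's penetration) can be sketched as follows; the function and parameter names (`penalty_force`, `stiffness`) are illustrative, not taken from the cited systems:

```python
import numpy as np

def penalty_force(probe_pos, surface_point, surface_normal, stiffness=800.0):
    """Spring-like repulsing force pushing the god-object back to the
    nearest surface point, as in basic point-based haptic rendering.
    Returns a zero vector when the probe has not penetrated the surface."""
    n = np.asarray(surface_normal, dtype=float)
    n /= np.linalg.norm(n)
    # Signed distance along the normal: negative means penetration.
    penetration = np.dot(np.asarray(probe_pos, dtype=float)
                         - np.asarray(surface_point, dtype=float), n)
    if penetration >= 0.0:
        return np.zeros(3)
    # Force magnitude proportional to penetration depth, along the normal.
    return -stiffness * penetration * n

# Probe 2 mm below a horizontal surface: force points straight up.
f = penalty_force([0.0, -0.002, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

In a real device loop this force would be combined with the user's applied force at the 1000 Hz rate mentioned above.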
When used as an aid for navigating a space, its short-range reach requires space exploration strategies, such as a moving bubble for navigation [16], a workspace drift control allowing perception discrepancies between visual and haptic space [17], or a force field constraining movement [18]. Collaboration across the network allows simulation of real-time activities [19] such as stretcher-carrying, but it brings latency problems that may cause oscillations in the interaction. These efforts choose among several alternatives for modeling and rendering surfaces. Johnson [20] uses a pure geometric modeling approach to haptically render arbitrary polygons, using neighborhood proximities to reduce computational load. Some haptic techniques and approaches can be found in [3], with a treatment of "force mapping" that simulates detail at higher resolutions by perturbing surface normals, thereby modifying the surface texture perceived by the haptic probe. Jagnow [21] modifies a surface using displacement maps within an enclosing square slab containing bilinear patches. Potter [22] provides a simple model to perceive the haptic variation of large heightfield terrains. Policarpo [23] and Baboud [24] have worked on visualization and shading of objects whose geometry has been replaced by impostors' side views assembled from multiple (2 to 6) displacement maps. On the reverse side, multi-resolution techniques [25] are also used to generate smooth geometry from large heightfield objects, allowing better simulation of terrain models.

In the next section, we present a comparison among three different techniques for haptic perception of surface relief detail.

III. MODELS FOR HAPTIC PERCEPTION OF SURFACE DETAILS

The simulation of surface details in very complex models has not been a problem from the visualization point of view since the late 70s. Algorithms such as colored textures or bump mapping are well known in the literature [26]. In the case of haptic perception, as we have seen in section II, any simulation algorithm must be efficient enough to achieve the high-frequency updates (1000 Hz) required by the human sense of touch to perceive a continuous surface. Our objective is to find an algorithm allowing haptic perception of surface details in objects represented by triangle meshes, and to compare it to other known solutions. Based on all previous work, we can summarize a taxonomy for haptic detail rendering, which determines the particular algorithm to be used:
• Geometric Detail, rendering the surface as detailed polygonal meshes, NURBS, or point clouds, and detecting collisions against the surfaces.
• Surface Relief Detail, in which a haptic texture is sampled in lieu of the actual surface. On their own, haptic textures may be based on bump maps or heightfields. These approaches may also allow managing LOD in the visual and haptic resolutions, and having a repository of assorted relief textures.

A. Generation of geometry
A direct algorithm simulates surface details by generating the local geometry corresponding to these details (see figure 2(a)). This geometry can be calculated as a preprocess for the whole model, or locally for each triangle of the mesh at run time, by deciding which triangles of the mesh are inside the haptic influence area. A collision detection algorithm then tracks the haptic probe (or god-object), which is represented on the screen by a proxy. When moving freely in empty space, their coordinates coincide.
When the god-object hits a triangle, the proxy clamps to the collision point, and a force is exerted in the haptic device to push the god-object to the nearest point on the hit triangle, with the proxy following suit along the surface. In the aforementioned cases, the cost in time and/or space of modeling a detailed surface using geometry alone is not affordable given the frequency-update requirement. With many small triangles, the haptic probe may well be at another point on the surface (a far-off triangle) after detecting a collision, and the resulting force interplay can cause unstable behavior, adding spurious force feedback, inducing vibration or simply wrong sampling (haptic aliasing).

B. Force mapping
One of the visualization algorithms used to simulate surface details is the bump-mapping algorithm. It consists of a texture


Fig. 2. Approaches to simulate surface details in haptic perception: (a) Geometry; (b) Normals; (c) Force mapping; (d) Heightfields

map where each value corresponds to a surface normal vector (RGB corresponding to the respective XYZ coordinates), computed for each (s, t) 2D point in the map. The bump-mapping algorithm uses these normal vectors at the discrete points (see figure 2(b)) to calculate illumination values inside the triangle, simulating the surface details. The haptic-perception bump-mapping algorithm (also called force mapping) [3] is based on the same principle. It uses the normal vector at discretized surface points to calculate the force direction and magnitude to be applied to the haptic device when it collides with the triangle (see figure 2(c)). The procedure used in this case is described in Algorithm 1:

Algorithm 1 Haptic rendering with force-mapping
loop
  Sample god-object's position
  Detect collision with the mesh
  if (∃ collision with some triangle T) then
    {We have a collision point within the triangle}
    Project the god-object's position P(s, t) relative to T
    Sample N(s, t) from the surface normal map at that point
    Apply constant force F in direction N(s, t) at the haptic device
  end if
end loop
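A minimal sketch of one iteration of Algorithm 1, assuming the application supplies its own collision query, triangle parameterization and normal-map sampler (the callback names `collide`, `project_to_uv` and `sample_normal_map` are hypothetical):

```python
import numpy as np

def force_mapping_step(probe_pos, collide, project_to_uv, sample_normal_map,
                       force_magnitude=1.0):
    """One iteration of the force-mapping loop. `collide`, `project_to_uv`
    and `sample_normal_map` are placeholders for the application's own
    collision query, triangle parameterization, and texture sampler."""
    tri = collide(probe_pos)          # triangle T hit by the god-object, or None
    if tri is None:
        return np.zeros(3)            # no collision: no force
    s, t = project_to_uv(tri, probe_pos)          # P(s, t) relative to T
    n = np.asarray(sample_normal_map(s, t), dtype=float)
    n /= np.linalg.norm(n)
    # Constant-magnitude force along the perturbed normal N(s, t).
    return force_magnitude * n

# Dummy callbacks standing in for a real mesh and normal map.
f = force_mapping_step([0.0, 0.0, 0.0],
                       lambda p: "T",
                       lambda tri, p: (0.5, 0.5),
                       lambda s, t: (0.0, 0.0, 2.0))
```

Note the force magnitude is constant here, which is exactly why, as discussed below, force mapping cannot convey displacement above or below the triangle surface.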

By using this haptic perception algorithm and implementing the bump-mapping visualization as a GPU shader, we achieve a correct perception of surface roughness when the distortion is not very high. This is because the collision is always detected against the triangle of the mesh, so a perception of displacement upward or downward from the triangle surface is not possible (more detailed results in section VI).

C. Fast heightfield rendering
Heightfield rendering has been used primarily as a technique for imaging large datasets out of GIS data. These are usually huge files stored as RGBα images, with RGB providing texture information and the α channel providing the vertical height. A grid is used to scan the heightfield, interpolating and filtering values as needed, substituting for geometry (see figure 2(d)). For haptic rendering in applications of this sort (terrain models), the proxy's calculated position must correspond accurately to that of the tracked god-object, because the magnitude of the measured heights (32 bits or more) is usually very large and small displacement differences may cause wrong perception. These models generally render a whole terrain as a gigantic heightfield map [22]. The requirements change slightly when the objective is not a GIS application. An accurate haptic perception of surface properties requires a very detailed mesh, and a collision-detection algorithm to obtain exact collision coordinates. However, even with space-management techniques, in a detailed surface most triangles will be small and possibly close to degenerate. The time spent discarding potential collisions grows with detail, eventually affecting haptic rendering rates, producing lagging response times, surface perception aliasing, and wildly unstable haptic response. In our case, our interest is in using heightfields (also known as displacement maps) to separate the mesostructure from gross geometry, enabling us to overlay and render diverse haptic reliefs on the same underlying mesh. With this approach, which we describe in detail in [27], we reduce by orders of magnitude the need for a dense triangle mesh of equivalent geometric detail. Each mesh requires just a moderate number of triangles, of about the same size. We trade a search in 3D space for the exact proxy's collision coordinates against some small facet (as explained above) into


a procedure for identifying a much larger triangle, followed by a 2D mapping/search of the haptic probe's position into the closest surface detail in that triangle. It therefore becomes much more important to determine, quickly and at low cost, the triangle potentially collided by the haptic probe, so that the haptic rendering algorithm has enough time to find the appropriate heightfield altitude, determine whether there is a collision point, and if so, exert the appropriate repulsing force. The algorithm used for this approach works as follows (see figure 3):

Algorithm 2 Haptic rendering with heightfields
loop
  Sample god-object's position
  Detect collision with a triangle in the mesh
  if (∃ collision with some triangle T) then
    {The god-object is inside the triangle prism}
    Project god-object's position P(s, t) relative to T
    Sample the height H(s, t) from the heightfield map
    if (distance(P(s, t), T) < H(s, t)) then
      {Positive penetration, we had a collision}
      Calculate force magnitude F ∝ penetration
      Apply F in the normal N of T at the device
    end if
  end if
end loop

In this algorithm we combine different techniques to achieve fast haptic rendering:
• The triangle mesh is represented by an octree space decomposition, which allows very quick detection of the area where the haptic probe is located, discarding early in the process a very high amount of the model's geometry;
• A bounding truncated prism is created for each mesh triangle ⟨V0,k, V1,k, V2,k⟩, with equal displacements up and down a maximum displacement mh along each vertex normal, containing all possible heightfield values (see figure 3(a)). All triangles created to define each bounding prism share the same identifying label as the original base triangle of the mesh, so a collision with any of them is conservatively interpreted as a potential collision against the surface of the face, making identification of the relevant triangle immediate;
• Any collision detected against any prism face triggers the heightfield haptic rendering of a small grid cell lying in the triangle under the probe, shadowing the probe's movement. If at any time the probe descends below the sampled height at that point, a repelling force is applied to the haptic probe along the face normal at the projected point, proportional to the height difference (or penetration). This continually pushes the god-object back towards the surface, at which point the force ceases (see figure 3(b));
• The heightfield maps are forced to zero displacement on the edges of the mapped triangle when there are abrupt texture variations on these edges, which can cause instability in the haptic device. By forcing the heightfield to altitude zero on the edges when changing textures, we ensure C0-continuity at the triangle edges. This solution avoids the problem of heights from different textures meeting at a triangle edge. In the case of convex folds, this may avoid possible instabilities when cruising near the edges, but it is clearly not enough for concave folds, which require special treatment.
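The inner penetration test of Algorithm 2 can be sketched as follows; `sample_height` and `project_to_uv` are hypothetical stand-ins for the application's texture sampler and triangle parameterization, and the stiffness constant is illustrative:

```python
import numpy as np

def heightfield_force(probe_pos, tri_point, tri_normal, sample_height,
                      project_to_uv, stiffness=500.0):
    """Inner test of the heightfield algorithm: compare the probe's altitude
    over the base triangle with the heightfield value H(s, t) sampled at its
    projection, and push back proportionally to the penetration."""
    n = np.asarray(tri_normal, dtype=float)
    n /= np.linalg.norm(n)
    s, t = project_to_uv(probe_pos)
    h = sample_height(s, t)                       # H(s, t) from the heightfield map
    altitude = np.dot(np.asarray(probe_pos, dtype=float)
                      - np.asarray(tri_point, dtype=float), n)
    penetration = h - altitude                    # positive => below the relief surface
    if penetration <= 0.0:
        return np.zeros(3)
    # Force proportional to penetration, along the (flat) face normal.
    return stiffness * penetration * n

# Probe at altitude 0.01 under a relief of height 0.03: pushed up along +z.
f = heightfield_force([0.0, 0.0, 0.01], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                      lambda s, t: 0.03, lambda p: (0.5, 0.5))
```

The octree query and prism labeling described in the bullets above would run before this test, so that only one candidate triangle reaches it per loop iteration.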

Fig. 4. Possible case of instability: area of instability between Triangle A and Triangle B

IV. MESOSTRUCTURE MODEL FOR HAPTIC RENDERING
A more accurate and natural approach is to apply the repulsing force in the exact normal direction at the specific impacted surface point. By having both heightfield and normals precalculated as a texture, sampled on the fly, we not only obtain a better haptic rendering without incurring a lagged response or reduced precision, but may easily explore the response of several such textures by "coating" or "dressing" coarse meshes and rendering them as if they were much denser. The normal and the heightfield value correspond to an RGBα texture file, with the normal ⟨Nx, Ny, Nz⟩ in the ⟨r, g, b⟩ coordinates and the height value in the α (transparency) coordinate. The heightfield-normal tuples may be provided as static or procedural 2D, 3D or 4D (+ time) textures, allowing for an even greater quality of haptic perception. The visual part is rendered by mapping the displacement using the same heightfield and normals, so there is complete correspondence between the haptic and visual renderers. The algorithm used for this approach works similarly to the one explained in section III-C, except in the setting of the haptic response, shown in Algorithm 3. In order to apply our method for perceiving scratches on the surface [28], we have extended it to accept heightfield maps with negative values (see figure 5). In these cases, force mapping is not able to give the correct perception, because there are neighboring points with very different normals, which cause haptic instability. Our heightfield algorithm allows a correct perception of this sort of characteristic as well. Haptic resolution is scaled in sync with the current visual zoom state.
Getting closer to the tested object resizes the haptic space accordingly, so altitudes that perhaps were not measurable at some zoom level (“blurred”) become distinct and perceivable at a higher resolution, and the touch perception of surface change becomes more accurate.
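The RGBα packing described above can be sketched as follows. The [-1, 1] → [0, 1] remapping of normals and the signed-height encoding in α are assumptions (one common convention), not necessarily the authors' exact encoding; they do, however, let the α channel carry the negative heightfields just mentioned:

```python
import numpy as np

def pack_texel(normal, height, max_h):
    """Pack a unit normal and a signed height into an RGBα texel in [0, 1].
    Heights are assumed to lie in [-max_h, max_h]; negative values encode
    engraved detail such as scratches."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    rgb = 0.5 * (n + 1.0)                  # normal ⟨Nx,Ny,Nz⟩ into ⟨r,g,b⟩
    alpha = 0.5 * (height / max_h + 1.0)   # signed height into α
    return np.append(rgb, alpha)

def unpack_texel(texel, max_h):
    """Recover the normal N(s, t) and height H(s, t) from a texel."""
    t = np.asarray(texel, dtype=float)
    n = 2.0 * t[:3] - 1.0
    n /= np.linalg.norm(n)
    h = (2.0 * t[3] - 1.0) * max_h
    return n, h

# Round-trip a texel for a scratch 4 mm deep under a 10 mm height budget.
tex = pack_texel([0.0, 0.0, 1.0], -0.004, 0.01)
n, h = unpack_texel(tex, 0.01)
```

Sampling one such texel per haptic frame gives both the penetration test (from α) and the force direction (from rgb) in a single texture fetch.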


Fig. 3. Heightfield collision mapping: (a) triangle prism collision area; (b) collision point computation

Algorithm 3 Haptic rendering of mesostructure model
loop
  Sample god-object's position
  Detect collision with a triangle in the octree
  if (∃ collision with some triangle T) then
    {The god-object position is inside the triangle prism}
    Project god-object's position P(s, t) relative to T
    Sample the height and normal pair [H(s, t), N(s, t)] from the mesostructure
    if (distance(P(s, t), T) < H(s, t)) then
      {Positive penetration, we had a collision}
      Calculate force magnitude F ∝ penetration
      Apply F in the direction of the normal N(s, t) at the device
    end if
  end if
end loop

Fig. 5. Example of a negative heightfield

A. Dealing with concave faces
When two triangles form a concave fold (angle between faces less than 180°), the god-object's position can be inside two (or more) triangle prisms at the same time. In this case, the algorithm compares the heights in the textures of all involved triangles with the god-object's position, and chooses the closest one. If we assume the correctness of the base model, the god-object cannot hit more than one surface at the same time. However, if the model is correct but sizeable height values lie quite near the common edge (as illustrated in figure 4), the user will perceive growing instability, because rapidly changing forces applied at very near points in two different directions cause the probe to bounce back and forth between the triangles' surfaces. Nevertheless, these cases are rare in the most common applications, because the intervening triangles must meet at a very acute angle and have positive heightfield values quite close to the common edge. This proximity effect is directly proportional to the maximum height value, so a lower overall magnitude improves behavior and even allows acuter angles between faces before instability sets in. The situation is avoided entirely by identifying in each mesh all concave folds between faces (that is, faces whose bounding prisms intersect): when the god-object is detected within the common region by a simple inclusion test (computed from the current folding angle and the faces' maximum heightfields), its position is checked against all possible colliding faces, and in case of actual penetration the repulsing force is set in the direction of the averaged normals of the impacted points. To avoid border cases, we add a small ε to the collided heights so the probe does not get trapped in a narrow crevice or hole. A better solution, if the object's geometry remains constant, is to transform the initial mesostructure texture so that heights and cumulative normals are already mapped for those surface points that collide with other faces, saving the inclusion and collision tests altogether (see figure 6). The same step may be used for stitching different mesostructures at triangle boundaries.

Fig. 6. Precomputed mesostructure mapping
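The concave-fold resolution described in section A (check every face whose prism contains the probe, average the impacted normals, and pad the collided heights with a small ε) might be sketched as follows; the per-face penetration test is abstracted into a callback, and all names are illustrative:

```python
import numpy as np

EPS = 1e-4  # small ε added to collided heights to avoid narrow-crevice traps

def concave_fold_force(probe_pos, faces, stiffness=500.0):
    """Resolve the probe against every face whose prism contains it at a
    concave fold. Each face is a (penetration_fn, normal) pair, a stand-in
    for the per-face heightfield test of Algorithm 3. The force direction
    is the average of the impacted faces' normals."""
    normals, max_pen = [], 0.0
    for penetration_fn, normal in faces:
        pen = penetration_fn(probe_pos) + EPS   # ε-padded penetration
        if pen > 0.0:                           # real penetration of this face's relief
            normals.append(np.asarray(normal, dtype=float))
            max_pen = max(max_pen, pen)
    if not normals:
        return np.zeros(3)
    n = np.mean(normals, axis=0)                # averaged normals of impacted points
    n /= np.linalg.norm(n)
    return stiffness * max_pen * n

# Probe penetrating two faces of a fold equally: force bisects their normals.
f = concave_fold_force([0.0, 0.0, 0.0],
                       [(lambda p: 0.01, (0.0, 0.0, 1.0)),
                        (lambda p: 0.01, (0.0, 1.0, 0.0))])
```

Precomputing this resolution into the mesostructure texture itself, as figure 6 suggests, would remove the per-frame loop over candidate faces.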

B. Blending haptic textures at the edges
The same edge-face proximity structure can be applied to blend two haptic textures at the joining edge of two triangles. Again, this can be precomputed, modifying the mesostructure texture at the seams so there is blending continuity across triangles. If the object's geometry is going to be modified at


those particular faces, it will force recalculation of the whole neighborhood mesostructure.

V. USER TESTING PROTOCOL
In order to measure quantitatively and qualitatively users' ability to perceive a surface's haptic properties, we devised the following protocol for testing chosen meshes and haptic textures, using both force mapping and heightfields, under the same environmental conditions for each run. This means using the same base models for both runs, and an exact correspondence between the relief map used as a heightfield and the normal map used for force mapping. The low-density mesh models at our disposal are shown in Table I, the rugosity mesostructures are described in Table II, and the battery of tests is presented in Table III. To create these mesostructures, we generated appropriate heightfield maps and calculated their corresponding normal maps, either with a plugin for the open-source GIMP application that computes a good approximation, or by an exact precalculation of the real normal map as explained in [29], which we implemented. Bilinear, bicubic and other filters may be applied to the heightfield if a smoother surface is desired. In this manner we can load and unload corresponding mesostructure pairs of heightfield maps and normal maps representing some 3D mesostructure. Although a colored texture may be added to the surfaces, it is irrelevant as a surface haptic property. Each generated mesostructure exhibits some measurable perception property, on either the spatial or temporal axis (see Table II). The following experiments were performed and measured against heightfields of maximum altitude of 1%, 5%, 10%, 15% and 20% of the average edge length of each mesh.
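Computing a normal map from a heightfield, as mentioned above, is commonly approximated by finite differences; the following is a sketch under that assumption (similar in spirit to what bump-map plugins compute, not the exact precalculation of [29]):

```python
import numpy as np

def normals_from_heightfield(H, scale=1.0):
    """Approximate per-texel normals from a heightfield by central
    differences. H is a 2D array of altitudes; `scale` converts texel
    units to height units. Returns an (h, w, 3) array of unit normals."""
    gy, gx = np.gradient(H.astype(float))   # slopes along rows (y) and columns (x)
    # Tangents (1, 0, gx) and (0, 1, gy); their cross product is the normal.
    n = np.dstack((-gx * scale, -gy * scale, np.ones_like(H, dtype=float)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

# A flat heightfield yields straight-up normals everywhere.
flat = normals_from_heightfield(np.zeros((4, 4)))
```

Remapped into [0, 1] and stored alongside the heights, such a map provides the heightfield-normal pairs the protocol requires.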

VI. RESULTS
The equipment used for the tests is a HapticMASTER from FCS, with a built-in wide haptic 3D space, able to exert forces from a delicate 0.01 N up to a very heavy 250 N, within a wide 3D perception field equivalent to a cylindrical wedge of 40 cm × 36 cm × 1 radian. The PC is a 2 GHz Pentium IV with an ATI Radeon 9700 graphics card.

A. Test I. Baseline perception
The baseline perception test was designed so users would find no differences between a simple collision with the underlying mesh geometry and the same geometry with the flat heightfield H0 (a constant height of zero throughout). With meshes Ma (flat mesh of equal triangles) and Mf (flat mesh of unequal triangles), users detected no edge crossings and no features whatsoever. Working with meshes Mb and Md, they detected edge crossings only when such edges were between faces joining at non-flat angles. As expected, there was no spurious detection of any other surface feature (since there are none).

B. Test II. Bump (force) mapping and heightfield rendering difference perception
First, a sphere, built recursively at several resolutions out of a perfect icosahedron, was chosen as the base model. For the sake of enhancing the features detailed in the article, it has been set to the simplest possible resolution for the figures appearing here, to better appreciate visually the rendered surface texture. Secondly, we generated synthetic heightfield maps and used them to compute corresponding normal maps, using a plugin for the open-source GIMP application. In this way we obtained corresponding pairs of heightfield and normal maps representing the same 3D geometry, as shown in figure 7. In the end we settled for the three textures shown here, since each allows the user to perceive different surface characteristics. The textures chosen are one with alternating polished and variably bumpy areas (N4 and H4), one with gently sloping circles with a soft gradient (N3 and H3), and one with an embossed cross with only flat and vertical surfaces (N2 and H2). Additionally, taking advantage of the graphics card hardware, we implemented the shading part corresponding to our bump mapping as a GPU shader, to allow a better haptic sampling rate. We show in figure 8 different renderings using force mapping and heightfields. It is evident that bump mapping produces a smoother visualization. However, in terms of haptic perception, the comparisons are quite different and depend on the characteristics of the texture map. We studied the haptic perception of the user with both methods, force mapping and our heightfield algorithm, over several texture maps with different characteristics:



• In the case of a texture map like the one shown in figures 8(a) and 8(b), where the texture image shows two bumpy areas, the resulting perception is somewhat similar in both methods. The user perceives a certain roughness in the fine bumpy area, and a bumping feeling and a certain guidance among bumps in the coarse bumpy area.
• In the case of a texture like the one shown in figures 8(c) and 8(d), where the texture image shows a big oval and two small bumps over the surface, the resulting perception is a bit different between the two methods. Over the small bumps there is almost no difference, but when the user is inside the big oval, although in both cases there is similar resistance to going out, with the force-mapping method the user only perceives the resistance of going up onto the oval, while with the heightfield method the perception is clearly of going up onto the oval and down from it.
• In the case of a texture like the one shown in figures 8(e) and 8(f), where the texture image shows a cross step over the surface, the perception is clearly different between the two methods. With the force-mapping method the user only perceives resistance going up and a jump going down, but no height differences can be perceived. Moreover, there are some instabilities in this perception in the areas in the middle of the cross, because there are neighboring points with very different normal directions.


TABLE I
TRIAL MODEL MESHES

Mesh | Description
Ma | Open regular mesh (flat surface of near-equilateral triangles).
Mb | The same mesh as a softly convex surface (folding angles in [180°, 225°]).
Mc | Similar to Mb, but faces fold at acute, right and obtuse angles.
Md,j | Closed convex meshes (spheres), beginning with an icosahedron.
Me | Open concave regular mesh in the shape of a "cup".
Mf | Open flat mesh with a regular gradation of triangles, from big to small.
MG | A much denser mesh based on Mb, with very small triangles following the heightfield instead.

TABLE II
TRIAL RUGOSITY MESOSTRUCTURES

Model | Feature description
H0, N0 | A uniformly flat surface, used as the baseline.
H1,k, N1,k | A family of serrated patterns (steep vertical left side; sloping right side), of increasing frequency in k.
H2, N2 | Raised beams crossing at right angles.
H3, N3 | Gently sloping rings, peaks and holes.
H4, N4 | Bumps and warts of varying sizes and densities.
H5, N5 | Raised flat cylinders (like a coin).
H6, N6 | Grooved or engraved letter S (a negative heightfield).
H7, N7 | A fractal mountain range, with peaks and valleys.

TABLE III
TRIAL TEST PROTOCOL

Test | Model | Mesostructure | What it measures
I. Baseline perception of null heightfield | All | H0 | Perception of edge crossings.
II. Perception of visual-haptic sensation quality | Ma, Md,j | H2, H3, H4 | Haptic differences between force-mapping and heightfield rendering algorithms.
III. Perception of monotonous mesostructure (simple pattern) | Ma, Mb | H1,k | Visual-haptic resolution calibration, height variation, feature counting, groove orientation.
IV. Perception of non-monotonous mesostructure (complex patterns) | Mb | H2, H3, H4 | Visual-haptic correspondence, height variation, contour following, bumpy quality.
V. Perception of visual-haptic resolution disparity of mesostructure in a gradated mesh | Mf | H5, H6, H1, H2, H3 | Limits of feature perception.

Fig. 7. Normal and heightfield maps: (a) N2 normal map (cross); (b) N3 normal map (ovals); (c) N4 normal map (warts); (d) H2 heightfield (cross); (e) H3 heightfield (ovals); (f) H4 heightfield (warts)

With the heightfield method the user's perception is much better in this case, because the parts of the cross going up and down give the feeling of actually going up and down, with a different height on top of the cross than at the base.


There is no instability either in those areas in the middle of the cross. As a summary, we can conclude that the force mapping method can be a good approximation for modeling an apparent roughness of material, but is not sufficient for irregular normal maps where the perception has to be tight to the texture shape we want to simulate. This problem is solved with our heightfield algorithm, which gives an accurate sense of the surface characteristics. C. Test III. Perception of mesostructure with simple repeating patterns This test was devised to test the lower and upper limits of haptic modeling and perception using the heightfield approach. We chose a simple repeating texture in a regular serrated pattern, each ridge with a left vertical side and a sloping right one. The test measures several perception variables related to haptic resolution: How far are they spaced? Can the ridges be counted? How does it feel when going left-to-right and back?. Each trial was performed on the base mesh Mb using heighfields H1,j with the same serrated pattern at different resolutions (and corresponding precomputed normals N1,j , see figure 9). For each trial, the maximum heightfield value (that is, the altitude of the prism) was modulated at 1%, 5%, 10%, 15% and 20% of the average length of the mesh edges, and tested in several runs with users. Although we tested several frequencies, only the last two showed The results are summarized in table IV and table V: TABLE IV

the difference between going left-to-right or right-to-left as abrupt or sloping. However, at the highest texture resolution, all test subjects only felt vibration (fats updown movement) without discerning any sense direction or damping. • Height modulations greater than 20% resulted in growing instabilities in the haptic device, due to wild and fast changes in normal direction, and overshooting of features due to feedback kick, product of continued exerted forces in high vertical walls. These results hint at a practical threshold on how much geometric mesostructure may be modeled by this approach. In one end of the variation scale, mesh zones where surface variation exceeds 15% of average edge size are thus candidates for finer remeshing. In the other end of the scale, if a triangle is perceived as less detailed as the haptic texture dictates, the haptic sensation may be enhanced by using the same texture sampled a a lower rate. On the other hand, high resolution mesostructures with below-the-threshold heights become perceptible when zooming on the scene (see figure 10). The scaling effect is kept in sync between the visual and haptic field of view, so sensation becomes increasingly defined when going from afar (figure 10(a)) to near (figure 10(b)). The size of the haptic probe is correspondily reduced so it follows much more accurately the valleys and ridges in the texture. The reverse is also true, when going the other way, features become less perceptible. However, much care should be exercised in doing so very slowly, because a too fast approach may generate such strong forces in the device that it will swing dangerously wild to compensate.

H APTIC PERCEPTION OF HEIGHTFIELD TEXTURE F REC 256 % Height Count ridges? L to R feel?

1% 67% vibr. 33% no 100% vibr.

R to L feel?

100% vibr.

5% 83% 17% 83% 17% 83% 17%

yes vibr. slope vibr. up/dn vibr.

10% 100% yes 100% slope 100% up/dn

15% 100% yes 100% slope 100% up/dn

20% yes, unst. grow unst. grow unst

TABLE V

D. Test IV. Perception of non monotonous mesostructure In this test we measure abilities to perceive definite shapes in the haptic textures: A hard-edged cross; a soft textures of sloping rings, peaks and depressions; small-to-big warts or bumps; a protruding feature in the shape of a coin; A groove in the shape of the letter S. The object of this test the multimodal quality of perception: how much it corresponds with the visual representation and whether can be “followed along”.

H APTIC PERCEPTION OF HEIGHTFIELD TEXTURE F REC 512 % Height Count ridges? L to R feel? R to L feel?

1% 50% vibr. 50% no 83% vibr. 17% no. 100% vibr.

5% 50% yes 50% vibr. 100% up/dn 100% up/dn

10% 100% yes 100% up/dn 100% up/dn

15% 100% yes 100% up/dn 100% up/dn

20% grow unst. grow unst. grow unst.

TABLE VI H APTIC PERCEPTION OF FINE FEATURES IN NONMONOTONOUS MESOSTRUCTURE

% Height Straight walls Round contours Grooves

Three important conclusions that can be extracted from this results: • There exists a definite region for the best perception of haptic features, which sits between maximum peaks and valleys of 5%-15% of a triangle’s edge, with a “sweet spot” with optimum perception at prism altitude 10%. The 5%-15% region holds also for dynamic characteristics, such as sense of direction in the grooves, and sensing

Soft slopes Small bumps

1% 100% yes 100% yes 100% yes 100% yes 100% yes

5% 100% yes 100% yes 100% yes 100% yes 100% yes

10% 100% yes 100% yes 100% yes 100% yes 100% yes

15% 100% yes 100% yes 100% yes 100% yes 100% yes

20% unst unst unst. 100% yes 83% yes 17% no

As can be extracted from the table, event small scratches are felt and followed, until they become too deep and narrow for a proper rendering of the haptic forces generated.
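The 15% remeshing threshold identified in Test III is straightforward to apply per triangle. A sketch of such a check (the function and argument names are our own, not the paper's), flagging triangles whose covered heightfield texels vary by more than a fraction of the average edge length:

```python
import numpy as np

def needs_remeshing(covered_heights, avg_edge_len, threshold=0.15):
    """True if the heightfield variation under this triangle exceeds
    `threshold` (15% of the average edge length by default), making the
    triangle a candidate for finer remeshing instead of haptic texturing."""
    variation = float(np.max(covered_heights) - np.min(covered_heights))
    return variation > threshold * avg_edge_len
```

For example, texels spanning altitudes 0.05 to 0.25 under a triangle with average edge length 1.0 vary by 0.2 and would be flagged for remeshing, while a 0.1 variation would stay within the haptic-texture regime.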


Fig. 8. Heightfield collision mapping: (a) warts (using force maps); (b) warts (using heightfields); (c) ovals (using force maps); (d) ovals (using heightfields); (e) cross (using force maps); (f) cross (using heightfields).

Fig. 9. (a) Slightly convex mesh Mb; (b) coarse serrated mesostructure; (c) fine serrated mesostructure.
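The serrated pattern shown in figure 9 is simple to synthesize. A sketch (our own construction, with illustrative names) of an H1,k-style heightfield with k ridges, each jumping up on its vertical left side and sloping down to the right, with peak altitude given as a fraction of the average edge length:

```python
import numpy as np

def serrated_heightfield(size, k, amplitude):
    """Square heightfield of k parallel serrated ridges with peak `amplitude`."""
    # Sawtooth along u: (u * k) mod 1 restarts at every ridge.
    u = np.linspace(0.0, 1.0, size, endpoint=False)
    ramp = (u * k) % 1.0
    # Peak right after the jump (vertical left side), then a linear
    # descent toward the next ridge (sloping right side).
    row = amplitude * (1.0 - ramp)
    # Ridges are constant along v: tile the row into a square texture.
    return np.tile(row, (size, 1))
```

Raising k reproduces the frequency series of Test III, and setting `amplitude` to 1%-20% of the average edge length reproduces its height modulations.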

E. Test V. Perception of visual-haptic disparity in a gradated mesh

Here we measured changes in perception across resolutions. We map the same haptic texture onto triangles of decreasing size, in order to test the limits of perception, aliasing effects and arising instabilities. We also measure how these qualities change as we zoom (both haptically and visually) into the mesh.
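As noted for Test III, the scaling effect is kept in sync between the visual and haptic fields of view, and the zoom must change slowly to avoid violent compensating forces. This reduces to two small rules, sketched below with illustrative names; the per-frame step limit is an arbitrary assumption:

```python
def synced_probe_radius(base_radius, zoom):
    """Shrink the effective haptic probe as the scene is magnified, so
    it follows the texture's valleys and ridges more closely."""
    return base_radius / zoom

def apply_zoom(current_zoom, target_zoom, max_step=0.02):
    """Approach the target zoom in bounded steps per haptic frame: an
    abrupt zoom rescales penetration depths all at once and can make
    the device swing violently to compensate."""
    delta = target_zoom - current_zoom
    return current_zoom + max(-max_step, min(max_step, delta))
```

Doubling the zoom halves the probe radius, so features at half the previous height become perceptible; the bounded step spreads that change over many haptic frames.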

VII. CONCLUSIONS

We have developed a fast and accurate method for rendering local haptic texture on triangle meshes, which allows a user to perceive correct surface detail at several resolutions. This extends the use of heightfield haptics beyond the usual domain of gigantic terrain textures, and allows surface details to be perceived without modeling them. The approach can be used for locally mapping relief textures onto triangular meshes and haptically rendering them in real time.

Fig. 10. Perception scaling adjustment for mesostructure: (a) high-resolution mesostructure from afar; (b) high-resolution mesostructure up close.

We are extending this research by exploring a superposition of multiple-resolution haptic textures. This extension would allow a better perception of heightfields simulating relatively high slopes or high-frequency changes, providing more haptic detail where needed. Using the results obtained in this research, we are also devising a procedure to scan an object's fine geometry and determine a less dense geometry of larger triangles that captures all the detected detail within each triangle, blending continuity through global mesostructure atlases of heightfield values, normals and other properties such as directed and undirected friction, and extending the approach to much denser models.

ACKNOWLEDGEMENTS

We want to thank Iban Lozano for his valuable help with the implementation of the shaders. We extend our thanks to Eva Monclús, Santiago Murillo, Antoni Xica, and Marcos Balsa for their observations and patience during the measuring phase. This work has been partially co-financed by project TIN2004-08065-C02-01 of the Spanish government (MEC) and FEDER funding.
