Towards perceptual enhancement of multiple intersecting surfaces

Mark A. Robinson and Kay A. Robbins
University of Texas at San Antonio, 6900 N. Loop 1604 West, San Antonio, TX 78249, USA

ABSTRACT

Visualizations of 2D scalar fields are often based on the rendering of 3D surfaces. Combinations of standard 3D rendering techniques using color, transparency, texturing, and lighting can be employed to create visualizations for multiple surfaces. However, the transition from foreground to background due to surface intersections can confound accurate surface perception, even for techniques that work well for overlapping non-intersecting surfaces. Our research proposes a generalized method of decomposing and stratifying these surfaces into layered components, allowing complete rendering control of each surface component. We explore different rendering schemes based on stratification values and report initial results from experimentation.

Keywords: visualization, 2D scalar field, intersecting surfaces, stratification, decomposition

1. INTRODUCTION

Visualization technology can augment a scientist's ability to explore and understand the structure and relationships of data. Often, the data involves multivariate samplings or multiple scalar fields spanning a shared sampling domain. Many techniques exist for creating meaningful visualizations of multiple surfaces representing scalar data sets. However, if the surfaces intersect each other, the rendering techniques used for each surface may interfere with accurate user perception of surface contours and of the relationships between surfaces. One possible reason is that the rendering attributes of a surface may be optimized for a foreground or a background position; these attributes may cause perceptual problems when the surface changes from foreground to background, and vice versa.

Much work has been done on the 2D display of multivariate scalar fields. Although 2D visualizations do not convey the exact magnitude of the visualized scalar fields, they can communicate the approximate relationship of overlapping fields. Ferry et al.1 use an adaptive variation of glyphs to represent large multivariate data sets. Using abstraction and importance functions, the authors scale the distribution of the glyphs to fit not only the data density, but also the number of visualization display pixels. Weigle et al.2 encode multiple 2D scalar data sets within the same visualization using oriented slivers. They performed tests using eight chemical composition data sets gathered from a scanning electron microscope. The authors suggest that up to fifteen data sets could be collectively represented using this texturing technique and that other characteristics of the texture could encode additional dimensions (e.g., size, color, density). Bokinsky3 develops a technique for displaying multivariate data as a series of overlaid layers. Each layer is rendered with a distinct hue, using an alpha mask of randomly generated circular Gaussians with a uniform standard deviation. From her user studies, she states that subjects were able to discern up to seven layers of overlaid data.

Transparency and see-through textures enhance the simultaneous perception of multiple, overlapped surfaces. Metelli4 explores the necessary conditions for the perception of transparent overlapping figures. He defines three primary conditions for effective perception: figural unity of the transparent layer, continuity of the boundary line, and adequate stratification. Metelli also discusses the components of the resulting (fused) color as the sum of the scission colors (the original transparent color and the underlying opaque color).

3D graphics uses the dimension of depth to communicate magnitude and relationships to the viewer. While this can increase understanding, 3D graphics also brings its own set of implementation hurdles. Dooley et al.5 introduce a set of rendering rules and primitives to better automate the visualization of scenes containing multiple overlapping and occluding objects. They render transparent surfaces with a screen-space pattern similar to a dithering texture. The resulting texture reveals objects behind the surface, as well as the actual surface structure. Rheingans6 uses opacity-modulated textures to better infer the shape of an irregular 3D surface and to allow perception of an interior surface. These opacity-modulated textures use an alpha mask gradient to provide a smooth transition from fully opaque to fully transparent surfaces. She presents several side-by-side comparisons of textured versus transparent surfaces, each with a fully opaque interior surface.

Good texture selection for each of the visualized surfaces can improve overall surface perception. Interrante et al.7 propose the generation of a sparse, opaque texture, comprised of a small set of meaningful curves (ridge and valley lines), to improve perception of a transparent surface while still allowing perception of underlying and contained surfaces. The texture consists of a small number of lines that carry both geometrical and perceptual significance for the transparent data set. The authors demonstrate this technique using data from a radiotherapy treatment application. Interrante et al.8 demonstrate that a sparsely distributed, discrete, opaque texture improves user perception of a transparent surface's structure while still allowing good perception of an enclosed, fully opaque surface within. The authors discuss several different textures and propose a stroke texture in which stroke direction and length are controlled by the magnitude and direction of principal curvature at regularly spaced locations along the surface. They design and implement an experimental paradigm for objectively measuring a user's ability to determine the structure, form, and relative distance of an enclosed surface. They apply this paradigm to measure the effectiveness of texture for images from a radiotherapy treatment application. The results suggest that texture greatly enhances perception of transparent surface structure, but the experiments did not establish whether any one texture works clearly better than the others. They briefly mention the difficulty of displaying multiple transparent layers and note that the stroke texture may not be suitable for such visualizations.

Optimizing perception of multiple, non-intersecting surfaces involves selecting a good combination of color, transparency, and texture for the upper and lower surfaces. House et al.9 describe a general approach for optimizing surface perception of a visualization problem using a solution space whose dimensions are the visual parameters. They use a genetic algorithm to search the space for "promising regions". Their solution space considers the texture parameters orientation, transparency, density of pattern, regularity of pattern, softness, foreground and background color, size, and stroke size. Their results indicate that the topmost surface should have a see-through attribute of 50% to 70% (i.e., the percentage of the total texture that is fully transparent). The visualizations where perception was most successful had dense textures on the bottom surface and sparse textures on the top.

Most previous work uses single texture assignment with possibly variable transparency to display overlapping surfaces. Very little attention has been given to the effect of surface intersection on perception. However, many applications involving the comparison of scalar data sets on the same spatial domain generate multiple intersecting surfaces. Our work explores the use of different rendering attribute assignments to portions of a surface to convey relative position and intersection information.

The remainder of this paper is organized as follows. We introduce the concept of surface stratification in Section 2 and outline how rendering attributes may be effectively assigned. Section 3 describes our implementation. Section 4 presents some preliminary results. We conclude and discuss further research in Section 5.

2. SURFACE STRATIFICATION

The lack of effective display techniques for intersecting surfaces is the primary motivation for this research. Scientists have performed experiments in pursuit of optimal 3D surface rendering attributes for two surfaces8,9, but rarely for three or more. These experiments do not consider intersections of the surfaces, and therefore do not consider optimal rendering attributes when surfaces transition from foreground to background.

Fig. 1 demonstrates the difficulties in visualizing three intersecting surfaces. Each surface is assigned a distinct hue, luminance, and texture pattern. The dark grid surface can be perceived in its entirety, except when it is the bottommost surface. The yellow hoop surface is difficult to perceive when it is not the topmost surface. The blue diamond surface is generally very difficult to perceive due to a small luminance difference between the yellow surface and the background.

Figure 1. Visualization of three intersecting surfaces. Each surface is assigned a unique texture pattern, hue, and luminance.

An approach to improving the perception of multiple surfaces is to separate the portions of each surface that lie above or below the other surface(s). This separation of surfaces, or stratification, permits the tuning of rendering attributes for each portion (or stratum) of each surface. The strata are assigned individual rendering attributes, including texture pattern, opacity, and color.

Fig. 2 shows two distinct surfaces (A and B), each assigned a unique color for rendering. Fig. 2C displays the union of these two surfaces within an aligned domain. Since both surfaces are opaque, the majority of the green surface is occluded by the blue surface. Fig. 2D uses distinct rendering attributes depending on the surface and its position relative to the other surface. In this example, a surface is rendered with a colored circular see-through texture when it is above the other surface, and with a solid color when it is below. In other words, stratum 1 of each surface is rendered with a solid color and stratum 2 is rendered with a colored circular see-through texture.

When two surfaces are overlaid as in Fig. 2C and Fig. 2D, there are two possible stratifications (surface 1 over surface 2 and surface 2 over surface 1). The boundaries of individual strata are determined by the lines of intersection of the surfaces. Four types of strata are created from these intersections: 2 strata for surface 1 and 2 strata for surface 2. As a result, four sets of rendering attributes must be specified. The technique proposed in this paper allows each set of rendering attributes to be individually optimized to enhance overall perception of the surfaces.

As the number of surfaces increases, the number of sets of rendering attributes grows rapidly. For n surfaces, there are n! stratifications depending on the order of overlap of the individual surfaces. To reduce complexity and to help the viewer deduce surface order from rendering attributes, we assign a unique hue to each surface and use texture pattern to convey order. This reduces the number of distinct rendering attribute sets from n! to n². In addition, we use luminance variation, lighting, transparency, and gel effects to enhance perception of the individual surfaces.

Fig. 3 illustrates the strata created when three surfaces intersect and are stratified. Each surface is rendered with a unique hue. In this example, the texture type is fixed based on surface level: stratum 1 of each surface is rendered with a solid color, stratum 2 with a thick grid, and stratum 3 with a colored circular see-through texture. The example in Fig. 3 does not use any perceptual enhancements such as luminance variation, lighting, or transparency.
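As an illustration of this reduction, the following minimal sketch (our own, not the authors' implementation; the attribute names and values are illustrative assumptions) indexes rendering attributes by a (surface, stratum) pair, with hue tied to the surface and texture pattern tied to the stratum:

```python
# Hypothetical (surface, stratum) attribute lookup: hue is fixed per surface,
# texture pattern is fixed per stratum, so only n*n attribute sets are needed
# rather than one set per possible surface ordering.
from dataclasses import dataclass

@dataclass
class RenderAttributes:
    hue: str        # constant per surface
    texture: str    # constant per stratum (conveys relative order)
    opacity: float  # tunable per (surface, stratum) pair

SURFACE_HUES = {1: "green", 2: "red", 3: "blue"}                 # one hue per surface
STRATUM_TEXTURES = {1: "solid", 2: "thick grid", 3: "see-through circles"}

def attributes(surface_id: int, stratum: int) -> RenderAttributes:
    """Look up the rendering attributes for one stratum of one surface."""
    return RenderAttributes(
        hue=SURFACE_HUES[surface_id],
        texture=STRATUM_TEXTURES[stratum],
        opacity=1.0 - 0.3 * (stratum - 1),  # illustrative: upper strata more transparent
    )

# Example: the topmost stratum of surface 2.
print(attributes(2, 3))
```

With this scheme, only the n-by-n table entries need to be tuned, regardless of how the surfaces happen to be ordered in any particular region of the domain.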

Figure 2. Example of stratification of two surfaces. A. Surface 1 rendered with an unlit solid green color. B. Surface 2 rendered with an unlit solid blue color. C. Both surfaces overlapping. D. Both surfaces overlapping and stratified with top surface rendered with an unlit colored see-through circle texture.

Figure 3. Three surfaces stratified. Each surface is rendered with a unique hue. Each stratum is rendered with a unique texture.

3. IMPLEMENTATION

Fig. 3 illustrates the different relationships that occur when three surfaces intersect. The boundaries of the portions of a surface that lie above and below another surface are determined by the lines of intersection. The terms "above" and "below" are based on comparisons of the scalar fields represented by the individual surfaces. We assume that the scalar fields are aligned on a domain represented by a rectangular grid. We triangulate this grid using the main diagonal of each sub-rectangle and treat the resulting triangles as separate domains. Each triangle is further subdivided based on the surface intersections that fall within its boundaries, as illustrated in Fig. 4. This "divide and conquer" approach simplifies the overall implementation, as our intersection algorithm expects triangles as input and our rendering interface uses triangles as a basic drawing primitive.
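The following minimal sketch (ours, not the authors' code) shows one way the grid could be triangulated along a diagonal of each sub-rectangle; the row-major vertex indexing and the choice of diagonal are assumptions:

```python
import numpy as np

def triangulate_grid(nx: int, ny: int) -> np.ndarray:
    """Split an nx-by-ny grid of samples into triangular cells by cutting each
    sub-rectangle along one diagonal. Returns vertex-index triples assuming
    row-major vertex numbering."""
    tris = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            v00 = j * nx + i      # lower-left vertex of the cell
            v10 = v00 + 1         # lower-right
            v01 = v00 + nx        # upper-left
            v11 = v01 + 1         # upper-right
            tris.append((v00, v10, v11))  # triangle on one side of the diagonal
            tris.append((v00, v11, v01))  # triangle on the other side
    return np.asarray(tris, dtype=np.int64)

print(triangulate_grid(3, 3))  # a 3x3 grid of samples yields 8 triangular cells
```

Because every scalar field is sampled on the same grid, the corresponding triangles of all surfaces share the same (x, y) footprint and can be compared cell by cell.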

Figure 4. The division of the visualization domain into triangular cells that are treated as separate domains.

Figure 5. A. Initial triangulation and labeling. B. Decomposition based on surface intersections within the triangular region. C. Stratification of 3 surface triangles within a single triangular cell.

Within each cell, the first task is the initial labeling of the surfaces' triangles (see Fig. 5A). This is accomplished with a basic extent-sorting algorithm ordered by the minimum vertical value in each triangle. The bottommost surface is labeled 1, the next surface is labeled 2, and so on. This process guarantees that any surface that does not intersect another surface will be labeled with its proper stratification value.

Next, the triangles of the surfaces are tested for intersection using a modified version of Tomas Möller's triangle-triangle intersection test10. If an intersection occurs, the triangles involved are decomposed into smaller triangles until no further intersections occur. Fig. 5B shows the decomposition of the red and green surface triangles at their line of intersection. The resulting stratification is shown in Fig. 5C. A non-decomposed triangle's stratum value is equal to its initial label. A decomposed triangle's stratum value is based upon its initial label and which side of the decomposing triangle it is on. The orientation of the decomposing triangle also influences the stratum value of the decomposed triangle.

During rendering, each set of rendering attributes (texture, color, opacity, etc.) for a triangle (decomposed or otherwise) is determined by the triangle's parent surface and stratification label. Referring back to the stratification example in Fig. 5, the portion of the blue surface that lies on the bottom could be rendered using a solid hue, the portion that lies in the middle could be rendered using a thick grid texture, and the portion that lies on the top could be rendered using a transparent circular texture. The application of this strategy to all three surfaces results in Fig. 3.
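The sketch below (ours; the data layout and names are illustrative assumptions) shows only the initial extent-sort labeling within a single cell; the triangle-triangle intersection test and the decomposition into smaller triangles are omitted:

```python
from dataclasses import dataclass

@dataclass
class SurfaceTriangle:
    surface_id: int
    vertices: list      # three (x, y, z) tuples
    stratum: int = 0    # assigned by the labeling pass below

def label_initial_strata(cell_triangles):
    """Extent-sort the surface triangles of one cell by their minimum height and
    label them 1 (bottom), 2, ... upward. Triangles that intersect no other
    surface keep this label as their final stratum value."""
    ordered = sorted(cell_triangles, key=lambda t: min(v[2] for v in t.vertices))
    for label, tri in enumerate(ordered, start=1):
        tri.stratum = label
    return ordered

# Example: three flat triangles from different surfaces at different heights.
cell = [
    SurfaceTriangle(3, [(0, 0, 2.0), (1, 0, 2.0), (0, 1, 2.0)]),
    SurfaceTriangle(1, [(0, 0, 0.5), (1, 0, 0.5), (0, 1, 0.5)]),
    SurfaceTriangle(2, [(0, 0, 1.2), (1, 0, 1.2), (0, 1, 1.2)]),
]
for tri in label_initial_strata(cell):
    print("surface", tri.surface_id, "-> stratum", tri.stratum)
# Intersecting triangles would next be split along their line of intersection
# (e.g., using Moller's triangle-triangle test) and each piece relabeled.
```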

4. EXPERIMENTATION

In order to test the effect of rendering attributes on perception, we have constructed a series of experimental visualizations of three 2D scalar fields based on simple mathematical functions: a sine, a cosine, and a parabolic equation. In all of the experiments described in this paper, each surface is assigned a constant hue. We explore two texture assignment strategies: texture pattern fixed by level, as illustrated in Fig. 3, and texture shape fixed by surface with stroke thickness, luminance, and transparency fixed by level. While research suggests that texture stroke direction plays a part in perceiving surface contour,11,12 we selected grid, diamond, and circular texture patterns to better differentiate the surfaces from each other. The background is rendered grey at 70% luminance.
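The paper does not give the exact functional forms or scaling of the three test fields, so the following is only an illustrative sketch of how three such fields might be sampled on a shared grid:

```python
import numpy as np

# Sample three scalar fields over the same rectangular domain; the functional
# forms and scaling below are assumptions made for illustration.
x, y = np.meshgrid(np.linspace(-np.pi, np.pi, 64),
                   np.linspace(-np.pi, np.pi, 64))

sine_field      = np.sin(x) * np.sin(y)              # "sine" surface (green grid)
parabolic_field = 1.0 - (x**2 + y**2) / np.pi**2     # "parabolic" surface (red circles)
cosine_field    = np.cos(x) * np.cos(y)              # "cosine" surface (blue diamonds)

# Each field is rendered as a height surface over the shared domain, so the
# three surfaces intersect wherever their field values cross.
```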

            Stratum 1 (bottom)   Stratum 2 (middle)   Stratum 3 (top)
Sine        Texture: Grid        Texture: Grid        Texture: Grid
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Green           Hue: Green           Hue: Green
            Luminance: High      Luminance: High      Luminance: High
            Opacity: High        Opacity: Medium      Opacity: Low

Parabolic   Texture: Circles     Texture: Circles     Texture: Circles
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Red             Hue: Red             Hue: Red
            Luminance: Medium    Luminance: Medium    Luminance: Medium
            Opacity: High        Opacity: Medium      Opacity: Low

Cosine      Texture: Diamonds    Texture: Diamonds    Texture: Diamonds
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Blue            Hue: Blue            Hue: Blue
            Luminance: Low       Luminance: Low       Luminance: Low
            Opacity: High        Opacity: Medium      Opacity: Low

Table 1. The rendering attributes for the visualization shown in Fig. 6 and Fig. 7.

In the first experiment, the only rendering attribute that varies per stratum is texture stroke thickness. Table 1 summarizes the rendering attribute assignment used for Fig. 6. Four areas of particular interest have been enlarged in Fig. 7. The green and red surfaces are somewhat difficult to perceive in Fig. 7A; however, the change in texture stroke width assists in perceiving the switch in strata positions. It is also difficult to perceive the blue middle surface when it lies below the red top surface. The blue and red surfaces are easy to perceive in Fig. 7B, but the green bottom surface is difficult to perceive. All three surfaces are easy to perceive in Fig. 7C. All three surfaces are fairly easy to perceive in Fig. 7D, and the change in texture stroke width aids in the perception of the strata transitions.

Figure 6. The three test surfaces with texture stroke thickness as the only rendering attribute to vary per stratum.

Figure 7. Detail of the four areas of interest in Fig. 6. A. Red transitions from middle to top, green transitions from top to middle to bottom, and blue transitions from bottom to middle. B. Blue on top, red in middle, and green on bottom. C. Blue on top, green in middle, and red on bottom. D. Green transitions from middle to top, blue transitions from top to middle to bottom, and red transitions from bottom to middle.

The next experiment expands upon the previous visualization by using alternating luminance values for the strata. Luminance contrast is an important factor in communicating detail with color.13 The bottom and top strata use a low luminance color. The middle stratum uses a high luminance color. As with all experiments, hue is constant for each surface. Table 2 summarizes the rendering attributes in the experiment. The result of the experiment is shown in Fig. 8. The same four areas from the previous experiment have been enlarged in Fig. 9.
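A small sketch (ours; the exact luminance levels used in the paper are not specified, so the values here are assumptions) of alternating luminance per stratum while holding hue constant per surface:

```python
import colorsys

# Low luminance for the bottom and top strata, high luminance for the middle.
STRATUM_LIGHTNESS = {1: 0.30, 2: 0.70, 3: 0.30}

def stratum_color(surface_hue: float, stratum: int, saturation: float = 0.9):
    """Return an RGB triple with the surface's constant hue and the stratum's lightness."""
    return colorsys.hls_to_rgb(surface_hue, STRATUM_LIGHTNESS[stratum], saturation)

# Example: the red surface (hue ~ 0.0) at each of the three strata.
for s in (1, 2, 3):
    print("stratum", s, tuple(round(c, 2) for c in stratum_color(0.0, s)))
```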

            Stratum 1 (bottom)   Stratum 2 (middle)   Stratum 3 (top)
Sine        Texture: Grid        Texture: Grid        Texture: Grid
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Green           Hue: Green           Hue: Green
            Luminance: Low       Luminance: High      Luminance: Low
            Opacity: High        Opacity: Medium      Opacity: Low

Parabolic   Texture: Circles     Texture: Circles     Texture: Circles
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Red             Hue: Red             Hue: Red
            Luminance: Low       Luminance: High      Luminance: Low
            Opacity: High        Opacity: Medium      Opacity: Low

Cosine      Texture: Diamonds    Texture: Diamonds    Texture: Diamonds
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Blue            Hue: Blue            Hue: Blue
            Luminance: Low       Luminance: High      Luminance: Low
            Opacity: High        Opacity: Medium      Opacity: Low

Table 2. The rendering attributes for the visualization shown in Fig. 8 and Fig. 9.

Figure 8. The three test surfaces with texture stroke thickness and luminance as the rendering attributes varied per stratum.

Figure 9. Detail of the four areas of interest in Fig. 8.

The green and red surfaces are easier to perceive in Fig. 9A due to larger differences in the luminance values between the top and middle strata. However, the similarity in luminance values between the top and bottom surfaces hinders perception. It is easy to perceive the blue top surface in Fig. 9B, and it is somewhat easy to perceive the red middle surface, but it is very difficult to perceive the green bottom surface. All three surfaces are reasonably easy to perceive in Fig. 9C, although the blue surface has a luminance value similar to that of the red surface. All three surfaces are fairly easy to perceive in Fig. 9D, except for the blue top surface above the red bottom surface.

The third experiment uses the same texture stroke width for the middle and top strata but blends the middle layer with a gel effect to create a mostly transparent, but not see-through, texture. Luminance is alternated between the strata as in the second experiment. Table 3 summarizes the rendering attributes in the experiment. The result of the experiment is shown in Fig. 10. The same four areas from the previous experiments have been enlarged in Fig. 11.
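A rough compositing sketch (ours, with illustrative colors) of the distinction drawn above: a see-through texture leaves the gaps between strokes fully transparent, whereas the gel layer covers every texel with a low, uniform opacity:

```python
import numpy as np

def composite_over(src_rgb, src_alpha, dst_rgb):
    """Standard 'over' alpha compositing of a source color onto a destination color."""
    return src_alpha * np.asarray(src_rgb) + (1.0 - src_alpha) * np.asarray(dst_rgb)

background = np.array([0.7, 0.7, 0.7])   # grey background at 70% luminance
red_stroke = np.array([0.8, 0.2, 0.2])   # illustrative stroke color

# See-through circles: stroke texels are drawn opaquely, gaps show what lies below.
print(composite_over(red_stroke, 1.0, background))   # on a stroke
print(background)                                    # between strokes

# Gel: every texel of the middle layer is blended at low alpha over what lies below.
print(composite_over(red_stroke, 0.25, background))
```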

            Stratum 1 (bottom)   Stratum 2 (middle)   Stratum 3 (top)
Sine        Texture: Grid        Texture: Grid        Texture: Grid
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Green           Hue: Green           Hue: Green
            Luminance: Low       Luminance: High      Luminance: Low
            Opacity: High        Opacity: Low (gel)   Opacity: Low

Parabolic   Texture: Circles     Texture: Circles     Texture: Circles
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Red             Hue: Red             Hue: Red
            Luminance: Low       Luminance: High      Luminance: Low
            Opacity: High        Opacity: Low (gel)   Opacity: Low

Cosine      Texture: Diamonds    Texture: Diamonds    Texture: Diamonds
            Stroke: Thick        Stroke: Thin         Stroke: Thinner
            Hue: Blue            Hue: Blue            Hue: Blue
            Luminance: Low       Luminance: High      Luminance: Low
            Opacity: High        Opacity: Low (gel)   Opacity: Low

Table 3. The rendering attributes for the visualization shown in Fig. 10 and Fig. 11.

Figure 10. The three test surfaces with texture stroke thickness, luminance, and gel effect determined by the stratum value.

Figure 11. Detail of the four areas of interest in Fig. 10.

All three surfaces are fairly easy to perceive in Fig. 11A. The higher overall luminance of the middle stratum helps the top stratum stand out from the bottom stratum. However, the green top layer is difficult to see above the blue bottom layer. The red and blue surfaces are easy to perceive in Fig. 11B, and the green bottom surface is fairly easy to perceive. All three surfaces are easy to perceive in Fig. 11C and Fig. 11D.

5. CONCLUSION

The successful visualization of multiple, intersecting scalar fields within the same domain may assist viewers in the exploration and understanding of their data. Rendering these surfaces within the same visualization is a problematic task. However, the fine-tuning of surface rendering attributes based on the stratification of the surfaces may improve viewer perception of the individual surfaces and the relationships between them.

In initial experiments, three intersecting surfaces were rendered using different sets of rendering attributes for the individual surface strata. Overall perception of the three surfaces was possible, although each visualization exhibited at least one perceptual problem. Further tuning of the individual rendering attributes may produce additional clarity in perception. It is important to note that the predictability and regularity of the test surfaces made it simpler to intuit the overall surface shape. For irregular data sets, sudden upheavals in a surface might cause the viewer to confuse portions of surfaces that share similar texture patterns. In such cases, a different rendering attribute (e.g., hue or luminance) may help provide distinction. Subjective factors also play a role in successful perception.14 Granting the user interactive control over the rendering attributes may aid perception. Coating the surface textures with a random-grained texture may also enhance surface shape perception.15 Additional user studies are clearly necessary to establish the effect of user control in optimizing perception and to quantify the variability of the best rendering techniques across users. We plan to conduct a user study assessing this variability.

The decomposition and stratification of surfaces, while not computationally intensive for offline rendering, are currently not well suited to real-time rendering. The stratification of the surface triangles is an inherently parallel problem, and additional work will be devoted to offloading a portion of the stratification computation to the GPU. Lastly, procedural generation of the surface textures will allow us to vary texture attributes based on differences between the surfaces' scalar values.

REFERENCES

1. P. A. Ferry and J. W. Buchanan, "Icon density displays for multivariate data visualization," Proc. SKIGRAPH, 1999.
2. C. Weigle, W. Emigh, G. Liu, R. M. Taylor II, J. T. Enns, and C. G. Healey, "Oriented sliver textures: A technique for local value estimation of multiple scalar fields," Proc. Graphics Interface, Montreal, Canada, pp. 163–170, 2000.
3. A. Bokinsky, "Data-driven spots," PhD thesis, Univ. of North Carolina at Chapel Hill, 2003.
4. F. Metelli, "The perception of transparency," Scientific American, April, pp. 90–97, 1974.
5. D. Dooley and M. F. Cohen, "Automatic illustration of 3D geometric models: Surfaces," Proc. IEEE Visualization, pp. 307–131, 1990.
6. P. Rheingans, "Opacity-modulating triangular textures for irregular surfaces," Proc. IEEE Visualization, pp. 219–225, 1996.
7. V. Interrante, H. Fuchs, and S. M. Pizer, "Enhancing transparent skin surfaces with ridge and valley lines," Proc. IEEE Visualization, pp. 52–59, 1995.
8. V. Interrante, H. Fuchs, and S. M. Pizer, "Conveying the 3D shape of smoothly curving transparent surfaces via texture," IEEE Transactions on Visualization and Computer Graphics, 3(2), pp. 98–117, 1997.
9. D. House and C. Ware, "A method for the perceptual optimization of complex visualizations," Proc. Advanced Visual Interfaces, Trento, Italy, pp. 148–155, 2002.
10. T. Möller, "A fast triangle-triangle intersection test," Journal of Graphics Tools, 2(2), pp. 25–30, 1997.
11. A. Girshick, V. Interrante, S. Haker, and T. Lemoine, "Line direction matters: An argument for the use of principal directions in 3D line drawings," Proc. NPAR, pp. 43–52, 2000.
12. A. Girshick and V. Interrante, "Real-time principal direction line drawings of arbitrary 3D surfaces," Visual Proc. SIGGRAPH, p. 271, 1999.
13. C. Ware, Information Visualization: Perception for Design, Morgan Kaufmann, San Francisco, CA, 2004.
14. P. Keller and M. Keller, Visual Cues: Practical Data Visualization, IEEE Computer Society Press, Los Alamitos, CA, 1993.
15. G. Sweet and C. Ware, "View direction, surface orientation and texture orientation for perception of surface shape," Proc. Graphics Interface, pp. 97–106, 2004.
