TAn2 - Visualization of Large Irregular Volume Datasets

Paolo Cignoni (1), Leila De Floriani (2), Paola Magillo (2), Enrico Puppo (2), Roberto Scopigno (3)

(1) Istituto di Elaborazione dell'Informazione – Consiglio Nazionale delle Ricerche
Pisa Research Area – Via Alfieri, 1, 56010, Ghezzano (Pisa), Italy
Email: [email protected]

(2) Dipartimento di Informatica e Scienze dell'Informazione – Università di Genova
Via Dodecaneso, 35, 16146 Genova, Italy
Email: {deflo,magillo,puppo}@disi.unige.it

(3) CNUCE – Consiglio Nazionale delle Ricerche
Pisa Research Area – Via Alfieri, 1, 56010, Ghezzano (Pisa), Italy
Email: [email protected]

Abstract In this paper, we present TAn2 (Tetrahedra Analyzer), an interactive system for the visualization of three-dimensional scalar fields. The system has been designed to overcome some of the problems posed by very large volumetric datasets, which arise from limitations in memory and in the performance of rendering algorithms and devices. The adopted solution exploits a multiresolution representation based on a tetrahedral domain decomposition. We have developed a new compact data structure for efficiently encoding the whole dataset at a virtually continuous range of resolutions. This structure supports the efficient extraction of simplified meshes that approximate the original dataset and are used for rendering. The mesh to be rendered can be selectively refined over areas that the user considers more critical, on the basis of either field values or domain locations. Extraction criteria are determined based on the current viewing modality and on the physical limitations of the system. The system supports different rendering techniques, such as isosurface fitting and Direct Volume Rendering (DVR). It also provides an advanced tool for the interactive design of the transfer function mapping field values into color and transparency parameters.

1 Introduction Several applications need to deal with huge sets of three-dimensional data describing scalar fields. Examples can be found in scientific visualization, medical imaging, computer-aided surgery, finite element analysis, etc. A volume dataset consists of a set of points in 3D space, each holding a field value, and a mesh (either regular or irregular) spanning the data domain and formed by cells having their vertices at data points. In this paper, we address the case of datasets having irregular distributions and non-convex domains, which are represented by tetrahedral meshes. The critical size for classifying a dataset as extremely large depends on the time and processing resources available for data generation (consider, for example, datasets resulting from intensive computer simulations) and on the processing resources available for data analysis. In the context of visual analysis and visualization, we generally consider as extremely large those datasets that do not fit in the memory of contemporary high performance graphics workstations, or that make interactive visualization unfeasible. Space requirements and rendering time for a volume mesh are proportional to its resolution, i.e., to the density of the underlying point set. When the mesh is too large to be either rendered or stored within the hardware constraints, the data must be brought to a more manageable size prior to visualization. Data simplification has become popular in the last few years for solving analogous problems in surface and scene rendering. This approach has recently been extended to volume data as well. The general idea is to reduce the resolution of the input mesh, while preserving as much as possible its ability to approximate the original mesh at high


resolution. In many cases, it is very useful to apply simplification selectively, e.g., only in a portion of the domain, or only in the proximity of interesting field values. This process is usually called selective refinement. Tetrahedral meshes have a high degree of adaptivity that makes them suitable for simplification and selective refinement. Mesh simplification, however, is too time-consuming to be performed on-line. A more efficient approach consists of decoupling the simplification phase from the selective refinement phase. This has led to the development of multiresolution models. Such models essentially encode the simplification steps as a partial order, from which a virtually continuous set of meshes at different Levels Of Detail (LODs) can be extracted, based on efficient graph traversal strategies. While several approaches have been proposed for the dynamic multiresolution management of surfaces in 3D space (see [PS97, Gar99] for recent surveys), we are not aware of any similar work for three-dimensional scalar fields. In [CMPS97], we proposed a volume visualization system based on tetrahedral meshes, which supported only limited multiresolution functionalities (no selective refinement), and was based on a data structure causing a significant overhead in main memory. Our new system, called TAn2 (Tetrahedra Analyzer 2), is designed explicitly to overcome some of the issues that arise when visualizing extremely large volumetric data. The system is based on a multiresolution model that specializes, to the three-dimensional case, the dimension-independent model proposed in [DFPM97], called the Multi-Tesselation (MT). We have designed and developed a new compact data structure for the 3D Multi-Tesselation, specific to the three-dimensional case and to a simplification strategy based on edge collapse.
Such a structure not only introduces no memory overhead, but even achieves good compression with respect to the original mesh at the highest resolution; it is also a novel contribution of this paper. Selective refinement can be performed efficiently on the MT. On this basis, TAn2 exploits multiresolution not only to reduce data complexity uniformly, but also to vary the resolution across the data domain. Selective refinement can be based either on the levels of the isosurfaces to be rendered, or on a spatial location within the domain. These features allow the user to focus on the areas that are considered more critical. Although all the techniques proposed and implemented in our system are general and easily portable to any high performance platform, one of our major objectives is to enable a low-cost PC platform to perform those data management and visualization features that are nowadays the realm of more costly graphics workstations. In addition to the multiresolution features that allow the system to manage large tetrahedral models, TAn2 supports different rendering capabilities, such as isosurface fitting and Direct Volume Rendering (DVR), and contains an advanced tool for the interactive design of the transfer function mapping field values into color and transparency parameters. The paper is organized as follows. In Section 2, we overview the state of the art. In Section 3, we quantify the constraints imposed by current hardware and software technologies on the sizes of volumetric datasets for efficient rendering. In Section 4, we explain how these constraints have been overcome in the TAn2 system by using a multiresolution approach, and describe the system architecture. In Section 5, we introduce the multiresolution model at the basis of the system, the Multi-Tesselation.
In Section 6, we describe the algorithms used for building an MT from a three-dimensional data set, and for extracting meshes at a variable LOD from an MT. In Section 7, we explain the use of MT operations inside TAn2. In Section 8, we show some results and images obtained with TAn2. Finally, in Section 9, we draw some concluding remarks and outline future developments.

2 Related Work The problem of simplifying a mesh has been extensively studied for triangle meshes (see, e.g., surveys in [HG97, PS97, Gar99]). Most successful methods are based on incremental techniques, which perform a sequence of atomic modifications on a given mesh by either removing details from a mesh at high resolution, or adding details to a coarse mesh. This approach is especially interesting for constructing multiresolution models, since the intermediate steps of the process produce meshes at decreasing (or increasing) LODs. Some incremental techniques have been extended to the three-dimensional case for simplification of tetrahedral meshes [CMPS97, CCM+ 00, GG98, RO96, THJ99]. Refinement techniques are based on the incremental insertion of vertices [CDFM+ 94, CMPS97, GLE97, GG98, HC94, ZCK97]. However, these methods have the drawback of imposing restrictions on the shape of the domain, due to the need of defining an initial coarse mesh as a starting point for the refinement process. Techniques based on the progressive simplification of an initial mesh at high resolution are generally based on one of two possible local operators: vertex removal and edge collapse. While vertex removal works well in the two-


dimensional case, its extension to three dimensions is problematic: the polyhedral hole left by the removal of a vertex does not always admit a triangulation filling its interior [BE92]. Moreover, the problem of deciding whether a polyhedron can be tetrahedralized is NP-hard [RS89]. In [RO96], a method is proposed that tries to fill the hole and, in case of failure, does not remove the vertex. The edge collapse operator [Hop96] seems more promising for simplifying tetrahedral meshes. Two variants exist: in half edge collapse, the edge is collapsed onto one of its endpoints; in general edge collapse, the edge is collapsed onto an arbitrary point inside the edge itself (possibly its midpoint). Collapse operations are not always feasible: each operation must be checked against intersection and inversion of tetrahedra before being performed. Gross and Staadt [GS98] present a technique to check and prevent intersection and inversion of tetrahedra, as well as various cost functions to drive a sequence of general edge collapses in a tetrahedralization. Cignoni et al. [CMPS97] propose an algorithm based on half edge collapse, in which the simplification process is driven by a combination of the geometric error introduced in simplifying the shape of the domain and of the error introduced in approximating the scalar field with fewer points. This approach has been extended in [CCM+ 00] by defining a framework for the unified management of the two errors (related to the geometric domain and the scalar field) and by proposing techniques for an efficient evaluation and forecast of these errors. Trotts et al. [THJ99] also use half edge collapse. They control the quality of the simplified mesh by estimating the deviation of the simplified scalar field from the original one, and by predicting the deviation increase caused by a collapse. They also provide a mechanism to bound the deformation of the domain boundary.
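The inversion check mentioned above can be sketched with a signed-volume test: a collapse that moves a vertex must not flip the orientation of any surviving tetrahedron incident in it. The following is a minimal illustration under our own naming; it is not taken from any of the cited systems:

```cpp
// A minimal sketch (illustrative names) of the inversion test used to
// validate a half edge collapse: moving vertex 'from' onto 'to' must not
// flip the orientation of any surviving tetrahedron incident in 'from'.
struct Vec3 { double x, y, z; };

// Signed volume of tetrahedron (a,b,c,d), times six: the triple product
// (b-a) . ((c-a) x (d-a)).
double signedVolume6(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d) {
    double ux = b.x - a.x, uy = b.y - a.y, uz = b.z - a.z;
    double vx = c.x - a.x, vy = c.y - a.y, vz = c.z - a.z;
    double wx = d.x - a.x, wy = d.y - a.y, wz = d.z - a.z;
    return ux * (vy * wz - vz * wy)
         - uy * (vx * wz - vz * wx)
         + uz * (vx * wy - vy * wx);
}

// True if replacing vertex 'from' with 'to' keeps the tetrahedron
// (from, b, c, d) positively oriented, i.e. the collapse does not invert it.
bool collapseKeepsOrientation(const Vec3& from, const Vec3& to,
                              const Vec3& b, const Vec3& c, const Vec3& d) {
    double before = signedVolume6(from, b, c, d);
    double after  = signedVolume6(to,   b, c, d);
    return before > 0.0 && after > 0.0;
}
```

In a full simplifier this test would be run over every tetrahedron incident in the collapsing vertex (excluding those that disappear with the edge), rejecting the collapse on the first failure.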
Multiresolution models for volume data are also obtained by extending methods developed for the two-dimensional case. Simple models encode a coarse mesh plus a linear sequence of updates that can be applied to progressively refine it [CDFM+ 94, CMPS97, PH97, GS98]. These models support the extraction of a mesh only at those intermediate resolutions that can be obtained by truncating the sequence of refinements at some point; we will therefore call them linear models. Some linear models also require updates to be performed in batches, where each batch affects the whole domain uniformly [BDFM95, PRS99], thus providing just a discrete set of levels of detail. More sophisticated models can also support a continuous level of detail based on selective refinement. There exist a few proposals of multiresolution models for the special case of regular datasets [OR99, WvG94, ZCK97, DFLS00], as well as more general models that are also suitable for irregular datasets, and are obtained by organizing the updates in a DAG [DFPM97, Mag00]. The effectiveness of a multiresolution model is highly dependent on its storage requirements. An efficient representation for a general model based on a hierarchy of local updates is achieved through an efficient encoding of each mesh update, and an efficient encoding of the dependency relations among updates. Several methods have been proposed for efficiently encoding updates on triangle meshes, while only a few proposals exist for tetrahedral meshes. Pajarola et al. [PRS99] have proposed an encoding method for tetrahedral meshes simplified through general edge collapses, called Implant Sprays; they show an average cost of about 19 bits per update, plus a compressed displacement vector; each update refines the current mesh by splitting a vertex into an edge. They assume that updates are performed in batches, in which a set of independent vertices are split simultaneously; this limits the model to discrete multiresolution.
A compact encoding of generalized vertex splits has been proposed by Popovic and Hoppe in [PH97], which can provide continuous LODs. However, they are concerned with a more general situation, in which an update can affect the topology, as well as the regularity, of the mesh. The method proposed in this paper also provides continuous LODs, and is optimized for encoding tetrahedral meshes. Dependencies between updates can be represented directly either with a DAG [DFPM97, GTLH98, Mag00], or with similar structures [XESV97]: such structures, however, may be too heavy for extremely large datasets. More compact structures based on binary trees of vertices [MMS97, Hop97, LE97] may be insufficient and/or redundant with respect to the natural notion of dependency (see [DFMP99] for a discussion). For the special case of general edge collapse in 2D, a data structure has recently been proposed by El-Sana and Varshney [ESV99], which can represent the actual dependency relations by storing just a forest of binary trees and adopting a suitable numbering scheme for the vertices. In this work, we use a similar structure for encoding a multiresolution volume model based on general edge collapse.


3 Constraints on Data Complexity In this section, we introduce some practical figures on data complexity that are crucial to achieving good performance on a hypothetical low-cost workstation assumed as the target architecture. Our aim is to fix some bounds on the size of the dataset that we can manage on such an architecture in the context of an interactive session of data analysis and visualization. We assume a PC architecture equipped with 512MB RAM (this is now standard for a machine devoted to intensive graphics and visualization) and a graphics board with a sustained throughput of 1M shaded triangles/sec (a peak performance of 10M shaded triangles/sec is already offered by recent graphics boards priced around $200; however, it is realistic to assume a sustained throughput about one order of magnitude lower than this). We consider application contexts which:

- Manage datasets based on a tetrahedral decomposition of 3D space, with scalar/vector data associated with the mesh geometry;
- Support sophisticated rendering features, such as Direct Volume Rendering (DVR), isosurface fitting and cross-slice rendering;
- Require a reasonably high degree of interactivity, which we quantify in a frame rate of (at least) 10 fps.

In the following, we evaluate the maximal scene complexity that can be managed on the target architecture, in terms of the maximum number of points in the input dataset: this number is bounded by both storage and graphics constraints. Storage constraints. The complexity of a volume dataset based on a simplicial decomposition can be expressed in terms of its elementary components. Let n, m and b be the number of vertices, tetrahedra and boundary triangles of a tetrahedral mesh, respectively. For a real dataset, it is reasonable to assume that m ≈ 6n and b ≈ 12n^(2/3) [GGS99]. In the following, we will express all our figures in terms of n, by assuming that the above approximations hold for m and b. We assume that a single scalar value is stored for each vertex (however, datasets with multiple values per vertex are also common). Field values and vertex coordinates are stored in words of size w = 4 bytes (both integer and floating point values). Therefore, the minimal information for a single vertex requires three words for its coordinates and one word for the field value, for a total of 4w = 16 bytes per vertex. Under this assumption, a representation of a tetrahedral mesh that keeps only the vertices and the relation associating with each tetrahedron its four vertices (usually called an indexed data structure) requires

4wn + 4wm ≈ 4wn + 24wn = 28wn = 112n bytes. In order to support isosurface rendering, we should also keep the per-vertex normal of each vertex of the mesh. Following Deering [Dee95], a vertex normal can be compressed into less than one word, giving a total storage cost of 116n bytes for an indexed data structure with vertex normals. An explicit representation of topology is often needed. Examples of visualization-related tasks that need it are the detection of the boundary of the mesh, and the computation of the depth sort in DVR mode. A data structure that encodes, for each tetrahedron, an explicit reference to its four face-adjacent tetrahedra requires an additional 4wm + m ≈ (24w + 6)n = 102n bytes, hence a total of 218n bytes. We will call this structure a topological data structure. Efficient DVR engines also need auxiliary data structures that further increase memory requirements. Without going into details, we can estimate that a data structure supporting the techniques described in [CKM+ 99, ST90, Wil92a] for sorting and compositing tetrahedral cells (e.g., a BSP-tree built on the mesh boundary, a directed acyclic graph, the plane equations of the internal faces of the complex, the per-vertex color, etc.) requires about 500n bytes. In summary, we can store in the target RAM (512MB) a mesh of size:

- about 4M vertices if the indexed data structure (with or without normal vectors) is sufficient (e.g., if just isosurfaces and cross sections are supported);
- about 2M vertices if a topological data structure is maintained (e.g., for simple DVR);
- about 1M vertices if auxiliary structures are also maintained (for fast DVR).
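The byte counts above can be reproduced with a minimal indexed data structure. The sketch below is illustrative (the struct and function names are ours, not the TAn2 API); it assumes w = 4-byte words, m ≈ 6n, and the per-vertex costs of 116, 218 and 500 bytes quoted in the text:

```cpp
#include <cstdint>
#include <cstddef>

// Indexed data structure (sketch): a vertex takes 4 words (coordinates plus
// field value), a tetrahedron takes 4 words (its four vertex indices).
struct Vertex {
    float x, y, z;       // 3 words of w = 4 bytes each
    float field;         // 1 word for the scalar field value
};

struct Tetrahedron {
    std::uint32_t v[4];  // 4 words: indices of the four vertices
};

// Bytes needed for n vertices and m ~= 6n tetrahedra: 16n + 16*6n = 112n.
std::size_t indexedBytes(std::size_t n) {
    return sizeof(Vertex) * n + sizeof(Tetrahedron) * 6 * n;
}

// Vertex bound for a given RAM budget and per-vertex cost of the chosen
// representation (116n indexed, 218n topological, 500n with DVR auxiliaries).
std::size_t maxVertices(std::size_t ramBytes, std::size_t bytesPerVertex) {
    return ramBytes / bytesPerVertex;
}
```

For a 512MB budget this gives about 4.6M, 2.4M and 1.07M vertices, respectively, matching the rounded figures in the list above.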

In order to support isosurfaces and cross-sections, we also need to store triangle meshes. Through some statistics on real datasets, we have found that 1.5 × 2m^(2/3) ≈ 10n^(2/3) is a reasonable expected value for the average size (number of facets) of an isosurface or of a cross-section extracted from a tetrahedral mesh with m cells (where 1.5 is the average number of isosurface facets fitted in each active cell, and 2m^(2/3) is the average number of active cells). An isosurface can be stored in an indexed data structure, requiring 3w bytes per triangle (for the references to its vertices) and 6w bytes per vertex (for coordinates and normal; it is convenient to store all coordinates of the normal explicitly during rendering). Let ti and vi be the number of triangles and vertices of an isosurface. By the Euler formula, we know that ti ≈ 2vi. Therefore, a single isosurface adds a storage cost of

3w ti + (3w + 3w) vi ≈ 6w ti = 24 × 10n^(2/3) = 240n^(2/3) bytes.
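This estimate can be checked numerically; the sketch below (function name ours) plugs in the expected triangle count of 10n^(2/3) and the Euler relation ti ≈ 2vi:

```cpp
#include <cmath>

// Storage cost of one isosurface extracted from a mesh with n vertices:
// 3w bytes per triangle (vertex references) plus 6w bytes per vertex
// (coordinates and normal); with v_i ~= t_i / 2 this is 6w * t_i bytes,
// i.e. 240 n^(2/3) bytes for the expected t_i = 10 n^(2/3).
double isosurfaceBytes(double n) {
    const double w = 4.0;                              // word size in bytes
    double triangles = 10.0 * std::pow(n, 2.0 / 3.0);  // expected t_i
    double vertices  = triangles / 2.0;                // Euler: t_i ~= 2 v_i
    return 3.0 * w * triangles + (3.0 + 3.0) * w * vertices;
}
```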

In the context of an interactive session, it is reasonable to keep two such surfaces at the same time, for a total of 480n^(2/3) bytes. Since the number of boundary facets of the tetrahedral mesh is b ≈ 12n^(2/3), this cost becomes roughly 770n^(2/3) bytes if we also want to represent boundary faces explicitly. If we assume n in the range 1-4M (as from the figures above), we have an additional cost of 5-8n bytes for maintaining isosurfaces (or cross sections) and the boundary of the mesh. Graphics constraints. The need for an interactive frame rate introduces a rendering constraint: given the rendering speed of the target architecture and the ideal frame rate, we can derive the maximal scene complexity that can be managed interactively. If a visualization modality based on isosurfaces is adopted, we can manage interactively no more than 100K triangles, given the frame rate (10 fps) and the graphics throughput (1M t/s). By assuming an average of 10n^(2/3) facets per isosurface, and two isosurfaces rendered at the same time, the maximum size of a dataset is about 350K vertices while visualizing only isosurfaces. In the case of DVR, graphics constraints become much more severe. Interactive DVR approaches (usually based on the exploitation of graphics hardware [ST90, Wil92a]) require an explicit depth sort of the mesh, followed by a per-cell projection and by a drawing stage that renders a small number of faces for each cell. The sorting algorithms available in the literature can process a few hundred thousand cells per second: bspmpvo can sort 50K-100K cells per second [CKM+ 99], mpvonc (approximated sort) can sort 300K cells per second [Wil92b]. In the rendering stage, approximately four triangles per cell must be drawn (and some processing to produce a view-dependent classification of each cell must also be performed).
The sum of the two components (which can be interleaved in a multiprocessor architecture) gives a threshold of around 100K-200K cells per second. Therefore, at a frame rate of 10 fps we can render in DVR mode only datasets in the range 10K-20K cells. In other words, the number of vertices should be in the range 1.6K-3.3K, which is a rather small size. For simplicity, in the following we will assume a bound of 3K vertices. Discussion.

From the analysis above, we can conclude that:

1. With current technology and current volume rendering applications, graphics constraints are stronger than memory constraints. We can store a mesh about one order of magnitude larger than the one we can use to extract isosurfaces to be visualized; and we can store a mesh three orders of magnitude (!) larger than the one we can use for DVR. This means that once we have saturated the graphics capabilities of the target architecture, we are still left with some free memory (in the case of DVR, with a lot of free memory).

2. The size of the largest mesh that we can manage while using just isosurfaces is two orders of magnitude larger than the largest mesh we can manage with DVR. Therefore, in the case of hybrid visualization (DVR plus isosurfaces), we must severely reduce either the quality of the isosurfaces or the degree of interactivity (frame rate).

3. Storage costs are largely due to the 3D mesh, while the storage cost of the isosurfaces to be rendered is almost negligible. Indeed, the cost of storing the 3D mesh is about 60 times that of storing an isosurface extracted from it.

From the first remark, we can argue that the spare memory we are left with might be used to maintain more data, or some mechanism to improve the above figures, or both. From the latter two remarks, we can argue that if we were able to use a 3D mesh where resolution is maximized near interesting field values (e.g., thresholds corresponding to extracted isosurfaces), while it is kept low elsewhere, then we could obtain a double advantage: the size of

a 3D mesh necessary to saturate the graphics constraint would become smaller, thus leaving more free memory for additional structures; and the gap between the sizes that we can manage with isosurfaces and with DVR, respectively, would become smaller.

Figure 1: The architecture of the TAn2 system.
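The two rendering bounds derived in this section can be reproduced numerically. The sketch below (function names ours) assumes the figures stated above: two isosurfaces of about 10n^(2/3) triangles each for the isosurface bound, and m ≈ 6n for the DVR bound:

```cpp
#include <cmath>
#include <cstddef>

// Isosurface mode: the board sustains 'trisPerSec' triangles per second;
// with two isosurfaces of about 10 n^(2/3) triangles each, the vertex
// bound n solves 2 * 10 * n^(2/3) = trisPerSec / fps.
std::size_t maxIsoVertices(double trisPerSec, double fps) {
    double trisPerFrame = trisPerSec / fps;    // e.g. 100K triangles
    double n23 = trisPerFrame / (2.0 * 10.0);  // n^(2/3)
    return static_cast<std::size_t>(std::pow(n23, 1.5));
}

// DVR mode: sorting plus drawing sustains 'cellsPerSec' tetrahedra per
// second; with m ~= 6n the vertex bound is cells-per-frame over six.
std::size_t maxDvrVertices(double cellsPerSec, double fps) {
    return static_cast<std::size_t>(cellsPerSec / fps / 6.0);
}
```

At 1M triangles/sec and 10 fps the first function yields about 350K vertices; at 100K-200K cells/sec the second yields the 1.6K-3.3K range quoted in the text.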

4 Architecture of the TAn2 system The architecture of the TAn2 system (Tetrahedra Analyzer 2) has been designed to overcome some of the problems arising in the interactive visualization of very large volume data discussed in Section 3. The solution adopted in TAn2 is based on a multiresolution structure that describes the whole dataset in an implicit and very compact way. Such a structure, called the Multi-Tesselation (MT), is described in Section 5. From this structure, smaller subsets of data are extracted and rendered. The amount of data extracted for rendering and its level of resolution (possibly variable over the domain) are selected in such a way as to give the largest mesh that can be rendered under the constraints defined in Section 3. Moreover, there are situations in which a more concise representation can improve the scientist's insight, since a very detailed representation does not always mean more precise or accurate images. As an example, we could mention a peculiar effect of exploiting graphics hardware in direct volume rendering. In areas with a low opacity, the use of a high resolution mesh may introduce considerable errors because current graphics subsystems support only one byte per RGBA channel; thus, small cells with a very low opacity could be rendered as completely transparent. This kind of problem can be overcome by adapting the resolution of the mesh to the relative opacity, so as to reduce accumulation errors due to the insufficient precision of the graphics hardware. TAn2 can overcome this problem by producing variable LOD meshes where data resolution depends on the current Transfer Function (TF) settings. We kept the name of our previous volume visualization system, henceforth called TAn1 [CMPS97], even if the system has been completely re-designed and re-implemented. The system has been implemented in C++ and runs under Windows and Linux.
The user interface has been implemented using fltk [FLT99], a portable GUI package available for both Windows and Linux platforms. The functionalities of the system, the use of multiresolution, and the system architecture are described in the following subsections.


4.1 System Functionalities The TAn2 system provides visualization through isosurfaces, cross sections and Direct Volume Rendering (DVR), at uniform or variable LOD. In isosurface rendering mode, one or more isosurfaces may be displayed at a time; data visualization can be enriched by introducing cross section planes (see Figure 11). The DVR approach adopted is based on a depth sorting and composition of tetrahedral cells [ST90, Wil92b]. In addition, the system can render a dataset by drawing its points, the edges of the tetrahedral mesh, or the triangle mesh forming its boundary. These elements can be superimposed on each other: for instance, it is possible to render a solid isosurface and the wireframe domain boundary at the same time. For surface-based data, TAn2 supports the classical viewing modes (e.g., wire-frame, hidden-line, flat-shaded, etc.). In all rendering modalities (isosurfaces, cross planes and DVR), appearance parameters such as colors and transparency values are determined by means of a user-defined Transfer Function (TF). The Transfer Function defines how field values are mapped into the color and transparency values used in rendering. The TF is defined in terms of RGBA components through linear interpolation of the RGBA values specified by the user at a finite set of field values. The system contains a visual interactive tool for editing transfer functions.
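The piecewise-linear TF described above can be sketched as follows (types and names are ours, not the TAn2 implementation): RGBA control points are specified at a finite set of field values, and intermediate values are linearly interpolated:

```cpp
#include <algorithm>
#include <vector>

// A sketch of a piecewise-linear Transfer Function: the user supplies RGBA
// control points at a finite set of field values; intermediate field values
// are mapped by linear interpolation between the two enclosing points.
struct RGBA { double r, g, b, a; };

struct ControlPoint {
    double field;  // field value at which this RGBA is specified
    RGBA   color;
};

// Control points are assumed non-empty and sorted by increasing field value.
RGBA evalTransferFunction(const std::vector<ControlPoint>& cp, double field) {
    if (field <= cp.front().field) return cp.front().color;
    if (field >= cp.back().field)  return cp.back().color;
    // First control point whose field value lies above 'field'.
    auto hi = std::upper_bound(cp.begin(), cp.end(), field,
        [](double f, const ControlPoint& p) { return f < p.field; });
    auto lo = hi - 1;
    double t = (field - lo->field) / (hi->field - lo->field);
    return { lo->color.r + t * (hi->color.r - lo->color.r),
             lo->color.g + t * (hi->color.g - lo->color.g),
             lo->color.b + t * (hi->color.b - lo->color.b),
             lo->color.a + t * (hi->color.a - lo->color.a) };
}
```

In a DVR pipeline this function would be evaluated at each mesh vertex whenever the TF is edited, producing the per-vertex colors and opacities used in compositing.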

4.2 Role of Multiresolution within the System Within the system, a compact multiresolution data structure, the Multi-Tesselation, is maintained, which describes the whole dataset in an implicit way. For the purpose of rendering, a current mesh is stored, which is dynamically extracted from the multiresolution data structure by selecting just a subset of the data. Rendering is performed on the current mesh: the auxiliary information needed for rendering (described in Section 3) is maintained just for the current mesh. The current mesh has a size suitable for rendering within the available memory and at an interactive frame rate under the constraints described in Section 3, depending on the current viewing modality. The resolution of the current mesh can be either uniform or variable over the domain, and it is chosen based on the current viewing parameters so as to provide the highest possible quality of displayed images within the size constraints. This approach allows us to manage a dataset much larger than the one that could be handled using traditional approaches. In particular, the system supports the following uses of multiresolution:

- Uniform LOD: the resolution of the current mesh used in rendering is uniform over the domain, based on an error threshold specified by the user.
- Variable LOD based on field value: the current mesh satisfies the error threshold specified by the user just in the proximity of the currently selected isosurface values, while the resolution of the mesh in areas that do not contain such values is coarser. In this case the quality of the resulting isosurfaces is the same as that obtained at uniform resolution, but the size of the current mesh is smaller.
- Variable LOD based on spatial location in the domain: the user specifies both an error threshold and a focus volume, e.g., an axis-aligned box. The focus volume is used as a sort of 3D magnifying glass to explore a dataset by increasing resolution only locally. The current mesh satisfies the error threshold specified by the user just inside the focus volume, while outside it is coarser.
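The three extraction criteria listed above can be sketched as simple refinement predicates over the cells of the multiresolution model. This is an illustration under our own naming, not the actual TAn2 code:

```cpp
#include <vector>

// A cell is refined until its approximation error drops below a threshold
// that is tight near the data of interest and relaxed elsewhere.
struct Box { double min[3], max[3]; };

struct Cell {
    double error;               // approximation error of this tetrahedron
    double fieldMin, fieldMax;  // range of field values spanned by the cell
    double center[3];           // a representative point of the cell
};

bool insideBox(const Box& b, const double p[3]) {
    for (int i = 0; i < 3; ++i)
        if (p[i] < b.min[i] || p[i] > b.max[i]) return false;
    return true;
}

// Uniform LOD: the threshold applies everywhere.
bool refineUniform(const Cell& c, double eps) { return c.error > eps; }

// Variable LOD on field value: full accuracy only for cells whose field
// range contains one of the currently selected isosurface values.
bool refineOnField(const Cell& c, double eps, double coarseEps,
                   const std::vector<double>& isoValues) {
    for (double v : isoValues)
        if (c.fieldMin <= v && v <= c.fieldMax) return c.error > eps;
    return c.error > coarseEps;
}

// Variable LOD on spatial location: full accuracy only inside the focus box.
bool refineOnFocus(const Cell& c, double eps, double coarseEps, const Box& focus) {
    return c.error > (insideBox(focus, c.center) ? eps : coarseEps);
}
```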

In addition, progressive rendering can be enabled. Progressive rendering refers to the capability of using representations at different levels of resolution at different times during user interaction with a displayed scene, in order to automatically adapt the level of detail to the rate of interaction. TAn2 supports a simple version of progressive rendering, based on discrete LOD. When progressive rendering is enabled, a very raw mesh is extracted from the MT and maintained in addition to the current mesh. During intensive interaction (e.g., when the user virtually rotates the dataset or iteratively changes the Transfer Function), the system renders the raw mesh, and it goes back to rendering the current mesh extracted according to the user parameters when interaction is suspended. In this way, we optimize system response during interactive phases, while, on the other hand, we give priority to image quality when the system load is low. Uniform LOD and progressive rendering are the only features also present in TAn1. Moreover, the multiresolution model adopted in TAn1 involves a substantial memory overhead with respect to the plain full-resolution model. TAn2 accepts as input either a plain tetrahedral mesh or a Multi-Tesselation. Multiresolution features are available only in the latter case. TAn2 is just a visualization system, while the construction of a Multi-Tesselation is performed

off-line by an independent application: we use a simplification code based on edge collapse [CCM+ 00], as described in Section 6.1.
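The progressive rendering policy described in the previous subsection reduces to a simple mesh selection rule; a minimal sketch (illustrative names, not the TAn2 API):

```cpp
// While the user is interacting (rotating the dataset, editing the TF), a
// raw low-resolution mesh is rendered; when interaction stops, the system
// falls back to the current mesh extracted according to the user parameters.
struct Mesh { /* vertices, tetrahedra, ... */ };

const Mesh& meshToRender(bool progressiveEnabled, bool userIsInteracting,
                         const Mesh& rawMesh, const Mesh& currentMesh) {
    return (progressiveEnabled && userIsInteracting) ? rawMesh : currentMesh;
}
```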

4.3 System Architecture The architecture of the TAn2 system is depicted in Figure 1 and consists of the following modules:

- The GUI Manager, which controls the Graphic User Interface (GUI);
- The Multiresolution Engine, which contains the MT data structure and produces the current mesh by extracting it from the MT;
- The Current Model Manager, which stores the current mesh as well as the information needed for rendering;
- The Transfer Function Manager, which manages changes in the current TF, in the currently selected isosurface values, and in the current error threshold used for mesh extraction;
- The Isosurface Engine, which extracts isosurfaces and cross planes from the current mesh;
- The Rendering Engine, which renders the current mesh through either isosurface rendering or DVR.

The various modules are described in the next paragraphs. GUI Manager. The GUI Manager is responsible for rendering all GUI-related areas of the system interface, managing user interaction, and filtering all user inputs before passing them to other modules. The interface of TAn2, shown in Figure 2, consists of one main window subdivided into two regions. The View region (on the left) is devoted to the visualization of the dataset. The 3D scene visualized in the View region is managed by the Rendering Engine. The Transfer Function region (on the right) provides an innovative unified GUI framework for the interactive setting of the current TF, the selection of isosurface thresholds, and the selection of the error threshold for mesh extraction from the MT. The Transfer Function region is perhaps the most sophisticated component of the GUI Manager. This region contains a visual editor to interactively design or modify the current TF. The TF is shown as four polylines describing the behaviour of each of the RGBA components as a function of the field value. The user can edit the TF by manipulating these polylines: vertices can be added, deleted, and dragged to change their position. After editing the TF, the user has to explicitly request that the modified TF be applied to data rendering. The Transfer Function region is also used to set up the parameters for the selection of isosurfaces and for the extraction of the current mesh from the MT. The user specifies an isosurface by introducing an isosurface handle in the TF area at the desired field value. An isosurface is represented by a horizontal dotted line with a handle at the rightmost extreme (see Figure 2); the vertical position of the line, as well as its color, represents the threshold value of the isosurface.
An isosurface can be interactively modified by acting on the corresponding handle: dragging the handle changes the isosurface field value, while deleting the handle causes the corresponding isosurface to be deleted. Multiresolution selection, from the GUI perspective, is composed of two different selection actions. The user can select the current error threshold by interactively manipulating a handle in the Transfer Function region, visualized as a vertical dotted line (see Figure 2). This selection is then applied either to the whole dataset (uniform LOD extraction) or to a subset of the spatial/field domain (variable LOD extraction). The second selection action is used to identify the subset of the spatial domain which is considered in variable LOD extraction; it adopts a direct manipulation approach based on a 3D widget (see Figures 9 and 10). The subset of the field domain is implicitly defined by the set of current isosurfaces. Changes to the TF, to the isosurface values, or to the error threshold are managed by the Transfer Function Manager with the help of the Multiresolution Engine, the Isosurface Engine, and the Current Model Manager.


Figure 2: The main window of the TAn2 system. The isosurfaces shown in (a) have been computed on a uniform LOD mesh (containing 196K tetrahedra). The mesh rendered in (b) is a variable LOD mesh, obtained by focusing on the field domain (it contains 98K tetrahedra).

Transfer Function Manager. The Transfer Function Manager collects all parameters set by the user through interaction in the Transfer Function region of the main window; these parameters are transferred to it upon an explicit instruction by the user. They concern the current TF, the current isosurface values, and the current error threshold. The Transfer Function Manager acts as a filter that redirects the user specifications to the appropriate modules. Based on the new parameters, it performs one or more of the following actions:

- recompute the color values at the mesh vertices according to the new TF;
- compute a new isosurface (by invoking the Isosurface Engine);
- extract a new current mesh (by invoking the Multiresolution Engine).

Multiresolution Engine. Multiresolution is driven by interactively manipulating either the handle of the current error threshold in the Transfer Function region of the GUI, or the 3D manipulator widget provided to select the spatial focus region. The GUI Manager communicates any change to the Transfer Function Manager, which in turn invokes the Multiresolution Engine. Based on the new error threshold, the Multiresolution Engine extracts a new mesh from the MT and sends it to the Current Model Manager. If the system is in uniform LOD mode, the position of the multiresolution handle determines the constant error threshold used in mesh extraction. If the system is in one of the two variable LOD modes, the value of the handle is interpreted as the error threshold to be satisfied in the proximity of the current isosurfaces, or inside the current focus volume, respectively.

Current Model Manager. The Current Model Manager is devoted to the management of the current mesh: a dynamically modifiable tetrahedral mesh that stores the model as extracted from the MT, enriched with the data needed by the various visualization modalities (isosurfaces, cross planes, colors, etc.). It implements all data structures and methods for managing a tetrahedral mesh with per-vertex field values, colors, and normals, the adjacency relation among tetrahedra, and the isosurfaces fitted on the current mesh; in other words, it maintains the ready-to-render model. If progressive rendering is enabled, the Current Model Manager also stores a very coarse representation of the dataset, on which all modifications of the visualization parameters can be applied interactively.

Isosurface Engine. The Isosurface Engine computes new isosurfaces on command of the Transfer Function Manager, or new cross sections following the requests of the GUI Manager. The updated surface meshes are maintained in the Current Model Manager.

Rendering Engine. The Rendering Engine renders either surfaces, or the dataset directly through DVR. In addition, if requested by the user, it also renders the tetrahedral mesh, its boundary triangulation, or its vertices. The Rendering Engine is activated by any change in the Current Model Manager (e.g., creation, deletion, or update of isosurfaces or cross planes, an update of the TF, or a new current mesh extracted after a change of the error threshold). The Rendering Engine controls both rendering and user interaction in the View region of the main window. The user can rotate, pan, and zoom the dataset. In addition, the user can add a focus volume (i.e., an axis-aligned box) and manipulate it (e.g., move or resize it). Different visualization modes are supported to render the dataset mesh and the isosurfaces, if any.
In the example of Figure 2(a), a uniform LOD mesh has been extracted (the corresponding error threshold is the one selected with the multiresolution extraction handle), and three different isosurfaces are fitted and rendered (corresponding to the three isosurface handles shown) together with the mesh boundary surface. In Figure 2(b), the variable LOD mesh has been produced based on the field value. The mesh satisfies the error threshold set by the multiresolution handle just in the tetrahedra contributing to the isosurfaces: consequently, the isosurfaces are identical to those shown in Figure 2(a), while the size of the current mesh is significantly smaller.

5 Three-Dimensional Multi-Tesselation

The Multi-Tesselation (MT) is a general multiresolution model that has been introduced for triangle meshes in [Pup96] and then extended to simplicial meshes in arbitrary dimensions [DFPM97]. In this section, we describe a special version of the MT suitable for tetrahedral meshes simplified through general edge collapse, as well as a new compact data structure to encode it.

Figure 3: A sequence of vertex splits progressively refining a triangulation. The mesh portion affected by each update is shaded; the new edge introduced by a split is thickened.

Figure 4: The partial order corresponding to the sequence of vertex splits of Figure 3, depicted as a DAG where each node represents an update, and each arc represents a dependency. Nodes are labeled with the indices of the corresponding split vertices.

5.1 MT Basics

A Multi-Tesselation (MT) is composed of a base mesh at coarse resolution plus a set of updates that can be applied to locally refine the base mesh. In general, an update consists of a set of cells (tetrahedra in the 3D case) to be removed from the mesh, and a second, larger, set of cells to be inserted into the mesh to replace the removed cells. Figure 3 shows a sequence of updates that progressively refine a triangle mesh. The MT encodes a partial order among updates, which is induced by the following dependency relation: an update u+ depends on another update u′+ if u+ removes some cell that has been introduced by u′+. This means that we cannot apply u+ unless we apply u′+ first. Note that an update can also be applied in reverse (i.e., undone) to locally coarsen a mesh at high resolution. Given a refinement update u+, we will denote by u− its reverse (the coarsening update). In the context of an MT, an update u+ and its reverse u− are represented as a single node u. The partial order among nodes must be seen as reversed when applying reverse updates. Figure 4 shows the partial order of updates encoded in the MT corresponding to Figure 3. We say that a (possibly empty) subset S of nodes of an MT is consistent if, for every node u ∈ S, all the nodes u′ such that u′+ precedes u+ in the partial order are also in S. Updates of a consistent subset can be applied to the coarse mesh in any total order that extends the partial order, thus producing a mesh at an intermediate level of detail. We denote by Σ_S the mesh obtained by performing all updates u+ such that u ∈ S in a consistent order. A node can be added to, or deleted from, S if and only if the resulting set is still consistent (i.e., if and only if the corresponding update can be performed on Σ_S). Adding a node u to a consistent subset S locally refines Σ_S by performing u+. Deleting a node u from S locally coarsens Σ_S by performing u−.
In general, an update that can be made without violating consistency will be called feasible. Many different meshes at different levels of detail can be obtained by performing different subsets of the updates corresponding to nodes of the MT. It has been shown that all possible meshes formed by cells that appear in an MT (i.e., either in the base mesh or in some update) are in one-to-one correspondence with consistent subsets of updates, provided that the dimension of the MT and of the underlying space are coincident [Pup96].
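The closure property that defines a consistent subset can be checked directly against the dependency relation. The following sketch illustrates the idea; the dependency map is a small hypothetical example, not taken from the paper's figures.

```python
# Sketch: a subset S of MT nodes is consistent iff it is closed under the
# dependency relation. deps[u] holds the nodes whose update must precede u+.

def is_consistent(S, deps):
    """S: set of node ids; deps: dict mapping each node to the set of nodes
    it depends on. Consistent = every dependency of a member is a member."""
    return all(deps[u] <= S for u in S)

# Hypothetical partial order over five nodes (illustrative only):
deps = {10: set(), 11: {10}, 12: set(), 13: {11}, 14: {11, 12}}
assert is_consistent({10, 11}, deps)
assert not is_consistent({13}, deps)   # 13 requires 11, which requires 10
```

Adding a node is then allowed exactly when all its dependencies are already in S, and deleting one is allowed when no member still depends on it.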



Figure 5: Modification of a tetrahedral mesh through edge collapse and vertex split (exploded view). On the left, the collapsing edge and its endpoints are marked; tetrahedra t and t′ degenerate into triangles. On the right, the split vertex is marked; tetrahedra t1 and t6 are attached to one endpoint, and tetrahedra t2, t3, t4, t5 are attached to the other endpoint of the new edge.

5.2 MT Based on Edge Collapse

Edge collapse is a popular example of a local update that has been widely used in the literature to simplify meshes, both in 2D and in 3D (see Section 2). A general edge collapse contracts an edge e, with endpoints v1 and v2, to a point v located on e (for instance, the midpoint of e). The 3D mesh around e is deformed by replacing vertices v1 and v2 with v: as a consequence, tetrahedra containing both v1 and v2 collapse into triangles (see Figure 5). The effect of an edge collapse on a mesh is uniquely defined by the two vertices v1 and v2 of the edge e to be collapsed, and by the position of the new vertex v. In the MT terminology, this corresponds to removing the tetrahedra incident at v1 or v2, and replacing them with new tetrahedra incident at v. The reverse operation of an edge collapse is called a vertex split. A vertex split expands a vertex v into an edge e having endpoints v1 and v2. The split partitions the tetrahedra incident at v into two subsets, which are separated by a fan of triangles incident at v: the tetrahedra of the two subsets are deformed to become incident at v1 and v2, respectively; the triangles of the fan are expanded into tetrahedra that become incident at both v1 and v2 (see Figure 5). The effect of a vertex split is uniquely defined given the vertex v to be split, the positions of the new vertices v1 and v2, and a partition of the set of tetrahedra incident at v into two subsets. In the MT terminology, this corresponds to removing the tetrahedra incident at v, and replacing them with new tetrahedra incident at v1 and v2. In the following, we will consider only MTs where each update u+ is a vertex split, and its reverse update u− is a general edge collapse.
Note that, in this case, the nodes of an MT are in one-to-one correspondence with the vertices that appear in the mesh during a simplification process based on general edge collapses, except for the vertices that belong to the original mesh at high resolution. Thus, we identify each node u with the vertex vu split by u+ and created by the collapse u−.

5.3 Data Structure

In order to encode an MT we must maintain:

- a data structure for the base mesh at low resolution, which can support edge collapse and vertex split operations efficiently;
- sufficient information to retrieve the partial order of nodes, i.e., to understand when an update is feasible;
- sufficient information to describe the updates u− and u+ corresponding to each node u of the MT.

Since our aim is dealing with extremely large datasets, we have designed a compact data structure, achieving a good compression factor with respect to the storage cost of the original mesh at high resolution (i.e., the largest mesh that we can get from the MT). Note that the expected size of the base mesh is low: in the 3D case, about two orders of magnitude smaller than the original mesh at high resolution. Therefore, our efforts have been devoted to the compact encoding of nodes and dependencies. For encoding the partial order of nodes, we elaborate on a result of El-Sana and Varshney [ESV99], who proved that, in the case of general edge collapses, all dependencies can be retrieved by maintaining only a binary forest of

vertices numbered in a suitable way. For encoding nodes, we propose an original method based on a bit vector, which is used to subdivide the vertices adjacent to a given vertex (and, consequently, the tetrahedra incident at it).

5.3.1 Encoding the base mesh

The base mesh is maintained in a topological data structure, similar to the one described in Section 3. The data structure is made of two arrays: one containing vertices, and the other containing tetrahedra. For each tetrahedron t, we encode:

  

The indexes of its four vertices; The indexes of its four adjacent tetrahedra; The reverse index giving the position of t with respect to each of its adjacent tetrahedra.

For each vertex v of the current mesh, we store:

    

Its three coordinates; Its field value; Its normal; The index of one tetrahedron incident at v ; The index of the node corresponding to v in the MT data structure (the role of this index will be clarified later).

In the assumptions made in Section 3 (one word per coordinate/value/index, one word for the normal, one byte for the reverse indexes), this data structure requires

(8w + 1) m_b + 7w n_b ≈ (55w + 6) n_b = 226 n_b bytes,

where n_b and m_b are the number of vertices and tetrahedra in the base mesh, respectively, and m_b ≈ 6 n_b as usual.
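The arithmetic of this cost formula can be checked directly under the stated assumptions (w = 4 bytes per word, m_b ≈ 6 n_b); the example size below is arbitrary.

```python
# Verifying the base-mesh storage cost: w = 4 bytes per word, and roughly
# six tetrahedra per vertex (m_b = 6 n_b).
w = 4
n_b = 1000                      # base-mesh vertices (arbitrary example size)
m_b = 6 * n_b                   # base-mesh tetrahedra
tet_bytes = (8 * w + 1) * m_b   # 4 vertex + 4 adjacency indices, 1 reverse byte
vert_bytes = 7 * w * n_b        # 3 coords, field, normal, tet index, MT index
total = tet_bytes + vert_bytes
assert total == (55 * w + 6) * n_b == 226 * n_b
```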

5.3.2 Encoding the partial order

An explicit representation of the DAG depicted in Figure 4 could be too expensive, because the number of arcs may be quite high, and the degree of nodes may be quite variable. In order to overcome this problem, we adopt a data structure proposed by El-Sana and Varshney [ESV99] in the 2D case, which is based on two components:

- A forest of binary trees of vertices. There is one node for each vertex of the MT plus those of the original mesh; the two children of each internal node vu are the endpoints v′u and v″u of the edge created when splitting vu. Vertices of the original mesh are all leaves of the forest, while internal vertices and roots are in one-to-one correspondence with nodes of the MT. In particular, roots of the forest correspond to vertices of the base mesh at coarse resolution.
- A numbering of the vertices: the n vertices of the original mesh at high resolution are numbered arbitrarily from 1 to n; the remaining vertices are numbered with consecutive numbers in a total order that extends the partial order of the MT.

Since the model is built through simplification based on general edge collapse (see Section 6.1), the forest is built bottom-up while simplification occurs, and the numbering is obtained from the order in which new vertices appear during simplification. An example of this numbering can be seen in Figure 3 by reading the figure from right to left as a sequence of edge collapses. Figure 6(a) shows the forest for the MT of Figure 4. Given a mesh Σ_S corresponding to a consistent set S, the following rules can be derived from the results proved in [ESV99]:

1. A collapse u− is feasible on Σ_S if and only if edge v′u v″u belongs to Σ_S and all the vertices adjacent to v′u or v″u in Σ_S have a number lower than vu;


Figure 6: (a) The forest of binary trees encoding the MT of Figure 4; circles represent vertices corresponding to MT nodes, squares represent vertices of the original mesh; (b) the arrays encoding such a forest.

2. A split u+ is feasible on Σ_S if and only if vertex vu belongs to Σ_S and all the vertices adjacent to vu have a number lower than vu.
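These two feasibility rules can be sketched as follows; the mesh is abstracted as an adjacency map and the forest as a parent map, with illustrative names that are not part of the system.

```python
def collapse_feasible(v1, v2, parent, adj):
    """Rule 1: edge (v1, v2) can be collapsed iff v1 and v2 are siblings in
    the forest and every vertex adjacent to either of them in the current
    mesh is numbered below their common parent vu."""
    vu = parent.get(v1)
    if vu is None or parent.get(v2) != vu:
        return False
    return all(x < vu for x in (adj[v1] | adj[v2]) - {v1, v2})

def split_feasible(vu, adj):
    """Rule 2: vertex vu can be split iff all its adjacent vertices in the
    current mesh are numbered below vu."""
    return all(x < vu for x in adj[vu])
```

For instance, with a vertex 10 created by collapsing vertices 1 and 2, the collapse of edge (1, 2) is feasible only if every remaining neighbour of 1 or 2 is numbered below 10.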

By applying the above rules, we are always sure that we can modify a consistent mesh in all possible consistent ways. Note that this mechanism could not be used for an MT based on half-edge collapse, since the retrieved dependencies would overconstrain the model (i.e., not all feasible updates would be detected). Proofs of these facts are omitted for brevity. We encode the forest by using two arrays: one containing the leaves, and the other containing the remaining nodes of the trees. Let N and n be the total number of vertices in the MT and in the original mesh at high resolution, respectively (N > n):

 

The first array has n entries, and each node (vertex) is stored at the entry corresponding to its number; The second array has N

n entries, and a node having number i is stored at entry i n.

We use two arrays in order to optimize the storage cost, since leaves need much less information than internal nodes. For each entry v in the array of leaves we store just one index v.WIDE, which points to:

- its parent vp, if v is the second child of vp (by convention, the child having the smaller index);
- its sibling, if v is the first child of vp.

For each entry u in the array of the internal nodes, we store:

- an index u.WIDE, which points either to the parent or to the sibling of u, as in the previous case;
- an index u.DEEP, pointing to the first child of u (by convention, the child having the larger number);
- the information necessary to support vertex split, which will be detailed in Subsection 5.3.3.

Note that the parent of a node u always has a number larger than u, and, if u is a first child, its sibling has a number smaller than u. Thus, the parent of a node u is determined as u.WIDE if u.WIDE > u, and as u.WIDE.WIDE otherwise. The children of u are found as u.DEEP and u.DEEP.WIDE. If nin = N − n is the number of internal and root nodes, we have that nin ≤ n − 1; but, since the number of roots is small, we can assume nin ≈ n. Since we store one index for each leaf and two indices for each internal node, the storage cost of maintaining the raw forest is equal to

w n + 2w nin ≈ 3w n = 12n bytes.

The additional cost of storing vertex information is specified in the following subsections.
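Navigation through the WIDE and DEEP links can be sketched as follows; dicts stand in for the two arrays, and the example forest fragment corresponds to the split 11 = 10 + 6 of Figure 3 (children of node 11 are 10 and 6, with 10 the first child because it has the larger number).

```python
def parent(u, WIDE):
    """u.WIDE points to the parent if u is a second child (parents always
    have larger numbers), and to the sibling otherwise."""
    return WIDE[u] if WIDE[u] > u else WIDE[WIDE[u]]

def children(u, WIDE, DEEP):
    """u.DEEP is the first child (larger number); its WIDE link is the
    sibling."""
    return DEEP[u], WIDE[DEEP[u]]

# Forest fragment for the split 11 = 10 + 6:
WIDE = {6: 11, 10: 6}   # 6 is the second child, 10 the first
DEEP = {11: 10}
assert parent(6, WIDE) == 11
assert parent(10, WIDE) == 11
assert children(11, WIDE, DEEP) == (10, 6)
```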


5.3.3 Encoding MT nodes

We have already remarked that each node u of the MT corresponds to a vertex vu and to an internal node of the binary forest: a node u describes both the updates u− (edge collapse) and u+ (vertex split). Therefore, we must encode information sufficient to support such operations in our data structure. Note that updates are always applied to a current mesh used within the visualization engine. The current mesh is a working structure of variable size, maintained in a topological data structure similar to the one used for the base mesh. Therefore, all geometric and attribute information about vertices of the current mesh is maintained in this data structure, which is dynamically linked to the corresponding nodes of the forest. In particular, each entry corresponding to a vertex of the current mesh has a pointer to the node in the forest corresponding to the same vertex.

The current mesh, together with the raw forest, provides sufficient information to support edge collapse. Suppose we want to collapse an edge having as endpoints a pair of vertices v′u and v″u in the current mesh. If such vertices are not siblings in the forest, then the collapse is not feasible. If they are siblings, then the node corresponding to their parent vu is found through the forest links described above. Then, it is sufficient to compare the index of vu with those of all neighbors of v′u and v″u in the current mesh, in order to test whether the collapse is feasible. Once feasibility has been tested, the position, field value, and vertex normal of vertex vu are found by linear interpolation. For the sake of simplicity, we always collapse an edge to its midpoint, and we compute field and normal as averages of those quantities at its endpoints. The decision of whether to perform a feasible collapse will depend on the error introduced by the collapse, which is stored at the corresponding node of the forest.
Vertex split requires more information, which is stored directly at node u:

- an offset vector, used to find the positions of vertices v′u and v″u from that of vu;
- an offset value, used to obtain the field values at v′u and v″u from that of vu;
- an offset vector for normals, used to find the per-vertex normals at v′u and v″u from the normal at vu;
- a bit mask, used to partition the star of vu, i.e., the set of tetrahedra incident at vu;
- an error value, which provides an estimate of the approximation error corresponding to vu (see also Section 6.1).

The feasibility of a vertex split can be tested directly on the current mesh by comparing the index of the candidate vertex with those of its adjacent vertices. If vu belongs to the current mesh and its split is feasible, then the offsets are used to compute the positions, vertex normals, and field values of the two new vertices, and the bit mask is used to partition the tetrahedra in the star of vu between such vertices. The bit mask contains one bit for each vertex adjacent to vu (such vertices are ordered by increasing index in order to match the bits in the mask when the split occurs). Note that each tetrahedron in the star of vu has exactly three vertices in this set of adjacent vertices. Then the following rule is applied:

- tetrahedra with all three vertices marked 1 in the bit mask replace vu with v′u;
- all remaining tetrahedra replace vu with v″u;
- for each tetrahedron t such that exactly one vertex of t is marked 0, the triangular face of t opposite to such a vertex is expanded into a tetrahedron incident at both v′u and v″u.
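A minimal sketch of this partition rule follows; the function and argument names are illustrative, and tetrahedra in the star are represented just by their three vertices adjacent to vu.

```python
def partition_star(adjacent, mask, star):
    """adjacent: vertices adjacent to vu, sorted by increasing index;
    mask: integer bit mask, where bit i refers to adjacent[i];
    star: 3-tuples of adjacent vertices, one per tetrahedron in the star.
    Returns (tets reattached to v'_u, tets reattached to v''_u,
             tets whose face opposite the 0-vertex expands into a new tet)."""
    marked = {v for i, v in enumerate(adjacent) if mask >> i & 1}
    to_v1, to_v2, expanded = [], [], []
    for tet in star:
        ones = sum(v in marked for v in tet)
        if ones == 3:
            to_v1.append(tet)      # all three marked 1: reattach to v'_u
        else:
            to_v2.append(tet)      # otherwise reattach to v''_u
        if ones == 2:              # exactly one vertex marked 0: the face
            expanded.append(tet)   # opposite to it expands into a new tet
    return to_v1, to_v2, expanded
```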

Figure 7 shows the use of the update information for performing the updates u+ and u− associated with a node u, in a simplified two-dimensional example. During MT construction, we collapse only edges such that the total number of vertices adjacent to their two endpoints is not larger than 32; the bit mask can then be stored in a single word of 4 bytes. The experiments we have performed confirm that this bound works well in practice. Moreover, we compress the offset vector for normals and the error value into a single word, packing the normal offset into the first 17 bits by using the compression scheme of [Dee95], and quantizing the field error on the remaining 15 bits using a logarithmic scale to give more importance to small errors. As explained in Section 6.1, we assume that the domain error can be derived from the field error by means of a scale factor.
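The packing step can be sketched as follows. The 17-bit normal code is assumed to be already produced elsewhere (the codec of [Dee95] is not reproduced here), and the particular logarithmic quantizer is an illustrative choice consistent with the description, not the system's exact formula.

```python
import math

def pack(normal_code, error, max_error):
    """normal_code: a 17-bit integer assumed to come from the normal codec;
    error: field error, quantized logarithmically on 15 bits so that small
    errors keep more precision. Returns a single 32-bit word."""
    assert 0 <= normal_code < 1 << 17
    if error <= 0:
        q = 0
    else:
        q = min((1 << 15) - 1,
                int((1 << 15) * math.log1p(error) / math.log1p(max_error)))
    return (normal_code << 15) | q

word = pack(0x1ABCD, 0.01, 10.0)
assert word >> 15 == 0x1ABCD        # the normal code is recoverable
```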



Figure 7: (a) Use of the update information for performing a vertex split: the displacement vector d is shown; white and black vertices are marked 0 and 1, respectively, in the bit mask; the new vertices are found as v′u = vu + d and v″u = vu − d. (b) Use of the update information for performing an edge collapse: the displacement vector is shown; vertex vu is found as v″u + d or, equivalently, as v′u − d; the bit mask is not used.

The storage cost of the information associated with a single node amounts to 6w = 24 bytes. Since this cost is added to all nin ≈ n entries of the array of internal nodes, the total cost of the data structure is

12n + 24n = 36n bytes,

plus the cost of the base mesh, which is negligible, since it always has a small number of vertices. In summary, this data structure achieves a compression factor of about 6 with respect to a topological data structure encoding the original tetrahedral mesh at high resolution.
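The claimed compression factor follows directly from the per-vertex costs derived above; a quick check with w = 4 bytes per word:

```python
# ~226n bytes for the full-resolution topological structure (the Section
# 5.3.1 formula applied to the full mesh) versus 36n bytes for the MT
# (12n for the forest plus 24n for the per-node split data).
w = 4
full_per_vertex = 55 * w + 6     # 226 bytes
mt_per_vertex = 3 * w + 6 * w    # 12 + 24 = 36 bytes
assert full_per_vertex // mt_per_vertex == 6
```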

6 MT Construction and Traversal

In this section, we briefly describe the algorithm we use for generating our Multi-Tesselations, and the method used for extracting tetrahedral meshes at variable resolution from them. For building the MT, we use a simplification algorithm called CUTE (Collapsing Unstructured Tetrahedral meshes); a detailed description of this algorithm can be found in [CCM+00]; here, we just report those features which are relevant to building a multiresolution model. For performing selective refinement, we have extended a heuristic algorithm based on a dynamic approach, which we proposed in [DFMMP00] for the 2D case.

6.1 Construction

The construction of the multiresolution model is performed off-line by adopting a sophisticated non-interactive simplification algorithm called CUTE (Collapsing Unstructured Tetrahedral meshes). CUTE is able to maintain both the geometric and topological correctness of the mesh and to provide an accurate estimate of the error introduced during the simplification process. The input to the construction algorithm is a tetrahedral mesh with scalar field values associated with its vertices. The domain of the mesh is a manifold with boundary that is not necessarily convex. The CUTE simplification algorithm proceeds by iteratively collapsing edges of the mesh; during this process it is important:

- to maintain the geometric correctness of the mesh (including the prevention of self-intersections) and to preserve its topology;
- to control the approximation error through an integrated evaluation of the error introduced by the modification of the domain and of the error introduced by the approximation of the field defined over the input mesh.

In order to produce a multiresolution model of good quality, the approximation error should increase as slowly and as smoothly as possible during simplification. Edges leading to the least error increase should be collapsed first. Thus, we maintain the edges candidate for collapse in a priority queue, sorted by increasing error. Error evaluation is done exactly by simulating the collapse; an approximate evaluation can reduce running times substantially, but it worsens the quality of the simplification process. At each iteration, we extract from the queue the edge e having the least error and try to collapse it to its midpoint v. We actually perform the collapse only if it passes a set of tests ensuring the topological and geometric consistency of the resulting mesh.
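The greedy loop can be sketched as follows, with the error evaluation and the consistency tests abstracted behind callbacks; the names are illustrative and do not reflect the actual CUTE interface.

```python
import heapq

def simplify(edges, error, try_collapse, done):
    """edges: initial candidate edges; error(e): current error of collapsing
    e; try_collapse(e): performs the collapse if the consistency tests pass
    and returns the edges whose error must be re-evaluated, or None on
    failure; done(): stop condition (e.g., target mesh size reached)."""
    queue = [(error(e), e) for e in edges]
    heapq.heapify(queue)
    while queue and not done():
        err, e = heapq.heappop(queue)
        if err != error(e):                        # stale entry: its key
            heapq.heappush(queue, (error(e), e))   # changed, re-insert
            continue
        updated = try_collapse(e)
        if updated is not None:
            for e2 in updated:
                heapq.heappush(queue, (error(e2), e2))
```

Re-inserting stale entries lazily avoids an explicit decrease-key operation, which Python's heapq does not provide.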

After the collapse of e, every edge e′ incident at one of the old endpoints of e is updated by replacing such an endpoint with v, the error associated with e′ is recomputed, and the position of e′ in the queue is updated according to the newly evaluated error.

6.1.1 Error Evaluation

At each stage of the simplification process, the approximation error of the current simplified mesh Σ′ with respect to the original mesh Σ can be described by using two measures: the domain error and the field error. The domain error measures the deformation of the domain due to collapsing edges lying on, or adjacent to, the boundary of the mesh. A correct measure of the domain error is provided by the symmetric Hausdorff distance between the boundary surfaces of the input mesh and of the current simplified mesh. Since computing the symmetric Hausdorff distance at each simplification step can be inefficient, we adopt a discrete approximation of this distance that evaluates it just at the vertices of the original mesh. The field error is the error introduced by representing the original field defined on Σ with the under-sampled one defined by the simplified mesh Σ′. In our algorithm, we adopt a discrete approximation of the field error that evaluates the difference between the two fields at the vertices of the original mesh, plus at a small set of sampled points inside each tetrahedron, to achieve a better approximation of the error. Therefore, at each iteration, the current mesh Σ′ is characterized by a domain error ε_d and by a field error ε_f. During the simplification, we compose these two error evaluations into a single value that is used as the key for sorting edges in the priority queue. The criterion adopted to compose the two errors is based on a predefined ratio δ between them: during simplification, we ensure that δ · ε_d ≤ ε_f. With this convention, we can use a single value to characterize both errors during simplification and multiresolution management. Hereafter, when discussing the error of a mesh, we will refer just to the field error, assuming that information about the domain error can be recovered easily.

6.2 Selective Refinement

Selective refinement is an operation that adapts the level of detail of a mesh to the needs of an application, possibly varying the resolution over different portions of the mesh. Selectively refined meshes are associated with subsets of nodes of the MT which are consistent with the dependency relation. Selective refinement is performed upon requests expressed in terms of the resolution and the size of the output mesh. A resolution threshold is a function τ that assigns a real value to each node u of an MT. Intuitively, τ(u) measures the "importance" of performing update u+ (i.e., of splitting vertex vu) with respect to the resolution requirements of the application. A value of τ(u) > 0 means that u+ needs to be performed, and the need is more important if τ(u) is larger. A value of τ(u) ≤ 0 means that u+ does not need to be performed, and performing it becomes more unimportant as the value becomes smaller. A size threshold is simply a positive integer b which gives an upper bound on the number of tetrahedra in the output mesh: b is the maximum number of tetrahedra that can be rendered with the available resources and within the given time constraints. Based on a resolution threshold τ and a size threshold b, we want to find the mesh which best approximates the requirements of τ and has a size less than or equal to b. This formulation gives higher priority to the size constraint over the satisfaction of threshold τ. Our aim is finding a consistent set S such that:

(a) the size of the corresponding mesh Σ_S is ≤ b;
(b) max{τ(u) | u can be added to S} is minimized;
(c) min{τ(u) | u can be deleted from S} is maximized.

Requirement (a) can always be achieved, provided that the size threshold b is not smaller than the size of the initial coarse mesh. Requirement (b) means that, compatibly with requirement (a), at least the most relevant vertex splits (i.e., nodes u with the largest positive values of τ(u)) must be in S. Requirement (c) means that at least the most redundant vertex splits (i.e., nodes u with the smallest negative values of τ(u)) must not be in S. The algorithm for performing selective refinement uses a heuristic based on a dynamic approach. The algorithm starts from a current mesh Σ_S, and works in two phases:


1. In the contraction phase, edges are collapsed in order to satisfy requirements (a) and (c). Edges of the current mesh whose endpoints correspond to siblings in the binary forest are taken as candidate edges for collapse. Edges that can be collapsed without violating the resolution threshold are maintained in a priority queue. The priority of an edge v′u v″u depends on the value τ(u) computed at the parent u of its endpoints in the binary forest: the lower the value of τ(u), the higher the priority. Collapses are iteratively attempted in the order they come from the queue. Only feasible collapses are performed. A collapse may cause other edges to be added to the queue. The contraction loop ends when either the size of the current mesh becomes less than or equal to b, or the priority queue is empty (i.e., no more feasible collapses can be performed without violating the resolution threshold τ). 2. In the expansion phase, vertices are split in order to satisfy requirement (b), without violating (a). Vertices of the current mesh corresponding to nodes having a positive value of τ are taken as candidate vertices for split, and maintained in a priority queue. The priority of a vertex vu depends on the value of τ(u): the higher the value of τ(u), the higher the priority.

Splits are iteratively attempted in the order they come from the queue. If a split at a vertex vu is not feasible, then those vertices which are adjacent to vu and have an index larger than vu are recursively split. Performing a split may cause one or two new vertices to be added to the queue. The expansion loop ends when either the size of the current mesh becomes equal to b, or the priority queue is empty (i.e., the resolution threshold  is negative or null at all vertices).

At the beginning, the set S is empty and the current mesh Σ is initialized as the base mesh. The result of a given query is taken as input by the next query. The current mesh is maintained in a topological data structure similar to the one encoding the base mesh, described in Section 5.3.1. The maximum number of tetrahedra in the current mesh is bounded from above by the size threshold b and, in general, by a maximum bound fixed by the system depending on the available memory. Dynamic allocation and deallocation of arrays is used to maintain the data structure within the given bounds, and to reuse memory that is freed during the contraction phase.
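Under simplifying assumptions, the two-phase loop described above can be sketched as follows. The `Node` class, the unit mesh-size bookkeeping, and the omission of feasibility tests are illustrative choices of ours, not the actual TAn2 data structures, which operate on a tetrahedral mesh and a binary forest of vertex splits.

```python
import heapq

# Illustrative sketch of the two-phase selective refinement loop.
# Node, the tau values, and the unit mesh-size accounting are
# assumptions for illustration only.

class Node:
    def __init__(self, name, tau):
        self.name = name    # identifies a vertex split in the forest
        self.tau = tau      # resolution threshold tau(u)

def selective_refinement(mesh_size, b, nodes):
    """Contract while over the size bound b, then expand by priority."""
    # Contraction phase: candidates with negative tau, most negative
    # (most redundant) first.
    heap = [(n.tau, n.name) for n in nodes if n.tau < 0]
    heapq.heapify(heap)
    collapsed = []
    while mesh_size > b and heap:
        _, name = heapq.heappop(heap)
        collapsed.append(name)
        mesh_size -= 1          # each collapse shrinks the mesh
    # Expansion phase: candidates with positive tau, largest tau
    # (most relevant) first.
    heap = [(-n.tau, n.name) for n in nodes if n.tau > 0]
    heapq.heapify(heap)
    split = []
    while mesh_size < b and heap:
        _, name = heapq.heappop(heap)
        split.append(name)
        mesh_size += 1          # each split grows the mesh
    return mesh_size, collapsed, split
```

Using a min-heap keyed on τ(u) (negated for the expansion phase) gives both phases the priority ordering described in the text, with O(log n) cost per attempted collapse or split.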

7 Supporting Volume Rendering through Multiresolution

As we have already pointed out, the multiresolution functionalities offered by TAn2 are aimed at managing a dataset whose size, at full resolution, is beyond the graphics and memory capabilities of the target architecture. We exploit the MT data structure to store a mesh larger than the one we could maintain in a standard topological data structure, and we use the selective refinement algorithm to extract smaller (and coarser) meshes, where resolution is possibly variable over the spatial domain or the field range. Each multiresolution functionality is supported by a specific customization of the resolution threshold used by the selective refinement algorithm. The size threshold can be adjusted independently in order to comply with user needs. In any case, the system automatically sets an upper bound for the size threshold, in order to comply with the available resources. In the following, we summarize how such functionalities are supported through the MT framework.

Uniform LOD. The simplest use of multiresolution consists in extracting meshes having a resolution uniformly lower than that of the original mesh over the whole spatial domain and field range. In this case, the resolution threshold is given by the function

τ(u) = ε(u) − k,

where ε(u) is the error associated with a node u, and k is a value that can be adjusted interactively by the user.
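As a toy illustration of the uniform filter above (the error values and node names here are invented for the example):

```python
# Toy illustration of the uniform-LOD threshold tau(u) = eps(u) - k:
# a node still needs refinement while tau(u) > 0, i.e. while its
# approximation error exceeds the user-chosen slack k.
# The error values below are invented for the example.

def tau_uniform(eps_u, k):
    return eps_u - k

errors = {"u1": 0.8, "u2": 0.3, "u3": 0.05}
k = 0.1
to_refine = [u for u, e in errors.items() if tau_uniform(e, k) > 0]
print(to_refine)  # only the nodes whose error exceeds k
```

Raising k interactively shrinks the extracted mesh uniformly, since fewer nodes keep a positive threshold.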

Variable LOD based on spatial location in the domain. In this modality, the user specifies a focus volume, and resolution is locally increased only inside that volume. The system provides focus volumes shaped as axis-aligned boxes. It is thus possible to focus on the areas that are considered more critical, while leaving the rest of the mesh at lower resolution. A simple spatial filter is defined by applying two different resolution thresholds inside and outside the focus volume:

τ(u) = ε(u) − k    if u intersects the focus volume
τ(u) = ε(u) − k′   otherwise

where k′ > k. Note that the volume occupied by a node u is intended as that of all the tetrahedra incident at vu in the current mesh. Even though the filter is defined in a very sharp manner, the mechanism of selective refinement forces resolution to decrease gradually while moving away from the focus volume.
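A minimal sketch of such a spatial filter, assuming boxes represented as (lo, hi) corner tuples; the helper names are ours, not the TAn2 API:

```python
# Minimal sketch of the spatial filter: slack k inside the focus box,
# larger slack k_prime outside. Boxes are axis-aligned, given as
# (lo, hi) corner tuples; names are illustrative.

def boxes_intersect(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[i] <= bhi[i] and blo[i] <= ahi[i] for i in range(3))

def tau_spatial(eps_u, node_box, focus_box, k, k_prime):
    assert k_prime > k
    slack = k if boxes_intersect(node_box, focus_box) else k_prime
    return eps_u - slack

focus = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
inside = ((0.4, 0.4, 0.4), (0.6, 0.6, 0.6))
outside = ((2.0, 2.0, 2.0), (3.0, 3.0, 3.0))
# With error 0.3: refined inside the focus box, coarsened outside it.
print(tau_spatial(0.3, inside, focus, 0.1, 0.4))   # positive: refine
print(tau_spatial(0.3, outside, focus, 0.1, 0.4))  # negative: coarsen
```

The same node error yields opposite refinement decisions depending only on where the node lies, which is exactly the sharp filter the selective refinement mechanism then smooths out.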

Variable LOD based on field value. In this modality, resolution is focused on regions where the field assumes critical values, for example regions where the user wants to extract isosurfaces. Similarly to the previous case, the user defines a set of focus field values, and a field filter is obtained by combining it with an independently defined resolution threshold:

τ(u) = ε(u) − k    if u contains one of the focus field values
τ(u) = ε(u) − k′   otherwise

Also in this case k′ > k. The field range associated with a node u for testing the containment of a focus field value is the range spanned by vu and its adjacent vertices in the current mesh, enlarged by the field error corresponding to u. An accurate selection of the focus field values can reduce the size of the extracted mesh substantially, while preserving a high accuracy for those parts of the data that are considered more interesting. Since the resolution of a 3D mesh extracted with this modality is high only around the selected focus field values (i.e., around potential isosurfaces), a smaller mesh than usual is sufficient to saturate the graphics capabilities, thus leaving more memory available for storing the multiresolution model (see the discussions in Sections 3 and 8).

Variable LOD based on the Transfer Function. While the previous modality is especially useful for isosurface extraction, it can be extended to support DVR rendering as well, by making the range filter dependent on the current Transfer Function (TF). In this case, we take into account the opacity component α of the TF: cells that in any case give a negligible contribution (α nearly 0) can be represented at a coarser resolution. Let α(u) be the average value of α spanned by u (which depends on the range spanned by u, as before), and let c be a suitable opacity threshold. We define the filter as follows:

 (t) =

(

"(u) k

 2 (u) ("(u) k) (1=c)2

if (u)  1=c otherwise

Note that reducing resolution in parts of low opacity not only helps reduce the size of the mesh to be rendered, but also helps improve the quality of rendering, because errors due to hardware quantization can be avoided (see the discussion in Section 4). Similar rules can be defined to enhance the representation of those field values that correspond to sharp discontinuities of the TF (a mesh representation that is more refined in the proximity of a TF discontinuity can make the visualization of a sharp transition easier).

Progressive Rendering. When progressive rendering is enabled, the system extracts a very coarse mesh from the MT and renders it during highly interactive frames. This mesh is simply extracted by using a uniform LOD and a size threshold. These two parameters are automatically selected by the system by taking into account the ideal frame rate and the rate of redrawing requests per time unit.
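The TF-based threshold above can be sketched as follows; the quadratic damping factor (α(u) · c)² applied below 1/c is our reading of the formula and should be treated as an assumption:

```python
# Sketch of the transfer-function-based threshold. The damping factor
# (alpha * c) ** 2 applied when alpha(u) < 1/c is an assumption for
# illustration; nearly transparent nodes get a much lower refinement
# priority than opaque ones with the same error.

def tau_opacity(eps_u, k, alpha_u, c):
    base = eps_u - k
    if alpha_u >= 1.0 / c:
        return base                       # ordinary uniform-LOD threshold
    return (alpha_u * c) ** 2 * base      # damped for low-opacity nodes

# With c = 10, nodes with average opacity below 0.1 are damped:
print(tau_opacity(0.5, 0.1, 0.8, 10))    # fully weighted
print(tau_opacity(0.5, 0.1, 0.01, 10))   # strongly damped
```

The factor goes to zero quadratically with α(u), so nearly invisible cells are the first to be collapsed when the size budget is tight.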

8 Results

In this section, we show images and performance statistics obtained with the TAn2 system on four sample datasets:

- Fighter (13,832 vertices, 70,125 tetrahedra), an irregular tetrahedral mesh resulting from an air flow simulation over a jet fighter, courtesy of NASA;


Dataset     vertices   tetrahedra   boundary cells   field range    isosurface avg. size
Fighter       13,832       70,125            7,262   [0 .. 3]                     20,625
Bluntfin      40,960      222,528           13,516   [0.19 .. 5]                  10,654
Turbine      106,795      576,576           41,920   [0 .. 230]                   71,118
Plasma64     262,144    1,500,282           47,628   [1.0 .. 1.9]                 53,527

Table 1: Parameters characterizing the datasets used in the experiments: mesh size, size of the surface bounding the mesh, field range, average isosurface size.

Uniform resolution models
error                    Plasma64                  Turbine
(% of field range)   vertices   tetrahedra    vertices   tetrahedra
0.1                   203,804    1,164,361      48,029      233,848
0.5                    54,849      331,867      26,622      117,696
1.0                    21,057      128,456      19,464       84,743
5.0                     1,103        5,982       8,072       37,515
10.0                      258        1,222       5,053       23,779

Table 2: Sizes of meshes at uniform resolution extracted from the multiresolution representations of the Plasma64 and Turbine datasets.

- BluntFin (40,960 vertices, 222,528 tetrahedra), a curvilinear dataset produced by a fluid-flow simulation of an air flow over a blunt fin and a plate, produced and distributed by the NASA Ames Research Center;

- Turbine Blade (106,795 vertices, 576,576 tetrahedra), a curvilinear mesh courtesy of AVS Inc. (tetrahedralized by O. G. Staadt);

- Plasma64 (262,144 vertices, 1,500,282 tetrahedra), a large regular synthetic dataset whose field values represent 3D Perlin noise [PH89].

The above datasets were chosen to cover a wide range of characteristics: Fighter has a very complex boundary with small-scale features (such as thin wings); BluntFin has an extremely uneven cell distribution; Turbine Blade is a rather large dataset, with many degenerate cells (i.e., cells of zero volume); Plasma64 is the largest one, and its field distribution is strongly non-linear (which makes very concise representations hard to obtain). The multiresolution models were built using the CUTE simplification tool. The simplification process is accurate but far from interactive (it took about 55 minutes for the Plasma64 mesh and 21 minutes for the Turbine).

Table 1 shows the statistical parameters characterizing the four datasets. The average size of an isosurface on these datasets was computed by considering nine field values equally distributed in the dataset field range. Meshes at uniform resolution were extracted from the multiresolution representations of the Plasma64 and Turbine datasets according to a constant error threshold. The sizes of such meshes are reported in Table 2, where the error value is measured as a percentage of the field range.

Table 3 contains results of isosurface generation on Plasma64 variable LOD meshes which depend on the field value. When an error threshold ε = 0 is set for all the isosurface cells, we obtain variable LOD meshes whose size spans between 3% (best case) and 58% (worst case) of the original mesh size. Other values of the ε threshold have also been used (ε = 0.1 and ε = 1.0, measured as percentages of the dataset field range), and the corresponding mesh sizes are reported in Table 3. Table 4 contains similar results for the Turbine dataset. It has to be noted that if the dataset contains an isosurface which intersects a large number of mesh cells, then the corresponding variable LOD mesh cannot, by definition, be very concise.
An example is the Turbine variable LOD mesh corresponding to the active isosurface 160.28 and to an error ε = 0 (see Table 4). In such cases, the joint use of a field-based and a spatial domain-based focusing filter can improve the data reduction capabilities. While under the configuration described above (Turbine dataset, ε = 0, isosurface = 160.28) the output is a variable LOD mesh composed of 473K cells, we can easily produce smaller representations (in the range of 80K cells) by also focusing on a subvolume of the domain.


Plasma64, variable resolution models
isosurface      ε = 0                      ε = 0.1                    ε = 1.0
threshold   tetrahedra  isosurf. size  tetrahedra  isosurf. size  tetrahedra  isosurf. size
1.81            51,576            964      33,941            807      14,918            261
1.63           113,826          5,599      93,657          4,744      24,493          1,130
1.45           374,473         34,517     284,783         29,336      53,815          7,904
1.27           878,641        140,135     639,043        119,409      93,253         27,752
1.09           496,578         55,292     383,535         47,828      64,906          9,832

Table 3: Results of generating isosurfaces through the extraction of meshes at variable resolution from the multiresolution representation of the Plasma64 dataset. The error is measured as a percentage of the field range, and depends on the currently selected isosurfaces (it is ≤ ε on tetrahedra contributing to the isosurface cells; any error is allowed on other tetrahedra).

Turbine, variable resolution models
isosurface      ε = 0                      ε = 0.1                    ε = 1.0
threshold   tetrahedra  isosurf. size  tetrahedra  isosurf. size  tetrahedra  isosurf. size
137.38         352,556         73,952     134,901         33,759      40,611         16,296
160.28         473,625        141,978     169,824         67,909      46,048         24,017
183.17         263,311         28,718     100,798         18,081      36,148          8,343
206.07         190,988          8,296      69,604          6,858      28,637          3,167
217.52         108,298          3,001      47,810          2,621      22,486          1,548

Table 4: Results of generating isosurfaces through the extraction of meshes at variable resolution from the multiresolution representation of the Turbine dataset. The error is measured as a percentage of the field range, and depends on the currently selected isosurfaces (it is ≤ ε on tetrahedra contributing to the isosurface cells; any error is allowed on other tetrahedra).

The multiresolution extraction time is in general nearly interactive. Extraction times depend on the size of the current MT model and on the complexity of the requested update. In the case of the update action presented in Figure 10, the variable LOD mesh update time is less than a second (not considering isosurface construction or the update of DVR auxiliary data structures). The comparison of the results presented in Tables 2, 3 and 4 verifies our initial assertion: the availability of tools for refining the mesh representation only in selected regions, according to a criterion based on the field space, allows us to produce very concise meshes that give a very precise representation of the field ranges currently taken into account. The multiresolution approach shows its benefit especially on large datasets and at high resolutions (i.e., low error thresholds), such as the Plasma64 and Turbine datasets. With smaller datasets and/or coarser resolutions, the advantages remain, but are less evident.

Two images demonstrating isosurface rendering from the Plasma64 dataset are shown in Figure 8. Figure 8(a) shows four isosurfaces fit on the Plasma64 mesh at full resolution. Figure 8(b) shows two isosurfaces fit on a variable LOD mesh (with ε = 0 on the tetrahedra which intersect the isosurfaces, and ε rapidly increasing in outer mesh regions). In this case, the number of cells in the mesh is 7% of that of the full-resolution mesh. Images representing spatial-domain focused variable LOD meshes are presented in Figures 9 and 10. A BluntFin variable LOD representation (64k cells out of the original 222k) is shown in Figure 9, and the current position of the 3D manipulation widget used to select the focus region is shown in the magnified rightmost image. In Figure 10 we show the change in mesh resolution when the location of the 3D manipulation widget is changed (on the Fighter dataset).
Finally, images representing the other rendering modalities are shown in Figures 11 and 12. In the former, we show a cross plane extracted from the Turbine dataset (using a variable LOD of 40k cells, ε = 2%). In the latter, we show (a) a DVR image and (b) a hybrid DVR image (integrated isosurfaces plus DVR) on the same Turbine variable LOD mesh.


Figure 8: Two images of the Plasma64 dataset: (a) four isosurfaces obtained from the complete mesh at full resolution; (b) given a TF with two isosurfaces selected, a variable LOD mesh extracted with error threshold ε = 0 on the isosurface cells.

Figure 9: The use of a compact variable LOD mesh produced by a spatial-domain focused extraction allows an efficient visualization of the very small details contained in the focus region. The dataset shown in the images (full and zoomed view) has only 64k cells (i.e., less than 1/3 of the original size), and the volume inside the box has a field error of less than 0.01% of the field range.

9 Conclusions

In this paper, we have described TAn2 (Tetrahedra Analyzer), an interactive system for the visualization of three-dimensional scalar fields. The system supports common rendering techniques for 3D data (isosurface fitting and DVR), allows easy control of the rendering parameters by means of an interactive editor for transfer functions, and can deal with extremely


Figure 10: The images show how well the multiresolution model preserves the fine-grained detail on the mesh boundary (i.e., no self-intersections or strong deviations from the original surface are introduced on the mesh boundary). The meshes shown have about 16k cells, compared with the original 70k.

Figure 11: An image of a Turbine variable LOD mesh (40k cells, ε = 2%) with a cross plane.

Figure 12: Two images of a Turbine variable LOD mesh (40k cells, ε = 2%): a DVR image on the left, and a hybrid DVR image (integrated isosurfaces plus DVR) on the right.

large datasets, thanks to the use of a multiresolution approach. Its predecessor, TAn1 [CMPS97], managed only the extraction of uniform LODs. Conversely, TAn2 fully exploits the power of multiresolution by using meshes at variable LOD, which can significantly reduce the memory and processing load in isosurface rendering, or in dataset exploration through a magnifying glass. This improvement has been made possible by a new multiresolution geometric model, the 3D Multi-Tesselation, that we have developed as a basis for TAn2. A separate algorithm for MT construction has been developed (described in [CCM+00]); in addition, TAn2 is able to display standard single-resolution tetrahedral meshes as well. To our knowledge, TAn2 is the only multiresolution volume visualization system able to deal with irregular and non-convex datasets efficiently. An additional contribution of this paper is a new compressed data structure for encoding a 3D Multi-Tesselation built from a sequence of general edge collapses, which achieves a compression factor of about 6 when compared with the storage cost of the original mesh at high resolution.

Further developments of the work presented here concern the study of data structures for encoding a 3D Multi-Tesselation in secondary storage, and the design of out-of-core algorithms for MT construction and selective refinement. This would allow overcoming the absolute memory limitations of a system, since it would permit managing datasets that are not just too large to be rendered, but too large to be stored at all. In this case, a rendering system would query the multiresolution structure on secondary storage to extract a mesh covering a volume of interest, at a certain (possibly variable) resolution.

References [BDFM95]

M. Bertolotto, L. De Floriani, and P. Marzano. Pyramidal simplicial complexes. In Proceedings 4th International Symposium on Solid Modeling, pages 153–162, Salt Lake City, Utah, U.S.A., May 17-19 1995. ACM Press.

[BE92]

M. Bern and D. Eppstein. Mesh generation and optimal triangulation. In D.-Z. Du and F. Hwang, editors, Computing in Euclidean Geometry, pages 23–90. World Scientific, 1992.

[CCM+ 00]

P. Cignoni, D. Costanza, C. Montani, C. Rocchini, and R. Scopigno. Simplification of tetrahedral volume with accurate error evaluation. Technical report, I.E.I. – C.N.R., Pisa, Italy, March 2000.

24

[CDFM+ 94] P. Cignoni, L. De Floriani, C. Montani, E. Puppo, and R. Scopigno. Multiresolution modeling and rendering of volume data based on simplicial complexes. In Proceedings 1994 Symposium on Volume Visualization, pages 19–26. ACM Press, October 17-18 1994.

[CKM+ 99]

J. Comba, J.T. Klosowski, N. Max, J.S.B. Mitchell, C.T. Silva, and P.L. Williams. Fast polyhedral cell sorting for interactive rendering of unstructured grids. Computer Graphics Forum (Eurographics’99 Conference Issue), 18(3):C369–C376, 1999.

[CMPS97]

P. Cignoni, C. Montani, E. Puppo, and R. Scopigno. Multiresolution modeling and visualization of volume data. IEEE Transactions on Visualization and Computer Graphics, 3(4):352–369, 1997.

[Dee95]

M. Deering. Geometry compression. In Comp. Graph. Proc., Annual Conf. Series (SIGGRAPH ’95), ACM Press, pages 13–20, 1995.

[DFLS00]

L. De Floriani, M. Lee, and H. Samet. Navigating through hierarchical tetrahedral meshes. Technical report, Computer Science Dept., University of Maryland, March 2000.

[DFMMP00] L. De Floriani, P. Magillo, F. Morando, and E. Puppo. Dynamic view-dependent multiresolution on a client-server architecture. CAD Journal, Special Issue on Multiresolution Geometric Models (to appear), 2000.

[DFMP99]

L. De Floriani, P. Magillo, and E. Puppo. Multiresolution representation of shapes based on cell complexes. In Proceedings International Conference on Discrete Geometry for Computer Imagery, Esiee, Noisy-le-Grand, France, 1999.

[DFPM97]

L. De Floriani, E. Puppo, and P. Magillo. A formal approach to multiresolution modeling. In R. Klein, W. Straßer, and R. Rau, editors, Geometric Modeling: Theory and Practice. Springer-Verlag, 1997.

[ESV99]

J. El-Sana and A. Varshney. Generalized view-dependent simplification. Computer Graphics Forum, 18(3):C83–C94, 1999.

[FLT99]

FLTK. Fast Light ToolKit. More info at: http://www.fltk.org, 1999.

[Gar99]

M. Garland. Multiresolution modeling: Survey & future opportunities. In Eurographics ’99 – State of the Art Reports, pages 111–131, 1999.

[GG98]

R. Grosso and G. Greiner. Hierarchical meshes for volume data. In Proceedings CGI’98, Hannover, Germany, June 22-26 1998.

[GGS99]

S. Gumhold, S. Guthe, and W. Straßer. Tetrahedral mesh compression with the cut-border machine. In Proceedings IEEE Visualization’99, pages 51–58. IEEE, 1999.

[GLE97]

R. Grosso, C. Luerig, and T. Ertl. The multilevel finite element method for adaptive mesh optimization and visualization of volume data. In IEEE Visualization ’97, pages 387–394, Phoenix, AZ, October 19-24 1997.

[GS98]

M.H. Gross and O.G. Staadt. Progressive tetrahedralizations. In Proceedings IEEE Visualization’98, pages 397–402, Research Triangle Park, NC, 1998. IEEE Comp. Soc. Press.

[GTLH98]

A. Guéziec, G. Taubin, F. Lazarus, and W. Horn. Simplicial maps for progressive transmission of polygonal surfaces. In Proceedings ACM VRML98, pages 25–31, 1998.

[HC94]

B. Hamann and J.L. Chen. Data point selection for piecewise trilinear approximation. Computer Aided Geometric Design, 11:477–489, 1994.

[HG97]

P. Heckbert and M. Garland. Survey of surface simplification algorithms. ACM SIGGRAPH ’97 Course Notes, 1997.

[Hop96]

H. Hoppe. Progressive meshes. In ACM Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH ’96), pages 99–108, 1996.


[Hop97]

H. Hoppe. View-dependent refinement of progressive meshes. In ACM Computer Graphics Proceedings, Annual Conference Series, (SIGGRAPH ’97), pages 189–198, 1997.

[LE97]

D. Luebke and C. Erikson. View-dependent simplification of arbitrary polygonal environments. In ACM Computer Graphics Proceedings, Annual Conference Series, (SIGGRAPH ’97), pages 199–207, 1997.

[Mag00]

P. Magillo. The MT (Multi-Tesselation) package. DISI, University of Genova, Italy, http://www.disi.unige.it/person/MagilloP/MT/index.html, January 2000.

[MMS97]

A. Maheshwari, P. Morin, and J.-R. Sack. Progressive TINs: Algorithms and applications. In Proceedings 5th ACM Workshop on Advances in Geographic Information Systems, Las Vegas, 1997.

[OR99]

M. Ohlberger and M. Rumpf. Adaptive projection operators in multiresolution scientific visualization. IEEE Transactions on Visualization and Computer Graphics, 5(1):74–93, 1999.

[PH89]

Ken Perlin and Eric M. Hoffert. Hypertexture. Computer Graphics (SIGGRAPH ’89 Proceedings), 23(3):253–262, July 1989.

[PH97]

J. Popovic and H. Hoppe. Progressive simplicial complexes. In ACM Computer Graphics Proceedings, Annual Conference Series, (SIGGRAPH ’97), pages 217–224, 1997.

[PRS99]

R. Pajarola, J. Rossignac, and A. Szymczak. Implant Sprays: Compression of progressive tetrahedral mesh connectivity. In Proceedings IEEE Visualization’99, pages 299–305. IEEE Comp. Soc. Press, 1999.

[PS97]

E. Puppo and R. Scopigno. Simplification, LOD, and multiresolution - principles and applications. In Eurographics ’97 Tutorial Notes, 1997.

[Pup96]

E. Puppo. Variable resolution terrain surfaces. In Proceedings Eighth Canadian Conference on Computational Geometry, pages 202–210, Ottawa, Canada, August 12-15 1996. Extended version appeared as Variable Resolution Triangulations, Computational Geometry Theory and Applications, 1998, 11(3-4):219-238.

[RO96]

K.J. Renze and J.H. Oliver. Generalized unstructured decimation. IEEE C.G.&A., 16(6):24–32, 1996.

[RS89]

J. Ruppert and R. Seidel. On the difficulty of tetrahedralizing 3-dimensional non-convex polyhedra. In Proceedings 5th ACM Symposium on Computational Geometry, pages 380–392, 1989.

[ST90]

P. Shirley and A. Tuchman. A polygonal approximation to direct scalar volume rendering. Computer Graphics (San Diego Workshop on Volume Visualization), 24(5):63–70, November 1990.

[THJ99]

I.J. Trotts, B. Hamann, and K.I. Joy. Simplification of tetrahedral meshes with error bounds. IEEE Transactions on Visualization and Computer Graphics, 5(3):224–237, 1999.

[Wil92a]

P.L. Williams. Interactive splatting of nonrectilinear volumes. In A.E. Kaufman and G.M. Nielson, editors, Visualization ’92 Proceedings, pages 37–45. IEEE Computer Society Press, 1992.

[Wil92b]

P.L. Williams. Visibility ordering of meshed polyhedra. ACM Transaction on Graphics, 11(2):103–126, April 1992.

[WvG94]

J. Wilhelms and A. van Gelder. Multi-dimensional trees for controlled volume rendering and compression. In Proceedings 1994 Symposium on Volume Visualization, pages 27–34. ACM Press, October 17-18 1994.

[XESV97]

J.C. Xia, J. El-Sana, and A. Varshney. Adaptive real-time level-of-detail-based rendering for polygonal models. IEEE Transactions on Visualization and Computer Graphics, 3(2):171–183, 1997.

[ZCK97]

Y. Zhou, B. Chen, and A. Kaufman. Multiresolution tetrahedral framework for visualizing regular volume data. In Proceedings IEEE Visualization’97, pages 135–142. IEEE Press, 1997.

