Wavelet-Based Representation of Uniform Multi-Component Data

Michel A. Westenberg, Thomas Ertl
Inst. for Visualization and Interactive Systems, University of Stuttgart
Universitätsstraße 38, 70569 Stuttgart, Germany
Email: {westenberg, ertl}@vis.uni-stuttgart.de
Abstract

We propose a wavelet-based hierarchical data structure for 2-D and 3-D multi-component data on a uniform grid. We distinguish two phases: (i) construction and pruning of the wavelet hierarchy, resulting in a block-wise representation of the data, where the resolution of each block is adapted to its content; (ii) visualization, based on a linearly interpolating reconstruction filter. We perform experiments with 2-D time-dependent flow data and a 3-D multi-field data set containing vector components and several scalar components of a simulation of hurricane Isabel. For the various data sets, our method approximates the data using between 10% and 30% of the wavelet coefficients. We also show how to obtain gradients directly from the hierarchical representation, without the need to explicitly reconstruct the data first. Our method is not specific to the data we have used for the experiments, and it can be extended to higher dimensions or to other data types, such as tensors, in a straightforward manner.
1 Introduction
Multiscale and hierarchical data representations have gained much interest over the past years, due to the ever increasing data set sizes. Hierarchical data structures provide a way to build a representation at scales appropriate for the size of features present in the data. As coarser scales usually take up less storage space, hierarchical data structures also provide the means for compression. In this paper we propose a wavelet-based hierarchical data structure for 2-D and 3-D multicomponent data sets. We distinguish two phases in our method: (i) construction of the wavelet hierarchy and (ii) reconstruction and visualization. In the first phase, we store multi-component wavelet
coefficients in a block-wise manner. The hierarchy is then pruned according to an error criterion. The result of this phase is a block-wise representation of the data, where the resolution of each block is adapted to its content. The blocks exist as wavelet approximation coefficients, which are used by the visualization in the second phase. Our method differs from and expands upon existing approaches in the following aspects: (i) We use a block-wise decomposition of the wavelet transform, which suffers less from blocking artifacts than a block-wise wavelet transform. (ii) Previous approaches have applied Haar wavelets on RGB data. We use a wavelet with a linearly interpolating scaling function, thereby going beyond Haar wavelets. Furthermore, our method is suitable for multifield data with an arbitrary number of components, and it is not limited to integer-type data. (iii) We have a flexible error computation mechanism that allows us to choose task-specific error measures, which are, for instance, specialized for vector fields or feature preservation. (iv) The preprocessing phase is non-interactive, as with many hierarchical representations. However, visualization is fast due to linear interpolation of wavelet coefficients. There are two ways to construct a wavelet transform for multi-component data. One is to apply a standard scalar wavelet transform to each component, and the other is to use a wavelet transform based on multiwavelets [19]. Multiwavelets expand a scalar function by several scaling functions and wavelet functions rather than by a single pair. In principle, they can be used directly to construct a multi-component wavelet transform. However, researchers have only attempted to design multiwavelet filters for two-component data. The main problem is a poor signal processing performance, which has been investigated extensively in [2], where a (partial) solution to this problem for two-component vectors was also proposed. The
wavelet filters that were proposed in that paper have been applied to 2-D vector field compression [7], and have also been used for denoising 2-D vector-valued data [16]. Due to the current limitations of multiwavelets, we use a component-wise scalar wavelet transform. The remainder of this paper is organized as follows. Related work is discussed in Section 2. We introduce the wavelet-based hierarchical data structure in Section 3, and discuss integration into visualization methods in Section 4. Experimental results are given in Section 5. Section 6 concludes the paper and discusses open issues and possible future work.
2 Related Work

Wavelets provide an excellent mathematical framework for systematic decomposition of data into versions at different levels of detail. By suppressing small wavelet coefficients corresponding to noise or uninteresting details, a much sparser data representation is obtained. In visualization, wavelets have been used mostly for scalar fields, especially in the area of volume rendering. Examples are wavelet raycasting [8, 18], wavelet splatting [4, 17], and texture-based volume rendering [5]. The wavelet splatting method by Lippert et al. [10] can deal with RGB volumetric data. They apply a global wavelet transform to the three components, and compress the resulting wavelet coefficients independently in each channel. This makes it difficult to apply the method to flow data, where it is necessary to maintain the coupling of the vector components. Bajaj et al. [1] have proposed a wavelet-based RGB volumetric compression scheme. They partition the volume data into blocks, which in turn are subdivided into cells. These cells are decomposed into wavelet coefficients by the application of a transform with a Haar wavelet. The hierarchy thus obtained is then compressed by a sophisticated encoding and quantization scheme. The compression ratios are good, but the method is strongly tailored towards 8-bit data.

There is not much prior work on the application of wavelets in the context of flow visualization. Nielson et al. [12] have used wavelets in combination with visualization of flow over a sphere. In that paper and in a subsequent one [13], the effect of different grid resolutions on the flow field topology is studied. Very recent work concerns compression of 2-D time-varying vector fields [7]. The purpose of that work is to compress very large geoscience data sets; however, it deals solely with compression aspects. It is also not possible to extract approximations at different levels of detail from the compressed representation; rather, each time step has to be decompressed to its original full resolution size.

Clustering methods appear to be more popular for flow visualization. Some of the proposed methods deal with vector field simplification, i.e., the aim is to reduce visual clutter of complex flow patterns. Telea and Van Wijk [14] use a bottom-up clustering approach, in which a vector similarity function is used to decide which clusters can be merged. In this way, the vector data can be visualized at different levels of detail by choosing a desired simplification level. Heckel et al. [6] also use clustering, in a top-down approach. Splitting of clusters is controlled by an error measure that expresses the deviation between a streamline calculated in the simplified data and the same streamline calculated in the original data. The goal here is also to construct a simplified visualization of the vector field. More recently, a continuous clustering approach was proposed [3]. Lodha et al. [11] present a bottom-up clustering approach for compressing 2-D vector fields. They define an error measure that explicitly takes the topology into account. The method is only concerned with visualization of critical point topology, and it is not clear if the multiscale representation can also be used for dense vector field visualization. Another topology-preserving method has been proposed that is based on triangle mesh simplification [15]. The algorithm ensures that triangles are only merged when the global topology is not changed.
3 Multiscale Representation

The discrete wavelet transform is the expansion of a function $f(x)$ with respect to a scaling function $\phi$ with an associated wavelet $\psi$. The wavelet representation for a decomposition with $M$ scale levels is given by

$$f(x) = \sum_k c^M_k \phi_{M,k}(x) + \sum_{j=1}^{M} \sum_k d^j_k \psi_{j,k}(x), \quad (1)$$

where $j$ denotes scale and $k$ denotes translation. The coefficients $c^M_k$ and $d^j_k$ are called approximation coefficients and detail coefficients, respectively.
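To illustrate the decomposition in (1) on a concrete signal, here is a minimal Python sketch using the PyWavelets package; to our knowledge its 'bior2.2' family corresponds, up to normalization, to the 5/3 filters adopted below, but the wavelet name, the test signal, and the choice M = 3 are assumptions for illustration only.

```python
import numpy as np
import pywt  # PyWavelets

signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 256))  # one data component

# wavedec returns [c^M, d^M, ..., d^1]: the approximation coefficients at
# the coarsest level M, followed by detail coefficients per level, cf. (1)
coeffs = pywt.wavedec(signal, 'bior2.2', level=3, mode='symmetric')

# reconstruction from the approximation coefficients alone: zero out all
# detail coefficients d^j and run the inverse transform
approx_only = [coeffs[0]] + [np.zeros_like(d) for d in coeffs[1:]]
smooth = pywt.waverec(approx_only, 'bior2.2', mode='symmetric')
```

A multi-component field is handled in the same way by applying this scalar transform to each component separately, which is the component-wise approach motivated in Section 1.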
The extension of the 1-D wavelet transform to higher dimensions is done by taking tensor products of the basis functions. In 3-D, we obtain approximation coefficients $c^M_{k,l,m}$ and seven sets of detail coefficients $d^{j,\tau}_{k,l,m}$, where $\tau$ is an index. Before we introduce the hierarchical data structure, we need to choose a wavelet basis. This basis should satisfy the following criteria. First, the analysis and synthesis filters should be short for computational efficiency and to enable fast reconstruction from approximation coefficients. Second, the filters should be symmetric to preserve the phase of the signal. Finally, we wish to adopt a block-wise encoding scheme; therefore, it should be easy to interpolate between neighboring blocks of different resolution. The first non-trivial wavelet that satisfies these criteria is the bi-orthogonal LeGall 5/3 wavelet [9]. Note that orthogonal wavelets cannot satisfy the first and third criterion at the same time (an exception is the trivial Haar wavelet). The standard practice is to normalize the coefficients by a factor of $1/\sqrt{2}$ to preserve the energy of the signal. A very nice property of this wavelet is that the reconstruction filter performs linear interpolation. It will become clear later why this property is useful.

Figure 1: Hierarchical data structure. This example shows how a three-level 2-D wavelet representation is converted into a 4 × 4 block-wise representation. At each scale level, the block size is halved in both dimensions. In this configuration, a block always represents the same data at different scales.

The construction process starts by choosing an initial block size $N_b$ with respect to the full resolution data, giving rise to blocks of size $N_b \times N_b$ and $N_b \times N_b \times N_b$ in 2-D and 3-D, respectively. Then, a one-level global scalar wavelet transform is applied to the components. The resulting multi-component approximation coefficients $c^1$ and detail coefficients $d^{1,\tau}$ are then split into blocks of size $(N_b/2) \times (N_b/2)$ or $(N_b/2) \times (N_b/2) \times (N_b/2)$. The next step is to determine the approximation error of each block when it is reconstructed from the approximation coefficients only. This yields an approximation of the block at the same spatial extents as the block at full resolution, so that the error can be computed in a meaningful way. When this error is equal to zero, the detail coefficients for that block are not needed, since the entire signal can be captured by the approximation coefficients. Since it should be possible to reconstruct blocks independently of each other, some border approximation coefficients of neighboring blocks have to be stored explicitly in the hierarchy. For the 2-D case, these are the top and left border coefficients. This introduces only a minor storage overhead when the block size is not chosen too small. After all these steps, level one of the hierarchy has been obtained, and the process continues by applying the above steps to the approximation coefficients $c^1$, resulting in level two of the hierarchy. The process ends when the block size measured along one dimension becomes smaller than the filter length.
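For reference, the following sketch implements one analysis and one synthesis level of the LeGall 5/3 wavelet in lifting form, including the $1/\sqrt{2}$ normalization mentioned above. It is a simplified illustration: the periodic boundary handling via np.roll stands in for the symmetric extension one would use in practice, and the function names are ours.

```python
import numpy as np

def legall53_forward(x):
    """One analysis level of the LeGall 5/3 wavelet via lifting
    (x is assumed to have even length; periodic borders for brevity)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    # predict: detail = odd sample minus linear prediction from neighbors
    d = odd - 0.5 * (even + np.roll(even, -1))
    # update: approximation = even sample plus smoothing correction
    c = even + 0.25 * (np.roll(d, 1) + d)
    # energy-preserving normalization (factor sqrt(2) per level, Section 3)
    return np.sqrt(2.0) * c, d / np.sqrt(2.0)

def legall53_inverse(c, d):
    """Inverse transform; with d == 0 the synthesis reduces to linear
    interpolation (upsampling) of the approximation coefficients."""
    c, d = c / np.sqrt(2.0), d * np.sqrt(2.0)
    even = c - 0.25 * (np.roll(d, 1) + d)
    odd = d + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out
```

Setting the detail coefficients to zero in legall53_inverse makes the synthesis step interpolate linearly between approximation coefficients, which is exactly the property the visualization phase in Section 4 exploits.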
Ultimately, we obtain a hierarchical representation in which the number of blocks is kept the same at each scale level. The motivation for this approach is that a block can always represent the same data, albeit at different scales. Note that the hierarchy stores these data as wavelet coefficients, so an inverse wavelet transform of the block is required to actually obtain the data. Figure 1 shows a 2-D example of the resulting data structure for a three-level wavelet decomposition and a 4 × 4 block configuration.
We would like to emphasize that our block decomposition is different from a block-wise wavelet transform, which would start by splitting the input data into blocks that are then transformed independently. The problem with such an approach is that it introduces discontinuity artifacts at the boundaries of the blocks, which are especially strong between blocks of different resolutions, but also present between blocks of the same resolution. Even though such artifacts can be reduced, to some extent, by symmetric extension techniques, i.e., mirroring of the coefficients at the borders, or by letting the blocks overlap partially, it is not possible to remove the artifacts altogether. In our approach, these discontinuities do not occur when two neighboring blocks have the same resolution, and they are small when the blocks have different resolutions.
Figure 2: (a) Input vector data. (b) LIC rendering of the data. (c) Image of the vorticity scaled linearly between -0.4 and 0.4 to the full grey level range. (d) Example approximation. Areas of relatively constant flow are represented at a coarser scale than the vortex core and its direct surroundings.
The final step of the construction process is a pruning phase. For each block, the coarsest scale level that still satisfies a given maximum error criterion is selected, and the block is reconstructed by an inverse wavelet transform up to that level. The result of this process is a block-wise representation of the data, where the resolution of each block is adapted to its content. After this step, the wavelet hierarchy itself is not needed any more, and we will continue to use only this block-wise approximation of the data. In some sense, the pruning phase can also be considered as a feature detection phase. Since the high-pass filter of the wavelet transform works as a kind of gradient operator, blocks that contain large gradients require a finer resolution than other blocks. In this way, the wavelet representation gives direct indications where interesting (gradient-based) features are found.

Figure 2 shows an example of a block-wise approximation of 2-D flow data. The input vector data are shown in Fig. 2(a), and a corresponding line integral convolution (LIC) rendering in Fig. 2(b). Fig. 2(c) shows an image of the vorticity, which was scaled linearly between −0.4 and 0.4 to the full grey scale range. The vorticity $\omega$, or velocity curl, represents the amount of rotation in the flow, and is defined for each vector $v_{k,l}$ as the cross product of the gradient operator and the vector $v_{k,l}$:

$$\omega = \nabla \times v_{k,l}. \quad (2)$$

In 2-D, this equation yields a scalar value. In the image, black corresponds to a counter-clockwise rotation, and white corresponds to a clockwise rotation. Finally, Fig. 2(d) contains an example of what an approximation might look like. We see that in areas of relatively constant flow, the representation is coarser than in the areas with large vorticity values, for instance. This reflects the fact that the high-pass filter of the wavelet transform works as a kind of gradient operator. The block size here was chosen as $N_b = 16$, which means that the finest scale blocks consist of 16 × 16 coefficients, and the coarsest scale blocks consist of 2 × 2 coefficients.

A final issue is the choice of an appropriate error criterion, which is largely dependent on the application. If energy preservation is important, the mean square error is an acceptable choice [1]. The maximum absolute error could be chosen when the introduction of false edges should be prevented. When dealing with fluid flow data, an error measure that combines angular error and vector magnitude error, or the similarity function introduced by Telea and Van Wijk [14], can be more appropriate. If flow topology preservation is important, a penalty term for changing the topology could be taken into account [11] as well. For arbitrary multifield data, it is not really clear what constitutes a good error measure. This poses interesting questions for future research.
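To make the choice concrete, the sketch below combines an angular error and a relative magnitude error for 2-D vectors, and shows how the pruning phase selects the coarsest acceptable scale level. It is a minimal illustration in the spirit of the measure used in Section 5; the function names and the exact combination are ours, not prescribed by the paper.

```python
import numpy as np

def flow_error(orig, approx, eps=1e-12):
    """Maximum angular error (radians) plus maximum magnitude error
    relative to the longest vector; orig, approx have shape (..., 2)."""
    dot = np.sum(orig * approx, axis=-1)
    n_o = np.linalg.norm(orig, axis=-1)
    n_a = np.linalg.norm(approx, axis=-1)
    cos = np.clip(dot / np.maximum(n_o * n_a, eps), -1.0, 1.0)
    ang_err = np.max(np.arccos(cos))
    mag_err = np.max(np.abs(n_o - n_a)) / max(np.max(n_o), eps)
    return ang_err + mag_err

def select_level(recon_per_level, orig_block, e_max):
    """Pruning: pick the coarsest level whose reconstruction from the
    approximation coefficients still satisfies the error threshold;
    recon_per_level[j] is the block reconstructed from level j."""
    for j in reversed(range(len(recon_per_level))):  # try coarsest first
        if flow_error(orig_block, recon_per_level[j]) <= e_max:
            return j
    return 0  # fall back to full resolution
```

With the threshold used in Section 5, a block would be kept at the coarsest level j for which select_level succeeds with e_max = 0.002.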
4 Visualization
The result of the pruning phase is a collection of blocks containing approximation coefficients, in which the scale of each block is adapted to its content. For visualization, it should be possible to extract a data point at an arbitrary position with respect to the full resolution grid. Since we have chosen the LeGall wavelet, this can be implemented efficiently. Mathematically, the vector $v_{x,y,z}$ at position $(x,y,z)$ is obtained in the following way. First, determine in which block the interpolation needs to be performed. Let the coordinates of this block be $(b_x, b_y, b_z)$ with $b_x = \lfloor x/N_b \rfloor$, $b_y = \lfloor y/N_b \rfloor$, and $b_z = \lfloor z/N_b \rfloor$. Within the block, we now need the vector at position $(\hat{x}, \hat{y}, \hat{z})$, with $\hat{x} = x - b_x N_b$, $\hat{y} = y - b_y N_b$, and $\hat{z} = z - b_z N_b$. In general, this position does not coincide with a grid point; therefore, we need to perform interpolation to obtain the final solution $v_{x,y,z}$. Fortunately, this can be done by a trilinear interpolation, since the scaling function is a linearly interpolating function. As the interpolation is performed on the scaling coefficients $c^j_{k,l,m}$, normalization by a factor of $2^{-3j/2}$ and scaling of the coordinates by a scale factor of $2^{-j}$ is necessary:

$$
\begin{aligned}
v_{x,y,z} = 2^{-3j/2} \big( \; & (1-\alpha)(1-\beta)(1-\gamma)\, c^j_{\lfloor 2^{-j}\hat{x} \rfloor,\, \lfloor 2^{-j}\hat{y} \rfloor,\, \lfloor 2^{-j}\hat{z} \rfloor} \\
+ \; & \alpha(1-\beta)(1-\gamma)\, c^j_{\lceil 2^{-j}\hat{x} \rceil,\, \lfloor 2^{-j}\hat{y} \rfloor,\, \lfloor 2^{-j}\hat{z} \rfloor} \\
+ \; & (1-\alpha)\beta(1-\gamma)\, c^j_{\lfloor 2^{-j}\hat{x} \rfloor,\, \lceil 2^{-j}\hat{y} \rceil,\, \lfloor 2^{-j}\hat{z} \rfloor} \\
+ \; & \alpha\beta(1-\gamma)\, c^j_{\lceil 2^{-j}\hat{x} \rceil,\, \lceil 2^{-j}\hat{y} \rceil,\, \lfloor 2^{-j}\hat{z} \rfloor} \\
+ \; & (1-\alpha)(1-\beta)\gamma\, c^j_{\lfloor 2^{-j}\hat{x} \rfloor,\, \lfloor 2^{-j}\hat{y} \rfloor,\, \lceil 2^{-j}\hat{z} \rceil} \\
+ \; & \alpha(1-\beta)\gamma\, c^j_{\lceil 2^{-j}\hat{x} \rceil,\, \lfloor 2^{-j}\hat{y} \rfloor,\, \lceil 2^{-j}\hat{z} \rceil} \\
+ \; & (1-\alpha)\beta\gamma\, c^j_{\lfloor 2^{-j}\hat{x} \rfloor,\, \lceil 2^{-j}\hat{y} \rceil,\, \lceil 2^{-j}\hat{z} \rceil} \\
+ \; & \alpha\beta\gamma\, c^j_{\lceil 2^{-j}\hat{x} \rceil,\, \lceil 2^{-j}\hat{y} \rceil,\, \lceil 2^{-j}\hat{z} \rceil} \big), \quad (3)
\end{aligned}
$$

where the factor $\alpha$ is given by $\alpha = 2^{-j}\hat{x} - \lfloor 2^{-j}\hat{x} \rfloor$, and the factors $\beta$ and $\gamma$ are defined similarly for $\hat{y}$ and $\hat{z}$, respectively. In 2-D, the equations above reduce to a bilinear interpolation.

It is now straightforward to apply any standard visualization technique for scalar or vector field visualization. Gradient-based feature detection methods can also be implemented efficiently, because of the linearity of the gradient operator. Loosely formulated, this allows us to combine the gradient computation and the interpolation needed to reconstruct the approximation coefficients. For the 2-D case, we can derive the following solution. Given approximation coefficients $c^j_{k,l}$ at scale $j$ and positions $(2^j k, 2^j l)$ in the full resolution grid, the gradient can be computed by central differences:

$$\nabla v_{2^j k+h_1,\, 2^j l+h_2} = (\nabla_x, \nabla_y), \qquad
\nabla_x = \tfrac{1}{2}\big( v_{2^j k+h_1+1,\, 2^j l+h_2} - v_{2^j k+h_1-1,\, 2^j l+h_2} \big), \qquad
\nabla_y = \tfrac{1}{2}\big( v_{2^j k+h_1,\, 2^j l+h_2+1} - v_{2^j k+h_1,\, 2^j l+h_2-1} \big), \quad (4)$$

where $h_1, h_2 \in [0, 2^j - 1]$. The case $h_1 = 0$ and $h_2 = 0$ is special, because the gradient can be computed directly from the approximation coefficients without interpolation, cf. (3), and (4) simplifies to

$$\nabla v_{2^j k,\, 2^j l} = 2^{-2j-1} \big( c^j_{k+1,l} - c^j_{k-1,l},\; c^j_{k,l+1} - c^j_{k,l-1} \big). \quad (5)$$

The case $h_1, h_2 \in [1, 2^j - 1]$ requires interpolation, and from the interpolation equation (3), we can derive that the gradient is obtained as follows:

$$\nabla v_{2^j k+h_1,\, 2^j l+h_2} = 2^{-2j} (\nabla_x, \nabla_y), \qquad
\begin{aligned}
\nabla_x &= (1-\beta)\big( c^j_{k+1,l} - c^j_{k,l} \big) + \beta\big( c^j_{k+1,l+1} - c^j_{k,l+1} \big), \\
\nabla_y &= (1-\alpha)\big( c^j_{k,l+1} - c^j_{k,l} \big) + \alpha\big( c^j_{k+1,l+1} - c^j_{k+1,l} \big),
\end{aligned} \quad (6)$$

with $\alpha = 2^{-j} h_1$ and $\beta = 2^{-j} h_2$. Gradient interpolation at the borders of the block is straightforward when a neighboring block is at the same scale. If this is not the case, the border coefficients of the coarser block need to be interpolated first, to bring them into spatial agreement with the border coefficients of the neighbor. We have also derived the equations for the 3-D case, but have chosen not to include them here to save space. Basically, the scaling factor changes, (5) gets an extra term for the z-component, and linear interpolation becomes bilinear interpolation in (6).
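As an illustration, the following sketch implements the 2-D analogues of Eqs. (3), (5), and (6) for coefficients stored as NumPy arrays. The array layout, the $2^{-j}$ normalization for 2-D, and all names are our own assumptions; border handling between neighboring blocks is omitted.

```python
import numpy as np

NB = 16  # block size with respect to the full-resolution grid

def reconstruct_point(c, j, xh, yh):
    """2-D analogue of Eq. (3): bilinear interpolation of the level-j
    approximation coefficients c (shape: rows x cols [x components])
    at the in-block position (xh, yh), given in full-resolution units."""
    u, w = xh / 2**j, yh / 2**j                    # coefficient coordinates
    k, l = int(np.floor(u)), int(np.floor(w))
    a, b = u - k, w - l                            # alpha, beta of Eq. (3)
    v = ((1 - a) * (1 - b) * c[l, k] + a * (1 - b) * c[l, k + 1]
         + (1 - a) * b * c[l + 1, k] + a * b * c[l + 1, k + 1])
    return 2.0 ** (-j) * v                         # undo 2-D normalization

def gradient(c, j, k, l, h1, h2):
    """Gradient at full-resolution position (2**j*k + h1, 2**j*l + h2):
    Eq. (5) on coefficient positions, Eq. (6) in between."""
    if h1 == 0 and h2 == 0:                        # Eq. (5): central differences
        s = 2.0 ** (-2 * j - 1)
        return s * (c[l, k + 1] - c[l, k - 1]), s * (c[l + 1, k] - c[l - 1, k])
    a, b = h1 / 2**j, h2 / 2**j                    # Eq. (6): differentiate (3)
    s = 2.0 ** (-2 * j)
    gx = s * ((1 - b) * (c[l, k + 1] - c[l, k])
              + b * (c[l + 1, k + 1] - c[l + 1, k]))
    gy = s * ((1 - a) * (c[l + 1, k] - c[l, k])
              + a * (c[l + 1, k + 1] - c[l, k + 1]))
    return gx, gy

def vorticity(c, j, k, l, h1, h2):
    """Eq. (2) for 2-D vectors stored in the last axis of c:
    the curl reduces to the scalar dv_y/dx - dv_x/dy."""
    gx, gy = gradient(c, j, k, l, h1, h2)
    return gx[1] - gy[0]
```

This operates on the coefficients directly; it corresponds to the computation that Section 5 times when it reports vorticity evaluated directly from the block-wise representation.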
5 Experimental Results

The experiments were carried out on an AMD Athlon XP 1800+ CPU running at 1.5 GHz. The block size was chosen as $N_b = 16$, and the depth of the wavelet transform is therefore fixed to three. A smaller block size would be better able to localize features, but introduces a larger memory overhead, due to the overlap at the borders. A larger block size suffers less from memory overhead, but is worse at localizing features; as a result, the chance that blocks need to be represented at finer scales increases. A block size of 16, therefore, is a compromise between storage overhead and the ability to localize details in the data effectively.

The first test data set is a time-dependent flow and consists of a simulation of flow around a half cylinder in 2-D. The simulation was performed on a grid of 1024 × 256 and consists of 91 time steps from t = 0 to t = 9. The complete data set size is 273 MB. As an error criterion for block refinement, we computed the maximum angular error in radians and the maximum vector magnitude error relative to the longest vector in the original data. These two numbers were added to yield a single relative error value. As error threshold, we chose the maximum approximation error $E_{max} = 0.002$. The entire data set can be approximated with about 15% of the total amount of wavelet coefficients, which corresponds to about 35 MB of vector data. We computed the critical points of the data set, and verified that they remain in place or shift only slightly within a grid cell. The vorticity magnitude was computed directly from the block-wise representation by making use of Eq. (6). The resulting image at a resolution of 1024 × 256 is shown in Fig. 3(a). The vorticity was scaled linearly between −0.4 and 0.4 to the full grey level range, so that black and white correspond to counter-clockwise and clockwise rotation, respectively. The blocks are shown in Fig. 3(b), with their scale level encoded as a grey value: black corresponds to level 0, i.e., the full resolution data, white to level 3, and intermediate grey values to the remaining levels. Finally, Fig. 3(c) shows the absolute grey value error between the vorticity computed at full resolution and the approximated vorticity.
Figure 3: (a) The vorticity at t = 4.8 scaled linearly between -0.4 and 0.4 to the full grey level range. Black and white correspond to counter-clockwise and clockwise rotation, respectively. Only 15% of the wavelet coefficients were used for the approximation. (b) Grey-value encoded scale levels of the approximation. Black corresponds to full resolution data, dark grey to an approximation at scale level 1, and white to an approximation at the coarsest scale level 3. (c) Absolute grey value errors between the vorticity computed at full resolution and the approximated vorticity. The contrast of the image is enhanced to make the differences visible. The total number of different pixels is less than 1%, and the majority of errors concerns only one grey level. The maximum absolute grey value difference is equal to three.
The contrast of the image is enhanced so that the differences are actually visible. The maximum absolute grey value difference is equal to three, a difference that is hardly visible. The total number of different pixels is less than 1% in both cases, and the majority of errors concerns only one grey level.

The construction of the hierarchy for all 91 time steps of the 2-D velocity field takes less than one minute. The pruning phase of one time step consumes on average about 30 ms. The average vorticity computation time per time step was 44 ms, when computed according to Eqs. (5) and (6). This is slightly faster than performing the same computation on the full size reconstruction (on average 55 ms). However, when most of the data can be represented at a coarse scale, the vorticity computation directly within the hierarchical representation is about 50% faster than on the full size reconstruction.

The second test data set is a simulation of hurricane Isabel. We selected one time step, and constructed a multi-field data set containing the wind vectors, the clouds, the temperature, and the pressure fields. The result is a 6-component floating point data set of size 500 × 500 × 100 (572 MB). We constructed a wavelet tree, and used a maximum angular error criterion (in degrees) applied only to the wind vectors, i.e., the other components were not taken into account for the error calculation. The motivation behind this approach is that a visualization of the data in the vicinity of the hurricane requires more detail than in other regions. The angular error criterion is one way of achieving such an effect. When $E_{max} = 10$, we obtain an approximation with 32% of the total amount of wavelet coefficients. A visualization combining the several components is shown in Fig. 4. The wind vectors were rendered using 2-D LIC in the plane z = 7. The temperature is shown on the slice y = 256, and was mapped to a blue-red colormap over the full range of −83 to 32 degrees centigrade. The pressure is shown on the slice x = 207, and was mapped to a colormap that maps the entire range of −5000 to 3000 Pascal onto the full hue range. Finally, the values of the cloud data, ranging between 0 and 0.002, were mapped to 8 bits, and rendered by volume rendering. The images are qualitatively very similar, and the only visible differences appear in the fine-scale details of the clouds. This happens mostly at high altitudes, where the wind changes appear not to be very strong. In such areas, the approximation is coarser, which explains the loss of detail in the clouds. Figure 5 shows this effect in a close-up view of only the clouds and the pressure. The left image was rendered with 32% of the wavelet coefficients, and the right image was rendered with the full resolution data. If the maximum error is increased to $E_{max} = 15$, the data can be approximated with 21% of the wavelet coefficients. The effect on a 2-D LIC rendering in the slice z = 7 is minimal, see Fig. 6. The resolution of these images is 512 × 512.

The construction of the hierarchy for the Isabel data set and subsequent pruning takes about five minutes. As a performance test, we computed LIC images along the z-axis over all 100 slices. This is done in two steps: (i) reconstruction of the slice with the wind vectors from the hierarchy; (ii) LIC computation. The average slice reconstruction time was about 75 ms, which is fairly interactive. In comparison, the extraction of a slice from the full resolution data cost about 15 ms. Considering that we have to perform a trilinear interpolation to reconstruct a value, a slowdown by a factor of five is reasonable. The LIC computation time is much longer, since our basic implementation of LIC is not very efficient: it takes about 1100 ms to compute a 512 × 512 image. The average reconstruction time of a single scalar component over the same range of z-slices was about 48 ms, so slicing the temperature and pressure can be done interactively.
6 Conclusions and Future Work

We have introduced a wavelet-based data structure for the representation of multi-component data sets. The data structure supports local level-of-detail control in a block-wise fashion. We have tested our method on different types of data sets, and have shown that the data can be approximated at coarser resolutions with acceptable errors. Our method has several advantages over other techniques. The block-wise decomposition of the wavelet transform does not suffer strongly from blocking artifacts. We go beyond Haar wavelets and RGB integer-type data, and use linearly interpolating wavelets on multifield data with an arbitrary number of components. The error computation mechanism is flexible, and it allows us to choose task-specific error measures. An important contribution of our paper is that we can handle arbitrary multifield data, whereas other methods are tailored towards either scalar data or vector data.

One aspect of our work which still raises concern is compression. Our wavelet tree stores blocks containing detail coefficients completely. Consequently, it consumes at least as much memory as the original data. We are currently investigating vector quantization techniques [7], and have obtained preliminary results in 2-D that show encouraging compression factors. We are also investigating the possibility
of computing other types of features directly within the hierarchical representation. Furthermore, we consider GPU-based reconstruction of the approximation coefficients to achieve better performance. Finally, we plan to make an extensive comparison with existing hierarchical compression techniques when we have successfully incorporated vector quantization. This comparison will address both approximation quality and computational performance in a more systematic way than we could do in this paper. Note, however, that it may not be feasible to adapt previous methods for dealing with general multifield data, such as the six-component floating-point valued hurricane data set.
7 Acknowledgements

This research was funded by the Alexander von Humboldt Foundation (Humboldt Research Fellowship), Germany. The hurricane Isabel data is courtesy of NCAR and NSF. The CT abdominal scan is courtesy of Michael Meißner, Viatronix Inc., USA.

References

[1] C. Bajaj, I. Ihm, and S. Park. 3-D RGB image compression for interactive applications. ACM Trans. Graphics, 20(1):10–38, 2001.
[2] J. E. Fowler and L. Hua. Wavelet transforms for vector fields using omnidirectionally balanced multiwavelets. IEEE Trans. Signal Processing, 50:3018–3027, 2002.
[3] H. Garcke, T. Preußer, M. Rumpf, A. C. Telea, U. Weikard, and J. J. van Wijk. A phase field model for continuous clustering on vector fields. IEEE Trans. Visualization and Computer Graphics, 7(3):230–241, 2001.
[4] M. H. Gross, L. Lippert, R. Dittrich, and S. Häring. Two methods for wavelet-based volume rendering. Computers & Graphics, 21(2):237–252, 1997.
[5] S. Guthe, M. Wand, J. Gonser, and W. Straßer. Interactive rendering of large volume data sets. In Proc. IEEE Visualization 2002, pages 53–60, 2002.
[6] B. Heckel, G. Weber, B. Hamann, and K. I. Joy. Construction of vector field hierarchies. In Proc. IEEE Visualization ’99, pages 19–25, 1999.
[7] L. Hua and J. E. Fowler. Wavelet-based coding of time-varying vector fields of ocean-surface winds. IEEE Trans. Geoscience and Remote Sensing, 42(6):1283–1290, 2004.
[8] I. Ihm and S. Park. Wavelet-based 3D compression scheme for interactive visualization of very large volume data. Computer Graphics Forum, 18(1):3–15, 1999.
[9] D. LeGall and A. Tabatabai. Sub-band coding of digital images using symmetric short kernel filters and arithmetic coding techniques. In Proc. Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP-88), volume 2, pages 761–764, 1988.
[10] L. Lippert, M. H. Gross, and C. Kurmann. Compression domain volume rendering for distributed environments. In Proc. Eurographics ’97, pages 95–107, 1997.
[11] S. K. Lodha, J. C. Renteria, and K. M. Roskin. Topology preserving compression of 2D vector fields. In Proc. IEEE Visualization 2000, pages 343–350, 2000.
[12] G. M. Nielson, I.-H. Jung, and J. Sung. Haar wavelets over triangular domains with applications to multiresolution models for flow over a sphere. In Proc. IEEE Visualization ’97, pages 143–149, 1997.
[13] G. M. Nielson, I.-H. Jung, and J. Sung. Wavelets over curvilinear grids. In Proc. IEEE Visualization ’98, pages 313–317, 1998.
[14] A. Telea and J. J. van Wijk. Simplified representation of vector fields. In Proc. IEEE Visualization ’99, pages 35–42, 1999.
[15] H. Theisel, Ch. Rössl, and H.-P. Seidel. Compression of 2D vector fields under guaranteed topology preservation. Computer Graphics Forum, 22(3):333–342, 2003.
[16] M. A. Westenberg and T. Ertl. Denoising 2-D vector fields by vector wavelet thresholding. Journal of WSCG, 13(1):33–40, 2005.
[17] M. A. Westenberg and J. B. T. M. Roerdink. X-ray volume rendering through two-stage splatting. Machine Graphics & Vision, 9(1/2):307–314, 2000.
[18] R. Westermann. A multiresolution framework for volume rendering. In Proc. ACM Workshop on Volume Visualization, pages 51–58, 1994.
[19] X.-G. Xia and B. W. Suter. Vector-valued wavelets and vector filter banks. IEEE Trans. Signal Processing, 44(3):508–518, 1996.
(a) 15% (94 MB); (b) 21% (126 MB); (c) 32% (192 MB); (d) full resolution (572 MB)
Figure 4: Visualization of several components of the Isabel data set: the wind vectors and three scalar fields: clouds, pressure, and temperature. The six components were encoded using our wavelet-based method, where we used a maximum angular error criterion for the wind vectors to control the resolution (see text). From left to right, the amount of wavelet coefficients taken into account for rendering increases. The data are shown at full resolution in (d). The clouds are visualized by volume rendering, the temperature and pressure are shown color-encoded on slices, and the wind is rendered with LIC.
Figure 5: Visualization of two components of hurricane Isabel. The left image was rendered using 32% of the wavelet coefficients. The right image shows the data at full resolution. The clouds are rendered by volume rendering. The pressure is mapped onto a slice and is color-coded (see text).
Figure 6: LIC images of hurricane Isabel in the plane z = 7, which is near the earth surface. The left image was rendered with 21% of the wavelet coefficients, and the right image was rendered with the full resolution data.