Volume Rendering by Template-Based Octree Projection

Rajagopalan Srinivasan (1), Shiaofen Fang (2), Su Huang (1)

(1) CIeMed, Institute of Systems Science, National University of Singapore
ABSTRACT

In this paper we present a new volume rendering algorithm with both raycasting and texture mapping formulations for parallel projection. The volume is represented and stored in an efficient octree data structure. The octree blocks are adaptively chosen to minimize unnecessary processing of empty voxels. The algorithm exploits the uniform shape, orientation and size of the octree blocks by building templates for ray/block intersections in the case of raycasting, and Z-plane/block intersections in the case of texture mapping. These templates are then pasted onto all the octree blocks, thereby avoiding the costly intersection computations. While the octree is an established technique for accelerating volume visualization, it has not previously been used in conjunction with templates and texture mapping. As an integrated acceleration tool, the combination is much more efficient than using an octree alone: in our experiments, the templates speed up the rendering by a factor of about three for raycasting and about six for texture mapping. The algorithm is also efficient in terms of memory requirements, generates high quality images, and is well suited for parallelization.
Keywords: Computer Graphics, Scientific Visualization, Volume Rendering, Octree, Raycasting, 3D Texture Mapping, 3D Imaging.
1 Introduction

Volume rendering is a technique for visualizing sampled scalar or vector fields in 3-dimensional space. The size of the data and the computational overhead involved in the visualization have motivated many algorithms in recent years [5, 10, 14, 20, 15, 21, 22, 13, 11]. They can be roughly classified into image-order and object-order algorithms. Image-order techniques like raycasting produce high quality images, but are very expensive because of redundant data structure traversal. Many acceleration techniques have been proposed; most rely on spatial data structures such as pyramids and octrees, which subdivide 3D space into volume hierarchies that help rays locate voxels of interest [4, 8, 15, 24, 13, 16]. The use of such data structures adds the overhead of computing intersections between the rays and objects, which substantially offsets the savings. Yagel and Kaufman [26] developed an algorithm that determines the sequence of voxels intersected by a discrete ray using a pre-computed pattern called a ray-template. The template we propose here differs from theirs in two ways: we store ray/block intersections instead of a ray pattern, and the template is used for continuous raytracing.

(1) Center for Information Enhanced Medicine, Institute of Systems Science, National University of Singapore, Heng Mui Keng Terrace, Kent Ridge, Singapore 119597. Email: [email protected]
(2) Department of Computer and Information Science, Indiana University Purdue University at Indianapolis, Indianapolis, IN 46202-5132.
Object-order algorithms process the voxels one by one in their memory order and project them onto the screen using splatting [13, 19, 22] or cell-projection [20, 23]. Some algorithms calculate the projections of the volume cells and treat them as polygons [23, 18]. Shear-warp factorization with 2D image warping has been developed for serial volume rendering in [12]. While the timing achieved by this algorithm is close to real time, its memory requirements are demanding. This is a restricting factor when embedding it in an application where visualization is just one of many time- and space-critical subsystems.

The use of 3D textures to accelerate volume rendering has been reported by several researchers [1, 3, 9, 2, 25, 7] and usually works in object space. Van Gelder and Kim [7] incorporated a lighting model and touched upon the issues of multiple texture maps for large volumes. Since the texture memory supported in hardware is much smaller than typical datasets, spatial data structures are needed to minimize the unnecessary processing of empty voxels and to reduce time-consuming texture binding and swapping. Deformable texture mapping using octree-encoded volumes was first introduced in [6]. This paper gives details about the template and the projection of octree blocks using the template.

The basic idea of our algorithm is to adaptively subdivide the volume into blocks of various sizes with fixed shape and orientation, and then project the blocks onto the screen in a front-to-back or back-to-front order. The set of blocks is chosen to tightly enclose the object in the original volume, so that most of the empty voxels can be excluded. Due to the uniformity of the block shape and orientation, the results of the ray (Z-plane)/block intersection are very similar for all blocks. In other words, a pattern, or template, exists for the ray (Z-plane)/block intersection, and can be precomputed once for all the blocks. The need to compute the costly intersections for each block is thus essentially eliminated. Although the raycasting algorithm is basically an object-space algorithm, it operates in image-space raycasting order within each block, and can therefore take advantage of both object-space and image-space techniques. Since the block sizes are adaptively determined, the memory access of the volume can be optimized.

In Section 2, the octree volume data structure is described. The octree projection algorithm is presented in Section 3 in the following steps: block selection, sorting, template building and projection. Section 4 discusses implementation and performance analysis. Section 5 concludes the paper with some further comments and future work.
2 Octree Volume

Octree data structures have been used in volume visualization mainly as an encoding tool to reorganize volume data for easy identification of interesting regions [15, 17, 24]. Unlike these earlier works, our approach uses the octree as the only data structure for volume representation, i.e. instead of forming a 3D array, the voxel data is stored directly within the octree. Volumes so represented are called octree volumes. Besides serving as a tool for locating neighbors and ordering regions, an octree volume is generally more memory efficient than a 3D array representation, particularly for sparse volumes. A typical octree volume representation is shown in Figure 1. In this data structure, the eight children of each node are stored together as one memory block. A level-9 node is the bottom leaf node, representing a 2³ subvolume (8 voxels) and containing the 8 intensity values. A level-0 node represents a 1024³ volume. Each node in the octree geometrically represents a cubic block (subvolume) with a unique ID. Each inner node contains a pointer to the header of its eight children, and the minimum and maximum intensity values in the subtree under this node. Converting a regular 3D array volume to an octree volume is quite straightforward, and is done during preprocessing.
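As a rough sketch, the node layout just described might look as follows in C++ (the field names and types are our illustrative assumptions, not the authors' actual code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Inner node: a pointer to the header of its eight children (stored together
// as one memory block), plus the min/max intensities of the subtree, which
// let a traversal skip octants outside the intensity range of interest.
struct OctNode {
    OctNode*     children = nullptr;  // nullptr marks a leaf
    std::uint8_t minVal   = 255;      // minimum intensity in this subtree
    std::uint8_t maxVal   = 0;        // maximum intensity in this subtree
};

// A level-9 leaf represents a 2x2x2 subvolume and stores its 8 intensities.
struct LeafNode {
    std::uint8_t voxel[8];
};

// Side length (in voxels) of the cubic block a level-i node represents,
// with level 0 covering a 1024^3 volume as in the paper.
constexpr std::size_t blockSize(int level) {
    return std::size_t(1024) >> level;
}
```

Storing the eight children contiguously means each descent touches a single memory block, which helps the depth-first traversals described in Section 3.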
3 Octree Projection

For each octree volume, a pre-processing step selects a set of blocks: non-overlapping octants containing all voxels with values within the user-specified intensity range. For each block, a subvolume in 3D array format is also extracted from the octree. Once the blocks are selected, they can be used for any viewing angle. For each given viewing direction, the rendering runs in three steps: block sorting, template building and projection. The block sorting step sorts the blocks in front-to-back order for raycasting and back-to-front order for texture mapping. Since the octants are already well organized within the octree, sorting the blocks for a particular viewing angle is quite simple. The template building step computes the intersection pattern to be pasted in the block projection step. This section examines these steps in detail.
3.1 Block selection
Using different intensity ranges, we can obtain different block selections for the same octree volume. For each given intensity range, voxels with values outside the range are considered empty. The blocks to be selected must be non-overlapping octants of the octree that enclose all the non-empty voxels. A block taken from a level-i octant is called a level-i block. In order to minimize unnecessary computation for empty voxels and reduce the overhead in block projection, both the fraction of empty voxels contained in the blocks and the number of blocks should be small. This is a trade-off: a selection with a very low fraction of empty voxels tends to produce many (small) blocks, while a small total number of blocks means the blocks tend to be big and to contain more empty voxels. To facilitate selection, we define a sequence {r_i} of thresholds on the fraction of empty voxels in level-i (i = 0, ..., 9) blocks. The octree is traversed in depth-first order. For each level-i octant encountered, if its fraction of empty voxels is below r_i, the octant is considered sufficiently full and taken as a block, and no further traversal of its children is necessary; otherwise, traversal continues with its eight children. Clearly, the thresholds need to satisfy r_i ≤ r_j (i < j). There are no absolute optimal values for {r_i}; they are generally data dependent. The following setting is an example we used for some 256³ volumes to generate 32³, 16³, 8³ and 4³ blocks:
{r_i} = {0, 0, 0, 0, 0, 0.4, 0.5, 0.6, 1.0, 1.1},  i = 0, ..., 9.    (1)
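The selection rule can be sketched as the following recursion (a simplified illustration over an abstract node type; `emptyFraction` and `allEmpty` are assumed to be precomputed, e.g. from the min/max values stored in the octree nodes):

```cpp
#include <cassert>
#include <vector>

// Abstract stand-in for an octree octant.
struct Node {
    std::vector<Node> children;      // empty for a leaf
    double emptyFraction = 0.0;      // fraction of empty voxels in this octant
    bool   allEmpty      = false;    // fully empty octants are never selected
};

// Depth-first block selection: a level-i octant whose empty-voxel fraction
// is below r[i] is "sufficiently full" and taken as a block; otherwise its
// eight children are visited. r must be non-decreasing, as in equation (1).
void selectBlocks(const Node& n, int level, const std::vector<double>& r,
                  std::vector<const Node*>& blocks) {
    if (n.allEmpty) return;                             // nothing to render here
    if (n.emptyFraction < r[level] || n.children.empty()) {
        blocks.push_back(&n);                           // take this octant as a block
        return;                                         // no need to descend further
    }
    for (const Node& c : n.children)
        selectBlocks(c, level + 1, r, blocks);
}
```

A leaf that fails its threshold is still taken, since it cannot be subdivided further; this is why the last entry of (1) exceeds 1.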
In the case of texture mapping, after block selection the subvolumes extracted from each block are converted to texture blocks whose voxels contain RGB color and opacity values. Intensity-to-RGBA conversion is done through a user-specified transfer function. Each texture block is treated as a separate texture map that can be accessed through the texture coordinates of points in object space. When a texture block is first accessed, a texture binding operation is needed to define it as the current texture map until a new texture block is encountered. The binding is inexpensive if the texture block is already in texture memory. Otherwise, the texture block must be loaded into texture memory, and if the texture memory is full, the entire texture memory has to be swapped. Such texture block loading and texture memory swapping is expensive and is often the dominant cost in texture mapping based algorithms [7].
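A minimal sketch of the intensity-to-RGBA conversion step using a 256-entry lookup table (the quadratic opacity ramp below is only a placeholder for the user-specified transfer function):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

struct RGBA { std::uint8_t r, g, b, a; };

// Build a lookup table from a transfer function. Here we use a grayscale
// color ramp with quadratically increasing opacity purely as an example.
std::array<RGBA, 256> makeLUT() {
    std::array<RGBA, 256> lut{};
    for (int v = 0; v < 256; ++v) {
        double t = v / 255.0;
        auto alpha = static_cast<std::uint8_t>(255.0 * t * t);  // placeholder ramp
        lut[v] = { std::uint8_t(v), std::uint8_t(v), std::uint8_t(v), alpha };
    }
    return lut;
}

// Convert a block's intensity subvolume into an RGBA texture block in one pass.
void intensityToRGBA(const std::uint8_t* voxels, RGBA* texels, std::size_t n,
                     const std::array<RGBA, 256>& lut) {
    for (std::size_t i = 0; i < n; ++i)
        texels[i] = lut[voxels[i]];
}
```

Because the table is rebuilt only when the transfer function changes, the per-block conversion is a single table lookup per voxel.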
3.2 Block sorting
The sorting consists of a view-dependent depth-first octree traversal. It is view dependent because the eight children of each node are visited in a view-dependent order. Assuming the viewing direction vector in the octree volume coordinate system is (x, y, z), the signs of the coordinates determine the order in which the eight child octants are visited. Blocks can be picked up automatically from their octants during this view-dependent octree traversal. Figure 2 shows the traversal order for front-to-back sorting. The order for back-to-front sorting is simply the reverse.
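One simple way to realize the view-dependent child order is to sort the eight child-octant centers by their projection onto the viewing direction. The paper derives the same order directly from the coordinate signs, so this sketch is an equivalent but not identical formulation:

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Front-to-back visiting order of the eight children for a viewing direction
// (x, y, z). Child i has offset bits (i&1, i&2, i&4) along X, Y, Z; sorting
// the child centers by their signed distance along the view vector yields a
// valid visibility order for non-overlapping sibling octants.
std::array<int, 8> frontToBackOrder(double x, double y, double z) {
    std::array<int, 8> order = {0, 1, 2, 3, 4, 5, 6, 7};
    auto depth = [&](int i) {
        double cx = (i & 1) ? 0.5 : -0.5;
        double cy = (i & 2) ? 0.5 : -0.5;
        double cz = (i & 4) ? 0.5 : -0.5;
        return cx * x + cy * y + cz * z;   // distance along the view vector
    };
    std::stable_sort(order.begin(), order.end(),
                     [&](int a, int b) { return depth(a) < depth(b); });
    return order;
}
```

Reversing the returned order gives the back-to-front sequence needed for texture mapping.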
3.3 Template construction and block projection for raycasting
The projection of each block is essentially a raycasting process within the block. The raycasting results of all the blocks are blended and composited into the pixels to which each block projects. This requires the following information for each block (see Figure 3):

1. the covered pixels: the pixels that the block projects to;
2. the covered sampling points: the raycasting sampling points, along the rays from the covered pixels, that fall within the block.

Computing the covered pixels and covered sampling points for every block is quite costly, since there can be thousands of blocks. We notice, however, that for a given viewing direction all the blocks have the same shape and orientation, with only a few fixed sizes. The sets of covered pixels and covered sampling points for blocks of the same size are therefore simple translations of each other, i.e. for each size, the covered pixels and covered sampling points need to be identified only once, saved in a template, and pasted onto all other blocks of the same size. For example, if (1) is used for selecting the blocks, only four templates need to be generated for the projection of all the blocks (around 2000 blocks for typical datasets).

A template is defined as a 2D array covering the screen area of the covered pixels. Each entry of the 2D array simply contains two index numbers indicating the interval of the covered sampling points. To build the template, a sample block of each block size is used. The 2D screen projection of the bounding box of the rotated sample block forms a rectangular screen area containing all the covered pixels of the sample block. For every ray shot from a covered pixel, the ray segment within the block is computed by a normal ray/block intersection, and the interval of the raycasting sampling points within this segment is stored in the template as the covered sampling points.
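The per-pixel template entries can be produced by a standard slab-method ray/box intersection. The following is a generic sketch (not the authors' exact routine) assuming unit sample spacing along the ray; the block is axis-aligned in its own un-rotated coordinate system, so transforming the ray into block space reduces the problem to this test:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Interval of raycasting sampling points along one ray inside a block;
// last < first means the ray misses the block.
struct SampleInterval { int first, last; };

// Standard slab-method ray/box intersection for ray p(t) = o + t*d and
// the axis-aligned box [lo, hi]; returns the parametric interval in t.
bool rayBox(const double o[3], const double d[3],
            const double lo[3], const double hi[3],
            double& tmin, double& tmax) {
    tmin = -1e30; tmax = 1e30;
    for (int a = 0; a < 3; ++a) {
        if (d[a] == 0.0) {
            if (o[a] < lo[a] || o[a] > hi[a]) return false;  // parallel, outside slab
            continue;
        }
        double t0 = (lo[a] - o[a]) / d[a];
        double t1 = (hi[a] - o[a]) / d[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// With unit sample spacing along the ray, the covered sampling points of a
// template entry are simply ceil(tmin) .. floor(tmax).
SampleInterval coveredSamples(const double o[3], const double d[3],
                              const double lo[3], const double hi[3]) {
    double tmin, tmax;
    if (!rayBox(o, d, lo, hi, tmin, tmax)) return {1, 0};
    return { int(std::ceil(tmin)), int(std::floor(tmax)) };
}
```

Because this intersection runs only once per covered pixel of each sample block, its cost is amortized over all blocks of that size.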
For each block in the sorted block list, a displacement vector from the sample block is first computed. A template of the appropriate size is translated by this displacement vector to obtain the covered pixels and covered sampling points of the block, and raycasting is then performed within the block.

There is one complication, however, in using the template. The set of all raycasting sampling points forms a 3D grid in viewing space. If we use the center of the rotated block as a reference point, and define the block offset as the offset of the reference point from its nearest grid point, such offsets are usually different for different blocks, even when the blocks have the same size and orientation. Ideally, if the offset of the sample block used for building the template were the same as that of the block to be projected, there would be no problem in translating the template to project the block. But when their offsets differ, the covered pixels and covered sampling points obtained from the translated template can be slightly different from those computed directly by ray/block intersection.

Our solution is to enlarge the sample block into a so-called extended sample block for building the template, so that it covers all possible offsets. Without loss of generality, assume the resolution (grid step) of the grid is 1. The offset values in X, Y and Z for any block then lie between −0.5 and 0.5. In other words, an extended sample block with its reference point at a grid point P should contain the union of all possible sample blocks of the same size whose reference points lie within a unit cubic cell centered at P. To illustrate this idea, a 2D analog is shown in Figure 4. The extended sample block is obtained by extending the four (six in 3D) sides of the original sample block (dashed) outward by √2/2, the largest offset magnitude (√3/2 in 3D). This ensures that all sample blocks with reference points within the unit cell are included. As shown in Figure 4, the sample block with its reference point Q at a corner of the unit cell, for instance, is still contained within the extended sample block. Using the extended sample block to build the template ensures that the set of covered pixels and covered sampling points of a block is a subset of the translated template. But if we use such a template directly to project the blocks, double sampling can occur along the boundaries of neighboring blocks.
To avoid double sampling, during the projection of a block we first test a few points at the start and end of each ray's sampling point interval, and remove those sampling points that are not within the block, so that raycasting is performed only for the sampling points inside the block. Such a test is not expensive: the world coordinates (in the original volume) of the sequence of sampling points along a ray are needed anyway, and are computed incrementally for the trilinear interpolation of the intensity values; testing whether such a point lies inside an un-rotated block is then trivial. Moreover, at most one or two points generally need to be skipped, so the overhead of this test is small compared to the savings, as the timing comparison in Table 1 shows.
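The inside test itself is just a box containment check in the un-rotated block's own coordinate system. The half-open interval below is our own convention, chosen so that a sample lying exactly on a shared face is counted by only one of the two neighboring blocks:

```cpp
#include <cassert>

// Cheap containment test against the un-rotated block in volume space.
// The world coordinate of each sampling point is computed incrementally
// anyway (for trilinear interpolation), so rejecting the one or two
// boundary samples introduced by the extended template costs little.
bool insideBlock(const double p[3], const double lo[3], const double hi[3]) {
    for (int a = 0; a < 3; ++a)
        if (p[a] < lo[a] || p[a] >= hi[a]) return false;  // half-open on the upper face
    return true;
}
```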
3.4 Template building and block projection for texture mapping
In order to render a block using 3D texture mapping, the polygons generated by intersecting Z-planes with the block need to be sent, together with texture coordinates, to the texture mapping hardware in back-to-front order. Intuitively, the result of the Z-plane/block intersection for one sample block should be usable by all other blocks of the same size with a simple translation. As shown in Figure 5, the template is defined as a list of polygons obtained by intersecting consecutive Z-planes with the sample block. Assume, without loss of generality, that the bottom vertex of the rotated sample block is at (0, 0, 0), and that all Z-planes lie at integer Z coordinates with a Z step of 1. For a given octree block with its bottom vertex at (x, y, z) after rotation, if z is an integer we only need to translate all polygon vertices in the template by (x, y, z) to get the polygons for this octree block. Otherwise, suppose z has integer part k and fractional part q (q ∈ (0, 1)). Merely translating the template by (x, y, k) is not sufficient, because the small offset q must be taken into account. With the offset q, as illustrated by the 2D analog in Figure 6, each translated polygon vertex needs to be shifted by q·p_e, where p_e is the unit offset for the corresponding edge and is pre-computed and stored in the template as well.
The above treatment of the offset works only when each Z-plane intersects the same set of block edges for both the translated sample block and the texture block. When the offset is too big, this no longer holds, as shown in Figure 6, where the middle Z-plane intersects different edges of the dashed octree block. To overcome this problem, more than one template is needed. Figure 7 shows a 2D analog of this solution. For each vertex V_i = (x_i, y_i, z_i) of the sample block, its offset q_i = ⌈z_i⌉ − z_i is first computed. The Z interval [0, 1] is partitioned by all the q_i into subintervals, and one template is built for each subinterval, using a sample block whose bottom vertex is at the low end of the subinterval, as shown in Figure 7. When pasting a block, we first check which subinterval the offset q of the block falls into, and use the template for that subinterval to compute the polygons for the octree block as described above.

Taking advantage of block coherence, we take the sorted blocks one by one and compute all the polygons for each block. Since the templates for blocks of different sizes have already been built, the intersection computation is eliminated: all that needs to be done is to choose the appropriate subtemplate from the template corresponding to the size of the texture block and translate the polygons stored therein by a simple displacement vector. Texture coordinates at the polygon vertices are obtained by trilinear interpolation of the texture coordinates at the block vertices. Before the polygons are sent for texture mapping, the texture block has to be bound as the current texture map.
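Pasting a single template vertex then reduces to one translation plus the q·p_e shift. The sketch below also folds in a derivation of p_e (ours, not spelled out in the paper): for an edge with direction (e_x, e_y, e_z), raising the block by q moves the edge's intersection with a fixed integer Z-plane by q·(−e_x/e_z, −e_y/e_z, 0), so p_e lies in the plane and has zero z-component:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Paste one template polygon vertex onto a block whose rotated bottom
// vertex lies at (bx, by, bz), with bz = k + q (k integer, q in [0, 1)).
// The vertex is translated by (bx, by, k) and then shifted in-plane by
// q * pe, where pe is the precomputed unit offset of the block edge the
// vertex lies on; the vertex stays on its integer Z-plane.
Vec3 pasteVertex(const Vec3& v, const Vec3& pe,
                 double bx, double by, double bz) {
    double k = std::floor(bz);
    double q = bz - k;                    // fractional Z offset of this block
    return { v.x + bx + q * pe.x,
             v.y + by + q * pe.y,
             v.z + k };                   // pe has zero z-component by construction
}
```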
4 Implementation and Discussion

The algorithm presented in this paper has been implemented in C++ and OpenGL and tested on an SGI R4400 workstation running at 150 MHz with 64 Mbytes of main memory; the texture memory on this machine is 4 Mbytes. Five datasets were used for the performance results summarized in Tables 1-3. Besides the standard UNC-CT-Head, MRI Brain and Engine datasets, we also used a synthetic Knot dataset and a sparse Human Vessels dataset. The Human Vessels dataset was manually segmented from the Visible Human dataset, and the volume intensities of the voxels within the vessels were assigned based on their distances from the center lines. The timings are for the particular viewing angle shown in Figure 8.

Table 1 gives the performance of the raycasting algorithm. The timing for the different rendering modes is partitioned into three parts: block projection, resampling/compositing/shading, and template building. The performance of translucent rendering strongly depends on the transfer function that sets the transparency level for viewing internal structures through the surface. A piecewise quadratic transfer function (which generally results in slower speed but higher transparency than a linear transfer function) was used for all our test cases. We used four lights at infinity, trilinear interpolation for resampling, and a three-dimensional edge operator [27] for gradient estimation. For surface and skull rendering, depth cueing was enabled. Except for the Human Vessels dataset, the image size was 300x300 pixels; for the Vessels dataset it was 890x520 pixels.

Table 2 gives the performance of the texture mapping algorithm. The rendered images are 512x512 pixels. When multiple texture blocks are used for large datasets, artifacts (often lines) may occur along the boundary between two texture blocks, as seen in some of the texture mapping results shown in Figure 8.
This is because the image interpolation in texture mapping along the boundary cannot use the neighboring image values stored in the other texture block. This problem could be solved if texture borders, as described in the OpenGL specification, were supported, since the neighboring image values could then simply be defined as the boundary of the texture blocks.

To evaluate the contribution of the templates, we also implemented a brute-force algorithm without templates; its timings are shown in parentheses in Tables 1 and 2 for each dataset. The speedup columns of both tables make it evident that the templates do minimize the computational cost of finding the intersections of the rays (Z-planes) with the blocks.

Due to differences in transfer functions, as well as different hardware and software environments, an accurate quantitative comparison with other work in the field is difficult. However, a rough comparison to a few relevant previous works may still be useful. [15] uses an octree-based algorithm and takes roughly 105 seconds to render the skull of the UNC-CT-Head dataset on a SUN 4/280. [26] uses a template for discrete rays and takes about 30 seconds to render the MRI Brain on a Sun SPARC 1. For texture mapping, [7] reports timings of 1.33, 1.1 and 0.7 seconds for the UNC-CT-Head, Engine and MRI Brain datasets, respectively.

Table 3 gives the memory overheads and the preprocessing time. The total memory used includes the octree volume, the subvolumes of the blocks, the image buffer and the templates. The table suggests that, in general, an octree volume is more memory efficient than storing the volume in 3D array format; in our cases, even the total memory used by the algorithm is smaller than the memory needed merely to store the original volume. Compared with our requirement of 6.2 and 4.02 Mbytes for the CT Head and Brain datasets, [12] requires 13 and 19 Mbytes, respectively. Our advantage in memory requirements is particularly remarkable for sparse volumes. Since only the non-empty voxels are stored and processed, rendering is also much faster for sparse volumes. Taking the Human Vessels dataset as an example, the size of the original volume (204 Mbytes) makes it difficult even to load the data into a normal workstation if the octree is not used.
5 Conclusions

We have presented a parallel projection volume rendering algorithm based on octree projection. By taking advantage of the efficient octree data structure and exploiting the object coherence of the blocks, the algorithm is efficient in terms of both the number of voxels to be processed and the intersection computation. A nice feature of the raycasting formulation is that scaling of the original volume can be done at run time: all object-related operations (e.g. intersection) are done with the templates, so scaling only the extended sample blocks is sufficient to render the entire scaled volume. This enables run-time zooming. Furthermore, the blocks can be selectively projected, enabling selective subvolume rendering and 3D panning, which is desirable in some applications. Finally, since most of the empty voxels need not be stored, the algorithm is fairly memory efficient. Texture mapping of octree-encoded blocks makes optimal use of the relatively small texture memory available on current workstations.

The main limitation of the raycasting algorithm is the lack of perspective projection, although run-time zooming under parallel projection can partly provide the navigational value of perspective rendering. Another problem, common to all object-space algorithms, is that block projection is less effective at early termination than pure raycasting algorithms: although the rays from opaque pixels need not be processed within each block, the translated template for the block still has to be examined.

In the future, we will work on a strategy for perspective block projection. The cubic shape of the blocks may incur some overhead for very non-cubic datasets such as the Visible Human dataset; non-cubic blocks might allow more efficient use of the blocks. Finally, for blocks that do not project to overlapping image regions, the projections are totally independent, which could be used to parallelize the algorithm efficiently.
6 Acknowledgments

We take this opportunity to thank our manager Dr. Raghu Raghavan for his support throughout this work.
References

[1] Kurt Akeley. RealityEngine graphics. Computer Graphics, SIGGRAPH '93, 27:109-116, August 1993.
[2] Brian Cabral, Nancy Cam, and Jim Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In Proc. 1994 Symposium on Volume Visualization, Washington, D.C., pages 91-98, October 1994.
[3] T.J. Cullip and U. Neumann. Accelerating volume rendering with 3D texture hardware. Tech. Rep. TR-93-027, Univ. of North Carolina, Chapel Hill, 1993.
[4] J. Danskin and Pat Hanrahan. Fast algorithms for volume ray tracing. In Proc. 1992 Workshop on Volume Visualization, pages 91-98, October 1992.
[5] Robert A. Drebin, Loren Carpenter, and Pat Hanrahan. Volume rendering. Computer Graphics, SIGGRAPH '88, 22(4):65-74, August 1988.
[6] Shiaofen Fang, Rajagopalan Srinivasan, Su Huang, and Raghu Raghavan. Deformable volume rendering by 3D texture mapping and octree encoding. In Proc. Visualization '96, San Francisco, CA, pages 73-80, 1996.
[7] Allen Van Gelder and Kwansik Kim. Direct volume rendering with shading via three-dimensional textures. In Proc. 1996 Symposium on Volume Visualization, pages 23-29, 1996.
[8] A.S. Glassner. Space subdivision for fast ray tracing. IEEE Computer Graphics and Applications, 4(10):15-22, October 1984.
[9] Sheng-Yih Guan and Richard Lipes. Innovative volume rendering using 3D texture mapping. In Proc. 1994 SPIE Medical Imaging, SPIE 2164, pages 382-392, 1994.
[10] Arie Kaufman. Volume Visualization. IEEE Computer Society Press, 1991.
[11] Wolfgang Krueger. Volume rendering and data feature enhancement. In Proc. San Diego Workshop on Volume Visualization, volume 24, pages 21-26, 1990.
[12] Philippe Lacroute and Marc Levoy. Fast volume rendering using a shear-warp factorization of the viewing transformation. SIGGRAPH '94, pages 451-458, 1994.
[13] David Laur and Pat Hanrahan. Hierarchical splatting: A progressive refinement algorithm for volume rendering. Computer Graphics, SIGGRAPH '91, 25(4):285-288, July 1991.
[14] Marc Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29-37, May 1988.
[15] Marc Levoy. Efficient ray tracing of volume data. ACM Trans. on Graphics, 9(3):245-261, July 1990.
[16] Marc Levoy. Volume rendering by adaptive refinement. The Visual Computer, 6(1):2-7, February 1990.
[17] Hanan Samet. Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS. Addison-Wesley, 1990.
[18] Peter Shirley and Allan Tuchman. A polygonal approximation to direct scalar volume rendering. Computer Graphics, 24(5):63-70, December 1990.
[19] D. Tost, A. Puig, and I. Navazo. A volume visualization algorithm using a coherent extended weight matrix. Computers & Graphics, 19(1):37-45, 1995.
[20] Craig Upson and Michael Keeler. V-buffer: Visible volume rendering. Computer Graphics, SIGGRAPH '88, 22(4):59-64, August 1988.
[21] Lee Westover. Interactive volume rendering. In Proc. Chapel Hill Workshop on Volume Visualization, pages 9-16, 1989.
[22] Lee Westover. Footprint evaluation for volume rendering. Computer Graphics, SIGGRAPH '90, 24(4):367-376, August 1990.
[23] Jane Wilhelms and Allen Van Gelder. A coherent projection approach for direct volume rendering. Computer Graphics, SIGGRAPH '91, 25(4):275-284, July 1991.
[24] Jane Wilhelms and Allen Van Gelder. Octrees for faster isosurface generation. ACM Trans. on Graphics, 11(3):201-227, July 1992.
[25] Orion Wilson, Allen Van Gelder, and Jane Wilhelms. Direct volume rendering via 3D textures. Tech. Rep. UCSC-CRL-94-19, Computer Science Department, Univ. of California, Santa Cruz, June 1994.
[26] Roni Yagel and Arie Kaufman. Template-based volume viewing. In Proc. Eurographics '92, pages 153-167, 1992.
[27] S.W. Zucker and R.A. Hummel. A three-dimensional edge operator. IEEE Transactions on Pattern Analysis and Machine Intelligence, 3(3):324-331, May 1981.
Figure 1: The data structure for an octree volume.

Figure 2: The order of the octants for a given viewing direction.

Figure 3: The template for ray/block intersection used in raycasting (RC).

Figure 4: A 2D analog of the extended sample block used in RC.

Figure 5: The template for Z-plane/block intersection used in texture mapping (TM).

Figure 6: The 2D analog of the offset to the TM template.

Figure 7: The 2D analog of the TM subtemplates.
Table 1. Summary of the algorithm's performance for raycasting (timings in seconds)

Dataset        Mode         template   block          resampling,      total           speedup
                            building   projection     shading, blend.
CT Head        Translucent  0.017      1.27  (13.25)  9.28 (9.28)      10.57 (22.53)   2.13
CT Head        Skull        0.017      1.24  (12.89)  5.62 (5.62)       6.88 (18.51)   2.69
MRI Brain      Surface      0.021      0.823 (13.07)  3.81 (3.81)       4.66 (16.89)   3.63
Engine         Surface      0.019      1.02  (13.52)  4.27 (4.27)       5.31 (17.79)   3.35
Knot           Surface      0.019      0.625 (6.69)   2.73 (2.73)       3.37 (9.42)    2.79
Human Vessels  Translucent  0.004      0.34  (6.78)   1.28 (1.28)       1.62 (8.06)    4.96

(Note: timings in parentheses are obtained without using templates)
Table 2. Summary of the algorithm's performance for texture mapping (timings in seconds)

Dataset        Mode         template   template       texture binding  total          speedup
                            building   pasting        & blending
CT Head        Translucent  0.019      0.092 (4.40)   0.713 (0.713)    0.824 (5.11)   6.21
CT Head        Skull        0.017      0.087 (4.12)   0.712 (0.712)    0.816 (4.83)   5.92
MRI Brain      Surface      0.021      0.192 (3.28)   0.33  (0.33)     0.543 (3.61)   6.65
Engine         Surface      0.027      0.09  (4.02)   0.563 (0.563)    0.68  (4.58)   6.74
Knot           Surface      0.02       0.289 (3.19)   0.55  (0.55)     0.859 (3.74)   4.36

(Note: timings in parentheses are obtained without using templates)
Table 3. Summary of memory use and pre-processing time

Dataset        Size           Size   Octree       Total memory   # of    Pre-processing time (sec)
               (voxels)       (MB)   volume (MB)  used by        blocks  octree    block      texture
                                                  alg. (MB)              creation  selection  definition
CT Head        256x256x113    7.06   3.07         6.2            2056    25        8.5        8.18
MRI Brain      256x256x109    6.81   1.99         4.02           1750    28        7.5        7.21
Engine         256x256x110    6.87   1.80         3.81           1943    24        7.1        7.42
Knot           256x256x256    16     2.37         3.13            958    31        6.8        9.21
Human Vessels  419x419x1218   204    1.36         1.88           3190    182       4.5        --
Figure 8: Results of the template-based octree projection algorithm. From top, left to right: (a) UNC-CT-Head: translucent: Raycasting (RC); (b) UNC-CT-Head: translucent: Texture Mapping (TM); (c) UNC-CT-Head: skull: RC; (d) Human Vessels: translucent: RC; (e) UNC-CT-Head: skull: TM; (f) MRI Brain: surface: RC; (g) MRI Brain: surface: TM; (h) Engine: surface: RC; (i) Engine: surface: TM; (j) Knot: surface: RC; (k) Knot: surface: TM.