
Tricubic Interpolation of Discrete Surfaces for Binary Volumes

Arie Kadosh, Daniel Cohen-Or, and Roni Yagel

Abstract—Binary-defined 3D objects are common in volume graphics and medical imaging as a result of voxelization algorithms, segmentation methods, and binary operations such as clipping. Traditionally, renderings of binary objects suffer from severe image quality problems, especially when one tries to zoom in and render the binary data from up close. We present a new rendering technique for discrete binary surfaces. The technique is based on distance-based normal estimation, accelerated ray casting, and a tricubic interpolator. We demonstrate the quality achieved by our method and report on its interactive rendering speed.

Index Terms—Volume visualization, volume rendering, ray casting, high order interpolation, distance function.

A. Kadosh is with Global CommerceZone, 8 Hamarpe St., Har Hotsvim, PO Box 45153, Jerusalem 91450, Israel. E-mail: [email protected].
D. Cohen-Or is with the School of Computer Science, Tel Aviv University, Schreiber Building, Room 216 (new wing), Tel Aviv 69978, Israel. E-mail: [email protected].
R. Yagel is with InSightec, TOR Systems, 3a Yoni Netanyahu St., Or Yehuda 60376, Israel. E-mail: [email protected].
Manuscript received 15 Nov. 1999; revised 1 Mar. 2001; accepted 28 Sept. 2001.

1 INTRODUCTION

Visualization of 3D binary models has been extensively studied in the last decade. Early rendering of 3D medical data assumed that voxels are binary, mainly because of storage limitations. Today, binary data in medical applications is commonly generated by segmentation algorithms that decide to which organ every voxel belongs [19]. Furthermore, the generation of synthetic objects by various voxelization algorithms produces binary-defined objects. These can be used to generate synthetic scenes or can be superimposed over sampled data (e.g., medical data). Finally, various common operations, such as clipping, cutting, and pasting of volume data, leave behind a binary-defined surface in the cut region.

Naive rendering of discrete surfaces may result in various artifacts, especially when one tries to view the surface from up close. This is evident even with a continuous representation, when a curved surface is approximated by a set of piecewise linear objects. One example of this approach is the Marching Cubes (MC) algorithm [12], which represents the iso-value surface with a triangle mesh. When the viewer approaches the triangle mesh, even the well-known "tricks of the trade," namely Gouraud and Phong shading, cannot deliver the illusion of smoothness, and the linear approximation becomes visible. Improved iso-surface reconstruction methods have been developed that generate adaptive meshes based on a trilinear filter [1]. The disadvantage of this approach is that it creates many more triangles than MC. A major disadvantage of most volume-to-surface methods is that the reconstruction is not view-dependent and, therefore, large portions of the generated triangle mesh are not visible from a given viewpoint. Livnat and Hansen [11] extract view-dependent isosurfaces by first performing a coarse visibility culling and then reconstructing a surface from the potentially visible parts. Hong et al. [8] used occlusion culling to reduce the number of triangles that are fed into the graphics hardware in each frame. The storage size of the mesh representation remains large, however.

In contrast, the advantage of a ray casting approach is that it inherently performs occlusion culling. Thus, its performance is mainly output sensitive. Moreover, it does not require special graphics hardware and can therefore easily integrate many additional acceleration techniques [4]. Gibson [6] uses a distance map, assuming that the density varies linearly near the surface and that the curvature of the surface is small relative to the sampling rate, thus avoiding high spatial frequencies at the boundary. This enables the use of a simple trilinear interpolation function and a six-point central difference filter to reconstruct the values and derivatives of the map near the surface. Other related work includes that of Mueller et al. [14], who use distances to improve their volume rendering splatting technique, and Huang et al. [9], who attempt to provide sharp volume rendering using nonisotropic interpolation kernels.

In this paper, we show that, by improving the accuracy and fidelity of the ray caster, it can reconstruct smooth surfaces of high quality even when one zooms in very close to the surface. By applying different acceleration techniques, we were able to speed up the rendering time without a significant reduction in rendering quality. Another important feature of our method is that the rendering is applied directly to the discrete representation of a segmented modality; that is, the data may be provided to the renderer in binary form, as the output of some tissue classifier. Thus, the storage space of the segmented surfaces is extremely compact, which implies fast loading times and fast transmission over communication channels.

Obviously, the interpolation function used in the generation of close-up images plays a crucial role in their quality. We showed [3] that trilinear interpolation, which is commonly used in volume rendering (see, for example, the work of Parker et al. [16]), is not smooth enough when binary voxels are rendered from up close.


Fig. 1. Trilinear interpolation (left column) versus tricubic interpolation (right column). The use of a higher order interpolator greatly enhances the ability to display objects from up close. The trilinear interpolation yields reasonable images when the eye is not very close to the object (c).

The examples in Fig. 1 show some close-up views of binary voxel data. The pictures on the left show trilinear interpolation, where the surface is defined by thresholding the trilinear function. The pictures on the right show three views of our cubic interpolation of the same binary data. Noticeably, the trilinear interpolation is not smooth enough. The use of higher-order interpolators greatly enhances one's ability to display discrete objects from up close (Fig. 2). The reason for the difference is exemplified in Fig. 3a, which illustrates the fact that thresholding a bilinear level set yields a level curve that is only C0, not C1. Our cubic interpolation technique (see Fig. 3b) employs a Hermite tricubic function, which yields a smooth surface over binary data sets.

In this paper, we also show how to overcome sampling problems and segmentation errors. Fig. 5 illustrates the same interpolation method applied over the same data, once on the binary values and once on the corrected gray values; Fig. 5b shows how the method described in this paper overcomes these problems.

In this paper, we present a ray casting method that is capable of supporting the higher order interpolation technique while keeping the computations as simple as possible and supporting interactive rendering rates.


Fig. 2. The higher order interpolators enable the display of smooth images from a viewpoint very close to the object.

In Section 2, we define the requirements of the reconstructed surface and briefly present our interpolation scheme. In Section 3, we introduce the ray-casting algorithm together with numerical methods that accelerate it. In Section 4, we introduce approximated methods that further accelerate the process and present implementation details, including our shading technique, and results. We conclude in Section 5.

2 THE SURFACE RECONSTRUCTION

Given a 3D grid of binary data, a reconstruction algorithm generates a continuous surface S that separates the "black" values from the "white" ones. The Marching Cubes reconstruction algorithm, like the trilinear reconstruction, uses only local properties to build a surface that separates the "black" and "white" vertices in a given cell. Our method also uses the derivatives at each grid point, thereby taking the values of adjacent voxels into consideration. Since we want to achieve a smooth surface even from close-up views, we need a C1 interpolation function.

The surface reconstruction consists of two steps. The first step, a preprocessing step, assigns to each grid point a value that reflects the point's distance from the surface. Points outside the body have positive values, while those inside the body have negative values. Points with zero value define the surface, or zero level set. In the second, interactive phase, a ray casting approach is applied to render the points with zero value (or another isovalue) that are visible from the viewpoint.

Based on the binary volume, we first define a function g, which measures each voxel's distance from the surface. High values of g indicate voxels far outside the object (external voxels), while voxels inside the object (internal voxels) have negative values. The derivative values needed for the evaluation of the tricubic function are approximated by standard central differences of g. Our implementation of the function g is defined as follows: For each external voxel u that has one or more internal neighbors, find the closest internal one, v (among the 26 neighbors). If u and v agree on two coordinates (i.e., u and v are six-adjacent), then assign to u the value 0.5. This means that the external voxel u and its closest internal voxel v are located one unit away from each other and, therefore, since v is the closest internal voxel to u, the distance from u to the surface is 0.5. Similarly, if u and v have only one grid coordinate in common (i.e., u and v are 18-adjacent), we assign to u the value √2/2. The value √3/2 is assigned to u if it has no grid coordinate in common with v (i.e., u and v are 26-adjacent). Fig. 4a and Fig. 4b show a 2D example of the value assignment. The same method applies to the negative values assigned to internal voxels that are neighbors of external voxels. Next, the distance values are propagated to the remaining internal and external voxels. This can be done with a breadth-first search algorithm in which each voxel takes the closest neighbor that already has a value and adds its own distance to that neighbor's distance, provided that this shortens the distance.
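A minimal sketch of this initialization step follows (Python, with hypothetical names; not the authors' code): every voxel that has an opposite-valued neighbor receives a signed magnitude of 0.5, √2/2, or √3/2, according to whether its closest opposite-valued neighbor is six-, 18-, or 26-adjacent.

```python
import math

SQRT2_2 = math.sqrt(2) / 2
SQRT3_2 = math.sqrt(3) / 2

def initial_distances(vol):
    """vol[z][y][x] is 1 inside the object, 0 outside.  Returns a dict mapping
    boundary voxels to signed distances (positive outside, negative inside),
    following the 6/18/26-adjacency rule described in the text."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    dist = {}
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                best = None  # adjacency rank of the closest opposite neighbor
                for dz, dy, dx in offsets:
                    zz, yy, xx = z + dz, y + dy, x + dx
                    if not (0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx):
                        continue
                    if vol[zz][yy][xx] != vol[z][y][x]:
                        shared_axes = (dz == 0) + (dy == 0) + (dx == 0)
                        rank = 3 - shared_axes  # 1: six-, 2: 18-, 3: 26-adjacent
                        best = rank if best is None else min(best, rank)
                if best is not None:
                    magnitude = {1: 0.5, 2: SQRT2_2, 3: SQRT3_2}[best]
                    sign = 1.0 if vol[z][y][x] == 0 else -1.0  # outside +, inside -
                    dist[(z, y, x)] = sign * magnitude
    return dist
```

The breadth-first propagation described above would then grow this seed set outward, updating a voxel only when the new distance is shorter than its current one.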

Fig. 3. Thresholding a bilinear level set (a) yields a level curve that is only C0, not C1. A cubic interpolation (b) yields a smooth curve (only the visible parts of the curve are displayed; the rays are cast from below).


Fig. 4. In (a), the "black" voxel (in the middle) seeks the closest neighbor with the opposite ("white") value. In this example, such a voxel is located one grid unit from it. Therefore, this voxel's initial distance to the surface is 1/2. In (b), the voxel in the middle finds such a voxel at distance √2 from it and, therefore, its initial distance from the surface is −√2/2.

To further improve the voxels' gray values and derivatives, we define another function h. The function measures the number of neighbors each voxel v has with the same binary value; it gives low values to voxels that are relatively isolated. A linear combination of the g and h functions yields the gray values assigned to the voxels. This combination defines the scene as a distance map to the iso-surface: the iso-surface has zero distance value, internal points have negative distance values, and external points have positive distance values. Note that derivatives are computed for the white as well as the black voxels.
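A sketch of the h function, together with one possible form of the g + h combination, is given below. The paper does not specify the weights or the exact form of the combination, so gray_value is only an illustration of the idea, with made-up parameters.

```python
def h_value(vol, z, y, x):
    """h: the number of 26-neighbors that share the voxel's binary value.
    Relatively isolated voxels (e.g., segmentation noise) get low values."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    count = 0
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dz, dy, dx) == (0, 0, 0):
                    continue
                zz, yy, xx = z + dz, y + dy, x + dx
                if 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx \
                        and vol[zz][yy][xx] == vol[z][y][x]:
                    count += 1
    return count

def gray_value(g, h, inside, alpha=1.0, beta=0.02):
    """Hypothetical linear combination of g and h.  The weights and the sign
    convention are illustrative only: the h term is signed like g, so isolated
    voxels end up closer to the zero level set than well-supported ones."""
    return alpha * g + (-beta * h if inside else beta * h)
```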


Fig. 5 shows the enhancement that we get when using the voxels' gray values rather than the binary ones.

The above calculations generate a variation of a distance field around the surface. The interpolation of this field is defined by a tricubic function based on a Hermite-like interpolant. We first describe the Hermite interpolation in 2D and, later, its extension to 3D. Assuming that we know the function values and derivatives at t = 0 and t = 1, namely P0, P1, P'0, and P'1, we want to interpolate the function value by a cubic polynomial at 0 < t = T < 1. Such a polynomial has the form P(t) = a t^3 + b t^2 + c t + d. Since we know that P(0) = P0, P(1) = P1, P'(0) = P'0, and P'(1) = P'1, the coefficients of P can be solved for.

A ray that enters a cell that has a nonempty intersection with the surface searches for its intersection with the zero level set. The basic operation samples the level set at a sequence of points, aiming to converge to a zero value. A point p = (x, y) is sampled by applying five 1D Hermite interpolations. First, p is vertically projected onto the cell boundary (faces) to yield the points py0 and py1 (see Fig. 6). After computing the values and y-derivatives at py0 and py1, the value at p is interpolated by a vertical 1D Hermite cubic function. The values at py0 and py1 are horizontally interpolated from the values and x-derivatives at the grid points. Since we also need the y-derivatives at py0 and py1, we interpolate them using the y-derivatives and the xy-derivatives at the grid points. This means that, at each grid point, the voxel value and the derivatives in the x-direction, y-direction, and xy-direction must be precalculated.
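Solving the four constraints gives the familiar Hermite form of the cubic. The following one-dimensional routine is a minimal sketch of this building block; the function name and signature are ours, not the authors'.

```python
def hermite_cubic(p0, p1, d0, d1, t):
    """Cubic Hermite interpolation on [0, 1]: returns P(t) for the cubic
    P(t) = a t^3 + b t^2 + c t + d with P(0) = p0, P(1) = p1,
    P'(0) = d0, and P'(1) = d1.  Solving the constraints gives
    a = 2 p0 - 2 p1 + d0 + d1, b = -3 p0 + 3 p1 - 2 d0 - d1,
    c = d0, d = p0, regrouped below into the Hermite basis functions."""
    t2, t3 = t * t, t * t * t
    h00 = 2 * t3 - 3 * t2 + 1      # multiplies p0
    h10 = t3 - 2 * t2 + t          # multiplies d0
    h01 = -2 * t3 + 3 * t2         # multiplies p1
    h11 = t3 - t2                  # multiplies d1
    return h00 * p0 + h10 * d0 + h01 * p1 + h11 * d1
```

The 2D and 3D evaluations described in the surrounding text reuse this one-dimensional routine.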

Fig. 5. (a) Applying the interpolation method on the voxels’ binary data. (b) The result of applying the same interpolator over the gray values that were calculated from the g and h functions.

Fig. 6. We first compute the value and the y-derivative at the projection points (a). The point’s value can then be calculated using the projection points’ values and y-derivatives (b).


Fig. 7. (a) The entry and the exit points have a positive distance from the surface. The ray does not cross the surface. (b) The entry and exit points have a positive distance from the surface. The ray crosses the surface.

In 3D, the grid points also store the z-derivative, xz-derivative, yz-derivative, and xyz-derivative; evaluating a point then requires 21 one-dimensional Hermite interpolations (see [3] for more details).
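As an illustration of the five one-dimensional interpolations used for a 2D sample, here is a sketch that reuses the hermite_cubic routine above. The corner-data layout (a dictionary of precomputed values and derivatives per grid corner) is an assumption made for this sketch, not the authors' data structure. One way to reach the stated count of 21 in 3D is to apply the same pattern to the two z-faces of a cell and to the z-derivatives at their projection points, followed by a final interpolation along z.

```python
def bicubic_hermite_sample(corner, x, y):
    """Evaluate the 2D Hermite interpolant at (x, y) in [0, 1]^2 with the five
    1D interpolations described in the text.  corner[(i, j)] holds a dict with
    the precomputed 'f', 'fx', 'fy', and 'fxy' values at grid corner (i, j)."""
    # Values at the projections py0 (y = 0 edge) and py1 (y = 1 edge),
    # interpolated horizontally from the values and x-derivatives.
    v0 = hermite_cubic(corner[(0, 0)]['f'], corner[(1, 0)]['f'],
                       corner[(0, 0)]['fx'], corner[(1, 0)]['fx'], x)
    v1 = hermite_cubic(corner[(0, 1)]['f'], corner[(1, 1)]['f'],
                       corner[(0, 1)]['fx'], corner[(1, 1)]['fx'], x)
    # y-derivatives at py0 and py1, interpolated from the y- and xy-derivatives.
    dy0 = hermite_cubic(corner[(0, 0)]['fy'], corner[(1, 0)]['fy'],
                        corner[(0, 0)]['fxy'], corner[(1, 0)]['fxy'], x)
    dy1 = hermite_cubic(corner[(0, 1)]['fy'], corner[(1, 1)]['fy'],
                        corner[(0, 1)]['fxy'], corner[(1, 1)]['fxy'], x)
    # Final vertical interpolation through p.
    return hermite_cubic(v0, v1, dy0, dy1, y)
```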

3 THE RAY CASTING ALGORITHM

During rendering, rays are cast from the image space into the binary volume. Each ray traverses the empty space and searches for the intersection with the zero level set. The volume traversal is implemented based on the Cleary and Wyvill [2] algorithm, in which the ray steps along cubic cells whose eight corners are voxels. Once the ray enters a cell whose eight voxels do not agree on their level signs, it searches for the intersection point with the zero level set. This search is actually a sampling process, which is computationally expensive since each sample requires an evaluation of the tricubic function. This sampling mechanism dominates the rendering process: most of the time is spent on sampling the level set rather than traversing the volume. To achieve interactive rendering times, the sampling method must be accelerated. Below, we present a collection of acceleration methods. First, we introduce numerical methods with which we accelerate the convergence of the ray-level set intersection without giving up accuracy. Then, we discuss heuristics with which we may very quickly render approximated views. These approximated views are used in a browsing mode, when the user scans the scene while continuously changing the viewing direction. As soon as the user stops roaming, a high-fidelity image is generated.

In the following discussion, we focus on the sampling process and the acceleration of its convergence to the level set. This process is activated once the ray encounters a nonempty cell. A naive implementation simply samples the level set along the ray until it reaches a value close to zero, up to a user-defined tolerance. The sampling is applied at constant intervals. If this interval is set too small, image quality is not improved further while a performance penalty is incurred. If, on the other hand, the interval is set too large, the improved performance comes at the price of reduced image quality. Note that a naive binary search cannot be applied here, since the signed distance is not linear along the ray and we might miss the surface in cases where the interval crosses the surface while both its endpoints are external to it.
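The empty-space traversal can be sketched with a standard uniform-grid incremental scheme. The version below follows the well-known 3D DDA formulation, which is closely related to the Cleary-Wyvill algorithm cited above; it is an illustration under those assumptions, not the authors' implementation.

```python
def traverse_cells(origin, direction, grid_size):
    """Uniform-grid traversal (3D DDA style).  Yields integer cell indices
    along the ray until it leaves the grid.  Assumes origin lies inside the
    grid and direction is nonzero; direction need not be normalized."""
    cell = [int(origin[i]) for i in range(3)]
    step = [0, 0, 0]
    t_max = [float('inf')] * 3      # ray parameter of the next cell face per axis
    t_delta = [float('inf')] * 3    # parameter increment per cell per axis
    for i in range(3):
        if direction[i] > 0:
            step[i] = 1
            t_max[i] = (cell[i] + 1 - origin[i]) / direction[i]
            t_delta[i] = 1.0 / direction[i]
        elif direction[i] < 0:
            step[i] = -1
            t_max[i] = (cell[i] - origin[i]) / direction[i]
            t_delta[i] = -1.0 / direction[i]
    while all(0 <= cell[i] < grid_size[i] for i in range(3)):
        yield tuple(cell)
        axis = t_max.index(min(t_max))  # advance across the nearest cell face
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```

A cell returned by this generator whose eight corner values do not agree in sign triggers the intersection search described next.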

When the ray enters a nonempty cell, it samples the sign of the interpolated distance at its entry and exit points. There are two cases: 1) the entry and exit points are both at a positive distance from the surface, and 2) the entry point is at a positive distance from the surface and the exit point is at a negative distance. The sign at the entry point cannot be negative, since the entry point is the exit point of the previous cell. In case 2, it is guaranteed that the ray intersects the surface somewhere within the cell. Since we interpolate a cubic surface, we are assured that only one intersection exists within the cell. The exact intersection point is then computed with a Newton-Raphson root-finding method.

In case 1, the ray enters and exits the cell at points that are at a positive distance from the surface. There are two possible subcases. The first is when the distance from the surface at the entry point is smaller than at a point along the ray located an epsilon distance past the entry point (the ray is getting farther from the surface). Since we assume that the surface is smooth enough and does not oscillate within the cell, the distance from the surface cannot increase and later decrease along the ray. This means that if, at the entry point, the distance is positive with a positive derivative, we can assume that the ray does not intersect the surface within the given cell, and the ray continues to the next cell. In the second subcase, the ray nears the surface and may intersect it. There, we start a local minimum search, using the Golden-Section method [17], to find the closest point to the surface along the ray within the given cell. If the closest point also has a positive distance from the surface (Fig. 7a), the ray leaves the cell and continues to the next one. If, during the search, a point along the ray with a negative distance is found (Fig. 7b), there is an intersection with the surface between the two points and the algorithm switches to the Newton-Raphson root-finding method.

The above process is further improved by exploiting ray-to-ray coherency. It is very likely that, if a given ray intersects the surface within a given cell, its adjacent ray will also intersect the surface in that cell. Thus, when a ray intersects the surface, its distance from the viewer is registered and is used if the next adjacent ray enters the same cell. This improves the first estimate of the Newton-Raphson method when applied within the given cell.
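A compact sketch of the per-cell search described above: a Golden-Section minimum search that either rules the cell out or detects a sign change, followed by Newton-Raphson refinement. The function names, tolerance, and iteration limit are illustrative assumptions, not the authors' code.

```python
def find_intersection(f, f_prime, t_in, t_out, tol=1e-6):
    """f(t) is the interpolated signed distance along the ray, f_prime(t) its
    derivative; t_in and t_out parameterize the cell entry and exit points.
    Returns the ray parameter of the hit, or None if the cell is missed."""
    a, b = t_in, t_out
    if f(b) > 0:
        # Case 1: entry and exit are both outside.  Golden-Section search for
        # the minimum of f on [a, b]; stop early if a negative sample shows
        # that the ray has crossed the surface (Fig. 7b).
        inv_phi = (5 ** 0.5 - 1) / 2
        crossed = False
        c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
        while d - c > tol:
            fc, fd = f(c), f(d)
            if fc < 0 or fd < 0:
                crossed = True
                break
            if fc < fd:
                b = d
            else:
                a = c
            c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
        if not crossed:
            return None  # the closest point along the ray is still outside (Fig. 7a)
    # Case 2, or a detected crossing: Newton-Raphson refinement, starting
    # near the entry side (a warm start from the adjacent ray could be used).
    t = a
    for _ in range(20):
        ft = f(t)
        if abs(ft) < tol:
            return t
        dft = f_prime(t)
        if dft == 0:
            break
        t = min(max(t - ft / dft, t_in), t_out)  # keep the iterate inside the cell
    return t
```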

4 IMPLEMENTATION AND RESULTS

The numerical methods described above accelerate the generation of images without any loss of image fidelity. In this section, we introduce more aggressive methods that trade quality for speed. The first method is relatively gentle: the degradation of image quality is rather small and the resulting quality remains relatively high. This method, referred to as the hit-interpolation method, casts rays from every other pixel of the image space.


Fig. 8. The down-sampling method (b) yields a smooth image that is visually close to that generated by the full-fledged sampling method (a). It is improved by an edge-detection mechanism (c) that yields a reconstruction of a smooth silhouette.

The skipped pixels are then approximated by interpolating the coordinates of the ray-surface intersections of their neighboring pixels (a minimal sketch of this step is given below). We emphasize that we do not interpolate the pixel's final color in image space, but rather interpolate the hit coordinates in object space. Since we render the image from up close to the object, the rays hit the object relatively close to each other and, hence, this method hardly harms the image quality. The hit-interpolation method yields a smooth image that is visually close to that generated by the full-fledged sampling method, with only a small degradation of image quality. The major drawback of this method is that the rendered silhouettes are not always smooth enough. To overcome this drawback, an edge-detection mechanism is applied to improve the reconstruction of smooth silhouettes in the following manner: For each skipped pixel, we check whether it is part of the silhouette. This can be determined by testing the intersection values of its surrounding neighbors. If the locations of the surrounding intersection points are not close to each other, the pixel in question is suspected of being part of the silhouette and a ray is cast through that pixel to refine the quality of the silhouette. Fig. 8 shows the down-sampling method with and without refinement of the silhouette. Note that, in both cases, no image-space smoothing was applied.

A cruder method can generate images much faster. The idea is to use a deeper down-sampling and to cast one ray for each cluster of 3 × 3 pixels. The result of the middle pixel is replicated over the other eight pixels by means of an image-space cubic interpolation. This method generates cruder approximations, especially along the silhouettes. The edge-detection procedure can then be applied to soften the rough silhouettes. This cruder method is applied during navigation, and the image quality is then refined with the methods above once the user stops moving.
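A minimal sketch of the hit-interpolation step for a skipped pixel follows, assuming a checkerboard sampling pattern; the names, the gap threshold, and the simple averaging used for the interpolation are our assumptions, not the authors' code.

```python
def fill_skipped_pixel(hits, px, py, max_gap, cast_ray):
    """hits maps pixel coordinates to the object-space ray-surface hit point
    (or None for a miss); rays were cast only at pixels with (px + py) even.
    cast_ray(px, py) casts a full-accuracy ray and is used only when the
    pixel is suspected of lying on a silhouette."""
    neighbors = [hits.get((px + dx, py + dy))
                 for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))]
    known = [h for h in neighbors if h is not None]
    if len(known) < len(neighbors):
        return cast_ray(px, py)        # a neighboring ray missed: silhouette area
    # If the neighboring hit points are far apart in object space, the pixel
    # probably lies on a silhouette, so a real ray is cast through it.
    spread = max(max(abs(a[i] - b[i]) for i in range(3))
                 for a in known for b in known)
    if spread > max_gap:
        return cast_ray(px, py)
    # Otherwise interpolate the hit position in object space (not the color).
    return tuple(sum(h[i] for h in known) / len(known) for i in range(3))
```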

Table 1 shows how each method accelerates the rendering process.

The method requires storing nine values per voxel. However, in our implementation, we used a volume of two bytes per voxel; only the voxels through which the surface passes carry a link to their derivative values, which significantly reduces the storage consumption.

To shade the surface, we need to calculate the normal vector at the ray-surface intersection point. Image-based shading methods are reviewed in [20]; in these methods, the depth values in the z-buffer are used to approximate the normal vectors at the hit points. In our system, we store, at each pixel, the 3D coordinates of the ray-surface intersection. After all the rays have been cast, image space is traversed in a postprocessing step and the normal vectors are approximated from the object-space 3D coordinates. Let P0 be the 3D hit point at which we want to calculate the normal vector. We choose two 3D points from the hit points preserved at the pixel's neighbors. The chosen points must not be too far from P0, since a distant point indicates that its ray hit a different part of the surface; they should also not be too close to P0, in order to avoid numerical stability problems. In addition, the vectors that start at P0 and end at the chosen points should be linearly independent. Let P1 and P2 be the two chosen points, and let V1 and V2 be the vectors from P0 to P1 and to P2. These two vectors define a plane at the hit point.

TABLE 1
The Average Time Taken to Render a 300 × 300 Pixel Image Using the Different Acceleration Modes

These numbers were achieved with a 266 MHz Pentium II PC. *With edge-detection, Newton-Raphson, and minima search.


A standard Phong shading model uses the normal to this plane for shading the point.
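A sketch of this normal estimation from neighboring hit points is given below; the distance thresholds are illustrative assumptions. In practice, the resulting normal would also be oriented toward the viewer before it is used in the Phong model.

```python
def estimate_normal(p0, neighbor_hits, min_d=1e-4, max_d=1.0):
    """p0 is the hit point of the current pixel, neighbor_hits the hit points
    stored at neighboring pixels (None for misses).  Returns a unit normal,
    or None if no suitable pair of neighbors is found."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])

    def norm(v):
        return (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5

    # Keep neighbors that are neither too close (numerical stability) nor too
    # far (their rays likely hit a different part of the surface).
    candidates = [q for q in neighbor_hits
                  if q is not None and min_d < norm(sub(q, p0)) < max_d]
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            v1, v2 = sub(candidates[i], p0), sub(candidates[j], p0)
            n = cross(v1, v2)
            length = norm(n)
            if length > min_d:           # v1 and v2 are linearly independent
                return tuple(c / length for c in n)
    return None
```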

5 CONCLUSIONS

The method we presented can be integrated into an interactive system, enabling the user to wander through the volumetric scene and to approach the surfaces without noticeable artifacts. This is an important feature for applications like virtual colonoscopy [18], [8]. Unlike existing methods, which employ trilinear interpolation, we presented an efficient method using tricubic interpolation that yields C1, and thus smoother, surfaces from binary voxel-based data. The interactive rates we report were achieved on a 266 MHz Pentium II PC. We believe that, with the development of stronger machines, such high order interpolations will become commonplace as a means to achieve higher image quality.

REFERENCES
[1] F. Allamandri, P. Cignoni, C. Montani, and R. Scopigno, "Adaptively Adjusting Marching Cubes Output to Fit a Trilinear Reconstruction Filter," Proc. Visualization in Scientific Computing '98 Workshop, pp. 25-34, 1998.
[2] J.G. Cleary and G. Wyvill, "Analysis of an Algorithm for Fast Ray Tracing Using Uniform Space Subdivision," The Visual Computer, vol. 4, no. 2, pp. 65-83, 1988.
[3] D. Cohen-Or, A. Kadosh, D. Levin, and R. Yagel, "Smooth Boundary Surfaces from Binary 3D Data Sets," Proc. Int'l Workshop Volume Graphics, Mar. 1999.
[4] D. Cohen-Or, E. Rich, U. Lerner, and V. Shenkar, "Real-Time Photo-Realistic Visual Flythrough," IEEE Trans. Visualization and Computer Graphics, vol. 2, no. 3, pp. 255-264, Sept. 1996.
[5] T. Fruhauf, "Ray Casting Opaque Isosurfaces in Non-Regularly Gridded CFD Data," Visualization in Scientific Computing, R. Scanteni, J. van Wijk, and P. Zanarini, eds., pp. 45-57, Wien: Springer, 1995.
[6] S.F.F. Gibson, "Using Distance Maps for Accurate Surface Representation in Sampled Volumes," Proc. IEEE 1998 Symp. Volume Visualization, pp. 23-30, 1998.
[7] B. Hamann, I.J. Trotts, and G.E. Farin, "On Approximating Contours of Piecewise Trilinear Interpolant Using Triangular Rational Quadratic Bezier Patches," IEEE Trans. Visualization and Computer Graphics, vol. 3, no. 3, pp. 215-227, July-Sept. 1997.
[8] L. Hong, S. Muraki, A. Kaufman, D. Bartz, and T. He, "Virtual Voyage: Interactive Navigation in the Human Colon," SIGGRAPH '97 Conf. Proc., pp. 27-34, 1997.
[9] J. Huang, R. Crawfis, and D. Stredney, "Edge Preservation in Volume Rendering Using Splatting," Proc. 1998 Volume Visualization Symp., pp. 63-70, 1998.
[10] A. Kaufman, D. Cohen, and R. Yagel, "Volume Graphics," Computer, vol. 26, no. 7, pp. 51-64, July 1993.
[11] Y. Livnat and C. Hansen, "View Dependent Isosurfaces Extraction," Proc. IEEE Visualization '98, pp. 175-180, 1998.
[12] W.E. Lorensen and H.E. Cline, "Marching Cubes: A High Resolution 3D Surface Construction Algorithm," SIGGRAPH '87 Conf. Proc., vol. 21, pp. 163-170, 1987.
[13] S. Marschner and R.J. Lobb, "An Evaluation of Reconstruction Filters for Volume Rendering," Proc. IEEE Visualization '94, pp. 100-107, 1994.
[14] K. Mueller, T. Moeller, and R. Crawfis, "Splatting without the Blur," Proc. IEEE Visualization '99, pp. 363-370, 1999.
[15] T. Moeller, R. Machiraju, K. Mueller, and R. Yagel, "A Comparison of Normal Estimation Schemes," Proc. IEEE Visualization '97, pp. 19-26, Nov. 1997.
[16] S. Parker, P. Shirley, Y. Livnat, C. Hansen, and P. Sloan, "Interactive Ray Tracing for Isosurface Rendering," Proc. IEEE Visualization '98, pp. 233-238, 1998.
[17] W.H. Press, Numerical Recipes in C. Cambridge Univ. Press, 1990.
[18] O. Shibolet and D. Cohen-Or, "Coloring Voxel-Based Objects for Virtual Endoscopy," Proc. 1998 IEEE Symp. Volume Visualization, pp. 15-22, 1998.


[19] U. Tiede, T. Schiemann, and K.H. Höhne, "High Quality Rendering of Attributed Volume Data," Proc. IEEE Visualization '98, pp. 255-262, 1998.
[20] R. Yagel, D. Cohen, and A. Kaufman, "Normal Estimation in 3D Discrete Space," The Visual Computer, vol. 8, nos. 5/6, pp. 278-291, June 1992.

Arie Kadosh received the BSc degree in mathematics and computer science and the MSc degree in computer science (cum laude) from Tel Aviv University. He is currently the vice president of Research and Development at Global CommerceZone. He has more than 10 years of experience in management, algorithm development, applications, and engineering in the software industry. Prior to joining Global CommerceZone, he led the handwriting recognition development group at Advanced Recognition Technologies Ltd. He received the prestigious Prime Minister's award for computer software in the discipline of artificial intelligence for the year 1997 and holds several registered patents.

Daniel Cohen-Or received the BSc degree cum laude in both mathematics and computer science (1985) and the MSc degree cum laude in computer science (1986) from Ben-Gurion University, and the PhD degree from the Department of Computer Science (1991) at the State University of New York at Stony Brook. He has been an associate professor in the Department of Computer Science at Tel Aviv University since 1995. He was a lecturer in the Department of Mathematics and Computer Science at Ben-Gurion University in 1992-1995. He is a board member of a number of journals, including IEEE TVCG, TVC, C&G, and CGF. Between 1996 and 1998, he served as the chairman of the Central Israel SIGGRAPH Chapter. He has a rich record of industrial collaboration. In 1992-1993, he developed a real-time flythrough with Tiltan Ltd. and IBM Israel for the Israeli Air Force. During 1994-1995, he worked on the development of a new parallel architecture at Terra Ltd. In 1996-1997, he worked with MedSim Ltd. on the development of an ultrasound simulator. He is the inventor of the RichFX and Enbaya streaming technologies. His research interests are in computer graphics and include rendering techniques, visibility, client/server 3D graphics applications, real-time walkthroughs and flythroughs, volume graphics, architectures, and algorithms for voxel-based graphics.

Roni Yagel received the PhD degree in 1991 from the State University of New York at Stony Brook, where he was also a researcher in the Department of Anatomy and the Department of Physiology and Biophysics. He received the BSc degree cum laude and the MSc degree cum laude from the Department of Mathematics and Computer Science at Ben-Gurion University of the Negev, Israel, in 1986 and 1987, respectively. He is general manager of TOR Systems, which develops image guided therapy systems for InSightec. He has been involved in the areas of computer graphics and visualization for the last 15 years. He has published more than 100 technical papers in scientific journals and professional conferences. Prior to joining InSightec, he was general manager of Software Development within Elbit Medical and R&D program manager for Biomedical Applications at Silicon Graphics. He was with The Ohio State University for six years, where he held the position of associate professor of computer and information science and adjunct associate professor in the Advanced Computer Center for Art and Design and the Biomedical Engineering Center. He founded and headed the Volume Graphics Research Group, which pursued industry and government funded research in computer graphics, scientific visualization, virtual reality, and image processing.

