
A Volumetric Approach to Virtual Simulation of Functional Endoscopic Sinus Surgery

Gregory J. Wiet1,2, Roni Yagel3, Don Stredney2, Petra Schmalbrock4, Dennis J. Sessanna2, Yair Kurzion3, Louis Rosenberg5, Michael Levin5, Kenneth Martin5

1 Department of Otolaryngology, The Ohio State University Hospitals, Columbus, OH
2 The Ohio Supercomputer Center, Columbus, OH
3 Department of Computer and Information Science, The Ohio State University, Columbus, OH
4 Department of Radiology, The Ohio State University Hospitals, Columbus, OH
5 Immersion Corporation, San Jose, CA

Abstract

Advanced display technologies have made the virtual exploration of relatively complex models feasible in many applications. Unfortunately, only a few human interfaces allow natural interaction with the environment. Moreover, in surgical applications, such realistic interaction requires real-time rendering of volumetric data, placing an overwhelming performance burden on the system. We report on the work of an interdisciplinary group developing a virtual reality system that provides intuitive interaction with volume data by employing real-time volume rendering and force feedback (haptic) sensations. We describe our rendering methods and haptic devices and explain the utility of this system in the real-world application of Endoscopic Sinus Surgery (ESS) simulation.

1. Introduction

Human investigation of spatial information requires the subtle integration of several sensory modalities (visual, auditory, and proprioceptive, the latter being internal information such as joint angles). Vision is the primary sensory modality, as it is instantaneous, provides a broad bandwidth, and allows the user to sample the environment with little or no physical navigation. Navigation is commonly used to evaluate environments and verify spatial comprehension previously provided by sight. Objects, however, are predominantly investigated both visually and manually. Manual interaction provides the movement required to verify size and shape (like navigation) but additionally provides tactile information such as forces and textures. Most computer interfaces suffer from a lack of multisensory information, providing only a passive witnessing of information. The human capacities for manual and visual investigation are inherent and develop early, as is evident in infant object exploration before the development of the locomotive skills required for navigation [16]. The convergence of visual, auditory, and somatosensory inputs has been identified by Stein [17] in the superior colliculus. Bi-modal neural involvement has been identified in the putamen, parietal area 7b, and inferior area 6 [6] and is presumed integral to the representation of extrapersonal space. The emphasis and value of this multimodal interaction are clearly evident in the amount of cerebral area apportioned to visual and dexterous activity.

Surgery is a human activity that employs both visual and manual investigation. The user must integrate haptic (manual) and visual information and correlate this sensory information with previously held conceptual information, such as an internal representation of the regional anatomy. This representational map serves as a mental paragon against which the user performs the spatial processing required by the task, including orientation, position fixing, spatial reasoning (including spatial and temporal prediction), and performance (including execution, analysis, and re-evaluation). The representation is used to generate and re-calibrate virtual trajectories of hand and arm movements that in turn program the actual physical trajectories of the limbs [1][4]. This mental representation is modified by direct sensory information, including visual and haptic cues, proprioception of the joints, and visual feedback of the user's limb position [5].

In minimally invasive surgery, surgeons insert various tools through one or two small incisions. Their visual feedback is limited to the view they get through the eyepiece of an endoscope. This type of surgery requires the spatial localization of the surgical instrument through the integration of such indirect observation via the endoscope with the direct haptic cues provided by the surgical instruments. Most endoscopes provide only a monocular view of the scene, although stereo endoscopes are being introduced into surgical practice. In the following section we describe one type of endoscopic surgery we are attempting to simulate - endoscopic sinus surgery.

The requirements for a computer system to simulate complex interaction, such as that required in endoscopic surgery, are many. The system must generate realistic visual and haptic cues that provide temporal and spatial coherence with the data manipulation. It must provide high-fidelity haptic and visual cues to "convince" and thereby engage the user. And finally, it must provide interactions and behaviors that are similar enough to the real world to maximize transfer to actual practice. We nevertheless believe that real-time interaction is the most essential requirement; practitioners prefer to operate in an interactive environment even if it is rendered somewhat less realistically. In the next section we summarize the application area of training for endoscopic surgery. In the following sections we describe our system and its main components. We conclude with some discussion and our future plans.

2. Endoscopic Sinus Surgery (ESS)

Endoscopic Sinus Surgery (ESS) is currently the procedure of choice for the treatment of medically resistant, recurrent acute and chronic sinusitis. Its rationale is based on the simple premise of improving the natural drainage of the sinuses into the nasal airway. Endoscopic sinus surgery was first popularized by Messerklinger [12] and furthered by Stammberger [15]. In the past, sinus surgery was composed of rather crude techniques, including blind probing and removal of tissue without direct visualization, or the creation of external facial incisions to gain direct access. The further understanding of sinus anatomy and the advent of endoscopic techniques have allowed a less invasive, more precise, and physiologically sound method of treating patients with these disease entities.

The paranasal sinuses are composed of a series of labyrinthine passageways and spaces located in the lower forehead, between the eyes, at the center of the cranial base, and behind the cheeks. Several factors complicate surgery in these areas. First, the sinuses are surrounded by vital structures such as the eye socket and its contents, the internal carotid artery, which supplies blood to the brain, and the brain itself; such structures lie within millimeters of the sinus boundaries. Second, to avoid external incisions, access to the sinuses is through the nostril, which precludes direct visualization and manipulation of the sinus structures by the naked eye. The technique consists of visualizing landmark structures within the nasal cavity and sinuses, excising and "biting" out diseased tissue, and probing and suctioning under direct visualization through the endoscope or via a video monitor with an attached endoscopic camera (see Figure 1). The nostril becomes a fulcrum of rotation, as the scope and other instruments must pass through this narrow area to gain access to the deeper sinus structures.

Recognition of key structures by both visual and haptic cues is paramount to safe, minimally invasive, and adequate technique.

Figure 1: A view of the nasal cavity as seen through the endoscope eyepiece, showing a cutting tool (bottom left) performing sectioning.

2.1 Training for Endoscopic Surgery

Today the techniques of endoscopic surgery are taught by a stepwise approach. First, the student of endoscopic surgery must master the detailed anatomy of the region. Next, detailed dissection of cadaver specimens is done to verify the model and gain a "feel" for the surgical techniques. Cadaver material is often in limited supply and, when embalmed, does not exhibit the same subtle tissue characteristics as live tissue. Next, the student observes surgery on live patients under the direction of a mentor. Last, the student begins actually performing the procedure on live patients under the mentor's direction. Beginning with simple tasks and progressing in a stepwise fashion to more complicated procedures, the novice surgeon gains experience and eventually becomes proficient. The learning process is labor intensive and can be frustrating because models and cadavers can show only limited variability and interaction. There is no way to actually practice the techniques learned without using either cadaver material or live subjects.

A realistic simulator of the paranasal sinuses will allow a quicker understanding of the three-dimensional anatomy and provide a safe and realistic environment for the novice surgeon to learn the techniques. A true simulator must have realistic haptic as well as visual display. Many of the surgical maneuvers require subtle haptic as well as visual cues. For example, in sinus surgery many of the tissues look similar, and it is their "feel" that allows the expert surgeon to distinguish between structures such as another air-filled passage and the thin bone that covers the brain and eye. One can be removed aggressively, while the other's disruption may lead to a disastrous result for the patient. Haptic display is essential if the simulation is to be more than just a "fly through". Finally, to convey a sense of reality the system must deliver real-time, realistic rendering of the 3D environment.

Our previous work includes the development of a system for the simulation of regional anesthesia [11]. This system integrates visual, haptic, and speech recognition for simulating an epidural block, a regional anesthesia technique commonly used in obstetrics. In addition, our group has developed a real-time volumetric system for use in pre-planning the removal of brain and cranial base tumors [18]. In this paper, however, we focus on a specific application involving the integration of haptic and visual display technology for simulating functional endoscopic sinus surgery.

2.2 The ESS Simulator

Our system is designed to support interaction with a volumetric model of the anatomical region while delivering haptic feedback (force reflection) to the user. Because the user must dissect and view tissue regions beyond the outer surface, the system must maintain and manipulate a volumetric representation of the nasal cavity. The volume-based approach has many advantages over surface-based models in the arena of medical applications. For surgery simulation, we note especially the efficient collision detection and more powerful deformation schemes described in Section 5.

The Endoscopic Sinus Simulator consists of four main components: the Forceps Simulator, the Endoscope Tracking Unit, the Control Computer and Interface Card, and the Host Computer (see Figure 2). The Forceps Simulator is the heart of the Endoscopic Sinus Surgery Simulator. It includes the mounting platform, head assembly, calibration fixture, and a modified Impulse Engine 3GM with a 3-axis gimbal assembly and integrated forceps. The IE-3GM is a 3-degree-of-freedom haptic interface that can track positions in Cartesian or cylindrical space and apply force in those same dimensions. The 3-axis gimbal extends the utility of the 3GM by tracking the orientation of the forceps relative to the tool endpoint.

Figure 2: The ESS interface.

The Impulse Engine 3GM system is mounted to the bottom level of the mounting platform. It protrudes up through the top plate and into the head assembly, where the 3-axis gimbal and forceps assemblies are mated to it. The gimbal has been optimized to provide the required resolution while remaining small enough to permit motion throughout the sinus cavities. The Endoscope Tracking Unit is based upon the MicroScribe 3DX. The MicroScribe is a precision spatial tracking mechanism that can resolve motions in 6 degrees of freedom. This system is equipped with a special stylus roll sensor to track the endoscope rotation. The sensor has been ergonomically designed into a simulated endoscope eyepiece. The kinematics and mounting configuration of the MicroScribe enable it to accurately track endoscope motion throughout the entire head without interfering with the Forceps Simulator.

The control computer is an IBM-compatible system with a Pentium processor. It uses a custom-designed ISA interface board configured to provide access to the 6 encoders and the analog forceps position sensor; the board can also generate 3 analog signals that control the motors. The control computer communicates with a Silicon Graphics workstation. This two-computer arrangement takes advantage of the SGI's superior rendering and 3D modeling capabilities while still providing real-time control of the haptic system at servo rates above 2 kHz. The position and orientation information from the Forceps Simulator and the Endoscope Tracking Unit is sent to the workstation 60 times per second. The SGI then rebuilds the model to reflect these interactions and passes a revised haptic model back to the Forceps Control Computer. This computer calculates the required forces in real-time from the haptic model and the current position and velocity data and transmits them to the Impulse Engine. Because the haptic algorithm runs above 2 kHz while the model is updated at only 60 Hz, the control algorithm requires knowledge of the local space around the forceps tip, so that it can compute forces for small motions without waiting for an update from the SGI.
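The paper does not give the control law itself, so the following Python sketch only illustrates the dual-rate idea: the host supplies a cached local haptic model (here a hypothetical plane with stiffness and damping), and the fast servo loop computes a spring-damper force from it between the slower model updates. All names and constants (LocalModel, stiffness, damping) are illustrative assumptions, not the simulator's actual control code.

```python
# Dual-rate sketch: the host updates a local haptic model at ~60 Hz, while a
# faster servo loop (>2 kHz in the real system) computes forces from that
# cached model for small motions between updates.

import numpy as np
from dataclasses import dataclass

@dataclass
class LocalModel:
    """Local description of the space around the tool tip (assumed form)."""
    contact_point: np.ndarray   # a point on the nearest tissue surface
    normal: np.ndarray          # outward surface normal at that point
    stiffness: float            # N/m, depends on tissue type
    damping: float              # N*s/m

def servo_force(tip_pos, tip_vel, model):
    """Spring-damper force from penetration into the cached local model."""
    penetration = np.dot(model.contact_point - tip_pos, model.normal)
    if penetration <= 0.0:
        return np.zeros(3)          # tool is outside the tissue surface
    normal_vel = np.dot(tip_vel, model.normal)
    magnitude = model.stiffness * penetration - model.damping * normal_vel
    return max(magnitude, 0.0) * model.normal

# One 60 Hz model update followed by several fast servo ticks:
model = LocalModel(contact_point=np.array([0.0, 0.0, 0.0]),
                   normal=np.array([0.0, 0.0, 1.0]),
                   stiffness=300.0, damping=2.0)
for tip_z in (-0.001, -0.002, -0.003):          # small motions between updates
    force = servo_force(np.array([0.0, 0.0, tip_z]),
                        np.array([0.0, 0.0, -0.05]), model)
    print(force)
```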

3. Volume Representation

Our screens are composed of a two-dimensional array (raster) of pixels, each representing a unit of area. A volume is a three-dimensional array of cubic elements (voxels), each representing a unit of space. Just as a screen can be used to store two-dimensional objects such as lines and digital pictures, a volume can be used to store three-dimensional geometric objects and three-dimensional pictures. The main disadvantage of volumes is their immense size: a medium-resolution volume of 256x256x256 requires storage for about 16.8 million voxels. However, volumes have distinct advantages: they can represent the interior of objects; rendering and processing do not depend on the object's complexity or type, only on the volume resolution; and they easily support operations such as subtraction, addition, collision detection, and deformation. For a complete comparison see [7].
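As a minimal illustration of why these operations come almost for free with a voxel representation (this is not the simulator's actual data structure), the sketch below stores a volume as a dense 3D array, checks its memory footprint, carves out a spherical region, and tests for collisions with simple array operations; the 8-bit density values are an assumption.

```python
# A volume is just a dense 3D array, so carving tissue and testing
# tool/tissue collisions reduce to simple array operations.

import numpy as np

volume = np.zeros((256, 256, 256), dtype=np.uint8)    # one byte per voxel
print(volume.nbytes)                                   # 16,777,216 bytes

volume[100:160, 100:160, 100:160] = 200                # some "tissue"

# Subtraction: remove every voxel inside a small spherical tool tip.
zz, yy, xx = np.ogrid[0:256, 0:256, 0:256]
tool = (xx - 120) ** 2 + (yy - 120) ** 2 + (zz - 120) ** 2 <= 5 ** 2
volume[tool] = 0

# Collision detection: does the tool region still touch non-empty voxels?
print(bool(np.any(volume[tool])))                      # False after carving
```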

Figure 3: Slices through a volume. For a back-to-front display we start at the plane farthest away and advance to the closest one.

Although there are many methods for rendering volumes and various ways to accelerate and optimize them, we employ two algorithms, splatting [17] and slicing [2], which we briefly describe here. Splatting is based on transforming all voxels from their locations in data coordinates to viewing (screen) coordinates. The voxels are then traversed and drawn to the screen in back-to-front (BTF) order, that is, from the voxel farthest from the eye point to the one closest to it. That way, we can simply write every voxel we visit, overwriting whatever was previously drawn to the screen at that point. Each voxel is not rendered to the screen as a single point; instead, its energy is distributed over a small neighborhood of pixels. The image a voxel creates by being smeared across this pixel neighborhood is called a splat. The slicing algorithm is based on another observation [2]: if one embeds a plane in the space occupied by the volume and displays on that plane all the voxels it intersects, we obtain a slice through the volume. If this process is repeated for multiple planes, we obtain something similar to what is shown in Figure 3. By drawing the slices in BTF order we produce an image of the volume.

4. Rendering

We have developed two real-time volume renderers, the Volume Splatter and the Volume Slicer. The main use of the Volume Splatter is to provide higher-quality rendering in the absence of large texture memory. The main use of the Volume Slicer is to support real-time volume deformation, which relies heavily on the availability of large texture memory.

4.1 The Volume Splatter

The Volume Splatter relies on the notion of a fuzzy voxel set, which consists of a subset of the volume's voxels. For each voxel in the original volume we evaluate a transfer function that maps the gradient and the density of the given voxel to an importance number. We include a voxel in the fuzzy set if its importance value is large enough (above some user-defined threshold). The volume splatting algorithm takes a fuzzy set as input and traverses it in back-to-front order. For each member of the set it renders a rectangle facing the viewer, textured with a splat texture. The splat texture contains an image of a fuzzy circle, with an opaque center and a transparent circumference [9]. We have also implemented a faster version of the rendering algorithm in which, instead of rectangles, we render enlarged points on the screen. These points have constant opacity and therefore generate images with some visible artifacts; however, because points are very simple graphic primitives, this method supports higher rendering speeds.
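A rough sketch of the fuzzy-voxel-set idea follows; the particular importance function (density weighted by gradient magnitude) and the threshold value are assumptions for illustration, since the paper does not specify the transfer function.

```python
# Extract a fuzzy voxel set and order it back to front for splatting.

import numpy as np

def extract_fuzzy_set(volume, threshold):
    """Return (positions, importance) for voxels whose importance exceeds threshold."""
    gz, gy, gx = np.gradient(volume.astype(np.float32))
    grad_mag = np.sqrt(gx * gx + gy * gy + gz * gz)
    importance = volume * grad_mag             # assumed transfer function
    keep = importance > threshold
    positions = np.argwhere(keep)              # (z, y, x) of every kept voxel
    return positions, importance[keep]

def back_to_front_order(positions, view_dir):
    """Indices that traverse the splats from farthest to nearest along view_dir."""
    depth = positions @ view_dir               # signed distance along the view axis
    return np.argsort(depth)[::-1]

volume = np.random.rand(32, 32, 32).astype(np.float32)
pos, imp = extract_fuzzy_set(volume, threshold=0.5)
order = back_to_front_order(pos.astype(np.float32), np.array([0.0, 0.0, 1.0]))
# In the renderer, each splat in `order` would be drawn as a textured rectangle
# (or an enlarged point) and composited over what is already in the frame buffer.
```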

Figure 4: A view from within the nasal cavity, rendered by the Volume Splatter and provided to the surgeon during surgery simulation.

We control the material properties of the splats; however, for reasons of speed, we vary only the opacity and diffuse reflection of the material for each splat. We define multiple light sources (infinite and local) and use the hardware-assisted lighting routines to shade the splats. We exploit rendering hardware to provide real-time performance; transforming the rectangles, scan-converting the textured rectangles, and compositing colors and opacities are all performed by the SGI graphics hardware. Figure 4 shows a view from within the nasal cavity, as rendered by the Volume Splatter. We run our splat renderer on a multiprocessor Onyx with RealityEngine graphics hardware and a single Raster Manager (RM) board. Extracting a fuzzy set from a 128³ volume takes approximately 60 seconds. By choosing the splat threshold, we control the resulting number of splats in the fuzzy set. For a fuzzy set with 50,000 splats lit by four light sources at infinity, we get rendering rates of about twenty frames per second (20 Hz) for point splats and about 7 Hz for textured rectangular splats. Although initialization has to be repeated whenever the user changes the transfer function or loads another dataset, the data is static for the most significant visualization operations, such as surgery simulation. In such applications, the Volume Splatter provides a very attractive rendering speed.

4.2 The Volume Slicer

Commercially available texture-mapping hardware allows three-dimensional rasters (volumes) to be mapped onto polygons. These three-dimensional rasters (called 3D texture maps) are mapped onto polygons in 3D space using either zero-order or first-order interpolation. By rendering polygons that slice the volume perpendicular to the view direction, one generates a view of a rectangular volume data set [2]. Rendering these polygons from back to front and blending them into the frame buffer generates a correct image of the volume. Figure 5 shows an example image rendered by the Volume Slicer. Our implementation runs on SGI workstations with 3D texture capabilities. We report results for a CRIMSON RE with an R4000 running at 100 MHz. Our implementation uses the largest single 3D texture map that fits in the graphics hardware's texture memory. On our SGI RealityEngine, we can fit a 128x128x64 3D texture map entirely in texture memory, so all our results refer to this 3D texture map size. We expect no penalty when running our implementation on a machine with larger texture memory, as the implementation does not otherwise depend on the volume size.
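The following is a software-only illustration of the slicing idea (the actual renderer lets the 3D texture hardware perform the resampling and blending): axis-aligned slices are composited into a frame buffer from back to front with the standard "over" operator. The axis-aligned view direction and the simple opacity mapping are assumptions.

```python
# Back-to-front compositing of slices through a volume.

import numpy as np

def slice_and_composite(volume, opacity_scale=0.05):
    """Composite axis-aligned slices of `volume` back to front along the z axis."""
    depth = volume.shape[0]
    image = np.zeros(volume.shape[1:], dtype=np.float32)
    for z in range(depth - 1, -1, -1):          # farthest slice drawn first
        color = volume[z]
        alpha = np.clip(color * opacity_scale, 0.0, 1.0)
        image = color * alpha + image * (1.0 - alpha)   # "over" blend into buffer
    return image

volume = np.random.rand(64, 128, 128).astype(np.float32)
print(slice_and_composite(volume).shape)        # (128, 128)
```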

Figure 5: A view into the nasal cavity, rendered by the Volume Slicer.

5. Volume Deformation

Our approach to deformation, first introduced in [8], applies the required shape transformation not to the objects in the scene but rather to the rendering agents. For example, one can render a 3D scene by sending sight rays from the eye through each pixel on the screen; a screen pixel is painted with the color of the first object the ray hits. In that case, our deformation procedure converts the straight ray into a curved ray. We employ a mechanism, called a deflector, to define and model complex deformations. A deflector is a local transform, positioned in space, that bends all rays passing through its defined area of influence (see Figure 6). We depict this area of influence, which we call the gravity field, as a circle (sphere). The deflector affects rays only within a limited area - a feature that ensures operator locality and allows the intuitive modeling of spatial deformations.

In order to generate local deformations in space, we have to transform all sight rays in a direction opposite to that of the desired visual effect. For example, in order to create a bump facing right on a model of a box (see Figure 6), we must deflect all the sight rays passing along the right side of the box to the left. The four linear rays in Figure 6a do not intersect the box. When we transform them into curved rays, R1 and R2 intersect the box, while R3 does not (Figure 6b). The ray R4, as well as those parts of R1, R2, and R3 that do not intersect the deflector's gravity field, remains intact. In the final image, the color of the box will show in the pixels where R1 and R2 emerged and not in the ones where R3 and R4 emerged. This is equivalent to deforming the box description by adding a bump to its right side (see Figure 6a). This type of deflector is called a translation deflector, since it shifts the rays in the direction of the gravity vector. An example of such a deflection is shown in Figure 7a.


Figure 6: (a) An object (pink box) and a deflector (yellow circle and blue gravity vector) being ray traced by the red rays R1 to R4. The first three rays intersect the gravity field of the deflector, which pulls them to the left (dashed red arrow). (b) The rays are bent (blue solid lines) and now pass through the object.

We often want to introduce slight discontinuities into a model; for instance, we may wish to simulate a knife cutting through it. The desired deformation in this case is not continuous along the cutting plane: all of the original matter is preserved but is displaced in position. A small modification to the deflection transformation makes it perform such locally discontinuous deformations. Figure 7b shows an example of a deflector function achieving the desired deformation.
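To make the deflector concept concrete, here is a hedged sketch of a translation deflector acting on the sample points of a sight ray; the quadratic falloff toward the boundary of the gravity field is an assumption, and [8] defines the actual deflection functions.

```python
# A translation deflector: samples inside the spherical gravity field are
# shifted along the gravity vector, with a falloff so the deformation is local.

import numpy as np

def deflect(sample, center, radius, gravity):
    """Shift one sample point of a sight ray if it lies inside the gravity field."""
    offset = sample - center
    dist = np.linalg.norm(offset)
    if dist >= radius:
        return sample                        # outside the field: ray is unaffected
    falloff = (1.0 - dist / radius) ** 2     # 1 at the center, 0 at the boundary
    return sample + falloff * gravity

# Sampling a straight ray and deflecting each sample produces the "curved ray":
origin, direction = np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0])
center, radius, gravity = np.array([0.0, 0.0, 0.0]), 2.0, np.array([-0.5, 0.0, 0.0])
curved = [deflect(origin + t * direction, center, radius, gravity)
          for t in np.linspace(0.0, 10.0, 21)]
# Volume lookups are then done at the deflected positions, which is equivalent
# to deforming the volume in the opposite direction.
```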



Figure 7: Examples of space deformation by a single primitive deflector: (a) starting with an MRI head scan, we pull the head out by the nose; (b) activating a discontinuous deflector on the MRI head.

5.1 Interactive Deformation

We observe that the operation of mapping a three-dimensional raster onto a polygon in space is very similar to our ray deflection technique. In both methods we associate a point in space with a point in a 3D raster: both a polygon vertex and a point along a sight ray are points in space, and both a three-dimensional texture map and an undeformed volume are 3D rasters. In order to use this hardware-assisted volume rendering technique for volume deformation we must compensate for the main difference between the two mapping techniques: in hardware-assisted volume rendering we map a three-dimensional raster onto a polygon by interpolating the 3D raster coordinates of the polygon vertices, so the mapping is linear across an entire polygon. In the ray deflection technique we do not perform any linear interpolation but instead map each point along a sight ray separately. We compensate for the linear interpolation by tessellating the polygons into smaller polygons. The tessellation limits the extent of the linear interpolation and provides better control of the deformation. This creates an obvious trade-off between the granularity of the polygon tessellation and the visual quality of the resulting images: the finer the tessellation, the more accurate our approximation of the deformation, and the more computationally intensive the drawing process. Performance depends on the number of deflectors, their size, the tessellation resolution, and the screen resolution. We observe a rate of 3 Hz for a simple case (256² image, 4 deflectors, 125 slices, 200 polygons/slice), degrading to 0.3 Hz for a complex case (512² image, 27 deflectors, 125 slices, 880 polygons/slice). When no deformation is performed, we observe 14 Hz for a 128² image, going down to 1 Hz for a 512² image.
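The tessellation compromise can be sketched as follows: each slice polygon is subdivided into a grid of small quads, and only the texture coordinate of each grid vertex is deflected, so the texture hardware interpolates linearly inside every quad. The grid resolution and the simple falloff-based deflector are illustrative assumptions.

```python
# Tessellate one slice plane and deflect its texture coordinates per vertex.

import numpy as np

def deflect(p, center, radius, gravity):
    """Same translation deflector as in the previous sketch (assumed falloff)."""
    d = np.linalg.norm(p - center)
    return p if d >= radius else p + (1.0 - d / radius) ** 2 * gravity

def tessellated_slice(z, size, subdivisions, deflectors):
    """Vertex positions and deflected texture coordinates for one slice plane."""
    coords = np.linspace(0.0, size, subdivisions + 1)
    vertices, texcoords = [], []
    for y in coords:
        for x in coords:
            p = np.array([x, y, z])
            t = p.copy()                        # undeformed volume coordinate
            for center, radius, gravity in deflectors:
                t = deflect(t, center, radius, gravity)
            vertices.append(p)                  # where the small quad is drawn
            texcoords.append(t)                 # where the 3D texture is sampled
    return np.array(vertices), np.array(texcoords)

# Finer subdivisions follow the deflector more closely but cost more polygons:
deflectors = [(np.array([5.0, 5.0, 5.0]), 2.0, np.array([-0.5, 0.0, 0.0]))]
verts, tex = tessellated_slice(z=5.0, size=10.0, subdivisions=8, deflectors=deflectors)
print(verts.shape, tex.shape)                   # (81, 3) (81, 3)
```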

7. Interaction

The design of our system consists of a mock patient head housing the electromechanical mechanisms that provide force reflection to the user (see Figure 2). We decided to provide the user with a physical interface for several reasons: 1) the face of the patient provides surgeons with external landmarks for reference to internal positions; 2) it provides the user with a rest and fulcrum for the hands, as is common in surgical practice; and 3) the nostrils of the model provide the fulcrum for the endoscopic instruments. As the user moves the instruments, sensors monitor the position and orientation of the tool within a given three-dimensional workspace. The computer then dictates force sensations to the instruments by generating forces on the drive mechanism. We have completed the integration of one-degree-of-freedom (1-DOF) and 4-DOF haptic devices and are currently building the haptic device to be housed in the head model.

Depending on the renderer used, the MicroScribe can be used to manipulate the data in various ways. When the Volume Splatter is employed, the MicroScribe is used as a subtraction or addition tool. In subtraction mode, voxels touched by the user are removed and those underneath them are exposed, simulating the behavior of some surgical tools (e.g., the hummer) that operate like milling or sanding tools. When the active renderer is the Volume Slicer, the tip of the MicroScribe can be used to define and move a slicing plane through the solid, displaying arbitrary cuts through the volume. Further, we can associate with the tip of the MicroScribe a pulling force that can be used to interactively deform the volume in real-time, thereby simulating surgical procedures such as palpation and dissection.
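As an illustration of the arbitrary-cut mode (a sketch, not the simulator's implementation), the tracked tool tip supplies a point and an orientation that define a slicing plane, and the volume is resampled on that plane; the plane extent and the nearest-neighbor sampling are assumptions made for brevity.

```python
# Resample the volume on an oblique plane defined by the tracked tool tip.

import numpy as np

def oblique_slice(volume, origin, u_axis, v_axis, half_size=32):
    """Resample `volume` on the plane spanned by u_axis/v_axis through origin."""
    steps = np.arange(-half_size, half_size)
    image = np.zeros((2 * half_size, 2 * half_size), dtype=volume.dtype)
    for i, du in enumerate(steps):
        for j, dv in enumerate(steps):
            p = origin + du * u_axis + dv * v_axis
            idx = np.round(p).astype(int)                  # nearest voxel
            if np.all(idx >= 0) and np.all(idx < np.array(volume.shape)):
                image[i, j] = volume[tuple(idx)]
    return image

volume = np.random.rand(128, 128, 128).astype(np.float32)
origin = np.array([64.0, 64.0, 64.0])                       # tool tip position
u_axis = np.array([1.0, 0.0, 0.0])                          # in-plane axes from
v_axis = np.array([0.0, 0.7071, 0.7071])                    # the tool orientation
print(oblique_slice(volume, origin, u_axis, v_axis).shape)  # (64, 64)
```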

8. Discussion

Several drawbacks exist. First, when the surgeon uses the Volume Splatter the image degrades as the "endoscope" approaches objects for a closer look; this is a result of the limited acquisition resolution of the dataset. Progress is being made in this area in MRI [14][20]; higher resolution will allow a more complete visual display of the regional anatomy. Another disadvantage of the splatter is that its performance depends on the number of splats. We plan to incorporate more powerful visibility preprocessing tools [19] that can single out all voxels that are not visible from the current viewpoint. We also plan to integrate our two rendering methods in such a way that the system can switch from one renderer to the other according to the task at hand, in a way completely transparent to the user. The computation of forces from the medical data is also under exploration. Surface normals and voxel tags, which are computed at the rendering and segmentation phases, respectively, can be used to assist in the efficient generation of force feedback. The system, once complete, will integrate realistic display, simulated surgery operators, and haptic feedback. We plan to conduct clinical trials to evaluate the system's fidelity and training value.
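One plausible way the precomputed normals and tags could feed force generation is sketched below; the per-tag stiffness table and the spring model are assumptions for illustration, not the simulator's published force computation.

```python
# Force feedback sketch: the local gradient supplies a surface normal, a
# per-voxel segmentation tag selects tissue stiffness, and a spring model
# turns an estimated penetration depth into a force.

import numpy as np

STIFFNESS_BY_TAG = {0: 0.0, 1: 50.0, 2: 400.0}   # air, mucosa, bone (assumed)

def feedback_force(density, tags, tip_index, penetration):
    """Force at an integer voxel index, given an estimated penetration depth."""
    z, y, x = tip_index
    # Central differences give the local gradient, i.e. the surface normal.
    normal = np.array([
        density[z + 1, y, x] - density[z - 1, y, x],
        density[z, y + 1, x] - density[z, y - 1, x],
        density[z, y, x + 1] - density[z, y, x - 1],
    ])
    norm = np.linalg.norm(normal)
    if norm == 0.0:
        return np.zeros(3)
    k = STIFFNESS_BY_TAG.get(int(tags[z, y, x]), 0.0)
    return k * penetration * (normal / norm)      # push back along the normal

density = np.random.rand(64, 64, 64).astype(np.float32)
tags = np.full((64, 64, 64), 1, dtype=np.uint8)
print(feedback_force(density, tags, (32, 32, 32), penetration=0.002))
```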

9. Acknowledgments

This project was supported by the National Science Foundation under grant CCR-9211288, the Department of Defense USAMRDC 94228001, and the Department of Energy DE-FG03-94ER. Surgical equipment was donated by Karl Storz Endoscopy America, Inc. We thank Scott King, Naeem Shareef, and Ed Swan for their work in the implementation of several related software tools.

10. Bibliography

[1] Bizzi E., Accornero N., Chapple W., and Hogan N., "Posture Control and Trajectory Formation During Arm Movement", J. Neuroscience, 4:2738-2744, 1984.
[2] Cabral B., Cam N., and Foran J., "Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware", Proceedings of the 1994 Symposium on Volume Visualization, October 1994, pp. 91-98.
[3] Feldman A.G., Adamovich S.V., Ostry D.J., and Flanagan J.R., "The Origin of Electromyograms - Explanations Based on the Equilibrium Point Hypothesis", in Multiple Muscle Systems: Biomechanics and Movement Organization, Winters and Woo, eds., Springer-Verlag, New York, 1990, pp. 195-213.
[4] Flash T., "The Control of Hand Equilibrium Trajectories in Multi-joint Movements", Biol. Cybern., 57:257-274, 1987.
[5] Ghez C., Hening W., and Gordon J., "Organization of Voluntary Movement", Curr. Opin. Neurobiol., 1:664-671, 1991.
[6] Graziano M.S. and Gross C.G., "A Bimodal Map of Space: Tactile Receptive Fields in the Macaque Putamen with Corresponding Visual Receptive Fields", Exp. Brain Res., 97:96-109, 1993.
[7] Kaufman A., Cohen D., and Yagel R., "Volumetric Graphics", IEEE Computer, 26(7):51-64, July 1993.
[8] Kurzion Y. and Yagel R., "Volume Deformation Using Ray Deflectors", The 6th Eurographics Workshop on Rendering, Dublin, June 1995, pp. 21-32.
[9] Laur D. and Hanrahan P., "Hierarchical Splatting: A Progressive Refinement Algorithm for Volume Rendering", Computer Graphics, 25(4):285-288, July 1991.
[10] Leslie A.M. and Keeble S., "Do Six-Month-Old Infants Perceive Causality?", Cognition, 25:267-287, 1987.
[11] McDonald J.S., Rosenberg L.B., and Stredney D., "Virtual Reality Technology Applied to Anesthesiology", in Interactive Technology and the New Paradigm for Health Care, Proceedings of Medicine Meets Virtual Reality III, R.M. Satava, ed., IOS Press, Amsterdam, pp. 237-243, 1995.
[12] Messerklinger W., "Endoscopy of the Nose", Urban and Schwarzenberg, Inc., Baltimore, Maryland, 1978.
[13] Rohlf J. and Helman J., "IRIS Performer: A High Performance Multiprocessing Toolkit for Real-Time 3D Graphics", Proceedings of SIGGRAPH '94, July 1994, pp. 381-395.
[14] Schmalbrock P., Pruski J., Sun L., Rao A., and Monroe J.W., "Phased Array RF Coils for High Resolution Imaging of the Inner Ear and the Brain Stem", J. Comp. Assist. Tom., 19:8-14, 1995.
[15] Stammberger H., "Functional Endoscopic Sinus Surgery: The Messerklinger Technique", B. C. Decker, Philadelphia, Pennsylvania, 1991.
[16] Streri A., "Seeing, Reaching, Touching - The Relations between Vision and Touch in Infancy", MIT Press, 1993.
[17] Westover L., "Footprint Evaluation for Volume Rendering", Computer Graphics, 24(4):367-376, August 1990.
[18] Wiet G.J., Schuller D.E., Goodman J., Stredney D.L., Bender C.F., Yagel R., Swan J.E., and Schmalbrock P., "Virtual Simulations of Brain and Cranial Base Tumors", Proceedings of the 98th Annual Meeting of the American Academy of Otolaryngology - Head and Neck Surgery, San Diego, California, September 1994.
[19] Yagel R. and Ray W., "Visibility Computation for Efficient Walkthrough of Complex Environments", PRESENCE, 5(1):1-16, Winter 1996.
[20] Ying K., Schmalbrock P., and Clymer B.D., "Echo-Time Reduction for Submillimeter Resolution Imaging with a Phase Encode Time Reduced Acquisition Method", Magn. Reson. Med., 33:82-87, 1995.
