AN INTERFACE FOR PRECISE AND COMFORTABLE 3D WORK WITH VOLUMETRIC MEDICAL DATASETS

Luis Serra1, Ph.D., Ng Hern1, M.Sc., Chua Gim Guan1, B.Sc., Eugene Lee1, B.Sc., Yen H. Lee1, M.Sc., Yeo T.T.3, M.D., Chumpon Chan2, M.D. and Ralf A. Kockro3, M.D.

1 Kent Ridge Digital Labs (formerly the Institute of Systems Science, ISS), 21 Heng Mui Keng Terrace, Singapore 119613, [email protected]
2 Dept. of Neurosurgery, Singapore General Hospital, Singapore
3 Dept. of Neurosurgery, Tan Tock Seng Hospital, Singapore

Abstract

We have developed a 3D/2D paradigm of interaction that combines precise 3D manipulation of volumetric data with unambiguous widget interaction. Precise 3D interaction is ensured by a combination of resting the lower arms on an armrest and pivoting the hands around the wrist. Unambiguous 2D interaction is achieved by providing passive haptic feedback by means of a virtual control panel whose position coincides with the physical surfaces encasing the system. We have tested this interface with a neurosurgical planning application that has been used clinically for 17 skull-base cases at two local hospitals.

1. Introduction

Effective user interaction in medical applications involving volumetric datasets, such as neurosurgery planning and diagnosis based on radiological data, requires an interface that provides two main features: first, it should allow real-time manual precision work in 3D; and second, it should support easy and unambiguous control of the application through its widgets.

Typical 3D operations on medical datasets include segmenting tumors, measuring distances, areas and volumes, specifying bone work (e.g., craniotomies) and tissue removal (cuts on tissue spanning several scan slices), and planning the path of approach. Ideally, these operations should be performed on volumes and should take place in real time. Real-time rendering is clearly desirable to allow quick exploration of the volume, inviting the surgeon or radiologist to explore different viewpoints. Volume rendering is preferred because it works directly on the available scan images and delivers a good degree of fidelity to the original images (unlike surface rendering). These datasets are typically multimodal, requiring the display of more than one dataset simultaneously. Multimodality is essential to show the interrelations of the different tissues. Moreover, the rendered volume has to report detail from the original accurately.

In such environments, the application widgets can either be an integral part of the 3D environment (displayed and interacted with in 3D space), or be detached from it, as would be the case with a mouse and keyboard.
In either case, the functionality of the widgets is generally 1D or 2D, as in sliders or controllable curves, or simply on/off, as in push-buttons.

In [10] we presented the KRDL Virtual Workbench, a mirror-based VR system, as a solution to real-time interaction with 3D datasets. The user sits comfortably wearing stereoscopic glasses and enjoys precise hand-eye coordination by reaching with both hands into a 3D virtual image. The setup consists of a 21" computer screen, a pair of stereo glasses, a mirror and two 3D position and orientation sensors, each on a handle. The user looks into the mirror through the glasses and perceives the virtual image within the work volume, where the sensors and hands lie. This produces a stereo view of a virtual volume in which a hand can move a handle, its position reported by the sensor, without obscuring the display (left and right eye views are displayed alternately on the screen, with synchronized CrystalEyes™ glasses separating them for the eyes).

The advantages of this approach (as opposed to Head Mounted Displays or rear-projection systems like the Responsive Workbench [8]) are as follows. First, it places the data in front of the user, at a comfortable distance within easy reach of the hands. Second, the 21" display monitor ensures a high-quality image and does away with the common problems associated with HMDs: encumbrance, discomfort, weight and, when dealing with medical data, limited image resolution, plus the fact that only one person per HMD can see the data. Third, the user does not need to change position in the virtual world, since the data comes to the user rather than the other way around. This frees both hands from any commitment to "drive around" or to hold interface tablets, so that one can concentrate on the data. Fourth, three to four people can simultaneously look at the data through the same mirror without objectionable distortion of the data (this is our observation, although admittedly only one position produces the "correct" perspective). Head tracking is thus consciously avoided to enable teamwork on the data, eliminating the confusion generated by the changes in perspective resulting from the movements of the tracked head.

In this paper, we report on the improvements made to this setup to ensure precise and comfortable 3D interaction, as well as efficient work with non-3D widgets (see also [16]).

2. Background

Schmandt [11] pioneered the manipulation of 2D icons over a surface that was an integral part of the 3D space. The interaction with the surface icons was done using a 2D "graphics tablet" which was made to coincide with the virtual surface where the icons appeared. Schmandt also experimented with 3D interactions using a 3D tracker, although the two modes (2D and 3D) were not properly integrated. Subsequent work by Sowizral on the Virtual Tricorder [15] introduced the idea of using passive haptic feedback on a small hand-held board within a Head Mounted Display environment.

Hand-held boards or "pen-and-tablet" interfaces are a response to the frustrations experienced by researchers in trying to interact with menus floating in space [4][1][15]. This method seems to be gaining popularity in the VR community, since it provides an unequivocal and inexpensive way to operate buttons. Hand-held boards provide the necessary passive haptic feedback while the buttons are operated. Perhaps their main shortcoming is having to devote one hand to holding the board itself.

Floating menus, on the other hand, use some kind of virtual pointer combined with a physical button click to manipulate widgets. With these types of interfaces it is hard to perform precise movements, such as dragging a slider to a specific location or selecting from a pick list. The difficulty comes from the fact that the user is pointing in free space, without the aid of anything to steady the hands. Take, for example, Deering [3], who uses hybrid 2D/3D menu widgets organized in a disk layout. Menus pop up in a fixed position relative to the current position of a 6 DOF

wand, and are then selected by hand-relative movements. His approach tries to combine the advantages of 2D window interfaces with 3D work, but the lack of physical support during 3D interaction still detracts from its usability.

Another approach to counter the lack of physical support is to use a force feedback device and take the VR metaphor literally: interacting with 2D and on/off buttons then consists of reaching for the desired button and pushing it by actually exerting force on it. This approach has been demonstrated by the Australian CSIRO team on the Haptic Workbench, an integration of our Virtual Workbench with SensAble's PHANToM device [16]. Although the idea is intuitive and makes locating buttons doubly effective (visually as well as haptically), thus allowing buttons to be placed anywhere in 3D space, the approach suffers from the fact that these force-feedback devices remain highly specialized and expensive.

Other groups have developed interfaces for medical manipulation. Goble et al. [5] describe a 3D interface for image exploration using hand-held props that provide passive haptic feedback. The props approach is intuitive but applies only to 3D objects (the patient's head, the cutting plane), while 2D interaction is left to the traditional mouse and keyboard. Systems like O'Toole et al. [9] or Hill et al. [6], with a setup similar to ours (with the addition of force feedback), concentrate on training dexterity skills rather than patient-specific planning (they do not make use of volume-rendered data). Their approach to handling buttons and sliders is not described in the literature.

3. Tools & Methods We have developed a 3D/2D paradigm of interaction that combines precise 3D manipulation of volumetric data with unambiguous widget interaction. The interactions take place within the Virtual Workbench, a 90cm-wide, 70cm-deep, 40cm-high physical encasing that holds the stereoscopic screen display, and provides housing for the input device system. The encasing is also designed to provide comfortable support for the arms and a smooth bottom surface (and optionally side surfaces) against which the tip of the stylus can rest. Precise 3D interaction is ensured by a combination of resting the lower arms on the armrest and pivoting the hands around the wrist. A tracking device with a single switch (the Polhemus’ “stylus”) on each hand simplifies the command structure while remaining expressive. While resting the arm, the wrist can easily access a space of interaction of 20 to 30 cm3, accurately and comfortably. By sliding the arms along the armrest, a wider space is available. Unambiguous 2D interaction is achieved by providing passive haptic feedback by means of a virtual control panel whose position coincides with the physical surfaces encasing the system. The stylus interacts with the widgets as though it was a mouse, in a similar fashion to [15] . The physical surfaces provide a hard medium against which the stylus switch can be pressed firmly and unequivocally to operate virtual buttons, sliders and curves. The virtual control panel only pops up when the tip of the stylus touches the physical surfaces surrounding the 3D space (e.g., the top of the table) and remains invisible otherwise. This eliminates confusion by keeping the screen clear from cluttering objects, while boosting graphics performance (fewer objects to redraw). We have observed that five frames per second are a minimum refresh rate for the users to work comfortably on the control panel. The physical surface is smooth, so that the tip of the stylus slides effortlessly over it enabling sliders and multiple buttons to be operated comfortably. The stylus casts a shadow over the panel adding a helpful depth cue (see Fig. 3).

3.1 Operation

The virtual space is divided by software into two regions: one close to the surface of the encasing system, and the other comprising the rest of the space (Fig. 1).

Fig. 1. (a) Wrist-based 3D interaction; (b) Passive haptic feedback 2D interactions

The user interacts with virtual 3D objects in the reach-in manner common to most Virtual Reality systems: the user moves the stylus towards the object of interest and, once it is reached, presses the switch on the stylus to indicate the desired action (grab, delete, resize, etc.). While the control panel is enabled, the 3D objects floating above it can either be fully displayed, be displayed at lower resolution to speed up interaction with the 2D panel, or be made invisible.

The 3D objects are displayed as a combination of polygonal and volume-rendered objects (see Fig. 3). We achieve real-time volume rendering by means of 3D textures [2], relying on the hardware-accelerated 3D texture mapping of Silicon Graphics workstations. Briefly, the algorithm takes a volume and generates sample planes orthogonal to the viewing direction, in back-to-front order, mapping a texture to each sample plane and blending it with the rest. The texture is obtained from the intersection between the sample plane and the volume, by trilinear interpolation. Our implementation enables free movement of the volume in 3D space and speeds up the display by using a number of sample planes that decreases with the distance to the viewpoint [13]. It also enables multimodal display using the multisampling mask available in Silicon Graphics Onyx workstations [12].

3.2 The Control Panel

The control panel contains 3D widgets such as buttons, sliders, curve-control panels, list boxes and file dialogs (see Fig. 2). Widgets can be grouped together under a title bar and may then be repositioned on the control panel by clicking and dragging the title bar. This enables easy positioning of widgets on the control panel. Optionally, a widget can also be pulled out from the control panel into 3D space. In so doing, the widget remains visible and interactable in 3D space even after the control panel is deactivated. It can also be put back into the control panel by moving it onto the base surface, causing it to snap back to its original position on the control panel.

As the number of widgets increases, the size of the control panel expands, and there is a need for control panel space management. We support three methods for handling the problem. The first method is to click on any unused space of the control panel and drag the panel along its plane, like a piece of paper over a table, giving access to a potentially unlimited number of widgets by bringing the desired portion of the panel into view.

The advantage of this method is that it can handle a large working area. The disadvantage is that a considerable amount of hand movement is needed to shift the control panel back and forth.
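A minimal sketch of this panel-dragging behaviour is given below (illustrative C++, not the actual widget toolkit; the types and member names are assumptions). The idea is simply that a 2D scroll offset is added to every widget's layout position, so an arbitrarily large panel can be moved under the fixed visible area.

// Minimal sketch: dragging the control panel along its own plane
// (illustrative names; not the actual implementation).

struct Vec2 { float x, y; };

struct ControlPanel {
    Vec2 offset{0.0f, 0.0f};   // current scroll offset of the panel plane
    Vec2 dragStartTip{};        // stylus tip position when the drag began
    Vec2 dragStartOffset{};

    // Called when the stylus switch is pressed on unused panel space.
    void beginDrag(const Vec2& tipOnSurface) {
        dragStartTip = tipOnSurface;
        dragStartOffset = offset;
    }

    // Called while the switch is held and the tip slides over the surface.
    void drag(const Vec2& tipOnSurface) {
        offset.x = dragStartOffset.x + (tipOnSurface.x - dragStartTip.x);
        offset.y = dragStartOffset.y + (tipOnSurface.y - dragStartTip.y);
    }

    // A widget laid out at position p is drawn at p + offset; widgets that
    // fall outside the visible panel area are simply not drawn.
    Vec2 widgetDrawPosition(const Vec2& layoutPos) const {
        return {layoutPos.x + offset.x, layoutPos.y + offset.y};
    }
};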

Fig. 2. The Control Panel: “List 6” has been lifted out of the control panel, and “List 3” is being dropped back.

The second method is to subdivide the control panel into clusters of widgets. Clicking on specific buttons on the control panel automatically pans the control panel to the desired cluster (Fig. 3 illustrates this: the top figure shows the stylus clicking on a button that pans the control panel to show the figure at the bottom). The last method is to click on specific buttons that replace the control panel with a new one (in Fig. 3, the left buttons labeled "SIM", "FUSE" and "SEG" replace the control panel with new sets of buttons). A common problem with all three solutions is that none of them provides a good overview of what the control panel contains.

The control panel area visible at any one time is approximately 50 cm x 25 cm. We have experimented with the angle of inclination of the physical surface to maximize the area of interaction. A 0-degree surface is used at the moment, but we are building a 25-degree one (as in Fig. 1) to bring the control panel closer for easier reach (with the 0-degree surface, one needs to stretch the arm to reach the buttons further away). We are also considering curved surfaces to facilitate wrist-based control panel interaction.

3.4 Implementation

The system is written with the KRDL BrixMed C++/OpenGL software toolkit [12] and runs on a Silicon Graphics Onyx2 (one R10000 180 MHz CPU, one Raster Manager with 64 MB of texture memory). The input device is the Polhemus FASTRAK™, with two receivers: one stylus, and one standard receiver with a digital hand switch attached. For multimodal volumes of less than 64 MB (the available hardware texture memory) that occupy a quarter of the total screen space (1024 x 768 pixels), we obtain an average performance of 10 frames per second in stereo. When the volumes occupy more than half of the screen space, the speed falls to 5 frames per second, due to overloading of the single Raster Manager.
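As an illustration of the texture-based slicing described in Section 3.1, the sketch below shows the core rendering loop in legacy OpenGL. This is not the BrixMed implementation: the paper slices with planes orthogonal to the viewing direction and intersects them with the volume, whereas this shortened sketch uses the simpler variant of axis-aligned slices through a unit cube; the slice-count formula and all names are illustrative assumptions.

// Minimal sketch of texture-based volume slicing (not the BrixMed code).
// Assumes 'volumeTex' is a GL_TEXTURE_3D object already loaded with the
// classified scan data (including an alpha channel), and that the viewer
// looks down the volume's -z axis, so increasing z runs back to front.

#include <GL/gl.h>
#include <algorithm>

void drawVolumeSlices(GLuint volumeTex, float distanceToViewer)
{
    // Fewer sample planes when the volume is further away (illustrative
    // formula for the distance-dependent slice count mentioned in 3.1).
    const int maxSlices = 256;
    int slices = std::max(32,
        static_cast<int>(maxSlices / (1.0f + 0.5f * distanceToViewer)));

    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Draw slices back to front, blending each textured quad over the rest.
    for (int i = 0; i < slices; ++i) {
        float r = (i + 0.5f) / slices;   // texture-space depth in [0, 1]
        float z = r - 0.5f;              // geometry: unit cube at the origin
        glBegin(GL_QUADS);
        glTexCoord3f(0.0f, 0.0f, r); glVertex3f(-0.5f, -0.5f, z);
        glTexCoord3f(1.0f, 0.0f, r); glVertex3f( 0.5f, -0.5f, z);
        glTexCoord3f(1.0f, 1.0f, r); glVertex3f( 0.5f,  0.5f, z);
        glTexCoord3f(0.0f, 1.0f, r); glVertex3f(-0.5f,  0.5f, z);
        glEnd();
    }

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}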

Fig. 3. Multimodal data in VIVIAN neurosurgical planning system: CT, MRI and polygonally outlined tumor. Top: The stylus reaches the control panel to bring in the curve-control widget. Bottom: The stylus operates the transparency curve on the curve-control look-up table to reveal the details of the CT.

4. Results and Conclusion

We are testing this interface in several areas; in particular, we have developed a neurosurgical planning system (called VIVIAN, for Virtual Intra-cranial Visualization and Navigation [13][7]) which is undergoing clinical evaluation in Singapore at the Singapore General Hospital and Tan Tock Seng Hospital (17 skull-base cases planned so far). VIVIAN provides multimodal visualization and mensuration, segmentation of tumors and vessels, and simulation and planning of neurosurgical approaches (Fig. 3 shows the interface).

Neurosurgeons value the system's ability to manipulate, annotate and explore the datasets in real time, which enhances their understanding of the complexity of the anatomical and pathological relationships surrounding a lesion. They also value the ease with which the application is controlled, thanks to the support provided by the surrounding surfaces.

We are in the process of changing the way priority is assigned between image quality and real-time response: at the moment we ensure that image quality remains high, which sometimes results in slow performance. In the future, the system will ensure real-time performance and will degrade image quality to achieve that end; the user will then have to pause certain interactions to regain the desired image quality. We are also taking the system to the next stage of neurosurgical planning, into intraoperative surgical navigation, by coupling the preoperative images to the surgical microscope.

5. References

[1] Billinghurst, M., Baldis, S., Matheson, L., Philips, M., (1997) 3D Palette: A Virtual Reality Content Creation Tool, Proc. ACM VRST '97, pp. 155-156.
[2] Cabral, B., Cam, N. and Foran, J., (1994) Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware, Proc. ACM/IEEE 1994 Symposium on Volume Visualization, pp. 91-98 and 131.
[3] Deering, M., (1996) The HoloSketch VR Sketching System, Comm. ACM, vol. 39, no. 5, pp. 55-61.
[4] Fuhrmann, A., Loeffelmann, H., Schmalstieg, D., Gervautz, M., (1998) Collaborative Visualization in Augmented Reality, IEEE Comp Graph & Appl, July 1998, pp. 54-59.
[5] Goble, J.C., Hinckley, K., Pausch, R., Snell, J.W. and Kassell, N.F., (1995) Two-handed spatial interface tools for neurosurgical planning, IEEE Computer, 28 (7), pp. 20-26.
[6] Hill, J.W., Holst, P.A., Jensen, J.F., Goldman, J., Gorfu, Y., Ploeger, D.W., (1997) Telepresence Interface with Applications in Microsurgery and Surgical Simulations, Proc. MMVR:6, pp. 96-102.
[7] Kockro, R.A., Serra, L., Chan, C., Yeo, T.T., Sitoh, Y.Y., Chua, C.C., Ng, H., Lee, E.C.K., Lee, Y.H., Nowinski, W.L., (1999) Planning of skull base surgery in the Virtual Workbench: Clinical experiences, to appear in MMVR:7.
[8] Krueger, W. and Froelich, B., (1994) The Responsive Workbench, IEEE Comp Graph & Appl, 14 (4), pp. 12-15.
[9] O'Toole, R., Playter, R., Krummel, T., Blank, W., Cornelius, N., Roberts, W., Bell, W., and Raibert, M., (1998) Assessing skill and learning in surgeons and medical students using a force feedback surgical simulator, Proc. MICCAI '98, MIT, USA, pp. 899-909.
[10] Poston, T. and Serra, L., (1996) Dextrous Virtual Work, Comm. ACM, vol. 39, no. 5, pp. 37-45.
[11] Schmandt, C., (1983) Spatial Input/Display Correspondence in a Stereoscopic Computer Graphic Work Station, Computer Graphics, 17, pp. 253-259.
[12] Serra, L., Ng, H., (1997) The BrixMed C++ API, KRDL Internal Technical Report.
[13] Serra, L., Ng, H., Chua, B.C. and Poston, T., (1997) Interactive Vessel Tracing on Volume Data, Proc. ACM Symposium on Interactive 3D Graphics 1997, pp. 131-137.
[14] Serra, L., Kockro, R.A., Chua, G.G., Ng, H., Lee, C.K., Lee, Y.H., Nowinski, W.L., and Chan, C., (1998) Multimodal Volume-based Tumor Neurosurgery Planning in the Virtual Workbench, Proc. MICCAI '98, MIT, USA, pp. 1007-1015.
[15] Sowizral, H.A., (1994) Interacting with virtual environments using augmented virtual tools, Proc. Stereoscopic Displays and Virtual Reality Systems '94, SPIE, 2177, pp. 409-416.
[16] Stevenson, D.R., Smith, K.A., McLaughlin, J., Gunn, C., Veldkamp, J.P., Dixon, M.J., (1999) Haptic Workbench: A Multi-Sensory Virtual Environment, to appear at The Engineering Reality of Virtual Reality, Stereoscopic Displays and Applications Conference, January 25-27, 1999, San Jose, California.
[17] The KRDL Virtual Workbench Web Page: http://www.krdl.org.sg/RND/biomed/virtual
