Int J CARS (2010) 5:461–469
DOI 10.1007/s11548-010-0413-z
ORIGINAL ARTICLE
Interactive bone drilling using a 2D pointing device to support Microendoscopic Discectomy planning

Keiho Imanishi · Megumi Nakao · Masahiko Kioka · Masato Mori · Munehito Yoshida · Takashi Takahashi · Kotaro Minato
Received: 8 January 2010 / Accepted: 10 March 2010 / Published online: 4 April 2010 © CARS 2010
Abstract

Purpose To support preoperative planning of bone drilling for Microendoscopic Discectomy, we present a set of interactive bone-drilling methods using a general 2D pointing device.

Methods Unlike existing methods, our framework has the following features: (1) the user can directly cut away arbitrary 3D regions on the volumetrically rendered image; (2) to provide a simple interface to end-users, our algorithms make 3D drilling possible through only a general-purpose wheel mouse; (3) to reduce both over-drilling and unnatural drilling of unintended regions, we introduce a smart depth control that ensures the continuity of the cutting operation; and (4) a GPU-based rendering scheme provides high-quality shading of clipped boundaries.

Results We applied our techniques to CT data of specific patients. Several experiments confirmed that the user was able to directly drill complex 3D regions on a volumetrically rendered lumbar spine through simple mouse operation. Our rendering scheme also clearly visualizes time-varying drilled surfaces at interactive rates. By comparing simulation results to actual postoperative CT images, we confirmed that the user can interactively simulate cutting similar to that carried out in real surgery.

Conclusion Our techniques enable mouse-based, direct drilling of complex 3D regions with high-quality rendering of drilled boundaries and contribute to the preoperative planning of Microendoscopic Discectomy.

Keywords Interactive drilling · Volume sculpting · 2D pointing device · Surgical planning

K. Imanishi (B) · M. Nakao · K. Minato
Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5, Takayama, Ikoma, Nara 630-0192, Japan
e-mail: [email protected]; [email protected]

M. Nakao
e-mail: [email protected]

K. Minato
e-mail: [email protected]

M. Kioka · M. Yoshida
Department of Orthopaedic Surgery, Wakayama Medical University, 911-1 Kimiidera, Wakayama-city 641-8510, Japan

M. Mori
Department of Radiological Technology, Faculty of Medical Science, Kyoto College of Medical Science, 1-3 Imakita, Oyama-higashi, Sonobe, Nantan, Kyoto 622-0041, Japan
e-mail: [email protected]

T. Takahashi
Kyoto College of Medical Science, 1-3 Imakita, Oyama-higashi, Sonobe, Nantan, Kyoto 622-0041, Japan
e-mail: [email protected]
Introduction

In the field of orthopedic surgery, lumbar disk disease has recently been treated by minimally invasive procedures such as Microendoscopic Discectomy (MED) [1]. Compared to traditional open surgery, which is accompanied by a large incision and exfoliation of muscles, MED provides various benefits such as a small incision, reduced postoperative pain, a shorter hospital stay, shorter rehabilitation, and a quick return to daily life or work. Not only patients but also practitioners have been awaiting the generalized use of MED. However, MED currently faces the following difficulties.
Fig. 1 MED for lumbar disk disease: a a conceptual image of MED, b a postoperative CT slice image, and c its volume-rendered result
(a) The surgeon tends to lose the orientation of the image because of the complex anatomical structure of the lumbar spine and the small surgical field provided by the microendoscope; (b) high technical skills and long clinical experience are required because a bundle of nerves must be carefully avoided when cutting the lumbar spine with a drill. These factors still prevent the widespread use of MED. In order to overcome these difficulties, surgical planning tools and supportive information techniques have the potential to reduce these technical hurdles. Surgical navigation systems have also gained attention in recent years [2,3]. To provide safer surgery and to make navigation-assisted surgery practical, an efficient simulation environment using the patient's medical images is desired.

In MED, all procedures, including bone drilling, are performed through a microendoscopic view similar to that shown in Fig. 1a. For the preoperative planning of MED, surgeons need to discuss the 3D cutting area in order to minimize both invasiveness to the body and the risks of the procedure. In general, precise surgical planning of an osteotomy on CT slices depends on the knowledge and experience of the surgeon [4,5]. Removal of voxels from virtual spines and cartilages created from the patient's CT images, as shown in Fig. 1b, c, is an effective way of supporting such discussion. 2D line drawing or semi-automatic region extraction on each CT slice [6,7] is traditionally used for the input of the cutting area. However, this approach forces elaborate drawing tasks on the user, who must imagine the 3D cutting area in their mind. Therefore, we focus on direct editing of a complicated 3D cutting area on the volumetrically rendered image. This approach makes it possible to perform practical planning through intuitive interaction with 3D virtual models, which is close to the real surgical process.

Virtual cutting or direct bone-drilling systems have the potential to form practical planning tools for MED. Over the last decades, several techniques [8–11] were first introduced for cutting or drilling through 3D pointing devices such as the PHANToM (SensAble Technologies Inc.). These systems are mainly designed for medical training and focus on realistic drilling operation with force feedback. However, this interface is not always suitable for clinical use in preoperative planning because of the need to get used to 3D operation, the cost of the hardware, and the large installation space.
On the other hand, simulating bone drilling via standard 2D pointing devices, such as the mouse that operators are already used to, requires solving several issues concerning how to define a 3D operation through only 2D input. Unintended cutting can also occur because depth information must be inferred. Nevertheless, we focus on the efficiency of the mouse-based approach and introduce a new set of techniques for stable drilling with improved performance.

In this paper, we propose virtual bone-drilling methods that operate on the volume-rendered image via a 2D pointing device. Our algorithm maps the 2D position of the mouse cursor to the corresponding 3D coordinates on the surface of the volumetric object. In doing so, we focus on smart constraints in depth mapping that greatly support natural, intuitive editing of volumetric objects. High-quality rendering of the time-varying volumetric object is also presented to improve the quality of the drilled boundary. Specifically, our contribution to the interactive volume drilling literature is summarized as follows: (1) to provide a simple interface to end-users, we have designed a direct, volumetric drilling technique on the volumetrically rendered image via a general-purpose wheel mouse; (2) to reduce both over-drilling and unnatural drilling of unintended regions, we introduce a smart depth control scheme that ensures the continuity of the cutting operation; (3) we present a GPU-based rendering algorithm that provides high-quality shading of drilled boundaries; and (4) we apply the proposed techniques to CT data of patients and show the performance of mouse-based virtual drilling in the preoperative planning of MED.
Related works

Many studies have been carried out to make direct editing of the volume-rendered image possible. As mentioned above, some studies used 3D input devices as the interface. Cutting simulations [11–14] and methods to explore volumetric datasets [15,16] by cutting and editing 3D regions of the volume rendering have been proposed. Although they can directly
translate the position of the tip of a 3D pointing device to the 3D world coordinates of the virtual tool, handling collisions and occlusions between the virtual tool and the volume is required. More importantly, 3D pointing devices are not popular in the clinical field, as most medical doctors would have to get used to the interface. Specifically in preoperative planning, a simpler interface design is preferred.

Volume editing techniques using 2D pointing devices, such as a mouse or a pen-tablet, have also been proposed. Mapping methods that transfer the 2D input to the corresponding 3D position are a key for these techniques. Huff et al. [17] mapped the 2D mouse position to 3D coordinates by obtaining depth information through the wheel movement of the mouse. However, this approach is complicated for the user and needs skillful operation when drilling curved 3D regions on complex anatomical structures such as spines. Chen et al. [18] utilized the 2D position and the pressure acquired from a pen-tablet as the depth information for 3D editing and proposed techniques for interactively sculpting, lasering and peeling the volume data based on a point radiation framework. This approach requires careful operation of the 2D pointing device while editing a complicated 3D region. Usability decreases due to the correction operations needed when an unintended area is cut away by the user's faulty operation.

Another important requirement when editing a volume is shading of the hidden parts revealed by the voxel clipping operation. Among conventional methods, Weiskopf et al. [21] introduced a predefined sub-volume label called a clipping object to perform the clipping operation and represented a clear clipping boundary by adding the gradient of the clipping object to the gradient of the original volume. However, interactive editing on the volumetrically rendered image requires updating the clipping object at interactive rates.

In this paper, we aim to perform interactive bone drilling on the volumetrically rendered image using only a general-purpose mouse. We obtain the depth information to the surface of the volume and present a bone-drilling framework in which the user can drill an arbitrary 3D region at interactive rates using mouse operation. In our framework, we introduce a stable depth control to prevent the user from drilling unintended regions. Moreover, we show that the drilled boundaries can be visualized more clearly by updating the drilling object interactively.
Interactive volume drilling methods

This section describes our methods of volumetric bone drilling, which allow the user to define the 3D cutting region through a standard 2D pointing device. Our methods assume interactive, direct input of an arbitrary 3D region on the volumetrically rendered image to support the preoperative planning of MED.
Volume drilling basics

We developed the framework by extending Weiskopf's volume clipping techniques [21] to interactive volume drilling. We do not edit the volume data directly but instead edit a drilling label, i.e., a voxelized clipping object. In the rendering process, the fragment shader of the GPU (Graphics Processing Unit) refers to the drilling label and generates the rendered image with the drilled region.

Let us now consider texture-based volume rendering [19] of CT volume data I. In order to represent virtual drilling on the rendered image, we utilize a drilling label L that holds information on the drilled regions. First, we compute I_α from a user-defined color lookup table and the intensity values I, and then obtain a gradient volume G = ∇I_α. Second, we transfer both I and L to GPU memory and obtain rendering results with shading at interactive rates. Although, in our implementation, we define L with the same size as I, it can be generated at a finer resolution to express more detailed drilling.

Compared to the predefined static clipping object that Weiskopf et al. proposed [21], our framework updates the drilling label dynamically to represent interactive drilling. In order to perform fast processing, we use a drilling object L_i, i.e., a sub-drilling label, to update L in GPU memory at interactive rates. Figure 2 illustrates an example drilling result in which the total drilling label with a resolution of 6 × 6 × 6 is updated by a drilling object with a resolution of 2 × 2 × 6. In Fig. 2, voxels with value 1 represent original regions, and voxels with value 0 represent the drilled region. Drilling on the volumetrically rendered image can then be represented by multiplying the drilling label L with the actual volume data I. Following the conventional basic approach, a binary representation could be adopted for the drilling label L. However, this representation causes a shading issue at the drilled boundary. In "Shading drilled boundary", we explain our improvement for shading the drilled boundary.
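As a concrete illustration of this local update, the following C++/OpenGL sketch stamps a spherical drilling object into a CPU-side label and transfers only the touched brick to the GPU texture with glTexSubImage3D. This is a minimal sketch of our reading of the scheme, not the authors' implementation; all type and function names are illustrative.

```cpp
// Sketch (not the authors' code): stamp a spherical drilling object into
// the CPU-side drilling label, then push only the enclosing brick (the
// drilling object L_i) to the 3D texture holding L on the GPU.
#include <GL/gl.h>
#include <algorithm>
#include <cmath>
#include <vector>

struct DrillingLabel {
    int nx, ny, nz;          // label resolution (here equal to the volume I)
    std::vector<float> v;    // 1 = intact, 0 = drilled
    GLuint tex;              // 3D texture holding L in GPU memory

    float& at(int x, int y, int z) { return v[(z * ny + y) * nx + x]; }

    // Mark all voxels inside a sphere of radius r (in voxels) as drilled
    // and update only the enclosing brick of the GPU texture.
    void drillSphere(int cx, int cy, int cz, float r) {
        int x0 = std::max(0, int(cx - r)), x1 = std::min(nx - 1, int(cx + r));
        int y0 = std::max(0, int(cy - r)), y1 = std::min(ny - 1, int(cy + r));
        int z0 = std::max(0, int(cz - r)), z1 = std::min(nz - 1, int(cz + r));
        for (int z = z0; z <= z1; ++z)
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x) {
                    float dx = float(x - cx), dy = float(y - cy), dz = float(z - cz);
                    if (std::sqrt(dx * dx + dy * dy + dz * dz) <= r)
                        at(x, y, z) = 0.0f;
                }
        uploadBrick(x0, y0, z0, x1, y1, z1);   // transfer the sub-label L_i only
    }

    void uploadBrick(int x0, int y0, int z0, int x1, int y1, int z1) {
        int w = x1 - x0 + 1, h = y1 - y0 + 1, d = z1 - z0 + 1;
        std::vector<float> brick(size_t(w) * h * d);
        for (int z = 0; z < d; ++z)
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    brick[(size_t(z) * h + y) * w + x] = at(x0 + x, y0 + y, z0 + z);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexSubImage3D(GL_TEXTURE_3D, 0, x0, y0, z0, w, h, d,
                        GL_RED, GL_FLOAT, brick.data());
    }
};
```

Updating only the brick keeps the per-stroke transfer cost proportional to the drill size rather than to the whole label volume.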
Shading drilled boundary

Since trilinear interpolation is applied to the drilling label on the GPU, inappropriate interpolation occurs at curved boundaries when a binary representation is used for the drilling label. This may cause unnatural roughness at the drilling surface. In order to remedy this issue, we store a distance value l in the label volume L, as in Fig. 2. The distance value l of each voxel is computed by Eq. (1):

$$l_a = \frac{D + D_a}{2D} \;\; (l_a = 1 \text{ if } D_a > D), \qquad l_b = \frac{D - D_b}{2D} \;\; (l_b = 0 \text{ if } D_b > D) \tag{1}$$
Fig. 2 Local update of the drilling label with a distance field to represent volume sculpting
where D is the diagonal length of a drilling label voxel, l_a and l_b are the distance values assigned to undrilled voxel a and drilled voxel b of the drilling label, and D_a and D_b are the distances from the center points of voxels a and b to the drilling boundary, as in Fig. 3. In this way, the interpolated value 0.5 represents the position of the drilling surface. To obtain a clear drilling surface without any roughness, we have only to set the alpha value to 0 in the fragment shader when the value of the drilling label is less than 0.5.

However, as described in "Volume drilling basics", the drilling label is updated interactively during real-time drilling, unlike a static label. Therefore, we cannot simply assign the value calculated by Eq. (1) to the drilling label. For instance, while the edge of the drill passes over a drilling-label voxel that has already been cut (and has therefore been assigned a distance value of 0), the distance value acquired from Eq. (1) gradually approaches 1. The removed voxel would be restored if the label were updated directly with the acquired value. Therefore, in our approach, we compare the distance value stored in the voxel of the drilling label with the value acquired from Eq. (1) and assign the smaller of the two to the voxel.

Fig. 3 Euclidean distance to the drilled boundary of a masked voxel
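The following C++ fragment sketches this distance assignment and the min-based merge under our reading of Eq. (1): for a spherical drill of radius r centered at c, the signed distance |p − c| − r from a voxel center p to the drill surface covers the undrilled (l_a) and drilled (l_b) cases in a single formula. The signed-distance convention and all names are our own assumptions, not the authors' code.

```cpp
// Distance value of Eq. (1) and the min-based update rule, sketched.
// s = |p - c| - r: s > 0 means the voxel center lies on the undrilled
// side (case l_a); s < 0 means it lies inside the drill (case l_b).
#include <algorithm>
#include <cmath>

// D: diagonal length of a drilling-label voxel.
float labelValue(float s, float D) {
    float l = (D + s) / (2.0f * D);            // l_a and l_b in one expression
    return std::min(std::max(l, 0.0f), 1.0f);  // clamp: l_a = 1 / l_b = 0 beyond D
}

// Keep the smaller value so that a re-sweep of the drill edge can never
// "restore" an already-drilled voxel.
inline void mergeVoxel(float& stored, float computed) {
    stored = std::min(stored, computed);
}
```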
Partial update of volume gradient

As described in the method above, hidden voxels are revealed by the drilling operation. In this case, shading the volume appropriately, without unnatural artifacts, requires updating the gradient of the volume interactively. In our approach, we acquire the gradient J = ∇L of the label volume using the same calculation as for the gradient volume G and transfer it to GPU memory initially. While L is updated by L_i, we concurrently update J by the sub-gradient drilling label derived from L_i. In the fragment shader, the modified gradient G′ near the drilled boundary is generated by Eq. (2), adding the gradient volume G and the label gradient volume J. The final alpha value I′_α is acquired by Eq. (3). In the final step, the final color is computed by means of Phong shading [20] using I_RGB, I′_α and G′, where I_RGB is acquired from the user-defined color lookup table and the intensity values I.

$$G' = G + J \tag{2}$$

$$I'_\alpha = \begin{cases} 0 & \text{if } l < 0.5 \\ I_\alpha & \text{if } l \geq 0.5 \end{cases} \tag{3}$$
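Expressed on the CPU for readability, the per-sample logic that the paper assigns to the GLSL fragment shader might look as follows; the Sample struct and function names are illustrative stand-ins, not the authors' API.

```cpp
// CPU-side rendition of the per-sample fragment logic (Eqs. 2-3):
// composite the label gradient into the volume gradient and cut the
// opacity below the 0.5 isovalue of the drilling label.
struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

struct Sample {
    Vec3  G;      // gradient of the classified volume, G = grad(I_alpha)
    Vec3  J;      // gradient of the drilling label,    J = grad(L)
    float l;      // interpolated drilling-label value at this sample
    float alpha;  // I_alpha from the color lookup table
};

// Returns the opacity actually composited and writes the gradient G'
// used for Phong shading of the (possibly drilled) boundary.
float shadeSample(const Sample& s, Vec3* gradOut) {
    *gradOut = add(s.G, s.J);               // Eq. (2): G' = G + J
    return (s.l < 0.5f) ? 0.0f : s.alpha;   // Eq. (3): cut below the isovalue
}
```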
Fig. 4 Two issues of the previous approach: a excessive removal of volume, b removal of a region at a different depth
Fig. 5 Solution to prevent excessive removal of volume: a updating scheme of the drilling label, b drilling result of our solution
Constraints for volume drilling

Mapping 2D points to 3D coordinates

In order to determine the drilling position in 3D, we map the 2D position of the mouse cursor to the corresponding 3D point on the isosurface of the volumetric object. The detection of the 3D point on the isosurface is carried out with a ray-casting protocol [20]. The opacity value is sampled and accumulated at each voxel along the eye direction. When the sum of the opacity values reaches a certain threshold, the position p is set as the corresponding 3D position on the isosurface, and p is directly used as the starting point of the drilling. This isosurface constraint is a key to direct drilling on the rendered image through mouse operation.
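A minimal sketch of this 2D-to-3D mapping is given below: the eye ray for the pixel under the cursor is marched through the classified volume, and the first sample where the accumulated opacity crosses the threshold is taken as the drill start point p. The ray setup (un-projecting the cursor) and the sampling callback are assumed helpers, not the authors' code.

```cpp
// Sketch: march the eye ray, accumulate opacity, return the first sample
// position whose accumulated opacity crosses the threshold.
#include <optional>

struct Vec3 { float x, y, z; };

struct Ray { Vec3 origin, dir; };   // dir normalized, from the un-projected cursor

// volumeAlpha(p): classified opacity I_alpha sampled at world position p.
std::optional<Vec3> pickIsosurface(const Ray& ray, float (*volumeAlpha)(Vec3),
                                   float stepLen, int maxSteps, float threshold) {
    float accum = 0.0f;
    Vec3 p = ray.origin;
    for (int i = 0; i < maxSteps; ++i) {
        accum += volumeAlpha(p);             // accumulate opacity along the ray
        if (accum >= threshold) return p;    // first crossing = point on isosurface
        p.x += ray.dir.x * stepLen;
        p.y += ray.dir.y * stepLen;
        p.z += ray.dir.z * stepLen;
    }
    return std::nullopt;                     // ray missed the object
}
```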
However, we have found that this mapping approach causes the following two issues: (1) excessive removal of voxels in the depth direction easily occurs because of repeated isosurface detection and drilling at the same point (see Fig. 4a); (2) discontinuous removal of voxels in the depth direction occurs when the user moves the mouse cursor over complex anatomical structures (see Fig. 4b); this situation is also easily triggered by slight mis-movements of the mouse. Although we preserve all states of the drilling label in temporary memory to support undo and redo functions in our framework, the above two issues cause a loss of continuity in the operation and greatly reduce usability. To overcome these issues, we introduce a new constraint on the update of p that focuses on the depth value.

Depth-based constraints for volume drilling

This section describes the depth-based constraints for performing natural volume drilling via 2D pointing devices. First of all, our framework provides continuous drilling while the operator drags the mouse with the button pressed; releasing the button ends the series of drilling operations. To avoid confusion, we define L_CPU and L_GPU as the copies of the drilling label in CPU and GPU memory, respectively. Since the mapping from 2D mouse points to 3D positions is performed on the CPU, we reduce excessive removal by updating L_CPU only when the button is released. While the operator drags the mouse with the button pressed, interactive visual feedback of the drilled sections is provided by updating only L_GPU, using the methods described in "Volume drilling basics" and "Shading drilled boundary". L_CPU is not updated during the drag; instead, the drilling objects L_i are saved temporarily. When the operator releases the mouse button, L_CPU is updated by the saved L_i. In this way, our approach reduces excessive removal and lets the operator drill to a certain depth from the surface of the 3D region in one single dragging motion, as shown in Fig. 5.
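The following C++ sketch captures this two-label scheme; the class and function names are illustrative, and the GPU-side update stands in for the brick transfer shown earlier. Because the 2D-to-3D mapping reads only L_CPU, repeated picking during a drag cannot burrow ever deeper at one spot.

```cpp
// Sketch: during a drag only L_GPU is touched (for immediate visual
// feedback) and the drilling objects L_i are queued; L_CPU -- the label
// the isosurface picking reads -- is committed only on mouse release.
#include <vector>

struct DrillOp { int cx, cy, cz; float radius; };   // one drilling object L_i

class DrillSession {
    std::vector<DrillOp> pending_;                  // L_i saved while dragging
public:
    void onMouseDown() { pending_.clear(); }

    void onMouseDrag(const DrillOp& op) {
        applyToGpuLabel(op);                        // update L_GPU: instant feedback
        pending_.push_back(op);                     // defer the CPU-side commit
    }

    void onMouseUp() {
        for (const DrillOp& op : pending_)
            applyToCpuLabel(op);                    // commit L_CPU once, on release
        pending_.clear();                           // picking now sees the cut
    }

private:
    void applyToGpuLabel(const DrillOp& op) { (void)op; /* brick upload, see above */ }
    void applyToCpuLabel(const DrillOp& op) { (void)op; /* voxel loop over L_CPU  */ }
};
```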
For the second issue, we give the following solution: we record the ray-casting scan distance d for every sample point p acquired while dragging the mouse and calculate a distance gradient value Δd. Our framework marks the new point p_i as an invalid drilling point if Δd_i exceeds a certain threshold and interrupts the drilling operation. In this way, our approach reduces drilling into regions of unintended depth caused by the user's mistaken operation and ensures that the operator drills within a certain gradient range, as in Fig. 6.
Fig. 6 Solution to prevent drilling into a region of the volume at a different depth
In the framework we propose, the threshold T_i is determined from the average gradient of the n latest cutting points and a multiple coefficient m:

$$T_i = m \cdot \frac{\sum_{k=i-1-n}^{i-1} \Delta d_k}{n} \tag{4}$$
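A small C++ sketch of this gate is shown below, using the paper's m = 5 and n = 10 as defaults. The exact bookkeeping (in particular, what happens to the window when a point is rejected) is our assumption.

```cpp
// Sketch of the depth-continuity test of Eq. (4): keep the recent scan
// distance jumps |d_i - d_{i-1}| and reject a new point whose jump exceeds
// m times the average of the last n jumps.
#include <cmath>
#include <cstddef>
#include <deque>

class DepthGate {
    std::deque<float> deltas_;    // delta d_k for the latest points
    float lastD_ = -1.0f;
    const float m_;               // multiple coefficient
    const std::size_t n_;         // window size
public:
    explicit DepthGate(float m = 5.0f, std::size_t n = 10) : m_(m), n_(n) {}

    // Returns true if the point with scan distance d is a valid drill point.
    bool accept(float d) {
        if (lastD_ < 0.0f) { lastD_ = d; return true; }   // first point: accept
        float delta = std::fabs(d - lastD_);
        if (deltas_.size() == n_) {                       // window full: test T_i
            float sum = 0.0f;
            for (float x : deltas_) sum += x;
            float Ti = m_ * sum / float(n_);              // Eq. (4)
            if (delta > Ti) return false;                 // depth jump: interrupt
        }
        deltas_.push_back(delta);
        if (deltas_.size() > n_) deltas_.pop_front();
        lastD_ = d;
        return true;
    }
};
```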
Application and results

We tested our approach on a general-purpose PC with an Intel Xeon 3.0 GHz CPU, 4 GB RAM, a GeForce GTX 260 graphics card, a wheel mouse, and a general-purpose LCD with a resolution of 1280 × 1024. Our framework applies the texture-based approach [19] for interactive volume rendering, and GLSL (OpenGL Shading Language) was used to implement the proposed algorithms on the GPU. As surgeons use a high-speed drill in MED, we modeled the drill as a spherical drilling object whose radius can be set arbitrarily from 1 to 200 mm. All results were generated with m = 5 and n = 10. Moreover, we prepared a three-cross-sectional viewer to display any arbitrary sectional image of the volumetrically rendered data; the operator can use this viewer to confirm the depth and direction of the drilled region during the drilling operation.

Fig. 7 Improvement of the drilled boundary: a drilling surface rendered without shading, b binary representation of the drilling label, c smoothing of the binary representation, d distance representation of the drilling label

Validation of drilled boundary
We embedded an improvement for smoothing the drilling surface interactively, as described in "Partial update of volume gradient". Figure 7a shows the rendering result of the drilling surface without shading. Figure 7b shows the jaggy appearance of the drilling surface caused by a binary representation of the drilling label. In this case, unnatural artifacts remain and the boundary becomes ambiguous even when a smoothing process is applied by multiplying the interpolated value of the drilling label with the alpha value of the original volume data, as in Fig. 7c. With the improvement described above, Fig. 7d shows a clear drilling surface without artifacts. Furthermore, Fig. 8 demonstrates that the drilling label is updated interactively and that our approach can represent a clear drilling surface at interactive rates while drilling on the volumetrically rendered image from actual CT data.

Validation of drilling size

We verified the effect and usability of changing the drill radius in our framework. Figure 9a demonstrates the result of drilling on the volumetrically rendered image of actual lumbar spine CT data with the radius set to 1.0 and 2.5 mm. Figure 9b is the rendering result from a different view position, above the scene. From Fig. 9a, b, we found that the operator can drill a detailed region by changing the drill radius. Figure 9c, d demonstrate the results of drilling the same region from the same view position with one mouse drag, setting the drill radius to 1.5 and 4.0 mm. Due to the constraint of the methods we proposed in "Depth-based constraints for volume drilling", the drilling depth achieved with one mouse drag depends on the size of the drilling object.
Fig. 8 Interactive update of the drilling label and representation of the drilled boundary
Fig. 9 Difference in drilling results according to the radius of the drilling object: a, b drilling the spine at 1.0 and 2.5 mm drill radius, c drilling the spine with a 1.5 mm drill radius, d drilling the spine with a 4.0 mm drill radius
Fig. 10 Drilling into the engine volume image: a original data and ROI, b drilling into the ROI with no depth control, c drilling into the ROI with our proposed depth control
For the thick region, in contrast to the part near the surface, which had been peeled with a radius of 1.5 mm, most of the target region was removed and the hidden region was revealed with a radius of 4.0 mm in one mouse drag. An uncomplicated target region can thus be removed more simply by setting a larger drill radius, as shown in Fig. 9c, d.

Validation of stable depth control

We then verified the stable depth control we proposed. First, we set the drill radius to 4 mm and drilled on the volumetrically rendered image of engine CT data, as in Fig. 10a. Figure 10b demonstrates the result of drilling toward an edge without depth control. In this case, two unintended regions were removed because the drag strayed onto the back region. On the other hand, Fig. 10c demonstrates successful drilling of the same area, without any damage to unintended regions, when the stable depth control was enabled. Figure 11 plots the depth in window coordinates translated from the drilling points in each case. As shown in Fig. 11, in contrast to the rapid change of depth caused by the incorrect drilling without depth control, the depth in window coordinates changes smoothly when the depth control is enabled.

Next, to verify the simulation of drilling in MED, we utilized actual CT data (256 × 256 × 256), as in Fig. 12a.
Fig. 11 Evaluation of stable depth control
We set the drill radius to 2 mm and performed a drilling examination of the ROI from the view position shown in Fig. 12b. As shown in Fig. 12c, unintended regions of the spine were damaged when drilling without depth control, which required correction using the undo function. On the other hand, the target region was successfully removed when the depth control was used, as shown in Fig. 12d.
Fig. 12 Drilling into the spine CT volume image: a original data and ROI, b zoomed image of the ROI, c drilling into the ROI with no depth control, d drilling into the ROI with our proposed depth control
Fig. 13 Comparison of the drilling simulation result with the actual result after surgery: a, b actual 3D and CT images before surgery, c, d drilling simulation results, e, f actual 3D and CT images after surgery
Additionally, we carried out a surgical planning simulation for a clinical case treated with MED, using the patient's preoperative lumbar spine CT data (256 × 256 × 256). In this case, it was necessary to shave the pedicle with a drill to decompress the nerve. Figure 13a, b show the preoperative volume-rendering image and the MPR (Multiplanar Reconstruction) image of the ROI. We set the drill radius to 2 mm and performed a drilling simulation from the view position of Fig. 13a. Figure 13c, d show the result of dragging the mouse four times without changing the view position. As the result in Fig. 13c indicates, only the front side of the pedicle was cut, and the back side was preserved without any damage. Figure 13d shows the MPR image of the ROI after the drilling simulation. By comparing Fig. 13c, d with the actual postoperative images (Fig. 13e, f), we found that the operator was able to perform effective surgical planning of bone drilling in this case using the proposed methods.
Discussion

From the above verifications, we found that our framework supplies an efficient environment for the surgical planning of MED. The operator can simulate bone drilling of
complicated regions such as the lumbar spine with simple mouse-dragging operations. To further increase the continuity of the operation, developing an algorithm with which the operator can obtain the intended drilling result by continuous dragging into a thick region remains future work.

In the framework we propose, since the entire scene is re-rendered during every rendering cycle, the acquisition interval of the mouse position may vary due to the heavy processing. In this case, dragging the mouse too quickly causes a discontinuous cutting result, and it becomes necessary to remove the remaining parts by moving the mouse back. In order to process the user's input at a higher rate, an overall performance improvement and an algorithm that partially updates only the removed region need to be introduced into our framework.

Another issue we should consider is the loss of image depth during direct editing on a volumetrically rendered image shown on a general-purpose monitor. Even if we adopt stable depth control methods or provide a three-cross-sectional image view, this issue may cause excessive removal or unintended cutting and may disrupt the continuity of the operation. In our future work, we intend to find a solution to this issue. For instance, utilizing a stereo display may be one way: creating a stereo volumetrically rendered image for the operator during virtual drilling may resolve the problem of the loss of image depth.
Conclusion

In this research, we proposed interactive bone-drilling methods via a 2D pointing device to support MED planning. In our framework, we adopt stable depth control methods so that the operator can obtain the intended bone-drilling result on complicated 3D regions, such as the lumbar spine, with simple mouse-dragging operations. Future improvements will include user and clinical studies to formally validate the usability of the proposed framework. Furthermore, we plan to extend our methods and apply them to surgical navigation to help make MED surgery safer. Our method is an effective way to present a volume-rendered image with a planning result that is synchronized with the actual image during surgery. We will research adaptive methods to present the planning result during surgery (for instance, using augmented reality). In addition, matching the planning result with a conventional navigation system, in which the 3D position of the drilling tool can be detected, will make it possible to detect drilling of unintended regions beforehand and to inform the surgeon in time to prevent a mistake being made.

Acknowledgements This research has been carried out as a project entitled "Development of tailor-made surgical navigation system" under contract with the Innovation Plaza Kyoto, Japan Science and Technology Agency (JST). A part of this study was supported by a Grant-in-Aid for Scientific Research for Young Scientists (A) (21680044) from The Ministry of Education, Culture, Sports, Science and Technology, Japan.
References

1. Nakagawa Y, Yoshida M, Maio K (2006) Microendoscopic discectomy (MED) for surgical management of lumbar disc disease: technical note. Int J Spine Surgery 2(2). http://www.ispub.com/ostia/index.php?xmlFilePath=journals/ijss/vol2n2/med.xml
2. Nakatani N (2008) Computer-assisted navigation system in microendoscopic laminotomy for patients with lumbar spinal canal stenosis. Cent Jpn J Orthop Traumat 51(3):491–492
3. Minamide A, Yoshida M, Yamada H, Nakagawa Y, Maio K, Keho H, Kawai M, Iwasaki H, Nakao S, Kawakami M, Ando M (2007) The usefulness of computed-assisted navigation system for microendoscopic decompression surgery for extraforaminal stenosis at L5-S1. In: 7th annual meeting of Pacific and Asian Society of Minimally Invasive Spine Surgery, Gyeongju, Korea
4. Kordelle J, Millis M, Jolesz FA, Kikinis R, Richolt JA (2001) Three-dimensional analysis of the proximal femur in patients with slipped capital femoral epiphysis based on computed tomography. J Pediatr Orthop 21:179–182
5. Richolt JA, Teschner M, Everett P, Girod B, Millis M, Kikinis R (1998) Planning and evaluation of reorienting osteotomies of the proximal femur in cases of SCFE using virtual three-dimensional models. LNCS 1496:1–8
6. Loncaric S, Kovacevic D, Sorantin E (2000) Semi-automatic active contour approach to segmentation of computed tomography volumes. Proc SPIE 3979:917–924
7. Hamarneh G, Yang J, McIntosh C, Langille M (2005) 3D live-wire-based semi-automatic segmentation of medical images. SPIE Med Imaging 5747:1597–1603
8. Galyean TA, Hughes JF (1991) Sculpting: an interactive volumetric modeling technique. In: Proceedings of SIGGRAPH '91, pp 267–274
9. Avila RS, Sobierajski LM (1996) A haptic interaction method for volume visualization. In: Proceedings of IEEE Visualization, pp 197–204
10. Wang SW, Kaufman AE (1995) Volume sculpting. In: Proceedings of the 1995 symposium on interactive 3D graphics, pp 151–156
11. Kim L, Park SH (2006) Haptic interaction and volume modeling techniques for realistic dental simulation. Vis Comput 22:90–98
12. Agus M, Giachetti A, Gobbetti E (2003) Adaptive techniques for real-time haptic and visual simulation of bone dissection. In: Proceedings of IEEE VR, pp 102–109
13. Prior A (2006) "On-the-fly" voxelization for 6 degrees-of-freedom haptic virtual sculpting. In: Proceedings of ACM VRCIA, pp 263–270
14. Petersik A, Pflesser B, Tiede U, Höhne K-H, Leuwer R (2003) Realistic haptic interaction in volume sculpting for surgery simulation. In: Surgery simulation and soft tissue modeling, international symposium (IS4TM), pp 192–202
15. Pflesser B, Leuwer R, Tiede U, Höhne KH (2000) Planning and rehearsal of surgical interventions in the volume model. Stud Health Technol Inform 70:259–264
16. Sorensen MS, Mosegaard J, Trier P (2009) The visible ear simulator: a public PC application for GPU-accelerated haptic 3D simulation of ear surgery based on the visible ear data. Otol Neurotol 30(4):484–487
17. Huff R, Dietrich CA, Nedel LP, Freitas CMDS, Comba JLD, Olabarriaga SD (2006) Erasing, digging and clipping in volumetric datasets with one or two hands. In: Proceedings of the ACM international conference on virtual reality continuum and its applications, pp 271–278
18. Chen H-L et al (2008) GPU-based point radiation for interactive volume sculpting and segmentation. Vis Comput 24(7–9):689–698
19. Cabral B, Cam N, Foran J (1994) Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In: Proceedings of the volume visualization symposium, pp 91–98
20. Levoy M (1990) Efficient ray tracing of volume data. ACM Trans Graph 9(3):256–261
21. Weiskopf D, Engel K, Ertl T (2003) Interactive clipping techniques for texture-based volume visualization and volume shading. IEEE Trans Vis Comput Graph 9:298–312