
Augmented Reality Navigation With Automatic Marker-Free Image Registration Using 3-D Image Overlay for Dental Surgery

Junchen Wang, Hideyuki Suenaga, Kazuto Hoshi, Liangjing Yang, Etsuko Kobayashi, Ichiro Sakuma, Member, IEEE, and Hongen Liao*, Member, IEEE

Abstract—Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving over the past decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror, using image registration and IP-camera registration, to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid on the real one for an augmented display. The 3-D images present both stereo and motion parallax, from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

Index Terms—Augmented reality (AR), dental surgery, integral photography (IP), stereo tracking, surgical navigation, 3-D image overlay.

Manuscript received October 24, 2013; revised December 13, 2013; accepted January 11, 2014. Date of publication January 17, 2014; date of current version March 17, 2014. This work was supported in part by Grant for Translational Systems Biology and Medicine Initiative from the Ministry of Education, Culture, Sports, Science and Technology of Japan. The work of H. Liao was supported by Grant-in-Aid for Scientific Research (23680049, 24650289) of the Japan Society for the Promotion of Science, and the National Natural Science Foundation of China (81271735, 61361160417). Asterisk indicates corresponding author. J. Wang, L. Yang, E. Kobayashi, and I. Sakuma are with the Graduate School of Engineering, University of Tokyo, Tokyo 1138656, Japan (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). H. Suenaga is with the Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, University of Tokyo Hospital, Tokyo 1138656, Japan (e-mail: [email protected]). K. Hoshi is with the Department of Cartilage and Bone Regeneration, Graduate School of Medicine, University of Tokyo, Tokyo 1138656, Japan (e-mail: [email protected]). ∗ H. Liao is with the Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China, and also with the Graduate School of Engineering, University of Tokyo, Tokyo 1138656, Japan (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TBME.2014.2301191

I. INTRODUCTION

Dental surgery comprises a range of surgical procedures performed on the teeth and jaw bones to modify dentition. Among the types of dental surgery are endodontic surgery, prosthodontics, and orthodontic treatment; the basic operations include drilling, cutting, milling, fixation, resection, and implantation. These operations are performed within a limited space, i.e., the patient's mouth, in most cases without direct access to the surgical targets. The difficulty in viewing these surgical targets and the surrounding structures poses challenges to routine clinical practice. For example, surgeons have to avoid damaging critical surrounding structures (e.g., nerve channels or tooth roots) while accessing surgical targets that might be hidden by gingiva and bone. At the same time, dental surgery requires highly precise operations: when drilling a hole between adjacent tooth roots, for example, the preferred accuracy is better than 1 mm.

To overcome these challenges, computer-assisted oral and maxillofacial surgery (OMS) has evolved rapidly over the last decade. Computer-assisted OMS technologies can be roughly divided into surgical simulation and surgical navigation, performed in the preoperative and the intraoperative phases, respectively. In preoperative simulation, 3-D models of the surgical site are created from preoperative medical images, e.g., computed tomography (CT). Surgeons can inspect, measure, and label the models and make detailed surgical plans with the help of a computer. Special software and haptic devices can also simulate surgical procedures [1], [2]. In intraoperative navigation, a device tracks the surgical instrument, whose position and orientation are mapped to the virtual space through image registration. The spatial relationship between the instrument and the surgical site is then visualized using computer graphics (CG) techniques. If the CG-generated virtual scene is registered to the real surgical scene, the system performs augmented reality (AR) navigation; otherwise, it performs virtual reality (VR) navigation.

Some commercially available systems and prototypes for OMS navigation have been proposed and evaluated. Casap et al. evaluated the accuracy of a VR navigation system (Denex Image Guided Implantology, Moshav Ora, Israel) [3] and its benefits for both dental implantation and student training [4]. Yu et al. applied computer-assisted navigation in OMS to 104 clinical cases using the Acc-Navi system (Multifunctional Surgical Navigation System, Shanghai, China) and the Stryker navigation system (Stryker Leibinger, Freiburg, Germany) [5].


Bouchard et al. developed a navigation system for OMS based on electromagnetic (EM) tracking [6]. Tsuji et al. proposed a navigation system based on cephalograms and dental casts, where the dental casts were used for image registration [7]. Yamaguchi et al. presented an AR system for dental surgery in which the surgeon wears a 2-D head-mounted display (HMD) to see the augmented view [8].

However, several disadvantages remain in these systems and related technologies, especially for dental surgery. First, all these systems employ either optical or EM tracking devices for intraoperative motion tracking. The optical markers used by optical tracking systems are too bulky to be used comfortably within a small operative field (i.e., within the mouth); attaching them to either the patient or the instrument is not feasible. Moreover, commercial optical tracking systems are designed for general use and are not optimized for OMS; therefore, maintaining the line of sight between the optical marker and the camera is inconvenient. An EM tracker is susceptible to interference from metallic materials; therefore, the tracking may become unstable during dental surgery.

Second, the current image registration techniques for OMS navigation require inconvenient procedures [9]. Fiducial markers are usually necessary for accurate image registration [10]. However, attaching fiducial markers is either an invasive procedure for the patient or requires a cumbersome, error-prone patient-specific cast [7], [11]. In addition, the registration may become invalid if the patient moves, in which case the tedious registration must be repeated. To avoid registration failure, the patient must also be tracked; therefore, a reference marker must be attached to the patient, which is likewise an invasive and/or inconvenient procedure. Excessive markers and additional procedures interfere with common surgical procedures and may cause unexpected safety issues.

Finally, to the best of our knowledge, all the navigation methods for dental surgery presented by other groups rely on 2-D display guidance. In VR navigation [3]–[7], the CG-generated virtual scene is displayed on a 2-D flat monitor. Surgeons have to divert their eyes from the surgical site to look at the monitor while trying to maintain hand–eye coordination. In AR navigation [8], the virtual scene is mixed with the real scene on a tracked 2-D HMD worn by the surgeon. All these methods are 2-D display guidance systems, i.e., the virtual scene is essentially a projected 2-D scene. Compared with visual perception of a 3-D object, a 2-D projection lacks two important visual cues that give a viewer the perception of depth: stereo parallax and motion parallax [12]. Depth perception in image-guided surgery enhances the safety of the surgical operation. HMDs and head-mounted operating microscopes with stereoscopic vision have often been used for AR visualization in the medical field [13]–[15]. However, such video see-through devices have two views that present only horizontal parallax instead of full parallax. Projector-based AR visualization [16] is appropriate for large operative field overlays; however, it lacks depth perception. In our AR visualization, we adopted an autostereoscopic 3-D image overlay using a translucent mirror.

Fig. 1. Overview of the proposed system.

Autostereoscopic 3-D imaging is achieved using integral photography (IP) [17], [18], and we have successfully applied IP techniques for 3-D image navigation in other surgical fields [19]–[22]. However, bulky optical markers and troublesome image registration procedures, as well as the use of fiducial markers, were still necessary. We have previously proposed a VR solution for dental surgery [23]. In this study, we developed an AR navigation system with automatic marker-free image registration for dental surgery, using real-time autostereoscopic 3-D imaging and stereo tracking to overcome the aforementioned disadvantages.

II. METHODS

A. System Overview

The operative field of dental surgery is small; therefore, a flexible and compact overlay system is preferred. The proposed system consists of a stereo camera tracking device, a 3-D display device, a half-silvered mirror for image overlay, a computer workstation with a graphics processing unit (GPU) for CG rendering, and passive support arms, as illustrated in Fig. 1. The stereo camera is used for patient and instrument tracking. The 3-D display device is composed of a high-resolution liquid crystal display (LCD) and a hexagonal array of convex lenses placed in front of the LCD. Patient models (triangle meshes) are created from CT data. Autostereoscopic 3-D images of the models are displayed by the 3-D display device and viewed by surgeons through the half mirror for anatomical visualization. The 3-D images appear to be overlaid on the real surgical site and provide continuous motion parallax within the viewing zone. Surgeons therefore see a 3-D image overlaid in situ that represents the real anatomy.

B. Autostereoscopic 3-D Imaging

For an AR navigation system that superimposes an image on the surgical site, a 3-D image is preferable so that a consistent image is maintained when observed from different directions. IP is a promising autostereoscopic 3-D display technique that does not require the viewer to wear glasses. The basic principle of IP can be found elsewhere [18], [24].


Fig. 2. Tool marker (32 mm × 32 mm × 5 mm) attached to the surgical instrument for instrument tracking.

The pickup procedure of conventional IP can be completely simulated by CG techniques when the 3-D object is given as 3-D data, so that only a high-resolution LCD and a lens array are needed to display 3-D images [25]. Given 3-D data, the process of synthesizing the IP elemental images is called IP rendering. Three-dimensional imaging of the tracked surgical instrument requires real-time updates. However, IP rendering is computationally costly, because one IP rendering requires many conventional surface renderings (for rendering a surface model) or numeric ray-casting procedures (for rendering a data volume). Fortunately, the rendering process can be parallelized on the GPU. We have implemented a flexible rendering pipeline for real-time 3-D medical imaging using GPU-accelerated IP and evaluated its rendering performance [26].

C. Stereo Tracking

1) Stereo Camera Design: The stereo camera is composed of two highly sensitive complementary metal-oxide-semiconductor (CMOS) cameras separated by a distance of approximately 120 mm. The stereo camera is calibrated [27] using a 7 × 7 dot array pattern on a 100 mm × 100 mm plate. The left and right video streams are then undistorted and rectified so that only horizontal parallax exists between the left and right cameras [28]. The intraoperative configuration of the stereo camera is illustrated in Fig. 1. The camera looks down over the operative field from a height of approximately 460 mm. This configuration has two advantages: the measurement geometry is customized for the limited operative field so as to achieve better accuracy, and the line of sight between the camera and the instrument is easy to maintain during the operation.
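The calibration, rectification, and triangulation steps just described can be sketched as follows. The paper implements them with the HALCON library and the methods of [27] and [28]; the fragment below is not the authors' code but an OpenCV-based illustration under the assumption that the 7 × 7 dot centroids have already been detected in both views for several poses of the plate. Class, member, and function names are illustrative only.

```cpp
// Illustrative stereo-rig calibration/rectification sketch (OpenCV, not the
// authors' HALCON-based implementation). After rectification, corresponding
// points share the same image row, so only horizontal parallax remains.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

struct RectifiedPair { cv::Mat left, right; };

class StereoRig {
public:
    // objectPts: 3-D dot positions on the 100 mm x 100 mm plate (one copy per pose);
    // leftPts/rightPts: detected dot centroids in the left/right images.
    void calibrate(const std::vector<std::vector<cv::Point3f>>& objectPts,
                   const std::vector<std::vector<cv::Point2f>>& leftPts,
                   const std::vector<std::vector<cv::Point2f>>& rightPts,
                   cv::Size imageSize) {
        // Intrinsics of each camera first (Zhang's method [27]), then the
        // stereo extrinsics with the intrinsics held fixed.
        cv::calibrateCamera(objectPts, leftPts, imageSize, K1_, D1_, cv::noArray(), cv::noArray());
        cv::calibrateCamera(objectPts, rightPts, imageSize, K2_, D2_, cv::noArray(), cv::noArray());
        cv::Mat R, T, E, F;
        cv::stereoCalibrate(objectPts, leftPts, rightPts, K1_, D1_, K2_, D2_,
                            imageSize, R, T, E, F, cv::CALIB_FIX_INTRINSIC);
        // Rectification [28]: compute row-aligned projection matrices and remap tables.
        cv::Mat R1, R2, Q;
        cv::stereoRectify(K1_, D1_, K2_, D2_, imageSize, R, T, R1, R2, P1_, P2_, Q);
        cv::initUndistortRectifyMap(K1_, D1_, R1, P1_, imageSize, CV_32FC1, mapL1_, mapL2_);
        cv::initUndistortRectifyMap(K2_, D2_, R2, P2_, imageSize, CV_32FC1, mapR1_, mapR2_);
    }

    // Undistort and rectify one frame of the left/right video streams.
    RectifiedPair rectify(const cv::Mat& left, const cv::Mat& right) const {
        RectifiedPair out;
        cv::remap(left,  out.left,  mapL1_, mapL2_, cv::INTER_LINEAR);
        cv::remap(right, out.right, mapR1_, mapR2_, cv::INTER_LINEAR);
        return out;
    }

    // Binocular triangulation of a rectified correspondence (e.g., a dot centroid
    // or a teeth edge point): Z = f*B/(uL - uR), X = (uL - cx)*Z/f, Y = (v - cy)*Z/f.
    cv::Point3d triangulate(const cv::Point2d& left, const cv::Point2d& right) const {
        const double f  = P1_.at<double>(0, 0);
        const double cx = P1_.at<double>(0, 2);
        const double cy = P1_.at<double>(1, 2);
        const double B  = -P2_.at<double>(0, 3) / P2_.at<double>(0, 0);  // baseline
        const double Z  = f * B / (left.x - right.x);
        return { (left.x - cx) * Z / f, (left.y - cy) * Z / f, Z };
    }

private:
    cv::Mat K1_, D1_, K2_, D2_;              // intrinsics and distortion coefficients
    cv::Mat P1_, P2_;                        // rectified projection matrices
    cv::Mat mapL1_, mapL2_, mapR1_, mapR2_;  // rectification maps
};
```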


2) Instrument Tracking: A tool marker is designed for the instrument tracking task. It is composed of a resin mounting base and a 3 × 3 dot array pattern with a small solid triangle at one corner. The triangle is used to resolve the in-plane rotation of the marker. The tool marker is attached to the cylindrical instrument as depicted in Fig. 2. The dot array patterns are first recognized in the left and right images, and the dot centroids in both images are then extracted with subpixel accuracy. Three-dimensional coordinates of the dot centroids in the stereo camera frame are calculated by binocular triangulation. The 6-DOF pose of the tool marker is determined by matching the local coordinates in the marker frame with the calculated coordinates in the stereo camera frame, using a point-based registration method [29]. The most important part is the tool tip. The pose of the tip frame is obtained by postmultiplying the marker pose with the marker-tip transformation, which is determined by a pivot calibration procedure. Fig. 2 shows the two coordinate frames associated with the instrument. The tip frame locates the instrument in the stereo camera frame.

3) Tip Fluctuation Removal: The disadvantage of a planar marker is the greater angular uncertainty around the axes lying in the marker plane. This uncertainty is further amplified by the marker-tip transformation, causing a greater uncertainty in the tracked tip position: the tracked tip position fluctuates randomly even if the instrument remains static. The fluctuation becomes obvious if the tool tip is far from the marker, which adversely affects the visual guidance. Extended Kalman filtering (EKF) [30] is used to estimate the quasi-static pose of the tip frame (i.e., when it remains static or moves slowly) to reduce the fluctuation. Let the 6 × 1 state vector $x = (r_x; t_x)$ represent the pose of the tip frame, with $r_x$ being the rotation vector and $t_x$ the translation vector. Similarly, let the 6 × 1 observation vector $y = (r_y; t_y)$ denote the tracked pose of the marker frame, and let the 6 × 1 constant vector $f^m_{tip} = (r^m; t^m)$ be the transformation from the tip to the marker. The state and observation equations of the EKF model are

$$x_k = x_{k-1} + \omega \tag{1}$$

$$y_k = x_k \circ f^m_{tip} + \nu \tag{2}$$

where $k$ denotes the $k$th tracking time, the binary operator $\circ$ denotes the concatenation of two transformations, and $\omega$ and $\nu$ are zero-mean Gaussian noise vectors, $p(\omega) \sim N(0, Q)$ and $p(\nu) \sim N(0, R)$, with covariance matrices $Q$ and $R$. The estimates of the tip pose and its covariance at time $k$, denoted by $\hat{x}_k$ and $P_k$, respectively, are recursively calculated as follows:

$$\hat{x}^-_k = \hat{x}_{k-1} \tag{3}$$

$$P^-_k = P_{k-1} + Q \tag{4}$$

$$G_k = P^-_k H_k^T \left( H_k P^-_k H_k^T + R \right)^{-1} \tag{5}$$

$$\hat{x}_k = \hat{x}^-_k + G_k \left( y_k - \hat{x}^-_k \circ f^m_{tip} \right) \tag{6}$$

$$P_k = \left( I - G_k H_k \right) P^-_k \tag{7}$$

where $I$ is the identity matrix and $H_k = \left.\dfrac{\partial \left( x_k \circ f^m_{tip} \right)}{\partial x_k}\right|_{x_k = \hat{x}^-_k}$ is the Jacobian matrix. The aforementioned EKF update at step $k$ is performed only if the tip displacement between steps $k-1$ and $k$ is less than 1 mm.
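A compact sketch of the recursion (3)–(7) is given below. It is not the authors' implementation: poses are represented as 6-vectors (rotation vector; translation), the composition operator ∘ is realized through 4 × 4 homogeneous matrices using the Eigen library, and the Jacobian $H_k$ is approximated by finite differences rather than the analytic derivative. The 1-mm quasi-static gate described above would be checked by the caller before invoking update().

```cpp
// Quasi-static EKF sketch for the tool-tip pose, following Eqs. (1)-(7).
// Illustrative only; requires the Eigen library.
#include <Eigen/Dense>
#include <Eigen/Geometry>

using Vec6 = Eigen::Matrix<double, 6, 1>;
using Mat6 = Eigen::Matrix<double, 6, 6>;

// 6-vector pose (rotation vector; translation) -> homogeneous matrix.
static Eigen::Matrix4d toMatrix(const Vec6& p) {
    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    const Eigen::Vector3d r = p.head<3>();
    const double angle = r.norm();
    if (angle > 1e-12)
        T.topLeftCorner<3,3>() = Eigen::AngleAxisd(angle, r / angle).toRotationMatrix();
    T.topRightCorner<3,1>() = p.tail<3>();
    return T;
}

// Homogeneous matrix -> 6-vector pose.
static Vec6 toVector(const Eigen::Matrix4d& T) {
    Vec6 p;
    Eigen::AngleAxisd aa(Eigen::Matrix3d(T.topLeftCorner<3,3>()));
    p.head<3>() = aa.angle() * aa.axis();
    p.tail<3>() = T.topRightCorner<3,1>();
    return p;
}

// Pose concatenation "a o b" used in Eqs. (2) and (6).
static Vec6 concat(const Vec6& a, const Vec6& b) {
    return toVector(toMatrix(a) * toMatrix(b));
}

struct TipEKF {
    Vec6 x;          // estimated tip pose
    Mat6 P, Q, R;    // state covariance and noise covariances
    Vec6 f_tip_m;    // constant tip-to-marker transformation f^m_tip

    // One recursion of Eqs. (3)-(7); y is the tracked marker pose.
    // (The pose difference y - h0 is a small-residual approximation.)
    void update(const Vec6& y) {
        const Vec6 x_pred = x;        // (3) constant-pose prediction
        const Mat6 P_pred = P + Q;    // (4)

        // Numerical Jacobian H = d(x o f)/dx at x_pred (finite differences).
        const Vec6 h0 = concat(x_pred, f_tip_m);
        Mat6 H;
        const double eps = 1e-6;
        for (int i = 0; i < 6; ++i) {
            Vec6 xd = x_pred;
            xd(i) += eps;
            H.col(i) = (concat(xd, f_tip_m) - h0) / eps;
        }

        const Mat6 S = H * P_pred * H.transpose() + R;
        const Mat6 G = P_pred * H.transpose() * S.inverse();  // (5)
        x = x_pred + G * (y - h0);                            // (6)
        P = (Mat6::Identity() - G * H) * P_pred;              // (7)
    }
};
```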

D. Image Registration

In our system, the following four coordinate frames must be coregistered, as illustrated in Fig. 3: the stereo camera frame $T_C$, the preoperative image frame $T_I$, the tool tip frame $T_{tip}$, and the IP frame $T_{IP}$. Note that the actual position of $T_{IP}$ must be reflected by the half mirror; for clarity, only the original $T_{IP}$ is shown in Fig. 3. The transformation from $T_C$ to $T_{tip}$ (i.e., $T^{tip}_{C}$) is provided by the instrument tracking described above.


Fig. 3. Coordinate systems used in the system.

The patient-image registration determines the transformation matrix from $T_C$ to $T_I$, which is denoted by $T^I_C$; the IP-camera registration determines the transformation matrix from $T_{IP}$ to $T_C$, which is denoted by $T^C_{IP}$. The combination of $T^C_{IP}$ and $T^I_C$ produces the transformation from $T_{IP}$ to $T_I$, denoted by $T^I_{IP}$, which is used to create the correct 3-D image overlay.

1) Patient-Image Registration: We propose an automatic marker-free registration method using patient tracking and 3-D contour matching. The teeth have a sharp edge whose 3-D coordinates can be captured using the stereo camera. Fig. 4(a) shows a simulated surgical scene of dental surgery for the lower teeth, captured by the stereo camera through the half mirror. Because of the high contrast between the teeth and the oral cavity, the 3-D contour of the front teeth can easily be extracted using template matching and edge extraction. First, an image template $t(x, y)$, bounded by the red rectangle, is selected manually in the first frame of the left camera only. Next, template matching based on normalized cross correlation (NCC) [31] is performed on the corresponding right-camera image and on the subsequent stereo images to locate the regions of interest (ROIs) on the teeth by minimizing

$$r(u, v, \alpha) = -\frac{\mathrm{Cov}\bigl(f(\tau_{u,v,\alpha}(x, y)),\, t(x, y)\bigr)}{\sqrt{\mathrm{Var}\bigl(f(\tau_{u,v,\alpha}(x, y))\bigr) \cdot \mathrm{Var}\bigl(t(x, y)\bigr)}} \tag{8}$$

where $f(\cdot, \cdot)$ represents the original image in which the template is searched, Cov and Var denote the covariance and variance, respectively, and $\tau_{u,v,\alpha}(x, y)$ is a mapping function from the template domain to the original image domain:

$$\tau_{u,v,\alpha}(x, y) = \left( x\cos\alpha - y\sin\alpha + u,\; x\sin\alpha + y\cos\alpha + v \right)^T. \tag{9}$$

The triple $(u^*, v^*, \alpha^*)$ with the minimum $r(u^*, v^*, \alpha^*)$ gives the position and orientation of the template in the original image. Then, 2-D edges of the front teeth are extracted with subpixel accuracy within the detected ROIs, and the extracted teeth edges are stereo-matched using epipolar constraint searching [see Fig. 4(b)]. If multiple edge points appear on the epipolar line, the one with the closest NCC value (calculated in an 11 × 11 area centered at the candidate edge point) is chosen for the match.
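To illustrate the ROI search of (8) and (9), the fragment below discretizes the in-plane rotation α and evaluates the normalized score with OpenCV (the paper itself uses HALCON). cv::TM_CCOEFF_NORMED computes the zero-mean NCC, so maximizing it is equivalent to minimizing $r$ in (8); the function name, angle range, and step size are illustrative assumptions.

```cpp
// ROI localization by rotated NCC template matching (illustrative OpenCV sketch).
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

struct TemplatePose { cv::Point2f position; double angleDeg; double score; };

TemplatePose locateTeethROI(const cv::Mat& image, const cv::Mat& templ,
                            double angleStepDeg = 2.0, double angleRangeDeg = 30.0) {
    TemplatePose best{ cv::Point2f(), 0.0, -1.0 };
    const cv::Point2f center(templ.cols * 0.5f, templ.rows * 0.5f);

    for (double a = -angleRangeDeg; a <= angleRangeDeg; a += angleStepDeg) {
        // Rotate the template by the candidate angle (the tau_{u,v,alpha} mapping);
        // corners are clipped, which is adequate for small in-plane rotations.
        cv::Mat rot = cv::getRotationMatrix2D(center, a, 1.0);
        cv::Mat rotated;
        cv::warpAffine(templ, rotated, rot, templ.size());

        // TM_CCOEFF_NORMED is the zero-mean NCC score Cov / sqrt(Var * Var).
        cv::Mat response;
        cv::matchTemplate(image, rotated, response, cv::TM_CCOEFF_NORMED);

        double minVal, maxVal;
        cv::Point minLoc, maxLoc;
        cv::minMaxLoc(response, &minVal, &maxVal, &minLoc, &maxLoc);
        if (maxVal > best.score) {
            best = { cv::Point2f(maxLoc.x + center.x, maxLoc.y + center.y), a, maxVal };
        }
    }
    return best;  // (u*, v*, alpha*) of the best-matching ROI
}
```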

The 3-D contour is determined using binocular triangulation and is updated in real time along with the stereo video streams. The recovered 3-D contour is registered with the preoperative model derived from CT data to obtain $T^I_C$. We first retrieve the corresponding 3-D contour on the model preoperatively and then match the two contours intraoperatively. The triangle mesh model of the teeth is created using the marching cubes algorithm and is rendered by OpenGL. In a rendered OpenGL scene, the z-buffer encodes the depth information of each pixel:

$$P_n = M_{proj} M_{mv} P_w \tag{10}$$

where $P_n$ is the 4 × 1 vector of homogeneous normalized device coordinates, $M_{proj}$ is the projection matrix, $M_{mv}$ is the modelview matrix, and $P_w$ is the 4 × 1 vector of homogeneous world coordinates. Equation (10) describes the transformation from the world coordinates to the normalized device coordinates in OpenGL; the normalized third component of $P_n$ (divided by the fourth component) is stored in the z-buffer during rasterization. Our goal is to retrieve the 3-D coordinates of a pixel in the rendered image from its z-buffer value. As shown in Fig. 4(c), a rendered view of the teeth model is prepared, and the pixels whose z-buffer values are not 1 (1 represents the background) are extracted to generate a binary image. Next, edge detection is performed on the binary image to obtain the edge points of the front teeth. Then, the z-buffer values of these edge points are read from the z-buffer according to their pixel coordinates, and the pixel coordinates are divided by the image size to obtain the normalized device coordinates $P_n$. Finally, the 3-D contour is recovered by $P_w = (M_{proj} M_{mv})^{-1} P_n$. Note that this procedure needs to be done only once preoperatively.

The intraoperatively tracked contour is registered with the model contour using the iterative closest point (ICP) algorithm [32]. The algorithm requires an initial match, which is easily obtained by transforming the principal frame (the center and the three principal axes) of the tracked contour into the principal frame of the model contour. Suppose $\chi = (x, y, z)$ is an $N \times 3$ matrix representing a centered $N$-point set, i.e., each row of $\chi$ contains the coordinates of a point with the center of the point set $\bar{\chi} = (\bar{x}, \bar{y}, \bar{z})$ subtracted. $\chi$ can be written in terms of its singular value decomposition:

$$\chi = U \Sigma V^T. \tag{11}$$

The three columns of the 3 × 3 orthogonal matrix $V$ represent the three principal axes of the point set. Let $\chi_c = U_c \Sigma_c V_c^T$ and $\chi_m = U_m \Sigma_m V_m^T$ be the centered reconstructed contour and the centered model contour, respectively. The initial alignment matrix is calculated as

$$T_{init} = \begin{bmatrix} V_m & \bar{\chi}_m^T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} V_c & \bar{\chi}_c^T \\ 0 & 1 \end{bmatrix}^{-1}. \tag{12}$$

The ICP algorithm is applied to further refine the alignment between the two contours, and the resulting transform is denoted by $T_{fine}$. Therefore, we have $T^I_C = (T_{fine} T_{init})^{-1}$. Note that the initial match could be erroneous if the contour exhibits some symmetry. In such a case, an additional alignment is performed by rotating one principal frame by 180° around the axis perpendicular to the symmetry plane, and the best initial match is selected by comparing the overlap of the two contours. The ICP matching is carried out as soon as the intraoperative teeth contour has been successfully tracked.
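The principal-frame initialization of (11) and (12) can be sketched as follows (Eigen-based, illustrative only; the subsequent ICP refinement is omitted). The sign correction of the third axis is an assumption made to keep a right-handed frame; the remaining axis-direction ambiguity is resolved by the 180° symmetry test described above.

```cpp
// Principal-frame initial alignment of two contours, Eqs. (11)-(12) (sketch).
#include <Eigen/Dense>

// Build the 4x4 frame [V, c; 0, 1] from a contour's centroid c and principal
// axes V (columns of V from the SVD of the centered N x 3 point set).
static Eigen::Matrix4d principalFrame(const Eigen::MatrixXd& pts) {
    const Eigen::RowVector3d centroid = pts.colwise().mean();
    const Eigen::MatrixXd centered = pts.rowwise() - centroid;
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(centered, Eigen::ComputeThinV);
    Eigen::Matrix3d V = svd.matrixV();
    if (V.determinant() < 0) V.col(2) *= -1.0;   // keep a right-handed frame

    Eigen::Matrix4d F = Eigen::Matrix4d::Identity();
    F.topLeftCorner<3,3>() = V;
    F.topRightCorner<3,1>() = centroid.transpose();
    return F;
}

// T_init maps the reconstructed (camera-frame) contour onto the model contour,
// as in Eq. (12): T_init = F_model * F_camera^{-1}.
static Eigen::Matrix4d initialAlignment(const Eigen::MatrixXd& cameraContour,
                                        const Eigen::MatrixXd& modelContour) {
    return principalFrame(modelContour) * principalFrame(cameraContour).inverse();
}
```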


Fig. 4. Contour tracking-based patient-image registration. (a) Simulated surgical scene captured by the stereo camera. ROIs enclosing the teeth contour are automatically extracted in the stereo images (red rectangles) using template matching. (b) Three-dimensional contour reconstruction in the stereo camera frame. The figure shows the extracted teeth edges within the ROIs and the stereo matching for triangulation. (c) Three-dimensional contour extraction in the preoperative model space. Left: prepared CG view for teeth edge detection. Right: recovered teeth’s 3-D contour using the extracted 2-D edge and the z-buffer values.

Fig. 5. IP-camera registration. (a) Calibration model with known geometry in the IP frame. (b) The 3-D image of the calibration model is displayed, and the stereo images of the 3-D image are captured by the stereo camera through the half mirror for 3-D reconstruction.

2) IP-Camera Registration: We designed a CG model (the "calibration model") consisting of five spatially distributed spheres for IP-camera registration. Fig. 5(a) shows the 3-D image of the calibration model with known geometry in the IP frame. As illustrated in Fig. 5(b), the stereo images of the calibration model's 3-D image are captured by the stereo camera through the half mirror, and the spherical centers in the stereo camera frame are reconstructed. The center coordinates in the different frames are registered using a point-based registration method to obtain $T^C_{IP}$. Note that $T^C_{IP}$ is a mirror transformation matrix. The mirror transformation changes the winding order of the geometric primitives in the CG rendering process, which may produce unexpected effects for settings that enable face culling. The IP-camera registration procedure needs to be performed only once, provided that the spatial relationship between the stereo camera and the 3-D display remains fixed.
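The point-based registration used here (and for the marker pose in Section II-C) can be sketched as a closed-form rigid fit of corresponding point sets. The paper uses Horn's orthonormal-matrix solution [29]; the SVD-based variant below is an equivalent alternative shown for illustration. Because $T^C_{IP}$ includes a mirror reflection, for the IP-camera registration the mirroring would have to be applied to one point set beforehand; the sketch assumes a proper rotation.

```cpp
// Closed-form rigid registration of corresponding point sets (Eigen, sketch).
// src, dst: 3 x N matrices of corresponding points; returns T with dst ~= T * src.
#include <Eigen/Dense>

static Eigen::Matrix4d rigidRegistration(const Eigen::Matrix3Xd& src,
                                         const Eigen::Matrix3Xd& dst) {
    const Eigen::Vector3d srcMean = src.rowwise().mean();
    const Eigen::Vector3d dstMean = dst.rowwise().mean();

    // Cross-covariance of the centered point sets.
    const Eigen::Matrix3d H = (src.colwise() - srcMean) *
                              (dst.colwise() - dstMean).transpose();

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {            // reject the reflection solution
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }

    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    T.topLeftCorner<3,3>() = R;
    T.topRightCorner<3,1>() = dstMean - R * srcMean;
    return T;
}
```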

Fig. 6. Flowchart of the AR navigation.

E. AR Navigation

We have now obtained all the spatial information needed for AR navigation. $T^I_{IP} = T^C_{IP} T^I_C$ is used when the preoperative anatomical map is superimposed on the intraoperative anatomy to visualize hidden surgical targets and surrounding critical structures; $T^{tip}_{IP} = T^C_{IP} T^{tip}_C$ is used to superimpose an image of the surgical instrument for augmented display. Because the image registration is automatic and runs in real time, the correct image is overlaid even if the patient moves. The preoperative process and the intraoperative navigation workflow are illustrated in Fig. 6.

III. EXPERIMENTS AND RESULTS

A. Physical Setup

Fig. 7 shows the physical setup of the proposed system. The stereo camera (UI-3240CP, IDS Imaging Development Systems GmbH, Germany) works in both monochrome and color modes at a frame rate of 60 frames/s with an image resolution of 1280 × 1024 pixels. The LCD of the 3-D display (6.4-in NL10276BC, NEC Electronics, Japan) has a resolution of 1024 × 768 pixels with a pixel pitch of 0.127 mm (200 pixels per inch). The micro lens array is a 110 × 128 matrix with lens pitches of 0.889 and 1.016 mm in the vertical and horizontal directions, respectively.

Fig. 7. Physical setup.

TABLE I: PRECISION EVALUATION RESULTS

The distance between the LCD and the lens array is approximately 2 mm. A computer workstation with a multicore Intel Core i7-3960X CPU (3.30 GHz) and an NVIDIA GeForce GTX 580 GPU was used for information processing and IP rendering. All of the algorithms were implemented in C++. The machine vision library HALCON was used for camera calibration and image processing. The graphics pipeline for IP rendering was implemented using OpenGL 4.3 with the Cg language for NVIDIA GPU programming. We achieved a frame rate of more than 60 frames/s at an IP elemental image resolution of 1024 × 768 pixels, which is fast enough for our real-time 3-D imaging task.

B. Stereo Tracking Evaluation

We evaluated the stereo tracking system on two aspects: precision and accuracy. A tool marker was fixed on a three-axis stage placed in the field of view of the camera, replacing the patient phantom in Fig. 7. The stage has a step resolution of 2 μm in the x- and y-directions and 10 μm in the z-direction. The system took approximately 20 ms to process a pair of images for marker tracking.

1) Precision Evaluation: The X-Y-Z Euler angle pose of the marker, represented by $(x, y, z, \alpha, \beta, \gamma)$, was tracked while keeping the marker static. The tracked pose samples include imaging sensor noise and follow a Gaussian distribution whose standard deviation σ was used to evaluate the tracking precision. In addition, the range of variation δ (i.e., the absolute difference between the maximum and the minimum) is given to assess the worst case. We took 10 000 samples to obtain statistically meaningful results, which are summarized in Table I.

2) Accuracy Evaluation: The three-axis stage was used to move the tool marker in 5-mm steps along the x-, y-, and z-directions to sample a 10 × 10 × 5-point grid whose physical range was 50 mm × 50 mm × 25 mm, the same as that of the typical operative field.

Fig. 8. Tip fluctuation removal using EKF.

On each grid vertex, the marker position was measured using the stereo camera, and these coordinates were rigidly registered to the ground truth (a 10 × 10 × 5-point grid with homogeneous 5-mm spacing). The fiducial registration error (FRE) [33] was used to evaluate the tracking accuracy and was 0.26 mm.

3) Tip Fluctuation Removal: The EKF parameters were determined as follows: $R$ was calculated from Table I as $\mathrm{diag}(3.9 \times 10^{-7}, 4.3 \times 10^{-7}, 1.8 \times 10^{-6}, 2.2 \times 10^{-5}, 4.4 \times 10^{-5}, 3.9 \times 10^{-4})$, $Q$ was chosen empirically as $10^{-4}\,\mathrm{diag}(100, 100, 100, 1, 1, 1)$, and $f^m_{tip}$ was $(-\pi, 0, 0, -32, 0, -100)$. The x-, y-, and z-positions of the tool tip in the static state were tracked, and the results are shown in Fig. 8, where the red curve shows the raw data and the blue curve shows the filtered data. The standard deviations of the x-, y-, and z-coordinates are as follows (raw versus filtered): 0.066 mm versus 0.018 mm, 0.062 mm versus 0.016 mm, and 0.033 mm versus 0.017 mm.

C. Patient-Image Registration Evaluation

Our patient-image registration was evaluated in phantom experiments. A lower jaw model with lower teeth was created using a 3-D printer from the CT images of a patient. The jaw model was assembled with a rubber head phantom to simulate a surgical scenario, and a surgical dental clamp was used to expand the mouth and expose the teeth to the camera. The phantom was moved to test the real-time patient-image registration performance. It took about 30 ms to track the teeth contour and match it with the preoperative model. We also tested the registration algorithm on the molars. Unlike the front teeth, the molars present two salient contours in the captured images; the inner salient contour, which is easy to segment from the background, was used for tracking. The tracking and matching results for both the front teeth and the molars are shown in Fig. 9. The experimental results show that our algorithm works well in both cases, which together cover the entire surgical site of dental surgery.

D. IP-Camera Registration Evaluation

IP-camera registration was performed to evaluate its feasibility and registration accuracy. The 3-D image of the calibration model [see Fig. 5(a)] was projected by the 3-D display, and its stereo images were captured by the stereo camera, as shown in Fig. 10(a) and (b).


Fig. 9. Real-time patient-image registration using contour tracking and ICP matching. First and second rows: Images captured through the half mirror by the left and right cameras. Third row: Molar tracking where the green and red edges are the extracted contours from the left and right images for 3-D reconstruction. A teeth model (yellow CG model) was superimposed on the camera views using the patient-image registration matrix.

Fig. 10. (a) and (b) Left and right images, respectively, of the 3-D image of the calibration model captured by the stereo camera, with the ball centers automatically detected. The parallax of the middle sphere between left and right images is apparent. (c) Reconstructed geometric information of the calibration model in the stereo camera frame.

Projected spheres in the images were automatically segmented from the background according to color information. The 2-D coordinates of the spherical centers in the images were extracted for stereo matching and triangulation. Fig. 10(c) shows the reconstructed 3-D coordinates of the spherical centers in the stereo camera frame, where the red and blue axes correspond to the x- and y-axes of the calibration model, respectively. Using the mirror transformation, the reconstructed 3-D coordinates of the five spheres were rigidly registered to the known geometry of the calibration model in the IP frame to obtain $T^C_{IP}$. We achieved an FRE of 0.35 mm, and the reconstructed distance from the middle sphere to the plane formed by the other four spheres was 19.76 mm, compared with the ground truth of 20 mm. This means that our 3-D display system is able to give correct depth information and that the IP-camera registration has submillimeter accuracy.

E. Visual Effect Evaluation of AR Overlay

After the IP-camera and patient-image registrations, the derived $T^I_{IP} = T^C_{IP} T^I_C$ was used to project the 3-D images of the teeth (including tooth roots) and the surrounding nerve channels over the patient phantom for visualization.


Fig. 11. (a) Teeth model overlay with critical structures to visualize the hidden tooth roots. (b) Augmented display of the surgical instrument with the overlaid drill path. (c) Three-dimensional images of molars including a growing wisdom tooth were overlaid on a lower jaw model. (d) Augmented display of the surgical instrument indicating the drill path.

In addition, $T^{tip}_{IP} = T^C_{IP} T^{tip}_C$ was used to overlay a 3-D image of the surgical instrument over the real one for augmented display. Fig. 11(a) shows the 3-D image overlay, and Fig. 11(b) shows the augmented display of the surgical instrument with its extended line indicating the drill path. Because the hidden tooth roots are visualized, surgeons could drill a hole between two adjacent tooth roots without causing damage to them. We also moved the phantom to simulate patient movement, and the system was able to update $T^I_C$ instantaneously while maintaining the correct 3-D image overlay. Fig. 11(c) and (d) shows a surgical site where the molar area was tracked for the patient-image registration. The 3-D image overlay provides depth perception, which is helpful for the operation.

F. Accuracy Evaluation of 3-D Image Overlay

We evaluated the image overlay accuracy, which includes both the IP-camera registration error and the patient-image registration error. Fiducial holes covering the common surgical site of dental surgery were made on a lower jaw CG model, which was reconstructed from the CT data [see Fig. 12(a)]. A real lower jaw model with the holes [see Fig. 12(b)] was fabricated from the CG model using a 3-D printer with a layer thickness of 28 μm (Alaris30, Stratasys). The hole channels were obtained by reversing the CG model [see Fig. 12(a)]. The lower teeth (including the tooth roots) shown in Fig. 12(c) and the hole channels were overlaid on the real jaw model. Fig. 12(d) and (e) shows the overlay of the 3-D images using front teeth tracking and molar tracking, respectively, for the patient-image registration. A laser pointer was set up beside the jaw model, and the laser beam was adjusted to coincide with a hole channel under AR guidance. A center-to-center error could therefore be measured manually with a digital caliper (±0.01 mm) by comparing the projected laser spot with the reference hole.


Fig. 12. Three-dimensional image overlay accuracy evaluation. (a) CG model of a lower jaw with fiducial holes. (b) Fabricated lower jaw model with the holes using a 3-D printer. (c) Lower teeth and hole channels for image overlay. (d) and (e) Visual effects of the image overlay for accuracy evaluation. (f) Experimental setup for error measurement.

TABLE II: ERROR MEASUREMENT RESULTS

We measured each error ten times and took the average as the measurement result. Fig. 12(f) shows the experimental setup for the error measurement. We evaluated the errors for a total of 3 × 11 holes in both the front teeth and molar areas; the results are summarized in Table II.

IV. DISCUSSION

The LCD resolution and the lens array quality determine the image depth limit, i.e., the maximum depth within which the 3-D image can be observed clearly. Because of the focus deviation of the individual lenses, parallel rays may not converge to the same point, leading to 3-D image blur as the depth increases. The depth limit in our current configuration is about 40 mm.

In the patient-image registration, the teeth contour is tracked by the stereo camera to cope with the patient's movements. The overlaid 3-D image could interfere with the contour reconstruction; therefore, in our current strategy, the 3-D image is not displayed during patient tracking. When the patient remains stationary, the 3-D image is overlaid to guide the surgical operation. The registration and the overlay are thus currently performed separately. In the future, we plan to implement a time-sharing display-tracking strategy in which the tracking and the display are performed alternately within the stereo camera video stream. If the frequency is high enough, the tracking and the display will appear concurrent.

For the patient-image registration, surgeons are not aware of the registration procedure, which is performed automatically within tens of milliseconds. They only need to expose the teeth to the stereo camera. Our algorithm applies to both the front teeth and the molars, covering the entire possible surgical site for dental surgery. Surgeons can push the overlay system away if the device interferes with the surgical operation and pull it back for visualization when necessary. Recently, a semiautomatic 2-D/3-D matching method between models and photographs was proposed to align 3-D orthodontics models [34]. This perspective-n-point concept for OMS registration is worth exploring and could have potential use in our real-time, automatic image registration strategy.

For the IP-camera registration, only a virtual calibration model is needed. With a "true 3-D" image carrying correct spatial information (in contrast with common binocular imaging using left and right images), spatial (not merely planar) registration is possible, thereby improving the registration accuracy.

The target registration error near the "features" used for registration is smaller than that in farther areas [35]. Therefore, front teeth tracking or molar tracking is selected for the patient-image registration according to the location of the surgical site. The teeth are classified into three areas: front teeth, left molars, and right molars. The preoperative and intraoperative contours of each part can easily be extracted using the proposed methods, and the pair near the surgical site is used for the patient-image registration.

The EKF estimation of the tip position reduces the random fluctuations of the tip by combining prior knowledge with the current measurement while aiming at the target position. The filtering is performed only when the surgical instrument is in the quasi-static state, indicated by a small variation (less than 1 mm) in the tip position; otherwise, the tracked pose is used directly. In this way, the tip tracking has a stable static output without losing dynamic performance.

V. CONCLUSION

We presented an AR navigation system for dental surgery based on 3-D image overlay that overcomes the main shortcomings of currently available technologies. The system includes a customized stereo camera tracker for both patient tracking and instrument tracking, an automatic real-time patient-image registration method that requires no fiducial or reference markers, a simple but accurate IP-camera registration method, and AR visualization based on an autostereoscopic 3-D image overlay that provides both stereo and motion parallax. The main methodological contributions of this paper are: 1) an accurate real-time tracking method using a low-cost stereo camera; 2) a simple and accurate IP-camera registration method using virtual spatial markers; and 3) a real-time marker-free patient-image registration method that places no extra burden on surgeons. The application innovation of this paper is a 3-D image overlay-based AR navigation system for dental surgery. Experiments were performed to evaluate the system from various aspects. The mean overall error of the 3-D image overlay was 0.71 mm, which meets clinical needs. Our next step is to customize the system for applications in the operating room.


REFERENCES

[1] P. Pohlenz, A. Gröbe, A. Petersik, N. von Sternberg, B. Pflesser, A. Pommert, K.-H. Höhne, U. Tiede, I. Springer, and M. Heiland, "Virtual dental surgery as a new educational tool in dental school," J. Cranio Maxill. Surg., vol. 38, no. 8, pp. 560–564, 2010.
[2] D. Wang, Y. Zhang, Y. Wang, Y.-S. Lee, P. Lu, and Y. Wang, "Cutting on triangle mesh: Local model-based haptic display for dental preparation surgery simulation," IEEE Trans. Vis. Comput. Graph., vol. 11, no. 6, pp. 671–683, Nov./Dec. 2005.
[3] N. Casap, A. Wexler, N. Persky, A. Schneider, and J. Lustmann, "Navigation surgery for dental implants: Assessment of accuracy of the image guided implantology system," J. Oral Maxillofac. Surg., vol. 62, pp. 116–119, 2004.
[4] N. Casap, S. Nadel, E. Tarazi, and E. I. Weiss, "Evaluation of a navigation system for dental implantation as a tool to train novice dental practitioners," J. Oral Maxillofac. Surg., vol. 69, no. 10, pp. 2548–2556, 2011.
[5] H. Yu, S. G. Shen, X. Wang, L. Zhang, and S. Zhang, "The indication and application of computer-assisted navigation in oral and maxillofacial surgery—Shanghai's experience based on 104 cases," J. Cranio Maxill. Surg., vol. 41, no. 8, pp. 770–774, 2013.
[6] C. Bouchard, J. Magill, V. Nikonovskiy, M. Byl, B. Murphy, L. Kaban, and M. Troulis, "Osteomark: A surgical navigation system for oral and maxillofacial surgery," Int. J. Oral Maxillofac. Surg., vol. 41, no. 2, pp. 265–270, 2012.
[7] M. Tsuji, N. Noguchi, M. Shigematsu, Y. Yamashita, K. Ihara, M. Shikimori, and M. Goto, "A new navigation system based on cephalograms and dental casts for oral and maxillofacial surgery," Int. J. Oral Max. Surg., vol. 35, no. 9, pp. 828–836, 2006.
[8] S. Yamaguchi, T. Ohtani, H. Yatani, and T. Sohmura, "Augmented reality system for dental implant surgery," in Virtual and Mixed Reality (Lecture Notes in Computer Science, vol. 5622), R. Shumaker, Ed. Berlin, Germany: Springer-Verlag, 2009, pp. 633–638.
[9] H.-T. Luebbers, P. Messmer, J. A. Obwegeser, R. A. Zwahlen, R. Kikinis, K. W. Graetz, and F. Matthews, "Comparison of different registration methods for surgical navigation in cranio-maxillofacial surgery," J. Cranio Maxill. Surg., vol. 36, no. 2, pp. 109–116, 2008.
[10] J. Maintz and M. A. Viergever, "A survey of medical image registration," Med. Image Anal., vol. 2, no. 1, pp. 1–36, 1998.
[11] D. Venosta, Y. Sun, F. Matthews, A. L. Kruse, M. Lanzer, T. Gander, K. W. Grätz, and H.-T. Lübbers, "Evaluation of two dental registration-splint techniques for surgical navigation in cranio-maxillofacial surgery," J. Cranio Maxill. Surg., Jul. 5, 2013. [Online].
[12] N. Dodgson, "Autostereoscopic 3-D displays," Computer, vol. 38, no. 8, pp. 31–36, 2005.
[13] W. Birkfellner, M. Figl, K. Huber, F. Watzinger, F. Wanschitz, J. Hummel, R. Hanel, W. Greimel, P. Homolka, R. Ewers, and H. Bergmann, "A head-mounted operating binocular for augmented reality visualization in medicine—Design and initial evaluation," IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 991–997, Aug. 2002.
[14] M. Figl, C. Ede, J. Hummel, F. Wanschitz, R. Ewers, H. Bergmann, and W. Birkfellner, "A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus," IEEE Trans. Med. Imag., vol. 24, no. 11, pp. 1492–1499, Nov. 2005.
[15] V. Ferrari, G. Megali, E. Troia, A. Pietrabissa, and F. Mosca, "A 3-D mixed-reality system for stereoscopic visualization of medical dataset," IEEE Trans. Biomed. Eng., vol. 56, no. 11, pp. 2627–2633, Nov. 2009.
[16] R. Wen, C.-K. Chui, S.-H. Ong, K.-B. Lim, and S.-K. Chang, "Projection-based visual guidance for robot-aided RF needle insertion," Int. J. Comput. Assist. Radiol. Surg., vol. 8, pp. 1015–1025, 2013.
[17] A. Stern and B. Javidi, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proc. IEEE, vol. 94, no. 3, pp. 591–607, Mar. 2006.
[18] J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt., vol. 48, no. 34, pp. H77–H94, Dec. 2009.
[19] H. Liao, T. Inomata, I. Sakuma, and T. Dohi, "3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay," IEEE Trans. Biomed. Eng., vol. 57, no. 6, pp. 1476–1486, Jun. 2010.
[20] H. Liao, N. Hata, S. Nakajima, M. Iwahara, I. Sakuma, and T. Dohi, "Surgical navigation by autostereoscopic image overlay of integral videography," IEEE Trans. Inf. Technol. Biomed., vol. 8, no. 2, pp. 114–121, Jun. 2004.
[21] H. Liao, H. Ishihara, H. H. Tran, K. Masamune, I. Sakuma, and T. Dohi, "Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay," Comput. Med. Imag. Graph., vol. 34, no. 1, pp. 46–54, 2010.
[22] H. Tran, H. Suenaga, K. Kuwana, K. Masamune, T. Dohi, S. Nakajima, and H. Liao, "Augmented reality system for oral surgery using 3D auto stereoscopic visualization," in Proc. Med. Image Comput. Comput.-Assist. Intervention, 2011, vol. 6891, pp. 81–88.
[23] J. Wang, H. Suenaga, L. Yang, H. Liao, E. Kobayashi, T. Takato, and I. Sakuma, "Real-time marker-free patient registration and image-based navigation using stereovision for dental surgery," in Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions (Lecture Notes in Computer Science, vol. 8090). Berlin, Germany: Springer-Verlag, 2013, pp. 9–18.
[24] G. Lippmann, "Épreuves réversibles donnant la sensation du relief," J. Phys. Theor. Appl., vol. 7, no. 1, pp. 821–825, 1908.
[25] H. Liao, T. Dohi, and K. Nomura, "Autostereoscopic 3-D display with long visualization depth using referential viewing area-based integral photography," IEEE Trans. Vis. Comput. Graph., vol. 17, no. 11, pp. 1690–1701, Nov. 2011.
[26] J. Wang, I. Sakuma, and H. Liao, "A hybrid flexible rendering pipeline for real-time 3D medical imaging using GPU-accelerated integral videography," Int. J. Comput. Assist. Radiol. Surg., vol. 8, pp. S287–S288, 2013.
[27] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, Nov. 2000.
[28] A. Fusiello, E. Trucco, and A. Verri, "A compact algorithm for rectification of stereo pairs," Mach. Vision Appl., vol. 12, no. 1, pp. 16–22, 2000.
[29] B. K. P. Horn, H. M. Hilden, and S. Negahdaripour, "Closed-form solution of absolute orientation using orthonormal matrices," J. Opt. Soc. Amer. A, vol. 5, no. 7, pp. 1127–1135, Jul. 1988.
[30] G. Welch and G. Bishop, "An introduction to the Kalman filter," 1995. [Online]. Available: http://clubs.ens-cachan.fr/krobot/old/data/positionnement/kalman.pdf
[31] K. Briechle and U. D. Hanebeck, "Template matching using fast normalized cross correlation," Proc. SPIE, Opt. Pattern Recog. XII, vol. 4387, pp. 95–102, 2001.
[32] S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm," in Proc. 3rd Int. Conf. 3-D Digital Imag. Model., 2001, pp. 145–152.
[33] J. Fitzpatrick and J. West, "The distribution of target registration error in rigid-body point-based registration," IEEE Trans. Med. Imag., vol. 20, no. 9, pp. 917–927, Sep. 2001.
[34] R. Destrez, S. Treuillet, Y. Lucas, and B. Albouy-Kissi, "Semi-automatic registration of 3D orthodontics models from photographs," Proc. SPIE, Med. Imag., vol. 8669, pp. 86691E-1–86691E-12, 2013.
[35] J. Fitzpatrick, J. West, and C. R. Maurer, Jr., "Predicting error in rigid-body point-based registration," IEEE Trans. Med. Imag., vol. 17, no. 5, pp. 694–702, Oct. 1998.

Junchen Wang received the B.S. and Ph.D. degrees in mechatronics from Beihang University, Beijing, China, in 2006 and 2012, respectively. He was a visiting Ph.D. student at the Graduate School of Engineering, University of Tokyo, Tokyo, Japan, from 2008 to 2010, supported by the China Scholarship Council. He is currently a Postdoctoral Fellow at the University of Tokyo, Tokyo, Japan. His research interests include surgical navigation, medical image computing, and surgical vision and graphics.


Hideyuki Suenaga received the Ph.D. degree in medical science from the University of Tokyo, Tokyo, Japan, in 2004. He is currently an Assistant Professor in the Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, University of Tokyo Hospital, Tokyo, Japan. His research interests include computer-aided surgery, especially medical devices for surgery.

Kazuto Hoshi received the M.D. and Ph.D. degrees from the University of Tokyo, Tokyo, Japan, in 1991 and 1998, respectively. Since 2002, he has been with the Department of Cartilage and Bone Regeneration, Graduate School of Medicine, University of Tokyo, Tokyo, Japan, where he is currently an Associate Professor. He has been engaged in the research and development of three-dimensional tissue-engineered cartilage and has succeeded in its clinical application for cleft-lip nose patients. Dr. Hoshi has been a board member of the Japanese Society for Regenerative Medicine since 2013.

Liangjing Yang received the B.Eng. and M.Eng. degrees in mechanical engineering from the Department of Mechanical Engineering, National University of Singapore, Singapore, in 2008 and 2011, respectively. He is currently working toward the Ph.D. degree in the Graduate School of Engineering, University of Tokyo, Tokyo, Japan, on image mapping of ultrasonography and endoscopy. He was a Research Engineer working on augmented reality robotic surgery in the Department of Mechanical Engineering, National University of Singapore from 2008 to 2011.

Etsuko Kobayashi received the B.S., M.S., and Ph.D. degrees in precision machinery engineering from the University of Tokyo, Tokyo, Japan, in 1995, 1997, and 2000, respectively. From 2000 to 2003, she was a Research Associate at the School of Frontier Science, University of Tokyo. From 2003 to 2006, she was a Lecturer at the School of Frontier Science, University of Tokyo. Since 2006, she has been with the Department of Precision Engineering, Graduate School of Engineering, University of Tokyo, where she is currently an Associate Professor. Her research interests include medical robotics, surgical navigation systems, and biomedical instrumentation. Dr. Kobayashi is a member of the International Society for Computer Aided Surgery, the Japan Society of Computer Aided Surgery, and the Japanese Society for Medical and Biological Engineering.

Ichiro Sakuma (A’88–M’08) received the B.S., M.S., and Ph.D. degrees in precision machinery engineering from the University of Tokyo, Tokyo, Japan, in 1982, 1984, and 1989, respectively. From 1985 to 1987, he was a Research Associate in the Department of Precision Machinery Engineering, Faculty of Engineering, University of Tokyo. From 1991 to 1999, he was an Associate Professor in the Department of Applied Electronic Engineering, Tokyo Denki University, Saitama, Japan. He was an Associate Professor (1999–2001) and a Professor (2001–2006) at the Institute of Environmental Studies, Graduate School of Frontier Sciences, University of Tokyo. He is currently a Professor and the Director of the Medical Device Development and Regulation Research Center in the Department of Precision Engineering, Graduate School of Engineering, University of Tokyo. His research interests include biomedical instrumentation, simulation of biomedical phenomena, computer-assisted intervention, and surgical robotics. Dr. Sakuma is a board member of the Medical Image Computing and Computer Assisted Intervention Society, the Japan Society of Computer Aided Surgery, and the Japanese Society of Electro Cardiology. He was the Vice President of the Japanese Society for Medical and Biological Engineering from 2006 to 2007.

Hongen Liao (M’04) received the B.S. degree in mechanics and engineering sciences from Peking University, Beijing, China, in 1996, and the M.E. and Ph.D. degrees in precision machinery engineering from the University of Tokyo, Tokyo, Japan, in 2000 and 2003, respectively. Since 2004, he has been a Faculty Member at the Graduate School of Engineering, University of Tokyo, where he became an Associate Professor in 2007. He has been selected as a National "Thousand Talents" Distinguished Professor, National Recruitment Program of Global Experts, China, and is currently a Full Professor in the Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China. His research interests include 3-D medical imaging, image-guided surgery, medical robotics, computer-assisted surgery, and the fusion of these techniques for minimally invasive precision diagnosis and therapy. He is the author or coauthor of more than 150 peer-reviewed articles published in journals and conference proceedings, as well as more than 250 abstracts and numerous invited lectures. Dr. Liao was distinguished by receiving the government award [The Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology (MEXT), Japan]. He was a Research Fellow of the Japan Society for the Promotion of Science. He received more than ten awards including the OGINO Award (2007), the ERICSSON Young Scientist Award (2006), the IFMBE Young Investigators Awards (2005 and 2006), and various Best Paper Awards from different academic societies. His research was well funded by MEXT, the Ministry of Internal Affairs and Communications, the New Energy and Industrial Technology Development Organization, the Japan Society for the Promotion of Science in Japan, and the National Natural Science Foundation of China. He is an Associate Editor of the IEEE Engineering in Medicine and Biology Society Conference, the Organization Chair of the Medical Imaging and Augmented Reality Conference (MIAR) 2008, the Program Chair of the Asian Conference on Computer-Aided Surgery (ACCAS) 2008 and 2009, the Tutorial Co-Chair of the Medical Image Computing and Computer Assisted Intervention Conference (MICCAI) 2009, the Publicity Chair of MICCAI 2010, the General Chair of MIAR 2010 and ACCAS 2012, the Program Chair of MIAR 2013, and the Workshop Chair of MICCAI 2013.
