A Novel Bronchoscope Tracking Method for Bronchoscopic Navigation Using a Low Cost Optical Mouse Sensor

Xióngbiāo Luó^a, Marco Feuerstein^b, Takayuki Kitasaka^c, Hiroshi Natori^d, Hirotsugu Takabatake^e, Yoshinori Hasegawa^f, Kensaku Mori^{g,a}

^a Graduate School of Information Science, Nagoya University, Japan; ^b Computer Aided Medical Procedures (CAMP), Technische Universität München, Germany; ^c Faculty of Information Science, Aichi Institute of Technology, Japan; ^d Keiwakai Nishioka Hospital, Japan; ^e Sapporo-Minami-Sanjo Hospital, Japan; ^f Graduate School of Medicine, Nagoya University, Japan; ^g Information and Communications Headquarters, Nagoya University, Japan

ABSTRACT

Image-guided bronchoscopy usually requires tracking the position and orientation of the bronchoscope camera to align the preinterventional 3-D computed tomography (CT) images to the intrainterventional 2-D bronchoscopic video frames. Current state-of-the-art image-based algorithms often fail in bronchoscope tracking due to a shortage of information on depth and on rotation around the viewing (running) direction of the bronchoscope camera. To address these problems, this paper presents a novel bronchoscope tracking method for bronchoscopic navigation based on a low-cost optical mouse sensor, bronchial structure information, and image registration. We first utilize an optical mouse sensor to automatically measure the insertion depth of the bronchoscope and its rotation around the viewing direction. We integrate the outputs of this 2-D sensor by performing centerline matching on the basis of bronchial structure information before optimizing the bronchoscope camera motion parameters during image registration. We assess our new method on phantom data. Experimental results illustrate that, compared to our previous image-based method, the proposed method is a promising means for bronchoscope tracking that significantly improves the tracking performance.

Keywords: Bronchoscopic Navigation, Bronchoscope Tracking, Centerline Matching, Optical Mouse Sensor

1. INTRODUCTION

Navigated bronchoscopy systems are used to accurately register useful preoperative and intraoperative information (e.g., 3-D CT chest slices and real-time 2-D bronchoscopic video images) to guide physicians, e.g., for placing surgical instruments such as transbronchial lung biopsy (TBLB) needles to obtain samples of suspicious tumors and for treating or removing precancerous tissue during minimally invasive diagnosis and treatment of bronchus and lung cancer. One essential requirement for such navigation is to determine the position and orientation of the camera attached to the bronchoscope tip. Unfortunately, it is very challenging to accurately estimate the position and orientation of the bronchoscope camera inside a patient's airway tree in real time during bronchoscopic navigation. To date, two main methods (or their combination) for tracking a bronchoscope have been published in the literature: (a) sensor-based and (b) image-based tracking. The former uses a sensing coil (sensor) attached to the working channel of the

Further author information: (Send correspondence to Xióngbiāo Luó)
Xióngbiāo Luó: E-mail: [email protected], Telephone: +81 52 789 5688
Marco Feuerstein: E-mail: [email protected], Telephone: +49 89 289 19427
Kensaku Mori: E-mail: [email protected], Telephone: +81 52 789 5689

bronchoscope and localized by an electromagnetic (EM) tracking system, such as the commercially available superDimension navigation system.1, 2 However, EM sensor-based tracking methods have two main drawbacks: (1) sensitivity to localization problems resulting from patient movement (i.e., airway deformation or patient coughing) and (2) measurement inaccuracies caused by magnetic field distortion due to ferrous metals or conductive materials within or close to the working volume. The latter processes the bronchoscopic video images obtained from the bronchoscope camera to continuously estimate the pose of the bronchoscope camera on the basis of image registration methods.3-5 Besides the tracking methods mentioned above, a model-based method to locate a bronchoscope inside the airway tree was proposed in the work of Kukuk.6, 7 By directly and manually measuring the insertion depth, the rotation along the viewing direction, and the bending angle of the bendable section of a bronchoscope, Kukuk et al. constructed a model that integrated the measured information to estimate the current position and orientation of the bronchoscope camera during an optimization process.

Usually, image-based methods register a real bronchoscope camera pose to a virtual camera pose generated by placing a virtual camera inside the 3-D CT data and calculating intensity-based similarities between real and virtual images. However, image-based methods rely on characteristic information of bronchial trees (e.g., bifurcations or folds), so they cannot handle cases where this information is not clearly visible to the bronchoscope camera.5 Furthermore, current image-based algorithms often fail in bronchoscope tracking due to a lack of information on depth and on rotation around the viewing (running) direction of the bronchoscope camera. This lack of depth and rotation information usually arises when problematic bronchoscopic video images (e.g., images corrupted by bubbles, motion blur, or airway deformation) are observed during bronchoscope tracking. Such problematic images easily cause the registration procedure to become trapped in local minima during the optimization process and to lose track of the bronchoscope movement.

The purpose of this work is to develop a new external bronchoscope tracking method based on an optical mouse (OM) sensor, centerline matching using bronchial structure information, and intensity-based image registration for navigated bronchoscopy. First, since current image-guided bronchoscopy usually fails to track the bronchoscope camera due to the lack of information on depth and on rotation around the viewing direction, we propose a new solution that utilizes an OM sensor to directly obtain this information, improving the tracking accuracy and robustness of the image-based method. After obtaining the insertion depth and rotation information along the running direction of the bronchoscope from the OM sensor outputs, we perform centerline matching based on bronchial structure information to integrate this information and to calculate the initial guess for the current bronchoscopic frame before refining the current camera pose by image registration.

2. A NOVEL EXTERNAL TRACKING METHOD

2.1 Overview

Our proposed method for predictive bronchoscope motion tracking consists of four stages: (1) data acquisition, (2) centerline matching, (3) determination of the initial guess, and (4) optimization of camera motion parameters. In the data acquisition stage, we use an OM sensor to record the 2-D sensor outputs. During centerline matching, we update the current camera pose by integrating the OM outputs on the basis of bronchial tree structure information. In stage (3), we determine the initial guess for the optimizer used during image registration. In stage (4), intensity-based registration is performed to optimize the bronchoscope camera pose.
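To make the interplay of the four stages concrete, the following sketch outlines one iteration of the tracking loop in Python-style pseudocode. All function names are hypothetical placeholders for the stages described above (the actual system was implemented in C++); this is a structural sketch, not our implementation.

```python
# One iteration of the proposed tracking pipeline (hypothetical function
# names standing in for the four stages described in Section 2.1).

def track_frame(frame_k, prev_pose, airway_tree, ct_volume):
    # (1) Data acquisition: 2-D OM sensor outputs synchronized to frame k,
    #     giving insertion depth delta and rotation angle theta.
    delta, theta = read_om_sensor(frame_k)

    # (2) Centerline matching: project the previous camera position onto
    #     the bronchial centerline and advance it by delta (Eqs. 1-5).
    q_star, r_star = match_centerline(prev_pose, airway_tree, delta)

    # (3) Initial guess: drift the matched point off the centerline and
    #     add the rotation theta around the running direction (Eqs. 6-7).
    guess = build_initial_guess(q_star, r_star, prev_pose, theta)

    # (4) Intensity-based registration: refine the pose by minimizing the
    #     MoMSE similarity between real and virtual images (Eq. 8).
    return register_images(frame_k, guess, ct_volume)
```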

2.2 Acquisition of Optical Mouse Sensor Outputs

An OM sensor is a microscopic image acquisition device integrated with a digital signal processor (DSP), which detects the direction and magnitude of mouse movement by optically acquiring and mathematically comparing sequential surface images.8 To track bronchoscope movements, we use an OM sensor as a 2-D displacement sensor to directly measure the surface movement of the bronchoscope shaft, from which we obtain its insertion depth and its rotation angle around the viewing direction, as shown in Figure 1. From these measurements, we can estimate the position and the viewing direction of the bronchoscope tip and its integrated camera.

To record the 2-D displacement of the OM sensor, we physically construct a sensor-microcontroller-computer interface that provides access to the OM sensor and its outputs.

Figure 1: Using an OM sensor to directly measure the information on the insertion depth and the rotation along the viewing direction of the bronchoscope camera.
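To illustrate how the raw 2-D sensor counts could map to the two physical quantities of Figure 1, consider the minimal sketch below. The sensor resolution and shaft radius values are hypothetical, and the linear/arc-length conversion is our assumed reading of the geometry, not a calibration procedure reported in this paper.

```python
import math

COUNTS_PER_INCH = 400.0   # hypothetical OM sensor resolution
MM_PER_INCH = 25.4
SHAFT_RADIUS_MM = 3.0     # hypothetical bronchoscope shaft radius

def counts_to_motion(dx_counts, dy_counts):
    """Convert raw 2-D OM sensor counts into insertion depth (mm) and
    rotation around the running direction (degrees).

    Assumes the sensor's y axis is aligned with the shaft axis (insertion)
    and its x axis with the shaft circumference (rotation)."""
    depth_mm = dy_counts / COUNTS_PER_INCH * MM_PER_INCH
    arc_mm = dx_counts / COUNTS_PER_INCH * MM_PER_INCH
    theta_deg = math.degrees(arc_mm / SHAFT_RADIUS_MM)  # arc length / radius
    return depth_mm, theta_deg
```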

Figure 2: Hacking an OM sensor to construct a sensor-microcontroller-computer interface to record the immediate bronchoscope movement. Pins on the sensor and microcontroller boards need to be physically connected.

By integrating this interface into our bronchoscopic video grabber, we can simultaneously acquire the 2-D OM sensor outputs, the bronchoscopic images, and their timestamps. However, the sensor outputs and the bronchoscopic images still need to be synchronized in accordance with their timestamps. Figure 2 displays our constructed interface for the communication between the OM sensor and the computer.
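Since sensor samples and video frames arrive asynchronously, they have to be paired by timestamp. Below is a minimal sketch of one way to do this, assuming each stream is a time-sorted list; accumulating all sensor increments between consecutive frame timestamps is our interpretation of the synchronization step, not a description of our actual grabber code.

```python
def sync_sensor_to_frames(sensor_samples, frame_times):
    """Accumulate OM sensor displacements between consecutive frames.

    sensor_samples: list of (t, dx, dy) tuples, sorted by time t
    frame_times:    list of frame timestamps, sorted
    Returns one accumulated (dx, dy) per frame interval."""
    per_frame, i = [], 0
    for k in range(1, len(frame_times)):
        dx = dy = 0
        # sum all sensor increments falling into (t_{k-1}, t_k]
        while i < len(sensor_samples) and sensor_samples[i][0] <= frame_times[k]:
            if sensor_samples[i][0] > frame_times[k - 1]:
                dx += sensor_samples[i][1]
                dy += sensor_samples[i][2]
            i += 1
        per_frame.append((dx, dy))
    return per_frame
```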

2.3 Centerline Matching Using Bronchial Structure Information

After timestamp synchronization between the sensor outputs and the bronchoscopic frames, we obtain the relative insertion depth ∆ and the rotation angle θ of the viewing direction between successive bronchoscopic images. The OM sensor thus gives two measurements: (1) the insertion depth and (2) the rotation of the viewing direction of the bronchoscope shaft. To use the two measurements ∆ and θ, we first perform centerline matching on the basis of bronchial structure information to obtain a new guess for the pose of the current frame.9 Figure 3 shows the procedure of combining the OM sensor outputs by centerline matching and determining the initial guess for image registration. This procedure is described in detail as follows.

(a) Assignment of bronchial branch. Centerline matching requires finding the closest point on a branch centerline corresponding to the bronchoscope camera position. Suppose Q(p^k, r^k) denotes the camera pose

including position p^k and rotation r^k at frame k (k = 1, ..., M) in CT coordinates. We calculate the distance D(p^k, b_i) between the camera position p^k and each bronchial branch b_i = (b_i^s, b_i^e) (i = 1, ..., N; b_i^s and b_i^e are the start and end points of the branch, respectively) according to

$$D(\mathbf{p}^k, b_i) = \begin{cases} \|\mathbf{p}^k - \mathbf{b}_i^s\|^2 & \text{if } \lambda < 0 \\ \|\mathbf{p}^k - \mathbf{b}_i^e\|^2 & \text{if } \lambda > \|\mathbf{b}_i^e - \mathbf{b}_i^s\| \\ \|\mathbf{p}^k - \mathbf{b}_i^s\|^2 - \lambda^2 & \text{otherwise} \end{cases}, \qquad \lambda = \frac{(\mathbf{p}^k - \mathbf{b}_i^s) \cdot (\mathbf{b}_i^e - \mathbf{b}_i^s)}{\|\mathbf{b}_i^e - \mathbf{b}_i^s\|}, \tag{1}$$

where λ is the length of the vector (p^k − b_i^s) projected onto the branch centerline. We determine the assigned branch b_* via the minimum distance D(p^k, b_i), or via the minimum angle between the branch vector and the camera viewing direction in case several branches have the same (or almost equal) minimum distance to the camera position.

(b) Calculation of corresponding pose. After obtaining the optimal branch b_*, the corresponding position q^k on the bronchial centerline can be computed with respect to b_* by

$$\mathbf{q}^k = \mathbf{b}_*^s + \underbrace{\frac{(\mathbf{p}^k - \mathbf{b}_*^s) \cdot (\mathbf{b}_*^e - \mathbf{b}_*^s)}{\|\mathbf{b}_*^e - \mathbf{b}_*^s\|}}_{\lambda_*} \, \frac{\mathbf{b}_*^e - \mathbf{b}_*^s}{\|\mathbf{b}_*^e - \mathbf{b}_*^s\|}. \tag{2}$$
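For illustration, the point-to-branch distance of Eq. 1 and the projection of Eq. 2 could be implemented as in the sketch below (NumPy, branches given as a list of start/end point pairs). This is a minimal rendering of the two equations under our reading of them, not the optimized implementation used in the experiments.

```python
import numpy as np

def branch_distance(p, bs, be):
    """Distance measure D(p, b) from camera position p to the branch with
    start/end points (bs, be), following Eq. 1; also returns lambda."""
    d = be - bs
    length = np.linalg.norm(d)
    lam = np.dot(p - bs, d) / length
    if lam < 0:
        return np.dot(p - bs, p - bs), lam          # closest to start point
    if lam > length:
        return np.dot(p - be, p - be), lam          # closest to end point
    return np.dot(p - bs, p - bs) - lam ** 2, lam   # perpendicular case

def assign_branch(p, branches):
    """Assign the branch b* minimizing D(p, b) (Eq. 1) and compute the
    corresponding centerline point q (Eq. 2). Tie-breaking by the angle
    to the camera viewing direction is omitted here for brevity."""
    best = min(range(len(branches)),
               key=lambda i: branch_distance(p, *branches[i])[0])
    bs, be = branches[best]
    _, lam = branch_distance(p, bs, be)
    u = (be - bs) / np.linalg.norm(be - bs)   # unit branch direction
    q = bs + lam * u   # may lie on the extension of b* (see Eq. 2 remark)
    return best, q, lam
```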

However, from Eq. 2, q^k does not always lie on the real centerline but sometimes on its extension. To guarantee that the new position q_*^k lies on the real bronchial centerline after integration of the depth ∆, we need to take into account the child branches b_*^{c_j} (j = 1, 2 or j = 1, 2, 3) and the parent branch b_*^p of the optimal branch b_*. Since the moving direction of the corresponding point q^k depends on the depth information ∆ (in fact, the bronchoscope is moving into or out of the bronchial tree if ∆ ≤ 0 or ∆ > 0, respectively), we calculate the new corresponding position q_*^k on the centerlines, not on the extension line of b_*, with respect to ∆ by

$$\mathbf{q}_*^k = \begin{cases} \mathbf{b}_*^s + (L_1 - \Delta)\,\mathbf{r}_*^k & \text{if } \Delta > 0 \\ \mathbf{b}_*^s + (L_2 - \Delta)\,\mathbf{r}_*^k & \text{if } -\|\mathbf{b}_*^e - \mathbf{b}_*^s\| \le \Delta \le 0 \\ \mathbf{b}_*^{s c_j} + (L_3 - \Delta)\,\mathbf{r}_*^k & \text{if } \Delta < -\|\mathbf{b}_*^e - \mathbf{b}_*^s\| \end{cases} \tag{3}$$

where b_*^{s c_j} is the start point of the child branch b_*^{c_j}, and L_1 and L_3 are determined by

$$L_1 = \begin{cases} 0 & \text{if } \lambda_* < 0 \\ \lambda_* & \text{if } 0 \le \lambda_* \le \|\mathbf{b}_i^e - \mathbf{b}_i^s\| \\ \|\mathbf{p}^k - \mathbf{b}_i^s\| & \text{if } \lambda_* > \|\mathbf{b}_i^e - \mathbf{b}_i^s\| \end{cases}, \qquad L_3 = \begin{cases} -\|\mathbf{p}^k - \mathbf{b}_i^s\| & \text{if } \lambda_* < 0 \\ \lambda_* - \|\mathbf{p}^k - \mathbf{b}_i^s\| & \text{if } 0 \le \lambda_* \le \|\mathbf{b}_i^e - \mathbf{b}_i^s\| \\ 0 & \text{if } \lambda_* > \|\mathbf{b}_i^e - \mathbf{b}_i^s\| \end{cases} \tag{4}$$

and L_2 is equal to L_1 if λ_* ≤ ‖b_i^e − b_i^s‖. The moving direction r_*^k of the corresponding position q_*^k is given by

$$\mathbf{r}_*^k = \begin{cases} \mathbf{r}_*^p & \text{if } L_1 \le \Delta \text{ and } \Delta > 0 \\ \mathbf{r}_*^b & \text{if } (L_1 > \Delta \text{ and } \Delta > 0) \text{ or } (L_2 > \Delta \text{ and } -\|\mathbf{b}_i^e - \mathbf{b}_i^s\| \le \Delta \le 0) \\ \mathbf{r}_*^{c_j} & \text{if } L_3 > \Delta \text{ and } \Delta \le -\|\mathbf{b}_i^e - \mathbf{b}_i^s\| \end{cases} \tag{5}$$

where r_*^{c_j}, r_*^p, and r_*^b are the branch directions of b_*^{c_j}, b_*^p, and b_*, respectively. Additionally, for the precise determination of the corresponding pose on the centerline, we note that image registration is invoked to decide which bronchial branch the bronchoscope should be moved into when it passes a bifurcation of the bronchial tree.
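Intuitively, the case analysis of Eqs. 3-5 amounts to walking the matched point along the tree by −∆, crossing into the parent or a child branch whenever a branch end is passed. The sketch below collapses the three cases into a single signed arc-length traversal; it is a simplified reading of the equations, and the choice of child branch at a bifurcation is left as a placeholder, since that decision is deferred to image registration as described above.

```python
import numpy as np

def walk_centerline(tree, branch_id, lam, delta):
    """Move the matched centerline point by -delta along the airway tree.

    tree[branch_id] holds 'start', 'end', 'parent', 'children' entries;
    lam is the arc-length position on the current branch (Eq. 2).
    Returns the new branch id, position, and branch direction (Eqs. 3-5)."""
    s = lam - delta    # delta <= 0: moving deeper; delta > 0: moving out
    while True:
        bs, be = tree[branch_id]['start'], tree[branch_id]['end']
        length = np.linalg.norm(be - bs)
        if s < 0 and tree[branch_id]['parent'] is not None:
            branch_id = tree[branch_id]['parent']
            bs2, be2 = tree[branch_id]['start'], tree[branch_id]['end']
            s += np.linalg.norm(be2 - bs2)   # re-express s on the parent
        elif s > length and tree[branch_id]['children']:
            s -= length
            branch_id = tree[branch_id]['children'][0]  # placeholder choice
        else:
            s = np.clip(s, 0.0, length)      # stay on the real centerline
            direction = (be - bs) / length
            return branch_id, bs + s * direction, direction
```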

Figure 3: Performing centerline matching using bronchial structure information to integrate the OM sensor outputs (insertion depth ∆ and rotation angle θ) and obtain the initial guess.

2.4 Determination of Initial Guess

The key part of our new method is to integrate the OM sensor measurements ∆ and θ into the initial guess of image registration, compensating for the depth information and the rotation angle around the running direction of the camera frame. After finding the corresponding pose (q_*^k, r_*^k) on the bronchial centerline in accordance with Eqs. 3 and 5, we further drift the point q_*^k from the centerline along the vector (q̃^k − p^k) by the distance ‖q̃^k − p^k‖ and hence obtain a new position q̂_*^k:

$$\hat{\mathbf{q}}_*^k = \mathbf{q}_*^k + (\tilde{\mathbf{q}}^k - \mathbf{p}^k), \qquad \tilde{\mathbf{q}}^k = \begin{cases} \mathbf{b}_*^s & \text{if } \lambda_* < 0 \\ \mathbf{q}^k & \text{if } 0 \le \lambda_* \le \|\mathbf{b}_i^e - \mathbf{b}_i^s\| \\ \mathbf{b}_*^e & \text{if } \lambda_* > \|\mathbf{b}_i^e - \mathbf{b}_i^s\| \end{cases} \tag{6}$$

Moreover, we update the rotation information r_*^k using the other OM sensor output θ by

$$\hat{\mathbf{r}}_*^k = \mathbf{r}^k + \mathbf{r}(\theta), \tag{7}$$

where r(θ) denotes the change of orientation around the running direction between consecutive frames. We now have the initial guess (q̂_*^k, r̂_*^k) for the optimization of camera poses during the next stage.
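One concrete way the initial guess of Eqs. 6 and 7 could be assembled is sketched below. We represent the camera orientation as a 3x3 rotation matrix and realize r(θ) as a Rodrigues rotation around the running direction; this is our assumed parameterization, since the paper leaves the rotation representation abstract.

```python
import numpy as np

def axis_rotation(axis, theta_rad):
    """Rodrigues rotation matrix for angle theta around a unit axis."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta_rad) * K + (1 - np.cos(theta_rad)) * K @ K

def initial_guess(q_star, r_star, q_tilde, p_prev, R_prev, theta_rad):
    """Eq. 6: drift the matched point q_star off the centerline by the
    offset (q_tilde - p_prev). Eq. 7: add the OM-measured rotation theta
    around the running direction r_star to the previous orientation."""
    q_hat = q_star + (q_tilde - p_prev)
    R_hat = axis_rotation(r_star, theta_rad) @ R_prev
    return q_hat, R_hat
```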

2.5 Optimization of Camera Motion Parameters

Starting from the initial guess based on the 2-D OM sensor measurements, and after defining the similarity measure MoMSE 5 between the real bronchoscopic (RB) image B^(k) at frame k and the virtual bronchoscopic (VB) image V(Q̂) generated from pose Q̂ using volume rendering techniques, we optimize the camera motion parameters by

$$\hat{\mathbf{Q}}^* = \arg\min_{\hat{\mathbf{Q}}} \; MoMSE\big(\mathbf{B}^{(k)}, \mathbf{V}(\hat{\mathbf{Q}})\big), \tag{8}$$

where Q̂ is initialized as (q̂_*^k, r̂_*^k) calculated by Eqs. 6 and 7, and the final estimate Q̂^* corresponds to the most similar virtual bronchoscopic image V(Q̂^*). The minimization is executed using the Powell method.10
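As a sketch of this refinement step, the pose parameters could be handed to a Powell optimizer as below. We use SciPy's Powell implementation and a plain mean-squared-error stand-in for the selective MoMSE measure of Deguchi et al.; the renderer is passed in as a callable, since the volume-rendering code is outside the scope of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def refine_pose(rb_image, guess_params, render):
    """Refine the 6-DOF camera pose (3 translations + 3 rotation angles)
    by minimizing an intensity-based dissimilarity (Eq. 8), starting from
    the OM-based initial guess.

    render(params) must return a VB image for the given pose; plain MSE
    stands in here for the selective MoMSE similarity measure."""
    def cost(params):
        vb_image = render(params)
        return np.mean((rb_image.astype(float) - vb_image.astype(float)) ** 2)

    result = minimize(cost, np.asarray(guess_params, dtype=float),
                      method='Powell',
                      options={'xtol': 1e-3, 'ftol': 1e-3, 'maxiter': 50})
    return result.x   # pose parameters of the most similar VB image
```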

3. EXPERIMENTAL RESULTS

Since we currently have no patient data, we evaluated our proposed method on a phantom for which RB video sequences and 3-D CT images were acquired following a standard clinical protocol. The acquisition parameters of the CT images are: 512 × 512 pixels, 341 slices, 0.5 mm slice thickness, and 1.0-2.0 mm reconstruction pitch. The bronchoscopic video was transferred to the host computer at 30 frames per second; the image size of the video frames is 360 × 370 pixels. All implementations were done in Microsoft Visual C++, and the software was run on a conventional PC (CPU: Intel XEON 3.80 GHz × 2 processors, 4 GByte main memory).

We compare two tracking methods: (1) the method of Deguchi et al.,5 using only intensity-based registration on the basis of the similarity measure MoMSE, and (2) our new method, as described in Section 2. We quantitatively evaluate the results by visual assessment, manually inspecting whether the RB images are similar to the VB images. Table 1 quantifies the tracking results of each method on four phantom cases. From the results listed in Table 1, it can be seen that our proposed method significantly improves the tracking performance compared to our previous method,5 with an increase of 1920 successfully tracked frames. Figure 4 shows examples of real bronchoscopic images and the corresponding virtual bronchoscopic images generated from the camera parameters predicted by each method. This visual comparison of the successfully tracked frames further demonstrates the effectiveness of the proposed method.

Table 1: Quantitative comparison of the tracking results.

Phantom cases | Number of frames | Successfully tracked (Deguchi et al.5) | Successfully tracked (our new method)
A             | 1083             | 137                                    | 416
B             | 1487             | 102                                    | 671
C             | 1051             | 144                                    | 583
D             | 1805             | 173                                    | 806
Total         | 5426             | 556                                    | 2476

Compared to the previous method,5 the improvement in the tracking performance of our method is attributed to the estimation of the insertion depth and the rotation angle around the running direction on the basis of the OM sensor measurements. Table 1 and Fig. 4 show that the method of Deguchi et al. hardly works in these cases, since it usually gets stuck in a local minimum due to the lack of depth information, of the rotation angle around the bronchoscope running direction, or of other information such as bronchial bifurcations and folds. The proposed method processes about 2.5 frames per second.

4. DISCUSSION

This work focuses on developing a novel tracking method that addresses the difficulty of estimating the insertion depth and the rotation angle around the running direction of the bronchoscope camera during bronchoscopic navigation. Since image-based methods usually lose the information on insertion depth and on rotation along the running direction of the bronchoscope camera, their optimization stage easily gets trapped in local minima. To overcome this problem, we proposed a method to directly measure insertion depth and rotation information during bronchoscope tracking.

Figure 4: Results of bronchoscope tracking by different methods under phantom validation: (a) examples of Case B, (b) examples of Case D. The left column shows selected frame numbers and the corresponding phantom RB images. The other columns display virtual bronchoscopic images generated from the tracking results of the method of Deguchi et al.5 and of our method. Our method shows the best performance.

Therefore, the main contribution of our study is that a novel bronchoscope tracking prototype was constructed and evaluated on a bronchial phantom. To our knowledge, we are the first to propose external tracking based on an optical mouse (OM) sensor for bronchoscope motion estimation. Although only 2-D motion information of the bronchoscope can be obtained from the OM sensor, OM sensor-based bronchoscope tracking still has several advantages over other external tracking methods such as electromagnetic (EM) tracking. First, such an external tracking setup is very cheap and simple to construct. Second, since the OM sensor is located outside the patient's airway tree, it does not occupy any space at the bronchoscope tip, whereas EM tracking currently requires attaching an EM sensor either to the surface or inside the working channel of the bronchoscope tip, which constrains the bronchoscope to moving only in large bronchial branches. Moreover, OM sensor measurements are not affected by ferrous metals or conductive materials near the bronchoscope, which usually distort the magnetic field of EM tracking systems and result in inaccurate EM sensor measurements. Finally, we believe such external tracking should be convenient and easy to integrate into the operating room. In summary, we believe that our new method provides a promising means for constructing a more accurate and robust bronchoscopic navigation system.

However, our new external tracking still fails to track some bronchoscopic video frames. The main reason is that we assume the bronchoscope moves along the bronchial centerline. This is a hard constraint since, in practice, it is difficult to keep the bronchoscope moving along the centerline of the airway tree. Moreover, the bronchoscope often collided with the bronchial walls during bronchoscopic video frame acquisition, yielding many dark or uninformative images (without bifurcation or fold information), for which our image-based method hardly works, since dark images provide too little information for the optimization process to converge to a good estimate of the bronchoscope motion. Finally, we note that airway deformation caused by patient movement, breathing, and coughing is a particular challenge in navigated bronchoscopy. In our experiments, we did not introduce simulated respiratory motion to evaluate our new method; this is left for future work.

5. CONCLUSIONS

We presented a novel external bronchoscope tracking method that combines measurements of a low-cost optical mouse sensor, centerline matching using bronchial structure information, and intensity-based registration. With an optical mouse sensor, we can directly measure the insertion depth and the rotation along the running direction of a bronchoscope. We perform centerline matching to integrate the measured insertion depth and rotation information and obtain a good initial guess for the optimization process of intensity-based image registration. Experimental results demonstrated the effectiveness of our new method and a significant improvement of the tracking performance, as 1920 more frames were successfully registered. Our future work includes further improving the accuracy and robustness of our new external tracking method, experiments on patient data, and reducing its computational complexity.

ACKNOWLEDGMENTS

This work was partly supported by the Hori Foundation, the Grant-in-Aid for Science Research funded by JSPS, Cancer Research by NCC, the JSPS postdoctoral fellowship program for foreign researchers, and "Computational anatomy for computer-aided diagnosis and therapy: frontiers of medical image sciences," funded by the Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan.

REFERENCES

[1] Solomon, S. B., White, P., Jr., Wiener, C. M., Orens, J. B., and Wang, K. P., "Three-dimensional CT-guided bronchoscopy with a real-time electromagnetic position sensor: a comparison of two image registration methods," Chest 118(6), 1783-1787 (2000).
[2] Schwarz, Y., Greif, J., Becker, H. D., Ernst, A., and Mehta, A., "Real-time electromagnetic navigation bronchoscopy to peripheral lung lesions using overlaid CT images: the first human study," Chest 129(4), 988-994 (2006).
[3] Deligianni, F., Chung, A. J., and Yang, G. Z., "Nonrigid 2-D/3-D registration for patient specific bronchoscopy simulation with statistical shape modeling: phantom validation," IEEE Transactions on Medical Imaging 25(11), 1462-1471 (2006).
[4] Helferty, J., Sherbondy, A., Kiraly, A., and Higgins, W., "Computer-based system for the virtual-endoscopic guidance of bronchoscopy," Computer Vision and Image Understanding 108, 171-187 (2007).
[5] Deguchi, D., et al., "Selective image similarity measure for bronchoscope tracking based on image registration," Medical Image Analysis 13(4), 621-633 (2009).
[6] Kukuk, M., Geiger, B., and Muller, H., "TBNA-protocols: guiding transbronchial needle aspirations without a computer in the operating room," in [MICCAI], 997-1006 (2001).
[7] Kukuk, M. and Geiger, B., "A real-time deformable model for flexible instruments inserted into tubular structures," in [MICCAI], 331-338 (2002).
[8] Avago Technologies, "ADNS-3080: high performance optical mouse sensor." http://www.avagotec.com.
[9] Mori, K., et al., "Improvement of accuracy of marker-free bronchoscope tracking using electromagnetic tracker based on bronchial branch information," in [MICCAI], LNCS 5242, 535-542 (2008).
[10] Berghen, F. V. and Bersini, H., "CONDOR, a new parallel, constrained extension of Powell's UOBYQA algorithm: experimental results and comparison with the DFO algorithm," Journal of Computational and Applied Mathematics 181(1), 157-175 (2005).