On Scale Invariant Features and Sequential Monte Carlo Sampling for Bronchoscope Tracking

Xióngbiāo Luóa, Marco Feuersteinb, Takayuki Kitasakac, Hiroshi Natorid, Hirotsugu Takabatakee, Yoshinori Hasegawaf, Kensaku Morig,a

a Graduate School of Information Science, Nagoya University, Japan; b Computer Aided Medical Procedures (CAMP), Technische Universität München, Germany; c Faculty of Information Science, Aichi Institute of Technology, Japan; d Keiwakai Nishioka Hospital, Japan; e Sapporo-Minami-Sanjo Hospital, Japan; f Graduate School of Medicine, Nagoya University, Japan; g Information and Communications Headquarters, Nagoya University, Japan
ABSTRACT

This paper presents an improved bronchoscope tracking method for bronchoscopic navigation using scale invariant features and sequential Monte Carlo sampling. Although image-based methods are widely discussed in the community of bronchoscope tracking, they are still limited by their dependence on characteristic information such as bronchial bifurcations or folds, and they cannot automatically resume the tracking procedure after failures, which usually result from problematic bronchoscopic video frames or airway deformation. To overcome these problems, we propose a new approach that integrates scale invariant feature-based camera motion estimation into sequential Monte Carlo sampling to achieve accurate and robust tracking. In our approach, sequential Monte Carlo sampling is employed to recursively estimate the posterior probability densities of the bronchoscope camera motion parameters according to an observation model based on scale invariant feature-based camera motion recovery. We evaluate our proposed method on patient datasets. Experimental results illustrate that our proposed method can track a bronchoscope more accurately and robustly than current state-of-the-art methods, increasing the tracking performance by 38.7% without using an additional position sensor.

Keywords: Bronchoscope Tracking, Bronchoscopic Navigation, Scale Invariant Feature Transform, Sequential Monte Carlo Sampling, Image Registration
1. INTRODUCTION

Navigated endoscopy is generally agreed to be the next generation of diagnostic or surgical endoscopy. However, a common challenge is to accurately register pre-interventional and intra-interventional information (e.g., 3-D CT data and 2-D endoscopic video images) in real time to provide physicians with augmented environments that guide medical interventions during endoscopic procedures in operating rooms (ORs). A bronchoscope is a useful medical tool for the assessment of the bronchi and of lung cancer, which is the leading cause of cancer death worldwide. Unfortunately, it is difficult for physicians to manually align pre- and intra-interventional information and to guide bronchoscopic instruments such as transbronchial lung biopsy (TBLB) needles to obtain samples of suspicious tumors and to treat or remove precancerous tissue during interventional bronchoscopy. Therefore, bronchoscopic navigation systems have been developed to align real-time 2-D bronchoscopic video images to a pre-built 3-D anatomical airway tree model, which enables physicians to accurately control their viewpoint (i.e., camera position and orientation) and to localize the biopsy needle properly.

One of the essential requirements to achieve such a navigational function is to determine the position and orientation parameters of the camera integrated at the bronchoscope tip. Bronchoscope tracking is usually used to estimate the bronchoscope movement during bronchoscopic navigation. Various techniques such as vision- and sensor-based methods (i.e., image registration and electromagnetic tracking) have been proposed for bronchoscope motion estimation.1–6 Although some of these methods demonstrated good performance, it is still challenging to precisely and reliably determine the six-degree-of-freedom movement of the bronchoscope camera. Vision-based methods often suffer from problematic bronchoscopic video images such as frames with bubbles or motion-blurred frames. Electromagnetic tracking (EMT) systems often locate the bronchoscope tip incorrectly under airway deformation, and the measurements of an EMT sensor are heavily affected by magnetic field distortion. Generally speaking, no matter which approach is used for bronchoscope motion tracking, it can hardly adapt itself to situation changes (e.g., patient coughing or dynamic errors in EMT measurements) over time during interventions. Hence, the tracking procedure easily becomes uncertain or ambiguous under image artifacts or patient movement. To obtain accurate and robust tracking, we must deal with these ambiguities during bronchoscope tracking, and we require a multi-modal tracking model to characterize the tracking uncertainties. This study develops a more accurate and robust bronchoscope motion estimation method that deals with tracking ambiguities when using image registration-based methods1–3 for bronchoscope tracking.

Further author information: (Send correspondence to Xióngbiāo Luó)
Xióngbiāo Luó: E-mail: [email protected], Telephone: +81 52 789 5688
Marco Feuerstein: E-mail: [email protected], Telephone: +49 89 289 19427
Kensaku Mori: E-mail: [email protected], Telephone: +81 52 789 5689
We aim to tackle two drawbacks of image-based approaches: (1) their dependence on specific bronchial information (e.g., bifurcations and folds) being observable and (2) their inability to automatically recover the tracking procedure once it fails. The former disadvantage causes the registration procedure to become trapped in local minima during the optimization process when no characteristic information is clearly visible in the real bronchoscopic video frames. To overcome this limitation, we utilize scale invariant features, which contain stable texture information and are independent of bronchial bifurcations or folds. In this work, we use the well-known scale invariant feature transform7 (SIFT) to detect image features. After feature extraction and correspondence, we perform epipolar geometry analysis to recover the relative motion between consecutive frames. The latter drawback is mainly caused by tracking uncertainties: the tracking process easily fails to estimate the bronchoscope movement due to problematic bronchoscopic video images, typically encountered under airway deformation (coughing, patient movement, and breathing), diffuse inter-reflection and specular reflection from the shiny bronchial mucosa, appearance of bubbles, collisions with the bronchial wall, and motion blurring. To resolve this problem, we introduce the sequential Monte Carlo (SMC) sampling method to alleviate these disturbances and to recover from tracking failures automatically. The main benefit of using an SMC sampler is that it can handle the tracking uncertainties (e.g., bubbles or breathing motion) occurring in bronchoscope tracking, since the SMC sampler is a multi-modal model designed for uncertain tracking.
Moreover, SMC methods approximate the posterior densities of the camera motion parameters by generating and propagating a set of random samples, which makes it possible to maintain potentially important modes that are either confirmed or discarded by subsequent observations. Therefore, the use of SMC simulation enables our method to tackle ambiguities appearing in bronchoscope tracking. In summary, this work proposes an improved tracking method that integrates scale invariant features into SMC-based simulation to address the limitations of image-based tracking methods. Our experimental results demonstrate that our method provides more accurate and robust bronchoscope tracking without an additional position sensor, compared to state-of-the-art image-based methods.
2. METHOD

2.1 Overview

The basic process of our proposed method includes two main stages: (1) feature-based relative camera motion recovery and (2) SMC sampling-based pose estimation. In stage (1), we predict the inter-frame motion parameters between consecutive images of a patient-specific bronchoscopic video using SIFT features and epipolar geometry analysis. In stage (2), we recursively approximate the posterior probability density of the current bronchoscope camera pose in accordance with the estimate from the first stage. Since the second stage generates a set of random samples, each defined by camera motion parameters and the similarity between the virtual bronchoscopic image and the patient-specific real bronchoscopic image, the current camera motion parameters can be set to the pose of the sample with the maximum similarity in the sample set. Figure 1 displays the flowchart of our proposed method, which is described in the following sections in more detail. Additionally, our proposed bronchoscope tracking approach is summarized in Algorithm 1.

Figure 1: Flowchart of bronchoscope camera motion tracking using our proposed method, which consists of two steps: scale invariant feature-based motion recovery and sequential Monte Carlo sampling.
2.2 Scale Invariant Feature-Based Motion Recovery

Since the first stage of our proposed method is quite similar to our previous work,6 we here only briefly review the estimation of the relative motion between consecutive frames from SIFT features. The following steps are performed to recover the inter-frame motion parameters ∆^CT T_C = (∆^CT R_C, ∆^CT t_C), consisting of the rotation ∆^CT R_C and the translation ∆^CT t_C from the camera coordinate system to the CT coordinate system:

• Feature Extraction. We detect 2-D feature points in each bronchoscopic video image using the SIFT algorithm.7 We also store all detected points of each frame for point matching, to save computation time.

• Point Correspondences. We find corresponding 2-D point pairs between the previous and the current image using the nearest neighbor matching method.7

• Epipolar Geometry Analysis. We first calculate the fundamental matrix from the point correspondences using epipolar constraints. After calibrating the bronchoscope camera and obtaining the intrinsic matrix, we can compute the essential matrix E and then sequentially solve E^T ∆^CT t_C = 0 and ∆^CT R_C E^T = [∆^CT t_C]^T_× to obtain ∆^CT R_C and ∆^CT t_C, where [·]_× denotes the skew-symmetric cross-product matrix.
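As an illustration of the last step, the standard SVD-based factorization of an essential matrix E ≈ [t]_× R into a unit translation direction and two candidate rotations can be sketched in NumPy (a minimal sketch with hypothetical function names; the cheirality test that selects among the four candidate (R, ±t) solutions, and the subsequent rescaling of t, are omitted):

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Factor an essential matrix E ~ [t]_x R (singular values ~ (s, s, 0))
    into the unit translation direction t, which spans the null space of E^T,
    and the two candidate rotations R1 = U W V^T and R2 = U W^T V^T."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1); this at most flips the sign of E,
    # which leaves the rotation candidates unchanged.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]  # unit translation direction; sign is ambiguous
    return R1, R2, t
```

The physically valid (R, t) pair would then be chosen as the one that places triangulated points in front of both cameras, and the unit translation is rescaled as described next.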
Additionally, we note that we set the scale factor of the essential matrix to 0.3 during epipolar geometry analysis, since the bronchoscope movement was on the order of 0.3 mm/frame (at a video frame rate of 30 fps) in our experiments; this differs from the scale factor determination described in our previous work.6 The next section details the bronchoscope camera motion estimation using SMC sampling.
2.3 SMC Sampling for Camera Pose Determination

Sequential Monte Carlo methods are a set of simulation-based algorithms that use a cloud of weighted random samples to address the problem of optimal state estimation in nonlinear, non-Gaussian state space models.8,9 In such approaches, the set of weighted random samples is deterministically drifted and stochastically diffused to estimate the posterior probability density of interest. We here utilize an SMC sampling method that originates from the condensation approach of Isard and Blake.10
Before describing our approach, we introduce some notation. Let ^CT T_C denote the transformation matrix between the camera and CT coordinate systems, written as a seven-dimensional state vector x_i that includes the translation and the rotation represented by a quaternion. The observation at frame i is y_i, with history Y_i = {y_1, ..., y_i}. Additionally, w^k_i is the sample weight, calculated by the similarity measure MoMSE.3 After generating a random sample set S^k_i = {(x^k_i, w^k_i, c^k_i) : i = 1, 2, ..., N; k = 1, 2, ..., M} (N and M are the numbers of frames and samples, respectively; c^k_i denotes the cumulative weight of each sample), which is used to approximate the posterior probability density of the current bronchoscope camera motion parameters x^k_i at frame i, our proposed bronchoscope tracking method using SMC sampling proceeds in the following stages:

(a) State Evolution. Each sample is transmitted to a new state through deterministic drift and stochastic diffusion. During the deterministic drift step, the new state x̂^k_i is computed as

x̂^k_i = Γ(x^k_{i−1}, ∆x),   (1)

where the vector ∆x describes the inter-frame camera motion parameters ∆^CT T_C determined during the SIFT-based motion recovery stage, and x̂^k_i is the state after the drift according to the transform function Γ. Next, a stochastic movement (noise vector) n^k_i drawn from a seven-dimensional normal distribution is applied to each sample to explore potentially important modes, so that the final evolved state of each sample is

x^k_i = x̂^k_i + n^k_i = Γ(x^k_{i−1}, ∆x) + n^k_i.   (2)

Additionally, SMC sampling methods need a probability density function describing the state transition probability p(x^k_i | x^k_{i−1}) between consecutive time steps. Since we have no prior knowledge of the bronchoscope camera movement, i.e., we do not know the prior distribution p(x_i) of the state vector x_i, we use a random walk based on a normal distribution, n^k_i ∼ N(µ, σ²), to approximate the density p(x^k_i | x^k_{i−1}) in accordance with Eq. 2:10,11

p(x^k_i | x^k_{i−1}) ∝ (1 / (√(2π) σ)) exp(−((x^k_i − Γ(x^k_{i−1}, ∆x)) − µ)² / (2σ²)).   (3)
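The drift and diffusion of Eqs. 1 and 2 can be sketched as follows (a hypothetical realization: we assume the state is stored as a translation plus a unit quaternion, that Γ composes translations additively and rotations by quaternion multiplication, and that µ = 0; the paper's exact Γ acts on the full transformation matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

def quat_mul(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def drift(x_prev, dt, dq):
    """Deterministic drift (Eq. 1): apply the SIFT-estimated inter-frame
    motion (dt, dq) to the previous 7-D state (translation, quaternion)."""
    t_new = x_prev[:3] + dt
    q_new = quat_mul(x_prev[3:], dq)
    return np.concatenate([t_new, q_new / np.linalg.norm(q_new)])

def diffuse(x_hat, sigma=0.1):
    """Stochastic diffusion (Eq. 2): add zero-mean Gaussian noise n_i^k,
    then renormalize the quaternion part so it remains a valid rotation."""
    x_new = x_hat + sigma * rng.standard_normal(7)
    x_new[3:] /= np.linalg.norm(x_new[3:])
    return x_new
```

The quaternion renormalization after diffusion is our own addition to keep the rotation part valid; the choice of σ controls how widely the samples explore around the drifted pose.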
(b) Observation Probability. After the state transition, the observation density p(y_i | x_i) must be calculated. According to the factored sampling scheme,10 the observation density is determined by

p(y_i | x_i = x^k_i) = w^k_i (Σ_{j=1}^{M} w^j_i)^{−1},   (4)

where the observation y_i is constructed as y_i = H x̂_i, with H the observation matrix describing the transformation from CT to camera coordinates.

Algorithm 1: Bronchoscope Tracking Using Scale Invariant Features and SMC Sampling7,10

input : Bronchoscopic video images B^(i); CT slices for generating virtual images
output: A series of estimates ^CT T̃_C^(i) of the bronchoscope camera poses

Initialization: At i = 0, state vector: ^CT T_C^(0) ⇔ x_0; observation: ^CT T_C^(0) ⇔ y_0; sample set S^k_0 = {(x^k_0, w^k_0, c^k_0)}^M_{k=1};
For each frame i, generate M samples S^k_i = {(x^k_i, w^k_i, c^k_i)}^M_{k=1} and compute the weights (similarities) w^k_i:
for i = 1 to N do
  Before SMC sampling:
  1. SIFT feature extraction and keypoint correspondences7 between frames (i − 1) and i;
  2. Epipolar geometry analysis: calculate the inter-frame motion parameters ∆^CT T_C ⇔ ∆x;
  Start SMC sampling ⇔ 3. Compute ^CT T̃_C^(i):
  (1) Construct the new sample set S^k_i at frame i from the previous sample set S^k_{i−1}:
  for k = 1 to M do
    Produce a random number r from a uniform distribution: r ∈ [0, 1];
    Compare r to c^j_{i−1} (j ∈ [1, M]) and find the smallest j satisfying r ≤ c^j_{i−1};
    The sample with index j is resampled at frame i: x^j_{i−1};
    Draw each new sample (x^k_i, w^k_i) ∼ p(x^k_i | x^k_{i−1} = x^j_{i−1}) by:
      Deterministic drift: x^j_{i−1} ⇒ x̂^k_i according to Eq. 1;
      Stochastic diffusion: x̂^k_i ⇒ x^k_i according to Eq. 2;
    Weight computation: w^k_i = MoMSE(B^(i), V(x^k_i)) according to the similarity measure MoMSE;3
    Calculate the observation density: p(y_i | x_i = x^k_i) = w^k_i (Σ_{j=1}^{M} w^j_i)^{−1};
  end
  (2) Compute the total weight: W_i = Σ^M_{k=1} w^k_i;
  (3) Weight normalization: w^k_i = W_i^{−1} w^k_i;
  (4) For each sample, compute the cumulative weight c^k_i (c^0_i = 0):
  for k = 1 to M do
    c^k_i = c^{k−1}_i + w^k_i;
  end
  (5) The current estimated state x̃_i: x̃_i = arg max_{w^k_i} {(x^k_i, w^k_i, c^k_i)};
  (6) Return: x̃_i ⇔ ^CT T̃_C^(i);
  4. Go to the next iteration;
end
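A minimal sketch of one frame of Algorithm 1 follows, with placeholder state dimension, noise level, and similarity function (in the paper the weight is the MoMSE similarity between the real frame B(i) and a virtual rendering V(x) at each sampled pose):

```python
import numpy as np

rng = np.random.default_rng(1)

def smc_step(samples, weights, drift_fn, similarity_fn, sigma=0.05):
    """One frame of the SMC tracker: resample via cumulative weights,
    apply deterministic drift and stochastic diffusion to each particle,
    reweight by image similarity, and return the maximum-weight state.

    samples: (M, d) array of particle states
    weights: (M,) normalized weights from the previous frame
    """
    M, d = samples.shape
    cum = np.cumsum(weights)                       # cumulative weights c^k
    new_samples = np.empty_like(samples)
    for k in range(M):
        r = rng.uniform(0.0, 1.0)                  # uniform random number r
        j = min(int(np.searchsorted(cum, r)), M - 1)  # smallest j with r <= c^j
        x_hat = drift_fn(samples[j])               # deterministic drift (Eq. 1)
        new_samples[k] = x_hat + rng.normal(0.0, sigma, d)  # diffusion (Eq. 2)
    new_weights = np.array([similarity_fn(x) for x in new_samples])
    new_weights = new_weights / new_weights.sum()  # normalization, as in Eq. 4
    best = new_samples[np.argmax(new_weights)]     # maximum-weight sample (Eq. 7)
    return new_samples, new_weights, best
```

In the toy usage below, the drift is a known constant motion and the similarity peaks at the true pose, so the maximum-weight particle lands near it after a single step.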
(c) Weight Calculation. After the state transition, the sample weight w^k_i must be calculated. As mentioned above, we define each sample weight as an image similarity between the real bronchoscopic image B^(i) and the virtual image V, based on the modified mean squared error (MoMSE):3

MoMSE(B^(i), V) = (1/|A^(i)|) Σ_{D∈A^(i)} (1/|D|) Σ_{(m,n)∈D} ((B^(i)_{m,n} − B̄^(i)_D) − (V_{m,n} − V̄_D))²,   (5)

where |A^(i)| is the number of selected subblocks A^(i) in image i, and B̄^(i)_D and V̄_D are the respective mean intensities of subblock D of B^(i) and V. The mean intensities of B^(i) and V may differ in an actual bronchoscopic image because of different powers of the light source. To diminish this influence, B̄^(i)_D and V̄_D are subtracted from each pixel. Hence, a weight w^k_i is calculated from the state pose parameters x^k_i by

w^k_i = MoMSE(B^(i), V(x^k_i)).   (6)
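Eq. 5 can be sketched in NumPy as follows (for illustration we take every non-overlapping square patch as a subblock, whereas the actual measure selects subblocks adaptively;3 also note that, as written, the expression is zero for identical images up to a global brightness offset, so a monotone mapping such as exp(−MoMSE) could serve as a larger-is-better weight — an assumption on our part):

```python
import numpy as np

def momse(B, V, block=8):
    """Modified mean squared error of Eq. 5: per-subblock mean-subtracted
    squared differences, averaged within each subblock and over all
    subblocks. Every full non-overlapping block x block patch is used."""
    H, W = B.shape
    errs = []
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            b = B[r:r + block, c:c + block].astype(float)
            v = V[r:r + block, c:c + block].astype(float)
            diff = (b - b.mean()) - (v - v.mean())  # subtract subblock means
            errs.append(np.mean(diff ** 2))         # 1/|D| sum over (m, n)
    return float(np.mean(errs))                     # 1/|A| average over subblocks
```

The subblock mean subtraction is what makes the measure insensitive to the differing light-source power mentioned above: adding a constant offset to one image leaves the value unchanged.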
Finally, in our case, the output of the SMC sampling, i.e., the current estimated motion state, is determined in terms of w^k_i as

x̃_i = arg max_{w^k_i} {(x^k_i, w^k_i)},   (7)

that is, the sample x̃_i with maximal weight w̃_i corresponds to the maximum similarity between the current real bronchoscopic frame and the generated virtual frame.
3. EXPERIMENTAL RESULTS

For patient validation, we applied our proposed method to real bronchoscopic (RB) video sequences and 3-D CT images acquired in accordance with a standard clinical protocol. The acquisition parameters of the CT images are: 512 × 512 pixels, 72-209 slices, 2.0-5.0 mm slice thickness, and 1.0-2.0 mm reconstruction pitch. Bronchoscopic videos were recorded onto digital videotapes in operating rooms during examinations and were transferred to the host computer at 30 frames per second. The image size of the video frames is 362 × 370 pixels. All implementations were done on a Microsoft Visual C++ platform and run on a conventional PC (CPU: Intel XEON 3.80 GHz processors, 4-GByte memory).

We compare three bronchoscope tracking methods: (1) Deguchi et al.,3 a purely intensity-based registration scheme, (2) Nagao et al.,12 a combination of Kalman filtering and intensity-based registration, and (3) our method, as described in Section 2. For the evaluation, we assess the tracking results in terms of the number of frames successfully tracked, judged by visual inspection. The tracking was judged to fail when the virtual images generated from the estimated camera parameters looked greatly different from the corresponding real bronchoscopic images. Table 1 shows the quantitative results of each scheme. According to these results, our proposed method significantly enhances the tracking performance, increasing it by 50.4% and 38.7% compared to our two previous methods, respectively. Moreover, successfully tracked RB images are shown in Fig. 2, which displays examples of real bronchoscopic images and the corresponding virtual images generated from the camera motion parameters estimated by each method.

Table 1: Quantitative comparison of the tracking results. The last three columns give the number (percentage) of successfully tracked frames.

Patient Cases | Num. of Frames | Deguchi et al.3 | Nagao et al.12 | Our method
A             | 1300           | 671 (51.6%)     | 805 (61.9%)    | 1136 (87.4%)
B             | 850            | 282 (33.2%)     | 350 (41.2%)    | 525 (61.8%)
C             | 1090           | 100 (9.2%)      | 127 (11.7%)    | 1000 (91.7%)
D             | 1406           | 496 (35.3%)     | 863 (61.4%)    | 1205 (85.7%)
E             | 379            | 100 (26.4%)     | 89 (23.5%)     | 315 (83.1%)
Total         | 5025           | 1649 (32.8%)    | 2234 (44.5%)   | 4181 (83.2%)
In contrast with the previous methods, in most cases our proposed method can automatically recover the tracking procedure after it fails, so that the tracking performance is greatly improved, as demonstrated by the results in Table 1 and Fig. 2. We attribute the performance (accuracy and robustness) improvement to the utilization of: (1) scale invariant features, which enable the registration algorithm to overcome the dependence on characteristic bronchial information, and (2) SMC sampling, which can accurately approximate the posterior probability distribution of the bronchoscope camera motion parameters. We also note that the Kalman-based method cannot deal with the case where the bronchoscope camera changes its movement direction, as this heavily violates the assumption that the bronchoscope moves with constant acceleration. This explains why it hardly works for Case E, as can be seen in Table 1 and Fig. 2. Additionally, the computational time of our method is around 3.0 seconds per frame.
(a) Patient Case D. [Selected frames 1315-1869: RB images alongside the virtual images generated by Deguchi et al.,3 Nagao et al.,12 and our method.]

Figure 2: Results of bronchoscope tracking by the different methods under patient validation. The left column shows the selected frame numbers and the corresponding patient RB images. The other columns display virtual bronchoscopic images generated from the tracking results of Deguchi et al.,3 Nagao et al.,12 and our method. Our method shows the best performance.
(b) Patient Case E. [Selected frames 6102-6479: RB images alongside the virtual images generated by Deguchi et al.,3 Nagao et al.,12 and our method.]

Figure 2: Continued.
4. DISCUSSION

Our method is designed to tackle situations where image-based algorithms usually fail to track the camera attached at the bronchoscope tip. It is well known that image registration-based tracking algorithms often fail to continuously or successfully track the bronchoscope tip because of the shortage of specific bronchial information, airway deformation (e.g., patient breathing motion or coughing), and problematic video frames caused by bubbles and collisions with the bronchial wall. Once they lose tracking, they usually cannot recover the tracking procedure. We address these limitations of our previous methods3,12 and improve upon them in the following aspects. First, since scale invariant features are independent of characteristic bronchial information, these stable features are very useful to improve the tracking performance, particularly when bronchial bifurcations or folds cannot be clearly observed. Second, by integrating SMC sampling into image-based tracking, our method can characterize the multi-modal (non-Gaussian) observation uncertainty that originates from the aforementioned limitations; in contrast, the Kalman-based method assumes the observation probability distribution to be unimodal (Gaussian) in the state variable prediction.12 Moreover, a key advantage of the developed approach is its capability to recover from a failed tracking procedure by itself. Since SMC sampling methods represent the observation density by a multi-modal (non-Gaussian) probability distribution, approximate the posterior densities of the state parameters with a set of random samples, and sequentially predict the state vector on the basis of factored (importance) sampling, they can maintain potentially important modes that are either confirmed or discarded by subsequent observations.
This contributes significantly to avoiding tracking failure and to automatically recovering from tracking loss, even in the case of ambiguous bronchoscopic video frames or airway deformation. Hence, our method shows the best tracking performance in Table 1 and Fig. 2, compared to the previous methods. However, in our experiments, our proposed method still failed to correctly register all real bronchoscopic images to virtual images when continuously tracking the bronchoscope, for the following reasons: (1) problematic bronchoscopic video images: although our method can mostly deal with such situations, the resulting tracking uncertainties remain challenging; (2) inconsistency between the CT and physical spaces: it is sometimes difficult to find proper virtual images that correspond to real bronchoscopic images due to CT device imperfections or limited resolution; and (3) airway deformation: we currently do not explicitly address respiratory motion in our method, and a respiratory motion model needs to be constructed to compensate for it. Additionally, the average runtime of our proposed method per frame (3.0 seconds) is much higher than that of the previous methods (0.9 and 0.8 seconds). This is because each random sample must compute its weight from the similarity between the real and virtual images, and SIFT features must be detected for each frame (around 1.0 second per frame); both steps are time-consuming.
5. CONCLUSIONS

We presented an improved bronchoscope tracking method using scale invariant features and sequential Monte Carlo sampling. We use SIFT features to overcome the dependence of image-based methods on characteristic bronchial information, and we introduced sequential Monte Carlo sampling to approximate the posterior probability distributions of the bronchoscope camera motion parameters, which provides the ability to recover from a failed tracking procedure automatically. According to the evaluation on patient datasets, our proposed method improves the performance of image-based bronchoscope tracking by at least 38.7% without using an additional position sensor, compared to state-of-the-art image-based approaches. We conclude that our method can not only overcome the constraint of the availability of specific bronchial information but also tackle tracking ambiguities occurring in bronchoscopic navigation and recover from tracking failures. Our future work includes further improvement of the accuracy and robustness of our method and reduction of its computational complexity.
ACKNOWLEDGMENTS

This work was partly supported by the Hori Foundation, the Grant-in-Aid for Science Research funded by JSPS, Cancer Research funds by NCC, the JSPS postdoctoral fellowship program for foreign researchers, "Computational anatomy for computer-aided diagnosis and therapy: frontiers of medical image sciences," funded by the Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan, and the "Ci-no-kyoten" project by Aichi Prefecture.
REFERENCES
[1] Deligianni, F., Chung, A. J., and Yang, G. Z., "Nonrigid 2-D/3-D registration for patient specific bronchoscopy simulation with statistical shape modeling: Phantom validation," IEEE Transactions on Medical Imaging 25(11), 1462-1471 (2006).
[2] Helferty, J., Sherbondy, A., Kiraly, A., and Higgins, W., "Computer-based system for the virtual-endoscopic guidance of bronchoscopy," Computer Vision and Image Understanding 108, 171-187 (2007).
[3] Deguchi, D., Mori, K., Feuerstein, M., Kitasaka, T., Maurer Jr., C. R., Suenaga, Y., Takabatake, H., Mori, M., and Natori, H., "Selective image similarity measure for bronchoscope tracking based on image registration," Medical Image Analysis 13(4), 621-633 (2009).
[4] Schwarz, Y., Greif, J., Becker, H. D., Ernst, A., and Mehta, A., "Real-time electromagnetic navigation bronchoscopy to peripheral lung lesions using overlaid CT images: the first human study," Chest 129(4), 988-994 (2006).
[5] Soper, T. D., Haynor, D. R., Glenny, R. W., and Seibel, E. J., "In vivo validation of a hybrid tracking system for navigation of an ultrathin bronchoscope within peripheral airways," IEEE Transactions on Biomedical Engineering 57(3), 736-745 (2010).
[6] Luo, X., Feuerstein, M., Reichl, T., Kitasaka, T., and Mori, K., "An application driven comparison of several feature extraction algorithms in bronchoscope tracking during navigated bronchoscopy," in [5th International Workshop on Medical Imaging and Augmented Reality], (2010).
[7] Lowe, D. G., "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision 60(2), 91-110 (2004).
[8] Doucet, A., de Freitas, N., and Gordon, N., [Sequential Monte Carlo Methods in Practice], Springer-Verlag, Berlin, Heidelberg, New York (2001).
[9] Doucet, A., Godsill, S., and Andrieu, C., "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing 10(3), 197-208 (2000).
[10] Isard, M. and Blake, A., "Condensation - conditional density propagation for visual tracking," International Journal of Computer Vision 29(1), 5-28 (1998).
[11] Pupilli, M., Particle Filtering for Real-Time Camera Localisation, PhD thesis, University of Bristol, UK (2006).
[12] Nagao, J., Mori, K., Enjouji, T., Deguchi, D., Kitasaka, T., Suenaga, Y., Hasegawa, J., Toriwaki, J., Takabatake, H., and Natori, H., "Fast and accurate bronchoscope tracking using image registration and motion prediction," in [Proceedings of MICCAI], 3217, 551-558 (2004).