Level set motion assisted non-rigid 3D image registration

Deshan Yang*, Joseph O. Deasy, Daniel A. Low, and Issam El Naqa
Department of Radiation Oncology, School of Medicine, Washington University in St. Louis, 4921 Parkview Place, St. Louis, MO USA 63110

ABSTRACT

Medical imaging applications of rigid and non-rigid elastic deformable image registration are undergoing wide-scale development. Our approach determines image deformation maps through a hierarchical process, from global to local scales. Vemuri (2000) reported a registration method, based on level set evolution theory, that morphs an image along the motion gradient until it deforms to the reference image. We applied this level set motion method as a basis to iteratively compute incremental motion fields, and we then approximated each field using higher-level affine and non-rigid motion models. In this way, we sequentially combine global affine motion, local affine motion and local non-rigid motion. Our method is fully automated, computationally efficient, and able to detect large deformations when used together with multi-grid approaches, potentially yielding greater registration accuracy.

Keywords: non-rigid image registration, level set, optical flow, deformable image registration
1. INTRODUCTION

Deformable image registration is one of the most important tasks in medical imaging. For instance, in radiation therapy treatment of lung cancer, it is important to align images acquired during free breathing to a chosen reference breathing phase for accurate dose delivery [1]. Many deformable image registration methods have been proposed in the literature, for example, spline-function-based methods [2, 3], optical flow methods [4-6], elastic methods [7], fluid models [8], etc. Spline-based methods are parametric: the registration process determines optimal parameters of spline basis functions, which are then used to approximate the image motion field. Other methods are nonparametric techniques, which find the motion field directly by solving partial differential equations (PDEs) derived from physical models.

Medical imaging applications of rigid and non-rigid elastic deformable image registration are undergoing wide-scale development. Combining the advantages of rigid and deformable registration methods, and sequentially determining a hierarchical image deformation from the global to the local scale, may provide improved registration results. In this study, we approach deformable image registration with such a hierarchical method. We sequentially computed global affine motion, local affine motion and local deformable motion. We used Vemuri's level set motion method [9, 10], introduced in section 2.2, as our motion estimator. We computed an initial deformation field and then used higher-level motion models to approximate the motion field. We also applied our method in a multi-grid approach [11], such that image motion was computed at a coarse image resolution first, then step by step at finer image resolutions. With the multi-grid approach, we were able to detect image motions of both large and small magnitudes while improving computational efficiency.
We tested our method with synthesized 2D CT images and clinical 3D CT images. The data show that this approach is promising.
2. METHOD

2.1. Overall
Our method is a multi-grid, multi-step procedure that incrementally computes the motion field through multiple estimation-approximation loops. The key idea is to use Vemuri's level set motion method to compute an initial motion field and then compute a second motion field that approximates the initial one according to a chosen motion model. We use a few motion models in sequence, including global affine motion, local affine motion and spline elastic motion. The estimation-approximation procedure is repeated over multiple image resolutions, multiple motion models at each image resolution, and multiple iterations for each motion model.

The goal of our method is to find the motion vector field that deforms the moving image to the reference image. To better register images with larger motion magnitudes, we apply the multi-grid approach [11]. We down-sample the images by 2, by 4 and by 8 with a Laplacian pyramid filter [12] and apply our method sequentially at each image resolution, from coarse to fine. In this way, large image motions at the original image resolution become small with respect to the pixels of the down-sampled images. Computation of the motion field at a finer resolution uses the motion field computed at coarser resolutions as the initial condition. At each image resolution, we sequentially and incrementally compute image motions using pre-selected motion models. The selection of motion models for each image resolution stage is configurable according to the nature of the image motion in a given application. Fig. 1 shows a general configuration: global affine motion is usually computed only at the coarsest image resolution, and local free-form deformation only at finer resolutions. If we determine a priori that there is no global motion or affine motion between the images, a configuration similar to the one shown in Fig. 2 can be used; in that case we compute only local non-rigid spline and local free-form deformations.
Fig. 1: General configuration of image resolution steps and motion model selection. Global affine motion is only for the lowest image resolution step. Local free deformation is only for later image resolution steps. Motion is computed sequentially and incrementally.
Fig. 2: Configuration for situations when there is no global or local affine motion.
When a model is applied at a specific image resolution, the image motion is computed from the moving image, deformed according to the motion field computed in all previous steps, with respect to the reference image. The result is obtained by adding this incremental motion to the motion field from all previous steps. Applying a specific motion model at a given image resolution is an iterative process, illustrated in Fig. 3, where n is the iteration count, I1 is the moving image and I2 is the reference image. The iteration loop computes the motion field from I1,0, which is I1 deformed by Vpre (the motion field computed by all previous steps), to I2. In each iteration, we use Vemuri's level set motion method, explained in the next section, to compute Vn,init (the initial motion field of the current iteration), and then compute Vn,inc (the incremental motion field of the iteration) by processing Vn,init through a procedure referred to as F. The procedure F, explained in the following sections, differs between motion models. F computes Vn,inc from Vn,init; Vn,inc is the least mean square (LMS) approximation, or some other optimal approximation, of Vn,init under the given motion model. The motion field V, which is the incremental motion added to Vpre, is then accumulated as Vn+1 = Vn + Vn,inc, and the moving image is deformed according to the new motion field Vpre + Vn+1. If the computation converges, in the sense that Vn,inc falls below a pre-defined threshold, the computation is finished for the current motion model at the current image resolution. Otherwise, we recompute Vn,init and Vn,inc for the next iteration using the latest deformed I1,n, until the convergence criteria discussed in section 2.5 are met.
Fig. 3: The diagram shows that the main procedure is the loop of motion estimation – approximation – concatenation.
This entire procedure incrementally refines the motion field V until I1 almost deforms to I2 and further changes in V are insignificant. In any single iteration, the adjustment of V is fairly small and may not be the best possible adjustment; the procedure continues to compute incremental adjustments through multiple iterations and multiple motion models until convergence is reached.

2.2. Level set motion estimation method
The level set motion method proposed by Vemuri [9, 10] is a deformable image registration method by itself. It is based on level set evolution theory [13]. With this method, an image iteratively morphs along its gradient direction until it deforms to the reference image. The motion field change is calculated according to equation (1) and the motion field V is updated according to equation (2).
dV/dt = (I2 − I1(V)) ∇I1(V) / |∇I1(V)|    (1)

Vn+1 = Vn + (dV/dt) Δt,   V0 = 0    (2)
where I1 is the moving image, I2 is the reference image, V is the motion field, I1(V) is I1 deformed by V, t is an analog of time, dV is the incremental motion field, and Δt is the selected time step. This method is computationally efficient and straightforward to extend from 2D to 3D. However, its results often lack smoothness, and the algorithm can easily become trapped in local minima. To avoid such problems, Vemuri suggested applying the method with a multi-grid approach. In this study, we used Vemuri's method as the basic motion estimator of our iterations. We used equation (1) to compute Vn,init in Fig. 3 as
Vn,init = (I2(X) − I1,n(X)) ∇I1,n(X) / |∇I1,n(X)|    (3)
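A single update of equation (3) can be sketched in a few lines. The snippet below is an illustrative NumPy rendering (our implementation was in MATLAB); the small `eps` term guarding flat-gradient regions is an assumption added for numerical safety, not part of the published formulation.

```python
import numpy as np

def level_set_step(i1, i2, eps=1e-6):
    """One motion estimate per equation (3): the intensity difference
    (I2 - I1,n) times the unit gradient of the deformed moving image.
    Returns the per-pixel (vy, vx) components of V_n,init."""
    gy, gx = np.gradient(i1)
    mag = np.sqrt(gy ** 2 + gx ** 2) + eps  # eps guards flat regions
    diff = i2 - i1
    return diff * gy / mag, diff * gx / mag
```

For an image that varies only along x, the estimate points purely along x, illustrating that only the gradient-direction component of the motion is recovered (the aperture problem discussed below).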
Note that Vemuri's method is very similar to optical flow methods, although there are differences [9]. According to the optical flow image intensity constraint, Vn,init would instead be computed using

Vn,init = (I2(X) − I1,n(X)) ∇I1,n(X) / |∇I1,n(X)|²    (4)
There are advantages and disadvantages to both Vemuri's level set motion method and the optical flow method. Neither equation can accurately compute the full image motion field; they recover only the motion component projected onto the image gradient direction. This is known as the aperture problem [14]. More information or constraints are needed in order to compute the original motion field. In more general registration situations, when the reference image is not exactly a deformed version of the moving image (for example, when noise is present), the computation of Vn,init is even less accurate, and rather unreliable and noisy. Equation 4 is more stable in image regions with steeper gradients and less stable in regions with shallower gradients, whereas equation 3 is more stable overall for the entire image. Because of this behavior, we chose the level set motion method over the optical flow method for initial motion estimation in this work.

2.3. Approximation of the motion field with high-level motion models
The challenges of recovering the original motion field from its projected version have been well studied in the optical flow literature. The Horn and Schunck optical flow method [4] imposes a global smoothness constraint by minimizing the system energy Etotal:

Etotal = ∫D (∇I1 · V + It)² dv + α² ∫D (|∇u|² + |∇v|² + |∇w|²) dv    (5)

where u, v and w are the motion vector components in 3D, α is a regularization parameter, and D is the entire image domain. They gave an iterative solution using numerical approximations of the higher-order derivatives. In the Lucas-Kanade (LK) optical flow method [5], the motion vector for each pixel is computed as the weighted average of the motion in a small neighborhood, averaged in the LMS sense by minimizing the neighborhood energy:

Eneighborhood = Σ_{X∈Ω} w²(X) [∇I(X) · V + It(X)]²    (6)
where w(X) is the weighting parameter and Ω is the neighborhood domain. Our solution to this problem is different. Instead of computing the original motion field directly, we approach it step by step through multiple iterations of estimation-approximation-adjustment. In this way, we can compute the motion field hierarchically, from the global scale to the local scale and from rigid motion to non-rigid motion.
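The estimation-approximation-adjustment loop of section 2.1 and Fig. 3 can be sketched schematically as follows. This is an illustrative NumPy skeleton, not our MATLAB implementation: `estimate`, `approximate` and `deform` are hypothetical callables standing in for equation (3), the procedure F and the image warp, and the simple addition stands in for the full field concatenation of section 2.4.

```python
import numpy as np

def register(i1, i2, estimate, approximate, deform, max_iter=100, tol=1e-3):
    """Schematic estimation-approximation loop of Fig. 3.
    estimate(i1_n, i2) -> V_n,init for the current deformed image
    approximate(v)     -> procedure F: fit V_n,init with a motion model
    deform(i1, v)      -> warp the moving image by the accumulated field."""
    v = np.zeros(i1.shape + (i1.ndim,))  # accumulated motion field
    for _ in range(max_iter):
        i1_n = deform(i1, v)             # latest deformed moving image
        v_init = estimate(i1_n, i2)      # initial motion estimate
        v_inc = approximate(v_init)      # model-constrained increment
        v = v + v_inc                    # (concatenation in the full method)
        if np.abs(v_inc).mean() < tol:   # convergence test (section 2.5)
            break
    return v
```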
2.3.1. Affine motion
Our affine motion approximation method is similar to the LK optical flow method. The LK method uses LMS minimization over small neighborhood regions to find the weighted average shift of the entire neighborhood, which it uses as the motion vector of the neighborhood center. The neighborhood in the LK method is usually fairly small, e.g., 5×5 in 2D or 5×5×5 in 3D. In our affine motion approximation method, we use the LMS method to find the affine transformation matrix that best approximates the motion field over larger regions: the entire image for the global affine approximation, and large local regions for the local affine approximation. An affine transformation can be written as
X′ = GX    (7)
where G is the affine transformation matrix, X is the pixel position before motion, and X′ is the corresponding position after motion. Given Vn,init, we seek the G that minimizes E:

E = Σi (X′i − GXi)² = Σi (Xi + Vn,init(i) − GXi)²    (8)
where the sum runs over the n pixels of the image region in which the affine approximation is applied, and Vn,init(i) is the value of Vn,init for pixel i. For the 2D situation, the affine transformation can be written as equations (9)-(12):

    | g11  g12  g13 |
G = | g21  g22  g23 |    (9)
    |  0    0    1  |

     | x′ |        | g11  g12  g13 | | x |
X′ = | y′ | = GX = | g21  g22  g23 | | y |    (10)
     | 1  |        |  0    0    1  | | 1 |

x′ = g11 x + g12 y + g13    (11)

y′ = g21 x + g22 y + g23    (12)
Finding the best G to minimize E in equation 8 is the well-known multiple-regression problem, and the 2D solution is

| g13 |       | Σ_Ω (xi + Δxi)    |
| g11 | = A⁻¹ | Σ_Ω xi (xi + Δxi) |    (13)
| g12 |       | Σ_Ω yi (xi + Δxi) |

| g23 |       | Σ_Ω (yi + Δyi)    |
| g21 | = A⁻¹ | Σ_Ω xi (yi + Δyi) |    (14)
| g22 |       | Σ_Ω yi (yi + Δyi) |

    |  N         Σ_Ω xi      Σ_Ω yi    |
A = |  Σ_Ω xi    Σ_Ω xi²     Σ_Ω xi yi |    (15)
    |  Σ_Ω yi    Σ_Ω xi yi   Σ_Ω yi²   |
where Δxi and Δyi are the x and y components of Vn,init for pixel i, xi and yi are the position of pixel i, and N is the total number of pixels in the image region Ω. After the optimal transformation matrix G is computed, the approximation motion field Vn,inc can be computed as

Vn,inc = GX − X = (G − I)X    (16)

The solution for the 3D situation is similar. The matrix A in equation (15) must not be singular in order for equations (13) and (14) to have valid results. This is usually not a problem, because the region Ω is usually much larger than the neighborhood used in the LK optical flow method. We compute the global affine approximation only at the coarsest image resolution. The resulting global affine transformation matrix is usually accurate because the information of the entire Vn,init motion field is used in that step. There is no need to compute the global affine motion again after the first step; later steps only compute incremental motion additional to the global affine motion. To compute the local affine approximation, we split the motion field Vn,init into blocks and computed the optimal affine transformation matrix for each block. Vn,inc, computed for each block using equation (16), is assembled to form the entire motion field. Because each block has its own affine transformation matrix G, the assembled Vn,inc has discontinuities at the block boundaries. We used a Gaussian low-pass filter, with σ = 2, to smooth the motion field in order to reduce these discontinuities. We used a block size of 9×9 for 2D, or 9×9×9 for 3D.
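The least-squares fit behind equations (13)-(15) can be sketched as follows. Rather than forming A and its inverse explicitly, this illustrative NumPy version solves the same normal equations with a stacked design matrix; the function names and the (row, column) conventions are assumptions, not our MATLAB code.

```python
import numpy as np

def fit_affine_2d(vx, vy):
    """Fit x' = g11*x + g12*y + g13 and y' = g21*x + g22*y + g23 to a
    dense 2D motion field (equation 8), via least squares over [x, y, 1]."""
    h, w = vx.shape
    ys, xs = np.mgrid[0:h, 0:w]
    D = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    gx, *_ = np.linalg.lstsq(D, (xs + vx).ravel(), rcond=None)
    gy, *_ = np.linalg.lstsq(D, (ys + vy).ravel(), rcond=None)
    return np.array([gx, gy, [0.0, 0.0, 1.0]])  # the matrix G of eq. (9)

def affine_increment(G, shape):
    """Equation (16): V_inc = (G - I) X at every pixel; returns (dx, dy)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx = (G[0, 0] - 1) * xs + G[0, 1] * ys + G[0, 2]
    dy = G[1, 0] * xs + (G[1, 1] - 1) * ys + G[1, 2]
    return dx, dy
```

For the local affine approximation, the same fit would simply be run block by block over `Vn,init`.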
2.3.2. Non-rigid spline approximation and low-pass smoothing
There also exists elastic motion (object deformation) in the image. Such elastic motion cannot be well approximated by either global or local affine motion. For each image resolution, after we have computed the best global and local affine motion and deformed the moving image toward the reference image according to the motion field from previous steps, there is still elastic motion to be computed between the deformed moving image and the reference image. We used two methods to compute the elastic motion. The first method uses spline interpolation to approximate the motion field, as follows:
- From Vn,init, compute the motion vectors at isolated mesh nodes. In our implementation, the mesh nodes are a fixed distance of 7 pixels apart. The motion vector at each mesh node is computed as the average motion in the node's neighborhood; the neighborhood size equals the distance between nodes, with the node at the center of its neighborhood.
- Compute Vn,inc for the entire field by spline interpolation of the mesh-node motion vectors. The result Vn,inc is then in fact a spline-smoothed version of Vn,init.

The second method applies a Gaussian low-pass filter directly to the motion field Vn,init:

Vn,inc = Flowpass{Vn,init}    (17)

2.4. Incremental motion adjustment
For most steps, we first compute the incremental motion Vinc, over a number of iterations, from the deformed moving image (deformed according to the current motion field Vpre from all previous steps) to the reference image, and then add the new incremental motion Vinc from the latest iteration to the motion field Vpre. However, the result V is not exactly equal to Vpre + Vinc. The new V must be calculated as

Vnew(X) = Vpre(X + Vinc(X)) + Vinc(X)    (18)

The adjustment of the incremental motion field is therefore not a direct addition but a motion vector field concatenation.

2.5. Iteration stability and convergence control
To reach stable results during the iteration, Vemuri suggested controlling Δt in equation (2) to ensure that |Vn,inc| ≤ 1 for each iteration. Vemuri computed Δt and the scaled increment V′n,inc using

Δt = 1 / max(|Vn,inc|)    (19)

V′n,inc = Δt · Vn,inc    (20)

We use a slightly different approach. We control max(|Vn,inc|) using

V′n,inc = Δt · Vn,inc · f    (21)

to ensure stability as well as fast convergence. Still using Δt from equation 19, we introduce an additional artificial factor f in equation 21. We set f = 1 for the first iteration, for which equation 21 reduces to equation 20. During the iteration, we reduce f by a factor of 0.6 whenever oscillation is detected, so that max(|Vn,inc|) decreases gradually. Oscillation is defined as the incremental motion vector Vn,inc changing direction between iterations for more than 50% of the total number of pixels. Both f and oscillation can be defined globally or locally in an image region of a specified size. We stop the iteration when the convergence criteria are met, for example, when mean(|Vn,inc|) < 10⁻³ pixel distance.
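The step-size control of equations (19)-(21) can be sketched as follows, in an illustrative NumPy form. Our implementation was in MATLAB and could apply f and the oscillation test globally or locally as described above; here the test is global and component-wise, which is a simplification.

```python
import numpy as np

def controlled_step(v_inc, v_prev_inc, f, shrink=0.6, flip_frac=0.5):
    """Scale V_n,inc per equations (19) and (21): dt = 1/max|V_n,inc|,
    with f reduced by `shrink` whenever more than `flip_frac` of the
    entries reversed direction since the previous iteration."""
    if v_prev_inc is not None:
        flipped = np.sign(v_inc) * np.sign(v_prev_inc) < 0
        if flipped.mean() > flip_frac:
            f *= shrink                      # damp oscillation
    dt = 1.0 / max(np.abs(v_inc).max(), 1e-12)  # equation (19)
    return dt * v_inc * f, f                 # equation (21)
```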
3. IMPLEMENTATION AND PERFORMANCE

We used MATLAB version 7.0.4 as our computing software. We also used the GUI (graphical user interface) tools in MATLAB to develop visualization tools to monitor the progress of the multiple-step image registration procedure and to visualize and analyze the results. The computer platform was a multiprocessor workstation running the RedHat 4.0 Linux operating system and X11, with eight 1.8 GHz dual-core AMD Opteron 865 processors and 16 GB RAM. Note that MATLAB ran only on a single thread, not on multiple threads; however, this Linux workstation had more RAM than regular Windows-based PCs, which enabled us to register large 3D images. It required 60 to 80 minutes to register two 512×512×240 voxel 3D-CT images. In our implementation, we used a 5-element differential mask (-1/12, 8/12, 0, -8/12, 1/12) to compute the image gradient in any direction. We used linear interpolation to compute the moved image from the moving image and the motion field.
In the multi-grid approach, we used spline interpolation to up-scale the motion field from one multi-grid stage to the next, in which the image resolution was 2 times greater. We used linear interpolation to compute the concatenated motion field from two consecutive motion fields using equation 18. For the image gradient in equation (3), instead of computing ∇I1,n directly, we smoothed I1,n and then computed ∇(Gσ=2[I1,n]), where G is the Gaussian low-pass filter with σ = 2.
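The motion field concatenation of equation (18), with linear interpolation of Vpre at the displaced positions, can be sketched as follows. This is an illustrative NumPy version; the (dy, dx) channel ordering and the clipping of displaced positions to the image border are assumptions.

```python
import numpy as np

def concat_fields(v_pre, v_inc):
    """Equation (18): V_new(X) = V_pre(X + V_inc(X)) + V_inc(X).
    v_pre, v_inc: arrays of shape (h, w, 2) holding (dy, dx) per pixel.
    V_pre is sampled at the displaced positions by bilinear interpolation."""
    h, w, _ = v_pre.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    py = np.clip(ys + v_inc[..., 0], 0, h - 1)   # displaced row positions
    px = np.clip(xs + v_inc[..., 1], 0, w - 1)   # displaced column positions
    y0 = np.floor(py).astype(int); x0 = np.floor(px).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (py - y0)[..., None]; wx = (px - x0)[..., None]
    sampled = ((1 - wy) * (1 - wx) * v_pre[y0, x0]
               + (1 - wy) * wx * v_pre[y0, x1]
               + wy * (1 - wx) * v_pre[y1, x0]
               + wy * wx * v_pre[y1, x1])        # bilinear sample of V_pre
    return sampled + v_inc
```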
4. RESULTS

4.1. Results with 2D synthesized images
We applied our method to 2D CT images for validation purposes. The results are shown in Fig. 4. The reference image (b) was synthesized from the moving image (a) by rotating it 15° clockwise and then artificially creating some local elastic deformation. We sequentially computed the global affine motion, local affine motion and local free-form deformation at the original image resolution. Fig. 4 shows the original images and the deformed moving image at selected steps. The difference image in Fig. 4 (f) shows that the deformed moving image matches the reference image very well. These results show that our multiple-step estimation-approximation-adjustment procedure recovers the global motion and local non-rigid motion for this simple 2D case.

4.2. Results with 3D-CT lung images
We also tested our procedure with clinical 3D-CT images acquired from patients for radiation therapy treatment. The 3D-CT images were acquired while the patients were breathing freely. We used two image datasets. For dataset 1, which contains mainly the lungs, we registered the 3D-CT images from the EE (end of exhalation) phase to the EI (end of inhalation) phase; the results are shown in Fig. 5. For dataset 2, which mainly contains abdominal organs, we registered the 3D-CT images from the EI phase to the EE phase; the results are shown in Fig. 6. Fig. 5 (5) and (6) are the difference images before registration. They clearly show the magnitude of image spatial motion as black and white, and one can see that the magnitude of the overlaid motion vector field matches the magnitude of the image spatial motion very well. Fig. 6 (5) and (6) are similar, but for image dataset 2. We calculated several image statistics before and after registration for the two images of each dataset. The results are listed in Table 1. The following metrics were used for analysis: MI is the mutual information, NMI is the normalized mutual information, CC is the cross correlation, and MSE is the mean square error between the two images.

Table 1: Image statistics calculated before and after registration
Images                           MI      NMI     CC      MSE
Dataset 1 before registration    1.363   1.328   0.981   3.69%
Dataset 1 after registration     1.765   1.472   0.998   1.05%
Dataset 2 before registration    1.324   1.299   0.980   2.76%
Dataset 2 after registration     1.603   1.398   0.995   1.04%
5. DISCUSSION

Our proposed method uses a multiple-step procedure combining the level set motion method with different approximation methods. In this sense, it differs from previously published optical flow methods and from Vemuri's original level set motion method. Unlike most optical flow methods, which solve for the motion field directly or iteratively, our method uses an iterative estimation-approximation-adjustment process to approach the best solution for the motion field. During this process, the adjustment made by a single iteration may not be the most accurate adjustment for that iteration, but the algorithm continues to improve the result until the moving image completely deforms to the reference image, or until any further adjustment is too small to be significant. This behavior is similar to that of Vemuri's original level set motion method. The local affine approximation is similar to the LK optical flow method. In the LK method, the motion vector for every pixel is computed as the LMS average of the motion (pixel shifting only) in a small neighborhood of the pixel. Our local affine motion method uses larger neighborhood blocks, and we compute the affine motion (shifting, scaling and rotation) for the entire neighborhood block.
Our method computes the motion field hierarchically, from the global scale, to the local scale, and then to the individual pixel scale. Such an approach works well for medical images, in which the image motion consists of both global and local motions. It also helps to solve some of the main problems encountered with traditional image-gradient-based registration methods. For instance, gradient-based optical flow methods usually do not work well in low-contrast image regions; the global and local affine approximation steps help to recover motion in low-contrast regions by using information from high-contrast regions. Our hierarchical approach also ensures global smoothness of the entire motion field, something that is often a problem for methods that depend only on local motion, such as the LK optical flow method and Vemuri's level set motion method.
6. CONCLUSION

In this study, we presented a non-rigid image registration method. The method is an iterative estimation-approximation procedure carried out in multiple steps with a multi-grid approach. It uses the level set motion method as the basic motion estimator to compute the motion field, and then approximates the initial motion field with high-level motion models, namely global and local affine motion, local elastic motion and smooth local free-form deformation. The different motion models account for different types of motion, sequentially from the global scale, to the local regional scale, and finally to the individual pixel scale. We expect such a hierarchical procedure to compute the image motion of deforming human anatomy more accurately. These preliminary results show that our method provides good registration of volumetric CT images.
REFERENCES

[1] D. A. Low, M. Nystrom, E. Kalinin, P. Parikh, J. F. Dempsey, J. D. Bradley, S. Mutic, S. H. Wahab, T. Islam, G. Christensen, D. G. Politte, and B. R. Whiting, "A method for the reconstruction of four-dimensional synchronized CT scans acquired during free breathing," Med Phys, vol. 30, pp. 1254-63, 2003.
[2] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J. Hawkes, "Nonrigid registration using free-form deformations: application to breast MR images," IEEE Trans Med Imaging, vol. 18, pp. 712-721, 1999.
[3] J. Feldmar, G. Malandain, J. Declerck, and N. Ayache, "Extension of the ICP algorithm to non-rigid intensity-based registration of 3D volumes," 1996.
[4] B. K. P. Horn and B. G. Schunck, "Determining Optical Flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[5] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proceedings of the 7th International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[6] C. Tomasi and T. Kanade, "Detection and Tracking of Point Features," Carnegie Mellon University, 1991.
[7] R. Bajcsy and S. Kovacic, "Multiresolution elastic matching," Comput. Vision Graph. Image Process., vol. 46, pp. 1-21, 1989.
[8] G. E. Christensen, R. D. Rabbitt, and M. I. Miller, "Deformable templates using large deformation kinematics," IEEE Transactions on Image Processing, vol. 5, pp. 1435-1447, 1996.
[9] B. C. Vemuri, J. Ye, Y. Chen, and C. M. Leonard, "A level-set based approach to image registration," presented at Mathematical Methods in Biomedical Image Analysis, 2000. Proceedings. IEEE Workshop on, Hilton Head Island, SC, USA, 2000.
[10] B. C. Vemuri, J. Ye, Y. Chen, and C. M. Leonard, "Image registration via level-set motion: applications to atlas-based segmentation," Med Image Anal, vol. 7, pp. 1-20, 2003.
[11] F. C. Glazer, "Hierarchical motion detection," University of Massachusetts, 1987.
[12] P. J. Burt and E. H. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communications, vol. COM-31, pp. 532-540, 1983.
[13] S. Osher and R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, vol. 153, Springer, 2003.
[14] S. Ullman, The Interpretation of Visual Motion. Cambridge, MA: MIT Press, 1979.
Fig. 4: Results of 2D CT brain images. (a) The moving image. (b) The reference image. (c) The moving image deformed after global affine motion step. (d) After local affine motion step. (e) The final deformed moving image. (f) The difference image between (e) and (b).
Fig. 5: Results of registering the lung 3D-CT images. (1) A coronal slice of the moving image. (2) The reference image. (3) The checkerboard image before registration. The brighter parts are from the moving image; the darker parts are from the reference image. (4) The checkerboard image after registration. The brighter parts are from the deformed moving image. (5) The coronal difference image before registration, overlaid with the deformable vector field. (6) The sagittal difference image before registration, overlaid with the deformable vector field. (7) The coronal difference image after registration. (8) The sagittal difference image after registration.
Fig. 6: Results of registering the abdominal 3D-CT images. (1) A coronal slice of the moving image. (2) The reference image. (3) The checkerboard image before registration. The brighter parts are from the moving image; the darker parts are from the reference image. (4) The checkerboard image after registration. The brighter parts are from the deformed moving image. (5) The coronal difference image before registration, overlaid with the deformable vector field. (6) The sagittal difference image before registration, overlaid with the deformable vector field. (7) The coronal difference image after registration. (8) The sagittal difference image after registration.