Large Deformation Inverse Consistent Elastic Image Registration

Jianchun He and Gary E. Christensen
Department of Electrical and Computer Engineering
The University of Iowa, Iowa City, IA 52242
[email protected] and [email protected]

Abstract. This paper presents a new image registration algorithm that accommodates locally large nonlinear deformations. The algorithm concurrently estimates the forward and reverse transformations between a pair of images while minimizing the inverse consistency error between the transformations. It assumes that the two images to be registered contain topologically similar objects and were collected using the same imaging modality. The large deformation transformation from one image to the other is accommodated by concatenating a sequence of small deformation transformations. Each incremental transformation is regularized using a linear elastic continuum mechanical model. Results of ten 2D and twelve 3D MR image registration experiments are presented that tested the algorithm’s performance on real brain shapes. For these experiments, the inverse consistency error was reduced on average by 50 times in 2D and 30 times in 3D compared to the viscous fluid registration algorithm.

1 Introduction

Magnetic resonance images (MRI) of the head demonstrate that the macroscopic shape of the brain is complex and varies widely across normal individuals. Low-dimensional, small-deformation, and linear image registration algorithms [1–7] can only determine correspondences between brain images at a coarse global level. High-dimensional, large-deformation image registration algorithms [8–10] are needed to describe the complex shape differences between individuals at the local level. In this paper we present a new large-deformation, inverse-consistent, elastic image registration (LDCEIR) algorithm. This method accommodates large nonlinear deformations by concatenating a sequence of small incremental transformations. Inverse consistency between the forward and reverse transformations is achieved by jointly estimating the incremental transformations while enforcing inverse consistency constraints on each incremental transformation. The transformation estimation is regularized using a linear differential operator that penalizes second-order derivatives in both the spatial and temporal dimensions. This regularization is most similar to a thin-plate-spline or linear-elastic regularization, with the difference that it is applied to both the spatial and temporal dimensions instead of just the spatial dimension.

Previous work on large deformation image registration includes the viscous fluid intensity registration (VFIR) algorithm [8, 11, 12], the viscous fluid landmark registration (VFLR) algorithm [9], and the hyperelastic intensity registration (HEIR) algorithm [10]. The viscous fluid intensity registration algorithm models the template image as a viscous fluid and each point in the image domain as a mass particle that moves in space. The method solves a modified Navier-Stokes equation for the velocities of the mass particles and finds the displacement field by integrating the velocity field over time. The LDCEIR method differs from the VFIR and VFLR algorithms in that it regularizes the displacement field of the transformation instead of the velocity field of the transformation. Thus, the LDCEIR model penalizes large nonlinear deformations, similar to the HEIR algorithm, while the VFIR and VFLR penalize the rate at which one image is deformed into another. These differences make the LDCEIR better suited for modeling anatomical structures that deform elastically, and the VFIR and VFLR better suited for modeling fluids in the anatomy when registering images of the same anatomy collected over time. The LDCEIR algorithm is similar to the VFLR algorithm in that both are solved in both space and time, but it differs from the VFIR algorithm, which solves the Navier-Stokes equation using a greedy strategy and only regularizes in the spatial domain. The LDCEIR algorithm also differs from the VFIR algorithm in that it is bidirectional (i.e., it estimates both the forward and reverse transformations) while the VFIR algorithm is unidirectional (it only estimates the forward transformation). This difference allows the LDCEIR algorithm to estimate transformations with much less inverse consistency error than is possible with the VFIR algorithm, as demonstrated in this paper.
The LDCEIR method generalizes the small-deformation, inverse-consistent, linear-elastic intensity registration (SDCEIR) algorithm [6] by including intermediate transformations so that it can accommodate large deformations. The LDCEIR method simplifies to the SDCEIR algorithm for the case of no intermediate transformations. The rest of the paper is organized as follows. The LDCEIR algorithm is described in Section 2. Registration results are presented in Section 3 that test the LDCEIR algorithm on 2D and 3D MRI brain images and these results are compared to the viscous fluid intensity registration algorithm. Finally, Section 4 summarizes and gives conclusions of this work.

2 Methods

2.1 Notation

This section describes the notation and assumptions used throughout the paper. For convenience, it is assumed that an image is three dimensional and is defined both on a discrete domain Ωd = {(n1, n2, n3) : 0 ≤ n1 < N1, 0 ≤ n2 < N2, 0 ≤ n3 < N3} corresponding to the voxel lattice and on a continuous domain Ωx that extends the voxel lattice to the continuum by linear interpolation. A point x = (x1, x2, x3) ∈ Ωx corresponds to a point in the continuous domain of an image, while x = (x1, x2, x3) ∈ Ωd corresponds to a specific voxel in the image. The two images to be registered are denoted I0(x) and I1(x). The notation T(x, i), for 0 ≤ i < N4, is used to denote a sequence of images, as shown in Figure 1 for N4 = 8. It will be assumed that the number of images N4 in the sequence is an even number.
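The extension of a lattice image from Ωd to the continuum Ωx by linear interpolation can be sketched as follows (a minimal 2D illustration using SciPy; the function name `sample` and the boundary handling are our assumptions, not the paper's):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample(I, x):
    """Evaluate a lattice image I at continuous points x by linear
    interpolation (order=1), extending the discrete domain Omega_d
    to the continuous domain Omega_x as described in the text.

    I: array (N1, N2) sampled on the voxel lattice.
    x: array (2, ...) of continuous coordinates.
    """
    return map_coordinates(I, x, order=1, mode='nearest')

# At lattice points the interpolant reproduces the voxel values;
# between them it interpolates linearly.
I = np.arange(16.0).reshape(4, 4)
print(sample(I, np.array([[1.0], [2.0]])))   # value at voxel (1, 2)
print(sample(I, np.array([[1.5], [2.0]])))   # halfway between rows 1 and 2
```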

Fig. 1. A time-periodic image sequence and the associated incremental transformations.

An incremental transformation h(x, i) (see Figure 1) defines the pointwise correspondence between image T(x, i) and image T(x, i+1). The incremental transformations h(x, i), for 0 ≤ i < N4, are related to the images in the image sequence by the equations

    T(x, 0) = I0(x),            T(x, 1) = T(h(x, 0), 0),
    T(x, 2) = T(h(x, 1), 1),    T(x, 3) = T(h(x, 2), 2),
    T(x, 4) = I1(x),            T(x, 5) = I1(h(x, 4)),
    T(x, 6) = T(h(x, 5), 5),    T(x, 7) = T(h(x, 6), 6)        (1)

for the case N4 = 8. Notice that the images T(x, 0) and T(x, 4) in the image sequence are set equal to the two images being registered. The transformation that deforms I0(x) into I1(x) is called the forward transformation and is computed as h(h(h(h(x, 3), 2), 1), 0), the concatenation of the incremental transformations h(x, i) for i = 0, 1, 2, 3. The reverse transformation deforms I1(x) into I0(x) and is computed in a similar manner as h(h(h(h(x, 7), 6), 5), 4). Thus,

    I0(h(h(h(h(x, 3), 2), 1), 0)) ∼ I1(x)    and    I1(h(h(h(h(x, 7), 6), 5), 4)) ∼ I0(x).        (2)
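The concatenation of incremental transformations can be sketched in 2D as follows (linear interpolation on the lattice, matching the paper's continuous extension; the function names and boundary handling are our assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(h_outer, h_inner):
    """Return the transformation h_outer(h_inner(x)) sampled on the
    pixel lattice. Each transformation is an array (2, H, W) of
    coordinates and is evaluated by linear interpolation."""
    return np.stack([map_coordinates(h_outer[c], h_inner, order=1,
                                     mode='nearest')
                     for c in range(2)])

def concatenate(h_list):
    """Concatenate incremental transformations [h(.,0), ..., h(.,k)]
    into h(h(...h(x, k)...), 1), 0): the last transformation in the
    list is applied first, as in Eq. (2)."""
    h = h_list[-1]                      # innermost transformation
    for h_i in reversed(h_list[:-1]):   # wrap with h(., k-1), ..., h(., 0)
        h = compose(h_i, h)
    return h
```

For example, concatenating four identity transformations yields the identity, and concatenating two unit translations yields (in the image interior) a translation by two pixels.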

Let u(x, i) = h(x, i) − x denote the displacement field associated with the incremental transformation h(x, i).

2.2 Image Registration

This section describes how two 3D images I0(x) and I1(x) are registered by constructing an image sequence T(x, i) that is periodic in both the spatial and temporal dimensions. The registration problem is formulated as an optimization problem in which the displacement fields u(x, i) = h(x, i) − x, for 0 ≤ i < N4, are estimated instead of the incremental transformations h(x, i). The optimization problem is formulated to achieve several goals. The first goal is to estimate the incremental transformation functions h(x, i) such that h(x, i) deforms image T(x, i) into the shape of T(x, i+1), for 0 ≤ i < N4. This is accomplished by minimizing the intensity similarity cost function

    CS(u) = Σ_{i=0}^{N4−1} ∫_{Ωd} (T(u(x, i) + x, i) − T(x, i+1))² dx.        (3)
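A discrete version of Eq. (3) can be sketched as follows (2D for brevity; the boundary handling is our assumption):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def similarity_cost(T, u):
    """Discrete intensity similarity cost of Eq. (3), 2D sketch.

    T: array (N4, H, W), the image sequence T(x, i), periodic in i.
    u: array (N4, 2, H, W), displacement fields u(x, i).
    Each term warps T(., i) by h(x, i) = x + u(x, i) via linear
    interpolation and compares it with the next image in the sequence.
    """
    N4, H, W = T.shape
    x = np.stack(np.meshgrid(np.arange(H), np.arange(W),
                             indexing='ij')).astype(float)
    cost = 0.0
    for i in range(N4):
        warped = map_coordinates(T[i], x + u[i], order=1, mode='nearest')
        cost += np.sum((warped - T[(i + 1) % N4]) ** 2)
    return cost
```

With zero displacements, the cost reduces to the sum of squared differences between consecutive images in the sequence.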

The second goal is to estimate a set of incremental transformations that gradually deform I0(x) into the shape of I1(x) such that the forward transformation is evenly distributed among the incremental transformations h(x, i), for 0 ≤ i < N4/2. Similarly, the reverse transformation that deforms I1(x) back into the shape of I0(x) should be evenly distributed among the incremental transformations h(x, i), for i = N4/2, . . . , N4 − 1. This condition produces a periodic image sequence T(x, i) that is symmetric about T(x, 0) and T(x, N4/2) such that the images T(x, i) and T(x, N4 − i) look similar to each other. This constraint is imposed on the optimization problem by minimizing the symmetric similarity cost function given by

    CM(u) = Σ_{i=0}^{N4−1} ∫_{Ωd} (T(u(x, i) + x, i) − T(x, N4 − 1 − i))² dx.        (4)

The third goal is to constrain each incremental transformation to be a smooth, small-deformation transformation. This goal is accomplished by regularizing each incremental transformation with a linear elastic continuum mechanical model. This constraint is incorporated into the optimization problem by minimizing the regularization cost function

    CR(u) = Σ_{i=0}^{N4−1} ∫_{Ωd} ||Lu(x, i)||² dx,        (5)
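The linear elastic operator L used in Eq. (5) is defined below. As a concrete illustration, it can be applied to a sampled displacement field with central finite differences (a 2D-space-plus-time sketch with periodic boundaries; the function name and parameter values are our assumptions, not the paper's):

```python
import numpy as np

def elastic_operator(u, ax=1.0, at=1.0, beta=1.0, gamma=0.01):
    """Apply L = -(ax lap_x + at lap_t) - beta grad_x(div_x .) + gamma
    to a displacement field u of shape (2, H, W, N4): component axis
    first, then two spatial axes, then the temporal axis. Periodic
    boundaries via np.roll; central finite differences throughout."""
    def d2(f, axis):
        # second central difference (unit spacing)
        return np.roll(f, -1, axis) - 2 * f + np.roll(f, 1, axis)

    def d1(f, axis):
        # first central difference (unit spacing)
        return 0.5 * (np.roll(f, -1, axis) - np.roll(f, 1, axis))

    lap_x = d2(u, 1) + d2(u, 2)          # spatial Laplacian per component
    lap_t = d2(u, 3)                     # temporal second derivative
    div = d1(u[0], 0) + d1(u[1], 1)      # spatial divergence, shape (H, W, N4)
    grad_div = np.stack([d1(div, 0), d1(div, 1)])
    return -(ax * lap_x + at * lap_t) - beta * grad_div + gamma * u
```

As a sanity check, a constant displacement field has zero derivatives, so L reduces to multiplication by gamma.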

where the linear elastic operator L is defined as L = −(αx∇x² + αt∇t²) − β∇x(∇x·) + γ. The αt∇t² term is added to the regular 3D linear elastic operator to smooth the transition from one time step to the next and to help distribute the total transformation uniformly among the incremental transformations.

The fourth goal is to minimize the inverse consistency error between the forward and reverse transformations. By construction, the incremental transformations h(x, i) and h(x, N4 − 1 − i) should be inverses of each other since the images T(x, i) and T(x, N4 − i) are constrained to look similar. The inverse consistency error is minimized by minimizing the inverse consistency cost function

    CI(u) = Σ_{i=0}^{N4−1} ∫_{Ωd} ||u(x, i) − ũ(x, N4 − 1 − i)||² dx,        (6)

where ũ(x, i) = h⁻¹(x, i) − x. Notice that imposing the inverse consistency constraint on the incremental transformations imposes it on the forward and reverse transformations as well. Due to its symmetric form, Eq. (6) also helps ensure the symmetry of the image sequence T(x, i) about i = 0 and i = N4/2. Note that it is possible to compute Eq. (6) without computing inverses using the approach of Cachier and Rey [13].

The LDCEIR image registration algorithm is formulated as the optimization problem: determine the displacement fields u(x, ·) that minimize the weighted sum of Eqs. (3), (4), (5), and (6) given by

    u(x, ·) = argmin (σS CS(u) + σM CM(u) + σR CR(u) + σI CI(u)),        (7)

where σS, σM, σR, and σI are weighting factors. Notice that the four terms in Eq. (7) compete with one another, and the final solution is a trade-off between the four constraints. The image sequence T(x, ·) is initialized by setting the first half of the images in the sequence equal to image I0(x) and the second half equal to image I1(x). For example, when N4 = 8, the image sequence T(x, i) is initialized as

    T(x, 0) = T(x, 1) = T(x, 2) = T(x, 3) = I0(x),
    T(x, 4) = T(x, 5) = T(x, 6) = T(x, 7) = I1(x).        (8)

2.3 Estimation Procedure

Eq. (7) is minimized assuming that the displacement field u(x, ·) is parameterized by a 4D Fourier series given by

    u(x, i) = Σ_{k=−N/2}^{N/2} µ[k] e^{j⟨ωk, (x, i)⟩},        (9)

where N = [N1, N2, N3, N4] and ⟨·, ·⟩ represents the standard inner product. The coefficients µ[k] are 3 × 1, complex-valued vectors with complex conjugate symmetry, and ωk = [2πk1/N1, 2πk2/N2, 2πk3/N3, 2πk4/N4].
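Evaluating the Fourier series of Eq. (9), truncated to a few harmonics, for a single displacement component can be sketched as follows (2D space plus time for brevity; a direct, unoptimized sum, with names of our choosing):

```python
import numpy as np

def displacement_from_coeffs(mu, N, r):
    """Evaluate a truncated Fourier series for one displacement
    component on the lattice, written here in 2D space + time.

    mu: complex coefficients for harmonics k in
        [-r1..r1] x [-r2..r2] x [-r4..r4], shape (2r1+1, 2r2+1, 2r4+1).
    N:  (N1, N2, N4) lattice dimensions.
    Returns the real part of sum_k mu[k] exp(j <w_k, (x, i)>) with
    w_k = (2 pi k1/N1, 2 pi k2/N2, 2 pi k4/N4); the result is real
    when mu has the complex-conjugate symmetry required in the text.
    """
    N1, N2, N4 = N
    r1, r2, r4 = r
    n1, n2, n4 = np.meshgrid(np.arange(N1), np.arange(N2),
                             np.arange(N4), indexing='ij')
    u = np.zeros((N1, N2, N4), dtype=complex)
    for k1 in range(-r1, r1 + 1):
        for k2 in range(-r2, r2 + 1):
            for k4 in range(-r4, r4 + 1):
                phase = 2j * np.pi * (k1 * n1 / N1 + k2 * n2 / N2
                                      + k4 * n4 / N4)
                u += mu[k1 + r1, k2 + r2, k4 + r4] * np.exp(phase)
    return u.real
```

For instance, a lone DC coefficient gives a constant displacement, and a conjugate-symmetric pair of first harmonics adds a cosine in x1.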

With the Fourier series parameterization, Eq. (7) is solved for µ[k] using gradient descent. If the images to be registered are d-dimensional with Np pixels each, then the total number of parameters to be estimated is approximately d × N4 × Np. However, the high frequency coefficients are usually very small and contribute little to the total deformation, so they may be omitted to reduce the number of parameters required to represent the displacement fields. In practice, Eq. (9) is approximated by

    u(x, i) = Σ_{k=−r}^{r} µ[k] e^{j⟨ωk, (x, i)⟩},        (10)

where r = [r1, r2, r3, r4] ≤ N/2. The constants r1, r2, r3, and r4 represent the largest x1, x2, x3, and i harmonic components of the displacement fields. They are set to small values at the beginning and periodically increased throughout the iterative optimization procedure; in other words, the low frequency basis coefficients are estimated before the higher ones. The benefit of this approach is that the global image features are registered before the local details, so the registration is less likely to be trapped in local minima of the total cost function. Moreover, when the values of r1, r2, and r3 are small, the template and target images are down-sampled to reduce computation. Down-sampling the images in the spatial domain also helps eliminate some local minima of the cost function. In practice, each dimension of the images is down-sampled by a factor of 4 at the beginning, then by a factor of 2. The full-scale images are only used for the final iterations to fine-tune the registration.

The steps involved in estimating the basis coefficients µ are summarized in the following algorithm.

Algorithm
1. Initialize T(x, i) using Eq. (8). Set µ[k] = 0, u(x, i) = 0, and ũ(x, i) = 0. Set r = [1, 1, 1, N4/2].
2. Update the basis coefficients µ[k] using gradient descent on Eq. (7).
3. Compute the displacement field u(x, i) using Eq. (10).
4. Update T(x, i) using Eq. (1).
5. Compute h⁻¹(x, i) and set ũ(x, i) = h⁻¹(x, i) − x.
6. If the criterion to increase the number of basis functions is met, set r = r + 1 and set the new coefficients in Eq. (10) to zero.
7. If the algorithm has not converged or reached the maximum number of iterations, go to step 2.
8. Use the displacement field u(x, i) to transform I0(x) and I1(x).
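The coarse-to-fine down-sampling schedule described above (a factor of 4, then 2, then full scale) can be sketched with an image pyramid (using scipy.ndimage.zoom; the interpolation order is our assumption, since the paper does not specify the down-sampling filter):

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_images(I0, I1, factors=(4, 2, 1)):
    """Build the coarse-to-fine schedule: each spatial dimension is
    down-sampled by a factor of 4 at the beginning, then 2, with the
    full-scale images used for the final iterations.

    Returns a list of (I0_s, I1_s) pairs, coarsest first."""
    return [(zoom(I0, 1.0 / f, order=1), zoom(I1, 1.0 / f, order=1))
            for f in factors]
```

The optimization would then run to convergence on each pyramid level in turn, carrying the estimated coefficients forward to the next finer level.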

3 Results

The performance of the large-deformation, inverse-consistent, elastic image registration (LDCEIR) algorithm was tested by registering ten pairs of 2D brain MR images and performing twelve 3D brain MR image registrations. The 2D experiments consisted of six pairs of transverse slices, two pairs of coronal slices, and two pairs of sagittal slices. All of the 2D slices were selected from the set of 3D images of dimension 256 × 320 × 256. For each pair of 2D slices, both the forward and reverse transformations were estimated using the LDCEIR algorithm and the viscous fluid intensity registration (VFIR) algorithm. For the 3D experiments, one data set from a set of 13 MRI brain images was selected as the template image and registered with the other 12 images using both the LDCEIR and VFIR algorithms. The 3D images were all the same size, 128 × 160 × 128.

Figures 2–5 show typical results from one of the 2D image registration experiments comparing the performance of the LDCEIR algorithm to the VFIR algorithm. Figure 2 shows the results of one of the ten 2D MRI brain image registration experiments. The left column shows the images I0 (top) and I1 (bottom) that were registered. The center and right columns contain the registration results of the LDCEIR and the VFIR algorithms, respectively. The top-center and top-right panels show the result of transforming image I1 into the shape of I0 using each algorithm. Similarly, the bottom-center and bottom-right panels show the result of transforming I0 into I1. In all four cases, the deformed images closely resemble the original target image that they were registered with. This figure shows that both the LDCEIR and the VFIR algorithms did a good job matching the outer contour of the brains and the ventricles, but there is still some mismatch in the cortex. The VFIR algorithm minimized the intensity difference better than the LDCEIR algorithm because it compressed some of the sulci and gyri that did not correspond between the brains into small thread-like structures. This behavior is not desirable in regions where the brain structures do not correspond.
Figure 3 shows the absolute intensity difference images for the experiment shown in Fig. 2. These images were computed by subtracting the intensities of each deformed image in Fig. 2 from its target image and then taking the absolute value. White in these images corresponds to no intensity difference while black corresponds to a large intensity difference. Figure 3 shows that the VFIR algorithm did a better job of minimizing the intensity difference than the LDCEIR algorithm. The average absolute intensity error for the LDCEIR algorithm was 15.1 on the range 0–255, compared to 9.0 for the VFIR algorithm.

Figure 4 shows the natural logarithm of the Jacobian values of the forward and reverse transformations of the LDCEIR and the VFIR algorithms. The intensity range of these images was scaled to the same range of -2.5 to 2.5 for comparison. The results of the LDCEIR algorithm are shown in the left column and the results of the VFIR algorithm are shown in the right column. The log-Jacobian values ranged from -2.05 to 2.01 for the LDCEIR algorithm and from -2.55 to 3.16 for the VFIR algorithm. This figure shows that the log-Jacobian image of the VFIR algorithm has sharper details than that of the LDCEIR algorithm, meaning that the transformation of the VFIR algorithm is not as smooth as that of the LDCEIR algorithm. In all cases the log-Jacobian images show regions of expansion and contraction of the brain structures. This is particularly evident in the area of the ventricles.

Fig. 2. 2D MRI brain image registration experiment. Left column: images I0 (top) and I1 (bottom). Center column: registration results of the LDCEIR algorithm, with I1 deformed into the shape of I0 (top) and I0 deformed into the shape of I1 (bottom). Right column: same as center column except for the VFIR algorithm.

Figure 5 shows the inverse consistency error of the LDCEIR and the VFIR algorithms for the experiment shown in Fig. 2. The inverse consistency error measures how far a point ends up from its original position after it is transformed by the forward and reverse transformations consecutively. These images were produced by applying the concatenated forward and reverse transformations to a rectangular grid image for each method. If the forward and reverse transformations are inverses of each other, i.e., have no inverse consistency error, then their concatenation produces the identity mapping and the deformed grid remains undeformed. Thus, the less distortion of the grid image, the better the inverse consistency of the forward and reverse transformations. This figure shows that the inverse consistency of the LDCEIR algorithm is much better than that of the VFIR algorithm. For this experiment, the maximum inverse consistency error was 0.814 pixels for the LDCEIR algorithm and 15.1 pixels for the VFIR algorithm.
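The maximum inverse consistency error can be measured directly from the sampled transformations; a minimal 2D sketch (function name and boundary handling are our assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def max_inverse_consistency_error(h_fwd, h_rev):
    """Maximum distance a point lands from its starting position after
    being mapped by the forward and then the reverse transformation.

    h_fwd, h_rev: arrays (2, H, W) sampling the concatenated forward
    and reverse transformations on the pixel lattice."""
    H, W = h_fwd.shape[1:]
    x = np.stack(np.meshgrid(np.arange(H), np.arange(W),
                             indexing='ij')).astype(float)
    # reverse transformation evaluated at the forward-mapped points
    round_trip = np.stack([
        map_coordinates(h_rev[c], h_fwd, order=1, mode='nearest')
        for c in range(2)
    ])
    return np.sqrt(((round_trip - x) ** 2).sum(axis=0)).max()
```

If the two transformations are exact inverses, the round trip is the identity and the error is zero.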

Fig. 3. Absolute intensity difference image between original images and deformed images shown in Fig. 2. White corresponds to no intensity difference while black corresponds to a large intensity difference. The average absolute intensity error for the LDCEIR algorithm is 15.1 on the range of 0-255 compared to 9.0 for the VFIR algorithm.

Fig. 4. Log-Jacobian of the forward and reverse transformations for the experiment shown in Fig. 2. The log-Jacobian values range from -2.05 to 2.01 for the LDCEIR algorithm and from -2.55 to 3.16 for the VFIR algorithm.

The results of the ten 2D and twelve 3D registration experiments are summarized in the boxplots shown in Fig. 6. The boxplots compare the results of the 2D Elastic, 2D Fluid, 3D Elastic, and 3D Fluid experiments. The top and bottom of a box are the 75th and 25th percentiles of the measurements, respectively. The line across the inside of the box shows the median of the measurements. The whiskers show the range of the measurements, excluding outliers; they extend up or down to the extreme values that are less than 1.5H away from the box, where H is the height of the box. Values outside this range are considered outliers and are indicated by the + signs. A small dot at the bottom of the lower whisker indicates that there are no outliers in the measurements.

Figure 6 shows that the RMS intensity error of the LDCEIR algorithm is approximately 50 percent larger than that of the VFIR algorithm for the 2D and 3D experiments. However, the inverse consistency error of the LDCEIR algorithm was reduced on average by 50 times in 2D and 30 times in 3D compared to the VFIR algorithm for results with comparable RMS intensity error. In addition, the LDCEIR transformations are smoother than those of the VFIR algorithm, as indicated by the smaller minimum/maximum Jacobian values.

Fig. 5. Inverse consistency error visualized with a deformed grid image for the experiment shown in Fig. 2. The maximum inverse consistency error was 0.814 pixels for the LDCEIR algorithm and 15.1 pixels for the VFIR algorithm for this experiment.

4 Summary and Conclusions

We presented a new large-deformation, inverse-consistent, elastic image registration (LDCEIR) algorithm that accommodates large, nonlinear deformations. The LDCEIR algorithm was compared with the viscous fluid intensity registration (VFIR) algorithm in ten 2D and twelve 3D brain MR image registration experiments. The LDCEIR algorithm produced smoother transformation functions than the VFIR algorithm, as indicated by the smaller maximum and minimum log-Jacobian values of the transformations. For these experiments, the inverse consistency error was reduced on average by 50 times in 2D and 30 times in 3D compared to the viscous fluid registration algorithm for results with comparable RMS intensity error.

5 Acknowledgments

This work was supported in part by the NIH under grants NS35368 and DC03590.

References

1. J. Talairach and P. Tournoux, Co-Planar Stereotactic Atlas of the Human Brain, Georg Thieme Verlag, Stuttgart, 1988.
2. F.L. Bookstein, "Principal warps: Thin-plate splines and the decomposition of deformations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, pp. 567–585, 1989.
3. P.J. Besl and N.D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
4. R.P. Woods, S.T. Grafton, C.J. Holmes, S.R. Cherry, and J.C. Mazziotta, "Automated image registration: I. General methods and intrasubject, intramodality validation," Journal of Computer Assisted Tomography, vol. 22, no. 1, pp. 139–152, 1998.
5. J. Ashburner and K.J. Friston, "Voxel-based morphometry - the methods," NeuroImage, vol. 11, no. 6, pp. 805–821, 2000.
6. G.E. Christensen and H.J. Johnson, "Consistent image registration," IEEE Transactions on Medical Imaging, vol. 20, no. 7, pp. 568–582, July 2001.

Fig. 6. Box plots of the RMS intensity error, minimum/maximum log-Jacobian values, and maximum/average inverse consistency error, where the x-labels 2E and 3E correspond to the 2D/3D LDCEIR registration results, respectively, and the x-labels 2F and 3F correspond to the 2D/3D VFIR registration results.

7. D. Shen and C. Davatzikos, "HAMMER: Hierarchical attribute matching mechanism for elastic registration," IEEE Transactions on Medical Imaging, vol. 21, no. 11, pp. 1421–1439, Dec. 2002.
8. G.E. Christensen, R.D. Rabbitt, and M.I. Miller, "Deformable templates using large deformation kinematics," IEEE Transactions on Image Processing, vol. 5, no. 10, pp. 1435–1447, Oct. 1996.
9. S.C. Joshi and M.I. Miller, "Landmark matching via large deformation diffeomorphisms," IEEE Transactions on Image Processing, vol. 9, no. 8, pp. 1357–1370, Aug. 2000.
10. R.D. Rabbitt, J. Weiss, G.E. Christensen, and M.I. Miller, "Mapping of hyperelastic deformable templates using the finite element method," in Vision Geometry IV, R.A. Melter, A.Y. Wu, F.L. Bookstein, and W.D. Green, Eds., Proceedings of SPIE vol. 2573, pp. 252–265, 1995.
11. M. Bro-Nielsen and C. Gramkow, "Fast fluid registration of medical images," in Visualization in Biomedical Computing, K.H. Höhne and R. Kikinis, Eds., LNCS 1131, pp. 267–276, Springer, Hamburg, Germany, 1996.
12. H. Lester, S.R. Arridge, K.M. Jansons, L. Lemieux, J.V. Hajnal, and A. Oatridge, "Non-linear registration with the variable viscosity fluid algorithm," in Information Processing in Medical Imaging, A. Kuba and M. Samal, Eds., LNCS 1613, pp. 238–251, Springer-Verlag, Berlin, June 1999.
13. P. Cachier and D. Rey, "Symmetrization of the non-rigid registration problem using inversion-invariant energies: Application to multiple sclerosis," in MICCAI 2000, LNCS 1935, Pittsburgh, USA, October 2000, pp. 472–481.