Multi-modal medical image registration based on phase congruency and quantitative-qualitative mutual information

Shan Zhang*a, Hongbin Hanb, Zhaoying Liua, Bo Liua, Fugen Zhoua
a Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing, China 100191
b Department of Radiology, Peking University Third Hospital, Beijing, China 100191

ABSTRACT

A new approach to multi-modal medical image registration is proposed to overcome the drawbacks of mutual information: it takes no account of spatial information, treats all intensities without distinction, and is sensitive to noise. The proposed method first extracts the phase congruency of the reference and floating images; second, it computes quantitative-qualitative mutual information on the phase congruency mappings; finally, the geometric transform is optimized by Particle Swarm Optimization. The quantitative-qualitative mutual information used in our algorithm selects the pixels whose utility is larger than the threshold of 1. In addition, mutual information incorporating phase congruency assimilates both intensity and spatial information. Experimental results show that our approach is more robust in suppressing noise and achieves higher accuracy.

Keywords: phase congruency, mutual information, Particle Swarm Optimization, image registration

1. INTRODUCTION

Image registration is the process of finding the optimal geometric transformation that aligns images of the same scene acquired under different conditions; it plays an important role in medical image applications ranging from computer-assisted surgery to disease analysis. A popular class of multimodal image registration methods is based on the maximization of mutual information (MI). Since the initial work of Viola et al. [1] and Maes et al. [2], several groups have improved MI. Studholme et al. [3] introduced normalized mutual information (NMI), which is invariant to the overlap region, to reduce misalignment. Pluim et al. [4] added a gradient-based term to MI (GMI) in order to decrease the number of local maxima. Rueckert et al. [5] used second-order entropy estimation, introducing the dependency of a pixel on its neighboring pixels. Russakoff et al. [6] proposed regional mutual information (RMI), which takes the neighborhood regions of corresponding pixels into account. Mellor et al. [7] employed the local monogenic phase, instead of the intensity, as the basic statistic for MI (PMI). PMI provides no indication of the significance of the characteristics obtained from the local monogenic phase, which makes it sensitive to background degradation; in addition, it is highly susceptible to noise. To increase the smoothness of the registration function, Yang et al. [8] expanded edges to their neighborhood regions using a feature potential function (FPMI). FPMI avoids large fluctuations of the joint distributions, but its accuracy depends on the accuracy of the edge detection. Luan et al. [9] employed Q-MI, which combines the quantitative aspects of information, based on the probability of an intensity, with the qualitative aspects, based on the utility of that intensity. Loeckx et al. [10] proposed conditional mutual information (cMI) by taking a certain spatial distribution as the condition.

*[email protected]; phone 010-82338048

2. METHODOLOGY

In order to incorporate spatial information, highlight intensities with larger utility, and increase the robustness of mutual information, we present an algorithm combining phase congruency with Q-MI. First, local phase congruency maps for both the reference image and the floating image are extracted using complex-valued wavelets across multiple scales and orientations.
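To make the phase congruency extraction step concrete, the following is a simplified one-dimensional sketch using log-Gabor filters. The function name and parameter defaults are ours, not from the paper, and the noise-compensation and frequency-spread terms of the full measure (Sec. 2.1) are omitted for brevity:

```python
import numpy as np

def phase_congruency_1d(signal, n_scales=4, min_wavelength=4.0, mult=2.0,
                        sigma_on_f=0.55, eps=1e-4):
    """Simplified 1-D phase congruency from log-Gabor filter responses.

    PC is the magnitude of the summed complex responses (the local energy)
    divided by the sum of the individual response amplitudes, so it is high
    where the responses agree in phase (edges) and low elsewhere.
    """
    n = len(signal)
    F = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n)

    sum_e = np.zeros(n)  # summed even (real) responses
    sum_o = np.zeros(n)  # summed odd (imaginary) responses
    sum_a = np.zeros(n)  # summed response amplitudes

    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)  # filter centre frequency
        # One-sided log-Gabor transfer function (analytic-signal filter).
        gab = np.zeros(n)
        pos = freqs > 0
        gab[pos] = np.exp(-np.log(freqs[pos] / f0) ** 2 /
                          (2 * np.log(sigma_on_f) ** 2))
        resp = np.fft.ifft(F * 2 * gab)  # complex response at this scale
        sum_e += resp.real
        sum_o += resp.imag
        sum_a += np.abs(resp)

    energy = np.hypot(sum_e, sum_o)  # |sum of the complex responses|
    return energy / (sum_a + eps)
```

On a step signal the measure peaks at the discontinuity, which is exactly the behaviour exploited here: edges are detected by phase agreement rather than by intensity gradient.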

MIPPR 2011: Parallel Processing of Images and Optimization and Medical Imaging Processing, edited by Jianguo Liu, Mingyue Ding, Zhong Chen, Proc. of SPIE Vol. 8005, 80050S · © 2011 SPIE · CCC code: 0277-786X/11/$18 · doi: 10.1117/12.901917

Second, the local phase congruency images are aligned using Q-MI. The Q-MI we use is based on the quantitative-qualitative measure of mutual information: the utilities of intensities are evaluated on the original images rather than on the phase congruency mappings, and only utilities above the threshold of 1 are kept when defining the joint utility for each intensity pair in the two images. The joint utility of each intensity pair serves as the weight of the joint histogram for the corresponding intensity pair; the computation of the joint utility is similar to that of the joint histogram. Finally, Particle Swarm Optimization [11] is used to find the optimal geometric transform by maximizing the Q-MI. Experimental results show that our approach is more robust in suppressing noise and achieves higher accuracy. The process of the algorithm is summarized in Fig. 1.

Fig. 1. The general pipeline of our algorithm
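The final stage of the pipeline, the PSO search over the rigid transform parameters (tx, ty, angle), can be sketched generically. The implementation below is a standard global-best PSO; the function name and hyper-parameters are ours, not from the paper, and a toy quadratic objective with a known optimum stands in for the Q-MI similarity:

```python
import numpy as np

def pso_maximize(objective, bounds, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best Particle Swarm Optimization (maximization).

    bounds: one (lo, hi) pair per parameter, e.g. translations and angle.
    Returns the best parameter vector found and its objective value.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)

    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = 0.1 * rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))
    pbest = pos.copy()                                 # per-particle best
    pbest_val = np.array([objective(p) for p in pos])
    g_idx = int(np.argmax(pbest_val))
    gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)               # stay inside bounds
        vals = np.array([objective(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g_idx = int(np.argmax(pbest_val))
        if pbest_val[g_idx] > gbest_val:
            gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
    return gbest, gbest_val
```

In the registration setting, `objective` would evaluate the Q-MI between the reference image and the floating image warped by the candidate transform; PSO needs no gradients, which suits the non-smooth similarity surfaces shown later in Figs. 4 and 5.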

2.1 Phase congruency

Local phase congruency is based on the local energy model [12]. It is normalized by dividing by the sum, over all orientations and scales, of the amplitudes of the individual wavelet responses at location x in the image; it is defined as (1):

PC(x) = \frac{\sum_\theta \sum_n W_\theta(x) \lfloor A_{n\theta}(x)\, \Delta\Phi_{n\theta}(x) - T_\theta \rfloor}{\sum_\theta \sum_n A_{n\theta}(x) + \varepsilon}    (1)

\Delta\Phi_{n\theta}(x) = \cos(\phi_n(x,\theta) - \bar{\phi}(x,\theta)) - |\sin(\phi_n(x,\theta) - \bar{\phi}(x,\theta))|    (2)

where A_{n\theta} is the amplitude at the n-th wavelet scale and the \theta-th orientation, \phi_n represents the phase at wavelet scale n, \bar{\phi} represents the weighted mean phase, W represents the frequency-spread weighting factor, the noise compensation T is performed in each orientation independently, \lfloor \cdot \rfloor denotes that the enclosed quantity is kept when positive and set to zero otherwise, and \varepsilon is a small constant used to avoid division by zero.

2.2 Quantitative-qualitative mutual information

2.2.1 Quantitative-qualitative mutual information

Based on the Kullback-Leibler distance, mutual information is defined as follows:

MI(R, F) = \sum_{i=1}^{n} \sum_{j=1}^{m} p(R_i, F_j) \log \frac{p(R_i, F_j)}{p_i q_j}    (3)

Belis and Guiasu presented a quantitative-qualitative measure of information based on a cybernetic system in 1968. Quantitative-qualitative mutual information (Q-MI) is defined in equation (4); it reduces to conventional mutual information when all utilities are equal to one. The joint utility u(R_i, F_j) is defined in Sec. 2.2.2.

QMI(R, F; U) = \sum_{i=1}^{n} \sum_{j=1}^{m} u(R_i, F_j)\, p(R_i, F_j) \log \frac{p(R_i, F_j)}{p_i q_j}    (4)

2.2.2 Joint Utility

The utility of an intensity pair (i, j) can be defined as:

u(i, j) = \sum_{\substack{x, y \in \Omega \\ I_R(x)=i,\; I_F(y)=j}} A_R(x) \times A_F(y)    (5)

where I_R(x) and A_R(x) are the intensity and saliency value of pixel x in the reference image R, and I_F(y) and A_F(y) are the intensity and saliency value of pixel y in the floating image F. Here, we keep only values of A_R(x) and A_F(y) that are larger than 1. Selecting utilities larger than 1 excludes pixels that are not very salient; furthermore, a pixel utility below 1 in one image would shrink the joint utility after being multiplied by the utility of the pixel in the other image. The saliency values A_R(x) and A_F(y) are defined in Sec. 2.2.3.

2.2.3 Saliency

The saliency measure [13] is determined by analyzing the entropy of local regions. We first calculate the probability distribution p_i(s, x) of intensity i in a circular region of radius s centered at pixel x. The local entropy H_D(s, x) is then defined from p_i(s, x):

H_D(s, x) = -\sum_i p_i(s, x) \log_2 p_i(s, x)    (6)

By maximizing the local entropy H_D(s, x), the best scale s_p for the region centered at pixel x is

s_p = \{ s : H_D(s-1, x) < H_D(s, x) > H_D(s+1, x) \}    (7)

The saliency value A_D(s_p, x) of a pixel is defined as the maximal local entropy value, weighted by the inter-scale saliency measure W_D(s, x) = s \sum_i \left| \frac{\partial p_i(s, x)}{\partial s} \right|:

A_D(s_p, x) = H_D(s_p, x) \times W_D(s_p, x)    (8)
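A minimal sketch of this saliency computation follows. The helper names are ours, intensities are assumed scaled to [0, 1], and a discrete central difference over neighbouring scales stands in for the derivative in W_D:

```python
import numpy as np

def region_hist(img, cx, cy, radius, n_bins=16):
    """Normalised intensity histogram of a circular region centred at (cx, cy).
    Intensities are assumed scaled to [0, 1]."""
    h, w = img.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    hist, _ = np.histogram(img[mask], bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def saliency(img, cx, cy, scales=range(3, 10), n_bins=16):
    """Kadir-Brady style saliency A_D(s_p, x) at a single pixel (eqs. 6-8)."""
    scales = list(scales)
    hists = [region_hist(img, cx, cy, s, n_bins) for s in scales]
    # Eq. (6): local entropy at each scale.
    H = np.array([-np.sum(p[p > 0] * np.log2(p[p > 0])) for p in hists])
    # Eq. (7): the best scale s_p is an interior local maximum of the entropy.
    k = next((k for k in range(1, len(scales) - 1)
              if H[k - 1] < H[k] > H[k + 1]), None)
    if k is None:                        # no strict peak: fall back to argmax
        k = 1 + int(np.argmax(H[1:-1]))
    # Inter-scale weight W_D ~ s_p * sum_i |dp_i/ds| (central difference).
    W = scales[k] * np.sum(np.abs(hists[k + 1] - hists[k - 1])) / 2.0
    # Eq. (8): saliency is the entropy at the best scale times the weight.
    return H[k] * W, scales[k]
```

A flat region yields zero entropy and zero weight (saliency 0), while a blob-like structure produces an entropy peak at the scale where the region mixes foreground and background, giving a positive saliency.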

After calculating the utility of each pixel, we are ready to compute the joint utility in Section 2.2.2, which is the weight of the Q-MI in Section 2.2.1.
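Assuming the per-pixel saliency maps are already available, the joint utility of eq. (5) and the weighted measure of eq. (4) could be sketched as follows. The helper names, bin count, and [0, 1] intensity range are ours; note that with all utilities equal to one the measure reduces to conventional MI, eq. (3):

```python
import numpy as np

def joint_hist(r, f, n_bins=8):
    """Normalised joint intensity histogram p(R_i, F_j); intensities in [0, 1]."""
    h, _, _ = np.histogram2d(r.ravel(), f.ravel(), bins=n_bins,
                             range=[[0, 1], [0, 1]])
    return h / h.sum()

def joint_utility(r, f, ar, af, n_bins=8, thresh=1.0):
    """Eq. (5): u(i, j) = sum of A_R(x) * A_F(y) over pixels with I_R(x)=i,
    I_F(y)=j, keeping only saliency values above `thresh`. Because the double
    sum factorises, it is an outer product of per-image weighted histograms."""
    ar = np.where(ar > thresh, ar, 0.0)   # discard non-salient pixels
    af = np.where(af > thresh, af, 0.0)
    ur, _ = np.histogram(r.ravel(), bins=n_bins, range=(0, 1),
                         weights=ar.ravel())
    uf, _ = np.histogram(f.ravel(), bins=n_bins, range=(0, 1),
                         weights=af.ravel())
    return np.outer(ur, uf)

def qmi(r, f, utility=None, n_bins=8, eps=1e-12):
    """Eq. (4): Q-MI = sum_ij u(i,j) p(i,j) log(p(i,j) / (p_i q_j)).
    With utility=None (all ones) this is conventional MI, eq. (3)."""
    p = joint_hist(r, f, n_bins)
    pr = p.sum(axis=1, keepdims=True)    # marginal p_i of the reference
    qf = p.sum(axis=0, keepdims=True)    # marginal q_j of the floating image
    u = np.ones_like(p) if utility is None else utility
    log_term = np.log(np.maximum(p, eps) / np.maximum(pr * qf, eps))
    return float(np.sum(u * p * log_term))
```

In the proposed method, `qmi` would be evaluated on the phase congruency maps with the utility matrix computed from the original images, and maximized over the transform parameters.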

3. EXPERIMENTAL RESULTS

A number of experiments were performed to demonstrate the performance of the proposed algorithm (PC+QMI). We use two sets of experiments to validate the accuracy, and two sets to validate the robustness, of the method described in the previous sections. The experiments were run on a 2.80 GHz PC with 1 GB RAM using Matlab 7.11.0.

3.1 Validation 1

The first set of experiments is performed on T1 and T2 images. The result is shown in Fig. 2; the ground-truth transform and the results of the other algorithms are in Table I. We use the retrospective registration method in [14] to compute the mean square error (MSE) of points under the ground-truth transform versus the registration result. The MSE is shown in Table I, together with the computation time. From Table I we find that the MSEs of the proposed method (PC+QMI) and of PC+MI are small, and that the computation times of MI, PC+MI, and PC+QMI are less than those of the other methods; however, the MSE of MI is larger than that of the other methods.
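The MSE column of Table I can be read as a root-mean-square landmark error between the ground-truth and recovered transforms. A plausible sketch of this evaluation (our own reading of the MSE computation in [14], with rotation taken about the origin and angles in degrees) is:

```python
import numpy as np

def rigid(points, tx, ty, angle_deg):
    """Apply a 2-D rigid transform: rotation about the origin, then translation.
    `points` is an (N, 2) array of (x, y) coordinates."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return points @ R.T + np.array([tx, ty])

def registration_mse(points, gt, est):
    """RMS distance (in pixels) between landmarks mapped by the ground-truth
    transform `gt` and by the estimated transform `est`."""
    d = rigid(points, *gt) - rigid(points, *est)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```

For example, comparing the ground-truth transform of Table I with a hypothetical estimate that is off by one pixel in Tx gives an error of exactly 1 pixel, regardless of the landmark placement.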


Fig. 2. Result of the first set of experiments. (a) The T1 image is the reference image, (b) the transformed T2 image is the floating image, and (c) is the registration result after Canny edge detection.

TABLE I. Registration results 1

Technique      Tx      Ty      Angle   MSE(pixel)   Time(s)
Ground Truth   13      10      15      /            /
MI             14.31   9.82    14.99   1.3258       110
RMI            13.67   9.51    14.78   0.4818       233
QMI            13.49   10.51   15.08   0.8778       176
PC+MI          12.60   9.87    15.13   0.4775       115
PC+RMI         13.43   10.26   15.06   0.6757       295
PC+QMI         13.48   10.46   15.04   0.4729       164

3.2 Validation 2

The second set of experiments is performed on CT and MR images. The result is shown in Fig. 3; the registration transform, MSE, and computation time are in Table II.

Fig. 3. Result of the second set of experiments. (a) The CT image is the reference image, (b) the transformed MR image is the floating image, and (c) is the registration result after Canny edge detection.

TABLE II. Registration results 2

Technique      Tx      Ty      Angle   MSE(pixel)   Time(s)
Ground Truth   12      10      6       /            /
MI             11.06   11.90   5.38    2.0731       99
RMI            10.49   8.71    5.61    1.9705       446
QMI            11.36   11.19   6.02    1.3558       139
PC+MI          11.02   8.01    2.01    6.3442       34
PC+RMI         11.65   11.27   6.50    1.4922       416
PC+QMI         11.34   11.18   6.01    1.3543       292

From the registration results in Table II, we can see that PC+MI [15] failed the registration, especially in the rotation angle. We can also see that the MSE of PC+QMI is the smallest among the six methods, and its computation time is less than those of RMI and PC+RMI.

3.3 Robustness 1

To verify the robustness of our algorithm, we add Gaussian noise with zero mean and 0.01 variance, and with zero mean and 0.02 variance, to the images of Sec. 3.1 without applying any transform. The results are shown in Fig. 4. From Fig. 4(a) we can see that the curves of both MI and PC+MI fail to align the T1 and T2 images even without any transformation. The curves of the horizontal translation tx and vertical translation ty are all smooth in Fig. 4(b) and Fig. 4(c). However, comparing the first and second rows, we find that all algorithms except ours fail to align the images under the larger variance of Gaussian noise.

Fig. 4. Similarity curves for (a) rotation angle, (b) horizontal translation tx, and (c) vertical translation ty after adding Gaussian noise with zero mean and 0.01 variance to the T1 and T2 images (first row), and (d)-(f) the corresponding curves for zero mean and 0.02 variance (second row). The black, red, green, and blue curves correspond to MI, PC+MI, PC+RMI, and PC+QMI, respectively.

3.4 Robustness 2

Similarly, we add Gaussian noise with zero mean and 0.01 variance, and with zero mean and 0.02 variance, to the images of Sec. 3.2 without applying any transform. As shown in Fig. 5(a) and (d), the curve of PC+RMI fails to align the CT and MR images without any transform under the larger variance of Gaussian noise; MI and PC+MI fail as well. Our algorithm is more robust than the other algorithms.

Fig. 5. Similarity curves for (a) rotation angle, (b) horizontal translation tx, and (c) vertical translation ty after adding Gaussian noise with zero mean and 0.01 variance to the CT and MR images (first row), and (d)-(f) the corresponding curves for zero mean and 0.02 variance (second row). The black, red, green, and blue curves correspond to MI, PC+MI, PC+RMI, and PC+QMI, respectively.

4. CONCLUSION

In this article, we have introduced a new registration method based on phase congruency and quantitative-qualitative mutual information. The quantitative-qualitative mutual information used in our algorithm selects the pixels whose utility is larger than the threshold of 1. In addition, mutual information incorporating phase congruency assimilates both intensity and spatial information. Experimental results show that our approach is more robust in suppressing noise and achieves higher accuracy.

ACKNOWLEDGEMENT

This work was supported by the National Natural Science Foundation of China (Grant No. 30972811), the Natural Science Foundation of Beijing (Grant No. 7093137), and the Natural Science Foundation of Beijing (Grant No. 3102020).

REFERENCES

1. William M. Wells III, Paul Viola, Hideki Atsumi, Shin Nakajima, and Ron Kikinis, "Multi-modal Volume Registration by Maximization of Mutual Information", Medical Image Analysis, 1, 35-51 (1996).
2. Frederik Maes, Andre Collignon, Dirk Vandermeulen, Guy Marchal, and Paul Suetens, "Multimodality Image Registration by Maximization of Mutual Information", IEEE Transactions on Medical Imaging, 16, 187-198 (1997).
3. C. Studholme, D.L.G. Hill, and D.J. Hawkes, "An Overlap Invariant Entropy Measure of 3D Medical Image Alignment", Pattern Recognition, 32, 71-86 (1999).
4. Josien P.W. Pluim, J.B. Antoine Maintz, and Max A. Viergever, "Image Registration by Maximization of Combined Mutual Information and Gradient Information", MICCAI, 1935, 452-461 (2000).
5. D. Rueckert, M.J. Clarkson, D.L.G. Hill, and D.J. Hawkes, "Non-rigid Registration Using Higher-order Mutual Information", Proc. SPIE, 3979, 438-447 (2000).
6. Daniel B. Russakoff, Carlo Tomasi, Torsten Rohlfing, and Calvin R. Maurer, "Image Similarity Using Mutual Information of Regions", ECCV, 3023, 596-607 (2004).
7. Matthew Mellor and Michael Brady, "Phase Mutual Information as a Similarity Measure for Registration", Medical Image Analysis, 9(4), 330-343 (2005).
8. Xuan Yang, Jihong Pei, and Weixin Xie, "Maximization of Feature Potential Mutual Information in Multimodality Image Registration Using Particle Swarm Optimization", Proc. SPIE, 5747, 1300-1309 (2005).
9. Hongxia Luan, Feihu Qi, Zhong Xue, Liya Chen, and Dinggang Shen, "Multimodality Image Registration by Maximization of Quantitative-Qualitative Measure of Mutual Information", Pattern Recognition, 41, 285-298 (2008).
10. Dirk Loeckx, Pieter Slagmolen, Frederik Maes, Dirk Vandermeulen, and Paul Suetens, "Nonrigid Image Registration Using Conditional Mutual Information", IEEE Transactions on Medical Imaging, 29, 19-29 (2010).
11. Qi Li and Isao Sato, "Multimodality Image Registration by Particle Swarm Optimization of Mutual Information", ICIC, 4682, 1120-1130 (2007).
12. Peter Kovesi, "Image Features from Phase Congruency", Journal of Computer Vision Research, 1(3), 1-27 (1999).
13. Timor Kadir and Michael Brady, "Saliency, Scale and Image Description", International Journal of Computer Vision, 45, 83-105 (2001).
14. Jay West, J. Michael Fitzpatrick, Matthew Y. Wang, Benoit M. Dawant, Calvin R. Maurer, Jr., et al., "Comparison and Evaluation of Retrospective Intermodality Image Registration Techniques", Proc. SPIE, 2710, 332-347 (1996).
15. Juan Zhang, Zhentai Lu, Qianjin Feng, and Wufan Chen, "Medical Image Registration Based on Phase Congruency and RMI", MIACA, 10, 103-105 (2010).
