INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN ENGINEERING, TECHNOLOGY AND SCIENCES (IJ-CA-ETS)

An Improved Medical Image Registration Method using Curvelet Transform and Genetic Algorithm

1 ATANU DAS, 2 RAJIB BAG

1 Asst. Professor, Dept. of CSE & IT, Netaji Subhash Engineering College, Garia, Kolkata-700152, India
2 Asst. Professor & HOD, Dept. of CSE, Seacom Engineering College, Jaladhulagari, Sankrail, Howrah-711302, WB, India
[email protected], [email protected]

ABSTRACT: This paper proposes a curvelet transform based medical image registration method. Shortcomings of conventional wavelet transforms make it difficult to process geometric image features such as ridges and edges. Two images are used for registration: one is the reference image and the other is a floating image that is to be registered. The curvelet transform is performed and its coefficients are used as features. A genetic algorithm (GA) is used to maximize the similarity metric, chosen here as mutual information. An affine transformation model is used to transform the sensed image and overlay it on the reference one. The matching error of the curvelet transform based registration method is compared with that of the usual wavelet transform based method. The comparison results show that the former is more efficient than the latter.

Keywords: Curvelet transform, mutual information, genetic algorithm.

1. INTRODUCTION
Medical imaging technology has seen remarkable developments in the last 25 years. It is widely used in healthcare and biomedical research, and a very wide range of imaging modalities is now available. X-ray computed tomography images, for example, are sensitive to tissue density and atomic composition through the X-ray attenuation coefficient, while magnetic resonance images (MRI) reflect proton density, relaxation times, flow and other parameters. The introduction of contrast agents provides information on the patency and function of tubular structures, such as blood vessels, the bile duct and the bowel, as well as on the state of the blood-brain barrier. In nuclear medicine,

radiopharmaceuticals introduced into the body allow delineation of functioning tissue and measurement of metabolic and pathophysiological processes. These, together with other medical imaging technologies, now provide vital information on the physiological properties and biological function of organs. Medical image registration is the process of aligning or overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. It geometrically aligns two images: the reference image and the sensed image. The majority of registration methods consist of the steps described in the block diagram below.

Fig. 1: Block diagram of a classical image registration system

The first step is feature detection. Salient and distinctive objects (closed-boundary regions, edges, contours, line intersections, corners, etc.) are detected manually or, preferably, automatically. For further processing, these features can be represented by their point representatives (centers of gravity, line endings, distinctive points), which are called control points. The second step is feature matching. In this


step, the correspondence between the features detected in the sensed image and those detected in the reference image is established. Various feature descriptors and similarity measures, along with spatial relationships among the features, are used for this purpose. In the transformation model estimation step, the type and parameters of the so-called mapping functions, which align the sensed image with the reference image, are estimated. The parameters of the mapping functions are computed by means of the established feature correspondences. The last step is image resampling and transformation. The sensed image is transformed by means of the mapping functions, and image values at non-integer coordinates are computed by an appropriate interpolation technique.

The application of the curvelet transform to image registration has been addressed by Safran et al. [1], who used an adaptive sampling method for the estimation of mutual information. Their method uses the fast discrete curvelet transform to identify regions along anatomical curves on which the mutual information is computed. Ali et al. [2] used the curvelet transform instead of the wavelet transform to fuse MRI and CT images and showed, on the basis of peak signal-to-noise ratio, that the curvelet transform performs better than the wavelet transform. Bhattacharya et al. [3] used mutual information as a similarity metric and optimized it using a GA. Hanling Zhang et al. [4] used mutual information as a similarity metric together with a hybrid optimization algorithm. Fan et al. used a parallel GA to address non-convex and non-linear optimization problems.

In this paper an improved method of medical image registration is proposed in which the curvelet transform is used for feature detection and the similarity metric is optimized using a GA. This work uses T1- and T2-weighted MRI scans and PET scans of two different parts of the brain, the corona radiata [5] and the cuneus [6]. The first set is used for evaluating the proposed methodology and the second for cross-validating the system using statistical Type 1 and Type 2 errors. The paper is organized as follows. First we describe the curvelet transform and then mutual information as a similarity metric. After that we discuss the proposed GA-based methodology with a block diagram. The empirical data sets (of images) and the validation of the proposed methodology are then given. The paper ends with a conclusion and some directions for future work.

2. CURVELET TRANSFORM
In 1999, Candes and Donoho [7] introduced the curvelet transform. Over the following years the curvelet construction was redesigned to make it simpler to understand and use, and the second-generation curvelet transform [8] was introduced in 2006. It is not only simpler but also faster and

less redundant than its first-generation version. The curvelet transform is multi-scale and multi-directional, and curvelets exhibit a highly anisotropic shape that obeys a parabolic scaling relationship. Like the wavelet and ridgelet transforms, the second-generation continuous curvelet transform also belongs to the framework of sparse representation: a signal can be represented sparsely through the inner products of the signal with the basis functions. The curvelet transform can then be expressed as

c j, l , k  : f ,  j ,l ,k

Where

 j ,l , k

(1)

denotes curvelet function, and j, l and k

denotes the variable of scale, orientation, and position respectively. In the frequency domain, the Curvelet transform can be implemented with φ by means of the window function U . Defining a pair of windows W (r ) (a radial window) and V (t ) (an angular window) as the followings:

W 2 r   1 , r  3 / 4,3 / 2 

2

j

(2)

j   

V t  1  1 , t   1 / 2,1 / 2 2

(3)

l  

where $r$ and $\theta$ are polar coordinates in the frequency domain. For each $j \geq j_0$, $U_j$ is defined in the Fourier domain by

$$U_j(r,\theta) = 2^{-3j/4}\, W\!\left(2^{-j} r\right)\, V\!\left(\frac{2^{\lfloor j/2 \rfloor}\,\theta}{2\pi}\right) \qquad (4)$$

where $\lfloor j/2 \rfloor$ denotes the integer part of $j/2$. The polar 'wedge' represented by $U_j$ is supported by $W$ and $V$, the radial and angular windows. Fig. 2 shows the division of the Fourier frequency plane into wedges. The wedges result from partitioning the Fourier plane into radial (concentric circles) and angular divisions. The concentric circles are responsible for decomposing the image into multiple scales (band-pass filtering), while the angular divisions correspond to different angles or orientations. Thus, to address a particular wedge one must first specify its scale and angle.

Fig. 2: Curvelet representation in the frequency domain
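To make Eq. (4) concrete, the following is a minimal NumPy sketch (an illustration, not the authors' implementation) that samples one wedge window $U_j(r,\theta)$ on a discrete frequency grid. The particular shapes of `radial_window` and `angular_window`, the grid size and the function names are assumptions chosen only so the example runs.

```python
import numpy as np

# Illustrative sketch of Eq. (4), not the authors' implementation: sample the
# polar wedge window U_j(r, theta) on an n x n frequency grid. The smooth
# window shapes below are assumptions chosen only for demonstration.

def radial_window(r):
    """Assumed smooth radial window W, supported on r in (1/2, 2)."""
    w = np.zeros_like(r)
    inside = (r > 0.5) & (r < 2.0)
    w[inside] = np.cos(0.5 * np.pi * np.log2(r[inside])) ** 2
    return w

def angular_window(t):
    """Assumed smooth angular window V, supported on |t| <= 1."""
    v = np.zeros_like(t)
    inside = np.abs(t) <= 1.0
    v[inside] = np.cos(0.5 * np.pi * t[inside]) ** 2
    return v

def wedge_U(n, j, theta_l=0.0):
    """Sample U_j(r, theta) of Eq. (4), rotated to orientation theta_l."""
    f = np.fft.fftshift(np.fft.fftfreq(n)) * n          # frequency coordinates
    X, Y = np.meshgrid(f, f)
    r = np.hypot(X, Y)                                  # radius
    theta = np.arctan2(Y, X) - theta_l                  # angle relative to the wedge
    theta = (theta + np.pi) % (2.0 * np.pi) - np.pi     # wrap to (-pi, pi]
    return (2.0 ** (-3.0 * j / 4.0)
            * radial_window(2.0 ** (-j) * r)
            * angular_window(2.0 ** np.floor(j / 2.0) * theta / (2.0 * np.pi)))

if __name__ == "__main__":
    U = wedge_U(n=256, j=4, theta_l=np.pi / 6)
    print("non-zero samples in the wedge:", np.count_nonzero(U))
```

In a full implementation one such wedge is built for every scale and orientation, tiling the frequency plane as shown in Fig. 2.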


Let $\hat{\varphi}_j(\omega) = U_j(\omega)$ define the curvelet $\varphi_j$ at scale $2^{-j}$; all other curvelets at that scale can be obtained by rotating and shifting $\varphi_j$. Define:

(1) a uniform sequence of rotation angles
$$\theta_l = 2\pi \cdot 2^{-\lfloor j/2 \rfloor} \cdot l, \qquad l = 0, 1, \ldots, \quad 0 \leq \theta_l < 2\pi \qquad (5)$$

(2) a shift parameter $k = (k_1, k_2) \in \mathbb{Z}^2$.

With these, the curvelet at scale $2^{-j}$, orientation $\theta_l$ and position $x_k^{(j,l)}$ is defined as a function of $x = (x_1, x_2)$ by
$$\varphi_{j,l,k}(x) = \varphi_j\!\left(R_{\theta_l}\left(x - x_k^{(j,l)}\right)\right) \qquad (6)$$
$$x_k^{(j,l)} = R_{\theta_l}^{-1}\!\left(k_1 \cdot 2^{-j},\; k_2 \cdot 2^{-j/2}\right) \qquad (7)$$

where $R_\theta$ denotes rotation by $\theta$ radians. The continuous curvelet transform is then defined by
$$C(j,l,k) := \langle f, \varphi_{j,l,k} \rangle = \int_{\mathbb{R}^2} f(x)\, \overline{\varphi_{j,l,k}(x)}\, dx \qquad (8)$$

Based on Plancherel's theorem [9], the following frequency-domain formula can be deduced from the above:
$$C(j,l,k) = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, \overline{\hat{\varphi}_{j,l,k}(\omega)}\, d\omega = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, U_j\!\left(R_{\theta_l}\omega\right) e^{i \langle x_k^{(j,l)},\, \omega \rangle}\, d\omega \qquad (9)$$

For an input $f[t_1, t_2]$, $0 \leq t_1, t_2 < n$, given on a spatial Cartesian grid, the discrete form of the above continuous curvelet transform can be expressed as
$$C^D(j,l,k) := \sum_{0 \leq t_1, t_2 < n} f[t_1, t_2]\, \overline{\varphi^D_{j,l,k}[t_1, t_2]} \qquad (10)$$

The discrete curvelet transform can be implemented by a 'wrapping' algorithm, in which four steps are carried out: (1) apply a 2D fast Fourier transform to the image; (2) form the product of $\tilde{U}_j$ with the transformed image for each scale and angle; (3) wrap this product around the origin; (4) apply a 2D inverse fast Fourier transform, which yields the discrete curvelet coefficients.
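As an illustration of these four steps (a simplified sketch under stated assumptions, not the authors' implementation), the fragment below applies steps (1), (2) and (4) with NumPy, taking a precomputed frequency-domain wedge window as input. The wrapping of step (3), which re-indexes each wedge onto a small rectangle, is deliberately omitted, and the ring-shaped demo window is only a crude stand-in for a real wedge.

```python
import numpy as np

# Simplified sketch of the wrapping algorithm (step (3) omitted): FFT the image,
# multiply by a frequency-domain wedge window, and inverse-FFT the product.

def curvelet_like_coefficients(image, wedge_window):
    """Steps (1)-(4): FFT, product with U_j, (wrap skipped), inverse FFT."""
    f_hat = np.fft.fftshift(np.fft.fft2(image))        # (1) 2D FFT, origin centred
    product = f_hat * wedge_window                     # (2) product with the wedge
    # (3) wrapping around the origin is omitted in this simplified sketch
    return np.fft.ifft2(np.fft.ifftshift(product))     # (4) 2D inverse FFT

if __name__ == "__main__":
    n = 128
    img = np.zeros((n, n)); img[:, n // 2] = 1.0       # a vertical edge
    # crude stand-in for a wedge window: an isotropic frequency-domain ring
    fx = np.fft.fftshift(np.fft.fftfreq(n)) * n
    X, Y = np.meshgrid(fx, fx)
    ring = np.exp(-((np.hypot(X, Y) - 16.0) ** 2) / (2 * 4.0 ** 2))
    coeffs = curvelet_like_coefficients(img, ring)
    print("max |coefficient| =", np.abs(coeffs).max())
```

A complete implementation would loop over all scales and orientations, wrap each product onto its rectangle, and collect the resulting coefficient arrays.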



3. MUTUAL INFORMATION AS A SIMILARITY METRIC
Mutual information (MI) is one of the leading techniques in multimodal image registration. The concept of MI emerges from information theory. It is a measure of the statistical dependency of two data sets and is particularly suitable for images from different modalities. The MI between two random variables $X$ and $Y$ is given by
$$MI(X,Y) = H(Y) - H(Y \mid X) = H(X) + H(Y) - H(X,Y) \qquad (11)$$
where
$$H(X) = -E_X\!\left[\log\left(P(X)\right)\right] \qquad (12)$$
represents the entropy of a random variable and $P(X)$ the probability distribution of $X$.
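As a minimal sketch of Eq. (11) (an illustration under stated assumptions, not the authors' code), MI can be estimated from a joint intensity histogram of the two images; the bin count and function names below are arbitrary choices.

```python
import numpy as np

# Minimal sketch: estimate MI(A, B) = H(A) + H(B) - H(A, B) of Eq. (11)
# from a 2D joint intensity histogram of two images.

def mutual_information(img_a, img_b, bins=64):
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint_hist / joint_hist.sum()            # joint probability P(A, B)
    p_a = p_ab.sum(axis=1)                          # marginal P(A)
    p_b = p_ab.sum(axis=0)                          # marginal P(B)

    def entropy(p):
        p = p[p > 0]                                # ignore empty bins
        return -np.sum(p * np.log2(p))

    return entropy(p_a) + entropy(p_b) - entropy(p_ab)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    # a monotone remapping of the reference is highly informative about it
    print("MI(ref, ref**2) =", mutual_information(ref, ref ** 2))
    print("MI(ref, noise)  =", mutual_information(ref, rng.random((128, 128))))
```

Well-registered multimodal images share more structure, so their joint histogram is more concentrated and the MI is higher; this is the quantity the GA of the next section maximizes.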

4. MAXIMIZATION OF MI USING GA
Finding the minimum of a dissimilarity measure (penalty function), or the maximum of a similarity measure, is a multidimensional optimization problem, and the only way to guarantee a global optimum is an exhaustive search over the entire transformation space, which is impractical. The algorithm chosen here is therefore a meta-heuristic, the GA. One advantage of a GA is that it is inherently parallel, since it maintains multiple offspring, which makes it well suited to large problems where evaluating all possible solutions serially would be too time-consuming, if not impossible. Here the GA is applied using the pyramidal approach [10] for faster computation. The algorithm terminates when the current level of computation is greater than or equal to the maximum level, chosen here as 4.

Fig. 3: The GA for maximization of MI
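The following is a hedged sketch of such a GA loop (not the authors' implementation): each chromosome holds transform parameters, fitness is the MI between the reference image and the transformed floating image, and a coarse-to-fine pyramid driver runs the search level by level. For brevity a rigid parameterization (shifts plus rotation) stands in for the full affine model, and the population size, rates, bounds and level handling are all assumptions.

```python
import numpy as np
from scipy import ndimage

# Hedged sketch of GA-based MI maximization with a pyramidal driver; all
# hyperparameters and the rigid (shift + rotation) parameterization are
# assumptions made only for illustration.

def mutual_information(a, b, bins=64):
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    ent = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return ent(px) + ent(py) - ent(p)

def apply_transform(img, params):
    dy, dx, angle_deg = params
    out = ndimage.rotate(img, angle_deg, reshape=False, order=1)
    return ndimage.shift(out, (dy, dx), order=1)

def ga_maximize_mi(reference, floating, init=(0.0, 0.0, 0.0), pop_size=30,
                   generations=40, spread=10.0, mutation_sigma=1.0, seed=0):
    """Return the (dy, dx, angle) chromosome with the highest MI fitness."""
    rng = np.random.default_rng(seed)
    pop = np.asarray(init, float) + rng.uniform(-spread, spread, (pop_size, 3))
    fitness = lambda p: mutual_information(reference, apply_transform(floating, p))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep the fittest half
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, 3)) < 0.5                    # uniform crossover
        children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
        children += rng.normal(0.0, mutation_sigma, children.shape)  # Gaussian mutation
        children[0] = parents[0]                                  # elitism: keep the best
        pop = children
    return max(pop, key=fitness)

def pyramid_register(reference, floating, max_level=4):
    """Coarse-to-fine driver: run the GA from the coarsest level down to level 0."""
    params = (0.0, 0.0, 0.0)
    for level in range(max_level, -1, -1):
        factor = 2 ** level
        ref_l = ndimage.zoom(reference, 1.0 / factor, order=1)
        flo_l = ndimage.zoom(floating, 1.0 / factor, order=1)
        params = ga_maximize_mi(ref_l, flo_l, init=params)
        if level > 0:
            params = (params[0] * 2.0, params[1] * 2.0, params[2])  # shifts double per level
    return params
```

In practice the number of levels, the GA operators and their rates would be tuned to the data; Fig. 3 gives the authors' own flow for the maximization.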


5. THE PROPOSED METHODOLOGY

A. PROPOSED ALGORITHM
As mentioned before, a pyramidal approach is followed for the image registration process, which is described below.
Step 1: Choose the floating and the reference image as input.
Step 2: Set the initial level to zero.
Step 3: Carry out the curvelet transform as well as the discrete wavelet transform on both images.
Step 4: Use the respective transform coefficients as features.
Step 5: Apply the mutual information similarity metric for feature comparison.
Step 6: Use the GA to optimize the similarity metric.
Step 7: Finally, register the image.

B. IMAGE FUSION OUTCOME
Brain images were chosen, one each from the corona radiata and the cuneus. MRI scans of these two segments were taken, both T1- and T2-weighted [11]. On a T2-weighted scan, water- and fluid-containing tissues are bright and fat-containing tissues are dark; for a T1-weighted scan it is just the opposite. The fused outcomes of these images are given below; PET scans of the same brain areas were then taken and fused as well. Both were taken from the transaxial view. The results are given below.
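The paper does not specify the fusion rule that produced the fused images shown next. Purely as a hedged illustration, the sketch below applies a common wavelet-domain rule (average the approximation coefficients, keep the detail coefficient of larger magnitude) using PyWavelets; the wavelet name, the rule and the function names are assumptions, not taken from the paper.

```python
import numpy as np
import pywt

# Hedged illustration of a simple wavelet-domain fusion rule (not the paper's
# method): average the low-frequency approximation bands and, for each detail
# band, keep the coefficient with the larger magnitude.

def fuse_images(img_a, img_b, wavelet="db2"):
    """Fuse two equally sized, co-registered grayscale images."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    cA = 0.5 * (cA_a + cA_b)                          # average approximations
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))
    return pywt.idwt2((cA, details), wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = rng.random((128, 128))
    print("fused image shape:", fuse_images(a, b).shape)
```

A curvelet-domain analogue, in the spirit of [2], would apply a similar rule to the curvelet coefficients instead.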

I. IMAGE FUSION OUTCOME OF MRI T1-WEIGHTED CORONA RADIATA WITH PET CORONA RADIATA

Fig. 4: MRI T1-weighted
Fig. 5: PET
Fig. 6: Fused image

II. IMAGE FUSION OUTCOME OF MRI T2-WEIGHTED CORONA RADIATA WITH PET CORONA RADIATA

Fig. 7: MRI T2-weighted
Fig. 8: PET
Fig. 9: Fused image

III. IMAGE FUSION OUTCOME OF MRI T1-WEIGHTED CUNEUS WITH PET CUNEUS

Fig. 10: MRI T1-weighted
Fig. 11: PET
Fig. 12: Fused image

IV. IMAGE FUSION OUTCOME OF MRI T2-WEIGHTED CUNEUS WITH PET CUNEUS

Fig. 13: MRI T2-weighted
Fig. 14: PET
Fig. 15: Fused image

Table I: Threshold values for the wavelet transform
Level | MI value, similar brain image registration | MI value, dissimilar brain image registration
1 | 1.7456 | 3.09845
2 | 1.7720 | 3.26423
3 | 1.7642 | 3.26423
4 | 1.7864 | 3.08064
Bounds (similar): upper 1.79, lower 1.74. Bounds (dissimilar): upper 3.3, lower 3.0.

Table II: Threshold values for the curvelet transform
Level | MI value, similar brain image registration | MI value, dissimilar brain image registration
1 | 1.9625 | 2.7346
2 | 1.9877 | 2.8104
3 | 1.9873 | 2.8713
4 | 1.8956 | 2.7726
Bounds (similar): upper 1.98, lower 1.80. Bounds (dissimilar): upper 2.88, lower 2.80.

6. PERFORMANCE EVALUATION
MRI images of the cuneus were registered against PET images of the corona radiata, and the corona radiata was also registered against its own PET scan. The mutual information values were calculated for the data sets of similar corona radiata images and similar cuneus images, as well as for corona radiata images registered against cuneus images; the values are given in Tables I and II.
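The evaluation below applies different MI thresholds and counts false positives and false negatives. As a minimal hedged sketch of that bookkeeping (the function, its arguments and the example values are assumptions, not the authors' code; how the yes/no decision is derived from the MI value and the threshold is left abstract here):

```python
# Hedged sketch (not the authors' code) of Type 1 / Type 2 error counting:
# each registration trial yields a yes/no decision plus a ground-truth label
# saying whether the image pair should actually register.

def matching_error_percentages(detected, should_register):
    """Return (false-positive %, false-negative %) over a set of trials."""
    fp = sum(d and not t for d, t in zip(detected, should_register))   # Type 1
    fn = sum(t and not d for d, t in zip(detected, should_register))   # Type 2
    n = len(detected)
    return 100.0 * fp / n, 100.0 * fn / n

# Example with made-up decisions and labels:
print(matching_error_percentages([True, False, True, False],
                                 [True, True, False, False]))   # -> (25.0, 25.0)
```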

Statistical Type 1 and Type 2 errors were then selected as the metric for matching error. A Type 1 error, also known as an alpha error or false positive, means that results are detected which actually should not be detected: here, brain images were registered against non-brain images of the same size and structure, which gave false positives [12] (i.e., an attempt to register these images against each other should flag an error). A Type 2 error, also known as a beta error or false negative, means that results which should be detected are not detected: here, images that should be registered are not registered because of the chosen threshold value. Hence, for both methods, different threshold values were applied and the numbers of false positives and false negatives were counted. On the basis of these observations, the percentages of false positives and false negatives were calculated [13]. A graphical representation is given below.

Fig. 8: Matching error evaluation using curvelet (percentage of false positives and false negatives versus threshold value)

Fig. 9: Matching error evaluation using wavelet (percentage of false positives and false negatives versus threshold value)

7. CONCLUSION
From the study reported in this paper, it is found that complex wavelets such as curvelets outperform the wavelet family in medical imaging and yield good alignment results. Curvelets are very useful and have found a fitting application in medical imaging. More methods using complex wavelets can be investigated, which may open a new direction of research in the field of medical image registration. Medical images contain many curves, phase information and textured information, so the use of curvelets produces better output than other mother wavelets without giving up the good qualities of the previously described methods, such as robustness, accuracy and multimodality. Future work is directed at optimizing the similarity metric MI using a particle swarm optimization algorithm.

REFERENCES:
[1] M. N. Safran, M. Freiman, M. Werman, L. Joskowicz, "Curvelet Based Sampling for Accurate and Efficient Multi Modal Image Registration", The Hebrew University of Jerusalem, Israel, 2009.
[2] F. E. Ali, I. M. El-Dokany, A. A. Saad, and F. E.-S. Abd El-Samie, "Curvelet fusion of MR and CT images", Progress In Electromagnetics Research C, Vol. 3, 215-224, 2008.
[3] M. Bhattacharya, A. Das, "Multi resolution medical image registration using maximization of mutual information & optimization by genetic algorithm", Nuclear Science Symposium Conference Record, 2007.
[4] Hanling Zhang, Fan Yang, "Multimodality Medical Image Registration Using Hybrid Optimization Algorithm", 2008 International Conference on BioMedical Engineering and Informatics, ISBN: 978-0-7695-3118-2, 2008.
[5] http://medical-dictionary.thefreedictionary.com/corona+radiata
[6] http://medical-dictionary.thefreedictionary.com/ceneus
[7] Emmanuel Candes, Laurent Demanet, David Donoho and Lexing Ying, "Fast Discrete Curvelet Transforms", SIAM Multiscale Modeling & Simulation, 2006.
[8] Moyan Xiao, Zhibiao He, "Multisensor data fusion based on the second generation curvelet transform", Proceedings of SPIE, the International Society for Optical Engineering, 2007.
[9] C. Markett, M. Rosenblum and J. Rovnyak, "A Plancherel theory for Newton spaces", Integral Equations and Operator Theory, 9, no. 6, 831-862, 1986.
[10] Uwe Aickelin, "A Pyramidal Genetic Algorithm for Multiple-Choice Problems", Proceedings of the Annual Operational Research Conference 43, Bath, 2001.
[11] Q. Zhang, Y. Wang, J. Yu, and S. Yang, "Multimodal Medical Image Registration using Geometric Flow and Gabor Filter", BioMED, 2008.
[12] D. W. Repperger, A. R. Pinkus, J. A. Skipper, R. Woodyard, "Studies on Image Fusion Techniques for Dynamic Applications", Proceedings of the IEEE National Aerospace and Electronics Conference (NAECON), ISSN: 7964-0977, 2008.
[13] Rhoda Baggs, Dan E. Tamir, "Non-Rigid Image Registration", Proceedings of the Twenty-First International FLAIRS Conference, 2008.
