IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014),May 09-11,2014,Jaipur,India

Medical Image Fusion using Content Based Automatic Segmentation

Ch.Hima Bindu
ECE Department, QISCET, Ongole, Andhra Pradesh, India. Email: [email protected]

Dr.K.Veera Swamy
Principal, ECE Department, QISCET, Ongole, Andhra Pradesh, India. Email: [email protected]

Abstract- Image fusion is a process of combining complementary information from multi modality images of the same patient into a single image; the resultant image is therefore more informative than any of the individual images alone. In this paper, a novel feature level image fusion method is proposed. In feature level fusion, source images are segmented into regions, and features like pixel intensities, edges or texture are used for fusion. Feature level, region based fusion is more meaningful than pixel based fusion methods. The proposed fusion method contains three steps. Firstly, the multimodal images are segmented into regions using an automatic segmentation process. Secondly, the images are fused according to a region based fusion rule. Finally, the regions are merged together to acquire the final fused image. The performance of the proposed method is evaluated with fusion symmetry and peak signal to noise ratio, both quantitatively and qualitatively.

Keywords-Image fusion; multimodal images; segmentation; Region Correlation Coefficient (RCC).

I. INTRODUCTION

Medical imaging is the technique and process used to create images of the human body for clinical purposes using different medical scanners. These scanned images provide information to radiologists and physicians for diagnosing disorders and diseases. There are several types of medical scanners that provide specific imaging details of the body. A Computed Tomography (CT) scanner provides images of bone, brain, lung and other areas of the body; a Magnetic Resonance Imaging (MRI) scanner reveals soft tissue irregularities; and a PET (Positron Emission Tomography) scanner provides information on functionality instead of anatomy [5,7]. The MRI and CT scan images provide structural and anatomical information of the body, whereas the PET image provides functional information. Hence no single one of these images is able to carry all relevant information [13]. Image fusion assists physicians in extracting features that may not normally be visible in images produced by a single modality. It is defined as the merging of multiple scanner images to provide additional information. This is especially useful in diagnosing and treating diseases, and further analysis with the fused image is more accurate [7]. Image fusion can be performed at three levels: pixel level, feature level and decision level [4]. Pixel level fusion deals with the information associated with each pixel, and the fused image is obtained from the corresponding pixel values of the source images. In feature level fusion, source images are segmented into regions, and features like pixel intensities, edges or texture are used for fusion. Decision level fusion is a high level fusion which is based on statistics, voting, fuzzy logic, prediction, heuristics, etc. The rest of the paper is organized as follows: image fusion is explained in Section II; Section III explains the proposed fusion method; experimental results and performance evaluations are given in Section IV; finally, conclusions of the work are given in Section V.

978-1-4799-4040-0/14/$31.00 ©2014 IEEE

II. IMAGE FUSION

Image fusion is the process of combining two or more images of a scene into a single fused image which is more informative and more suitable for visual perception or computer processing. The benefits of using image fusion are wider spatial and temporal coverage with decreased uncertainty, improved reliability and increased robustness of system performance. There are two basic requirements for image fusion [1,9]: first, the fused image should possess all possible relevant information contained in the source images; second, the fusion process should not introduce any artifact or noise into the fused image. Image fusion methods are categorized depending on the fusion process and application as follows [17].

- Multi View Fusion
- Multi Focus Fusion
- Multi Temporal Fusion
- Multi Modal Fusion

A. Multi Modal Fusion

Multi modal fusion is the fusion of images with different modes rather than different viewing directions. It aims at integrating information from multiple modality images to obtain a more complete and accurate description of the same object [17]. This method is also called multi source image fusion. Images of different modalities include PET, CT, MRI, visible, infrared, ultraviolet etc. Due to differences in imaging principle, images of different modes emphasize different characteristic information of the scene. The common methods under this category are:


- Weighted averaging (pixel wise)
- Fusion in transform domains
- Object level fusion
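As an illustration of the simplest of these categories, pixel-wise weighted averaging can be sketched in a few lines. This is a generic sketch in Python/NumPy, not the paper's implementation; the function name and the equal default weights are illustrative choices:

```python
import numpy as np

def weighted_average_fusion(img_a, img_b, w_a=0.5):
    """Pixel-wise weighted averaging of two registered source images.

    img_a, img_b : 2-D arrays of identical shape (already co-registered).
    w_a          : weight given to img_a; img_b receives (1 - w_a).
    """
    if img_a.shape != img_b.shape:
        raise ValueError("source images must be registered to the same size")
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return w_a * a + (1.0 - w_a) * b

# Example: fuse two tiny synthetic "modalities" with equal weights,
# so each fused pixel is the mean of the two source pixels.
mri = np.array([[100, 200], [50, 250]], dtype=np.float64)
ct  = np.array([[0, 100], [150, 50]], dtype=np.float64)
fused = weighted_average_fusion(mri, ct, w_a=0.5)
print(fused)
```

Weighted averaging treats every pixel identically, which is exactly the limitation (noise sensitivity, loss of contrast) that motivates the region-based approach developed in Section III.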



III. PROPOSED METHOD

In this paper, a variety of input multimodal images are considered. These are downloaded from [18] and are by default registered with each other. The image fusion process should preserve all the salient features of the source images. The images are first segmented with 3-D Doctor software; the automatically segmented regions then follow a fusion rule that combines the information of the source images [8].
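3-D Doctor is a commercial tool, so its segmentation cannot be reproduced here. As a crude illustrative stand-in only, the following sketch partitions an image into regions by quantile-based intensity bands; all it shares with the paper's segmenter is its output format, an integer label map with one label per pixel, which is what the region-based fusion rule consumes:

```python
import numpy as np

def intensity_band_segmentation(img, n_bands=3):
    """Toy automatic segmenter: split an image into n_bands regions by
    intensity quantiles and return an integer label map (0..n_bands-1).

    NOT the 3-D Doctor segmentation used in the paper; it only produces
    the same kind of region map that the fusion rule expects.
    """
    img = np.asarray(img, dtype=np.float64)
    # Interior quantile edges split the intensity range into n_bands groups.
    edges = np.quantile(img, np.linspace(0, 1, n_bands + 1)[1:-1])
    return np.digitize(img, edges)

img = np.array([[0, 10, 20], [30, 40, 50], [60, 70, 80]], dtype=np.float64)
labels = intensity_band_segmentation(img, n_bands=3)
print(labels)
```

Any segmenter that yields such a label map, shared by both registered source images, can be slotted into the steps that follow.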

Fig. 1. Flow chart of the proposed method.

A. Image analysis based on spatial weightage RCC

The regions of multimodal medical images are distinguished, using some metric, into those that need to be spatially enhanced and those whose spectral characteristics are to be preserved. The purpose of region division is to divide the multi-modal image into regions that need spatial enhancement and regions whose spectral characteristics need preserving [14-16]. Simple pixel operators cannot represent such regions effectively. The Region Correlation Coefficient (RCC) can be defined as follows [16]:

RCC_A,B(R_i) = sum_{(x,y) in R_i} (A(x,y) - u_A(R_i)) (B(x,y) - u_B(R_i)) / sqrt( sum_{(x,y) in R_i} (A(x,y) - u_A(R_i))^2 * sum_{(x,y) in R_i} (B(x,y) - u_B(R_i))^2 )        (1)

In equation (1),

u_L(R_i) = (1/N) sum_{(x,y) in R_i} L(x,y)        (2)

where N represents the number of pixels in region R_i, L = A, B denotes the MRI-CT images or MRI-PET (intensity component) images respectively, and u_L(R_i) is the mean gray value in region R_i.

RCC_A,B(R_i) is used as a spatial weightage to measure the similarity between the images in region R_i [1]. If the value of RCC_A,B(R_i) is small, there is a larger difference between the input images in that region. RCC_A,B(R_i) can therefore mark the regions that need their spectral characteristics kept and their spatial resolution enhanced, and it can be used as a guideline for decision making during the fusion process.

B. Fusion Process

The entire proposed fusion process is explained in detail as follows and shown in Fig. 1.

Step 1: The multimodal source images A and B are automatically segmented into several regions using 3-D Doctor software.

Step 2: The Region Correlation Coefficient of each region of the segmented A and B is computed using equations (1) and (2).

Step 3: Compare the region correlation coefficients of the corresponding regions of the two source images to decide which should be used to construct the fused image. The region based fusion rule is defined as

R_i^F = R_i^A,                 if RCC_A,B(R_i) < T
      = R_i^B,                 if RCC_A,B(R_i) > T
      = (R_i^A + R_i^B) / 2,   else                          (3)

Here R_i^A, R_i^B are regions of the input images A and B; Step 3 is repeated for all i regions of A and B. T is a threshold value that varies from 0 to 1; based on the value of T, the visibility of the fused image can be changed.

Step 4: Merge all the selected regions to reconstruct the final fused image.
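The region analysis and selection steps can be sketched as follows. This is an illustrative reimplementation, not the authors' code; it assumes RCC is the normalised correlation over a region, that both registered images share one label map, and that a zero-variance region defaults to RCC = 0. Which branch of the rule takes image A and which takes image B is also an assumption here:

```python
import numpy as np

def region_correlation(a, b, mask):
    """Equation (1)-style RCC of images a, b inside a boolean region mask.

    Zero-variance regions fall back to 0.0 (maximal dissimilarity)
    by convention in this sketch.
    """
    ra, rb = a[mask], b[mask]
    ua, ub = ra.mean(), rb.mean()          # equation (2): region means
    num = np.sum((ra - ua) * (rb - ub))
    den = np.sqrt(np.sum((ra - ua) ** 2) * np.sum((rb - ub) ** 2))
    return num / den if den > 0 else 0.0

def region_fusion(a, b, labels, T=0.5):
    """Per-region selection/averaging rule in the spirit of equation (3):
    RCC < T takes the region from A, RCC > T from B, otherwise average.
    """
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    fused = np.zeros_like(a)
    for i in np.unique(labels):            # Steps 2-3 for every region R_i
        m = labels == i
        rcc = region_correlation(a, b, m)
        if rcc < T:
            fused[m] = a[m]
        elif rcc > T:
            fused[m] = b[m]
        else:
            fused[m] = (a[m] + b[m]) / 2.0
    return fused                           # Step 4: merged label-wise regions

# Two regions: the top row is anticorrelated (RCC = -1, take A),
# the bottom row is perfectly correlated (RCC = +1, take B).
a = np.array([[10.0, 20.0], [5.0, 7.0]])
b = np.array([[20.0, 10.0], [6.0, 8.0]])
labels = np.array([[0, 0], [1, 1]])
print(region_fusion(a, b, labels, T=0.5))
```

Because selection happens per region rather than per pixel, a noisy pixel cannot flip the decision for its whole neighbourhood, which is the robustness argument made for the method.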

IV. EXPERIMENTAL RESULTS

This section gives a visual and quantitative presentation of the proposed method. Experiments have been performed over two different modality pairs: CT-MRI and MRI-PET images. These medical imaging modalities are complementary in nature: CT images are sensitive to bone and hard tissue, MRI images are more informative about soft tissues, and PET images provide information on functionality. Fusing them provides a single image which is more useful for diagnostic purposes in the case of diseases, tumours, etc. The fused outputs of the proposed method and the segmented results are shown in Figs. 2-3. Fusion of these images carries information of both bone tissues and functional information, which cannot be perceived from the individual CT, MRI and PET images alone.


It is a well identified fact that image quality metrics directly reflect the visual quality of images; hence both the visual representation and the quantitative assessment of the fused images have to be considered. The proposed method is evaluated with Fusion Symmetry (FS) and Peak Signal to Noise Ratio (PSNR). These are comparatively measured against existing methods such as the basic pixel average method and the region based spatial frequency method [6,2].


A. Performance Evaluation

i. Fusion Symmetry

Fusion symmetry is a measure of the symmetry of the fused image with respect to the source images and is given by [9-10]

FS = | I_AF / (I_AF + I_BF) - 0.5 |        (4)

where I_AF and I_BF are the mutual information between the respective source images and the fused image. A low value of fusion symmetry indicates the goodness of the fusion algorithm.
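Equation (4) can be computed from histogram-based mutual information. The sketch below is an illustrative implementation; the 32-bin joint histogram is an assumed discretisation, not a choice stated in the paper:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information (in bits) between two images via a joint histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_symmetry(a, b, fused, bins=32):
    """Equation (4): FS = |I_AF / (I_AF + I_BF) - 0.5|; lower is better."""
    i_af = mutual_information(a, fused, bins)
    i_bf = mutual_information(b, fused, bins)
    return abs(i_af / (i_af + i_bf) - 0.5)

rng = np.random.default_rng(0)
src = rng.random((32, 32))
# A fused image identical to both sources shares equal information with each,
# so FS is (numerically) zero, the ideal value.
print(fusion_symmetry(src, src, src))
```

FS lies in [0, 0.5]: 0 means the fused image draws equally on both sources, 0.5 means it reproduces only one of them.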

Fig. 2. MRI-CT fusion results (a) Source image (MRI), (b) Source image (CT), (c-d) Automatic segmented results using 3-D Doctor Software (e) Fused image using Proposed method.

ii. Peak Signal to Noise Ratio (PSNR)

PSNR (Peak Signal to Noise Ratio) is a metric for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. It is used to measure the quality of reconstructed images.
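The standard dB-scale computation of PSNR against a reference image can be sketched as follows; the peak value of 255 assumes 8-bit images, which the paper does not state explicitly:

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """PSNR in dB between an ideal reference image and a fused image."""
    ref = np.asarray(reference, dtype=np.float64)
    fus = np.asarray(fused, dtype=np.float64)
    mse = np.mean((ref - fus) ** 2)        # mean squared error over all pixels
    if mse == 0:
        return float("inf")                # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 5.0                          # constant error of 5 -> MSE = 25
print(round(psnr(ref, noisy), 2))
```

Higher PSNR means the fused image deviates less from the reference, which is how the tabulated comparisons below should be read.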


PSNR = 10 log10 [ 255^2 / ( (1/(M N)) sum_{i=1}^{M} sum_{j=1}^{N} ( R(i,j) - I_F(i,j) )^2 ) ]        (5)

where R(i,j) and I_F(i,j) are the pixel values of the ideal reference and the obtained fused image, respectively, and M and N are the dimensions of the images [7].

V. CONCLUSION


Fig. 3. MRI-PET fusion results (a) Source image (MRI), (b) Source image (PET), (c-d) Automatic segmented results using 3-D Doctor Software (e) Fused image using Proposed method.

Medical image fusion plays a significant role in medical diagnostics. It is better suited to applications where redundancy is not a major issue. In this paper, a new region based feature level multimodal image fusion method is proposed. The proposed method processes regions rather than pixels, and is thereby able to overcome drawbacks of pixel level image fusion methods such as sensitivity to noise, blurring effects and mis-registration. Using the region correlation coefficient as a spatial weightage reduces the complexity of fusion and increases its reliability. The basic idea of the proposed method is to achieve a less complex fusion than transform domain approaches. The results are compared with existing methods such as pixel average and region based spatial frequency fusion. The table shows that the performance of the proposed method is better than that of the compared methods, and the output fused images demonstrate the quality of the proposed method visually. This method may be extended with various spatial weightage fusion rules to get better results with various performance measures.

TABLE: PERFORMANCE MEASURES OF VARIOUS FUSION METHODS

Type of image | Algorithm                          | Fusion symmetry | PSNR
MRI-CT        | Pixel average                      | 0.965           | 14.85
MRI-CT        | Region based spatial frequency [6] | 0.94            | 22.97
MRI-CT        | Proposed method                    | 0.914           | 24.98
MRI-PET       | Pixel average                      | 0.96            | 21.94
MRI-PET       | Region based spatial frequency [6] | 0.97            | 25.89
MRI-PET       | Proposed method                    | 0.91            | 42.71

ACKNOWLEDGMENT

This research work is carried out with the facilities of QIS College of Engineering and Technology, Ongole, under the Research Promotion Scheme (RPS) grant by the AICTE, India, with ref no: 20/AICTE/RIFD/RPS(POLICY-III)55/2012-13. The first and second authors would like to express their gratitude to the QIS management, Dr.K.Satya Prasad and Dr.S.Srinivasa Kumar for their constant help in the successful completion of this work.

REFERENCES

[1] Sabalan Daneshvar, Hassan Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion 11, 2010, pp. 114-123.
[2] Kiran Parmar, Rahul Kher, "A comparative analysis of multimodality medical image fusion methods," 2012 Sixth Asia Modelling Symposium, IEEE Computer Society, 2012, pp. 93-97.
[3] A.A. Goshtasby, S. Nikolov, "Image fusion: advances in the state of the art," Guest editorial, Information Fusion 8 (2) (2007) 114-118.
[4] V. Petrovic, "Multisensor pixel-level image fusion," PhD dissertation, University of Manchester, 2001.
[5] Roopa Maddali, Ch.Hima Bindu, Dr.K.Satya Prasad, "Discrete wavelet transform based medical image fusion using spatial frequency technique," International Journal of Systems, Algorithms & Applications, Issue ICRAET 2012, vol. 2, pp. 44-47.
[6] S.T. Li, B. Yang, "Multifocus image fusion using region segmentation and spatial frequency," Image and Vision Computing 26 (7), 2008, pp. 971-979.
[7] Ch.Hima Bindu, Dr.K.Satya Prasad, "Performance analysis of multi source fused medical images using multi resolution transform," International Journal of Advanced Computer Science and Applications, vol. 3, no. 10, 2012, pp. 54-62.
[8] N.V. Rao, Veeraswamy K, Hima Bindu Ch, "Image fusion using manual segmentation," International Journal of Emerging Trends & Technology in Computer Science, vol. 2, issue 6, Nov 2013, pp. 103-106.
[9] Radika V, Veera Swamy K, "Uniform based approach for image fusion," ICECCS 2012, CCIS 305, pp. 186-194.
[10] G. Qu, D. Zhang, P. Yan, "Information measure for performance of image fusion," Electronics Letters 38 (7) (2002) 313-315.
[11] C. Xydeas, V. Petrovic, "Objective image fusion performance measure," Electronics Letters 36 (4) (2000) 308-309.
[12] J.K. Aggarwal, "Multi sensor image fusion for computer vision," Springer-Verlag, Berlin, Germany, 1993.
[13] Ch.Hima Bindu, Dr.K.Satya Prasad, "MRI-PET medical image fusion by combining DWT and contourlet transform," Springer ITC 2012, LNEE, Aug 2012, pp. 124-129.
[14] G. Piella, "A region-based multiresolution image fusion algorithm," ISIF Fusion 2002 conference, Annapolis, vol. 28, no. 1, pp. 1557-1564, 2002.
[15] Wu Yan, Yang Wan-hai, Li Ming, "Fusion algorithm of multispectral and high-resolution panchromatic images," Acta Photonica Sinica, vol. 32, no. 2, pp. 174-178, 2003.
[16] Chuanqi Ye, Xiliang Liu, Zhiyong Zhang, Ping Wang, "Multispectral and panchromatic image fusion based on region correlation coefficient in nonsubsampled contourlet transform domain," IEEE Transactions, vol. 5, issue 1, pp. 36-48, 2011.
[17] Jan Flusser, Filip Sroubek, Barbara Zitova, "Image fusion: principles, methods, and applications," EUSIPCO 2007, pp. 1-60.
[18] http://www.cma.mgh.harvard.edu/ibsr.

AUTHORS PROFILE

Ch.Hima Bindu is currently working as Professor in the ECE Department, QIS College of Engineering & Technology, Ongole, Andhra Pradesh, India. She received her Ph.D. from JNTUK, Kakinada, and her M.Tech. from the same institute. She has 11 years of experience in teaching undergraduate and postgraduate students. She has published 11 research papers in international journals and more than 10 research papers in national and international conferences. Her research interests are in the areas of image segmentation, image feature extraction and signal processing.


K.Veera Swamy is currently Professor in the ECE Department and Principal of QIS College of Engineering and Technology, Ongole, AP, India. He received his Ph.D. from JNTUK, Kakinada. He has fifteen years of experience in teaching undergraduate and postgraduate students, and has published 66 papers in national and international conferences and journals. Presently he is guiding 5 students in their Ph.D. work. His research interests are in the areas of image compression, image watermarking, face recognition, CBIR, and networking tools.