M. Moorthi, R. Amutha, IntJScRT, 2013, vol. 1, issue 2, pp. 1-9
AN IMPROVED CODING TECHNIQUE FOR COMPRESSION OF MEDICAL IMAGES IN TELEMEDICINE

M. Moorthi (1), R. Amutha (2)

(1) Research Scholar, Department of Electronics and Communication Engineering, Sri Chandrasekharendra Saraswathi Viswa Mahavidyalaya University, Kanchipuram, India. E-mail: [email protected]
(2) Professor, Department of Electronics and Communication Engineering, SSN College of Engineering, Chennai, India. E-mail: [email protected]
Abstract: The proposed work presents a new compression technique for the coding of medical images. The technique exploits the fact that the contourlet transform is a directional transform capable of capturing contours and fine details in medical images. Wiener filtering and the contourlet transform are first applied to the medical image, and adaptive multistage vector quantization (AMSVQ) is then applied to the corresponding subbands of contourlet coefficients. The AMSVQ implementation reduces both the search time and the codebook complexity; the number of code vectors is calculated adaptively, depending on the dynamic range of the input image. Experiments show that the proposed method represents edges better than wavelets because of its anisotropy and directionality, and is therefore well suited to multiscale image representation. The proposed method achieves better results in terms of compression ratio (CR), peak signal to noise ratio (PSNR), and compression and decompression times for different medical images.

Keywords: Compression, Decompression, Contourlet transform, PSNR
I. Introduction

Digital images are very important in many application areas such as internet browsing, medical sciences, astronomy and remote sensing. Once personal computers gained the capacity to display sophisticated pictures as digital images, people started to seek methods for efficient representation of these digital pictures in order to simplify their transmission and save disk space. Image compression has a vital role in medical picture archiving and communication systems (PACS) [1]. Most modern medical data is represented as images or other types of digital signals, such as ultrasound, MRI, computed tomography (CT) and positron emission tomography (PET) [2], [3]. Several compression algorithms, such as the JPEG standard [4] for still images and the MPEG standard [5] for video, are based on the DCT [6]. However, the EZW [7], SPIHT [8], SPECK [9] and EBCOT [10] algorithms and the current JPEG 2000 standard [11] are based on the discrete wavelet transform (DWT)
[12]–[14]. N. M. S. Rahim et al. [15] proposed a method for image compression using an improved feature map finite vector quantization in 2002. A model using a modified vector quantization for compressing medical images was designed by Jayantakumar et al. [16] in 2008. The DWT is capable of solving the blocking effect introduced by the DCT, and the correlation between neighbouring pixels is also reduced; the DWT gives a multiscale sparse representation of the image.

One of the properties of the contourlet transform is that it preserves edges and fine details in the image. In the proposed scheme the encoding complexity is lower than that of tree-structured quantization. The wavelet is a useful denoising tool because of its sparsity, locality and multiresolution properties; however, the commonly used separable wavelet transforms are constructed from tensor products of one-dimensional filter banks. Together with the curvelet, wedgelet, etc., the contourlet is considered to be the new generation of wavelets in two and higher dimensions.

The contourlet transform enables the representation of images with a large degree of sparsity: for most images, a large fraction of the image energy is captured by very few contourlet coefficients. Capitalizing on this property, the contourlet transform can be applied to a wide range of image processing tasks, such as denoising, texture classification and fusion. Furthermore, it has been applied in the field of synthetic aperture radar (SAR) image processing as well as medical image processing, and has shown its potential. The size of the captured data is becoming ever larger, which causes considerable problems, especially in the fields of teleradiology and telemedicine.

II. Proposed Method

The proposed method uses the contourlet transform and adaptive multistage vector quantization followed by a Huffman coder, as shown in Figure 1.

Figure 1. Block diagram of the proposed method. Encoder: Input Image -> Pre-processing -> Contourlet Transform -> Adaptive Multistage Vector Quantization -> Encoding -> Compressed Image. Decoder: Noise Removal and Decoding -> Inverse Adaptive Multistage Vector Quantization -> Inverse Contourlet Transform -> High-Quality Decompressed Image.

The contourlet transform of the input medical image is taken to minimize the correlation present in the input image. Different pyramidal and directional filters are used for decomposition. Both the transform coefficients and the residual coefficients are vector quantized; the main difference is that the transform coefficients are adaptively vector quantized in a multistage manner, while the residual coefficients are simply vector quantized. The quantized coefficients are losslessly coded using a static Huffman code. In
decoding, the decoder basically performs the reverse of the above steps.

A. Noise Removal

Pre-processing should be performed as the first step in order to make the image noise free. An adaptive Wiener filtering technique is used for noise removal.

B. The Contourlet Transform

The Laplacian pyramid in the contourlet filter bank uses orthogonal filters and downsampling by 2 in each dimension, M = diag(2, 2).
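The adaptive Wiener pre-filtering step above can be sketched with SciPy's built-in filter. The paper does not state a window size, so the 5x5 neighbourhood below is an assumption:

```python
import numpy as np
from scipy.signal import wiener

def denoise(image, window=(5, 5)):
    """Adaptive Wiener filter: estimates the local mean and variance in
    each window and attenuates pixels where the local variance is close
    to the noise variance (estimated from the image when not supplied)."""
    return wiener(image.astype(np.float64), mysize=window)

# Example on a synthetic noisy ramp image
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
smooth = denoise(noisy)
# The filtered image should be closer to the clean ramp than the noisy one.
```

The filter adapts per pixel: in flat regions it averages aggressively, while near strong gradients (high local variance) it leaves the data largely untouched, which is why it suits edge-rich medical images.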
Traditional image enhancement methods lose detailed geometric information of the image and tend to amplify noise, and existing enhancement methods cannot confine the directional edge information of the image. The contourlet transform and the SVD transform are used to overcome these problems. The contourlet expansion is composed of basis images oriented in various directions at multiple scales, with flexible aspect ratios.

The contourlet transform is a directional transform [5, 6]. The contourlet filter bank is a double iterated filter bank structure which decomposes the image into directional subbands at multiple scales: the image is first decomposed by the Laplacian pyramid (LP, a multiscale filter bank), and the resulting band-pass images are then passed through a directional filter bank (DFB). The flow graph of the contourlet transform is shown in Figure 2a.

Figure 2a: Block diagram of the contourlet transform

The scaling function is given in (1), the continuous function in (2), and the shift-invariant subspace in (3). Let f be the input image; the outputs after the LP stage are J band-pass images and the low-pass image at scale L, obtained from the inner products of f with the scaling function. That is, the image is decomposed by the LP into a coarser image and a detail image, as in (4) and (5). Each band-pass image is further decomposed by an Lj-level DFB into band-pass directional images, and the image is decomposed into coefficients by the discrete contourlet transform, as in (6) and (7). The directional (synthesis) filter is represented in (8), and the overall sampling matrix of the DFB is given as
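The LP stage described above (downsampling by 2 in each dimension, M = diag(2, 2)) can be illustrated with a minimal Laplacian-pyramid sketch. The Gaussian smoothing kernel and the two-level setting are my assumptions; the paper's exact orthogonal filter pair is not specified:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lp_decompose(image, levels=2):
    """Laplacian pyramid: one band-pass (detail) image per level plus a
    final coarse image. The detail at each scale is the difference between
    the current image and its smoothed, downsampled-then-upsampled
    approximation."""
    current = image.astype(np.float64)
    bands = []
    for _ in range(levels):
        low = gaussian_filter(current, sigma=1.0)
        coarse = low[::2, ::2]                          # downsample by M = diag(2, 2)
        up = np.repeat(np.repeat(coarse, 2, 0), 2, 1)   # crude nearest-neighbour upsampling
        bands.append(current - gaussian_filter(up, sigma=1.0))
        current = coarse
    return bands, current

bands, coarse = lp_decompose(np.full((64, 64), 100.0), levels=2)
# A constant image yields (near-)zero detail bands and a constant coarse image.
```

In the full contourlet transform each of these band-pass images would then be fed to the directional filter bank; the sketch covers only the multiscale half of the filter bank.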
(9).

This transform simultaneously reduces image noise and retains strong edge information corresponding to relevant features. The basis functions resemble a local Radon transform and are called radonlets. The DFB is implemented via an l-level binary tree decomposition that leads to 2^l subbands with wedge-shaped frequency partitioning, as illustrated in Figure 2b.

Figure 2b: Resulting frequency division

The output of the Lj-level DFB, given an image, can be written as in (10). Each basis function has a support of size width x length and satisfies the parabolic scaling relation width proportional to length^2 for curves. When the DFB is applied to the approximation subspaces Vj, we obtain (11) and (12); when the DFB is applied to the detail subspaces Wj, the contourlet transform is given by (13) and (14). The indices j, k, n specify the scale, direction and location respectively, and Lj represents the number of DFB decomposition levels l at the different scales j.

C. Adaptive Vector Quantizer

The vector quantizer transforms vectors of data into indices that represent clusters of vectors; it improves the image quality by averaging the training vectors and then splitting the averaged result into a codebook with minimum distortion. The structure of the encoder consists of a cascade of VQ stages and a Huffman coder, as shown in Figure 3.

Figure 3: Block diagram of the encoder

The input vector A is quantized with the first-stage codebook, producing the first-stage code vector VQ0(A); the residual vector y0 is formed by subtracting VQ0(A) from A. The second-stage codebook is then used to quantize y0 with exactly the first-stage procedure, but with y0 used instead of A as the input. Thus, a residual vector is generated in each stage except the last and is passed to the next
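The cascade just described (quantize A, then quantize the residual y0, and so on) can be sketched as follows. The two tiny stage codebooks are hypothetical stand-ins for the paper's trained codebooks:

```python
import numpy as np

def nearest(codebook, v):
    """Index of the code vector closest to v (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))

def msvq_encode(v, codebooks):
    """Multistage VQ: each stage quantizes the residual of the previous one."""
    indices, residual = [], v.astype(np.float64)
    for cb in codebooks:
        i = nearest(cb, residual)
        indices.append(i)
        residual = residual - cb[i]   # y_k = y_{k-1} - VQ_k(y_{k-1})
    return indices, residual          # residual of the last stage = total error

def msvq_decode(indices, codebooks):
    """Reproduction = sum of the selected stage code vectors."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

stage1 = np.array([[0.0, 0.0], [10.0, 10.0]])   # coarse codebook (hypothetical)
stage2 = np.array([[0.0, 0.0], [1.0, -1.0]])    # refinement codebook (hypothetical)
idx, err = msvq_encode(np.array([11.0, 9.0]), [stage1, stage2])
approx = msvq_decode(idx, [stage1, stage2])     # -> [11.0, 9.0]
```

Because each stage only has to cover the (small) residual of the previous one, the per-stage codebooks stay small, which is the source of the search-time and codebook-complexity reduction claimed for the AMSVQ.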
[email protected]
4
M. Moorthi,R. Amutha, IntJScRT, 2013v1i2, 1-9
stage for independent quantization. The quantization algorithm is explained as follows:

1. First set M = 1 and find the centroid of the vectors by eq. (15):

Let A = (15)

where k = 1, 2, 3, ..., 16; n = 1, 2, 3, ..., 16384; j = 1, 2, 3, ..., n; and n is the number of image parts. Then apply the Linde-Buzo-Gray (LBG) algorithm.

2. Compare M with Nc, where Nc is the number of vectors in the codebook. If M equals Nc, keep Ym and terminate the algorithm; if M does not equal Nc, update the codebook, where the average value is calculated by eq. (16):

(16)

where (17)

Continue the loop until M = Nc. Finally, the quantized coefficients are coded by the Huffman coder. The decoder is shown in Figure 4.

Figure 4: Block diagram of the decoder

The decoder receives, for each stage, an index identifying the stage code vector selected, and forms the reproduction of A by summing the identified vectors. The total quantization error is the quantization residual from the last stage. The ratio of encoding complexity to storage complexity is governed by the sequential searching of the stage codebooks.

D. Entropy Coder

The entropy encoder uses a model that produces a suitable code based on the probability of each quantized value, so that the final output code stream is smaller than the input stream. The frequency of occurrence of the data items is the basic principle of Huffman coding: a minimum number of encoding bits is used to encode the frequently occurring data. A codebook is constructed for every image or group of images to store the codes for each image. For decoding, the codebook must be transmitted along with the encoded data.

III. Quality Measures

The quality of the reconstructed image is measured in terms of the mean square error (MSE) and the peak signal to noise ratio (PSNR) [25]. The MSE is often called the reconstruction error variance σq². The MSE between the original image X and the reconstructed image Y at the decoder is defined as:

σq² = MSE = (1/(M*N)) Σi,j (Xij - Yij)²   (18)

where the sum over i, j denotes the sum over all pixels in the image and M*N is the number of pixels in each image; Xij is the original image and Yij the reconstructed image. The peak signal-to-noise ratio is defined as the ratio between the signal variance and the reconstruction error variance.
The PSNR in terms of decibels (dB) is given by:

PSNR = 10 log10(255² / MSE)   (19)
Generally, when the PSNR is 40 dB or greater, the original and reconstructed images are virtually indistinguishable to the human eye.
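Equations (18) and (19), and the 40 dB rule of thumb, can be checked numerically. The peak value of 255 assumes 8-bit images:

```python
import numpy as np

def mse(x, y):
    """Eq. (18): mean squared error over all M*N pixels."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def psnr(x, y, peak=255.0):
    """Eq. (19): PSNR in dB, with peak = 255 for 8-bit images."""
    return 10.0 * np.log10(peak ** 2 / mse(x, y))

x = np.full((8, 8), 100.0)
y = x + 5.0   # a constant error of 5 gives MSE = 25
# psnr(x, y) = 10 * log10(65025 / 25), roughly 34 dB: below the 40 dB threshold.
```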
The compression ratio is calculated as:

Compression ratio = input data size / output data size   (20)
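Eq. (20) in code form; the byte counts below are purely illustrative, not measured results from the paper:

```python
def compression_ratio(input_bytes, output_bytes):
    """Eq. (20): original size divided by compressed size."""
    return input_bytes / output_bytes

# A 256 x 256 8-bit image (65536 bytes) compressed to a hypothetical 4096 bytes:
cr = compression_ratio(256 * 256, 4096)   # -> 16.0
```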
The term 'compression ratio' is used to characterize the compression capability of the system. Two time factors of a compression algorithm are also important: an algorithm with smaller compression and decompression times is more effective with respect to the time factor.

IV. Results and Conclusion

The performance of the proposed method and the existing methods is tested in terms of the Peak Signal to Noise Ratio (PSNR) and the time period. The values have been analysed on 256 x 256 medical images of different types, and the results are given in Table 1 and Table 2. The new medical image compression scheme preserves meta-information such as edges, using successive approximation quantization of the image, and achieves a PSNR of around 63 dB.

Fig 5a) Comparison of PSNR values with paper [15]

Table 1. Compression and decompression times for different medical images

Medical images        Compression Time (sec)   Decompression Time (sec)
1. Eye image          0.97                     0.07
2. MRI brain image    1                        0.07
3. MRI knee image     0.99                     0.072
4. MRI heart image    0.98                     0.072
5. MRI spinal image   0.992                    0.072
6. CT brain image     0.967                    0.082
Fig 5b) Comparison of PSNR values with paper [16]

Figures 5a and 5b show the comparison of PSNR values with the reference papers [15] and [16]. The PSNR obtained using the contourlet transform is better than that of the wavelet transform. Table 1 shows the time period, which varies greatly between the proposed and existing methods: the proposed method takes only half the time for compression and decompression as compared with the existing method. Comparison with other existing schemes showed that the proposed scheme achieves competitive image quality in terms of PSNR. The
original, the contourlet transformed image and the reconstructed images are shown in Figs. 6a, 6b and 6c.

Fig 6a) Input medical images, 6b) Contourlet transformed images, 6c) Reconstructed images

It can be used in a mobile image transmission system which is based on the IS-54 digital cellular standard.

REFERENCES

[1] A. Bruckmann, "Selective medical image compression techniques for telemedicine and archiving applications", Computers in Biology and Medicine, vol. 30.
[2] D. M. Levin, C. A. Pellizzari, G. T. Y. Chen and C. Cooper, "Retrospective geometric correlation of MRI, CT and PET images", Radiology, vol. 169, pp. 817-823, 1988.
[3] J. I. Fabrikant and R. P. Levy, "Image correlation of MRI and CT in treatment planning for radiosurgery of intracranial vascular malformations", Int. J. Radiat. Oncol. Biol. Phys., vol. 20, no. 4, pp. 881-88, 1991.
[4] G. K. Wallace, "The JPEG still picture compression standard", Communications of the ACM, vol. 34, pp. 30-44, 1991.
[5] Ji-Zheng Xu, Shipeng Li and Ya-Qin Zhang, "Three-dimensional shape-adaptive discrete wavelet transforms for efficient object-based video coding", IEEE/SPIE Visual Communications and Image Processing (VCIP) 2000, Perth, June 2000.
[6] P. Beretta, R. Prost and M. Amiel, "Optimal bit allocation for full-frame DCT coding scheme: application to cardiac angiography", SPIE Image Capture, Formatting and Display, vol. 2164, pp. 291-311, 1994.
[7] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Transactions on Signal Processing, vol. 41, pp. 3445-3462, 1993.
[8] A. Said and W. A. Pearlman, "A new, fast and efficient image codec based on set partitioning in hierarchical trees", IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, pp. 243-250, 1996.
[9] Yushin Cho, W. A. Pearlman and A. Said, "Low complexity resolution progressive image coding algorithm (progressive resolution decompression)", IEEE International Conference on Image Processing, vol. 3, pp. 49-52, 2005.
[10] D. S. Taubman, "High performance scalable image compression with EBCOT", IEEE Transactions on Image Processing, vol. 9, no. 7, pp. 1158-1170, 2000.
[11] M. Rabbani and R. Joshi, "An overview of the JPEG 2000 still image compression standard", Signal Processing: Image Communication, vol. 17, pp. 3-48, 2002.
[12] Armando Manduca, "Compressing images with wavelet/subband coding", IEEE Engineering in Medicine and Biology, pp. 639-646, September/October 1995.
[13] M. Unser, "Texture classification and segmentation using wavelet frames", IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549-1560.
[14] Ed Chiu, Jacques Vaisey and M. Stella Atkins, "Wavelet-based space-frequency compression of ultrasound images", IEEE Transactions on Information Technology in Biomedicine, vol. 5, no. 4, pp. 300-310, December 2001.
[15] N. M. S. Rahim and T. Yahagi, "Image coding using an improved feature map finite vector quantization", IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E85-A, no. 11, pp. 2453-2458, Nov. 2002.
[16] Jayantakumar Debnath, Newaz Muhammad Syfur Rhim, Wai-keung Fung and T. Yahagi, "A modified vector quantization based image compression technique using wavelet transform", IEEE International Joint Conference on Neural Networks, pp. 171-177, 2008.
[17] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation", IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091-2106, December 2005.
[18] R. M. Gray, "Vector quantization", IEEE ASSP Magazine, vol. 1, no. 2, pp. 4-29, Apr. 1984.
[19] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code", IEEE Transactions on Communications, vol. 31, pp. 532-540, Apr. 1983.
[20] Yushin Cho, W. A. Pearlman and A. Said, "Low complexity resolution progressive image coding algorithm", IEEE International Conference on Image Processing, vol. 3, pp. 49-52, 2005.
[21] K. Park and H. W. Park, "Region-of-interest coding based on set partitioning in hierarchical trees", IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 2, pp. 106-113, 2002.
[22] M. A. Ansari and R. S. Anand, "Implementation of efficient medical image compression algorithms with JPEG, wavelet transform and SPIHT", International Journal of Computational Intelligence Research and Applications (IJCIRA), vol. 2, no. 1, pp. 43-55, 2008.
[23] W. Kesheng, J. Otoo and S. Arie, "Optimizing bitmap indices with efficient compression", ACM Transactions on Database Systems, vol. 31, pp. 1-38, 2006.
[24] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Englewood Cliffs, NJ: Prentice-Hall, 2007.
[25] David Salomon, Data Compression, Second Edition.
[26] Kirk Baker, Singular Value Decomposition Tutorial, 2005.
[27] A. Achim, A. Bezerianos and P. Tsakalides, "Novel
Bayesian multiscale method for speckle removal in medical ultrasound images", IEEE Transactions on Medical Imaging, vol. 20, no. 8, pp. 772-783, 2001.
[28] Z. F. Zhou and P. L. Shui, "Contourlet-based image denoising algorithm using directional windows", Electronics Letters, vol. 43, no. 2, pp. 92-93, 2007.
[29] W. Ni, B. L. Guo and Y. Y. Yan, "Speckle suppression for SAR images based on adaptive shrinkage in contourlet domain", in Proc. Sixth World Congress on Intelligent Control and Automation (WCICA 2006), NY: IEEE, pp. 10017-10021, 2006.
[30] H. H. Song, S. Y. Yu and C. Wang, "A new deblocking algorithm based on adjusted contourlet transform", in Proc. IEEE International Conference on Multimedia and Expo, NY: IEEE, pp. 449-452, 2006.
[31] H. F. Li, W. W. Song and S. X. Wang, "A novel blind watermarking algorithm in contourlet domain", in Proc. 18th International Conference on Pattern Recognition, NY: IEEE, pp. 639-642, 2006.
[32] A. Bouzidi and N. Baaziz, "Contourlet domain feature extraction for image content authentication", in Proc. IEEE Workshop on Multimedia Signal Processing, NY: IEEE, pp. 202-206, 2006.
[33] Q. G. Miao and B. S. Wang, "The contourlet transform for image fusion", in Proc. SPIE Conference on Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, Bellingham: SPIE, pp. 6242-6245, 2006.

AUTHORS

M. Moorthi is pursuing his Ph.D. at Sri Chandrasekharendra Saraswathi Viswa Mahavidyalaya University, Kanchipuram. He completed his B.E. degree in Electronics and Communication Engineering at Arulmigu Meenakshi Amman College of Engineering, Kanchipuram, in 2001, and his M.E. in Medical Electronics in 2007 at Anna University, Guindy campus, Chennai, India. He has 12 years of teaching experience and is currently working as an Assistant Professor in the Department of Electronics and Communication Engineering at Prathyusha Institute of Technology and Management, Chennai. He is a member of the Institute of Electrical and Electronics Engineers (IEEE), the Indian Society for Technical Education (ISTE) and the IETE. He has published and presented papers at national and international conferences in the area of image processing, and has been a reviewer for IET Image Processing in 2012 and 2013. His research interests are image segmentation, image compression, neural networks, fuzzy logic, microprocessors and microcontrollers.

Dr. R. Amutha, Professor, ECE Department, graduated from Thiagarajar College of Engineering in 1987. She obtained her M.E. degree from PSG College of Technology and her Ph.D. from Anna University in 2006. She has 24 years of teaching and 10 years of research experience. Her research areas include coding theory, wireless communication networks and image processing. She has published 3 international and 2 national journal papers, and has 20 international and national conference papers to her credit. She has reviewed three international journal papers. She is a recognized research supervisor of Anna University and SCSVMV University for Ph.D. and M.S. (by research), and is supervising 9 Ph.D. research scholars.