2010 International Conference on Advances in Communication, Network, and Computing

Image Compression using Lifting Wavelet Transform

Swanirbhar Majumder1, N. Loyalakpa Meitei1, A. Dinamani Singh1, Madhusudhan Mishra1
[email protected], [email protected], [email protected], [email protected]

1 North Eastern Regional Institute of Science and Technology (Deemed University), Arunachal Pradesh, India

Abstract— The wavelet transform, due to its time-frequency characteristics, has been a popular multiresolution analysis tool, and its discrete version, the DWT, has been widely used in various applications to date. The most widely applied version of the DWT is convolution based, but for hardware implementation this convolution-based system has had problems with floating point numbers. Thus the lifting-based DWT method, with lower computational cost, more efficient performance and easier hardware implementability, has become popular. Here an image compression scheme using this lifting-based DWT method is presented. The quality of this method has been checked for four different levels of DWT scaling, with varying quantization levels and a lossless encoding scheme. The image quality analysis has been done using two sets of parameters, namely the popular peak signal to noise ratio method and the block-based median singular value decomposition method of Shnayderman, et al. Based on these two image quality assessments, this method, along with its easier hardware computation, may be readily realized for real-time image compression in real-time devices and systems at lower cost.


Image compression research aims at reducing the number of bits needed to represent an image by removing the spatial and spectral redundancies as much as possible. A typical lossy image compression system is shown in Fig. 1. It consists of three closely connected components, namely (a) source encoder, (b) quantizer, and (c) entropy encoder. Compression is accomplished by applying a linear transform to decorrelate the image data, quantizing the resulting transform coefficients, and entropy coding the quantized values.

Figure 1: A Typical Lossy Signal/Image Encoder.

The quantizer used here is a uniform quantizer. Two popular lossless encoding schemes are applied: RLE (run length encoding) and Huffman encoding. The idea is to apply them consecutively, first encoding with RLE to reduce the repetitions of the same number, and then applying Huffman encoding to code the different numbers efficiently by their frequency of appearance. The redundancy removed is therefore mainly in the transform domain (i.e. spectral redundancy), due to the use of the DWT, which here works on the lifting-based method in the time-frequency domain. Beyond this, some amount of redundancy is removed after quantization via the two encoding techniques used in cascade [1] [2]. Many image compression methodologies have been based on similar initial techniques [3-7]. Here another such methodology is proposed, but specifically keeping hardware implementability in mind for real-time embedded applications [8-13].
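The RLE-then-Huffman cascade described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the sample sequence are ours, and a real codec would also have to store the Huffman dictionary alongside the bit stream.

```python
# Sketch of the lossless back end: run-length encoding followed by
# Huffman coding of the resulting (symbol, count) stream.
import heapq
from collections import Counter

def rle_encode(seq):
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    out = []
    for x in seq:
        if out and out[-1][0] == x:
            out[-1][1] += 1
        else:
            out.append([x, 1])
    return [(s, c) for s, c in out]

def huffman_code(symbols):
    """Build a prefix code (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)                         # unique tie-breaker for the heap
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

quantized = [0, 0, 0, 0, 3, 3, 1, 0, 0, 0, 0, 0]  # toy quantizer output
pairs = rle_encode(quantized)            # [(0, 4), (3, 2), (1, 1), (0, 5)]
code = huffman_code([p for pair in pairs for p in pair])
bits = "".join(code[p] for pair in pairs for p in pair)
print(pairs, len(bits), "bits")
```

Applying Huffman after RLE, as the paper does, lets the frequent short run-lengths receive the shortest codewords.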

Keywords—DWT, lifting, peak signal to noise ratio, block based median singular value decomposition

I. INTRODUCTION

Uncompressed multimedia (graphics, audio and video) data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass-storage density, processor speeds, and digital communication system performance, demand for data storage capacity and data-transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to storage and communication technology [1]. A common characteristic of most images is that neighboring pixels are correlated, and therefore contain redundant information. The foremost task then is to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). In general, three types of redundancy can be identified:
− Spatial Redundancy, or correlation between neighboring pixel values.
− Spectral Redundancy, or correlation between different color planes or spectral bands.
− Temporal Redundancy, or correlation between adjacent frames in a sequence of images (in video applications).

978-0-7695-4209-6/10 $26.00 © 2010 IEEE DOI 10.1109/CNC.2010.11

II. DISCRETE WAVELET TRANSFORM

The computation of a wavelet series requires significant computational time and resources. This can be reduced by using a sub-band coding algorithm, which yields a faster wavelet transform. The wavelet transform of a signal using the CWT is obtained by changing the scale of the analysis window, shifting the window in time, multiplying by the signal and integrating the result over all time, as in equation 1. In the case of the DWT, the wavelet transform is obtained by filtering the signal through a series of digital filters at different scales. The scaling operation is done by changing the resolution of the signal by sub-sampling.

$X_{CWT}(\tau, s) = \frac{1}{\sqrt{s}} \int x(t)\, \psi^{*}\!\left(\frac{t - \tau}{s}\right) dt$    (1)

The update step lifts the even-sequence values using a linear combination of the predicted odd sequence so that the basic properties of the original sequence are preserved. The data in s[n] is therefore updated using the data in d[n], i.e. s[n] ← s[n] + U(d[n]).
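The split/predict/update recipe detailed in this section can be illustrated with the simplest lifting pair, the Haar steps P(s) = s and U(d) = d/2. This is a sketch only; the paper itself uses the CDF 9/7 wavelet, whose lifting factorization has two predict/update pairs plus a scaling step.

```python
# Haar lifting: one predict/update pair, computed fully in place,
# with an exact inverse obtained by running the steps backwards.
def haar_lift(x):
    s, d = list(x[0::2]), list(x[1::2])          # split: even and odd samples
    d = [di - si for si, di in zip(s, d)]        # predict: d <- d - P(s)
    s = [si + di / 2 for si, di in zip(s, d)]    # update:  s <- s + U(d)
    return s, d                                  # approximation, detail bands

def haar_unlift(s, d):
    s = [si - di / 2 for si, di in zip(s, d)]    # undo update
    d = [di + si for si, di in zip(s, d)]        # undo predict
    return [v for pair in zip(s, d) for v in pair]  # merge even/odd

s, d = haar_lift([2, 4, 6, 8])
print(s, d)                                      # [3.0, 7.0] [2, 2]
assert haar_unlift(s, d) == [2, 4, 6, 8]         # perfect reconstruction
```

Note that the detail band is exactly zero wherever the prediction is perfect, which is what makes the representation sparse for correlated inputs.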

The DWT can be computed using either the convolution-based or the lifting-based method. In both, the input sequence is decomposed into low-pass and high-pass sub-bands, each consisting of half the number of samples of the original sequence.
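For comparison, the convolution-based route can be sketched as below: filter with a low-pass/high-pass pair and down-sample by 2. The Haar analysis filters are used here purely for illustration; samples outside the signal are treated as zero, one of several possible boundary conventions.

```python
# One level of convolution-based DWT: s[n] = sum_k x[k] h[2n - k],
# and d[n] likewise with the high-pass filter g.
import math

h = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # low-pass analysis filter (Haar)
g = [1 / math.sqrt(2), -1 / math.sqrt(2)]  # high-pass analysis filter (Haar)

def analyze(x):
    """Filter x with f and keep every second output (down-sample by 2)."""
    def band(f):
        return [sum(x[2 * n - k] * fk
                    for k, fk in enumerate(f) if 0 <= 2 * n - k < len(x))
                for n in range(len(x) // 2)]
    return band(h), band(g)

s, d = analyze([2.0, 4.0, 6.0, 8.0])       # half-length low/high-pass bands
```

The lifting route computes the same sub-bands in place with fewer arithmetic operations, which is the basis of the hardware argument made in this paper.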

Figure 2: A one dimensional DWT system.

In the convolution-based DWT, the input is filtered using a filter bank consisting of a low-pass and a high-pass filter and down-sampled by a factor of 2, as shown in figure 2 for one level of DWT. This can also be expressed mathematically by equations 2 and 3, where s is the low-pass signal and d is the high-pass signal.

$s[n] = \sum_{k=-\infty}^{\infty} x[k]\, h[2n - k]$    (2)

$d[n] = \sum_{k=-\infty}^{\infty} x[k]\, g[2n - k]$    (3)

The lifting scheme is a much more efficient method than the classical convolution method [8]. It has been motivated by the second-generation wavelets, which, unlike first-generation wavelets, do not use translations and dilations of the same wavelet prototype at different levels. The lifting scheme is general and is not limited to developing filter structures for second-generation wavelets; it can also be used to build ladder-type structures for first-generation wavelet filters. Any classical filter bank can be decomposed into lifting steps through the use of the Euclidean algorithm.

The DWT of a one-dimensional signal using a simple lifting scheme with one pair of lifting sub-steps follows the split, predict and update steps. First, in the split step, the one-dimensional input sequence x[n] is split into two subsets, one even (s0[n]) and one odd (d0[n]). This splitting is called the lazy wavelet transform and is used for ease of reconstruction. Next, to obtain a sparser approximation, a linear combination of elements of one subsequence is used to predict the values of the other subsequence, the assumption being that the two subsequences produced in the split step are correlated: if the correlation present in x[n] is high, the predicted values will be close to the actual ones. Thus d[n] is predicted using samples in s[n], and the samples in d[n] are replaced by the prediction error, i.e. d[n] ← d[n] − P(s[n]). The predict step results in the loss of some basic properties of the signal, such as the mean value, which need to be preserved; restoring them is the role of the update step, s[n] ← s[n] + U(d[n]), given earlier.

III. PEAK SIGNAL TO NOISE RATIO

PSNR is among the most common methods used to compare the reconstructed image with the original image. PSNR (peak signal-to-noise ratio) is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. It is commonly used as a measure of the quality of reconstruction in image compression. As many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale. The original image X(i, j) and the reconstructed image X̂(i, j) may be compared, to evaluate the effectiveness of the compression technique, by calculating the mean squared error (MSE) as given by equation 4; PSNR is then expressed by equation 5.

$MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ X(i, j) - \hat{X}(i, j) \right]^{2}$    (4)

$PSNR = 20 \log_{10}\!\left(\frac{MAX_I}{\sqrt{MSE}}\right)$    (5)

Here, MAX_I is the maximum pixel value of the image. When the pixels are represented using 8 bits per sample, MAX_I is 255 [3-8].

IV. MEDIAN SVD

The mean square error (MSE) based methods, which have been the most common evaluation tools, have been found to correlate poorly with the human visual system (HVS). Even with the advent of new ideas and techniques, the HVS-based objective measures do not appear to be superior to simple pixel-based measures like the MSE, peak signal-to-noise ratio (PSNR), or root mean squared error (RMSE) [7-9]. Thus Shnayderman, et al., proposed a singular value decomposition based grayscale image quality measure for both local and global assessment. The technique first defines a graphical measure, which computes the distance between the singular values of an original image block and those of the distorted one, as in equation 6.

$D_i = \sqrt{\sum_{i=1}^{n} \left(s_i - \hat{s}_i\right)^{2}}$    (6)

where s_i are the singular values of the original image block and ŝ_i those of the distorted one, with n being the block size. For an image of size k, there are (k/n) × (k/n) blocks. The numerical measure derived from this graphical measure computes the global error, expressed as a single numerical value depending on the distortion type, as in equation 7.

$M_{SVD} = \frac{\sum_{i=1}^{(k/n) \times (k/n)} \left| D_i - D_{mid} \right|}{(k/n) \times (k/n)}$    (7)

where D_mid represents the midpoint, or median, of the sorted D_i values, and k and n are the image and block sizes as defined for equation 6.
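Both quality measures of sections III and IV can be sketched directly from equations 4-7. This assumes 8-bit grayscale images stored as NumPy arrays; the block size n = 8 and the test images below are illustrative choices of ours, not values fixed by the paper.

```python
# PSNR (eqs. 4-5) and the median-SVD measure (eqs. 6-7) for square images.
import numpy as np

def psnr(x, x_rec, max_i=255.0):
    mse = np.mean((x.astype(float) - x_rec.astype(float)) ** 2)   # eq. 4
    return 20 * np.log10(max_i / np.sqrt(mse))                    # eq. 5

def median_svd(x, x_rec, n=8):
    k = x.shape[0]                     # assumes a square k x k image
    dists = []
    for i in range(0, k, n):
        for j in range(0, k, n):       # per-block singular value distance
            s = np.linalg.svd(x[i:i+n, j:j+n].astype(float), compute_uv=False)
            s_hat = np.linalg.svd(x_rec[i:i+n, j:j+n].astype(float),
                                  compute_uv=False)
            dists.append(np.sqrt(np.sum((s - s_hat) ** 2)))       # eq. 6
    d_mid = np.median(dists)           # midpoint of the sorted D_i
    return np.mean(np.abs(np.array(dists) - d_mid))               # eq. 7

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
print(psnr(img, noisy), median_svd(img, noisy))
```

As the text notes, the two measures run in opposite directions: a higher PSNR and a lower median SVD both indicate better quality.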

This particular wavelet has been widely used for hardware implementation [10]. The quantization levels used range from 2 to 50 in steps of 2. For each of these even quantization levels, the percentage of compression, the PSNR and the median SVD values have been analyzed; they are plotted for each of the 4 levels of DWT in figures 5, 6 and 7 respectively.

V. IMAGE COMPRESSION AND DECOMPRESSION METHOD

The method used here is very simple, with mainly 5 steps. First, the image undergoes lifting-wavelet-based DWT for the desired number of levels, as per the size of the image. This is followed by a zig-zag scan to convert it to a one-dimensional format. It is then uniformly quantized and encoded using run length encoding (RLE). Finally, the RLE-encoded data is encoded again using Huffman coding, such that the Huffman dictionary has a length equal to one more than the number of quantization levels used. The method is shown in figure 3.
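The zig-zag scan used above can be sketched as below; it is the same anti-diagonal traversal JPEG uses. The example matrix is ours, chosen so the scan reads off 1 through 9 in order.

```python
# Zig-zag scan: flatten a 2-D coefficient matrix into a 1-D sequence by
# walking anti-diagonals, alternating direction on each diagonal.
def zigzag(mat):
    rows, cols = len(mat), len(mat[0])
    out = []
    for s in range(rows + cols - 1):          # anti-diagonal: r + c = s
        idx = [(r, s - r) for r in range(rows) if 0 <= s - r < cols]
        if s % 2 == 0:
            idx.reverse()                     # walk even diagonals upward
        out.extend(mat[r][c] for r, c in idx)
    return out

m = [[1, 2, 6],
     [3, 5, 7],
     [4, 8, 9]]
print(zigzag(m))                              # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Ordering the coefficients this way groups values of similar magnitude together, which lengthens the runs seen by the RLE stage.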


The decompression method is just the reverse of the compression method, as in figure 4. Compression is mainly achieved by removing spectral redundancy in the DWT domain, with some loss of data due to quantization. As both encoding methods used are lossless, no data is lost in the encoding and decoding steps. Moreover, it has been observed that more compression is achieved if Huffman encoding is applied after RLE; thus they have been used in this order.


Figure 5: Percentage of Compression for different levels of DWT with varying quantization levels.

As per figure 5, it can be clearly seen that as the quantization level increases, the percentage of compression decreases for DWT levels 1 and 2, and to some extent for level 3. But if the level of DWT is increased further, the quantization level does not change the percentage of compression much. Ideally, the PSNR for good-quality images lies above 30 dB [3-8][11-12]. It can be seen in figure 6 that levels 1 and 2 of DWT achieve this at around 10 quantization levels, while for higher levels this is achieved only after a minimum of 15 quantization levels.

Figure 3: Image Compression System.


Figure 4: Image Decompression System.

VI. RESULTS AND DISCUSSION


Figure 6: PSNR in dB for different levels of DWT with varying quantization levels.

The standard 256x256 'cameraman' image has been used for the analysis here. This image has undergone 1, 2, 3 and 4 levels of DWT and IDWT using the Cohen-Daubechies-Feauveau (CDF) 9/7 wavelet, specifically under the name 'cdf97'. Other wavelets can be used as well, as per the necessity of the compression application; the CDF 9/7 was chosen because this particular wavelet has been widely used for hardware implementation [10].

Interestingly, the PSNR for 1-level DWT does cross 40 dB (i.e. provides very good quality output), while for 2-level DWT it goes up to 35 dB. But for levels 3 and 4 of DWT it stays at around 30 dB from 24-26 quantization levels onward, and does not increase much till the end, indicating that these


are very good for low-resolution applications like VoIP, where picture quality is not of much importance.


Finally, coming to the analysis based on the median SVD, as in figure 7, we see that it behaves opposite to the PSNR, because their numerical values work in opposite directions: the higher the PSNR the better the image quality, while the lower the median SVD the better the quality. That is why, in the case of level 1 DWT, the lowest values were computed. But one thing that is very clear from figure 7 is that below 15 levels of quantization the image quality is significantly bad.


Figure 7: Median SVD for different levels of DWT with varying quantization levels.


VII. CONCLUSION


To conclude, it can be seen from the results that the best intermediate path for the compression of grayscale images of size 256x256 (like 'cameraman') is to use 2-level lifting-based DWT with a quantization level of more than 20. Here, compression of around 95%, with a PSNR of around 35 dB and a median SVD of about 12000-17000, can be achieved. Moreover, as the lifting-based DWT has been used instead of the conventional DWT, hardware implementability is easier and computation time is lower; therefore, the luxury of two cascaded encoding schemes can be afforded here.


REFERENCES
[1] D. Salomon, Data Compression, 4th Edition, Springer, 2006-07.
[2] K. Sayood, Introduction to Data Compression, 2nd Edition, Morgan Kaufmann, 2000.
[3] S. Majumder, et al., "A Comparative Study of DCT and Wavelet-Based Image Coding & Reconstruction", pp. 43-46, Proceedings of the National Conference on Smart Communication Technologies & Industrial Informatics, NIT Rourkela, 3-4 February 2007.
[4] S. Majumder, et al., "Wavelet and DCT-Based Image Coding & Reconstruction for Low Resolution Implementation", pp. 814-818, International Conference on Modeling and Simulation 2007, AMSE, Dec. 2007.
[5] S. Majumder, et al., "A comparative study of image compression techniques based on SVD, DWT-SVD and DWT-DCT", pp. 500-504, International Conference on Systemics, Cybernetics, Informatics (ICSCI-2008), Hyderabad, Jan. 2008.
[6] S. Majumder, et al., "A Comparative study of different coding schemes for Image Compression using Wavelet Transform", pp. 41-48, National Conference on Emerging Trends in Engineering, SVPCET, Puttur, Andhra Pradesh, April 2008.
[7] S. Majumder, et al., "Digital Image Compression using Neural Networks", ID 95, pp. 116-120, International Conference on Advances in Computing, Control and Telecommunication Technologies 2009 (ACT 2009), IEEE Computer Society, Trivandrum, Dec. 2009. ISBN 978-0-7695-3915-7.
[8] R. Calderbank, et al., "Wavelet Transforms that map integers to integers", Applied and Computational Harmonic Analysis, 5(3), pp. 332-369, 1998.
[9] A. Shnayderman, et al., "An SVD-based grayscale image quality measure for local and global assessment", IEEE Transactions on Image Processing, 15(2), pp. 422-429.
[10] S. Majumder, et al., "Image Watermarking by Fast Lifting Wavelet Transform", 3rd National Conference on Mathematical Techniques: Emerging Paradigms for Electronics and IT Industries (MATEIT '10), New Delhi, January 2010.
[11] S. Majumder, et al., "SVD and Error Control Coding based Digital Image Watermarking", pp. 60-63, Proceedings of ACT 2009, IEEE CS, ISBN 978-0-7695-3915-7.
[12] S. Majumder, et al., "SVD and Neural Network based Watermarking Scheme", Proceedings of BAIP 2010, pp. 1-5, Vol. 70, Springer CCIS, ISBN 978-3-642-12213-2 (Print), 978-3-642-12214-9 (Online).
[13] S. Majumder, et al., "BPNN and Lifting Wavelet based Image Compression", Proceedings of ICT 2010, Springer CCIS.
