Proceedings of the 2014 IEEE Students' Technology Symposium

A Novel Approach for Image Compression Based on Multi-level Image Thresholding using Shannon Entropy and Differential Evolution

Sujoy Paul and Bitan Bandyopadhyay
Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, INDIA
Email: [email protected] and [email protected]

Abstract—Image compression is one of the most important steps in image transmission and storage. Most state-of-the-art image compression techniques are spatially based. In this paper, a histogram-based image compression technique built on multi-level image thresholding is proposed. The gray scale of the image is divided into crisp groups by probabilistic partition. Shannon's entropy is used to measure the randomness of the crisp grouping. The entropy function is maximized using a popular metaheuristic, Differential Evolution, to reduce the computational time and the standard deviation of the optimized objective value. Images from the popular UC Berkeley and CMU image databases are used as benchmarks. Important image quality metrics (PSNR, WPSNR) and the storage size of the compressed image file are used for comparison and testing. A comparison of Shannon's entropy with Tsallis entropy is also provided. Some specific applications of the proposed image compression algorithm are also pointed out.

Keywords—Image Compression; Shannon's Entropy; PSNR; Differential Evolution; Image Thresholding

I. INTRODUCTION

In recent times, with the advent of new technology, high-speed internet and the need for large amounts of data storage, image compression is one of the most important tasks to be accomplished. Medical science also increasingly requires huge numbers of images to be stored digitally, most of which are generally grayscale images. Likewise, in wireless sensor networks, where low-power devices are deployed, image compression techniques are required to reduce power consumption, transmission time and failure probability. Image compression techniques may be divided into two categories, namely lossy and lossless compression. Irrespective of their advantages, lossy techniques are not used in medical science, because some important data may be lost during compression, which does not happen in lossless compression. Lossy compression is generally used in streaming media and telephony applications. Some algorithms for lossy and lossless compression are presented in [1]-[4]. Most of the image compression techniques proposed over the years are spatially based, using encoding or decoding logic that processes the image by dividing it into small blocks. Nobuhara et al. [5] proposed a fast image compression technique using fuzzy relational equations. Mohammed et al. [6] proposed an image compression technique based on block truncation coding; they divided the image into non-overlapping blocks and applied the coding technique. Chang et al. [7] presented an adaptive wavelet thresholding technique for image compression, with the threshold being driven by a Bayesian framework. Several image compression techniques, particularly for grayscale images, have also been proposed [8]-[9]. A new technique based on invariant image features is proposed in [10]. Initially, a standard lossy image compression technique known as JPEG [11] existed, which is based on the Discrete Cosine Transform (DCT). It was later replaced by JPEG2000 [12]. Evolutionary optimization algorithms have also been used by some researchers [13] to decrease the computational time.

Most of the image compression techniques proposed to date are mainly based on spatial features, with some based on other statistical measures. In this paper, an image-histogram-based multi-level thresholding technique is proposed for image compression. Multi-level image thresholding is one of the most common ways to segment an image. Several entropy-based methods [14]-[18] have been proposed for multi-level global image thresholding. Other image thresholding algorithms have also been proposed earlier, with a great impact made by Otsu [19], which was later modified by Kapur et al. [20]. Some fuzzy-partition-based techniques have also been proposed in [21]-[22]. Although fuzzy-based techniques provide better results, they consume much more computational time. Thresholding divides the image into several objects and a background. When the number of thresholds is increased, the image becomes over-segmented, but at the same time it approaches the original image. This concept is utilized in this paper for image compression. The histogram is approximated by using a higher number of thresholds, thus decreasing the compression error. The approximation of the histogram is done by maximization of Shannon's entropy, as discussed in Section II. With an increase in the number of thresholds, the dimension and hence the computational time increase almost exponentially. For this reason, a meta-heuristic, Differential Evolution (DE), is used for optimization. It may be mentioned here that, as the compression technique is based on histogram processing, the computational time is the same irrespective of the size of the image. The rest of the paper is organized as follows: Section II presents a description of the proposed algorithm, Section III provides a brief description of DE, experimental results are provided in Section IV, and lastly Section V concludes the paper.

II. PROPOSED METHOD

A. Shannon’s Entropy In information theory, entropy is used to measure the randomness of a random variable. Let’s consider a discrete , ,… , . The random variable , with possible values amount of information conveyed by a random variable may be quantitatively measured by the formula given below,

978-1-4799-2608-4/14/$31.00 ©2014 IEEE

56

Proceeding of the 2014 IEEE Students' Technology Symposium

log

1

where

is a symbol having probability of occurrence and ∑ 1 , n being the total number of symbols. Shannon’s entropy [23] is defined as the expected value of the information contained. It may be expressed as, log

2

For two possible values of a random variable, Shannon’s 0.5 . entropy reaches a maximum for Similarly, for n random variable, a maxima is reached when all the probabilities of the random variables are equal ( 1 ), or in other words, when the random variables are equally likely. This concept is applied for image thresholding as discussed in the next section. B. Image Compression by Image Thresholding using Shannon’s Entropy Let’s consider a digital image of the size of . Let , be the gray value of the pixel having coordinates , where 1,2, … , and 1,2, … , . If is the number of gray levels of , the set of all gray levels 0, 1, 2 … , 1 is represented as , such that, , ,

,

:

,

,

0, 1, 2 … ,

1

3

be the normalized histogram of the Let , ,…, ⁄ , is the number of pixels image where . If the image is required to be divided into 1 levels in , ,…, such that 0 by thresholds, 1, then the thresholds should be chosen such that the histogram frequency values within two threshold values almost have the same value. This may be done by using the concept of maximization of Shannon’s Entropy, which for 1 to , … , each group of gray levels, ( 0 to , 1 to 1 may be defined as,

max
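To make the objective of (4)-(7) concrete, the following is a minimal sketch (not the authors' code) of how the total Shannon entropy of a candidate threshold set can be evaluated from a normalized histogram; the function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def total_shannon_entropy(hist, thresholds):
    """Total entropy of the classes induced by `thresholds` on a
    normalized histogram `hist` (length L, summing to 1)."""
    L = len(hist)
    # Class boundaries: [0, t1], (t1, t2], ..., (tT, L-1]
    bounds = [0] + sorted(int(t) for t in thresholds) + [L - 1]
    total = 0.0
    for k in range(len(bounds) - 1):
        lo = bounds[k] + (0 if k == 0 else 1)
        hi = bounds[k + 1]
        p = hist[lo:hi + 1]
        Pk = p.sum()
        if Pk <= 0:                      # empty class contributes nothing
            continue
        q = p[p > 0] / Pk                # in-class probabilities
        total += -np.sum(q * np.log(q))  # Shannon entropy of the class
    return total

# Example: 8-bit image histogram and three candidate thresholds
hist = np.ones(256) / 256.0
print(total_shannon_entropy(hist, [64, 128, 192]))
```

Any candidate threshold vector can be scored this way, which is exactly what the optimizer described in Section III requires.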

The optimization task of (7) is carried out by Differential Evolution, discussed in Section III. This thresholding technique essentially approximates the image histogram by properly choosing the set of thresholds. As the number of thresholds increases, the approximation of the histogram frequency values becomes more accurate and approaches the actual values, but at the same time the variation in the pixel values over the image is reduced. In this way compression of the image can be achieved, but with some error or loss of information. It may be noted here that when the image is thresholded into T + 1 levels, only \log_2(T + 1) bits are required to represent each pixel of the image. Thus, an image having 256 levels, represented by 8 bits, when thresholded by 15 thresholds to obtain 16 levels, requires only \log_2 16 = 4 bits per pixel in the compressed image.
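As an illustration of the compression step just described (a sketch under assumptions, not the paper's implementation), the snippet below maps each pixel to the index of its threshold class, so that the image can be stored with log2(T+1) bits per pixel, and keeps one representative gray value per class for reconstruction; using the class mean as the representative is an assumption made here purely for illustration.

```python
import numpy as np

def compress_by_thresholds(image, thresholds):
    """Quantize an 8-bit grayscale image into T+1 classes.

    Returns per-pixel class indices (storable with log2(T+1) bits)
    and a representative gray value per class (here: the class mean).
    """
    t = np.asarray(sorted(thresholds))
    labels = np.searchsorted(t, image, side="left")   # class index per pixel
    reps = np.array([image[labels == k].mean() if np.any(labels == k) else 0
                     for k in range(len(t) + 1)], dtype=np.uint8)
    return labels.astype(np.uint8), reps

def decompress(labels, reps):
    """Rebuild an approximate image from class indices and representatives."""
    return reps[labels]

# Example: 15 thresholds -> 16 levels -> 4 bits per pixel
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
labels, reps = compress_by_thresholds(img, np.linspace(16, 240, 15))
approx = decompress(labels, reps)
```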

III. DIFFERENTIAL EVOLUTION

Differential Evolution (DE) [24] is a meta-heuristic which has recently emerged as a simple yet very competitive evolutionary optimizer. Like other evolutionary algorithms, it is an iterative, population-based optimizer. In DE, the i-th individual of the population at generation t is a D-dimensional vector containing a set of optimization parameters:

X_i(t) = [x_{i,1}(t), x_{i,2}(t), ..., x_{i,D}(t)]

The population is randomly initialized within the search space. In each generation, to change a population member X_i(t), a donor vector V_i(t) is created through mutation. In one of the earliest variants of DE, named the DE/rand/1 scheme, V_i(t) is created for each member by choosing three other parameter vectors (say the r1-th, r2-th and r3-th vectors, with r1, r2 and r3 mutually distinct and different from i) at random from the current population. The donor vector is then obtained as

V_i(t) = X_{r1}(t) + F (X_{r2}(t) - X_{r3}(t))    (8)

where F is a scalar quantity called the scaling factor. In order to increase the potential diversity of the donor vector, a binomial crossover is performed to create a trial vector

U_i(t) = [u_{i,1}(t), u_{i,2}(t), ..., u_{i,D}(t)]    (9)

where

u_{i,j}(t) = v_{i,j}(t)   if rand_{i,j}(0,1) \le Cr or j = j_rand
u_{i,j}(t) = x_{i,j}(t)   otherwise    (10)

Here rand_{i,j}(0,1) \in [0,1] is the j-th evaluation of a uniform random number generator, j_rand \in {1, 2, ..., D} is a randomly chosen index which ensures that U_i(t) gets at least one component from V_i(t), and Cr is the crossover rate. Finally, 'selection' of the new vector is performed as

X_i(t+1) = U_i(t)   if f(U_i(t)) \le f(X_i(t))
X_i(t+1) = X_i(t)   otherwise    (11)

where f(·) is the function to be minimized. The above steps are repeated until the stopping criterion is met. In our proposed algorithm, the stopping criterion is a fixed number of iterations.
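The DE/rand/1/bin loop described above can be summarized by the following illustrative sketch, written here as a maximizer since the thresholding objective (7) is to be maximized; the function name, parameter defaults and NumPy-based implementation are assumptions, not the authors' code.

```python
import numpy as np

def de_rand_1_bin(objective, dim, bounds, pop_size=50, F=0.5, Cr=0.9,
                  generations=200, seed=0):
    """Maximize `objective` with DE/rand/1/bin inside [bounds[0], bounds[1]]^dim."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))      # random initialization
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: three distinct members, all different from i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            donor = pop[r1] + F * (pop[r2] - pop[r3])
            donor = np.clip(donor, lo, hi)
            # Binomial crossover
            j_rand = rng.integers(dim)
            mask = (rng.random(dim) <= Cr)
            mask[j_rand] = True
            trial = np.where(mask, donor, pop[i])
            # Selection (greedy: keep the better of target and trial)
            f_trial = objective(trial)
            if f_trial >= fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmax(fit)], fit.max()
```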

IV. EXPERIMENTAL RESULTS

The benchmark images used for testing are taken from the image datasets of UC Berkeley [25] and CMU [26]. The initial setup used for DE is as follows: F = 0.5, Cr = 0.95, D = number of thresholds, and a population size of 10. The minimum and maximum bounds of the elements of the population vectors are 0 and 255 for an 8-bit image. A high number of objective function evaluations (FEs) as the stopping criterion may lead to higher execution time but better convergence. To reduce the execution time with a negligible loss of convergence in the objective value, the stopping criterion is chosen to be 200×10×D FEs. This number has been selected through a series of experiments. Figure 2 shows four benchmark images; for each of them, the compressed images obtained for bit representations varying from 2 to 7 are shown along with the original. The error rates are calculated using the Peak Signal to Noise Ratio (PSNR) and the Weighted Peak Signal to Noise Ratio (WPSNR), and their plots are provided for each of the four images used. The memory required for storing the compressed and the original images is also presented in a graphical plot for each image. It is observed from the plots that more than 50% compression of the images is possible with the proposed algorithm at a 4-bit representation, while maintaining a decent PSNR of about 30 dB. For example, no visual difference may be noticed between images 2.2.d-f and 2.2.g. Similar observations may be made for the other images. A comparison with the Tsallis entropy [14] based method is presented in the graphs of figure 2. It may be observed that, although in some cases the Tsallis entropy method occupies less storage space, it has lower PSNR and WPSNR values. It may also be seen that the results produced by maximization of Tsallis entropy are not consistent: the PSNR and WPSNR values should generally increase monotonically with the number of thresholds, but this is not the case for Tsallis entropy. A comparison of DE with other state-of-the-art algorithms, namely Particle Swarm Optimization (PSO) [27] and Genetic Algorithm (GA) [28], is presented in Tables I and II in terms of the mean maximized objective function value and its standard deviation. The image numbers are those assigned by the dataset owners. It may be observed that DE proves to be the best among the three algorithms.

TABLE I: Mean objective value
Image No.    DE        PSO       GA
24063        67.1990   67.5683   65.0434
37073        66.6734   67.6430   64.3027
119082       68.8049   68.8685   64.2490
368016       68.2222   68.2618   66.1989

TABLE II: Standard deviation of objective values
Image No.    DE        PSO       GA
24063        0.1074    0.4512    0.0809
37073        0.2621    0.1721    0.0513
119082       0.3343    0.4625    0.1540
368016       0.3748    0.3318    0.1657
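For concreteness, a hypothetical end-to-end run could wire the entropy objective and the DE sketch above together with the settings reported in this section (F = 0.5, Cr = 0.95, D = number of thresholds, population size 10); the histogram helper, the random stand-in image and the generation count are assumptions made only for illustration.

```python
import numpy as np

# Assumes total_shannon_entropy() and de_rand_1_bin() from the earlier sketches.

def compute_histogram(image):
    """Normalized 256-bin histogram of an 8-bit grayscale image."""
    counts = np.bincount(image.ravel(), minlength=256).astype(float)
    return counts / counts.sum()

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in image
hist = compute_histogram(image)

D = 15                                # 15 thresholds -> 16 levels -> 4 bits/pixel
best, value = de_rand_1_bin(
    objective=lambda x: total_shannon_entropy(hist, np.clip(np.round(x), 1, 254)),
    dim=D, bounds=(0.0, 255.0), pop_size=10, F=0.5, Cr=0.95, generations=200)
thresholds = sorted(int(t) for t in np.round(best))
```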


In addition to serving the basic requirements of an image compression algorithm, the proposed algorithm also supports the following tasks.

A. Image Transmission using Wireless Sensor Networks

A wireless sensor network (WSN) mainly consists of tiny sensors deployed over a geographical area. Each sensor is a low-power device that integrates computing, wireless communication and sensing capabilities. In many applications involving WSNs, sensors are required to transfer captured images to the base station or to store the images in the nodes themselves. The captured images are mainly 8-bit images, which degrade the performance of the network compared with 4-bit images, which offer better network performance. We have compared the performance of model networks consisting of 20 sensors transmitting 4-bit and 8-bit images using the performance metrics reliability, probability of failure and average packet delay. The comparisons are shown graphically in figure 1.

[Fig 1 plots 1.1 to 1.3]
Fig 1: 1.1: Average packet delay vs. offered load. 1.2: Fault probability vs. offered load. 1.3: Reliability vs. offered load.

B. Image Enhancement

Images with very low or very high exposure are difficult to visualize and to work with in computer vision applications, so an image enhancement algorithm is generally used to improve the visual quality of the image. In the proposed algorithm, if the gray values assigned to the thresholded classes are chosen to be equally distributed over the gray scale, then even if the image gray values are confined within a very narrow region of the gray scale, the resulting image after thresholding with a higher number of thresholds will be enhanced. This can be viewed from the images presented in figure 3.
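A minimal sketch of this enhancement idea is given below; it reuses the hypothetical class labels produced by the earlier compression snippet, and the helper name is an assumption, while the equal spacing of gray values over 0-255 follows the description above.

```python
import numpy as np

def enhance_by_equal_spacing(labels, num_classes):
    """Map each thresholded class to gray values equally spaced over 0-255,
    stretching a narrow-range image across the full gray scale."""
    reps = np.linspace(0, 255, num_classes).astype(np.uint8)
    return reps[labels]

# Example: labels from compress_by_thresholds(img, thresholds) with 16 classes
# enhanced = enhance_by_equal_spacing(labels, 16)
```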


[Fig 2 image panels and plots 2.1.a to 2.4.i]

Fig 2: With x varying from 1 to 4 for the four images, 2.x.a to 2.x.f represent the compressed images with bit representations 2 to 7 respectively, 2.x.g represent the original images, 2.x.h represent the plots of memory consumed for storing the compressed and the original images, and 2.x.i are plots of the PSNR and WPSNR values of the compressed images.


[Fig 3 image panels a to f]
Fig 3: a, c, e are original images; b, d, f are their corresponding enhanced images.

V. CONCLUSION

In this paper, an image thresholding based algorithm is proposed for image compression. Shannon's entropy is maximized to obtain the best thresholds. Differential Evolution is used to reduce the computational time to a great extent. The background of the proposed algorithm is explained with proper reasoning. Several applications of the proposed algorithm have also been pointed out. It may be mentioned here that several other entropy measures exist which perform well in segmenting the image properly. Future work will be centered on making use of different entropy measures, to find out which measure suits the problem best and further enhances the PSNR values.

REFERENCES

[1] David A. Clunie, "Lossless compression of grayscale medical images: effectiveness of traditional and state-of-the-art approaches", Proc. SPIE 3980, Medical Imaging 2000: PACS Design and Evaluation: Engineering and Clinical Issues, 74, May 18, 2000.
[2] Paul G. Howard, "The design and analysis of efficient lossless data compression systems", Technical Report No. CS-93-28, Brown University, June 1993.
[3] R. Kountchev, M. Milanova, V. Todorov, R. Kountcheva, "Adaptive Compression of Compound Images", 14th International Workshop on Systems, Signals and Image Processing, 2007, and 6th EURASIP Conference focused on Speech and Image Processing, Multimedia Communications and Services, pp. 133-136, 2007.
[4] T. Chen, K. Chuang, "A pseudo lossless image compression method", IEEE Congress on Image and Signal Processing, vol. 2, pp. 610-615, 2010.
[5] H. Nobuhara, W. Pedrycz, K. Hirota, "Fast solving method of fuzzy relational equation and its application to lossy image compression/reconstruction", IEEE Transactions on Fuzzy Systems, vol. 8, issue 3, pp. 325-334, June 2000.
[6] Doaa Mohammed, Fatma Abou-Chadi, "Image Compression Using Block Truncation Coding", Cyber Journals: Multidisciplinary Journals in Science and Technology, Journal of Selected Areas in Telecommunications (JSAT), February Edition, 2011.
[7] S. G. Chang, B. Yu, M. Vetterli, "Adaptive wavelet thresholding for image denoising and compression", IEEE Transactions on Image Processing, vol. 9, issue 9, pp. 1532-1546, September 2000.
[8] Y. Fu, F. He, B. Song, "A New Compression Method of Gray Image", IEEE ICIECS, pp. 1-4, December 2009.
[9] A. Kumar, P. Singh, "An image compression algorithm for gray scale images", IEEE ETNCC, pp. 342-346, April 2011.
[10] H. Yue, X. Sun, F. Wu, J. Yang, "SIFT-Based Image Compression", IEEE ICME, pp. 473-478, July 2012.
[11] G. K. Wallace, "The JPEG still picture compression standard", IEEE Transactions on Consumer Electronics, vol. 38, issue 1, pp. xviii-xxxiv, 1992.
[12] D. S. Taubman, M. W. Marcellin, "JPEG2000: standard for interactive imaging", Proceedings of the IEEE, vol. 90, issue 8, pp. 1336-1357, 2002.
[13] K. Uma, P. G. Palanisamy, P. G. Poornachandran, "Comparison of image compression using GA, ACO and PSO techniques", IEEE ICRTIT, pp. 815-820, June 2011.
[14] M. P. Albuquerque, I. A. Esquef, A. R. G. Mello, "Image thresholding using Tsallis entropy", Pattern Recognition Letters, vol. 25, issue 9, pp. 1059-1065, 2004.
[15] R. Benzid, D. Arar, M. Bentoumi, "A fast technique for gray level image thresholding and quantization based on the entropy maximization", International Multi-Conference on Systems, Signals and Devices, pp. 1-4, 2008.
[16] P. K. Sahoo and G. Arora, "A thresholding method based on two dimensional Renyi's entropy", Pattern Recognition, vol. 37, pp. 1149-1161, 2004.
[17] Suo Lan, Liu Li Zhi Kong, Jian Guo Wang, "Segmentation Approach Based on Fuzzy Renyi Entropy", Pattern Recognition, pp. 1-4, 2010.
[18] Wen Jie Tian, Yu Geng, Ji Cheng Liu, Lan Ai, "Maximum Fuzzy Entropy and Immune Clone Selection Algorithm for Image Segmentation", Information Processing, Asia-Pacific Conference, vol. 1, pp. 38-41, 2009.
[19] N. Otsu, "A threshold selection method from gray level histograms", IEEE Transactions on Systems, Man and Cybernetics, vol. 9, pp. 62-66, 1979.
[20] J. N. Kapur, P. K. Sahoo, A. K. C. Wong, "A new method for gray-level picture thresholding using the entropy of the histogram", Computer Vision, Graphics and Image Processing, vol. 29, pp. 273-285, 1985.
[21] M. S. Zhao, A. M. N. Fu, H. Yan, "A technique of three level thresholding based on probability partition and fuzzy 3-partition", IEEE Transactions on Fuzzy Systems, vol. 9, no. 3, pp. 469-479, 2001.
[22] W. B. Tao, J. W. Tian, J. Liu, "Image segmentation by three-level thresholding based on maximum fuzzy entropy and genetic algorithm", Pattern Recognition Letters, vol. 24, pp. 3069-3078, 2003.
[23] Pl. Kannappan, "On Shannon's entropy, directed divergence and inaccuracy", Probability Theory and Related Fields, vol. 22, pp. 95-100, 1972.
[24] R. Storn, K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces", Journal of Global Optimization, vol. 11, pp. 341-359, 1997.
[25] The Berkeley Segmentation Dataset and Benchmark (BSDS500). Link: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/
[26] Vision and Autonomous Systems Center's Image Database, Carnegie Mellon University. Link: http://vasc.ri.cmu.edu/idb/
[27] J. Kennedy, R. Eberhart, "Particle swarm optimization", IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.
[28] K. Deb, A. Anand, and D. Joshi, "A computationally efficient evolutionary algorithm for real-parameter optimization", Evolutionary Computation, 10(4), pp. 371-395, 2002.
11, pp 341 1 – 359,1997 The Berkeley Segmentation Datasset and Benchmark (BSDS500)Link:http://www.eecs.berkeley.edu/R Research/Projects/CS/vision/grou ping/segbench/ C Image Database, Carnegie Vision and Autonomous Systems Center's Mellon University, Link: http://vasc.ri.cmu.edu/idb/ J.Kennedy,R.Eberhat, “Particle swarm optimization”, IEEE International Conference on Neura al Networks, vol. 4, pp1942-1948, 1995 hi, “A computationally efficient K. Deb, A. Anand, and D. Josh evolutionary algorithm for real-parameter optimization”, Evolutionary Computation, 10(4), pp. 371-395, 20 002
