Hyperspectral Image Compression Algorithms – A Review

K. Subhash Babu*1, Dr. K. K. Thyagharajan2, Dr. V. Ramachandran3, Geeta Santhosh4

*1 Research Scholar, CSE Dept., Jawaharlal Nehru Technological University Kakinada, Kakinada 533001, Andhra Pradesh, India
2 Dean (Academic), RMD Engineering College, Kavaraipettai 601206, Tamil Nadu, India
3 Director, NIT Nagaland, Dimapur 797103, Nagaland, India
4 Associate Professor, MCA Dept., SSN College of Engineering, Kalavakkam, Chennai 603110, Tamil Nadu, India

[email protected], [email protected], [email protected], [email protected]

Abstract—Satellite-based remote sensing applications require the collection of high volumes of image data, of which hyperspectral images are a particular type. Hyperspectral images are collected by high-resolution instruments over a very large number of wavelengths on board a satellite or airborne vehicle and then sent onwards to a ground station for further processing. Compression of hyperspectral images is undertaken to reduce the on-board memory requirement, the communication channel capacity and the download time. Compression algorithms can be either lossless or lossy. The purpose of this paper is to review a number of compression techniques employed for on-board processing of hyperspectral image data to reduce the transmission overhead. A review of the theory of hyperspectral images and the compression techniques employed therein, with emphasis on recent research developments, is presented. Recent research on video compression techniques for hyperspectral imaging is also discussed.

Keywords—Hyperspectral imaging, compression, lossless, lossy

1 Introduction

Spectral imaging is the study of partial or complete spectral information collected at every location in an image plane. This spectral information can be characterized by the range and resolution of the spectrum and by the number, width and adjacency of bands. Based on these distinctions we have different terms, namely multilayer imaging, multispectral imaging, superspectral imaging and hyperspectral imaging. Spectral images are often represented as an image cube, a type of data cube.

Spectral imaging enlarges the size of a pixel until it covers an area for which a spectral response can be obtained for the surface cover to be discriminated. The pixels need not be placed immediately adjacent to each other to facilitate identification of the pixel contents, since the identification of a pixel's contents can be based on the spectral response of that pixel alone. This greatly reduces the number of pixels needed to represent a given area.

A hyperspectral image can be regarded as a stack of individual images of the same spatial scene, each representing the scene viewed in a narrow portion of the electromagnetic spectrum [1], as depicted in Fig. 1. These individual images are referred to as spectral bands. Hyperspectral images typically consist of more than 200 spectral bands; the voluminous amounts of data comprising hyperspectral images make them good candidates for data compression. In hyperspectral imaging, each pixel records the intensity of the light reflected by a specified area at different wavelengths, divided into a number of consecutive bands. Hyperspectral images are represented as a two-dimensional matrix or image cube [2], as shown in Fig. 2, where each pixel is a vector having one component for each spectral band; a minimal sketch of this representation is given below.

Hyperspectral images that provide high-resolution spectral information are useful for many remote sensing applications, but they generate massively large image data sets. Hence the capture and onward transport of large hyperspectral image data sets entail efficient processing, storage and transmission capabilities. One way to address this problem is through image compression.
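As an illustration of the image-cube representation, the following minimal Python sketch stores a cube as a three-dimensional array and extracts a single pixel's spectrum. The array sizes, the 12-bit value range and the row-column-band ordering are assumptions made for the example, not a prescribed format.

```python
import numpy as np

# A hyperspectral cube is commonly stored as a 3-D array of shape
# (rows, columns, spectral bands); sizes here are illustrative only.
rows, cols, bands = 512, 512, 224          # an AVIRIS-like scene
cube = np.random.randint(0, 4096, size=(rows, cols, bands), dtype=np.uint16)

# Each pixel is a vector with one component per spectral band.
pixel_spectrum = cube[100, 200, :]         # shape: (224,)

# One spectral band is an ordinary 2-D grayscale image of the scene.
band_image = cube[:, :, 50]                # shape: (512, 512)

print(pixel_spectrum.shape, band_image.shape)
```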

Fig. 1. The electromagnetic spectrum

2 Conventional Compression Algorithms


Compression algorithms take digital image files as input; the input can also be a text or sound file. The algorithm constructs an efficient encoded representation using as few bits as possible, so that the file can be transmitted with minimum usage of bandwidth and reconstructed at the destination with minimum loss of bits. Depending on the quality of reconstruction desired, compression algorithms can be classified as lossless or lossy.

In lossless compression schemes, the image formed at the end of the decompression process is an exact replica of the original image. Lossless algorithms strive to transmit the image data without any loss of information; due to this constraint, only a modest amount of compression is possible. In general, lossless compression is employed for character data, numerical data, icons and executable components.

Lossy algorithms, on the other hand, are based on the presumption that it is not necessary to transmit the image file in its entirety, as not all parts of the image are significant. Extraneous or insignificant image parts can be removed without affecting the visual impact of the image, and redundant information can be discarded rather than transmitted. Lossy compression algorithms therefore provide a higher degree of compression, albeit with some loss of information and degradation in image quality. Lossy compression has been extensively employed for the compression of multimedia files and has been a trigger for the immense popularity of multimedia. This is possible because the human visual and auditory systems are insensitive to certain kinds and levels of distortion; selective loss of information can thus significantly enhance the compression performance of lossy algorithms. The sketch below contrasts the two approaches on synthetic data.
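As a hedged illustration, not drawn from any of the reviewed algorithms, the following Python sketch compresses a smooth synthetic image losslessly with zlib and then applies a crude uniform quantization before the same entropy coder to show the lossy trade-off. The data and the quantization step are arbitrary choices for the example.

```python
import zlib
import numpy as np

# Smooth synthetic 16-bit "image": a repeated horizontal gradient.
data = np.tile(np.arange(256, dtype=np.uint16), (64, 1))
raw = data.tobytes()

# Lossless: the decompressed bytes are an exact replica of the input.
packed = zlib.compress(raw)
assert zlib.decompress(packed) == raw
print("lossless ratio:", len(raw) / len(packed))

# Lossy (illustrative only): coarse quantization discards low-order
# detail before entropy coding, trading fidelity for fewer bits.
step = 16
quantized = (data // step).astype(np.uint16)
packed_lossy = zlib.compress(quantized.tobytes())
restored = quantized * step            # reconstruction is only approximate
print("lossy ratio:", len(raw) / len(packed_lossy))
print("max error:", int(np.abs(data.astype(int) - restored.astype(int)).max()))
```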

Fig. 2. Two-dimensional projection of a hyperspectral cube

3 Classification of Compression Algorithms

Some of the important classifications [3] of compression algorithms are:

A. Statistical (SI): Statistical coding uses statistical information to replace a fixed-size code of symbols or patterns with a, hopefully, shorter variable-size code. Generally the most common patterns are replaced with shorter codes and the least common ones with longer codes.

B. Dictionary (DI): Dictionary approaches build a list of frequently occurring words and phrases and associate an index with each. During compression, the words or phrases are replaced with their respective indices.

C. Pattern Finding (PF): Pattern finding approaches seek to find and extract regular patterns from the file and perform compression based on these patterns. The most common pattern finding approach, run-length coding, takes the consecutive symbols or patterns (runs) that recur in the source file and replaces them with a code pair: either the symbol being repeated together with the number of times it occurs, or a length followed by a sequence of non-repetitive symbols. A sketch of run-length coding is given at the end of this section.

D. Approximation (AP): Approximation approaches employ quantization. For a given set of data, certain computations are applied, approximations are arrived at, and a simpler form of the data is generated.

E. Programmable Solutions (PR): Programming solutions entail creating an entirely new programming language for encoding the data, thereby reducing the file size.

F. Wavelet Solutions (WA): A wavelet-based compression solution approximates the data by adding together a number of basic patterns, known as wavelets, weighted by different coefficients. These coefficients replace the original patterns and make the input file compact. The cosine function, as used in the discrete cosine transform, is the most prevalent basis in practice.

G. Adaptive Solutions (AD): Adaptive solutions try to adapt to local conditions in an input file. Certain patterns may be found only at the beginning, others in the middle and others only at the end; adaptive solutions adjust accordingly while performing compression.

In general, dictionary-based compression algorithms are better suited for text data, while statistical methods are better suited for image data. Some statistical algorithms, such as Huffman coding and arithmetic coding, are used for text compression too, but whenever a file has fairly long repeated words or phrases, dictionary-based algorithms score above Huffman and arithmetic coding. Dictionary schemes, however, fare worse where there are no long repeated words and phrases. Adaptive techniques are generally used when the amount of overhead in a file has to be reduced: statistical methods usually ship a table summarizing the statistical information at the beginning of a file, and adaptive methods can reduce this amount because they often do not ship any additional data. Most statistical approaches can be adapted through one simple idea: instead of building a statistical model for all of the characters or tags in a file, the model is constructed for a small group and updated frequently. The programming approach entails compressing data into executable form, mostly in situations where the data are well structured; it is not suited for audio files.
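To make the run-length idea concrete, here is a minimal Python sketch of the simplest (symbol, count) variant described above; it omits the literal-sequence escape used by practical byte-oriented codecs.

```python
def rle_encode(data: bytes) -> list:
    """Replace runs of a repeated symbol with (symbol, count) pairs."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

def rle_decode(pairs) -> bytes:
    """Expand (symbol, count) pairs back into the original byte string."""
    return b"".join(bytes([sym]) * count for sym, count in pairs)

msg = b"aaaabbbcccccd"
pairs = rle_encode(msg)
assert rle_decode(pairs) == msg
print(pairs)   # [(97, 4), (98, 3), (99, 5), (100, 1)]
```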

4 Compression Techniques for Hyperspectral Images

A typical compression algorithm exploits statistical structure in the image data set. In hyperspectral images, two types of correlation are available: spatial correlation, which exists between adjacent pixels in the same band, and spectral correlation, which exists between co-located pixels in adjacent bands. We have considered only spectral-correlation-based studies for our review; the short sketch below shows how strongly adjacent bands can correlate.
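As a hedged illustration of spectral correlation, on synthetic data rather than a result from any reviewed paper, the following sketch computes the correlation coefficient between each pair of adjacent bands:

```python
import numpy as np

def band_correlation(cube: np.ndarray) -> np.ndarray:
    """Correlation coefficient between each pair of adjacent spectral bands.

    `cube` has shape (rows, cols, bands); returns an array of length bands-1.
    """
    flat = cube.reshape(-1, cube.shape[2]).astype(np.float64)
    return np.array([
        np.corrcoef(flat[:, b], flat[:, b + 1])[0, 1]
        for b in range(cube.shape[2] - 1)
    ])

# Synthetic cube with strong spectral correlation: neighboring bands
# differ only by small noise, purely for illustration.
rng = np.random.default_rng(0)
base = rng.random((64, 64, 1))
cube = base + 0.01 * rng.random((64, 64, 32))
print(band_correlation(cube)[:5])   # values close to 1.0
```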

4.1 Lossless Compression Techniques for Hyperspectral Imaging

Considerable research has been done on lossless compression of hyperspectral images. In the following, we describe the various techniques employed and the results obtained.

Keyan Wang et al. employ adaptive edge-based prediction in [4]. By concentrating on the increased affinity between pixels along image edges, they put forth a new high-performance compression algorithm. The algorithm uses different predictors for different modes of prediction: intraband prediction is done using an improved median predictor (IMP) that can detect diagonal edges, while an adaptive edge-based predictor (AEP) is used for interband prediction, taking into account the redundancy in the spectrum. For a band that has no prediction mode, the pixels are entropy coded directly. The algorithm shows an improvement in the lossless compression ratio both for standard Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) 1997 hyperspectral images and for the newer Consultative Committee for Space Data Systems (CCSDS) test images.

Qiang Zhang et al. discuss a new double-random projection method in [5] that uses randomized dimensionality reduction techniques for efficiently capturing global correlation structures, combined with residual encoding. The method, extended from randomized matrix decomposition methods, achieves rapid lossless compression and reconstruction of hyperspectral imaging (HSI) data. They show empirically that the entropy of the residual image decreases significantly for HSI data and expect conventional entropy-based integer coding methods to perform well on these low-entropy residuals. The paper concludes that integrating advanced residual coding algorithms with the randomized algorithm is an important topic for future study.

A transform-based lossless compression scheme for hyperspectral images is presented by Kai-jen Cheng et al. in [6], inspired by Shapiro's (1993) Embedded Zerotree Wavelet (EZW) algorithm. The proposed method employs a hybrid transform combining an integer Karhunen-Loève transform (KLT) and an integer discrete wavelet transform (DWT). The method was applied to AVIRIS images and compared with other state-of-the-art image compression techniques, showing higher efficiency and a higher compression ratio than similar algorithms.

Nor Rizuan Mat Noor et al. propose an algorithm in [7] that performs lossless hyperspectral image compression using the Integer KLT, an integer approximation of the Karhunen-Loève transform. The paper explores the effect of single-bit errors on the performance of the Integer KLT; the algorithm is of low complexity and is capable of detecting and correcting a single-bit error.

Changguo Li et al. discuss in [8] how their algorithm exploits both interband and intraband statistical correlation, and in turn gives superior compression performance compared with existing classical algorithms. In this paper, linear prediction is combined with gradient adjusted prediction, and an interband version of gradient adjusted prediction (GAP) is proposed.

Jinwei Song et al. in [9] propose a lossless compression algorithm based on a recursive least squares (RLS) filter and show that it delivers state-of-the-art performance with relatively low complexity. The paper also surveys a number of conventional lossless compression algorithms for red-green-blue images, such as JPEG-lossless (JPEG-LS), differential JPEG and clustered differential pulse code modulation.

Jarno Mielikainen in [10] uses lookup tables to build an algorithm for hyperspectral image compression. By using interband correlation as the basis for prediction, this low-complexity algorithm achieves a good compression ratio, making it highly suitable for onboard image compression.

Lin Bai et al. present in [11] a context-prediction-based lossless compression algorithm for hyperspectral images. The algorithm sequentially performs linear prediction, 3D context prediction and arithmetic coding; it exhibits low complexity and a compression ratio of 3.01, which is better than many state-of-the-art algorithms.

Aaron B. Kiely et al. compare in [12] the compression performance of standard AVIRIS hyperspectral raw images, which do not contain appreciable calibration-induced artifacts, with that of images containing calibration artifacts, and conclude that raw images achieve worse compression performance than processed images.

4.2 Lossy Compression Techniques for Hyperspectral Imaging

Lucana Santos et al. in [13] address the need for new hardware architectures for implementing hyperspectral image compression algorithms onboard satellites. Their work demonstrates the use of Graphics Processing Units (GPUs) for greatly enhancing computation speed on parallel tasks and data. The paper specifies the parallelization strategy used, discusses solutions to problems encountered when accelerating hyperspectral compression algorithms, and examines the effects of various configuration parameters on the compression algorithms.

An architecture is proposed by Lucana Santos et al. in [14] that speeds up the compression task by implementing a parallel algorithm for onboard lossy hyperspectral image compression on the GPU. The algorithm uses block-based operation and provides an optimized GPU implementation that incurs a negligible overhead compared with the original single-threaded version.

Barbara Penna et al. in [15] propose a lossy algorithm for pixel anomaly detection that uses a hybrid of a spectral KLT and spatial JPEG 2000 and achieves better rate-distortion performance. It can be employed for both coding and decoding of images. A sketch of the spectral KLT stage common to such transform-based schemes follows below.
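As a hedged sketch of the spectral decorrelation stage used by KLT-based schemes such as [6] and [15] (forward transform only; the spatial coding, rate allocation and integer approximations of the actual papers are not shown), one might write:

```python
import numpy as np

def spectral_klt(cube: np.ndarray):
    """Minimal sketch of spectral decorrelation with the KLT, i.e., PCA
    across the band dimension. A real codec would follow this with
    spatial coding of each component (e.g., JPEG 2000); none of that
    is shown here.
    """
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(np.float64)   # pixels as band-vectors
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)             # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                # strongest components first
    basis = eigvecs[:, order]
    components = (x - mean) @ basis                  # decorrelated "bands"
    return components.reshape(rows, cols, bands), mean, basis

# Most of the signal energy concentrates in the first few components,
# which is what makes the subsequent spatial coding stage efficient.
```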

4.3 Lossless & Lossy Compression Techniques for Hyperspectral Imaging Yongjian Nian et. al. Talk about distributed source coding (DSC) compression algorithm in [16] for both lossy and lossless compressions. The complexity of the algorithm is found to be very low. It uses regression model to improve the compression performance of distributed lossless compression algorithm, and optimal scalar quantization for distributed lossy compression. The proposed algorithm is on par with other similar algorithms on competitive compression performance. This encoder has low complexity and hence it is most suitable for onboard compression.

5 Comparative Study of Hyperspectral Compression Algorithms

Table 1: Comparison of Hyperspectral Compression Algorithms. Each entry lists the algorithm and type, the classification used (cf. Section 3), the analysis criteria, and the reported results.

1. Adaptive edge-based prediction (Lossless)
   Classification used: Statistical (least-squares optimization and entropy encoding)
   Analysis criteria: Compression ratio
   Results: Makes good use of the high correlation among edge pixels to formulate a new lossless compression algorithm for hyperspectral images, namely adaptive edge-based prediction. The results show an improvement in the lossless compression ratio for AVIRIS 1997 hyperspectral images.

2. Double random projection method (combination of a randomized method and a residual coding algorithm) (Lossless)
   Classification used: Approximation (low-rank approximation)
   Analysis criteria: Use of randomized dimensionality reduction techniques for efficiently capturing global correlation structures, plus residual encoding
   Results: Employs randomized dimensionality reduction for efficiently capturing global correlation structures with residual encoding, and provides lossless compression. The entropy of the residual decreases significantly for HSI data.

3. 3D binary EZW algorithm (Lossless)
   Classification used: Wavelet solutions
   Analysis criteria: Compression ratio
   Results: Combines the merits of the EZW and SPIHT algorithms; computationally simpler and efficient, with a higher compression ratio than comparable algorithms.

4. Hybrid transform (integer Karhunen-Loève transform (KLT) + integer discrete wavelet transform (DWT)) (Lossless)
   Classification used: Hybrid of wavelet solutions and statistical
   Analysis criteria: Compression ratio
   Results: Low complexity; capable of detecting and correcting a single-bit error. Experimental results on compression performance, latency and power consumption are reported.

5. Linear prediction + gradient adjusted prediction (GAP); an interband version of GAP is proposed (Lossless)
   Classification used: Statistical
   Analysis criteria: Bit-rate compression
   Results: Achieves better compression performance than existing classical algorithms; encoder complexity is found to be low.

6. Recursive least squares (RLS) filter (Lossless)
   Classification used: Statistical
   Analysis criteria: Complexity
   Results: Gives state-of-the-art performance with low complexity; attempts to eliminate both spatial and spectral correlation in hyperspectral images. Highly suitable for onboard compression in real time.

7. Graphics Processing Units (GPUs) (Lossy)
   Classification used: Statistical (use of Golomb entropy code)
   Analysis criteria: Throughput
   Results: A new hardware architecture supporting parallelization shows enhanced performance, established by comparing the GPU implementation with a single-threaded CPU implementation.

8. Block-based operation and optimized GPU implementation (Lossy)
   Classification used: Wavelet solutions (discrete cosine transforms)
   Analysis criteria: Rate-distortion optimization
   Results: Gives competitive compression performance and low encoder complexity; the results make a good case study for the usage of space-qualified GPUs.

9. Hyperspectral image compression in band-interleaved-by-line (BIL) format using lookup tables (Lossless)
   Classification used: Prediction based on interband correlation
   Analysis criteria: Compression ratio
   Results: Achieves low algorithmic complexity and high compression ratios compared with other standard algorithms; simplifies both the compression and decompression processes.

10. Spectral Karhunen-Loève transform followed by spatial JPEG 2000 (Lossy)
    Classification used: Wavelet based
    Analysis criteria: Rate-distortion optimization
    Results: Anomaly detection performance is better than state-of-the-art algorithms.

11. Hybrid algorithm based on context prediction (Lossless)
    Classification used: Prediction based on spectral correlation
    Analysis criteria: Compression ratio
    Results: Low complexity; achieves a compression ratio of 3.01.

12. Calibration-induced-artifact algorithm (Lossless)
    Classification used: Statistical
    Analysis criteria: Compression ratio
    Results: Calibrated images give better compression performance.

6 Video Compression Algorithms

Video compression algorithms have gained widespread popularity on account of their simplicity, cost and interoperability, and efforts have been made to explore the feasibility of employing them for the compression of hyperspectral images. Table 2 summarizes some of these algorithms. Timothy S. Wilkinson et al. compare in [17] various video-based coding techniques that can be applied to hyperspectral imagery. Zixiang Xiong discusses the compression of video images based on distributed source coding principles in [18]. Lucana Santos et al. in [19] evaluate the performance of the H.264/AVC video coding standard for lossy compression of hyperspectral images and conclude that it can reduce the design complexity of future encoders.

Table 2: Video Compression Algorithms for Hyperspectral Image Data. Each entry lists the algorithm, the analysis criteria, and the observations.

1. MPEG and JPEG
   Analysis criteria: Compression fidelity
   Observations: Lossless coding; achieves a reasonable compression ratio.

2. Generic algorithm for multiterminal video coding
   Analysis criteria: Slepian-Wolf coded quantization
   Observations: Witsenhausen-Wyner video coding, layered Wyner-Ziv video coding and multiterminal video coding have been compared with the standard JMVM joint encoding scheme.

3. H.264/AVC video coding standard
   Analysis criteria: Compression ratio, accuracy of unmixing and computation time
   Observations: Reduction in the complexity of potential low-power/real-time hyperspectral encoders.

7 Conclusion

Although a wide variety of compression algorithms are available, there is no single algorithmic solution that is universally applicable. Non-adaptive algorithms do not modify their operation based on the characteristics of the input, whereas adaptive algorithms take the special characteristics of the data into consideration and modify their operation accordingly. Symmetric algorithms use the same algorithm for both compression and decompression, applied in reverse order of each other, while asymmetric ones use different algorithms for compression and decompression. Researchers mostly combine a number of the algorithms mentioned in Section 3 to arrive at a hybrid algorithm best suited to their imaging application, and commercially available software packages do the same. Most of the papers considered in this study employ statistical or wavelet-based compression algorithms, or a combination of both. In addition, the algorithms proposed in most of the studies are lossless and focus on encoding schemes only. Further, video compression algorithms can reduce the design complexities present in hyperspectral image data compression algorithms.

References

1. Canada Centre for Remote Sensing, "Fundamentals of Remote Sensing", Remote Sensing Tutorial, 2013.
2. http://commons.wikimedia.org/wiki/File:HyperspectralCube.jpg
3. Peter Wayner, "Data Compression for Real Programmers", Morgan Kaufmann, 1999.
4. Keyan Wang, Liping Wang, Huilin Liao, Juan Song, Yunsong Li, "Lossless compression of hyperspectral images using adaptive edge-based prediction", Proc. SPIE 8871, Satellite Data Compression, Communications, and Processing IX, 887105, September 24, 2013; doi:10.1117/12.2022426.
5. Qiang Zhang, V. Paúl Pauca, Robert Plemmons, "Randomized methods in lossless compression of hyperspectral data", Journal of Applied Remote Sensing, Vol. 7, 074599, 2013.
6. Kai-jen Cheng, Jeffrey Dill, "Hyperspectral images lossless compression using the 3D binary EZW algorithm", Proc. SPIE 8655, Image Processing: Algorithms and Systems XI, 865515, Burlingame, California, USA, February 2013; doi:10.1117/12.2002820.
7. Nor Rizuan Mat Noor, Tanya Vladimirova, "Parallelised Fault-Tolerant Integer KLT Implementation for Lossless Hyperspectral Image Compression on Board Satellites", 2013 NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2013).
8. Changguo Li, Ke Guo, "Lossless Compression of Hyperspectral Images Using Interband Gradient Adjusted Prediction", 4th IEEE International Conference on Software Engineering and Service Science (ICSESS 2013), 23-25 May 2013; doi:10.1109/ICSESS.2013.6615408.
9. Jinwei Song, Zhongwei Zhang, Xiaomin Chen, "Lossless compression of hyperspectral imagery via RLS filter", Electronics Letters, Vol. 49, No. 16, 1 August 2013.
10. Jarno Mielikainen, "Lossless Compression of Hyperspectral Images Using Lookup Tables", IEEE Signal Processing Letters, Vol. 13, No. 3, March 2006.
11. Lin Bai, Mingyi He, Yuchao Dai, "Lossless Compression of Hyperspectral Images Based on 3D Context Prediction", 3rd IEEE Conference on Industrial Electronics and Applications, pp. 1845-1848, 3-5 June 2008.
12. Aaron B. Kiely, Matthew A. Klimesh, "Exploiting Calibration-Induced Artifacts in Lossless Compression of Hyperspectral Imagery", IEEE Transactions on Geoscience and Remote Sensing, Vol. 47, No. 8, August 2009.
13. Lucana Santos, Enrico Magli, Raffaele Vitulli, Antonio Núñez, José F. López, Roberto Sarmiento, "Lossy hyperspectral image compression on a graphics processing unit: parallelization strategy and performance evaluation", Journal of Applied Remote Sensing; doi:10.1117/1.JRS.7.074599.
14. Lucana Santos, Enrico Magli, Raffaele Vitulli, José F. López, Roberto Sarmiento, "Highly-Parallel GPU Architecture for Lossy Hyperspectral Image Compression", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 6, No. 2, April 2013.
15. Barbara Penna, Tammam Tillo, Enrico Magli, Gabriella Olmo, "Hyperspectral Image Compression Employing a Model of Anomalous Pixels", IEEE Geoscience and Remote Sensing Letters, Vol. 4, No. 4, October 2007.
16. Yongjian Nian, Mi He, Jianwei Wan, "Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding", Mathematical Problems in Engineering, Vol. 2013, Article ID 825673, Hindawi, 2013.
17. Timothy S. Wilkinson, Val D. Vaughn, "Application of video based coding to hyperspectral imagery", Proc. SPIE 2821, Hyperspectral Remote Sensing and Applications, 44, November 6, 1996; doi:10.1117/12.257183.
18. Zixiang Xiong, "Video Compression Based on Distributed Source Coding Principles", Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, 2009.
19. L. Santos, G. M. Callicó, J. F. López, R. Sarmiento, "Performance Evaluation of the H.264/AVC Video Coding Standard for Lossy Hyperspectral Image Compression", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 5, No. 2, pp. 451-461, 2012; doi:10.1109/JSTARS.2011.2173906.
