Compression of Digital Images by Block Truncation Coding: A Survey © The Computer Journal, 37 (4), 308-332, 1994
Pasi Fränti, Olli Nevalainen and Timo Kaukoranta Department of Computer Science, University of Turku Lemminkäisenkatu 14 A, FIN-20520 Turku, FINLAND E-mail:
[email protected]
Abstract: Block truncation coding is a lossy moment preserving quantization method for compressing digital gray-level images. Its advantages are simplicity, fault tolerance, relatively high compression efficiency and good quality of the decoded image. Several improvements of the basic method have recently been proposed in the literature. In this survey we study the basic algorithm and its improvements by dividing the algorithm into three separate tasks: performing the quantization, coding the quantization data, and coding the bit plane. Each phase of the algorithm is analyzed separately. On the basis of the analysis, a combined BTC algorithm is proposed and compared to the standard JPEG algorithm. Index terms: image compression, quantization methods, lossy compression techniques.
1. INTRODUCTION
Block truncation coding (BTC) is a simple and fast lossy compression technique for digitized gray scale images, originally introduced by Delp and Mitchell [10]. The key idea of BTC is to perform moment preserving (MP) quantization for blocks of pixels so that the quality of the image remains acceptable while the demand for storage space decreases. Even though the compression gain of the algorithm is inferior to that of the standard JPEG compression algorithm [46], BTC has gained popularity due to its practical usefulness. Several improvements of the basic method have recently been proposed in the literature. In this survey we study BTC and its improvements by dividing the algorithm into three separate tasks: performing the quantization of a block, coding the quantization data, and coding the bit plane. Each phase of the algorithm is analyzed separately by first introducing various improvements presented in the literature, including some new ideas of our own, and then comparing their performance against each other. On the basis of the analysis, a new combined BTC algorithm is proposed and compared to the JPEG algorithm. We start in Section 2 by recalling the factors characterizing the usefulness of compression techniques. The original BTC method is then briefly described in Section 3. Variants of the basic algorithm are studied in Section 4 in the following way.
Several alternatives for the quantization method are presented in Section 4.1 [10, 14, 19, 23, 31, 47]. These methods either try to preserve certain moments [10, 23, 31], or minimize a fidelity criterion, which is usually mean square error (MSE) or mean absolute error (MAE). They can also aim at the same goal by using a heuristic algorithm [14, 19] or by performing indirect quantization via a neural network [47]. The quantization data of a block can be expressed either by two statistical values (usually mean and standard deviation), or by the quantization level values themselves. For these two approaches, several coding methods are discussed in Section 4.2. One can significantly reduce the amount of data at the cost of image quality by using vector quantization [61] or discrete cosine transform [73]. Remarkable savings are also possible via entropy coding [16, 34] without any loss in quality. The application of any other lossless image coding method is also possible, and thus we propose the use of FELICS coding [25], whose implementation and testing are described in this paper. In Section 4.3, we review methods for reducing the storage size of the bit plane. Most of these select a representative for the bit plane from a smaller subset, which is either a root signal set of some filtering technique [3, 65, 66], a codebook of a vector quantization method [14, 61, 73], or designed ad hoc [41]. A different approach is the use of interpolation [76], or entropy coding [16]. BTC can also be applied hierarchically using variable sized blocks [27, 54]. With a large block size one can decrease the total number of blocks and therefore reduce the bit rate. On the other hand, small blocks of the hierarchy improve the image quality. The technique is studied in Section 5. Several extensions of BTC are discussed in Section 6, including 3-level quantization and generalization to n-level quantization [12, 19, 35, 53, 56].
Hybrid formulations of BTC are considered [10, 11], as well as some pre- and postprocessing techniques [37, 41]. The application of BTC to color [31, 74] and video images [24, 54] is briefly discussed. Some block to block adaptive approaches [14, 22, 37, 41] are discussed in Section 7. In Section 8, we evaluate the different methods experimentally, starting with the quantization methods in Section 8.1. We will observe that, because of unavoidable rounding errors, the moment preserving quantization does not preserve the first moment any better than the other tested quantization methods. It turns out that the moment preserving quantization is the worst in the MSE-sense, and that a simple and fast heuristic algorithm produces high quality decompressed images within only 5 % of the MSE-optimal quantization. Some aspects of the coding of the quantization data are studied in Section 8.2. FELICS turns out to be a very good method because of its efficiency and practical usefulness. Several bit plane coding methods are studied experimentally in Section 8.3. Among these, interpolation gives the best results in the MSE-sense. Hierarchical block decomposition is studied in Section 8.4. The choice of the minimum block size of the hierarchy turns out to be the most important factor when small MSE-values are desired. The need for block to block adaptive schemes is analyzed in Section 8.5. For typical images, more than 50 % of all MSE originates from the 10 % of the high variance blocks. Even so,
there was no evidence that the superiority between any two given bit plane coding methods was dependent on the variance of a block. In Section 8.6 we link together the most promising methods to form an efficient BTC algorithm. With this we perform several test runs and make comparisons with the standard JPEG algorithm. The overall performance of BTC turns out to be weaker than that of JPEG, but because of the high speed (especially at the decoding phase) and simplicity, BTC is useful in practical implementations.
Notations:
a, b    Quantization levels.
â, b̂    Predictions of the quantization levels.
B       Bit plane of a block.
bpp     Bits per pixel.
ei,j    Difference between the original and predicted pixel value.
k       Number of bits assigned to a single pixel.
m       Number of pixels in a block.
MAE     Mean absolute error of the reconstructed image.
MSE     Mean square error of the reconstructed image.
N       Number of pixels in an image.
q       Number of pixels in a block whose intensity is greater than or equal to xth.
SNR     Signal to noise ratio.
vth     Variance threshold.
xi      Pixel value of the original image.
x̂i      Predicted pixel value of xi.
xmax    Maximum pixel value.
xmin    Minimum pixel value.
xth     Quantization threshold.
x̄       Mean of xi-values.
x̄r      Mean of xir-values.
xl      Lower mean; average of xi < xth.
xh      Higher mean; average of xi ≥ xth.
yi      Pixel value of the reconstructed (decompressed) image.
α       Sample first absolute central moment.
σ       Sample standard deviation.
σ'      Standard deviation according to xth.
2. BACKGROUND
Let us consider a digital gray-level still image of N pixels where each pixel is expressed by k bits. The aim of image compression is to transform the image to a space efficient (compressed) form so that the information content is preserved as far as possible when decompressing the encoded image. The quality of a compression method can be characterized by considering a set of features which describe the usefulness of the method. These features include the bit rate, which gives the average number of bits per stored pixel of the image. Bit rate is the principal parameter of a compression technique because it measures the space efficiency of the technique. Another feature is the ability to preserve the information content. A compression technique is lossless if the decoded image is exactly the same as the original one; otherwise the technique is lossy. Most image compression techniques are of the latter type. The quality of the reconstructed image can be measured by the mean square error (MSE), mean absolute error (MAE), or signal to noise ratio (SNR), or it can be analyzed by a human photo analyst. Let xi stand for the i'th pixel value in the original image and yi the corresponding pixel in the reconstructed image (i=1,2,...,N). Row-major order of the pixels of the image matrix is assumed. The three quantitative measures are defined as
MSE = (1/N) ∑i=1..N (yi − xi)²        (1)

MAE = (1/N) ∑i=1..N |yi − xi|         (2)

SNR = −10 · log10(MSE / 255²)         (3)
These parameters are widely used but unfortunately they do not always coincide with the evaluations of a human expert [36]. For a survey of different quality measures, see Eskicioglu and Fisher [15]. The third feature of importance is the processing speed of the compression and decompression algorithms. In on-line applications the response times are often critical factors. In the extreme case a space efficient compression algorithm is useless if its processing time causes an intolerable delay in the image processing application. In some cases one can tolerate a longer compression time if the compression can be done as a background task; however, fast decompression is desirable. The fourth feature of importance is robustness against data transmission errors. The compressed file is normally an object of a data transmission operation. In the simplest form the transmission is between internal memory and secondary storage, but it can as well be between two remote sites via transmission lines. Visual comparison of the original and disturbed images gives a direct evaluation of the robustness. However, data transmission systems commonly contain fault tolerant internal data formats, so this property is not always obligatory.
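As a concrete illustration, the three measures of Eqs. (1)-(3) can be computed as follows. This is a minimal sketch in Python with NumPy; the function name `quality_metrics` is ours, not part of the paper.

```python
import numpy as np

def quality_metrics(original, reconstructed):
    """Compute MSE, MAE and SNR (Eqs. 1-3) for two gray-level images.

    Both arguments are arrays of pixel values in 0..255. The SNR
    definition follows the paper and references the peak value 255.
    """
    x = original.astype(np.float64).ravel()
    y = reconstructed.astype(np.float64).ravel()
    mse = np.mean((y - x) ** 2)               # Eq. (1)
    mae = np.mean(np.abs(y - x))              # Eq. (2)
    snr = -10.0 * np.log10(mse / 255.0 ** 2)  # Eq. (3), assumes mse > 0
    return mse, mae, snr
```

Note that for an identical pair of images the MSE is zero and the SNR is unbounded, so the sketch assumes a nonzero error.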
Among other interesting features of compression techniques we may mention the ease of implementation and the demand for working storage space. Nowadays these factors are often of secondary importance. From the practical point of view, the last but often not least feature is the complexity of the algorithm itself. The reliability of the software often depends to a high degree on the complexity of the algorithm.
3. ORIGINAL BTC ALGORITHM
The N pixel image is divided into smaller blocks of m pixels each, and each block is processed separately. A compact representation is sought locally for each block. The decompression process transforms the compressed blocks back into pixel values so that the decoded image resembles the original one as closely as possible. The mean value (x̄) and standard deviation (σ) are calculated and encoded for each block.
x̄ = (1/m) ∑i=1..m xi                  (4)

x2 = (1/m) ∑i=1..m xi²                (5)

σ = √(x2 − x̄²)                        (6)
Then a two-level quantization is performed: pixels with values less than the quantization threshold (xi < x̄) are represented by 0 in the bit plane of the block, and the remaining pixels by 1.
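The encoding and decoding of a single block can be sketched as follows. This is a minimal illustration of the two-level moment preserving quantizer, assuming the standard output levels a = x̄ − σ√(q/(m−q)) and b = x̄ + σ√((m−q)/q), which preserve the sample mean and standard deviation of the block; the function names are ours.

```python
import numpy as np

def btc_encode_block(block):
    """Encode one block with the basic BTC quantizer.

    Returns the block mean, standard deviation (Eqs. 4-6) and the bit
    plane: pixels below the mean map to 0, the rest to 1.
    """
    x = block.astype(np.float64)
    mean = x.mean()
    sigma = np.sqrt(max((x ** 2).mean() - mean ** 2, 0.0))
    bitplane = (x >= mean).astype(np.uint8)
    return mean, sigma, bitplane

def btc_decode_block(mean, sigma, bitplane):
    """Reconstruct a block from (mean, sigma, bit plane).

    The two output levels a and b are chosen so that the sample mean
    and standard deviation of the block are preserved.
    """
    m = bitplane.size
    q = int(bitplane.sum())      # pixels quantized to the high level
    if q == 0 or q == m:         # uniform block: single output level
        return np.full(bitplane.shape, mean)
    a = mean - sigma * np.sqrt(q / (m - q))    # low level
    b = mean + sigma * np.sqrt((m - q) / q)    # high level
    return np.where(bitplane == 1, b, a)
```

For a block that takes only two distinct values the reconstruction is exact, which is a quick way to check the moment preserving property.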
[Bar chart: bit rate (0.00–4.00 bpp) of the test images airplane, baboon, bigfoot, boats, bridge, camera, couple, girl, goldhill, lena, lenna, peppers, sail, tiffany, neworlea, xray, jupiter and galaxy for the hierarchy configurations 32→4, 32→2, 16→2, 8→2, 4→2 and 2→2.]
FIGURE 20. Compression efficiency versus block size.
[Bar chart: MSE (0–120) of the same test images for the hierarchy configurations 32→4, 4→4, 32→2, 16→2, 8→2, 4→2 and 2→2.]
FIGURE 21. Image quality versus block size.
8.5. BLOCK TO BLOCK ADAPTIVE BTC
In adaptive BTC, the blocks are classified according to the contrast of a block, and a different coding method is applied to different types of blocks. Fig. 22 and 23 illustrate the distribution of the blocks and their average MSE values (per pixel) as a function of the standard deviation. The higher the variance, the greater the MSE contributed by the block. Even if there are only a few such blocks, their net effect is remarkable: more than 50 % of the MSE-value originates from the 10 % of the high variance blocks. This indicates that another
compression method might be used for these. One can also reduce the number of the high variance blocks by the use of hierarchical decomposition, as was done in the previous section.
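The classification step underlying such a scheme can be sketched as follows: compute the standard deviation of every non-overlapping 4*4 block, after which the blocks can be binned as in Fig. 22, or a threshold can be chosen to isolate the high variance blocks. A minimal sketch (the function name is ours):

```python
import numpy as np

def block_stddevs(image, size=4):
    """Standard deviation of every non-overlapping size*size block.

    A sketch of the classification step of adaptive BTC: blocks are
    ranked by contrast so that a different coder (or a hierarchical
    split) can be chosen for the few high variance blocks.
    """
    h, w = image.shape
    h -= h % size                       # crop to a multiple of the block size
    w -= w % size
    blocks = image[:h, :w].astype(np.float64)
    blocks = blocks.reshape(h // size, size, w // size, size)
    return blocks.std(axis=(1, 3)).ravel()
```

For example, `np.percentile(block_stddevs(img), 90)` gives a contrast threshold that separates the top 10 % of blocks.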
[Two charts as a function of the standard deviation of a block (0–95): the distribution of the 4*4 blocks (Fig. 22) and the average MSE of the blocks (Fig. 23).]
FIGURE 22. Distribution of 4*4 blocks.
FIGURE 23. Average MSE of different blocks.
Fig. 24 illustrates the (absolute) additional MSE (compared to the basic BTC) originating from the coding of the bit plane. Even if the increase of MSE is greater in high variance blocks, there is no evidence that the variance of a block is reflected in the relative increase in MSE. A more important observation is that the superiority between any two given bit plane coding methods is independent of the contrast of a block. The consequence is that there is no clear advantage in making the bit plane coding method adaptive from block to block.
[Chart: average additional MSE (0–80) as a function of the standard deviation of a block (0–20) for the bit plane coding methods Int-75, Int-50, Int-25, median filtering and VQ.]
FIGURE 24. Additional MSE contributed by bit plane coding according to the block types.
We have restricted our study of the adaptive schemes to the bit plane coding only. It would still be possible to extend the adaptation to the other parts of the algorithm, and even to other related coding systems; see Sections 6 and 7.
8.6. COMPARISON TO JPEG
Next we will link together the best ideas of the BTC algorithm. Since the three phases of BTC were studied separately, there is no guarantee that the combined algorithm is optimal in any sense, nor the best that could be constructed from these elements. In spite of that, we will compare it against JPEG. This is done by first fixing a certain bit rate and then comparing the MSE-values of the two methods. The combined BTC algorithm consists of the components given in Table 9. Hierarchical decomposition is fully utilized: the minimum and maximum block sizes are 2*2 and 32*32. Standard deviation (σ) is used as the threshold criterion and is set to 6 for level transitions other than "4*4→2*2", for which the threshold is left as an adjusting parameter.
TABLE 9. Elements of the combined BTC algorithm.

Phase                    Methods
Block size               Hierarchical (32→2)
Quantization system      GB + means
Quantization data        (a, b)
Number of bits           6 + 6
Coding the quant. data   FELICS
Coding the bit plane     1 bpp / Interpolation / Skipping the bit plane
The bit plane is either stored as such, or its size is reduced by the interpolation technique. The level of interpolation is chosen according to the desired bit rate level; it will be referred to by the proportion of coded bits, which is either 50, 75, or 100 %. Bit planes of uniform blocks are skipped and 1-bit quantization is applied to these blocks. The uniform block skipping threshold is used as a fine-tuning parameter. Fig. 25 illustrates the performance of the combined BTC algorithm for Lena. The tests were performed by varying the threshold of the hierarchical decomposition from σ=0 to 50. Each curve represents one of the selected interpolation levels. In this way we are able to cover reasonably well the bit rates from 1.0 to 2.0 bits per pixel. Better image qualities are still possible at the cost of decreasing compression efficiency, but increasing the bit rate serves the purpose only up to a limit. By the use of lossless image compression methods, such as lossless JPEG [46, 57] or FELICS [25], one can achieve bit rates of about 4.5 bpp (for Lena). At the other end, the image quality decreases rapidly if very low bit rates are desired.
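The hierarchical decomposition driving the combined coder can be sketched as a quadtree split governed by the standard deviation threshold. The following is a simplified illustration only; a real coder would also emit the quadtree structure, code each leaf with BTC and apply the uniform block skipping, and the names and defaults here are ours.

```python
import numpy as np

def decompose(block, min_size=2, threshold=6.0, top_left=(0, 0)):
    """Recursively split a square block while its contrast is high.

    Returns a list of (row, col, size) leaf blocks. A block is split
    into four quadrants whenever its standard deviation exceeds the
    threshold and it is still larger than min_size; this mirrors the
    hierarchical 32*32 -> 2*2 decomposition of the combined coder.
    """
    n = block.shape[0]
    r, c = top_left
    if n <= min_size or block.std() <= threshold:
        return [(r, c, n)]              # low contrast (or smallest) leaf
    h = n // 2
    leaves = []
    for dr in (0, h):                   # visit the four quadrants
        for dc in (0, h):
            sub = block[dr:dr + h, dc:dc + h]
            leaves += decompose(sub, min_size, threshold, (r + dr, c + dc))
    return leaves
```

Raising the threshold yields fewer, larger leaves and a lower bit rate; lowering it splits the high contrast regions down to 2*2 blocks and improves the quality.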
[Chart: MSE (0–50) versus bit rate (0.50–2.25 bpp) for the interpolation levels 50 %, 75 % and 100 %.]
FIGURE 25. Performance of the combined BTC algorithm for Lena.
A comparison of JPEG and the combined BTC is summarized in Fig. 26-29. Here we use the same parameters that were used for the test results in Fig. 25, with one exception: we performed two different tests, the first with the skipping threshold σ=0 and the second with σ=5. The final curves of Fig. 26-29 are assembled from the results of these two so that they correspond to the best MSE-value for each bits per pixel selection. The truncation of the curve in Fig. 29 originates from the nature of the image Galaxy. It consists entirely of low contrast regions, and therefore the hierarchical decomposition cannot be used for adjusting its performance as much as would be desirable (see Fig. 20 and 21 of Section 8.4). In our tests, JPEG clearly outperforms BTC (see Fig. 30). At the bit rate of 2.0 bpp, the MSE of JPEG is 65 % of the MSE of BTC (for Lena), and at the bit rate of 1.0 bpp it is only 53 %. The difference in visual quality is less clear. Small differences in the MSE-value are not necessarily reflected in the visual quality, and one must keep in mind that the distortions caused by the two methods are not alike. Moreover, the distortion is almost unnoticeable if one looks at the images in their natural size. Magnifications of Lena are shown in Fig. 31. We want to emphasize that there is a definite need for a better objective quality measure than MSE.
[Chart: MSE (0–60) versus bit rate (0.00–2.50 bpp) for JPEG and BTC.]
FIGURE 26. Comparison of JPEG versus BTC with test image Lena.
[Chart: MSE (0–50) versus bit rate (0.00–2.00 bpp) for JPEG and BTC.]
FIGURE 27. Comparison of JPEG versus BTC with test image Boats.
[Chart: MSE (0–250) versus bit rate (0.50–3.00 bpp) for JPEG and BTC.]
FIGURE 28. Comparison of JPEG versus BTC with test image Baboon.
[Chart: MSE (0–16) versus bit rate (0.00–1.50 bpp) for JPEG and BTC.]
FIGURE 29. Comparison of JPEG versus BTC with test image Galaxy.
[Image panels: part of Lena coded by JPEG at bpp=1.00 (mse=17.26), bpp=1.50 (mse=11.48) and bpp=2.00 (mse=8.18), and by BTC at bpp=1.00 (mse=32.47), bpp=1.50 (mse=19.81) and bpp=2.00 (mse=12.52).]
FIGURE 30. Part of Lena when coded by JPEG and BTC.
[Image panels: eye of Lena coded by JPEG and BTC at the same bit rates and MSE-values as in Fig. 30.]
FIGURE 31. Eye of Lena when coded by JPEG and BTC.
9. CONCLUSIONS
We have made a comparative study of the variants of block truncation coding. Our main observation was that one can remarkably improve the overall performance of BTC by the use of hierarchical decomposition, interpolation and FELICS coding. The most important change for improving the image quality was the use of 2*2 blocks in the high contrast regions. When compared to the performance of JPEG, the new combined block truncation coding is still weak if the image quality is measured by the mean square error. There are, however, other aspects which make BTC a useful alternative to JPEG in some practical applications. All the selected algorithms are quite simple and easy to implement. Their integration makes the implementation somewhat more complicated, but it is nevertheless probably much simpler than an implementation of JPEG. Since we did not perform any throughput analysis, we can only estimate the running times. The encoding phase of BTC is very fast, and the decoding phase is even faster. (Though it has to be said that the JPEG software used was not slow either.) One can maximize the throughput of the decoding phase by the use of (a,b) as the quantization data. It could be further improved by the use of vector quantization instead of interpolation; this would load the encoder more, but the computational demands at the decoding phase would be much smaller. FELICS coding is a fast and practical alternative to other lossless schemes, and therefore it is a good choice here. Moreover, it is utilized only twice per block. The hierarchical decomposition has only a small effect on the throughput, but it is directly reflected in the number of blocks to be coded. By the use of hierarchical decomposition and FELICS coding, one can reduce the interblock redundancy. This eliminates direct access to the compressed file, and thus fault tolerance is no longer guaranteed.
REFERENCES
[1] Ahmed N., Natarajan T., Rao K.R. (1974) Discrete Cosine Transform. IEEE Transactions on Computers, C-23, 90-93.
[2] Alcaim A., Oliveira L.V. (1992) Vector Quantization of the Side Information in BTC Image Coding. Proc. ICCS/ICITA 92, Singapore, 1, 345-349.
[3] Arce G., Gallagher N.C. Jr. (1983) BTC Image Coding Using Median Filter Roots. IEEE Transactions on Communications, 31, 784-793.
[4] CCITT Document No. 420 (1988) Adaptive Discrete Cosine Transform Coding Scheme for Still Image Telecommunications Services.
[5] Chen W.H., Pratt W. (1984) Scene Adaptive Coder. IEEE Transactions on Communications, 32, 225-232.
[6] Cheng S.-C., Tsai W.-H. (1993) Image Compression Using Adaptive Multilevel Block Truncation Coding. Journal of Visual Communication and Image Representation, 4, 225-241.
[7] Golomb S.W. (1966) Run-Length Encodings. IEEE Transactions on Information Theory, 12, 399-401.
[8] Connor D.J., Pease R.F., Scholes W.G. (1971) Television Coding Using Two-Dimensional Spatial Prediction. Bell Systems Technical Journal, 50, 1049-1061.
[9] Delp E.J., Mitchell O.R. (1979) Some Aspects of Moment Preserving Quantizers. IEEE International Conference on Communications (ICC'79), 1, 10-14, 7.2.1-7.2.5.
[10] Delp E.J., Mitchell O.R. (1979) Image Coding Using Block Truncation Coding. IEEE Transactions on Communications, 27, 1335-1342.
[11] Delp E.J., Mitchell O.R. (1991) The Use of Block Truncation Coding in DPCM Image Coding. IEEE Transactions on Signal Processing, 39, 967-971.
[12] Delp E.J., Mitchell O.R. (1991) Moment Preserving Quantization. IEEE Transactions on Communications, 39, 1549-1558.
[13] DeWitte J., Ronsin J. (1983) Original Block Coding Scheme for Low Bit Rate Image Transmission. Signal Processing II: Theories and Applications, Proc. EUSIPCO-83. Elsevier, Amsterdam, 143-146.
[14] Efrati N., Liciztin H., Mitchell H.B. (1991) Classified Block Truncation Coding-Vector Quantization: an Edge Sensitive Image Compression Algorithm. Signal Processing: Image Communication, 3, 275-283.
[15] Eskicioglu A.M., Fisher P.S. (1993) A Survey of Quality Measures for Gray Scale Image Compression. Proceedings Space and Earth Science Data Compression Workshop, Snowbird, Utah, 49-61.
[16] Fränti P., Nevalainen O. (1993) Block Truncation Coding with Entropy Coding. (submitted to IEEE Transactions on Communications)
[17] Fränti P., Kaukoranta T., Nevalainen O. (1994) A New Approach to BTC-VQ Image Compression System. Proc. Picture Coding Symposium, Sacramento, CA.
[18] Gersho A., Gray R.M. (1992) Vector Quantization and Signal Compression. Kluwer Academic Publishers.
[19] Goeddel T., Bass S. (1981) A Two Dimensional Quantizer for Coding Digital Imagery. IEEE Transactions on Communications, 29, 60-67.
[20] Goldberg M., Bouncher P.R., Shlien S. (1986) Image Compression Using Adaptive Vector Quantization. IEEE Transactions on Communications, 34, 180-187.
[21] Gonzalez R.C., Woods R.E. (1992) Digital Image Processing. Addison-Wesley.
[22] Griswold N., Halverson D., Wise G. (1987) A Note on Adaptive Block Truncation Coding for Image Processing. IEEE Transactions on Acoustics, Speech and Signal Processing, 35, 1201-1203.
[23] Halverson D., Griswold N., Wise G. (1984) A Generalized Block Truncation Coding Algorithm for Image Compression. IEEE Transactions on Acoustics, Speech and Signal Processing, 32, 664-668.
[24] Healy D., Mitchell O.R. (1981) Digital Bandwidth Compression Using Block Truncation Coding. IEEE Transactions on Communications, 29, 1809-1817.
[25] Howard P.G., Vitter J.S. (1993) Fast and Efficient Lossless Image Compression. IEEE Proceedings Data Compression Conference, Snowbird, Utah, 351-360.
[26] Huang L., Bijaoui A. (1991) An Efficient Image Compression Algorithm Without Distortion. Pattern Recognition Letters, 12, 69-72.
[27] Kamel M., Sun C., Guan L. (1991) Image Compression by Variable Block Truncation Coding with Optimal Threshold. IEEE Transactions on Signal Processing, 39, 208-212.
[28] Kruger A. (1992) Block Truncation Compression. Dr. Dobb's Journal, 48-55 and 104-106.
[29] Kunt M., Bénard M., Leonardi R. (1987) Recent Results in High-Compression Image Coding. IEEE Transactions on Circuits and Systems, 34, 1306-1336.
[30] Kunt M., Ikonomopolous A., Kocher M. (1985) Second Generation Image Coding Technique. Proceedings of the IEEE, 73, 549-574.
[31] Lema M.D., Mitchell O.R. (1984) Absolute Moment Block Truncation Coding and its Application to Color Images. IEEE Transactions on Communications, 32, 1148-1157.
[32] Linde Y., Buzo A., Gray R.M. (1980) An Algorithm for Vector Quantizer Design. IEEE Transactions on Communications, 28, 84-95.
[33] Lloyd S.P. (1982) Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28, 129-137.
[34] Lu W.W., Gough M., Davies P. (1991) Scientific Data Compression for Space: a Modified Block Truncation Coding Algorithm. Proc. SPIE, 1470, 197-205.
[35] Max J. (1960) Quantizing for Minimum Distortion. IRE Transactions on Information Theory, 6, 7-12.
[36] Mitchell O.R., Bass S.C., Delp E.J., Goeddel T.W., Huang T.S. (1980) Image Coding for Photo Analysis. Proc. Soc. Inform. Display, 21, 279-292.
[37] Mitchell O.R., Delp E.J. (1980) Multilevel Graphics Representation Using Block Truncation Coding. Proceedings of the IEEE, 68, 868-873.
[38] Mitchell H.B., Dorfan M. (1992) Block Truncation Coding Using Hopfield Neural Network. Electronics Letters, 28, 2144-2145.
[39] Moffat A. (1991) Two-level Context Based Compression of Binary Images. IEEE Proceedings Data Compression Conference, Snowbird, Utah, 382-391.
[40] Monet P., Dubois E. (1993) Block Adaptive Quantization of Images. IEEE Transactions on Communications, 41, 303-306.
[41] Nasiopoulos P., Ward R., Morse D. (1991) Adaptive Compression Coding. IEEE Transactions on Communications, 39, 1245-1254.
[42] Nasrabadi N.M., King R.A. (1988) Image Coding Using Vector Quantization: A Review. IEEE Transactions on Communications, 36, 957-971.
[43] Neejärvi J. (1992) Analysis and Extensions of FIR-Median Hybrid and Morphological Filters. Tampere University of Technology, Ph.D. Thesis.
[44] Netravali A.N., Haskell B.G. (1988) Digital Pictures. Plenum Press.
[45] Pennebaker W.B., Mitchell J.L., Langdon G.G., Arps R.B. (1988) An Overview of the Basic Principles of the Q-coder. IBM Journal of Research and Development, 32, 717-726.
[46] Pennebaker W.B., Mitchell J.L. (1993) JPEG Still Image Data Compression Standard. Van Nostrand Reinhold, New York.
[47] Qiu G., Varley M.R., Terrell T.J. (1991) Improved Block Truncation Coding Using Hopfield Neural Network. Electronics Letters, 27, 1924-1926.
[48] Qiu G., Varley M.R., Terrell T.J. (1992) Image Compression by a Variable Size BTC Using Hopfield Neural Networks. Colloquium on 'Neural Networks for Image Processing Applications', London, 10/1-6.
[49] Rabbani M., Melnychuck P.W. (1992) Conditioning Context for the Arithmetic Coding of Bit Planes. IEEE Transactions on Signal Processing, 40, 232-236.
[50] Rabbani M., Jones P.W. (1991) Digital Image Compression Techniques. Bellingham, USA, SPIE Optical Engineering Press.
[51] Rao K.R., Yip P. (1990) Discrete Cosine Transform. New York, Academic Press.
[52] Rice R.F. (1979) Some Practical Universal Noiseless Coding Techniques. Jet Propulsion Laboratory, JPL Publication 79-22, Pasadena, CA.
[53] Ronsin J., DeWitte J. (1982) Adaptive Block Truncation Coding Scheme Using an Edge Following Algorithm. Proc. ICASSP 82, Paris, 1235-1238.
[54] Roy J., Nasrabadi M. (1991) Hierarchical Block Truncation Coding. Optical Engineering, 30, 551-556.
[55] Samet H. (1990) The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, MA.
[56] Sharma D.J., Netravali A.N. (1977) Design of Quantizers for DPCM Coding of Picture Signals. IEEE Transactions on Communications, 25, 1267-1274.
[57] Tischer P.E., Worley R.T., Maeder A.J., Goodwin M. (1993) Context-based Lossless Image Compression. The Computer Journal, 36, 68-77.
[58] Todd S., Langdon G.G., Rissanen J. (1985) Parameter Reduction and Context Selection for Compression of Gray-Scale Images. IBM Journal of Research and Development, 29, 188-193.
[59] Tukey J.W. (1971) Exploratory Data Analysis. Addison-Wesley.
[60] Udpikar V., Raina J. (1985) Modified Algorithm for the Block Truncation Coding of Monochrome Images. Electronics Letters, 21, 900-902.
[61] Udpikar V., Raina J. (1987) BTC Image Coding Using Vector Quantization. IEEE Transactions on Communications, 35, 352-356.
[62] Uhl T.J. (1983) Adaptive Picture Data Compression Using Block Truncation Coding. Signal Processing II: Theories and Applications, Proc. EUSIPCO-83. Elsevier, Amsterdam, 147-150.
[63] Wang Q. (1992) Analysis of Median Related and Morphological Filters with Application to Image Processing. Tampere University of Technology, Ph.D. Thesis.
[64] Wang Q., Gabbouj M., Neuvo Y. (1993) Root Properties of Morphological Filters. Signal Processing, 34, 131-148.
[65] Wang Q., Neuvo Y. (1992) BTC Image Coding Using Mathematical Morphology. Proc. IEEE International Conference on Systems Engineering, Kobe, Japan, 592-595.
[66] Wang Q., Neuvo Y. (1993) Deterministic Properties of Separable and Cross Median Filters with an Application to Block Truncation Coding. Multidimensional Systems and Signal Processing, 4, 23-38.
[67] Wang Y., Mitra S.K. (1993) Image Representation Using Block Pattern Models and Its Image Processing Applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15, 321-336.
[68] Weitzman A., Mitchell H.B. (1991) Fast Multi-Stage VQ Search for the BTC-VQ Algorithm. Proceedings of 17th Convention of Electrical and Electronics Engineers in Israel, Tel Aviv, Israel, 182-185.
[69] Weitzman A., Mitchell H.B. (1992) An Interblock BTC-VQ Image Coder. Proceedings of 11th IAPR International Conference on Pattern Recognition, III, Conference C, The Hague, Netherlands, 426-429.
[70] Weitzman A., Mitchell H.B. (1993) An Adaptive BTC-VQ Image Compression Algorithm Operating at a Fixed Bit-Rate. Signal Processing: Image Communication, 5, 287-294.
[71] Wendt P.D., Coyle E.J., Gallagher N.C. (1986) Stack Filters. IEEE Transactions on Acoustics, Speech and Signal Processing, 34, 898-911.
[72] Witten I., Neal R., Cleary J. (1987) Arithmetic Coding for Data Compression. Communications of the ACM, 30, 520-539.
[73] Wu Y., Coll D.C. (1991) BTC-VQ-DCT Hybrid Coding of Digital Images. IEEE Transactions on Communications, 39, 1283-1287.
[74] Wu Y., Coll D.C. (1992) Single Bit-Map Block Truncation Coding of Color Images. IEEE Journal on Selected Areas in Communications, 10, 952-959.
[75] Wu Y., Coll D.C. (1993) Multilevel Block Truncation Coding Using Minimax Error Criterion for High Fidelity Compression of Digital Images. IEEE Transactions on Communications, 41, 1179-1191.
[76] Zeng B. (1991) Two Interpolative BTC Image Coding Schemes. Electronics Letters, 27, 1126-1128.
[77] Zeng B., Neuvo Y. (1993) Interpolative BTC Image Coding with Vector Quantization. IEEE Transactions on Communications, 41, 1436-1438.
[78] Zeng B., Gabbouj M., Neuvo Y. (1991) A Unified Design Method for Rank Order, Stack, and Generalized Stack Filters Based on Classical Bayes Decision. IEEE Transactions on Circuits and Systems, 38, 1003-1020.