Variable Block-size Image Coding by Resource Planning

Chao-wen Kevin Chen and David Y. Y. Yun
Laboratories of Intelligent and Parallel Systems (LIPS), University of Hawaii (UH)
Holmes 492, 2540 Dole Street, Honolulu, Hawaii 96822
Phone: (808) 956-7627; Fax: (808) 941-1399; Email: [email protected]

Abstract: An image coding, and compression, technique is presented in which an image is segmented and encoded into blocks of variable block-sizes (VBS). The conventional quad-tree algorithm imposes excessively rigid rules on the placement of blocks, causing ineffective segmentation of the image. Although several algorithms have been proposed to alleviate these constraints, they all impose a fixed scanning order for placing the blocks, which frequently yields sub-optimal results. This paper discusses a flexible VBS algorithm that requires neither a specific scanning order nor fixed block dimensions. By applying the principles of a general resource management paradigm, called Constrained Resource Planning (CRP), the Adaptive Block Coding (ABC) algorithm sequentially selects blocks with the most appropriate size and the least interference with neighboring blocks. To alleviate the overhead of placing and encoding the disorganized blocks, a new encoding scheme is developed that systematically compacts the placement representation and saves code bits. Experiments have been conducted to compare against the quad-tree (QT) technique and several other VBS algorithms, using both of the popular mean-value and vector quantization (VQ) methods to encode blocks. The results show that the ABC algorithm outperforms the others.

Keywords: Image Compression, Variable Block-size Coding, Resource Planning, Vector Quantization.

1. Introduction

Recently, variable block-size (VBS) image coding techniques have received increasing attention [1-5] due to their superior adaptability to local image characteristics. Conventional block-based image compression approaches, such as JPEG [6] and vector quantization (VQ) [7], simply partition an image into fixed-size blocks and search for spatial-domain and/or frequency-domain redundancy within the blocks. Despite the optimality achieved within each block, they lack the ability of VBS to encode low-detail regions of an image with large blocks to reduce the bit-rate.

Segmenting images into blocks of variable sizes in VBS techniques is usually accomplished with the well-known quad-tree (QT) decomposition, which researchers have applied with several block-based coding schemes. QT decomposition is favored for its tree-structure representation, which allows efficient bit storage for encoding the tree, and for its straightforward binary split/merge decision at each node. Shoham and Gersho [7] discussed the general bit-allocation principles given that a segmentation is provided. Chou et al. [8] and later Sullivan and Baker [9] independently derived the optimal quad-tree structure for QT-based VBS (or, simply, QT) image coding. Nevertheless, the quad-tree's rigid restriction on block placement prevents it from providing the best image partitions. In general, the low-detail (or high-detail) regions of a typical image do not appear at the regular locations required by QT; as a consequence, more blocks than necessary are needed to encode these areas. A general VBS algorithm should allow blocks to be placed at arbitrary locations and allow more flexible block shapes, to adapt to image characteristics. Little work has been done to fix this problem. Recently, Boxerman and Lee [2] introduced two unconstrained tiling image compression algorithms that remove the constraints. The algorithms were further applied to video compression in [3, 5], where rectangular blocks are used. However, the specific search order for placement imposed by these algorithms is myopic, since it considers neither the possibility of encoding from the other direction nor the impact of each placement on the segmentation of neighboring regions. The problem of placing variable-size blocks at arbitrary locations turns out to be an NP-complete problem.

Over the past decade, several difficult combinatorial search problems have been successfully solved with the Constrained Resource Planning (CRP) paradigm [10, 11]. This paper presents a novel adaptive block-size compression (ABC) algorithm based on the CRP principle. The CRP-ABC algorithm achieves a balance between the cost of a block (in terms of the number of bits needed for both placement and coding) and its global repeatability. The algorithm also manages to simultaneously minimize the mean-square-error (MSE) of the code and the total number of bits for the blocks used in the decomposition.

The organization of this paper is as follows. Section 2 reviews the VBS problem, the QT algorithm, and the unconstrained tiling algorithms. Section 3 introduces the CRP methodology and the CRP-ABC algorithm; the overhead of encoding the placement structure is also addressed there, together with a new structure-coding scheme that alleviates it. Section 4 presents the experimental results of combining CRP-ABC with mean-value coding and with VQ, respectively. Section 5 concludes this work.

2. Adaptive Block-size Coding

2.1. Problem Definition

Given a $2^N \times 2^N$ gray-level image, the objective of a variable block-size algorithm is to decompose the image into $m$ variable-sized rectangular $2^a \times 2^b$ blocks. The minimal-size blocks ($2^{n_0} \times 2^{n_0}$) are called Unit Blocks (or level-0 blocks), and the $2^{n_0+i} \times 2^{n_0+j}$ blocks, $(1,1) \le (i,j) \le (i_{\max}, j_{\max})$, are called level-$(i,j)$ blocks. For the rest of the paper a single index is used for level-$k$ ($1 \le k \le i_{\max} \times j_{\max}$) when no confusion can arise. The legal locations for a $2^a \times 2^b$ block are $(2^{n_0}x, 2^{n_0}y)$, where

$$x, y \ge 0, \qquad 2^{n_0}x + 2^a \le 2^N, \qquad 2^{n_0}y + 2^b \le 2^N.$$

The image is compressed by encoding (1) the block placement structure and (2) the value of each block (in the case of VQ, the codeword index is used). The optimization of the process can be achieved by minimizing two objective functions:

$$\min \Big( B_s + \sum_{k=1}^{m} b_k \Big), \qquad\qquad (1a)$$

$$\min \sum_{k=1}^{m} e_k, \qquad\qquad (1b)$$

where $B_s$ is the total number of bits required to encode the placement structure, $b_k$ is the number of bits assigned to block $k$, and $e_k$ is the MSE of block $k$. In general, it is assumed that the MSE increases monotonically with block size while the bit rate decreases monotonically. Namely, when four neighboring $2^k \times 2^k$ blocks are replaced by a $2^{k+1} \times 2^{k+1}$ block, objective (1a) decreases while objective (1b) increases. The two objective functions are therefore mutually conflicting and cannot be optimized separately. To reduce the computation time of certain block-coding algorithms, it is also possible to approximate the block distortion $e_k$ with simple image statistics such as the mean and variance. The MSE is used as the measure of distortion throughout this paper.
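To make the two objectives concrete, the following minimal sketch evaluates (1a) and (1b) for a candidate segmentation under mean-value coding; the fixed per-block bit cost and the structure-bit count are placeholder assumptions, not values from the paper.

```python
import numpy as np

def block_mse(image, x, y, w, h):
    """e_k of (1b): MSE of block k when coded by its mean value."""
    patch = image[y:y + h, x:x + w].astype(np.float64)
    return float(np.mean((patch - patch.mean()) ** 2))

def objectives(image, blocks, bits_per_block=8, structure_bits=0):
    """Evaluate objectives (1a) and (1b) for a candidate segmentation.

    blocks: list of (x, y, w, h) tuples.  bits_per_block (b_k) and
    structure_bits (B_s) are assumed constants here; their real values
    depend on the block coder and the structure coder being used.
    """
    total_bits = structure_bits + bits_per_block * len(blocks)   # (1a)
    total_mse = sum(block_mse(image, *blk) for blk in blocks)    # (1b)
    return total_bits, total_mse
```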

2.2. Quad-Tree

The quad-tree segmentation approach is the most commonly used decomposition scheme, mainly due to its known bit-rate optimality. The quad-tree approach decomposes a $2^N \times 2^N$ image into an $(i_{\max}+1)$-level tree. Each block $X_k$ at level $k$ has block size $2^k \times 2^k$ and can either be further divided into four $2^{k-1} \times 2^{k-1}$ level-$(k-1)$ blocks ($X_{k,i}$, $i = 1, \ldots, 4$) or be a leaf. There are two major advantages of using QT:
1. Concise structure representation: Because the segmentation forms a tree, it can simply be represented by a bit-stream in which each leaf is encoded with a '0' and each non-leaf node with a '1'.
2. Straightforward split/merge decision: Since each block has only one parent, a level-$k$ block can only be merged with its three pre-defined neighboring level-$k$ blocks into its sole level-$(k+1)$ parent block when it is beneficial.
To obtain the optimal QT structure that minimizes rate as well as distortion, Sullivan and Baker [9] proposed the use of a Lagrange multiplier $\lambda$; their algorithm finds points on the convex hull of all possible R-D pairs by minimizing the objective function

$$\min \{ e_k + \lambda b_k \} \qquad\qquad (2)$$

for each separate block $k$. Using (2), the decision to merge four level-$k$ blocks into a level-$(k+1)$ block is made whenever

$$\Delta e \le \lambda \Delta b, \qquad\qquad (3)$$

where

$$\Delta e = e_{k+1}(X_{k+1}) - \sum_{i=1}^{4} e_k^*(X_{k,i}), \qquad\qquad (4)$$

$$\Delta b = \sum_{i=1}^{4} b_k^*(X_{k,i}) - b_{k+1}(X_{k+1}), \qquad\qquad (5)$$

with the asterisk denoting the error and bit values of the already-optimized child subtrees.
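A minimal sketch of this bottom-up Lagrangian pruning is given below, assuming mean-value coding with a fixed bit cost per block and summed squared error as the distortion, so that parent and children costs in (3)-(5) are directly comparable; these modeling choices are ours, not the paper's.

```python
import numpy as np

def qt_segment(image, x, y, size, lam, min_size=2, bits_per_block=8):
    """Recursive Lagrangian quad-tree segmentation (merge test (3)-(5)).

    Returns (blocks, error, bits) for the optimally pruned subtree,
    with blocks given as (x, y, size) tuples.
    """
    patch = image[y:y + size, x:x + size].astype(np.float64)
    e_parent = float(np.sum((patch - patch.mean()) ** 2))

    if size <= min_size:
        return [(x, y, size)], e_parent, bits_per_block

    # Optimize the four children first; their returns are e_k* and b_k*.
    half = size // 2
    blocks, e_kids, b_kids = [], 0.0, 0
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        blk, e, b = qt_segment(image, x + dx, y + dy, half, lam,
                               min_size, bits_per_block)
        blocks += blk
        e_kids += e
        b_kids += b

    delta_e = e_parent - e_kids          # (4): error added by merging
    delta_b = b_kids - bits_per_block    # (5): bits saved by merging
    if delta_e <= lam * delta_b:         # (3): merge when worthwhile
        return [(x, y, size)], e_parent, bits_per_block
    return blocks, e_kids, b_kids
```

For example, `qt_segment(img, 0, 0, 256, lam=10.0)` returns the leaf blocks of the pruned tree for a 256x256 image; sweeping `lam` traces out the convex hull of R-D pairs.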

Despite the efficiency of QT and the optimality achieved by Sullivan and Baker's algorithm, QT itself suffers a major limitation. The constraint of positioning blocks at pre-defined locations eliminates a large number of choices that could substantially improve both the rate and the distortion.

2.3. Unconstrained Tiling Algorithms

(1) tpSegment Algorithm: In Boxerman and Lee's recent work [2], two algorithms that remove the tiling constraint of QT were proposed. The two-pass segmentation strategy (tpSegment), the better of the two, segments the input image into as many largest square blocks as possible. The segmentation scans the image in a left-to-right, top-to-bottom fashion; the largest blocks encountered that satisfy a pre-defined homogeneity criterion are placed in turn. Although it is reported that the scheme saves 2,002 blocks in their experiments, it should be noted that the resultant image has worse PSNR performance. In fact, as shown in the experimental results below, when the same encoding scheme is used and the comparison is made against QT results of the same PSNR, the scheme achieves about the same bit-rate.

(2) C&A Algorithm: Corte-Real and Alves [3, 5] proposed the use of non-square (rectangular) blocks to complement the top-down, left-to-right sweep of the tpSegment approach; we denote it the C&A algorithm. Unfortunately, they applied this idea primarily to video compression, so the true value of rectangular blocks was not made evident. Our paper represents an attempt to combine rectangular blocks with adaptive positioning (as opposed to the ordered sweep of tpSegment) for still image encoding.

3. Modeling ABC as CRP

3.1. Mapping to CRP

Although the tpSegment and C&A algorithms remove the tiling constraints imposed by QT, they both have a major drawback. As illustrated in Figure 1, all four large blocks (each composed of four squares), B1, B2, B3, and B4, satisfy equation (3) (and are therefore favored over their four square children), but B1 will be chosen first by the algorithm because of the scanning order. Consequently, the placement of B2, B3, and B4 will all be prevented and six single squares will be left. In contrast, if B2 and B3 were placed first, then two large blocks would be used and only two single squares would be left. This is a simple illustration of the disadvantage of using an ordered sweep.

Figure 1: The four contending large blocks B1, B2, B3, and B4.

Continuing the illustration using Figure 1, the advantage of rectangular blocks is also obvious. Namely, if B1 is used first, then the six single squares can be combined into three rectangular blocks. Even better, using blocks B2 and B3 first would leave only one rectangular block of two squares. In other words, rectangular blocks add flexibility to the placement, allow the use of fewer blocks, and thus improve the compression ratio. Based on the above observations, four major features are implemented in the following algorithm to maximize the benefits of the VBS concept: (1) flexible block placement; (2) unrestricted scanning or sweeping order; (3) the use of rectangular blocks; and (4) a search engine (CRP, described below) that wisely selects blocks for placement. Here, the Constrained Resource Planning (CRP) [10, 11, 12] methodology and the associated planning engine are used.

When a more general VBS segmentation in which blocks can be freely placed is considered, each level-(k, k) block has four level-(k+1, k+1) square parent blocks (and more non-square and higher-level ones), instead of only one as in the QT-VBS case. The simple binary merge decision (2)-(5) of QT can no longer be applied directly; deeper consideration is needed before selecting a large block, because of the contention among overlapping blocks. From the resource-management point of view, we can regard all the blocks that satisfy equation (3) as limited-supply 'resources', each associated with an error value. Without loss of generality, assuming that each block requires an equal number of bits, the problem is transformed into finding a minimum set of contention-free (non-overlapping) blocks whose total error is also minimized. The Constrained Resource Planning (CRP) methodology is a technique developed to solve such problems. By tackling the most urgent task with the solution that has the least impact on the other unsolved tasks, CRP manages to accomplish a job satisfying all the given constraints with little backtracking. To model the VBS problem as a constrained resource planning problem, the decision of placing the k-th block is made by:
1. Determining which level to consider: The blocks of each level are examined to provide a global view. The ensembles then compete with each other based on equations (2)-(5). In order to accommodate more high-quality blocks, the level that best satisfies the criteria, i.e., the most constrained level, should be considered first.
2. Determining where to place the block: When a block is placed, the neighbors that overlap with it must be discarded. In order to preserve valuable resources, among all the candidate blocks of the chosen level we should choose the one that has the least number of overlapped, or impacted, candidate blocks. For example, in Figure 1, blocks B2 and B3 each overlap with only two blocks, while B1 and B4 overlap with three. Therefore, B2 or B3 should be placed first.
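The least-impact rule of step 2 reduces to counting pairwise overlaps among the surviving candidates. A minimal sketch follows; the helper names are ours, not the paper's.

```python
def overlaps(a, b):
    """Axis-aligned overlap test for blocks given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def impact(block, candidates):
    """How many other candidates placing `block` would invalidate."""
    return sum(overlaps(block, c) for c in candidates if c != block)

def least_impact(candidates):
    """Pick the candidate whose placement discards the fewest rivals."""
    return min(candidates, key=lambda c: impact(c, candidates))
```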

3.2. CRP-ABC Algorithm

Figure 2: Diagram of the encoder. The input image passes through CalcError, CRP-Segmentation, Structure Coding, and Block Coding to produce the compressed image.

Figure 3: CRP-Segmentation. Candidate blocks are generated into an agenda; the most-constrained level is identified, the least-impact block of that level is selected from the candidate block space, and constraint propagation discards the overlapped blocks.

The entire encoding process of our algorithm is divided into four sub-processes (as shown in Figure 2): CalcError, CRP-Segmentation, StructureCoding, and BlockCoding. Their functions are as follows:
1. CalcError: Calculate the distortion of each legal block. As mentioned previously, the exact MSE is used here. In practice, disqualified blocks can be screened out by simple mean/variance tests to reduce the computation cost (see the sketch after this list).
2. CRP-Segmentation: The core segmentation process using the CRP concept.
3. StructureCoding: Encode the block placement structure.
4. BlockCoding: Encode the value of each block. When VQ is used, the codeword index of the block is stored. DPCM and entropy coding can also be carried out at this stage.
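A sketch of the CalcError screening idea; the variance threshold, the unit-block step of 2, and the helper name are illustrative assumptions:

```python
import numpy as np

def candidate_blocks(image, sizes, var_threshold=25.0):
    """Enumerate legal blocks and screen out obviously disqualified ones.

    A block whose pixel variance exceeds var_threshold is unlikely to be
    well represented by a single value or codeword, so its exact MSE
    need not be computed.  Returns surviving (x, y, w, h) tuples.
    """
    H, W = image.shape
    survivors = []
    for (w, h) in sizes:                   # e.g. [(4, 4), (8, 4), ...]
        for y in range(0, H - h + 1, 2):   # legal grid of 2x2 unit blocks
            for x in range(0, W - w + 1, 2):
                patch = image[y:y + h, x:x + w].astype(np.float64)
                if patch.var() <= var_threshold:
                    survivors.append((x, y, w, h))
    return survivors
```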

In CRP-Segmentation (Figure 3), all levels are put into an agenda. The algorithm sequentially determines the most suitable level under global consideration, then selects the block of that level that has the least impact on the other blocks. During the process, a block is considered valid if it satisfies equations (3)-(5) and none of its parents satisfies the same requirement. A high-level description of the algorithm follows.

Algorithm CRP-Segmentation
Step 1: Form the process agenda by inserting the levels that contain blocks satisfying eqs. (3)-(5) while no parent block does.
Step 2: If no valid level is left in the agenda, GOTO Step 6.
Step 3: Select the most-constrained level by considering all the valid blocks of each level: choose the level $L$ with the minimum number of valid blocks. If there is more than one such level, choose the level $L$ that minimizes
$$\sum_{\text{block } i \,\in\, L} (\text{number of blocks overlapped in all valid levels}).$$
Step 4: Select the least-impact block $B$ from level $L$: choose the block that overlaps the minimum number of blocks in all valid levels. If there is more than one such block, choose the block $B$ that minimizes
$$\Big( e_B + \sum_{\text{overlapped block } i} (e_i + \lambda b_i) \Big) \big/ \text{area}.$$
Step 5: Place block $B$, discard all the overlapped blocks, update the valid blocks of each level, and GOTO Step 2.
Step 6: Fill the uncovered region with Unit Blocks; EXIT.
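A minimal executable sketch of Steps 2-5 follows. It assumes the candidates have already passed the validity test of Step 1, that their errors `e` and bit costs `b` are precomputed dictionaries keyed by block, and that a 'level' is simply a block-size class; all names are ours, and Step 6 is omitted.

```python
from collections import defaultdict

def overlaps(a, b):
    """Axis-aligned overlap test for blocks given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def crp_segment(candidates, e, b, lam):
    """Greedy CRP placement over pre-screened candidate blocks."""
    placed = []
    valid = set(candidates)

    def impact(c):
        return sum(overlaps(c, d) for d in valid if d != c)

    while valid:
        # Step 3: most-constrained level = block-size class with the
        # fewest valid blocks; ties broken by total overlap count.
        levels = defaultdict(list)
        for c in valid:
            levels[(c[2], c[3])].append(c)
        level = min(levels.values(),
                    key=lambda blks: (len(blks),
                                      sum(impact(c) for c in blks)))

        # Step 4: least-impact block; ties broken by the per-area cost
        # of the block plus the candidates it would displace.
        def tie_break(c):
            displaced = sum(e[d] + lam * b[d] for d in valid
                            if d != c and overlaps(c, d))
            return (e[c] + displaced) / (c[2] * c[3])
        B = min(level, key=lambda c: (impact(c), tie_break(c)))

        # Step 5: place B and discard everything it overlaps (incl. B).
        placed.append(B)
        valid = {d for d in valid if not overlaps(B, d)}
    return placed
```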

3.3. Structure Coding

The removal of QT's tiling constraint provides more flexibility in choosing blocks. However, it also breaks one nice feature of QT, namely the regular tree structure. Fortunately, if both the encoder and the decoder use the same scanning order to place the blocks, the locations of the blocks need not be stored. As in the technique used in both [2] and [3], the scanning order is predetermined in a left-to-right, top-to-bottom fashion. At the decoder end, each block is placed at the first legal empty location in that scanning order, which keeps the encoder and decoder consistent. Furthermore, by representing the block sizes with entropy coding, the storage requirement of the structure code can be reduced further: the most frequently used block size is encoded as '0', the next one as '10', and so on. Nevertheless, the structure code encoded this way still requires about 2.5 times as many bits as QT consumes.

To alleviate this overhead, a new coding scheme that removes more redundancy is developed. A 'Z-shape' scanning scheme similar to the one used in JPEG is employed. Consider each non-overlapping level-1 block, called a base-block here. During the scanning, the base-block currently being examined may be in one of three states: 1. uncovered (shown in Figure 4a, where the gray blocks represent blocks larger than a level-0 block); 2. partially covered by previously placed blocks (Figure 4b); or 3. entirely covered by other blocks, in which case it can be skipped by moving to the next available base-block.

Figure 4a: Cases where base-blocks are uncovered (sub-cases 1-6).

Figure 4b: Cases where base-blocks are already partially covered (sub-cases 7-31).

For the square-block case, as enumerated in Figure 4, a total of 31 sub-cases can occur during the encoding process. Depending on the area already covered by previously placed blocks, there are usually fewer choices for placing the next block. We can therefore use fewer bits to indicate the choice and entropy-encode the block size of large (gray) blocks (one bit less than in the previous approach, since the unit block is taken care of by the choice bits and its size need not be encoded separately). Specifically, if the next base-block encountered in the Z-shape scanning sequence is uncovered by previously encoded blocks, as in sub-cases 1-6, one bit is used to indicate whether the base-block is going to be fully occupied by some block. If it is not, two bits are used to indicate the possible block combinations of sub-cases 1-4. As a result, for sub-cases 2 and 3, one bit is saved compared with the other VBS algorithms. As another example, when the base-block is covered at its lower-left corner, there are only three legal placement choices: sub-cases 11, 12, and 13. The encoder then needs only 1.3 bits on average to indicate the choice, plus a few extra bits to code the block size of the gray area. Even more bits are saved in cases such as 17 and 27, where there is no choice other than placing two level-0 blocks; hence no bits at all are needed to indicate the presence of the two unit blocks. Overall, out of the 31 cases, there are 3 cases where 2 bits are saved and 12 cases where 1 bit is saved.
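The baseline size code described above (before the 31-sub-case refinement) can be sketched as follows; the frequency counts are a made-up example, not data from the paper:

```python
from collections import Counter

def size_code(sizes_in_frequency_order):
    """Assign '0', '10', '110', ... to block sizes, most frequent first."""
    return {s: '1' * i + '0' for i, s in enumerate(sizes_in_frequency_order)}

# Hypothetical block-size tallies measured over one segmented image:
placed = [(4, 4)] * 120 + [(8, 8)] * 40 + [(16, 16)] * 9 + [(32, 32)] * 2
ranking = [s for s, _ in Counter(placed).most_common()]
codes = size_code(ranking)          # {(4, 4): '0', (8, 8): '10', ...}
structure_bits = sum(len(codes[s]) for s in placed)
```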

4. Experimental Results

The experiments are divided into two parts. In the first part, each block is encoded with its mean value. In the second part, we combine vector quantization with several VBS algorithms. Two CRP-ABC variants are included in the comparisons: CRP-ABCs, which uses only square blocks, and CRP-ABCr, which uses rectangular blocks.

4.1. Mean-value Encoding

Since all the VBS algorithms discussed above need only the error values of each block to perform segmentation, mean-value encoding provides a very good starting point for examining the various algorithms. In the experiments, eight 256x256 gray-scale images are tested. The block sizes used are 2x2, 4x4, 8x8, 16x16, and 32x32. A segmentation example performed on lena256 using QT and CRP-ABCr (allowing rectangular blocks) is shown in Figures 5a and 5b, respectively. In the results, the solid dark areas represent the 2x2 blocks. Clearly, CRP-ABCr produces far fewer small blocks than QT; consequently, fewer bits are needed to encode the image. It is also noteworthy that, because of the flexible placement policy and block shapes of the CRP-ABCr algorithm, it uses the smallest blocks only at edges and in detailed areas such as the eyes and hair.

Figure 5: (left) Segmentation result using QT; (right) using CRP-ABCr.

In this experiment, four algorithms, CRP-ABCr, CRP-ABCs, C&A, and QT, are tested on the image lena256. The resultant rate-distortion plot is shown in Figure 6. CRP-ABCr consistently obtains better PSNR than the other three algorithms, especially in the low bit-rate cases. The C&A algorithm, though it also uses rectangular blocks, is only slightly better than QT. The square-block based CRP-ABCs has, in this case, performance similar to QT. In fact, it still uses fewer blocks than QT to encode the image; however, as previously discussed, the overhead of the structure code offsets the gain in block count. Moreover, using the mean value as the block value does not seem to encourage the use of large blocks. This conjecture is verified in the following case, where vector quantization is used.

Figure 6: R-D plot for lena256 (mean-value encoding) comparing CRP-ABC (rectangle), C&A, QT, and CRP-ABC (square only); rates 0.5-1.9 bits/pixel, PSNR 26.4-27.6 dB.

4.2. Vector Quantization Encoding

We choose the vector quantization encoding technique for comparison because it is commonly used as a benchmark in variable block-size coding, and because it yields much higher compression ratios and PSNR than mean-value encoding does. To cover the various characteristics appearing in most images, seven standard 512x512 and 512x480 gray-scale images (kids, airplane, sailboat, soccer, cornfield, announcer, cablecar) are used as training images. The training images are divided into non-overlapping blocks and fed into the standard LBG algorithm to train the VQ codebooks.


In total, four levels (4x4, 8x8, 16x16, and 32x32) are used in encoding for the square-block cases, and ten levels (4x4, 8x4, 4x8, 8x8, 16x8, 8x16, 16x16, 32x16, 16x32, 32x32) for the rectangular-block cases. Only the first two levels (four for the rectangular-block cases) are trained using the actual block sizes, since training for large block sizes takes much longer; the codewords of the remaining levels are generated by interpolating those of the first two levels. Because the CRP-ABC algorithm examines more blocks in the CalcError stage than QT does, we use a two-stage search scheme to find the corresponding codeword for each block: for a large block, we first subsample both the block and the codewords and perform VQ to find the most likely codewords, then execute a second-stage search to locate the best codeword. Three test images (lena512, yacht, pens) that are not among the training images are used. We compare the results of three square-block based algorithms, QT, tpSegment, and CRP-ABCs, and two rectangular-block based algorithms, C&A and CRP-ABCr, by varying λ as in Sullivan and Baker's work. No DPCM or entropy coding is used for encoding the block values (codewords).
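A sketch of the two-stage codeword search; the subsampling factor and shortlist length are our assumptions, not values from the paper:

```python
import numpy as np

def two_stage_vq_search(block, codebook, factor=2, shortlist=8):
    """Find the best codeword index for `block` (a 2-D array) in
    `codebook` (a list of 2-D arrays of the same shape).

    Stage 1: compare subsampled versions to shortlist likely codewords.
    Stage 2: full-resolution search over the shortlist only.
    """
    sub = block[::factor, ::factor].astype(np.float64)
    dists = [np.sum((sub - cw[::factor, ::factor]) ** 2) for cw in codebook]
    candidates = np.argsort(dists)[:shortlist]
    best = min(candidates,
               key=lambda i: np.sum((block.astype(np.float64)
                                     - codebook[i]) ** 2))
    return int(best)
```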

Figure 7: R-D plot for lena512 using VQ, comparing QT, tpSegment, and CRP-ABCs (rates 0.15-0.5 bits/pixel, PSNR 28.7-30.5 dB).

We test the three square-block based algorithms using codebooks of sizes 512, 512, 1024, and 512, and show the results in Figure 7. CRP-ABCs outperforms the other two by about 0.2 dB in the low bit-rate cases. We further compare CRP-ABCr, C&A, and QT with all codebooks of size 512. As shown in Figure 8, the CRP-ABCr algorithm consistently performs better than QT and C&A. Specifically, CRP-ABCr outperforms the commonly used variable block-size technique, QT, by up to 0.55 dB on the image lena512; it also outperforms C&A by about 0.2 dB. Similar performance gains are observed for the other test images. In addition, because of the flexible block placement scheme of CRP-ABC and the use of rectangular blocks, the segmentation is shown in Figure 9 to conform better to the image characteristics than QT does. As shown in Figures 9(c)-(d), the shoulder area encoded by CRP-ABCr appears less blocky and more natural, and thus provides a better visual effect.

The performance of VQ depends heavily on the codebook design. In our experiments, although we tried to include more codewords for large blocks, the codebook may still not be representative of the blocks in the test images. Therefore, it is possible to further improve the PSNR if a more appropriate codebook set is formed.

Figure 8: R-D plot for lena512 using VQ, comparing QT, CRP-ABCr, and C&A (rates 0.15-0.5 bits/pixel, PSNR 28.5-31.5 dB).

Figure 9: (a) QT encoded image; (b) CRP-ABCr encoded image; (c) enlargement of the QT encoded image in 9a; (d) enlargement of the CRP-ABCr encoded image in 9b.

5. Conclusions

A new algorithm implementing the most flexible form of variable block-size (VBS) compression has been developed. Under the CRP paradigm, the new ABC algorithm takes the influence of each block on its neighbors into account in order to improve both the distortion and the bit rate simultaneously. The contributions of this work can be summarized as follows:
1. Free Placement: Unlike the conventional QT scheme, the CRP-ABC algorithm allows blocks to be placed at any location.
2. Unrestricted Placement Order: Although several existing algorithms attempt free placement, their block placement usually turns out to be constrained by the imposed order of scanning. As a result, many good block placements and combinations are disallowed.
3. Conflict Consideration: All previously developed VBS algorithms follow the merging/splitting criterion of QT. Since overlapping blocks are likely, selecting a given block may disable the use of others; such conflicts should be handled in a delicate and balanced manner. CRP-ABC is the first algorithm capable of minimizing these conflicts and achieving a balanced optimality by using the CRP principle.
4. Improved Structure Coding Scheme: To alleviate the bit overhead caused by unstructured segmentation, a new scheme for encoding the segmentation structure is developed; experimental results show improvements of as much as 12%.

As discussed in Section 4, it is possible to achieve a higher PSNR gain by designing more representative codewords that encourage the use of large blocks and rectangular blocks at the locations not allowed by the rigid QT. An improved codeword generation process, currently under development, should readily yield better code efficiency. The experiments reported here already show the flexibility of VBS. Segmentation in CRP-ABC is slower than in the other algorithms because it performs global consideration at each step, which prevents its use in real-time image compression applications. However, decompression is much faster, since it only needs to identify the location of each block; decompression is thus just as fast as in any of the VBS algorithms. This asymmetry makes the method suitable for applications where quality matters more than computation time, such as image archival. Presently, the segmentation is carried out only on spatial-domain images. The combination of the wavelet transform and QT has recently been shown to be promising [13]. Because of the flexibility of the CRP-ABC algorithm over QT in selecting block shape and location, CRP-ABC is expected to yield an even higher performance gain when combined with the wavelet approach.

References

[1] D. J. Vaisey and A. Gersho, "Variable Block-Size Image Coding," in Proceedings of ICASSP '87, Dallas, IEEE Acoustics, Speech, and Signal Processing Society, pp. 1051-1054, April 1987.
[2] Jerrold L. Boxerman and Ho John Lee, "Variable Block-sized Vector Quantization of Grayscale Images with Unconstrained Tiling," Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 2277-2280, 1990.
[3] L. Corte-Real and A. P. Alves, "Vector Quantization of Image Sequences Using Variable Size and Variable Shape Blocks," Electronics Letters, vol. 26, no. 18, pp. 1483-1484, Aug. 1990.
[4] Ruey-Feng Chang and Wei-Ming Chen, "Interframe Difference Quadtree Edge-based Side-match Finite-state Classified Vector Quantization for Image Sequence Coding," IEEE Trans. on Circuits and Systems for Video Technology, vol. 6, no. 1, pp. 32-39, Feb. 1996.
[5] Luis Corte-Real and Artur Pimenta Alves, "A Very Low Bit Rate Video Coder Based on Vector Quantization," IEEE Trans. on Image Processing, vol. 5, no. 2, pp. 263-273, Feb. 1996.
[6] Majid Rabbani and Paul W. Jones, Digital Image Compression Techniques, SPIE Optical Engineering Press, 1991.
[7] R. M. Gray, "Vector Quantization," IEEE ASSP Magazine, (4), pp. 4-29, 1984.
[8] P. A. Chou, T. Lookabaugh, and R. M. Gray, "Optimal Pruning with Applications to Tree-structured Source Coding and Modeling," IEEE Trans. Inform. Theory, vol. 35, pp. 299-315, Mar. 1989.
[9] Gary J. Sullivan and Richard L. Baker, "Efficient Quadtree Coding of Images and Video," IEEE Trans. on Image Processing, vol. 3, no. 3, pp. 327-331, May 1994.
[10] D. Y. Y. Yun, "The Evolution of Computing Software Tools," Computing Tools for Scientific Problem Solving (Keynote Paper), Academic Press, pp. 7-22, 1990.
[11] Y. Ge and D. Y. Y. Yun, "Simultaneous Compression of Makespan and Number of Processors Using CRP," Proceedings of the 1996 International Parallel Processing Symposium, Honolulu, Hawaii, 1996.
[12] Zong Ling and David Y. Y. Yun, "An Effective Approach for Solving Subgraph Isomorphism Problem," Proceedings of the IASTED International Conference on Artificial Intelligence, Expert Systems and Neural Networks, pp. 342-345, August 1996.
[13] H. M. Jung, Y. Kim, S. Rhee, H.-Y. Song, and K. T. Park, "HD-VCR Codec for Studio Application Using Quadtree Structured Binary Symbols in Wavelet Transform Domain," IEEE Trans. on Circuits and Systems for Video Technology, vol. 6, no. 5, pp. 506-513, 1996.
