IEEE Trans. Consumer Electronics, Nov. 2002, Vol. 48, No. 4, pp. 973-981

A LOW COMPLEXITY ERROR RECOVERY TECHNIQUE FOR WAVELET IMAGE CODECS WITH INTER-SUBBAND DEPENDENCY

Te-Chung Yang
Member of Technical Staff, Sharp Laboratories of America, Huntington Beach, CA 92647, USA
E-mail: [email protected]

Sunil Kumar
Department of Electrical and Computer Engineering, Clarkson University, Potsdam, NY 13699, USA
E-mail: [email protected]

ABSTRACT

Errors in the Huffman coded wavelet coefficients of a subband usually propagate till the end of the subband. Self-synchronizing Huffman codes limit this propagation by reestablishing synchronization. After regaining synchronization, however, the decoder may still be unable to align each symbol with its correct location, because the wrong number of symbols is decoded in the error propagation region. A low complexity scheme is proposed in this research to identify the error propagation region in a subband of a wavelet coded image and to re-align the displaced coefficients by using the inter-subband dependency between coefficients of a corrupted subband and its parent subband. Experiments demonstrate that the proposed technique provides a more robust image codec under different error conditions with very little bit-rate overhead.

1. INTRODUCTION

Due to its superior rate-distortion performance, wavelet-based image compression has been adopted in the new JPEG 2000 and MPEG-4 VTC standards [1]-[2]. In a simple wavelet image coding system, the input image is first decomposed into subbands of coefficients. The coefficients in each subband are then quantized, serialized by scanning, and entropy coded one by one using variable length codes (VLCs). The entropy coded coefficients are stored in the final bitstream for transmission. At the decoder end, the entropy decoder decodes the received bitstream, and the decoded wavelet coefficients are placed one after the other in the scanning order. The final synthesis stage transforms the wavelet coefficients into pixel values to reconstruct the decompressed image.

Compressed images are increasingly used in applications that require their transmission over error-prone wireless channels. The compressed bitstream may therefore get corrupted by channel errors during transmission. Errors in VLCs often propagate to subsequent symbols and result in a loss of synchronization between the encoder and the decoder. During the loss-of-synchronization period, the number of decoded symbols usually differs from the number of coded symbols. Self-synchronizing codes that quickly reestablish synchronization, and thus reduce the length of error propagation, have been studied by several researchers [3]-[7]. A VLC is said to be self-synchronizing if it contains synchronizing codewords, which resynchronize the bitstream so that the following symbols are decoded correctly regardless of any preceding synchronization slippage. A corrupted subband thus contains a correctly decoded region, an error propagation region and a resynchronized region, whose starting location is often shifted from the original one due to the mismatch between the numbers of encoded and decoded symbols.

Self-synchronizing VLCs, however, suffer from a significant disadvantage. Even though the decoder is in synchronization after decoding a synchronizing codeword, it may not know when the resynchronization was established. In particular, it may decode symbols correctly from the resynchronized portion of the bitstream without knowing their correct locations. Lam and Reibman [8] suggested the use of extended synchronizing codewords at predetermined locations in the bitstream to partially solve this problem. In their scheme, error propagation is limited by the extended synchronizing codeword: the symbols located after it are decoded free of error and correctly placed in the subband. However, the corrupted symbols located between two consecutive extended synchronizing codewords still cannot be recovered. Moreover, extended synchronizing codewords cannot be used very frequently in the bitstream

due to their additional code length overhead. Fixed-length codes (FLCs) and reversible variable length codes (RVLCs) provide other solutions to this problem. Both, however, depend on the probability distribution of the source and often have lower coding efficiency than the Huffman code. The loss of synchronization can also be addressed by the error-resilient entropy coding (EREC) scheme [9]. However, this technique fails to place the decoded coefficients correctly before the next starting point.

In this paper, we describe a low complexity scheme that identifies the proper starting location of the resynchronized region and correctly relocates its decoded coefficients in the subband, by exploiting the inter-subband correlation of wavelet coefficients. In our scheme, the quantized subband coefficients are entropy coded using a self-synchronizing variable length code proposed by Maxted and Robinson [4], whose coding efficiency is the same as that of the Huffman code. This code satisfies the suffix condition and thus minimizes the length of the error propagation region, as explained in Section 3. Hereafter, we shall refer to this self-synchronizing coding scheme as the 'suffix-rich self-synchronizing code'.

The paper is organized as follows. Wavelet image coding and error resilience techniques are briefly reviewed in Section 2. The suffix-rich self-synchronizing Huffman code is discussed in Section 3. The proposed error recovery scheme is described in Section 4. The experimental results and conclusions are presented in Sections 5 and 6, respectively.

2. BACKGROUND

2.1. Wavelet Image Coding

The block diagram of a typical wavelet image compression scheme is shown in Fig. 1. First, the wavelet decomposition is accomplished by passing the image data through a bank of analysis filters, which may be orthogonal or biorthogonal to the synthesis filter bank at the decoder end. The filtered subband images are then subsampled, resulting in a series of reduced-size subband images.
These subbands are easier to encode than the original image, since the transform achieves energy compaction. Note also that each subband carries a different amount of energy, and should therefore be quantized and coded separately. Based on the human visual system characteristics and the energy distribution in each subband, various quantization schemes such as the scalar

quantization (SQ), vector quantization (VQ), successive approximation quantization (SAQ) and trellis coded quantization (TCQ) can be used in each subband. The quantized coefficients of a subband are then scanned in a certain order, such as zig-zag or raster order, to form a serialized sequence. Finally, the serialized sequence is entropy coded into the bitstream for transmission. At the decoder end, the entropy decoder decodes the received bitstream, and the decoded wavelet coefficients are placed one after the other in the scanning order.
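The decomposition, quantization and scanning stages described above can be sketched as follows. This is only an illustration under assumptions of our own: it uses a one-level Haar filter bank as a stand-in for the codec's actual (e.g., biorthogonal) filters, a uniform dead-zone quantizer, and a generic zig-zag scan; all function names are ours.

```python
import numpy as np

def haar_decompose(img):
    """One level of 2-D subband decomposition, illustrated with the Haar
    filter bank (a real codec may use longer biorthogonal filters).
    Returns the LL, LH, HL and HH subbands, each half the input size."""
    a = np.asarray(img, dtype=float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-pass + subsample along rows
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # high-pass + subsample along rows
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def deadzone_quantize(band, step):
    """Uniform quantizer with a dead zone: coefficients smaller than `step`
    in magnitude map to 0."""
    band = np.asarray(band, dtype=float)
    return np.sign(band) * np.floor(np.abs(band) / step)

def zigzag_scan(block):
    """Serialize a 2-D subband in zig-zag order, anti-diagonal by
    anti-diagonal, snaking so consecutive samples stay adjacent."""
    h, w = block.shape
    out = []
    for s in range(h + w - 1):                      # s = i + j indexes anti-diagonals
        idx = [(i, s - i) for i in range(h) if 0 <= s - i < w]
        if s % 2 == 0:                              # alternate diagonals are reversed
            idx.reverse()
        out.extend(block[i][j] for i, j in idx)
    return out
```

A constant image, for example, concentrates all its energy in the LL subband, leaving the three detail subbands exactly zero, which is the energy compaction the text refers to.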

Fig. 1. The block diagram of a typical wavelet image coding scheme.

The quantization process is lossy in general. It often discards low-energy wavelet coefficients in the high frequency subbands when the resulting distortion is not visible to the human visual system (HVS). The high data compression ratio is actually achieved at the quantization stage. For a target bit rate, a higher compression ratio in the high frequency subbands allows the coefficients in the low frequency subbands to be coded with higher fidelity. A quantization scheme driven by the perceptual importance of the subband coefficients is thus a promising approach to high quality image coding at low bit rates. Further compression is achieved by reducing the redundancy in the quantized coefficients with a variable length entropy coder. At the entropy coding stage, one can use run-length, arithmetic, or Huffman coding. This operation is reversible and lossless. After this block, the compressed image is represented as a bitstream composed of variable length codes, which can be transmitted through a communication channel.

2.2. Error Resilience

Many techniques have been discussed in the literature to handle transmission errors. They can be classified into three categories: forward error concealment, error concealment by post-processing, and interactive error concealment [10]. These three approaches are not mutually exclusive, and their combination often achieves better performance.

Forward error concealment refers to techniques in which the encoder plays the primary role. Here, a source coding scheme and/or a transport-control mechanism is designed to minimize the effect of transmission errors without requiring any error concealment at the decoder. Forward error correction (FEC), joint source/channel coding and layered coding are examples of this approach. Since it requires overhead bits, the coding efficiency is decreased.

By error concealment via post-processing, we refer to techniques in which the decoder performs the error concealment. These techniques attempt to recover the lost information by estimation and interpolation, without relying on additional information from the encoder. Spatial smoothing, interpolation and filtering belong to this category.

In the interactive error concealment approach, the encoder and the decoder work cooperatively to minimize the impact of transmission errors. Automatic repeat request (ARQ) and selective predictive coding based on feedback are examples of this approach.

The performance of these three approaches can be compared using the following criteria: the quality of the reconstructed image, the time delay introduced, the number of overhead bits, and the processing complexity. The importance of each criterion depends on the intended application. For example, time delay is critical in real-time two-way communications, whereas non-real-time Internet applications may tolerate a relatively longer delay. Retransmission may work well for point-to-point transmission, but is not suitable for multi-point applications. Low computational complexity is important for hand-held user devices, such as PDAs and cell phones.

3. SELF-SYNCHRONIZING SUFFIX-RICH CODE

The optimal (i.e., with the shortest expected length) prefix-free code for a given distribution can be constructed by a simple algorithm proposed by Huffman [11]. In this section, we examine the suffix-rich self-synchronizing code proposed by Maxted and Robinson [4]. An example is given below to explain the suffix condition w.r.t. a given probability source.

Let us consider a random variable X taking values in the set {1, 2, 3, 4, 5} with probabilities 0.25, 0.25, 0.2, 0.15, 0.15, respectively. To construct the suffix-rich self-synchronizing code, we first choose a preferred suffix. Without loss of generality, we choose 1 as the preferred suffix,

    1 > 0.

The two-bit codewords can be constructed based on the above choice, as shown below,

    (11, 01) > (10, 00).

Note that in the above, the codewords in the pair (11, 01) are preferred to those in (10, 00), but the codewords within a pair (e.g., 11 and 01) do not have priority over each other. The same is true for the pair (10, 00). However, to continue further, we need to give a preference among the codewords within the same pair. Let us assign the preference as,

    11 > 01 > 10 > 00.                                      (1)

Thus the three-bit codewords can be constructed as shown below,

    (111, 011) > (101, 001) > (110, 010) > (100, 000).      (2)

    X    Probability    Huffman codeword    Suffix-rich codeword    Length
    1    0.25           01                  11                      2
    2    0.25           10                  01                      2
    3    0.2            11                  00                      2
    4    0.15           000                 101                     3
    5    0.15           001                 100                     3

    Table 1. Huffman and suffix-rich codes.

The Huffman and the suffix-rich codes for the given source are shown in Table 1. Both codes are prefix-free and optimal in terms of the shortest expected length; the only difference between them is the assignment of 1's and 0's. In the Huffman code, only one codeword {01} is the suffix of another codeword {001}. The suffix-rich code has two codewords that are suffixes of other codewords, namely {01, 101} and {00, 100}. This is why it is called the suffix-rich code.

The above procedure can also be understood by assigning "0" and "1" bits to a Huffman tree. Let us start with a Huffman tree as shown in Fig. 2, without any assignment of 1's and 0's as the initialization. Let us focus on the symbol X = 1, which has the highest probability (i.e., 0.25). The length of the codeword needed for X = 1 is 2, and we choose the first preference codeword 11 from the two-bit codeword list shown in (1).

Fig. 2. A Huffman coding tree before (left) and after (right) codeword assignment.

After assigning the first codeword to X = 1, we consider X = 2. Note that the first bit of the codeword assigned to X = 2 must be 0. Since the next candidate with 0 as the leading bit in the preference list of two-bit codewords is 01, we assign 01 to X = 2, and then move to X = 3. The only two-bit codeword which can be assigned to X = 3 is 00. The assignment of X = 4 and X = 5 is restricted to three-bit codewords with the prefix 10. We select 101 for X = 4 and 100 for X = 5 from (2), as shown in Fig. 2.

3.1. Loss of Synchronization and Error Propagation Length

Errors often occur when the compressed bitstream is transmitted over unreliable channels. Due to the use of variable length codes, an error in any codeword often results in the decoding of an erroneous longer or shorter codeword. This error generally propagates and causes errors in subsequent decoded symbols. In practice, most Huffman codes resynchronize themselves after some codewords on a statistical basis. The decoded coefficients will, however, be misaligned.
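The suffix-richness comparison made in Table 1 can be checked mechanically. The following sketch (function names are ours) verifies that both codebooks are prefix-free with identical length profiles, and counts the codewords that occur as suffixes of longer codewords:

```python
def is_prefix_free(code):
    """True if no codeword is a proper prefix of another codeword."""
    return not any(a != b and b.startswith(a) for a in code for b in code)

def suffix_codewords(code):
    """The codewords that occur as the suffix of some longer codeword."""
    return {a for a in code for b in code if a != b and b.endswith(a)}

# Codebooks from Table 1.
huffman     = ["01", "10", "11", "000", "001"]
suffix_rich = ["11", "01", "00", "101", "100"]
```

Running the checks confirms the text: the Huffman code has one suffix codeword ({01}), while the suffix-rich code has two ({01, 00}), at identical codeword lengths.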

Fig. 3. The Huffman tree for decoding.

We use the Huffman tree in Fig. 3 to explain the loss of synchronization at the decoder end. Decoding starts from the root of the Huffman tree. Whenever the decoder receives an input bit, it proceeds from the current node to one of its child nodes based on the received bit. After completely decoding a correctly received codeword, the decoder reaches a leaf node. Note that the Huffman tree in Fig. 3 has three internal nodes. When the decoder is at an internal node, it needs more bits to complete the decoding of the current codeword. When a bit error occurs in the received compressed bitstream, the erroneous bit changes to its complement. This means that the decoder chooses the opposite branch and may reach a leaf node before (or after) the actual ending bit of the current codeword. Thus, we can focus on each of the three internal nodes shown in Fig. 3 and compute the corresponding mean error propagation lengths.

Node 1: the decoder is at node 1 when the actual codeword of one symbol has finished. If the next codeword is 11 (X = 1), the decoder comes back to node 1 after decoding this codeword. If the next codeword is 01, 00, 101, or 100 (X = 2, 3, 4, 5), the decoder reaches a leaf node, and thus regains synchronization with the encoder after decoding it. The mean error propagation length for this case is 3.067 bits.

Node 2: the decoder can reach a leaf node from this internal node only when the received codeword is 101 or 100 (X = 4, 5). If the next codeword is 11 or 01, the decoder moves to node 1 after decoding the received codeword. If 00 is the next codeword, the decoder remains at node 2 after decoding it. The mean error propagation length for this case is 4.792 bits.

Node 3: the decoder is in the same situation as at node 2, so the mean error propagation length is again 4.792 bits.

The results of traversing the Huffman tree from all three nodes are shown in Tables 2 and 3 for the Huffman and the self-synchronizing suffix-rich codes, respectively. The mean error propagation lengths of the two codes are compared in Table 4. It is evident from these tables that the traversal of the tree is more likely to end in a leaf node, in the presence of an error,

    Probability    Codeword    From node 1    From node 2    From node 3
    0.25           01          leaf node      node 2         node 2
    0.25           10          node 1         node 1         node 1
    0.2            11          node 2         node 1         node 2
    0.15           000         node 1         node 3         node 3
    0.15           001         node 2         leaf node      leaf node

    Table 2. Transitional relationship at nodes 1, 2 and 3 for the Huffman code.

    Probability    Codeword    From node 1    From node 2    From node 3
    0.25           11          node 1         node 1         node 1
    0.25           01          leaf node      node 1         node 1
    0.20           00          leaf node      node 2         node 2
    0.15           101         leaf node      leaf node      leaf node
    0.15           100         leaf node      leaf node      leaf node

    Table 3. Transitional relationship at nodes 1, 2 and 3 for the self-synchronizing suffix-rich code.
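The transitional relationships of Tables 2 and 3 can be reproduced by modeling the decoder state as the string of pending (not yet decoded) bits. Below is a minimal sketch, under our own assumption about how the node labels map to pending-bit strings (a mapping chosen to be consistent with the tables):

```python
def decode_state(codebook, state, bits):
    """Feed `bits` to a prefix-code decoder whose pending (partially
    decoded) bits are `state`; return the pending bits afterwards.
    An empty string means the decoder has just completed a codeword,
    i.e. it is back at the root (resynchronized).  Greedy membership
    testing is valid because the codes are prefix-free."""
    pending = state
    for b in bits:
        pending += b
        if pending in codebook:        # reached a leaf: one symbol decoded
            pending = ""
    return pending

# Codebooks from Table 1.  Assumed node-to-pending-bits mapping:
#   suffix-rich: node 1 = "1",  node 2 = "0",  node 3 = "10"
#   Huffman:     node 1 = "0",  node 2 = "1",  node 3 = "00"
suffix_rich = {"11", "01", "00", "101", "100"}
huffman = {"01", "10", "11", "000", "001"}
```

For example, feeding the suffix-rich codeword 01 to a decoder stuck at node 1 ends at a leaf, exactly as the second row of Table 3 states.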

              Suffix-rich code    Huffman code
    Node 1    3.067               16.277
    Node 2    4.792               13.741
    Node 3    4.792               5.131

    Table 4. Comparison of mean error propagation lengths (in bits) for the Huffman and self-synchronizing suffix-rich codes.

when the suffix-rich code is used. This implies that subsequent symbols will be decoded correctly sooner. This is why the mean error propagation lengths in Table 4 are smaller for the suffix-rich code than for the Huffman code.

4. PROPOSED ERROR RECOVERY SCHEME

As discussed in the previous section, an error in the bitstream is likely to result in an inaccurate (smaller or larger) number of decoded symbols in a given subband due to error propagation. This makes it difficult to determine the beginning of the subsequent subband. This problem can be solved by using resynchronization markers. In our scheme, we use a byte-aligned, unique two-byte resynch marker, 0xFFFx, at the beginning of each new subband.
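A minimal sketch of how a decoder might locate the byte-aligned 0xFFFx markers and partition the received bitstream into per-subband chunks is given below. Everything beyond what the text states (the low nibble carrying a subband index, the function name) is our assumption; a real codec must also prevent the entropy-coded payload from emulating the marker pattern.

```python
def split_on_markers(stream: bytes):
    """Split a received bitstream into per-subband chunks at byte-aligned
    0xFFFx markers: two bytes whose first 12 bits are 0xFFF.  The low
    nibble is assumed here to carry a subband index.  Returns a list of
    (index, payload) pairs; the payload's decoded symbol count can then
    be compared against the expected count for that subband."""
    parts, idx, start = [], None, None
    i = 0
    while i < len(stream) - 1:
        if stream[i] == 0xFF and (stream[i + 1] >> 4) == 0x0F:
            if start is not None:                  # close the previous chunk
                parts.append((idx, stream[start:i]))
            idx, start = stream[i + 1] & 0x0F, i + 2
            i += 2
        else:
            i += 1
    if start is not None:                          # flush the final chunk
        parts.append((idx, stream[start:]))
    return parts
```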

Fig. 4. Three regions in a corrupted subband.

The corrupted subband can be divided into three regions, i.e., the correctly decoded region, the error propagation region and the shifted region, as shown in Fig. 4. The use of the suffix-rich self-synchronizing code reestablishes synchronization, and the subsequent codewords are decoded correctly. However, the shifted region still causes perceptible quality degradation in the reconstructed image, especially in the edge areas. In this section, we propose to use the inter-subband correlation to restore the shifted region correctly. The proposed scheme consists of two steps, discussed below. Step I is performed at the encoder, whereas Step II is carried out at the decoder for the corrupted subband(s).

Step I: It is well known that a certain correlation exists between the coefficients of parent and child subbands in a wavelet transformed image. This property has been utilized in the embedded zero-tree and SPIHT wavelet image coders [12]-[13].

Fig. 5. The correlation of coefficients in the parent (Y-axis) and child subbands (X-axis) for the test image Woman, in the vertical (left), horizontal (center) and diagonal (right) directions. The vertical axis is the squared coefficient magnitude of the parent subband, while the horizontal axis is that of the associated child subband.

In Fig. 5, we show three typical log-log plots of the squared magnitudes of the coefficients in different directional subbands of the test image Woman. The plots reveal a fairly strong correlation between corresponding coefficients of the parent and child subbands, especially when the magnitude of the coefficients in the parent subband is not very small. Similar behavior has been observed for the parent-child subband pairs at all decomposition levels, for different test images. As a coarse approximation, we adopt a linear prediction rule to describe this relationship, i.e.

    P_s^2(x, y) = A_s * sum_{(i,j) in R} C_s^2(i, j) + B_s,        (3)

where C_s(i, j) is the magnitude of a coefficient in a 2 x 2 block R of the current subband s, and P_s(x, y) is the magnitude of the corresponding coefficient in the parent subband at the same spatial location as R. The parameters A_s and B_s can be determined for each subband via least square error (LSE) curve fitting, and are transmitted to the decoder in the compressed bitstream [14].

Step II: The number of coefficients in a given subband can be easily determined from the image size and the decomposition level of the current subband. We have also used resynch markers to identify the end of each subband. Thus, a propagating error is assumed to have occurred in the bitstream whenever the number of decoded symbols in a subband differs from the number of encoded symbols. When the

number of decoded symbols in a subband is less than expected, a split-algorithm is used to generate more coefficients. Here, it is assumed that the parent subband has already been correctly decoded or recovered, because the algorithm is applied starting from the low-low (LL) frequency subband. Using (3) and the parameters A_s and B_s, the decoder estimates the parent subband coefficient, denoted by P^, corresponding to each 2 x 2 block of child subband coefficients. The estimated value P^ is then compared with the actual decoded value P. The 2 x 2 block with the largest difference between the actual and estimated values is considered to lie in the error propagation region. Thus, the coefficients following this block may lie either in the error propagation region or in the shifted region. We append one more coefficient to the block by duplicating its last coefficient; all subsequent coefficients are thus shifted by one position towards the end of the subband. This procedure moves the coefficients of the shifted region towards their correct locations. The same process is repeated until the number of decoded coefficients in the child subband matches the expected value.

When the number of decoded coefficients in the child subband is more than expected, a merge-algorithm is used in the same spirit. In this situation, the coefficients in the error propagation region are deleted one by one, so that the following coefficients move forward to their correct positions, until the counts match.

The split/merge based error recovery scheme works well when there is only one error propagation region in the corrupted subband. However, more than one independent and disjoint error propagation region may sometimes be

present in the higher frequency subbands (due to their larger size), especially at high BERs. To overcome this problem, we use resynch markers to partition the large subbands into several parts, so as to minimize the probability of more than one error propagation region occurring in the bitstream between two consecutive resynch markers. The split/merge algorithm can then be applied in each corrupted portion of the subband.

5. EXPERIMENTAL RESULTS

We have tested the proposed scheme on three images (Woman, Cafe and Bike) of size 512 x 512 with 256 gray levels. We adopt the 9/7 tap filter and a three-level pyramid decomposition in the image codec. Differential coding is used to encode the coefficients of the LL subband. The coefficients in all other subbands are encoded directly using a uniform quantizer with a dead zone. The encoded bitstream is sent through a noisy binary symmetric channel (BSC) with a bit error rate (BER) ranging from 10^-4 to 10^-3. For each BER, we repeat the simulation 500 times to obtain the mean PSNR (Peak Signal-to-Noise Ratio) value.

The average PSNR values of the three decoded images under three different BERs are given in Table 5. We see that the proposed scheme achieves about 2 to 3 dB improvement over the Huffman coded image. The original test image Woman is shown in Fig. 6. For visual comparison, the reconstructed images are shown in Figs. 7 and 8 at BERs of 10^-4 and 10^-3, respectively. We see that the proposed error recovery method provides a much better subjective image quality than the reference method.

    Image    BER         Proposed Scheme    Huffman code
    Woman    1 x 10^-4   25.85 dB           23.11 dB
    Woman    5 x 10^-4   24.44 dB           21.41 dB
    Woman    1 x 10^-3   20.43 dB           18.22 dB
    Cafe     1 x 10^-4   25.27 dB           23.04 dB
    Cafe     5 x 10^-4   22.91 dB           20.67 dB
    Cafe     1 x 10^-3   20.11 dB           18.12 dB
    Bike     1 x 10^-4   25.54 dB           23.01 dB
    Bike     5 x 10^-4   24.47 dB           21.03 dB
    Bike     1 x 10^-3   20.24 dB           18.23 dB

    Table 5. The PSNR comparison of reconstructed images at different bit error rates.
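For reference, the two halves of the recovery scheme evaluated above (the Step I fitting of Eq. (3) and the Step II split-algorithm of Section 4) can be sketched as follows. This is a simplification under our own assumptions: the 2 x 2 spatial blocks of the paper are modeled as fixed-length runs in 1-D scan order, and all function names are ours.

```python
import numpy as np

def fit_parent_child(parent, child):
    """Step I (encoder): least-square fit of A_s, B_s in Eq. (3).
    `child` is twice the size of `parent`; each parent coefficient is
    paired with the summed squared coefficients of its 2 x 2 child block."""
    c2 = np.asarray(child, dtype=float) ** 2
    block_energy = (c2[0::2, 0::2] + c2[0::2, 1::2]
                    + c2[1::2, 0::2] + c2[1::2, 1::2])
    p2 = np.asarray(parent, dtype=float) ** 2
    A, B = np.polyfit(block_energy.ravel(), p2.ravel(), 1)
    return A, B

def split_recover(child, parent, A, B, expected_len, block=4):
    """Step II (decoder) split-algorithm, simplified to 1-D: while the
    decoded child subband is short, find the run whose Eq. (3) prediction
    deviates most from the decoded parent coefficient, duplicate its last
    coefficient, and shift the rest toward the end of the subband."""
    child = list(child)
    while len(child) < expected_len:
        worst, worst_err = 0, -1.0
        for k in range(len(child) // block):
            blk = child[k * block:(k + 1) * block]
            p2_hat = A * sum(c * c for c in blk) + B     # predicted P^2
            err = abs(p2_hat - parent[k] ** 2)
            if err > worst_err:
                worst, worst_err = k, err
        cut = (worst + 1) * block                        # end of the worst run
        child.insert(cut, child[cut - 1])                # duplicate last coeff
    return child
```

On synthetic data that obeys Eq. (3) exactly, the fit recovers A_s and B_s, and dropping one coefficient from a child run is repaired so that the shifted region returns to its correct positions.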

Fig. 6. The original Woman test image.

6. CONCLUSION

A very simple yet effective technique was proposed in this work to decrease the impact of coefficients misplaced due to error propagation. The restoration of misplaced coefficients to their correct locations was achieved by exploiting the inter-subband correlation of wavelet coefficients. The experimental results showed that the proposed technique provides robust performance against random channel errors with negligible bit-rate overhead. Due to its low complexity and negligible bit-rate overhead, this scheme can prove useful for recovering the data of corrupted images on hand-held wireless devices, such as PDAs and cell phones.

7. REFERENCES

[1] JPEG 2000 Part I Final Draft International Standard, ISO/IEC JTC 1/SC29 WG1 N1980, Sept. 2000.

[2] MPEG-4 Video Verification Model version 18.0, ISO/IEC JTC1/SC29/WG11 N3908, Jan. 2001.

[3] T. J. Ferguson and J. H. Rabinowitz, "Self-synchronizing Huffman codes", IEEE Trans. on Inform. Theory, Vol. IT-30, pp. 687-693, 1984.

Fig. 7. The decoded Woman image at BER = 10^-4, using the Huffman code (left) and the proposed scheme (right).

Fig. 8. The decoded Woman image at BER = 10^-3, using the Huffman code (left) and the proposed scheme (right).

[4] J. C. Maxted and J. P. Robinson, "Error recovery for variable length codes", IEEE Trans. on Inform. Theory, Vol. IT-31, pp. 794-801, 1985.

[5] B. L. Montgomery and J. Abrahams, "Synchronization of binary source codes", IEEE Trans. on Inform. Theory, Vol. IT-32, pp. 849-854, 1986.

ACKNOWLEDGMENT

The authors gratefully acknowledge the support and guidance of Prof. C.-C. Jay Kuo of the University of Southern California, Los Angeles, for this research work.

BIOGRAPHY

[6] R. M. Capocelli, A. A. D. Santis, L. Gargano and U. Vaccaro, "On the construction of statistically synchronizable codes", IEEE Trans. on Inform. Theory, Vol. IT-38, pp. 407-414, 1992.

[7] S. S. Hemami, "Robust image transmission using resynchronizing variable-length codes and error concealment", IEEE Journal on Selected Areas in Communications, Vol. 18, pp. 927-939, 2000.

[8] W.-M. Lam and A. R. Reibman, "Self-synchronizing variable-length codes for image transmission", Proc. of IEEE ICASSP, Vol. 3, pp. 477-480, March 1992.

[9] D. W. Redmill and N. G. Kingsbury, "The EREC: an error-resilient technique for coding variable-length blocks of data", IEEE Trans. Image Proc., Vol. 5, pp. 565-574, 1996.

Te-Chung Yang received the B.S. degree in Electrical Engineering from National Taiwan University, Taipei, Taiwan, in 1991, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California, Los Angeles, in 1995 and 1999, respectively. Currently, he is a member of technical staff at Sharp Laboratories of America, Huntington Beach, CA. His research interests include error-resilient compression algorithms for still images and low bit-rate video tailored to wireless transmission channels.

[10] Y. Wang and Q.-F. Zhu, "Error control and concealment for video communication: a review", Proceedings of the IEEE, Vol. 86, pp. 974-997, 1998.

[11] D. A. Huffman, "A method for the construction of minimum-redundancy codes", Proc. IRE, Vol. 40, pp. 1098-1101, 1952.

[12] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Trans. Sig. Proc., Vol. 41, pp. 3445-3462, 1993.

[13] A. Said and W. A. Pearlman, "A new fast and efficient image codec based on set partitioning in hierarchical trees", IEEE Trans. Cir. and Syst. for Video Technol., Vol. 6, pp. 243-250, 1996.

[14] T.-C. Yang, S. Kumar and C.-C. J. Kuo, "Error resilient wavelet image coding with fast resynchronization", SPIE Conf. on Visual Communication and Image Processing, pp. 84-93, San Jose, CA, 1999.

Sunil Kumar received the B.E. (Electronics Engineering) degree from Regional Engineering College, Surat (India), in 1988, and the M.E. (Electronics & Control Engineering) and Ph.D. (Electrical and Electronics Engineering) degrees from the Birla Institute of Technology and Science (BITS), Pilani (India), in 1993 and 1997, respectively. He also taught in the Electrical and Electronics Engineering department at BITS, Pilani. From 1997 to 2001, he was a post-doctoral researcher at the Signal and Image Processing Institute at the University of Southern California (USC), Los Angeles, CA. He taught in the Electrical Engineering department at USC during 2000-02. Since June 2000, he has also been a consultant with industry on MPEG-4 and JPEG 2000 projects and has participated in JPEG 2000 standardization activities. Currently, he is an assistant professor in the Department of Electrical and Computer Engineering at Clarkson University, Potsdam, NY. His research interests include error-resilient and scalable multimedia compression techniques, the MPEG-4 and JPEG 2000 standards, and mobile multimedia networks.