Combined Image Compressor and Denoiser based on Tree-adapted Wavelet Shrinkage

Revision of OE 010337
James S. Walker
Department of Mathematics, University of Wisconsin–Eau Claire, Eau Claire, WI 54702–4004
Phone: 715–836–3301; Fax: 715–836–2924; e-mail: [email protected]
Abstract

An algorithm is described for simultaneously compressing and denoising images. The algorithm is called tree-adapted wavelet shrinkage and compression (TAWS-Comp). TAWS-Comp is a synthesis of an image compression algorithm, adaptively scanned wavelet difference reduction (ASWDR), and a denoising algorithm, tree-adapted wavelet shrinkage (TAWS). As a compression procedure, TAWS-Comp inherits all of the advantages of ASWDR: its ability to achieve a precise bit-rate assigned before compressing, its scalability of decompression, and its capability for enhancing regions-of-interest. Such a full range of features has not been available with previous compressor plus denoiser algorithms. As a denoising procedure, TAWS-Comp is nearly as effective as TAWS alone. TAWS has been shown to have good performance, comparable to state-of-the-art denoisers. In many cases, TAWS-Comp matches the performance of TAWS while simultaneously performing compression. TAWS-Comp is compared with other combined compressor/denoisers, in terms of error reduction and compressed bit-rates. Simultaneous compression and denoising is needed when images are acquired from a noisy source and storage or transmission capacity is severely limited (as in some video coding applications). An application is described where the features of TAWS-Comp as both compressor and denoiser are exploited.

Keywords: image compression; image denoising; signal processing; video coding.
Introduction

We shall describe an algorithm for simultaneously compressing and denoising an image corrupted by additive random noise. The classical approach to compressing a noisy image—motivated by results from rate distortion theory [1], [2]—is to use a two-step process. In the first step the noisy image is denoised, while in the second step the denoised image is compressed. There are situations, however, where limited processor resources call for the greater efficiency afforded by combining compressing with denoising. Combining compressing with denoising is particularly apt for situations, such as some video coding applications, that satisfy the following three conditions: (1) the image source is noisy, (2) the time for processing is short and/or processor power is limited, and (3) the transmission channel capacity is low. Condition (1) calls for denoising, condition (3) calls for compressing, and condition (2) calls for simultaneous denoising and compressing in order to save time and/or free the processor for other tasks. From this point on, we shall refer to such a combined compressor plus denoiser as a compdenoiser. In section 2 we examine our compdenoiser algorithm, the tree-adapted wavelet shrinkage and compression (TAWS-Comp) algorithm. The underlying theory behind TAWS-Comp will be explained and we shall describe how to implement it. As a denoising procedure, TAWS-Comp is nearly as effective as TAWS alone. TAWS was shown in [3] to have good performance, comparable to state-of-the-art denoisers. We will show that, for many images, TAWS-Comp matches the performance of TAWS while simultaneously performing compression. Section 3, which is the last section, summarizes some experimental denoisings and compares the performance of TAWS-Comp with previous algorithms. While an enormous amount of work has been done on the separate topics of compressing and denoising, relatively little work has been done on their combination.
Two compdenoisers suitable for images were described in [4] and [5].
In section 3 we shall compare the performance of TAWS-Comp with these two algorithms, and find that it equals or exceeds their performance. An important feature of TAWS-Comp, one which is not enjoyed by these two algorithms, is its scalability. Any bit-rate, less than or equal to the compressed image's bit-rate, can be used for decompressing a TAWS-Comp compressed image. Scalability is an important feature for transmitting compressed images over low-capacity channels. For these other two algorithms, scalability is missing: the only available bit-rate for decompression is the bit-rate at which an image was compressed.
1 An application of compression plus denoising
In this section we illustrate an application requiring a compdenoiser. When transmitting an image over a low-capacity channel, the compressed image is typically transmitted using data packets. The first packet produces a low-resolution image at the receiver, and subsequent packets enhance this image. TAWS-Comp, which is an embedded algorithm using bit-plane encoding, easily fits into such a transmission scheme. Between successive packets, the receiver could request the compdenoiser to selectively enhance a region-of-interest (ROI). The TAWS-Comp algorithm incorporates just such an ROI capability into its design. We show in Fig. 1 some images illustrating such a system. In Fig. 1(a) there is a noisy image of a woman. Because transmission capacity is low, this image must be transmitted in packets that are, say, 100 times smaller in size than the total bits for the image. In Fig. 1(b), we show the image at the receiver after the first packet has been transmitted. This image is a 100:1 compression of the source image, and it has been denoised as well. Suppose now that the receiver requests that the next packet focus exclusively on enhancing the region inside the rectangle shown in Fig. 1(a). If the transmitter were not capable of performing denoising along with compression, then the next packet would produce the image shown in Fig. 1(c). Notice that a significant amount of noise appears within the ROI, reducing the quality of the enhancement. However, when the transmitter performs denoising along with compression via the TAWS-Comp algorithm, then the ROI is enhanced but without added noise as shown in Fig. 1(d).
2 Description of the TAWS-Comp algorithm
In the previous section we discussed an application which requires a compdenoiser. In this section we shall describe the TAWS-Comp algorithm for accomplishing this task of simultaneous compression and denoising. We shall first examine the basic theory underlying TAWS-Comp and then discuss its implementation. Our emphasis here will not be on mathematical proofs—in fact, they have not been developed at this time—but rather on the fundamental ideas involved. The basic theory behind TAWS-Comp is a synthesis of the ideas that underlie ASWDR and TAWS, which have been described in references [6], [7], and [3]. Both ASWDR and TAWS require that the source image be transformed using a wavelet transform [8], [9]. Wavelet transforms of natural images possess the following four crucial properties:

1. Energy Conservation. The wavelet transform is an orthogonal transform.
2. Energy Compaction. There are large numbers of small magnitude wavelet coefficients. Most of the energy is concentrated in the trend (the all-lowpass subband).

3. Two Populations. The larger wavelet coefficients are clustered around edges. Smaller wavelet coefficients reside in smoother regions.

4. Clustering. Large magnitude wavelet coefficients tend to have some large magnitude coefficients located near them.

The Energy Conservation property is useful for dealing with additive Gaussian noise. With additive Gaussian noise, the noisy image g is related to the original image f by g = f + n, where the noise values, n[i, j], are independent random variables, each zero-mean Gaussian with variance σ². It is well-known that an orthogonal transform will preserve the noise-type: the transformed noise will remain Gaussian i.i.d. with mean zero and variance σ². Because the transformed noise is Gaussian i.i.d., the other three properties above will not apply to it. These latter three properties, which are assumed to apply to the transform of f, can then be used to distinguish noise-dominated transform values from image-dominated values. We shall now discuss the validity of properties 2 to 4 for wavelet transforms of natural images, assuming certain smoothness conditions for an appropriate image model, and we shall then explain how TAWS-Comp uses all four properties for denoising as well as compression. Properties 2 through 4 can be verified, at least to a high probability, if we assume that our images are obtained from discrete values of piecewise smooth functions. Energy Compaction will hold because in a region where f is smooth, it may be closely approximated by a polynomial of some fixed degree, as in a truncated Taylor expansion.
When the support of the analyzing wavelet is contained within this region, then the corresponding wavelet coefficient will be approximately zero (assuming the analyzing wavelet has sufficiently many zero moments). This leads to many small-magnitude wavelet coefficients located in regions where the signal is smooth. In fact, for piecewise smooth images it can be shown that, when parent and child wavelets (parent and child wavelets are described in Ref. [9]) have supports that are disjoint from the edges of the image, the following implication holds (when T is not too small):

|parent coefficient| < T  =⇒  |child coefficient| < T/2.    (1)
We will say that a parent coefficient is insignificant (significant) when its magnitude is less than (greater than or equal to) some fixed threshold T, and a child coefficient is insignificant (significant) when its magnitude is less than (greater than or equal to) T/2. Thus (1) says, for piecewise smooth images, that insignificant parents have insignificant children (at least away from edges). Therefore, away from edges, the following statistical implication is valid:

Insignificant parent ≈> Insignificant children.    (2)
In other words, there is a high probability that when a parent is insignificant then all of its children are insignificant. This fact is the basis for zerotree image compression methods, such as the ones described in [10] and [11]. Further probabilistic arguments in favor of (2) are given in [10]. Property 3 holds because the wavelets used are continuous and compactly supported. Consequently, sharp transitions in image values near edges produce relatively higher values for the inner products of the image with wavelet basis functions supported in regions overlapping these edges.
Thus relatively higher transform values occur near edges. Detailed theoretical and statistical verifications of this fact are described in [9] and [12]. Furthermore, since the support of a parent wavelet contains the support of its child wavelets, it follows that a significant value of a parent coefficient occurring near an edge will, at least statistically, imply that some of the child values will also be significant. This statistical implication is expressed as

Significant parent ≈> Some significant children.    (3)
Data in support of (3) was compiled in [3]. The Clustering Property follows from the large amount of overlap of supports of adjacent wavelet basis functions. Because of this overlap, if a wavelet coefficient is large near an edge, then there is a significant probability that adjacent wavelet coefficients will also be large. The statistical implication that expresses the Clustering property is

Significant coefficient ≈> Some significant adjacent coefficients.    (4)
Studies confirming this statistical implication are described in references [13] to [15]. A nice illustration of the validity of (2) to (4) can be seen in Fig. 2. In Figures 2(b) and (c) we show the locations of significant parent and child coefficients, when the threshold T = 20, for a wavelet transform of the Boats test image. The grey pixels in both images indicate insignificant coefficients, and the white pixels indicate significant coefficients. The similarity of the regions made up of grey pixels in the two images indicates that insignificant parents tend to have insignificant children, as stated by (2). Likewise, the similarity of the regions made up of white pixels indicates that significant parents tend to have some significant children, as (3) states. Finally, the clustering together of significant coefficients (in either image) illustrates the validity of (4). Properties 1 to 4 are used by TAWS-Comp to remove noise-dominated transform coefficients and retain image-dominated coefficients. It does this by modifying the VisuShrink method of Donoho and Johnstone [16]. In VisuShrink, the values of the wavelet transformed noisy image are subjected to the following shrinkage function (where τ is the threshold): S(t) = sgn(t)[|t| − τ]+. To be more precise, shrinkage is only applied to wavelet coefficients; the scaling coefficients (in the all-lowpass subband) are left unaltered. With a 3- or 4-level wavelet transform, the noise energy of the scaling coefficients is generally small enough to ignore. For N × N images, the threshold τ is chosen by VisuShrink to be τV = σ√(2 logₑ N). After shrinkage, an inverse wavelet transform is applied to produce the denoised image. In [16], it was shown that VisuShrink is nearly optimal for removing i.i.d. Gaussian random noise. In fact, as N → ∞, the probability that the magnitude of a noise-dominated wavelet coefficient is less than τV approaches 1.
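The Energy Conservation property and the VisuShrink shrinkage just described are easy to state concretely. The following sketch illustrates both on pure noise; the names and the one-level Haar stand-in transform are ours for illustration, not part of any TAWS-Comp implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, N = 20.0, 512

# One-level orthonormal 2D Haar transform, used as a stand-in for the
# wavelet transforms in the text: H is an orthogonal matrix applied to
# the rows and columns of the image.
H = np.zeros((N, N))
for k in range(N // 2):
    H[k, 2 * k] = H[k, 2 * k + 1] = 2 ** -0.5       # lowpass rows
    H[N // 2 + k, 2 * k] = 2 ** -0.5                # highpass rows
    H[N // 2 + k, 2 * k + 1] = -(2 ** -0.5)

noise = rng.normal(0.0, sigma, (N, N))              # the noise n[i, j]
transformed = H @ noise @ H.T

# Energy Conservation: the transformed noise is still zero-mean Gaussian
# with empirical standard deviation close to sigma.
print(noise.std(), transformed.std())

def soft_shrink(t, tau):
    """VisuShrink shrinkage S(t) = sgn(t)[|t| - tau]+ (soft thresholding)."""
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

tau_V = sigma * np.sqrt(2.0 * np.log(N))            # universal threshold
denoised_coeffs = soft_shrink(transformed, tau_V)

# Almost every pure-noise coefficient falls below tau_V and is zeroed,
# matching the asymptotic near-certainty discussed above.
print(np.mean(denoised_coeffs == 0.0))
```

On pure noise the zeroed fraction is close to 1, which is exactly why VisuShrink denoisings are noise-free but, as discussed next, tend to be oversmoothed.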
It is thus asymptotically certain that shrinkage using τV will produce a noise-free image. Although a VisuShrink denoising is noise-free, it is also generally oversmoothed and appears out of focus. These defects stem from the VisuShrink threshold τV being too high to capture sufficient details in the image. To counteract this effect, TAWS-Comp uses a threshold τT which is significantly smaller than τV. To denoise the images in this paper, we used τT = √2 τV/8 ≈ τV/5.657, which has been found experimentally to yield consistently high PSNRs. Below the VisuShrink threshold, TAWS-Comp uses the following three principles for distinguishing image-dominated wavelet coefficients from noise-dominated coefficients:

TAWS-Comp Selection Principles
A. Only accept significant children with significant parents.

B. Reject a significant parent if all its children are insignificant.

C. Only accept significant coefficients with at least one adjacent significant coefficient.

These three principles are based, respectively, on the implications in (2), (3), and (4). The TAWS-Comp method combines these three Selection Principles with the ASWDR compression method. There are three parameters which are set at the start. One parameter is the descent index D, which is a non-negative integer. The TAWS-Comp threshold τT is then set at ατV/2^D, where the height index α satisfies α ≥ 1. For all of the TAWS-Comp denoisings reported on in this paper, the value of this second parameter α was set as √2. Note that if D = 0 and α = 1, then τT = τV and TAWS reduces to the VisuShrink method. The third parameter is a depth index D, which is an integer lying between 1 and L, where L is the number of levels in the wavelet transform. We shall clarify below the nature of the depth index D. TAWS-Comp combines TAWS with ASWDR by using the Selection Principles to exclude noise-dominated transform values during compression. It consists of the following five steps:

The TAWS-Comp Method

Step 1 (Initialization). Compute the wavelet transform {f̂[i, j]} of the discrete image. Define a scanning order for the transform. This is a one-to-one and onto mapping, f̂[i, j] = x[k], whereby the transform values are scanned through via a linear ordering k = 1, 2, . . . , M, where M is the number of pixels in the image. The initial scanning order is described in [6] and [7]. Choose an initial threshold, T0, such that at least one transform value, x[n] say, satisfies |x[n]| ≥ T0 and all transform values, x[k], satisfy |x[k]| < 2T0. Set T = T0.

Step 2 (Significance pass). Determine new significant index values—i.e., those new indices m for which x[m] has a magnitude greater than or equal to the present threshold T.
Assign a value q[m] = T sign(x[m]) as the quantized version of x[m]. Encode these new significant indices using the difference reduction method described in [17] and [6].

Step 3 (Refinement pass). Refine quantized transform values corresponding to old significant transform values. Each refined value is a better approximation to the exact transform value. The refinement pass successively computes bits in the binary expansions of scaled transform values {x[m]/(2T0)}, each pass outputting the next bit.

Step 4 (New Scan Order). Create a new scanning order as follows. As long as T ≥ τV, produce a new scanning order in the following way. At the highest-scale level (the one containing the all-lowpass subband), use the indices of the remaining insignificant values as the scan order at that level. Use the scan order at level j to create the new scan order at level j − 1 as follows. Run through the significant values at level j in the wavelet transform. Each significant value induces a set of four child values (as described in [11]). The first part of the scan order at level j − 1 contains the insignificant values lying among these child values. Run through the insignificant values at level j in the wavelet transform. The second part of the scan order at level j − 1 contains the insignificant values having at least one significant sibling among the child values of these insignificant values. Run through the insignificant
values at level j again. The third part of the scan order at level j − 1 contains the remaining insignificant values lying among the child values of these insignificant parent values (child values with no significant siblings). Use this new scanning order for level j − 1 to create the new scanning order for level j − 2, until all levels are exhausted.
When τT ≤ T < τV, proceed as follows. If the level j is larger than D, then use the method just described to produce the new scan order for level j − 1. For each level j from D + 1 to 2, produce the new scan order at level j − 1 as follows. Use the old scan order to scan through the significant wavelet coefficients in level j. If such a significant coefficient, xₘ, satisfies |xₘ| < τV and has no significant children, then set xₘ = 0 and qₘ = 0. (Thus invoking Selection Principle B.) On the other hand, if |xₘ| < τV and xₘ has some significant children, or if |xₘ| ≥ τV, then include in the new scan order all of the insignificant children of xₘ that have at least one significant sibling. (This implements Selection Principle A, and partially implements Selection Principle C, for these lower levels.)

Step 5 (Divide Threshold by 2). Replace the threshold T by 1/2 of its value and repeat Steps 2 to 5 until this new threshold T is less than τT.
When the procedure is finished, then decompression can be performed on the compressed data. This decompression consists of the following four steps:

1. Recapitulate Steps 1 to 5 above in order to obtain the quantized transform.

2. Set to zero all quantized transform values having magnitude less than τV which do not have a non-zero adjacent value (this implements Selection Principle C).

3. Apply shrinkage to the quantized transform using threshold τT.

4. Invert the quantized transform (and round to 8-bit precision).

Either integer-to-integer [18] or floating point wavelet transforms can be used with the TAWS-Comp procedure (the TAWS algorithm [3] requires a floating point transform). When an integer-to-integer transform is used, then a rescaling of the transform is performed during the Initialization Step which approximates an orthogonal transform [19]. For the images in Fig. 1 the Daub 5/3 integer-to-integer transform [18] was employed. The Daub 9/7 transform [20] was used for the denoisings in section 3. Although these transforms are not orthogonal, they are close enough to being orthogonal that employing them for energy-based, threshold denoising is still quite effective. Moreover, the symmetry of these wavelets reduces boundary artifacts in denoised images.
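Stripped of the scan-order bookkeeping, the role of the three Selection Principles can be sketched on a single parent/child subband pair. The function names, the flat two-array layout, and the toy numbers below are ours for illustration; this is a simplification, not the TAWS-Comp codec itself:

```python
import numpy as np

def prune_by_tree(parents, children, T, tau_V):
    """Sketch of Selection Principles A and B on one subband pair.
    A parent is significant when |value| >= T; a child when |value| >= T/2.
    Children of parents[i, j] occupy the block children[2i:2i+2, 2j:2j+2]."""
    parents, children = parents.copy(), children.copy()
    for i in range(parents.shape[0]):
        for j in range(parents.shape[1]):
            kids = children[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            kids_sig = np.abs(kids) >= T / 2
            # Principle B: reject a significant parent below tau_V whose
            # children are all insignificant.
            if T <= abs(parents[i, j]) < tau_V and not kids_sig.any():
                parents[i, j] = 0.0
            # Principle A: only accept significant children whose parent
            # is (still) significant.
            if abs(parents[i, j]) < T:
                kids[kids_sig] = 0.0
    return parents, children

def reject_isolated(coeffs, tau_V):
    """Principle C as used in decompression step 2: zero any non-zero value
    below tau_V that has no non-zero 4-neighbour."""
    out = coeffs.copy()
    nonzero = np.pad(coeffs != 0, 1)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            if 0 < abs(coeffs[i, j]) < tau_V:
                if not (nonzero[i, j + 1] or nonzero[i + 2, j + 1] or
                        nonzero[i + 1, j] or nonzero[i + 1, j + 2]):
                    out[i, j] = 0.0
    return out

parents = np.array([[100.0, 5.0], [40.0, 0.0]])
children = np.array([[20.0, 0.0, 16.0, 0.0],
                     [ 0.0, 0.0,  0.0, 0.0],
                     [10.0, 0.0,  0.0, 0.0],
                     [ 0.0, 0.0,  0.0, 0.0]])
p, c = prune_by_tree(parents, children, T=30.0, tau_V=70.0)
print(p)  # the parent at 40 has only insignificant children, so it is rejected
print(c)  # the child at 16 is dropped: its parent (5) is insignificant
```

In the actual method, Principle B is applied during Step 4 of the scan-order construction, while Principle C is completed at decompression by zeroing isolated sub-τV values, as in `reject_isolated` above.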
Separately controlling compressing and denoising

The TAWS-Comp algorithm allows the user to separately control both the degree of compression and the degree of denoising. First, we describe how to control the degree of compression. The description of TAWS-Comp given above makes it appear as if the only exit point of the compression procedure is when T < τT occurs in Step 5. However, TAWS-Comp also allows for checking the cumulative total of bits output throughout the compression process, and exiting may occur when this bit total exhausts a prescribed bit budget. Thus TAWS-Comp can match pre-assigned bit rates. A similar exiting criterion within the decompression process allows TAWS-Comp to
decompress at any bit rate up to the total compressed rate; hence TAWS-Comp enjoys scalability of decompression. Second, we describe how to control the degree of denoising. This can be done, as with most algorithms that utilize thresholding, by modifying the size of the threshold. One way to modify the threshold size in TAWS-Comp is to vary the descent index D. In Figures 3(c) and (d) we show the effect of using two different values of D. The denoising in Fig. 3(d) used a higher value of D (hence a lower threshold τT) than in Fig. 3(c). It is slightly sharper and better represents the texture of the water surface (at the cost of allowing more background noise). By adjusting other parameters, such as the depth index D, the height parameter α, or replacing τV by a differently chosen threshold, users can tune TAWS-Comp for their particular application.
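The interplay of the descent index D and the height index α is simple arithmetic; a small sketch (the function name is ours):

```python
import numpy as np

def taws_threshold(sigma, N, D, alpha=np.sqrt(2.0)):
    """tau_T = alpha * tau_V / 2**D, with tau_V the VisuShrink universal
    threshold from section 2; a larger descent index D lowers tau_T."""
    tau_V = sigma * np.sqrt(2.0 * np.log(N))
    return alpha * tau_V / 2 ** D

# D = 3 with alpha = sqrt(2) reproduces tau_T = sqrt(2) tau_V / 8 ~ tau_V / 5.657;
# raising D to 4 halves the threshold again (sharper image, more residual noise).
for D in (0, 3, 4):
    print(D, taws_threshold(sigma=20.0, N=512, D=D))
```

Each increment of D halves τT, which is why the D = 4 denoising in Fig. 3(d) retains more texture (and more noise) than the D = 3 denoising in Fig. 3(c).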
ROI Capability

TAWS-Comp encodes the precise location of significant transform coefficients. Consequently, it is able to ignore coefficients whose wavelet basis functions have supports disjoint from a given region. Therefore it is capable of ROI-enhancement. We saw with the example in Fig. 1 that this ROI capability is compatible with its denoising capability.
3 Experimental results and comparisons
We shall now compare the performance of TAWS-Comp with several state-of-the-art denoisers and compdenoisers. The performance of TAWS and TAWS-Comp will be compared with the denoisers SureShrink [21] and BayesShrink [4], and the compdenoisers BayesShrink+Compress [4] and EQ [5]. Software for TAWS-Comp can be downloaded from the website in [22]. For all the test images discussed in this section, TAWS and TAWS-Comp used a 4-level Daub 9/7 transform and a threshold of τT = √2 τV/8. The depth index D was varied, depending on the size of σ. For relatively small values of σ, satisfying σ ≤ 15, we used D = 1. For larger values of σ, satisfying σ > 15, we used D = 2. Reducing the value of D for small values of σ was needed because, for lower thresholds, the implications in (2) to (4) are less likely to hold for higher scale subbands. In Table 1 we demonstrate both the compressing and denoising capability of TAWS-Comp by comparing it with ASWDR compressions of SureShrink denoisings at various bit-rates. As an objective measure of denoising we used PSNR. The PSNRs reported in Table 1 show that TAWS-Comp performs nearly as well as SureShrink followed by compression. Generally the differences in PSNR between the two methods are insignificantly small. In Fig. 3(c) we show a TAWS-Comp compdenoising (at a 40:1 rate) of a noisy boat image, and show a 40:1 compression of a SureShrink denoising in Fig. 3(f). In this case, the TAWS-Comp compdenoising has a slightly higher PSNR value and shows far less background noise than the compressed SureShrink denoising. In Table 2 we summarize the performance of TAWS and TAWS-Comp in comparison with the denoiser BayesShrink and the compdenoiser BayesShrink+Compress (BayesShr+Comp for short). Each PSNR value is an average over 5 different noisy images. Although this may seem to be a small number for averaging, it was found that all PSNRs differed by no more than ±0.1 dB within every set of 5 images.
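PSNR here is the usual measure for 8-bit images; as a reminder of the computation (the helper name and the constant-error example are ours):

```python
import numpy as np

def psnr(f, g, peak=255.0):
    """PSNR in dB for 8-bit images: 10 log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(f, float) - np.asarray(g, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 10 grey levels gives 10 log10(255^2 / 100) ~ 28.1 dB,
# consistent with the PSNR the tables report for sigma = 10 noisy images.
f = np.full((8, 8), 100.0)
print(psnr(f, f + 10.0))
```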
The PSNRs for BayesShrink and BayesShr+Comp are taken from [4]. They were also obtained by averaging over 5 different noisy images.
The BayesShr+Comp algorithm performs significantly worse than BayesShrink. This was noted in [4] and was attributed to the negative effects of quantization noise from compression. TAWS-Comp, however, performs nearly as well as TAWS alone. For most images, TAWS-Comp produces a denoised and compressed image with significantly higher PSNR than BayesShr+Comp. The compressed bit rates for TAWS-Comp are similar to those produced by BayesShr+Comp. TAWS-Comp was programmed to continue compressing until the rate of new significant coefficients dropped to less than 5%. This produced the bpp values reported in Table 2 for TAWS-Comp. For the first three images in Table 2, there were significantly higher bit rates for TAWS-Comp than for BayesShr+Comp. In the last column of Table 2 we report the PSNRs for TAWS-Comp compdenoised images at the same bit rates as for BayesShr+Comp. For those first three images, the decreases in PSNRs for TAWS-Comp were fairly small, and yet TAWS-Comp still produced higher PSNRs than BayesShr+Comp. In fact, for most images, TAWS-Comp produced higher PSNRs than BayesShr+Comp at the same bit rates. Both TAWS and TAWS-Comp registered poorer performance with one image, the Baboon image. That image is not well described via a piecewise smooth image model; in particular, there are large amounts of fur on the baboon which both TAWS and TAWS-Comp mistakenly treat as noise. Nevertheless, it should be noted that for the Baboon image all methods produced relatively low PSNRs. In Fig. 4 we compare denoisings and compdenoisings of a noisy version of the Goldhill test image. (All of these images can be downloaded from the website given in [23].) Figure 4(b) is a noisy version, with σ = 15, of the Goldhill image in Fig. 4(a). BayesShrink and TAWS denoisings of this noisy image are shown in Figures 4(c) and (d), respectively. These two denoisings are very similar, both in PSNR and visual quality (although the TAWS image is just a bit sharper).
However, the BayesShr+Comp compdenoising, in Fig. 4(e), has suffered a significant decrease in PSNR and in visual quality compared to these two denoisings. In contrast, the TAWS-Comp compdenoising in Fig. 4(f) is nearly equal, both in PSNR and visual quality, to the TAWS denoising in Fig. 4(d). Comparing the two compdenoisings, we notice that the TAWS-Comp image has a less noisy appearance in the sky, has better preserved edges in several of the windows, has more texture preserved in the rock pile in front of the left-side door, and has a generally sharper appearance overall, than the BayesShr+Comp image. In [5], a compdenoiser called the EQ algorithm was described. It combines compression with SureShrink denoising. In Table 3 we compare the performance of TAWS and TAWS-Comp with SureShrink and EQ. The data for SureShrink and EQ were supplied in [5]. We can see in Table 3 results that are similar to those we saw in Table 2. Namely, TAWS-Comp is not adversely affected by the combining of compressing with denoising. Its PSNRs are only slightly smaller than those of TAWS alone. On the other hand, the performance of EQ's combination of compression with SureShrink denoising is significantly lower than the performance of SureShrink denoising alone. PSNRs for EQ are significantly smaller than those for SureShrink. Consequently, TAWS-Comp has higher PSNRs—often at least 1.0 dB higher—than those for EQ.
Conclusion

We have described the TAWS-Comp method of simultaneously compressing and denoising images. Conditions where combining these two operations would be desirable were indicated, and an
application using TAWS-Comp was described. In most cases, TAWS-Comp compares favorably with state-of-the-art compdenoisers, often producing compressed images of nearly equal quality to those produced by state-of-the-art denoisers alone.
References

[1] J.K. Wolf and J. Ziv, Transmission of noisy information to a noisy receiver with minimum distortion. IEEE Trans. on Info. Theory, Vol. 16, pp. 406–411, July 1970.
[2] O.K. Al-Shaykh and R.M. Mersereau, Lossy compression of noisy images. IEEE Trans. on Image Proc., Vol. 7, pp. 1641–1652, Dec. 1998.
[3] J.S. Walker and Y.-J. Chen, Image denoising using tree-based wavelet subband correlations and shrinkage. Opt. Eng., Vol. 39, pp. 2900–2908, Nov. 2000.
[4] S.G. Chang, B. Yu, and M. Vetterli, Adaptive wavelet thresholding for image denoising and compression. IEEE Trans. on Image Proc., Vol. 9, pp. 1532–1546, Sept. 2000.
[5] J. Liu and P. Moulin, Complexity-regularized image denoising. IEEE Trans. on Image Proc., Vol. 10, pp. 841–851, June 2001.
[6] J.S. Walker, Lossy image codec based on adaptively scanned wavelet difference reduction. Opt. Eng., Vol. 39, pp. 1891–1897, July 2000.
[7] J.S. Walker and T.Q. Nguyen, Adaptive scanning methods for wavelet difference reduction in lossy image compression. IEEE Int'l Conf. on Image Proc. 2000, Vol. 3, pp. 182–185, Sept. 2000.
[8] S. Mallat, A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. on Pattern Anal. Mach. Intell., Vol. 11, pp. 674–693, July 1989.
[9] S. Mallat, A Wavelet Tour of Signal Processing. Academic Press, New York, 1998.
[10] J.M. Shapiro, Embedded image coding using zerotrees of wavelet coefficients. IEEE Trans. on Signal Proc., Vol. 41, pp. 3445–3462, Dec. 1993.
[11] A. Said and W.A. Pearlman, A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. on Circuits and Systems for Video Technology, Vol. 6, pp. 243–250, June 1996.
[12] Y. Wang, Jump and sharp cusp detection by wavelets. Biometrika, Vol. 82, pp. 385–397, June 1995.
[13] J. Liu and P. Moulin, Information-theoretic analysis of interscale and intrascale dependencies between image wavelet coefficients. IEEE Trans. on Image Proc., Vol. 10, pp. 1647–1658, Nov. 2001.
[14] R.W. Buccigrossi and E.P. Simoncelli, Image compression via joint statistical characterization in the wavelet domain. IEEE Trans. on Image Proc., Vol. 8, pp. 1688–1701, Dec. 1999.
[15] J. Huang and D. Mumford, Statistics of natural images and models. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 541–547, 1999.
[16] D. Donoho and I. Johnstone, Ideal spatial adaptation via wavelet shrinkage. Biometrika, Vol. 81, pp. 425–455, Dec. 1994.
[17] J. Tian and R.O. Wells, Jr., Embedded image coding using wavelet-difference-reduction. In Wavelet Image and Video Compression, P. Topiwala, ed., pp. 289–301, Kluwer Academic Publ., Norwell, MA, 1998.
[18] A.R. Calderbank, I. Daubechies, W. Sweldens, and B.-L. Yeo, Wavelet transforms that map integers to integers. Applied and Computational Harmonic Analysis, Vol. 5, pp. 332–369, March 1998.
[19] A. Said and W.A. Pearlman, An image multi-resolution representation for lossless and lossy image compression. IEEE Trans. on Image Proc., Vol. 5, pp. 1303–1310, Sept. 1996.
[20] A. Cohen, I. Daubechies, and J.-C. Feauveau, Biorthogonal bases of compactly supported wavelets. Commun. on Pure and Appl. Math., Vol. 45, pp. 485–560, Mar. 1992.
[21] D. Donoho and I. Johnstone, Adapting to unknown smoothness via wavelet shrinkage. J. American Statistical Assoc., Vol. 90, pp. 1200–1224, Dec. 1995.
[22] Software CompDen: www.uwec.edu/academic/curric/walkerjs/
[23] www.uwec.edu/academic/curric/walkerjs/CDNImages/
Figure and Table Captions

Table 1 PSNRs for SureShrink denoisings (SuSh) which have been compressed at 0.5, 0.25, and 0.125 bpp, and TAWS-Comp compdenoisings (TS-Cp) at the same bitrates.

Table 2 PSNRs for the denoisers, BayesShrink [4] and TAWS [3], and the compdenoisers, TAWS-Comp and BayesShr+Comp [4]. The last column contains PSNRs for TAWS-Comp at the same bpp values as those for BayesShr+Comp.

Table 3 PSNRs for the denoisers, SureShrink [21] and TAWS [3], and the compdenoisers, TAWS-Comp and EQ [5].

Figure 1 Illustration of ROI capability of TAWS-Comp.

Figure 2 Relation between parent and child coefficients (1st and 2nd vertically oriented subbands). (a) Boats image. (b) Locations (white pixels) of significant parent values, threshold = 20. (c) Locations (white pixels) of significant child values, half-threshold = 10.

Figure 3 Denoisings and compressions of noisy boats image, with PSNRs.

Figure 4 Denoisings and compdenoisings of a noisy Goldhill image, with PSNRs.
Images      SuSh, 0.5 bpp   TS-Cp   SuSh, 0.25 bpp   TS-Cp   SuSh, 0.125 bpp   TS-Cp

σ = 10, PSNR = 28.1
Goldhill        30.5        30.4        29.4         29.0        27.8          27.5
Lena            32.8        32.3        32.1         31.4        29.9          29.6
Barbara         28.8        29.1        26.3         26.5        24.3          23.9
Baboon          24.7        24.1        22.8         22.3        21.4          21.2

σ = 20, PSNR = 22.1
Goldhill        28.4        28.3        27.9         27.7        27.2          26.5
Lena            29.9        30.2        29.7         29.7        28.9          28.1
Barbara         26.4        26.2        25.1         25.3        23.9          23.6
Baboon          23.5        23.1        22.3         22.0        21.2          20.9

σ = 30, PSNR = 18.6
Goldhill        27.0        26.7        26.8         26.7        26.4          26.3
Lena            28.2        28.0        28.1         28.0        27.7          27.6
Barbara         24.7        23.6        24.1         23.6        23.4          23.2
Baboon          22.3        21.4        21.7         21.3        20.9          20.7

Table 1: PSNRs for SureShrink denoisings (SuSh) which have been compressed at 0.5, 0.25, and 0.125 bpp, and TAWS-Comp compdenoisings (TS-Cp) at the same bitrates.
Images      BayesShr   TAWS   TAWS-Comp | bpp   BayesShr+Comp | bpp   TAWS-Comp (matched bpp)

σ = 10, PSNR = 28.1
Goldhill      31.9     31.6     31.4 | 1.18        30.4 | 1.06             31.2
Lena          33.4     33.2     33.2 | 1.01        32.0 | 0.75             32.9
Barbara       31.0     31.1     30.8 | 1.42        29.0 | 1.21             30.6
Baboon        29.1     28.3     25.7 | 1.33        27.0 | 1.34             25.7

σ = 20, PSNR = 22.1
Goldhill      28.7     28.5     28.3 | 0.44        27.6 | 0.45             28.3
Lena          30.2     30.1     30.1 | 0.41        29.0 | 0.37             29.9
Barbara       27.3     26.8     26.4 | 0.59        25.8 | 0.85             26.4
Baboon        25.6     24.2     23.9 | 0.91        24.3 | 0.85             23.8

σ = 30, PSNR = 18.6
Goldhill      27.1     26.7     26.7 | 0.24        26.2 | 0.27             26.7
Lena          28.5     28.0     28.0 | 0.25        27.4 | 0.23             27.9
Barbara       25.3     24.7     23.6 | 0.29        24.1 | 0.62             23.6
Baboon        23.8     22.5     21.4 | 0.38        22.9 | 0.58             21.4

Table 2: PSNRs for the denoisers, BayesShrink [4] and TAWS [3], and the compdenoisers, TAWS-Comp and BayesShr+Comp [4]. The last column contains PSNRs for TAWS-Comp at the same bpp values as those for BayesShr+Comp.
Images     SureShrink   TAWS   TAWS-Comp    EQ

σ = 5, PSNR = 34.1
Lena          37.1      36.9      36.6     36.0
Barbara       35.7      35.1      35.1     34.1

σ = 7, PSNR = 31.2
Lena          35.1      35.1      35.1     34.1
Barbara       33.3      33.2      32.1     32.0

σ = 10, PSNR = 28.1
Lena          33.0      33.2      33.2     32.8
Barbara       30.8      31.1      30.8     29.2

Table 3: PSNRs for the denoisers, SureShrink [21] and TAWS [3], and the compdenoisers, TAWS-Comp and EQ [5].
Figure 1: Illustration of ROI capability of TAWS-Comp. (a) Source image (ROI inside rectangle). (b) 100:1 compression. (c) Enhanced ROI without denoising. (d) Enhanced ROI with denoising.
Figure 2: Relation between parent and child coefficients (1st and 2nd vertically oriented subbands). (a) Boats image. (b) Locations (white pixels) of significant parent values, threshold T = 20. (c) Locations (white pixels) of significant child values, half-threshold T/2 = 10.
Figure 3: Denoisings and compressions of noisy boats image, with PSNRs. (a) Boats image. (b) Noisy version (σ = 20), 22.2 dB. (c) TAWS-Comp (40:1, D = 3), 28.3 dB. (d) TAWS-Comp (40:1, D = 4), 28.2 dB. (e) SureShrink, 28.8 dB. (f) 40:1 compression of (e), 28.2 dB.
Figure 4: Denoisings and compdenoisings of a noisy Goldhill image, with PSNRs. (a) Original. (b) Noisy version (σ = 15), 24.6 dB. (c) BayesShrink, 29.9 dB. (d) TAWS, 29.6 dB. (e) BayesShr+Comp, 28.7 dB. (f) TAWS-Comp, 29.4 dB.