ENHANCING ROBUSTNESS OF DIGITAL IMAGE WATERMARKS USING CONTOURLET TRANSFORM

Shereen Ghannam and Fatma E. Z. Abou-Chadi
Dept. of Electronics and Communications Engineering, Mansoura University, Egypt

ABSTRACT
In this paper a simple and robust Contourlet-based image watermarking technique for copyright protection is presented. The main idea is to utilize the flexible representation of images provided by the Contourlet transform in designing a robust watermarking technique. The watermark is embedded into different bands to resist different kinds of signal processing attacks without causing visual artifacts in the host image. The performance of the technique is evaluated, and the experimental results and comparisons with other Contourlet-based techniques demonstrate the effectiveness of the proposed technique.
Index Terms - Robust, Contourlet, copyright protection.
1. INTRODUCTION
In this digital era, the ubiquitous network environment has promoted the rapid delivery of digital multimedia data. Users have become able to share various media in a rather cheap way, often without awareness that copyrights may be violated. Digital watermarking has been widely adopted as a possible solution for the protection of intellectual property rights [1]-[3], and several spatial-domain and transform-domain watermarking algorithms have been proposed. Although most work in the field of watermarking focuses on the multiresolution analysis provided by the Wavelet Transform, the Contourlet Transform (CLT) defined by Do et al. [4] has begun to gain interest for its capability of capturing directional information such as smooth contours and directional edges [5,6]. This interest extends to different application areas, including watermarking. In this paper, a CLT-based watermarking technique is proposed. It relies on embedding watermark bits into appropriate locations in both low and high frequency subbands resulting from the Contourlet Transform while preserving good fidelity, unlike the technique proposed by Jayalakshmi et al. [5], which embeds the watermark only into the high frequency subbands of the last decomposition level to avoid visual distortions. This paper is organized as follows: a brief overview of the CLT is presented in Section 2. The proposed watermarking technique is described in Section 3.
The performance evaluation criteria are described in Section 4 and the experimental results are reported in Section 5. Finally, concluding remarks are given in Section 6.
2. CONTOURLET TRANSFORM
Although wavelets are good at representing discontinuities in one dimension, i.e. point singularities, they fail to represent singularities in higher dimensions. The Discrete Contourlet Transform is a relatively new transform defined in discrete form by Do et al. [4] to capture edge information in all directions. The main feature of this transform is its ability to handle 2-D singularities, i.e. edges, efficiently. This difference stems from two main properties that the CLT possesses: 1) directionality, i.e. having basis functions at many orientations, as opposed to only three orientations for wavelets; 2) anisotropy, meaning that the basis functions appear at various aspect ratios (depending on the scale), whereas wavelets are separable functions and their aspect ratio is therefore equal to 1. The CLT thus provides a flexible multiresolution representation for images. The main advantage of the CLT over other geometrically-driven representations, e.g. curvelets [7] and bandelets [8], is its relatively simple and efficient wavelet-like implementation using iterative filter banks. Owing to its structural resemblance to the wavelet transform, many image processing tasks applied to wavelets can be seamlessly adapted to contourlets. The CLT is constructed from two filter-bank stages, a Laplacian Pyramid (LP) [9] followed by a Directional Filter Bank (DFB) [10], as shown in Fig.1. The LP decomposes the image into octave radial-like frequency bands to capture point discontinuities, while the DFB decomposes each LP detail band into a number of directions (a power of 2) to link these point discontinuities into linear structures. Figure 2 shows a one-level LP decomposition. The LP has the distinguishing feature that each pyramid level generates only one bandpass image (even in the multidimensional case) which does not have "scrambled" frequencies. It produces a downsampled lowpass version of the original and the difference between the original and a prediction obtained from it, resulting in a bandpass image (the prediction error).
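As an illustration of the LP stage just described, the following is a minimal NumPy/SciPy sketch of one pyramid level. The Gaussian filter and the function name are illustrative assumptions; the actual CLT uses dedicated pyramid filters, so this only shows the structure (lowpass and downsample, predict, take the prediction error).

import numpy as np
from scipy import ndimage

def lp_decompose_one_level(img, sigma=1.0):
    """Split an image into a downsampled lowpass band and a bandpass residual."""
    # 1) Lowpass filter and downsample by 2 in each dimension.
    low = ndimage.gaussian_filter(img, sigma)[::2, ::2]
    # 2) Upsample the coarse band (zero insertion) and filter to predict the original.
    up = np.zeros_like(img)
    up[::2, ::2] = low
    prediction = ndimage.gaussian_filter(up, sigma) * 4.0   # roughly compensate zero insertion
    # 3) The bandpass image is the prediction error, which captures point discontinuities.
    bandpass = img - prediction
    return low, bandpass

if __name__ == "__main__":
    x = np.random.rand(512, 512)
    low, band = lp_decompose_one_level(x)
    print(low.shape, band.shape)   # (256, 256) (512, 512)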
Fig.1. Contourlet Filter Bank

Fig.2. One-level LP Decomposition

The directional filter bank is a critically sampled filter bank that can decompose an image into any power-of-two number of directions. The DFB is efficiently implemented via an l-level tree-structured decomposition that leads to 2^l subbands with wedge-shaped frequency partitions, as shown in Fig.3.

Fig.3. DFB Frequency Partitioning (l=3)

3. WATERMARKING TECHNIQUE
The proposed technique relies on embedding a binary logo into both low and high frequency bands resulting from the multiresolution analysis provided by the Contourlet Transform. The watermark is embedded into visually insensitive locations in the lowest frequency band and into their cousins and children in the other bands and levels.

3.1 Watermark Embedding Procedure
Step 1: Permute the watermark bits of size m×n randomly using a secret seed, in order to disperse the spatial relationship and to increase invisibility based on the characteristics of images, then map the values from {0,1} to {-1,1}.
Step 2: Apply the Contourlet Transform with two levels to the host image, with eight directional subbands in the first level and four directional subbands in the second level.
Step 3: In order to select the most appropriate embedding locations, which ensure good fidelity, perform the selection procedure proposed by Joo et al. [11]: the low frequency subband is further decomposed using the critically sampled 2-D Wavelet decomposition, the three high frequency subbands are set to zero, and the Inverse Wavelet Transform is applied to the result to produce a reference subband. The selected coefficients are those at the first m×n locations after sorting the absolute differences between the values of the low frequency subband and the reference subband in ascending order.
Step 4: Modify the selected lowpass coefficients as follows:

f'(i,j) = f(i,j) + α1·w(i,j)        (1)

where f(i,j) is the selected coefficient at location (i,j) in the lowpass image.
Step 5: In order to reach higher robustness against attacks, the cousins and children of the selected coefficients are also modified. The cousins in the second level are modified as follows:

f'k(i,j) = fk(i,j) + α2·w(i,j)        (2)

where fk(i,j) is the cousin coefficient at location (i,j) in directional subband k of the second level, with k ∈ {1,2,3,4}. The children in the first level are modified as follows:

f'k(p,q) = fk(p,q) + α3·w(i,j)        (3)

where fk(p,q) is the child coefficient in directional subband k of the first level, with k ∈ {1,2,3,4,5,6,7,8}, and p = i, q = 2j-1:2j or p = 2i-1:2i, q = j according to the direction of the subband. Note that the scaling factor α1 is larger than α2 and α3. These scaling factors are adjusted to obtain an acceptable quality.
Step 6: Apply the Inverse Contourlet Transform to the result to obtain the watermarked image.

3.2 Watermark Extraction Procedure
Step 1: Apply the Contourlet Transform, with the same number of levels and directional subbands as in the embedding procedure, to the original and the watermarked image.
Step 2: Extract three versions of the permuted watermark from the known locations in the low frequency image, their cousins and children:
w1(i,j) = (f'(i,j) - f(i,j)) / α1        (4)

w2(i,j) = (1/4) Σk=1..4 (f'k(i,j) - fk(i,j)) / α2        (5)

w3(i,j) = (1/8) Σk=1..8 (f'k(p,q) - fk(p,q)) / α3        (6)

Step 3: Reverse-permute the extracted values using the secret seed and map them back to {0,1}.
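To make the low-frequency part of the procedure concrete, the following is a minimal NumPy/PyWavelets sketch of Steps 1, 3 and 4 of the embedding and of Eq. (4) followed by the reverse permutation of the extraction. The contourlet lowpass subband is assumed to be available as a 2-D array (no contourlet implementation is shown), and alpha1, the seed and the helper names are illustrative assumptions rather than the authors' exact choices.

import numpy as np
import pywt

def select_locations(lowpass, num):
    # Joo-style selection [11]: decompose the lowpass band with a 2-D DWT,
    # zero the detail subbands and invert to obtain a "reference" subband.
    cA, (cH, cV, cD) = pywt.dwt2(lowpass, 'haar')
    zeros = np.zeros_like(cH)
    reference = pywt.idwt2((cA, (zeros, zeros, zeros)), 'haar')
    reference = reference[:lowpass.shape[0], :lowpass.shape[1]]
    # Coefficients closest to the reference are treated as visually insensitive.
    diff = np.abs(lowpass - reference)
    flat = np.argsort(diff, axis=None)[:num]      # first num locations, ascending order
    return np.unravel_index(flat, lowpass.shape)

def embed_lowpass(lowpass, logo_bits, alpha1=2.0, seed=1234):
    # Step 1: permute the {0,1} logo with a secret seed and map it to {-1,+1}.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(logo_bits.size)
    w = 2.0 * logo_bits.flatten()[perm] - 1.0
    # Steps 3-4: select locations and modify them additively (Eq. (1)).
    rows, cols = select_locations(lowpass, w.size)
    marked = lowpass.copy()
    marked[rows, cols] += alpha1 * w
    return marked, (rows, cols), perm

def extract_lowpass(marked, original, locations, perm, alpha1=2.0):
    # Eq. (4), followed by the reverse permutation of the extraction procedure.
    rows, cols = locations
    w = (marked[rows, cols] - original[rows, cols]) / alpha1
    bits = (w > 0).astype(np.uint8)               # map {-1,+1} back to {0,1}
    recovered = np.empty_like(bits)
    recovered[perm] = bits
    return recovered

The cousin and child subbands of Eqs. (2)-(3) and (5)-(6) would be handled in the same additive way, with their own scaling factors.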
4. PERFORMANCE EVALUATION
To investigate the performance of the technique, it is essential to evaluate, subjectively or objectively, the quality of the image after the embedding process, and also to evaluate the robustness of the embedded watermark. Although the PSNR is a simple and widely used fidelity measure, its correlation with human judgement of quality is not tight enough, so two further metrics are used: the weighted Peak Signal-to-Noise Ratio (wPSNR) [12] and the Mean Structural Similarity (MSSIM) index [13]. To evaluate the robustness of the embedded watermark, the distortion in the extracted logos after applying different signal processing operations is measured. One of the most popular difference distortion measures is the Normalized Mean Squared Error (NMSE):

NMSE = Σi,j [y(i,j) - x(i,j)]² / Σi,j [x(i,j)]²        (7)

where x denotes the original watermark and y the extracted one.
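A small NumPy sketch of Eq. (7), together with the plain PSNR for reference, is given below; wPSNR and MSSIM follow their definitions in [12] and [13] and are not reproduced here.

import numpy as np

def nmse(x, y):
    """Normalized mean squared error between the original logo x and the extracted logo y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum((y - x) ** 2) / np.sum(x ** 2)

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((np.asarray(original, float) - np.asarray(distorted, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)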
5. EXPERIMENTAL RESULTS
A set of six 8-bit grayscale images of size 512×512 pixels (Lena, Baboon, Peppers, Goldhill, Boat and F-16), shown in Fig.4, was used to test the watermarking technique, and the embedded watermark, shown in Fig.5, was chosen to be a binary logo of size 32×32 pixels.

Fig.4. Image Database (size of 512×512 pixels)

Fig.5. Embedded Watermark

5.1 Assessing Perceptual Quality
The PSNR, wPSNR and MSSIM values of the watermarked images are recorded in Table 1.

Table 1. Perceptual quality metrics of watermarked images
              Lena      Baboon    Peppers   Goldhill  Boat      F-16
PSNR (dB)     41.2612   41.2653   41.2780   41.2590   41.2425   41.2522
wPSNR (dB)    42.1866   45.2170   42.3675   42.8319   42.1723   41.8144
MSSIM         0.9652    0.9868    0.9667    0.9769    0.9622    0.9575

The PSNR values of all the test images are adjusted to about 41 dB by choosing suitable values of the scaling factors.
5.2 Assessing Robustness
Common signal processing attacks are applied to the watermarked images to measure the robustness of the technique. The NMSE values for the logos extracted from the low frequency (Lf) and high frequency (Hf) components are listed in Table 2.

Table 2. NMSE values of the extracted logos from the low frequency portion (Lf) and the high frequency portion (Hf) after applying different attacks to the watermarked images
                                            Lena     Baboon   Peppers  Goldhill  Boat     F-16
JPEG (20%)                            Lf    0.0079   0.0475   0.0285   0.0666    0.0697   0.0761
                                      Hf    0.6086   0.4834   0.6197   0.5182    0.5816   0.6070
Sharpening                            Lf    0.0174   0.0285   0.0048   0.0174    0.0254   0.0063
                                      Hf    0.0000   0.0444   0.0000   0.0000    0.0000   0.0000
Blurring                              Lf    0.0000   0.0000   0.0000   0.0000    0.0000   0.0000
                                      Hf    0.0951   0.3661   0.0666   0.2377    0.1506   0.0792
3×3 Median Filtering                  Lf    0.0032   0.0571   0.0032   0.0016    0.0048   0.0016
                                      Hf    0.0745   0.3740   0.0840   0.2060    0.1220   0.0539
3×3 Wiener Filtering                  Lf    0.0000   0.0000   0.0000   0.0000    0.0000   0.0000
                                      Hf    0.0507   0.2916   0.0254   0.1616    0.0903   0.0380
Intensity Adjustment ([0 0.8],[0 1])  Lf    0.6101   0.6228   0.6228   0.6228    0.5848   0.6165
                                      Hf    0.0048   0.0000   0.0016   0.0190    0.0000   0.1426
Gamma Correction (1.5)                Lf    1.0000   1.0000   1.0000   1.0000    1.0000   1.0000
                                      Hf    0.0000   0.0000   0.0000   0.0000    0.0000   0.0000
Histogram Equalization                Lf    0.6165   0.5895   0.4390   0.6086    0.6624   0.6086
                                      Hf    0.0000   0.0190   0.0000   0.0032    0.0000   0.0000
Cropping (10%)                        Lf    0.1284   0.0998   0.0713   0.0745    0.1252   0.0824
                                      Hf    0.0586   0.0792   0.0507   0.0428    0.0618   0.0460
Resizing (0.5)                        Lf    0.0000   0.0000   0.0000   0.0000    0.0000   0.0000
                                      Hf    0.0951   0.3518   0.0729   0.2235    0.1616   0.0713
JPEG2000 (0.4 bpp)                    Lf    0.0143   0.3407   0.0143   0.0460    0.2187   0.0301
                                      Hf    0.6165   0.7544   0.5515   0.5261    0.6926   0.6355
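For reference, a few of the attacks in Table 2 can be reproduced with common Python tools as sketched below (Pillow for JPEG re-encoding, SciPy for median filtering). The exact attack implementations used in the experiments may differ slightly, so this is only an illustrative sketch.

import io
import numpy as np
from PIL import Image
from scipy import ndimage

def jpeg_attack(img_uint8, quality=20):
    # Re-encode the watermarked image as JPEG at the given quality factor.
    buf = io.BytesIO()
    Image.fromarray(img_uint8).save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

def median_attack(img_uint8, size=3):
    # 3x3 median filtering.
    return ndimage.median_filter(img_uint8, size=size)

def gamma_attack(img_uint8, gamma=1.5):
    # Gamma correction with exponent 1.5.
    x = img_uint8 / 255.0
    return np.uint8(np.clip(255.0 * x ** gamma, 0, 255))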
After manipulating the watermarked images with different types of noise, the NMSE values of the extracted logos are measured and recorded in Table 3.

Table 3. NMSE values of the extracted logos from the low frequency portion (Lf) and the high frequency portion (Hf) after adding different types of noise to the watermarked images
                              Lena     Baboon   Peppers  Goldhill  Boat     F-16
AWGN (μ=0, σ²=0.005)    Lf    0.1632   0.1379   0.1854   0.1632    0.1743   0.1601
                        Hf    0.2171   0.2345   0.2013   0.2425    0.2504   0.2298
Salt & Pepper (0.01)    Lf    0.1046   0.0903   0.0872   0.0903    0.1268   0.1189
                        Hf    0.0840   0.0967   0.1157   0.1125    0.1014   0.1252
Speckle (0.01)          Lf    0.0586   0.0792   0.0460   0.0571    0.1141   0.1981
                        Hf    0.1189   0.1173   0.0681   0.0697    0.1585   0.2488
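The three noise models of Table 3 can be sketched as follows for images scaled to [0, 1]; the parameterisation mirrors the usual MATLAB imnoise conventions, which is an assumption about how the noisy images were generated.

import numpy as np

def awgn(img, mean=0.0, var=0.005, rng=np.random.default_rng()):
    """Additive white Gaussian noise with the given mean and variance."""
    return np.clip(img + rng.normal(mean, np.sqrt(var), img.shape), 0.0, 1.0)

def salt_and_pepper(img, density=0.01, rng=np.random.default_rng()):
    """Set a fraction `density` of the pixels to 0 or 1 with equal probability."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2] = 0.0
    out[(mask >= density / 2) & (mask < density)] = 1.0
    return out

def speckle(img, var=0.01, rng=np.random.default_rng()):
    """Multiplicative (speckle) noise: img + img * n, with n zero-mean of variance `var`."""
    n = rng.uniform(-np.sqrt(3 * var), np.sqrt(3 * var), img.shape)  # uniform with variance var
    return np.clip(img + img * n, 0.0, 1.0)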
The NMSE values listed in the above tables confirm that the proposed technique achieves a fairly high level of robustness against common signal processing attacks. Although the Contourlet-based watermarking technique proposed by Jayalakshmi et al. [5] shows satisfactory performance against attacks such as compression and noise addition, it performs poorly against attacks such as intensity adjustment and histogram equalization. Figure 6 compares the performance of that technique and the proposed one after applying the cropping attack with different percentages to the watermarked images; the proposed technique performs better for all percentages of cropping.
Fig.6. NMSE values against percentage of discarded area
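A rough sketch of the cropping attack behind Fig. 6 is shown below: a given percentage of the image area is discarded, here by zeroing the border around a kept centre region so that the image size, and hence the extraction, is unchanged. The exact cropping geometry used by the authors is an assumption.

import numpy as np

def crop_attack(img, discard_percent):
    """Zero out a border region whose area equals `discard_percent` of the image."""
    out = img.copy()
    h, w = img.shape
    keep = np.sqrt(1.0 - discard_percent / 100.0)   # side fraction of the kept centre square
    kh, kw = int(round(h * keep)), int(round(w * keep))
    top, left = (h - kh) // 2, (w - kw) // 2
    mask = np.zeros_like(out, dtype=bool)
    mask[top:top + kh, left:left + kw] = True
    out[~mask] = 0
    return out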
Besides its superiority in resisting attacks, the proposed technique does not have the restriction of the Contourlet-based technique proposed by Song et al. [6], which relies on a visual mask limited to only four directional subbands in each level.
6. CONCLUSION
This paper has presented a simple technique for embedding a robust digital watermark into frequency bands resulting from the Contourlet Transform (CLT). Experimental results confirm the robustness of the proposed technique. The watermark embedded in the low frequency portion of the image survives one group of attacks, such as JPEG compression, JPEG2000 compression, low pass filtering and resizing, while the watermark embedded in the high frequency portion survives another group of attacks, such as intensity adjustment, gamma correction, histogram equalization and cropping.
7. REFERENCES
[1] F. Hartung and M. Kutter, "Multimedia Watermarking Techniques", Proceedings of the IEEE, Special Issue on Identification and Protection of Multimedia Information, vol. 87, pp. 1079-1107, July 1999.
[2] V. M. Potdar, S. Han, and E. Chang, "A Survey of Digital Image Watermarking Techniques", 3rd International Conference on Industrial Informatics, 2005.
[3] C. S. Lu, Multimedia Security: Steganography and Digital Watermarking for Protection of Intellectual Property, Idea Group Publishing, 2005.
[4] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Trans. on Image Processing, vol. 14, no. 12, pp. 2091-2106, December 2005.
[5] M. Jayalakshmi, S. N. Merchant and U. B. Desai, "Digital Watermarking in Contourlet Domain," Proc. 18th International Conference on Pattern Recognition (ICPR 2006), vol. 3, pp. 861-864, 2006.
[6] H. Song, S. Yu, X. Yang, L. Song and C. Wang, "Contourlet-based image adaptive watermarking", Signal Processing: Image Communication, vol. 23, no. 3, pp. 162-178, March 2008.
[7] E. J. Candès and D. Donoho, "New tight frames of curvelets and optimal representations of objects with smooth singularities," technical report, 2002.
[8] E. Le Pennec and S. Mallat, "Sparse geometric image representation with bandelets," IEEE Trans. on Image Processing, vol. 14, pp. 423-438, April 2005.
[9] M. N. Do and M. Vetterli, "Framing pyramids," IEEE Trans. on Signal Processing, vol. 51, pp. 2329-2342, September 2003.
[10] R. H. Bamberger and M. J. T. Smith, "A filter bank for the directional decomposition of images: theory and design," IEEE Trans. on Signal Processing, vol. 40, pp. 882-893, April 1992.
[11] S. Joo, Y. Suh, J. Shin, and H. Kikuchi, "A New Robust Watermark Embedding into Wavelet DC Components", ETRI Journal, vol. 24, no. 5, October 2002.
[12] S. Voloshynovskiy, S. Pereira, A. Herrigel, N. Baumgartner, and T. Pun, "A Stochastic Approach to Content Adaptive Digital Image Watermarking", Proceedings of the Third International Workshop on Information Hiding, pp. 211-236, 1999.
[13] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.