J. Vis. Commun. Image R. 26 (2015) 1–8
An H.264/AVC HDTV watermarking algorithm robust to camcorder recording

Li Li a, Zihui Dong a, Jianfeng Lu a,1, Junping Dai b, Qianru Huang a, Chin-Chen Chang c,d,*, Ting Wu a

a Institute of Graphics and Image, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
b Institute of Digital Media, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
c Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan
d Department of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan
Article history: Received 1 March 2014; accepted 26 August 2014; available online 16 September 2014.

Keywords: Video watermark; camcorder recording; watermark pattern; robustness; copyright; geometric attack; Watson visual model; temporal synchronization
Abstract

With the purpose of copyright protection of digital video, this paper proposes an H.264/AVC HDTV watermarking method that is robust to camcorder recording. Because the contents of consecutive frames of a video are almost identical, we embed the copyright information by fine-tuning the luminance relationship of consecutive frames. To ensure the quality of the video and the robustness of the algorithm, we use an adaptive watermark pattern to reduce the image area to be modified, and we make the embedding strength adaptive according to the improved Watson visual model. When detecting watermarks, to reduce detection errors caused by recording errors and shot changes, we determine the detecting area adaptively according to the size of the watermarked video and segment the video into shots using the directional empirical mode decomposition method. Experimental results show that the proposed method achieves high video quality and is robust to camcorder recording, transcoding, recoding, and other geometric attacks.

© 2014 Elsevier Inc. All rights reserved.
* Corresponding author at: Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan (C.-C. Chang). 1 Principal corresponding author (J. Lu). http://dx.doi.org/10.1016/j.jvcir.2014.08.009

1. Introduction

In recent years, with the continuous development of multimedia technology, high-definition videos have entered people's everyday lives through high-speed networks and high-definition equipment. Although this development enriches people's lives, it can also cause huge losses to copyright owners, because digital media products can be duplicated and distributed easily with high-quality recording devices. Thus, in the network era of the digital media industry, it is necessary to explore effective solutions to prevent piracy and protect the intellectual property rights of digital media products. The geometric attack is the most serious of all watermark attacks. For still images, various watermarking schemes against geometric attacks have been proposed. The watermark embedding process must be synchronized with the extracting process; the principal watermark synchronization approaches are geometric correction methods [1–3], geometric invariant methods [4–7], and feature-based methods [8–10]. Geometric correction methods include
image registration and template watermarking schemes. Geometric invariant methods include watermarking schemes based on Fourier–Mellin transform coefficients, geometric invariant moments, and image normalization. Feature-based methods include watermarking schemes based on Harris–Laplace features, SIFT, or SURF. However, these schemes are not suitable for video watermarking because of their high computational complexity and limited robustness against compound geometric and temporal attacks. Some research has been conducted on copyright protection of digital videos [11–13]. However, most of these studies target standard-definition videos; when applied to high-definition videos, the quality of the watermarked videos is badly affected. In addition, these methods are only robust to some conventional video watermarking attacks and do not address camcorder recording attacks. High-definition video watermarking methods are proposed in [14,15]; they achieve high video quality but are vulnerable to camcorder recording attacks. For video watermarking against the camcorder recording attack, the dominant distortions are likewise geometric attacks, such as rotation, resizing, and cropping. These geometric attacks cause an offset of the spatial position, which leads to failure of watermark detection. The video watermarking strategy commonly used against geometric attacks is to embed the watermark in geometrically invariant places, such as features [16] of video
frames. These methods can handle geometric distortions; however, when applied to high-definition video, they cannot satisfy the requirement of real-time embedding due to their large volume of calculations. Paper [17] embedded the watermark in the low-frequency DCT coefficients using the quantization index modulation (QIM) technique; this method is robust to downscaling and temporal synchronization attacks but cannot survive other geometric attacks. In addition to geometric attacks, recording also introduces transcoding, recoding, frame-structure changes, and other attacks; to resist the combination of all these attacks, the robustness of the algorithm must be very high. Studies of video watermarking algorithms that handle camcorder recording attacks are relatively few, but some research has been performed. Paper [18] proposed a scheme against camcorder capture based on the local autocorrelation function (LACF). The watermark is detected after restoring the watermark pattern by extracting the geometric distortion parameters using the LACF. However, it is difficult to use the peaks of the LACF, because the peaks are not clear after several signal-processing attacks; this scheme is also not robust to combinations of geometric distortions. Because the video contents of adjacent frames are similar, researchers [19,20] have proposed video watermarking methods that modify the overall trend of adjacent frames to indicate a watermark bit. The drawback of these methods is that they are only suitable for standard-definition video: when applied to high-definition video, they achieve poor invisibility, and for particular video clips, such as a still video sequence, they cannot meet the robustness requirements. In this paper, we propose a new algorithm that embeds watermark information bits by regulating the luminance trend of a few adjacent frames. Unlike previous methods, our method is designed for high-definition video.
In the algorithm, we use an adaptive watermark pattern W to reduce the area to be modified and make the embedding strength adaptive according to the improved Watson visual model. In the watermark-detecting scheme, we use temporal synchronization and adaptive detecting areas to improve detection accuracy. Experimental results show that our method achieves high video quality and robustness to camcorder recording, transcoding, recoding, and other geometric attacks. The paper is organized as follows. In Section 2, we point out the problems of traditional video watermarking methods against camcorder recording, introduce the general idea of our algorithm for solving these problems, and then detail the watermark embedding and extracting processes. In Section 3, we analyze our results, and we conclude in Section 4.
2. Proposed method

2.1. Existing problems and countermeasures

2.1.1. Watermark embedding area and watermark embedding strength

Based on the characteristic that the contents of adjacent frames of a video are almost identical, traditional video watermarking methods embed the watermark bit by regulating the trend of the pixels of consecutive frames: an obvious increasing or decreasing trend represents the bit "1" or "−1", respectively. The modified area of existing methods is the whole frame; for flat frames or continuous stationary frames, the invisibility and robustness are not adequate. In this paper, we choose a fixed number of small blocks in the frame and regulate their luminance adaptively based on the content of the frame; the modified area is thus decreased and the invisibility increased. In addition, when calculating the embedding strength, the research in [19] obtained the embedding strength based on
fluctuation of the mean of the low-frequency DCT coefficients; the problem with this method is that the strength is not adaptive to the video content, which greatly influences the quality of the video. In paper [20], the embedding strength was adjusted by a texture detector and a motion detector in the chrominance channel. This method yields good video quality if the video's texture is rich and the movement is obvious, but it does not take luminance masking into account; for static, flat, and colorful video segments (Fig. 2), the embedding strength is too weak and the robustness is insufficient. Our method modifies the luminance value of a small block of every frame based on the improved Watson luminance model [21]; the strength of the modification adapts to the video content to meet the requirements of invisibility and robustness. The details are described below. The Watson model is a perceptual model built around the just noticeable difference (JND): the minimum distortion of a frequency coefficient that can be perceived by people, according to the results of a large number of experiments performed by Watson. When the variation of a frequency coefficient is less than the corresponding JND threshold, the human eye cannot identify the loss of image quality due to the change of the coefficient, so we can calculate the embedding strength based on the JND threshold. According to Watson's luminance masking model, the JND threshold is strongly correlated with the overall and local brightness of the image; the brighter the background, the more image distortion it can conceal. The JND threshold of every DCT coefficient is calculated as:
t_{i,j,k} = t_{i,j} [c(0,0,k) / c(0,0)]^a    (1)
where t_{i,j,k} is the JND threshold of the (i, j)-th DCT coefficient of the k-th 8×8 block of the image, t_{i,j} is the (i, j)-th value in the frequency sensitivity matrix, c(0,0,k) is the DC coefficient of the k-th 8×8 block, c(0,0) is the average of the DC coefficients of all blocks of the image, and a is the luminance masking exponent; Watson recommended a value of 0.649. The Watson perceptual model is based on the characteristics of the whole image; in this paper, we calculate the embedding strength based on 32×32 blocks, so we consider the local characteristics of the image and improve the Watson model as described below. Considering a frame of the video, we divide it into non-overlapping 32×32 blocks, and each 32×32 block is divided into sixteen 8×8 blocks. The DC coefficient c(0,0,k) of each of the sixteen blocks is calculated directly in the spatial domain, following [22]. When calculating c(0,0), we do not use the DC coefficient of the whole frame but only the DC coefficients of the sixteen 8×8 blocks in the 32×32 block:
c(0,0) = (1/16) Σ_{k=0}^{15} c(0,0,k)    (2)
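A minimal sketch of this local JND computation in pure Python. The value of the sensitivity-matrix entry t_{0,0} below is a placeholder, and the spatial-domain DC computation assumes an orthonormal 8×8 DCT (for which the DC coefficient equals the sum of the 64 pixels divided by 8, as in [22]); both are assumptions for illustration, not values taken from the paper.

```python
A = 0.649   # Watson's luminance masking exponent (Eq. (1))
T00 = 1.40  # DC entry of the frequency sensitivity matrix (placeholder value)

def dc_coefficient(block8):
    # DC coefficient of an orthonormal 8x8 DCT, computed directly in the
    # spatial domain: c(0,0,k) = (1/8) * sum of all 64 pixels.
    return sum(sum(row) for row in block8) / 8.0

def jnd_thresholds(block32):
    """JND threshold of the DC coefficient of each 8x8 block inside one
    32x32 block, per the improved (local) Watson model, Eqs. (1)-(2)."""
    # Split the 32x32 block into sixteen 8x8 sub-blocks (row-major order).
    dcs = []
    for by in range(0, 32, 8):
        for bx in range(0, 32, 8):
            sub = [row[bx:bx + 8] for row in block32[by:by + 8]]
            dcs.append(dc_coefficient(sub))
    c_mean = sum(dcs) / 16.0  # Eq. (2): local mean of the sixteen DC values
    # Eq. (1): brighter sub-blocks tolerate larger distortion.
    return [T00 * (c / c_mean) ** A for c in dcs]
```

For a uniform block every threshold reduces to t_{0,0}, and a brighter sub-block receives a larger threshold, matching the luminance-masking behavior described above.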
From Eqs. (1) and (2) we obtain the JND threshold of the DC coefficient of each 8×8 block; we then modify the DC coefficient by regulating the pixels in the spatial domain [22]. Experimental results show that the proposed method ensures the quality of the video as well as the robustness. Section 2.2.1 details how the pixels are regulated, and experimental results are given in Section 3.

2.1.2. Improvement in the watermark detection process

When detecting the trend of K adjacent frames, traditional methods are based on the content of the whole frame; the content of the watermark detection area directly affects the accuracy of watermark detection. When the recording error is large enough
Fig. 1. (a) The traditional embedding method; (b) the method introduced in this paper. Gray areas are pixels to be modified; gray squares in (b) are 4×4 blocks.
Fig. 2. Frame captured from a static video sequence.
that irrelevant contents in the background are recorded or rotation occurs, these methods need to find the region corresponding to the original video by cutting out margins before watermark detection [20], which increases the complexity of the detection process. The watermark detecting process is illustrated in Fig. 3. To solve this problem, we make the detecting area a small area in the middle of the recorded frame, with its size determined by the width and height of the recorded frame. When serious rotation occurs or irrelevant background is recorded, the detecting area still lies inside the frame and the trend over K consecutive frames is not influenced by irrelevant contents, so there is no need to locate the original frame region. The detection process is therefore more convenient and the detection accuracy higher. In addition, most videos are composed of multiple shots, and frames within the same shot have a strong correlation, so we can embed the watermark bit by fine-tuning the luminance relationship of consecutive frames. A change between shots destroys this correlation and causes watermark detection to fail. In this paper, we segment shots based on directional empirical mode decomposition (DEMD) before watermark detection, and detection is performed within shots, to avoid detection errors resulting from shot changes and to increase detection accuracy. Another common attack in the practical transmission of videos is frame loss, which destroys the synchronicity of the watermark and leads watermark detection to fail. To deal with this problem, we use redundant embedding and a frame count C to ensure the detection rate; this is detailed in Section 2.2.
Fig. 3. Comparison of embedding and detecting of the two methods; (a) shows the embedding and detecting process of traditional methods and (b) shows the proposed method. Gray areas represent modified areas; the red box is the detecting area. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
2.2. Watermark embedding and extracting scheme

2.2.1. Watermark embedding scheme

The general idea of the proposed algorithm is as follows: first, we decode the original video into frames, convert the RGB color space to YUV, and divide the Y channel image into blocks. Then we calculate the JND of the DC coefficient of each block based on the improved Watson model. Last, we modify the DC coefficient by regulating the pixel values in the spatial domain to embed the watermark bit. The detailed process is as follows:

1) Decode the original video to get a frame, convert the RGB color space to YUV, and divide the Y channel image into non-overlapping 32×32 blocks.
Fig. 4. (a) The luminance trend of the original frames, which is stable near a constant; (b) a decreasing trend, representing bit −1; (c) an increasing trend, representing bit 1. The embedding process guarantees that the trend does not cause obvious visual distortion.
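The five-frame trend embedding illustrated in Fig. 4 can be sketched as below. This is a simplified sketch: frames are flat lists of luminance values, `pattern` stands for the adaptive watermark pattern W, and the sign assignment (bit 1 produces an increasing trend, bit −1 a decreasing one) is one consistent reading of the scheme, chosen to match Fig. 4.

```python
def embed_bit(group, pattern, bit):
    """Embed one watermark bit into a group of five consecutive frames by
    tilting their luminance trend (sketch of the five-frame-group step).
    `group` is a list of 5 frames, each a flat list of luminance values;
    `pattern` is the per-pixel regulating degree W (same length as a frame);
    bit 1 -> increasing trend, bit -1 -> decreasing trend; the middle
    frame is left untouched."""
    assert len(group) == 5 and bit in (1, -1)
    out = [frame[:] for frame in group]
    for f in (0, 1):   # first two frames: shift against the trend
        out[f] = [p - bit * w for p, w in zip(out[f], pattern)]
    for f in (3, 4):   # last two frames: shift with the trend
        out[f] = [p + bit * w for p, w in zip(out[f], pattern)]
    return out
```

Detecting the bit then reduces to comparing the mean luminance of the detecting area over the first and last frames of the group.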
Fig. 5. The synchronization sequence and the watermark information form a pair; the embedded information is a repetition of the synchronization sequence and the watermark sequence.
2) Divide each 32×32 block into sixteen 8×8 blocks and calculate the DC coefficient of each 8×8 block; the JND threshold t_{0,0,k} of the DC coefficient of each 8×8 block is then obtained by combining Eqs. (1) and (2).

3) The modified area of the whole frame is composed of all the modified blocks, as shown in Fig. 1(b). Directly modifying all pixels within each 8×8 block would cause an obvious quality loss, as the modified area would be too big. Thus, we divide each 8×8 block into four 4×4 blocks and choose the block with maximum brightness to regulate its luminance value. The regulating degree is t_{0,0,k}/8 · β, where t_{0,0,k}/8 is the regulating degree of each 8×8 block; because the modified area becomes smaller, we can appropriately increase the regulating degree by multiplying it by a factor β. In this paper, we set β = 2 as an empirical value. After calculating the regulating degrees of all the 4×4 blocks in the frame, we obtain the corresponding watermark pattern W of the frame. The size of W is the same as the size of the video frame; (i, j) denotes the location of a pixel in the frame, and W(i, j) is the regulating degree of that pixel.

4) If the watermark bit is 1, we add the pattern −W to the first two frames of the consecutive five-frame group and the pattern W to the last two frames. If the watermark bit is −1, we add the pattern W to the first two frames of the group and the pattern −W to the last two frames.

5) The embedding is repeated redundantly until the video decoding is complete; we then recode the watermarked frames to obtain the watermarked video carrying the watermark information.

In step 5, we make the embedding process redundant because our watermarking scheme is vulnerable to attacks that strongly influence the correlation between adjacent frames, such as frame
Fig. 6. (a) Original video frame; (b) watermarked video frame.
Fig. 7. Camcorder recorded frames. (a) Normally recorded; (b) recorded with black margins and rotated; (c) recorded, rotated and scaled.
Table 1
Average and minimum PSNR.

Video sequence | Average PSNR (proposed method) | Average PSNR (Watson model)
Sequence 1 | 44.05 dB | 42.36 dB
Sequence 2 | 46.63 dB | 43.20 dB
Sequence 3 | 47.51 dB | 41.43 dB
Sequence 4 | 46.04 dB | 42.16 dB
Sequence 5 | 45.93 dB | 41.61 dB
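For reference, the PSNR figures reported in Table 1 follow the standard definition for 8-bit luminance; a minimal sketch (standard formula, not specific to this paper's implementation):

```python
import math

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized 8-bit frames,
    each given as a flat list of luminance values."""
    mse = sum((o - w) ** 2 for o, w in zip(original, watermarked)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak * peak / mse)
```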
dropping attack. If only a few frames are dropped, detection in the other watermarking groups is not affected, because the watermark is repeated many times.

2.2.2. Watermark detecting scheme

The video watermarking method of this paper is a blind watermarking method; the detecting idea is illustrated in Fig. 4. In addition to introducing attacks such as geometric attacks, transcoding, and recoding, camcorder recording also causes temporal synchronization problems, so finding the correct boundaries of the beginning of the information bits is important. Our strategy is to place a synchronization sequence before each watermark sequence. The frame rate is converted back to the original frame rate before detection; then we slide the detecting window frame by frame until we find a synchronization sequence, based on the Hamming distance between the extracted sequence and the embedded sequence. We set the Hamming-distance threshold to 1, which means we allow one bit of the extracted synchronization sequence to be wrong. When we find a synchronization sequence, we extract the watermark bits after it until we find the next synchronization sequence. The synchronization sequence and the watermark information form a pair, and the embedded information is a repetition of this pair; we denote the number of pairs by Np. The details of the watermark-detecting process are as follows:

1) Segment the watermarked video into single-shot videos using the DEMD method. Because a video that is too short cannot hold a complete watermark pair, we ignore segmented videos shorter than S seconds to ensure the detection rate. S is a threshold that depends on K (we set K = 5) and on the lengths of the synchronization sequence and the watermark sequence.

2) Decode one chosen single-shot video to get the watermarked frames, change the RGB color space into YUV space,
take 1/4 of the area in the middle of the Y channel image of five consecutive frames as the detecting area, and then obtain the information bit based on the luminance trend of the detecting area over the five consecutive frames.

3) Once we extract the correct synchronization sequences, we can obtain the watermark bits as in Fig. 5. Frame loss during video transmission may cause detection to fail. To handle this, we record the frame count Ci of each extracted watermark sequence, i = 1, 2, . . ., Np′, where Np′ is the number of extracted watermark sequences; it is less than or equal to the embedded number Np. If the length of the watermark sequence is Lw and there is no frame loss, each Ci should equal Lw · K. If Ci is unequal to Lw · K, the corresponding extracted watermark sequence is wrong and we discard it.

4) Repeat steps 2 and 3 until watermark extraction is finished for all the chosen single shots; we then obtain the watermark information.

In step 3, we can discard the wrongly extracted watermark sequences because the watermark sequence is repeated many times in the watermarked video. Compared with watermark capacity, the primary purpose of the scheme is to resist the camcorder recording attack: even if only one correct watermark sequence is extracted from the whole video, it can successfully prove the copyright of the video.
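The temporal synchronization step can be sketched as follows. This is a simplified sketch operating on an already-extracted bit stream: `find` slides a window one bit at a time, accepts a synchronization sequence at Hamming distance ≤ 1 as described above, and collects the watermark sequence that follows each one; the sequence values and lengths are illustrative, not those of the paper.

```python
def hamming(a, b):
    # Number of positions at which two equal-length bit sequences differ.
    return sum(x != y for x, y in zip(a, b))

def extract_watermarks(bits, sync, wm_len, max_dist=1):
    """Scan an extracted bit stream for synchronization sequences and return
    the watermark sequences that follow each one (sketch of steps 2-3)."""
    found, i = [], 0
    while i + len(sync) + wm_len <= len(bits):
        if hamming(bits[i:i + len(sync)], sync) <= max_dist:
            found.append(bits[i + len(sync):i + len(sync) + wm_len])
            i += len(sync) + wm_len   # jump past this sync/watermark pair
        else:
            i += 1                    # slide the detecting window by one bit
    return found
```

The frame-count check of step 3 (discarding a sequence whose Ci differs from Lw · K) would be applied to each returned sequence before accepting it.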
3. Experiments

The proposed method is tested on five high-definition H.264/AVC videos of size 1920×1080 pixels, length 6 min, frame rate 25 fps, and bit rate 10 Mbps; the total frame count of each test video is 9000. We set K = 5, which means we embed one bit in each group of five consecutive frames; a larger K would increase the robustness but decrease the watermark capacity. The synchronization sequence is a 10-bit PN sequence, and the length of the copyright information is 40 bits, because our test videos are only 6 min long. The embedded information thus comprises 50 bits, which requires 250 frames, so a 6-min video can embed Np = 9000/250 = 36 repetitions of the information. The threshold S is set to 12 s. The length of the copyright information can vary with users' demands; a normal movie of about 2 h is long enough to embed longer copyright information, such as 256 bits or more. Based on subjective judgment, the video quality of the watermarked video is almost unaffected, as shown in Fig. 6.
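The capacity arithmetic above can be checked directly, using the values from this section (K = 5 frames per bit, a 10-bit synchronization sequence, 40 bits of copyright information, 9000 frames):

```python
K = 5                # frames per embedded bit
sync_bits = 10       # length of the PN synchronization sequence
wm_bits = 40         # length of the copyright information
total_frames = 9000  # 6 min at 25 fps

frames_per_pair = (sync_bits + wm_bits) * K    # 50 bits -> 250 frames per pair
repetitions = total_frames // frames_per_pair  # Np = 36 repetitions
```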
Table 2
Correct detection rate (%). ('n' = not reported in the original paper.)

Attack | Seq. 1 (HD) | Seq. 2 (HD) | Seq. 3 (HD) | Seq. 4 (HD) | Seq. 5 (HD) | Paper [19] (SD) | Paper [20] (SD)
Rotate 3° | 100 | 100 | 100 | 100 | 100 | 100 | 100
Rotate 5° | 100 | 100 | 100 | 100 | 100 | 100 | 100
Scale down (1024×768) | 100 | 100 | 100 | 100 | 100 | 100 | 100
Scale down (800×600) | 100 | 100 | 100 | 100 | 100 | 100 | 100
Crop 5% | 100 | 100 | 100 | 100 | 100 | 99.6 | 100
Crop 10% | 100 | 100 | 100 | 100 | 100 | 92.8 | 100
Frame rate change (29.97) | 100 | 100 | 100 | 100 | 100 | 100 | 100
Recode (MPEG2) | 100 | 100 | 100 | 100 | 100 | n | n
Recode (MPEG4) | 100 | 100 | 100 | 100 | 100 | n | n
Transcode (6 Mbps) | 100 | 100 | 100 | 100 | 100 | 99.2 | 100
Transcode (3 Mbps) | 100 | 100 | 100 | 100 | 100 | 97.2 | 100
Frame dropping | 100 | 100 | 100 | 100 | 100 | n | n
Recording 1 (Fig. 7a) | 100 | 100 | 100 | 99 | 100 | 99.8 | 99.8
Recording 2 (Fig. 7b) | 99.5 | 98.5 | 99 | 98.9 | 99.3 | 97.2 | 99.2
Recording 3 (Fig. 7c) | 99 | 98.5 | 99 | 98.9 | 98.6 | 97.7 | 99.4
Fig. 8. If there is frame loss, the number of extracted watermark sequences is decreased.

The visual quality can be assessed by measuring the peak signal-to-noise ratio (PSNR) of watermarked frames against the original frames. Table 1 shows the average PSNR of the frames of the watermarked videos using our method and the original Watson model. One can see that the watermarked video maintains sufficient quality in PSNR, and the quality using our method is better than that using the Watson model. Table 2 shows the robustness of our method under different attacks. First, we test geometric attacks such as rotation, scaling, adding margins, and cropping using the VirtualDub program; this software is widely used for video editing and is publicly available. Second, we change the frame rate to 29.97 fps and convert it back to 25 fps to test the influence of a frame-rate change. To test the robustness to recoding, we maintain the frame rate and use MPEG-2 and MPEG-4 encoders to recode the watermarked video. We change the bit rate from 10 Mbps to 6 Mbps and 3 Mbps to test the robustness to transcoding. Visual quality would be severely damaged, and the video would lose its ornamental value, if many frames were lost; in our test, to preserve video quality, we randomly dropped 10 frames from the watermarked video to test robustness against the frame loss attack. Finally, we record the watermarked video using a camcorder (Sony HXR-MC1500C) on a tripod 2 m away from a 24 in. LCD monitor. In Table 2, seq. 1 to seq. 5 are the test videos of this paper: six-minute high-definition videos with complex scenes. The test videos of papers [19] and [20] are single-shot standard-definition videos (352×288), and the last two columns give the average detection accuracy under the corresponding attacks as reported in the original papers. From Table 2, we can see that the proposed method is fully robust to attacks such as geometric attack, recoding, and transcoding.

A small amount of frame loss decreases the watermark capacity but does not affect the overall detection rate. Fig. 8 shows the number of extracted watermark sequences. We embedded 36 repetitions of the watermark sequence into each test video; the number of extracted sequences is smaller than the embedded number because shot changes and frame loss affect watermark extraction. However, even one correctly extracted watermark sequence is sufficient to deliver the copyright information, so our scheme is robust to frame loss and outperforms the existing methods in this respect. Finally, for recording attacks, our scheme achieves detection accuracy higher than or equal to that of the traditional methods. A drawback of our watermarking scheme is that it relies heavily on the correlation between adjacent frames; the detection rate is affected when there is large variation between frames. In most normal movies, however, climactic scenes are relatively rare and most shots are steady without large variation, so the watermark sequences can be correctly extracted in these steady shots. For sections with large variation between adjacent frames, the detection rate is affected relatively seriously; however, the impact on the overall detection rate is acceptable.

4. Conclusion
For video copyright protection, this paper proposed a high-definition video watermarking algorithm that is robust to camcorder recording. Traditional video watermarking algorithms against recording can only be applied to single-shot, short videos; when applied to longer high-definition videos with more shot changes or colorful content, the invisibility and robustness requirements are not satisfied. The proposed method employs an adaptive watermark pattern to reduce the area to be modified and makes the embedding strength adaptive according to the improved Watson visual model. To improve detection accuracy, we used shot segmentation and an adaptive detection area when detecting the watermark information. Experimental results showed that the proposed method has good invisibility and is robust to camcorder recording, transcoding, geometric attacks, frame dropping, and recoding. For complex high-definition videos it retains good invisibility and robustness, which suggests the superiority of the proposed method.

Acknowledgments

This work was partially supported by the Sub-project under the National Science and Technology Support Program of China (No. 2012BAH91F03), the National Natural Science Foundation of China (No. 61370218), and the Natural Science Foundation of Zhejiang Province (No. LY12F02006).

References

[1] S. Pereira, T. Pun, Robust template matching for affine resistant image watermarks, IEEE Trans. Image Process. 9 (6) (2000) 1123–1129.
[2] X. Qi, J. Qi, Improved affine resistant watermarking by using robust templates, in: Proc. IEEE ICASSP, vol. 3, 2004, pp. iii-405–iii-408.
[3] V. Monga, D. Vats, B. Evans, Image authentication under geometric attacks via structure matching, in: Proc. IEEE ICME, 2005, pp. 229–232.
[4] M. Manoochehri, H. Pourghassem, G. Shahgholian, A novel synthetic image watermarking algorithm based on discrete wavelet transform and Fourier–Mellin transform, in: Proc. IEEE ICCSN, 2011, pp. 265–269.
[5] X. Xu, R. Zhang, X. Niu, Image synchronization using watermark as RST invariant descriptors, in: Proc. IEEE ICITIS, 2010, pp. 831–836.
[6] M. Schlauweg, D. Profrock, E. Muller, Watermark embedding by geometric warping after novel image moment-based normalization, in: Proc. MUE, 2010, pp. 1–6.
[7] X. Shao-wen, G. Guang-yong, Image watermark algorithm based on Legendre moment invariants, in: Proc. IET ICISCE, 2012, pp. 1–4.
[8] X.-Y. Wang, P.-P. Niu, H.-Y. Yang, L.-L. Chen, Affine invariant image watermarking using intensity probability density-based Harris–Laplace detector, J. Vis. Commun. Image Represent. 23 (6) (2012) 892–907.
[9] M. Cedillo-Hernandez, F. Garcia-Ugalde, M. Nakano-Miyatake, H. Perez-Meana, Robust digital image watermarking using interest points and DFT domain, in: Proc. TSP, 2012, pp. 715–719.
[10] L. Verstrepen, T. Meesters, T. Dams, A. Dooms, D. Bardyn, Circular spatial improved watermark embedding using a new global SIFT synchronization scheme, in: Proc. ICDSP, 2009, pp. 1–8.
[11] L. Jianfeng, Y. Zhenhua, Y. Fan, L. Li, An MPEG-2 video watermarking algorithm based on DCT domain, Hangzhou, China, 2011, pp. 194–197.
[12] Y. Yuan, Y. Zhao, K. Wang, X. Bai, A robust video watermarking algorithm based on sliding window for AVS, Seoul, Republic of Korea, 2010, pp. 7–10.
[13] G. Prabakaran, R. Bhavani, M. Ramesh, A robust QR-code video watermarking scheme based on SVD and DWT composite domain, Salem, Tamilnadu, India, 2013, pp. 251–257.
[14] M.-J. Lee, D.-H. Im, H.-Y. Lee, K.-S. Kim, H.-K. Lee, Real-time video watermarking system on the compressed domain for high-definition video contents: practical issues, Digital Signal Process. 22 (1) (2012) 190–198.
[15] P. Swamy, M.G. Chandra, B.S. Adiga, On incorporating biometric based watermark for HD video using SVD and error correction codes, Kerala, India, 2013.
[16] X.-C. Yuan, C.-M. Pun, Feature based video watermarking resistant to geometric distortions, in: Proc. IEEE TrustCom, 2013, pp. 763–767.
[17] M.-J. Lee, D.-H. Im, H.-Y. Lee, K.-S. Kim, H.-K. Lee, Real-time video watermarking system on the compressed domain for high-definition video contents: practical issues, Digital Signal Process. 22 (1) (2012) 190–198.
[18] M.-J. Lee, K.-S. Kim, Y.-H. Suh, H.-K. Lee, Improved watermark detection robust to camcorder capture based on quadrangle estimation, in: Proc. IEEE ICIP, 2009, pp. 101–104.
[19] Y. Wang, A blind MPEG-2 video watermarking robust to regular geometric attacks, in: Proc. IEEE Int. Workshop on Open-source Software for Scientific Computation (OSSC), 2009, pp. 169–171.
[20] H. Do, D. Choi, H. Choi, T. Kim, Digital video watermarking based on histogram and temporal modulation and robust to camcorder recording, in: Proc. IEEE ISSPIT, 2008, pp. 330–335.
[21] A. Watson, Visually optimal DCT quantization matrices for individual images, in: Proc. Data Compression Conference (DCC '93), 1993, pp. 178–187.
[22] S. Qing-tang, N. Yu-gang, L. Xian-xi, Image watermarking algorithm based on DC components implementing in spatial domain, Appl. Res. Comput. 29 (4) (2012) 1441–1444.