Robust Watermarking of 3D Mesh Models

Han Sae Song and Nam Ik Cho
Seoul National Univ., San 56-1, Shillim-dong, Kwanak-gu, Seoul, 151-742, Korea
Telephone: +82-2-880-1774 Fax: +82-2-882-4658
Email: [email protected], [email protected]

JongWeon Kim
MarkAny Research Institute, 1810 Aju Bldg, 679-5 Yeoksam-Dong, Seoul, Korea
Telephone: +82-2-2262-5222 Fax: +82-2-2262-5333
Email: [email protected]
Abstract— A robust watermarking algorithm for 3D mesh models is proposed. The algorithm is based on the watermarking of images obtained from a virtual 3D scanner which mimics the operation of a real-world 3D scanner. The position of the object in the scanner is determined by principal component analysis of the vertex points. After obtaining a 2D range image from the virtual scanner, we embed the watermark using a conventional DCT-based 2D image watermarking method. Then, the vertices of the model are moved according to the range values modified by the 2D watermark. For watermark extraction, the virtual ranging is performed and then the retrieval process of the 2D image watermarking is applied. Experimental results show that the proposed algorithm is robust against attacks such as mesh simplification and Gaussian noise.
I. INTRODUCTION

Protection of intellectual property is one of the most important problems in the production and consumption of digital multimedia data. The problem is gaining more and more attention as the amount of multimedia data increases, and thus there have been many efforts to secure multimedia by encryption and watermarking. While encryption precludes unauthorized access to the data by scrambling, digital watermarking protects the copyright or ownership by hiding some information in the multimedia data. Recently, as the use of 3D models for animation and CAD is increasing, the watermarking of 3D models has been studied in many works. In the area of ownership proof, Ohbuchi introduced watermarking algorithms for polygonal meshes [10] and NURBS [9]. Benedens proposed several robust watermarking algorithms: watermarking that is robust to mesh simplification and affine transforms [3][4][2], high-capacity watermarking [5], and a combined watermarking system [1]. Wagner [16] also developed a watermarking scheme which is robust against affine transforms. In [8], Praun et al. introduced a watermarking scheme robust against various attacks including cropping. In the authentication area, Yeo et al. [15] developed an algorithm for the verification of 3D objects.

In this paper, we propose an algorithm that extracts a 2D image from the 3D model and embeds the watermark into that 2D image, whereas conventional 3D watermarking algorithms directly handle the vertex points in 3D coordinates. As a result, any 2D watermarking algorithm can be employed for the watermarking of 3D objects, and we can choose an
appropriate one for the given problem. In this paper, the 2D image extracted from the object is virtual range data, i.e., the distance image measured from a cylinder around the object, and the DCT-based 2D watermarking of [7] is employed. The experimental results show that the watermark is robust to attacks such as mesh simplification and Gaussian noise. Analysis and experiments for other possible attacks and other 2D watermarking algorithms are omitted here due to limited space.

This paper is organized as follows. In Section II, the virtual ranging is described. Then, the watermark embedding and extraction procedures are presented in Section III. Experimental results are shown in Section IV and we conclude the paper in Section V.

II. VIRTUAL RANGING

A. Scanning Cylinder and Reference Axes

The virtual scanner consists of a cylinder and three reference axes. The cylinder is positioned around the 3D model and is set to fit the model tightly. Because the same range image must be obtained in spite of similarity transforms, the cylinder setting should not be altered by such transforms. Hence we need reference axes which are immune to translation and rotation of the 3D model. We employ the eigenvectors of the covariance matrix of the vertex coordinates [12] as the reference axes. The mean of the vertices is denoted as m_v and the covariance matrix as C_v. The scanning cylinder is set as shown in Fig. 1, where s_1 is the eigenvector of C_v corresponding to the largest eigenvalue, and s_2 and s_3 are the eigenvectors corresponding to the second and third largest eigenvalues. All eigenvectors have unit length. The radius and height of the virtual scanning cylinder are set so that the cylinder tightly fits the 3D model. As a result, the cylinder undergoes the same amount of uniform scaling as the 3D model does, which gives immunity to the scaling attack. The radius r and height h of the virtual scanner are related to the eigenvectors as

r = max_{v∈V} sqrt( {s_2^T (v − m_v)}^2 + {s_3^T (v − m_v)}^2 )   (1)

0-7803-7714-1/02/$17.00 (C) 2002 IEEE

Fig. 1. Scanning cylinder and reference axes
h = h_max − h_min,  h_max = max_{v∈V} s_1^T (v − m_v),  h_min = min_{v∈V} s_1^T (v − m_v)   (2)

where V is the set of vertex points. Depending on the method of eigenvector computation and the degree of symmetry of the object, the resulting eigenvectors may be oppositely directed. In order to make the eigenvectors have consistent directions, we first find the standard deviation of the lengths between the center point and the vertices [12], which is defined as

D_V = sqrt( (1/n) Σ_{v∈V} ||v − m_v||^2 ).   (3)

Fig. 2. 2D grid on the side face of the scanning cylinder
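As a concrete illustration of (1)-(3), the reference axes and cylinder parameters can be computed from the vertex array alone. The following numpy sketch uses our own function and variable names (the paper specifies the mathematics, not this code), and the eigenvector signs returned here still need the disambiguation step described in the text:

```python
import numpy as np

def scanner_axes(V):
    """Reference axes and cylinder parameters, per Eqs. (1)-(3).
    V: (n, 3) array of mesh vertex coordinates."""
    m_v = V.mean(axis=0)                 # vertex mean m_v
    C_v = np.cov(V.T)                    # 3x3 covariance matrix C_v
    _, evecs = np.linalg.eigh(C_v)       # eigh sorts eigenvalues ascending
    s1, s2, s3 = evecs[:, 2], evecs[:, 1], evecs[:, 0]  # largest first

    d = V - m_v
    # Eq. (1): radius of the tightly fitting cylinder
    r = np.sqrt((d @ s2) ** 2 + (d @ s3) ** 2).max()
    # Eq. (2): height = extent of the projections onto s1
    p = d @ s1
    h = p.max() - p.min()
    # Eq. (3): RMS vertex distance (evaluated per subset when fixing signs)
    D_V = np.sqrt((np.linalg.norm(d, axis=1) ** 2).mean())
    return (s1, s2, s3), r, h, D_V
```

Note that np.linalg.eigh returns eigenvalues in ascending order, hence the reversed column indexing.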
For determining the direction of s_1, the vertices of the 3D model are first divided into two sets, A and B. The set A contains the vertices satisfying s_1^T (v − m_v) > 0 and B contains the vertices satisfying s_1^T (v − m_v) < 0. Then, the standard deviation of each set is calculated from (3). If D_A < D_B, we reverse the direction of s_1. The direction of s_2 is fixed in the same way, and s_3 is calculated as the cross product s_2 × s_1.

B. Procedure of Obtaining the Range Image

In this subsection, we describe how to make the range image with the above virtual scanning system. It is assumed that the 3D model is a triangular mesh whose faces have counterclockwise orientation. First, we make an N_v × N_u grid on the side face of the cylinder as shown in Fig. 2. The grid points, where the ranges are calculated, are denoted by (u_r, v_r) ∈ {(0, 0), ..., (N_u − 1, N_v − 1)}. The vertical line (v_r = 0) lies in the direction of s_3 and v_r increases in the counterclockwise direction. Fig. 3 shows the general situation of virtual ranging. From (u_r, v_r) on the grid, which corresponds to the point q′ in (x, y, z) coordinates, we draw a line towards the point q on the opposite side of the cylinder. The triangle abc represents one of the triangular faces which intersect the line segment q′q. Note that the triangle abc faces the point q′, i.e., the order of its vertices is a → b → c. The range value l is the quantity we wish to find, and it is computed for every (u_r, v_r) point on the grid. To be more precise about the computation of l, we first find the intersecting triangles by employing a ray-triangle intersection test [11][14]. After finding the faces, we define two vectors as

e_u = (h / N_u) s_1
e_v(x) = cos(2πx / N_v) s_3 + sin(2πx / N_v) s_2   (4)

where e_v(x) is a unit vector and a function of x, and x is an arbitrary v-coordinate of the grid in the range 0 ≤ x < N_v. Hence, e_v(x) is the radial direction vector of the cylinder which points to the vertical line whose v-coordinate is x. In terms of e_u and e_v, the coordinates of the points q′ and q can be expressed as

q′(u_r, v_r) = m_v + h_min s_1 + (u_r + 0.5) e_u + r e_v(v_r)
q(u_r, v_r) = m_v + h_min s_1 + (u_r + 0.5) e_u − r e_v(v_r).   (5)

The range is calculated by solving the simultaneous equations for the intersection point x, which lies on the line q′q and also on the triangle abc:

x = q′(u_r, v_r) − l(u_r, v_r) e_v(v_r)
(x − a)^T n = 0   (6)

where n is the normal vector of the triangle abc. The solution of the above equations is the range, given by

l(u_r, v_r) = ((q′(u_r, v_r) − a)^T n) / (e_v(v_r)^T n).   (7)

If we have several intersecting faces, we choose the shortest range because the faces are not transparent.

III. WATERMARK EMBEDDING AND EXTRACTION

A. Embedding Procedure

In this paper, the algorithm in [7] is chosen for the watermarking of the range image. The watermarking is applied to a rectangular region of the virtual range image, and the corner points of the region are used as a key of the watermark. For applying the DCT-domain watermarking to the virtual range image, the range values are first linearly normalized to [0, 255]. Then, the range image is divided into 8×8 DCT blocks and the watermark is added to the AC coefficients as c~ = c + αw.
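A sketch of the embedding step just described: each 8×8 block is transformed, ±1 noise scaled by α is added to the AC coefficients, and the block is transformed back. This is an illustrative stand-in rather than the exact scheme of [7]: numpy's default generator stands in for the uniform generator of [13], and all names are ours.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] *= np.sqrt(1.0 / n)
    D[1:] *= np.sqrt(2.0 / n)
    return D

def embed_block_dct(image, alpha, seed):
    """Add seeded +/-1 noise to the AC coefficients of every 8x8 DCT
    block: c~ = c + alpha * w. Image dimensions must be multiples of 8."""
    D = dct_matrix(8)
    rng = np.random.default_rng(seed)            # the seed is a watermark key
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = D @ out[i:i+8, j:j+8] @ D.T      # forward 2D DCT
            wm = rng.integers(0, 2, (8, 8)) * 2 - 1  # uniform +/-1 noise
            wm[0, 0] = 0                             # leave the DC term untouched
            block += alpha * wm
            out[i:i+8, j:j+8] = D.T @ block @ D      # inverse 2D DCT
    return out
```

Detection would regenerate w from the seed and correlate it with the block-DCT coefficients of the test image, as in the reference scheme.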
In (9), ev (vv ) is the same vector as in (4) and the variable x is replaced by vv . But above modification makes severe discontinuity along the adjacent Bur ,vr as shown in Fig. 4 (b). Hence we multiply weighting function to ∆l and smooth the surface. The 1D weighting function employed here is f (x) = {1 + cos(πx)}/2.
(10)
If a vertex is in Bur ,vr , the weighting function applied to ∆l(ur , vr ) in (9) is the multiplication of the 1D weight functions in each direction as follows. Fig. 3.
Ranging situation
(a) Before watermarking Fig. 4.
w(ur ,vr ) (uv , vv ) = f (uv − ur )f (vv − vr ) 0
Range values
c+αw. Watermark is a uniform noise which is -1 or 1, and the seed value of the uniform noise generator [13] is also another key of watermark. In [7], α is decided by the human visual system (HVS) model, but a constant value is used here because the range image is not directly related with HVS. After watermarking, the original range values are modified. Accordingly, we must modify the vertices and generate the watermarked 3D model. Fig. 4 shows the model surface in the u-direction, where the horizontal line is the model surface and the arrow is the ranging ray. After watermarking process, the corresponding vertices are moved in order to match the new range value ˜l as in Fig. 4 (b). In order to find the vertices which correspond to the new range value, all the vertices are first projected to the u−v coordinates along the radial direction of the cylinder. In the case of v coordinate, one vertex can be projected into two radial directions. Hence we examine the normal vector of the current projecting vertex and then project it to the radial direction which is closer to the normal vector. As a result of above procedure, we have the knowledge of u − v coordinates of all vertices. The u − v coordinate of an arbitrary vertex v is denoted as (uv , vv ). Hence we can define a set of vertices, Bur ,vr , which contains the vertices whose the coordinates (uv , vv ) satisfy |uv − ur | < 0.5 and |vv − vr | < 0.5. That is, Bur ,vr is a set of vertices which are projected to the 1 × 1 rectangular region in the u − v grid and the center of the region is (ur , vr ). Each Bur ,vr corresponds to a range value, that is, l(ur , vr ). By defining the difference between the original range and the watermarked one as
∀ v ∈ Bur ,vr .
v = v−
1 1
w(ur +j,vr +k) (uv , vv )∆l(ur +j, vr +k) ev (vv ).
j=−1 k=−1
(12)
B. Extraction procedure For the extraction of watermark, two keys and watermarked model are needed. One of the keys is the seed number and the other is the corner points of the rectangular region where the watermarked is embedded. Nv and Nu are also needed. In order to extarct the watermark, the range image of the watermarked model is obtained using the virtual ranging process. Then, the watermark is extracted from the range image. Because the extraction process is performed without the original 3D model, the reference axes of the watermarked model differ from that of the original model roughly by 0.005 degrees. And they differ by 0.1-0.2 degrees after simplification attack. But this difference is negligible. IV. E XPERIMENTAL RESULTS
(8)
The proposed algprithm is applied to the rabbit model and happy buddha model. The rabbit has 67039 vertices and 134073 faces, and the happy buddha has 148488 vertices and 300074 faces. The watermark strength α is set to 1.3 for the rabbit and 0.9 for the happy buddha. The size of watermarking region is 80×80 for both models. The grid dimension is 84×120 for the buddha and 84×84 for the rabbit. The SNR of the watermarked rabbit model is 43.98dB and that of the buddha model is 49.62dB, where the SNR is defined as var{v − mv } . (13) SN R = 10log10 var{v − v}
(9)
The SNR doesn’t tell the visual quality of the model but it may be used as an objective measure to describe how much watermark is added.
we can express the modified vertices as v = v − ∆l(ur , vr ) ev (vv )
(11)
The weighting function influences eight neighboring regions, and we add the overlapped values. With the above weighting function, (9) is changed as follows.
(b) After watermarking
l(ur , vr ) − l(ur , vr ). ∆l(ur , vr ) = ˜
|uu − ur | < 1 |vv − vr | < 1 otherwise
if
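The distortion measure of (13) is straightforward to compute from the original and watermarked vertex arrays; a small numpy sketch (function name is ours):

```python
import numpy as np

def mesh_snr_db(V, V_wm):
    """SNR of Eq. (13): variance of the centered original vertices over
    the variance of the vertex displacement, in dB."""
    m_v = V.mean(axis=0)
    signal = np.var(V - m_v)      # var{v - m_v}
    noise = np.var(V_wm - V)      # var{v' - v}
    return 10.0 * np.log10(signal / noise)
```

A displacement roughly 1/100 of the vertex spread yields an SNR around 40 dB, comparable to the values reported for the two test models.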
Fig. 5. Rabbit model and Happy Buddha model

The models are attacked by mesh simplification, addition of Gaussian noise, and similarity transforms. The simplification method is adopted from [6][18], where 'plycrunch' is used for the buddha and 'plysimplify' for the rabbit. The simplification attack reduces the number of faces of the model by half. In the Gaussian noise attack, we add a Gaussian random vector to the coordinates of the vertices, where the amplitude of the random vector is 0.6% of the largest diagonal length of the bounding box. For the similarity transforms, we can always detect the watermark. For the other attacks, the results are shown in Fig. 6 by ROC (receiver operating characteristic) curves. For the false-alarm probability, we iterated the extraction up to 10^8 times, and thus there is no information on P_f below 10^-8. The results show that the proposed algorithm is robust to attacks such as mesh simplification and Gaussian noise.

Fig. 6. ROC curves: (a) simplification attack, (b) Gaussian noise attack

V. CONCLUSIONS

We have proposed a robust watermarking algorithm for 3D mesh models. A virtual 3D scanner is devised, and a range image of the object is obtained. Then, the DCT-based 2D watermarking algorithm is employed for embedding the watermark into the range image; other 2D watermarking methods may also be used. The watermarked 3D object is reconstructed from the modified range data with weighting functions. In the extraction procedure, the watermarked range image is obtained and the watermark is extracted with a small amount of additional information. Experimental results show that P_d is about 0.97 when P_f is 10^-6 for the Gaussian noise attack, which is as robust as the DCT-based 2D image watermarking algorithm.

REFERENCES

[1] O. Benedens, "Towards Blind Detection of Robust Watermarks in Polygonal Models," Eurographics 2000, Vol. 19, No. 3, 2000.
[2] O. Benedens, "Affine Invariant Watermarks for 3D Polygonal and NURBS Based Models," Information Security, Third International Workshop (ISW 2000), Springer, Vol. 1975, pp. 15-29, Australia, December 2000.
[3] O. Benedens, "Geometry-based Watermarking of 3D Models," IEEE Computer Graphics and Applications, Vol. 19, pp. 46-55, Jan.-Feb. 1999.
[4] O. Benedens, "Watermarking of 3D Polygon Based Models with Robustness against Mesh Simplification," Proceedings of SPIE: Security and Watermarking of Multimedia Contents, Vol. 3657, pp. 329-340, 1999.
[5] O. Benedens, "Two High Capacity Methods for Embedding Public Watermarks into 3D Polygonal Models," Proceedings of the Multimedia and Security Workshop at ACM Multimedia 99, pp. 95-99, Orlando, Florida, 1999.
[6] J. Cohen et al., "Simplification Envelopes," Proceedings of SIGGRAPH '96, pp. 119-128, 1996.
[7] J. R. Hernandez and F. Perez-Gonzalez, "Statistical Analysis of Watermarking Schemes for Copyright Protection of Images," Proceedings of the IEEE, Vol. 87, No. 7, pp. 1142-1166, July 1999.
[8] E. Praun, H. Hoppe and A. Finkelstein, "Robust Mesh Watermarking," Proceedings of SIGGRAPH '99, pp. 49-56, July 1999.
[9] R. Ohbuchi, H. Masuda and M. Aono, "A Shape-preserving Data Embedding Algorithm for NURBS Curves and Surfaces," Computer Graphics International Proceedings, pp. 180-187, 1999.
[10] R. Ohbuchi, H. Masuda and M. Aono, "Watermarking Three-Dimensional Polygonal Models Through Geometric and Topological Modifications," IEEE Journal on Selected Areas in Communications, Vol. 16, No. 4, pp. 551-560, May 1998.
[11] J. O'Rourke, Computational Geometry in C, 2nd ed., Cambridge University Press, Cambridge, UK, 1998.
[12] E. Paquet and M. Rioux, "Nefertiti: A Query by Content Software for Three-Dimensional Models Databases Management," Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 345-352, 1997.
[13] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed., Section 7.1: Uniform Deviates, 1999.
[14] R. J. Segura and F. R. Feito, "An Algorithm for Determining Intersection Segment-Polygon in 3D," Computers and Graphics, Vol. 22, No. 5, pp. 587-592, October 1998.
[15] B.-L. Yeo and M. M. Yeung, "Watermarking 3D Objects for Verification," IEEE Computer Graphics and Applications, Special Issue on Image Security, pp. 46-55, January/February 1999.
[16] M. G. Wagner, "Robust Watermarking of Polygonal Meshes," Geometric Modeling and Processing 2000: Theory and Applications, Proceedings, pp. 201-208, 2000.
[17] Y. Wu, X. Guan, M. S. Kankanhalli and Z. Huang, "Robust Invisible Watermarking of Volume Data using the 3D DCT," Computer Graphics International 2001, pp. 359-362, 2001.
[18] Simplification Envelopes Ver. 1.2, http://www.cs.unc.edu/ geom/envelope.html.