IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 9, NO. 9, SEPTEMBER 2014


Triangle Surface Mesh Watermarking Based on a Constrained Optimization Framework

Xavier Rolland-Nevière, Gwenaël Doërr, Senior Member, IEEE, and Pierre Alliez

Abstract—A watermarking strategy for triangle surface meshes consists of modifying the vertex positions along the radial directions, in order to adjust the distribution of radial distances and thereby encode the desired payload. To guarantee that watermark embedding does not alter the center of mass, prior work formulated this task as a quadratic programming problem. In this paper, we contribute a generalization of this formulation with: 1) integral reference primitives; 2) arbitrary relocation directions to alter the vertex positions; and 3) alternate distortion metrics to minimize the perceptual impact of the embedding process. These variants are benchmarked against a range of attacks, and we report both improved robustness performance, in particular against simplification attacks, and improved control over the embedding distortion.

Index Terms—3D mesh watermarking, quadratic programming, integral barycenter, local roughness, perceptual shaping.

I. INTRODUCTION

DIGITAL watermarking consists of modifying multimedia content in a robust and imperceptible way in order to hide a secret message, and it is a central component of content protection architectures [1]. The embedded message, referred to as the watermark payload, can serve as a forensic piece of evidence for traitor-tracing tasks, e.g., by identifying a leak when content is illegally made available on the Internet. The increasing popularity of computer-generated 3D models has recently called for dedicated watermarking methods [2]. Surface meshes are approximations of the surface boundary of 3D objects [3], and most 3D watermarking methods focus on the popular triangle mesh representation.

Mesh watermarking faces three main challenges. First, the original content may contain non-manifold parts, self-intersections, or holes. These defects can appear during the acquisition process or along the geometry processing pipeline. In the entertainment industry, artists may even create incomplete meshes, as there is no need to model parts that will not be rendered. Second, meshes are irregular samplings of surfaces. This prevents the straightforward application of standard signal processing tools. A large body of research has focused on adapting these tools (e.g., the Fourier transform) to meshes, but these tools usually require a uniform sampling and a regular structure. Third, meshes can undergo a much wider range of alterations than other types of content.

Non-blind 3D watermarking methods exhibit satisfactory robustness. Nevertheless, most practical applications preclude access to the original 3D model at decoding, and designing blind methods therefore remains a challenge. For instance, blind watermarking detection cannot leverage recent advances in shape matching, which could help for registration, e.g., when dealing with different poses of a model.

The objective of this article is to extend a blind optimization-based watermarking framework. To begin with, we review relevant state-of-the-art techniques for 3D watermarking in Section II, taking special care to highlight their limitations. We then introduce a mathematical framework in Section III that will serve to formulate our baseline 3D watermarking system as a quadratic programming problem. Section IV details our proposed extensions of this formulation, which allow for using integral centers of mass, arbitrary directions of modification, and multiple distortion metrics to minimize the embedding distortion. Section V reports the performances of several variants of this baseline system through a series of benchmark experiments.

Manuscript received January 2, 2014; revised April 30, 2014 and July 1, 2014; accepted July 1, 2014. Date of publication July 8, 2014; date of current version August 14, 2014. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Patrick Bas. X. Rolland-Nevière and G. Doërr are with the Security and Content Protection Laboratory, Technicolor Research and Development, Cesson-Sévigné 35576, France (e-mail: [email protected]; [email protected]). P. Alliez is with Inria Sophia Antipolis-Méditerranée, Sophia-Antipolis 06902, France (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIFS.2014.2336376

II. PRIOR ART

3D watermarking techniques may be classified into spatial domain-based approaches vs. transform domain-based approaches. Methods in the latter category usually rely on a mechanism that extends the Fourier transform to meshes. These methods exhibit good properties in terms of imperceptibility and robustness against, for instance, noise attacks. However, they also require significant computational power [4] and are sensitive to connectivity-altering distortions such as simplification or remeshing.
These issues are often addressed either by using non-blind watermarking approaches [5] or by partitioning the mesh into canonical patches [6], which in turn raises resynchronization problems. For spatial approaches, the watermark embedding process directly alters the coordinates of the mesh vertices. A common strategy is to watermark the Euclidean distances between the vertices and a reference structure; this distance is often referred to as the vertex norm. A seminal approach, for instance, alters the distances between a local symmetry axis and a subset of the vertices [7]. At decoding, robust registration

1556-6013 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

is necessary to recover the watermarked subset. Moreover, the embedding operation introduces uncontrolled distortion on the reference structure, i.e., the local symmetry axis in this particular example. This raises a causality issue which is likely to hamper decoding performances. A similar algorithm alters the distribution of the distances between all vertices and the center of mass of the mesh [8]. As a result, registration is no longer required and the synchronization between the embedder and the detector is greatly simplified. This algorithm showcases increased robustness and can embed around 75 bits of payload in meshes with a few tens of thousands of vertices. This system is still considered one of the most robust to date [5] and has therefore served as a reference for several follow-up algorithms. Due to the definition of the center of mass, this baseline approach is inherently sensitive to connectivity-altering distortions such as resampling when the surface is non-uniformly sampled. Some authors thus introduced integral quantities, which are less sensitive to sampling defects [9]. First, a more robust definition of the center of mass has been advocated, such as a surface-weighted [10] or a volume-weighted center of mass [11]. Second, the distribution of distances in the baseline algorithm is computed with a non-weighted histogram, in which each vertex contributes equally. Conversely, one may weight each vertex, using for instance the area of a local surrounding patch [12]. Although not presented in such a way, a recent proposal could be seen as watermarking a volume-weighted histogram of the distribution of radial distances [13]. To improve the robustness against cropping and pose attacks, canonical partitioning of the mesh has been explored. However, the robustness then relies on a correct synchronization between the partitions computed at the embedding and detection stages.
Handling this synchronization issue remains an open challenge: the most popular solution is based on a volume-moment-based principal component analysis registration, which is not, for instance, pose invariant. A recent thread of research has revisited the embedding procedure itself to address three weaknesses of the baseline system, namely: (i) the distribution of the norms of the vertices is assumed to be uniform for efficient vertex relocation, (ii) the causality issue is overlooked, and (iii) the imperceptibility of the watermark is not properly addressed. Adding degrees of freedom for the vertex relocation and finding the optimal vertex positions to minimize the perceptual impact of the watermark has been formulated as an optimization problem [9]. Still, the whole watermarking process is then separated into two stages: the first one is almost identical to the baseline algorithm, whereas the second one can be seen as a generic refinement post-process to reduce distortion. As such, the latter part could be advantageously integrated into any watermarking system based on altering the vertex norms to enhance fidelity. This being said, this two-stage strategy operates a non-joint optimization, and there might be alternate solutions that yield better results. To explicitly address the causality issue and provide a more systematic approach, the baseline algorithm has been incorporated within a quadratic programming (QP) framework [14].

TABLE I
STRENGTHS AND WEAKNESSES OF THE ALGORITHMS WATERMARKING THE NORMS OF THE VERTICES

However, this formulation is limited to the discrete center of mass, to radial vertex relocation, and only minimizes the embedding distortion with regard to the squared error metric. In this article, we generalize this formulation to successively lift each one of these three limitations. Table I provides an eagle-eye view of the state of the art by summarizing the strengths and weaknesses of the different methods mentioned in this survey.

III. OPTIMIZATION FRAMEWORK

A. Notations

A triangle surface mesh $M$ is defined by its set of $n_v$ vertices $V$, its set of $n_f$ facets $F$, and its set of edges. The vertex $v_i$ is associated to the point $p_i \in \mathbb{R}^3$, whose Cartesian coordinates are $(x_i, y_i, z_i)$; the matrix $P$ contains the Cartesian coordinates of all vertices. The spherical coordinates of $p_i$ with respect to the center of mass $C$ of the mesh (also referred to as the mesh barycenter) are denoted by $(\rho_i, \theta_i, \phi_i)$. The unit radial and normal vectors for $v_i$ are $\boldsymbol{\rho}_i = Cp_i / \|Cp_i\|$ and $n_i$, where $\|\cdot\|$ is the Euclidean norm.

Let us define the histogram of radial distances $\rho = \{\rho_i, i \in [\![1, n_v]\!]\}$: it has $n_B$ bins, its edges are evenly spaced by a step $s$, $(\rho_{\min}^j, \rho_{\max}^j)$ denotes the boundaries of bin $j$, and $N_j$ is the number of samples within it. All distances $\rho_i$ can be normalized in $[0, 1)$ using the affine transform:

$\tilde{\rho}_i = \frac{\rho_i - \rho_{\min}^{B_i}}{s},$   (1)

where $B_i$ denotes the index of the bin associated to the distance $\rho_i$. Let $m$ denote the watermark payload with $n_b$ bits and $\alpha \in (0, 1/2)$ the watermark embedding strength. A superscript "$w$" indicates a watermarked variable throughout the article. Vectors are written in column layout by convention. For conciseness, sets and vectors are used indifferently, i.e., $\rho$ is a column vector in $\mathbb{R}^{n_v}$. The Jacobian matrix of $X$ with respect to $Y$ is the matrix $J_Y^X \in \mathbb{R}^{|X| \times |Y|}$ whose entry at index $(i, j)$ is defined as $\partial X_i / \partial Y_j$.
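As a concrete illustration of the notation above, the bin index $B_i$ and the normalized distance of Eq. (1) can be computed as follows. This is a minimal sketch, not the authors' code; the function name, the uniform bin edges spanning $[\min\rho, \max\rho]$, and the clamping of the maximum distance into the last bin are our own assumptions.

```python
def normalize_radial_distances(rho, n_bins):
    """Map each radial distance rho_i to its bin index B_i and to the
    normalized distance rho_tilde_i within that bin (Eq. 1).
    Sketch: uniform bin edges between min(rho) and max(rho)."""
    lo, hi = min(rho), max(rho)
    s = (hi - lo) / n_bins                      # bin step
    bins, tilde = [], []
    for r in rho:
        b = min(int((r - lo) / s), n_bins - 1)  # clamp max(rho) into last bin
        bins.append(b)
        tilde.append((r - (lo + b * s)) / s)    # Eq. (1): (rho_i - rho_min^{B_i}) / s
    return bins, tilde, s
```

In the paper the step $s$ is fixed by the histogram construction; here it is simply derived from the range of the input distances.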

B. Optimization Model The description in this section is a general formulation of the state-of-the-art optimization framework instantiated using QP [14]. It formulates watermark embedding as the minimization of a distortion metric (fidelity criterion), subject to the constraints of (i) embedding m in the histogram of ρ (the watermark carrier) and (ii) preserving causality.


1) Cost Function: The cost function $c$ corresponds to a mesh distortion metric. Minimizing $c$ is equivalent to minimizing the embedding distortion with regard to this metric, and corresponds to the fidelity constraint in the watermarking system. To keep the optimization model as general as possible, we define the cost function as the squared norm of a function $\mathbf{f}(\cdot)$ which depends on (i) the watermarked vertex positions, (ii) the initial vertex positions, and (iii) the connectivity of the triangular mesh, i.e., the facets:

$c = \left\| \mathbf{f}(P^w, P, F) \right\|^2.$   (2)

Provided that the distortion does not change the mesh connectivity, this formulation encompasses many distortion metric definitions.

2) Watermark Constraints: The payload $m$ is embedded in the mesh by modulating the average value inside the bins of the histogram of $\rho$. For simplicity, the number of bins of the histogram $n_B$ and the payload length $n_b$ are set equal in this section. To embed the bit $m_j \in \{-1, +1\}$, the average value of $\rho$ inside bin $j$ is raised above the value $\mu^j + s\alpha$ or lowered below $\mu^j - s\alpha$. In the original approach, the radial distances $\rho$ are assumed to be uniformly distributed and $\mu^j$ is therefore placed in the middle of the bin to minimize distortion. Moreover, the alteration of the bin averages relies on a power-like histogram mapping function which is computationally efficient [8]. Although our framework slightly differs, we will use the same setting $\mu^j = \frac{1}{2}\left(\rho_{\max}^j + \rho_{\min}^j\right)$. The watermarking constraint for bin $j$ can then be written:

$\frac{1}{N_j} \sum_{i=1}^{n_v} \rho_i^w \delta_{j,B_i} > \mu^j + s\alpha \quad \text{if } m_j = 1,$
$\frac{1}{N_j} \sum_{i=1}^{n_v} \rho_i^w \delta_{j,B_i} < \mu^j - s\alpha \quad \text{otherwise},$   (3)

where $\delta$ denotes the usual Kronecker delta. Let $W \in \mathbb{R}^{n_b \times n_v}$ denote the matrix whose coefficients $W_{j,i} = m_j \delta_{j,B_i}$ represent the mapping between the mesh vertices and the bins of the histogram, and $T$ the vector defined by the entries $T_j = N_j (m_j \mu^j + s\alpha)$, corresponding to the watermarking targets defined in Eq. (3). The watermark constraints in the general case then reduce to a set of linear inequalities:

$T < W \rho^w.$   (4)
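The per-bin constraints of Eq. (3) can be checked, and the matching blind decision rule evaluated, with a small sketch. In normalized coordinates the constraint reads: the mean of $\tilde{\rho}$ inside bin $j$ must exceed $1/2 + \alpha$ when $m_j = +1$ and fall below $1/2 - \alpha$ when $m_j = -1$; the decoder later compares the same mean to $1/2$. The function names are hypothetical, and for simplicity bin $j$ is assumed to carry bit $j$ (the paper reserves the first and last bins).

```python
def bin_means(tilde, bins, n_bits):
    """Mean normalized radial distance inside each watermarked bin.
    Sketch: bin j is assumed to carry bit j."""
    means = []
    for j in range(n_bits):
        vals = [t for t, b in zip(tilde, bins) if b == j]
        means.append(sum(vals) / len(vals))
    return means

def constraints_satisfied(tilde, bins, payload, alpha):
    """Eq. (3) in normalized coordinates, one boolean per bin."""
    return [m > 0.5 + alpha if bit == 1 else m < 0.5 - alpha
            for m, bit in zip(bin_means(tilde, bins, len(payload)), payload)]

def decode_bits(tilde, bins, n_bits):
    """Blind decision rule: compare each bin mean to 1/2."""
    return [1 if m > 0.5 else -1 for m in bin_means(tilde, bins, n_bits)]
```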

3) Causality Constraints: Watermark detection essentially assumes that the same histogram is reconstructed on the receiver side. The watermarking process should therefore preserve (i) the location of the center of mass, (ii) the mapping between vertices and bins of the histogram, and (iii) the histogram edges. These constraints can be expressed with the following set of equations:

$C^w = C,$   (5)

$\forall i \in [\![1, n_v]\!], \quad B_i^w = B_i,$   (6)

$\min \rho^w = \min \rho, \quad \max \rho^w = \max \rho.$   (7)
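The vertex-to-bin and histogram-edge constraints of Eqs. (6) and (7) can be verified after embedding with a few comparisons; Eq. (5), the barycenter stability, needs the mesh geometry and is checked separately. A sketch with hypothetical function and argument names:

```python
def causality_preserved(bins_before, bins_after, rho_before, rho_after, tol=1e-9):
    """Check Eq. (6): the vertex-to-bin mapping is unchanged, and
    Eq. (7): the extremal radial distances survive embedding. Sketch."""
    if bins_before != bins_after:
        return False
    if abs(min(rho_before) - min(rho_after)) > tol:
        return False
    return abs(max(rho_before) - max(rho_after)) <= tol
```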

C. Quadratic Programming Approach

This general optimization problem has been solved using a QP formulation in the state of the art [14]. The center of mass $C$ is the average of all $p_i$, and vertices are relocated along their radial directions. In this context, the optimization variables are the normalized radial displacements $\Delta\tilde{\rho}_i^w = \tilde{\rho}_i^w - \tilde{\rho}_i$, and the cost function is the Squared Error (SE), computed as the sum of all squared displacements. As a result, the cost and constraint equations become respectively quadratic and linear, and the optimization problem can be solved using efficient large-scale QP solvers [15]. Let $\Delta\tilde{\rho}^w = [\Delta\tilde{\rho}_1^w, \ldots, \Delta\tilde{\rho}_{n_v}^w]^T$ denote the vector of normalized radial displacements. For the SE metric, the cost function in Eq. (2) becomes:

$c = \|\Delta\tilde{\rho}^w\|^2.$   (8)

Similarly, the watermark embedding constraints in Eq. (4) can be rewritten:

$\tilde{T} - W\tilde{\rho} < W \Delta\tilde{\rho}^w,$   (9)

where $\tilde{T}$ is the vector containing the normalized entries $\tilde{T}_j = N_j\left(\frac{1}{2} m_j + \alpha\right)$. The left-hand side of the equation corresponds to the difference between the initial bin averages and the target averages encoding the desired payload. The right-hand side accounts for the variations in the bin averages due to the relocation of the vertices by $\Delta\tilde{\rho}^w$.

Since $C$ is the discrete center of mass, the barycenter stability constraint in Eq. (5) is equivalent to imposing that all radial displacements average to the null vector:

$\sum_{i=1}^{n_v} \Delta\tilde{\rho}_i^w \begin{pmatrix} \cos\theta_i \cos\phi_i \\ \sin\theta_i \cos\phi_i \\ \sin\phi_i \end{pmatrix} = 0.$   (10)

It should be noted that this constraint can be advantageously rewritten using the Jacobian matrix of $p_i = (x_i, y_i, z_i)$ with respect to $\tilde{\rho}_i$:

$J_{\tilde{\rho}_i}^{p_i}(\tilde{\rho}_i) = s \begin{pmatrix} \cos\theta_i \cos\phi_i \\ \sin\theta_i \cos\phi_i \\ \sin\phi_i \end{pmatrix}.$   (11)

Denoting by $J_{\tilde{\rho}}^{P}(\tilde{\rho})$ the $3 \times n_v$ matrix whose $i$-th column is $J_{\tilde{\rho}_i}^{p_i}(\tilde{\rho}_i)$, Eq. (10) indeed becomes equivalent to:

$J_{\tilde{\rho}}^{P}(\tilde{\rho})\, \Delta\tilde{\rho}^w = 0.$   (12)

The stability of the histogram is guaranteed by two additional constraints. First, the first and last bins are not used during embedding, so as to preserve the extremal values of $\rho$ (Eq. (7)). In practice, the number of bins of the histogram is simply set to $n_B = n_b + 2$ to yield a one-to-one mapping between the $n_b$ watermark payload bits and the remaining bins. Second, to keep the vertex-bin mapping unaltered (Eq. (6)), the optimization variable $\Delta\tilde{\rho}^w$ is bounded using a bin separation offset $G$:

$\forall i \in [\![1, n_v]\!],\ B_i \notin \{1, n_B\}: \quad G - \tilde{\rho}_i \le \Delta\tilde{\rho}_i^w < 1 - G - \tilde{\rho}_i.$   (13)
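Under radial relocation, the discrete-barycenter constraint of Eq. (10) simply states that the displacements, projected on the unit radial directions, sum to the null vector. A quick numerical check of this condition (function name and inputs are hypothetical):

```python
import math

def barycenter_drift(deltas, thetas, phis):
    """Evaluate the left-hand side of Eq. (10): the sum of the radial
    displacements projected on the unit radial vectors. A zero 3-vector
    means the discrete barycenter is preserved. Sketch only."""
    dx = dy = dz = 0.0
    for d, th, ph in zip(deltas, thetas, phis):
        dx += d * math.cos(th) * math.cos(ph)
        dy += d * math.sin(th) * math.cos(ph)
        dz += d * math.sin(ph)
    return (dx, dy, dz)
```

For instance, moving two diametrically opposed vertices outward by the same amount leaves the barycenter unchanged, while moving a single vertex does not.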


Finally, the solution returned by the QP solver is used to build the watermarked mesh:

$\forall i \in [\![1, n_v]\!], \quad p_i^w = p_i + s\, \Delta\tilde{\rho}_i^w\, \boldsymbol{\rho}_i.$   (14)

On the receiver side, payload extraction reduces to constructing the histogram of the distances $\rho$ and comparing the normalized average inside each bin to the value 0.5. In other words, denoting by $\bar{\rho}^j$ the normalized average inside bin $j$, the estimated bit is given by $\hat{w}_j = \mathrm{sign}(\bar{\rho}^j - 1/2)$.

IV. EXTENSIONS OF THE QP FRAMEWORK

A. Integral Formulation of the Stability Constraints

The discrete center of mass used in the original approach is, by definition, not robust to non-uniform and/or anisotropic remeshing. This limitation calls for incorporating more integral formulations of the center of mass into the framework in an attempt to improve robustness. However, integral formulations involve non-linear and neighborhood-dependent weighting functions. As a result, Eqs. (5) and (12) are no longer equivalent, and the mathematical model cannot be formulated as a QP problem.

1) Derivation of the Stability Constraint: Given a per-facet weight and center $(w(f), C(f)) \in \mathbb{R}^+ \times \mathbb{R}^3$, an integral center of mass is defined, in general, as a weighted sum over all facets of a 3D mesh¹:

$C = \frac{1}{w_0} \sum_{f \in F} w(f)\, C(f),$   (15)

where $w_0 = \sum_{f \in F} w(f)$ is a normalization factor. The $i$-th column of the Jacobian matrix $J_{\tilde{\rho}}^{C}(\tilde{\rho})$ is:

$J_{\tilde{\rho}_i}^{C}(\tilde{\rho}_i) = J_{p_i}^{C}(p_i)\, J_{\tilde{\rho}_i}^{p_i}(\tilde{\rho}_i).$   (16)

For the discrete barycenter, $J_{p_i}^{C}(p_i)$ simplifies to $\frac{1}{3} I_3$, which subsequently leads to Eq. (12). In the general case, assuming that $C(f)$ and $w(f)$ only depend on the vertices of the facet $f$, $J_{p_i}^{C}(p_i)$ is written:

$J_{p_i}^{C}(p_i) = \frac{1}{w_0} \left( \sum_{f \in N_1(v_i)} [C(f) - C]\, \frac{\partial w(f)}{\partial p_i}(p_i) + \sum_{f \in N_1(v_i)} w(f)\, \frac{\partial C(f)}{\partial p_i}(p_i) \right),$   (17)

where $N_1(v_i)$ is the 1-ring neighborhood around $v_i$. With a first-order development, the barycenter stability constraint of Eq. (5) can be linearized as:

$J_{\tilde{\rho}}^{C}(\tilde{\rho})\, \Delta\tilde{\rho}^w = 0,$   (18)

where the $i$-th column of the matrix $J_{\tilde{\rho}}^{C}(\tilde{\rho})$ is given by:

$J_{\tilde{\rho}_i}^{C}(\tilde{\rho}_i) = \frac{1}{w_0} \left( \sum_{f \in N_1(v_i)} [C(f) - C]\, \frac{\partial w(f)}{\partial p_i}(p_i) + \sum_{f \in N_1(v_i)} w(f)\, \frac{\partial C(f)}{\partial p_i}(p_i) \right) J_{\tilde{\rho}_i}^{p_i}(\tilde{\rho}_i).$   (19)

¹Alternate formulations use a sum over the vertices, for which equations similar to the ones in this section can be derived.

Eq. (18) provides a generalization of the center of mass stability constraint that is still linear in the variable $\Delta\tilde{\rho}^w$. It grants the flexibility to use more integral barycenter definitions without losing the benefits of the QP formulation.

2) Surface-Weighted Barycenter: Let $(p_0^f, p_1^f, p_2^f)$ denote the vertex locations in facet $f$. The surface weights are defined by:

$w(f) = \frac{1}{2} \left\| (p_1^f - p_0^f) \times (p_2^f - p_0^f) \right\|.$   (20)

The facet center and its partial derivatives are given by:

$C(f) = \frac{1}{3}\left(p_0^f + p_1^f + p_2^f\right),$   (21)

$\frac{\partial C(f)}{\partial p_i^f}\left(p_i^f\right) = \frac{1}{3} I_3.$   (22)

The gradient of the weights is thus given by:

$\frac{\partial w(f)}{\partial p_i^f}\left(p_i^f\right) = \frac{1}{2}\left[\left(p_{i+2 \bmod 3}^f - p_{i+1 \bmod 3}^f\right)^{\perp}\right]^T,$   (23)

where $\perp$ denotes a $\pi/2$ counter-clockwise rotation in the triangle plane.

3) Volume-Weighted Barycenter: Let $O$ represent an arbitrary reference point, in practice the origin of the coordinate system. The facet $f$ is associated to the tetrahedron $(O, p_0^f, p_1^f, p_2^f)$ and is assigned a weight $w(f)$ equal to its signed volume:

$w(f) = \frac{1}{6} \det\left(p_0^f, p_1^f, p_2^f\right).$   (24)

The facet center and its partial derivatives are given by:

$C(f) = \frac{1}{4}\left(p_0^f + p_1^f + p_2^f\right),$   (25)

$\frac{\partial C(f)}{\partial p_i^f}\left(p_i^f\right) = \frac{1}{4} I_3.$   (26)

The gradient of the weights is thus given by:

$\frac{\partial w(f)}{\partial p_i^f}\left(p_i^f\right) = \frac{1}{6}\left[p_{i+1 \bmod 3}^f \times p_{i+2 \bmod 3}^f\right]^T.$   (27)
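Both integral barycenters of Eq. (15) can be sketched in a few lines. This is an illustration under our own naming, not the authors' implementation; for the volume weighting, the mesh is assumed closed and consistently oriented, so that the signed tetrahedron volumes of Eq. (24) sum to the enclosed volume.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def weighted_barycenter(points, facets, mode="surface"):
    """Integral center of mass of Eq. (15). 'surface' uses the facet area
    (Eq. 20) and facet centroid (Eq. 21); 'volume' uses the signed volume
    of the tetrahedron (O, p0, p1, p2) w.r.t. the origin (Eq. 24) and its
    centroid (Eq. 25). Minimal sketch."""
    wc = [0.0, 0.0, 0.0]
    w0 = 0.0
    for (i, j, k) in facets:
        p0, p1, p2 = points[i], points[j], points[k]
        if mode == "surface":
            e1 = tuple(p1[d] - p0[d] for d in range(3))
            e2 = tuple(p2[d] - p0[d] for d in range(3))
            n = cross(e1, e2)
            w = 0.5 * math.sqrt(sum(x * x for x in n))      # Eq. (20)
            c = tuple((p0[d] + p1[d] + p2[d]) / 3.0 for d in range(3))
        else:
            n = cross(p1, p2)
            w = sum(p0[d] * n[d] for d in range(3)) / 6.0   # Eq. (24)
            c = tuple((p0[d] + p1[d] + p2[d]) / 4.0 for d in range(3))
        w0 += w
        for d in range(3):
            wc[d] += w * c[d]
    return tuple(x / w0 for x in wc)
```

For a single triangle, the surface-weighted barycenter is its centroid; for a closed tetrahedron, the volume-weighted barycenter is the average of its four vertices.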

B. Generalization of the Directions of Alteration

In the original framework, the displacements of the vertices are restricted to the radial directions. When $\boldsymbol{\rho}_i \cdot n_i \approx 1$, the watermark effectively alters the geometry of the surface by relocating vertices along the normal direction. In contrast, when the radial direction lies within the tangent plane (i.e., when $\boldsymbol{\rho}_i \cdot n_i \approx 0$), the embedding may be ineffective. For instance, in the case of coplanar vertices, points are only moved on the surface of the object, i.e., no geometric change is introduced. While such an alteration may yield very low distortion according to most mesh distortion metrics, it also produces weak watermarks that could be wiped out, e.g., after resampling. In summary, the robustness versus imperceptibility trade-off is affected by the selected direction of alteration. To leverage this degree of freedom, and also provide greater


flexibility, the optimization variables (modified during embedding) and the radial distances (carrying the watermark) are dissociated. This amounts to defining a vector field $u_i$ that is used instead of $\boldsymbol{\rho}_i$ to displace the vertices. The optimization variable $r_i^w$ accounts for the signed displacement of $p_i$ along the preset direction $u_i$.

1) Modifications to the QP Framework: This change of strategy translates into a number of modifications in the formulation of the watermarking process. The cost function is barely modified: $r^w$ is simply substituted for $\Delta\tilde{\rho}^w$ in Eq. (8). To build the watermarked mesh, Eq. (14) is updated to:

$\forall i \in [\![1, n_v]\!], \quad p_i^w = p_i + r_i^w u_i.$   (28)

Using $\cos\psi_i = \boldsymbol{\rho}_i \cdot u_i$, the second-order expansion of the radial distance $\rho_i^w$ is given by:

$\rho_i^w = \rho_i + r_i^w \cos\psi_i + \frac{(r_i^w)^2 \sin^2\psi_i}{2\rho_i} + o\!\left((r_i^w)^2\right).$   (29)
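The accuracy of the expansion in Eq. (29) can be verified numerically by comparing it with the exact radial distance after a displacement. This is a sketch with hypothetical function names; for a purely tangential move ($\cos\psi_i = 0$) the first-order term vanishes and the second-order term dominates.

```python
import math

def radial_exact(p, c, u, r):
    """Exact radial distance from center c after moving p by r along the
    unit direction u."""
    q = [p[d] + r * u[d] for d in range(3)]
    return math.dist(q, c)

def radial_second_order(rho, r, cos_psi):
    """Second-order expansion of Eq. (29) of the same quantity."""
    sin2 = 1.0 - cos_psi * cos_psi
    return rho + r * cos_psi + (r * r * sin2) / (2.0 * rho)
```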

In the case of radial embedding ($\psi_i = 0$), the terms above the first order vanish and the variables $\Delta\tilde{\rho}_i^w$ and $r_i^w$ are equal (up to the bin scale $s$). The watermark embedding constraints (Eq. (9)) are now given by:

$T - W\rho < W \Lambda\, r^w,$   (30)

where $\Lambda$ denotes the diagonal matrix of the $\cos\psi_i$. Intuitively, this matrix indicates how much of the relocation distortion is actually used to reach the watermarking target. Obtaining the constraint on the barycenter is a matter of substituting $J_r^C(r)$ for $J_{\tilde{\rho}}^{C}(\tilde{\rho})$ in Eq. (18), yielding:

$J_r^C(0)\, r^w = 0.$   (31)

Applying the chain rule, the $i$-th column of $J_r^C(0)$ is $J_{p_i}^{C}(p_i)\, u_i$, where the first term is given in Eq. (17). Let us define:

$\gamma_i = \frac{1}{\cos\psi_i} \begin{pmatrix} s(1-G) + \rho_{\min}^{B_i} - \rho_i \\ sG + \rho_{\min}^{B_i} - \rho_i \end{pmatrix}.$   (32)

Plugging the linearization into the histogram stability constraint (Eq. (13)) then gives:

$\forall i \in [\![1, n_v]\!],\ B_i \notin \{1, n_B\}: \quad \min(\gamma_i) \le r_i^w \le \max(\gamma_i).$   (33)

Geometrically speaking, these linearized constraints approximate the spherical bin boundaries as tangent lines (cf. Fig. 5 in the Appendix). The smaller $|\cos\psi_i|$ is, the larger the approximation error, as indicated by Eq. (29). Ill-defined numerical cases occur for $|\cos\psi_i| \approx 0$. This being said, boundaries can be precomputed more accurately, since they correspond to intersections between spheres (bin boundaries) and lines (relocation directions), which are fixed throughout the optimization process. The mathematical derivations and the algorithm to compute accurate boundaries without ill-defined numerical cases are provided in the Appendix. The constraints on the histogram are thus the only ones which are not approximated when extending the QP framework to non-radial relocation directions.


2) Alteration Vector Fields: As mentioned earlier, using the normal directions as the alteration vector field, i.e., $\forall i \in [\![1, n_v]\!], u_i = n_i$, may provide additional robustness but is also likely to incur prohibitive embedding distortion. As a result, it may not provide a better robustness versus fidelity trade-off. The new flexibility with respect to the alteration vector field should rather be regarded as a means to perceptually shape the watermark, i.e., to adjust the 'direction' of the embedded watermark according to the local properties of the content. For instance, prior work clearly highlighted that alterations in rough areas of the mesh are significantly less noticeable than in smooth regions [16]. To leverage this masking effect, the alteration vector field may favor radial alterations in rough areas, while limiting displacements to the tangent plane, i.e., $u_i = \boldsymbol{\rho}_i - (\boldsymbol{\rho}_i \cdot n_i) n_i$, in smooth regions, in an attempt to mitigate distortion.

C. Alternate Minimization Strategies

In the original QP framework, the embedding distortion is minimized with regard to the SE metric, which gives the same weight to all alterations over the mesh. Despite its simplicity, this metric is known to be only mildly correlated with the distortion perceived by human observers [16]. In line with related works for other types of content, this limitation calls for incorporating perceptual metrics into the optimization framework. In contrast with the perceptual shaping mechanism presented earlier, the objective here is not to modify the direction of alteration but rather to adjust the magnitude of the displacement along the predefined direction according to some local characteristic. For instance, adapting the magnitude of the embedding alteration using the masking effect can indeed yield significant improvements in 3D watermarking [17].

Various 3D metrics have been investigated, and the ones showcasing the highest correlation with perceived distortion are based on a multi-scale analysis of the roughness [18] or on the mesh curvatures [19]. Unfortunately, these quantities are highly non-linear and cannot be readily plugged into the QP framework. Still, a few existing metrics achieve better results than the SE metric and can be expressed as quadratic functions of the vertex displacements $r^w$. For instance, the Quadric Error Metric (QEM) [20] has been shown to improve the control over the embedding distortion in 3D watermarking [9]. In this case, the cost function becomes:

$c = \lambda \underbrace{\|r^w\|^2}_{\text{SE}} + (1-\lambda) \underbrace{\left(Q r^w\right)^T Q r^w}_{\text{QEM}},$   (34)

where $\lambda \in [0, 1]$ is a mixing parameter used to trade SE for QEM. The matrix $Q \in \mathbb{R}^{n_v \times n_v}$ is diagonal and its $i$-th entry is the sum of the projections of the relocation direction $u_i$ onto the normals $n_f$ of the facets around vertex $v_i$, i.e., $\sum_{f \in N_1(v_i)} u_i \cdot n_f$. Intuitively, the QEM focuses on the distortion along the normal direction, since alterations in the tangent plane are less noticeable.

Alternatively, following a thread of research in mesh compression, the local roughness can be assimilated to the difference $d_i$ between a vertex position and its position after smoothing using the Laplacian matrix $L$ [21]. Based on this


TABLE II
DATABASE OF 3D MODELS USED FOR BENCHMARKING AND THEIR CHARACTERISTICS

rationale, it is possible to define a distortion metric that sums the squared magnitude of the difference in local roughness $\|d_i^w - d_i\|^2$ over all vertices. Again, this Laplacian-based metric is usually combined with the SE by means of a mixing parameter $\lambda \in [0, 1]$. Several discretizations of the Laplacian matrix have been proposed, but only the ones based solely on the connectivity of the mesh (referred to as the graph Laplacian) can be integrated into the QP framework without further approximations. In this case, the cost function is a quadratic function of the optimization variables, written as:

$c = \lambda \|r^w\|^2 + (1-\lambda) \sum_{i=1}^{n_v} \left\| r_i^w u_i - \frac{1}{|N_1(v_i)|} \sum_{v_j \in N_1(v_i)} r_j^w u_j \right\|^2.$   (35)
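The mixed cost of Eq. (35) is straightforward to evaluate for a given candidate displacement. The sketch below uses our own function name and represents the 1-ring neighborhoods as index lists; when every vertex moves like its neighbors, the graph-Laplacian term vanishes and only the SE term remains.

```python
def laplacian_roughness_cost(deltas, directions, neighbors, lam):
    """Mixed SE / graph-Laplacian cost of Eq. (35): lam weights the squared
    displacements, (1 - lam) weights the squared difference between each
    vertex displacement vector and the average over its 1-ring. Sketch."""
    se = sum(d * d for d in deltas)
    rough = 0.0
    for i, nbrs in enumerate(neighbors):
        avg = [sum(deltas[j] * directions[j][d] for j in nbrs) / len(nbrs)
               for d in range(3)]
        diff = [deltas[i] * directions[i][d] - avg[d] for d in range(3)]
        rough += sum(x * x for x in diff)
    return lam * se + (1.0 - lam) * rough
```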

D. Implementation Details

These extensions of the QP framework have been implemented in MATLAB [15]. Since most of the constraints are approximations (through, e.g., linearization), the solution found by the solver is no longer exact: the mathematical model and the practical implementation are not equivalent. To guarantee watermark efficiency, it is therefore necessary to perform the embedding procedure iteratively. After each embedding iteration, the payload is decoded and the bit error rate (BER) is measured. This process continues until the BER reaches 0 or the number of iterations reaches a maximal value, set to 10 by default. In practice, we observed during our experiments that at most two iterations were needed to achieve embedding.

V. EXPERIMENTAL RESULTS

A. General Setup

Several variants of the proposed watermarking framework are evaluated and compared to the original QP approach. Each variant corresponds to a combination of: (i) a center of mass, either discrete (D), surface-weighted (S) or volume-weighted (V); (ii) a direction of relocation, either radial (R), normal (N) or a roughness-adapted direction (S); and (iii) a cost function, either the SE (S), the QEM (Q) or the Laplacian-based one (L). As a result, each variant can be designated by three letters, e.g., DRS for the baseline method.

In our experiments, we considered a database of thirteen 3D models, detailed in Table II, that provides a representative diversity of shapes. The payload size has been arbitrarily set to n_b = 64 bits and the embedding distortion has been calibrated to guarantee a fair comparison between the different variants. In line with benchmarking recommendations, the

embedding strength α is adjusted for each method to calibrate distortion using a combination of both a fully objective and a perceptually correlated metric [22]. The symmetric Hausdorff distance is a popular objective metric in the watermarking community. However, its computation time is rather prohibitive and precludes large-scale benchmarking campaigns. Since the Root Mean Square (RMS) error showcases a similar correlation with the perceived distortion, we use it to assess the objective geometric embedding distortion and express its result as a percentage of the length of the space diagonal of the bounding box. The Mesh Structural Distortion Measure (MSDM) [23] is reported to provide superior perceptually correlated distortion estimations compared to other metrics [16]. Its extension to a multi-scale approach yields even better results, but at the cost of a large increase in complexity and computation time [19]. In our experiments, we use the MSDM to assess the perceptual embedding distortion; its values lie within [0, 1]. For each benchmarked variant, the embedding strength α is set so that the RMS and the MSDM are respectively lower than or equal to 0.04% and 0.25 for calibration purposes.

B. Robustness Against Attacks

3D attacks are commonly grouped into three clusters: mesh-preserving attacks, connectivity-preserving attacks and connectivity-altering attacks. For each attack, the experimental procedure consists of: (i) generating six random payloads; (ii) watermarking all models once with each payload; (iii) creating ten attacked versions of the watermarked meshes (when an attack yields non-deterministic output); and (iv) detecting the payload from the resulting attacked meshes. For each variant in the framework, and for a given attack strength, the reported BER is the median value over the 780 detection trials resulting from this procedure.
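The reported statistic can be reproduced as follows (a sketch with hypothetical inputs): each detection trial compares the decoded bits with the embedded payload, and the median BER is then taken over all trials.

```python
import statistics

def median_ber(trial_bits, payloads):
    """Median bit error rate over a set of detection trials: each trial
    yields a list of decoded bits compared against the embedded payload.
    Sketch with hypothetical argument names."""
    bers = [sum(a != b for a, b in zip(dec, pay)) / len(pay)
            for dec, pay in zip(trial_bits, payloads)]
    return statistics.median(bers)
```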
1) Mesh-Preserving Attacks: Two types of processes fall within this category: file attacks and similarity transformations. The former simply reorders the vertices and the facets, i.e., it changes the digital file representation of the mesh but not the mesh itself. The latter attack is simulated by applying combinations of a random rotation, translation and scaling to a mesh. Methods based on a discrete or surface-weighted barycenter are provably invariant to both types of attacks, which is confirmed experimentally. When the volume-weighted barycenter is well-defined, i.e., when the watermarked mesh is watertight and without self-intersections, methods based on this center of mass are also provably invariant to these attacks. In real life, however, 3D meshes, including the ones in our database, do present self-intersections and degeneracies.
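The invariance of the carrier to similarity transformations can be checked numerically: a uniform scaling, rotation and translation scale all radial distances to the discrete barycenter by the same factor, leaving the normalized histogram unchanged. A sketch under our own naming, using a rotation about the z-axis for simplicity:

```python
import math

def radial_distances(points):
    """Distances to the discrete barycenter (average of the vertices)."""
    n = len(points)
    c = [sum(p[d] for p in points) / n for d in range(3)]
    return [math.dist(p, c) for p in points]

def similarity(points, scale, angle, t):
    """Uniform scale, z-rotation, translation: a similarity transform."""
    ca, sa = math.cos(angle), math.sin(angle)
    out = []
    for x, y, z in points:
        x, y = ca * x - sa * y, sa * x + ca * y
        out.append((scale * x + t[0], scale * y + t[1], scale * z + t[2]))
    return out
```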

ROLLAND-NEVIÈRE et al.: TRIANGLE SURFACE MESH WATERMARKING

1497

Fig. 1. Average robustness results. Median BER in five attack scenarios over the thirteen models in the database for three watermarking variants within the framework: the baseline one (DRS), and the ones based on the surface (SRS) and volume-weighted (VRS) extensions. (a) Noise. (b) Quantization. (c) Smoothing. (d) Triangle Soup. (e) Simplification.

While these degeneracies could theoretically yield decoding errors, we have not observed any in our experiments. In other words, the uncertainty that these degeneracies introduce in the estimation process is not large enough to trigger a decoding error. A solution to this theoretical problem is to preprocess the input mesh to create a watertight 2-manifold mesh [13]. Since the uncertainty on the position of the volume-weighted center of mass is not significant, we overlook this issue in the remainder of the article. Moreover, it should be noted that creating a watertight 2-manifold mesh from an arbitrary input remains an open issue in 3D mesh processing [3].

2) Connectivity-Preserving Attacks: Three different connectivity-preserving attacks are simulated in this benchmark. The first attack adds random noise to the coordinates of every vertex. The noise is uniformly generated in [−A, A], where A is expressed as a ratio of the length of the space diagonal of the mesh bounding box. Hence, A represents the attack strength, which is independent of the scale of the model. Second, a quantization of the vertex locations is benchmarked, with each coordinate rounded onto a 2^b grid, where the number of bits b corresponds to the attack strength. Finally, watermarked meshes are smoothed using the Laplacian smoothing technique with deformation factor λ = 0.3; in this case, the attack strength is the number of smoothing iterations.

3) Connectivity-Altering Attacks: These attacks are among the toughest challenges in 3D watermarking, as they often create synchronization issues in addition to the more straightforward problem of the robustness of the watermarked primitive. In this benchmark, we consider four types of alterations.
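The three connectivity-preserving attacks can be sketched as follows. These are our own illustrative implementations, not the benchmark's code; in particular, the adjacency representation (a list of neighbor index lists) and the grid convention in `quantize` are assumptions.

```python
import numpy as np

def add_uniform_noise(v, A, rng):
    """Uniform noise in [-A*d, A*d] per coordinate, with d the length of
    the bounding-box space diagonal (scale-independent strength A)."""
    d = np.linalg.norm(v.max(axis=0) - v.min(axis=0))
    return v + rng.uniform(-A * d, A * d, size=v.shape)

def quantize(v, b):
    """Round each coordinate onto a uniform 2**b grid spanning the
    bounding box (assumes a non-degenerate range on every axis)."""
    lo, hi = v.min(axis=0), v.max(axis=0)
    step = (hi - lo) / (2 ** b - 1)
    return lo + np.round((v - lo) / step) * step

def laplacian_smooth(v, adjacency, lam=0.3, iters=1):
    """Laplacian smoothing: move every vertex toward the average of its
    neighbors by a factor lam (adjacency: list of neighbor index lists)."""
    v = v.copy()
    for _ in range(iters):
        means = np.array([v[n].mean(axis=0) for n in adjacency])
        v += lam * (means - v)
    return v
```

In the benchmark above, A and b play the role of the attack strengths, while λ stays fixed at 0.3 and the number of iterations varies.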

The first attack, referred to as the triangle soup, consists of: (i) disconnecting all facets; (ii) modifying all edge lengths according to a predefined ratio (thereby creating holes and overlaps); and (iii) adding uniform random noise to all vertices. In this benchmark, the attack strength is parameterized only by the ratio used to alter the edge lengths; the strength of the uniform noise is kept constant at 0.01% of the space diagonal of the bounding box. Mesh simplification is a common operation in the 3D rendering pipeline: highly detailed meshes often need to be simplified to reach a level of detail suitable for hardware with limited capabilities. Increasingly simplified versions of the watermarked meshes are thus generated [24]. Conversely, the complexity of a mesh can also be increased; a refinement attack is simulated by applying one iteration of Loop subdivision. Finally, the cropping attack, which deletes an approximate ratio of the input vertices, is considered the most damaging attack for algorithms based on the mesh barycenter.

C. Robustness of the Integral Centers of Mass

In a first round of experiments, the baseline method (DRS) is only compared with the variants resulting from using a surface-weighted (SRS) or a volume-weighted (VRS) barycenter, without the extension on the relocation directions or the alternate cost functions. The computation times, averaged over all the meshes in the database, are 9 s, 11 s, and 9 s, respectively, for the embedding, and about 0.1 s in all cases for the decoding. The optimization process takes around 80% of the overall embedding time. The benchmarking results are summarized in Fig. 1. The simplification attack is the one against which the integral


Fig. 2. Average robustness results. Median BER for the thirteen models in the database for different attacks and several variants of the QP watermarking framework. Apart from using the volume-weighted barycenter, each variant differs from the baseline in a single aspect: (i) VRQ is based on a minimization with respect to the QEM; (ii) VRL is based on a minimization with respect to the Laplacian-based metric; (iii) VNS uses relocation directions set to the mesh normal; and (iv) VSS uses relocation directions taking into account the local roughness. (a) Noise. (b) Quantization. (c) Smoothing. (d) Triangle Soup. (e) Simplification.

TABLE III
MEDIAN BER AGAINST LOOP SUBDIVISION (1 ITERATION)

formulations are expected to outperform the baseline method, as showcased in Fig. 1(e). Fig. 1(a) illustrates the performance against noise addition and suggests that SRS is less robust than the other methods at large attack levels. This observation indicates that surface-weighted quantities may be more sensitive to noise attacks than discrete quantities (see [13, Table I]). The results against the quantization attack in Fig. 1(b) confirm this phenomenon. Against the smoothing, triangle soup, and refinement attacks, all variants perform almost equally, as indicated in Fig. 1(c) and 1(d) and Table III. Finally, our experiments show that all methods fail against the cropping attack: the BER reaches 40% for a cropping ratio of less than 0.01%. The remaining extensions are next compared to VRS, since it achieves better results than the baseline against simplification attacks, and slightly better results in the other cases.

D. Robustness of the Other Extensions

To evaluate the improvement resulting from altering the cost function, two variants labeled VRQ and VRL are tested. In VRQ, the cost function is based on Eq. (34) with a mixing parameter λ set to 0.5; in VRL, the cost function is defined by Eq. (35), also with λ = 0.5. These heuristic values have been found to yield the best performances

in empirical tests, although other values achieve very similar performance. While the computational overhead of these cost functions is marginal (less than a 0.02 s increase on average), the optimization process exhibits a significant slowdown for VRL and a marginal one for VRQ: the average computation times are 39 s and 11 s, respectively, compared to 9 s for VRS. Fig. 2 depicts the robustness of these variants against several attacks and Table III reports the robustness against the subdivision attack. In summary, VRL (dotted green) consistently outperforms VRS (solid blue); the robustness gain is particularly notable for the refinement attack. In contrast, the incorporation of the QEM metric in VRQ (dotted red) seems counter-productive, with robustness performances slightly poorer than VRS in some cases. This may be due to the cost function being less aligned with the distortion metrics used for fidelity calibration.

The last round of experiments is dedicated to the use of alternate alteration vector fields. Fig. 2 clearly highlights that using the normal direction is not a good idea. The MSDM used in the calibration process is highly sensitive to displacements along the normal and therefore yields an embedding strength α five times smaller than for the other methods, e.g., 0.01 for VNS vs. 0.05 for VRS. In other words, the normal vector field does not offer a better fidelity-robustness trade-off. To implement the variant VSS, we rely on a local roughness estimate [25]. In practice, the local roughness is post-processed to discard outliers and then rescale all estimates to [0, 1]. The resulting scalar field is close to 1 in smooth areas and almost null in regions with many details. To identify the smooth regions where the relocation directions should lie within the


tangent plane, a threshold is set to the smallest value between the eighth decile of the scalar field and 0.8. This accounts for objects such as the Fandisk that have a local roughness estimate close to 1 in most places. Computing this local roughness takes 30 s on average, thereby increasing the average embedding time to 40 s for VSS. Fig. 2 shows that the robustness of this variant is very similar to that of VRS, and even slightly better at small and moderate attack levels in the noise addition and simplification cases.

E. In-Depth Fidelity Assessment

Since the added value of the proposed extensions is not fully showcased in the robustness evaluation, we took a deeper look at the fidelity of the different variants under analysis. For instance, Fig. 3 depicts the cumulative distribution function of the contribution of each vertex to the MSDM metric for the two variants VRS and VRL on the Dragon model. While both watermarked objects yield very similar global MSDM measurements, they exhibit very different behaviors at a local level. Incorporating the Laplacian component in the cost function indeed appears to produce fewer large-valued local MSDM contributions that may yield noticeable artifacts.

Fig. 4 then illustrates the impact of relaxing the constraint on the direction of alteration for the mechanical object Fandisk. This 3D model is characterized by the presence of large planar surfaces. As a result, when the embedding process is limited to the radial direction (VRS), fidelity collapses very quickly when increasing the embedding strength α. Moreover, typical ring-like artifacts are produced at the surface of the mesh. By forbidding displacements outside the tangent plane in smooth areas (VSS), and in particular in planar regions, this ring effect disappears. However, the flip side at the same embedding strength is a loss of robustness, e.g., against remeshing attacks, since the watermark is not actually burnt into the geometry. This being said, as reported earlier, both VRS and VSS exhibit comparable robustness when the embedding distortion is calibrated.

Fig. 3. Cumulative distribution function of the local MSDM for the Dragon mesh when using VRS and VRL.

Fig. 4. Embedding distortion for the Fandisk when using VRS and VSS as a function of the embedding strength α. The mesh close-ups correspond to an embedding strength α = 0.05.

VI. CONCLUSIONS AND FUTURE WORK

In this article, we generalized a previous framework for 3D mesh watermarking in which the embedding process is formulated as a quadratic programming (QP) problem. More specifically, we described three extensions to the baseline system: (i) a revision of the mathematical framework to support integral definitions of the center of mass of a mesh; (ii) a relaxation of the constraint on the direction of alteration to allow displacements deviating from the radial direction; and (iii) the integration of perceptual components into the cost function to better account for human perception during the minimization process. The resulting flexibility allows various combinations of the different components, and a number of variants have been evaluated through an extensive benchmarking campaign. The reported experimental results demonstrate the potential added value of these modifications with respect to the traditional fidelity-robustness trade-off.

These new degrees of freedom introduce interesting research avenues. For instance, the perceptual components incorporated into the cost function are constrained by the QP framework: they have to be quadratic in the unknowns. Ideally, one would rather use well-established 3D distortion metrics. While this article focused on approximating these perceptual components to fit them into the QP framework, alternate options exist. First, these extensions usually involve weights computed from the original mesh and used throughout the optimization process; it may be advantageous to update these weights at each iteration of an instrumented solver. Second, it may be worth dropping the QP framework altogether and investigating whether alternate solvers could provide better results. The degree of freedom related to the directions of alteration opens up a new line of research on its own.
While we exemplified the potential benefit of this modification using a toy example, it comes with its own shortcomings and is definitely not optimal. This raises a fundamental question: which alteration vector field optimizes the watermarking fidelity-robustness trade-off? For instance, it may be tempting to consider the vector field corresponding to the direction of smallest distortion gradient. Nevertheless, regardless of the difficulty of deriving or computing such a direction, it is likely to produce displacements close to the tangent plane, and therefore less robust ones. In future work, we will also investigate issues that have not been discussed, such as the robustness against cropping and


Algorithm 1 Boundary constraints approximation

 1: procedure BoundaryConstraints(radius ρ; upper-bound U; lower-bound L; projection γ)
 2:   if U + γ² < 0 then
 3:     return Boundaries B = ∅                          ▷ C1
 4:   end if
 5:   (S1, S2) ← (−γ − √(γ² + U), −γ + √(γ² + U))
 6:   if L + γ² < 0 then
 7:     B ← [S1, S2]                                     ▷ C2
 8:   else
 9:     (S3, S4) ← (−γ − √(γ² + L), −γ + √(γ² + L))
10:     if S1·S3 ≤ 0 then
11:       B ← [S1, S3]                                   ▷ C3
12:     else if S4·S2 ≤ 0 then
13:       B ← [S4, S2]                                   ▷ idem
14:     else if |S3| < |S4| then
15:       B ← [S1, S3]                                   ▷ C4
16:     else
17:       B ← [S4, S2]                                   ▷ idem
18:     end if
19:   end if
20:   return Boundaries B
21: end procedure
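Algorithm 1 can be transliterated almost line for line. The sketch below is our own rendering: it returns the approximated single interval B (or None for the empty case C1), and the unused radius argument ρ of the printed procedure is omitted.

```python
import math

def boundary_constraints(U, L, gamma):
    """Sketch of Algorithm 1: approximate the pair of constraints
    r^2 + 2*gamma*r - L > 0 and r^2 + 2*gamma*r - U < 0 by a single
    interval B for the watermarked radial coordinate r."""
    if U + gamma ** 2 < 0:                     # C1: inequality (37) unsolvable
        return None
    s1 = -gamma - math.sqrt(gamma ** 2 + U)
    s2 = -gamma + math.sqrt(gamma ** 2 + U)
    if L + gamma ** 2 < 0:                     # C2: (36) always holds
        return (s1, s2)
    s3 = -gamma - math.sqrt(gamma ** 2 + L)
    s4 = -gamma + math.sqrt(gamma ** 2 + L)
    if s1 * s3 <= 0:                           # C3: r = 0 lies in left segment
        return (s1, s3)
    if s4 * s2 <= 0:                           # C3 (symmetric): right segment
        return (s4, s2)
    # C4: neither segment contains r = 0; pick the closest one
    return (s1, s3) if abs(s3) < abs(s4) else (s4, s2)
```

The exact feasible set is [S1, S3] ∪ [S4, S2]; the approximation simply discards the segment farther from the current position.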

isometric deformations, a.k.a. pose. 3D watermarking systems relying on the modulation of the radial distances share a weakness against these two attacks, due to the loss of vertices and the inability to recover the center of mass. A potential solution would be to adapt the presented framework to support other watermarking carriers, e.g., geodesic distances [26]. Another critical issue relates to the security of this family of watermarking schemes. The embedding alters the distribution of radial distances and may introduce non-natural features that could be exploited by an adversary. A handful of works have tackled this issue [9], [27] but they remain rather primitive in view of the maturity achieved for other types of content [28], [29]. Our observations suggest that more damaging attacks could be constructed even if some security mechanisms (dead zone, spread transform) are added to the framework.

APPENDIX

Taking the square of Equation (13) and using the equality $(\rho_i^w)^2 = (r_i^w)^2 + \rho_i^2 + 2 r_i^w \rho_i \cos\psi_i$, the boundary constraints can be written as second-degree inequalities with respect to $r_i^w$:

$$(r_i^w)^2 + 2\gamma_i r_i^w - L_i > 0, \qquad (36)$$
$$(r_i^w)^2 + 2\gamma_i r_i^w - U_i < 0, \qquad (37)$$

where $\gamma_i = \rho_i \cos\psi_i$, $L_i = (\rho_{\min}^{B_i} + sG)^2 - \rho_i^2$, and $U_i = (\rho_{\max}^{B_i} - sG)^2 - \rho_i^2$. For compactness, the offset $G$ henceforth denotes the rescaled offset $sG$.

A first degenerate case occurs when $p_i$ is outside the sphere $S_{\max}(C, \rho_{\max}^{B_i} - G)$ and $u_i \cap S_{\max} = \emptyset$: Inequality (37) then has no solution (label C1 in Algorithm 1). In other words, this corresponds to using a bin separation offset $G$ such that $p_i$, which initially lies in the upper part of bin $B_i$, would have to be relocated farther away from the upper bin boundary $\rho_{\max}^{B_i}$ to enforce the separation offset. But $p_i$

Fig. 5. Simplified configurations to compute the boundary constraints. Using 2D projections and setting G = 0, three cases are shown for p with directions d1, d2, and d3. p must stay in between Smin(C, ρmin) and Smax(C, ρmax). Linearized constraints correspond to the lines Lmin and Lmax, and boundaries are computed using their intersections with the direction of alteration. d3 is an ill-defined case (|cos ψ| = 0); the smaller |cos ψ|, the larger the approximation error (i2lmax vs. i2max). Using the non-linearized constraints, two configurations are depicted: the boundary constraints can actually be linear (d2 and d3, C2 in Algorithm 1); otherwise p can lie within two disjoint segments (d1) and the relocation is restricted to [i1min,1, i1max,1] (C3 in Algorithm 1).

can only be relocated along a direction which does not enable this constraint to be met (empty intersection). In practice, this case is handled by resetting $u_i = \rho_i$. Discarding this degenerate case, if $p_i$ is outside the sphere $S_{\min}(C, \rho_{\min}^{B_i} + G)$ and $u_i \cap S_{\min} = \emptyset$ (for instance with direction $d_2$ in Figure 5), the constraints reduce to Inequality (37) (label C2 in Algorithm 1) and thus become linear in $r_i^w$. Otherwise, the constraints correspond to the union of two disjoint segments (direction $d_1$ in Figure 5). If $p_i$ is already within one segment (label C3), the constraints are approximated with this single segment. If $p_i$ is within the sphere $S_{\min}$ (label C4), the constraints are approximated using the segment closest to $p_i$. This case is symmetrical to the first degenerate case (label C1); however, there is always at least one intersection between the relocation direction and the lower bin boundary sphere offset by $G$. Therefore, the boundaries are always well-defined.

REFERENCES

[1] I. J. Cox, M. L. Miller, J. A. Bloom, J. Fridrich, and T. Kalker, Digital Watermarking and Steganography, 2nd ed. San Mateo, CA, USA: Morgan Kaufmann, 2007.
[2] K. Wang, G. Lavoué, F. Denis, and A. Baskurt, "A comprehensive survey on three-dimensional mesh watermarking," IEEE Trans. Multimedia, vol. 10, no. 8, pp. 1513–1527, Dec. 2008.
[3] M. Botsch, L. Kobbelt, M. Pauly, P. Alliez, and B. Levy, Polygon Mesh Processing. Natick, MA, USA: AK Peters, 2010.
[4] K. Wang, M. Luo, A. G. Bors, and F. Denis, "Blind and robust mesh watermarking using manifold harmonics," in Proc. 16th IEEE Int. Conf. Image Process., Nov. 2009, pp. 3657–3660.
[5] M. Luo, "Robust and blind 3D watermarking," Ph.D. dissertation, Dept. Comput. Sci., Univ. York, York, U.K., May 2006.
[6] J. M. Konstantinides, A. Mademlis, P. Daras, P. A. Mitkas, and M. G. Strintzis, "Blind robust 3-D mesh watermarking based on oblate spheroidal harmonics," IEEE Trans. Multimedia, vol. 11, no. 1, pp. 23–38, Jan. 2009.
[7] S. Zafeiriou, A. Tefas, and I. Pitas, "Blind robust watermarking schemes for copyright protection of 3D mesh objects," IEEE Trans. Vis. Comput. Graphics, vol. 11, no. 5, pp. 596–607, Sep. 2005.
[8] J.-W. Cho, R. Prost, and H.-Y. Jung, "An oblivious watermarking for 3-D polygonal meshes using distribution of vertex norms," IEEE Trans. Signal Process., vol. 55, no. 1, pp. 142–155, Jan. 2007.
[9] A. G. Bors and M. Luo, "Optimized 3D watermarking for minimal surface distortion," IEEE Trans. Image Process., vol. 22, no. 5, pp. 1822–1835, May 2013.
[10] A. Gupta, P. Alliez, and S. Pion, "Principal component analysis in CGAL," INRIA Sophia-Antipolis, Nice, France, Tech. Rep. 6642, Sep. 2008.
[11] C. Zhang and T. Chen, "Efficient feature extraction for 2D/3D objects in mesh representation," in Proc. IEEE Int. Conf. Image Process., vol. 3, Oct. 2001, pp. 935–938.
[12] P. R. Alface, B. Macq, and F. Cayre, "Blind and robust watermarking of 3D models: How to withstand the cropping attack?" in Proc. IEEE Int. Conf. Image Process., vol. 5, Sep./Oct. 2007, pp. 465–468.
[13] K. Wang, G. Lavoué, F. Denis, and A. Baskurt, "Robust and blind mesh watermarking based on volume moments," Comput. Graph., vol. 35, no. 1, pp. 1–19, Feb. 2011.
[14] R. Hu, P. Rondao-Alface, and B. Macq, "Constrained optimisation of 3D polygonal mesh watermarking by quadratic programming," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., Apr. 2009, pp. 1501–1504.
[15] Optimization Toolbox User's Guide, The MathWorks Inc., Natick, MA, USA, 2013. [Online]. Available: http://www.mathworks.com/access/helpdesk/help/toolbox/optim/
[16] M. Corsini, M.-C. Larabi, G. Lavoué, O. Petřík, L. Váša, and K. Wang, "Perceptual metrics for static and dynamic triangle meshes," Comput. Graph. Forum, vol. 32, no. 1, pp. 101–125, Feb. 2013.
[17] K. Kim, M. Barni, and H. Z. Tan, "Roughness-adaptive 3-D watermarking based on masking effect of surface roughness," IEEE Trans. Inf. Forensics Security, vol. 5, no. 4, pp. 721–733, Dec. 2010.
[18] M. Corsini, E. D. Gelasca, T. Ebrahimi, and M. Barni, "Watermarked 3-D mesh quality assessment," IEEE Trans. Multimedia, vol. 9, no. 2, pp. 247–256, Feb. 2007.
[19] G. Lavoué, "A multiscale metric for 3D mesh visual quality assessment," Comput. Graph. Forum, vol. 30, no. 5, pp. 1427–1437, Aug. 2011.
[20] M. Garland and P. S. Heckbert, "Surface simplification using quadric error metrics," in Proc. 24th Annu. Conf. Comput. Graph. Interact. Techn., Aug. 1997, pp. 209–216.
[21] Z. Karni and C. Gotsman, "Spectral compression of mesh geometry," in Proc. 27th Annu. Conf. Comput. Graph. Interact. Techn., Jul. 2000, pp. 279–286.
[22] K. Wang, G. Lavoué, F. Denis, A. Baskurt, and X. He, "A benchmark for 3D mesh watermarking," in Proc. IEEE Int. Conf. Shape Model. Appl., Jun. 2010, pp. 231–235.
[23] G. Lavoué, E. D. Gelasca, F. Dupont, A. Baskurt, and T. Ebrahimi, "Perceptually-driven 3D distance metrics with application to watermarking," in Proc. SPIE, Appl. Digit. Image Process., Aug. 2006.
[24] P. Lindstrom and G. Turk, "Fast and memory efficient polygonal simplification," in Proc. Conf. Visualization, Oct. 1998, pp. 279–286.
[25] G. Lavoué, "A roughness measure for 3D mesh visual masking," in Proc. 4th Symp. Appl. Perception Graph. Visualization, Jul. 2007, pp. 57–60.
[26] M. Luo and A. G. Bors, "Surface-preserving robust watermarking of 3-D shapes," IEEE Trans. Image Process., vol. 20, no. 10, pp. 2813–2826, Oct. 2011.
[27] Y. Yang, "Information analysis for steganography and steganalysis in 3D polygonal meshes," Ph.D. dissertation, Durham Univ., Durham, U.K., Oct. 2013.
[28] F. Cayre, C. Fontaine, and T. Furon, "Watermarking security: Theory and practice," IEEE Trans. Signal Process., vol. 53, no. 10, pp. 3976–3987, Oct. 2005.
[29] P. Bas and T. Furon, "A new measure of watermarking security: The effective key length," IEEE Trans. Inf. Forensics Security, vol. 8, no. 8, pp. 1306–1317, Aug. 2013.


Xavier Rolland-Nevière received the French Engineering degree in telecommunications systems from Telecom-Bretagne, Plouzané, France, and the M.Sc. degree in signal processing from the University of Rennes 1, Rennes, France, in 2011. He is a Ph.D. candidate at Inria Sophia Antipolis - Méditerranée, and is currently with the Security and Content Protection Labs of Technicolor, Cesson-Sévigné, France. His research interests include watermarking, mesh processing, and video processing.

Gwenaël Doërr (M'06–SM'12) received the M.Sc. degree in telecommunications systems from Telecom Sud-Paris, Evry, France, in 2001, and the Ph.D. degree in signal and image processing from the Université de Nice Sophia-Antipolis, Nice, France, in 2005. He was a Lecturer of Digital Rights Management with the Department of Computer Science, University College London, London, U.K., from 2005 to 2009. In Spring 2008, he was a Visiting Scholar with HP Labs, Palo Alto, CA, USA, working on the interoperability of DRM systems. In 2010, he joined the Security and Content Protection Labs, Technicolor Research and Development France, Cesson-Sévigné, France, as a Senior Research Scientist on content protection. His research interests encompass various aspects of multimedia security technologies. His recent work has focused on signal processing techniques for antipiracy, including transactional watermarking for different types of content, content fingerprinting for resynchronization, and passive forensic analysis to characterize pirate samples and piracy channels. Dr. Doërr is currently the Chair of the IEEE Signal Processing Society Technical Committee on Information Forensics and Security. He is also a Distinguished Member of Technicolor's Fellow Network. He is an Associate Editor of the IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY and the EURASIP Journal on Image and Video Processing.
He co-organized Information Hiding in 2007 in St Malo, France, and the IEEE Workshop on Information Forensics and Security in 2009 in London, U.K.

Pierre Alliez received the M.Sc. degree in computer science from the Université de Nice Sophia-Antipolis, Nice, France, in 1996, and the Ph.D. degree in image and signal processing from Telecom ParisTech, Paris, France, in 2000. He was a Post-Doctoral Scholar with the University of Southern California, Los Angeles, CA, USA, in 2001, and joined INRIA, Sophia Antipolis, France, in 2002, as a Junior Researcher in Geometric Computing and Modeling. He received the Habilitation degree from the Université de Nice Sophia-Antipolis in 2008. He was promoted to Senior Researcher at INRIA in 2010, and has led an INRIA project-team on geometric modeling of 3D environments since 2013. His research interests cover topics commonly referred to as geometry processing: mesh compression, surface reconstruction, mesh generation, surface approximation, surface remeshing, mesh parameterization, and, more recently, mesh watermarking. He is an Associate Editor of the Computational Geometry Algorithms Library, and is currently serving as an Associate Editor of the ACM Transactions on Graphics, Elsevier Graphical Models, and Elsevier Computer Aided Geometric Design. He organized the second EUROGRAPHICS Symposium on Geometry Processing in 2004 in Nice. He was the Program Co-Chair of the EUROGRAPHICS Symposium on Geometry Processing in 2008, EUROGRAPHICS short papers in 2009, Pacific Graphics in 2010, and Geometric Modeling and Processing in 2014. He received the EUROGRAPHICS Young Researcher Award for his contributions to computer graphics and geometry processing in 2005, and a Starting Grant from the European Research Council on Robust Digital Geometry Processing in 2011.
In 2014, he was nominated by the European Commission as a member of the Horizon 2020 Advisory Group for Societal Challenge 6 ("Europe in a Changing World: Inclusive, Innovative, and Reflective Societies").