3D Virtual Character Reconstruction From Projections: A NURBS-based Approach

Olfa Triki

Titus Zaharia

Françoise Prêteux

Groupe des Ecoles des Télécommunications, Institut National des Télécommunications / ARTEMIS Project Unit, 9 rue Charles Fourier, 91011 EVRY, France. @int-evry.fr

ABSTRACT
This work has been carried out within the framework of an industrial project, called TOON, supported by the French government. TOON aims at developing tools for automating traditional 2D cartoon content production. This paper presents preliminary results of the TOON platform. The proposed methodology addresses the issues of 2D/3D reconstruction from a limited number of drawn projections, and of 2D/3D manipulation, deformation and refinement of virtual characters. Specifically, we show that the NURBS-based modeling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be efficiently handled. Note that user interaction is enabled exclusively in 2D, through a multiview constraint specification method. This is fully consistent with the traditional practice of cartoon creators and avoids the use of 3D modeling software packages, which are generally complex to manipulate.

Keywords Cartoon production, 3D character reconstruction, deformation, NURBS, Visual Hull, shape from silhouette.

1. INTRODUCTION

2D cartoon production generally requires a huge amount of human skills and resources, involving character designers, scenarists and animators. The animation of cartoon characters is currently achieved manually, which leads to relatively high production costs. Identifying the challenging issue of elaborating tools for automating the cartoon creation/animation process, the TOON project, supported by the French ANVAR agency (National Agency for Valorization of Research) [1], aims at elaborating a unified platform for reconstructing, animating and deforming virtual characters. In order to clearly specify the constraints related to the specific cartoon application considered in this work, let us briefly describe the traditional working style of cartoon makers.

1.1 Problem statement
First, the cartoon character needs to be defined. This is the role of the character creator, who provides the character's model sheet by:
1. specifying a set of drawing rules, which make it possible to draw the character in a reproducible manner (Figure 1.a). The character is drawn in a coarse-to-fine manner, starting from a first baseline made of simplified, more or less elementary primitives which roughly define the body sub-parts and their relative proportions. Morphological details and adjustments are then gradually added to this rough sketch in order to refine the character drawing (in 2D and for a given, unique viewpoint);
2. creating a set of 2D views of the character, the so-called "turn-around" (Figure 1.b), which specifies details of the front, profile and back views of the character.
To animate the character, a storyboard is first created by the scenarists, who provide a set of key-frames corresponding to various poses of the character. The huge task of the animators is then to manually create the set of all

intermediate frames needed to ensure a smooth transition between key-frames at the required video frame rate (e.g., 25 frames per second). In practice, this stage is the most time-consuming and costly of the cartoon production chain, since it generally requires large teams of animators. Creating tools for automating this animation process thus becomes a highly challenging issue, with a potentially strong economic impact on the cartoon industry. Such tools should scrupulously respect the traditional working style of cartoon makers, who are reluctant to modify their essentially 2D practice.

1.a. Drawing rules.

1.b. Turn around.

Figure 1. Model sheet for the famous Tootuff character.

Simple 2D modeling techniques [2] are not sufficient to automatically animate cartoon characters, because of the articulated nature of such characters. The use of 3D modeling techniques is therefore required to deal with self-occlusions of body sub-parts. Most 3D design software packages [3], [4], [5] and research interfaces [6], [7], [8] proceed by directly creating a 3D model. However, the deformation and animation stages are not straightforward and generally require a large amount of human interaction. The main concern is that professional or experimental 3D modeling platforms are quite unpopular in the world of 2D cartoon creators. Such tools flagrantly violate their traditional working style and have not proved to be an economically viable solution. The TOON project specifically aims at overcoming such difficulties by proposing a 2D/3D modeling platform. In order to respect the cartoon creators' working style, the 3D modeling stage is re-formulated as a 2D/3D reconstruction problem, and user interaction is achieved entirely in 2D. More precisely, the methodology adopted within the TOON project consists of first creating a 3D model from a limited set of drawn projections (at most eight), corresponding to the turn-around provided by the character designer. Once available, the 3D character may be used for automated animation purposes. The scenarist provides two keyframe poses. A 2D/3D registration procedure is then applied in order to recover, for each keyframe, the corresponding 3D pose parameters. Finally, an interpolation procedure automatically generates all intermediate frames while ensuring a smooth transition between keyframes. The following section reviews the state of the art of existing 2D/3D modeling techniques.

1.2 Related Work
The Harold system [9] offers an interface for drawing 2D strokes (i.e. 2D drawings within the image plane) associated with billboards, defined as 3D planes used as support regions for drawing. The 3D scene is defined as a collection of billboards and associated strokes. As the camera moves, the billboards and the corresponding strokes are projected onto the 2D scene and appropriately rendered. The method implicitly relies on the assumption that the drawings preserve their aspect when observed from different viewpoints. A similar approach is presented in [10]. Here, the 2D strokes specified by the user are converted, by using heuristic procedures, into 3D Bézier surface patches, which are further used for rendering purposes. Within the framework of architectural applications, Tolba et al. [11] propose a similar 2D stroke-based approach. Instead of using planar billboards, the authors project the strokes onto the unit sphere. After applying the desired transformations to the image plane, the strokes are re-projected and rendered onto the 2D scene. Although highly relevant for background 3D scene modeling, such approaches are too elementary for creating realistic articulated characters, whose projections change drastically with the considered viewpoint and are prone to self-occlusions of 3D body sub-parts.

A second, more object-oriented category of approaches infers 3D shapes from 2D drawings by specifying a set of basic 2D/3D interaction mechanisms and by making some a priori assumptions on the 3D model's shape, making it possible to "guess" the 3D object from 2D strokes. Thus, the Teddy interface [12] creates 3D polygonal models from 2D sketches by using several modeling operations such as inflation, extrusion, smoothing and cutting. The advantage of the method is its

ability to rapidly create simple characters in real time, with several simple tools and a very intuitive interface. However, the inflation tool, which performs the 2D/3D reconstruction, makes strong assumptions about the object thickness, linked here to its width, which are rather arbitrary and rarely satisfied in practice. Karpenko et al. developed a similar interface [13] for creating free-form 3D models from 2D hand-drawn sketches. The difference with Teddy is that the authors use variational implicit surfaces [14] instead of triangular meshes. The system offers two interesting functionalities: (a) it makes it possible to construct a hierarchical representation of the drawn model, and (b) it allows the user to modify a surface by constraining its projection to interpolate a drawn 2D curve. However, the proposed methodology suffers from its restricted applicability to round, smooth shapes and from the expensive computational complexity of the fitting procedure, which is not suited to interactive applications and prohibitive for modeling more complex characters. Within the same framework, the Sketch system [15] offers a human-machine interface for geometric modeling. 3D surfaces are represented here as simple primitives, such as cubes and objects of revolution, automatically derived from 2D sketches. The interface provides a set of predefined gestures recognized by the system, allowing the user to rapidly create simple primitives. More complex models can finally be created by combining primitives with CSG (Constructive Solid Geometry)-like operations [16]. Such approaches make it possible to create simplified characters more or less rapidly. However, because of the very simple implicit surface models used, the obtained 3D shapes do not respect the level of detail usually desired in real cartoon applications.

In order to overcome such limitations, a third category combines billboard- and object-based approaches in order to improve the reconstructed characters. The idea is to no longer use planar billboards, but rather a basic 3D shape to which 2D strokes are attached. In [17], the authors replace the planar billboards with elementary 3D blobs, modeled by variational implicit surfaces [14]. Such blobs are created by specifying a 2D outline silhouette and by applying the fitting procedure described in [18]. The strokes are then mapped onto these simplified 3D shapes. When changing the object's 3D pose, the strokes are moved jointly with the blobs in order to generate the corresponding views. A closely related approach is reported in [19], where the basic shapes are modeled as polygonal surfaces of revolution derived from the simplified anatomy representation defined during the drawing rule specification stage. Such simple techniques prove to be helpful for automating the cartoon animation stage. However, they still require a lot of user interaction for refining/adjusting the automatically generated in-between frames, since in practice the hypothesis that the character's outline does not change dramatically with the angle of view is not satisfied.

Another class of 2D/3D reconstruction approaches concerns model-based techniques [20], [21], [22]. Here, a generic anthropomorphic 3D model is supposed to be available, the goal being to deform the 3D model in order to fit the 2D drawings/images. The deformed 3D model then represents the reconstructed object.
Such approaches are very useful when the number of input 2D views is very limited, since the underlying generic 3D model makes it possible to integrate strong a priori knowledge within the reconstruction procedure. However, the main drawback here comes from the limited field of applicability, which is restricted to objects with similar morphologies, such as humanoids represented by generic H-Anim or MPEG-4 models [23], [24]. The analysis of the literature shows that there is currently a lack of tools for creating and exploiting, for animation purposes, complex and generic virtual characters (humanoids, animals, purely imaginary creatures…) represented at the level of detail usually requested by the cartoon industry. Within this context, the choice of the 3D modeling stage is a crucial issue. The 3D representation should jointly satisfy the three objectives of the project: 2D/3D reconstruction, animation and deformation. In this paper, we show that NURBS-based surface modeling provides an appropriate framework for satisfying these requirements in a complete and unified manner. The paper is organized as follows. Section 2 gives an overview of the NURBS-based reconstruction procedure considered here. Section 3 details several 2D/3D interaction and deformation mechanisms, and Section 4 presents concluding remarks and perspectives of future work.

2. 2D/3D NURBS RECONSTRUCTION
Reconstructing 3D models from the turn-around drawings provided by the cartoon creator (Figure 1.b) is a difficult and complex task, because of:
• the very limited number of projections,
• the lack of texture or motion information usually exploited in structure-from-texture/motion approaches,
• the inaccuracy of the drawn projections: the angles of view corresponding to the 2D projections are approximate and highly designer-dependent, and the drawings sometimes present pose inconsistencies between different views (Figure 2),
• the lack of an exact projection model,
• the complex contour information provided by the drawer (Figure 1.b), which usually mixes informative morphological details with non-relevant, purely ornamental features,
• the huge variability of cartoon characters (humanoids, animals, purely imaginary creatures…), which makes it impossible to exploit any a priori knowledge of the models.

2.a. Tootuff’s hands and crest pose inconsistencies.

2.b. Barney’s legs pose inconsistencies.

Figure 2. Pose inconsistencies highlighted for the Tootuff and Barney characters.

For all these reasons, the approach adopted in this paper is based on shape-from-silhouette techniques [25], [26], [27], which lead to an initial 3D volumetric model. Because of the limited number of available views, the reconstructed volume offers only a coarse approximation of the object to be reconstructed. That is why an additional NURBS modeling module is integrated, offering the possibility to create a deformable surface model which can be interactively manipulated for adding details and refining the virtual character. The proposed semi-automatic reconstruction procedure, illustrated in Figure 3, consists of the following three steps (a sketch of step 1 is given after this list):
1. Determine an encompassing volume, defined by the visual hull of the character [25], [27]. This volume is obtained by intersecting the volumes extruded from each projection view.
2. Construct a NURBS surface from the resulting visual hull. The construction is achieved in a cross-sectional manner: the 3D volume is first swept in the vertical direction to generate 2D slices (Figure 4.a). Then, a 2D NURBS curve approximating each connected component of each slice is determined (Figure 4.b). As a by-product, the separation into connected components yields a coarse segmentation of the volume into sub-parts, which can be exploited for MPEG-4 animation purposes. An interpolation mechanism, combined with a knot refinement procedure [28], finally generates, for each connected component, the NURBS surface interpolating the individual NURBS section curves.
3. Re-project the obtained 3D model using angles of view different from those available in the turn-around images, and gradually adjust/refine the model by specifying 2D constraints that the apparent contours of the reconstructed surface should interpolate.
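As an illustration of step 1, the following minimal Python sketch carves a voxel grid against binary silhouettes. It assumes orthographic projection, silhouettes pre-aligned on a common vertical rotation axis, and known angles of view; all names and conventions are ours, for illustration only, and do not reflect the actual TOON implementation.

```python
import numpy as np

def carve_visual_hull(silhouettes, angles_deg, grid_res=128, half_extent=1.0):
    """Voxel-carving approximation of the visual hull (step 1).

    silhouettes : list of binary (H, W) arrays, character pixels = True
    angles_deg  : assumed angle of view per silhouette, around the
                  vertical (z) rotation axis
    Assumes orthographic projection and silhouettes already calibrated
    and aligned on a common rotation axis.
    """
    ticks = np.linspace(-half_extent, half_extent, grid_res)
    x, y, z = np.meshgrid(ticks, ticks, ticks, indexing="ij")
    inside = np.ones(x.shape, dtype=bool)

    for sil, angle in zip(silhouettes, angles_deg):
        h, w = sil.shape
        t = np.deg2rad(angle)
        # Horizontal image coordinate of each voxel in the rotated view.
        xr = np.cos(t) * x + np.sin(t) * y
        u = np.clip(((xr + half_extent) / (2 * half_extent) * (w - 1)).astype(int), 0, w - 1)
        v = np.clip(((half_extent - z) / (2 * half_extent) * (h - 1)).astype(int), 0, h - 1)
        # Keep only voxels whose projection falls inside the silhouette.
        inside &= sil[v, u]

    return inside  # boolean occupancy grid
```

Horizontal slices of the returned grid (a fixed z index) are exactly the 2D slices on which the per-component NURBS curves of step 2 are fitted.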

Figure 3. Overview of the 2D/3D reconstruction procedure: the 2D views #1 to #n are calibrated and aligned, intersected into a volume, and turned by 3D NURBS modeling into the 3D virtual character; a feedback loop (constraint specification, interaction with the designer, visualisation and rendering) allows model refinement.

4.a. Volumetric model scanning and generated horizontal slices.

4.b. NURBS-based slice modeling; from left to right: the considered slice, its contours, the control points and the associated NURBS curve.

Figure 4. Cross-sectional design of the NURBS-reconstructed surface.
The reconstruction is performed starting from binary images representing the character from different points of view. Figure 5 provides a first example of 2D/3D reconstruction from synthetic data. A VRML humanoid model has been used here for generating the eight projection views illustrated in Figure 5.a. Note that in the case of synthetic data the calibration parameters (axis of rotation, angles of view) are perfectly known.

5.a. Initial 3D model (VRML polygonal object) and artificially generated projections used for the reconstruction of the character.

5.b. Reconstructed visual hull, NURBS surface (3rd degree, 100 control points per slice), and segmentation of the main anatomical parts.

Figure 5. Reconstruction from synthetic data.

Figure 6 shows a second reconstruction example, in the case of real data representing the Tootuff character.

6.a. Input views generated from the drawn contours in Figure 1.b.

6.b. Reconstructed visual hull and NURBS surface (3rd degree, 100 control points per slice).

Figure 6. Reconstruction of Tootuff from the external contours.
The calibration stage was here performed manually, by specifying in each view the position of the vertical axis of rotation and by considering approximate angles of view of 0°, 55°, 90° and 125°. The "monolithic" aspect of the reconstructed model is explained by the fact that interior contours have not been taken into account. In order to evaluate the impact that such interior elements might have on the quality of the reconstructed objects, we have performed a manual segmentation of the input drawings (Figure 7.a) into several sub-parts corresponding to the head, arms, legs, bust, nose and ears. The reconstruction has then been performed independently for each sub-part of the object (Figure 7.b).

7.a. Input drawings segmentation.

7.b. Reconstructed visual hull and NURBS surface. 7.c. Projections of the reconstructed model for angles of view of 0°, 30° and 120° (from left to right).

Figure 7. Reconstruction of Tootuff after segmentation of the input drawings.
Let us note that, in spite of some imperfections of the reconstructed 3D model, the 2D projections obtained for angles of view different from those available in the turn-around images (Figure 7.c) offer a good approximation of the character, which can be exploited by drawers and animators. The increased realism of the reconstructed character shows how important it will be, in our future developments, to create tools for semi-automatically detecting interior contour features and exploiting them in the reconstruction stage. The choice of NURBS surfaces is motivated by their capacity to accurately represent complex shapes and by the multiple deformation facilities they offer, described in the following section.

3. MODEL REFINEMENT VIA NON-RIGID DEFORMATION
Let us recall the definition of a Non-Uniform Rational B-Spline (NURBS) surface, given by the following parametric expression:

$$S(u, v) = \sum_{i=0}^{n} \sum_{j=0}^{m} w_{ij} P_{ij} R_{ij}(u, v), \qquad (1)$$

where $(u, v)$ are curvilinear coordinates, $P_{ij}$ the surface control points, $w_{ij}$ the associated weights, and $R_{ij}(u, v)$ the associated de Boor coefficients [29], [30], [28], computed from two knot vectors, $U$ and $V$. By definition, the NURBS representation offers the advantage of direct and intuitive local control, achieved by modifying the weights and moving the control points. The deformation methodology adopted in this paper relies on the constraint-based approach proposed in [31], because of its simplicity and intuitive nature. Let us briefly recall the basic principle of the adopted method.

3.1 Deformation with 3D constraints
The principle of the approach in [31] consists of deforming a NURBS surface according to 3D geometric constraints that the deformed surface has to interpolate. Constraints can be either sets of points and/or normal vectors to the surface, or a 3D NURBS curve. The basic idea is to introduce a set of displacements $\varepsilon_{ij}$, associated with the control points $P_{ij}$, minimizing the global displacement energy

$$E = \sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} \|\varepsilon_{ij}\|^2, \qquad (2)$$

and satisfying the following constraint equations:

$$T_l = \hat{S}(u_l, v_l) = S_l + \sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} \varepsilon_{ij} R_{ij}(u_l, v_l), \quad l = 0, \ldots, r, \qquad (3)$$

where
• $T_l$ denote the target surface points, i.e. 3D points that the deformed surface has to interpolate,
• $\hat{S}$ is the deformed surface,
• $S_l = S(u_l, v_l)$ are source points, i.e. points on the initial, undeformed surface matched with the target points and defined by the parametric coordinates $(u_l, v_l)$. In practice, source points are defined as the projections of the target points onto the initial surface.

This typical constrained optimization problem can be transformed into an unconstrained one by minimizing the following Lagrangian functional:

$$L = \sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} \|\varepsilon_{ij}\|^2 + \sum_{l=0}^{r} \lambda_l \Big( T_l - S_l - \sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} \varepsilon_{ij} R_{ij}(u_l, v_l) \Big), \qquad (4)$$

where the $\lambda_l$ are the Lagrange multipliers and $\|\cdot\|$ denotes the Euclidean norm.

The displacements $\varepsilon_{ij}$ are determined as the solution of the following linear system:

$$\begin{cases} T_l = S_l + \sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} \varepsilon_{ij} R_{ij}(u_l, v_l), & l = 0, \ldots, r \\[4pt] \varepsilon_k = \sum_{l=0}^{r} \lambda_l R_{ij}(u_l, v_l), & k = 0, \ldots, K \end{cases} \qquad (5)$$

where $K = (i_2 - i_1)(j_2 - j_1)$ is the number of control points to be updated and $k$ indexes the control point pairs $(i, j)$.
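To make the solution of (2)-(5) concrete, here is a minimal numpy sketch under our reading of the equations: the basis values $R_{ij}(u_l, v_l)$ are collected in a matrix, and the minimal-energy displacements are obtained through the Lagrange multipliers, exactly as in eq. (5). The names and the matrix formulation are ours, for illustration.

```python
import numpy as np

def min_energy_displacements(B, targets, sources):
    """Constrained deformation of eqs. (2)-(5): find control-point
    displacements of minimal total energy whose induced surface motion
    interpolates the point constraints.

    B       : (r+1, K) matrix with B[l, k] = R_ij(u_l, v_l), one column
              per control point (i, j) in the updated window
    targets : (r+1, 3) target points T_l
    sources : (r+1, 3) source points S_l
    Returns the (K, 3) displacements, one 3D vector per control point.
    """
    D = targets - sources                # required displacements T_l - S_l
    # Lagrange multipliers solve (B B^T) lambda = D, and eps = B^T lambda
    # is the minimal-norm solution (use lstsq if constraints are redundant).
    lam = np.linalg.solve(B @ B.T, D)
    return B.T @ lam

# Usage sketch: two point constraints acting on a 4x4 window of control points.
rng = np.random.default_rng(0)
B = rng.random((2, 16))                  # basis values at the two (u_l, v_l)
T = np.array([[0.0, 0.1, 0.2], [0.3, 0.0, 0.1]])
S = np.zeros((2, 3))
eps = min_energy_displacements(B, T, S)
assert np.allclose(B @ eps, T - S)       # the constraints are interpolated
```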

Figure 8 shows an example of deformation applied to the humanoid face according to two point constraints. Target and source points are represented with black and white dots, respectively.

Figure 8. Nose deformation inferred by two constraint points.

The main drawback of such a formulation results from the constraint specification, which is entirely performed in 3D space. This violates the traditional working style of cartoon designers. To overcome this limitation, we propose a 2D constraint formulation: the constraints are no longer specified in 3D space, but by means of several 2D projection images. The first strategy, detailed in the next sub-section, consists of fitting a 3D NURBS curve to a set of 2D contours, specified in several views and considered as projections of the 3D curve to be determined.

3.2 Multiview constraint specification by 2D/3D NURBS curve reconstruction
Let us consider a 3D NURBS curve defined as:

$$C(u) = \sum_{i=0}^{n} P_i N_{i,p}(u), \qquad (6)$$

where $P_i \in \mathbb{R}^3$ are the control points and $N_{i,p}(u)$ the B-spline basis functions of degree $p$. Let us recall that the projection of a 3D NURBS curve is a 2D NURBS curve whose control points are the projections of the 3D control points. The projection of the considered 3D curve for an angle of view $\theta$, denoted by $C^{\theta}(u)$, is then given by the following expression:

$$C^{\theta}(u) = \sum_{i=0}^{n} P_i^{\theta} N_{i,p}(u), \qquad (7)$$

where $P_i^{\theta}$ are the projections onto the image plane of the 3D control points $P_i$ for the view angle $\theta$. For each view $\theta_j$, the constraints are specified as a set of 2D points. A 2D NURBS curve $c^{\theta_j}$ is fitted to each set of points by applying the approximation procedure described in [28]. Let us denote by $R_{\theta_j}$ the rotation of angle $\theta_j$ around the z-axis (considered as the vertical axis), by $\pi$ the projection operator, and let $\pi_{\theta_j} = \pi \circ R_{\theta_j}$. The principle consists of finding the 3D control points $P_i$ minimizing the global square error $E$ expressed as:

$$E = \sum_{j=0}^{V-1} \sum_{k=0}^{m-1} \Big\| Q_k^{\theta_j} - \sum_{i=0}^{n} P_i^{\theta_j} N_{i,p}(\tilde{u}_k) \Big\|^2, \qquad (8)$$

where
• $Q_k^{\theta_j} = c^{\theta_j}(\tilde{u}_k)$,
• $V$ is the number of views considered,
• $\tilde{u}_k$ are curvilinear coordinates uniformly sampling the $[0, 1]$ interval.

Let $\tilde{R}_{\theta_j}^t = R_{\theta_j}^t I_{2\to3}$, with $I_{2\to3} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}^t$.

The coordinates of the 3D NURBS control points can then be determined as solutions of the following linear system:

$$\forall l \in \{1, \ldots, n-1\}: \quad \sum_{j=0}^{V-1} \sum_{k=1}^{m-1} N_{l,p}(\tilde{u}_k)\, \tilde{R}_{\theta_j}^t\, \bar{Q}_k^{\theta_j} = \sum_{j=0}^{V-1} \sum_{k=1}^{m-1} \sum_{i=1}^{n-1} N_{i,p}(\tilde{u}_k)\, N_{l,p}(\tilde{u}_k)\, \tilde{R}_{\theta_j}^t\, \pi_{\theta_j}(P_i), \qquad (9)$$

where

$$\bar{Q}_k^{\theta_j} = Q_k^{\theta_j} - N_{0,p}(\tilde{u}_k)\, Q_0^{\theta_j} - N_{n,p}(\tilde{u}_k)\, Q_m^{\theta_j}. \qquad (10)$$

The 3D NURBS curve thus determined can now be exploited as a constraint for the algorithm described in Section 3.1. Figure 9 shows a second deformation result, obtained using a 3D NURBS curve recovered from three projections.

Figure 9. Humanoid's back deformed using a 2D/3D reconstructed NURBS curve.
Generally, cartoon animators are used to drawing surface contours, rather than projections of a same 3D curve from different viewpoints. For this reason, a second deformation method is proposed, which makes it possible to directly deform the 3D NURBS surface such that its projection corresponds to the user drawings specified in one or several images.

3.3 Direct 3D deformation from 2D constraints
The idea is to replace the 3D constraints expressed in equation (3) by a set of 2D constraints corresponding to a number of views. The constraint equation for a given viewing angle $\theta_k$ is defined as follows:

$$T_l^{\theta_k} = \pi_{\theta_k}(\hat{S}_l) = \pi_{\theta_k}\Big( S_l^{\theta_k} + \sum_{i,j} \varepsilon_{i,j} R_{i,j}(u_l^{\theta_k}, v_l^{\theta_k}) \Big). \qquad (11)$$

Here, $T_l^{\theta_k}$ are 2D target points associated with view $\theta_k$. The source points $S_l^{\theta_k} = S(u_l^{\theta_k}, v_l^{\theta_k})$ are defined as

corresponding points on the 3D surface's contour generator. Let us recall that the contour generator of a smooth 3D surface is the curve defined as the geometric locus of all surface points whose normal vector is orthogonal to the viewing direction. The projection of the contour generator onto the 2D image is the apparent contour.

The source and target points are determined according to the following steps. For each section of the 3D NURBS model, corresponding to a fixed coordinate $v_f$, the contour generator points are extracted. First, the intervals $[u_i, u_{i+1}]$ between successive knots where the scalar product between the viewing direction $N_{\theta_k}$ and the surface normal $N_S(u, v_f)$ changes sign are determined. For each such interval, the parametric coordinate of the contour generator point is initialized to $u^{(0)} = (u_i + u_{i+1})/2$ and then iteratively refined using the Newton algorithm. Let $f(u)$ denote the scalar product between the viewing direction and the surface normal vector:

$$f(u) = \langle N_S(u, v_f), N_{\theta_k} \rangle. \qquad (12)$$

The Newton iterations update the parametric position $u$ until convergence, as described in equation (13):

$$u^{(n)} = u^{(n-1)} - \frac{f(u^{(n-1)})}{f'(u^{(n-1)})}. \qquad (13)$$
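A minimal sketch of this extraction step follows, assuming a callable `normal(u)` returning the surface normal at $(u, v_f)$ and a numerical derivative of $f$; the names are hypothetical, for illustration only.

```python
import numpy as np

def contour_generator_points(normal, view_dir, knot_spans, tol=1e-8, it_max=20):
    """Extract contour generator points on one section (fixed v_f):
    bracket a sign change of f(u) = <N_S(u, v_f), N_theta> between
    successive knots, then refine with the Newton update of eq. (13).

    normal     : callable u -> surface normal N_S(u, v_f), 3-vector
    view_dir   : viewing direction N_theta, 3-vector
    knot_spans : list of (u_i, u_{i+1}) intervals between successive knots
    """
    f = lambda u: float(np.dot(normal(u), view_dir))

    def df(u, h=1e-6):                     # numerical derivative of f
        return (f(u + h) - f(u - h)) / (2 * h)

    roots = []
    for u0, u1 in knot_spans:
        if f(u0) * f(u1) > 0:              # no sign change: no point here
            continue
        u = 0.5 * (u0 + u1)                # initialization (u_i + u_{i+1}) / 2
        for _ in range(it_max):
            d = df(u)
            if abs(d) < tol:
                break
            step = f(u) / d
            u = min(max(u - step, u0), u1) # Newton update, kept in the span
            if abs(step) < tol:
                break
        roots.append(u)
    return roots
```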

The resulting contour generator points are considered as the source points $S_l^{\theta_k}$. Their projections in the corresponding 2D views, denoted by $A_l^{\theta_k}$, yield a set of apparent contour points, which are then interpolated by using the procedure described in [28] for generating a 2D NURBS curve, $a^{\theta_k}$, approximating the apparent contour of the surface. Let $\{u_s\}$ denote the parametric coordinates of the apparent contour points $A_l^{\theta_k}$ in the NURBS representation, such that:

$$A_l^{\theta_k} = a^{\theta_k}(u_s). \qquad (14)$$

The input drawn contours are also modeled by a 2D NURBS curve, denoted by $c^{\theta_k}$, by applying the approximation procedure described in [28].

The two NURBS curves are then matched. More precisely, to each apparent contour point $A_l^{\theta_k}$ an initial point $C_l^{\theta_k(0)}$ on $c^{\theta_k}$ is associated by setting:

$$C_l^{\theta_k(0)} = c^{\theta_k}(u_l^{(0)}), \quad \text{with } u_l^{(0)} = u_s. \qquad (15)$$

The initial guess is then refined by applying Newton iterations in order to find the zero of the following function:

$$f(u) = \langle T_c(u), A_l^{\theta_k} - c^{\theta_k}(u) \rangle, \qquad (16)$$

where $T_c(u)$ is the tangent vector to the input contour at $u$. The Newton iterations are performed until convergence or until a pre-defined number of iterations is reached. At convergence, the target 2D points $T_l^{\theta_k}$ associated with the source points $S_l^{\theta_k}$ are defined as:

$$T_l^{\theta_k} = c^{\theta_k}(u_l^{(final)}). \qquad (17)$$

The Newton algorithm applied here guarantees that the final target points approximate the orthogonal projection of each apparent contour point $A_l^{\theta_k}$ onto the input contour.
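A minimal sketch of this matching step, under the same hypothetical conventions as above: `c(u)` and `c_tangent(u)` are assumed callables evaluating the fitted 2D NURBS curve and its tangent.

```python
import numpy as np

def match_to_input_contour(c, c_tangent, A, u0, tol=1e-8, it_max=20):
    """Project one apparent contour point A onto the drawn contour curve c,
    as in eqs. (15)-(17): start from the parametric guess u0 = u_s and run
    Newton iterations on f(u) = <T_c(u), A - c(u)>, whose zero is the foot
    of the orthogonal projection of A onto the curve.
    """
    f = lambda u: float(np.dot(c_tangent(u), A - c(u)))

    def df(u, h=1e-6):                     # numerical derivative of f
        return (f(u + h) - f(u - h)) / (2 * h)

    u = u0
    for _ in range(it_max):
        d = df(u)
        if abs(d) < tol:
            break
        step = f(u) / d
        u = min(max(u - step, 0.0), 1.0)   # keep u inside the curve domain
        if abs(step) < tol:
            break
    return c(u)                            # the target point T_l, eq. (17)
```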

Figure 10 illustrates this 2D matching technique.

10.a. Initial matching.

10.b. Matching after Newton iterations.

Figure 10. Apparent and input contour matching.
Once corresponding source and target points are determined, the control point displacements are computed as described in Section 3.1, by minimizing the following Lagrangian functional:

$$L = \sum_{i,j} \|\varepsilon_{i,j}\|^2 + \sum_k \sum_l \lambda_l \Big( T_l^{\theta_k} - \pi_{\theta_k}\Big( S_l^{\theta_k} + \sum_{i,j} \varepsilon_{i,j} R_{i,j}(u_l^{\theta_k}, v_l^{\theta_k}) \Big) \Big), \qquad (18)$$

where the $\lambda_l$ are Lagrange multipliers.
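For an orthographic projection with orthonormal rows, minimizing (18) for a single view reduces to the same minimal-norm solve as in Section 3.1, with 2D Lagrange multipliers mapped back to 3D through the projection. A sketch under these assumptions, with our own names and conventions:

```python
import numpy as np

def min_energy_displacements_2d(B, P, targets_2d, sources_3d):
    """Single-view variant of eqs. (11)/(18), assuming an orthographic
    projection P (a 2x3 matrix with orthonormal rows). The minimal-energy
    displacements then lie in the image plane: nothing is added along the
    unconstrained viewing direction.

    B          : (r+1, K) basis values R_ij at the contour generator points
    targets_2d : (r+1, 2) target points T_l on the drawn contour
    sources_3d : (r+1, 3) source points S_l on the contour generator
    """
    D = targets_2d - sources_3d @ P.T      # 2D residuals T_l - pi(S_l)
    lam = np.linalg.solve(B @ B.T, D)      # 2D Lagrange multipliers
    return (B.T @ lam) @ P                 # eps_k = sum_l B[l, k] P^T lam_l
```

For several views, the per-view constraints, each with its own projection $\pi_{\theta_k}$, are stacked into one larger linear system of the same minimal-norm structure.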

The above-described deformation method was first applied to optimize the reconstructed NURBS surface. By construction, the NURBS model is included in the volumetric model, and the reconstructed surface does not exactly project onto the initial contour drawings, as illustrated in Figure 11 for the torso and the hand of the Tootuff character. Let us note that in the case of real data this phenomenon is accentuated by projection and manual calibration errors, and by the presence of drawing inconsistencies. Figure 11 shows that the above-described procedure makes it possible to adjust the reconstructed models so that they better fit the input drawings.

11.a. Tootuff torso correction.

11.b. Tootuff hand correction.

Figure 11. Model correction according to a single view. Apparent contour points and input contours (the exterior ones) are represented before and after applying the deformation procedure. We can observe that the corrected surface is much closer to the input contour than the initial one. A second application of the deformation procedure concerns the addition of morphological details, as illustrated in Figure 12, where two ears have been added to a humanoid head.

12.a. Initial and modified front view.

12.b. Initial and deformed 3D NURBS model.

Figure 12. Detail specification and surface deformation from a single view. Here, the constraints have been specified within a single image, corresponding to the front view. Other types of geometric constraints are currently under study, offering a very promising field of exploration for creating finer levels of detail.

4. CONCLUSION
This paper presents preliminary results on the TOON platform, which aims at automating the animation process of the traditional 2D cartoon production chain. The analysis of the specificities of the considered application led us to adopt a 2D/3D multiview approach, based on shape-from-silhouette techniques and NURBS modeling. Such an approach makes it possible to accurately respect the traditional 2D working style of cartoon makers. User interaction, entirely performed in 2D, allows cartoon creators to refine the reconstructed virtual characters by using multiview geometric constraints. In the near future, our work will concern the creation of authoring tools allowing: (1) the semi-automatic detection/selection of interior contour features, and (2) the elaboration of a complete set of purely 2D manipulations, including the specification of normal constraints within a single 2D image and topological modifications (such as cuts and stitches of the surface). Our longer-term developments will address an interactive platform for semi-automatic cartoon content creation, providing convivial user interfaces and integrating reconstruction, MPEG-4 compliant animation and deformation modules.

5. REFERENCES
[1] http://www.anvar.fr
[2] J.D. Fekete, E. Bizouarn, E. Cournarie, T. Galas, F. Taillefer, "TicTacToon: A Paperless System for Professional 2D Animation", SIGGRAPH 95 Conference Proceedings, 1995.
[3] http://www.aliaswavefront.com
[4] http://www.3dmax.com
[5] http://www.rhino3d.com
[6] K.T. McDonnell, H. Qin, "Dynamic sculpting and animation of free-form subdivision solids", The Visual Computer, 2002.
[7] E. Ferley, M.P. Cani, J.D. Gascuel, "Practical Volumetric Sculpting", The Visual Computer, 16(8), pp. 469-480, 2000.
[8] R. Turner, E. Gobbetti, "Interactive Construction and Animation of Layered Elastically Deformable Characters", Computer Graphics Forum, Vol. 17(2), pp. 135-152, June 1998.
[9] J.M. Cohen, J.F. Hughes, R.C. Zeleznik, "Harold: a world made of drawings", Proc. of the First International Symposium on Non-Photorealistic Animation and Rendering, pp. 83-90, ACM Press, June 2000.
[10] D. Bourguignon, M.P. Cani, G. Drettakis, "Drawing for Illustration and Annotation in 3D", EUROGRAPHICS 2001, Vol. 20, No. 3, 2001.
[11] O. Tolba, J. Dorsey, L. McMillan, "Sketching with Projective 2D Strokes", Proc. ACM Symposium on User Interface Software and Technology, pp. 149-157, 1999.
[12] T. Igarashi, S. Matsuoka, H. Tanaka, "Teddy: a sketching interface for 3D freeform design", Proc. ACM SIGGRAPH'99, pp. 409-416, 1999.
[13] O. Karpenko, J.F. Hughes, R. Raskar, "Free-form sketching with variational implicit surfaces", Eurographics 2002, Vol. 21(3), 2002.
[14] G. Turk, J.F. O'Brien, "Variational Implicit Surfaces", Technical Report GIT-GVU-99-15, Graphics, Visualization, and Usability Center, Georgia Institute of Technology, 1999.
[15] R.C. Zeleznik, K.P. Herndon, J.F. Hughes, "SKETCH: An Interface for Sketching 3D Scenes", Proc. SIGGRAPH '96, pp. 163-170, August 1996.
[16] A.G. Requicha, "Representations for rigid solids: theory, methods, and systems", ACM Computing Surveys, 12(4), pp. 437-464, 1980.
[17] R. Zenka, P. Slavik, "New Dimension for Sketches", Proc. Spring Conference on Computer Graphics, pp. 173-179, Bratislava, 2003.
[18] G. Turk, H.Q. Dinh, J.F. O'Brien, G. Yngve, "Implicit surfaces that interpolate", Proc. of the 7th International Conference on Shape Modeling and Applications, pp. 62-71, 2001.
[19] F. Di Fiore, F. Van Reeth, "Employing Approximate 3D Models to Enrich Traditional Computer Assisted Animation", IEEE Computer Animation 2002, pp. 183-190, Switzerland, June 2002.
[20] A. Hilton, D. Beresford, T. Gentils, R. Smith, W. Sun, "Virtual People: Capturing Human Models to Populate Virtual Worlds", Computer Animation, Switzerland, May 1999.
[21] W. Lee, J. Gu, N. Magnenat-Thalmann, "Generating Animated 3D Virtual Humans from Photographs", EUROGRAPHICS 2000, Vol. 19, No. 3, 2000.
[22] J. Stark, A. Hilton, "Model-Based Multiple View Reconstruction of People", Proc. International Conference on Computer Vision (ICCV 2003), 2003.
[23] www.h-anim.org
[24] The MPEG-4 international standard, ISO/IEC 14496-2:2001, Information technology, Coding of audio-visual objects, Part 2: Visual, International Organization for Standardization, Switzerland, 2001.
[25] A. Laurentini, "The visual hull of curved objects", Proc. International Conference on Computer Vision (ICCV'99), Sept. 1999.
[26] G. Slabaugh, B. Culbertson, T. Malzbender, R. Schafer, "A survey of methods for volumetric scene reconstruction from photographs", in K. Mueller and A. Kaufmann (eds.), Proc. of the Joint IEEE TCVG and Eurographics Workshop (Volume Graphics 2001), pp. 81-100, Wien, June 2001.
[27] W. Matusik, C. Buehler, R. Raskar, S.J. Gortler, L. McMillan, "Image-based visual hulls", Proc. ACM SIGGRAPH 2000, 2000.
[28] L. Piegl, W. Tiller, "The NURBS Book", Springer-Verlag, 1997.
[29] G. Farin, "From Conics to NURBS: a tutorial and survey", IEEE Computer Graphics and Applications, pp. 78-86, 1992.
[30] L. Piegl, "On NURBS: a survey", IEEE Computer Graphics & Applications, pp. 55-71, 1991.
[31] S.M. Hu, Y.F. Li, J.X. Zhu, "Modifying the shape of NURBS surfaces with geometric constraints", Computer Aided Design, Vol. 33, pp. 903-912, 2001.