Lecture 9 Body Representations

15-869: Human Motion Modeling and Analysis, Fall 2012
Leonid Sigal

Course Updates
Capture Project
- Now due next Monday (one week from today)
Final Project
- 3 minute pitches are due next class
- You should try to post your idea to the blog and get some feedback before then
Reading Signup
- Has everyone signed up by this point?

Plan for Today's Class
- Review representation of skeleton motion
- Modeling shape and geometry of the body
  - Skinning (rigid, linear, dual quaternion)
  - Data-driven body models (SCAPE)
- Applications and discussion
  - Shape estimation from images
  - Image reshaping

Background Information

Skeleton Animations
A skeleton hierarchy (tree) is the common data structure for representing body pose. The terminology and notation below review how motion capture data is organized.

This section provides a short review of the terminology and notational style that will be used to describe the processes involved in reading and processing motion capture data.

2.1 Terminology

The following list outlines some of the more important keywords that will be used to identify and describe different aspects of a motion:

- Skeleton – The whole character that the motion represents.
- Bone – The basic entity used in representing a skeleton. Each bone represents the smallest segment within the motion that is subject to individual translation and orientation changes during the animation. A skeleton is comprised of a number of bones (usually in a hierarchical structure, as illustrated in Figure 2.1), where each bone can be associated with a vertex mesh to represent a specific part of the character, for example the femur or humerus.
- Channel or Degree of Freedom (DOF) – Each bone within a skeleton can be subject to position, orientation and scale changes over the course of the animation, where each parameter is referred to as a channel (or DOF). The changes in the channel data over time give rise to the animation.
- Frame – Every animation is comprised of a number of frames, where for each frame the channel data for each bone is defined. Motion capture data can be captured at rates as high as 240 frames per second; however, in many applications a rate of 30 or 60 frames per second tends to be the norm. High frame rates are used to capture motions that contain high-frequency content, such as a combination of karate actions. Although in many cases the extra detail cannot be displayed during real-time playback because of the maximum refresh rates of display hardware [7], it can provide useful information for adding motion blur to the animation or simply for motion analysis.

[Figure 2.1: Hierarchical Structure for a Human Figure – a tree connecting the Root joint, through intermediate joints, to the Head, Right Hand, Left Hand, Right Foot and Left Foot.]

2.2 Notation

During the discussion on transforming bones to correctly position and orient them for an animation, matrix arithmetic will be used to demonstrate the motion decoding and displaying algorithms. The nomenclature used when writing matrix expressions is right to left, as illustrated in Equation 2.1, where v' and v are the transformed and original vertices respectively and M is the transform matrix:

v' = M v    (2.1)

(This convention is used over the traditional left-to-right approach, v' = vM, because it relates more directly to the OpenGL graphics pipeline, where vertices are pushed in after the transforms.)

[7] Affordable, everyday monitor refresh rates presently max out at about 100 Hz, and a sustained 60 fps rate in a modern computer game is considered an excellent mark to reach.

Source: Meredith and Maddock, Motion Capture File Formats Explained

Skeleton Animations
Skeleton (tree hierarchy)
- Nodes represent joints
- Joints are local coordinate systems (frames)
- Edges represent bones
Skin
- 3D model driven by the skeleton
Both are typically designed in a reference pose.

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Skeleton in Reference Posture

An N-joint skeleton in the reference pose is given by:
- The root frame expressed with respect to the world: R_0
- Relative joint coordinate frames: R_1, R_2, R_3, ..., R_N

Each frame is a 4x4 rigid transform:

$$R_j = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

The mapping from the world to the coordinate frame of joint j, where p(j) denotes the parent of joint j, is

A_j = R_0 · · · R_p(j) R_j

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Animating Skeleton

Achieved by rotating each joint from its reference posture.
- Note: a joint rotation affects the entire sub-tree (e.g., rotation at the shoulder will induce motion of the whole arm)

The rotation at joint j is described by:

$$T_j = \begin{pmatrix} r_{11} & r_{12} & r_{13} & 0 \\ r_{21} & r_{22} & r_{23} & 0 \\ r_{31} & r_{32} & r_{33} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Animating Skeleton

Mapping from the world to the coordinate frame of joint j in the reference pose:

A_j = R_0 · · · R_p(j) R_j

Mapping from the world to the coordinate frame of joint j in the animated pose:

F_j = R_0 T_0 · · · R_p(j) T_p(j) R_j T_j

Note: if all T_j = I_4x4, then F_j = A_j.

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]
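To make the chain of transforms concrete, here is a minimal numpy sketch that computes the reference maps A_j and the animated maps F_j for a small kinematic chain. All function and variable names are illustrative, not from the lecture.

```python
import numpy as np

def forward_kinematics(R, T, parent):
    """Compose per-joint transforms along the kinematic tree.

    R[j]      : 4x4 frame of joint j relative to its parent (reference pose)
    T[j]      : 4x4 rotation applied at joint j (identity => reference pose)
    parent[j] : index of the parent joint, or -1 for the root

    Returns A[j] = R_0 ... R_{p(j)} R_j  and  F[j] = R_0 T_0 ... R_j T_j.
    """
    n = len(R)
    A, F = [None] * n, [None] * n
    for j in range(n):                      # parents are assumed to come before children
        if parent[j] < 0:
            A[j] = R[j]
            F[j] = R[j] @ T[j]
        else:
            A[j] = A[parent[j]] @ R[j]
            F[j] = F[parent[j]] @ R[j] @ T[j]
    return A, F

# Tiny 3-joint chain: root -> elbow -> wrist, with a 90 degree bend at the elbow.
def trans(x, y, z):
    M = np.eye(4); M[:3, 3] = [x, y, z]; return M

def rot_z(a):
    M = np.eye(4)
    M[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return M

R = [np.eye(4), trans(1, 0, 0), trans(1, 0, 0)]
T = [np.eye(4), rot_z(np.pi / 2), np.eye(4)]
A, F = forward_kinematics(R, T, parent=[-1, 0, 1])
print(F[2][:3, 3])   # wrist moves from (2, 0, 0) in the reference pose to ~(1, 1, 0)
```

With all T_j set to the identity the loop reproduces F_j = A_j, matching the note above.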

Character Rigging
1. Embed the skeleton into a 3D mesh (skin)
2. Assign vertices of the mesh to one or more bones to allow the skin to move with the skeleton

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Rigid Skinning
Assign each vertex i to exactly one bone/joint j:

v̂_i = F_j (A_j)^(-1) v_i

where
- v_i  – position of the vertex in the reference mesh
- A_j  – frame of joint j in the reference mesh
- F_j  – frame of joint j in the animated mesh
- v̂_i  – position of the vertex in the animated mesh

Rigid skinning requires an assignment of the vertices of the 3D mesh to joints of the skeleton:
- Often done manually
- Assigning each vertex to the joint influencing the closest bone is usually a good automatic guess

Rigid Skinning Limitations
- Works well away from the ends of the bone (joints)
- Leads to unrealistic, non-smooth deformations near joints (visible when comparing the reference pose with an animated pose)

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Linear Blend Skinning
Each vertex is assigned to multiple bones/joints:

v̂_i = Σ_{j=1..N} w_ji F_j (A_j)^(-1) v_i

where
- v_i  – position of vertex i in the reference mesh
- A_j  – frame of joint j in the reference mesh
- F_j  – frame of joint j in the animated mesh
- v̂_i  – position of vertex i in the animated mesh
- w_ji – influence of joint j on the vertex (painted on, or based on distance to the joints)

The weights need to be convex:

Σ_{j=1..N} w_ji = 1,   w_ji ≥ 0
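A minimal numpy sketch of the LBS formula above, with vertices in homogeneous coordinates; the toy weights and transforms are made-up examples, not from the lecture.

```python
import numpy as np

def linear_blend_skinning(verts, weights, A, F):
    """v_hat_i = sum_j w_ji * F_j * inv(A_j) * v_i  (vertices as homogeneous rows).

    verts   : (V, 4) reference-pose vertices [x, y, z, 1]
    weights : (V, N) convex weights, rows sum to 1, entries >= 0
    A, F    : lists of N 4x4 joint frames (reference and animated pose)
    """
    # Precompute the per-bone skinning matrices M_j = F_j * inv(A_j).
    M = np.stack([Fj @ np.linalg.inv(Aj) for Aj, Fj in zip(A, F)])   # (N, 4, 4)
    # Blend the matrices per vertex, then apply them to the vertices.
    M_blend = np.einsum('vn,nij->vij', weights, M)                    # (V, 4, 4)
    return np.einsum('vij,vj->vi', M_blend, verts)

# Toy example: two bones, one vertex fully on bone 0, one split 50/50.
A = [np.eye(4), np.eye(4)]
F = [np.eye(4), np.eye(4)]
F[1][:3, 3] = [0.0, 1.0, 0.0]                          # bone 1 translated up by 1
verts = np.array([[0.0, 0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0, 1.0]])
weights = np.array([[1.0, 0.0],
                    [0.5, 0.5]])
print(linear_blend_skinning(verts, weights, A, F))     # second vertex moves up by 0.5
```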

Linear Blend Skinning Weights
Example: blending between the lower-arm and upper-arm bones near the elbow:
- w_larm = 1 on the lower arm, w_uarm = 1 on the upper arm
- w_larm + w_uarm = 1 in the blend region around the joint

Why must the weights be convex? When all joints are in the reference pose (F_j = A_j), each term of v̂_i = Σ_j w_ji F_j (A_j)^(-1) v_i reduces to w_ji v_i, so the weights must sum to 1 for the reference mesh to be reproduced; non-negativity keeps each vertex an interpolating (convex) combination of its rigidly transformed copies.

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Linear Blend Skinning Example

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Linear Blend Skinning Limitations
Twisting a joint by 180 degrees produces a collapsing effect where the skin collapses to a single point (the "candy-wrapper" artifact).

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Why Does LBS Produce Artifacts?

v̂_i = Σ_{j=1..N} w_ji F_j (A_j)^(-1) v_i = ( Σ_{j=1..N} w_ji M_j ) v_i,   where M_j = F_j (A_j)^(-1)

Each vertex is therefore transformed by a single blended matrix, M_blend = Σ_j w_ji M_j. Linearly averaging two rigid transforms M_1 and M_2 does not, in general, produce a rigid transform: M_blend drifts off the manifold of rigid motions (it can shrink and skew), which is exactly what causes the collapsing and candy-wrapper artifacts.

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

SE(3) Intrinsic Blending
Blend the transformations intrinsically, on the manifold of rigid motions SE(3), so that the blended transform M_blend is itself rigid.

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Dual Quaternion Skinning [Kavan et al., ACM TOG 2008]
A closed-form approximation of SE(3) blending.

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Regular Quaternions
Remember: Euler angles describe a rotation in 3D as a sequence of rotations about the coordinate axes:

$$R_z(\phi) = \begin{pmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}, \quad
R_x(\psi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{pmatrix}$$

applied as (X', Y', Z')^T = R (X, Y, Z)^T.

Note: I've overloaded the use of \phi in these slides. Earlier I used \phi to denote the original orientation; here I am using it to denote the rotation about the Z-axis.

Interpolating Euler angles has similar issues as LBS skinning.

Regular Quaternions
- Quaternions are an alternative representation for orientations (defined using complex algebra)
- An orientation is represented by a 4-tuple (roughly speaking, one value for the amount of rotation and three for the axis):

  q = w + xi + yj + zk

- However, a rotation has only 3 degrees of freedom
- Hence, to be a valid rotation, the quaternion must have unit norm: ||q|| = 1

Interpretation: valid rotations (unit quaternions) live on a sphere in 4D space.
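A small numpy sketch of these ideas: representing a rotation as a unit quaternion, applying it to a point, and interpolating on the 4D unit sphere (slerp). The helper names are illustrative, not the lecture's notation.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])  # [w, x, y, z]

def quat_rotate(q, v):
    """Rotate a 3D point v by a unit quaternion q = [w, x, y, z]."""
    w, c = q[0], q[1:]
    return v + 2.0 * np.cross(c, np.cross(c, v) + w * v)

def slerp(q0, q1, t):
    """Spherical interpolation: stays on the unit sphere of valid rotations."""
    if np.dot(q0, q1) < 0:          # take the shorter arc (q and -q are the same rotation)
        q1 = -q1
    omega = np.arccos(np.clip(np.dot(q0, q1), -1.0, 1.0))
    if omega < 1e-8:
        return q0
    return (np.sin((1 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

q0 = quat_from_axis_angle([0, 0, 1], 0.0)
q1 = quat_from_axis_angle([0, 0, 1], np.pi / 2)
q_half = slerp(q0, q1, 0.5)                              # 45 degrees about z
print(quat_rotate(q_half, np.array([1.0, 0.0, 0.0])))    # ~[0.707, 0.707, 0]
```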

Dual Quaternions [Clifford 1873]
- Dual quaternions are able to model rigid transformations (rotation + translation)
- They map out a 6-dimensional manifold in an 8-dimensional space
- They need to be unit length to represent a valid rigid transform

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]

Dual Quaternion Skinning [Kavan et al., ACM TOG 2008]
A closed-form approximation of SE(3) blending: each bone transform is converted to a unit dual quaternion, the dual quaternions are blended linearly with the skinning weights, renormalized, and applied to the vertex.

Comparison: Linear Blend Skinning vs. Dual Quaternion Skinning – dual quaternion skinning avoids the collapsing and candy-wrapper artifacts near twisted and strongly bent joints.

[Slide content and illustrations from Ladislav Kavan and Olga Sorkine]
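Below is a rough numpy sketch (not code from the paper) of dual quaternion linear blending, the operation behind dual quaternion skinning: each bone's rigid transform is encoded as a unit dual quaternion (q_r, q_d), the dual quaternions are blended with the skinning weights, renormalized, and applied to the vertex. Names and the toy example are illustrative.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, v1 = a[0], a[1:]
    w2, v2 = b[0], b[1:]
    return np.concatenate([[w1 * w2 - v1 @ v2], w1 * v2 + w2 * v1 + np.cross(v1, v2)])

def dual_quat_from_rt(q_rot, t):
    """Unit dual quaternion (q_r, q_d) from a unit rotation quaternion and a translation."""
    q_d = 0.5 * quat_mul(np.concatenate([[0.0], t]), q_rot)
    return q_rot, q_d

def dq_skin_vertex(v, weights, dual_quats):
    """Dual quaternion linear blending for one vertex."""
    # Linear blend of the 8D dual quaternions, then renormalize by |real part|.
    b_r = sum(w * q_r for w, (q_r, q_d) in zip(weights, dual_quats))
    b_d = sum(w * q_d for w, (q_r, q_d) in zip(weights, dual_quats))
    norm = np.linalg.norm(b_r)
    b_r, b_d = b_r / norm, b_d / norm
    # Apply: rotate by the real part, translate by 2 * (dual * conj(real)).
    w, c = b_r[0], b_r[1:]
    v_rot = v + 2.0 * np.cross(c, np.cross(c, v) + w * v)
    t = 2.0 * (w * b_d[1:] - b_d[0] * c + np.cross(c, b_d[1:]))
    return v_rot + t

# Toy example: blend the identity with a 180-degree twist about the x-axis, 50/50.
dq0 = dual_quat_from_rt(np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3))
dq1 = dual_quat_from_rt(np.array([0.0, 1.0, 0.0, 0.0]), np.zeros(3))   # 180 deg about x
v = np.array([0.0, 1.0, 0.0])
print(dq_skin_vertex(v, [0.5, 0.5], [dq0, dq1]))
# LBS would collapse this vertex toward the twist axis; the dual quaternion blend
# instead gives a 90-degree rotation about x: ~[0, 0, 1].
```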

Skinning Limitations
- All skinning methods assume a fixed relationship between skeleton motion and the mesh
- Humans are more complex (e.g., muscles lead to local deformations)
- Skinning only allows animation of a predefined body geometry (created by an animator); it does not help us create this geometry

Data-driven Body Shape Models
Idea: scan real people and figure out how their bodies deform and what body types are possible. [Scans: Cyberware]

Pipeline:
- Register all the scans
- Create a (typically parametric) model of shape
- Use that model for an interesting application

Registration / Correspondence [Allen et al., 2003]

Marker-based Non-rigid Iterative Closest Point Registration
Goal: fit a template mesh to a triangulated 3D point cloud.

For the purpose of mesh registration, vertex locations are expressed in homogeneous coordinates, v_i = [x_i, y_i, z_i, 1]^T, and the registration amounts to estimating a 4x4 affine transform T_i for every vertex of the template. The error function is a weighted sum of three error terms:

E = α E_d + β E_s + γ E_m
    (data term, smoothness term, marker anchor term)

Data term. The objective is for the aligned template surface to be as close as possible to the target surface: vertices of the template mesh are encouraged to move toward the closest respective point on the target mesh in order to acquire the geometry of the raw scan. The data term E_d penalizes the remaining gap between the transformed vertices T_i v_i and the target surface D:

E_d = Σ_{i=1..V} w_i gap²(T_i v_i, D)    (3.1)

where V is the number of vertices of the template mesh T and w_i is used to control the influence of each vertex in the presence of holes in the target mesh. The function gap(·, ·) computes the distance from a point to the closest compatible vertex of the target surface, implemented using a spatial data structure for computational efficiency. The compatibility restriction, measured in terms of the angle between the surface normals, safeguards against vertices being matched to back-facing surfaces; the distance between them is also restricted to a threshold. Together these make the measure robust for dealing with holes in the scan.

Smoothness term. Fitting points to a wavy surface independently, using the closest-point strategy alone, introduces folding and stretching artifacts. The solution can be regularized by adding a deformation smoothness constraint E_s that requires the affine transformations applied to adjacent vertices of the template to be as similar as possible:

E_s = Σ_{(v_i, v_j) ∈ edges(T)} ||T_i - T_j||²_F    (3.2)

This ensures that the transforms on nearby vertices are similar (it induces local smoothness).

Marker term. With large surface deformations of the target mesh with respect to the template, the data and smoothness constraints are often insufficient for achieving a correct alignment. To help guide the deformation of the template mesh into place, points m_i on the target surface that correspond to known points on the template are identified, and the correspondences at the marker locations are encouraged to be correct by minimizing the distance between each marker's location on the template surface and on the target surface:

E_m = Σ_{i=1..M} ||T_{κ_i} v_{κ_i} - m_i||²

where κ_i are the vertex indices of the template mesh that correspond to the markers on the target mesh (Figure 3.4a).

Optimization. The energy is minimized by gradient descent, initialized using a global transformation that aligns the centers of mass of the two meshes and their overall orientation and scale. The optimization is run with weighting schedules that emphasize first matching the landmarks, then overall shape fitting (600 iterations with α = 2, β = 10), and finally fine geometry (400 iterations with α = 1, β = 0.1, γ = 0.1).
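To make the objective concrete, here is a rough numpy/scipy sketch of evaluating E = α E_d + β E_s + γ E_m for a candidate set of per-vertex transforms. It simplifies the closest-compatible-point search to a plain nearest-neighbour query (the real gap(·,·) also checks normal compatibility and a distance threshold), and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_energy(T, verts, edges, target_pts, markers, kappa,
                        alpha=1.0, beta=0.1, gamma=0.1):
    """E = alpha*E_d + beta*E_s + gamma*E_m (in the style of Allen et al. 2003, simplified).

    T          : (V, 4, 4) per-vertex affine transforms
    verts      : (V, 4) template vertices in homogeneous coordinates
    edges      : list of (i, j) index pairs, edges of the template mesh
    target_pts : (P, 3) points of the target scan
    markers    : (M, 3) marker positions on the target
    kappa      : (M,) template vertex indices corresponding to the markers
    """
    warped = np.einsum('vij,vj->vi', T, verts)[:, :3]

    # Data term: squared distance to the closest target point.
    tree = cKDTree(target_pts)
    d, _ = tree.query(warped)
    E_d = np.sum(d ** 2)

    # Smoothness term: adjacent vertices should deform similarly (Frobenius norm).
    E_s = sum(np.sum((T[i] - T[j]) ** 2) for i, j in edges)

    # Marker term: known correspondences should land on their targets.
    E_m = np.sum((warped[kappa] - markers) ** 2)

    return alpha * E_d + beta * E_s + gamma * E_m
```

In the actual method this energy is minimized by gradient descent over the T_i under the coarse-to-fine weighting schedule described above.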

Registration

Figure 3.5: Mesh Registration Results. Example meshes obtained using the mesh registration process, where the template mesh (top left) has been brought into alignment with the scans. By construction, these meshes maintain full per-vertex and per-triangle correspondence with the template mesh and are hole-filled. Given a segmentation of the template mesh into 15 body parts, the segmentation transfers naturally to the example meshes using the learned correspondences; the transfer is illustrated by assigning a different color to each body part.

[Image from Alexandru Balan]

Mesh Deformation Gradients [Sumner and Popovic, 2004]

Consider a triangle t with vertices x_{t,1}, x_{t,2}, x_{t,3} in the source (template) mesh and y_{t,1}, y_{t,2}, y_{t,3} in the target mesh. Subtracting the first vertex's equation from the others to eliminate the translation component gives

A_t (x_{t,k} - x_{t,1}) = y_{t,k} - y_{t,1},   k ∈ {2, 3},

which can be rewritten in matrix form as

A_t [Δx_{t,2}, Δx_{t,3}] = [Δy_{t,2}, Δy_{t,3}],    (3.8)

where the Δ operator computes an edge vector: Δx_{t,k} = x_{t,k} - x_{t,1}. The deformation gradient for each triangle is given by A_t, a 3x3 transform per triangle: the non-translational component of an affine transformation, which encodes only the local change in orientation, scale and skew induced by the deformation of the triangle edges (Figure 3.7). The set of deformation gradients tabulated for each triangle provides an intrinsic representation of the mesh geometry. Two problems therefore need to be addressed: how to compute the deformations for example meshes, and how to reconstruct meshes from shape gradients.

Estimation. Since for a given triangle only two of its edges actually constrain the shape gradient in Eq. 3.8, A_t is under-constrained (not uniquely determined). Sumner and Popović (2004) propose adding an ad hoc fourth vertex to each triangle, along the direction perpendicular to the triangle plane, to implicitly add a third constraint for the subspace orthogonal to the triangle. A more principled alternative is to regularize the solution by introducing a smoothness constraint that prefers similar deformations in adjacent triangles, formulated as a least-squares linear regression problem solved for all deformation gradients at once:

arg min_{A_1, ..., A_T}  Σ_{t=1..T} Σ_{k=2,3} ||A_t Δx_{t,k} - Δy_{t,k}||²  +  w_s Σ_{t1,t2 adj} ||A_{t1} - A_{t2}||²_F    (3.9)

This approach effectively removes high-frequency noise from the mesh geometry and leads to better generalization when modeling the deformations in subsequent steps.

Figure 3.7: Deformations Based on Shape Gradients. Shape deformation gradients are the non-translational component A_t of an affine transformation that aligns the edge vectors Δx_{t,k} of a source mesh (left) to a target mesh (right).

[Image from Alexandru Balan]
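A small numpy sketch of computing a single per-triangle deformation gradient A_t. It uses the "extra direction along the triangle normal" trick mentioned above to make the system full rank; the jointly regularized estimation of Eq. 3.9 would instead solve for all triangles at once. Function names are illustrative.

```python
import numpy as np

def triangle_frame(p1, p2, p3):
    """Edge vectors plus a unit normal as a third column (the 'fourth vertex' trick)."""
    e2, e3 = p2 - p1, p3 - p1
    n = np.cross(e2, e3)
    n /= np.linalg.norm(n)
    return np.column_stack([e2, e3, n])

def deformation_gradient(src_tri, tgt_tri):
    """3x3 gradient A_t with A_t [dx2, dx3, n_src] = [dy2, dy3, n_tgt]."""
    X = triangle_frame(*src_tri)
    Y = triangle_frame(*tgt_tri)
    return Y @ np.linalg.inv(X)

# Toy check: uniformly scaling a triangle by 2 gives A_t = 2*I within the triangle plane.
src = [np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0])]
tgt = [2 * p for p in src]
print(deformation_gradient(src, tgt))
```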

Mesh Deformation Gradients [Sumner and Popovic, 2004]

Applying the deformation gradients directly typically leads to inconsistent edges and structures: because the deformation gradients contain no notion of translation or connectivity, transforming the individual triangles of the template by the corresponding A_t produces inconsistent edge vectors and unknown placement with respect to the neighboring triangles.

Reconstruction. Given a set of deformation gradients, reconstructing a consistent mesh requires solving a least-squares problem over the shared vertex coordinates:

arg min_{y_1, ..., y_V}  Σ_{t=1..T} Σ_{k=2,3} ||A_t Δx_{t,k} - Δy_{t,k}||²

[Image from Alexandru Balan]
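A sketch of that reconstruction step: build a sparse linear system whose rows demand that each reconstructed edge match A_t applied to the corresponding template edge, pin one vertex to remove the global translation ambiguity, and solve per coordinate. This is an illustrative implementation, not the original code.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def reconstruct_vertices(tris, grads, src_verts, anchor=0):
    """Solve  y_{t,k} - y_{t,1} = A_t (x_{t,k} - x_{t,1})  in least squares.

    tris      : (T, 3) vertex indices per triangle
    grads     : (T, 3, 3) per-triangle deformation gradients A_t
    src_verts : (V, 3) template vertex positions
    """
    V = len(src_verts)
    rows = 2 * len(tris) + 1
    A = lil_matrix((rows, V))
    b = np.zeros((rows, 3))
    r = 0
    for (i1, i2, i3), At in zip(tris, grads):
        for ik in (i2, i3):
            A[r, ik], A[r, i1] = 1.0, -1.0
            b[r] = At @ (src_verts[ik] - src_verts[i1])   # desired edge vector
            r += 1
    A[r, anchor] = 1.0                                     # pin one vertex at its template position
    b[r] = src_verts[anchor]
    A = A.tocsr()
    # Solve independently for the x, y and z coordinates of all shared vertices.
    return np.column_stack([lsqr(A, b[:, c])[0] for c in range(3)])
```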

SCAPE: Shape Completion and Animation of PEople [Anguelov et al., ACM TOG, 2005]
Key idea: factor the mesh deformation for a person into:
(1) Articulated rigid deformations
(2) Non-rigid (pose-dependent) deformations
(3) Body shape deformations
This gives a parametric model of any person in any pose!

Articulated Rigid Deformation [Anguelov et al., ACM TOG, 2005]
This is basically rigid skinning: R_p(θ) is the rigid rotation of body part p, determined by the joint angles θ.

[Image from Alexandru Balan]

Non-rigid Deformations [Anguelov et al., ACM TOG, 2005]

Figure 3.8: Articulated Rigid and Non-rigid Deformations. (a) Template mesh. (b) Example scanned mesh for which the part-based rotation matrices R_{p[t]} are computed. (c) Reconstructed mesh that assumes only articulated rigid deformations, using the rotation matrices as shape gradients (Equation 3.12). Since non-rigid deformations are not captured at this stage, significant artifacts remain, mainly at the joints. The torso and shoulders retain the orientation of the template as the arms are raised, making the skin fold on the boundary between adjacent body parts. (d) Reconstructed mesh using articulated rigid and predicted non-rigid pose-dependent deformations (Equation 3.16), closely resembling the example mesh in (b), albeit slightly smoother in regions such as armpits, abdomen and knees.

From a dataset of one person in different poses, learn residual deformation matrices Q^i_t for each mesh Y^i directly from training data, using the same idea as in Equation 3.9:

arg min_{Q^i_1, ..., Q^i_T}  Σ_{t=1..T} Σ_{k=2,3} ||R^i_{p[t]} Q^i_t Δx_{t,k} - Δy^i_{t,k}||²  +  w_s Σ_{t1,t2 adj, p[t1]=p[t2]} ||Q^i_{t1} - Q^i_{t2}||²_F    (3.14)

As before, since the Q matrices are not fully constrained by the triangle edge vectors, the solution is regularized by adding smoothing constraints that require similar deformations in adjacent triangles; because of the foldings occurring at the boundary between adjacent body parts, the smoothing is restricted to pairs of adjacent triangles on the same body part (p[t1] = p[t2]).

[Image from Alexandru Balan]

Non-rigid Deformations [Anguelov et al., ACM TOG, 2005]
Let the non-rigid deformations be linear functions of pose.
- Models bulging of muscles, etc.

[Image from Alexandru Balan]

Body Shape [Anguelov et al., ACM TOG, 2005]
From a dataset of many people in one pose, learn the space of shape variations:
- Extract mesh deformation gradients
- Do PCA on them (vectorizing them first)
- 6 PCA dimensions can capture > 80% of the variation

[Image from Alexandru Balan]
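A sketch of that shape-space construction in plain numpy: vectorize each subject's per-triangle deformation gradients, run PCA via the SVD, and keep the leading components. The data array and names are hypothetical.

```python
import numpy as np

def build_shape_space(S, n_components=6):
    """S: (num_subjects, T, 3, 3) per-triangle shape deformation gradients."""
    X = S.reshape(len(S), -1)                 # vectorize: one row of 9*T numbers per subject
    mu = X.mean(axis=0)
    U, sing, Vt = np.linalg.svd(X - mu, full_matrices=False)
    basis = Vt[:n_components]                 # principal directions in gradient space
    explained = (sing[:n_components] ** 2).sum() / (sing ** 2).sum()
    return mu, basis, explained

def shape_from_coeffs(mu, basis, beta, num_tris):
    """Map low-dimensional shape coefficients beta back to per-triangle 3x3 gradients."""
    return (mu + beta @ basis).reshape(num_tris, 3, 3)
```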

Body Shape
It is also possible to create semantic parameters and regress from them to the PCA coefficients. (Demo)

[Images from Peng Guan]

SCAPE: Putting It All Together [Anguelov et al., ACM TOG, 2005]
- R_p(θ) – articulated rigid deformation, θ: joint angles
- Q_t(R_p(θ)) – non-rigid, pose-dependent deformation
- S_t(v) – body shape deformation, v: shape parameters

We can simply concatenate all the deformations by multiplying them together.
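A brief sketch of that concatenation, under the assumption that the three factors are composed per triangle and the mesh is then recovered with the least-squares step sketched earlier (reconstruct_vertices is the hypothetical helper from above; the slide only states that the deformations are multiplied together, so the exact ordering shown here is one consistent choice).

```python
import numpy as np

def scape_deformations(num_tris, R_part, part_of_tri, Q, S):
    """Per-triangle 3x3 deformation D_t = R_{p[t]} * S_t * Q_t."""
    return np.stack([R_part[part_of_tri[t]] @ S[t] @ Q[t] for t in range(num_tris)])

# Synthesizing a mesh for a given pose (theta) and shape (v):
#   R_part = rigid part rotations derived from theta      (articulated rigid deformation)
#   Q      = linear-in-pose non-rigid deformations         Q_t(R_p(theta))
#   S      = shape deformations from the PCA shape space   S_t(v)
#   grads  = scape_deformations(num_tris, R_part, part_of_tri, Q, S)
#   verts  = reconstruct_vertices(tris, grads, template_verts)   # least-squares step from above
```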

SCAPE Model

[Video courtesy of Peng Guan]

Applications: Shape Estimation [Sigal et al., NIPS 2007]
Goal: learn a functional mapping from image features to the pose and shape parameters of the SCAPE model, from synthesized input-output pairs.
- Remember how Kinect works? Let's try that for shape.
- The mapping g goes from feature space to SCAPE parameter space:

  SCAPE parameters = g(features)
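A sketch of the discriminative mapping g: generate synthetic (feature, parameter) pairs by rendering SCAPE bodies, then fit a regressor from features to SCAPE pose and shape parameters. The original work learns a more expressive regressor; plain ridge regression is used here only to make the idea concrete, and the feature choice is an assumption.

```python
import numpy as np

def fit_mapping(features, params, lam=1e-3):
    """Ridge regression  g(f) = W^T [f; 1]  from image features to SCAPE parameters.

    features : (N, D) image features per synthesized training image (e.g. silhouette descriptors)
    params   : (N, P) ground-truth SCAPE pose + shape coefficients for those images
    """
    X = np.hstack([features, np.ones((len(features), 1))])     # append a bias column
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ params)
    return W

def predict_params(W, feature):
    """Apply the learned mapping to the features of a new image."""
    return np.append(feature, 1.0) @ W
```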

Proof-of-Concept Results [Sigal et al., NIPS 2007]
Can we estimate weight loss discriminatively from monocular images ("before" and "after" photos)?
- Weight is obtained from the volume of the estimated body shape, assuming the density of water.
- Example 1: estimated weight loss 22 lb, reported weight loss 24 lb
- Example 2: estimated weight loss 32 lb, reported weight loss 64 lb
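The conversion behind these numbers is simple: integrate the volume of the fitted body mesh in each image and multiply by the density of water. A short sketch (divergence-theorem volume of a closed triangle mesh; names and units are illustrative):

```python
import numpy as np

def mesh_volume(verts, tris):
    """Volume of a closed triangle mesh (verts in meters, tris as an (T, 3) int array)."""
    v1, v2, v3 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    return np.abs(np.einsum('ij,ij->i', v1, np.cross(v2, v3)).sum()) / 6.0

def weight_kg(verts, tris, density=1000.0):     # density of water, kg / m^3
    return mesh_volume(verts, tris) * density

# weight_loss_lb = (weight_kg(before_verts, tris) - weight_kg(after_verts, tris)) * 2.2046
```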

Silhouettes Are Not Enough [Guan et al., ICCV, 2009]
- Silhouettes are ambiguous
- Adding edges helps
- They also add shading cues

Estimating Pose and Shape from an Image [Guan et al., ICCV, 2009]


Fun Results [Guan et al., ICCV, 2009]
- Internet images
- Paintings

Parametric Body Reshaping [Zhou et al., ACM TOG, 2010]