Image Warping for Shape Recovery and Recognition

Alan L. Yuille, Mario Ferraro∗ and Tony Zhang∗∗

Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115, USA.
∗ Dipartimento di Fisica Sperimentale, Universita' di Torino, via Giuria 1, 10125 Torino, Italy.
∗∗ Division of Applied Sciences, Harvard University, Cambridge, MA 02138, USA.

In Computer Vision and Image Understanding. 1998.
Abstract

We demonstrate that, for a large class of reflectance functions, there is a direct relationship between image warps and the corresponding geometric deformations of the underlying three-dimensional objects. This helps explain the hidden geometrical assumptions in object recognition schemes which involve two-dimensional image warping computed by matching image intensity. In addition, it allows us to propose a novel variant of shape from shading which we call shape from image warping. The idea is that the three-dimensional shape of an object is estimated by determining how much the image of the object is warped with respect to the image of a known prototype shape. Detecting the image warp relative to a prototype of known shape therefore allows us to reconstruct the shape of the imaged object. We derive properties of these shape warps and illustrate the results by recovering the shapes of faces.
Symbols

Roman characters in equations:
~a denotes a vector.
√b denotes the square root of b.
∫ denotes integration.

Other mathematical symbols:
~∇ denotes the mathematics symbol "grad".
∂ is the mathematics symbol for partial derivative.
Σ denotes summation.
∈ denotes "is a member of".
∞ denotes infinity.
⊂ denotes "is a subset of".

Greek letters: α, φ, λ, ψ denote alpha, phi, lambda and psi.
1 Introduction
Recent work on object recognition [5] uses two-dimensional warping of intensity images to allow for the changing three-dimensional geometry of the objects. For example, the image of a viewed object will change with the angle of view. For small changes of angle, or small deformations of shape, the change can be modeled as a spatial warp of the intensity image. The approach makes the implicit assumption that we can model geometrical changes in three dimensions by warps in the two-dimensional image plane. When is this assumption valid?

In this paper we show that, for a large class of reflectance functions, there is a direct relationship between image warps and geometrical changes of the underlying three-dimensional objects. We also demonstrate that not all warps are physically reasonable and determine constraints that physical warps must satisfy. Our analysis explains the hidden assumptions used by Hallinan and other workers on object recognition [5]. It allows us to understand the relationship to other object recognition theories based on three-dimensional geometry.

In addition, it offers a novel approach to shape from shading [6], [7], [9]. We call our method shape from warping. By contrast to standard techniques, our approach works by assuming prototype models of shape and by estimating the warp between the input image and the image of the prototype. From these image warps we show how to deduce the three-dimensional shape. This method allows us to introduce object specific knowledge into the shape estimation. We will prove that this approach works without needing to know the precise reflectance function of the object.

Our approach has some similarities to recent work [1], [2], which shows that shape can be recovered from an intensity image provided prior assumptions are made about the shape class. The techniques used in this paper, however, are very different. Our methods make fewer assumptions about the viewed object and hence are more general. Atick's [1] work is specific to faces and uses strong prior knowledge about their geometric shape obtained by doing principal component analysis on a large database of range data. The Epstein work [2] typically assumes several images of the object under different lighting conditions; when applied to a single image it requires that the object is symmetric, in addition to using a prototype. Moreover, both Atick and Epstein assume specific reflectance functions, namely Lambertian.

This paper is organized as follows. Section (2) demonstrates the basic relationship between two-dimensional image warping and three-dimensional geometrical variations. In Section (3) we describe how the surface integrability condition puts constraints on the class of warps by requiring that they generate consistent surfaces. Section (4) shows how an arbitrary twice differentiable surface can be obtained by warping a paraboloid. In Section (5) we discuss the underlying assumptions of object recognition theories which involve two-dimensional image warping. Section (6) illustrates shape from warping by recovering the shape of faces.
2 Relating Image and Geometrical Warps
Suppose we have a surface, or object, of the form z(~x) = f(~x; α). (More precisely, we consider a Monge patch, which is assumed to be smooth, C∞, or at least C².) This surface corresponds to an intensity image I(~x). The image is related to the surface by a reflectance function, see [6]. For example, it is common to assume that:

I(~x) = R(~n(~x), ~k, ~s),    (1)

where R is the reflectance function, ~n(~x) are the surface normals, ~k is the viewer direction, and ~s is the light source direction. (This is the most general class of reflectance function in common use and includes, for example, the Phong model as a special case provided the albedo is constant.)

Our key result is that warping the image corresponds to warping the surface normals of the underlying surface provided the reflectance function of the surface is of the form specified by equation (1). More precisely, suppose we apply a warp φ(.) to the image by the mapping:

φ : ~x → φ(~x),    (2)

then this induces a warp:

I(~x) ↦ I(φ(~x)),    (3)

to the image and, by equation (1), a warp:

~n(~x) ↦ ~n(φ(~x)),    (4)

to the surface normals of the surface.

This result can be used in two ways. Firstly, it can be used to explain the hidden geometrical assumptions of theories of object recognition which make use of two-dimensional image warps [5], see section (5) for more details. Secondly, the result can be exploited to develop a novel approach to shape from shading which we call shape from warping. If the warping assumption is valid then we can use the image warps to recover three-dimensional shape assuming that we have a known three-dimensional prototype with surface normals {~n0(~x)} and a corresponding image I0(~x). For a given image I(~x) we find the warp φ(~x) such that I(~x) = I0(φ(~x)) and hence determine the shape to be ~n(~x) = ~n0(φ(~x)).

It should be emphasized that the approach assumes that the reflectance function is of form I(~x; α) = R(~n(~x; α), ~k, ~s). This will not be true if the object has significant albedo changes but may be a reasonable approximation for many objects. (We note that almost all shape from shading algorithms assume constant albedo.) This seems to be true for our experiments on faces, see section (6).
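To make this recipe concrete, the following Python sketch builds a synthetic prototype surface, computes its normals, and resamples them through a given warp to obtain ~n(~x) = ~n0(φ(~x)). The grid size, the paraboloid prototype, and the scaling warp are illustrative choices of ours and are not the data or code used in the experiments of section (6); in practice the warp would first be estimated by intensity matching, as described there.

    import numpy as np
    from scipy.ndimage import map_coordinates

    # Prototype shape (a hypothetical paraboloid) and its normals ~n0 = (-fx, -fy, 1)/norm.
    n_grid = 128
    xs = np.linspace(-1.0, 1.0, n_grid)
    X, Y = np.meshgrid(xs, xs)                       # X varies along axis 1, Y along axis 0
    f0 = 1.0 - 0.5 * X**2 - 0.8 * Y**2               # prototype height function f0(x, y)
    fy, fx = np.gradient(f0, xs, xs)                 # axis 0 is y, axis 1 is x
    norm = np.sqrt(1.0 + fx**2 + fy**2)
    n0 = np.stack([-fx / norm, -fy / norm, 1.0 / norm])

    # A warp phi(x) = x / alpha (a two-dimensional version of the scaling warp of section 2.1).
    alpha = 1.3
    phi_x, phi_y = X / alpha, Y / alpha

    # Warped normal field ~n(x) = ~n0(phi(x)), obtained by resampling n0 at phi(x).
    def sample(field, px, py):
        """Bilinearly sample a grid-valued field at continuous points (px, py)."""
        col = (px - xs[0]) / (xs[1] - xs[0])         # x -> fractional column index
        row = (py - xs[0]) / (xs[1] - xs[0])         # y -> fractional row index
        return map_coordinates(field, [row.ravel(), col.ravel()], order=1).reshape(px.shape)

    n_warped = np.stack([sample(n0[c], phi_x, phi_y) for c in range(3)])
    print(n_warped.shape)                            # (3, 128, 128)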
2.1 A Simple Example
Consider a simple one-dimensional example. Let us assume that the objects are circles centered on the origin. This gives

z(x; α) = √(α² − x²),  −α ≤ x ≤ α,    (5)

with zx(x; α) = −x/√(α² − x²). The surface normal is

~n(x; α) = (1, −zx) / √(1 + zx²).    (6)

Hence we find

~n(x; α) = √(1 − (x/α)²) (1, (x/α)/√(1 − (x/α)²)).    (7)

If we assume a Lambertian model, with source direction ~s = (s, t), then we have an observed image:

J(x; α) = √(1 − (x/α)²) {s + t (x/α)/√(1 − (x/α)²)}.    (8)

Let us define J0(x) ≡ J(x; 1). Then it follows directly that J(x; α) = J0(x/α). In other words, the image of any given circle can be obtained by spatially warping the prototype image J0(x) and so the warping assumption is valid. In this case the spatial warp is φα(x) = x/α. Finding the image warp, by intensity matching, will estimate α and so determine the shape.

But what class of image warps are allowable? By the warping assumption they must correspond to warping the surface normals of a prototype shape, and not all such warps will form consistent surfaces. We investigate this in the next section.
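This example is easy to check numerically. The short Python sketch below verifies the identity J(x; α) = J0(x/α) and then recovers α by brute-force matching against warped versions of the prototype image; the source coefficients s, t and the true radius are arbitrary values chosen for illustration.

    import numpy as np

    def J(x, alpha, s=0.6, t=0.3):
        """Image of a circle of radius alpha under the Lambertian model of eq. (8)."""
        u = x / alpha
        return np.sqrt(1.0 - u**2) * (s + t * u / np.sqrt(1.0 - u**2))

    alpha_true = 1.5
    x = np.linspace(-0.45, 0.45, 201)                 # stay well inside every circle tested
    assert np.allclose(J(x, alpha_true), J(x / alpha_true, 1.0))    # J(x; a) = J0(x / a)

    # Shape from warping in miniature: choose the alpha whose warped prototype image
    # J0(x / a) best matches the observed image (brute force instead of gradient descent).
    alphas = np.linspace(0.5, 3.0, 251)
    errors = [np.sum((J(x, alpha_true) - J(x / a, 1.0)) ** 2) for a in alphas]
    print("estimated alpha:", alphas[int(np.argmin(errors))])       # close to 1.5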
3 Normal Consistency
The preceding section showed that image warps often correspond to warps of surface normals. In what situations will this give a consistent surface? Warps transform a given set of surface normals N0 into a different set, say N, but this does not guarantee the existence of a consistent surface with these normals. To ensure the existence of such a surface a further condition must be satisfied, which is
referred to as the integrability condition [10]. Let us derive the surface integrability condition (see, for example, [7]). Suppose we have a surface z = f(x, y). Then its surface normals are defined by:

~n(x, y) = (−fx, −fy, 1) / {1 + ~∇f · ~∇f}^(1/2),    (9)

where ~∇f = (fx, fy). We can write ~n = (n1, n2, n3) and observe that:

n1/n3 = −fx,  n2/n3 = −fy.    (10)

Hence we derive the surface integrability condition:

∂/∂y (n1/n3) = ∂/∂x (n2/n3).    (11)

We have shown that this is a necessary condition for the surface to be consistent. To see that it is sufficient we observe that, by elementary vector calculus [3], equation (11) implies that there exists a function ψ(x, y) such that:

n1/n3 = ∂ψ/∂x,  n2/n3 = ∂ψ/∂y.    (12)
We can solve equation (12) for n1, n2, n3, using the normalization condition n1² + n2² + n3² = 1, and obtain the surface expression (9) after identifying ψ with −f. Thus we see that equation (11) is also a sufficient condition.

We must now tackle the harder task of putting consistency conditions on the warp so that the warped normals are consistent. Suppose ~n(~x) are the surface normals of the prototype surface, and hence obey the surface integrability condition. Let us apply a warp ~φ(~x) = (φ1(~x), φ2(~x)) to the prototype surface. The resulting normal field ~n(φ(~x)) is consistent provided:

∂/∂y (n1(φ(~x))/n3(φ(~x))) = ∂/∂x (n2(φ(~x))/n3(φ(~x))).    (13)
By equation (10) we have:

n1(φ(~x))/n3(φ(~x)) = −(∂f/∂x)|φ(~x),  n2(φ(~x))/n3(φ(~x)) = −(∂f/∂y)|φ(~x),    (14)

where the vertical bar indicates that the derivatives are evaluated at φ(~x). We observe that

(∂/∂y)((∂f/∂x)|φ(~x)) = (∂²f/∂x²)|φ(~x) ∂φ1/∂y + (∂²f/∂x∂y)|φ(~x) ∂φ2/∂y,    (15)
where the derivative with respect to y is computed by the chain rule, yielding the derivative of ∂f/∂x with respect to φ1, times the derivative of φ1 with respect to y, plus the analogous term with respect to φ2. But these derivatives with respect to φ1 and φ2 are precisely the derivatives with respect to x and y evaluated at φ(~x). A similar result holds if we replace x by y.

As an example of this notation, we consider the one-dimensional case where the surface f(x) can be expressed as a power series f(x) = Σ_{r=0}^{N} ar x^r; warping it by φ(x) gives f(φ(x)) = Σ_{r=0}^{N} ar {φ(x)}^r. In this case we find that:

(∂f/∂x)|φ(x) = Σ_{r=0}^{N} ar r {φ(x)}^(r−1) = (∂/∂φ(x)) f(φ(x)).    (16)
Substituting from (14) into (13) and using (15) yields the result:

φ1,y (∂²f/∂x²)|φ(~x) + φ2,y (∂²f/∂x∂y)|φ(~x) = φ1,x (∂²f/∂x∂y)|φ(~x) + φ2,x (∂²f/∂y²)|φ(~x).    (17)
This gives a relationship between the Hessian of the surface (terms ∂²f/∂x², etc.) and the stress tensor of the warps, which is defined to have components φ1,x, φ1,y, φ2,x, φ2,y. To get some insight into these equations let us examine the special case of surfaces of revolution. These surfaces are essentially one-dimensional so we expect that warps which preserve the revolution property will indeed satisfy this equation. The following subsection proves that this is the case.
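Before turning to that special case, note that condition (17) can be tested pointwise once the Hessian of the prototype surface and the Jacobian of the warp are known. The Python sketch below, built on an arbitrary cubic test surface of our own choosing, shows that a uniform scaling warp satisfies (17) identically while a rotational warp generally violates it; this is one way to see that not all image warps are physically reasonable.

    import numpy as np

    def check_17(hess_at, jac):
        """Return LHS - RHS of condition (17): phi1_y*f_xx + phi2_y*f_xy - phi1_x*f_xy - phi2_x*f_yy,
        with the Hessian of f evaluated at the warped point."""
        (fxx, fxy), (_, fyy) = hess_at
        (p1x, p1y), (p2x, p2y) = jac
        return p1y * fxx + p2y * fxy - p1x * fxy - p2x * fyy

    # Arbitrary smooth test surface f(x, y) = x**3 + x * y**2 (a hypothetical choice).
    def hessian(x, y):
        return np.array([[6 * x, 2 * y],
                         [2 * y, 2 * x]])

    point = np.array([0.4, -0.7])

    # Uniform scaling phi(x) = 2 x: Jacobian 2*I, and condition (17) holds for any surface.
    scale_jac = 2.0 * np.eye(2)
    print(check_17(hessian(*(2.0 * point)), scale_jac))       # 0.0

    # A rotation by 30 degrees: condition (17) fails unless the Laplacian of f vanishes
    # at the warped point, so a rotational warp of the normals is not consistent here.
    th = np.deg2rad(30.0)
    rot_jac = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    print(check_17(hessian(*(rot_jac @ point)), rot_jac))      # nonzero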
3.1 Surfaces of Revolution
A surface of revolution, viewed directly from above, is of form:

f(x, y) = g(x² + y²),    (18)

for some function g(.). This means that the derivatives of f are of the following forms:

fx = 2xg′,  fy = 2yg′,  fxx = 2g′ + (2x)²g″,  fyy = 2g′ + (2y)²g″,  fxy = 4xyg″,    (19)

where we use ′ to denote derivatives of g(.).

Now we choose a warp which preserves rotational symmetry. The general form is ~φ(~x) = ~x h(x² + y²). In coordinates this becomes:

φ1(x, y) = h(x² + y²)x,  φ2(x, y) = h(x² + y²)y.    (20)

We can calculate the derivatives:

φ1,y = 2xyh′,  φ1,x = h + 2x²h′,  φ2,y = h + 2y²h′,  φ2,x = 2xyh′.    (21)

We now substitute equations (19) and (21) into the condition (17). This reduces to the condition:

2xyh′{2g′|φ(~x) + 4x²h²g″|φ(~x)} + (h + 2y²h′){4xyh²g″|φ(~x)}
− (h + 2x²h′){4xyh²g″|φ(~x)} − 2xyh′{2g′|φ(~x) + 4y²h²g″|φ(~x)} = 0.    (22)
By elementary algebra we see that the terms in equation (22) cancel and so the integrability condition is satisfied automatically. So any warp on the normals of a surface of revolution that preserves the symmetry property will generate a consistent surface.
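The cancellation in equation (22) can also be verified symbolically for arbitrary profiles g(.) and h(.). The following sympy sketch is our own check of this algebra and is not part of the original derivation.

    import sympy as sp

    x, y, u, v = sp.symbols("x y u v", real=True)
    g = sp.Function("g")      # arbitrary profile: the surface is f = g(x**2 + y**2)
    h = sp.Function("h")      # arbitrary radial warp profile: phi(x) = x * h(x**2 + y**2)

    f = g(x**2 + y**2)
    phi1, phi2 = x * h(x**2 + y**2), y * h(x**2 + y**2)

    def at_phi(expr):
        """Evaluate an expression in (x, y) at the warped point (phi1, phi2)."""
        return expr.subs({x: u, y: v}).subs({u: phi1, v: phi2})

    # Condition (17) for this surface and warp; it should vanish identically.
    lhs = sp.diff(phi1, y) * at_phi(sp.diff(f, x, 2)) + sp.diff(phi2, y) * at_phi(sp.diff(f, x, y))
    rhs = sp.diff(phi1, x) * at_phi(sp.diff(f, x, y)) + sp.diff(phi2, x) * at_phi(sp.diff(f, y, 2))
    print(sp.simplify(lhs - rhs))    # prints 0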
3.2 Interpreting Surface Consistency
We now return to the consistency condition (11). Can we interpret it geometrically?
A special case occurs when the warp ~φ satisfies the additional condition φ1,y = φ2,x. Warps satisfying this condition will be called irrotational warps, by analogy to fluid dynamics when ~φ corresponds to the velocity flow ~u. The intuition is that locally the warp has only translational and shearing parts, i.e. no local rotation. In this case the consistency condition reduces to requiring that the Hessian of the surface at the transformed point commutes with the stress tensor. It is straightforward algebra to show that the condition for two symmetric matrices A and B to commute is:

A11 B12 + A12 B22 = B11 A12 + B21 A22,    (23)

where we use the obvious convention for indices. We see that this is precisely the same as the consistency condition (17) provided A corresponds to the Hessian and B corresponds to the stress tensor (i.e. B11 = φ1,x, B12 = φ1,y, B21 = φ2,x, and B22 = φ2,y).

If the stress tensor is not symmetric then we also get a simple matrix relation. Using the fact that the Hessian A is symmetric (i.e. A12 = A21) we can rewrite (23) as:

(AB)12 = (AB)21,    (24)

where (AB)ij denotes the ij entry of the matrix product AB. This is the condition that the product matrix AB is symmetric. It is straightforward to show that this symmetry condition reduces to the commutation condition provided that B is symmetric. The symmetry condition implies that AB = (AB)^T = B^T A, where we have used the fact that A is symmetric. If, in addition, we can assume that B is symmetric, then we obtain the commutation condition AB = BA.

We can get a deeper understanding of the symmetry condition by relating it to the Hessian of the warped surface. Indeed the symmetry condition is simply the consistency requirement that this Hessian is symmetric.
To see this we observe from (12) and (14) that the warped surface can be described by a function ψ(x, y) (this is the negative of the height function of the warped surface) such that:

∂ψ/∂x = −(∂f/∂x)|φ(~x),  ∂ψ/∂y = −(∂f/∂y)|φ(~x).    (25)

We can compute the negative of the Hessian to be:

∂²ψ/∂x²  = −(∂²f/∂x²)|φ(~x) ∂φ1/∂x − (∂²f/∂x∂y)|φ(~x) ∂φ2/∂x,
∂²ψ/∂y∂x = −(∂²f/∂x²)|φ(~x) ∂φ1/∂y − (∂²f/∂x∂y)|φ(~x) ∂φ2/∂y,
∂²ψ/∂x∂y = −(∂²f/∂x∂y)|φ(~x) ∂φ1/∂x − (∂²f/∂y²)|φ(~x) ∂φ2/∂x,
∂²ψ/∂y²  = −(∂²f/∂x∂y)|φ(~x) ∂φ1/∂y − (∂²f/∂y²)|φ(~x) ∂φ2/∂y.    (26)
Thus we see that the warped Hessian equals the product of the warp matrix with the original Hessian. The symmetry condition is merely the requirement that the warped Hessian is symmetric.
4 Generating Surfaces by Warping
We now start with a simple parabolic surface and show how we can warp its normals to obtain other surfaces. Let us choose:

f(x, y) = x²/a² + y²/b².    (27)

We compute the Hessian to be fxx = 2/a², fxy = fyx = 0, fyy = 2/b². Observe that the Hessian is constant, which simplifies the calculations. The consistency equation (17) reduces to:

φ1,y = (a²/b²) φ2,x.    (28)
Our main result of this section is that we can obtain any twice differentiable surface by setting φ2,x to be an arbitrary function a(x, y). This gives warps:

φ1(x, y) = (a²/b²) ∫ a(x, y) dy + f1(x),
φ2(x, y) = ∫ a(x, y) dx + f2(y),    (29)

where the integrals are indefinite. The normals are specified, after warping, by the equations:

~n(~x) = ~n0(φ1(~x), φ2(~x)).    (30)
For the surface specified by (27) we calculate:

~n0(~x) = (−2x/a², −2y/b², 1) / √(1 + 4x²/a⁴ + 4y²/b⁴).    (31)
The warped normal field forms a consistent surface and must satisfy the integrability condition (12), which can be used to determine a height function ψ(x, y):

∂ψ/∂x = −(2/a²) φ1 = −(2/b²) ∫ a(x, y) dy − (2/a²) f1(x),
∂ψ/∂y = −(2/b²) φ2 = −(2/b²) ∫ a(x, y) dx − (2/b²) f2(y).    (32)

This is equivalent to requiring

∂²ψ/∂x∂y = −(2/b²) a(x, y).    (33)
We can generate any twice differentiable surface by choosing the appropriate a(x, y). But for physical reasons we might want to require the warps to be monotonic (J.J. Clark, personal communication). This would involve enforcing the warp matrix to be positive or negative definite, in other words to have positive determinant. The warp matrix is directly related to the Hessian of ψ by the relations:

φ1,x = −(a²/2) ψxx,  φ1,y = −(a²/2) ψxy,
φ2,x = −(b²/2) ψxy,  φ2,y = −(b²/2) ψyy.    (34)

These relations ensure that the warp has positive determinant if and only if the Hessian of ψ has positive determinant. Recall, from (12), that the surface is described by z = −ψ(x, y). Thus the warp condition is equivalent to requiring that the Hessian of the surface has a positive determinant. This means that there is at most one maximum or minimum. Thus if we require monotonicity then we cannot create or destroy extrema in the viewed surface. Monotonicity of warps is often required in object recognition algorithms [5]. Moreover, determining the image warps is greatly simplified if monotonicity is assumed.
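The construction of equations (27)-(33) can be carried out explicitly for a simple generator. The sympy sketch below is a worked example of our own, in which the choices a(x, y) = 6xy, f1(x) = x and f2(y) = y are arbitrary: it builds the warp of equation (29), confirms the condition (28), and integrates equation (32); the recovered surface is the prototype paraboloid plus an extra 3x²y²/b² term.

    import sympy as sp

    x, y = sp.symbols("x y", real=True)
    A, B = sp.symbols("a b", positive=True)   # axes of the prototype paraboloid, eq. (27)

    # Arbitrary generator a(x, y) = 6*x*y and the warp of eq. (29), with f1(x) = x, f2(y) = y
    # so that the warp is near the identity at the origin.
    a_xy = 6 * x * y
    phi1 = (A**2 / B**2) * sp.integrate(a_xy, y) + x
    phi2 = sp.integrate(a_xy, x) + y

    # Consistency check, eq. (28): phi1_y must equal (a**2/b**2) * phi2_x.
    assert sp.simplify(sp.diff(phi1, y) - (A**2 / B**2) * sp.diff(phi2, x)) == 0

    # Recover the warped surface from eq. (32): psi_x = -(2/a**2) phi1, psi_y = -(2/b**2) phi2.
    psi = sp.integrate(-(2 / A**2) * phi1, x)
    psi = psi + sp.integrate(-(2 / B**2) * phi2 - sp.diff(psi, y), y)   # add the y-only part
    print(sp.expand(-psi))    # the surface z = -psi: the paraboloid plus a 3*x**2*y**2/b**2 term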
5 Underlying Geometric Assumptions of Image Warping
The Hallinan model for face recognition [5] assumes that the image of a given face can be written as (the full model also includes a global affine warp, which we neglect for simplicity):

I(~x) = Σ_{i=1}^{5} αi Bi(φ(~x)),    (35)

where the {Bi(.)} are lighting basis functions [4], which model the appearance of the face under different lighting conditions, and φ(.) is a geometric warp which is required to be monotonic. The {αi} are coefficients corresponding to the specific lighting conditions. The Hallinan model therefore assumes that any image of a face can be obtained by monotonically warping a prototype face model.

Now the albedo is approximately constant over most of the face and, in particular, it is constant over those regions of the face where most of the shape changes occur: the cheeks, forehead, and nose. Thus we argue that faces approximately satisfy the assumptions of shape from warping.
Therefore the image warps of the Hallinan model should correspond directly to warps of the surface normals of faces (this will be verified by the experiments described in the following section). If so, the monotonicity assumption of Hallinan's warps implies that no surface extrema can be created in the imaged objects. In other words, the model given by equation (35) will break down if some of the objects have a different number of surface extrema than the prototype. This will not happen for faces but it will be a problem if the model is applied to other object classes.

It is interesting to contrast Hallinan's face model with the one recently proposed by Atick et al. [1]. This model represents the face in terms of a three-dimensional model which is generated by doing principal component analysis of a set of three-dimensional range data of faces. A face can therefore be represented by its coefficients of expansion in a basis of eigenheads. By contrast, Hallinan has computed a class of two-dimensional eigenwarps. Our paper therefore suggests that Hallinan's image eigenwarps correspond to eigenwarps of the surface normals and hence are closely related to Atick's eigenheads. We note that computing image warps is a harder task than doing PCA on range data and thus we might expect that the results from range data are more accurate.
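For concreteness, the following Python sketch evaluates the image model of equation (35) on a discrete grid. The random basis images, the coefficients, and the identity warp in the toy usage are placeholders of ours, not Hallinan's actual lighting basis [4].

    import numpy as np
    from scipy.ndimage import map_coordinates

    def render(alphas, basis, phi_rows, phi_cols):
        """Evaluate eq. (35): I(x) = sum_i alpha_i * B_i(phi(x)).
        `basis` is a stack of lighting basis images B_i; phi_rows/phi_cols give the warped
        sampling positions in pixel coordinates (all inputs here are hypothetical)."""
        coords = [phi_rows.ravel(), phi_cols.ravel()]
        warped = [map_coordinates(B, coords, order=1).reshape(phi_rows.shape) for B in basis]
        return sum(a * W for a, W in zip(alphas, warped))

    # Toy usage: five random "basis images" and an identity warp on a 64 x 64 grid.
    rng = np.random.default_rng(0)
    basis = rng.random((5, 64, 64))
    rows, cols = np.meshgrid(np.arange(64.0), np.arange(64.0), indexing="ij")
    image = render(alphas=[0.4, 0.2, 0.1, 0.2, 0.1], basis=basis, phi_rows=rows, phi_cols=cols)
    print(image.shape)    # (64, 64)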
6 Examples
We first tested our theory on synthetic data of ellipsoids with reflectance given by the Phong model and using warps specified by third order polynomials in x, y. The results of reconstruction were very accurate, as is typically the case for simulated data! So we now give our theory a more rigorous test by applying it to faces.

Our prototype model consists of a face image Ip(~x) and its corresponding surface shape z = fp(~x) obtained from laser-range data. From this surface we compute the surface normals ~np(~x), see figure (1). For given input images {Iα(~x)} we compute the spatial warps φα(~x) relative to the prototype image by using the algorithm, and code, described in [5], see figures (2,4). These warps are calculated to minimize a matching energy function:
E[φ] = (1/|D|) ∫_D ψ{I(φ(~x)) − Ip(~x)} d~x + ∫_D (tr J(~x)^T J(~x) {1 + 1/det J(~x)²} − 4) d~x,    (36)
where J is the Jacobian matrix of the warp field φ(~x) and ψ is a robust norm [8] used to make the matching insensitive to outliers. This choice of smoothing function was used because it allows contractions and expansions to be equally likely and is relatively insensitive to small rotations. Hallinan [5] gives a derivation of this prior and compares it to the more standard priors for motion flow [6].

Hallinan's algorithm consists of coarse to fine matching at levels of resolution separated by a full octave. Results from the coarse scale are used to initialize the matching at the finer scales. At each level the matching is done by steepest descent. More sophisticated versions of the algorithm in [5] will obtain the spatial warps even if the lighting conditions are unknown, but we will not deal with this case in this paper. The code described in [5] does not put any restrictions on the class of warps other than requiring monotonicity. It would be interesting to impose the consistency condition, see equation (17), while calculating the warps, but this is difficult because the warps appear in the arguments of the surface derivative terms.
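To indicate the structure of this computation, the Python sketch below evaluates a discrete version of the energy (36) for a warp supplied as a field of pixel coordinates. The Huber-style choice for the robust norm ψ and the identity-warp test are our own assumptions; this is a sketch of the energy, not the matching code of [5].

    import numpy as np
    from scipy.ndimage import map_coordinates

    def matching_energy(I, I_p, phi_rows, phi_cols, delta=0.1):
        """Discrete stand-in for eq. (36), with the warp given in pixel coordinates.
        The robust norm psi is taken to be a Huber-style penalty with scale `delta`
        (an assumption; the exact choice used in [5] is not specified here)."""
        # Data term: robust comparison of the warped input image with the prototype.
        warped = map_coordinates(I, [phi_rows.ravel(), phi_cols.ravel()], order=1).reshape(I.shape)
        r = warped - I_p
        data = np.where(np.abs(r) <= delta, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))
        # Prior term: tr(J^T J) * (1 + 1/det(J)^2) - 4, which is zero for the identity warp.
        j11, j12 = np.gradient(phi_cols)[1], np.gradient(phi_cols)[0]   # d(phi_x)/dx, d(phi_x)/dy
        j21, j22 = np.gradient(phi_rows)[1], np.gradient(phi_rows)[0]   # d(phi_y)/dx, d(phi_y)/dy
        det = j11 * j22 - j12 * j21
        prior = (j11**2 + j12**2 + j21**2 + j22**2) * (1.0 + 1.0 / det**2) - 4.0
        return data.mean() + prior.mean()          # discrete stand-in for the two integrals

    # Identity warp on a toy image pair: both the data and the prior terms vanish.
    I = np.random.default_rng(1).random((32, 32))
    rows, cols = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing="ij")
    print(matching_energy(I, I, rows, cols))       # 0.0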
To estimate the three-dimensional shape we use Hallinan's algorithm to compute the image warps φ(~x) and calculate the warped normal fields ~n(~x) = ~np(φ(~x)). We then estimate the depth z(~x), up to an additive constant, by minimizing the cost function:

E[z] = ∫ {(∂z/∂x − n1(~x)/n3(~x))² + (∂z/∂y − n2(~x)/n3(~x))²} d~x.    (37)
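A minimal sketch of this normal-integration step is given below: once discretized, (37) is a sparse linear least-squares problem in z. The forward-difference discretization, the solver, and the synthetic dome used as a sanity check are our own choices, not the implementation used in the experiments.

    import numpy as np
    import scipy.sparse as sps
    from scipy.sparse.linalg import lsqr

    def grad_op(H, W, axis, h):
        """Sparse forward-difference operator d/dx (axis=1) or d/dy (axis=0) on a flattened H-by-W grid."""
        idx = np.arange(H * W).reshape(H, W)
        i0 = idx[:, :-1].ravel() if axis == 1 else idx[:-1, :].ravel()
        i1 = idx[:, 1:].ravel() if axis == 1 else idx[1:, :].ravel()
        m = i0.size
        rows = np.repeat(np.arange(m), 2)
        cols = np.stack([i1, i0], axis=1).ravel()
        vals = np.tile([1.0 / h, -1.0 / h], m)
        return sps.csr_matrix((vals, (rows, cols)), shape=(m, H * W))

    def depth_from_normals(n, h):
        """Least-squares fit of the gradient of z to (n1/n3, n2/n3), a discrete version of eq. (37).
        Returns z up to an additive constant; a generic solver, not the authors' implementation."""
        _, H, W = n.shape
        p, q = n[0] / n[2], n[1] / n[2]
        A = sps.vstack([grad_op(H, W, 1, h), grad_op(H, W, 0, h)])
        b = np.concatenate([0.5 * (p[:, :-1] + p[:, 1:]).ravel(),   # midpoint targets for dz/dx
                            0.5 * (q[:-1, :] + q[1:, :]).ravel()])  # midpoint targets for dz/dy
        z = lsqr(A, b)[0].reshape(H, W)
        return z - z.mean()

    # Sanity check: recover a synthetic dome from its own (exact) normals.
    xs = np.linspace(-0.8, 0.8, 40)
    X, Y = np.meshgrid(xs, xs)
    f = np.sqrt(2.0 - X**2 - Y**2)
    fx, fy = -X / f, -Y / f
    norm = np.sqrt(1.0 + fx**2 + fy**2)
    normals = np.stack([fx / norm, fy / norm, 1.0 / norm])   # oriented so that n1/n3 = df/dx
    z = depth_from_normals(normals, h=xs[1] - xs[0])
    print(np.max(np.abs(z - (f - f.mean()))))                # small (discretization-level) error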
Figure 1: The 3D prototype model, its image, shape and surface normals. Top row: left the intensity image of the prototype face, center the corresponding depth map, right a mesh plot of the shape. Bottom row: left to right the x, y and z components of the surface normals.

The resulting warps and reconstructed faces are shown in figures (3, 5). The warps are generally well calculated by Hallinan's algorithm, though inspection of figure (4) shows some errors, particularly on the right boundary of object number 3. These errors become more apparent in the reconstructions, see figure (5), where errors arise on the edge of object 3. Nevertheless it seems that the errors in reconstruction are due to partial failures in the warping algorithm. Thus, since these are nontrivial images, we regard this as proof of concept for shape from warping.
Figure 2: The first object, its flow field when warped to the prototype image and the resulting warped normals. Top row: left the intensity image, center the flow field in the x direction, right the flow field in the y direction. Bottom row: left to right the normal components in the x, y and z directions estimated by the algorithm.
Figure 3: Applying our algorithm to the first object. Left the intensity of the first object, center warping the prototype to match the object, right the estimated shape of the first object.
Figure 4: Comparing the warping for different objects. The rows correspond to objects 1, 2 and 3. The columns show: left the intensity image, center the forward warp of the prototype to match the object image, and right the warp of the object image to match the prototype. The warps are generally good except at the eyes and sometimes at the object boundaries.
Figure 5: The estimated faces for the objects, shown from different viewpoints and compared to the prototype shape. The four rows correspond to different viewpoints. The first column shows the prototype shape and the next three columns show the three objects. Observe the errors which sometimes occur at the object boundaries. These are side effects of the warping algorithm.
7 Summary
Shape from warping is a variant of shape from shading which allows shape to be determined for a class of surfaces without needing to know the exact reflectance function. We investigated the class of warps that are consistent and what types of surfaces could be reconstructed in this way. The theory was illustrated by applying it to recovering the shape of faces.

Our analysis also shows the underlying geometrical assumptions about three-dimensional shape used in models of object recognition which rely on dense two-dimensional warps computed by matching image intensity. In particular, it is shown that restricting the image warps to be monotonic means that the underlying three-dimensional shapes must all have the same number of surface extrema.
Acknowledgements

We would like to thank Peter Belhumeur, Gaile Gordon, and particularly Peter Hallinan for access to their computer code. Support was provided by NSF Grant IRI 93-17670 and ARPA/ONR Contract N00014-95-1-1022.
References

[1] J.J. Atick, P.A. Griffin, and A.N. Redlich. "Statistical Approach to Shape from Shading: Reconstruction of 3-Dimensional Face Surfaces from Single 2-Dimensional Images". Neural Computation. 8, No. 6, pp 1321-1340. 1996.

[2] R. Epstein, A.L. Yuille, and P.N. Belhumeur. "Learning object representations from lighting variations". In Object Representation in Computer Vision II. Eds. J. Ponce, A. Zisserman, and M. Hebert. Springer Lecture Notes in Computer Science 1144. 1996.

[3] M.D. Greenberg. Foundations of Applied Mathematics. Prentice-Hall, Inc. Englewood Cliffs, New Jersey. 1978.

[4] P.W. Hallinan. "A low-dimensional lighting representation of human faces for arbitrary lighting conditions". In Proc. IEEE Conf. on Comp. Vision and Patt. Recog., pp 995-999. 1994.

[5] P.W. Hallinan. A Deformable Model for Face Recognition under Arbitrary Lighting Conditions. PhD Thesis. Division of Applied Sciences. Harvard University. 1995.

[6] B.K.P. Horn. Computer Vision. MIT Press, Cambridge, Mass. 1986.

[7] B.K.P. Horn and M.J. Brooks, Eds. Shape from Shading. MIT Press, Cambridge, MA. 1989.

[8] P.J. Huber. Robust Statistics. John Wiley and Sons. New York. 1981.

[9] J. Oliensis. "Shape from shading as a partially well-constrained problem". Computer Vision, Graphics, and Image Processing: Image Understanding. 54, pp 163-183. 1991.

[10] C. Von Westenholz. Differential Forms in Mathematical Physics. North-Holland Publishing Company. Amsterdam. 1981.