Illumination-Insensitive Face Recognition Using Symmetric Shape-from-Shading

WenYi Zhao    Rama Chellappa
Center for Automation Research, University of Maryland, College Park, MD 20742-3275
Email: {wyzhao, [email protected]}
Abstract
Sensitivity to variations in illumination is a fundamental and challenging problem in face recognition. In this paper, we describe a new method based on symmetric shape-from-shading (SSFS) to develop a face recognition system that is robust to changes in illumination. The basic idea of this approach is to use the SSFS algorithm as a tool to obtain a prototype image which is illumination-normalized. It has been shown that the SSFS algorithm has a unique point-wise solution [25], but it is still difficult to recover accurate shape information from a single real face image with complex shape and varying albedo. Instead, we exploit the fact that all faces share a similar shape, which makes the direct computation of the prototype image from a given face image feasible. Finally, to demonstrate the efficacy of our method, we have applied it to several publicly available face databases.
1 Introduction
As one of the most successful applications of image analysis and understanding, face recognition has recently gained significant attention. This is evidenced by the emergence of dedicated face recognition conferences such as AFGR (Automatic Face and Gesture Recognition) and by systematic empirical evaluations of face recognition techniques such as the FERET program [3]. There are at least two reasons for this trend: the first is the wide range of possible real applications, such as face-recognition-based ATM machines, smart user interfaces, etc. [1]; the second is the availability of feasible technologies [2]. Many methods have been proposed for face recognition; they can be broadly divided into holistic template-matching-based systems, geometrical local-feature-based schemes, and hybrid systems. Even though algorithms of these types have been successfully applied to the task of face recognition/verification, they all have certain drawbacks. One of the major difficulties is the illumination problem [7]: system performance drops significantly when illumination variations are present in the input images. This difficulty is clearly revealed in the most recent FERET test report, where solving the illumination problem is identified as a major research issue [5].

In this paper, we address the illumination problem specific to face recognition under the following assumption: we do not have enough training images; e.g., just one face image per person is available. We wish to develop methods that maintain the performance of the face recognition system when the input face image is acquired under different illumination conditions.
2 Previous Approaches
To handle the illumination problem, researchers have proposed various methods. For example, within the eigen-subspace domain, it has been suggested that discarding the three most significant principal components can reduce variations due to lighting; it was experimentally verified in [4] that discarding the first few principal components works reasonably well for images under variable lighting. However, in order to maintain system performance for normally lighted images while improving performance for images acquired under varying illumination, we must assume that the first three principal components capture only the variations due to lighting. Under the assumptions of a Lambertian surface, no shadowing, and the availability of three aligned images/faces acquired under different lighting conditions, a 3D linear illumination subspace per person has been constructed in [4, 9, 8, 10] for a fixed viewpoint. Under these ideal assumptions, recognition based on the 3D linear illumination subspace is illumination-invariant. The 3D linear subspace method was later extended to the well-known illumination cone approach, an effective method of handling illumination variations including shadowing and multiple lighting sources [12, 11]. More recently, a multi-image training-based approach called the quotient image was proposed, which assumes that all face objects have the same surface shape [13]. Quite different from these training-based approaches, Jacobs et al. have suggested a new measure for comparing two images that is robust to illumination change [15]. Their method is based on the observation that the difference between two images of the same object is smaller than the difference between images of different objects.
3 Applying SSFS to Face Recognition
The rationale for applying SFS to face recognition is to infer the face shape information, so that the problems of illumination change and rotation out of the image plane can be solved simultaneously. For example, from any given input image I[x,y], we can render the prototype image I_p, which we define as the frontally lit image of the same face. After this step, all comparisons/classifications are carried out using prototype images.
3.1 SFS for Face Recognition?
The standard equation in the SFS problem is the irradiance equation [17]:

I[x,y] = R(p[x,y], q[x,y]),  (1)
where I[x,y] is the image of the scene, R is the reflectance map, and p[x,y], q[x,y] are the shape gradients (partial derivatives of the depth map z[x,y]). With the assumption of Lambertian surface reflection, a single distant light source, and an albedo field ρ[x,y], the equation can be written as

I = ρ (1 + pP_s + qQ_s) / (√(1 + p² + q²) √(1 + P_s² + Q_s²)),  (2)

where P_s = k sin σ cos τ, Q_s = k sin σ sin τ, and k is the length of the light-source vector L. The light source is represented by two angles: the slant σ (the angle between the negative L and the positive z-axis, σ ∈ [0°, 180°]) and the tilt τ (the angle between the negative L and the x-z plane, τ ∈ [-180°, 180°]).

Though in theory SFS has been shown to have a unique solution under certain conditions, such as the existence of singular points [16], in practice it is still an ill-posed problem in general. We have applied three existing SFS algorithms to real face images, and the results are far from satisfactory [25]. The three algorithms are (1) Zheng and Chellappa [18] (an iterative global method based on the variational principle), (2) Wei and Hirzinger [20] (a global method based on radial basis expansion), and (3) Tsai and Shah [19] (a local approach based on linearization of the depth map). The likely reason is that the face not only has a complex shape but is also composed of materials with different reflecting properties: cheek, lip, eye, eyelid, etc. Hence, it is impossible to model the face surface efficiently with the Lambertian model and constant albedo. In summary, there are two difficulties in applying existing SFS algorithms to faces: 1) using regularization to obtain a smooth shape conflicts with the fact that the face has a complex surface, and 2) assuming constant albedo conflicts with the fact that the face has varying albedo.

Another interesting SFS-based approach particular to face recognition is statistical SFS [14]. Its authors suggested using Principal Component Analysis (PCA) as a tool for solving a parametric SFS problem, i.e., obtaining an eigenhead approximation of a real 3D head after training on about 300 laser-scanned range images of real human heads. Though the ill-posed SFS problem is thereby transformed into a parametric problem, they still assume constant albedo.
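To make the reflectance model concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) that renders a Lambertian image from a depth map and an albedo field according to eq. (2); the unit-length source vector (k = 1) and the clipping of negative intensities to zero at attached-shadow points are our assumptions.

```python
import numpy as np

def lambertian_image(z, albedo, slant_deg, tilt_deg):
    """Render I = rho*(1 + p*Ps + q*Qs) / (sqrt(1+p^2+q^2)*sqrt(1+Ps^2+Qs^2)),
    eq. (2), from a depth map z and an albedo field (both HxW arrays)."""
    slant, tilt = np.radians(slant_deg), np.radians(tilt_deg)
    # Shape gradients: np.gradient returns derivatives along rows (y), then columns (x).
    q, p = np.gradient(z)
    # Light-source components, assuming a unit-length source vector (k = 1).
    Ps = np.sin(slant) * np.cos(tilt)
    Qs = np.sin(slant) * np.sin(tilt)
    num = 1.0 + p * Ps + q * Qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + Ps**2 + Qs**2)
    # Attached-shadow points (num <= 0) are clipped to zero intensity.
    return albedo * np.clip(num, 0.0, None) / den
```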
3.2 Applying SSFS to Face Recognition
3.2.1 Symmetric Shape from Shading
Symmetry is very useful information that can be exploited in SFS algorithms for symmetric objects. However, implicitly introducing this information into existing SFS algorithms does not seem to help very much; we therefore describe a direct method of incorporating this important cue. Let us assume that we are dealing with a symmetric surface (the background should be excluded, since it need not be symmetric). Our definition of a symmetric surface is based on the following equations, in the natural coordinate system with the symmetry axis as the y-axis:

z[x,y] = z[-x,y],  ρ[x,y] = ρ[-x,y].  (3)

One immediate property of a symmetric (differentiable) surface is that it has anti-symmetric and symmetric gradients:

p[x,y] = -p[-x,y],  q[x,y] = q[-x,y].  (4)

As suggested by (3) and (4), explicitly using the symmetry property can reduce the number of unknowns by half. Moreover, we can derive a new irradiance equation which contains no albedo information at all: we introduce the concept of the self-ratio image to cancel the effect of the varying albedo. The idea of using two aligned images to construct a ratio has been explored by many researchers [15, 24]; here we extend the idea to a single image. Substituting (3) and (4) into the equations for I[x,y] and I[-x,y] and adding them gives

I[x,y] + I[-x,y] = 2ρ (1 + qQ_s) / (√(1 + p² + q²) √(1 + P_s² + Q_s²)).  (5)

Similarly,

I[x,y] - I[-x,y] = 2ρ pP_s / (√(1 + p² + q²) √(1 + P_s² + Q_s²)).  (6)

To simplify the notation, define I_+[x,y] = (I[x,y] + I[-x,y])/2 and I_-[x,y] = (I[x,y] - I[-x,y])/2. The self-ratio image r_I can then be defined as

r_I[x,y] = I_-[x,y] / I_+[x,y] = pP_s / (1 + qQ_s).  (7)

Defining the right-hand side of this equation as the self-ratio reflectance map r_R(p,q), we arrive at the following self-ratio irradiance equation:

r_I[x,y] = r_R(p[x,y], q[x,y]).  (8)

Solving for shape information using this equation combined with (1) will be called symmetric SFS. Unlike standard SFS, symmetric SFS has two reflectance maps, R(p,q) and r_R(p,q). The main result of SSFS is the following theorem [25]:

Theorem 1 If the depth z is a C² surface and the albedo field ρ is piece-wise constant, then both the point-wise solution for shape (p,q) and the solution for albedo ρ are unique, except under some special conditions (see [25] for the proof).
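To make the self-ratio construction concrete, the following sketch (ours, with an arbitrary eps threshold) computes I_+, I_-, and r_I of eq. (7) from a single face image, assuming the face has already been aligned so that the symmetry axis is the central image column:

```python
import numpy as np

def self_ratio_image(I, eps=1e-3):
    """Compute the self-ratio image r_I = I_- / I_+ of eq. (7)."""
    I = np.asarray(I, dtype=float)
    I_flip = I[:, ::-1]              # I[-x,y]: mirror about the symmetry axis
    I_plus = 0.5 * (I + I_flip)      # symmetric part; albedo does not cancel yet
    I_minus = 0.5 * (I - I_flip)     # anti-symmetric part
    # Points with a near-zero denominator are "bad" points (see Section 4).
    bad = np.abs(I_plus) < eps
    r = np.zeros_like(I)
    r[~bad] = I_minus[~bad] / I_plus[~bad]
    return r, bad
```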
3.2.2 Enhanced Face Recognition
Having observed that most existing SFS techniques do not produce accurate prototype images for real face images, we developed symmetric SFS as a better alternative. However, we should point out that in practice it is not easy to guarantee the correct solution for real face images; many practical issues require study before symmetric SFS can be fully implemented. Based on these observations and the assumption that we may have only one image available, we decided to incorporate another important fact: all faces share a similar common shape. With the aid of a generic 3D head model, we can shorten the two-step procedure of obtaining the prototype image from a given image (1. given image to shape via SFS; 2. recovered shape to prototype image) to one direct step: image to prototype image. Currently the generic 3D model is the Mozart head (a laser-scanned range map) used in [18]. To align this 3D model with an input face image, both are normalized to the same size (48 × 42), with the two eye pairs in the same fixed positions.

Let us write the image equation for the prototype image I_p with σ = 0:

I_p[x,y] = ρ / √(1 + p² + q²).  (9)

Comparing (5) and (9), we obtain

I_p[x,y] = K / (2(1 + qQ_s)) · (I[x,y] + I[-x,y]),  (10)

where K is a constant equal to √(1 + P_s² + Q_s²). This simple equation directly relates the prototype image I_p to I[x,y] + I[-x,y], which is already available. It is worth pointing out that this direct computation of I_p from I offers the following advantages over the two-step procedure: there is no need to recover the varying albedo ρ[x,y], and there is no need to recover the full shape gradients (p,q). The only parameter that needs to be recovered is the partial shape information q. Theoretically, we could use the symmetric SFS algorithm to compute this value; but as we argued earlier, due to the practical issues of using just one image, we approximate it with the partial derivative of a 3D face model and use the self-ratio irradiance equation (8) as a consistency-checking tool. More details are described in Section 4.
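A direct reading of eq. (10) as code might look like the sketch below; render_prototype is a hypothetical helper name, q_model stands for the q-gradient borrowed from the generic 3D head model, and the slant/tilt values are assumed to come from a source-from-shading estimate:

```python
import numpy as np

def render_prototype(I, q_model, slant_deg, tilt_deg, k=1.0):
    """Direct prototype computation of eq. (10):
    I_p = K / (2*(1 + q*Qs)) * (I[x,y] + I[-x,y])."""
    slant, tilt = np.radians(slant_deg), np.radians(tilt_deg)
    Ps = k * np.sin(slant) * np.cos(tilt)
    Qs = k * np.sin(slant) * np.sin(tilt)
    K = np.sqrt(1.0 + Ps**2 + Qs**2)
    sym_sum = I + I[:, ::-1]               # I[x,y] + I[-x,y]
    den = 2.0 * (1.0 + q_model * Qs)
    # Near-zero denominators are "bad" points; mark them for interpolation
    # (Section 4) instead of dividing through.
    den = np.where(np.abs(den) < 1e-3, np.nan, den)
    return K * sym_sum / den
```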
Using prototype images, we can construct an illumination-invariant measure. The idea of having such a measure is very appealing, since it eliminates the illumination-sensitivity problem. We define the illumination-invariant measure as follows:

d_P(I, J) = ∫∫ ‖ SFS_p(I) - SFS_p(J) ‖ dx dy,  (11)

where SFS_p is the operator which recovers the prototype image I_p or J_p from a given image I or J using the proposed model-based symmetric SFS algorithm. For comparison, we also list the robust (but not invariant) gradient-based measure proposed by Jacobs et al. [15]:

d_G(I, J) = ∫∫ min(I, J) ‖∇(I/J)‖ ‖∇(J/I)‖ dx dy.  (12)

The two measures differ as follows: d_P(I, J) is a true illumination-invariant measure, while d_G(I, J) is not; on the other hand, d_G(I, J) can be computed easily from the images, while d_P(I, J) requires the prior computation of I_p and J_p. In theory d_P(I, J) is therefore the better measure, but in practice it is not easy to declare a winner because of the errors incurred in computing I_p and J_p.

Before we apply an SFS algorithm, we need to determine the light-source direction reliably. Many source-from-shading algorithms are available, for example those of Lee and Rosenfeld [21], Zheng and Chellappa [18], and Pentland [22]. We implemented all three and found that the simplest, by Lee and Rosenfeld, produced reasonable and better results for both simulated and real face images (according to subjective judgment, as no ground truth is available). To estimate the slant angle more reliably, we propose a new model-based symmetric source-from-shading algorithm; for details, please refer to [25].
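For illustration, both measures can be discretized by replacing the integrals with sums over pixels, as in the sketch below; d_G follows the min(I,J)·‖∇(I/J)‖·‖∇(J/I)‖ form given above, sfs_p is any prototype-rendering operator (e.g., the render_prototype sketch from Section 3.2.2), and the small eps guard is our addition:

```python
import numpy as np

def d_G(I, J, eps=1e-6):
    """Gradient-based robust measure of eq. (12), discretized."""
    gy1, gx1 = np.gradient(I / (J + eps))
    gy2, gx2 = np.gradient(J / (I + eps))
    n1 = np.hypot(gx1, gy1)                  # ||grad(I/J)||
    n2 = np.hypot(gx2, gy2)                  # ||grad(J/I)||
    return float(np.sum(np.minimum(I, J) * n1 * n2))

def d_P(I, J, sfs_p):
    """Illumination-invariant measure of eq. (11), discretized;
    sfs_p maps an image to its prototype image."""
    return float(np.sum(np.abs(sfs_p(I) - sfs_p(J))))
```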
4 Shadow and Implementation Issues
One important issue we have not yet discussed in detail is the attached-shadow and cast-shadow problem. By definition, attached-shadow points are those where the image intensities are set to zero because (1 + pP_s + qQ_s) ≤ 0; a cast shadow is the shadow cast by the object itself. It has been shown in [25] that shadow points can still be utilized in both source estimation and image rendering. For example, in the case of source estimation, one advantage of using a 3D face model is that we can take both attached-shadow and cast-shadow effects into account; these effects are not utilized in traditional statistics-based methods, yet such points contribute significantly and correctly to the computation of the slant and tilt angles. Hence the model-based method can produce a more accurate estimate if the 3D face model is a good approximation to the real 3D face shape.
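Flagging attached-shadow points reduces to a one-line test on the Lambertian numerator of eq. (2); the sketch below is our illustration, again assuming a unit-length source vector by default:

```python
import numpy as np

def attached_shadow_mask(p, q, slant_deg, tilt_deg, k=1.0):
    """True where 1 + p*Ps + q*Qs <= 0, i.e. where the Lambertian
    intensity of eq. (2) clips to zero (attached shadow)."""
    slant, tilt = np.radians(slant_deg), np.radians(tilt_deg)
    Ps = k * np.sin(slant) * np.cos(tilt)
    Qs = k * np.sin(slant) * np.sin(tilt)
    return (1.0 + p * Ps + q * Qs) <= 0.0
```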
In addition to these shadow points, we need to single out the "bad" points, or outliers in statistical terms, for stable source estimation and prototype-image rendering. This is necessary because we must compute the self-ratio image, which may be sensitive to image noise. Let us denote the set of all "bad" points by B; the values computed at these points cannot be used. From a robust-statistics point of view, these "bad" points are outliers [23]. Hence our policy is to reject them and mark their locations, and then use the values computed at good points to interpolate/extrapolate at the marked bad points. Many interpolation methods are available, such as nearest-neighbor, polynomial, and spline interpolation; since the good points may form an irregular structure, we use triangle-based methods.
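As one possible realization of the triangle-based strategy, the sketch below uses SciPy's griddata, whose "linear" mode interpolates over a Delaunay triangulation of the good points; the nearest-neighbor fallback outside their convex hull is our own choice, not something specified in the paper:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_bad_points(values, bad):
    """Replace marked outliers (boolean mask 'bad', the set B) by
    triangle-based interpolation from the remaining good points."""
    h, w = values.shape
    yy, xx = np.mgrid[0:h, 0:w]
    good = ~bad
    pts = np.column_stack([yy[good], xx[good]])
    filled = griddata(pts, values[good], (yy, xx), method="linear")
    # Linear interpolation is undefined outside the convex hull of the
    # good points; extrapolate there with the nearest good value.
    hole = np.isnan(filled)
    filled[hole] = griddata(pts, values[good], (yy, xx), method="nearest")[hole]
    return filled
```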
5 Experiments
In this section, we demonstrate the effectiveness of applying the proposed light-source estimation algorithm and the symmetry-based prototype-image rendering algorithm to real face images. The algorithms were used to generate prototype images from images taken under various lighting conditions. We then used the prototype images to construct an illumination-invariant measure and, more importantly, used them in existing PCA and subspace LDA face recognition systems. Significant performance improvements were observed in experiments on two face databases. Except in Fig. 1, all face images in these experiments are of size 48 × 42, obtained by normalizing the original images with the eye locations fixed. We used size 96 × 84 in Fig. 1 because our experiments indicated that a certain image size is needed for regular SFS algorithms to work well. The faces used in our experiments are from the FERET, Yale and Weizmann databases. The Yale database contains 15 persons, each with four images obtained under different illuminations; the Weizmann database contains 24 persons, also with four images per person obtained under different illuminations.
5.1 Rendering Prototype Images
We have applied our light-source estimation and direct prototype-image rendering method to more than 150 face images from the Yale and Weizmann databases. Though the purpose of rendering prototype images is to improve recognition performance, we would like to visualize the quality of the rendered images and compare them with images obtained using a local SFS algorithm [19]. In Fig. 1 we compare the results of rendering the prototype images using 1) local SFS with model-based source-from-shading, and 2) direct computation based on SSFS plus a generic 3D face model, with model-based symmetric source-from-shading. These results clearly indicate the superior quality of the prototype images rendered by direct computation.

Figure 1: Image rendering comparison. All the original images are listed in the first column. The second column shows the prototype images rendered using the local SFS algorithm. Prototype images rendered with symmetric SFS are plotted in the third column. Finally, the fourth column shows real images which are close to the prototype images.
5.2 Enhancing Face Recognition
We have conducted three experiments. The first experiment demonstrates improvements in recognition performance obtained by using the new illumination-invariant measure, and compares this measure with the gradient measure introduced in [15]. The second experiment shows that using the rendered prototype images instead of the original images can significantly improve existing face recognition methods such as PCA and LDA. We think it is appropriate to separate the first two experiments since they are based on different methodologies: the gradient measure is meant to alleviate the effect of illumination change and involves no training, whereas PCA and LDA are training-based methods that do not explicitly handle illumination changes. Finally, in the third experiment we demonstrate that the generalized/predictive recognition rate of subspace LDA can be greatly enhanced.
5.2.1 Comparison of Matching Measure
In this section, we investigate the effect of using different measures on the recognition of face images under different illuminations. The three measures we compare are 1) the image-difference measure d_D(I,J), 2) the gradient measure d_G(I,J), and 3) the illumination-invariant measure d_P(I,J). In the following experiment, we divide each face database (Yale and Weizmann), which contains four images per person, into two disjoint sets: a sample set and a testing set. We conducted two rounds of experiments for each database: in the first round (Table 1) the sample set contains only one image per class, while in the second round (Table 2) it contains two. We also found that applying a zero-mean, unit-variance preprocessing step considerably improves the matching performance based on d_D(I,J), so we report only those results.

Database    d_D(I,J)    d_G(I,J)    d_P(I,J)
Yale        68.3%       78.3%       83.3%
Weizmann    86.5%       97.9%       81.3%

Table 1: Testing round one: matching performance using three different measures. Only one sample image per class is available.

Database    d_D(I,J)    d_G(I,J)    d_P(I,J)
Yale        78.3%       88.3%       90.0%
Weizmann    72.9%       96.9%       97.9%

Table 2: Testing round two: matching performance using three different measures. Two sample images per class are available.

Some conclusions can be drawn from these results: 1) both the gradient measure and the illumination-invariant measure handle illumination changes better than d_D(I,J), and 2) the illumination-invariant measure is better than the gradient measure when the computation/approximation errors in (p,q) are small. We also noticed an interesting phenomenon in Table 1 for the Weizmann database: there the illumination-invariant measure is worse than the gradient measure, and even worse than the image measure. The reason is a large approximation error in fitting individual faces with the generic 3D model; in the Weizmann database, all faces appear elongated along the vertical direction. This also explains a similar phenomenon in Table 4 for the Weizmann database.
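The zero-mean, unit-variance preprocessing mentioned above amounts to the following normalization (a trivial sketch, included only for completeness; the eps guard is ours):

```python
import numpy as np

def zero_mean_unit_variance(I, eps=1e-8):
    """Photometric normalization applied before the d_D(I,J) comparison."""
    I = np.asarray(I, dtype=float)
    return (I - I.mean()) / (I.std() + eps)
```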
5.2.2 Prototype Images for PCA and LDA

In this section we show that the use of prototype images significantly improves existing PCA and LDA systems. Both PCA and LDA use either the original images or the rendered prototype images as training and testing samples, and the testing scenarios are exactly the same as in the experiment described above. Hence PCA and LDA are trained using either one image per class (Table 3) or two images per class (Table 4). LDA was performed on the PCA projection coefficients. The LDA used here is regularized, i.e., the within-class scatter matrix S_w is modified by adding a very small diagonal constant [6]; in the case of one sample per class, S_w was simply set to the identity matrix. If no weights are applied in the PCA- and LDA-transformed spaces, the two classifiers are exactly the same, as Table 3 clearly reveals.

Database    PCA      LDA      P-PCA    P-LDA
Yale        56.7%    56.7%    81.7%    81.7%
Weizmann    38.5%    38.5%    77.1%    77.1%

Table 3: Testing round one: matching performance based on PCA and LDA. The symbols P-PCA and P-LDA denote PCA and LDA systems using prototype images.

Database    PCA      LDA      P-PCA    P-LDA
Yale        71.7%    88.3%    90.0%    95.0%
Weizmann    97.9%    100%     95.8%    98.9%

Table 4: Testing round two: matching performance based on PCA and LDA. Symbols are the same as in Table 3.
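A sketch of LDA on PCA coefficients with a regularized within-class scatter, as described above, is given below; the reg parameter and the plain generalized-eigenproblem solver are our simplifications, not the exact implementation of [6]:

```python
import numpy as np

def regularized_lda(X, y, reg=1e-3):
    """LDA on PCA projection coefficients with S_w + reg*I.
    X: N x d coefficient matrix, y: length-N array of class labels.
    Returns W whose columns are the discriminant directions."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)        # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    # Regularization: with one sample per class, Sw reduces to ~reg * I,
    # matching the identity-matrix special case in the text.
    Sw += reg * np.eye(d)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order].real
```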
5.2.3 Enhanced Subspace LDA
We have built a successful subspace LDA face recognition system [6]. By carefully choosing the dimension of the (PCA) subspace, this method can handle the overfitting problem and thus has good predictive/generalized recognition performance. This characteristic enables the system to recognize new face images or new classes/persons without retraining the subspace (PCA) and/or the LDA; for details of the system and an independent evaluation of it, see [6, 3]. In this experiment, the face subspace was obtained by training on 1038 FERET images from a total of 444 classes, with only 2 to 4 training samples per class, and the face-subspace dimension was chosen to be 300 based on the characteristics of the eigenimages [6]. The LDA projection matrix W was also trained on these FERET images. There was no retraining of the subspace or of W when new classes from the Yale and Weizmann databases were presented in this experiment.

We used a testing protocol similar to the FERET test: we have a gallery set and a probe set; for each image in the probe set, a rank ordering of all the images in the gallery set is produced, and the cumulative match scores are computed in the same way as in the FERET test [3]. We conducted two independent experiments. For the Yale database, the gallery set contains 486 images from several face databases, including 15 (one image per class) from the Yale database, and the probe set contains 60 images also from the Yale database. For the Weizmann database, the gallery set contains 495 images from several face databases, including 24 (one image per class) from the Weizmann database, and the probe set contains 96 images from the same database. Figure 2 shows the significant improvement in performance obtained by using the prototype images on both databases.
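The cumulative match scores of this FERET-style protocol can be computed as in the following sketch (our formulation): for each probe the gallery is ranked by distance, and the score at rank r is the fraction of probes whose correct match appears at rank r or better.

```python
import numpy as np

def cumulative_match_scores(dist, probe_ids, gallery_ids):
    """dist: n_probes x n_gallery distance matrix (e.g. from d_P).
    Returns an array whose r-th entry is the top-(r+1) match rate."""
    n_probes, n_gallery = dist.shape
    scores = np.zeros(n_gallery)
    for i in range(n_probes):
        order = np.argsort(dist[i])                        # best match first
        rank = np.where(gallery_ids[order] == probe_ids[i])[0][0]
        scores[rank:] += 1.0                               # counted from rank onward
    return scores / n_probes
```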
6 Discussion and Conclusions
We have proposed an SSFS-based method to handle the illumination problem in face recognition. By combining the symmetric SFS algorithm with a generic 3D head model, we can enhance face recognition under illumination changes. The feasibility and efficacy of this method have been demonstrated experimentally using two sets of real face images.
6.1 Discussion
Based on SSFS and a generic 3D model, our method of direct prototype-image computation has the following features in handling the illumination problem in face recognition:

- There is no training, hence only one image is needed.
- The illumination problem and the recognition problem based on training images are nicely separated.
- The new matching measure is illumination-invariant.
- Since no full SSFS is actually carried out and the computation is image-to-image, the method is fast.
- The problem of solving for complex/arbitrary albedo information is avoided.
- Our method can easily be integrated with other approaches. For example, we can combine SSFS with multiple 3D models, such as laser-scanned range images of real human heads; using multiple models, we can render much more accurate prototype images by choosing the best model.

Some drawbacks of our scheme:

- Symmetric light-source estimation must be applied to each image, which may be time-consuming.
- The accuracy of the scheme depends on the light-source estimation and on how well the generic 3D head approximates the real 3D shapes of individual faces. Though in theory our method needs accurate alignment and 3D model fitting, the accuracy issue does not seem to be critical in our experiments.

Figure 2: Enhancing the predictive/generalized recognition of subspace LDA. Both panels plot cumulative match score against rank. The thin lines represent the cumulative match scores when applying the existing subspace LDA to the original images, while the thick lines represent the scores when applying it to the prototype images. The curves in (a) are for the Yale face database; the curves in (b) are for the Weizmann database.

6.2 Future Directions

Since our image rendering algorithm uses division, and the given partial shape information may not be accurate, it is likely that there will be some bad reconstruction points. One question to ask is how sensitive, quantitatively, our method is to these errors. Another good question relates to face and eye detection: how can images be normalized when lighting is an issue? We will address these problems in the near future.

One way to fit individual models better is to deform the generic 3D face model. A simple implementation is to first detect the nose tip, face boundary, mouth corners, etc., and then deform the shape according to the movement of these key points; for example, the deformation can be done with a B-spline-based method which deforms and interpolates the given initial shape.

For face objects under out-of-plane rotation, we are also developing model-based SSFS algorithms. This can be done by first determining the 3D pose of the object and then rotating it back [26]. To facilitate this procedure, we need the following lemma (the proof is omitted due to space restrictions):

Lemma 1 Suppose that the partial gradients (p[x,y], q[x,y]) become (p'[x',y'], q'[x',y']) after the underlying surface is rotated in the x-z plane about the y-axis by θ (anti-clockwise); then they are related by

p'[x',y'] = tan(θ + θ₀),
q'[x',y'] = q[x,y] cos θ₀ / cos(θ + θ₀),  (13)

where tan θ₀ = p[x,y].
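Eq. (13) translates directly into code; the sketch below (our illustration) applies it to gradient maps and ignores the accompanying coordinate change (x,y) -> (x',y') that a full implementation would also have to perform.

```python
import numpy as np

def rotate_gradients(p, q, theta_deg):
    """Transform surface gradients under a rotation of the surface
    about the y-axis by theta (anti-clockwise), following eq. (13):
    p' = tan(theta + theta0), q' = q*cos(theta0)/cos(theta + theta0),
    where tan(theta0) = p. Assumes the rotated surface is still a
    graph, i.e. theta + theta0 stays away from 90 degrees."""
    theta = np.radians(theta_deg)
    theta0 = np.arctan(p)
    p_new = np.tan(theta + theta0)
    q_new = q * np.cos(theta0) / np.cos(theta + theta0)
    return p_new, q_new
```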
In Fig. 3, we illustrate images synthesized under different rotations and illuminations (with a Lambertian model) [26].

Figure 3: Rendering images under different rotations and illuminations using the Mozart head. In the row direction (left to right) the images are rotated by 5°, 10°, 25°, and 35°; in the column direction (top to bottom) the images are illuminated as follows: pure texture warping (no illumination imposed); σ = 0°; and σ = 30°, τ = 120°.

In summary, we can pursue several major future directions:

- Use more than one face image to obtain better gradient information, i.e., full SSFS.
- Apply simple geometric operations to the generic 3D model to fit individual faces better.
- Extend the Lambertian model to more general models, for example including specular reflections and multiple lighting sources; possible ambient-lighting effects should also be considered.
References
[1] Chellappa, R., Wilson, C.L., and Sirohey, S. 1995. Human and Machine Recognition of Faces: A Survey. Proc. of the IEEE, Vol. 83, pp. 705-740.
[2] Wechsler, H., Phillips, P.J., Bruce, V., Soulie, F.F., and Huang, T.S. 1998. Face Recognition: From Theory to Applications. Springer: Berlin.
[3] Phillips, P.J., Moon, H., Rauss, P., and Rizvi, S.A. 1997. The FERET Evaluation Methodology for Face-Recognition Algorithms. In Proc. Conference on Computer Vision and Pattern Recognition, San Juan, PR, pp. 137-143.
[4] Belhumeur, P.N., Hespanha, J.P., and Kriegman, D.J. 1997. Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Trans. on PAMI, Vol. 19, pp. 711-720.
[5] Phillips, P.J., Moon, H., Rizvi, S., and Rauss, P. 1998. The FERET Evaluation. In Face Recognition: From Theory to Applications, Eds. Wechsler, H., Phillips, P.J., Bruce, V., Soulie, F.F., and Huang, T.S. Springer: Berlin, pp. 244-261.
[6] Zhao, W., Chellappa, R., and Phillips, P.J. 1999. Subspace Linear Discriminant Analysis for Face Recognition. Center for Automation Research, University of Maryland, College Park, Technical Report CAR-TR-914.
[7] Adini, Y., Moses, Y., and Ullman, S. 1997. Face Recognition: The Problem of Compensating for Changes in Illumination Direction. IEEE Trans. on PAMI, Vol. 19, pp. 721-732.
[8] Hallinan, P. 1994. A Low-Dimensional Representation of Human Faces for Arbitrary Lighting Conditions. In Proc. Conference on Computer Vision and Pattern Recognition, Seattle, WA, pp. 995-999.
[9] Nayar, S.K. and Murase, H. 1994. Dimensionality of Illumination Manifold in Eigenspace. Department of Computer Science, Columbia University, Technical Report CUCS-021-94.
[10] Shashua, A. 1997. On Photometric Issues in 3D Visual Recognition from a Single 2D Image. Int. Journal of Computer Vision, Vol. 21, pp. 99-122.
[11] Belhumeur, P.N. and Kriegman, D.J. 1997. What is the Set of Images of an Object Under All Possible Lighting Conditions? In Proc. Conference on Computer Vision and Pattern Recognition, San Juan, PR, pp. 52-58.
[12] Georghiades, A.S., Kriegman, D.J., and Belhumeur, P.N. 1998. Illumination Cones for Recognition Under Variable Lighting: Faces. In Proc. Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, pp. 52-58.
[13] Riklin-Raviv, T. and Shashua, A. 1999. The Quotient Image: Class Based Re-rendering and Recognition With Varying Illuminations. In Proc. Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, pp. 566-571.
[14] Atick, J., Griffin, P., and Redlich, N. 1996. Statistical Approach to Shape from Shading: Reconstruction of Three-Dimensional Face Surfaces from Single Two-Dimensional Images. Neural Computation, Vol. 8, pp. 1321-1340.
[15] Jacobs, D.W., Belhumeur, P.N., and Basri, R. 1998. Comparing Images under Variable Illumination. In Proc. Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, pp. 610-617.
[16] Oliensis, J. 1991. Uniqueness in Shape from Shading. Int. Journal of Computer Vision, Vol. 6, pp. 75-104.
[17] Horn, B.K.P. and Brooks, M.J. 1989. Shape from Shading. MIT Press: Cambridge, MA.
[18] Zheng, Q. and Chellappa, R. 1991. Estimation of Illumination Direction, Albedo, and Shape from Shading. IEEE Trans. on PAMI, Vol. 13, pp. 680-702.
[19] Tsai, P.S. and Shah, M. 1992. A Fast Linear Shape from Shading. In Proc. Conference on Computer Vision and Pattern Recognition, Urbana/Champaign, IL, pp. 459-465.
[20] Wei, G.Q. and Hirzinger, G. 1997. Parametric Shape-from-Shading by Radial Basis Functions. IEEE Trans. on PAMI, Vol. 19, pp. 353-365.
[21] Lee, C.H. and Rosenfeld, A. 1989. Improved Methods of Estimating Shape from Shading Using the Light Source Coordinate System. In Shape from Shading, Eds. Horn, B.K.P. and Brooks, M.J., MIT Press: Cambridge, MA, pp. 323-569.
[22] Pentland, A.P. 1982. Finding the Illumination Directions. Journal of the Optical Society of America, Vol. 72, pp. 448-455.
[23] Hampel, F.R. et al. 1986. Robust Statistics: The Approach Based on Influence Functions. John Wiley & Sons: New York.
[24] Wolff, L.B. and Angelopoulou, E. 1994. 3-D Stereo Using Photometric Ratios. In Proc. European Conference on Computer Vision, Springer: Berlin, pp. 247-258.
[25] Zhao, W. and Chellappa, R. 1999. Robust Face Recognition Using Symmetric Shape-from-Shading. Center for Automation Research, University of Maryland, College Park, Technical Report CAR-TR-919.
[26] Zhao, W. and Chellappa, R. 1999. SFS Based View Synthesis for Robust Face Recognition. In International Workshop on Automatic Face and Gesture Recognition, France.