Isometric Deformation Modeling using Singular Value Decomposition for 3D Expression-Invariant Face Recognition

Dirk Smeets, Thomas Fabry, Jeroen Hermans, Dirk Vandermeulen and Paul Suetens

Abstract— Currently, the recognition of faces under varying expressions is one of the main challenges in the face recognition community. In this paper a method is presented that deals with these expression variations by using an isometric deformation model. The method is built upon the geodesic distance matrix as a representation of the 3D face. We will show that the set of largest singular values is an excellent expression-invariant shape descriptor. Faces are compared by comparing their shape descriptors, using the mean normalized Manhattan distance as dissimilarity measure. The presented method is validated on a subset of 900 faces of the BU-3DFE face database, resulting in an equal error rate of 13.37% for the verification scenario. This result is comparable with the equal error rates of other 3D expression-invariant face recognition methods that use an isometric deformation model on the same database.

I. INTRODUCTION

Recent research in face recognition has been dealing with the challenge of large variability in head pose, lighting conditions, facial expression, and aging. The problems of varying lighting conditions and pose are largely solved by using 3D images, and technological improvements are making 3D surface capturing devices affordable for security purposes. However, intra-subject deformations still need special attention in automatic 3D face recognition.

3D expression-invariant face recognition methods can be subdivided into three classes, depending on the way they handle expressions. Historically, the first face recognition methods dealing with expression variations were region-based. These rely on parts of the face that remain approximately rigid during expression variations. The most popular regions to select in advance are the region of the nose [1], [2], [3], [4], [5], cheek [1], chin [1], eyebrows [1], eyes [2], forehead [2], [6] and the region above the mouth [7], [8], [9]. Other algorithms automatically select the more rigid regions while matching [10], by training a priori defined regions [11], [12], or fuse a priori defined regions by applying different weights [13]. A second approach for expression-invariant recognition is the use of statistical models. Mostly a principal component analysis (PCA) model is used, either to model expressions directly [14], [15] or by including non-neutral faces in the training set [16], [17], [18]. A third group of methods uses an isometric deformation model: according to [19], geodesic distances between corresponding points remain approximately constant under expression variation.

The authors are with the K.U.Leuven, Faculty of Engineering, Department of Electrical Engineering, Center for Processing Speech and Images at the Medical Imaging Research Center, University Hospitals Gasthuisberg, Herestraat 49 - bus 7003, B-3000 Leuven, Belgium (corresponding author: [email protected]).


Fig. 1. 3D meshes of the same face with different levels of the same expression (angry), coming from the BU-3DFE database [30].

Several methods [20], [21], [22], [23], [24] build on this invariance of geodesic distances. Some other methods do not handle expressions explicitly, but perform well under a certain amount of expression variation; examples of these robust methods are [25], [26], [27], [28], [29].

The method presented in section II is built upon the hypothesis of the invariance of geodesic distances and therefore belongs to the third class. After preprocessing, the faces are represented by a geodesic distance matrix (GDM). Performing a singular value decomposition of this GDM provides an excellent shape descriptor, which forms the basis for the recognition experiments. Section III gives the results of these experiments, which are discussed and compared with other expression-invariant face recognition methods in section IV.

II. METHOD

This section describes the method for 3D expression-invariant face recognition in detail. The method starts from 3D images of faces, each represented as a mesh. An example of faces with different levels of the same expression is shown in Fig. 1. The proposed method uses an isometric deformation model, assuming that expression variations cause isometric deformations of the facial surface. In mathematics, an isometry is a distance-preserving isomorphism between metric spaces. The basis of the model is therefore the invariance of geodesic distances between corresponding points on the surface during expression variations. The geodesic distance between two points is defined as the length of the shortest curve on the surface connecting these points.

A. Preprocessing

The preprocessing steps aim to extract the same region in each face of the database. The first step is the detection of the tip of the nose.


Fig. 2. The surface of the left face in Fig. 1 is cropped by only keeping vertices within a geodesic distance of 80 mm from the nose tip.

For a large number of 3D meshes in the database, the nose tip can easily be found by taking the vertex with the largest z-value, where the z-axis is mostly aligned with the gaze direction. Wrongly detected nose tips are corrected manually; this step can be automated, e.g. with [31], [32]. The second step crops the face to the desired region by keeping only the vertices whose geodesic distance to the nose tip is smaller than a predefined threshold. For the calculation of the geodesic distance, the Eikonal equation

    |∇T(P)| = 1,                         (1)

has to be solved on the surface, with starting condition T(P1) = 0, where P1 is the starting point of the geodesic path. This can be achieved with a fast marching algorithm for triangulated meshes [33]. The geodesic cropping is shown in Fig. 2 with a cutoff threshold of 80 mm. The third step is downsampling the mesh to the same number of points for each face in the database. The result is a normalized face region that contains the same face area, with the same number of points, for every 3D image. This last step is important because the proposed method assumes that each point on the surface has a corresponding point on each other facial surface of the same subject.

B. Geodesic distance matrix as face representation

An appropriate object representation to exploit the advantages of an isometric deformation model is the geodesic distance matrix (GDM). We call G a GDM for a particular face if G = [g_ij], with g_ij the geodesic distance between points i and j on the facial surface. This matrix is symmetric and defined only up to a permutation of the points on the represented facial surface. Fig. 3 shows the GDM associated with the face presented in Fig. 2. For the calculation of the GDM, again the fast marching algorithm for triangulated meshes of [33] is used. The algorithm computes the length of the shortest (discrete) path between each pair of surface points. The complexity of this computation is O(n·m), with n the dimension of the GDM and m ≥ n the number of points in the cropped mesh.
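As a concrete illustration, the preprocessing and GDM construction can be sketched as follows. This is a minimal sketch, not the authors' implementation: it approximates geodesic distances by Dijkstra shortest paths along mesh edges instead of the fast marching method of [33], and it assumes `vertices` is an (n, 3) float array and `edges` an (e, 2) integer array already loaded from a mesh.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def edge_graph(vertices, edges):
    """Sparse adjacency graph whose weights are Euclidean edge lengths."""
    w = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    n = len(vertices)
    g = csr_matrix((w, (edges[:, 0], edges[:, 1])), shape=(n, n))
    return g + g.T                                  # undirected mesh edges

def gdm_pipeline(vertices, edges, radius=80.0, n_points=2000, seed=0):
    """Nose tip = vertex with the largest z-value (z-axis ~ gaze direction),
    geodesic cropping at `radius` mm, random downsampling to `n_points`
    vertices, and the GDM restricted to the sampled vertices."""
    graph = edge_graph(vertices, edges)
    nose = int(np.argmax(vertices[:, 2]))
    d_nose = dijkstra(graph, indices=nose)          # distances to the nose tip
    keep = np.flatnonzero(d_nose <= radius)         # geodesic cropping
    rng = np.random.default_rng(seed)
    sample = rng.choice(keep, size=n_points, replace=False)
    # Pairwise geodesic distances between the sampled vertices (shortest
    # paths are traced over the full mesh here, for simplicity).
    return dijkstra(graph, indices=sample)[:, sample]
```

On dense, well-triangulated meshes the edge-path approximation is close to the true surface geodesic; fast marching removes the remaining metrication error.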


Fig. 3. The geodesic distance matrix representation of the face shown in Fig. 2.

Besides the plain geodesic distance matrix (G1 = [g_ij]), other affinity matrices closely related to the GDM are examined: the squared GDM (G2 = [g_ij²]), the Gaussian weighted GDM (G3 = [exp(−g_ij²/(2σ²))]) and the increasing weighting function GDM (G4 = [(1 + g_ij/σ)⁻¹]) [34] are compared with the non-weighted GDM (G1).
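All three variants are simple elementwise transformations of G1. A minimal sketch, with σ set to the maximal value in the GDM as in section III:

```python
import numpy as np

def affinity_matrices(G1):
    """The weighted GDM variants compared in the paper; sigma is taken as
    the maximal entry of the GDM, as in the experiments."""
    sigma = G1.max()
    G2 = G1 ** 2                                 # squared GDM
    G3 = np.exp(-G1 ** 2 / (2 * sigma ** 2))     # Gaussian weighted GDM
    G4 = 1.0 / (1.0 + G1 / sigma)                # increasing weighting function
    return G2, G3, G4
```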

C. The set of largest singular values as expression-invariant shape descriptor

According to the isometric assumption of facial expression variations, the GDM remains the same for different images with different expressions but from the same subject (under the condition that each point has a corresponding point on each other facial surface of the same subject). However, because point correspondences are assumed to be unknown, the GDM G is only defined up to arbitrary simultaneous permutations of its rows and columns. An eigenvalue decomposition (EVD) or, more generally, a singular value decomposition (SVD) decomposes a GDM into permutation-variant eigenvector or singular vector matrices and a permutation-invariant diagonal matrix.

Proof: Let P be a permutation matrix, such that G′ = P G P^T is a GDM with rows and columns permuted, and let G = U Σ V^T be a singular value decomposition of G. Then

    G′ = P G P^T = (P U) Σ (P V)^T.      (2)

Because P U and P V remain unitary matrices and Σ is still a diagonal matrix with non-negative real numbers on the diagonal, the right-hand side of (2) is a valid singular value decomposition of G′. A common convention is to sort the singular values in non-increasing order; in this case the diagonal matrix Σ is uniquely determined by G′, so Σ = Σ′, with Σ′ the singular value matrix of G′.

Because the GDM is a symmetric matrix, the EVD and SVD give similar results (the singular values are the absolute values of the eigenvalues), so both decompositions can be used in the proposed method. Consequently, the information in the GDM can be separated into a matrix that contains intrinsic shape information and a matrix with information about corresponding points.
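The invariance in (2) is easy to verify numerically. A minimal check on a random symmetric "GDM-like" matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 6))
G = A + A.T                              # symmetric, like a GDM
np.fill_diagonal(G, 0.0)                 # zero self-distances

P = np.eye(6)[rng.permutation(6)]        # random permutation matrix
G_perm = P @ G @ P.T                     # rows and columns permuted together

s = np.linalg.svd(G, compute_uv=False)   # singular values, non-increasing
s_perm = np.linalg.svd(G_perm, compute_uv=False)
assert np.allclose(s, s_perm)            # Sigma is permutation-invariant
```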


The eigenvalues and singular values can be used as intrinsic shape descriptors, while the eigenvectors and singular vectors give information about correspondences. For numerical reasons, only the k largest eigenvalues or singular values are determined. As such, the computational complexity is limited to O(k·n²), with n the dimension of the GDM. MATLAB 2009a is used for the calculation of the eigenvalues. Hence, the set of the k largest eigenvalues of a face's GDM is proposed as an expression-invariant and permutation-invariant shape descriptor.
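The paper computes the eigenvalues in MATLAB 2009a; as a sketch of the same step in Python, scipy's Lanczos-type solver `eigsh` returns only the k eigenvalues of largest magnitude, and the descriptor keeps their absolute values in non-increasing order:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def shape_descriptor(G, k=80):
    """k largest-magnitude eigenvalues of the symmetric GDM, returned as a
    non-increasing vector of absolute values (k < dim(G) is required)."""
    vals = eigsh(G, k=k, which="LM", return_eigenvectors=False)
    return np.sort(np.abs(vals))[::-1]
```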


D. Dissimilarity measures for face comparison


In order to compare the faces in a database, an appropriate dissimilarity measure has to be chosen to compare the corresponding shape descriptors. We examined the dissimilarity measures listed in Table I. In this table, S represents the shape descriptor, i.e. a vector containing the absolute values of the k largest eigenvalues; superscripts k and l index the two faces being compared, and S_i denotes the i-th of the m components of a descriptor.

TABLE I
DISSIMILARITY MEASURES.

  Jensen-Shannon divergence:            D1 = H(½S^k + ½S^l) − ½(H(S^k) + H(S^l))
  Mean normalized Manhattan distance:   D2 = Σ_i 2|S_i^k − S_i^l| / (S_i^k + S_i^l)
  Mean normalized maximum norm:         D3 = max_i 2|S_i^k − S_i^l| / (S_i^k + S_i^l)
  Mean normalized absolute difference
  of square root vectors:               D4 = Σ_i 2|√S_i^k − √S_i^l| / (√S_i^k + √S_i^l)
  Correlation:                          D5 = 1 − (S^k · S^l) / (‖S^k‖ ‖S^l‖)
  Euclidean distance:                   D6 = √(Σ_i (S_i^k − S_i^l)²)
  Normalized Euclidean distance:        D7 = √(Σ_i (S_i^k − S_i^l)² / σ_i²)
  Mahalanobis distance:                 D8 = √((S^k − S^l)^T cov(S)⁻¹ (S^k − S^l))

where the sums run over i = 1, ..., m.
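As a sketch, two of the measures from Table I: the mean normalized Manhattan distance D2 (the one eventually selected in section III) and the correlation measure D5, for descriptor vectors of equal length with positive components:

```python
import numpy as np

def mean_normalized_manhattan(Sk, Sl):
    """D2 from Table I: sum of elementwise normalized absolute differences."""
    return np.sum(2.0 * np.abs(Sk - Sl) / (Sk + Sl))

def correlation_dissimilarity(Sk, Sl):
    """D5 from Table I: one minus the cosine between the descriptors."""
    return 1.0 - Sk @ Sl / (np.linalg.norm(Sk) * np.linalg.norm(Sl))
```

The elementwise normalization in D2 makes each eigenvalue contribute a relative rather than an absolute difference, so the few very large leading eigenvalues do not dominate the comparison.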


III. RESULTS

The method is validated on a subset of the BU-3DFE database [30]; most of the selected faces have a closed mouth. In total, 900 facial surfaces are considered, corresponding to different expressions of 100 subjects. Only 11% of the surfaces represent neutral facial expressions; for each other (non-neutral) expression, four different levels of expression are defined.

For validation purposes, the standard verification scenario is used. The performance in this scenario is measured with the receiver operating characteristic (ROC) curve, which plots the false rejection rate (FRR) against the false acceptance rate (FAR). The equal error rate (EER) is the point on the ROC for which the FAR is equal to the FRR, and can therefore be seen as an important characteristic of the verification performance.

After detecting the nose tip, which is done automatically for 98.0% of the faces in the database, the method randomly selects 2000 points with a geodesic distance of 80 mm or less from the nose tip. A fast marching algorithm for meshes then computes the GDM, which is decomposed by an eigenvalue decomposition, for each face in the database.

A first influential design choice is the dissimilarity measure. We validated the algorithm for the different measures of Table I on a small subset of the database and experimentally determined that the mean normalized Manhattan distance is the most appropriate for comparing the shape descriptors. Second, it is important to choose the right number of eigenvalues: the more eigenvalues have to be calculated, the higher the computation time. Fig. 4 plots the EER against the number of eigenvalues used in the expression-invariant shape descriptor; the EER decreases as more eigenvalues are used. A third factor in the performance is the choice of the affinity matrix, as defined in section II-B. Table II lists the EER for the non-weighted GDM (G1) and three related matrices: the squared GDM (G2), the Gaussian weighted GDM (G3) and the increasing weighting function GDM (G4). For σ, the maximal value in the GDM is chosen.
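For reference, the EER used throughout these experiments can be read off by sweeping the acceptance threshold over the dissimilarity scores. A minimal sketch, where `genuine` and `impostor` are hypothetical arrays of descriptor distances for same-subject and different-subject pairs:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep the threshold over all observed scores; a pair is accepted when
    its dissimilarity is <= threshold. FAR = fraction of impostor pairs
    accepted, FRR = fraction of genuine pairs rejected. Returns the rate at
    the threshold where |FAR - FRR| is smallest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```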


Fig. 4. The number of eigenvalues has an influence on the verification performance.

TABLE II
COMPARISON OF DIFFERENT WEIGHTING FUNCTIONS OF THE GDM AS DEFINED IN SECTION II-B.

        G1       G2       G3       G4
EER     13.37%   26.50%   26.64%   17.84%

The results clearly show that the best performance is achieved for the non-weighted GDM. Finally, the receiver operating characteristic (ROC) obtained with the mean normalized Manhattan distance on shape descriptors containing the 80 largest eigenvalues is shown in Fig. 5. The EER is equal to 13.37%.




Fig. 5. The receiver operating characteristic (ROC) for a subset of the BU-3DFE database.

IV. DISCUSSION

As demonstrated in [35], geodesic distances between corresponding point pairs do not remain constant during expression variations: following [36], the standard deviation of the relative change in geodesic distance is about 15%. The isometric deformation model is therefore an approximation. Moreover, without additional processing to disconnect the upper and lower lip in the surface representation before the geodesic distances are determined, the model is not valid for faces with an open mouth. Occlusions also result in miscalculated geodesic distances. These aspects can be seen as the general disadvantages of methods using an isometric deformation model.

However, the other classes of expression-invariant methods (section I) have disadvantages of their own that are absent from methods using the isometric deformation model. Methods using statistical models always need a training stage to construct the model; if the training data are not representative, the recognition performance decreases. The region-based methods do not use all available information, since they throw away the parts of the face that are affected by expressions. This discards information that could be discriminative.

The main advantage of the method proposed in this paper is that, except for the nose, the algorithm does not have to determine correspondences between points on different surfaces: the singular value decomposition captures the intrinsic shape information in the diagonal matrix, independent of the sampling order. On the other hand, it is hard to satisfy the condition that each point has a corresponding point on each other facial surface of the same subject. However, when enough points on the same facial area are considered, there will always be a point close to the (virtual) corresponding point.

Mpiperis et al. [36] developed a face recognition method based on an isometric deformation model using the geodesic polar representation. Instead of calculating pairwise geodesic distances, geodesic distances from the nose tip to all other points are calculated to construct a geodesic polar parameterization. In [36], this method is compared with another method using an isometric deformation model, developed by Bronstein et al. [35]. Thereto, again a subset of the BU-3DFE database [30], here of 1600 images, is used.


Mpiperis et al. reported an EER of 9.8% for their method and 15.4% for the method of [35]. A partial explanation for the better performance is the use of color information and a special method for handling the open mouth problem in [36]. It is particularly interesting to compare our method with the related method described in [20] (which is the basis of [35]). There, the face is, in a first step, also represented by a geodesic distance matrix. By applying multidimensional scaling (MDS) to this GDM, a canonical form in dimension m is created, and the canonical forms are compared using rigid registration. If the MDS is done by classical scaling, an eigenvalue decomposition is performed on the squared GDM after double-centering. If m = 3, only the three largest eigenvalues are kept and some intrinsic shape information is discarded. This might be an explanation for the higher EER of 15.4% reported in [36], compared to the EER of 13.4% of the method described here.

V. CONCLUSION

This paper described a method for 3D expression-invariant face recognition using an isometric deformation model. Three-dimensional faces are represented as geodesic distance matrices. By applying a singular value decomposition, this matrix can be decomposed into a matrix that contains intrinsic shape information, the singular value matrix, and a matrix with information about corresponding points. The set of largest singular values is therefore an interesting shape descriptor. Shape dissimilarity is expressed using the mean normalized Manhattan distance. This resulted in an equal error rate of 13.37% on a subset of the BU-3DFE database of Binghamton University.

As future work we plan to validate the identification scenario by calculating the cumulative matching curve (CMC), which gives the recognition rate for several ranks. To increase performance, open mouths need to be handled; this can be achieved by cutting the mesh between the mouth corners. We also propose to further exploit the eigenvector or singular vector matrices in order to obtain correspondences between different faces of the same subject.

VI. ACKNOWLEDGEMENTS

This work is supported by the Flemish Institute for the Promotion of Innovation by Science and Technology in Flanders (IWT Vlaanderen), the Fund for Scientific Research in Flanders (FWO) and the Research Fund K.U.Leuven.

REFERENCES

[1] J. C. Lee and E. Milios, "Matching range images of human faces," in ICCV '90: Proceedings of the Third IEEE International Conference on Computer Vision, (Osaka, Japan), pp. 722–726, December 1990.
[2] A. S. Mian, M. Bennamoun, and R. A. Owens, "Region-based matching for robust 3D face recognition," in BMVC '05: Proceedings of the British Machine Vision Conference, vol. 1, (Oxford, United Kingdom), pp. 199–208, June 2005.
[3] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "Multiple nose region matching for 3D face recognition under varying facial expression," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp. 1695–1700, 2006.
[4] F. B. ter Haar and R. C. Veltkamp, "SHREC'08 entry: 3D face recognition using facial contour curves," in SMI '08: Proceedings of the IEEE International Conference on Shape Modeling and Applications, (Stony Brook, NY, USA), pp. 259–260, June 2008.
[5] D. Xu, P. Hu, W. Cao, and H. Li, "SHREC'08 entry: 3D face recognition using moment invariants," in SMI '08: Proceedings of the IEEE International Conference on Shape Modeling and Applications, (Stony Brook, NY, USA), pp. 261–262, June 2008.
[6] W.-Y. Lin, M.-Y. Chen, K. R. Widder, Y. H. Hu, and N. Boston, "Fusion of multiple facial regions for expression-invariant face recognition," in MMSP '07: Proceedings of the IEEE 9th Workshop on Multimedia Signal Processing, (Chania, Crete, Greece), IEEE Computer Society, October 2007.
[7] C.-S. Chua, F. Han, and Y.-K. Ho, "3D human face recognition using point signature," in FG '00: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition 2000, (Washington, DC, USA), p. 233, IEEE Computer Society, 2000.
[8] S. Berretti, A. D. Bimbo, and P. Pala, "SHREC'08 entry: 3D face recognition using integral shape information," in SMI '08: Proceedings of the IEEE International Conference on Shape Modeling and Applications, (Stony Brook, NY, USA), pp. 255–256, June 2008.
[9] P. Nair and A. Cavallaro, "SHREC'08 entry: Registration and retrieval of 3D faces using a point distribution model," in SMI '08: Proceedings of the IEEE International Conference on Shape Modeling and Applications, (Stony Brook, NY, USA), pp. 257–258, June 2008.
[10] Y. Wang, G. Pan, Z. Wu, and Y. Wang, "Exploring facial expression effects in 3D face recognition using partial ICP," in Computer Vision – ACCV 2006 (P. Narayanan, ed.), vol. 3851 of Lecture Notes in Computer Science, pp. 581–590, Springer Berlin / Heidelberg, 2006.
[11] Y. Wang, G. Pan, and Z. Wu, "3D face recognition in the presence of expression: A guidance-based constraint deformation approach," in CVPR '07: Proceedings of the 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–7, IEEE Computer Society, June 2007.
[12] T. Faltemier, K. W. Bowyer, and P. J. Flynn, "A region ensemble for 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 3, pp. 62–73, March 2008.
[13] N. Alyüz, B. Gökberk, and L. Akarun, "A 3D face recognition system for expression and occlusion invariance," in BTAS '08: Proceedings of the IEEE Second International Conference on Biometrics Theory, Applications and Systems, (Arlington, Virginia, USA), September 2008.
[14] X. Lu and A. K. Jain, "Deformation modeling for robust 3D face matching," in CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (Washington, DC, USA), pp. 1377–1383, IEEE Computer Society, 2006.
[15] B. Amberg, R. Knothe, and T. Vetter, "Expression invariant 3D face recognition with a morphable model," in FG '08: Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition, (Amsterdam, The Netherlands), IEEE Computer Society, 2008.

[16] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in ISSPA '03: Proceedings of the 7th IEEE International Symposium on Signal Processing and Its Applications, vol. 2, (Paris, France), pp. 201–204, IEEE Computer Society, July 2003.
[17] T. Heseltine, N. Pears, and J. Austin, "Three-dimensional face recognition using surface space combinations," in BMVC '04: Proceedings of the British Machine Vision Conference (A. Hoppe, S. Barman, and T. Ellis, eds.), (London, UK), September 2004.
[18] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (Washington, DC, USA), pp. 1391–1398, IEEE Computer Society, 2006.
[19] A. M. Bronstein, M. M. Bronstein, and R. Kimmel, "Expression-invariant 3D face recognition," in AVBPA '03: Proceedings of the 4th International Conference on Audio- and Video-based Biometric Person Authentication (J. Kittler and M. Nixon, eds.), vol. 2688 of Lecture Notes in Computer Science, pp. 62–69, Springer, 2003.
[20] A. M. Bronstein, M. M. Bronstein, and R. Kimmel, "Three-dimensional face recognition," International Journal of Computer Vision, vol. 64, no. 1, pp. 5–30, 2005.
[21] C. Samir, A. Srivastava, and M. Daoudi, "Three-dimensional face recognition using shapes of facial curves," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 11, pp. 1858–1863, 2006.
[22] S. Berretti, A. D. Bimbo, and P. Pala, "Description and retrieval of 3D face models using iso-geodesic stripes," in MIR '06: Proceedings of the 8th ACM International Workshop on Multimedia Information Retrieval, (New York, NY, USA), pp. 13–22, ACM, 2006.
[23] X. Li and H. Zhang, "Adapting geometric attributes for expression-invariant 3D face recognition," in SMI '07: Proceedings of the IEEE International Conference on Shape Modeling and Applications, (Washington, DC, USA), pp. 21–32, IEEE Computer Society, 2007.
[24] S. Jahanbin, H. Choi, Y. Liu, and A. C. Bovik, "Three dimensional face recognition using iso-geodesic and iso-depth curves," in BTAS '08: Proceedings of the IEEE Second International Conference on Biometrics Theory, Applications and Systems, (Arlington, Virginia, USA), September 2008.
[25] T. Maurer, D. Guigonis, I. Maslov, B. Pesenti, A. Tsaregorodtsev, D. West, and G. Medioni, "Performance of Geometrix ActiveID™ 3D face recognition engine on the FRGC data," in CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) – Workshops, (Washington, DC, USA), p. 154, IEEE Computer Society, 2005.
[26] M. Hüsken, M. Brauckmann, S. Gehlen, and C. von der Malsburg, "Strategies and benefits of fusion of 2D and 3D face recognition," in CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) – Workshops, (Washington, DC, USA), p. 174, IEEE Computer Society, 2005.
[27] J. A. Cook, V. Chandran, and C. B. Fookes, "3D face recognition using log-Gabor templates," in BMVC '06: Proceedings of the 17th British Machine Vision Conference (M. Chantler, M. Trucco, and B. Fisher, eds.), vol. 2, (Edinburgh, Scotland), pp. 769–778, British Machine Vision Association, September 2006.
[28] I. A. Kakadiaris, G. Passalis, G. Toderici, M. N. Murtuza, Y. Lu, N. Karampatziakis, and T. Theoharis, "Three-dimensional face recognition in the presence of facial expressions: An annotated deformable model approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 640–649, 2007.
[29] T. Fabry, D. Vandermeulen, and P. Suetens, "3D face recognition using point cloud kernel correlation," in BTAS '08: Proceedings of the IEEE Second International Conference on Biometrics Theory, Applications and Systems, (Arlington, Virginia, USA), September 2008.
[30] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato, "A 3D facial expression database for facial behavior research," in FG '06: Proceedings of the 7th IEEE International Conference on Automatic Face and Gesture Recognition, (Southampton, UK), pp. 211–216, April 2006.
[31] C. Xu, T. Tan, Y. Wang, and L. Quan, "Combining local features for robust nose location in 3D facial data," Pattern Recognition Letters, vol. 27, no. 13, pp. 1487–1494, 2006.
[32] W. J. Chew, K. P. Seng, and L.-M. Ang, "Nose tip detection on a three-dimensional face range image invariant to head pose," in IMECS '09: Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. 1, (Hong Kong), March 2009.
[33] G. Peyré and L. D. Cohen, "Heuristically driven front propagation for fast geodesic extraction," International Journal for Computational Vision and Biomechanics, vol. 1, no. 1, pp. 55–67.
[34] M. Carcassoni and E. R. Hancock, "Spectral correspondence for point pattern matching," Pattern Recognition, vol. 36, pp. 193–204, 2003.
[35] A. M. Bronstein, M. M. Bronstein, and R. Kimmel, "Expression-invariant representations of faces," IEEE Transactions on Image Processing, vol. 16, pp. 188–197, January 2007.
[36] I. Mpiperis, S. Malasiotis, and M. G. Strintzis, "3-D face recognition with the geodesic polar representation," IEEE Transactions on Information Forensics and Security, vol. 2, pp. 537–547, September 2007.
