International Journal of Research and Reviews in Information Sciences (IJRRIS) Vol. 2, No. 1, March 2012, ISSN: 2046-6439 © Science Academy Publisher, United Kingdom www.sciacademypublisher.com
Age Range Estimation from Human Face Images Using Face Triangle Formation

Hiranmoy Roy1, Debotosh Bhattacharjee2, Mita Nasipuri2, and Dipak Kumar Basu2

1 Department of Information Technology, RCC Institute of Information Technology, Kolkata 700015, India
2 Department of Computer Science and Engineering, Jadavpur University, Kolkata 700032, India

Email: [email protected], [email protected], [email protected], [email protected]
Abstract – With the advancement of technology, one concern for the whole world, and especially for developing countries, is the tremendous increase in population. At such a rapid rate of increase it becomes difficult to recognize every individual, because records of each individual, whether digital or hard copy, must be maintained for different periods of his life. Sometimes a database holds the required information about a particular individual, but it is of no use because it has become obsolete. With age a person's facial features change, and it becomes difficult to identify a person from images taken at two different ages. This paper presents a novel mechanism by which two images from different periods of an individual's life can be compared to ascertain whether both images are of the same individual. This is done by extracting feature points of a face, from which a face triangle is formed. If the ratio of the areas of the triangles in the two images lies within a specified range, we can say both images are of the same person. A range of threshold values is proposed for the ratios of the areas between various age groups, against which a pair of images can be matched to determine whether they show the same person. At the same time, if it is known that the images are of the same person, the age range can also be estimated. Experimental results show that both face recognition and age range estimation can be performed effectively and with low computational effort.

Keywords – Face Triangle, Age Estimation, Face Recognition, Feature Extraction, Eye Localization, Eyebrow Detection
1. Introduction
Face recognition is an important field of biometrics that is of great use in our day-to-day life, be it the traditional uses in identification documents such as passports, driver's licenses and voter IDs, or its uses in recent years, where face images are increasingly used as an additional means of authentication in applications such as credit/debit cards and in places of high security. But with age progression the facial features change, and the database needs to be updated regularly with those changes, which is a tedious task. So we need to address the issue of facial aging and come up with a mechanism that identifies a person in spite of aging.

1.1. Techniques used in Face Recognition

Techniques used in face recognition can be broadly categorized into three groups, as follows:
i) Traditional, which includes methods such as Eigenfaces or principal component analysis (PCA) and Fisherfaces or linear discriminant analysis (LDA). These techniques [1, 2] extract facial features from an image and then use them to search the face database for images with matching features. Other algorithms [3, 4] normalize a gallery of face images and then compress the face data, saving only the data in the image that is useful for face detection; a probe image is then compared with the stored face data.
ii) 3-D: this technique uses 3-D sensors to capture information about the shape of a face [5, 6]. This information is then used to identify distinctive features on the surface of a face, such as the contours of the eye sockets, nose, and chin. This technique is robust to changes in lighting and viewing angle.
iii) Skin texture analysis: this technique [7, 8] uses the visual details of the skin, as captured in standard digital or scanned images, and turns the unique lines, patterns, and spots apparent in a person's skin into a mathematical space.

1.2. Earlier Work on Age-based Face Recognition

A lot of work has been done previously in the field of face recognition, starting with Gibson's ecological approach towards perception [9] and Thompson's pioneering work on geometric transformations in the study of morphogenesis [10], which largely laid the foundations for the study of craniofacial growth. Thompson modeled the human head as a fluid-filled spherical container and performed a hydrostatic analysis of the effects of gravity on craniofacial growth. In human-computer interaction, aging effects in human faces have been studied for two main reasons: 1) automatic age estimation for face image classification, and 2) automatic age progression for face recognition. Kwon et al. [11] developed a system to classify face images into one of three age groups: infants, young adults and senior adults. They extracted key landmarks from face images and calculated distances between those landmarks; ratios of those distances were then used to classify face images as infants or adults. They also proposed methods for wrinkle detection in predetermined regions of face images to further classify adult images into young adults and senior adults. The first real human age estimation theory was proposed by Lanitis et al. [12, 13], who proposed methods to imitate aging effects on face images.
H. Roy et al. / IJRRIS, Vol. 2, No. 1, pp. 155-160, March 2012
They developed an aging function (a quadratic function) based on a parametric model of face images and performed tasks such as automatic age estimation and face recognition across age progression. They only considered a database of face images of individuals less than 30 years of age. N. Ramanathan and R. Chellappa [14, 15] developed a Bayesian age-difference classifier that classifies face images of individuals based on age differences and performs face verification across age progression. They used coordinate transformation and deformation of local facial feature landmarks. But males and females may have different face aging patterns, depending on natural effects. Geng et al. [16] proposed the AGES (AGing pattErn Subspace) method for automatic age estimation. They model the aging pattern in a 2-D subspace, and for an unseen face image they reconstruct the face and determine the age. Anil K. Jain [17] proposed a 3-D aging modeling technique that automatically generates missing images in different age groups; it learns the aging pattern based on PCA coefficients in separate 3-D shape and texture spaces, given a 2-D database. Most of the aforementioned techniques need considerable computational modeling and higher-level mathematics. Some of them try to generate face images for an unknown time domain, but the most important part, face recognition, is not included. Lin et al. [18] proposed that the frontal face view forms an isosceles triangle combining the two eyes and the mouth. This isosceles triangle is quite useful for face recognition. From careful observation we conclude that this face triangle is unique for every person and can be put to better use for face recognition across age. In this paper a novel and effective method for human face recognition across age, together with age range estimation, is proposed.
For better performance, the face images of different subjects are grouped into four age ranges: the 1st from age 1 to 10 (childhood), the 2nd from age 11 to 20 (teenage), the 3rd from age 21 to 30 (young) and the 4th from age 31 to 40 (adult). Only the 1-40 range is taken because most subjects in the FG-NET database [19] have face images within this range. Where there is more than one image in a particular range, the best front-view face image is chosen. A set of face triangle area ratios for the different age ranges is proposed, which can be used effectively to estimate the age range. The method is designed in two main parts, as shown in Figure 1. The first part locates the feature points and generates the isosceles face triangle. The second part performs the task of face recognition or age range estimation, in three steps: first, calculate the areas of the face triangles of the known image and the unknown image; second, compute the ratio of the two face triangle areas; and third, perform face recognition or age range estimation for the unknown image.
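The four age ranges above can be expressed as a small lookup helper; this sketch and its function name are ours, not part of the paper's method, and simply make the grouping explicit:

```python
def age_group(age):
    """Map an age in 1-40 to the paper's four age-range labels."""
    if 1 <= age <= 10:
        return "A1 (childhood)"
    if 11 <= age <= 20:
        return "A2 (teenage)"
    if 21 <= age <= 30:
        return "A3 (young)"
    if 31 <= age <= 40:
        return "A4 (adult)"
    # The method only considers ages 1-40 (the FG-NET coverage used here).
    raise ValueError("age outside the 1-40 range used by the method")

print(age_group(7))   # -> A1 (childhood)
print(age_group(35))  # -> A4 (adult)
```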
2. Present Method
The work presented here is based on facial components: the eyes, eyebrows and mouth. In our earlier work [20] we described a novel technique to detect eyebrows and eyes. Using the same technique, those feature points are extracted from the image in such a way that hair, background and other variations in the face do not disturb the process. Some face images in the FG-NET database [19] are side-view images; in those cases, frontal images are constructed from the side-view images using face mosaicing, as described in our earlier work [20]. The present method is described by the flow chart in Figure 1.
Figure 1: Flow Chart of the Method
2.1. 1st Part (Feature Point Detection and Face Triangle Formation)

Step 1. Cropping and resizing of the original image: The original image is cropped (manually) to retain only the face region. In the suggested method, the image size is standardized to 200 pixels in height and 150 pixels in width.

Step 2. Facial feature extraction: If the image is in color, it is first converted to a grayscale image. The eyebrows and eyes are then detected using the eyebrow detection and eye detection described in our earlier work [20]. The mouth is always below the eye region and centered between the midpoints of the two eyes, so the mouth is detected there. These feature components are then extracted and stored, at the same locations as in the original image, in a new image as shown in Figure 2.
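The paper does not specify which grayscale conversion is used in Step 2; a minimal sketch, assuming the standard luminosity weights, is:

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale.

    The paper does not state the conversion formula; the common
    luminosity weights 0.299/0.587/0.114 are assumed here.
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

tiny = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
print(to_grayscale(tiny))  # -> [[76, 150], [29, 255]]
```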
Figure 2. Facial Features Extraction.

Step 3. Binary image conversion and detection of the midpoints of both eyes and lips: The grayscale feature-extracted image is converted to a binary image. Line dilation is performed on the image to eliminate the effect of stray white pixels inside a region of continuous black pixels and vice versa. The pixel values are then stored in a 2-D array, with a black pixel stored as 1 and a white pixel as 0. The array is divided down the middle into a left part and a right part. Suppose the width of the array is n; the left part is traversed from n/2 to 0 and the right part from (n/2 + 1) to n. For the left part, all rows are scanned one after another and the row with the maximum number of 1s is chosen; the position of the maximum run of continuous black pixels denotes the first feature point from the top. The scan for that feature point is stopped once a complete row of white pixels is found. Once the row containing the maximum black pixels is found, its midpoint is located and its coordinates are noted: the x coordinate is the column of the midpoint within the left part, and the y coordinate is the row. These steps are repeated for the remaining two feature points, and the same procedure is repeated for the right part of the array.

Step 4. Face triangle formation: We get six coordinate points, as shown in Figure 3: two for the left eyebrow, two for the right eyebrow and two for the mouth.
Left eyebrow: (elx1, ely1), (elx2, ely2)
Right eyebrow: (erx1, ery1), (erx2, ery2)
Mouth: (mx1, my1), (mx2, my2)

Figure 3. Coordinate points of the three face features.

The midpoint formula is then used to find the midpoint of the two left-eyebrow points, of the two right-eyebrow points, and of the two coordinates on the lips, as shown in Figure 4:
Midpoint of left eyebrow: A(x1, y1) = ((elx1 + elx2)/2, (ely1 + ely2)/2)
Midpoint of right eyebrow: B(x2, y2) = ((erx1 + erx2)/2, (ery1 + ery2)/2)
Midpoint of mouth: C(x3, y3) = ((mx1 + mx2)/2, (my1 + my2)/2)
Figure 4. Coordinate points of the mid points of the three face features.
Using the 3 points A, B, and C an isosceles triangle is drawn as shown in Figure 5.
Figure 5. Isosceles Triangle using A, B, and C coordinate points.
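The midpoint step can be sketched directly; the six input points below are illustrative values, not measurements from the paper:

```python
def midpoint(p, q):
    """Midpoint of two 2-D points, per the midpoint formula in Step 4."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# Illustrative endpoint pairs for the left eyebrow, right eyebrow and mouth.
left_eyebrow  = ((40, 60), (70, 58))
right_eyebrow = ((90, 58), (120, 60))
mouth         = ((65, 150), (95, 150))

A = midpoint(*left_eyebrow)   # -> (55.0, 59.0)
B = midpoint(*right_eyebrow)  # -> (105.0, 59.0)
C = midpoint(*mouth)          # -> (80.0, 150.0)
print(A, B, C)
```

The three resulting points A, B and C are the vertices of the face triangle.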
2.2. 2nd Part (Face Recognition or Age Range Estimation)

Step 1. Area calculation: The area of the above triangle is calculated according to the following formulas:
a = |A(x1, y1) − C(x3, y3)|
b = |B(x2, y2) − A(x1, y1)|
c = |C(x3, y3) − B(x2, y2)|
S = (a + b + c)/2
Here a, b and c are the lengths of the three sides of the triangle (each |·| denoting the Euclidean distance between the two points) and S is the semi-perimeter of the triangle. So, by Heron's formula, the area of the triangle is √(S(S − a)(S − b)(S − c)).
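The side lengths and Heron's formula from Step 1 can be written as follows; the vertex values are illustrative:

```python
import math

def triangle_area(A, B, C):
    """Area of the face triangle via Heron's formula, as in Step 1."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    a = dist(A, C)       # side a = |A - C|
    b = dist(B, A)       # side b = |B - A|
    c = dist(C, B)       # side c = |C - B|
    s = (a + b + c) / 2  # semi-perimeter S
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Illustrative isosceles triangle: base AB of length 50, apex C below it.
print(round(triangle_area((55, 59), (105, 59), (80, 150)), 1))  # -> 2275.0
```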
Resultant areas for a few samples from the FG-NET database [19] are shown in Figure 6.

Step 2. Using the same formula, the area of the recognized face triangle is calculated for both the known and the unknown image. Let the area of the face triangle of the known image be F1 and that of the unknown image be F2. The ratio R between these two areas is then calculated.

Step 3. If R is within the proposed range, then the two face images are of the same person; otherwise they are not. If it is already known that the face images are of the same person and what the age range of the known face image is, then
the age range of the unknown face can be estimated, because each proposed ratio range corresponds to a specific pair of age groups, so the ratio of a pair of age-separated images identifies the age range of the unknown image relative to the known one.

2.3. Age Range Based Face Area Ratio Calculation: The proposed age-range-based face area ratio is calculated from images of the same person in different age groups. After the face triangle areas are calculated, the area ratio between each pair of age groups is computed, taken as a fraction up to two decimal places. Ages are grouped into four ranges:
A1: Age 1 to Age 10
A2: Age 11 to Age 20
A3: Age 21 to Age 30
A4: Age 31 to Age 40
Since there are 4 images of the same person, a total of 4C2 = 6 ratios are found. These ratios are grouped according to the age-range pairs, and the region of maximum frequency in which the ratios lie in each group is found. This is called the valid threshold range for the group and is stored in the database. This experiment was done on 25 different subjects of the FG-NET database [19], each having images in the 4 different age ranges. Face triangle area values and their age-range ratios for 9 subjects are given in Table 1.

Table 1. Face Triangle Areas and their Ratios.
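The 4C2 = 6 pairwise ratios described above can be generated as follows; the four area values are illustrative, not taken from Table 1:

```python
from itertools import combinations

def age_range_ratios(areas):
    """Given face-triangle areas keyed by age group, return all pairwise
    area ratios (later group / earlier group), rounded to two decimal
    places as in Section 2.3."""
    ratios = {}
    for (g1, a1), (g2, a2) in combinations(sorted(areas.items()), 2):
        ratios[f"{g2}/{g1}"] = round(a2 / a1, 2)
    return ratios

# Illustrative areas for one subject in the four age groups.
areas = {"A1": 2000.0, "A2": 2300.0, "A3": 2100.0, "A4": 2160.0}
print(age_range_ratios(areas))
# -> {'A2/A1': 1.15, 'A3/A1': 1.05, 'A4/A1': 1.08,
#     'A3/A2': 0.91, 'A4/A2': 0.94, 'A4/A3': 1.03}
```

With four age groups this always yields six ratios, one per pair of groups.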
The proposed age range ratios are given in Table 2.

Table 2. Age Range based Face Area Ratio's threshold values.

The graph below shows the ratios plotted against the age ranges.

Graph 1.
Figure 6. Area values of samples from FG-NET database.
3. Experimental Results
In order to test the validity of the algorithm designed here, several experiments were conducted. One such experiment is shown in Figures 7 through 11. In Figures 7 and 8 both sample images are of the same person at different ages; the area ratio falls within the valid threshold range for A2/A1 (1.05 to 1.20). In Figure 9 the original image of S1 is a side view, so it is first converted into a front view using our earlier work's algorithm [20]; the area ratio is then again within the valid threshold range for A3/A2 (0.79 to 0.97). But in Figures 10 and 11 the samples are of different persons, and our method gives face area ratios of 1.42 and 1.01 respectively, which lie outside the valid threshold range for A2/A1 (1.05 to 1.20). So face recognition is verified.
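The decision rule used in these experiments can be expressed compactly. The two threshold ranges are the ones quoted above; whether the bounds are inclusive is not stated in the paper, so inclusive bounds are assumed, and the function name is ours:

```python
# Valid threshold ranges quoted in the experiments (inclusive bounds assumed).
VALID_RANGES = {
    "A2/A1": (1.05, 1.20),
    "A3/A2": (0.79, 0.97),
}

def same_person(ratio, age_pair):
    """Return True if the face-triangle area ratio falls within the valid
    threshold range for the given age-range pair."""
    lo, hi = VALID_RANGES[age_pair]
    return lo <= ratio <= hi

print(same_person(1.12, "A2/A1"))  # -> True: within (1.05, 1.20)
print(same_person(1.42, "A2/A1"))  # -> False: outside, as in Figure 10
print(same_person(1.01, "A2/A1"))  # -> False: outside, as in Figure 11
```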
Figure 7. Experimental Result-1.
Figure 11. Experimental Result-5.
Figure 8. Experimental Result-2.

4. Conclusion
In this work a method for face recognition under age progression has been thoroughly described. As the face changes with age, it is very difficult to periodically update the databases on which face recognition depends, so the proposed technique provides a robust method that verifies the identity of individuals from a pair of age-separated face images. The method shows some difficulty in detecting the facial components if the face image is not frontal; side-view images with up to 15° of rotational pose can be recognized by constructing the frontal face. So there seems to be a definite possibility for further extension of the work: extracting more feature points, such as the base angle of the isosceles face triangle, or forming a second isosceles triangle between the two eyes and the nose tip, could improve the accuracy of the match. It is also seen that in the age groups 1-10 and 11-20 the structural changes in the face are largest, whereas in the age group 31-40 the textural changes are largest. So the age groups could be split into smaller ranges, such as age 1 to age 5 and age 6 to age 10, and texture information such as wrinkle formation could be included. Also, the age range 41-60 could be included as old age.
Figure 9. Experimental Result-3.

Acknowledgment

The authors are thankful to the major project entitled "Design and Development of Facial Thermogram Technology for Biometric Security System," funded by the University Grants Commission (UGC), India, and to the "DST-PURSE Programme" at the Department of Computer Science and Engineering, Jadavpur University, India, for providing the necessary infrastructure to conduct the experiments relating to this work.
Figure 10. Experimental Result-4.

References
[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[2] M. A. Turk and A. P. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[3] J. Zhao, Y. Su, D. Wang, and S. Luo, "Illumination ratio image: Synthesizing and recognition with varying illuminations," Pattern Recognition Letters, vol. 24, no. 15, pp. 2703-2710, 2003.
[4] X. Xie and K. M. Lam, "An efficient method for face recognition under varying illumination," in Proc. International Symposium on Circuits and Systems, Kobe, Japan, 2005.
[5] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063-1074, September 2003.
[6] A. M. Bronstein, M. M. Bronstein, and R. Kimmel, "Three-dimensional face recognition," International Journal of Computer Vision, vol. 64, no. 1, pp. 5-30, August 2005.
[7] B. D. Zarit, B. J. Super, and F. K. H. Quek, "Comparison of five color models in skin pixel classification," in Int. Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, Corfu, Greece, pp. 58-63, Sept. 1999.
[8] R. L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, "Face detection in color images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696-706, May 2002.
[9] S. Gibson, M. C. Smith, C. Gupta, A. Kruzner, J. Merwin, D. Rollend, and A. Nallamuthu, "Robust Facial Recognition with Reconfigurable Platforms," Clemson University, College of Engineering and Science, 2008.
[10] D. W. Thompson, On Growth and Form. Dover Publications, 1992 (original publication 1917).
[11] Y. H. Kwon and N. da Vitoria Lobo, "Age Classification from Facial Images," Computer Vision and Image Understanding, vol. 74, no. 1, pp. 1-21, 1999.
[12] A. Lanitis and C. J. Taylor, "Towards Automatic Face Identification Robust to Ageing Variation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 442-455, 2002.
[13] A. Lanitis, C. Draganova, and C. Christodoulou, "Comparing different classifiers for automatic age estimation," IEEE Trans. Syst., Man, Cybern. B, vol. 34, no. 1, pp. 621-628, Feb. 2004.
[14] N. Ramanathan and R. Chellappa, "Face verification across age progression," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, vol. 2, pp. 462-469, 2005.
[15] N. Ramanathan and R. Chellappa, "Modeling Age Progression in young faces," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 387-394, 2006.
[16] X. Geng, Z. H. Zhou, and K. Smith-Miles, "Automatic age estimation based on facial aging patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, pp. 2234-2240, 2007.
[17] Anil K. Jain, "Age Invariant Face Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
[18] Chiunhsiun Lin and Kuo-Chin Fan, "Triangle-based approach to the detection of human face," Pattern Recognition, vol. 34, pp. 1271-1284, 2001.
[19] FG-NET Aging Database, http://www.fgnet.rsunit.com
[20] Hiranmoy Roy, Debotosh Bhattacharjee, Mita Nasipuri, Dipak Kumar Basu, and Mahantapas Kundu, "Construction of Frontal Face from Side-view Images using Face Mosaicing," International Journal of Recent Trends in Engineering, vol. 2, no. 2, pp. 55-59, November 2009.