2013 13th Iranian Conference on Fuzzy Systems (IFSC)

Gender Classification using GA-based Adjusted Order PZM and Fuzzy Similarity Measure

Elham Khoshkerdar
Department of Electrical, Computer and IT Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
[email protected]

Hamidreza Rashidy Kanan
Electrical Engineering Department, Bu-Ali Sina University, Hamedan, Iran
[email protected]

Abstract—An important problem in gender classification systems is dealing with facial expression variations, changes in lighting direction, and the presence of noise. In this paper, a new patch-based method is proposed for gender classification under the above conditions when only one sample of each person is available. A genetic-algorithm-based adjusted order pseudo-Zernike Moment (PZM) is used to extract the features of each face area. In the proposed method, a weighting scheme is utilized to determine the importance of each local area. Finally, the similarity between the input image and the gallery images is calculated by a fuzzy similarity measure. The experimental results show the high recognition rate of our method on the AR and FERET face databases compared to recent approaches.

Keywords—gender classification; pseudo-Zernike moment (PZM); fuzzy similarity measure; genetic algorithm; entropy.

I. INTRODUCTION

Gender classification plays an important role in social interaction and is used in applications such as Human-Computer Interaction (HCI), the security industry, demographic studies and market purchase statistics. Early research on gender classification was carried out in the early 1990s. Golomb et al. [1] designed a two-layer network called SEXNET to classify gender from face images. Shobeirinejad and Gao [2] used Interlaced Derivative Patterns (IDP) to extract distinctive features; in their method, IDP speeds up the process by keeping only the most important data. Sun et al. [3] argued that feature selection is an important issue for gender classification. They created feature vectors from face images using principal component analysis (PCA) and applied a genetic algorithm to select a subset of features from the vectors. Four different classifiers were compared in this study: Bayesian decision making, a neural network (NN), support vector machines (SVM) and a classifier based on linear discriminant analysis (LDA); the SVM achieved the best performance in the comparative experiments. Recently, a fuzzy support vector machine (FSVM) approach has also been developed to improve the generalization ability for gender classification [4]. Most gender classification models have relied on global information (the whole face image): instead of using several specific points as features, the whole face image is used for recognition.

978-1-4799-1228-5/13/$31.00 ©2013 IEEE

In these methods, the extracted features can be influenced by changes such as facial expressions. In contrast, local approaches divide the whole face image into several areas and extract the features of each area separately. These methods have become popular due to their robustness under environmental changes and their independence from the location of facial components. Li et al. [5] introduced a gender classification framework which utilizes not only facial features but also external information, i.e. hair and clothing. Mozaffari et al. [6] presented a gender classification algorithm that combines geometric and appearance-based methods to extract features of patches from one exemplar image per person. Another family of approaches is based on geometrical features; geometrical approaches need to localize different facial components such as the eyes, eyebrows and nose. Saatci and Town [7] used an Active Appearance Model (AAM) for feature extraction, and the extracted features were then used to train SVM classifiers arranged into a cascade structure; this cascaded approach improves gender classification accuracy under facial expression changes, and vice versa. In this paper, the purpose is to improve the accuracy of gender classification by partitioning the image and using a weight factor for each area that reflects the importance of that area's features in distinguishing men from women. Owing to the successful application of PZM in face recognition [8], and to its robustness to noise and rotation [9], we use PZM to extract the features of each face region. A genetic algorithm is used to determine the order of PZM so that sufficient features are extracted from each region. A weighting scheme based on a function of entropy and mutual information is employed to weigh each region, and finally the weighted feature vectors are compared via a fuzzy similarity measure. To evaluate the proposed method, the FERET [10] and AR [11] databases are used.
Experimental results show that the facial regions combination gives better accuracy than the whole face. To demonstrate the robustness of our algorithm, we carried out experiments on the AR database, under illumination changes, facial expression variations and artificial Gaussian noise. These experiments indicate that our method is robust against facial expressions, illumination changes and noise.

The rest of the paper is organized as follows: Section II presents the details of the proposed approach for gender classification, Section III contains the experimental results, and the final section concludes the paper.

II. THE PROPOSED GENDER CLASSIFICATION APPROACH

In this section, our new approach for gender classification based on frontal images is presented. Fig. 1 shows the block diagram of the proposed approach; its different parts and the details of each part are explained below.

A. Image partitioning
In the proposed approach, the input image is first divided into 10 non-overlapping areas. Dividing the face image into areas leads to the extraction of features that generally distinguish men from women. According to Fig. 1, the partitioning is done so that only one facial component is placed in each area.

B. Feature extraction
In the feature extraction step, pseudo-Zernike moments are extracted from each area of the image. The kernel of the pseudo-Zernike moments is a set of orthogonal pseudo-Zernike polynomials defined within the unit circle. The two-dimensional complex pseudo-Zernike moments of order n with repetition m of a continuous gray-level image f(x, y) are defined as [8]:

PZM_{n,m}(f(x, y)) = ((n + 1)/π) ∬_{x²+y²≤1} V*_{n,m}(x, y) f(x, y) dx dy    (1)

where n = 1, 2, 3, …, ∞ and m takes all positive and negative integer values satisfying |m| ≤ n. The symbol * denotes the complex conjugate. The pseudo-Zernike polynomials V_{n,m}(x, y) are defined as [8]:

V_{n,m}(x, y) = R_{n,m}(r) e^{jmθ}    (2)

where r = √(x² + y²) is the length of the vector extending from the coordinate origin to the pixel location, and θ = tan⁻¹(y/x) is the angle between this vector and the x axis. The radial polynomials R_{n,m}(r) are defined as [8]:

R_{n,m}(r) = Σ_{s=0}^{n−|m|} (−1)^s · [(2n + 1 − s)! / (s! (n − |m| − s)! (n + |m| + 1 − s)!)] · r^{n−s}    (3)

It should be noted that R_{n,−m}(r) = R_{n,m}(r). The features extracted from the areas are concatenated into a single feature array.

C. General images
By averaging the pixel values of the gallery face images of men and of women, the general face images of men and women are obtained, respectively. Fig. 2 shows the general face images.

D. Weight calculation
Using the local entropy and mutual information of pixels, a weight is assigned to each area of the image. The local entropy of a gray-scale area reveals the amount of information it contains. The entropy H of a discrete-valued random variable X taking values (x₁, x₂, …, x_s) is defined as [12]:

H(X) = −Σ_{i=1}^{s} p(x_i) log p(x_i)    (4)

Mutual information can be stated as [13]:

I(X; Y) = H(X) + H(Y) − H(X, Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)    (5)

where H(X) and H(Y) are the marginal entropies, H(X|Y) and H(Y|X) are the conditional entropies, and H(X, Y) is the joint entropy of X and Y. Using the Jensen inequality, it can be shown from this definition that I(X; Y) is non-negative and, as a result, H(X) ≥ H(X|Y).

Conditional entropy: if X and Y are discrete random variables with joint probability distribution f(x, y) and conditional distribution f(y|x), the conditional entropy of Y given X can be calculated as [13]:

H(Y|X) = −Σ_{x∈X} Σ_{y∈Y} f(x, y) log f(y|x)    (6)
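As an illustrative sketch of this weighting scheme (Eqs. (4)–(7) and the entropy-squared-times-mutual-information product), assuming 8-bit gray levels and histogram-based probability estimates; the function names are ours, not the authors':

```python
import numpy as np

def entropy(area, bins=256):
    """Shannon entropy of a gray-scale area, Eq. (4), from its histogram."""
    p, _ = np.histogram(area, bins=bins, range=(0, 256))
    p = p / p.sum()
    p = p[p > 0]                       # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """I(A;B) = H(A) + H(B) - H(A,B), per Eqs. (5) and (7)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pj = joint / joint.sum()
    pj = pj[pj > 0]
    h_joint = -np.sum(pj * np.log2(pj))
    return entropy(a, bins) + entropy(b, bins) - h_joint

def area_weight(input_area, general_area):
    """Weight of one local area: squared local entropy times the mutual
    information with the corresponding area of the general face."""
    return entropy(input_area) ** 2 * mutual_information(input_area,
                                                         general_area)
```

For identical areas the mutual information reduces to the entropy itself, which gives a quick sanity check on the estimates.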

Fig. 1. The Proposed System Architecture
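The partitioning step of Section II-A, as depicted in Fig. 1, can be sketched as follows. The actual layout places one facial component per area; since that layout is only given graphically, a plain 5×2 grid stands in for it here (a simplifying assumption of ours):

```python
import numpy as np

def partition_face(img, rows=5, cols=2):
    """Split a normalized face image (e.g. 160x160) into rows*cols
    non-overlapping areas; 5x2 yields the 10 areas used in the paper."""
    h, w = img.shape
    return [img[i * h // rows:(i + 1) * h // rows,
                j * w // cols:(j + 1) * w // cols]
            for i in range(rows) for j in range(cols)]
```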

Fig. 2. General images. (a): woman, (b): man

Joint entropy: the joint entropy measures the information contained in two (or more) random variables. For random variables X and Y it is defined as [13]:

H(X, Y) = −Σ_{x∈X} Σ_{y∈Y} f(x, y) log f(x, y)    (7)

By comparing every local area of the input face image with the corresponding local area of the general face, the mutual information can be obtained. The product of the squared entropy and the mutual information is then assigned as the specific weight of each area of the face image.

Once the face features are extracted and weighted, (8) is used to obtain the similarity between the input image and the gallery face images [15]:

Fsim(X, Y) = [Σ_{i=1}^{k} (x_i ∧ y_i)] / [Σ_{i=1}^{k} (x_i ∨ y_i)]    (8)

E. Determining the maximum order of pseudo-Zernike moments in each area of the face image
Extracting features from different face areas with the same maximum order parameter is not always effective for gender classification, because face areas have different importance under different face conditions, and their importance changes as those conditions change. Under some variations of face appearance, extracting more features from the eye area, which is usually the most important area compared to the others, does not lead to correct gender classification; it may instead be necessary to extract more features from the lip or cheek areas (since some areas look alike in some women and men and vice versa). Hence, a sufficient and effective number of features must be extracted from every face area, which eventually leads to an improvement in gender classification.
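As a numerical sketch of the feature extraction of Eqs. (1)–(3) with a per-area maximum order (the parameter the genetic algorithm adjusts), assuming the patch is mapped onto the unit disk and pixels outside it are discarded; this is our illustration, not the authors' code:

```python
import numpy as np
from math import factorial

def radial_poly(n, m, r):
    """Pseudo-Zernike radial polynomial R_{n,m}(r), per Eq. (3)."""
    m = abs(m)
    out = np.zeros_like(r, dtype=float)
    for s in range(n - m + 1):
        c = ((-1) ** s) * factorial(2 * n + 1 - s) / (
            factorial(s) * factorial(n - m - s) * factorial(n + m + 1 - s))
        out += c * r ** (n - s)
    return out

def pzm(patch, n, m):
    """Discrete approximation of PZM_{n,m}(f), Eq. (1), with the patch
    mapped onto the unit disk."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (2.0 * xs - (w - 1)) / (w - 1)   # map columns to [-1, 1]
    y = (2.0 * ys - (h - 1)) / (h - 1)   # map rows to [-1, 1]
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    inside = r <= 1.0
    v_conj = radial_poly(n, m, r) * np.exp(-1j * m * theta)  # V*_{n,m}
    dA = (2.0 / (w - 1)) * (2.0 / (h - 1))  # pixel area in unit coordinates
    return (n + 1) / np.pi * np.sum(patch[inside] * v_conj[inside]) * dA

def extract_features(patch, n_max):
    """Magnitudes |PZM_{n,m}| for all orders up to the area's maximum
    order n_max; |PZM_{n,-m}| = |PZM_{n,m}|, so only m >= 0 is kept."""
    return np.array([abs(pzm(patch, n, m))
                     for n in range(1, n_max + 1)
                     for m in range(0, n + 1)])
```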

where X and Y are the feature vectors and k is their length. Also, ∧ and ∨ are the fuzzy intersection and union operators, respectively. Depending on the definitions chosen for the union and intersection operators, (8) can be calculated in various ways; several definitions of the ∧ and ∨ operators are given in Table I. Among the operators of Table I, the minimum and maximum operators proved the most effective in this method, so the reported results are based on these operators.
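With the minimum and maximum operators of Table I, Eq. (8) reduces to the following sketch (assuming non-negative feature values, as is the case for weighted PZM magnitudes):

```python
import numpy as np

def fuzzy_similarity(x, y):
    """Fuzzy similarity of Eq. (8) with min as the intersection and
    max as the union operator."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(np.minimum(x, y)) / np.sum(np.maximum(x, y)))
```

Identical vectors give a similarity of 1; vectors with disjoint supports give 0.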

III. EXPERIMENTAL RESULTS

Two face databases, AR and FERET, were used to evaluate the proposed approach. The AR face database contains frontal-view face images of 126 people (70 men and 56 women). Each person has 26 images captured in two different sessions separated by a two-week interval. The database contains images with occlusion (scarf or sunglasses), facial expression variations (neutral, smile, scream and anger) and illumination changes. All images were normalized in terms of size and rotation, and the normalized images were cropped to 160 × 160 pixels. In the experiments, identification based on one sample per person (the single-model identification strategy) is used, a condition under which training-based algorithms suffer. Since our approach does not use a training process, and single-image-based gender classification approaches are scarce, we compare our results with several training-based methods in the experiments. The experimental results reveal that the proposed approach improves on these methods.

In this paper, a genetic algorithm [14] is used to achieve this goal: pseudo-Zernike moments extract features using a suitable order parameter obtained for each area by the genetic algorithm. After dividing the image into 10 areas, pseudo-Zernike moments are applied to each area of the face image to extract its features, and the maximum order parameter of the moments is adjusted by the genetic algorithm. The initial population, containing candidate moment orders, is produced randomly. The crossover operator is two-point, and the mutation operator randomly selects one bit and flips its value. The fitness function of the algorithm is the recognition rate, and the algorithm terminates when the highest classification rate is obtained.
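The order-tuning GA described above can be sketched as follows. The population size, bit width and rates are our illustrative choices; the fitness function, which in the paper is the recognition rate of the full pipeline, is passed in by the caller:

```python
import random

def run_ga(fitness, n_areas=10, max_order=15, pop_size=20,
           generations=50, p_mut=0.05, bits=4):
    """GA sketch for tuning the per-area maximum PZM order.
    Chromosomes are bit strings (4 bits per area), crossover is
    two-point, and mutation flips one randomly chosen bit."""
    n_bits = n_areas * bits

    def decode(chrom):
        # each group of `bits` bits encodes one area's maximum order
        return [min(int(''.join(map(str, chrom[i*bits:(i+1)*bits])), 2),
                    max_order) for i in range(n_areas)]

    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(decode(c)), reverse=True)
        next_pop = scored[:2]                        # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(scored[:pop_size // 2], 2)
            i, j = sorted(random.sample(range(n_bits), 2))
            child = p1[:i] + p2[i:j] + p1[j:]        # two-point crossover
            if random.random() < p_mut:              # flip one random bit
                k = random.randrange(n_bits)
                child[k] ^= 1
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=lambda c: fitness(decode(c)))
    return decode(best)
```

In the paper the loop stops once the highest classification rate is reached; here a fixed generation count stands in for that criterion.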

In the first experiment, the neutral images from the first session of the AR database were used as gallery images and those of the second session as probe images. To compare with other methods, we also used the FERET database, which consists of a total of 14052 gray-scale images representing 1199 individuals, with variations in illumination, expression, pose and so on. From FERET, the fa images were used as the gallery and the fb images as the probe; in this paper, only frontal faces are considered. Table II reports the obtained recognition rates in comparison with several recent methods. As Table II shows, the proposed method achieves a more accurate classification rate than the other algorithms on both the FERET and AR databases. These results also indicate that the proposed feature extraction technique captures discriminative gender information.

F. Classification
Although men's and women's faces differ in many features, they also share common features, and these common features can negatively affect the final decision. The classifier should therefore be designed to reach a response as close as possible to the correct one. Thus, after the face features are found and weighted, the similarity between the input image and the gallery images is computed with the fuzzy similarity measure of (8).
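Under the single-sample-per-person setting, the final decision can be sketched as a nearest-gallery rule: the probe takes the gender of the most similar gallery image under the fuzzy measure of (8). The helper names here are hypothetical:

```python
def classify_gender(probe_vec, gallery, similarity):
    """Return the gender label of the gallery entry most similar to the
    probe. `gallery` is a list of (feature_vector, label) pairs and
    `similarity` is a measure such as Eq. (8)."""
    best_label, best_sim = None, float("-inf")
    for vec, label in gallery:
        s = similarity(probe_vec, vec)
        if s > best_sim:
            best_sim, best_label = s, label
    return best_label
```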

In the second experiment, the neutral images from the first session of the AR database were used as gallery images, and the face images with varying illumination conditions and facial expressions were used as probes. Fig. 3 shows the results of the experiments on the whole and partitioned images under the mentioned conditions.

The third experiment was carried out by applying Gaussian noise (with zero mean and 0.01 variance) to the neutral images of the second session of the AR database and to the fb images of the FERET database, which served as probes; the gallery images remained unchanged. In this experiment the proposed approach achieves a 100% recognition rate on the AR database and 98.99% on the FERET database. The results of these experiments (shown in Fig. 3 and Table II) demonstrate that the proposed approach outperforms the global method, particularly under illumination changes and the scream expression.
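The probe degradation of the third experiment can be reproduced with a sketch like the following, assuming images are scaled to [0, 1] before the zero-mean, 0.01-variance noise is added (the clipping step is our assumption):

```python
import numpy as np

def add_gaussian_noise(img, var=0.01, seed=0):
    """Add zero-mean Gaussian noise with the given variance to an 8-bit
    image, working in the [0, 1] range and clipping the result."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) / 255.0 + rng.normal(0.0, np.sqrt(var),
                                                   img.shape)
    return np.clip(noisy, 0.0, 1.0)
```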

TABLE II. RECOGNITION RATE OF DIFFERENT METHODS

Recognition Rate (%)
Algorithm               Database   Global face   Local Regions
LBP+SVM+Fuzzy [5]       FERET      --            95.8 ± 0.4
Gabor+Fuzzy SVM [4]     FERET      98.09         --
2DPCA+SVM [16]          FERET      90.44         94.83
LBP+DCT+GDF [6]         AR         --            96.0
The proposed method     AR         95            100
The proposed method     FERET      91.26         98.99

TABLE I. COMMON FUZZY OPERATORS

Intersection Operators                         Union Operators
Minimum: min{x, y}                             Maximum: max{x, y}
Algebraic product: x·y                         Algebraic sum: x + y − x·y
Einstein product: x·y / (2 − [x + y − x·y])    Einstein sum: (x + y) / (1 + x·y)
Hamacher product: x·y / (x + y − x·y)          Hamacher sum: (x + y − 2x·y) / (1 − x·y)

Fig. 3. Recognition rate obtained under facial expression changes and different illumination conditions.

IV. CONCLUSION

In this paper, a new algorithm for gender classification based on local areas was proposed for the case in which only one sample image is available per person. Local features of facial components are extracted instead of global features. Weighted pseudo-Zernike moments were used to extract features from the different facial areas, and the maximum order of these moments was obtained by a genetic algorithm. The effectiveness of the proposed approach was evaluated on the AR and FERET databases and compared with other local approaches. The experimental results show that the proposed technique improves the recognition rate and demonstrate the robustness of the algorithm under different conditions such as aging, facial expression changes, illumination variations and the presence of noise.

REFERENCES

[1] B. Golomb, D. Lawrence, and T. Sejnowski, "Sexnet: A Neural Network Identifies Sex from Human Faces," Advances in Neural Information Processing Systems, vol. 3, pp. 572-577, 1991.
[2] A. Shobeirinejad and Y. Gao, "Gender Classification Using Interlaced Derivative Patterns," in Proc. International Conference on Pattern Recognition, 2010, pp. 1509-1512.
[3] Z. Sun, G. Bebis, X. Yuan, and S. J. Louis, "Genetic feature subset selection for gender classification: A comparison study," in Proc. IEEE Workshop on Applications of Computer Vision (WACV'02), 2002, pp. 165-170.
[4] X. Leng and Y. Wang, "Improving Generalization for Gender Classification," in Proc. 15th IEEE International Conference on Image Processing (ICIP 2008), 2008, pp. 1656-1659.
[5] B. Li, X. C. Lian, and B. L. Lu, "Gender classification by combining clothing, hair and facial component classifiers," Neurocomputing, vol. 76, pp. 18-27, 2012.
[6] S. Mozaffari, H. Behravan, and R. Akbari, "Gender Classification using Single Frontal Image per Person: Combination of Appearance and Geometric based Features," in Proc. International Conference on Pattern Recognition, 2010, pp. 1192-1195.
[7] Y. Saatci and C. Town, "Cascaded Classification of Gender and Facial Expression Using Active Appearance Models," in Proc. 7th International Conference on Automatic Face and Gesture Recognition, 2006, pp. 393-398.
[8] H. Kanan, K. Faez, and Y. Gao, "Face recognition using adaptively weighted patch PZM array from a single exemplar image per person," Pattern Recognition, vol. 41, pp. 3799-3812, 2008.
[9] G. Amayeh, S. Kasaei, G. Bebis, A. Tavakkoli, and K. Veropoulos, "Improvement of Zernike moment descriptors on affine transformed shapes," in Proc. 9th International Symposium on Signal Processing and Its Applications (ISSPA 2007), 2007, pp. 1-4.
[10] P. J. Phillips, H. Moon, P. Rauss, and S. A. Rizvi, "The FERET Evaluation Methodology for Face-Recognition Algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 1090-1104, 2000.
[11] A. Martinez and R. Benavente, "The AR Face Database," CVC Technical Report #24, 1998.
[12] C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, vol. 27, pp. 379-423 and 623-656, 1948.
[13] R. M. Gray, Entropy and Information Theory. Springer-Verlag, 2009.
[14] T. M. Mitchell, "Genetic Algorithms," in Machine Learning. McGraw-Hill, 1997, ch. 9, pp. 249-270.
[15] L.-X. Wang, A Course in Fuzzy Systems and Control. Prentice Hall, 1997, ch. 4, pp. 34-46.
[16] L. Lu and P. Shi, "A novel fusion-based method for expression-invariant gender classification," in Proc. ICASSP, 2009, pp. 1065-1068.
