Optimizing Selection of PZMI Features based on MMAS Algorithm for Face Recognition of the Online Video Contextual Advertisement User-Oriented System

Le Nguyen Bao1, Dac-Nhuong Le2, Gia Nhu Nguyen1, and Do Nang Toan3

1 Duytan University, Da Nang, Vietnam
2 Haiphong University, Hai Phong, Vietnam
3 Information Technology Institute, VNU University, Hanoi, Vietnam

[email protected], [email protected], [email protected], [email protected]
Abstract. Online advertising has grown into an interactive multimedia model delivered over the Internet. The Online Video Advertisement User-oriented (OVAU) system combines a machine-learning model for face recognition from a camera, multimedia streaming protocols, and video meta-data storage technology. Face recognition is an important phase that can improve the performance of the OVAU system. We solve Feature Selection (FS) for face recognition with the MMAS-FS algorithm using PZMI features. The heuristic information extracted from the selected feature vector serves as the ants' pheromone. The optimal feature subset is the one with the shortest feature length and the best classifier performance. Experiments on face recognition show that our algorithm can be applied easily without a priori information about the features, and that its performance is better than previous approaches to feature selection.

Keywords: Contextual Advertising, Face Recognition (FR), Video-based Face Recognition (VbFR), User-Oriented System, Feature Selection (FS), PZMI feature, Max-Min Ant System (MMAS)
1 Introduction
Online advertising has grown into an interactive multimedia model with high interoperability over the Internet, through services such as Google AdWords and Google AdSense. The OVAU system generates advertising content that is relevant, truthful, and useful to customers in each explicit context, based on the objects detected and recognized from the camera. The three phases of the OVAUS system are presented in Fig. 1 [24, 25]. In the first phase, objects are identified and classified directly from the camera feed to obtain their features and characteristics. In the second phase, videos are retrieved for the classified objects using video meta-data storage technology. Finally, suitable advertising videos are delivered to customers over multimedia streaming protocols (RTP, the Real-time Transport Protocol, together with RTCP, the Real-Time Control Protocol). The media stream is sent as chunks of data packed into RTP/RTCP packets [25].
Fig. 1. The OVAUS system proposed
Our aim is to detect the faces of objects extracted from the video camera after unwanted elements are removed. The face-recognition process has two steps: (i) face detection and (ii) automatic object identification. The keys to face recognition are methods for extracting distinguishing features from objects in images and a standard procedure for automatic identification.
Fig. 2. Object recognition model from the camera in phase 1
This paper proposes a novel MMAS algorithm that solves the face recognition (FR) problem through feature selection (FS). The ants' pheromone represents the heuristic information of the extracted feature vector. The optimal feature subset is the one with the shortest feature length and the best classifier performance. The article is structured as follows: Section 2 introduces related work on FS approaches for FR. Section 3 presents a feature-selection framework for video-based face recognition, and Section 4 describes the implementation of our MMAS-FS algorithm. Section 5 analyzes the experiments and evaluates face-recognition performance. Section 6 presents our conclusions and directions for future work.
2 Related Works
The input to the OVAUS system is real video from cameras, so video-based face recognition with feature selection on such video is a complex problem. Many approaches to the VbFR problem are surveyed in [52]. Tracking algorithms follow faces through the video; however, many still-image approaches cannot be applied directly to video sequences [46]. The challenge for researchers is therefore that human faces can appear in video scenes at unlimited orientations and positions [13]. Our motivation is to exploit temporal information, a basic characteristic available in video, to improve face recognition in video. Local models or subspaces extract a variety of features from the complex face manifold in the video [49]. Two key questions for video-based face detection are: how to extract features, and how to apply a learning algorithm. Table 1 summarizes approaches and methods for face recognition.

Table 1. The approaches and methods for face recognition

Approaches and methods | Year | Reference
PCA | 1991 | [44]
LDA | 1991, 2001 | [28][23][14]
Bayesian Intra-personal/Extra-personal Classifier (BIC) | 1996 | [9]
Combined PCA and LDA (PCA+LDA) | 1997 | [30]
SVM | 2000 | [2]
Iterative Dynamic Programming (DP) | 2000 | [53]
Boosted Cascade of Simple Features (BOOST) | 2001 | [31]
Isomap extended, KDF-Isomap | 2002, 2005 | [29][36]
Kernel Principal Angles (KPA) | 2006 | [45]
Locality Preserving Projections (LPP) | 2006 | [17]
Statistical Local Feature Analysis (LFA) | 2006 | [10]
Discriminative Canonical Correlations (DCC) | 2003, 2007 | [7][22]
Locally Linear Embedding (LLE) | 2008 | [51]
Local Graph Matching (LGM) | 2007 | [8]
KNN model and Gaussian mixture model (GMM) | 2007 | [38]
Hidden Markov Model (HMM) | 2008 | [20]
3D model based approach | 2007, 2008 | [33][47]
Feature Subspace Determination | 2008 | [39]
Learning Neighborhood Discriminative Manifolds (LNDM) | 2011 | [10]
MFA | 2012 | [55]
NPE | 2012 | [19]
Multi-dimensional scaling (MDS) | 2012 | [40]
Data Uncertainty in Face Recognition | 2014 | [50]
Orthogonal Locality Preserving Projections (OLPP) | 2015 | [18]
Feature Selection (FS) [48, 15, 21] is a widespread research area spanning data mining, document classification, biometrics, object recognition, and computer vision. FS algorithms usually employ random or heuristic search strategies to reduce the search space and the computational complexity, which means the optimal feature subset found is also restricted. Depending on the evaluation procedure, FS algorithms fall into two categories: algorithms that run independently of any learning algorithm, and wrapper algorithms, in which a learning algorithm is used for evaluation. Five main methods are mentioned in these approaches: Forward Selection, Backward Selection, Forward/Backward, Random Choice, and Instance-based. Most methods begin with a heuristically chosen random feature subset; in the Forward/Backward methods, features are added or removed after each iteration [41]. The FS approaches and methods are summarized in Table 2.

Table 2. The approaches and methods for the FS problem

Algorithm: Approaches and methods | Year | References
GA - Genetic algorithm | 1989, 2008 | [48][12]
FOCUS - Learning with many irrelevant features | 1991 | [15]
RELIEF | 1992 | [21]
LVW - A probabilistic wrapper approach | 1996 | [16]
Neural Network | 1997 | [34]
KFD - Invariant feature extraction and classification in feature spaces | 2000 | [42]
FDR - The fractal dimension | 2000 | [4]
EBR - A rough set | 2001 | [35]
SCRAP - Instance-based filter | 2002 | [3]
FGM - Feature Grouping Methods | 2002 | [32]
SA - Simulated annealing | 2005 | [37]
ACO - Ant colony optimization:
+ Ant-Miner finds the rough set reducts | 2004 | [1]
+ ACOSVM - ACO with SVM | 2004 | [54]
+ Hybrid-ACO (HACO) | 2005 | [5]
+ ACO - Fisher Discrimination Rate (FDR) | 2005 | [11]
+ ACO - Combining rough and fuzzy sets | 2005 | [37]
+ ACS - Ant Colony System | 2008 | [12]
+ ASrank - Ant System Rank-Based | 2008 | [12]
+ ACOG - Ant colony optimization and Genetic Algorithm | 2010 | [41]
+ ACOFS | 2011 | [26]
Recently, popular approaches to feature selection have been based on meta-heuristic algorithms such as GA [48], SA [37], and ACO [1, 54, 5, 11, 12, 41]. A hybrid ACO method for speech classification is presented in [34]. The HACO of [5] uses mutual information to solve forecaster FS. ACO-FDR uses FDR heuristic information for FS in network intrusion detection [11]. An ACO that solves rough-set reduction is presented in [37]. The Ant-Miner algorithm employs an elaborate pheromone-updating strategy and state-transition rule [1]. ACO combined with SVM is called ACOSVM [54], and the ACS and ASrank variants are proposed in [12]. The combined ACOG and ACOFS algorithms are introduced in [41, 26]. In all of these, the objective function is computed each time a feature subset is created and compared with the best previous candidates; the characteristics of the subsets are then compared, and the best features are selected and replaced. The algorithm stops when the iteration limit is reached or when no feature is selected or replaced.

Most ACO algorithms proposed for the FS problem use a complete graph to represent the features. The ants find a path through the nodes (features) of the graph, and the nodes on each path correspond to a set of selected features, i.e. a solution. However, it is not necessary to use a complete graph to represent all solutions. When an ant is at a node, it chooses an edge to another node according to the heuristic information of that edge; but in the FS problem, the choice of the next feature is independent of the last feature added to the partial solution. Nevertheless, the ACO algorithms of [1, 54, 5, 11, 12, 41] build a complete graph G with O(n^2) edges. In the next section, we present our framework and an algorithm based on MMAS to solve the feature-selection problem.
3 Video-based face recognition using feature selection

3.1 The video-based face recognition with feature selection problem
Definition 1. The sequence of Nc face images in a video can be defined as

Xc = {xc,1, xc,2, ..., xc,Nc}    (1)
Definition 2. (The face recognition problem with feature selection)
- Given F = {f1, ..., fn} (n is the number of features), a feature set.
- Find an optimal feature subset S ⊆ F of minimal size m (m < n) that still represents the original features.

Definition 3. (The feature selection problem as combinatorial optimization)
- Given F = {f1, ..., fn}, a feature set of basic components.
- A solution s of the problem is a subset of components.
- The set of feasible solutions is S ⊆ 2^F; a solution s is feasible if s ∈ S.
- f : 2^F → R is the cost function.
- Find a minimum-cost feasible solution s*, i.e. s* ∈ S and f(s*) ≤ f(s), ∀s ∈ S.

3.2 Framework of face recognition
Our proposed feature-selection framework is shown in Fig. 3. In the first step, the input face image is converted to binary before feature extraction. The centroid (X, Y) of the face image is calculated as follows:

X = Σmx / Σm and Y = Σmy / Σm    (2)

Here, x, y are the coordinate values and m = f(x, y) ∈ {0, 1}. After that, the face region alone is cropped and converted to gray level, and the features are collected: the PZMI features [12] are extracted. In the second step, we use the MMAS algorithm to select features and reduce the dimension. In the final step, the images are classified by the Nearest Neighbor Classifier (NNC) [27] to make the decision, and the decisions are fed back to the MMAS algorithm.
Fig. 3. Framework of face recognition used MMAS algorithm
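The centroid of Eq. (2) is simply the mean coordinate of the foreground pixels of the binarized image. A minimal sketch, assuming the convention m = f(x, y) ∈ {0, 1} from the text (array sizes are illustrative):

```python
import numpy as np

# Centroid (X, Y) of a binary face image, following Eq. (2):
# X = sum(m*x)/sum(m), Y = sum(m*y)/sum(m), with m = f(x, y) in {0, 1}.

def centroid(binary):
    ys, xs = np.nonzero(binary)      # coordinates of pixels where m = 1
    return xs.mean(), ys.mean()      # (X, Y)

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 2:4] = 1                    # a small foreground blob
X, Y = centroid(img)
print(X, Y)                          # 2.5 2.0
```

The cropping and gray-level conversion that follow in the framework operate on the region around this centroid.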
4 MMAS proposed for the feature selection problem

4.1 Construct ant solutions
To apply the MMAS algorithm [43, 6] efficiently, we generate the construction graph G(E, V) for the n features F = {f1, f2, ..., fn}. We use a digraph with 2n arcs, i.e. O(n) arcs, to represent the search space. The digraph G(E, V) is described in Fig. 4.
Fig. 4. The construct graph G = (E, V ) for feature set F
Features are represented by nodes, and the relation between two features is represented by a connecting edge, in which:
1. Each feature is represented by a node in the graph G(E, V): node vi represents feature fi ∈ F (i = 1..n).
2. We add a source node v0 as the start node of the graph, where the ants begin.
3. For each pair of nodes vi−1 and vi (i = 1..n), we add two edges named Ci0 and Ci1. If the ant at vi−1 selects edge Ci0, feature fi is selected; otherwise, edge Ci1 is chosen and feature fi is not selected.
4. The ants move on the graph G(E, V) from v0 to v1, then v2, and so on. Each ant's walk stops when it reaches vn, and the feature subset is output. When an ant has moved from v0 to vn, a solution s has been constructed.
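A single ant's walk over this digraph can be sketched as follows; the edge choice uses the probabilities of Eq. (5), here with uniform pheromone and heuristic values, and all parameter values are illustrative:

```python
import random

# One ant walking the digraph of Fig. 4: at each node v_{i-1} it picks
# edge C_i0 (select feature f_i, j = 0) or C_i1 (skip it, j = 1) with
# probability proportional to tau[i][j]**alpha * eta[i][j]**beta (Eq. 5).

def construct_solution(tau, eta, alpha=1.0, beta=0.1):
    n = len(tau)
    mask = []
    for i in range(n):
        w0 = tau[i][0] ** alpha * eta[i][0] ** beta   # weight of "select"
        w1 = tau[i][1] ** alpha * eta[i][1] ** beta   # weight of "skip"
        mask.append(1 if random.random() < w0 / (w0 + w1) else 0)
    return mask  # mask[i] = 1 means feature f_{i+1} is in the subset

random.seed(0)
n = 8
tau = [[1.0, 1.0] for _ in range(n)]   # all trails start at tau_max
eta = [[1.0, 1.0] for _ in range(n)]   # uniform heuristic information
sol = construct_solution(tau, eta)
print(sol)
```

With uniform trails each feature is selected with probability 1/2; as the pheromone update of Section 4.3 reinforces edges of good solutions, the walk concentrates on discriminative features.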
4.2 MMAS-FS algorithm implementation
Algorithm 1. MMAS-FS algorithm proposed for the feature selection problem

PARAMETERS: α = 1, β = 0.1, ρ = 0.5; size of ant population K = 100; number of iterations NMax = 500.
BEGIN
  GENERATION: τmin = 0.01; τmax = 1;
  For each i = 1..n do
    For each j = 0..1 do
      τij = τmax;
    end for
  end for
  i = 1; sBest ⇐ ∅;
  Repeat
    For each antk (k = 1..K) do
      Construct solution sk of antk by choosing features according to (5) and (6);
      If (sBest = ∅) then sBest ⇐ sk;
      If f(sBest) < f(sk) then
        sBest ⇐ sk;
        Update pheromone trails according to (7);
        Keep τi,j in [τmin, τmax] by (10);
      end if
    end for
  Until (i > NMax) or (optimal solution s* found);
  Return best subset s*;
END
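The control flow of Algorithm 1 can be sketched in Python. The cost function below is a toy stand-in for Eq. (8), and all helper names and parameter values are illustrative assumptions, not the paper's implementation:

```python
import random

# Control-flow sketch of Algorithm 1 (MMAS-FS): construction follows
# Eq. (5), the best-solution pheromone update and clamping follow
# Eqs. (7) and (10) in simplified form.

def toy_cost(mask):
    # Stand-in for f(s): reward even-indexed features, penalise subset size.
    return sum(b for i, b in enumerate(mask) if i % 2 == 0) - 0.1 * sum(mask)

def mmas_fs(n, n_ants=20, n_iter=50, alpha=1.0, beta=0.1,
            rho=0.5, q=1.0, t_min=0.01, t_max=1.0, seed=1):
    rng = random.Random(seed)
    tau = [[t_max, t_max] for _ in range(n)]   # Eq. (3): start at tau_max
    eta = [[1.0, 1.0] for _ in range(n)]       # uniform heuristic here

    def construct():                           # Eq. (5) edge choice
        mask = []
        for i in range(n):
            w0 = tau[i][0] ** alpha * eta[i][0] ** beta
            w1 = tau[i][1] ** alpha * eta[i][1] ** beta
            mask.append(1 if rng.random() < w0 / (w0 + w1) else 0)
        return mask

    best, best_cost = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            s = construct()
            if best is None or toy_cost(s) > best_cost:
                best, best_cost = s, toy_cost(s)
        for i in range(n):                     # Eq. (7): evaporate + reinforce
            for j in (0, 1):
                tau[i][j] *= (1 - rho)
            tau[i][0 if best[i] else 1] += q
            for j in (0, 1):                   # Eq. (10): clamp
                tau[i][j] = min(max(tau[i][j], t_min), t_max)
    return best, best_cost

subset, cost = mmas_fs(6)
print(subset, cost)
```

Keeping every trail inside [τmin, τmax] is what distinguishes MMAS from plain AS: even after heavy reinforcement of the best solution, no edge probability collapses to zero.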
4.3 Update pheromones
Let τmin and τmax be the lower and upper bounds of the pheromone values. The pheromone value τi,j is the information fed back on the graph G(E, V) as the ants search. All pheromone trails are initialized to τmax:

τi,j = τmax = 1, ∀i = 1..n, ∀j = 0..1    (3)
For each candidate feature fi, each time feature fi is added to the solution sk, the ant moves along edge Ci0 and the pheromone factor of sk is incremented by τi,0. The total pheromone of a solution is calculated as

τs = Σi=1..n Σj=0..1 τi,j, ∀fi ∈ s    (4)
The ants start with an empty solution and move on the graph G(E, V) to construct a feature subset and find the optimal solution. Suppose that, while constructing a new solution, an ant is currently at vi−1; it must decide on the next node vi by moving over an edge {vi−1, vi}. After each iteration, the pheromone of the edge {vi−1, vi} increases when the features fi and fi−1 are in the best solution s*. Each ant is initialized with an empty solution; in the first step, an ant starts at v0 and features are selected according to their pheromone values. The probability that the ant at vi−1 selects edge Cij, based on the local heuristic information of the remaining candidates, is updated according to the formula

Pi,j(t) = [τi,j(t)]^α [ηi,j]^β / ([τi,0(t)]^α [ηi,0]^β + [τi,1(t)]^α [ηi,1]^β), ∀i = 1..n, ∀j = 0, 1    (5)
where τi,j(t) is the pheromone factor of edge Cij between nodes (vi−1, vi) at time t, which reflects the tendency of ants to follow edge Cij (j = 0, 1). The influence of the pheromone concentration on the probability value is controlled by the constant α, while the constant β does the same for the desirability, controlling the relative importance of the pheromone trail versus the local heuristic value. The knowledge-based information ηi,j for choosing edge Cij is defined as

ηi,1 = Σk=1..m (x̄i^k − x̄i) / Σk=1..m [(1/(Ni^k − 1)) Σj=1..Ni^k (xi,j^k − x̄i^k)²], ∀i = 1..n    (6)

where Ni^k denotes the number of samples of feature fi (i = 1..n) in image class k (k = 1..m); xi,j^k is the j-th (j = 1..Ni^k) training sample of feature fi for images in class k; x̄i is the mean value of feature fi over all images, and x̄i^k is its mean over the images in class k. A large ηi,1 implies that feature fi has greater discriminative ability, and ηi,0 = (ξ/n) Σi=1..n ηi,1 is a constant.
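The ratio of Eq. (6) can be computed per feature directly from the class-wise samples. A sketch on synthetic data; the absolute value in the numerator is our assumption, used to keep the score nonnegative, and the class sizes are illustrative:

```python
import numpy as np

# Heuristic information eta_{i,1} of Eq. (6) for a single feature f_i:
# deviation of the class means from the global mean, divided by the sum
# of the unbiased within-class variances.

def eta_i1(samples_by_class):
    """samples_by_class: one 1-D array of f_i values per image class."""
    x_bar = np.concatenate(samples_by_class).mean()      # global mean of f_i
    # abs() is an assumption here to keep the score nonnegative.
    num = sum(abs(x.mean() - x_bar) for x in samples_by_class)
    den = sum(x.var(ddof=1) for x in samples_by_class)   # 1/(N_i^k - 1) terms
    return num / den

rng = np.random.default_rng(0)
well_separated = [rng.normal(0.0, 0.1, 10), rng.normal(5.0, 0.1, 10)]
overlapping = [rng.normal(0.0, 0.1, 10), rng.normal(0.05, 0.1, 10)]
hi = eta_i1(well_separated)
lo = eta_i1(overlapping)
print(hi > lo)   # the well-separated feature is more discriminative
```

A feature whose class means are far apart relative to its within-class spread thus receives a larger ηi,1 and a higher selection probability in Eq. (5).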
The ants' pheromone values typically improve after each loop. The pheromone trail is updated following the best solution by

τi,j(t + 1) ← (1 − ρ) × τi,j(t) + Δτi,j^Best(t) + Qi,j(t)    (7)

where ρ is a tuning parameter, the trail evaporation rate (0 ≤ ρ < 1); Δτi,j^Best(t) = (1/|Si,j(t)|) Σs∈Si,j(t) f(s); Qi,j(t) = Q, a positive constant, if Cij ∈ sBest, and Qi,j(t) = 0 otherwise; and Si,j(t) is the set of solutions in iteration t that traverse edge Cij. The best solution found is sBest, and the extra pheromone increment goes to the edges included in sBest. The cost function f(s) of a solution s is calculated as

f(s) = NCorrect / (1 + λ NFeat)    (8)

in which NFeat is the number of features selected in s, NCorrect is the number of correctly classified examples, and the constant λ weights the classification accuracy against the number of selected features.
The best solution s* satisfies

f(s*) ≥ f(s), ∀s ∈ Si,j (∀i = 1..n, ∀j = 0..1)    (9)

The algorithm stops either when an ant has found an optimal solution or when the maximum number of iterations has been executed. After each pheromone update, τi,j is kept in [τmin, τmax], ∀i = 1..n, j = 0..1, defined by

τi,j = τmax if τi,j > τmax;  τi,j if τi,j ∈ [τmin, τmax];  τmin if τi,j < τmin    (10)
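For a single edge, the update of Eq. (7) followed by the clamping of Eq. (10) can be sketched as follows; Q and the example values are illustrative:

```python
# Pheromone update of Eq. (7) with the clamping of Eq. (10) for one edge
# C_ij: evaporate, deposit the mean quality of the solutions that used
# the edge, add Q when the edge lies on the best-so-far solution, then
# clamp the result into [tau_min, tau_max].

def update_edge(tau, costs_through_edge, on_best,
                rho=0.5, Q=0.2, t_min=0.01, t_max=1.0):
    deposit = (sum(costs_through_edge) / len(costs_through_edge)
               if costs_through_edge else 0.0)      # Delta tau^Best of Eq. (7)
    tau = (1 - rho) * tau + deposit + (Q if on_best else 0.0)
    return min(max(tau, t_min), t_max)              # Eq. (10) clamp

t1 = update_edge(1.0, [0.6, 0.8], on_best=True)     # 0.5 + 0.7 + 0.2 -> clamped
t2 = update_edge(1.0, [], on_best=False)            # evaporation only
print(t1, t2)
```

The clamp guarantees every edge keeps a selection probability of at least roughly τmin/(τmin + τmax), so the search never freezes on one subset.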
5 Experiment and Results

5.1 Experiment implementation
The ORL gray-scale database includes 400 facial images of 40 different individuals, each with dimensions 92 × 112 [56]. After pre-processing, we extracted the PZMI features of orders 1 to 20 from each face image, and used the proposed MMAS-FS algorithm to select the optimal feature subsets. In this experiment, we used 40 classes with 10 images in each class. Some sample images are shown in Fig. 5.
Fig. 5. Sample face images of ORL Databases
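The NNC evaluation step that produces NCorrect for Eq. (8) can be sketched on synthetic data; the feature vectors below merely stand in for the PZMI features, and all dimensions and values are illustrative:

```python
import numpy as np

# Evaluating a candidate feature subset with the 1-nearest-neighbour
# classifier used in the framework. Synthetic two-class data stand in
# for the PZMI feature vectors.

def nnc_accuracy(train_X, train_y, test_X, test_y, subset):
    tr, te = train_X[:, subset], test_X[:, subset]
    correct = 0
    for x, y in zip(te, test_y):
        d = np.linalg.norm(tr - x, axis=1)        # Euclidean distances
        correct += train_y[int(np.argmin(d))] == y
    return correct / len(test_y)

rng = np.random.default_rng(42)
n_feat = 20
mu = np.zeros(n_feat)
mu[:5] = 3.0   # two classes separated only along features 0-4; rest is noise
X0 = rng.normal(0, 1, (30, n_feat))
X1 = rng.normal(0, 1, (30, n_feat)) + mu
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)
tr_idx, te_idx = np.arange(0, 60, 2), np.arange(1, 60, 2)
acc = nnc_accuracy(X[tr_idx], y[tr_idx], X[te_idx], y[te_idx], list(range(5)))
print(acc)   # the discriminative subset yields high accuracy
```

Selecting only the discriminative features both raises NCorrect and lowers NFeat, which is exactly the trade-off the cost function of Eq. (8) rewards.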
We analyze, compare, and evaluate the effectiveness of MMAS-FS against other meta-heuristic approaches. The performance comparison in Table 3 shows that the proposed MMAS-FS produces a lower classification error rate (1.5%) with fewer selected features, and competitive execution times, compared with the GA-based and ACO-based methods.

Table 3. The comparison of performance of meta-heuristic algorithms

Methods | Mean Square Error (%) | Number of features selected | Execution time (s)
GA [48, 12] | 3.5 | 55 | 1080
ACO [37, 1] | 3.25 | 53 | 215
ACOSVM [54] | 3 | 45 | 325
ACO [5] | 4.5 | 55 | 255
ACO [11] | 5 | 54 | 280
ACS [12] | 3 | 49 | 780
AS Rank [12] | 1.5 | 42 | 300
ACOG [41] | 3.5 | 43 | 315
ACOFS [26] | 4 | 45 | 347
MMAS-FS | 1.5 | 40 | 285
Fig. 6. The comparison of recognition rate of PZMI feature subsets of each algorithm
Finally, we compare the recognition rates obtained with the PZMI feature subsets of each algorithm, shown in Fig. 6. Our proposed algorithm achieves a recognition rate of up to 98.83%, and still reaches 98.57% with only 40 selected features.
6 Conclusions
We have proposed the MMAS-FS feature-selection algorithm based on PZMI features for face recognition. The heuristic information extracted from the selected feature vector serves as the ants' pheromone, and the optimal feature subset is the one with the minimal feature length and the best classifier performance. Experiments on face recognition show that our algorithm can be applied easily without a priori information about the features, and that MMAS-FS is more effective than other meta-heuristic approaches for FS. On the ORL database, our algorithm achieves recognition rates of 98.83%, and 98.57% with only 40 selected features.
References

1. B. Liu, H.A. Abbass, B. McKay (2004), Classification rule discovery with ant colony optimization, IEEE Computational Intelligence Bulletin 3(1).
2. B. Heisele et al. (2000), Face Detection in Still Gray Images, AI Laboratory, MIT.
3. B. Raman, T.R. Ioerger (2002), Instance-based filter for feature selection, Journal of Machine Learning Research 1, pp.123.
4. C. Traina, A. Traina, L. Wu, C. Faloutsos (2000), Fast feature selection using the fractal dimension, in: Proceedings of the 15th Brazilian Symposium on Databases (SBBD), pp.158-171.
5. C.K. Zhang et al. (2005), Feature selection using the hybrid of ant colony optimization and mutual information for the forecaster, in: Proceedings of the 4th Int. Conf. on Machine Learning and Cybernetics.
6. Dac-Nhuong Le (2014), Evaluation of Pheromone Update in Min-Max Ant System Algorithm to Optimizing QoS for Multimedia Services in NGNs, Advances in Intelligent and Soft Computing, Vol.338, pp.9-17, Springer.
7. D.Q. Dai, P.C. Yuen (2003), Regularized discriminant analysis and its applications to face recognition, Pattern Recognition 36(3), pp.845-847.
8. Ersi, E.F., Zelek, J.S., Tsotsos, J.K. (2007), Robust face recognition through local graph matching, Journal of Multimedia, pp.31-37.
9. Etemad, K., Chellappa, R. (1997), Discriminant analysis for recognition of human face images, Journal of the Optical Society of America 14, pp.1724-1733.
10. Ersi, E.F., Zelek, J.S. (2006), Local feature matching for face recognition, in: Proceedings of the 3rd Canadian Conference on Computer and Robot Vision.
11. H.H. Gao et al. (2005), Ant colony optimization based network intrusion feature selection and detection, in: Proceedings of the 4th International Conference on Machine Learning and Cybernetics.
12. Hamidreza Rashidy Kannan et al. (2008), An improved feature selection method based on ant colony optimization (ACO) evaluated on face recognition system, Applied Mathematics and Computation, Vol.205, pp.716-725.
13. Huafeng Wang, Yunhong Wang, and Yuan Cao (2009), Video-based Face Recognition: A Survey, World Academy of Science, Engineering and Technology, Vol.3, pp.273-283.
14. H. Yu, J. Yang (2001), A direct LDA algorithm for high-dimensional data with application to face recognition, Pattern Recognition Vol.34(10), pp.2067-2070.
15. H. Almuallim, T.G. Dietterich (1991), Learning with many irrelevant features, in: The 9th National Conf. on Artificial Intelligence, MIT Press, pp.547-552.
16. H. Liu, R. Setiono (1996), Feature selection and classification - a probabilistic wrapper approach, in: Proceedings of the 9th ICIEAAIES, pp.419-424.
17. Jae Young Choi et al. (2008), Feature Subspace Determination in Video-based Mismatched Face Recognition, in: 8th IEEE International Conference on Automatic Face and Gesture Recognition.
18. J. Soldera et al. (2015), Customized Orthogonal Locality Preserving Projections With Soft-Margin Maximization for Face Recognition, IEEE Trans. on Instrumentation, Vol.64(9), pp.2417-2426.
19. Jie Gui et al. (2012), Discriminant sparse neighborhood preserving embedding for face recognition, Pattern Recognition Vol.45(8), pp.2884-2893.
20. Kim, M., Kumar, S., Pavlovic, V., Rowley, H.A. (2008), Face tracking and recognition with visual constraints in real-world videos, in: CVPR.
21. K. Kira, L.A. Rendell (1992), The feature selection problem: traditional methods and a new algorithm, in: Proceedings of the 9th National Conference on Artificial Intelligence, pp.129-134.
22. Kim, T.K. et al. (2007), Discriminative learning and recognition of image set classes using canonical correlations, IEEE Trans. Pattern Anal. Mach. Intell. 29(6).
23. L.F. Chen et al. (2000), A new LDA-based face recognition system which can solve the small sample size problem, Pattern Recognition Vol.33(10), pp.1713-1726.
24. Le Nguyen Bao, Dac-Nhuong Le, Le Van Chung, Gia Nhu Nguyen (2016), Performance Evaluation of Video-Based Face Recognition Approaches for Online Video Contextual Advertisement User-Oriented System, Advances in Intelligent System and Computing, Vol.435, pp.287-295, Springer.
25. Le Nguyen Bao, Le Van Chung, and Do Nang Toan (2016), A Proposed Framework for the Online Video Contextual Advertisement User-Oriented System using Video-based Face Recognition, International Journal of Applied Engineering Research, Vol.11(15), pp.8609-8617.
26. Ling Chen, Bolun Chen, Yixin Chen (2011), Image Feature Selection Based on Ant Colony Optimization, Lecture Notes in Computer Science, Vol.7106, pp.580-589.
27. L. Kozma (2008), k-Nearest Neighbours Algorithm, Helsinki University of Technology.
28. Moghaddam, B., Nastar, C., Pentland, A. (1996), Bayesian face recognition using deformable intensity surfaces, in: Proceedings of Computer Vision and Pattern Recognition, pp.638-645.
29. Ming-Hsuan Yang (2002), Face recognition using extended isomap, in: Proceedings of the 2002 International Conference on Image Processing.
30. P.N. Belhumeur et al. (1997), Eigenfaces vs. Fisherfaces: recognition using class specific linear projection, IEEE Trans. on Pattern Analysis and Machine Intelligence 19, pp.711-720.
31. P. Viola and M. Jones (2001), Rapid Object Detection Using a Boosted Cascade of Simple Features, in: Proc. Conf. Computer Vision and Pattern Recognition, pp.511-518.
32. P. Paclik et al. (2002), On feature selection with measurement cost and grouped features, in: Proceedings of the 4th Int. Workshop on Statistical Techniques in Pattern Recognition, pp.461-469.
33. Park, U. et al. (2007), 3D model-based face recognition in video, LNCS 4642, pp.1085-1094.
34. R. Setiono, H. Liu (1997), Neural network feature selector, IEEE Transactions on Neural Networks 8(3), pp.645-662.
35. R. Jensen, Q. Shen (2001), A rough set-aided system for sorting WWW bookmarks, Web Intelligence: Research and Development, pp.95-105.
36. Rui-Fan Li et al. (2005), Face recognition using KFD-Isomap, in: International Conference on Machine Learning and Cybernetics, pp.4544-4548.
37. R. Jensen (2005), Combining rough and fuzzy sets for feature selection, Ph.D. Thesis, University of Edinburgh.
38. Stallkamp, J., Ekenel, H.K. (2007), Video-based face recognition on real-world data.
39. Shaokang Chen et al. (2010), Face Recognition from Still Images to Video Sequences: A Local-Feature-Based Framework, EURASIP Journal on Image and Video Processing.
40. Soma Biswas et al. (2012), Multidimensional Scaling for Matching Low-Resolution Face Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.34(10), pp.2019-2030.
41. S. Venkatesan et al. (2010), Face Recognition System with Genetic Algorithm and ANT Colony Optimization, Int. Journal of Innovation, Management and Technology, Vol.1(5), pp.469-471.
42. S. Mika et al. (2000), Invariant feature extraction and classification in feature spaces, Advances in Neural Information Processing Systems 12, MIT Press, Cambridge, pp.526-532.
43. Stutzle, T., Ibanez, M.L., Dorigo, M. (2010), A Concise Overview of Applications of Ant Colony Optimization, Wiley, New York.
44. Turk, M. et al. (1991), Eigenfaces for recognition, Journal of Cognitive Neuroscience 3, pp.71-86.
45. Tat-Jun Chin et al. (2006), Incremental Kernel SVD for Face Recognition with Image Sets, in: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition.
46. W.Y. Zhao, R. Chellappa, A. Rosenfeld, P.J. Phillips (2003), Face Recognition: A Literature Survey, ACM Computing Surveys, Vol.35.
47. Wu, Y. et al. (2008), Integrating illumination, motion and shape models for robust face recognition in video, EURASIP Journal of Advances in Signal Processing.
48. W. Siedlecki et al. (1989), A note on genetic algorithms for large-scale feature selection, Pattern Recognition Letters 10(5), pp.335-347.
49. Yan, S., Xu, D., Zhang, B., Zhang, H.J., Yang, Q., Lin, S. (2007), Graph embedding: A general framework for dimensionality reduction, IEEE Transactions on PAMI 29(1), pp.40-51.
50. Yong Xu et al. (2014), Data Uncertainty in Face Recognition, IEEE Transactions on Cybernetics, Vol.44(10), pp.1950-1961.
51. Ying Han Pang et al. (2008), Supervised Locally Linear Embedding in face recognition, in: Biometrics and Security Technologies, ISBAST 2008, pp.1-6.
52. Zhaoxiang Zhang, Chao Wang, and Yunhong Wang (2011), Video-Based Face Recognition: State of the Art, LNCS Vol.7098, pp.19, Springer.
53. Z. Liu and Y. Wang (2000), Face Detection and Tracking in Video Using Dynamic Programming, in: Proc. Int'l Conf. Image Processing.
54. Zhong Yan et al. (2004), Ant Colony Optimization for Feature Selection in Face Recognition, LNCS Vol.3072, pp.221-226.
55. Ziqiang Wang et al. (2012), Optimal Kernel Marginal Fisher Analysis for Face Recognition, Journal of Computers, Vol.7(9), pp.2298-2305.
56. http://www.face-rec.org/databases/