Vol 8. No. 2 June, 2015 African Journal of Computing & ICT © 2015 Afr J Comp & ICT – All Rights Reserved - ISSN 2006-1781 www.ajocict.net

Enhancement in Security by Reducing Dimensions of Hyperspectral Face Images for Face Recognition

S. Arya, N. Pratap & V. Kumar
Department of Computer Science, Gurukula Kangri Vishwavidyalaya, Haridwar
Neelkanth Institute of Technology, Meerut
[email protected]
Phone: +919897335329

ABSTRACT
Over the past few decades the security system industry has grown rapidly, and every year there is increased demand for more precise, accurate and foolproof security systems. Designing a flawless advanced security system involves many complex parameters, the most important of which is the image; this has attracted researchers to the field of Face Recognition (FR). Because of the high dimensionality of the images, collecting and analyzing them is always a challenging task. The depth of the dimension is the main problem, especially for hyperspectral face images: it decreases classification accuracy and thereby degrades the performance of a security system. During experimentation it has been observed that reducing the dimensionality improves both classification accuracy and recognition accuracy, and therefore directly improves security performance. The objectives of this paper are: 1) to study and review the major data reduction techniques along with their advantages and disadvantages; 2) to examine the relevance of data reduction techniques in improving the overall accuracy of an FR system; and 3) to consider the implementation of FR systems in designing advanced security systems.

Keywords: PCA; Face Recognition; Preprocessing.

African Journal of Computing & ICT Reference Format: S. Arya, N. Pratap & V. Kumar (2015). Enhancement in Security by Reducing Dimensions of Hyperspectral Face Images for Face Recognition. Afr J. of Comp & ICTs. Vol 8, No. 2. Pp 35-40.

1. INTRODUCTION
A conventional face recognition system uses broadband monochrome or color imaging sensors [17], so only imagery that falls within the visible part of the Electromagnetic Spectrum (EMS) can be seen and analyzed. In such content-based imagery (CBI) there is a large gap between the low-level visual features, characterized by color, texture and shape, and the high-level content, and because of these low-level features such images cannot be interpreted directly by a human. Examples of this type of imagery are multispectral images, hyperspectral images [16] and medical images, which contain thousands of features to represent a single image. When a computer processes such images, this large volume of features creates problems in analysis and decreases both the performance and the efficiency of the system. It is therefore necessary to reduce the dimensionality of these images without loss of information. Examples of hyperspectral and multispectral images are shown in Figure 1. Dimensionality reduction is the process of converting high-dimensional patterns into low-dimensional patterns while maintaining the local structure of the original high-dimensional data; it removes redundant features and extracts the relevant ones.

Fig. 1: Example of multispectral and hyperspectral images [14]



Supervised classification: with supervised classification, we identify similar areas on an image by marking 'training sites', also known as Areas of Interest (AOI), of known targets or classes, and then extrapolating those to other areas of unknown targets. Several supervised classification algorithms are used in the DIP research domain, namely Quadratic Discriminant Analysis (QDA), Minimum Distance, Parallelepiped, Maximum Likelihood [20], Linear Discriminant Analysis (LDA) and the Support Vector Machine [19].
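As a minimal sketch of this supervised workflow (not taken from the paper), the snippet below trains one of the listed classifiers, LDA, on synthetic labelled "training site" pixels and then extrapolates the learned classes to pixels of unknown class; the spectral values, class count and band count are all illustrative assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for labelled training sites: each row is the spectral
# response of one pixel, each label is the known target class of its AOI.
rng = np.random.default_rng(0)
train_pixels = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 6)) for c in (0.0, 1.0, 2.0)])
train_labels = np.repeat([0, 1, 2], 50)

clf = LinearDiscriminantAnalysis()
clf.fit(train_pixels, train_labels)        # learn class signatures from the training sites

# Extrapolate to pixels whose class is unknown.
unknown_pixels = rng.normal(loc=1.0, scale=0.8, size=(10, 6))
print(clf.predict(unknown_pixels))         # predicted class for each unknown pixel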

Intrinsic dimensionality is the term that specifies the number of parameters required to describe the properties of the data, so the reduced representation should ideally correspond to the intrinsic dimensionality. The main advantage of a lower-dimensional representation is efficient visualization of the data; likewise, high-dimensional data can be analyzed by first transforming it into a lower-dimensional representation. For example, hyperspectral satellites produce images containing thousands of bands, and this large set of bands makes analysis difficult, so it is necessary to reduce the bands by selecting those that carry the relevant information. There are two families of reduction techniques: linear and non-linear. In linear techniques the high-dimensional data set is transformed into a low-dimensional data set by a linear mapping; in other words, if m is the dimensionality of the original data and n is the dimensionality of the reduced data, then n is always less than m.
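A minimal numpy sketch of the linear mapping just described: each m-dimensional vector is projected onto n < m directions by one matrix multiplication. The projection matrix here is random purely for illustration; a real method such as PCA or LDA would learn it from the data.

import numpy as np

rng = np.random.default_rng(1)
m, n = 1000, 20                      # original and reduced dimensionality, n < m
X = rng.normal(size=(200, m))        # 200 samples, e.g. pixels with 1000 bands (synthetic)

W = rng.normal(size=(m, n))          # linear map; a real technique learns W from X
X_low = X @ W                        # each sample becomes an n-dimensional vector
print(X.shape, "->", X_low.shape)    # (200, 1000) -> (200, 20)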

Unsupervised classification: with unsupervised classification, the user does not need to specify any 'training sites' or AOI. The classification creates natural groupings of the pixels in the image, called clusters or classes. Examples of unsupervised classification algorithms are c-means clustering, Active Classification through Clustering (ACC) and k-means clustering [18]; a short k-means sketch is given after this paragraph. In today's scenario, the main objectives of image processing are visualization, enhancement, sharpening and restoration, transformation, classification, and the measurement of patterns across different areas of research and application. Face Recognition is a pattern-matching technique used to identify and verify an individual human face.
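The sketch below illustrates the unsupervised route with k-means: pixels are grouped into natural clusters without any training sites. The pixel values, band count and cluster count are synthetic assumptions used only to show the idea.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
pixels = rng.random((5000, 4))                 # 5000 pixels, 4 spectral bands (synthetic)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(pixels)       # each pixel assigned to one of 5 clusters
print(np.bincount(cluster_ids))                # pixels per natural grouping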

Linear techniques mainly comprise two methods: Principal Component Analysis (PCA) [3] and Linear Discriminant Analysis (LDA) [4]. Their main advantages are that they are simple and easy to implement. However, linear techniques are limited in their ability to recover the true structure of non-linear datasets; this limitation is overcome by non-linear techniques such as Locally Linear Embedding (LLE) [5] and Isomap [6].
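A short scikit-learn sketch of the two linear techniques: PCA finds directions of maximum variance without using labels, while LDA uses the class labels to find discriminative directions. The face-like data here is random and only illustrates the API, not the experiments cited in this paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 100))        # 120 face vectors of 100 features (synthetic)
y = np.repeat(np.arange(12), 10)       # 12 subjects, 10 images each

X_pca = PCA(n_components=40).fit_transform(X)                             # unsupervised: variance only
X_lda = LinearDiscriminantAnalysis(n_components=11).fit_transform(X, y)   # supervised: at most classes-1 dims
print(X_pca.shape, X_lda.shape)        # (120, 40) (120, 11)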

3. FACE IMAGE EXTRACTION SYSTEM
Since FR has wide application in different fields such as visual surveillance, retrieval of an identity from a face database for forensic applications, criminal investigation and security systems, it has gained increasing attention from researchers aiming to improve its efficiency. Face recognition also faces the dimension-reduction problem: a face image of p*q pixels is represented by a vector in R^(p*q), and this p*q-dimensional space is too large for the recognition system and increases the complexity of the recognition process. It is therefore necessary to reduce the dimensionality of the data so that fewer variables are used for computation.
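To make the p*q figure concrete: an ORL-sized face image of 112 x 92 pixels, once flattened, becomes a 10,304-dimensional vector, which is the space the recognition system would otherwise have to work in. The image below is random and stands in for a real face.

import numpy as np

p, q = 112, 92                          # image height and width in pixels (ORL-sized)
face = np.random.rand(p, q)             # stand-in for a grayscale face image

vector = face.reshape(-1)               # the image as a point in R^(p*q)
print(vector.shape)                     # (10304,) -- far too many variables per face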

The main advantage of non-linear methods is their ability to handle the complex non-linear structure of large datasets efficiently. Sometimes the data of interest in a high-dimensional space lies on an abstract lower-dimensional space called a manifold; non-linear reduction methods of this kind are called manifold learning techniques [5][6]. In manifold learning, non-linear structure is extracted from the high-dimensional data set and the same structure is preserved in the low-dimensional representation. In such cases linear methods are limited, whereas non-linear dimensionality reduction techniques can recover the manifold structure efficiently.
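Both manifold learners named above are available in scikit-learn. The sketch below embeds a classic synthetic manifold (the "swiss roll") into two dimensions with LLE and Isomap, purely to illustrate the nonlinear reduction step; it does not reproduce any result from the cited papers.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding, Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # 3-D points lying on a curved 2-D manifold

X_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
X_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
print(X_lle.shape, X_iso.shape)        # both (1000, 2): the manifold unrolled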

In relation to machine learning, image extraction can be seen as the problem of dividing images into different classes, with accuracy represented by a performance metric; classification accuracy is the most widely used metric for measuring the performance of a machine learning system. An FR system generally comprises two steps: feature extraction and classification. In feature extraction, features are extracted to represent the faces belonging to a particular class; this step is performed on the low-dimensional data obtained by linear or non-linear dimensionality reduction. A classifier is then trained on the training images to assign face images to the appropriate classes.
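The two-step structure (feature extraction followed by a trained classifier) maps naturally onto a scikit-learn pipeline. The sketch below is an assumed illustration: it uses the Olivetti faces available through scikit-learn (a 64 x 64 copy of the AT&T/ORL images, downloaded on first use), PCA as the feature extractor and a nearest-neighbor classifier as the second stage.

from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()                       # 400 images, 40 subjects, 64x64 pixels
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.5, stratify=faces.target, random_state=0)

# Stage 1: extract low-dimensional features; Stage 2: classify them.
fr_system = make_pipeline(PCA(n_components=40), KNeighborsClassifier(n_neighbors=1))
fr_system.fit(X_tr, y_tr)
print("recognition accuracy:", fr_system.score(X_te, y_te))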

2. FACE IMAGE CLASSIFICATION
Image classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their digital number (DN) values. A classified image is composed of an array of pixels, each of which belongs to a particular theme, and is called a thematic image. Classification of multi-dimensional images assigns a corresponding label to groups of pixels with the same characteristics, with the aim of discriminating multiple objects from each other within the image(s). The main objective of image classification is to establish the spatial relationship of each pixel with its neighbouring pixels and to assign every pixel in the image to a particular class [20]. The image classification process takes two forms:



4. LITERATURE REVIEW

In 2010, Mahesh Pal et al. [7] studied dimensionality reduction as a preprocessing step. The Support Vector Machine (SVM) is a widely used classification method that constructs a hyperplane providing the largest separation between classes, and hence good classification accuracy. It was shown experimentally that the classification accuracy of SVM can be increased by reducing the dimensionality of the data [7]: as the number of features grows, the classification accuracy of SVM falls, so SVM is best suited to small feature sets and small training sets. The experimental results demonstrated a strong dependency between dimensionality reduction and SVM classification accuracy: with feature dimensions of 55, 60 and 65, the accuracy achieved was 92.24%, 92.11% and 91.76% respectively.
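The dependence reported in [7] can be probed with a small loop: reduce the data with PCA to several candidate dimensionalities and record the SVM accuracy at each. The sketch below uses the Olivetti faces as an assumed stand-in for the hyperspectral data of [7], and the SVM parameters are guesses, so the exact percentages will differ from those above.

from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.5, stratify=faces.target, random_state=0)

for n_dims in (55, 60, 65):                            # feature dimensions examined in [7]
    model = make_pipeline(PCA(n_components=n_dims), SVC(kernel="rbf", C=10, gamma="scale"))
    model.fit(X_tr, y_tr)
    print(n_dims, "dims -> accuracy", round(model.score(X_te, y_te), 4))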


A number of studies have been carried out in the field of face detection and recognition. Face recognition methods are generally based on two approaches: feature-based methods and subspace methods. In feature-based methods, local features of the face such as the positions of the eyes, nose and mouth are collected, whereas subspace methods reduce the dimensionality of the data while maintaining the maximum distinction between classes. The subspace methods follow two main approaches, 'Eigenfaces' [13] and 'Fisherfaces' [4]. In 'Eigenfaces', PCA is used as a linear unsupervised dimensionality reduction method to generate the subspace, while 'Fisherfaces' uses Linear Discriminant Analysis (LDA) as a linear supervised dimensionality reduction method.

Besides improving classification accuracy, dimensionality reduction offers further advantages to SVM, such as faster classification through a smaller feature set and reduced data storage requirements. In 2009, Jianke Li et al. [8] introduced an FR method based on a combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for feature extraction. PCA is used first for feature extraction and dimensionality reduction, producing components that are uncorrelated and have low reconstruction error; LDA is then applied to increase class separation and provide higher discrimination between samples. PCA thus forms a PCA subspace, and the combination of PCA and LDA forms an LDA subspace with high discrimination.
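A hedged sketch of the chained feature extraction described in [8]: PCA first decorrelates and compresses the face vectors, then LDA is fitted on the PCA scores to sharpen class separation. The data here is synthetic and the component counts are illustrative assumptions, not the settings used in [8].

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 1024))            # 200 face vectors (synthetic stand-in)
y = np.repeat(np.arange(40), 5)             # 40 subjects, 5 images each

pca = PCA(n_components=60).fit(X)           # PCA subspace: uncorrelated, low reconstruction error
X_pca = pca.transform(X)

lda = LinearDiscriminantAnalysis(n_components=39).fit(X_pca, y)   # at most classes-1 dimensions
X_lda = lda.transform(X_pca)                # LDA subspace with higher class discrimination
print(X_pca.shape, X_lda.shape)             # (200, 60) (200, 39)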

PCA finds a projection onto a lower-dimensional representation that represents the data with minimum error and identifies the best axes for projection [4]. LDA, on the other hand, maximizes the separation between different classes while keeping the scatter within each class small. LDA is often preferred in classification systems because it achieves high class discrimination by using the class information [3][4]. In 2001, Aleix M. Martinez showed that PCA can perform better than LDA when the number of samples per class is very small [3]. LLE and Isomap, which are non-linear dimensionality reduction techniques, have also been used for FR [5][11]. LLE processes high-dimensional data containing non-linear structure while preserving the neighborhood structure of the original data, but it shows less discrimination capability than LDA [11]. In 2000, Joshua B. Tenenbaum et al. [6] used Isomap to preserve the true dimensionality and geometric structure of high-dimensional data; the results were favourable in that the global structure was maintained, but the computational cost was too high for large datasets. It has also been shown experimentally that when Isomap is applied alone for face recognition, the performance is satisfactory [11]. Several other studies have combined different dimension reduction techniques for face recognition [8][12]; however, the performance achieved was not very high.

To perform classification, a Nearest Neighbor Classifier (NNC) was used on the ORL face database, which contains 400 images of 40 individuals. Accuracy was calculated for different training sample sizes and for various feature dimensions. As the sample size increased, the classification accuracy of both the PCA method and the combined PCA and LDA method also increased, and the combination of PCA and LDA gave better classification accuracy than PCA alone for face recognition [8]. In 2009, Md. Omar Faruqe et al. [9] proposed a face recognition system using Principal Component Analysis (PCA) and the Support Vector Machine (SVM). Since the major components of an FR system are feature extraction and classification, how the features are extracted and how they are assigned to the several classes are key considerations, so the choice of feature extractor and classifier is very important in a face recognition system. In that study, the authors used PCA for feature extraction and SVM for classification.



Classification with SVM is performed by constructing a hyperplane that provides the maximum separation between the data items; the vectors nearest to the hyperplane are termed support vectors. Points in the input space are mapped non-linearly into a high-dimensional feature space using kernel functions. In 2009, Md. Omar Faruqe et al. [9] used kernel functions in an experiment on the ORL face database of 400 images of 40 individuals. They selected 200 samples as the training set, used both for constructing the eigenfaces and for training the SVM. The experiments show that the combination of PCA and SVM [15] provides better classification accuracy for face recognition on the ORL face database.
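The PCA-plus-kernel-SVM combination of [9] can be sketched as follows: eigenfaces are built from 200 training images (5 per subject) and an RBF-kernel SVM is trained on the projections. The Olivetti faces again stand in for ORL, and the number of components and kernel parameters are assumptions, so the 97% figure is not reproduced exactly.

from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.svm import SVC

faces = fetch_olivetti_faces()
# 200 training images (5 per subject), the remaining 200 for testing, as in [9].
train_idx = [i for i in range(400) if i % 10 < 5]
test_idx = [i for i in range(400) if i % 10 >= 5]

pca = PCA(n_components=40).fit(faces.data[train_idx])        # eigenfaces from training images only
X_tr, X_te = pca.transform(faces.data[train_idx]), pca.transform(faces.data[test_idx])

svm = SVC(kernel="rbf", gamma="scale", C=10).fit(X_tr, faces.target[train_idx])
print("support vectors per class:", svm.n_support_)          # training points closest to the hyperplanes
print("accuracy:", svm.score(X_te, faces.target[test_idx]))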

Table 1: Summary of classification accuracy for the different methods

Method                                               Classifier                       Feature dimensions   Accuracy
Principal Component Analysis                         Nearest Neighbor Classifier      40                   85% approx.
Principal Component Analysis                         Support Vector Machine           40                   97% approx.
Combination of PCA and Linear Discriminant Analysis  Nearest Neighbor Classifier      40                   93% approx.

In 2010, Sangwoo Moon et al. [10] worked on a dimensionality reduction technique based on the Support Vector Machine, in which the decision vectors used for classification in the SVM serve as the mapping vectors for dimensionality reduction, producing a feature set that is efficient and has good classification ability. These mapping vectors are generally highly redundant, so the redundancy is further reduced by a step called asymmetric decorrelation, which removes the less meaningful mapping vectors. The remaining mapping vectors are well separated from one another and also yield higher classification accuracy. The experiment was performed on a handwritten numeric character dataset, and the classification accuracy was 98.2%. On the basis of these results it can be stated that SVM provides effective dimensionality reduction as well as high classification accuracy.
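One possible reading of this approach is to reuse the normal (decision) vectors of one-vs-rest linear SVMs as projection directions. The sketch below implements that reading only, without the asymmetric decorrelation step added in [10], and on scikit-learn's built-in handwritten digits rather than the dataset used by the authors, so it should be taken as an illustrative assumption rather than their method.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)                        # handwritten digits, 64 features each
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One-vs-rest linear SVMs: each class contributes one decision (normal) vector.
svm = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)
W = svm.coef_.T                                            # 64 x 10 matrix of mapping vectors

# Project onto the decision vectors and classify in the reduced 10-D space.
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr @ W, y_tr)
print("accuracy in reduced space:", clf.score(X_te @ W, y_te))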


Experiments conducted on the ORL face database with SVM as the classifier also show that PCA alone provides better classification accuracy, about 97%, for the same feature dimension [9].

5. CONCLUSION AND FUTURE WORK

The dimensionality of the features has a large impact on the classification accuracy of a classifier. SVM is a widely used classifier, but dimension reduction prior to SVM is a mandatory step. PCA is one of the best techniques for dimension reduction and is well suited to SVM: by providing good dimension reduction and high class discrimination before classification, it results in higher classification accuracy. PCA is an extensively used dimensionality reduction method but is limited in class discrimination; introducing LDA between PCA and SVM improves the classification accuracy. The combination of PCA and LDA also improves the capability of LDA when few sample images are available, and classification accuracy increases. During dimension reduction with PCA, the main features of the face images are extracted, LDA then selects the features most significant for class separability, and finally the SVM classifier assigns them to one of the available classes, so the classification accuracy is increased.

Table 1 provides a brief summary of the different dimensionality reduction methods used for face recognition. The experiments were performed on the ORL database, which contains 400 images of 40 individuals taken at different times under varying lighting and with different facial expressions against a dark homogeneous background. It can be concluded from the literature survey that, whatever classifier is used, dimensionality reduction is always a necessary step before the classification stage [7]: classification accuracy is higher when it is preceded by a better dimensionality reduction stage. When the Nearest Neighbor Classifier (NNC) is used on the face images of the ORL database, PCA gives a classification accuracy of 85%, which improves significantly to 93% with the PCA and LDA combination. It can therefore be stated that the combination of PCA and LDA is much more significant than PCA alone for improving classification accuracy [8].

The FR accuracy percentage therefore also increases, which directly enhances the security system: security depends directly on the accuracy of FR, and FR accuracy rises as classification accuracy rises. Future work should explore new techniques for removing the redundant mapping vectors of SVM, so as to minimise computation cost and improve the overall performance of the system.



REFERENCES

[1] G. Chen and S. E. Qian, "Dimensionality reduction of hyperspectral imagery using improved locally linear embedding," Journal of Applied Remote Sensing, vol. 1, 2007.
[2] Imola K. Fodor, "A Survey of Dimension Reduction Techniques," 2002.
[3] Aleix M. Martinez and Avinash C. Kak, "PCA versus LDA," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, 2001.
[4] Peter N. Belhumeur, Joao P. Hespanha, and David J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, 1997.
[5] S. T. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[6] Joshua B. Tenenbaum, Vin de Silva and John C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, 2000.
[7] Mahesh Pal and Giles M. Foody, "Feature Selection for Classification of Hyperspectral Data by SVM," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 5, 2010.
[8] Jianke Li, Baojun Zhao and Hui Zhang, "Face Recognition Based on PCA and LDA Combination Feature Extraction," The 1st International Conference on Information Science and Engineering, 2009.
[9] Md. Omar Faruqe and Md. Al Mehedi Hasan, "Face Recognition Using PCA and SVM," The 3rd International Conference on Anti-counterfeiting, Security, and Identification in Communication, 2009.
[10] Sangwoo Moon and Hairong Qi, "Effective Dimensionality Reduction based on Support Vector Machine," International Conference on Pattern Recognition, 2010.
[11] A. Zagouras, A. Macedonas, G. Economou and S. Fotopoulos, "An application study of manifold learning-ranking techniques in face recognition," Multimedia Signal Processing, pp. 445-448, 2007.
[12] Junping Zhang, Huanxing Shen and Zhi-Hua Zhou, "Unified Locally Linear Embedding and Linear Discriminant Analysis Algorithm for Face Recognition," Sinobiometrics, LNCS 3338, pp. 269-304, Springer, 2004.
[13] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, pp. 72-86, 1991.
[14] Wei Di, Lei Zhang, David Zhang, and Quan Pan, "Studies on Hyperspectral Face Recognition in Visible Spectrum with Feature Band Selection," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 40, no. 6, pp. 1354-1361, 2010.
[15] Ivanna K. Timotius, Iwan Setyawan, and Andreas A. Febrianto, "Face Recognition between Two Persons using Kernel Principal Component Analysis and Support Vector Machines," International Journal on Electrical Engineering and Informatics, vol. 2, no. 1, pp. 53-61, 2010.
[16] Zhihong Pan, Glenn Healey, Manish Prasad, and Bruce Tromberg, "Face Recognition in Hyperspectral Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1552-1560, 2003.
[17] Hong Chang, Andreas Koschan, Mongi Abidi, Seong G. Kong, and Chang-Hee Won, "Multispectral Visible and Infrared Imaging for Face Recognition," IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1-6, 2008.
[18] Aruna Mastani, "An Unsupervised Active Classification Technique for Face Recognition," International Journal of Engineering Science and Technology, vol. 2, no. 5, pp. 1073-1077, 2010.
[19] Ivanna K. Timotius, Iwan Setyawan, and Andreas A. Febrianto, "Face Recognition between Two Persons using Kernel Principal Component Analysis and Support Vector Machines," International Journal on Electrical Engineering and Informatics, vol. 2, no. 1, pp. 53-61, 2010.
[20] Tatyana V. Bandos, Lorenzo Bruzzone, and Gustavo Camps-Valls, "Classification of Hyperspectral Images with Regularized Linear Discriminant Analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 862-873, 2009.

