Procedia Computer Science 132 (2018) 706–717

International Conference on Computational Intelligence and Data Science (ICCIDS 2018)

Pixel Based Supervised Classification of Hyperspectral Face Images for Face Recognition

Shwetank^a,*, Neeraj^a, Jitendra^a, Vikesh^b, Kamal Jain^c

^a Department of Computer Science, Gurukula Kangri Vishwavidyalaya, Haridwar, India
^b Professor and Director, Neelkanth Group of Institutions, Meerut, India
^c Professor, Department of Civil Engineering, IIT Roorkee, India

Abstract

Hyperspectral facial data, captured with a hyperspectral camera, provides richer information than traditional cameras and images. This facial spectral dataset is a richer source for information extraction and analysis than traditional single-band data at a fixed wavelength of the Electromagnetic Spectrum (EMS). In this study, Hyperspectral Image (HSI) classification and recognition are performed in the visible to near-infrared (VIS-NIR) region using supervised classifiers: Maximum Likelihood, Minimum Distance and SAM (Spectral Angle Mapper). The experiment is performed on the CMU Hyperspectral Face Dataset (HFDS) within the spectral range of 610 to 1050 nm (45 spectral bands) using the ENVI 4.8 image processing tool. Supervised classification is first performed with the conventional classifiers (MLH, MDA and SAM) on the pre-processed HSI; the PCA-compressed dataset is then classified with the PCAMLH, PCAMDA and PCASAM classifiers.

© 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/3.0/). Peer-review under responsibility of the scientific committee of the International Conference on Computational Intelligence and Data Science (ICCIDS 2018).

Keywords: Hyperspectral; Supervised Classification; Maximum Likelihood; PCA

* Corresponding author. Tel.: 91-9760076668. E-mail address: [email protected]

1877-0509 © 2018 The Authors. Published by Elsevier Ltd.
DOI: 10.1016/j.procs.2018.05.077




1. Introduction

Traditional cameras capture only three spectral components (RGB) across the entire EMS range. Hyperspectral face imaging devices are a new class of sensors that sample many narrow bands of the EMS to capture images. HSI provides a rich spectral signature for every single pixel, which has proved useful in several areas, for instance security [1], remote object identification [1]-[5] and medicine [6]-[7]. Many biometric systems such as Facial Recognition (FR) use geometric characteristics in the three-dimensional domain [8]-[9] and achieve good results [10]-[11]. However, FR accuracy decreases when visibility and face pose change [12]. In 2001, R. Gross [12] showed a prominent drop in FR accuracy when images were captured with faces rotated by 30 degrees. To reduce this failure, in 2002 R. Gross also worked in three dimensions and achieved high FR accuracy for rotations of up to 60 degrees [13]. Partial face occlusion can also reduce FR accuracy; in 2008, Hotta [14] divided face images into separate local components and achieved high FR accuracy with 16% occlusion of the facial images. Three-dimensional facial image sets have also been studied for FR across various face angles [15]-[16]. All these studies provided significant improvements in FR accuracy compared to 2-dimensional FR datasets, but 3-dimensional FR requires depth computation and significant processing time. Thermal Face Images (TFI) also provide a progressive imaging modality for FR [17]-[18]; the methods applied to TFI were based on three-dimensional features, and TFI performance remains stable under pose changes. The above limitations of FR are addressed with the use of spectral statistics [19]. Numerous tissues present in the human skin reflect different spectral constituents of the EMS and thereby produce distinctive spectral characteristics [19]. In 2003 [19], reflectance spectral curves were obtained in the NIR range of the EMS. These curves show little variation in magnitude and shape for an individual face, and variations in face pose do not disturb the reflectance curves derived from an individual face.

2. Significance of Classification in Face Recognition

The main objective of classification is to assign every pixel in a face image to one of several face classes. The classified face image can then be used to separate different parts of the face, with each part shown in a different color. Normally, HSI are used in classification, and the spectral pattern of every single pixel within the image is used as the numerical basis for classification. HSI provides discriminative information for FR that cannot be obtained from any other imaging system [20]. The objective of image classification is to recognize and characterize classes, each as a unique color or gray level, which represent particular features. These color classes can also be used to show the spectral characteristics of every pixel within the class. The two key classification approaches are supervised and unsupervised classification.
In supervised classification, the user selects sample pixels in a face image that are representative of particular classes and then directs the image processing software to use these training sites as references for classifying all other pixels in the image. Training sites, also known as input classes, are selected based on the knowledge of the user. The user also specifies thresholds for how similar other pixels must be in order to be grouped together; these parameters are usually set based on the spectral characteristics of the training areas. The user further defines the number of classes into which the image is classified. In unsupervised classification, the results (groups of pixels with similar features) are based on the software's analysis of the image without the user providing sample classes. The computer uses clustering algorithms to decide which pixels are related and groups them into classes. The user can specify which algorithm the software will use and the desired number of output classes, but otherwise does not aid the classification process; however, the user needs knowledge of the face images being classified in order to interpret the results. Supervised classification can be carried out on an image/object basis or on a pixel basis. Image-based (object-based) classification considers the entire object or image at a time, while pixel-based classification operates on individual pixels. Pixel-based classification is simple in nature and is generally used for small datasets or indoor images; its results are easy to interpret and it produces consistent spectral constituents that represent the spectral information for long-term studies. The accuracy of pixel-based classification mainly depends on the selection of pixels, whereas the accuracy of image-based classification depends on image segmentation. A generic sketch of this pixel-based workflow is given below.
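As an illustration only (the paper itself uses the ENVI 4.8 tool; the array shapes, the ROI-mask format and all function names below are our assumptions), the pixel-based supervised workflow described above can be sketched in Python: training spectra are gathered from user-selected ROIs, and a per-pixel decision rule such as those in Sections 6.1-6.3 is applied to every spectrum.

```python
# Illustrative sketch of pixel-based supervised classification of a hyperspectral cube.
# Shapes, mask format and names are assumptions, not the paper's ENVI workflow.
import numpy as np

def build_training_set(cube, roi_masks):
    """cube: (H, W, B) hyperspectral image; roi_masks: dict class_name -> (H, W) boolean mask."""
    names = sorted(roi_masks)
    X = np.vstack([cube[roi_masks[n]] for n in names]).astype(float)   # training spectra (rows)
    y = np.concatenate([np.full(int(roi_masks[n].sum()), i) for i, n in enumerate(names)])
    return X, y, names

def classify_image(cube, classify_fn, train_X, train_y):
    """Apply a per-pixel rule (e.g. MLH, minimum distance or SAM) to every spectrum."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)                         # one spectrum per row
    labels = classify_fn(train_X, train_y, pixels)
    return labels.reshape(h, w)                                        # class index per pixel
```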


3. Hyperspectral Face Images

To extract the features for classification, a set of 45 bands is considered. The hyperspectral face database is obtained from CMU and covers the spectral range 610-1050 nm with a band spacing of 10 nm [21] [22]. The available portion of the CMU database contains images of fifty-four different subjects. A subgroup of subjects is available for several sessions, i.e. twenty-eight subjects for three sessions, twenty-two subjects for four sessions and sixteen subjects for five sessions.

Fig. 1. Training set of hyperspectral face images (610-1050 nm) [22]

4. Related Work

In recent years, owing to the wide interest of researchers in this area, HSI classification and FR have made great progress, and much more is yet to come. FR in HSI was pioneered by Pan et al. [19], who manually extracted spectral curves of the facial skin in the NIR region of the EMS; this work was later extended by including spatial information in addition to the spectral information. In the same context, Robila et al. [23] also developed spectral curves, but their assessment criterion was spectral angle measurement. Almost all of these researchers used special devices to develop the spectral curves. Di et al. [24] projected the hyperspectral image into a low-dimensional space using 2D-PCA for feature extraction, and the features were then compared using Euclidean distance. In more recent work, Liang et al. [25] applied a 3-dimensional texture descriptor to extract distinguishing patterns that incorporate spatial and spectral information as features. Uzair et al. [26] used the 3-dimensional discrete cosine transform, took the low-frequency coefficients as features and applied Partial Least Squares Regression (PLSR) for classification. More recently, Uzair et al. [27] extended their former study [26] and used 3-dimensional cubelets to extract spatio-spectral covariance features for FR. In this study, the hyperspectral image is fully exploited without losing information by performing feature extraction and classification at the pixel level. PCA is used for feature extraction by calculating eigenvalues for the 45 bands, as shown in Table 5. Then supervised classification algorithms are applied




without and with PCA. The various methods used by other researchers in this area are compared in Table 11.

5. Methodology

The methodology defines the sequence of steps performed to achieve the objectives. In this study the CMU hyperspectral face dataset is used for supervised classification.

Fig. 2. Supervised classification methodology (a) without and (b) with PCA

Supervised classification is performed for two cases. In the first case, classification is performed without PCA, as shown in Figure 2(a). In the second case, the dimensionality of the Hyperspectral Face Dataset (HFDS) is reduced by PCA and supervised classification is then performed, as shown in Figure 2(b). In both cases, three supervised classification algorithms (Minimum Distance, Maximum Likelihood and Spectral Angle Mapper) are applied; the Maximum Likelihood classifier achieves higher accuracy than the other classifiers, as discussed with Table 10.


6. Conventional Supervised Classification Approaches

Classification comprises a wide range of decision-theoretic methodologies for identifying an entire image or some parts of it. Generally, classification procedures are based on the hypothesis that the query image shows one or more features. Figures 2(a) and 2(b) present the systematic steps for the classification of the hyperspectral image dataset. As per the objectives and features of the HSI, ten classes are taken for classification, described in Table 1. Several classification methods are analyzed and compared using the training datasets (ROIs) to get maximum accuracy.

Table 1. ROI with different attributes

ROI Name     Color      Pixels  Polygons  Polylines  Points  Fill   Orientation  Space
Left Eye     Red        41      0/0       0/0        41      Solid  45           0.10
Right Eye    Green      44      0/0       0/0        44      Solid  45           0.10
Left Nose    Blue       37      0/0       0/0        37      Solid  45           0.10
Right Nose   Yellow     39      0/0       0/0        39      Solid  45           0.10
Upper Lips   Cyan       41      0/0       0/0        41      Solid  45           0.10
Lower Lips   Magenta    37      0/0       0/0        37      Solid  45           0.10
Left Ear     Maroon     44      0/0       0/0        44      Solid  45           0.10
Right Ear    Sea Green  42      0/0       0/0        42      Solid  45           0.10
Forehead     Purple     35      0/0       0/0        35      Solid  45           0.10
Hair         Coral      40      0/0       0/0        40      Solid  45           0.10

6.1 Maximum likelihood approach (MLH)

The Maximum Likelihood (MLH) classifier is a widely used supervised classification technique in which classification is done on a per-pixel basis. The likelihood of a pixel belonging to each of the candidate classes is calculated conditionally on the available features, and the pixel is assigned to the class with the highest likelihood. The likelihood L_m can be expressed in terms of the probability that a pixel belongs to class m:

L_m = T(m|X) = \frac{T(m)\, T(X|m)}{\sum_i T(i)\, T(X|i)}

where T(m) is the prior probability of class m and T(X|m) is the conditional probability (probability density) of observing X given class m. Assuming an n-dimensional normal distribution for each class,

L_m(X) = \frac{1}{(2\pi)^{n/2}\, |\Sigma_m|^{1/2}} \exp\left(-\frac{1}{2}(X - \mu_m)^t\, \Sigma_m^{-1}\, (X - \mu_m)\right)    (1)

where
n = number of bands
X = image data of n bands
L_m(X) = likelihood of X belonging to class m
\mu_m = mean vector of class m
\Sigma_m = variance-covariance matrix of class m

Before applying the MLH approach, the bands should be properly sampled so that the data are approximately normally distributed; otherwise this approach is not suitable for classification. A minimal sketch of this rule is given below.
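A minimal sketch of the maximum-likelihood rule of equation (1), with one multivariate Gaussian fitted per class; the regularization term and all names are our own illustrative choices, not the paper's ENVI implementation.

```python
# Illustrative per-class Gaussian maximum-likelihood classifier (equation (1)).
import numpy as np

def mlh_fit(train_X, train_y):
    params = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(Xc.shape[1])    # keep Sigma_m invertible
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def mlh_predict(params, X):
    # log L_m(X) = -0.5 * (n*log(2*pi) + log|Sigma_m| + (X - mu_m)^t Sigma_m^{-1} (X - mu_m))
    n = X.shape[1]
    classes, scores = [], []
    for c, (mu, inv_cov, logdet) in params.items():
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, inv_cov, d)                 # Mahalanobis distances
        classes.append(c)
        scores.append(-0.5 * (n * np.log(2 * np.pi) + logdet + maha))
    return np.asarray(classes)[np.argmax(np.vstack(scores), axis=0)]
```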

Fig. 3. Sequence of (a) image cube (45 bands), (b) ROI image and (c) classified face image using MLH




Table 2. Confusion matrix for ten classes using Maximum Likelihood (without PCA)
(Column abbreviations: LE = Left Eye, RE = Right Eye, LN = Left Nose, RN = Right Nose, UL = Upper Lips, LL = Lower Lips, LEa = Left Ear, REa = Right Ear, FH = Forehead, HA = Hair. Row totals correspond to the ROI pixel counts of Table 1.)

Class        LE   RE   LN   RN   UL   LL   LEa  REa  FH   HA   Total
Left Eye     40    1    0    0    0    0    0    0    0    0    41
Right Eye     5   39    0    0    0    0    0    0    0    0    44
Left Nose     0    0   36    1    0    0    0    0    0    0    37
Right Nose    0    0    0   39    0    0    0    0    0    0    39
Upper Lips    0    0    0    0   39    2    0    0    0    0    41
Lower Lips    0    0    0    0    0   37    0    0    0    0    37
Left Ear      0    0    0    0    0    0   43    1    0    0    44
Right Ear     0    0    0    0    0    0    0   42    0    0    42
Forehead      0    0    0    0    0    0    0    0   35    0    35
Hair          0    0    0    0    0    0    0    0    0   40    40
Total        45   40   36   40   39   39   43   43   35   40   400

6.2 Minimum Distance Approach (MDA)

Minimum Distance is a simple kind of supervised algorithm. In this approach, the mean vector is calculated for each class, the Euclidean distance from each pixel to each class mean vector is computed, and the pixel is assigned to the class whose mean is closest. A minimal sketch of this rule is given below.
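A minimal sketch of this minimum-distance rule (the names are illustrative assumptions); it plugs directly into the classify_image scaffold sketched in Section 2.

```python
# Illustrative minimum-distance-to-mean classifier.
import numpy as np

def minimum_distance_predict(train_X, train_y, X):
    classes = np.unique(train_y)
    means = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])   # class mean vectors
    dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)         # Euclidean distances
    return classes[dists.argmin(axis=1)]                                      # nearest mean wins
```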

Fig. 4. Sequence of (a) image cube (45 bands), (b) ROI image and (c) classified face image using MDA

Table 3. Confusion matrix for ten classes using Minimum Distance (without PCA)
(Column abbreviations as in Table 2.)

Class        LE   RE   LN   RN   UL   LL   LEa  REa  FH   HA   Total
Left Eye     35    6    0    0    0    0    0    0    0    0    41
Right Eye    10   34    0    0    0    0    0    0    0    0    44
Left Nose     0    0   33    4    0    0    0    0    0    0    37
Right Nose    0    0    2   37    0    0    0    0    0    0    39
Upper Lips    0    0    0    0   36    5    0    0    0    0    41
Lower Lips    0    0    0    0    3   34    0    0    0    0    37
Left Ear      0    0    0    0    0    0   40    4    0    0    44
Right Ear     0    0    0    0    0    0    2   40    0    0    42
Forehead      0    0    0    0    0    0    0    2   33    0    35
Hair          0    0    0    0    0    0    0    3    0   37    40
Total        45   40   35   41   39   39   42   49   33   37   400

6.3 Spectral Angle Mapper (SAM)

In this approach, every class is represented by a vector, and the angle between the class vector and the pixel feature vector is calculated. SAM computes the angle in spectral space between each pixel and a set of reference class spectra, and classifies the image on the basis of spectral similarity. For each pixel, SAM calculates the angle between the vector defined by the pixel values and each class vector; the smaller the spectral angle, the more similar the pixel is to the given class. A minimal sketch of this rule is given below.
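A minimal sketch of the SAM rule; representing each class by the mean spectrum of its training pixels is our assumption (reference spectra could equally come from a spectral library), and the names are illustrative.

```python
# Illustrative spectral angle mapper: smallest angle to a reference spectrum wins.
import numpy as np

def sam_predict(train_X, train_y, X):
    classes = np.unique(train_y)
    refs = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])    # reference spectra
    cosang = (X @ refs.T) / (np.linalg.norm(X, axis=1, keepdims=True)
                             * np.linalg.norm(refs, axis=1)[None, :] + 1e-12)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))                            # spectral angles
    return classes[angles.argmin(axis=1)]
```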


Fig. 5. Sequence of (a) image cube (45 bands), (b) ROI image and (c) classified face image using SAM

Table 4. Confusion matrix for ten classes using SAM (without PCA)
(Column abbreviations as in Table 2.)

Class        LE   RE   LN   RN   UL   LL   LEa  REa  FH   HA   Total
Left Eye     36    5    0    0    0    0    0    0    0    0    41
Right Eye     9   35    0    0    0    0    0    0    0    0    44
Left Nose     0    0   34    3    0    0    0    0    0    0    37
Right Nose    0    0    1   38    0    0    0    0    0    0    39
Upper Lips    0    0    0    0   37    4    0    0    0    0    41
Lower Lips    0    0    0    0    2   35    0    0    0    0    37
Left Ear      0    0    0    0    0    0   41    3    0    0    44
Right Ear     0    0    0    0    0    0    1   41    0    0    42
Forehead      0    0    0    0    0    0    0    1   34    0    35
Hair          0    0    0    0    0    0    0    2    0   38    40
Total        45   40   35   41   39   39   42   47   34   38   400

7. Supervised Classification Approaches with PCA

Principal Component Analysis (PCA) is a numerical method often used in signal processing for data dimension reduction. Attribute Vector (AV) extraction is a major stage in FR, and PCA is of great importance in achieving it: it operates on image matrices, and its transformation plays a central role in image AV extraction [17]. Consider a hyperspectral image set comprising a total of P images, each with a dimension of p × q picture elements. At this stage the major emphasis is on calculating the AV for verification of face images by computing eigenvalues and eigenvectors. Each pixel value represents the strength of that picture element, and each image is arranged along a row direction formed by concatenating its rows of pixels. PCA is a linear mapping that selects a new axis system for the hyperspectral image set such that the largest variance of the mapped image set occurs along the principal axis, i.e. the 1st Principal Component (PC), and so on; generally, the later PCs are removed because they carry less information. A sample HSI set with P images of q picture elements each can therefore be represented as a P × q matrix where every row corresponds to a single image. The PCs are calculated from the eigenvectors and eigenvalues of the corresponding Covariance Matrix (COVM). Generally, an HSI has a number of layers/bands, and the main focus is to analyze these images to check the relationship between the layers. Covariance expresses the association between these layers and is calculated between two dimensions W and Z. PCA first finds a normalized direction in the feature space along which the variance of W is maximized; it then selects another direction along which the remaining variance is maximized, and so on, until p directions have been selected; the resulting set yields the PCs.

Covariance(W, Z) = \frac{\sum_i (w_i - W_{avg})(z_i - Z_{avg})}{q - 1}    (2)

Generally, the covariance matrix captures the interdependence structure and is square. The eigenvector with the biggest eigenvalue gives the direction of largest variance and is taken as the 1st PC; similarly, the eigenvector with the second-biggest eigenvalue, corresponding to the next-largest variance, is taken as the 2nd PC, and so on. After calculating the eigenvectors of the covariance matrix, the next step is to arrange the AVs according to their eigenvalues in decreasing order. The AVs are then assembled into a matrix called the Attribute Vector Matrix. A minimal sketch of this procedure is given below.




From the plot shown in Fig. 6 it can be observed that the eigenvalues of the first and second bands are the largest; therefore most of the information is contained in band 1 and band 2. For further processing such as classification, the lower-ranked bands can be rejected because their low eigenvalues indicate that they carry very little information. This reduces the complexity of the hyperspectral face images, and only the bands containing meaningful information are processed further. In this study the first ten bands (PC1, PC2, ..., PC10) are retained on the basis of their eigenvalues. The eigenvalue for each band is depicted in Figure 6 and listed in Table 5. Table 6 describes the details of the Principal Components (PCs) used for classification after implementing PCA.

Fig. 6. Plot of eigenvalues for the corresponding bands in the hyperspectral face image set

Table 5. Attribute Vector Matrix with the eigenvalue of each band/layer

S.No  Band   Eigenvalue    S.No  Band   Eigenvalue
1     1.00   62.38         24    24.00  0.01
2     2.00   32.34         25    25.00  0.01
3     3.00   0.29          26    26.00  0.01
4     4.00   0.07          27    27.00  0.01
5     5.00   0.05          28    28.00  0.01
6     6.00   0.02          29    29.00  0.01
7     7.00   0.02          30    30.00  0.01
8     8.00   0.02          31    31.00  0.01
9     9.00   0.02          32    32.00  0.01
10    10.00  0.02          33    33.00  0.01
11    11.00  0.01          34    34.00  0.01
12    12.00  0.01          35    35.00  0.01
13    13.00  0.01          36    36.00  0.01
14    14.00  0.01          37    37.00  0.01
15    15.00  0.01          38    38.00  0.01
16    16.00  0.01          39    39.00  0.01
17    17.00  0.01          40    40.00  0.01
18    18.00  0.01          41    41.00  0.01
19    19.00  0.01          42    42.00  0.01
20    20.00  0.01          43    43.00  0.01
21    21.00  0.01          44    44.00  0.01
22    22.00  0.01          45    45.00  0.01
23    23.00  0.01


7.1 PCAMLH

Table 6. Details of the PCs used for classification after implementing PCA

S. No  Principal Component  Eigenvalue  Cumulative Value
1.     PC1                  62.38       62.38
2.     PC2                  32.34       94.72
3.     PC3                  0.29        95.01
4.     PC4                  0.07        95.08
5.     PC5                  0.05        95.13
6.     PC6                  0.02        95.15
7.     PC7                  0.02        95.17
8.     PC8                  0.02        95.19
9.     PC9                  0.02        95.21
10.    PC10                 0.02        95.23

First, the dimensionality of the hyperspectral face images is reduced from 45 bands to 10 bands using PCA; details of these 10 bands are given in Table 6. The ROI image is then constructed from this reduced dataset and MLH is applied for classification. The sequence of steps is shown in Fig. 7, and the classification accuracy is reported in the confusion matrix in Table 7. A short usage sketch of this PCA + MLH pipeline is given below.
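A hypothetical end-to-end usage of the pieces sketched earlier (pca_reduce, build_training_set, mlh_fit, mlh_predict and classify_image are all our own illustrative names, not the paper's ENVI workflow), assuming cube holds the 45-band face image and roi_masks the ROI masks of Table 1:

```python
# Illustrative PCAMLH pipeline: PCA band reduction followed by maximum likelihood.
reduced, eigvals = pca_reduce(cube, n_components=10)         # 45 bands -> 10 principal components
train_X, train_y, names = build_training_set(reduced, roi_masks)
params = mlh_fit(train_X, train_y)
label_map = classify_image(reduced, lambda tx, ty, X: mlh_predict(params, X),
                           train_X, train_y)                 # ten-class label map
```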

Fig. 7. Sequence of (a) image cube (10 bands), (b) ROI image and (c) classified face image using MLH

Table 7. Confusion matrix for ten classes using Maximum Likelihood (with PCA)
(Column abbreviations as in Table 2.)

Class        LE   RE   LN   RN   UL   LL   LEa  REa  FH   HA   Total
Left Eye     40    1    0    0    0    0    0    0    0    0    41
Right Eye     1   43    0    0    0    0    0    0    0    0    44
Left Nose     0    0   36    1    0    0    0    0    0    0    37
Right Nose    0    0    0   39    0    0    0    0    0    0    39
Upper Lips    0    0    0    0   40    1    0    0    0    0    41
Lower Lips    0    0    0    0    0   37    0    0    0    0    37
Left Ear      0    0    0    0    0    0   42    2    0    0    44
Right Ear     0    0    0    0    0    0    0   42    0    0    42
Forehead      0    0    0    0    0    0    0    0   35    0    35
Hair          0    0    0    0    0    0    0    1    0   39    40
Total        41   44   36   40   40   38   42   45   35   39   400

7.2 PCAMDA

Here the MDA supervised classification approach is applied to the reduced hyperspectral dataset. The sequence of steps is shown in Fig. 8 and the accuracy is represented by the confusion matrix shown in Table 8.

Fig. 8. Sequence of (a) image cube (10 bands), (b) ROI image and (c) classified face image using MDA




Table 8. Confusion matrix for ten classes using Minimum Distance (with PCA)
(Column abbreviations as in Table 2.)

Class        LE   RE   LN   RN   UL   LL   LEa  REa  FH   HA   Total
Left Eye     35    6    0    0    0    0    0    0    0    0    41
Right Eye     9   35    0    0    0    0    0    0    0    0    44
Left Nose     0    0   33    4    0    0    0    0    0    0    37
Right Nose    0    0    2   37    0    0    0    0    0    0    39
Upper Lips    0    0    0    0   36    5    0    0    0    0    41
Lower Lips    0    0    0    0    2   35    0    0    0    0    37
Left Ear      0    0    0    0    0    0   41    3    0    0    44
Right Ear     0    0    0    0    0    0    1   41    0    0    42
Forehead      0    0    0    0    0    0    0    1   34    0    35
Hair          0    0    0    0    0    0    0    2    0   38    40
Total        44   41   35   41   38   40   42   47   34   38   400

7.3 PCASAM

Here the classification is performed on the reduced hyperspectral face image (10 bands) using SAM. First the ROI image is created and then used for the classification. The sequence of steps is shown in Fig. 9 and the classification accuracy is represented by the confusion matrix shown in Table 9.

Fig. 9. Sequence of (a) image cube (10 bands), (b) ROI image and (c) classified face image using SAM

Table 9. Confusion matrix for ten classes using SAM (with PCA)
(Column abbreviations as in Table 2.)

Class        LE   RE   LN   RN   UL   LL   LEa  REa  FH   HA   Total
Left Eye     36    5    0    0    0    0    0    0    0    0    41
Right Eye     8   36    0    0    0    0    0    0    0    0    44
Left Nose     0    0   34    3    0    0    0    0    0    0    37
Right Nose    0    0    1   38    0    0    0    0    0    0    39
Upper Lips    0    0    0    0   37    4    0    0    0    0    41
Lower Lips    0    0    0    0    1   36    0    0    0    0    37
Left Ear      0    0    0    0    0    0   42    2    0    0    44
Right Ear     0    0    0    0    0    0    0   42    0    0    42
Forehead      0    0    0    0    0    0    0    0   35    0    35
Hair          0    0    0    0    0    0    0    1    0   39    40
Total        44   41   35   41   38   40   42   45   35   39   400

8. Results and Analysis

The proposed facial classification approach uses HSI as an alternative to traditional face images. The HSI data were obtained from CMU, Pittsburgh for research purposes only. These enormous image datasets strain the computing capacity of the computer; hence, to decrease the image size, PCA is used to extract the bands that carry meaningful information, and the remaining bands are not processed further. In this study, bands 1 to 10 have the largest eigenvalues while the remaining bands have very small ones, so only these bands are processed for classification. The classification of the HSI is performed using supervised classification algorithms and verified on the dataset with a spectral range from 610 to 1050 nm. The correct classification percentage of a classifier is calculated


by

CP = (Number of Correct Face Classifications / Total Faces) × 100    (3)

The false classification percentage may be calculated analogously to find the failure percentage of a classifier. Here, ten classes (Left Eye, Right Eye, Left Nose, Right Nose, Upper Lips, Lower Lips, Left Ear, Right Ear, Forehead and Hair) are considered for supervised classification. The classes are selected so that the maximum information related to the face image can be obtained. For classification, the Region of Interest (ROI) with its number of pixels/points and all other related attributes is shown in Table 1. The confusion matrix represents the overall accuracy of a classifier and is given for each approach. The classification accuracy basically depends on the number of matching pixels for the corresponding class, and it is always advisable to take the maximum number of pixels for classification in order to obtain the best accuracy assessment. In this study, the confusion matrices of the three classifiers are reported both without and with PCA. The Kappa Coefficient is a metric that compares an Observed Accuracy (OA) with an Expected Accuracy (EA); the calculation of OA and EA is integral to the understanding of the kappa statistic and can be defined directly from a confusion matrix. After performing the experiment, the Maximum Likelihood approach has the maximum classification accuracy of 98.25% (with PCA) with a Kappa Coefficient of 0.97, while without PCA it is 97.50%. A sketch of how these metrics follow from a confusion matrix is given after Table 11.

Table 10. Comparison of different supervised algorithms

                                       Without PCA                   With PCA
S. No.  Supervised Classifier          Accuracy (%)  Kappa Coeff.    Accuracy (%)  Kappa Coeff.
1.      Minimum Distance               89.75         0.88            91.25         0.90
2.      Maximum Likelihood             97.50         0.96            98.25         0.97
3.      Spectral Angle Mapper          92.25         0.91            93.75         0.92

Table 11. Accuracy assessment against other methods

S. No.  Bands Selection  Method                                              Classes/ROI  Spectral Range  Accuracy (%)
1.      All Bands        Spectral Angle [23]                                 6            400nm-900nm     38.1
2.      All Bands        2-Dimensional PCA [24]                              4            400nm-720nm     72.1
3.      All Bands        3-Dimensional Discrete Cosine Transformation [26]   -            450nm-100nm     88.6
4.      All Bands        Spectral Signature [19]                             5            700nm-1000nm    38.1
5.      All Bands        Collaborative Representation Classifier (CRC) [28]  -            450nm-1090nm    97.3
6.      Compressed HSI   PCAMLH (this study)                                 10           610nm-1050nm    98.25
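A sketch of how the overall accuracy of equation (3) and the kappa coefficient reported in Table 10 can be computed from a confusion matrix; the function name is ours, and cm stands for any of the matrices in Tables 2-4 and 7-9 with the total row and column removed.

```python
# Illustrative computation of overall accuracy and the kappa coefficient.
import numpy as np

def accuracy_and_kappa(cm):
    cm = np.asarray(cm, dtype=float)                                  # square matrix, no totals
    total = cm.sum()
    observed = np.trace(cm) / total                                   # observed accuracy (eq. (3) / 100)
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2     # chance (expected) agreement
    kappa = (observed - expected) / (1.0 - expected)
    return 100.0 * observed, kappa
```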

9. Conclusion and Future Scope

This paper presents typical methods of face image classification on an entirely different kind of face imagery, namely hyperspectral datasets. Hyperspectral datasets form data cubes of huge dimensions that can be effectively handled with PCA, which produces datasets of smaller size that are then processed for classification. PCA supports the identification of the most discriminative spectral components. The classification of the face images is performed with supervised classification approaches, and the best classification response for the hyperspectral face images is obtained with the Maximum Likelihood approach. In conclusion, hyperspectral face recognition is a technology of the future, where modest computational methods yield outstanding classification rates. This approach can further be extended to biological applications, medical applications and environment monitoring. In the present study, a pixel-based supervised classification has been implemented for hyperspectral face recognition. It is a natural phenomenon that two human beings cannot have identical biological characteristics. In order to differentiate two persons, whether they are twins or not, their skin spectral characteristics (Melanin, Hb, H2O, HbO2, etc.) can be extracted in the form of DN or reflectance values using image processing techniques. In the hyperspectral range of the electromagnetic spectrum, this DN value of skin tissue is called the skin spectrum or spectral curve. These spectra can play an important role in biometric traits. For future work, a digital spectral library of human facial skin tissues can be developed without the use of spectroscopic sensors and devices. This would make it possible to prepare an advanced and robust biometric authentication system based on image processing techniques rather than instruments in the hyperspectral range.


References

[1] Melgani, Farid (2004) "Classification of hyperspectral remote sensing images with support vector machines", IEEE Transactions on Geoscience and Remote Sensing 42(8): 1778-1790.
[2] Chang, Chein-I (2003) "Hyperspectral imaging: techniques for spectral detection and classification", Springer US (1).
[3] Thenkabail, Prasad S., Ronald B. Smith, and Eddy De Pauw (2000) "Hyperspectral vegetation indices and their relationships with agricultural crop characteristics", Remote Sensing of Environment 71(2): 158-182.
[4] Lee, Zhongping, et al. (1999) "Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization", Applied Optics 38(18): 3831-3843.
[5] Blackburn, George Alan (1998) "Quantifying chlorophylls and carotenoids at leaf and canopy scales: An evaluation of some hyperspectral approaches", Remote Sensing of Environment 66(3): 273-285.
[6] Dicker, D., J. Lerner, P. Van Belle, S. Barth, D. Guerry, et al. (2006) "Differentiation of normal skin and melanoma using high-resolution hyperspectral imaging", Cancer Biology & Therapy 5(8): 1033.
[7] Stamatas, Georgios N., et al. (2004) "Non-Invasive Measurements of Skin Pigmentation", Pigment Cell Research 17(6): 618-626.
[8] Ahonen, Timo (2006) "Face description with local binary patterns: Application to face recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence 28(12): 2037-2041.
[9] Li, Zhifeng (2009) "Nonparametric discriminant analysis for face recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence 31(4): 755-761.
[10] Kong, Seong G., et al. (2005) "Recent advances in visual and infrared face recognition—a review", Computer Vision and Image Understanding 97(1): 103-135.
[11] Etemad, Kamran (1997) "Discriminant analysis for recognition of human face images", International Conference on Audio- and Video-Based Biometric Person Authentication, LNCS (1206): 125-142.
[12] Gross, R., and J. Shi (2001) "Quo Vadis Face Recognition", Technical Report CMU-RI-TR-01-17, Robotics Institute, Carnegie Mellon University.
[13] Gross, R., I. Matthews, and S. Baker (2004) "Appearance-Based Face Recognition and Light-Fields", IEEE Transactions on Pattern Analysis and Machine Intelligence 26(4): 449-465.
[14] Hotta, Kazuhiro (2008) "Robust face recognition under partial occlusion based on support vector machine with local Gaussian summation kernel", Image and Vision Computing 26(11): 1490-1498.
[15] Kakadiaris, Ioannis A., Georgios Passalis, George Toderici, Mohammed N. Murtuza, and Theoharis (2006) "3D Face Recognition", In BMVC: 869-878.
[16] Bronstein, Alexander M. (2003) "Expression-invariant 3D face recognition", In Audio- and Video-Based Biometric Person Authentication, Springer Berlin Heidelberg: 62-70.
[17] Socolinsky, Diego A., Andrea Selinger, and Joshua D. Neuheisel (2003) "Face recognition with visible and thermal infrared imagery", Computer Vision and Image Understanding 91(1): 72-114.
[18] Kong, Seong G., Jingu Heo, Besma R. Abidi, Joonki Paik, and Mongi A. Abidi (2005) "Recent advances in visual and infrared face recognition—a review", Computer Vision and Image Understanding 97(1): 103-135.
[19] Pan, Zhihong, Glenn Healey, Manish Prasad, and Bruce Tromberg (2003) "Face Recognition in Hyperspectral Images", IEEE Transactions on Pattern Analysis and Machine Intelligence 25(12): 1552-1560.
[20] Arya, S., and N. Pratap (2015) "Future of Face Recognition: A Review", Procedia Computer Science (58): 578-585.
[21] Simone (2016) "On the usefulness of hyperspectral imaging for face recognition", Journal of Electronic Imaging 25(6).
[22] www.consortium.ri.cmu.edu/hsagree/index.cg
[23] Robila, Stefan A. (2008) "Toward hyperspectral face recognition", International Society for Optical Engineering (6812): 1-9.
[24] Di, Wei, Lei Zhang, David Zhang, and Quan Pan (2010) "Studies on Hyperspectral Face Recognition in Visible Spectrum With Feature Band Selection", IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans 40(6): 1354-1361.
[25] Liang, Jie, Jun Zhou, and Yongsheng Gao (2015) "3D local derivative pattern for hyperspectral face recognition", IEEE International Conference on Automatic Face and Gesture Recognition.
[26] Uzair, Muhammad, Arif Mahmood, and Ajmal Mian (2013) "Hyperspectral face recognition using 3D-DCT and partial least squares", in: British Machine Vision Conference: 57.1-57.10.
[27] Uzair, Muhammad, Arif Mahmood, and Ajmal Mian (2015) "Hyperspectral face recognition with spatiospectral information fusion and PLS regression", IEEE Transactions on Image Processing 24(3): 1127-1137.
[28] Chen, Guangyi, Changjun Li, and Wei Sun (2016) "Hyperspectral face recognition via feature extraction and CRC-based classifier", IET Image Processing 11(4): 266-272.
