An integrated authentication framework based on multi-tier biometrics

Int. J. Biometrics, Vol. 3, No. 1, 2011


An integrated authentication framework based on multi-tier biometrics

Prasanta Bhattacharya*, Praveen Ranjan Srivastava*, Abhilash Rajakoti and V. Vasanth Kumar

Computer Science and Information Systems Group, Birla Institute of Technology and Science (BITS), Pilani 333031, Rajasthan, India
Fax: 01596-244183
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
*Corresponding authors

Abstract: Biometric techniques exploit human physiological and behavioural traits for verification and identification purposes. Biometrics-powered systems make effective use of the face, fingerprint, iris, etc. as physiological metrics, and of signature, gait, posture and voice recognition as behavioural metrics for identification. This paper explains the inadequacy of uni-modal strategies and the pressing need for a multimodal system. The concerns over the cost effectiveness of current multimodal strategies are discussed and a novel tier-based solution is proposed. In this paper, we use a face-ear multimodal biometric framework to demonstrate the efficiency of the tier-based approach and to discuss the template storage aspects.

Keywords: multimodal; tier-based; ear biometrics; colour spaces; normalisation; 2D Gabor filter; EGT; PSO; template security.

Reference to this paper should be made as follows: Bhattacharya, P., Srivastava, P.R., Rajakoti, A. and Kumar, V.V. (2011) 'An integrated authentication framework based on multi-tier biometrics', Int. J. Biometrics, Vol. 3, No. 1, pp.13–39.

Biographical notes: Prasanta Bhattacharya is a Post-Graduate Student in the Computer Science and Information Systems Department at Birla Institute of Technology and Science. He is presently doing ME in Software Systems at BITS, Pilani, India. He specialises in the areas of multimodal biometrics and the associated areas of fusion strategies, performance prediction models and template security.

Praveen Ranjan Srivastava is working in the Computer Science and Information Systems Group at Birla Institute of Technology and Science (BITS), Pilani, India. He is currently doing research in the area of Software Testing. His research areas are software testing, quality assurance, testing effort, software release, test data generation, agent-oriented software testing, stopping testing, soft computing techniques and biometrics.

Copyright © 2011 Inderscience Enterprises Ltd.

Abhilash Rajakoti is a Post-Graduate Student in the Computer Science and Information Systems Department at Birla Institute of Technology and Science. He is presently doing ME in Software Systems at BITS, Pilani, India. He specialises in the areas of pattern recognition and internetworking technologies.

V. Vasanth Kumar is a Post-Graduate Student in the Computer Science and Information Systems Department at Birla Institute of Technology and Science. He is presently doing ME in Software Systems at BITS, Pilani, India. He specialises in the areas of advanced algorithms and analysis and data security.

1 Introduction

The use of biometrics has emerged as an essential solution to the increased security concerns accompanying the rapid advancement of data communication, mobility and networking. The processes used to establish evidence of identity are called identification and authentication, while the procedures involved in the grant of user access rights are summarised under the term authorisation. From a historical perspective, the process of authentication has been achieved (in prevalent security applications) through the use of multiple aspects, namely something we know, i.e., a password, PIN, mother's maiden name, etc., or something we possess, i.e., an id card, key, ring, etc. While these methods of authentication have proven acceptable for hundreds of years, the increasing importance of data security calls for more accurate methods of person verification. This is due, in part, to the fact that passwords can be compromised or merely guessed, and possessed tokens can be stolen, forged or tampered with. In addition, automated systems based on the above-mentioned methods are unable to determine whether the person supplying the token or information is, in fact, the same person that is enrolled in the system. In an attempt to solve this dilemma, many security companies are turning to biometrics. This is demonstrated by the fact that the market worth for biometrics in 2006 was $2 billion, which increased to $3.5 billion in 2008 and is projected to grow to $6 billion by 2010, up from approximately $58.4 million in 1999 (International Biometric Group, 2001). The biometrics industry has been growing exponentially, at over 33% year on year between 2000 and 2009. The use of biometrics in security creates a third method for the process of authentication: 'something that we are'. This brings the much-needed added dimension to personal security. The use of human physiological and behavioural traits to correctly identify a person is not a novel idea.
The present-day techniques use evidence based on fingerprint information, face and iris templates, as well as signature and voice recognition parameters (Jain et al., 2004). Though automated systems of ear recognition have not been commercially employed to date, the ear possesses some very evident advantages over the other prevalent biometric techniques, which have spurred increasing research in this area. First, the ear shows only minute changes over the lifetime of the individual, compared with the human face, which changes significantly in contour and skin texture over the span of life. Second, face changes due to cosmetics, spectacles and facial hair pose a more serious problem than ear occlusion. Third, the facial structure shows change with passing emotions
and expresses different moods like happiness, sadness, anger and pain. The ear, on the other hand, remains largely static in the face of changing psychological stimuli. The use of the human ear as a biometric recognition tool is not an entirely new revelation. The idea of utilising local characteristics of the human ear was initially proposed by the French criminologist A. Bertillon and later in the famous Iannarelli thesis (Iannarelli, 1989), which proposed the use of a non-automated recognition system based on a set of 12 measurements. Prior works in this area include those by Victor et al. (Pun and Moon, 2004) on Principal Component Analysis (PCA) of ear images, and a further improvement of the same by Chang et al. (2003). While Victor's experiments showed that the face performed better than the ear in most authentication cases, Chang proved that under controlled conditions, both the face and the ear have similar performance characteristics. Burger and Burger (1999) had proposed an approach based on neighbourhood graphs and Voronoi diagrams, but it suffered from 'false edge' occurrences in the graph diagram. The geometric method of feature extraction from human ear images was proposed by Choras (2005), but he was still unable to find a solution to the erroneous curve detection problem so common with such geometric extraction techniques. Other important works in this field include those by Moreno et al. (1999) on three neural network approaches, viz. Borda, Bayesian and weighted Bayesian networks; they reported a recognition rate of 93% for the best of the three approaches. Yuizono et al. (2002) implemented a recognition system based on a genetic search algorithm and reported promising results for a set of over 600 images (Victor et al., 2002). Bhanu and Chen (2003) presented a three-dimensional (3D) ear detection system using a local surface shape descriptor.
However, this method showed unsatisfactory results with noisy data and was computationally more intensive than PCA and the other previous techniques. A very significant improvement was demonstrated by Hurley et al. (2002) with their force field extraction technique, which considered the ear as a Gaussian well and mapped the path followed by test pixels placed at various points in its contour. This method showed remarkable robustness to noise, pose variations, scale and rotation. However, most current biometric systems are essentially uni-modal, implying that they depend on a single feature or strategy to authenticate a person. Such heightened reliance on a single parameter to judge the authenticity of a subject introduces a wide range of problems to the security system. First, noise or unavailability of sensor data is an unforeseen event. For instance, the finger of a subject may be scarred or there might be an excess of occlusion present on the face, leading to severe difficulties and possibly a failure in subject enrolment. In such cases, a multiplicity of biometric features is of valuable help in correctly authenticating the subject. Second, faulty user interactions and behavioural defects form a major issue in subject authentication nowadays. Usually, such errors fall under the category of subject registration and enrolment-based errors, wherein due to incorrect posture, unnecessary movement or gesture and improper lighting conditions, the sensory system fails to register the subject accurately, leading to either a false acceptance or a false rejection in the later stages of the authentication process. Third, we must remember that the amount of information represented by a biometric is limited and it is often the case that a high degree of inter-class similarity is noted amongst subjects with respect to that particular feature.
For instance, the number of differentiable feature patterns amongst candidates when using hand geometry and facial features is of the order of a mere 10^5 and 10^3, respectively (Golfarelli et al., 1997), which is not sufficient to form the basis of a
dependable and spoof-free security system. Fourth, the past few years have witnessed the emergence of yet another menace that has accompanied the growing popularity of biometric security: biometric spoofing. It has been proven by researchers worldwide that it is increasingly possible, if not trivial, to mimic conventional voice, signature as well as face and fingerprint-based systems. Finally and most importantly, it is conventional wisdom that combining a number of features in judging a person provides better and more conclusive evidence than a single feature. It is in this light that the advent of multimodal strategies has been appreciated and welcomed by the biometrics industry. Multimodal approaches necessarily make use of multiple independent sources of information or multiple independent processing strategies to authenticate the person (Ross and Jain, 2003). The variety of biometric features that can be combined to generate a secure template involves almost all known biometrics. However, the conventional multimodal strategies demonstrate some critical shortcomings and fall short of being an effective security provider. The motive of this paper is twofold: we demonstrate the effectiveness of the human ear as a secure biometric, and we propose a unique tier-based multimodal system that offsets most of the shortfalls of the present multimodal system. The benefit of a tier-based approach is established by performing an analysis of face-ear multimodal authentication using a global matching approach supported by colour analysis. Section 2 focuses on the various fusion techniques for conventional multimodal systems and the various issues concerning them. Section 3 discusses the proposed template generation approach. Section 4 explains the advantages to be gained from using the tier-based selective multimodal scheme that we propose in this paper.
Section 5 demonstrates the computational benefits obtained by employing a Particle Swarm Optimisation (PSO) powered heuristic score fusion schema for our system. In Section 6, a unique template storage mechanism for enhanced template security is put forward. Section 7 provides the various analysis results obtained using our approach. Finally, Section 8 discusses the issues arising from our proposed approach and its future implications.

2 Multimodal fusion strategies

The advantage garnered from the use of multimodal techniques in biometrics is due to the abundance of authentication data obtained from using a multiplicity of features, sensors or algorithms. Multimodal techniques fall into several genres, as the name suggests. First, there are single-trait, multiple-sensor systems: multiple sensors monitor and collect information about the same biometric trait, at times to varying degrees of accuracy. Kumar et al. (2003) described a hand-based verification system that combines the geometric features of the hand with the prints of the palm using a score-level fusion strategy. The various fusion strategies that may be employed are described later in this section. However, fusion at the score level (Duin and Tax, 2000; Lam and Suen, 1995) often proves more computationally advantageous than fusion at the sensor or the feature level. Second, we may employ a single-trait, single-sensor, multiple-unit policy wherein the subject and the feature are the same, but we use different instances of the feature. For example, a person may register 10 different fingerprints depending on which finger he or she wishes to register with. The same may be the case with the iris or retina,
wherein the user may get to choose which unit to register with. While such techniques may be increasingly useful for people having partial damage to any one unit, some applications specify exactly which unit to register, either for computational feasibility or for standardisation convenience, as with the human ear. Third, we may seek to record multiple biometric traits based on data collected by a number of different sensors varying in operating characteristics, cost and precision. The mutual independence of the sensor data essentially ensures the effectiveness of the multimodal approach. Brunelli and Falavigna (1995) used the face and voice traits of an individual for multimodal identification by employing a Hyper BF network to combine the normalised scores of five different classifiers operating on the voice and face feature sets. Bigun et al. (1997) developed a Bayesian framework to integrate the text-dependent speech data and face data of a user. Hong and Jain (1998) followed a methodology that involved associating different confidence measures with the matchers while integrating the face and fingerprint user data. The various fusion strategies roughly fall under two broad categories, namely the classification and the combination problem. Under the classification category, a feature vector is constructed (Verlinde and Cholet, 1999) by some function of the match scores of the individual matchers, and this feature vector then decides whether the score falls in the match category or not. On the other hand, under the combination problem, the matching scores are combined to generate a single figure of merit, which decides whether the user is granted access (Ben-Yacoub et al., 1999; Dieckmann et al., 1997). We proceed with the findings of Ross and Jain (2003), who have demonstrated that a simple sum rule can be sufficient to optimise the matching performance of a multimodal biometrics system.
We also incorporate weights to reflect the user's trust in the reliability of each individual biometric feature. Other fusion strategies at the decision level (post-matching strategies) include the majority voting technique (Zuev and Ivanon, 1996), the behaviour knowledge space method (Tax et al., 2000), the weighted voting technique (Xu et al., 1992) and AND/OR rules (Daugman, n.d.). In this paper, we employ a multimodal system involving the human face and the human (left) ear and demonstrate our tier-based strategy to optimise performance. We also demonstrate a novel weighted-score fusion technique employing a PSO-based optimisation approach. The exact description of the enhanced PSO approach that we employed, and the exploratory studies that we conducted using PSO, are dealt with in Section 5.
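The weighted sum rule discussed above is straightforward to sketch. The following is a minimal illustration, assuming min-max normalised scores; the weight and threshold values are illustrative placeholders, not the PSO-derived weights used later in the paper:

```python
def min_max_normalise(scores):
    """Map raw matcher scores to [0, 1] so different matchers are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(face_score, ear_score, w_face=0.6, w_ear=0.4):
    """Combine two normalised match scores into a single figure of merit."""
    assert abs(w_face + w_ear - 1.0) < 1e-9
    return w_face * face_score + w_ear * ear_score

# Illustrative normalised scores for one probe subject
fused = weighted_sum_fusion(0.82, 0.91)   # ≈ 0.856
accept = fused >= 0.75                    # threshold chosen by the operator
```

The combination problem then reduces to comparing the fused figure of merit against a single operating threshold.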

3 Proposed template generation systems

The overall system for our template generation is shown in Figure 1. Since our proposed methodology employs score-level fusion, all feature data must pass through the following steps. The ear sample may be acquired from the subject using a conventional CCD/CMOS camera, or an internal camera in the case of an integrated webcam. The captured image may then be stored in a standardised static database, which may be periodically referenced for analysis. The images are then separated into individual colour spaces before any pre-processing is performed. The powerful results of such colour-based discrimination have been demonstrated by Nanni and Lumini (2009b). The sample images in the target colour spaces are then filtered using a 2D Gabor filter to generate the Enhanced Gabor Templates (EGTs), and the output template is securely stored for further analysis and testing. Finally, the image templates are matched using the processed
histogram approach, and the Chi-square distance between the templates is noted and plotted in terms of the average match percentage. The match percentage obtained is then submitted to the operator, who fuses it with the companion scores to generate a single figure of merit for the biometric system. Various performance measures may be adopted to judge the efficiency of the biometric technique. The False Acceptance Rate (FAR) is the percentage of invalid users to whom the system incorrectly grants access. Conversely, the False Rejection Rate (FRR) is the percentage of valid users whom the system incorrectly prohibits from accessing.

Figure 1 Overall biometric system

3.1 Image scaling and standardisation

A very important precaution taken before any image pre-processing is the scaling of the sample RGB image into a fixed-size image, which is then represented in a suitable colour space, e.g., RGB or HSV. This is especially vital as all image processing in this project is performed using MATLAB (ver. 7.4.0), which treats every image as a matrix of fixed 2D size (with a third dimension representing the colour channels). The sample images in our case were resized to 76 × 185 and 180 × 200 for the human ear templates and face templates, respectively, using the imresize.m function in MATLAB. The images in the test database have also been converted to greyscale format to ease the computational complexity of the feature extraction algorithms, as shown in Figure 2.

Figure 2 Image scaling and greyscale conversion (see online version for colours)
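As a rough stand-in for the resizing step (MATLAB's imresize uses more sophisticated interpolation by default), a nearest-neighbour rescale can be sketched as follows; the toy image below is illustrative:

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbour resize of a greyscale image stored as a list of rows;
    a simplified stand-in for MATLAB's imresize."""
    old_h, old_w = len(image), len(image[0])
    return [[image[(r * old_h) // new_h][(c * old_w) // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

# e.g., downsizing a toy 4x4 image to 2x2
img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
small = resize_nearest(img, 2, 2)   # → [[1, 2], [3, 4]]
```

In the paper's setting the targets would be 185 × 76 pixel ear templates and 200 × 180 pixel face templates.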

An integrated authentication framework based on multi-tier biometrics

19

RGB images are converted to greyscale using the formula

    I(g) = 0.2989·I(R) + 0.5870·I(G) + 0.1140·I(B)    (1)

where
I(R): red component value of the image
I(G): green component value of the image
I(B): blue component value of the image
I(g): greyscale intensity level (0–255 on a linear scale).
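Equation (1) translates directly into code; the sample pixels below are illustrative:

```python
def to_grey(r, g, b):
    """Equation (1): luminance-weighted greyscale value of one RGB pixel."""
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

def rgb_to_greyscale(image):
    """Apply equation (1) to every pixel of an RGB image stored as a list of
    rows of (R, G, B) tuples, rounding to integer 0-255 intensity levels."""
    return [[round(to_grey(*px)) for px in row] for row in image]

grey = rgb_to_greyscale([[(255, 0, 0), (0, 255, 0), (0, 0, 255)]])
# → [[76, 150, 29]]
```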

3.2 Colour space separation

A colour model is an abstract mathematical model describing the representation of colour components in terms of a specified colour space. Colour space conversion (Nanni and Lumini, 2009b) is the translation of colour components from one basis to another. When the sample ear images are provided in the RGB colour space, we transform the RGB sample into the following 12 colour spaces, viz. 'YPbPr', 'YCbCr', 'YDbDr', 'JPEG-YCbCr', 'YIQ', 'YUV', 'HSV', 'HSL', 'XYZ', 'LAB', 'LUV' and 'LCH', as suggested by Nanni and Lumini (2009b). Then, the individual image components of each of the 13 spaces (the 12 above plus the original RGB) are submitted to the normalisation and pre-processing phases. A few sample conversions of a test image are shown in Figure 3.

Figure 3 Sample image in YUV, HSV, XYZ spaces (see online version for colours)

Thus, for a single ear sample, we obtain 1 × 13 × 3 (samples × number of colour spaces × number of components in each space) = 39 trainable samples for the global matcher. Later, we also demonstrate the relative flexibility of the Gabor filter with each of these colour space samples, thereby summarising the effectiveness of the colour space analysis.
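One such conversion (RGB to YUV) can be sketched as follows, using the standard BT.601 coefficients for illustration; splitting each converted image into its three component planes yields the per-component trainable samples mentioned above:

```python
def rgb_to_yuv(r, g, b):
    """RGB -> YUV with the standard BT.601 luma coefficients (illustrative;
    the paper converts into 12 spaces, of which YUV is one)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def split_components(image, convert=rgb_to_yuv):
    """Split an RGB image (rows of (R, G, B) tuples) into the three component
    planes of the target space -- one trainable sample each."""
    planes = [[], [], []]
    for row in image:
        converted = [convert(*px) for px in row]
        for i in range(3):
            planes[i].append([px[i] for px in converted])
    return planes

y_plane, u_plane, v_plane = split_components([[(255, 255, 255), (0, 0, 0)]])
```

Repeating this split over the 13 spaces (12 conversions plus RGB itself) gives the 39 samples per ear image.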

3.3 Contrast stretching and equalisation

Contrast stretching is a simple image enhancement technique that attempts to improve the contrast in an image by 'stretching' the range of intensity values it contains to span a desired range of values, e.g., the full range of pixel values that the image type concerned allows. It differs from the more sophisticated histogram equalisation in that it can only apply a linear scaling function to the image pixel values. This method usually increases
the global contrast of many images, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities can be better distributed on the histogram. This allows areas of lower local contrast to gain a higher contrast without affecting the global contrast. Histogram equalisation accomplishes this by effectively spreading out the most frequent intensity values. Frequently, an image is scanned in such a way that the resulting brightness values do not make full use of the available dynamic range. By stretching the histogram over the available dynamic range, we attempt to correct this situation. If the image ranges from brightness 0 to brightness 2^B − 1, for a brightness parameter B, then we can conventionally map the 0% value (or minimum) to 0 and the 100% value (or maximum) to 2^B − 1. The appropriate transformation function (Zuev and Ivanon, 1996) is then given by

    b[m, n] = (2^B − 1) · (a[m, n] − min(a)) / (max(a) − min(a))
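A sketch of this full-range linear stretch for a B-bit image, with illustrative pixel values:

```python
def contrast_stretch(image, bits=8):
    """Linear contrast stretch: map the minimum pixel value to 0 and the
    maximum to 2^bits - 1, per the transformation above."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        # Degenerate flat image: no dynamic range to stretch
        return [[0 for _ in row] for row in image]
    scale = (2 ** bits - 1) / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]

stretched = contrast_stretch([[50, 100], [150, 200]])
# → [[0, 85], [170, 255]]
```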
80% in un-pre-processed templates, to about 50% in normalised unfiltered templates, and finally to about 0% in EGTs. However, the exact threshold values must be set after consulting the client regarding the desired level of security. The use of EGT also optimises the time and resource consumption of the system, as shown in the prior discussions. Figure 13 demonstrates the time consumed in performing the various matching operations and matching the EGTs as per Table 1, and Table 4 displays the time consumed in template matching with and without the usage of EGT. The computational advantage obtained from migrating to EGT is evident, as a time reduction of an order of magnitude was noted; the same is illustrated by the results shown in Table 4.
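The threshold trade-off can be made concrete with a small FAR/FRR computation; the genuine and impostor score lists below are invented for illustration, not measured data:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor scores wrongly accepted (>= threshold);
    FRR: fraction of genuine scores wrongly rejected (< threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Illustrative match-percentage scores on a 0-1 scale
genuine = [0.91, 0.88, 0.95, 0.79, 0.85]
impostor = [0.32, 0.44, 0.51, 0.28, 0.61]
far, frr = far_frr(genuine, impostor, threshold=0.80)
# → far = 0.0, frr = 0.2
```

Sweeping the threshold over such score distributions is how the operating point for a desired security level would be chosen.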

Figure 13 Template matching time for EGT (see online version for colours)

Table 4 Time consumption data using EGT and raw templates

Sample ID    Raw templates (s)    EGT (s)      % Decrease in time cost
1,2          0.015753             0.000408     97.41
3,4          0.000997             0.000324     67.50
5,6          0.004717             0.000320     93.22
7,8          0.000916             0.000437     52.29
9,10         0.000921             0.000379     58.84
11,12        0.001030             0.000331     67.86
13,14        0.001161             0.000322     72.27
15,16        0.001030             0.000331     67.86
17,18        0.001233             0.000331     73.15
19,20        0.001480             0.000325     78.04
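The final column of Table 4 can be verified directly from the raw and EGT timings:

```python
def pct_decrease(raw, egt):
    """Percentage reduction in matching time when moving from raw templates
    to EGTs, rounded to two decimals as in Table 4."""
    return round(100 * (raw - egt) / raw, 2)

# Spot-check a few rows of Table 4
assert pct_decrease(0.015753, 0.000408) == 97.41
assert pct_decrease(0.004717, 0.000320) == 93.22
assert pct_decrease(0.001480, 0.000325) == 78.04
```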

The time benefits obtained from using EGT are well illustrated by Table 4. The score fusion problem was dealt with using the Particle Swarm approach explained in earlier sections. We performed an investigatory study of the PSO technique by varying the environmental forces acting on the model, in the form of the learning factors c1 and c2 and the inertia force. This study was made with the intention of discovering trends and optimal values of the factors as applicable in the given scenario. The results of the study are presented in Figure 14, which shows the relationship between convergence rate and learning factors, and Figure 15, which shows the relationship between convergence rate and inertia factor. From this trend analysis, we can conclude that while the learning factors should ideally range from 1 to 3, the optimum values are obtained in the neighbourhood of 2. Moreover, the inertia factor increases steadily from 0 onwards and shows a firing point of around 1.3–1.5, after which the convergence runs out of bounds, i.e., the simulation fails to converge. Thus, for most PSO implementations, learning factors of c1 = c2 = 2 and inertia = 1 are chosen as the optimal environmental factors.

A number of subjects spanning Data Set 2 were analysed to exploit the advantages of using a tier-based approach. A few 'easy' subjects were selected from Data Set 2 for initial tests, followed by a few 'tough' subjects showing increased occlusion, pose variation and shadowing. With these latter subjects, the match curves for the human face showed degrading performance. With such sub-optimal and declining matching rates, it becomes increasingly difficult to make a decision on subject acceptance or rejection using any single algorithm or policy. This is where an extra tier of security is needed, i.e., the case for multimodality. Under such circumstances, we proceeded to add one more tier of security to the face tier in the form of the human ear.

Figure 14 Trend analysis of convergence vs. learning factors (c1, c2) (see online version for colours)

Figure 15 Trend analysis of convergence vs. inertia factors (see online version for colours)
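A minimal PSO loop reflecting these settings (c1 = c2 = 2, inertia near 1) can be sketched as follows; the toy sphere objective stands in for the actual score fusion objective, which is not reproduced here:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, c1=2.0, c2=2.0, inertia=1.0):
    """Minimal PSO: each particle tracks its personal best; the swarm tracks
    a global best. Defaults follow the trend analysis above (c1 = c2 = 2,
    inertia = 1); search bounds and seed are illustrative."""
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(lambda p: sum(x * x for x in p), dim=2)  # minimise a toy sphere function
```

In the fusion setting, the particle position would encode the matcher weights and the objective would score the fused decision quality.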

It was observed, much to our delight, that while face recognition showed increased performance degradation with our technique, performance ramped up rapidly with the ear templates of the same person. The results of the analysis are displayed in Figure 16. It is interesting to observe that using the EGT technique, samples with match
percentages above the green line and below the blue curve can be accepted and rejected, respectively, even with a single biometric approach.

Figure 16 Comparative performances of face and ear for Data Set 2 (see online version for colours)

For samples falling within the two curves, a multimodal approach is required, followed by a score fusion process. However, as one can observe, the area between the curves is negligible (due to the effectiveness of EGT) when compared with the total graph area; thus the idea of a tier-based application is further reinforced, and the usefulness of a tier-based mixed selection strategy is well established. Figure 16 demonstrates the fruits of a tier-based approach well. Thus, from the above analysis results, we can safely conclude the following. First, the use of Gabor templates (EGTs) leads to suitable robustness and resilience in the face of occlusion. Second, a highly stable match performance is obtained with EGT as opposed to raw or normalised templates. Third, the use of EGT leads to highly optimised time and resource usage when compared with unfiltered processing approaches. Lastly, a tier-based implementation proves to be of much more benefit than a pure multimodal strategy: the freedom to add an extra security tier as and when needed, rather than as a rigid must-have component, leads to optimal performance of the security application. The implementation of a unique stable storage policy for the biometric template was also demonstrated; the template conversion and regeneration was performed in minimal time. Moreover, with the usage of a highly optimised stochastic technique like PSO, the score fusion problem was also solved in the minimum possible time. In this fashion, a complete and integrated biometrics solution was proposed.
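The tier-based decision logic described above can be sketched as follows; the accept/reject thresholds and fusion weights are illustrative placeholders, not the values derived in the paper:

```python
def tier_decision(face_match, accept_line=0.80, reject_curve=0.40):
    """Tier-1 decision on the face match percentage: clear accept above the
    upper (green-line) threshold, clear reject below the lower (blue-curve)
    threshold, and escalation to the ear tier only for the band in between."""
    if face_match >= accept_line:
        return 'accept'
    if face_match <= reject_curve:
        return 'reject'
    return 'escalate to ear tier'

def authenticate(face_match, ear_match=None, fused_threshold=0.60,
                 w_face=0.5, w_ear=0.5):
    """Full tier-based flow: invoke the second (ear) tier and score fusion
    only when the face tier alone is inconclusive."""
    decision = tier_decision(face_match)
    if decision != 'escalate to ear tier':
        return decision
    fused = w_face * face_match + w_ear * ear_match
    return 'accept' if fused >= fused_threshold else 'reject'

authenticate(0.92)                   # → 'accept' (face tier alone suffices)
authenticate(0.55, ear_match=0.90)   # → 'accept' (fused 0.725 >= 0.60)
```

The point of the design is that most subjects resolve at tier 1, so the cost of the second biometric is paid only for the narrow band of inconclusive cases.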

8 Conclusion and future implications

The proposed biometric system has significant advantages over the prior techniques employed in this area. The problem of occlusion, noise and shadowing is totally eliminated with the use of colour analysis and the application of Gabor filters.


The biometric template conversion to an audio template can be further optimised with respect to space and time overhead. This added attention to security could make this implementation suitable for effective usage and commercialisation. Prediction models can be implemented to a much wider extent to prevent sub-optimal fusion of biometric features. Privacy issues form a major concern for biometrics experts nowadays, inhibiting the widespread deployment of such techniques. The development of special laws and government policies must accompany the development of such pervasive technologies to ensure the privacy and protection of citizen information. Yet another threatening menace is biometric spoofing, wherein an imposter uses prosthetic organs or parts to mimic an actual biometric feature. While emerging technologies make use of liveness detection methodologies to verify the genuineness of the subject, other off-the-shelf technologies may be mobilised in future security implementations to enhance the flexibility and security of such biometric applications.

Acknowledgements We thank the anonymous referees and editors for their suggestions, which helped to improve the quality of this paper. We are also thankful to the faculty and management of BITS, Pilani, for creating an atmosphere conducive to quality research and development.

References

Ben-Yacoub, S., Abdeljaoued, S. and Mayoraz, E. (1999) 'Fusion of face and speech data for person identity verification', IEEE Trans. on Neural Networks, Vol. 10, pp.1065–1074.
Bhanu, B. and Chen, H. (2003) 'Human ear recognition in 3D', Workshop on Multimodal User Authentication, Santa Barbara, pp.91–98.
Bigun, E., Bigun, J., Duc, B. and Fischer, S. (1997) 'Expert conciliation for multimodal person authentication systems using Bayesian statistics', Proceedings of First International Conference on AVBPA, Crans-Montana, Switzerland, pp.291–300.
Brunelli, R. and Falavigna, D. (1995) 'Person identification using multiple cues', IEEE Transactions on PAMI, Vol. 12, pp.955–966.
Burger, M. and Burger, W. (1999) 'Ear biometrics', Biometrics: Personal Identification in Networked Society, Kluwer Academic, pp.273–286.
Chang, K., Bowyer, K. and Barnabas, V. (2003) 'Comparison and combination of ear and face images in appearance-based biometrics', IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 25, pp.1160–1165.
Choras, M. (2005) 'Ear biometrics based on geometrical feature extraction', Electronic Letters on Computer Vision and Image Analysis (ELCVIA), Vol. 5, No. 3, pp.84–95.
Daugman, J. (n.d.) Combining Multiple Biometrics, http://www.cl.cam.ac.uk/users/jgd1000/combine/
Dieckmann, U., Plankensteiner, P. and Wagner, T. (1997) 'SESAM: a biometric person identification system using sensor fusion', Pattern Recognition Letters, Vol. 18, No. 9, pp.827–833.
Dong, P., Brankov, J.G., Galatsanos, N.P., Yang, Y. and Davoine, F. (2005) 'Digital watermarking robust to geometric distortions', IEEE Trans. Image Processing, Vol. 14, No. 12, pp.2140–2150.
Duin, R.P.W. and Tax, D.M.J. (2000) 'Experiments with classifier combining rules', Proceedings of 1st Workshop on Multiple Classifier Systems, LNCS 1857, Springer, Cagliari, Italy, pp.16–29.
Eberhart, R.C. and Kennedy, J. (1995) 'A new optimizer using particle swarm theory', Proceedings of the Sixth International Symposium on Micro-machine and Human Science, Nagoya, Japan, pp.39–43.
Golfarelli, M., Maio, D. and Maltoni, D. (1997) 'On the error-reject tradeoff in biometric verification systems', IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 19, pp.786–796.
Hong, L. and Jain, A.K. (1998) 'Integrating faces and fingerprints for personal identification', IEEE Transactions on PAMI, Vol. 20, pp.1295–1307.
Hurley, D., Nixon, M. and Carter, J. (2002) 'Force-field energy functionals for image feature extraction', Image and Vision Computing Journal, Vol. 20, pp.429–432.
Iannarelli, A. (1989) Ear Identification, Forensic Identification Series, Paramont Publishing Company, Fremont, California.
Jain, A.K., Ross, A. and Prabhakar, S. (2004) 'An introduction to biometric recognition', IEEE Trans. on Circuits and Systems for Video Technology, Vol. 14, pp.4–20.
Kennedy, J. and Eberhart, R.C. (1995) 'Particle swarm optimization', Proceedings of IEEE International Conference on Neural Networks, Piscataway, NJ, pp.1942–1948.
Kumar, A., Wong, D.C.M., Shen, H.C. and Jain, A.K. (2003) 'Personal verification using palmprint and hand geometry biometric', Proceedings of 4th Int'l Conf. on Audio- and Video-based Biometric Person Authentication (AVBPA), Guildford, UK, pp.668–678.
Lam, L. and Suen, C.Y. (1995) 'Optimal combination of pattern classifiers', Pattern Recognition Letters, Vol. 16, No. 9, pp.945–954.
Moreno, B., Sanchez, A. and Velez, J. (1999) 'On the use of outer ear images for personal identification in security applications', IEEE International Carnahan Conference on Security Technology, pp.469–476.
Nanni, L. and Lumini, A. (2009a) 'On selecting Gabor features for biometric authentication', International Journal of Computer Applications in Technology, Vol. 35, No. 1, pp.23–28.
Nanni, L. and Lumini, A. (2009b) 'Fusion of color spaces for ear authentication', Pattern Recognition, Vol. 42, No. 9, pp.1906–1913.
Pun, K. and Moon, Y. (2004) 'Recent advances in ear biometrics', Proceedings of the Sixth International Conference on Automatic Face and Gesture Recognition, Seoul, South Korea, pp.164–169.
Ross, A. and Jain, A.K. (2003) 'Information fusion in biometrics', Pattern Recognition Letters, Vol. 24, pp.2115–2125.
Tax, D.M.J., Breukelen, M.V., Duin, R.P.W. and Kittler, J. (2000) 'Combining multiple classifiers by averaging or by multiplying', Pattern Recognition, Vol. 33, pp.1475–1485.
Verlinde, P. and Cholet, G. (1999) 'Comparing decision fusion paradigms using k-NN based classifiers, decision trees and logistic regression in a multi-modal identity verification application', Proceedings of 2nd Int'l Conf. on Audio- and Video-based Person Authentication, Washington DC, USA, pp.188–193.
Victor, B., Bowyer, K. and Sarkar, S. (2002) 'An evaluation of face and ear biometrics', 16th International Conference on Pattern Recognition, Quebec City, QC, Canada, pp.429–432.
Wang, R. and Bhanu, B. (2006) 'Performance prediction for multimodal biometrics', 18th International Conference on Pattern Recognition, Hong Kong, Vol. 3, pp.586–589.
Xu, L., Krzyzak, A. and Suen, C. (1992) 'Methods of combining multiple classifiers and their applications to handwriting recognition', IEEE Trans. on Systems, Man and Cybernetics, Vol. 22, No. 3, pp.418–435.
Yuizono, T., Wang, Y., Satoh, K. and Nakayama, S. (2002) 'Study on individual recognition for ear images by using genetic local search', Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, pp.237–242.
Zuev, Y. and Ivanon, S. (1996) 'The voting as a way to increase the decision reliability', Foundations of Information/Decision Fusion with Applications to Engineering Problems, Washington DC, USA, pp.206–210.
