Automatic Fingerprint Verification Using Neural Networks Anna Ceguerra and Irena Koprinska School of Information Technologies, University of Sydney, Sydney, Australia {anna, irena}@it.usyd.edu.au

Abstract. This paper presents an application of a Learning Vector Quantization (LVQ) neural network (NN) to Automatic Fingerprint Verification (AFV). The new approach is based on both local (minutiae) and global (shape signature) image features. The matched minutiae are used as a reference axis for generating shape signatures, which are then digitised to form a feature vector describing the fingerprint. An LVQ NN is trained to match the fingerprints using the difference of a pair of feature vectors. The results show that the integrated system significantly outperforms the minutiae-based system alone in terms of classification accuracy. They also confirm the ability of the trained NN to perform consistently on unseen databases.

1 Introduction Fingerprint verification involves matching two fingerprint images in order to verify a person's claimed identity. The most popular approaches match local features, such as minutiae, using point-pattern matching [3,8], graph matching [2], or structural matching [10]. These methods require extensive preprocessing to reliably extract the minutiae and are also very sensitive to noise introduced during image acquisition and feature extraction. Global recognition approaches, on the other hand, match features characterizing the entire image, typically extracted by filtering or transform operations [9]. They require less preprocessing than the minutiae-based approaches, but are only effective when the representation is invariant to translation, rotation and scale. Invariance is usually achieved by registering the images with respect to a reference axis [4] that can be consistently detected across different instances of the fingerprint. Singular points have been used as reference points, but their detection is not precise enough for matching. To overcome these limitations, this paper presents an approach that integrates local and global features and uses a NN for the final recognition. The matched local features serve as the reference axis for generating the global features. In our specific implementation, minutiae and shape signatures are combined. Minutiae are first matched by point-pattern matching. Shape signatures are then generated using the matched minutiae as their frame of reference and are digitised to form a feature vector. Finally, an LVQ NN is trained to distinguish between matching and non-matching fingerprints based on the difference of a pair of feature vectors.

2 AFV System The AFV system consists of three modules: a minutiae-based module, a shape-signature module and a neural network module.

2.1 Minutiae-Based Module Pre-processing and Minutiae Extraction. To remove noise and enhance the fingerprint ridge pattern, the images were first pre-processed using Gabor filters. Segmentation then limited the image to the areas that contain ridge information. A black-and-white image was created from the grey-scale image by binarisation. Finally, the binarised image was skeletonised: ridges with a width of multiple pixels were reduced to ridges of single-pixel width. Minutiae were detected by examining the neighbourhood around each pixel of the thinned ridges [3]. The position of a pixel was recorded as a ridge termination if its neighbourhood contained only one other ridge pixel, or as a ridge bifurcation if it contained three other ridge pixels. Minutiae Matching. Following [8], each minutia pi is represented in terms of the counterclockwise sequence of its adjacent minutiae {pik : k = 1, …, K}, where two minutiae are defined to be adjacent if their Voronoi cells share a border. The position of the adjacent minutia pik is expressed in terms of the minutia pi by the distance dk between pik and pi, and the polar angle αk between pik and pik−1 with respect to pi. The respective distances and angles are combined and normalised to form a sequence of complex numbers, and the Discrete Fourier Transform (DFT) is applied:

f(pi) = DFT{uk : k = 1, …, K},   uk = dk exp(jαk) / ∑l=1..K dl        (1)
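As a sketch, eq. (1) can be computed directly; the distances dk and polar angles αk of the adjacent minutiae are assumed to be given (the Voronoi adjacency computation is omitted):

```python
import cmath

def minutia_descriptor(dists, angles):
    """DFT-based descriptor f(pi) of eq. (1) for a single minutia.

    dists[k]  -- distance d_k to the k-th adjacent minutia
    angles[k] -- polar angle alpha_k between adjacent minutiae k and k-1
    """
    K = len(dists)
    total = sum(dists)  # normalisation term: sum of d_l over l = 1..K
    u = [d * cmath.exp(1j * a) / total for d, a in zip(dists, angles)]
    # Plain O(K^2) DFT; K (the number of Voronoi neighbours) is small.
    return [sum(u[k] * cmath.exp(-2j * cmath.pi * n * k / K)
                for k in range(K))
            for n in range(K)]
```

Because uk is normalised by the sum of the distances, the descriptor is invariant to the scale of the fingerprint image, which is the point of the representation in [8].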

The matching algorithm in [8] was also implemented. It uses a function that quantifies the similarity between two minutiae pm and qo. Our implementation extends this by incorporating a check of the minutiae type (termination or bifurcation):

s(pm, qo) = 0, if dim{f(pm)} ≠ dim{f(qo)} or type(pm) ≠ type(qo)
s(pm, qo) = exp(−‖f(pm) − f(qo)‖² / σ), otherwise        (2)
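A direct transcription of eq. (2); the default σ = 0.1 matches the value used later in the experiments (Sec. 3.2):

```python
import math

def similarity(f_p, f_q, type_p, type_q, sigma=0.1):
    """Similarity s(pm, qo) of two minutiae from their descriptors, eq. (2)."""
    # Descriptors of different length, or minutiae of different type
    # (termination vs. bifurcation), cannot match.
    if len(f_p) != len(f_q) or type_p != type_q:
        return 0.0
    # Squared Euclidean distance between the complex descriptor vectors.
    d2 = sum(abs(a - b) ** 2 for a, b in zip(f_p, f_q))
    return math.exp(-d2 / sigma)
```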

The set C of matching pairs of minutiae is found by determining, for each minutia pm in one fingerprint, the best matching minutia qo in the other fingerprint. A further constraint for a pair to be in C is that the two minutiae must be sufficiently similar, that is, their similarity value must exceed a threshold T. 2.2 Shape Signature Module Reference Axis Generation. Each matched pair of minutiae (pm, qo) forms a reference point that corresponds between the two fingerprints. Another pair of matched minutiae (pmj, qok) is obtained by finding correspondences between the adjacent minutiae of pm and qo, and choosing the closest adjacent minutia pmj together with its match qok. The two points within each image form the reference line. In this way, a matching pair of reference axes (refpm, refqo) is found. Shape Signature Generation. The shape signature, or turning function, represents a two-dimensional boundary as a one-dimensional function [7]. This representation is generated in terms of the reference axis refpm of the corresponding fingerprint. For each pixel lu on the thinned ridges, the distance ru from the reference point, the clockwise angle θu from the reference line, and the average tangent angle tu are calculated. To calculate tu, the ridge pixels {luk : k = 1, …, K} in the neighbourhood of lu are first found. The tangent angle tuk made by each of the eight neighbouring pixels with respect to lu is calculated as

tuk = (arg[(ru − ruk) + j(θu − θuk)]) mod π        (3)
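Eq. (3) and the subsequent averaging can be sketched as follows; the neighbour coordinates (ruk, θuk) are assumed to have been found already:

```python
import cmath
import math

def tangent_angle(r_u, th_u, r_uk, th_uk):
    """Tangent angle t_uk of one neighbouring ridge pixel, eq. (3)."""
    # arg[] of the complex difference, folded into [0, pi) by the modulus.
    return cmath.phase(complex(r_u - r_uk, th_u - th_uk)) % math.pi

def average_tangent(r_u, th_u, neighbours):
    """Average tangent angle t_u over the neighbouring ridge pixels of l_u."""
    ts = [tangent_angle(r_u, th_u, r, th) for r, th in neighbours]
    return sum(ts) / len(ts)
```

The modulus by π makes the tangent direction independent of which way along the ridge the neighbour lies, so opposite neighbours contribute the same angle.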

The average tangent angle tu is the average of the set of tangent angles {tuk : k = 1, …, K} made by the neighbourhood of lu. 2.3 Neural Network Module Feature Vector Generation. An n-dimensional feature vector is calculated by digitising the shape signature. Digitisation compresses the information in the shape signature while preserving its general shape as the clockwise angle changes; it uses a hybrid of the compression approaches presented in [4,6,1]. The entire set of ridge pixels {lu : u = 1, …, M} is divided into n subsets, where lu belongs to the ith subset if θu is within the range [2π(i−1)/n, 2πi/n). The average of all tu in the ith subset is then calculated, forming the ith entry of the n-dimensional feature vector. Figure 1 is a visual summary of this step.
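A sketch of the digitisation step; the handling of empty sectors (filling with 0) is an assumption, since the paper does not specify it:

```python
import math

def digitise(signature, n):
    """Digitise a shape signature into an n-dimensional feature vector.

    signature -- (theta_u, t_u) pairs for all thinned-ridge pixels; pixels
    are binned by the clockwise angle theta_u into n sectors of width
    2*pi/n, and the tangent angles t_u within each sector are averaged.
    """
    sums, counts = [0.0] * n, [0] * n
    for theta, t in signature:
        i = min(int((theta % (2 * math.pi)) / (2 * math.pi / n)), n - 1)
        sums[i] += t
        counts[i] += 1
    # Empty sectors are filled with 0 (an assumption; not specified in the text).
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```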

Fig. 1. Generating the feature vector: a fingerprint with the reference axis (top), its shape signature (middle), its digitised shape signature (bottom), with the direction of each feature

Determining the Similarity. Once the feature vectors for both fingerprints are generated, the difference between the two vectors captures their similarity. This difference is the input to an LVQ NN [5] trained to distinguish between a match and a non-match. LVQ is a supervised competitive algorithm. Given a set of pre-classified feature vectors (training examples), it creates a few prototypes for each class, adjusts their positions by learning, and then classifies unseen examples by the nearest-neighbour principle.
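A minimal LVQ1 sketch of this step; the prototype count, learning rate and initialisation below are illustrative choices, not the paper's settings:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def lvq1_train(data, labels, protos_per_class=2, lr=0.1, epochs=50, seed=0):
    """Train LVQ1 prototypes for classes 0 (non-match) and 1 (match)."""
    rng = random.Random(seed)
    # Initialise prototypes from randomly chosen training examples per class.
    protos = []
    for c in (0, 1):
        idx = [i for i, y in enumerate(labels) if y == c]
        for i in rng.sample(idx, min(protos_per_class, len(idx))):
            protos.append((list(data[i]), c))
    for _ in range(epochs):
        for x, y in zip(data, labels):
            w, c = min(protos, key=lambda p: dist2(p[0], x))
            sign = 1.0 if c == y else -1.0  # attract if correct, else repel
            for j in range(len(w)):
                w[j] += sign * lr * (x[j] - w[j])
    return protos

def lvq_classify(protos, x):
    """Nearest-prototype decision: 1 for match, 0 for non-match."""
    return min(protos, key=lambda p: dist2(p[0], x))[1]
```

The input x here would be the difference of the two fingerprints' feature vectors, and the two classes correspond to the network's two output neurons.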

3 Experimental Results The goal of the experiments was two-fold: a) to find the best neural network architecture for the combined minutiae/shape-signature and NN system (AFV-combined), and b) to compare the combined system with the minutiae-based module (AFV-local) alone. 3.1 Data and Performance Measures Fingerprint Databases. Three databases were used for these experiments: 'hi-optical', 'synthetic' and 'db-ipl'. The first comprises Set B of DB3, which was captured with an optical scanner, and the second is Set B of the synthetically generated database (DB4), both from the FVC2000 competition [11]. Each contains 10 distinct fingers with 8 impressions per finger, giving two databases of 80 fingerprint images each, with an image size of 448×478 pixels. Each of these databases yields 3160 possible pairings, excluding self-pairings, of which 280 (8.86%) are matching pairs. The images for the last database were captured by the Image Processing Laboratory, University of Trieste, Italy. It contains 16 distinct fingers with 8 impressions each, giving 448 (5.51%) matching pairs out of 8128 possible pairings; its images have a size of 240×320 pixels. All images have a resolution of 500 dpi. Performance Measures. Five measures were used to evaluate the classification abilities of the AFV system (Table 1). In addition to the most commonly used Accuracy, Recall (True Acceptance Rate, TAR) and Specificity (True Rejection Rate, TRR), Predictive value positive (Pos) and Predictive value negative (Neg) were calculated, as they measure the 'trustworthiness' of the output of the AFV system and are most useful when compared to the ground-truth proportions of matching and non-matching pairs, respectively. Table 1. Performance measures used

                 Predicted class
                    +      −
Actual class  +    TA     FR
              −    FA     TR

Accuracy = (TA+TR)/(TA+FR+FA+TR)
Recall (TAR) = TA/(TA+FR)
Specificity (TRR) = TR/(FA+TR)
Pos = TA/(TA+FA)
Neg = TR/(FR+TR)
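The formulas of Table 1 as code, given the four confusion-matrix counts:

```python
def measures(TA, FR, FA, TR):
    """Five performance measures of Table 1 from confusion-matrix counts:
    TA/TR = true accept/reject, FA/FR = false accept/reject."""
    return {
        "Accuracy":          (TA + TR) / (TA + FR + FA + TR),
        "Recall (TAR)":      TA / (TA + FR),
        "Specificity (TRR)": TR / (FA + TR),
        "Pos":               TA / (TA + FA),
        "Neg":               TR / (FR + TR),
    }
```

Pos and Neg condition on the system's decision rather than the ground truth, which is why they capture how far the output can be trusted.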

3.2 Determining the Neural Network Architecture The LVQ network architecture for the combined system was determined using stratified 10-fold cross validation, by comparing the performance of different combinations of feature vector size and number of neurons in the competitive layer. Only the hi-optical database was used for this experiment. The output of the minutiae-matching module (with T = 0.9 and σ = 0.1) was used to generate shape signatures for each reference axis found in each fingerprint. This information was stored on disk, with each file containing the shape signatures for one pair of fingerprints, together with a flag indicating whether the pair matches. Feature vectors of size 9, 10, 11, 12, 13, 14 and 15 were created from each pair of shape signatures within each file. Feature vectors of the same size, along with their target outputs, were bundled into a single file, defining the data for the NN with the corresponding number of input features. Combining the seven input sizes with 8, 16, 24, 32 and 40 competitive neurons yielded 35 different NN architectures. Of the two output neurons, the first represents a match and the second a non-match. Stratified ten-fold cross validation was performed for each of the NNs, and the performance measures were calculated (Fig. 2). An architecture with 12 input and 24 competitive units was selected, as for these parameters Accuracy, TRR and Pos are in their best region, while TAR and Neg are at their second best.
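A simple way to build the stratified folds (round-robin dealing per class; the paper does not describe its exact splitting procedure, so this is only a sketch):

```python
def stratified_folds(labels, k=10):
    """Split example indices into k folds preserving class proportions."""
    folds = [[] for _ in range(k)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        for j, i in enumerate(idx):
            folds[j % k].append(i)  # deal members of each class in turn
    return folds
```

Stratification matters here because the data is heavily skewed: only 8.86% of the hi-optical pairs match, so an unstratified split could leave a fold with almost no matching examples.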

[Figure 2: five surface plots, one per measure (Accuracy, Recall (TAR), Specificity (TRR), Pos, Neg), each shown as a function of the number of features and the number of hidden (competitive) neurons.]

Fig. 2. Performance of different combinations of features (9-15) and hidden neurons (8-40)

3.3 Comparing AFV-local and AFV-combined The combined system used the minutiae-matching module with T = 0.9 and σ = 0.1, and a neural network module with 12 input, 24 competitive and 2 output neurons, trained on the hi-optical database only. Two fingerprints were considered to match by the combined system if: (1) the minutiae-matching module found at least one reference axis, and (2) the NN module matched at least one pair of shape signatures among all signatures generated from the corresponding reference axes. Table 2. Comparison of classification performance [%]

Database    System         Accuracy   TAR     TRR     Pos     Neg
hi-optical  AFV-local      85.54      12.86   92.60   14.46   91.62
            AFV-combined   90.32       3.57   98.75   21.74   91.33
synthetic   AFV-local      88.83       3.93   97.08   11.58   91.22
            AFV-combined   90.79       1.07   99.51   17.65   91.19
db-ipl      AFV-local      93.05       4.91   98.19   13.66   94.65
            AFV-combined   94.45       4.24   99.71   46.34   94.70

As can be seen from Table 2, the complete AFV system has equal or better performance on most measures, with the most significant improvement on Pos, i.e. the trustworthiness of the match decision. The only measure with no improvement is TAR, since this approach cannot increase the number of matches; it only refines the match decisions of the minutiae-based module. It can also be seen that the improvement is consistent over all databases, regardless of whether the NN was trained on the dataset or not, which confirms the high generalisation ability of LVQ. It should be noted that the training data given to LVQ was small and highly skewed towards the non-match examples; providing a larger training set with an even proportion of matching examples could further improve the performance of the NN. Another interesting observation is that AFV-local has trustworthiness values close to the actual proportions within the databases (see Sec. 3.1), whereas for AFV-combined these values are higher than the actual proportions. This suggests that the trustworthiness values of the combined system do not depend on the proportions within the database, while AFV-local shows this dependency.

4 Conclusions A new approach to AFV that combines local and global features using neural networks was developed. It uses matched local features as a reference axis for generating global features, and an LVQ neural network for the final recognition. Any local and global recognition schemes can be combined in this way; in our implementation, minutiae-based and shape-based algorithms were used. LVQ learns to recognise matching pairs of fingerprints based on the difference of their feature vectors, which are extracted from the shape signatures. Experimental evaluation shows that the integrated system outperforms the local system on various accuracy measures, with a significant improvement in trustworthiness. It also confirms the ability of the trained neural network to perform consistently on unseen databases. This approach therefore has clear potential in AFV systems that require high reliability and accuracy.

Acknowledgements We are very grateful to Piero Calucci from Tender S.p.A., Trieste, Italy and Fabio Vitali from the University of Trieste, who provided the implementations for the preprocessing and minutiae extraction. Thanks are also due to Dr Marius Tico, Tampere Univ. of Technology, Finland, for his helpful comments on the minutiae matching.

References
1. Blue, J.L., Candela, G.T., Grother, P.J., Chellappa, R., Wilson, C.L.: Evaluation of Pattern Classifiers for Fingerprint and OCR Applications. Pattern Recognition 27 (1994) 485-501
2. Hollingum, J.: Automated Fingerprint Analysis Offers Fast Verification. Sensor Review 3 (1992) 12-15
3. Jain, A.K., Hong, L., Bolle, R.: On-line Fingerprint Verification. IEEE PAMI 19 (1997) 302-314
4. Jain, A.K., Prabhakar, S., Hong, L., Pankanti, S.: FingerCode: A Filterbank for Fingerprint Representation and Matching. In Proc. CVPR Conf., Fort Collins (1999)
5. Kohonen, T.: Self-Organizing Maps. Springer (2001)
6. Logan, B., Salomon, A.: A Content-Based Music Similarity Function. TR CRL 02 (2001)
7. Loncaric, S.: A Survey of Shape Analysis Techniques. Pattern Recognition 31 (1998) 983-1001
8. Tico, M., Rusu, C., Kuosmanen, P.: A Geometric Invariant Representation for the Identification of Corresponding Points. In Proc. ICIP (1999) 462-466
9. Tico, M., Immonen, E., Ramo, P., Kuosmanen, P., Saarinen, J.: Fingerprint Recognition Using Wavelet Features. In Proc. ISCAS (2001) 21-24
10. Wahab, A., Chin, S.H., Tan, E.C.: Novel Approach to Automated Fingerprint Recognition. IEE Proc. Vis. Image Signal Processing 145 (3) (1998) 160-166
11. www2.csr.unibo.it/research/biolab/
