Oriented diffusion filtering for enhancing low-quality fingerprint images

www.ietdl.org — Published in IET Biometrics. Received on 9th January 2012. Revised on 1st May 2012. doi: 10.1049/iet-bmt.2012.0003

ISSN 2047-4938

C. Gottschlich¹, C.-B. Schönlieb²

¹ Institute for Mathematical Stochastics, University of Göttingen, Goldschmidtstrasse 7, 37077 Göttingen, Germany
² Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, CB3 0WA Cambridge, UK
E-mail: [email protected]

Abstract: To enhance low-quality fingerprint images, we present a novel method that first estimates the local orientation of the fingerprint ridge and valley flow and next performs oriented diffusion filtering, followed by a locally adaptive contrast enhancement step. By applying the authors' new approach to low-quality images of the FVC2004 fingerprint databases, the authors are able to show its competitiveness with other state-of-the-art enhancement methods for fingerprints, such as curved Gabor filtering. A major advantage of oriented diffusion filtering over those is its computational efficiency. Combining oriented diffusion filtering with curved Gabor filters led to additional improvements and, to the best of the authors' knowledge, the lowest equal error rates achieved so far using MINDTCT and BOZORTH3 on the FVC2004 databases. The recognition performance and the computational efficiency of the method suggest including oriented diffusion filtering as a standard image enhancement add-on module for real-time fingerprint recognition systems. In order to facilitate the reproduction of these results, an implementation of the oriented diffusion filtering for Matlab and GNU Octave is made available for download.

1 Introduction

Increasingly, biometrics, and in particular fingerprint recognition, plays an important role for verifying (1:1 comparison) and identifying (1:N comparison) persons in commercial, governmental and forensic applications. The matching performance of a fingerprint recognition system depends heavily on the image quality [1]. Image enhancement aims at improving the overall performance by preparing input images for later processing stages. Most systems extract minutiae from fingerprints and use them as the main feature for matching [2]. The presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate.

In order to avoid these two types of errors, an enhancement step intends to improve image quality by removing noise and increasing the clarity of the ridge and valley structure. Especially if ridges are interrupted, for example because of creases, scars or dryness of the finger, an image enhancement method shall be able to reconnect them. Ridges that falsely stick together, for example caused by wetness or smudges, shall be separated, while true ridge endings and bifurcations shall be preserved. The enhancement of low-quality images (occurring e.g. in all databases of FVC2004 [3]) and of very low-quality prints like latents (e.g. NIST SD27 [4]) is in general a challenging task.

Techniques based on contextual filtering, particularly the Gabor filter (GF), are widely used for fingerprint image

enhancement [2]. Those methods heavily rely on a correct estimation of the local context, that is, the local orientation and ridge frequency taken as inputs for the GF. Errors made in estimating the local context may lead to the creation of artefacts in the enhanced image, which consequently tends to increase the number of verification errors. In fact, for low-quality images there is a substantial risk that an image enhancement step may impair the recognition performance, as shown by Fronthaler et al. [5] (results are cited in Table 1 of Section 3). Hence, the choice of an adequate enhancement strategy is a crucial one in fingerprint matching.

The present work proposes to obtain a robust orientation field (OF) estimation as a first step. From this estimated OF a tensor is derived which steers the subsequent anisotropic diffusion process. Finally, low contrast in the enhanced fingerprint is compensated for by locally adaptive contrast enhancement, before the improved fingerprint image is passed on to a feature extraction module. We examine linear oriented diffusion and three types of non-linear oriented diffusion: coherence-enhancing, incoherence-enhancing and edge-enhancing diffusion. The performance of oriented diffusion filtering for enhancing low-quality prints is compared to state-of-the-art methods. Additional improvements in terms of lower equal error rates (EERs) are achieved by combining oriented diffusion filtering with curved GFs. In order to facilitate the reproduction of these results by other researchers, our implementation for Matlab and GNU Octave of the diffusion filtering is made available for download (www.stochastik.math.uni-goettingen.de/

& The Institution of Engineering and Technology 2012

Table 1   EERs in % for matcher BZ3 on the original and enhanced images of FVC2004 [3]

Enhancement method                                                        DB1     DB2    DB3    DB4
original images                                                           14.5    9.5    6.2    7.3
traditional GF [6]                                                       (16.9)  14.4    7.1    9.8
short time Fourier transform analysis [7]                                (19.1)  11.9    7.6   10.9
pyramid-based filtering [5]                                               12.0    8.2    5.0    7.0
curved GFs [8] (curved region: 33 × 65 pixels, σx = 4.0, σy = 4.0)         9.7    6.3    5.1    6.5

Gradient-based anisotropic diffusion filtering
coherence-enhancing diffusion [9] (α = 0.001, C = 0.0001, ρ = 10)         11.0    8.9    4.8    7.9

Oriented diffusion filtering
linear diffusion (α = 0.01)                                               10.0    5.9    5.0    6.0
incoherence-enhancing diffusion (α = 0.001, C = 0.01, ρ = 10)             10.0    5.6    5.2    6.1
coherence-enhancing diffusion (α = 0.001, C = 0.001, ρ = 32)              10.0    6.4    5.0    6.0

Combining coherence-enhancing oriented diffusion and curved GF
max rule                                                                   9.0    5.0    4.2    5.4
sum rule                                                                   9.3    4.8    3.6    5.2
template cross matching                                                    8.9    4.3    3.4    4.9

Parentheses indicate that only a small fingerprint area was useful for recognition. Results of the top four rows are cited from [5]. All diffusion processes stopped after 40 iterations with step size 0.25.

biometrics/). Although the focus of this paper is on fingerprint image enhancement, the presented method is also applicable to other oriented structures [10] such as images of annual rings in tree discs, muscle fibres or blood vessels.

1.1 Related work

After the successful application of non-linear diffusion equations in imaging, for example in [11, 12], anisotropic diffusion filtering was introduced into the image processing community by Weickert [13] and Bigun [14]. In particular, coherence-enhancing anisotropic diffusion was proposed in [9], where the author demonstrates the effectiveness of the method in the context of fingerprint enhancement by applying it to a medium-quality print. Meihua and Zhengming [15] propose a method for sharpening edges in fingerprint images by anisotropic diffusion. Moreover, Zhai et al. [16] describe the use of conventional gradient-based non-linear anisotropic diffusion filtering for the binarisation of fingerprints. The application of non-linear anisotropic reverse diffusion equations for enhancing prints is proposed in [17]. Furthermore, anisotropic diffusion is examined in [18] for the classification of fingerprint images. Hastings [19] proposes ridge enhancement by iterative smoothing along the orientation. Perona [20] and Chen and Dong [21] employ diffusion filtering for smoothing the OF of the fingerprint rather than the fingerprint itself. Zhao et al. [22] suggest a singularity-driven diffusion process for regularising the OF.

The novelty, and at the same time the crucial difference of our proposed method to the diffusion-based methods listed above, is that the diffusion tensor (defining the directions of the diffusion process) is fed with directional information from a pre-computed OF, cf. (1), rather than being defined by the image gradient only, cf. (11). This OF is more robust to noise and other degradations in the fingerprint than the gradient of the image and hence oriented diffusion performs considerably better, in particular for low-quality fingerprints, cf. Fig. 1.

1.2 Organisation of the paper

The paper commences with the presentation of our algorithm in Section 2. We explain how a diffusion tensor is derived

Fig. 1 Original image (left: finger 84, impression 6, FVC2004 database 1), after enhancement by coherence-enhancing anisotropic diffusion filtering using the image gradients [23] (b), and after linear diffusion filtering with a priori estimated OF as described in Section 2.1 (c)
a Original fingerprint
b Gradient based coherence-enhancing diffusion
c Oriented linear diffusion
The structure tensor derived from the aggregated OF clearly outperforms the classical gradient based tensor in areas with a high noise level.



from an a priori computed OF. The latter is computed with an aggregation approach, described in Section 2.1, which combines two OF estimation methods in order to achieve a more reliable estimation of the local context. Subsequently, three types of oriented diffusion filtering are proposed in Section 2.2. After smoothing by oriented diffusion filtering, we compensate for differences in grey-level intensities along ridges and valleys by applying a locally adaptive contrast enhancement, as detailed in Section 2.3. For comparing different diffusion-based techniques with existing enhancement methods, low-quality images of FVC2004 [3] were enhanced and used in verification tests. Results stated in Section 3 show the soundness of this approach and its competitiveness with state-of-the-art image enhancement methods in terms of EERs. Further improvements were achieved by combining anisotropic diffusion filtering with curved GFs. The paper concludes with a discussion of advantages and drawbacks in Section 4. In the Appendix, a concise introduction to anisotropic diffusion filtering is given.

2 Fingerprint image enhancement

Our enhancement algorithm consists of three steps, which are presented in the following.

Oriented diffusion filtering algorithm:
1. Estimation of an OF (cf. Section 2.1).
2. Oriented diffusion filtering (cf. Section 2.2).
3. Contrast enhancement (cf. Section 2.3).

The main dynamics of the algorithm are defined by an oriented diffusion process, that is, an anisotropic diffusion process à la Weickert [23] but steered by an a priori estimated OF. For an introduction to anisotropic diffusion filtering we refer to the Appendix of the paper. There, the diffusion process (11) is steered by the diffusion tensor (or structure tensor), which is derived from the image gradients smoothed by a Gaussian kernel. However, in areas of fingerprint images which are affected by noise like scars, smudges, wetness or dryness of the finger, gradients are error-prone and unreliable [24]. Depending on the level of noise, smoothing with a Gaussian kernel may be insufficient for obtaining a feasible diffusion tensor (see Fig. 1 for an example). The dilemma is that precisely those regions which could profit the most from diffusion filtering are the ones where the image gradients are not correctly estimated because of the influence of noise. In contrast to previous works, we therefore generate our diffusion tensor from a more reliable OF estimation (see Fig. 1). To be more specific, we propose an application of anisotropic diffusions like (11) steered by an a priori estimated OF, cf. Section 2.1, for the enhancement of fingerprint images. In fact, instead of (11) we consider

\[
\begin{cases}
\dfrac{\partial u}{\partial t} = \operatorname{div}\bigl(D(J_\rho(\nabla u_\sigma), \mathrm{OF})\,\nabla u\bigr) & \text{on } \Omega \times (0,\infty)\\[4pt]
u(x,0) = f(x) & \text{on } \Omega\\[4pt]
\bigl\langle D(J_\rho(\nabla u_\sigma), \mathrm{OF})\,\nabla u,\, n \bigr\rangle = 0 & \text{on } \partial\Omega \times (0,\infty)
\end{cases}
\tag{1}
\]

where now the diffusion tensor D is not only dependent on the structure tensor J_ρ, but also on the OF. More precisely, D is

chosen with eigenvectors parallel to the two orthogonal directions

\[
v_1^{\mathrm{OF}} = \bigl(\cos(\pi\gamma/180),\; \sin(\pi\gamma/180)\bigr), \qquad
v_2^{\mathrm{OF}} = \bigl(v_1^{\mathrm{OF}}\bigr)^{\perp}
\tag{2}
\]

given by the OF (γ measures angles in degrees) but with eigenvalues λ₁ and λ₂ determined by the structure tensor J_ρ(∇u_σ). In Section 2.2 we shall discuss the well-posedness of (1) for different choices of eigenvalues.

2.1 Orientation field estimation

In order to obtain robust estimates of the OF for low-quality prints, we compute the OF by using a combination of the line sensor method [24] and the gradients based method [10, 25]. The two individual OFs are compared pixelwise, and if the angle between both estimations is smaller than a threshold (we used τ = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. In a final step, all inner gaps are reconstructed and the orientation of the outer proximity is extrapolated up to a radius of 16 pixels, both as described in [24].

This method combining the line sensor and image gradients for an OF estimation was tested on the FVC-onGoing (http://biolab.csr.unibo.it/FVCOnGoing/) benchmark for fingerprint orientation extraction (FOE). The combined method achieved an AvgErr (see Section II-A in [26]) of 6.53% on the good quality and 15.02% on the bad quality images of FOE Set A, which is available for training purposes. On FOE Set B, the results were 6.49% for the good and 16.39% for the bad quality images. In comparison to the results listed in [26], the combined OF outperforms the baseline algorithms with optimised parameters (see Table II in [26]). This observation was confirmed in verification tests on all 12 databases of FVC2000 to FVC2004 [3, 27, 28], which showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [29]. The OF being the only parameter that was changed, lower EERs can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. The same OF estimation was used in [8] for enhancing fingerprint images by curved GFs (see Table 1 for a comparison with this approach). The information fusion strategy for obtaining the combined OF was inspired by Predd et al. [30].
The two OF estimation methods can be regarded as judges or experts, and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold τ, the judgments are considered incoherent and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Providing an OF estimation which is both robust to noise and precise enough for contextual filtering is certainly a not yet fully solved challenge. The combination of global models for the OF, for example based on quadratic differentials [31], with local and semi-local approaches deserves further research.


2.2 Choices for the eigenvalues

In the following we discuss different choices for the eigenvalues λ₁, λ₂ of D in (1). Choosing the eigenvalues λ₁ and λ₂ means deciding on the strength of the diffusion in each eigendirection v₁ and v₂.

2.2.1 Linear anisotropic diffusion filtering: Our first approach is the following choice of eigenvalues

\[
\lambda_1(x) = \alpha, \qquad
\lambda_2(x) =
\begin{cases}
\alpha & \text{if the orientation could not be estimated at } x\\
1 & \text{if we have a trustable estimate for the orientation at } x
\end{cases}
\tag{3}
\]

for x ∈ Ω and with a positive constant α ≪ 1. Then (1) results in a linear diffusion equation with non-constant coefficient tensor

\[
\frac{\partial u}{\partial t} = \operatorname{div}\bigl(D(\alpha, \mathrm{OF})\,\nabla u\bigr)
\]

where the diffusion tensor D is discontinuous in space and constant in diffusion time. Instead of the well-posedness result of Weickert in Theorem 2, for this choice of D we have the following existence result.

Theorem 1: With the above choice of the diffusion tensor D, that is, D has eigenvectors parallel to (2) and eigenvalues (3), the anisotropic diffusion problem (1) has a unique solution u(x, t) in the distributional sense with

\[
u \in C([0,T]; L^2(\Omega)) \cap L^2(0,T; H^1(\Omega)), \qquad
\frac{\partial u}{\partial t} \in L^2((0,T); H^{-1}(\Omega))
\]

Moreover, u ∈ C^{β,β/2}(Ω × (0,T]), 0 < β ≤ 1.

Proof: The proof is a standard application of semi-group theory. Since D is symmetric (its eigenvectors form an orthonormal basis), the differential operator Au(x, t) = div(D∇u(x, t)) is self-adjoint and hence, by the spectral theorem, it generates a one-parameter semi-group, cf. [32]. Further, the coefficients in D are bounded in L²(Ω) but are non-smooth. Hence, in general we cannot expect more than Hölder regularity of order β in space and β/2 in time, cf. for example [33]. □

Similar approaches have been considered in the literature, among them so-called shape-based diffusion or directed diffusion [34].

2.2.2 Non-linear anisotropic diffusion filtering: In order to make also the strength of the smoothing in (11) dependent on the local image structure, the eigenvalues λ₁ and λ₂ of D are defined in dependency on the image gradient. Such an approach results in a non-linear diffusion equation. We consider two different, somewhat orthogonal, choices for the eigenvalues, which result in so-called coherence-enhancing and incoherence-enhancing diffusions. In order to meet the smoothness assumption on the diffusion tensor D, that is, assumption (12), we assume that the pre-estimated OF is smooth. In our numerical computations in Section 3 this smoothness will be


automatically guaranteed by the discrete setting (e.g. as the discrete spatial grid size of our problem goes to zero, we resolve the OF by smooth interpolation). For both choices of D the well-posedness result of Weickert in Theorem 2 applies.

Coherence-enhancing diffusion has been proposed in [9, 23] to enhance the coherence of flow-like structures. With μ₁, μ₂ being the eigenvalues of J_ρ as before, we define (for C > 0)

\[
\lambda_1(x) = \alpha, \qquad
\lambda_2(x) =
\begin{cases}
\alpha & \text{if } \mu_1(x) = \mu_2(x)\\
\alpha + (1 - \alpha)\, e^{-C/((\mu_1 - \mu_2)(x))^2} & \text{else}
\end{cases}
\tag{4}
\]

for x ∈ Ω, where α ∈ (0, 1), α ≪ 1. The constant C determines the steepness of the exponential function. The choice of the parameters which appear in (4) will be discussed later in Section 3. With this choice for the eigenvalues, the smoothing of the coherence-enhancing diffusion is stronger in the neighbourhood of coherent structures (where the radius of the neighbourhood is determined by ρ), while the diffusion stops in homogeneous areas, at corners and, in general, in incoherent (random) areas of the image. Note that this approach depends crucially on the correct choice of ρ. Choosing ρ too small, the filter might not be able to detect coherent structures anymore (depending on the level of noise contained in f). In contrast, assigning a value to ρ that is too large results in a smoothing which is too strong and possibly merges separate structures in the image. However, having found the correct ρ, this approach is even able to close gaps (smaller than ρ) in coherent structures in the image.

In addition to this choice, we propose another diffusion filter, which follows a philosophy opposite to coherence-enhancing diffusion (4). We define the eigenvalues of the incoherence-enhancing diffusion as follows

\[
\lambda_1(x) = \alpha, \qquad
\lambda_2(x) =
\begin{cases}
\alpha & \text{if } |\mu_1 - \mu_2|(x) = 1\\
\alpha + (1 - \alpha)\, e^{-C/(1 - ((\mu_1 - \mu_2)(x))^2)} & \text{else}
\end{cases}
\tag{5}
\]

for x ∈ Ω, where, as before, α ≪ 1 is a positive constant. Further, we choose C such that the exponential function is very steep, and the structure tensor J has been normalised such that |μ₁ − μ₂| ≤ 1. Then, if the coherence is large we have λ₂ ≈ α and we smooth only a little, in both directions v₁ and v₂ with the same strength α. The smoothing becomes stronger the smaller the coherence becomes. Such an approach makes sense in its application to fingerprint images whenever they contain large areas of broken structures and when choosing ρ small (such that gaps in coherent structures cannot be overlooked and classified as coherent).

Remarks 1:
(i) Note that with the construction of the eigendirections of D from the OF in (1), if we computed the coherence (10) according to these eigenfunctions, it would always equal one. To see this, we consider (8)–(10) and plug in the corresponding values for (9), for example j₁₁ = cos²(πγ/180). This gives

\[
\tilde{\mu}_1 - \tilde{\mu}_2
= \sqrt{\bigl(\cos^2(\pi\gamma/180) - \sin^2(\pi\gamma/180)\bigr)^2 + 4\bigl(\cos(\pi\gamma/180)\,\sin(\pi\gamma/180)\bigr)^2}
= \sqrt{\cos^2(2\pi\gamma/180) + \sin^2(2\pi\gamma/180)} = 1
\]

Owing to this, the coherence (used in the computation of the eigenvalues λ₁ and λ₂ of D and hence responsible for the diffusion strength) has been computed from the structure tensor J_ρ(∇u_σ) of the smoothed image u_σ rather than from the fixed OF.
(ii) Note, however, that the influence of the non-linear weighting introduced in (4) and (5), compared with the linear diffusion process with eigenvalues (3), seems to have little (or even negative) impact on the enhancement process, cf. the results in Table 1. The reason is that the directional information in D, determined by the precomputed OF from Section 2.1, already constitutes a very accurate estimation of the orientation of lines in the fingerprint (except where the orientation could not be estimated, in which case no preferred diffusion direction is imposed). Hence, in general, strong diffusion in this estimated direction is favourable no matter what the coherence value is, cf. Fig. 2f–h.

Fig. 2 Comparison of enhancement methods
a Original fingerprint
b Gradients based coherence-enhancing diffusion
c Short time Fourier transform analysis
d Curved GFs
e Orientation field
f Oriented linear diffusion
g Oriented coherence-enhancing diffusion
h Oriented incoherence-enhancing diffusion
(a) displays a low-quality print of the FOE Set A [26] and (e) the respective ground-truth OF. Orientations in degrees are encoded as grey values between 0 and 179, where 0 corresponds to the x-axis and angles increase clockwise. The gradients based coherence-enhancing diffusion filtering (b) according to Weickert [9] and the enhancement using STFT analysis (c) [7] fail in the region at the bottom right of the image foreground. These two methods create artefacts in the enhanced image which impair the recognition performance. Oriented diffusion filtering (f–h) and curved GFs (d) [8] improve the image quality utilising the OF. Comparing the three types of oriented diffusion filtering (f–h) shows that they lead to very similar results. The crucial factor for the enhancement is the OF.

2.3 Locally adaptive contrast enhancement

The diffusion process improves the image quality by smoothing along the local orientation. However, the method also tends to reduce the overall image contrast, and after the diffusion there are also considerable differences in grey-level intensities along ridges and valleys. In a final step, we compensate for this by enhancing the contrast in a locally adaptive way. Our contrast enhancement is based on the normalisation formula from Section 2.3 in [6], which was proposed for a global image normalisation. We decompose the image domain into a foreground FΩ containing the ridge and valley structure which is useful for recognition and a background Ω∖FΩ. Then, we first compute for each foreground pixel (i, j) the local mean m_I and variance v_I of the grey value I(i, j) by considering only neighbouring pixels within a radius ≤ r (we used r = 6 for all images). Next, a target grey value T(i, j) is calculated for a given target mean m_T and variance v_T (in our tests, we set m_T = 127.5 and v_T = 10 000). Finally, the new grey value G(i, j) is obtained by adjusting the current grey value I(i, j) a certain percentage p_{i,j} towards the target value. Besides the good performance of this algorithm in the inner areas of the fingerprint, it may create artefacts at the border between image foreground and background. Overall, this may


increase the detection of false minutiae and hence result in higher EERs. To avoid this, the contrast enhancement is tuned to change slowly in s steps from the foreground to the background area. Foreground pixels which have background pixels as four-connected neighbours start with p_{i,j} = (1/s)·p. Their four-connected foreground neighbours are set to p_{i,j} = (2/s)·p, and so on for s steps, with p_{i,j} = p for all other foreground pixels (see Fig. 3 for an example). We tested a number of parameter combinations on a few images from the four B databases (prints of ten fingers with eight impressions per finger, intended for training) and found that p = 0.5 and s = 60 produced a smooth transition from foreground to background. With this, the new fingerprint image G is computed pixel-wise as

\[
G(i,j) =
\begin{cases}
I(i,j) + p_{i,j}\,\bigl(T(i,j) - I(i,j)\bigr) & \text{for } (i,j) \in F\Omega\\
m_T & \text{otherwise}
\end{cases}
\qquad \text{with } T(i,j) = m_T + \sqrt{\frac{v_T}{v_I}}\,\bigl(I(i,j) - m_I\bigr)
\]

Fig. 3 Original images (left) are first smoothed by linear anisotropic diffusion filtering with a constant α = 0.01, 40 iterations and a step size of 0.25; as a final step, a locally adaptive contrast enhancement (right) is performed
a Original fingerprint
b Oriented diffusion enhanced image
c Original fingerprint
d Oriented diffusion enhanced image
Impressions 1 (top row) and 3 of finger 2 in database 4 from FVC2004 [3] are displayed. Scores for matching both impressions using BZ3 increase from 37 for the originals to 54 after diffusion and to 86 after contrast enhancement.

3 Experimental results

3.1 Fingerprint databases and verification protocol

Experiments were conducted on the four databases of FVC2004 [3], which contain low-quality fingerprint images. The images of the first two databases were acquired using an optical sensor, the images of the third database were acquired by a thermal sweeping sensor, and database 4 consists of synthetically generated images. For each database, a training set (set B) is provided, comprising ten fingers and eight impressions per finger. Each test set (set A) consists of 100 fingers with eight impressions per finger. The parameters for the oriented diffusion process listed in Table 1 were determined on the basis of training set B by choosing, from a set of tested combinations, the parameter combination with the lowest average equal error rate (EER) over the four training databases. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [5] and other researchers. Around 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC2004 databases. The FVC protocol and the computation of the EERs are described in [27].

3.2 Software

The matcher referred to as 'BZ3' is based on the freely available (http://fingerprint.nist.gov/) NIST biometric image software package (NBIS [35]). Minutiae were extracted using MINDTCT and templates were matched by BOZORTH3. Our numerical implementation of (1) is based on the 'Nonlinear Diffusion Toolbox' in [36], which starts with an initial condition u⁰ = f and computes each subsequent anisotropic diffusion step as

\[
u^{k+1} = u^k + \Delta t \,\operatorname{div}\bigl(D \nabla u^k\bigr)
\tag{6}
\]

The iterative approach (6) constitutes an explicit time-stepping scheme for the time discretisation of (1). The spatial derivatives div and ∇ are discretised with finite differences. Our implementation is available for download (www.stochastik.math.uni-goettingen.de/biometrics/) and works with both Matlab and GNU Octave.

3.3 Combining oriented diffusion filtering and curved GF

In order to improve the matching performance, we tested two information fusion strategies for combining oriented diffusion filtering and curved GFs. For each fingerprint image of the FVC2004 databases, two enhanced versions are computed using both enhancement methods, and from each enhanced image, a minutiae template is extracted using MINDTCT.

First, let us consider fusion on the feature level. Both templates are combined into one template by union. Duplicate entries are avoided by comparing the x- and y-coordinates and the direction of the minutiae, and similar minutiae are only added once to the combined template. Alignment is not required, since minutiae are extracted from enhanced versions of the same original fingerprint image. The rationale behind template fusion is to reduce the number of missed true minutiae, if both enhancement methods complement one another. However, in our experiments matching the fused templates did not improve the performance and resulted in similar EERs as matching templates of one enhancement method. MINDTCT is known for extracting false minutiae at the border between foreground and background (see e.g. [37]), and the template fusion did not only reduce the number of missed minutiae,

but it also increased the number of falsely detected minutiae, and in doing so, the gains from template fusion were cancelled out.

Moreover, score-level fusion is a popular approach for combining, for example, different fingerprint matching algorithms or different biometric traits. Henceforth, ADi denotes the minutiae template extracted from image i after enhancement by anisotropic diffusion filtering, and CGi the template from image i enhanced by curved GFs. For each recognition attempt (RA), two fingerprints, denoted as a and b, are matched. For a genuine RA, a and b are different impressions of the same finger; for an impostor RA, a and b are impressions belonging to different fingers. Performance improvements were achieved by combining the two scores obtained by matching ADa with ADb and matching CGa with CGb using the max rule or the sum rule [38]. Here, normalising the scores [2] prior to fusion is not necessary, because all scores were obtained using the same matching algorithm, BOZORTH3. Further improvements were achieved by cross matching the templates, that is, additionally matching ADa with CGb and CGa with ADb, and considering the sum of the best two of the four scores as the combined score [29]. Combinations of linear, coherence-enhancing and incoherence-enhancing diffusion with curved GFs were tested, and all three combinations led to major performance improvements. In Table 1, we report the best of the three variants.
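The three score-level rules above (max rule, sum rule, and the cross-matching "best two of four" combination) can be sketched as follows; the function name and the plain-number scores are illustrative, not part of the NBIS tooling:

```python
def fuse_scores(s_ad_ad, s_cg_cg, s_ad_cg, s_cg_ad, rule="cross"):
    """Score-level fusion of match scores from two enhancement pipelines.

    s_ad_ad : score from matching ADa with ADb (diffusion-enhanced templates)
    s_cg_cg : score from matching CGa with CGb (curved-GF-enhanced templates)
    s_ad_cg, s_cg_ad : cross-matching scores (used only by the 'cross' rule)
    """
    if rule == "max":
        return max(s_ad_ad, s_cg_cg)
    if rule == "sum":
        return s_ad_ad + s_cg_cg
    if rule == "cross":
        # Sum of the best two of the four scores, cf. [29].
        return sum(sorted([s_ad_ad, s_cg_cg, s_ad_cg, s_cg_ad])[-2:])
    raise ValueError(f"unknown fusion rule: {rule}")
```

No score normalisation is needed before fusing, since all four scores come from the same matcher (BOZORTH3).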

4 Discussion

The performance of oriented diffusion filtering for enhancing low-quality fingerprint images is quite impressive in comparison to existing methods: for example, linear anisotropic diffusion filtering clearly outperforms the traditional GF [6] using a gradient-based OF and the enhancement based on the short time Fourier transform [7] on all four databases, and pyramid-based filtering [5] on three of the four databases. The EERs achieved by linear diffusion filtering are similar to those of the curved GFs, which applied exactly the same OF estimation as the diffusion filtering for obtaining the results listed in Table 1. Advantages of diffusion filtering are that it does not require an estimation of the ridge frequency (RF) and that it can be computed fast.

The performance of the curved GFs heavily depends on the quality and reliability of the OF and RF estimation. This fact is nicely illustrated when considering the use of GFs for the generation of synthetic fingerprints [39]: one black spike on a white image and the iterative application of the GF are sufficient for creating a ridge pattern. In this sense, the pattern is completely defined by the OF and RF image, and with respect to the input, the GF creates a perfect enhanced image. However, when dealing with low-quality images, errors may occur during the estimation of the local context, and if these erroneous estimations are passed on to the GF, it will create incorrect structures in the enhanced image. The diffusion filter, in contrast, proceeds more gently. Grey-value differences are evened out along the orientation, and in doing so, interrupted ridges are reconnected and falsely conglutinated neighbouring ridges tend to be separated. The results in Section 3 show that the proposed oriented diffusion filters, followed by a contrast enhancing step, are well-suited for enhancing low-quality images.
Comparing the different types of oriented diffusion filters, we conclude that coherence-enhancing, incoherence-enhancing and linear anisotropic diffusion filtering achieve very similar EERs. Of course, for real-life applications, the linear diffusion with a constant diffusion tensor D is the most favourable because it requires the least computational effort. Indeed, the numerical complexity of solving the discrete equation (6) for oriented diffusion filters heavily depends on the nature of the tensor D. In the linear case, D is constant in time and hence its corresponding discrete matrix has to be built just once. In contrast, in the nonlinear setting D = D(J_ρ(∇u_σ)) depends on the solution and hence has to be updated at least after every couple of iterations in (6). This update amounts to computing the structure tensor of u_σ and the corresponding eigenvalues of D. Of course, this is reflected in the runtimes of the algorithms: the average runtime for linear diffusion filtering over all four databases was 0.78 s (using one core of an Intel Core i7-970 at 3.2 GHz), whereas coherence-enhancing diffusion takes about seven times longer (5.41 s). Its computational efficiency suggests including linear oriented diffusion filtering as a standard image enhancement add-on module in future real-time fingerprint recognition systems. Combining oriented diffusion filtering with curved GFs led to additional improvements and, to the best of our knowledge, the lowest EERs achieved so far using MINDTCT and BOZORTH3 on the FVC2004 databases. It would be of interest to develop a graphics processing unit (GPU)-based implementation of the combined enhancement method in order to show its practicability for real-time application. Further improvements of the recognition performance rest especially on better OF estimations. Automatic and reliable OF estimation for low-quality and very low-quality prints is a challenging topic that certainly deserves further research.

5 Acknowledgments

The authors thank Thomas Hotz, Stephan Huckemann and Axel Munk for their valuable comments during the preparation of this manuscript. C. Gottschlich and C.-B. Schönlieb gratefully acknowledge support by DFG RTG 1023 'Identification in Mathematical Models: Synergy of Stochastic and Numerical Methods'. Moreover, C.-B. Schönlieb acknowledges the financial support provided by the project WWTF Five Senses-Call 2006, 'Mathematical Methods for Image Analysis and Processing in the Visual Arts' and the 'Cambridge Centre for Analysis' (CCA). Further, this publication is based on work supported by Award No. KUK-I1-007-43, made by King Abdullah University of Science and Technology (KAUST).

6 References

1 Alonso-Fernandez, F., Fierrez-Aguilar, J., Ortega-Garcia, J., et al.: 'A comparative study of fingerprint image-quality estimation methods', IEEE Trans. Inf. Forensics Sec., 2007, 2, (4), pp. 734–743
2 Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: 'Handbook of fingerprint recognition' (Springer, London, UK, 2009)
3 Maio, D., Maltoni, D., Cappelli, R., Wayman, J.L., Jain, A.K.: 'FVC2004: third fingerprint verification competition'. Proc. Int. Conf. on Biometric Authentication (ICBA), Hong Kong, 2004, pp. 1–7
4 Garris, M.D., McCabe, R.M.: 'NIST special database 27: fingerprint minutiae from latent and matching tenprint images'. Tech. Rep. 6534, National Institute of Standards and Technology, Gaithersburg, MD, USA, 2000
5 Fronthaler, H., Kollreider, K., Bigun, J.: 'Local features for enhancement and minutiae extraction in fingerprints', IEEE Trans. Image Process., 2008, 17, (3), pp. 354–363

© The Institution of Engineering and Technology 2012

6 Hong, L., Wan, Y., Jain, A.K.: 'Fingerprint image enhancement: algorithms and performance evaluation', IEEE Trans. Pattern Anal. Mach. Intell., 1998, 20, (8), pp. 777–789
7 Chikkerur, S., Cartwright, A., Govindaraju, V.: 'Fingerprint image enhancement using STFT analysis', Pattern Recognit., 2007, 40, (1), pp. 198–211
8 Gottschlich, C.: 'Curved-region-based ridge frequency estimation and curved Gabor filters for fingerprint image enhancement', IEEE Trans. Image Process., 2012, 21, (4), pp. 2220–2227
9 Weickert, J.: 'Coherence-enhancing diffusion filtering', Int. J. Comput. Vis., 1999, 31, (2/3), pp. 111–127
10 Kass, M., Witkin, A.: 'Analyzing oriented patterns', Comput. Vis. Graph. Image Process., 1987, 37, (3), pp. 362–385
11 Perona, P., Malik, J.: 'Scale-space and edge detection using anisotropic diffusion', IEEE Trans. Pattern Anal. Mach. Intell., 1990, 12, (7), pp. 629–639
12 Nitzberg, M., Shiota, T.: 'Nonlinear image filtering with edge and corner enhancement', IEEE Trans. Pattern Anal. Mach. Intell., 1992, 14, (8), pp. 826–833
13 Weickert, J.: 'Theoretical foundations of anisotropic diffusion in image processing', Computing, 1996, 11, pp. 221–236
14 Bigun, J.: 'Vision with direction' (Springer, Berlin, Germany, 2006)
15 Meihua, X., Zhengming, W.: 'Fingerprint enhancement based on edge-directed diffusion'. Proc. Third Int. Conf. on Image and Graphics, December 2004, pp. 274–277
16 Zhai, X., Wang, Y., Shi, Z., Zheng, X.: 'An integration of topographic scheme and nonlinear diffusion filtering scheme for fingerprint binarization'. Proc. ICIC, 2006, pp. 702–708
17 Hao, Y., Yuan, C.: 'Fingerprint image enhancement based on nonlinear anisotropic reverse-diffusion equations'. Proc. Int. Conf. IEEE EMBS, San Francisco, CA, USA, September 2004
18 Vallarino, G., Gianarelli, G., Barattini, J., Gomez, A., Fernandez, A., Pardo, A.: 'Performance improvement in a fingerprint classification system using anisotropic diffusion'. Proc. CIARP, 2004, pp. 582–588
19 Hastings, R.: 'Ridge enhancement in fingerprint images using oriented diffusion'. Proc. Ninth Conf. on Digital Image Computing: Techniques and Applications, Glenelg, Australia, December 2007, pp. 245–252
20 Perona, P.: 'Orientation diffusions', IEEE Trans. Image Process., 1998, 7, (3), pp. 457–467
21 Chen, H., Dong, G.: 'Fingerprint image enhancement by diffusion processes'. Proc. Int. Conf. Image Processing, Atlanta, GA, USA, October 2006, pp. 297–300
22 Zhao, Q., Zhang, L., Zhang, D., Huang, W., Bai, J.: 'Curvature and singularity driven diffusion for oriented pattern enhancement with singular points'. Proc. Conf. on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, June 2009, pp. 2129–2135
23 Weickert, J.: 'Anisotropic diffusion in image processing' (Teubner, Stuttgart, Germany, 1998)
24 Gottschlich, C., Mihăilescu, P., Munk, A.: 'Robust orientation field estimation and extrapolation using semilocal line sensors', IEEE Trans. Inf. Forensics Sec., 2009, 4, (4), pp. 802–811
25 Bazen, A.M., Gerez, S.H.: 'Systematic methods for the computation of the directional fields and singular points of fingerprints', IEEE Trans. Pattern Anal. Mach. Intell., 2002, 24, (7), pp. 905–919
26 Turroni, F., Maltoni, D., Cappelli, R., Maio, D.: 'Improving fingerprint orientation extraction', IEEE Trans. Inf. Forensics Sec., 2011, 6, (3), pp. 1002–1013
27 Maio, D., Maltoni, D., Cappelli, R., Wayman, J.L., Jain, A.K.: 'FVC2000: fingerprint verification competition', IEEE Trans. Pattern Anal. Mach. Intell., 2002, 24, (3), pp. 402–412
28 Maio, D., Maltoni, D., Cappelli, R., Wayman, J.L., Jain, A.K.: 'FVC2002: second fingerprint verification competition'. Proc. 16th Int. Conf. on Pattern Recognition (ICPR), 2002, vol. 3, pp. 811–814
29 Gottschlich, C.: 'Fingerprint growth prediction, image preprocessing and multi-level judgment aggregation'. PhD thesis, University of Goettingen, Goettingen, Germany, April 2010
30 Predd, J.B., Osherson, D.N., Kulkarni, S.R., Poor, H.V.: 'Aggregating probabilistic forecasts from incoherent and abstaining experts', Decis. Anal., 2008, 5, (4), pp. 177–189
31 Huckemann, S., Hotz, T., Munk, A.: 'Global models for the orientation field of fingerprints: an approach based on quadratic differentials', IEEE Trans. Pattern Anal. Mach. Intell., 2008, 30, (9), pp. 1507–1517
32 Engel, K.-J., Nagel, R.: 'One-parameter semigroups for linear evolution equations' (Springer, Berlin, Germany, 2000)
33 Ladyzenskaja, O.A., Solonnikov, V.A., Uralceva, N.N.: 'Linear and quasilinear equations of parabolic type' (American Mathematical Society, Providence, RI, USA, 1968)
34 Illner, R., Neunzert, H.: 'Relative entropy maximization and directed diffusion equations', Math. Meth. Appl. Sci., 1993, 16, pp. 545–554


35 Watson, C.I., Garris, M.D., Tabassi, E., et al.: 'User's guide to NIST biometric image software (NBIS)'. Technical Report, National Institute of Standards and Technology, Gaithersburg, MD, USA, 2007
36 D'Almeida, F.: 'Nonlinear diffusion toolbox', 2004
37 Wu, C., Tulyakov, S., Govindaraju, V.: 'Robust point-based feature fingerprint segmentation algorithm', in 'Advances in biometrics: ICB 2007' (Seoul, Korea, 2007), pp. 1095–1103
38 Kittler, J., Hatef, M., Duin, R.P.W., Matas, J.: 'On combining classifiers', IEEE Trans. Pattern Anal. Mach. Intell., 1998, 20, (4), pp. 226–239
39 Cappelli, R., Erol, A., Maio, D., Maltoni, D.: 'Synthetic fingerprint-image generation'. Proc. 15th Int. Conf. Pattern Recognition (ICPR), Barcelona, Spain, September 2000, pp. 3–7
40 Bigun, J., Bigun, T., Nilsson, K.: 'Recognition by symmetry derivatives and the generalized structure tensor', IEEE Trans. Pattern Anal. Mach. Intell., 2004, 26, (12), pp. 1590–1605
41 Peyré, G.: 'Texture synthesis with grouplets', IEEE Trans. Pattern Anal. Mach. Intell., 2010, 32, (4), pp. 733–748
42 Harris, C., Stephens, M.: 'A combined corner and edge detector'. Proc. Alvey Vision Conf., 1988, pp. 147–151
43 Mikolajczyk, K., Schmid, C.: 'Scale and affine invariant interest point detectors', Int. J. Comput. Vis., 2004, 60, (1), pp. 63–86
44 Lowe, D.G.: 'Distinctive image features from scale-invariant keypoints', Int. J. Comput. Vis., 2004, 60, (2), pp. 91–110
45 Loog, M., Lauze, F.: 'The improbability of Harris interest points', IEEE Trans. Pattern Anal. Mach. Intell., 2010, 32, (6), pp. 1141–1147

7 Appendix: anisotropic diffusion filtering – an introduction

Let $\Omega \subset \mathbb{R}^2$ be a rectangular domain and $f \in L^\infty(\Omega)$ the given image of a fingerprint defined on $\Omega$. We shall construct a set of filtered (enhanced) images $u = u(x, t): \Omega \times [0, \infty) \to \mathbb{R}$ by the application of anisotropic diffusion filters given by the evolution

$$\begin{cases}
\dfrac{\partial u}{\partial t} = \operatorname{div}(D\nabla u) & \text{on } \Omega \times (0, \infty)\\
u(x, t = 0) = f(x) & \text{on } \Omega\\
\langle D\nabla u, n\rangle = 0 & \text{on } \partial\Omega \times (0, \infty)
\end{cases} \tag{7}$$

where $D: \Omega \to \mathbb{R}^{2\times 2}$ is the so-called diffusion tensor and $n$ is the outward pointing unit normal vector on $\partial\Omega$. The enhanced fingerprint image is then defined as a solution $u = u(x, T)$ of (7) for an appointed time $t = T$. Depending on the construction of the diffusion tensor $D$, the diffusion favours different types of local structures. For its construction, [13] defines the structure tensor $J_\rho$ of an image $u$ to be

$$J_\rho(\nabla u_\sigma) := K_\rho * (\nabla u_\sigma \otimes \nabla u_\sigma), \qquad \rho > 0$$

Here, $K_\rho$ is a Gaussian kernel with variance $\rho$ and $u_\sigma$ is the image $u$ convolved with $K_\sigma$. The use of the outer product $\nabla u_\sigma \otimes \nabla u_\sigma := \nabla u_\sigma \nabla u_\sigma^{\mathsf T}$ as a structure descriptor aims at making $J_\rho$ insensitive to noise and sensitive to changes in orientation only, that is, the sign of the gradient is not taken into account in the definition of $J_\rho$. The tensor $J_\rho$ is positive semi-definite and has two orthonormal eigenvectors $v_1 \parallel \nabla u_\sigma$ (pointing in the gradient direction) and $v_2 \parallel \nabla u_\sigma^{\perp}$ (pointing in the direction of the level lines) with corresponding eigenvalues $\mu_1 \ge \mu_2$, which can be computed as

$$\mu_{1,2} = \frac{1}{2}\left(j_{11} + j_{22} \pm \sqrt{(j_{11} - j_{22})^2 + 4 j_{12}^2}\right) \tag{8}$$

where the $j_{ik}$ are the components of $J_\rho$, that is

$$j_{11} = K_\rho * \left(\frac{\partial u_\sigma}{\partial x}\right)^2, \qquad
j_{12} = j_{21} = K_\rho * \left(\frac{\partial u_\sigma}{\partial x}\,\frac{\partial u_\sigma}{\partial y}\right), \qquad
j_{22} = K_\rho * \left(\frac{\partial u_\sigma}{\partial y}\right)^2 \tag{9}$$

The eigenvalues of $J_\rho$ describe the $\rho$-averaged contrast in the eigendirections: if $\mu_1 = \mu_2 = 0$, the image is homogeneous in this area; if $\mu_1 \gg \mu_2 = 0$, we are sitting on a straight line; and finally, if $\mu_1 \ge \mu_2 \gg 0$, we are at a corner of an object. Based on the eigenvalues, we can define the quantity

$$\mathrm{Coh} = (\mu_1 - \mu_2)^2 = (j_{11} - j_{22})^2 + 4 j_{12}^2 \tag{10}$$

as the local coherence of structures: this quantity is large for line-like structures, whereas it is small for constant areas in the image. With the derived structure tensor we consider the modified anisotropic diffusion (7) as

$$\begin{cases}
\dfrac{\partial u}{\partial t} = \operatorname{div}\bigl(D(J_\rho(\nabla u_\sigma))\,\nabla u\bigr) & \text{on } \Omega \times (0, \infty)\\
u(x, 0) = f(x) & \text{on } \Omega\\
\langle D(J_\rho(\nabla u_\sigma))\,\nabla u, n\rangle = 0 & \text{on } \partial\Omega \times (0, \infty)
\end{cases} \tag{11}$$

where the eigenvectors of the new diffusion tensor $D(J_\rho(\nabla u_\sigma)): \Omega \to \mathbb{R}^{2\times 2}$ are parallel to the ones of $J_\rho(\nabla u_\sigma)$ and its eigenvalues $\lambda_1$ and $\lambda_2$ are chosen depending on the desired enhancement method. This choice has to be made in accordance with the following assumptions, which are necessary for the well-posedness of (11), cf. [23].

Assumptions 1: For the well-posedness of (11) we make the following assumptions on $D$:

$$D \in C^\infty(\mathbb{R}^{2\times 2}, \mathbb{R}^{2\times 2}) \tag{12}$$

$$D(J) \text{ is symmetric} \tag{13}$$

$D$ is uniformly positive definite: for all $w \in L^\infty(\Omega; \mathbb{R}^2)$ with $|w(x)| \le K$ on $\bar\Omega$, there exists a positive lower bound $\nu(K)$ for all eigenvalues of $D(J_\rho(w))$. (14)

Under these assumptions Weickert proves the following theorem.

Theorem 2 [23]: The anisotropic diffusion problem (11) has a unique solution $u(x, t)$ in the distributional sense with

$$u \in C([0, T]; L^2(\Omega)) \cap L^2(0, T; H^1(\Omega)), \qquad
\frac{\partial u}{\partial t} \in L^2(0, T; H^1(\Omega))$$

Moreover, $u \in C^\infty(\bar\Omega \times (0, T])$, it depends continuously on $f$ with respect to $\|\cdot\|_{L^2(\Omega)}$, and it fulfils an extremum principle.

Remarks 2:
(i) As pointed out by Bigun [14], the structure tensor also goes by the names 'second order moment tensor', 'inertia tensor', 'outer product tensor' and 'covariance matrix'. Bigun et al. [40] introduced a generalised structure tensor by analytically extending its standard form. Recently, the structure tensor was also applied for image synthesis and inpainting by Peyré [41].
(ii) The structure tensor is also utilised for detecting interest points by the Harris corner detector [42]. The computation of the eigenvalues is avoided by instead considering the determinant and the trace of the structure tensor, which in that context is referred to as the autocorrelation matrix:

$$\det(J_\rho) = j_{11} j_{22} - j_{12}^2 = \mu_1 \mu_2, \qquad
\operatorname{trace}(J_\rho) = j_{11} + j_{22} = \mu_1 + \mu_2$$

And the corner response $R$ is defined as

$$R = \det(J_\rho) - k\,\operatorname{trace}(J_\rho)^2$$

where $k$ is chosen empirically. Harris corner points are applied as features for fingerprint segmentation by Wu et al. [37], and they are useful as interest points in many other situations because of their invariance under affine transformations [43]. During the process of selecting keypoints for SIFT features, the Harris corner response is applied for eliminating candidate points on edges (see Section 4.1 in [44]). Moreover, Harris interest points are proposed as a measure for image saliency [45], linking them to models of preattentive human visual perception.
(iii) Bazen and Gerez [25] showed that applying principal component analysis to the structure tensor (which is called the autocovariance matrix in that context) is equivalent to the averaging squared gradients method for estimating the local orientation. They define coherence as follows:

$$\widetilde{\mathrm{Coh}} = \frac{\mu_1 - \mu_2}{\mu_1 + \mu_2} = \frac{\sqrt{(j_{11} - j_{22})^2 + 4 j_{12}^2}}{j_{11} + j_{22}}$$

In our tests, both definitions of the coherence led to very similar results when applying coherence-enhancing diffusion filtering to the images of the FVC2004 databases. In the following, we shall concentrate on the first definition (10) of the coherence Coh.
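For illustration, the scalar quantities of this appendix (the structure tensor entries (9), the eigenvalues (8), the two coherence definitions and the Harris corner response) can be computed as in the following NumPy sketch. The Gaussian smoothing helper, the sigma/rho defaults and the small eps guard against division by zero are our choices for this sketch, not taken from the paper:

```python
import numpy as np

def gauss_smooth(img, sigma):
    """Separable Gaussian smoothing with a truncated kernel (NumPy only)."""
    if sigma <= 0:
        return img.copy()
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def structure_tensor_eigenvalues(img, sigma=1.0, rho=2.0):
    """Entries j11, j12, j22 of (9) and eigenvalues mu1 >= mu2 of (8)."""
    us = gauss_smooth(img.astype(float), sigma)   # u convolved with K_sigma
    ux, uy = np.gradient(us)
    j11 = gauss_smooth(ux * ux, rho)
    j12 = gauss_smooth(ux * uy, rho)
    j22 = gauss_smooth(uy * uy, rho)
    root = np.sqrt((j11 - j22)**2 + 4.0 * j12**2)
    mu1 = 0.5 * (j11 + j22 + root)
    mu2 = 0.5 * (j11 + j22 - root)
    return mu1, mu2

def coh_squared_difference(mu1, mu2):
    """Coherence as in (10): Coh = (mu1 - mu2)^2."""
    return (mu1 - mu2)**2

def coh_normalised(mu1, mu2, eps=1e-12):
    """Bazen-Gerez coherence (mu1 - mu2)/(mu1 + mu2), bounded by 1;
    eps avoids division by zero in homogeneous regions."""
    return (mu1 - mu2) / (mu1 + mu2 + eps)

def harris_response(mu1, mu2, k=0.05):
    """Harris corner response R = det - k * trace^2 via the eigenvalues:
    det = mu1 * mu2, trace = mu1 + mu2."""
    return mu1 * mu2 - k * (mu1 + mu2)**2
```

Near a straight edge, mu1 is much larger than mu2 (which is close to zero), so the normalised coherence approaches 1 while the Harris response stays small or negative, matching the line-versus-corner interpretation given above.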
