Biometric Template Protection using Turbo Codes and Modulation Constellations

Emanuele Maiorana, Daniele Blasi, Patrizio Campisi
Department of Applied Electronics, Università Roma Tre
Via della Vasca Navale 84, 00146, Roma, Italy
[email protected], [email protected]

Abstract—In this paper we propose a general biometric cryptosystem framework inspired by the code-offset sketch. Specifically, the properties of digital modulation and turbo codes with soft-decoding are exploited to design a template protection system able to guarantee high performance in terms of both verification rates and security, also when dealing with biometrics characterized by a high intra-class variability. The effectiveness of the presented approach is evaluated by applying it, as a case study, to on-line signature recognition.

I. INTRODUCTION


Security and privacy concerns related to the use of biometric data are two key issues for the successful deployment of automatic biometrics-based recognition systems. In fact, individuals' biometrics are limited in number, and can hardly be replaced if stolen or copied. Biometric data can also reveal significant information regarding people's personality and health, or be employed to perform an unauthorized tracking of the enrolled subjects across multiple databases [1]. In order to improve the public acceptance of biometrics, considerable effort has recently been devoted to designing template protection schemes able to properly address the aforementioned concerns, by defining biometric cryptosystems or feature transformation approaches [2].

A key-binding method stemming from the general concept of code-offset sketches [3] is proposed in this paper. The properties of digital modulation and turbo codes with soft-decoding are exploited here to design a template protection system able to guarantee high performance in terms of both verification rates and security. A practical implementation of the proposed general scheme is given with application to on-line signature biometrics.

Specifically, the code-offset framework is outlined in Section II, together with its similarities with the digital modulation paradigm and the main limitations of its most common application, the fuzzy commitment scheme [4]. The proposed biometric cryptosystem is then described in Section III, where its application to a protected on-line signature-based recognition system is also presented as a proof of concept, with no loss of generality. A detailed discussion on the security of the presented scheme is provided in Section IV, while experimental results proving the effectiveness of the proposed approach are eventually given in Section V.

II. CODE-OFFSET FRAMEWORK VS. THE DIGITAL MODULATION PARADIGM

Let us indicate with x a biometric template vector and with c ∈ C a generic codeword belonging to a code space C. A code-offset v, v = f(x, c), can be defined as the output of a function f which binds x with c. A properly designed function f should generate offsets v whose knowledge brings as little information as possible on either c or x. Under this constraint, v can be made publicly available while still guaranteeing the desired security for the considered biometric template. In the recognition phase, a function g is employed to revert the binding operation once a fresh biometric acquisition x̃ is available, thus obtaining c̃ = g(x̃, v). When x̃ = x, the original codeword can be retrieved, since c̃ = c, thus authenticating the presented subject. However, in a practical scenario, two acquisitions of the same biometrics are likely to be different, thus yielding c̃ ≠ c. The possibility of recovering the original codeword is therefore determined by the properties of the employed code C and by the distance between c̃ and c. As an illustrative example, the code-offset scheme is depicted in Figure 1, where, for the sake of simplicity, a 2D data vector x is considered and the binding function f is given by the vector sum operator. It can be seen that if the difference x − x̃ lies within the gray-shaded area around c, representing the correct decision region of the employed code, then c can be recovered by selecting the codeword at minimum distance from the reconstructed vector c̃.

The aforementioned scenario strongly resembles the digital communication scenario for transmission over noisy channels [5], where a digital signal modulated according to a given scheme (e.g., quadrature amplitude modulation (QAM), phase-shift keying (PSK), etc.) is typically represented, at sampling instants, as a single complex symbol belonging to a two-dimensional scatter diagram, namely the modulation constellation.


Fig. 1. Code-offset general concept. Dotted grid points represent the code C.


At the receiving side, a potentially corrupted version of the transmitted signal is demodulated, and a decision about the symbol which was transmitted is finally taken by selecting the constellation point closest to the received symbol. With reference to Figure 1, the code C can be seen as the employed constellation, the codeword c as the symbol which has been transmitted, the difference x − x̃ as the noise affecting the signal, and c̃ as the received corrupted symbol to be processed for recovering the original message.
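To make the geometric picture of Fig. 1 concrete, the following minimal Python sketch (not part of the original paper) uses a toy "code" given by a scaled 2-D integer lattice, vector-sum binding for f, and nearest-codeword decoding; the grid spacing q and the noise level are illustrative assumptions.

```python
# Toy illustration of the code-offset idea sketched in Fig. 1 (not the scheme of this
# paper): the "code" is a scaled 2-D integer lattice, f is the vector sum, and decoding
# picks the nearest lattice point. Grid spacing q and noise level are arbitrary choices.
import numpy as np

q = 4.0                                    # lattice spacing: decision regions are q x q squares

def bind(x, c):
    return x + c                           # v = f(x, c): hide the codeword behind the template

def unbind(x_new, v):
    return v - x_new                       # c~ = g(x~, v) = c + (x - x~)

def nearest_codeword(c_tilde):
    return q * np.round(c_tilde / q)       # minimum-distance decision on the lattice

rng = np.random.default_rng(0)
c = q * rng.integers(-5, 5, size=2).astype(float)    # enrolled codeword (a grid point)
x = rng.normal(size=2)                               # enrollment template
x_new = x + rng.normal(scale=0.5, size=2)            # fresh, noisier acquisition of the same trait

v = bind(x, c)                                       # publicly storable offset
recovered = nearest_codeword(unbind(x_new, v))
print(np.array_equal(recovered, c))                  # True while ||x - x_new|| stays inside the decision region
```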


A practical implementation of the code-offset scheme is the fuzzy commitment cryptographic protocol [4], which has been used in template protection approaches for several biometric modalities. With reference to Figure 1, in this case C is a binary linear error correcting code, the biometric template vector x is a binary vector with the same size as the codewords in C, and both functions f and g are given by the XOR operator. Due to the differences between the biometrics acquired in the enrollment and in the recognition stages, the binary vector c̃ can be considered as a corrupted version of c, that is, a codeword affected by errors. Common applications of the fuzzy commitment rely on linear block codes such as BCH codes, which allow choosing among different error correcting capabilities (ECCs) [6]. However, the following drawbacks are typically experienced:




• the use of linear block codes requires binary biometric templates having the same size as the employed codewords: some bits have to be discarded, or bit stuffing has to be performed as in [7]. A loss in discriminability may be experienced in the former case, while in the latter case a severe leakage of information about c can result from the observation of the code-offset v. It is therefore desirable to employ codewords whose length can be adapted to the length of x, and not vice versa;

• linear block codes often do not provide ECCs high enough to properly manage biometric data with significant intra-class variability, thus resulting in high False Rejection Rates (FRRs). In these cases it could be possible to exploit known statistics about the existing differences between the considered biometrics to design a specific code adapted to the biometric properties. This has been exploited in [8], where Hamming and Reed-Solomon codes are jointly employed to manage, respectively, the background and the burst differences deriving from the comparison of two iris templates. However, as observed in [9], a random permutation shall be applied to the considered biometric representation before binding it to a codeword, in order to prevent possible decodability attacks which can affect the users' privacy. Such random permutation alters the patterns of statistical differences between x and x̃, thus making it unfeasible to design a code on the basis of specific characteristics of the considered biometrics. Moreover, the use of a combination of codes may leave the system vulnerable to statistical attacks exploiting the histograms of the computed offsets [10], and should therefore be avoided. Alternatively, a user-specific selection of the most stable features of a biometric representation can be performed [6], thus reducing the intra-class variability of the employed data. However, additional helper data have to be employed to do this, thus exposing discriminative information which can aid an attacker in tracking a user across different databases [11].

The proposed biometric cryptosystem relies on the presented similarities between code-offset sketches and constellation modulation to overcome the aforementioned limitations. Moreover, it allows setting the template size without any constraint and managing data characterized by a high intra-class variability without exploiting specific characteristics of the considered biometrics. These tasks are achieved by employing turbo codes, which provide high ECCs, while constellation modulations are employed to let the codes operate in soft-decoding modality, thus further improving their correction capabilities and providing a very flexible framework with different operating conditions.

III. PROPOSED CRYPTOSYSTEM

The proposed framework for biometric template protection is depicted in Figure 2, where x represents a biometric template vector and b a randomly generated binary message of length k. The general characteristics of the proposed framework are discussed in Section III-A, while a practical implementation of the proposed scheme is presented in Section III-B. Eventually, its application to on-line signature template protection is given in Section III-C as a case study.

A. General description

As in a digital communication scenario, where proper channel coding has to be performed to correct the errors which may affect the transmitted data, error correcting codes need to be employed to manage the intra-class variability of biometric data. Specifically, b is turbo encoded into an n-bit string, which is then modulated into s symbols of a constellation with L points, with $s = n/\log_2 L$. Each codeword c of the employed code-offset scheme is therefore given by s complex symbols which can assume L determinations, so that $c \in C \subset \mathbb{C}^s$. The offset v is then computed by means of an operator f which takes as arguments both the codeword c and the biometric template x, thus obtaining $v = f(x, c) \in V \subset \mathbb{C}^s$, V being the space in which v is defined. The operator f has to be defined in accordance with the employed constellation and in such a way that a function g, which allows recovering c as c = g(x, v), exists. This latter function is employed in the recognition stage, where a fresh biometrics x̃ is used together with v to obtain a potentially corrupted codeword $\tilde{c} = g(\tilde{x}, v) \in \mathbb{C}^s$, composed of s symbols each lying in the complex plane. A joint demodulation and decoding process can then be applied to c̃ to obtain b̃. The hashed versions of b and b̃ are then compared to determine the outcome of the verification process.

It is worth observing that the use of turbo codes has already been suggested within the framework of the fuzzy commitment in [12], where it is shown that they outperform BCH codes in terms of secret-key vs. privacy-leakage rate.
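The enrollment/verification flow of Fig. 2 can be summarized by the following hedged Python sketch; turbo_encode, turbo_soft_decode, psk_modulate, bind and unbind are hypothetical placeholders passed in as callables (no actual turbo codec is implemented here), and only the protocol-level logic is shown.

```python
# Protocol-level sketch of the enrollment/verification flow of Fig. 2.
# turbo_encode, turbo_soft_decode, psk_modulate, bind and unbind are hypothetical
# callables standing in for a real turbo codec, mapper and binding function.
import hashlib
import numpy as np

def enroll(x, k, L, turbo_encode, psk_modulate, bind):
    b = np.random.randint(0, 2, size=k, dtype=np.uint8)   # random secret message b (k bits)
    coded = turbo_encode(b)                                # n coded bits
    c = psk_modulate(coded, L)                             # s = n / log2(L) complex symbols
    v = bind(x, c)                                         # public code-offset v = f(x, c)
    h = hashlib.sha256(b.tobytes()).hexdigest()            # only a hash of b is stored
    return v, h

def verify(x_new, v, h, L, turbo_soft_decode, unbind):
    c_tilde = unbind(x_new, v)                             # corrupted codeword c~ = g(x~, v)
    b_tilde = turbo_soft_decode(c_tilde, L)                # joint soft demodulation + decoding
    return hashlib.sha256(np.asarray(b_tilde, dtype=np.uint8).tobytes()).hexdigest() == h
```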


Fig. 2. Proposed protection framework based on modulation constellations and turbo codes.


Turbo codes have also been used in [13] in a fingerprint template protection scheme. However, in both cases only hard decoding on binary data has been performed, thus limiting the capabilities of turbo codes, which lose any information regarding the deviation of the received symbol from its original value, making it impossible to evaluate the reliability of the performed estimates. Conversely, by resorting to the digital modulation paradigm, in the proposed approach we evaluate the use of soft-decoding [14], which has never been previously considered in a biometric cryptosystem. With reference to Figure 2, the demodulating and decoding processes are performed jointly according to the implementation in [15]: the decoder produces estimates of the probability of correct determination of a received symbol, which are then exploited in an iterative way by determining the value of an originally transmitted symbol on the basis of the estimated information regarding the other received symbols. The improvement in ECCs resulting from the adoption of soft-decoding [5] is discussed in Section V.

Another feature of turbo codes which is beneficial within a key-binding scenario lies in the possibility of selecting among a wide collection of possible interleavers and puncturing patterns when specifying an encoder. Specifically, by leveraging the employed puncturing pattern it is possible to easily modify the length of the produced codewords, thus making the template length independent of external constraints such as the codeword length in the classical implementation of the fuzzy commitment. Moreover, the interleavers employed in turbo codes inherently provide the means to prevent possible decodability attacks which can affect the users' privacy [16], since they introduce a random permutation in the binding process which is beneficial for the privacy protection purpose, as pointed out in [9].

As for the binding function f employed in the proposed scheme, it has to be remarked that it should be defined in such a way that an attacker could not retrieve information about c from the knowledge of publicly available data, that is, the code-offset v, the operator f, and the statistics of the biometric representation x. This requisite is, for instance, not satisfied by the sum operator employed in Figure 1, where the code-offset is exemplified: if an attacker is aware of the range of admissible values for x, he could narrow the set of possible originating codewords c to just those closest to the known value of v, thus recovering some information on c. A practical implementation of the proposed general framework is given in Section III-B, where feasible functions f(·) and g(·) are given. Its application to on-line signature template protection is then presented in Section III-C, while an in-depth security analysis is performed in Section IV.
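As an illustration of the kind of soft information such a joint demodulator/decoder can work with (a sketch, not the scheme of [15]), the function below computes per-bit log-likelihood ratios for one received L-PSK symbol, assuming the residual offset behaves like additive complex Gaussian noise with variance sigma2 and a natural (non-Gray) bit labeling.

```python
# Sketch of the soft information a demodulator can pass to a soft-input decoder:
# per-bit log-likelihood ratios for one received L-PSK symbol, assuming the
# residual offset behaves like additive complex Gaussian noise of variance sigma2.
import numpy as np

def psk_bit_llrs(r, L, sigma2):
    """Return log P(bit=0)/P(bit=1) for each of the log2(L) bits of one symbol r."""
    m = int(np.log2(L))
    points = np.exp(2j * np.pi * np.arange(L) / L)       # L-PSK constellation
    labels = [np.array(list(np.binary_repr(i, m)), dtype=int) for i in range(L)]
    logls = -np.abs(r - points) ** 2 / sigma2            # Gaussian log-likelihood of each point
    llrs = []
    for bit in range(m):
        mask = np.array([lab[bit] for lab in labels])    # value of this bit for each point
        l0 = np.logaddexp.reduce(logls[mask == 0])       # soft aggregate over points with bit = 0
        l1 = np.logaddexp.reduce(logls[mask == 1])
        llrs.append(l0 - l1)
    return np.array(llrs)

# A hard decision would keep only argmax(logls); the LLRs additionally convey reliability.
print(psk_bit_llrs(0.9 + 0.1j, L=4, sigma2=0.2))
```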

B. Practical implementation

The constellation and the binding function f employed in a practical implementation of the proposed cryptosystem are described hereafter. Although our approach can in principle be applied to different modulation schemes, in this paper we resort to PSK modulation. Specifically, the codeword c can be represented through s symbols, each belonging to a set of L points lying on the unit circle in the complex domain. The employed function f, defined according to the adopted constellation, first maps the range of admissible values of each element in x = {x_i}, 1 ≤ i ≤ s, to the interval [−π, π). The data acquired in the enrollment stage can be used to estimate the inter-class mean vector µ and the inter-class standard deviation σ of the employed biometric representation x. A clamped feature vector χ = {χ_i} is therefore obtained from x as follows:

$$\chi_i = \begin{cases} x_i & m_i \le x_i \le M_i \\ m_i & x_i < m_i \\ M_i & x_i > M_i \end{cases} \qquad (1)$$

where m = µ − ασ and M = µ + ασ, respectively, and α is a system parameter. The values of χ are then linearly mapped to the interval [−π, π), thus obtaining $\phi = \frac{2\pi}{M - m}(\chi - m) - \pi$. In order to properly analyze the security of the proposed framework in Section IV, and to evaluate in Section V the increase in ECC due to the use of soft-decoding in turbo decoders, a vector ϕ is then computed by uniformly quantizing each coefficient in φ to D possible values belonging to the set $\mathcal{D} = \{\frac{\pi}{D} + \frac{2\pi}{D}d\}$, with $d = -\frac{D}{2}, -\frac{D}{2}+1, \ldots, \frac{D}{2}-1$, assuming that each value in φ is mapped to the closest value in $\mathcal{D}$. The binding of x and c can then be performed by applying angular shifts depending on x to the original constellation points, while an opposite shift is performed during the recognition phase:

$$v = f(x, c) = c \cdot e^{i\varphi}, \qquad \tilde{c} = g(\tilde{x}, v) = v \cdot e^{-i\tilde{\varphi}}, \qquad (2)$$

where ϕ is obtained by normalizing and quantizing x in the enrollment stage, and ϕ̃ by applying the same processing to x̃ in the recognition stage. The codomain V of the function f therefore consists of s points lying on the unit circle in the complex plane.

It is worth observing that the normalization process and the quantization set $\mathcal{D}$ are chosen in order to make it hard to state which constellation point has generated a given symbol in v, thus preserving the security of the proposed system. Specifically, in order not to reduce the number of possible constellation points from which each symbol in the offset v could have originated, D has to be no smaller than L, with $D = L \cdot 2^l$ in the proposed implementation, where $l \in \mathbb{N}_0$, $\mathbb{N}_0$ being the set of non-negative integers. The dependence of the verification performance on the parameters L, D, and α is discussed in Section V.
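A minimal NumPy sketch of the normalization, quantization and binding of Eqs. (1)-(2) is given below; mu, sigma, alpha, L and D are assumed to be given system parameters, and the codeword c is assumed to be already available as s L-PSK symbols.

```python
# Minimal sketch of the binding in Eqs. (1)-(2): clamp, map to [-pi, pi),
# quantize to the D-level set, and rotate the PSK codeword symbols.
# mu, sigma, alpha and D are assumed to be given system parameters.
import numpy as np

def quantized_phase(x, mu, sigma, alpha, D):
    m, M = mu - alpha * sigma, mu + alpha * sigma
    chi = np.clip(x, m, M)                               # Eq. (1): clamp to [m, M]
    phi = 2 * np.pi * (chi - m) / (M - m) - np.pi        # linear map to [-pi, pi)
    d = np.clip(np.floor(phi / (2 * np.pi / D)), -D // 2, D // 2 - 1)
    return np.pi / D + 2 * np.pi * d / D                 # closest element of the quantization set

def bind(x, c, mu, sigma, alpha, D):
    return c * np.exp(1j * quantized_phase(x, mu, sigma, alpha, D))      # v = c * e^{i phi}

def unbind(x_new, v, mu, sigma, alpha, D):
    return v * np.exp(-1j * quantized_phase(x_new, mu, sigma, alpha, D)) # c~ = v * e^{-i phi~}
```

When x̃ is close to x, most coefficients of ϕ̃ coincide with those of ϕ, so the corresponding symbols of c̃ equal those of c; the remaining symbols carry only a residual rotation, which is exactly the kind of deviation a soft-input turbo decoder can exploit.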

C. Application to On-line Signature Recognition

As a proof of concept, the proposed framework is here applied to the protection of on-line signature templates. Template protection within the framework of on-line signature recognition has already been taken into account in several contributions, such as [17], where a feature transformation approach relying on convolutions has been employed, or [18], where the fuzzy vault scheme has been used. Fuzzy commitment has been adopted for on-line signature template protection in [6], where a parametric signature representation is considered, and in [19], where the employed signature representation is based on a set of discrete sequences.

In this paper we rely on the signature modeling based on Universal Background Models (UBMs) [20] employed by the authors in [21]. Specifically, UBMs are statistical descriptors employed to represent person-independent biometric observations. Once estimated, the biometrics of a specific user can be represented by adapting the well-trained parameters of the UBM to the specific characteristics of the acquired user's traits. When employed for recognition purposes, a likelihood-ratio evaluation of the data provided during recognition against the user-adapted models estimated in enrollment is typically performed [20]. However, in [21] it has been shown that UBMs can also be employed to generate a parametric feature representation, even if characterized by a high intra-class variability. Such variability is managed in [21] by computing different projections according to several UBMs, and then retaining for each user the most stable components. However, user-specific helper data containing the indexes of the most relevant coefficients have to be stored, thus exposing discriminative information which could be exploited by an attacker to track a user across different databases [11]. In order to counteract this side effect and to provide the required renewability, different UBMs for different systems are employed in [21].
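As background for the adaptation step mentioned above, the following sketch shows the classical MAP adaptation of the component means of a GMM-based UBM, in the spirit of [20]; it is only an illustration of the general mechanism the representation of [21] builds upon, with the relevance factor r and the use of scikit-learn's GaussianMixture being assumptions of this sketch.

```python
# Sketch of classical MAP mean adaptation of a GMM-based UBM (as in [20]); the fitted
# UBM (a sklearn GaussianMixture) and the relevance factor r are assumed to be given.
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(ubm: GaussianMixture, X: np.ndarray, r: float = 16.0) -> np.ndarray:
    """Adapt the UBM component means to the enrollment samples X (one row per observation)."""
    gamma = ubm.predict_proba(X)                            # responsibilities, (n_samples, n_components)
    n_k = gamma.sum(axis=0)                                 # soft counts per component
    E_k = gamma.T @ X / np.maximum(n_k, 1e-12)[:, None]     # per-component sample means
    alpha = n_k / (n_k + r)                                 # data/prior balance per component
    return alpha[:, None] * E_k + (1.0 - alpha)[:, None] * ubm.means_
```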

IV. SECURITY ANALYSIS

The security of the proposed biometric template protection system can be evaluated by estimating the conditional entropy H(b|v), which measures the uncertainty about the secret message b once the code-offset v has been made publicly available [22]. On the other hand, the privacy of the proposed biometric template protection system can be evaluated by estimating the conditional entropy H(x|v). However, within the proposed approach the evaluation of H(x|v) is equivalent to the evaluation of H(ϕ|v), ϕ being a quantized version of x. In the presented code-offset framework, H(ϕ|v) is equal to H(b|v): in fact, due to (2), once v is known the knowledge of ϕ directly provides b, and knowing b likewise reveals the biometric information ϕ.

It is worth noting that b can be retrieved once k bits out of the n of its encoded version are known [22]. We can therefore assume that the knowledge of $z = k/\log_2 L$ constellation points of the modulated codeword c, as well as the knowledge of z coefficients of ϕ once v is known, would entirely reveal b. The security of the considered system can then be evaluated by observing that $H(b|v) = \min_{Z \in \mathcal{Z}} \{H(\varphi^Z|v)\}$, where $\varphi^Z$ is a string generated from ϕ by selecting only z coefficients out of the available s ones, and $\mathcal{Z}$ is the ensemble of all possible sets Z of z coefficients, Z ⊂ {1, . . . , s}. In order to practically compute $H(\varphi^Z|v)$, we can resort to an approximation based on second-order dependency trees [23], by expressing the probability $P(\varphi^Z)$ as follows:

$$\hat{P}(\varphi^Z) = \prod_{i=1}^{z} P\big(\varphi^Z_{u_i} \,\big|\, \varphi^Z_{t(u_i)}\big), \qquad 1 \le t(u_i) < u_i, \qquad (3)$$

where u = {u_i}, 1 ≤ i ≤ z, is a permutation of the indexes [1, 2, . . . , z], t(u_1) = u_1, and $P(\varphi^Z_{u_1}|\varphi^Z_{t(u_1)}) = P(\varphi^Z_{u_1})$. It can be shown that, given the assumption in (3), the conditional entropy $H(\varphi^Z|v)$ can be approximated as

$$\hat{H}(\varphi^Z|v) = \sum_{i=1}^{z} \big[ H(\varphi^Z_{u_i}, \varphi^Z_{t(u_i)}|v) - H(\varphi^Z_{t(u_i)}|v) \big]. \qquad (4)$$

As in [23], the best approximation can be obtained by minimizing the Kullback-Leibler distance between the real and the approximated distribution of $\varphi^Z|v$, so as to compute

$$\hat{H}(\varphi^Z|v) = \min_{\{u\}} \sum_{i=1}^{z} \big[ H(\varphi^Z_{u_i}, \varphi^Z_{t(u_i)}|v) - H(\varphi^Z_{t(u_i)}|v) \big]. \qquad (5)$$

The solution of the minimization problem in (5) requires determining the minimum spanning tree in a graph whose vertices are connected by edges with weights $W(i_1, i_2) = H(\varphi^Z_{i_1}, \varphi^Z_{i_2}|v) - H(\varphi^Z_{i_2}|v)$, with 1 ≤ i_1, i_2 ≤ z. It has to be noted that, given the assumption in (3) and the offset v being generated as in (2), we can derive $W(i_1, i_2) = H(\varphi^Z_{i_1}, \varphi^Z_{i_2}|v^Z_{i_1}, v^Z_{i_2}) - H(\varphi^Z_{i_2}|v^Z_{i_1}, v^Z_{i_2})$, the considered ordering u being therefore relevant also for the computation of $H(\varphi^Z_{i_2}|v)$. Moreover, in general it is possible that $W(i_1, i_2) \neq W(i_2, i_1)$, so the edges are directed, and therefore the algorithm proposed in [24] has to be employed to find the optimum branchings for this kind of graph. Having available a testing dataset to compute the aforementioned entropies, the following procedure can therefore be followed to compute the weights $W(i_1, i_2)$ for each pair of coefficients $\{\varphi^Z_{i_1}, \varphi^Z_{i_2}\}$:

• for each specific couple of offset coefficients $(\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2})$, the corresponding possible values of $(\varphi^Z_{i_1}, \varphi^Z_{i_2})$ are considered (assuming that the symbols of the codewords in $c^Z$ are equiprobable), and the probability $P(\varphi^Z_{i_1}, \varphi^Z_{i_2}|\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2})$ is evaluated by counting the occurrences of admissible values. It is worth observing that, for every $i_1$, even if $\varphi^Z_{i_1}$ could assume D values as described in Section III-B, only L values remain admissible once $\bar{v}^Z_{i_1}$ is given: each admissible value is in fact the offset from one of the L possible constellation points to $\bar{v}^Z_{i_1}$. The probability $P(\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2})$ is also estimated by counting the occurrences of the admissible values, and the joint conditional entropy $H(\varphi^Z_{i_1}, \varphi^Z_{i_2}|v^Z_{i_1}, v^Z_{i_2})$ is obtained as

$$H(\varphi^Z_{i_1}, \varphi^Z_{i_2}|v^Z_{i_1}, v^Z_{i_2}) = \sum_{(\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2})} P(\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2}) \cdot H(\varphi^Z_{i_1}, \varphi^Z_{i_2}|\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2}); \qquad (6)$$

• the probabilities $P(\varphi^Z_{i_1}|\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2})$ and $P(\varphi^Z_{i_2}|\bar{v}^Z_{i_1}, \bar{v}^Z_{i_2})$ are evaluated by saturating the measures computed at the previous step, making it then possible to compute $H(\varphi^Z_{i_1}|v^Z_{i_1}, v^Z_{i_2})$ and $H(\varphi^Z_{i_2}|v^Z_{i_1}, v^Z_{i_2})$;

• once the weights $W(i_1, i_2)$ and $W(i_2, i_1)$ are computed for every pair $(i_1, i_2)$, the optimum branchings algorithm can be employed to determine the permutation u = {u_i}, 1 ≤ i ≤ z, resulting in the best approximation of $H(\varphi^Z|v)$.

The results of the security analysis for the proposed approach applied to a UBM-based on-line signature representation are given in Section V, where the effects on security of the selection of the system parameters L, D and α are discussed.
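One possible realization of the above procedure (a sketch under several assumptions, not the authors' code) is given below: the quantized phases Phi and the offsets V are assumed to be available as integer-coded arrays of shape (n_samples, z) for an already selected subset Z, empirical conditional entropies are obtained by counting, and Edmonds' optimum branching is delegated to networkx; the handling of the root term, taken here to contribute its own conditional entropy, is also a choice of this sketch.

```python
# Hedged sketch of the entropy estimation behind Eqs. (3)-(6): empirical conditional
# entropies from a testing set of quantized phases Phi and offsets V (integer-coded,
# shape n_samples x z), followed by Edmonds' optimum branching via networkx.
import numpy as np
import networkx as nx
from collections import Counter

def cond_entropy(a, b):
    """Empirical H(A|B) in bits, for discrete per-sample values a, b (ints or tuples)."""
    joint, marg, n = Counter(zip(a, b)), Counter(b), len(a)
    return -sum(c / n * np.log2((c / n) / (marg[ab[1]] / n)) for ab, c in joint.items())

def approx_secret_entropy(Phi, V):
    z = Phi.shape[1]
    pairs = lambda i, j: (list(zip(Phi[:, i], Phi[:, j])), list(zip(V[:, i], V[:, j])))
    G = nx.DiGraph()
    for i1 in range(z):
        for i2 in range(z):
            if i1 == i2:
                continue
            a, b = pairs(i1, i2)
            w = cond_entropy(a, b) - cond_entropy(list(Phi[:, i2]), b)   # W(i1, i2), counts as in Eq. (6)
            G.add_edge(i2, i1, weight=w)                                  # directed edge parent -> child
    T = nx.minimum_spanning_arborescence(G)                               # Edmonds' algorithm [24]
    root = next(n for n in T.nodes if T.in_degree(n) == 0)
    h_root = cond_entropy(list(Phi[:, root]), list(V[:, root]))           # root contributes H(phi_root | v_root)
    return h_root + sum(d["weight"] for _, _, d in T.edges(data=True))
```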

V. EXPERIMENTAL RESULTS AND CONCLUSIONS

The performance of the proposed protection scheme is evaluated on the on-line signature UBM-based parametric representation introduced in [21]. Specifically, 4800 projections are computed for each signature belonging to the MCYT on-line signature database, which comprises signatures taken from 100 subjects, with 25 genuine signatures and 25 skilled forgeries for each user. During the enrollment, ten signatures are taken from each user to generate the biometric template x as the mean of the available projections.

In order to verify the effectiveness of the proposed protection scheme, it is beneficial to first evaluate the performance, expressed in terms of both recognition accuracy and security, achievable when using the UBM framework described in Section III-C for on-line signature modeling and the fuzzy commitment for template protection. Specifically, the approach in [21] has been applied, where the obtained templates are binarized with respect to their inter-class mean, and BCH coding has been used to provide error correction. Moreover, a user-specific selection of the most stable projections has to be performed to guarantee acceptable recognition rates. In order to employ the same feature set for each user, thus avoiding any privacy leakage from additional user-specific helper data, an ordering of the available projections should be determined during a training phase to select those providing the best possible recognition rates. Specifically, if there is the need to keep the FRR as low as possible, the available 4800 coefficients should be sorted according to their stability in the binarization process. Table I reports the performance obtained for different numbers s of features selected according to this criterion, when using 30 subjects for training and the remaining 70 for testing. As can be seen, BCH codes do not provide enough ECC to manage the intra-class variability of the considered features, thus resulting in high FRR, although the False Acceptance Rate (FAR) remains low. Also the security achievable with BCH codes is unacceptable, the lengths of the admissible secrets b being very small, with a corresponding entropy $H(\varphi^Z|v)$, estimated as described in Section IV, even lower.

TABLE I
Verification (in %) and security (range of estimated values) performance for different values s of selected features, for a fuzzy commitment approach with BCH or turbo codes.

  s   | BCH codes: FRR / FAR / H(ϕ^Z|v) | Turbo codes: FRR / FAR / H(ϕ^Z|v)
 511  | 50.0 / 0.3 / 8.7-9.5            | 4.6 / 8.5 / 22.9-25.8
      | 63.6 / 0.2 / 16.5-18.2          | 7.5 / 5.2 / 27.8-31.4
      | 77.2 / 0.1 / 24.3-26.7          | 12.6 / 3.1 / 34.4-38.7
 1023 | 61.5 / 0.2 / 9.6-10.5           | 7.4 / 3.9 / 50.7-57.2
      | 69.0 / 0.1 / 13.9-15.2          | 14.1 / 2.3 / 60.6-68.3
      | 76.4 / 0.1 / 22.6-24.8          | 22.4 / 1.2 / 72.1-81.2
 2047 | 77.8 / 0.0 / 10.4-11.4          | 15.0 / 1.6 / 108.1-121.9
      | 84.5 / 0.0 / 20.0-21.9          | 25.3 / 0.9 / 124.4-140.4
      | 89.1 / 0.0 / 29.6-32.4          | 36.6 / 0.5 / 147.4-166.2
 4095 | 93.6 / 0.0 / 11.3-12.4          | 32.4 / 0.4 / 221.5-251.1
      | 94.5 / 0.0 / 16.5-18.2          | 48.6 / 0.2 / 256.2-290.4
      | 95.4 / 0.0 / 27.0-29.6          | 64.5 / 0.1 / 302.5-342.9

(The three rows for each value of s refer to the different ECCs considered.)

Table I also shows the improvement achievable when using turbo codes with different ECCs on the same data, instead of BCH codes. The security in this case is much higher; however, the FRR still remains too high, even when using turbo codes with the highest available ECC. It is worth observing that the fuzzy commitment used with turbo codes can be considered as a special case of the proposed framework, in which L = 2 and D = 2.

The proposed framework can be successfully employed in order to guarantee better performance in terms of FAR, FRR, and security. Specifically, the cases with s = {2047, 4095} UBM coefficients selected according to the previously specified ordering are analyzed in the following, since they permit longer key values and therefore better security. Moreover, only the turbo code with the highest available ECC is considered, being able to guarantee the best possible performance in terms of FRR. Table II summarizes the outcomes of the experiments performed by varying the system parameters L, D, and α. The best verification performance obtained for each choice of s, L, and D is highlighted in bold.

TABLE II
Verification (in %) and security (range of estimated values) performance for s = {2047, 4095}, when employing the proposed protection framework.

  s   | L | α    | D=4: FRR / FAR / H(ϕ^Z|v) | D=8: FRR / FAR / H(ϕ^Z|v)
 2047 | 2 | 0.40 | 1.1 / 36.5 / 75.9-95.1    | 0.1 / 92.1 / 57.3-74.2
      |   | 0.45 | 2.1 / 22.3 / 79.8-99.0    | 1.3 / 67.2 / 61.0-78.5
      |   | 0.50 | 4.7 / 12.9 / 84.5-103.9   | 4.5 / 34.5 / 65.1-84.0
      |   | 0.55 | 8.2 / 8.1 / 87.9-107.6    | 15.0 / 13.2 / 69.3-88.9
      | 4 | 0.30 | 1.4 / 17.3 / 177.3-208.2  | 0.0 / 98.1 / 135.8-150.2
      |   | 0.35 | 6.1 / 8.7 / 184.1-214.7   | 0.5 / 74.5 / 126.2-161.1
      |   | 0.40 | 11.6 / 4.8 / 190.9-222.0  | 6.1 / 33.2 / 136.6-170.4
      |   | 0.45 | 21.4 / 2.0 / 195.4-226.9  | 17.2 / 14.1 / 147.7-180.6
      | 8 | 0.25 | -                         | 0.1 / 97.2 / 211.6-255.3
      |   | 0.30 | -                         | 1.8 / 58.2 / 225.7-268.3
      |   | 0.35 | -                         | 18.4 / 13.1 / 239.8-283.4
      |   | 0.40 | -                         | 55.4 / 2.7 / 252.8-297.4
 4095 | 2 | 0.40 | 1.6 / 15.1 / 158.1-206.1  | 0.6 / 85.3 / 117.6-164.3
      |   | 0.45 | 6.0 / 7.8 / 166.7-215.0   | 4.1 / 44.2 / 127.9-174.4
      |   | 0.50 | 14.1 / 3.8 / 174.6-222.8  | 13.8 / 14.5 / 137.1-185.2
      |   | 0.55 | 24.8 / 1.8 / 182.5-230.2  | 42.1 / 5.2 / 147.3-195.1
      | 4 | 0.30 | 4.98 / 6.1 / 364.1-439.6  | 0.1 / 94.3 / 258.4-352.6
      |   | 0.35 | 13.2 / 2.2 / 378.1-454.4  | 1.8 / 54.3 / 274.2-364.8
      |   | 0.40 | 27.6 / 0.7 / 390.2-467.8  | 15.2 / 14.2 / 290.3-380.6
      |   | 0.45 | 44.3 / 0.3 / 400.6-479.8  | 41.6 / 2.5 / 305.8-391.1
      | 8 | 0.25 | -                         | 0.2 / 82.0 / 434.3-539.6
      |   | 0.30 | -                         | 7.6 / 24.2 / 463.2-574.3
      |   | 0.35 | -                         | 41.1 / 2.4 / 492.1-605.2
      |   | 0.40 | -                         | 58.7 / 0.2 / 520.4-634.6

From the analysis of these results, the following considerations can be drawn:






• increasing α improves the FAR and the security, while the FRR worsens. This can be explained by taking into account the high intra-class variability of the considered data, which is increased by enlarging the range of values to be normalized;

• increasing the number of constellation points L improves the FAR and the security, while worsening the FRR. An increase of L, while keeping fixed the number of employed coefficients s, requires an increase of the secret key size k. Moreover, as observed in Section IV, once the offset v is known each coefficient in ϕ can assume just L values, this parameter being therefore connected with the system security. However, recognizing a legitimate user is made more difficult due to the decreased distance between adjacent constellation points;

• increasing the number of quantized values D typically improves the FRR, while the FAR and the security worsen. A high number of quantized values improves the performance of the employed turbo codes, which can exploit the information coming from the soft-decoding process to better perform the decoding. However, more information is leaked about the biometric data x. Moreover, the non-ideal distribution of the coefficients in ϕ becomes more evident: in fact, in order to provide perfect security, the available coefficients should be independent, and either uniformly distributed or periodically distributed with period 2π/L.

It is possible to observe that keeping the FRR low and providing high security are conflicting requisites. It is also worth observing that the minimum values of $H(\varphi^Z|v)$ are obtained when evaluating the entropy over the first z coefficients of the ordering computed to minimize the FRR for the fuzzy commitment, which should represent the most stable projections among the available ones.

The results in Table II show that the proposed framework is able to provide great flexibility in selecting the operating conditions, and to achieve good verification performance while providing a security greater than 300 bits. For the sake of comparison, when using the same signature modeling, the verification performance achieved by the proposed approach is quite close to that obtained in [21]. However, in the latter case user-specific helper data have to be employed and the length of the corresponding secret b is limited to 76 bits, which highlights the superiority of the proposed approach.

REFERENCES


[1] P. Tuyls, B. Skoric, and T. Kevenaar, Security with Noisy Data: Private Biometrics, Secure Key Storage and Anti-Counterfeiting. Springer, 2007.
[2] A. Jain, K. Nandakumar, and A. Nagar, "Biometric template security," EURASIP Journal on Advances in Signal Processing, Special Issue on Biometrics, 2008.
[3] Y. Dodis, L. Reyzin, and A. Smith, "Fuzzy extractors: How to generate strong keys from biometrics and other noisy data," in EUROCRYPT, Interlaken, Switzerland, May 2004.
[4] A. Juels and M. Wattenberg, "A fuzzy commitment scheme," in ACM Conf. on Computer and Communication Security, Singapore, Nov. 1999.
[5] J. Proakis, Digital Communications. McGraw-Hill, 2001.
[6] E. Maiorana, P. Campisi, and A. Neri, "User adaptive fuzzy commitment for signature templates protection and renewability," SPIE Journal of Electronic Imaging, vol. 17, no. 1, March 2008.
[7] S. Yang and I. Verbauwhede, "Secure iris verification," in IEEE ICASSP, Honolulu, Hawaii, USA, Apr. 2007.
[8] F. Hao, R. Anderson, and J. Daugman, "Combining crypto with biometrics effectively," IEEE Transactions on Computers, vol. 55, no. 9, pp. 1081–1088, 2006.
[9] E. J. C. Kelkboom, J. Breebaart, T. A. M. Kevenaar, I. Buhan, and R. N. J. Veldhuis, "Preventing the decodability attack based cross-matching in a fuzzy commitment scheme," IEEE Transactions on Information Forensics and Security, vol. 6, no. 1, pp. 107–121, 2011.
[10] A. Stoianov, T. Kevenaar, and M. van der Veen, "Security issues of biometric encryption," in IEEE TIC-STH Symp. on Information Assurance, Biometric Security and Business Continuity, Toronto, Canada, 2009.
[11] Q. Li, M. Guo, and E.-C. Chang, "Fuzzy extractors for asymmetric biometric representation," in IEEE CVPR, June 2008.
[12] T. Ignatenko and F. Willems, "On information leakage in fuzzy commitment," in SPIE Media Forensics and Security II, USA, Jan. 2010.
[13] K. Nandakumar, "A fingerprint cryptosystem based on minutiae phase spectrum," in WIFS, Seattle, WA, USA, Dec. 2010.
[14] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. 20, no. 2, pp. 284–287, Mar. 1974.
[15] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo-codes," IEEE Transactions on Communications, vol. 44, no. 10, pp. 1261–1271, Oct. 1996.
[16] K. Simoens, P. Tuyls, and B. Preneel, "Privacy weaknesses in biometric sketches," in IEEE Symposium on Security and Privacy, Oakland, CA, USA, May 2009.
[17] E. Maiorana, P. Campisi, J. Fierrez, J. Ortega-Garcia, and A. Neri, "Cancelable templates for sequence-based biometrics with application to on-line signature recognition," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 40, no. 3, pp. 525–538, 2010.
[18] M. Freire-Santos, J. Fierrez-Aguilar, and J. Ortega-Garcia, "Cryptographic key generation using handwritten signature," in SPIE Defense and Security Symposium, April 2006.
[19] E. Maiorana and P. Campisi, "Fuzzy commitment for function based signature template protection," IEEE Signal Processing Letters, vol. 17, no. 3, pp. 249–252, 2010.
[20] D. Reynolds, T. F. Quatieri, and R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, no. 1, pp. 19–41, 2000.
[21] E. Argones, E. Maiorana, J. A. Castro, and P. Campisi, "Biometric template protection using universal background models: An application to online signature," IEEE Transactions on Information Forensics and Security, vol. 7, no. 1, pp. 269–282, 2012.
[22] X. Zhou, A. Kuijper, R. Veldhuis, and C. Busch, "Quantifying privacy and security of biometric fuzzy commitment," in IEEE IJCB, 2011.
[23] C. Chow and C. Liu, "Approximating discrete probability distributions with dependence trees," IEEE Transactions on Information Theory, vol. 14, pp. 462–467, 1968.
[24] J. Edmonds, "Optimum branchings," Journal of Research of the National Bureau of Standards, vol. 71, pp. 233–240, 1967.
