Assessment of Privacy Enhancing Technologies for Biometrics

David Bissessar¹, Dmitry O. Gorodnichy¹, Alex Stoianov², and Michael Thieme³

¹ Canada Border Services Agency, Science and Engineering Directorate, Ottawa, Ontario, Canada
² Office of the Information and Privacy Commissioner of Ontario, Toronto, Ontario, Canada
³ International Biometric Group, One Battery Park Plaza, Suite 2901, NY 10004, USA
Abstract – Biometric privacy-enhancing technologies (PET) include two major groups: Biometric Encryption and Cancelable Biometrics. They enhance both the privacy and the security of a biometric system, thus embodying the Privacy by Design principles in a way that benefits the individual and minimizes the privacy risks. In this paper, the results of an independent third-party evaluation of a commercial biometric PET product are presented. The GenKey “BioCryptic® ID Management System” is evaluated against a benchmark commercial product, Neurotechnology VeriFinger version 6.3. The database contained approximately 20,000 flat fingerprint images from 1,200 subjects. For a two-finger, two-images-per-finger enrollment, the best results are: at FMR = 0.1%, FNMR = 1.54% for the PET System and FNMR = 1.74% for the benchmark system; at FMR = 0.01%, FNMR = 2.41% for the PET System and FNMR = 1.74% for the benchmark system. The results also suggest that specific subjects are substantially more likely than others to cause false matches. Security issues of the PET System are briefly discussed, but no security evaluation is performed. The study demonstrates the viability of fingerprint-based PETs from a matching accuracy perspective.
Disclaimer: Specific hardware and software products identified in this paper were used in order to perform the evaluations described in this document. In no way does identification of any commercial product, trade name, or vendor imply recommendation or endorsement by the Canada Border Services Agency, nor does it imply that the products and equipment identified are necessarily the best available for the purpose.

Index Terms— Biometric Encryption, Privacy, Performance Evaluation, Security
I. INTRODUCTION
The rapid progress of biometric technologies in the last few years and the ease with which biometric data can be acquired have resulted in the accumulation of large and varied databases of biometric information on both a national and an international scale. This trend will continue, with databases growing at an ever-increasing rate. The current maturity level of biometrics has facilitated its use in a wide range of government and private sector applications, while much work remains in devising acceptable ways of controlling the use, storage, and exchange of biometric data and personal information. Biometric datasets represent very sensitive, personally identifiable information and, when used to authenticate a subject’s transactions, they could be misused to track the subject’s actions and movements. This presents a new set of challenges regarding privacy and data safeguarding. Unlike passwords, biometric data are unique, permanent and therefore irrevocable. However, the same technology that serves to threaten or erode privacy may also be enlisted in its protection, giving rise to “privacy-enhancing technologies” (PET). This entails the use of Privacy by Design (PbD) – embedding privacy directly into technologies and business practices, resulting in privacy, security and functionality through a “positive-sum” paradigm [1]. The purpose of this paper is an independent third-party examination of one of the first commercial PET systems in biometrics, called “BioCryptic® ID Management Systems”
from GenKey. The paper begins with a brief overview of biometric PETs and then outlines how they provide a viable, privacy-protective alternative to conventional biometrics (Section II). We provide full details of our evaluation protocol (Section III). The results and discussion are presented in Section IV. Section V concludes the paper with a discussion of the need for future PET deployments and pilots.

II. PRIVACY-ENHANCING TECHNOLOGIES IN BIOMETRICS
Biometric PETs have received increasing attention over the last few years (see the review papers [2, 3, 4, 5]). Also called “Untraceable Biometrics” [3], they are defined by the following features:
• there is no storage of a biometric image or conventional biometric template;
• it is computationally difficult to recreate the original biometric image/template from the stored information;
• a large number of private (i.e. untraceable) templates for the same biometric can be created for different applications;
• the private templates from different applications cannot be linked; and
• the private templates can be renewed or cancelled.
These features embody standard fair information
principles, providing user control, data minimization, and data security. At present, biometric PETs include two major groups of emerging technologies: Biometric Encryption (BE) and Cancelable Biometrics (CB).

Cancelable Biometrics (also known as feature transformation techniques) applies a transform to the features and stores the transformed template. On verification, the transformed templates are compared to output a matching score. A large number of transforms is available, so the templates are cancelable. The difficulty with this approach is that the transform is in most cases fully or partially invertible, meaning that it must be kept secret. The feature transformation usually degrades the system accuracy, and the system remains vulnerable to a substitution attack and to overriding the Yes/No response.

Biometric Encryption (also known as biometric template protection, biometric cryptosystems, etc.) binds a key with the biometric (or extracts a key from the biometric) on a fundamental level, making it computationally difficult to retrieve either the key or the biometric from the stored BE private template (often called “helper data”). The key is recreated only if a genuine biometric sample is presented on verification. The output of BE authentication is either a key (correct or incorrect) or a failure message. Unlike conventional cryptography, this “encryption/decryption” process is fuzzy because of the natural variability of biometrics. There are two BE approaches: key binding, where a key is generated at random and then bound to the biometric, and key generation, where a key is derived directly from the biometric. Both approaches usually store a biometric-dependent private template and are interchangeable for most BE schemes. BE can greatly enhance both the privacy and the security of a biometric system. In general, BE is less susceptible to high-level security attacks on a biometric system.
BE can work in a non-trusted or, at least, less trusted environment, and is less dependent on hardware, procedures, and policies. The keys are usually longer than conventional passwords and do not require user memorization. BE private templates are renewable and revocable, as in CB. After the digital key is recreated on BE verification, it can serve as the basis for any physical or logical application. The most obvious use of BE is in a conventional cryptosystem, where the key serves as a password and may generate, e.g., a pair of public and private keys. One of the challenges to the adoption of a biometric PET system is that it should maintain an acceptable level of functionality, accuracy and speed compared to a conventional biometric system. A few biometric PET products are already commercially available [6 - 10]. The largest and, perhaps, most notable BE deployment so far is facial recognition with BE in a watch-list scenario for the self-exclusion program in Ontario casinos [11, 5]. However, until now there has been no independent third-party evaluation of a commercial biometric PET. Here we present such an evaluation.
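To make the key-binding idea concrete, here is a toy fuzzy-commitment sketch, a generic construction from the BE literature, not GenKey's algorithm; the bitstrings, repetition factor, and key length are all made up for illustration:

```python
import secrets

REP = 5  # each key bit is spread over REP template bits (repetition code)

def enroll(bio_bits: list[int], key_bits: list[int]) -> list[int]:
    """Bind the key to the biometric: helper data = codeword XOR biometric.
    Neither the key nor the biometric is recoverable from the helper alone."""
    codeword = [b for bit in key_bits for b in [bit] * REP]
    assert len(codeword) == len(bio_bits)
    return [c ^ x for c, x in zip(codeword, bio_bits)]

def recover(helper: list[int], fresh_bits: list[int]) -> list[int]:
    """XOR a fresh (noisy) sample with the helper data, then majority-decode
    each REP-bit group; small biometric noise is absorbed by the code."""
    noisy = [h ^ x for h, x in zip(helper, fresh_bits)]
    return [int(sum(noisy[i:i + REP]) > REP // 2)
            for i in range(0, len(noisy), REP)]

key = [1, 0, 1, 1]                                    # 4-bit toy key
bio = [secrets.randbelow(2) for _ in range(4 * REP)]  # 20-bit toy "biometric"
helper = enroll(bio, key)

fresh = bio.copy()
fresh[3] ^= 1
fresh[11] ^= 1                        # two acquisition errors
assert recover(helper, fresh) == key  # key released for the genuine user
```

The output of verification is the key itself, correct only for a genuine sample, which mirrors the description above that BE returns either a key or a failure rather than a matching score.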
III. EVALUATION PROTOCOL
Systems, Data, Processes and Metrics

1) The PET System
This study evaluated the “BioCryptic® ID Management Systems” by GenKey (“the PET System”). According to public material from the company [7], the PET System features a proprietary algorithm combining cryptography with biometrics, which converts a biometric image into a cryptographic key that is said to be irreversible. The PET System is said to be applicable in a variety of contexts including education, licensing, travel, healthcare, etc. The PET System provides a means to enroll and verify individuals based on fingerprints. The algorithm offers a number of options which provide tradeoffs between recognition speed, accuracy, and biometric privacy.

The BioCryptic® technology, at least at a high level [12], is based on the patent applications of Lyseggen et al. [13] and Duffy and Jones [14]. They proposed a scheme in which each continuous biometric feature is offset to the middle of an integer interval. The offsets form the correction vector, v(n) (where n is the number of features), and are stored in a private template called an “ID Key” in GenKey products. The continuous feature vector, f(n), is then transformed to its quantized (integer) form, fq(n). The cryptographic keys are generated from fq(n) on the fly. On verification, a fresh feature vector, f’(n), is obtained, and, using the stored correction vector v(n), a new fq’(n) and the cryptographic keys are generated. For a legitimate user, they are supposed to be the same as on enrollment. This scheme is similar to a quantization method called “shielding functions” proposed by Linnartz and Tuyls [15] and its generalization by Buhan et al. [16], called Quantization Index Modulation (QIM). In order to eliminate possible errors in fq’(n), Duffy and Jones reduce the number of components in fq(n) using a simple repetition error correcting code with a majority decoder; more sophisticated error correcting codes could also be employed.
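Based on the description in [13, 14] above, the offset-to-midpoint step can be sketched as follows; this is a simplified scalar model with a unit quantization step and made-up feature values, since the actual BioCryptic® feature extraction and parameters are not public:

```python
import math

def make_id_key(features: list[float]) -> tuple[list[float], list[int]]:
    """Enrollment: for each continuous feature f, store the offset v that
    moves f to the midpoint of its integer interval. The offsets v(n) form
    the correction vector (the stored private template); the quantized
    values fq(n) are the key material and are NOT stored."""
    offsets = [(math.floor(f) + 0.5) - f for f in features]
    quantized = [math.floor(f) for f in features]
    return offsets, quantized

def requantize(offsets: list[float], fresh: list[float]) -> list[int]:
    """Verification: apply the stored offsets to a fresh sample and
    re-quantize; a genuine user reproduces fq(n) as long as the feature
    noise stays within half an interval."""
    return [math.floor(f + v) for v, f in zip(offsets, fresh)]

v, fq = make_id_key([2.7, 5.1, 3.9])       # enrollment sample
fq_fresh = requantize(v, [2.9, 4.8, 4.2])  # noisy re-measurement
assert fq_fresh == fq                      # same integers -> same derived key
```

The stored offsets alone do not reveal in which interval each enrolled feature lay, which is what makes the correction vector a candidate private template.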
The error correcting information is also stored in the private template. As described, the algorithm falls under the category of BE schemes that use quantization and store a correction vector [3]. Such schemes can run in either key generation or key binding mode. The scheme can also be used in a feature transformation (i.e. “cancelable biometrics”) mode, in which a matching score is obtained directly from fq(n) and fq’(n). Since the exact mode of operation is unknown, we prefer the generic term “PET System” for this technology.

2) The Benchmark System
To provide a benchmark, the same fingerprint dataset was run through a widely adopted, minutiae-based fingerprint algorithm, Neurotechnology VeriFinger version 6.3 [17]. The Neurotechnology algorithm has consistently scored high (6th – 7th place) in numerous fingerprint algorithm competitions [18]. Our benchmark enrollment and verification processing approach was equivalent to that of the PET System, as described below. Comparing and contrasting the results of the PET and Benchmark systems provides a general frame of reference for validating the operational viability of the PET technology (see also [19] concerning evaluation methods for biometric systems).

(Footnote: In 2011, GenKey Corporation and the Netherlands-based Priv-ID B.V. merged into one company called GenKey. The BioCryptic® technology tested in the present paper refers to the pre-merger GenKey product.)

3) Test Data
A collection of approximately 20,000 flat fingerprint images (left and right index, middle, and thumb fingers) from 1,200 subjects, collected under indoor office conditions, was used to conduct the evaluation. Fingerprint images were collected with a 500 dpi Cross Match Verifier. Each subject provided two samples per position during a first visit. Additionally, approximately 650 of the 1,200 subjects provided two samples per position during a second visit, which occurred roughly one month after the first. Images were inspected for quality at the time of capture, but no automated quality checks were implemented; data quality is therefore variable.

4) ID Key Generation
The PET System’s algorithm can perform two types of enrollment: feature template enrollment and “ID Key” (i.e. private template) enrollment. In feature template enrollment, discriminating features are extracted from the fingerprint images and stored as the individual’s biometric template; the feature template is stored for later use and the biometric image may be discarded. ID Key enrollment also extracts discriminating features from fingerprint images, but converts the features into a set of numbers which represent the individual’s biometric; it is said to be computationally difficult to obtain the feature template from the stored information. At the conclusion of an ID Key enrollment, the “ID Key” (i.e. the private template) is stored, and both the biometric image and the feature template may be discarded. This type of enrollment is the focus of this evaluation. During verification, the individual provides one or more enrolled fingerprints.
The PET System’s algorithm analyzes the images and creates a feature template. The feature template and ID Key are provided to the PET System’s algorithm, and a binary verification decision is returned. In other words, the cryptographic aspects of the technology, i.e. the ability to generate cryptographic keys from biometrics, are not fully utilized in this study. This fact does not affect the accuracy numbers presented here.

For ID Keys, balancing the tradeoff between FAR and FRR is accomplished through a “FAR control parameter” provided to the algorithm. ID Keys are created with respect to a particular FAR control parameter. The value of this parameter ranges from 0.0 to 1.0; lower values correspond to lower FAR (and higher FRR). The FAR control parameter is applicable only at enrollment. As a result, adjusting the FAR threshold will not impact the error characteristics of ID Keys that have already been enrolled. The FAR control parameter is not the false match rate measured in Section IV. It is interesting to note that the closely related QIM scheme also has a tunable parameter, the size of the quantization interval [16]. As shown in [20], by varying this parameter it is possible to generate a BE ROC curve (as opposed to just a few operating points in other BE schemes), thus obtaining a better tradeoff between accuracy and security.
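The effect of such a tunable quantization step can be illustrated with a small simulation on synthetic Gaussian-noise features (not real fingerprint data, and not the PET System's algorithm): each step size q yields one operating point, and sweeping q traces out a curve in the spirit of [20].

```python
import math
import random

def offset(f: float, q: float) -> float:
    """Offset that moves feature f to the midpoint of its width-q interval."""
    return (math.floor(f / q) + 0.5) * q - f

def accepts(enrolled: list[float], fresh: list[float], q: float) -> bool:
    """Accept iff every fresh feature, corrected by the stored offset,
    re-quantizes to the same interval as at enrollment."""
    return all(
        math.floor((fr + offset(en, q)) / q) == math.floor(en / q)
        for en, fr in zip(enrolled, fresh)
    )

random.seed(1)
gar = {}  # genuine acceptance rate per quantization step
for q in (0.25, 0.5, 1.0):
    ok = 0
    for _ in range(1000):
        en = [random.uniform(0, 10) for _ in range(8)]       # enrollment
        noisy = [x + random.gauss(0, 0.15) for x in en]      # same-user noise
        ok += accepts(en, noisy, q)
    gar[q] = ok / 1000

# a larger step tolerates more noise (fewer false rejects) but also widens
# the acceptance region for impostors: each q is one operating point
assert gar[0.25] < gar[0.5] < gar[1.0]
```

The dimension count, noise level, and feature range here are arbitrary choices for the demonstration; only the qualitative tradeoff carries over.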
Fig. 1 Enrollment transaction logic. Two different sample images from each position are used to create a generalized enrollment.
The study focuses on two of the PET System’s ID Key types:
• Standard ID Keys offer performance close to that of feature template enrollments.
• Flex ID Keys trade off key size against accuracy. Various Flex ID Key sizes can be configured to meet the storage needs of the application, and the GenKey algorithm automatically maximizes accuracy for the given key size. Flex ID Keys are designed to maintain reasonable performance levels even at substantially reduced key sizes.
We evaluated performance using the Standard ID Key (64 – 128 bytes) as well as 12-, 25-, 56-, and 107-byte Flex ID Keys.

During ID Key enrollment, an enrollee provides one or more images from one or more fingers. These sample fingerprint images are first converted to feature templates as described earlier. The enrollment feature templates from all fingers to be enrolled are then used to generate a single private template, called an ID Key, that represents the set of provided enrollment fingers. This ID Key can then be stored for later use during verification or identification.

5) Enrollment Process
A custom application was developed to integrate with the PET System and perform feature extraction, template creation, and ID Key generation. First-visit fingerprint images were used for enrollment and ID Key creation. As shown in Fig. 1, two different images for each position were used to create a generalized enrollment. This approach increases matching accuracy and is consistent with the usage of the PET System in operational deployments. Both index fingerprints had to enroll in order for an enrollment to be successful.
6) Recognition Process
A custom application was developed to perform bulk matching against the PET and Benchmark systems. Second-visit fingerprint images were used for recognition (i.e. as probe images) and compared against first-visit fingerprint data. The recognition process is shown in Fig. 2. The verification logic registered a match if either the left or the right attempt matched the gallery, but registered a failure if neither matched:

    ver(gl, gr, al, ar) = True,  if (gl ≈ al) or (gr ≈ ar)
                          False, if (gl ≉ al) and (gr ≉ ar)        (1)

where a match is denoted by ≈ and a non-match by ≉; gl and gr are the left and right gallery templates, respectively, and al = {al,1, al,2} and ar = {ar,1, ar,2} are the sets of left and right verification templates. If any match occurred between a sample and an ID Key, the transaction was declared a match. If all samples failed to match their respective ID Keys, the transaction was declared a non-match. All comparisons were intra-position, meaning that (for example) index fingerprints were never compared against middle fingers. This recognition logic was arrived at after considerable testing and experimentation in which results were generated separately for thumb, index, middle, and ring fingers. Processing volumes (less quality failures and missing positions) are shown in TABLE I.

TABLE I
TEMPLATE PROCESSING VOLUMES
          | Subjects | V1 Samples | V2 Samples | Positions | Genuine Comparisons | Impostor Comparisons
PET       | 650      | 2          | 2          | 2         | 456                 | 550,976
Benchmark | 650      | 2          | 2          | 2         | 459                 | 560,741
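The transaction decision rule of Eq. (1) can be sketched as follows; the `match` predicate here is a stand-in (exact template equality, for illustration only) for the actual PET or benchmark comparison, which is not modeled:

```python
def verify_transaction(match, g_left, g_right, left_attempts, right_attempts):
    """Eq. (1): declare a match if ANY left attempt matches the left gallery
    template OR any right attempt matches the right one; declare a
    non-match only if every attempt fails."""
    left_ok = any(match(g_left, a) for a in left_attempts)
    right_ok = any(match(g_right, a) for a in right_attempts)
    return left_ok or right_ok

# stand-in matcher for illustration: exact template equality
match = lambda gallery, attempt: gallery == attempt

# second left attempt matches the left gallery template -> transaction match
assert verify_transaction(match, "GL", "GR", ["x", "GL"], ["y", "z"]) is True
# no attempt matches either gallery template -> transaction non-match
assert verify_transaction(match, "GL", "GR", ["x", "y"], ["z", "w"]) is False
```

Because the rule accepts on any single success among up to four comparisons, transaction-level FNMR is lower, and FMR higher, than the corresponding single-attempt rates.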
Based on collection and comparison processes described above, metrics were generated as shown in TABLE II.
IV. RESULTS
Based on the data collected, system performance metrics for throughput, usability and accuracy were computed.
Throughput
In terms of timing, the most relevant metric for the PET System is the time required to generate a feature template (i.e. encoding or enrollment time). At operationally relevant security levels, the PET System averaged approximately 400 ms per image. In a 1:1 system, this processing time can be considered trivial.
Fig. 2 Verification transaction logic. A match is reported if any of the attempted digits match the corresponding recorded gallery print. A failure is reported if none of the attempted digits match the corresponding gallery print.
TABLE II
EVALUATION METRICS

Usability Metrics: Failure to Enroll Rate (FTE); Failure to Acquire Rate (FTA); Processing Time
Accuracy Metrics: False Match Rate (FMR); False Non-Match Rate (FNMR); Matching Error Rates by Position; Distribution of Errors by Subject

A number of usability and matching accuracy metrics were examined for the PET and the Benchmark system.

Enrollment and Encoding Rates
Enrollment and encoding rates can be discussed at the image level and at the transaction level. At the image level:
• Failure-To-Enroll Rate (FTE) is the proportion of enrollment transactions in which the test subject failed to enroll;
• Failure-To-Acquire Rate (FTA) is the proportion of recognition transactions in which the test subject failed to acquire an image.
Image-level FTE and FTA rates are shown in TABLE III and TABLE IV, respectively.

TABLE III
IMAGE LEVEL FAILURE TO ENROLL

                    | Benchmark | PET
Enrollment Attempts | 2448      | 2448
FTE Count           | 42        | 45
FTE Rate            | 1.72%     | 1.84%

Note: similar Failure-to-Enroll rates between the PET and Benchmark systems.
Results show that the PET and the Benchmark systems are roughly equivalent with respect to FTE, while the PET’s FTA is substantially higher than that of the Benchmark System. This underscores the concept that the PET System’s usage is predicated on the use of high-quality images whose quality is validated in real time at the point of capture.
TABLE IV
IMAGE LEVEL FAILURE TO ACQUIRE

                              | Benchmark | PET
Recognition Encoding Attempts | 1841      | 1841
FTA Count                     | 1         | 66
FTA Rate                      | 0.05%     | 3.59%

Note: the PET System exhibits higher Failure-to-Acquire rates.
Quality of Enrolled Images
The PET System included the ability to assign quality values to encoded images. These are presented in Fig. 3. Quality values range from 0 to 1 and are binned in increments of 0.1 for clarity. Results suggest that, as long as images are of sufficient quality to create a reference template, the PET System will bin the image as high-quality.
Fig. 3 Quality of encoded fingerprints. Binning the acceptable images into one high-quality class is sufficient.
Table V shows examples of fingerprint images in five quality “bins”. Five representative images are shown for each row (each bin). Higher-quality images are at the bottom of the table.
TABLE V
IMAGE QUALITY SAMPLES

Quality Level | Sample Images
0.0 - 0.2     | (images not reproduced here)
0.2 - 0.4     | (images not reproduced here)
0.4 - 0.6     | (images not reproduced here)
0.6 - 0.8     | (images not reproduced here)
0.8 - 1.0     | (images not reproduced here)

Image quality bins as determined by the PET System, with sample images for each quality level.
Accuracy Rates
Table VI shows the results of the PET System at each evaluated threshold, T, which is the FAR control parameter described in the previous section. Values in the range of threshold values from 0.30 to 0.50 can be considered the most operationally relevant. Results show that, at certain thresholds and key types, the PET System provides transaction-level FMR and FNMR lower than 1.00%. This performance level is generally considered a basic benchmark for commercial systems, particularly when dealing with variable-quality data. While the most robust results are at an ID Key size of 107 bytes, the 56-byte key generated performance approaching the 1.00% crossover level. A test that generated every possible combination of threshold and ID Key size would pinpoint the optimal tradeoff of ID Key size and matching accuracy; for example, a threshold of 0.47 with a 63-byte ID Key might be ideal. This type of factor may need to be considered when planning a biometric PET deployment. Performance can be rendered through detection error tradeoff (DET) curves. The range of thresholds was selected based on the incidence of observed errors and on consideration of operationally realistic values. Left- and lower-most DET curves indicate lower comparison error rates. DET curves can be used to identify
TABLE VI
PET MATCHING ACCURACY
T-FMR and T-FNMR (%) at each threshold T, per ID Key type.

  T   | Standard      | Flex-ID 107 B | Flex-ID 56 B  | Flex-ID 25 B  | Flex-ID 12 B
      | FMR    FNMR   | FMR    FNMR   | FMR    FNMR   | FMR    FNMR   | FMR    FNMR
 0.00 |  0.00  43.86  |  0.00  39.47  |  0.00  40.35  |  0.00  50.88  |  0.00  64.25
 0.05 |  0.00  33.77  |  0.00  32.24  |  0.00  32.89  |  0.00  42.54  |  0.00  55.92
 0.10 |  0.00  27.19  |  0.00  23.68  |  0.00  24.34  |  0.00  33.77  |  0.00  47.59
 0.15 |  0.00  20.83  |  0.00  17.98  |  0.00  18.42  |  0.00  23.90  |  0.00  39.25
 0.20 |  0.00  13.60  |  0.00  10.75  |  0.00  11.40  |  0.00  16.01  |  0.00  28.29
 0.25 |  0.00   8.99  |  0.00   6.80  |  0.00   7.24  |  0.00   9.87  |  0.01  20.18
 0.30 |  0.00   5.04  |  0.00   3.51  |  0.00   3.95  |  0.01   6.80  |  0.02  13.60
 0.35 |  0.02   2.85  |  0.03   2.41  |  0.03   2.63  |  0.03   4.39  |  0.07   7.89
 0.40 |  0.09   2.19  |  0.14   1.54  |  0.14   1.75  |  0.12   2.85  |  0.23   4.39
 0.45 |  0.38   0.88  |  0.56   0.88  |  0.56   1.10  |  0.44   1.54  |  0.74   2.63
 0.50 |  1.91   0.44  |  2.51   0.22  |  2.49   0.44  |  1.72   0.88  |  2.85   1.32
 0.55 |  3.21   0.22  |  4.07   0.22  |  4.04   0.22  |  2.71   0.66  |  4.34   1.10
 0.60 |  5.26   0.00  |  6.51   0.00  |  6.46   0.00  |  4.29   0.44  |  6.39   1.10
 0.65 |  8.22   0.00  |  9.88   0.00  |  9.81   0.00  |  6.68   0.22  |  9.01   1.10
 0.70 | 12.30   0.00  | 14.44   0.00  | 14.34   0.00  |  9.78   0.00  | 11.93   1.10
 0.75 | 19.13   0.00  | 21.83   0.00  | 21.70   0.00  | 14.71   0.00  | 15.56   0.44
 0.80 | 27.22   0.00  | 30.56   0.00  | 30.40   0.00  | 21.20   0.00  | 18.92   0.44
 0.85 | 42.65   0.00  | 45.99   0.00  | 45.79   0.00  | 34.55   0.00  | 23.52   0.22
 0.90 | 54.70   0.00  | 56.71   0.00  | 56.51   0.00  | 45.80   0.00  | 26.78   0.22
 0.95 | 63.38   0.00  | 63.71   0.00  | 63.48   0.00  | 55.39   0.00  | 30.11   0.22

T-FMR and T-FNMR at various thresholds. The range of threshold values from 0.30 to 0.50 can be considered the most operationally relevant. While the most robust results are at an ID Key size of 107 bytes, the 56-byte key generated performance approaching the 1.00% crossover level.
the point at which one wishes to operate one's system – e.g. at 0.01% FMR or 1.00% FNMR – and estimate the corresponding genuine or impostor error rate at that operating point. While DET curves will ideally be smooth through the full range of performance, at the right- and bottom-hand side of the curve, plots may become “stepped”, indicating that the number of genuine or impostor errors at these points is unchanged while the counterpart error type changes. In order to maintain readability and to focus on reasonable or differentiated performance ranges, the DETs below show error rates across the following ranges:
• T-FNMR: 0.1% to 10%
• T-FMR: 0.01% to 10%
False non-match rates and false match rates are calculated by dividing the number of errors at a given threshold by the total number of genuine and impostor comparisons executed, respectively. In many operational deployments, users are permitted to execute multiple attempts, such that FNMR is lower than observed in single-attempt tests. Transactional comparison error rates are more operationally realistic than attempt-based rates for the purposes of this study. Fig. 4 shows the PET matching results in chart form, and also adds results for the Benchmark system.
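The rate computations described above can be sketched as follows, using hypothetical similarity scores (the PET System itself returns only a binary decision per enrolled threshold; the sketch assumes a score-based matcher such as the benchmark):

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """T-FNMR: fraction of genuine comparisons scoring below the threshold;
    T-FMR: fraction of impostor comparisons scoring at or above it."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fmr, fnmr

# hypothetical similarity scores (higher = more similar)
genuine = [0.91, 0.85, 0.40, 0.88, 0.95]
impostor = [0.10, 0.05, 0.55, 0.20, 0.15, 0.08, 0.30, 0.12]

# sweeping the threshold traces out the DET curve, one (T, FMR, FNMR)
# point per threshold value
det = [(t, *error_rates(genuine, impostor, t)) for t in (0.3, 0.5, 0.7)]
```

Plotting FNMR against FMR over the swept thresholds yields the DET curve; the "stepped" regions mentioned above appear wherever consecutive thresholds change only one of the two error counts.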
Results showed that, at certain thresholds, fingerprint-based PET accuracy was equivalent to, if not superior to, non-PET fingerprint accuracy. For larger ID Key sizes, PET error rates were up to 50% lower than those of the traditional, non-PET fingerprint algorithm. This is a significant finding in that it suggests the viability of fingerprint-based PETs from a matching accuracy perspective. The tradeoff is that the PET encountered higher processing failure rates than the non-PET fingerprint algorithm.

Table VII presents FNMR at specific FMR values for the PET and Benchmark systems. Systems are frequently evaluated based on their genuine error rates at specific impostor error rates. Many systems are configured to provide 0.10% or 0.01% FMR, such that the likelihood of a false match is 1 in 1,000 or 1 in 10,000, respectively. It is therefore useful to examine genuine error rates at these specific impostor error rates. Error rates presented in this section do not include acquisition failures (FTA or FTE). Results show the anticipated increase in non-match rates as stricter false match rates are implemented. The PET outperforms the Benchmark system at 0.10% FMR, but the opposite is true at 0.01% FMR, where the FNMR of the Benchmark system stays the same. One interesting result is that the 25- and 12-byte ID Keys see substantial increases in FNMR at the stricter threshold, suggesting that ID Key size may be selected based on FMR requirements.

Fig. 4 Comparative matching accuracy. Comparable and in some cases better match rates for the PET System.

TABLE VII
FINGERPRINT PET EVALUATION: FNMR AT VENDOR-SPECIFIED FMR THRESHOLDS

System             | Genuine Error Rate at 0.10% FMR | Genuine Error Rate at 0.01% FMR
Benchmark          | 1.74%                           | 1.74%
PET Std. Key       | 2.19%                           | 2.85%
PET (107-byte key) | 1.54%                           | 2.41%
PET (56-byte key)  | 1.75%                           | 2.63%
PET (25-byte key)  | 2.85%                           | 6.80%
PET (12-byte key)  | 7.89%                           | 20.18%
False Match Rate by Subject
For many biometric systems, certain subjects encounter higher false match rates than others. We analyzed the distribution of false matches across subjects. Aggregated across all evaluated ID Key types and sizes, and measured at a typical threshold of 0.45, subject-specific false match rates ranged from as high as 4.04% to as low as 0.00%. The ten test subjects with the highest false match rates are shown in TABLE VIII. While false match rates generally decline as ID Key sizes increase, in some cases subjects encountered higher false match rates at larger ID Key sizes. For example, 251 of 457 subjects encountered higher false match rates at an ID Key size of 56 bytes than at an ID Key size of 25 bytes. The distribution of false match rates across all test subjects is shown in Fig. 5. Results suggest that specific subjects are substantially more likely than others to cause false matches. One can draw tentative conclusions from these data, which may be extensible to other biometric PET systems. System designers will need to be aware of templates that represent appealing targets for attackers, potentially executing pre-enrollment testing against a full dataset to identify problem enrollees. A potential solution would be to enroll a different position and determine whether position-specific impostor rates change. Any pre-deployment qualification or certification testing will need to address this distribution issue, because the average or mean impostor rate is not very meaningful in this situation. One could decide to treat outliers as special cases and develop policies based on, for example, the 75% of users with the most common impostor matching experience.

TABLE VIII
SUBJECTS WITH HIGHEST AGGREGATED FALSE MATCH RATES

 ID  | False Matches (std/107/56/25/12) | Comparisons | FMR (std/107/56/25/12)
 861 | 49 / 54 / 54 / 49 / 40           | 1217        | 4.03% / 4.44% / 4.44% / 4.03% / 3.29%
 931 | 32 / 42 / 42 / 25 / 20           | 1217        | 2.63% / 3.45% / 3.45% / 2.05% / 1.64%
  77 | 22 / 36 / 36 / 30 / 14           | 1217        | 1.81% / 2.96% / 2.96% / 2.47% / 1.15%
 264 | 22 / 26 / 26 / 30 / 33           | 1217        | 1.81% / 2.14% / 2.14% / 2.47% / 2.71%
 419 | 27 / 34 / 34 / 25 / 12           | 1217        | 2.22% / 2.79% / 2.79% / 2.05% / 0.99%
 504 | 24 / 26 / 26 / 17 / 29           | 1217        | 1.97% / 2.14% / 2.14% / 1.40% / 2.38%
 167 | 17 / 19 / 19 / 22 / 40           | 1217        | 1.40% / 1.56% / 1.56% / 1.81% / 3.29%
 738 | 22 / 33 / 33 / 23 /  4           | 1217        | 1.81% / 2.71% / 2.71% / 1.89% / 0.33%
 233 | 22 / 31 / 31 / 12 / 19           | 1217        | 1.81% / 2.55% / 2.55% / 0.99% / 1.56%
 884 | 19 / 29 / 29 / 18 / 20           | 1217        | 1.56% / 2.38% / 2.38% / 1.48% / 1.64%

Fig. 5 False match rate by subject. Results suggest the presence of subject-based false matches for the selected PET and biometric modality.
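The per-subject analysis described above can be sketched as follows; the comparison log, subject IDs, counts, and the 50% policy ceiling are all hypothetical and are not taken from TABLE VIII:

```python
from collections import Counter

def subject_fmr(impostor_results, total_per_subject):
    """Per-subject false match rate: false matches caused against each
    enrollee divided by that enrollee's impostor comparison count."""
    fm = Counter(subj for subj, matched in impostor_results if matched)
    return {s: fm.get(s, 0) / n for s, n in total_per_subject.items()}

# hypothetical impostor comparison log: (enrolled subject id, false match?)
log = [(861, True), (861, False), (861, True),
       (77, False), (77, False), (77, True)]
totals = {861: 3, 77: 3}

rates = subject_fmr(log, totals)
# flag enrollees whose rate exceeds a policy ceiling (0.5 here, arbitrary)
flagged = [s for s, r in rates.items() if r > 0.5]
```

Sorting `rates` and inspecting the upper tail, rather than the mean, is the kind of distribution-aware check the pre-deployment testing above calls for.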
Security Issues
BE systems are, in general, more resilient than conventional biometric systems to high-level attacks, such as a Trojan Horse or a substitution attack [3]. However, some BE systems may be vulnerable to low-level attacks, in which an attacker is familiar with the algorithm and can access the stored private template (but not a genuine biometric sample). The attacker tries to obtain the key bound to the biometric (or at least to reduce the search space), and/or to obtain the original biometric or create an approximate (“masquerade”) version of it. An overview of attacks against BE is given in [3, 4]. For schemes with a correction vector, the private template reveals some amount of information, depending on the size of the quantization step, q. Linnartz and Tuyls [15] showed that if q is less than one standard deviation of the signal (q < σx), the information leak will be small (< 10^-5). However, depending on the strength of the noise, σn, q may need to be chosen larger to avoid errors in authentication. Buhan et al. [16, 21] provided a more detailed analysis of the security of different QIM variants, and also suggested improving the security of the QIM scheme by adding a random variable, S, to f(n) on enrollment and subtracting the same on verification [22].

With respect to specific attacks, the schemes with a correction vector may be vulnerable to Hill Climbing [23] and Nearest Impostors [24] attacks. In both attacks, the attacker derives an intermediate matching score based on knowledge of the BE algorithm (normally, BE is not supposed to have any matching score). BE schemes with a correction vector may let the attacker derive the intermediate score based on the distance to the nearest integer [3, 24]. At present, however, it is unclear whether this score is discriminative enough to run the Hill Climbing or Nearest Impostors attack. If the BioCryptic® technology uses an error correcting code (ECC), as suggested by Duffy and Jones [14], it may be vulnerable to attacks that analyze the ECC output statistics [24]. The QIM scheme may be vulnerable to the Linkage attack [3, 25], which tries to link (but not necessarily crack) the private templates of multiple enrollments for the same user.

The False Acceptance attack is conceptually the simplest and is applicable to all BE schemes. The attacker needs to collect or generate a biometric database of sufficient size to obtain, offline, a false acceptance against the private template. The feature set that generated the false acceptance will be an approximate copy of the enrolled set. Since all biometric systems, including BE, have non-zero FAR (typically 10^-4 to 10^-5), the size of the offline database required to crack the private template will always be finite. The FAR attack can be mitigated by applying a secret transform (preferably controlled by a user’s password), by using slowdown functions, by a Match-on-Card architecture, or by other security measures common in biometrics. A Fuzzy Commitment BE scheme can be made cryptographically secure in a Client – Service Provider – Database architecture using Goldwasser-Micali and Paillier homomorphic encryption [26] (see also [27]). As shown in [28], both the Fuzzy Commitment and QIM schemes are cryptographically secure within the framework of the Blum-Goldwasser public key cryptosystem. This solution, which is much more feasible than the homomorphic schemes, is likely applicable to the BioCryptic® technology as well (in view of its closeness to QIM). For obvious reasons, this study did not perform any security evaluations, such as attempts to reverse-engineer the biometric template from a generated ID Key or attempts to correlate the ID Keys of multiple enrollments.
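To give a sense of scale for the False Acceptance attack, the required offline database size follows from a simple binomial model; this is a back-of-envelope sketch under an independence assumption, not an evaluation of the PET System:

```python
import math

def attempts_for_success(far: float, target_prob: float) -> int:
    """Smallest offline database size N such that the probability of at
    least one false accept, 1 - (1 - FAR)^N, reaches target_prob."""
    return math.ceil(math.log(1 - target_prob) / math.log(1 - far))

# at FAR = 1e-4, a 50% chance of a false accept takes on the order of
# 10^4 independent attempts; 99% takes a few times more
n50 = attempts_for_success(1e-4, 0.50)
n99 = attempts_for_success(1e-4, 0.99)
```

Slowdown functions and password-controlled transforms raise the cost per attempt, which is why they mitigate this attack even though the database size stays finite.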
Further studies of the algorithm's cryptographic basis and security are required to establish its strengths and vulnerabilities.

V. TRANSITION AND EXPLOITATION

The following strategic recommendations were developed in the course of the study to inform future development and implementation of PETs. These recommendations will facilitate the transition of developmental PETs to more advanced commercial states, and will provide opportunities for government and commercial agencies to exploit PETs.

PET deployments and pilots will increase the visibility of PET solutions and generate additional findings and best practices. Pilots and deployments will help build an understanding of the ability of a PET System to process real-world data and to support integration with real-world identity management systems. Pilots can also help establish relationships between sensors/devices, legacy (traditional biometric) algorithms, and PET systems. Results and recommendations can be used to support decisions on system configuration, calibration, and design.

Developing PET-specific performance metrics will help potential users understand the benefits of PET utilization.
Direct comparison of PET and non-PET biometric performance may understate the benefits of PET utilization. Further, as the biometric PET market matures, more refined ways of comparing PET vs. PET usability and performance will be necessary. These metrics may pertain to image quality, speed and throughput, or key length/complexity.

Supporting the development of PET business cases will help government and commercial deployers assess the implementation and operational costs associated with PETs. Cost considerations may be a factor for certain government and commercial deployers when assessing PETs, and it is not clear whether biometric PETs represent additional costs or replacement costs. Developing cost/benefit analyses will clarify the value proposition for the use of system-level and low-level PETs.

Models for cross-jurisdictional PET data sharing need to be developed to ensure that system deployers are not locked into proprietary PETs for fingerprint, face, and/or iris recognition. Developing clear roadmaps to privacy-protective data sharing will be essential to the long-term adoption of PETs.

ACKNOWLEDGMENT

This work was done as part of the Defence Research and Development Canada (DRDC) funded Public Security Technical Program (PSTP) project on Data Safeguarding Techniques for Biometrics (PSTP09-BIOM351).

REFERENCES
[1] Privacy by Design Resolution. Jerusalem: 32nd International Conference of Data Protection and Privacy Commissioners (2010). http://www.ipc.on.ca/site_documents/pbd-resolution.pdf
[2] A. K. Jain, K. Nandakumar, and A. Nagar, "Biometric Template Security," EURASIP J. Adv. Signal Proc., 2008, 1-17 (2008).
[3] A. Cavoukian and A. Stoianov, "Biometric Encryption: The New Breed of Untraceable Biometrics," Chapter 26 in Boulgouris, N. V., Plataniotis, K. N., Micheli-Tzanakou, E., eds., [Biometrics: fundamentals, theory, and systems], Wiley-IEEE Press, 655-718 (2009).
[4] C. Rathgeb and A. Uhl, "A survey on biometric cryptosystems and cancelable biometrics". EURASIP Journal on Information Security 2011:3, pp. 1-25 (2011). http://jis.eurasipjournals.com/content/2011/1/3
[5] A. Cavoukian, M. Chibba, and A. Stoianov, "Advances in Biometric Encryption: Taking Privacy by Design from Academic Research to Deployment". Review of Policy Research 29(1), pp. 37-60 (2012).
[6] priv-id.com
[7] "Cost Effective Biocryptic ID Management". GenKey Corporation, Oslo, Norway, 2008.
[8] www.coretexsys.com
[9] www.securics.com
[10] innovya.com/technology/
[11] A. Cavoukian and T. Marinelli, "Privacy Protective Facial Recognition: Biometric Encryption Proof of Concept. A Research Report on the use of Biometric Encryption to Limit 'Self-Excluded' Problem Gambler Access to Gaming Venues" (2010). http://www.ipc.on.ca/images/Resources/pbd-olg-facial-recog.pdf
[12] Jorn Lyseggen, private communication, 2007.
[13] J. Lyseggen, R. A. Lauritzsen, and K. G. S. Oyhus, "System, portable device and method for digital authenticating, crypting and signing by generating short-lived cryptokeys". PCT Patent Application PCT/NO02/00352 (WO 03/034655 A1), Oct. 1, 2002 (Priority date: Oct. 1, 2001).
[14] G. D. Duffy and W. A. Jones, "Data processing apparatus and method". PCT Patent Application PCT/GB02/00626 (WO 02/098053), Dec. 5, 2002 (Priority date: May 31, 2001).
[15] J.-P. Linnartz and P. Tuyls, "New shielding functions to enhance privacy and prevent misuse of biometric templates". In Proc. of the 4th Int. Conf. on Audio and Video Based Biometric Person Authentication, pp. 393-402, Guildford, UK, 2003.
[16] I. R. Buhan, J. M. Doumen, P. H. Hartel, and R. N. J. Veldhuis, "Constructing practical Fuzzy Extractors using QIM". Technical Report TR-CTIT-07-52, Centre for Telematics and Information Technology, University of Twente, Enschede, 2007. Available at http://eprints.eemcs.utwente.nl/10785/01/quantizers.pdf
[17] http://neurotechnology.com
[18] FVC 2002: bias.csr.unibo.it/fvc2002/; FVC 2004: bias.csr.unibo.it/fvc2004/; FVC 2006: bias.csr.unibo.it/fvc2006/; FPVTE 2003: biometrics.nist.gov/cs_links/fpvte/report/ir_7123_summary.pdf; MINEX'04: www.nist.gov/customcf/get_pdf.cfm?pub_id=150619.
[19] M. Gamassi, M. Lazzaroni, M. Misino, V. Piuri, D. Sana, and F. Scotti, "Accuracy and performance of biometric systems". In Proceedings of the 21st IEEE Instrumentation and Measurement Technology Conference, IMTC 04, Como, Italy, pp. 510-515, May 2004.
[20] K. Martin, H. Lu, F. Bui, K. N. Plataniotis, and D. A. Hatzinakos, "A biometric encryption system for the self-exclusion scenario of face recognition". IEEE Systems Journal: Special Issue on Biometrics Systems 3(4), 440-450 (2009).
[21] I. R. Buhan, J. M. Doumen, P. H. Hartel, and R. N. J. Veldhuis, "Fuzzy extractors for continuous distributions". In Proceedings of the 2nd ACM Symposium on Information, Computer and Communications Security (ASIACCS), Singapore, pp. 353-355, March 2007.
[22] I. Buhan, J. Doumen, and P. Hartel, "Controlling Leakage of Biometric Information using Dithering". In 16th European Signal Processing Conference, 25-29 Aug 2008, Lausanne, Switzerland.
[23] A. Adler, "Vulnerabilities in Biometric Encryption Systems". In Audio- and Video-Based Biometric Person Authentication (AVBPA 2005), Tarrytown, New York, USA. Lecture Notes in Computer Science, Springer, v. 3546, pp. 1100-1109, 2005.
[24] A. Stoianov, T. Kevenaar, and M. Van der Veen, "Security Issues of Biometric Encryption". IEEE TIC-STH Symp. Information Assurance, Biometric Security and Business Continuity, Sept. 2009, Toronto, Canada, 34-39 (2009).
[25] E. J. C. Kelkboom, J. Breebaart, T. A. M. Kevenaar, I. Buhan, and R. N. J. Veldhuis, "Preventing the Decodability Attack based Cross-matching in a Fuzzy Commitment Scheme". IEEE Transactions on Information Forensics and Security, 6(1), 107-121 (2010).
[26] J. Bringer and H. Chabanne, "An authentication protocol with encrypted biometric data". LNCS, Springer, 5023, 109-124 (2008).
[27] M. Barni, T. Bianchi, D. Catalano, M. Di Raimondo, R. Donida Labati, P. Failla, D. Fiore, R. Lazzeretti, V. Piuri, F. Scotti, and A. Piva, "Privacy-Preserving Fingercode Authentication". In Proceedings of the 12th ACM Workshop on Multimedia and Security, ACM, New York, NY, USA, pp. 231-240, September 9-10, 2010.
[28] A. Stoianov, "Cryptographically secure biometrics". Proc. SPIE, 7667, pp. 76670C-1 - 76670C-12 (2010).