Iris Imaging in Visible Spectrum using White LED

Kiran B. Raja*, R. Raghavendra*, Christoph Busch
{kiran.raja; raghavendra.ramachandra; christoph.busch}@hig.no
Norwegian Biometrics Laboratory, Gjøvik University College, 2802 Gjøvik, Norway
*Both authors have contributed equally to this work.

Abstract

Iris recognition in the visible spectrum has many challenging aspects. In particular, for subjects with dark iris color, which is caused by higher melanin pigmentation and collagen fibrils, the pattern is not clearly observable under visible light. Thus, the verification performance is generally lowered due to the limited texture visibility in the captured iris samples. In this work, we propose a novel method of employing a white light-emitting diode (LED) to obtain high-quality iris images with detailed texture. To evaluate the proposed set-up with LED light, we have acquired a new database of dark iris images comprising 62 unique iris instances with ten samples each, captured in different sessions. The database is acquired using three different smartphones: iPhone 5S, Nokia Lumia 1020 and Samsung Active S4. We also provide a benchmark of the proposed method against conventional Near-Infra-Red (NIR) images, which are available for a subset of the database. Extensive experiments were carried out using five different well-established iris recognition algorithms and one commercial-off-the-shelf algorithm. They demonstrate the reliable performance of the proposed image capturing setup, with a GMR of 91.01% at FMR = 0.01%, indicating its applicability in real-life authentication scenarios.

1. Introduction

Iris recognition has carved a niche in the field of biometrics for its impressive recognition accuracy. Daugman's experiments on one billion comparisons have demonstrated the clear performance margin that the iris offers compared to other biometric characteristics [3]. Today, the Indian UIDAI system operates de-duplication on a 1.2 billion enrolment database, trusting the reliability of iris recognition.

The iris is a thin circular muscle structure in the eye, which controls the diameter and size of the pupil to regulate the amount of light entering the retina. The observed size of the iris is approximately 12 mm in diameter, with a central opening called the pupil. The pupil constricts under bright light and dilates in dark conditions. The complexity and randomness of the iris pattern are governed by the presence of Fuchs' crypts, nevi, Wolfflin nodules and contraction furrows [13]. Along with the complex nature of the iris pattern, the color also varies between individuals based on various aspects. An abundant presence of melanocytes and melanin in the anterior layer and stroma makes the iris appear brown. A lower concentration of such pigmentation causes the iris to appear lighter in color, as is the case for blue irises [13]. Along with melanin, collagen fibrils play a vital role in whether the iris appears darker or lighter. The collagen fibrils present in the iris structure scatter light of shorter wavelengths, typically in the blue band. A higher density of such fibrils also causes the light to be absorbed, and thus the shorter wavelength light is not scattered back. Thus, darker irises have a higher density of collagen along with a higher density of melanin [13].

One well-established technique to image both light and dark colored irises is the use of Near-Infra-Red (NIR) light. NIR light, generally in the range of 780 nm to 840 nm, effectively resolves the iris pattern, as light in this range can be scattered by the collagen fibrils and melanin pigments in the anterior layer and stroma. However, most everyday imaging sensors, such as off-the-shelf cameras, do not have NIR illumination to support the capture process. The attractiveness of using a smartphone and its embedded sensors (camera) for various biometric applications may be hindered by such a limiting factor. If one intends to use visible spectrum light in the range of 380 nm to 720 nm to capture iris patterns, the success is limited and restricted to those iris instances that have light colors and that are captured in a controlled scenario. In view of the increasing popularity of smartphone based biometrics employing iris recognition [9, 8, 7, 6, 5], it is important to address this problem. Thus, in this work, we propose a new set-up to capture good quality images in the visible spectrum from a variety of colored iris patterns, including dark colored irises, by using a state-of-the-art smartphone.

The proposed approach makes use of a white light-emitting diode (LED), constructed to a specification similar to that of the flash illumination present in smartphones. To this end, we developed a prototype that captures good quality iris images by placing a white LED inclined at an acute angle to the iris position. Thus, the main advantage of the proposed approach is that good quality iris samples are captured with a conventional smartphone without any additional hardware. The main contributions of this paper are outlined as:

1. A novel iris image capturing setup based on a normal white LED.

2. A new iris database comprising 62 unique iris instances. Each of these instances was captured with 10 samples collected in 10 different sessions using three different smartphones, namely iPhone 5S, Nokia Lumia 1020 and Samsung Active S4. The whole database has 62 unique iris instances × 10 sessions × 3 smartphones = 1860 iris samples.

3. Extensive evaluation of five different iris recognition algorithms and one commercial-off-the-shelf (COTS) algorithm to render a comparative performance analysis on the samples from the three different smartphones employed in this work.

4. Comparison of the proposed iris imaging setup with a conventional NIR iris capture setup using a commercial iris capture device, MorphoTrust Mobile-Eyes [1], on a subset of 24 unique iris patterns.

In the rest of this paper, Section 2 presents the iris recognition pipeline using the proposed iris imaging setup. Section 3 provides a detailed description of the database acquired using the proposed setup. Section 4 discusses the experimental protocols and the obtained results. Finally, Section 5 gives concluding remarks on the obtained results and the significance of the proposed setup.

2. Proposed iris recognition framework

Figure 1 shows the block diagram of the proposed iris recognition framework introduced in this work. The novel contribution of this work lies in the imaging component of the iris recognition framework, which is explained in detail in Section 2.1. After acquiring the eye image, the first task is to localize the iris region in the captured sample. The localization of the eye is important for two main reasons: (i) the field of view of smartphone cameras is wide, and hence details from the background have to be discarded; (ii) iris segmentation errors can be significantly reduced as the localized region is small. Thus, in this work, we employ the Haar cascade based eye detector to locate the eye region [15]. Once the eye is localized, the iris has to be segmented. The details of each component are given in the sections below.

Figure 1: Block diagram of the proposed iris recognition framework (proposed iris imaging setup, iris pre-processing comprising segmentation and normalization, feature extraction, and comparison).

2.1. Proposed Approach for Imaging Dark Iris

To capture the iris pattern in the visible spectrum, we propose to use a white LED at an acute angle. Figure 3 illustrates the prototype of the proposed visible iris imaging setup. The LED is used to illuminate the eye region such that the iris pattern is visible and there is no strain for the eyes. As white LEDs are medically accepted as not impacting or damaging vision, and as similar white LEDs are already used in smartphones, we have employed a white LED in this work. The LED is placed at an acute angle to maximize the visibility of the iris pattern. Figure 2 provides a detailed illustration of images obtained from three different smartphones under the proposed setup, along with an image of the same eye obtained with the conventional NIR setup. The key factor to note from the illustration in Figure 2 is the improvement in the visibility of the iris pattern obtained with the proposed setup.

Figure 3: Prototype of the proposed visible iris image acquisition setup (camera position, head rest and LEDs placed at an acute angle to the eye).
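For illustration, the eye localization step described at the beginning of Section 2 can be prototyped in a few lines of OpenCV. This is only a minimal sketch under our own assumptions (OpenCV's bundled haarcascade_eye.xml and illustrative detector parameters), not the exact detector configuration used in this work.

```python
import cv2

# Haar cascade based eye localization (cf. [15]); cascade file is an assumption.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def localize_eye(image_bgr):
    """Return the largest detected eye region (cropped), or None if no eye is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return None
    x, y, w, h = max(eyes, key=lambda r: r[2] * r[3])  # keep the largest bounding box
    return image_bgr[y:y + h, x:x + w]
```

Cropping to the detected eye region discards background detail and reduces the area subsequently passed to the iris segmentation step.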

Figure 2: Comparison of the improved visibility of the iris pattern obtained using various smartphones (iPhone 5S, Nokia Lumia 1020, Samsung Active S4) and a conventional NIR capture device (MorphoTrust Mobile-Eyes) as baseline; state-of-the-art iris images captured with and without flash are shown for reference.

On the left part of Figure 2, the iris pattern is visible neither with nor without flash. However, for the samples in the third column, which are generated using our approach, the pattern of the iris becomes visible to a greater degree.

2.2. Segmentation and Normalization

The captured iris images have to be segmented prior to processing the texture. OSIRIS v4.1 [14] has proven its robustness in the segmentation of iris images both in the NIR domain [14] and in the visible spectrum domain. As the segmentation algorithm is based on a Viterbi search, the iris and pupil boundaries are well localized. In the case of visible spectrum iris recognition, factors such as non-uniform illumination require a robust noise mask, and OSIRIS v4.1 localizes noise to a better degree [14]. Following segmentation, the iris texture is normalized using Daugman's rubber sheet expansion technique [3]. The dimension of a normalized iris image in this work is fixed to 512 × 64 pixels.
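To make the rubber sheet normalization step concrete, the following is a minimal sketch assuming the pupil and limbic boundaries are already available as circles from the segmentation step; the function and parameter names are ours, and OSIRIS additionally handles noise masks and non-circular contours.

```python
import numpy as np

def rubber_sheet_normalize(eye_img, pupil_xyr, iris_xyr, out_w=512, out_h=64):
    """Map the annular iris region onto a fixed-size rectangle (angular x radial
    sampling), in the spirit of Daugman's rubber sheet model [3]."""
    xp, yp, rp = pupil_xyr            # pupil centre (x, y) and radius
    xi, yi, ri = iris_xyr             # iris (limbic) centre (x, y) and radius
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0, 1, out_h)
    # Boundary points on the pupil and iris circles for every sampled angle
    px = xp + rp * np.cos(thetas); py = yp + rp * np.sin(thetas)
    ix = xi + ri * np.cos(thetas); iy = yi + ri * np.sin(thetas)
    # Linear interpolation between the two boundaries (rows = radial steps)
    xs = (1 - radii)[:, None] * px[None, :] + radii[:, None] * ix[None, :]
    ys = (1 - radii)[:, None] * py[None, :] + radii[:, None] * iy[None, :]
    xs = np.clip(np.rint(xs).astype(int), 0, eye_img.shape[1] - 1)
    ys = np.clip(np.rint(ys).astype(int), 0, eye_img.shape[0] - 1)
    return eye_img[ys, xs]            # out_h x out_w normalized iris image
```

With out_w=512 and out_h=64 this yields the 512 × 64 normalized iris used in this work.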

2.3. Feature extraction and comparison

The features are extracted from the normalized iris images. The texture features are obtained using five state-of-the-art techniques. We have employed texture feature extraction based on well-established algorithms: Daugman's 2D Gabor features [3], Masek's 1D Log-Gabor features [12], multichannel spatially filtered features [11], cumulative sums of gray value features [10] and Masek's 1D Log-Gabor features [12] with sparse representation [9]. In order to compare the feature vectors from iris probes and iris references, we employ the Hamming distance as the comparison score for most of the techniques [3, 12, 10, 11], and the residual scores obtained from comparing sparse representations of the 1D Log-Gabor features [9].
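As an illustration of the feature extraction and comparison steps, below is a hedged sketch of a Masek-style 1D Log-Gabor encoder and the masked, shift-compensated Hamming distance used as a comparison score. The filter parameters (wavelength, sigma/f0 ratio) and the shift range are our own illustrative assumptions, not the exact values of the evaluated implementations, and the noise masks are assumed to have the same shape as the codes.

```python
import numpy as np

def log_gabor_iris_code(norm_iris, wavelength=18.0, sigma_on_f=0.5):
    """1D Log-Gabor encoding of a normalized iris (rows x cols), in the spirit of
    Masek [12]: filter each row in the frequency domain and quantize the phase of
    the complex response into two bits per pixel."""
    rows, cols = norm_iris.shape
    freqs = np.fft.fftfreq(cols)
    f0 = 1.0 / wavelength
    with np.errstate(divide="ignore"):
        lg = np.exp(-(np.log(np.abs(freqs) / f0) ** 2) /
                    (2.0 * np.log(sigma_on_f) ** 2))
    lg[freqs <= 0] = 0.0                       # one-sided (analytic) filter
    response = np.fft.ifft(np.fft.fft(norm_iris.astype(float), axis=1) * lg, axis=1)
    return np.concatenate([response.real > 0, response.imag > 0], axis=0)

def hamming_distance(code_a, mask_a, code_b, mask_b, max_shift=8):
    """Fractional Hamming distance over bits valid in both noise masks, minimized
    over small angular shifts to compensate for eye rotation."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        cb, mb = np.roll(code_b, s, axis=1), np.roll(mask_b, s, axis=1)
        valid = mask_a & mb
        n = valid.sum()
        if n:
            best = min(best, np.count_nonzero((code_a ^ cb) & valid) / n)
    return best
```

Genuine comparisons yield low Hamming distances; the genuine and impostor score distributions accumulated over all comparisons are then used to compute the verification rates reported in Section 4.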

3. Database

A new database has been constructed to evaluate the applicability of the proposed illumination technique. The new database consists of images obtained from 31 unique subjects, amounting to 62 unique iris instances. Each unique iris instance is captured using three different smartphone cameras. For each subject, images are captured in 10 different sessions. As the proposed setup is intended to improve the iris texture visibility of dark irises, the data is collected from individuals of Eastern ethnicity, who typically exhibit a dark iris pattern due to high melanin pigmentation and a higher density of collagen fibrils.

Table 1: Details of the database for iris recognition in our current work

Smartphone                        | Subjects | Unique Eyes | Samples | Total images
Apple iPhone 5S                   |    31    |     62      |   10    |     620
Nokia Lumia 1020                  |    31    |     62      |   10    |     620
Samsung Active S4                 |    31    |     62      |   10    |     620
NIR - MorphoTrust Mobile-Eyes [1] |    12    |     24      |   10    |     240

Three new smartphones are employed in this work: Apple iPhone 5S, Nokia Lumia 1020 and Samsung Active S4. Each unique eye is captured with all three smartphones. A total of 10 samples are captured for each iris instance with each smartphone, and thus a total of 620 images are obtained per smartphone. The complete overview of the number of iris images in the database is presented in Table 1. The database consists of 1860 images in total.

3.1. Near Infra Red Database

In order to benchmark the proposed illumination method for visible spectrum iris recognition on smartphones, we construct a complementary iris image database acquired using a conventional Near-Infra-Red (NIR) iris imaging device, MorphoTrust Mobile-Eyes [1]. A smaller subset of 24 unique irises was acquired, based on the willingness of the participants to take part in both capture processes. Thus, the images are obtained from subjects corresponding to the volunteers in the visible spectrum iris database obtained using smartphones. Table 1 also provides details on the number of images present in the NIR database.

Table 2: LED based iris recognition verification rate, given as GMR (%) at the specified FMR. * OSIRIS v4.1 implementation of algorithms; † University of Salzburg Iris-Toolkit v1.0 implementation of algorithms.

Feature Extraction |      iPhone 5S       |   Nokia Lumia 1020   |  Samsung Active S4
                   | FMR=0.01% | FMR=0.1% | FMR=0.01% | FMR=0.1% | FMR=0.01% | FMR=0.1%
Daugman * [3]      |   85.98   |  89.31   |   85.8    |  90.14   |   78.04   |  88.85
Log Gabor [12]     |   83.44   |  87.27   |   81.57   |  87.13   |   66.81   |  82.41
Ko et al. [10]     |   81.72   |  86.66   |   71.57   |  78.42   |   63.33   |  83.21
Ma et al. † [11]   |   80.35   |  85.62   |   67.84   |  85.26   |   53.44   |  69.11
LG-SRC [12]        |   85.08   |  88.85   |   83.87   |  87.88   |   80.03   |  86.28

4. Experiments and Results

This section provides the details of the experimental protocol employed to evaluate the proposed setup. As described in Section 3, the database consists of two different subsets. The first subset of data stems from capturing the iris in the visible spectrum using the proposed setup. The second subset of data originates from using the standard NIR iris acquisition device. Thus, we have two subsets and corresponding experiments outlined in this work.

4.1. Performance metrics

The practicality of a biometric system in a real-life application can be measured using the recognition accuracy and the Genuine Match Rate (GMR) [4]. The GMR depends on the False Match Rate (FMR) and the False Non-Match Rate (FNMR) [4]. Higher values of GMR at a specified FMR imply superior verification accuracy. The GMR is defined using the FNMR at a given FMR:

GMR = 1 − FNMR

4.2. Experiments on visible spectrum iris recognition

In this set of experiments, we evaluate the performance of iris recognition for the irises captured using the white LED under the proposed setup. As the data is collected using three different smartphones, we have three different sets of evaluation. Exploiting the availability of 10 samples for each unique eye instance, we employ one sample as the reference image and the remaining 9 samples as probe images. We iteratively change the reference image 10 times such that every image corresponding to one unique eye becomes the reference image at least once. All scores obtained from the comparisons are accumulated to generate the final set of genuine and impostor scores. For each set of evaluation, we have obtained 2790 genuine scores and 189100 impostor scores for the iPhone, Nokia and Samsung data.
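As a small worked sketch of how GMR at a fixed FMR can be computed from the accumulated genuine and impostor score sets (the function and variable names are our own illustration; the paper does not prescribe a particular implementation):

```python
import numpy as np

def gmr_at_fmr(genuine, impostor, target_fmr=0.01, higher_is_better=True):
    """GMR (= 1 - FNMR) in percent at a target FMR given in percent, computed
    from raw comparison scores. Assumes similarity scores by default; set
    higher_is_better=False for distance scores such as Hamming distances."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    if not higher_is_better:                 # turn distances into similarities
        genuine, impostor = -genuine, -impostor
    # Threshold at which the fraction of impostor scores above it equals target FMR
    thr = np.percentile(impostor, 100.0 - target_fmr)
    return 100.0 * np.mean(genuine >= thr)

# Example (hypothetical arrays): gmr_at_fmr(gen_scores, imp_scores, target_fmr=0.01)
# and target_fmr=0.1 give the GMR values at FMR = 0.01% and FMR = 0.1%.
```

Applying this to the 2790 genuine and 189100 impostor scores of each smartphone set yields GMR values of the kind reported in Table 2.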

Figure 4: Illustration of segmentation and normalization of iris images (captured, segmented and normalized images for the MobileIris NIR device, iPhone 5S, Nokia Lumia 1020 and Samsung Active S4).

Figure 5: Illustration of iris texture visibility in the normalized iris pattern using the proposed approach for sample subjects (NIR versus the proposed imaging approach for two sample subjects).

4.2.1 Results on visible spectrum iris recognition

Table 2 presents the results obtained on the visible spectrum iris dataset acquired using our proposed LED-based approach. It can be observed that the verification rate is consistently good when following our approach. It has to be noted that the iris pattern would not be visible without our proposed approach for the dark-eyed capture subjects involved. Figure 5 provides a sample illustration of the iris texture visibility using the proposed setup. It can be noted that the texture is highly visible even for a dark-eyed subject. For simplicity of illustration, we have also provided the iris images of the same subject captured in the NIR domain. Furthermore, each iris is accompanied by the 2D Gabor features corresponding to one single scale, as illustrated in Figure 5.

The best GMR of 85.98% is obtained from the data acquired in the proposed setup with the iPhone 5S, as indicated in Table 2. Further, GMRs of 85.8% and 80.03% are obtained for the Nokia and Samsung data respectively, again at an FMR of 0.01%, indicating the robust performance of the proposed setup. The analysis indicates that the proposed setup can be supported by either smartphone hardware. It is interesting to note that the GMR obtained from the different smartphones under the proposed setup is consistently higher than 80% at FMR = 0.1%, which indicates the applicability in a real-life verification scenario. Figure 6 presents the Receiver Operating Characteristic (ROC) curves for the iPhone, Nokia and Samsung phones. The consistent performance of the proposed setup can be confirmed at various FMRs by observing the ROC curves.

Figure 6: ROC curves for iris recognition on various smartphones using our proposed approach ((a) iPhone, (b) Nokia and (c) Samsung iris datasets).

4.3. Experiments on NIR spectrum iris recognition

As discussed in the earlier section, in order to benchmark the visible spectrum performance against the conventional NIR performance, we have collected a subset of iris images in the NIR domain from volunteers who were also enrolled in the visible spectrum database. Similar to the protocol mentioned earlier, owing to the availability of 10 samples per eye instance, we consider one sample as the reference and the remaining 9 samples as probes. The reference sample is iteratively swapped such that each sample becomes the reference at least once. All the scores obtained from the comparisons are combined to form the final genuine and impostor scores. The numbers of genuine and impostor scores generated in this set of experiments are detailed in Table 4.

Table 4: Distribution of genuine and impostor scores in the NIR-visible spectrum iris database

Phone                             | Number of unique eyes | Samples per eye | Genuine score | Impostor score
iPhone 5S                         |          24           |       10        |     1080      |     27600
Nokia Lumia 1020                  |          24           |       10        |     1080      |     27600
Samsung Active S4                 |          24           |       10        |     1080      |     27600
NIR - MorphoTrust Mobile-Eyes [1] |          24           |       10        |     1080      |     27600

4.3.1 Results on NIR spectrum iris recognition

The benchmark performance of iris recognition obtained using NIR versus the visible spectrum using our approach, for the subset of data mentioned in Section 4.3, is provided in Table 3. The corresponding ROC curves are presented in Figure 7. The best performance obtained for the 24 unique eye instances in this subset, in terms of GMR, is 91.01% at an FMR of 0.01%, which corresponds to the images captured with the iPhone 5S. Comparing the GMR obtained for the same set of data with the NIR device, our proposed setup indicates an equivalent performance. It can also be observed from the ROC curves that the obtained GMR is consistently higher than 80% even at lower FMRs, indicating superior or equivalent performance as compared to NIR images.

Figure 7: ROC curves for the various datasets acquired in this work. (a) Benchmark performance obtained using NIR images; (b)-(d) performance obtained using the proposed approach for the various smartphones.

Table 3: Benchmark performance on the LED versus NIR dataset, given as GMR (%) at the specified FMR. The reported accuracy for the individual feature extraction methods on the smartphones differs from the results in Table 2, as the dataset is significantly smaller. * OSIRIS v4.1 implementation of algorithms; † University of Salzburg Iris-Toolkit v1.0 implementation of algorithms.

Feature Extraction |      iPhone 5S       |   Nokia Lumia 1020   |  Samsung Active S4   | NIR - MorphoTrust Mobile-Eyes [1]
                   | FMR=0.01% | FMR=0.1% | FMR=0.01% | FMR=0.1% | FMR=0.01% | FMR=0.1% | FMR=0.01% | FMR=0.1%
Daugman * [3]      |   91.01   |  92.03   |   88.33   |  91.75   |   86.87   |  88.82   |   86.85   |  90.92
Log Gabor † [12]   |   84.53   |  87.96   |   83.61   |  87.5    |   82.56   |  85.53   |   52.68   |  77.12
Ko et al. † [10]   |   87.59   |  90.37   |   75.74   |  82.12   |   83.58   |  87.48   |   50.55   |  64.07
Ma et al. † [11]   |   84.9    |  88.24   |   78.05   |  86.2    |   67.58   |  76.2    |   79.81   |  94.44
LG-SRC † [12]      |   85.37   |  90.92   |   86.48   |  91.01   |   85      |  89.11   |   67.4    |  95.92

4.4. Evaluation of a commercial iris recognition system

Most iris recognition based authentication systems in real-life verification scenarios employ commercial-off-the-shelf (COTS) algorithms. To measure the applicability of our proposed set-up, we have evaluated the VeriEye commercial algorithm [2]. To provide a comparison with respect to the open-source algorithms, we have retained the experimental protocols discussed in the previous sections. The VeriEye SDK is highly tuned to work with NIR iris images and has been proven to work well even in the NIST IREX performance evaluation [2]. As the VeriEye SDK fails to extract the features/template for a number of images obtained using the smartphones in the visible spectrum, performance metrics such as EER, FMR and FNMR alone are not sufficient [4]. All images for which the template extraction fails must be treated as Failure-to-Enrol (FTE). As the acquisition of images is continued until images of satisfactory quality are obtained, the Failure-to-Acquire (FTA) rate equals zero. Thus, the performance measured should account for the FTA and FTE and, according to the International Standard ISO/IEC 19795-1 [4], the effective system performance should be reported as the Generalized Equal Error Rate (GEER), which can be obtained using the Generalized False Accept Rate (GFAR) and the Generalized False Reject Rate (GFRR), where

GFAR = FMR × (1 − FTA) × (1 − FTE)²    (1)

GFRR = FTE + (1 − FTE) × FTA + (1 − FTE) × (1 − FTA) × FNMR    (2)

where GFAR is the generalized false accept rate, GFRR is the generalized false reject rate, FMR is the false match rate, FNMR is the false non-match rate, FTE is the failure-to-enrol rate and FTA is the failure-to-acquire rate [4].

Table 5: Performance for images captured with the NIR sensor and with smartphones in the visible spectrum using the VeriEye SDK

Sensor / Smartphone           | FTE (%) | GMR (%) @ FMR=0.01% | GEER (%)
NIR - MorphoTrust Mobile-Eyes |    0    |         100         |    0
iPhone 5S                     |  29.09  |        78.98        |  14.15
Nokia Lumia 1020              |   4.64  |        91.86        |   6.74
Samsung Active S4             |   6.70  |        89.53        |   8.46
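A minimal sketch of Eqs. (1)-(2), useful for recomputing the generalized rates from the FTE values in Table 5; the function name and the idea of sweeping the decision threshold are our own illustration, not part of the VeriEye SDK.

```python
def generalized_error_rates(fmr, fnmr, fte, fta):
    """Generalized false accept / reject rates following Eqs. (1)-(2)
    (ISO/IEC 19795-1 style); all rates are fractions in [0, 1]."""
    gfar = fmr * (1.0 - fta) * (1.0 - fte) ** 2
    gfrr = fte + (1.0 - fte) * fta + (1.0 - fte) * (1.0 - fta) * fnmr
    return gfar, gfrr

# Example with the iPhone 5S figures from Table 5 (FTE = 29.09 %, FTA = 0):
# sweeping the decision threshold to obtain (fmr, fnmr) pairs and locating the
# point where gfar equals gfrr yields the reported GEER.
```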

Figure 8: Performance curves obtained using the VeriEye commercial SDK for NIR iris images and for images acquired using the proposed setup in the visible spectrum.

The performance obtained from the VeriEye SDK is provided in Table 5. A significant FTE can be observed in Table 5 for the iPhone data and a moderately low FTE for the Nokia and Samsung data. However, the FTE equals zero for the NIR data, as VeriEye successfully extracts templates for NIR iris images. Further, from Table 5 it can be noted that a GMR of 91.86% is obtained for the data acquired using the Nokia Lumia 1020, and a GMR of 100% is observed for the NIR data acquired with the MorphoTrust Mobile-Eyes device. Along the same lines, one can observe that the GEER obtained for the NIR data is 0%, and 6.74% for the data obtained using the Nokia smartphone. Figure 8 presents the ROC curves for the COTS evaluation of NIR images and of images acquired using the proposed set-up. The performance obtained using the COTS algorithm supports the applicability of the proposed imaging set-up for real-life verification systems. As the COTS system is highly tuned for NIR images, one can expect improved verification performance if the COTS algorithms are tuned for images in the visible spectrum.

5. Conclusions

The concentration of melanin pigments and the density of the collagen fibrils in the iris structure govern the color of the iris. A lower concentration of melanin and a lower density of collagen fibrils result in a light colored eye. A light colored eye can easily be captured in the visible spectrum, as the shorter wavelength light is scattered back. However, to image dark colored irises, NIR illumination has been used, as light in the shorter wavelength range is easily absorbed. In this work, we have proposed a novel way of imaging such dark colored irises using a white LED positioned at an acute angle. The specifications of the LED are close to those of the LEDs used in smartphones. Furthermore, to validate the robustness of the proposed setup for imaging dark irises, we have constructed a new database using three recent smartphones. The best GMR of 91.01% at FMR = 0.01%, obtained for the iPhone data, validates the applicability of the proposed approach in everyday authentication scenarios for dark colored irises. The proposed approach eliminates the need for NIR light on smartphones and can instead make use of the existing LED flash illumination. The benchmark evaluation of iris recognition in the visible spectrum using the proposed approach versus NIR illumination has indicated robust performance that is generally equivalent to or higher than that obtained with NIR images. The proposed imaging approach can thus be well adopted in real-life verification scenarios for everyday authentication.

Acknowledgments The authors wish to express thanks to Morpho (Safran Group) for supporting this work, and in particular to Morpho Research & Technology team for the fruitful technical and scientific exchanges related to this particular work.

References

[1] MorphoTrust USA. http://www.morphotrust.com/IdentitySolutions/ForFederalAgencies/Officer360/ArrestandCustody360/Mobile-Eyes.aspx. Accessed: 2015-07-03.
[2] VeriEye SDK. http://www.neurotechnology.com/verieye.html. Accessed: 2015-07-03.
[3] J. Daugman. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21–30, 2004.
[4] ISO/IEC TC JTC1 SC37 Biometrics. ISO/IEC 19795-1:2006. Information Technology – Biometric Performance Testing and Reporting – Part 1: Principles and Framework. International Organization for Standardization and International Electrotechnical Committee, Mar. 2006.
[5] Kiran B. Raja, R. Raghavendra, and C. Busch. Smartphone based robust iris recognition in visible spectrum using clustered k-means features. In Proceedings of the IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS), pages 15–21. IEEE, 2014.
[6] Kiran B. Raja, R. Raghavendra, and C. Busch. Video presentation attack detection in visible spectrum iris recognition using magnified phase information. IEEE Transactions on Information Forensics and Security, PP(99):1–1, 2015.
[7] Kiran B. Raja, R. Raghavendra, M. Stokkenes, and C. Busch. Smartphone authentication system using periocular biometrics. In 2014 International Conference of the Biometrics Special Interest Group, pages 27–38. IEEE, 2014.
[8] Kiran B. Raja, R. Raghavendra, M. Stokkenes, and C. Busch. Multi-modal authentication system for smartphones using face, iris and periocular. In IEEE International Conference on Biometrics (ICB), Phuket, Thailand, 2015.
[9] Kiran B. Raja, R. Raghavendra, V. K. Vemuri, and C. Busch. Smartphone based visible iris recognition using deep sparse filtering. Pattern Recognition Letters, 57:33–42, 2015.
[10] J.-G. Ko, Y.-H. Gil, J.-H. Yoo, and K.-I. Chung. A novel and efficient feature extraction method for iris recognition. ETRI Journal, 29(3):399–401, 2007.
[11] L. Ma, T. Tan, Y. Wang, and D. Zhang. Personal identification based on iris texture analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1519–1533, 2003.
[12] L. Masek and P. Kovesi. MATLAB source code for a biometric identification system based on iris patterns. The School of Computer Science and Software Engineering, The University of Western Australia, 2(4), 2003.
[13] R. A. Sturm and M. Larsson. Genetics of human iris colour and patterns. Pigment Cell & Melanoma Research, 22(5):544–562, 2009.
[14] G. Sutra, B. Dorizzi, S. Garcia-Salicetti, and N. Othman. A biometric reference system for iris, OSIRIS version 4.1. 2012.
[15] P. Viola and M. Jones. Robust real-time face detection. International Journal of Computer Vision, 57:137–154, 2004.
