This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS


Improving Iris Recognition Performance Using Segmentation, Quality Enhancement, Match Score Fusion, and Indexing Mayank Vatsa, Student Member, IEEE, Richa Singh, Student Member, IEEE, and Afzel Noore, Member, IEEE

Abstract—This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford–Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

Index Terms—Information fusion, iris indexing, iris recognition, Mumford–Shah curve evolution, quality enhancement, support vector machine (SVM).

Manuscript received November 29, 2006; revised July 19, 2007 and February 25, 2008. This paper was recommended by Associate Editor S. Sarkar. The authors are with the Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506-6109 USA (e-mail: [email protected]; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSMCB.2008.922059

I. INTRODUCTION

Current iris recognition systems claim to perform with very high accuracy. However, these iris images are captured in a controlled environment to ensure high quality. Daugman [1]–[4] proposed an iris recognition system representing an iris as a mathematical function. Wildes [5], Boles and Boashash [6], and several other researchers proposed different recognition algorithms [7]–[32]. With a sophisticated iris capture setup, users are required to look into the camera from a fixed distance, and the image is captured. Iris images captured in an uncontrolled environment are nonideal, with varying image quality. If the eyes are not properly opened, certain regions of the iris cannot be captured due to occlusion, which further affects the segmentation process and, consequently, the recognition performance. Images may also suffer from motion blur, camera diffusion, presence of eyelids and eyelashes, head rotation, gaze direction, camera angle, reflections, contrast, luminosity, and problems due to pupil contraction and dilation. Fig. 1 from the UBIRIS database [26], [27] shows images with some of these problems. These artifacts increase the false rejection rate (FRR), thus decreasing the performance of the recognition system. Experimental results from the Iris Challenge Evaluation (ICE) 2005 and ICE 2006 [30], [31] also show that most recognition algorithms have a high FRR. Table I compares existing iris recognition algorithms with respect to image quality, segmentation, enhancement, feature extraction, and matching techniques. A detailed literature survey of iris recognition algorithms can be found in [28].

This research effort focuses on reducing false rejection through accurate iris detection, quality enhancement, fusion of textural and topological iris features, and iris indexing. For iris detection, some researchers assume that the iris is circular or elliptical. In nonideal images, such as off-angle, motion-blurred, and noisy images, this assumption does not hold because the iris appears noncircular and nonelliptical. In this paper, we propose a two-level hierarchical iris segmentation algorithm to accurately and efficiently detect iris boundaries in nonideal iris images. The first level uses intensity thresholding to detect an approximate elliptical boundary, and the second level applies the Mumford–Shah functional to obtain the accurate iris boundary. We next describe a support vector machine (SVM) based iris quality enhancement algorithm [29].
The SVM quality enhancement algorithm identifies good-quality regions from different globally enhanced iris images and combines them to generate a single high-quality feature-rich iris image. Textural and topological features [17], [18] are then extracted from the quality-enhanced image for matching. Most of the iris recognition algorithms extract features that provide only global information or local information of iris patterns. In this paper, the feature extraction algorithm extracts global textural features and local topological features. The textural features are extracted using the 1-D log polar Gabor transform, which is invariant to rotation and translation, and the topological features are extracted using the Euler number technique, which is invariant under translation, rotation, scaling, and polar transformation.

1083-4419/$25.00 © 2008 IEEE



Fig. 1. Iris images representing the challenges of iris recognition. (a) Iris texture occluded by eyelids and eyelashes. (b) Iris images of an individual with different gaze directions. (c) Iris images of an individual showing the effects of contraction and dilation. (d) Iris images of the same individual at different instances: the first image is of good quality; the second has motion blur, and limited information is present. (e) Images of an individual showing the effect of the natural luminosity factor [26].

The state-of-the-art iris recognition algorithms have a very low false acceptance rate, but reducing the number of false rejections remains a major challenge. In the multibiometric literature [33]–[36], it has been suggested that fusing information extracted from different classifiers provides better performance than single classifiers. In this paper, we propose using 2ν-SVM to develop a fusion algorithm that combines the match scores obtained by matching textural and topological features for improved performance. Both verification and identification performance suffer from nonideal acquisition issues; however, identification is more difficult than verification because of the high penetration rate and false accept rate (FAR). To improve identification performance, we propose an iris indexing algorithm in which the Euler code is first used to filter possible matches; this subset is further processed using the textural features and 2ν-SVM fusion for accurate identification.

Section II presents the proposed nonideal iris segmentation algorithm, and Section III describes the novel quality enhancement algorithm. Section IV briefly explains the extraction of global features using the 1-D log polar Gabor transform and the extraction of local features using the Euler number. Section V describes the intelligent match score fusion algorithm, and Section VI presents the indexing algorithm to reduce the average identification time. The iris databases and existing algorithms used to validate the proposed algorithms are described in Section VII. Sections VIII and IX summarize the verification and identification performance of the proposed algorithms against existing recognition and fusion algorithms.

II. NONIDEAL IRIS SEGMENTATION ALGORITHM

Processing nonideal iris images is a challenging task because the iris and the pupil are noncircular, and the shape varies depending on how the image is captured. The first step in iris segmentation is the detection of pupil and iris boundaries from the input eye image and unwrapping the extracted iris into a rectangular form. Researchers have proposed different algorithms for iris detection. Daugman [1] applied an integrodifferential operator to detect the boundaries of the iris and the pupil; the segmented iris is then converted into a rectangular form by applying a polar transformation. Wildes [5] used the first derivative of image intensity to find the location of edges corresponding to the iris boundaries. This system explicitly models the upper and lower eyelids with parabolic arcs, whereas Daugman excludes the upper and lower portions of the image. Boles and Boashash [6] localized and normalized the iris using edge detection and other computer vision algorithms. Ma et al. [12], [13] used the Hough transform to detect the iris and pupil boundaries. Normally, the pupil is dark, and the iris is lighter with varying pigmentation. In certain nonideal conditions, the iris can be dark, and the pupil can appear illuminated: because of specular reflections from the cornea or coaxial illumination directly into the eye, light is reflected off the retina and back through the pupil, which makes the pupil appear bright. Also, the boundary of a nonideal iris image is irregular and cannot be considered exactly circular or elliptical. For such nonideal and irregular iris images, researchers have recently proposed segmentation algorithms that combine conventional intensity techniques with active contours for pupil and iris boundary detection [32], [37]–[39]. These algorithms use intensity-based techniques for center and pupil boundary detection. The pupil boundary is


TABLE I COMPARISON OF EXISTING IRIS RECOGNITION ALGORITHMS

used to initialize the active contour, which evolves to find the outer boundary of the iris. This method of evolution from the pupil to the outer iris boundary is computationally expensive. We, therefore, propose a two-stage iris segmentation algorithm in which we first estimate the inner and outer boundaries of the iris using an elliptical model. In the second stage, we apply the modified Mumford–Shah functional [40] in a narrow band over the estimated boundaries to compute the exact inner and outer boundaries of the iris. To identify the approximate boundary of the pupil in nonideal eye images, an elliptical region with major axis a = 1, minor axis b = 1, and center (x, y) is selected as the center of the eye, and the intensity values are computed for a fixed number of points on the circumference. The parameters of the ellipse (a, b, x, y, θ) are iteratively varied with a step size of two pixels to increase the size of the ellipse, and, every time, a fixed number of points are randomly chosen on the circumference (in the experiments, it is set to be 40 points) to calculate the total intensity value. This process is repeated to find the boundary with maximum variation in intensity and the center of the pupil. The approximate outer boundary of the iris is also detected in a similar manner. The parameters for the outer boundary a1 , b1 , x1 , y1 , and θ1 are varied by setting the initial parameters to the pupil boundary parameters. A fixed number of points (in the experiments, it is set to be 120 points) are chosen on the circumference, and the sum of the intensity values is computed. Values corresponding to the maximum intensity change give the outer boundary of the iris, and the center of this ellipse gives the center of the iris. This method, thus, provides approximate iris and pupil boundaries, corresponding centers, and major and minor axes. Some researchers assume the center of the pupil to be the center of the iris and compute the outer boundary. 
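A minimal sketch of the iterative elliptical boundary search described above (restricted to the circular special case with a fixed center for brevity; the full procedure also varies b, the center, and θ with a two-pixel step — the helper names are ours, not the paper's):

```python
import numpy as np

def ellipse_intensity(img, a, b, cx, cy, theta, n_points):
    """Sum image intensity at n_points sampled on the ellipse circumference."""
    t = np.random.uniform(0, 2 * np.pi, n_points)
    x = cx + a * np.cos(t) * np.cos(theta) - b * np.sin(t) * np.sin(theta)
    y = cy + a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta)
    xi = np.clip(np.round(x).astype(int), 0, img.shape[1] - 1)
    yi = np.clip(np.round(y).astype(int), 0, img.shape[0] - 1)
    return img[yi, xi].sum()

def approximate_pupil_boundary(img, cx, cy, max_radius, n_points=40, step=2):
    """Grow the ellipse from a = b = 1 in steps of two pixels and keep the
    size with the largest intensity change between consecutive steps,
    i.e., the approximate pupil/iris edge."""
    best, best_params, prev = -np.inf, None, None
    for r in range(1, max_radius, step):
        total = ellipse_intensity(img, r, r, cx, cy, 0.0, n_points)
        if prev is not None and abs(total - prev) > best:
            best, best_params = abs(total - prev), (r, r, cx, cy, 0.0)
        prev = total
    return best_params
```

The same loop, seeded with the pupil parameters and 120 circumference points, would approximate the outer iris boundary.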
Although this helps to simplify the modeling, in reality, this assumption is not valid for the nonideal iris. Computing the outer boundary using the proposed algorithm provides accurate segmentation even when the pupil and the iris are not concentric. Using these

approximate inner and outer boundaries, we now perform the curve evolution with the modified Mumford–Shah functional [40], [41] for iris segmentation. In the proposed curve evolution method, the model begins with the following energy functional:

$$\mathrm{Energy}(c) = \alpha \int_{\Omega} \phi \left| \frac{\partial \bar{C}}{\partial c} \right| dc + \beta \int_{\mathrm{in}(\bar{C})} |I(x, y) - c_1|^2 \, dx \, dy + \lambda \int_{\mathrm{out}(\bar{C})} |I(x, y) - c_2|^2 \, dx \, dy \quad (1)$$

where $\bar{C}$ is the evolution curve such that $\bar{C} = \{(x, y) : \bar{\psi}(x, y) = 0\}$, $c$ is the curve parameter, $\phi$ is the weighting function or stopping term, $\Omega$ represents the image domain, $I(x, y)$ is the original iris image, $c_1$ and $c_2$ are the average values of pixels inside and outside $\bar{C}$, respectively, and $\alpha$, $\beta$, and $\lambda$ are positive constants such that $\alpha < \beta \le \lambda$. Parameterizing (1) and deducing the associated Euler–Lagrange equation lead to the following active contour model:

$$\bar{\psi}_t = \alpha \phi (\bar{\nu} + k) |\nabla \bar{\psi}| + \nabla \phi \cdot \nabla \bar{\psi} + \beta \, \delta(\bar{\psi}) (I - c_1)^2 + \lambda \, \delta(\bar{\psi}) (I - c_2)^2 \quad (2)$$

where $\bar{\nu}$ is the advection term, $k$ is the curvature-based smoothing term, $\nabla$ is the gradient operator, and $\delta = 0.5/(\pi(x^2 + 0.25))$. The stopping term $\phi$ is defined as

$$\phi = \frac{1}{1 + (|\nabla I|)^2}. \quad (3)$$
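The stopping term in (3) is simple to compute; a sketch with NumPy (the finite-difference gradient via `np.gradient` is an assumed discretization, not specified in the paper):

```python
import numpy as np

def stopping_term(image):
    """phi = 1 / (1 + |grad I|^2): close to 1 in flat regions and close to
    0 at strong edges, so the evolving contour slows and stops on the
    iris boundary."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag_sq = gx ** 2 + gy ** 2
    return 1.0 / (1.0 + grad_mag_sq)
```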

The active contour ψ¯ is initialized to the approximate pupil boundary, and the exact pupil boundary is computed by evolving the contour in a narrow band [42] of ±5 pixels. Similarly, for computing the exact outer iris boundary, the approximate


Fig. 2. Iris detection using the proposed nonideal iris segmentation algorithm. (a) Original image. (b) Pupil boundary. (c) Final iris and pupil boundary.

iris boundary is used as the initial contour $\bar{\psi}$, and the curve is evolved in a narrow band [42] of ±10 pixels. Using the stopping term $\phi$, the curve evolution stops at the exact outer iris boundary. Since the approximate iris boundaries are used to initialize $\bar{\psi}$, the complexity of the curve evolution is reduced, making it suitable for real-time applications. Fig. 2 shows the pupil and iris boundaries extracted using the proposed nonideal iris segmentation algorithm.

In nonideal cases, eyelids and eyelashes may be present as noise and decrease the recognition performance. Using the technique described in [1], eyelids are isolated by fitting lines to the upper and lower eyelids. A mask based on the detected eyelids and eyelashes is then used to extract the iris without noise.

Image processing of the iris is computationally intensive, as the area of interest is donut shaped, and grabbing the pixels in this region requires repeated rectangular-to-polar conversion. To simplify this, the detected iris is unwrapped into a rectangular region by conversion to polar coordinates. Let $I(x, y)$ be the segmented iris image and $I(r, \theta)$ be the polar representation obtained using

$$r = \sqrt{(x - x_c)^2 + (y - y_c)^2}, \quad 0 \le r \le r_{\max} \quad (4)$$

$$\theta = \tan^{-1}\left(\frac{y - y_c}{x - x_c}\right) \quad (5)$$

where $r$ and $\theta$ are defined with respect to the center coordinates $(x_c, y_c)$. The center coordinates obtained during approximate elliptical iris boundary fitting are used as the center point for the Cartesian-to-polar transformation. The transformed polar iris image is further used for enhancement, feature extraction, and matching.

III. GENERATION OF A SINGLE HIGH-QUALITY IRIS IMAGE USING ν-SVM

For iris image enhancement, researchers consecutively apply selected enhancement algorithms, such as deblurring, denoising, entropy correction, and background subtraction, and use the final enhanced image for further processing. Huang et al. [43] used superresolution and a Markov network for iris image quality enhancement; however, their method does not perform well with unregistered iris images. Ma et al. [12] proposed background-subtraction-based iris enhancement that filters the high-frequency noise. Poursaberi and Araabi [25] proposed the use of a low-pass 2-D Wiener filter for iris image enhancement. However, these filtering techniques are not effective in mitigating the effects of blur, defocus, and entropy-based irregularities. Another challenge with existing enhancement techniques is that, while they enhance the low-quality regions present in the image, they are likely to deteriorate the good-quality regions and alter the features of the iris image. A nonideal iris image containing multiple irregularities may require the application of specific algorithms to the local regions that need enhancement. However, identifying and isolating these local regions in an iris image is tedious, time consuming, and not pragmatic. In this paper, we address the problem by concurrently applying a set of selected enhancement algorithms globally to the original iris image [29]. Thus, each resulting image contains enhanced local regions. These enhanced local regions are identified from each of the transformed images using an SVM-based [44] learning algorithm and are then synergistically combined to generate a single high-quality iris image.

Let $I$ be the original iris image. For every iris image in the training database, a set of transformed images is generated by applying standard enhancement algorithms for noise removal, defocus, motion blur removal, histogram equalization, entropy equalization, homomorphic filtering, and background subtraction. The set of enhancement functions is expressed as follows:

$$I_1 = f_{noise}(I), \quad I_2 = f_{blur}(I), \quad I_3 = f_{focus}(I), \quad I_4 = f_{histogram}(I), \quad I_5 = f_{entropy}(I), \quad I_6 = f_{filter}(I), \quad I_7 = f_{background}(I) \quad (6)$$

where $f_{noise}$ is the algorithm for noise removal, $f_{blur}$ is the algorithm for blur removal, $f_{focus}$ is the algorithm for adjusting the focus of the image, $f_{histogram}$ is the histogram equalization function, $f_{entropy}$ is the entropy filter, $f_{filter}$ is the homomorphic filter for contrast enhancement, and $f_{background}$ is the


background subtraction process. $I_1, I_2, \ldots, I_7$ are the resulting globally enhanced images obtained when the above enhancement operations are applied to the original iris image $I$. Applying several global enhancement algorithms does not uniformly enhance all the regions of the iris image. A learning algorithm is proposed to train and classify the pixel quality from corresponding locations of the globally enhanced iris images. This knowledge is used by the algorithm to identify the good-quality regions from each of the transformed and original iris images, which are combined to form a single high-quality iris image. The learning algorithm uses ν-SVM [45], which is expressed as follows:

$$f(x) = \mathrm{sgn}\left(\sum_{i=1}^{m} \alpha_i y_i k(x, x_i) + b\right), \qquad \sum_{i=1}^{m} \alpha_i y_i = 0, \quad \sum_{i=1}^{m} \alpha_i \ge \nu \quad (7)$$

where $\nu \in [0, 1]$, $x_i$ is the input to the ν-SVM, $y_i$ is the corresponding label, $m$ is the number of tuples, $\alpha_i$ is the dual variable, and $k$ is the RBF kernel. Furthermore, a fast implementation of ν-SVM [46] is used to decrease the time complexity. Training involves classifying the local regions of the input and globally enhanced iris images as good or bad. Any quality assessment algorithm can be used for this task; in this paper, we use the redundant discrete wavelet transformation based quality assessment algorithm described in [47]. To minimize the possibility of errors due to the quality assessment algorithm, we also manually verify the labels and correct them in case of errors. The labeled training data are then used to train the ν-SVM. The training algorithm is as follows.
• The training iris images are decomposed to l levels using the discrete wavelet transform (DWT). The 3l detail subbands of each image contain the edge features, and, thus, these bands are used for training.
• The subbands are divided into windows of size 3 × 3, and the activity level of each window is computed.
• The ν-SVM is trained using the labeled iris images to determine the quality of every wavelet coefficient. The activity levels computed in the previous step are used as input to the ν-SVM.
• The output of the training algorithm is a ν-SVM with a separating hyperplane. The trained ν-SVM labels a coefficient G (or 1) if it is good and B (or 0) if it is bad.
Next, the trained ν-SVM is used to classify the pixels of the input image and to generate a new feature-rich high-quality iris image. The classification algorithm is as follows.
• The original iris image and the corresponding globally enhanced iris images generated using (6) are decomposed to l DWT levels.
• The ν-SVM classifier is then used to classify the coefficients of the input bands as good or bad.
• A decision matrix, Decision, is generated to store the quality of each coefficient in terms of G and B. At any position (i, j), if the SVM output O(i, j) is nonnegative, then that coefficient is labeled G; otherwise, it is labeled B, i.e.,

$$\mathrm{Decision}(i, j) = \begin{cases} G & \text{if } O(i, j) \ge 0 \\ B & \text{if } O(i, j) < 0. \end{cases} \quad (8)$$

• The above operation is performed on all eight images, including the original iris image, and a decision matrix corresponding to every image is generated.
• For each of the eight decision matrices, the average of all coefficients with label G is computed, and the coefficients with label B are discarded. In this manner, one fused approximation band and 3l fused detail subbands are generated. Individual processing of every coefficient ensures that irregularities present locally in the image are removed. Furthermore, selecting good-quality coefficients and removing all bad coefficients addresses multiple irregularities present in one region.
• Inverse DWT is applied to the fused approximation and detail subbands to generate a single feature-rich high-quality iris image.
In this manner, the quality enhancement algorithm enhances the quality of the input iris image, and a feature-rich image is obtained for feature extraction and matching. Fig. 3 shows an example of the original iris image, the different globally enhanced images, and the combined image generated using the proposed iris image quality enhancement algorithm.

IV. IRIS TEXTURAL AND TOPOLOGICAL FEATURE EXTRACTION AND MATCHING ALGORITHMS

Researchers have proposed several feature extraction algorithms to extract unique and invariant features from the iris image. These algorithms use either texture- or appearance-based features. The first algorithm was proposed by Daugman [1], which used 2-D Gabor filters for feature extraction. Wildes [5] applied an isotropic bandpass decomposition derived from the application of Laplacian of Gaussian filters to the iris image. These were followed by several other research papers, such as those of Ma et al.
[12], [13] in which the multichannel evensymmetric Gabor wavelet and the multichannel spatial filters were used to extract textural information from iris patterns. The usefulness of the iris features depends on the properties of the basis function and the feature encoding process. In this paper, the iris recognition algorithm uses global and local properties of an iris image. A 1-D log polar Gabor transform-based [48] textural feature [17], [18] provides the global properties that are invariant to scaling, shift, rotation, illumination, and contrast. Topological features [17], [18] extracted using the Euler number [49] provide local information of iris patterns and are invariant to rotation, translation, and scaling of the image. Sections IV-1 and 2 briefly describe the textural and topological feature extraction algorithm. 1) Textural Feature Extraction Using the 1-D Log Polar Gabor Wavelet: The textural feature extraction algorithm [17], [18] uses the 1-D log polar Gabor transform [48]. Like the Gabor transform [50], the log polar Gabor transform is also based on polar coordinates; however, unlike the frequency


Fig. 3. Original iris image, seven globally enhanced images, and the SVM-enhanced iris image.

dependence on a linear graduation, the dependence is realized by a logarithmic frequency scale [50], [51]. Therefore, the functional form of the 1-D log polar Gabor transform is given by

$$G_{r_0 \theta_0}(\theta) = \exp\left[-2\pi \sigma^2 \left\{\ln\left(\frac{r - r_0}{f_0}\right)\right\}^2 \tau^2 + \left\{2 \ln\left(f_0 \sin(\theta - \theta_0)\right)\right\}^2\right] \quad (9)$$

where $(r, \theta)$ are the polar coordinates, $r_0$ and $\theta_0$ are the initial values, $f$ is the center frequency of the filter, and $f_0$ is the parameter that controls the bandwidth of the filter. $\sigma$ and $\tau$ are defined as

$$\sigma = \frac{1}{\pi \ln(r_0) \sin(\pi/\theta_0)} \sqrt{\frac{\ln 2}{2}} \quad (10)$$

$$\tau = \frac{2 \ln(r_0) \sin(\pi/\theta_0)}{\ln 2} \sqrt{\frac{\ln 2}{2}}. \quad (11)$$

The Gabor transform is symmetric with respect to the principal axis. During encoding, the Gabor function overrepresents the low-frequency components and underrepresents the high-frequency components [48], [50], [51]. In contrast, the log polar Gabor transform shows maximum translation from the center of gravity in the direction of lower frequency and flattening of the high-frequency part. The most important property of this filter is invariance to rotation and scaling. Also, log polar Gabor functions have extended tails and encode natural images more efficiently than Gabor functions. To generate an iris template with the 1-D log polar Gabor transform, the 2-D unwrapped iris pattern is decomposed into a number of 1-D signals, where each row corresponds to a circular ring on the iris region. For encoding, the angular direction is used rather than the radial direction because maximum independence occurs along the angular direction. The 1-D signals are convolved with the 1-D log polar Gabor transform in the frequency domain. The values of the convolved iris image are

Fig. 4. Binary iris templates generated using the 1-D log polar Gabor transform. (a), (b) Iris templates of the same individual at two different instances.

complex in nature. From these real and imaginary values, the phase information is extracted and encoded in a binary pattern. If the convolved iris image is $I_g(r, \theta)$, then the phase feature $P(r, \theta)$ is computed using

$$P(r, \theta) = \tan^{-1}\left(\frac{\mathrm{Im}\, I_g(r, \theta)}{\mathrm{Re}\, I_g(r, \theta)}\right) \quad (12)$$

$$I_p(r, \theta) = \begin{cases} [1, 1] & \text{if } 0^\circ < P(r, \theta) \le 90^\circ \\ [0, 1] & \text{if } 90^\circ < P(r, \theta) \le 180^\circ \\ [0, 0] & \text{if } 180^\circ < P(r, \theta) \le 270^\circ \\ [1, 0] & \text{if } 270^\circ < P(r, \theta) \le 360^\circ. \end{cases} \quad (13)$$

Phase features are quantized using the process in (13), where $I_p(r, \theta)$ is the resulting binary iris template of 4096 bits. Fig. 4 shows an iris template generated using this algorithm. To verify a person's identity, the query iris template is matched against the stored templates. For matching textural iris templates, we use the Hamming distance [1]. The match score $MS_{texture}$ for any two texture-based masked iris templates $A_i$ and $B_i$ is computed as

$$MS_{texture} = \frac{1}{N} \sum_{i=1}^{N} A_i \oplus B_i \quad (14)$$

where $N$ is the number of bits in each template, and $\oplus$ is the XOR operator. For handling rotation, the templates


are shifted left and right bitwise, and the match scores are calculated for every successive shift [1]. The smallest value is used as the final match score $MS_{texture}$. The bitwise shifting in the horizontal direction corresponds to the rotation of the original iris region by an angle defined by the angular resolution. This also accounts for misalignments in the normalized iris pattern caused by rotational differences during imaging.

2) Topological Feature Extraction Using the Euler Number: Convolution with the 1-D log polar Gabor transform extracts the global textural characteristics of the iris image. To further improve the performance, local features represented by the topology of the iris image are extracted using Euler numbers [18], [49]. For a binary image, the Euler number is defined as the difference between the number of connected components and the number of holes. Euler numbers are invariant to rotation, translation, scaling, and polar transformation of the image [18]. Each pixel of the unwrapped iris can be represented as an 8-bit binary vector $\{b_7, b_6, b_5, b_4, b_3, b_2, b_1, b_0\}$. These bits form eight binary planes. As shown in Fig. 5, the four planes formed from the four most significant bits (MSBs) represent the structural information of the iris, and the remaining four planes represent the brightness information [49]. The brightness information is random in nature and is not useful for comparing the structural topology of two iris images.

Fig. 5. Binary images corresponding to the 8-bit planes of the masked polar image.

For comparing two iris images using the Euler code, a common mask is generated by performing a bitwise OR of the individual masks of the two iris images; the common mask is then applied to both polar iris images. For each of the two masked iris images, a 4-tuple Euler code is generated, representing the Euler numbers of the four MSB planes. Table II shows the Euler codes of a person at three different instances.

TABLE II
EULER CODE OF AN INDIVIDUAL AT THREE DIFFERENT INSTANCES

We use the Mahalanobis distance to match two Euler codes. The Mahalanobis distance between two vectors is defined as

$$D(x, y) = \sqrt{(x - y)^t S^{-1} (x - y)} \quad (15)$$

where $x$ and $y$ are the two Euler codes to be matched, and $S$ is the positive-definite covariance matrix of $x$ and $y$. If the Euler code has a large variance, it increases the false reject rate. The Mahalanobis distance ensures that features with high variance do not dominate the distance, thus avoiding an increase in the false reject rate. The topology-based match score is computed as

$$MS_{topology} = \frac{D(x, y)}{\log_{10} \max(D)} \quad (16)$$

where $\max(D)$ is the maximum possible value of the Mahalanobis distance between two Euler codes; the match score is thus the normalized Mahalanobis distance between the two Euler codes.

V. FUSION OF TEXTURAL AND TOPOLOGICAL MATCHING SCORES

Iris recognition algorithms have succeeded in achieving a low false acceptance rate; however, reducing the false rejection rate remains a major challenge. To make iris recognition algorithms more practical and adaptable to diverse applications, the FRR must be significantly reduced. In [33], [35], [36], and [52], it has been suggested that the fusion of match scores from two or more classifiers provides better performance than a single classifier. In general, match score fusion is performed using the sum rule, the product rule, or other statistical rules. Recently, a kernel-based match score fusion algorithm was proposed in [35] to fuse the match scores of fingerprints and signatures. In this section, we propose using 2ν-SVM [53] to fuse the information obtained by matching the textural and topological features of the iris image described in Section IV. The proposed fusion algorithm reduces the FRR while maintaining a low false acceptance rate.

Let the training set be $Z = (x_i, y_i)$, $i = 1, \ldots, N$, where $N$ is the number of multimodal scores used for training, and $y_i \in \{1, -1\}$, with 1 representing the genuine class and −1 the impostor class. An SVM is trained using these labeled training data. The mapping function $\varphi(\cdot)$ maps the training data into a higher dimensional feature space such that $Z \to \varphi(Z)$. The optimal hyperplane separating this higher dimensional feature space into two classes can be obtained using 2ν-SVM [53]. We have $\{x_i, y_i\}$ as the set of $N$ multimodal scores with $x_i \in \mathbb{R}^d$. Here, $x_i$ is the $i$th score belonging to the binary class $y_i$.
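As a concrete sketch of the topological matching whose scores enter this fusion, the Euler code and Mahalanobis comparison of Section IV might be implemented as follows (counting components and holes with `scipy.ndimage.label` is an assumed implementation detail not specified in the paper; the connectivity pairing is a standard choice):

```python
import numpy as np
from scipy import ndimage

def euler_number(binary):
    """Euler number = connected components - holes (8-connectivity for the
    foreground, 4-connectivity for the background, a standard pairing)."""
    structure8 = np.ones((3, 3), dtype=int)
    n_components = ndimage.label(binary, structure=structure8)[1]
    # Holes are background regions not connected to the image border:
    # pad the inverted image with True so the outer background is one blob.
    padded = np.pad(~binary.astype(bool), 1, constant_values=True)
    n_background = ndimage.label(padded)[1]  # 4-connectivity by default
    return n_components - (n_background - 1)

def euler_code(gray_strip):
    """4-tuple of Euler numbers from the four MSB bit planes (b7..b4)."""
    return [euler_number((gray_strip >> b) & 1) for b in (7, 6, 5, 4)]

def mahalanobis(x, y, S_inv):
    """D(x, y) from (15), given the inverse covariance matrix."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ S_inv @ d))
```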

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. 8

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS

Fig. 6. Steps involved in the proposed 2ν-SVM match score fusion algorithm.

The objective of training the 2ν-SVM is to find the hyperplane that separates the two classes with the widest margin, i.e.,

    wϕ(x) + b = 0    (17)

subject to

    yi (wϕ(xi) + b) ≥ (ρ − ξi),   ξi ≥ 0    (18)

so as to minimize

    (1/2)‖w‖² − Σi Ci (νρ − ξi)    (19)

where ρ is the position of the margin, and ν is the error parameter. ϕ(x) is the mapping function that maps the data space to the feature space and provides generalization for the decision function, which may not be a linear function of the training data. Ci(νρ − ξi) is the cost of errors, w is the normal vector, b is the bias, and ξi is the slack variable for classification errors. ν is calculated from ν+ and ν−, the error parameters for training the positive and negative classes, respectively, i.e.,

    ν = 2ν+ν− / (ν+ + ν−),   0 < ν+ < 1 and 0 < ν− < 1.    (20)

The error penalty Ci is calculated as

    Ci = { C+, if yi = +1
           C−, if yi = −1    (21)

where

    C+ = [n+ (1 + ν+/ν−)]⁻¹    (22)

    C− = [n− (1 + ν−/ν+)]⁻¹    (23)

and n+ and n− are the numbers of training points for the positive and negative classes, respectively. 2ν-SVM training can then be formulated as

    max_{αi} [ −(1/2) Σ_{i,j} αi αj yi yj K(xi, xj) ]    (24)

where

    0 ≤ αi ≤ Ci,   Σi αi yi = 0,   Σi αi ≥ ν,    (25)

i, j ∈ 1, . . . , N, and the kernel function is given by

    K(xi, xj) = ϕ(xi) ϕ(xj).    (26)

Kernel function K(xi, xj) is chosen as the radial basis function. The 2ν-SVM is initialized and optimized using iterative decomposition training [53], which reduces complexity. In the testing phase, the fused score ft of a multimodal test pattern xt is defined as

    ft = f(xt) = wϕ(xt) + b.    (27)
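The quantities in (20)–(23) and the decision side of (24)–(27) can be sketched as follows. Training itself (solving the dual (24)–(25) by iterative decomposition [53]) is omitted; the support vectors, α values, bias, kernel width γ, and threshold below are toy placeholders, not values from the paper.

```python
import numpy as np

def two_nu_parameters(nu_pos, nu_neg, n_pos, n_neg):
    """nu from (20) and the class-wise error penalties C+ (22) and C- (23)."""
    nu = 2.0 * nu_pos * nu_neg / (nu_pos + nu_neg)
    c_pos = 1.0 / (n_pos * (1.0 + nu_pos / nu_neg))
    c_neg = 1.0 / (n_neg * (1.0 + nu_neg / nu_pos))
    return nu, c_pos, c_neg

def rbf_kernel(x, y, gamma=1.0):
    """Radial basis function kernel, the choice made for K(xi, xj)."""
    return float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def fused_score(x, support_vectors, alphas, labels, bias, gamma=1.0):
    """Dual form of the fused score (27): f(x) = sum_i alpha_i y_i K(x_i, x) + b."""
    return sum(a * y * rbf_kernel(s, x, gamma)
               for s, a, y in zip(support_vectors, alphas, labels)) + bias

def decide(score, threshold=0.0):
    """Accept/reject rule with a decision threshold (toy value)."""
    return "accept" if score >= threshold else "reject"
```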

The fused score ft in (27) is the signed distance of xt from the separating hyperplane given by the 2ν-SVM. Last, an accept or reject decision is made on the test pattern xt using a decision threshold X, i.e.,

    Result(xt) = { accept, if ft ≥ X
                   reject, otherwise.    (28)

Fig. 6 presents the steps involved in the proposed 2ν-SVM learning algorithm, which fuses the textural and topological match scores for improved classification.

VI. IRIS IDENTIFICATION USING EULER CODE INDEXING

Iris recognition can be used for verification (1 : 1 matching) as well as identification (1 : N matching). Apart from the irregularities due to nonideal acquisition, iris identification suffers from high system penetration and false accept cases. For identification, a probe iris image is matched against all the gallery images, and the best match is the rank 1 match. Because of poor quality and nonideal acquisition, the rank 1 match may not be the correct match, which leads to false acceptance. The computational time for performing iris identification on large databases is another challenge [31]. For example, identifying an individual from a database of 50 million users requires an average of 25 million comparisons. On such databases, applying distance-based iris code matching or the proposed SVM fusion takes a significant amount of time. Parallel processing and improved hardware can reduce the computational time, but at the expense of operational cost. Other techniques that can speed up the identification process are classification and indexing. Yu et al. [19] proposed a coarse iris classification technique using fractals that classifies iris images into four categories. The classification technique improves the performance in terms of computational time but compromises the identification accuracy. Mukherjee [54] proposed an iris indexing algorithm in which block-based statistics are used for iris indexing. A single-pixel


difference histogram used in the indexing algorithm yields good performance on a subset of the CASIA Version 3 database. However, the indexing algorithm was not evaluated on nonideal poor-quality iris images.

In this paper, we propose a feature-based iris indexing algorithm that reduces the computational time required for iris identification without compromising the identification accuracy. The proposed indexing algorithm is a two-step process: the Euler code is first used to generate a small subset of possible matches, and the 2ν-SVM match score fusion algorithm is then used to find the best matches from this list. The algorithm is divided into two parts: 1) feature extraction and database enrollment, in which features are extracted from the gallery images and indexed using the Euler code; and 2) probe image identification, in which features from the probe image are extracted and matched.

Fig. 7. Iris image divided into four parts. Regions A and B are used in the proposed iris indexing algorithm.

A. Feature Extraction and Database Enrollment

Compared to the feature extraction and matching algorithm described in Section IV, we use a slightly different strategy for feature extraction. For verification, we use common masks from the gallery and probe images to hide eyelids and eyelashes. In indexing, however, we do not follow the same method, because generating a common mask for every pair of probe and gallery images would increase the computational cost. Using the iris center coordinates, the x- and y-axes are drawn, and the iris is divided into four regions. Researchers have shown that regions A and B in Fig. 7 contain minimal occlusion due to eyelids and eyelashes and, hence, are the most useful for iris recognition [4], [9], [22]. Therefore, for indexing, we use regions A and B to extract features. The extracted features are stored in the database, and the Euler code is used as the indexing parameter.

B. Probe Image Identification

Similar to the database enrollment process, features are extracted from the probe iris image, and the Euler code is used to find the possible matches. To match two iris indexing parameters (Euler codes) E1(i) and E2(i), i = 1, 2, 3, 4, we apply a thresholding scheme: the indexing parameters are said to match if |E1(i) − E2(i)| ≤ T, where T is the geometric tolerance constant. The indexing score S is computed using

    s(i) = { 1, if |E1(i) − E2(i)| ≤ T
             0, otherwise    (29)

    S = (1/4) Σ_{i=1}^{4} s(i)    (30)

where s is the intermediate score vector of length four; its entries indicate the number of matched Euler values. We extend this scheme to iris identification by matching the indexing parameter of the probe image with those of the gallery images. Let n be the total number of gallery images, and let Sn represent the indexing scores corresponding to the n comparisons. The indexing scores Sn are sorted in descending order, and the top M match scores are selected as possible matches. For every probe image, the Euler-code-based indexing scheme thus yields a small subset of top M matches from the gallery, where M ≪ n (for instance, M = 20 and n = 2000). To further improve the identification accuracy, we apply the proposed 2ν-SVM match score fusion. We use the algorithms described in Sections IV and V to match the textural and topological features of the probe image with the top M matched images from the gallery and compute the fused match score for each of the M gallery images. Last, these M fused match scores are again sorted, and a new ranking is obtained to determine the identity.

VII. DATABASES AND ALGORITHMS USED FOR PERFORMANCE EVALUATION AND COMPARISON

In this section, we describe the iris databases and algorithms that are used for evaluating the performance of the proposed algorithms.

A. Databases Used for Validation

To evaluate the performance of the proposed algorithms, we selected three iris databases, namely, ICE 2005 [30], [31], CASIA Version 3 [55], and UBIRIS [26], [27]. These databases are chosen for validation because their iris images embody the irregularities captured with different instruments and device characteristics under varying conditions. The databases also contain iris images from different ethnicities and thus facilitate a comprehensive performance evaluation of the proposed algorithms.
• The ICE 2005 database [30], [31], used in the recent Iris Challenge Evaluation, contains iris images from 244 iris classes. The total number of images in the database is 2953.
• The CASIA Version 3 database [55] contains 22 051 iris images pertaining to more than 1600 classes. The images have been captured using different imaging setups. The


Fig. 10. ROC plot showing the performance of the proposed algorithms on the CASIA Version 3 database [55].

Fig. 8. Results of the proposed iris segmentation algorithm.

Fig. 11. ROC plot showing the performance of the proposed algorithms on the UBIRIS iris database [26].

Fig. 9. ROC plot showing the performance of the proposed algorithms on the ICE 2005 database [30].

quality of images present in the database also varies from high-quality images with extremely clear iris textural details to images with nonlinear deformation due to variations in visible illumination. Unlike CASIA Version 1, where artificially manipulated images were present, CASIA Version 3 contains original unmasked images.
• The UBIRIS database [26], [27] is composed of 1877 images from 241 classes captured in two different sessions. The images in the first session are of good quality, whereas the images captured in the second session have irregularities in reflection, contrast, natural luminosity, and focus.

B. Existing Algorithms Used for Validation

To evaluate the effect of the proposed quality enhancement algorithm on different feature extraction and matching techniques, we implemented Daugman's integrodifferential operator and the neural-network-architecture-based 2-D Gabor transform described in [1]–[4]. We also used Masek's iris recognition algorithm obtained from [11]. Furthermore, the performance of the proposed 2ν-SVM fusion algorithm is compared with the sum rule [33], [34], the min/max rule [33], [34], and the kernel-based fusion rule [35].

VIII. PERFORMANCE EVALUATION AND VALIDATION FOR IRIS VERIFICATION

In this section, we evaluate the performance of the proposed segmentation, enhancement, feature extraction, and fusion algorithms for iris verification. The performance of the proposed algorithms is validated using the databases and algorithms described in Section VII. For validation, we divided each database into three parts: the training data set, the gallery data


Fig. 12. Sample iris images from the UBIRIS database [26] on which the proposed algorithms fail to perform.

set, and the probe data set. The training data set consists of one manually labeled good-quality image and one bad-quality image per class. This data set is used to train the ν-SVM for quality enhancement and the 2ν-SVM for fusion. After training, the good-quality image in the training data set is used as the gallery data set, and the remaining images are used as the probe data set. The bad-quality images of the training data set are used in neither the gallery nor the probe data set.

For iris segmentation, we performed extensive experiments to compute a common set of curve evolution parameters that can detect the exact boundaries of the iris and the pupil across all the databases. The parameter values for segmentation with narrow-band curve evolution are α = 0.2, β = 0.4, λ = 0.4, advection term ν¯ = 0.72, and curvature term k = 0.001. These values provide accurate segmentation results for all three databases. Fig. 8 shows sample results demonstrating the effectiveness of the proposed iris segmentation algorithm on all the databases with their different characteristics. The inner yellow curve represents the pupil boundary, and the outer red curve represents the iris boundary. Fig. 8 also shows that the proposed segmentation algorithm is not affected by specular reflections present in the pupil region.

Using the proposed iris segmentation and quality enhancement algorithms, we then evaluated the verification performance with the textural and topological features. The match scores obtained from the textural and topological features were fused using the 2ν-SVM to further evaluate the proposed fusion algorithm. Figs. 9–11 show the receiver operating characteristic (ROC) plots for iris recognition using the textural feature extraction, topological feature extraction, and 2ν-SVM match score fusion algorithms. Fig. 9 shows the ROC plot for the ICE 2005 database [30], and Fig. 10 shows the results for the CASIA Version 3 database [55].
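Operating points such as "FRR at 0.0001% FAR" on these ROC plots are obtained by thresholding the genuine and impostor score distributions. A minimal sketch, assuming higher scores mean better matches (the sign convention is an assumption):

```python
import numpy as np

def frr_at_far(genuine, impostor, far_target):
    """FRR at the decision threshold whose impostor acceptance rate is far_target.

    The threshold is taken as the (1 - far_target) quantile of the impostor
    score distribution; genuine scores below it are falsely rejected.
    """
    threshold = np.quantile(impostor, 1.0 - far_target)
    return float(np.mean(np.asarray(genuine) < threshold))
```

Sweeping far_target over a range of values and plotting the resulting FRRs traces out the ROC curve.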
The ROC plots show that the proposed 2ν-SVM match score fusion performs best, followed by textural- and topological-feature-based verification. The FRR of the individual features is high, but the fusion algorithm significantly reduces it, yielding an FRR of 0.74% at 0.0001% FAR on the ICE 2005 database and 0.38% on the CASIA Version 3 database. The results on the ICE 2005 database also show that the

TABLE III PERFORMANCE COMPARISON OF THE PROPOSED ALGORITHMS ON THREE IRIS DATABASES

verification performance of the proposed fusion algorithm is comparable to the three best algorithms in the Iris Challenge Evaluation 2005 [31].

The same set of experiments is performed using the UBIRIS database [26]. The images in this database contain irregularities due to motion blur, off-angle acquisition, gaze direction, diffusion, and other real-world problems, which allows us to evaluate the robustness of the proposed algorithms on nonideal iris images. Fig. 11 shows the ROC plot obtained using the UBIRIS database. In this experiment, the best performance of 7.35% FRR at 0.0001% FAR is achieved using the 2ν-SVM match score fusion algorithm. The high rate of false rejection is due to cases in which the iris is only partially visible; examples of such cases are shown in Fig. 12.

The experimental results on all three databases are summarized in Table III. The table shows that the proposed fusion algorithm significantly reduces the FRR. However, the rejection rate cannot be reduced when a closed eye or an eye image with very limited information is presented for matching.

We next evaluated the effectiveness of the proposed iris image quality enhancement algorithm and compared it with existing enhancement algorithms, namely, Wiener filtering [25] and background subtraction [12]. Table IV shows the results for the proposed and existing verification algorithms when the original iris images are used and when the quality-enhanced images are used. For the ICE 2005 database, the table shows that, without enhancement, the proposed 2ν-SVM fusion algorithm gives 1.99% FRR at 0.0001% FAR. The performance improves by 1.25% when the proposed iris image quality enhancement algorithm is used. We also found that the proposed SVM image quality enhancement algorithm outperforms existing


TABLE IV EFFECT OF THE PROPOSED IRIS IMAGE QUALITY ENHANCEMENT ALGORITHM AND PERFORMANCE COMPARISON OF IRIS RECOGNITION ALGORITHMS

TABLE V COMPARISON OF EXISTING FUSION ALGORITHMS WITH THE PROPOSED 2ν-SVM FUSION ALGORITHM ON THE ICE 2005 DATABASE

TABLE VI AVERAGE TIME TAKEN FOR THE STEPS INVOLVED IN THE PROPOSED IRIS RECOGNITION ALGORITHM

enhancement algorithms by at least 0.89%. Similar results are obtained for the other two iris image databases. The SVM iris image quality enhancement algorithm also improves the performance of existing iris recognition algorithms. The SVM enhancement algorithm performs better because it locally removes irregularities, such as blur and noise, and enhances the intensity of the iris image, whereas the Wiener filter only removes noise, and the background subtraction algorithm only highlights features by improving the image intensity.

We further compared the performance of the proposed 2ν-SVM fusion algorithm with Daugman's iris detection and recognition algorithms [1]–[4] and Masek's implementation of iris recognition [11]. The results in Table IV show that the proposed 2ν-SVM fusion yields better performance than Daugman's and Masek's implementations because the 2ν-SVM fusion algorithm uses multiple cues extracted from the iris image and intelligently fuses the match scores such that false rejection is reduced without increasing the false acceptance rate. The higher performance of the proposed algorithm is also due to the accurate iris segmentation obtained using the modified Mumford–Shah functional.

Furthermore, the performance of the proposed 2ν-SVM fusion algorithm is compared with the sum rule [33], [34], the min/max rule [33], [34], and the kernel-based fusion algorithm [35]. The performance of the proposed and existing fusion algorithms is evaluated on the ICE 2005 database by fusing the match scores obtained from the textural and topological features. Table V shows that the proposed 2ν-SVM fusion algorithm performs best with 0.74% FRR at 0.0001% FAR, which is 0.74% better than the kernel-based fusion algorithm [35] and 0.83% better than the sum rule [33]. These results,

Fig. 13. CMC plot showing the identification accuracy obtained by the proposed indexing algorithm.

thus, show that the proposed fusion algorithm effectively fuses the textural and topological features of the iris image, enhances the recognition performance, and considerably reduces the FRR.

The average time for matching two iris images is 1.56 s on a Pentium IV 3.2-GHz processor with 1-GB RAM in a C programming environment. Table VI shows the breakdown of the computational complexity in terms of the average execution time for iris segmentation, enhancement, feature extraction and matching, and 2ν-SVM fusion.

IX. PERFORMANCE EVALUATION AND VALIDATION FOR IRIS IDENTIFICATION

In this section, we present the performance of the proposed indexing algorithm for iris identification. As for verification, we use the segmentation, enhancement, feature extraction, and fusion algorithms described in Sections II–V. To validate the performance of the proposed iris indexing algorithm, we combined the three iris databases into a nonhomogeneous database with 2085 classes and 26 881 images. The experimental setup (the training, gallery, and probe data sets and the segmentation parameters) is the same as that used for iris verification. Using the training data set, we set the geometric tolerance constant to T = 20.

Fig. 13 shows the cumulative matching characteristic (CMC) plots for the proposed indexing algorithm with and without the 2ν-SVM match score fusion. The plots show that a rank 1 identification accuracy of 92.39% is achieved when the indexing algorithm is used without match


TABLE VII IRIS IDENTIFICATION PERFORMANCE WITH AND WITHOUT THE PROPOSED IRIS INDEXING ALGORITHM. ACCURACY IS REPORTED FOR RANK 1 IDENTIFICATION USING A DATABASE OF 2085 CLASSES WITH 26 881 IRIS IMAGES

score fusion. The accuracy improves to 97.21% with the 2ν-SVM match score fusion. Incorporating textural features and match score fusion thus reduces the FAR and provides an improvement of around 5% in the rank 1 identification accuracy. We also observed that 100% accuracy could not be achieved on the nonhomogeneous database because it contains occluded images with very limited information, similar to those shown in Fig. 12.

We next compared the identification performance of Daugman's iris code algorithm, used as a baseline, and the proposed 2ν-SVM match score fusion without indexing. Daugman's algorithm yields an identification accuracy of 95.89%, and the average time required for identifying an individual is 5.58 s. In contrast, the identification accuracy of the proposed 2ν-SVM match score fusion algorithm without indexing is 97.21%, but the average time for identifying an individual is 221.14 s, which is considerably higher than that of Daugman's algorithm.

To reduce the identification time, the proposed indexing algorithm described in Section VI is used. Indexing is achieved using the Euler code, which is computed from the local topological features of the iris image. The indexing algorithm identifies a small subset of the most likely candidates that will yield a match. Specifically, we analyze three scenarios. Case 1 determines a match based on the local topological features alone. Case 2 is an extension that uses the subset of images identified with the local features but performs matching with the global textural features. Case 3 is a further extension that fuses the match scores obtained from the local and global features to perform identification. The identification performance is determined by experimentally computing the accuracy and the time taken for identification.
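The two-stage scheme of case 3 can be sketched as follows: the Euler-code indexing score of (29) and (30) shortlists the top M gallery entries, which are then re-ranked by a finer match score standing in for the 2ν-SVM fused score. T = 20 and M = 20 follow the values reported in the text; fine_match is a hypothetical callable, not part of the paper.

```python
def indexing_score(probe_code, gallery_code, tolerance=20):
    """Fraction of Euler-code components within the geometric tolerance, (29)-(30)."""
    matches = [1 if abs(p - g) <= tolerance else 0
               for p, g in zip(probe_code, gallery_code)]
    return sum(matches) / len(matches)

def identify(probe_code, gallery_codes, fine_match, top_m=20, tolerance=20):
    """Two-stage identification: Euler-code shortlist, then re-rank by fused score.

    fine_match(i) returns the (expensive) fused match score for gallery index i
    and is evaluated only on the shortlist of top_m candidates.
    """
    ranked = sorted(range(len(gallery_codes)), reverse=True,
                    key=lambda i: indexing_score(probe_code, gallery_codes[i], tolerance))
    shortlist = ranked[:top_m]
    return max(shortlist, key=fine_match)  # index of the rank 1 gallery entry
```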
The results for all three cases, with indexing used together with the proposed recognition algorithm, are summarized in Table VII. In all three scenarios, the proposed algorithm considerably decreases the identification time, making it suitable for real-time applications and for use with large databases. In case 1, since only the local Euler feature is used for indexing, identification is fastest (0.043 s); however, the accuracy is lower than that of Daugman's algorithm. The accuracy improves when the global and local features are

sequentially used. Furthermore, as shown in Table VII, case 3 yields the best performance in terms of accuracy (97.21%) with an average identification time of less than 2 s.

X. CONCLUSION

In this paper, we address the challenge of improving the performance of iris verification and identification. This paper presents accurate nonideal iris segmentation using the modified Mumford–Shah functional. Depending on the type of abnormalities likely to be encountered during image capture, a set of global image enhancement algorithms is concurrently applied to the iris image. Although this enhances the low-quality regions, it also adds undesirable artifacts to the original high-quality regions of the iris image. Enhancing only selected regions of the image is extremely difficult and not pragmatic. This paper therefore describes a novel learning algorithm that selects enhanced regions from each globally enhanced image and synergistically combines them into a single composite high-quality iris image. Furthermore, we extract global textural and local topological features from the iris image, and the corresponding match scores are fused using the proposed 2ν-SVM match score fusion algorithm to further improve the performance. Iris recognition algorithms require a significant amount of time to perform identification; we have therefore proposed an iris indexing algorithm that uses local and global features to reduce the identification time without compromising the identification accuracy.

The performance is evaluated using three nonhomogeneous databases with varying characteristics, and the proposed algorithms are compared with existing algorithms. It is shown that the cumulative effect of accurate segmentation, high-quality iris enhancement, and intelligent fusion of match scores obtained using global and local features reduces the FRR for verification. Moreover, the proposed indexing algorithm significantly reduces the computational time without affecting the identification accuracy.
ACKNOWLEDGMENT

The authors would like to thank Dr. P. Flynn, CASIA (China), and U.B.I. (Portugal) for providing the iris databases


used in this paper. The authors would also like to thank the reviewers and editors for providing constructive and helpful comments.

REFERENCES

[1] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148–1161, Nov. 1993.
[2] J. G. Daugman, "The importance of being random: Statistical principles of iris recognition," Pattern Recognit., vol. 36, no. 2, pp. 279–291, Feb. 2003.
[3] J. G. Daugman, "Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 2, no. 7, pp. 1160–1169, Jul. 1985.
[4] J. G. Daugman, "Biometric personal identification system based on iris analysis," U.S. Patent 5 291 560, Mar. 1, 1994.
[5] R. P. Wildes, "Iris recognition: An emerging biometric technology," Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.
[6] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Trans. Signal Process., vol. 46, no. 4, pp. 1185–1188, Apr. 1998.
[7] Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," in Proc. IEEE Int. Conf. Pattern Recog., 2000, pp. 2801–2804.
[8] C. L. Tisse, L. Martin, L. Torres, and M. Robert, "Iris recognition system for person identification," in Proc. 2nd Int. Workshop Pattern Recog. Inf. Syst., 2002, pp. 186–199.
[9] C. L. Tisse, L. Torres, and R. Michel, "Person identification technique using human iris recognition," in Proc. 15th Int. Conf. Vis. Interface, 2002, pp. 294–299.
[10] W.-S. Chen and S.-Y. Yuan, "A novel personal biometric authentication technique using human iris based on fractal dimension features," in Proc. Int. Conf. Acoust., Speech, Signal Process., 2003, vol. 3, pp. 201–204.
[11] L. Masek and P. Kovesi, MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. Perth, Australia: School Comput. Sci. Softw. Eng., Univ. Western Australia, 2003. [Online]. Available: http://www.csse.uwa.edu.au/pk/studentprojects/libor/sourcecode.html
[12] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Personal identification based on iris texture analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 12, pp. 1519–1533, Dec. 2003.
[13] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, Jun. 2004.
[14] B. R. Meena, M. Vatsa, R. Singh, and P. Gupta, "Iris based human verification algorithms," in Proc. Int. Conf. Biometric Authentication, 2004, pp. 458–466.
[15] M. Vatsa, R. Singh, and P. Gupta, "Comparison of iris recognition algorithms," in Proc. Int. Conf. Intell. Sens. Inf. Process., 2004, pp. 354–358.
[16] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognit., vol. 38, no. 2, pp. 231–240, Feb. 2005.
[17] M. Vatsa, "Reducing false rejection rate in iris recognition by quality enhancement and information fusion," M.S. thesis, West Virginia Univ., Morgantown, WV, 2005.
[18] M. Vatsa, R. Singh, and A. Noore, "Reducing the false rejection rate of iris recognition using textural and topological features," Int. J. Signal Process., vol. 2, no. 1, pp. 66–72, 2005.
[19] L. Yu, D. Zhang, K. Wang, and W. Yang, "Coarse iris classification using box-counting to estimate fractal dimensions," Pattern Recognit., vol. 38, no. 11, pp. 1791–1798, Nov. 2005.
[20] B. Ganeshan, D. Theckedath, R. Young, and C. Chatwin, "Biometric iris recognition system using a fast and robust iris localization and alignment procedure," Opt. Lasers Eng., vol. 44, no. 1, pp. 1–24, Jan. 2006.
[21] N. D. Kalka, J. Zuo, V. Dorairaj, N. A. Schmid, and B. Cukic, "Image quality assessment for iris biometric," in Proc. SPIE Conf. Biometric Technology for Human Identification III, 2006, vol. 6202, pp. 62020D-1–62020D-11.
[22] H. Proenca and L. A. Alexandre, "Toward noncooperative iris recognition: A classification approach using multiple signatures," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 607–612, Apr. 2007.
[23] J. Thornton, M. Savvides, and B. V. K. Vijaya Kumar, "A Bayesian approach to deformed pattern matching of iris images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 596–606, Apr. 2007.
[24] D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 586–596, Apr. 2007.
[25] A. Poursaberi and B. N. Araabi, "Iris recognition for partially occluded images: Methodology and sensitivity analysis," EURASIP J. Adv. Signal Process., vol. 2007, no. 1, p. 20, Jan. 2007, Article ID 36751.
[26] H. Proenca and L. A. Alexandre, "UBIRIS: A noisy iris image database," in Proc. 13th Int. Conf. Image Anal. Process., 2005, vol. 1, pp. 970–977.
[27] UBIRIS database. [Online]. Available: http://iris.di.ubi.pt/
[28] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image understanding for iris biometrics: A survey," Comput. Vis. Image Underst., 2008, DOI: 10.1016/j.cviu.2007.08.005, to be published.
[29] R. Singh, M. Vatsa, and A. Noore, "Improving verification accuracy by synthesis of locally enhanced biometric images and deformable model," Signal Process., vol. 87, no. 11, pp. 2746–2764, Nov. 2007.
[30] X. Liu, K. W. Bowyer, and P. J. Flynn, "Experiments with an improved iris segmentation algorithm," in Proc. 4th IEEE Workshop Autom. Identification Adv. Technol., 2005, pp. 118–123.
[31] Iris Challenge Evaluation. [Online]. Available: http://iris.nist.gov/ice/ICE_Home.htm
[32] J. Daugman, "New methods in iris recognition," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 5, pp. 1168–1176, Oct. 2007.
[33] J. Kittler, M. Hatef, R. P. Duin, and J. G. Matas, "On combining classifiers," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 3, pp. 226–239, Mar. 1998.
[34] A. Ross and A. K. Jain, "Information fusion in biometrics," Pattern Recognit. Lett., vol. 24, no. 13, pp. 2115–2125, Sep. 2003.
[35] J. F. Aguilar, J. O. Garcia, J. G. Rodriguez, and J. Bigun, "Kernel-based multimodal biometric verification using quality signals," in Proc. SPIE Biometric Technology for Human Identification, 2004, vol. 5404, p. 544.
[36] B. Duc, G. Maitre, S. Fischer, and J. Bigun, "Person authentication by fusing face and speech information," in Proc. 1st Int. Conf. Audio Video Based Biometric Person Authentication, 1997, pp. 311–318.
[37] A. Ross and S. Shah, "Segmenting non-ideal irises using geodesic active contours," in Proc. Biometric Consortium Conf., 2006, pp. 1–6.
[38] E. M. Arvacheh and H. R. Tizhoosh, "Iris segmentation: Detecting pupil, limbus and eyelids," in Proc. IEEE Int. Conf. Image Process., 2006, pp. 2453–2456.
[39] X. Liu, "Optimizations in iris recognition," Ph.D. dissertation, Univ. Notre Dame, Notre Dame, IN, 2006.
[40] A. Tsai, A. Yezzi, Jr., and A. Willsky, "Curve evolution implementation of the Mumford–Shah functional for image segmentation, denoising, interpolation, and magnification," IEEE Trans. Image Process., vol. 10, no. 8, pp. 1169–1186, Aug. 2001.
[41] T. Chan and L. Vese, "Active contours without edges," IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.
[42] R. Malladi, J. Sethian, and B. Vemuri, "Shape modeling with front propagation: A level set approach," IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 2, pp. 158–175, Feb. 1995.
[43] J. Z. Huang, L. Ma, T. N. Tan, and Y. H. Wang, "Learning-based enhancement model of iris," in Proc. Brit. Mach. Vis. Conf., 2003, pp. 153–162.
[44] V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. New York: Springer-Verlag, 1999.
[45] P.-H. Chen, C.-J. Lin, and B. Schölkopf, "A tutorial on ν-support vector machines," Appl. Stoch. Models Bus. Ind., vol. 21, no. 2, pp. 111–136, Mar./Apr. 2005.
[46] C.-C. Chang and C.-J. Lin, LIBSVM: A Library for Support Vector Machines, 2000. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm
[47] R. Singh, M. Vatsa, and A. Noore, "SVM based adaptive biometric image enhancement using quality assessment," in Speech, Audio, Image and Biomedical Signal Processing Using Neural Networks, B. Prasad and S. R. M. Prasanna, Eds. New York: Springer-Verlag, 2008, ch. 16, pp. 351–372.
[48] D. J. Field, "Relations between the statistics of natural images and the response properties of cortical cells," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 4, no. 12, pp. 2379–2394, Dec. 1987.
[49] A. Bishnu, B. B. Bhattacharya, M. K. Kundu, C. A. Murthy, and T. Acharya, "Euler vector for search and retrieval of gray-tone images," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, no. 4, pp. 801–812, Aug. 2005.
[50] C. Palm and T. M. Lehmann, "Classification of color textures by Gabor filtering," Mach. Graph. Vis., vol. 11, no. 2/3, pp. 195–219, 2002.
[51] D. J. Field, "What is the goal of sensory coding?" Neural Comput., vol. 6, no. 4, pp. 559–601, Jul. 1994.


[52] Y. Wang, T. Tan, and A. K. Jain, "Combining face and iris biometrics for identity verification," in Proc. 4th Int. Conf. Audio Video Based Biometric Person Authentication, 2003, pp. 805–813.
[53] H. G. Chew, C. C. Lim, and R. E. Bogner, "An implementation of training dual-ν support vector machines," in Optimization and Control with Applications, L. Qi, K. L. Teo, and X. Yang, Eds. Norwell, MA: Kluwer, 2005.
[54] R. Mukherjee, "Indexing techniques for fingerprint and iris databases," M.S. thesis, West Virginia Univ., Morgantown, WV, 2007.
[55] [Online]. Available: http://www.cbsr.ia.ac.cn/IrisDatabase/irisdatabase.php

Mayank Vatsa (S’04) received the M.S. degree in computer science in 2005 from West Virginia University, Morgantown, where he is currently working toward the Ph.D. degree in computer science. From July 2002 to July 2004, he was actively involved in the development of a multimodal biometric system, which included face, fingerprint, signature, and iris recognition, at the Indian Institute of Technology, Kanpur, India. He has more than 65 publications in refereed journals, book chapters, and conferences. His current areas of interest include pattern recognition, image processing, uncertainty principles, biometrics, watermarking, and information fusion. Mr. Vatsa is a member of the IEEE Computer Society and ACM. He is also a member of the Phi Kappa Phi, Tau Beta Pi, Sigma Xi, Upsilon Pi Epsilon, and Eta Kappa Nu honor societies. He was the recipient of four best paper awards.


Richa Singh (S’04) received the M.S. degree in computer science in 2005 from West Virginia University, Morgantown, where she is currently working toward the Ph.D. degree in computer science. From July 2002 to July 2004, she was actively involved in the development of a multimodal biometric system, which included face, fingerprint, signature, and iris recognition, at the Indian Institute of Technology, Kanpur, India. Her current areas of interest include pattern recognition, image processing, machine learning, granular computing, biometrics, and data fusion. She has more than 65 publications in refereed journals, book chapters, and conferences. Ms. Singh is a member of the IEEE Computer Society and ACM. She is also a member of the Phi Kappa Phi, Tau Beta Pi, Upsilon Pi Epsilon, and Eta Kappa Nu honor societies. She was the recipient of four best paper awards.

Afzel Noore (M’03) received the Ph.D. degree in electrical engineering from West Virginia University, Morgantown. He was a Digital Design Engineer with Philips India. From 1996 to 2003, he was the Associate Dean for Academic Affairs and Special Assistant to the Dean with the College of Engineering and Mineral Resources, West Virginia University, where he is currently a Professor with the Lane Department of Computer Science and Electrical Engineering. His research has been funded by NASA, NSF, Westinghouse, GE, the Electric Power Research Institute, the U.S. Department of Energy, and the U.S. Department of Justice. He serves on the editorial boards of Recent Patents on Engineering and the Open Nanoscience Journal. He has over 90 publications in refereed journals, book chapters, and conferences. His research interests include computational intelligence, biometrics, software reliability modeling, machine learning, hardware description languages, and quantum computing. Dr. Noore is a member of the Phi Kappa Phi, Sigma Xi, Eta Kappa Nu, and Tau Beta Pi honor societies. He was the recipient of four best paper awards.