J Ambient Intell Human Comput (2011) 2:153–162 DOI 10.1007/s12652-010-0035-x

ORIGINAL RESEARCH

Iris segmentation using pupil location, linearization, and limbus boundary reconstruction in ambient intelligent environments

Maria De Marsico · Michele Nappi · Daniel Riccio · Harry Wechsler



Received: 18 January 2010 / Accepted: 27 September 2010 / Published online: 13 November 2010
© Springer-Verlag 2010

Abstract Advances in sensors and technology, on one side, and recognition techniques, on the other, make the iris a top candidate for biometric use. Iris detection and segmentation, however, are still lacking. We propose here a novel iris segmentation technique using pupil location, linearization, and limbus boundary reconstruction, and show its feasibility and comparative advantages against existing methods.

Keywords Ambient intelligence · Biometrics · Circular pattern detection · Iris · Limbus boundary · Pupil · Iris segmentation

M. De Marsico
Sapienza Università di Roma, Via Salaria 113, 00198 Rome, Italy
e-mail: [email protected]

M. Nappi (✉) · D. Riccio
Università di Salerno, Via Ponte don Melillo, 84084 Fisciano (SA), Italy
e-mail: [email protected]

D. Riccio
e-mail: [email protected]

H. Wechsler
Department of Computer Science, George Mason University, Fairfax, VA 22030, USA
e-mail: [email protected]

1 Introduction

Ambient intelligence (AmI) is a recent methodology for the transparent integration of devices, processes, protocols, and services. Biometric systems, in general, and iris authentication, in particular, are one application area where AmI

holds much promise. The iris biometric modality does not require direct contact for acquisition and subsequent enrollment, it is extremely discriminative, and it is time invariant. Practical concerns about using the iris mainly focus on effective and robust data capture and target ("iris") segmentation. The iris presents a very limited surface of only about 3.64 cm2, while its acquisition places stringent constraints on the viewing distance (less than one meter for most current sensing devices) to guarantee sufficient resolution for adequate processing. Such short distances make the use of the iris for biometrics obtrusive and possibly disturbing to potential users. However, smoothly guided acquisition procedures, which can be implemented in AmI settings, could help deploy iris biometric technology in a timely and anxiety-free setting, with sound or light signals notifying the user that the sensing device is at the right distance and ready to acquire the biometric. Furthermore, high-resolution sensing devices are becoming available at decreasing costs, making the combined use of AmI and biometrics a tantalizing possibility.

Significant research on iris authentication can be traced first to Daugman (2004). His approach relies on an integro-differential kernel to locate the iris region, on 2D Gabor filters to extract relevant features, and on the Hamming distance for matching and subsequent authentication. Wildes (1997) employs edge detection, with the Hough transform then used to detect circular regions. Approaches related to Wildes improve iris segmentation in terms of computational complexity (Li and Liu 2008) by using ellipse fitting algorithms (Yahya and Nordin 2008) or linear filters (Nguyen and Hakil 2008) instead of the Hough transform. Such methods are hampered by the fact that the eccentricity of the sought-after ellipses and/or the actual pupil radius are not known in advance. Noise or spurious artifacts in the vicinity of the iris may further hinder its segmentation.

2 Background

Most existing techniques for iris localization and segmentation apply a circular model to locate the pupil and limbus. This is intuitive and straightforward given the shape of the iris boundary. The use of the Hough transform (HT) (Wildes 1997) to locate circular shapes is, however, hampered by its high computational cost. Furthermore, HT may fail when non-perfectly circular patterns are encountered. Better and more accurate edge fitting and subsequent processing, including active contours (Daugman 2007), try to overcome such limitations. An alternative approach (He et al. 2009) exploits a Pulling and Pushing method, which consists of a set of edge detectors augmented by circle-fitting iterations, to identify edge elements while searching for and following boundaries. The geometry of iris boundaries can be assumed for all practical purposes to be circular, and this suggests the use of polar coordinates.

We introduce here a novel, hybrid iris segmentation method, and in addition we propose an accuracy index to estimate the quality of iris segmentation. Much of the research dealing with iris quality starts from the assumption that the iris has been correctly located (Belcher and Du 2007). Therefore, most attention is focused on evaluating the quality of the iris patterns and their influence on authentication performance when the iris is the biometric of choice. We focus instead on a different problem: estimating the accuracy of the localization of the iris region. The notion of quality in this paper refers to this aspect, i.e. how much the region labeled as iris coincides with the actual iris in the image. In other words, reference is made here to the accuracy of the boundaries located between (a) pupil and iris, and (b) iris and sclera, including their influence on biometric authentication performance. The latter concern has so far been neglected.
Therefore, an accuracy index is proposed here with the double aim of (1) forcing further refinements in case of poor quality, and (2) providing a confidence value that helps to estimate/predict the reliability of the biometric classifier. An interesting work along those lines (Zuo and Schmid 2008) links the quality of iris location and segmentation to three different factors: (a) the size of the pupil, (b) the gradient along pupil and limbus contours, and (c) the ratio among the grey-level intensities of pupil, iris and sclera. Using empirical thresholds, each of the above three factors helps to decide whether the pupil has been correctly located or not. A similar example (Fernandez-Saavedra et al. 2007) decides upon the quality of iris segmentation using a number of tests aiming
at evaluating (a) if the iris is completely contained within the image, (b) pupil dilation/contraction, (c) pupil and iris concentricity, (d) pupil and iris overlap, (e) iris and sclera overlap, and (f) possible occlusions. The limit of both approaches is that the returned 0/1 response does not provide any quantitative information about the reliability of the extracted iris.

The work in (Pan and Xie 2007) addresses the reliability aspect. The original image is analyzed using wavelets (multi-resolution decomposition). This makes it possible to identify the presence of eyelids and eyelashes from the details found at higher frequencies. However, the analysis of the Fourier spectrum and of the wavelet transform, used for both defocus calculation and occlusion detection, makes sense only when dealing with images of canonical and well-located irises. If the image also includes eyebrows or similar structures, the presence of further details can lead to a completely wrong evaluation.

2.1 Refinement and assessment of iris location

Our proposed iris segmentation algorithm for identification systems (ISIS) detects circular objects within the image using a circle detection procedure introduced by Taubin (1991). This procedure is based on moments and on Newton's method, and it is precise and fast. The difference between Taubin's approach and the ellipse fitting used by other methods is a relevant aspect to consider. In practice, given a set of points on a plane, the former identifies the circle that best approximates such points, while the latter would usually identify an ellipse. However, the presence of noise, e.g., spurious branches caused by the Canny filter, may cause the erroneous detection of a markedly elliptical shape even where the expected result would rather be a circular object. Other existing methods rely on the Hough transform, so they perform the circle search by setting parameters according to anticipated pupil and iris sizes.
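As an illustration, Taubin's moment-based circle fit can be sketched as follows. This is the standard formulation (moments of the centred data plus Newton iterations on the characteristic polynomial); variable names and the convergence test are our choices, not taken from the paper:

```python
import numpy as np

def taubin_circle_fit(pts):
    """Moment-based algebraic circle fit (Taubin 1991), solved with
    Newton's method on the characteristic polynomial.
    pts: (N, 2) array of edge points; returns (cx, cy, radius)."""
    x, y = pts[:, 0].astype(float), pts[:, 1].astype(float)
    xm, ym = x.mean(), y.mean()
    u, v = x - xm, y - ym                       # centre the data for stability
    z = u * u + v * v
    Muu, Mvv, Muv = (u * u).mean(), (v * v).mean(), (u * v).mean()
    Muz, Mvz, Mzz = (u * z).mean(), (v * z).mean(), (z * z).mean()
    Mz = Muu + Mvv
    cov = Muu * Mvv - Muv * Muv
    var_z = Mzz - Mz * Mz
    # Coefficients of the characteristic polynomial P(eta)
    a3, a2 = 4.0 * Mz, -3.0 * Mz * Mz - Mzz
    a1 = var_z * Mz + 4.0 * cov * Mz - Muz * Muz - Mvz * Mvz
    a0 = (Muz * (Muz * Mvv - Mvz * Muv)
          + Mvz * (Mvz * Muu - Muz * Muv) - var_z * cov)
    eta = 0.0                                   # Newton iterations from 0
    for _ in range(20):
        p = a0 + eta * (a1 + eta * (a2 + eta * a3))
        dp = a1 + eta * (2.0 * a2 + eta * 3.0 * a3)
        step = p / dp
        eta -= step
        if abs(step) <= 1e-12 * (1.0 + abs(eta)):
            break
    det = eta * eta - eta * Mz + cov
    cx = (Muz * (Mvv - eta) - Mvz * Muv) / (2.0 * det)
    cy = (Mvz * (Muu - eta) - Muz * Muv) / (2.0 * det)
    return cx + xm, cy + ym, np.sqrt(cx * cx + cy * cy + Mz)
```

For noise-free points the smallest root is 0, so the fit is exact; for noisy edge chains the Newton step moves only slightly away from 0, which is why a single fit is both precise and fast compared to a Hough accumulator.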
Since ISIS does not use the Hough transform, and thus does not constrain the search to anticipated sizes, additional (and thus erroneous) circles are found while searching for the pupil. In order to perform a suitable selection among them, the two ranking criteria below are proposed to identify the best candidate for the pupil circle.

2.1.1 Homogeneity

The actual pupil should correspond to a circular region with a homogeneous pixel distribution. A number of approaches rely on the assumption that it is the darkest region inside the image. However, this is not always true, as demonstrated by many images coming from different databases, especially when captured in poorly controlled conditions. In such cases, the pupil is often affected by light reflections which alter its appearance. We devised a scoring function based on the histogram of the candidate pupil, which does not take into account the grey levels themselves, but rather performs a quantitative evaluation. Towards that end, each circle receives a score according to the degree of homogeneity of the pixels it contains. Assuming H is the histogram of the region inside the analyzed circle, the score is equal to the maximum number of occurrences of the same value, normalized with respect to the whole histogram:

$$s_H = \max_i H(i) \Big/ \sum_{i=0}^{255} H(i). \qquad (1)$$
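A direct reading of (1), sketched in Python over 8-bit grey levels (the function name and the numpy histogram call are our choices):

```python
import numpy as np

def homogeneity_score(region_pixels):
    """Eq. (1): ratio between the most populated histogram bin and the
    total pixel count of the candidate pupil region (8-bit grey levels)."""
    H, _ = np.histogram(region_pixels, bins=256, range=(0, 256))
    return H.max() / H.sum()
```

A perfectly uniform region scores 1, while a region spread evenly over two grey levels scores 0.5, so reflections inside a candidate pupil lower its rank.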

2.1.2 Separability

Like the limbus, the pupil contour also represents a boundary region with a quite pronounced transition from a darker to a lighter zone; particularly dark irises are an exception, in which such a transition step is not very evident. Given a candidate circle C, with centre c = (c_x, c_y) and radius ρ in image I, the Cartesian coordinates are given by x_C(ρ, θ) = c_x + ρ cos(θ) and y_C(ρ, θ) = c_y + ρ sin(θ), θ ∈ [0, 2π]. One then considers the circle C_IN with radius ρ1 = 0.9ρ internal to C, the circle C_EX with radius ρ2 = 1.1ρ external to C, and measures the difference between the grey levels of corresponding pixels on the two circles for angles θ_i:

$$D(i) = I\big(x_C(\rho_2, \theta_i), y_C(\rho_2, \theta_i)\big) - I\big(x_C(\rho_1, \theta_i), y_C(\rho_1, \theta_i)\big) \qquad (2)$$

with i = 1, 2, …, 360 standing for the discrete angles along the circle, while θ_i = iπ/180 represents the same angle expressed in radians. For the pupil, one expects a high and nearly constant value for D, which implies a high mean D̄ and a low deviation σ(D); therefore, one defines the separability index as:

$$s_D = \frac{\bar{D}}{\sigma(D) + 1}. \qquad (3)$$
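A sketch of the separability measure (2)-(3), assuming a 2-D grey-level array and nearest-pixel sampling on the two circles (these sampling details are our implementation choices):

```python
import numpy as np

def separability_score(img, cx, cy, rho, n_angles=360):
    """Eq. (2)-(3): contrast between two circles at 1.1*rho (outside)
    and 0.9*rho (inside) the candidate contour, penalized by its spread.
    img: 2-D grey-level array; (cx, cy): candidate circle centre."""
    theta = np.arange(1, n_angles + 1) * np.pi / 180.0

    def sample(r):
        # Nearest-pixel sampling, clipped to the image bounds
        xs = np.clip(np.rint(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.rint(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        return img[ys, xs].astype(float)

    D = sample(1.1 * rho) - sample(0.9 * rho)   # Eq. (2)
    return D.mean() / (D.std() + 1.0)           # Eq. (3)
```

A circle placed exactly on the pupil boundary sees a large, nearly constant positive D (high score); a circle lying entirely inside a uniform region sees D close to zero.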

A number of methods (Daugman 2004; Pan and Xie 2007) locate the pupil first and then analyze the image along radial directions from its centre in order to find the iris contour represented by the limbus region. In our work, we adopt a different approach. We rely on the observation that segmenting an eye image by searching for linear structures produces more accurate results, and we achieve this through techniques which are less computationally intensive. As noted earlier, many methods approximate the contour region of the iris by a circle. There is, however, detailed evidence (Basit et al. 2008) showing that such an assumption is not always warranted. If one analyzes the polar image of an eye (see for example the image in Sect. 3.2, Fig. 3, top), in particular along the vertical direction after projecting the image into polar space, it is possible to identify the boundary between iris and sclera, i.e., the limbus, in an extremely precise way. In the approach we propose here, the limbus is represented by such a boundary F, which identifies the region of interest in a more precise fashion, though not necessarily modeled as a circle.

3 Iris segmentation

The proposed new iris segmentation algorithm consists of four main stages: (a) pre-processing, (b) pupil location, (c) linearization, and (d) limbus boundary location.

3.1 Pre-processing and pupil location

The input image I contains information that is superfluous, if not outright misleading, for locating the pupil. Details in sclera vessels, skin pores, and/or eyelashes are complex patterns that can negatively interfere with the edge detection operations aimed at identifying iris boundaries. The first stage of the iris segmentation algorithm eliminates such interference through a sharpening filter F_E (Enhance). A square window W of size k × k (here k = 17 pixels) slides over the whole image, pixel by pixel; each time, a histogram h_W is computed for the image region in window W, and the value with the maximum occurrence is substituted in the central window position. Fig. 1a shows the image I of an eye, while Fig. 1b shows the enhanced image I_E. The Canny edge detection filter is applied to I_E using ten different thresholds th = 0.05, 0.10, 0.15, …, 0.55 (Fig. 2). For each of the ten resulting images, the connected components are identified, and those containing a number of pixels greater than a given threshold Th_C are included in a unique list L (here Th_C = 150). Taubin's algorithm (Taubin 1991) is applied to each connected component in L to compute the corresponding circle. Components whose circles are not completely contained inside the image are pruned from L, so that a final list L_C is obtained.
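The window-mode substitution of F_E admits a direct, if naive, O(N·k²) implementation; a sketch assuming an 8-bit single-channel image (borders handled by clipping the window, a detail the paper does not specify):

```python
import numpy as np

def enhance_mode_filter(img, k=17):
    """F_E: replace each pixel with the most frequent grey value inside
    the k x k window centred on it (window clipped at the borders)."""
    h, w = img.shape
    r = k // 2
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            # Histogram mode of the window replaces the central pixel
            out[y, x] = np.bincount(win.ravel(), minlength=256).argmax()
    return out
```

Isolated fine structures (vessel fragments, eyelash pixels, pores) vanish because the window mode ignores rare values, while large homogeneous regions such as the pupil are preserved.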

Fig. 1 The image of an eye (a) and the corresponding enhanced image (b)


Fig. 2 Canny edge detection filter applied on enhanced images using different thresholds

In order to identify the pupil, each circle in L_C undergoes a voting procedure, according to the homogeneity and separability criteria specified earlier, with the final score s computed as:

$$s = s_H + s_D. \qquad (4)$$

The highest score s_max identifies the circular shape that best approximates the pupil.

3.2 Linearization and limbus location

One proceeds to seek the pixel in the image I located at the greatest distance ρ_max from the centre of the pupil circle. The image I is then transformed from Cartesian to polar coordinates to yield a new image İ (Fig. 3). This transformation aims at identifying the boundary line between iris and sclera. İ is further processed by a median filter: given a row R of image İ, for each pixel P in R we consider a neighbourhood of 2q + 1 pixels: the pixel itself, the q preceding pixels, and the q subsequent pixels (q = 4). The median value substitutes for the raw value of pixel P. This operation on İ amounts to flattening the level of detail along all the concentric circles included in I and centred at the pupil. For each column, ranging over ρ_j and corresponding to a position θ_i on the horizontal axis of İ, one computes (pixel by pixel) the following weighted difference:

$$D(\rho_j, \theta_i) = \phi(\dot{I}, \rho_j, \theta_i)\,\big|\dot{I}(\rho_j + \delta, \theta_i) - \dot{I}(\rho_j - \delta, \theta_i)\big| \qquad (5)$$

where

$$\phi(\dot{I}, \rho_j, \theta_i) = \begin{cases} 1 & \text{if } \dot{I}(\rho_j + \delta, \theta_i) - \dot{I}(\rho_j - \delta, \theta_i) > 0 \text{ and } \min\big(\dot{I}(\rho_j - \delta, \theta_i), \dot{I}(\rho_j + \delta, \theta_i)\big) > \varepsilon_G \\ 0 & \text{otherwise.} \end{cases} \qquad (6)$$

Fig. 3 Image in the polar space İ (top); boundary F for an iris image (bottom left); circles which approximate pupil and iris (bottom right)

Regarding (6), one readily notices that the pupil occupies the lowest ρ_j coordinates of İ, followed by iris and sclera. The sign of the difference is therefore important, since one expects the sclera to be lighter than the iris. This suggests looking for the most significant variation with positive sign, which represents the region of transition from iris to sclera. The first inequality in (6) imposes a positive gradient, while the second rules out possible borderline pixels between pupil and iris by requiring that the darker pixel in the pair has a grey level greater than a threshold ε_G ∈ [0, 255] (here ε_G = 50). The returned limbus boundary F is composed of the points maximizing (5) for each column θ_i in İ; their number is determined by the horizontal resolution of İ. Notice from the top image in Fig. 3 that the right and left peaks of the image might be dominated by eyelids and, in other cases, by the presence of very thick and dark eyelashes. This may distort the boundary F. Points in F belong to a polar space, so their ρ component is approximately constant, while θ varies tracing a circle. A natural smoothness criterion is used to discount outliers. Towards that end, one considers the median value ρ_med over F and employs (7) to compute a relative error; points in F whose ρ_i yields a relative error above a threshold ε are discarded (here ε = 0.4):

$$err = \frac{|\rho_i - \rho_{med}|}{\max_i |\rho_i - \rho_{med}|}. \qquad (7)$$
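The per-column maximization of (5)-(6) and the outlier rejection of (7) can be sketched as follows, assuming the polar image İ is stored with rows indexed by ρ and columns by θ (the array layout, δ value, and NaN marker for rejected columns are our choices):

```python
import numpy as np

def limbus_boundary(polar, delta=2, eps_g=50, eps_out=0.4):
    """Locate the limbus boundary F in the polar image (rows = rho,
    cols = theta). Per column, the boundary maximizes the masked
    difference of Eq. (5)-(6); outliers are discarded via Eq. (7).
    Returns the rho of the boundary per column (NaN where rejected)."""
    n_rho, n_theta = polar.shape
    I = polar.astype(float)
    rho = np.full(n_theta, np.nan)
    for i in range(n_theta):
        col = I[:, i]
        d = np.zeros(n_rho)
        for j in range(delta, n_rho - delta):
            hi, lo = col[j + delta], col[j - delta]
            # Eq. (6): positive gradient, darker pixel above eps_g
            if hi - lo > 0 and min(hi, lo) > eps_g:
                d[j] = hi - lo                    # Eq. (5)
        if d.max() > 0:
            rho[i] = d.argmax()
    # Eq. (7): reject boundary points far from the median rho
    good = ~np.isnan(rho)
    med = np.median(rho[good])
    dev = np.abs(rho[good] - med)
    if dev.max() > 0:
        idx = np.where(good)[0]
        rho[idx[dev / dev.max() > eps_out]] = np.nan
    return rho
```

The ε_G mask is what keeps the much stronger pupil-iris transition from hijacking the per-column maximum: pixels on the pupil side are too dark to pass the `min(hi, lo) > eps_g` test.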

One can expand on the computation carried out over F to also retrieve a circle which approximates the iris well. The centre and the radius of such a circle C_I are computed by applying Taubin's algorithm to the points of F. C_I is always computed here, and sometimes employed as described below. The noticeable aspect is that one can now find the circle C_I in a relatively straightforward way.

3.3 Special cases

When the image is blurred, or the iris quality is poor, the pupil location process might identify the whole iris instead of the pupil within it. This might appear as a drawback of the approach in terms of robustness. It becomes, however, a strong point of the algorithm, since the pupil is frequently difficult to locate when there is no information to start working with, and our experiments confirm that it is almost always possible to detect when this happens. When the iris is detected instead of the pupil, the proposed approach proceeds exactly as in the case of correct pupil location. Correct location implies that the radius of the circle approximating the pupil is lower than 3/4 of the radius of the circle detected afterwards for the iris; this condition prevents the whole iris from being detected as the pupil. The circumference that correctly identifies the true pupil is contained in the one identified for the iris and, with high probability, has already been included in the list L_C of circumferences built during pupil location. Starting from L_C, one builds a new list L_P by eliminating all the circumferences that do not satisfy the following three properties:

$$\rho_I - (d_C + \rho_P) > 0, \qquad d_C - \rho_P/2 < 0, \qquad \rho_P - \rho_I/2.7 < 0 \qquad (8)$$

where, for each circumference C_P in L_C which is a potential pupil (and therefore included in L_P), the values d_C, ρ_P and ρ_I represent the distance between the centre of C_P and the centre of the circumference C_I corresponding to the iris, the radius of C_P, and the radius of C_I, respectively. These conditions generally hold also for an off-axis gaze. We notice that the pupil and iris are rarely, if ever, perfectly concentric. However, the properties in (8) generally hold. First, the radius of the iris must be greater than that of the candidate pupil augmented by the distance between the centres of the two circles: the pupil must lie completely inside the iris. In addition, the distance between the two centres must be lower than half the radius of the candidate pupil: the centre of the iris cannot lie outside the pupil, and the pupil cannot be much off-centre. Finally, according to our empirical studies, the radius of the candidate pupil must be lower than 1/2.7 of the radius of the iris, even considering dilations due to light. Note that the list L_P is a subset of L_C. One employs again the values already computed using (4) during the previous scoring step, and again the value that yields the maximum total score s_max corresponds to the shape that best approximates the pupil.
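The three properties in (8) translate directly into a predicate over the candidate pupil geometry (the function name is ours):

```python
def plausible_pupil(d_c, rho_p, rho_i):
    """Properties (8): accept a candidate pupil circle only if it is
    geometrically consistent with the iris circle C_I.
    d_c: distance between the two centres; rho_p, rho_i: the radii."""
    return (rho_i - (d_c + rho_p) > 0        # pupil entirely inside the iris
            and d_c - rho_p / 2.0 < 0        # centres close together
            and rho_p - rho_i / 2.7 < 0)     # pupil radius below rho_i / 2.7
```

For example, a candidate of radius 10 whose centre lies 2 pixels from the centre of an iris of radius 40 passes all three tests, while a candidate of radius 20 fails the 1/2.7 bound even when perfectly concentric.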

4 Evaluation of segmentation accuracy

Most existing quality indexes aim at quantitatively and qualitatively evaluating the information provided by the iris after its location is found, i.e. the extent to which the iris features can be used for accurate recognition. The information content can be measured using the Fourier transform, the wavelet transform, or the entropy. The main goal of the index defined herein is rather to provide an estimate of the quality of iris localization, i.e. the correctness of the identified boundaries, in terms of the accuracy obtained for the iris radius, the pupil radius and its centre. In practice, the accuracy index is computed as the weighted sum of three different components: pupil separability s_D(pupil), iris separability s_D(iris), and Gaussian distribution of grey levels (Gdist). The separability for both iris and pupil is computed according to the formulas introduced earlier. However, to have the separability indexes for pupil and iris in the range [0, 1], they are now defined as:

$$S_P = 1 - \frac{1}{s_D(pupil) + 1}, \qquad S_I = 1 - \frac{1}{s_D(iris) + 1}. \qquad (9)$$

As for the third component of the accuracy index, experiments indicate that the grey levels of a correctly located iris tend to show a Gaussian distribution. Figure 4 shows that for a correctly located iris (second column) the histogram h has a distribution closer to a Gaussian, compared to an iris that was less precisely located (first column). Therefore, for histogram approximation one estimates a Gaussian curve:

$$f(x) = e^{-\frac{(x-\mu)^2}{2\sigma^2}}. \qquad (10)$$

The true histogram h is represented by a vector of 256 elements (grey levels), with the bin values normalized to the range [0, 1]. The parameter μ is approximated by the abscissa x_max ∈ [0, 255] corresponding to the maximum value in h. Given the variance σ², the following set is defined:

$$E = \left\{ i \;\Big|\; \left| e^{-\frac{(x(i)-\mu)^2}{2\sigma^2}} - h(i) \right| > \varepsilon_h,\; i = 0, \ldots, |h| - 1 \right\}. \qquad (11)$$

E is the set of all those elements in h for which the approximation error is above a threshold ε_h (in our case ε_h = 0.1). The parameter σ in (10) is estimated so as to maximize Gdist = 1 − |E|/|h|,


Fig. 4 Two examples of iris location (first row), the respective histograms of the iris region (second row), and the histogram approximations (third row)

$$\sigma_{max} = \arg\max_{\sigma} \big(1 - |E|/|h|\big) \qquad (12)$$

where |E| represents the cardinality of the set E and |h| is the length of the vector h (here 256). The plot in the third row of Fig. 4 shows an example of a curve estimated using this approach. By definition, Gdist is a value in the range [0, 1]. The accuracy index φ can finally be defined as the weighted sum:

$$\phi = \alpha \cdot S_P + \beta \cdot S_I + \gamma \cdot Gdist \qquad (13)$$

with α + β + γ = 1 (in this work α = β = γ = 1/3).
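Equations (9)-(13) can be sketched as follows. The paper does not specify how (12) is maximized, so we use a plain grid search over integer σ values, and we read "normalized to [0, 1]" as division of h by its maximum bin (both are our assumptions):

```python
import numpy as np

def g_dist(iris_pixels, eps_h=0.1):
    """Eq. (10)-(12): score how closely the normalized 256-bin histogram
    of the iris region follows a Gaussian centred on its peak.
    Sigma is maximized via a simple grid search (our choice)."""
    h, _ = np.histogram(iris_pixels, bins=256, range=(0, 256))
    h = h / h.max()                      # bin values normalized to [0, 1]
    mu = h.argmax()                      # mu ~ abscissa of the histogram peak
    x = np.arange(256)
    best = 0.0
    for sigma in range(1, 129):          # candidate sigma values
        f = np.exp(-((x - mu) ** 2) / (2.0 * sigma * sigma))
        bad = np.count_nonzero(np.abs(f - h) > eps_h)   # |E|, Eq. (11)
        best = max(best, 1.0 - bad / 256.0)             # Eq. (12)
    return best

def accuracy_index(s_d_pupil, s_d_iris, gd, a=1/3, b=1/3, c=1/3):
    """Eq. (9) and (13): phi = a*S_P + b*S_I + c*Gdist, each term in [0, 1]."""
    S_P = 1.0 - 1.0 / (s_d_pupil + 1.0)
    S_I = 1.0 - 1.0 / (s_d_iris + 1.0)
    return a * S_P + b * S_I + c * gd
```

High separability on both boundaries and a near-Gaussian grey-level distribution push φ towards 1; a mislocated boundary lowers the corresponding term and, with it, the predicted reliability.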

5 Experimental results

The database used for our experiments was UBIRIS.v2 (Proença et al. 2010). Due to its recency, there are few references to UBIRIS.v2, and comparisons with other databases are not always possible. The reason for choosing UBIRIS.v2 is that its images were captured under non-controlled conditions (at a distance, on the move, at visible wavelengths, and with realistic noise) when compared to UBIRIS.v1 or, even more so, to CASIA-IrisV3. This is important to better simulate real AmI settings.

5.1 Reflection removal

Processing of iris images tends to be negatively influenced by the presence of specular reflections. This problem is addressed, for example, in He et al. (2009). In most cases, specular reflections appear as the brightest points in the iris image I(x,y). In the cited work, the authors use a bilinear interpolation method to substitute for reflections, after using an adaptive threshold T_ref to compute a binary "reflection" map R(x,y) from I(x,y). We compared He's approach with the interpolation function provided by the OpenCV library, and with another method detailed next. After computing the reflection map, we slide a square window along the border of a white zone (a reflection), and for each white pixel we substitute, in the original image, the mean value of the pixels in the window. The process is iterated until all the white pixels are filled. The experimental results section refers to this method as Frontier_M. In an alternative implementation, the mean of a random subset of values within the window substitutes for the white pixel. This method is denoted as Frontier_R.

5.2 Experimental results on segmentation

In this section, we first compare the quality of iris segmentation under different algorithms for the removal of specular reflections. To benchmark the results we rely on a semi-manual procedure we implemented in order to obtain the most "objective" and reliable segmentation. Reference contours are not sketched in a completely manual way, to avoid the unevenness and uncertainty typical of manual tracing. Rather, some points are set on the contour to be sketched, and the system automatically computes an approximating circle. The procedure can be repeated until the operator is satisfied with the result. After this first comparison, we report the results obtained by checking the compliance of the iris and pupil boundaries with the properties described in Sect. 3.3, with and without histogram equalization.

In all the tables the columns have a similar meaning. The first column lists the different strategies for the removal of specular reflections. The second column reports the mean distance between the centre of the pupil in the semi-manual segmentation and the one located by the automatic procedure, after the reflection-removal step in the corresponding row; the mean distance is normalized with respect to the image dimensions and reported as a percentage. The following columns report similar data for the pupil radius, the iris distance, and the iris radius. The last three columns list the mean times (in seconds) required for reflection removal, for localization of both iris and pupil, and the total time spent.

Table 1 shows that the best results are obtained by the modified version of the algorithm in He et al. (2009) for the pupil, and by OpenCV interpolation for the iris. The latter also achieves the shortest mean total times, except for the case of no removal. The methods based on frontier scanning both perform worse, and the worst performance is obtained when using a random subset of neighbour pixel values within a window to substitute for a pixel belonging to a specular reflection. Table 2 clearly shows a significant improvement when localization quality assessment is performed to possibly refine the localization results. Bilinear interpolation is the method that improves the most. Table 3 shows the results when performing histogram equalization on the original image before the subsequent segmentation steps. A further overall improvement is obtained, and again bilinear interpolation is the method that improves the most. Finally, Table 4 reports the results obtained by ISIS, which includes histogram equalization and segmentation

Table 1 Experimental results for the modified version of the algorithm in He et al. (2009) without equalization and without segmentation quality assessment

Method                 | Pupil distance (%) | Pupil radius (%) | Iris distance (%) | Iris radius (%) | Removal time (s) | Localization time (s) | Total time (s)
No removal             | 10.763             | 6.500            | 10.249            | 5.722           | 0                | 0.512309661           | 0.5123097
Bilinear interpolation | 10.086             | 6.157            | 9.540             | 4.299           | 0.001764531      | 0.563010096           | 0.5647746
OpenCV interpolation   | 10.364             | 6.902            | 9.417             | 4.165           | 0.002037633      | 0.511462              | 0.5134996
Frontier_M             | 11.48              | 7.03             | 9.79              | 4.17            | 0.003993831      | 0.622495814           | 0.6264897
Frontier_R             | 11.631             | 7.135            | 9.932             | 4.371           | 0.003648294      | 0.546019842           | 0.5496682

Table 2 Experimental results for the modified version of the algorithm in He et al. (2009) without equalization and using segmentation quality assessment [properties (8)]

Method                 | Pupil distance (%) | Pupil radius (%) | Iris distance (%) | Iris radius (%) | Removal time (s) | Localization time (s) | Total time (s)
No removal             | 7.075              | 3.784            | 7.101             | 2.708           | 0                | 0.327427333           | 0.3274273
Bilinear interpolation | 5.947              | 3.027            | 6.190             | 2.716           | 0.001566611      | 0.363643722           | 0.3652102
OpenCV interpolation   | 6.533              | 3.826            | 6.553             | 3.154           | 0.002424404      | 0.293235              | 0.2956595
Frontier_M             | 6.48               | 3.32             | 6.26              | 3.14            | 0.003313472      | 0.355837038           | 0.3591506
Frontier_R             | 7.003              | 3.743            | 7.018             | 3.324           | 0.003135382      | 0.362516691           | 0.3656521


Table 3 Experimental results for the modified version of the algorithm in He et al. (2009) with preliminary image histogram equalization and using segmentation quality assessment [properties (8)]

Method                 | Pupil distance (%) | Pupil radius (%) | Iris distance (%) | Iris radius (%) | Removal time (s) | Localization time (s) | Total time (s)
No removal             | 7.827              | 4.136            | 7.355             | 3.002           | 0                | 0.365743893           | 0.3657439
Bilinear interpolation | 4.465              | 3.113            | 5.290             | 2.088           | 0.003220732      | 0.362850902           | 0.3660717
OpenCV interpolation   | 6.626              | 4.420            | 6.731             | 2.601           | 0.002685071      | 0.382233179           | 0.3849183
Frontier_M             | 6.23               | 4.14             | 6.22              | 2.04            | 0.002997771      | 0.384969571           | 0.3879673
Frontier_R             | 5.423              | 3.591            | 5.828             | 2.003           | 0.003034472      | 0.367664944           | 0.3706994

Table 4 Results by ISIS

Method                 | Pupil distance (%) | Pupil radius (%) | Iris distance (%) | Iris radius (%) | Removal time (s) | Localization time (s) | Total time (s)
No removal             | 4.352              | 2.862            | 3.902             | 2.206           | 0                | 0.216191724           | 0.2161917
Bilinear interpolation | 4.978              | 2.928            | 4.562             | 2.699           | 0.001622064      | 0.215907661           | 0.2175297
OpenCV interpolation   | 4.956              | 2.718            | 4.248             | 2.745           | 0.002883723      | 0.217924827           | 0.2208086
Frontier_M             | 4.00               | 2.51             | 3.88              | 2.18            | 0.00316303       | 0.212911503           | 0.2160745
Frontier_R             | 3.994              | 2.278            | 3.907             | 2.149           | 0.003232671      | 0.215562126           | 0.2187948

assessment in its basic design. These results are definitely the best, so we can state that ISIS outperforms the algorithm in He et al. (2009) on the UBIRIS.v2 database. It is interesting to note that, due to the characteristics of the proposed procedure, the time spent on specular reflection removal does not add substantial quality to the final results, so we use ISIS without it.

To measure the effect of a better segmentation on a complete iris recognition process, we compared the results of the method in He et al. (2009) with those obtained by substituting their automatic segmentation with our semi-manual procedure. We achieved an improvement in EER of about 10%, as well as a better ROC curve. In identification mode, we obtained an increased recognition rate and better values for cumulative match scores (CMS) at higher ranks. A detailed demonstration that a better segmentation leads to better recognition is not one of the goals of the present paper. However, it is quite intuitive that a bad segmentation causes extraneous elements to be coded as iris regions, e.g., parts of the pupil or of the eyelids. Such elements possibly differ from one instance to another, due to pose and light variations; therefore, when a matching operation is performed between two iris codes, they negatively influence the result.

6 Ambient intelligence

Ambient intelligence (AmI) anticipates the implementation of a complex, distributed and well-structured intelligent service system. This implies a practical configuration of technology which delivers services based on context sensitivity (including the identity of a user in some intelligent environment). This is also related to pervasive and ubiquitous computing, where the environment follows the user and is constantly aware of her presence. AmI borrows concepts and techniques from the theory and practice of autonomous and intelligent systems. The resulting environment is a "community" of smart objects with computational capability and high user-friendliness awareness. Such devices and sensors surround people and are capable of recognising and responding to the presence of different individuals in a seamless, unobtrusive and often invisible way. To design such environments, many methodologies and techniques have to be merged together, which implies embedding and networking intelligent computational devices within the environment, context awareness to recognize people and be attuned to them and their physical settings, and personalization of services. The last requires monitoring the needs expressed and/or perceived, together with overall AmI system performance.

The topic of this paper is especially bound to user recognition. This process should preferably be performed in a non-intrusive and transparent way, possibly without user involvement. Biometric recognition techniques are a very good candidate to address this aspect. However, each biometric suffers from specific limitations. In particular, most existing systems for iris recognition impose tight constraints on the position, distance and motion of the subject to be identified. If we analyze this fact more deeply, we see that the

Iris segmentation using pupil location, linearization, and limbus boundary reconstruction

constraints are largely bound to the image acquisition procedure, rather than to representation, coding, and matching. In other words, what affects the user interaction is the high quality expected from iris images vis-a`-vis resolution and distance to the subject of interest. For instance, the standard ISO/IEC 19794-6 (ISO/IEC 1979), addressing iris data interchange formats in biometric systems, considers a resolution of 100–150 pixels across the iris to be of ‘‘marginal’’ (poor) quality. Two main approaches can be identified to classify current approaches to address this problem. From one side, we have approaches that exploit special hardware, as for example iris on the move (IOM) system (Matey et al. 2006) with its highresolution cameras and video synchronized strobed illumination. Along a second approach, we have attempts to increase algorithmic robustness in order to employ quite common hardware. Examples include a number of projects, which currently consider different biometric characteristics like face. This Projects aim to exploit mid-range mobile devices (smart phones) with their standard camera to allow authentication to (remote) services [see for example (Poh et al. 2009)]. Our work takes its place along this second approach. Since segmentation is the processing step which is mostly affected by possible acquisition problems, we aim at improving it even in less controlled environments using hardware of lower performances. For example, we experimented using currently available medium level webcams instead of high-resolution cameras. This goes towards the goal of making the process as transparent and non-intrusive as possible for the user, as in the true spirit of AmI. The quality measures accuracy indexes presented in Sect. 4 help in reaching this goal. 
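The resolution criterion above can be checked programmatically during acquisition. In the following sketch, only the 100–150 pixel ''marginal'' band comes from the text; the remaining cut-offs are commonly cited values that should be verified against ISO/IEC 19794-6:

```python
def iris_quality_band(iris_diameter_px):
    """Classify capture quality by the number of pixels across the iris.
    The 100-150 px 'marginal' band is stated in the text; the other
    thresholds (150 and 200 px) are assumed and should be checked
    against ISO/IEC 19794-6."""
    if iris_diameter_px >= 200:
        return "good"
    if iris_diameter_px >= 150:
        return "acceptable"
    if iris_diameter_px >= 100:
        return "marginal"
    return "unacceptable"
```

Such a check allows an AmI acquisition front-end to reject or re-request a capture before any segmentation effort is spent on it.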
In a video sequence, multiple frames may undergo the segmentation process, and the system can automatically choose the ''best'' one (the one giving the most reliable segmentation) before continuing the recognition process. The user may remain completely unaware of this, unless adverse conditions make the system automatically ask for a different position or distance. The organization of two important international competitions, aiming to evaluate current iris segmentation [NICE I, http://nice1.di.ubi.pt/index.html (Proenca and Alexandre 2007)] and recognition [NICE II, http://nice2.di.ubi.pt/] algorithms, demonstrates the actual shift of both research and market interests towards such objectives. These competitions address in particular the kind of iris images captured and their acquisition modalities: the iris images are captured using normal cameras in visible light, instead of the often used devices based on near-infrared (NIR) illumination. The direct consequence is the presence of noise, specular reflections, and spot lights, which make both segmentation and recognition particularly complex. Our system, ISIS, can be set up according to two different configurations. In the first one, it is used


downstream of a webcam-based acquisition process. In particular, we experimented with a Logitech C305 equipped with a Carl Zeiss lens and software-controllable focus. A system of this kind might be suited for logical access to domotic (home automation) services, implemented using a personal computer, and can be integrated with possible pre-existing functionalities. The main limit, according to the results highlighted by the research on the IOM system (Matey et al. 2006), is the so-called ''acquisition volume'' (see Fig. 5 for the 3D biometric space where image capture takes place). Since the webcam position is fixed relative to the user, the system is not able to address cases when the user's height or position might hinder the acquisition process. In the second case, the adopted solution is a Pan/Tilt/Zoom (PTZ) device (see Fig. 5), where a stepper motor allows orienting the camera in a semi-spherical region; the zoom is also software-controlled. Communication is implemented via Ethernet, further supporting acquisition device deployment even at a large distance from the processing module. The ability to adjust via software both camera orientation and zoom in an extremely short time significantly increases the device acquisition volume. Figure 6 shows how the system first locates the ocular region using a Haar-based object detector (Viola and Jones 2001). The located region is the input for ISIS, which performs the iris segmentation and measures the quality of the obtained sample. Based on ISIS feedback, the system is able to select the best sample (or samples) or to adjust acquisition conditions.

Fig. 5 An example iris system based on visible images acquired through a high-resolution PTZ (Pan/Tilt/Zoom) camera

Fig. 6 Examples of iris location in cases of off-axis position, caused by the pose of the subject with respect to the camera
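The selection step above can be sketched as a simple loop over candidate samples, keeping the one with the highest quality score and signalling re-acquisition when even the best falls below a threshold. Here `segment` and `quality` are hypothetical stand-ins for ISIS and its accuracy indexes, not the paper's actual API:

```python
def acquire_best(frames, segment, quality, min_quality=0.5):
    """Segment every candidate frame, score each segmentation, and
    either return the best sample or request a new acquisition.
    `segment` and `quality` are placeholder callables standing in for
    ISIS and its quality/accuracy indexes; `min_quality` is an
    assumed threshold."""
    scored = [(quality(segment(f)), f) for f in frames]
    best_score, best = max(scored, key=lambda p: p[0])
    if best_score < min_quality:
        # Even the best frame is unreliable: the AmI front-end should
        # ask the user for a different position or distance.
        return None, best_score
    return best, best_score
```

The closed-loop behaviour described in the text corresponds to calling this routine again after the PTZ camera (or the user) has adjusted the acquisition conditions.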

7 Conclusions

This paper describes a novel iris segmentation technique using pupil location, linearization, and limbus boundary reconstruction. Our experimental results show its feasibility and comparative advantages against existing methods. The proposed biometric method mediates the interface between ambient intelligence (AmI) and pervasive and ubiquitous computing. This is relevant to security and surveillance, on one side, and mobile commercial and financial applications, on the other side. The interface models the equivalent of a closed-loop control: the ambient intelligence environment, driven by the quality of iris segmentation, e.g., accuracy indexes that predict expected authentication performance, acts as the user-friendly force driving biometric (''iris'') data acquisition and capture. Pervasive and ubiquitous computing, driven by user profiles, e.g., biometric signatures and templates including soft biometrics, provides feedback on authentication and its confidence. The interface is adaptive, with the closed-loop control modeling analysis and synthesis cycles of iterative and progressive interpretation. The main factor affecting AmI is the relatively limited range of the available sensors, while iris authentication is the main factor affecting pervasive and ubiquitous computing. Venues for future research include extending the range of sensors, on one side, and the ability to deal with incomplete and corrupt iris information, on the other side, i.e., the ability to perform with lower quality iris images and allow for less sophisticated equipment.


References

Basit A, Javed MY, Masood S (2008) Non-circular pupil localization in iris images. In: International conference on emerging technologies, pp 228–231

Belcher C, Du Y (2007) Feature information based quality measure for iris recognition. In: Proceedings of IEEE international conference on systems, man, and cybernetics, pp 3339–3345

Daugman J (2004) How iris recognition works. IEEE Trans Circuits Syst Video Technol 14(1):21–30

Daugman J (2007) New methods in iris recognition. IEEE Trans Syst Man Cybern B Cybern 37(5):1167–1175

Fernandez-Saavedra B, Liu-Jimenez J, Sanchez-Avila C (2007) Quality measurements for iris images in biometrics. In: EUROCON 2007, international conference on ''computer as a tool'', pp 759–764

He Z, Tan T, Sun Z, Qiu X (2009) Toward accurate and fast iris segmentation for iris biometrics. IEEE Trans Pattern Anal Mach Intell 31(9):1670–1684

ISO/IEC 19794-6:2005. Information technology. Biometric data interchange formats. Iris image data. http://webstore.iec.ch/preview/info_isoiec19794-6%7Bed1.0%7Den.pdf. Accessed 4 Nov 2010

Li P, Liu X (2008) An incremental method for accurate iris segmentation. In: International conference on pattern recognition, pp 1–4

Matey RJ, Naroditsky O, Hanna K, Kolczynski R, Loiacono DJ, Mangru S, Tinker M, Zappia TM, Zhao WY (2006) Iris on the move: acquisition of images for iris recognition in less constrained environments. Proc IEEE 94(11):1936–1947

Nguyen VH, Hakil K (2008) A novel circle detection method for iris segmentation. Congr Image Signal Process 3:620–624

Pan L, Xie M (2007) The algorithm of iris image quality evaluation. In: International conference on communications, circuits and systems (ICCCAS 2007), pp 616–619

Poh N, Wong R, Kittler J, Roli F (2009) Challenges and research directions for adaptive biometric recognition systems. In: Tistarelli M, Nixon MS (eds) Proceedings of ICB 2009, Alghero, Italy. LNCS, vol 5558. Springer, Berlin, pp 753–764

Proenca H, Alexandre LA (2007) The NICE.I: noisy iris challenge evaluation—part I. In: Proceedings of the first IEEE international conference on biometrics: theory, applications, and systems (BTAS 2007), Crystal City, VA, pp 1–4

Proença H, Filipe S, Santos R, Oliveira J, Alexandre LA (2010) The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Trans Pattern Anal Mach Intell 32(8):1529–1535

Taubin G (1991) Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations, with applications to edge and range image segmentation. IEEE Trans Pattern Anal Mach Intell 13:1115–1138

Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 1:511–518

Wildes R (1997) Iris recognition: an emerging biometric technology. Proc IEEE 85(9):1348–1363

Yahya AE, Nordin MJ (2008) A new technology for iris localization in iris recognition systems. Inf Technol J 7:924–929

Zuo J, Schmid NA (2008) An automatic algorithm for evaluating the precision of iris segmentation. In: 2nd IEEE international conference on biometrics: theory, applications and systems (BTAS 2008), pp 1–6
