Optical iris localization approach

Oum El Kheir ABRA

Esmail AHOUZI

Nawfel AZAMI

Fakhita REGRAGUI

UFR ACSYS, Laboratoire LIMIARF, Département de Physique Faculté des Sciences, Université Mohammed V Rabat, Morocco [email protected]

Institut National des Postes et Télécommunications Rabat, Morocco [email protected]

Institut National des Postes et Télécommunications Rabat, Morocco [email protected]

UFR ACSYS, Laboratoire LIMIARF, Département de Physique Faculté des Sciences, Université Mohammed V Rabat, Morocco [email protected]

Abstract— This paper introduces a new iris segmentation approach based on correlation filters. The iris boundaries are detected using an original composite-filter technique called indexed composite filters. For each iris boundary, an indexed composite filter is generated from a database of circular contours covering all possible values of the boundary radius. Experimental results for pupil and iris outer boundary localization on the CASIA iris database demonstrate the promising performance of this method.

Index terms: Biometrics, iris, segmentation, correlation filter, pattern recognition.

I. INTRODUCTION

Biometrics has become an important research topic in recent years, largely motivated by the increasing emphasis on e-security and surveillance. Most current verification and identification systems are based on passwords, which can be guessed, stolen or broken. Biometrics, by contrast, identifies a person or verifies an identity using physiological or behavioral characteristics, such as the face, iris, fingerprint, voice, hand or signature. Iris identification is regarded as one of the most promising fields of biometric recognition because of its high reliability, convenience and stability. Its distinctiveness is due to several characteristics, including ligaments, furrows, ridges, crypts, rings, corona, freckles and a zigzag collarette.

The first stage in iris recognition is to localize and extract the iris region from a digital eye image. Iris segmentation aims to separate the iris from the background by detecting the inner boundary (between pupil and iris) and the outer boundary (between iris and sclera). Generally, the iris region is approximated by two circles, one for each boundary. During the last decade, many techniques have been proposed for automatic iris segmentation [1][2][3]. J. G. Daugman utilizes an integrodifferential operator to locate the circular iris and pupil regions. The Hough transform is also used in much of the iris segmentation literature, but it suffers from drawbacks related to the threshold values used when generating the edge map of the eye image, and it is computationally intensive, which is inconvenient for real-time applications; several studies have reduced this complexity by limiting the area over which the Hough transform is applied. On the other hand, Kumar et al. proposed an iris verification scheme based on correlation filters [3][4]. Their segmentation algorithm first downsamples an iris image to 100 by 100 pixels, then computes the cross-correlation between the coarse image and a bank of 100 circles of different radii; a gradient operator applied to the resulting circular integrations finds the maximum radial gradients, one for the inner boundary and one for the outer boundary.

In this work, we propose to use optical composite correlation filters in the iris segmentation step. Both the outer and inner iris boundaries are approximated as non-concentric circles whose radii vary from 28 to 75 pixels for the inner boundary (pupil) and from 90 to 150 pixels for the outer boundary; these are, in general, the limiting radius values of each boundary for the CASIA database [5]. An obvious first approach to boundary detection, proposed in [4], is to build a bank of circular matched filters, one per candidate circle, cross-correlate the eye image with all 108 filters, and take the maximum among the 108 outputs to indicate the radii of the iris boundaries; the positions of the correlation peaks in the correlation plane give the center coordinates of these boundaries. In practice this simple method works well, but it is highly redundant. The algorithm that we propose eliminates this redundancy by using a new design of composite filters called Indexed Composite Filters (ICF), introduced in an earlier work [6]. This approach requires one cross-correlation per input image for the detection of each contour.

978-1-4244-3806-8/09/$25.00 © 2009 IEEE
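The brute-force bank-of-filters baseline of [4] described above can be sketched as follows. This is an illustrative NumPy reimplementation, not the authors' code: the helper names (`ring_template`, `best_circle`), image size and radius range are assumptions chosen for the example.

```python
import numpy as np

def ring_template(shape, radius, thickness=1.5):
    """Binary circular contour of the given radius, centred in the frame."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    d = np.hypot(yy - h / 2.0, xx - w / 2.0)
    return (np.abs(d - radius) < thickness).astype(float)

def best_circle(edge_img, radii):
    """Cross-correlate an edge image with one ring filter per candidate
    radius and return (radius, (row, col) peak) of the strongest response."""
    F = np.fft.fft2(edge_img)
    best = (-np.inf, None, None)
    for r in radii:
        T = np.fft.fft2(ring_template(edge_img.shape, r))
        # FFT-based circular cross-correlation: IFFT(F * conj(T))
        corr = np.real(np.fft.ifft2(F * np.conj(T)))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[peak] > best[0]:
            best = (corr[peak], r, peak)
    return best[1], best[2]
```

Because every candidate radius needs its own correlation, the cost grows linearly with the size of the filter bank, which is precisely the redundancy the indexed composite filters remove.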
The parameters (radius and center coordinates) of the iris contours are determined through optical correlation of the eye image with two indexed composite filters. The first filter is synthesized from images containing circular contours corresponding to all possible values of the iris-pupil contour radius; the second is synthesized in the same way from images containing circular contours corresponding to all possible values of the iris-sclera contour radius.

This paper is organized as follows. Section II reviews the indexed composite filter method. Section III presents experimental results of the proposed algorithm on the CASIA database. Section IV concludes.

II. PROPOSED METHOD

Pattern recognition based on optical correlation compares a scene image with a predefined image called the reference. The correlator considered here is based on the conventional "4f" frequency-plane coherent optical correlator [7], shown in Fig. 1.

Fig. 1. Optical processor

Let s(x, y) and r(x, y) denote the input scene and the reference, respectively. The reference is Fourier transformed (FT) and its conjugate is introduced in the Fourier plane (P2), while the scene image is presented in the input plane (P1); both are displayed on spatial light modulators (SLM). Behind the filter plane, the correlation function is expressed as C(u, v) = S(u, v) H(u, v), where H(u, v) = R*(u, v) is the filter matched to the reference r(x, y); S(u, v) and R(u, v) are the Fourier transforms of the input scene and the reference, respectively, and the superscript * denotes the complex conjugate. The correlation is obtained in the output plane (P3) as c(x, y) = s(x, y) ⊗ h(x, y), where h(x, y) is the impulse response of the filter matched to r(x, y) and ⊗ denotes the correlation operator. The correlation plane exhibits a peak when the reference is present in the scene image.

In ref. [6], we proposed a new design of optical composite filters for multi-class recognition that allows both location and identification of objects in input scenes. Consider an input scene containing the reference located at the origin,

s(x, y) = r(x, y),   (1)

and a filter matched to the reference, described as

h_r(x, y) = r(x − α_0, y − β_0) + r(x − α_1, y − β_1).   (2)

When the scene of eq. (1) is correlated with the filter of eq. (2), two correlation peaks appear in the correlation plane, separated by a distance d = √((α_1 − α_0)² + (β_1 − β_0)²). This distance is used as a criterion for indexing the objects of the learning class of the filter: an index is attached to each object in order to identify it at the output. Following this approach, we design an indexed composite phase-only filter (ICPOF) using a set of n objects {r_i(x, y)}, i = 1 : n, considered as the learning class of the filter. The ICPOF is built as a sum of the phase-only information of the references, in order to ensure good discrimination [8][9][10]:

H(u, v) = Σ_{i=1}^{n} R_i*(u, v),   with   R_i(u, v) = e^{j[φ_{r_i}(u, v) + (α_{i0} u + β_{i0} v)]} + e^{j[φ_{r_i}(u, v) + (α_{i1} u + β_{i1} v)]},   (3)

where φ_{r_i}(u, v) is the phase of the Fourier transform of r_i, and (α_{i0}, β_{i0}) and (α_{i1}, β_{i1}) are the index offsets assigned to r_i.

Suppose now that a mono-object input scene s(x, y) containing an object r_k(x − x_k, y − y_k) from the training class, located at position (x_k, y_k), is correlated with the filter of eq. (3). In the Fourier plane, the correlation is formulated as

C(u, v) = R_k(u, v) H(u, v),   (4)

where R_k(u, v) and φ_{r_k}(u, v) are the amplitude and the phase of the input scene Fourier transform, respectively. Expanding eq. (4) gives

C(u, v) = R_k(u, v) e^{j[(x_k − α_{k0})u + (y_k − β_{k0})v]} + R_k(u, v) e^{j[(x_k − α_{k1})u + (y_k − β_{k1})v]}
          + Σ_{i=1, i≠k}^{n} R_k(u, v) ( e^{−j[φ_{r_i}(u, v) − φ_{r_k}(u, v) + (α_{i0} − x_k)u + (β_{i0} − y_k)v]} + e^{−j[φ_{r_i}(u, v) − φ_{r_k}(u, v) + (α_{i1} − x_k)u + (β_{i1} − y_k)v]} ).   (5)

The first term on the right-hand side of eq. (5) is the autocorrelation distribution, which gives a peak at position (x_k − α_{k0}, y_k − β_{k0}); the second term is the autocorrelation distribution which gives a second peak at position (x_k − α_{k1}, y_k − β_{k1}). The other terms are the cross-correlations of object k with the rest of the objects of the learning class. Let (xp1, yp1) and (xp2, yp2) be the position coordinates of the two correlation peaks in the correlation plane:

xp1 = x_k − α_{k0}   and   yp1 = y_k − β_{k0},
xp2 = x_k − α_{k1}   and   yp2 = y_k − β_{k1}.   (6)

Since (α_{k0}, β_{k0}) and (α_{k1}, β_{k1}) are known, and (xp1, yp1) and (xp2, yp2) are obtained by analyzing the correlation plane, (x_k, y_k) is computed as

x_k = (xp1 + xp2 + α_{k0} + α_{k1}) / 2,   y_k = (yp1 + yp2 + β_{k0} + β_{k1}) / 2.   (7)

III. IRIS SEGMENTATION

An eye is composed of three main parts: the sclera, the iris and the pupil. The sclera is the white part of the eye, outside the iris. The pupil lies at the center of the eye, and its diameter relative to the iris diameter changes constantly, even under steady illumination. The iris, which carries abundant texture information, is the colored annular portion surrounding the pupil. To extract the iris from the eye image, we need to detect the inner boundary (iris/pupil) and the outer boundary (iris/sclera).

In this work, both boundaries are approximated as circles, so for each boundary the problem is to find the circle parameters (radius and center coordinates). We propose to use the method described in Section II to segment the iris region from the eye image.

A. Iris inner boundary detection

An ICPOF is synthesized from a set of n circular contours whose radii vary from 28 to 75 pixels, for detection of the iris inner boundary (pupil). Related examples are shown in Fig. 4.

Fig. 4. Examples of contour references

Each circular contour is encoded with an index that is used to identify the contour parameters in the scene image during the segmentation process. Figure 5(a) presents one of the input scenes; in the correlation plane, depicted in Fig. 5(b), two visible peaks appear, displaced from the center by an amount equal to the index associated with the contour at the design stage.
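As a numerical illustration of the ICPOF construction described above, the sketch below builds a phase-only filter from three ring references, each placed at two index offsets, and sums the conjugated spectra. It is an assumption-laden toy example, not the paper's implementation: the frame size, radii, and index offsets are ours, and the helper names (`ring`, `phase_ramp`, `icpof`) are illustrative.

```python
import numpy as np

def ring(shape, radius, thickness=1.5):
    """Binary circular contour centred in the frame."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    d = np.hypot(yy - h / 2.0, xx - w / 2.0)
    return (np.abs(d - radius) < thickness).astype(float)

def phase_ramp(shape, dy, dx):
    """Fourier-domain linear phase equivalent to a spatial shift (dy, dx)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2j * np.pi * (fy * dy + fx * dx))

def icpof(shape, radii, indices):
    """Indexed composite phase-only filter: for each reference, place its
    phase-only spectrum at the two index offsets (0, -index) and
    (0, +index), conjugate, and accumulate."""
    H = np.zeros(shape, dtype=complex)
    for r, idx in zip(radii, indices):
        P = np.exp(1j * np.angle(np.fft.fft2(ring(shape, r))))
        R = P * (phase_ramp(shape, 0, -idx) + phase_ramp(shape, 0, idx))
        H += np.conj(R)
    return H
```

Correlating a scene containing one of the learning-class rings against this single filter yields a pair of peaks whose separation reveals which ring matched, exactly as the indexing argument above predicts.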

Fig. 3. (a) Eye image. (b) Segmented iris

Fig. 5. (a) Input scene. (b) Correlation plane obtained with the ICF, showing the two correlation peaks and the contour center. (c) Circle parameters (radius and center coordinates) obtained by analysis of the correlation plane

Using the Canny edge detector, an edge map is generated from the original eye image with a proper threshold value (0.5 in our experiments). This edge image is then correlated with the proposed ICPOF, and the parameters of the contour are identified by analyzing the resulting correlation plane.
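The edge-map stage can be sketched as follows. The paper uses the Canny detector; for a self-contained example the sketch below deliberately substitutes a plain gradient-magnitude threshold, a simplification rather than the Canny operator itself, and the function name `edge_map` is ours.

```python
import numpy as np

def edge_map(img, threshold=0.5):
    """Binary edge map via gradient-magnitude thresholding: a simplified
    stand-in for the Canny detector used in the paper. `threshold` is
    taken relative to the maximum gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()
```

For a dark pupil on a lighter iris, this already yields a thin ring of edge pixels along the inner boundary, which is all the subsequent correlation step needs.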

The search is made for the two maximum correlation peaks, which correspond to the circular contour whose radius is closest to the pupil radius. The distance between the two peaks gives the radius of the inner boundary, and its center is found from the peak coordinates in the correlation plane according to eq. (7).

Figure 6: Pupil localization: (a) eye image. (b) Edge image. (c) Correlation plane obtained. (d) Localized pupil
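The correlation-plane analysis above reduces to a few lines: given the two peak positions and the known index offsets of the matched contour, eq. (7) yields the boundary center, and the peak separation identifies the contour. The helper below is an illustrative sketch under that sign convention; the name `circle_from_peaks` is ours.

```python
import math

def circle_from_peaks(xp1, yp1, xp2, yp2, a0, b0, a1, b1):
    """Centre (x_k, y_k) of the detected contour from the two indexed
    correlation peaks, following eq. (7), plus the peak separation used
    to look up the contour's index."""
    xk = (xp1 + xp2 + a0 + a1) / 2.0
    yk = (yp1 + yp2 + b0 + b1) / 2.0
    sep = math.hypot(xp2 - xp1, yp2 - yp1)
    return xk, yk, sep
```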


B. Iris outer boundary detection

It is generally difficult to locate the outer boundary amid surrounding noise when there is little contrast between the iris and sclera regions, particularly when the eyelids or eyelashes occlude the iris; in such cases edge detection may fail to find the edges of the iris border. In our experiments, the threshold of the Canny edge operator is set to 0.25 and the sigma to 2.5. In the same manner as for the inner boundary detection, an ICPOF is designed using a database of 60 circular contours whose radii lie between the limiting values of the iris outer boundary (90 to 150 pixels). The edge map image is then correlated with the ICPOF, and the correlation plane is analyzed in order to localize the two highest peaks, which give the outer boundary parameters.

Figure 7: Iris outer boundary localization: (a) eye image. (b) Edge image. (c) Correlation plane obtained. (d) Localized outer iris boundary
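The 60-contour database for the outer boundary can be sketched as below. The frame size and the choice of one index per radius (here simply the radius itself) are illustrative assumptions; only the radius range 90 to 149 pixels, giving 60 contours, follows the text.

```python
import numpy as np

def ring(shape, radius, thickness=1.5):
    """Binary circular contour of the given radius, centred in the frame."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    d = np.hypot(yy - h / 2.0, xx - w / 2.0)
    return (np.abs(d - radius) < thickness).astype(float)

# One contour reference per candidate outer-boundary radius (90..149),
# each paired with a distinct index that would set the separation of its
# correlation-peak pair in the ICPOF.
shape = (340, 340)
outer_refs = {r: ring(shape, r) for r in range(90, 150)}
outer_index = {r: r for r in outer_refs}
```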

Because this is, to our knowledge, the first work to exploit composite correlation filters in the iris segmentation process, there is no directly comparable work in terms of accuracy and time consumption. Nevertheless, compared with the work in reference [4], our method reduces the complexity to 2 cross-correlations instead of 108. Table I gives statistics of correct segmentation of the iris region boundaries. For the detection of the outer iris boundary, we selected from the CASIA database the eye images that are not seriously occluded by eyelashes. It should be noted that false detections are due to low contrast between the iris and sclera, which prevents correct detection of the outer boundary of the iris; in other databases where the contrast is better, the results are more effective.

TABLE I. EXPERIMENTAL RESULTS OF IRIS LOCALIZATION

Boundary               Number of images   Correct detections   False detections   Rate (%)
Iris/Pupil boundary    756                756                  0                  100
Iris/Sclera boundary   400                300                  100                75

IV. CONCLUSION

In this paper, a new iris localization method using indexed composite correlation filters is proposed. For each boundary to be detected, an ICPOF is generated using a database of circular contours corresponding to all possible boundary radius values. Each circular contour is encoded with an index that serves for identification in the recognition process; this index is reflected in the correlation plane by two peaks encoding the parameters of the contour. Based on the results presented above, we can conclude that the proposed method is encouraging, since this is the first time that composite correlation filters are used in iris segmentation. Furthermore, the proposed segmentation method is distinguished by the fact that ICPOFs can be implemented on available optical devices such as SLMs.

ACKNOWLEDGMENT

Portions of the research in this paper use the CASIA iris image database collected by the Institute of Automation, Chinese Academy of Sciences.

REFERENCES

[1] L. Ma, "Personal identification based on iris recognition," Ph.D. dissertation, Inst. of Automation, Chinese Academy of Sciences, Beijing, China, June 2003.
[2] J. G. Daugman, "How iris recognition works," University of Cambridge, 2001.
[3] B. Kumar, C. Xie, and J. Thornton, "Iris verification using correlation filters," in Proc. 4th Int. Conf. Audio- and Video-Based Biometric Person Authentication, pp. 697–705, 2003.
[4] J. Thornton, M. Savvides, and B. V. Kumar, "Robust iris recognition using advanced correlation techniques," LNCS 3656, ICIAR, Springer-Verlag, Berlin/Heidelberg, pp. 1098–1105, 2005.
[5] CASIA Iris Image Database, http://www.sinobiometrics.com
[6] O. Abra, E. Ahouzi, I. Moreno, F. Regragui, "Indexed composite filters," Opt. Rev., vol. 16, in press, 2009.
[7] A. Vander Lugt, "Signal detection by complex spatial filtering," IEEE Trans. Inf. Theory, IT-10, p. 139, 1964.
[8] E. Ahouzi, J. Campos, K. Chalasinska-Macukow, M. J. Yzuel, "Optoelectronic pure phase correlator," Opt. Commun. 110, 27, 1994.
[9] E. Ahouzi, J. Campos, and M. J. Yzuel, "Phase-only filter with improved discrimination," Opt. Lett. 19, 1340, 1994.
[10] E. Ahouzi, J. Campos, K. Chalasinska-Macukow, M. J. Yzuel, "Pure phase correlation with improved discrimination capability," Opt. Rev., 1996.
