JOURNAL OF COMPUTING, VOLUME 4, ISSUE 6, JUNE 2012, ISSN (Online) 2151-9617 https://sites.google.com/site/journalofcomputing WWW.JOURNALOFCOMPUTING.ORG


An Efficient Iris Segmentation Approach to Develop an Iris Recognition System

Md. Selim Al Mamun, S.M. Tareeq and Md. Hasanuzzaman

Abstract—Iris recognition is regarded as the most stable and accurate biometric identification system. An iris recognition system basically consists of four steps: segmentation, normalization, encoding and matching. This paper proposes an efficient approach for iris segmentation. The segmentation approach uses a modified Canny edge detection algorithm, considering gradient finding, non-maximum suppression and hysteresis thresholding for the best results. This paper also presents an automated iris recognition system based on the proposed segmentation approach. The approach proved very successful, with about 90% of the images of the dataset [CASIA database version 1.0] segmented correctly. The iris recognition system resulted in a False Reject Rate (FRR) of 5.222% and a False Accept Rate (FAR) of 1.932%.

Index Terms—Iris Segmentation, Iris Recognition, Hough Transformation, False Reject Rate, False Accept Rate.

——————————  ——————————

1 INTRODUCTION

A biometric system refers to the identification and verification of individuals based on certain physiological traits of a person. Commonly used biometric features include facial features, voice, fingerprint, handwriting, retina and, most importantly, the iris. Person identification using the iris is a newly emergent technique in the world of biometric systems. It is gaining much attention due to its accuracy, reliability and simplicity compared to other biometric systems. The iris is an externally visible yet protected organ located behind the cornea. Its features include the trabecular meshwork, crypts, pigment spots (moles and freckles) and the color of the iris. These visible patterns are unique to each individual, and it has been found that the probability of finding two individuals with identical iris patterns is almost zero. Even the left and right irises of the same person are different from each other, since the detailed patterns show little correlation with genetic determination [1].

An iris recognition system is an automated person identification technique in which the pattern of the individual's iris is used for identification. Some prototype iris recognition systems had been proposed earlier, but the topic drew little attention until the Cambridge researcher John Daugman [2] implemented a working iris recognition system — the first working automated system of its kind. Besides Daugman's [2] system, several others have been developed; the most notable include those of Wildes et al. [3], Boles and Boashash [4], Lim et al. [5] and Noh et al. [6]. The systems implemented by different researchers differ in the process of segmentation, iris code generation and also in the matching techniques. Wildes et al.'s [3] system employed the Hough transform, a standard computer vision algorithm used to determine the parameters of simple geometric objects. However, it requires threshold values to be chosen for edge detection, which may remove some critical edge points and cause failure to detect circles or arcs. Daugman [2] used an integro-differential operator for locating the circular iris region; his algorithm does not suffer from the thresholding problems of the Hough transform, but it may fail if there is noise (from reflections) in the image. Boles and Boashash [4] used an active contour model for segmentation, which suffers from high time consumption.

This paper consists of five sections. Section 2 presents the proposed iris recognition system, Section 3 describes the implementation of the system, Section 4 presents experimental results and discussion, and Section 5 concludes the paper.

2 PROPOSED IRIS RECOGNITION SYSTEM

Fig. 1. Proposed System Architecture

Fig. 1 shows the proposed system architecture. The proposed system is composed of five modules: (i) Image Acquisition; (ii) Segmentation: locating the iris region in an eye image; (iii) Normalization: creating a dimensionally consistent representation of the iris region; (iv) Feature Extraction: creating a template containing only the most discriminating features of the iris; and (v) Matching: matching a test template against the stored templates. The proposed segmentation approach includes image preprocessing, edge detection on the eye image, noise filtering, iris localization, pupil detection and removal of eyelids and eyelashes.

3 SYSTEM DESCRIPTION AND IMPLEMENTATION

3.1 Image Acquisition

This step is one of the most important and deciding factors for obtaining a good result. A good, clear image eliminates the need for noise removal and also helps avoid errors in calculation. This paper uses the CASIA (Chinese Academy of Sciences Institute of Automation) iris database [7]. It contains 756 iris images from 108 subjects. All iris images are 8-bit gray-level JPEG files, collected under near-infrared illumination and free from specular reflections.
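The preprocessing applied to each acquired image (downscaling by a factor of 0.40 and Gaussian smoothing with standard deviation 2.0, described in Section 3.2.1) can be sketched as follows. The original system is implemented in MATLAB; this NumPy/SciPy version, including the synthetic test image, is an illustrative assumption, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def preprocess(eye_image, scale=0.40, sigma=2.0):
    """Downscale the eye image and smooth it with a Gaussian filter.

    `scale` and `sigma` follow the values reported in the paper; the
    SciPy-based implementation itself is an illustrative sketch.
    """
    # Scale the image down to 40% of its original size (linear interpolation).
    small = ndimage.zoom(eye_image, scale, order=1)
    # Gaussian smoothing (sigma = 2.0) blurs the image and reduces noise.
    return ndimage.gaussian_filter(small, sigma=sigma)

# Example with a synthetic 280x320 "eye" image:
img = np.random.rand(280, 320)
out = preprocess(img)
print(out.shape)  # (112, 128)
```

The downscaling makes the subsequent Hough transforms considerably cheaper, since their cost grows with the number of edge pixels and candidate radii.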


3.2 Image Segmentation

The image segmentation module includes several steps. The following subsections sequentially introduce the image preprocessing, edge detection, noise filtering, iris localization, pupil detection and eyelid/eyelash removal methods.

3.2.1 Image Preprocessing

To make the computation faster the image is scaled down by a factor of 0.40. The images of CASIA are already preprocessed for iris research, so there is very little to clean up in them. The images are filtered with a Gaussian smoothing filter [8], which blurs the image and reduces effects due to noise. The degree of smoothing is determined by the standard deviation, chosen here as 2.0.

Fig. 2. Result of Preprocessing

3.2.2 Edge Detection

This paper uses a modified Canny edge detection algorithm to detect edges in the eye image. The modified algorithm involves three steps: gradient finding, non-maximum suppression and hysteresis thresholding. Kovesi's [9] algorithm is used to find the gradients in the image, modified as Wildes [3] suggests: for the iris region the vertical gradient is weighted by 1.0 and the horizontal gradient by 0.0, while for the pupil both gradients are weighted by 1.0. The gradient image is then thinned using the non-maximum suppression method proposed by Kovesi [9]: for a pixel (x, y) in the gradient image with orientation Θ(x, y), the edge intersects two of its 8-connected neighbors, and the point at (x, y) is a maximum if its value is not smaller than the values at the two intersection points. The third step is hysteresis thresholding, implemented with the same method used in Canny's algorithm [10]: any pixel with a value above a high threshold is taken as an edge pixel, all pixels below a low threshold are eliminated, and pixels between the two thresholds are also considered edge pixels if they are connected to pixels above the high threshold through a chain of pixels all above the low threshold.

Fig. 3. Result of Edge Detection

3.2.3 Noise Filtering

A median filter [11] is used to decrease the extraneous data left by the edge detection stage. This can remove some pixels on the circle boundary, but the boundary can still be localized successfully even with a few pixels absent. The filtering not only keeps the circle localization accurate but also makes it computationally faster, since fewer boundary pixels remain for the calculation.

Fig. 4. Result of Noise Filtering

3.2.4 Iris Localization

To detect the outer circle at the iris/sclera boundary, a modified circular Hough transform [12] is used. The range of radius values is set manually; the iris radius ranges from 90 to 150 pixels. For each edge point, circles of different radii are drawn, and the surrounding points on those circles have their weights increased if they are also edge points. These weights are added to an accumulator array. Once the circle points at all radii have been considered for every edge point, the maximum of the accumulator array gives the center of the circle and its radius.
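The voting scheme above can be sketched as a minimal circular Hough transform over a binary edge map. This is a simplified illustration, not the modified MATLAB implementation the paper uses; the angular sampling density and radius step are assumptions.

```python
import numpy as np

def circular_hough(edge_map, r_min=90, r_max=150, r_step=5):
    """Accumulate votes for circle centers over a range of radii.

    Each edge pixel votes for the centers of all circles of radius r
    passing through it; the accumulator maximum gives (row, col, r).
    """
    rows, cols = edge_map.shape
    radii = np.arange(r_min, r_max + 1, r_step)
    acc = np.zeros((rows, cols, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for i, r in enumerate(radii):
        # Candidate centers lie on a circle of radius r around each edge point.
        dy = np.round(r * np.sin(angles)).astype(int)
        dx = np.round(r * np.cos(angles)).astype(int)
        for y, x in zip(ys, xs):
            cy, cx = y + dy, x + dx
            ok = (cy >= 0) & (cy < rows) & (cx >= 0) & (cx < cols)
            acc[cy[ok], cx[ok], i] += 1
    # The accumulator maximum gives the best circle parameters.
    best = np.unravel_index(np.argmax(acc), acc.shape)
    return best[0], best[1], radii[best[2]]

# Synthetic check: an edge circle of radius 100 centred at (150, 150).
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
edge = np.zeros((300, 300), dtype=bool)
edge[np.round(150 + 100 * np.sin(theta)).astype(int),
     np.round(150 + 100 * np.cos(theta)).astype(int)] = True
cy, cx, r = circular_hough(edge)
print(cy, cx, r)
```

Restricting the radius range (here 90-150 pixels, as in the paper) keeps the accumulator small, which is why the paper sets the iris and pupil radius ranges manually.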





Fig. 5. Result of Iris Localization

3.2.5 Pupil Detection

For pupil detection the circular Hough transform algorithm is applied again. The radius range for the pupil is 28 to 75 pixels. To make pupil detection more efficient and accurate, the Hough transform for the iris/pupil boundary is performed within the iris region instead of the whole eye region, since the pupil always lies within the iris. After this process a circle is clearly present along the pupil boundary.

Fig. 6. Result of Pupil Detection

3.2.6 Removal of Eyelids and Eyelashes

To isolate the eyelids from the rest of the image, lines are fitted to the upper and lower eyelids using the linear Hough transform. The lines are fitted exterior to the pupil region and interior to the iris region, and the points above the upper line and below the lower line are marked as NaN. The eyelashes are very dark compared to the rest of the iris image, so they are easily removed using a simple thresholding technique; those pixels are also marked as NaN. Fig. 10 shows the result.

3.3 Normalization

The next step is to normalize the segmented iris region, to enable generation of the iris template and present it in a generalized form for comparison. For this purpose a technique based on Daugman's [2] rubber sheet model is employed. The center of the pupil is considered the reference point, and a remapping formula converts points on the Cartesian scale to the polar scale:

    I(x(r, θ), y(r, θ)) → I(r, θ)                                    (1)

with x(r, θ) = (1 − r) x_p(θ) + r x_l(θ) and y(r, θ) = (1 − r) y_p(θ) + r y_l(θ), where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r, θ) are the corresponding normalized polar coordinates, and x_p(θ), y_p(θ) and x_l(θ), y_l(θ) are the coordinates of the pupil and iris boundaries along the direction θ. With the displacement of the centre of the pupil relative to the centre of the iris given by (σ_x, σ_y), the distance r′ between the edge of the pupil and the edge of the iris at an angle θ around the region is

    r′ = √α β ± √(α β² − α + r₁²),  with  α = σ_x² + σ_y²  and  β = cos(π − arctan(σ_y / σ_x) − θ)

where r₁ is the radius of the iris.

Fig. 8. Result of Normalization

3.4 Feature Extraction

Encoding is done using the Gabor filter [13]: the 2D normalized pattern is broken up into a number of 1D signals, and these signals are convolved with 1D Gabor wavelets. The filter output is then phase-quantized to four levels using Daugman's [2] method, with each filter producing two bits of data for each phasor. The iris code is formed by assigning 2 bits to each pixel of the image; each bit is 1 or 0 depending on the sign (+ or −) of the real and imaginary parts respectively.

Fig. 9. Result of Feature Extraction: (a) Original image, (b) Template of the eye image and (c) Mask of the eye image
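The phase-quantization step can be sketched as follows: each row of the normalized pattern is convolved with a complex 1D wavelet, and the signs of the real and imaginary responses give two bits per sample. The exact filter used by the authors (e.g. a Log-Gabor, as in [9]) and its parameters are not specified here, so the plain complex Gabor carrier, the wavelength and the envelope width below are illustrative assumptions.

```python
import numpy as np

def gabor_1d(length=31, wavelength=8.0, sigma=4.0):
    """A complex 1D Gabor wavelet: Gaussian envelope times a complex carrier."""
    x = np.arange(length) - length // 2
    envelope = np.exp(-x**2 / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * x / wavelength)

def encode(normalized, wavelet=None):
    """Phase-quantize each row of the normalized iris pattern to 2 bits/sample."""
    if wavelet is None:
        wavelet = gabor_1d()
    # Convolve every 1D angular signal with the complex wavelet.
    response = np.array([np.convolve(row, wavelet, mode='same')
                         for row in normalized])
    # Bit is 1 or 0 depending on the sign of the real / imaginary parts.
    real_bits = (response.real > 0).astype(np.uint8)
    imag_bits = (response.imag > 0).astype(np.uint8)
    # Interleave: two bits per pixel of the normalized image.
    code = np.empty((response.shape[0], response.shape[1] * 2), dtype=np.uint8)
    code[:, 0::2] = real_bits
    code[:, 1::2] = imag_bits
    return code

pattern = np.random.rand(20, 240)   # e.g. 20 radial x 240 angular samples
code = encode(pattern)
print(code.shape)  # (20, 480)
```

A matching noise mask of the same shape (1 where the pixel was marked NaN during segmentation) accompanies each template into the matching stage.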

3.5 Matching

3.5.1 Hamming Distance

This paper uses the modified Hamming distance employed by J. Daugman [2] for matching, which incorporates noise masking so that only significant bits are used in calculating the Hamming distance between two iris templates. The modified Hamming distance used for matching is given below.















    HD = ( Σ_{j=1}^{N} X_j ⊕ Y_j ∧ Xn′_j ∧ Yn′_j ) / ( N − Σ_{k=1}^{N} Xn_k ∨ Yn_k )      (2)

where X_j and Y_j are the two bit-wise templates to compare, Xn_j and Yn_j are the corresponding noise masks for X_j and Y_j, and N is the number of bits represented by each template.

3.5.2 Rotation Variation Adaption

In order to account for rotational inconsistencies, when the Hamming distance of two templates is calculated, one template is shifted bit-wise and the Hamming distance is recalculated at each shift. This bit-wise shifting in the horizontal direction corresponds to rotation of the original iris region by an angle given by the angular resolution used.
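Equation (2) with rotation compensation can be sketched as follows. Noise-mask bits are 1 where a template bit is unreliable, and one template is shifted up to 8 positions in each direction (the shift count used in Section 4.3.3); keeping the lowest distance over all shifts follows common practice after Daugman [2] and is an assumption about the authors' exact shift handling.

```python
import numpy as np

def hamming_distance(x, y, xn, yn):
    """Masked Hamming distance of Eq. (2): noise bits (mask = 1) are ignored."""
    valid = (xn == 0) & (yn == 0)          # bits usable in both templates
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        return 1.0                          # no comparable bits: worst case
    disagree = np.count_nonzero((x != y) & valid)
    return disagree / n_valid

def min_shifted_distance(x, y, xn, yn, max_shift=8):
    """Shift one template left/right bit-wise and keep the lowest distance,
    compensating for rotation of the original iris region."""
    return min(
        hamming_distance(np.roll(x, s), y, np.roll(xn, s), yn)
        for s in range(-max_shift, max_shift + 1)
    )

x = np.random.randint(0, 2, 480)
xn = np.zeros(480, dtype=int)
# A rotated copy of the same template should give distance 0 at some shift.
y, yn = np.roll(x, 3), xn.copy()
print(min_shifted_distance(x, y, xn, yn))  # 0.0
```

The denominator of Eq. (2) shrinks as more bits are masked, so heavily occluded templates are compared only on their mutually valid bits rather than being penalized for the occlusion itself.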

4 EXPERIMENTAL RESULTS AND DISCUSSION

In this section the performance of the developed system is evaluated. Different types of tests are conducted to evaluate the accuracy of the system, including decidability, False Accept Rate (FAR), False Reject Rate (FRR), Equal Error Rate (EER) and the number of shifts needed to make the system rotation invariant. This paper uses iris images from the CASIA database (version 1.0) to verify the uniqueness of the iris pattern and to evaluate the performance of the proposed system.

4.1 Experimental Setup

The proposed system is implemented using MATLAB 7.5.0. For statistical analysis the statistical tool 'R' is used. The system runs on an Intel Core 2 Duo 2.13 GHz machine with 2 GB RAM (DDR2, 800 MHz bus). The system is tested using the CASIA dataset, with 108 subjects and 7 samples each.

4.2 Result of Segmentation

Fig. 10. Result of Segmentation: (a) Original Image, (b) After removing eyelids and (c) After removing eyelashes

The proposed segmentation approach proved very successful: it segmented 681 out of 756 eye images of CASIA (version 1.0), a success rate of around 90%. In the cases where segmentation fails, the problem images had small intensity differences between the iris and pupil regions. This situation is shown in Fig. 11.

Fig. 11. Cases where segmentation fails

4.3 Performance Evaluation

The key issue in all pattern recognition problems is to find a clear separation point between intra-class and inter-class variability. An individual can be reliably classified only if the variability among different instances of a given class is less than the variability between different classes. So a threshold value must be chosen to decide whether two templates were created from the same individual or from different individuals. From Fig. 12 it is easily visible that the overlap between the intra-class and inter-class distributions is very small, which indicates a good result.

Fig. 12. Distribution of Hamming Distance (HD Vs Density)

Fig. 13. Distribution of Hamming Distance (HD Vs Frequency)

4.3.1 Decidability

A popular metric for determining the threshold value for pattern recognition or identification is 'decidability'. It is evaluated from the means and standard deviations of the intra-class and inter-class distributions, and is defined as

    d′ = |μ_S − μ_D| / √((σ_S² + σ_D²) / 2)                          (3)

where μ_S and σ_S² are the mean and variance of the intra-class distribution and μ_D and σ_D² are those of the inter-class distribution.


The higher the decidability, the greater the separation between the intra-class and inter-class distributions, which is the key to an iris recognition system. The values of decidability for each test were found to be near 5.0 or greater, which is a good result.
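Decidability as defined in Eq. (3) can be computed directly from the two sets of Hamming distances. The distributions below are synthetic placeholders chosen only to illustrate the calculation, not the paper's measured data.

```python
import numpy as np

def decidability(intra, inter):
    """d' = |mu_S - mu_D| / sqrt((sigma_S^2 + sigma_D^2) / 2), Eq. (3)."""
    mu_s, mu_d = np.mean(intra), np.mean(inter)
    var_s, var_d = np.var(intra), np.var(inter)
    return abs(mu_s - mu_d) / np.sqrt((var_s + var_d) / 2.0)

# Illustrative distributions: intra-class HDs near 0.30, inter-class near 0.45.
rng = np.random.default_rng(0)
intra = rng.normal(0.30, 0.02, 1000)
inter = rng.normal(0.45, 0.02, 5000)
d = decidability(intra, inter)
print(round(d, 1))  # roughly 7.5 for these synthetic distributions
```

With well-separated distributions the metric grows quickly, which is why values near 5.0 or above indicate reliable classification.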

4.3.2 FAR, FRR and EER

    FAR = (Number of false acceptances / Total number of impostor comparisons) × 100%      (4)

    FRR = (Number of false rejections / Total number of genuine comparisons) × 100%        (5)

The False Accept Rate and the False Reject Rate are inversely related. An important way to judge a system is to measure its FAR at 5% FRR.

Fig. 14. Threshold Vs FAR and FRR

Fig. 15. Threshold Vs FAR and FRR (Closer look)

Fig. 16. Density of Intra-Class and Inter-Class Distribution with (a) 0 shifts and (b) 8 shifts

At a threshold of 0.39 the FAR is 1.932% and the FRR is 5.222%; at 0.394, FAR = FRR = 4.8%, which is the Equal Error Rate (EER). Accuracy = 95.2%.
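FAR, FRR and the EER of Eqs. (4)-(5) can be estimated by sweeping a threshold over the impostor and genuine Hamming distance distributions. The distributions below are synthetic stand-ins for the paper's measured data, so the resulting numbers are illustrative only.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: impostor distances accepted (< threshold);
       FRR: genuine distances rejected (>= threshold). Both in percent."""
    far = 100.0 * np.count_nonzero(impostor < threshold) / len(impostor)
    frr = 100.0 * np.count_nonzero(genuine >= threshold) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep thresholds; return (threshold, rate) where FAR is closest to FRR."""
    best = min(
        (abs(far - frr), t, (far + frr) / 2.0)
        for t in np.linspace(0.0, 1.0, 501)
        for far, frr in [far_frr(genuine, impostor, t)]
    )
    _, t, eer = best
    return t, eer

rng = np.random.default_rng(1)
genuine = rng.normal(0.30, 0.05, 2000)    # same-eye comparisons
impostor = rng.normal(0.47, 0.03, 2000)   # different-eye comparisons
t, eer = equal_error_rate(genuine, impostor)
print(round(t, 2), round(eer, 1))
```

Raising the threshold trades FRR for FAR, which is the inverse relationship the figures above show; the EER is simply the crossing point of the two curves.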

4.3.3 Rotation Variation Adaption

A robust representation for iris pattern recognition must be invariant to changes in size, position and rotation. To compensate for rotation variation, the templates of the iris images are shifted 8 bits to both sides, left and right, and the Hamming distances are then taken.

With no shifts, rotational inconsistencies leave a significant number of templates misaligned, the overlap between the distributions is large, and the false rates are correspondingly higher. With 8 shifts the values become much more closely distributed around their means and the overlap decreases. Without considering rotation variation, at a threshold of 0.44 the FRR is 5.397% and the FAR is 33.609%; considering rotation variation, at a threshold of 0.39 the FRR is 5.222% and the FAR is 1.932%.

5 CONCLUSION

This paper proposes a new segmentation approach comprising image preprocessing, edge detection, noise filtering, iris localization, pupil detection and removal of eyelids and eyelashes. For edge detection, Kovesi's [9] algorithm with Wildes' [3] modification is used, so that gradients can be weighted to find edges in the horizontal and vertical directions, which is important for locating the iris region. As eyelids and eyelashes occlude the upper portion of the iris, the gradients are biased vertically so that the circle remains visible between the two eyelids. For pupil detection the gradients are weighted equally so that the pupil boundary is visible in both the horizontal and vertical directions. To make circle detection easier, a median filter is used to remove extraneous data randomly present near the circle. For iris and pupil detection the circular Hough transform is used. To make the algorithm faster the image is scaled down to 40%, and to make it more efficient and accurate, pupil detection is applied only within the iris circle, because the pupil always lies within the iris region. To remove the eyelids, the linear Hough transform is used to fit lines to the upper and lower eyelids, and the eyelashes are removed by a simple thresholding method. Experimental results show a segmentation accuracy of around 90%, better than Kovesi [9], where the segmentation accuracy was 83% for the same dataset (CASIA ver. 1). This paper also implements an iris recognition system using the new segmentation approach; experimental results show a FAR of 1.932% and an FRR of 5.222%, which is satisfactory. There are still some issues to consider. To make the system fully automated, an iris acquisition camera should be included rather than a set of iris images from a database. Most of the computation time is spent performing the Hough transform and calculating Hamming distance values.

REFERENCES

[1] H. M. El-Bakry, "Human Iris Detection Using Fast Cooperative Modular Neural Nets," Proceedings of the International Joint Conference on Neural Networks (IJCNN '01), vol. 1, 2001, pp. 577-582.
[2] J. Daugman, "How iris recognition works," Proceedings of the 2002 International Conference on Image Processing, vol. 1, 2002.
[3] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey and S. McBride, "A Machine-vision System for Iris Recognition," Machine Vision and Applications, vol. 9, 1996.
[4] W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Transactions on Signal Processing, vol. 46, 1998.
[5] S. Lim, K. Lee, O. Byeon and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, Korea, 2001.
[6] S. Noh, K. Pae, C. Lee and J. Kim, "Multiresolution independent component analysis for iris identification," The 2002 International Technical Conference on Circuits/Systems, Computers and Communications, Phuket, Thailand, 2002.
[7] CASIA (The Chinese Academy of Sciences Institute of Automation) Iris Image Database (version 1.0), http://www.cbsr.ia.ac.cn/IrisDatabase.htm.
[8] R. Gonzalez and R. Woods, Digital Image Processing, Addison-Wesley, 1992, p. 191.
[9] P. Kovesi, MATLAB Functions for Computer Vision and Image Processing, "What are Log-Gabor filters?".
[10] J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-714, 1986.
[11] R. Boyle and R. Thomas, Computer Vision: A First Course, Blackwell Scientific Publications, 1988, pp. 32-34.
[12] A. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989, chap. 9.
[13] H. G. Feichtinger and T. Strohmer, Gabor Analysis and Algorithms, Birkhäuser, 1998, ISBN 0817639594.


Md. Selim Al Mamun received his B.Sc. (Hons) and M.S. from the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. His research interests include Pattern Recognition, Image Processing, Artificial Intelligence and Bioinformatics.

S.M. Tareeq is working as an Associate Professor in the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. He has published many international journal and conference papers and participated in many international conferences. His research interests include Artificial Intelligence, Fuzzy Logic, Pattern Recognition, Image Processing and Robotics.

Md. Hasanuzzaman is working as an Associate Professor in the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. He has published many international journal and conference papers and participated in many international conferences. His research interests include Artificial Intelligence, Fuzzy Logic, Pattern Recognition, Image Processing and Robotics.
