Benha University

Faculty of Engineering

IRIS VERIFICATION SYSTEM

By
Heba Mohamed Abdel Hamid

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering Technology

Supervised by

Prof. Dr. Ahmed Mohamed Hamad
Emeritus Professor of Computer Systems, Faculty of Computer and Information Science, Ain Shams University

Dr. Mostafa Elsayed Ahmed Ibrahim
Electrical Engineering Department, Faculty of Engineering, Benha University

Dr. Wael Abdel Rahman Mohamed
Electrical Engineering Department, Faculty of Engineering, Benha University

2014

Copyright © 2014 by Heba Mohamed Abdel Hamid. All rights reserved. Reproduction in whole or in part in any form requires the prior written permission of Heba Mohamed Abdel Hamid or a designated representative.


The undersigned have examined the thesis entitled "Iris Verification System" presented by Heba Mohamed Abdel Hamid, a candidate for the degree of Master of Science in Electrical Engineering Technology, and hereby certify that it is worthy of acceptance.

Approved by the Examining Committee:
Prof. Dr. Ahmed Mohamed Hamad, Ain Shams University (Thesis Advisor and Committee Chairperson)
Prof. Dr. Mohamed Hashem Abdel Aziz Ahmed (Examiner)
Prof. Dr. Salah Ghazy Ramadan, Benha University (Examiner)

Accepted for the Electrical Engineering Department: Prof. Dr. Mahmoud Elbahy, Department Chairman
Accepted for Post Graduate Affairs: Prof. Hesham Elbatsh, Vice Dean for Post Graduate Studies
Accepted for the Faculty: Prof. Dr. Mohammed Basiouny, Dean of the Faculty of Engineering


ABSTRACT

A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by that individual. Among the features measured are face, fingerprints, hand geometry, handwriting, iris, retina, vein, and voice. The need for biometrics can be found in federal, state and local governments, in the military, and in commercial applications such as network access, passport control, and automatic teller machines. PIN numbers, email passwords, credit card numbers, and protected premises access numbers all have something in common: each is a key to your identity, and each can easily be stolen or guessed, so identity management must instead be accomplished by recognizing the individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. The iris recognition system consists of an automatic segmentation stage, based on the Hough transform, which is able to localize the circular iris and pupil region, the occluding eyelids and eyelashes, and reflections. The extracted iris region is then normalized into a rectangular block with constant dimensions to account for imaging inconsistencies. Finally, the phase data from 1D Log-Gabor filters is extracted and quantized to encode the unique pattern of the iris into a bit-wise biometric template. To determine the recognition performance of the system, the CASIA Iris Interval database (left and right images) of digitized greyscale eye images was used. The database is divided into a training set and a testing set. For training, I use the first image from each left-eye and right-eye folder, where each folder contains at least two images; badly segmented images were removed so that the training set contains only correctly segmented images. For testing, I use the remaining images of the database, again excluding badly segmented images, to improve the recognition rate.

The Hamming distance was employed for classification of iris templates, and two templates were found to match if a test of statistical independence was failed. On a set of 1332 left-eye images the system achieved false accept and false reject rates of 1.218% and 2.062% respectively, and on 1305 right-eye images it achieved false accept and false reject rates of 1.357% and 2.810% respectively. Iris recognition is therefore shown to be a reliable and accurate biometric technology.


ACKNOWLEDGEMENTS

A long way has passed until this moment; a way that was full of many ups and downs. Finally, it is time to thank all those who have contributed to this final outcome. First of all, I would like to thank GOD for giving me the capability to learn all that I have learned in my life and the ability to write this thesis. After GOD, I want to express my gratitude to Prof. Dr. Ahmed Mohamed Hamad, who not only served as my supervisor but also encouraged me and provided me with valuable guidance and indispensable help. His words of advice, his trust, and his patience and understanding helped me to finish this work. He has been a role model to me. Special thanks go to my supervisors Dr. Mostafa Elsayed Ahmed Ibrahim and Dr. Wael Abdel Rahman Mohamed for their support. I also grant all the success realized in my life to my parents, to whom I owe all the good things I have learned and will learn till I die. I would also like to dedicate this acknowledgement to all the people who helped me make it happen, especially my husband, for his continuous support and understanding, without which I would probably have quit a long time ago, and my two sons, who sacrificed many happy moments while I had to be busy away from them. I am also grateful to all my family and my dear friends for their support and continuous help all the way.


CONTENTS

Abstract
Acknowledgements
Contents
List of Figures
List of Tables
Nomenclature

Chapter 1: Introduction
  1.1 Biometrics Overview
  1.2 Human Iris
  1.3 Typical Stages of Iris Recognition
  1.4 Motivations
  1.5 Contributions
  1.6 Thesis Outline

Chapter 2: Related Work
  2.1 Segmentation
  2.2 Normalization
  2.3 Feature Extraction
  2.4 Matching

Chapter 3: Technical Background of Iris Recognition
  3.1 Iris Recognition System
  3.2 Segmentation Methods
    3.2.1 Hough Transform
    3.2.2 Daugman's Integro-differential Operator
    3.2.3 Active Contour Models
    3.2.4 Eyelash and Noise Detection
  3.3 Normalization Methods
    3.3.1 Daugman's Rubber Sheet Model
    3.3.2 Image Registration
    3.3.3 Virtual Circles
  3.4 Feature Extraction Methods and Encoding Stage
    3.4.1 Cumulative-Sum-Based Analysis Method
    3.4.2 Wavelet Encoding
    3.4.3 Gabor Filters
    3.4.4 Log-Gabor Filters
    3.4.5 Zero-Crossings of the 1D Wavelet
    3.4.6 Haar Wavelet
    3.4.7 Laplacian of Gaussian Filters
  3.5 Matching Methods
    3.5.1 Hamming Distance
    3.5.2 Weighted Euclidean Distance
    3.5.3 Normalised Correlation
  3.6 Summary

Chapter 4: Proposed Iris Recognition System and Results
  4.1 Proposed Iris Recognition System
    4.1.1 Proposed Segmentation Method
    4.1.2 Proposed Normalization Method
    4.1.3 Proposed Feature Extraction Method
    4.1.4 Proposed Matching Method
  4.2 System Overview
  4.3 Data Set (CASIA-Iris-Interval)
  4.4 Results
    4.4.1 Results of Segmentation
    4.4.2 Results of Normalization
    4.4.3 Results of the Whole System
  4.5 Summary

Chapter 5: Conclusions
  5.1 Conclusions
  5.2 Future Work

References
Appendix A: The CASIA-IrisV4


LIST OF FIGURES

Figure 1-1: Location of the iris in the human eye
Figure 3-1: Typical stages of the iris recognition system
Figure 3-2: a) an eye image (from the CASIA database); b) corresponding edge map; c) edge map with only horizontal gradients; d) edge map with only vertical gradients
Figure 3-3: Daugman's rubber sheet model
Figure 3-4: Division of normalized iris image into cell regions and grouping of cell regions
Figure 3-5: Example of iris code generation
Figure 3-6: A quadrature pair of 2D Gabor filters: left) real component; right) imaginary component
Figure 3-7: Haar Transform
Figure 4-1: Stages of segmentation with eye image 'S1008R01'
Figure 4-2: Outline of the normalization process with radial resolution of 20 pixels and angular resolution of 240 pixels
Figure 4-3: An illustration of the feature encoding process
Figure 4-4: Five-level decomposition process with Haar wavelet
Figure 4-5: An illustration of the shifting process
Figure 4-6: An overview of the sub-systems and MATLAB functions that make up the iris recognition software system
Figure 4-7: The self-developed iris camera used for collection of CASIA-Iris-Interval
Figure 4-8: Example iris images in CASIA-Iris-Interval
Figure 4-9: An example of segmentation failure on CASIA Iris Interval V4 (image 'S1022L02')
Figure 4-10: The eyelash detection technique; eyelash regions are denoted as black (image 'S1001L01')
Figure 4-11: Automatic segmentation of various images (left and right eye); black regions denote detected eyelid and eyelash regions
Figure 4-12: Illustration of the normalization process for two images of the same iris taken under varying conditions
Figure 4-13: False accept and false reject rates for the left-images data set with different separation points using the above parameters
Figure 4-14: False accept and false reject rates for the right-images data set with different separation points using the above parameters
Figure 4-15: False accept and false reject rates for the left-images data set with different separation points by Haar wavelet


LIST OF TABLES

Table 4-1: False accept and false reject rates for the 'CASIA Iris Interval (left images)' data set with different separation points using the above parameters
Table 4-2: False accept and false reject rates for the 'CASIA Iris Interval (right images)' data set with different separation points using the above parameters
Table 4-3: Comparison among multiple wavelet coefficients
Table 4-4: Recognition rate


NOMENCLATURE

LIST OF SYMBOLS
r         radius
G_σ(r)    Gaussian smoothing function
I(x, y)   the eye image
F_i       the internal force
G_i       the external force
v_i       the position of vertex i
R(φ)      matrix representing rotation by φ
σ         the standard deviation of the Gaussian

LIST OF ABBREVIATIONS
CRR       Correct Recognition Rate
DCAC      Discrete Circular Active Contour
DFT       Discrete Fourier Transform
FAR       False Accept Rate
FRR       False Reject Rate
HD        Hamming Distance
PIN       Personal Identification Number
WED       Weighted Euclidean Distance

Chapter 1
INTRODUCTION

1.1 Biometrics Overview
The word "biometrics" comes from Greek: bio, meaning life, and metry, meaning to measure. Webster's defines biometrics as the statistical measurement and analysis of biological observations and phenomena. In simpler terms, biometrics means using the body as a password. Biometrics are automated methods of recognizing a person based on a physiological or behavioral characteristic. Among the features measured are face, fingerprints, hand geometry, handwriting, iris, retina, vein, and voice. Biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions. As the level of security breaches and transaction fraud increases, the need for highly secure identification and personal verification technologies is becoming apparent. Biometric-based solutions are able to provide for confidential financial transactions and personal data privacy. The need for biometrics can be found in federal, state and local governments, in the military, and in commercial applications. Enterprise-wide network security infrastructures, government IDs, secure electronic banking, investing and other financial transactions, retail sales, law enforcement, and health and social services are already benefiting from these technologies. Biometric-based authentication applications include workstation, network, and domain access, single sign-on, application logon, data protection, remote access to resources, transaction security, and Web security. Trust in these electronic transactions is essential to the healthy growth of the global economy. Utilized alone or integrated with other technologies such as smart cards, encryption keys and digital signatures, biometrics are set to pervade nearly all aspects of the economy and our daily lives. Utilizing biometrics for personal authentication is becoming more convenient and considerably more accurate than current methods such as passwords or PINs (Personal Identification Numbers). This is because biometrics links the event to a particular individual (a password or token may be used by someone other than the authorized user), is convenient (nothing to carry or remember), is accurate (it provides for positive authentication), can provide an audit trail, and is becoming socially acceptable and cost-effective. More information about biometrics, standards activities, government and industry organizations, and research initiatives can be found in the literature. Among the various traits, iris recognition has attracted a lot of attention because it offers advantages such as greater speed, simplicity and accuracy compared to other biometric traits. Iris recognition relies on the unique patterns of the human iris to identify or verify the identity of an individual.

1.2 Human Iris [1]
The iris is a thin, circular structure in the eye, responsible for controlling the diameter and size of the pupil and thus the amount of light reaching the retina. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter [2]. Eye color is the color of the iris, which can be green, blue, or brown; in some cases it can be hazel (a combination of light brown, green and gold), grey, violet, or even pink. In response to the amount of light entering the eye, muscles attached to the iris expand or contract the aperture at the center of the iris, known as the pupil. The larger the pupil, the more light can enter. The iris is a protected organ whose random texture is stable throughout life and hence can be used as an identity document offering a very high degree of identity assurance.

Figure 1-1: Location of the iris in the human eye


1.3 Typical Stages of Iris Recognition
For iris recognition, an input image is taken using an infra-red-sensitive CCD camera and a frame grabber. From this input image the eye is localized using various image processing algorithms. The area of interest, the iris, is then detected within the eye and its features are extracted. These features are encoded into a pattern which is stored in the database for enrollment and matched against the database for authentication. A general iris recognition system consists of four important steps:
1) Iris segmentation, which extracts the iris portion from the localized eye image.
2) Iris normalization, which converts the iris portion into a rectangular strip of fixed dimensions to compensate for the deformation of the pupil due to changes in environmental conditions.
3) Iris feature extraction, which extracts the core iris features from the iris texture patterns and generates a bitwise biometric template.
4) Iris template matching, which compares the stored template with the query template and gives the authentication decision based on some predefined threshold.

1.4 Motivations
PIN numbers, email passwords, credit card numbers, and protected premises access numbers all have something in common: all of them are a key to your identity, and all of them can easily be stolen or guessed after reading the first few pages of "Identity Theft for Dummies". Users are currently encouraged to create a strong password for every different domain, and this leads to some practical problems. People tend to forget multiple, lengthy and varied passwords; therefore they use one strong password for everything, which allows a successful thief to gain access to all the protected information. The other option is to carry a hard copy of each password, which again can only be a reward for the quick pickpocket. Identity management is important in many applications such as networks, passport control, bank automatic teller machines, and government intelligence. Only recently have companies started to use biometric authentication to protect access to highly confidential assets. You may be familiar with some of the physical traits used by biometric authentication programs, such as fingerprints and retinas; other traits that can be measured include the voice, the face, and the iris. For most people these are only high-tech gadgets simulated in Hollywood, but the technology is very real and is currently being used in the private sector. One method of particular interest is the use of iris codes to authenticate users. Iris detection is one of the most accurate and secure means of biometric identification while also being one of the least invasive. Fingerprints can be faked: dead people can come to life by using a severed thumb. Thieves can don a nifty mask to fool a simple face recognition program. The iris, by contrast, has many properties which make it the ideal biometric recognition component. Iris patterns are unique to each subject and remain stable throughout life [3][4]. In particular, the iris is protected by the body's own mechanisms and cannot be modified without risk. Thus, the iris is reputed to be the most accurate and reliable trait for person identification [5] and has received extensive attention over the last decade [3][4], [6]-[11]. Irises differ not only between identical twins, but also between the left and right eye of the same person.


1.5 Contributions
The main contributions of this thesis are the following:
- Surveying the state of the art in iris recognition research.
- Mastering the stages of an iris recognition system.
- Enhancing the performance of a pre-developed iris recognition system.
- Evaluating the performance of the developed system in terms of Correct Recognition Rate (CRR), False Accept Rate (FAR), and False Reject Rate (FRR) on a recent iris database.

1.6 Thesis Outline
The rest of the thesis is organized as follows:
- Chapter 2 reviews related work on iris recognition systems.
- Chapter 3 gives a theoretical background of the different stages involved in any iris recognition system.
- Chapter 4 explains the proposed iris recognition system as well as the experimental results and the performance evaluation.
- Chapter 5 concludes the thesis, commenting on the probable impact of the obtained results; in addition to a summary of the presented contributions, possible future research directions based on this thesis are discussed.


Chapter 2
RELATED WORK

In a general iris recognition system, four different stages can be identified: iris segmentation, normalization, feature extraction, and feature comparison (matching).

2.1 Segmentation
Since 1987, when the first relevant methodology was presented by Flom and Safir [12], many distinct approaches have been proposed. In 1993, Daugman [3] presented one of the most relevant methodologies, which constitutes the basis of many functioning systems. In the segmentation stage, this author introduced an integro-differential operator to find both the inner and outer borders of the iris. This operator is still in use, and was proposed with some minor differences in 2004 by [13]. Wildes [7] proposed iris segmentation through a gradient-based binary edge map construction followed by a circular Hough transform. This methodology is the most widely used, and has been proposed with several minor variants in [14]-[18]. The authors of [19] propose a simple method based on thresholds and function maximization to obtain two ring parameters corresponding to the inner and outer iris borders. The authors of [20] propose an iris detection method based on prior pupil localization. The image is then transformed into polar coordinates, and the outer iris border is identified as the largest horizontal edge resulting from Sobel filtering. This approach may fail for a non-concentric iris and pupil, as well as for very dark iris textures.


Morphological operators were applied by [21] to obtain the iris borders. They detect the inner border by applying thresholding, image opening and closing, and the outer border with a thresholding, closing and opening sequence. Based on the assumption that the captured image intensity values can be well represented by a mixture of three Gaussian distribution components, the authors in [22] propose the use of the Expectation-Maximization algorithm [23] to estimate the respective distribution parameters. They expect the 'Dark', 'Intermediate' and 'Bright' distributions to contain the pixels corresponding to the pupil, iris and reflection areas, respectively. Many of the described approaches share a major disadvantage: the use of thresholds, usually to construct binary edge maps, which can be considered a weak point regarding their robustness to image intensity changes. A black-hole search method is proposed by C. Teo and H. Ewe [24] to compute the center and area of the pupil. Since the pupil is the darkest region in the image, this approach applies threshold segmentation to find the dark areas in the iris image, called "black holes". The center of mass of these black holes is computed from the global image, the area of the pupil is the total number of black holes within the region, and the radius of the pupil follows from the circle area formula. Cui et al. [25] decomposed the iris image using the Haar wavelet before pupil localization. A modified Hough transform was used to obtain the center and radius of the pupil, and the outer iris boundary was localized using an integro-differential operator. Texture segmentation is adopted to detect the upper and lower eyelids. The energy of the high-frequency spectrum in each region is computed to segment the eyelashes; a region with high frequency content is considered an eyelash area. The upper eyelashes are fitted with a parabolic arc, which gives the position of the upper eyelid. For lower eyelid detection, the histogram of the original image is used: the lower eyelid area is segmented to compute the edge points of the lower eyelid, and the lower eyelid is fitted to these edge points. W. Kong et al. [26] proposed Gabor filter and intensity-variance approaches for eyelash detection. Eyelashes are categorized into separable eyelashes and multiple eyelashes. Separable eyelashes are detected using 1D Gabor filters: convolution of a separable eyelash with the Gabor filter yields a low output value. For multiple eyelashes the variance of intensity is very small: if the variance of intensity in a window is smaller than a threshold, the center of the window is considered an eyelash point.

2.2 Normalization
Normalization is the next step of iris recognition, in which the segmented iris is transformed from Cartesian to polar coordinates. Daugman's rubber sheet model [2] is a non-concentric normalization method in which the circular iris is unwrapped into a rectangular form to compensate for the varying size of the captured iris. The Wildes [7] system employs an image registration technique, which geometrically warps a newly acquired image into alignment with a selected database image. Masek [18] proposes a new non-concentric normalization model. Boles [6] proposes a virtual-circles method for normalization.


2.3 Feature Extraction
The performance of an iris recognition system is influenced by many parameters [12]: the focus of the eye images, their orientation, and the environmental factors of iris image acquisition. Many techniques have been suggested in past research to overcome these problems. Beginning in 1987, automatic iris recognition systems have been proposed. Daugman [3] developed an iris recognition system using a 2D Gabor filter for feature extraction and the Hamming distance for verification; it became the basis for most current commercial iris verification products. Besides Daugman's method [27], various approaches for feature extraction have been studied. Wildes' algorithm [28] used the first derivative of Laplacian of Gaussian filters to locate the iris in eye images. Boashash and Boles [6] proposed an approach based on zero crossings, which can handle noise in the image data and is invariant to image translation and rotation. Li Ma et al. [29] used circular symmetry filters to capture the local texture information of the iris and construct a fixed-length feature vector. L. Ma et al. [30] use Gaussian-Hermite moments to characterize local variations of the intensity signals; Gaussian-Hermite moments offer mathematical orthogonality and effectiveness for characterizing local details of the signal [31]. Lim et al. [32] and Ali et al. [33] decomposed an iris image into four levels using the 2D Haar wavelet transform and quantized the fourth-level high-frequency information to form an 87-bit code. L. Ma et al. [8], [34], [35] constructed a bank of spatial filters whose kernels are suitable for iris recognition to represent local texture features of the iris, and thus achieved much better results. The one-dimensional continuous wavelet transform is used to decompose the iris image in [36]; each decomposed one-dimensional waveform is approximated by an optimal piecewise linear curve connecting a small set of node points, which is used as a feature vector. Miyazawa, Kobayashi et al. proposed an efficient iris recognition algorithm [37]-[39] using phase-based image matching, an image matching technique using the phase components of the 2D Discrete Fourier Transforms (DFTs) of the given images. Vasta et al. [40] use a Log-Gabor filter for iris feature extraction.

2.4 Matching
Several matching techniques [2], [3], [33], [34], [40], [41] exist to match a captured iris template against an enrolled template; among them, the Hamming distance, the Weighted Euclidean distance [41], and normalized correlation [7] are the most commonly used. The Hamming distance gives a measure of how many bits differ between two bit patterns; the Hamming distance (HD) via the XOR operator is used as the similarity measure between two iris templates in [2], [3], [29], [33], [34], [40]. The Weighted Euclidean distance (WED), used to compare two iris templates in [41], gives a measure of how similar a collection of values is between two templates and is especially suitable when the template is composed of integer values. Normalized correlation was used for matching iris templates by Wildes et al. [7] and Kim et al. [36].


Chapter 3
TECHNICAL BACKGROUND OF IRIS RECOGNITION

3.1 Iris Recognition System

Figure 3-1: Typical stages of the iris recognition system

Figure 3-1 illustrates the steps of recognizing an iris from an eye image. The image of the eye is taken from a database. In segmentation, the iris region is located. A dimensionally consistent representation of the iris region is created in the normalization step. A template containing only the most discriminating features of the iris is generated in the feature extraction stage. In enrollment, this template is stored to build the database. In authentication, the template is compared with others in the database at the matching stage to identify the person. A recognition system operates in two different modes [42]. In identification mode, the system reads a sample and compares it against every template in the database; it conducts a one-to-many comparison to establish the identity of an individual. The concept of identification is based on "Who am I?". In verification mode, on the other hand, the system obtains input (a password, etc.) from the user which points to a template in the database, then obtains a sample from the user and compares it against that user-defined template. The system conducts a one-to-one comparison to determine whether the identity claimed by the individual is genuine. The concept of authentication is based on "Am I whom I claim I am?".

The development tool used will be MATLAB®, and the emphasis will be only on the software for performing recognition, not on hardware for capturing an eye image. MATLAB® provides an image processing toolbox and a high-level programming methodology. The system is composed of a number of sub-systems, which correspond to each stage of iris recognition: segmentation (locating the iris region in an eye image), normalization (creating a dimensionally consistent representation of the iris region), and feature extraction (creating a template containing only the most discriminating features of the iris). The input to the system is an eye image, and the output is an iris template, which provides a mathematical representation of the iris region.

3.2 Segmentation Methods
The first stage of an iris recognition system is to isolate the actual iris region in a digital eye image. The purpose of iris localization is to find the region of an acquired image that corresponds to the iris. The iris region can be approximated by two circles: one for the iris/sclera boundary (the limbic boundary) and another, interior to the first, for the iris/pupil boundary (the pupillary boundary). The eyelids and eyelashes normally occlude the upper and lower parts of the iris region, and specular reflections can occur within the iris region, corrupting the iris pattern. A technique is required to isolate and exclude these artifacts as well as to locate the circular iris region.

The success of segmentation depends on the imaging quality of the eye images. Images in the CASIA iris database [22] do not contain specular reflections, owing to the use of near infra-red light for illumination. However, persons with darkly pigmented irises will present very low contrast between the pupil and iris region if imaged under natural light, making segmentation more difficult. The segmentation stage is critical to the success of an iris recognition system, since data that is falsely represented as iris pattern data will corrupt the generated biometric templates, resulting in poor recognition rates.

3.2.1 Hough Transform
The Hough transform is a standard computer vision algorithm that can be used to determine the parameters of simple geometric objects, such as lines and circles, present in an image. The circular Hough transform can be employed to deduce the radius and centre coordinates of the pupil and iris regions. An automatic segmentation algorithm based on the circular Hough transform is employed by Wildes et al. [28], Kong and Zhang [26], Tisse et al. [11], and Ma et al. [29]. First, an edge map is generated by calculating the first derivatives of the intensity values in an eye image and then thresholding the result. From the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. These parameters are the centre coordinates x_c and y_c and the radius r, which define the circle through an edge point (x, y) according to the equation

    (x - x_c)^2 + (y - y_c)^2 - r^2 = 0        (3.1)

A maximum point in the Hough space will correspond to the radius and centre coordinates of the circle best defined by the edge points. Wildes et al. and Kong and Zhang also make use of the parabolic Hough transform to detect the eyelids, approximating the upper and lower eyelids with parabolic arcs, which are represented as

    (-(x - h_j) sin θ_j + (y - k_j) cos θ_j)^2 = a_j ((x - h_j) cos θ_j + (y - k_j) sin θ_j)        (3.2)

where a_j controls the curvature, (h_j, k_j) is the peak of the parabola, and θ_j is the angle of rotation relative to the x-axis. In performing the preceding edge detection step, Wildes et al. bias the derivatives in the horizontal direction for detecting the eyelids, and in the vertical direction for detecting the outer circular boundary of the iris, as illustrated in Figure 3-2. The motivation is that the eyelids are usually horizontally aligned, and the eyelid edge map would corrupt the circular iris boundary edge map if all gradient data were used. Taking only the vertical gradients for locating the iris boundary reduces the influence of the eyelids when performing the circular Hough transform, and not all of the edge pixels defining the circle are required for successful localization. Not only does this make circle localization more accurate, it also makes it more efficient, since there are fewer edge points to cast votes in the Hough space.

Figure 3-2: a) an eye image (from the CASIA database); b) corresponding edge map; c) edge map with only horizontal gradients; d) edge map with only vertical gradients.

There are a number of problems with the Hough transform method. First, it requires threshold values to be chosen for edge detection, which may result in critical edge points being removed and hence in failure to detect circles/arcs. Second, the Hough transform is computationally intensive due to its 'brute-force' approach, and thus may not be suitable for real-time applications.
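To make the voting step concrete, the following MATLAB sketch casts circle votes from a synthetic binary edge map and reads off the accumulator peak. It is a minimal illustration of the transform itself, not the thesis implementation; the image, radius range, and angular sampling are illustrative assumptions.

    % Minimal circular Hough transform over a binary edge map. Each edge
    % point votes for every centre that could place it on a circle of
    % radius r; the accumulator peak gives the best-fitting circle.
    [xx, yy] = meshgrid(1:320, 1:240);
    E = abs(hypot(xx - 160, yy - 120) - 50) < 0.7;   % synthetic edges, r = 50
    radii = 40:60;
    [rows, cols] = size(E);
    H = zeros(rows, cols, numel(radii));             % accumulator (yc, xc, r)
    [ey, ex] = find(E);                              % edge point coordinates
    theta = linspace(0, 2*pi, 60);
    for k = 1:numel(radii)
        xc = round(ex - radii(k) * cos(theta));      % candidate centres,
        yc = round(ey - radii(k) * sin(theta));      % one row per edge point
        ok = xc >= 1 & xc <= cols & yc >= 1 & yc <= rows;
        H(:, :, k) = H(:, :, k) + accumarray([yc(ok), xc(ok)], 1, [rows, cols]);
    end
    [~, p] = max(H(:));                              % maximum point in Hough space
    [ycBest, xcBest, kBest] = ind2sub(size(H), p);
    fprintf('circle: centre (%d, %d), radius %d\n', xcBest, ycBest, radii(kBest));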

3.2.2 Daugman's Integro-differential Operator
Daugman makes use of an integro-differential operator for locating the circular iris and pupil regions, and also the arcs of the upper and lower eyelids. The integro-differential operator is defined as

    max_(r, x_0, y_0) | G_σ(r) * ∂/∂r ∮_(r, x_0, y_0) I(x, y) / (2πr) ds |        (3.3)

where I(x, y) is the eye image, r is the radius to search for, G_σ(r) is a Gaussian smoothing function, and s is the contour of the circle given by (r, x_0, y_0). The operator searches for the circular path where there is maximum change in pixel values, by varying the radius and the centre x and y position of the circular contour. The operator is applied iteratively with the amount of smoothing progressively reduced in order to attain precise localization. Eyelids are localized in a similar manner, with the path of contour integration changed from circular to an arc. The integro-differential operator can be seen as a variation of the Hough transform, since it too makes use of first derivatives of the image and performs a search to find geometric parameters. Since it works with raw derivative information, it does not suffer from the thresholding problems of the Hough transform. However, the algorithm can fail where there is noise in the eye image, such as from reflections, since it works only on a local scale.
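The following MATLAB sketch shows the operator's search loop on a synthetic dark-pupil image: for each candidate centre the circular means m(r) are computed, their radial derivative is smoothed with a small Gaussian, and the largest smoothed jump wins. The search grid, radii, and smoothing width are illustrative assumptions, not the thesis parameters.

    % Sketch of the integro-differential search on a synthetic image.
    [xx, yy] = meshgrid(1:200, 1:200);
    I = double(hypot(xx - 100, yy - 100) > 40);    % dark pupil disc, radius 40
    radii = 20:60;
    theta = linspace(0, 2*pi, 72);
    g = exp(-(-3:3).^2 / 2);  g = g / sum(g);      % Gaussian smoothing G_sigma
    best = struct('val', -inf);
    for cx = 90:110                                % coarse centre search grid
        for cy = 90:110
            m = zeros(size(radii));
            for k = 1:numel(radii)                 % mean of I around each circle
                xs = round(cx + radii(k) * cos(theta));
                ys = round(cy + radii(k) * sin(theta));
                ok = xs >= 1 & xs <= 200 & ys >= 1 & ys <= 200;
                m(k) = mean(I(sub2ind(size(I), ys(ok), xs(ok))));
            end
            d = conv(diff(m), g, 'same');          % smoothed derivative in r
            [v, kk] = max(abs(d));
            if v > best.val
                best = struct('val', v, 'cx', cx, 'cy', cy, 'r', radii(kk));
            end
        end
    end
    fprintf('pupil: centre (%d, %d), radius ~%d\n', best.cx, best.cy, best.r);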

3.2.3 Active Contour Models
Ritter et al. [43] make use of active contour models for localizing the pupil in eye images. Active contours respond to pre-set internal and external forces by deforming internally or moving across an image until equilibrium is reached. The contour contains a number of vertices, whose positions are changed by two opposing forces: an internal force, which depends on the desired characteristics, and an external force, which depends on the image. Each vertex is moved between time t and t + 1 by

    v_i(t + 1) = v_i(t) + F_i(t) + G_i(t)        (3.4)

where F_i is the internal force, G_i is the external force, and v_i is the position of vertex i. For localization of the pupil region, the internal forces are calibrated so that the contour forms a globally expanding discrete circle. The external forces are usually found using the edge information. In order to improve accuracy, Ritter et al. use the variance image rather than the edge image. A point interior to the pupil is located from a variance image, and then a discrete circular active contour (DCAC) is created with this point as its centre. The DCAC is then moved under the influence of internal and external forces until it reaches equilibrium, and the pupil is localized.

3.2.4 Eyelash and Noise Detection
Kong and Zhang [26] present a method for eyelash detection, where eyelashes are treated as belonging to two types: separable eyelashes, which are isolated in the image, and multiple eyelashes, which are bunched together and overlap in the eye image. Separable eyelashes are detected using 1D Gabor filters, since the convolution of a separable eyelash with the Gaussian smoothing function results in a low output value. Thus, if a resultant point is smaller than a threshold, it is noted that this point belongs to an eyelash. Multiple eyelashes are detected using the variance of intensity: if the variance of intensity values in a small window is lower than a threshold, the centre of the window is considered a point in an eyelash. The Kong and Zhang model also makes use of a connectivity criterion, so that each point in an eyelash should connect to another point in an eyelash or to an eyelid. Specular reflections in the eye image are detected using thresholding, since the intensity values at these regions will be higher than at any other region in the image.
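A minimal MATLAB sketch of the variance-of-intensity test follows, assuming a 5x5 window and an illustrative threshold rather than the values used by Kong and Zhang:

    % Flag pixels whose local intensity variance falls below a threshold
    % as multiple-eyelash points. Window size and threshold illustrative.
    I = rand(100);                          % stand-in grey eye image in [0, 1]
    I(40:60, 20:80) = 0.1;                  % flat dark band mimics bunched lashes
    w = ones(5) / 25;                       % 5x5 averaging window
    mu = conv2(I, w, 'same');               % local mean
    localVar = conv2(I.^2, w, 'same') - mu.^2;   % E[x^2] - E[x]^2
    lashMask = localVar < 1e-3;             % low variance => eyelash candidate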

3.3 Normalization Methods
Once the iris region is successfully segmented from an eye image, the next stage is to transform the iris region so that it has fixed dimensions in order to allow comparisons. The dimensional inconsistencies between eye images are mainly due to the stretching of the iris caused by pupil dilation under varying levels of illumination. Other sources of inconsistency include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket. The normalization process produces iris regions which have the same constant dimensions, so that two photographs of the same iris taken under different conditions will have their characteristic features at the same spatial location. Another point of note is that the pupil region is not always concentric within the iris region and is usually slightly nasal [2]. This must be taken into account when trying to normalize the 'doughnut'-shaped iris region to have constant radius.

3.3.1 Daugman's Rubber Sheet Model
The homogeneous rubber sheet model devised by Daugman [44] remaps each point within the iris region to a pair of polar coordinates (r, θ), where r is on the interval [0, 1] and θ is the angle in [0, 2π].

Figure 3-3: Daugman's rubber sheet model

The remapping of the iris region from (x, y) Cartesian coordinates to the normalized non-concentric polar representation is modeled as

    I(x(r, θ), y(r, θ)) → I(r, θ)        (3.5)

with

    x(r, θ) = (1 - r) x_p(θ) + r x_i(θ)
    y(r, θ) = (1 - r) y_p(θ) + r y_i(θ)

where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r, θ) are the corresponding normalized polar coordinates, and (x_p, y_p) and (x_i, y_i) are the coordinates of the pupil and iris boundaries along the θ direction. The rubber sheet model takes into account pupil dilation and size inconsistencies in order to produce a normalized representation with constant dimensions. In this way the iris region is modeled as a flexible rubber sheet anchored at the iris boundary with the pupil centre as the reference point. Even though the homogeneous rubber sheet model accounts for pupil dilation, imaging distance and non-concentric pupil displacement, it does not compensate for rotational inconsistencies. In the Daugman system, rotation is accounted for during matching by shifting the iris templates in the θ direction until the two iris templates are aligned.
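The remapping of Equation (3.5) reduces to a bilinear lookup once the boundary coordinates are known. The MATLAB sketch below assumes, for brevity, concentric circular pupil and iris boundaries; the non-concentric case only changes how x_p, y_p, x_i, y_i are computed. The 20 x 240 resolutions match those used later in the thesis, while the image and radii are illustrative.

    % Rubber-sheet normalization under a concentric-boundary assumption.
    [xx, yy] = meshgrid(1:280, 1:280);
    d = hypot(xx - 140, yy - 140);
    eyeImg = d / max(d(:));                      % synthetic radial test image
    cx = 140; cy = 140; rp = 45; ri = 110;       % centre, pupil/iris radii
    radRes = 20; angRes = 240;
    theta = linspace(0, 2*pi, angRes + 1); theta(end) = [];
    r = linspace(0, 1, radRes)';                 % r on [0, 1] as in Eq. (3.5)
    rho = (1 - r) * rp + r * ri;                 % blend pupil -> iris boundary
    xs = cx + rho * cos(theta);                  % radRes x angRes sample grid
    ys = cy + rho * sin(theta);
    polarIris = interp2(eyeImg, xs, ys);         % normalized rectangular block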

3.3.2 Image Registration
The Wildes et al. system employs an image registration technique which geometrically warps a newly acquired image I_a(x, y) into alignment with a selected database image I_d(x, y) [3]. When choosing a mapping function (u(x, y), v(x, y)) to transform the original coordinates, the image intensity values of the new image are made to be close to those of corresponding points in the reference image. The mapping function must be chosen so as to minimize

    ∫∫ (I_d(x, y) - I_a(x - u, y - v))^2 dx dy        (3.6)

while being constrained to capture a similarity transformation of image coordinates (x, y) to (x', y'), that is

    (x', y')^T = (x, y)^T - s R(φ) (x, y)^T        (3.7)

with s a scaling factor and R(φ) a matrix representing rotation by φ. In implementation, given a pair of iris images I_a and I_d, the warping parameters s and φ are recovered via an iterative minimization procedure [26].

3.3.3 Virtual Circles
In the Boles [6] system, iris images are first scaled to have constant diameter, so that when comparing two images one is considered the reference image. This works differently from the other techniques, since normalization is not performed until two iris regions are matched, rather than being performed once and the result saved for later comparisons. Once the two irises have the same dimensions, features are extracted from the iris region by storing the intensity values along virtual concentric circles with origin at the centre of the pupil. A normalization resolution is selected so that the number of data points extracted from each iris is the same. This is essentially the same as Daugman's rubber sheet model, except that scaling is performed at match time and is relative to the iris region being compared, rather than to some constant dimensions. Boles does not mention how rotational invariance is obtained.

3.4 Feature Extraction Methods and Encoding Stage
In order to provide accurate recognition of individuals, the most discriminating information present in an iris pattern must be extracted. Only the significant features of the iris must be encoded so that comparisons between templates can be made. Most iris recognition systems make use of a band-pass decomposition of the iris image to create a biometric template.


The template that is generated in the feature encoding process will also need a corresponding matching metric, which gives a measure of similarity between two iris templates. This metric should give one range of values when comparing templates generated from the same eye, known as intra-class comparisons, and another range of values when comparing templates created from different irises, known as inter-class comparisons. These two cases should give distinct and separate values, so that a decision can be made with high confidence as to whether two templates are from the same iris, or from two different irises.

3.4.1 Cumulative-Sum-Based Analysis Method
This method [45] extracts features from iris images by the following steps.

Step 1. Divide the normalized iris image into basic cell regions for calculating cumulative sums. One cell region is 3 (rows) x 10 (columns) pixels, and the average grey value of a cell is used as its representative value in the calculation.

Step 2. Group the basic cell regions horizontally and vertically as shown in Figure 3-4 (five basic cell regions are grouped together, because experimental results show that much better performance is achieved when a group consists of five cells).

Step 3. Calculate the cumulative sums over each group as in (3.8).

Step 4. Generate the iris feature codes as shown in Figure 3-5.

The cumulative sums are calculated as follows. Suppose that X1, X2, ..., X5 are the five representative values of the cell regions within the first group, located at the top left corner of Figure 3-4. First calculate the average

    Xbar = (X1 + X2 + ... + X5) / 5

Then set S0 = 0 and calculate the remaining cumulative sums by adding the difference between the current value and the average to the previous sum:

    Si = S(i-1) + (Xi - Xbar)    for i = 1, 2, ..., 5        (3.8)

Cumulative sums are calculated by addition and subtraction only, so the cumulative-sum-based feature extraction method creates a lower processing burden than other methods. After the calculation, iris codes are generated for each cell using the following algorithm:

    iris_code_generation {
        for both directions (horizontal, then vertical) {
            MAX = max(S1, S2, ..., S5);
            MIN = min(S1, S2, ..., S5);
            if Si lies between the MAX and MIN indices
                if (Si is on an upward slope)   set the cell's iris_code to 1
                if (Si is on a downward slope)  set the cell's iris_code to 2
            else
                set the cell's iris_code to 0
        }
    }

Figure 3-4: Division of normalized iris image into cell regions and grouping of cell regions.

This algorithm generates iris codes by analyzing the cumulative sums, which describe the variations in the grey values of the iris patterns. An upward slope of cumulative sums means that the iris pattern may change from darkness to brightness; a downward slope means the opposite. An example of iris code generation is shown in Figure 3-5. In Figure 3-5(a), the cumulative sums between the max and min generate iris code 1, since they lie on the upward slope, while the fifth cumulative sum generates iris code 0 because it is not located between the max and min. In Figure 3-5(b), the second, third, and fourth cumulative sums generate iris code 2, since they form the downward slope. Each cell receives two iris codes: one for the horizontal direction and one for the vertical.

Figure 3-5: Example of iris code generation
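A worked MATLAB instance of Equation (3.8) and the code-assignment rule for a single group of five cell averages follows; the values are illustrative, and the slope test implemented here is one reasonable reading of the algorithm above.

    % Cumulative sums and iris codes for one five-cell group.
    X = [0.32 0.41 0.55 0.47 0.30];          % representative cell grey values
    S = cumsum(X - mean(X));                 % S_i = S_{i-1} + (X_i - Xbar)
    [~, iMax] = max(S); [~, iMin] = min(S);
    code = zeros(size(S));
    between = (1:5) > min(iMax, iMin) & (1:5) <= max(iMax, iMin);
    if iMax > iMin, code(between) = 1; end   % MIN before MAX: upward slope
    if iMax < iMin, code(between) = 2; end   % MAX before MIN: downward slope
    disp(code)                               % prints 0 1 1 1 0 for this X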

3.4.2 Wavelet Encoding Wavelets can be used to decompose the data in the iris region into components that appear at different resolutions. Wavelets have the advantage over traditional Fourier transform in that the frequency data is localized, allowing features which occur at the same position and resolution to be matched up. A number of wavelet filters, also called a bank of wavelets, is applied to the 2D iris region, one for each resolution with each wavelet a scaled version of some basis function. The output of applying the wavelets is then encoded in order to provide a compact and discriminating representation of the iris pattern.


3.4.3 Gabor Filters
Gabor filters are able to provide an optimum conjoint representation of a signal in space and spatial frequency. A Gabor filter is constructed by modulating a sine/cosine wave with a Gaussian. This provides the optimum conjoint localization in both space and frequency, since a sine wave is perfectly localized in frequency but not in space. Modulation of the sine with a Gaussian provides localization in space, though with some loss of localization in frequency. Decomposition of a signal is accomplished using a quadrature pair of Gabor filters, with the real part specified by a cosine modulated by a Gaussian and the imaginary part specified by a sine modulated by a Gaussian. The real and imaginary filters are also known as the even-symmetric and odd-symmetric components, respectively. The centre frequency of the filter is specified by the frequency of the sine/cosine wave, and the bandwidth of the filter is specified by the width of the Gaussian. Daugman makes use of a 2D version of Gabor filters [44] in order to encode iris pattern data. A 2D Gabor filter over an image domain (x, y) is represented as

    G(x, y) = exp(-π[(x - x_0)^2/α^2 + (y - y_0)^2/β^2]) exp(-2πi[u_0(x - x_0) + v_0(y - y_0)])        (3.9)

where (x_0, y_0) specify position in the image, (α, β) specify the effective width and length, and (u_0, v_0) specify the modulation, which has spatial frequency ω_0 = sqrt(u_0^2 + v_0^2). The odd-symmetric and even-symmetric 2D Gabor filters are shown in Figure 3-6.

Figure 3-6: A quadrature pair of 2D Gabor filters: left) real component; right) imaginary component.

Daugman demodulates the output of the Gabor filters in order to compress the data. This is done by quantizing the phase information into four levels, one for each possible quadrant in the complex plane. It has been shown by Oppenheim and Lim [46] that phase information, rather than amplitude information, provides the most significant information within an image. Taking only the phase allows encoding of the discriminating information in the iris while discarding redundant information such as illumination, which is represented by the amplitude component.

These four levels are represented using two bits of data, so each pixel in the normalised iris pattern corresponds to two bits of data in the iris template. A total of 2,048 bits are calculated for the template, and an equal number of masking bits are generated in order to mask out corrupted regions within the iris. This creates a compact 256-byte template, which allows for efficient storage and comparison of irises. The Daugman system makes use of polar coordinates for normalization; therefore, in polar form the filters are given as

    H(r, θ) = exp(-iω(θ - θ_0)) exp(-(r - r_0)^2/α^2) exp(-(θ - θ_0)^2/β^2)        (3.10)

where (α, β) are the same as in Equation 3.9 and (r_0, θ_0) specify the centre frequency of the filter. The demodulation and phase quantization process can be represented as

    h_{Re,Im} = sgn_{Re,Im} ∫∫ I(ρ, φ) exp(-iω(θ_0 - φ)) exp(-(r_0 - ρ)^2/α^2) exp(-(θ_0 - φ)^2/β^2) ρ dρ dφ        (3.11)

where h_{Re,Im} can be regarded as a complex-valued bit whose real and imaginary components depend on the sign of the 2D integral, and I(ρ, φ) is the raw iris image in a dimensionless polar coordinate system. For a detailed study of 2D Gabor wavelets see [47].

3.4.4 Log-Gabor Filters
A disadvantage of the Gabor filter is that the even-symmetric filter will have a DC component whenever the bandwidth is larger than one octave [48]. However, a zero DC component can be obtained for any bandwidth by using a Gabor filter which is Gaussian on a logarithmic scale; this is known as the Log-Gabor filter. The frequency response of a Log-Gabor filter is given as

    G(f) = exp( -(log(f/f_0))^2 / (2 (log(σ/f_0))^2) )        (3.12)

where f_0 represents the centre frequency and σ gives the bandwidth of the filter. Details of the Log-Gabor filter are examined by Field [48].
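Equation (3.12) transcribes directly into code. The MATLAB sketch below builds the frequency response for one row of a normalized iris pattern; the centre wavelength and the σ/f_0 ratio are illustrative choices, not the tuned values reported in Chapter 4.

    % Frequency response of a 1D Log-Gabor filter, Eq. (3.12).
    N = 240;                             % samples in one normalized iris row
    f = (0:N/2) / N;                     % discrete frequency axis
    f0 = 1/18;                           % centre frequency (1/wavelength)
    sigmaOnF0 = 0.5;                     % bandwidth parameter sigma/f0
    G = exp(-(log(f/f0)).^2 / (2 * log(sigmaOnF0)^2));
    G(1) = 0;                            % force zero DC (guards log(0))
    plot(f, G), xlabel('f'), ylabel('G(f)')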

3.4.5 Zero-Crossings of the 1D Wavelet
Boles and Boashash [6] make use of 1D wavelets for encoding iris pattern data. The mother wavelet is defined as the second derivative of a smoothing function θ(x):

    ψ(x) = d^2 θ(x) / dx^2        (3.13)

The zero crossings of dyadic scales of these filters are then used to encode features. The wavelet transform of a signal f(x) at scale s and position x is given by

    W_s f(x) = f * (s^2 d^2 θ_s / dx^2)(x) = s^2 (d^2/dx^2)(f * θ_s)(x),    where θ_s(x) = (1/s) θ(x/s)        (3.14)

W_s f(x) is proportional to the second derivative of f(x) smoothed by θ_s(x), and the zero crossings of the transform correspond to points of inflection in f * θ_s(x). The motivation for this technique is that zero crossings correspond to significant features within the iris region.

3.4.6 Haar Wavelet
Lim et al. [49] also use the wavelet transform to extract features from the iris region. Both the Gabor transform and the Haar wavelet are considered as the mother wavelet. From multi-dimensional filtering, a feature vector with 87 dimensions is computed. Since each dimension has a real value ranging from -1.0 to +1.0, the feature vector is sign-quantized so that any positive value is represented by 1 and any negative value by 0. This results in a compact biometric template consisting of only 87 bits.

Lim et al. compare the use of Gabor transform and Haar wavelet transform, and show that the recognition rate of Haar wavelet transform is slightly better than Gabor transform by 0.9%.

The wavelet transform breaks an image down into four sub-sampled images, as shown in Figure 3-7. The result consists of one image that has been high-pass filtered in both the horizontal and vertical directions (HH, the diagonal coefficients), one that has been low-pass filtered in the vertical and high-pass filtered in the horizontal (LH, the horizontal coefficients), one that has been low-pass filtered in the horizontal and high-pass filtered in the vertical (HL, the vertical coefficients), and one that has been low-pass filtered in both directions (LL, the approximation coefficients) (Kim et al., 2004).


Figure 3-7: Haar Transform
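One analysis level of this decomposition is shown in the MATLAB sketch below, written with plain 2D convolution and dyadic subsampling so that no wavelet toolbox is needed; the sub-band names follow the description above, and the input image is a stand-in.

    % Single-level 2D Haar decomposition.
    I  = rand(64);                            % stand-in greyscale image
    lo = [1 1] / sqrt(2);                     % Haar low-pass analysis filter
    hi = [1 -1] / sqrt(2);                    % Haar high-pass analysis filter
    sub = @(A) A(1:2:end, 1:2:end);           % dyadic subsampling
    % conv2(u, v, I) filters columns (vertical) with u, rows (horizontal) with v.
    LL = sub(conv2(lo(:), lo(:).', I, 'valid'));  % approximation
    LH = sub(conv2(lo(:), hi(:).', I, 'valid'));  % horizontal coefficients
    HL = sub(conv2(hi(:), lo(:).', I, 'valid'));  % vertical coefficients
    HH = sub(conv2(hi(:), hi(:).', I, 'valid'));  % diagonal coefficients

A multi-level decomposition, such as the five-level one used in Chapter 4, simply repeats this step on the LL band.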

3.4.7 Laplacian of Gaussian Filters
In order to encode features, the Wildes et al. system decomposes the iris region by applying Laplacian of Gaussian filters to the iris region image. The filters are given as

    G = -(1/(π σ^4)) (1 - ρ^2/(2σ^2)) exp(-ρ^2/(2σ^2))        (3.15)

where σ is the standard deviation of the Gaussian and ρ is the radial distance of a point from the centre of the filter. The filtered image is represented as a Laplacian pyramid, which is able to compress the data so that only significant data remains. Details of Laplacian pyramids are presented by Burt and Adelson [49]. A Laplacian pyramid is constructed with four different resolution levels in order to generate a compact iris template.
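For reference, Equation (3.15) corresponds, up to normalization, to MATLAB's built-in Laplacian-of-Gaussian kernel from the Image Processing Toolbox; the kernel size and σ below are illustrative.

    % One Laplacian-of-Gaussian filtering step, Eq. (3.15).
    sigma = 2;
    h = fspecial('log', 2*ceil(3*sigma) + 1, sigma);   % LoG kernel
    I = rand(64);                                      % stand-in iris image
    response = imfilter(I, h, 'replicate');            % band-pass response of
                                                       % the kind stacked into
                                                       % pyramid levels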


3.5 Matching Methods

3.5.1 Hamming Distance
The Hamming distance gives a measure of how many bits disagree between two bit patterns. Using the Hamming distance of two bit patterns, a decision can be made as to whether the two patterns were generated from different irises or from the same one.

In comparing the bit patterns X and Y, the Hamming distance, HD, is defined as the sum of disagreeing bits (the sum of the exclusive-OR between X and Y) over N, the total number of bits in the bit pattern:

    HD = (1/N) Σ_{j=1}^{N} X_j ⊕ Y_j        (3.16)

Since an individual iris region contains features with high degrees of freedom, each iris region will produce a bit pattern which is independent of that produced by another iris; on the other hand, two iris codes produced from the same iris will be highly correlated. If two bit patterns are completely independent, such as iris templates generated from different irises, the Hamming distance between the two patterns should equal 0.5. This occurs because independence implies the two bit patterns are totally random, so there is a 0.5 chance of any bit being set to 1, and vice versa; therefore half of the bits will agree and half will disagree between the two patterns. If two patterns are derived from the same iris, the Hamming distance between them will be close to 0.0, since they are highly correlated and the bits should agree between the two iris codes.


The Hamming distance is the matching metric employed by Daugman, and calculation of the Hamming distance is taken only with bits that are generated from the actual iris region.
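The MATLAB sketch below implements Equation (3.16) with the masking just described: only bits flagged valid in both templates enter the sum. The templates and masks are random stand-ins of a plausible template shape.

    % Hamming distance over mutually valid bits only.
    A = rand(20, 480) > 0.5;  maskA = true(20, 480);   % template + valid mask
    B = rand(20, 480) > 0.5;  maskB = true(20, 480);
    valid = maskA & maskB;                       % bits usable in both templates
    hd = sum(xor(A(valid), B(valid))) / nnz(valid);
    fprintf('HD = %.3f (near 0.5 for independent patterns)\n', hd);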

3.5.2 Weighted Euclidean Distance
The weighted Euclidean distance (WED) can be used to compare two templates, especially if the template is composed of integer values. The weighted Euclidean distance gives a measure of how similar a collection of values is between two templates. This metric is employed by Zhu et al. [14] and is specified as

    WED(k) = Σ_{i=1}^{N} (f_i - f_i^(k))^2 / (δ_i^(k))^2        (3.17)

where f_i is the i-th feature of the unknown iris, f_i^(k) is the i-th feature of iris template k, and δ_i^(k) is the standard deviation of the i-th feature in iris template k. The unknown iris template is found to match iris template k when WED is a minimum at k.
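A minimal MATLAB sketch of Equation (3.17), scoring one unknown feature vector against K enrolled templates; all data are random integer-valued stand-ins.

    % Weighted Euclidean distance against K enrolled templates.
    N = 87; K = 10;
    f  = randi(8, 1, N);                     % unknown iris features
    fk = randi(8, K, N);                     % K enrolled feature vectors
    dk = ones(K, N);                         % per-feature standard deviations
    wed = sum((f - fk).^2 ./ dk.^2, 2);      % one WED value per template k
    [~, kBest] = min(wed);                   % match = minimising template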

3.5.3 Normalised Correlation
Wildes et al. make use of normalised correlation between the acquired and database representations as a goodness of match. This is represented as

    (1 / (n m σ_1 σ_2)) Σ_{i=1}^{n} Σ_{j=1}^{m} (p_1[i, j] - μ_1)(p_2[i, j] - μ_2)        (3.18)

where p_1 and p_2 are two images of size n x m, μ_1 and σ_1 are the mean and standard deviation of p_1, and μ_2 and σ_2 are the mean and standard deviation of p_2.

Normalised correlation is advantageous over standard correlation, since it is able to account for local variations in image intensity that corrupt the standard correlation calculation.
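Equation (3.18) in MATLAB, evaluated on stand-in data where p2 is an intensity-shifted copy of p1; the score stays at 1 despite the brightness and contrast change, which is exactly the advantage noted above.

    % Normalised correlation, Eq. (3.18), with population std (std(x, 1)).
    p1 = rand(20, 240);
    p2 = 0.5 * p1 + 0.2;                     % brightness/contrast-shifted copy
    nc = sum((p1(:) - mean(p1(:))) .* (p2(:) - mean(p2(:)))) ...
         / (numel(p1) * std(p1(:), 1) * std(p2(:), 1));
    fprintf('NC = %.3f\n', nc);              % ~1.000 despite intensity change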

3.6 Summary
In this chapter, I reviewed the different methods used at each stage of iris recognition: for segmentation, the Hough transform, Daugman's integro-differential operator, active contour models, and eyelash and noise detection; for normalization, Daugman's rubber sheet model, image registration, and virtual circles; for feature extraction, the cumulative-sum-based analysis method, wavelet encoding, Gabor filters, Log-Gabor filters, zero-crossings of the 1D wavelet, the Haar wavelet, and Laplacian of Gaussian filters; and for matching, the Hamming distance, the weighted Euclidean distance, and normalised correlation.


Chapter 4

PROPOSED IRIS RECOGNITION SYSTEM AND RESULTS

4.1 Proposed Iris Recognition System

In the proposed system, we use the circular Hough transform with Canny edge detection for segmentation, the rubber sheet model in normalization to convert Cartesian coordinates to polar coordinates, a 1D Log-Gabor filter to create the template used in matching, and the Hamming distance to compare images and take the decision. We used Libor Masek's software as a reference, but enhanced the system by detecting bad segmentations automatically, applying the system to a new database, and automating the whole system, computing its FAR, FRR and CRR.

4.1.1 Proposed Segmentation Method

The circular Hough transform was used for detecting the iris and pupil boundaries. This involves first employing Canny edge detection to generate an edge map. Gradients were biased in the vertical direction for the outer iris/sclera boundary, as suggested by Wildes et al. [44]. Vertical and horizontal gradients were weighted equally for the inner iris/pupil boundary. A modified version of Kovesi's Canny edge detection MATLAB® function [50] was implemented, which allowed for weighting of the gradients. The range of radius values to search for was set manually, depending on the database used. For the CASIA database, values of the iris radius range from 90 to 150 pixels, while the pupil radius ranges from 28 to 75 pixels. In order to make the circle detection process more efficient and accurate, the Hough transform for the iris/sclera boundary was


performed first, then the Hough transform for the iris/pupil boundary was performed within the iris region, instead of the whole eye region, since the pupil is always within the iris region. After this process was complete, six parameters are stored: the radius, and the x and y centre coordinates, for both circles. Eyelids were isolated by first fitting a line to the upper and lower eyelid using the linear Hough transform. A second horizontal line is then drawn, which intersects with the first line at the iris edge that is closest to the pupil. This process is illustrated in Figure 4-1 and is done for both the top and bottom eyelids. The second horizontal line allows maximum isolation of eyelid regions. Canny edge detection is used to create an edge map, and only horizontal gradient information is taken. The linear Hough transform is implemented using the MATLAB® Radon transform, which is a form of the Hough transform. If the maximum in Hough space is lower than a set threshold, then no line is fitted, since this corresponds to non-occluding eyelids. Also, the lines are restricted to lie exterior to the pupil region, and interior to the iris region. A linear Hough transform has the advantage over its parabolic version in that there are fewer parameters to deduce, making the process less computationally demanding.
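For illustration, a minimal MATLAB sketch of the circle-voting idea follows. This is not Masek's or Kovesi's code: the edge map is assumed to come from a Canny detector, the radius range follows the CASIA settings above, and implicit expansion requires R2016b or later.

```matlab
function [cx, cy, r] = circle_hough(edges, rMin, rMax)
% Vote in a 2D accumulator for each candidate radius and keep the
% radius whose accumulator peak is strongest.
[rows, cols] = size(edges);
[ey, ex] = find(edges);                 % edge pixel coordinates (columns)
theta = (0:5:355) * pi / 180;           % sampled circle directions
cx = 0; cy = 0; r = 0; best = 0;
for radius = rMin:rMax
    % every edge point votes for all centres at distance `radius`
    xc = round(ex - radius * cos(theta));   % implicit expansion
    yc = round(ey - radius * sin(theta));
    xcv = xc(:);  ycv = yc(:);
    ok = xcv >= 1 & xcv <= cols & ycv >= 1 & ycv <= rows;
    idx = sub2ind([rows cols], ycv(ok), xcv(ok));
    acc = accumarray(idx, 1, [rows * cols, 1]);
    [peak, pos] = max(acc);
    if peak > best
        best = peak;
        [cy, cx] = ind2sub([rows cols], pos);
        r = radius;
    end
end
end
```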


Figure 4-1: Stages of segmentation with eye image 'S1008R01'.


For isolating eyelashes in the CASIA database a simple thresholding technique was used, since analysis reveals that eyelashes are quite dark compared with the rest of the eye image. For the eyelid, eyelash, and reflection detection process, the coordinates of any of these noise areas are marked using the MATLAB® NaN type, so that intensity values at these points are not misrepresented as iris region data.
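A minimal sketch of this thresholding step, assuming a grey-scale eye image; the file name and the threshold value 80 are illustrative, not the exact settings used in the thesis.

```matlab
img = double(imread('eye.bmp'));      % hypothetical file name
eyelashThresh = 80;                   % assumed darkness threshold
noise = img < eyelashThresh;          % eyelashes are much darker than iris
img(noise) = NaN;                     % NaN marks pixels to ignore later
```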

4.1.2 Proposed Normalization Method

For normalization of iris regions a technique based on Daugman's rubber sheet model was employed. The centre of the pupil was considered as the reference point, and radial vectors pass through the iris region, as shown in Figure 4-2. A number of data points are selected along each radial line, and this is defined as the radial resolution. The number of radial lines going around the iris region is defined as the angular resolution. Since the pupil can be non-concentric to the iris, a remapping formula is needed to rescale points depending on the angle around the circle. This is given by

r'(\theta) = \sqrt{\alpha\beta} \pm \sqrt{\alpha\beta^2 - \alpha + r_I^2}   (4.1)

with

\alpha = o_x^2 + o_y^2, \qquad \beta = \cos\left(\pi - \arctan\left(\frac{o_y}{o_x}\right) - \theta\right)

where the displacement of the centre of the pupil relative to the centre of the iris is given by o_x, o_y, r' is the distance between the edge of the pupil and the edge of the iris at an angle \theta around the region, and r_I is the radius of the iris. The remapping formula first gives the radius of the iris region 'doughnut' as a function of the angle \theta. A constant number of points are chosen along each radial line, so that a constant number of radial data points are taken, irrespective of how narrow or wide the radius is at a particular angle. The normalized pattern was created by backtracking to find the Cartesian coordinates of data points from the radial and angular position in the normalized pattern. From the 'doughnut' iris region, normalization produces a 2D array with horizontal dimensions of angular resolution and vertical dimensions of radial resolution. Another 2D array was created for marking reflections, eyelashes, and eyelids detected in the segmentation stage. In order to prevent non-iris region data from corrupting the normalized representation, data points which occur along the pupil border or the iris border are discarded. As in Daugman's rubber sheet model, removing rotational inconsistencies is performed at the matching stage.
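A minimal MATLAB sketch of the unwrapping follows. Instead of evaluating Equation (4.1) directly, it linearly interpolates between the pupil and iris boundary points along each radial line, which yields the same non-concentric remapping; the circle parameters (px, py, rp) and (ix, iy, ri) are assumed outputs of segmentation, img is the eye image, and border handling and noise marking are omitted.

```matlab
radialRes = 20;  angularRes = 240;            % resolutions from Figure 4-2
theta = linspace(0, 2*pi, angularRes + 1);
theta(end) = [];                              % 240 distinct angles
polarPattern = zeros(radialRes, angularRes);
for a = 1:angularRes
    % boundary points on the pupil and iris circles at this angle
    xp = px + rp * cos(theta(a));   yp = py + rp * sin(theta(a));
    xi = ix + ri * cos(theta(a));   yi = iy + ri * sin(theta(a));
    for rr = 1:radialRes
        t = (rr - 1) / (radialRes - 1);       % 0 at pupil, 1 at iris border
        x = round((1 - t) * xp + t * xi);     % walk along the radial line
        y = round((1 - t) * yp + t * yi);
        polarPattern(rr, a) = img(y, x);      % sample the eye image
    end
end
```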


Figure 4-2: Outline of the normalization process, with a radial resolution of 20 pixels and an angular resolution of 240 pixels.


4.1.3 Proposed Feature Extraction Method

Feature encoding was implemented by convolving the normalised iris pattern with 1D Log-Gabor wavelets. The 2D normalised pattern is broken up into a number of 1D signals, and these 1D signals are convolved with 1D Log-Gabor wavelets. The rows of the 2D normalised pattern are taken as the 1D signals; each row corresponds to a circular ring on the iris region. The angular direction is taken rather than the radial one, which corresponds to columns of the normalised pattern, since maximum independence occurs in the angular direction. The intensity values at known noise areas in the normalised pattern are set to the average intensity of surrounding pixels, to prevent the noise from influencing the output of the filtering. The output of filtering is then phase quantized to four levels using the Daugman method [44], with each filter producing two bits of data for each phasor. The output of phase quantization is chosen to be a grey code, so that when going from one quadrant to another, only 1 bit changes. This will minimize the number of disagreeing bits if two intra-class patterns are slightly misaligned, and thus will provide more accurate recognition. The feature encoding process is illustrated in Figure 4-3. The encoding process produces a bitwise template containing a number of bits of information, and a corresponding noise mask which marks corrupt areas within the iris pattern, flagging those bits in the template as corrupt. Since the phase information will be meaningless at regions where the amplitude is zero, these regions are also marked in the noise mask. The total number of bits in the template will be the angular resolution times the radial resolution, times 2, times the number of filters used. The number of filters, their centre frequencies, and the parameters of the modulating Gaussian function needed to achieve the best recognition rate will be discussed later in this chapter.
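A minimal MATLAB sketch of encoding one row of the normalised pattern with a single 1D Log-Gabor filter follows; the centre frequency f0 and bandwidth ratio sigmaOnf are illustrative values, not the tuned parameters of the thesis, and noise handling is omitted.

```matlab
row = polarPattern(1, :);                  % one circular ring (1 x 240)
N = numel(row);
f = (0:N-1) / N;                           % normalised frequency axis
f(1) = eps;                                % avoid log(0) at DC
f0 = 1/18;  sigmaOnf = 0.5;                % assumed filter parameters
G = exp(-(log(f / f0)).^2 / (2 * log(sigmaOnf)^2));
G(1) = 0;                                  % zero the DC gain
G(floor(N/2)+2:end) = 0;                   % keep positive frequencies only
filtered = ifft(fft(row) .* G);            % complex filter response
bitReal = real(filtered) > 0;              % quadrant quantisation: the signs
bitImag = imag(filtered) > 0;              % of Re and Im give two bits per
template = reshape([bitReal; bitImag], 1, []);  % phasor, in grey-code order
```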

Figure 4-3: An illustration of the feature encoding process.


I use the Haar wavelet transform as another method to extract characteristic values of the iris images. The normalized image can be decomposed using the Haar wavelet into a maximum of five levels. Figure 4-4 shows a five level decomposition with Haar wavelet.

Figure 4-4: Five-level decomposition process with the Haar wavelet, producing the approximation subband LL5 and the detail subbands HL5, LH5, HH5 down through HL1, LH1, HH1.

It is very important to represent the obtained vector as a binary code, because it is easier to find the difference between two binary codewords than between two vectors of real numbers, so some features are encoded using two-level quantization while other features are encoded using four-level quantization. I found that all the obtained vectors have a maximum value equal to 1 and a minimum value equal to -1. If Coef is the feature vector of an image, then the following quantization scheme converts it to its equivalent codeword:


Two-level quantization:

- If Coef(i) >= 0, then Coef(i) is encoded as 1
- If Coef(i) < 0, then Coef(i) is encoded as 0

Four-level quantization:

- If Coef(i) > 0.5, then Coef(i) is encoded as 11
- If 0 < Coef(i) <= 0.5, then Coef(i) is encoded as 10
- If -0.5 < Coef(i) <= 0, then Coef(i) is encoded as 01
- If Coef(i) <= -0.5, then Coef(i) is encoded as 00
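Putting the Haar path together, here is a minimal MATLAB sketch of the five-level decomposition followed by the quantization scheme above. It requires the Wavelet Toolbox; the choice of the level-4 diagonal subband as the feature vector and the scaling into [-1, 1] are assumptions for illustration, not the thesis's exact feature selection.

```matlab
[C, S] = wavedec2(polarPattern, 5, 'haar');  % five-level Haar decomposition
coef = detcoef2('d', C, S, 4);               % e.g. diagonal detail, level 4
coef = coef(:)' / max(abs(coef(:)));         % scale into [-1, 1] (assumed)

code2 = coef >= 0;                           % two-level: one bit per value

code4 = zeros(2, numel(coef));               % four-level: two bits per value
code4(:, coef >  0.5)              = 1;      % 11
code4(1, coef >  0 & coef <= 0.5)  = 1;      % 10
code4(2, coef > -0.5 & coef <= 0)  = 1;      % 01
% values <= -0.5 remain 00
code4 = code4(:)';                           % interleave into one codeword
```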
