Retina Based Biometric Identification Using SURF and ORB Feature Descriptors

Shalaka Haware, Electronics and Telecommunication, Maharashtra Institute of Technology, Pune, Pune, India, [email protected]

Alka Barhatte, Electronics and Telecommunication, Maharashtra Institute of Technology, Pune, Pune, India, [email protected]

Abstract— Biometric technologies are automated procedures for validating and identifying the identity of an individual. Automatic authentication systems include fingerprint, iris, retina, voice and hand geometry recognition. Retina recognition has recently gained importance because of the unique characteristics of the retina. During acquisition, retinal images may undergo deformations such as changes in background intensity, rotation and scale. These problems can disturb the feature extraction process and, as a result, the identification process. The Optical Disc Ring (ODR) is used as a pre-processing method. Subsequently, the Speeded Up Robust Feature (SURF) method, which is fast as well as robust, is employed for feature extraction. Oriented FAST and Rotated BRIEF (ORB) is also used for feature extraction; ORB is efficient, unaffected by noise and rotation invariant. The true acceptance rate for both SURF and ORB is 98.148 percent.

Keywords—Biometric technology, Speeded Up Robust Feature, Oriented FAST and Rotated BRIEF, Optical Disc Ring

I. INTRODUCTION

Biometrics is the field that uses a computer to recognise people by their physiological and behavioural characteristics and to determine their true identity. Biometric characteristics have qualities such as robustness, distinctiveness, availability and accessibility [1]. Retina recognition is one of the most eminent biometric technologies, giving a high level of accuracy. Retinal identification has become significant in recent years because of its distinctive characteristics: the pattern cannot be replicated, it is universal and time invariant, and it addresses security concerns. The retinal pattern of an individual is unique, even between identical twins, and remains unchanged from birth until death [2]. Retina biometrics is used where a high-security environment is required, for example in government departments, the military, ATMs and banking. The retina covers nearly 65% of the internal surface of the eye; it is a sensitive, thin, layered tissue lying at the rear of the eyeball. Light incident on the eye is converted by its photosensitive cells into signals, which are then sent to the brain [3]. The patterns of the left and right eyes are not identical. The structure of the eye is shown in figure 1.


Fig. 1. Structure of Retina [4]

Retina recognition is a reliable and safe method of identification, but the verification of retinal patterns faces several issues. During acquisition, retina images are frequently affected by background intensity differences and affine transformations, i.e. translation, scale changes and rotation. These may also affect the execution time of the system. To deal with these issues, Speeded Up Robust Feature (SURF) and Oriented FAST and Rotated BRIEF (ORB) are used for the feature extraction process, and the Optical Disc Ring (ODR) is used in the pre-processing step. SURF is invariant to variable illumination, rotation and scale changes. ORB is a proficient substitute for SURF in terms of matching and computational cost; it is unaffected by noise and is rotation invariant. The detailed working is described in the following sections.

II. RELATED WORK

In the past, different retina recognition strategies have been implemented. In 1976, the first identification system employing a commercial retina scanner, known as EyeDentification 7.5, was introduced by EyeDentify [5]. Masoud Sabaghi et al. suggested a feature extraction technique without a pre-processing step, based on angular splitting of the spectrum, because according to the authors the blood vessel extraction step was time-consuming. The recommended system uses angular and radial partitioning to extract the features, and a fuzzy system makes the final decision; a trained neural network can be used for decision making instead of fuzzy logic [6]. R. Ghaderi et al. proposed a methodology for segmentation of the vasculature in retinal images with the 2-D Morlet wavelet and a neural network; the 2-D Morlet wavelet was used because it can be tuned to any specific frequency, so noise is filtered out [7]. Chen Ding et al. used a neural network for the supervised segmentation of vasculature in retinal images [8]. In [9] A. Hoover et al. suggested the fuzzy union of blood vessels and located the optic nerve in the retinal vasculature; the origin of the blood vessel network is decided by fuzzy convergence. In [10] Tripti Rani Borah et al. used an adaptive neuro-fuzzy inference system for the identification of the retina. Mahua Nandy et al. proposed the use of a Gabor filter together with an artificial neural network for retinal vessel segmentation [11].

III. RETINAL VERIFICATION SYSTEM

The feature extraction is done by two methods, i.e. SURF and ORB. The system flow for retinal verification is shown in figure 2 and figure 3 for the SURF and ORB methods respectively.

Fig. 3. System flow for ORB

During the acquisition of the retina, the retinal patterns may undergo some deformation due to various factors, such as a change in the position of the eye. Hence a reliable and efficient model is needed which can deal with such deformations. Changes in the location of the eye include rotation, translation of the axis and scale, so a system which can overcome all these snags is needed, and the execution time required for this process can affect the performance of the retina recognition process [2]. The method includes an image acquisition process, pre-processing, feature extraction and matching.

A. Pre-processing Method

The personal identification process faces some difficulties arising from the acquisition process, such as low contrast between the background and blood vessels, the presence of noise, and changes in direction and shape. These defects may reduce the efficiency of the retinal identification process. Hence a robust method is needed for pre-processing of the retinal image. The Optical Disc Ring (ODR) method enhances the quality of the input image and reduces the execution time for identification.

Fig. 2. Flow diagram for SURF

The SURF descriptor captures the intensity variation in a region, so preservation of the brightest area, i.e. the optical disc, is needed. ODR extracts an interest ring around the optical disc. A three-step algorithm is used to find the location of the optical disc. The first step is contrast limited adaptive histogram equalization, which enhances the input retinal image. The second step is to find the optical disc: Sauvola binarisation is carried out and, by calculating the Euclidean distance, the centre of the optical disc is found. The final step is to extract the interest ring around the disc [12]. Figure 4 shows the result of the pre-processing method.

Fig. 4. Result of the pre-processing step. Left: original registered image; right: extracted ring.
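Below is a minimal Python sketch of this ODR-style pre-processing (the paper itself gives no code). It assumes OpenCV and scikit-image; the CLAHE settings, Sauvola window size and ring radii are illustrative values, not parameters taken from [12].

```python
import cv2
import numpy as np
from skimage.filters import threshold_sauvola

def extract_optic_disc_ring(gray, r_inner=40, r_outer=90):
    """Return the annular region of interest around the optic disc."""
    # Step 1: contrast limited adaptive histogram equalization (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # Step 2: Sauvola binarisation; the optic disc shows up as the largest
    # bright component, whose centroid approximates the disc centre.
    binary = (enhanced > threshold_sauvola(enhanced, window_size=25)).astype(np.uint8)
    _, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip background label 0
    cx, cy = centroids[largest]

    # Step 3: keep only a ring of pixels around the estimated disc centre.
    yy, xx = np.indices(gray.shape)
    dist = np.hypot(xx - cx, yy - cy)  # Euclidean distance to the centre
    ring = np.where((dist >= r_inner) & (dist <= r_outer), enhanced, 0)
    return ring.astype(np.uint8)
```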

B. Speeded Up Robust Feature (SURF)

The Speeded Up Robust Feature (SURF) is a robust and rotation invariant technique for feature extraction. The SURF feature extraction process is divided into several steps: the representation of an integral image, the filtering of the image with the Hessian matrix, the suppression of non-maxima, and finally the localization of interest points [13].

1) Feature Extraction

The feature extraction consists of the following steps.

a) Representation of the integral image: An integral image representation is an alternative way of representing an image. It allows fast computation of box-type convolution filters, which speeds up the feature extraction process. The dimensions of the resulting image are equal to those of the input image.

b) Detection of interest points using the Hessian matrix: Hessian-based detectors are used rather than Gaussian-based ones, since detection of interest points with the Hessian gives higher accuracy and good performance. A blob-like structure is found at the location where the determinant is maximal:

H(\mathbf{x}, \sigma) = \begin{bmatrix} L_{xx}(\mathbf{x}, \sigma) & L_{xy}(\mathbf{x}, \sigma) \\ L_{xy}(\mathbf{x}, \sigma) & L_{yy}(\mathbf{x}, \sigma) \end{bmatrix}    (1)

where L_{xx}(\mathbf{x}, \sigma) is the convolution of the Gaussian second order derivative with the image at point \mathbf{x}, and similarly for L_{xy}(\mathbf{x}, \sigma) and L_{yy}(\mathbf{x}, \sigma). Equation (1) gives the approximated Hessian matrix, which defines the blob response in the image at \mathbf{x}. These responses are stored over different scales and a local maximum is then calculated [13].

c) Suppression of non-maxima: To localize the interest points over the image, non-maximum suppression in a 3×3×3 neighbourhood is applied. SURF finds the maxima, i.e. the interest points, in the image at various scales [2].

d) Descriptor calculation: Calculation of the descriptor vector is the last stage of the feature extraction procedure. The descriptor describes the distribution of intensity within the interest point neighbourhood, like SIFT, which extracts gradient information. Rather than using a gradient, the distribution of first-order Haar wavelet responses in the x and y directions is taken, using only 64 dimensions. To make the descriptor rotation invariant, a reproducible orientation of the interest point is identified, based on a circular region around the interest point. To extract the descriptor, a square region is built around the interest point and aligned with this orientation. The region is split into 4×4 sub-regions, and for each sub-region the Haar wavelet responses dx and dy in the x and y directions are calculated [13]. To capture the polarity of the intensity changes, the sums of the absolute values of the responses are also taken into account [2], giving for each sub-region the vector

V = \left( \sum d_x, \; \sum d_y, \; \sum |d_x|, \; \sum |d_y| \right)    (2)

2) Feature Matching

In the matching step of SURF, the feature vectors of every keypoint are compared to find matching points in the sampled image. The g2NN (generalised 2NN) test is used for matching. It considers the ratio between the nearest and the second nearest neighbour against a given threshold; in the particular case, it is the ratio of the distance to the candidate match over the distance to the second most similar feature point. For a given keypoint, the vector D = {d1, d2, d3, ..., dn-1} contains the sorted Euclidean distances with respect to the other keypoints. The keypoint corresponding to d1 is taken as a match of the given keypoint only if the ratio d1/d2 is lower than a fixed threshold; a very high ratio, i.e. one above the threshold, indicates two unrelated features. The 2NN test is iterated over di/di+1 until this ratio becomes greater than the threshold, the procedure stopping at some index k. In this experiment, the 50 strongest keypoints are evaluated. Figure 5 shows the keypoint matching between two images, a case which is authenticated; the image indexed 44 in the test set is an unauthorised case [13] [2].

Fig. 5: SURF keypoints matching using g2NN test. Authentication case: Authorized.
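The following is a minimal sketch of SURF extraction followed by a g2NN-style ratio test, written in Python with OpenCV rather than the MATLAB used in the paper. It assumes an OpenCV build that includes the non-free xfeatures2d module; the Hessian threshold, ratio threshold and k are illustrative values.

```python
import cv2

def surf_g2nn_match(img_test, img_registered, ratio_thresh=0.6, k=5):
    """Detect SURF keypoints in both images and keep g2NN-accepted matches."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # 64-D descriptors
    kp1, des1 = surf.detectAndCompute(img_test, None)
    kp2, des2 = surf.detectAndCompute(img_registered, None)

    # Generalised 2NN test: for each test keypoint, take its k nearest
    # registered descriptors (sorted by distance) and accept candidate d_i
    # only while the ratio d_i / d_{i+1} stays below the threshold.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    accepted = []
    for knn in bf.knnMatch(des1, des2, k=k):
        for m, n in zip(knn, knn[1:]):
            if n.distance == 0 or m.distance / n.distance >= ratio_thresh:
                break
            accepted.append(m)
    return kp1, kp2, accepted
```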

C. Oriented FAST and Rotated BRIEF (ORB)

Oriented FAST and Rotated BRIEF (ORB) is a fusion of the FAST keypoint detector and the BRIEF descriptor, with some modifications to enhance performance. ORB is an efficient alternative to SIFT and SURF: it offers a comparable matching process while being less affected by noise, with lower computational cost, good matching performance and no patent restrictions. SIFT and SURF are patented, so users have to pay to use them, whereas ORB is free to use [14]. ORB first uses FAST for keypoint detection and BRIEF for the descriptors. The pre-processing method is the same as the one used for the SURF feature extraction method.

1) Feature Point Detection

a) FAST detector: In ORB, feature point detection relies on the Features from Accelerated Segment Test (FAST) feature point detector [15]. A pixel of the image is considered a FAST corner if a sufficient number of pixels on a circular ring around the candidate all differ from it by more than an appropriate threshold. Non-maximum suppression is applied to discard multiple interest points at adjacent locations.

b) Orientation by intensity centroid: To find the orientation, the ORB approach uses a simple and effective measure, the intensity centroid. The intensity centroid assumes that a corner's intensity is offset from its centre, and this offset vector is used to assign an orientation. The moments of a patch, with a feature point at its centre, are given as [14]

m_{pq} = \sum_{x,y} x^p y^q I(x, y)    (3)

The centroid can be established from these moments as

C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right)    (4)

From the corner's centre O, ORB constructs a vector to the centroid, and the orientation is given as

\theta = \mathrm{atan2}(m_{01}, m_{10})    (5)

2) ORB Descriptors with Rotation Invariance

SURF creates vectors for many features, which uses a lot of memory; this is not viable in many applications, especially embedded systems, and the more memory is used, the more time the matching procedure may take. Not all dimensions are needed for actual matching, so binary strings are used instead, and feature matching is done with the Hamming distance between binary strings. BRIEF provides a simple method for obtaining the binary strings. It takes a smoothed image patch p, on which a binary test \tau is given by

\tau(p; x, y) = \begin{cases} 1, & p(x) < p(y) \\ 0, & p(x) \ge p(y) \end{cases}    (6)

where p(x) is the intensity of p at point x. The feature is defined as a vector of n binary tests:

f_n(p) = \sum_{1 \le i \le n} 2^{i-1} \tau(p; x_i, y_i)    (7)

A vector length of n = 256 is chosen by ORB. There is no rotation invariance in (7); to obtain it, a steered BRIEF is used, aligned with the direction of the keypoint. For every feature set of n binary tests at positions (x_i, y_i), a 2×n matrix is formed:

S = \begin{pmatrix} x_1 & \cdots & x_n \\ y_1 & \cdots & y_n \end{pmatrix}    (8)

ORB creates the 'steered' version S_\theta of S using the rotation matrix R_\theta corresponding to the patch orientation \theta:

S_\theta = R_\theta S    (9)

The steered BRIEF operator is then given as

g_n(p, \theta) = f_n(p) \mid (x_i, y_i) \in S_\theta    (10)

With this construction the correlation among the tests of the descriptor becomes larger. The angle is discretized into increments of 2\pi/30 and a look-up table of pre-computed BRIEF patterns is created; as long as the keypoint orientation \theta is consistent across views, the correct set of points S_\theta is used to compute the descriptor [14]. One of the most important properties of BRIEF is that each bit feature has a large variance: high variance makes the feature respond differentially to inputs and therefore more discriminative, and uncorrelated tests each contribute to the result. To recover these properties after steering, ORB runs a greedy search among all possible binary tests for a set of uncorrelated tests with high variance [16]. The result of this algorithm is rBRIEF, which is a major improvement over steered BRIEF.
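As a small illustration of the two geometric steps above, the sketch below computes the intensity-centroid orientation of equations (3) and (5) and steers a 2×n matrix of BRIEF test coordinates as in equation (9). It is not OpenCV's internal ORB code, and it uses a continuous rotation rather than ORB's discretized 2π/30 increments.

```python
import numpy as np

def patch_orientation(patch):
    """Intensity-centroid orientation of a square patch centred on the keypoint."""
    h, w = patch.shape
    ys, xs = np.indices((h, w))
    xs = xs - (w - 1) / 2.0           # coordinates relative to the patch centre
    ys = ys - (h - 1) / 2.0
    m10 = np.sum(xs * patch)          # m_10 = sum over x * I(x, y), eq. (3)
    m01 = np.sum(ys * patch)          # m_01 = sum over y * I(x, y), eq. (3)
    return np.arctan2(m01, m10)       # theta = atan2(m_01, m_10), eq. (5)

def steer_tests(S, theta):
    """Rotate the 2 x n matrix S of BRIEF test coordinates: S_theta = R_theta S, eq. (9)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ S
```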

3) Matching of Binary Features

The Brute-Force matcher is employed. It takes the descriptor of one feature in the test set, matches it against all descriptors in the registered set, and returns the closest one. The Hamming distance is used as the distance measure in the Brute-Force matcher. The matcher returns only those matches (i, j) for which the i-th descriptor in the test set has the j-th descriptor in the registered set as its best match. Additional matches that correspond to errors are eliminated by setting a distance threshold and keeping only matches whose distance is below it. A ratio test over the k best results is also used [17]: the probability density of correct versus incorrect matches is characterised by the ratio of the distance to the closest neighbour over the distance to the second closest neighbour of each keypoint [14].

Fig. 6: ORB keypoints matching using Brute-Force matcher.

Figure 6 shows the ORB keypoint matching between two images for image index 14, indicating that the access is authorised. Images indexed 16 and 21 from the test set are found to be unauthorised. The result uses 500 features.
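A minimal OpenCV sketch of this ORB plus Brute-Force Hamming matching pipeline is shown below; the 500-feature setting mirrors the experiment, while the Hamming distance cut-off is an illustrative assumption.

```python
import cv2

def orb_bruteforce_match(img_test, img_registered, n_features=500, max_distance=40):
    """Match ORB descriptors with a Brute-Force Hamming matcher and a distance cut-off."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_test, None)
    kp2, des2 = orb.detectAndCompute(img_registered, None)

    # crossCheck=True keeps (i, j) only when i and j are each other's best match.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des1, des2)

    # Discard weak matches whose Hamming distance exceeds the threshold.
    return kp1, kp2, [m for m in matches if m.distance < max_distance]
```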

Fig.7. The Receiver Operating Characteristics (ROC) curve of SURF.

IV. EXPERIMENTAL RESULT

The motive of the experiment is to show the matching performance of the two feature extraction methods, i.e. SURF and ORB. The retinal images are taken from the VARIA database [18] [19], which contains 233 images from 139 different individuals. Of these, 84 images are taken as test images and 82 as registered images. Images indexed 1 to 54 are different images of the same retinas: for each of them, a single other image of the same retina exists across the test and registered sets. No large, freely available dataset exists, which inhibits large-scale and consistent testing of retinal verification systems. The experiments for SURF were performed in MATLAB 2015a.

TABLE I: PERFORMANCE ANALYSIS (percentage)

Verification System        SURF        ORB
False Acceptance Rate      0.000%      0.000%
False Rejection Rate       1.852%      1.852%
True Acceptance Rate       98.148%     98.148%
True Rejection Rate        100.000%    100.000%
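For context, these rates are consistent with a single falsely rejected genuine attempt out of the 54 genuine comparisons; the paper does not state the underlying counts, so this is an inference:

FRR = 1/54 ≈ 1.852%,  TAR = 1 − FRR = 53/54 ≈ 98.148%.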

Fig.8. The FAR and FRR curve of SURF based system.

The Area Under the Curve (AUC) for SURF is 99.6% and for ORB it is 99.8%. Figures 8 and 10 show the False Rejection Rate (FRR) and False Acceptance Rate (FAR) curves for the SURF and ORB based systems respectively.

Table I shows the performance analysis of the SURF and ORB methods. The Receiver Operating Characteristics (ROC) curve shows the accuracy of a biometric system; figures 7 and 9 show the ROC curves for the SURF and ORB methods respectively.
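The following sketch shows one way such FAR/FRR curves and the AUC could be derived from lists of genuine and impostor matching scores (for example, the number of matched keypoints per comparison); this is an assumed setup rather than the paper's evaluation code.

```python
import numpy as np

def far_frr_curves(genuine_scores, impostor_scores):
    """FAR and FRR as functions of the acceptance threshold on a matching score."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # genuine attempts rejected
    return thresholds, far, frr

def roc_auc(far, frr):
    """Area under the TAR-versus-FAR curve, integrated with the trapezoidal rule."""
    tar = 1.0 - frr
    order = np.argsort(far)
    return np.trapz(tar[order], far[order])
```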

Fig. 9: The Receiver Operating Characteristics (ROC) curve of ORB.

Fig. 10: The FAR and FRR curve of ORB based system.

V. CONCLUSION

This paper presents a retina identification system using the SURF and ORB techniques. The features extracted by these methods are rotation invariant, robust and resistant to noise. A ring around the optical disc is extracted in the pre-processing step and is taken as the input to the feature extraction process. In the SURF method, SURF descriptors are responsible for feature extraction; the matched keypoints are detected by the g2NN matching process, which identifies identical retinal images with low complexity, a true acceptance rate of 98.148% and an Area Under the Curve (AUC) of 99.6%. For ORB, FAST is employed for detecting keypoints and the descriptors are obtained by BRIEF, made rotation invariant; the Brute-Force matcher then performs matching using the Hamming distance, giving a true acceptance rate of 98.148% and an AUC of 99.8%. The results show that the accuracy of retina identification using ORB is higher than with SURF feature extraction.

REFERENCES

[1] J. Wayman, A. Jain and D. Maio, "Biometric Systems: An Introduction to Biometric Authentication Systems," Springer, pp. 1-20, 2005.
[2] T. Chihaoui, H. Jlassi, R. Kachouri, K. Hamrouni and M. Akil, "Personal verification system based on retina and SURF descriptors," International Multi-Conference on Systems, Signals and Devices (SSD), Leipzig, Germany, pp. 280-286, 19 May 2016.
[3] http://hyperphysics.phy-astr.gsu.edu/hbase/vision/retina.html
[4] https://www.virginiaeyeconsultants.com/procedures/eyeconditions/retia/
[5] A. Jain, R. Bolle and S. Pankanti, "Retinal Identification," in Biometrics: Personal Identification in Networked Society, Springer, Berlin, Germany, p. 126, 1999.
[6] M. Sabaghi, S. R. Hadianamrei and M. N. Lahiji, "A New Partitioning Methodology in Frequency Analysis of the Retinal Images for Human Identification," J. Signal Inform. Process., pp. 274-278, 2011.
[7] R. Ghaderi, H. Hassanpour and M. Shahiri, "Retinal vessel segmentation using the 2-D Morlet wavelet and neural network," International Conference on Intelligent and Advanced Systems, Kuala Lumpur, Malaysia, pp. 1251-1255, 24 Oct. 2008.
[8] C. Ding, Y. Xia and Y. Li, "Supervised Segmentation of Vasculature in Retinal Images Using Neural Networks," IEEE Conference Publications, Xi'an, pp. 49-52, 20 Nov. 2014.
[9] A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels," IEEE Transactions on Medical Imaging, vol. 22, no. 8, Aug. 2003.
[10] T. R. Borah, K. K. Sarma and P. H. Talukdar, "Retina Identification System by Adaptive Neuro Fuzzy Inference System," 2015 International Conference on Computer, Communication and Control (IC4), Indore, India, pp. 1-6, 11 Jan. 2016.
[11] M. Nandy and M. Banerjee, "Retinal Vessel Segmentation with Artificial Neural Network and Gabor Filter," 2012 Third International Conference on Emerging Applications and Information Technology, Kolkata, pp. 157-160, 11 January 2013.
[12] T. Chihaoui, H. Jlassi, R. Kachouri, K. Hamrouni and M. Akil, "Human identification system based on the detection of Optical Disc Ring in retinal images," 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA), Orleans, France, pp. 263-267, 4 Jan. 2016.
[13] H. Bay, A. Ess, T. Tuytelaars and L. Van Gool, "Speeded-Up Robust Features (SURF)," European Conference on Computer Vision, Springer, pp. 404-417, 2006.
[14] E. Rublee, V. Rabaud, K. Konolige and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," International Conference on Computer Vision, pp. 2564-2571, 2011.
[15] M. Trajkovic and M. Hedley, "Fast corner detection," Image and Vision Computing, pp. 75-87, Feb. 1998.
[16] Y. Qin, H. Xu and H. Chen, "Image Feature Points Matching via Improved ORB," IEEE International Conference on Progress in Informatics and Computing, pp. 204-208, 4 Dec. 2014.
[17] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," 6 June 2003.
[18] M. Ortega, M. G. Penedo, J. Rouco, N. Barreira and M. J. Carreira, "Retinal verification using a feature points based biometric pattern," EURASIP Journal on Advances in Signal Processing, p. 13, 2009.
[19] M. Ortega, M. G. Penedo and N. Barreira, "Personal verification based on extraction and characterization of retinal feature points," Journal of Visual Languages and Computing, vol. 20, pp. 80-90, 2009.