Available online at www.sciencedirect.com

ScienceDirect Procedia Computer Science 79 (2016) 61 – 68

7th International Conference on Communication, Computing and Virtualization 2016

An Algorithm for Retinal Feature Extraction Using Hybrid Approach

Parth Panchal, Ronak Bhojani, Tejendra Panchal*

A. D. Patel Institute of Technology, New Vallabh Vidyanagar, 388121, Gujarat, India

Abstract

Today, in the era of cutting-edge technology, demand for reliable security systems is increasing, and with it the demand for biometric authentication. Several biometric systems, such as fingerprint and face recognition, are used in applications ranging from security devices to forensic identification. The human retina is one source of biometric data that offers a highly reliable and efficient method of authentication. In medical science as well, ophthalmologists examine changes in the retinal vessels, since many retinal diseases are characterized by them. Retinal feature extraction is a challenging task. In this paper, a simple and reliable method for retinal feature extraction is proposed. The concept of line tracking is used for binarization of the input retinal image, and a hybrid of morphological operations and scanning window analysis (SWA) is applied to obtain reliable results. The proposed method is validated against a database of 55 images of healthy, glaucomatous, and diabetic retinopathy eyes.

© 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the Organizing Committee of ICCCV 2016.

Keywords: Retinal image; fundus; morphological operation; feature extraction; SWA

1. Introduction

A biometric system is a recognition system that identifies a person on the basis of a feature vector derived from a particular physiological or behavioral characteristic of that individual. This paper addresses a crucial challenge of retinal image analysis: segmentation. The individual traits used in a biometric identification system

* Corresponding author. Tel.: +91-953-752-1255. E-mail address: [email protected]

1877-0509 © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the Organizing Committee of ICCCV 2016 doi:10.1016/j.procs.2016.03.009


can be physiological, for example facial features, fingerprints, iris, and retina. The retina is a thin layer of cells at the back of the eyeball of vertebrates; its function is to convert light into nervous signals. The fundamental elements of a fundus retinal image are the optic disc, fovea, and blood vessels [1]. The macula, or macula lutea, is an oval pigmented area near the center of the retina of the human eye, roughly 5.5 mm in diameter. It contains the 1.5 mm diameter fovea and, within that, the foveola, a 0.35 mm region housing the highest concentration of cone photoreceptors in the retina, which enables maximal central visual acuity. Because the blood vessels form a structure unique to each individual, the retina is widely accepted for biometric identification. Examination of the vessels of the eye also permits detection of diseases such as diabetic retinopathy, hypertensive retinopathy, glaucoma, and arteriosclerosis [2]. A shortcoming of information representation in medical imaging is that the distinction between a clinical diagnosis and the steps of feature extraction is not clear [3]; the two often convey the same information, but this need not always be the case. Xu and Luo [4] proposed a novel strategy to extract large and thin vessels separately. Some authors have used Gabor wavelets to extract the vascular patterns, such as the method proposed by Akram et al. [5]. Hoover et al. [6] produced binary classifiers for vessel segmentation. Mendonça and Campilho [7] proposed a method for segmentation of blood vessels using morphological reconstruction. Chanwimaluang and Fan [8] proposed a matched filter detection scheme.
Numerous alternatives exist for implementing segmentation, but to achieve a reliable result some modifications are essential to tune the system. To remove noise from the processed retinal image, Ocbagabir et al. [9] proposed an algorithm that determines whether a processed pixel is part of a vessel or not. The focus of this paper is on examining whether it is possible to delineate the procedures of feature extraction and clinical diagnosis clearly, and to use an incremental learning system for maintaining the knowledge required in image processing for feature extraction. The concept of line tracking is used in the proposed algorithm to convert the RGB image into a binary image. Finally, after applying morphological operations, scanning window analysis is used to extract features such as ridge endings, bifurcations, and the optical focal point of the retina. The remainder of the paper is organized as follows: Section 2 presents the proposed methodology for feature extraction from retinal images. Experimental results are given in Section 3, and the accuracy of the proposed algorithm is listed in Table 1. The conclusion is given in Section 4, followed by future work in the last section.

2. Methodology

In this section the proposed system, shown in Fig. 1, is outlined. As Fig. 1 shows, the first step of the algorithm is to take the input retinal image and apply image enhancement to it for better edge extraction of the blood vessels. After removing the spurs from the extracted edges, scanning window analysis combined with morphological operations is executed to extract the features. The same methodology is used for the optical point of convergence. Each step of the proposed method is explained in the following subsections.

Fig. 1. Proposed algorithm: Input → Image enhancement → Edge detection → Morphological operations → Thinning to 1-pixel width → Scanning window analysis (SWA) → Features extracted → End.


2.1. Image processing

Image processing operations change the gray values of pixels. There are three fundamental ways in which this is done. In the most basic, a pixel's gray value is changed without reference to surrounding or "neighbourhood" pixel values. Neighbourhood processing combines the values of pixels in a small neighbourhood around each pixel in question. Finally, transforms are more intricate and involve manipulating the whole image so that the pixel values are represented in a different but equivalent form; this may allow more efficient and powerful processing before the image is returned to its original mode of representation. The aims of processing an image usually fall into one of three broad classes: enhancement (e.g., improved contrast), restoration (e.g., deblurring), and segmentation (isolating particular regions of interest within the image) [10].

2.2. Image enhancement

A difficulty with captured images of the ocular fundus is picture quality, which is influenced by many factors, for example media opacities, defocus, or the presence of artefacts [11,12]. Image enhancement involves improving an image so that the result is more suitable for further operations: more satisfactory for analysis, processing, or viewing. It may include procedures such as enhancing contrast or brightening the image. The image histogram gives essential information about the appearance of an image. It is a graph showing the number of times each gray level occurs in the image: the horizontal axis indicates the possible gray levels, e.g., 0–255, and the vertical axis the number of pixels at each gray level.
In an excessively dark or bright image the gray levels are clustered at the extremes of the histogram, whereas in a well-contrasted image the levels are spread over much of the range. Histogram stretching algorithms distribute the gray levels more evenly over the range according to user-defined equations, and thereby produce an image with greater contrast than the original. Histogram equalization works on a similar principle, but is a fully automatic process that aims to make the histogram as uniform as possible.

2.3. Image segmentation

Segmentation involves partitioning an image into subsections that are regions of interest, for example delineating areas of an image that are to be subsequently analysed, or finding circles, lines, or other shapes of interest. Once the object of interest has been found, the segmentation process stops. In general, segmentation is based either on discontinuities in the image, such as edges, or on similarities judged by predefined criteria; both are explored in this section. Thresholding permits the separation of an image into distinct segments by transforming it into a binary image: each pixel becomes white or black depending on whether its value is greater or less than a certain threshold level. Thresholding is especially helpful for removing unnecessary detail or variation and for highlighting detail that is of interest. A global threshold may be chosen automatically or on the basis of clear points in the image histogram that allow efficient separation. More intricate intensity criteria may also be used to decide whether pixel values become white or black.
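As an illustration of the idea, histogram equalization can be sketched in a few lines. This is a minimal NumPy version for 8-bit grayscale images, not the authors' MATLAB implementation:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Map gray levels through the normalized cumulative histogram so the
    output levels are spread over the full range (assumes a non-constant
    uint8 image)."""
    hist = np.bincount(img.ravel(), minlength=levels)  # per-level pixel counts
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                          # first occupied bin
    # Normalized CDF stretched to [0, levels-1] gives the lookup table.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.clip(0, levels - 1).astype(np.uint8)[img]
```

Applied to a low-contrast image whose gray levels are clustered in a narrow band, this spreads them over the full 0–255 range, which is exactly the effect described above.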
For some images, adaptive or local thresholding is helpful, in which different thresholds are applied to different areas of the image, e.g., when the image has varying levels of illumination. Edges contain some of the most valuable information in an image; they can be used, for example, to measure the size of objects or to recognize and separate objects. An edge in a digital image consists of a perceptible difference in pixel values within a certain range, as shown in Fig. 2. Most edge detection algorithms estimate this change by finding the magnitude of the gradient of the pixel values. This can be done with specialized filters of varying complexity and utility, and a threshold can then be applied to the resultant image to create a binary image of the edges. Examples of edge detection masks include the Sobel [13] and Canny [14] edge detectors. The Sobel edge detector uses a pair of 3 × 3 convolution masks, one estimating the gradient in


Fig. 2. (a) Original image; (b) Edge detection in noisy image; (c) Noise removal.

the x-direction (columns) and the other estimating the gradient in the y-direction (rows). However, in a comparison of three computerized edge detection methods for identifying the boundaries and corresponding widths of retinal vessels, Sobel was observed to be the most inconsistent, possibly because the program detected the central light reflex of the vessel as an edge [15]. The Canny edge detector has been used as part of neural systems to automatically localize retinal vessels in fundal RGB images [16]. Neighbourhood processing extends the power of processing algorithms by incorporating the values of adjoining pixels into the calculation. A user-defined matrix, or mask, is defined with enough elements to cover not only a single pixel but also some of its neighbouring pixels. Each pixel covered by an element of the mask is subject to a corresponding function; the combination of mask and function is known as a filter. Thus the result of applying a mask at a particular location is a value that is a function not only of the central pixel but also of its neighbouring pixel values. Morphology in image processing is especially suitable for analysing or finding shapes in images. The two principal operations are erosion and dilation. These operations combine two sets of pixels: typically one set is the image being processed and the other a smaller set of pixels known as a structuring element or kernel. In dilation, the kernel is superimposed on each point in the image together with its surrounding pixels; the resultant effect is to increase the size of the original components. Erosion is the converse technique, in which an image is reduced through subtraction by means of a structuring element.
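To make the two operations concrete, here is a minimal NumPy sketch of binary dilation and erosion with a small structuring element (an illustration only; the paper's own implementation is in MATLAB):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: an output pixel is set where the structuring
    element, centred on it, overlaps any foreground pixel."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)
    out = np.zeros_like(img)
    for r in range(h):
        for c in range(w):
            out[r, c] = np.any(padded[r:r + se.shape[0], c:c + se.shape[1]] & se)
    return out

def erode(img, se):
    """Binary erosion: an output pixel is set only where the structuring
    element fits entirely inside the foreground."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)
    out = np.zeros_like(img)
    for r in range(h):
        for c in range(w):
            region = padded[r:r + se.shape[0], c:c + se.shape[1]]
            out[r, c] = np.all(region[se == 1] == 1)
    return out
```

Composing them gives opening (erosion then dilation), which removes isolated specks and thin spurs, and closing (dilation then erosion), which fills small gaps, as described in the text.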
The kernel is superimposed onto the original image, and only at locations where it fits entirely within the foreground is the resultant central pixel accepted. The basic operations of opening and closing are built upon these procedures. Opening consists of erosion followed by dilation, and tends to smooth an image, breaking narrow joints and removing thin distensions. Closing consists of dilation followed by erosion; it also smooths images, but by joining narrow breaks and inlets and eliminating small gaps. Algorithms combining the above operations are used for edge detection, for removing noise, and for finding specific shapes in the image. Here all the blood vessels are thinned to a one-pixel line width, as shown in Fig. 3(g).

2.4. Scanning window analysis

After thinning the image to a single-pixel width, SWA is used to extract the characteristics. Since all the lines have been reduced to one pixel, a 3 × 3 window can be used to extract the ridge and bifurcation information. Fig. 3 shows the methodology used in SWA. Using a 3 × 3 window, the thinned image is scanned from top to bottom and left to right. If the centre pixel is black, the window decides whether the point is a ridge ending or a bifurcation. In Fig. 3(a) the centre pixel is black and only two pixels lie in the scanning window, so it is a ridge end. Fig. 3(b) contains three pixels, which indicates a continuous line. If the pixel count is four or higher, the point is a bifurcation. By utilizing this scheme all


Fig. 3. (a) Line end; (b) Line; (c) Bifurcation; (d) Crossovers; (e) Ridges end; (f) Bifurcation points; (g) Vessel ending and bifurcations; (h) Extracted features with optical focal point.

over the thinned image, ridge and bifurcation information can be extracted from a fundus image. For the localization of the optical focal point, a scanning window of 70 × 70 is used, a size chosen empirically. It can be seen from the results that the bifurcation points cluster most densely around the focal point, so the proposed SWA method gives good localization even for images with complex disorder patterns. This minutiae information can further be used for recognition purposes.

3. Experimental results

The performance of the proposed algorithm is demonstrated using the Retinal Identification DataBase (RIDB). The proposed algorithm is run as outlined in Section 2; the system is said to be robust if it extracts the retinal features from the input fundus image correctly. The results shown in Figs. 4, 5, and 6 were obtained by implementing the proposed algorithm in MATLAB®. From the results, it is apparent that the minutiae extracted from fundus images are not very sensitive to the disorder characteristics of the eye. Blue marks in the results represent bifurcations, red marks indicate ridge endings, and the red circle shows the localization of the optical focal point. Simulation results are recorded for diverse cases of retinal images, i.e., healthy, glaucoma, and diabetic retinopathy. Fig. 4 shows the simulation results for the healthy retinal images: the first image shows the input fundus, and the binarization result is shown next to it. After binarization, a thinning operation is used to reduce all ridges to a width of 1 pixel, and scanning window analysis is applied to extract the bifurcation and ridge end points. Here the optical point of convergence is found on the premise that the area around it contains the greatest number of bifurcation points, and hence of dark pixels.
It is visible that the results obtained for healthy images are very good. All the feature points are well extracted.
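The 3 × 3 scanning rule described in Section 2.4 can be sketched as follows. This is a hypothetical NumPy version of the pixel-count test (the paper's implementation is in MATLAB), applied to a 1-pixel-wide binary skeleton:

```python
import numpy as np

def classify_points(skel):
    """Label each foreground pixel of a 1-pixel-wide skeleton by counting
    foreground pixels in its 3x3 window (centre included):
    2 -> ridge end, 3 -> ordinary line point, >= 4 -> bifurcation."""
    ends, bifurcations = [], []
    h, w = skel.shape
    for r in range(1, h - 1):           # skip the 1-pixel border
        for c in range(1, w - 1):
            if not skel[r, c]:
                continue
            count = int(skel[r - 1:r + 2, c - 1:c + 2].sum())
            if count == 2:
                ends.append((r, c))
            elif count >= 4:
                bifurcations.append((r, c))
    return ends, bifurcations
```

Note that with a pure count rule, pixels adjacent to a junction can also reach a count of four; this is one plausible source of the false detections the paper reports as tolerable.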


Fig. 4. Healthy retinal images: (a) fundus; (b) extracted blood vessels; (c) thinning; (d) extracted features.

The results shown in Fig. 5 were obtained from the glaucoma images. The bifurcation points and the optical focal point are well extracted even with the increased number of spurs in the binarized image. Thus even with a disease such as glaucoma the proposed methodology gives good outcomes, and all the extracted features are well located within the area. The accuracy rate achieved for this type of image is 93.33%, which is quite reliable.

Fig. 5. Glaucoma retinal images: (a) fundus; (b) extracted blood vessels; (c) thinning; (d) extracted features.

For diabetic retinopathy retinal images, the outcomes are shown in Fig. 6. It is difficult to settle the features within this type of image: in diabetic retinopathy images, the number of spurs obtained after binarization is very high, so the genuine ridge end points are hard to find, as is the optical point of convergence. Despite such binarization results, the bifurcations and the focal point are extracted effectively, with a tolerable number of falsely extracted ridge end points.


Fig. 6. Diabetic retinopathy retinal images: (a) fundus; (b) extracted blood vessels; (c) thinning; (d) extracted features.

The accuracy of the proposed method, tabulated in Table 1, shows that the algorithm works well even on retinal images with disorders.

Table 1. Experimental results.

Retinal image (RIDB database)    Total images    Extracted successfully    Accuracy (%)
Healthy images                   25              25                        100
Glaucoma images                  15              14                        93.33
Diabetic retinopathy images      15              12                        80
Overall accuracy of proposed method (%)                                    92.72
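As a quick arithmetic check, the overall accuracy in Table 1 follows directly from the per-class counts:

```python
# Per-class (total, successfully extracted) counts from Table 1.
results = {"healthy": (25, 25), "glaucoma": (15, 14), "diabetic retinopathy": (15, 12)}
total = sum(t for t, _ in results.values())      # 55 images in all
extracted = sum(s for _, s in results.values())  # 51 extracted successfully
overall = 100 * extracted / total                # 51/55, about 92.7%
```

51 of 55 images gives 92.72…%, matching the reported overall accuracy.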

4. Conclusion

This paper has proposed a hybrid approach that combines morphological operations with scanning window analysis to extract retinal features from fundus images. The concept of thinning the blood vessels to a one-pixel line width is used together with a 3 × 3 window for the segmentation. Furthermore, pixel density is used as the parameter to localize the optical point of convergence. To check the validity of the proposed algorithm, the system was tested on the RIDB retinal image database, which contains a set of 55 retinal images of healthy, glaucoma, and diabetic retinopathy eyes. The overall accuracy of the system is found to be 92.72%. Improvements to the existing methodology are discussed in the next section.

5. Future Work

Further validation and training of the existing algorithm can be implemented through stringent multistage testing, and the algorithm can be improved by integrating additional approaches to attain more robustness. To make the registration process efficient and reliable, especially for security purposes, the bifurcation angle can be merged with the existing extracted features.

Acknowledgements

Any accomplishment requires the effort of many people, and the authors would like to offer their sincere thanks to all of them: to the reviewers of this paper for their valuable comments and suggestions, and to all those researchers whose work published in different journals and


proceedings has been used as a reference in this paper, and to all the people who directly or indirectly helped in this work. The authors would also like to thank A. D. Patel Institute of Technology for assistance, counsel, and for providing the platform on which the simulation work was performed.

References

[1] R.S. Choraś, Retina recognition for biometrics, 7th Int. Conf. Digit. Inf. Manag. (ICDIM 2012) (2012) 177–180. doi:10.1109/ICDIM.2012.6360109.
[2] C. Kirbas, F. Quek, A review of vessel extraction techniques and algorithms, ACM Comput. Surv. 36 (2004) 81–121. doi:10.1145/1031120.1031121.
[3] F.C.F. Chui, I. Bindoff, R. Williams, B.H. Kang, Feature extraction for classification from images: a look at the retina, 2008 Int. Symp. Ubiquitous Multimed. Comput. (2008) 93–98. doi:10.1109/UMC.2008.27.
[4] L. Xu, S. Luo, A novel method for blood vessel detection from retinal images, Biomed. Eng. Online 9 (2010) 14. doi:10.1186/1475-925X-9-14.
[5] M.U. Akram, A. Tariq, S.A. Khan, Retinal image blood vessel segmentation, 2009 Int. Conf. Inf. Commun. Technol. (2009) 181–192. doi:10.1109/ICICT.2009.5267194.
[6] A. Hoover, Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response, IEEE Trans. Med. Imaging 19 (2000) 203–210. doi:10.1109/42.845178.
[7] A.M. Mendonça, A. Campilho, Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction, IEEE Trans. Med. Imaging 25 (2006) 1200–1213. doi:10.1109/TMI.2006.879955.
[8] T. Chanwimaluang, G. Fan, An efficient algorithm for extraction of anatomical structures in retinal images, Proc. 2003 Int. Conf. Image Process. (ICIP 2003) 1 (2003) I-1093–I-1096. doi:10.1109/ICIP.2003.1247157.
[9] H. Ocbagabir, I. Hameed, S. Abdulmalik, B.D. Barkana, A novel vessel segmentation algorithm in color images of the retina, 9th Annu. Conf. Long Isl. Syst. Appl. Technol. (LISAT 2013) (2013) 560–564. doi:10.1109/LISAT.2013.6578224.
[10] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, 1992.
[11] J. Kristinsson, M. Gottfriedsdottir, E. Stefansson, Retinal vessel dilation and elongation precedes diabetic macular oedema, Br. J. Ophthalmol. 81 (1997) 274–278.
[12] B. Liesenfeld, E. Kohner, W. Piehlmeier, S. Kluthe, M. Porta, T. Bek, et al., A telemedical approach to the screening of diabetic retinopathy: digital fundus photography, Diabetes Care 23 (2000) 345–348.
[13] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, 2002, pp. 85–94.
[14] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (1986) 679–698.
[15] N. Chapman, N. Witt, X. Gao, A.A. Bharath, A.V. Stanton, S.A. Thom, et al., Computer algorithms for the automated measurement of retinal arteriolar diameters, Br. J. Ophthalmol. 85 (2001) 74–79.
[16] C. Sinthanayothin, J.F. Boyce, H.L. Cook, T.H. Williamson, Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images, Br. J. Ophthalmol. 83 (1999) 902–910.
