IET Image Processing Research Article
Robust retinal blood vessel segmentation using hybrid active contour model
ISSN 1751-9659 Received on 30th April 2018 Revised 28th September 2018 Accepted on 5th November 2018 doi: 10.1049/iet-ipr.2018.5413 www.ietdl.org
Prakash Kumar Karn1, Birendra Biswal1, Subhransu Ranjan Samantaray2
1Department of Electronics and Communication Engineering, GVP College of Engineering (A), Visakhapatnam, India
2School of Electrical Sciences, IIT Bhubaneswar, India
E-mail: [email protected]
Abstract: Retinal image processing still lacks an efficient algorithm for de-noising and segmenting blood vessels confined inside closed curvature boundaries. On this ground, this study presents a hybrid active contour model with a novel preprocessing technique to segment the retinal blood vessels in different fundus images. Contour-driven black top-hat transformation and a phase-based binarisation method are implemented to preserve the edge and corner details of the vessels. In the proposed work, a gradient vector flow (GVF)-based snake and the balloon method are combined to achieve better accuracy than existing active contour models, in which the snake cannot enter closed curvatures, resulting in the loss of tiny blood vessels. To circumvent this problem, an inflation term F_inf^balloon is incorporated with the GVF-based snake to form a new internal energy of the snake for effective vessel segmentation. The evaluation parameters are calculated over four publicly available databases: STARE, DRIVE, CHASE and VAMPIRE. A wide range of established metrics shows that the proposed model outperforms its competitors, achieving an accuracy of 0.97 for the DRIVE and CHASE datasets and 0.96 for the STARE and VAMPIRE datasets.
1 Introduction
Improved access to clinical facilities and developments in medical diagnostics have increased global life expectancy. To further enhance diagnosis, and with the objective of developing highly sophisticated medical equipment, computer-based solutions have recently been developed that help in the early detection of some diseases. Vision is a key component for building artificial systems that can perceive and understand their environment, similar to humans, who perceive the great majority of information about their environment through sight. One of the most interesting applications of image processing is the development of machine-vision systems, among which biometric-based machine-vision systems are prominent; in this context, biometric imaging has been of great interest to researchers. Of the two main types of intelligence, artificial intelligence and machine intelligence, biometric-based electronic systems mainly use machine intelligence. Another application of machine vision is the development of high-end biometric devices that can be used for security, identification, demographic surveys and so on. To perform these tasks, input data are collected in the form of facial images, fingerprints, retinal images and so on, which again depends on different image processing algorithms for a better understanding of the hidden information in the input data. Digital image processing has proved to be an effective approach to analysing images for various applications in education, medical imaging, technology and so on: it performs operations on an image in order to enhance it and to extract useful information from it. In the field of technology, image processing is widely used for facial recognition, pattern detection, recognising and tracking special objects, mapping the Earth's resources, developing military systems and so on.
In most cases, the image acquired from the camera is not processed directly within the application; instead, it is preprocessed to enhance the image according to the specific task. The advent of digital image processing has established the practice of using imaging-based automatic inspection for process control and robot guidance in industry, often known as machine vision. The concept of replacing a repetitive task once performed by a human with an autonomous machine is not new.
IET Image Process. © The Institution of Engineering and Technology 2018
Vision-guided robotic systems have already replaced humans in tasks such as automated farming, animal husbandry, and crop monitoring and analysis. The ultimate goal of travelling so far in the area of image processing is to provide vision to machines. With applications of machine vision, many electronic devices have been developed that serve agriculture, medical devices and pharmaceuticals, printing and packaging, general mechanical engineering, food processing and many more areas. Today, machine-vision systems are frequently deployed in constrained environments where lighting can be controlled. However, machine-vision systems for agricultural tasks such as planting, tending and harvesting crops must operate in unconstrained environments where lighting and weather conditions may vary dramatically. In the area of medical imaging, establishing the relationship between digital images and clinical information is a challenging task that can be addressed by detecting medical symptoms. This gives rise to computer-aided diagnosis, an interdisciplinary technology combining elements of machine intelligence and computer vision with radiological and pathological image processing. A typical application is the detection of a tumour: for instance, some hospitals use computer-aided diagnosis (CAD) to support preventive medical check-ups in mammography (diagnosis of breast cancer), the detection of polyps in the colon, and lung cancer screening. There are prototype cars that can drive by themselves, but without smart vision they cannot tell the difference between a crumpled paper bag on the road, which can be run over, and a rock of the same size, which should be avoided. Fabulous megapixel cameras have been made, but giving sight to the blind is still a far-fetched dream. Drones can fly over massive areas of land but do not have enough vision technology to track the changes in the rainforest.
Security cameras are everywhere, but they do not alert us when a child is drowning in a swimming pool. Our society is, in this sense, blind, because our smartest machines are still blind. Taking a picture is not the same as seeing; by seeing, we really mean understanding. When machines can see, doctors will have extra pairs of tireless eyes to help them diagnose and take care of patients, cars will run smarter and safer on the road, and robots, not just humans, will help us tackle disaster zones to save trapped and wounded people. First we teach machines to see; then they help us to see in a better way.
In the context of medical image processing, by the end of 2020 the market for machine-vision systems is expected to reach $9.5 billion, and 37% of it will be taken by medical image analysis, which is definitely not a small share. Medical imaging extends to other modalities such as X-ray, ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI), and to clinical indications such as radiology, cardiology, neurology and breast mammography, utilised in hospitals, diagnostic centres and research centres. Different diseases can be screened and diagnosed if the retinal blood vessels are studied properly and periodically [1]. The properties and structure of retinal blood vessels are highly affected when a person suffers from several life-threatening diseases, such as diabetic retinopathy, age-related macular degeneration and glaucoma, that lead to permanent blindness. Therefore, the study of retinal blood vessels has become an emerging research area [2]. Among many indications, neovascularisation indicates diabetic retinopathy (DR) and its severity [3]; DR is the seventh leading cause of permanent blindness in underdeveloped and developed countries alike. According to the WHO, if these indications are taken seriously during the preliminary stage and treated accordingly, diabetic retinopathy can be cured [4]. This fact has given researchers hope of overcoming this slow-spreading epidemic. As a result, different supervised and unsupervised methods have emerged to diagnose diabetic retinopathy and its different stages of severity. Diabetic retinopathy has two stages, non-proliferative DR (NPDR) and proliferative DR (PDR), from which the severity of the disease and the probability of its remedy can be decided. Micro-aneurysms, haemorrhages and hard exudates in the retinal image are the signs of NPDR [5].
If the disease is not treated at this stage, it progresses to a proliferative stage where abnormal new blood vessel growth (neovascularisation) takes place [6]. To detect new and abnormal blood vessels, they must be extracted from the background with maximum true detection and minimum false detection, and compared with a manually segmented fundus image; this process is termed blood vessel segmentation [7]. With an increasing number of patients, it is difficult for doctors to manually grade fundus images, as this is time-consuming and error-prone. These hitches of manual grading have led to the automation of the segmentation process, which overcomes the time complexity and error probability of manual segmentation. Many segmentation methods [8–15] have been proposed in the literature, with substantial contributions to the overall precision of the segmented output. Existing methods have dealt with some of the problems, such as false vessel detection, overlapping of two crossing blood vessels, and missed tiny blood vessels. Some applications of retinal image processing, such as optic disc removal and blood vessel segmentation, are very close to radar image processing tasks such as synthetic aperture radar (SAR) image segmentation, SAR object detection, SAR target tracking and SAR object classification. Since both SAR imaging and retinal imaging face similar challenges, with speckle as multiplicative noise, progress in SAR imaging algorithms cannot be neglected. In this paper, a robust segmentation method is proposed that counters the limitations discussed above with higher accuracy and runs flawlessly on healthy as well as diseased images.
The main novelty of this paper is a hybrid active contour model obtained by combining the snake and balloon algorithms and running them concurrently on the retinal image; in addition, the unique preprocessing and phase-based binarisation add further novelty, with improved outcomes in the performance metrics. The remainder of this paper is structured as follows. Section 2 gives an overview of prior work. Section 3 describes the basis of the proposed segmentation method. Section 4 explains the entire proposed method in detail. Section 5 presents the four publicly available datasets (DRIVE, STARE, CHASE and VAMPIRE) and the performance metrics. Section 6 presents the experimental results of the proposed method on these datasets and compares them with different segmentation methodologies given in the literature. Finally, Section 7 presents the discussions and conclusions.
2 Overview of prior work
Most researchers have recently focused on supervised segmentation algorithms using classifiers. Some widely used classifiers (e.g. support vector machines [16, 17], K-nearest mean [18], Gaussian mixture models [14, 19, 20], AdaBoost [21], conditional random fields [15], artificial neural networks [22], deep neural networks [13], etc.) use a large number of training images to train the classifier and to extract features of blood vessels; on the basis of the extracted features, pixels are classified as vessel or non-vessel. Because few fundus images are freely available together with their gold standard images, unsupervised segmentation methods are widely used, and different unsupervised segmentation methodologies have already been applied to retinal images to extract vessels from the background [6–8]. Biswal et al. [8] used a line detector with multiple masks to segment the blood vessels from the foreground. Roychowdhury et al. [23] used an adaptive pixel-threshold technique to extract the vessels. Modava and Akbarizadeh [24] used an active contour model combined with spatial fuzzy clustering for coastline extraction from SAR data. Similarly, several methodologies ranging from morphological path opening [25] to sophisticated approaches such as active contour models and the IPACHI model [26] have been used. Within active contour (snake) models, a number of methods (e.g. the geodesic active contour (GAC) [27], ribbon of twins (ROT) [28], Chan–Vese (CV) model [29, 30] and distance regularised level set evolution (DRLSE) [31]) have already been applied. However, some limitations persist in the above-mentioned methods: the ROT model is difficult to formulate [28], and Modava and Akbarizadeh's method eliminates extremely small land areas over a large water area when using morphological operators.
Akbarizadeh and Tirandaz [32] used an unsupervised feature-learning method for segmentation, in which the features of different areas of SAR images are extracted, learned in an unsupervised manner and finally clustered. The shortest smooth boundary length and its regularisation term make the CV and DRLSE methods unsuitable for blood vessel segmentation, though they are easy to formulate [29, 31]. The GAC model is simple and powerful, allowing the contour to handle structural changes during gradient-descent curve evolution; this gives rise to the level set method, but Wang and Chan noted that not all curve evolution equations should be solved directly by this method [27]. Tirandaz and Akbarizadeh [33] proposed a very promising algorithm based on kurtosis curvelet energy (KCE) and unsupervised spectral regression for segmentation of SAR images, which can also be used for de-noising retinal images. Akbarizadeh [34] used statistical kurtosis wavelet energy (KWE) for texture recognition, which can equally be applied to retinal image processing. Since KWE and KCE have proved to be better estimators for image binarisation, they may also help to binarise retinal blood vessels. In the proposed methodology, however, we give a different insight into image binarisation, based on the phase of the image. Although the above-mentioned methods already achieve high accuracy, it can be further increased by the well-designed method proposed in this paper, which uses a hybrid active contour model in which the GVF snake and balloon methods are combined efficiently to achieve higher accuracy. A quantitative comparison of all the above-listed methods with the proposed method is given in Table 1.
3 Basics of the proposed method
In this section, the combined strengths of the GVF-based snake and balloon models are considered for efficient detection of blood vessels in the fundus image. An active contour model, or snake model, is a deformable spline curve with an associated energy, which is attracted toward objects in the image while an internal force resists deformation [42]. This model mimics the movement of
a snake in an empty room. A moving snake rarely moves into the centre of an empty room; it moves along the walls and corners, always in search of a hole. On finding a hole, it enters, and it returns if the hole is closed further inside. The same behaviour can be related to a snake moving on retinal blood vessels: the walls are the blood vessels and the holes are the cracks and openings in the vessels. To move the snake along the blood vessels, the energy of the snake should be less than the internal energy of the blood vessels, so that the snake cannot deform the vessel boundary. The gradient vector flow method is used for the minimisation of the snake's energy [43]. The energy function of the snake is the sum of the internal energy E_i, the image energy E_im and a user-defined constraint force E_cons integrated along the contour v(q), q ∈ [0, 1]:

E_s* = ∫₀¹ E_s(v(q)) dq = ∫₀¹ [E_i(v(q)) + E_im(v(q)) + E_cons(v(q))] dq (1)

Similarly, a balloon algorithm [44] is introduced to overcome a major defect of the basic snake model. Consider a case where a blood vessel to be detected lies inside a closed concave boundary and the snake is initialised outside it. In such a condition, at any number of iterations, the snake cannot enter the closed concave boundary, and the subject of interest is lost. To overcome this lacuna, a novel idea is proposed where the snake and balloon models are initialised concurrently on the given image to detect the blood vessels outside and inside the closed concave boundary. The balloon model mimics the natural inflation and deflation of a balloon: instead of the curve shrinking towards the edge boundary (as in the snake model), the curve expands towards the minima contour. This model adds an extra inflation term F_inf^balloon to the energy function of the snake:

F_inf^balloon = m₁ k̂(S) (2)

Here m₁ is the magnitude of the force, which should be of the same order as the image normalisation factor m; for the edge force to overcome the inflation force at image edges, m should be smaller than m₁. The normal unitary vector of the curve at point k(s) is denoted k̂(S).

4 Proposed method
This section details the segmentation process from the input retinal image to the output segmented binary image. As per the process involved in the block diagram, the description is arranged below.

4.1 Preprocessing
In the proposed segmentation method, a preprocessed image is the input to the hybrid active contour algorithm. At the outset, an inverted green channel image is extracted from the RGB fundus image. The further processing steps are as follows.

4.1.1 Vessel enhancement: Most preprocessing pipelines apply a central vessel reflex removal algorithm before vessel enhancement, which removes tiny blood vessels whose intensity is nearly equal to the background. Therefore, the image obtained from the inverted green channel is enhanced in the first step itself. This approach enhances the minute blood vessels, but the noise present in the image is also enhanced; this noise is removed later in the main binarisation process. For the enhancement, contrast-limited adaptive histogram equalisation (CLAHE) [45] is applied. Instead of operating on the entire image, CLAHE operates on small regions called tiles. In the proposed method, the contrast limit is set to 0.05 with an exponential distribution, since the intensity is distributed exponentially between background and foreground.

4.1.2 Central vessel reflex removal: During image acquisition, the blood vessels have lower reflectance than other surfaces of the retinal image. Due to this, the inner part of a vessel appears darker than the outer surface, whose reflectance is almost equal to the background. As a result, a small flash of light appears at the centre of the vessel, known as the central vessel reflex. To eliminate this, a contour-driven black top-hat transform algorithm [46] is applied to the vessel-enhanced image. This method preserves the details of the image with an accurate background. Let m and S denote the grey-level image and a disk-type structuring element, with ∂S being the contour of S. The contour-driven dilation (CDD), contour-driven erosion (CDE), contour-driven opening (CDO), contour-driven closing (CDC), opening of the image (O_S) and closing of the image (C_S) by ∂S are defined as follows:
CDD_S(m) = m ⊕ ∂S (3)

CDE_S(m) = m ⊖ ∂S (4)

CDO_S(m) = (m ⊖ ∂S) ⊕ S (5)

CDC_S(m) = (m ⊕ ∂S) ⊖ S (6)

O_S(m) = max(m, CDO_S(m)) (7)

C_S(m) = min(m, CDC_S(m)) (8)
Table 1 Comparison with existing segmentation methodologies for the DRIVE and STARE databases (per database: Se, SP, PR, F1, G, MCC, ACC; "—" = not reported)

Method | DRIVE: Se SP PR F1 G MCC ACC | STARE: Se SP PR F1 G MCC ACC
Lupascu et al. [21] | 0.67 0.98 — — 0.81 — 0.95 | 0.77 0.95 — — 0.85 — 0.95
You et al. [35] | 0.74 0.97 — — 0.85 — 0.94 | 0.72 0.97 — — 0.83 — 0.94
Vega et al. [36] | 0.74 0.96 — 0.68 0.84 0.66 0.94 | 0.70 0.96 — 0.60 0.82 0.59 0.94
Chakraborti et al. [37] | 0.72 0.95 — — 0.83 — 0.93 | 0.67 0.95 — — 0.80 — 0.93
Fraz et al. [38] | 0.71 0.97 0.82 0.76 0.83 0.73 0.94 | 0.74 0.96 0.73 0.73 0.84 0.70 0.95
Fraz et al. [39] | 0.73 0.97 0.81 0.76 0.84 0.73 0.94 | 0.73 0.96 0.72 0.73 0.84 0.69 0.94
Odstrcilik et al. [40] | 0.70 0.96 — — 0.82 — 0.93 | 0.78 0.95 — — 0.86 — 0.93
Roychowdhury et al. [14] | 0.73 0.97 — — 0.85 — 0.95 | 0.73 0.98 — — 0.84 — 0.95
Yin et al. [41] | 0.65 0.97 — — 0.79 — 0.94 | 0.72 0.96 — — 0.83 — 0.94
Zhao et al. [26] | 0.74 0.98 — — 0.85 — 0.95 | 0.78 0.97 — — 0.87 — 0.95
human observer | 0.77 0.97 0.80 0.78 0.86 0.76 0.93 | 0.89 0.93 0.64 0.74 0.91 0.72 0.94
Biswal et al. [8] | 0.71 0.97 0.84 0.75 0.85 0.76 0.95 | 0.70 0.97 0.81 0.76 0.84 0.74 0.95
Liskowski and Krawiec [13] | 0.78 0.96 — — — — 0.95 | 0.92 0.97 — — — — 0.97
proposed | 0.78 0.98 0.89 0.83 0.87 0.80 0.97 | 0.80 0.96 0.86 0.82 0.87 0.78 0.96

The last row gives the proposed algorithm outcomes.
Because of the contour structuring element and the maximum (minimum) operation, only regions processed by the contour-driven opening (CDO) and contour-driven closing (CDC) whose grey-level values are greater (smaller) than in the original image are modified, leaving all other regions unchanged. Similarly, O_S and C_S preserve the image details by improving the estimation of a noisy background. Analogous to the classical top-hat transform, the contour-driven white top-hat transform (CDWTH) and contour-driven black top-hat transform (CDBTH) are given as follows:

CDWTH(m) = m − C_S(m)
(9)
CDBTH(m) = O_S(m) − m
(10)
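The contour-driven morphology of (3)–(10) follows directly from the definitions. The sketch below is a minimal numpy illustration (not the authors' implementation): grey-scale dilation and erosion take the max/min over the offsets of the structuring element, S is a small disk, and ∂S is its 4-connected boundary.

```python
import numpy as np

def disk(r):
    """Disk-type structuring element S of radius r (boolean mask)."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y) <= r * r

def contour(S):
    """Contour ∂S: cells of S with at least one 4-neighbour outside S."""
    p = np.pad(S, 1, constant_values=False)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    return S & ~interior

def _shift_reduce(img, mask, reduce_fn, init):
    """Reduce (max or min) over the structuring-element window at every pixel."""
    r = mask.shape[0] // 2
    p = np.pad(img.astype(float), r, mode='edge')
    out = np.full(img.shape, init)
    h, w = img.shape
    for dy, dx in zip(*np.nonzero(mask)):
        out = reduce_fn(out, p[dy:dy + h, dx:dx + w])
    return out

def dilate(img, mask):   # grey-scale dilation  m ⊕ mask
    return _shift_reduce(img, mask, np.maximum, -np.inf)

def erode(img, mask):    # grey-scale erosion   m ⊖ mask
    return _shift_reduce(img, mask, np.minimum, np.inf)

def cd_tophats(m, r=3):
    """Contour-driven top-hats of (3)-(10) for a grey-level image m."""
    S = disk(r)
    dS = contour(S)
    cdo = dilate(erode(m, dS), S)   # (5)  CDO_S(m) = (m ⊖ ∂S) ⊕ S
    cdc = erode(dilate(m, dS), S)   # (6)  CDC_S(m) = (m ⊕ ∂S) ⊖ S
    o_s = np.maximum(m, cdo)        # (7)
    c_s = np.minimum(m, cdc)        # (8)
    return m - c_s, o_s - m         # (9) CDWTH, (10) CDBTH
```

By construction O_S(m) ≥ m and C_S(m) ≤ m, so both top-hats are non-negative, which is what makes the CDBTH usable as a reflex-free vessel map.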
4.2 Phase-based binarisation

4.2.1 Phase congruency-centred feature map: In this step, the phase-based binarisation method introduced by Nafchi et al. [47] is used. The phase information of an image is proven to be its most important feature [48]. To map this feature, Kovesi's phase congruency model is considered [48]; based on our experiments, its parameters are varied to obtain the best feature map. In the phase congruency method [49, 50], points where the Fourier components are maximally in phase are taken as points of interest. To model the phase congruency of the image, consider even- and odd-symmetric log-Gabor wavelets, denoted by W_ρ^e and W_ρ^O, respectively, at scale ρ. These even and odd wavelets are also known as quadrature pairs [51]. Convolving the 1D signal m(p) at each image point p with the quadrature pair of filters gives the response

[e_ρ(p), O_ρ(p)] = [m(p) ∗ W_ρ^e, m(p) ∗ W_ρ^O] (11)

where e_ρ(p) and O_ρ(p) are the real and imaginary parts of the complex-valued wavelet response. The local phase ∅_ρ(p) and local amplitude A_ρ(p) of the wavelet transform at scale ρ can be calculated as

∅_ρ(p) = arctan2(O_ρ(p), e_ρ(p)) (12)

A_ρ(p) = √(e_ρ(p)² + O_ρ(p)²) (13)

The fractional measure of spread S(p) and the phase congruency weighting function W(p) can be written as

S(p) = (1/X)(Σ_ρ A_ρ(p) / A_MAX(p)) (14)

W(p) = 1 / (1 + e^{α(t − S(p))}) (15)

where X is the number of filter scales and A_MAX(p) is the amplitude of the filter with the maximum response. W(p) is built by applying a sigmoid function to the filter response spread; t is the cut-off value below which phase congruency values are penalised, and a gain factor α varies the sharpness of the cut-off. The sensitive phase deviation function is defined as

Δ℧_ρ(p) = cos(℧_ρ(p) − ℧̄(p)) − |sin(℧_ρ(p) − ℧̄(p))| (16)

where ℧_ρ(p) − ℧̄(p) is the phase deviation at scale ρ and ℧̄(p) is the mean phase angle. Letting f_c^1d denote the 1D phase congruency,

f_c^1d(p) = Σ_ρ W(p)⌊A_ρ(p)Δ℧_ρ(p)⌋ / Σ_ρ A_ρ(p) (17)

The enclosed quantity ⌊·⌋ equals itself when its value is positive and is zero otherwise. Since the above expression is highly sensitive to noise, the noise is modelled with a Rayleigh distribution

RD(p) = (p/σ_J²) exp(−p²/(2σ_J²)) (18)

where σ_J denotes the Rayleigh distribution parameter. The mean μ_RD, variance σ_RD² and median R̈ of the Rayleigh distribution can be expressed in terms of σ_J as

μ_RD = σ_J √(π/2) (19)

σ_RD² = (2 − π/2) σ_J² (20)

R̈ = σ_J √(ln 4) (21)

We can also write the expected amplitude response of the smallest-scale filter as

E[A_ρ^min] = √(π/2) σ_J (22)

This gives

σ_J = E[A_ρ^min] / √(π/2) (23)

from which μ_RD and σ_RD can be calculated. The noise threshold used in this paper is

TH = μ_RD + K σ_RD (24)

where K is the number of standard deviations σ_RD used to cover the noise distribution. Using (24) in (17), we get

f_c^1d(p) = Σ_ρ W(p)⌊A_ρ(p)Δ℧_ρ(p) − TH⌋ / Σ_ρ A_ρ(p) (25)

To represent the same in 2D, orientation l is considered together with scale ρ:

Δ℧_ρl(p) = cos(℧_ρl(p) − ℧̄_l(p)) − |sin(℧_ρl(p) − ℧̄_l(p))| (26)

f_c^2d(p) = Σ_ρ W_l(p)⌊A_ρl(p)Δ℧_ρl(p) − TH_l⌋ / Σ_ρ A_ρl(p) (27)

The maximum moment of phase congruency covariance is given as

I_MM = max_l f_c^2d(p) (28)

Edge strength is measured by the map of I_MM; its values lie in the range (0, 1], with larger values denoting stronger edges.

4.2.2 Main binarisation: This is the last algorithm applied to the image to obtain the final preprocessed image; the resultant image from this step is then segmented by the hybrid active contour segmentation algorithm. The noise accumulated so far is filtered using Kovesi's phase-preserving de-noising method [49]. For this, a complex-valued log-Gabor wavelet, which is non-orthogonal in nature and extracts the local phase and amplitude information at every point in the image, is used. In this method, the noise threshold is calculated by (24), where μ_RD is the mean and σ_RD² is the variance of the Rayleigh distribution.
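Under the Rayleigh noise model of (18)–(21), the threshold (24) can be estimated from the amplitude response of the smallest-scale filter alone, whose median R̈ fixes σ_J. The following is a brief numpy sketch of that estimation (illustrative, not the authors' code; the function name and default K are ours):

```python
import numpy as np

def rayleigh_noise_threshold(amp_smallest_scale, k=2.0):
    """Estimate the noise threshold TH = mu_RD + K * sigma_RD of (24).

    The amplitude response of the smallest-scale filter is assumed to be
    dominated by noise, so its median gives the Rayleigh parameter via
    sigma_J = median / sqrt(ln 4) (inverting (21)); the mean (19) and the
    standard deviation (square root of (20)) then follow.
    """
    r_med = np.median(amp_smallest_scale)
    sigma_j = r_med / np.sqrt(np.log(4.0))           # invert (21)
    mu_rd = sigma_j * np.sqrt(np.pi / 2.0)           # (19)
    sigma_rd = sigma_j * np.sqrt(2.0 - np.pi / 2.0)  # sqrt of (20)
    return mu_rd + k * sigma_rd                      # (24)
```

Amplitudes above this threshold are treated as signal; everything below is attenuated as noise, which is exactly the shrinkage applied before the main binarisation.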
After this, the image is normalised, followed by Otsu's method of binarisation [52]. However, with this method there is a fair chance of losing tiny blood vessels. Otsu's method performs comparatively well when the histogram is bimodal with a deep, sharp valley between the two peaks. However, when the foreground area is much smaller than the background area, the histogram no longer shows bimodality; and if the variances of the foreground and background intensities are large compared with the mean difference, or the image is severely degraded by additive noise, the sharpness of the histogram valley is degraded. A false threshold determined by Otsu's method then results in segmentation errors. To address this problem, an effective approach known as two-dimensional Otsu's method has been proposed by some researchers [53], in which the grey-level value of each pixel as well as the average value of its immediate neighbourhood is considered, so that the binarisation results are greatly improved, especially for images corrupted by noise. In this paper, however, to surmount the problem, the output of Otsu's method is multiplied with I_MM as defined in (28) to obtain the main binarised image, where I_MM is the maximum moment of 2D phase congruency covariance, which gives the edge strength of the image. Instead of focusing only on the grey-level value, we thus also consider phase deviation and orientation, and I_MM is used to differentiate the foreground from the background. The value of j is estimated as follows:

j = 2 + γ × (Σ_{a,b} I_O(a, b) / Σ_{a,b} I_DK(a, b)) (29)
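The Otsu-plus-edge-strength composition described above can be sketched as follows; otsu_threshold is a plain histogram implementation, and main_binarise is an illustrative composition rather than the exact code used in the paper.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's threshold: maximise the between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(bins))       # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    idx = np.nanargmax(sigma_b)               # empty classes give NaN, skipped
    return edges[idx + 1]

def main_binarise(img, edge_strength):
    """Sketch of the main binarisation: the Otsu mask weighted by the
    phase-congruency edge-strength map I_MM (values in (0, 1])."""
    mask = (img > otsu_threshold(img)).astype(float)
    return mask * edge_strength
```

Multiplying the binary mask by the edge-strength map suppresses background pixels that pass the grey-level threshold but have no phase-congruent edge support.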
Here γ is a constant whose value is 0.5; I_O is the resultant image of Otsu's method and I_DK is the normalised de-noised image obtained by Kovesi's method.

4.3 Hybrid active contour segmentation
As discussed in Section 3, the GVF snake and balloon methods are combined to carry out the segmentation process. The snake energy E_s* given in (1) comprises the internal energy of the snake, the energy of the image and the constraint energy [42]. These energy terms are explained below.

4.3.1 Internal energy of snake: The continuity of the contour E_c and the smoothness of the contour E_sm give the internal energy of the snake [https://www.cse.unr.edu/~bebis/CS791E/Notes/DeformableContours.pdf]

E_i^snake = E_c + E_sm (30)

Further, the above equation can be expanded as

E_i^snake = (1/2)(α′(q)|v_q′|² + β′(q)|v_qq′|²) = (1/2)(α′(q)|∂v(q)/∂q|² + β′(q)|∂²v(q)/∂q²|²) (31)

To control the sensitivity to the length of the snake, user-controlled weights α′(q) and β′(q) are introduced.

4.3.2 Energy of image: Let the image X(m, n) have features such as lines, edges and closures. Then the energy of the image can be formulated as

Ë_img = W_ln E_ln + W_edg E_edg + W_cls E_cls (32)

Here W_ln, W_edg and W_cls are the weights of the line, edge and closure features, and E_ln, E_edg and E_cls are their energies, respectively. The different energies associated with line, edge and closure are illustrated below.

Line functional: This is the intensity of the image, given by

E_ln = X(m, n) (33)

The sign of W_ln determines whether the snake is attracted towards darker or lighter vessels. In the proposed work, a Gaussian blurring filter is applied to the image to prevent the snake from being attracted towards isolated noisy pixels. After filtering, the equation can be written as

E_ln = Gauss_Filter(X(m, n)) (34)

Edge function: This function depends on the image gradient

E_edg = −|∇X(m, n)|² (35)

A snake starting far from the desired object may converge to some local noisy pixel or minimum. To overcome this lacuna, a blurring filter is used at the beginning, and the blurriness is gradually decreased to refine the snake fitting:

E_edg = −|G̈_σ ∗ ∇X(m, n)|² (36)

where G̈_σ is a Gaussian blur with standard deviation (SD) σ. The blurring filter can be used either in the line function or in the edge function. According to the Marr–Hildreth theory of edge detection, the minima of the above function lie on the zero crossings of G̈_σ ∗ ∇X(m, n).

Closure function: To detect corners and line terminations, the image is blurred by G̈_σ. Let J(m, n) be the image after smoothing:

J(m, n) = G̈_σ ∗ X(m, n) (37)

with gradient angle

θ̂ = arctan(J_n / J_m) (38)

The unit vector along the direction of the gradient is

n̂ = (cos θ̂, sin θ̂) (39)

and a unit vector perpendicular to the gradient direction is

n̂⊥ = (−sin θ̂, cos θ̂) (40)

The energy of the closure (termination) function is given by

E_cls = ∂θ̂/∂n̂⊥ = (∂²J/∂n̂⊥²)/(∂J/∂n̂) = (J_nn J_m² − 2 J_mn J_m J_n + J_mm J_n²)/(J_m² + J_n²)^{3/2} (41)

Combining the above terms, the image energy becomes

Ë_img = W_ln Gauss_Filter(X(m, n)) + W_edg(−|G̈_σ ∗ ∇X(m, n)|²) + W_cls[(J_nn J_m² − 2 J_mn J_m J_n + J_mm J_n²)/(J_m² + J_n²)^{3/2}]² (42)
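The image-energy terms can be evaluated numerically. The illustrative numpy sketch below computes the blurred line term (34) and the blurred edge term (36) and combines them as in (42); the closure term (41) is omitted for brevity, and the function names and default weights are ours, not the paper's.

```python
import numpy as np

def gauss_kernel(sigma, radius=None):
    """Normalised 1D Gaussian kernel."""
    r = radius or int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with reflect padding (G_sigma * X)."""
    k = gauss_kernel(sigma)
    r = len(k) // 2
    conv = lambda m: np.convolve(np.pad(m, r, mode='reflect'), k, 'valid')
    out = np.apply_along_axis(conv, 0, img)   # blur columns
    out = np.apply_along_axis(conv, 1, out)   # blur rows
    return out

def image_energy(img, w_ln=0.2, w_edg=1.0, sigma=2.0):
    """Partial image energy of (42): W_ln * E_ln + W_edg * E_edg.

    E_ln is the blurred intensity (34); E_edg is minus the squared gradient
    magnitude of the blurred image (36) (blurring the image before taking
    the gradient is equivalent to blurring the gradient itself).
    """
    smooth = blur(img, sigma)
    e_ln = smooth                              # (34)
    gy, gx = np.gradient(smooth)
    e_edg = -(gx ** 2 + gy ** 2)               # (36)
    return w_ln * e_ln + w_edg * e_edg
```

With W_edg > 0, the energy is lowest (most negative) along strong edges, so a gradient-descent snake is pulled towards vessel boundaries.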
4.3.3 Constraint energy: Constraint energy is used to keep the snake away from, or close to, particular object features, and is usually defined by the user. Considering the above two energy equations, the final energy function of the snake can be rewritten from (1). The internal energy of the snake now needs to be minimised in such a way that the snake cannot penetrate the vessels and moves alongside the walls. Several techniques exist to optimise the energy, such as the gradient descent method [54] and discrete approximation methods.
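For a closed contour sampled at n points, the internal energy (30)-(31) has a simple finite-difference form, which is what a discrete minimiser actually evaluates. A hypothetical sketch, with α′ and β′ taken as constants:

```python
import numpy as np

def internal_energy(pts, alpha=0.1, beta=0.05):
    """Discrete form of (31) for a closed contour of n 2D points.

    d1 approximates v_q (continuity term), d2 approximates v_qq
    (smoothness/curvature term); np.roll closes the contour.
    """
    d1 = np.roll(pts, -1, axis=0) - pts
    d2 = np.roll(pts, -1, axis=0) - 2.0 * pts + np.roll(pts, 1, axis=0)
    e_c = 0.5 * alpha * (d1 ** 2).sum(axis=1)    # continuity E_c
    e_sm = 0.5 * beta * (d2 ** 2).sum(axis=1)    # smoothness E_sm
    return (e_c + e_sm).sum()
```

Because both terms are quadratic in the point coordinates, scaling the contour by a factor c scales the energy by c², and any local perturbation of a smooth contour raises the energy, which is what keeps the snake from penetrating the vessel walls.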
In this paper, the gradient vector flow method is used to minimise the external energy acting on the vessels, i.e. the internal energy of the snake. The GVF snake model [43] addresses issues such as (i) poor convergence of the snake when initialised far from the minima and (ii) poor convergence to concave boundaries. With V(m, n) = (u(m, n), v(m, n)) denoting the GVF field and f the edge map, the 2D energy function of the GVF field is given as

Ë_GVF = ∫∫ [μ(u_m² + u_n² + v_m² + v_n²) + |∇f|² |V − ∇f|²] ∂m ∂n (43)

where μ is the controllable smoothing term. By applying Euler's equations to (43), the following are derived:

μ∇²u − (u − ∂F_ext/∂m)[(∂F_ext(m, n)/∂m)² + (∂F_ext(m, n)/∂n)²] = 0 (44)

μ∇²v − (v − ∂F_ext/∂n)[(∂F_ext(m, n)/∂m)² + (∂F_ext(m, n)/∂n)²] = 0 (45)

These are solved iteratively as

u^{i+1} = u^i + μ∇²u^i − (u^i − ∂F_ext/∂m)[(∂F_ext(m, n)/∂m)² + (∂F_ext(m, n)/∂n)²] (46)

v^{i+1} = v^i + μ∇²v^i − (v^i − ∂F_ext/∂n)[(∂F_ext(m, n)/∂m)² + (∂F_ext(m, n)/∂n)²] (47)

The following parameters are used to evaluate the proposed algorithm: specificity (SP), sensitivity (SE), accuracy (ACC), false discovery rate (FDR), precision (PR), Matthews correlation coefficient (MCC), G-mean (G), F1-score (F1) and Dice coefficient (DC), all calculated by comparing the gold standard image with the segmented output. To calculate these parameters, TP, TN, FP and FN are counted, standing for true positives (vessel pixels detected correctly), true negatives (background pixels detected correctly), false positives (background or noise pixels detected as vessel) and false negatives (vessel pixels left undetected), respectively. The evaluation parameters are measured as follows:

SE = TP/(TP + FN) (49)

SP = TN/(TN + FP) (50)

ACC = (TP + TN)/(TP + TN + FP + FN) (51)

PR = TP/(TP + FP) (52)

F1 = 2·PR·SE/(PR + SE) (53)

FDR = 1 − PR (54)

G = √(SE × SP) (55)

DC = 2|G ∩ S|/(|G| + |S|) (56)

Here G in (56) is the ground truth image and S is the segmented result.
5 Datasets and performance metrics 5.1 Datasets For the analysis of the proposed algorithm, the experiment is carried out on four datasets. They are DRIVE [18], STARE, CHASE [50] and VAMPIRE [55]. DRIVE [http://www.isi.uu.nl/ Research/Datasets/DRIVE/] (Digital Retinal Image for Vessel Extraction) is a freely available dataset consisting of 40 retinal images (colour) which were captured in a diabetic retinopathy screening program conducted in the Netherlands. The resolution of each image is 768 × 584 pixels with a 45-degree field of view (FOV). The set is divided into two sets each containing 20 images as test and training images. STARE [http://www.ces.clemson.edu/ ~ahoover/stare/] (STrucured Analysis of the Retina) consists of 20 color retinal images among which 10 images have a sign of disease present. The image was captured by TopCon TRV-50 fundus camera with a resolution of 605 × 700 pixels. CHASE [https:// blogs.kingston.ac.uk/retinal/chasedb1/] dataset consists of total 28 images collected from both left and right eyes of 14 children. Images are of 8 bits color channel with a resolution of 1280 × 960 pixels. VAMPIRE dataset consists of 8 ultra-wide field of view (FOV) angiographic retinal images, captured with OPTOS P200C camera. The resolution of each image is 3900 × 3072 pixel among which four images are taken from AMD14 and four images from GER7. 5.2 Performance metrics The proposed segmentation algorithm gives an automated segmented image of the given retinal image, which is compared with the gold standard image. The Gold Standard for segmentation of medical images is the manual drawing of the region of interest. This manual tracing is performed by experts (radiologists). Various parameters are considered in this paper to check the flexibility of 6
(49)
PR =
This result can be replaced by default external force * = FGVF Fex
TP TP + FN
ACC =
Solving through iteration with a steady-state value we have ui + 1 = ui + μ∇2ui − ui −
SE =
SP + SE 2
(57)
TP/M − P × Q Q×P 1−P × 1−Q
(58)
AUC = MCC =
Here M = TP + FP + TN + FN is the total number of pixels in the image, with P = (TP + FN)/M and Q = (TP + FP)/M. The parameters used in this paper are described below:
• Sensitivity denotes the fraction of actual vessels detected correctly. The higher the sensitivity, the better the segmentation.
• Specificity denotes the fraction of non-vessel pixels detected as non-vessel, i.e. as background. A higher value denotes better segmentation.
• Accuracy and area under the curve (AUC) measure the overall performance of the segmentation process; these parameters should also be high.
• MCC denotes the correlation between the gold standard image and the segmented image. A value close to 1 indicates better segmentation.
• The Dice coefficient is an overlap metric used to compare the agreement between the ground truth image and the segmented image. A value greater than 0.7 is generally considered excellent matching [56].
• The F1 score measures accuracy considering the precision and recall of the test.
Apart from the above-mentioned parameters, the PSNR is calculated for the image obtained from (28) to compare the maximum SNR for edge only and edge with corner. It is calculated as
PSNR = 10 · log10( MAX_X² / MSE ) (59)

MSE = (1/PQ) Σ_{i=0}^{P−1} Σ_{j=0}^{Q−1} [X(i, j) − K(i, j)]² (60)

Here MSE is the mean square error between the noise-free P × Q grey-scale image X(i, j) and its noisy approximation K(i, j), and MAX_X is the maximum possible pixel value of image X.
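The evaluation metrics of eqs (49)–(58) and the PSNR of eqs (59)–(60) can be sketched as follows (Python/NumPy; the paper's experiments used MATLAB). The helper names `evaluate` and `psnr` are hypothetical, and `psnr` uses the image maximum as a stand-in for MAX_X.

```python
import numpy as np

def evaluate(seg, gold):
    """Pixel-wise evaluation of a binary segmentation against the gold standard."""
    seg, gold = np.asarray(seg, bool), np.asarray(gold, bool)
    TP = np.sum(seg & gold)       # vessels detected correctly
    TN = np.sum(~seg & ~gold)     # background detected correctly
    FP = np.sum(seg & ~gold)      # background detected as vessel
    FN = np.sum(~seg & gold)      # vessels left undetected
    SE = TP / (TP + FN)                        # sensitivity, eq (49)
    SP = TN / (TN + FP)                        # specificity, eq (50)
    ACC = (TN + TP) / (TP + TN + FP + FN)      # accuracy,    eq (51)
    PR = TP / (TP + FP)                        # precision,   eq (52)
    F1 = 2 * PR * SE / (PR + SE)               # F1 score,    eq (53)
    FDR = 1 - PR                               # false discovery rate, eq (54)
    G = np.sqrt(SE * SP)                       # G-mean,      eq (55)
    DC = 2 * TP / (2 * TP + FP + FN)           # Dice (equivalent to eq (56) for masks)
    AUC = (SP + SE) / 2                        # eq (57)
    M = TP + FP + TN + FN
    P, Q = (TP + FN) / M, (TP + FP) / M
    MCC = (TP / M - P * Q) / np.sqrt(P * Q * (1 - P) * (1 - Q))  # eq (58)
    return dict(SE=SE, SP=SP, ACC=ACC, PR=PR, F1=F1, FDR=FDR,
                G=G, DC=DC, AUC=AUC, MCC=MCC)

def psnr(X, K):
    """PSNR of noisy approximation K against noise-free image X, eqs (59)-(60)."""
    X, K = np.asarray(X, float), np.asarray(K, float)
    mse = np.mean((X - K) ** 2)
    return 10 * np.log10(X.max() ** 2 / mse)
```

For binary masks the set form 2|G ∩ S|/(|G| + |S|) of the Dice coefficient reduces to 2TP/(2TP + FP + FN), which is what the sketch computes.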
6 Experiments and results

In this section, the experiments to check the robustness of the proposed methodology are presented. The experimental setup is kept simple: every single image from each dataset is given as an input to the algorithm. For better computation, all input images are resized to 584 × 565. Since no learning or training process is involved in this work, all the free parameters were adjusted manually to meet the desired output. Various free parameters can be modified to de-noise the image; among them, the noise threshold below which values are rejected (j), the number of filter scales (xρ) and the number of orientations (xl) are the most used. The extent to which low frequencies are covered is controlled by xρ: the higher its value, the lower the frequencies covered. In this paper, we have used j = 1, xρ = 5 and xl = 3 to best meet the desired output. All the evaluation parameters given in Section 5.2 were calculated and are presented in this section. All experiments were performed using MATLAB R2016b on a laptop with 4 GB RAM and a 1.8 GHz processor. The proposed method contains three essential parts: novel preprocessing, phase-based binarisation, and segmentation with a hybrid active contour using the GVF-based snake and balloon model. The resultant images of each step of the proposed methodology, as per the block diagram in Fig. 1, are presented in Fig. 2. Randomly selected images from the STARE, CHASE and VAMPIRE databases are segmented with the proposed method and placed alongside the gold standard images in Fig. 3 to analyse the efficiency of the proposed method.
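The black top-hat step of the preprocessing chain can be sketched as below (Python/NumPy). Note this uses a plain square structuring element for illustration, whereas the paper's contour-driven variant (CDBTH [46]) shapes the structuring element to the contour.

```python
import numpy as np

def _dilate(img, k=3):
    """Grey-scale dilation with a k x k square structuring element (max filter)."""
    img = np.asarray(img, float)
    r = k // 2
    p = np.pad(img, r, mode='edge')
    out = np.full_like(img, -np.inf)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out = np.maximum(out, p[r + di:r + di + img.shape[0],
                                    r + dj:r + dj + img.shape[1]])
    return out

def _erode(img, k=3):
    # Erosion is dilation of the negated image (min filter)
    return -_dilate(-np.asarray(img, float), k)

def black_tophat(img, k=3):
    """Black top-hat transform: closing(img) - img. Thin dark structures
    (vessels, central vessel reflex) stand out as bright responses."""
    closing = _erode(_dilate(img, k), k)
    return closing - np.asarray(img, float)
```

On a bright fundus background, a one-pixel-wide dark line is removed by the closing, so the difference image responds strongly exactly on the line.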
In this section, the segmented results of the GVF-based snake with and without the balloon algorithm are depicted in Fig. 4. Figs. 4c–e show how the balloon is inflated inside the closed concave boundary, where the snake cannot reach. The peak SNR is also calculated for two different stages, edge only and edge with corner, as 1.9 and 3.9 dB, respectively. This confirms that the phase-based feature map has a higher PSNR when both edge and corner are taken into account. Fig. 5 shows the receiver operating characteristic (ROC) curve, which compares the probability of actual vessel detection against the probability of false vessel detection for all datasets used in this paper. The main problems in the active contour model are the number of iterations and the placement of the initial contour. The number of iterations required for complete segmentation varied across datasets and with the level of degradation. To monitor this, the program was set to auto-save the segmented image at every 100th iteration. The average number of iterations for the DRIVE dataset was 270, which is relatively low compared with the STARE, CHASE and VAMPIRE datasets at 310, 350 and 420, respectively. These figures may not be very promising in terms of time complexity, but on the other performance-evaluating parameters our proposed method does exceedingly well. To speed up the segmentation process, the contour was initiated very close to the object. In the case of DRIVE and STARE, a false mask of the same size as the input image was created and considered as the initial contour boundary. However, since no masks are available for the VAMPIRE and CHASE datasets, the initial contour was selected manually to cover the area of interest. The overall summary in terms of the number of iterations required during segmentation and the performance metrics (MSE and PSNR) before segmentation, i.e. after pre-processing, is stated in Table 2.
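The GVF field iteration of eqs (43)–(47), which drives the snake evolution above, can be sketched as follows (a simplified explicit scheme in Python/NumPy; the parameter values are illustrative and the periodic boundary handling via np.roll is a simplification).

```python
import numpy as np

def gvf(f, mu=0.2, iters=200, dt=0.5):
    """Gradient vector flow field of an edge map f (Xu and Prince [43]).
    Iterates eqs (46)-(47) to diffuse the gradient of f into flat regions."""
    fm, fn = np.gradient(np.asarray(f, float))
    mag2 = fm**2 + fn**2            # data-term weight (f_m^2 + f_n^2)
    u, v = fm.copy(), fn.copy()     # initialise V = grad(f)

    def lap(a):                     # 5-point Laplacian, periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(iters):
        # u_{i+1} = u_i + mu*lap(u_i) - (u_i - f_m)*(f_m^2 + f_n^2), cf. eq (46)
        u = u + dt * (mu * lap(u) - (u - fm) * mag2)
        v = v + dt * (mu * lap(v) - (v - fn) * mag2)
    return u, v
```

The smoothing term μ diffuses the field far from edges, which is what extends the snake's capture range into homogeneous regions and concave boundaries.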
6.1 Comparison with existing active models

In this section, the proposed model is compared with some of the existing active contour models stated in Section 2. Five active contour models, IPACHI, IPAC, CV, ROT and DRLSE, are compared with our proposed methodology in Table 3 in terms of SE, SP, ACC, DC, and AUC. For this comparison, we have used three datasets: DRIVE, STARE and VAMPIRE. The proposed method has achieved higher
Fig. 1 Block diagram of the proposed method
Fig. 2 Proposed segmentation algorithm on one image of the DRIVE dataset (a) Original image, (b) Green channel extracted image, (c) Vessels enhanced by CLAHE, (d) Central vessel reflex removed by contour-driven black top-hat transform (CDBTH), (e) Phase-preserved de-noised image by Kovesi's method, (f) Normalised image of (e), (g) Otsu-binarised image of (f), (h) Phase-congruency-centred feature map (edge and corner), (i) Complemented image of (h), (j) Main binarised image after multiplying (g) and (i), (k) Final segmented image after applying the proposed method to (j)
Fig. 4 Hybrid active contour segmentation process (a) Segmentation at iteration 100, (b) Segmentation at iteration 300, (c) Start of the balloon inside the closed contour, (d), (e) Snake on either side of the vessel wall despite the closed curve, due to the combination of the GVF and balloon algorithms
6.2 Comparison with other segmentation methods
Fig. 3 Gold standard images versus images segmented by the proposed methodology (a), (c), (e) Manually segmented (gold standard) images from STARE, CHASE and VAMPIRE, followed by (b), (d), (f), the corresponding segmented images
SE, SP, ACC, DC, and AUC for DRIVE dataset, which are 0.78, 0.98, 0.97, 0.80, and 0.88, respectively. Similarly, for STARE, we have achieved better SE (0.80), DC (0.83), AUC (0.88), and ACC (0.96). For VAMPIRE we could only achieve better SE (0.79) and AUC (0.88).
Although the previous comparison has already demonstrated that the proposed active contour model is more efficient than the existing models, it is also necessary to compare the proposed segmentation method against existing segmentation methods on various decision parameters to prove its robustness. Tables 1, 4 and 5 show the comparison of existing segmentation methods with the proposed method on various datasets. Against different unsupervised and supervised methods, the proposed method achieved better PR (0.89), F1 (0.83), G-mean (0.87) and MCC (0.80) for the DRIVE dataset. Similarly, for the STARE dataset, we could not achieve a better G-mean, but parameters such as PR (0.86), F1 (0.82) and MCC (0.78) are comparatively better.
7 Discussion and conclusions

In this paper, the problem of retinal blood vessel segmentation is mitigated using a hybrid active contour model. Our results are promising, with better accuracy than various existing algorithms. In order to preserve the tiny blood vessels, the conventional preprocessing technique is replaced with a phase-preserved binarisation method. To make the binarisation method more effective, the contrast-limited adaptive histogram equalisation method is applied before the contour-driven black top-hat transformation. Further, the image is de-noised using Kovesi's phase-preserving de-noising method with changes to parameters such as the noise threshold j, the number of filter scales (xρ) and the number of orientations (xl). The problem of closed concave boundaries, which the snake cannot enter, is judiciously solved by adding the inflation term of the balloon model to the energy function of the snake. This approach hybridises the conventional snake model, resulting in better segmentation of blood vessels. The proposed methodology has been tested over four publicly available databases considering a wide range of evaluation parameters. In the end, we have proposed an efficient hybrid active contour model to segment retinal images effectively, which will be a robust tool to investigate any vascular-related disease. The proposed method is not limited to the detection of vessels in healthy retinal images but also handles diseased images having more closed concave boundaries. The algorithm can also be used for the detection of coastlines or islands in satellite image processing
Fig. 5 ROC curve with various datasets with AUC
Table 2 Summary of results in terms of number of iterations and performance metrics before segmentation (IMM) for the various datasets

Datasets   No. of iterations           Performance metric before segmentation
           Best    Average    Worst    MSE      PSNR, dB
DRIVE      240     270        320      0.032    3.9
STARE      272     310        380      0.051    4.2
CHASE      280     350        400      0.042    4.0
VAMPIRE    330     420        490      0.029    4.2
Table 3 Comparison with existing active contour models

Model        | DRIVE                        | STARE                        | VAMPIRE
             | SE   SP   DC   AUC  ACC      | SE   SP   DC   AUC  ACC      | SE   SP   DC   AUC  ACC
IPAC [57]    | 0.72 0.96 0.74 0.84 0.94     | 0.75 0.96 0.77 0.86 0.94     | 0.72 0.97 0.73 0.84 0.96
IPACHI [26]  | 0.74 0.98 0.78 0.86 0.95     | 0.78 0.97 0.80 0.84 0.95     | 0.72 0.98 0.73 0.85 0.97
CV [30]      | 0.67 0.92 0.70 0.80 0.93     | 0.77 0.95 0.79 0.86 0.93     | 0.71 0.98 0.73 0.85 0.97
ROT [28]     | 0.72 0.95 —    0.84 —        | 0.75 0.96 —    0.86 —        | —    —    —    —    —
DRLSE [31]   | 0.71 0.97 —    0.84 0.94     | —    —    —    —    —        | —    —    —    —    —
VP [55]      | —    —    —    —    —        | —    —    —    —    —        | 0.66 —    —    —    0.97
proposed     | 0.78 0.98 0.80 0.88 0.97     | 0.80 0.96 0.83 0.88 0.96     | 0.79 0.97 0.72 0.88 0.96

Bold values indicate the proposed algorithm outcomes.
Table 4 Comparison with existing segmentation methodologies for the CHASE database

CHASE                      SE    SP    PR    F1    G     MCC   ACC
Orlando et al. [15]        0.72  0.97  0.74  0.73  0.84  0.70  —
Fraz et al. [58]           0.72  0.97  0.77  0.74  0.84  —     0.95
Fraz et al. [59]           0.72  0.97  —     —     —     —     0.94
Roychowdhury et al. [14]   0.72  0.98  —     —     —     —     0.95
Azzopardi et al. [60]      0.72  0.96  —     —     —     0.67  0.94
Roychowdhury et al. [23]   0.75  0.96  —     —     —     —     0.94
Chakraborti et al. [37]    0.53  0.95  —     —     —     —     0.93
Zhang et al. [61]          0.77  0.98  —     —     —     —     0.96
Fan et al. [62]            0.65  0.97  —     —     —     —     0.95
human observer             0.74  0.97  0.80  0.76  0.85  —     —
Biswal et al. [8]          0.76  0.97  0.76  0.75  0.85  0.73  —
proposed                   0.78  0.97  0.76  0.76  0.86  0.73  0.97

Bold values indicate the proposed algorithm outcomes.
Table 5 Comparison with existing segmentation methodologies for the VAMPIRE database

VAMPIRE                    SE    SP    DC    AUC   ACC
Barchiesi et al. [57]      0.72  0.97  0.73  0.84  0.96
Zhao et al. [26]           0.72  0.98  0.73  0.85  0.97
Chan et al. [30]           0.71  0.98  0.73  0.85  0.97
Perez et al. [55]          0.66  —     —     —     0.97
Zhao et al. [63]           0.72  0.72  0.72  0.72  0.72
Shrichandran et al. [64]   0.92  0.89  —     —     0.98
proposed                   0.79  0.97  0.72  0.88  0.96

Bold values indicate the proposed algorithm outcomes.
and SAR imaging, as they have a similar type of noise and closed concave boundaries to retinal images.
8 Acknowledgment

The authors thank the Department of Science and Technology (DST), India, for supporting this work under the Extramural Research (EMR) funding scheme of the Science and Engineering Research Board (SERB), grant no. EMR/2017/000885.
9 References

[1] Abràmoff, M.D., Garvin, M.K., Sonka, M.: 'Retinal imaging and image analysis', IEEE Rev. Biomed. Eng., 2010, 3, pp. 169–208
[2] Wilson, C.M., Cocker, K.D., Moseley, M.J., et al.: 'Computerized analysis of retinal vessel width and tortuosity in premature infants', Investig. Ophthalmol. Vis. Sci., 2008, 49, (8), pp. 3577–3585
[3] Sussman, E.J.: 'Diagnosis of diabetic eye disease', JAMA J. Am. Med. Assoc., 1982, 247, (23), p. 3231
[4] World Health Organization: 'Prevention of blindness from diabetes mellitus' (WHO, Geneva, 2005), pp. 1–48
[5] Roychowdhury, S., Koozekanani, D.D., Parhi, K.K.: 'DREAM: diabetic retinopathy analysis using machine learning', IEEE J. Biomed. Heal. Inf., 2014, 18, (5), pp. 1717–1728
[6] Hoover, A.: 'Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response', IEEE Trans. Med. Imaging, 2000, 19, (3), pp. 203–210
[7] Fraz, M.M., Remagnino, P., Hoppe, A., et al.: 'Blood vessel segmentation methodologies in retinal images – a survey', Comput. Methods Programs Biomed., 2012, 108, (1), pp. 407–433
[8] Biswal, B., Pooja, T., Bala Subrahmanyam, N.: 'Robust retinal blood vessel segmentation using line detectors with multiple masks', IET Image Process., 2018, 12, (3), pp. 389–399
[9] Nugroho, H.A., Lestari, T., Aras, R.A., et al.: 'Segmentation of retinal blood vessels using Gabor wavelet and morphological reconstruction'. Proc. 2017 3rd Int. Conf. on Science in Information Technology (ICSITech), Bandung, Indonesia, 2017, pp. 513–516
[10] Bajceta, M., Sekulic, P., Djukanovic, S., et al.: 'Retinal blood vessels segmentation using ant colony optimization'. 2016 13th Symp. Neural Networks Applications, Belgrade, Serbia, 2016, pp. 1–6
[11] Salazar-Gonzalez, A., Kaba, D., Li, Y., et al.: 'Segmentation of the blood vessels and optic disk in retinal images', IEEE J. Biomed. Heal. Inf., 2014, 18, (6), pp. 1874–1886
[12] Mendonça, A.M., Campilho, A.: 'Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction', IEEE Trans. Med. Imaging, 2006, 25, (9), pp. 1200–1213
[13] Liskowski, P., Krawiec, K.: 'Segmenting retinal blood vessels with deep neural networks', IEEE Trans. Med. Imaging, 2016, 35, (11), pp. 2369–2380
[14] Roychowdhury, S., Koozekanani, D.D., Parhi, K.K.: 'Blood vessel segmentation of fundus images by major vessel extraction and subimage classification', IEEE J. Biomed. Heal. Inf., 2015, 19, (3), pp. 1118–1128
[15] Orlando, J.I., Prokofyeva, E., Blaschko, M.B.: 'A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images', IEEE Trans. Biomed. Eng., 2017, 64, (1), pp. 16–27
[16] Tuba, E., Mrkela, L., Tuba, M.: 'Retinal blood vessel segmentation by support vector machine classification'. Proc. 2017 27th Int. Conf. Radioelektronika, Brno, Czech Republic, 2017
[17] Ricci, E., Perfetti, R.: 'Retinal blood vessel segmentation using line operators and support vector classification', IEEE Trans. Med. Imaging, 2007, 26, (10), pp. 1357–1365
[18] Staal, J., Abràmoff, M.D., Niemeijer, M., et al.: 'Ridge-based vessel segmentation in color images of the retina', IEEE Trans. Med. Imaging, 2004, 23, (4), pp. 501–509
[19] Marín, D., Aquino, A., Gegúndez-Arias, M.E., et al.: 'A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features', IEEE Trans. Med. Imaging, 2011, 30, (1), pp. 146–158
[20] Soares, J.V.B., Leandro, J.J.G., Cesar, R.M., et al.: 'Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification', IEEE Trans. Med. Imaging, 2006, 25, (9), pp. 1214–1222
[21] Lupascu, C.A., Tegolo, D., Trucco, E.: 'FABC: retinal vessel segmentation using AdaBoost', IEEE Trans. Inf. Technol. Biomed., 2010, 14, (5), pp. 1267–1274
[22] Yao, Z., Zhang, Z., Xu, L.-Q.: 'Convolutional neural network for retinal blood vessel segmentation'. Proc. 2016 9th Int. Symp. on Computational Intelligence and Design (ISCID), Hangzhou, China, 2016, pp. 406–409
[23] Roychowdhury, S., Koozekanani, D.D., Parhi, K.K.: 'Iterative vessel segmentation of fundus images', IEEE Trans. Biomed. Eng., 2015, 62, (7), pp. 1738–1749
[24] Modava, M., Akbarizadeh, G.: 'Coastline extraction from SAR images using spatial fuzzy clustering and the active contour method', Int. J. Remote Sens., 2017, 38, (2), pp. 355–370
[25] Rossant, F., Badellino, M., Chavillon, A., et al.: 'A morphological approach for vessel segmentation in eye fundus images, with quantitative evaluation', J. Med. Imaging Heal. Inf., 2011, 1, (1), pp. 42–49
[26] Zhao, Y., Rada, L., Chen, K., et al.: 'Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retina images', IEEE Trans. Med. Imaging, 2015
[27] Läthén, G., Jonasson, J., Borga, M.: 'Blood vessel segmentation using multi-scale quadrature filtering', Pattern Recognit. Lett., 2010, 31, (8), pp. 762–767
[28] Al-Diri, B., Hunter, A., Steel, D.: 'An active contour model for segmenting and measuring retinal vessels', IEEE Trans. Med. Imaging, 2009, 28, (9), pp. 1488–1497
[29] Sun, K., Chen, Z., Jiang, S.: 'Automatic vascular segmentation', IEEE Trans. Biomed. Eng., 2012, 59, (2), pp. 464–473
[30] Chan, T.F., Vese, L.A.: 'Active contours without edges', IEEE Trans. Image Process., 2001, 10, (2), pp. 266–277
[31] Li, C., Xu, C., Gui, C., et al.: 'Distance regularized level set evolution and its application to image segmentation', IEEE Trans. Image Process., 2010, 19, (12), pp. 3243–3254
[32] Akbarizadeh, G., Tirandaz, Z.: 'SAR image segmentation using unsupervised spectral regression and Gabor filter bank'. 2015 7th Conf. Information Knowledge Technology (IKT 2015), Urmia, Iran, 2015, pp. 9–12
[33] Tirandaz, Z., Akbarizadeh, G.: 'A two-phase algorithm based on kurtosis curvelet energy and unsupervised spectral regression for segmentation of SAR images', IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 2016, 9, (3), pp. 1244–1264
[34] Akbarizadeh, G.: 'A new statistical-based kurtosis wavelet energy feature for texture recognition of SAR images', IEEE Trans. Geosci. Remote Sens., 2012, 50, (11), pp. 4358–4368
[35] You, X., Peng, Q., Yuan, Y., et al.: 'Segmentation of retinal blood vessels using the radial projection and semi-supervised approach', Pattern Recognit., 2011, 44, (10–11), pp. 2314–2324
[36] Vega, R., Sanchez-Ante, G., Falcon-Morales, L.E., et al.: 'Retinal vessel extraction using lattice neural networks with dendritic processing', Comput. Biol. Med., 2015, 58, pp. 20–30
[37] Chakraborti, T., Jha, D.K., Chowdhury, A.S., et al.: 'A self-adaptive matched filter for retinal blood vessel detection', Mach. Vis. Appl., 2014, 26, (1), pp. 55–68
[38] Fraz, M.M., Remagnino, P., Hoppe, A., et al.: 'Retinal vessel extraction using first-order derivative', no date, pp. 410–420
[39] Fraz, M.M., Basit, A., Barman, S.A.: 'Application of morphological bit planes in retinal blood vessel extraction', J. Digit. Imaging, 2013, 26, (2), pp. 274–286
[40] Odstrcilik, J., Kolar, R., Budai, A., et al.: 'Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database', IET Image Process., 2013, 7, (4), pp. 373–383
[41] Yin, Y., Adel, M., Bourennane, S.: 'Automatic segmentation and measurement of vasculature in retinal fundus images using probabilistic formulation', Comput. Math. Meth. Med., 2013, 2013, Article ID 260410
[42] Kass, M., Witkin, A., Terzopoulos, D.: 'Snakes: active contour models', Int. J. Comput. Vis., 1988, 1, (4), pp. 321–331
[43] Xu, C., Prince, J.L.: 'Gradient vector flow: a new external force for snakes'. Proc. 1997 IEEE Computer Society Conf. Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1997, pp. 66–71
[44] Chellappa, R.: 'A 3-D depth map from one or more images', CVGIP: Image Understanding, 1991, 53, (2), pp. 219–226
[45] Pizer, S.M., Amburn, E.P., Austin, J.D., et al.: 'Adaptive histogram equalization and its variations', Comput. Vision Graph. Image Process., 1987, 39, (3), pp. 355–368
[46] Bai, X., Zhou, F., Xie, Y., et al.: 'Modified top-hat transformation based on contour structuring element to detect infrared small target'. 2008 3rd IEEE Conf. Industrial Electronics Applications (ICIEA 2008), Singapore, 2008, pp. 575–579
[47] Nafchi, H.Z., Moghaddam, R.F., Cheriet, M.: 'Phase-based binarization of ancient document images: model and applications', IEEE Trans. Image Process., 2014, 23, (7), pp. 2916–2930
[48] Nafchi, H.Z., Kanan, H.R.: 'A phase congruency based document binarization'. Proc. Int. Conf. on Image and Signal Processing, Agadir, Morocco, June 2012, pp. 113–121
[49] Kovesi, P.: 'Phase preserving denoising of images'. DICTA '99, Fifth Int. Biennial Conf. Digital Image Computing, Techniques and Applications, Perth, Australia, 1999, pp. 212–217
[50] Morrone, M.C., Ross, J., Burr, D.C., et al.: 'Mach bands are phase dependent', Nature, 1986, 324, (6049), pp. 250–253
[51] Papari, G., Petkov, N.: 'Edge and line oriented contour detection: state of the art', Image Vis. Comput., 2011, 29, (2–3), pp. 79–103
[52] Otsu, N.: 'A threshold selection method from gray-level histograms', IEEE Trans. Syst. Man Cybern., 1979, 9, (1), pp. 62–66
[53] Zhang, J., Hu, J.: 'Image segmentation based on 2D Otsu method with histogram analysis'. 2008 Int. Conf. Computer Science and Software Engineering, Hubei, China, 2008, pp. 105–108
[54] Morse, B.S.: 'Lecture 21: image understanding', Image (Rochester, NY), 2000, pp. 1998–2000
[55] Perez-Rovira, A., Zutis, K., Hubschman, J.P., et al.: 'Improving vessel segmentation in ultra-wide field-of-view retinal fluorescein angiograms'. Proc. Annual Int. Conf. IEEE Engineering in Medicine and Biology Society (EMBS), Boston, USA, 2011, pp. 2614–2617
[56] Zijdenbos, A.P., Dawant, B.M., Margolin, R.A., et al.: 'Morphometric analysis of white matter lesions in MR images: method and validation', IEEE Trans. Med. Imaging, 1994, 13, (4), pp. 716–724
[57] Barchiesi, M., Kang, S.H., Le, T.M., et al.: 'A variational model for infinite perimeter segmentations based on Lipschitz level set functions: denoising while keeping finely oscillatory boundaries', Multiscale Model. Simul., 2010, 8, (5), pp. 1715–1741
[58] Fraz, M.M., Rudnicka, A.R., Owen, C.G., et al.: 'Delineation of blood vessels in pediatric retinal images using decision trees-based ensemble classification', Int. J. Comput. Assist. Radiol. Surg., 2014, 9, (5), pp. 795–811
[59] Fraz, M.M., Remagnino, P., Hoppe, A., et al.: 'An ensemble classification-based approach applied to retinal blood vessel segmentation', IEEE Trans. Biomed. Eng., 2012, 59, (9), pp. 2538–2548
[60] Azzopardi, G., Strisciuglio, N., Vento, M., et al.: 'Trainable COSFIRE filters for vessel delineation with application to retinal images', Med. Image Anal., 2015, 19, (1), pp. 46–57
[61] Zhang, B., Huang, S., et al.: 'Multi-scale neural networks for retinal blood vessels segmentation', no date, (MIDL 2018), pp. 1–11
[62] Fan, Z., Lu, J., et al.: 'A hierarchical image matting model for blood vessel segmentation in fundus images', 2017, pp. 1–10
[63] Zhao, Y., Liu, Y., Wu, X., et al.: 'Retinal vessel segmentation: an efficient graph cut approach with retinex and local phase', PLoS One, 2015, 10, (4), pp. 1–22
[64] Shrichandran, G.V., Sathiyamoorthy, S., Malarchelvi, P.D.S.K.: 'An efficient segmentation of retinal blood vessel using quantum evolutionary algorithm', TAGA Journal, 2018, 14, pp. 671–685