Automated Diabetic Macular Edema (DME) Grading

Automated Diabetic Macular Edema (DME) Grading System using DWT, DCT Features and Maculopathy Index

U. Rajendra Acharya (a,b,c), Muthu Rama Krishnan Mookiah (a,*), Joel E. W. Koh (a), Jen Hong Tan (a), Sulatha V. Bhandary (d), A. Krishna Rao (d), Yuki Hagiwara (a), Chua Kuang Chua (a), Augustinus Laude (e)

a Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489
b Department of Biomedical Engineering, School of Science and Technology, SIM University, Singapore 599491
c Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Malaysia 50603
d Department of Ophthalmology, Kasturba Medical College, Manipal, India 576104
e National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore 308433

*Corresponding author. Postal address: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489. Telephone: +65 6460 7889. Email: [email protected]

Abstract

Diabetic macular edema (DME) is caused by prolonged and uncontrolled diabetes mellitus (DM) and affects the vision of diabetic subjects. DME is graded using the location of exudates relative to the macula. It is clinically diagnosed from fundus images, which is a tedious and time-consuming process. Regular eye screening and subsequent treatment may prevent vision loss. Hence, in this work, a hybrid system based on the radon transform (RT), discrete wavelet transform (DWT) and discrete cosine transform (DCT) is proposed for automated detection of DME. Contrast limited adaptive histogram equalisation (CLAHE) is used to enhance the fundus image contrast. The enhanced images are subjected to RT to obtain sinograms, and DWT is applied on these sinograms to extract wavelet coefficients (approximate, horizontal, vertical and diagonal). DCT is applied on the approximate coefficients to obtain 2D-DCT coefficients. Further, these coefficients are converted into a 1D vector whose dimensionality is reduced using locality sensitive discriminant analysis (LSDA). Finally, various supervised classifiers are used to classify the three classes using significant features. Our proposed technique yielded classification accuracies of 100% and 97.01% using two and seven significant features for the private and public (MESSIDOR) datasets respectively. Also, a maculopathy index is formulated using two significant features to discriminate the three classes distinctly. Hence, the obtained results suggest that this system can be used as an eye screening tool for diabetic subjects.

Keywords: Decision support system, Diabetic macular edema, Discrete cosine transform, Discrete wavelet transform, Fundus imaging, Locality sensitive discriminant analysis, Radon transform.

1. Introduction

Diabetes mellitus (DM) is caused by damage to pancreatic β-cells [1, 2, 3]. It is classified into insulin deficiency (Type-I) and insulin resistance (Type-II) diabetes [1, 2, 3]. According to the latest statistics, worldwide diabetes prevalence is estimated to increase from 8.3% to 10.1% between 2013 and 2035 [4]. Currently, 382 million people are affected globally by diabetes, and this figure is estimated to rise to 471 million by 2035 [4]. DM affects the vital organs of the human body, including the blood vessels of the retina, causing diabetic retinopathy (DR) [5]. DR is classified into non-proliferative diabetic retinopathy (NPDR), proliferative diabetic retinopathy (PDR) and diabetic macular edema (DME) [5] based on the location and presence of clinical features, viz. microaneurysms (MA), cotton wool spots (CWS), hemorrhages (HA), hard exudates (HE) and neovascularization (NV) [5]. The Early Treatment Diabetic Retinopathy Study (ETDRS) characterizes DME by the presence of hard exudates and hemorrhages, with and without microaneurysms, and blot hemorrhages within 2 disc diameters (DD) from the centre of the macula [6]. DME affects the macula, which is responsible for central vision, and leads to blindness [3, 5, 6]. Globally, 21 million people are identified with DME and its prevalence rate is 10.2% [7]. The types of DME (see Figure 1) are briefly described in Table 1.

Table 1: Types and clinical features of DME [3, 8, 9, 10, 11].

DME type                                                         | Clinical features                                                                                   | Location of clinical features
Non-clinically significant macular edema (NCSME) (see Figure 1b) | Thickening of retina; presence of hard exudates; presence of hemorrhages with and without microaneurysms | >1 and ≤2 DD from the fovea centre
Clinically significant macular edema (CSME) (see Figure 1c)      | Thickening of retina; presence of hard exudates                                                     | ≤1 DD from the fovea centre

Diabetic macular edema (DME) is diagnosed by clinical examination of the eye using biomicroscopy, fundus ophthalmoscopy, fluorescein angiography (FA), colour fundus imaging, optical coherence tomography (OCT), and the retinal thickness analyser (RTA) [3]. Among these imaging modalities, fundus photography is economical and easily available [3, 5]. However, clinical diagnosis takes considerable time and effort [3, 5, 10]. Hence, various automated methods [12, 13, 14, 15, 16, 17, 18, 19, 20, 9, 21, 22] have been proposed for DME grading using image processing and pattern recognition techniques.


Figure 1: Representative fundus images: (a) normal, (b) non-clinically significant macular edema, and (c) clinically significant macular edema (MESSIDOR dataset).

The reported literature [12, 13, 14, 15, 16, 17, 18, 19, 20, 9, 21, 22, 23, 24, 25, 26] used the exudate location relative to the macula to discriminate normal and DME classes. Tariq et al. [13] used a geometrical relation to isolate the macula, and a Gabor filter bank with thresholding to segment exudates. Finally, a support vector machine (SVM) classifier was used to discriminate non-clinically significant macular edema (NCSME) and clinically significant macular edema (CSME) classes. Their algorithm eliminates blood vessels and the optic disc (OD) to reduce false positives in exudates detection. Morphological and texture methods with a multi-class SVM are used in [14]. Their triangulation approach accurately localizes the macula to aid DME grading. A Gabor filter, Otsu thresholding, and the Hough transform are used in [15] for DME grading. Features such as area, mean intensity value, energy and the mean value of the filter bank response are used to achieve better exudates segmentation. Alipour et al. [16] used the curvelet transform on the isolated fovea. Their method is able to identify ischemic maculopathy by calculating the non-perfusion area. Tariq et al. [12] and Hunter et al. [17] used morphological features for exudates detection after removing the OD and blood vessels. Siddalingaswamy et al. [18] used unsupervised clustering and the location of exudates to perform DME grading. Their method used a rectangular search space to locate the macula, which yielded a high detection sensitivity of 95.60%. Exudates segmentation using morphological image processing and neural networks (NN) is used in [19, 20]. Their method obtained a specificity of 100%, which


significantly reduced the workload of clinicians. Morphological filters and illumination correction are used in [9] to discriminate NCSME and CSME. Their system is able to reduce the manual grading workload by 36.3%. Giancardo et al. [21] used wavelets and exudates location to discriminate normal and DME. Their exudates detection algorithm does not require ground truth at the lesion level to reduce false positives. The watershed transform and Sobel filter used in [22] are able to segment the exudates and use their location to grade DME. Their method yielded a specificity of 90.20%. Medhi et al. [23] used image inpainting and morphological image processing for the segmentation of exudates to detect DME. Particle swarm optimization (PSO) and mathematical morphology are used in [24] for DME grading. A Gabor filter, thresholding, and morphological processing are used in [25] to discriminate normal, NCSME and CSME. Top-hat filtering and thresholding are used in [26] to segment exudates and exploit the geometrical relation between macula and exudates. Several authors [17, 27, 28, 29, 30, 31, 32] have proposed feature extraction methods that grade DME stages without segmenting exudates. Color, texture, and morphological features are used in [17] for automated grading of referral maculopathy. Exudates are identified using contrast, lesion intensity peak point, and an NN to discriminate normal and DME [17]. Motion pattern analysis and the radon transform (RT) used in [28] are able to classify normal, NCSME and CSME. Their method enhances the features of HE for better recognition. Texture features, namely fractal dimension (FD), local binary pattern (LBP), laws mask energy (LME) and Gabor wavelets, with a fuzzy Sugeno (FS) classifier are used in [29] to detect DME. Their method used different feature ranking schemes to achieve better classification performance. Dual-tree complex wavelet transform (DTCWT) and Gaussian data description (GDD) feature extraction methods are used in [30] to grade DME stages. Their method obtained a lower precision rate of 78.23% for DME grading. Edge histogram descriptor (EHD) based feature extraction is used in [31] to classify normal and DME. Their method can be used for content-based image retrieval and yielded a precision rate of 79.2% in the classification of normal and DME stages. Mookiah et al. [27] compared the classification of normal, NCSME and CSME using higher order spectra (HOS) cumulant features and reported an accuracy of 95.56%. Texture features and a Sugeno fuzzy classifier are used in [33] to perform DME grading with a reported accuracy of 86.67%. Discrete wavelet transform (DWT), entropy features and a neuro-fuzzy inference system are used in [32] to grade DME stages and obtained an accuracy of 98.55%.


Like these approaches, the proposed method does not require segmentation and localization of exudates. It starts with pre-processing followed by the radon transform (RT). A top-hat transformation is then used to enhance the gradient between dark and bright pixels. The discrete wavelet transform (DWT) and discrete cosine transform (DCT) are used to decompose the sinogram into various frequency coefficients. The 2-D coefficient matrix is then converted into a 1-D vector in a zig-zag fashion. The dimension of the 1-D features is reduced using locality sensitive discriminant analysis (LSDA), and finally various supervised classifiers are used to discriminate the three classes. The block diagram in Figure 2 shows the steps of the proposed method.

Figure 2: Schematic diagram of DME grading system.

Fundus image acquisition, methods used in the proposed technique and formulation of maculopathy index are explained in section 2. Results are described in section 3 and discussed in section 4. Finally, the paper is concluded in section 5.


2. Materials and methods

2.1. Fundus image dataset

Private dataset: The color fundus images were acquired using a Zeiss FF450+ mydriatic fundus camera with a 45° field of view (FOV) and stored in JPEG format with an image size of 480 × 364 pixels. These images were collected from the Department of Ophthalmology, Kasturba Medical College (KMC), Manipal, India. Informed consent was obtained from patients to use these images for research, and the images were reviewed and certified by clinicians. In this work, we used 100 normal, 100 non-clinically significant macular edema (NCSME) and 100 clinically significant macular edema (CSME) images.

MESSIDOR dataset: The images were acquired using a TopCon TRC NW6 non-mydriatic fundus camera with a 45° FOV and resolutions of 1440 × 960, 2240 × 1488 and 2304 × 1536 [34]. This dataset provides risk 1 (NCSME) and risk 2 (CSME) gradings [34]. In this work, 300 images (normal = 100, NCSME = 100 and CSME = 100) resized to 1440 × 960 were used.

2.2. Pre-processing

The green channel image contrast is enhanced using contrast limited adaptive histogram equalisation (CLAHE) [35]. This method divides the image into small blocks and performs adaptive histogram equalization within each block to stretch the image contrast [35]. The pixel intensity values of the enhanced image are proportional to the cumulative distribution function of intensities within each block [35]. The advantage of this method is that it does not amplify image artifacts and noise [35]. After contrast enhancement, the macula, OD, blood vessels, exudates, HA, and MA are more visible [36, 37, 38, 39, 27]. The feature extraction steps are illustrated in Figure 3.
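As an illustration, the green-channel extraction and CLAHE step could be sketched as follows. This is a minimal sketch using scikit-image; the block size and clip limit are assumed values, not the paper's settings:

```python
import numpy as np
from skimage.exposure import equalize_adapthist

# Hypothetical sketch: take the green channel of an RGB fundus image and
# apply CLAHE. kernel_size and clip_limit are illustrative assumptions.
def enhance_green_channel(rgb: np.ndarray) -> np.ndarray:
    green = rgb[:, :, 1]                    # green channel of an RGB image
    # CLAHE: adaptive histogram equalisation over small blocks with clipping
    return equalize_adapthist(green, kernel_size=32, clip_limit=0.02)

rgb = np.zeros((64, 64, 3))
rgb[16:48, 16:48, 1] = 0.4                  # faint structure in the green channel
enhanced = enhance_green_channel(rgb)
print(enhanced.shape)                       # (64, 64)
```

The output is a single-channel, contrast-stretched image in [0, 1] on which the radon transform of the next step can operate.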


Figure 3: Illustration of outputs of Figure 2.

2.3. Radon transform

The RT was proposed by J. K. A. Radon [40] and is used for computed tomography (CT) image reconstruction from image integrals. It scans the image using line integrals at specific angles and yields line parameters [36, 39, 27, 41]. These features can be used to classify normal and abnormal classes [36, 39, 27, 41]. In this work, RT is used to generate sinograms at every 1°, with the angle varied from 0° to 179° [41]. The sinograms of normal, NCSME and CSME images are shown in the 2nd column of Figure 3.
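A minimal sinogram computation matching this description (0° to 179° in 1° steps) might look like the following, using scikit-image's `radon` on a toy image (the image content is illustrative only):

```python
import numpy as np
from skimage.transform import radon

# Toy image with a bright bar standing in for a lesion.
image = np.zeros((64, 64))
image[26:38, 30:34] = 1.0

theta = np.arange(0, 180, 1)                    # projection angles 0°..179°
sinogram = radon(image, theta=theta, circle=True)
print(sinogram.shape)                           # one column per projection angle
```

Each column of the sinogram is the line-integral profile of the image at one angle, so bright lesions appear as sinusoidal traces across the columns.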

2.4. Top-hat filtering

Top-hat filtering eliminates redundant information from images and enhances the edges of lesions by improving the gradient between dark and bright pixels [41, 42]. It has two types, white and black, based on the opening and closing operations with structuring elements such as line, octagon, diamond, pair, disk, square, periodic line and rectangle [41, 42]. The white top-hat extracts bright structures and the black top-hat extracts dark structures in the image [41, 42]. The output of RT (the sinogram) is subjected to top-hat filtering to improve DME recognition [41]. The top-hat filtering output is shown in the 3rd column of Figure 3 for normal, NCSME and CSME classes.
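The white/black top-hat pair described above can be sketched with SciPy's grayscale morphology; the structuring-element size is an assumed value and the input array is synthetic:

```python
import numpy as np
from scipy import ndimage

# Sinogram-like array with one isolated bright peak (illustrative data only).
sino = np.full((60, 180), 50.0)
sino[30, 90] = 200.0

# White top-hat (input minus opening) keeps small bright structures;
# black top-hat (closing minus input) keeps small dark structures.
white = ndimage.white_tophat(sino, size=5)
black = ndimage.black_tophat(sino, size=5)
print(white.max())                          # the isolated peak survives: 150.0
```

Structures smaller than the structuring element are passed through; the flat background is suppressed to zero, which is exactly the gradient enhancement the section describes.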


2.5. Discrete wavelet transform and discrete cosine transform based feature extraction

The top-hat transformed image is subjected to the discrete wavelet transform (DWT) with the Haar mother wavelet to achieve a higher recognition rate [41, 43]. The Haar wavelet picks up sudden changes and transitions very quickly, so more of this information can be represented; hence it is used in this work. DWT separates low and high-frequency components using low-pass and high-pass filters, yielding approximate (Ac), horizontal (Hc), vertical (Vc) and diagonal (Dc) coefficients [41, 43]. The approximate coefficients are fed to the discrete cosine transform (DCT) to obtain frequency-domain features, which have better discriminating ability than the DWT coefficients alone [41, 43]. The extracted DWT approximate (Ac) and DCT coefficients are shown in the 4th and 5th columns of Figure 3 for normal, NCSME and CSME classes. Finally, the extracted DCT coefficients are converted into a 1-D vector in a zig-zag manner [41].
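The chain of this subsection (one-level Haar DWT, 2-D DCT on the approximation band, zig-zag flattening) can be sketched as below. The Haar step is written out by hand for clarity and the 8 × 8 input is illustrative:

```python
import numpy as np
from scipy.fft import dctn

def haar_dwt2(x: np.ndarray):
    """One level of the orthonormal 2-D Haar transform: returns (Ac, Hc, Vc, Dc)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    Ac = (a + b + c + d) / 2.0        # approximation (low-low)
    Hc = (a + b - c - d) / 2.0        # horizontal detail
    Vc = (a - b + c - d) / 2.0        # vertical detail
    Dc = (a - b - c + d) / 2.0        # diagonal detail
    return Ac, Hc, Vc, Dc

def zigzag(m: np.ndarray) -> np.ndarray:
    """Flatten a 2-D array along anti-diagonals in alternating (zig-zag) order."""
    h, w = m.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1],
                                  p[1] if (p[0] + p[1]) % 2 else p[0]))
    return np.array([m[i, j] for i, j in order])

x = np.random.default_rng(0).random((8, 8))
Ac, _, _, _ = haar_dwt2(x)
features = zigzag(dctn(Ac, norm='ortho'))     # 1-D DCT feature vector
print(features.shape)                         # (16,)
```

Since the orthonormal DCT preserves energy and zig-zag flattening is just a reordering, the 1-D vector carries exactly the information of the approximation band, ordered from low to high frequency.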

2.6. Feature dimensionality reduction

The dimension of the DCT coefficients is reduced using LSDA [44], which finds the correlation between data and classes [44, 39]. Within-class compactness (Cc) and between-class separability (Cs) are computed in LSDA to reduce the data dimension [44]:

Cc = min Σ_lm (k_l − k_m)² W_c,lm    (1)

Cs = max Σ_lm (k_l − k_m)² W_s,lm    (2)

where k_l = αᵀX_l and k_m = αᵀX_m are the 1-D mappings of X_l and X_m, W_c and W_s are the within-class and between-class weight matrices respectively, and α denotes the projection direction [44, 39].


Further, analysis of variance (ANOVA) [45] is used to select the features. ANOVA compares the population means of more than two classes [45]; if the difference is large, the feature is considered significant [45]. The features are ranked by the F-value of the ANOVA test [45]: the feature with the highest F-value receives the first rank, and so on.
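On synthetic data, the F-value ranking could be sketched as follows; feature 0 is constructed to carry the class signal, so it receives the top rank:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n_per_class, n_features = 30, 4

# Three classes of Gaussian noise; only feature 0 gets a class-dependent shift.
X = [rng.normal(size=(n_per_class, n_features)) for _ in range(3)]
for k, shift in enumerate((0.0, 3.0, 6.0)):
    X[k][:, 0] += shift

# One-way ANOVA per feature; a larger F-value means better class separation.
f_values = [f_oneway(*(c[:, j] for c in X)).statistic for j in range(n_features)]
ranking = np.argsort(f_values)[::-1]          # feature indices, best first
print(ranking[0])                             # 0 — the discriminative feature wins
```

Feeding the classifiers features in this ranked order is what lets the paper report performance as a function of the number of top-ranked features.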

2.7. Classification

The decision tree (DT) classifier consists of root, internal and leaf nodes [46, 47]. A complex decision is split into several simple decisions at these nodes to obtain the final result [46, 47]. k-nearest neighbour (k-NN) is a memory-based, non-parametric classification approach which uses the distances to the k nearest neighbours to make the decision [48]. In this work, k = 5 provided the maximum classification performance. The probabilistic neural network (PNN) works based on Parzen window estimation and has input, pattern and summation layers [49]; the output of the summation layer is used to classify the classes from the derived distance vectors [49]. The spread parameter (σ) was varied from 0.01 to 1 in steps of 0.01, and the maximum classification performance was obtained at σ = 0.01. The support vector machine (SVM) classifier works on the structural risk minimization principle [50]. It uses the training samples to construct decision boundaries called hyperplanes that separate normal and abnormal classes [48, 50]. SVM uses linear (L), quadratic (Q), polynomial (P) and radial basis function (RBF) kernels to handle non-linearly separable data [48, 50]. The RBF kernel width (σ) was varied from 0.01 to 5 in steps of 0.01; a kernel width of σ = 0.5 yielded the maximum classification performance.
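A sketch of this classifier bank in scikit-learn on synthetic stand-in data follows. scikit-learn has no PNN, so that model is omitted, and the RBF width σ = 0.5 is translated to `gamma` = 1/(2σ²) = 2.0; everything else follows the hyperparameters named in the text:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 300-sample, 3-class feature matrix.
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           n_classes=3, random_state=0)

models = {
    'DT': DecisionTreeClassifier(random_state=0),
    'k-NN (k=5)': KNeighborsClassifier(n_neighbors=5),
    'SVM-L': SVC(kernel='linear'),
    'SVM-Q': SVC(kernel='poly', degree=2),
    'SVM-P': SVC(kernel='poly', degree=3),
    'SVM-RBF': SVC(kernel='rbf', gamma=2.0),   # gamma = 1/(2*sigma^2), sigma = 0.5
}
scores = {name: cross_val_score(m, X, y, cv=10).mean()   # ten-fold cross-validation
          for name, m in models.items()}
for name, acc in scores.items():
    print(f'{name}: {acc:.3f}')
```

The ten-fold cross-validation loop here mirrors the evaluation protocol used in the Results section.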

2.8. Maculopathy index

A maculopathy index is formulated for both the private (KMC) and MESSIDOR datasets, following the index-formulation approach proposed by Ghista [51]. Similar indices have been used to discriminate different diseases, namely diabetes [52], oral cancer [53], fatty liver disease [54], DR [55], sudden cardiac death [56], coronary artery disease [57], carotid plaque [58], glaucoma [59], thyroid disorders [60], and epilepsy [61]. In this work, the L3 and L17 features are used to develop the index in equation (3). The two features are empirically combined to produce an index which gives maximum distinction between the three classes. The maculopathy index therefore aids clinicians in a faster and more accurate diagnosis of normal, non-clinically significant macular edema (NCSME), and clinically significant macular edema (CSME) fundus images with just a single number.

Maculopathy Index = (2.56 × L3) + (1.53 × L17) − 0.92    (3)
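Equation (3) is a direct linear combination, so computing the index is a one-liner; the feature values below are made-up inputs for illustration:

```python
# Maculopathy index from equation (3); l3 and l17 are the two significant
# LSDA features. The inputs used below are illustrative, not real values.
def maculopathy_index(l3: float, l17: float) -> float:
    return 2.56 * l3 + 1.53 * l17 - 0.92

print(round(maculopathy_index(1.0, 1.0), 2))   # 3.17
```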

3. Results

The 1-D DCT coefficients are reduced to 30 LSDA features (L1, L2, L3, ..., L30) and ranked by F-value. The ranked features for the private and MESSIDOR datasets are shown in Figure 4 and Figure 5 respectively. These ranked features are used to discriminate normal, NCSME and CSME using decision tree (DT), k-nearest neighbour (k-NN), probabilistic neural network (PNN) and support vector machine (SVM) classifiers. Finally, the performance measures true positive (TP), true negative (TN), false positive (FP), false negative (FN), sensitivity (Sen), specificity (Spec) and accuracy (Acc) are calculated using ten-fold cross-validation. Table 2 and Table 3 show the classification performance for the private and MESSIDOR datasets respectively.
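The reported measures follow directly from the counts; for instance, recomputing the DT row of Table 2 (private dataset, 300 images):

```python
# Sen, Spec and Acc from the TP/TN/FP/FN counts of the DT classifier
# (private dataset, Table 2).
TP, TN, FP, FN = 195, 98, 2, 5
sen = 100 * TP / (TP + FN)                    # sensitivity (recall)
spec = 100 * TN / (TN + FP)                   # specificity
acc = 100 * (TP + TN) / (TP + TN + FP + FN)   # overall accuracy
print(round(sen, 2), round(spec, 2), round(acc, 2))   # 97.5 98.0 97.67
```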


Figure 4: Bar plot of ranked features for private dataset.

Figure 5: Bar plot of ranked features for MESSIDOR dataset.

The classification results for the private dataset are shown in Table 2. SVM-L and SVM-Q obtained the highest classification accuracy, sensitivity, and specificity of 100% using only two significant features. Similarly, PNN used two features and yielded an average accuracy of 98.67%, sensitivity of 98% and specificity of 100%, making it the second best among all classifiers (see Table 2).

Table 2: Performance measures of different classifiers for the private dataset.

Classifier         | NOF | TP  | TN  | FP | FN | Acc (%) | Sen (%) | Spec (%)
DT                 | 4   | 195 | 98  | 2  | 5  | 97.67   | 97.50   | 98
k-NN (k = 5)       | 5   | 187 | 99  | 1  | 13 | 95.33   | 93.50   | 99
PNN (σ = 0.01)     | 2   | 196 | 100 | 0  | 4  | 98.67   | 98      | 100
SVM-L              | 2   | 200 | 100 | 0  | 0  | 100     | 100     | 100
SVM-Q              | 2   | 200 | 100 | 0  | 0  | 100     | 100     | 100
SVM-P              | 7   | 192 | 98  | 2  | 8  | 96.67   | 96      | 98
SVM-RBF (σ = 0.50) | 3   | 196 | 98  | 2  | 4  | 98      | 98      | 98

*NOF: Number of features used

Table 3 shows the classification results for the MESSIDOR dataset. The DT classifier yielded the maximum classification accuracy of 97.01%, sensitivity of 92.14% and specificity of 99.07% using seven significant LSDA features. k-NN reported an accuracy of 96.36%, sensitivity of 88.65% and specificity of 99.63% using five features (see Table 3).

Table 3: Performance measures of different classifiers for the MESSIDOR dataset.

Classifier         | NOF | TP  | TN  | FP | FN  | Acc (%) | Sen (%) | Spec (%)
DT                 | 7   | 211 | 535 | 5  | 18  | 97.01   | 92.14   | 99.07
k-NN (k = 5)       | 5   | 203 | 538 | 2  | 26  | 96.36   | 88.65   | 99.63
PNN (σ = 0.01)     | 8   | 190 | 538 | 2  | 39  | 94.67   | 82.97   | 99.63
SVM-L              | 14  | 79  | 527 | 13 | 150 | 78.80   | 34.50   | 97.59
SVM-Q              | 12  | 175 | 533 | 7  | 54  | 92.07   | 76.42   | 98.70
SVM-P              | 12  | 190 | 534 | 6  | 39  | 94.15   | 82.97   | 98.89
SVM-RBF (σ = 0.50) | 8   | 199 | 535 | 5  | 30  | 95.45   | 86.90   | 99.07

*NOF: Number of features used

The confusion matrices of the SVM and DT classifiers for the private and MESSIDOR datasets are shown in Table 4 and Table 5 respectively. The classification accuracy of an individual class is the number of correctly classified instances in its column of the confusion matrix divided by the column total (see Table 4 and Table 5). The individual classification accuracies of normal, NCSME and CSME are all 100% for the private dataset, and 99.07%, 88% and 94.16% respectively for the MESSIDOR dataset.

Table 4: Confusion matrix of SVM-L and SVM-Q classifiers using the private dataset.

                     Predicted class
True class     | Normal | NCSME | CSME
Normal         | 100    | 0     | 0
NCSME          | 0      | 100   | 0
CSME           | 0      | 0     | 100
Accuracy (%)   | 100    | 100   | 100

Table 5: Confusion matrix of DT classifier using the MESSIDOR dataset.

                     Predicted class
True class     | Normal | NCSME | CSME
Normal         | 535    | 3     | 2
NCSME          | 1      | 66    | 7
CSME           | 4      | 6     | 145
Accuracy (%)   | 99.07  | 88    | 94.16
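The per-class accuracies quoted above can be reproduced from Table 5 by dividing each diagonal entry by its column total:

```python
import numpy as np

# DT confusion matrix for MESSIDOR (Table 5); rows = true class,
# columns = predicted class, in the order Normal, NCSME, CSME.
cm = np.array([[535,  3,   2],
               [  1, 66,   7],
               [  4,  6, 145]])
per_class_acc = 100 * np.diag(cm) / cm.sum(axis=0)
print(np.round(per_class_acc, 2))        # [99.07 88.   94.16]
```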

The plot of the performance measures (accuracy, sensitivity, and specificity) versus the number of features for the MESSIDOR dataset is shown in Figure 6. It reveals that seven significant features yield the maximum classification performance. Moreover, the accuracy and specificity remain consistent as the number of features is varied (see Figure 6).

Figure 6: Plot of performance measures versus a number of features for MESSIDOR dataset.

In addition to the classification results, the maculopathy index for the private and MESSIDOR datasets is shown in Figure 7a and Figure 7b respectively. The index is developed using the same two significant LSDA features for both datasets. The plots are distinct for the normal, NCSME and CSME classes.

(a)

(b)

Figure 7: Error bar plot of maculopathy index (a) Private (KMC), (b) MESSIDOR dataset for normal, NCSME and CSME classes.

4. Discussion

Prolonged DM affects the retinal blood vessels and leads to DME [3, 5]. Regular eye examination may prevent possible blindness due to diabetic retinopathy (DR), glaucoma, diabetic macular edema (DME) and age-related macular degeneration (AMD) [3, 5]. Various imaging methods, namely fluorescein angiography (FA), colour fundus imaging, optical coherence tomography (OCT), and the retinal thickness analyser (RTA), can be used to examine these eye diseases; however, fundus photography is a low-cost imaging modality compared to the rest [3, 5]. Moreover, advances in image processing, data mining techniques and high-performance computing have paved the way for accurate and affordable eye screening methods [3, 5]. Hence, in this work a combination of radon transform (RT), top-hat transform, discrete wavelet transform (DWT), and discrete cosine transform (DCT) is proposed to characterize and classify normal, NCSME and CSME fundus images; in the past, a similar method has been used for iris recognition [41]. RT enhances the line features present in the image [41] and helps to extract lines and curves [41] from the anatomical structures (OD and blood vessels) and lesions (exudates, MA and HA) present in the fundus images. The output of RT is a sinogram, which reflects the pixel variations of normal, NCSME and CSME images. The top-hat transformation


further enhances these changes using black and white filters [41]. DWT decomposes the image into different levels and performs spatial-domain analysis [36, 39, 41]. It separates the high-frequency (detail coefficients) and low-frequency (approximate coefficients) components [36, 39] of the top-hat transformed image [41], capturing sudden changes [36, 39, 41] such as exudates. Further, DCT is performed on the DWT approximate coefficients, yielding frequency-domain features [41]. It has the ability to capture the signal energy concentrated in the low-frequency (approximate) region [54, 62]; hence, the macula and exudates are well characterized by this technique [36, 39]. Finally, the dimension of the 1D-DCT coefficients is reduced using LSDA. The proposed approach therefore captures sudden changes (exudates and other lesions) in DME images, yielding high classification performance [41]. The classifiers SVM-L and SVM-Q yielded the highest classification accuracy of 100% using only two significant features for the private dataset (see Table 2), while the DT classifier obtained a maximum classification accuracy of 97.01% for the MESSIDOR dataset using seven significant features (see Table 3). The accuracy plot (see Figure 6) also reveals that the classification performance obtained for the MESSIDOR dataset is consistent. The proposed maculopathy index shows clear discrimination between the three classes (see Figure 7a and Figure 7b) using a single number. The proposed approach is compared with the existing literature in Table 6.

Table 6: Summary of automated DME grading methods [3, 27].

Authors | Database (NOIU) | Methods and classifiers | Performance measure

Exudates based DME grading
Nayak et al. (2009) [20] | Private (350) | Matched correlation and neural network | Sen-95.40%, Spec-100%
Fleming et al. (2010) [9] | Private (14406) | Morphological image processing, exudates location and Kappa statistics | Acc-99.2% (NCSME), Acc-97.3% (CSME)
Siddalingaswamy et al. (2010) [18] | Private (148) | Clustering and exudate location | Sen-95.60%, Spec-96.15%
Ang et al. (2011) [19] | Private (90) | Mathematical morphology and neural network | Sen-90%, Spec-100%, Acc-96.67%
Lim et al. (2011) [22] | MESSIDOR (88) | Watershed transform and exudate location | Sen-80.90%, Spec-90.20%, Acc-85.20%
Akram et al. (2012) [15] | MESSIDOR (1200) | Morphological image processing, filter bank response features, energy and support vector machine | Sen-92.60%, Spec-97.80%, Acc-97.30%
Alipour et al. (2012) [16] | Private (75) | Curvelet and foveal avascular zone (FAZ) size | Sen-93%, Spec-86%
Giancardo et al. (2012) [21] | HEI-MED and MESSIDOR (1200) | Wavelet transform, Kirsch edge detection, color, and support vector machine | AUC-0.94
Medhi et al. (2012) [23] | DRIVE, DIARETDB1, ARIA (50) | Morphological image processing and exudate location | Sen-96%
Punnolil et al. (2013) [14] | DRIVE, DIARETDB1, STARE (251) | Morphological features of exudates, texture, and SVM | Sen-96.89%, Spec-97.15%
Sreejini and Govindan (2013) [24] | MESSIDOR (100) | PSO, morphological image processing, and exudates location | Sen-82.5%, Spec-100%, Acc-93%
Tariq et al. (2013) [13] | MESSIDOR and STARE (1281) | Gabor filter, thresholding and support vector machine | Acc-97.20% (MESSIDOR), Acc-97.53% (STARE)
Tariq et al. (2013) [12] | MESSIDOR and STARE (1281) | Morphological features of exudates, Gabor filter, thresholding, texture and Gaussian mixture model | Acc-97.30% (MESSIDOR), Acc-97.89% (STARE)
Zaidi et al. (2013) [25] | MESSIDOR (1200) | Gabor filter, morphological image processing, thresholding, Bayesian classifier and exudate location | Sen-93.9%, Spec-95.8%, Acc-94.1%
Medhi and Dandapat (2014) [26] | DRIVE, DIARETDB1 and HRF (174) | Top-hat filtering, thresholding, and exudate location | Sen-97.5%, Spec-98.7%

Texture features based DME grading
Hunter et al. (2011) [17] | Private (1000) | Morphological features of exudates, intensity, color, texture | Sen-97%, Spec-80%
Deepak and Sivaswamy (2012) [28] | HEI-MED, MESSIDOR, MESSIDOR(IR), DIARETDB1 and combined dataset (590) | Motion patterns, Gaussian data description and principal component analysis data description (PCADD) | Highest AUC-0.99 (MESSIDOR(IR))
Chua et al. (2013) [33] | Private (300) | Texture and fuzzy classifier | Sen-100%, Spec-100%, Acc-86.67%
Chowriappa et al. (2013) [29] | Private (90) | Texture and ensemble classifier | Acc-96.70%
Baby and Chandy (2013) [30] | MESSIDOR (1200) | Gaussian data description, dual-tree complex wavelet transform, and divergence | Precision rate-78.23%
Naguib et al. (2013) [31] | MESSIDOR (100) | Morphological image processing, EHD and distance measure | Precision rate-79.2%
Mookiah et al. (2015) [27] | Private and MESSIDOR (600) | Higher order statistics, naive Bayes, and support vector machine | Acc-95.56% (Private), Acc-95.93% (MESSIDOR)
Ibrahim et al. (2015) [32] | Private (300) | Entropies, fuzzy Sugeno, discrete wavelet transform, and neuro-fuzzy inference | Acc-98.55%
This work | Private and MESSIDOR (600) | Radon transform, top-hat filtering, discrete wavelet transform, discrete cosine transform, decision tree and support vector machine | Acc-100% (Private), Acc-97.01% (MESSIDOR); developed a maculopathy index to discriminate the three classes using a single number

*Acc: Accuracy; Sen: Sensitivity; Spec: Specificity; AUC: Area under the curve; ARIA: Automated Retinal Image Analysis; DIARETDB1: Standard Diabetic Retinopathy Database; DRIVE: Digital Retinal Images for Vessel Extraction; HEI-MED: Hamilton Eye Institute Macular Edema; HRF: High Resolution Fundus; STARE: Structural Analysis of the Retina; NOIU: Number of Images Used

We have also analysed the effect of the top-hat transform and DWT on the RT-DCT based feature extraction approach [54]. Our results (see Table 7) show that the proposed method yielded the highest average accuracies of 100% and 97.01% using two and seven features for the private and MESSIDOR datasets respectively. The performance falls to 98.67% and 94.41% using five and eight features respectively without the top-hat transform and DWT. This clearly shows that the top-hat transform and DWT boost the accuracy using fewer features; their combination is able to capture the subtle changes in the pixel variations around the macula.


Table 7: Best classification performance with and without the combination of top-hat and discrete wavelet transform (DWT) feature extraction for the private and MESSIDOR datasets.

Dataset, feature extraction and classifier combination | NOF | Acc (%) | Sen (%) | Spec (%)
Private, RT-DCT without top-hat transform and DWT (PNN) | 5 | 98.67 | 98.50 | 99
Private, Proposed (SVM-L)                               | 2 | 100   | 100   | 100
MESSIDOR, RT-DCT without top-hat transform and DWT (DT) | 8 | 94.41 | 82.97 | 99.26
MESSIDOR, Proposed (DT)                                 | 7 | 97.01 | 92.14 | 99.07

*NOF: Number of Features

The proposed approach has the following advantages: • Yielded highest classification accuracy of 100% for private and 97.01% for MESSIDOR dataset compared to other reported works in Table 6. • The method is completely automatic and does not involve any segmentation algorithms. • The system is validated with private (KMC) and public (MESSIDOR) datasets. • Used maximum number of images (600 images) (in Table 6) to develop our algorithm using a minimum number of features.


• The proposed method is able to yield higher accuracy with the top-hat transform and DWT (see Table 7).
• A maculopathy index is developed for both the private and public databases using the same two features; hence, the three classes can be discriminated using a single number.
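The idea of collapsing the two selected features into one discriminative number can be sketched as below. This is a hypothetical illustration only: the paper's exact index formulation is not reproduced here, and the weights, feature values, and function name are invented; the sketch merely shows how an integrated index (in the spirit of [51]-[56]) turns two features into one score with non-overlapping ranges per class.

```python
import numpy as np

def maculopathy_index(f1, f2, w1=1.0, w2=1.0):
    """Hypothetical integrated index: a weighted combination of the two
    selected LSDA features (weights w1, w2 are illustrative, not the paper's)."""
    return w1 * f1 + w2 * f2

# Illustrative (synthetic) feature means for the three classes.
classes = {"normal": (0.1, 0.2), "NCSME": (0.6, 0.5), "CSME": (1.2, 1.1)}
index = {name: maculopathy_index(*feats) for name, feats in classes.items()}
# If the index ranges do not overlap, two thresholds suffice to grade DME.
```

In practice the weights would be tuned on the training data so that the index ranges of the normal, NCSME and CSME classes are well separated.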

However, the limitations of this work are as follows:
• The system can correctly detect NCSME with an accuracy of only 88% using the MESSIDOR database.
• The developed prototype needs to be tested with a large database before it is deployed in polyclinics and developing countries.

5. Conclusion

Color fundus images can be used to diagnose diabetic retinopathy (DR), diabetic macular edema (DME), age-related macular degeneration (AMD), and glaucoma. In this work, a hybrid feature extraction approach is proposed using the radon transform (RT), top-hat transform, discrete wavelet transform (DWT), and discrete cosine transform (DCT) to classify the normal, non-clinically significant macular edema (NCSME) and clinically significant macular edema (CSME) classes. The proposed approach obtained a specificity of 99%, indicating that the number of false positives (FP) is almost zero. This can reduce the workload of clinicians by 50%. Moreover, the maculopathy index accurately discriminates the normal, NCSME and CSME classes using a single number. Hence, the proposed method can be used to develop an eye screening tool to aid clinicians. Further, it can be used for teleophthalmology and mobile-based individualized eye screening. However, the performance of the system needs to be tested with a large and diverse database. Also, the detection of the early stage of maculopathy (NCSME) needs to be improved using better feature extraction methods and robust data mining techniques.

Conflicts of interest

The authors do not have any related conflict of interest.

Appendix A. Bibliography

[1] F. Harney, Diabetic retinopathy, Medicine 34 (3) (2006) 95-98.
[2] A. A. Alghadyan, Diabetic retinopathy - an update, Saudi Journal of Ophthalmology 25 (1) (2011) 99-111.
[3] M. R. K. Mookiah, U. R. Acharya, H. Fujita, J. H. Tan, C. K. Chua, S. V. Bhandary, A. Laude, L. Tong, Application of different imaging modalities for diagnosis of diabetic macular edema: A review, Computers in Biology and Medicine 66 (2015) 295-315.
[4] International Diabetes Federation, Diabetes Atlas, http://www.idf.org/sites/default/files/EN_6E_Atlas_Full_0.pdf, online; accessed 29 February 2016 (2013).
[5] M. R. K. Mookiah, U. R. Acharya, C. K. Chua, C. M. Lim, E. Ng, A. Laude, Computer-aided diagnosis of diabetic retinopathy: A review, Computers in Biology and Medicine 43 (12) (2013) 2136-2155.
[6] Early Treatment Diabetic Retinopathy Study Research Group, Treatment techniques and clinical guidelines for photocoagulation of diabetic macular edema: Early Treatment Diabetic Retinopathy Study report number 2, Ophthalmology 94 (7) (1987) 761-774.
[7] J. W. Yau, S. L. Rogers, R. Kawasaki, E. L. Lamoureux, J. W. Kowalski, T. Bek, S.-J. Chen, J. M. Dekker, A. Fletcher, J. Grauslund, et al., Global prevalence and major risk factors of diabetic retinopathy, Diabetes Care (2012) DC_111909.
[8] C. A. Kiire, M. Porta, V. Chong, Medical management for the prevention and treatment of diabetic macular edema, Survey of Ophthalmology 58 (5) (2013) 459-465.
[9] A. D. Fleming, K. A. Goatman, S. Philip, G. J. Prescott, P. F. Sharp, J. A. Olson, Automated grading for diabetic retinopathy: a large-scale audit using arbitration by clinical experts, British Journal of Ophthalmology 94 (12) (2010) 1606-1610.
[10] A. D. Fleming, K. A. Goatman, S. Philip, G. J. Williams, G. J. Prescott, G. S. Scotland, P. McNamee, G. P. Leese, W. N. Wykes, P. F. Sharp, et al., The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy, British Journal of Ophthalmology 94 (6) (2010) 706-711.
[11] T. A. Ciulla, A. G. Amador, B. Zinman, Diabetic retinopathy and diabetic macular edema: pathophysiology, screening, and novel therapies, Diabetes Care 26 (9) (2003) 2653-2664.
[12] A. Tariq, M. U. Akram, A. Shaukat, S. A. Khan, Automated detection and grading of diabetic maculopathy in digital retinal images, Journal of Digital Imaging (2013) 1-10.
[13] A. Tariq, M. Akram, A. Shaukat, S. Khan, A computer aided system for grading of maculopathy, in: Biomedical Engineering Conference (CIBEC), 2012 Cairo International, 2012, pp. 31-34.
[14] A. Punnolil, A novel approach for diagnosis and severity grading of diabetic maculopathy, in: Advances in Computing, Communications and Informatics (ICACCI), 2013 International Conference on, IEEE, 2013, pp. 1230-1235.
[15] M. U. Akram, M. Akhtar, M. Y. Javed, An automated system for the grading of diabetic maculopathy in fundus images, in: Neural Information Processing, Springer, 2012, pp. 36-43.


[16] S. H. M. Alipour, H. Rabbani, M. Akhlaghi, A. M. Dehnavi, S. H. Javanmard, Analysis of foveal avascular zone for grading of diabetic retinopathy severity based on curvelet transform, Graefe's Archive for Clinical and Experimental Ophthalmology 250 (11) (2012) 1607-1614.
[17] A. Hunter, J. A. Lowell, B. Ryder, A. Basu, D. Steel, Automated diagnosis of referable maculopathy in diabetic retinopathy screening, in: Engineering in Medicine and Biology Society (EMBC), 2011 Annual International Conference of the IEEE, IEEE, 2011, pp. 3375-3378.
[18] P. Siddalingaswamy, K. G. Prabhu, Automatic grading of diabetic maculopathy severity levels, in: Systems in Medicine and Biology (ICSMB), 2010 International Conference on, IEEE, 2010, pp. 331-334.
[19] M. H. Ang, U. R. Acharya, S. V. Sree, T.-C. Lim, J. S. Suri, Computer-based identification of diabetic maculopathy stages using fundus images, in: Multi-Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies, Springer, 2011, pp. 377-399.
[20] J. Nayak, P. S. Bhat, U. Acharya, Automatic identification of diabetic maculopathy stages using fundus images, Journal of Medical Engineering & Technology 33 (2) (2009) 119-129.
[21] L. Giancardo, F. Meriaudeau, T. P. Karnowski, Y. Li, S. Garg, K. W. Tobin, E. Chaum, Exudate-based diabetic macular edema detection in fundus images using publicly available datasets, Medical Image Analysis 16 (1) (2012) 216-226.
[22] S. Lim, W. Zaki, A. Hussain, S. Lim, S. Kusalavan, Automatic classification of diabetic macular edema in digital fundus images, in: Humanities, Science and Engineering (CHUSER), 2011 IEEE Colloquium on, IEEE, 2011, pp. 265-269.
[23] J. P. Medhi, M. K. Nath, S. Dandapat, Automatic grading of macular degeneration from color fundus images, in: Information and Communication Technologies (WICT), 2012 World Congress on, IEEE, 2012, pp. 511-514.
[24] K. Sreejini, V. Govindan, Automatic grading of severity of diabetic macular edema using color fundus images, in: Advances in Computing and Communications (ICACC), 2013 Third International Conference on, IEEE, 2013, pp. 177-180.
[25] Z. Y. Zaidi, M. U. Akram, A. Tariq, Retinal image analysis for diagnosis of macular edema using digital fundus images, in: Applied Electrical Engineering and Computing Technologies (AEECT), 2013 IEEE Jordan Conference on, IEEE, 2013, pp. 1-5.
[26] J. P. Medhi, S. Dandapat, Analysis of maculopathy in color fundus images, in: India Conference (INDICON), 2014 Annual IEEE, IEEE, 2014, pp. 1-4.
[27] M. R. K. Mookiah, U. R. Acharya, V. Chandran, R. J. Martis, J. H. Tan, J. E. Koh, C. K. Chua, L. Tong, A. Laude, Application of higher-order spectra for automated grading of diabetic maculopathy, Medical & Biological Engineering & Computing 53 (12) (2015) 1319-1331.
[28] K. S. Deepak, J. Sivaswamy, Automatic assessment of macular edema from color retinal images, IEEE Transactions on Medical Imaging 31 (3) (2012) 766-776.
[29] P. Chowriappa, S. Dua, U. R. Acharya, M. M. R. Krishnan, Ensemble selection for feature-based classification of diabetic maculopathy images, Computers in Biology and Medicine 43 (12) (2013) 2156-2162.
[30] C. G. Baby, D. A. Chandy, Content-based retinal image retrieval using dual-tree complex wavelet transform, in: Signal Processing Image Processing & Pattern Recognition (ICSIPR), 2013 International Conference on, IEEE, 2013, pp. 195-199.
[31] A. M. Naguib, A. M. Ghanem, A. S. Fahmy, Content based image retrieval of diabetic macular edema images, in: Computer-Based Medical Systems (CBMS), 2013 IEEE 26th International Symposium on, IEEE, 2013, pp. 560-562.


[32] S. Ibrahim, P. Chowriappa, S. Dua, U. R. Acharya, K. Noronha, S. Bhandary, H. Mugasa, Classification of diabetes maculopathy images using data-adaptive neuro-fuzzy inference classifier, Medical & Biological Engineering & Computing 53 (12) (2015) 1345-1360.
[33] C. Chua, M. Mookiah, J. Koh, U. Acharya, C. Lim, A. Laude, E. Ng, et al., Automated diagnosis of maculopathy stages using texture features, International Journal of Integrated Care (IJIC) 13.
[34] E. Decencière, X. Zhang, G. Cazuguel, B. Lay, B. Cochener, C. Trone, P. Gain, R. Ordonez, P. Massin, A. Erginay, B. Charton, J.-C. Klein, Feedback on a publicly distributed database: the Messidor database, Image Analysis & Stereology 33 (3) (2014) 231-234. doi:10.5566/ias.1155.
[35] E. D. Pisano, S. Zong, B. M. Hemminger, M. DeLuca, R. E. Johnston, K. Muller, M. P. Braeuning, S. M. Pizer, Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms, Journal of Digital Imaging 11 (4) (1998) 193-200.
[36] M. R. K. Mookiah, U. R. Acharya, J. E. Koh, V. Chandran, C. K. Chua, J. H. Tan, C. M. Lim, E. Ng, K. Noronha, L. Tong, et al., Automated diagnosis of age-related macular degeneration using greyscale features from digital fundus images, Computers in Biology and Medicine 53 (2014) 55-64.
[37] M. R. K. Mookiah, U. R. Acharya, H. Fujita, J. E. Koh, J. H. Tan, K. Noronha, S. V. Bhandary, C. K. Chua, C. M. Lim, A. Laude, et al., Local configuration pattern features for age-related macular degeneration characterization and classification, Computers in Biology and Medicine 63 (2015) 208-218.
[38] M. R. K. Mookiah, U. R. Acharya, J. E. Koh, C. K. Chua, J. H. Tan, V. Chandran, C. M. Lim, K. Noronha, A. Laude, L. Tong, Decision support system for age-related macular degeneration using discrete wavelet transform, Medical & Biological Engineering & Computing 52 (9) (2014) 781-796.
[39] M. R. K. Mookiah, U. R. Acharya, H. Fujita, J. E. Koh, J. H. Tan, C. K. Chua, S. V. Bhandary, K. Noronha, A. Laude, L. Tong, Automated detection of age-related macular degeneration using empirical mode decomposition, Knowledge-Based Systems 89 (2015) 654-668.
[40] J. Radon, On the determination of functions from their integral values along certain manifolds, IEEE Transactions on Medical Imaging 5 (4) (1986) 170-176.
[41] S. S. Dhage, S. S. Hegde, K. Manikantan, S. Ramachandran, DWT-based feature extraction and radon transform based contrast enhancement for improved iris recognition, Procedia Computer Science 45 (2015) 256-265.
[42] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Prentice Hall (2005).
[43] M. Kociołek, A. Materka, M. Strzelecki, P. Szczypiński, Discrete wavelet transform-derived features for digital image texture analysis, in: International Conference on Signals and Electronic Systems, 2001, pp. 99-104.
[44] Q. Gao, J. Liu, K. Cui, H. Zhang, X. Wang, Stable locality sensitive discriminant analysis for image recognition, Neural Networks 54 (2014) 49-56.
[45] A. Gun, M. Gupta, B. Dasgupta, Fundamentals of Statistics, 4th Edition, The World Press Private Ltd, Kolkata, India, 2008.
[46] S. Safavian, D. Landgrebe, A survey of decision tree classifier methodology, IEEE Transactions on Systems, Man, and Cybernetics 21 (3) (1991) 660-674.
[47] A. E. Gutiérrez-Rodríguez, J. F. Martínez-Trinidad, M. García-Borroto, J. A. Carrasco-Ochoa, Mining patterns for clustering on numerical datasets using unsupervised decision trees, Knowledge-Based Systems 82 (2015) 70-79.
[48] R. Duda, P. Hart, D. Stork, Pattern Classification, John Wiley & Sons, New York, USA, 2012.
[49] D. F. Specht, Probabilistic neural networks, Neural Networks 3 (1) (1990) 109-118.


[50] V. N. Vapnik, Statistical Learning Theory, Vol. 1, Wiley, New York, 1998.
[51] D. N. Ghista, Nondimensional physiological indices for medical assessment, Journal of Mechanics in Medicine and Biology 9 (4) (2009) 643-669.
[52] U. R. Acharya, O. Faust, S. V. Sree, D. N. Ghista, S. Dua, P. Joseph, V. T. Ahamed, N. Janarthanan, T. Tamura, An integrated diabetic index using heart rate variability signal features for diagnosis of diabetes, Computer Methods in Biomechanics and Biomedical Engineering 16 (2) (2013) 222-234.
[53] M. R. K. Mookiah, V. Venkatraghavan, U. R. Acharya, M. Pal, R. R. Paul, L. C. Min, A. K. Ray, J. Chatterjee, C. Chakraborty, Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm, Micron 43 (2) (2012) 352-364.
[54] U. R. Acharya, H. Fujita, V. K. Sudarshan, M. R. K. Mookiah, J. E. Koh, J. H. Tan, Y. Hagiwara, C. KC, J. S. Padmakumar, A. Vijayananthan, K. H. Ng, An integrated index for identification of fatty liver disease using radon transform and discrete cosine transform features in ultrasound images, Information Fusion (2015). doi:http://dx.doi.org/10.1016/j.inffus.2015.12.007. URL http://www.sciencedirect.com/science/article/pii/S1566253515001190
[55] U. R. Acharya, E. Y.-K. Ng, J.-H. Tan, S. V. Sree, K.-H. Ng, An integrated index for the identification of diabetic retinopathy stages using texture parameters, Journal of Medical Systems 36 (3) (2012) 2011-2020.
[56] U. R. Acharya, H. Fujita, V. K. Sudarshan, V. S. Sree, L. W. J. Eugene, D. N. Ghista, R. San Tan, An integrated index for detection of sudden cardiac death using discrete wavelet transform and nonlinear features, Knowledge-Based Systems 83 (2015) 149-158.
[57] S. Patidar, R. B. Pachori, U. R. Acharya, Automated diagnosis of coronary artery disease using tunable-Q wavelet transform applied on heart rate signals, Knowledge-Based Systems 82 (2015) 1-10.
[58] R. U. Acharya, O. Faust, A. P. C. Alvin, S. V. Sree, F. Molinari, L. Saba, A. Nicolaides, J. S. Suri, Symptomatic vs. asymptomatic plaque classification in carotid ultrasound, Journal of Medical Systems 36 (3) (2012) 1861-1871.
[59] M. R. K. Mookiah, U. R. Acharya, C. M. Lim, A. Petznick, J. S. Suri, Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features, Knowledge-Based Systems 33 (2012) 73-82.
[60] U. Acharya, O. Faust, S. V. Sree, F. Molinari, R. Garberoglio, J. Suri, Cost-effective and non-invasive automated benign & malignant thyroid lesion classification in 3D contrast-enhanced ultrasound using combination of wavelets and textures: a class of ThyroScan algorithms, Technology in Cancer Research & Treatment 10 (4) (2011) 371-380.
[61] R. Sharma, R. B. Pachori, U. R. Acharya, An integrated index for the identification of focal electroencephalogram signals using discrete wavelet transform and entropy measures, Entropy 17 (8) (2015) 5218-5240.
[62] R. J. Martis, U. R. Acharya, C. M. Lim, J. S. Suri, Characterization of ECG beats from cardiac arrhythmia using discrete cosine transform in PCA framework, Knowledge-Based Systems 45 (2013) 76-82.
