This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2017.2712767, IEEE Access 1
A Novel Technique for Robust and Fast Segmentation of Corneal Layer Interfaces Based on Spectral-Domain Optical Coherence Tomography Imaging
Tianqiao Zhang1,2, Ahmed Elazab3, Xiaogang Wang4, Fucang Jia1, Jianhuang Wu1, Guanglin Li1,5, Senior Member, IEEE, and Qingmao Hu1,5*
1 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
2 Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518055, China
3 Computer Science Department, Misr Higher Institute for Commerce and Computers, Mansoura, 35516, Egypt
4 Shanxi Eye Hospital, Taiyuan, 030002, China
5 Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen, 518055, China
Abstract: A novel approach to segmenting corneal layer interfaces in optical coherence tomography images is presented. We perform customized edge detection to initially locate the interfaces, fit the initial interfaces to circles via a customized Hough transform, and refine the interfaces with Kalman filtering that models interface boundaries as horizontal projectile motion. Validation on 20 B-scan images from 60 volumes shows that the 3 layer interfaces in each image can be segmented within 0.52 seconds, with an average absolute layer interface error below 5.4 µm. Compared with an existing method, we achieve significantly better or similar accuracy at higher speed in a less powerful computing environment. Validation experiments on images from normal human subjects, images with keratoconus, and images with a laser in-situ keratomileusis flap show that the proposed customized Hough transform for circles represents the corneal layer interfaces more accurately, while Kalman filtering handles the heavy noise exhibited in the images and adapts to shape variation so as to stay closer to the real layer interfaces. In conclusion, our approach is a potential tool for quantifying corneal layer interfaces in a clinical environment at low computational expense while maintaining high effectiveness. Index Terms—optical coherence tomography, Kalman filtering, Hough transform, corneal layer interface, image segmentation, fast algorithm.
I. INTRODUCTION
Optical coherence tomography (OCT) has become an important optical signal acquisition and processing modality for imaging subsurface tissue structures with micrometer depth resolution in clinical ophthalmology [14]. The procedure is performed in a non-invasive and non-contact way, especially pertaining to the retina [1,2,5-11] and cornea [3,4,12-14]. Quantities derived from the segmented corneal layer interfaces/boundaries provide information about thickness and curvature, which are essential in the diagnostic and surgical management of corneal edema, ocular hypertension, and endothelial function, as well as refractive surgery [15-26]. Accurate segmentation is crucial, as corneal segmentation errors of a few micrometers can lead to significant changes in the derived clinical parameters [22]. Several methods to segment corneal layer interfaces have been proposed with varying degrees of success. Li et al. presented an algorithm for automatic corneal segmentation that combines parametric active contours with parabolic fitting [12,13,22]. Eichel et al. presented a semi-automated corneal segmentation method using enhanced intelligent scissors and an energy-minimizing spline [27]. A threshold-based model was utilized by Shen et al. to
*corresponding authors: Qingmao Hu Address all correspondence to: Qingmao Hu, Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, 1068 Xueyuan Boulevard, Shenzhen 518055, China; Tel: +86 755 8639-2214; Fax: +86 755 8639-2299; E-mails:
[email protected]
delineate the corneal anterior interface, without considering the posterior surface [28]. Shu and Sun [29] developed random sample consensus for circular fitting and a divide-and-conquer strategy to extract the inner and outer contours of the anterior chamber from time-domain OCT (TD-OCT) images. Unfortunately, none of the above algorithms can effectively handle images with low signal-to-noise ratio (SNR) regions or varying sources of artifacts such as central saturation and horizontal artifacts (Fig. 1). Recently, more robust methods such as graph theory and level sets were explored to achieve better segmentation. LaRocca et al. employed graph theory to segment the central corneal layer interfaces with better robustness to artifacts [30]. Williams et al. proposed a level-set-based shape prior model for automatic segmentation of the corneal anterior and posterior interfaces [31].
Fig. 1. OCT corneal image with varying SNRs and artifacts, including low-SNR regions, horizontal artifacts, and central artifacts. This image is adapted from Image 3 of reference [30].
2169-3536 (c) 2016 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Later, they extended the graph theory approach with genetic algorithms to reconstruct three-dimensional (3D) surface maps of the cornea [32]. However, methods using graph theory or level sets incur high computational costs [5-7,30-32]. In our previous work [33], we employed Kalman filtering [34], a customized active contour, and curve smoothing on retinal SD-OCT images, showing promising performance in terms of speed and robustness to different sources of artifacts. Because the cornea and retina have different structures and different types of artifacts, that method cannot be directly applied to segment corneal OCT images. The main novelties of this study are summarized as follows. Firstly, a difference of mean intensities, inspired by the one-dimensional (1D) Haar wavelet [35] and the difference of Gaussians [36], is employed for edge detection, followed by Savitzky-Golay smoothing [37] for initial localization of the corneal layer interfaces. Secondly, a customized Hough transform (HT) for circles (HTC) based on the theorem of perpendicular bisection of a chord is introduced to reduce the computational cost of fitting layer interfaces to circles. Thirdly, we model corneal layer interfaces as parabolic curves with parameters estimated from the derived circles. Finally, modeling each corneal layer interface as the horizontal projectile motion of particles, Kalman filtering [34] is employed to track the boundaries (particles) of each interface within a single image to yield fast and accurate layer interfaces. The rest of the paper is organized as follows. Section II is devoted to the proposed method. Experimental results are described in Section III, while the discussion and conclusion are given in Section IV.
II. PROPOSED METHOD
An OCT imaging system generates B-scans (i.e., two-dimensional (2D) images) by gathering a number of A-scans (axial or depth direction) from adjacent transverse (lateral direction) positions [1].
In this paper, we use the following notation: H for the depth of an A-scan, W for the width of a B-scan, y for the depth coordinate (ranging from 0 to H−1), x for the lateral coordinate (ranging from 0 to W−1), and f(x, y) for the grayscale/intensity at pixel (x, y). Section II is divided into five subsections, which respectively deal with preprocessing, detecting layer interfaces using customized edge detection, the customized HTC, approximating the axial coordinates of the apexes of Bowman's layer and the endothelium, and refining layer interfaces through Kalman filtering. Figure 2 shows the flowchart of the method, elaborated in more detail below.
A. Preprocessing
We divide the pre-processing stage into four steps: image resizing, approximating the lateral coordinate of the corneal apex, approximating the axial coordinate of the epithelial apex, and suppressing the horizontal artifact.
1) Image Resizing
The image is resized to isotropic pixel sizes in order to facilitate modelling the central corneal layer interfaces with circles. The width, height, and grayscales of both the original and resized images are denoted as W, H, and f(x, y), respectively.
2) Approximating the Lateral Coordinate of the Corneal Apex
Firstly, each column of the image is projected along the axial direction [30]:

g(x) = (1/H) Σ_{y=0}^{H−1} f(x, y)    (1)

The maximum grayscale of g(x) and its x position are denoted as g_max and p_max, respectively. The image is then divided into three regions of equal width, denoted regions I, II, and III from left to right, of which only the central region II is assumed to contain the central artifact [30]. Next, the average grayscales of regions I and III are denoted m_I and m_III, respectively. The threshold T_cent for detecting the central artifact is defined as:

T_cent = ρ × S_{I,III} + (1.0 − ρ) × g_max,  0 < ρ < 1.0,  S_{I,III} = (m_I + m_III)/2    (2)

Here, ρ is a predefined constant set to 0.5. Finally, we search from position p_max toward the leftmost and rightmost extents of region II for the lateral coordinates at which g(x) exceeds T_cent, denoted p_left and p_right, respectively. The x coordinate of the corneal apex is then approximated as:

A_x = (p_left + p_right)/2    (3)

Figure 3 shows the derived positions of the central artifact.
3) Approximating the Axial Coordinate of the Epithelial Apex
To begin with, each row of the image is projected along the lateral direction:

g(y) = (1/W) Σ_{x=0}^{W−1} f(x, y)    (4)

The epithelial apex corresponds to a local maximum of g(y), though not necessarily the global maximum, owing to horizontal artifacts (Fig. 4). To obtain a robust and precise y position of the epithelial apex, the forward difference of g(y) is computed:

g′(y) = g(y + 1) − g(y)    (5)

The maximum of g′(y) corresponds to the y coordinate of the epithelial apex (Fig. 4).

Fig. 2. Flowchart of our proposed method to segment corneal layer interfaces from B-scan images.
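The two projection-based apex estimates of Eqs. (1)-(5) can be sketched in a few lines of NumPy. This is our illustrative reading of the steps, not the authors' code; all function names are ours, and we assume p_max falls inside region II as the text does.

```python
import numpy as np

def lateral_apex(img, rho=0.5):
    """Approximate the lateral (x) coordinate of the corneal apex, Eqs. (1)-(3)."""
    H, W = img.shape
    g = img.mean(axis=0)                      # Eq. (1): axial projection g(x)
    p_max = int(np.argmax(g))
    g_max = g[p_max]
    third = W // 3
    m_I = g[:third].mean()                    # average grayscale of region I
    m_III = g[2 * third:].mean()              # average grayscale of region III
    t_cent = rho * 0.5 * (m_I + m_III) + (1.0 - rho) * g_max   # Eq. (2)
    # search left/right from p_max within region II while g(x) > T_cent
    p_left = p_max
    while p_left > third and g[p_left - 1] > t_cent:
        p_left -= 1
    p_right = p_max
    while p_right < 2 * third - 1 and g[p_right + 1] > t_cent:
        p_right += 1
    return (p_left + p_right) // 2            # Eq. (3)

def epithelial_apex_depth(img):
    """Approximate the axial (y) coordinate of the epithelial apex, Eqs. (4)-(5)."""
    g = img.mean(axis=1)                      # Eq. (4): lateral projection g(y)
    d = np.diff(g)                            # Eq. (5): forward difference g'(y)
    return int(np.argmax(d))                  # y with the sharpest dark-to-bright rise
```

On a synthetic image with a bright central band of columns, `lateral_apex` returns the midpoint of the supra-threshold span; `epithelial_apex_depth` returns the row just above the first bright row, per the forward-difference convention above.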
4) Suppression of Horizontal Artifact
To make the delineation of corneal layer interfaces more reliable, we suppress the horizontal artifact by subtracting the mean signal of each row (Fig. 5). For simplicity of notation, the image after artifact suppression is still denoted f(x, y).
B. Detecting Layer Interfaces through Customized Edge Detection
Corneal layer interfaces have two characteristics: layered structures, and dark-to-bright or bright-to-dark intensity transitions in the axial direction. These distinct properties are exploited in three steps: derivation of a region of interest (ROI) for each layer interface, flattening the ROI so that edges become almost horizontal, and customized edge detection.
1) Derivation of the Region of Interest of Layer Interfaces
The ROI of each layer interface is expressed as a circular ring. From Section II-A, the approximated lateral and axial coordinates of the center (epithelial apex) are available. In addition, its radius is known from prior knowledge (7.7 mm) [39]. Thus, a circle C_init^epithe, the central curve of the ROI for the air-epithelium interface, can be determined. Similarly, ROIs for the other two layer interfaces can be constructed; they are given in Section II-D, as they require determination of the epithelium interface, while their radii are known from prior knowledge (6.8 mm for the endothelium-aqueous interface, 7.6 mm for the epithelium-Bowman's interface) [39]. The half heights of the circular rings are predefined constants in this study: 30 pixels for the air-epithelium interface, 40 pixels for the endothelium-aqueous interface, and 10 pixels for the epithelium-Bowman's interface. The heights of the circular rings are chosen to decrease the computational cost while still enclosing the real interfaces despite the error in approximating the centers. Figure 5(a) shows the derived ROI of the epithelium interface.
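The central curve of such a ring can be written down directly from the circle geometry: with the apex (topmost point of the circle) at (A_x, A_y) and radius r, the circle's center sits at (A_x, A_y + r), and for each column x the curve is the circle's upper branch. A minimal sketch under our naming (the paper only describes the geometry):

```python
import numpy as np

def roi_central_curve(ax, ay, r, width):
    """y coordinate of the ROI's central curve at each column x.

    The circle has its apex at (ax, ay) and radius r (pixels), hence center
    (ax, ay + r); columns lying outside the circle are left as NaN.
    """
    x = np.arange(width, dtype=float)
    dx = x - ax
    inside = np.abs(dx) < r
    y = np.full(width, np.nan)
    # upper branch of (y - (ay + r))^2 + (x - ax)^2 = r^2:
    y[inside] = ay + r - np.sqrt(r * r - dx[inside] ** 2)
    return y
```

Shifting this curve up and down by the half height of the ring then gives the top and bottom limits of the ROI.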
2) Image Flattening within the Region of Interest
Once the ROI of a layer interface is determined, the ROI is flattened with respect to the derived central curve. Specifically, the flattening is carried out as follows: each column x is shifted upward by (C_init(x) − A_y + H_1), where H_1 is the half height of the ROI in the axial direction, C_init(x) is the y coordinate of the ROI's central curve at column x in the original image, and A_y is the y coordinate of the apex of the layer interface. In this way, the central curve
Fig. 3. An OCT corneal image (from Fig. 1) and its vertical projection: (a) an original OCT corneal image with colored lines for the leftmost (in red), central (in green), and rightmost (in blue) positions of the central artifact, and (b) the average grayscale curve (in cyan) of each column of (a).
becomes a horizontal line. Note that the layer interfaces are determined after obtaining the edges through the customized edge detection of Section II-B-3; they are then mapped back (un-flattened) to obtain the corresponding coordinates in the resized image.
3) Customized Edge Detection
Once the ROI of a layer interface is flattened, the layer interface becomes almost horizontal, which facilitates edge detection. Edges are detected by calculating the difference of intensity means column-wise:

D_j(x) = (1/(H_ROI − j)) Σ_{y=j}^{H_ROI−1} f(x, y) − (1/j) Σ_{y=0}^{j−1} f(x, y)    (6)
where j ranges from 1 to H_ROI − 1, with H_ROI being the height of the flattened ROI, i.e., the height of the circular ring. We take, for each x, the j maximizing the absolute value of D_j(x) as the edge pixel at column x. Note that Eq. (6) can be evaluated at low computational cost by employing a 1D integral image [35]. After the edges are detected, Savitzky-Golay filters [37, 46] are employed for curve smoothing of the layer interface, with the polynomial order set to 1 and a neighborhood size of 20 pixels. The edge detection through Eq. (6) followed by Savitzky-Golay filtering is called customized edge detection in this paper. Figure 5 shows the edges detected for the air-epithelium interface through customized edge detection.
C. Hough Transform for Circles
The cornea of a normal eye has a nearly spherical structure [38]. According to the Gullstrand eye model, a cross-section through the eye perpendicular to the optic axis intersects the epithelium and endothelium of the cornea in two circles, with radii of 7.7 mm and 6.8 mm respectively [39]. Since OCT is a cross-sectional imaging modality, it is reasonable to approximate the central corneal layer interfaces with circles [38, 39]. However, to handle shape variability due to pathology such as keratoconus, parabolic curves are proposed to approximate spherical as well as nearly spherical corneal surfaces. A parabolic curve has four parameters: the coordinates of the vertex (x_v, y_v), its curvature, and its orientation [40]. Here, we propose to estimate the parameters of a
Fig. 4. An OCT corneal B-scan (from Fig. 1) and its lateral projection: (a) an original OCT corneal image with vertical arrow pointing to the horizontal artifact, and horizontal arrows pointing respectively to the epithelial apex, Bowman’s apex, and endothelium apex, and (b) the average grayscale curve of all rows of (a) (top) and its forward difference curve (bottom).
parabolic curve from its inscribed circle. It can be proven that a horizontal projectile motion, whose trajectory falls into the family of parabolic curves, can be constrained by its inscribed circle (see Section II-E). An HTC is then employed to accelerate the voting process. The HT is one of the most popular approaches to circle detection [41, 42]. Here, a customized HTC based on the perpendicular bisection of a chord is presented to reduce the computational complexity. Two geometric properties are employed: a circle is uniquely determined by three non-collinear points, and the perpendicular bisector of a chord of a circle passes through the center of the circle [43] (Fig. 6). We introduce a triple on the circle (Fig. 6), denoted KMN, consisting of the three points K(x_K, y_K), M(x_M, y_M), and N(x_N, y_N). Since OP and OQ perpendicularly bisect the chords KM and MN at P and Q respectively, we have:

(y_O − y_P) × (y_K − y_M) + (x_O − x_P) × (x_K − x_M) = 0    (7)

(y_O − y_Q) × (y_M − y_N) + (x_O − x_Q) × (x_M − x_N) = 0    (8)

By Cramer's rule [44], with the shorthand b_1 = y_P × y_K + x_P × x_K − y_P × y_M − x_P × x_M, b_2 = y_Q × y_M + x_Q × x_M − y_Q × y_N − x_Q × x_N, and D = (x_K − x_M) × (y_M − y_N) − (x_M − x_N) × (y_K − y_M), the center (x_O, y_O) can be computed as:

x_O = [b_1 × (y_M − y_N) − b_2 × (y_K − y_M)] / D    (9)

y_O = [b_2 × (x_K − x_M) − b_1 × (x_M − x_N)] / D    (10)

The existence condition for (x_O, y_O) is:

D = (x_K − x_M) × (y_M − y_N) − (x_M − x_N) × (y_K − y_M) ≠ 0    (11)

which is equivalent to the non-collinearity of the three points K, M, and N. Once the contour is derived from the customized edge detection, the circle can be obtained through the customized HTC. Let the distance in the lateral (x) direction between adjacent points of each triple be Dist_x pixels; the x coordinate of pixel K then ranges from 0 to (W − 2 × Dist_x − 1). The voting of the HTC covers the center first and then the radius.
Fig. 5. ROI determination and edge detection for the air-epithelium interface of Fig. 1: (a) ROI of the air-epithelium interface, where the green, red, and blue curves are the central curve, top limit, and bottom limit of the ROI, respectively, (b) customized edge detection within the flattened ROI, and (c) the approximated layer interface shown in magenta.
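The detector of Eq. (6), illustrated in Fig. 5(b), can be sketched with cumulative sums standing in for the 1D integral image; each column then costs O(H) rather than O(H²). This is a sketch under our naming, not the authors' implementation:

```python
import numpy as np

def column_edges(roi):
    """Edge depth per column via the mean-difference detector of Eq. (6).

    For each column x and split depth j, D_j(x) is the mean intensity of
    rows j..H_ROI-1 minus the mean of rows 0..j-1; the j with the largest
    |D_j(x)| marks the edge.
    """
    h, w = roi.shape
    cs = np.cumsum(roi, axis=0)              # cs[j-1, x] = sum of rows 0..j-1
    total = cs[-1]
    j = np.arange(1, h)[:, None]             # split depths 1..h-1
    upper = cs[:-1] / j                      # mean of rows 0..j-1
    lower = (total - cs[:-1]) / (h - j)      # mean of rows j..h-1
    d = lower - upper                        # D_j(x), shape (h-1, w)
    return 1 + np.argmax(np.abs(d), axis=0)  # edge row per column
```

The per-column edges would then be smoothed as described above; note that an order-1 Savitzky-Golay filter on a uniform symmetric window reduces to a windowed moving average at the window center.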
For computational efficiency, we sequentially select consecutive triples with a difference of Dist_x in the x coordinate between neighboring points of a triple. The proposed method requires only a 1D array of length (W − 2 × Dist_x − 1) for voting on the center (Fig. 6). Although the intersection of two chord bisectors of a circle is a 2D point and therefore requires a 2D accumulator array, that array can be confined to a small region (200×200 pixels, see below). After detecting the candidate center, a 1D histogram of the distances of all edge points from the center is used to compute the radius and verify the circle. With this technique, the overall computational load of the proposed HTC is substantially decreased. Specifically, for the dataset from [30], the search parameters were set from experiments as follows. The cell sizes in the x and y directions for the Hough space of the center and the cell size of the radius histogram are all one pixel. Let the radius be R_c pixels; then the search ranges for the circle center in the x and y directions and for the radius are [A_x − 100, A_x + 100], [A_y + R_c − 100, A_y + R_c + 100], and [R_c − 100, R_c + 100], respectively. Figure 7 shows the circle derived using the customized HTC for segmenting the air-epithelium interface.
D. Approximating Axial Coordinates for the Apexes of Bowman's Layer and the Endothelium
Similar to Section II-A-3, we approximate the axial coordinates of the apexes of Bowman's layer and the endothelium, which are essential for detecting the epithelium-Bowman's interface and the endothelium-aqueous interface, respectively. These coordinates are determined in four steps. Firstly, to reduce the impact of the central artifact, the grayscale of each pixel in the central-artifact region is set to zero. Secondly, we flatten the image based on the segmented air-epithelium interface. Thirdly, each row of the flattened image is projected along the lateral direction using Eq. (4).
Fourthly, to obtain robust and precise y positions of the apexes of the endothelium and Bowman's layer, the forward difference of g(y) is computed according to Eq. (5). The position of the maximum of g′(y) over the region 400-800 µm below the air-epithelium interface and the position of the minimum of g′(y) below the air-epithelium interface correspond to the y coordinates of Bowman's apex and the endothelium apex, respectively.
Fig. 6. Hough transform for circles and its voting strategy: (a) line segments bisecting a chord of a circle will pass through the center of the circle, and (b) sequential equal-interval (in x direction) triples for voting the center.
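The per-triple center computation of the customized HTC (Section II-C, illustrated in Fig. 6) reduces to solving the 2×2 system of Eqs. (7)-(11). A minimal sketch for one triple follows; the names are ours, and the actual accumulator/voting logic over many triples is omitted:

```python
def center_from_triple(K, M, N, eps=1e-12):
    """Circle center from one non-collinear triple, per Eqs. (7)-(11).

    P and Q are the midpoints of chords KM and MN; the perpendicular
    bisectors of both chords pass through the center O, giving a 2x2
    linear system solved by Cramer's rule. Returns None when the triple
    is (near-)collinear, i.e., when Eq. (11) fails.
    """
    (xK, yK), (xM, yM), (xN, yN) = K, M, N
    xP, yP = 0.5 * (xK + xM), 0.5 * (yK + yM)     # midpoint of chord KM
    xQ, yQ = 0.5 * (xM + xN), 0.5 * (yM + yN)     # midpoint of chord MN
    # right-hand sides of Eqs. (7) and (8)
    b1 = yP * (yK - yM) + xP * (xK - xM)
    b2 = yQ * (yM - yN) + xQ * (xM - xN)
    det = (xK - xM) * (yM - yN) - (xM - xN) * (yK - yM)   # Eq. (11)
    if abs(det) < eps:
        return None
    xO = (b1 * (yM - yN) - b2 * (yK - yM)) / det          # Eq. (9)
    yO = (b2 * (xK - xM) - b1 * (xM - xN)) / det          # Eq. (10)
    return xO, yO
```

In the full HTC, each sequential equal-interval triple votes its center into the (confined) accumulator, and the radius is then read off a 1D histogram of edge-point distances from the winning center.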
E. Kalman Filtering to Model Horizontal Projectile Motion
In this subsection, we first show that the parabola which is tangent to a circle at the vertex and intersects the circle is uniquely determined. As shown in Fig. 8, suppose the parabola β is tangent to a circle α at the origin M(0, 0) and intersects it at point N(x_N, y_N), with the center of the circle at O(x_O, y_O). The equations of the parabola and the circle can then be written as:

y = (1/2) a x²    (12)

(y − r)² + x² = r²    (13)

where a and r are the coefficient of the parabola and the radius of the circle, respectively. As the circle and the parabola intersect at point N, a can be determined by solving Eqs. (12) and (13):

a = 2(r − √(r² − x_N²)) / x_N²    (14)

which relates the parabola to the circle. Here, r is derived from the HTC for each layer interface, and x_N is set to the maximum x coordinate of the fitted circle apart from its center.
As shown in Fig. 9, each corneal layer interface may be regarded as a pair of trajectories of particles undergoing projectile motion from the apex of the layer interface in the two opposite horizontal directions. In light of its great success in the control domain [45], Kalman filtering was previously introduced to track adjacent layer interfaces frame by frame [33]. Extending this approach, we explore the horizontal projectile equations with Kalman filtering to approximate the adjacent interface points of each corneal layer interface. The horizontal projectile motion, whose trajectory is a parabola, is governed by Eqs. (15) and (16) under the condition of constant acceleration [47]:

y(t) = y(t − 1) + v_y(t − 1) × Δt + (1/2) × a_y × (Δt)²    (15)

v_y(t) = v_y(t − 1) + a_y × Δt    (16)

where y, v_y, and a_y are the y coordinate, the speed, and the acceleration of the adjacent layer boundaries being tracked, and t, t − 1, and Δt (we let Δt = 1) are the current time, the previous time, and the time interval, respectively. The constant a_y represents the vertical downward acceleration of the particle projectile motion, as determined by Eq. (14). In this paper, we treat the x axis as the time axis (i.e., t is replaced with x), with x, x − 1, and Δx pertaining respectively to the current interface point, the previous interface point, and the spatial interval in the x direction. Furthermore, after replacing t with x, Eqs. (15) and (16) can be written in matrix form:

Φ(x) = AΦ(x − 1) + B a_y    (17)

where Φ, A, and B are the state vector, the state transition matrix, and the control input matrix, being Φ = [y, v_y]^T, A = [1 Δx; 0 1], and B = [0.5, 1]^T, respectively. The predicted covariance is estimated as:

P̂(x) = A P(x − 1) A^T    (18)

Here, ^T denotes matrix transposition and ˆ an estimated value. The correction equation is then given by:

K(x) = P̂(x) H^T (H P̂(x) H^T + R)^{−1}    (19)

where H is [1 0] and R is the variance of the measurement noise. K adapts to R: the components of K are smaller when R is large, and larger otherwise. The state Φ is updated using:

Φ̂(x) = AΦ̂(x − 1) + B a_y + K(x)(Z(x) − H(AΦ̂(x − 1) + B a_y))    (20)

where Z(x) is the y coordinate of the current layer interface point derived from the customized edge detection (Section II-B-3). Finally, the covariance P is updated as:

P(x) = (I − K(x)H) P̂(x)    (21)

where I is the identity matrix. In summary, the two phases of Kalman filtering (prediction by Eqs. (17) and (18), and updating by Eqs. (19)-(21)) are applied to obtain the state with minimum variance. Figure 10 shows the refined corneal layer interfaces based on the Kalman filtering model.
III. EXPERIMENTS
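Assuming Δx = 1, the projectile-motion prediction-correction cycle of Eqs. (14) and (17)-(21) can be sketched as follows. Function names and the measurement-noise value are our assumptions, and process noise is omitted as in the text:

```python
import numpy as np

def parabola_coeff(r, x_n):
    """Eq. (14): parabola coefficient a from the inscribed circle's radius r
    and the intersection abscissa x_N."""
    return 2.0 * (r - np.sqrt(r * r - x_n * x_n)) / (x_n * x_n)

def track_interface(z, y0, a_y, r_meas=4.0):
    """Track one half-interface with the projectile-motion Kalman filter.

    State Phi = [y, v_y]^T; prediction by Eqs. (17)-(18), correction by
    Eqs. (19)-(21), with Delta x = 1. z holds the measured edge depths Z(x)
    from the customized edge detection, y0 the apex depth, and a_y the
    constant 'acceleration' from Eq. (14). r_meas is the measurement-noise
    variance (our choice, not a value from the paper).
    """
    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (Delta x = 1)
    B = np.array([0.5, 1.0])                 # control input for a_y
    Hm = np.array([[1.0, 0.0]])              # we observe the depth y only
    phi = np.array([y0, 0.0])                # start at the apex with v_y = 0
    P = np.eye(2)
    depths = []
    for zx in z:
        phi_pred = A @ phi + B * a_y         # Eq. (17)
        P_pred = A @ P @ A.T                 # Eq. (18)
        S = Hm @ P_pred @ Hm.T + r_meas      # innovation variance
        K = (P_pred @ Hm.T) / S              # Eq. (19), 2x1 gain
        phi = phi_pred + (K * (zx - Hm @ phi_pred)).ravel()   # Eq. (20)
        P = (np.eye(2) - K @ Hm) @ P_pred    # Eq. (21)
        depths.append(phi[0])
    return np.array(depths)
```

With noise-free measurements lying exactly on the predicted trajectory, the filter reproduces them; with a large R, the gain shrinks and the output leans toward the predicted parabola instead of the noisy edges.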
Experiments were carried out to validate the proposed method on 20 images selected randomly from a pool of 60 SD-OCT corneal volumes from both eyes of 10 normal subjects, provided by the authors of [30]. The performance of the proposed method was then compared with that of [30] on the same dataset. Our algorithm was implemented on a personal
Fig. 7. Derived circle using the customized HTC for segmenting the air-epithelium interface of Fig. 1.
Fig. 8. A parabola and its inscribed circle, where the parabola β is tangent to the circle α at the origin point M and intersects it at point N, with the center of the circle at O.
computer without parallel processing (Matlab program, 32-bit OS, Intel Core i5-2320 CPU at 3.3 GHz, 4 GB RAM). The algorithm of [30] used parallel processing with 8 threads (Matlab program, 64-bit OS, Intel Core i7-860 CPU at 2.80 GHz, 16 GB RAM). For each corneal layer interface, we computed the average absolute position difference between the automatic and manual delineations for quantification. We also calculated the average absolute position difference between the two graders' delineations [30] to show the inter-grader variability, which reflects the difficulty of identifying the layer interfaces. The parameter ρ determines the proportion of the mean of m_I and m_III in the threshold T_cent (Eq. (2)). Varying ρ in the range from 0.20 to 0.80 yielded similar segmentation accuracy (differences of 0.4±0.3, 0.7±0.4, and 1.8±2.0 pixels for the air-epithelium, epithelium-Bowman's, and endothelium-aqueous interfaces, respectively) and the same time consumption. In this study, ρ is fixed at the median of this range, 0.5.
A. Comparison with a State-of-the-Art Method
We compared our proposed method with that of LaRocca et al. [30] using their dataset and their manual segmentation of normal eyes as reference. The 60 volumes were acquired with a Bioptigen (Research Triangle Park, NC, USA) SD-OCT imaging system fitted with a corneal adapter. The system's B-scan pixel sampling resolution was 6.1 µm × 4.6 µm, with a measured peak sensitivity of 104 dB at 50 ms imaging time per B-scan. Only the 20 B-scans randomly selected from the 60 volumes were manually delineated by two experts (one junior and one senior), and the manual delineation by the senior expert was taken as the reference. Figure 11(b) shows the comparative segmentation results on one normal image.
It took an average of 521 ms to automatically derive the 3 layer interfaces of an image on a personal computer. The segmentation time (521 ms) breaks down as follows: pre-processing (93 ms), initial interfaces via customized edge detection (22 ms × 3 = 66 ms), circle fitting via the customized HTC (73 ms × 3 = 219 ms), refining interfaces via Kalman filtering (35 ms × 3 = 105 ms), and un-resizing (38 ms). Here, × 3 accounts for the three interfaces processed per image. The comparison between the two experts, and between the proposed method and [30], is summarized in Table I.
Fig. 9. The pair of horizontal projectile motions for Fig. 1: (a) an original OCT corneal image with colored arrows indicating the opposite horizontal directions, and (b) the horizontal projectile motion curve with constant vertical downward acceleration and constant rightward velocity, based on the right part of (a).
B. Validation on Images with Various Artifacts and on Pathological Images
Images with a relocated apex deviating from the center, a hypo-reflective region, an eyelash artifact, and a horizontal artifact below the apex were tested and are shown in Fig. 12 to highlight the robustness of the proposed algorithm. We also tested the proposed method on pathological image datasets. The dataset, obtained with an RTVue-XR OCT system (Optovue Inc., USA), was provided by Shanxi Eye Hospital and includes 20 images with a laser in-situ keratomileusis (LASIK) flap from 13 patients and 10 images with abnormally steep curvatures (keratoconus) from 5 patients. As shown in Fig. 13 and Table II, our method performs well on corneal images both with keratoconus and with a LASIK flap.
C. Additional Experiments
To elaborate the roles of Kalman filtering and the HTC, two variants were devised and tested. One variant used the proposed algorithm without Kalman filtering, i.e., taking the fitted circle as the layer interface (denoted method V_A); the other took the output of the customized edge detection as the layer interface, without the HTC or Kalman filtering (denoted method V_B). Figure 11(c) shows the endothelium-aqueous interfaces segmented by the proposed method, method V_A, and method V_B.
IV. DISCUSSION AND CONCLUSION
A. Roles of Kalman Filtering and Hough Transform
The customized HTC significantly enhances the layer interface accuracy (from 2.89±2.73 to 1.43±2.57 pixels, comparison between methods V_B and V_A, Table III), which may reflect the fact that the corneal layer interface can be well represented by a carefully designed circle, handling interface boundaries that are difficult to determine (such as the broken or unclear interface boundaries shown in Fig. 11(c); note that the interface boundaries of methods V_A and V_B are in cyan and green, respectively).
Kalman filtering further enhances the accuracy of the fitted circle significantly (from 1.43±2.57 to 0.92±1.07 pixels, comparison between method V_A and the proposed method, Table III), which may imply that Kalman filtering can effectively handle the heavy noise exhibited in the image and could
Fig. 10. Refined interfaces using Kalman filtering modeled with horizontal projectile motion, where the blue, green, and red curves are respectively the air-epithelium, epithelium-Bowman's, and endothelium-aqueous layer interfaces, based on (a) image 1; and (b) Fig. 1.
TABLE I
COMPARISON OF THE EXISTING METHOD WITH THE PROPOSED MEASUREMENT OF CORNEAL LAYER INTERFACES OF 20 IMAGES OF NORMAL EYES.

                   | Comparison between two manual graders                                      | Comparison between automatic and manual segmentations
Corneal layer      | first expert grader | second trained grader | between two manual graders   | LaRocca's method | proposed
interface          | to self             | to self               |                              |                  |
                   | mean error ± SD     | mean error ± SD       | mean error ± SD              | mean error ± SD  | mean error ± SD
first interface    | 0.46 ± 0.25         | 1.16 ± 0.76           | 1.50 ± 0.43                  | 0.55 ± 0.43      | 0.44 ± 0.37*
second interface   | 0.76 ± 0.42         | 1.11 ± 0.92           | 1.24 ± 0.52                  | 0.85 ± 0.55      | 0.72 ± 0.58*
third interface    | 1.49 ± 2.28         | 2.46 ± 3.03           | 2.07 ± 2.27                  | 1.66 ± 2.15      | 1.59 ± 2.04

Note: The first, second, and third interfaces are respectively the air-epithelium, epithelium-Bowman's, and endothelium-aqueous interfaces; "first expert (second trained) manual grader to self" represents the intra-observer repeatability of the first expert (second trained) manual grader; SD, standard deviation; * for significant difference (p