BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES, Vol. 65, No. 1, 2017 DOI: 10.1515/bpasts-2017-0009
Denoising methods for improving automatic segmentation in OCT images of human eye
A. STANKIEWICZ1, T. MARCINIAK1*, A. DĄBROWSKI1, M. STOPA2,3, P. RAKOWICZ2,3, and E. MARCINIAK2
1 Division of Signal Processing and Electronic Systems, Poznan University of Technology, 24 Jana Pawla II St., 60-965 Poznan, Poland
2 Clinical Eye Unit and Pediatric Ophthalmology Service, Heliodor Swiecicki University Hospital, Poznan University of Medical Sciences, 16/18 Grunwaldzka St., 60-780 Poznan, Poland
3 Department of Optometry and Biology of Visual System, Poznan University of Medical Sciences, 5D Rokietnicka St., 60-806 Poznan, Poland
Abstract. This paper presents an analysis of selected noise reduction methods used in optical coherence tomography (OCT) retina images (the so-called B-scans). The tested algorithms include median and averaging filtering, anisotropic diffusion, soft wavelet thresholding, and multiframe wavelet thresholding. Precision of the denoising process was evaluated based on the results of automated retina layers segmentation, since this stage (vital for ophthalmic diagnosis) is strongly dependent on the image quality. Experiments were conducted with a set of 3D low quality scans obtained from 10 healthy patients and 10 patients with vitreoretinal pathologies. The influence of each method on the automatic image segmentation for both groups of patients is thoroughly described. Manual annotations of the investigated retina layers provided by ophthalmology experts served as reference data for evaluation of the segmentation algorithm.
Key words: optical coherence tomography (OCT), image denoising, image segmentation, anisotropic diffusion, wavelet thresholding.
1. Introduction
The newest measurement technologies provide automatic visualization and analysis of pathologic tissues. This process can be further extended through the application of advanced algorithms for the analysis of medical images [1]. Detailed measurements and a proper representation of the thickness, volume, and placement of the examined structures make it easier for the specialists to choose the proper treatment course [2].
Among the noninvasive techniques for soft tissue measurement used in ophthalmology is spectral domain optical coherence tomography (SD-OCT) [3]. This technology is based on illuminating the tissue with a stream of infrared light. The light reflected from the inner structures of the eye is spectrally analyzed to obtain data representing the layers of the retina. A set of single scans across the retina (called A-scans) is assembled into a cross-section (i.e. the B-scan) illustrating the layered morphological structure of the retina. A series of B-scans creates a three-dimensional visualization (3D OCT scan) of the retina.
Figure 1 presents a 3D OCT scan annotated with the most commonly identified retina layers: inner limiting membrane (ILM), nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), inner segments of photoreceptors (IS), outer segments of photoreceptors (OS), and retinal pigment epithelium (RPE) [4].
*e-mail: [email protected]
Fig. 1. Example of 3D OCT scan of human retina with annotated most important layers
2. Characteristics of OCT interfaces
The inspection of biometric properties of an eye using the OCT technology is based on segmentation and identification of the most important segments (such as retina layers and borders of the optic disc) and on the analysis of their depth at various points.
Fig. 2. General scheme of biometric analysis of eye structures using OCT
This information is required to perform a detailed and proper diagnosis that can point out possible treatment courses. Fig. 2 presents a general scheme of the OCT analysis procedure.
Using modern OCT devices it is possible to acquire even 70 000 A-scans per second [5]. This means performing a 3D scan of 141 B-scans, with a resolution of 640×385 points each, in around 0.8 second. The fast measurement assures no artifacts caused by involuntary movements of the eyeball. Some of the newest devices also employ a motion correction technology (MCT) in order to minimize this problem [6].
The B-scans constituting the 3D OCT scan can be very noisy, which is the main cause of errors during the automatic segmentation of layers. Research on the nature of noise in OCT images [7] shows that it is not an entirely random noise, as it contains some specific information. It is called the speckle noise. It depends on:
● the placement in the image,
● the intensity scale of the image,
● the reflecting properties of the examined tissue.
It is not possible to automatically assess its level. Due to the above features, elimination of noise from OCT images is a very difficult task, though necessary for proper tissue analysis.
Fig. 3 illustrates an example of a B-scan acquired through a 3D OCT examination. This image has low quality due to the noise. It includes pathological changes caused by the vitreomacular traction syndrome (tearing of the retina in the fovea region due to anomalous vitreous detachment). This pathology causes incorrect segmentation of retina layers performed with the use of the automatic procedure presented in Section 3.
Methods for reducing the speckle noise can be divided into two groups: methods based on averaging of a series of images and algorithms designed for denoising single frames (B-scans). The first group of methods requires multiple scans of the same area that are next averaged [8, 9]. Since the noise present in every cross-section has a different spatial distribution (as opposed to the examined tissue), this is an efficient approach to
this noise reduction. The more images used for the averaging, the better the noise minimization and the tissue structure enhancement. Unfortunately, this technique significantly prolongs the measurement time. Indeed, for the above example, the standard number of frames is 32, but with MCT even 80, which means acquisition times of about 24 and 62 seconds, respectively, for the earlier discussed 3D scan (a quick consistency check of these timings is given below). This is not acceptable for a patient. The prolonged acquisition would also cause shifts and rotations between subsequent B-scans in the 3D dataset due to unexpected movements of the patient/eyeball, problems with maintaining vision focus, and blinking. These problems do not affect scans acquired in under a second; therefore this method can be used only during the acquisition of a single B-scan (one Line Scan Pattern) through the center of the macula.
Methods of the second group, used for denoising the B-scans in a volumetric set, involve:
● averaging and median filtering [10],
● regularization [11],
● local Bayesian estimation [12],
● diffusion filtering, including nonlinear anisotropic filtering [13–15],
● wavelet thresholding (e.g. spatially adaptive filtering [16], dual-tree complex wavelet transformation [17], curvelet transformation [18]).
The results of research conducted by Ozcan et al. [19] suggest superiority of methods based on wavelet transformations in comparison to other techniques. The published findings were derived from data gathered mainly using images of animal tissues (pigs, rats, mice) [8, 15], human skin [20, 21], and healthy human retina [9]. In many cases the noise reduction methods were analyzed on nonmedical and synthetic images [11, 13, 15]. Furthermore, little information can be found in the literature about analysis of speckle noise in OCT images of pathological changes of the retina. One of the reports that used images from 30 patients with 15 various retina pathologies was presented by Abbirame et al. [22].
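The acquisition times quoted above can be checked against the scan geometry given at the beginning of this section, assuming (as the stated resolution suggests) that each B-scan consists of 385 A-scans of 640 depth samples:

```latex
t_{1} = \frac{141 \cdot 385}{70\,000\ \mathrm{s^{-1}}} \approx 0.78\ \mathrm{s}, \qquad
t_{32} = 32\, t_{1} \approx 24.8\ \mathrm{s}, \qquad
t_{80} = 80\, t_{1} \approx 62\ \mathrm{s},
```

which is in line with the 0.8, 24, and 62 seconds quoted in the text.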
Fig. 3. Example of 3D OCT retina cross-section (B-scan): (A) B-scan with expert’s manual segmentation, (B) B-scan with erroneous automatic segmentation of retina layers (places with erroneous segmentation are indicated by arrows)
Recent literature also includes papers describing application of the wavelet thresholding method to 3D sets of human skin images [20, 21]. This method utilizes the information from neighboring frames to minimize the effect of blurring and to emphasize the details in the image.
In this article we analyze the influence of selected denoising methods on the retina layers segmentation procedure. The tests include 3D sets of OCT images for both healthy and pathological retinas. The inspected pathology is the vitreomacular traction syndrome. The group of investigated algorithms is presented in Fig. 4 and described below.
3. Denoising of 3D OCT scans
Fig. 4. Selected denoising methods for testing their influence on image segmentation
3.1. Averaging filtering. The basic, commonly used method of image denoising is averaging filtering (further referred to as AVG). It employs the two-dimensional convolution between an investigated image and a previously defined filter. Such a filter is usually defined as a square matrix with odd numbers of rows and columns (e.g. 3×3), although rectangular filters, such as 3×19, can also be found in the literature [23]. In our analysis we tested four variants of such filters, namely: 3×3, 5×5, 7×7, and 9×9.
3.2. Median filtering. Another commonly used nonlinear technique of image denoising is median filtering (further referred to as MED). Its advantage in preprocessing of OCT examinations is the ability to preserve the edges and tissue features, and to reduce the influence of the speckle noise. During the experiment we evaluated four sizes of this type of filter: 3×3, 5×5, 7×7, and 9×9.
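As an illustration of the two filters above, the minimal sketch below applies them to a single B-scan. It is not the authors' implementation and only assumes that the B-scan is available as a 2-D NumPy array:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def avg_denoise(bscan, size=3):
    # AVG: two-dimensional convolution with a size x size averaging mask
    return uniform_filter(bscan.astype(np.float64), size=size)

def med_denoise(bscan, size=3):
    # MED: median filtering with a size x size window
    return median_filter(bscan, size=size)

# example usage with the mask sizes tested in the paper (3x3 ... 9x9)
# denoised = [avg_denoise(bscan, s) for s in (3, 5, 7, 9)]
```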
3.3. Anisotropic diffusion filtering. Anisotropic diffusion (AD) is an efficient noise reduction method. For an input image I, this method, proposed by Perona and Malik [24], defines the denoised image as
∂I/∂t = div(c(x, y, t) ∇I) = c(x, y, t) ΔI + ∇c · ∇I, (1)
where Δ represents the Laplace operator and c(x, y, t) describes the diffusion coefficient dependent on the position (x, y) in the image space, according to the function
c(x, y, t) = exp(−(‖∇I(x, y, t)‖ / κ)²), (2)
where ∇ denotes the gradient and κ is the denoising parameter, being a positive real value that is related to the noise level and to the expected edge preservation in the image. For the edge detection we use the image gradient calculated with the Prewitt operator, since it is robust to noise [14]. By choosing a lower value of the diffusion coefficient we avoid the blurring of edges, while a bigger value allows for smoothing of areas between the edges. Thanks to this, the lines and structures in the image, important for interpretation, are preserved. This technique is useful for reducing the speckle noise in OCT images.
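A compact sketch of Perona-Malik diffusion with the exponential conductance of (2) is given below. It uses simple neighbour differences instead of the Prewitt gradient mentioned above, and the function name, step size, and border handling are our assumptions, not the authors' code:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=20.0, step=0.2):
    # Iterative Perona-Malik diffusion, eqs. (1)-(2), with c = exp(-(|grad I|/kappa)^2).
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (np.roll wraps at the borders,
        # which is acceptable for a simple sketch)
        north = np.roll(out, 1, axis=0) - out
        south = np.roll(out, -1, axis=0) - out
        east = np.roll(out, -1, axis=1) - out
        west = np.roll(out, 1, axis=1) - out
        # directional conductance coefficients
        c_n, c_s = np.exp(-(north / kappa) ** 2), np.exp(-(south / kappa) ** 2)
        c_e, c_w = np.exp(-(east / kappa) ** 2), np.exp(-(west / kappa) ** 2)
        # explicit update; step <= 0.25 keeps the scheme stable
        out += step * (c_n * north + c_s * south + c_e * east + c_w * west)
    return out
```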
3.4. Wavelet thresholding. The wavelet thresholding (WT) method provides good results in denoising OCT images. This is mainly due to the fact that the noise is evenly distributed between the wavelet coefficients, while the majority of the informative content is concentrated among the coefficients with high magnitude. By selecting a proper threshold value (which might be a difficult task) we are able to reduce the noise while maintaining the characteristic features of the image [25].
In this algorithm a single B-scan I_i, represented in the logarithmic scale, is decomposed with the wavelet transform up to the maximum decomposition level L, thereby receiving approximation coefficients A_i and detail coefficients W^L_{i,D}, where D describes the direction (horizontal or vertical) of image filtering. During the experiments we used the soft thresholding method with the Haar wavelet, namely the discrete stationary wavelet transform (DSWT) [26].
The denoising operation consists in reducing the detail coefficients for a position x in the image, based on the weight G^L_{i,D}:
W̃^L_{i,D}(x) = G^L_{i,D}(x) · W^L_{i,D}(x), (3)
where each weight G^L_{i,D} is calculated for a manually selected threshold τ according to the following (soft thresholding) equation:
G^L_{i,D}(x) = max(1 − τ / |W^L_{i,D}(x)|, 0). (4)
The last step of this algorithm requires performing the inverse wavelet transform. Fig. 5A presents a general scheme of the described algorithm.
3.5. Multiframe wavelet thresholding. The multiframe wavelet thresholding method (MWT) uses a set of frames I_i, i ∈ ⟨1, N⟩, where N defines the number of the processed B-scans. It assumes that the noise in image I_i is uncorrelated with the noises in the other images (B-scans), and that its standard deviation σ_i(x) at position x is the same as in the other images.
The authors of this method [8] propose calculating the weight G^L_{i,D} for the detail coefficients in a way that allows for estimation of the local noise. This weight, called the significance weight, is computed (the exact formulas are given in [8]) from σ_{S,i,D}, the mean squared distance between the detail coefficients in the individual images, from the parameter k, which describes the noise reduction level, and from a normalized parameter θ_i. After obtaining new detail and approximation coefficients with the above weights for all images, the coefficients are averaged and the inverse transform is calculated.
This algorithm was developed for processing a set of frames of the same examined tissue area, as is the case for multiple acquisitions of a single B-scan across the fovea. Bearing in mind that the distance between subsequent B-scans in the 3D examination is about 50 μm, only a little change in the tissue structure is observed in the neighboring cross-sections. Based on this, we propose to use 3 subsequent frames of the 3D OCT set as input images for this method. A general scheme of this approach is illustrated in Fig. 5B.
Fig. 5. Scheme of the image denoising algorithm for: (A) standard wavelet thresholding and (B) the multiframe wavelet thresholding approach
4. Experiments
4.1. Methodology. As was mentioned earlier, we measured the effectiveness of the denoising methods by analyzing their influence on the image segmentation accuracy. There are two reasons for this experiment. First, for OCT images a reference image (an ideal image without noise) does not exist, thus it is difficult to assess the accuracy of denoising algorithms directly. Second, noise in OCT images causes errors in the segmentation of the retina layers. The segmentation procedure is in turn a key step in defining the morphological structure of the retina during the diagnosis. Visualization and measurement of the retina layers thickness is the baseline for the retina analysis.
Current research concerning segmentation of medical images indicates methods based on the graph theory as the most accurate approach [27]. The algorithm selected for this study (reported by Chiu et al. [23]) treats a single OCT B-scan as a graph, in which every pixel is a node. For the created graph, with previously calculated weights, we use the shortest path
to find the line representing the border between two neighboring layers in the image. In the conducted experiment we used a modified version of this algorithm [28, 29]. The modified version takes into account the continuity of tissues across all cross-sections and the signal quality in each column of the image (a simplified code sketch of this border extraction is given below).
The 3D OCT scans used for testing were acquired with the Avanti RTvue device (Optovue Inc., Fremont, USA [5]) from 10 patients with vitreomacular traction (VMT) pathology and 10 healthy volunteers. The average age for each group was 71 and 39 years, respectively. Every patient had a full 3D OCT examination of the macular region of size 7×7×2 mm. The examined volume is represented by 141×640×385 data points (141 B-scans with 640×385 resolution). Next, the acquired scans were manually segmented by experts from the Clinical Eye Unit at the Heliodor Święcicki Medical University Hospital in Poznań to annotate 7 retina layers (further used as reference data for evaluating the investigated algorithms).
In the next step, each image was denoised with the earlier described methods. For every method various parameter values were tested. Table 1 presents a list of the tested parameters and their values, with the best results obtained for the values marked with gray shading. Also, as a part of the subjective evaluation of the denoising algorithms, the ophthalmology experts were given an opportunity to select the best parameter values. It is worth mentioning that they intuitively selected bigger values than those resulting from the tests (see Fig. 6).
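The border extraction described at the beginning of this subsection can be prototyped as in the toy sketch below. It follows the general idea of Chiu et al. [23] (pixels as nodes, low edge weights along strong dark-to-bright transitions, shortest path from the left to the right image border), but the edge-weight form, the virtual source/sink trick, and all function names are our assumptions rather than the modified algorithm of [28, 29]:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def segment_border(bscan):
    img = bscan.astype(np.float64)
    grad = np.clip(np.diff(img, axis=0, prepend=img[:1]), 0.0, None)  # dark-to-bright transitions
    g = (grad - grad.min()) / (grad.max() - grad.min() + 1e-12)       # normalised to [0, 1]

    rows, cols = img.shape
    n = rows * cols
    src, dst = n, n + 1                                               # virtual source and sink nodes
    idx = lambda r, c: r * cols + c

    graph = lil_matrix((n + 2, n + 2))
    for r in range(rows):
        graph[src, idx(r, 0)] = 1e-5                                  # free entry in the first column
        graph[idx(r, cols - 1), dst] = 1e-5                           # free exit from the last column
    for c in range(cols - 1):
        for r in range(rows):
            for rr in (r - 1, r, r + 1):
                if 0 <= rr < rows:
                    # low weight where both endpoints lie on a strong edge (cf. [23])
                    graph[idx(r, c), idx(rr, c + 1)] = 2.0 - g[r, c] - g[rr, c + 1] + 1e-5

    _, pred = dijkstra(graph.tocsr(), directed=True, indices=src, return_predecessors=True)

    border = np.full(cols, -1, dtype=int)                             # border[c] = row of the boundary
    node = pred[dst]
    while node != src and node >= 0:
        border[node % cols] = node // cols
        node = pred[node]
    return border
```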
Table 1
Values of parameters chosen for tested denoising methods
Method   Parameter   Value 1   Value 2   Value 3   Value 4
AVG      Mask size   3×3       5×5       7×7       9×9
MED      Mask size   3×3       5×5       7×7       9×9
AD       κ           1         5         10        20
DSWT     τ           1         10        30        100
MWT      k           0.1       1         10        100
4.2. Comparison of denoising algorithms. Examples of B-scans obtained after application of each investigated denoising method are illustrated in Figs. 6 and 7. Fig. 6A presents an original B-scan, while examples of AVG and MED filtering are illustrated in Fig. 6B and 6C, respectively. Fig. 6D, 6E and 6F illustrate the results of the AD, DSWT and MWT algorithms. The original single cross-section (A) has a visibly high noise content (a grainy structure). Comparing it with images (B) and (C) we can notice that these methods blur the image while the noise is still present. Method (E) also leads to blurring of the image, although the regions of individual tissues are smoothed; additionally, for bigger threshold values rectangular artifacts appear in the image. The greatest unification and smoothing of tissues can be observed in method (D). Unfortunately, this algorithm reduces the line of the posterior vitreous cortex present in the upper left part of the image. This line is best revealed by method (F), thanks to which it is possible to maintain most informative content about the pathology distribution. Method (F) also provides a good quality image with low noise content and visible separation of the retina tissue areas.
Next, each 3D scan was subjected to automatic image segmentation based on the graph theory. Verification of the effectiveness of the implemented methods was based on calculation of the peak signal-to-noise ratio (PSNR):
PSNR = 10 · log₁₀(MAX² / MSE) [dB], (8)
where MAX defines the maximum possible value of the annotation range (in our case it is equal to the height of the image), and MSE is the mean squared error between the automatic and manual segmentations. Due to divergence in annotating layer borders by experts and by computer, a difference between them lower than 5 pixels was classified as negligible (i.e., error equal to zero), which would give a PSNR value over 45 dB.
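A minimal sketch of how (8) can be evaluated for a single layer border is given below; the per-column representation of the annotations, the function name, and the handling of a zero MSE are our assumptions, not the authors' evaluation code:

```python
import numpy as np

def segmentation_psnr(auto_border, manual_border, max_value, tolerance=5):
    # `auto_border` and `manual_border` hold the border row index for every image
    # column; `max_value` is the image height (the annotation range in eq. (8)).
    err = np.abs(np.asarray(auto_border, dtype=float) - np.asarray(manual_border, dtype=float))
    err[err < tolerance] = 0.0                  # differences below 5 px count as no error
    mse = np.mean(err ** 2)
    if mse == 0.0:
        return np.inf                           # perfect agreement within the tolerance
    return 10.0 * np.log10(max_value ** 2 / mse)
```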
Fig. 6. Original B-scan (A), and illustration of results of analyzed noise reduction methods: (B) average filtering (3×3 mask), (C) median filtering (3×3 mask), (D) anisotropic diffusion (κ = 20), (E) soft wavelet thresholding (τ = 20), (F) multiframe wavelet thresholding (k = 1)
4.3. Analysis of segmentation accuracy. During the conducted experiment we used an image segmentation algorithm implemented in the Matlab/Simulink R2014b environment [30] for annotating 7 borders between the retina layers: ILM, NFL/GCL, IPL/INL, INL/OPL, OPL/ONL, IS/OS, and RPE/Choroid, using the procedure described in Section 4.1.
Figure 7 presents the segmentation results after each investigated denoising method. As can be seen, method (F) provides the best accuracy for segmenting pathological tissue, although some inadequacies can still be found, especially in places of shading caused by blood vessels or fluids (e.g. for the IS/OS border). The OCT device used for acquiring the images provides automatic segmentation of only 4 retina layers (i.e. ILM, IPL/INL, IS/OS, and RPE/Choroid), so a comparison of the proposed method with this device is not applicable.
Fig. 7. Illustration of manually segmented layers (A) and results of automatic analysis after noise reduction with selected methods: (B) average filtering (3×3 mask), (C) median filtering (3×3 mask), (D) anisotropic diffusion (κ = 20), (E) soft wavelet thresholding (τ = 10), (F) multiframe wavelet thresholding (k = 1). Data is presented in a section of the size 220×220 pixels cropped from the center of the image in Fig. 6
Results of automatic segmentation were calculated separately for images of healthy volunteers and eyes with VMT pathology. Their comparisons for each evaluated method are illustrated in Figs. 8‒12. It is clearly visible that smaller filter masks for both averaging and median filtering provide better results, while bigger masks tend to blur the image. It can also be inferred that lower values of the κ parameter in the AD method and of the threshold τ for the DSWT method guarantee better performance.
Fig. 8. PSNR of retina layers segmentation for healthy and pathological eyes after averaging filtering
Fig. 9. PSNR of retina layers segmentation for healthy and pathological eyes after median filtering
Fig. 10. PSNR of retina layers segmentation for healthy and pathological eyes after filtering with anisotropic diffusion method
Table 2
PSNR values for automatic segmentation of selected retina layers for patients with VMT [dB]
Method        AVG     MED     AD      DSWT    MWT
All layers    37.40   37.40   37.75   37.70   37.88
ILM           49.28   49.34   48.54   48.56   49.23
NFL/GCL       38.35   38.24   40.12   40.14   39.11
IPL/INL       35.56   35.50   35.84   35.80   35.94
INL/OPL       33.82   33.79   33.90   33.85   34.17
OPL/ONL       34.87   34.98   35.02   34.91   35.45
IS/OS         41.64   41.84   42.35   42.38   42.24
RPE/Choroid   45.67   45.53   46.52   46.47   46.20
Fig. 11. PSNR of retina layers segmentation for healthy and pathological eyes after filtering with wavelet soft thresholding method
Fig. 12. PSNR of retina layers segmentation for healthy and pathological eyes after filtering with multiframe wavelet thresholding method
Tables 2 and 3 contain the PSNR results obtained for the best parameter values of each denoising method. It is worth mentioning that for borders between hyper-reflective and dark regions (i.e. ILM, IS/OS, and RPE/Choroid) the segmentation results for both groups of patients have higher scores, regardless of the denoising method.
Additionally, a severe VMT pathology that affected one of the examined patients greatly lowered the calculated PSNR value for this group. For this patient the best results for each method gave a score in the range of 36.80 to 38.14 dB, while the mean results for the other patients in this group were in the range of 43.78 to 44.42 dB. For all patients with the VMT pathology the erroneous segmentation occurred in the area of the pathology, and the biggest error was 49 px. This error may be caused by irregularities of the layers in the area of the pathology, as the segmentation algorithm assumes smoothness of the layer borders. This continuity characteristic is expected by the experts and is noticeable in their manual annotations.
Table 3
PSNR values for automatic segmentation of selected retina layers for healthy eyes [dB]
Method        AVG     MED     AD      DSWT    MWT
All layers    43.58   43.29   44.60   44.56   43.15
ILM           55.66   55.42   55.45   55.55   54.49
NFL/GCL       40.09   39.95   42.74   42.72   40.84
IPL/INL       44.31   43.68   44.26   44.16   42.62
INL/OPL       43.44   42.54   41.79   41.68   40.58
OPL/ONL       44.62   43.95   44.13   44.04   42.31
IS/OS         43.08   43.44   48.76   48.68   46.14
RPE/Choroid   43.91   44.01   45.11   45.34   45.51
5. Conclusions
Manufacturers of OCT devices constantly try to overcome the problem of the low quality of the acquired images. Denoising methods such as anisotropic diffusion and wavelet thresholding allow for better retina segmentation for both tested groups of patients. Additionally, in the case of the VMT pathology we were able to improve the accuracy by using the multiframe wavelet thresholding algorithm. We have observed that this approach did not provide significant improvement for images of healthy retinas. Our experiments confirmed that the proposed multiframe wavelet thresholding method improved segmentation accuracy for OCT images of pathological tissues, although changing the noise reduction parameter did not influence the segmentation process.
For healthy data, soft wavelet thresholding of single B-scans and anisotropic diffusion are also worth exploring, although for these methods the parameter value should be set as small as possible.
The proposed solution can be applied to volumetric datasets to aid the diagnostic procedure, since precise segmentation allows for early detection of morphological changes and a thorough assessment of the pathology stage. Thanks to that, it could be possible to select and perform the proper therapy.
Acknowledgements. This work was prepared within the PRELUDIUM CADOCT Project number 2014/15/N/ST6/00710 funded by the National Science Centre in Poland.
References [1] T. Kudasik and S. Miechowicz, “Methods of reconstructing complex multi-structural anatomical objects with RP techniques”, Bull. Pol. Ac.: Tech. 64 (2), 315–323 (2016). [2] T. Kudasik, M. Libura, O. Markowska, and S. Miechowicz, “Methods for designing and fabrication large-size medical models for orthopaedics”, Bull. Pol. Ac.: Tech. 63 (3), 623–627 (2015). [3] M. Wojtkowski, “High-speed optical coherence tomography: basics and applications”, Appl. Opt. 49 (16), D30–D61 (2010). [4] M.D. Abràmoff, M.K. Garvin, and M. Sonka, “Retinal imaging and image analysis”, IEEE Reviews in Biomedical Engineering 3, 169–208 (2010). [5] Optovue Inc., RTVue XR 100 Avanti System. User manual. Software Version 2016.0.0, 2016. [6] M.F. Kraus, B. Potsaid, et al., “Motion correction in optical coherence tomography volumes on a per A-scan basis using orthogonal scan patterns”, Biomedical Optics Express 3 (6), 1182–1199 (2012). [7] B. Karamata, K. Hassler, M. Laubscher, and T. Lasser, “Speckle statistics in optical coherence tomography”, J. Opt. Soc. Am. A 22 (4), 593–596 (2005). [8] M.A. Mayer, A. Borsdorf, et al., “Wavelet denoising of multiframe optical coherence tomography data”, Biomedical Optics Express 3 (3), 572–589 (2012). [9] A. Baghaie, R.M. D’souza, and Z. Yu, “Sparse and low rank decomposition based batch image alignment for speckle reduction of retinal OCT images”, IEEE 12th International Symposium on Biomedical Imaging, (2015). [10] J. Rogowska, “Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images”, Physics in Medicine and Biology 47 (4), 641–655 (2002). [11] D.L. Marks, T.S. Ralston, and S.A. Boppart, “Speckle reduction by I-divergence regularization in optical coherence tomography”, J. Opt. Soc. Am. A 22 (11), 2366–2371 (2005). [12] A. Wong, A. Mishra, K. Bizheva, and D.A. Clausi, “General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery”, Opt. Express 18 (8), 8338–8352 (2010). [13] R. Bernardes, C. Maduro, et al., “Improved adaptive complex diffusion despeckling filter”, Opt. Express 18 (23), 24048–24059 (2010).
[14] P. Puvanathasan and K. Bizheva, “Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images”, Opt. Express 17 (2), 733–746 (2009). [15] W. Habib, A.M. Siddiqui, and I. Touqir, “Wavelet based despeckling of multiframe optical coherence tomography data using similarity measure and anisotropic diffusion filtering”, IEEE International Conference on Bioinformatics and Biomedicine, 330–333 (2013). [16] Z. Hongwei, L. Baowang, and F. Juan, “Adaptive wavelet transformation for speckle reduction in optical coherence tomography images”, IEEE International Conference on Signal Processing, Communications and Computing, 1–5 (2011). [17] S. Chitchian, M.A. Fiddy, and N.M. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform”, J. Biomedical Optics 14 (1), 14–31 (2009). [18] Z. Jian, L. Yu, B. Rao, B.J. Tromberg, and Z. Chen, “Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform”, Optics Express 18 (2), 1024–1032 (2010). [19] A. Ozcan, A. Bilenca, A.E. Desjardins, B.E. Bouma, and G.J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering”, J. Opt. Soc. Am. A 24 (7), 1901–1910 (2007). [20] L. Wang, Z. Meng, et al., “Adaptive speckle reduction in OCT volume data based on block-matching and 3-D filtering”, IEEE Phot. Technol. Lett. 24 (20), 1802–1804 (2012). [21] J.J. Gómez-Valverde, J.E. Ortuño, et al., “Evaluation of speckle reduction with denoising filtering in optical coherence tomography for dermatology”, IEEE 12th International Symposium on Biomedical Imaging (2015). [22] K.S. Abbirame, N. Padmasini, R. Umamaheshwari, and S.M. Yacin, “Speckle noise reduction in spectral domain optical coherence tomography retinal images using fuzzification method”, Int. Conf. on Green Computing Communication and Electrical Engineering, 1–6 (2014). [23] S.J. Chiu, X.T. Li, et al., “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation”, Optics Express 18 (18), 19413–19428 (2010). [24] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion”, IEEE Trans. Pattern Anal. and Mach. Intell. 12, 629–639 (1990). [25] S. Mallat, A Wavelet Tour of Signal Processing, 3rd ed., Academic Press, 2009. [26] G. Nason and B. Silverman, “The stationary wavelet transform and some statistical applications”, Lecture Notes in Statistics 103, 281–299 (1995). [27] M. Sonka and M.D. Abràmoff, “Quantitative analysis of retinal OCT”, Medical Image Analysis 33, 165–169 (2016). [28] A. Stankiewicz, T. Marciniak, A. Dabrowski, M. Stopa, and E. Marciniak, “A new OCT-based method to generate virtual maps of vitreomacular interface pathologies”, Proceedings of 18th IEEE International Conference on Signal Processing Algorithms, Architectures, Arrangements, and Applications, 83–88 (2014). [29] A. Stankiewicz, T. Marciniak, et al., “Improving segmentation of 3D retina layers based on graph theory approach for low quality OCT images”, Metrology and Measurement Systems 23 (2), 269–280 (2015). [30] Mathworks Inc., Matlab R2014b. User’s Guide, 2014.