Multi-resolution retinal vessel tracker based on directional smoothing

K.-H. Englmeier1, S. Bichler1, K. Schmid1, M. Maurino2, M. Porta2, T. Bek3, B. Ege4, O.V. Larsen4, O.K. Hejlesen4

1. GSF – Institute of Medical Informatics, Ingolstaedter Landstr. 1, D-85764 Neuherberg, Germany
2. Department of Internal Medicine, University of Turin, corso AM Dogliotti 14, 10126 Torino, Italy
3. Aarhus University Hospital, Department of Ophthalmology, DK-8000 Aarhus, Denmark
4. Department of Medical Informatics and Image Analysis, Aalborg University, Fredrik Bajersvej 7D, DK-9220 Aalborg, Denmark
ABSTRACT

To support ophthalmologists in their routine work and to enable the quantitative assessment of vascular changes in color fundus photographs, a multi-resolution approach was developed that segments the vessel tree efficiently and precisely in digital images of the retina. The algorithm starts at seed points found in a preprocessing step and then follows each vessel, iteratively adjusting the search direction and locating the vessel center line. In addition, vessel branches and crossings are detected and stored in detailed lists. Every iteration of the Directional Smoothing Based (DSB) tracking process starts at a given point in the middle of a vessel. First, rectangular windows for several directions in a neighborhood of this point are smoothed along the assumed vessel direction. The window that yields the best contrast is taken to indicate the true direction of the vessel. The center point is moved 1/8 of the vessel width in that direction, and the algorithm continues with the next iteration. Branch and crossing detection uses lists with unique vessel segment IDs and branch point IDs. When the tracker crosses another vessel, tracking stops: the newly traced vessel segment is stored in the vessel segment list, and the previously traced vessel is broken up at the crossing or branch point and stored as two separate vessel segments. This approach has several advantages:
- Directional smoothing eliminates noise while preserving the vessel edges.
- DSB works on high-resolution images (3000 x 2000 pixels) as well as on low-resolution images (900 x 600 pixels), because a large area of the vessel is used to determine the vessel direction.
- For the detection of venous beading, the vessel width is measured at every step along the traced vessel.
- The lists of branch and crossing points yield a network of connected vessel segments that can be used for further processing of the retinal vessel tree.

Keywords: retina, vessel course tracking, vessel contour, color fundus photographs
1. INTRODUCTION

Pathological changes of the retinal vessel tree can be observed in a variety of diseases. Diabetic retinopathy (DR), a consequence of long-term diabetes mellitus, is one such disease. It represents one of the leading causes of blindness in Western countries and produces vascular changes such as microaneurysms, intraretinal microvascular abnormalities, venous beading and neovascularisations, together with haemorrhages, exudates and retinal edema. To prevent these complications it is very important to treat patients as early as possible. At present, vascular modifications are examined in clinical routine either qualitatively by direct inspection (ophthalmoscopy) or semi-quantitatively by analyzing photographic documentation of the ocular fundus (fundus photographs). Image processing methods are needed to extract relevant quantitative data about changes of the retinal vessel tree and thus to improve the assessment of the retinal status. Data about the vessel course and contour can be extracted by a vessel tracking algorithm. These data can be analyzed by the ophthalmologist or used for automatic recognition of vascular changes such as vasoconstrictions, vasodilations, venous beading and new vessels.
Medical Imaging 2002: Physiology and Function from Multidimensional Images, Anne V. Clough, Chin-Tu Chen, Editors, Proceedings of SPIE Vol. 4683 (2002) © 2002 SPIE · 1605-7422/02/$15.00
The extraction and analysis of retinal vessels has been performed in various ways. Any automatic method has to cope with missegmentation, which mainly stems from inhomogeneous illumination of the ocular fundus, producing color changes within one image, and from color differences between different images. Moreover, image quality parameters such as contrast and brightness are not standardized. This paper presents a method for the extraction of vessels in color fundus photographs that is combined with directional smoothing, which eliminates noise while preserving the vessel edges. In addition, the algorithm can be applied to images with different resolutions.
2. METHODS

2.1 Material
The input must be a color image, preferably with good definition and medium-to-high resolution. The development of our vessel tracker is based on 248 color images: 64 from subjects without diabetic retinopathy (DR), 64 with mild to moderate DR, 60 with severe non-proliferative DR and, finally, 60 with proliferative DR. The images had a resolution of 760x570 pixels and 24-bit color depth (16.8 million colors), in TIFF format.

2.2 State of the art
There have been many approaches to vessel detection in retinal images. One of the first methods was to use a line detection algorithm, often followed by skeletonisation, to obtain a vascular network pattern (Akita(1,2,3), Tanaka(21)). The so-called "matched filter method" is a well-known technique for vessel extraction, introduced by Chaudhuri(5): a set of 2D kernels with a Gaussian cross-section and a given length is matched across the grayscale image. No initialization or user intervention is required, but the method is computationally intensive and has problems handling bifurcations. Hoover(14) improved this method so that it can manage bifurcations, and obtained better separation. The matched filter is often used in later works on retinopathy, e.g. in the analysis of digital angiograms (Zhou(26)), for measurements of vascular tortuosity (Hart(13)) or in a screening system (Goh(10)). Various kinds of edge maps are used for vessel tracking. Wu(23) and Meehan(17) are interested in vessel width and employ different edge detectors to find a suitable piece of vessel. After localizing the optic disk with pyramidal decomposition, Hausdorff-based template matching and confidence assignment, Gagnon(6) starts a recursive dual edge tracking based on the Canny algorithm with connectivity recovering, beginning at the optic disk. The same authors(7) also presented a non-recursive paired tracking for vessel extraction.
With additional features such as following the twin border and setting seed points for further tracking, the authors achieved improvements in handling bifurcations and in jumping over broken or missing edges. Several combinations of edge detection or matched filter methods with artificial neural networks have been investigated: for example, Gardner(9) uses Sobel edge detection and Sinthanayothin(18) the Chaudhuri method as input to neural networks, while Goldbaum(11) extracts the vessels with an inverted Gaussian-shaped zero-sum matched filter before further processing in a neural network. In the automated grading of venous beading by Gregson(12) and Kozousek(16), the first step is a simple thresholding that extracts a rough silhouette, followed by a morphological closing algorithm and thinning to a centerline. In a second step the vessel diameter is measured in the original image along the centerline; after a fast Fourier transformation, a venous beading index can be calculated. The use of steerable filters was shown by Kochner(15): first the centerline of each vessel is extracted, then vessel contour and branches are detected, all with steerable filters. The starting points are determined by drawing a circle around the optic disk and using the intersections with edge points. Tamura(20) first detects the optic disc with a Hough transform technique and then traces the vessels with a second-order derivative of a Gaussian function; here, too, a circle around the optic disk is used for the starting points. Gao(8) measures only vessel diameters, but with the same starting-point method. A fuzzy logic algorithm was presented by Tolias(22): it automatically tracks fundus vessels with a C-means clustering algorithm and overcomes the problems of initialization and vessel profile modeling.
Zana(24,25) developed an algorithm based on mathematical morphology and linear processing for vessel detection in noisy angiograms and for image registration. A geometric model of all patterns that can be confused with vessels is the rationale of a morphological treatment combined with curvature measurement. Can(4) and Shen(19) suggest an exploratory tracing algorithm which provides useful partial results, scales well with image size and requires only a small number of parameter settings. First, the image is explored for frame contrast and brightness levels along a grid of pixel-wide lines, yielding edge pixels; second, false seed points are filtered out; third, a sequence of recursive tracing steps proceeds along the vessel centerlines.

The main problems in the former studies were the need for user interaction, dependence on the presence of the optic disk, intensive computations due to preprocessing, large kernels or the processing of every pixel in the image, poor scaling with image size, and the lack of partial results when a computational deadline is reached. In particular, poor handling of bifurcations and of broken or missing edges often causes unsatisfying vessel tracking results. Additionally, many algorithms need a particular environment and a definite image resolution or image size.

2.3 Multi-Resolution Retinal Vessel Tracker
Our new algorithm overcomes the following problems described by Coatrieux(27), namely:
1. Robust and accurate handling of branching and crossover points
2. Improved handling of discontinuous regions by relying on local contrast, edge information and noise
In addition, the computational efficiency of our algorithm on high-resolution images (2K x 3K) makes it attractive. In the following we describe the steps of our method:
1. Preprocessing and calculation of the a priori maximum vessel width
2. Grid calculation and seed point detection
3. Evaluation of seed point candidates
4. Vessel tracking
5. Directional smoothing
6. Branch and crossing point classification

2.3.1 Preprocessing
As a first preprocessing step, the background is masked out by removing all pixels that are darker than 17% of the maximum brightness in the image. The 17% threshold was found empirically and has proved adequate for all photographs analysed so far. Then, since the best contrast in a color fundus image is found in the green channel, the red and blue channels are discarded. As a further preprocessing step, the a priori maximum vessel width in the image is estimated from the image resolution. This number is important for the tracker, because tracking stops if the vessel gets wider than this maximum. In numerous tests, 1/60 of the image width in pixels was found to be the optimum for this estimate. No preprocessing to enhance contrast or to remove noise is done at this stage, because this is done locally while tracking; that way, no time is wasted on areas of the image that do not contain vessels.
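The preprocessing steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function and variable names are our own, and the maximum brightness is taken from the green channel:

```python
import numpy as np

def preprocess(rgb):
    """Sketch of the preprocessing stage: mask the dark background,
    keep only the green channel, and derive the a priori maximum
    vessel width from the image resolution."""
    rgb = np.asarray(rgb, dtype=np.float64)
    green = rgb[:, :, 1]                    # green channel has the best contrast
    mask = green > 0.17 * green.max()       # drop pixels darker than 17% of max brightness
    max_vessel_width = rgb.shape[1] / 60.0  # empirically, 1/60 of image width in pixels
    return green, mask, max_vessel_width
```

For a 760x570 input (the resolution of the study images), this yields an a priori maximum vessel width of about 12.7 pixels.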
Figure 1: Search for a local minimum: a local minimum is accepted as a seed point if it is at least 5% darker than the average of the local maxima and not wider than the a priori determined vessel width.
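The acceptance test illustrated in Figure 1 can be sketched for a single grid line as follows (a hypothetical sketch with names of our own choosing, not the authors' code):

```python
import numpy as np

def seed_points_on_line(profile, max_width):
    """Keep a local gray-value minimum as a seed point if it is at least
    5% darker than the neighboring local maxima and those maxima are no
    farther apart than the a priori maximum vessel width."""
    profile = np.asarray(profile, dtype=np.float64)
    mins = [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]]
    maxs = [i for i in range(1, len(profile) - 1)
            if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]
    seeds = []
    for i in mins:
        left = [j for j in maxs if j < i]    # nearest local maximum to the left
        right = [j for j in maxs if j > i]   # nearest local maximum to the right
        if not left or not right:
            continue
        l, r = left[-1], right[0]
        avg_max = 0.5 * (profile[l] + profile[r])
        # 5% darker than the average maxima, and not wider than max_width
        if profile[i] <= 0.95 * avg_max and (r - l) <= max_width:
            seeds.append(i)
    return seeds
```

Running this over the 40 horizontal and 40 vertical grid lines would produce seed candidates like the white spots in Figure 2.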
Figure 2: White spots show the location of detected local minima on the black grid lines.
2.3.2 Grid Search for Seed Points
To find seed points for the algorithm to start from, the image is overlaid with a grid of 40 horizontal and 40 vertical lines, independent of the resolution. This idea was taken from Can(4), who also showed that 40 lines are the optimum: although more grid lines return more seed points, this does not result in more detected vessels, as most of the additional seed points are rejected afterwards. Vessels are darker than the background of the eye, so on each grid line a search for local minima of the gray value is performed. A local minimum is accepted if it is at least 5% darker than the local maxima to its left and right, and if those maxima are not more than the a priori determined vessel width apart (see Fig. 1). An example of detected seed points is shown in Fig. 2.

2.3.3 Evaluation of seed point candidates
To evaluate a seed point candidate, first its neighborhood is checked for already detected vessel segments, to prevent tracking the same vessel twice. Next, the direction of the untracked vessel at the seed point has to be found. This is done by calling the same directional smoothing subfunction that is used by the tracker for windows in 16 different directions (see 2.3.5), and assuming that the direction for which the subfunction returns the strongest edges is the direction of the vessel. The edge strength is calculated from the gradient after directional smoothing; if the edges are too weak according to a threshold, the seed point is omitted. Based on the location of the edges in the chosen window, the seed point is moved to the midpoint between the two edges, so that the tracker starts from the center of the vessel. The vessel width is also calculated from the positions of the edges.
If this width is greater than the assumed maximum vessel width, the seed point is rejected and the algorithm continues with the next candidate. Once these tests have been passed, the seed point is handed to the tracker.

2.3.4 Tracking
The aim of the tracker is the automatic detection of the center line of the vessel, as well as of the progression of the vessel width along this center line. The tracker is given the position of the seed point, the direction of the vessel, and the width of the vessel, as calculated by the seed point evaluation. It performs the following tasks in a loop that terminates only when the criteria explained below are met:
1. The center point is moved 1/8 of the vessel width along the vessel direction.
2. Three windows in different directions are tested by the directional smoothing subfunction for the strongest edges. The directions are the current vessel direction, 22.5° to its left and 22.5° to its right (a sixteenth of a full circle). The window with the strongest edges after smoothing gives the new direction of the vessel.
3. From the positions of the edges, the new vessel width is calculated.
4. Based on the new width and the positions of the edges, the center point is moved to the midpoint between the two edges, which is the center of the vessel.
5. The loop is terminated if the vessel ends or if it comes across another, previously traced vessel. Therefore two classes of stopping criteria are tested: criteria that signal the end of a vessel, and criteria that signal a crossing with another vessel. If the new center point is outside the image area, if the contrast is too low, or if the new vessel width is too small or too large, the vessel has probably ended; in this case the tracker aborts and stores the vessel segment it has found, without passing it to the branch and crossing point classification. If the new center point lies on an already traced vessel segment, the tracker also aborts, but passes the vessel segment to the branch and crossing point classification explained below.
6. If no stopping criterion was met, the tracker continues with step 1.

2.3.5 Directional Smoothing Subfunction
In order to find the best direction for the vessel tracker, the following steps are performed for three different directions (see Fig. 3):
1. A rectangular window with twice the size of the vessel width is taken out of the image and rotated into the direction to be analyzed.
2. The window is smoothed along this direction and summed up to generate a one-dimensional vector; the length of the Gaussian filter used is adjusted to the vessel width.
3. The minimum and the maximum of the gradient of this vector are defined as the left and the right edge of the assumed vessel.
4. The locations of these edges, as well as their strength, are returned as the result.
Figure 3: To analyze the strength of vessel edges in a given direction, the following tasks are performed: 1. A rectangle is taken out of the image and rotated into the direction to be analyzed. 2. A directional Gaussian smoothing filter is applied, and the result is summed up to generate a one-dimensional vector. 3. The maximum and minimum of the gradient of this vector lie at the right and the left edge of the vessel; the magnitude of the gradient is a measure of the strength of the edge.
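The directional smoothing subfunction can be sketched as below. This is a simplified illustration under our own assumptions: nearest-neighbor sampling on a rotated grid stands in for the rotation, and a plain mean along the vessel direction stands in for the directional Gaussian filter; all names are ours:

```python
import numpy as np

def directional_edges(img, cx, cy, theta, width):
    """Sample a window of side 2*width centered at (cx, cy) along direction
    theta, smooth along that direction, sum to a 1-D cross-profile, and take
    the extrema of the profile's gradient as the vessel edges."""
    n = max(int(round(2 * width)), 4)
    s = np.arange(n) - n / 2.0        # coordinate along the assumed vessel direction
    t = np.arange(n) - n / 2.0        # coordinate across the vessel
    S, T = np.meshgrid(s, t, indexing="ij")
    # rotate the sampling grid into the direction under test
    X = cx + S * np.cos(theta) - T * np.sin(theta)
    Y = cy + S * np.sin(theta) + T * np.cos(theta)
    Xi = np.clip(np.round(X).astype(int), 0, img.shape[1] - 1)
    Yi = np.clip(np.round(Y).astype(int), 0, img.shape[0] - 1)
    window = img[Yi, Xi]
    # averaging along the vessel stands in for the directional Gaussian filter
    profile = window.mean(axis=0)     # one-dimensional cross-profile
    grad = np.gradient(profile)
    left, right = int(np.argmin(grad)), int(np.argmax(grad))
    strength = grad[right] - grad[left]
    return left, right, strength
```

When the window is aligned with a dark vessel on a bright background, the cross-profile shows a sharp dip, so the gradient extrema are strong and well separated; for a misaligned window the smoothing blurs the vessel away and the returned strength is low. This is exactly the contrast the tracker uses to pick among the three candidate directions.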
2.3.6 Branch and Crossing Point Classification
For the classification of branch and crossing points, what matters is where the new vessel segment intersects the old one. If the intersection occurs somewhere in the middle of the old vessel, that vessel is broken up in two and the intersection is stored as a branching point (see Fig. 4). If the intersection occurs at the end of the old vessel, the old and the new vessel are joined and the endpoint of the old vessel is removed (see Fig. 5). If the intersection occurs at a branching point, the branching point becomes a crossing (see Fig. 6). All detected branching and crossing points, as well as the vessel segments including their widths, are stored in detailed lists that can be used for further processing of the output of our algorithm. For each branching and crossing point, a list of the connected vessel segments is stored. For each vessel segment, the centerline, the progression of the width and the direction at each point of the centerline are stored, as well as the branch or crossing points it is connected to. As an example, this information can be used to plot the detected vessel segments into the image, as shown in Figure 7.
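The segment and node lists, and the mid-vessel split of Figure 4, might be represented as follows. All type and field names are hypothetical; this sketches the bookkeeping, not the authors' data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    sid: int
    centerline: list                  # list of (x, y) center points
    widths: list                      # vessel width at each centerline point
    nodes: list = field(default_factory=list)   # ids of connected branch/crossing points

@dataclass
class Node:
    nid: int
    pos: tuple
    kind: str                         # "branch" or "crossing"
    segments: list = field(default_factory=list)  # ids of connected segments

def register_branch(segments, nodes, old, index, new_seg, next_id):
    """Figure-4 case: a new segment hits the middle of an old one, so the old
    segment is split at `index` and the intersection becomes a branching point
    linking all three resulting segments."""
    node = Node(nid=len(nodes), pos=old.centerline[index], kind="branch")
    tail = Segment(sid=next_id,
                   centerline=old.centerline[index:], widths=old.widths[index:])
    old.centerline, old.widths = old.centerline[:index + 1], old.widths[:index + 1]
    for seg in (old, tail, new_seg):
        node.segments.append(seg.sid)
        seg.nodes.append(node.nid)
    nodes.append(node)
    segments.append(tail)
    return node
```

The Figure-5 (end-to-end join) and Figure-6 (branch promoted to crossing) cases would be handled analogously, by merging two segments or by changing a node's `kind`.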
Figure 4: If an intersection occurs in the middle of a vessel, it is stored as a new branching point.
Figure 5: If the intersection occurs at the end of a vessel, the old and the new vessel are joined, and the endpoint is removed.
Figure 6: If the intersection occurs at a branching point, this branching point is stored as a new crossing point.
Figure 7: The segmented vessel tree plotted into the image.
3. RESULTS

The vessel tracker aims at delineating as correctly as possible the retinal vascular network captured in a digital image, whether directly acquired with a fundus camera or digitized by a scanner, in TIFF format, with any resolution, covering an entire photographic field or just one section of it. The output is a monochromatic retinal image. Superimposed on this, vessel tracings are marked by red lines running along the centers of the vessels, while width and direction at each point are marked by blue segments whose length indicates the vessel diameter at that particular point. The computer used for the tests was a PII 350 with 128 MB RAM, a video resolution of 1024x768, and Windows NT Workstation as operating system. Tracking of each image requires approximately 8 to 20 seconds. This high variability is due both to the presence or absence of retinal damage, in particular haemorrhages, and to sharpness problems caused by cataract of varying severity; the longest times were registered when both factors were present.
Most of the images had been captured with a Canon NM6 at a 45° acquisition angle; some were obtained with a Mydriatic Kowa ProII at 30°, and some with both fundus cameras. From each of the groups "No DR", "Mild-to-moderate DR", "Severe non-proliferative DR" and "Proliferative DR", 15 cases were randomly selected. For each patient there were four retinal fields: right eye temporal, right eye nasal, left eye temporal and left eye nasal. Factors such as sharpness, brightness or readability did not affect the selection; some images with moderate, or even severe, cataract were included. Usually the largest vessels were tracked correctly if their resolution was sufficient. In almost all images it was possible to distinguish the temporal vascular arcades, and it was particularly easy to recognize the venous tree, due to its dimensions and contrast. Even when tracking was lost because of vessel fragmentation, it was possible to find the vessel's further distal prolongation. Vessels running too close to the image border were often ignored, owing to partial or insufficient centering. It is important to point out that the local vessel dimension is always shown with precision. In no image were discrete lesions identified as vessel segments. Similarly, the light reflexes normally observed in retinas of young patients did not interfere with the vessel tracker. Concerning the smallest vessels, small arteries were harder to detect than small veins, due to their smaller diameter and lower contrast; to detect them easily, good quality images, preferably of a bright retina, are required. Small veins, on the contrary, are easily tracked. The two cameras performed differently: Kowa images gave better results than Canon images. Additional tests on high-resolution images obtained with other fundus cameras in some cases reached 100% vessel mapping. However, vessels with many direction changes ("tortuous" vessels) remain difficult to trace.
To sum up, our algorithm is faster and more accurate than our previous version (Kochner(15)). It estimates the direction and width of the vessel, whereby tracking can be resumed after an interruption of the vessel. Some factors that can reduce the completeness of the vascular mapping stem from the necessity to suppress noise. In detail, they are:
• Vessels or fragments that are too short
• Vessel paths too close to a parallel vessel or to the image border
• Low quality of the initial image
Possible future developments of the algorithm include the separation of arteries and veins, the automated detection of macroscopic vessel anomalies and the definition of a "gold standard" for diameter changes to discriminate possible pathologic anomalies, so as to alert and assist readers with their clinical detection.
REFERENCES
1. K. Akita, H. Kuga, Pattern recognition of blood vessel networks in ocular fundus images, IEEE 1982, pp 436-441
2. K. Akita, H. Kuga, A computer method of understanding ocular fundus images, Pattern Recognition, Vol. 15, No. 6, 1982, pp 431-443
3. K. Akita, H. Kuga, Digital processing of color ocular fundus images, Proceedings of MEDINFO-80, 1980, pp 80-84
4. A. Can, H. Shen, J. N. Turner, H. L. Tanenbaum, B. Roysam, Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms, IEEE Trans Inf Tech Biomedicine, Vol. 3, No. 2, June 1999, pp 125-138
5. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, Detection of blood vessels in retinal images using two dimensional matched filters, IEEE Trans Med Imaging, Vol. 8, No. 3, September 1989, pp 263-269
6. L. Gagnon, M. Lalonde, M. Beaulieu, M.-C. Bouchert, Procedure to detect anatomical structures in optical fundus images, Proc. of Conference Medical Imaging 2001, SPIE Proc. Vol. 4322, pp 1218-1225, www.crim.ca/~lgagnon/articles/SPIEMedImag2001.pdf
7. M. Lalonde, L. Gagnon, M.-C. Bouchert, Non-recursive paired tracking for vessel extraction from retinal images, Proceedings of the Conference Vision Interface 2000, Montreal, May 2000, pp 61-68
8. X. Gao, A. Bharath, A. Stanton, A. Hughes, N. Chapman, S. Thom, Measurement of vessel diameters on retinal images for cardiovascular studies, On-line Conference Proceedings: Medical Image Understanding and Analysis 2001, www.cs.bham.ac.uk/research/proceedings/miua2001/papers/gao.pdf
9. G. G. Gardner, D. Keating, T. H. Williamson, A. T. Elliott, Automatic detection of diabetic retinopathy using an artificial neural network: A screening tool, British Journal of Ophthalmology 1996; 80: pp 940-944
10. K. G. Goh, W. Hsu, M. L. Lee, H. Wang, ADRIS: An automatic diabetic retinal image screening system, book chapter in Medical Data Mining and Knowledge Discovery, K. J. Cios (ed.), Physica (Springer) Verlag, 2001, pp 181-210, www.comp.nus.edu.sg/~whsu/publication/ADRIS-Springer.pdf
11. M. H. Goldbaum, N. P. Katz, S. Chaudhuri, M. Nelson, Image understanding for automated retinal diagnosis, Proceedings of the 13th Annual Symp. on Comp. Appl. in Medical Care, IEEE Computer Society Press 1989, pp 756-760
12. P. H. Gregson, Z. Shen, R. C. Scott, V. Kozousek, Automated grading of venous beading, Computers and Biomedical Research 28, 1995, pp 291-304
13. W. E. Hart, M. Goldbaum, B. Cote, P. Kube, M. R. Nelson, Automated measurement of retinal vascular tortuosity, Proceedings AMIA Fall Conference 1997, http://citeseer.nj.nec.com/cachedpage/132896/1
14. A. Hoover, V. Kouznetsova, M. Goldbaum, Locating blood vessels in retinal images by piece-wise threshold probing of a matched filter response, Proc AMIA Symp 1998, pp 931-935
15. B. Kochner, D. Schuhmann, M. Michaelis, G. Mann, K.-H. Englmeier, Course tracking and contour extraction of retinal vessels from color fundus photographs: Most efficient use of steerable filters from model based image analysis, Proc. SPIE Medical Imaging 1998, pp 755-761
16. V. Kozousek, Z. Shen, P. Gregson, R. C. Scott, Automated detection and quantification of venous beading using Fourier analysis, Can J Ophthalmol, Vol. 27, No. 6, 1992, pp 288-294
17. R. T. Meehan, G. R. Taylor, P. Rock, T. H. Mader, N. Hunter, A. Cymerman, An automated method of quantifying retinal vascular responses during exposure to novel environmental conditions, Ophthalmology, July 1990, Vol. 97, No. 7, pp 875-881
18. C. Sinthanayothin, J. F. Boyce, H. L. Cook, T. H. Williamson, Automated localization of the optic disc, fovea, and retinal blood vessels from digital colour fundus images, Br J Ophthalmol 1999; 83: pp 902-910
19. H. Shen, B. Roysam, C. V. Stewart, J. N. Turner, H. L. Tanenbaum, Optimal scheduling of tracing computations for real-time vascular landmark extraction from retinal fundus images, IEEE Trans Inf Tech Biomedicine, Vol. 5, No. 1, March 2001, pp 77-91
20. S. Tamura, Y. Okamoto, K. Yanashima, Zero-crossing interval correction in tracing eye-fundus blood vessels, Pattern Recognition, Vol. 21, No. 3, 1988, pp 227-233
21. M. Tanaka, K. Tanaka, An automatic technique for fundus-photograph mosaic and vascular net reconstruction, Proceedings of MEDINFO-80, 1980, pp 116-120
22. Y. A. Tolias, S. M. Panas, A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering, IEEE Trans Med Imaging; 17(2); Apr 1998; pp 263-273
23. D. Wu, B. Schwartz, J. Schwoerer, R. Banwatt, Retinal blood vessel width measured on color fundus photographs by image analysis, Acta Ophthalmologica Scandinavica 1995: 73 (Suppl. 215): pp 33-40
24. F. Zana, J.-C. Klein, Robust segmentation of vessels from retinal angiography, Proc. DSP, 1997, pp 1087-1090
25. F. Zana, J.-C. Klein, A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform, IEEE Trans Med Imaging; 18(5); May 1999; pp 419-428
26. L. Zhou, M. S. Rzeszotarski, L. J. Singerman, J. M. Chokreff, The detection and quantification of retinopathy using digital angiograms, IEEE Transactions on Medical Imaging, Vol. 13, No. 4, December 1994
27. R. Collorec, J. L. Coatrieux, Vectorial tracking and directed contour finder for vascular network in digital subtraction angiography, Pattern Recog. Lett., Vol. 8, No. 5, pp 353-358, December 1988