Original Article
Improved Detection of Branching Points in Algorithms for Automated Neuron Tracing from 3D Confocal Images

Yousef Al-Kofahi,1 Natalie Dowell-Mesfin,2 Christopher Pace,2 William Shain,2 James N. Turner,2 Badrinath Roysam1*
1 Rensselaer Polytechnic Institute, Troy, New York 12180
2 The Wadsworth Center, NY State Department of Health, Albany, New York 12201-0509
Received 29 December 2006; Revision Received 24 August 2007; Accepted 31 October 2007

This article contains supplementary material available via the Internet at http://www.interscience.wiley.com/jpages/1552-4922/suppmat.

*Correspondence to: Prof. Badrinath Roysam, JEC 7010, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA. Email: [email protected]

Published online 7 December 2007 in Wiley InterScience (www.interscience.wiley.com)

DOI: 10.1002/cyto.a.20499

© 2007 International Society for Analytical Cytology
Cytometry Part A 73A: 36–43, 2008
Abstract

Automated tracing of neuronal processes from 3D confocal microscopy images is essential for quantitative neuroanatomy and neuronal assays. Two basic approaches are described in the literature: one based on skeletonization, and another based on sequential tracing along neuronal processes. This article presents algorithms for improving the rate of detection, and the accuracy of estimating the location and process angles, at branching points for the latter class of algorithms. The problem of simultaneously detecting branch points and estimating their measurements is formulated as a generalized likelihood ratio test defined on a spatial neighborhood of each candidate point, in which the likelihoods are computed using a ridge-detection approach. The average detection rate increased from 37 to 86%. The average error in locating the branch points decreased from 2.6 to 2.1 voxels in 3D images. The generalized hypothesis test improves the rate of detection of branching points, and the accuracy of location estimates, enabling a more complete extraction of neuroanatomy and more accurate counting of branch points in neuronal assays. More accurate branch point morphometry is also valuable for image registration and change analysis. © 2007 International Society for Analytical Cytology
Key terms: automated neurite tracing; branch points; ridge detection; generalized likelihood ratio test
The goal of this work is to improve automated analysis of branching points in cytological structures such as neurites, and histological structures such as vasculature. By a branch point we mean a location where a process bifurcates. We are interested in correctly counting such locations in an image, and in estimating branch locations consistently. Branch points are of interest in many applications (1–9): neuroanatomic studies (6), developmental studies (7), and as endpoints in toxicological and screening assays (8,9); they are also valuable as landmarks for image registration (4,5). Several algorithms have been presented in the literature to segment and analyze tubular structures such as neurites and vasculature (10–36). Broadly speaking, two different approaches exist. The first is based on 2D/3D skeletonization algorithms (24–27). In this approach, the image volume is first segmented, or binarized, to extract the foreground structures of interest, and the resulting binary image is systematically thinned to arrive at the skeleton of the neurite, which is processed further. The second approach is referred to as vectorization, or tracing (10–23). In this approach, a set of initial "seed" points is extracted from the image, and the neurites are traced sequentially from these seed points by exploiting their generalized tube geometry. This category is usually fully automated, such that initial seed points and their orientations are selected automatically (10–17).
Figure 1. (A) A 2D projection of a small region extracted from a larger 3D image. (B) A case of missed detection using the previous algorithm. (C) The result of using the proposed method.
tial points and directions, and then, the algorithm traces the whole branching structure recursively (18–23). The present work focuses on a specific aspect (branch points) in fully automated or ‘‘exploratory’’ tracing algorithms (10–17). These algorithms are attractive in terms of speed since they operate directly on the image data. They are also locally adaptive and more robust to image artifacts compared to skeletonization algorithms since they operate mostly on the Cytometry Part A 73A: 3643, 2008
foreground structures, rather than the entire image volume. Their computation time scales favorably with growing image size, since the computational effort is proportional to the amount of structure in the image rather than to the image size. Skeletonization algorithms are, in principle, much more general in concept and applicability, and several authors have used them to also analyze secondary structures such as spines (26). In practice, they are susceptible to generating small
"barb"-like artifacts at noise-caused surface irregularities in the image segmentation. The reason why automated tracing algorithms perform poorly at branch points is simple: they are based on a geometric model (generalized tubes) that is poorly satisfied around branch points, leading to localized tracing errors. This issue was described and addressed for the retinal vessel tracing problem by Tsai et al. (1), who presented a model-based algorithm, termed exclusion region and position refinement (ERPR), to improve the accuracy and repeatability of estimating the locations of branching points from 2D images. The ERPR algorithm assumes that the branch points have already been detected, and only refines their location and angular measurements in 2D space. In other words, the ERPR method does not improve the rate of detection of branch points, while also being two-dimensional. The present work makes several new contributions. First, it improves the primary rate of detection of branching points. Second, it addresses the problem of accurately estimating the locations of branching points; accurate locations lead to improved estimation of the intersection angles of the neurites. Finally, it allows both improved detection and location/angle estimation in three-dimensional space. Although our methods can be adapted to any tracing algorithm, our description here is an extension of the exploratory tracing algorithms presented by Al-Kofahi et al. (11). The presented algorithms mainly improve the detection rate, and secondarily, the accuracy of branch point location and angle estimates. As a motivating example, Figure 1A shows an x-y projection of a small volume extracted from a larger 3D image. Figure 1B illustrates a case of missed detection using the previous algorithm. Figure 1C shows the result of using the proposed method. The rest of the article describes our methodology and its performance assessment. The steps used in this work are listed in the flow chart shown in Figure 2.
MATERIALS AND METHODS

Specimen Preparation and Imaging Protocols

This section describes materials and methods for 3D imaging and image analysis. The supplementary document describes the same for 2D imaging and analysis of cultured neurons, which may be of interest to many readers. The brain tissue slices were obtained from Wistar rats. After sedation, each animal was perfused transcardially with phosphate-buffered saline followed by 4% paraformaldehyde and 4% sucrose in 0.1 M phosphate buffer. The brains were removed and postfixed for 1–4 h. The visual cortex was blocked, embedded in 4% agar to provide structural support during the injection, and sectioned with an Oxford Vibratome to produce 600-µm-thick tissue slices. These were collected in 0.1 M phosphate buffer and placed on the stage of an Olympus microscope equipped for epifluorescence microscopy. Individual neurons were impaled with a glass micropipette and injected with 4% Alexa 594 (Molecular Probes, now Invitrogen, Portland, OR) in distilled H2O. Typically, 15 or more cells can be injected in a single slice, and 200 or more cells in a single animal. The cells
Figure 2. A flow chart showing the steps used in our algorithm.
were selected randomly, and spaced apart sufficiently to include a single neuron in each field. Immediately after the last cell in a slice was filled, the slice was incubated in 4% paraformaldehyde at 4°C for 2–18 h and then resectioned into 250-µm tissue slices. All subsequent processing was done at room temperature with continuous agitation. Sections were placed in phosphate buffer containing 3% normal goat serum and 2% Triton X-100 for 1 h, rinsed, and incubated for 3–16 h in ABC (Vector Elite) in phosphate buffer with 0.6% Triton X-100 and
Figure 3. (A) The volume transformation process. (B) The 3D template used in the likelihood ratio test.
0.5% BSA. The sections were rinsed and incubated in 0.05% DAB in TRIS buffer for 30 min, and then in a DAB/glucose oxidase solution for 25–60 min. Sections were mounted in 50% glycerol/50% phosphate buffer with a flat plastic spacer between two coverslips to minimize distortion. The images were collected using a NORAN Oz confocal attachment mounted on an Olympus IX-70 inverted infinity-corrected microscope. A long-working-distance water-immersion 40× lens (Zeiss 46 17 02, NA 1.15) was used, giving a field size of 192 µm × 180 µm at 0.375 µm/pixel. The optical sections were spaced 0.5 µm apart, which is less than the depth of field of the lens, so the data were finely sampled. Some images were deconvolved using the blind deconvolution software of NORAN running on an SGI Origin with an R10000 processor. To assess the performance of the proposed methods, a set of 15 3D neuronal images was tested. The images, in general, were of varying planar dimensions and depths. The number of optical sections collected varied from 30 to more than 300. Each of these images contains one neuron, and the average numbers of neurites and branches per image (neuron) were 27 and 23, respectively.

Initial Automated 3D Tracing of Neurites

The 3D images were traced using the algorithm described in (11), but with the following improvement: the robust median response of the correlation templates was used rather than the average, following the work of Abdul-Karim et al. (16). The resulting traces were processed to extract all endpoints. Some endpoints resulted from reaching the natural end of a neurite, or from reaching a branch point. Others represent gaps in traces of neurites that resulted from imaging
noise, artifacts, or nonuniform staining. These endpoints were subjected to the test described below.

Selecting Candidate Points for Detection and Refinement

Each trace endpoint was evaluated to determine whether or not a branch point might exist there. The points that passed this step were termed "candidate points." To reduce the computation associated with spatial transformation operations over 3D volumes, we transformed a small volume of neighboring voxels around each endpoint to a standardized pose, as illustrated in Figure 3A. In this standardized pose, a local extrapolation of the trace (based on five previously traced points) over a distance l_max was mapped to the x axis. Here, l_max was not less than the maximum of the vertical and horizontal neurite widths obtained from the tracing results. The size of the volume can be set by the user; the only requirement is that it fully include the branch point, even for the thickest expected neurites. For the examples shown here, this volume was typically l_max × 21 × 21 voxels. A larger choice of this volume (side longer than 21) incurred a greater computational cost without improving accuracy, while too small a choice results in missed branch points. Note that some of the transformed points could have noninteger coordinates, so we used bilinear interpolation to fit the volume to integer coordinates. An endpoint is considered a candidate if there is at least one point from another traced segment inside the l_max × 21 × 21 volume. The selected candidate points are subjected to the generalized hypothesis testing based detection and refinement step described in the next sections.
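For concreteness, the following sketch illustrates the candidate test just described. It is an illustration under our own assumptions, not the authors' implementation: the function and parameter names are ours, the direction estimate is a simple secant through the five-point trace history standing in for the paper's extrapolation, and the orthonormal-frame construction is one of several ways to realize the standardized pose of Figure 3A.

```python
import numpy as np

def is_candidate(endpoint, recent_points, other_points, l_max, half_width=10):
    """Hypothetical sketch: an endpoint is a candidate if at least one point
    from another traced segment falls inside the l_max x 21 x 21 volume
    aligned with the local trace direction (half_width=10 gives the
    21-voxel cross-section)."""
    endpoint = np.asarray(endpoint, dtype=float)
    recent = np.asarray(recent_points, dtype=float)   # last 5 traced points
    # Crude secant direction from the oldest of the five points to the end.
    u = endpoint - recent[0]
    u /= np.linalg.norm(u)
    # Complete an orthonormal frame (u, v, w) with u as the local x axis.
    helper = np.array([0.0, 0.0, 1.0]) if abs(u[2]) < 0.9 else np.array([0.0, 1.0, 0.0])
    v = np.cross(u, helper); v /= np.linalg.norm(v)
    w = np.cross(u, v)
    R = np.stack([u, v, w])                           # rows = local frame axes
    # Map the other segments' points into the standardized pose.
    local = (np.asarray(other_points, dtype=float) - endpoint) @ R.T
    inside = ((local[:, 0] >= 0.0) & (local[:, 0] <= l_max) &
              (np.abs(local[:, 1]) <= half_width) &
              (np.abs(local[:, 2]) <= half_width))
    return bool(np.any(inside))
```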
Computing Foreground and Background Likelihoods

To detect branch points, we computed the likelihood of each pixel/voxel falling in either the foreground or the background. The likelihood values were computed using a 3D extension of the 2D ridge detector proposed by Meijering et al. (19). Given a 3D image f and a normalized 3D Gaussian filter G, we first compute

$$f_{ij}(\mathbf{X}) = (f * G_{ij})(\mathbf{X}), \quad \text{with } G_{ij}(\mathbf{X}) = \frac{\partial^2 G}{\partial i\,\partial j}(\mathbf{X}) \qquad (1)$$

where $*$ denotes spatial convolution, $\mathbf{X}$ is a voxel position, and i and j can be x, y, or z. Then, we form the Hessian-like matrix at each pixel/voxel $\mathbf{X}$ as follows:

$$H'_f(\mathbf{X}) = \begin{bmatrix} f_{xx} + \frac{\alpha}{2} f_{yy} + \frac{\alpha}{2} f_{zz} & (1-\alpha) f_{xy} & (1-\alpha) f_{xz} \\ (1-\alpha) f_{xy} & f_{yy} + \frac{\alpha}{2} f_{xx} + \frac{\alpha}{2} f_{zz} & (1-\alpha) f_{yz} \\ (1-\alpha) f_{xz} & (1-\alpha) f_{yz} & f_{zz} + \frac{\alpha}{2} f_{xx} + \frac{\alpha}{2} f_{yy} \end{bmatrix} \qquad (2)$$

(all entries evaluated at $\mathbf{X}$), where $\alpha$ is a parameter whose suggested optimal value in (19) is $-1/3$. The eigenvectors $V(\mathbf{X})$ and eigenvalues $\lambda(\mathbf{X})$ of the matrix above are then computed, and the foreground likelihood probability (neuriteness) at each voxel $\mathbf{X}$ is given by

$$\rho(\mathbf{X}) = \begin{cases} \lambda_{\max}(\mathbf{X})/\lambda_{\min} & \text{if } \lambda_{\max}(\mathbf{X}) < 0 \\ 0 & \text{if } \lambda_{\max}(\mathbf{X}) \geq 0 \end{cases} \qquad (3)$$

where $\lambda_{\max}(\mathbf{X})$ is the eigenvalue with the largest magnitude at the current pixel/voxel $\mathbf{X}$, and $\lambda_{\min}$ is the smallest eigenvalue over all the voxels in the image. Finally, the likelihood of each voxel $\mathbf{X}$ being in the background is computed as $1 - \rho(\mathbf{X})$.

One drawback of this neuriteness measure is that it is computationally expensive, since it involves computing the eigenvalues of the Hessian matrix at each voxel in the image. We first implemented the neuriteness algorithm for 2D images and then extended it to 3D images. On 2D images of hundreds of thousands to a few million pixels, this operation took from a few seconds up to a minute; on 3D images of tens to hundreds of millions of voxels, it took from a few minutes up to an hour. Our solution was to limit the computation of neuriteness values to voxels inside the l_max × 21 × 21 volumes at each candidate point. The size of each volume varies with l_max, but it is usually a few thousand voxels. For example, processing a 3D image with 25 candidate points requires computing neuriteness values for 25 small volumes, i.e., a total of a few hundred thousand voxels, which can be processed in less than a minute.
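The computation in Eqs. (1)–(3) can be sketched as follows. This is a minimal NumPy/SciPy illustration under our assumptions (function and parameter names are ours), not the authors' MATLAB/C++ implementation; as described above, the paper evaluates this measure only inside the small candidate volumes rather than over the whole image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def neuriteness_3d(volume, sigma=1.0, alpha=-1.0 / 3.0):
    """Sketch of the 3D neuriteness of Eqs. (1)-(3); axes taken as
    (x, y, z) = (0, 1, 2). Returns rho; background likelihood is 1 - rho."""
    f = volume.astype(float)
    # Eq. (1): second-order Gaussian derivatives f_ij = f * G_ij.
    axis = {"x": 0, "y": 1, "z": 2}
    deriv = {}
    for p in ["xx", "yy", "zz", "xy", "xz", "yz"]:
        order = [0, 0, 0]
        order[axis[p[0]]] += 1
        order[axis[p[1]]] += 1
        deriv[p] = gaussian_filter(f, sigma, order=order)
    fxx, fyy, fzz = deriv["xx"], deriv["yy"], deriv["zz"]
    fxy, fxz, fyz = deriv["xy"], deriv["xz"], deriv["yz"]
    # Eq. (2): modified Hessian assembled per voxel (symmetric 3x3).
    H = np.empty(f.shape + (3, 3))
    H[..., 0, 0] = fxx + 0.5 * alpha * (fyy + fzz)
    H[..., 1, 1] = fyy + 0.5 * alpha * (fxx + fzz)
    H[..., 2, 2] = fzz + 0.5 * alpha * (fxx + fyy)
    H[..., 0, 1] = H[..., 1, 0] = (1.0 - alpha) * fxy
    H[..., 0, 2] = H[..., 2, 0] = (1.0 - alpha) * fxz
    H[..., 1, 2] = H[..., 2, 1] = (1.0 - alpha) * fyz
    eig = np.linalg.eigvalsh(H)                # ascending, shape (..., 3)
    # lambda_max: eigenvalue of largest magnitude at each voxel.
    idx = np.abs(eig).argmax(axis=-1)
    lam_max = np.take_along_axis(eig, idx[..., None], axis=-1)[..., 0]
    lam_min = eig.min()                        # global minimum eigenvalue
    # Eq. (3): neuriteness.
    return np.where(lam_max < 0.0, lam_max / lam_min, 0.0)
```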
Detecting Branch Points by Generalized Hypothesis Testing

The algorithm used for extracting branch points is based on a generalized likelihood ratio test (GLRT); this test has been used by Mahadevan et al. (37) for vessel detection. We searched for an untraced neurite segment in the l_max × 21 × 21 volume starting from the candidate point. The test decides between two hypotheses, where the null hypothesis H_0 indicates the absence of a branch, and the alternative hypothesis H_1 indicates the presence of a branch.

First, we selected a template of voxels T representing the assumed untraced part of a neurite segment. The template was represented by a volume of size l_max × 3 × 3, shown in Figure 3B. For simplicity, each template was represented by a vector $Y = (Y_1, \ldots, Y_n)^T$ holding the voxels' intensities. The two hypotheses H_0 and H_1 were described using probabilities $P_0(Y)$ and $P_1(Y)$, respectively. Assuming that $(Y_1, \ldots, Y_n)$ are independent and identically distributed (i.i.d.), the probability for all of the voxels inside T to satisfy either hypothesis is given by simple multiplication:

$$P_i(Y) = \prod_{k=1}^{n} P_i(Y_k), \quad i = 0, 1 \qquad (4)$$
The likelihood ratio function was then written as

$$L(Y) = \frac{\prod_{k=1}^{n} P_1(Y_k)}{\prod_{k=1}^{n} P_0(Y_k)} = \frac{\prod_{k=1}^{n} \rho(Y_k)}{\prod_{k=1}^{n} \bigl(1 - \rho(Y_k)\bigr)} \qquad (5)$$
In three-dimensional space, directions are represented by two angles, θ_h (left-right) and θ_v (up-down). The template is initially oriented along the x axis as a result of the 3D transformation process described earlier, with the initial y and z orientations set to zero. We then tested the template over 16 different directions by rotating it by θ_h and θ_v in increments of 7.5°, i.e., {θ_h, θ_v} ∈ {(0°, 0°), (0°, 7.5°), (0°, 15°), (7.5°, 0°), (7.5°, 7.5°), (15°, 0°), (15°, 15°)}. In our test, we aimed to select the direction that maximizes the respective likelihood ratio function. The generalized likelihood ratio test was then formulated as follows:

$$\max_{(\theta_h,\theta_v)} \; \frac{\prod_{k=1}^{n} \rho(Y_k \mid \theta_h, \theta_v)}{\prod_{k=1}^{n} \bigl(1 - \rho(Y_k \mid \theta_h, \theta_v)\bigr)} \;\; \mathop{\gtrless}_{H_0}^{H_1} \;\; \tau \qquad (6)$$
where τ is a user-selected threshold in the range from 0 to 1. For most images, a value of 1 was used, corresponding to the case of balanced prior probabilities. For the lower-signal images, it was lowered to 0.8 to increase the detection rate, at the expense of raising the rate of false positives. We imposed lower and upper bounds on the likelihoods, ρ ∈ [0.01, 0.99], to avoid multiplication or division by zero.

Figure 4. Examples of branch points in 3D images shown as x, y, and z projections; the left set of projections shows results from the previous method, and the right set shows results from the new method. For each image set, the three projections are shown, and some branches are marked with arrows of different colors for different branch points. Traces are shown in green and branches are shown in red; the numbers indicate segment numbers generated by the automatic tracing algorithm. The blue dots indicate seed points used by the automatic tracing algorithm.
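Before turning to results, a compact sketch of the decision rule of Eq. (6) is given below, in log form for numerical stability. The template-rotation construction and the nearest-voxel rounding (in place of interpolation) are our assumptions, as are all function names; the direction list reproduces the (θ_h, θ_v) pairs given in the text.

```python
import numpy as np

# The (theta_h, theta_v) pairs listed in the text, in degrees.
DIRECTIONS = [(0, 0), (0, 7.5), (0, 15), (7.5, 0), (7.5, 7.5), (15, 0), (15, 15)]

def rotated_template_offsets(l_max, theta_h_deg, theta_v_deg):
    """Integer voxel offsets of the l_max x 3 x 3 template of Fig. 3B,
    rotated by theta_h (about z) and theta_v (about y); our construction."""
    xs = np.arange(1, int(l_max) + 1, dtype=float)
    ys = zs = np.array([-1.0, 0.0, 1.0])
    pts = np.array([(x, y, z) for x in xs for y in ys for z in zs])
    th, tv = np.deg2rad(theta_h_deg), np.deg2rad(theta_v_deg)
    Rz = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0,           0,          1]])
    Ry = np.array([[ np.cos(tv), 0, np.sin(tv)],
                   [ 0,          1, 0],
                   [-np.sin(tv), 0, np.cos(tv)]])
    return np.rint(pts @ (Rz @ Ry).T).astype(int)

def glrt_branch_test(rho, candidate, l_max, directions=DIRECTIONS,
                     tau=1.0, eps=0.01):
    """Sketch of Eq. (6): maximize the log-likelihood ratio over the tested
    directions and compare against tau (bounds checking omitted)."""
    best_llr, best_dir = -np.inf, None
    for th, tv in directions:
        idx = rotated_template_offsets(l_max, th, tv) + np.asarray(candidate)
        vals = rho[idx[:, 0], idx[:, 1], idx[:, 2]]  # template voxels Y_1..Y_n
        vals = np.clip(vals, eps, 1.0 - eps)         # keep rho in [0.01, 0.99]
        # log of Eq. (5): sum log rho - sum log(1 - rho)
        llr = np.log(vals).sum() - np.log(1.0 - vals).sum()
        if llr > best_llr:
            best_llr, best_dir = llr, (th, tv)
    return best_llr > np.log(tau), best_dir          # H1 iff ratio exceeds tau
```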
RESULTS AND VALIDATION

A set of 15 3D neuronal images was tested; typical examples are presented in this section. In addition, some 2D sample results based on cultured neurons can be found in the
electronic supplement. The algorithms were implemented in MATLAB and in C++ as part of the 3D tracing software. Since our method processes only the neighboring pixels/voxels at each candidate point, the processing time depends on the complexity of the structure and the expected number of branch points. Representative 3D examples are presented in Figure 4. The left set of 3D projections shows results from the previous method, and the right set shows results from the current method. In both cases, the results are superimposed on the x-y, x-z, and y-z projections. The output of the automated tracing is shown in green and the branches in red for both methods; arrows of different colors identify different branch points. The detection rate and the accuracy of the proposed algorithms were evaluated by a human observer and compared with the results from the previous merging method using 100 manually selected true branch points; the detection rates and accuracy were used as the comparison criteria. The accuracy of detection was measured as the average of the distances between the correctly detected branch points and their true locations, where the true locations were identified manually. We used ImageJ to perform the comparisons: the original and resulting images were opened side by side, and each branch point was inspected visually using the slice navigator and the zooming capabilities of ImageJ. The average detection rate increased from 37 to 86%, and the average error in location estimation decreased from 2.6 to 2.1 voxels. While the broad significance of the increased detection rate is obvious, the improvement in the location estimation error is of more specialized value: it benefits image registration and change analysis algorithms, since it is well known that small registration errors produce significant amounts of change detection errors. Indeed, the consistency of branch location estimation resulting from objective automation is of even greater value. It is also valuable for more consistent estimation of branch angles, if needed for an investigation (not pursued here).
DISCUSSION

The primary outcome of this work is an increased rate of detection of branching points in automated three-dimensional neurite tracing algorithms, resulting in traces that are more complete and accurate in these critical regions. This is valuable for ensuring completeness in extracting neuronal topologies, and for greater accuracy in neurite profiling, outgrowth, and toxicology assays that require counting of branch points (e.g., Cellomics HCS). Secondarily, we have also refined the extraction of branch locations and angles. This is valuable for improving the accuracy of automated image registration and mosaicing (2–4), especially for applications requiring automated change analysis, since registration errors are falsely detected as changes (7). From a practical standpoint, the benefits of our algorithm do not incur an undue computational cost, since the underlying exploratory neurite tracing algorithms are extremely efficient, robust (11), and amenable to automatic tuning (38).
The proposed computation occurs only at endpoints of traced segments, so its application across the image space is decidedly sparse compared to methods that process each pixel/voxel in the image. The proposed methods can be readily adapted to other tube-like structures in fluorescence images, for instance, microvasculature (39). Although our investigation was necessarily based on the automated tracing algorithms reported by this group, the issues we describe are germane to other approaches as well. The computational methods described in this article are robust to depth-dependent attenuation as long as the foreground signal exceeds the background. However, as with any image segmentation algorithm, the results are ultimately limited by image quality. Any methods to improve the depth of imaging and to control signal attenuation can only improve our automated results.
LITERATURE CITED

1. Tsai C, Stewart C, Tanenbaum H, Roysam B. Model-based method for improving the accuracy and repeatability of estimating vascular bifurcations and crossovers from retinal fundus images. IEEE Trans Inf Technol Biomed 2004;8:142–153.
2. Can A, Stewart C, Roysam B, Tanenbaum H. A feature-based robust hierarchical algorithm for registering pairs of images of the curved human retina. IEEE Trans Pattern Anal Mach Intell 2002;24:347–364.
3. Can A, Stewart C, Roysam B, Tanenbaum H. A feature-based algorithm for joint, linear estimation of high-order image-to-mosaic transformations: Mosaicing the curved human retina. IEEE Trans Pattern Anal Mach Intell 2002;24:412–419.
4. Al-Kofahi O, Can A, Lasek A, Szarowski D, Turner J, Roysam B. Hierarchical algorithms for affine 3-D registration of neuronal images acquired by confocal laser scanning microscopy. J Microsc 2003;211:8–18.
5. Can A, Al-Kofahi O, Lasek S, Szarowski D, Turner J, Roysam B. Attenuation correction in confocal laser microscopes: A novel two-view approach. J Microsc 2003;211:67–79.
6. Ascoli G, Krichmar J, Nasuto S, Senft S. Generation, description, and storage of dendritic morphology data. Philos Trans R Soc Lond B Biol Sci 2001;356:1131–1145.
7. Al-Kofahi O, Radke R, Roysam B, Banker G. Automated semantic analysis of changes in image sequences of neurons in culture. IEEE Trans Biomed Eng 2006;53:1109–1123.
8. Kerrison J, Lewis R, Otteson D, Zack D. Bone morphogenetic proteins promote neurite outgrowth in retinal ganglion cells. Mol Vis 2005;11:208–215.
9. Forgie A, Wyatt S, Correll PH, Davies AM. Macrophage stimulating protein is a target-derived neurotrophic factor for developing sensory and sympathetic neurons. Development 2003;130:995–1002.
10. Al-Kofahi K, Lasek S, Szarowski D, Pace C, Nagy G, Turner J, Roysam B. Rapid automated three-dimensional tracing of neurons from confocal image stacks. IEEE Trans Inf Technol Biomed 2002;6:171–187.
11. Al-Kofahi K, Can A, Lasek S, Szarowski D, Dowell N, Shain W, Turner JN, Roysam B. Median based robust algorithms for tracing neurons from noisy confocal microscope images. IEEE Trans Inf Technol Biomed 2003;7:302–317.
12. Can A, Shen H, Turner J, Tanenbaum H, Roysam B. Rapid automated tracing and feature extraction from live high-resolution retinal fundus images using direct exploratory algorithms. IEEE Trans Inf Technol Biomed 1999;3:125–138.
13. Gang L, Chutatape O, Krishnan S. Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter. IEEE Trans Biomed Eng 2002;49:168–172.
14. Xiong G, Zhou X, Degterev A, Ji L, Wong S. Automated neurite labeling and analysis in fluorescence microscopy images. Cytometry A 2006;69A:494–505.
15. Weaver C, Pinezich J, Lindquist W, Vazquez M. An algorithm for neurite outgrowth reconstruction. J Neurosci Methods 2003;124:197–205.
16. Abdul-Karim M-A, Al-Kofahi K, Brown E, Jain R, Roysam B. Automated tracing and change analysis of tumor vasculature from in vivo multi-photon confocal image time series. J Microvasc Res 2003;66:113–125.
17. Tyrrell J, Mahadevan V, Tong R, Roysam B, Brown E, Jain R. 3-D model-based complexity analysis of tumor microvasculature from in vivo multi-photon confocal images. J Microvasc Res 2005;70:165–178.
18. van Cuyck J, Gerbrands J, Reiber J. Automated centerline tracing in coronary angiograms. Pattern Recognit Artif Intell 1988;7:169–183.
19. Meijering E, Jacob M, Sarria JC, Steiner P, Hirling H, Unser M. Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry A 2004;58A:167–176.
20. Falcão A, Udupa J, Miyazawa F. An ultra-fast user-steered image segmentation paradigm: Live wire on the fly. IEEE Trans Med Imaging 2000;19:55–62.
21. Flasque N, Desvignes M, Constans J, Revenu M. Acquisition, segmentation and tracking of the cerebral vascular tree on 3D magnetic resonance angiography images. Med Image Anal 2001;5:173–183.
22. Wink O, Niessen W, Viergever MA. Multiscale vessel tracking. IEEE Trans Med Imaging 2004;23:130–133.
23. Schmitt S, Evers J, Duch C, Scholz M, Obermayer K. New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks. Neuroimage 2004;23:1283–1298.
24. Cohen A, Roysam B, Turner J. Automated tracing and volume measurements of neurons from 3-D confocal fluorescence microscopy data. J Microsc 1994;173:103–114.
25. He W, Hamilton T, Cohen A, Holmes T, Pace C, Szarowski D, Turner J, Roysam B. Automated three-dimensional tracing of neurons in confocal and brightfield images. Microsc Microanal 2003;9:296–310.
26. Koh I, Lindquist W, Zito K, Nimchinsky E, Svoboda K. An image analysis algorithm for dendritic spines. Neural Comput 2003;14:1283–1310.
27. Weaver C, Hof P, Wearne S, Lindquist W. Automated algorithms for multiscale morphometry of neuronal dendrites. Neural Comput 2004;16:1353–1383.
28. Staal J, Abramoff M, Niemeijer M, Viergever M, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging 2004;23:501–509.
29. Maddah M, Afzali-Kusha A, Soltanian-Zadeh H. Efficient center-line extraction for quantification of vessels in confocal microscopy images. Med Phys 2003;30:204–211.
30. Wearne S, Rodriguez A, Ehlenberger D, Rocher A, Henderson S, Hof P. New techniques for imaging, digitization and analysis of three-dimensional neural morphology on multiple scales. Neuroscience 2005;136:661–680.
31. Gratama van Andel HAF, Meijering E, van der Lugt A, Vrooman H, de Monyé C, Stokking R. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data. Eur Radiol 2004;16:391–398.
32. Huang A, Nielson G, Razdan A, Farin G, Baluch D, Capco D. Thin structure segmentation and visualization in three-dimensional biomedical images: A shape-based approach. IEEE Trans Vis Comput Graph 2006;12:93–102.
33. Roysam B, Lin G, Abdul-Karim M, Al-Kofahi O, Al-Kofahi K, Shain W, Szarowski D, Turner J. Automated 3-D image analysis methods for confocal microscopy. In: Pawley J, editor. Handbook of Confocal Microscopy, 3rd ed. New York: Springer; 2006. Chapter 15, pp 316–337.
34. Jiang X, Mojon D. Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans Pattern Anal Mach Intell 2003;25:131–137.
35. Lowell J, Hunter A, Steel D, Basu A, Ryder R, Kennedy R. Measurement of retinal vessel widths from fundus images based on 2-D modeling. IEEE Trans Med Imaging 2004;23:1196–1204.
36. Chen J, Amini A. Quantifying 3-D vascular structures in MRA images using hybrid PDE and geometric deformable models. IEEE Trans Med Imaging 2003;23:1251–1262.
37. Mahadevan V, Narasimha-Iyer H, Roysam B, Tanenbaum H. Robust model-based vasculature detection in noisy biomedical images. IEEE Trans Inf Technol Biomed 2004;8:306–376.
38. Abdul-Karim MA, Roysam B, Dowell N, Jeromin A, Yuksel M, Kalyanaraman S. Automatic selection of parameters for vessel/neurite segmentation algorithms. IEEE Trans Image Process 2005;14:1338–1350.
39. Tyrrell J, di Tomaso E, Fuja D, Tong R, Kozak K, Brown E, Jain R, Roysam B. Robust 3-D modeling of vasculature imagery using superellipsoids. IEEE Trans Med Imaging 2007;26:223–237.