
Brief Report

A Fast, Fully Automated Cell Segmentation Algorithm for High-Throughput and High-Content Screening

D. Fenistein,1* B. Lenseigne,1 T. Christophe,2 P. Brodin,3 A. Genovesio1

1 Image Mining Group, Institut Pasteur Korea, Hawolgok-dong, Seongbuk-gu, Seoul 136-791, Korea
2 Screening Technology and Pharmacology Group, Institut Pasteur Korea, Hawolgok-dong, Seongbuk-gu, Seoul 136-791, Korea
3 Inserm Equipe Avenir Biology of Intracellular Pathogens, Institut Pasteur Korea, Hawolgok-dong, Seongbuk-gu, Seoul 136-791, Korea

Received 11 December 2007; Revision Received 21 March 2008; Accepted 15 July 2008

Additional Supporting Information may be found in the online version of this article.

*Correspondence to: D. Fenistein; Image Mining Group, Institut Pasteur Korea, 39-1, Hawolgok-dong, Seongbuk-gu, Seoul 136-791, Korea. Email: [email protected]

Published online 27 August 2008 in Wiley InterScience (www.interscience.wiley.com)
DOI: 10.1002/cyto.a.20627
© 2008 International Society for Advancement of Cytometry

Abstract
High-throughput, high-content screening (HT-HCS) of large compound libraries for drug discovery imposes new constraints on image analysis algorithms. Time and robustness are paramount, while accuracy is intrinsically statistical. In this article, a fast and fully automated algorithm for cell segmentation is proposed. The algorithm is based on a strong attachment to the data, which provides robustness, and has been validated on the HT-HCS of large compound libraries and different biological assays. We present the algorithm and its performance, a description of its advantages and limitations, and a discussion of its range of application. © 2008 International Society for Advancement of Cytometry

Key terms
image analysis; biological image processing; automation; cytometry; object detection; segmentation; high-content screening

AUTOMATED fluorescent microscopy and high-performance computing have allowed the emergence of high-content screening (HCS) as a useful tool in the early stages of drug discovery (1–4). The multidimensional information (''high content'' in HCS) allows for the tackling of biological models inaccessible to unidimensional high-throughput screening (HTS). HCS can also measure multiple effects in a single experiment; for example, the effect of a drug on bacteria (virus, receptor, etc.) and its toxicity on the host target (1,4). HCS therefore has the potential to become a risk/delay/cost reducer for the later stages of drug development. The last few years have seen a huge increase in image acquisition capacity. Automated fluorescent microscopes can now record more than 40,000 images a day (90 Gb/day), and do so for weeks at a time. HCS is therefore truly becoming high throughput. This evolution introduces fundamental differences with former HCS approaches: (i) The number of images acquired during an HT-HCS campaign [≈half a million (4)] requires fully automated image analysis. (ii) For image analysis not to become the bottleneck of HT-HCS, the rate of image acquisition (2 s per image) imposes the rate of image analysis. (iii) The amount of data forbids visual control. Results have meaning in a statistical way and must be weighted against a statistically based acceptance criterion. (iv) The measure of quality for an HT-HCS algorithm is a trade-off between speed on the one hand and accuracy and robustness on the other.

PREVIOUS WORK
Numerous HT-HCS applications require a stage of cell segmentation. It most often serves as a basis for subsequent operations, more diverse and specific to a biological assay. While manual (5,6) or semiautomatic (7) cell segmentation methods may be used for HCS, the need for speed and repeatability forbids them for HT-HCS. Most of the cell segmentation algorithms commonly used in HCS suffer from one or many drawbacks that make them ill adapted to HT-HCS use. They may be slow when compared with the limiting rate of 2 s per image (3,8–10). They may be tailored to quite specific situations (8–12), which means potentially weak robustness when compared with the huge variability intrinsic to HT-HCS. They might need the manual setting of parameters (13). They sometimes require two distinct channels for the separate staining of the whole cell and the nucleus (14); this creates more data to store and process, and substantial additional mechanical operations that might induce delay, costs, and a source of variability (4).

Most popular HCS or HT-HCS algorithms for cell segmentation proceed in two steps (3,9,12,14). First, the nuclei are detected and used as seeds for the second stage, which then divides the cytoplasm into regions, each region relating to one nucleus. The underlying assumption is that nuclei are necessarily well separated. Errors are therefore expected to remain mostly limited to cytoplasm segmentation. Background–cell separation is generally performed with a histogram-based method. Nuclei are also often segmented from the histogram (9,11,12,14), especially if stained separately and recorded in a separate channel. Cytoplasm is defined as the region between the nucleus and the background and is generally found by either a distance transform, a watershed, or a more subtle combination of gradient information and watershed (9). Approaches based on active contours are iterative and require seeding points (15), which makes them slow and ill adapted to the constraints of HT-HCS. An interesting use of active contours in HT-HCS has been in refining a segmentation obtained from a ''traditional'' cell segmentation stage (10).

Cytometry Part A 73A: 958–964, 2008
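For contrast, the conventional two-step scheme just described — nuclei detected first, then the cell mask divided among them by a distance transform — can be sketched as follows. This is an illustrative toy (function name, thresholds, and test image are ours, not taken from any cited implementation):

```python
import numpy as np
from scipy import ndimage as ndi

def two_step_segment(img, nuc_thresh, cell_thresh):
    """Toy two-step HCS segmentation: bright nuclei seed a
    nearest-seed (distance-transform) division of the cell mask."""
    nuclei = img > nuc_thresh          # bright nuclei
    cells = img > cell_thresh          # whole-cell mask (cytoplasm + nuclei)
    seeds, n = ndi.label(nuclei)       # one label per connected nucleus
    # For every pixel, index of the nearest nucleus pixel:
    idx = ndi.distance_transform_edt(seeds == 0, return_distances=False,
                                     return_indices=True)
    labels = seeds[tuple(idx)]         # Voronoi assignment to nearest nucleus
    labels[~cells] = 0                 # keep only pixels inside the cell mask
    return labels, n

# Two synthetic cells with bright nuclei on a dark background
img = np.zeros((20, 20))
img[4:10, 4:10] = 0.5;  img[6:8, 6:8] = 1.0    # cell 1
img[4:10, 12:18] = 0.5; img[6:8, 14:16] = 1.0  # cell 2
labels, n = two_step_segment(img, 0.8, 0.3)
print(n)  # 2 nuclei found
```

The distance-transform division used here assigns each cytoplasm pixel to its nearest nucleus, i.e. a Voronoi-style split of the cell mask, which is the simplest of the variants cited above.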

MATERIALS AND METHODS
We based our algorithm on a model of the cells as a roughly convex area of a given size and high intensity, for this corresponds to their most commonly shared feature. Convolving the original image with a Gaussian kernel naturally associates each pixel with a quantity that is compared (i.e., template matched (16)) with our cell model if σ, the standard deviation of the Gaussian kernel, equals s, the cell radius. Our method is based on a number of assumptions. Cells should be quite monodisperse in size, convex, and symmetric in shape. Cells are stained in a single color band. The nucleus is expected to be at least as bright as the cytoplasm; Syto60 usually ensures this condition. Image dynamics are assumed to have passed a series of tests (mean, standard deviation, etc.) that remove extreme cases (oversaturated, empty, or uniform images). Illumination should be quite uniform, if not naturally, at least after illumination correction (17,18).

Thresholding the Background
First, the original image is filtered by a small Gaussian filter in order to remove noise. Next, the histogram is thresholded using a three-class (background + cytoplasm + nucleus) K-means algorithm. The lower of the two threshold values defines the separation between cell (cytoplasm + nucleus) and background. Each pixel is assigned either to the cell surface, which we refer to as mask M in the following, or to the background surface (Fig. 1a). We perform the thresholding early in the process purely for speed, so as to restrict later operations to pixels of M.
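A minimal sketch of the three-class K-means thresholding, written as a 1-D Lloyd iteration on pixel intensities (in the actual pipeline the image has already been denoised by the small Gaussian filter); this is an illustration under our own initialization choices, not the authors' implementation:

```python
import numpy as np

def kmeans3_thresholds(img, iters=50):
    """Three-class (background / cytoplasm / nucleus) 1-D K-means on
    pixel intensities; returns the two decision thresholds."""
    x = img.ravel().astype(float)
    # Spread the three class centers over the intensity range
    c = np.array([x.min(), 0.5 * (x.min() + x.max()), x.max()])
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recenter
        labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    c.sort()
    # Decision boundaries are midpoints between adjacent centers;
    # the lower one separates background from cell (cytoplasm + nucleus)
    return (c[0] + c[1]) / 2, (c[1] + c[2]) / 2

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(10, 1, 1000),   # background
                      rng.normal(50, 2, 300),    # cytoplasm
                      rng.normal(120, 3, 100)])  # nuclei
t_bg, t_nuc = kmeans3_thresholds(img)
print(t_bg, t_nuc)  # t_bg falls between 10 and 50, t_nuc between 50 and 120
```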

Gaussian Filtering
Next, the original image U is convolved with a Gaussian kernel of standard deviation σ that is set equal to the cell radius (σ = s) (Fig. 1b).

Uσ = Gσ * U        (1)
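In practice, Eq. (1) is a single library call. A sketch with SciPy, where the cell radius value and the random test image are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

s = 9.0                      # assumed cell radius in pixels, so sigma = s
rng = np.random.default_rng(1)
U = rng.random((64, 64))     # stand-in for the original image
U_sigma = gaussian_filter(U, sigma=s)   # Eq. (1): U_sigma = G_sigma * U

# Smoothing at the cell scale strongly reduces pixel-to-pixel variation
print(U.std(), U_sigma.std())
```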

Seeding Cell Centers
Cell centers are identified as local maxima of Uσ inside M. Local maxima are found by creating a map in which a nonzero value is given to each pixel of M. Next, all the pixels of the image are visited using the Chamfer method: two passes over the whole image, one forward and one backward. Each pixel i is compared with all neighboring pixels j: for each i in Uσ, for each j neighbor of i,

if Uσ(i) < Uσ(j) ⇒ map(i) = 0        (2)
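The net effect of rule (2) is to keep only the pixels that are local maxima of Uσ. An equivalent one-call formulation (a maximum filter standing in for the two Chamfer passes; the toy two-spot image is ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def seed_centers(U_sigma, mask):
    """Cell centers = local maxima of the smoothed image inside the mask.
    Equivalent in effect to the two-pass Chamfer scan of Eq. (2)."""
    local_max = U_sigma == maximum_filter(U_sigma, size=3)
    return local_max & mask

# Two blurred spots: each should yield exactly one seed
img = np.zeros((40, 40))
img[10, 10] = img[10, 30] = 1.0
U_sigma = gaussian_filter(img, 4.0)
mask = U_sigma > U_sigma.max() * 0.1
seeds = seed_centers(U_sigma, mask)
print(int(seeds.sum()))  # 2
```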

At the end of the process the map has zero values everywhere except at local maxima (Fig. 1b).

Region Growth
Cell boundaries are not just the boundaries of M, because cells may be clustered. Each pixel inside M must be attributed to a single individual cell. We identify boundaries between touching (clustered) cells with ridges of minima of Uσ inside M (just as, in the previous section, cell centers are identified with local maxima of Uσ inside M). We use a region growth procedure (16,19) whose seeds are the previously defined cell centers and whose outer limit is M. First, each seed is labeled on a separate label map; we take advantage of existing fast labeling algorithms. Next, all pixels of the image are visited using a Chamfer method as in the previous section. For strongly nonconvex cells, a single two-pass Chamfer method may not be sufficient to label all the pixels of M; in such a (rare) case, we reiterate the two-pass Chamfer method until all the pixels of M are labeled. A seed grows by spatial extension of its value on the label map as long as it is surrounded by neighbors of lower Uσ: for each i in Uσ, for each j neighbor of i,

if Uσ(i) ≥ Uσ(j) ⇒ label(j) = label(i)        (3)

Figure 1c shows the label map at the end of the procedure. Each cell surface is grown until it reaches the limits of M or of another cell surface. M ends up being filled with nonzero values, each value identifying an individual cell. Cell borders are defined as neighboring pixels of different labels. Finally, cells whose surfaces are too small are removed. Note that the cell radius is the only independent parameter in our method: the small Gaussian filter (see the Gaussian Filtering section above) has a fixed value, and the typical small-surface parameter may be taken proportional to σ². This step ends the cell segmentation algorithm but usually not the image analysis.
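The seeding-and-growth procedure amounts to an ordered flood: pixels of M are claimed in order of decreasing Uσ, so each label stops at the ridges of minima. A pure-Python sketch, with a priority queue standing in for the iterated two-pass Chamfer scans (the toy two-peak profile is ours):

```python
import heapq
import numpy as np

def grow_regions(U_sigma, mask, seeds):
    """Seeded region growth: flood the mask from the seeds in order of
    decreasing U_sigma, so labels stop at the ridges of minima."""
    label = np.zeros(U_sigma.shape, dtype=int)
    heap = []
    for k, (i, j) in enumerate(seeds, start=1):
        label[i, j] = k
        heapq.heappush(heap, (-U_sigma[i, j], i, j))
    while heap:
        _, i, j = heapq.heappop(heap)   # brightest frontier pixel first
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                    and mask[ni, nj] and label[ni, nj] == 0):
                label[ni, nj] = label[i, j]
                heapq.heappush(heap, (-U_sigma[ni, nj], ni, nj))
    return label

# Two touching "cells": a brightness valley at column 5 splits the mask
col = np.arange(10)
U = np.exp(-0.5 * ((col - 2) / 2.0) ** 2) + np.exp(-0.5 * ((col - 8) / 2.0) ** 2)
U_sigma = np.tile(U, (6, 1))
mask = np.ones_like(U_sigma, dtype=bool)
label = grow_regions(U_sigma, mask, seeds=[(3, 2), (3, 8)])
print(label[3, 0], label[3, 9])  # left region labeled 1, right labeled 2
```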


Figure 1. Successive steps of the cell segmentation algorithm. (a) Original image with boundaries of mask M defining background and cell; (b) Uσ with positions of local maxima inside M; (c) label map at the end of the region growth; (d) segmented regions.

RESULTS
Algorithm Evaluation
Evaluation of an HT-HCS algorithm is tricky. The scale of HT-HCS limits visual control to small subsets of images. Unfortunately, a good performance on a small subset does not necessarily translate into success for an entire HT-HCS campaign, due to variability. We propose an evaluation at three different sample sizes: (i) We chose the four images of Figure 2 for a detailed study (both cell count and border localization).

Figure 2. Results of the cell segmentation method on images a–d for σ = s (Table 1). Enlargements of Figures 2b and 2d are available as Supplementary Figure 4.



Table 1. Description of images a–d

           CELL TYPE    s     sm(s)   NC    N(s)
Image a    CHO          13    12.9    185   184
Image b    HEK          11    11.4    243   241
Image c    CEM           7     7.4    304   306
Image d    Raw           9     9.1    532   534

s, input average cell size (visually determined); sm(s), output average cell size; NC, reference cell count; N(s), measured cell count.

Such a study is necessarily limited in sample size but nevertheless provides an understanding of the algorithm's mechanism and relates its performance to the image content and the underlying biological question. This allows for a better estimation of the algorithm's robustness (i.e., a better prediction of its performance for larger sample sizes). (ii) We use a serial dilution experiment, with a number of cell densities known on average, to test our algorithm on some 624 images (10⁶ cells). Results are in essence statistical. Still, visual control, at least qualitative, is possible. This step validates the performance of the algorithm in a statistical way (the cell count only) and also confirms the expectations of robustness. (iii) For even larger sample sizes, numerous assays based on our cell segmentation algorithm have consistently given satisfactory statistical measures over large compound libraries. As a failing cell segmentation step would be unlikely to yield a successful image analysis as a whole, this is at least an indirect clue pointing towards a good statistical performance of our algorithm at the scale of entire HT-HCS campaigns.

Images Description
In the following, the four images of Figure 2 are called images a, b, c, and d. They were selected for testing our algorithm because they span a large range of variations. Each image contains a different cell type and is typical of HT-HCS imaging conditions. The dye used is Syto60, which preferentially stains the nuclei over the cytoplasm. The magnification factor is 20×. Excitation and emission wavelengths are 633 and 690 nm, respectively. Images a–d contain Chinese hamster ovary cells (CHO), human embryonic kidney cells (HEK), CEM cells, and Raw 264.7 cells, respectively. Table 1 summarizes these characteristics as well as some results obtained later. Images a and b correspond to internalization assays that measure whether a given target is located inside or outside the cells.
Images c and d correspond to infection assays (viral for image c, bacterial for image d) that measure the quantity of virus (or bacteria) in the cells. In all such assays the cell segmentation is a key step. Cells in images a and c are dispersed; intercell separations are large in size and have low intensity. Conversely, cells in images b and d are strongly aggregated; separations between adjacent cells may be very narrow and barely distinguishable. Staining is quite homogeneous for images a, c, and d, especially for image a, as structures inside the cells are neither too large nor too intense. On the other hand, image b contains important intracellular structures in some cells or groups of adjacent cells. Finally, some fluorescence saturation appears to affect the dynamics of image d, which tends to erase intercell separations.

Cell Count on Images a–d
The algorithm's only parameter σ must be set equal to the cell radius s. s implies an average radius of a monomodal distribution of cell radii; this is expected both from biological assumptions and from visual observations (Fig. 2). First we estimate s visually. The obtained values (13, 11, 7, and 9 pixels for images a–d, respectively) are reported in Table 1. Later (in the section ''Full Automation'') we justify these values a posteriori and propose a fully automatic determination of s. Manual control is performed on images a–d to get a reference cell count NC for each image (see Table 1). Next, our algorithm is run for all σ values between 1 and 99, a range that is about 0.1–10 times the value of s. Figure 3 displays the resulting cell count N(σ) renormalized by NC as a function of σ, itself renormalized by s. Note that for images a–d, N(σ)/NC = 1 precisely at σ/s = 1. This is our first result of importance: N(s) = NC. Figure 2 shows the segmentation result at σ = s for images a–d. Visual control confirms the accuracy of our segmentation algorithm not only as a global result (N(s) does indeed equal NC) but also as a local result (the location of cell borders is visually acceptable). In images a–d, cells whose surfaces are less than 30 pixels are removed.

Figure 3. N(σ)/NC as a function of σ/s for images a–d. For each curve, a character (a–d) helps recognition.

Cell Border Localization on Images a–d
The position of cell borders defines individual cell segmentation. Outer-border pixels (separating cell from background) are defined by the mask M (section ''Thresholding the Background'') and are therefore independent of σ. We focus on intraborder pixels (separating neighboring cells), whose number bin(σ) and location depend on σ. bin(σ) is larger for aggregated cells (images b and d) than for dispersed cells (images a and c) (see Supplementary Fig. 1). In the following, the segmentation of images a–d obtained at σ = s is used as the reference for border localization because it has been previously validated both globally (N(s) = NC) and locally (from visual observation, see Fig. 2). Next, this reference is compared with the border localizations obtained when segmenting at different σ values. What we question is robustness: if σ departs from s, how much of the border remains in place, and if it moves, how far does it go? A way to answer this is by constructing the quantity Fd(σ), defined as the fraction of bin(σ) which is positioned at a distance ≤ d from the reference. For all d, Fd(s) = 100%, because the localization of intraborder pixels measured at s is the reference itself. Even as σ varies as much as 3 pixels away from s (a variation of roughly 30% compared with s), F0(s ± 3) > 75%. This means that even as σ departs from s, a major proportion of the border (>75%) does not move away from the reference (see Supplementary Fig. 2). Even more interesting is that F1(s ± 3) > 90%: when σ departs from s (see Supplementary Fig. 2), even if the border moves away from the reference, it does not move far; 90% of the inner border pixels remain located within d = 1 pixel of the reference. Similar results (not shown) are obtained for images a–c. This shows that a strong attachment to the data ensures robustness of our algorithm, not only for the cell count but also for the border localization.

Serial Dilution
Here we present the results of a serial dilution experiment. Twelve dilutions of fibroblast-like cells (L929) stained with Syto60 were made, each 13 times (in 13 wells). Four images of each well were taken, totaling 12 × 13 × 4 = 624 images, 52 images for each dilution. The cell number dispensed per well is known and reported on the x-axis of Figure 4. The cell size is determined automatically (see the section ''Full Automation'') as s = 12 pixels. The cell count N(s) per image, averaged over the 52 images of each dilution, is plotted on the y-axis of Figure 4. The linearity between dispensed and measured cell number is excellent. At ≈300 cells per image, cells start to fill >50% of the image surface (see the right inset of Fig. 4). Further increase in the dispensed number of cells per well no longer translates into a linear increase in the number of cells per image; cells may escape from the plane of focus. Some nonlinearity progressively sets in for larger cell counts, as the slight bending of the last two points of Figure 4 seems to indicate. The cell count is not only linear but also accurate: it takes 70 images to cover a well entirely, so the cell count per image should equal 1/70th of the number of cells per well. The dotted line on Figure 4 displays a linear fit of our data with a slope of 1/69.93. Error bars on the cell count are typically less than 5% of their nominal value. This must be seen as an overestimation of the real error that affects the image analysis itself, because such a value also encompasses the standard deviation of the actual cell number. As an example, the maximum recorded deviation of the cell count (13% for 12,500 cells/well) actually corresponds, after visual inspection, to a much larger than usual dispersion of the cells in these particular images. The range of variation encountered at the scale of a few hundred images is already much larger than that for a few images: the well position, the dispensing time, and the cell concentration vary. Still, the intrinsic robustness of our algorithm keeps errors statistically low.

Full Automation
For each value of σ, we can compute an average cell radius sm(σ) a posteriori (''m'' stands for ''measured,'' so as to differentiate sm(σ) from σ or s), from the total surface of all the cells Surf(σ) and the cell count N(σ), assuming a roughly circular shape of the cells:

sm(σ) = sqrt( Surf(σ) / (π N(σ)) )        (4)

For images a–d, all sm(σ) curves cross the y = x line (see Supplementary Fig. 3): there is always a σ for which sm(σ) = σ. More importantly, this value turns out to be σ = s (see the precise values of sm(s) reported in Table 1). This brings about two conclusions. First, it further validates the algorithm's accuracy: if properly initialized with σ = s, a cell size that is visually acceptable, our algorithm gives a similar quantity as output, sm(s) = s (Table 1). Second, it gives a simple recipe to automatically determine the optimum value of s, the algorithm's only parameter. Prior to the screening, we routinely determine s from a selected set of images by simply scanning σ and searching for the match between input and output sizes (with additional visual supervision). sm(s) is also used as a quality control during the screening itself, triggering suspicion on the image quality or assay conditions if the algorithm output sm(s) notably differs from its input s.
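The calibration recipe — scan σ and keep the value where the output radius sm(σ) of Eq. (4) matches the input σ — can be sketched as below. The segmentation is reduced here to counting local maxima of the smoothed image inside a fixed mask M, a simplified stand-in for the full algorithm, and the synthetic disk image is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def auto_radius(img, sigmas):
    """Scan sigma and return the value where the measured mean radius
    s_m(sigma) = sqrt(Surf / (pi * N(sigma))) best matches sigma (Eq. 4)."""
    mask = img > img.mean()          # crude sigma-independent cell mask M
    surf = mask.sum()                # Surf: total cell surface in pixels
    best, best_gap = None, np.inf
    for s in sigmas:
        U = gaussian_filter(img, s)
        # N(sigma): local maxima of the smoothed image inside M
        n = int(((U == maximum_filter(U, size=3)) & mask).sum())
        if n == 0:
            continue
        s_m = np.sqrt(surf / (np.pi * n))
        if abs(s_m - s) < best_gap:
            best, best_gap = s, abs(s_m - s)
    return best

# Synthetic plate: 9 disk "cells" of radius 5 on a dark background
img = np.zeros((90, 90))
yy, xx = np.mgrid[0:90, 0:90]
for cy in (15, 45, 75):
    for cx in (15, 45, 75):
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 5 ** 2] = 1.0
s_hat = auto_radius(img, sigmas=range(2, 12))
print(s_hat)  # close to the true radius of 5
```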

Figure 4. Measured average cell count N(s) per image as a function of the dispensed cell number per well, with s = 12. The dotted line is a fit of the data with a slope of 0.0143 = 1/69.93. Insets on the left and right show typical images of 2,500 and 20,000 cells/well, respectively.

DISCUSSION
Earlier, we made assumptions on the cells and on the image quality (monodisperse, symmetric, convex cells, homogeneous staining, illumination, etc.). We have seen that our algorithm performs well in the presence of important variations; images a–d were chosen precisely for their individual differences (section ''Images Description''). In a given image, all cells are neither perfectly symmetric, convex, and monodisperse, nor ideally stained (Fig. 2). The linearity response experiment (section ''Serial Dilution'') and, even more so, an entire screening campaign imply important variations in cell density, aggregation, illumination, size polydispersity, or staining homogeneity. Here we address robustness: how much and what kind of variability is allowed before serious missegmentation occurs? A good understanding of the underlying mechanism of the algorithm gives a measure of its range of application. Remember that both cell center seeding and region growth are based on Gaussian filtering, which generates a scale space (20–22) and ensures a number of useful properties: when moving from smaller to larger scales, new structures are not created, local maxima do not increase, local minima do not decrease, and smaller structures tend to be suppressed. Therefore, our algorithm may be seen as searching for structures at a typical scale s. Errors may arise if structures existing at a different scale compete with the cells: at small scale (σ < s), from cell internal inhomogeneities; at the cell's scale (σ = s), from asymmetry, concavity, or polydispersity; at large scale (σ > s), from aggregation, especially if cell separations are poorly contrasted. This competition between structures at various scales also helps relate the derivative of N(σ) (see Fig. 3) to the image content (section ''Images Description''). For example, dispersed cells (images a and c) are those for which N(σ) plateaus, because no structure larger than s can compete with the cells, while aggregated cells (images b and d) are those for which N(σ) keeps on decreasing far beyond s.
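The scale-space behaviour invoked here can be checked numerically: smoothing a random image at increasing σ suppresses small structures, so the count of local maxima shrinks. (Strictly speaking, 2-D Gaussian scale space can create extrema in pathological cases, but the decreasing trend is what matters here; the test image is ours.)

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def count_maxima(img):
    """Number of 3x3 local maxima in the interior of the image."""
    mx = (img == maximum_filter(img, size=3))[1:-1, 1:-1]
    return int(mx.sum())

rng = np.random.default_rng(2)
img = rng.random((128, 128))
counts = [count_maxima(gaussian_filter(img, s)) for s in (0.5, 1, 2, 4, 8)]
print(counts)  # the count of maxima shrinks as the scale grows
```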
Similarly, but at small scale (for σ < s), cells in images a and c, with smaller, less numerous, and less intense internal structures, relate to smaller values of ∂N(σ)/∂σ around σ = s than the cells in image b, which have greater internal heterogeneities. Note also the similarity in the variation of N(σ) and bin(σ) with σ. This reflects a similarity in the robustness of both the cell count and the cell border localization, which should be expected because it stems from the same types of arguments; after all, the cell centers and the cell boundaries are identified with local maxima and ridges of minima, respectively, of the same Uσ at the same scale σ = s. Out of a limited subset of HT-HCS images and prior to screening, visual observation and the shape of ∂N(σ)/∂σ help in estimating the algorithm's performance and robustness. Important segmentation errors and robustness issues are expected in the presence of some combinations of intracellular inhomogeneities, polydispersity, cell asymmetry and concavity, and/or aggregation. For each of these, the value of the derivative at the relevant scale helps to quantify its potential for inducing errors during the HT-HCS campaign. The algorithm's simplicity, robustness, and ability to treat a wide range of images are already exploited during the assay development phase, the step that precedes the screening itself, when the conditions and parameters of the assay are being tried and tested (setting the robotics, the compound delivery, the image acquisition parameters, the type of dye, quantity of compound, cell density, etc.). This easily corresponds to more than

50,000 images and may be about 10% of the total number of images recorded during the screening. The variability is large because all parameters are being systematically changed. Still, the screening developer needs an image analysis tool to estimate the efficiency of each action, and needs to do so as fast as possible, for the next stage of the assay development often depends on the success of the previous one. Software development time is therefore also limited. A highly robust and simple tool for cell counting is one of the keys to success, and our algorithm provides just that. Its genericity allows for many cell types and densities. Its simplicity allows potential changes or adaptations of the algorithm to be completed quickly. In the future, the following experiments may be considered as potentially valuable test systems for our algorithm: (i) staining the cells with another dye in the Syto series of nucleic acid stains; (ii) performing a serial dilution of Syto60, so as to check the algorithm's performance against variations of the signal-to-background ratio; (iii) staining the cells with both Syto60 and DAPI in order to compare the DAPI nuclear count, which is generally easy and reliable (gold standard), with the Syto60 count; (iv) comparing our cell segmentation to manually segmented images and/or synthetic images.

CONCLUSION
Herein is reported a cell segmentation algorithm dedicated to HT-HCS applications. Its performance in terms of the speed–accuracy–robustness trade-off is based on a good compromise between a strong attachment to the data and a generic model of the cells. Accuracy results are typically above 95% for both cell count and border localization on very different cell images and in conditions typical of HT-HCS. The entire processing of the 624 images (each 666 × 504 pixels) of the linearity response experiment (section ''Serial Dilution'') takes less than 5 min on a normal PC, which corresponds to about 0.48 s per image, a rate much below the image acquisition rate. This even allows ample time to perform the subsequent and specific image analysis steps that a given assay often requires. The algorithm allows for full automation (zero parameters) as well as feedback information for quality control. Robustness derives intrinsically from the use of a Gaussian kernel convolution. Prior to screening, and from a limited subset of HT-HCS images, the study of a combination of some of the outputs (visual observation, the cell count and its derivatives against σ on both sides of s) reveals a lot of information about the fit between a set of images and our algorithm, so that performance can be foreseen, parameters adjusted, and the decision to use this algorithm at all made on more solid ground. In HT-HCS, the accuracy that matters is intrinsically statistical. Average cell counts have been shown to match linearity response measurements with very low statistical dispersion. The final validation of our algorithm is its wide and successful use in several HT-HCS campaigns of large compound libraries.

LITERATURE CITED
1. Abraham VC, Taylor DL, Haskins JR. High content screening applied to large-scale cell biology. Trends Biotechnol 2004;22:15–22.


2. Pelkmans L, Fava E, Grabner H, Hannus M, Habermann B, Krausz E, Zerial M. Genome-wide analysis of human kinases in clathrin- and caveolae/raft-mediated endocytosis. Nature 2005;436:78–86.
3. Carpenter A, Jones T, Lamprecht M, Clarke C, Kang I, Friman O, Guertin D, Chang J, Lindquist R, Moffat J, Golland P, Sabatini D. CellProfiler: Image analysis software for identifying and quantifying cell phenotypes. Genome Biol 2006;7:R100.
4. Lang P, Yeow K, Nichols A, Scheer A. Cellular imaging in drug discovery. Nat Rev Drug Discov 2006;5:343–356.
5. Kiger A, Baum B, Jones S, Jones M, Coulson A, Echeverri C, Perrimon N. A functional genomic analysis of cell morphology using RNA interference. J Biol 2003;2:27.
6. Kim J, Gabel H, Kamath R, Tewari M, Pasquinelli A, Rual J, Kennedy S, Dybbs M, Bertin N, Kaplan J, Vidal M, Ruvkun G. Functional genomic analysis of RNA interference in C. elegans. Science 2005;308:1164–1167.
7. Baggett D, Nakaya M, McAuliffe M, Yamaguchi T, Lockett S. Whole cell segmentation in solid tissue sections. Cytometry Part A 2005;67A:137–143.
8. Wählby C, Sintorn I-M, Erlandsson F, Borgefors G, Bengtsson E. Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections. J Microsc 2004;215:67–76.
9. Lin G, Adiga U, Olson K, Guzowski J, Barnes C, Roysam B. A hybrid 3D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry Part A 2003;56A:23–36.
10. Shen H, Nelson G, Nelson D, Kennedy S, Spiller D, Griffiths T, Paton N, Oliver S, White M, Kell D. Automated tracking of gene expression in individual cells and cell compartments. J R Soc Interface 2006;3:787–794.
11. Wu K, Gauthier D, Levine MD. Live cell image segmentation. IEEE Trans Biomed Eng 1995;42:1–12.


12. Malpica N, Solorzano C, Vaquero J, Santos A, Vallcorba I, García-Sagredo J, Del Pozo F. Applying watershed algorithms to the segmentation of clustered nuclei. Cytometry 1997;28:289–297.
13. Belien J, van Ginkel H, Tekola P, Ploeger L, Poulin N, Baak J, van Diest P. Confocal DNA cytometry: A contour-based segmentation algorithm for automated three-dimensional image segmentation. Cytometry 2002;49:12.
14. Lindblad J, Wählby C, Bengtsson E, Zaltsmann A. Image analysis for automatic segmentation of cytoplasms and classification of Rac1 activation. Cytometry Part A 2004;57A:23–33.
15. Osher S, Sethian JA. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton–Jacobi formulations. J Comput Phys 1988;79:12–49.
16. Forsyth D, Ponce J. Computer Vision—A Modern Approach. Upper Saddle River, NJ: Prentice-Hall; 2003.
17. Dorval T, Genovesio A. Automated confocal microscope bias correction. In: Proceedings of the Fifth International Workshop on Information Optics (WIO'06), American Institute of Physics, 2006. pp 463–470.
18. Dorval T, Ogier A, Dusch E, Emans N, Genovesio A. Bias-free features detection for high content screening. Presented at the Fourth IEEE International Symposium on Biomedical Imaging, Metro Washington, DC, 2007.
19. Castleman K. Digital Image Processing. Upper Saddle River, NJ: Prentice-Hall; 1996.
20. Koenderink J. The structure of images. Biol Cybernet 1984;50:363–370.
21. Lindeberg T. Automatic scale selection as a pre-processing stage for interpreting the visual world. In: Proceedings of the Eighth International Conference on Tools with Artificial Intelligence, IEEE Computer Society Press, 1996. p 490.
22. Lindeberg T. Feature detection with automatic scale selection. Int J Comput Vis 1998;30:77–116.

