Connected Components Labeling Algorithm Based on Span Tracking

Canadian Journal on Image Processing and Computer Vision Vol. 2 No. 7, November 2011

Farag Ibrahim Younis Elnagahy

Farag Elnagahy is with the King ABDUL AZIZ University, Faculty of Computing and Information Technology, Department of Computer Science, Kingdom of Saudi Arabia. The author is also with the National Research Institute of Astronomy and Geophysics (NRIAG), Department of Astronomy, Helwan, Cairo, Egypt. [email protected]


Abstract — In this paper, a single-scan algorithm for labeling connected components in binary images is presented. The main feature of this algorithm is that it labels an entire component in one shot, thereby avoiding the analysis of label equivalences. The binary image is scanned row by row, from top to bottom and from left to right. When a starting pixel (seed) of a component is encountered, the scanning process is paused and the algorithm labels all pixels of that component with the same label. The scanning process is then resumed, and labeling continues until the entire image has been scanned. The performance of the proposed algorithm is compared with that of the well-known single-scan labeling algorithm, Chang's contour tracing algorithm. Both algorithms generate consecutive labels for objects. Experimental results demonstrate that the presented algorithm is superior to the contour tracing algorithm.

Key Words — Component-labeling algorithm, contour tracing, pixel spans.

I. INTRODUCTION

Connected component labeling is a fundamental process common to virtually all image processing applications in two and three dimensions [1-4]. The pixels in a connected component form a candidate region for representing an object. A component-labeling algorithm finds all connected components in an image and assigns a unique number ("label") to all pixels in the same component. These labels are the keys for any subsequent analysis procedure and are used for distinguishing and referencing the objects. This makes connected component labeling an indispensable part of nearly all applications in pattern recognition and computer vision [5]. In many cases, connected component labeling is also one of the most time-consuming of the pattern-recognition tasks, so connected component algorithms often form a bottleneck in machine vision systems.

Many algorithms have been proposed for connected component labeling. They can be classified into two categories according to computer architecture and image representation. The algorithms of the first category are suitable for ordinary computer architectures and two-dimensional images; they fall mainly into the following three classes [6]: (1) Multi-scan algorithms [7-8]: the image is scanned in the forward and backward raster directions alternately to propagate label equivalences until no label changes. (2) Two-scan algorithms [6, 9-19]: two scans are performed to complete the labeling process. In the first scan, provisional labels are assigned to object pixels and the label equivalences are recorded; the equivalences are resolved during or after the first scan, and all equivalent labels are replaced by their representative label in the second scan. (3) One-scan algorithms [1, 20-21]: these algorithms avoid the analysis of label equivalences by tracing the contours of objects (connected components) or by using an iterative recursive operation. The most efficient algorithm in this group is Chang's contour-tracing algorithm [21].

The algorithms of the second category are suitable for special image representations and special computer architectures. This category can be divided into two classes: (1) Algorithms for hierarchical image formats [16, 22-25]: many labeling algorithms have been developed for image formats more complex than the simple 2D array, in which images are represented by hierarchical tree structures such as the bintree, quadtree, and octree. (2) Parallel algorithms [26-31]: these algorithms were developed for parallel machine models such as mesh-connected massively parallel processors or systolic array processors.

This paper is organized as follows. Section II gives a brief description of the most efficient single-scan labeling algorithm, the contour-tracing labeling algorithm (CTLA). The general structure of the proposed labeling algorithm is presented in Section III. Section IV discusses experimental results of the proposed algorithm and the CTLA for various images. The paper concludes with Section V.

II. CONTOUR TRACING LABELING ALGORITHM (CTLA)

In this algorithm, a binary image is scanned line by line, from top to bottom and from left to right. When an external contour pixel A is encountered for the first time (i.e., an unlabeled pixel), the algorithm performs a complete trace of the contour pixels until it returns to A, as shown in Fig. 1a.



Pixel A and all pixels of that contour are assigned a new label. In Fig. 1b, when a labeled external contour point A' is encountered, all subsequent black pixels (if they exist) are assigned the same label as A'. When an internal contour pixel B is encountered for the first time, it is assigned the same label as the external contour of the same component. The algorithm performs a complete trace of the internal contour pixels until it returns to B, as shown in Fig. 1c, and the internal contour pixels are assigned the same label as B. In Fig. 1d, when a labeled internal contour pixel B' is encountered, all subsequent black pixels (if they exist) are assigned the same label as B'. In this way, all pixels of a component are visited in a single pass over the image and labeled either with a new index or with the one already assigned to the left neighbor. CTLA uses two sub-procedures, contour tracing and tracer. The goal of contour tracing is to find an external or internal contour at a given pixel, whereas the goal of tracer is to search for the next contour point from the current point. The tracing is controlled by the initial search direction, which is determined by the position of the previous contour point. For more detailed information about this algorithm, readers are encouraged to refer to the original paper by Chang [21].
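The tracer's neighbor probe can be sketched in C as follows. This is an illustration of the idea rather than Chang's exact procedure: the direction indexing (0 = east, proceeding clockwise) and the restart offset of two positions are plausible conventions assumed here, not taken verbatim from [21].

```c
/* Sketch of a CTLA-style "tracer" step: probe the 8 neighbors of
 * (x, y) clockwise, starting near the direction from which the
 * previous contour point was reached. The indexing convention and
 * restart offset are assumptions, not Chang's exact bookkeeping. */
static const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
static const int DY[8] = { 0, 1, 1, 1, 0, -1, -1, -1 };

/* Returns the direction of the next contour point, or -1 when the
 * pixel is an isolated single-pixel object. img is row-major, w x h. */
static int tracer_step(const unsigned char *img, int w, int h,
                       int x, int y, int prev_dir)
{
    int dir = (prev_dir + 6) % 8;             /* restart two steps back */
    for (int i = 0; i < 8; i++) {
        int nx = x + DX[dir], ny = y + DY[dir];
        if (nx >= 0 && nx < w && ny >= 0 && ny < h && img[ny * w + nx])
            return dir;                       /* next contour point found */
        dir = (dir + 1) % 8;                  /* keep probing clockwise */
    }
    return -1;                                /* no object neighbor */
}
```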


III. OUTLINE OF THE PROPOSED ALGORITHM

Before describing the proposed algorithm, definitions of some terms used throughout this paper are given below.

Binary image: Consider an N × M binary image in which I(x, y) ∈ {0, 1} is the intensity of the pixel at location (x, y), where x and y are the column and row coordinates of the pixel, with x ∈ [1, N] and y ∈ [1, M]. N is the number of columns (the image width) and M is the number of rows (the image height). A pixel belonging to an object has I(x, y) = 1 and is called a significant pixel, while I(x, y) = 0 represents a background pixel, called an insignificant pixel.

Horizontal pixel spans: Contiguous pixels in the same row that have the same value form a horizontal pixel span (HPS). In a binary image there are two types of horizontal pixel spans: the significant horizontal pixel span (SHPS) and the insignificant horizontal pixel span (ISHPS). An SHPS contains significant pixels and belongs to an object, while an ISHPS contains insignificant pixels and belongs to the background. An SHPS is bounded by background pixels, and an ISHPS is bounded by object pixels. A horizontal pixel span (significant or insignificant) is described by three parameters, HPS = {Y, XS, XE}, where Y, XS, and XE represent the row, the starting column, and the ending column of the span respectively, with XE ≥ XS.

Fig. 2: A part of a binary image (gray cells represent significant horizontal pixel spans).

As shown in Fig. 2, there are 15 significant horizontal pixel spans (S1:S15, gray) and 20 insignificant horizontal pixel spans (white). The parameters of the significant horizontal pixel spans are: S1 = {1, 4, 6}, S2 = {2, 5, 5}, S3 = {3, 3, 3}, S4 = {4, 4, 6}, S5 = {5, 3, 7}, S6 = {3, 7, 7}, S7 = {6, 1, 2}, S8 = {6, 8, 9}, S9 = {7, 3, 7}, S10 = {8, 2, 2}, S11 = {8, 5, 5}, S12 = {8, 8, 9}, S13 = {9, 9, 9}, S14 = {9, 5, 5}, and S15 = {9, 2, 2}.
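For illustration, the span parameters map naturally onto a small C structure. This is a sketch for readability only; as described later in this section, the paper's implementation stores the three parameters in three parallel arrays.

```c
/* A horizontal pixel span; the field names mirror the paper's
 * {Y, XS, XE} notation. */
typedef struct {
    int y;    /* row coordinate */
    int xs;   /* starting column */
    int xe;   /* ending column (xe >= xs) */
} Span;

static const Span S1 = { 1, 4, 6 };  /* span S1 = {1, 4, 6} of Fig. 2 */
```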

Vertically connected significant horizontal pixel spans (8-connected): Two significant horizontal pixel spans are vertically connected if they are in adjacent rows and at least one pixel of the first span is connected to a pixel of the second span under 8-connectivity. In other words, consider two significant horizontal pixel spans SHPSA = {YA, XSA, XEA} and SHPSB = {YB, XSB, XEB}; SHPSA and SHPSB are vertically connected if YB = YA ± 1 and at least one of the following conditions is satisfied:
- {XSA, XSA+1, …, XEA} ∩ {XSB, XSB+1, …, XEB} ≠ ∅ (such as {S1, S2}, {S4, S5}, {S9, S11}, {S10, S15}, {S11, S14}, and {S12, S13} in Fig. 2);
- XSA = XEB + 1 (such as S5 and S7);
- XSB = XEA + 1 (such as S5 and S8).
As an example, S9 is connected to S7, S8, S10, S11, and S12, whereas S2 is connected neither to S3 nor to S6, as shown in Fig. 2.

Vertically connected significant horizontal pixel spans (4-connected): Two significant horizontal pixel spans are vertically connected if they are in adjacent rows and at least one pixel of the first span is connected to a pixel of the second span under 4-connectivity. In other words, SHPSA and SHPSB are vertically connected if and only if the following two conditions are both satisfied:
- YB = YA ± 1;
- {XSA, XSA+1, …, XEA} ∩ {XSB, XSB+1, …, XEB} ≠ ∅.
For example, S9 is connected to S11, while S9 is not connected to S7, S8, S10, or S12 in Fig. 2.
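Both conditions translate directly into a short C predicate, reusing the Span struct sketched above (a minimal sketch; the function name and flag are illustrative):

```c
#include <stdbool.h>

/* Vertical connectivity test following the conditions above. With
 * eight_connected = false only the 4-connected overlap test applies;
 * with true, diagonal touching of the column ranges also counts. */
static bool vertically_connected(Span a, Span b, bool eight_connected)
{
    if (b.y != a.y - 1 && b.y != a.y + 1)
        return false;                         /* not in adjacent rows */
    if (a.xs <= b.xe && b.xs <= a.xe)
        return true;                          /* column ranges intersect */
    return eight_connected &&
           (a.xs == b.xe + 1 || b.xs == a.xe + 1);  /* diagonal touch */
}
```

With the spans of Fig. 2, for example, vertically_connected(S9, S11, false) holds, while S9 and S12 pass only the 8-connected variant, because their column ranges touch diagonally (8 = 7 + 1).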



Horizontally connected significant horizontal pixel spans: Two significant horizontal pixel spans located in the same row are horizontally connected if there exists another span, located in the row above or below, that is vertically connected to both of them. In other words, let SHPSA = {YA, XSA, XEA} and SHPSB = {YB, XSB, XEB} be two significant horizontal pixel spans located in the same row (i.e., YA = YB = n, where n is the row number) with XSB > XEA + 1, and let SHPSC be a significant horizontal pixel span located in the row above or below (i.e., YC = n ± 1). SHPSA and SHPSB are horizontally connected if and only if SHPSC is vertically connected to both of them. For example, S3 is horizontally connected to S6 via S4. This principle generalizes to more than two spans; for instance, S10, S11, and S12 are horizontally connected via S9 using 8-connectivity (a C sketch of this test is given at the end of this subsection).

Objects in a binary image: An object in a binary image is composed of vertically and/or horizontally connected significant horizontal pixel spans. As shown in Fig. 2, there are two objects. The first object is composed of the two connected significant horizontal pixel spans {S1, S2}, while the second object is composed of the 13 connected significant horizontal pixel spans {S3, S4, …, S15}.

In the proposed labeling algorithm, labeling is therefore done in a single pass over the image, and no re-labeling is required at any point in the process. The binary image is raster scanned from the top left to the bottom right. Once the algorithm encounters a starting pixel (seed) of an object, it completely finds and labels, with the same label, all vertically and/or horizontally connected significant horizontal pixel spans that belong to that object. The raster scan is then resumed until the entire image has been scanned; the objects are thus labeled one at a time, in raster order. The proposed algorithm will be referred to as the span tracing labeling algorithm (STLA) in the following sections.
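Before turning to the pseudocode, the horizontal-connectivity test defined above can also be sketched in C. It builds on the vertically_connected predicate from the earlier sketch; the spans array holding all significant spans of the image is an assumed helper structure.

```c
/* Two spans in the same row are horizontally connected when some span
 * in an adjacent row is vertically connected to both of them. */
static bool horizontally_connected(Span a, Span b,
                                   const Span *spans, int nspans,
                                   bool eight_connected)
{
    if (a.y != b.y) return false;             /* must share a row */
    for (int i = 0; i < nspans; i++)
        if (vertically_connected(spans[i], a, eight_connected) &&
            vertically_connected(spans[i], b, eight_connected))
            return true;    /* e.g., S3 and S6 are linked via S4 in Fig. 2 */
    return false;
}
```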

The following pseudocode describes the proposed labeling algorithm:

    // Let B be an N × M binary image; a pixel belongs to an
    // object when B(x, y) = 1, while B(x, y) = 0 represents
    // a background pixel.
    L ← 2                        // set the initial label value to 2
    Do for row = 1 to M          // scan the image from top to bottom
      Do for col = 1 to N        //   and from left to right
      {
        If B[row][col] = 1 Then  // a starting pixel (seed) of an object
        {
          Find the horizontal span containing the seed pixel, push its
          parameters onto the stack, and label all pixels in that span
          with the label L.
          Do while the stack is not empty
          {
            Pop the span parameters from the top of the stack.
            Find all spans in the row above (Y - 1) that are vertically
            connected to the current span, push their parameters onto
            the stack, and label all their pixels with the label L.
            Find all spans in the row below (Y + 1) that are vertically
            connected to the current span, push their parameters onto
            the stack, and label all their pixels with the label L.
          }
          L ← L + 1              // increment the label value
        }
      }

Three one-dimensional arrays are used to implement the stack: one temporarily holds the row coordinate (Y) of a span, while the other two temporarily hold its starting column (XS) and ending column (XE) respectively. A single pointer points to the top of these arrays.
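The pseudocode can be turned into a compact, runnable C sketch of the span-tracking fill, shown below for 8-connectivity. It is illustrative rather than the author's original code: the image layout, the stack bound, and all identifiers are assumptions (the Span struct is repeated so the sketch stands alone). For 4-connectivity, restrict the probed column range in scan_row to [xs, xe].

```c
enum { W = 10, H = 9 };              /* image width and height (assumed) */

typedef struct { int y, xs, xe; } Span;

/* 0 = background, 1 = unlabeled object pixel; labels (>= 2) overwrite 1s. */
static unsigned short img[H][W];

/* Label and push every still-unlabeled span in row y that touches the
 * column range [xs-1, xe+1], i.e., the 8-connected neighborhood of a span. */
static void scan_row(int y, int xs, int xe, unsigned short label,
                     Span *stack, int *top)
{
    if (y < 0 || y >= H) return;
    int lo = xs > 0 ? xs - 1 : 0;
    int hi = xe + 1 < W ? xe + 1 : W - 1;
    for (int x = lo; x <= hi; x++) {
        if (img[y][x] != 1) continue;
        int s = x, e = x;
        while (s > 0 && img[y][s - 1] == 1) s--;      /* extend left  */
        while (e + 1 < W && img[y][e + 1] == 1) e++;  /* extend right */
        for (int i = s; i <= e; i++) img[y][i] = label;
        stack[(*top)++] = (Span){ y, s, e };          /* track this span */
        x = e;                                        /* skip past it */
    }
}

/* Single raster scan; returns the number of components found. */
static int label_image(void)
{
    /* Ample bound: every span is pushed at most once. The paper reports
     * a much tighter experimental worst case (about NM/3 entries). */
    Span stack[W * H];
    unsigned short label = 2;        /* first label is 2, as in the pseudocode */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            if (img[y][x] != 1) continue;       /* seed of a new object */
            int top = 0;
            scan_row(y, x, x, label, stack, &top);
            while (top > 0) {                   /* span tracking loop */
                Span s = stack[--top];
                scan_row(s.y - 1, s.xs, s.xe, label, stack, &top);
                scan_row(s.y + 1, s.xs, s.xe, label, stack, &top);
            }
            label++;
        }
    return label - 2;
}
```

On the image of Fig. 2, this sketch would assign label 2 to {S1, S2} and label 3 to {S3, …, S15}, returning 2. Starting the labels at 2 keeps them distinct from the object value 1, so the labeled result can overwrite the input image in place.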

IV. EXPERIMENTAL RESULTS

Both CTLA and STLA were implemented in the C language and compiled under the same conditions, and the 8-connected approach was used in both implementations. The algorithms were applied to different types of test images on an Intel Pentium 4, 2 GHz personal computer with 1 GB SDRAM. The C source code of the CTLA was downloaded from: http://ocrwks11.iis.sinica.edu.tw/~dar/Download/WebPages/Component.htm.

Memory used: CTLA requires two 2-dimensional arrays to label an N × M binary image. The first 2D array, of NM bytes, holds the binary image, while the second, of 2NM bytes, holds the labeled image, so CTLA uses 3NM bytes in total. On the other hand, STLA requires one 2D array of 2NM bytes that initially holds the binary image and is overwritten with labels during the labeling process. In addition, STLA requires three one-dimensional arrays to temporarily hold the parameters of the significant horizontal spans. Each array needs at most NM/3 bytes, a bound determined experimentally for the worst case of a single object distributed over the whole image as a chessboard pattern, so the three arrays together do not exceed NM bytes. Thus both CTLA and STLA use the same amount of memory (3NM bytes) to label the same image; for a 1024 × 1024 image, for example, this amounts to 3 MB.

Test images: Several types of images were used to compare the performance of the CTLA and STLA algorithms: natural, texture, medical, fingerprint, text, noise, and artificial images. Artificial images contain specialized patterns

(symbols, stair-like, spiral-like, saw-tooth-like, checker-board-like, and honeycomb-like connected components). Noise images were prepared as follows: uniform random noise images of six sizes (32 × 32, 64 × 64, 128 × 128, 256 × 256, 512 × 512, and 1024 × 1024 pixels) were constructed, and for each size 41 binary images were generated by varying the threshold from 1 to 1000 in steps of 25, producing a total of 246 binary images. This kind of image is appropriate for a severe evaluation of labeling algorithms, because its connected components have complicated geometrical shapes and complex connectivity [8]. Natural images include landscape, aerial, portrait, still-life, and snapshot images. Texture images as well as natural images were downloaded from the USC-SIPI image database (the image database of the Signal & Image Processing Institute, University of Southern California, http://sipi.usc.edu/database/) and the SAMPL image database (the image database of the Signal Analysis and Machine Perception Laboratory, Ohio State University, http://sampl.ece.ohio-state.edu/data/stills/). Fingerprint images were downloaded from the FVC2000 web site (http://bias.csr.unibo.it/fvc2000/). Medical images were obtained from the Japanese Society of Radiological Technology (JSRT) digital image database (http://www.jsrt.or.jp/web_data). Finally, text images were obtained by converting an e-book to images. All images were transformed into binary images by means of Otsu's threshold selection method [32].

Execution time computation: The execution time of a labeling process (as of any program) applied to the same image on the same machine differs between consecutive runs. These differences are due to random activities on the machine: the execution time of a single test run is the sum of the algorithm's actual execution time and a parasitic time caused by these random activities. To minimize the effect of the parasitic time, the process should be performed K times on the same image and the average execution time taken as the process's execution time. To determine the value of K that minimizes this effect, the following experiment was performed. The average execution time of the labeling process was computed for K = 1, 10, 100, 1000, 2000, 3000, 4000, and 5000, and for each K, 100 consecutive runs were performed on the same machine. Table 1 shows the minimum, maximum, average, and standard deviation δ of the average execution times obtained for the different values of K. As Table 1 shows, the standard deviation of the execution time decreases as K grows, and the effect of the parasitic time is minimized for K = 5000 or higher, as shown in Fig. 3. All time values presented in this paper are in milliseconds and were obtained by averaging the execution time of the labeling process over K = 5000 runs.
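The averaging scheme can be illustrated with the following C fragment. It is a sketch: label_image() stands for the labeling routine under test, and restore() is a hypothetical helper that reloads the binary image before each repetition, since labeling overwrites the image with labels.

```c
#include <time.h>

void restore(void);        /* hypothetical: reload the test image */
int  label_image(void);    /* labeling routine under test */

/* Average the labeling time over K repetitions to suppress the
 * parasitic time caused by random activities on the machine. */
double average_labeling_time_ms(int K)
{
    double total_ms = 0.0;
    for (int i = 0; i < K; i++) {
        restore();                                   /* not timed */
        clock_t t0 = clock();
        label_image();
        total_ms += 1000.0 * (double)(clock() - t0) / CLOCKS_PER_SEC;
    }
    return total_ms / K;   /* average execution time over K runs, in ms */
}
```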

TABLE 1
EXECUTION TIME [msec] FOR 100 CONSECUTIVE RUNS FOR DIFFERENT K

  K       Min     Average   Max     Standard deviation (δ)
  1       0       3.36      16      6.55
  10      0       2.84      9.4     1.97
  100     1.87    2.52      3.44    0.26
  1000    2.31    2.53      2.89    0.10
  2000    2.36    2.53      3.03    0.08
  3000    2.42    2.53      2.77    0.06
  4000    2.39    2.51      2.74    0.05
  5000    2.40    2.53      2.64    0.04

Fig. 3. Random activities on the test machine (average execution time [msec] over 100 consecutive runs, for K = 100 to 5000).

All noise images were used to test the linearity of the execution time versus image size. As shown in Fig. 4, both algorithms exhibit the ideal linear characteristic with respect to image size. Noise images of 1024 × 1024 pixels were used to test the execution time versus the density of an image; as shown in Fig. 5, the STLA algorithm is superior to the CTLA algorithm for most images. Natural, texture, medical, fingerprint, text, noise, and artificial images with specialized shape patterns were used to compare the performance of the two algorithms. The results of the comparisons are shown in Table 2, and results for some selected images are illustrated in Fig. 6, Fig. 7, and Fig. 8, where O represents the number of object pixels and C represents the number of connected components (object pixels are displayed in black). The results demonstrate that STLA is faster than CTLA for all images.

Fig. 4. Execution time versus the number of pixels in an image.


Fig. 5. Execution time versus the density of objects in an image [1024 × 1024].

TABLE 2
PERFORMANCES OF THE CTLA AND STLA ALGORITHMS
N: number of images in the test set. O: number of object pixels. C: number of connected components. CTLA and STLA times in milliseconds.

  Image type     Size       N    Stat     C          O            CTLA     STLA
  Texture        512x512    240  Min      2          16856        2.76     1.72
                                 Average  732.1      134239       7.44     4.72
                                 Max      8576       222148       12.98    7.27
  Texture        1024x1024  216  Min      1          72481        9.22     7.29
                                 Average  1049.1     526146.1     22.59    18.34
                                 Max      9835       865403       44.79    28.87
  Text&image     1300x1900  233  Min      172        14493        14.17    12.11
                                 Average  1778.1     100931.2     26.74    17.03
                                 Max      6201       188837       39.74    21.82
  Text only      1300x1900  209  Min      169        9314         13.75    12.04
                                 Average  2393.8     137103.4     32.34    19.03
                                 Max      3966       223532       51.38    29.29
  Natural        512x512    147  Min      49         24132        1.88     1.67
                                 Average  866.2      123227.2     5.52     3.98
                                 Max      6912       222318       12.25    6.47
  Natural        1024x1024  89   Min      159        95513        8.81     7.28
                                 Average  2256.2     459878.4     20.81    16.52
                                 Max      8937       825779       35.57    25.70
  JSRT_medical   2048x2048  247  Min      12703      1705721      92.12    70.17
                                 Average  19686.9    2005060      114.25   82.23
                                 Max      45173      2458927      175.15   111.17
  Artificial     512x512    125  Min      32         6975         2.02     1.46
                                 Average  825.08     31295.87     3.81     2.20
                                 Max      8576       151074       11.10    5.66
  Fingerprint    240x320    80   Min      194        8458         1.21     0.61
                                 Average  396.5      15409.2      1.65     0.88
                                 Max      514        26454        1.99     1.14
  Fingerprint    300x300    80   Min      16         20422        1.44     1.04
                                 Average  160.6      37394.7      1.95     1.31
                                 Max      602        61173        2.49     1.52
  Fingerprint    256x364    80   Min      22         12330        1.88     1.07
                                 Average  296.5      40594.0      2.72     1.63
                                 Max      1467       70855        4.15     2.52
  Fingerprint    448x478    80   Min      55         25441        2.31     1.65
                                 Average  547.9      68311.63     4.35     2.70
                                 Max      1889       115097       6.47     3.69


Fig. 6. Execution time (ms) of labeling algorithms for the selected binary images: (a) fingerprint images; (b) texture images.


Fig. 7. Execution time (ms) of labeling algorithms for the selected binary images: (a) natural images; (b) text images.

Fig. 8. Execution time (ms) of labeling algorithms for the selected binary images: (a) medical images; (b) artificial images; (c) noise images.



V. CONCLUSION

In this paper, a single-scan algorithm (STLA) for labeling connected components in binary images has been presented. The main feature of this algorithm is that it labels an entire component in one shot, thereby avoiding the analysis of label equivalences. The binary image is raster scanned from the top left to the bottom right. Once the algorithm encounters a starting pixel (seed) of an object, it completely finds and labels, with the same label, all vertically and/or horizontally connected significant horizontal pixel spans that belong to that object; the raster scan is then resumed until the entire image has been scanned, so the objects are labeled one at a time in raster order. The performance of the proposed algorithm was compared with that of the well-known single-scan algorithm, Chang's contour tracing algorithm (CTLA). Both algorithms generate consecutive labels for objects and use the same amount of memory (3NM bytes) to label the same image. Experimental results demonstrated that the proposed STLA algorithm is superior to the contour tracing algorithm. The significant span parameters can be used to extract the component contours (external and/or internal), which is useful for many applications. STLA can also extract an object's information immediately after labeling it; such information, including area, height, width, and perimeter, is required for subsequent processing. In addition, many statistical measures can be determined for each object, such as the number of pixels, sum, mean, standard deviation, etc.

ACKNOWLEDGMENT

The author wishes to express his deep gratitude and respect to the authors of the CTLA algorithm, especially Dr. Fu Chang, for permission to download their C source code. Deep thanks go to the staff of the JSRT for their permission to download the medical images. The author also thanks Prof. Radi Teleb of King ABDULAZIZ University and Dr. Yosry A. Azzam of Al-MAJMAAH University for proofreading this paper. Deep thanks go to the anonymous referees for their valuable comments, which improved this paper greatly. The author is grateful to the editor for his/her kind cooperation and help.

REFERENCES

[1] D. H. Ballard, Computer Vision, Englewood Cliffs, New Jersey: Prentice-Hall, 1982.
[2] A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd ed., vol. 2, Academic Press, San Diego, CA, 1982.
[3] G. C. Stockman and L. G. Shapiro, Computer Vision, New Jersey: Prentice Hall, 2001.
[4] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., New Jersey: Prentice Hall, 2002.
[5] K. Wu, E. Otoo, and K. Suzuki, "Optimizing two-pass connected-component labeling algorithms," Pattern Analysis & Applications, 2008, in press, doi: 10.1007/s10044-008-0109-y.
[6] L. He, Y. Chao, K. Suzuki, and K. Wu, "Fast connected-component labeling," Pattern Recognition, vol. 42, pp. 1977-1987, 2009.
[7] R. M. Haralick, "Some neighborhood operations," in Real Time/Parallel Computing Image Analysis, Plenum Press, New York, 1981, pp. 11-35.
[8] K. Suzuki, I. Horiba, and N. Sugie, "Linear-time connected-component labeling based on sequential local operations," Computer Vision and Image Understanding, vol. 89, pp. 1-23, 2003.
[9] A. Rosenfeld and J. L. Pfaltz, "Sequential operations in digital picture processing," Journal of the ACM, vol. 13, no. 4, pp. 471-494, 1966.
[10] R. Lumia, L. Shapiro, and O. Zuniga, "A new connected components algorithm for virtual memory computers," Computer Vision, Graphics, and Image Processing, vol. 22, no. 2, pp. 287-300, 1983.
[11] R. Lumia, "A new three-dimensional connected components algorithm," Computer Vision, Graphics, and Image Processing, vol. 23, no. 2, pp. 207-217, 1983.
[12] Y. Shirai, "Labeling connected regions," in Three-Dimensional Computer Vision, Springer, Berlin, 1987, pp. 86-89.
[13] T. Gotoh, Y. Ohta, M. Yoshida, and Y. Shirai, "Component labeling algorithm for video rate processing," in Proceedings of the SPIE, Advances in Image Processing, vol. 804, April 1987, pp. 217-224.
[14] M. Komeichi, Y. Ohta, T. Gotoh, T. Mima, and M. Yoshida, "Video-rate labeling processor," in Proceedings of the SPIE, Image Processing II, vol. 1027, September 1988, pp. 69-76.
[15] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, vol. I, Addison-Wesley, Reading, MA, 1992, pp. 28-48.
[16] M. B. Dillencourt, H. Samet, and M. Tamminen, "A general approach to connected-component labeling for arbitrary image representations," Journal of the ACM, vol. 39, no. 2, pp. 253-280, 1992.
[17] S. Naoi, "High-speed labeling method using adaptive variable window size for character shape feature," in IEEE Asian Conference on Computer Vision, vol. 1, December 1995, pp. 408-411.
[18] C. Fiorio and J. Gustedt, "Two linear time union-find strategies for image processing," Theoretical Computer Science, vol. 154, no. 2, pp. 165-181, 1996.
[19] L. He, Y. Chao, and K. Suzuki, "A run-based two-scan labeling algorithm," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 749-756, 2008.
[20] A. Rosenfeld, "Connectivity in digital pictures," Journal of the ACM, vol. 17, no. 1, pp. 146-160, 1970.
[21] F. Chang, C. J. Chen, and C. J. Lu, "A linear-time component-labeling algorithm using contour tracing technique," Computer Vision and Image Understanding, vol. 93, pp. 206-220, 2004.
[22] H. Samet, "Connected component labeling using quadtrees," Journal of the ACM, vol. 28, no. 3, pp. 487-501, 1981.
[23] H. Samet and M. Tamminen, "An improved approach to connected component labeling of images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, Florida, 1986, pp. 312-318.
[24] H. Samet and M. Tamminen, "Efficient component labeling of images of arbitrary dimension represented by linear bintrees," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, pp. 579-586, 1988.
[25] J. Hecquard and R. Acharya, "Connected component labeling with linear octree," Pattern Recognition, vol. 24, no. 6, pp. 515-531, 1991.
[26] L. W. Tucker, "Labeling connected components on a massively parallel tree machine," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, Florida, 1986, pp. 124-129.
[27] R. Cypher, J. Sanz, and L. Snyder, "An EREW PRAM algorithm for image component labeling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 3, pp. 258-262, 1989.
[28] H. M. Alnuweiri and V. K. Prasanna, "Parallel architectures and algorithms for image component labeling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 10, pp. 1024-1034, 1992.
[29] P. Bhattacharya, "Connected component labeling for binary images on a reconfigurable mesh architecture," Journal of Systems Architecture, vol. 42, no. 4, pp. 309-313, 1996.
[30] A. N. Moga and M. Gabbouj, "Parallel image component labeling with watershed transformation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 5, pp. 441-450, 1997.
[31] F. Knop and V. Rego, "Parallel labeling of three-dimensional clusters on networks of workstations," Journal of Parallel and Distributed Computing, vol. 49, no. 2, pp. 182-203, 1998.
[32] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

BIOGRAPHIES

Dr. Eng. Farag Ibrahim Younis El-Nagahy received his B.Sc. degree in Electronic Engineering from Menoufia University, Faculty of Electronic Engineering, Egypt, in 1990; the M.Sc. degree in Systems & Computers Engineering from Al-Azhar University, Faculty of Engineering, Cairo, Egypt, in 1998; and the Ph.D. degree in Electrical Engineering and Informatics from the Czech Technical University in Prague, Faculty of Electrical Engineering, in 2004. From 1992 to 1998 he worked as a Research Assistant at the National Research Institute of Astronomy and Geophysics (NRIAG), Helwan, Cairo, Egypt; from 1998 to 2005 as an Assistant Researcher at NRIAG; and from 2005 to 2007 as a Researcher at NRIAG. Since 2007, he has been an Assistant Professor in the Department of Computer Science, Faculty of Computing and Information Technology, King ABDUL AZIZ University, Kingdom of Saudi Arabia. His research interests include image processing, neural networks, computer vision, and pattern recognition.
