Automated detection of small-size pulmonary nodules based on helical CT images

Xiangwei Zhang¹, Geoffrey McLennan², Eric A. Hoffman³, and Milan Sonka¹

¹ Dept. of Electrical Engineering, University of Iowa, Iowa City, IA 52242, USA
² Department of Internal Medicine, University of Iowa, Iowa City, IA 52242, USA
³ Department of Radiology, University of Iowa, Iowa City, IA 52242, USA
Abstract. A computer-aided diagnosis (CAD) system to detect small-size (from 2 mm to around 10 mm) pulmonary nodules in helical CT scans is developed. This system uses different schemes to locate juxtapleural nodules and non-pleural nodules. For juxtapleural nodules, morphological closing, thresholding and labeling are performed to obtain volumetric nodule candidates; gray-level and geometric features are extracted and analyzed using a linear discriminant analysis (LDA) classifier. To locate non-pleural nodules, a discrete-time cellular neural network (DTCNN) uses local shape features which successfully capture the differences between nodules and non-nodules, especially vessels. The DTCNN was trained using a genetic algorithm (GA). Testing on 17 cases with 3979 slice images showed the effectiveness of the proposed system, yielding a sensitivity of 85.6% with 9.5 FPs/case (0.04 FPs/image). Moreover, the CAD system detected many nodules missed by human visual reading. This shows that the proposed CAD system acts effectively as an assistant for human experts in detecting small nodules and provides a “second opinion” to human observers.
1 Introduction
Lung cancer is one of the most lethal cancers worldwide, and its cure depends critically on detection of the disease in its early stages. Computed tomography (CT) is an important tool for the detection and diagnosis of pulmonary nodules. Due to the large amount of image data created by a thoracic CT examination, interpreting lung CT images to detect nodules is a very challenging task for radiologists. Computer-aided diagnosis (CAD) is considered a promising tool to aid radiologists in lung nodule CT interpretation. Many nodule detection techniques have been developed for chest radiographs or CT images. Giger [1] detected nodules using multiple gray-level thresholding and a rule-based approach. Armato [2] introduced several 3D features and performed feature analysis with a linear discriminant analysis (LDA) classifier. Kanazawa [3] used fuzzy clustering and a rule-based method. Penedo [4] set up two neural networks (NNs), the first detecting suspected areas and the second acting as a classifier. Template-based methods were used to detect nodules by Lee [5]. Brown [6] developed a patient-specific prior model to find nodules on a baseline scan and locate them in follow-up scans.
Fig. 1. Overall scheme of the whole pulmonary nodule CAD system.
According to their locations, nodules can be divided into two groups: juxtapleural nodules (nodules attached to the pleura) and non-pleural nodules. Usually, a juxtapleural nodule distorts the transversal (axial) lung contour and yields an indented part; a human observer can find this abnormality by tracking along the contour, but this approach does not work for non-pleural nodules. A small-size nodule tends to be overlooked by a human observer when it is located near vessels or airways, especially when attached to them. Based on these observations, we handle juxtapleural nodules and non-pleural nodules separately. Since our aim is to find nodules in their initial stages and to complement the readings of radiologists, detection of small-size nodules (from 2 mm to around 10 mm) is our main interest.
2 Overall Scheme
The overall scheme of nodule detection is outlined in Fig. 1. There are three fundamental steps: preprocessing, juxtapleural nodule detection, and non-pleural nodule detection. The preprocessing consists of isotropic resampling (implemented by trilinear interpolation) and lung segmentation. Isotropic data are more suitable for 3D processing and also simplify the structure of the DTCNN; a cubic voxel size of 0.7 mm is always produced. The lung area is then extracted from the thoracic CT data [7] so that all subsequent processing is restricted to the pulmonary zone. In the detection of juxtapleural nodules, morphological closing is applied to the original segmented lung mask to include juxtapleural nodules; thresholding and labeling are then performed to yield 3D volumetric nodule candidates; finally, gray-level features and 3D global geometric features are extracted and fed into an LDA classifier to confirm or refute each nodule candidate. In the detection of non-pleural nodules, optimal thresholding is first applied to the whole lung to obtain the non-air part, which consists of nodules, vessels, airway walls and other high-attenuation structures. For each voxel belonging to the non-air part, a local shape-index feature is computed, and a DTCNN trained with a genetic algorithm (GA) is applied to this local geometric feature to extract nodule areas. Finally, 3D labeling gives the positions of the detected nodules.
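For illustration, the resampling part of the preprocessing can be sketched as follows. This is not the authors' code; the function name and the use of SciPy are assumptions, and order=1 selects the trilinear interpolation mentioned above.

```python
import numpy as np
from scipy import ndimage

def resample_isotropic(volume, spacing, target=0.7):
    """Resample a CT volume to cubic voxels of `target` mm per side.

    `spacing` gives the original (z, y, x) voxel size in mm; order=1
    corresponds to trilinear interpolation.
    """
    zoom_factors = np.asarray(spacing, dtype=float) / target
    return ndimage.zoom(volume, zoom_factors, order=1)

# Example: a scan with 1.3 mm slices and 0.7 mm in-plane pixel size
# iso = resample_isotropic(ct_volume, spacing=(1.3, 0.7, 0.7))
```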
Fig. 2. Juxtapleural nodule candidate generation. (a) An original CT section image with a juxtapleural nodule; (b) Segmented lung mask image; (c) Lung mask after 2D closing; (d) Juxtapleural nodule candidates created by thresholding.
3 Juxtapleural nodule detection

3.1 Juxtapleural nodule candidate generation
In the preprocessing, the lung field extracted from the thoracic CT data [7] consists of two 3D connected components, corresponding to the left and right lungs, as shown in Fig. 2 (b). Note that in this step a juxtapleural nodule is usually treated as being outside the lung field (Fig. 2 (b)). Morphological closing can be used to include the indented area of a juxtapleural nodule, as shown in Fig. 2 (c). The closing operation should be applied to the left and right lungs individually so that no regions between the two lungs are included. Considering the isotropic property of the resampled CT data, the structural element can easily be chosen as a sphere (3D) or a circle (2D). Due to heart motion, many undesired areas near the heart would be included if 3D morphological closing were used; experiments showed that a simple 2D closing in the transversal section image is able to include true nodule areas without introducing too many false nodule areas caused by heart motion. Considering the voxel size of 0.7 mm and the nodule sizes of interest (from 2 mm to around 10 mm), a circular structural element with a radius of 15 voxels is large enough to include the major part of a juxtapleural nodule. Optimal thresholding [8] is used to automatically determine a threshold for segmentation between the background (air) and the objects (high-intensity structures, including nodules). The threshold is found iteratively: let $T^t$ be the threshold at iteration step $t$, and let $\mu_b^t$ and $\mu_o^t$ be the mean gray levels of the background and the objects obtained by applying $T^t$ to the image. For the next step $t+1$, the threshold is updated as $T^{t+1} = (\mu_b^t + \mu_o^t)/2$, and this procedure is iterated until the threshold no longer changes, i.e., $T^{t+1} = T^t$. The initial threshold $T^0$ is chosen as the mean value of the whole image. 3D connected-component analysis then organizes the high-intensity voxels into 3D objects, of which only those attached to the lung wall are of interest. The size property is used to remove candidates with very large or very small volumes; for example, the main part of the vessel tree is usually the largest object in the lung. The result of this processing is the set of nodule candidate objects shown in Fig. 2 (d).
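The iterative threshold selection just described can be written compactly as below. This is an illustrative NumPy sketch (not the original implementation); a small tolerance stands in for the exact stopping condition $T^{t+1} = T^t$. The resulting binary mask can then be grouped into 3D candidates with a connected-component labeling routine such as scipy.ndimage.label.

```python
import numpy as np

def optimal_threshold(image, tol=1e-3):
    """Iterative threshold selection (Ridler-Calvard [8]).

    Starts from the global mean T^0 and repeatedly sets the threshold
    to the midpoint of the background and object means.
    """
    t = float(image.mean())                    # initial threshold T^0
    while True:
        background = image[image <= t]         # air part
        objects = image[image > t]             # high-intensity structures
        t_new = 0.5 * (background.mean() + objects.mean())
        if abs(t_new - t) < tol:               # T^{t+1} == T^t (up to tolerance)
            return t_new
        t = t_new
```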
3.2 Feature extraction
Given the 3D volume of a nodule candidate, gray-level features including the highest value, lowest value, average value and standard deviation can be obtained directly. Seven geometric features are also extracted: volume size, surface area, AspectRatio, Sphericity, m1, m2 and mRatio, where

$$\mathrm{AspectRatio} = \frac{\text{Maximum Diameter}}{\text{Minimum Diameter}}, \qquad \mathrm{Sphericity} = \frac{\left(\dfrac{3\,\mathrm{Size}}{4\pi}\right)^{1/3}}{\left(\dfrac{\mathrm{SurfaceArea}}{4\pi}\right)^{1/2}}. \tag{1}$$
The $r$-th contour-sequence moment $m_r$ and central moment $\mu_r$ are defined as

$$m_r = \frac{1}{N}\sum_{i=1}^{N}\bigl(z(i)\bigr)^r, \qquad \mu_r = \frac{1}{N}\sum_{i=1}^{N}\bigl(z(i) - m_1\bigr)^r, \tag{2}$$

where $z(i)$ is the distance between the center and boundary-surface point $i$, and $N$ is the number of points on the boundary surface. mRatio is simply defined as $m_2/m_1$.
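A minimal sketch of the geometric features of Eqs. (1) and (2) is given below, assuming the candidate's boundary is available as an N × 3 array of surface-point coordinates; the function names and array layout are illustrative, not the authors' implementation.

```python
import numpy as np

def sphericity(volume_size, surface_area):
    """Sphericity of Eq. (1): radius of a sphere with the candidate's
    volume divided by the radius of a sphere with its surface area."""
    return ((3.0 * volume_size / (4.0 * np.pi)) ** (1.0 / 3.0)
            / (surface_area / (4.0 * np.pi)) ** 0.5)

def contour_moments(center, surface_points):
    """Contour-sequence moments of Eq. (2); z(i) is the distance from
    the candidate's center to the i-th boundary-surface point."""
    z = np.linalg.norm(np.asarray(surface_points, float)
                       - np.asarray(center, float), axis=1)
    m1 = z.mean()
    m2 = (z ** 2).mean()
    return m1, m2, m2 / m1        # m1, m2 and mRatio = m2 / m1
```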
3.3 Feature analysis
A leave-one-case-out method is used for training and testing: the classifier is trained on the nodule candidates of all but one case, and the candidates of the remaining case are used to test the trained classifier. This procedure is repeated until each case has served as the testing case. The reason for choosing this scheme instead of the common leave-one-candidate-out scheme is that each clinical case is independent of the other cases, whereas a nodule candidate may be dependent on other candidates in the same case. In this work, LDA, a neural network (NN), and a support vector machine (SVM) were tried for the classification.
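The leave-one-case-out evaluation is equivalent to grouped cross-validation. The sketch below uses scikit-learn's LinearDiscriminantAnalysis and LeaveOneGroupOut purely for illustration (the original work does not specify a library); `case_ids` assigns each candidate to its scan so candidates from one case never appear in both the training and the test set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_case_out_scores(features, labels, case_ids):
    """Leave-one-case-out evaluation: all candidates from one scan form
    the held-out fold of each split."""
    scores = np.empty(len(labels), dtype=float)
    for train_idx, test_idx in LeaveOneGroupOut().split(features, labels, case_ids):
        lda = LinearDiscriminantAnalysis().fit(features[train_idx], labels[train_idx])
        scores[test_idx] = lda.decision_function(features[test_idx])
    return scores   # sweep a threshold over these scores to trace an ROC curve
```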
Fig. 3. Iso-surface rendering of anatomical structures in the pulmonary zone. The sphere-like objects located close to the center of each image are nodules; the tubular structures are vessels; the planar shapes are the pleural surface. Note that the nodules are rendered in a darker gray level.
4 Non-pleural nodule detection

4.1 Local shape features
Basic observations show that most non-pleural nodules take a sphere-like shape, while vessels and airways are tubular structures; see, for example, Fig. 3. This shape difference has usually been used as a feature of a segmented object, i.e., a global feature of a suspected nodule area (SNA), as in the detection of juxtapleural nodules. In order to avoid treating a non-isolated nodule as part of another structure, a local shape feature associated with each voxel is used instead. By assuming the surface of interest to be locally a level surface, local shape features can be computed for each voxel. A local shape can be completely described by its two principal curvatures, i.e., the maximal and minimal curvatures $k_1$ and $k_2$. Equivalently, the Gaussian curvature K and the mean curvature H can describe a local shape, as in the HK segmentation introduced by Besl [9]. Neither the $k_1$, $k_2$ pair nor the HK curvature pair captures the intuitive notion of “local shape” very well, as two parameters are needed to “tell” the local shape. Koenderink [10, 11] proposed two measures of local surface shape, the “shape index” S and the “curvedness” C. The shape index is scale-invariant and captures the intuitive notion of “local shape”, whereas the curvedness specifies the amount of curvature. S and C are defined as

$$S = \frac{2}{\pi}\arctan\frac{k_1 + k_2}{k_1 - k_2}, \qquad C = \sqrt{\frac{k_1^2 + k_2^2}{2}}, \qquad k_1 \ge k_2. \tag{3}$$
The SC scheme decouples the shape and the magnitude of the curvatures by transforming the Cartesian $(k_1, k_2)$ description of a local shape into a polar-coordinate description. Every distinct shape, except for the plane, corresponds to a unique value of S. Specifically, S = 1 indicates a cap (like spherical nodules), and S = 0.5 (ridge) corresponds to cylindrical shapes (like vessels). The SC scheme has been used successfully to detect colonic polyps in virtual colonoscopy [12]. Many derivations of curvature computation for a level surface have been developed [13–15]; the resulting formulas are essentially identical, and a concise derivation is given in [15]. The Gaussian curvature K and the mean curvature H are given by

$$K = \frac{1}{(f_x^2 + f_y^2 + f_z^2)^2}\Bigl\{ f_x^2 (f_{yy} f_{zz} - f_{yz}^2) + 2 f_y f_z (f_{xy} f_{xz} - f_{xx} f_{yz}) + f_y^2 (f_{xx} f_{zz} - f_{xz}^2) + 2 f_x f_z (f_{xy} f_{yz} - f_{xz} f_{yy}) + f_z^2 (f_{xx} f_{yy} - f_{xy}^2) + 2 f_x f_y (f_{xz} f_{yz} - f_{xy} f_{zz}) \Bigr\} \tag{4}$$

$$H = \frac{-1}{2\,(f_x^2 + f_y^2 + f_z^2)^{3/2}}\Bigl\{ (f_y^2 + f_z^2) f_{xx} + (f_x^2 + f_z^2) f_{yy} + (f_x^2 + f_y^2) f_{zz} - 2 f_x f_y f_{xy} - 2 f_x f_z f_{xz} - 2 f_y f_z f_{yz} \Bigr\} \tag{5}$$
The principal curvatures can then be computed from K and H as

$$k_{1,2} = H \pm \sqrt{H^2 - K}. \tag{6}$$
The partial derivatives are estimated by convolving the intensity image directly with the corresponding derivatives of a Gaussian filter. Because most small-size nodules show a sphere-like structure, the shape index of voxels belonging to nodules should be close to 1; similarly, vessels have tube-like structures, represented by a shape-index value of about 0.5. Due to structural noise, the difference in shape index between nodules and vessels is only meaningful for a population of voxels rather than for a single voxel. What is needed is an information-processing system that synthesizes the information in a neighborhood of a voxel to decide that voxel's class, nodule or non-nodule; in this work, a discrete-time cellular neural network (DTCNN) acts as this classification system. Considering the high intensity of nodules, the local-shape-based detection is applied only to the bright parts inside the lung, i.e., a region of interest (ROI) is extracted to reduce the processing, with the threshold again determined by the optimal thresholding [8] described above.
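Equations (3)–(6) translate directly into a voxel-wise computation. The following sketch (an illustration assuming a (z, y, x) array layout and Gaussian-derivative estimation of the partials, not the authors' code) returns the shape index S and curvedness C for every voxel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shape_index_and_curvedness(volume, sigma=1.0, eps=1e-10):
    """Voxel-wise shape index S and curvedness C of Eqs. (3)-(6).

    Partial derivatives of the (z, y, x ordered) intensity volume are
    estimated with Gaussian derivative filters of width `sigma` voxels.
    """
    v = volume.astype(float)

    def g(oz, oy, ox):
        # derivative of given order along each axis
        return gaussian_filter(v, sigma, order=(oz, oy, ox))

    fx, fy, fz = g(0, 0, 1), g(0, 1, 0), g(1, 0, 0)
    fxx, fyy, fzz = g(0, 0, 2), g(0, 2, 0), g(2, 0, 0)
    fxy, fxz, fyz = g(0, 1, 1), g(1, 0, 1), g(1, 1, 0)

    grad2 = fx**2 + fy**2 + fz**2 + eps
    # Gaussian curvature K, Eq. (4)
    K = (fx**2 * (fyy*fzz - fyz**2) + 2*fy*fz*(fxy*fxz - fxx*fyz)
         + fy**2 * (fxx*fzz - fxz**2) + 2*fx*fz*(fxy*fyz - fxz*fyy)
         + fz**2 * (fxx*fyy - fxy**2) + 2*fx*fy*(fxz*fyz - fxy*fzz)) / grad2**2
    # Mean curvature H, Eq. (5)
    H = -((fy**2 + fz**2)*fxx + (fx**2 + fz**2)*fyy + (fx**2 + fy**2)*fzz
          - 2*fx*fy*fxy - 2*fx*fz*fxz - 2*fy*fz*fyz) / (2.0 * grad2**1.5)

    # Principal curvatures, Eq. (6); clip small negatives from round-off
    root = np.sqrt(np.maximum(H**2 - K, 0.0))
    k1, k2 = H + root, H - root

    S = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)   # Eq. (3), k1 >= k2
    C = np.sqrt((k1**2 + k2**2) / 2.0)
    return S, C
```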
4.2 Discrete-time cellular neural networks
The discrete-time cellular neural network (DTCNN) [16, 17] was introduced as a discrete-time version of the CNN [18, 19]. This first-order, discrete-time dynamical system consists of multiple identical cells at regularly spaced positions. The dynamics of a cell are described by the discrete-time state equation

$$X(n+1) = A\,Y(n) + B\,U + I, \tag{7}$$

where X is the inner state of the cell, Y its output, and U its input; n is a nonnegative integer indicating the time step, A is the feedback template, B is the control (input) template, and I is a constant bias parameter. The output is given by the piecewise-linear function

$$Y(n) = \tfrac{1}{2}\bigl(|X(n) + 1| - |X(n) - 1|\bigr). \tag{8}$$
The two templates A and B and the bias I completely determine the behavior of a DTCNN for given inputs and initial conditions. The DTCNN can be interpreted as an iterative filter: for the one-step filter of Equation (7), a new voxel value X(n+1) is determined directly by the old voxel values in the corresponding neighborhood. The r-neighborhood of cell C(i, j, k) is a cubic region of dimensions (2r+1) × (2r+1) × (2r+1) around the center position (i, j, k). This neighborhood is usually chosen as small as possible, typically r = 1, so a one-step filter can only extract very local properties. The propagation property of an iterative filter means that the output value after n iterations is indirectly influenced by a neighborhood n times larger than the original one. In our nodule detection application, a 3D DTCNN is built with the same structure and size as the segmented lung, with each cell representing a voxel at the corresponding position. The neighborhood of the DTCNN is chosen as 3 × 3 × 3 (r = 1). With the aim of quickly detecting nodules in their initial stages (diameters around or under 10 mm), the number of iterations is set to N = 9. This results in an affected neighborhood of size 19 × 19 × 19, which is sufficient to cover the nodule sizes of our interest. The output reached when the iteration stops is defined as the settled output, denoted Y(N). Because the CNN's input and initial-condition values lie in the interval [−1, 1], a normalization is needed to make the data suitable for the CNN: the shape-index range [0.5, 1] is linearly transformed to [−1, 1], and all values in the interval [−1, 0.5] are mapped to −1. Since the output value 1 corresponds to the nodule class, this normalization gives voxels with shape-index values near 1 a larger initial probability of being a nodule. The DTCNN can thus be viewed as a system whose initial state at each cell is the shape-index value of the corresponding voxel. As the iterations proceed, information from increasingly large neighborhoods is used to make the decision, until either the state reaches a stable equilibrium point or the iterations stop.
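As an illustration of Eqs. (7) and (8) and of the normalization just described, the DTCNN iteration can be sketched as below. The use of a 3 × 3 × 3 correlation to apply the templates A and B, the boundary value of −1, and the function name are assumptions; the templates and bias are the parameters learned in Section 4.3.

```python
import numpy as np
from scipy.ndimage import correlate

def dtcnn_run(shape_idx, A, B, bias, n_iter=9):
    """Run the DTCNN of Eqs. (7)-(8) on a shape-index volume.

    A and B are 3x3x3 feedback and control templates; `bias` is the scalar I.
    Shape-index values in [0.5, 1] are mapped linearly to [-1, 1] and
    everything below 0.5 is clamped to -1, as described above.
    """
    U = np.clip(4.0 * (shape_idx - 0.75), -1.0, 1.0)   # normalized input
    X = U.copy()                                       # initial state

    def out(x):                                        # piecewise-linear output, Eq. (8)
        return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

    for _ in range(n_iter):
        Y = out(X)
        X = (correlate(Y, A, mode='constant', cval=-1.0)    # feedback term
             + correlate(U, B, mode='constant', cval=-1.0)  # control term
             + bias)                                         # Eq. (7)
    return out(X)   # settled output Y(N); threshold at 0 to obtain nodule voxels
```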
4.3 DTCNN learning based on genetic algorithm
Due to the piecewise-linear nonlinearity of the DTCNN, derivative-based methods such as gradient descent cannot be used for its training. Kozek [20] proposed a genetic algorithm (GA) for CNN learning. By minimizing the differences between the settled output and the desired output, this input–output approach guarantees that the settled output finally approaches the desired output from the initial state. For each nodule used for training, the image data in a neighborhood are extracted. The choice of the neighborhood size is a tradeoff between the learning load and the ability to reduce potential FPs; one extreme choice is the whole lung, but this is not practical due to the heavy computational load. In our scheme, a 31 × 31 × 31 neighborhood (edge about 20 mm) is chosen. Accurate segmentation of nodules requires considerable domain knowledge, so a simple method of creating the desired output image (DOI) is used instead: an ellipsoid with value 1 inside and value −1 outside is generated for each nodule, with its center and size adapted to overlap the nodule as much as possible. Genetic algorithms [21] are stochastic optimization techniques that simulate the mechanisms of natural selection and genetics. A GA uses binary strings (chromosomes) to encode points in parameter space; chromosomes are evaluated by a predefined fitness function that quantifies the performance of each possible solution, with higher fitness corresponding to a better solution. Searching from a population of chromosomes, the GA combines information from good chromosomes to approach the optimal solution. Instead of the most commonly used Euclidean-distance cost function [20], a more complex metric [22] is used. This method measures the sensitivity and specificity of a classification scheme by a pair of parameters, $\rho_1$ and $\rho_2$. For simplicity, we denote the desired nodule area (the area with value 1) of the $j$-th training image as $D_j$, the recognized nodule area as $R_j$, and their intersection as $D_j \cap R_j$. Then $\rho_{1j} = (D_j \cap R_j)/D_j$ and $\rho_{2j} = (D_j \cap R_j)/R_j$. Another consideration is the convergence speed of the DTCNN, so we penalize oscillating solutions. For the $j$-th training image, an oscillation index is defined as

$$O_j = \frac{1}{2k}\sum_{i=1}^{k}\bigl|y_i(N-1) - y_i(N)\bigr|, \tag{9}$$

where $y_i$ denotes the output of cell $i$ and $k$ is the number of cells. It measures the difference between the settled output and the output at the previous step, and its value falls in the interval [0, 1]. The fitness functions of a chromosome $p$ for a single training image $j$ and for all $m$ training images are defined as

$$f_j = \rho_{1j}\,\rho_{2j}\,(1 - O_j), \qquad f(p) = \frac{1}{m}\sum_{j=1}^{m} f_j, \tag{10}$$

i.e., the ultimate fitness function is the average of the fitness values over all training images.
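A sketch of the fitness evaluation of Eqs. (9) and (10) for a single training image is given below, assuming the recognized nodule area is obtained by thresholding the settled output at 0; the names and the thresholding choice are illustrative, not the authors' code.

```python
import numpy as np

def image_fitness(recognized, desired, settled, previous):
    """Fitness of one training image, Eqs. (9)-(10).

    `recognized` and `desired` are boolean nodule masks (e.g. the settled
    output thresholded at 0, and the ellipsoidal desired-output image);
    `settled` and `previous` are the DTCNN outputs Y(N) and Y(N-1).
    """
    inter = np.logical_and(recognized, desired).sum()
    rho1 = inter / max(desired.sum(), 1)            # (D ∩ R) / D
    rho2 = inter / max(recognized.sum(), 1)         # (D ∩ R) / R
    osc = np.abs(previous - settled).mean() / 2.0   # oscillation index O_j
    return rho1 * rho2 * (1.0 - osc)

# The fitness of a chromosome is the mean of image_fitness() over all
# training images, Eq. (10); the GA of [21] maximizes this value.
```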
Fig. 4. ROC curves of juxtapleural nodule detection. (a) ROC curve of the LDA (true positive fraction vs. false positive fraction); (b) ROC curve of the complete juxtapleural nodule detection system, including both candidate creation and classification (true positive fraction vs. number of false positives per case).
5 Experimental Results
The database consisted of 19 CT scans from a lung cancer screening trial, selected according to radiology reports indicating the presence of lung nodules. The slice size is 512 × 512 (x, y) pixels; the in-plane resolution, identical in the x and y directions, ranges from 0.54 mm to 0.93 mm, and the axial (z) reconstruction interval varies from 0.70 mm to 1.30 mm. The 19 cases were divided into two groups: one group of 2 scans was used for training the DTCNN, and the remaining 17 cases were used for evaluation. The 17 scans were visually read by a pulmonologist, who was asked to find all nodules and give their locations. The detection of juxtapleural nodules consisted of candidate creation and classification. In the candidate creation step, 29 of the 31 juxtapleural nodules in the 17 scans were included, a sensitivity of 93.6% with 145 FPs/case. For the classification step, LDA, a feedforward neural network, and a linear support vector machine with recursive feature elimination (SVM-RFE) were tried; LDA gave the best results, probably because of the curse of dimensionality, since the number of samples is small compared to the number of features. The ROC curve of the LDA using the leave-one-case-out scheme is shown in Fig. 4 (a); the area under the ROC curve is 0.9735. An operating point on this curve with a sensitivity of 89.7% (26 of 29) and a specificity of 95.4% resulted in an overall sensitivity of 83.9% (26 of 31) with an average of 6.7 FPs per case (Fig. 4 (b)). In the detection of non-pleural nodules, 5 nodules from the two training CT scans formed the training data according to the procedure in Section 4.3. In total, 111 possible nodule positions were reported by the CAD in the 17 testing cases; these included 20 of the 30 nodules found by the pulmonologist, a sensitivity of 66.7%. An important observation is the presence of nodules missed by the human reader but identified by the CAD. The pulmonologist who annotated nodules in the initial session visually reviewed the CAD results to confirm nodules identified by the CAD system. Pleural-based focal opacities were confirmed by the human reader as nodule or nodule-like focal opacities if identified by the CAD system as actionable and therefore in need of a subsequent visual reading.
Table 1. The number of correctly detected lesions in terms of location and detection method (Nodule (Nodule-like)).

Location/reader | Human | CAD     | Human−CAD | CAD−Human | CAD+Human
Juxtapleural    | 31    | 37 (50) | 5         | 11 (24)   | 42 (55)
Non-pleural     | 30    | 52 (60) | 10        | 32 (40)   | 62 (70)
Linear pleural opacities that were part of fissures were not counted as significantly identifiable in the visual scanning, and apical opacities were not counted as significantly findable or actionable lesions. In the second review, 11 juxtapleural nodules undetected in the first visual reading were identified as true nodules, and 13 areas were indicated as nodule-like pleural focal opacities. For the non-pleural nodules, 32 nodules undetected in the first reading were identified as true nodules in the second review after being pointed out by the CAD system, and 8 additional positions were identified as very suspicious areas. The detection results of the human reader and the CAD are shown in Table 1. The CAD system detected many nodules originally missed by human visual reading, demonstrating that the proposed CAD system performed effectively as a radiologist's assistant for detecting small nodules in pulmonary CT images. These facts motivated an alternative evaluation method: the union of the nodules detected in the first visual reading and the nodules confirmed in the second visual review was considered the truth. Accordingly, for juxtapleural nodule detection (at the previously chosen operating point), 37 of 42 true nodules were detected by the CAD, a sensitivity of 88.1% with an average of 6.1 FPs/case; furthermore, if the nodule-like focal opacities are also considered TPs, the sensitivity of our CAD increases to 90.9% (50 of 55) with 5.3 FPs/case. For non-pleural nodule detection, the total number of true nodules in these 17 cases was 62. The human reader detected 30 of these 62 nodules, a sensitivity of 48.4%, whereas our CAD system located 52, a sensitivity of 83.9%. The total number of FPs by the CAD in these 17 cases was 59, corresponding to 3.5 FPs/case; if the 8 very suspicious areas are considered TPs, the sensitivity increases to 85.7% and the total number of FPs drops to 51, equivalent to 3 FPs/case. Considering juxtapleural and non-pleural nodules together, an overall sensitivity of 85.6% (89 of 104) with 9.5 FPs/case was attained; if the possible lesions are treated as TPs rather than FPs, the sensitivity increases to 88% (110 of 125) with 8.3 FPs/case. Table 2 compares the performance of our scheme with several other methods for detecting pulmonary nodules in CT images. Although no strict conclusion on superiority can be drawn because the test images differ, the comparison is of interest: our method attained a much higher sensitivity and a much lower FP rate than the multiple-thresholding method [2] and the template-matching scheme [5].
Table 2. Performance comparisons between several nodule detection CAD systems.

Methods                   | Nodule size/number | Sensitivity | FP
Multiple thresholding [2] | 3−28 mm / 187      | 70%         | 3/slice
Template matching [5]     | 5−30 mm / 98       | 72%         | 30/case
Prior model [6]           | 5−30 mm / 36       | 86%         | 11/case
This work                 | 2−15 mm / 104      | 85.6%       | 9.5/case
The prior-model method [6] gave a similar sensitivity and FP rate, but those results were obtained on a much smaller dataset. In addition, since the goal of the work presented here is to detect nodules as early as possible, the CT scans used were collected in a lung cancer screening trial of asymptomatic subjects, and the nodules in these cases are all small-size nodules (most of them between 2 mm and 10 mm). Detecting small-size nodules is more difficult and typically leads to a lower sensitivity and a higher FP rate.
6 Conclusion
A new CAD system was proposed to locate small nodules in high-resolution helical CT scans. Morphological closing, thresholding and 3D component analysis were used to obtain juxtapleural nodule candidates; gray-level and geometric features were then analyzed using an LDA classifier, which was evaluated with a leave-one-case-out method. This juxtapleural nodule detection method achieved a sensitivity of 88.1% with an average of 6.1 FPs/case. To locate non-pleural nodules, a DTCNN-based scheme was developed that employs the local shape feature to perform voxel classification; the DTCNN was trained using a genetic algorithm (GA). The non-pleural nodule detection scheme attained a sensitivity of 83.9% with an average of 3.5 FPs/case. Evaluating the two subsystems together, an overall performance of 85.6% sensitivity with 9.5 FPs/case (0.04 FPs/image) was attained. Furthermore, the CAD system located many nodules missed by the human reading, showing that the proposed CAD system is an effective assistant for human experts in detecting small nodules and provides a valuable “second opinion” to the human observer.
References

1. Giger, M.L., Bae, K.T., MacMahon, H.: Computerized detection of pulmonary nodules in computed tomography images. Invest. Radiol. 29 (1994) 459–465
2. Armato, S.G., Giger, M.L., Moran, C.J., Blackburn, J.T., Doi, K., MacMahon, H.: Computerized detection of pulmonary nodules on CT scans. Radiographics 19 (1999) 1303–1311
3. Kanazawa, K., Kawata, Y., Niki, N., Satoh, H., Ohmatsu, H., Kakinuma, R., Kaneko, M., Moriyama, N., Eguchi, K.: Computer-aided diagnostic system for pulmonary nodules based on helical CT images. In Doi, K., MacMahon, H., Giger, M.L., Hoffmann, K., eds.: Computer-Aided Diagnosis in Medical Imaging. Elsevier, Amsterdam, The Netherlands (1999) 131–136
4. Penedo, M.G., Carreira, M.J., Mosquera, A., Cabello, D.: Computer-aided diagnosis: a neural-network-based approach to lung nodule detection. IEEE Transactions on Medical Imaging 17 (1998) 872–880
5. Lee, Y., Hara, T., Fujita, H., Itoh, S., Ishigaki, T.: Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique. IEEE Transactions on Medical Imaging 20 (2001) 595–604
6. Brown, M.S., McNitt-Gray, M.F., Goldin, J.G., Suh, R.D., Sayre, J.W., Aberle, D.R.: Patient-specific models for lung nodule detection and surveillance in CT images. IEEE Transactions on Medical Imaging 20 (2001) 1242–1250
7. Hu, S., Hoffman, E.A., Reinhardt, J.M.: Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images. IEEE Transactions on Medical Imaging 20 (2001) 490–498
8. Ridler, T.W., Calvard, S.: Picture thresholding using an iterative selection method. IEEE Transactions on Systems, Man and Cybernetics 8 (1978) 630–632
9. Besl, P.J., Jain, R.C.: Segmentation through variable-order surface fitting. IEEE Trans. Patt. Anal. Machine Intell. 10 (1988) 167–192
10. Koenderink, J.J.: Solid Shape. The MIT Press, London (1990)
11. Koenderink, J.J., van Doorn, A.J.: Surface shape and curvature scales. Image and Vision Computing 10 (1992) 557–565
12. Yoshida, H., Nappi, J.: Three-dimensional computer-aided diagnosis scheme for detection of colonic polyps. IEEE Transactions on Medical Imaging 20 (2001) 1261–1274
13. Monga, O., Benayoun, S.: Using partial derivatives of 3D images to extract typical surface features. Computer Vision and Image Understanding 61 (1995) 171–189
14. Thirion, J.P., Gourdon, A.: Computing the differential characteristics of isointensity surfaces. Computer Vision and Image Understanding 61 (1995) 190–202
15. Turkiyyah, G., Storti, D., Ganter, M., Chen, H., Vimawala, M.: An accelerated triangulation method for computing the skeletons of free-form solid models. Computer-Aided Design 29 (1997) 5–19
16. Harrer, H., Nossek, J.A., Stelzl, R.: An analog implementation of discrete-time cellular neural networks. IEEE Transactions on Neural Networks 3 (1992) 466–476
17. Harrer, H., Nossek, J.A.: Discrete-time cellular neural networks. Int. J. Circuit Theory and Applicat. 20 (1992) 453–467
18. Chua, L.O., Yang, L.: Cellular neural networks: theory. IEEE Transactions on Circuits and Systems 35 (1988) 1257–1272
19. Chua, L.O., Yang, L.: Cellular neural networks: applications. IEEE Transactions on Circuits and Systems 35 (1988) 1273–1290
20. Kozek, T., Roska, T., Chua, L.O.: Genetic algorithm for CNN template learning. IEEE Transactions on Circuits and Systems 40 (1993) 392–402
21. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA (1989)
22. Potocnik, B., Zazula, D.: Automated analysis of a sequence of ovarian ultrasound images. Part I: segmentation of single 2D images. Image and Vision Computing 20 (2002) 217–225