Particle Swarm Optimization Based Feature Selection in Mammogram Mass Classification

Man To Wong, Xiangjian He, Hung Nguyen

Wei-Chang Yeh

Faculty of Engineering & Information Technology University of Technology, Sydney Broadway, NSW 2007, Australia [email protected], [email protected], [email protected]

Dept. of Industrial Engineering & Engineering Management National Tsing Hua University Hsinchu 300, Taiwan [email protected]

Abstract—Mammography is currently the most effective method for early detection of breast cancer. This paper proposes an effective technique to classify regions of interest (ROIs) of digitized mammograms into mass and normal tissue regions by first finding the significant texture features of each ROI using binary particle swarm optimization (BPSO). The data set used consists of sixty-nine ROIs from the MIAS MiniMammographic database. Eighteen texture features were derived from the gray level co-occurrence matrix (GLCM) of each ROI. Significant features are found by a feature selection technique based on BPSO, and a decision tree classifier is then used to classify the test set using these significant features. Experimental results show that the significant texture features found by the BPSO-based feature selection technique can give better classification accuracy than the full set of features. The BPSO feature selection technique also has similar or better classification accuracy compared to other widely used existing techniques.

Keywords – mammography, mass classification, particle swarm optimization, feature selection

I. INTRODUCTION

Breast cancer is the leading cause of death of women in the U.S. [1]. Currently the most effective method for early detection of breast cancer is mammography [2], which is the only widely accepted imaging method used for routine breast cancer screening [3]. Masses and micro-calcifications are two important early signs of breast cancer [4]. Masses are more difficult to detect than micro-calcifications because their features can be obscured by, or be similar to, normal breast parenchyma [5]. Reading mammograms is a demanding job for radiologists. A computer aided detection (CAD) system can provide a consistent second opinion to a radiologist and greatly improve the detection accuracy.

In this paper, regions of interest (ROIs) are manually extracted from the MIAS MiniMammographic database. An ROI can contain a mass or normal tissue. A method is proposed for the classification of ROIs as mass or non-mass regions using texture features calculated from the gray level co-occurrence matrix (GLCM). However, a large number of features are available and not all of them are useful for classification.

As some features may be irrelevant or redundant, the classification accuracy may worsen when too many features are selected, especially when the number of training cases is small. It is therefore important to select a smaller subset of features to overcome the "curse of dimensionality": the test set classification accuracy decreases as the number of features increases, which happens when the ratio of the number of training cases to the number of available features is not sufficiently large [6,7].

Existing feature selection methods can be broadly divided into two categories: filter approaches and wrapper approaches [8]. In filter methods, the search process is independent of any learning algorithm, so they are usually computationally less expensive and more general than wrapper methods. In the wrapper approach, by contrast, the best feature subset is searched for by using a learning algorithm as part of the evaluation function. By considering the performance of the selected feature subset on a particular learning algorithm, the wrapper approach can usually achieve better results than the filter approach [8,9].

In feature selection, the size of the search space grows exponentially with the number of features, which makes exhaustive search impractical. Greedy algorithms such as sequential forward selection (SFS) [10] and sequential backward selection (SBS) [11] have been used for feature selection, but greedy approaches can become trapped in local optima. Hence an efficient global search technique can improve feature selection. Recently, evolutionary computation techniques such as genetic algorithms (GAs) and particle swarm optimization (PSO) have been applied to feature selection because of their global search ability [7,8]. Compared to GAs, PSO is easier to implement, computationally less expensive, and can converge quickly [12]. Many PSO-based feature selection techniques have been applied to machine learning datasets [8,13], and PSO-based feature selection has recently been used for classification of microcalcifications in mammograms [14]. However, the use of PSO-based feature selection in mammogram mass classification is rare.

The purpose of this paper is to use binary PSO (BPSO) to select the significant texture features from each ROI using the wrapper approach. BPSO is used to search for the feature subset, and a K-nearest neighbor (KNN) classifier is used to evaluate the feature subsets. A decision tree classifier is then trained on the training set, using only the significant features found by the BPSO-KNN feature selection method, and is used to classify the ROIs in the test set into mass or normal breast tissue. One objective is to show that a small number of significant GLCM-based texture features found by BPSO-KNN feature selection can have better or comparable classification accuracy when compared to the full set of features or to other existing mass classification methods. The second objective is to show that the BPSO-KNN based feature selection technique used in this paper is comparable to or better than other widely used feature selection methods.

II. FEATURE SELECTION USING BINARY PSO

PSO is a population based stochastic optimization technique modeled after the social behavior of bird flocks [15,16]. The algorithm maintains a population of particles, where each particle represents a potential solution to the optimization problem. Each particle is also assigned a randomized velocity, and the particles are then flown through the problem space [16,17,18]. The aim of PSO is to find the particle position that results in the best evaluation of a given fitness function.

Each particle keeps track of the following information in the problem space: x_i, its current position; v_i, its current velocity; and y_i, its personal best position, i.e., the best position it has achieved so far. The fitness value of this position, called pbest, is also stored.

There are two approaches to PSO, namely local best (lbest) and global best (gbest); the difference lies in the neighborhood topology used to exchange information among the particles. For the gbest model, the best particle is determined from the entire swarm. For the lbest model, the swarm is divided into overlapping neighborhoods of particles, and a best particle is determined for each neighborhood. The gbest PSO is thus a special case of lbest in which the neighborhood is the entire swarm. The global version of PSO also tracks the overall best value (gbest) obtained so far by any particle in the population, together with its location, denoted y_g. PSO changes the velocity of each particle at each time step so that it moves toward its personal best and global best locations. The algorithm for implementing the global version of PSO is as follows [19]:

1. Initialize a population of particles with random positions and velocities in a d-dimensional problem space.
2. For each particle, evaluate the desired optimization fitness function of d variables.
3. Compare each particle's fitness with its personal best value (pbest). If the current fitness value is better than pbest, set pbest equal to the current value and the pbest location equal to the current location in the d-dimensional space.
4. Compare each particle's fitness with the population's overall previous best. If the current value is better than the global best value (gbest), set gbest to the current particle's value and the global best position y_g to the current particle's position.
5. Change the velocity and position of the particle according to Equations (1) and (2), respectively:

   v_i(t+1) = w v_i(t) + c_1 r_1(t) [y_i(t) - x_i(t)] + c_2 r_2(t) [y_g(t) - x_i(t)]   (1)

   x_i(t+1) = x_i(t) + v_i(t+1)   (2)

   where w is the inertia weight, c_1 and c_2 are the acceleration constants, and r_1(t) and r_2(t) are random numbers generated in the range between 0 and 1. Velocity updates are also clamped to prevent them from exploding, thereby causing premature convergence.
6. Loop to Step 2 until a termination criterion is met. The criterion is usually a sufficiently good fitness or a maximum number of iterations; in this paper, a maximum number of iterations is used.

The original version of PSO operated in continuous space. However, a feature selection problem occurs in a space featuring discrete, qualitative distinctions between variables and between levels of variables [20]. To extend the PSO algorithm to such problems, the original authors of PSO developed a binary PSO (BPSO) [20]. In BPSO, the velocity represents the probability of the corresponding element of the position taking the value 1. Equation (1) is still used to update the velocity, while x_i, y_i and y_g are restricted to 1 or 0. A sigmoid function s(v_i) is used to transform v_i to the range (0,1), and BPSO updates each element d of a particle's position according to the following formulae:

   x_{i,d}(t+1) = 1 if rand() < s(v_{i,d}(t+1)), and x_{i,d}(t+1) = 0 otherwise   (3)

   s(v_{i,d}) = 1 / (1 + e^{-v_{i,d}})   (4)

where rand() is a random number selected from a uniform distribution in [0,1]. In this paper, BPSO is used to search for the feature subset in the training set. When x_{i,d} is 1, the feature corresponding to that bit position is selected; when x_{i,d} is 0, the feature is not selected. A K-nearest neighbor (KNN) classifier is used to evaluate each feature subset using leave-one-out (LOO) cross validation. The fitness function of BPSO is the KNN classification error rate on the training set under LOO.
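As a concrete illustration, the following is a minimal Python sketch of the BPSO-KNN wrapper just described. It is a sketch under stated assumptions, not the paper's implementation (which was written in C++ with OpenCV): scikit-learn's KNeighborsClassifier and LeaveOneOut stand in for the KNN fitness evaluation, and the inertia weight value w is an assumption, since the paper does not report it.

```python
# Sketch of BPSO-KNN wrapper feature selection (Equations (1), (3), (4)).
# scikit-learn stands in for the paper's C++/OpenCV code; w is assumed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def knn_loo_error(X, y, mask, k=5):
    """Fitness: KNN leave-one-out error rate on the selected feature subset."""
    if not mask.any():                          # empty subset: worst fitness
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=k)
    acc = cross_val_score(knn, X[:, mask], y, cv=LeaveOneOut()).mean()
    return 1.0 - acc

def bpso_select(X, y, n_particles=30, n_iter=100,
                w=0.7298, c1=1.49618, c2=1.49618, v_max=6.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, d)).astype(float)  # binary positions
    vel = rng.uniform(-v_max, v_max, (n_particles, d))
    pbest = pos.copy()
    pbest_fit = np.array([knn_loo_error(X, y, p.astype(bool)) for p in pos])
    g = pbest_fit.argmin()
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    for _ in range(n_iter):                     # termination: max iterations
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # Eq. (1)
        vel = np.clip(vel, -v_max, v_max)       # velocity clamping
        s = 1.0 / (1.0 + np.exp(-vel))          # sigmoid, Eq. (4)
        pos = (rng.random((n_particles, d)) < s).astype(float)  # Eq. (3)
        fit = np.array([knn_loo_error(X, y, p.astype(bool)) for p in pos])
        improved = fit < pbest_fit              # update personal bests
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        g = pbest_fit.argmin()                  # update global best (gbest model)
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest.astype(bool), gbest_fit        # feature mask and training error
```

The gbest (fully connected) topology used here matches the experimental setting described in Section IV.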

III. TEXTURE FEATURES

The GLCM is used to measure texture-context information. This information is specified by the matrix of relative frequencies P(i, j, d, θ) with which two neighboring pixels, separated by distance d along direction θ, occur in the image, one pixel with gray level i and the other with gray level j [5,21]. To simplify the computational complexity, θ is often limited to 0, 45, 90 and 135 degrees. When θ is 0, the element P(i, j, d, 0) can be expressed as [5]:

   P(i, j, d, 0) = #{((x_1, y_1), (x_2, y_2)) : |x_1 - x_2| = d, y_1 - y_2 = 0, I(x_1, y_1) = i, I(x_2, y_2) = j}   (5)

where I(x, y) is the intensity value of the pixel at position (x, y) and #S is the number of elements in the set S. After the number of neighboring pixel pairs R used in computing a particular GLCM is obtained, the matrix is normalized by dividing each entry by R, the normalizing constant [21].

In this paper, for each ROI, six texture features were derived from each GLCM: entropy, energy, homogeneity, contrast, maximum probability and correlation [21,22]. In the equations below, p(i, j) denotes the (i, j)th entry of a normalized GLCM, obtained by dividing each entry of the matrix P(i, j) by the normalizing constant R [21]. Also, Σ_{i,j} represents Σ_{i=1..n} Σ_{j=1..n}, where n is the number of gray levels per pixel in the image.

   Entropy = -Σ_{i,j} p(i, j) log p(i, j)   (6)

   Energy = Σ_{i,j} p(i, j)^2   (7)

   Homogeneity = Σ_{i,j} p(i, j) / (1 + |i - j|)   (8)

   Contrast = Σ_{i,j} (i - j)^2 p(i, j)   (9)

   Maximum probability = max_{i,j} p(i, j)   (10)

   Correlation = Σ_{i,j} (i - μ_x)(j - μ_y) p(i, j) / (σ_x σ_y)   (11)

where

   μ_x = Σ_i i Σ_j p(i, j)   (12)

   μ_y = Σ_j j Σ_i p(i, j)   (13)

   σ_x^2 = Σ_i (i - μ_x)^2 Σ_j p(i, j)   (14)

   σ_y^2 = Σ_j (j - μ_y)^2 Σ_i p(i, j)   (15)

In this paper, in finding the GLCM, d is set to 1 and four directions are used for θ: 0, 45, 90 and 135 degrees. Hence, for each feature, four values are generated, one for each matrix. The average, range and standard deviation of the four values of each feature are then calculated, where the range is defined as the difference between the maximum and minimum of the four values. A total of eighteen texture features are thus found for each ROI; these statistics are used instead of the individual feature values of each matrix. The BPSO-KNN based feature selection technique is then used to select the most significant features from these 18 features.
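To make the feature extraction concrete, here is a minimal NumPy sketch that computes the six GLCM features for the four directions and forms the 18 statistics per ROI. The gray-level quantization to 64 levels and the symmetric pair counting are assumptions, as the paper does not specify these implementation details; the function names are illustrative only.

```python
# Sketch: the 18 GLCM texture features of Section III for one ROI.
# The 64-level quantization is an assumption, not the paper's setting.
import numpy as np

OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}  # (dy, dx), d = 1

def glcm(img, levels, theta):
    """Normalized GLCM P(i, j, d=1, theta) of a quantized image (cf. Eq. (5))."""
    dy, dx = OFFSETS[theta]
    P = np.zeros((levels, levels))
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < H and 0 <= x2 < W:
                P[img[y, x], img[y2, x2]] += 1
    P = P + P.T                       # count both orders of each pixel pair
    return P / P.sum()                # divide by R, the normalizing constant

def haralick6(p):
    """Entropy, energy, homogeneity, contrast, max probability, correlation."""
    i, j = np.indices(p.shape)
    mu_x, mu_y = (i * p).sum(), (j * p).sum()           # Eqs. (12)-(13)
    sd_x = np.sqrt((((i - mu_x) ** 2) * p).sum())       # Eq. (14)
    sd_y = np.sqrt((((j - mu_y) ** 2) * p).sum())       # Eq. (15)
    nz = p[p > 0]
    return np.array([
        -(nz * np.log(nz)).sum(),                       # entropy, Eq. (6)
        (p ** 2).sum(),                                 # energy, Eq. (7)
        (p / (1.0 + np.abs(i - j))).sum(),              # homogeneity, Eq. (8)
        (((i - j) ** 2) * p).sum(),                     # contrast, Eq. (9)
        p.max(),                                        # max probability, Eq. (10)
        ((i - mu_x) * (j - mu_y) * p).sum() / (sd_x * sd_y),  # correlation, Eq. (11)
    ])

def roi_features(roi, levels=64):
    """The 18 features of one ROI: mean, range and std over four directions."""
    q = (roi.astype(float) * levels / (roi.max() + 1.0)).astype(int)  # quantize
    F = np.stack([haralick6(glcm(q, levels, t)) for t in (0, 45, 90, 135)])
    return np.concatenate([F.mean(0), F.max(0) - F.min(0), F.std(0)])
```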

IV. EXPERIMENTAL RESULT AND DISCUSSION

This section describes the mammogram database used, the test methodology, the test results, a comparison of the proposed method with other existing techniques, and a discussion.

A. MIAS MiniMammographic Database

The MIAS MiniMammographic Database is provided by the Mammographic Image Analysis Society, UK [23]. The mammograms are digitized at a 200 micron pixel edge and have a resolution of 1024 x 1024. The database provides ground truth for each abnormality in the form of circles, giving an approximation of the centre and radius of each abnormality. The types of abnormality in the database include calcification, masses, architectural distortion and asymmetry. Mammograms which do not contain any abnormality (classified as normal) are also provided. Three types of background tissue are represented: fatty, fatty-glandular and dense-glandular.

B. Test Methodology

Sixty-nine ROIs were manually extracted from the images in the MIAS database. Each ROI is a square window of size 160 x 160 pixels with the mass (if present) at the centre of the window. Thirty of the sixty-nine ROIs contain a mass and thirty-nine contain normal tissue only. For ROIs which contain a mass, the mass can be benign or malignant. ROIs which contain normal tissue only were randomly chosen inside the breast body.

Figure 1. Examples of ROIs with mass (left) and normal tissue only (right)

The sixty-nine ROIs are divided into three equal sets. Two sets are used as the training set and the remaining set as the test set; hence there are 46 ROIs in the training set (20 containing a mass, 26 containing normal tissue only) and 23 ROIs in the test set (10 containing a mass, 13 containing normal tissue only). Feature selection by BPSO-KNN is done using the training set only. Then only the significant features obtained from feature selection are used to train the classifier, again using the training set only. The trained classifier is then used to classify the test set, using the significant features only. The above process is repeated using another set as the test set and the other two sets as the training set. Three-fold cross validation is thus used to calculate the classification accuracy on the test set; every ROI appears in a test set exactly once, and the average classification accuracy over the three test sets is reported. A sketch of this protocol is given below.
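Assuming a feature matrix X (one 18-feature row per ROI, e.g. from the roi_features sketch above) and labels y, the protocol might look like the following. StratifiedKFold is used here to approximate the paper's equal-proportion split, and scikit-learn's DecisionTreeClassifier stands in for the paper's CART implementation.

```python
# Sketch of the 3-fold test protocol: feature selection and classifier
# training use the training folds only. bpso_select is the earlier sketch.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

def three_fold_accuracy(X, y, seed=0):
    accs = []
    skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)
    for train, test in skf.split(X, y):
        mask, _ = bpso_select(X[train], y[train])         # BPSO-KNN on training set only
        tree = DecisionTreeClassifier(random_state=seed)  # CART-style decision tree
        tree.fit(X[train][:, mask], y[train])             # train on significant features
        accs.append((tree.predict(X[test][:, mask]) == y[test]).mean())
    return float(np.mean(accs))                           # average over the three test sets
```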


In BPSO-KNN based feature selection, KNN is used to evaluate the feature subset in the training set. The classification accuracy of the feature subset on the training set is evaluated using KNN and leave-one-out (LOO) cross validation. Once the significant features have been found by the BPSO-KNN technique, only the significant features are used in the training set to train the classifier. The classifier to be trained is the

Classification and Regression Tree (CART) [24]. The trained CART is then used to classify the test set. KNN is not used to classify the test set after feature selection because it was found experimentally in this paper that CART gives better classification accuracy on the test set than KNN. Note that 3-fold cross validation is used to calculate the classification accuracy of the decision tree on the test set, while LOO cross validation is used to evaluate the feature subsets found by BPSO-KNN in the training set. The BPSO-KNN feature selection method and the CART are implemented in C++ using the OpenCV software library [31]. The BPSO based feature selection method is compared with three other feature selection methods, all available in the WEKA machine learning workbench [25]. These three methods are briefly described below [25,26]:

• Best first search + CfsSubsetEval: this is the default feature selection method of WEKA. The best first search technique searches the space of attribute subsets by greedy hill-climbing augmented with a backtracking facility. The CfsSubsetEval technique [27] evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between the features.

• Genetic search + K-nearest neighbor classifier: the genetic search performs a search using the genetic algorithm described in Goldberg (1989) [28]. The wrapper subset evaluation technique used is K-nearest neighbor with K set to 5. All other parameters use the default WEKA settings.

• Greedy stepwise search + K-nearest neighbor: the greedy stepwise search performs a greedy forward search through the space of feature subsets. The wrapper subset evaluation technique used is K-nearest neighbor with K set to 5. All other parameters use the default WEKA settings.

K was set to 5 in the WEKA tests because K is also 5 in the BPSO-KNN feature selection technique; this makes the comparison of the three different search methods (PSO, genetic algorithm and greedy stepwise search) fairer. Note that for best first search, the CfsSubsetEval method was used rather than K-nearest neighbor because best first search with CfsSubsetEval is WEKA's default feature selection method.

C. Experimental Result and Discussion

For all the experiments, the following parameters are used for BPSO-KNN based feature selection:

Number of particles = 30
Number of iterations for termination = 100
Acceleration constants c1 and c2 = 1.49618
Maximum velocity vm = 6

These settings are based on those used in previous research [8,29], and the fully connected (gbest) topology is used in BPSO. Applied to the earlier sketch, they correspond to the call below.
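In the sketch below, X_train and y_train are placeholders for the training fold, and the inertia weight is left at the sketch's assumed default, as the paper does not report it.

```python
# The stated experimental settings applied to the bpso_select sketch above.
mask, train_error = bpso_select(X_train, y_train,
                                n_particles=30,         # swarm size
                                n_iter=100,             # stop after 100 iterations
                                c1=1.49618, c2=1.49618, # acceleration constants
                                v_max=6.0)              # velocity clamp
```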

In TABLE I, classification accuracy is defined as the number of correctly classified samples (as mass or non-mass) in the test set divided by the number of samples in the test set. Three-fold cross validation was used in testing and the average accuracy over the three test sets was recorded. The average number of features in the table is the average number of significant features found over the three different training set partitions of the 3-fold cross-validation. The significant features are found in the training set using the specified feature selection method; only these features are then used to train the CART classifier on the training set, and the trained CART classifies the test set using the significant features only. Each feature selection method in TABLE I is run five times on the training set and the best result is used. For BPSO-KNN based feature selection, KNN with leave-one-out cross validation determines the classification accuracy of each feature subset, and the run with the best classification accuracy is used; if two runs have the same classification accuracy, the run with fewer features is chosen and its feature subset recorded. This selection rule is sketched below.
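```python
# Sketch of the run-selection rule: five runs of feature selection, keep the
# run with the lowest training error, breaking ties by smaller subset size.
runs = [bpso_select(X_train, y_train, seed=s) for s in range(5)]
best_mask, best_error = min(runs, key=lambda r: (r[1], r[0].sum()))
```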

TABLE I. COMPARISON OF VARIOUS FEATURE SELECTION METHODS USING CART TREE AS CLASSIFIER ON TEST SET

Feature selection method            | Average number of features | Average classification accuracy (%) in test set
All features: no feature selection  | 18                         | 79.71
BPSO + KNN                          | 6.3                        | 81.16
Best-first search + CfsSubsetEval   | 4.0                        | 81.16
Greedy stepwise search + KNN        | 2.0                        | 79.71
Genetic algorithm + KNN             | 6.3                        | 81.16

From TABLE I, the BPSO-KNN feature selection method has better classification accuracy than using all 18 features without feature selection: BPSO reduces the number of features from 18 to 6.3 on average while improving the classification accuracy from 79.71% to 81.16%. The BPSO method is also better than the greedy stepwise search method in classification accuracy, which is reasonable as the greedy stepwise search may be trapped in local optima. Compared to the best first search method (with CfsSubsetEval) and the genetic search, the classification accuracy is the same. The best first search method does, however, find a smaller average number of features than the BPSO-KNN method. This is because the fitness function used by the BPSO-KNN feature selection minimizes only the KNN error rate of the feature subset; no attempt is made to also reduce the number of features, so there may be redundancy among the selected features. One commonly used fitness function includes both the KNN classification error rate and the number of features in the subset [8]; a sketch of such a fitness function is given below. However, in some tests in this paper it was found that this may give a very small number of features but unsatisfactory classification accuracy. One possible solution is to use the KNN classification error rate as the only objective in the fitness function, so as to get the best classification accuracy of the feature subset, and then remove any redundant features from the resulting subset by a method such as sequential backward search. This idea has not been tested in this paper but will be tried in the future.
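A minimal sketch of the combined fitness just mentioned (cf. [8]); the weighting alpha between error rate and subset size is an assumed value, not one from the paper.

```python
# Combined fitness: weighted sum of KNN error rate and relative subset size.
# knn_loo_error is the earlier sketch; alpha = 0.9 is an assumed weighting.
def combined_fitness(X, y, mask, alpha=0.9):
    n_selected = mask.sum()
    return alpha * knn_loo_error(X, y, mask) + (1 - alpha) * n_selected / mask.size
```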

Table II compares the mass classification accuracy of the proposed method in this paper with the methods used by Christoyianni et al. [30].

TABLE II. COMPARISON OF VARIOUS MASS CLASSIFICATION METHODS USING PROPOSED METHOD IN THIS PAPER AND METHODS IN REFERENCE [30]

Classifier method used   | Features used                                                            | Classification accuracy (%) in test set
CART                     | Significant texture features from GLCM using BPSO-KNN feature selection | 81.16
RBF neural network [30]  | Texture features from GLCM                                               | 84.87
MLP neural network [30]  | Features from gray level histogram moments                               | 82.35

The average classification accuracy of the proposed mass classification method, using significant texture features selected by BPSO-KNN based feature selection and the CART classifier, is 81.16%. Christoyianni et al. [30] used radial basis function (RBF) networks and multi-layer perceptron (MLP) neural networks to classify ROIs from the MIAS database; the features used were texture features from the GLCM and gray level histogram moments. Using texture features from the GLCM, their best overall classification accuracy was 84.87%, obtained with an RBF neural network when θ of the GLCM was 45 degrees. Christoyianni et al. used the features from one matrix at a time, repeating the experiments for matrices at different angles; the feature values from each matrix were used directly instead of statistics of the feature values over several matrices. For gray level histogram moments, their best classification accuracy was 82.35%, using an MLP neural network with two hidden layers. The accuracy of the proposed method in this paper is therefore comparable to the accuracy reported in [30]. It should be noted that Christoyianni et al. tried a large number of neural network topologies before obtaining their best solution, and repeated the experiments for different angles of the GLCM.

V. CONCLUSIONS

The experimental results show that the BPSO-KNN feature selection method used in this paper can achieve comparable or better results than other widely used feature selection methods when applied to mammogram mass classification. Using texture features from the GLCM alone, a small number of significant features found by BPSO-KNN can give better classification accuracy than the full set of features in mass classification. Using the significant texture features found by the BPSO-KNN feature selection method and the CART classifier, the classification accuracy on the test set is comparable to that of other existing mass classification approaches. This illustrates the feasibility of using GLCM-based texture features, BPSO-KNN feature selection and a decision tree classifier in mammogram mass classification.

REFERENCES

[1] A. S. Constantinidis, M. C. Fairhurst, F. Deravi, M. Hanson, C. P. Wells, C. Chapman-Jones, "Evaluating classification strategies for detection of circumscribed masses in digital mammograms," Proceedings of the 7th Int'l Conference on Image Processing and Its Applications, pp. 435-439, 1999.
[2] K. Bovis, S. Singh, J. Fieldsend, C. Pinder, "Identification of masses in digital mammograms with MLP and RBF nets," Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, pp. 342-347, 2000.
[3] J. Tang, R. M. Rangayyan, J. Xu, I. E. Naqa, Y. Yang, "Computer-aided detection and diagnosis of breast cancer with mammography: recent advances," IEEE Trans. on Information Technology in Biomedicine, 13(2), pp. 236-251, 2009.
[4] H. D. Cheng, X. P. Cai, X. W. Chen, X. L. Hu, X. L. Lou, "Computer-aided detection and classification of microcalcifications in mammograms: a survey," Pattern Recognition, 36, pp. 2967-2991, 2003.
[5] H. D. Cheng, X. J. Shi, R. Min, L. M. Hu, X. P. Cai, H. N. Du, "Approaches for automated detection and classification of masses in mammograms," Pattern Recognition, 39, pp. 646-668, 2006.
[6] R. O. Duda, P. E. Hart, "Pattern Classification and Scene Analysis," Wiley, New York, 1973.
[7] B. Sahiner, H. P. Chan, D. Wei, N. Petrick, M. A. Helvie, D. D. Adler, M. M. Goodsitt, "Image feature selection by a genetic algorithm: application to classification of mass and normal breast tissue," Medical Physics, 23(10), pp. 1671-1684, Oct. 1996.
[8] B. Xue, M. Zhang, W. N. Browne, "New fitness functions in binary particle swarm optimization for feature selection," IEEE World Congress on Computational Intelligence (WCCI 2012), Brisbane, Australia, 2012.
[9] R. Kohavi, G. H. John, "Wrappers for feature subset selection," Artificial Intelligence, vol. 97, pp. 273-324, 1997.
[10] A. W. Whitney, "A direct method of nonparametric measurement selection," IEEE Trans. on Computers, C-20(9), pp. 1100-1103, 1971.
[11] T. Marill, D. Green, "On the effectiveness of receptors in recognition systems," IEEE Transactions on Information Theory, vol. 9, no. 1, pp. 11-17, 1963.
[12] J. Kennedy, W. Spears, "Matching algorithms to problems: an experimental test of the particle swarm and some genetic algorithms on the multimodal problem generator," IEEE Congress on Evolutionary Computation (CEC 98), pp. 78-83, 1998.
[13] L. Cervante, B. Xue, M. Zhang, L. Shang, "Binary particle swarm optimization for feature selection: a filter based approach," IEEE World Congress on Computational Intelligence (WCCI 2012), Brisbane, Australia, 2012.
[14] I. Zyout, I. Abdel-Qader, "Classification of microcalcification clusters via PSO-KNN heuristic parameter selection and GLCM features," International Journal of Computer Applications, vol. 31, no. 2, October 2011.
[15] J. Kennedy, R. Eberhart, "Particle swarm optimization," Proceedings of the IEEE International Joint Conference on Neural Networks, Perth, Australia, vol. 4, pp. 1942-1948, 1995.
[16] R. Eberhart, J. Kennedy, "A new optimizer using particle swarm theory," Proceedings of the 6th International Symposium on Micro Machine and Human Science, 1995.
[17] Y. Shi, R. Eberhart, "A modified particle swarm optimizer," Proceedings of the IEEE International Conference on Evolutionary Computation, World Congress on Computational Intelligence, Anchorage, Alaska, 1998.
[18] Y. Shi, R. Eberhart, "Empirical study of particle swarm optimization," Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), Piscataway, NJ: IEEE Service Center, pp. 1945-1950, 1999.
[19] R. Eberhart, Y. Shi, "Particle swarm optimization: developments, applications and resources," Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), IEEE Press, pp. 81-86, 2001.
[20] J. Kennedy, R. Eberhart, "A discrete binary version of the particle swarm algorithm," IEEE International Conference on Systems, Man, and Cybernetics, 1997.
[21] R. M. Haralick, K. Shanmugam, I. Dinstein, "Textural features for image classification," IEEE Trans. on Systems, Man and Cybernetics, SMC-3, no. 6, pp. 610-621, 1973.
[22] A. Petrosian, H. P. Chan, M. A. Helvie, M. M. Goodsitt, D. D. Adler, "Computer-aided diagnosis in mammography: classification of mass and normal tissue by texture analysis," Physics in Medicine and Biology, 39(12), pp. 2273-2288, 1994.
[23] J. Suckling, J. Parker, D. Dance, S. Astley, I. Hutt, C. Boggis, I. Ricketts, E. Stamatakis, N. Cerneaz, S. Kok, P. Taylor, D. Betal, J. Savage, "The Mammographic Image Analysis Society digital mammogram database," Proceedings of the 2nd International Workshop on Digital Mammography, vol. 1069, pp. 375-378, 1994.
[24] L. Breiman, J. Friedman, R. Olshen, C. Stone, "Classification and Regression Trees," Wadsworth, 1984.
[25] I. H. Witten, E. Frank, "Data Mining: Practical Machine Learning Tools and Techniques," second edition, Morgan Kaufmann, 2005.
[26] WEKA homepage: http://www.cs.waikato.ac.nz/~ml/weka
[27] M. A. Hall, "Correlation-based feature selection for machine learning," Ph.D. thesis, Department of Computer Science, The University of Waikato, Hamilton, New Zealand, April 1999.
[28] D. E. Goldberg, "Genetic Algorithms in Search, Optimization and Machine Learning," Addison-Wesley, 1989.
[29] F. van den Bergh, "An analysis of particle swarm optimizers," Ph.D. dissertation, University of Pretoria, South Africa, 2002.
[30] I. Christoyianni, E. Dermatas, G. Kokkinakis, "Neural classification of abnormal tissue in digital mammography using statistical features of the texture," Proceedings of the 6th IEEE Int'l Conference on Electronics, Circuits and Systems, vol. 1, pp. 117-120, 1999.
[31] G. Bradski, A. Kaehler, "Learning OpenCV," first edition, O'Reilly, Sept. 2008.
