A Bacterial Swarming Algorithm for Global Optimization

W. J. Tang, Q. H. Wu, Senior Member, IEEE, and J. R. Saunders

Abstract— This paper presents a novel Bacterial Swarming Algorithm (BSA) for global optimization. The algorithm is inspired by the swarming behavior of bacteria, focusing in particular on the tumble and run actions which form the major part of the chemotactic process. Adaptive tumble and run operators are developed, based on the existing Bacterial Foraging Algorithm (BFA), to improve the global and local search capabilities of the BSA. A simplified quorum-sensing mechanism is also incorporated to enhance the performance of the algorithm. BSA has been evaluated on a number of mathematical benchmark functions, in comparison with existing Evolutionary Algorithms (EAs) such as Fast Evolutionary Programming (FEP) and the Particle Swarm Optimizer (PSO). The simulation studies show that BSA provides superior performance to FEP and PSO in optimizing these benchmark functions, particularly in terms of convergence rate and robustness.
I. INTRODUCTION

Over the past few decades, many biologically inspired computational methodologies have been invented, such as Evolutionary Algorithms (EAs) [1], which include Genetic Algorithms (GAs) [2], Evolutionary Programming (EP) [3] and Evolution Strategies (ES) [4]. EAs have been intensively studied and widely applied to various scientific and engineering problems, such as robot design [5], protein folding [6] and power system economic dispatch [7] [8]. They have enjoyed, and will continue to enjoy, success in applications owing to their simplicity, robustness and flexibility. However, EAs may be time-consuming, as they search along randomly selected directions; consequently they can have a slow convergence rate and are used only reluctantly on many large optimization problems. Although hybrid EAs have been introduced [9], these hybrids require the objective function to be differentiable. Recently, two Swarm Intelligence paradigms have been developed to tackle optimization problems for which robust solutions are difficult or impossible to find using traditional approaches. One is Ant Colony Optimization (ACO) [10], inspired by ant routing behavior; the other is the Particle Swarm Optimizer (PSO) [11], which simulates animal swarm behavior. However, although EAs and PSO have been comprehensively studied, these methods face difficulties when applied to large-scale, high-dimensional optimization problems, primarily because of the huge computational burden they impose.

W. J. Tang and Q. H. Wu are with the Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, L69 3GJ, U.K. J. R. Saunders is with the School of Biological Sciences, The University of Liverpool, Liverpool, L69 3BX, U.K. Corresponding author: Q. H. Wu, Tel: +44 151 7944535; Fax: +44 151 7944540; Email: [email protected]

A key advance in this field will therefore be met by a significant
reduction in computational time-costs whilst further improving the efficiency of the global search capabilities of these algorithms. More recently, the study of the Bacterial Foraging Algorithm (BFA) [12] has received great attention in the Computational Intelligence community worldwide. BFA is based on the study of bacterial foraging behaviors: the complex yet organized activities exhibited in bacterial foraging patterns can inspire new solutions to optimization problems. The underlying mechanisms by which bacteria, especially E. coli, survive in complex environments have been reported by researchers in the biological sciences [13]. Inspired by these phenomena, BFA was developed as an optimization algorithm, in which the self-adaptability of individuals in group searching activities has attracted a great deal of interest. However, in BFA the chemotactic process is artificially set; hence this process cannot obtain convincing results on a certain range of optimization problems, especially high-dimensional and multi-modal problems. In this paper, we propose a novel optimization algorithm, BSA. BSA is based on BFA but incorporates ideas from the modeling of bacterial foraging patterns studied recently [14]. The tumble and run actions of bacteria, and the modeling of these actions, have been further incorporated to develop BSA. To improve the performance of BSA on complex optimization problems, we have also introduced an adaptive step length for run actions, which is independent of the optimization problem but speeds up the convergence process of BSA. To demonstrate the merits of BSA, we have evaluated it on a number of mathematical benchmark functions which cover a range of optimization problems, from uni-modal to multi-modal and from low to high dimensions. The algorithm evaluation has been undertaken in comparison with FEP and PSO.
The simulation results are provided to support our discussion of this new optimization algorithm. This paper is organized as follows: Section II introduces the mathematical framework of bacterial foraging patterns and the chemotactic process, as well as the BSA algorithm and its pseudo code. The simulation results of evaluating BSA on a number of benchmark functions are presented in Section III, together with a discussion of the results. The conclusion is drawn in Section IV.

II. MATHEMATICAL FRAMEWORK AND ALGORITHM

A. The biological phenomenon

1) Bacterial foraging: The bacterial foraging algorithm is inspired by the pattern exhibited by bacterial foraging behaviors. Bacteria have a tendency to gather in nutrient-rich areas through an activity called "chemotaxis". It is known that bacteria
1-4244-1340-0/07/$25.00 © 2007 IEEE
swim by rotating whip-like flagella driven by a reversible motor embedded in the cell wall. E. coli has 8 ∼ 10 flagella placed randomly on the cell body. When all flagella rotate counterclockwise, they form a compact helical bundle that propels the cell along a helical trajectory, which is called a Run. When the flagella rotate clockwise, they all pull on the bacterium in different directions, which causes the bacterium to Tumble. Since the motility of E. coli cells in clusters arises from chemotactic aggregation, the population of E. coli in clusters varies according to the available nutrient. Given good growing conditions, a bacterium grows slightly in size or length, and a new cell wall grows through the center, forming two daughter cells, each with the same genetic material as the parent cell. If the environment is favorable, the two daughter cells may divide into four, typically within 20 minutes. 2) The role of cell-cell communication: Cell-cell communication is concerned with the abilities of exploration and exploitation in an optimization algorithm. Communication between bacteria, by the mechanism referred to as "quorum sensing", is widespread in nature. This phenomenon is related to the density of a colony: the higher the density, the more attraction a colony generates on the individuals outside it, which influences bacterial foraging patterns and the development of colonies during the whole evolutionary process. This bacterial phenomenon is similar to that emulated in PSO, which adopts the attraction of the particles as the major, or determining, factor of the algorithm. Each particle in PSO updates its position at each iteration according to its historical best position and the global best position in that iteration.
The direction from the current position towards the global best position can be deemed the communication between particles, while the direction towards the historical best position, which is kept in memory, can be regarded as a guided random walk. The inertia weight used in PSO can effectively prevent the particles from converging prematurely. In BSA, the position of the global best is taken to be the center of a colony, and a simplified cell-cell communication, similar to that of PSO, is adopted. Apart from the attraction, we also introduce dispersion events into BSA: cells are dispersed to the area near the best-performing ones according to a preset probability. As a result, the combination of attraction and dispersion events ensures the diversity and the search ability of the algorithm.

B. The mathematical framework

1) Mathematical representation: Bacterial chemotaxis is based on the suppression of tumbles in cells that happen, by chance, to be moving up along a gradient direction. Bacteria make decisions according to their ambient environment. The motion of an individual peritrichously flagellated bacterium can be described in terms of run intervals, during which the cell swims approximately in a straight line, interspersed with tumbles, during which the organism undergoes a random reorientation. In BSA, a unit walk in a random direction represents a Tumble, and a unit walk in the same direction as the last step indicates a Run. Similar to BFA, a chemotactic process
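For concreteness, the PSO position update described above can be sketched as follows. This is a standard textbook formulation rather than the exact variant compared against in Section III, and the coefficient values (w, c1, c2) are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO velocity/position update, as described in the text.

    x, v        -- current position and velocity of one particle
    pbest       -- the particle's historical best position (kept in memory)
    gbest       -- the global best position in this iteration
    w, c1, c2   -- inertia weight and acceleration coefficients (typical
                   textbook values; this paper does not fix them here)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    # attraction towards the historical best and the global best
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

With a zero velocity and both attractors at the current position, the particle stays put, which matches the intuition that attraction drives all movement here.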
in BSA consists of one step of Tumble and up to Ns steps of Run, depending on the variation of the environment. In the process of Tumble, the position of the ith bacterium can be represented as

θi(j + 1, r) = θi(j, r) + Ci(j)∠φi(j)    (1)

where θi(j, r) indicates the position of the ith bacterium at the jth chemotactic step in the rth iteration loop; Ci(j) is the length vector of a unit walk for the ith bacterium at the jth chemotactic step; and φi(j) is the direction angle of the jth step for bacterium i, a random angle generated within the range [0, 2π]. In BSA, in order to accelerate the convergence rate and enhance the search ability on different types of problems, Ci(j) is set to be adaptive, defined as follows:

Ci(j) = Cinit × B,       if Tumble
      = D1 × r1 × B,     if Run         (2)

where Cinit is the step size of a unit walk, D1 is a constant, r1 ∈ [0, 1] is a random number, and B is the length vector of the boundaries of the search domain. The fitness of the ith bacterium at the jth chemotactic step is represented by Ji(j, r). If Ji(j + 1, r) is better than Ji(j, r), then the process of Run follows, which can be represented by

θ̂i^(k+1)(j + 1, r) = θ̂i^k(j + 1, r) + Ci(j)∠φi(j)    (3)
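A minimal Python/NumPy sketch of the tumble-run chemotactic process of (1)-(3) may help make the operators concrete. A random unit vector stands in for the direction angle ∠φi(j) in arbitrary dimension, and the function and parameter names (chemotactic_step, c_init, d1, n_s) are our own illustrative choices, not the authors' code.

```python
import numpy as np

def chemotactic_step(theta, fitness, B, c_init=1e-4, d1=0.5, n_s=4, rng=None):
    """One chemotactic process: a Tumble per (1)-(2), then up to n_s Runs per (3).

    theta   -- current position of one bacterium (1-D array)
    fitness -- objective function to minimize
    B       -- length vector of the search-domain boundaries
    """
    rng = np.random.default_rng() if rng is None else rng
    # Tumble: a unit walk in a random direction with step length Cinit * B
    phi = rng.normal(size=theta.shape)
    phi /= np.linalg.norm(phi)                 # random unit direction
    pos = theta + c_init * B * phi
    j_last = fitness(theta)
    if fitness(pos) < j_last:                  # improvement: Run in the same direction
        for _ in range(n_s):
            # adaptive Run length D1 * r1 * B, the Run case of Eq. (2)
            cand = pos + d1 * rng.random() * B * phi
            if fitness(cand) >= fitness(pos):  # stop as soon as fitness worsens
                break
            pos = cand
    return pos
```

Note the direction is drawn once per chemotactic process and held fixed through the Runs, mirroring the statement below that ∠φi(j) remains unchanged.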
where 1 ≤ k ≤ Ns, and θ̂i^k denotes the position of the ith bacterium at the kth step of Run. This process continues until Ji^(k+1)(j + 1, r) is worse than Ji^k(j + 1, r). The angle ∠φi(j) remains unchanged during this process. After each chemotactic process, the energy obtained by a single bacterium over its lifetime, which spans Nc chemotactic steps, is accumulated. Compared with the others, the individual that has gained the most energy is defined as the best cell, and its position θp(r) is kept for the calculation of the other bacteria in the next selection process. A certain percentage of bacteria, selected with probability pa, are involved in an attraction action. Based on both their current positions and the global best position, their positions are recalculated as follows:

θi(1, r + 1) = θi(Nc, r) + r2 × D2 × (θp(r) − θi(Nc, r))    (4)

where D2 is a constant and r2 ∈ [0, 1] is a random number. It should be mentioned that in (4) the position update by the attraction action uses each bacterium's position at the last step of the current chemotactic process as the first step of the next chemotactic process; this is why the chemotactic index is set to 1 on the left-hand side. This simplified cell-cell communication process shares some similarities with PSO while keeping its simplicity by using only the global best, which reduces the computational complexity to a certain extent. The remaining bacteria are dispersed to positions around the best individual with a randomly chosen mutation step and a mutation angle, using the following equation:

θi(1, r + 1) = θp(r) + r3 × B × ∠ψ    (5)
where r3 ∈ [0, 1] is a random number and ψ is a random angle chosen from [0, 2π]. 2) Pseudo code: The pseudo code of BSA is listed in Table I, which sets out the executable procedure of the mathematical framework and shows the implementation of BSA.
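The communication step just described, combining attraction per (4) and dispersion per (5), can be sketched as follows. The names (communicate, p_a, d2) and the minimization convention for the accumulated energies are our own assumptions for illustration.

```python
import numpy as np

def communicate(positions, energies, B, p_a=0.8, d2=1.0, rng=None):
    """Selection/communication step of BSA: attraction (4) and dispersion (5).

    positions -- (n, dim) array of bacteria at the end of their chemotactic life
    energies  -- fitness accumulated over each lifetime (lower is better here)
    B         -- length vector of the search-domain boundaries
    p_a, d2   -- attraction probability and constant, as named in the text
    """
    rng = np.random.default_rng() if rng is None else rng
    best = positions[np.argmin(energies)]            # theta_p(r), the best cell
    out = positions.copy()
    for i in range(len(positions)):
        if np.allclose(positions[i], best):
            continue                                 # the fittest keeps its position
        if rng.random() < p_a:                       # attraction, Eq. (4)
            r2 = rng.random()
            out[i] = positions[i] + r2 * d2 * (best - positions[i])
        else:                                        # dispersion, Eq. (5)
            psi = rng.normal(size=positions.shape[1])
            psi /= np.linalg.norm(psi)               # random unit direction
            out[i] = best + rng.random() * B * psi
    return out
```

Attraction pulls a bacterium part-way towards the best cell, while dispersion resamples it in a random direction around the best cell, which is what preserves population diversity.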
TABLE I
PSEUDO CODE FOR BSA

Randomly initialize positions of bacteria in the domain;
FOR (Selection r = 1 : Ns)
    FOR (chemotactic steps per bacterium j = 1 : Nc)
        Calculate: calculate the nutrient function of bacterium i as Ji(j, r);
        Tumble: for bacterium i, set Ji(j, r) as Jlast; generate a random
            direction represented by an array X, where each element belongs
            to [0, 1]; move along X by a unit walk with step size Cinit; the
            new position is calculated by (1); start another chemotactic step;
        IF Ji(j + 1, r) < Ji(j, r)
            WHILE (Ji^(k+1)(j + 1, r) < Ji^k(j + 1, r) and k < Ns)
                Run: for bacterium i, calculate the step fitness as
                    Ji^k(j + 1, r); the positions are calculated by (3); if
                    Ji^k(j + 1, r) < Jlast, take another walk in the same
                    direction, with the step size defined in (2); set
                    Ji(j, r) as Jlast;
            END WHILE
        END IF
    END FOR (chemotactic steps)
    Sum: set Ji as the sum of the step fitness over the lifetime of bacterium i;
    Sort: sort Ji in ascending order of fitness in the population;
    Communication: rank the bacteria according to the fitness gained in their
        lifetimes; keep the position of the fittest individual; randomly
        select, with probability pa, individuals to perform attraction
        using (4); the other individuals perform dispersion using (5).
END FOR (Selection)

III. SIMULATION STUDIES

A. The benchmark functions

In order to evaluate the performance of BSA, seven benchmark functions, selected from [15], are listed below.

Generalized Rosenbrock's Function:

f1(x) = Σ_{i=1}^{29} [100(x_{i+1} − x_i²)² + (x_i − 1)²]    (6)

where x ∈ [−30, 30]^30;

Step Function:

f2(x) = Σ_{i=1}^{30} (⌊x_i + 0.5⌋)²    (7)

where x ∈ [−100, 100]^30;

Quartic Function, i.e., Noise:

f3(x) = Σ_{i=1}^{30} i·x_i⁴ + random[0, 1)    (8)

where x ∈ [−1.28, 1.28]^30;

Generalized Schwefel's Problem 2.26:

f4(x) = −Σ_{i=1}^{30} x_i sin(√|x_i|)    (9)

where x ∈ [−500, 500]^30;

Generalized Rastrigin's Function:

f5(x) = Σ_{i=1}^{30} [x_i² − 10 cos(2πx_i) + 10]    (10)

where x ∈ [−5.12, 5.12]^30;

Ackley's Function:

f6(x) = −20 exp(−0.2 √((1/30) Σ_{i=1}^{30} x_i²)) − exp((1/30) Σ_{i=1}^{30} cos(2πx_i)) + 20 + e    (11)

where x ∈ [−32, 32]^30;

Shekel's Foxholes Function:

f7(x) = [1/500 + Σ_{j=1}^{25} 1 / (j + Σ_{i=1}^{2} (x_i − a_ij)⁶)]^(−1)    (12)

where x ∈ [−65.536, 65.536]² and

(a_ij) = ( −32 −16   0  16  32  −32 · · ·   0  16  32
           −32 −32 −32 −32 −32  −16 · · ·  32  32  32 )

The benchmark functions f1 ∼ f6 are well known. They are high-dimensional; among them f1 ∼ f3 are uni-modal functions and f4 ∼ f6 are multi-modal functions, while f7 falls into the category of low-dimensional functions.

B. Parameter settings

TABLE II
THE EVALUATION NUMBERS

Function    FEP          PSO        BSA
f1          2,000,000    200,000    200,000
f2          150,000      200,000    200,000
f3          300,000      200,000    200,000
f4          900,000      200,000    200,000
f5          500,000      200,000    200,000
f6          150,000      200,000    200,000
f7          10,000       200,000    5,000
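As a check on the benchmark definitions above, two of the functions can be coded directly; both have their global minimum of 0 at the origin. These are our own straightforward transcriptions of the Rastrigin and Ackley formulas, not the authors' test harness.

```python
import numpy as np

def rastrigin(x):
    """Generalized Rastrigin's function f5, Eq. (10)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    """Ackley's function f6, Eq. (11), for dimension n = len(x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)
```

At the origin the Ackley terms cancel exactly (−20 − e + 20 + e = 0), which makes these two convenient sanity checks for any optimizer implementation.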
In the simulation studies, BSA is evaluated on the benchmark functions in comparison with FEP and PSO. Here, the number of evaluations of the objective function in each algorithm is adopted for comparison purposes. The total evaluation number for each algorithm, taken over a complete optimization process, is listed in Table II. The evaluation numbers of FEP and PSO are those used in [15] and [16], and the parameters of FEP and PSO are set as indicated in [15] and [16], respectively. The population size of each algorithm is set to 50. The crossover rate for FEP is set to 0.7. The parameters in BSA are generally set as Cinit = 10e−4, D1 = 0.5, D2 = 1 and pa = 0.8.

C. Simulation results

BSA, FEP and PSO are used to optimize the benchmark functions respectively. Each algorithm ran 20 times, giving a mean value of the best solutions and a standard deviation over the 20 runs. Table III shows the results obtained by the three algorithms on the seven benchmark functions. In this table, the results of FEP and PSO were generated using our own implementations, and they are close to the results reported in [15]. It should be mentioned that the result for f5 is significantly different from that given in [15]; we have attempted to reproduce the published result without success, and this remains an issue for us to investigate in the near future. From Table III, it can be seen clearly that BSA provides a better optimization solution, with a much smaller deviation, for four out of the seven benchmark functions, encompassing both uni-modal and multi-modal problems with low and high dimensions. The merits and characteristics of BSA are discussed in comparison with FEP and PSO as follows.
Fig. 1. Best results of Function f1
1) Convergence: Figures 1 ∼ 7 show the convergence processes of BSA, FEP and PSO on the seven benchmark functions. The three algorithms take 200,000 function evaluations for functions f1 ∼ f6 and 5,000 function evaluations for function f7, respectively. It should be mentioned here that a generation of BSA involves
more evaluations of the objective function than one of FEP or PSO, since a generation covers the lifetime of a bacterium, which consists of a variable number of chemotactic processes. For comparison purposes, we therefore use the number of evaluations to plot the convergence performance of the three algorithms. All the figures illustrate the average best fitness in a population over 20 runs of the program, plotted on a logarithmic scale to accommodate the large range between the biggest and smallest values in the whole optimization process. BSA has also demonstrated a better global search ability on functions f1, f4, f5 and f7. However, PSO performs significantly better than the other two algorithms on f6, thanks to its great ability to escape from local optima. BSA performs almost as well as FEP on f3; for f3, the combination of local search and global search should be finely tuned to offer the best search performance. From the results presented in Table III, it can be seen that BSA performs the best on average for most of the benchmark functions. This is because the adaptive local search and swarming of BSA enable its capability of searching for the global optimum in multi-modal functions. 2) The role of attraction and mutation: BSA has been proposed to represent the survival and reproduction of bacteria from the viewpoint of the underlying biological mechanisms. Since it is inspired by bacterial foraging patterns, it follows the basic biological rule by which each cell attempts to obtain as much energy as possible in its lifetime. The simulation results show that BSA usually converges towards the global optimum at an early stage for most of the evaluation functions, specifically for f4 ∼ f5. As discussed in Section II-B, BSA inherits certain merits from PSO, especially the add-on of attraction. Similar to its counterpart in bacterial systems, e.g.
quorum sensing, the attraction operation, although artificially set, has arguably accelerated the convergence process. On the other hand, PSO utilizes only the features of attraction and inertia weight, which makes it ineffective on a certain number of multi-modal optimization problems, such as f4. FEP adopts the Cauchy mutation operator for EP, which covers a wider range of mutation intensity. It has to be mentioned that mutation in FEP plays a key role in searching for the global optimum of f2. In this step function, the required range of mutation increases exponentially as the dimension gets higher; as f2 is evaluated in a high dimension, a greater range of mutation therefore makes FEP more capable of global search. FEP functions very well on this benchmark function, while the other two algorithms focus more on local search and thus lack a sufficient level of mutation, which hampers their performance on this function. 3) Robustness: In most EAs, the robustness of the algorithm is a crucial issue, as EAs are based on stochastic search and random selections. This issue is also present in PSO. Regarding bacterial foraging behaviors, through appropriate observation of population development and environmental differences in a bacterial foraging process, it has been well
Fig. 2. Best results of Function f2
Fig. 3. Best results of Function f3
Fig. 4. Best results of Function f4
Fig. 5. Best results of Function f5
Fig. 6. Best results of Function f6
Fig. 7. Best results of Function f7
noted that bacteria have a potential capability of growing in and moving towards nutrient-rich areas. This behavior is more deterministic than that emulated in most EAs and in PSO. Although sensitivity to the initial positions and instability across different functions have been noted during the course of the simulation studies, the results show that the standard deviations obtained by BSA are, for most of the benchmark functions, smaller than those obtained by FEP and PSO.

TABLE III
SIMULATION RESULTS

                 FEP                          PSO                          BSA
Function   Mean Best    Std Dev         Mean Best    Std Dev         Mean Best      Std Dev
f1         32.5321      18.7004         30.1393      25.8183         2.4657         3.0588
f2         0            0               0.1          0.3162          0.15           0.6708
f3         7.4×10^−4    2.319×10^−4     2.5×10^−3    5.28×10^−4      8.7203×10^−4   7.5285×10^−3
f4         −11397       364             −7467        980.7           −12567.5       3.25
f5         13.92        3.9577          56.5371      19.6147         1.5051         6.6717
f6         0.4482       0.3869          2.32×10^−9   1.33×10^−9      6.87×10^−2     7.78×10^−2
f7         1.1968       0.4192          1.3952       0.6940          0.9980         0

IV. CONCLUSIONS

The novel BSA has been proposed for global optimization. In this algorithm, adaptive tumble and run operators have been developed and incorporated, based on an understanding of the details of the bacterial chemotactic process. The operators involve two parts: the first is concerned with the selection of tumble and run actions, based on probabilities which are updated during the search process; the second is related to the length of the run steps, which is made adaptive and independent of any knowledge of the optimization problems. These two parts are utilized to balance the global and local search capabilities of BSA. Beyond the tumble and run operators, attraction and mutation operations have also been developed. The former is inspired by the quorum-sensing phenomenon of bacterial foraging patterns, incorporating the idea of PSO attraction; the latter, modeling the dispersion events of the bacterial growing process, plays a significant role in keeping a certain diversity in the bacterial population so as to retain a global search capability, in particular in the optimization of high-dimensional functions. BSA has been evaluated on a number of benchmark problems, which include uni-modal and multi-modal functions in low- and high-dimensional domains, respectively. The convergence rates and robustness of BSA have been discussed in this paper. The simulation results have shown that BSA has superior performance in comparison with FEP and PSO. Through the simulation studies, it has been seen that BSA possesses great potential for the global optimization of complex problems.

2007 IEEE Congress on Evolutionary Computation (CEC 2007)
REFERENCES

[1] T. Back. Evolutionary Algorithms in Theory and Practice. Oxford University Press, Oxford, 1996.
[2] D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Publishing Company, Inc., 1989.
[3] D. B. Fogel. Evolutionary Computation. IEEE Press, New York, 1995.
[4] H. G. Beyer. The Theory of Evolution Strategies. Springer, Heidelberg, April 2001.
[5] D. R. Frutiger, C. J. Bongard, and F. Iida. Iterative product engineering: Evolutionary robot design. In P. Bidaud and F. B. Amar, editors, Proceedings of the Fifth International Conference on Climbing and Walking Robots, pages 619-629, Paris, France, 2002.
[6] A. Piccolboni and G. Mauri. Application of Evolutionary Algorithms to Protein Folding Prediction, volume 1363, pages 123-136. Springer-Verlag, Berlin, 1997.
[7] Q. H. Wu and J. T. Ma. Power system optimal reactive power dispatch using evolutionary programming. IEEE Transactions on Power Systems, 10(3):1243-1249, 1995.
[8] Q. H. Wu, Y. J. Cao, and J. Y. Wen. Optimal reactive power dispatch using an adaptive genetic algorithm. Electrical Power and Energy Systems, 20(8):563-569, 1998.
[9] I. Borgulya. A hybrid evolutionary algorithm for the p-median problem. In GECCO '05: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, pages 649-650, New York, NY, USA, 2005. ACM Press.
[10] M. Dorigo and L. M. Gambardella. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1(1):53-66, April 1997.
[11] J. Kennedy and R. C. Eberhart. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, pages 1942-1948, Perth, Australia, November 1995.
[12] K. M. Passino. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Systems Magazine, pages 53-67, June 2002.
[13] H. C. Berg and D. A. Brown. Chemotaxis in Escherichia coli analyzed by three-dimensional tracking. Nature, 239:500-504, 1972.
[14] W. J. Tang, Q. H. Wu, and J. R. Saunders. A novel model for bacterial foraging in varying environments. In M. Gavrilova et al., editors, Lecture Notes in Computer Science, volume 3980, pages 556-565. Springer-Verlag, Berlin, 2006.
[15] X. Yao, Y. Liu, and G. Lin. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation, 3(2):82-102, July 1999.
[16] F. van den Bergh and A. P. Engelbrecht. A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3):225-239, 2004.