An Atomic Model Based Optimization Algorithm

Anupam Biswas1 and Bhaskar Biswas2
Department of Computer Science & Engineering, Indian Institute of Technology (BHU), Varanasi, India
Email: [email protected], [email protected]

Krishn Kumar Mishra
Department of Computer Science & Engineering, Motilal Nehru National Institute of Technology, Allahabad, India
Email: [email protected]
Abstract—In this paper, an optimization algorithm is proposed for solving combinatorial optimization problems. The design of the algorithm is based on Bohr's atomic model. Microcosmic phenomena of physics and chemistry, such as the excitation and de-excitation of electrons in an atom and the bond formation of atoms, are the key mechanisms of this algorithm. Atoms form compounds to minimize the energy of their electrons and become more stable; in this quest, positively charged atoms attract negatively charged atoms to form bonds with them. Utilizing these phenomena to attain a more stable state of an atom, a novel strategy is incorporated into the algorithm. This stable state corresponds to the global optimum in the proposed algorithm. The performance of the proposed algorithm is analyzed on real-life combinatorial optimization problems.

Index Terms—Genetic Algorithm, Particle Swarm Optimization, Differential Evolution, Optimization.
I. INTRODUCTION

Optimization is the process of finding the best or most favorable element from a collection of values, based on some predefined objectives. Generally, the element of the collection that possesses either the maximum or the minimum value in terms of the defined objectives is considered the best one. Any problem seeking the best solution can be cast as an optimization problem. An optimization problem can be defined as finding a set of parameters $x = (x_1, x_2, ..., x_n)$ that optimizes (minimizes or maximizes) an objective function $f(x)$. The best solution or function value obtained for an optimization problem is referred to as the optimal solution or optimum. An optimization problem may have several optimal solutions in some regions within the solution domain. These are called local optima or local optimal solutions, and such a problem is referred to as a multi-modal optimization problem. The best of all local optima is referred to as the global optimum. Optimization problems that require multiple objective functions to be optimized are referred to as multi-objective optimization problems. This work deals with single-objective optimization problems.

Almost all domains of science and engineering involve optimization problems. Hence, optimization problems need to be solved efficiently and effectively. Although mathematical approaches such as Linear Programming (LP) [1], Non-linear Programming (NLP) [2] and Dynamic Programming (DP) [3] were developed for solving optimization problems, due to some drawbacks [4] these approaches have lost their popularity. An optimization problem that requires finding the global optimal solution is called a global optimization problem. Numerous algorithms have been developed since the early 1970s, most of them inspired by nature. These techniques include Evolutionary Algorithms (EA) [5], the Genetic Algorithm (GA) [6], Particle Swarm Optimization (PSO) [7], Ant Colony Optimization (ACO) [8], etc. Apart from these nature-inspired methods, some algorithms based on chemical and physical phenomena have also been developed. In a similar line of work, Chemical Reaction Optimization (CRO) [9] has been introduced, and algorithms based on physical phenomena have drawn attention in recent years [10].

Nature is a great source of inspiration both for solving optimization problems and for improving existing methods. Popular algorithms such as GA and PSO have undergone numerous changes in the last decades. After Holland's proposal, GA was further developed by Goldberg [14], [15]. In recent years, many improvements to PSO have been proposed through various kinds of parameter tuning. Shi and Eberhart proposed PSO with Time Varying Inertia Weight (PSO-TVIW) [16], which improves the performance of PSO by varying the value of the inertia weight. Another variant is PSO with Random Inertia Weight (PSO-RANDIW) [17], where a random inertia weight is used instead of time variation. Ratnaweera et al. [18] proposed PSO with Time Varying Acceleration Coefficients (PSO-TVAC), using time-varying inertia along with a time-varying cognitive acceleration coefficient as well as both coefficients. Clerc and Kennedy introduced the constriction parameter, which serves as an alternative to $V_{max}$; several types of constriction, such as Type 1, Type 1′ and Type 1″, have been proposed in [19]. Recently, Zhan et al. [22] introduced an orthogonal learning mechanism into PSO. Though orthogonal learning PSO improves solution quality, it requires more computation time [21]. In the same line of work, Differential Evolution (DE) approaches such as jDE [12] have also evolved; the study in [13] showed that jDE performs better than other DE variants. Although these algorithms have gained wide acceptance in the last decades, the no free lunch theorem [11] states that no optimization algorithm is better in every respect. Hence, there is always a need for new algorithms, as well as for improvement of existing ones, to solve optimization problems more efficiently. Since natural processes tend to happen in an optimal way, in this paper we propose another nature-inspired heuristic for solving optimization problems, incorporating the excitation
and de-excitation mechanisms of the atom. We refer to our method as the Atom Stabilization Algorithm (ASA). Keeping the focus on the proposed approach, the rest of the paper is organized as follows: Section II describes the representational aspects of the proposed approach. Section III elaborates the proposed method with pseudo code. Section IV analyzes the performance of ASA. Finally, Section V concludes the paper.

II. DESIGN FRAMEWORK

An atom undergoes structural changes by manipulating its valence electrons. The aim of these structural changes is to become more stable. This kind of structural change depends on the environment in which the atom lies: an atom experiencing a different environment may attain a different structure to become stable. The notion of these structural changes in atoms is similar to the search for optimal solutions in optimization problems. The process of structural change in an atom can be likened to optimization (the process of finding the best one): the environment in which the atom lies corresponds to the problem domain of the optimization problem, and the stable structures correspond to the optimal solutions. This explicit resemblance is the key motivation behind the proposed algorithm.

A set of atoms is considered to materialize the concept of the stable state of an atom in the proposed algorithm. Each atom in this set represents a solution to the optimization problem. The objective is to find a stable state of any of these atoms, which will be the best solution of the corresponding problem. The search for stable states of the atoms in the set is done by strategically manipulating their structure. Similar to Bohr's atomic model, each atom comprises shells where electrons reside. Each shell can hold only one electron and is associated with a specific energy level. Electrons present in each shell possess a certain amount of potential energy: the innermost electrons have higher potential energy, whereas the outermost electrons have comparatively lower potential energy. The presence of an electron is represented with bit one and the absence of an electron with bit zero. These bits are referred to as electron bits (or simply bits), and henceforth the terms bit and electron are used interchangeably throughout the paper. For example, the bit string 1011 represents electrons present at the 1st, 2nd and 4th positions, or equivalently electrons present at the 1st, 3rd and 4th shells. The state of this particular atomic structure looks as in Figure 1. As shown in the figure, the $n^{th}$ bit corresponds to the $(m-n)^{th}$ shell, where $m$ is the maximum number of bits used in the representation. The potential energy of this particular atomic structure is calculated as follows:

$$1 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 1 \times 2^0 = 11 \tag{1}$$
This amount of negative energy binds the electrons present in the atom. More generally, in this conceptual atomic model the potential energy at energy level $k$ of atom $a_i$ is computed as follows:

$$E_{pot}(k) = (\text{bit at } (S_{max}-k)^{th} \text{ position}) \times 2^{(S_{max}-k)} \tag{2}$$
Fig. 1. Representation of bits in the atomic model.
Here, $S_{max}$ is the maximum number of shells in an atom. Since the $k^{th}$ shell is the $(S_{max}-k)^{th}$ bit in the atomic representation, the potential energy of the $j^{th}$ electron bit can be computed as follows:

$$E_{pot}^{j} = (j^{th} \text{ bit}) \times 2^{j} \tag{3}$$

Hence the total potential energy of atom $a_i$ is computed as follows:

$$E_{pot}(a_i) = E_{pot}^{0} + E_{pot}^{1} + \dots + E_{pot}^{(S_{max}-1)}$$
$$E_{pot}(a_i) = (1^{st} \text{ bit}) \times 2^{0} + (2^{nd} \text{ bit}) \times 2^{1} + \dots + ((S_{max}-1)^{th} \text{ bit}) \times 2^{(S_{max}-1)} \tag{4}$$

This is just the decimal value of the atomic representation. So the potential energy of atom $a_i$ can be expressed as:

$$E_{pot}(a_i) = \text{Decimal value of } a_i \tag{5}$$
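For concreteness, the representation and the energy computation of Eqs. (1)-(5) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the name potential_energy and the list-of-bits encoding are our own choices.

    # Sketch: an atom is a list of electron bits; its potential energy is
    # simply the decimal value of that bit string (Eq. 5).
    def potential_energy(atom):
        """atom: list of bits, index 0 = innermost shell (most significant bit)."""
        energy = 0
        s_max = len(atom)
        for pos, bit in enumerate(atom):
            # Each set bit contributes 2^(shell), per Eqs. (2)-(4).
            energy += bit * 2 ** (s_max - pos - 1)
        return energy

    print(potential_energy([1, 0, 1, 1]))  # 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 11, as in Eq. (1)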
III. ATOM STABILIZATION ALGORITHM

The proposed algorithm comprises three operators. The first operator is named Excitation. The role of this operator is to excite electron bits from a lower energy level to a higher energy level; in other words, an electron bit jumps from an inner shell to an outer shell. To achieve the excitation of an electron, a significant amount of positive energy (a photon) is supplied strategically to the system from outside. Depending on the positive energy applied, an electron bit either excites to some higher energy level or escapes from the atom. The second operator is named Rehabilitation. The role of this operator is to rehabilitate electron bits back to lower energy levels depending on the stability (fitness) of the atom. If an atom attains a better state (better fitness) after excitation, it remains in the excited state, whereas an atom that attains a worse state is de-excited to near the previous state. The third operator is named Alternation. The role of this operator is to attract oppositely charged atoms and form bonds among them so as to attain a more stable state.

Similar to other population-based algorithms, ASA is initialized with a randomly generated population of atoms; the excitation, rehabilitation and alternation operators are then applied iteratively in sequence. Two versions of ASA are proposed. The first is ASA-I, in which the alternation operator is not used. The second is ASA-II, in which a number of the outermost shells are considered for the alternation operator. Pseudo code of ASA is given in Algorithm 1.
Algorithm 1 Procedure ASA
  flag ← 0 (for ASA-I) or 1 (for ASA-II)
  A ← probability of alternation for ASA-II
  while termination condition not met do
    Excitation()
    Rehabilitation()
    if flag = 1 then
      R ← rand()
      if A > R then
        Alternation()
      end if
    end if
    fit ← evaluate stability of each atom (fitness)
  end while
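A minimal Python skeleton of this main loop might look as follows. It is a sketch under our own conventions, not the authors' code: atoms are lists of bits, fitness is assumed to be maximized, and excite, rehabilitate and alternate are hypothetical names standing in for Algorithms 2-4 (sketched in the corresponding subsections below).

    import random

    def asa(population, fitness, max_iters, use_alternation=False, A=0.15):
        # Sketch of Algorithm 1; use_alternation=True corresponds to ASA-II.
        atoms = [atom[:] for atom in population]
        fit = [fitness(a) for a in atoms]
        for _ in range(max_iters):
            fit_old = fit[:]
            excite(atoms, fit_old)                       # Algorithm 2: disturb the atoms
            fit_new = [fitness(a) for a in atoms]
            best = atoms[max(range(len(atoms)), key=lambda i: fit_new[i])]
            rehabilitate(atoms, fit_old, fit_new, best)  # Algorithm 3: undo bad excitations
            if use_alternation and A > random.random():  # alternation only in ASA-II
                alternate(atoms)                         # Algorithm 4: ionic bond formation
            fit = [fitness(a) for a in atoms]            # evaluate stability (fitness)
        i_best = max(range(len(atoms)), key=lambda i: fit[i])
        return atoms[i_best], fit[i_best]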
A. Excitation

This operator disrupts the stability of an atom. The intent of this disturbance is to look around for a better solution (a more stable state). The main aim of the operation is to reduce the potential energy of the atom: when an electron bit excites to a higher energy level, the potential energy reduces, which implies a more stable state than the previous one. Each atom is exercised with a distinct amount of positive energy so that the electron bits of that particular atom can excite to their highest possible energy level or, if they gain enough energy, escape from the atom. The external positive energy to be applied to an atom for excitation is computed with the help of the current fitness of each atom. Pseudo code of Excitation is given in Algorithm 2. A normalized fitness (nfit) is computed for every atom. This normalization ensures that atoms with better fitness than the middle atom (in terms of fitness) will try to attain the optimal state, whereas atoms with fitness below the middle atom will try to attain the state of the middle atom. Some of the electron bits are then selected randomly for excitation and positive energy ($E_{pos}$) is applied. Whether an electron bit escapes from the atom or excites to some higher energy level under $E_{pos}$ depends on the work function ($\varphi$) of the atom: at least $\varphi$ amount of positive energy is needed for an electron bit to escape from the atom. Hence, $\varphi$ acts as the escape energy of an atom in this model.
Algorithm 2 Procedure Excitation
  φ ← threshold value for escaping electrons from the atom
  C ← constant value
  for each atom a_i, i = 1, 2, ..., n do
    if fit(a_i) < median(fit) then
      nfit(a_i) ← (median(fit) − fit(a_i)) / (median(fit) − minimum(fit))
    else
      nfit(a_i) ← (fit(a_i) − median(fit)) / (maximum(fit) − minimum(fit))
    end if
    ones ← select p bit positions whose bit values are 1
    for each bit position j ∈ ones do
      E_pos ← rand() + e^(−nfit(a_i))
      if E_pos > φ then
        E_potExcite(a_i) ← E_pot ∓ 2^(S_max − j − 1)
      else
        k ← select a random bit position outer than j whose bit value is 0
        E_excite ← C × (1/j² − 1/k²)
        if E_excite < E_pos then
          E_potExcite(a_i) ← E_pot ∓ 2^(S_max − j − 1)
        else
          E_potExcite(a_i) ← E_pot
        end if
      end if
    end for
  end for
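A Python sketch of this operator is given below. It reflects our reading of Algorithm 2: the energy updates $E_{pot} \mp 2^{S_{max}-j-1}$ compress to flipping or moving bits in place, shell indices start at 1 so the Rydberg-like term $C(1/j^2 - 1/k^2)$ stays finite, and the number of bits selected per atom is our own choice.

    import math
    import random
    from statistics import median

    def excite(atoms, fit, phi=0.95, C=0.85):
        # Sketch of Algorithm 2: disturb atoms with externally supplied positive energy.
        med, lo, hi = median(fit), min(fit), max(fit)
        for i, atom in enumerate(atoms):
            # Normalized fitness: below-median atoms chase the median, the rest the optimum.
            if fit[i] < med:
                nfit = (med - fit[i]) / (med - lo + 1e-12)
            else:
                nfit = (fit[i] - med) / (hi - lo + 1e-12)
            ones = [j for j in range(1, len(atom) + 1) if atom[j - 1] == 1]
            if not ones:
                continue
            for j in random.sample(ones, k=random.randint(1, len(ones))):
                e_pos = random.random() + math.exp(-nfit)    # applied positive energy
                if e_pos > phi:
                    atom[j - 1] = 0                          # exceeds work function: escape
                else:
                    outer = [k for k in range(j + 1, len(atom) + 1) if atom[k - 1] == 0]
                    if outer:
                        k = random.choice(outer)
                        if C * (1 / j**2 - 1 / k**2) < e_pos:   # Rydberg-like jump energy
                            atom[j - 1], atom[k - 1] = 0, 1     # excite bit to outer shell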
B. Rehabilitation

This operator negates the negative effect of the excitation operator. The excitation of electron bits changes the state of the atom, i.e., explores the search domain, but the disturbance generated in the system during excitation is temporary: unfruitful excitations are undone during rehabilitation. After rehabilitation, electron bits move back to their original positions or to any suitable lower energy level, and the atom becomes stable. Unlike in the excitation operator, in this case external positive energy is not applied for the de-excitation of excited electron bits. Pseudo code of Rehabilitation is given in Algorithm 3. If an atom reaches a better state after excitation, it retains that state. Otherwise, whether an atom is to be rehabilitated or not is decided with a probability parameter NRh (the probability of not rehabilitating). If a random probability is less than NRh, the atom is directed towards a better state under the influence of the previous best atom. If the probability is greater than NRh, the atom is rehabilitated: shells whose bit values are zero (0 bits) are selected randomly, and the probability parameter Em determines whether a selected shell rehabilitates an electron bit from outside the atom or from a higher energy level within the atom. If a random probability is less than Em, the atom rehabilitates an electron bit from outside the atom; if it is greater than Em, the atom rehabilitates an electron from a higher energy level.
Algorithm 3 Procedure Rehabilitation(E_potExcite)
  NRh ← probability of not rehabilitating
  Em ← probability of emission
  fit_old ← fitness before excitation, fit_new ← fitness after excitation
  for each atom a_i, i = 1, 2, ..., n do
    R ← rand()
    if fit_new(a_i) < fit_old(a_i) then
      E_pot(a_i) ← E_potExcite(a_i)
    else if NRh > R then
      E_pot(a_i) ← E_potExcite(a_i) + NRh × R × (E_pot(a_best) − E_potExcite(a_i))
    else
      zeros ← select q bit positions whose bit values are 0
      for each bit position j ∈ zeros do
        if Em > rand() then
          E_pot(a_i) ← E_potExcite(a_i) ± 2^(S_max − j)
        else
          k ← select a random bit position inner than j whose bit value is 1
          E_pot(a_i) ← E_potExcite(a_i) ± (2^(S_max − k) − 2^(S_max − j))
        end if
      end for
    end if
  end for
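The sketch below renders Algorithm 3 in the same conventions as the earlier sketches. We assume maximization (higher fitness = more stable), so the comparison direction is flipped relative to the listing; the energy arithmetic toward the best atom is realized on the decimal values of the bit strings, per Eq. (5). The helpers bits_to_int and int_to_bits are our own.

    import random

    def bits_to_int(atom):
        # Potential energy of an atom is the decimal value of its bit string (Eq. 5).
        return int("".join(map(str, atom)), 2)

    def int_to_bits(value, width):
        return [int(b) for b in format(value, "0{}b".format(width))]

    def rehabilitate(atoms, fit_old, fit_new, best, NRh=0.25, Em=0.15):
        # Sketch of Algorithm 3 (assuming maximization of fitness).
        for i, atom in enumerate(atoms):
            if fit_new[i] > fit_old[i]:
                continue                                   # fruitful excitation: keep state
            R = random.random()
            if NRh > R:
                # Do not rehabilitate: pull the atom's energy toward the best atom's.
                val, best_val = bits_to_int(atom), bits_to_int(best)
                atoms[i][:] = int_to_bits(round(val + NRh * R * (best_val - val)), len(atom))
            else:
                zeros = [j for j in range(len(atom)) if atom[j] == 0]
                for j in random.sample(zeros, k=min(2, len(zeros))):
                    if Em > random.random():
                        atom[j] = 1                        # capture electron bit from outside
                    else:
                        inner = [k for k in range(j) if atom[k] == 1]
                        if inner:                          # move an inner electron bit to j
                            k = random.choice(inner)
                            atom[k], atom[j] = 0, 1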
C. Alternation

Electrons present at the outermost shell of an atom are generally referred to as valence electrons, and these participate in bond formation. When an atom releases electrons it becomes positively charged (a cation), and when it gains electrons it becomes negatively charged (an anion). In this hypothetical system, an atom that contains bit 0 in the outermost shell is considered an anion, in the sense that the bit will become 1 after gaining an electron bit. Similarly, an atom that contains an electron bit (bit 1) in the outermost shell is considered a cation after releasing the electron bit. This concept is generalized to other shells instead of considering only the outermost shell: for any randomly selected shell, an atom whose bit is 0 there is considered an anion, and an atom whose bit is 1 there is considered a cation. Atoms in this hypothetical system are displaced depending on their ionization. Highly positively charged atoms (cations), i.e., atoms that have high potential energy and have an electron bit in the outer shell, are more likely to release the outer electron bit and become stable. Similarly, highly negatively charged atoms (anions), i.e., atoms that have high potential energy and do not have an electron bit in the outer shell, are more likely to gain an electron bit and become stable. Because of this tendency, positively charged atoms are attracted towards negatively charged atoms. To materialize this attraction in the proposed strategy, the atoms in the system are rearranged: the most negatively charged atom is placed in the 1st place, the 2nd most negatively charged in the 2nd place, and so on up to the rth place. The remaining (n − r) positively charged atoms of the n atoms are placed as follows: the most positively charged atom is placed in the nth place, the 2nd most positively charged in the (n − 1)th place, and so on down to the (r + 1)th place. The alternation operator is illustrated in Algorithm 4.

Algorithm 4 Procedure Alternation
  s ← randomly choose a shell from a certain percentage of the outer shell positions
  Rearrange atoms considering s as the valence shell of all atoms
  Exchange bit values depending on potential energy and charge up to the pairing position
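Algorithm 4 leaves the pairing and exchange rule largely implicit; the sketch below is one plausible reading, reusing bits_to_int from the previous sketch. We assume anions are ranked by potential energy, cations are placed at the opposite end, and each anion-cation pair exchanges the electron bit at the chosen valence shell.

    import random

    def alternate(atoms, outer_fraction=0.25):
        # Sketch of Algorithm 4: pair anions with cations at a random valence shell s.
        width = len(atoms[0])
        start = int(width * (1 - outer_fraction))   # restrict s to the outermost 25% shells
        s = random.randrange(start, width)
        anions = [a for a in atoms if a[s] == 0]    # bit 0 at shell s: wants an electron
        cations = [a for a in atoms if a[s] == 1]   # bit 1 at shell s: wants to release one
        # Most negative anions first, most positive cations last (ranked by energy).
        anions.sort(key=bits_to_int, reverse=True)
        cations.sort(key=bits_to_int)
        atoms[:] = anions + cations
        # Bond formation: each paired anion gains the electron bit the cation releases.
        for anion, cation in zip(anions, reversed(cations)):
            anion[s], cation[s] = 1, 0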
IV. EXPERIMENTAL RESULTS

Most optimization algorithms use random values during execution, so it is difficult to prove in absolute terms whether an optimization algorithm is good or bad. However, the convergence rate towards an optimal solution and the optimality of the obtained solution are comparable across algorithms. No optimization algorithm is able to perform well on every problem [11]; hence, performance should be compared on some standard problems. We have considered two real-life combinatorial optimization problems: the Royal Roads problem and the 01-Knapsack problem. The performance of ASA is compared with that of other algorithms, and the implementation details of ASA on these problems are given below.

A. Experimental Setup

To examine the performance of ASA, all experiments are carried out with population size 40. The crossover and mutation probabilities of GA are 0.8 and 0.35 respectively. The parameters of PSO-TVIW are set as C1 = 0.5 and C2 = 1.5, with ω gradually decreased from 0.9 to 0.4. The parameters of jDE are initialized with random values and adapted during execution. The parameters of ASA are set as follows: work function φ = 0.95, constant C = 0.85, probability of not rehabilitating NRh = 0.25, and probability of rehabilitation from outside the atom Em = 0.15. For ASA-II, the alternation probability A is set to 0.15 and 25% of the outermost shells are used to perform alternation.

B. Royal Roads Problem

The Royal Road functions are a special class of fitness landscapes developed by Mitchell and colleagues [20]. A Royal Road function consists of a list of partially specified bit strings called schemas. Any schema s_i contains two types of positions: specified and unspecified. Specified positions are strictly required to be either 1 or 0: a position specified as 1 does not allow 0, and vice versa. On the contrary, unspecified positions '*' are like wild cards where both 0 and 1 are allowed. The order of a schema is the number of specified bits in the schema. A bit string x is referred to as an instance of a schema s_i, written x ∈ s_i, if x matches s_i in the specified (i.e., non-'*') positions. The Royal Road function f(x) of a bit string x used for the experiments is defined as follows:

$$f(x) = \sum_{i=1}^{15} \delta_i(x)\, o(s_i), \quad \text{where } \delta_i(x) = \begin{cases} 1, & \text{if } x \in s_i \\ 0, & \text{otherwise,} \end{cases}$$

and where o(s_i) is the order of s_i. The objective is to maximize f(x). The list of schemas considered for our experiment is shown in Figure 2, with the respective order indicated on the right side of each schema. The performance of ASA is evaluated on the Royal Roads problem defined above and compared with GA, PSO-TVIW and jDE.
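A direct Python rendering of this fitness function might look as follows. It is a sketch: the schemas of Figure 2 are all contiguous all-ones blocks, so we generate them rather than type them out, and the function names are our own.

    def make_schemas(length=32):
        # Schemas of Fig. 2: contiguous all-ones blocks of order 4, 8, 16 and 32.
        return [(start, order)
                for order in (4, 8, 16, 32)
                for start in range(0, length, order)]

    def royal_road(x, schemas):
        # x: list of 0/1 bits; score = sum of orders of the fully matched schemas.
        return sum(order for start, order in schemas
                   if all(x[start:start + order]))

    schemas = make_schemas()              # 8 + 4 + 2 + 1 = 15 schemas, as defined above
    print(royal_road([1] * 32, schemas))  # the all-ones string matches all 15: score 128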
s1  = 1111****************************, o(s1)  = 4;
s2  = ****1111************************, o(s2)  = 4;
s3  = ********1111********************, o(s3)  = 4;
s4  = ************1111****************, o(s4)  = 4;
s5  = ****************1111************, o(s5)  = 4;
s6  = ********************1111********, o(s6)  = 4;
s7  = ************************1111****, o(s7)  = 4;
s8  = ****************************1111, o(s8)  = 4;
s9  = 11111111************************, o(s9)  = 8;
s10 = ********11111111****************, o(s10) = 8;
s11 = ****************11111111********, o(s11) = 8;
s12 = ************************11111111, o(s12) = 8;
s13 = 1111111111111111****************, o(s13) = 16;
s14 = ****************1111111111111111, o(s14) = 16;
s15 = 11111111111111111111111111111111, o(s15) = 32;

Fig. 2. List of schemas and their orders.

TABLE II
DETAIL OF 01-KNAPSACK PROBLEM DATA.

Instance | #Weights | Capacity | Optimal Selection
P01      | 10       | 165      | 1111010000
P02      | 5        | 26       | 01110
P03      | 6        | 190      | 110010
P04      | 7        | 50       | 1001000
P05      | 8        | 104      | 10111011
P06      | 7        | 170      | 0101001
TABLE III
PERFORMANCE COMPARISON ON 01-KNAPSACK PROBLEM DATA. HERE, SR = SUCCESS RATE AND AOG = AVERAGE OPTIMAL GENERATIONS NEEDED DURING SUCCESSFUL OPTIMAL SELECTION.

Data | Measure | GA    | PSO-TVIW | jDE   | ASA-I | ASA-II
P01  | SR      | 26%   | 90%      | 82%   | 100%  | 100%
     | AOG     | 37.69 | 61.51    | 50.65 | 9.68  | 9.28
P02  | SR      | 66%   | 100%     | 100%  | 100%  | 100%
     | AOG     | 5.84  | 2.14     | 2.24  | 1.14  | 1.08
P03  | SR      | 64%   | 100%     | 100%  | 98%   | 100%
     | AOG     | 5.90  | 10.18    | 4.48  | 2.80  | 3.64
P04  | SR      | 40%   | 98%      | 100%  | 100%  | 100%
     | AOG     | 2.90  | 14.38    | 4.56  | 2.04  | 1.90
P05  | SR      | 48%   | 100%     | 100%  | 100%  | 100%
     | AOG     | 8.83  | 8.22     | 7.06  | 15.08 | 16.50
P06  | SR      | 48%   | 100%     | 100%  | 100%  | 100%
     | AOG     | 14.45 | 7.52     | 5.84  | 5.06  | 2.86
TABLE I
PERFORMANCE COMPARISON ON THE ROYAL ROAD PROBLEM.

Algorithm | Mean  | Std. Dev. | Maximum | Minimum
GA        | 48.35 | 8.23      | 68.00   | 36.00
PSO-TVIW  | 34.00 | 12.76     | 68.00   | 20.00
jDE       | 50.88 | 8.41      | 68.00   | 32.00
ASA-I     | 51.28 | 9.24      | 68.00   | 36.00
ASA-II    | 50.08 | 7.01      | 64.00   | 36.00
Each algorithm is executed with population size 120 for 200 generations. Results obtained over 50 trials are presented in Table I. Clearly, ASA performs better than the other three algorithms: ASA attains better average optimal values for the same number of generations.

C. 01-Knapsack Problem

The knapsack problem is a combinatorial optimization problem: given a set of items, each with a weight and a profit, the objective is to determine the number of each item to include in a collection so that the total profit is maximized, provided the total weight is less than or equal to a given capacity. The 01-Knapsack problem restricts the number of each kind of item to zero or one. Thus, the 01-Knapsack problem with a set of n items numbered from 1 to n, each having a weight w_i and a profit p_i, along with a maximum weight capacity C, is defined as follows:

$$\text{maximize } f(x) = \sum_{i=1}^{n} p_i x_i \quad \text{subject to} \quad \sum_{i=1}^{n} w_i x_i \leq C \ \text{ and } \ x_i \in \{0, 1\}.$$

Here, x is a bit string of 1s and 0s, in which x_i represents the presence or absence of the ith item in the knapsack.
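A common way to plug this objective into a bit-string algorithm such as ASA is a penalized fitness. The paper does not state its constraint handling, so the death penalty below (infeasible selections score zero) is our assumption, and the example instance is toy data, not one of the instances in Table II.

    def knapsack_fitness(x, weights, profits, capacity):
        """x: list of 0/1 bits selecting items; returns total profit, or 0 if overweight."""
        total_weight = sum(w for w, b in zip(weights, x) if b)
        if total_weight > capacity:
            return 0                      # assumed death penalty for infeasible selections
        return sum(p for p, b in zip(profits, x) if b)

    # Toy data for illustration only: items 1 and 3 fit (weight 6 <= 7) with profit 9.
    print(knapsack_fitness([1, 0, 1], weights=[4, 3, 2], profits=[5, 2, 4], capacity=7))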
For the experimentation, the first six instances of the KNAPSACK01 data set(1) are considered. Each of the instances P01 to P06 provides data for the 01-Knapsack problem, summarized in Table II.

(1) The KNAPSACK01 data set is publicly available online at https://people.sc.fsu.edu/~jburkardt/datasets/datasets.html
The performance of ASA is evaluated on all six instances and compared with GA, PSO-TVIW and jDE. Each algorithm is executed with population size 30 for 200 generations. Results obtained over 50 trials are presented in Table III. ASA performs better on almost all of the six 01-Knapsack instances, while the performance of GA is the worst on all of them. The success rate of ASA-I is 100% on all instances except P03; however, ASA-I requires comparatively fewer generations to attain the optimal selection on P03. ASA-II requires far fewer generations to acquire the optimal selections in comparison to GA, PSO-TVIW and jDE, and its success rate is 100% on all six instances.

V. CONCLUSION

In this paper we have proposed the Atom Stabilization Algorithm (ASA) for solving combinatorial optimization problems. The design of the algorithm is based on Bohr's atomic model. Solutions of the search domain are represented as atoms by placing binary encoded bits into the various shells (bit positions) of an atom. The excitation, de-excitation and bond formation mechanisms enrich the proposed algorithm with extensive exploration of the search domain. Two versions of ASA have been proposed, ASA-I and ASA-II: in ASA-I the alternation operator is absent, whereas in ASA-II the alternation operator is applied with a small probability.
The performance of ASA has been evaluated on real-life combinatorial optimization problems, namely the Royal Roads problem and the 01-Knapsack problem. On the Royal Roads problem, ASA attains better average optimal values than GA, PSO-TVIW and jDE for the same number of generations. On the 01-Knapsack problem, ASA also requires far fewer generations to acquire the optimal selection on almost all of the six instances. Moreover, the success rate of ASA is higher than that of GA, PSO-TVIW and jDE. Overall, ASA is comparatively very efficient in terms of the time required to reach good-quality solutions.

REFERENCES

[1] J. E. Beasley, editor. Advances in Linear and Integer Programming. Oxford Science, 1996.
[2] M. Avriel. Nonlinear Programming: Analysis and Methods. Dover Publications, 2003.
[3] R. Bellman. Dynamic Programming. Princeton University Press; Dover paperback edition, 2003.
[4] Z. W. Geem, J. H. Kim, and G. V. Loganathan. A new heuristic optimization algorithm: Harmony search. Simulation, 2001.
[5] H.-P. Schwefel. Numerical Optimization of Computer Models. Chichester: Wiley, 1981.
[6] J. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, 1975.
[7] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, vol. IV, pp. 1942-1948, 1995.
[8] M. Dorigo, V. Maniezzo, and A. Colorni. Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 26, no. 1, pp. 29-41, 1996.
[9] A. Y. S. Lam and V. O. K. Li. Chemical-reaction-inspired metaheuristic for optimization. IEEE Transactions on Evolutionary Computation, vol. 14, no. 3, pp. 381-399, 2010.
[10] A. Biswas, K. K. Mishra, S. Tiwari, and A. K. Misra. Physics-inspired optimization algorithms: A survey. Journal of Optimization, vol. 2013, Article ID 438152, 16 pages, 2013.
[11] D. H. Wolpert and W. G. Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, Apr. 1997.
[12] J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Žumer. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646-657, 2006.
[13] N. Veček, M. Mernik, and M. Črepinšek. A chess rating system for evolutionary algorithms: A new method for the comparison and ranking of evolutionary algorithms. Information Sciences, vol. 277, pp. 656-679, 2014.
[14] D. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[15] D. Goldberg. A note on Boltzmann tournament selection for genetic algorithms and population-oriented simulated annealing. TCGA Report 90003, Engineering Mechanics, University of Alabama, 1990.
[16] Y. Shi and R. C. Eberhart. Empirical study of particle swarm optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, pp. 101-106, 1999.
[17] R. C. Eberhart and Y. Shi. Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the 2001 IEEE Congress on Evolutionary Computation, pp. 94-100, 2001.
[18] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation, vol. 8, pp. 240-255, 2004.
[19] M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.
[20] M. Mitchell, S. Forrest, and J. H. Holland. The royal road for genetic algorithms: Fitness landscapes and GA performance. In Proceedings of the First European Conference on Artificial Life, pp. 245-254. Cambridge, MA: The MIT Press, 1992.
[21] A. Biswas, P. Gupta, M. Modi, and B. Biswas. An empirical study of some particle swarm optimizer variants for community detection. In Advances in Intelligent Informatics, pp. 511-520. Springer International Publishing, 2015.
[22] Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi. Orthogonal learning particle swarm optimization. IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, 2011.