Genetical Swarm Optimization: a New Hybrid Evolutionary Algorithm for Electromagnetic Applications

E. Alfassio Grimaldi (1), F. Grimaccia (1), M. Mussetta (1), P. Pirinoli (2), R. E. Zich (1)

1) Politecnico di Milano, Dipartimento di Elettrotecnica, Piazza Leonardo da Vinci 32, 20133 Milano, Italy. E-mail: [email protected]
2) Politecnico di Torino, Dipartimento di Elettronica, Corso Duca degli Abruzzi 24, 10129 Torino, Italy. E-mail: [email protected]
Abstract

In this paper a new, effective optimization algorithm suitably developed for electromagnetic applications, called Genetical Swarm Optimization (GSO), will be presented. It is a hybrid algorithm developed to combine in the most effective way the properties of two of the most popular evolutionary optimization approaches now in use for the optimization of electromagnetic structures: Particle Swarm Optimization (PSO) and Genetic Algorithms (GA). Like PSO and GA, this algorithm is essentially a population-based heuristic search technique, which can be used to solve combinatorial optimization problems, modeled on the concepts of natural selection and evolution (GA) but also based on the cultural and social rules derived from the analysis of swarm intelligence and of the interaction among particles (PSO). The algorithm is tested here against other optimization techniques on two typical problems: a purely mathematical one, the search for the global maximum of a multi-dimensional sinc function, and an electromagnetic application, the optimization of a linear array.

1. INTRODUCTION

In recent years several evolutionary algorithms have been developed for the optimization of a wide variety of electromagnetic problems. Generally, the solution domain of an electromagnetic optimization problem may present discontinuous and non-differentiable regions, so it is often necessary to introduce suitable approximations of the electromagnetic phenomena in order to conserve computational resources. Furthermore, when the number of variables grows to hundreds or thousands, traditional algorithms show their limits. The advantages of evolutionary computation are the capability to find a global optimum without being trapped in local optima, and the possibility to deal with nonlinear and discontinuous problems involving a large number of variables. On the other hand, these algorithms have strong stochastic bases, thus they require a great number of iterations to get significant results, and consequently their performance is evaluated in terms of speed of convergence. Furthermore, the premature convergence of the best individuals of the population to a local optimum is a well-known drawback frequently found in these techniques. To overcome these limits, in previous papers the authors introduced a new kind of hybrid method, GSO, consisting of a strong cooperation of GA and PSO [1].

The Genetic Algorithm (GA) is one of the most effective evolutionary algorithms developed so far [2]; it simulates natural evolution, in terms of
survival of the fittest, adopting pseudo-biological operators such as selection, crossover and mutation, together with many other additional operators introduced to obtain a faster convergence rate. Hybrid genetic algorithms combining GA with local search methods have been introduced to improve the search performance of GA, solving a wide variety of engineering design problems; they often find better solutions than a simple GA, since they search the solution space more efficiently. Particle Swarm Optimization (PSO) is one of the most recently developed evolutionary techniques; it is based on a model of social interaction between independent agents (particles) and it uses social knowledge (also called swarm intelligence) to find the global maximum or minimum of a generic function [3]. While in GA the improvement in the population fitness is ensured by pseudo-biological operators, such as selection, crossover and mutation, the main PSO operator is the velocity update:
Vi = ω·Vi + φ1·η1·(Pi − Xi) + φ2·η2·(Pg − Xi)    (1)
which takes into account the best position Pg reached so far by the whole swarm and the best position Pi that the particle itself has reached along its path, resulting in a migration of the swarm towards the global optimum. Our GSO consists essentially of a strong cooperation of the two evolutionary algorithms described above.
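To make the update rule concrete, the following minimal Python sketch implements Eq. (1) for a single particle; the parameter values (omega, phi1, phi2) and the function names are illustrative assumptions of this sketch, not values or code taken from the paper.

import random

def velocity_update(v, x, p_i, p_g, omega=0.7, phi1=2.0, phi2=2.0):
    """Eq. (1): new velocity of one particle, component by component.

    v, x, p_i and p_g are equal-length lists holding the current velocity,
    the current position, the personal best position Pi and the global best
    position Pg. omega, phi1 and phi2 are illustrative parameters; eta1 and
    eta2 are uniform random numbers in [0, 1], drawn here per component.
    """
    return [omega * vi
            + phi1 * random.random() * (pi - xi)   # attraction towards the particle's own best Pi
            + phi2 * random.random() * (gi - xi)   # attraction towards the swarm's global best Pg
            for vi, xi, pi, gi in zip(v, x, p_i, p_g)]

def position_update(x, v):
    """Each particle then moves along its updated velocity."""
    return [xi + vi for xi, vi in zip(x, v)]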
[Figure 1 content: on the left, the GSO flow chart: Start → Random population → Fitness evaluation of all individuals → Stop? (yes: Best individuals → End; no: Splitting of population into a GA branch (Selective reproduction, Crossover, Mutation) and a PSO branch (Velocity updating, Calculation of new positions, Personal and global bests updating) → Resulting new population). On the right, the cooperation detail: at iteration i the individuals x1 … x6 are split between PSO and GA, and at iteration i+1 the updated individuals x'1 … x'6 are split again.]
Figure 1. Flow chart of the GSO algorithm (on the left) and detail of the cooperation of GA and PSO in each iteration (on the right).

2. GENETICAL SWARM OPTIMIZATION GENESIS

Some comparisons of the performance of GA and PSO can be found in the literature [4], underlining the reliability and convergence speed of both methods, but keeping the two techniques separate. However, the population-based representation of the parameters that characterize a particular solution is the same for both algorithms; therefore it is possible to implement a hybrid technique that exploits the qualities and uniqueness of the two algorithms. Some attempts have been made in this direction [5], with good results, but with a weak integration of the two strategies, since one algorithm is mainly used as a pre-optimizer for the initial population of the other one. The hybrid technique proposed here, called Genetical Swarm Optimization (GSO), is essentially a population-based heuristic search technique which can be used to solve combinatorial optimization problems, modeled on the concept of natural selection but also based on cultural and social evolution. The GSO algorithm consists of a strong cooperation of GA and PSO, since it maintains the integration of the two techniques for the entire run. In each iteration, in fact, the population is divided into two parts, which are evolved with the two techniques respectively. They are then recombined into the updated population, which is again randomly divided into two parts in the next iteration for another run of genetic or particle swarm operators. The population update concept can be easily understood by thinking that a part of the individuals is
substituted by newly generated ones by means of GA, while the remaining ones are the same as in the previous generation, but moved through the solution space by PSO. The driving parameter of the GSO algorithm is the Hybridization Coefficient (HC); it expresses the percentage of the population that in each iteration is evolved with GA: HC = 0 means the procedure is a pure PSO (the whole population is updated according to PSO operators), HC = 1 means pure GA, while intermediate values of HC correspond to a true hybrid evolution, with the corresponding fraction of the population evolved by GA and the rest by PSO.
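As an illustration of this scheme only, the following Python sketch outlines a single GSO iteration along the lines just described; the population representation, the particular GA operators chosen here (binary tournament selection, one-point crossover, Gaussian mutation) and all parameter values are assumptions of the sketch, not the authors' implementation.

import random

def gso_iteration(pop, fitness, hc=0.5, omega=0.7, phi=2.0, pm=0.05):
    """One GSO iteration (illustrative sketch).

    pop is a list of dicts {'x': position, 'v': velocity, 'p': personal best},
    fitness is the function to maximize, and hc is the Hybridization Coefficient,
    i.e. the fraction of individuals evolved with GA operators in this iteration.
    """
    random.shuffle(pop)                                   # random splitting of the population
    n_ga = int(hc * len(pop))
    ga_part, pso_part = pop[:n_ga], pop[n_ga:]
    g_best = max((ind['p'] for ind in pop), key=fitness)  # global best position so far

    def tournament():
        # binary tournament selection on the GA sub-population
        a, b = random.sample(ga_part, 2) if len(ga_part) > 1 else (ga_part[0], ga_part[0])
        return a if fitness(a['x']) > fitness(b['x']) else b

    # GA branch: selective reproduction, crossover, mutation
    new_ga = []
    for _ in range(len(ga_part)):
        p1, p2 = tournament(), tournament()
        cut = random.randint(1, max(1, len(p1['x']) - 1))             # one-point crossover
        child = p1['x'][:cut] + p2['x'][cut:]
        child = [xi + random.gauss(0.0, 1.0) if random.random() < pm else xi
                 for xi in child]                                      # Gaussian mutation
        new_ga.append({'x': child, 'v': [0.0] * len(child), 'p': child})

    # PSO branch: velocity update of Eq. (1), new positions, personal best updating
    new_pso = []
    for ind in pso_part:
        v = [omega * vi + phi * random.random() * (pi - xi)
                        + phi * random.random() * (gi - xi)
             for vi, xi, pi, gi in zip(ind['v'], ind['x'], ind['p'], g_best)]
        x = [xi + vi for xi, vi in zip(ind['x'], v)]
        p = x if fitness(x) > fitness(ind['p']) else ind['p']
        new_pso.append({'x': x, 'v': v, 'p': p})

    return new_ga + new_pso                               # recombined, updated population

With hc = 0.0 this iteration reduces to a pure PSO update and with hc = 1.0 to a pure GA generation, consistently with the definition of HC given above.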