Modern Metaheuristics for Function Optimization Problem

Marek Pilski (1), Pascal Bouvry (2), Franciszek Seredyński (1,3,4)

(1) Institute of Computer Science, Academy of Podlasie, Sienkiewicza 51, 08-110 Siedlce, Poland
(2) University of Luxembourg, Faculty of Sciences, Technology and Communication, 6, rue Coudenhove Kalergi, L-1359 Luxembourg-Kirchberg, Luxembourg
(3) Institute of Computer Science, Polish Academy of Sciences, Ordona 21, 01-237 Warszawa, Poland
(4) Polish-Japanese Institute of Information Technology, Koszykowa 86, 02-008 Warszawa, Poland

Abstract

This paper compares the behaviour of three metaheuristics for the function optimization problem on a set of classical test functions that have a large number of variables and are known to be hard. The first algorithm described is Particle Swarm Optimization (PSO). The second is based on the paradigm of the Artificial Immune System (AIS). Both algorithms are then compared with a Genetic Algorithm (GA). New insights are provided on how these algorithms behave on a set of difficult objective functions with a large number of variables.

1. Introduction

There is a wide range of problems, classified as NP-hard, that appear impossible to solve exactly in practice. In such cases various heuristics may be useful; although they do not guarantee that the optimal solution is found, they can deliver a "good" solution within a reasonably short time. The field of multi-variable optimization of non-linear functions includes many such intractable problems. Evolutionary algorithms (EA) [2], based on an analogy with the process of evolution found in nature, have become increasingly popular and successful. In particular, parallel versions of such algorithms have been studied intensively in recent years, resulting in the development of coevolutionary algorithms, e.g. CCGA (Cooperative Coevolutionary Genetic Algorithm) [9][10] and LCGA (Loosely Coupled Genetic Algorithm) [1][10].

In the current article we examine two newer algorithmic techniques belonging to the EA class: Particle Swarm Optimization (PSO) and Artificial Immune System (AIS). We then compare them with a classical Genetic Algorithm (GA) on six different problems described by multi-variable functions. In [16], a number of "no free lunch" (NFL) theorems are presented which establish that, for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class. The aim of the current study is thus not to check whether one algorithm is in general "better" than another, but to analyze the behaviour of three different algorithms on a given set of optimization functions that have a large number of variables and are known to be extremely difficult to solve, and to provide new insights on using these algorithms for the proposed objective functions.

One of the inspirations behind the PSO metaheuristic is the ability of living creatures such as birds or fish to travel in groups (flocks of birds and schools of fish) in harmony. The terminology of the "swarm" was proposed by Millonas [7] in his book, where he describes the modelling of artificial life. PSO relies on very simple notions and paradigms, which can easily be coded and simulated on a computer. The algorithm requires only basic mathematical operators and is inexpensive in terms of memory and CPU usage. Hence, PSO is becoming increasingly popular, and numerous implementations, applications and modifications have appeared [3][11][12][13]. AIS is based on the rules governing the immune system of vertebrates. The mechanisms governing immune systems are used to build AIS dedicated to data analysis, optimization, and the construction of anomaly-detection systems [14][15]. They too have recently attracted numerous implementations, improvements and applications.


We will compare all three heuristics in a common testing environment of well-known functions, which constitute a standard experimental testbed for many optimization methods. The paper is organized in the following way. The next section presents the PSO algorithm. Section 3 describes AIS and section 4 outlines the GA. Section 5 presents the set of test functions and section 6 shows the results of the experimental study. The last section contains conclusions.

2. Particle Swarm Optimization

The initial positions of the individuals are chosen at random from the solution space. Then a single particle moves in the direction described by the equation:

$$
\begin{cases}
v^{t+1}[k] = c_1 r_1\, v^{t}[k] + c_2 r_2\,\big(y^{t}[k] - x^{t}[k]\big) + c_3 r_3\,\big(y^{*t}[k] - x^{t}[k]\big)\\
x^{t+1}[k] = x^{t}[k] + v^{t+1}[k],
\end{cases}
\qquad (1)
$$

where $v^t$ – velocity of a particle at time $t$; $x^t$ – position of a particle at time $t$; $y^t$ – the best position found by the particle up to time $t$; $y^{*t}$ – the best position found by its neighbours up to time $t$; $c_1, c_2, c_3$ – weight coefficients defining the influence of each of the three elements of the trade-off, which respectively define how much a particle trusts: 1. its current motion, 2. its own experience, 3. its neighbours and their experience; $[k]$ – the $k$-th coordinate of the vectors $x$, $v$ and $y$, whose length equals the number of dimensions of the solution space. The coefficients $c_1, c_2, c_3$ are multiplied by random values $r_1, r_2, r_3$, which belong to the range $[0, 1]$ and are drawn anew in every generation. The second line of equation (1) means that the velocity $v$ is a number defining the distance a particle can travel in time $t = 1$, so the values of the variables $x$ and $v$ can be added on the assumption that their units are identical. The magnitude of the velocity vector can vary, which prevents an individual from travelling through the solution space along a straight line; the change in its value is calculated by equation (2) below.
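As a concrete illustration, here is a minimal Python sketch of update rule (1) applied to a single particle. The coefficient values, the dimensionality and the random seed are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                      # dimensionality of the solution space (assumed)
c1, c2, c3 = 0.7, 1.5, 1.5  # weight coefficients of the trade-off terms (assumed)

def pso_step(x, v, y, y_star):
    """One application of equation (1) to a single particle.

    x      -- current position x^t
    v      -- current velocity v^t
    y      -- best position found so far by this particle
    y_star -- best position found so far by its neighbours
    """
    r1, r2, r3 = rng.random(3)   # r1, r2, r3 in [0, 1], drawn anew each generation
    v_next = c1 * r1 * v + c2 * r2 * (y - x) + c3 * r3 * (y_star - x)
    x_next = x + v_next          # positions and velocities share units (t = 1)
    return x_next, v_next

# Usage: all four arguments are length-N numpy vectors.
x = rng.uniform(-5, 5, N); v = np.zeros(N)
x, v = pso_step(x, v, y=x.copy(), y_star=rng.uniform(-5, 5, N))
```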

$$
v_i(t) = \chi\Big(v_i(t-1) + \rho_1\big(p_i - x_i(t-1)\big) + \rho_2\big(p_g - x_i(t-1)\big)\Big),
\qquad (2)
$$

where $v_i$ – velocity vector of individual $i$; $p_i$ – best position found by the given individual ($pbest_i$ – the function value at position $p_i$); $p_g$ – position of the best individual in the whole population ($gbest$ – its value). The parameters $\rho_1$ and $\rho_2$ influence the velocity vector of a particle: the first weights the pull towards $p_i$, the second towards $p_g$. Changing these parameters changes how strongly the best values found so far influence the particles. The speed of a particle should be large enough to allow it to leave a local minimum and, at the same time, small enough to preserve a division into search areas. It is recommended that the value be chosen according to the problem in question [8]. This is done by introducing an additional parameter $\chi$, called the inertia weight:

$$
\chi = \frac{\kappa}{\left|\,1 - \dfrac{\rho}{2} - \dfrac{\sqrt{\rho^{2} - 4\rho}}{2}\,\right|},
\qquad (3)
$$

where $\kappa$ – coefficient, $\kappa \in (0, 1]$; $\rho$ – coefficient equal to the sum of $\rho_1$ and $\rho_2$ (with $\rho > 4$ so that the square root is real).
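As a hedged illustration, the following Python sketch computes $\chi$ from equation (3) and uses it in the velocity update (2). The setting $\kappa = 1$ and $\rho_1 = \rho_2 = 2.05$ (so $\rho = 4.1$) is a common choice in the PSO literature, assumed here only for illustration.

```python
import math

def constriction(kappa, rho):
    # Equation (3): chi = kappa / |1 - rho/2 - sqrt(rho^2 - 4*rho)/2|.
    # Requires rho = rho1 + rho2 > 4 so that the square root is real.
    return kappa / abs(1.0 - rho / 2.0 - math.sqrt(rho * rho - 4.0 * rho) / 2.0)

def constricted_velocity(v, x, p_i, p_g, rho1=2.05, rho2=2.05, kappa=1.0):
    # Equation (2): the whole update is scaled by chi (parameter values assumed).
    chi = constriction(kappa, rho1 + rho2)
    return chi * (v + rho1 * (p_i - x) + rho2 * (p_g - x))

print(round(constriction(1.0, 4.1), 4))  # 0.7298, the value widely used in practice
```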


A similar solution was proposed in [5], where the authors worked out a new method using a Random Number Inertia Weight. Alternative methods of setting the inertia weight by means of a fuzzy variable [12][13], which give better results, have also been proposed.

Algorithm: Particle Swarm Optimization
  Individual: x = {x1, x2, ..., xN} (structure representing a solution)
  Population: S
  Optimization criterion: function f(x)

  Initialize S
  DO
    FOR every individual i
      Compute the value of f(xi)
      IF f(xi) < pbesti THEN pbesti := f(xi), pi := xi
      IF f(xi) < gbest THEN gbest := f(xi), pg := xi
    FOR every individual i
      Update the velocity vi according to equation (2)
      Update the position xi := xi + vi
  WHILE the stop criterion is not met
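To make the pseudocode concrete, here is a self-contained Python sketch of the whole loop under stated assumptions: the sphere function as optimization criterion, the search bounds, swarm size and iteration budget are illustrative choices, and random factors multiply the $\rho_1$ and $\rho_2$ terms as in equation (1).

```python
import numpy as np

def sphere(x):
    # Illustrative optimization criterion f(x); minimum 0 at the origin.
    return float(np.sum(x * x))

def pso(f, n_dim=10, n_particles=30, n_iter=1000, lo=-5.12, hi=5.12):
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, size=(n_particles, n_dim))  # initialize S at random
    v = np.zeros_like(x)
    pbest = x.copy()                                    # p_i for every individual
    pbest_val = np.array([f(xi) for xi in x])
    g = int(np.argmin(pbest_val))                       # index of the global best p_g
    chi, rho1, rho2 = 0.7298, 2.05, 2.05                # constriction setting (assumed)
    for _ in range(n_iter):
        for i in range(n_particles):
            fi = f(x[i])
            if fi < pbest_val[i]:                       # IF f(xi) < pbesti ...
                pbest_val[i], pbest[i] = fi, x[i].copy()
                if fi < pbest_val[g]:                   # ... and IF f(xi) < gbest
                    g = i
        for i in range(n_particles):
            r1 = rng.random(n_dim)
            r2 = rng.random(n_dim)
            v[i] = chi * (v[i] + rho1 * r1 * (pbest[i] - x[i])
                               + rho2 * r2 * (pbest[g] - x[i]))
            x[i] = x[i] + v[i]                          # second line of equation (1)
    return pbest[g], pbest_val[g]

best_x, best_f = pso(sphere)
print(best_f)  # approaches 0 on the sphere function
```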
