Proactive Particles in Swarm Optimization: A Self-tuning Algorithm Based on Fuzzy Logic

Marco S. Nobile*†, Gabriella Pasi*, Paolo Cazzaniga‡†, Daniela Besozzi§†, Riccardo Colombo*† and Giancarlo Mauri*†

*Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, 20126 Milano, Italy
†SYSBIO Centre of Systems Biology, 20126 Milano, Italy
‡Dipartimento di Scienze Umane e Sociali, Università degli Studi di Bergamo, 24129 Bergamo, Italy
§Dipartimento di Informatica, Università degli Studi di Milano, 20135 Milano, Italy

Corresponding author: [email protected]

Abstract-Among the existing global optimization algorithms, Particle Swarm Optimization (PSO) is one of the most effective when dealing with non-linear and complex high-dimensional problems. However, the performance of PSO strongly depends on the choice of its settings. In this work we propose a novel, self-tuning PSO algorithm, called Proactive Particles in Swarm Optimization (PPSO), which exploits Fuzzy Logic to calculate the best settings for the inertia, cognitive factor and social factor. Thanks to additional heuristics, PPSO also automatically determines the best settings for the swarm size and for the particles' maximum velocity. PPSO significantly differs from other versions of PSO that exploit Fuzzy Logic, since specific settings are assigned to each particle according to its history, instead of being globally defined for the whole swarm. Thus, the novelty of PPSO is that particles gain a limited autonomous and proactive intelligence, instead of being simple reactive agents. Our results show that PPSO outperforms the standard PSO, both in terms of convergence speed and average quality of solutions, remarkably without the need for any user setting.

Keywords-Particle Swarm Optimization, adaptive algorithms, Fuzzy Logic, settings-free optimization, self-tuning algorithms.

I. INTRODUCTION

Particle Swarm Optimization (PSO) is a global optimization meta-heuristic inspired by the collective movement of bird flocks and fish schools [1]. In PSO, a population (the swarm) of N individuals (the particles) moves inside a bounded search space, realizing a joint effort in the identification of the best solution for a problem. Two types of attractors influence the position of particles within the search space: the social attraction, which drives each particle towards the (global) best particle of the swarm (or of a properly defined neighborhood), and the cognitive attraction, which drives it towards the best position found by the particle itself. The two attractions are balanced by means of two constants, called the social (c_soc) and cognitive (c_cog) factors, respectively. In order to effectively explore the space of feasible solutions, the movement of particles is weighed by means of an inertia factor w, and their velocity values are clamped to a fixed threshold v_max.

As for other optimization algorithms, the performance of PSO strongly depends on a proper balancing of the aforementioned settings (N, c_soc, c_cog, w, v_max). Unfortunately, the analytic determination of the best setting is generally impossible, since it is problem-dependent: only a good knowledge of the shape and roughness of the fitness landscape might help the user in properly setting the PSO. In general, the process of identifying the best settings is complex, lengthy and time consuming, so that intense research is devoted to self-tuning and adaptive modifications of PSO and other evolutionary algorithms. TRIBES, for instance, is an evolutionary methodology inspired by PSO that automatically changes the particles' behavior as well as the topology of the swarm, resulting in a settings-free version of PSO [2]. Other examples are represented by the Parameter-Less Evolutionary Search, based on Genetic Algorithms, which dynamically determines the settings by exploiting some statistical properties of the population [3], and by the plague technique applied to Genetic Programming [4], whose goal consists in automatically adjusting the number of individuals of the population according to the fitness variation.

A completely different approach to the dynamical selection of PSO settings consists in the use of Fuzzy Logic [5] to analyze the contingent situation of the swarm. In [6], the first attempt at exploiting a Fuzzy Rule-Based System (FRBS) to help the configuration of PSO settings was presented. In that work, during each iteration, the performance of the current best candidate solution and the current inertia weight are exploited as inputs of the FRBS to calculate a new inertia weight for the whole swarm. Fuzzy Adaptive Turbulence in Particle Swarm Optimization was introduced in [7] to solve the problem of premature convergence. In this model, the minimum velocity of particles is adaptively tuned using Fuzzy Logic, in order to reduce the convergence of the swarm around the global best. In [8], a Fuzzy Particle Swarm Optimization algorithm (FPSO) was introduced; FPSO adapts the inertia weight and the learning coefficient, a new parameter introduced to modulate the velocity of particles. In FPSO, the FRBS exploits two input variables: the improvement of the global best and the deviation of particles' fitness. A survey on different methods used to select PSO settings can be found in [9], while [10] contains a collection of works describing different evolutionary algorithms enhanced by means of Fuzzy Logic.

The aforementioned papers show that Fuzzy Logic represents an advantageous approach to develop self-tuning strategies for PSO. However, all these works considered only a subset of the overall PSO settings. Moreover, to the best of our knowledge, all existing methods consider w, c_soc and c_cog as global settings, i.e., they are identical for each particle in the swarm. On the contrary, in this work each particle exploits specific settings determined by means of an FRBS approach, so that the simple reactive individuals of PSO become proactive optimizing agents. The FRBS calculates the individual settings of particles by exploiting two functions: the distance from the global best, and a normalized fitness incremental factor. We name our novel algorithm Proactive Particles in Swarm Optimization (PPSO). Considering the taxonomy proposed in [11], PPSO exploits an adaptive parameter control, where the feedback of the FRBS is used to dynamically adjust the strategy parameters (e.g., inertia, cognitive and social factors) of each particle at each iteration of the algorithm. The performance of PPSO is analyzed by means of comparative evaluations with respect to the standard PSO.

The paper is structured as follows. In Section II we describe the standard PSO algorithm, while PPSO is presented in Section III. We show the results of our analysis in Section IV, empirically proving the better performance of PPSO on a set of multi-dimensional benchmark functions. Finally, in Section V we discuss some future developments of our methodology.

II. PARTICLE SWARM OPTIMIZATION

PSO is an evolutionary meta-heuristic useful to perform the optimization of problems whose solutions can be encoded as real-valued vectors [1]. PSO is a population-based algorithm, in which a set (the swarm) of N candidate solutions (the particles) moves in a bounded M-dimensional search space, cooperating to identify the optimal solution. The i-th particle is characterized by two vectors: the position x_i ∈ ℝ^M in the search space, and the velocity v_i ∈ ℝ^M. The initial position of particles is randomly selected with a uniform distribution over the search space. In the classic version of PSO, the velocity of particles changes during the optimization phase as a result of two attractors: the best position b_i ∈ ℝ^M found so far by the particle itself, and the best position g ∈ ℝ^M identified by the whole swarm. The two attractors are balanced by two algorithm-specific settings: the cognitive factor c_cog ∈ ℝ+ and the social factor c_soc ∈ ℝ+. A completely deterministic movement of particles could lead to local optima; for this reason, the components of both attractors are multiplied by two vectors R1 and R2 of random numbers sampled from the uniform distribution in the unit interval (0, 1). Moreover, to avoid a chaotic movement of particles, the change of velocity is modulated by an inertia weight w ∈ ℝ+. Formally, the velocity update for the i-th particle at iteration t is:

v_i(t) = w · v_i(t-1) + c_soc · R1 ∘ (g(t-1) - x_i(t-1)) + c_cog · R2 ∘ (b_i(t-1) - x_i(t-1)),    (1)



where ∘ denotes the component-wise multiplication operator. Once the velocity values at iteration t are evaluated, the positions of particles are updated by calculating:

x_i(t) = x_i(t-1) + v_i(t),    for all i = 1, ..., N.
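To make the update rules concrete, the following sketch implements the velocity update of Equation (1) and the subsequent position update with NumPy. The function name, array shapes and default coefficient values are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def pso_step(X, V, B, g, w=0.7, c_soc=1.5, c_cog=1.5):
    """One iteration of the standard PSO update (Equation 1 plus position update).

    X : (N, M) current particle positions
    V : (N, M) current particle velocities
    B : (N, M) personal best positions b_i
    g : (M,)   global best position
    The inertia and acceleration coefficients are illustrative defaults.
    """
    N, M = X.shape
    R1 = np.random.uniform(0.0, 1.0, size=(N, M))  # random vector for the social term
    R2 = np.random.uniform(0.0, 1.0, size=(N, M))  # random vector for the cognitive term
    # Equation (1): inertia + social attraction towards g + cognitive attraction towards b_i
    V_new = w * V + c_soc * R1 * (g - X) + c_cog * R2 * (B - X)
    # Position update: x_i(t) = x_i(t-1) + v_i(t)
    X_new = X + V_new
    return X_new, V_new
```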

To assess the "goodness" of each particle, that is, of each candidate solution to the optimization problem, PSO makes use of the so-called fitness function (hereafter denoted by f). The hyper-surface described by the fitness values over the set of feasible solutions is called the fitness landscape. The fitness function drives the evolution of the whole swarm since it is used, iteration by iteration, to update the values of b_i and g. This methodology, though, may guide the particles outside the feasible space of solutions for the optimization problem, or even towards infinity. In order to avoid this situation, the search space is bounded (according to domain knowledge) and some boundary conditions are applied to the particles that reach these limits. We denote the boundaries of the m-th dimension of the search space by b_min,m and b_max,m, for each m = 1, ..., M, with b_min,m, b_max,m ∈ ℝ. In this work, we consider the damping boundary conditions proposed in [12], whereby a random bounce on the limit of the search space is simulated, letting the particle go back to the feasible region.

The velocity of particles may diverge during the optimization process; for this reason, its value is usually clamped to a maximum value v_max,m ∈ ℝ+ along each m-th dimension of the search space, with m = 1, ..., M [13].
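The sketch below illustrates one possible implementation of these two safeguards, velocity clamping and a damping-style bounce at the search-space boundaries. The exact damping rule of [12] is not reproduced in this excerpt, so the random damping factor and the reflection formulas are assumptions.

```python
import numpy as np

def clamp_and_bounce(X, V, b_min, b_max, v_max):
    """Clamp velocities and apply damping-style boundary conditions (assumed behaviour).

    X, V         : (N, M) positions and velocities
    b_min, b_max : (M,) lower and upper bounds of the search space
    v_max        : (M,) maximum absolute velocity per dimension
    """
    V = np.clip(V, -v_max, v_max)            # velocity clamping along each dimension
    below, above = X < b_min, X > b_max      # components that left the feasible region
    out = below | above
    # Reflect the out-of-bounds components back inside with a random damping factor,
    # simulating a "bounce" on the boundary (one possible reading of [12]).
    damping = np.random.uniform(0.0, 1.0, size=X.shape)
    X = np.where(below, b_min + damping * (b_min - X), X)
    X = np.where(above, b_max - damping * (X - b_max), X)
    V = np.where(out, -damping * V, V)       # invert and damp the velocity on impact
    return X, V
```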

The values for N, c_soc, c_cog, w and the vector of maximum velocity values v_max, typically set by the user, have a huge impact on the optimization performance [13], both in terms of convergence speed and quality of the best solution. In this work, we propose a novel algorithm for the automatic selection of these values. In particular, we exploit a set of fuzzy rules to dynamically change, for each particle of the swarm, the values of w, c_soc and c_cog.

III. PROACTIVE PSO

With the aim of designing a fully-automated, self-tuning optimization algorithm based on PSO, in this paper we propose to dynamically determine the values of w, c_soc and c_cog by means of Fuzzy Logic. PSO was previously hybridized with Fuzzy Logic [6], [7], [8], though with a different goal: all the existing works consider the automatic determination, based on fuzzy rules, of the global settings of PSO. In this work, we propose a modified PSO algorithm named Proactive Particles in Swarm Optimization (PPSO) in which the particles do not share common settings for the inertia w and the social/cognitive factors c_soc and c_cog. On the contrary, in PPSO each particle updates its velocity according to its own settings. Formally, in PPSO Equation (1) becomes:

v_i(t) = w_i(t-1) · v_i(t-1) + c_soc,i(t-1) · R1 ∘ (g(t-1) - x_i(t-1)) + c_cog,i(t-1) · R2 ∘ (b_i(t-1) - x_i(t-1)),    (2)

where w_i(t), c_soc,i(t) and c_cog,i(t) denote, respectively, the inertia, social factor and cognitive factor of the i-th particle during iteration t. These three factors are dynamically determined by means of fuzzy rules which are based on two main concepts: the distance of the particle from the global best g, and a function measuring its fitness improvement with respect to the previous iteration.
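A minimal sketch of Equation (2), assuming the per-particle coefficients produced by the fuzzy rules are stored in arrays of length N (names and shapes are ours):

```python
import numpy as np

def ppso_velocity_update(X, V, B, g, w, c_soc, c_cog):
    """Velocity update of Equation (2) with per-particle settings.

    X, V, B         : (N, M) positions, velocities and personal bests
    g               : (M,)   global best position
    w, c_soc, c_cog : (N,)   per-particle inertia, social and cognitive factors,
                       as produced by the fuzzy rule-based system
    """
    N, M = X.shape
    R1 = np.random.uniform(size=(N, M))
    R2 = np.random.uniform(size=(N, M))
    # The per-particle coefficients are broadcast over the M dimensions.
    return (w[:, None] * V
            + c_soc[:, None] * R1 * (g - X)
            + c_cog[:, None] * R2 * (B - X))
```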

Formally, the distance between two particles i and j is a function δ : ℝ^M × ℝ^M → ℝ, calculated according to a 2-norm:

δ(x_i(t), x_j(t)) = ||x_i(t) - x_j(t)||_2 = √( Σ_{m=1..M} (x_i,m(t) - x_j,m(t))² ),    (3)

where x_i,m, x_j,m denote the m-th components of the position vectors x_i, x_j, respectively, with i, j = 1, ..., N.
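Both Equation (3) and the diagonal δ_max used below to normalize it can be computed directly; the helper names are our own.

```python
import numpy as np

def delta(x_i, x_j):
    """Equation (3): Euclidean (2-norm) distance between two positions."""
    return np.linalg.norm(np.asarray(x_i) - np.asarray(x_j))

def delta_max(b_min, b_max):
    """Length of the diagonal of the hyper-rectangle defining the search space."""
    return np.linalg.norm(np.asarray(b_max) - np.asarray(b_min))
```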

TABLE I. FUZZY RULES USED BY PPSO

Rule 1: IF (φ_i IS Worse OR δ_i IS Medium OR δ_i IS High) THEN Inertia_i IS Low
Rule 2: IF (φ_i IS Unvaried OR δ_i IS Low) THEN Inertia_i IS Medium
Rule 3: IF φ_i IS Better THEN Inertia_i IS High
Rule 4: IF (φ_i IS Better OR δ_i IS Medium) THEN Social_i IS Low
Rule 5: IF φ_i IS Unvaried THEN Social_i IS Medium
Rule 6: IF (φ_i IS Worse OR δ_i IS Low OR δ_i IS High) THEN Social_i IS High
Rule 7: IF δ_i IS High THEN Cognitive_i IS Low
Rule 8: IF (φ_i IS Unvaried OR φ_i IS Worse OR δ_i IS Low OR δ_i IS Medium) THEN Cognitive_i IS Medium
Rule 9: IF φ_i IS Better THEN Cognitive_i IS High

The normalized fitness incremental factor is a function φ : ℝ^M × ℝ^M → (-1, 1), calculated according to the current and the previous positions of particle i and the corresponding fitness values:

φ(x_i(t), x_i(t-1)) = [ (min{f(x_i(t)), f_worst} - min{f(x_i(t-1)), f_worst}) / |f_worst| ] · [ δ(x_i(t), x_i(t-1)) / δ_max ],    (4)

where δ_max is the length of the diagonal of the hyper-rectangle defined by the search space, and f_worst represents the estimated worst fitness value for the optimization problem under investigation. Since the fitness landscape of the problem is generally unknown, an accurate estimate of the worst fitness value is, intuitively, as difficult as solving the optimization problem itself. However, during the first iteration of PPSO, we can calculate the fitness values of all particles in their initial positions and assume f_worst to be equal to the worst of these values. Then, during the optimization phase, we can clamp any fitness value worse than f_worst, which is exactly the rationale of the min functions in Equation (4). More precisely, the first factor in Equation (4) considers the improvement¹ of the fitness value of the i-th particle. The variation of the fitness function is normalized in [-1, 1] by dividing by |f_worst|. Note that a low value of φ(x_i(t), x_i(t-1)) within the interval [-1, 1] indicates a lower fitness value of particle i with respect to the value it had in the previous iteration, i.e., it corresponds to a position x_i which represents a better solution for the optimization problem. The second factor in Equation (4) weighs φ according to the distance between the current and the previous position of the particle. The distance is normalized by dividing by δ_max, so that the second factor takes values in the interval (0, 1).

To determine the values of w_i(t), c_soc,i(t) and c_cog,i(t), for each particle i = 1, ..., N at each iteration t, we defined an FRBS composed of 9 fuzzy rules (Table I). In the antecedents of the rules, we make use of two linguistic variables that we have named Distance from g and Normalized fitness incremental factor; in the following we denote them as δ_i and φ_i, respectively. The output variables in the consequents of the rules are called Inertia_i, Social_i and Cognitive_i which, intuitively, correspond to the respective settings of the i-th particle in PPSO.
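The following sketch computes the normalized fitness incremental factor of Equation (4) for a minimization problem, estimating f_worst from the initial swarm as described above; all function and variable names are our own.

```python
import numpy as np

def estimate_f_worst(fitness_values):
    """Worst (largest, for minimization) fitness value observed in the initial swarm."""
    return max(fitness_values)

def phi(f_curr, f_prev, x_curr, x_prev, f_worst, d_max):
    """Equation (4): normalized fitness incremental factor of one particle.

    f_curr, f_prev : fitness at the current and previous positions
    x_curr, x_prev : current and previous positions (arrays of length M)
    f_worst        : estimated worst fitness value (used to clamp and normalize)
    d_max          : diagonal of the search space (delta_max)
    """
    # Clamp fitness values worse than the estimated worst value (the min terms of Eq. 4),
    # then normalize the variation by |f_worst|.
    improvement = (min(f_curr, f_worst) - min(f_prev, f_worst)) / abs(f_worst)
    # Weigh the improvement by the distance travelled, normalized by the diagonal.
    step = np.linalg.norm(np.asarray(x_curr) - np.asarray(x_prev)) / d_max
    return improvement * step  # negative values indicate an improved (lower) fitness
```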

The universe of discourse of the Distance from g variable is constituted by the numeric values of the distance between x_i and g, according to Equation (3). Thus, the base variable of δ_i corresponds to the interval [0, δ_max]. The term set of this variable is composed of three linguistic values: Low, Medium and High. The definition of a linguistic variable that expresses the distance from g allows us to characterize the

¹In this work we consider minimization problems. In the case of maximization problems, the sign of function φ must be inverted.
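The excerpt ends before the membership functions and the inference method are specified, so the following is only an illustration of how rules such as those in Table I could turn δ_i and φ_i into per-particle settings: it assumes triangular membership functions over [0, δ_max] and [-1, 1], and a simple Sugeno-style weighted-average defuzzification with hypothetical output levels.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ppso_settings(delta_i, phi_i, d_max):
    """Illustrative fuzzy inference for one particle (assumed memberships and output levels)."""
    # Membership degrees of the linguistic values of delta_i (Low, Medium, High over [0, d_max]).
    d_low  = tri(delta_i, -d_max, 0.0, 0.5 * d_max)
    d_med  = tri(delta_i, 0.0, 0.5 * d_max, d_max)
    d_high = tri(delta_i, 0.5 * d_max, d_max, 2.0 * d_max)
    # Membership degrees of phi_i (Better < 0, Unvaried around 0, Worse > 0 for minimization).
    worse    = tri(phi_i, 0.0, 1.0, 2.0)
    unvaried = tri(phi_i, -1.0, 0.0, 1.0)
    better   = tri(phi_i, -2.0, -1.0, 0.0)
    low, med, high = 0.3, 1.0, 1.7  # hypothetical output levels for Low/Medium/High

    def defuzz(rule_low, rule_med, rule_high):
        # Weighted average of the output levels by the rule activation strengths.
        tot = rule_low + rule_med + rule_high
        return (rule_low * low + rule_med * med + rule_high * high) / tot if tot > 0 else med

    # OR in the antecedents is taken as max, following common fuzzy practice.
    inertia   = defuzz(max(worse, d_med, d_high), max(unvaried, d_low), better)  # rules 1-3
    social    = defuzz(max(better, d_med), unvaried, max(worse, d_low, d_high))  # rules 4-6
    cognitive = defuzz(d_high, max(unvaried, worse, d_low, d_med), better)       # rules 7-9
    return inertia, social, cognitive
```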
