Soft Computing

Benchmarking and Comparison of Nature-Inspired Population-Based Continuous Optimisation Algorithms
-- Manuscript Draft --

Manuscript Number: SOCO-D-13-00338R1

Full Title: Benchmarking and Comparison of Nature-Inspired Population-Based Continuous Optimisation Algorithms

Article Type: Original Research

Keywords: optimisation; bees algorithm; swarm intelligence; evolutionary algorithms; optimisation benchmarks

Corresponding Author: Marco Castellani, Institute of Marine Research, Bergen, Norway

Corresponding Author's Institution: Institute of Marine Research

First Author: Duc Truong Pham

Order of Authors: Duc Truong Pham; Marco Castellani

Abstract: This paper describes an experimental investigation into four nature-inspired population-based continuous optimisation methods: the Bees Algorithm, Evolutionary Algorithms, Particle Swarm Optimisation, and the Artificial Bee Colony algorithm. The aim of the proposed study is to understand and compare the specific capabilities of each optimisation algorithm. For each algorithm, thirty-two configurations covering different combinations of operators and learning parameters were examined. In order to evaluate the optimisation procedures, twenty-five function minimisation benchmarks were designed by the authors. The proposed set of benchmarks includes many diverse fitness landscapes, and constitutes a contribution to the systematic study of optimisation techniques and operators. The experimental results highlight the strengths and weaknesses of the algorithms and configurations tested. The existence and extent of origin and alignment search biases related to the use of different recombination operators are highlighted. The analysis of the results reveals interesting regularities that help to identify some of the crucial issues in the choice and configuration of the search algorithms.

Section/Category: Methodologies & Application



Benchmarking and Comparison of Nature-Inspired Population-Based Continuous Optimisation Algorithms

D.T. Pham (1), M. Castellani (2)

(1) School of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK.
(2) Department of Biology, University of Bergen, Postboks 7803, NO-5020 Bergen, Norway.

Abstract – This paper describes an experimental investigation into four nature-inspired population-based continuous optimisation methods: the Bees Algorithm, Evolutionary Algorithms, Particle Swarm Optimisation, and the Artificial Bee Colony algorithm. The aim of the study is to understand and compare the specific capabilities of each optimisation algorithm. For each algorithm, thirty-two configurations covering different combinations of operators and learning parameters were examined. To evaluate the optimisation procedures, twenty-five function minimisation benchmarks were designed by the authors. The proposed set of benchmarks includes many diverse fitness landscapes, and constitutes a contribution to the systematic study of optimisation techniques and operators. The experimental results highlight the strengths and weaknesses of the algorithms and configurations tested. The existence and extent of origin and alignment search biases related to the use of different recombination operators are highlighted. The analysis of the results reveals interesting regularities that help to identify some of the crucial issues in the choice and configuration of the search algorithms.

Keywords: optimisation, bees algorithm, swarm intelligence, evolutionary algorithms, optimisation benchmarks.

1 INTRODUCTION

The last thirty years have seen a dramatic rise in the popularity of nature-inspired metaheuristics based on the collective problem-solving strategies of large ensembles of agents. Evolutionary Algorithms (EAs) (Rechenberg 1965; L.J. Fogel et al. 1966; Holland 1975; Koza 1992) were the first instances of such methods. Based on Darwin's principle of survival of the fittest, EAs are today a well-established and popular class of optimisation algorithms, and an area of broad and active research. Swarm Intelligence (SI) (Bonabeau et al. 1999; Kennedy 2006) comprises a wide class of procedures modelling the behaviour of social animals. Conceived in the nineties (Kennedy and Eberhart 1995; Dorigo et al. 1996), SI quickly grew in popularity and attracted a thriving research community gathered around a network of popular websites, scientific meetings, and publications.

There are nowadays at least five main EA branches: Evolutionary Programming (L.J. Fogel et al. 1966), Evolution Strategies (ES) (Rechenberg 1965; Baeck et al. 1991), Genetic Algorithms (Holland 1975; Goldberg 1989), Genetic Programming (Koza 1992), and Differential Evolution (DE) (Storn and Price 1997). The growing body of SI and similar nature-inspired methods includes Particle Swarm Optimisation (PSO) (Kennedy and Eberhart 1995), Ant Algorithms (Dorigo et al. 1996), Bacterial Foraging Optimisation (Passino 2002), the Termite algorithm (Roth and Wicker 2003), the Bees Algorithm (Pham et al. 2005; Pham et al. 2006), the Artificial Bee Colony Algorithm (ABC) (Karaboga 2005), Invasive Weed Optimization (Mehrabian and Lucas 2006), the Firefly algorithm (Yang 2008), Cuckoo Search (Yang and Deb 2008), etc.

All the above metaheuristics evolve the desired problem solution through iterative cycles of interactions between the population members. Whilst EAs rely on competition amongst agents as the driving force for evolution, SI approaches usually exploit mechanisms of cooperation between the agents. The SI paradigm also prescribes a fully decentralised population structure in favour of system self-organisation and distributed intelligence. On the contrary, traditional EAs employ centralised selection, mating, and replacement procedures. Many agent-based procedures lie at the boundaries between EAs and SI. Some swarm-inspired algorithms (e.g. the Bees Algorithm) use relatively centralised


population selection and replacement schemes, or employ EA operators (e.g. ABC). Conversely, fairly decentralised swarm-like EA procedures can be obtained using population niching approaches (Engelbrecht 2005) and crowding methods (Baeck et al. 1991; Mengshoel and Goldberg 2008). Cross-fertilisation amongst different fields and hybridisation between different procedures (Juang 2004; Shi et al. 2003; Pham and Sholedolu 2008) contribute to increasing the overlap between different classes of algorithms.

Given the rapidly rising number of population-based optimisation methods and variants, how can the practitioner identify the most efficient problem-solver? How should a particular algorithm be configured? Wolpert and Macready (1997) proved that the performance of all black-box optimisation methods is equivalent when averaged across all possible problems (the No Free Lunch Theorem). That is, there is no universal problem solver; rather, the success of a particular procedure on a given task depends on how well its operators match the problem. The lack of an all-purpose algorithm makes it impossible to choose the optimisation method a priori.

This poses the problem of understanding precisely what the specific capabilities of each optimisation algorithm are: that is, of understanding on which kinds of fitness landscapes a particular search procedure performs best, and which procedures perform best on a given kind of landscape. Given this knowledge, any known feature of the problem domain can be used to determine suitable choices of procedures, operators, and configurations. In a reverse process, by testing a small number of algorithms or configurations, the designer may obtain important clues about the nature of the fitness landscape. Finally, a deep understanding of the action and robustness of the parameters and operators of an


algorithm might help the designer to choose the right configuration through a minimal set of crucial choices.

In the field of continuous optimisation, the benchmarking of EA and SI methods is often performed on a variable number of popular analytical test functions (Bersini et al. 1996; Adorio 2005; Suganthan et al. 2005; Molga and Smutnicki 2005; Tang et al. 2007; Tang et al. 2009; Herrera et al. 2010). Unfortunately, these functions are neither a systematic nor a representative sample of the range of possible fitness surfaces (MacNish 2007), and sometimes replicate similar cases. As a result, many evaluations of EA and SI algorithms based on such functions (Elbeltagi 2005; Socha and Dorigo 2008; Karaboga and Basturk 2008; Pham and Castellani 2009; Balazs 2010; El-Abd 2012) are inherently limited and biased. Other authors included real-world applications in the benchmarking test suite (Elbeltagi 2005; Balazs 2010). Although useful in providing further insights into the capabilities of evolutionary and SI algorithms, these studies did not address the need for a systematic and unbiased evaluation.

This paper presents a comparative study of four nature-inspired population-based continuous optimisation methods, namely EAs, PSO, ABC and the Bees Algorithm. The aim of the investigation is to characterise the action of the four procedures and their operators, and to understand the effect of different choices of algorithm configuration. Without loss of generality, the optimisation problem will be assumed henceforth to be a minimisation task. For each algorithm, 32 combinations of different operators and parameter settings are tested. In order to evaluate the different configurations thoroughly, 25 benchmark functions were purpose-designed by the authors. Although not exhaustive, these functions include a wide range of fitness


landscapes, and aim to represent a first step towards a fair and methodical study of optimisation methods. The proposed study expands, systematises, and extends the study of Pham and Castellani (2009), replacing the standard benchmarks from the literature with the 25 purpose-made functions, adding a methodical experimental study of the effects of operators and parameterisations, and including the experimental verification of two common search biases.

Section 2 presents the continuous function benchmarks. Section 3 describes the optimisation algorithms and configurations tested. Section 4 states the criteria used for comparing the algorithms. Sections 5 and 6 present the experimental results of, respectively, the comparison between different configurations and between different algorithms. Section 7 discusses the results. Section 8 concludes the paper and gives suggestions for further work.

2 FUNCTION OPTIMISATION BENCHMARKS

Pham and Castellani (2009) compared the performance of EAs, PSO, ABC and the Bees Algorithm on the twelve function minimisation benchmarks reported in Table 1. These functions are widely used in the literature, and have often been included in optimisation test suites (Bersini et al. 1996; Adorio 2005; Molga and Smutnicki 2005). Thanks to their popularity, they have become de facto standards for the benchmarking and comparison of algorithms (MacNish 2007).

Unfortunately, many of these functions have similar features. For example, it is often possible to locate approximately the position of the global minimum. This is true for the four unimodal functions (Easom, Hypersphere, Martin & Gaddy, and Rosenbrock 2D), the Schaffer function (a damped sinusoid), and functions composed of an overall unimodal characteristic and a cosinusoidal "noise" component (Ackley, Griewank, Rastrigin). In all these cases, exploitative search strategies may be advantaged. Indeed, the experimental results reported by Pham and Castellani (2009) confirm this assumption on many of the above benchmarks.

Moreover, many of the functions listed in Table 1 map the global minimum at (or near) the centre of the solution space. This is known to boost the performance of algorithms that employ averaging operators (Monson and Seppi 2005). MacNish (2007) pointed out other possible sources of evaluation bias, such as axial alignment of the peaks (Ackley, Griewank, Rastrigin, Schwefel), regular spacing of the peaks (Ackley, Griewank, Rastrigin, Schwefel), rotational symmetry of the function (Easom, Hypersphere, Schaffer), and linear separability of the function into a sum of independent one-variable functions (Hypersphere, Easom, Rastrigin, Ackley).

Some of the above biases can be avoided via Euclidean transformations of the variable space. Suganthan et al. (2005), Tang et al. (2007; 2009) and Herrera et al. (2010) avoided the origin bias by translating the fitness landscape (and hence the optimum) of the benchmarks included in the test suite (Ackley, Hypersphere, Rastrigin, and Rosenbrock). Likewise, the functions are made linearly inseparable by rotating the fitness landscape (Suganthan et al. 2005; Tang et al. 2009). MacNish (2007) proposed a method to generate pseudo-random fractal landscapes of variable complexity. These landscapes allow the practitioner to evaluate optimisation algorithms free from known biases. However, their pseudo-random nature does not allow the user to control the generation of landscape features such as flat steps, narrow valleys, etc.
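To make the translation and rotation remedies concrete, the following sketch (illustrative code by the editor, not from the cited studies; the shift vector and rotation angle are arbitrary choices) wraps the standard 2-D Rastrigin function so that its global minimum no longer lies at the origin and its variables are no longer linearly separable:

```python
import numpy as np

def rastrigin(x):
    # Standard Rastrigin: global minimum 0 at x = 0 (origin-biased,
    # axially aligned, linearly separable).
    x = np.asarray(x, dtype=float)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def make_shifted_rotated(f, shift, angle):
    # Translate the optimum to `shift` and rotate the coordinate frame.
    # Translation removes the origin bias; rotation breaks separability.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])  # 2-D rotation matrix
    def g(x):
        z = R @ (np.asarray(x, dtype=float) - shift)
        return f(z)
    return g

shifted = make_shifted_rotated(rastrigin,
                               shift=np.array([30.0, -45.0]),  # arbitrary
                               angle=0.7)                      # arbitrary
print(rastrigin([0.0, 0.0]))   # 0.0: minimum at the origin
print(shifted([30.0, -45.0]))  # 0.0: minimum moved to the shift point
```

An optimiser with an averaging recombination bias would score well on `rastrigin` but lose that advantage on `shifted`, which is exactly the effect the translated test suites are designed to expose.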


In order to evaluate the performance of different optimisation algorithms on specific kinds of fitness landscapes, the authors developed a test suite of 25 continuous benchmark functions.
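As a minimal illustration of how such a suite can be exercised (the test function, bounds, and evaluation budget below are the editor's assumptions, not the paper's actual protocol), a uniform random-search baseline gives the weakest sensible yardstick against which population-based optimisers can be compared:

```python
import random

def sphere(x):
    # 2-D hypersphere benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

def random_search(f, bounds, evaluations, seed=42):
    # Uniform random sampling over the search box; returns the best
    # point and function value found within the evaluation budget.
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(evaluations):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x, fx = random_search(sphere, bounds=[(-100.0, 100.0)] * 2, evaluations=5000)
print(fx)  # best minimum found; small positive value for this budget
```

Any of the four algorithms under study should comfortably beat this baseline on the same budget; a configuration that does not is a strong hint of a mismatch between its operators and the landscape.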

2.1 Benchmark Functions

All the benchmarks are two-dimensional, with the exception of functions 23, 24, and 25. They are defined within the interval of numbers {xi∈R | -100