Int. J. Swarm Intelligence, Vol. 1, No. 2, 2014

An improved self-adaptive artificial bee colony algorithm for global optimisation

Anguluri Rajasekhar*
Department of Electrical Engineering, National Institute of Technology, Warangal, 506004, Andhra Pradesh, India
E-mail: [email protected]
*Corresponding author

Millie Pant
Department of Applied Science and Engineering, Indian Institute of Technology, Roorkee, 247667, Roorkee, India
E-mail: [email protected]

Abstract: In this paper we propose an improved self-adaptive artificial bee colony algorithm (IS-ABC) for accurate numerical function optimisation. A modified self-adaptive mechanism based on Rechenberg's 1/5th success rule is embedded in the structure of ABC to enhance the speed of the algorithm. The proposed algorithm has been tested on various numerical benchmark functions, including the non-traditional functions proposed in the CEC 2008 competition. To further validate its performance, we also consider two challenging real-world continuous optimisation problems. The results obtained with IS-ABC are compared with those obtained by recent state-of-the-art variants of PSO, DE, etc.

Keywords: artificial bee colony algorithm; global optimisation; Rechenberg's rule.

Reference to this paper should be made as follows: Rajasekhar, A. and Pant, M. (2014) 'An improved self-adaptive artificial bee colony algorithm for global optimisation', Int. J. Swarm Intelligence, Vol. 1, No. 2, pp.115–132.

Biographical notes: Anguluri Rajasekhar received his BTech from the Department of Electrical Engineering, National Institute of Technology, Warangal, India in 2013. His primary research interests include control theory, fractional calculus and its applications in control systems, optimisation and soft computing. He has published more than 20 research articles in impact-factor journals and in the proceedings of reputed international conferences.

Millie Pant has been working as an Associate Professor at the Department of Applied Science and Engineering, Saharanpur Campus of IIT Roorkee, since 2012. She received her MSc in Mathematics from CCS University, Meerut and her PhD from the Mathematics Department of IIT Roorkee. Her areas of interest include numerical methods, optimisation and operations research, evolutionary algorithms and swarm intelligence techniques. She has published more than 100 research papers in various journals and conferences of national and international repute.

Copyright © 2014 Inderscience Enterprises Ltd.

1 Introduction

Swarm intelligence algorithms belong to the class of algorithms based on the collective and intelligent behaviour of various species. Ant colony optimisation (ACO) (Dorigo and Di Caro, 1999) is perhaps the oldest algorithm belonging to this class; it is based on the behaviour of ants. Another popular algorithm belonging to this class is particle swarm optimisation (PSO) (Kennedy and Eberhart, 1995). These algorithms have been applied successfully to a wide range of problems occurring in various fields (Kwang and Weng, 1999; Jatoth and Rajasekhar, 2010; Radha et al., 2010; Selvakumar and Thanushkodi, 2007). Algorithms based on the behaviour of honey bees, fireflies, glow worms, termites, etc., are gaining a lot of popularity these days because of their simple structure and their efficiency in solving complex optimisation problems which are otherwise difficult to solve by traditional methods. These algorithms are based on various special traits and characteristics displayed by these species. A few of them include the bat algorithm (Yang and Gandomi, 2012), the cuckoo search algorithm (Gandomi et al., 2013) and the differential search algorithm (Civicioglu, 2012).

Artificial bee colony (ABC) is one of the latest additions to the class of swarm intelligence algorithms for solving optimisation problems. It was proposed by Karaboga and Basturk (2007), Karaboga (2005) and Karaboga et al. (2012) for solving multi-variable and multi-modal functions. A detailed description of ABC is given in the next section. ABC is a simple and efficient optimisation algorithm based on the foraging behaviour of honey bees. However, it has been observed that, like other algorithms of a similar kind, ABC has some inherent drawbacks which limit its performance. For example, the structure of ABC is such that it favours exploration more than exploitation (Guopu and Sam, 2010).

Researchers have proposed various modifications to the search mechanism of ABC to improve its performance on practical problems as well as on numerical functions (Rajasekhar et al., 2012a, 2012b; Sharma et al., 2012). In the present study we propose a modified variant of ABC based on an improved self-adaptive mechanism derived from the 1/5th success rule. This added mechanism helps in improving the exploitation capabilities of basic ABC, thereby improving its performance. The remainder of the paper is organised as follows: in the next section we give a brief overview of the basic ABC algorithm; in Section 3 the proposed IS-ABC is described; the experimental setup and benchmark problems are given in Section 4; real-life problems solved with IS-ABC are given in Section 5; finally, the paper concludes with Section 6.

2 Artificial bee colony

The ABC algorithm is a swarm intelligence search technique inspired by the foraging behaviour of honey bees. ABC classifies the foraging artificial bees into three groups, namely employed bees, onlooker bees and scouts. The first half of the colony consists of the employed bees and the second half consists of the onlooker bees. A bee that is currently searching for food or exploiting a food source is called an employed bee, and a bee waiting in the hive to make a decision about choosing a food source is called an onlooker. For every food source there is only one employed bee. The employed bee of an abandoned food source becomes a scout. In ABC, each solution to the problem is considered a food source and represented by a D-dimensional real-valued vector, where the fitness of the solution corresponds to the nectar amount of the associated food source. Like other swarm-intelligence-based mechanisms, ABC also progresses towards the optimum value iteratively. The following steps are repeated until the algorithm meets a termination criterion:

•   appraise the nectar amounts of the food sources by sending the employed bees
•   after the information is shared by the employed bees, select the food sources with the help of the onlookers based on probability, then determine the nectar amounts of the food sources
•   determine the abandoned food sources with the help of the scout bees and send them to find new food sources.

The main components of the ABC algorithm are explained in detail below in a step-by-step representation.

1 Initialisation of parameters

The ABC algorithm enjoys the advantage of having only a small number of parameters. The basic parameters are the number of food sources (FS), which is equal to the number of employed bees (ne) or onlooker bees (no), and the parameter limit, which sets the number of trials after which a food source is assumed to be abandoned. The only constraint considered in ABC is that the number of employed bees equals the number of food sources, i.e., for every food source there is one employed bee.

2 Initialisation of individuals (bees)

ABC starts by initialising all employed bees with randomly generated food sources. In general, the position of the ith food source, which corresponds to a solution in the search space, is represented as Xi = (xi1, xi2, …, xiD) and is produced by equation (1).

    xij = lbj + rand ∗ (ubj − lbj)        (1)

where i = 1, 2, …, FS and j = 1, 2, …, D; FS is the number of food sources and D is the dimension of the search space; rand is a random number in the range [0, 1]; ubj and lbj are the upper and lower bounds of the jth dimension, respectively. Food sources are then assigned randomly to bees; hereafter, the employed and onlooker bees exploit the food sources and new sources are explored by means of the scout bee.
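As an illustration, the initialisation of equation (1) can be sketched as follows. This is a minimal Python sketch under our own naming and list-based representation; it is not taken from the paper:

```python
import random

def init_food_sources(fs, dim, lb, ub, rng=random):
    """Equation (1): x_ij = lb_j + rand * (ub_j - lb_j), with rand ~ U[0, 1]."""
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(fs)]
```

Each of the FS rows is one food source, i.e., one candidate solution in the D-dimensional search space.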

3 Employed bee phase

In this phase, a new solution is generated by each employed bee in the neighbourhood of its present position, depending on local information, and its quality is then evaluated. A new food source is generated with the help of equation (2).

    vij = xij + φij (xij − xkj)        (2)

Here vi is the new food source in the neighbourhood of xi; k ∈ {1, 2, …, FS} such that k ≠ i and j ∈ {1, 2, …, D} are randomly chosen indices; φij is a uniformly distributed random number in the range [–1, 1]. After generating vi, a fitness value fiti corresponding to the food source of the ith bee for a minimisation problem is computed as follows:

    fiti = 1 / (1 + fi),   if fi ≥ 0
    fiti = 1 + |fi|,       if fi < 0        (3)

where fi is the objective function value of the food source corresponding to the ith bee. Once the new solution is obtained, a greedy selection mechanism is employed between the old and new candidate solutions xi and vi: the better one is selected based on the fitness values and the other is discarded. If the source at vi is better than xi in terms of profitability, the employed bee memorises the new position and discards the old one; otherwise the previous position is retained in memory.
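One employed-bee move, combining equations (2) and (3) with the greedy selection just described, can be sketched as below. The function names and the in-place update style are our own illustrative assumptions, and the perturbation range is the basic ABC range [–1, 1]:

```python
import random

def fitness(f):
    """Equation (3): fit = 1/(1+f) for f >= 0, else 1 + |f|."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def employed_step(pop, objs, i, func, rng=random):
    """One employed-bee move on food source i: equation (2) plus greedy
    selection. Returns True when the mutation was successful."""
    k = rng.choice([m for m in range(len(pop)) if m != i])  # partner k != i
    j = rng.randrange(len(pop[i]))                          # random dimension j
    phi = rng.uniform(-1.0, 1.0)                            # phi_ij in [-1, 1]
    v = pop[i][:]
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    fv = func(v)
    if fitness(fv) > fitness(objs[i]):                      # greedy selection
        pop[i], objs[i] = v, fv
        return True
    return False
```

Because only the fitter of xi and vi survives, the objective value of each food source can never worsen during this phase.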

4 Determine probability values for probabilistic selection

When all employed bees complete their foraging, they perform dances on the dance area to share the information about the nectar amounts and the positions of their sources with the onlooker bees. An onlooker bee observes the nectar information from all employed bees and selects a food source position with a probability related to its nectar amount; this probabilistic selection depends on the fitness values of the solutions in the population. In ABC, a roulette-wheel fitness-based selection scheme is used. The probability is given by equation (4):

    Pi = fiti / Σ_{i=1..FS} fiti        (4)

where fiti is the fitness value of the ith food source xi.
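Equation (4) amounts to normalising the fitness values so that they sum to one; a one-line sketch (the function name is ours):

```python
def selection_probabilities(fits):
    """Equation (4): P_i = fit_i / sum of all fitness values."""
    total = sum(fits)
    return [f / total for f in fits]
```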

5 Onlooker bee phase

In the ABC algorithm, for each food source a random real number in the range [0, 1] is generated. If the probability value of equation (4) associated with the current source is greater than the generated random number, the onlooker bee modifies that food source position by making use of equation (2). After the source is evaluated, a greedy mechanism similar to that of the employed bee phase is applied and the values are updated.

6 Exploration of new food sources by scout bees

After completion of one cycle of this cyclic process, i.e., when all employed and onlooker bees complete their searches, the algorithm checks whether there is any exhausted food source that needs to be abandoned. A food source abandoned by a bee is replaced by a new food source discovered by the scout. This is done by producing a position randomly and replacing the abandoned source with it. This operation is similar to initialising a new food source for an employed bee (discussed earlier).

Algorithm 1  Pseudo code for the ABC algorithm

1  Initialisation.
2  Move the employed bees onto their food sources and evaluate their nectar amounts.
3  Place the onlookers depending upon the nectar amounts obtained from the employed bees.
4  Send the scouts to explore new food sources.
5  Memorise the best food sources obtained so far.
6  If a termination criterion is not satisfied, go to step 2; otherwise stop the procedure and display the best food source obtained so far.
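The six steps of Algorithm 1 can be sketched end to end as follows. This is our illustrative reading of the basic ABC loop under assumed parameter defaults, not the authors' implementation:

```python
import random

def abc_minimize(func, dim, lb, ub, fs=10, limit=40, cycles=300, seed=0):
    """Minimal ABC loop following Algorithm 1 (an illustrative sketch)."""
    rng = random.Random(seed)
    fit = lambda f: 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

    def random_source():
        return [lb + rng.random() * (ub - lb) for _ in range(dim)]

    pop = [random_source() for _ in range(fs)]     # one employed bee per source
    objs = [func(x) for x in pop]
    trials = [0] * fs                              # non-improvement counters
    best_i = min(range(fs), key=lambda i: objs[i])
    best_x, best_f = pop[best_i][:], objs[best_i]

    def try_move(i):
        nonlocal best_x, best_f
        k = rng.choice([m for m in range(fs) if m != i])   # partner k != i
        j = rng.randrange(dim)                             # random dimension
        v = pop[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (pop[i][j] - pop[k][j])  # equation (2)
        fv = func(v)
        if fit(fv) > fit(objs[i]):                         # greedy selection
            pop[i], objs[i], trials[i] = v, fv, 0
            if fv < best_f:
                best_x, best_f = v[:], fv
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(fs):                                # employed bee phase
            try_move(i)
        total = sum(fit(f) for f in objs)
        for i in range(fs):                                # onlooker bee phase
            if rng.random() < fit(objs[i]) / total:        # equation (4)
                try_move(i)
        worst = max(range(fs), key=lambda i: trials[i])    # scout phase
        if trials[worst] > limit:
            pop[worst], trials[worst] = random_source(), 0
            objs[worst] = func(pop[worst])
    return best_x, best_f
```

Running this sketch on a simple convex function such as the sphere illustrates the iterative progress towards the optimum described above.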

3 Improved self-adaptive ABC algorithm

In this section we discuss the improved self-adaptive ABC (IS-ABC) algorithm and its adaptive mechanism. Akay and Karaboga (2011) modified the original ABC algorithm based on the frequency of perturbation (i.e., the number of perturbations) and the magnitude of perturbation. Their method states that the global search of ABC is hampered by updating only one variable, and hence they suggested using a frequency control rate to choose the number of variables to be updated. Our studies indicate that updating one or more variables in a lower-dimensional continuous optimisation problem does not affect the global search, but instead calls for another trial-and-error method to choose a better frequency control rate, which may or may not give good results for all the functions. This control rate can be handy for solving large-scale optimisation problems of dimension greater than 500, as they consist of more dimensions, and changing only one variable then calls for a larger number of function evaluations (NFEs). Hence we focus only on updating the magnitude of perturbation, which is the key factor in the global search of ABC.

3.1 Structure of IS-ABC

In the basic version of ABC, a new solution is produced by adding to the current solution a random perturbation: the difference of solutions (xij and xkj) weighted by a random real number, which helps avoid getting stuck at local minima. The value of φij varies within the range [–1, 1], and this range remains constant in traditional ABC, hampering the speed whenever a random value far from the appropriate one is selected in the course of iterations. As the new solution is produced with the help of equation (2), meticulous care has to be taken to enhance the speed of the algorithm, and this is done with the help of a variance operator. To accelerate the search of the algorithm, whatever the landscape of the problem, in IS-ABC the range is made dynamic and varies as [–AF, AF]. Hence the magnitude of perturbation is controlled by a control parameter called the acceleration factor AF. To free the algorithm from a constant parameter AF, this operator is made adaptive based on the speed of convergence. A lower value of AF allows the search to fine-tune the process in small steps, leading to slow convergence, while a larger value of AF enhances the search speed but reduces the exploitation capability of the algorithm.

To implement the adaptive mechanism, automatic tuning of AF is done with the help of an improved Rechenberg 1/5th mutation rule, which speeds up the algorithm relative to the traditional Rechenberg 1/5th rule (the 1/5th success rule). The modified rule is given in equation (5); the step size is changed according to this rule every n cycles:

    AF = min(AF / cd, Dm),   if ϕ(n) > 1/5
    AF = AF,                 if ϕ(n) = 1/5
    AF = AF · cd,            if 1/20 < ϕ(n) < 1/5
    AF = min(2 · AF, Dm),    if ϕ(n) < 1/20        (5)

Here ϕ(n) denotes the percentage of successful mutations over n cycles, cd = 0.85, and Dm is the diameter of the search region. An important safeguard of this modified rule is that, since the search is conducted over a bounded space, the value of AF is limited to a maximum of Dm.
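Equation (5) can be expressed directly as a small update function. This is a sketch; the function name and argument order are ours, with cd = 0.85 and the cap Dm as stated in the text:

```python
def update_af(af, phi_n, dm, cd=0.85):
    """Modified 1/5th success rule of equation (5); phi_n is the success
    ratio over the last n cycles and dm caps AF at the region diameter."""
    if phi_n > 1.0 / 5:
        return min(af / cd, dm)   # many successes: enlarge the step
    if phi_n == 1.0 / 5:
        return af                 # balanced: keep AF unchanged
    if phi_n > 1.0 / 20:
        return af * cd            # few successes: shrink the step
    return min(2.0 * af, dm)      # almost no successes: jump out of a local minimum
```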

3.2 Improved rule vs. traditional rule

According to the Rechenberg 1/5th rule (David et al., 2001), "The ratio of successful mutations to all mutations should be 1/5. If this ratio is greater than 1/5, increase the mutation strength, while if it is less than 1/5, decrease the mutation strength." A mutation is said to be successful if the mutated offspring is better than the parent solution. If many successful mutations are observed, this indicates that the solutions reside in a promising region of the search space; hence it is time to increase the mutation strength in the hope of finding even better solutions closer to the optimum. If AF used the original Rechenberg rule, the update would be as follows:

    AF = AF · cd,    if ϕ(n) < 1/5
    AF = AF / cd,    if ϕ(n) > 1/5
    AF = AF,         if ϕ(n) = 1/5        (6)

By counting the number of successful mutations over n trials, Schwefel suggested cd = 0.817 in the updating rule. Several self-adaptive mechanisms are cited in the literature (Deb, 2009), but the main advantage of this rule is its simple structure and dynamic performance when improving EAs. According to Rudolph (1999), self-adaptation can improve the performance of a search algorithm, but it does not always guarantee convergence to the global optimum. His experiments showed that the Rechenberg 1/5th rule fails to solve a non-convex function with one local and one global minimum. When the search process is exploiting near a local minimum, the percentage of successful mutations falls below 1/5, and the 1/5-success rule then reduces AF, which restricts the search to an even smaller neighbourhood; this becomes a major issue in solving shifted and rotated complex functions, as they contain many local minima. To ensure that IS-ABC does not get stuck at a local optimum, the modified Rechenberg 1/5th rule was used, which takes care of the problem discussed. The modified rule remains almost the same, except that the adaptation mechanism is slightly changed. As mentioned earlier, AF should never exceed Dm, hence it is worthwhile to restrict it whenever it reaches Dm. The most appreciable part of this scheme is that when ϕ(n) < 1/20, AF is increased so that the search can escape any local minimum (Greenwood and Zhu, 2001).

Algorithm 2  Pseudo code for the IS-ABC algorithm

Step 1  Initialise the population xi, i = 1, 2, …, FS, j = 1, 2, …, D; set counti = 0 (counti is the non-improvement counter)
Step 2  Evaluate the individual bees
Step 3  Cycle = 1
REPEAT
{EmployedBeePhase}
Step 4  For i = 1 to FS do
    1  Produce a new food source vi for the employed bee of food source xi by using equation (2) and update the value of AF accordingly using equation (5).
    2  Apply a greedy selection mechanism between vi and xi and select the better food source.
    3  If solution xi does not improve, increment counti; otherwise set counti = 0.
End For
Step 5  Calculate the probability values Pi by equation (4) for the solutions using the fitness values.
{OnlookerBeePhase}
Initialise a temporary variable t = 0, i = 1
REPEAT
Step 6  IF rand < Pi THEN
    1  Produce a new food source vi for the onlooker bee of food source xi by using equation (2) and update the value of AF accordingly using equation (5).
    2  Apply a greedy selection mechanism between vi and xi and select the better food source.
    3  If solution xi does not improve, increment counti; otherwise set counti = 0.
    4  t = t + 1
End IF
UNTIL (t = FS)
{DetermineScout}
Step 7  IF max(count) > limit THEN
    1  Replace xi with a new randomly produced solution: xij = lbj + rand ∗ (ubj − lbj)
End IF
{Memorise the best solution achieved so far}
Step 8  Cycle = Cycle + 1
UNTIL (Cycle = Maximum cycle number)

4 Experimental section on benchmark functions

In this section we provide empirical validation of the proposed method on various benchmark functions, including the shifted benchmark functions suggested in CEC 2008. A total of 13 benchmarks, along with two real-world optimisation problems, are considered.

4.1 Description of benchmarks

Mathematical representations of the traditional and shifted benchmarks are given in Tables 1 and 2, respectively. We chose three uni-modal and three multi-modal traditional benchmarks and evaluated the performance in 30 dimensions. In addition, we considered a total of seven shifted functions (Tang et al., 2007) in 100 dimensions (a test companion is available at http://nical.ustc.edu.cn/cec08ss.php).

Table 1  Description of traditional benchmark functions

| Function | Mathematical representation | Range of search (S) | Theoretical optimum |
|---|---|---|---|
| Sphere | f1(x) = Σ_{i=1..D} xi² | (–100, 100) | f1(0) = 0 |
| Schwefel | f2(x) = 418.9829 ∗ D − Σ_{i=1..D} xi · sin(√|xi|) | (–500, 500) | f2(420.97) = 0 |
| Rosenbrock | f3(x) = Σ_{i=1..D−1} [100(x_{i+1} − xi²)² + (xi − 1)²] | (–30, 30) | f3(1) = 0 |
| Rastrigin | f4(x) = Σ_{i=1..D} [xi² − 10·cos(2π·xi) + 10] | (–5.12, 5.12) | f4(0) = 0 |
| Griewank | f5(x) = (1/4,000) · Σ_{i=1..D} xi² − Π_{i=1..D} cos(xi/√i) + 1 | (–600, 600) | f5(0) = 0 |
| Ackley | f6(x) = −20·exp(−0.2·√((1/D)·Σ_{i=1..D} xi²)) − exp((1/D)·Σ_{i=1..D} cos(2π·xi)) + 20 + e | (–32, 32) | f6(0) = 0 |

Table 2  Description of non-traditional shifted problems of CEC 2008

| Fun | Name | Properties | Range of search |
|---|---|---|---|
| F1 | Shifted sphere | Uni-modal, separable, scalable | (–100, 100) |
| F2 | Shifted Schwefel's | Uni-modal, non-separable | (–100, 100) |
| F3 | Shifted Rosenbrock's | Multimodal, non-separable; a narrow valley from local optimum to global optimum | (–100, 100) |
| F4 | Shifted Rastrigin's | Multimodal, separable, large number of local optima | (–5, 5) |
| F5 | Shifted Griewangk's | Multimodal, non-separable | (–600, 600) |
| F6 | Shifted Ackley's | Multimodal, separable | (–32, 32) |
| F7 | FastFractal DoubleDip | Multimodal, non-separable | (–1, 1) |

4.2 Algorithmic parameter settings

One of the most important advantages of the IS-ABC algorithm is that it has few parameter settings, almost the same as those of the ABC algorithm. The parameters used in this work are summarised in Table 3. The other parameter to be considered, apart from the algorithmic ones, is the termination criterion, for which we used a limit on the NFEs. For the traditional functions a limit of 200,000 NFEs was chosen, and for the CEC 2008 benchmarks we used the termination criteria specified by the competition organisers for a fair comparison with the rest of the methods.

Table 3  Algorithmic parameters

| Parameter | Value |
|---|---|
| No. of bees (NB) | 20 |
| Food sources (FS) | NB/2 |
| Employed bees (ne) | 50% of bees |
| Onlooker bees (no) | 50% of bees |
| Limit | ne ∗ D |
| Initial acceleration factor (AF) | 1 |

4.3 Discussion on results

The performance of the proposed IS-ABC is analysed on a set of 13 benchmark problems comprising six traditional benchmark functions and seven non-traditional functions. 25 independent runs are made for all the problems and the corresponding average results are recorded in Tables 4 and 5 in terms of average fitness and standard deviation (std).

Table 4  Comparison of IS-ABC on traditional benchmarks with other variants in terms of average fitness and standard deviation (std, in parentheses)

| Function | IS-ABC | PSO | DE | FIPS | CSA |
|---|---|---|---|---|---|
| Sphere (f1) | 4.27e–16 (9.36e–17) | 9.78e–30 (2.5e–29) | 9.8e–14 (8.4e–14) | 2.69e–12 (6.84e–13) | 2.33e–03 (5.58e–04) |
| Schwefel (f2) | 3.81e–04 (9.39e–13) | 1.10e+03 (2.56e+02) | 5.7e+01 (7.6e+01) | 2.05e+03 (9.58e+02) | 2.25e+02 (8.43e+01) |
| Rosenbrock (f3) | 5.65e–02 (3.12e–03) | 2.93e+01 (2.51e+01) | 2.1e+00 (1.5e+00) | 2.45e+01 (2.19e–01) | 1.46e–01 (1.00e–01) |
| Rastrigin (f4) | 1.71e–14 (2.75e–14) | 2.90e+01 (7.70e+00) | 7.1e+01 (2.1e+01) | 7.30e+01 (1.24e+01) | 3.08e–01 (4.60e–01) |
| Griewangk (f5) | 0 (0) | 8.13e–03 (7.16e–03) | 0 (0) | 1.16e–06 (1.87e–06) | 6.50e–03 (2.03e–03) |
| Ackley (f6) | 3.32e–14 (3.53e–15) | 3.94e–14 (1.12e–14) | 9.7e–11 (5.0e–11) | 4.81e–07 (9.17e–08) | 1.13e–02 (1.65e–03) |

Table 5  Comparison of IS-ABC on CEC 2008 benchmarks with other variants in terms of average fitness and standard deviation (std, in parentheses)

| Function | IS-ABC | EPUS-PSO | MLCC | DEwSAcc |
|---|---|---|---|---|
| F1 | 1.94e–15 (2.03e–16) | 7.47e–01 (1.70e–01) | 6.82e–14 (2.32e–14) | 5.68e–14 (0.00e+00) |
| F2 | 1.41e+01 (6.73e+00) | 1.86e+01 (2.26e+00) | 2.52e+01 (8.72e+00) | 8.25e+00 (5.32e+00) |
| F3 | 1.86e+01 (1.78e+01) | 4.99e+03 (5.35e+03) | 1.49e+02 (5.72e+01) | 1.44e+02 (5.84e+01) |
| F4 | 0.00e+00 (0.00e+00) | 4.71e+02 (5.94e+01) | 4.38e–13 (9.21e–14) | 4.37e+00 (7.65e+00) |
| F5 | 1.78e–16 (1.49e–16) | 3.72e–01 (5.60e–02) | 3.41e–14 (1.16e–14) | 3.06e–14 (7.86e–15) |
| F6 | 1.04e–13 (3.89e–15) | 2.06e+00 (4.40e–01) | 1.11e–13 (7.86e–15) | 1.12e–13 (1.53e–14) |
| F7 | –1.54e+03 (4.52e+00) | –8.55e+02 (1.35e+01) | –1.54e+03 (2.52e+00) | –1.36e+03 (2.45e+01) |

Table 4 summarises the traditional benchmark results of IS-ABC, along with the results of PSO, FIPS (Liang et al., 2006), DE (Storn and Price, 1997) and CSA (Gong et al., 2010). We can observe that except for the sphere function, where PSO was slightly better than the proposed IS-ABC, IS-ABC emerged as a clear winner on all the remaining functions. The results of IS-ABC on the CEC large-scale optimisation problems, in terms of average fitness and std, are tabulated in Table 5. From this table we can once again observe that even for large-scale complex optimisation problems, the proposed IS-ABC gave results as good as those of competing algorithms like EPUS-PSO (Hsieh et al., 2008), MLCC (Yang et al., 2008) and DEwSAcc (Zamuda et al., 2008).

5 Application of IS-ABC to real-world practical problems

5.1 FM sound wave parameter estimation

Frequency-modulated (FM) sound synthesis plays a vital role in several modern musical applications. In this section, an interesting application of the proposed IS-ABC to the optimisation of the parameters of an FM synthesiser is analysed. Genetic algorithms have been used for FM synthesis in the past (Horner et al., 1993; Herrera and Lozano, 2000). Here, the system is designed in such a way that it can automatically generate sounds similar to a target sound. The target sound is a .wav file. IS-ABC initialises a set of parameters, and the FM synthesiser generates the corresponding sounds. In the feature extraction step, the dissimilarities between the features of the target sound and the synthesised sounds are used to compute the fitness value. This process repeats until the synthesised sounds become very similar to the target.

The problem involves the determination of six real parameters X = {a1, ω1, a2, ω2, a3, ω3} of the FM sound wave given by equation (7), approximating it to the target sound wave given by equation (8), where θ = 2π/100. The parameters are defined in the range [–6.4, +6.35].

    y(t) = a1 · sin(ω1 · t · θ + a2 · sin(ω2 · t · θ + a3 · sin(ω3 · t · θ)))        (7)

    y0(t) = 1.0 · sin(5.0 · t · θ − 1.5 · sin(4.8 · t · θ + 2.0 · sin(4.9 · t · θ)))        (8)

The objective is to minimise the sum of squared errors given by equation (9). This problem is a highly complex multimodal function having strong epistasis (interrelation among the variables), with an optimum value of 0.

    f(X) = Σ_{t=0..100} (y(t) − y0(t))²        (9)
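Under our reading of equations (7)-(9), the cost function can be sketched as follows (identifier names are ours):

```python
import math

THETA = 2 * math.pi / 100  # theta in equations (7) and (8)

def fm_wave(p, t):
    """Equation (7): nested FM wave with parameters (a1, w1, a2, w2, a3, w3)."""
    a1, w1, a2, w2, a3, w3 = p
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

TARGET = (1.0, 5.0, -1.5, 4.8, 2.0, 4.9)  # target parameters from equation (8)

def fm_cost(p):
    """Equation (9): sum of squared errors over t = 0, 1, ..., 100."""
    return sum((fm_wave(p, t) - fm_wave(TARGET, t)) ** 2 for t in range(101))
```

The cost is zero exactly at the target parameter vector and grows with any mismatch, which is what makes the landscape so strongly epistatic.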

For this application we considered the algorithmic parameters of Table 3. A termination criterion of 10^6 function evaluations was used for designing the FM synthesiser. Figure 3(a) shows the waveform generated by IS-ABC tracking the original target waveform, and Figure 3(b) depicts the convergence of IS-ABC for the frequency-modulated sound synthesis design problem. IS-ABC outperformed the other methods proposed in the literature (Dasgupta et al., 2009).

Table 6  Mean and std values for the frequency modulator design problem

| IS-ABC | HPSO-TVAC | BFOA | ABFOA |
|---|---|---|---|
| 8.797e–05 (1.202e–05) | 7.653e–01 (1.154e–01) | 2.748e+00 (8.314e–01) | 3.65e–03 (8.51e–04) |

5.2 Design of PID controller for AVR system

In this subsection we consider a linearised model of a higher-order AVR system compensated with a PID controller. The block diagram representation is given in Figure 4. The performance index of the controller is given as

    min F(Kp, Ki, Kd) = e^(–α) · (ts – tr) + (1 – e^(–α)) · (Mp + Ess)

where:
•   Kp, Ki, Kd: gains of the PID controller (range: 0 ≤ Kp, Ki, Kd ≤ 2)
•   α: weighting factor
•   Mp: % overshoot
•   Ess: steady-state error
•   ts: settling time (sec)
•   tr: rise time (sec).
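Given measured time-domain indices, the performance index reduces to a simple weighted sum. The sketch below follows our reconstruction of the formula above; the function name and argument conventions are assumptions:

```python
import math

def pid_cost(mp, ess, ts, tr, alpha=1.0):
    """F(Kp, Ki, Kd) = e^(-alpha)*(ts - tr) + (1 - e^(-alpha))*(Mp + Ess).
    Small alpha emphasises rise/settling time; large alpha emphasises
    overshoot and steady-state error."""
    w = math.exp(-alpha)
    return w * (ts - tr) + (1.0 - w) * (mp + ess)
```

In an optimisation loop, mp, ess, ts and tr would be extracted from the simulated step response of the AVR system for each candidate (Kp, Ki, Kd).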

Figure 1  Convergence plots of 30-D functions f1–f6 (see online version for colours)

Figure 2  Convergence plots of 100-D functions F1–F7 (see online version for colours)

The performance criterion min F(Kp, Ki, Kd) is chosen in such a way that the designer can tune it according to the system requirements simply by varying the value of α. This objective function has quite interesting features: if α is less than 0.7, the rise time and settling time are reduced through the minimisation process; on the other hand, α can be set to a value greater than 0.7 to reduce the overshoot and steady-state error. The algorithmic parameters are kept the same as in Table 3, and the termination criterion was fixed at a total of 3,000 function evaluations.

Figure 3  (a) Actual target waveform and waveform generated by IS-ABC (b) Convergence of IS-ABC towards the optimum (see online version for colours)

Figure 4  Block diagram of AVR system

5.2.1 Performance of AVR system without PID controller

Figure 5(a) shows the terminal voltage step response of an AVR system without a PID controller; it is evident from the response that feedback alone is not effective in improving the performance.

Figure 5  (a) Step response of terminal voltage in AVR without controller (b) Response with IS-ABC PID controller (α = 1) (c) Response with IS-ABC PID controller (α = 1.5) (d) Convergence characteristics of PID controller parameters (α = 1) (e) Convergence characteristics of PID controller parameters (α = 1.5) (f) Convergence characteristics of IS-ABC PID controller (see online version for colours)

5.2.2 Performance of AVR system with IS-ABC tuned PID controller

To enhance the performance of the AVR system, a PID controller has been introduced, which requires optimum parameters Kp, Ki and Kd to achieve the desired performance. IS-ABC is employed to design the PID controller in such a way that the above-mentioned objective function is minimised. In this application we considered two values of α to study the responses and their time-domain indices as α changes. Figures 5(b) and 5(c) show the performance of the AVR system with α = 1 and α = 1.5, respectively, in the objective function; the corresponding time-domain indices are recorded in Table 7. Figures 5(d) and 5(e) depict the convergence characteristics of the PID parameters (for the two α values) towards their optimal values. Finally, the convergence characteristics of IS-ABC are shown in Figure 5(f). The introduction of the IS-ABC tuned PID controller greatly enhanced the AVR system performance: from Table 7 it is clear that the overshoot was 0 for both α values considered, and the steady-state error, followed by the settling time, were also improved to a great extent.

Table 7  Comparison of time-domain indices of the IS-ABC tuned PID-AVR system for different α values

| System | tr (sec) | Mp | ts (sec) | Ess | Fmin |
|---|---|---|---|---|---|
| Un-tuned | 0.2608 | 65.71% | 6.99 | 0.091 | - |
| α = 1 | 0.3228 | 0 | 0.5041 | 5.699e–08 | 0.3128 (0.0120) |
| α = 1.5 | 0.3531 | 0 | 0.5521 | 5.35e–08 | 0.2006 (0.0041) |

6 Conclusions

In the present study, a simple and effective variant of the ABC algorithm is proposed. The proposed algorithm, called IS-ABC, is based on an improved self-adaptive mechanism derived from Rechenberg's 1/5th success rule. The proposed modification helps in improving the exploitation capabilities of the basic ABC. The algorithm is analysed on a set of test problems as well as real-life problems. It is evident from the numerical results that the proposed algorithm is well suited for both test and real-life problems.

References

Akay, B. and Karaboga, D. (2011) 'A modified artificial bee colony algorithm for real-parameter optimization', Information Sciences, Vol. 192, pp.120–142.

Civicioglu, P. (2012) 'Transforming geocentric Cartesian coordinates to geodetic coordinates by using differential search algorithm', Computers and Geosciences, Vol. 46, pp.229–247.

Dasgupta, S., Das, S., Abraham, A. and Biswas, A. (2009) 'Adaptive computational chemotaxis in bacterial foraging optimization: an analysis', IEEE Transactions on Evolutionary Computation, Vol. 13, No. 4, pp.919–941.

David, B.F., Garry, B.F. and Kazuhiro, O. (2001) 'Multiple-vector self-adaptation in evolutionary algorithms', Biosystems, Vol. 61, Nos. 2–3, pp.155–162.

Deb, K. (2009) Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons Inc., New York, NY, USA.

Dorigo, M. and Di Caro, G. (1999) 'Ant colony optimization: a new meta-heuristic', Proceedings of the 1999 Congress on Evolutionary Computation, Vol. 2, p.3.

Gandomi, A.H., Yang, X-S. and Alavi, A.H. (2013) 'Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems', Engineering with Computers, Vol. 29, No. 1, pp.17–35.

Gong, M.G., Jiao, L.C. and Zhang, L.N. (2010) 'Baldwinian learning in clonal selection algorithm', Information Sciences, Vol. 180, No. 8, pp.1218–1236.

Greenwood, G.W. and Zhu, Q.J. (2001) 'Convergence in evolutionary programs with self-adaptation', Evolutionary Computation, Vol. 9, No. 2, pp.147–157.

Guopu, Z. and Sam, K. (2010) 'Gbest-guided artificial bee colony algorithm for numerical function optimization', Applied Mathematics and Computation, Vol. 217, No. 7, pp.3166–3173.

Herrera, F. and Lozano, M. (2000) 'Gradual distributed real-coded genetic algorithms', IEEE Transactions on Evolutionary Computation, Vol. 4, No. 1, pp.43–62.

Horner, A., Beauchamp, J. and Haken, L. (1993) 'Genetic algorithms and their application to FM matching synthesis', Computer Music Journal, Vol. 17, No. 4, pp.17–29.

Hsieh, T.S., Sun, T.Y., Liu, C.C. and Tsai, S.J. (2008) 'Solving large scale global optimization using improved particle swarm optimizer', CEC 2008 (IEEE World Congress on Computational Intelligence), pp.1777–1784.

Jatoth, R.K. and Rajasekhar, A. (2010) 'Adaptive bacterial foraging optimization based tuning of optimal PI speed controller for PMSM drive', Contemporary Computing, Communications in Computer and Information Science, Vol. 94, pp.588–599.

Karaboga, D. (2005) An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Computer Engineering Department, Engineering Faculty, Erciyes University.

Karaboga, D. and Basturk, B. (2007) 'A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm', Journal of Global Optimization, Vol. 39, No. 3, pp.459–471.
Karaboga, D., Gorkemli, B., Ozturk, C. and Karaboga, N. (2012) ‘A comprehensive survey: artificial bee colony (ABC) algorithm and applications’, Artificial Intelligence Review, ISSN: 0269–2821, pp.1–37. Kennedy, J. and Eberhart, R. (1995) ‘Particle swarm optimization’, IEEE International Conference on Neural Networks, Vol. 2, pp.1942–1948. Kwang, M.S. and Weng, H.S. (2003) ‘Ant colony optimization for routing and load-balancing: survey and new directions’, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 33, No. 10, pp.560–572. Liang, J.J, Qin, A.K. and Suganthan, P.N. (2006) ‘Comprehensive learning particle swarm optimizer for global optimization of multimodal functions’, IEEE Transactions on Evolutionary Computation, Vol. 10, No. 3, pp.281–295. Radha, T., Millie, P. and Kususm, D. (2010) ‘Optimal coordination of over-current relays using modified differential evolution algorithms’, Engineering Applications of Artificial Intelligence, Vol. 23, No. 5, pp.820–829. Rajasekhar, A., Das, S. and Suganthan, P.N. (2012a) ‘Design of fractional order controller for servohydraulic positioning system with micro artificial bee colony algorithm’, in Proc: IEEE World Congress on Computational Intelligence (CEC), Brisbane, Australia, pp.1–8. Rajasekhar, A., Pant, M. and Abraham, A. (2012b) ‘A hybrid differential artificial bee colony algorithm based tuning of fractional order controller for permanent magnet synchronous motor drive’, International Journal of Machine Learning and Cybernetics (IJMLC), pp.1–11. Rudolph, G. (1999) ‘Self-adaptation and global convergence: a counter-example’, Proceedings of the 1999 Congress on Evolutionary Computation, IEEE Press, Piscataway, New Jersey, pp.646–651.

132

A. Rajasekhar and M. Pant

Selvakumar, A.I and Thanushkodi, K. (2007) ‘A new particle swarm optimization solution to nonconvex economic dispatch problem’, IEEE Transactions on Power Systems, Vol. 22, No. 1, pp.42–51. Sharma, T.K., Pant, M. and Bansal, J.C. (2012) ‘Artificial bee colony with mean mutation operator for better exploitation’, in Proc. IEEE World Congress on Computational Intelligence (CEC), Brisbane, Australia, pp.1–7. Storn, R. and Price, K. (1997) ‘Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces’, Journal of Global Optimization, Vol. 11, No. 4, pp.341–359. Tang, K., Yao, X., Suganthan, P.N., MacNish, C., Chen, Y.P., Chen, C.M. and Yang, Z. (2007) Benchmark Functions for the CEC 2008 Special Session and Competition on Large Scale Global Optimization, Technical report, Nature Inspired Computation and Applications Laboratory, USTC, China. Yang, X-S and Gandomi, A.H. (2012) ‘Bat algorithm: a novel approach for global engineering optimization’, Engineering Computation, Vol. 29, No. 5, pp.464–483. Yang, Z., Tang, K. and Yao, X. (2008) ‘Multiple cooperative coevolution for large scale optimization’, CEC ‘2008 (IEEE World Congress on Computational Intelligence), pp.1663–1670. Zamuda, A., Brest, J., Boskovic, B. and Zumer, V. (2008) ‘Large scale global optimization using differential evolution with self-adaptation and cooperative co-evolution’, CEC 2008 (IEEE World Congress on Computational Intelligence), pp.3718–3725.