Artif Intell Rev (2011) 35:73–84 DOI 10.1007/s10462-010-9184-8

Discrete bee algorithms and their application in multivariable function optimization

Mina Salim · M. T. Vakil-Baghmisheh

Published online: 12 November 2010 © Springer Science+Business Media B.V. 2010

Abstract In this paper we present four discrete versions of two existing honey bee optimization algorithms: the discrete artificial bee colony algorithm (DABC) and three versions of the discrete fast marriage in honey bee optimization algorithm (DFMBO1, DFMBO2, and DFMBO3). In these discretized algorithms we utilize three logical operators, i.e. the OR, AND, and XOR operators. We then compare the performance of our algorithms with that of three other bee algorithms, i.e. the artificial bee colony (ABC), the queen bee (QB), and the fast marriage in honey bee optimization (FMBO), on four benchmark functions for various numbers of variables up to 100. The obtained results show that our discrete algorithms are faster than the other algorithms. In general, when the required precision of the answer and the number of variables are low, the difference in speed between our new algorithms and the other three algorithms is small, but as the precision of the answer and the number of variables increase, the number of function evaluations needed by the other algorithms grows beyond manageable amounts, and hence their success rates decrease. Among our proposed discrete algorithms, the DFMBO3 is always fast, and achieves a success rate of 100% on all benchmarks with an average number of function evaluations of not more than 1010.

Keywords Artificial bee colony algorithm · Discrete artificial bee colony algorithm · Fast marriage in honey bee optimization · Discrete fast marriage in honey bee optimization

M. Salim · M. T. Vakil-Baghmisheh
ICT Research Center, Faculty of Electrical & Computer Engineering, University of Tabriz, Tabriz, Iran
e-mail: [email protected]

M. Salim
e-mail: [email protected]

1 Introduction

Using evolutionary algorithms has become conventional for function optimization purposes; examples are genetic algorithms (Holland 1975), ant colony optimization (ACO) algorithms (Dorigo and Stützle 2005), particle swarm optimization (PSO) algorithms (Kennedy and Eberhart 1995), the simulated annealing (SA) algorithm (Kirkpatrick et al. 1983), and bee systems (Sato and Hagiwara 1997), to name a few.

Evolutionary algorithms are classified into two classes: continuous and discrete. Usually the discrete algorithms are more suitable for problems with discrete variables, but in most cases they can be successfully applied to continuous-variable problems as well. They are widely used in numerical problems (Huang and Arora 1997), training neural networks (Bonabeau et al. 1999; Hsu et al. 2001), nonlinear optimization (Arora and Huang 1994), etc.

The first discrete particle swarm optimization (DPSO) algorithm was introduced in 1997 (Eberhart and Kennedy 1997); it used a sigmoid function to limit the gene values of the velocity vector to the range [0, 1] and eliminated the inertia factor. In 2001, five new methods for discretizing the PSO algorithm were introduced (Mohan and Al-kazemi 2001). In 2005, a variant of the DPSO in which the velocity vector is defined as a function of position was applied to the traveling salesman problem (Liu et al. 2006). Also in 2005, a new variant of the DPSO algorithm was introduced (Afshinmanesh et al. 2005) in which the logical XOR, OR, and AND operators were used instead of the subtraction, addition, and multiplication operators, respectively. In this paper we apply these logical operators to discretize the bee algorithms.

The rest of the paper is organized as follows. Section 2 reviews the QB, ABC, and FMBO algorithms. Section 3 introduces our discretized algorithms. Section 4 introduces the benchmark functions. In Sect. 5 the simulation results are presented and analyzed. Section 6 concludes the paper.

2 A review of the utilized evolutionary algorithms

Evolutionary algorithms can obtain suboptimal solutions for very difficult problems for which no other method can find a good solution in a reasonable time. These solutions are called suboptimal because there is usually no way to prove their optimality. Among the evolutionary algorithms are the bee algorithms, which have been used in a vast range of optimization problems such as multivariable problems (Karaboga and Basturk 2007), nonlinear constraint problems (Qin et al. 2004), training of feed-forward neural networks (Karaboga et al. 2007), and navigation (Sandini et al. 1993). In this section we review three variants of existing bee algorithms, i.e. the QB, the ABC, and the FMBO algorithms.

2.1 The queen bee algorithm (QB)

The QB algorithm is a relatively new method in the class of collective search algorithms (Jung 2003). It shares some concepts with genetic algorithms, such as genes, population, selection, crossover, mutation, and reproduction. It has two main differences from standard genetic algorithms: (a) the best member of the population (called the queen) is fixed as one of the parents, and (b) two different mutation rates are used, i.e. a normal mutation rate and a strong mutation rate. Before mutation is performed, the population is sorted in ascending order of the cost function. The better members of the population (ζ%) are mutated with the normal mutation rate (probability p_m), and, to prevent premature convergence of the algorithm, the remaining members, i.e. (1 − ζ)% of the population, are mutated with the strong mutation rate (probability ṗ_m), where p_m < ṗ_m and 0.4 ≤ ζ ≤ 0.8.

Remarks

1. Selection of the second parent is performed with the roulette wheel selection method.


2. In our simulations we have used blending crossover. This operator works on two selected members of the population (the parents) to produce two offspring. A gene position is selected randomly; the values of the genes located before the selected point are swapped between the parents, and the values of the genes at the selected position and after it are replaced with two new values:

x1 = x_m − β(x_m − x_d)    (1)
x2 = x_d + β(x_m − x_d)    (2)

where x_m and x_d are the initial gene values, and β is a random value (0 < β < 1).
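For concreteness, the following is a minimal NumPy sketch (ours, not from the paper) of this blending crossover; the function name and the rng argument are our own choices.

```python
import numpy as np

def blending_crossover(parent_m, parent_d, rng=None):
    """Blending crossover: swap the genes before a random cut point,
    then blend the genes from the cut point onward via Eqs. (1)-(2)."""
    rng = rng or np.random.default_rng()
    c1, c2 = parent_m.astype(float), parent_d.astype(float)  # copies
    point = rng.integers(len(parent_m))            # random gene position
    # swap the genes located before the selected point
    c1[:point], c2[:point] = parent_d[:point], parent_m[:point]
    beta = rng.uniform(0.0, 1.0)                   # 0 < beta < 1 in the paper
    xm, xd = parent_m[point:], parent_d[point:]
    c1[point:] = xm - beta * (xm - xd)             # Eq. (1)
    c2[point:] = xd + beta * (xm - xd)             # Eq. (2)
    return c1, c2
```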

2.2 The artificial bee colony algorithm

This algorithm is based on the natural behavior of honey bees in finding food sources (Basturk and Karaboga 2006). In the following, we briefly describe this algorithm. In an artificial hive the population of bees is divided into three groups: onlookers, scouts, and workers. In the beginning, the hive has only onlooker and worker bees, in equal numbers. The algorithm starts with a population of solutions (one per worker bee) representing food sources. Each worker bee searches around its solution to find a food source with more nectar. After all employed bees have completed their search, they share the gathered information with the onlooker bees in the dancing area; each onlooker then adopts a new solution with a probability associated with each solution:

p_i = cost_i / Σ_{n=1}^{SN} cost_n    (3)

This process continues until all onlooker bees have found a solution. If a solution is not improved for a predetermined number of iterations (limit), it is called an abandoned solution and the corresponding bee becomes a scout bee; this solution is replaced with another random solution. The value of limit is a control parameter of the algorithm. The ABC algorithm is given below.

2.2.1 The ABC algorithm

1. Initialize the population of solutions and evaluate them:

x = var_l + (var_h − var_l) × rand(SN, D), i = 1, . . . , SN, j = 1, . . . , D    (4)

2. Introduce new solutions v from x by

v_ij = x_ij + φ_ij(x_ij − x_kj)    (5)

and evaluate them, where φ ∈ [−1, 1], k ∈ {1, 2, . . . , SN}, and j ∈ {1, 2, . . . , D}.
3. Replace each member of the population with the corresponding member of v if its fitness is improved. For all members repeat stages 4–6.
4. Assign a probability p to each member of the population (using Eq. 3).
5. Introduce a new solution v2 from the selected solution, similar to stage 2 (x_i is selected with probability p_i).
6. Replace each member of the population with the member of v2 if the fitness is improved.
7. Determine the abandoned solution and replace it with a random solution.


8. Save the best solution so far.
9. While the stopping criteria are not satisfied, repeat stages 2–8.
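As an illustration, here is a minimal NumPy sketch (ours, not from the paper) of stages 2 and 4. The sketch assumes, as in standard ABC, that Eq. (5) perturbs one randomly chosen dimension; also note that Eq. (3) as printed normalizes the cost values directly, where standard ABC uses fitness values, so treat the normalization as illustrative.

```python
import numpy as np

def abc_candidate(x, i, rng):
    """Produce v_i from x_i via Eq. (5): perturb one random dimension
    toward or away from a randomly chosen peer x_k (k != i)."""
    SN, D = x.shape
    k = (i + rng.integers(1, SN)) % SN   # random peer index, k != i
    j = rng.integers(D)                  # random dimension
    v = x[i].copy()
    phi = rng.uniform(-1.0, 1.0)         # phi in [-1, 1]
    v[j] = x[i, j] + phi * (x[i, j] - x[k, j])
    return v

def onlooker_probabilities(cost):
    """Eq. (3) as printed: probability of each food source for the
    onlookers, proportional to its (cost) value."""
    return cost / cost.sum()
```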

2.3 The fast marriage in honey bee optimization

This algorithm was inspired by the mating process of honey bees (Yang et al. 2007). The algorithm is presented in the following.

2.3.1 The FMBO algorithm

1. Initialize the population of queens and workers with random solutions.
2. Improve the fitness of the queens using worker bees.
3. For each queen generate a drone randomly.
4. Apply single-point crossover between the selected queen and the drone and generate a brood.
5. Mutate the brood.
6. Improve the fitness of the brood using worker bees, and improve the fitness of the workers.
7. Replace the worst queen with the brood if the brood's fitness is better than the worst queen's fitness.

3 Discretized algorithms

In this section we introduce our discrete bee algorithms: first a discrete version of the ABC algorithm, and then three discrete versions of the FMBO algorithm. We discretize the bee algorithms by applying a method similar to that used for discretizing the PSO algorithm (Afshinmanesh et al. 2005), i.e. we use the logical XOR, OR, and AND operators instead of the subtraction, addition, and multiplication operators, respectively.

3.1 The discrete artificial bee colony algorithm (DABC)

Here we present a discretized version of the artificial bee colony algorithm introduced by Karaboga and Basturk (2007). Our discretized algorithm is given below.

3.1.1 The algorithm of discrete artificial bee colony

1. Initialize a population of solutions:

X1 = round(rand(SN, Nt))    (6)

where SN denotes the population size and Nt = D × nbits; D is the dimension of the search space and nbits is the number of bits used for encoding each variable in binary format.
2. Decode the population into continuous variables according to the range of the variables and evaluate them.
3. Sort the population in ascending order of the cost function.
While the stopping criteria are not satisfied, repeat the following stages:
4. Produce new solutions V1 from X1 as follows. For all members of the population repeat:


a. Choose a member of the population randomly and name it k.
b. Define a random vector with binary entries:

φ = round(rand(1, Nt))    (7)

c. Produce

V1_i = X1_i ∨ (φ ∧ (X1_i ⊕ k))    (8)

5. Decode V1.
6. Replace each member of the population with the corresponding new solution if the fitness improves.
7. Calculate the probabilities p_i for the solutions X1 (using Eq. 3).
8. Produce new solutions V2 from the solutions X1 selected randomly depending on p_i, and evaluate them (similar to stage 4).
9. Replace each member of the population with the new solution if the fitness improves.
10. Determine the abandoned solution and replace it with the best solution.
11. Save the best solution achieved so far.
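The core of the discretization is Eq. (8): X1_i ⊕ k marks the bits where the current solution differs from the randomly chosen member k, the random mask φ keeps a random subset of those differing bits, and the final ∨ copies them into the candidate. A minimal NumPy sketch of stage 4 (ours, not from the paper; the function name is our own):

```python
import numpy as np

def dabc_candidate(x1_i, k, rng):
    """Eq. (8): V1_i = X1_i OR (phi AND (X1_i XOR k)) on boolean vectors."""
    phi = rng.integers(0, 2, size=x1_i.shape).astype(bool)  # Eq. (7)
    return x1_i | (phi & (x1_i ^ k))
```

Note that, as written, this update can only set bits to 1 (any bit that is already 1 in X1_i stays 1), so movement in the 0-direction has to come from the other stages, such as the replacement of abandoned solutions.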

3.2 The discrete fast marriage in honey bee optimization

Now we present three versions of the discrete fast marriage in honey bee optimization algorithm, i.e. DFMBO1, DFMBO2, and DFMBO3.

3.2.1 The DFMBO1 algorithm

1. Initialize the population of queens and workers with random solutions.
2. Repeat stages 3–7 for all queens.
3. Select a worker from the workers' population randomly, choose a bit of the queen randomly, and replace it with the result of applying the logical AND operator between this bit and the selected worker's corresponding bit, if the queen's fitness improves.
4. Select a drone randomly.
5. Apply crossover between the drone and the queen and generate a brood.
6. Mutate the brood.
7. Replace the worst queen with the brood if the brood's fitness is better than the worst queen's fitness.

3.2.2 The DFMBO2 algorithm

The DFMBO2 algorithm is similar to DFMBO1 with the following differences:

1. The logical OR (∨) operator is used instead of the AND (∧) operator.
2. The ∨ operator is applied to the entire chromosome instead of to only one bit.

3.2.3 The DFMBO3 algorithm

1. Initialize two populations of queens and workers with random solutions:

qpop = round(rand(Q, Nt))    (9)
wpop = round(rand(w, Nt))    (10)


(Q is the number of queens, w is the number of workers, and Nt = npar × nbits, where npar is the dimension of the search space and nbits is the number of bits used for encoding each variable in binary format.)
2. Decode qpop into continuous variables according to the range of the variables and evaluate them.
3. Repeat the following stages for all queens:
a. Select a worker from wpop and name it k.
b. Define a random vector with binary entries (using Eq. 7).
c. Generate a chromosome according to

x_i = qpop_i ∨ (φ ∧ (qpop_i ⊕ k))    (11)

d. Replace each member of qpop with the corresponding member of x if the fitness improves.
e. Generate a drone randomly.
f. Repeat the following stages M times (M ≥ 1):
I. Apply single-point crossover between the generated drone and the selected queen and generate a brood.
II. Mutate the brood.
III. Apply the selected worker to improve the fitness of the brood:

b = brood ∨ (φ ∧ (brood ⊕ k))    (12)

Replace the brood with b if the fitness improves.
IV. Apply the brood to improve the fitness of the selected worker:
1. k′ = k ∨ (φ ∧ (brood ⊕ k))    (13)
2. Replace k with k′ if the fitness improves.
4. Replace the worst queen with the brood if the brood's fitness is better than the fitness of the worst queen.
5. While the stopping criteria are not satisfied, repeat stages 2–4.
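Stages III and IV above are the same bitwise move applied in opposite directions. A minimal NumPy sketch (ours, not from the paper), where cost is a user-supplied objective evaluated on decoded bit strings:

```python
import numpy as np

def improve_toward(target, guide, cost, rng):
    """Eqs. (12)-(13): copy a random subset of the bits where `target`
    differs from `guide`, keeping the move only if the cost improves."""
    phi = rng.integers(0, 2, size=target.shape).astype(bool)
    candidate = target | (phi & (target ^ guide))
    return candidate if cost(candidate) < cost(target) else target

# Stage III: brood  = improve_toward(brood,  worker, cost, rng)
# Stage IV:  worker = improve_toward(worker, brood,  cost, rng)
```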

4 The benchmark functions

We tested our algorithms on four benchmark functions, described in the following.

1. Sphere function:

f(x) = Σ_{i=1}^{n} x_i², x_i ∈ [−100, 100], i = 1, . . . , n.    (14)

The Sphere function is smooth, strongly convex, continuous, and symmetric. The global minimum f(x) = 0 is obtained for x_i = 0, i = 1, . . . , n.


Table 1 Simulation results for the Sphere function. Entries are success rate % (average number of function evaluations over successful runs)

Algorithm  Tolerance  No. of variables
                      2            5            10           20           50            100
FMBO       0.5        100(2721)    100(7526)    100(14651)   100(22592)   100(210328)   100(421030)
           0.1        100(2721)    100(7526)    100(14651)   100(22592)   100(210328)   100(421030)
           0.01       100(2721)    100(7526)    100(14651)   100(22592)   100(210328)   100(421030)
           0.001      100(2721)    100(7526)    100(14651)   100(22592)   100(210328)   100(421030)
ABC        0.5        100(148)     100(869)     100(2647)    100(34820)   100(422490)   100(5893100)
           0.1        100(247)     100(1277)    100(3251)    100(42900)   100(505290)   100(6947800)
           0.01       100(444)     100(1796)    100(4177)    100(56408)   100(619960)   100(8396910)
           0.001      100(649)     100(2273)    100(5124)    100(67035)   100(742520)   100(9808800)
QB         0.5        100(1406)    100(6579)    100(8836)    100(42315)   100(18936)    100(148610)
           0.1        100(3126)    90(10933)    90(22056)    70(60801)    90(42036)     80(148610)
           0.01       100(9903)    80(23220)    60(18523)    30(80200)    70(59686)     40(202140)
           0.001      100(10470)   60(28288)    40(18735)    20(73185)    20(89860)     20(193177)
DFMBO1     0.5        100(73)      100(185)     100(397)     100(5663)    100(14696)    100(72696)
           0.1        100(95)      100(240)     100(511)     100(7004)    100(18635)    100(87785)
           0.01       100(119)     100(302)     100(655)     100(8893)    100(24217)    100(109910)
           0.001      100(147)     100(366)     100(797)     100(10771)   100(29699)    100(131480)
DFMBO2     0.5        100(60)      100(105)     100(458)     100(1870)    100(6189)     93.3(26716)
           0.1        100(67)      100(124)     100(614)     100(2400)    100(7687)     93.3(32561)
           0.01       100(74)      100(159)     100(846)     100(3152)    100(9832)     93.3(40939)
           0.001      100(86)      100(188)     100(1097)    100(3911)    100(11994)    93.3(49350)
DFMBO3     0.5        100(305)     100(406)     100(439)     100(538)     100(671)      100(742)
           0.1        100(349)     100(455)     100(501)     100(599)     100(784)      100(808)
           0.01       100(414)     100(545)     100(585)     100(681)     100(867)      100(942)
           0.001      100(476)     100(628)     100(677)     100(765)     100(999)      100(1010)
DABC       0.5        100(333)     100(7938)    100(1075)    100(1178)    100(3001)     100(9337)
           0.1        100(368)     100(872)     100(1163)    100(1296)    100(3257)     100(9337)
           0.01       100(577)     100(978)     100(1295)    100(1469)    100(3498)     100(9709)
           0.001      100(687)     100(1106)    100(1442)    100(1593)    100(3739)     100(10441)

2. Schwefel function:

f(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|), x_i ∈ [−500, 500], i = 1, . . . , n.    (15)

This function is highly multimodal and non-linear. The surface of the Schwefel function is composed of a large number of peaks and valleys. The global minimum f(x) = 0 is obtained for x_i = 420.9867, i = 1, . . . , n.

3. Rastrigin function:

f(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10], x_i ∈ [−5.12, 5.12], i = 1, . . . , n.    (16)


Table 2 Simulation results for the Schwefel function. Entries are success rate % (average number of function evaluations over successful runs)

Algorithm  Tolerance  No. of variables
                      2           5           10          20          50          100
FMBO       0.5        100(4886)   100(4495)   100(10200)  100(43753)  100(67101)  100(328620)
           0.1        100(4886)   100(4495)   100(10200)  100(43753)  100(67101)  100(328620)
           0.01       100(4886)   100(4495)   100(10200)  100(43753)  100(67101)  100(328620)
           0.001      100(4886)   100(4495)   100(10200)  100(43753)  100(67101)  100(328620)
ABC        0.5        100(147)    100(151)    100(157)    100(612)    100(3758)   100(15060)
           0.1        100(147)    100(151)    100(157)    100(612)    100(3758)   100(15060)
           0.01       100(147)    100(151)    100(157)    100(612)    100(3758)   100(15060)
           0.001      100(147)    100(151)    100(157)    100(612)    100(3758)   100(15060)
QB         0.5        100(12)     100(20)     100(30)     100(60)     100(80)     100(120)
           0.1        100(12)     100(20)     100(30)     100(60)     100(80)     100(120)
           0.01       100(12)     100(20)     100(30)     100(60)     100(80)     100(120)
           0.001      100(12)     100(20)     100(30)     100(60)     100(80)     100(120)
DFMBO1     0.5        100(9)      100(34)     100(122)    100(270)    100(463)    100(469)
           0.1        100(10)     100(34)     100(122)    100(270)    100(463)    100(469)
           0.01       100(10)     100(34)     100(122)    100(270)    100(463)    100(469)
           0.001      100(10)     100(34)     100(122)    100(270)    100(463)    100(469)
DFMBO2     0.5        100(34)     100(38)     100(40)     100(118)    100(253)    100(437)
           0.1        100(34)     100(38)     100(40)     100(118)    100(253)    100(437)
           0.01       100(34)     100(38)     100(40)     100(118)    100(253)    100(437)
           0.001      100(34)     100(38)     100(40)     100(118)    100(253)    100(437)
DFMBO3     0.5        100(17)     100(19)     100(42)     100(48)     100(62)     100(80)
           0.1        100(17)     100(19)     100(42)     100(48)     100(62)     100(80)
           0.01       100(17)     100(19)     100(42)     100(48)     100(62)     100(80)
           0.001      100(17)     100(19)     100(42)     100(48)     100(62)     100(80)
DABC       0.5        100(41)     100(147)    100(328)    100(329)    100(562)    100(1323)
           0.1        100(41)     100(147)    100(328)    100(329)    100(562)    100(1323)
           0.01       100(41)     100(147)    100(328)    100(329)    100(562)    100(1323)
           0.001      100(41)     100(147)    100(328)    100(329)    100(562)    100(1323)

This function is highly multimodal and non-linear. It contains millions of local minima, whose locations are regularly distributed. The difficulty with this function is that an optimization algorithm can easily be trapped in a local minimum. The global minimum f(x) = 0 is obtained for x_i = 0, i = 1, 2, . . . , n.

4. Griewank function:

f(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1, x_i ∈ [−600, 600], i = 1, . . . , n.    (17)

This function is non-linear, and since the number of local minima increases with the dimensionality, it is strongly multimodal. The global minimum f(x) = 0 is obtained for x_i = 0, i = 1, . . . , n.
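For reference, the four benchmarks transcribed directly from Eqs. (14)–(17) as a NumPy sketch (ours, not from the paper):

```python
import numpy as np

def sphere(x):        # Eq. (14)
    return np.sum(x ** 2)

def schwefel(x):      # Eq. (15)
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))

def rastrigin(x):     # Eq. (16)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):      # Eq. (17)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```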


Table 3 Simulation results for the Rastrigin function. Entries are success rate % (average number of function evaluations over successful runs)

Algorithm  Tolerance  No. of variables
                      2            5            10           20           50             100
FMBO       0.5        100(6110)    100(10886)   100(22966)   100(45072)   100(215150)    100(631824)
           0.1        100(6110)    100(10886)   100(22966)   100(45072)   100(215150)    100(631824)
           0.01       100(6110)    100(10886)   100(22966)   100(45072)   100(215150)    100(631824)
           0.001      100(6110)    100(10886)   100(22966)   100(45072)   96.6(215730)   80(636440)
ABC        0.5        100(839)     100(10681)   100(35735)   100(208631)  100(2167000)   100(5399925)
           0.1        100(1074)    100(12541)   100(38647)   100(221901)  100(2275100)   100(5618400)
           0.01       100(1405)    100(15952)   100(43395)   100(251126)  100(2388300)   100(5827800)
           0.001      100(1593)    100(18963)   100(49237)   100(276042)  100(2536245)   100(5941800)
QB         0.5        100(1450)    100(2357)    100(5190)    100(36486)   100(55392)     80(35133)
           0.1        100(2298)    100(13534)   100(22122)   100(44469)   80(95115)      60(69222)
           0.01       100(7044)    100(43464)   60(31150)    60(78270)    80(99235)      50(71162)
           0.001      100(12880)   70(35639)    50(39368)    30(73080)    30(89640)      40(158332)
DFMBO1     0.5        100(203)     100(472)     100(19096)   96.6(43852)  76.6(402860)   40(1813600)
           0.1        100(227)     100(528)     100(21239)   96.6(47023)  76.6(437010)   40(1916300)
           0.01       100(260)     100(604)     100(24256)   96.6(51494)  76.6(485620)   40(2062700)
           0.001      100(295)     100(682)     100(27193)   96.6(55775)  76.6(533930)   36.6(1864200)
DFMBO2     0.5        100(373)     100(910)     100(2204)    96.6(8450)   80(65110)      46.6(129630)
           0.1        100(413)     100(1002)    100(2418)    96.6(9345)   80(73030)      46.6(137420)
           0.01       100(421)     100(1099)    100(2790)    96.6(10603)  80(84359)      46.6(148610)
           0.001      100(456)     100(1195)    100(3123)    96.6(11777)  80(95634)      46.6(158990)
DFMBO3     0.5        100(109)     100(120)     100(217)     100(348)     100(472)       100(658)
           0.1        100(126)     100(135)     100(244)     100(385)     100(534)       100(714)
           0.01       100(153)     100(161)     100(286)     100(440)     100(588)       100(826)
           0.001      100(178)     100(187)     100(332)     100(497)     100(672)       100(887)
DABC       0.5        100(237)     100(734)     100(887)     100(2791)    100(5027)      100(7802)
           0.1        100(292)     100(842)     100(968)     100(2860)    100(5399)      100(8111)
           0.01       100(386)     100(949)     100(1090)    100(3261)    100(5841)      100(8439)
           0.001      100(472)     100(1056)    100(1219)    100(3427)    100(6051)      100(8732)

All these functions are optimized by the seven described algorithms for n = 2, 5, 10, 20, 50, and 100.

5 The simulation results and analysis

The simulation results are given in Tables 1, 2, 3, and 4. In these tables the success rate and the average number of function evaluations over the successful runs (out of 30 runs) are given. An asterisk (*) indicates that obtaining the success rate was computationally too time-consuming, so we gave up computing it.


Table 4 Simulation results for the Griewank function. Entries are success rate % (average number of function evaluations over successful runs); * means computing the success rate was too time-consuming

Algorithm  Tolerance  No. of variables
                      2            5            10            20             50             100
FMBO       0.5        100(10415)   100(80051)   100(277724)   100(2712132)   100(352050)    *
           0.1        100(10415)   100(80051)   86.6(275820)  83.3(2702300)  56.6(3538800)  *
           0.01       100(10415)   26.6(83563)  10(279970)    6.6(2811120)   13.3(3645390)  *
           0.001      63.3(10591)  10(91000)    3.3(283960)   6.6(2811120)   6.6(3645300)   *
ABC        0.5        100(662)     100(9980)    100(26909)    100(66063)     100(183710)    100(104010)
           0.1        100(1031)    100(13405)   100(35007)    100(80313)     100(216720)    100(122632)
           0.01       100(1609)    100(29653)   86.6(61248)   100(108070)    100(268192)    96.6(141470)
           0.001      100(2688)    76.6(42542)  66.6(63718)   93.3(122230)   96.6(311210)   96.6(166830)
QB         0.5        100(364)     100(605)     100(2210)     100(8577)      100(10994)     100(35921)
           0.1        100(2947)    100(2405)    90(30198)     90(5678)       80(21724)      60(53214)
           0.01       80(18258)    90(31323)    40(36730)     40(45195)      40(60230)      40(80443)
           0.001      60(20268)    80(41402)    40(41290)     30(118340)     30(123450)     30(303094)
DFMBO1     0.5        100(134)     100(537)     100(1150)     100(9414)      96.6(150390)   83.3(410890)
           0.1        100(226)     100(639)     100(1369)     100(11023)     96.6(174260)   83.3(504730)
           0.01       100(262)     100(694)     96.6(1630)    100(13085)     96.6(203340)   83.3(620504)
           0.001      100(282)     100(754)     96.6(1876)    100(15201)     96.6(231120)   83.3(733000)
DFMBO2     0.5        100(146)     100(295)     100(2293)     100(4115)      96.6(53298)    86.6(51297)
           0.1        100(189)     100(327)     100(3048)     100(4868)      96.6(59758)    86.6(61465)
           0.01       100(233)     100(378)     100(3936)     100(5858)      96.6(67890)    86.6(73140)
           0.001      100(247)     100(422)     100(4887)     100(6830)      96.6(75948)    86.6(84221)
DFMBO3     0.5        100(44)      100(71)      100(89)       100(180)       100(189)       100(308)
           0.1        100(65)      100(81)      100(100)      100(206)       100(213)       100(350)
           0.01       100(87)      100(96)      100(116)      100(236)       100(247)       100(405)
           0.001      100(97)      100(109)     100(132)      100(269)       100(282)       100(451)
DABC       0.5        100(308)     100(967)     100(1069)     100(2431)      100(4874)      100(5560)
           0.1        100(650)     100(1002)    100(1111)     100(2529)      100(5170)      100(5917)
           0.01       100(807)     100(1098)    100(1257)     100(2861)      100(5609)      100(6402)
           0.001      100(1121)    100(1234)    100(1445)     100(3179)      100(5985)      100(6938)
The results obtained on the Sphere function are presented in Table 1. We see that when the number of variables grows and the tolerance becomes small enough, i.e. equal to or less than 10^−3, the continuous algorithms, i.e. the QB, ABC, and FMBO algorithms, become prohibitively slow and their success rates decrease. On the other hand, even with the number of variables increased up to 100 and the tolerance decreased to 10^−3, the discrete algorithms are still fast and their number of function evaluations is still manageable. The DFMBO3 algorithm is the fastest one, and its number of function evaluations does not grow beyond 1010.

By analyzing the results presented in Table 2, we see that all discrete counterparts of the FMBO are faster than the FMBO in all cases; especially when the number of variables is very large, the differences between their speeds are clear. The DABC is much faster than the ABC in all cases. The DFMBO1, DFMBO2, and DFMBO3 are faster than the DABC in all cases. For 2, 5, 10, and 20 variables the QB is the fastest algorithm, and for more variables the DFMBO3 is the fastest.


From Table 3, we see that the DFMBO3 performs better than the FMBO in terms of both success rate and speed in all cases. Also, in all cases the DABC is faster than the ABC. In Table 3, the DFMBO3 is the strongest algorithm.

From Table 4, we see that all discrete counterparts of the FMBO algorithm work better than the FMBO in terms of both success rate and speed. The DABC is faster than the ABC. The DFMBO3 is the strongest and the FMBO is the weakest algorithm among all methods. In general, the success rates of the DABC and the DFMBO3 are 100% in all cases. Also, the average number of function evaluations for the DFMBO3 is lower than that of all the continuous algorithms, and all discrete algorithms need fewer function evaluations on average than their continuous counterparts.

6 Conclusions

In this paper we introduced four discrete honey bee algorithms, i.e. the DFMBO1, DFMBO2, DFMBO3, and DABC algorithms. We then tested these new algorithms, along with their continuous counterparts and the QB algorithm, on four multivariable benchmarks for various numbers of variables up to 100. We reported success rates and the average number of function evaluations needed to find the optimum over thirty runs, for four predefined tolerance values.

In general, when the required precision of the answer and the number of variables are small, the differences between our new algorithms and the existing algorithms are small, but as the precision of the answer and the number of variables increase, the average number of function evaluations of the old algorithms grows beyond manageable amounts and, as a result, their success rates decrease. The simulation results show that the DFMBO3 is the most efficient of all the tested algorithms in terms of both success rate and speed. The DFMBO1 and DFMBO2 work better than the FMBO in most cases. The DABC performs better than the ABC in all cases. The DFMBO3 algorithm should be considered the final contribution of this paper, and we suggest that it be called the DFMBO algorithm in future works and citations.

Acknowledgments The authors would like to thank the anonymous reviewers for their valuable comments, which helped to clarify the ambiguities and to improve the quality of the paper.

References

Afshinmanesh F, Marandi A, Rahimi-Kian A (2005) A novel binary particle swarm optimization method using artificial immune system. In: The international conference on computer as a tool (EUROCON 2005), vol 1, pp 217–220
Arora JS, Huang MW (1994) Methods for optimization of nonlinear problems with discrete variables: a review. Struct Multidiscip Optim 8(2–3):69–85
Basturk B, Karaboga D (2006) An artificial bee colony (ABC) algorithm for numeric function optimization. In: Proceedings of the IEEE swarm intelligence symposium 2006, Indianapolis, Indiana, USA
Bonabeau E, Dorigo M, Theraulaz G (1999) From natural to artificial swarm intelligence. Oxford University Press, Oxford
Dorigo M, Stützle T (2005) Ant colony optimization. Prentice-Hall, Englewood Cliffs
Eberhart R, Kennedy J (1997) A discrete binary version of the particle swarm algorithm. In: Proceedings of the IEEE conference on computational cybernetics and simulation (Systems, Man, and Cybernetics), vol 5, pp 4104–4109
Holland J (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
Hsu YL, Dong YH, Hsu MS (2001) A sequential approximation method using neural networks for nonlinear discrete-variable optimization with implicit constraints. JSME Int J Ser C Mech Syst Mach Elem Manuf 44(1):103–112
Huang MW, Arora JS (1997) Optimal design with discrete variables: some numerical experiments. Int J Numer Methods Eng 40(1):165–188
Jung SH (2003) Queen-bee evolution for genetic algorithms. Electron Lett 39(6):575–576
Karaboga D, Akay B, Ozturk C (2007) Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks. In: Proceedings of the 4th international conference on modeling decisions for artificial intelligence, pp 318–329
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim 39(3):459–471
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks (ICNN), Australia, vol 4, pp 1942–1948
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671–680
Liu B, Wang L, Jin YH, Huang D (2006) An effective PSO-based memetic algorithm for TSP. In: Intelligent computing in signal processing and pattern recognition, vol 345. Springer, Berlin, pp 1151–1156
Mohan CK, Al-kazemi B (2001) Discrete particle swarm optimization. In: Proceedings of the workshop on particle swarm optimization, Indianapolis
Qin LD, Jiang QY, Zou ZY, Cao YJ (2004) A queen-bee evolution based on genetic algorithm for economic power dispatch. In: 39th international universities power engineering conference (UPEC 2004), Bristol, UK, pp 453–456
Sandini G, Santos-Victor J, Curtto F, Garibaldi S (1993) Robotic bees. In: IEEE/RSJ international conference on intelligent robots and systems, vol 1, Yokohama, Japan, pp 629–635
Sato T, Hagiwara M (1997) Bee system: finding solution by a concentrated search. In: Proceedings of the IEEE international conference on systems, man, and cybernetics, vol 4, pp 3954–3959
Yang C, Chen J, Xuyan T (2007) Algorithm of fast marriage in honey bees optimization and convergence analysis. In: Proceedings of the IEEE international conference on automation and logistics, China, pp 1794–1799
