New Clustering Search approaches applied to continuous domain optimization

Tarcísio Souza Costa
Computer Learning and Optimization Methods Laboratory
Federal University of Maranhão, 65.085-580, Bacanga Campus, São Luís, Brazil
Email: [email protected]

Alexandre César Muniz de Oliveira
Department of Computer Science
Federal University of Maranhão, 65.085-580, Bacanga Campus, São Luís, Brazil
Email: [email protected]
Abstract—Clustering Search (*CS) has been proposed as a generic way of combining search metaheuristics with clustering to detect promising search areas before applying local search procedures. The clustering process may keep representative solutions associated with different search subspaces (search areas). In this work, new approaches are proposed, based on Artificial Bee Colony (ABC) and Differential Evolution (DE), observing the inherent characteristics of detecting promising food sources employed by those metaheuristics. The proposed hybrid algorithms, performing a Hooke & Jeeves based local search, are compared against other hybrid versions of ABC and DE that explore an elitist criterion.
I. INTRODUCTION
Modern metaheuristics attempt to balance exploration and exploitation movements explicitly by employing global and local search strategies. The global algorithm plays the role of generating reference points in diversified search areas, which are further inspected more intensively by problem-specific components. Early hybrid algorithms have shown concern about rationally employing local search. Instead of always applying local search procedures over the entire population, the global algorithm runs normally until a promising area is detected, according to a specific criterion, for example, when a new best individual is found (elitist criterion). In general, the best following action is to exploit the expected promising area, in a reduced search domain, with a local search procedure. Clustering Search (*CS) has been proposed as a generic way of combining search metaheuristics with clustering, aiming to detect promising search areas before applying local search procedures [6]. Its generic nature is achieved both by the possibility of employing any metaheuristic and by its applicability to both combinatorial and continuous optimisation problems. Clusters of mutually close solutions provide reference points to relevant areas of attraction in most search metaheuristics. Relevant search areas can be treated by problem-specific local search procedures. Furthermore, the cluster center itself is always updated by a procedure involving inner solutions, called assimilation [6]. Artificial Bee Colony (ABC) [2] and Differential Evolution (DE) [1] are optimisation algorithms inspired by biology. Both are powerful population-based stochastic search techniques, being efficient and effective global optimizers in the continuous search domain. ABC is based on the intelligent
behavior of honey bees, providing a population-based search procedure in which individuals, called food positions, are modified by the artificial bees, aiming to discover promising food sources and, finally, the one with the optimal amount of nectar [2]. The DE algorithm works with a population of vectors; at each generation, operations similar to Genetic Algorithm mutation and crossover are applied, trying to find the best vector components, which may constitute the best solution of an optimization problem. This paper proposes new *CS approaches, based on Artificial Bee Colony (ABC) and Differential Evolution (DE), applied to unconstrained continuous optimisation, in which promising search areas are associated with promising food sources and the bee movement is performed by the intrinsic *CS operation known as assimilation [6]. The remainder of this paper is organized as follows. Section II describes the concepts behind *CS, ABC and DE. The new proposed *CS approaches based on ABC and DE are described in Section III. Computational results are presented in Section IV. The findings and conclusions are summarized in Section V.
II. ALGORITHMS FOUNDATIONS
A challenge in hybrid metaheuristics is to employ efficient strategies to cover all the search space, applying local search only in actually promising search areas. Inspiration from nature has been pursued to design flexible, coherent and efficient computational models. In this section, Clustering Search (*CS), ABC and DE are briefly described, introducing the concepts needed to explain how to combine them so that promising search areas are detected before applying the Hooke & Jeeves local search procedure [5].

A. Clustering Search

*CS attempts to locate promising search areas by framing them with clusters. A cluster can be defined as a tuple $G = \{c, r, s\}$, where $c$ and $r$ are the center and the radius of the area, respectively, and $s$ is a search strategy associated with the cluster. The radius of a search area is the distance from its center to its edge. Initially, the center $c$ is obtained randomly and progressively tends to slip toward the really promising sampled points in the surrounding subspace. The total cluster volume is defined by the radius $r$ and can be calculated considering the nature of the problem. It is important that $r$ defines a search subspace suitable to be exploited by the search strategy $s$ associated with the cluster [11].
*CS can be split into four conceptually independent parts: a search metaheuristic (SM); an iterative clustering (IC) component; an analyzer module (AM); and a local searcher (LS). The SM component works as a full-time solution generator according to its specific search strategy, performing independently of the remaining parts and manipulating a set of $|P|$ solutions. Figure 1 shows the *CS conceptual design; in this case the SM component is an evolutionary algorithm (EA), yielding an evolutionary clustering search (ECS) [6].

Fig. 1. *CS conceptual design
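Complementing Figure 1, the following minimal Python sketch outlines one *CS round. It is an illustration only, not the authors' implementation: all names are hypothetical, and the clustering, assimilation and density rules are Eqs. (1)-(5), defined next.

```python
import numpy as np

class Cluster:
    """A *CS cluster: a center c and an activity counter (density)."""
    def __init__(self, center):
        self.center = np.asarray(center, dtype=float)
        self.density = 0

def cs_round(new_solutions, clusters, r_t, assimilate, is_promising, local_search):
    """One *CS round: IC clusters each solution coming from SM, AM checks
    cluster densities, and LS exploits the promising clusters."""
    for s in new_solutions:                                # solutions from SM
        s = np.asarray(s, dtype=float)
        if not clusters:
            clusters.append(Cluster(s))
            continue
        dists = [np.linalg.norm(s - c.center) for c in clusters]
        i = int(np.argmin(dists))
        if dists[i] > r_t:                                 # new information, Eq. (1)
            clusters.append(Cluster(s))
        else:                                              # assimilation, Eq. (2)
            clusters[i].center = assimilate(clusters[i].center, s)
            clusters[i].density += 1
    for c in clusters:                                     # AM + LS
        if is_promising(c.density, len(clusters)):         # density tests, Eqs. (4)-(5)
            c.center = local_search(c.center)
        c.density = 0                                      # cooling at end of generation
    return clusters
```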
The IC component aims to gather similar solutions into groups, maintaining a representative cluster center for them. To avoid extra computational effort, IC is fed, progressively, by the solutions generated in each regular iteration of SM. A maximum number of clusters $NC$ is an upper bound that initially prevents unlimited cluster creation. A distance metric, $\wp$, must be defined, providing a similarity measure for the clustering process. Each solution $s_k$ generated by SM is passed to IC, which attempts to group it with known information, according to $\wp$. If the information is considered sufficiently new, it is kept as the center of a new cluster, $c_{new}$. Otherwise, the redundant information activates the closest center $c_i$ (the cluster center that minimizes $\wp(s_k, c_j)$, $j = 1, 2, \ldots$), causing an assimilation of the information represented by $s_k$. Considering $G_j$ ($j = 1, 2, \ldots$) as all currently detected clusters:

$c_{new} = s_k$, if $\wp(s_k, c_j) > r_j, \; \forall G_j$, or   (1)

$c'_i = c_i \oplus \beta(s_k \ominus c_i)$, otherwise,   (2)

where $\oplus$ and $\ominus$ are abstract operations over $c_i$ and $s_k$ meaning, respectively, addition and subtraction of solutions. The operation $(s_k \ominus c_i)$ means the difference between the solutions $s_k$ and $c_i$, considering the distance metric. A certain percentage $\beta$ of this difference is used to update $c_i$, giving $c'_i$. According to $\beta$, the assimilation can assume different forms: simple, crossover or path assimilation [11]. To cover all the search space, a common radius $r_t$ must be defined, working as a threshold of similarity. For the Euclidean space, for instance, $r_t$ has been defined as [6]:

$r_t = \dfrac{x_{sup} - x_{inf}}{2 \cdot \sqrt[n]{|C_t|}}$   (3)

where $|C_t|$ is the current number of clusters (initially, $|C_t| = NC$, in general about 20-30 clusters), and $x_{sup}$ and $x_{inf}$ are, respectively, the known upper and lower bounds of the domain of variable $x$, considering that all variables $x_i$ have the same domain.

The assimilation process is applied over the closest center $c_i$, considering the newly generated solution $s_k$. According to $\beta$ in Eq. (2), the assimilation can assume different forms. The three types of assimilation proposed in [6] are: simple, crossover and path relinking. In simple assimilation, $\beta \in [0, 1]$ is a constant parameter, meaning a deterministic move of $c_i$ in the direction of $s_k$. Only one internal point is generated, closer to or farther from $c_i$ depending on $\beta$, to be evaluated afterwards. The greater $\beta$, the less conservative the move is. This type of assimilation can be employed only with real-coded variables, where percentages of intervals can be applied.
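As a numerical illustration (ours, not taken from the paper), Eq. (3) and the simple assimilation of Eq. (2) can be coded directly; the $\beta$ value and the domain below are arbitrary examples:

```python
import numpy as np

def cluster_radius(x_sup, x_inf, n, num_clusters):
    """Common radius r_t of Eq. (3): half the domain width divided by the
    n-th root of the current number of clusters |C_t|."""
    return (x_sup - x_inf) / (2.0 * num_clusters ** (1.0 / n))

def simple_assimilation(c_i, s_k, beta=0.3):
    """Simple assimilation, Eq. (2): deterministic move of center c_i toward
    solution s_k; the greater beta, the less conservative the move."""
    return c_i + beta * (s_k - c_i)

# Example: n = 30 real-coded variables in [-5.12, 5.12] and 20 clusters.
r_t = cluster_radius(5.12, -5.12, 30, 20)                    # about 4.63
c_new = simple_assimilation(np.zeros(30), np.full(30, 1.0))  # every component moves to 0.3
```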
In crossover assimilation, any random operation between two candidate solutions (parents) generates other ones (offspring), similarly to a crossover operation in EAs. In this assimilation, $\beta$ is an $n$-dimensional random vector and $c'_i$ can assume a random point inside the hyperplane containing $s_k$ and $c_i$. Since the whole operation is a crossover or another binary operator between $s_k$ and $c_i$, it can be applied to any type of coding or problem (combinatorial or continuous). The $\vec{\beta}$ parameter results from the type of crossover employed; it is not the crossover parameter itself.

Path assimilation can generate several internal points or even external ones, holding the best evaluated one to be the new center. These exploratory moves are commonly referred to in path relinking theory. In this assimilation, $\beta$ is an $\eta$-dimensional vector of constant and evenly spaced parameters, used to generate $\eta$ samples taken along the path connecting $c_i$ and $s_k$. Since each sample is evaluated by the objective function, path assimilation itself is an intensification mechanism inside the clusters, also allowing the sampling of well-succeeded external points, extrapolating the $s_k$-$c_i$ interval.

The AM component examines each cluster, indicating a probable promising cluster or a cluster to be eliminated. The cluster density $\delta_i$ is a measure that indicates the activity level inside cluster $i$. For simplicity, $\delta_i$ counts the number of solutions generated by SM (selected or updated solutions in the EA case) [6], [13]. Whenever $\delta_i$ reaches a certain threshold, meaning that some information template has become predominantly generated by SM, such a cluster must be better investigated to accelerate its convergence. Considering that each cluster should receive the same number of individuals, cluster $i$ is promising whenever

$\delta_i > \dfrac{|P_{new}|}{NC}$   (4)
On the other hand, clusters with lower $\delta_i$ are eliminated, as part of a mechanism that allows the creation of new centers of information, keeping the most active ones framed. At the end of each generation, AM performs the cooling of all clusters, resetting the accounting of $\delta_i$. A cluster is considered inactive when no activity has occurred in the last generation. Finally, the LS component is an internal searcher module that provides the exploitation of a supposedly promising search
area, framed by a cluster. This process can happen after AM has discovered a target cluster. LS can be considered the particular search strategy $s$ associated with the cluster, i.e., a problem-specific local search to be employed inside the cluster. The LS component is activated, at once, if

$\delta_i \geq PD \cdot \dfrac{NS}{|C_t|}$   (5)
where the pressure of density, $PD$, allows controlling the sensitivity of the AM component. In continuous optimisation, the LS component has been implemented as a Hooke-Jeeves direct search [6].
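For reference, a minimal sketch of the Hooke & Jeeves direct search [5] used as the LS component is shown below. This is our simplified version with hypothetical parameter values, not the authors' exact implementation:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, eps=1e-6, max_iter=1000):
    """Minimal Hooke & Jeeves pattern search sketch: exploratory moves along
    each coordinate, followed by a pattern move when an improvement is found."""
    def explore(x, fx, h):
        x = x.copy()
        for j in range(len(x)):
            for d in (+h, -h):
                trial = x.copy()
                trial[j] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    base = np.asarray(x0, dtype=float)
    f_base = f(base)
    h = step
    for _ in range(max_iter):
        x, fx = explore(base, f_base, h)
        if fx < f_base:                      # pattern move: extrapolate base -> x
            pattern = x + (x - base)
            base, f_base = x, fx
            xp, fp = explore(pattern, f(pattern), h)
            if fp < f_base:
                base, f_base = xp, fp
        else:
            h *= shrink                      # no improvement: reduce the step size
            if h < eps:
                break
    return base, f_base

# Usage, e.g. on the Sphere function:
# x_star, f_star = hooke_jeeves(lambda x: float(np.sum(x**2)), np.ones(5))
```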
B. Artificial Bee Colony

The artificial bee colony (ABC) algorithm is a relatively new swarm-intelligence-based optimizer. It is inspired by the cooperative foraging behavior of a swarm of honey bees and was initially proposed by Dervis Karaboga [2]. In the ABC algorithm, the colony of artificial bees contains three groups of bees: employed bees, onlookers and scouts. A bee waiting on the dance area to decide which food source to choose is called an onlooker, and a bee going to a food source it has visited previously is an employed bee. A bee carrying out random search is called a scout. In the ABC algorithm, the first half of the colony consists of employed artificial bees and the second half constitutes the onlookers. For every food source there is only one employed bee; in other words, the number of employed bees is equal to the number of food sources around the hive. The employed bee whose food source is exhausted by the employed and onlooker bees becomes a scout. At each cycle, ABC performs three basic operations: place employed bees on their food sources and evaluate their nectar amounts; after sharing the nectar information of the food sources, let onlookers select food sources depending on their nectar amounts; and send scout bees to explore new sources randomly in the search space. These main steps are repeated for a predetermined Maximum Cycle Number ($MCN$) or until a termination criterion is satisfied. Employed bees share the nectar amount of food sources on the dance area, which onlooker bees use as the parameter to decide which source is better to be explored. The decision made by onlooker bees is governed by Equation 6:

$p_i = \dfrac{fit_i}{\sum_{n=1}^{SN} fit_n}$   (6)

where $p_i$ is the probability associated with food source $i$, $fit_i$ is the fitness value of solution $i$ and $SN$ is the total number of food sources, which is equal to the number of employed bees or onlooker bees. Both employed and onlooker bees modify their food sources; if the modification increases the nectar amount of a source, the bee memorizes the new source and forgets the previous one. A candidate food position is produced from the old one in memory by the following equation:

$v_{ij} = x_{ij} + \phi_{ij}(x_{ij} - x_{kj})$   (7)

where $k \in \{1, 2, \ldots, SN\}$ and $j \in \{1, 2, \ldots, D\}$ are randomly chosen indexes, and $D$ is the dimension of the variables.

A food source is assumed to be abandoned if its fitness cannot be improved through a certain number of cycles; this predetermined number of cycles is called the "limit" for abandonment [4]. An abandoned food source should be replaced by another: the scout bees are responsible for substituting old food sources with new ones, i.e., new positions found randomly in the search space. This substitution is done through a simple modification of the abandoned food source's position, taking into account the minimum ($x_{min}$) and maximum ($x_{max}$) bounds of the search space, as shown in Equation 8:

$x_i^j = x_{min}^j + rand[0,1](x_{max}^j - x_{min}^j), \quad j \in \{1, 2, \ldots, D\}$   (8)
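Under the usual ABC conventions (not stated explicitly above: $\phi$ drawn uniformly from $[-1, 1]$ and a single dimension $j$ perturbed per move), Eqs. (6)-(8) translate into the following sketch; function names are our own, illustrative choices:

```python
import numpy as np

rng = np.random.default_rng()

def onlooker_probabilities(fitness):
    """Roulette probabilities of Eq. (6): p_i = fit_i / sum(fit)."""
    fitness = np.asarray(fitness, dtype=float)
    return fitness / fitness.sum()

def neighbor_source(foods, i):
    """Candidate position of Eq. (7): perturb one random dimension j of food i
    toward/away from a random other food k, with phi ~ U(-1, 1)."""
    SN, D = foods.shape
    k = rng.choice([n for n in range(SN) if n != i])
    j = rng.integers(D)
    v = foods[i].copy()
    v[j] = foods[i, j] + rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return v

def scout_source(x_min, x_max, D):
    """Random replacement of an abandoned source, Eq. (8)."""
    return x_min + rng.uniform(0, 1, D) * (x_max - x_min)
```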
The balance between exploration and exploitation of the search space, a desired characteristic of metaheuristics, can be seen in ABC: employed and onlooker bees perform exploitation of the search space, especially onlooker bees, which explore specific food sources (possible promising search subspaces), while the exploration of the search space is done by scout bees, which try to find different positions for food sources.

C. Differential Evolution

The DE algorithm tries to find the optimal solution $x^*$ in a $D$-dimensional space, where a population $P$ with individuals $\vec{x}_i$, $i = 1, \ldots, NP$, is randomly generated at the first step of DE [1]. At each iteration, for every individual $\vec{x}_i$, $i = 1, \ldots, NP$, three other distinct parameter vectors $\vec{x}_a$, $\vec{x}_b$ and $\vec{x}_c$ are randomly chosen from the population $P$. The mutation operation generates a new vector $\vec{u}_i$, described by Equation 9:

$\vec{u}_i = \vec{x}_a + F(\vec{x}_b - \vec{x}_c)$   (9)

where $F$ is a scaling parameter. The mutated vector $\vec{u}_i$ and the target (initial) vector $\vec{x}_i$ form two parents. The crossover operation between $\vec{u}_i$ and $\vec{x}_i$ produces the trial vector (offspring) $\vec{y}_i$:

$y_i^j = \begin{cases} u_i^j, & \text{if } rand(0,1)_j \leq CR \text{ or } j = j_{rand} \\ x_i^j, & \text{otherwise} \end{cases}$   (10)

where $CR$ is a predefined crossover parameter, $j_{rand}$ is a random integer from $[1, D]$ and $rand(0,1)_j$ is a uniform random real variable in the range $[0, 1]$. If $f(\vec{y}_i) \leq f(\vec{x}_i)$, then $\vec{y}_i$ replaces $\vec{x}_i$; this operation is called selection. Several works have investigated different values for the DE parameters $CR$ and $F$, as well as different DE strategies; one such study can be seen in [3]. The initial version defined above is named DE/rand/1/bin.
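A compact sketch of one DE/rand/1/bin generation follows, using the $F$ and $CR$ values later listed in Table II. This is our illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng()

def de_rand_1_bin(pop, fitness, f_obj, F=0.4, CR=0.9):
    """One generation of DE/rand/1/bin: mutation (Eq. 9), binomial crossover
    (Eq. 10) and greedy selection, for a population pop of shape (NP, D)."""
    NP, D = pop.shape
    for i in range(NP):
        a, b, c = rng.choice([n for n in range(NP) if n != i], 3, replace=False)
        u = pop[a] + F * (pop[b] - pop[c])          # mutant vector, Eq. (9)
        j_rand = rng.integers(D)
        cross = rng.uniform(size=D) <= CR
        cross[j_rand] = True                        # at least one component comes from u
        y = np.where(cross, u, pop[i])              # trial vector, Eq. (10)
        fy = f_obj(y)
        if fy <= fitness[i]:                        # greedy selection
            pop[i], fitness[i] = y, fy
    return pop, fitness

# Usage sketch: pop = rng.uniform(-30, 30, size=(300, 30))
#               fitness = np.array([f_obj(x) for x in pop])
```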
III. NEW CLUSTERING SEARCH APPROACHES
In this work, two new *CS approaches are presented, based on the ABC and DE metaheuristics. The first one is called Artificial Bee Clustering Search (ABCS) and the second one is Differential Evolutionary Clustering Search (DECS).
The Artificial Bee Clustering Search (ABCS) algorithm is based on the potential for detecting promising areas presented by the ABC algorithm. This potential can be seen in basic operations of ABC, such as the exploration and abandonment of a food source. The exploration performed by employed bees can be seen as the activation of a cluster in *CS-based algorithms, considering that a food source is the central region of a cluster. The abandonment of food sources can be seen as the deactivation of a cluster with few assimilations over a long period of generations. An onlooker bee can be compared to a solution that must be assimilated by a cluster in a *CS-based algorithm, because this bee will increase a source's fitness after it has explored and found a better position for this food source. ABCS is proposed to represent all food sources as central regions of clusters, and onlooker bees are solutions that may or may not be assimilated by these clusters. Algorithm 1 illustrates the few differences between ABCS and pure ABC. The accounting of $\delta_i$ (density of food source $i$) is done only in the onlooker bee phase, because in this phase the food sources are selected based on their probabilities of selection. Thus $\delta_i$ is the parameter used to trigger local search, informing which food sources are most promising. To detect these promising areas, $\delta_i$ is increased whenever an onlooker bee accesses food source $i$ and obtains a new better position. If a food source reaches a certain threshold, it is time to perform local search on that source to find better solutions. This threshold is known in *CS as the cluster density (Section II):

$\delta_i \geq PD \cdot \dfrac{NP}{SN}$   (11)
where $PD$ is the pressure of density, $NP$ is the colony size and $SN$ is the total number of food sources.

Algorithm 1: Artificial Bee Clustering Search

    Initialize the population of solutions x_i, i = 1, 2, ..., SN
    Evaluate x_i, i = 1, 2, ..., SN and memorize the best one
    cycles ← 1
    repeat
        for i ← 1 to SN do
            v_ij ← x_ij + φ_ij (x_ij − x_kj)      // candidate solution by employed bee
            if f(v_i) < f(x_i) then x_i ← v_i else t_i ← t_i + 1
        end
        for i ← 1 to SN do                         // calculating the probabilities, Eq. (6)
            p_i ← fit_i / Σ_{n=1..SN} fit_n
        end
        i ← 0; t ← 0
        while t < SN do
            if r < p_i then                        // r ~ U(0,1)
                t ← t + 1
                v_ij ← x_ij + φ_ij (x_ij − x_kj)   // candidate solution by onlooker bee
                if f(v_i) < f(x_i) then
                    δ_i ← δ_i + 1                  // updating density of food source
                    x_i ← v_i
                else
                    t_i ← t_i + 1
                end
            end
            if δ_i ≥ PD · NP/SN then               // Eq. (11)
                LocalSearch(x_i)                   // exploitation of promising food source
                δ_i ← 1
            end
            i ← i + 1
            if i = SN then i ← 0
        end
        for i ← 1 to SN do
            if t_i > Limit then
                x_i^j ← x_min^j + rand[0,1](x_max^j − x_min^j)
            end
        end
        cycles ← cycles + 1
    until cycles = MCN
ABCS is proposed to add clustering elements to ABC without modifying the main structure of the original algorithm. This work proposes to add two elements in ABCS: the accounting of activations of each food source and a local search routine. The algorithm used to perform local search in ABCS is Hooke & Jeeves' algorithm [5], the same local searcher used in the other algorithms shown in this work.
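In code, the difference from pure ABC is confined to the onlooker phase. A hypothetical sketch of the density accounting and the Eq. (11) trigger, reusing the neighbor_source move sketched in Section II-B, a local searcher such as the hooke_jeeves sketch above, and the Table I parameter values:

```python
def abcs_onlooker_step(i, foods, fitness, delta, f_obj,
                       PD=10, NP=80, SN=40, local_search=None):
    """ABCS onlooker step: on improvement, increase the density delta[i] of
    food source i; when delta[i] reaches PD * NP / SN (Eq. 11), exploit the
    source with local search (Hooke & Jeeves in the paper) and reset delta[i]."""
    v = neighbor_source(foods, i)          # Eq. (7) move, as sketched above
    fv = f_obj(v)
    if fv < fitness[i]:
        foods[i], fitness[i] = v, fv
        delta[i] += 1                      # one more assimilation for this "cluster"
    if delta[i] >= PD * NP / SN and local_search is not None:
        foods[i], fitness[i] = local_search(foods[i])   # exploit promising source
        delta[i] = 1
    return foods, fitness, delta
```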
Differential Evolutionary Clustering Search (DECS) is similar to ECS. The main difference between ECS and DECS is the EA component: in the first ECS version [6], the EA is a genetic algorithm, while in DECS this component is a DE.

IV. EXPERIMENTAL RESULTS

In this section we present the numerical study carried out on the newly proposed approaches to evaluate their performance. We compared the newly proposed approaches with the original ABC, HABC (Hybrid Artificial Bee Colony), the original DE, HDE (Hybrid Differential Evolution) and ECS (Evolutionary Clustering Search [6]). The results were obtained allowing all algorithms to perform up to $2 \times 10^6$ function evaluations (a stopping criterion) in each one of 30 trials. An execution is considered successful if the best solution found satisfies an error rate, which was set to $10^{-4}$: whenever the difference between the best solution found and the known optimal solution is less than or equal to $10^{-4}$, the execution is successful, which is another stopping criterion adopted for all algorithms.
The parameters of all algorithms are summarized in Tables I and II. The value $NP = 80$ was used based on [4]; the other values were empirically tested. The parameter Limit was fixed at 200, which means a food source is assumed to be abandoned if its fitness could not be improved within 200 cycles. The EA component of ECS implemented in this work is a steady-state real-coded Genetic Algorithm employing well-known genetic operators: roulette wheel selection [9], blend crossover (BLX0.25) [14] and non-uniform mutation [10], with simple assimilation. The specific parameters used by the ECS algorithm can be seen in Table I, where PopSize denotes the GA's population size, MaxGen is the total number of generations and NC is the total number of clusters. Differently from traditional *CS-based algorithms, the ECS implementation of this work was designed to perform local search on a certain cluster center $c_i$ only if the last call to Hooke & Jeeves on this center $c_i$ succeeded, as an attempt to minimize the total function evaluations performed by the local search algorithm.
TABLE I. PARAMETERS OF ABC, HABC AND ECS

Algorithm         Parameters
ABC, HABC, ABCS   NP = 80, Limit = 200, MCN = 10,000, PD = 10
ECS               PopSize = 200, MaxGen = 8,000, NC = 40, PD = 2.5

TABLE II. PARAMETERS OF DE, HDE AND DECS

Algorithm   NP    MaxGen   F     CR    Strategy
DE          300   8,000    0.4   0.9   rand/1/bin
HDE         300   8,000    0.4   0.9   rand/1/bin
DECS        300   8,000    0.4   0.9   best/1/bin
The test suite consists of eleven benchmark functions commonly used for algorithms' performance evaluation in the literature. Table III shows this benchmark, where $D$ is the number of variables (dimension), Min is the optimal solution and Range gives the known lower and upper bounds of the domain of the functions' variables. The HABC and HDE algorithms were not designed to detect promising areas: their Hooke & Jeeves local search is activated only whenever the current best solution has been improved (elitist strategy). The quantities Mean, Standard Deviation (StdDev), H/A (Hits/Attempts) and FE (Function Evaluations) are used to analyze the performance of the algorithms; FE is the mean of the total function calls over the 30 executions of each function.

TABLE III. ELEVEN TEST FUNCTIONS USED IN THE EXPERIMENTS

Abbreviation   Function      Range           D    Min
fzak           Zakharov      [-5,10]         30   0
fmic           Michalewicz   [0, 3.1415]     10   -9.6601
fros           Rosenbrock    [-30,30]        30   0
fsch           Schwefel 7    [-500,500]      30   0
fras           Rastrigin     [-5.12,5.12]    30   0
frid           Ridge         [-64,64]        30   0
fcol           Colville      [-10,10]        4    0
flan           Langerman     [0,10]          10   -1.4999
fsph           Sphere        [-5.12,5.12]    30   0
fgri           Griewank      [-600,600]      30   0
fack           Ackley        [-15,30]        30   0

Fig. 2. Convergence curve for fzak

TABLE IV. FIRST SET OF FUNCTIONS (ABC VARIATIONS AND ECS)

Func   Metric   ABC           HABC          ABCS          ECS
fmic   Mean     -9.6600       -9.3207       -9.6600       -9.6355
       StdDev   6.8215E-005   0.29288       5.8031E-005   0.0214
       H/A      30/30         10/30         30/30         6/30
       FE       35,874.48     911,211.31    97,753.20     1,745,612.73
fros   Mean     0.0343        0.0001        0.0001        6.4467E-005
       StdDev   0.0499        0.00002       2.5765E-005   4.8001E-005
       H/A      0/30          30/30         30/30         30/30
       FE       800,081.10    185,090.24    171,625.75    508,843.9
fras   Mean     0.0001        0.0001        0.0001        1.9567
       StdDev   7.1006E-005   1.1056E-005   3.0449E-005   0.8046
       H/A      30/30         30/30         30/30         2/30
       FE       70,597.24     43,929.13     30,982.41     1,928,062.2
fsch   Mean     7.8033E-005   1295.28       0.0001        1075.82
       StdDev   6.2561E-005   252.88        6.1813E-005   416.58
       H/A      30/30         0/30          30/30         0/30
       FE       109,758.65    2×10^6        733,098       2×10^6
flan   Mean     -0.9551       -0.8450       -1.0715099    -0.8445
       StdDev   0.2802        0.1331        0.2954        0.1781
       H/A      4/30          1/30          7/30          2/30
       FE       733,114.20    1,952,654.20  1,001,020.17  1,918,115.96
The results of the overall experiment are divided into six tables. The first three (Tables IV, V and VI) contain the results of all functions for ABC and its hybrid versions (HABC and ABCS) and for ECS; Tables VII, VIII and IX show the results for DE and its hybrid versions (HDE and DECS). The ABCS algorithm presented superior performance for most of the benchmark functions; only in three functions (flan, fzak and frid) did DE and its variations present better results than ABCS. When ABCS presented performance similar to HABC (the same number of hits), its number of function evaluations was lower than HABC's. Minimizing the Langerman function (flan), which was very difficult for most of the algorithms, ABCS presents good performance, with a greater number of hits and fewer function evaluations than HABC and ECS; the comparison is analogous for frid (Table VI). The ABCS algorithm reaches the known optimal solution of fros and fras (Table IV) and of all functions of Table V, and reaches the best mean of fcol (Table VI) with fewer FE than ABC and HABC. Considering the number of hits, the DECS algorithm presents better performance than DE and HDE on fmic, fras, fsch and fgri, observing that on fgri DECS reached the smallest number of FE. HABC and HDE performed badly on fsch, because this function is extremely multimodal and these algorithms perform many calls to Hooke & Jeeves without information about which regions of the search space are promising, getting trapped in local minima. ECS gets better results for fsch when the parameter PD is less than 2.5; the value 2.5 was used because it is the best one over all functions. Figure 3 illustrates the progress of all algorithms minimizing the Langerman function. The mean objective function value (called fitness value in the chart) was calculated over the 30 trials at intervals of 5,000 function evaluations. Figure 3 shows the convergence speed reached by ABCS: with fewer FE, the ABCS algorithm reached the best mean fitness (y axis). In Figures 2, 4 and 5 the ABCS algorithm also presents good convergence compared with the other approaches, always reaching the best mean fitness; although in Figures 2 and 5 ECS converges first, it does so with a higher average fitness (y axis), and at the end ABCS shows better fitness values. Only DE and its variations found the optimal solution of the fzak function; ABCS obtained the best approximation (about 0.014) with the best standard deviation, 0.0172 (Table VI), among ABC, HABC and ECS.

Fig. 3. Convergence curve for flan

Fig. 4. Convergence curve for fsch

Fig. 5. Convergence curve for fmic

TABLE V. SECOND SET OF FUNCTIONS (ABC VARIATIONS AND ECS)

Func   Metric   ABC          HABC         ABCS         ECS
fsph   Mean     0.0001       0.0001       0.0001       0.0001
       StdDev   3.67E-005    8.66E-006    3.08E-005    2.2968E-006
       H/A      30/30        30/30        30/30        30/30
       FE       30,326.89    57,343.65    11,508.96    3,274.2
fgri   Mean     0.0001       0.0001       0.0001       0.0001
       StdDev   4.82E-005    9.07E-006    0.00002      3.574E-005
       H/A      30/30        30/30        30/30        30/30
       FE       55,186.20    39,759       11,105.58    4,943
fack   Mean     0.0001       0.0001       0.0001       8.646E-005
       StdDev   3.33E-005    1.46E-005    2.38E-005    7.1095E-005
       H/A      30/30        30/30        30/30        30/30
       FE       61,055.17    40,015.55    32,731.55    366,911.866

TABLE VI. THIRD SET OF FUNCTIONS (ABC VARIATIONS AND ECS)

Func   Metric   ABC           HABC          ABCS          ECS
fzak   Mean     136.45        0.023         0.014         2.5389
       StdDev   23.76         0.025         0.0172        1.555
       H/A      0/30          0/30          0/30          0/30
       FE       800,040.58    2×10^6        2×10^6        2×10^6
frid   Mean     417.33        0.0003        0.0002        0.766
       StdDev   170.63        0.00007       1.98E-005     0.538
       H/A      0/30          2/30          25/30         0/30
       FE       800,040.10    1,938,631.93  1,447,802.55  2×10^6
fcol   Mean     0.17965       0.0001        0.0001        0.00006
       StdDev   0.13612       2.86E-005     2.96E-005     5.5197E-005
       H/A      0/30          30/30         30/30         30/30
       FE       800,218.517   22,002.65     15,069.03     3,123.6

TABLE VII. FIRST SET OF FUNCTIONS (DE VARIATIONS)

Func   Metric   DE           HDE          DECS
fmic   Mean     -9.6570      -9.6568      -9.6600
       StdDev   0.0105       0.0105       0
       H/A      27/30        25/30        30/30
       FE       699,994.0    789,470.8    53,637.7
fros   Mean     0.0001       0.1331       0.4524
       StdDev   0            0.7278       1.0437
       H/A      30/30        29/30        0/30
       FE       1,007,680.8  292,264.0    1,649,142.3
fras   Mean     102.6892     39.2439      0.0001
       StdDev   10.7306      17.6983      0
       H/A      0/30         0/30         30/30
       FE       2×10^6       2×10^6       840,849.5
fsch   Mean     4828.1128    3787.5247    0.0001
       StdDev   719.0559     1060.6500    0
       H/A      0/30         0/30         30/30
       FE       2×10^6       2×10^6       256,162.0
flan   Mean     -1.4604      -1.4426      -1.3989
       StdDev   0.1502       0.1750       0.2156
       H/A      28/30        27/30        3/30
       FE       281,709.5    306,508.5    1,589,190.7

TABLE VIII. SECOND SET OF FUNCTIONS (DE VARIATIONS)

Func   Metric   DE           HDE          DECS
fsph   Mean     0.0001       0.0001       0.0001
       StdDev   0            0            0
       H/A      30/30        30/30        30/30
       FE       110,461.1    52,357.5     88,267.1
fgri   Mean     0.0001       0.0079       0.0001
       StdDev   0            0.0093       0.0001
       H/A      30/30        12/30        30/30
       FE       169,774.5    1,242,650.9  119,629.3
fack   Mean     0.0001       0.0001       0
       StdDev   0            0.0001       0
       H/A      30/30        30/30        30/30
       FE       203,759.6    129,076.9    148,216.3

TABLE IX. THIRD SET OF FUNCTIONS (DE VARIATIONS)

Func   Metric   DE           HDE          DECS
fzak   Mean     0.0001       0.0001       0.0001
       StdDev   0            0            0
       H/A      30/30        30/30        30/30
       FE       565,631.4    472,566.4    1,597,636.8
frid   Mean     0.0001       0.0001       12.4608
       StdDev   0            0            21.4445
       H/A      30/30        30/30        0/30
       FE       610,822.0    439,338.0    1,629,141.2
fcol   Mean     0.0001       0.0001       0.0001
       StdDev   0.0001       0.0001       0.0001
       H/A      30/30        30/30        30/30
       FE       25,281.5     19,450.7     236,202.3
V. CONCLUSION
In this work, new approaches are proposed combining Clustering Search with popular metaheuristics such as Artificial Bee Colony and Differential Evolution. The proposed hybrid metaheuristics are called Artificial Bee Clustering Search (ABCS) and Differential Evolutionary Clustering Search (DECS). Two elements of *CS are added to ABCS, aiming to improve the inherent ability of ABC to detect promising food sources: the accounting of activations of each food source and the Local Search component. DECS basically consists of using a DE algorithm as the EA component. Some unconstrained continuous optimisation functions were minimized, and ABCS presented competitive results compared against the other algorithms: pure ABC, HABC, ECS and the variations of DE. The DECS approach presented the second best results. When compared to recent works found in the literature, ABCS presents performance similar to [15] and [16], specifically for the functions Rosenbrock, Colville, Sphere, Ridge, Zakharov and Ackley. Future work consists of testing other *CS components, such as the distance metric and different types of bee assimilation by food sources, since the natural bee assimilation used by the ABC algorithm is a double cluster assimilation: $k$ is a neighbor food source that is selected randomly to contribute to the perturbation caused to a given food source $i$ (Section II).

REFERENCES

[1] Price, K., Storn, R.: Differential Evolution – A simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley, CA, USA (1995)
[2] Karaboga, D.: An Idea Based On Honey Bee Swarm for Numerical Optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005)
[3] Mezura-Montes, E., Velazquez-Reyes, J., Coello, C.A.C.: A comparative study of differential evolution variants for global optimization. In: Genetic and Evolutionary Computation Conference (GECCO), pp. 485-492 (2006)
[4] Karaboga, D., Akay, B.: A comparative study of Artificial Bee Colony algorithm. Applied Mathematics and Computation, 214, 108-132 (2009)
[5] Hooke, R., Jeeves, T.A.: Direct search solution of numerical and statistical problems. J. Assoc. Comput. Mach., 8, 212-229 (1961)
[6] Oliveira, A.C.M., Lorena, L.A.N.: Detecting promising areas by evolutionary clustering search. In: Brazilian Symposium on Artificial Intelligence (SBIA 2004), Springer LNAI 3171, 385-394 (2004)
[7] Feo, T., Resende, M.: Greedy randomized adaptive search procedures. Journal of Global Optimization, 6, 109-133 (1995)
[8] Mladenovic, N., Hansen, P.: Variable neighborhood search. Computers and Operations Research, 24, 1097-1100 (1997)
[9] Goldberg, D.E.: Genetic Algorithms in Search, Optimisation and Machine Learning. Addison-Wesley (1989)
[10] Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, New York (1996)
[11] Oliveira, A.C.M., Lorena, L.A.N.: Hybrid Evolutionary Algorithms and Clustering Search. In: Hybrid Evolutionary Systems, Studies in Computational Intelligence, 75, 81-102 (2007)
[12] Chaves, A.A., Correa, F.A., Lorena, L.A.N.: Clustering Search Heuristic for the Capacitated p-median Problem. Springer Advances in Soft Computing Series, 44, 136-143 (2007)
[13] Ribeiro Filho, G., Nagano, M.S., Lorena, L.A.N.: Evolutionary Clustering Search for Flowtime Minimization in Permutation Flow Shop. Lecture Notes in Computer Science, 4771, 69-81 (2007)
[14] Eshelman, L.J., Schaffer, J.D.: Real-coded genetic algorithms and interval-schemata. In: Foundations of Genetic Algorithms 2, Whitley, L.D. (ed.), Morgan Kaufmann, San Mateo, 187-202 (1993)
[15] Kang, F., Li, J., Ma, Z.: Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions. Information Sciences, 181(16), 3508-3531 (2011)
[16] Wu, B., Qian, C., Ni, W., Fan, S.: Hybrid harmony search and artificial bee colony algorithm for global optimization problems. Computers & Mathematics with Applications, 64(8), 2621-2634 (2012)
[17] Costa, T.S., Oliveira, A.C.M., Lorena, L.A.N.: Advances in Clustering Search. Advances in Soft Computing, 73, 227-235 (2010)