A robust ant colony optimization for continuous functions

Expert Systems With Applications 81 (2017) 309–320


Zhiming Chen a,∗, Shaorui Zhou b, Jieting Luo c

a Department of Credit Management, Guangdong University of Finance, No. 527 YingFu Road, Guangzhou, 510521, China
b Department of Management Science, School of Business, Sun Yat-sen University, Guangzhou, 510275, China
c Department of Information and Computing Sciences, Faculty of Science, Utrecht University, Utrecht, 3584CC, Netherlands

Article history: Received 7 September 2016; Revised 15 March 2017; Accepted 16 March 2017; Available online 25 March 2017

Keywords: Broad-range search; Ant colony algorithm; Continuous optimization; Robustness

Abstract

Ant colony optimization (ACO) for continuous functions has been widely applied in recent years in different areas of expert and intelligent systems, such as steganography in medical systems, modeling signal strength distribution in communication systems, and water resources management systems. For the problems addressed previously, the optimal solutions were known a priori and contained in the pre-specified initial domains. However, for practical problems in expert and intelligent systems, the optimal solutions are often not known beforehand. In this paper, we propose a robust ant colony optimization for continuous functions (RACO), which is robust to the domains of the variables. RACO applies self-adaptive approaches in terms of domain adjustment, pheromone increment, domain division, and ant size, without any major conceptual change to ACO's framework. These new characteristics mean that the ants' search is not limited to the given initial domain but can extend to a completely different domain. If the initial domains do not contain the optimal solution, RACO still obtains the correct result no matter how the initial domains vary. If the initial domains do contain the optimal solution, we show that RACO is also a competitive algorithm. With the assistance of RACO, there is no need to estimate proper initial domains for practical continuous optimization problems in expert and intelligent systems. © 2017 Elsevier Ltd. All rights reserved.

1. Introduction

When solving practical problems, people would ideally set up mathematical models to make better decisions. However, since a model's complexity grows with the difficulty of the problem, purely mathematical methods perform poorly on highly complicated models. Intelligent algorithms offer a way out: thanks to the computer's computational power, a high-quality solution can be obtained in a short time. Among the various intelligent algorithms, ant colony optimization (ACO), which is inspired by the foraging behavior of ants, performs outstandingly on combinatorial optimization problems (Demirel & Toksarı, 2006; Ding, Hu, Sun, & Wang, 2012; Dorigo & Gambardella, 1997; Huang, Yang, & Cheng, 2013). However, due to the limitations of its search mechanism, ACO is not good at dealing with continuous variables. Over the past years, many attempts have been made to fill this gap, and ACO can now be widely applied to continuous optimization decisions, which can improve



∗ Corresponding author. Tel.: +86 13662521750.
E-mail addresses: [email protected] (Z. Chen), [email protected] (S. Zhou), [email protected] (J. Luo).
http://dx.doi.org/10.1016/j.eswa.2017.03.036
0957-4174/© 2017 Elsevier Ltd. All rights reserved.

expert and intelligent systems in terms of data clustering, training neural networks, scheduling in power systems, steganography in medical systems, modeling signal strength distribution in communication systems, and water resources management systems (Afshar, Massoumi, Afshar, & Mariño, 2015; Chen & Wang, 2014; Dorigo & Stützle, 2010; Edward, Ramu, & Swaminathan, 2016; Fetanat & Khorasaninejad, 2015; Fetanat & Shafipour, 2011). The framework of ACO consists of three parts: ant-based solution construction, pheromone update, and (optional) daemon actions. Based on whether or not this framework is used, the relevant algorithms can be divided into two categories: ant-related algorithms and extensions of ACO to continuous functions. One of the early attempts is CACO (Bilchev & Parmee, 1995), in which a domain is divided into randomly distributed regions updated by local search and global search. API (Monmarché, Venturini, & Slimane, 2000) adopted a strategy in which ants try to cover a given area around a nest that is moved periodically, and their search sites are biased toward previously successful sites. CIAC (Dréo & Siarry, 2004) used stigmergic information and direct inter-individual communication to intensify the ants' search. COAC (Hu, Zhang, & Li, 2008) divided a domain into many regions of various sizes and utilized pheromone and orthogonal exploration to guide the ants' movement. Although the above algorithms draw inspiration from ACO, they did not follow the


framework of ACO strictly. Therefore, they are considered ant-related algorithms, not real extensions of ACO to continuous functions. The first algorithm that can be classified as an extension of ACO to continuous functions is ACOR (Socha & Dorigo, 2008), where a weighted Gaussian probability density function is used to generate solutions that are stored in an archive, and the pheromone update is accomplished by replacing the worst solutions in the archive with the new ones. Based on the principles of ACOR, DACOR (Leguizamón & Coello, 2010) was proposed to escape from local optima by improving the diversity of the population so as to explore more regions. Inspired by ACOR, HACO (Xiao & Li, 2011) combined continuous population-based incremental learning and differential evolution so that it could learn the mean and variance of the next generation and avoid local optima. As another variant of ACOR, IACOR-LS (Liao, Montes de Oca, Aydın, Stützle, & Dorigo, 2011) used three types of local search procedures and increased the size of the archive over time. UACOR (Liao, Stützle, Montes de Oca, & Dorigo, 2014) put forward a framework that, in combination with an automatic parameter tuning method, enables the automatic synthesis of ACOR, DACOR and IACOR-LS. ACOMV (Liao, Socha, Montes de Oca, Stützle, & Dorigo, 2014) divided the archive into three parts to store continuous, ordinal and categorical variables respectively, and introduced a new weight calculation; it can therefore deal not only with continuous optimization but also with mixed-variable optimization. AM-ACO (Yang et al., 2016), which adopted an adaptive parameter adjusting strategy, a differential evolution operator and local search, achieves a good balance between exploration and exploitation in multimodal continuous optimization.
Although there are many improved ACOs for continuous functions, they all belong to a category we call limited-range-search algorithms, which search for the optimal solution within pre-specified domains. It has been shown in the literature that these algorithms are highly sensitive to the independent variables' domains, since their results are affected by a domain's properties such as length, border and symmetry (Eiben & Bäck, 1997; Fogel & Bayer, 1995). If the given domains do not contain the optimal solution, these algorithms return an incorrect solution, since the ants cannot go outside the domains. Thus, it is of great importance to correctly estimate the initial domains beforehand. However, the optimal solution is sometimes hard to estimate due to the complexity of functions derived from the real world, and these algorithms will not work well when the initial domains do not contain the optimal solution. In this paper, we propose a robust ant colony optimization (RACO) which is robust to the domains of the variables. RACO is developed based on the grid method (Gao, Zhong, & Mo, 2003), which discretizes continuous variables such that continuous optimization is transformed into combinatorial optimization. In particular, even if the initial domains are estimated incorrectly, RACO is still able to find the optimal solution. Compared with the limited-range-search algorithms, RACO uses an extensive search that is not restricted to the initial domains: it can change the borders of the domains based on the quality of the solutions found, so that the ants can reach completely different domains. Thus, RACO is a broad-range-search algorithm. Given arbitrary domains, RACO is able to correct their borders towards the direction of the optimal solution. RACO also belongs to the category of extensions of ACO to continuous functions. Compared with other algorithms in this category, RACO is more aligned with the original ACO, for three reasons.
(1) For those algorithms, pheromone information is reflected by the solutions stored in an archive; for RACO, pheromone information is stored in a matrix whose structure is consistent with the distribution of the discrete values derived from the continuous domains. (2) Those algorithms use a continuous probability density function to generate a candidate solution, whereas RACO uses a discrete probability distribution. (3) For those algorithms, pheromone update is accomplished by replacing the worst solutions with the newly generated solutions; RACO follows the conventional way, in which the pheromone values associated with promising solutions are increased and all pheromone values are decreased by evaporation. Since RACO does not change the original ACO's methods in terms of solution construction, transition probability and pheromone update, it is much simpler to use than other ACOs for continuous functions.

The remainder of this paper is organized as follows. In Section 2, we present the basic problem and an overview of the key principles of the grid method. In Section 3, we introduce RACO, which is based on the grid method, and present its detailed steps. In Section 4, we show the experimental results of RACO when the initial domains contain and do not contain the optimal solution, and discuss the self-adaptive mechanism in depth. Finally, in Section 5, we present some conclusions and future research directions.

2. Problem and grid method

A continuous optimization problem can be formally defined as the model $P = (X, \Omega, f)$, where X is an n-dimensional solution vector with continuous variables $x_i$ $(i = 1, 2, \ldots, n)$, $\Omega$ is a set of constraints among the variables, and f is an objective function mapping X to a set of non-negative real numbers. Let S be the value space of X. To solve the continuous optimization problem, the optimal $X^*$ which minimizes the function should be obtained: $f(X^*) \le f(X)$, $\forall X \in S$. If $\Omega = \emptyset$, then P is called an unconstrained problem model; otherwise, P is called a constrained one. A constrained problem model can typically be converted to an unconstrained one through a penalty function. Hence, the P discussed in this study is an unconstrained model. The grid method is similar to drawing a grid over the domain of each variable.
Thereafter, the continuous domain can be discretized into a few points. The principles of the grid method are introduced in the following subsections.

2.1. Solution construction

Based on the nature of the problem, we first estimate the initial domains: $x_i \in [x_i^{\min}, x_i^{\max}]$ $(i = 1, 2, \ldots, n)$. Assume our estimation is correct and the initial domains contain the optimal solution. We then divide the domain of $x_i$ into k equivalent shares, so that the value of $x_i$ can be selected from the resulting $k + 1$ points. The equivalent share's value is

$$h_i = \frac{x_i^{\max} - x_i^{\min}}{k} \quad (i = 1, 2, \ldots, n).$$

Beginning from the first variable, each ant selects a point in the domain of each variable; a route that corresponds to a solution vector X is completed after the ant's n choices. Fig. 1 describes one route as $(2, 1, 3, \ldots, k)$, which corresponds to the following solution:

$$(x_1, x_2, \ldots, x_n) = \left(x_1^{\min} + 2 h_1,\; x_2^{\min} + h_2,\; x_3^{\min} + 3 h_3,\; \ldots,\; x_n^{\min} + k\, h_n\right)$$
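The grid construction and route selection described above (together with the pheromone-proportional choice the ants later use) can be sketched in a few lines of Python. This is our own minimal illustration under assumed toy values; every name in it is ours, not the paper's code.

```python
import random

random.seed(0)

n = 3            # number of variables
k = 10           # number of equivalent shares per domain
x_min = [-3.0] * n
x_max = [7.0] * n

# Equivalent share's value h_i and the k+1 candidate points per variable.
h = [(x_max[j] - x_min[j]) / k for j in range(n)]
points = [[x_min[j] + i * h[j] for i in range(k + 1)] for j in range(n)]

# Pheromone matrix Tau: (k+1) rows (points) x n columns (variables).
tau = [[1.0] * n for _ in range(k + 1)]

def construct_route(tau):
    """One ant's trip: for each variable j, pick point i with
    probability tau[i][j] / sum_i tau[i][j] (the rule of Eq. (1))."""
    route = []
    for j in range(n):
        weights = [tau[i][j] for i in range(k + 1)]
        route.append(random.choices(range(k + 1), weights=weights)[0])
    return route

route = construct_route(tau)
x = [points[j][route[j]] for j in range(n)]   # solution vector of this route
```

With uniform initial pheromone, every point is equally likely; as pheromone accumulates on good points, the same sampling rule concentrates the search.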

2.2. Transition probability and pheromone update

Based on the distribution of the points, we construct a corresponding pheromone matrix Tau. Note that Tau is a $(k+1) \times n$ matrix: the number of rows equals the number of discrete values assigned to each variable, and the number of columns equals the number of variables. An element of Tau is denoted by $\tau_{ij}$, which represents the amount of pheromone left on the ith point within the domain of the jth variable. Each ant begins its trip from the first variable $x_1$ and completes a route by going through all the other variables. For each ant, the probability of choosing the ith

3.1. Self-adaptive domain adjustment

Fig. 1. Construction of the search space of an ant.

point within the domain of the jth variable is denoted by $p_{ij}$ and computed as follows:

$$p_{ij} = \frac{\tau_{ij}}{\sum_{i=1}^{k+1} \tau_{ij}} \qquad (1)$$

Eq. (1) follows the format of the conventional transition probability; that is, $\tau_{ij}$ represents the attraction intensity of a point. Note that we use neither the weighting function $\eta(\cdot)$ nor the positive parameters $\alpha$ and $\beta$, which determine the relative influence of pheromone and heuristic information; $p_{ij}$ is therefore very simple. After one trip, each ant leaves a pheromone increment $\Delta\tau_{ij}$ on each point that it went through. We calculate $\Delta\tau_{ij}$ using the ant-cycle system model as follows:

$$\Delta\tau_{ij} = Q/f \qquad (2)$$

In the grid method, the unequal amounts of pheromone left on different points reflect the quality of the routes: a point with more pheromone is closer to the optimal solution. If a given domain contains the optimal solution, the pheromone tends to converge to a point (near the optimal solution) within the domain after iterative searches. If a given domain does not contain the optimal solution, the pheromone tends to converge to a point near the border, because the border is closer to the optimal solution. When the point with the most pheromone is near the border, the ants are signalling that the optimal solution is near the border, either inside or outside of it. Hence, the ants should be given the ability to adjust the domain. We use $\theta$ as a threshold for deciding whether to launch the self-adaptive domain adjustment; $\theta$, with a recommended value of 0.1–0.3, represents a percentage of the number of rows. The value of $\theta$ should not make $\theta(k+1)$ too large or $(1-\theta)(k+1)$ too small, so that the threshold truly indicates that the node with the most pheromone is near the domain border. The row number of the largest element in each column of Tau is denoted by $r_i$ $(i = 1, 2, \ldots, n)$. When $r_i \le \theta(k+1)$ or $r_i \ge (1-\theta)(k+1)$, the point with the most pheromone is near the lower or upper border, which means that the optimal solution is probably outside the border. Therefore, the ants need to explore a new domain centered on that point (below, $r_i$ denotes the coordinate of the point in row $r_i$). The new borders are reset as follows:

$$x_i^{\min} \leftarrow r_i - (k/2 + \Delta_1) \cdot h_i \qquad (4)$$

$$x_i^{\max} \leftarrow r_i + (k/2 + \Delta_1) \cdot h_i \qquad (5)$$

$$(i = 1, 2, \ldots, n)$$
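The re-centering step of Eqs. (4)–(5) can be sketched as follows. This is a minimal illustration; the function name and the scalar `center` (the coordinate of the point holding the most pheromone) are our own assumptions, not the paper's code.

```python
def widen_domain(center, h, k, delta1=1.25):
    """Re-center the domain on the best point (Eqs. (4)-(5)).
    The new length (k + 2*delta1)*h exceeds the old length k*h
    by 2*delta1*h, so the domain both moves and widens slightly."""
    new_min = center - (k / 2 + delta1) * h
    new_max = center + (k / 2 + delta1) * h
    return new_min, new_max

# Example: re-center on 5.0 with k = 10 shares of width h = 1.0.
lo, hi = widen_domain(center=5.0, h=1.0, k=10)
```

Repeated applications of this step are what let the ants drift toward an optimum lying outside the initial domain.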

where f is the objective function value that corresponds to the route of an ant, and Q is a constant that represents the pheromone amount. The lower the value of the objective function is, the larger the pheromone increment is. In the new trip, the point with more pheromones has a more positive effect on directing the search of an ant. In order to prevent the algorithm from being prematurely trapped in a local optimum, we use the elite tactic for pheromone update. In other words, only the pheromones of the best route, the objective function value of which is minimal, are increased; the pheromones of the other routes remain unchanged. The update method is provided as follows:

where $\Delta_1$ denotes the increment of the domain. Note that $\Delta_1$ is chosen based on the performance of the algorithm, and it should be small so that the new domain is only slightly longer than the old one. The reason for widening the domain is that the value $h_i$ of the equivalent share can then change dynamically, and changing $h_i$ facilitates finding the optimal solution. The new domain is $2\Delta_1 h_i$ longer than the old one. When $\theta(k+1) \le r_i \le (1-\theta)(k+1)$, the point with the most pheromone is far from the border, which means that the optimal solution is most likely within the domain. Therefore, the ants need to narrow the domain to improve the accuracy of the search. The new borders are reset as follows:

$$\tau_{ij}^{\mathrm{new}} = (1-\rho) \cdot \tau_{ij}^{\mathrm{old}} + \Delta\tau_{ij} \qquad (3)$$

$$x_i^{\min} \leftarrow x_i^{\min} + (x_i^{\max} - x_i^{\min}) \cdot \Delta_2 \qquad (6)$$

where $\rho$ is a constant below 1 that represents the evaporation rate. The larger $\rho$ is, the stronger the forgetting effect, and hence the easier it is for the ants to explore a new area in the next round.
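The elite pheromone update of Eqs. (2)–(3) can be sketched as below. This is our own minimal illustration; the names and default values are assumptions, not from the paper.

```python
def update_pheromone(tau, best_route, best_f, rho=0.1, Q=1.0):
    """Evaporate all entries (Eq. (3)), then deposit the ant-cycle
    increment Q/f (Eq. (2)) only on the points of the best route,
    following the elite tactic described in the text."""
    n = len(tau[0])
    for row in tau:
        for j in range(n):
            row[j] *= (1.0 - rho)           # evaporation of every entry
    delta = Q / best_f                       # increment of Eq. (2)
    for j, i in enumerate(best_route):
        tau[i][j] += delta                   # elite deposit on chosen points
    return tau

# Example: 4 points x 2 variables; the best route picks rows 0 and 1.
tau = update_pheromone([[1.0, 1.0] for _ in range(4)], [0, 1], best_f=2.0)
```

Only the best route gains pheromone, while evaporation slowly erodes everything else, which matches the forgetting effect discussed above.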

$$x_i^{\max} \leftarrow x_i^{\max} - (x_i^{\max} - x_i^{\min}) \cdot \Delta_2 \qquad (7)$$

3. RACO for continuous functions

It is of great importance for conventional ACO for continuous functions to correctly estimate the domains of the independent variables. If the given domains do not contain the optimal solution, the algorithm will return a wrong solution, because the ants search in the wrong area. Thus, giving the ants the ability to correct the domains is the key to solving this problem. In this section, we propose a robust ant colony optimization (RACO) that applies novel approaches in terms of domain adjustment, pheromone increment, domain division and ant size. Note that RACO uses the grid method, and thus the solution construction, transition probability and pheromone update remain the same as those described above.









$$(i = 1, 2, \ldots, n)$$

where $\Delta_2$ represents a percentage of the length of the domain. Note that $\Delta_2$ should be small, since otherwise the optimal solution could be missed due to substantial narrowing. The new domain is $2\Delta_2 (x_i^{\max} - x_i^{\min})$ shorter than the old one. In the case of initial domains without the optimal solution, Eqs. (4) and (5) allow the ants to perform a broad-range search: because the range of the search can be expanded, the ants can eventually reach a new domain that may be far from the initial one. With the help of the broad-range search, RACO is able to find the optimum beyond the given domains. In the case of given domains that contain the optimal solution, there is no need to explore a new area; therefore, we use another method to make the ants search more efficiently.
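The symmetric narrowing of Eqs. (6)–(7) is equally simple; the following is a hedged sketch with names of our own choosing, using the small value of $\Delta_2$ recommended later in the experiments.

```python
def narrow_domain(x_min, x_max, delta2=0.05):
    """Move both borders inward by delta2 times the current length
    (Eqs. (6)-(7)); the domain becomes 2*delta2*(x_max - x_min)
    shorter, tightening the grid around the suspected optimum."""
    length = x_max - x_min
    return x_min + delta2 * length, x_max - delta2 * length

# Example: [-3, 7] shrinks to [-2.5, 6.5].
lo, hi = narrow_domain(-3.0, 7.0)
```

Because the share width $h_i$ is recomputed from the shorter domain, each narrowing step also refines the resolution of the grid.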

If $r_i \le \theta(k+1)$, then we keep the lower border unchanged and merely diminish the upper border as follows:

$$x_i^{\max} \leftarrow x_i^{\max} - (x_i^{\max} - x_i^{\min}) \cdot \Delta_3 \qquad (8)$$

If $r_i \ge (1-\theta)(k+1)$, then we keep the upper border unchanged and merely increase the lower border as follows:

$$x_i^{\min} \leftarrow x_i^{\min} + (x_i^{\max} - x_i^{\min}) \cdot \Delta_3 \qquad (9)$$

where $\Delta_3$ also represents a percentage of the length of the domain, and its value is double that of $\Delta_2$. Note that only one side of the domain is adjusted by Eq. (8) or (9), whereas both sides are adjusted by Eqs. (6) and (7). Thus, if we require the domain to shrink to the same degree in the two cases, we should set $\Delta_3 = 2\Delta_2$. If $\theta(k+1) \le r_i \le (1-\theta)(k+1)$, we change the lower and upper borders using Eqs. (6) and (7). The purpose of Eqs. (8) and (9) is to keep the ants searching within the initial domains.

3.2. Self-adaptive pheromone increment

When the length of the initial domain is very large, it is difficult for conventional ACO for continuous functions to obtain an accurate result. The reason is that the parameter Q used for the pheromone increment in Eq. (2) is constant, while the function value changes as the ants' search area varies. The order of magnitude of Q may become inconsistent with that of the function value, leading to little change in the pheromone increment $\Delta\tau$. Because the ants choose points based on the pheromone, the algorithm cannot converge to the optimal solution if different points have similar amounts of pheromone. In the case of initial domains without the optimal solution, RACO may shift the ants' search area to a completely different domain, and the order of magnitude of the function value may then change significantly; with a fixed Q, the accuracy problem becomes even more serious. Hence, we propose a self-adaptive pheromone increment that changes the value of Q based on the order of magnitude of the function value. At the beginning of each iteration, we randomly generate a number of routes and calculate the corresponding function values. The order of magnitude of the minimum function value is denoted by $OM_{\min}$. For each iteration, we set Q dynamically as follows:

$$Q = 10^{OM_{\min} + 1} \qquad (10)$$

The self-adaptive property guarantees that the order of magnitude of Q is consistent with that of the function value; thus, the precision of the algorithm can be improved. Moreover, generating routes at random can be considered a form of function sampling, which not only infers the order of magnitude of Q but also optimizes the initial pheromone matrix Tau. Since conventional ACO generally makes the initial elements of Tau equal, the ants select different routes with equal probability, and it thus takes a long time to find better routes. If Tau is initialized with certain heuristic information, the algorithm's convergence speed can be improved. For example, we randomly generate 100 routes and sort them in ascending order of function value; the first 30 routes are given the pheromone increments calculated by Eqs. (2) and (10). In this way, the ants' search in the early stages becomes more effective, because they are inclined to choose the better points.

3.3. Self-adaptive domain division and ant size

The substance of the grid method is that a continuous domain is converted into finite discrete points. We find that the domain division is very important for the performance of RACO, because different numbers of equivalent shares for the same domain may lead to completely different results. Note that the structure of the ants' search space (the values of the points) depends on the number of equivalent shares. The ants inevitably miss some values in the continuous domains because they search only a limited set of points. Although the ants can find the best result among the given points, this best result may not be the global best because of the structural limitation of the search space. Moreover, if the length of a given domain is substantial, too few equivalent shares may harm the algorithm's correctness, while too many may increase the search time. Thus, obtaining a reasonable number of equivalent shares is a trial-and-error process. In this paper, we propose a self-adaptive domain division method. The number of equivalent shares is denoted by k, whose value changes dynamically depending on the performance of RACO. At the beginning, we assign a small value to k to increase the convergence speed (k is a positive integer). If the ants cannot find a better result after several iterations, we let $k \leftarrow k + 1$. In this way, the structure of the search space changes dynamically, so that the ants can search other values in the continuous domains and avoid getting stuck. When k is small, there is no need to set a large ant size (i.e., number of ants) m. However, as k increases, the ant size needs to be consistent with k. Thus, we make the ant size self-adaptive by letting $m \leftarrow k + \Delta m$, where $\Delta m$ is a small positive integer; m is therefore just slightly larger than k, which prevents an overly large ant size from producing a long search time.

3.4. Procedures of the RACO algorithm

Step 1: Initialization. Determine the number of loops nc_max for the search of the ants in one iteration, the accuracy $\varepsilon$ of the termination condition, and the ant size's increment $\Delta m$. Assign an initial value to k. Provide an arbitrary initial domain for each variable: $x_i \in [x_i^{\min}, x_i^{\max}]$ $(i = 1, 2, \ldots, n)$.

Step 2: Divide the domain of each variable into k equivalent shares: $h_i = (x_i^{\max} - x_i^{\min})/k$ $(i = 1, 2, \ldots, n)$, where $h_i$ is the value of the equivalent share.

Step 3: If $\max(h_1, h_2, \ldots, h_n) < \varepsilon$ (other termination conditions can be used here), then the algorithm terminates; output the global minimal function value $F_{\min}$. Otherwise, go to Step 4.

Step 4: Set the loop number $nc \leftarrow 0$ and the self-adaptive ant size $m \leftarrow k + \Delta m$. Initialize the pheromone matrix Tau by letting each element be 1. Thereafter, randomly generate a certain number of routes and update Tau according to the better ones, so that Tau carries heuristic information. Calculate the self-adaptive pheromone amount Q using Eq. (10).

Step 5: Beginning from $x_1$ to $x_n$, each ant selects one point in the domain of each variable based on the transition probability $p_{ij}$ $(i = 1, 2, \ldots, k+1;\ j = 1, 2, \ldots, n)$ calculated by Eq. (1).

Step 6: For the nc-th loop, find the best route, i.e., the one with the minimal objective function value among the m routes generated by the m ants. Update Tau based on the best route, using the increment of Eq. (2).

Step 7: Let $nc \leftarrow nc + 1$. If $nc \le nc\_max$, go back to Step 5. Otherwise, find the minimal function value $f_{\min}$ among all loops. If $f_{\min} < F_{\min}$, then $F_{\min} \leftarrow f_{\min}$. If the condition $f_{\min} \ge F_{\min}$ has been satisfied t times, then let $k \leftarrow k + 1$.

Step 8: Find the row number $r_i$ $(i = 1, 2, \ldots, n)$ of the largest element in each column of Tau. If $r_i$ is near the domain border, then widen the domain using Eqs. (4) and (5) in the case of a domain without the optimal solution, or narrow it using Eq. (8) or (9) in the case of a domain with the optimal solution. If $r_i$ is far from the domain border, narrow the domain using Eqs. (6) and (7). Go back to Step 2.

The description of our algorithm is provided in Fig. 2.

Fig. 2. The flow of RACO for continuous functions.

4. Experimental results and analysis

In this section, we evaluate the performance of RACO through various numerical experiments, which can be summarized as three

parts. (1) Given initial domains with the optimal solution, compare RACO with other metaheuristics for continuous optimization. (2) Given initial domains without the optimal solution, test RACO's robustness. (3) Perform sensitivity analysis on the key parameters of RACO, and demonstrate the mechanism of the self-adaptive domain adjustment. The initial parameters used in RACO are $k = 11$, $\theta = 0.2$, $\Delta m = 2$, $\Delta_1 = 1.25$, $\Delta_2 = 0.05$, $\Delta_3 = 0.1$, nc_max = 50. Once the number of iterations in which the ants cannot find a better solution reaches 15, we let $k \leftarrow k + 1$. Note that RACO can solve different benchmark functions under the same parameter setting, and the value of k will vary dynamically during computation, as will the ant size m.

Table 1. First part of benchmark functions.

| Function | Formula | Optimal $x^*$ | Minimal $f(x^*)$ |
|---|---|---|---|
| Sphere | $f(x) = \sum_{i=1}^{n} x_i^2$ | $x^* = (0, \ldots, 0)$ | $f_{\min} = 0$ |
| Ellipsoid | $f(x) = \sum_{i=1}^{n} \left(100^{\frac{i-1}{n-1}} x_i\right)^2$ | $x^* = (0, \ldots, 0)$ | $f_{\min} = 0$ |
| Cigar | $f(x) = x_1^2 + 10^4 \sum_{i=2}^{n} x_i^2$ | $x^* = (0, \ldots, 0)$ | $f_{\min} = 0$ |
| Tablet | $f(x) = 10^4 x_1^2 + \sum_{i=2}^{n} x_i^2$ | $x^* = (0, \ldots, 0)$ | $f_{\min} = 0$ |
| Rosenbrock | $f(x) = \sum_{i=1}^{n-1} \left[100\,(x_i^2 - x_{i+1})^2 + (x_i - 1)^2\right]$ | $x^* = (1, \ldots, 1)$ | $f_{\min} = 0$ |
4.1. Tests on initial domains with the optimal solution

Many algorithms have been proposed to solve continuous optimization problems, and it is obviously impractical to compare RACO with each of them. Since ACOR (Socha & Dorigo, 2008) is recognized by ACO's founder as an effective algorithm, we compare the results reported and cited in that paper with those obtained by RACO. As in that paper, the comparison metaheuristics are divided into three groups: (1) probability-learning methods, which explicitly model and sample probability distributions; (2) other metaheuristics, originally developed for combinatorial optimization and later adapted to continuous domains; and (3) ant-related methods, which draw their inspiration from the behavior of ants. We use the number of function evaluations rather than CPU time as the performance measure, because it is insensitive to programming language, code-optimization skill and computer configuration. In order to make the results obtained by RACO comparable to those of the other algorithms, the performance measure, which is consistent with that used in the original papers, differs slightly between experiment groups: the median number of function evaluations is used in the first group, and the average number of function evaluations in the second and third groups. Moreover, we follow the same initial domains, termination conditions and numbers of runs as the competing algorithms. The probability-learning methods in the comparison are the (1+1)-Evolution Strategy with 1/5th-success-rule ((1+1)-ES), the Evolution Strategy with Cumulative Step Size Adaptation (CSA-ES), the Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), the Iterated Density Estimation Evolutionary Algorithm (IDEA) and the Mixed Bayesian Optimization Algorithm (MBOA). The population size used in the above algorithms is chosen separately for each algorithm-problem pair.
The smallest population is selected from the set {10, 20, 50, 100, 200, 400, 800, 1600, 3200}. For a detailed description of the parameter settings, we refer the reader to (Kern, Müller, Hansen, Büche, Ocenasek, & Koumoutsakos, 2004). The basic parameters of ACOR are as follows: number of ants m = 2, speed of convergence ξ = 0.85, locality of the search process q = 10⁻⁴ and archive size k = 50. The benchmark functions, ranging from simple to complex, with dimension n = 10, are listed in Table 1. For a fair comparison, we follow the same experimental setups as in the literature. Each benchmark function was executed 20 times. The termination condition is given as:

$$|f - f^*| < \varepsilon$$

where f is the best function value found by RACO, $f^*$ is the (known a priori) optimal value for the benchmark function, and $\varepsilon = 10^{-10}$. We directly take the results of ACOR, (1+1)ES, CSA-ES, CMA-ES, IDEA and MBOA from (Socha & Dorigo, 2008). Table 2 shows the median number of function evaluations (MNFE) obtained by RACO and the other six algorithms, with the best results highlighted. In addition, the average error (AE) obtained by RACO is reported in the last column to help future researchers perform accuracy comparisons; the AE represents the average difference over all runs between the minimum function value found and the optimal value. We can see that RACO performs quite well in terms of search speed: it achieves the lowest MNFE, much smaller than that of the second-best algorithm. Note that the order of magnitude of the MNFE achieved by the second-best algorithm on each function is 4, while for RACO it is 3 on 80% of the benchmark functions. Although RACO's MNFE exceeds 1000 on the Rosenbrock function, the result is still 82% lower than that of the second-best algorithm. The main cause of the large performance differences between RACO and the reference algorithms is the mechanism of domain adjustment. For a domain that includes the optimal solution, RACO narrows it progressively during the iterative process, and a smaller search area helps the ants locate the optimal solution. Moreover, the grid method discretizes a continuous domain into a limited set of values, so the ants need not consider numerous candidate solutions in each iteration; the exploration phase is thus fostered. The other metaheuristics used in the comparison are the Continuous Genetic Algorithm (CGA), Enhanced Continuous Tabu Search (ECTS), Enhanced Simulated Annealing (ESA) and Differential Evolution (DE). The parameters of the above algorithms are essentially picked by a trial-and-error procedure.
The ant-related methods used in the comparison are Continuous ACO (CACO), the API algorithm and the Continuous Interacting Ant Colony (CIAC). The parameter settings are as follows: for CACO, ant size m = 100, number of regions r = 200, mutation probability p1 = 0.5, fashion crossover probability p2 = 1; for API, ant size m = 20, number of explorations for each ant t = 50, number of failed searches P_local = 50; for CIAC, ant size m = 100, ranges distribution ratio σ = 0.5, persistence of pheromonal spots ρ = 0.1, initial messages number μ = 10; for ACOR, ant size m = 2, speed of convergence ξ = 0.85, locality of the search process q = 10−1 and archive size k = 50. All results of the reference algorithms are taken from Socha and Dorigo (2008). The benchmark sets, consisting of unimodal and multimodal functions, are presented in Tables 3 and 4, with the exception of the Sphere and Rosenbrock functions listed in Table 1. To ensure a fair comparison, the experimental setup is the same as in the literature. RACO was independently run 100 times on each function, and the termination condition is given as:

|f − f∗| < ε1·f∗ + ε2, where f is the best function value found by RACO, f∗ is the (known a priori) optimal value for the benchmark function, and

Z. Chen et al. / Expert Systems With Applications 81 (2017) 309–320

315

Table 2
Comparison of results of RACO, ACOR and probability-learning methods (MNFE; the AE is reported for RACO only).

| Function | ACOR | (1+1)ES | CSA-ES | CMA-ES | IDEA | MBOA | RACO | AE (RACO) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sphere, x:[−3,7]^n, n=10 | 1371.1 | 1370 | 1371.6 | 1371.3 | 1375 | 1418 | 192.5 | 8.11E−11 |
| Ellipsoid, x:[−3,7]^n, n=10 | 4452.6 | 4516 | 4560 | 4450 | 4451.6 | 4464 | 218 | 8.96E−11 |
| Cigar, x:[−3,7]^n, n=10 | 3841.4 | 4450 | 4640 | 3840 | 3844.6 | 3852 | 235 | 8.19E−11 |
| Tablet, x:[−3,7]^n, n=10 | 2567 | 2613 | 2632 | 2568.7 | 2569.9 | 2591 | 203 | 8.74E−11 |
| Rosenbrock, x:[−5,5]^n, n=10 | 7191.1 | 7241 | 7370 | 7190 | 7400 | 8290 | 1256 | 9.91E−11 |

Table 3
Second part of benchmark functions.

Hartmann (H3,4):
$f(x) = -\sum_{i=1}^{4} c_i \exp\big(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\big)$
with c = (1.0, 1.2, 3.0, 3.2),
a = [3.0 10.0 30.0; 0.1 10.0 35.0; 3.0 10.0 30.0; 0.1 10.0 35.0],
p = [0.3689 0.1170 0.2673; 0.4699 0.4387 0.7570; 0.1091 0.8732 0.5547; 0.0382 0.5743 0.8828].
Optimum: x∗ = (0.114, 0.555, 0.855); f_min = −3.8628.

Hartmann (H6,4):
$f(x) = -\sum_{i=1}^{4} c_i \exp\big(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\big)$
with c = (1.0, 1.2, 3.0, 3.2),
a = [10.0 3.00 17.0 3.50 1.70 8.00; 0.05 10.0 17.0 0.10 8.00 14.0; 3.00 3.50 1.70 10.0 17.0 8.00; 17.0 8.00 0.05 10.0 0.10 14.0],
p = [0.1312 0.1696 0.5569 0.0124 0.8283 0.5886; 0.2329 0.4135 0.8307 0.3736 0.1004 0.9991; 0.2348 0.1451 0.3522 0.2883 0.3047 0.6650; 0.4047 0.8828 0.8732 0.5743 0.1091 0.0381].
Optimum: x∗ = (0.201, 0.150, 0.477, 0.275, 0.311, 0.657); f_min = −3.3223.

Shekel (S4,k), k = 5, 7, 10:
$f(x) = -\sum_{i=1}^{k} \big[(x - a_i)^T (x - a_i) + c_i\big]^{-1}$
with c = (0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5) and rows a_i:
(4,4,4,4), (1,1,1,1), (8,8,8,8), (6,6,6,6), (3,7,3,7), (2,9,2,9), (5,5,3,3), (8,1,8,1), (6,2,6,2), (7,3.6,7,3.6).
Optimum: x∗ = (4, 4, 4, 4); f_min = −10.1532 (k=5), −10.4029 (k=7), −10.5364 (k=10).

Table 4
Third part of benchmark functions.

Griewangk: $f(x) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\big(\frac{x_i}{\sqrt{i}}\big) + 1$; x∗ = (0, ..., 0); f_min = 0.
Goldstein & Price: $f(x) = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)] \cdot [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)]$; x∗ = (0, −1); f_min = 3.
Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \big(\frac{x_1 + x_2 - 10}{3}\big)^2$; x∗ = (5, 5); f_min = 0.
B2: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$; x∗ = (0, 0); f_min = 0.
Branin RCOS: $f(x) = \big(x_2 - \frac{5.1 x_1^2}{4\pi^2} + \frac{5 x_1}{\pi} - 6\big)^2 + 10\big(1 - \frac{1}{8\pi}\big)\cos(x_1) + 10$; 3 optima; f_min = 0.397887.
Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\big(-((x_1 - \pi)^2 + (x_2 - \pi)^2)\big)$; x∗ = (π, π); f_min = −1.
Zakharov: $f(x) = \sum_{i=1}^{n} x_i^2 + \big(\sum_{i=1}^{n} 0.5\,i\,x_i\big)^2 + \big(\sum_{i=1}^{n} 0.5\,i\,x_i\big)^4$; x∗ = (0, ..., 0); f_min = 0.
De Jong: $f(x) = x_1^2 + x_2^2 + x_3^2$; x∗ = (0, 0, 0); f_min = 0.

ε1 = ε2 = 10−4, which are respectively the relative and absolute errors. Tables 5 and 6 present the results obtained by RACO, ACOR and, respectively, other ant-related algorithms and other metaheuristics. In addition to the average number of function evaluations (ANFE), the success rates are reported because some of the algorithms failed to find the optima. The highlighted results indicate that RACO outperforms, or performs equally well as, the other algorithms. Also, the average error (AE) obtained by RACO is reported in the last column. Note that the ANFE and AE are computed in
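As a concrete reading of this success criterion, a minimal check can be written as follows. The helper name `is_successful` is ours, and taking |f∗| rather than f∗ in the relative term is our assumption, made so that the threshold stays positive for benchmarks with negative optima (e.g. Easom, f∗ = −1):

```python
def is_successful(f_best, f_opt, eps1=1e-4, eps2=1e-4):
    # Success criterion: |f - f*| < eps1 * |f*| + eps2 (relative + absolute error).
    # Using |f*| (an assumption) keeps the threshold positive for negative optima.
    return abs(f_best - f_opt) < eps1 * abs(f_opt) + eps2

print(is_successful(3.00005, 3.0))   # Goldstein & Price run within tolerance
print(is_successful(-0.95, -1.0))    # Easom run still 0.05 away: not successful
```

For f∗ = 0 (e.g. Sphere), the relative term vanishes and only the absolute tolerance ε2 applies.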


Table 5
Comparison of results of RACO, ACOR and ant-related methods. Each cell lists the algorithms in the order RACO, ACOR, CACO, API, CIAC; "–" means the datum is unavailable.

| Function | ANFE (RACO, ACOR, CACO, API, CIAC) | Success rate % (RACO, ACOR, CACO, API, CIAC) | AE (RACO) |
| --- | --- | --- | --- |
| Rosenbrock (R2), x:[−5,10]^n, n=2 | 78.9, 820, 828.3, 832, 834 | 100, 100, 100, 100, 100 | 5.61E−05 |
| Sphere, x:[−5.12,5.12]^n, n=6 | 47, 781, 809, 794, 845 | 100, 100, 100, 100, 100 | 9.90E−05 |
| Griewangk, x:[−5.12,5.12]^n, n=10 | 30, 1390, 1426, –, 1426 | 97, 61, 100, –, 52 | 8.13E−05 |
| Goldstein and Price, x:[−2,2]^n, n=2 | 49, 384, 398, –, 445 | 100, 100, 100, –, 56 | 2.56E−04 |
| Martin and Gaddy, x:[−20,20]^n, n=2 | 16, 345, 350, –, 379 | 100, 100, 100, –, 20 | 2.11E−05 |
| B2, x:[−100,100]^n, n=2 | 81, 544, –, –, 566 | 100, 100, –, –, 100 | 8.91E−05 |
| Rosenbrock (R5), x:[−5,10]^n, n=5 | 184.3, 2487, –, –, 2503 | 100, 97, –, –, 90 | 7.17E−05 |
| Shekel (S4,5), x:[0,10]^n, n=4 | 47.9, 787, –, –, 837 | 56, 57, –, –, 5 | 7.63E−04 |

Table 6
Comparison of results of RACO, ACOR and other metaheuristics. Each cell lists the algorithms in the order RACO, ACOR, CGA, ECTS, ESA, DE; "–" means the datum is unavailable.

| Function | ANFE (RACO, ACOR, CGA, ECTS, ESA, DE) | Success rate % (RACO, ACOR, CGA, ECTS, ESA, DE) | AE (RACO) |
| --- | --- | --- | --- |
| Branin RCOS, x:[−5,15]^n, n=2 | 79.8, 248.5, 247.5, 245, –, – | 100, 100, 100, 100, –, – | 6.68E−05 |
| B2, x:[−100,100]^n, n=2 | 81, 431.3, 430, –, –, – | 100, 100, 100, –, –, – | 8.91E−05 |
| Easom, x:[−100,100]^n, n=2 | 55.2, 772, 773.9, –, –, – | 70, 98, 100, –, –, – | 1.27E−04 |
| Goldstein and Price, x:[−2,2]^n, n=2 | 49, 232.7, 232.8, 231, 234.4, – | 100, 100, 100, 100, 100, – | 2.56E−04 |
| Rosenbrock (R2), x:[−5,10]^n, n=2 | 78.9, 481.7, 482, 480, 481.7, 481.3 | 100, 100, 100, 100, 100, 100 | 5.61E−05 |
| Zakharov (Z2), x:[−5,10]^n, n=2 | 20, 196.5, 198.2, 195, 276, – | 100, 100, 100, 100, 100, – | 1.62E−05 |
| De Jong, x:[−5.12,5.12]^n, n=3 | 42, 393, 393.9, –, –, 392 | 100, 100, 100, –, –, 100 | 9.47E−05 |
| Hartmann (H3,4), x:[0,1]^n, n=3 | 26.8, 342, 343.7, 343.6, 344, – | 100, 100, 100, 100, 100, – | 3.19E−04 |
| Shekel (S4,5), x:[0,10]^n, n=4 | 47.9, 611.3, 610, 611.4, 611.9, – | 56, 57, 76, 75, 54, – | 7.63E−04 |
| Shekel (S4,7), x:[0,10]^n, n=4 | 48.9, 681.1, 680, 681.3, 681.8, – | 92, 79, 83, 80, 54, – | 7.87E−04 |
| Shekel (S4,10), x:[0,10]^n, n=4 | 49.6, 651.1, 650, 651.4, 651.8, – | 97, 81, 83, 80, 50, – | 7.23E−04 |
| Rosenbrock (R5), x:[−5,10]^n, n=5 | 184.3, 2143.2, 2143.9, 2142, 2144.5, – | 100, 97, 100, 100, 100, – | 7.17E−05 |
| Zakharov (Z5), x:[−5,10]^n, n=5 | 61.8, 727, 728.9, 730.1, 823, – | 100, 100, 100, 100, 100, – | 6.58E−05 |
| Hartmann (H6,4), x:[0,1]^n, n=6 | 28.6, 722, 723.3, 724.1, 725.7, – | 50, 100, 100, 100, 100, – | 2.67E−04 |
| Griewangk, x:[−5.12,5.12]^n, n=10 | 29.7, 1390, –, –, –, 1399.2 | 97, 61, –, –, –, 100 | 8.13E−05 |

relation to only the successful runs. The symbol "–" indicates that the datum is unavailable for the algorithm. With respect to the ANFE, RACO is clearly the winner in both groups of comparisons. The differences in ANFE between RACO and the considered algorithms are large: the second-best algorithm used 2–45 times more function evaluations. The reason is that, as the iterative process continues, the length of the domain is reduced gradually and the ants need only search a limited number of values in each newly formed domain, which speeds up the optimization process. In terms of the success rate, RACO achieves the best results on 6 of the 8 functions in the 2nd group (Table 5) and on 11 of the 15 functions in the 3rd group (Table 6). Compared with ACOR and the other ant-related algorithms, RACO obtains slightly worse success rates than the best algorithm on the Griewangk and Shekel (S4,5) functions. Compared with ACOR and the other metaheuristics, RACO's success rates on the Easom, Shekel (S4,5) and Hartmann (H6,4) functions are much worse than the best ones. These less ideal results reveal a risk in the domain adjustment: in the iterative process, the domain is adjusted towards a smaller area containing the point with the most pheromone, but that point might be near a local optimum rather than the global optimum, so the global optimum may be removed from the newly formed domain. Because RACO takes into account neither the correlation between variables nor local search, the point with the most pheromone can mislead the direction of domain adjustment and leave the ants stuck in a local optimum. Nevertheless, given its comprehensive performance, RACO can be considered a competitive approach.

4.2. Tests on initial domains without the optimal solution

Since there is no literature on ACO concerning the robustness of domains of variables, we do not compare RACO with other algorithms in this subsection. Here we use a benchmark set consisting of 9 functions with integer solutions and 5 functions with fractional solutions. The benchmark functions, except for those aforementioned, are presented in Table 7. The results obtained by RACO are reported in terms of the average number of function evaluations (ANFE), average error (AE) and success rate. Note that the ANFE and AE are evaluated only for successful runs. A run is considered successful if the difference between the minimum function value found by RACO and the optimum is smaller than or equal to 10−5. On each benchmark function, RACO was run 20 times using the following termination condition:

max(h1, h2, . . . , hn) < ε

where hi (i = 1, 2, . . . , n) is the value of the equivalent share and ε = 10−5. In all experiments, we use 2-dimensional functions and give 5 types of initial domains without the optimal solution: (1) both domains are positive and large; (2) both domains are negative and large; (3) one positive domain and one negative domain, both far from the optimal solution; (4) one positive domain and one negative domain, both extremely narrow; (5) one positive and small domain, and one negative and large domain. Table 8 presents the results obtained by RACO; the symbol "×" indicates that RACO failed to find the optimal solution. We can see that RACO performs well when the initial domains do not contain the optimal solution. Across the 5 types of domains, RACO finds the correct results with a 100% success rate on 9 of the 14 functions. Statistically speaking, the 3rd type of domains costs RACO the most ANFE, and the 4th type costs the least. The results reflect a general trend: RACO spends more ANFE on domains far from the optimal solution and less ANFE on domains close to it.
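Reading each h_i as the per-dimension equivalent share, i.e. the length of one sub-interval after the current domain of variable i is divided into its current number of shares (our interpretation of the excerpt; the helper below is a sketch, not the paper's code), the stopping rule amounts to:

```python
def should_stop(domains, divisions, eps=1e-5):
    """Stop once every dimension's equivalent share, the sub-interval length
    (hi - lo) / divisions, has shrunk below eps."""
    shares = [(hi - lo) / divisions for lo, hi in domains]
    return max(shares) < eps

# A 2-D domain that has been narrowed far enough on both axes:
print(should_stop([(0.9999, 1.0001), (-0.0001, 0.0001)], divisions=100))
# One axis still wide, so the search continues:
print(should_stop([(0.0, 1.0), (-0.0001, 0.0001)], divisions=100))
```

Because the criterion takes the maximum over all dimensions, every variable's domain must have converged before the run terminates.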


Table 7
Fourth part of benchmark functions.

Rastrigin: $f(x) = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2)$; x∗ = (0, 0); f_min = −2.
Shubert: $f(x) = \big(\sum_{i=1}^{5} i\cos(i + (i+1)x_1)\big)\big(\sum_{i=1}^{5} i\cos(i + (i+1)x_2)\big)$; 18 optima; f_min = −186.7309.
Ackley: $f(x) = -20\exp\big(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\big) - \exp\big(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\big) + 20 + e$; x∗ = (0, ..., 0); f_min = 0.
Levy: $f(x) = \sin^2(\pi y_1) + \sum_{i=1}^{n-1}\big[(y_i - 1)^2(1 + 10\sin^2(\pi y_i + 1))\big] + (y_n - 1)^2(1 + 10\sin^2(2\pi y_n))$, with $y_i = 1 + \frac{x_i - 1}{4}$; x∗ = (1, ..., 1); f_min = 0.
Beale: $f(x) = (1.5 - x_1 + x_1x_2)^2 + (2.25 - x_1 + x_1x_2^2)^2 + (2.625 - x_1 + x_1x_2^3)^2$; x∗ = (3, 0.5); f_min = 0.
Six-Hump Camel-Back: $f(x) = 4x_1^2 - 2.1x_1^4 + x_1^6/3 + x_1x_2 - 4x_2^2 + 4x_2^4$; x∗ = (0.08983, −0.7126), (−0.08983, 0.7126); f_min = −1.0316.
Bohachevsky: $f(x) = x_1^2 + x_2^2 - 0.3\cos(3\pi x_1) + 0.3\cos(4\pi x_2) + 0.3$; x∗ = (0, 0.24), (0, −0.24); f_min = −0.24.
Hansen: $f(x) = \big(\cos(1) + 2\cos(x_1 + 2) + 3\cos(2x_1 + 3) + 4\cos(3x_1 + 4) + 5\cos(4x_1 + 5)\big)\cdot\big(\cos(2x_2 + 1) + 2\cos(3x_2 + 2) + 3\cos(4x_2 + 3) + 4\cos(5x_2 + 4) + 5\cos(6x_2 + 5)\big)$; 9 optima, x∗ = (−1.30671, 4.85806), etc.; f_min = −176.5418.

Table 8
Results of RACO given the incorrect domains. Each cell gives ANFE / AE / success rate (%); "×" indicates that RACO failed to find the optimal solution. The five domain types are:
D1: x1:[100,200], x2:[50,80]; D2: x1:[−300,−180], x2:[−600,−50]; D3: x1:[1800,1900], x2:[−230,−110]; D4: x1:[1,2], x2:[−3,−1]; D5: x1:[100,110], x2:[−300,−190].

| Function | D1 | D2 | D3 | D4 | D5 |
| --- | --- | --- | --- | --- | --- |
| Goldstein & Price | 383 / 9.18E−10 / 100 | 370 / 1.99E−09 / 100 | 1559 / 7.22E−10 / 100 | 111 / 5.17E−11 / 100 | 292 / 1.17E−09 / 100 |
| Zakharov | 155 / 1.84E−11 / 100 | 171 / 2.58E−12 / 100 | 187 / 4.53E−12 / 100 | 108 / 2.14E−11 / 100 | 163 / 8.42E−12 / 100 |
| Martin & Gaddy | 155 / 4.10E−12 / 100 | 169 / 2.23E−12 / 100 | 184 / 3.74E−12 / 100 | 125 / 2.87E−12 / 100 | 164 / 2.11E−12 / 100 |
| Griewangk | 188 / 2.64E−12 / 70 | 220 / 1.15E−12 / 80 | 217 / 1.42E−12 / 40 | 108 / 5.40E−12 / 100 | 249 / 1.23E−12 / 25 |
| Rastrigin | 156 / 1.16E−10 / 100 | 189 / 2.91E−11 / 100 | 190 / 9.18E−11 / 100 | 111 / 2.08E−11 / 100 | 159 / 1.23E−09 / 100 |
| Shubert | 151 / 0 / 100 | 282 / 0 / 100 | 284 / 0 / 100 | × / × / 0 | 266 / 0 / 100 |
| Ackley | 171 / 2.77E−06 / 100 | × / × / 0 | × / × / 0 | 108 / 1.15E−05 / 100 | × / × / 0 |
| B2 | 157 / 3.84E−11 / 100 | 170 / 5.64E−10 / 100 | 182 / 7.23E−12 / 100 | 108 / 6.55E−11 / 100 | 156 / 3.33E−10 / 100 |
| Levy | 157 / 1.70E−12 / 100 | 170 / 4.91E−13 / 100 | 261 / 5.93E−12 / 100 | × / × / 0 | × / × / 0 |
| Beale | 178 / 2.93E−12 / 100 | 178 / 1.11E−11 / 100 | 190 / 3.41E−12 / 100 | 118 / 2.03E−12 / 100 | 167 / 3.95E−12 / 100 |
| Rosenbrock | 253 / 4.74E−11 / 100 | 242 / 3.35E−11 / 100 | 802 / 1.37E−11 / 100 | 163 / 1.29E−10 / 100 | 423 / 2.01E−11 / 100 |
| Bohachevsky | 153 / 0 / 100 | 169 / 0 / 100 | 182 / 0 / 100 | 119 / 0 / 100 | 168 / 0 / 100 |
| Hansen | 205 / 1.31E−07 / 100 | 261 / 1.30E−07 / 100 | 260 / 1.36E−07 / 100 | 113 / 1.33E−07 / 100 | 188 / 1.29E−07 / 95 |
| Six-Hump Camel-Back | 166 / 4.65E−08 / 100 | 174 / 4.65E−08 / 100 | 177 / 4.65E−08 / 100 | × / × / 0 | 176 / 4.65E−08 / 100 |


[Fig. 3. Domain adjustment for x1 of Rosenbrock. Bar plot of the domain of x1 (roughly −600 to 1400) against iteration (0 to 300).]

On the Griewangk and Ackley functions, RACO gets stuck in the 2nd, 3rd and 5th types of domains. On the Shubert, Levy and Six-Hump Camel-Back functions, RACO gets stuck in the 4th type of domains. In terms of accuracy, the orders of magnitude of the average errors are well controlled within 10−14–10−7, except for the Ackley function. These experiments show that the absence of the optimal solution from an initial domain has only a very small negative effect on RACO. The self-adaptive domain adjustment is the key reason why RACO is robust with respect to the domains of variables. Using this mechanism, the ants can change the border of the initial domain to explore a completely new area. The direction of border adjustment is not blind, but based on the position of the element owning the most pheromone in Tau. Once a newly formed domain contains the optimal solution, its border can be narrowed gradually to speed up the convergence.
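The border-adjustment logic for a single variable can be caricatured as follows. The expansion factor and the symmetric shrink below are illustrative stand-ins for Eqs. (4)–(7), which are not reproduced in this excerpt; only the decision rule follows the text: expand past the border that owns the most pheromone, otherwise shrink around the best interior point.

```python
import numpy as np

def adjust_domain(lo, hi, grid, tau, expand=1.25, shrink=0.05):
    """One self-adaptive adjustment of a variable's domain [lo, hi].
    grid: the discretized candidate values; tau: their pheromone levels."""
    best = grid[int(np.argmax(tau))]        # element owning the most pheromone
    length = hi - lo
    if best == grid[0]:                     # best value sits on the lower border:
        return lo - expand * length, best   #   extend the domain past that border
    if best == grid[-1]:                    # best value sits on the upper border:
        return best, hi + expand * length
    # interior best: shrink the domain symmetrically around it
    return best - (1 - shrink) * length / 2, best + (1 - shrink) * length / 2

grid = np.linspace(100.0, 110.0, 11)        # e.g. the 5th-type domain for x1
lo, hi = adjust_domain(100.0, 110.0, grid, tau=[9] + [1] * 10)
print(lo, hi)   # the domain is pushed below 100, towards smaller x1
```

Repeated applications of this rule produce exactly the behaviour of Figs. 3 and 4: the domain first migrates away from the wrong initial region, then contracts once it covers a promising area.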

Table 9
Sensitivity analysis for the parameters δ1 and δ2 on Rosenbrock (R2), x1:[100,110], x2:[−300,−190]. Values marked * are the initial values provided at the beginning of Section 4; "×" indicates that RACO failed to find the optimal solution.

| Parameter | Value | ANFE | Success rate (%) | AE |
| --- | --- | --- | --- | --- |
| δ1 | 0 | × | 0 | × |
| δ1 | 0.25 | 1130.3 | 100 | 3.96E−11 |
| δ1 | 0.75 | 521.1 | 100 | 1.52E−10 |
| δ1 | 1.25* | 421.1 | 100 | 2.58E−11 |
| δ1 | 1.75 | 678.4 | 100 | 7.56E−11 |
| δ1 | 2.25 | 499.4 | 70 | 1.00E−10 |
| δ2 | 0.025 | 1105.5 | 100 | 1.55E−11 |
| δ2 | 0.05* | 421.1 | 100 | 2.58E−11 |
| δ2 | 0.1 | 687.9 | 100 | 7.60E−11 |
| δ2 | 0.15 | 429 | 100 | 3.19E−10 |
| δ2 | 0.2 | 1380.3 | 80 | 1.69E−10 |
| δ2 | 0.25 | × | 0 | × |
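The speed side of the δ2 pattern in Table 9 (slower but more accurate for small δ2, faster but riskier for large δ2) can be approximated with a simple geometric-shrink model. This model is an assumption on our part: it ignores expansions and treats δ2 as a fixed per-iteration reduction fraction of the domain length.

```python
import math

def iterations_to_converge(initial_len, tol, delta2):
    """Iterations needed for a domain of length initial_len, shrunk by a
    factor (1 - delta2) per iteration, to fall below tol."""
    return math.ceil(math.log(tol / initial_len) / math.log(1.0 - delta2))

# Halving delta2 roughly doubles the number of shrink steps needed:
print(iterations_to_converge(10.0, 1e-10, 0.05))    # coarser shrink, fewer steps
print(iterations_to_converge(10.0, 1e-10, 0.025))   # finer shrink, ~2x more steps
```

This matches the qualitative trend in the δ2 rows of Table 9: halving δ2 from 0.05 to 0.025 roughly doubles the observed ANFE.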

4.3. Discussion

The defining characteristic of RACO is the self-adaptive domain adjustment, from which the self-adaptive pheromone increment, domain division and ant size derive. To see clearly how the self-adaptive domain adjustment works, we take one run of RACO on the Rosenbrock function, given the 5th type of domains, as an example. Figs. 3 and 4 display the process of domain adjustment, with each bar presenting the new domain obtained by RACO after one iteration. We can see that a new domain may extend so that the ants are able to explore a new area. After the corrected domain contains a promising solution, it tends to shrink. Finally, it converges to the optimum after iterative adjustments. It is also found that a newly formed domain may, after several iterations, be far from the old one; thus, the function value can change significantly as the domain varies. Fig. 5 describes the change of the order of magnitude of the Rosenbrock function value: at the beginning of the iterative process, the order of magnitude is 10, while near the end of the iterations it falls to −11. This dramatic change of the function value requires the Q in Eq. (2) to change dynamically, which is why we put forward the self-adaptive pheromone increment. Moreover, when a domain is enlarged significantly, the small number of equivalent shares that speeds up the search in the early stage is no longer applicable. Thus, we put forward the self-adaptive domain division, which in turn requires the ant size to become self-adaptive.
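One plausible way to realize such a self-adaptive pheromone increment is to scale the deposit to the current order of magnitude of the objective. This is a guess at the intent; Eq. (2) and the paper's actual update rule are not reproduced in this excerpt, and the helper name `adaptive_increment` is ours:

```python
import math

def adaptive_increment(f_best, base_q=1.0):
    """Scale the pheromone increment Q to the order of magnitude of the
    current best objective value, so deposits stay meaningful as f swings
    from ~1e10 down to ~1e-11 during domain adjustment."""
    if f_best == 0.0:
        return base_q
    return base_q * 10.0 ** math.floor(math.log10(abs(f_best)))

print(adaptive_increment(7.2e9))     # large Q early, when f is huge
print(adaptive_increment(3.5e-11))   # tiny Q near convergence
```

With a fixed Q, deposits would be either negligible against huge early function values or overwhelming once f approaches 10−11; rescaling keeps the pheromone signal informative across the whole run.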

In order to understand the self-adaptive domain adjustment in more depth, we perform a sensitivity analysis for the key parameters in Eqs. (4)–(7). Note that the larger δ1 is, the longer the expanded domain is; the larger δ2 is, the shorter the reduced domain is. Given different values of the parameters δ1 and δ2, RACO was run 20 times on the Rosenbrock function, and the average number of function evaluations (ANFE), average error (AE) and success rate are reported, with the benchmark results indicated in Table 9. The symbol "×" indicates that RACO failed to find the optimal solution. Compared with the benchmark results, a decline in δ1 increases the ANFE, because a smaller domain increment costs RACO more domain adjustments before the correct domain is located. The case of δ1 = 0 means the new domain is as long as the old one, which makes RACO fail to find the optimum. However, δ1 should not be too large either; otherwise it harms RACO's convergence and correctness: as δ1 increases to 2.25, the ANFE grows from 421.1 to 499.4, and the success rate drops from 100% to 70%. Further, compared with the benchmark results, a growth in δ2 increases the ANFE and decreases the success rate, because the area containing the optimal solution may be missed if the domain is reduced on a large scale. Although decreasing δ2 can improve the accuracy of RACO, search speed is sacrificed: when δ2 decreases from 0.05 to 0.025, the AE decreases


[Fig. 4. Domain adjustment for x2 of Rosenbrock. Bar plot of the domain of x2 (0 to 120) against iteration (0 to 300).]

[Fig. 5. The order of magnitude of the Rosenbrock function value Y (from about 10 down to about −11) against iteration (0 to 300).]

simultaneously, but at the cost of an increased ANFE. In short, the parameter setting is a trial-and-error procedure.

5. Conclusions

Although there are many improved ACOs for continuous functions, they all belong to a category that we call limited-range-search algorithms, in which the ants search for the solution only within the given initial domains. The quality of their results is quite sensitive to the domain's properties, such as length, symmetry and border. For a complex function derived from a real problem, if no correct domain containing the optimal solution can be given, those algorithms cannot work. To overcome this shortcoming, we propose a robust ant colony optimization (RACO) that is robust with respect to the domains of variables. RACO can change the border of the domain based on the quality of the solutions found. Thus, the ants' search is not limited to the given initial domain, but can extend to a completely different domain. In this sense, RACO is a broad-range-search algorithm that is able to find the correct result even if the given domains do not contain the optimal solution. Because RACO inherits the framework of conventional ACO, it is very simple to use. RACO introduces several new characteristics: self-adaptive domain adjustment, self-adaptive ant size, self-adaptive pheromone increment and self-adaptive domain division. Experiments show that RACO can obtain the optimal solution from arbitrary initial domains. With this ability, people no longer need to estimate a correct domain containing the optimal solution beforehand.

The work presented here can be extended along two directions. Firstly, owing to RACO's closeness to the formulation of the original ACO used to solve combinatorial optimization problems, it is possible to apply RACO to mixed discrete-continuous optimization problems. Secondly, when the dimension of the benchmark function grows large, the precision of RACO decreases in the case of given domains without the optimal solution. Local search heuristics could be employed to intensify the ants' search.

Acknowledgment

This research is supported by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (Grant No. 15YJC630013).


References

Afshar, A., Massoumi, F., Afshar, A., & Mariño, M. A. (2015). State of the art review of ant colony optimization applications in water resource. Water Resources Management, 29(11), 3891–3904.
Bilchev, G., & Parmee, I. C. (1995). The ant colony metaphor for searching continuous design spaces. In Proceedings of the AISB workshop on evolutionary computation, University of Sheffield, UK. LNCS 933 (pp. 25–39). Springer-Verlag.
Chen, Z., & Wang, C. (2014). Modeling RFID signal distribution based on neural network combined with continuous ant colony optimization. Neurocomputing, 123, 354–361.
Dorigo, M., & Gambardella, L. M. (1997). Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1(1), 53–66.
Dréo, J., & Siarry, P. (2004). Continuous interacting ant colony algorithm based on dense hierarchy. Future Generation Computer Systems, 20(5), 841–856.
Demirel, N. Ç., & Toksarı, M. D. (2006). Optimization of the quadratic assignment problem using an ant colony algorithm. Applied Mathematics & Computation, 183(1), 427–435.
Dorigo, M., & Stützle, T. (2010). Ant colony optimization: Overview and recent advances. In Handbook of metaheuristics (pp. 227–263). Springer US.
Ding, Q. L., Hu, X. P., Sun, L. J., & Wang, Y. Z. (2012). An improved ant colony optimization and its application to vehicle routing problem with time windows. Neurocomputing, 98, 101–107.
Eiben, A. E., & Bäck, T. (1997). Empirical investigation of multiparent recombination operators in evolution strategies. Evolutionary Computation, 5(3), 347–365.
Edward, J. S., Ramu, P., & Swaminathan, R. (2016). Imperceptibility-robustness tradeoff studies for ECG steganography using continuous ant colony optimization. Expert Systems with Applications, 49, 123–135.
Fetanat, A., & Khorasaninejad, E. (2015). Size optimization for hybrid photovoltaic–wind energy system using ant colony optimization for continuous domains based integer programming. Applied Soft Computing, 31, 196–209.
Fetanat, A., & Shafipour, G. (2011). Generation maintenance scheduling in power systems using ant colony optimization for continuous domains based 0–1 integer programming. Expert Systems with Applications, 38(8), 9729–9735.
Fogel, D. B., & Beyer, H. G. (1995). A note on the empirical evaluation of intermediate recombination. Evolutionary Computation, 3(4), 491–495.
Gao, S., Zhong, J., & Mo, S. J. (2003). Research on ant colony algorithm for continuous optimization problem. Microcomputer Development, 13(1), 21–22.
Hu, X. M., Zhang, J., & Li, Y. (2008). Orthogonal methods based ant colony search for solving continuous optimization problems. Journal of Computer Science and Technology, 23(1), 2–18.
Huang, R. H., Yang, C. L., & Cheng, W. C. (2013). Flexible job shop scheduling with due window: A two-pheromone ant colony approach. International Journal of Production Economics, 141(2), 685–697.
Kern, S., Müller, S. D., Hansen, N., Büche, D., Ocenasek, J., & Koumoutsakos, P. (2004). Learning probability distributions in continuous evolutionary algorithms: A comparative review. Natural Computing, 3(1), 77–112.
Leguizamon, G., & Coello, C. A. C. (2010). An alternative ACOR algorithm for continuous optimization problems. In Proceedings of the seventh international conference on swarm intelligence, ANTS 2010. LNCS 6234 (pp. 48–59). Springer.
Liao, T. J., Montes de Oca, M. A., Aydın, D., Stützle, T., & Dorigo, M. (2011). An incremental ant colony algorithm with local search for continuous optimization. In Proceedings of the genetic and evolutionary computation conference, GECCO'11 (pp. 125–132). ACM.
Liao, T. J., Stützle, T., Montes de Oca, M. A., & Dorigo, M. (2014). A unified ant colony optimization algorithm for continuous optimization. European Journal of Operational Research, 234(3), 597–609.
Liao, T. J., Socha, K., Montes de Oca, M., Stützle, T., & Dorigo, M. (2014). Ant colony optimization for mixed-variable optimization problems. IEEE Transactions on Evolutionary Computation, 18(4), 503–518.
Monmarché, N., Venturini, G., & Slimane, M. (2000). On how Pachycondyla apicalis ants suggest a new search algorithm. Future Generation Computer Systems, 16(8), 937–946.
Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155–1173.
Xiao, J., & Li, L. P. (2011). A hybrid ant colony optimization for continuous domains. Expert Systems with Applications, 38(9), 11072–11077.
Yang, Q., Chen, W. N., Yu, Z., Gu, T., Li, Y., & Zhang, H. (2016). Adaptive multimodal continuous ant colony optimization. IEEE Transactions on Evolutionary Computation, 99, 1–14.