Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 318063, 7 pages. http://dx.doi.org/10.1155/2014/318063

Research Article

Heterogeneous Differential Evolution for Numerical Optimization

Hui Wang,1 Wenjun Wang,2 Zhihua Cui,3,4 Hui Sun,1 and Shahryar Rahnamayan5

1 School of Information Engineering, Nanchang Institute of Technology, Nanchang 330099, China
2 School of Business Administration, Nanchang Institute of Technology, Nanchang 330099, China
3 Complex System and Computational Intelligence Laboratory, Taiyuan University of Science and Technology, Taiyuan 030024, China
4 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
5 Department of Electrical, Computer and Software Engineering, University of Ontario Institute of Technology, 2000 Simcoe Street North, Oshawa, ON, Canada L1H 7K4

Correspondence should be addressed to Hui Wang; [email protected]

Received 28 September 2013; Accepted 23 December 2013; Published 5 February 2014

Academic Editors: G. C. Gini and J. Zhang

Copyright © 2014 Hui Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Differential evolution (DE) is a population-based stochastic search algorithm that has shown good performance on many benchmark and real-world optimization problems. Individuals in the standard DE, and in most of its modifications, exhibit the same search characteristics because they use the same DE scheme. This paper proposes a simple and effective heterogeneous DE (HDE) to balance exploration and exploitation. In HDE, individuals are allowed to follow different search behaviors randomly selected from a DE scheme pool. Experiments are conducted on a comprehensive set of benchmark functions, including classical problems and shifted large-scale problems. The results show that heterogeneous DE achieves promising performance on a majority of the test problems.

1. Introduction

Differential evolution (DE) [1] is a well-known algorithm for global optimization over continuous search spaces. Although DE has shown good performance on many optimization problems, its performance is greatly influenced by its mutation scheme and control parameters (population size, scale factor, and crossover rate). To enhance the performance of DE, many improved variants have been proposed based on modified mutation strategies [2, 3] or adaptive parameter control [4–6].

The standard DE and most of its modifications [7–10] use homogeneous populations in which all individuals follow exactly the same behavior; that is, all individuals implement the same DE scheme, such as DE/rand/1/bin or DE/best/1/bin. The effect is that the individuals in the population exhibit the same exploration and/or exploitation characteristics [11]. An ideal optimization algorithm should balance exploration and exploitation during the search process: initially it should concentrate on exploration, and as the iterations proceed it should shift toward exploitation to find more accurate solutions. However, it is difficult to determine when the algorithm should switch from explorative to exploitative behavior. To tackle this problem, the concept of heterogeneous swarms [11] was proposed and applied to particle swarm optimization (PSO) [12, 13], where particles in the swarm use different velocity and position update rules. The swarm may therefore consist of explorative particles as well as exploitative particles, which enables heterogeneous PSO to balance exploration and exploitation during the search process.

In this paper, a simple and effective heterogeneous DE (HDE) algorithm is proposed, inspired by the idea of heterogeneous swarms [11]. In HDE, individuals are assigned different search behaviors randomly selected from a DE scheme pool. The idea of heterogeneity in DE is not entirely new: Qin et al. [6] proposed a self-adaptive DE (SaDE), in which individuals are also allowed to implement different mutation schemes according to a complex probability model, and Gong et al. [14] combined


a strategy adaptation mechanism with the four mutation strategies proposed in JADE [3]. In other self-adaptive DE variants [8], the search characteristics of individuals change dynamically through the adaptation of the control parameters (scale factor and/or crossover rate).

The HDE proposed in this paper differs from the above DE variants: in HDE, the behaviors are randomly assigned from a DE scheme pool. Following the suggestions of heterogeneous PSO [11], two heterogeneous models are proposed. The first is static HDE (sHDE), in which the randomly selected behaviors are fixed during the evolution. The second is dynamic HDE (dHDE), in which the behaviors can change randomly during the search process. Experimental studies are conducted on a comprehensive set of benchmark functions, including classical problems and shifted large-scale problems, and the simulation results demonstrate the efficiency and effectiveness of the proposed heterogeneous DE.

The rest of the paper is organized as follows. Section 2 briefly introduces the standard DE algorithm. Section 3 presents the proposed HDE. Experimental simulations, results, and discussions are reported in Section 4. Finally, the work is concluded in Section 5.

2. Differential Evolution

There are several variants of DE [1], which use different mutation strategies and/or crossover schemes. To distinguish these schemes, the notation "DE/x/y/z" is used, where "DE" indicates the DE algorithm, "x" denotes the vector to be mutated, "y" is the number of difference vectors used in the mutation, and "z" stands for the type of crossover scheme, exponential (exp) or binomial (bin). The following subsections describe the mutation, crossover, and selection operations.

2.1. Mutation. For each vector $X_{i,G}$ at generation $G$, this operation creates a mutant vector $V_{i,G}$ based on the current parent population. Five well-known mutation strategies are:

(1) DE/rand/1:
$$V_{i,G} = X_{i_1,G} + F \cdot (X_{i_2,G} - X_{i_3,G}) \quad (1)$$

(2) DE/best/1:
$$V_{i,G} = X_{\text{best},G} + F \cdot (X_{i_1,G} - X_{i_2,G}) \quad (2)$$

(3) DE/current-to-best/1:
$$V_{i,G} = X_{i,G} + F \cdot (X_{\text{best},G} - X_{i,G}) + F \cdot (X_{i_1,G} - X_{i_2,G}) \quad (3)$$

(4) DE/rand/2:
$$V_{i,G} = X_{i_1,G} + F \cdot (X_{i_2,G} - X_{i_3,G}) + F \cdot (X_{i_4,G} - X_{i_5,G}) \quad (4)$$

(5) DE/best/2:
$$V_{i,G} = X_{\text{best},G} + F \cdot (X_{i_1,G} - X_{i_2,G}) + F \cdot (X_{i_3,G} - X_{i_4,G}) \quad (5)$$

where the indices $i_1$, $i_2$, $i_3$, $i_4$, and $i_5$ are mutually distinct random indices chosen from the set $\{1, 2, \ldots, N_p\}$, all different from the base index $i$, and $N_p$ is the population size. The scale factor $F$ is a real number that scales the difference vectors, and $X_{\text{best},G}$ is the best vector in terms of fitness value at the current generation $G$.

2.2. Crossover. As in other EAs, DE employs a crossover operator to build a trial vector $U_{i,G}$ by recombining the current vector $X_{i,G}$ and the mutant vector $V_{i,G}$. The trial vector is defined component-wise as
$$u_{i,j,G} = \begin{cases} v_{i,j,G}, & \text{if } \mathrm{rand}_j(0,1) \le CR \text{ or } j = j_{\text{rand}} \\ x_{i,j,G}, & \text{otherwise}, \end{cases} \quad (6)$$
where $CR \in (0,1)$ is the predefined crossover probability, $\mathrm{rand}_j(0,1)$ is a uniform random number in $[0,1]$ drawn for the $j$th dimension, and $j_{\text{rand}} \in \{1, 2, \ldots, D\}$ is a random index that guarantees at least one component of the trial vector is inherited from the mutant vector.

2.3. Selection. After crossover, a greedy selection mechanism chooses the better of the parent vector $X_{i,G}$ and the trial vector $U_{i,G}$ according to their fitness values $f(\cdot)$. Without loss of generality, this paper considers only minimization problems. If, and only if, the trial vector $U_{i,G}$ is better than the parent vector $X_{i,G}$, then $X_{i,G+1}$ is set to $U_{i,G}$; otherwise, $X_{i,G+1}$ remains $X_{i,G}$:
$$X_{i,G+1} = \begin{cases} U_{i,G}, & \text{if } f(U_{i,G}) \le f(X_{i,G}) \\ X_{i,G}, & \text{otherwise}. \end{cases} \quad (7)$$
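To make these operators concrete, the following is a minimal NumPy sketch of one DE/rand/1/bin generation, combining mutation (1), binomial crossover (6), and greedy selection (7). This is our illustration rather than the authors' code; the names f_obj, pop, and so on are assumptions.

```python
import numpy as np

def de_rand_1_bin_step(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin for a minimization problem:
    mutation Eq. (1), binomial crossover Eq. (6), greedy selection Eq. (7)."""
    rng = np.random.default_rng() if rng is None else rng
    Np, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(Np):
        # Three mutually distinct random indices, all different from i.
        i1, i2, i3 = rng.choice([k for k in range(Np) if k != i],
                                size=3, replace=False)
        # Mutation, Eq. (1): V = X_i1 + F * (X_i2 - X_i3).
        v = pop[i1] + F * (pop[i2] - pop[i3])
        # Binomial crossover, Eq. (6): j_rand guarantees at least one
        # component comes from the mutant vector.
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # Greedy selection, Eq. (7).
        fu = f_obj(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

Repeating this step until the evaluation budget is exhausted yields the standard DE/rand/1/bin algorithm; the schemes in (2)–(5) differ only in how the mutant vector v is formed.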

3. Heterogeneous DE

In the standard DE and most of its modifications, individuals in the population behave with the same search characteristics (exploration and/or exploitation) because they all use the same DE scheme. The effect is that such algorithms can hardly balance exploration and exploitation during the search process. Inspired by heterogeneous swarms [11], a simple and effective heterogeneous DE (HDE) algorithm is proposed in this paper. In contrast to DE and most of its variants, individuals in HDE are allowed to implement different behaviors.

To implement different DE schemes in HDE, two questions must be addressed. First, which DE schemes should be chosen to construct the DE scheme pool? Second, how are DE schemes assigned to individuals? As mentioned before, there are several DE schemes, and different schemes have different search characteristics. In this paper, three DE schemes are chosen to construct the pool: (1) DE/rand/1/bin, (2) DE/best/1/bin, and (3) DE/BoR/1/bin [16]. The first two are basic DE strategies proposed in [1]; the last was recently proposed in [16], where a new mutation strategy called "best-of-random" (BoR) is defined as
$$V_{i,G} = X_{i_b,G} + F \cdot (X_{i_1,G} - X_{i_2,G}) \quad (8)$$
where the individuals $X_{i_1}$, $X_{i_2}$, and $X_{i_b}$ are randomly chosen from the current population, $i \neq i_1 \neq i_2 \neq i_b$, and $X_{i_b}$ is the best of the three; that is, $f(X_{i_b}) \le \min(f(X_{i_1}), f(X_{i_2}))$.
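As an illustration of Eq. (8), a BoR mutant vector can be built by drawing three distinct random individuals and sorting them by fitness. This is a sketch under our own naming, not the implementation from [16].

```python
import numpy as np

def bor_mutation(pop, fitness, i, F=0.5, rng=None):
    """Best-of-random mutation, Eq. (8): draw three distinct random
    individuals (all different from i) and use the fittest of the
    three as the base vector (minimization assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    Np = pop.shape[0]
    r = rng.choice([k for k in range(Np) if k != i], size=3, replace=False)
    r = r[np.argsort(fitness[r])]  # sort picks by fitness, best first
    ib, i1, i2 = r
    return pop[ib] + F * (pop[i1] - pop[i2])
```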

[Figure 1: The encoding of individuals in heterogeneous DE. Each individual $X_{i,G}$, for $i = 1, \ldots, N_p$, is paired with a scheme identifier ID_i recording which DE scheme the individual currently uses.]

There are two reasons for choosing these DE schemes. First, all three are simple and easy to implement. Second, each has different search characteristics: DE/rand/1/bin maintains higher population diversity, DE/best/1/bin shows faster convergence, and DE/BoR/1/bin provides an intermediate behavior between the two. Note that this paper chooses only three DE schemes to construct the scheme pool; other DE schemes could also be used.

Figure 1 presents the encoding method in HDE, where ID_i denotes the DE scheme employed by the $i$th individual. In this paper, ID_i ∈ {1, 2, 3}; that is, ID_i = 1 indicates the DE/rand/1/bin scheme, ID_i = 2 denotes the DE/best/1/bin scheme, and ID_i = 3 stands for the DE/BoR/1/bin scheme.

To address the second question, two heterogeneous models, namely, static HDE (sHDE) and dynamic HDE (dHDE), are used to assign DE schemes to individuals [11].

(i) In static HDE (sHDE), DE schemes are randomly assigned to individuals at population initialization. The assigned schemes do not change during the search process.

(ii) In dynamic HDE (dHDE), DE schemes are also randomly assigned at initialization, but they are not fixed: an individual randomly selects a new DE scheme from the pool whenever it fails to improve its objective fitness value.

In the standard DE and its variants, the greedy selection ensures that the objective fitness value of each individual $X_i$ satisfies $f(X_{i,1}) \ge f(X_{i,2}) \ge \cdots \ge f(X_{i,G})$ for minimization. If a newly generated trial vector $U_i$ cannot improve on its parent $X_i$, this may indicate early stagnation, which dHDE addresses by assigning a new DE scheme to the individual.

The main steps of dynamic heterogeneous DE (dHDE) are presented in Algorithm 1, where $P$ is the current population, FEs is the number of fitness evaluations, and MAX_FEs is the maximum number of FEs. Static HDE (sHDE) is obtained by deleting lines 20–22 of Algorithm 1.

4. Experimental Verifications

This section provides experimental studies of HDE on 18 well-known benchmark optimization problems. According to the properties of these problems, two series of experiments are conducted: (1) comparison of HDE on classical optimization problems and (2) comparison of HDE on shifted large-scale optimization problems.

4.1. Results on Classical Optimization Problems. In this section, 12 classical benchmark problems are used to verify the performance of HDE. These problems were considered in [2, 5, 17–19]. Table 1 presents a brief description of the benchmark problems. All the problems are to be minimized.

Experiments are conducted to compare HDE with six other DE variants:

(i) DE/rand/1/bin;
(ii) DE/best/1/bin;
(iii) DE/BoR/1/bin [16];
(iv) self-adapting DE (jDE) [5];
(v) DE with neighborhood search (NSDE) [20];
(vi) DE using neighborhood-based mutation (DEGL) [2];
(vii) the proposed sHDE and dHDE.

The experiments comprise two series of comparisons: (1) sHDE/dHDE versus the basic DE schemes and (2) sHDE/dHDE versus state-of-the-art DE variants. The first comparison checks whether the heterogeneous method helps improve the performance of DE; the second investigates how the proposed approach fares against recently proposed DE variants. Both series use the same parameter settings, described next.

Table 1: The 12 classical benchmark optimization problems.

Problem  Name                D   Properties   Search range
f1       Sphere              25  Unimodal     [−100, 100]
f2       Schwefel 2.22       25  Unimodal     [−10, 10]
f3       Schwefel 1.2        25  Unimodal     [−100, 100]
f4       Schwefel 2.21       25  Unimodal     [−100, 100]
f5       Rosenbrock          25  Multimodal   [−30, 30]
f6       Step                25  Unimodal     [−100, 100]
f7       Quartic with noise  25  Unimodal     [−1.28, 1.28]
f8       Schwefel 2.26       25  Multimodal   [−500, 500]
f9       Rastrigin           25  Multimodal   [−5.12, 5.12]
f10      Ackley              25  Multimodal   [−32, 32]
f11      Griewank            25  Multimodal   [−600, 600]
f12      Penalized           25  Multimodal   [−50, 50]

(1)  Randomly initialize the population P, including the variables and the IDs;
(2)  Evaluate the objective fitness value of each individual in P;
(3)  FEs = Np;
(4)  while FEs ≤ MAX_FEs do
(5)    for i = 1 to Np do
(6)      if ID_i == 1 then
(7)        Use DE/rand/1/bin to generate a trial vector U_i;
(8)      end
(9)      if ID_i == 2 then
(10)       Use DE/best/1/bin to generate a trial vector U_i;
(11)     end
(12)     if ID_i == 3 then
(13)       Use DE/BoR/1/bin to generate a trial vector U_i;
(14)     end
(15)     Evaluate the objective fitness value of U_i;
(16)     FEs++;
(17)     if f(U_i) ≤ f(X_i) then
(18)       X_i = U_i;
(19)     end
(20)     else  /* For static HDE (sHDE), delete lines 20–22 */
(21)       Randomly assign a new, different DE scheme to X_i (change the value of ID_i);
(22)     end
(23)   end
(24) end

Algorithm 1: The dynamic heterogeneous DE (dHDE).
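To make Algorithm 1 concrete, the following is a minimal, self-contained NumPy sketch of the dHDE loop. This is our illustration, not the authors' implementation; the objective f_obj, the bounds lo and hi, and all helper names are assumptions.

```python
import numpy as np

RAND1, BEST1, BOR1 = 1, 2, 3  # scheme IDs, as in Figure 1

def mutate(pop, fitness, i, scheme, F, rng):
    """Generate a mutant vector for individual i using its assigned scheme."""
    Np = pop.shape[0]
    r = rng.choice([k for k in range(Np) if k != i], size=3, replace=False)
    if scheme == RAND1:                       # Eq. (1)
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
    if scheme == BEST1:                       # Eq. (2)
        best = pop[np.argmin(fitness)]
        return best + F * (pop[r[0]] - pop[r[1]])
    r = r[np.argsort(fitness[r])]             # Eq. (8): best-of-random base
    return pop[r[0]] + F * (pop[r[1]] - pop[r[2]])

def dhde(f_obj, D, Np, max_fes, lo, hi, F=0.5, CR=0.9, seed=0):
    """Dynamic heterogeneous DE (Algorithm 1); requires Np >= 4."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(Np, D))
    ids = rng.integers(1, 4, size=Np)         # random scheme per individual
    fitness = np.array([f_obj(x) for x in pop])
    fes = Np
    while fes <= max_fes:
        for i in range(Np):
            v = mutate(pop, fitness, i, ids[i], F, rng)
            # Binomial crossover, Eq. (6).
            j_rand = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            fu = f_obj(u)
            fes += 1
            if fu <= fitness[i]:              # greedy selection, Eq. (7)
                pop[i], fitness[i] = u, fu
            else:
                # dHDE only: on stagnation, pick a new, different scheme.
                ids[i] = rng.choice([s for s in (1, 2, 3) if s != ids[i]])
    return pop[np.argmin(fitness)], fitness.min()
```

For sHDE, the else branch that reassigns ID_i is simply removed, matching lines 20–22 of Algorithm 1.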

For the three basic DE schemes (DE/rand/1/bin, DE/best/1/bin, and DE/BoR/1/bin) and for sHDE/dHDE, the control parameters F and CR are set to 0.5 and 0.9, respectively [5]. For jDE, NSDE, and DEGL, the parameters F and CR are self-adaptive. For all algorithms, the population size (Np) and the maximum number of FEs are set to 10·D and 5.00E+05, respectively [2]. Each experiment is repeated 30 times, and the mean error fitness values are reported.

4.1.1. Comparison of HDE with Basic DE Schemes. The comparison results of HDE with the three basic DE schemes are presented in Table 2, where "Mean" denotes the mean error fitness value. From the results, sHDE and dHDE outperform the three basic DE schemes on a majority of the test problems. On the unimodal problems (f1–f4), DE/best/1/bin converges faster than the other algorithms owing to the attraction of the global best individual. On f6 and f7, all the algorithms achieve similar performance. DE/rand/1/bin provides high population diversity but a slower convergence rate, which is why it performs better than DE/best/1/bin on most multimodal problems but worse on unimodal ones. DE/BoR/1/bin behaves as a compromise between DE/rand/1/bin and DE/best/1/bin and obtains better performance than the other two basic schemes. Owing to the heterogeneity of these basic DE schemes, sHDE and dHDE significantly improve the performance of DE: on unimodal problems, DE/best/1/bin remains the best, but sHDE and dHDE perform better than DE/rand/1/bin and DE/BoR/1/bin; on multimodal problems, DE/best/1/bin is the worst algorithm, while sHDE and dHDE obtain better performance than the other algorithms.

Comparing sHDE with dHDE, sHDE performs better on 3 problems, while dHDE achieves better results on 5 problems; on the remaining 4 problems, both find the global optimum. Although the dynamic heterogeneous model improves the performance of HDE on many problems, it may lead to premature convergence: in dHDE, if the current individual cannot improve its fitness value, a new DE scheme is assigned to it, and if that new scheme is DE/best/1/bin, the individual will be attracted toward the global best individual, which potentially risks premature convergence.

Table 2: Comparison of HDE with basic DE schemes on classical benchmark problems (mean error fitness values).

Problem  DE/rand/1/bin  DE/best/1/bin  DE/BoR/1/bin  sHDE       dHDE
f1       0.00E+00       4.38E−27       2.37E−46      6.58E−79   0.00E+00
f2       0.00E+00       2.64E−13       3.02E−22      3.32E−39   4.96E−103
f3       2.68E−125      2.41E−04       1.51E−11      3.10E−21   7.21E−57
f4       5.34E−64       1.77E−07       1.89E−11      1.42E−07   2.49E−40
f5       5.75E−29       2.91E−04       1.45E−16      1.11E−29   1.73E−29
f6       0.00E+00       0.00E+00       0.00E+00      0.00E+00   0.00E+00
f7       1.38E−03       2.50E−03       1.52E−03      8.31E−04   1.84E−03
f8       3.36E+03       1.68E+03       2.78E+03      0.00E+00   8.56E+02
f9       4.38E+01       1.32E+02       2.62E+01      5.22E+01   0.00E+00
f10      2.81E+00       2.55E−14       4.14E−15      4.14E−15   4.14E−15
f11      2.95E−02       0.00E+00       0.00E+00      0.00E+00   0.00E+00
f12      9.96E−01       5.00E−28       1.88E−32      1.88E−32   1.88E−32

4.1.2. Comparison of HDE with Other State-of-the-Art DE Variants. The comparison results of HDE with three state-of-the-art DE variants are presented in Table 3, where "Mean" again denotes the mean error fitness value. From the results, sHDE and dHDE perform better than the other three DE algorithms on the majority of the test problems.

Table 3: Comparison of HDE with jDE, NSDE, and DEGL on classical benchmark problems (mean error fitness values).

Problem  jDE        NSDE       DEGL       sHDE       dHDE
f1       4.04E−35   9.55E−35   8.78E−37   6.58E−79   0.00E+00
f2       8.34E−26   8.94E−30   4.95E−36   3.32E−39   4.96E−103
f3       4.76E−14   3.06E−09   1.21E−26   3.10E−21   7.21E−57
f4       3.02E−14   2.09E−11   4.99E−15   1.42E−04   2.49E−40
f5       5.64E−26   2.65E−25   6.89E−25   1.11E−29   1.73E−29
f6       1.67E−36   4.04E−28   9.56E−48   0.00E+00   0.00E+00
f7       3.76E−02   4.35E−03   1.05E−07   8.31E−04   1.84E−03
f8       0.00E+00   2.60E+00   0.00E+00   0.00E+00   8.56E+02
f9       6.74E−24   4.84E−21   5.85E−25   5.22E+01   0.00E+00
f10      7.83E−15   5.97E−10   5.98E−23   4.14E−15   4.14E−15
f11      1.83E−28   7.93E−26   2.99E−36   0.00E+00   0.00E+00
f12      9.37E−24   5.85E−21   7.21E−27   1.88E−32   1.88E−32

Table 4: Average rankings achieved by the Friedman test.

Algorithm   Ranking
dHDE        4.17
sHDE        3.58
DEGL        3.50
jDE         2.25
NSDE        1.50

dHDE outperforms jDE and NSDE on all test problems except f8; on this problem, dHDE falls into a local minimum, while sHDE solves it successfully. dHDE achieves better results than DEGL on 9 problems, while DEGL performs better than dHDE on the remaining 3. To compare the performance of the multiple algorithms across the test suite, we conducted the Friedman test following the suggestions of [21]. Table 4 shows the average rankings of jDE, NSDE, DEGL, sHDE, and dHDE. As seen, the five algorithms can be sorted by average ranking into the following order: dHDE, sHDE, DEGL, jDE, and NSDE. The best average ranking was obtained by dHDE, which outperforms the other four algorithms.
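For reference, average rankings of the kind reported in Table 4 can be computed as sketched below. This is our illustration with hypothetical error values, not the authors' script; it follows the convention of Table 4, where a higher average rank indicates better performance.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_average_ranks(errors, names):
    """Average Friedman ranks over problems. `errors` has shape
    (problems, algorithms); smaller error is better. Ranks are flipped
    so that the best algorithm on a problem receives the highest rank."""
    k = errors.shape[1]
    # rankdata ranks ascending (smallest error -> rank 1), so flip.
    ranks = k + 1 - np.apply_along_axis(rankdata, 1, errors)
    return dict(zip(names, ranks.mean(axis=0)))

# Hypothetical mean-error matrix: 3 problems, 3 algorithms.
errors = np.array([[1e-3, 1e-6, 1e-9],
                   [2e-1, 5e-2, 1e-2],
                   [0.0,  0.0,  3e-4]])
print(friedman_average_ranks(errors, ["A", "B", "C"]))
```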

4.2. Results of HDE on Shifted Large-Scale Optimization Problems. To verify the performance of HDE on complex optimization problems, this section provides experimental studies of HDE on 6 shifted large-scale problems. These problems were considered in the CEC 2008 special session and competition on large-scale global optimization [15]. Table 5 presents a brief description of these benchmark problems. All the problems are to be minimized.

Experiments are conducted to compare four DE algorithms, including the proposed sHDE and dHDE, on the 6 test problems with D = 500:

(i) opposition-based DE (ODE) [22];
(ii) modified DE using Cauchy mutation (MDE) [23];
(iii) the proposed sHDE and dHDE.

For a fair comparison, all four algorithms use the same parameter settings, following the suggestions of [22]. The population size Np is set to D. The control parameters F and CR are set to 0.5 and 0.9, respectively [22]. For ODE, the jumping rate Jr is 0.3. The maximum number of FEs is 5000·D. Each experiment is repeated 30 times, and the mean error fitness values are reported.

Table 5: The 6 shifted large-scale benchmark optimization problems proposed in [15].

Problem  Name                   D    Properties                     Search range
F1       Shifted Sphere         500  Unimodal, separable, scalable  [−100, 100]
F2       Shifted Schwefel 2.21  500  Unimodal, nonseparable         [−100, 100]
F3       Shifted Rosenbrock     500  Multimodal, nonseparable       [−100, 100]
F4       Shifted Rastrigin      500  Multimodal, separable          [−5, 5]
F5       Shifted Griewank       500  Multimodal, nonseparable       [−600, 600]
F6       Shifted Ackley         500  Multimodal, separable          [−32, 32]

Table 6: Comparison of HDE with ODE and MDE on shifted large-scale benchmark problems (mean error fitness values).

Problem  ODE        MDE        sHDE       dHDE
F1       8.02E+01   1.95E+01   8.45E+00   1.14E−10
F2       5.78E+00   2.70E+01   8.54E+01   1.01E+02
F3       1.54E+05   4.67E+05   1.52E+05   1.29E+03
F4       4.22E+03   4.14E+03   5.27E+03   3.42E+03
F5       1.77E+00   1.52E+00   6.94E−01   1.83E−01
F6       4.51E+00   4.02E+00   1.26E+00   1.58E+01

Table 6 presents the comparison results of HDE with ODE and MDE on the shifted large-scale problems. As seen, sHDE and dHDE outperform ODE and MDE on four of the six problems. The static heterogeneous model slightly improves the final solutions of DE, while the dynamic model significantly improves the results on F1 and F3. However, the two heterogeneous models do not always help: on some problems, sHDE and/or dHDE perform worse than ODE and MDE.

The possible reason is that the heterogeneous models may hinder the evolution. In the static model, the entire population is divided into three groups, and each group uses a different basic DE scheme to generate new individuals. Although these groups share the search information of the entire population, each group has a small population and may therefore converge slowly. As mentioned before, the dynamic heterogeneous model runs the risk of premature convergence.

5. Conclusion

In the standard DE and most of its modifications, individuals follow the same search behavior because they use the same DE scheme. The effect is that such algorithms can hardly balance explorative and exploitative abilities during the evolution. Inspired by heterogeneous swarms, a simple and effective heterogeneous DE (HDE) is proposed in this paper, in which individuals follow different search behaviors randomly selected from a DE scheme pool. To implement HDE, two heterogeneous models are proposed: a static model, in which individuals do not change their search behaviors, and a dynamic model, in which they may. Both versions initialize individual behaviors by randomly selecting DE schemes from the pool; in dynamic HDE, whenever an individual fails to improve its fitness value, a new, different DE scheme is randomly selected from the pool.

To verify the performance of HDE, two types of benchmark problems, classical problems and shifted large-scale problems, are used in the experiments. Simulation results show that the proposed sHDE and dHDE outperform the other eight DE variants on a majority of the test problems, demonstrating that the heterogeneous models help improve the performance of DE.

For the dynamic heterogeneous model, the frequency with which new DE schemes are adopted may affect the performance of dHDE; this will be investigated in further experiments. The concept of heterogeneous swarms can also be applied to other evolutionary algorithms to obtain good performance, and future research will investigate different algorithms based on heterogeneous swarms. Moreover, we will try to apply our approach to some real-world problems [24, 25].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the Humanity and Social Science Foundation of the Ministry of Education of China (no. 13YJCZH174), the National Natural Science Foundation of China (nos. 61305150, 61261039, and 61175127), the Science and Technology Plan Projects of the Jiangxi Provincial Education Department (nos. GJJ13744 and GJJ13762), and the Foundation of the State Key Laboratory of Software Engineering (no. SKLSE2012-09-19).

References

[1] R. Storn and K. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[2] S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, "Differential evolution using a neighborhood-based mutation operator," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 526–553, 2009.
[3] J. Zhang and A. C. Sanderson, "JADE: adaptive differential evolution with optional external archive," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009.
[4] J. Liu and J. Lampinen, "A fuzzy adaptive differential evolution algorithm," Soft Computing, vol. 9, no. 6, pp. 448–462, 2005.
[5] J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Žumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
[6] A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398–417, 2009.
[7] F. Neri and V. Tirronen, "Recent advances in differential evolution: a survey and experimental analysis," Artificial Intelligence Review, vol. 33, no. 1-2, pp. 61–106, 2010.
[8] S. Das and P. N. Suganthan, "Differential evolution: a survey of the state-of-the-art," IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
[9] X. S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.
[10] Y. W. Zhong, L. J. Wang, C. Y. Wang, and H. Zhang, "Multi-agent simulated annealing algorithm based on differential evolution algorithm," International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 217–228, 2012.

[11] A. P. Engelbrecht, "Heterogeneous particle swarm optimization," in Proceedings of the International Conference on Swarm Intelligence, vol. 6234, pp. 191–202, 2010.
[12] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[13] E. García-Gonzalo and J. L. Fernández-Martínez, "A brief historical review of particle swarm optimization (PSO)," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 1, pp. 3–16, 2012.
[14] W. Gong, Z. Cai, C. X. Ling, and H. Li, "Enhanced differential evolution with adaptive strategies for numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 41, no. 2, pp. 397–413, 2011.
[15] K. Tang, X. Yao, P. N. Suganthan et al., Benchmark Functions for the CEC'2008 Special Session and Competition on High-Dimensional Real-Parameter Optimization, Nature Inspired Computation and Applications Laboratory, USTC, Hefei, China, 2007.
[16] C. Lin, A. Qing, and Q. Feng, "A new differential mutation base generator for differential evolution," Journal of Global Optimization, vol. 49, no. 1, pp. 69–90, 2011.
[17] L. Ali and S. L. Sabat, "Particle swarm optimization based universal solver for global optimization," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 1, pp. 95–105, 2012.
[18] X. J. Cai, S. J. Fan, and Y. Tan, "Light responsive curve selection for photosynthesis operator of APOA," International Journal of Bio-Inspired Computation, vol. 4, no. 6, pp. 373–379, 2012.
[19] L. P. Xie, J. C. Zeng, and R. A. Formato, "Selection strategies for gravitational constant G in artificial physics optimisation based on analysis of convergence properties," International Journal of Bio-Inspired Computation, vol. 4, no. 6, pp. 380–391, 2012.
[20] Z. Yang, J. He, and X. Yao, "Making a difference to differential evolution," in Advances in Metaheuristics for Hard Optimization, pp. 397–414, 2008.
[21] S. García, A. Fernández, J. Luengo, and F. Herrera, "Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power," Information Sciences, vol. 180, no. 10, pp. 2044–2064, 2010.
[22] S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, "Opposition-based differential evolution," IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
[23] M. Ali and M. Pant, "Improving the performance of differential evolution algorithm using Cauchy mutation," Soft Computing, vol. 15, no. 5, pp. 991–1007, 2011.
[24] A. Y. Abdelaziz, R. A. Osama, and S. M. Elkhodary, "Application of ant colony optimization and harmony search algorithms to reconfiguration of radial distribution networks with distributed generations," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 1, pp. 86–94, 2012.
[25] S. Y. Zeng, Z. Liu, C. H. Li, Q. Zhang, and W. Wang, "An evolutionary algorithm and its application in antenna design," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 2, pp. 129–137, 2012.