Journal of Computational Information Systems 7: 10 (2011) 3698-3705 Available at http://www.Jofcis.com

A Glowworm Swarm Optimization Algorithm Based on Definite Updating Search Domains

Jiakun LIU, Yongquan ZHOU†, Kai HUANG, Zhe OUYANG, Yingjiu WANG

Guangxi Key Laboratory of Hybrid Computation and Integrated Circuit Design Analysis, Nanning 530006, China
College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning 530006, China

Abstract: The glowworm swarm optimization (GSO) algorithm easily falls into local optima and suffers from slow convergence and low accuracy. To address these problems, this paper introduces the concept of definite updating search domains. With this method, a glowworm whose position is being updated moves closer to the current best individual, which improves accuracy and speeds up convergence. Experiments on eight typical test functions show that the proposed algorithm has strong global search capability and effectively avoids premature convergence, thus clearly improving global optimization ability.

Keywords: Glowworm Swarm Optimization (GSO); Definite Updating Search Domains; Function Optimization

1. Introduction

Glowworm swarm optimization (GSO) [1-3] is a swarm intelligence algorithm for optimizing multi-modal functions, proposed by K.N. Krishnanand and D. Ghose in 2005. The algorithm has become a new research hotspot in computational intelligence. As research has deepened, GSO has been applied to multi-robot signal source localization [4] and to robot simulations [5]. Although GSO is widely applicable, it still suffers from premature convergence, limited search accuracy, and low efficiency in later iterations. Inspired by the elitist strategy of ant colony optimization (ACO) [6] and by hybrid GSO algorithms [8-11], this paper introduces the concept of a definite updating search domain at the glowworm position-updating stage and proposes a new algorithm, glowworm swarm optimization with definite updating search domains (GSO-D). With this method, a glowworm whose position is being updated moves closer to the current best individual, which improves accuracy and speeds up convergence.

This paper is organized as follows. Section 2 reviews the basic glowworm swarm optimization algorithm. Section 3 introduces the proposed definite updating search domains glowworm swarm optimization algorithm. The experimental results and analysis are presented in Section 4, and conclusions are given in Section 5.



Corresponding author. Email addresses: [email protected] (Yongquan ZHOU).

1553-9105/ Copyright © 2011 Binary Information Press October, 2011


2. Basic Glowworm Swarm Optimization Algorithm (GSO)

In GSO, glowworms are distributed in the definition space of the objective function. Each glowworm carries its own luciferin and has its own field of vision, called the local-decision range. A glowworm's brightness depends on the objective function value at its position: the brighter the glowworm, the better its position, i.e. the better its objective value. Each glowworm looks for a set of neighbors within its local-decision range; within this set, brighter glowworms are more attractive and draw the glowworm toward them, so the direction of movement changes with the chosen neighbor. Moreover, the size of the local-decision range is influenced by the number of neighbors: when the neighbor density is low, the decision radius is enlarged to help find more neighbors; otherwise, the decision radius shrinks. Finally, most of the glowworms gather at the multiple optima of the given objective function.

Each glowworm i encodes the objective function value J(x_i(t)) at its current location x_i(t) into a luciferin value l_i and broadcasts it within its neighborhood. The set of neighbors N_i(t) of glowworm i consists of those glowworms that have a relatively higher luciferin value and that are located within a dynamic decision domain, which is updated by formula (1) at each iteration.

Local-decision range update:
$$ r_d^i(t+1) = \min\left\{ r_s,\ \max\left\{ 0,\ r_d^i(t) + \beta\left( n_t - |N_i(t)| \right) \right\} \right\} \qquad (1) $$
where r_d^i(t+1) is glowworm i's local-decision range at iteration t+1, r_s is the sensor range, n_t is the neighborhood threshold, and the parameter β affects the rate of change of the neighborhood range.

The set of neighbors within the local-decision range:
$$ N_i(t) = \left\{ j : \left\| x_j(t) - x_i(t) \right\| < r_d^i(t);\ l_i(t) < l_j(t) \right\} \qquad (2) $$
where x_j(t) is glowworm j's position at iteration t and l_j(t) is glowworm j's luciferin at iteration t. The set of neighbors of glowworm i consists of those glowworms that have a relatively higher luciferin value and that are located within a dynamic decision domain whose range r_d^i is bounded above by a circular sensor range r_s (0 < r_d^i < r_s). Each glowworm i selects a neighbor j with probability p_ij(t) and moves toward it. These movements, based only on local information, enable the glowworms to partition into disjoint subgroups, exhibit a simultaneous taxis behavior toward, and eventually co-locate at, the multiple optima of the given objective function.

Probability distribution used to select a neighbor:
$$ p_{ij}(t) = \frac{l_j(t) - l_i(t)}{\sum_{k \in N_i(t)} \left( l_k(t) - l_i(t) \right)} \qquad (3) $$

Movement update:
$$ x_i(t+1) = x_i(t) + s \left( \frac{x_j(t) - x_i(t)}{\left\| x_j(t) - x_i(t) \right\|} \right) \qquad (4) $$

Luciferin update:
$$ l_i(t) = (1 - \rho)\, l_i(t-1) + \gamma\, J(x_i(t)) \qquad (5) $$
where l_i(t) is the luciferin value of glowworm i at iteration t, ρ ∈ (0,1) controls how the cumulative goodness of the path followed by the glowworm is reflected in its current luciferin value, the parameter γ only scales the function fitness values, and J(x_i(t)) is the value of the test function.
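To make the update rules (1)-(5) concrete, the following is a minimal Python sketch of one basic GSO iteration. It is only an illustration of the equations above; the function and variable names (gso_step, J, rng, etc.) are our own and not taken from the paper's implementation.

```python
import numpy as np

# Minimal sketch of one basic GSO iteration following Eqs. (1)-(5).
# x: (n, d) positions, l: (n,) luciferin values, rd: (n,) decision ranges.
def gso_step(x, l, rd, J, rs, nt, beta, s, rho, gamma, rng):
    n = x.shape[0]
    # Eq. (5): luciferin update from the current objective values
    l = (1.0 - rho) * l + gamma * np.array([J(xi) for xi in x])

    x_new = x.copy()
    for i in range(n):
        # Eq. (2): neighbors are closer than rd[i] and brighter than glowworm i
        dist = np.linalg.norm(x - x[i], axis=1)
        nbrs = [j for j in range(n) if j != i and dist[j] < rd[i] and l[j] > l[i]]
        if nbrs:
            # Eq. (3): selection probability proportional to the luciferin difference
            diffs = np.array([l[j] - l[i] for j in nbrs])
            j = rng.choice(nbrs, p=diffs / diffs.sum())
            # Eq. (4): move a step of size s toward the chosen neighbor
            direction = (x[j] - x[i]) / np.linalg.norm(x[j] - x[i])
            x_new[i] = x[i] + s * direction
        # Eq. (1): adapt the local-decision range toward the neighbor threshold nt
        rd[i] = min(rs, max(0.0, rd[i] + beta * (nt - len(nbrs))))
    return x_new, l, rd
```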

3. A Glowworm Swarm Optimization Algorithm Based on Definite Updating Search Domains (GSO-D)

3.1. Idea of the GSO-D Algorithm

Building on the basic GSO algorithm, we introduce the concept of a definite updating search domain at the stage where glowworm positions are updated stochastically, in order to control how glowworm positions change. After the update, the glowworm's position always lies around the current best individual, which enhances the convergence rate of the algorithm. The improved algorithm updates the position according to the formula below:
$$ x_i(t) = x_{best}(t) + (\mathrm{rand} - 0.5) \qquad (6) $$
where x_best(t) is the position of the best glowworm at iteration t, rand is a random number uniformly distributed in (0, 1), and x_i(t) is the updated position of the glowworm whose position needs to be changed.

3.2. GSO-D Algorithm

The basic steps of GSO-D are as follows:
Step 1. Initialize the parameters ρ, γ, β, s, l0, m, n and the position of each glowworm.
Step 2. For each glowworm, update the luciferin value according to Eq. (5).
Step 3. Select the glowworms that satisfy the neighborhood condition of Eq. (2).
Step 4. For each glowworm that does not satisfy the condition of Eq. (2), renew its position with Eq. (6).
Step 5. Use Eq. (3) to select a neighbor j (j ∈ N_i(t)) and update the position with Eq. (4).
Step 6. Revise the search radius by Eq. (1).
Step 7. When one iteration is complete, enter the next iteration: check whether the termination condition is satisfied; if so, exit the loop and record the result, otherwise go to Step 2.

The flow chart of GSO-D is shown in Fig. 1 below, followed by an illustrative sketch of the main loop.


[Flow chart: basic parameter initialization → initialize the position of each glowworm → (loop until the maximum iteration is reached) update the luciferin of each glowworm → glowworms that do not satisfy the local-decision range condition update their positions with Eq. (6), the others perform the definite movement toward a selected neighbor with Eq. (4) → revise the search radius and add one to the iteration count.]

Fig. 1 GSO-D Algorithm Flow Chart
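The following is a minimal, self-contained Python sketch of the GSO-D main loop as we read Steps 1-7 and Eq. (6). The function name run_gso_d, the default sensor range, and the maximization convention (brighter is better) are our assumptions for illustration, not the authors' code.

```python
import numpy as np

# Sketch of the GSO-D main loop: basic GSO updates plus the definite
# updating rule of Eq. (6) for glowworms without qualified neighbors.
def run_gso_d(J, dim, lower, upper, n=100, max_t=500, rho=0.4, gamma=0.6,
              beta=0.08, s=0.03, nt=5, l0=5.0, rs=None, seed=0):
    rng = np.random.default_rng(seed)
    rs = rs if rs is not None else (upper - lower)   # assumed sensor range
    x = rng.uniform(lower, upper, size=(n, dim))     # Step 1: initial positions
    l = np.full(n, float(l0))                        # initial luciferin
    rd = np.full(n, float(rs))                       # initial decision ranges

    for t in range(max_t):
        # Step 2 / Eq. (5): luciferin update
        l = (1.0 - rho) * l + gamma * np.array([J(xi) for xi in x])
        best = x[np.argmax(l)].copy()                # current best glowworm
        for i in range(n):
            # Step 3 / Eq. (2): find brighter neighbors within the decision range
            dist = np.linalg.norm(x - x[i], axis=1)
            nbrs = [j for j in range(n) if j != i and dist[j] < rd[i] and l[j] > l[i]]
            if nbrs:
                # Step 5 / Eqs. (3)-(4): probabilistic selection and movement
                diffs = np.array([l[j] - l[i] for j in nbrs])
                j = rng.choice(nbrs, p=diffs / diffs.sum())
                x[i] = x[i] + s * (x[j] - x[i]) / np.linalg.norm(x[j] - x[i])
            else:
                # Step 4 / Eq. (6): definite updating, pull toward the current best
                x[i] = best + (rng.random(dim) - 0.5)
            # Step 6 / Eq. (1): revise the local-decision range
            rd[i] = min(rs, max(0.0, rd[i] + beta * (nt - len(nbrs))))
    return x, l
```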

4. Experimental Results: Comparison between GSO and GSO-D

4.1. Test Functions

In order to test the effectiveness of our algorithm, we take eight test functions [7] to verify the proposed algorithm and compare it with GSO. The eight test functions are as follows:

$$ F_1: f_1(x) = \sum_{i=1}^{n} x_i^2, \quad -100 \le x_i \le 100\ (i = 1,2,\ldots,n;\ n = 10); $$ the objective function value is 0 at (0, 0, …, 0);

$$ F_2: f_2(x) = \sum_{i=1}^{n} (x_i + 0.5)^2, \quad -10 \le x_i \le 10\ (i = 1,2,\ldots,n;\ n = 20); $$ the objective function value is 0 at (0, 0, …, 0);

$$ F_3: f_3(x) = 0.5 - \frac{\sin^2\sqrt{x_1^2 + x_2^2} - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2}, \quad -100 \le x_i \le 100,\ i \in \{1,2\}; $$ the objective function value is 0 at (0, 0);

$$ F_4: f_4(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right]\left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right], \quad -2 \le x_i \le 2,\ i \in \{1,2\}; $$ the objective function value is 3 at (0, -1);

$$ F_5: f_5(x) = \left(x_1^2 + x_2^2\right)^{1/4}\left[\sin^2\left(50\left(x_1^2 + x_2^2\right)^{1/10}\right) + 1.0\right], \quad -100 < x_i < 100; $$ the objective function value is 0 at (0, 0);

$$ F_6: f_6(x) = 1 + \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right), \quad -600 < x_i < 600; $$ the objective function value is 0;

$$ F_7: f_7(x) = \sum_{i=1}^{n} i x_i^2, \quad -5.12 < x_i < 5.12; $$ the objective function value is 0 at (0, 0);

$$ F_8: f_8(x) = -\exp\left(-0.5\sum_{i=1}^{n} x_i^2\right), \quad -1 < x_i < 1; $$ the objective function value is -1.
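As an illustration, a few of these benchmarks could be coded as follows. The function names are our own labels; the definitions simply follow the formulas above.

```python
import numpy as np

def f1_sphere(x):                      # F1: sum of squares
    return np.sum(np.asarray(x, dtype=float) ** 2)

def f2_step(x):                        # F2: shifted sum of squares
    return np.sum((np.asarray(x, dtype=float) + 0.5) ** 2)

def f6_griewank(x):                    # F6: Griewank function
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def f8_exponential(x):                 # F8: negative exponential
    return -np.exp(-0.5 * np.sum(np.asarray(x, dtype=float) ** 2))
```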

4.2. Test Platform and Parameters

GSO and GSO-D are coded in MATLAB 7.0 and run on an Intel Core 2 T5870 2.00 GHz machine with 2 GB RAM under Windows 7. The parameters of GSO and GSO-D are set as follows: n = 100, maximum number of iterations max_t = 500, ρ = 0.4, γ = 0.6, β = 0.08, moving step s = 0.03, nt = 5, and initial luciferin l0 = 5.
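As a hypothetical usage example only, the settings above could be passed to the run_gso_d sketch from Section 3 together with the f1_sphere benchmark sketched in Section 4.1; this is not the authors' MATLAB code, and since luciferin grows with J, the minimization benchmark is negated here.

```python
import numpy as np

# Assumes the illustrative run_gso_d and f1_sphere sketches defined earlier.
x, l = run_gso_d(J=lambda v: -f1_sphere(v), dim=10, lower=-100.0, upper=100.0,
                 n=100, max_t=500, rho=0.4, gamma=0.6, beta=0.08,
                 s=0.03, nt=5, l0=5.0)
best = x[np.argmax(l)]     # brightest glowworm after the final iteration
print(f1_sphere(best))     # objective value reached on F1
```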

4.3. Analysis of Results

For each test function F1-F8, ten independent experiments were carried out; the best value, worst value, and average value were recorded and compared with those of GSO. The results are shown in Table 1. Compared with the original GSO, GSO-D significantly improves accuracy in the best, worst, and average values, and the best solution found by GSO-D is closer to the theoretical value. The effect is especially obvious for the high-dimensional functions F1, F2, and F7, where accuracy is improved by 5, 2, and 3 orders of magnitude, respectively. For multi-modal functions that are prone to local convergence, the proposed GSO-D algorithm is also able to achieve better results: for example, on the multi-modal functions F4, F5, and F6, which easily fall into local convergence, GSO-D improves the results by 2 to 3 orders of magnitude. From the experimental results, the proposed GSO-D algorithm performs better, especially on high-dimensional functions and on multi-modal functions with many local minima.

Table 1 Experimental Comparison between GSO and GSO-D

Function  Algorithm  Best Value          Worst Value         Average Value
F1        GSO-D      0.0209092081        0.1788653901        0.076542144
F1        GSO        3.7458821928e+003   5.3105070326e+003   4.2142874619e+003
F2        GSO-D      1.1938979877        8.0008926229        4.33021957748
F2        GSO        2.0712723774e+002   2.3536696703e+002   2.1943403272e+002
F3        GSO-D      0.0024555858        0.0043157173        0.0029038642
F3        GSO        0.0224807260        0.0611025937        0.0381388748
F4        GSO-D      3.0000016989        3.0003852691        3.0001352657
F4        GSO        3.0005729948        3.0022107424        3.0010483495
F5        GSO-D      0.0313546159        0.0556201361        0.0395147785
F5        GSO        1.2452648318        2.7916929980        1.8659677216
F6        GSO-D      0.2288965549        0.9940557583        0.7355870408
F6        GSO        4.6429707790        8.3272650679        6.8226892045
F7        GSO-D      5.0310332859e-004   0.0043873157        0.0031286511
F7        GSO        5.7866718261        7.9434573575        7.2334658683
F8        GSO-D      -0.9678747851       -0.8643420545       -0.9024066180
F8        GSO        -0.6348221903       -0.5591000349       -0.6094998622

Fig. 4.1 F1 Curves of the Objective Function Value
Fig. 4.2 F2 Curves of the Objective Function Value
Fig. 4.3 F3 Curves of the Objective Function Value
Fig. 4.4 F4 Curves of the Objective Function Value
Fig. 4.5 F5 Curves of the Objective Function Value
Fig. 4.6 F6 Curves of the Objective Function Value
Fig. 4.7 F7 Curves of the Objective Function Value
Fig. 4.8 F8 Curves of the Objective Function Value

Figs. 4.1-4.8 show the convergence curves of GSO and GSO-D. The curves visually confirm that GSO-D computes with higher accuracy, converges faster, and has greater capability for handling high-dimensional functions.


5. Conclusions

To address the low accuracy and slow convergence of glowworm swarm optimization (GSO), this paper presents a glowworm swarm optimization algorithm based on definite updating search domains (GSO-D). The algorithm applies the concept of a definite updating search domain when each glowworm's position is updated, so that the updated position always lies around the current best glowworm, which improves the convergence speed and accuracy. Experiments show that the new algorithm has strong global search ability, a fast convergence rate, and greatly improved accuracy.

Acknowledgement

This work is supported by Grant 60461001 from the NSF of China and by Grants 0832082 and 0991086 from the Guangxi Science Foundation.

References

[1] Krishnanand, K.N. and Ghose, D. Glowworm swarm optimization: a new method for optimizing multi-modal functions. Computational Intelligence Studies, 1(1): 93-119, 2009.
[2] Krishnanand, K.N. Glowworm swarm optimization: a multimodal function optimization paradigm with applications to multiple signal source localization tasks. India: Department of Aerospace Engineering, Indian Institute of Science, 2007.
[3] Krishnanand, K.N. and Ghose, D. Theoretical foundations for rendezvous of glowworm-inspired agent swarms at multiple locations. Robotics and Autonomous Systems, 7(56): 549-569, 2008.
[4] Krishnanand, K.N. and Ghose, D. A glowworm swarm optimization based multi-robot system for signal source localization. Design and Control of Intelligent Robotic Systems, 53-74, 2009.
[5] Krishnanand, K.N. and Ghose, D. Chasing multiple mobile signal sources: a glowworm swarm optimization approach. In Third Indian International Conference on Artificial Intelligence (IICAI 07), India, 2007.
[6] M. Dorigo, V. Maniezzo, A. Colorni. The Ant System: Optimization by a Colony of Cooperating Agents. IEEE Transactions on Systems, Man and Cybernetics - Part B, 26(1): 29-42, 1996.
[7] Kusum Deep and Jagdish Chand Bansal. Mean particle swarm optimisation for function optimisation. Computational Intelligence Studies, 1(1): 72-91, 2009.
[8] Yan Yang, Yongquan Zhou, Qiaoqiao Gong. Hybrid artificial glowworm swarm optimization algorithm for solving system of nonlinear equations. Journal of Computational Information Systems, 6(10): 3431-3438, 2010.
[9] Hongxia Liu, Yongquan Zhou, Yan Yang, Qiaoqiao Gong, Zhengxin Huang. A novel hybrid optimization algorithm based on glowworm swarm and fish school. Journal of Computational Information Systems, 6(13): 4533-4541, 2010.
[10] Hongxia Liu, Shiliang Chen, Yongquan Zhou. An improved simulating fishing strategy optimization algorithm by using dynamic step. Journal of Information & Computational Science, 7(13): 2715-2721, 2010.
[11] Zhengxin Huang, Yongquan Zhou. Using glowworm swarm optimization algorithm for clustering analysis. Journal of Convergence Information Technology, 6(2): 78-85, 2011.
