Population Migration Algorithm Description Method and Application Based on Unified Framework of Swarm Intelligence

Yongquan Zhou
College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, 530006, China
Email: [email protected]

Weiwei Zhang
College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, 530006, China
Email: [email protected]

Aijia Ouyang
College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, 530006, China
Email: [email protected]
Abstract—The population migration algorithm (PMA), proposed in recent years, is a new search algorithm for global optimization. It mainly simulates two mechanisms: population transition driven by economics, which encourages the algorithm to search in regions with good solutions, and population dispersion driven by increasing population pressure, which helps the algorithm avoid getting stuck in a local optimum to a certain degree. So far there has been no unified framework to describe this algorithm. This paper describes the population migration algorithm within a unified framework of swarm intelligence and, in combination with a chaos map, proposes a chaos population migration algorithm and a PMA for training RBF neural networks. Finally, the results of two numerical experiments show that the PMA is effective and accurate.

Index Terms—Swarm intelligence, framework description, population migration algorithm, chaos map, radial basis function.
I. INTRODUCTION
In the field of artificial intelligence, with the further study of many types of intelligent computing models, Swarm Intelligence (SI), as a novel evolutionary computing technology, and its related research have drawn more and more attention. Swarm intelligence algorithms are inspired by the self-organized behavior of social animals (such as ant colonies, bees and birds): although the behavior of an individual animal is very simple, their collaboration produces very complex (intelligent) behavior without centralized control or a given global model [1]. SI algorithms are designed according to laws of the natural (biological) world, and they have already been widely applied to complex engineering optimization and scientific computation problems.
So far, the many types of intelligent optimization algorithms have different patterns, ideas, models and analysis tools, which simply reflects the diversity of intelligent computing models. This diversity is recognized, but we also want to know whether a certain degree of uniformity exists among them; the answer is clearly yes. A unified framework helps to understand the inherent nature of SI algorithm models and is also conducive to integrating algorithms, so that they can learn from each other's strong points to offset their own weaknesses [2]. Using a given framework, this paper tries to describe the population migration algorithm (PMA), which was recently proposed by the Chinese scholars Zhou Yonghua and Mao Zongyuan. Establishing a mathematical model of PMA can help people understand and further study its basic theory. Finally, in combination with a chaos map, a chaos population migration algorithm and a PMA for training RBF neural networks are proposed; two numerical experiments show that the methods in this paper are effective and accurate. The rest of this paper is organized as follows: the unified framework of swarm intelligent optimization is described in Section 2. Section 3 describes the population migration algorithm. The chaos population migration algorithm is described in Section 4. The population migration algorithm is applied to function optimization in Section 5 and to training an RBF neural network in Section 6. Finally, Section 7 provides the conclusion.

II. THE UNIFIED FRAMEWORK OF SWARM INTELLIGENT OPTIMIZATION
Document [3] gave a mathematical description of the key parts of population-based (swarm) intelligent optimization (PIO) and then proposed a unified framework for swarm intelligent optimization.
For each PIO, the searching behavior is mainly decided by the parameters and operations of the algorithm, and most algorithm mechanisms are refined from biological or social behavior. One iteration of the searching algorithm can be summed up in three basic segments: social-cooperation, self-adaptation and competition. Social-cooperation represents individuals exchanging information and learning from each other in the optimization process; self-adaptation means that individuals adjust their own states to adapt to the environment, actively or passively, without using other individuals' information; competition represents the population update strategy, that is, better individuals get a bigger chance to survive. The general process of a population-based (swarm) intelligent optimization algorithm is as follows (a code sketch of this loop follows the pseudocode):

    Initialize population;
    While (termination rules not satisfied) do
        Social-cooperation;
        Self-adaptation;
        Competition;
    End
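To make the three segments concrete, the following Python sketch shows one way this generic loop could be organized. The operator names (social_cooperation, self_adaptation, competition) and the fitness interface are illustrative assumptions of this sketch, not an API defined in [3].

```python
def pio_optimize(fitness, init_population, social_cooperation,
                 self_adaptation, competition, max_iter=100):
    """Generic population-based (swarm) intelligent optimization loop.

    The three plug-in operators correspond to the three basic segments:
    social-cooperation, self-adaptation and competition.  All names and
    signatures here are illustrative, not a standard API.
    """
    pop = init_population()                      # initialize population
    best = max(pop, key=fitness)                 # track the best individual
    for t in range(max_iter):                    # termination rule: iteration budget
        pop = social_cooperation(pop, fitness)   # exchange information / learn from others
        pop = self_adaptation(pop, fitness)      # adjust own state, no external information
        pop = competition(pop, fitness)          # update strategy: better individuals survive
        best = max(pop + [best], key=fitness)    # elitist record of the best found so far
    return best
```

Each concrete SI algorithm is then obtained by plugging in its own three operators together with the information D, E and J they use.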
Based on the above process of PIO, a mathematical description of the PIO unified framework is given:

PIO = (Pop, S, A, C, D, E, J, t)    (1)

where Pop = (pop_1, pop_2, …, pop_P) represents the population and P the population scale; S represents social-cooperation and D the information which S needs; A represents self-adaptation and E the information which A needs; C represents competition and J the information which C needs; t represents time or the iteration number. In essence, any kind of swarm intelligence algorithm can be described by the 8-tuple in formula (1).

In formula (1), social-cooperation can be described as S(Pop^t, D^t), where Pop^t is the population at generation t and D^t = [the selection of collaborative or learning objects, the number of the objects, the way to produce new individuals, the way to use history information] is the information required by a determined collaboration strategy. Similarly, the mathematical description of self-adaptation is A(Pop^t, E^t), where E^t is the necessary information for a determined self-adaptation strategy. For the competition part, the mathematical description is

Pop^{t+1} = C(Pop^t, A(S(Pop^t, D^t), E^t), J^t)    (2)

where Pop^{t+1} is the population at generation t+1 and J^t = [p (population scale), r (update strategy), elitist] is the information for a determined competition strategy.

The above unified framework is a mathematical description of a single intelligent algorithm. At present, hybrid intelligent algorithms have become a hot spot in computer science and operations research: by combining or integrating the characteristics of different methods, robust and efficient algorithms can be designed. The memetic algorithm (MA) [4], proposed by Pablo Moscato in 1989, is a modern heuristic search strategy based on a population and local improvement. Its search mechanism is orders of magnitude faster than a traditional GA on some problems, and it can be applied to a wide range of problems with satisfactory results [5]. Following MA, a unified description of a hybrid intelligent algorithm consists of four aspects: in addition to the three aspects above, a mixed-learning mechanism is added. The mixed learning process can be described as

M(Pop^t, O^t)    (3)

where O^t = [the number of neighborhood operators, the way to choose a neighborhood operator, the mechanism by which the newly learned individual replaces the old one] is the necessary information of the mixed learning strategy. A hybrid intelligent algorithm can then be described as:

Pop^{t+1} = M( C( Pop^t, A( S(Pop^t, D^t), E^t ), J^t ), O^t )    (4)
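Under the same assumptions as the sketch above, the hybrid framework of formula (4) only adds a mixed-learning step M after competition. A minimal sketch of one iteration, with mixed_learning as an assumed placeholder for the local-improvement operator:

```python
def hybrid_step(pop, fitness, social_cooperation, self_adaptation,
                competition, mixed_learning):
    """One iteration of the hybrid framework of formula (4):
    Pop(t+1) = M( C( Pop(t), A( S(Pop(t), D), E ), J ), O ).
    The operator names are illustrative placeholders, not a fixed API."""
    pop = social_cooperation(pop, fitness)   # S(Pop^t, D^t)
    pop = self_adaptation(pop, fitness)      # A(., E^t)
    pop = competition(pop, fitness)          # C(., J^t)
    pop = mixed_learning(pop, fitness)       # M(., O^t): e.g. local search on some individuals
    return pop
```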
III. POPULATION MIGRATION ALGORITHM
The Population Migration Algorithm (PMA) [6], proposed in 2003, is a global optimization algorithm that simulates the mechanism of population migration. As living groups, populations keep migrating in order to survive and improve. Population migration can be divided into three basic forms: population flow, population migration and population proliferation. Population flow is spontaneous movement in the local environment, without an overall plan; population migration is a selective movement across a wide range, whose basic rule is to follow the economic center of gravity; population proliferation is a selective movement from the beneficial region to non-preferential ones caused by increased population pressure. As an optimization algorithm, in PMA the optimization variable x corresponds to a living place, the objective function f(x) to the attractiveness of the residence place, the optimal solution (or a local optimal solution) to the most attractive place (beneficial region), and the "climbing" of the algorithm to moving toward the beneficial region; escaping from a local optimum corresponds to moving out of the beneficial region as a result of population pressure. Population flow corresponds to a random, local search method;
population migration corresponds to the way better approximate solutions are chosen, as the population struggles upwards; population proliferation combines global search with the strategy of escaping from local optima. In the abstract sense, the problem that PMA deals with can be described as:
max_{x∈S} f(x)    (5)

where the function f: S → R is a real map, x ∈ R^n, S = ∏_{i=1}^{n} [a_i, b_i] is the search space with a_i < b_i, and the existence of a global optimum value is assumed.

The general process of PMA is as follows:

    BEGIN
    Initialize: initialize the population, record the best individual and best value according to the evaluation function.
    While (termination rules not satisfied) do
        Flow in the respective regions, and record the point values;
        Determine the beneficial region according to the largest-value point, and produce a new population in the new beneficial region to replace the original population;
        Contract the beneficial region, then population flow again produces a new population in the region of the largest-value point;
        If the population pressure is too heavy, then population proliferation: produce a new population in the original search space.
    End While
    Output the best individual and best value.
    END

The following is a description of PMA based on the given framework.

Social-cooperation reflects individual movement caused by the attraction of the beneficial region: after the random movement of the population, individuals are attracted by the largest-value point and move to its place. In summary, each individual collaborates with the most attractive point of the current group; the number of collaboration individuals is 1; the act of social-cooperation is that individuals move to the beneficial region; the cooperation only uses the best point in history. Therefore, the social-cooperation strategy of PMA can be described as:

D^t = [x_best, 1, population migration, x_best]    (6)

Self-adaptation can be divided into two parts: population flow and population proliferation, which apply to every individual. In population flow, an individual adjusts its state spontaneously within its region; this is a random local search. In population proliferation, an individual adjusts its state passively to adapt to the pressure environment, so the algorithm can avoid falling into a local optimum to a certain extent. Neither part relies on other individuals' information. In addition, self-adaptation is also reflected in the population migration part, namely shrinking the beneficial region, by which good individuals improve themselves. Thus, the self-adaptation of PMA can be described as:

E^t = [(population flow, population proliferation), shrinkage of the beneficial region]    (7)

Competition is the population update strategy. To simplify the design of the algorithm, the entire population migrates, which can be simulated as regenerating a new group to replace the original one; the same holds for population proliferation and the shrinkage of the beneficial region, so the strategy is whole replacement, which also reflects the overall search mechanism. The best individual and best value point are recorded. Therefore, PMA's competition strategy can be expressed as:

J^t = [P = λ, (P, λ), record and update x_best and f(x_best)]    (8)
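As a minimal illustration of the above description (not the reference implementation of [6]), a simplified PMA for maximizing f on a box can be sketched as follows; the handling of the region radii and of the pressure test is one possible reading of the text.

```python
import random

def pma_maximize(f, bounds, N=3, L=10, shrink=0.01, eps=1e-6, max_iter=10):
    """Simplified Population Migration Algorithm sketch for max f(x) on a box.

    bounds: list of (a_i, b_i) intervals.  Parameter roles follow the text
    (population size N, flow number L, shrinkage factor, pressure threshold),
    but the details are an illustrative reading, not the code of [6].
    """
    n = len(bounds)
    lo = [a for a, b in bounds]
    hi = [b for a, b in bounds]
    def rand_point():
        return [random.uniform(lo[j], hi[j]) for j in range(n)]

    delta = [(b - a) / (2 * N) for a, b in bounds]       # region radii
    pop = [rand_point() for _ in range(N)]
    best = max(pop, key=f)

    for _ in range(max_iter):
        for _ in range(L):                               # population flow: local random moves
            pop = [[min(max(x[j] + random.uniform(-delta[j], delta[j]), lo[j]), hi[j])
                    for j in range(n)] for x in pop]
            best = max(pop + [best], key=f)
        # population migration: regenerate the group around the best point
        pop = [[min(max(best[j] + random.uniform(-delta[j], delta[j]), lo[j]), hi[j])
                for j in range(n)] for _ in range(N)]
        best = max(pop + [best], key=f)
        delta = [d * (1 - shrink) for d in delta]        # shrink the beneficial region
        if max(delta) <= eps:                            # population pressure too heavy
            pop = [rand_point() for _ in range(N)]       # population proliferation
            delta = [(b - a) / (2 * N) for a, b in bounds]
            best = max(pop + [best], key=f)
    return best, f(best)
```

For example, pma_maximize(lambda x: -(x[0] - 1.0) ** 2, [(-5.0, 5.0)]) would search for the maximizer x = 1 of a one-dimensional concave function.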
The above description helps to understand the principles of the algorithm systematically and also offers a platform for improving it. For example, social-cooperation mainly involves how to select the objects to learn from or collaborate with, the number of objects, the way to produce new individuals, and the degree to which historical information is used; these can all be taken as targets of improvement and optimization. In document [7], the algorithm is improved by introducing a norm for the number of objects. In document [8], addressing self-adaptation, the author observes that the population's random movement within the pressure range is a blind act that is not conducive to the algorithm's performance, and proposes replacing the random act with the foraging behavior of the artificial fish-swarm algorithm, again as a self-adaptation strategy. In addition, a mixed migration algorithm can incorporate a mixed learning mechanism. In algorithm design, the initialization operation also affects performance. The following section puts forward an improved algorithm for the initialization operation and the self-adaptation strategy.
IV. CHAOS POPULATION MIGRATION ALGORITHM
Chaos is a nonlinear phenomenon: it arises in a deterministic system, yet the behavior appears uncertain and unpredictable. It has the following characteristics: 1) Randomness: its disordered phenomena look like random variables; 2) Ergodicity: chaos can pass through all states in a certain range without repetition.
3) Regularity: it is produced by a well-defined function. The ergodicity of the chaotic orbit is the most important property and the basic starting point for optimization: it helps avoid falling into a local maximum point and improves the convergence of the algorithm [9]. The basic idea of the Chaos Population Migration Algorithm (CPMA) is as follows: in the initial phase of the algorithm, use the most common chaotic sequence generator [10], the Logistic map, to produce chaotic individuals. The Logistic map is defined as:
z_{k+1} = μ z_k (1 − z_k),  z_k ∈ (0, 1),  k = 0, 1, 2, ….    (9)
Here z_k is the variable and μ the control parameter. When μ = 4, the sequence arising from the map exhibits chaotic dynamics, and the variable can travel through almost all states of (0, 1). While the algorithm runs, all random numbers are replaced by numbers from the chaotic sequence. The basic algorithm process is as follows:
1) Chaotic initialization of individuals: set the population size N, as well as the number of population movements, the shrinkage factor, the population pressure parameter and the number of iterations. Given an initial value z_0, use the chaotic sequence produced by the Logistic map for k = 0, …, 500. In the search space (a, b), determine the upper and lower bounds for each individual and set Population(0, i) = a_i + (b_i − a_i) z_m, where z_m is selected randomly from the first 500 chaotic values, i ∈ (1, x_num), and x_num is the number of variables. This results in N chaotic individuals.
2) Following document [6], calculate the point values, initialize the best value and the best point record, and then carry out population flow, population migration and population proliferation. In each act, the random numbers produced by the Logistic map are used.
3) When the termination condition is met, output the best value and the best point.
A code sketch of the chaotic initialization in step 1) is given below.
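A minimal sketch of the chaotic initialization, assuming the population is a list of real vectors inside a box; the initial value z_0 and the way z_m is picked from the first 500 chaotic values are illustrative choices.

```python
import random

def logistic_sequence(z0=0.3141, mu=4.0, length=500):
    """Chaotic sequence from the Logistic map z_{k+1} = mu * z_k * (1 - z_k)."""
    z, seq = z0, []
    for _ in range(length):
        z = mu * z * (1.0 - z)
        seq.append(z)
    return seq

def chaotic_init(bounds, N):
    """Chaotic initialization of N individuals inside the box `bounds`
    (list of (a_i, b_i)), following step 1): each component is
    a_i + (b_i - a_i) * z_m with z_m drawn from the chaotic sequence."""
    seq = logistic_sequence()
    pop = []
    for _ in range(N):
        ind = [a + (b - a) * random.choice(seq) for (a, b) in bounds]
        pop.append(ind)
    return pop
```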
Based on the above description of hybrid intelligent algorithms, CPMA can be seen as a single migration algorithm combined with a chaotic system for local improvement: to improve the search, it uses the chaotic sequence produced by the chaotic sequence generator in place of the original random numbers. Its mixed learning process can therefore be described as:

O^t = [chaos disturbance, Logistic map, chaos map]    (10)

V. POPULATION MIGRATION ALGORITHM FOR FUNCTION OPTIMIZATION

To test the optimization ability of CPMA, the following test function is used:

max f_1(x) = (1 + 2 sin^{20}(3πx) sin^{20}(20πx))^{20},  0 ≤ x ≤ 1.

Document [6] sets 5 parameters, namely the population size N, the number of population movements L, the shrinkage factor Δ, the population pressure parameter α and the number of iterations m. The numerical experiments show that the population size usually takes 3, with L = 10, Δ = 0.01, α = 10^{-6} and m = 10; the same settings are used for CPMA. Based on the unified framework, CPMA is now described on the test function with the parameters set as above. N is the population size, so the initial group is x^1, x^2, x^3; Pop^t is the group at time t, with t ≤ m. The center of the region of point i is set to center^i = x^i, and its upper and lower bounds are determined as center^i ± δ, where δ = 1/6.

For social-cooperation, as in formula (6), x_best is the current best value point, its number is only one, and the region it stays in is the beneficial region; the other individuals move to that region, and the best point information is retained throughout the evolution. For self-adaptation, as in formula (7), first initialize the population and then let the population flow: the points move within their respective areas following

x^i = (1/3) · chaos() + x^i − 1/6.    (11)

where chaos() is a chaotic disturbance number. After population migration the beneficial region is shrunk: if max δ_j > 10^{-6}, the next δ = (1 − 0.01) · δ; if max δ_j ≤ 10^{-6}, population proliferation takes place. For competition, as in formula (8), N is 3, the replacement strategy is (3, 3), and the best point and best value are recorded. For the mixed learning, as in formula (10), the Logistic map

z_{k+1} = μ z_k (1 − z_k),  z_k ∈ (0, 1),  k = 0, 1, 2, …,    (12)

is used to produce the chaotic sequence. In the process of CPMA, k is set to 1000, and the algorithm is run 20 times randomly.

TABLE I. RESULTS OF THE NUMERICAL EXPERIMENT

Algorithm | Best value   | Worst value  | Number of convergent runs
PMA       | 1048576.000  | 1048527.769  | 11
CPMA      | 1048576.000  | 1048575.905  | 18
From Table I, it can be seen that the results of the chaos population migration algorithm (CPMA) are better than those of the population migration algorithm (PMA).
VI. POPULATION MIGRATION ALGORITHM FOR TRAINING RBF NEURAL NETWORKS
To test the optimization ability of PMA, it is applied to training an RBF neural network.

A. RBF Neural Network
The Radial Basis Function (RBF) neural network is a three-layer network with a single hidden layer, as shown in Figure 1. The input layer only transfers information and makes no change to the input; the hidden layer consists of a group of radial basis functions, which are non-negative nonlinear functions that are radially symmetric about a center and decay away from it; the output layer responds to the input pattern. The basic idea of the RBF network is that the hidden-layer space is spanned by the basis of hidden units, so the input vector can be mapped to the hidden space directly without a weighted connection; once the centers of the RBFs are determined, this mapping is determined. The mapping from the hidden layer to the output space is linear, that is, the network output is a weighted sum of the hidden-layer outputs. The most typical RBF is the Gaussian function:

R_i(x) = exp(−||x − c_i||^2 / (2σ_i^2)),  i = 1, 2, …, m.    (13)

where x is the n-dimensional input vector; c_i is the center of the i-th basis function and has the same dimension as x; σ_i determines the width of the function around the center point; m is the number of hidden-layer nodes. From Figure 1 it can be seen that the input layer carries out the nonlinear map x → R_i(x), and the hidden layer to the output layer carries out the linear map R_i(x) → y_k, that is:

y_k = Σ_{i=1}^{m} w_{ik} R_i(x),  k = 1, 2, …, p.    (14)

where p is the number of output nodes and w_{ik} are the output weight values of the RBF network.

[Figure 1. Structure of RBFNN]

B. RBF Neural Network Learning Algorithm
The RBFNN training process can be divided into two phases:
(1) Cluster all the samples and determine the center values and widths of the RBFs.
(2) Learn the weight values; this can be transformed into a linear optimization problem, so it can be solved by linear optimization algorithms such as the LMS algorithm, the least squares algorithm and gradient descent. The weight adjustment of the LMS algorithm can be expressed as:

w_{ik}(t + 1) = w_{ik}(t) + α e_k(t) R_i(x)    (15)

where e_k(t) is the difference between the desired output and the actual output, and α is the learning rate, 0 < α < 1. Based on the training sample set, the error cost function can be expressed as:

E = (1/2) Σ_{i=1}^{m} Σ_{k=1}^{p} e_k^2(t)    (16)

This is a nonlinear function with many minimum points, and the RBF neural network training process adopts the above two learning phases until the error function is minimized.

C. PMA for Training RBF Neural Network
For PMA, consider the global optimization problem (5), where f: S → R is a real map, x ∈ R^n, S = ∏_{i=1}^{n} [a_i, b_i] is the search space with a_i < b_i; it is assumed that the global optimum value f* exists and the set of global optimal solutions M is not empty. In the algorithm, the optimization variables x correspond to the residence information of the population; x^i = (x_1^i, x_2^i, …, x_n^i) refers to the i-th point, x^i ∈ R^n; x_j^i represents the j-th weight of the i-th point; δ^i ∈ R^n, where δ_j^i is the radius for the j-th weight, δ_j^i = (b_j − a_j)/(2N), δ_j^i > 0, i = 1, 2, …, N, j = 1, 2, …, n, and N is the population size. The objective function corresponds to the surface space of population migration together with all kinds of related information; the optimal solution of the problem corresponds to the most attractive region; the algorithm moving up or climbing corresponds to migrating to the beneficial area; the algorithm escaping from a local optimum corresponds to people emigrating from the beneficial region because the population pressure is too heavy; and population flow corresponds to the local random search of the algorithm.
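Before describing the PMA behaviors, the RBF computations of formulas (13)–(16) can be made concrete with a short NumPy sketch; the array shapes and function names are assumptions of this sketch, not code from the paper.

```python
import numpy as np

def rbf_forward(x, centers, sigmas, W):
    """Gaussian RBF network output, formulas (13)-(14).
    x: (n,) input; centers: (m, n); sigmas: (m,); W: (m, p) output weights."""
    d2 = np.sum((centers - x) ** 2, axis=1)      # ||x - c_i||^2
    R = np.exp(-d2 / (2.0 * sigmas ** 2))        # hidden activations R_i(x)
    return R @ W, R                              # y_k = sum_i w_ik R_i(x)

def lms_step(x, target, centers, sigmas, W, lr=0.1):
    """One LMS weight update, formula (15): w_ik(t+1) = w_ik(t) + lr*e_k(t)*R_i(x)."""
    y, R = rbf_forward(x, centers, sigmas, W)
    e = target - y                               # e_k(t): desired minus actual output
    W += lr * np.outer(R, e)                     # update all output weights
    return W, 0.5 * np.sum(e ** 2)               # contribution to the cost of formula (16)
```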
D. Behavior Description
(1) Population flow: randomly generate N points in the search space with center center^i; the population flows within the permitted region. The flow is

x^i = 2δ · rand(·) + center^i − δ,    (17)

where rand(·) is a random value and δ is the radius of the population movement region.
(2) Population migration: take the most attractive point as the center of the region and generate N points to replace the original points; the population then flows in the new beneficial region and the region is shrunk as δ = δ(1 − Δ), where Δ is the contraction coefficient, distributed in (0, 1).
(3) Population proliferation: when the population pressure is too heavy, the population moves out of the beneficial region, and N points are randomly generated in the whole search space to replace the original points.
A code sketch of these three behaviors is given below.
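A minimal sketch of the three behaviors, assuming the population is stored as an (N, n) NumPy array of weight vectors; the default contraction coefficient 0.1 is taken from subsection F, and the function names are illustrative.

```python
import numpy as np

def population_flow(centers, delta):
    """Formula (17): x^i = 2*delta*rand(*) + center^i - delta, i.e. each point
    is resampled uniformly in [center^i - delta, center^i + delta]."""
    return centers + delta * (2.0 * np.random.rand(*centers.shape) - 1.0)

def population_migration(best, delta, N, shrink=0.1):
    """Generate N points around the most attractive point and shrink the region:
    delta <- delta * (1 - Delta), with Delta the contraction coefficient."""
    new_points = best + delta * (2.0 * np.random.rand(N, best.size) - 1.0)
    return new_points, delta * (1.0 - shrink)

def population_proliferation(low, high, N):
    """When population pressure is too heavy, regenerate N points in the whole box."""
    return low + (high - low) * np.random.rand(N, low.size)
```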
E. The Training Steps of the RBF Neural Network
Step 1. Use the k-means clustering algorithm to determine the center values c_i and the widths σ_i of the basis functions. First initialize the center values and classify the samples by the minimum-distance principle, that is, assign each input sample to a center c_i by d_{ij} = min_i ||x_j − c_i||. Then adjust the center values according to the averages of the assigned samples and judge whether the new centers are the same as the old ones; if there is no change in the center distribution, terminate, otherwise repeat the above steps. Take the minimum distance between the centers as the width of the basis functions.
Step 2. Initialize the population size N, the largest number of iterations MAX_num, the contraction coefficient Δ, the population pressure parameter α and the largest number of moves L. Set the initial iteration number num = 0 and generate the initial population of N individuals in the feasible region, that is, generate N initial weight vectors {w_ij} whose components are random numbers in [−1, 1].
Step 3. Take the mean square error E of the network output as the attractiveness of the beneficial region, namely F = 1/E, and record the maximum of F.
Step 4. Let the population flow in each individual's region, and record and update the optimal value. If the flow count exceeds a pre-given number, perform population migration, contract the beneficial region, change the points uniformly, and again record and update the optimal value. When the population pressure is greater than a certain parameter, perform population proliferation and record the optimal value.
Step 5. Determine whether the number of iterations has reached the given maximum; if so, output the result, otherwise go to Step 3.

F. Numerical Simulations
1) In order to evaluate the performance of the new algorithm, we first examine the experimental results on the sin(x) function. The parameters of PMA are set as: population size N = 3; contraction coefficient Δ = 0.1; number of population flows L = 10; the input layer and the output layer of the RBFNN each have one neuron, and the hidden layer has 20 nodes. The simulation results for sin(x) are as follows:

[Figure 2. The sin(x) approximation result of the RBF training curves]
[Figure 3. The sin(x) approximation result of the PMF training curves]
[Figure 4. The error cost function curve of the RBF and the PMF methods for sin(x)]

From Figure 3 and Figure 4, it can be seen that the difference between the two training algorithms is not very obvious; but from the trend of the error curve in Figure 4, it can be seen that the PMA training method presented in this paper has faster convergence and higher calculation accuracy than the RBF training method.

2) We then study a nonlinear function, y = sin(0.9πx) + cos(0.5πx), with the parameter setting the same as for test function 1. The experimental simulation results are as follows:

[Figure 5. The approximation result of RBF training for sin(0.9πx) + cos(0.5πx)]
[Figure 6. The approximation result of PMF training for sin(0.9πx) + cos(0.5πx)]
[Figure 7. The error cost function curve of the RBF and the PMF methods]

Also, from Figure 6 and Figure 7, it can be seen that the difference between the two training algorithms is not very obvious; but from the trend of the error curve in Figure 7, it can be seen that the PMA training method presented in this paper has faster convergence and higher calculation accuracy than the RBF training method.

On the other hand, we compared the mean square errors on the two test functions; they are given in Table II.
TABLE II. THE MEAN SQUARE ERROR ON THE TWO TEST FUNCTIONS

Training method | Test function 1   | Test function 2
RBF Algorithm   | 0.00061777606021  | 0.00323999364262
PMF Algorithm   | 0.00060308098546  | 0.00318448598939

From the comparison above, it can be seen that the decline curve of the new algorithm falls faster and reaches a better level than that of the RBF algorithm, and it also achieves a smaller mean square error.

VII. CONCLUSIONS

In recent years, swarm intelligence algorithms have become a hot topic in intelligent computing because of their simple structure and distinctive problem-solving capability, but their basic theory is not complete, nor is the theory of parameter setting. This article describes PMA within a given framework, and an algorithm based on PMA for the RBF neural network training process is designed. From the analysis of the experimental results it can be seen that the PMA-based RBF training algorithm has faster convergence speed and higher calculation accuracy than the RBF algorithm. Of course, the choice of the ranges of the various parameters greatly influences the convergence rate; parameter selection and the convergence of the algorithm are our future research work.

ACKNOWLEDGMENT

This work is supported by Grant 60461001 from the NSF of China and by Grants 0832082 and 0991086 from the Guangxi Science Foundation.

REFERENCES

[1] Peng Xiyuan, Peng Yu, Dai Yufeng. "Swarm intelligence theory and application". Acta Electronica Sinica, 2003, 31(12A): 1982-1988 (in Chinese).
[2] Kang Qi, Wang Lei, Liu Xiaoli, Wu Qidi. "General mode description of genetic algorithms based on a framework of swarm intelligence". CAAI Transactions on Intelligent Systems, 2007, 2(5): 42-47 (in Chinese).
[3] Wang Li, Liu Bo. Particle Swarm Optimization and Scheduling Algorithms. Beijing: Tsinghua University Press, 2008, 15-17 (in Chinese).
[4] Moscato P. On evolution, search, optimization, genetic algorithms and martial arts: toward memetic algorithms. Rep. 826, Pasadena, CA: California Inst. Technol., 1989.
[5] Liu Mandan. "The Development of the Memetic Algorithm". Techniques of Automation Science and Applications, 2007, 26(11): 14 (in Chinese).
[6] Zhou Yonghua, Mao Zongyuan. "A New Search Algorithm for Global Optimization: Population Migration Algorithm (I)". Journal of South China University of Technology (Natural Science Edition), 2003, 31(3): 1-5 (in Chinese).
[7] Li Zhentao, Zhang Guoli, Wang Shuling, Ni Guibo, Shen Honglian. "Application of Improved Population Migration Algorithm in Function Optimization". Control & Automation, 2007, 23(12-3): 199-200 (in Chinese).
[8] Li Bin. The Research of the Population Migration Algorithm based on Artificial Fish Swarm Algorithm. Wuhan: Wuhan University of Technology, 2008 (in Chinese).
[9] Wang Ling, Zhen Dazheng, Li Qingsheng. "Survey on Chaotic Optimization Methods". Computing Technology and Automation, 2001, 20(1): 1-5 (in Chinese).
[10] Li Bing, Jiang Weisun. "Chaos Optimization Method and Application". Control Theory & Applications, 1997, 14(4): 613-615 (in Chinese).
[11] Han Liqun. Artificial Neural Network Theory, Design and Application. Beijing: Chemical Industry Press, 2007, 14-19.
[12] Girosi F., Jones M., Poggio T. "Regularization theory and neural networks architectures". Neural Computation, 1995, 7(2): 219-269.
[13] Zhou Yonghua, Mao Zongyuan. "A New Search Algorithm for Global Optimization: Population Migration Algorithm (II)". Journal of South China University of Technology, 2003, 31(4): 41-43 (in Chinese).
[14] Zhang Jun, Cao Jing, Jiang Shizhong. Neural Network Practical Guide. Beijing: Mechanical Industry Press, 2007, 37-40.

Yongquan Zhou, Ph.D., Professor. He received the B.S. degree in mathematics from Xianyang Normal University, Xianyang, China, in 1983, the M.S. degree in computer science from Lanzhou University, Lanzhou, China, in 1993, and the Ph.D. in pattern recognition and intelligent systems from Xidian University, Xi'an, China. His current research interests include neural networks, computational intelligence and their applications.

Weiwei Zhang received the M.S. degree in computer application technology from Guangxi University for Nationalities, Nanning, China. His current research interests include computational intelligence and its applications.