An adaptive gradient descent-based local search in memetic algorithm applied to optimal controller design

Aliasghar Arab¹, Alireza Alfi¹*

Faculty of Electrical and Robotic Engineering, University of Shahrood, Shahrood 36199-95161, Iran

* Corresponding Author:

Email: [email protected] Tel and Fax: +98 (23) 32300250

Abstract: Memetic Algorithms (MAs), known as hybrid algorithms, combine Evolutionary Algorithms (EAs) with Local Search (LS) operators. In this paper, an efficient MA with a novel LS, namely the Memetic Algorithm with Adaptive Local Search (MA-ALS), is proposed to improve accuracy and convergence speed simultaneously. At the core of the proposed MA-ALS, an adaptive mechanism operates at the LS level that restricts the local refinement to a specific group of individuals with particular properties, inspired by an elite selection process. The proposed adaptive LS thus helps the MA to execute a robust local refinement, reducing computational cost without loss of accuracy. The algorithm is tested against a suite of well-known benchmark functions and the results are compared with GA and two types of MAs. A permanent magnet DC motor, a Duffing nonlinear chaotic system and a robot manipulator with six degrees of freedom are employed to evaluate the performance of the proposed algorithm in optimal controller design. Simulation results demonstrate the feasibility of the algorithm in terms of accuracy and convergence speed.

Key words: Population-based algorithm, Memetic algorithm, Local search, Engineering optimization problem, Optimal control

1. Introduction

Optimal control of nonlinear systems is among the pivotal subjects in control engineering, with applications in economics, chemical engineering and robotics. In classic optimal control theory, one must solve nonlinear Hamilton–Jacobi–Bellman partial differential equations, which are quite difficult [7]. Accordingly, numerical methods such as Gradient Descent (GD) and Iterative Dynamic Programming (IDP) are used for solving optimal control problems. Despite their usefulness, both GD and IDP may become trapped in a local optimum, depending on the initial guess; moreover, IDP suffers from high computational costs [34]. Another way of finding the optimal solution is Evolutionary Computation (EC), which finds the optimal solution through cooperation and competition among the individuals of a population

[61]. Frequently used population-based EC methods include evolutionary strategies [62], the Genetic Algorithm (GA) [20, 38, 53], Evolutionary Programming (EP) [42], clustering methods [35], Ant Colony Optimization (ACO) [15, 41], Particle Swarm Optimization (PSO) [2-6, 22, 36, 39] and the chemical optimization paradigm [37]. Meanwhile, Evolutionary Algorithms (EAs) can be enhanced by combining existing algorithms, and the resulting methods can be used for modeling and controlling processes with complex dynamical behavior [36, 37, 58]. Since EAs are population-based, a good balance between exploration and exploitation must be maintained in order to find optimum solutions [11]. Exploration refers to the ability to investigate various unknown regions of the solution space to discover the global optimum, whereas exploitation indicates the capability of the algorithm to refine a candidate solution, already found in a region of interest, toward a local optimum. For instance, GA can explore a wide range of the search space efficiently, but it lacks an efficient exploitation ability; that is, GA has no useful Local Search (LS) mechanism for accurately searching near a good solution [16]. The performance of GA can be improved by incorporating LS methods into the traditional GA, yielding the Memetic Algorithm (MA) [17, 28, 40]. Indeed, an MA is a combination of EAs and LS operators, known as a hybrid algorithm, designed to obtain an efficient global search. MAs have been successfully applied in wide areas of engineering [1, 29, 31, 43, 51, 57, 59, 63, 64], and the significant influence of LS on the performance of the MA search process has already been reported [8, 13, 19, 21, 23, 24, 47, 50, 53, 65]. MAs are more efficient than traditional EAs in many applications; however, designing an MA with good performance is complicated. To design a competitive MA, as stated earlier, the LS operators should keep a balance between exploration and exploitation.

Here, a novel GD-based LS is introduced to improve the performance of MA in solving engineering optimization problems. The LS operator is employed to efficiently enhance the quality of individuals in a local area. Accordingly, an adaptive type of LS is proposed with a mechanism that limits the number of selected individuals based on their fitness. As a result, the overall speed of LS is improved, since LS is applied only to the elite individuals rather than to the whole population. The main motivation of the proposed algorithm is to combine the global search ability of GA with the local search accuracy of GD by applying reasonable adaptive laws: global search over the whole search space is accomplished through GA, while the LS is performed by a GD-based algorithm. The LS has two significant parameters for each individual, namely the search neighborhood and the number of iterations. At the beginning of the algorithm, since there is no knowledge about the solution, the elite society should be large enough to cover the whole search space; in this phase, the search neighborhood is chosen to be large while the number of gradient descent iterations is kept low. As the global search goes on, the elite society becomes smaller and smaller, and the search neighborhood shrinks accordingly. Thereafter, the LS applies to a specific group of individuals with particular properties, inspired by the elite selection process. In this way, the proposed algorithm is able to find an optimum solution more accurately.

The rest of the paper is organized as follows. Section 2 briefly describes the optimization algorithms used in this paper. Section 3 presents the framework of the MA search scheme and describes the proposed MA-ALS algorithm in detail. Simulation results are presented in Section 4. Section 5 deals with the evaluation of MA-ALS on three real-world optimization problems. Finally, conclusions are drawn in Section 6.

2. Optimization algorithms

In this section, the basic principles of the optimization algorithms used in this paper are briefly described.

2.1. GA

GA generates solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. Only binary variables (genes), which form a string (chromosome), are used in the traditional GA, and the final binary digits are decoded to recover the original real numbers. In real-coded GA, by contrast, all genes in a chromosome are real numbers. Therefore, for practical engineering problems it is preferable to use the real-coded GA instead of the binary-coded GA [10]. Although GA converges to the global optimum, it does so excessively slowly when sufficient precision is required; the reason is its failure to exploit local information [58]. In the rest of the paper, GA refers to the real-coded GA.

2.2. GD-based LS

Gradient information of the objective function is consequential, since a large number of widely used deterministic algorithms, such as steepest descent and Newton's method, depend on it [33]. In a gradient-based method, a relation between the input and output of the function must be established or approximated [14]. Although first-order derivatives suffice for methods like steepest descent, Newton's method requires second-order information [44]. Given a pre-defined search direction $\Delta_k \in \mathbb{R}^n$, the optimal step size $\alpha_k$ is sought from

$F(\mathbf{x} + \alpha_k \Delta_k) = \min_{\alpha} F(\mathbf{x} + \alpha \Delta_k)$   (1)
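In practice, the exact minimization in Eq. (1) is often replaced by an inexact rule. The following is a minimal Python sketch of a simple backtracking approximation (the quadratic test function and starting point are hypothetical, not from the paper):

```python
import numpy as np

def backtracking_step(F, x, direction, alpha0=1.0, shrink=0.5, max_trials=20):
    """Approximate argmin over alpha of F(x + alpha*direction), cf. Eq. (1),
    by starting from alpha0 and halving until F decreases."""
    f0 = F(x)
    alpha = alpha0
    for _ in range(max_trials):
        if F(x + alpha * direction) < f0:
            return alpha
        alpha *= shrink
    return 0.0  # no improving step found along this direction

# Hypothetical quadratic test function and a descent direction
F = lambda x: float(np.sum(x ** 2))
x = np.array([2.0, -1.0])
d = -x  # points toward the minimizer of this F
print(backtracking_step(F, x, d))  # prints 1.0: the full step already improves F
```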

The steepest descent method, introduced by Curry [12], is one of the simplest gradient methods. It is easy to implement, but usually converges to a local optimum. The method follows from the fact that the gradient vector points in the direction of fastest increase of the function value; search points are therefore generated iteratively by stepping in the opposite direction of the gradient. From the current search point $\mathbf{x}_k$, the next point is generated as

$\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha_k \, \nabla f(\mathbf{x}_k) / \left\| \nabla f(\mathbf{x}_k) \right\|$   (2)
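A minimal Python sketch of the update (2) is given below, with a central-difference estimate standing in for the analytic gradient and a hypothetical sphere test function:

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def steepest_descent_step(f, x, step):
    """One iteration of Eq. (2): step along the normalized negative gradient."""
    g = numerical_gradient(f, x)
    norm = np.linalg.norm(g)
    if norm < 1e-12:  # (numerically) stationary point: stop moving
        return x
    return x - step * g / norm

sphere = lambda x: float(np.sum(x ** 2))
x = np.array([1.5, -0.5])
for _ in range(100):
    x = steepest_descent_step(sphere, x, step=0.05)
print(x)  # close to the minimum at the origin (within one step length)
```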

3. The proposed algorithm

In this section, MA is briefly described and the proposed MA-ALS algorithm is explained in detail.

3.1. MA

Up to now, different optimization algorithms such as GA and PSO have been used to perform global search. These algorithms, however, do not deliver acceptable local exploitation, so they are combined with LS algorithms to compensate for this deficiency. As with other EAs, convergence speed and the avoidance of local optima are two important goals in MA [25, 26, 30, 45, 51, 48-50, 54, 60]. The effects of neighborhood structure on the performance of MAs have been reported in [25, 26], where the authors investigated how to change the size and the type of neighborhood structures dynamically within the framework of MAs. In [27], it was shown that if the computational complexity of the LS is relatively low, applying LS to every individual can be useful. In [45], a Probabilistic Memetic Framework (PrMF) was introduced for balancing evolution and individual learning; in PrMF, the learning intensity of each individual is governed during the search process according to a theoretical upper bound. A key issue in MA is the selection of the LS operator, which should be kept in balance with the global search component; from this point of view, designing an MA with good performance is complicated due to the inherent contradiction between exploitation and exploration. Many adaptive LS schemes already exist and have been shown to solve some problems very effectively [9, 48, 49]. Caponio et al. introduced a fast adaptive MA (FAMA), an MA with dynamic parameter setting and two local searchers launched adaptively, either one by one or simultaneously [9]. FAMA adaptively chooses the population size and the mutation probability based on the fitness values, and it adaptively decides how the local searches are to be executed. A comprehensive study of various adaptive mechanisms can be found in [48]. Based on the taxonomies introduced in [49], adaptive MAs can be categorized by adaptive type and adaptive level. The type refers to the mechanism of adaptation utilized in the choice of memes and is divided into static, adaptive and self-adaptive. The adaptive level indicates the level of historical knowledge about the memes employed in meme selection and is divided into external, local, and global categories (for more details see [49]). A

classification of adaptive MAs is listed in Table 1. Here, we focus on the adaptive choice of LS to enhance the performance of the MA search. The current paper improves the performance of MA by applying an adaptive GD-based LS within GA. According to the classification of existing adaptive MA strategies given in [49], the structure of MA-ALS can be considered a local, self-adaptive approach.

Table 1: Classification of adaptive MAs [49]
Adaptive Type:   Static | Adaptive | Self-Adaptive
Adaptive Level:  External | Local | Global

3.2. MA-ALS

Existing MA algorithms suffer from some drawbacks, including, but not limited to, giving the same weight to the whole population in the local search process and keeping all the effective LS parameters constant during the search. To overcome these shortcomings, this paper proposes a novel adaptive method for LS in MA, namely the Memetic Algorithm with Adaptive Local Search (MA-ALS). An adaptive mechanism is introduced to enhance the LS ability in terms of accuracy and convergence speed. In this mechanism, there are three key parameters, determined intelligently in each step of the LS process: (1) the number of individuals, (2) the search step and (3) the number of search iterations. As in other MAs, GA is used for exploration owing to its power in global search.

1) Number of individuals. Clearly, it is not necessary to consider all the individuals in each step. Since no information about elite individuals is available at the beginning, all the individuals are used initially. After each iteration, the individuals are ranked according to their fitness values and the best individuals are chosen for local search; this procedure is repeated in the following steps. Therefore, in each step, only the elite individuals enter the LS process, which lowers the computational cost.

2) Search step. Accuracy and convergence speed are two common concerns in optimization problems. In the early steps, high accuracy is not necessary; in other words, during these steps convergence matters more than accuracy. In contrast, accuracy is much more important in the later steps of the search. Therefore, smaller search steps are chosen as the number of LS iterations increases.

3) Number of search iterations.

The number of search iterations indicates how many LS iterations the internal loop grants each individual for refinement. This parameter directly affects the computational cost of the algorithm. In the early steps, a smaller value is chosen because the population involved is large; in later iterations, where fewer individuals are refined, a larger value is assigned. It is worth mentioning that better individuals receive more search iterations (i.e., more time) in this procedure. In this way, the hybrid algorithm can find an optimum solution more quickly and accurately.

The three strategies above give the proposed MA-ALS a balanced search behavior. The Number of Individuals (NI), Search Step (SS) and number of Local search Iterations (LI) are adjusted automatically during the search process as follows:

$\mathrm{NI} = \left\lceil \mathrm{NI}_{\max} - \mathrm{generation} \cdot \dfrac{\mathrm{NI}_{\max} - \mathrm{NI}_{\min}}{\mathrm{max\ generation}} \right\rceil$   (3)

$\mathrm{SS} = \mathrm{SS}_{\max} - \mathrm{generation} \cdot \dfrac{\mathrm{SS}_{\max} - \mathrm{SS}_{\min}}{\mathrm{max\ generation}}$   (4)

$\mathrm{LI} = \left\lceil \mathrm{LI}_{\min} + \mathrm{generation} \cdot \dfrac{\mathrm{LI}_{\max} - \mathrm{LI}_{\min}}{\mathrm{max\ generation}} \right\rceil$   (5)
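These schedules can be evaluated directly; the minimal Python sketch below uses the default bounds reported later in Section 4.3 ([1, 20] for NI, [0.1, 1] for SS and [1, 30] for LI), and the pseudo-code that follows shows where the resulting values enter the loop.

```python
import math

def adaptive_ls_parameters(generation, max_generation,
                           ni=(1, 20), ss=(0.1, 1.0), li=(1, 30)):
    """Evaluate Eqs. (3)-(5): NI and SS shrink linearly with the generation
    counter while LI grows, so early LS is broad and cheap and late LS is
    narrow and thorough."""
    frac = generation / max_generation
    NI = math.ceil(ni[1] - frac * (ni[1] - ni[0]))   # Eq. (3)
    SS = ss[1] - frac * (ss[1] - ss[0])              # Eq. (4)
    LI = math.ceil(li[0] + frac * (li[1] - li[0]))   # Eq. (5)
    return NI, SS, LI

for g in (0, 500, 1000):
    print(g, adaptive_ls_parameters(g, 1000))
# generation 0:    many individuals, large steps, few LS iterations
# generation 1000: few elite individuals, small steps, many LS iterations
```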

The pseudo-code of MA-ALS can be summarized as follows.

- Initialize the optimization problem and the parameters of the algorithm
- Evaluate the fitness of each individual
- While no stopping criterion has been fulfilled do
    - Arrange the individuals according to their fitness values and choose the best individuals for the LS process
    - Use the adaptive laws (3)-(5) to determine the LS parameters NI, SS and LI
    - For i = 1 : NI (the genes selected by the adaptive law)
        - Do ALS (LI times): refine the individual through LS with the selected search step SS
        - Break if the stopping criterion is satisfied in LS
        - End Do
    - End For
    - Evaluate the fitness of each individual
    - Evolve the population through crossover, mutation, and elitism
- End While
- Stop when the stopping criterion is satisfied

The stopping criterion can be a predefined number of iterations or a threshold on the value of the fitness function.

4. Experimental methodology

The experimental settings and simulation strategies used to obtain the empirical results are discussed in this section.

4.1. Test Functions

Well-defined benchmark functions based on mathematical functions can be utilized to measure and test the performance of optimization algorithms. Six well-known benchmarks with different characteristics are employed to evaluate the performance of MA-ALS in terms of solution quality and convergence speed: Sphere, Rosenbrock, Ackley, Rastrigin, Schwefel and Griewank. This set covers many different kinds of problems: unimodal (Sphere, Rosenbrock), multimodal (Schwefel, Rastrigin, Griewank), separable (Sphere, Rastrigin) and non-separable (Schwefel, Rosenbrock, Griewank). The Sphere function is one of the simplest test benchmarks; it is continuous, convex and unimodal. The Rosenbrock function is a classic optimization problem, also known as the banana function; its global optimum lies inside a long, narrow, parabolic-shaped flat valley. Ackley is a widely used multimodal test function. The Rastrigin function is highly multimodal with many local optima and only one global optimum, but its minima are regularly distributed. The Schwefel function is deceptive in that the global minimum is geometrically distant, over the parameter space, from the next-best local minima; optimization algorithms are therefore prone to converging in the wrong direction. Similar to the Rastrigin function, the Griewank function has many widespread, regularly distributed local minima. The details of the test functions are given in Table 2 [32]. In this table, M and U stand for multimodal and unimodal functions, and S and N refer to separable and non-separable functions, respectively.

Table 2. Description of the test functions. U: Unimodal, M: Multimodal, S: Separable, N: Non-Separable.

Name        Formulation                                                                                              Range               f_min  Characteristics
Sphere      $F_1 = \sum_{i=1}^{D} x_i^2$                                                                             $[-5.12, 5.12]^D$   0      US
Ackley      $F_2 = -20\exp(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}) - \exp(\tfrac{1}{D}\sum_{i=1}^{D}\cos 2\pi x_i) + 20 + e$   $[-32.76, 32.76]^D$  0  MN
Rastrigin   $F_3 = 10D + \sum_{i=1}^{D}(x_i^2 - 10\cos(2\pi x_i))$                                                   $[-5.12, 5.12]^D$   0      MS
Rosenbrock  $F_4 = \sum_{i=1}^{D-1}[100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2]$                                           $[-2.04, 2.04]^D$   0      UN
Schwefel    $F_5 = 418.9829\,D - \sum_{i=1}^{D} x_i \sin(\sqrt{|x_i|})$                                              $[-500, 500]^D$     0      MS
Griewank    $F_6 = \tfrac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D}\cos(\tfrac{x_i}{\sqrt{i}}) + 1$             $[-600, 600]^D$     0      MN

D is the number of dimensions.
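For reference, the six benchmarks of Table 2 can be written compactly in NumPy as below (a sketch; the constant terms 20 + e in Ackley make f_min = 0, as listed in the table):

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def ackley(x):
    d = x.size
    return float(-20*np.exp(-0.2*np.sqrt(np.sum(x**2)/d))
                 - np.exp(np.sum(np.cos(2*np.pi*x))/d) + 20 + np.e)

def rastrigin(x):
    return float(10*x.size + np.sum(x**2 - 10*np.cos(2*np.pi*x)))

def rosenbrock(x):
    return float(np.sum(100*(x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))

def schwefel(x):
    return float(418.9829*x.size - np.sum(x*np.sin(np.sqrt(np.abs(x)))))

def griewank(x):
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2)/4000 - np.prod(np.cos(x/np.sqrt(i))) + 1)

x0 = np.zeros(30)
print(sphere(x0), ackley(x0), rastrigin(x0), griewank(x0))  # all ~0 at the origin
print(rosenbrock(np.ones(30)))                              # 0 at x = (1, ..., 1)
```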

If a function has more than one local optimum, it is called multimodal. Such functions are used to test the ability of algorithms to escape from local minima: if the exploration process of an algorithm is poor, it cannot search the whole space efficiently and gets stuck at a local minimum. It is worth mentioning that for unimodal functions the convergence characteristics are more interesting than the final results, whereas for multimodal functions the final results are more significant, since they reflect an algorithm's ability to escape from poor local optima and locate a good near-global optimum. Another grouping of test problems is separable versus non-separable functions. Non-separable functions have interrelations among their variables and are therefore more complicated than separable functions.

4.2. Common Parameters

There are common parameters to be determined in population-based optimization algorithms, and the performance of an algorithm can vary greatly across problems when these parameters are set differently. The common parameters include the population size, the dimension and the maximum number of iterations. Choosing the population size is a crucial task in EAs since it affects both the convergence rate and the quality of the final result. The dimensionality of the search space is also important, since increasing the dimension of the function increases the difficulty. To evaluate the search capability of MA-ALS, the experiments are carried out for different population sizes and dimensions of the test functions: all empirical experiments are performed in 10, 20, 30, 50 and 100 dimensions for population sizes 20, 30 and 40.

4.3. Experimental settings

The performance of MA-ALS is compared with GA and two types of MAs [48]. The first MA employs GA with Powell's direct search method (MA1), whereas the second is a traditional MA with multiple LS methods (MA2). In GA, the crossover probability $P_c$ and the mutation probability $P_m$ are set to 0.8 and 0.05, respectively. The default values of the parameters in MA-ALS are as follows:

$\mathrm{LI} \in [\mathrm{LI}_{\min}, \mathrm{LI}_{\max}] = [1, 30]$, $\mathrm{SS} \in [\mathrm{SS}_{\min}, \mathrm{SS}_{\max}] = [0.1, 1]$ and $\mathrm{NI} \in [\mathrm{NI}_{\min}, \mathrm{NI}_{\max}] = [1, 20]$.

The aforementioned algorithms are coded in Matlab 7.0 and the simulations are run on a laptop computer with a 2.1 GHz Core i3 processor and 4 GB of memory. Each algorithm is run 20 times and we record the mean value of the difference between the global optimum value and the best result obtained by the algorithm over the 20 runs of 10000 iterations, defined as

$\mathrm{mean} = \frac{1}{20} \sum_{k=1}^{20} \left( f(\mathbf{x})_k - f(\mathbf{x}^{*}) \right)$   (6)

where $\mathbf{x}$ and $\mathbf{x}^{*}$ denote the best solution obtained by the algorithm and the global optimum, respectively.
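Eq. (6) reads directly as a short script; the per-run best values below are hypothetical placeholders:

```python
import numpy as np

def mean_gap(best_per_run, f_star):
    """Eq. (6): average, over the independent runs, of the gap between the
    best fitness value found in a run and the known global optimum f_star."""
    best_per_run = np.asarray(best_per_run, dtype=float)
    return float(np.mean(best_per_run - f_star))

# Hypothetical best-of-run values from 20 runs of a minimizer with f* = 0
rng = np.random.default_rng(0)
best_values = np.abs(rng.normal(scale=1e-8, size=20))
print(mean_gap(best_values, f_star=0.0))
```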

Moreover, to compare the computational times of the algorithms, the predefined number of iterations is replaced by a threshold of $10^{-10}$ as the stopping criterion; the maximum number of iterations is set to 10000 in case an algorithm cannot reach the predefined threshold. Then, 20 trials are performed and the average computational time is recorded.

4.4. Experimental results and discussion

Simulations are carried out by observing the performance of MA-ALS along two dimensions: solution quality and efficiency. Solution quality refers to the ability of the algorithm to find optimal solutions, assessed by the number of times the optimum is found. Efficiency measures the success of the algorithm in obtaining the optimal solution, evaluated by convergence speed. Tables 3 and 4 report the results obtained by GA, MA1, MA2 and MA-ALS for dimension 30, in terms of the number of runs finding the optimal solution and the computational time. The proposed algorithm evidently spends far fewer iterations and much less computational time to reach the predefined threshold ($\varepsilon$) than the other algorithms. Hence, MA-ALS is superior to the aforementioned algorithms in both solution quality and efficiency. It is worth remarking that the Number of Fitness Evaluations (NFEs) is the number of evaluations needed to reach the highest quality value for each problem; the corresponding results are shown in Fig. 1. Referring to Fig. 1, the results of MA-ALS are achieved with significantly smaller NFEs. To illustrate the first generation of MA-ALS, Figs. 2 and 3 show the convergence of the LS procedure for the Rastrigin function; "o" and "Δ" denote the points before and after the LS procedure, respectively. From Fig. 3, it is apparent that the individuals are moved toward the optimal areas (shown in dark color) more efficiently.

Table 3. Number of runs finding the optimum for 30 dimensions.

Function  ε      GA  MA1  MA2  MA-ALS
F1        1e-12  0   2    1    50
F2        1e-12  24  41   39   50
F3        1e-9   0   11   24   50
F4        1e-9   12  28   23   33
F5        1e-9   0   0    19   50
F6        1e-9   0   0    38   49

Table 4. Average computational time (sec) for 30 dimensions.

Function  ε     GA    MA1    MA2    MA-ALS
F1        1e-6  1.33  4.52   3.98   0.54
F2        1e-9  5.1   12.53  13.34  2.582
F3        1e-6  NF    25.85  28.79  4.087
F4        1e-9  8.3   20.65  24.01  2.824
F5        1e-6  NF    5.86   5.73   1.82
F6        1e-9  NF    NF     12.65  11.4

NF means Not Found.

Fig. 1. NFEs for dimension 30.

Fig. 2. Rastrigin 2D function.

Fig. 3. LS for selected genes.

Fig. 4. Variation of the adaptive parameters (SS, LI, NI) during the search process.

To confirm the performance of MA-ALS in terms of search capability, the results are listed in Tables 5-10 for different population sizes and dimensions. Fig. 4 illustrates the variation of the adaptive parameters NI, SS and LI during the search process. The comparative results indicate that MA-ALS has good global search ability. An interesting result is that the fitness values of the MA-ALS algorithm are smaller than those of the others. MA-ALS can also find the optimum value of the complex test functions, even for the Rastrigin function: although Rastrigin is a multimodal function with many peaks of nearly equal height, MA-ALS can escape from the local optima and converge to the global optimum, whereas the other algorithms cannot, as shown in Table 7.

Table 5. Mean fitness values for F1 over 20 runs with 10000 iterations.

Dimension  Pop. size  GA      MA1      MA2      MA-ALS
10         20         4.7e-9  1.2e-9   3.2e-9   0
10         30         3.5e-9  7.1e-10  9.6e-10  0
10         40         3.1e-9  6.2e-10  7.9e-10  0
20         20         5.7e-9  3.2e-9   6.2e-9   0
20         30         4.3e-9  2.3e-9   5.7e-9   0
20         40         3.3e-9  1.5e-9   3.9e-9   0
30         20         3.6e-8  3.4e-8   9.7e-8   0
30         30         8.7e-9  5.7e-9   6.8e-9   0
30         40         5.5e-9  4.3e-9   5.6e-9   0
50         20         1.9e-6  4.1e-7   1.1e-6   0
50         30         9.3e-7  1.4e-7   8.2e-7   0
50         40         8.5e-7  9.4e-8   2.2e-7   0
100        20         3.1e-5  8.3e-7   5.1e-6   0
100        30         7.1e-6  6.2e-7   3.3e-6   0
100        40         5.3e-6  4.4e-7   8.2e-7   0

Table 6. Mean fitness values for F2 over 20 runs with 10000 iterations.

Dimension  Pop. size  GA       MA1      MA2      MA-ALS
10         20         3e-12    4.5e-14  1.5e-14  0
10         30         2.4e-13  2.4e-14  7.3e-15  0
10         40         1.1e-13  1e-14    0        0
20         20         4.6e-10  7.6e-12  5.5e-13  0
20         30         3.4e-11  4.3e-12  5.3e-13  0
20         40         3.1e-12  2.4e-13  3.3e-13  0
30         20         7.7e-9   3.4e-9   9.7e-10  0
30         30         8.9e-10  5.7e-12  7.7e-12  0
30         40         5.5e-10  4.3e-12  3.8e-12  0
50         20         6.3e-7   4.1e-8   8.2e-9   1.6e-12
50         30         3.1e-7   1.1e-8   4.3e-9   0
50         40         8.2e-8   8.2e-9   1.1e-9   0
100        20         3.1e-6   9.9e-8   3.2e-8   3.5e-11
100        30         1.4e-6   7.3e-8   8.2e-9   0
100        40         7.3e-7   3.5e-9   5.7e-9   0

Table 7. Mean fitness values for F3 over 20 runs with 10000 iterations.

Dimension  Pop. size  GA      MA1     MA2     MA-ALS
10         20         8.5e-6  3.3e-6  4.7e-6  0
10         30         6.8e-6  1.3e-6  3.8e-6  0
10         40         4.8e-6  7.0e-7  8.9e-7  0
20         20         8.9e-5  1.3e-5  9.0e-6  0
20         30         3.3e-5  7.2e-6  4.3e-6  0
20         40         7.6e-6  3.4e-6  1.8e-6  0
30         20         4.9e-4  9.2e-5  9.3e-5  0
30         30         6.3e-5  6.1e-5  4.3e-5  0
30         40         3.4e-5  2.2e-5  1.0e-5  0
50         20         1.2e-3  8.1e-4  8.0e-4  6.7e-9
50         30         9.7e-4  7.8e-4  3.2e-4  2.3e-10
50         40         6.7e-5  4.2e-4  7.1e-5  0
100        20         7.4e-3  3.2e-3  5.3e-3  5.1e-8
100        30         3.3e-3  9.8e-4  2.9e-3  1.2e-9
100        40         7.9e-4  6.1e-4  8.6e-4  0

Table 8. Mean fitness values for F4 over 20 runs with 10000 iterations.

Dimension  Pop. size  GA      MA1     MA2     MA-ALS
10         20         8.5e-3  4.3e-3  5.1e-3  6.6e-6
10         30         6.8e-3  2.3e-3  1.7e-3  6.1e-6
10         40         9.8e-4  7.0e-4  1.2e-3  5.9e-6
20         20         8.9e-3  5.3e-3  4.4e-3  9.0e-6
20         30         8.1e-3  4.0e-3  3.1e-3  8.8e-6
20         40         5.5e-3  3.4e-3  3.0e-3  7.9e-6
30         20         1.2e-2  6.3e-3  7.3e-3  5.9e-5
30         30         8.0e-3  5.1e-3  4.4e-3  5.8e-5
30         40         6.7e-3  4.0e-3  3.8e-3  5.6e-5
50         20         7.8e-2  4.6e-2  5.1e-2  1.7e-4
50         30         4.1e-2  2.1e-2  1.8e-2  1.1e-4
50         40         2.4e-2  8.3e-3  7.3e-3  8.8e-5
100        20         3.1e-1  9.7e-2  8.2e-2  8.6e-4
100        30         8.2e-2  6.4e-2  4.3e-2  5.5e-4
100        40         5.4e-2  4.3e-2  1.1e-2  2.3e-4

Table 9. Mean fitness values for F5 over 20 runs with 10000 iterations.

Dimension  Pop. size  GA      MA1     MA2     MA-ALS
10         20         4.5e-6  5.2e-7  3.2e-8  0
10         30         1.8e-6  4.3e-7  1.3e-8  0
10         40         9.8e-7  1.0e-7  8.2e-9  0
20         20         9.9e-6  9.1e-7  7.1e-8  0
20         30         7.1e-6  8.6e-7  5.3e-8  0
20         40         2.5e-6  7.8e-7  2.1e-8  0
30         20         1.2e-5  2.4e-6  1.9e-7  0
30         30         9.4e-6  1.1e-6  9.8e-8  0
30         40         7.7e-6  9.4e-7  6.8e-8  0
50         20         4.3e-4  3.4e-5  2.7e-4  1.1e-8
50         30         7.3e-4  5.7e-5  2.0e-4  0
50         40         8.1e-5  4.3e-6  6.6e-5  0
100        20         9.3e-4  4.9e-4  7.3e-4  8.4e-8
100        30         7.4e-4  2.3e-4  3.8e-4  4.3e-9
100        40         3.1e-4  8.8e-5  2.6e-4  0

Table 10. Mean fitness values for F6 over 20 runs with 10000 iterations.

Dimension  Pop. size  GA      MA1     MA2     MA-ALS
10         20         1.5e-3  9.3e-5  5.1e-8  0
10         30         9.8e-4  8.9e-5  1.1e-8  0
10         40         8.9e-4  6.7e-5  9.2e-9  0
20         20         3.9e-3  6.2e-5  7.7e-8  0
20         30         1.8e-3  3.5e-5  3.5e-8  0
20         40         1.0e-3  2.9e-5  1.3e-8  0
30         20         5.1e-3  6.1e-5  8.1e-8  0
30         30         2.4e-3  4.9e-5  3.9e-8  0
30         40         1.9e-3  3.9e-5  2.9e-8  0
50         20         9.6e-3  3.4e-4  7.7e-7  7.7e-12
50         30         8.7e-3  8.9e-5  7.9e-8  0
50         40         5.5e-3  6.3e-5  3.4e-8  0
100        20         8.5e-2  2.8e-3  8.9e-7  3.2e-11
100        30         2.5e-2  6.3e-4  7.7e-7  4.1e-12
100        40         7.6e-3  9.4e-5  5.9e-7  0

Important observations about the convergence speed of MA-ALS can be drawn from the results presented in Figs. 5-10. The results of MA-ALS are quite encouraging: as inferred from these figures, its fitness values converge to the optimum solution faster, with fewer iterations, than those of the other algorithms. For instance, as shown in Fig. 5, MA-ALS detects the best solution considerably faster than the other optimizers, at 110 generations. To confirm the superiority of MA-ALS, statistical results, including the best, the worst and the variance of each test function over 20 runs with 1000 iterations for population size 30, are reported in Tables 11-15. The performance of MA-ALS is superior in all aspects with respect to the other algorithms. In conclusion, the main findings of this experiment are as follows: MA-ALS successfully finds the global optimum in all tests with lower computational cost, and despite the different characteristics of the chosen optimization problems, its outstanding performance can be explained by its structure, which balances explorative and exploitative processes by combining different control-parameter strategies.

4.5. Sensitivity Analysis

A sensitivity analysis is performed to determine appropriate values for the three key parameters of MA-ALS described in Section 3: NI, SS and LI. To inspect the influence of each parameter on the performance of MA-ALS, an experiment is conducted on all the test functions in 30 dimensions; to isolate the effect of each parameter, the default values are used for the others. The default values are set to $\mathrm{SS} \in [\mathrm{SS}_{\min}, \mathrm{SS}_{\max}] = [0.1, 1]$, $\mathrm{LI} \in [\mathrm{LI}_{\min}, \mathrm{LI}_{\max}] = [1, 30]$ and $\mathrm{NI} \in [\mathrm{NI}_{\min}, \mathrm{NI}_{\max}] = [1, 20]$.

Table 16 shows the corresponding results. As can be inferred from Table 16, a larger SS yields faster convergence in the early stages of the search, while it has almost no effect at the end of the search. Therefore, it is better to adjust SS during the LS procedure from large values to small values as the iterations increase.

Fig. 5. Convergence behaviour for F1 with dimension 30 and population size 30.

Fig. 6. Convergence behaviour for F2 with dimension 30 and population size 30.

Fig. 7. Convergence behaviour for F3 with dimension 30 and population size 30.

Fig. 8. Convergence behaviour for F4 with dimension 30 and population size 30.

Fig. 9. Convergence behaviour for F5 with dimension 30 and population size 30.

Fig. 10. Convergence behaviour for F6 with dimension 30 and population size 30.

Table 11. Best, variance and worst of each function over 20 runs, population size 30 with 1000 iterations, D = 10 (entries: Best / Variance / Worst).

Function  GA                    MA1                   MA2                    MA-ALS
F1        2e-8 / 1.20 / 8       1e-8 / 0.24 / 1.1     7e-4 / 0.65 / 1.71     0 / 0 / 0
F2        6e-9 / 0.171 / 0.980  0 / 0.098 / 3e-2      0 / 3e-5 / 4e-5        0 / 0 / 0
F3        6e-4 / 0.91 / 1.98    3e-6 / 0.86 / 1.21    4e-6 / 0.76 / 2.10     0 / 0 / 0
F4        5e-3 / 32 / 45        3e-8 / 12 / 26        1e-7 / 28 / 43         0 / 2e-4 / 3e-4
F5        5e-3 / 0.087 / 0.23   7e-5 / 0.87 / 1.29    5e-8 / 0.032 / 0.20    1e-13 / 2.1e-7 / 7e-9
F6        2e-4 / 1.62 / 9.3     2e-5 / 2.01 / 4.76    2e-9 / 0.074 / 1.19    7e-10 / 5e-8 / 4e-6

Table 12. Best, variance and worst of each function over 20 runs, population size 30 with 1000 iterations, D = 20 (entries: Best / Variance / Worst).

Function  GA                    MA1                   MA2                    MA-ALS
F1        8e-6 / 1.91 / 23      3e-7 / 0.35 / 3.3     6e-5 / 1.04 / 3.15     0 / 0 / 0
F2        7e-7 / 0.223 / 1.012  0 / 0.109 / 8e-2      0 / 8e-5 / 1e-4        0 / 0 / 0
F3        2e-3 / 1.13 / 3.21    9e-6 / 0.98 / 2.86    8e-5 / 0.95 / 3.04     0 / 6e-9 / 1e-8
F4        6e-2 / 66 / 101       7e-6 / 34 / 59        7e-6 / 32 / 67         4e-6 / 3.1e-3 / 5e-3
F5        3e-2 / 0.68 / 1.44    3e-4 / 1.02 / 2.68    9e-8 / 0.065 / 0.54    1e-11 / 4.7e-6 / 4e-7
F6        7e-4 / 2.0 / 15       8e-5 / 2.7 / 5.32     6e-9 / 0.11 / 1.68     1e-8 / 8e-7 / 5e-6

Table 13. Best, variance and worst of each function over 20 runs, population size 30 with 1000 iterations, D = 30 (entries: Best / Variance / Worst).

Function  GA                    MA1                   MA2                    MA-ALS
F1        5e-5 / 2.20 / 36      2e-6 / 0.50 / 4       2e-5 / 1.3 / 6.94      0 / 0 / 0
F2        3e-6 / 0.296 / 1.408  0 / 0.123 / 0.717     0 / 1e-4 / 2e-4        0 / 0 / 0
F3        7e-3 / 1.52 / 5.81    4e-4 / 1.21 / 4.12    3e-3 / 1.12 / 4.87     0 / 1.3e-6 / 2e-6
F4        2e-2 / 205.5 / 213    2e-5 / 78 / 128       2e-5 / 43 / 92         3e-4 / 6.1e-3 / 8e-2
F5        7e-2 / 1.12 / 3.81    4e-4 / 1.21 / 4.12    3e-7 / 0.12 / 0.87     1e-9 / 2.3e-4 / 3e-5
F6        1e-2 / 3.3 / 23       3e-4 / 3.23 / 9.20    3e-8 / 0.23 / 2.20     4e-8 / 8e-6 / 1e-5

Table 14. Best, variance and worst of each function over 20 runs, population size 30 with 1000 iterations, D = 50 (entries: Best / Variance / Worst).

Function  GA                    MA1                   MA2                    MA-ALS
F1        2e-4 / 2.58 / 42      1e-4 / 0.62 / 5.2     1e-3 / 1.45 / 8.18     0 / 0 / 0
F2        7e-5 / 0.314 / 2.14   0 / 0.324 / 1.45      0 / 8e-4 / 7e-3        0 / 2.1e-9 / 4e-9
F3        0.01 / 1.97 / 7.4     3e-3 / 1.98 / 6.00    8e-3 / 1.47 / 7.31     1e-8 / 2.1e-6 / 4e-6
F4        0.12 / 478 / 515      8e-3 / 98 / 167       1e-3 / 67 / 125        7e-4 / 8.3e-3 / 12e-2
F5        0.63 / 3.27 / 6.21    2e-3 / 1.95 / 6.23    9e-5 / 0.64 / 2.34     3e-8 / 7.8e-3 / 2e-3
F6        0.32 / 5.21 / 45      5e-3 / 4.22 / 17.97   7e-5 / 0.49 / 4.31     5e-5 / 1e-4 / 5e-3

Table 15. Best, variance and worst of each function over 20 runs, population size 30 with 1000 iterations, D = 100 (entries: Best / Variance / Worst).

Function  GA                    MA1                   MA2                    MA-ALS
F1        8e-3 / 3.28 / 58      2e-3 / 0.89 / 6.1     3e-3 / 1.76 / 9.07     0 / 0 / 0
F2        4e-3 / 0.87 / 4.965   3e-9 / 0.563 / 3.07   0 / 2e-3 / 3e-2        0 / 6.4e-7 / 8e-7
F3        0.09 / 3.01 / 14.2    0.02 / 3.21 / 9.67    0.03 / 1.98 / 11.5     6e-7 / 5.2e-6 / 2e-5
F4        1.4 / 987 / 1023      0.02 / 144 / 203      9e-3 / 78 / 144        3e-3 / 9.1e-2 / 83e-2
F5        1.21 / 4.39 / 9.52    0.02 / 3.25 / 8.77    3e-4 / 1.26 / 6.46     5e-6 / 32e-3 / 7e-2
F6        2.21 / 7.54 / 62      0.07 / 7.12 / 43.11   5e-4 / 0.98 / 9.84     3e-3 / 3e-2 / 12e-2

Table 16. Sensitivity of parameters for dimension 30 and population size 30 (@x means the final result was found at iteration x).

          SS value, [SS_min, SS_max]             NI value, [NI_min, NI_max]
Function  [0.01, 0.1]  [0.1, 1]  [1, 10]         [0, 10]  [1, 20]  [5, 30]
F1        @234         @108      @114            @345     @234     @121
F2        @976         @1007     @1237           @1073    @976     @567
F3        @1204        @1021     @1221           @1564    @1204    @1001
F4        @9925        3.8e-10   8.1e-10         1.0e-9   3.8e-10  @8933
F5        @2201        @2607     @2859           @2722    @2110    @1987
F6        @3143        @2860     @3533           @4025    @4017    @3928

5. Application to three real-world optimization problems

Three optimization problems are considered to evaluate the MA-ALS algorithm on real-world applications. The first problem is to find the parameters of a fuzzy Proportional-Derivative (PD) controller for position control of a Permanent Magnet DC (PMDC) motor. The second is to force the output of a nonlinear chaotic system, namely the Duffing system, to follow a sine-wave trajectory using a sliding mode controller. The third is to design a feed-forward Proportional-Integral-Derivative (PID) controller for a 6-Degree-of-Freedom (DoF) robot manipulator, a multi-input multi-output, nonlinear and highly coupled system.

In each problem, the purpose is to find an optimal controller design using MA-ALS. The objective function is defined as

$J = \int_{0}^{T} \left( k_u u^2 + k_e e^2 \right) dt$   (7)

where $k_u$ and $k_e$ are the positive constant weighting coefficients of the control energy and the tracking error, respectively.
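Given sampled trajectories of the control signal and the tracking error, this cost can be evaluated numerically; the minimal sketch below uses the trapezoidal rule, and the sampled trajectories are hypothetical placeholders:

```python
import numpy as np

def control_cost(t, u, e, k_u=1.0, k_e=1.0):
    """Numerical evaluation of Eq. (7): J = int_0^T (k_u*u^2 + k_e*e^2) dt.
    k_u and k_e weight the control energy and the tracking error."""
    integrand = k_u * np.asarray(u)**2 + k_e * np.asarray(e)**2
    return float(np.trapz(integrand, t))

# Hypothetical decaying error/effort trajectories over T = 10 s
t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-t)
u = 2.0 * np.exp(-0.5 * t)
print(control_cost(t, u, e, k_u=1.0, k_e=12.0))  # error weight as in Table 18
```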

The optimization problem at hand can thus be posed on the error, the energy, or both, by choosing these coefficients.

5.1. Design of Fuzzy PD controller for PMDC motor

Since fuzzy sets are powerful tools in the presence of nonlinearities and uncertainties, they have been designed and implemented in various engineering problems. In this application, a fuzzy PD controller is designed for position control of the PMDC motor. Fig. 11 shows the block diagram of the PMDC motor, where $V$ is the maximum motor voltage, $I$ denotes the motor current, $R$, $L$ and $k_b$ are the armature resistance, inductance and back-EMF constant, respectively, and $J$, $B$ and $r_G$ stand for the inertia, damping and reduction gear ratio, respectively. The parameters of the motor used in this paper are given in Table 17.

Table 17. Parameters of the permanent magnet DC motor [56].

R (Ω)  k_b   J       B        r_G   V_max (Volt)
1.6    0.26  0.0002  0.00148  0.02  12

Fig. 11. The PMDC motor.

Fig. 12. General block diagram of the fuzzy PD controller.

Since the grades of membership functions may become uncertain in value, the control problem cannot easily handle nonlinear systems subject to parameter uncertainties. To deal with uncertain grades of membership functions for nonlinear dynamic systems, the Type-2 Fuzzy Logic (T2FL) system was recently proposed [52]. However, the systematic design of T2FL is still challenging, mainly because of the difficulty of determining the parameters of the T2FL system. Furthermore, most fuzzy rules are based on human knowledge and differ among people even when the system performance is the same; it is difficult to assume that the given expert knowledge, captured in the form of the fuzzy controller, leads to optimal control. Effective approaches that avoid trial-and-error techniques are therefore strongly required. From this viewpoint, it is desirable to develop an optimal tuning strategy that determines a set of controller gains simultaneously by solving an optimization problem, and several studies have employed evolutionary algorithms for the design of fuzzy logic controllers [46, 52]. Here, the goal is to achieve an optimal fuzzy PD controller for the best trajectory tracking of the PMDC motor; at the core of the controller, the MA-ALS algorithm is employed for evolving the controller parameters. Fig. 12 represents the general block diagram of the fuzzy PD controller, which has two inputs, namely the error $(e)$ and the change rate of the error $(\dot{e})$. The associated fuzzy sets are

N: Negative, NM: Negative Medium, NH: Negative High, Z: Zero, P: Positive, PM: Positive Medium, PH: Positive High

The corresponding membership functions of the inputs ($e$ and $\dot{e}$) and the output are illustrated in Figs. 13 and 14, respectively. The fuzzy rules are defined as follows:

Rule 1: IF ($e$ is P and $\dot{e}$ is P) THEN $\rho$ is PH
Rule 2: IF ($e$ is P and $\dot{e}$ is Z) THEN $\rho$ is PM
Rule 3: IF ($e$ is P and $\dot{e}$ is N) THEN $\rho$ is Z
Rule 4: IF ($e$ is Z and $\dot{e}$ is P) THEN $\rho$ is PM
Rule 5: IF ($e$ is Z and $\dot{e}$ is Z) THEN $\rho$ is Z
Rule 6: IF ($e$ is Z and $\dot{e}$ is N) THEN $\rho$ is NM
Rule 7: IF ($e$ is N and $\dot{e}$ is P) THEN $\rho$ is Z
Rule 8: IF ($e$ is N and $\dot{e}$ is Z) THEN $\rho$ is NM
Rule 9: IF ($e$ is N and $\dot{e}$ is N) THEN $\rho$ is NH

Fig. 13. Input membership functions for the fuzzy PD controller ($e$ and $\dot{e}$).

Fig. 14. Output membership functions for the fuzzy PD controller.

The crisp output is obtained using the product inference engine, singleton fuzzifier and center-average defuzzifier as

$\rho(e, \dot{e}) = \dfrac{\sum_{l=1}^{9} \mu_{A_l}(K_p e)\, \mu_{B_l}(K_d \dot{e})\, \rho_l}{\sum_{l=1}^{9} \mu_{A_l}(K_p e)\, \mu_{B_l}(K_d \dot{e})}$   (8)

where $\mu_l$, $l = 1, \ldots, 9$, is the membership degree of the $l$-th rule and $\rho = [\rho_1, \ldots, \rho_9]$ are the centers of the output membership functions.
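A minimal Python sketch of the product-inference, center-average computation in Eq. (8) for the nine rules above is given below; the triangular membership shapes and the output centers are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def fuzzy_pd_output(e, de, Kp, Kd, rho_centers):
    """Center-average defuzzification of Eq. (8) over the 3x3 rule base
    (N/Z/P on each scaled input). Shapes and centers are placeholders."""
    mu_e  = {'N': tri(Kp * e, -4, -2, 0), 'Z': tri(Kp * e, -2, 0, 2), 'P': tri(Kp * e, 0, 2, 4)}
    mu_de = {'N': tri(Kd * de, -4, -2, 0), 'Z': tri(Kd * de, -2, 0, 2), 'P': tri(Kd * de, 0, 2, 4)}
    # Rule table: (e-label, de-label) -> output set, per Rules 1-9 above
    rules = [('P', 'P', 'PH'), ('P', 'Z', 'PM'), ('P', 'N', 'Z'),
             ('Z', 'P', 'PM'), ('Z', 'Z', 'Z'),  ('Z', 'N', 'NM'),
             ('N', 'P', 'Z'),  ('N', 'Z', 'NM'), ('N', 'N', 'NH')]
    num = den = 0.0
    for le, lde, lout in rules:
        w = mu_e[le] * mu_de[lde]        # product inference engine
        num += w * rho_centers[lout]
        den += w
    return num / den if den > 0 else 0.0

centers = {'NH': -20, 'NM': -10, 'Z': 0, 'PM': 10, 'PH': 20}  # assumed centers
print(fuzzy_pd_output(e=0.5, de=-0.1, Kp=2.0, Kd=2.0, rho_centers=centers))
```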

The MA-ALS performance is compared with those obtained by MA1, MA2 and GA. Table 18 lists the best fitness values over 5 independent runs with different objective functions. Fig. 15 represents the trajectory of the system output and Fig. 16 depicts the control effort signal; in these figures, the non-optimal response is illustrated using $K_p = 2$ and $K_d = 2$. The trajectories of the fitness functions ($J_e$ and $J_{e+E}$) and of the controller parameters ($k_p$ and $k_d$) are presented in Figs. 17-22. These figures illustrate that the controller parameters obtained by MA-ALS approach their steady-state values faster than those of the other algorithms. The steady-state values are listed in Table 19.

5.2. Design of sliding mode controller for Duffing chaotic system

A chaotic system is a highly nonlinear system whose response exhibits specific characteristics such as extreme sensitivity to initial conditions [3]; this sensitivity makes the motion unpredictable and irregular over long time horizons. To further assess the performance of MA-ALS, numerical simulations are carried out for controlling the Duffing chaotic system, whose dynamics are given by [18]

$\dot{x}_1 = x_2$
$\dot{x}_2 = -0.1\, x_2 - x_1^3 + 12\cos(t) + u + d$   (9)
$y = x_1$

where $u \in \mathbb{R}$ is the control input, $\mathbf{x} = [x_1, x_2]^T \in \mathbb{R}^2$ denotes the state vector and $d$ stands for the external bounded disturbance.

Table 18. Best results for the permanent magnet DC motor with different fitness functions.

Fitness name      Fitness function                              GA      MA1     MA2     MA-ALS
error             $J_e = \int_0^{10} e^2\, dt$                  0.6355  0.6612  0.6458  0.6273
error and energy  $J_{e+E} = \int_0^{10} (12e^2 + u^2)\, dt$    18.19   15.46   14.09   12.52

Table 19. Optimal parameters for the permanent magnet DC motor with different fitness functions using MA-ALS.

Fitness name      k_p      k_d
error             47.3881  0.4851
error and energy  4.6400   0.7491

Fig. 15. Tracking response of the permanent magnet DC motor.

Fig. 16. Voltage profile of the permanent magnet DC motor.

Fig. 17. Best fitness with error optimization for the fuzzy PD controller.

Fig. 18. Best fitness with error and energy optimization for the fuzzy PD controller.

Fig. 19. Variation of Kp with error optimization for the fuzzy PD controller.

Fig. 20. Variation of Kp with error and energy optimization for the fuzzy PD controller.

Fig. 21. Variation of Kd with error optimization for the fuzzy PD controller.

Fig. 22. Variation of Kd with error and energy optimization for the fuzzy PD controller.

Eq. (9) can be written as

$\ddot{y} = f(y, \dot{y}, t) + u + d$   (10)

where $f(y, \dot{y}, t)$ is a generally unknown but bounded nonlinear function, decomposed as in Eq. (11):

$f(y, \dot{y}, t) = \hat{f}(y, \dot{y}, t) + \Delta f(y, \dot{y}, t)$   (11)

Here $\hat{f}(y, \dot{y}, t)$ is the nominal value of $f(y, \dot{y}, t)$ and $\Delta f(y, \dot{y}, t)$ stands for the uncertainty of the system model. The chaos control problem is to make the system output $y$ follow the reference signal $y_d$. Define the tracking error

$e = y_d - y$   (12)

The sliding mode control strategy is simple and easy to implement, and it offers strong robustness to system uncertainties and external disturbances. MA-ALS is therefore employed to improve the system performance by optimizing the parameters of the sliding mode controller, including the switching function and the exponential reaching law. Consider the sliding surface

$S = \dot{e} + \lambda e$   (13)

where $\lambda$ is a positive constant. The condition

$\frac{1}{2}\frac{d}{dt} S^2 \le -\eta\, |S|, \quad \eta \ge 0$   (14)

is a sufficient condition for achieving appropriate sliding behavior. Constructing the Lyapunov function $V = \frac{1}{2} S^2$ and taking its time derivative, it can be found that

$\ddot{y}_d - \left( \hat{f}(y, \dot{y}, t) + \Delta f(y, \dot{y}, t) + u \right) + \lambda \dot{e} < -\eta\, \mathrm{sgn}(S)$   (15)

According to the sliding mode condition, the following control law is deduced:

$u = \ddot{y}_d + \lambda \dot{e} - \hat{f} + (\eta + \rho)\, \mathrm{sgn}(S)$   (16)

where $\rho$ is the upper bound of the uncertainties. Assuming a parameter uncertainty of 20% and an upper bound of 3 for the external disturbance, one obtains

$\rho = 0.02\, |\dot{y}| + 0.2\, |y|^3 + 15$   (17)
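For illustration, the closed loop of Eqs. (9), (16) and (17) can be simulated directly. In the minimal sketch below, the nominal model f̂ is taken equal to the true dynamics (no model mismatch), the disturbance is zero, and the initial state is an assumption; the gains are the MA-ALS values reported in Table 21 below.

```python
import numpy as np

def smc_duffing(lam, eta, t_end=5.0, dt=1e-3):
    """Simulate the Duffing system of Eq. (9) under the sliding mode law of
    Eq. (16), tracking y_d = sin(t). f_hat equals the true f here (an
    illustrative assumption), and rho follows Eq. (17)."""
    x1, x2 = 0.2, 0.2                       # assumed initial state
    errs = []
    for k in range(int(t_end / dt)):
        t = k * dt
        yd, yd_dot, yd_ddot = np.sin(t), np.cos(t), -np.sin(t)
        e, e_dot = yd - x1, yd_dot - x2
        S = e_dot + lam * e                 # sliding surface, Eq. (13)
        f_hat = -0.1 * x2 - x1**3 + 12 * np.cos(t)
        rho = 0.02 * abs(x2) + 0.2 * abs(x1)**3 + 15          # Eq. (17)
        u = yd_ddot + lam * e_dot - f_hat + (eta + rho) * np.sign(S)  # Eq. (16)
        # Euler integration of Eq. (9) with disturbance d = 0
        x1, x2 = x1 + dt * x2, x2 + dt * (f_hat + u)
        errs.append(abs(e))
    return np.mean(errs)

print(smc_duffing(lam=7.5144, eta=36.2031))  # error-optimal gains, Table 21
```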

Using the control law given in Eq. (16), the goal is to minimize $J$ to obtain the optimal parameters $\lambda_{opt}$ and $\eta_{opt}$. Table 20 summarizes the best fitness values over 5 independent runs with different objective functions, and the steady-state values are listed in Table 21. The tracking response and the control effort are shown in Figs. 23 and 24, respectively; the non-optimal response is also illustrated in these figures, with $\eta = 1$ and $\lambda = 20$. The corresponding trajectories of the fitness functions ($J_e$ and $J_{e+E}$) and of the control parameters ($\eta$ and $\lambda$) are given in Figs. 25-30. From the results, the parameters obtained by MA-ALS converge quickly to the optimal values. Moreover, the proposed controller not only has optimal tracking ability, but is also robust against the system uncertainties and external disturbance.

Table 20. Best results for the Duffing system with different fitness functions.

Fitness name      Fitness function                              GA        MA1       MA2       MA-ALS
error             $J_e = 100\int_0^{5} e^2\, dt$                11.0134   10.6093   10.2051   9.6172
error and energy  $J_{e+E} = \int_0^{5} (100e^2 + u^2)\, dt$    262.8612  238.6756  239.6756  230.0122

Table 21. Optimal parameters for the Duffing system with different fitness functions using MA-ALS.

Fitness name      λ_opt   η_opt
error             7.5144  36.2031
error and energy  8.4288  0.1000

Fig. 23. Tracking response for the Duffing system.

Fig. 24. Control effort for the Duffing system.

Fig. 25. Best fitness with error optimization for the Duffing system.

Fig. 26. Best fitness with error and energy optimization for the Duffing system.

Fig. 27. Variation of η with error optimization for the Duffing system.

Fig. 28. Variation of η with error and energy optimization for the Duffing system.

Fig. 29. Variation of λ with error optimization for the Duffing system.

Fig. 30. Variation of λ with error and energy optimization for the Duffing system.

5.3. Design of feed-forward PID for a 6-DoF robot manipulator

The goal here is to design a PID controller for the trajectory control problem of a 6-DoF robot manipulator [3]. The controller is applied separately to every joint, so the dimension of the optimization problem is 18. The dynamic equation of the robot manipulator is given by [55]

$D(\mathbf{q})\ddot{\mathbf{q}} + C(\mathbf{q}, \dot{\mathbf{q}})\dot{\mathbf{q}} + \mathbf{g}(\mathbf{q}) + F(\dot{\mathbf{q}}) + \boldsymbol{\tau}_d = \boldsymbol{\tau}$   (18)

where $\mathbf{q} = [q_1, q_2, q_3, q_4, q_5, q_6]^T \in \mathbb{R}^6$ is the vector of joint angular positions, $\dot{\mathbf{q}}, \ddot{\mathbf{q}} \in \mathbb{R}^6$ are the angular velocities and accelerations of the joints, respectively, $D(\mathbf{q})$ is the inertia matrix, $C(\mathbf{q}, \dot{\mathbf{q}})\dot{\mathbf{q}} \in \mathbb{R}^6$ is the vector of centrifugal and Coriolis forces, $\mathbf{g}(\mathbf{q}) \in \mathbb{R}^6$ is the vector of gravitational forces, $\boldsymbol{\tau} \in \mathbb{R}^6$ is the input torque vector, $F(\dot{\mathbf{q}}) \in \mathbb{R}^6$ is the friction vector and $\boldsymbol{\tau}_d \in \mathbb{R}^6$ is the external bounded disturbance.

The model-based feed-forward PID control law for tracking the desired path $\mathbf{q}_d$ is given as

$\boldsymbol{\tau} = D(\mathbf{q})\left( \ddot{\mathbf{q}}_d + k_d \dot{\mathbf{e}} + k_p \mathbf{e} + k_i \int_0^{t} \mathbf{e}\, dt \right) + C(\mathbf{q}, \dot{\mathbf{q}})\dot{\mathbf{q}} + \mathbf{g}(\mathbf{q})$   (19)

where $\mathbf{e} = \mathbf{q}_d - \mathbf{q}$, $\dot{\mathbf{e}} = \dot{\mathbf{q}}_d - \dot{\mathbf{q}}$, and the gain matrices $k_p \in \mathbb{R}^{6\times 6}$, $k_d \in \mathbb{R}^{6\times 6}$ and $k_i \in \mathbb{R}^{6\times 6}$ are diagonal. The torque of the robot's actuators is saturated, and the external disturbance and the friction of the system are treated as the uncertain part of the system. Using the control law given in Eq. (19), the goal is to minimize $J$ to obtain the optimal parameters $k_{d,opt}$, $k_{p,opt}$ and $k_{i,opt}$. Table 22 lists the best fitness values over 5 runs with different objective functions, and the steady-state values are given in Table 23. The norms of the tracking error and of the control effort are shown in Figs. 31 and 32, respectively; the non-optimal response is illustrated in these figures with $k_p = 500 I_6$, $k_d = 125 I_6$ and $k_i = 100 I_6$. The corresponding trajectories of the fitness functions ($J_e$ and $J_{e+E}$) are given in Figs. 33-34. It is apparent that not only is optimal tracking achieved, but the system is also robust against the uncertainties and external disturbance.
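As an illustration of Eq. (19), the sketch below exercises the control law on a toy stand-in model: a constant diagonal inertia matrix replaces the configuration-dependent $D(\mathbf{q})$, $C(\mathbf{q}, \dot{\mathbf{q}})$ and $\mathbf{g}(\mathbf{q})$ of [55] (an assumption for illustration), the desired trajectory is hypothetical, and the gains are the MA-ALS error-fitness gains of Table 23 below.

```python
import numpy as np

def feedforward_pid_torque(q, qd, q_dot, qd_dot, qdd_d, e_int, Kp, Kd, Ki, D, C, g):
    """Model-based feed-forward PID of Eq. (19):
    tau = D(q)(qdd_d + Kd*edot + Kp*e + Ki*int(e)) + C(q,qdot)*qdot + g(q)."""
    e, e_dot = qd - q, qd_dot - q_dot
    return D @ (qdd_d + Kd @ e_dot + Kp @ e + Ki @ e_int) + C @ q_dot + g

# Toy stand-in dynamics (constant diagonal inertia, no Coriolis or gravity);
# the real 6-DoF D(q), C(q,qdot), g(q) come from the manipulator model [55].
n = 6
D = np.diag(np.full(n, 0.5))
C = np.zeros((n, n))
g = np.zeros(n)
Kp = np.diag([918, 835, 537, 658, 786, 714])   # MA-ALS gains, Table 23 ("error")
Kd = np.diag([33, 34, 50, 33, 32, 31])
Ki = np.diag([131, 2, 26, 50, 14, 22])

dt, q, q_dot, e_int = 1e-3, np.zeros(n), np.zeros(n), np.zeros(n)
for k in range(1000):
    t = k * dt
    qd = np.sin(t) * np.ones(n)                # hypothetical desired path
    qd_dot, qdd_d = np.cos(t) * np.ones(n), -np.sin(t) * np.ones(n)
    tau = feedforward_pid_torque(q, qd, q_dot, qd_dot, qdd_d, e_int, Kp, Kd, Ki, D, C, g)
    q_ddot = np.linalg.solve(D, tau - C @ q_dot - g)   # forward dynamics
    q_dot += dt * q_ddot
    q += dt * q_dot
    e_int += dt * (qd - q)
print(np.linalg.norm(np.sin(1.0) * np.ones(n) - q))    # error norm near t = 1 s
```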

Table 22. Best results for the robot manipulator.

Fitness name      Fitness function                                                          GA      MA1     MA2     MA-ALS
error             $J_e = \int_0^{1} \|\mathbf{e}\|^2\, dt$                                  0.1541  0.1263  0.1534  0.0912
error and energy  $J_{e+E} = \int_0^{1} (\|\mathbf{e}\|^2 + 10^{-8} \|\mathbf{u}\|^2)\, dt$  0.2384  0.2217  0.2360  0.2008

Table 23. Optimal parameters of the robot manipulator using MA-ALS.

Fitness name      k_p                                   k_d                           k_i
error             Diag([918, 835, 537, 658, 786, 714])  Diag([33, 34, 50, 33, 32, 31])  Diag([131, 2, 26, 50, 14, 22])
error and energy  Diag([298, 299, 334, 854, 768, 460])  Diag([28, 35, 10, 29, 8, 29])   Diag([67, 15, 10, 11, 44, 27])

Fig. 31. Norm of the tracking error for the robot manipulator.

Fig. 32. Norm of the control effort for the robot manipulator.

Fig. 33. Best fitness with error optimization for the robot manipulator.

Fig. 34. Best fitness with error and energy optimization for the robot manipulator.

6. Conclusions

To avoid becoming trapped in local optima and to improve the accuracy of the final solution, this paper proposed MA-ALS for solving engineering optimization problems. An adaptive LS method employing three different parameters, intelligently determined at each step of the LS process, was introduced for local refinement in the exploitation phase. To analyze the performance of the proposed MA-ALS, well-defined benchmark functions with different characteristics were utilized, and the presented comparisons confirmed the superiority of the algorithm in terms of convergence speed and accuracy. Finally, to show its effectiveness, MA-ALS was successfully applied to three real-world optimal control problems.

Acknowledgements

The authors would like to express their sincere appreciation to the anonymous reviewers for their insightful comments, which greatly improved the quality of this paper. The authors thank Professor W. Pedrycz, Editor-in-Chief, for providing very helpful and constructive suggestions, and Dr. Naser Farrokhi for his valuable advice during the editing of the manuscript.

References
[1] A.M. Acilar, A. Arslan, A novel approach for designing adaptive fuzzy classifiers based on the combination of an artificial immune network and a memetic algorithm, Information Sciences 264 (2014) 158–181.
[2] A. Alfi, M.M. Fateh, Intelligent identification and control using improved fuzzy particle swarm optimization, Expert Systems with Applications 38 (2011) 12312–12317.
[3] A. Alfi, Chaos suppression on a class of uncertain nonlinear chaotic systems using an optimal H∞ adaptive PID controller, Chaos, Solitons & Fractals 42 (2012) 351–357.
[4] A. Alfi, PSO with adaptive mutation and inertia weight and its application in parameter estimation of dynamic systems, Acta Automatica 37 (2011) 541–549.
[5] A. Alfi, M.M. Fateh, Identification of nonlinear systems using modified particle swarm optimization: A hydraulic suspension system, Journal of Vehicle System Dynamics 46 (2011) 871–887.
[6] A. Alfi, H. Modares, System identification and control using adaptive particle swarm optimization, Journal of Applied Mathematical Modelling 35 (2011) 1210–1221.
[7] J.A. Bryson, Y.C. Ho, Applied Optimal Control: Optimization, Estimation and Control, Hemisphere, Washington, DC, 1975.
[8] E.K. Burke, G. Kendall, E. Soubeiga, A tabu search hyper-heuristic for timetabling and rostering, Journal of Heuristics 9 (2003) 451–470.
[9] A. Caponio, G.L. Cascella, F. Neri, N. Salvatore, M. Sumner, A fast adaptive memetic algorithm for online and offline control design of PMSM drives, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 37 (2007) 28–41.

[10] W. Chang, Nonlinear system identification and control using a real-coded genetic algorithm, Applied Mathematical Modelling 31 (2007) 541–550.
[11] J. Chen, B. Xin, Z. Peng, L. Dou, J. Zhang, Optimal contraction theorem for exploration-exploitation tradeoff in search and optimization, IEEE Transactions on Systems, Man and Cybernetics 39 (2009) 680–691.

[12] H.B. Curry, The method of steepest descent for non-linear minimization problems, Quarterly of Applied Mathematics 2 (1944) 258–261.
[13] P. Cowling, G. Kendall, E. Soubeiga, A hyper-heuristic approach to scheduling a sales summit, in: PATAT 2000, Springer Lecture Notes in Computer Science, Konstanz, Germany, 2000, pp. 176–190.
[14] Y.H. Dai, Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM Journal on Optimization 10 (1999) 177–182.

[15] M. Dorigo, T. Stützle, Ant Colony Optimization, Bradford Company, Scituate, MA, USA, 2004.
[16] L. Gál, L.T. Kóczy, R. Lovassy, A novel version of the bacterial memetic algorithm with modified operator execution order, Óbuda University e-Bulletin 1 (2010) 25–34.
[17] L. Gao, G. Zhang, L. Zhang, X. Li, An efficient memetic algorithm for solving the job shop scheduling problem, Computers & Industrial Engineering 60 (2011) 699–705.
[18] H.F. Ho, Y.K. Wong, A.B. Rad, Adaptive PID controller for nonlinear systems with H∞ tracking performance, International Conference on Physics and Control (2003) 1315–1319.
[19] H. Ishibuchi, T. Yoshida, T. Murata, Balance between genetic search and local search in memetic algorithms for multi-objective permutation flow shop scheduling, IEEE Transactions on Evolutionary Computation 7 (2003) 204–223.
[20] B. Jiang, F. Zhang, Y. Sun, X. Zhou, J. Dong, L. Zhang, Modeling and optimization for curing of polymer flooding using an artificial neural network and a genetic algorithm, Journal of the Taiwan Institute of Chemical Engineers 45 (2014) 2217–2224.
[21] G. Kendall, P. Cowling, E. Soubeiga, Choice function and random hyper-heuristics, Proc. 4th Asia-Pacific Conference on Simulated Evolution and Learning, 2002, pp. 667–671.
[22] G. Koulinas, L. Kotsikas, K. Anagnostopoulos, A particle swarm optimization based hyper-heuristic algorithm for the classic resource constrained project scheduling problem, Information Sciences 277 (2014) 680–693.
[23] N. Krasnogor, Studies on the theory and design space of memetic algorithms, Ph.D. dissertation, Faculty of Computing, Mathematics and Engineering, University of the West of England, Bristol, U.K., 2002.
[24] N. Krasnogor, B. Blackburne, J.D. Hirst, E.K. Burke, Multimeme algorithms for the structure prediction and structure comparison of proteins, in: Parallel Problem Solving From Nature, Lecture Notes in Computer Science, 2002.
[25] N. Krasnogor, J. Smith, A memetic algorithm with self-adaptive local search: TSP as a case study, Proc. Genetic and Evolutionary Computation Conference, Morgan Kaufmann, San Francisco, CA, 2000, pp. 987–994.
[26] N. Krasnogor, J. Smith, A tutorial for competent memetic algorithms: Model, taxonomy, and design issues, IEEE Transactions on Evolutionary Computation 9 (2005) 474–488.

[27] K.W.C. Ku, M.W. Mak, W.C. Siu, A study of the Lamarckian evolution of recurrent neural networks, IEEE Transactions on Evolutionary Computation 4 (2000) 31–42.
[28] A. Lara, G. Sanchez, C.A. Coello, O. Schütze, A new local search strategy for memetic multi-objective evolutionary algorithms, IEEE Transactions on Evolutionary Computation 14 (2010) 112–132.

[29] B. Lacroix, D. Molina, F. Herrera, Region based memetic algorithm for real-parameter optimisation, Information Sciences 262 (2014) 15–31.
[30] M. Lastra, D. Molina, J. Benítez, A high performance memetic algorithm for extremely high-dimensional problems, Information Sciences 293 (2015) 35–58.
[31] J. Lee, D.W. Kim, Memetic feature selection algorithm for multi-label classification, Information Sciences 293 (2015) 80–96.
[32] C. Li, S. Yang, T.T. Nguyen, A self-learning particle swarm optimizer for global optimization problems, IEEE Transactions on Systems, Man, and Cybernetics 42 (2011) 627–646.
[33] B. Li, Y.S. Ong, M.N. Le, C.K. Goh, Memetic gradient search, IEEE World Congress on Computational Intelligence (2008) 2894–2901.
[34] R. Luus, Iterative dynamic programming, Chapman and Hall/CRC Press, Boca Raton, FL, (2000).
[35] P.C.H. Ma, K.C.C. Chan, Y. Xin, D.K.Y. Chiu, An evolutionary clustering algorithm for gene expression microarray data analysis, IEEE Transactions on Evolutionary Computation 10 (2006) 296–314.
[36] Y. Maldonado, O. Castillo, P. Melin, Particle swarm optimization of interval type-2 fuzzy systems for FPGA applications, Applied Soft Computing 13 (2013) 496–508.
[37] P. Melin, L. Astudillo, O. Castillo, F. Valdez, M. Garcia, Optimal design of type-2 and type-1 fuzzy tracking controllers for autonomous mobile robots under perturbed torques using a new chemical optimization paradigm, Expert Systems with Applications 40 (2013) 3185–3195.
[38] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 2nd ed., Springer-Verlag, New York, (1994).
[39] H. Modares, A. Alfi, M.M. Fateh, Parameter identification of chaotic dynamic systems through an improved particle swarm optimization, Expert Systems with Applications 37 (2010) 3714–3720.
[40] D. Molina, M. Lozano, C. García-Martínez, F. Herrera, Memetic algorithms for continuous optimization based on local search chains, Evolutionary Computation 18 (2010) 27–63.
[41] N. Monmarche, G. Venturini, M. Slimane, On how Pachycondyla apicalis ants suggest a new search algorithm, Future Generation Computer Systems 16 (2000) 937–946.
[42] P. Moscato, On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms, Technical Report, Caltech Concurrent Computation Program, Report 826, California Institute of Technology, Pasadena, USA, (1989).
[43] F. Neri, J. Toivanen, G.L. Cascella, Y.S. Ong, An adaptive multimeme algorithm for designing HIV multidrug therapies, IEEE/ACM Transactions on Computational Biology and Bioinformatics 4 (2007) 264–278.

[44] J. Nocedal, S.J. Wright, Numerical optimization, Springer, New York, (1999).
[45] Q.H. Nguyen, Y.S. Ong, M.H. Lim, A probabilistic memetic framework, IEEE Transactions on Evolutionary Computation 13 (2009) 604–623.
[46] S.K. Oh, H.J. Jang, W. Pedrycz, A comparative experimental study of type-1/type-2 fuzzy cascade controller based on genetic algorithms and particle swarm optimization, Expert Systems with Applications 38 (2011) 11217–11229.
[47] Y.S. Ong, Artificial intelligence technologies in complex engineering design, Ph.D. dissertation, Sch. Eng. Sci., Univ. Southampton, Southampton, U.K., 2002.
[48] Y.S. Ong, A.J. Keane, Meta-Lamarckian learning in memetic algorithms, IEEE Transactions on Evolutionary Computation 8 (2004) 99–110.
[49] Y.S. Ong, M.H. Lim, N. Zhu, K.W. Wong, Classification of adaptive memetic algorithms: A comparative study, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 36 (2006) 141–152.
[50] Y.S. Ong, M.H. Lim, F. Neri, H. Ishibuchi, Special issue on emerging trends in soft computing: memetic algorithms, Soft Computing 13 (2009) 739–740.

[51] Y.S. Ong, P.B. Nair, K.Y. Lum, Min-max surrogate-assisted evolutionary algorithm for robust aerodynamic design, IEEE Transactions on Evolutionary Computation 10 (2006) 392–404.
[52] W. Pedrycz, Granular computing: analysis and design of intelligent systems, CRC Press/Taylor & Francis, Boca Raton, (2013).
[53] C. Perales-Gravan, R. Lahoz-Beltra, An AM radio receiver designed with a genetic algorithm based on a bacterial conjugation genetic operator, IEEE Transactions on Evolutionary Computation 12 (2008) 129–142.
[54] J.E. Smith, Co-evolving memetic algorithms: A review and progress report, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37 (2007) 6–17.

[55] J.E. Smith, Co-evolving memetic algorithms: A learning approach to robust scalable optimization, IEEE Congr. Evol. Comput., vol. 1, 2003, pp. 498–505.
[56] M.W. Spong, M. Vidyasagar, Robot dynamics and control, John Wiley and Sons, Inc., (1989).
[57] D. Tang, Y. Cai, J. Zhao, Y. Xue, A quantum-behaved particle swarm optimization with memetic algorithm and memory for continuous non-linear large scale problems, Information Sciences 289 (2014) 162–189.
[58] F. Valdez, P. Melin, O. Castillo, An improved evolutionary method with fuzzy logic for combining particle swarm optimization and genetic algorithms, Applied Soft Computing 11 (2011) 2625–2632.
[59] Y. Wang, B. Li, T. Weise, Two-stage ensemble memetic algorithm: function optimization and digital IIR filter design, Information Sciences 220 (2013) 408–424.

[60] H. Wang, I. Moon, S. Yang, D. Wang, A memetic particle swarm optimization algorithm for multimodal optimization problems, Information Sciences 197 (2012) 38–52.
[61] J. Xiao, Z. Michalewicz, L. Zhang, K. Trojanowski, Adaptive evolutionary planner/navigator for mobile robots, IEEE Transactions on Evolutionary Computation 1 (1997) 18–28.
[62] G. Yang, Y. Dong, K.P. Wong, A modified differential evolution algorithm with fitness sharing for power system planning, IEEE Transactions on Power Systems 23 (2008) 514–522.
[63] S. Yang, K. Cheng, M. Wang, D. Xie, L. Jiao, High resolution range-reflectivity estimation of radar targets via compressive sampling and memetic algorithm, Information Sciences 252 (2013) 144–156.
[64] Z. Zhu, Y.S. Ong, M. Dash, Wrapper-filter feature selection algorithm using a memetic framework, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37 (2007) 70–76.
[65] N. Zhu, Y.S. Ong, K.W. Wong, K.T. Seow, Using memetic algorithms for fuzzy modeling, Australian Journal of Intelligent Information Processing Systems 8 (2004) 147–154.

Figure Captions:
Fig. 1. NFEs for dimension 30.
Fig. 2. Rastrigin 2D function.
Fig. 3. LS for selected genes.
Fig. 4. Variation of adaptive parameters during the search process.
Fig. 5. Convergence behavior for F1 with dimension 30 and population size 30.
Fig. 6. Convergence behavior for F2 with dimension 30 and population size 30.
Fig. 7. Convergence behavior for F3 with dimension 30 and population size 30.
Fig. 8. Convergence behavior for F4 with dimension 30 and population size 30.
Fig. 9. Convergence behavior for F5 with dimension 30 and population size 30.
Fig. 10. Convergence behavior for F6 with dimension 30 and population size 30.
Fig. 11. The PMDC motor.
Fig. 12. General block diagram of fuzzy PD controller.
Fig. 13. Input membership functions for fuzzy PD controller (e and ė).
Fig. 14. Output membership functions for fuzzy PD controller.
Fig. 15. Tracking response of permanent magnet DC motor.
Fig. 16. Voltage profile of permanent magnet DC motor.
Fig. 17. Best fitness with error optimization for fuzzy PD controller.
Fig. 18. Best fitness with error and energy optimization for fuzzy PD controller.
Fig. 19. Variation of Kp with error optimization for fuzzy PD controller.

Fig. 20. Variation of Kp with error and energy optimization for fuzzy PD controller.
Fig. 21. Variation of Kd with error optimization for fuzzy PD controller.
Fig. 22. Variation of Kd with error and energy optimization for fuzzy PD controller.
Fig. 23. Tracking response for Duffing system.
Fig. 24. Control effort for Duffing system.
Fig. 25. Best fitness with error optimization for Duffing system.
Fig. 26. Best fitness with error and energy optimization for Duffing system.
Fig. 27. Variation of η with error optimization for Duffing system.
Fig. 28. Variation of η with error and energy optimization for Duffing system.
Fig. 29. Variation of λ with error optimization for Duffing system.
Fig. 30. Variation of λ with error and energy optimization for Duffing system.
Fig. 31. Norm of tracking error for robot manipulator.
Fig. 32. Norm of control effort for robot manipulator.
Fig. 33. Best fitness with error optimization for robot manipulator.
Fig. 34. Best fitness with error and energy optimization for robot manipulator.
