Applied Mathematical Modelling 35 (2011) 1210–1221


System identification and control using adaptive particle swarm optimization

Alireza Alfi a,*, Hamidreza Modares b

a Faculty of Electrical and Robotic Engineering, Shahrood University of Technology, Shahrood 36199-95161, Iran
b Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad 91775-1111, Iran

Article info

Article history: Received 24 May 2009; Received in revised form 2 August 2010; Accepted 10 August 2010; Available online 13 August 2010

Keywords: Particle swarm optimization; Parameter estimation; PID controller; Genetic algorithm

Abstract

This paper presents a methodology for finding optimal system parameters and optimal control parameters using a novel adaptive particle swarm optimization (APSO) algorithm. In the proposed APSO, every particle dynamically adjusts its inertia weight according to feedback taken from the particles' best memories. The main advantages of the proposed APSO are faster convergence speed and better solution accuracy at minimal incremental computational cost. We first apply the proposed algorithm to identify the unknown parameters of a system whose structure is assumed known in advance. Next, based on the identified system, the PID gains are optimally found, again using the proposed algorithm. Two simulated examples are given to demonstrate the effectiveness of the proposed algorithm. Comparison with PSO with linearly decreasing inertia weight (LDW-PSO) and a genetic algorithm (GA) confirms the superiority of the APSO-based approach.

© 2010 Elsevier Inc. All rights reserved.

* Corresponding author. Tel./fax: +98 273 3393116. E-mail address: a_alfi@shahroodut.ac.ir (A. Alfi). doi:10.1016/j.apm.2010.08.008

1. Introduction

System identification is the first and crucial step in the design of a controller. Based on an identified system model, a controller can be designed by various control methods to achieve the required specifications. System identification consists of two parts: the selection of an appropriate model structure and the estimation of the model's parameters. Fortunately, a great deal is known about the structures of most engineering systems and industrial processes, so it is usually possible to derive a specific class of models that best describes the real system. Hence, the system identification problem usually reduces to one of parameter estimation. The least-squares approach is a basic technique often used for parameter estimation and has been applied successfully to both static and dynamic systems. However, it is only suitable when the model structure is linear in the parameters; when it is not, the approach may fail [1]. To address this problem, heuristic optimization techniques such as the genetic algorithm (GA) and particle swarm optimization (PSO) provide a promising alternative to traditional techniques [2,3]. Although GA can find the global minimum, it consumes considerable search time. The PSO algorithm is an alternative with lower computational complexity and better performance than GA [4], and it has recently been applied successfully in many areas [5–14].

Although PSO provides a high speed of convergence in specific problems, it does exhibit some shortcomings: it can become trapped in local optima, and its convergence rate decreases considerably


in the later period of evolution when handling complex or multimodal functions [15,16]. Various attempts have been made to overcome this problem; among them, many approaches enhance the performance of standard PSO (SPSO) by adjusting the inertia weight. The value of the inertia weight is significant for ensuring an optimal trade-off between the exploration and exploitation mechanisms of the swarm. Larger values of the inertia weight enhance exploration by locating promising regions of the search space, whereas smaller values promote local exploitation. The best-known algorithm for controlling the inertia weight is PSO with a linearly decreasing inertia weight (LDW-PSO) [17]. The linearly decreasing weight strategy is the most commonly used and can improve the performance of PSO to some extent, but two problems remain. First, using the same inertia weight for all particles ignores the differences among the particles' performances and thus simulates only a rough model of animal behavior rather than a more precise biological one. Second, since the search process of PSO is nonlinear and highly complicated, a linearly decreasing inertia weight with no feedback from the best fitnesses found by each particle cannot truly reflect the actual search process. To overcome these shortcomings, this paper presents a novel PSO with an adaptive inertia weight that rationally balances the global exploration and local exploitation abilities of SPSO. In the proposed method, the inertia weight is dynamically adapted for every particle by means of a measure called the adjacency index (AI), which indicates whether a particle needs global exploration or local exploitation. A function is then employed to dynamically calculate the value of the inertia weight from the AI.
Compared with SPSO, the proposed APSO has two distinguishing characteristics: (1) to incorporate the differences between particles into PSO, so that it simulates a more precise biological model, the inertia weight varies from particle to particle, and (2) to truly reflect the actual search process, the inertia weight is set according to feedback taken from the particles' best memories. This paper also discusses PID controller design using APSO on the identified system. The three PID control gains, the proportional gain K_p, the integral gain K_i and the derivative gain K_d, are identified by the proposed APSO such that a defined objective function is minimized. Owing to their ease of use, good stability and simple realization, proportional-integral-derivative (PID) controllers have been widely used in the process industries for decades. The key issue for PID controllers is accurate and efficient tuning of the parameters. In practice, controlled systems often exhibit features such as nonlinearity and time delay that make PID tuning complex, and many tuning methods have been proposed. The Ziegler–Nichols (ZN) method is a widely used experimental approach, despite requiring a step input to be applied with the process stopped [18]; it also requires prior knowledge of the plant model, and tuning by the ZN method yields a good, but not optimal, system response. Over the last two decades, many artificial intelligence techniques such as neural networks, fuzzy systems and neuro-fuzzy logic have been applied to the tuning of PID controller parameters [19,20]. Besides these methods, heuristic methods such as GA and PSO have recently received much interest for their efficiency in searching for globally optimal solutions [19].
Although some previous papers have applied PSO to system identification [3,7,21] and PID tuning [8–12], they address the two tasks separately: some work only on system identification, while the others tune the PID assuming the system model is known. The goal of this paper is to employ the proposed APSO for both parameter identification and control of nonlinear systems simultaneously. We show that the proposed APSO is a feasible approach to both tasks, and two simulated examples are given to evaluate its performance. The proposed algorithm has superior features, including stable convergence characteristics and good computational efficiency, and fast tuning of optimal PID controller parameters yields high-quality solutions.

2. Nonlinear system identification

This section presents the system description and the basic information required for the nonlinear dynamic systems under study. In this paper, a class of discrete nonlinear systems described by the following state-space model is considered:

x(k + 1) = f(k, x(k), u(k), P_1),
y(k) = g(k, x(k), u(k), P_2),    (1)

where u ∈ R is the input of the system, x ∈ R^n denotes the state vector, y ∈ R is the output, and P_1 and P_2 are the vectors of system parameters to be identified. Without loss of generality, let θ = [θ_1, θ_2, …, θ_m]^T be a vector collecting all the parameters in P_1 and P_2, where m is the total number of unknown system parameters. When estimating the parameters, the structure of the system is assumed known in advance. As a result, the estimated system can be described as follows:

x̂(k + 1) = f(k, x̂(k), u(k), P̂_1),
ŷ(k) = g(k, x̂(k), u(k), P̂_2),    (2)

where x̂ ∈ R^n and ŷ ∈ R are the state vector and the output of the estimated system, respectively, and P̂_1 and P̂_2 are the vectors of estimated system parameters.


The basic idea of parameter estimation is to compare the system response with that of the parameterized model, using a performance function that measures how well the model response fits the system response. In this way, the parameter estimation problem can be formulated as an optimization problem. In this study, the sum of squared errors (SSE) between the real and estimated responses over a number of given samples is taken as the fitness of the estimated model parameters. Hence, the fitness function is defined as follows:

SSE = Σ_{k=1}^{N} e^2(k) = Σ_{k=1}^{N} (y(k) − ŷ(k))^2,    (3)

where k = 1, …, N indexes the sampling instants, N denotes the length of the data record used for parameter estimation, y(k) and ŷ(k) are the real and estimated values at each sample time, respectively, and e(k) is the error between y(k) and ŷ(k).

Our objective is to determine the system parameters using the proposed APSO such that the value of SSE is minimized, approaching zero as closely as possible. Every particle encodes a possible solution to the identification problem; that is, every particle is an m-dimensional vector containing the m unknown system parameters. The parameters associated with every particle are used to evaluate the SSE criterion of Eq. (3). The particles then fly through the search space, guided by each particle's own experience and the experiences of its companions, to find the optimum set of system parameters.

3. PID controller design

The PID controller is the standard tool for industrial automation. The flexibility of the controller makes it possible to use PID control in many applications, and many control problems can be handled very well by PID control [22]. The continuous control law of the PID controller is

u(t) = K_p e(t) + K_i ∫_0^t e(τ) dτ + K_d de(t)/dt.    (4)

In Eq. (4), e is the error signal between the desired and actual outputs, u is the PID control force, and K_p, K_i and K_d are the proportional, integral and derivative gains, respectively. Applying a trapezoidal approximation to Eq. (4) yields the discrete control law

u(k) = u(k − 1) + K_p [e(k) − e(k − 1)] + K_i (T_s/2) [e(k) + e(k − 1)] + (K_d/T_s) [e(k) − 2e(k − 1) + e(k − 2)],    (5)

where T_s is the sampling period. Determining these three gains so as to meet the required performance is the key issue in a PID control system. For brevity, let θ = [θ_1, θ_2, θ_3]^T = [K_p, K_i, K_d]^T be the control gain vector. Fig. 1 illustrates PID controller design using heuristic algorithms, where y_r is the desired output and y is the system output. In the control process, the objective is to minimize the fitness function, again defined as the sum of squared errors (SSE), which determines the performance of any algorithm. For the PID controller design, the objective function becomes

SSE = Σ_{k=1}^{N} e^2(k) = Σ_{k=1}^{N} (y_r(k) − y(k))^2.    (6)
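To make Eq. (5) concrete, the incremental PID law can be implemented as a small stateful controller that only remembers the previous control value and the last two errors. The following is a minimal sketch (the class and method names are ours, not from the paper):

```python
class DiscretePID:
    """Incremental (velocity-form) PID controller implementing Eq. (5):
    u(k) = u(k-1) + Kp[e(k)-e(k-1)] + Ki(Ts/2)[e(k)+e(k-1)]
           + (Kd/Ts)[e(k)-2e(k-1)+e(k-2)]."""

    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.u_prev = 0.0    # u(k-1)
        self.e_prev = 0.0    # e(k-1)
        self.e_prev2 = 0.0   # e(k-2)

    def step(self, e):
        """Return the control force u(k) for the current error e(k)."""
        u = (self.u_prev
             + self.kp * (e - self.e_prev)
             + self.ki * self.ts / 2.0 * (e + self.e_prev)
             + self.kd / self.ts * (e - 2.0 * self.e_prev + self.e_prev2))
        # shift the state: the RHS is evaluated before any assignment
        self.u_prev, self.e_prev2, self.e_prev = u, self.e_prev, e
        return u
```

Wrapping the three gains as θ = [K_p, K_i, K_d] then turns controller tuning into the search problem of Eq. (6): each candidate θ is simulated in closed loop and scored by its SSE.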

4. The proposed APSO

Unlike other population-based evolutionary algorithms, PSO is motivated by the simulation of social behavior, and each candidate solution is associated with a velocity. The candidate solutions, called "particles", then "fly" through the search space. In the beginning, a swarm of particles is created. The velocity of every particle is then constantly adjusted according to the particle's own experience and the experiences of its companions. It is expected that

Fig. 1. A PID controller design (an optimization algorithm tunes the gains K_p, K_i, K_d of a PID controller acting on the error e = y_r − y in closed loop with the plant).


the particles will move towards better solution areas. The fitness of every particle is evaluated according to the objective function of the optimization problem. At each iteration, the velocity of every particle is updated as follows:

v_i^{t+1} = ω v_i^t + c_1 r_1 (pbest_i^t − x_i^t) + c_2 r_2 (gbest^t − x_i^t),    (7)

where x_i^t is the position of particle i at iteration t, pbest_i^t is the best previous position of this particle (memorized by every particle), gbest^t is the best previous position among all particles at iteration t (memorized in a common repository), ω is the inertia weight, c_1 and c_2 are acceleration coefficients known as the cognitive and social parameters, respectively, and r_1 and r_2 are two random numbers in the range [0, 1]. After calculating the velocity, the new position of every particle is obtained from

x_i^{t+1} = x_i^t + v_i^{t+1}.    (8)
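The update equations (7) and (8) amount to only a few lines per particle per iteration. Below is a minimal, generic sketch of such a PSO loop with the linearly decreasing inertia weight of LDW-PSO; this is our own illustrative implementation, and the function and parameter names are assumptions, not the authors' code:

```python
import random

def ldw_pso(fitness, dim, lo, hi, swarm=20, iters=200,
            w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimize `fitness` over [lo, hi]^dim using Eqs. (7)-(8);
    the inertia weight decreases linearly from w_max to w_min."""
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in x]                 # best position seen by each particle
    pbest_f = [fitness(p) for p in x]
    g = min(range(swarm), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # best position seen by the swarm
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])   # cognitive term of Eq. (7)
                           + c2 * r2 * (gbest[d] - x[i][d]))     # social term of Eq. (7)
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))    # Eq. (8), kept in bounds
            f = fitness(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f
```

For example, `ldw_pso(lambda p: sum(t * t for t in p), dim=2, lo=-5, hi=5)` drives the swarm to the minimum of a quadratic bowl at the origin.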

The PSO algorithm repeatedly applies the update equations above until a pre-specified number of generations G is reached. Although SPSO provides a high speed of convergence in specific problems, it has shortcomings: it searches poorly at a fine grain because it lacks a velocity control mechanism [16]. Many approaches therefore attempt to improve SPSO through a variable inertia weight, which is critical for the performance of PSO since it balances the global exploration and local exploitation abilities of the swarm. A large inertia weight facilitates exploration but slows convergence; conversely, a small inertia weight speeds convergence but can lead to local optima. Hence, linearly and nonlinearly decreasing inertia weights have been proposed in the literature [23–29]. Although these algorithms improve the performance of PSO, they cannot truly reflect the actual search process, because they take no feedback from how far each particle's fitness is from the (estimated or real) optimal value when that value is known in advance. In fact, a particle whose fitness is far from the real optimal value still needs a large velocity to search the solution space globally, so its inertia weight should be set to a large value; conversely, a particle near the optimum needs only small movements, so its inertia weight should be set to a small value to facilitate finer local exploration. Furthermore, using the same inertia weight for all particles ignores the differences among the particles' performances and simulates only a rough model of animal behavior rather than a more precise biological one. During the search every particle dynamically changes its position, so every particle is located in a complex environment and faces a different situation.

Therefore, every particle may need a different trade-off between global and local search abilities. Motivated by the above, in this paper the inertia weight is dynamically adapted for every particle via a measure called the adjacency index (AI), which characterizes the nearness of the individual fitness to the real optimal solution. Based on this index, every particle can decide how to adjust its inertia weight. For this purpose, the velocity update rule in the proposed APSO is given by

v_i^{t+1} = ω_i^t v_i^t + c_1 r_1 (pbest_i^t − x_i^t) + c_2 r_2 (gbest^t − x_i^t).    (9)

To calculate the inertia weight of the ith particle at iteration t, denoted by ω_i^t in Eq. (9), the adjacency index (AI) is first defined as follows:

AI_i^t = |F(pbest_i^1) − F_KN| / |F(pbest_i^t) − F_KN| − 1,    (10)

where F(pbest_i^t) is the fitness of the best previous position of the ith particle and F_KN is the known real optimal solution value. Note that the AI varies from particle to particle and is set according to feedback taken from the particles' best memories. A small AI_i means that the fitness of the ith particle is far from the real optimal value, so the particle needs strong global exploration and therefore a large inertia weight. Conversely, a large AI_i means that the ith particle is highly adjacent to the real optimum, so it needs strong local exploitation and therefore a small inertia weight. Hence, the inertia weight of every particle at iteration t is calculated dynamically by the following formula:

ω_i^t = 1 / (1 + e^{−1/(a·AI_i^t)}),    (11)

where a is a positive constant in the range (0, 1]. Under the assumptions and definitions above, it can be concluded that 0.5 ≤ ω_i < 1. The value of the inertia weight of every particle at iteration t clearly depends on the parameter a, which controls how quickly the inertia weight decreases as AI grows. To observe the impact of a on the variation of the inertia weight, its value is varied from 0.1 to 1 with step size 0.1; Fig. 2 depicts the inertia weight versus AI for the different values of a. According to Eqs. (10) and (11), during the search the particles attain different fitnesses, and as a result they get different values of AI and hence of the inertia weight. While the fitness of a particle is far from the real global optimum, its AI is small (low adjacency) and its inertia weight is large, yielding strong global search abilities to


Fig. 2. Inertia weight versus AI with different values of a.

locate the promising search areas. Meanwhile, when the fitness of a particle gets near the real global optimum, its AI is large (high adjacency) and its inertia weight is set small, in proportion to the nearness of its best fitness to the optimal value, to facilitate finer local exploration and thus accelerate convergence.

5. Simulation results and comparison

This section demonstrates the feasibility of APSO-based system identification and PID control design for two cases described in [30]. The results are compared with those obtained from LDW-PSO and GA. In LDW-PSO, c_1 = c_2 = 2 and ω decreases linearly from 0.9 to 0.4. In APSO, ω is determined by Eq. (11). In GA, the crossover probability P_c and the mutation probability P_m are set to 0.8 and 0.1, respectively. To make the comparison fair, the same computational effort is used in LDW-PSO, APSO and GA: the maximum generation, population size and search range of the parameters in GA are the same as those in LDW-PSO and APSO. To observe the impact of a on the performance of APSO, experiments with different values of a, 20 particles and 200 iterations were conducted on the parameter estimation of the following examples. For each setting, 20 runs of the algorithm were performed. Table 1 lists the mean best fitness values averaged over the 20 runs. As Table 1 shows, values of a in the range [0.2, 0.8] all lead to acceptable performance; in this paper, a is set to 0.5.

Example 1. An unstable nonlinear system is described by [30]

x_1(k + 1) = θ_1 x_1(k) x_2(k),    x_1(0) = 1,
x_2(k + 1) = θ_2 x_1^2(k) + u(k),    x_2(0) = 1,    (12)
y(k) = θ_3 x_2(k) − θ_4 x_1^2(k).

The real system parameters of the above equation are assumed to be θ = [θ_1, θ_2, θ_3, θ_4] = [0.5, 0.3, 1.8, 0.9].

5.1. Parameter estimation

The objective of parameter identification is to determine θ as accurately as possible. The relevant settings used in the optimization algorithms are:

Table 1. The mean best fitness values (SSE) with different values of a.

a     Example 1        Example 2
0.1   3.4336 × 10⁻¹⁸   4.3721 × 10⁻²²
0.2   6.7265 × 10⁻²¹   6.8930 × 10⁻²⁵
0.3   9.3181 × 10⁻²²   4.0960 × 10⁻²⁵
0.4   9.2890 × 10⁻²²   4.3784 × 10⁻²⁵
0.5   2.6248 × 10⁻²¹   6.3573 × 10⁻²⁶
0.6   2.5584 × 10⁻²²   9.1982 × 10⁻²⁵
0.7   7.0130 × 10⁻²⁰   9.3265 × 10⁻²⁶
0.8   1.1220 × 10⁻²⁰   1.1431 × 10⁻²⁴
0.9   1.0984 × 10⁻¹⁸   7.5995 × 10⁻²⁴
1     6.0226 × 10⁻¹⁹   8.5882 × 10⁻²³


θ_1 ∈ [0, 2], θ_2 ∈ [0, 2], θ_3 ∈ [0, 2], θ_4 ∈ [0, 2], S = 20, N = 8, G = 200,

where S is the population size, N the number of samples and G the maximum number of generations.
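Putting the pieces together, the fitness evaluation for this example is simply a simulation of Eq. (12) followed by the SSE of Eq. (3). The sketch below shows the model, the fitness, and a per-particle inertia-weight rule. Note two assumptions on our part: the paper does not state the input signal used to generate the identification data (we take u(k) = 0), and the AI/weight formulas are our reading of Eqs. (10) and (11), so they should be checked against the original paper:

```python
import math

TRUE_THETA = [0.5, 0.3, 1.8, 0.9]

def simulate(theta, n=8, u=lambda k: 0.0):
    """Simulate the Example 1 model of Eq. (12) for n samples.
    The input u is an assumption; the paper does not specify it."""
    t1, t2, t3, t4 = theta
    x1, x2 = 1.0, 1.0
    ys = []
    for k in range(n):
        ys.append(t3 * x2 - t4 * x1 ** 2)             # y(k)
        x1, x2 = t1 * x1 * x2, t2 * x1 ** 2 + u(k)    # update uses the old x1, x2
    return ys

def sse(theta, y_real):
    """Eq. (3): sum of squared output errors over the data record."""
    return sum((a - b) ** 2 for a, b in zip(y_real, simulate(theta, len(y_real))))

def inertia_weight(f_pbest_initial, f_pbest, f_kn=0.0, a=0.5, eps=1e-30):
    """Per-particle inertia weight: our reconstruction of Eqs. (10)-(11),
    AI = |F(pbest^1) - F_KN| / |F(pbest^t) - F_KN| - 1 and
    w = 1 / (1 + exp(-1 / (a * AI))). AI grows as the particle's best
    fitness approaches F_KN, and w then shrinks from ~1 towards 0.5."""
    ai = abs(f_pbest_initial - f_kn) / (abs(f_pbest - f_kn) + eps) - 1.0
    ai = max(ai, 0.0)                                  # guard against round-off
    return 1.0 / (1.0 + math.exp(-1.0 / (a * ai + eps)))
```

These three functions slot into an ordinary PSO loop: each particle carries a candidate θ, its fitness is sse(θ, y_real) with y_real = simulate(TRUE_THETA), and its velocity update uses its own inertia_weight(...) instead of a shared ω.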

For the APSO, the known optimal value F_KN is zero. Table 2 lists the mean of the estimated parameters obtained by GA [30], LDW-PSO and APSO, with each algorithm run 50 times independently. As Table 2 shows, all of the parameters estimated by APSO are very close to the true values. Table 3 shows the worst, mean, best and standard deviation (Std.) of the SSE over the 50 runs for each algorithm; the worst result obtained by APSO is better than even the best results obtained by GA and LDW-PSO. Figs. 3–6 depict the success of the APSO optimization process compared with the other algorithms for the identified parameters θ_1, θ_2, θ_3 and θ_4, respectively. Moreover, the convergence of the optimal SSE at each generation

Table 2. Estimated parameters obtained using GA [30], LDW-PSO and APSO for Example 1.

                 θ_1      θ_2      θ_3      θ_4
Real parameters  0.5000   0.3000   1.8000   0.9000
GA               0.4853   0.2987   1.7971   0.8802
LDW-PSO          0.5002   0.3001   1.8001   0.9001
APSO             0.5000   0.3000   1.8000   0.9000

Table 3. Performance comparison (SSE) of GA [30], LDW-PSO and APSO for Example 1.

          Worst            Mean             Best             Std.
GA        1.6408           0.7456           0.1734           0.6241
LDW-PSO   7.8441 × 10⁻⁸    3.7530 × 10⁻¹²   8.3425 × 10⁻¹⁴   4.4612 × 10⁻⁹
APSO      8.5327 × 10⁻²¹   4.9653 × 10⁻²²   2.1954 × 10⁻²⁴   4.2636 × 10⁻²²

Fig. 3. Comparison of trajectories of parameter h1.

Fig. 4. Comparison of trajectories of parameter h2.


Fig. 5. Comparison of trajectories of parameter h3.

Fig. 6. Comparison of trajectories of parameter h4.

is plotted in Fig. 7. It confirms the superiority of the APSO algorithm in terms of convergence speed, without premature convergence.

5.2. PID controller design

A PID controller for this system is now designed using the proposed APSO, based on the above estimation results. In this simulation, the control objective is to regulate the plant output y to the desired output y_r = 2. The population size, number of sampling steps and maximum generation are set to 10, 1000 and 300, respectively. Moreover, for the APSO, the known optimal value F_KN is zero. Fig. 8 shows the convergence trajectories of the SSE of the PID controller parameters obtained by the different algorithms. The search space for the PID gains is defined by

K_p ∈ [0, 1], K_i ∈ [0, 1], K_d ∈ [0, 1].

Tables 4 and 5 show the PID controller parameters and the SSE obtained by GA, LDW-PSO and APSO, with each algorithm run 50 times independently. It is evident that the proposed algorithm has superior features, including stable convergence characteristics, good computational efficiency and accuracy; fast tuning of optimal PID controller parameters yields high-quality solutions.

Fig. 7. Comparison of convergence of objective function for Example 1.


Fig. 8. Comparison of convergence of objective function for PID controller design of Example 1.

Table 4 Identified parameters of PID controller using GA [30], LDW-PSO and APSO for Example 1. PID parameters

Kp

Ki

Kd

GA LDW-PSO APSO

0.8411 0.8221 0.8224

0.9929 1.0305 1.0044

0.0097 0.0577 0.0426

Table 5. Performance comparison (SSE) of the PID controllers obtained using GA [30], LDW-PSO and APSO for Example 1.

          Worst     Mean      Best      Std.
GA        1.3846    1.328     1.2904    0.9045
LDW-PSO   1.2064    1.134     1.1251    0.0382
APSO      1.1075    1.1031    1.0976    0.0045

Example 2. Consider a first-order time-delay system whose transfer function is given by [30]

G(s) = Y(s)/U(s) = k e^{−Ts} / (τs + 1),    (13)

where τ is the time constant, T is the time delay and k is the steady-state gain. The real values are assumed to be k = 10, τ = 5 and T = 9. With a sampling time of 0.1, the system of Eq. (13) can be converted to the discrete dynamic equation

x(k + 1) = (1 − 1/(10τ)) x(k) + (k/(10τ)) u(k − 10T),    x(0) = 0,
y(k) = x(k).    (14)

5.3. Parameter estimation

Taking θ = [θ_1, θ_2, θ_3] = [k, τ, T] as the vector of estimated parameters, the objective of parameter identification is to determine θ as accurately as possible. In these simulations the control input u(k) = 1 is used. The relevant settings used in the optimization algorithms are:

θ_1 ∈ [0, 20], θ_2 ∈ [0, 20], θ_3 ∈ [0, 20], N = 350, S = 20, G = 200.
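Under the settings above, the identification data come from simulating the discrete model of Eq. (14). A minimal sketch is given below, with a sampling time of 0.1 s (which is what the 10τ and 10T factors in Eq. (14) imply); the function name is ours:

```python
def simulate_fopdt(k_gain, tau, delay, n=350, ts=0.1, u=lambda k: 1.0):
    """Simulate Eq. (14): a first-order lag with steady-state gain k_gain,
    time constant tau and time delay `delay` (both in seconds), discretized
    with sampling time ts. Returns y(0), ..., y(n-1)."""
    d = int(round(delay / ts))             # delay expressed in samples (10T here)
    x = 0.0
    ys = []
    for step in range(n):
        ys.append(x)                       # y(k) = x(k)
        u_delayed = u(step - d) if step >= d else 0.0
        x = (1.0 - ts / tau) * x + (ts * k_gain / tau) * u_delayed
    return ys
```

The identification fitness of a candidate θ = [k, τ, T] is then the SSE between simulate_fopdt(10, 5, 9) and simulate_fopdt(θ[0], θ[1], θ[2]) over the N = 350 samples, exactly as in Eq. (3).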

For the APSO, the known optimal value F_KN is zero. A comparison between GA [30], LDW-PSO and APSO over 50 runs of each algorithm is listed in Tables 6 and 7. These results again show the advantage of APSO over the other algorithms for parameter estimation.

Table 6. Estimated parameters obtained using GA [30], LDW-PSO and APSO for Example 2.

                 k        τ       T
Real parameters  10.000   5.000   9.000
GA               10.001   5.001   9.008
LDW-PSO          10.001   5.000   9.000
APSO             10.000   5.000   9.000

Table 7. Performance comparison (SSE) of GA [30], LDW-PSO and APSO for Example 2.

          Worst            Mean             Best             Std.
GA        0.0853           0.0498           0.0034           0.0456
LDW-PSO   2.8566 × 10⁻¹⁰   3.8574 × 10⁻¹³   9.8647 × 10⁻¹⁶   7.7045 × 10⁻¹²
APSO      6.9730 × 10⁻²⁴   5.2579 × 10⁻²⁶   2.0784 × 10⁻²⁹   8.9638 × 10⁻²⁵

Figs. 9–11 illustrate the convergence trajectories of the system parameters k, τ and T, respectively, and Fig. 12 shows the convergence of the SSE. They again confirm the superiority of the APSO algorithm in terms of convergence speed, without premature convergence.

Fig. 9. Comparison of trajectories of parameter k.

Fig. 10. Comparison of trajectories of parameter τ.

Fig. 11. Comparison of trajectories of parameter T.


Fig. 12. Comparison of convergence of objective function for Example 2.

5.4. PID controller design

The desired output y_r = 1 is used in this simulation. The population size, number of sampling steps and maximum generation are set to 10, 1000 and 300, respectively. In addition, for the APSO, the real optimal value F_KN is set to the number of samples in the interval 0–5 s: the time delay keeps the output at zero over those samples, so the error at each of them is one. The search space for the PID gains is defined by

K_p ∈ [0, 1], K_i ∈ [0, 1], K_d ∈ [0, 1].

According to the ZN tuning technique [11], the PID controller parameters K_p, K_i and K_d are obtained as 0.0667, 0.0037 and 0.3002, respectively, with a corresponding SSE of 135.78. The PID controller parameters and SSE obtained by GA, LDW-PSO and APSO are presented in Tables 8 and 9, respectively; for each algorithm, 50 independent runs are considered. The

Table 8. Identified PID controller parameters using GA [30], LDW-PSO and APSO for Example 2.

          K_p      K_i      K_d
GA        0.0836   0.0103   0.2885
LDW-PSO   0.1083   0.0114   0.2280
APSO      0.1592   0.0131   0.2089

Table 9. Performance comparison (SSE) of the PID controllers obtained using GA [30], LDW-PSO and APSO for Example 2.

          Worst     Mean      Best      Std.
GA        98.0573   92.4137   87.5864   5.1977
LDW-PSO   80.4378   78.9505   75.0963   2.8950
APSO      72.0953   71.1085   69.9905   1.2095

Fig. 13. Comparison of convergence of objective function for PID controller design of Example 2.


Fig. 14. The performance of the PID controllers obtained from different algorithms for Example 2.

convergence trajectories of the SSE and the step responses under the optimal PID controller parameters obtained by the different algorithms are shown in Figs. 13 and 14, respectively. The simulation results again exhibit the superiority of APSO-based PID tuning.

6. Conclusion

In this paper, a complete design for identifying system parameters and PID controller gains based on the proposed APSO algorithm was presented. In APSO, an adaptive inertia weight is introduced to incorporate the differences between particles into PSO, so that it simulates a more precise biological model, and to truly reflect the actual search process by setting the inertia weight according to feedback taken from the particles' best memories. To show the validity of the proposed algorithm, two cases were considered and the simulation results obtained from the GA, LDW-PSO and APSO algorithms were compared. They clearly reveal the effectiveness of the proposed APSO in identifying system parameters and PID control gains. Since APSO has higher accuracy and faster convergence than GA and LDW-PSO in offline identification, future work will apply the APSO algorithm to online identification and control of nonlinear systems.

References

[1] K.J. Astrom, B. Wittenmark, Adaptive Control, Addison-Wesley, Massachusetts, 1995.
[2] K. Valarmathi, D. Devaraj, T.K. Radhakrishnan, Real-coded genetic algorithm for system identification and controller tuning, Appl. Math. Model. (2008), doi:10.1016/j.apm.2008.11.006.
[3] H. Modares, A. Alfi, M.M. Fateh, Parameter identification of chaotic dynamic systems through an improved particle swarm optimization, Expert Syst. Appl. 37 (2010) 3714–3720.
[4] R. Eberhart, Y. Shi, Comparison between genetic algorithms and particle swarm optimization, Lect. Notes Comput. Sci. (1998) 611–618.
[5] J.P. Coelho, P.B.d.M. Oliveira, J.B. Cunha, Greenhouse air temperature predictive control using the particle swarm optimisation algorithm, Comput. Electron. Agric. 49 (3) (2005) 330–344.
[6] L.D.S. Coelho, A.A.R. Coelho, Model-free adaptive control optimization using a chaotic particle swarm approach, Chaos Solitons Fract. 41 (4) (2009) 2001–2009.
[7] F.A. Guerra, L.D.S. Coelho, Multi-step ahead nonlinear identification of Lorenz's chaotic system using radial basis neural network with learning by clustering and particle swarm optimization, Chaos Solitons Fract. 35 (5) (2008) 967–979.
[8] C.C. Kao, C.W. Chuang, R.F. Fung, The self-tuning PID control in a slider-crank mechanism system by applying particle swarm optimization approach, Mechatronics 16 (8) (2006) 513–522.
[9] L.D.S. Coelho, D.L.A. Bernert, PID control design for chaotic synchronization using a tribes optimization approach, Chaos Solitons Fract. 42 (1) (2009) 634–640.
[10] M. Zamani, M. Karimi-Ghartemani, N. Sadati, M. Parniani, Design of a fractional order PID controller for an AVR using particle swarm optimization, Control Eng. Pract. 17 (12) (2009) 1380–1387.
[11] T.H. Kim, I. Maruta, T. Sugie, Robust PID controller tuning based on the constrained particle swarm optimization, Automatica 44 (4) (2008) 1104–1110.
[12] V. Mukherjee, S.P. Ghoshal, Intelligent particle swarm optimized fuzzy PID controller for AVR system, Electr. Power Syst. Res. 77 (12) (2007) 1689–1698.
[13] R. Mansouri, M. Bettayeb, T. Djamah, S. Djennoune, Vector fitting fractional system identification using particle swarm optimization, Appl. Math. Comput. 206 (2) (2008) 510–520.
[14] Y.L. Lin, W.D. Chang, J.G. Hsieh, A particle swarm optimization approach to nonlinear rational filter modeling, Expert Syst. Appl. 34 (2008) 1194–1199.
[15] M. Clerc, J. Kennedy, The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (1) (2002) 58–73.
[16] P.J. Angeline, Evolutionary optimization versus particle swarm optimization: philosophy and performance differences, Lect. Notes Comput. Sci. 1447 (1998) 601–610.
[17] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE Conference on Evolutionary Computation, USA, 1998, pp. 69–73.
[18] F.G. Shinskey, Process Control Systems: Application, Design and Tuning, McGraw-Hill, 1996.
[19] A. Visioli, Tuning of PID controllers with fuzzy logic, Proc. IEE – Control Theory Appl. 148 (1) (2001) 1–8.
[20] T.L. Seng, M.B. Khalid, R. Yusof, Tuning of a neuro-fuzzy controller by genetic algorithm, IEEE Trans. Syst. Man Cybern. B 29 (1999) 226–236.
[21] H. Modares, A. Alfi, M.B. Naghibi Sistani, Parameter estimation of bilinear systems based on an adaptive particle swarm optimization, Eng. Appl. Artif. Intell. (2010), doi:10.1016/j.engappai.2010.05.003.
[22] K.J. Astrom, T. Hagglund, PID Controllers: Theory, Design and Tuning, International Society for Measurement and Control, 1995.
[23] J. Kennedy, R.C. Eberhart, Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, San Francisco, 2001.


[24] Y.P. Chang, C.N. Ko, A PSO method with nonlinear time-varying evolution based on neural network for design of optimal harmonic filters, Expert Syst. Appl. 36 (2009) 6809–6816.
[25] B. Jiao, Z. Lian, X.A. Gu, A dynamic inertia weight particle swarm optimization algorithm, Chaos Solitons Fract. 37 (2008) 698–705.
[26] X. Yang, J. Yuan, J. Yuan, H. Mao, A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. 189 (2007) 1205–1213.
[27] A. Chatterjee, P. Siarry, Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization, Comput. Oper. Res. 33 (2006) 859–871.
[28] Y. Shi, R.C. Eberhart, Parameter selection in particle swarm optimization, in: Proceedings of the Seventh Annual Conference on Evolutionary Programming, New York, 1998, pp. 591–600.
[29] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (3) (2004) 240–255.
[30] W.D. Chang, Nonlinear system identification and control using a real-coded genetic algorithm, Appl. Math. Model. 31 (2007) 541–550.
