Proceedings of the International Conference on Complex Systems and Applications. Copyright © 2006 Watam Press

Differential Evolution: An Efficient Method in Optimal PID Tuning and On–line Tuning

Fei Gao
Department of Mathematics, School of Science, Wuhan University of Technology, Wuhan, Hubei 430070, China
[email protected]

Hengqing Tong
Department of Mathematics, School of Science, Wuhan University of Technology, Wuhan, Hubei 430070, China
[email protected]

Abstract— The PID controller is an extremely important type of controller. In this paper the Differential Evolution (DE) technique is successfully applied to system PID tuning and on–line tuning as a novel technique for optimal adaptive control. The details of applying the proposed method are given, and the experiments show that the proposed strategy is effective and robust.

I. Introduction

The PID algorithm was devised in the 1940s and remains remarkably useful and applicable over a large range of process challenges. PID controllers are used to control process variables ranging from fluid flow, level, pressure and temperature to consistency and density. The PID algorithm is robust and easily understood, and can provide excellent control performance despite the varied dynamic characteristics of process plants [1]. Normally PID algorithms execute on PLCs (Programmable Logic Controllers), DCSs (Distributed Control Systems), or single-loop, stand-alone controllers, and PID is also the basis for many advanced control algorithms and strategies [2]. In order for control loops to work properly, the PID loop must be properly tuned. Standard methods for tuning loops, and criteria for judging the loop tuning, have been used for many years, but should be re-evaluated for use on modern digital control systems [1, 2]. While the basic algorithm has been unchanged for many years and is used in all distributed control systems, its actual digital implementation has changed, and differs from one system to another and from commercial equipment to academia. For many years a variety of methods have been used to determine optimal PID parameters, such as hill-climbing, gradient methods, simplex methods, and expert systems. Though these methods can perform well in optimization, they also have disadvantages, such as sensitivity to initial values, convergence to local optima, and the hard work of mining the required knowledge data. Evolutionary algorithms (EAs) is an umbrella term used to describe computer-based problem-solving systems with some of the known mechanisms of evolution as key elements. Although simplistic from a biologist's viewpoint,

these algorithms are sufficiently complex to provide robust and powerful adaptive search mechanisms [3, 4]. The Differential Evolution (DE) algorithm is a novel minimization method in EAs, capable of handling non-differentiable, nonlinear and multi-modal objective functions with few, easily chosen control parameters [5, 6, 7]. The crucial idea behind DE is a scheme for generating trial parameter vectors. Basically, DE adds the weighted difference between two population vectors to a third vector. In this way no separate probability distribution has to be used, which makes the scheme completely self-organizing [8, 9, 10]. In this paper, a novel PID controller tuning and on–line tuning approach based on DE is proposed to design robust PID parameters by transforming the PID controller problems into corresponding optimization problems.

The rest of this paper is organized as follows. In Section II the main concepts of the PID controller and the transformation are introduced. Section III gives the main idea of DE. Details of applying DE to PID control and experimental results are reported and analyzed in Section IV. The paper concludes with Section V.

II. Optimal PID Tuning and On–line Tuning

A. The Main Concept of PID

PID stands for proportional–integral–derivative. The controller response combines three response mechanisms as a whole: the proportional response, proportional to the gap between the reading and the setpoint; the integral response, proportional to the integral of the difference between the past and present readings and the setpoint; and the derivative response, proportional to the rate of change of the reading. By adjusting the weights on the three responses, one can almost always ensure stable, fast-reacting control dynamics [1]. The PID controller is a three-term linear controller; it acts on the control deviation error(t) = rin(t) − yout(t) between the desired input value rin(t) and the actual output yout(t) with the control law

u(t) = kp ( error(t) + (1/T1) ∫_0^t error(t) dt + Td · d error(t)/dt )    (1)

or, in another, transfer-function form,

G(s) = U(s)/E(s) = Kp ( 1 + 1/(T1 s) + TD s )    (2)

so that the control signal u equals the proportional gain Kp times the magnitude of the error, plus the integral gain KI = Kp/T1 times the integral of the error, plus the derivative gain Kd = Kp × TD times the derivative of the error. The signal u is sent to the plant (the system to be controlled), and a new output yout is obtained. This new output yout(t) is sent back to the sensor to find the new error signal error(t). The PID controller (Fig. 1) takes this new error signal and computes its derivative and its integral again, and the process goes on and on.

Fig. 1. Principle of PID

The proportional control Kp has the effect of reducing the rise time, and will reduce, but never eliminate, the steady-state error. The integral control KI has the effect of eliminating the steady-state error, but it may make the transient response worse. The derivative control Kd has the effect of increasing the stability of the system, reducing the overshoot, and improving the transient response. The effects of the controls Kp, Kd and KI depend on each other [2].

B. PID Controller Tuning

Let the system to be controlled be the transfer function

G(s) = 400 / (s^2 + 50s)    (3)

The goal of the PID controller is to show how each of Kp, Kd and KI contributes to obtaining a fast rise time, minimum overshoot and no steady-state error. To gain satisfactory dynamic properties of the process, the time integral of the absolute value of the error signal error(t) is chosen as the objective function to be minimized. To avoid too much control effort, the square of the control signal u(t) is added to the objective function. That is,

J = ∫_0^∞ ( w1 |e(t)| + w2 u^2(t) ) dt + w3 · tu    (4)

where w1, w2, w3 are the weight values, u(t) is the control signal output, and tu is the rise time. To avoid overshoot, a penalty is taken in the objective function when overshoot occurs. Then (4) is amended as below: if ey(t) < 0, then

J = ∫_0^∞ ( w1 |e(t)| + w2 u^2(t) + w4 |ey(t)| ) dt + w3 · tu    (5)

where w4 is a weight value subject to w4 ≫ w1, ey(t) = y(t) − y(t − 1), and y(t) is the output of the controlled system. By minimizing the objective function J, a good combination of Kp, Kd and KI is obtained to achieve better control.

C. PID Controller On–line Tuning

The main concept of on–line PID tuning is tuning the PD parameters at each sampling time. Take the system (3) for instance: let error(i) be the error of parameter combination i at time k, and let de(i) be the rate of change of combination i's tracking error. The objective function to be optimized is

J(i) = αP × |error(i)| + βP × |de(i)|    (6)
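The control law (1) and the cost (4) can be sketched in discrete time as follows. This is an illustrative sketch, not the authors' code: the Euler step size, simulation horizon and the gain values in the usage line are assumptions chosen only for demonstration, and the rise-time term w3 · tu of (4) is left out for brevity.

```python
def simulate_pid(kp, ki, kd, dt=1e-4, t_end=0.1, rin=1.0):
    """Discrete PID loop, eq. (1), on the plant G(s) = 400/(s^2 + 50s), eq. (3).

    The plant is integrated with explicit Euler: ydd = 400*u - 50*yd.
    Returns the output trajectory and the cost J of eq. (4) without
    the rise-time term w3*tu.
    """
    w1, w2 = 0.999, 0.001            # weights from Section IV
    y, yd = 0.0, 0.0                 # plant state: output and its derivative
    integral, prev_e = 0.0, 0.0      # controller state
    ys, J = [], 0.0
    for _ in range(int(t_end / dt)):
        e = rin - y                                           # error(t) = rin(t) - yout(t)
        integral += e * dt
        u = kp * e + ki * integral + kd * (e - prev_e) / dt   # eq. (1)
        prev_e = e
        yd += (400.0 * u - 50.0 * yd) * dt                    # plant step, eq. (3)
        y += yd * dt
        ys.append(y)
        J += (w1 * abs(e) + w2 * u * u) * dt                  # running cost, eq. (4)
    return ys, J

ys, J = simulate_pid(kp=10.0, ki=0.5, kd=0.25)   # assumed example gains
```

A tuning method such as DE would minimize J over candidate (kp, ki, kd) vectors; a smaller J indicates a faster response with less control effort.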

where αP and βP are the weight values. To avoid overshoot, a penalty is taken in the objective function when overshoot occurs: if error(i) < 0, then J(i) = J(i) + 100 |error(i)|. The problem of on–line PID tuning is thus transformed into the minimization of a function. To reduce the blindness of the initial optimization, the computational effort and the bounds of the parameters to be optimized, a team of kp, kd is selected empirically.

III. The Main Concept of the Differential Evolution Algorithm

In mathematics, optimization is the discipline concerned with finding the maxima and minima of functions, possibly subject to constraints. An optimization problem is defined as a computational problem in which the object is to find the best of all possible solutions [11].

Definition 1. The general numerical optimization problem can be defined as

min F(x), x ∈ S    (7)

where F(x) is the objective function and x = (x1, ..., xi, ..., xn) ∈ S ⊆ R^n.

DE evolves a population of M candidate vectors X_i^(G), i = 1, ..., M, in generation G. For each individual X_i^(G), a mutant vector V = (V1, V2, ..., Vn) is generated by adding the weighted difference between two population vectors to a third one:

V = X_r1^(G) + CF · ( X_r2^(G) − X_r3^(G) )    (8)

where r1, r2, r3 ∈ {1, ..., M} are mutually different random indices and CF > 0 is a user-defined real parameter, called the mutation constant, which controls the amplification of the difference between two individuals to avoid search stagnation. Following the mutation phase, the crossover operator is applied to X_i^(G), and a trial vector U = (U1, U2, ..., Un) is generated by

U_m = V_m, if rand(0, 1) < CR or m = k; U_m = X_i,m^(G), if rand(0, 1) ≥ CR and m ≠ k    (9)

where m = 1, 2, ..., n, the index k ∈ {1, 2, ..., n} is randomly chosen, and CR ∈ [0, 1] is a user-defined crossover constant [5, 7]. In other words, the trial vector consists of components of the target vector X_i^(G) and of the mutant vector, with at least one component, the randomly chosen index k, taken from the mutant vector. Then comes the replacement phase. To maintain the population size, the fitness values of U and X_i^(G) are compared and the better one is chosen:

X_i^(G+1) = U, if F(U) < F(X_i^(G)); X_i^(G+1) = X_i^(G), otherwise.    (10)
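The DE scheme of eqs. (8)–(10) can be sketched as below. This is a minimal sketch, not the authors' implementation; the sphere test function and the bounds in the usage lines are assumptions chosen only for demonstration.

```python
import random

def de_minimize(F, bounds, M=30, CF=0.5, CR=0.1, Gmax=100):
    """Differential Evolution (DE/rand/1/bin) following eqs. (8)-(10)."""
    n = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(M)]
    for _ in range(Gmax):
        for i in range(M):
            # mutation, eq. (8): weighted difference of two vectors added to a third
            r1, r2, r3 = random.sample([j for j in range(M) if j != i], 3)
            V = [pop[r1][m] + CF * (pop[r2][m] - pop[r3][m]) for m in range(n)]
            # crossover, eq. (9): index k guarantees one component comes from V
            k = random.randrange(n)
            U = [V[m] if (random.random() < CR or m == k) else pop[i][m]
                 for m in range(n)]
            # keep the trial vector inside the search region
            U = [min(max(u, lo), hi) for u, (lo, hi) in zip(U, bounds)]
            # replacement, eq. (10): greedy one-to-one selection
            if F(U) < F(pop[i]):
                pop[i] = U
    return min(pop, key=F)

random.seed(1)  # reproducible run
best = de_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

The control parameters M, CF, CR and Gmax default here to the values used in Section IV.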

IV. Simulations

In this section, the operation of the proposed technique as an optimization method for PID controller tuning and on–line tuning of (3) is illustrated. For the problems discussed, DE was run 100 times independently.

The parameters for PID controller tuning were fixed: the population size was set to M = 30, with the default values CF = 0.5 and CR = 0.1, which have proved valid enough in former studies [9]. The individuals were constrained to the corresponding region for each test problem. For the objective function (4) the maximum number of iterations was Gmax = 100, with kP ∈ [0, 20], ki, kd ∈ [0, 1], w1 = 0.999, w2 = 0.001, w3 = 2.0, w4 = 100. The process of optimizing the objective function (4) and the step response of the PID controller with parameters kP, ki, kd are shown in Fig. 2 and Fig. 3.

The parameters for PID controller on–line tuning were fixed likewise: population size M = 30, with the default values CF = 0.5 and CR = 0.1 [9], and the individuals constrained to the corresponding region for each test problem.


For the objective function (6), the maximum number of iterations in each generation was Gmax = 30, with αP = 0.95, βP = 0.05, kP ∈ [9.0, 12.0], kd ∈ [0.2, 0.3]. The step response of the PID controller with parameters kP, kd, the change of the control u(k), and the changes of kP and kd with t during tuning are shown in Fig. 4, Fig. 5, Fig. 6 and Fig. 7, respectively.
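The on-line cost (6), together with its overshoot penalty, amounts to a few lines of code. The helper name below is hypothetical; its default weights are the values αP = 0.95 and βP = 0.05 used above.

```python
def online_cost(error, de, alpha_p=0.95, beta_p=0.05):
    """Eq. (6): J(i) = alpha_p*|error(i)| + beta_p*|de(i)|, plus the
    penalty J(i) += 100*|error(i)| when overshoot occurs (error < 0)."""
    J = alpha_p * abs(error) + beta_p * abs(de)
    if error < 0:        # output above the setpoint: overshoot
        J += 100.0 * abs(error)
    return J

print(online_cost(0.2, 0.5))    # no overshoot: small cost
print(online_cost(-0.1, 0.5))   # overshoot: heavily penalized
```

At each sampling instant, the candidate (kp, kd) combination with the smallest J(i) is kept.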

Fig. 4. The step response of PID controller tuning on–line

Fig. 5. u(k)'s change

Fig. 6. kP's change

From the figures above we can conclude: in the initial control process (error less than about 0.5), kP rises and kd declines; when the error is larger than 0.5, kP declines and kd rises to prevent the error from varying too fast; and when overshoot occurs, kP rises and kd declines again to reduce the error.

Fig. 7. kd's change

V. Conclusions

An application of DE to PID tuning and on–line tuning has been proposed. From the simulations above, we conclude that DE is efficient and robust for PID tuning and on–line tuning.


Though the DE experiments were performed on system (3), the method can easily be extended to other systems.

Acknowledgment

We thank the anonymous reviewers for their constructive remarks and comments. The work is partially supported by the Chinese NSF Grant No. 30570611 and the Science Foundation Grant No. 02C26214200218 for Technology Creative Research from the Ministry of Science and Technology of China to H. Q. Tong, the Foundation Grant No. XJJ2004113, Project of Educational Research, and the UIRT Project Grants No. A156 and No. A157 granted by Wuhan University of Technology, China.

Accepted March 2006. email: [email protected] http://monotone.uwaterloo.ca/~journal/

References

[1] D&G Sciences, "The PID Control Algorithms," http://www.dgsciences.com/acs54/pid.htm, 2005.
[2] Regents of the University of Michigan, "PID Tutorial," http://www.engin.umich.edu/group/ctm/PID/PID.html, 2005.
[3] R. C. Eberhart, Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," Proceedings of the 2000 Congress on Evolutionary Computation, IEEE Service Center, Piscataway, NJ, pp. 84-88, 2000.
[4] D. Whitley, "An overview of evolutionary algorithms: practical issues and common pitfalls," Information and Software Technology, Vol. 43, No. 14, pp. 817-831, 2001.
[5] R. Storn, K. Price, "Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces," Journal of Global Optimization, Vol. 11, pp. 341-359, 1997.
[6] R. Storn, "Differential Evolution for Continuous Function Optimization," http://www.icsi.berkeley.edu/~storn/code.html, 2005.
[7] K. Price, "An introduction to differential evolution," in New Ideas in Optimization, eds. D. Corne, M. Dorigo, F. Glover, McGraw-Hill, London, UK, pp. 79-108, 1999.
[8] X. F. Xie, W. J. Zhang, D. C. Bi, "Handling equality constraints by adaptive relaxing rule for swarm algorithms," Congress on Evolutionary Computation (CEC), Oregon, USA, pp. 2012-2016, 2004.
[9] F. Gao, H. Q. Tong, "Computing two linchpins of topological degree by a novel differential evolution algorithm," International Journal of Computational Intelligence and Applications, Vol. 5, No. 3, pp. 1-16, 2005.
[10] F. Gao, "Computing unstable period orbits of discrete chaotic systems through differential evolution algorithms based on elite subspace," Xitong Gongcheng Lilun yu Shijian / System Engineering Theory and Practice, Vol. 25, No. 4, pp. 96-102, 2005.
[11] Wikipedia, "Category:Optimization," http://en.wikipedia.org/wiki/Category:Optimization, 2005.
[12] Y. X. Yuan, W. Y. Sun, Optimization Theory and Method, Science Press, Beijing, 1997.
[13] D. Cvijović, J. Klinowski, "Taboo search: an approach to the multiple minima problem," Science, Vol. 267, pp. 664-666, 1995.
[14] A. Dekkers, E. Aarts, "Global optimization and simulated annealing," Mathematical Programming, Vol. 50, pp. 367-393, 1991.
[15] H. G. Beyer, H. P. Schwefel, "Evolution strategies: a comprehensive introduction," Natural Computing, Vol. 1, pp. 35-52, 2002.
[16] D. C. Wunsch II, S. Mulder, "Evolutionary algorithms, Markov decision processes, adaptive critic designs, and clustering: commonalities, hybridization and performance," Proceedings of the International Conference on Intelligent Sensing and Information Processing, pp. 477-482, 2004.
[17] J. Kennedy, R. C. Eberhart, "Particle swarm optimization," IEEE Int. Conf. on Neural Networks, Perth, Australia, pp. 1942-1948, 1995.
