Fuzzy Controller Tuning for a Multivariable System Using a Neuro-Fuzzy System Alberto SORIA CINVESTAV: Centro de Investigaciones y Estudios Avanzados del Instituto Politécnico Nacional – México. Departamento de Control Automático. e-mail:
[email protected]
Abstract
Fuzzy controller tuning is achieved using a neuro-fuzzy system. We propose a regression term to be introduced into the traditional neuro-fuzzy learning algorithm. Results for both the simulated and the real system are presented.

Keywords: Multivariable Fuzzy Controller Tuning, Neuro-Fuzzy Systems.

Introduction
A decentralised fuzzy logic controller is simpler to tune than a centralised version, but interactions between the control loops remain [GEGOV 96]. To take these interactions into account, we propose to tune the parameters of the decentralised controllers simultaneously, using a neuro-fuzzy technique. This method is applied to the decentralised structure to tune a fuzzy logic controller for a multivariable hydraulic plant. Once the tuned fuzzy logic controller is obtained, it is applied to the real system. This paper describes the hydraulic system, the fuzzy controller structure, the tuning method and the algorithm modification, the results obtained in simulation and, finally, the application to the real system.
I. Hydraulic Multivariable System
The objective is to control the temperature and the level of the hydraulic multivariable system presented in Figure 1. In the model of this system the level function h(t) is non-linear, and the temperature function To(t) has a non-linear coupling that varies like 1/h(t): when the level is near zero, the temperature shows an important variation. The system parameters are not exactly known, so approximate values are used.
Figure 1. Hydraulic Tank System outline: input flow qi(t) at input temperature Ti(t), valve, heat exchanger, heating element, pump, level and temperature sensors, level h(t), output temperature To(t) and output flow qo(t).
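The paper gives no explicit model equations for the tank; as an illustration only, the following minimal sketch assumes a standard mass balance for the level and an energy balance for the temperature, with purely illustrative constants A, K and TI (they are not identified plant parameters). It reproduces the 1/h(t) coupling described above.

```python
import math

A = 0.5      # tank cross-section, illustrative value
K = 0.1      # outflow coefficient, illustrative value
TI = 20.0    # input temperature Ti(t), held constant for the sketch

def tank_derivatives(h, To, qi, heat_input):
    """Return (dh/dt, dTo/dt) for level h(t) and output temperature To(t)."""
    h_safe = max(h, 1e-6)              # guard against division by zero near h = 0
    qo = K * math.sqrt(h_safe)         # non-linear output flow through the valve
    dh = (qi - qo) / A                 # level dynamics
    # The 1/h factor makes the temperature vary strongly when the level is low,
    # which is the coupling noted in Section I.
    dTo = (qi * (TI - To) + heat_input) / (A * h_safe)
    return dh, dTo
```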
II. Fuzzy Control Structure
Figure 2 presents a decentralised control architecture [GEGOV 96]. The objective is to drive the level variable h(t) towards set point 1 while the temperature variable To(t) is driven towards set point 2. The controllers output control increments du given the error and the change of error.
Figure 2. Control Structure and Process: set points 1 and 2 are compared with the level y1(t) = h(t) and the temperature y2(t) = To(t); each error and its derivative (d/dt) feed the multivariable fuzzy controller, whose increments du1 and du2 are integrated (1/s) into the control signals u1 and u2 applied to the hydraulic system.
The multivariable fuzzy controller has the decentralised structure shown in the preceding figure: two fuzzy controllers, each with two inputs (error and change of error), one producing the output that controls the level and the other the output that controls the temperature. A minimal sketch of one sampling step of this loop is given below.
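This sketch only illustrates the wiring of Figure 2; the two `controllers` arguments stand for the fuzzy controllers tuned in the following sections, and the Euler discretisation of the d/dt and 1/s blocks is an assumption.

```python
def control_step(set_points, outputs, prev_errors, u, controllers, dt):
    """One sampling step of the decentralised structure of Figure 2.

    set_points, outputs, prev_errors, u : 2-element lists (level loop, temperature loop)
    controllers : two functions mapping (error, error_change) -> control increment du
    """
    new_u, new_errors = [], []
    for i in range(2):
        error = set_points[i] - outputs[i]
        error_change = (error - prev_errors[i]) / dt   # d/dt block
        du = controllers[i](error, error_change)       # fuzzy controller increment
        new_u.append(u[i] + du * dt)                   # 1/s integrator
        new_errors.append(error)
    return new_u, new_errors
```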
III. Neuro-Fuzzy Tuning
The formal analogy between a fuzzy inference system and a multilayer neural network, together with optimisation algorithms derived from the gradient backpropagation algorithm, has led to what is called a "Neuro-Fuzzy Network" [GLORENNEC 91].
III.I Presentation of the Neuro-Fuzzy Network
A Sugeno-type fuzzy system is determined in three stages:
• Given an input x, a membership degree µ is obtained from the antecedent part of the rules.
• A truth value α_i is obtained, associated with the premise of each rule R_i: IF x_1 is X_1^i AND x_2 is X_2^i ... AND x_n is X_n^i THEN u is b_i.
• An aggregation stage takes all r rules into account through

u = \frac{\sum_{i=1}^{r} \alpha_i b_i}{\sum_{i=1}^{r} \alpha_i}

which yields a crisp value u.
These operations can be represented by the layer structure shown in Figure 3; each layer, connected to the others by adjustable parameters, has a specific function.
Figure 3. Neuro-Fuzzy Inference System: a four-layer network whose output unit computes y_1 = \sum_{j} w_{1j}^{4} x_{j}^{3} / \sum_{j} x_{j}^{3}.
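As an illustration of the weighted-average inference above, the following sketch implements a zero-order Sugeno controller with two inputs. The Gaussian form of the membership functions and the product used as the AND operator are assumptions made for the sketch; the paper fixes the input fuzzy sets but does not state their exact form here.

```python
import numpy as np

def sugeno_output(x, centers, widths, conclusions):
    """Zero-order Sugeno inference: u = sum(alpha_i * b_i) / sum(alpha_i).

    x           : input vector, e.g. [error, error_change]
    centers,
    widths      : (r, n) arrays of membership parameters (Gaussian sets assumed)
    conclusions : length-r vector of rule conclusions b_i
    """
    x = np.asarray(x, dtype=float)
    mu = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))  # membership degrees
    alpha = mu.prod(axis=1)                                   # rule truth values (AND as product)
    return float(alpha @ conclusions / alpha.sum())           # normalised weighted average
```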
III.II Architecture and Learning Algorithm
Architecture
One of the simplest learning architectures, known as "Mini-Jean", substitutes the Jacobian of the process by its sign; it was studied by Renders [RENDERS 94] and Lutaud-Brunet [LUTAUD-BRUNET 96]. Such a learning architecture can be applied here because the signs of the Jacobian do not change over the operating domain of the process. We use this architecture, shown in Figure 4.
Figure 4. JEAN Learning Architecture: the neuro-fuzzy controller, fed by the error e and its derivative de, drives the system with u; the output y is compared with the set point y* and the resulting error εy is used, through the neuro-fuzzy model, to adapt the controller.
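In the Mini-Jean variant used here, backpropagating the output error through the process reduces to a multiplication by the sign of the process Jacobian, assumed constant over the operating domain. The short sketch below only illustrates this substitution; the function name and arguments are ours.

```python
import numpy as np

def error_at_controller_output(eps_y, jacobian_sign):
    """Mini-Jean substitution: the unknown process Jacobian dy/du is replaced by
    its sign, so the error reaching the controller output is sign(dy/du) * eps_y."""
    return np.sign(jacobian_sign) * eps_y
```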
Algorithm
Optimisation of the adjustable parameters is accomplished with a version of the classic gradient backpropagation algorithm adapted to the network structure of Figure 3. The aim is to minimise the cost function J:

J = \frac{1}{2}\,\varepsilon^{2}    (1)

where ε is the difference between the set point and the process output. The basic equations of the algorithm are:

w_{ij}^{n}(t) = w_{ij}^{n}(t-1) + \Delta w_{ij}^{n}(t)
\Delta w_{ij}^{n}(t) = -\eta\,\delta_{i}^{n}\,x_{j}^{n-1} + m\,\Delta w_{ij}^{n}(t-1)    (2)

where
w_{ij}^{n}: parameter between unit i of layer n and unit j of layer n-1;
η: learning gain;
m: "momentum" parameter;
δ_i^n: error term of the i-th neurone of layer n;
x_j^{n-1}: output of the j-th unit of layer n-1.

For layers 2, 3 and 4:

\delta_{1}^{4} = \frac{y_{1} - d_{1}}{\sum_{j} x_{j}^{3}}    (3)
\delta_{i}^{3} = F'\!\left(\sum_{j} w_{ij}^{3} x_{j}^{2}\right) \sum_{k} \delta_{k}^{4} w_{ki}^{4}
\delta_{i}^{2} = G'\!\left(\sum_{j} w_{ij}^{2} x_{j}^{1}\right) \sum_{k} \delta_{k}^{3} w_{ki}^{3}    (4)

where d_1 is the desired output, y_1 the effective output value, F the activation function of the units in layer 3 and G the activation function of the units in layer 2.

The quality of the solution obtained with this algorithm depends on the input learning signals, the algorithm control parameters and the learning duration (number of iterations). In a first application of the basic algorithm we observed that the parameters of the different layers do not converge to fixed values, which makes it impossible to express the controller rules in a linguistic form or to leave the learning mechanism in place permanently. The parameter values tend to increase in absolute value, which prevents a reproducible controller: the controller performance would depend on the time at which learning is stopped, which is not acceptable.

IV. Algorithm Modification - Weight Regression
To counter these problems we propose to introduce into the cost function a term that penalises parameter growth. In classification this modification is classic [BISHOP 95] and offers an efficient and simple way to regularise learning in a multilayer network. The cost function becomes:

J = E + \lambda \sum_{j} (w_{j})^{2}

where E is the classic quadratic error, w are the parameters (weights) to be optimised and λ a "weight regression coefficient" that must be chosen and can be different for each weight in the network [GALLINARI 95]. Starting from our experience with the basic algorithm, we observed little impact from variations of the input fuzzy set parameters. We therefore decided to fix these fuzzy sets and to optimise only the rule conclusion parameters w_{1j}^{4}.

The final form of the corrective term for the weight w_{1j}^{4} takes into account the "relative force" of each rule [BARRET 97, AMAMOU 99], giving:

\Delta w_{1j}^{4}(t) = -\eta\,\delta_{1}^{4}\,x_{j}^{3} - 2\eta\lambda\,w_{1j}^{4}(t-1)\,\frac{x_{j}^{3}}{\sum_{k} x_{k}^{3}} + m\,\Delta w_{1j}^{4}(t-1)    (5)
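The following sketch gathers equations (3) and (5) into one adaptation step for the rule conclusions. The vectorised form, the default arguments and the handling of the momentum term are our assumptions; the values of η and λ are those retained in Section V.

```python
import numpy as np

def update_conclusions(w, x3, y, d, eta=0.02, lam=0.25, m=0.0, prev_dw=None):
    """One adaptation step of the rule conclusions w_1j^4 (Eqs. (3) and (5)).

    w        : length-r vector of rule conclusions (the only tuned parameters)
    x3       : length-r vector of layer-3 outputs (rule truth values)
    y, d     : effective and desired outputs
    eta, lam : learning gain and weight regression coefficient
    m        : momentum parameter; prev_dw is the previous increment Delta w(t-1)
    """
    if prev_dw is None:
        prev_dw = np.zeros_like(w)
    s = x3.sum()
    delta4 = (y - d) / s                      # Eq. (3)
    dw = (-eta * delta4 * x3                  # gradient term
          - 2.0 * eta * lam * w * (x3 / s)    # regression term, weighted by relative force
          + m * prev_dw)                      # momentum term
    return w + dw, dw
```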
V. Implementation and Simulation Results
Since the objective of the algorithm modification is to verify parameter convergence, we monitor the sum of their squares T:

T = \sum_{j} \left(w_{1j}^{4}\right)^{2}    (6)

Learning is stopped when an asymptotic behaviour towards a constant value is confirmed; convergence can then be verified for each individual value w_{1j}^{4}. After a systematic study over a large range, we chose η = 0.02 and λ = 0.25 for the two internal parameters of the algorithm (learning gain η and regression coefficient λ), the values that give the minimum average error. Figures 5 and 6 show the behaviour of the sum of weights during learning, where the asymptotic behaviour of the plot of T can be observed.
                        Error Negative    Error Zero    Error Positive
Error Change Negative   -0.57 (NG)        -0.13 (NP)     0.06 (Z)
Error Change Zero       -0.09 (Z)          0 (Z)         0.10 (Z)
Error Change Positive   -0.04 (Z)          0.13 (NP)     0.70 (PG)

Table 1. Control Rules for the Level Controller.

                        Error Negative    Error Zero    Error Positive
Error Change Negative   -0.56 (NG)        -0.003 (Z)     0.15 (PP)
Error Change Zero       -0.21 (NP)         0 (Z)         0.16 (PP)
Error Change Positive   -0.17 (NP)         0.008 (Z)     0.51 (PG)

Table 2. Control Rules for the Temperature Controller.
Figure 5. Results of the Tuning of Level Controller h(t): a) evolution of the rule consequences (weights square sum), b) average error, c) behaviour of level h(t) with the learning signal.
Figure 6. Results of the Tuning of Temperature Controller To(t): a) evolution of the rule consequences (weights square sum), b) average error, c) behaviour of temperature To(t) with the learning signal.
The final control rule tables for the two controllers are shown in Table 1 and Table 2. The numerical values presented are the rule conclusions w_{1j}^{4} after convergence. We note that the tables have a central linguistic symmetry.
Figure 7. Control results with fixed weights after learning.
A non-linear control characteristic is obtained: a mixture of PI-like control near the set point and on/off control when the process outputs are far from the set points. Figure 7 shows the controllers' performance for other set points. The level control loop is much more reactive, with a reduced response time and little set-point overshoot. The temperature controller reacts rapidly to set-point changes; when temperature changes are caused by level coupling effects, the response is good in terms of overshoot and speed.
Figure 8. Results in Simulation.
VI. Application to the Real System
The results are presented first in simulation and then on the real system. The simplified model used for the simulations does not take into account, for example, the noise of the real sensors or the heating element mechanism. The controllers were tuned on the simulated model and then tested, without modification, on the real system. Their behaviour gives an idea of the robustness with respect to the model limitations and the noise of the real sensors. The overall behaviour is satisfying. The optimisation reduces the response time, but with some overshoot. The temperature control is heavily solicited and tends to function in an on/off manner. This lack of robustness may be caused by a poor learning signal. On-line optimisation might improve this response; for technical reasons, it has not been tested.
Figure 9. Results on the Real System: level behaviour, temperature behaviour, level control signal and temperature control signal.
Conclusion
We have studied the tuning of a multivariable fuzzy logic controller using a decentralised structure. The control performance is satisfying in simulation. The application to the real system shows a robustness problem in the temperature controller that could be resolved by applying the optimisation on line. This method finds the controller parameters from local information obtained at each instant of the optimisation process. Tuning quality requires learning signals that activate all the control rules. Prior knowledge can be introduced through the parameter initialisation. The application presented here shows the interest of a supplementary parameter, the regression coefficient, which allows the parameters of the fuzzy controller to converge and reduces the time needed for parameter stabilisation. This approach is interesting because few parameters are needed to control the algorithm, the method requires a short optimisation time and, thanks to its local nature, it permits an on-line application.
References
[AMAMOU 99] AMAMOU, A.; BARRET, C. & MAAREF, H. "Optimisation of a fuzzy controller for reactive navigation". Computational Intelligence and Applications, N.E. Mastorakis, Ed., World Scientific, 1999, p. 193-198.
[BARRET 97] BARRET, C.; BENREGUIEG, M. & MAAREF, H. "Design of an auto-tuned fuzzy controller: application to the reactive navigation of a mobile robot". 3rd IFAC Symp. on Intelligent Computing and Instrumentation for Control Applications (Annecy, France, June 9-11), 1997, p. 277-282.
[BISHOP 95] BISHOP, C. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[GALLINARI 95] GALLINARI, P. & GASCUEL, O. "Statistiques, apprentissage et généralisation. Application aux réseaux de neurones". Revue d'intelligence artificielle, vol. 10, no. 2-3, 1995, p. 285-343.
[GEGOV 96] GEGOV, A. Distributed Fuzzy Control of Multivariable Systems. Netherlands: Kluwer Academic Publishers, 1996.
[GLORENNEC 91] GLORENNEC, P. "An Evolutive Neurofuzzy Network". Procs. 4th Int. Conf. "Neural Networks and their Applications", Nîmes, 1991, p. 301-314.
[LUTAUD-BRUNET 96] LUTAUD-BRUNET, M. Identification et contrôle de processus par réseaux neuro-flous. PhD dissertation, Sciences de l'ingénieur, Université d'Evry - Val d'Essonne, 1996.
[RENDERS 94] RENDERS, J. Métaphores biologiques appliquées à la commande de processus. PhD dissertation, Université libre de Bruxelles, 1994.