0005-1098/84 $3.00 + 0.00 Pergamon Press Ltd. © 1984 International Federation of Automatic Control

Automatica, Vol. 20, No. 1, pp. 15-29, 1984. Printed in Great Britain.

Constrained Linear Quadratic Gaussian Control with Process Applications*

P. M. MÄKILÄ,† T. WESTERLUND† and H. T. TOIVONEN†

Constrained linear quadratic Gaussian control provides, for a class of process control problems, a systematic design procedure in which the control objective and practical design constraints can be incorporated. Key Words--Cement industry; control applications; optimal control; plastics industry; process control; quality control; self-tuning regulators; stochastic control.

The processes which are considered have the features that, firstly, the disturbances can be described as stationary stochastic processes and, secondly, the criterion for control is to minimize the steady-state variances of some process variables. The criterion is often well motivated physically. In quality control the reduction of the variance of a quality variable makes it possible to produce a product which satisfies tighter specification limits. An important class of control problems arises when the optimal operating point lies at a constraint. Reduction of the variance of a variable makes it possible to operate the process closer to the constraint for a given risk of falling outside the constraint. The benefits of this shift of operating point may for example be increased production, reduced energy consumption or a saving in the use of raw materials. The problem of minimizing the variances of certain process variables can be solved using linear quadratic Gaussian control theory, in which a quadratic loss function is minimized. However, the control law obtained in this way may require unacceptably large control signals, or the variances of some other process variables, which do not appear in the loss function, may be too large. In such cases the loss function is often modified, by introducing ad hoc weights on the variables showing too large variations, until a 'satisfactory' control law is obtained. This means that in such cases the standard LQG control problem formulation does not correspond adequately to the real control problem. In fact it has been said that it is difficult to find good straightforward applications of the standard LQG theory (Åström, 1978). At least the following deficiencies of the standard LQG design procedure in applications have been pointed out. The physical meaning of the quadratic criterion is often considered to be unclear (Åström, 1978). This

Abstract--Constrained linear quadratic Gaussian control is considered. Important practical design constraints, including restrictions on control signal variations and on regulator structure, are introduced quantitatively into the control problem formulation. Various topics in the resulting extension of the standard LQG design procedure are discussed, for instance optimality conditions, design of optimal low-order controllers and variance-constrained self-tuning control. Numerical algorithms for solving the constrained LQG control problems are given, facilitating the application of the design procedure. Three industrial applications of linear quadratic Gaussian design are described. The examples are taken from the cement industry and from a process for the production of plastic film.

1. INTRODUCTION

Linear quadratic Gaussian control theory is one area of modern control theory which has proved to be useful in industrial control as well as in other application fields (Åström, 1970; Åström and co-workers, 1977; Balchen, 1979). In this paper linear quadratic Gaussian control is discussed with emphasis on the incorporation of design constraints in the control problem formulation. Two types of design constraints present in industrial process control are considered, namely variance restrictions on the control signals and state variables, and controller structure constraints. In this way a useful extension of the standard LQG design procedure is obtained. Three examples taken from industrial applications are included. The examples serve to illustrate the constrained LQG (CLQG) design procedure and the importance of the CLQG control problem formulation.

*Received 28 October 1980; revised 20 January 1982; revised 25 February 1983; revised 1 August 1983. The original version of this paper was presented at the 8th IFAC World Congress on Control Science and Technology for the Progress of Society which was held in Kyoto, Japan, during August 1981. The published proceedings of this IFAC meeting may be ordered from Pergamon Press Ltd, Headington Hill Hall, Oxford OX3 0BW, U.K. This paper was recommended for publication in revised form by associate editor A. van Cauwenberghe.
†Åbo Akademi, Department of Chemical Engineering, SF-20500 Åbo 50, Finland.


has presumably much to do with the previously mentioned trial and error nature of the design procedure (Athans, 1971; MacGregor, 1973), i.e. the weights in the quadratic criterion are considered to be free design parameters which are changed until a 'satisfactory' control response is obtained. Furthermore, model complexity and regulator complexity go hand in hand in standard LQG design. This can lead to computational and memory space difficulties in the implementation of optimal full-structure LQG controllers. Also, the resulting control law may simply be unnecessarily complex, so that a much simpler well-tuned controller might suffice very well in practice. The CLQG design procedure overcomes these problems to a large extent in a most natural way. Here the original loss function is preserved and the minimization is performed subject to inequality constraints on the variances of some variables specified by the designer and possibly subject to controller structure constraints. The CLQG control approach offers an appealing tool for the design of linear controllers of any complexity. Regulators of different order, from P, PI and PID to more complex regulators, can be designed and compared in performance. The constrained linear quadratic Gaussian control problems form a class of nonlinear programming problems. Optimality conditions and other properties of the control problems relevant to the development of numerical algorithms for solving the nonlinear programming problems are discussed. An interesting difference between the variance-constrained LQG control problem and the standard LQG control problem is that the well-known separation theorem applies only partly to the variance-constrained control problem, i.e. the estimator part of the optimal controller can be computed separately but the optimal feedback gain matrix is in general dependent also on the process and measurement noise covariance matrices.
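This dependence on the noise covariances can be seen already in a first-order example. The sketch below (all numbers illustrative) solves a scalar variance-constrained problem, minimizing Ex(t)² for x(t+1) = ax(t) + bu(t) + w(t) under u(t) = -lx(t) and the constraint Eu(t)² ≤ c², by bisection on the gain:

```python
def closed_loop_variances(a, b, l, sigma_w2):
    # Stationary Ex^2 and Eu^2 for x(t+1) = a x(t) + b u(t) + w(t), u(t) = -l x(t).
    phi = a - b * l                      # closed-loop pole
    assert abs(phi) < 1.0, "closed loop must be stable"
    var_x = sigma_w2 / (1.0 - phi * phi)
    return var_x, l * l * var_x

def constrained_gain(a, b, sigma_w2, c2, tol=1e-12):
    # Bisection for the gain minimizing Ex^2 subject to Eu^2 <= c2.
    # Assumes 0 < a < 1: on (0, a/b] the state variance decreases and the
    # input variance increases with l, so the constrained optimum is the
    # largest feasible gain.
    assert 0.0 < a < 1.0
    lo, hi = 0.0, a / b                  # l = a/b is the minimum variance gain
    if closed_loop_variances(a, b, hi, sigma_w2)[1] <= c2:
        return hi                        # constraint inactive
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if closed_loop_variances(a, b, mid, sigma_w2)[1] > c2:
            hi = mid
        else:
            lo = mid
    return lo

# Same plant and constraint, different process noise variance:
l_small_noise = constrained_gain(0.9, 1.0, 1.0, 0.3)
l_large_noise = constrained_gain(0.9, 1.0, 4.0, 0.3)
```

With the larger noise variance the constraint forces a smaller gain (l_large_noise < l_small_noise), whereas a standard LQG gain for fixed weights would not change with the noise variance at all.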
The numerical solution of CLQG control problems is non-trivial, and it is therefore considered in some detail in this paper. It has been pointed out (O'Reilly, 1980) that the lack of effective solution algorithms has been a major obstacle to the application of the output feedback concept, which is a subproblem of the control problems considered in this paper. In this paper an efficient algorithm for the output feedback problem (Mäkilä, 1982a) is used to obtain an iterative algorithm for the variance-constrained LQG problem for structure constrained controllers. In some applications adaptive on-line solution of the control problem is required or is a practical alternative. Self-tuning regulators based on linear quadratic Gaussian control offer a useful solution (Åström and co-workers, 1977). The variance-constrained optimal control problem formulation can be utilized in developing new self-tuning

algorithms (Toivonen, 1983a, b). Specifically, self-tuning regulators with on-line adaptation of the control weights in the control law, a desirable feature in applications (Belanger, 1980), can be obtained (Toivonen, 1983a). This feature can be used for example in the self-tuning controller of Clarke and Gawthrop (1975). Industrial examples showing the application of the linear quadratic Gaussian design procedure are discussed. In the first example (Westerlund, 1981) multivariable control of a cement kiln is considered. A linear stochastic process model was determined from an identification experiment. Reduction of the variance of one of the process outputs makes it possible to operate the kiln closer to a quality limit, resulting in reduced energy consumption. In this application there were severe constraints on the variances of the control signals. The second application (Westerlund, Toivonen and Nyman, 1979) is a control system for the mixing of cement raw material. The objective is to produce a cement raw meal of the desired chemical composition for the cement kiln. The control problem was solved in practice by using a self-tuning regulator. The algorithm adapts the regulator parameters when the properties of the raw material change. The implementation of the algorithm has resulted in a significant reduction of the variance of the chemical composition of the cement raw meal. In the third example a blown film process for the production of high-quality plastic film (Mäkilä, 1980; Mäkilä and Syrjänen, 1983) is considered. The purpose of the control is to minimize the variations of the film thickness. The main use of the film is as an insulator material in electric capacitors, which places high requirements on the product quality. The paper is organized as follows: in Section 2 regulator design using linear quadratic Gaussian control with variance constraints is discussed.
One modification which is considered here is to fix the structure of the regulator and then tune the regulator parameters optimally. In this way it is often possible to achieve good control performance with very simple controllers. Self-tuning control is also discussed, and a self-tuning algorithm for the variance-constrained optimal control problem is given. The industrial applications are described in Section 3. Numerical algorithms for solving the constrained linear quadratic control problems discussed in this paper are given in the appendices. In Appendix B the computation of a transfer function representation for the optimal LQG controller is considered.

2. REGULATOR DESIGN USING STOCHASTIC CONTROL METHODS

The design of regulators using linear quadratic Gaussian control theory is considered in this section.

The optimal control problem is presented as a variance-constrained nonlinear programming problem containing the standard stationary linear quadratic Gaussian control problem as a subset.

2.1. Process models
Process models required in regulator design can be found in practice using process identification. Many excellent papers and surveys on identification methods and on design of identification experiments are available (Isermann, 1981). Time series models are mostly used. A CARMA time series model is given by the vector difference equation

y(t) + A_1 y(t - 1) + ... + A_{n_A} y(t - n_A) = B_0 u(t - L - 1) + ... + B_{n_B} u(t - L - 1 - n_B) + e(t) + C_1 e(t - 1) + ... + C_{n_C} e(t - n_C)   (1)

where y is an r-dimensional output vector, u is a p-dimensional input vector and {e(t)} is a Gaussian white noise sequence with zero mean value and covariance

Ee(t)e(t)^T = R_e   (2)

and L is a time lag. A more general description of linear stochastic systems is the state-space model

x(t + 1) = Ax(t) + Bu(t) + w(t)
y(t) = Cx(t) + v(t)   (3)

where x is an n-dimensional state vector, and {w(t)} and {v(t)} are Gaussian white noise sequences with zero mean values and covariances

Ew(t)w(t)^T = R_w   (4a)
Ev(t)v(t)^T = R_v   (4b)
Ew(t)v(t)^T = 0.   (4c)

The CARMA system (1) can be represented as a state-space system. In the following the state-space description (3) and (4) is used.

2.2. Linear quadratic Gaussian control with variance restrictions
The problem of minimizing the variances of some process variables can be solved using the well-known linear quadratic Gaussian control theory, in which a quadratic loss function is minimized. There are, however, cases in which the control strategy obtained in this way cannot be applied to the real process. The reason for this is that the strategy gives too large variations of the input, or, although the variances of the variables which are of primary interest are minimized, some other process variables have variances which cannot be accepted. These cases can be solved by introducing constraints on the variances of the variables (Box and Jenkins, 1970). The linear quadratic Gaussian control problem is then modified to that of minimizing the quadratic loss function subject to the restrictions that the variances of some variables are less than or equal to some maximum values specified by the designer. The variance-constrained optimal control problem can be formulated as follows (Mäkilä, Westerlund and Toivonen, 1982). Introduce the stationary covariance matrices of the state and the input

V_x = Ex(t)x(t)^T   (5a)
V_u = Eu(t)u(t)^T   (5b)

where x(t) and u(t) are considered as the stationary processes which are obtained when the system (3) is controlled by a time-invariant regulator. Consider the quadratic loss function

J = tr Q_x V_x + tr Q_u V_u   (6)

where Q_x and Q_u are symmetric positive semidefinite weighting matrices. Inequality constraints on the variances of the variables are introduced as

tr Q_i V_x ≤ q_i²,  i = 1, ..., n_q   (7a)
tr R_j V_u ≤ r_j²,  j = 1, ..., n_r   (7b)

where Q_i, i = 1, ..., n_q and R_j, j = 1, ..., n_r are symmetric positive semidefinite matrices. The control problem is to minimize the loss function (6) subject to the inequality constraints (7). The minimization is to be performed in the set of admissible control strategies defined by the u(t) which are functions of the available information Y_{t-k} = {y(t - k), y(t - k - 1), ..., u(t - 1), u(t - 2), ...} only. It is useful to consider the Lagrangian function of this nonlinear program

J_L = tr Q_x* V_x + tr Q_u* V_u   (8)

where

Q_x* = Q_x + Σ_{i=1}^{n_q} λ_{x,i} Q_i   (9a)
Q_u* = Q_u + Σ_{j=1}^{n_r} λ_{u,j} R_j.   (9b)
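For a given stabilizing feedback the quantities in (5)-(7) can be evaluated numerically. The sketch below (matrices illustrative; full state feedback with no estimation error is assumed for simplicity, so P_x = 0) iterates the discrete Lyapunov equation for V_x and evaluates the loss (6) and the residuals of the constraints (7a):

```python
import numpy as np

def stationary_covariances(A, B, L, Rw, iters=3000):
    # Closed loop x(t+1) = (A - B L) x(t) + w(t) under u(t) = -L x(t);
    # fixed-point iteration of the discrete Lyapunov equation for Vx = E x x^T.
    Acl = A - B @ L
    Vx = np.zeros_like(A)
    for _ in range(iters):
        Vx = Acl @ Vx @ Acl.T + Rw
    Vu = L @ Vx @ L.T                    # Vu = E u u^T for u = -L x
    return Vx, Vu

def loss_and_constraints(Vx, Vu, Qx, Qu, state_cons):
    # Loss (6) and the residuals tr(Qi Vx) - qi^2 of the constraints (7a);
    # a residual <= 0 means the corresponding variance constraint holds.
    J = float(np.trace(Qx @ Vx) + np.trace(Qu @ Vu))
    residuals = [float(np.trace(Qi @ Vx)) - qi2 for Qi, qi2 in state_cons]
    return J, residuals

A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
L = np.array([[0.2, 0.3]])
Rw = 0.1 * np.eye(2)
Vx, Vu = stationary_covariances(A, B, L, Rw)
J, res = loss_and_constraints(Vx, Vu, np.eye(2), 0.1 * np.eye(1), [(np.eye(2), 1.0)])
```

The iteration converges whenever A - BL has spectral radius below one; a production implementation would solve the Lyapunov equation directly instead of iterating.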


Here λ_{x,i} and λ_{u,j} are Lagrange multipliers, all ≥ 0. The minimization of the Lagrangian function (8) w.r.t. all admissible control strategies u(t) is closely related to the minimization of the loss function (6) subject to the inequality constraints (7). In particular, if u*(t) is a minimizer of the Lagrangian function (8) and furthermore the inequality constraints (7) and the complementary slackness condition

Σ_{i=1}^{n_q} λ_{x,i}(tr Q_i V_x - q_i²) + Σ_{j=1}^{n_r} λ_{u,j}(tr R_j V_u - r_j²) = 0   (10)

are fulfilled at u*(t), then u*(t) is also a minimizer of the variance-constrained optimal control problem. The solution to the problem of minimizing the Lagrangian function (8) w.r.t. all admissible control strategies u(t) is given by the well-known standard LQG solution (Åström, 1970)

u*(t) = -L* x̂(t|t - k)   (11)

where x̂(t|t - k) is the conditional mean of the state x(t) given Y_{t-k} and L* is the optimal feedback gain matrix obtained from a Riccati equation. A major problem in obtaining the solution to the variance-constrained optimal control problem is then the determination of appropriate estimates for the Lagrange multipliers λ_{x,i} and λ_{u,j} such that the conditions (7) and (10) are satisfied for u*(t). In practice this must be done iteratively, resulting in a method of solution of the control problem where an iteration step consists of solving a standard LQG problem, i.e. of minimizing the Lagrangian function (8) with λ_{x,i} = λ_{x,i}^{(k)} and λ_{u,j} = λ_{u,j}^{(k)}, and of updating the Lagrange multipliers λ_{x,i} and λ_{u,j} according to a suitable update formula. It should be observed here that although this method of solution is perhaps the easiest to implement, it need not be a very fast method due to the infrequent Lagrange multiplier updating. The standard linear quadratic control problem is a subproblem of the above described method of solution of the variance-constrained optimal control problem. A useful property is then that the Kalman filter equations can be solved once and for all to give the stationary state estimation error covariance matrix P_x defined by

P_x = E[x(t) - x̂(t|t - k)][x(t) - x̂(t|t - k)]^T.   (12)

The stationary covariance matrices of the state and the input, V_x and V_u, required in evaluating the constraints (7), can then be computed from

V_x = (A - BL*)V_x(A - BL*)^T + AP_xA^T - (A - BL*)P_x(A - BL*)^T + R_w   (13)

V_u = L*(V_x - P_x)L*^T.   (14)

Equation (13) is obtained readily from (3) and (4), observing that the estimation error for the optimal state estimate x̂(t|t - k) is uncorrelated with any estimate of the state based on the same information state Y_{t-k}, i.e. then specifically

E[x(t) - x̂(t|t - k)]x̂(t|t - k)^T = 0.   (15)

Equation (13) is a discrete Liapunov equation for V_x. Note that if the LQG subproblem solution (11) is substituted into the constraints (7) and into (10), then the solution of the local optimality conditions of the variance-constrained optimal control problem can be interpreted as a problem of solving the system of nonlinear equations in the Lagrange multipliers given by

φ_i = λ_{x,i}(tr Q_i V_x - q_i²) = 0,  i = 1, ..., n_q   (16a)
φ_{n_q+j} = λ_{u,j}(tr R_j V_u - r_j²) = 0,  j = 1, ..., n_r.   (16b)

This interpretation indicates that useful multiplier updating formulae could be obtained from the Broyden class of methods for solving systems of nonlinear equations (Broyden, 1965). Introducing λ = [λ_{x,1}, ..., λ_{u,1}, ...]^T, the resulting class of Lagrange multiplier update formulae is given by

λ^{(k+1)} = λ^{(k)} + β_k H_k φ(λ^{(k)})   (17)

where β_k is a step-length parameter and H_k is an approximation to the inverse of the Jacobian matrix of φ w.r.t. λ at λ^{(k)}, obtained recursively from a Broyden updating formula (see Appendix A). There is an interesting difference between the solutions to the standard and the variance-constrained LQG control problems. Namely, for the variance-constrained LQG problem the optimal feedback matrix is in general dependent on the characteristics of the disturbances, i.e. on the covariance matrices R_w and R_v as well. Note furthermore that the variance-constrained optimal control problem may have no solution if the constraints (7) are too severe. For example there is a lower limit on the variances of the control signals needed to stabilize an unstable system. The approach discussed in this section may be compared with the standard procedure for applying linear quadratic Gaussian control theory suggested in the literature (Athans, 1971; MacGregor, 1973). The weights of the quadratic loss function are then considered as design variables which are selected in such a way that the closed-loop system has acceptable properties. Some rules of thumb for the

choice of proper values for the weighting matrices have been suggested (Athans, 1971). It is observed that a constraint on a variable will in general correspond to an increased weighting on the variable in the loss function. The actual amount of additional weighting will depend on the value of the Lagrange multiplier at the minimum of the variance-constrained control problem. In the variance-constrained LQG control problem it is possible to choose the loss function (6) to correspond to a physically well motivated criterion, i.e. the minimization of the variances of some primary process variables. Other process variables can then be taken into account by introducing variance constraints on these variables. The optimal control strategy for the variance-constrained LQG control problem was obtained above in the form

u(t) = -L x̂(t|t - k).   (18)

The regulator (18) can be represented by its transfer function

u(t) + H_1 u(t - 1) + ... + H_{n_H} u(t - n_H) = G_0 y(t - k) + ... + G_{n_G} y(t - k - n_G).   (19)

The transfer function representation (19) is useful when implementing the optimal control strategy. A scheme for the numerical computation of a convenient realization of the transfer function representation is given in Westerlund, Mäkilä and Toivonen (1979) (see Appendix B).

Example: minimum variance control with input constraints. Consider the common situation when

the criterion for control is to minimize the steady-state variance of the output of a single-input single-output system (Åström, 1970). The loss function is then

J = Ey(t)².   (20)
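For a first-order plant the tradeoff behind a constraint on Eu(t)² can be computed in closed form. The sketch below (plant parameters and gains illustrative) evaluates Ey(t)² and Eu(t)² for x(t+1) = ax(t) + bu(t) + w(t), y = x, under u = -ly, and sweeps the gain:

```python
def variances(a, b, l, sigma_w2=1.0):
    # Stationary Ey(t)^2 and Eu(t)^2 for x(t+1) = a x + b u + w, y = x, u = -l y.
    phi = a - b * l                      # closed-loop pole
    assert abs(phi) < 1.0, "closed loop must be stable"
    var_y = sigma_w2 / (1.0 - phi * phi)
    return var_y, l * l * var_y

# l = a/b = 0.9 gives the minimum variance strategy; smaller gains trade a
# modest increase in the loss (20) for a large drop in input variance:
curve = {l: variances(0.9, 1.0, l) for l in (0.5, 0.7, 0.9)}
```

At l = 0.9 the output variance is minimal (Ey² = 1) with Eu² = 0.81; at l = 0.5 the input variance drops by roughly 63% while the output variance rises by only about 19%, illustrating the kind of tradeoff discussed in Remark 2.1.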

There are cases when the strategy which minimizes (20) requires control signals which are too large. One possibility is then to use an increased sampling interval. The output variance which is obtained when using the minimum variance strategy at the longer sampling interval is therefore larger, and there have been practical situations when this method does not give good results (cf. Section 3.1). The output variance can in these cases be minimized subject to a constraint on the input variance (Box and Jenkins, 1970). The loss function (20) is then minimized subject to the inequality constraint

Eu(t)² ≤ c².   (21)

The associated Lagrangian function is

J_L = Ey(t)² + q_u Eu(t)²   (22)

where the Lagrange multiplier, or input weight, q_u > 0 at the solution when the constraint (21) is active. The minimization of the loss function (20) is of course meaningless if for all stabilizing control strategies Eu(t)² > c². Then the control problem has no solution.

Remark 2.1. By constraining the variance of the input it is often possible to obtain a significant decrease of the input variance but simultaneously only a slight increase of the output variance (MacGregor, 1973; Modén and Söderström, 1982) (cf. Section 3.3). The use of a variance-constrained strategy would then be meaningful in cases where no obvious quantitative technical input restrictions exist.

Remark 2.2. In applications hard input constraints such as lower and upper limits on the range of the control signal u(t) can affect the control quality. The control problem is then to minimize the loss function (20) subject to the restriction that

ε > 0 is chosen so that for all δ ∈ [0, ε], f_k + δ(f_k* - f_k) remains in the domain of minimization; cf. equation (32). Here the matrix E has the elements

E_{st} = 2(B^T S B + Q_u*)_{ik}(D V_x D^T + R_v)_{jl} = E_{ts}   (34)

where s = (j - 1)r + i and t = (l - 1)r + k. It then follows that

f^T E f = 2 tr[F(D V_x D^T + R_v)F^T(B^T S B + Q_u*)] ≥ 0, for all f   (35)

due to positive semidefiniteness of the matrices (D V_x D^T + R_v) and (B^T S B + Q_u*). If these matrices are both positive definite, then E is also. This property is retained also when only some of the elements in the feedback matrix F are tuned. This is utilized in the sequel. The algorithmic mappings generating a sequence of Lagrange multiplier vectors and a sequence of feedback gain vectors,

λ^{(k+1)} = Γ(λ^{(k)}, f_k)   (36a)
f_{k+1} = Ψ(f_k, λ^{(k+1)})   (36b)

are now constructed, the iterates hopefully converging to a local minimizer of the nonlinear program defined above. Let now E^{(k)} and b^{(k)} denote the matrix E and the vector b evaluated at λ^{(k)}. Inserting the expression for the gradient (33), i.e. E^{(k)}f_k + b^{(k)}, and b^{(k)} = -E^{(k)}f_k*, results in

(∂J_L^{(k)}/∂f)|_{f_k}(f_k* - f_k) = -(f_k* - f_k)^T E^{(k)}(f_k* - f_k) < 0   (39)

when f_k* ≠ f_k, due to E^{(k)} being positive definite. The direction (f_k* - f_k) is thus a descent direction of the Lagrangian function J_L^{(k)}(f) at f_k. Choose then the algorithmic mapping Ψ (36b) so that f_{k+1} is given by

f_{k+1} = f_k + ε_k(f_k* - f_k)   (40)

where ε_k is such that J_L^{(k)}(f_{k+1}) < J_L^{(k)}(f_k). Note also that f_k* - f_k can be written as

f_k* - f_k = -E^{(k)-1}(∂J_L^{(k)}/∂f)^T|_{f_k}.   (41)

This expression gives a useful way to generalize the algorithmic mapping Ψ defined by (36b) and (40) to the case where E^{(k)} is only positive semidefinite so that the inverse E^{(k)-1} does not exist. Namely, define then

f_{k+1} = f_k - ε_k(E^{(k)} + D^{(k)})^{-1}(∂J_L^{(k)}/∂f)^T|_{f_k}   (42)

where D^{(k)} is any positive semidefinite matrix such that E^{(k)} + D^{(k)} is positive definite, and ε_k > 0 is such that J_L^{(k)}(f_{k+1}) < J_L^{(k)}(f_k). The descent property of the search direction is preserved by this modification. The step-length, or relaxation, parameter ε_k is an essential part of the algorithmic mapping Ψ defined by (36b) and (42). The problem of determining ε_k can also be interpreted as an inexact, or partial, line search problem, and thus line search algorithms


using implicit step-length rules (Fletcher, 1980), such as the Goldstein step-length rule, can offer fast schemes for this purpose. Note that (41) can be interpreted as the Newton step to the minimum of a quadratic approximation to the Lagrangian function around f_k. The algorithmic mapping Γ (36a) for the Lagrange multipliers λ can be chosen to have the form (17). The Lagrange multipliers are updated after each use of the algorithmic mapping Ψ defined by (36b) and (42). If the Lagrange multiplier update scheme is successful, then a fast convergence to a local minimizer of the nonlinear program can be expected in many cases. Although convergence conditions for this algorithm have not been determined, some numerical experience indicates that the algorithm can perform effectively (Mäkilä, 1982a). The structure constrained control problem as formulated above is flexible, so that linear multivariable controllers of any order, also dynamic controllers with low-order estimators (O'Reilly, 1980), can be handled computationally.

Remark 2.3. A combination of standard LQG design and structure constrained LQG design offers a practical regulator design procedure. The performance of regulators of different structural complexities can be readily compared. If for example the structure (19) is used, then it can be determined how many previous inputs and outputs are required in the regulator to obtain good control quality.
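As a small illustration of fixed-structure tuning (system and weights hypothetical), the sketch below tunes the single gain of a static output feedback law u(t) = -f y(t), y(t) = x(t) + v(t), for a first-order plant by direct search over the closed-form cost:

```python
import math

def cost(a, b, f, sigma_w2, sigma_v2, q_u):
    # J = E x^2 + q_u E u^2 for x(t+1) = a x + b u + w, measured y = x + v,
    # structure-constrained law u = -f y (a single tunable gain).
    phi = a - b * f
    if abs(phi) >= 1.0:
        return math.inf                  # unstable closed loop
    var_x = (sigma_w2 + (b * f) ** 2 * sigma_v2) / (1.0 - phi * phi)
    var_u = f * f * (var_x + sigma_v2)   # x(t) and v(t) are independent
    return var_x + q_u * var_u

# Grid search over the admissible gains (a crude stand-in for the descent
# mapping (40)-(42), adequate for a single parameter):
gains = [i / 1000.0 for i in range(1, 1900)]
f_best = min(gains, key=lambda f: cost(0.9, 1.0, f, 1.0, 0.5, 0.1))
```

With measurement noise present the best static gain backs off markedly from the minimum variance gain a/b = 0.9; repeating the search for richer structures of the form (19) and comparing the attained costs gives the comparison suggested in Remark 2.3.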

2.4. Self-tuning control
Self-tuning regulators have been developed for systems whose parameters are unknown and possibly slowly time-varying. A self-tuning regulator consists of on-line computation of the regulator parameters (Åström and co-workers, 1977). When the criterion for control is to minimize the variance of the output a very simple robust control algorithm is obtained (Åström and Wittenmark, 1973). Several successful applications of the self-tuning minimum variance regulator have been reported (Åström and co-workers, 1977). An extension to multivariable systems has been given by Borisson (1979). In Keviczky and co-workers (1978) and Westerlund and co-workers (1979) the algorithm has been applied for minimizing the variance of an estimated variable. The problem is more complex in the case when it is necessary to restrict the input variance. In such cases it may still be possible to use a self-tuning regulator based on minimum variance control by increasing the sampling interval or by using a longer prediction interval in the regulator. As was mentioned previously there are, however, cases when this approach is not satisfactory. Self-tuning regulators which restrict the input variance by using a quadratic loss function with a weight on the input have been suggested (Clarke and Gawthrop, 1975; Åström and co-workers, 1977). A self-tuning regulator for the variance-constrained control problem has been discussed by Toivonen (1983a). The problem is to minimize the loss function (20) subject to the inequality constraint (21). It is observed that the Kuhn-Tucker necessary optimality condition corresponding to (10) is

q_u(Eu(t)² - c²) = 0.   (43)

A self-tuning regulator can then be constructed by applying the Robbins-Monro scheme on equation (43) to adjust the Lagrange multiplier q_u. In Toivonen (1983a) the following algorithm was given for the variance-constrained control problem.

Step 1. Parameter estimation. Estimate the parameters of the model (1) using a recursive identification method.
Step 2. Control signal computation. Determine the control signal u(t) from the strategy which minimizes the Lagrangian function (22). The solution is found by solving a steady-state Riccati equation or by performing a spectral factorization.
Step 3. Adjustment of q_u. Adjust the Lagrange multiplier q_u according to the Robbins-Monro scheme

q_u(t + 1) = q_u(t) + μ(t + 1)(u(t)² - c²)   (44)

where μ(t) is a sequence of positive scalars such that μ(t) ≤ 1. For a time-invariant system convergence can take place only if μ(t) → 0 as t → ∞. To allow for slowly time-varying parameters it can be appropriate to let μ(t) be a small positive constant. In simulations a value μ(t) = 0.005, ..., 0.05 has been found useful.

Remark 2.4. The solution to the variance-constrained optimal control problem implies that a steady-state Riccati equation should be solved. A simpler algorithm is obtained by using a single-step loss function (Toivonen, 1983a). The regulator is then related to the algorithm given by Clarke and Gawthrop (1975), with the modification that the input weight q_u is adjusted at each step. This method has given good control performance in many simulated examples (Toivonen, 1983a). Variance-constrained self-tuning control offers an on-line scheme to take into account constraints in the form of admissible control signal variations or magnitudes. Allidina and Hughes (1980) give a self-tuning pole assignment controller adjusting the input weight q_u, or more generally the coefficients of the polynomials in the definition of the generalized output of the Clarke and Gawthrop (1975) algorithm, so as to place the poles of the closed-loop system at prespecified locations. In a variance-constrained self-tuning controller the input weight is, however, adjusted so as to minimize a quadratic criterion subject to an explicitly defined control signal magnitude, or variance, constraint. In Toivonen (1983a) it has been shown that the algorithm has good convergence properties. It is also straightforward to generalize the algorithm to the multivariable case (Toivonen, 1983b).

3. APPLICATIONS

In this section the application of the discussed linear quadratic Gaussian control methods is illustrated with three industrial examples taken from process control.

3.1. Control of a dry process rotary cement kiln
In this example stochastic control methods have been applied to the control of a dry process cement kiln, with a capacity of 1000 t of clinker a day. A detailed description of the project can be found in Westerlund (1981). The model obtained from an identification experiment was

y(t) + A_1 y(t - 1) = B_0 u(t - 1) + e(t) + C_1 e(t - 1)   (45)

fuel rate using a conventional control loop. The control variable u_1 therefore correlates with the energy flow into the kiln. The control problem consists of reducing the variances of the output variables. Large variations in y_1 should be avoided in order to ensure steady operation of the plant. The signal y_2 correlates with the product quality specified as the free lime content of the clinker (Westerlund, 1981). A reduction of the variance of y_2 will in this case make it possible to operate the process closer to the limit which specifies the maximum free lime content of the clinker. This will result in reduced energy consumption. Before using variance-constrained control, the process was being controlled manually by two operators. The multivariable character of the process makes, however, manual operation difficult. It was found (Westerlund, 1981) that a reasonable criterion for control is to minimize the loss function

J = Ey(t)^T y(t).   (46)

The minimum variance strategy for the model (45) gives the output variances Ey 1(t) 2 = 0 . 0 6 4 4

(47) Ey2(t) 2 = 0 . 0 2 1 4 .

The variances of the control signals are then Eul(t) 2 = 0.148 Eu2(t) 2 = 108.

(48)

These variances are much larger than could be accepted in the process. In practice it was appropriate to restrict the input variances according to Eu_1(t)² ≤ ...
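The benefit of a variance reduction can be quantified with the operating-point argument of this section. The sketch below (the halved variance and the 1% risk level are hypothetical; 0.0214 is the value of Ey_2(t)² in (47)) computes how far the setpoint must stay below a one-sided quality limit for a Gaussian output:

```python
from statistics import NormalDist

def setpoint_margin(variance, risk):
    # Margin between the mean and a one-sided upper limit so that the
    # probability of exceeding the limit is at most `risk` (Gaussian output).
    return NormalDist().inv_cdf(1.0 - risk) * variance ** 0.5

m_unconstrained = setpoint_margin(0.0214, 0.01)  # variance from (47)
m_reduced = setpoint_margin(0.0107, 0.01)        # hypothetical halved variance
```

Halving the variance shrinks the required margin by a factor of sqrt(2) ≈ 1.41; that difference is the room used to shift the operating point toward the free lime limit and save energy.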

Step 8. Compute β_{i+1} = β_i(γ - β_i)(γ - 1)^{-1}, where γ > 1. Set i = i + 1 and go to Step 3. End of algorithm.

The solution to the variance-constrained optimal control problem is then given by the control law

u(t) = -L x̂(t|t - k)    (A9)

where the state estimate x̂(t|t - k) is obtained from a Kalman filter according to (Åström, 1970)

x̂(t - k|t - k) = A x̂(t - k - 1|t - k - 1) + B u(t - k - 1) + K_0 [y(t - k) - C A x̂(t - k - 1|t - k - 1) - C B u(t - k - 1)]    (A10)

x̂(t - i + 1|t - k) = A x̂(t - i|t - k) + B u(t - i),    i = k, k - 1, ..., 1    (A11)

and the optimal feedback matrix L is obtained, for example, as described in the above algorithm.

APPENDIX B. COMPUTATION OF A TRANSFER FUNCTION REPRESENTATION OF THE LINEAR STATE FEEDBACK CONTROLLER

In this appendix the computation of a transfer function representation (19) of the optimal controller (18) is considered. Specifically, a computational scheme is given for transforming the optimal control law u(t) = -L x̂(t|t) for the filtered case into a transfer function representation (Westerlund, Mäkilä and Toivonen, 1979)

u(t) + H_1 u(t - 1) + ... + H_{nH} u(t - nH) = G_0 y(t) + ... + G_{nG} y(t - nG).    (A12)

The other cases, u(t) = -L x̂(t|t - k) with k > 0, can be treated analogously. The optimal control law is a linear dynamic system given by

x̂(t|t) = T x̂(t - 1|t - 1) + K_0 y(t)    (A13)

u(t) = -L x̂(t|t)    (A14)

where T = A - BL - K_0 C(A - BL) and K_0 is the Kalman filter gain matrix. This is a state-space system with y as input and u as output; equating its transfer function with that of (A12) gives the linear equations (A15) and (A16) for the coefficient matrices {H_i} and {G_j}. In the following, s denotes the observability index of the pair (T, L), i.e. the smallest positive integer (≤ n, the dimension of the state vector) for which

rank(O_s) = rank(O_{s+1})    (A17)

where O_i is defined by

O_i := [(T^T)^{i-1} L^T, ..., T^T L^T, L^T].    (A18)

The solution {H_i}, {G_j} of (A15) and (A16) is generally not unique. The following scheme can then be adopted to select a convenient solution. First it is observed that only rank(O_s) columns of the matrix [H_1, H_2, ..., H_s], corresponding to rank(O_s) linearly independent rows of O_s^T, have to be considered; the remaining columns can be set equal to zero. A natural choice is to disregard the oldest inputs, i.e. to select the columns of [H_1, H_2, ..., H_s] corresponding to the first rank(O_s) linearly independent rows of O_s^T.

APPENDIX C. COMPUTATION OF OPTIMAL STRUCTURE CONSTRAINED CONTROLLERS FOR THE VARIANCE CONSTRAINED OPTIMAL CONTROL PROBLEM

The computation of the optimal feedback gains of a structure constrained controller for the variance constrained optimal control problem is considered in this appendix. The notation of Section 2 is used. Let the vector f now consist of only those feedback gains in the feedback matrix F which are to be tuned (some elements of F may thus be fixed to zero, etc.). A local necessary optimality condition for the control problem is then

∂J_L/∂f = 0    (A19)

i.e. the gradient of the Lagrangian function of the control problem with respect to f must vanish at a minimum. The gradient in (A19) is obtained from (27):

∂J_L/∂F_{ij} = {2 B^T S [(A + BFD) V_x D^T + B F R_η + R_{ηw}^T] + 2 Q* F (D V_x D^T + R_η)}_{ij}.    (A20)

It is observed that if the matrices S, V_x and Q* in (A20) are considered to be fixed, then (A20) is a linear expression in f, which combined with (A19) results in a linear equation in f, i.e.

E f + b = 0    (A21)

where E and b are easily obtained from (A20). The following iterative algorithm for the computation of the optimal feedback gains can then be constructed (see also Mäkilä, 1982a).
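Since (A20) is affine in f once S, V_x and Q* are held fixed, the matrix E and vector b of (A21) can be extracted numerically by probing the gradient map with unit matrices. A minimal sketch with random hypothetical matrices (all dimensions and data are invented for illustration; the gradient expression follows (A20) as given above):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 3, 2, 2                     # states, inputs, fed-back signals (hypothetical)

def psd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T                    # random positive semidefinite matrix

# Hypothetical fixed matrices; in the algorithm S, V_x and Q* are held
# fixed at their current iterates while f is solved for.
A = rng.standard_normal((n, n)) * 0.3
B = rng.standard_normal((n, m))
D = rng.standard_normal((p, n))
S, Vx, Qs, Reta = psd(n), psd(n), psd(m), psd(p)
Retaw = rng.standard_normal((p, n))   # cross-covariance of eta and w

def grad(F):
    """Gradient expression (A20): affine in F for fixed S, Vx and Q*."""
    return (2 * B.T @ S @ ((A + B @ F @ D) @ Vx @ D.T + B @ F @ Reta + Retaw.T)
            + 2 * Qs @ F @ (D @ Vx @ D.T + Reta))

# Extract b and the columns of E from the affine map F -> grad(F).
b = grad(np.zeros((m, p))).ravel()
E = np.zeros((m * p, m * p))
for j in range(m * p):
    Fj = np.zeros(m * p)
    Fj[j] = 1.0
    E[:, j] = grad(Fj.reshape(m, p)).ravel() - b

f = np.linalg.solve(E, -b)            # solve E f + b = 0, cf. (A21)
Fstar = f.reshape(m, p)
print(np.max(np.abs(grad(Fstar))))    # ~0: stationarity (A19) holds for fixed S, Vx
```

Solving E f = -b in this way yields the candidate gains used in the line search of the algorithm below.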

The algorithm

Step 1. Set k = 0, choose f_0 arbitrarily such that J(f_0) is finite, λ^{(0)} > 0, H_0 = I, and 0 < β_0 < 1.

Step 2. Solve the algebraic Liapunov equation below for the positive semidefinite matrix V_x^{(k)}:

V_x^{(k)} = (A + B F_k D) V_x^{(k)} (A + B F_k D)^T + B F_k R_η F_k^T B^T + R_w + B F_k R_{ηw} + (B F_k R_{ηw})^T.

Step 3. Update the Lagrange multipliers λ according to

λ_i^{(k+1)} = λ_i^{(k)} + sat(β_k H_k^i γ^{(k)}, -σ λ_i^{(k)}),    i = 1, ..., n_q + n_r

where H_k^i is the ith row of H_k, 0 < σ < 1, and

γ_i^{(k)} = λ_i^{(k)} (tr Q_i V_x^{(k)} - q_i),    i = 1, ..., n_q
γ_{n_q+j}^{(k)} = λ_{n_q+j}^{(k)} (tr R_j V_u^{(k)} - r_j),    j = 1, ..., n_r

where V_u^{(k)} denotes the input covariance corresponding to V_x^{(k)}.

Denote the solution of the linear equation (A21) by f_k^*.

Step 7. Find a point f_{k+1} along the line f(ε) = f_k + ε(f_k^* - f_k) satisfying the Goldstein step-length criterion (Goldstein, 1965)

c ε [∂J_L/∂f]_k^T (f_k^* - f_k) ≥ J_λ^{(k)}(f(ε)) - J_λ^{(k)}(f_k) ≥ (1 - c) ε [∂J_L/∂f]_k^T (f_k^* - f_k)

where J_λ^{(k)}(f) denotes the Lagrangian function evaluated with the multipliers λ^{(k)}, and [∂J_L/∂f]_k is obtained from (A20) evaluated at F_k:

[∂J_L/∂f]_k = {2 B^T S [(A + B F_k D) V_x^{(k)} D^T + B F_k R_η + R_{ηw}^T] + 2 Q* F_k (D V_x^{(k)} D^T + R_η)}.

Set k = k + 1 and go to Step 2.
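The flavour of the multiplier iteration can be conveyed by a stripped-down scalar sketch. This is an assumed simplification, not the paper's exact algorithm: the inner minimization is done by grid search and the multiplier update is a plain multiplicative rule in the spirit of Step 3, without the H_k scaling and the Goldstein line search. The problem is to minimize the state variance of x(t+1) = a x(t) + b u(t) + w(t) under u(t) = -f x(t) subject to the input variance constraint E u(t)^2 ≤ r.

```python
import numpy as np

# Scalar illustration (hypothetical data): minimize E x(t)^2 for
#   x(t+1) = a x(t) + b u(t) + w(t),  u(t) = -f x(t),
# subject to E u(t)^2 <= r.
a, b, sw2, r = 0.95, 1.0, 1.0, 0.1

# Stabilizing gains satisfy |a - b f| < 1.
f_grid = np.linspace((a - 1.0) / b + 1e-3, (a + 1.0) / b - 1e-3, 40001)

def inner_min(lam):
    """For fixed multiplier lam, minimize V_x + lam * V_u over f, where
    V_x = sw2 / (1 - (a - b f)^2) and V_u = f^2 V_x (grid-search sketch)."""
    g = a - b * f_grid
    Vx = sw2 / (1.0 - g ** 2)
    J = Vx * (1.0 + lam * f_grid ** 2)
    i = np.argmin(J)
    return f_grid[i], Vx[i], f_grid[i] ** 2 * Vx[i]

lam, beta = 1.0, 2.0
for _ in range(400):
    f, Vx, Vu = inner_min(lam)
    # Multiplicative multiplier update, analogous in spirit to the Step 3
    # rule lambda <- lambda + beta * lambda * (constraint residual):
    lam *= 1.0 + beta * (Vu - r)

print(f, Vx, Vu)   # Vu ~ r: the input variance constraint is active
```

With the constraint active, the iteration drives E u(t)^2 to the bound r at the cost of a larger state variance, mirroring the trade-off seen in the cement kiln example, where the unconstrained minimum variance inputs were unacceptably large.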