Fakultät Bio- und Chemieingenieurwesen, Lehrstuhl für Systemdynamik und Prozessführung

Master’s Thesis

Linear model predictive control based on state space models combined with multi-rate state estimation using moving horizon estimation

Sadaf Shariati

First Supervisor: Prof. Dr.-Ing. Sebastian Engell
Second Supervisor: Dr.-Ing. Stefan Krämer

Declaration

I hereby declare that I wrote this Master's thesis by myself. All references used in this work are properly quoted.

Köln, September 22, 2011

Sadaf Shariati


Contents

List of Figures
List of Tables
Nomenclature

1 Introduction
  1.1 Model Predictive Control
  1.2 Constrained State Estimation
  1.3 Thesis Outline

2 State Space System
  2.1 System and Model
  2.2 Input-Output System
  2.3 State Estimator
    2.3.1 Luenberger Observer
    2.3.2 Calculation of the feedback Matrix L
    2.3.3 Luenberger Observer and Model Mismatch
    2.3.4 Luenberger Observer and unmeasurable disturbances
    2.3.5 Disturbance Observer
  2.4 Kalman Filter
    2.4.1 Predictor-Corrector Form of the Kalman Filter
  2.5 State Controller
    2.5.1 State Feedback
    2.5.2 Pre-filter
  2.6 Target Calculation with Origin-Shifting
    2.6.1 Set-point Adjustment under unknown disturbances and model uncertainties
  2.7 Disturbance Observer and Disturbance Compensator

3 Model Predictive Control
  3.1 Introduction
    3.1.1 Problem Formulation
  3.2 Target Calculation
    3.2.1 Target Calculation according to Rawlings
    3.2.2 Target Calculation according to Muske
    3.2.3 Infeasible ȳ
    3.2.4 Economic optimization
    3.2.5 Implementation in Matlab
    3.2.6 Tuning Parameters in the Target Calculation algorithm
  3.3 Moving Horizon Regulator
    3.3.1 Tuning Parameters
    3.3.2 Conclusion

4 Single Adjuster Tuning
  4.1 Introduction
  4.2 Table of Parameters
  4.3 Normalization
  4.4 Calculating Parameters from a priori information
    4.4.1 Kalman Filter Parameters
    4.4.2 Regulator Parameters
    4.4.3 Conclusion

5 Constrained State Estimation
  5.1 Introduction
  5.2 Constrained Kalman Filter
    5.2.1 Problem Formulation
    5.2.2 Conclusion
  5.3 Full Information Estimation
    5.3.1 Problem Formulation
    5.3.2 Stability check
    5.3.3 Conclusion
  5.4 Moving Horizon Estimation
    5.4.1 Problem formulation
    5.4.2 Stability check
    5.4.3 Conclusion
  5.5 Multi-scale state estimation
    5.5.1 Problem Formulation
    5.5.2 Results
  5.6 Constrained Multi-rate state estimation
    5.6.1 Problem Formulation
    5.6.2 Conclusion

6 Applications of MPC: Control of the inflow of a Tank
  6.1 Problem formulation
    6.1.1 Control objectives
  6.2 Model
  6.3 A priori information and user-defined parameters

A Appendix
  A.1 Controllability and Observability
    A.1.1 Controllability and stability
    A.1.2 Observability and stability
  A.2 Defining the Model of the System
  A.3 Luenberger Observer
    A.3.1 An Example of Luenberger Observer with inaccurate initial condition
    A.3.2 An Example of Luenberger Observer with model mismatch
  A.4 Kalman Filter
  A.5 State Regulator
  A.6 Target Calculation
  A.7 Disturbance Observer and Disturbance Compensator
  A.8 MPC
    A.8.1 Target Calculation in MPC
    A.8.2 Implementation of the MPC Regulator in Matlab
    A.8.3 Tuning regulator parameters
  A.9 Normalization
    A.9.1 Normalization with Operating Points uop and yop
    A.9.2 Normalization with Operating Point yop
    A.9.3 Normalization with extreme values
  A.10 Single Adjuster Tuning
  A.11 Constrained State Estimation
    A.11.1 Constrained Kalman filter
    A.11.2 Full Information Estimator
    A.11.3 Moving Horizon Estimation
    A.11.4 Multi Rate State Estimation
    A.11.5 Constrained Multi-rate State Estimation

Bibliography

List of Figures

2.1 Luenberger Observer
2.2 System and Observer with Unmeasured Disturbances
2.3 Disturbance Observer estimates the unmeasured constant disturbance
2.4 State Controller
2.5 Target Calculation as an alternative for the pre-filter, removes the output offset
2.6 The regulator cannot remove the offset resulting from an unestimated disturbance
2.7 The state regulator compensates the offset from an estimated disturbance
5.1 Discrete system with 3 measurement vectors and different sample times
6.1 Example 1
6.2 System 1, rate of changes in V̇in3 limited to [−0.1, 0.1]
6.3 System 1, rate of changes in V̇in3 limited to [−0.01, 0.01]
A.1 System with Luenberger Observer and different Initial Value
A.2 System with Luenberger Observer and model mismatch
A.3 Predictor-Corrector form of KF with disturbance observer and exact model
A.4 Predictor-Corrector form of KF with disturbance observer and model mismatch
A.5 Pre-filter eliminates the offset resulting from a non-zero reference
A.6 Compensation of the measured Disturbances with Target Calculation
A.7 MPC Regulator with QK = 0.01, SK = 1, RK = 1
A.8 MPC Regulator with QK = 1, SK = 1, RK = 1
A.9 MPC Regulator with QK = 10, SK = 1, RK = 1
A.10 MPC Regulator with QK = 100, SK = 1, RK = 1
A.11 MPC Regulator with QK = 10, SK = 0.01, RK = 1
A.12 MPC Regulator with QK = 10, SK = 0.1, RK = 1
A.13 MPC Regulator with QK = 10, SK = 1, RK = 1
A.14 MPC Regulator with QK = 10, SK = 100, RK = 1
A.15 MPC Regulator with QK = 10, SK = 1, RK = 0.01
A.16 MPC Regulator with QK = 10, SK = 1, RK = 0.1
A.17 MPC Regulator with QK = 10, SK = 1, RK = 10
A.18 MPC Regulator with QK = 10, SK = 1, RK = 100
A.19 MPC Regulator with QK = 10, SK = 0.1, RK = 1, Zl = 1000
A.20 MPC Regulator with QK = 10, SK = 0.1, RK = 1, Zl = 10
A.21 MPC Regulator with QK = 10, SK = 0.1, RK = 1, Zl = 0.01
A.22 MPC Regulator with Initialized Global Tuning Adjusters
A.23 MPC Regulator with Initialized Global Tuning Adjusters
A.24 MPC Regulator: sCr = 1, rKF and rDX are set to default values
A.25 MPC Regulator: sCr = 10, rKF and rDX are set to default values
A.26 MPC Regulator with rDX = 10 and default values for sCr and rKF
A.27 MPC Regulator with rDX = 10 and default values for sCr and rKF
A.28 MPC Regulator with rDX = 0.046 and default values for sCr and rKF
A.29 MPC Regulator with rDX = 0.046 and default values for sCr and rKF
A.30 MPC Regulator with rKF = 0.1, rDX = 2 and default value for sCr
A.31 MPC Regulator with rKF = 0.1, rDX = 2 and default value for sCr
A.32 MPC Regulator with rKF = 2.0891, rDX = 2 and default value for sCr
A.33 MPC Regulator with rKF = 2.0891, rDX = 2 and default value for sCr
A.34 MPC Regulator with rKF = 10, rDX = 2 and default value for sCr
A.35 MPC Regulator with rKF = 10, rDX = 2 and default value for sCr
A.36 CKF in comparison with KF with an unconstrained system
A.37 CKF in comparison with KF in the presence of output constraints
A.38 Unconstrained Full Information Estimator in comparison with KF
A.39 Constrained Full Information Estimator in comparison with KF
A.40 Unconstrained MHE in comparison with KF
A.41 Constrained MHE in comparison with KF
A.42 Multi-rate state estimation with variable structure
A.43 Multi-rate state estimation with fixed structure
A.44 A comparison between fixed and variable structure
A.45 Multi-rate state estimation combined with constrained Kalman Filter
A.46 A comparison between multi-rate CKF and variable structure

List of Tables

3.1 Target calculation: A comparison between Muske's and Rawlings' method
3.2 Target calculation: Tuning Qs and Rs with q = 1 and ȳ = 1
3.3 Target calculation: Tuning Qs and Rs with q = 100 and ȳ = 1
3.4 Target calculation: Tuning Qs and Rs with q = 10³ and ȳ = 1
3.5 Target calculation: Tuning Qs, Rs and q for non-square systems
3.6 Target calculation: Tuning Qs, Rs and q for non-square systems
3.7 Target calculation: Tuning Ty
3.8 Target calculation: Tuning Tu
4.1 Table of parameters for MPC regulators
4.2 A priori data
6.1 A priori data of example 1
A.1 Table of information for example 16

Chapter 1

Introduction

1.1 Model Predictive Control

Model Predictive Control (MPC) emerged in the 1970s and was first published in 1978 by Richalet, Rault, Testud and Papon. Early applications of MPC were simple by current standards and focused on industrial processes; the main goal was to address process-control issues such as hard and soft constraints, unmeasured disturbances and modeling errors. In the 1990s, MPC was interpreted in a state-space framework by Morari and Lee (1991) and by Ricker (1991), the first steps towards melding MPC with established theory such as linear quadratic control. Modern MPC enables improved dynamic control for processes in which product quality is important, as in the chemical industry [3].

The main idea behind MPC is to predict the future behavior of the process and to optimize over the manipulated variables. A sequence of optimal manipulated variables is computed from the optimization problem, and only the first element of this sequence is applied to the system. An important characteristic of MPC is that input and output constraints can be incorporated directly into the optimization problem. The optimization is repeated at each sampling time, and the model prediction is updated with a bias term computed from the difference between the actual and the predicted output [4].

Predicting the future behavior of the process requires a suitable model of the system. Moreover, offset-free tracking requires estimating the unknown disturbances, which in turn requires a disturbance model. Thus the steady-state model of the process and the disturbance model are important elements of MPC.

One important application of MPC is dealing with non-square systems and with constraints, both of which may render the set-points infeasible. To address infeasibility, the steady-state targets are usually recalculated at each sample time based on the currently estimated states and disturbances. In target calculation, an optimization problem seeks the best targets that both satisfy the constraints and lie as close as possible to the desired set-points. In 2008, Rawlings [10] suggested a new MPC structure in which the target calculation is eliminated and the steady-state and dynamic parts of the regulation are combined in a single optimization problem.
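The receding-horizon idea described above can be sketched for a scalar system. The following Python snippet is a toy illustration, not the formulation developed in this thesis: the horizon is one step, there are no constraints, and all numbers (a, b, q, r) are invented, so each optimization has a closed-form solution instead of requiring a QP solver.

```python
# Receding-horizon sketch for a scalar system x[k+1] = a*x[k] + b*u[k].
# With a one-step horizon and cost q*(a*x + b*u)**2 + r*u**2 the optimal
# input is u = -q*a*b*x / (q*b**2 + r); real MPC solves a constrained
# optimization over a longer horizon, but the loop structure is the same.

a, b = 0.9, 0.5          # illustrative model parameters
q, r = 1.0, 0.1          # weights on the predicted state and the input

def mpc_step(x):
    """Minimize q*(a*x + b*u)**2 + r*u**2 over u (closed form)."""
    return -q * a * b * x / (q * b**2 + r)

x = 5.0                   # initial state
for k in range(40):
    u = mpc_step(x)       # solve the optimization at every sampling time ...
    x = a * x + b * u     # ... and apply only the first element of the sequence

print(abs(x) < 1e-3)      # the closed loop drives the state to the origin
```

The point of the sketch is only the loop structure: optimize, apply the first input, measure, repeat.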

1.2 Constrained State Estimation

Constraints are important elements of industrial processes. Since optimization-based state estimation offers the possibility to incorporate state constraints into the estimation, these methods are of high importance in process control. The simplest optimization-based state estimator is the Constrained Kalman Filter (CKF); due to its low computational cost and simple structure, the CKF is widely used. For non-linear systems, however, other optimization-based estimators such as Full Information Estimation and Moving Horizon Estimation (MHE) are preferred.

The aim of this work is to develop a suitable MPC for an industrial plant based on its requirements, using optimization-based state estimation under soft and hard constraints. Moreover, because of the large number of tuning parameters in an MPC, a new structure is developed by which the user can tune the MPC by adjusting merely one parameter.

1.3 Thesis Outline

This thesis is organized as follows. Chapter 2 presents the theoretical background needed to implement an MPC, such as state-space models, disturbance estimation and target calculation. In Chapter 3, an MPC regulator with two different approaches to target calculation, Rawlings' method and Muske's method, is implemented; the two approaches are analyzed and compared. In Chapter 4, a simple structure for tuning the MPC and the state estimator is introduced. With this new structure, the user is able to tune the regulator and the observer with merely three global adjusters; additionally, initial values are calculated for the global adjusters such that the process is launched in a smooth, non-aggressive way. Constrained state estimation is investigated in Chapter 5: three approaches are analyzed and compared, namely the Constrained Kalman Filter, Full Information Estimation and Moving Horizon Estimation. For multi-scale systems, a fixed and a variable structure are suggested and compared, and at the end of the chapter a constrained multi-scale state estimation is implemented. In Chapter 6, an industrial plant is modeled. The plant has hard and soft constraints on the manipulated variables, the controlled variables, and the rates of change of both; an MPC is developed according to these constraints and the control objectives.

Chapter 2

State Space System

2.1 System and Model

Assume that the following equations are the sampled representation of a controlled system:

x_{k+1} = A_r x_k + B_r u_k,   x_{k=0} = x_0
y_k = C_r x_k + D_r u_k                                          (2.1)

with x ∈ R^n, u ∈ R^r, y ∈ R^m and A_r ∈ R^{n×n}, B_r ∈ R^{n×r}, C_r ∈ R^{m×n}. Disturbances, known and unknown, can influence the system and are modeled as follows:

x_{k+1} = A_r x_k + B_r u_k + E_r z_k + B_{d,r} d_k,   x_{k=0} = x_0
y_k = C_r x_k                                                    (2.2)

with z, d ∈ R^p and E_r, B_{d,r} ∈ R^{n×p}. Although equation 2.2 is modified to model the disturbances, in reality there are unmeasured disturbances which affect the states x and the output y. The model of the system is linear and normally based on step-response identification. Since step-response tests in engineering practice are likely to be subject to load disturbances, a model obtained in this way is highly exposed to model-mismatch errors: the matrices of the model exhibit a deviation from the real system matrices. To indicate this deviation, the real system matrices carry the subscript r, while the model matrices have no index.
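As a quick illustration, the sampled model (2.1) can be simulated step by step. The following Python sketch uses invented 2×2 matrices, not a model from this thesis, and plain lists instead of a matrix library:

```python
# Simulating x[k+1] = A x[k] + B u[k], y[k] = C x[k] with plain lists.
# The matrices are illustrative example values.

def matvec(M, v):
    """Matrix (list of rows) times vector."""
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

A = [[0.9, 0.1],
     [0.0, 0.8]]
B = [[0.0],
     [1.0]]
C = [[1.0, 0.0]]

x = [1.0, 0.0]                # x0
u = [0.5]                     # constant input
outputs = []
for k in range(5):
    outputs.append(matvec(C, x)[0])          # y[k] = C x[k]
    Ax, Bu = matvec(A, x), matvec(B, u)
    x = [ai + bi for ai, bi in zip(Ax, Bu)]  # x[k+1] = A x[k] + B u[k]

print(outputs[0], round(x[0], 4), round(x[1], 4))
```

The same recursion is the backbone of every prediction and estimation scheme discussed below.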

2.2 Input-Output System

The input-output system is a linear transformation which maps the input vector space U to the output vector space Y. The input vector u and the output vector y may have different dimensions.


To find the steady-state values of the system described by equation 2.2, the states of the system are constant, i.e. x_{k+1} = x_k. Assuming that the system is stable and I − A is regular, the steady-state output is:

y = C(I − A)^{-1} B u + D u                                      (2.3)

Since D = 0 in model 2.2, this reads

y = (C(I − A)^{-1} B) u = Φ u,   Φ ∈ R^{m×r}                     (2.4)

where u ∈ R^{r×1}, Φ ∈ R^{m×r} and the output vector y ∈ R^{m×1}. Under certain conditions, if Φ is invertible, the input vector u can be recovered from the output vector y:

u = Φ^{-1} y                                                     (2.5)

If Φ is regular (det Φ ≠ 0) and square, the inverse of Φ exists. In this case each y in the vector space Y can be mapped to exactly one u in the vector space U and vice versa, i.e. there is a bijective map between Y and U. If the matrix Φ is not regular, it is not invertible. There are different cases to consider:

• dim(u) < dim(y):
  – The transformation U → Y_u forms a subspace Y_u of Y. For each element of U there is exactly one element of Y that can be assigned; this also means that only r steady-state set-points can be reached.
  – In the inverse transformation Y → U, not all elements of Y can be mapped to an element of U. Therefore, instead of mapping each element of Y to some predefined element of U, those elements of Y are mapped which are best in the sense of a minimization problem. Using the least-squares method, for example, u is found such that Φu is the closest possible vector to y:

    min_u { (y − Φu)^T (y − Φu) }                                (2.6)

    Solving (2.6) yields the following expression for u, which requires the pseudo-inverse of Φ:

    u = (Φ^T Φ)^{-1} Φ^T y                                       (2.7)

    Since this mapping is not bijective, there are vectors y that cannot be reached exactly, and thus some vectors in the output vector space are not reachable in steady state even if the system is controllable (for controllability see A.1).

• dim(u) > dim(y):
  – U → Y: different input vectors u can produce the same output y. Therefore, additional conditions must be defined so that a unique u yields a specified y.
  – Y → U_y: for each y from the output vector space there are several vectors u in the input vector space as solutions. The best one can be chosen through an optimization problem; one solution is the pseudo-inverse of Φ.
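A tiny Python sketch of equation (2.7) for the case dim(u) < dim(y); the steady-state gain Φ below is an invented example, not data from this thesis. With more outputs than inputs, the pseudo-inverse picks the input whose steady-state output is closest to the desired one in the least-squares sense:

```python
# Least-squares steady-state input via the pseudo-inverse, eq. (2.7):
# two outputs, one input, so Phi^T Phi and Phi^T y are scalars and the
# pseudo-inverse reduces to a division.

Phi = [[2.0],
       [1.0]]                       # illustrative steady-state gain
y_des = [1.0, 1.0]                  # desired steady-state output

pTp = sum(row[0] * row[0] for row in Phi)            # Phi^T Phi
pTy = sum(row[0] * yi for row, yi in zip(Phi, y_des))  # Phi^T y
u = pTy / pTp                       # u = (Phi^T Phi)^-1 Phi^T y = 3/5

y_reached = [row[0] * u for row in Phi]
print(u, y_reached)                 # y_reached differs from y_des: this
                                    # set-point is not reachable in steady state
```

Note that the residual y_des − Φu is orthogonal to the column of Φ, which is exactly the least-squares optimality condition behind (2.7).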

2.3 State Estimator

The regulator needs the actual states of the system in order to calculate the manipulated variable accordingly. In practice it is not always possible to measure all states of a process, because:
• some states are physically impossible to measure;
• the measurement is time-consuming (e.g. lab work may be needed to measure some variables);
• the measurement is expensive.
State estimators reconstruct the unavailable states of the system from the available measurements. Only if the system is observable can the estimator fully reconstruct the states. The state estimator is based on a model of the system which runs in parallel to the real system and receives the same input.

2.3.1 Luenberger Observer

In this section it is assumed that the model of the system is accurate, so the matrices of the model and of the system coincide. However, an accurate model alone is not enough to reconstruct the states, because unknown disturbances and differing initial state values x_0 ≠ x̂_0 may still cause the states x̂_k and the output ŷ_k of the model to deviate from those of the real system. When the initial values of model and real system do not match, x_0 ≠ x̂_0, the estimated states may approach the same steady state, but the estimator cannot reconstruct the dynamics of the system. The same situation occurs when the real system is subject to unknown disturbances which are not seen by the estimator. An estimator can reconstruct the states if the initial conditions and disturbances are known. When the initial conditions are unknown or inaccurate and the system is subject to unknown disturbances, the state estimator must still converge to the correct states, i.e. it must be capable of compensating for the unknown initial conditions and disturbances.

To this end, feedback is added to the estimator: the error between the system output y_k and the model output ŷ_k is fed back through a weighting matrix L to the estimator input. Regardless of whether the difference between the real and the estimated output is caused by differing initial values or by unknown disturbances, it is corrected through the feedback loop. The resulting estimator with feedback matrix L, the so-called Luenberger observer, is:

x̂_{k+1} = A x̂_k + B u_k + L(y_k − ŷ_k),   x̂_{k=0} = x̂_0
ŷ_k = C x̂_k                                                    (2.8)

with L ∈ R^{n×m}.


[Figure 2.1: Luenberger Observer — block diagram: the model runs in parallel to the system, and the output error y_k − ŷ_k is fed back to the model through the matrix L.]

For the stability of the loop it should be ensured that the estimation error e_{x,k} = x_k − x̂_k vanishes. The feedback matrix L should be calculated so that the state error e_{x,k} converges to zero.

Convergence condition. For k → ∞ the estimation error should vanish [19]:

lim_{k→∞} ∥e_{x,k}∥ = 0                                          (2.9)

2.3.2 Calculation of the feedback Matrix L

The difference equation of the observer error is [19] (derived from equations 2.8 and 2.1):

e_k = x_k − x̂_k                                                 (2.10)

e_{k+1} = x_{k+1} − x̂_{k+1}
        = A x_k + B u_k − A x̂_k − B u_k − LC(x_k − x̂_k)
        = (A − LC)(x_k − x̂_k)

e_{k+1} = (A − LC) e_k                                           (2.11)

To satisfy the convergence condition, the error dynamics stated in equation 2.11 must be asymptotically stable, which means that all eigenvalues of the matrix (A − LC) must lie inside the unit circle. In this case the estimation error approaches zero. The matrix L shifts the eigenvalues of (A − LC) and thereby determines the stability of the error dynamics. L can be selected using the following methods:
• pole placement;
• the Riccati equation.
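The eigenvalue condition can be checked numerically. The following scalar Python sketch uses invented numbers, not a system from this thesis; in the scalar case the eigenvalue of (A − LC) is simply the number a − l·c, and the error vanishes whenever its magnitude is below one, even for an unstable plant:

```python
# Scalar check of the convergence condition (2.9): the estimation error
# obeys e[k+1] = (a - l*c) * e[k], so it dies out iff |a - l*c| < 1.
# For matrices one would instead place the eigenvalues of (A - L C)
# inside the unit circle.

a, c = 1.2, 1.0          # the plant itself is unstable (|a| > 1) ...
l = 0.7                  # ... yet l is chosen so that a - l*c = 0.5

e = 3.0                  # initial estimation error x0 - xhat0
for k in range(30):
    e = (a - l * c) * e

print(abs(e) < 1e-6)     # the observer error converges although |a| > 1
```

This is why observer design reduces to an eigenvalue-placement problem for (A − LC), solvable by pole placement or via the Riccati equation.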

2.3.3 Luenberger Observer and Model Mismatch

An estimator needs an accurate model of the system. If the model is not accurate, there is an error in the output of the system. The error between the system output and the observer output is propagated through the matrix L to the estimator. The matrix L is selected such that the estimation error e_{x,k} = x_k − x̂_k between the system states and the observer states vanishes. Owing to model mismatch, however, the observer states differ from the system states: although they approach the system states, they never become exactly the same, x_k ≠ x̂_k.

Assume a system with model mismatch but without unknown disturbances; the state error is calculated as follows:

A = A_r + ΔA_r
B = B_r + ΔB_r                                                   (2.12)
C = C_r + ΔC_r

The real system stays the same:

x_{k+1} = A_r x_k + B_r u_k
y_k = C_r x_k                                                    (2.13)

Considering the model mismatch, the observer is:

x̂_{k+1} = (A_r + ΔA_r) x̂_k + (B_r + ΔB_r) u_k + L(y_k − ŷ_k)
ŷ_k = (C_r + ΔC_r) x̂_k                                          (2.14)

The state error between the real system and the observer is:

e_k = x_k − x̂_k   ⇒   e_{k+1} = x_{k+1} − x̂_{k+1}               (2.15)

and therefore:

e_{k+1} = (A_r − LC_r) e_k − ΔA_r x̂_k − ΔB_r u_k + L ΔC_r x̂_k   (2.16)

In comparison with equation 2.11, the term −ΔA_r x̂_k − ΔB_r u_k + L ΔC_r x̂_k is added. It follows that adjusting the matrix L is no longer enough to remove the estimation error. In the steady state, where all states have reached constant values, the state error e_x is:

e_x = (I − A_r + LC_r)^{-1} ((−ΔA_r + L ΔC_r) x̂ − ΔB_r u)        (2.17)

where the term (−ΔA_r + L ΔC_r) x̂ − ΔB_r u is in general not zero.

Conclusion. For examples of the Luenberger observer with inaccurate initial conditions and with model mismatch, refer to A.3.1 and A.3.2. Model mismatch leads to an error e_x between the real states and the observer states. A state regulator needs the estimated states in order to control the system; if they are not estimated precisely, the regulator is not able to drive the controlled variable y_k to the set-point w_k, and an undesired offset appears in the output.
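The steady-state error of equation (2.17) can be reproduced numerically. The following scalar Python sketch (all numbers invented; c = 1, so in the scalar case (2.17) reduces to e = −Δa·x̂/(1 − a_r + l·c)) simulates plant and mismatched observer side by side and compares the simulated error with the formula:

```python
# Scalar illustration of eq. (2.17): with model mismatch a_model = a_r + da
# the steady-state estimation error e = -da * xhat / (1 - a_r + l*c) != 0.

a_r, b, c = 0.8, 1.0, 1.0      # "real" plant (illustrative values)
da = 0.05                      # mismatch in the A matrix
a_m = a_r + da                 # model used by the observer
l, u = 0.5, 1.0                # observer gain and constant input

x, xhat = 0.0, 0.0
for k in range(2000):          # run until both reach steady state
    y = c * x
    xhat = a_m * xhat + b * u + l * (y - c * xhat)
    x = a_r * x + b * u

e_sim = x - xhat
e_pred = -da * xhat / (1.0 - a_r + l * c)
print(abs(e_sim - e_pred) < 1e-9)   # simulation matches the formula
```

The error does not vanish no matter how long the observer runs, which is the point of Section 2.3.3.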

2.3.4 Luenberger Observer and unmeasurable disturbances

Besides the measurable disturbances, so-called unmeasured disturbances often affect the states. Estimating the unmeasured disturbances d_k which act on the system states is one application of state estimation. Although unmeasured disturbances affect the states of the system, they have no impact on the model states x̂_k; thus the estimator cannot exactly estimate the states x_k. Initially the states of system and estimator match, but once the unmeasured disturbance d_k acts on the system, they differ. The system equations with an unmeasured disturbance are:

x_{k+1} = A_r x_k + B_r u_k + B_d d_k
y_k = C_r x_k                                                    (2.18)

and the observer equations are:

x̂_{k+1} = A_r x̂_k + B_r u_k + L(y_k − ŷ_k)
ŷ_k = C_r x̂_k                                                   (2.19)

In the steady state, the estimation error is:

e_{x,s} = (I − A + LC)^{-1} B_d d_k                              (2.20)

Example 1 (Luenberger Observer and unmeasured disturbances). Consider the following sampled system:

x_{k+1} = [ 0.8187  0.1725  0.0085 ]       [ 0.0094 ]       [ 1 ]
          [ 0       0.9092  0.0863 ] x_k + [ 0.0998 ] u_k + [ 0 ] d_k
          [ 0       0.0863  0.8230 ]       [ 0.0953 ]       [ 0 ]

y_k = [ 1  0  0 ] x_k

having:
• a Luenberger observer;
• an exact model;
• identical initial values;
• an unmeasured disturbance d.




[Figure 2.2: System and Observer with Unmeasured Disturbances — states x_1, x_2, x_3 and their estimates (top); output y, estimate ŷ, output error e_y, input u and disturbance d (bottom).]

The observer has an accurate model of the system:




x̂_{k+1} = [ 0.8187  0.1725  0.0085 ]        [ 0.0094 ]
           [ 0       0.9092  0.0863 ] x̂_k + [ 0.0998 ] u_k + L(y_k − ŷ_k)
           [ 0       0.0863  0.8230 ]        [ 0.0953 ]

ŷ_k = [ 1  0  0 ] x̂_k

As can be seen in figure 2.2, once the unmeasured disturbance d starts acting on the system, the error between the output of the system y and the estimated output ŷ is no longer zero, and the states x of the real system no longer match the estimated states x̂.

Conclusion. When the system is subject to the unmeasured disturbance d_k, the states can no longer be estimated accurately, and a steady-state estimation error e_{x,s} remains. With a state regulator in the loop, this estimation error would lead to an undesired offset between the output y_k and the set-point w_k. Although state estimators cannot work properly in the presence of unmeasured disturbances and model mismatch, they are widely used for controlling the states; to this end, estimators have been modified to detect the unmeasured disturbances and thereby remove the offset between the output y_k and the set-point w_k.


2.3.5 Disturbance Observer

As discussed before, it is necessary to handle unknown disturbances in the controller design. The system with unmeasured disturbances dk ∈ Rʳ is as follows:

xk+1 = A xk + B uk + Bd dk + E zk,    xk=0 = x0    (2.21)
yk = C xk

In order to estimate the states, Bd should be modeled. Since this is usually not possible, Bd is commonly selected as B, and certain conditions are tested to ensure that the resulting system is detectable and observable [16, 6].
The state estimator estimates d̂ under the assumption that d̂ has no dynamics and is only manipulated by the estimator. The estimated disturbance d̂ is appended to the states of the estimator; d̂ has the same dimension as the output vector y, d̂ ∈ Rᵐ.

[x̂; d̂]k+1 = [A Bd; 0 I] [x̂; d̂]k + [B; 0] uk + [E; 0] zk + L (yk − [C 0] [x̂; d̂]k)    (2.22)

Example 2 (Disturbance Observer). The system introduced in example 1 on page 8 is assumed, having:
• a disturbance observer;
• an exact model;
• identical initial values;
• an unmeasured disturbance d.

The real system, which is subject to an unmeasured disturbance, is described as:

xk+1 = [0.8187 0.1725 0.0085; 0 0.9092 0.0863; 0 0.0863 0.8230] xk + [0.0094; 0.0998; 0.0953] uk + [0.0094; 0.0998; 0.0953] dk,
yk = [1 0 0] xk.

According to equation 2.22, the observer for the augmented system is:

[x̂; d̂]k+1 = [0.8187 0.1725 0.0085 0.0094; 0 0.9092 0.0863 0.0998; 0 0.0863 0.8230 0.0953; 0 0 0 1] [x̂; d̂]k + [0.0094; 0.0998; 0.0953; 0] uk + L (yk − [1 0 0 0] [x̂; d̂]k),
ŷk = [1 0 0 0] [x̂; d̂]k.


Figure 2.3: Disturbance Observer estimates the unmeasured constant disturbance.

The feedback matrix L is [0.6842, 0.7975, 0.5350, 0.5809]ᵀ and the unmeasured disturbance is assumed to be constant. Figure 2.3 shows how the disturbance observer estimates the disturbance. Although the estimated disturbance matches the real disturbance only after a significant time period, no lasting deviation remains in the states and the output.

Conclusion
In conclusion, the extended observer is capable of estimating the disturbance vector d̂k. The estimated disturbance vector d̂k can represent both uncertainties in the parameters and unmeasured disturbances. Using a Kalman filter as the disturbance observer, it is possible to tune the observer so that the disturbance is well estimated and consequently the states are properly reconstructed.
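The behaviour shown in figure 2.3 can be reproduced with a short simulation of the augmented observer of equation 2.22. Instead of the gain quoted above, a stabilizing gain is computed here with a Kalman-style Riccati iteration using Q = I and R = 1 — an assumed tuning, not the thesis's:

```python
import numpy as np

# Example-2 system; Bd = B, constant unmeasured disturbance d.
A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
B = np.array([[0.0094], [0.0998], [0.0953]])
C = np.array([[1.0, 0.0, 0.0]])

# Augmented model (equation 2.22): disturbance state with d_{k+1} = d_k
Aa = np.block([[A, B], [np.zeros((1, 3)), np.eye(1)]])
Ca = np.hstack([C, np.zeros((1, 1))])
Q, R = np.eye(4), np.eye(1)

# Iterate the filter Riccati equation to steady state, then form the gain
P = np.eye(4)
for _ in range(2000):
    S = Ca @ P @ Ca.T + R
    P = Aa @ (P - P @ Ca.T @ np.linalg.solve(S, Ca @ P)) @ Aa.T + Q
L = Aa @ P @ Ca.T @ np.linalg.inv(Ca @ P @ Ca.T + R)

# Simulate: plant with constant unmeasured d, observer with augmented model
x = np.zeros((3, 1)); xa = np.zeros((4, 1)); d, u = 1.0, 0.0
for _ in range(300):
    y = C @ x
    xa = Aa @ xa + np.vstack([B, [[0.0]]]) * u + L @ (y - Ca @ xa)
    x = A @ x + B * (u + d)
print(abs(xa[3, 0] - d))  # d-hat converges to the true constant disturbance
```

Because the augmented model matches the true plant exactly here, the estimation error obeys e_{k+1} = (Aa − L Ca) e_k and dies out, so d̂ converges to d without lasting offset.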

2.4 Kalman Filter

The Luenberger observer uses the measured output y to estimate the states x̂k, but measured values are in general subject to errors and noise. Using the distorted output to estimate the states causes the estimated states to be noisy as well. The system including noise on the states xk and on the measurement yk is as follows:


xk+1 = Ar xk + Br uk + Er zk + ξk    (2.23)
yk = Cr xk + φk

The variables ξk and φk are random variables representing the normally distributed state and measurement noise. The Kalman filter is another interpretation of the observer concept. It is able to reconstruct the states of the system in the presence of random noise and errors. It is based on minimizing the estimation error, whose solution is given by the matrix Riccati equation. Moreover, with the two tuning matrices Q and R, it offers additional degrees of freedom.

Feedback Matrix in Kalman Filter
The observer of the noisy system has the same structure as the Luenberger observer introduced in section 2.3.1 on page 5.

x̂k+1 = (A − LC) x̂k + B uk + E zk + L yk    (2.24)

The matrix L is calculated by solving an optimization problem. The Kalman filter should determine the states x̂k such that on average they match the system states xk. It is assumed that the model of the system is accurate. The estimation error is defined as:

ex,k = xk − x̂k    (2.25)

The cost function of the minimization problem is the expected squared estimation error, defined as:

J = Σ_{k=1}^{n} E(ex,kᵀ ex,k) = Σ_{k=1}^{n} E[(xk − x̂k)ᵀ (xk − x̂k)]    (2.26)

The cost function J depends on the observer matrix L, since L affects the state estimation. The feedback matrix L is calculated such that the mean square value of the estimation error is as small as possible. Thus the minimization problem is defined as:

min_L {J}    (2.27)

The solution of this optimization problem for the feedback matrix L is:

L = A P Cᵀ (C P Cᵀ + R)⁻¹    (2.28)

where P is found using the Riccati equation:

Pk+1 = A (Pk − Pk Cᵀ (C Pk Cᵀ + R)⁻¹ C Pk) Aᵀ + Q    (2.29)

Q and R are the covariance matrices of the random variables ξk and φk. Using these two matrices, it is possible to tune the Kalman filter.

2.4.1 Predictor-Corrector-Form of the Kalman Filter

In a simple Kalman filter, the estimation of the state xk+1 is based on yk, which is one sample time older than the estimated state. Using the predictor-corrector form, the estimator can be modified so that the estimation of the states is performed based on the measurement of the same time step:

x̂k+1|k+1 = A x̂k|k + B uk + E zk + L (yk+1 − C x̂k+1|k)    (2.30)

The correction of the states is calculated based on yk+1 instead of the measured value yk, and on the predicted states x̂k+1|k instead of the states x̂k. Shifting equation 2.30 one sample time backward yields:

x̂k|k = A x̂k−1|k−1 + B uk−1 + E zk−1 + L (yk − C x̂k|k−1)    (2.31)

where the predicted value x̂k|k−1 is:

x̂k|k−1 = A x̂k−1|k−1 + B uk−1 + E zk−1    (2.32)
⇒ x̂k|k = x̂k|k−1 + L (yk − C x̂k|k−1)    (2.33)

In the Kalman filter, the prediction-correction form is as follows:

Prediction:   x̂k|k−1 = A x̂k−1|k−1 + B uk−1 + E zk−1
Correction:   x̂k|k = x̂k|k−1 + L (yk − C x̂k|k−1)

In these equations the calculation of P and L can be incorporated:

Prediction:
x̂k|k−1 = A x̂k−1|k−1 + B uk−1 + E zk−1    (2.34)
Pk|k−1 = A Pk−1|k−1 Aᵀ + Q

Correction:
Lk = Pk|k−1 Cᵀ (C Pk|k−1 Cᵀ + R)⁻¹    (2.35)
x̂k|k = x̂k|k−1 + Lk (yk − C x̂k|k−1)
Pk|k = (I − Lk C) Pk|k−1

See Appendix A.4 for an example of the predictor-corrector form.
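The predictor-corrector recursion can be sketched as follows. For checkability the simulation is run without noise, so the estimate must converge to the true state; the weights Q and R are assumed values:

```python
import numpy as np

# Example system; Q and R are assumed tuning values for this sketch.
A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
B = np.array([[0.0094], [0.0998], [0.0953]])
C = np.array([[1.0, 0.0, 0.0]])
Q, R = 0.01 * np.eye(3), np.eye(1)

x = np.array([[1.0], [2.0], [3.0]])   # true state
xh = np.zeros((3, 1))                 # estimate, wrong initial value
P = np.eye(3)
for k in range(200):
    u = np.array([[1.0]])
    x = A @ x + B @ u                 # plant step (noise-free here)
    # Prediction (2.34)
    xp = A @ xh + B @ u
    Pp = A @ P @ A.T + Q
    # Correction (2.35) with the measurement of the same time step
    Lk = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)
    y = C @ x
    xh = xp + Lk @ (y - C @ xp)
    P = (np.eye(3) - Lk @ C) @ Pp
print(np.max(np.abs(x - xh)))  # estimation error after 200 steps
```

With process and measurement noise switched on, the same loop yields the noisy-case estimates the section describes; only the plant step and the measurement change.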

2.5 State Controller

2.5.1 State Feedback

In MIMO systems it is possible to feed the estimated states x̂k back to the input of the system:

uk = −K x̂k    (2.36)

The state vector x̂k is fed back through the feedback matrix K, with K ∈ R^{r×n}, to the input uk. In this case the matrix K is the state regulator.

Finding the Feedback Matrix K
An observer with no model mismatch and no disturbance acting on the system is assumed. Using the system described in equations 2.1 on page 3, the feedback matrix K is easily computed. Since there is no model mismatch, the matrices of the system and the model match each other, and consequently the states of the model match the states of the system (x̂k = xk). The calculation of the feedback matrix K of the state controller is very similar to the calculation of the feedback matrix L of the observer. The system with the state controller forms the following equations:

xk+1 = Ar xk + Br uk
yk = Cr xk
uk = −K xk + wk

and thus:

xk+1 = (Ar − Br K) xk + Br wk
yk = Cr xk

The dynamics of the system can be shaped by the feedback loop; the closed loop is asymptotically stable if all eigenvalues of (Ar − Br K) stay inside the unit circle. The eigenvalues of the

Figure 2.4: State Controller


matrix (Ar − Br K), and consequently the stability of the system, are affected by the selection of the matrix K. If the system is excited with initial values x0 and x̂0 and no reference is applied to the system (wk = 0), the choice of K affects the transient response of the system. Due to the duality of the controller and observer design, the feedback matrix K can be determined in the same way as the feedback matrix L of the observer, by substituting Aᵀ for A and Bᵀ for C.
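The design can be made concrete with pole placement. A minimal sketch using Ackermann's formula — a standard method, not described in the text — on the example system, with arbitrarily chosen target eigenvalues:

```python
import numpy as np

# Example system; the target poles are an assumption for illustration.
A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
B = np.array([[0.0094], [0.0998], [0.0953]])
poles = [0.5, 0.4, 0.3]                    # desired eigenvalues of A - B K

ctrb = np.hstack([B, A @ B, A @ A @ B])    # controllability matrix
coeffs = np.poly(poles)                    # desired characteristic polynomial
chiA = (coeffs[0] * np.linalg.matrix_power(A, 3)
        + coeffs[1] * np.linalg.matrix_power(A, 2)
        + coeffs[2] * A + coeffs[3] * np.eye(3))
# Ackermann: K = [0 ... 0 1] ctrb^{-1} chi(A)
K = np.array([[0.0, 0.0, 1.0]]) @ np.linalg.inv(ctrb) @ chiA

print(sorted(np.linalg.eigvals(A - B @ K).real))
```

The observer gain L is obtained from the same routine via duality, by calling it with Aᵀ and Cᵀ in place of A and B and transposing the result.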

2.5.2 Pre-filter

Using the state feedback K has the disadvantage that when the reference is not zero, an offset appears in the output, which needs to be removed by an input pre-filter. With the state feedback, the system equations are:

xk+1 = Ar xk + Br uk
yk = Cr xk
uk = −K xk + wk

and thus:

xk+1 = Ar xk + Br (−K xk) + Br wk
yk = Cr xk

When the transients of the system have decayed and the system is in steady state, i.e. xk = xk+1 for k → ∞, the final value of the measured output is:

yk = −Cr (−I + Ar − Br K)⁻¹ Br wk    (2.37)

Since in general −Cr (−I + Ar − Br K)⁻¹ Br ≠ I, the output has an offset. The pre-filter M corrects the offset and provides a suitable loop behavior.

xk+1 = Ar xk + Br uk    (2.38)
yk = Cr xk
uk = −K xk + M wk

The pre-filter is defined as [19]:

M = −(Cr (−I + Ar − Br K)⁻¹ Br)⁻¹    (2.39)

See Appendix A.5 for an example of pre-filter.
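As a numerical check of equation 2.39, the following sketch computes the pre-filter and verifies that the closed-loop steady-state gain from w to y equals one. K = 0 is used for simplicity, which is admissible only because the example A matrix is already stable — an assumption of this illustration, not of the text:

```python
import numpy as np

# Example system; K = 0 keeps the sketch short (A is already stable).
A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
B = np.array([[0.0094], [0.0998], [0.0953]])
C = np.array([[1.0, 0.0, 0.0]])
K = np.zeros((1, 3))

# Pre-filter (2.39)
M = -np.linalg.inv(C @ np.linalg.solve(-np.eye(3) + A - B @ K, B))
# Steady-state gain of x_{k+1} = (A - B K) x_k + B M w, y = C x
dc = C @ np.linalg.solve(np.eye(3) - A + B @ K, B @ M)
print(dc)  # ~ [[1.]]
```

For any stabilizing K the same two lines apply; only the inversion of the scalar (or square matrix) Cr(−I + Ar − BrK)⁻¹Br must be well defined.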

2.6 Target Calculation with Origin-Shifting

Target calculation and origin-shifting can be considered as a practical alternative to the pre-filter. The key idea behind target calculation is that a state regulator drives its states to zero; therefore target states and inputs are calculated from the reference, and the origin is shifted to these targets. The origin-shifting method is described in the following:

1. With the reference w, the target values for the states x̄ and the input ū are calculated by solving:

[I − A  −B; C  0] [x̄; ū] = [0; w]    (2.40)

In the case that m ≠ r, meaningful targets should be found using reconciliation calculations.

2. The manipulated variable vector is calculated as:

uk = −K (xk − x̄) + ū    (2.41)

Using an accurate model, this approach controls the system without any offset (y = w):

Target:

(I − A)¯ x = B¯ u

(2.42)

Steady State

x = Ax + Bu

(2.43)

Controller

¯) + u ¯ u = −K(x − x

(2.44)

(2.44) in (2.43)

x = Ax − BKx + BK¯ x + B¯ u

(2.45)

(2.42) in (2.45)

x = Ax − BKx + BK¯ x + (I − A)¯ x

(2.46)



(I − A + BK)x = (I − A + BK)¯ x

(2.47)



¯ x=x

(2.48)



Cx = C¯ x=y=w

(2.49)
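The target calculation of equation 2.40 reduces to one linear solve. A sketch that reproduces the targets quoted in example 3 below (matrices taken from example 2, reference w = 2):

```python
import numpy as np

# Example system and the reference from example 3.
A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
B = np.array([[0.0094], [0.0998], [0.0953]])
C = np.array([[1.0, 0.0, 0.0]])
w = 2.0

# Assemble and solve [I - A  -B; C  0] [x_bar; u_bar] = [0; w]
T = np.block([[np.eye(3) - A, -B], [C, np.zeros((1, 1))]])
rhs = np.array([[0.0], [0.0], [0.0], [w]])
sol = np.linalg.solve(T, rhs)
x_bar, u_bar = sol[:3], sol[3, 0]
print(np.round(x_bar.ravel(), 2), round(u_bar, 3))  # close to [2, 2, 1.33] and 0.667
```

The solvability of this system requires the square KKT-like matrix to be nonsingular, i.e. the plant must have no transmission zero at z = 1 for the chosen outputs.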

Example 3 (Target Calculation). Consider the system in example 1; a step reference with an amplitude of 2 is applied to the system under the following conditions:
• an exact model;
• identical initial values;
• a state regulator;
• target calculation and origin-shifting.

The target of the states x is calculated as [2, 2, 1.33] and the input target u is 0.6667. The offset is removed and the output matches the reference w (figure 2.5), while the uncontrolled system has an offset in the output.

Consideration of measurable disturbances in the state controller
If the disturbances zk of the system are measurable, a disturbance compensator can address them. The input of the system including the term uz,k for the disturbance is:

uk = −K xk + M wk + uz,k    (2.50)

From the measured disturbance zk a compensation signal uz,k is computed and added to the input of the system. Solving a minimization problem, this compensation signal is determined as follows:

Figure 2.5: Target Calculation, as an alternative to the pre-filter, removes the output offset.


The compensation signal uz is chosen to cancel the effect of the disturbance on the states, i.e. the residual should vanish:

ε = B uz + E zk = 0    (2.51)

Solving this minimization problem in the least-squares sense yields the additive disturbance compensator:

uz = −(Bᵀ B)⁻¹ Bᵀ E zk    (2.52)

The other solution to remove the effect of measurable disturbances is to use the target calculation as follows:

[I − A  −B; C  0] [x̄; ū] = [E zk; w]    (2.53)

Conclusion
If a disturbance is measurable, a disturbance compensator can compensate it; this kind of disturbance is not harmful to the controller (see A.6).
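The compensator in equation 2.52 is the least-squares solution of equation 2.51. A minimal sketch with a hypothetical disturbance input matrix E and measured disturbance z, both invented for illustration:

```python
import numpy as np

# B from the example system; E and z are hypothetical illustration values.
B = np.array([[0.0094], [0.0998], [0.0953]])
E = np.array([[0.02], [0.05], [0.01]])   # hypothetical disturbance input matrix
z = np.array([[1.5]])                    # hypothetical measured disturbance

# Equation 2.52: u_z = -(B^T B)^{-1} B^T E z
u_z = -np.linalg.solve(B.T @ B, B.T @ E @ z)
# Least-squares optimality condition: B^T (B u_z + E z) = 0
print(B.T @ (B @ u_z + E @ z))  # ~ [[0.]]
```

Note that B uz + E z itself is generally nonzero unless E z lies in the range of B; the compensator only minimizes its norm.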

2.6.1 Set point Adjustment under unknown disturbances and model uncertainties

An offset can be the result of unmeasured disturbances or of model mismatch. Irrespective of its origin, it should always be ensured that no offset appears in the output of a state space controller.

State Controller with Offset
When the model of the observer is not accurate, unmeasurable disturbances result in a deviation of the output yk from the reference wk. Although a state controller as described before can eliminate the offset resulting from a nonzero reference, it is unable to compensate offsets which are the result of unmeasurable disturbances or model mismatch. The standard state controller uk = −K xk is useful when the system is controllable, all states are controlled towards zero, and the reference w is zero; in fact, the state controller behaves like a proportional controller. There are other situations in which the state controller works with an offset in the output:
1. The reference value is not zero.
2. The model does not match the real system.
3. Unknown disturbances act on the system.
In the first case, a pre-filter or origin-shifting helps the controller to eliminate the offset. In the second or third case, the state controller should be extended by a disturbance observer, which adds the estimated disturbance to the controller, or by a PI state regulator.

2.7 Disturbance Observer and Disturbance Compensator

The following control system includes a pre-filter:

xk+1 = Ar xk + Br uk    (2.54)
x̂k+1 = A x̂k + B uk + Bd d̂k + Lx (yk − ŷk)    (2.55)
d̂k+1 = d̂k + Ld (yk − ŷk)    (2.56)
uk = −K x̂k + M w − d̂k    (2.57)
yk = Cr xk    (2.58)
ŷk = C x̂k    (2.59)
M = −(C (−I + A − B K)⁻¹ B)⁻¹    (2.60)

It can be shown that the estimated disturbance d̂ is exact under a condition on the matrices A, B, Lx and C (see Appendix A.7).

For Ty > q, ys increases significantly. The reason is that the linear penalty q penalizes the deviation of the controlled variable from its set-point ȳ. On the other hand, the term −Ty ys in equation A.30 requires a large value of ys to minimize the main objective function of the target calculation problem. For Ty ≤ q, the term qᵀη dominates −Ty ys; therefore, ys is forced toward the set-point. As Ty becomes greater than q, ys moves toward its maximum admissible value. As can be seen in table 3.7, due to a large Ty, ys violates the soft constraint although it is admissible inside the range [−∞, 1.2], which is not desired. Indeed, Ty must not be greater than q, which is 1000 in this example. M is the weighting matrix in equation A.30; it creates a free zone around the admissible range of the controlled variable. The smaller M is, the narrower the free zone. If M is small enough, for instance M ≤ 0.01 in this example, it counteracts the effect of Ty = 1100. Any constraint on the manipulated variable is exact and cannot be violated.

Table 3.7: Target calculation: Tuning Ty.

Table 3.8 shows how Tu influences the targets. Tu forces the inputs to become smaller and pushes them toward their minimum admissible values, although they never violate their constraints.


Table 3.8: Target calculation: Tuning Tu.

Tu counteracts the effect of the quadratic penalty Rs, which penalizes the deviation of the inputs from their set-points. As shown in table 3.8, for Tu = 5000 and M = 0.01, us cannot be less than −2.381: although Tu drives the manipulated variable to be small, the controlled variable cannot exceed its minimum value because M is very small, which limits the manipulated variable as well.

Conclusion
It can be concluded that, to ensure that the soft constraints are exact, M must be set to a small number and the economic optimization parameter Ty must not be greater than q. Then the controlled variable can never violate the soft constraints for economic reasons.

3.3 Moving Horizon Regulator

Having (xs, us) from the target calculation and xk from the estimator, the deviation variables in 3.6 on page 22 can be found. The infinite horizon optimal control problem introduced in equation 3.7 on page 22 is not easy to compute. To reduce the computational cost, the horizon is limited to N time steps. It has been shown that if the origin stays in the feasible region and equation 3.7 on page 22 has a solution, then the state and input trajectories {wj, vj}_{j=0}^∞ approach zero. Moreover, there exists a finite N such that the state trajectory {wj}_{j=N}^∞ stays inside the feasible set [23]. There are several approaches to find a proper length of the horizon. In this work, N is a tuning parameter which is found experimentally; it should be large enough to capture the steady-state effects of all estimated future inputs [25]. At each time step, the moving horizon regulator calculates the sequence {vj}_{j=0}^N and the first element of this sequence is injected into the plant as the input. The infinite horizon objective in equation 3.7 on page 22 is expressed as a finite horizon objective as follows:

min_{gj, vj} (1/2) Σ_{j=0}^{N} gjᵀ QK gj + vjᵀ SK vj + Δvjᵀ RK Δvj    (3.20)

subject to:
g0 = xk − xs,   v−1 = uk−1 − us
gj+1 = A gj + B vj
umin − us ≤ vj ≤ umax − us
−Δumin ≤ Δvj ≤ Δumax
ymin − ȳ ≤ C gj + D vj ≤ ymax − ȳ
for j = 0, 1, ..., N,

which has a unique solution under the following assumptions [24]:
• RK, SK and QK are symmetric positive semi-definite;
• (gj, vj) = (0, 0) stays inside the feasible region;
• the pair (A, B) is stabilizable;
• the pair (A, QK^{1/2} C) is detectable.
Remark: When the pair (A, QK^{1/2} C) is detectable, unstable modes cannot influence the system without affecting the objective function [7].

The finite horizon objective function introduced in 3.20 may not give a stable controller [8]. To ensure closed-loop stability, it is suggested to incorporate an additional term, the terminal penalty, into the cost function. The terminal penalty has the quadratic form [gN, vN−1]ᵀ P [gN, vN−1], where gN = xk+N − xs and vN−1 = uk+N−1 − us, the deviations of the states and inputs from the computed target values at the end of the control horizon, are penalized, and P is calculated from the Riccati difference equation [20].

Infeasible ȳ
If the set-point ȳ is infeasible, the target calculation finds targets such that the resulting output is as close as possible to the set-point. Since the soft constraints are relaxed in the target calculation, the output may violate the admissible region, which may render the regulator infeasible. Even if the soft constraints were not relaxed in the target calculation, the targets would be calculated such that the resulting output meets the maximum or minimum of the admissible region; in this case, a small unknown disturbance can make the controller infeasible. Relaxing the soft constraints in the regulator is a way to retain its feasibility. A new slack variable ε is introduced and penalized in the l1/l2 optimization sense by the linear and quadratic penalties ZL and ZQ. Although a linear penalty alone could penalize ε, a quadratic term is added to ensure that the QP remains strictly convex; otherwise the total quadratic matrix of the problem would contain zero elements on the diagonal, whereas strict convexity requires this matrix to be positive definite. Finally, the MPC cost function is:

min_{gj, vj} (1/2) Σ_{j=0}^{N} (gjᵀ QK gj + vjᵀ SK vj + (vj − vj−1)ᵀ RK (vj − vj−1) + εᵀ ZQ ε + ZL ε)
           + (1/2) [gN; vN−1]ᵀ P [gN; vN−1]    (3.21)

subject to:
g0 = xk − xs,   v−1 = uk−1 − us
gj+1 = A gj + B vj
umin − us ≤ vj ≤ umax − us
−Δumin ≤ Δvj ≤ Δumax
ymin − ȳ − ε ≤ C gj + D vj ≤ ymax − ȳ + ε
ε ≥ 0
for j = 0, 1, ..., N.

P is the steady-state solution of the Riccati equation:

P = Ãᵀ P Ã + Q̃ − Ãᵀ P B̃ (SK + B̃ᵀ P B̃)⁻¹ B̃ᵀ P Ã    (3.22)

where the matrices Ã, B̃ and Q̃ are:

Ã = [A  B; 0_{r×n}  I_{r×r}],   B̃ = [B; I_{r×r}],   Q̃ = [QK  0_{n×r}; 0_{r×n}  0_{r×r}]
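The steady-state solution of the Riccati equation can be obtained by iterating the recursion until it converges. A sketch on the example system with assumed weights QK = I and SK = 1:

```python
import numpy as np

# Example system; Q_K and S_K are assumed weights for this sketch.
A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
B = np.array([[0.0094], [0.0998], [0.0953]])
QK = np.eye(3)
SK = np.eye(1)

# Augmented (velocity-form) matrices of equation 3.22
At = np.block([[A, B], [np.zeros((1, 3)), np.eye(1)]])
Bt = np.vstack([B, np.eye(1)])
Qt = np.block([[QK, np.zeros((3, 1))], [np.zeros((1, 3)), np.zeros((1, 1))]])

# Iterate the Riccati difference equation to its steady state
P = Qt.copy()
for _ in range(5000):
    S = SK + Bt.T @ P @ Bt
    Pn = At.T @ P @ At + Qt - At.T @ P @ Bt @ np.linalg.solve(S, Bt.T @ P @ At)
    if np.max(np.abs(Pn - P)) < 1e-12:
        P = Pn
        break
    P = Pn

# The associated feedback gain stabilizes the augmented system
Kt = np.linalg.solve(SK + Bt.T @ P @ Bt, Bt.T @ P @ At)
print(np.max(np.abs(np.linalg.eigvals(At - Bt @ Kt))))  # spectral radius < 1
```

This P is exactly the terminal weight used in equation 3.21; in a production implementation one would typically call a dedicated DARE solver instead of a fixed-point loop.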

In this formulation, the calculated manipulated variable drives the system as fast as possible to the set-points. If a large disturbance enters the system and pushes it to its limitations, and the regulator, due to the hard constraints on the manipulated variable, is unable to reject the disturbance, an undesired offset appears in the output. Pannocchia in [5] modified the MPC regulator with a linear penalty to keep the controlled variable as close as possible to its set-point:

min_{gj, vj} (1/2) Σ_{j=0}^{N} (gjᵀ (QK gj + 2q) + vjᵀ SK vj + (vj − vj−1)ᵀ RK (vj − vj−1) + εᵀ ZQ ε + ZL ε)
           + (1/2) [gN; vN−1]ᵀ (P [gN; vN−1] + 2p)    (3.23)

where:

q = Cᵀ (ȳk − ȳ)
p = (I − (Ã + B̃ K̃)ᵀ)⁻¹ [q; 0]

Thus q, and consequently p, are non-zero if the set-point is unreachable. In 2008, Rawlings in [10] introduced a new solution for this problem: removing the target calculation from MPC and replacing y − ys by y − ȳ in the regulator. In A.8.2 it is described how the MPC regulator has been implemented in Matlab.

3.3.1 Tuning Parameters

• QK penalizes the deviation of the states from their targets. Once the system is excited by a new input (i.e. the set-point is changed, a new disturbance acts on the system, etc.), the regulator moves the states towards the calculated target values (xs, us) to minimize the cost function. For a large QK, the regulator pushes the states towards the targets fast and forcefully, which may cause overshoots and oscillations. Aggressive control action is undesired because, although it moves the states faster and rejects disturbances more strongly, it causes sudden changes in the manipulated variable and applies impulses to the actuator. On the other hand, for a small QK, the controller is less sensitive to disturbances and set-point changes, and the control action is slow and smooth. A compromise should be found between a fast and a non-aggressive control action.
• SK penalizes the deviation of the manipulated variable from its target. For a large SK, uj is pushed towards the input target us rapidly. This means that the regulator does not adjust the manipulated variable to move the states toward their targets, but instead tries to bring it to its own target as soon as possible. As a result, uj does not oscillate, has no overshoot and moves to its target quickly. If the ratio between SK and QK is large, the regulator moves the inputs towards their targets rapidly but the states move slowly: the quadratic term vjᵀ SK vj reduces the cost function faster than the term gjᵀ QK gj. Generally, a larger SK results in a smoother control action and limited changes in the manipulated variable.
• RK limits the rate of change of the control signal uj. For a larger RK, the control signal has fewer overshoots and no impulsive behavior. It is better for the actuator to limit impulse inputs, since they may damage the actuator or drive it into saturation. On the other hand, the regulator then cannot excite the system with an impulse or a sudden change in the manipulated variable; therefore the control action is non-aggressive, the disturbance rejection is weaker, and the states move more slowly.
• ZL and ZQ are set to large positive values to guarantee the feasibility of the regulator.

3.3.2 Conclusion

In A.8.3, example 15, it is shown how the regulator parameters influence the control action. It can be concluded that:
• For a larger QK, the disturbance rejection becomes stronger and the control action becomes faster.
• For a large SK, the disturbance rejection becomes weaker and the controller becomes less sensitive and non-aggressive.
• As RK increases, the control action becomes smoother and slower, the peak in the manipulated variable becomes smoother and the overshoot vanishes.
• ZL is only effective when ȳ is infeasible due to the output constraints. A large ZL then yields an aggressive control action and an overshoot in the manipulated variable.

Chapter 4

Single Adjuster Tuning

4.1 Introduction

The idea behind the Single Adjuster Tuning is to provide a structure in which, by adjusting only one or a limited number of tuning elements, the regulator is tuned properly. In MPC controllers, which comprise several separate parameters for the target calculation, the Kalman filter and the regulator, single adjuster tuning can reduce the complexity of tuning significantly and ease it for any user.

4.2 Table of Parameters

In MPC, some of the parameters are implied by system identification or by the physical properties of the devices, while other parameters are tuned according to the user's demands. Table 4.1 contains the list of parameters. RKF is usually determined practically from off-line sample measurements. Generally, selecting a large value for RKF on the one hand means less trust in the measurements and on the other hand slows down the Kalman filter. Tuning QKF is even more complicated: if the process model is poor, the Kalman filter works better with a larger QKF, and in this case it is important to have a more reliable measurement. In other words, a larger QKF means less trust in the model accuracy. QKF is determined by system identification and represents the model mismatch. The target calculation parameters and the regulator parameters are tuned automatically or semi-automatically. Parameters such as ZL and ZQ as well as ρL and ρQ are tuned automatically because changing their values does not change the process behavior; it is enough to choose meaningful matrices for these parameters which guarantee the convexity of the regulator objective function. The effective parameter in the soft constraint relaxation is the matrix M, which is tuned in a semi-automatic way. Qs and QK are diagonal matrices and determine which state or controlled variable should be tuned faster than the others. Rs and SK are also diagonal symmetric matrices, which weight the manipulated variables. RK penalizes the rate of change of the manipulated variables. Increasing Rs and SK has the same influence on the control action as reducing Qs and QK; this means that it is possible to set Qs and QK at the beginning, keep them constant during the process, and tune the controller by changing merely Rs and SK, or conversely.


Economical optimization parameters such as Tu and Ty are set by the user according to economic considerations. Limitations such as ymin and ymax are set according to the physical limitations of the sensors and the actuators.

Symbol — Explanation — Calculation
Kalman filter:
  RKF — Cov(σ), where σ is the measurement error — from measurement
  QKF — Cov(model mismatch) — from system identification
Target calculation:
  Qs — penalty on the deviation of CV* from its target: ys − ȳ — semi-automatic
  Rs — penalty on the deviation of MV** from its target: us − ū — semi-automatic
  qs — linear penalty on the deviation of CV from its target: ys − ȳ — automatic
  ρQ, ρL — penalty on the soft constraint violation — automatic
  M — admissible violation of the soft constraint — semi-automatic
Regulator:
  QK — penalty on the deviation of CV from its targets — semi-automatic
  SK — penalty on the deviation of MV from its targets — semi-automatic
  RK — penalty on the rate of change of MV — semi-automatic
  ZL, ZQ — the same as ρQ, ρL — automatic
Economical optimization:
  Tu — the economic coefficient of MV — user
  Ty — the economic coefficient of CV — user
Global adjusters:
  rKF — the global regulation adjuster — user
  rKF — the global observation adjuster — user
  rDX — the global disturbance observation adjuster — user
Limitations and set-points:
  ymin, ymax — the lower and upper limits of CV — user
  umin, umax — the lower and upper limits of MV — user
  Δmin, Δmax — the lower and upper limits of the actuator speed — user
  ȳ — the set-point of CV — user
  ū — the set-point of MV — user
* CV stands for controlled variable. ** MV stands for manipulated variable.

Table 4.1: Table of parameters for MPC regulators.

4.3 Normalization

In state space models, the inputs, the outputs and the states are physical signals described in physical units. When tuning weighting matrices in MPC such as RK and QK, these differences in units and magnitudes must be taken into account; otherwise the resulting matrices may be ill-conditioned. To avoid this situation, normalization of the states, the inputs and the outputs is recommended. The normalized variables are dimensionless and typically have values around 1. In A.9 the different approaches to normalization are described.
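A minimal sketch of one possible normalization — scaling each signal by the width of its admissible range, so that the normalized values are dimensionless and of order 1. The ranges and values below are hypothetical:

```python
import numpy as np

# Hypothetical admissible ranges of two outputs and one input,
# e.g. a temperature in K, a level in m, a flow in m^3/s.
y_min, y_max = np.array([0.0, -5.0]), np.array([400.0, 5.0])
u_min, u_max = np.array([0.0]), np.array([1e-3])

Ny = np.diag(1.0 / (y_max - y_min))   # output normalization matrix
Nu = np.diag(1.0 / (u_max - u_min))   # input normalization matrix

# Physical signals inside their ranges map to values of order 1
y = np.array([350.0, 1.2])
u = np.array([4e-4])
print(Ny @ y, Nu @ u)
```

The same diagonal matrices are then used to transform the model and the weighting matrices, so that the tuning parameters compare signals of comparable magnitude.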

4.4 Calculating Parameters from a priori information

As mentioned in the introduction of this chapter, to compute the parameters introduced in table 4.1, the user must know some data a priori. Assuming that the user has a set of data similar to the parameters in table 4.2, it is possible to tune the MPC regulator and observer and to calculate the normalization matrices. Besides these parameters, the user may know the operating point of the process, uop and yop. As described in A.9.2, knowing the operating points of the system eases the calculation of the normalization matrices.

     | Limitation                     | Set-point | Importance | Costs   | Violation of limits | Measurement error | Model mismatch
     | min    max    Δmin    Δmax     |           |            |         |                     |                   |
  y  | ymin   ymax   -       -        | ȳ         | Qreal      | Ty,real | Mreal               | σy                | Mis
  u  | umin   umax   Δu,min  Δu,max   | ū         | Rreal      | Tu,real | -                   | -                 | -

Table 4.2: A priori data.

In table 4.2 there are four different groups of parameters. The first group contains the limitations, which represent the physical limitations of the actuators and the devices. The second group of parameters are the set-points. The third group contains Qreal and Rreal, which are defined by the user and specify which controlled variables and manipulated variables are more important than the others. The user must specify the costs of the manipulated and controlled variables based on economic considerations: the most expensive product has the largest value in the vector of controlled variable costs Ty and, similarly, the most expensive ingredient has the largest value in the vector of manipulated variable costs Tu. M is the maximum admissible violation of the soft constraints. The fourth group contains the parameters which are found through system identification and represent the model mismatch and the measurement error; based on these two parameters the Kalman filter is tuned.

4.4.1 Kalman Filter Parameters

R_KF is calculated from the measurement error σ, which is known from system identification:

    Ŕ_KF = diag{cov(σ)}    (4.1)

The resulting matrix is then normalized:

    R_KF = N_y · ( Ŕ_KF / max(diag(Ŕ_KF)) ) · N_y    (4.2)
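As a sketch of (4.1)-(4.2), assuming sigma holds measurement-error samples from system identification (one column per output) and that the output normalization matrix N_y has already been computed:

```python
import numpy as np

# Hypothetical measurement-error samples for a 2-output system.
rng = np.random.default_rng(0)
sigma = rng.normal(scale=(0.5, 0.2), size=(500, 2))

# (4.1): diagonal of the sample covariance of the measurement errors.
R_acute = np.diag(np.diag(np.cov(sigma, rowvar=False)))

# (4.2): normalize to the largest diagonal entry and scale by Ny (assumed).
Ny = np.diag([0.1, 0.5])
R_KF = Ny @ (R_acute / R_acute.diagonal().max()) @ Ny
```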


CHAPTER 4. SINGLE ADJUSTER TUNING

Q_KF is calculated from the stationary model mismatch in percent:

    Q́_KF = diag{cov(Model Mismatch %)}    (4.3)

and the final normalized matrix is:

    Q_KF = [X_N 0_{n×m}; 0_{m×n} D_N] (C 0)^T · ( Q́_KF / max(diag(Q́_KF)) ) · (C 0) [X_N 0_{n×m}; 0_{m×n} D_N]    (4.4)

Disturbance Observation Adjuster: Since the unknown disturbance cannot be measured and its dynamics are unknown, a tuning parameter r_DX is required to adjust the speed of the disturbance estimator relative to the state estimator. The disturbance observation adjuster influences Q_KF in the following way:

    Q_KF = [X_N·r_DX^{-1} 0_{n×m}; 0_{m×n} D_N·r_DX] (C 0)^T · ( Q́_KF / max(diag(Q́_KF)) ) · (C 0) [X_N·r_DX^{-1} 0_{n×m}; 0_{m×n} D_N·r_DX]    (4.5)

Therefore, if the disturbance estimation is initially slow compared to the state estimation, the observer is adjusted by increasing r_DX; if the disturbance estimator is too fast and oscillatory, it is adjusted by reducing r_DX. The disturbance observation adjuster tunes only the relative speed between disturbance and state estimation and is not sufficient to tune the whole observer.

Global Adjuster of the Kalman Filter: The global adjuster r_KF is used for tuning the Kalman filter by influencing R_KF. r_KF^2 is multiplied with R_KF and consequently influences the filtering. Increasing r_KF strengthens the filter and decreasing r_KF weakens it.

Summary: It is possible to adjust the Kalman filter with only two parameters:

• The disturbance observation adjuster r_DX tunes the relative speed between disturbance and state estimation. A larger r_DX results in a faster disturbance estimation and a slower state estimation.

• The global adjuster r_KF tunes the whole estimator by moving its eigenvalues. A larger r_KF results in a slower state and disturbance estimation.
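To make the two knobs concrete, the following sketch (with assumed dimensions and placeholder normalized covariance matrices) shows how r_KF and r_DX enter the filter matrices:

```python
import numpy as np

# Hypothetical sizes: n = 2 model states, m = 1 disturbance state.
n, m = 2, 1
R_KF = np.diag([0.01, 0.04])     # placeholder normalized measurement covariance
Q_KF = np.eye(n + m)             # placeholder normalized process covariance

def tuned_covariances(r_KF, r_DX):
    # Global adjuster: r_KF^2 scales the measurement covariance, so a larger
    # r_KF trusts the measurements less and slows the whole estimator.
    R_t = r_KF**2 * R_KF
    # Disturbance adjuster: scale the disturbance block up by r_DX and the
    # state block down (cf. 4.5), so a larger r_DX speeds up the disturbance
    # estimation relative to the state estimation.
    S = np.diag([1.0 / r_DX] * n + [r_DX] * m)
    return R_t, S @ Q_KF @ S

R_t, Q_t = tuned_covariances(r_KF=2.0, r_DX=3.0)
```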


Initialization of the Global Adjuster and the Disturbance Observation Adjuster of the Kalman Filter

Although the user can tune the Kalman filter during operation, it is desirable to initialize it such that the eigenvalues of the closed-loop system are as close as possible to the eigenvalues of the original system. Moreover, oscillation, instability and fast dynamics are not desired. In this section, initial values for r_KF and r_DX are found such that the resulting closed-loop eigenvalues stay as close as possible to the ideal poles. The ideal poles are defined under the following conditions:

• They are as close as possible to the original system poles, so that the closed-loop system behaves similarly to the process.

• Since oscillation is not desired, if the original poles are complex conjugate, the ideal poles are the closest possible real values to the real part of the complex conjugate poles.

• The ideal poles must be stable. Since the estimated disturbance part is always an integrator, the corresponding poles are replaced by a smaller value (in this work they have been replaced by 0.3).

• The ideal poles have no multiplicity, because it is not possible to move more than one eigenvalue to the same place; therefore, if the original poles have multiplicity, one or both of them must be moved.

• The ideal poles should have a positive real part, because the estimator should not exhibit ringing.

An optimization problem is formulated to find r_KF and r_DX so that the closed-loop poles P_cl are as close as possible to the ideal poles P_ideal:

    {r_KF, r_DX} = arg min  ρ·Im(P_cl)^T Im(P_cl) + φ·(Re(P_cl) − P_ideal)^T (Re(P_cl) − P_ideal)    (4.7)

subject to:

    0 = −K + P C^T (C P C^T + r_KF^2 R_s)^{-1}
    0 = −P + A ( P − P C^T (C P C^T + r_KF^2 R_s)^{-1} C P ) A^T + [r_DX^{-1} 0_{n×m}; 0_{m×n} r_DX] Q_s [r_DX^{-1} 0_{n×m}; 0_{m×n} r_DX]

where ρ and φ are weighting factors to prioritize the objectives. If it is more important for the closed-loop poles to have real parts similar to the ideal poles than to have small imaginary parts, φ is chosen greater than ρ. In this work ρ and φ are set to 1 and 10, respectively.
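A sketch of problem (4.7) for a hypothetical 2-state, 1-output system is shown below. For brevity only r_KF is optimized here (r_DX would enter through the block scaling of Q_s exactly as in the constraint of 4.7); for each candidate value the filter Riccati equation is iterated to steady state and the resulting estimator poles are compared with assumed ideal poles:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.9, 0.2], [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
Rs = np.array([[0.05]])
Qs = 0.01 * np.eye(2)
P_ideal = np.array([0.85, 0.6])     # assumed ideal (real, stable) poles
rho, phi = 1.0, 10.0

def estimator_poles(r_KF):
    P = np.eye(2)
    for _ in range(300):            # iterate the Riccati equation to steady state
        S = C @ P @ C.T + r_KF**2 * Rs
        P = A @ (P - P @ C.T @ np.linalg.solve(S, C @ P)) @ A.T + Qs
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + r_KF**2 * Rs)
    return np.linalg.eigvals(A - A @ K @ C)   # predictor-form closed loop

def cost(theta):
    p = estimator_poles(theta[0])
    return (rho * np.sum(p.imag**2)
            + phi * np.sum((np.sort(p.real) - np.sort(P_ideal))**2))

res = minimize(cost, x0=[1.0], method='Nelder-Mead')
r_KF_init = res.x[0]
```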

4.4.2 Regulator Parameters

Tuning the regulator consists of setting the parameters for the target calculation function and for the main MPC function. Some parameters appear in both functions, such as Q_s, R_s, ρ_Q and ρ_l in the target calculation, which correspond to Q_K, S_K, Z_Q and Z_l in the main regulator function. The economic optimization parameters T_u and T_y and the linear penalty q_s appear only in the target calculation function. The quadratic penalty on the rate of change of the manipulated variable, R_K, appears only in the main regulator function.


Q_s in the target calculation function and Q_K in the regulator are both calculated from Q_real, as shown in table 4.2. As mentioned in 4.4 on page 41, by setting Q_real the user specifies which controlled variables must be controlled faster than the others. The difference between Q_s and Q_K is that Q_s is influenced by the economic optimization weight T_y: if the economic optimization is not zero, the effect of Q_s on the controlled variable target is decreased. Therefore, the economic optimization parameters are calculated first from the a priori information:

Manipulated variable:    T_u = T_u,real / max(T_u,real)    (4.8)

Controlled variable:    T_y = T_y,real / max(T_y,real)    (4.9)

Q_s:    Q́_s = diag(q_s,1, …, q_s,m)

where

    q_s,i = Q_real(i,i) / max(Q_real)                       ∀ T_y,real,i = 0
    q_s,i = Q_real(i,i) / ( max(Q_real) · max(T_y,real) )   ∀ T_y,real,i ≠ 0

    Q_s = Y_N^{-1} Q́_s Y_N^{-1}

Q_K is similarly calculated from Q_real as follows:

    Q_K = Y_N^{-1} ( Q_real / max(diag(Q_real)) ) Y_N^{-1}
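The piecewise definition of q_s,i can be sketched as follows; the idea is that the quadratic set-point weight of an output is reduced when that output also carries an economic cost. The numbers below are hypothetical:

```python
import numpy as np

Q_real = np.array([10.0, 15.0, 0.1])
Ty_real = np.array([0.0, 2.0, 0.0])    # only output 1 has an economic cost

# q_s,i = Qreal(i,i)/max(Qreal) where Ty,real,i = 0, and is additionally
# divided by max(Ty,real) where Ty,real,i != 0.
q_s = Q_real / Q_real.max()
has_cost = Ty_real != 0
q_s[has_cost] = Q_real[has_cost] / (Q_real.max() * Ty_real.max())
Q_s_acute = np.diag(q_s)
```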

R_s in the target calculation function and S_K in the MPC regulator are both calculated from R_real, by which the user specifies which manipulated variables must be moved faster than the others:

    R_s = U_N^{-1} ( R_real / max(diag(R_real)) ) U_N^{-1}

S_K is influenced by the global regulator adjuster s_Cr, by which the user tunes the regulator:

    S_K = s_Cr^2 U_N^{-1} ( R_real / max(diag(R_real)) ) U_N^{-1}    (4.10)

By changing s_Cr the user moves the eigenvalues of the controller: the larger s_Cr, the slower the controller. M_real from table 4.2 is used in both the target calculation function and the regulator and is normalized as follows:

    M = N_y · ( M_N / max(diag(M_N)) )


The rest of the parameters are not specified by the user but are set automatically according to the conditions which guarantee the convexity of the objective functions:

    ρ_Q  and  Z_Q = 10^3 I_{m×m}    (4.11)

    ρ_L  and  Z_L = 10^5 (1 ⋯ 1)_{1×m}    (4.12)

The linear penalty on the controlled variable q_s in the target calculation function is also defined automatically and set to a large number.

Initialization of the global regulator adjuster

As mentioned, the user can tune the regulator during operation with the global regulator adjuster s_Cr. As for the Kalman filter, it is desirable to initialize the regulator. The initial value of s_Cr is calculated by an optimization problem which finds s_Cr such that the closed-loop eigenvalues of the system are placed on the real axis with enough margin from the stability border, while the imaginary parts of the closed-loop eigenvalues are minimized. As for the Kalman filter adjuster initialization, ideal poles are defined. The ideal poles contain those eigenvalues of the system which are stable and real; if the original system has complex conjugate poles, they are replaced by the closest real poles. Since aggressive control action is not desired, poles near 1 are shifted toward the origin. After finding the ideal poles, the optimization problem minimizes the distance between the eigenvalues of the closed-loop system and the ideal eigenvalues:

    s_Cr = arg min  γ·Im(P_cl)^T Im(P_cl) + φ·(Re(P_cl) − P_ideal)^T (Re(P_cl) − P_ideal)    (4.13)

subject to:

    0 = −K + P C^T (C P C^T + s_Cr^2 S_K)^{-1}
    0 = −P + A ( P − P C^T (C P C^T + s_Cr^2 S_K)^{-1} C P ) A^T + Q_K

4.4.3 Conclusion

In A.10, examples 16 on page 88, 17 on page 89, 18 on page 92 and 19 on page 92 show how the global adjusters influence the control action and the state estimation. The examples show that:

• For larger s_Cr, S_K is larger and consequently the control action is smoother and slower, while for smaller s_Cr, S_K is smaller, which yields a fast and aggressive control action.

• For r_DX > 1, the disturbance estimation is faster than the state estimation and, conversely, for r_DX < 1 the state estimation is faster than the disturbance estimation.

• As r_KF becomes larger, the estimated states and disturbances converge to the real values more slowly.

Chapter 5

Constrained State Estimation

5.1 Introduction

In industrial processes, state constraints are of high importance. State constraints mostly express the physical boundaries of the estimated internal states; e.g., an estimated concentration in a chemical process cannot be negative [12]. In the previous chapters, the Kalman filter has been used to estimate the states because it is the standard choice for systems exposed to unmeasurable process disturbances and measurement noise. However, one disadvantage of the Kalman filter is that state constraints do not fit into its structure. Optimization-based state estimation provides the possibility to incorporate state constraints into the estimation. In this chapter, several optimization-based constrained estimation approaches are introduced and compared with each other. The simplest example of optimization-based constrained state estimation is the constrained Kalman filter (CKF), which is discussed in the following section.

5.2 Constrained Kalman Filter

Similar to the Kalman filter, the CKF consists of two steps: in the first step the states are predicted and in the second step the states are corrected. In the CKF, the second step contains an optimization problem which is subject to the state constraints, while in the Kalman filter the second step is calculated analytically [18].

5.2.1 Problem Formulation

Consider the following linear model, in which w is the model error and v is the measurement error:

    x_{k+1} = A x_k + B u_k + B_d d_k + w_k
    y_k = C x_k + v_k    (5.1)
    x_0 = x̄_0 + w_0

The optimization problem of the CKF is formulated such that the model and measurement errors are minimized:


    {ŵ_{k−1|k}, v̂_{k|k}} = arg min  ŵ_{k−1|k}^T P_{k|k−1}^{-1} ŵ_{k−1|k} + v̂_{k|k}^T R^{-1} v̂_{k|k}    (5.2)

subject to:

    x̂_{k|k} = x̂_{k|k−1} + ŵ_{k−1|k}
    v̂_{k|k} = y_k − C x̂_{k|k}
    w_min < ŵ_{k−1|k} < w_max
    v_min < v̂_{k|k} < v_max
    x_min < x̂_{k|k} < x_max

where y_k is the last measurement and x̂_{k|k−1} is the state predicted in the previous time step:

    x̂_{k|k−1} = A x̂_{k−1|k−1} + B u_{k−1}

The objective function in 5.2 can be represented as an explicit function of ŵ_{k−1|k}:

    ŵ_{k−1|k} = arg min  ŵ_{k−1|k}^T P_{k|k−1}^{-1} ŵ_{k−1|k} + (y_k − C x̂_{k|k−1} − C ŵ_{k−1|k})^T R^{-1} (y_k − C x̂_{k|k−1} − C ŵ_{k−1|k})    (5.3)

subject to:

    x̂_{k|k} = x̂_{k|k−1} + ŵ_{k−1|k}
    w_min < ŵ_{k−1|k} < w_max
    x_min < x̂_{k|k} < x_max

The constraint on ŵ_{k−1|k} can be derived from the output constraint:

    y_min − C x̂_{k|k−1} < C ŵ_{k−1|k} < y_max − C x̂_{k|k−1}

P_{k|k−1} is calculated iteratively from the Riccati equation:

    P_{k|k−1} = A ( P_{k−1|k−2} − P_{k−1|k−2} C^T (C P_{k−1|k−2} C^T + R)^{-1} C P_{k−1|k−2} ) A^T + Q
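The correction step above is a small quadratic program. A sketch for a hypothetical 2-state system (using a generic bounded solver rather than a dedicated QP solver) could look as follows; the system matrices and the measurement are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

C = np.array([[1.0, 1.0]])
R = np.array([[0.04]])

def ckf_correct(x_pred, P_pred, y, x_min, x_max):
    """One CKF correction (cf. 5.3): minimize over w subject to state bounds."""
    P_inv = np.linalg.inv(P_pred)
    R_inv = np.linalg.inv(R)
    def cost(w):
        r = y - C @ x_pred - C @ w           # innovation after correction
        return w @ P_inv @ w + r @ R_inv @ r
    # x_min < x_pred + w < x_max expressed as bounds on w.
    bounds = list(zip(x_min - x_pred, x_max - x_pred))
    res = minimize(cost, np.zeros(len(x_pred)), bounds=bounds, method='L-BFGS-B')
    return x_pred + res.x

# A measurement that would drive an unconstrained Kalman update negative:
x_hat = ckf_correct(np.array([0.2, 0.1]), 0.5 * np.eye(2),
                    np.array([-1.0]), np.array([0.0, 0.0]), np.array([5.0, 5.0]))
```

Without active bounds the minimizer coincides with the ordinary Kalman correction; with the bounds active, the estimate is held at the admissible boundary.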

5.2.2 Conclusion

In A.11 on page 98, example 20 shows that for an unconstrained system the Kalman filter and the CKF yield equivalent state estimates. In constrained systems, the CKF keeps the estimated states in the admissible region even if a large unknown disturbance enters the system and shifts the states towards the boundaries.

5.3 Full Information Estimation

Batch least-squares state estimation is another optimization-based state estimation, introduced by Jazwinski ([1] following [18]). It is also called Full Information Estimation because it uses all previous measurements to estimate the actual states.

5.3.1 Problem Formulation

The following linear model is assumed:

    x_{k+1} = A x_k + B u_k + B_d d_k + w_k
    y_k = C x_k + v_k    (5.4)

where w_k is the model error and v_k is the measurement error. The Full Information Estimator is formulated as:

    min_{x̂_{0|k}, ŵ_j, v̂_j}  Φ_k = Σ_{j=0}^{k−1} ŵ_j^T Q^{-1} ŵ_j + Σ_{j=0}^{k} v̂_j^T R^{-1} v̂_j + (x̂_{0|k} − x̄_0)^T P_0^{-1} (x̂_{0|k} − x̄_0)    (5.5)

subject to:

    x̂_{j+1|k} = A x̂_{j|k} + ŵ_j
    v̂_j = y_j − C x̂_{j|k}

where R^{-1}, Q^{-1} and P_0^{-1} are symmetric positive definite penalty matrices that specify the relative contribution of each term in the quadratic objective. Moreover, they are the tuning parameters of the Full Information Estimator. x̄_0 is an a priori estimate of the initial state at t = 0 [15]. The optimization problem in 5.5 can be reformulated as:

    min_{x̂_{0|k}, ŵ_j}  Φ_k = Σ_{j=0}^{k−1} ŵ_j^T Q^{-1} ŵ_j + Σ_{j=0}^{k} (y_j − C x̂_{j|k})^T R^{-1} (y_j − C x̂_{j|k}) + (x̂_{0|k} − x̄_0)^T P_0^{-1} (x̂_{0|k} − x̄_0)    (5.6)

subject to:

    x̂_{j+1|k} = A x̂_{j|k} + ŵ_j

The state at time t = k is calculated from the optimal x̂_{0|k} and ŵ_{i|k} resulting from the optimization problem in 5.6:

    x̂_{k|k} = A^k x̂_{0|k} + Σ_{i=0}^{k−1} A^{k−i−1} (ŵ_{i|k} + B u_i)

5.3.2 Stability check

Muske [17] shows that a necessary condition for the stability of the Full Information Estimator is the detectability of the model. Moreover, R, Q and the initial condition P_0 must be positive definite.

5.3.3 Conclusion

An example of Full Information Estimation is shown in A.11 on page 98, example 21. As can be seen, the Full Information Estimator bounds the states even though an unknown disturbance enters the system and moves the states toward the boundaries. For unconstrained systems, the Full Information Estimator yields the same result as the Kalman filter.

5.4 Moving Horizon Estimation

The main disadvantage of Full Information Estimation is the growing size of the problem. To bound the computational cost of the optimization problem, Muske [17] modified the Full Information Estimator to a fixed-size moving window. In the moving-window structure, the number of measurements used in each time step is fixed: in each time step k, the last N + 1 measurements are used for the estimation, where N is the window length. The main idea of Moving Horizon Estimation is to solve the optimization problem for a fixed amount of data while approximately summarizing the old data instead of taking them into account directly. How the old data are summarized is of high importance because it affects the stability of the observer.

5.4.1 Problem formulation

The linear model in 5.4 is used. The time axis is divided into two intervals: t_1 = {t : 0 ≤ t ≤ k − N} and t_2 = {t : k − N + 1 ≤ t ≤ k}. In the first interval, Moving Horizon Estimation and Full Information Estimation are equivalent. In the second interval, Moving Horizon Estimation is formulated as follows:

    min_{x̂_{k−N}, {ŵ_j}_{j=k−N}^{k−1}}  Φ_k = Σ_{j=k−N}^{k−1} ŵ_j^T Q^{-1} ŵ_j + Σ_{j=k−N}^{k} v̂_j^T R^{-1} v̂_j + θ_{k−N}(x̂_{k−N})    (5.7)

where θ_{k−N}(x̂_{k−N}) is the summary of the effect of the data {y_j}_{j=0}^{k−N} on the state x̂_{k−N}, i.e. the information from the beginning until the beginning of the horizon. This summary is called the arrival cost. Rawlings [2] has defined the arrival cost as follows:

    θ_{k−N}(x̂_{k−N}) = (x̂_{k−N} − x̂_{k−N}^{mhe})^T P_{k−N}^{-1} (x̂_{k−N} − x̂_{k−N}^{mhe}) + Φ*_{k−N}    (5.8)

where Φ*_{k−N} is the optimal value of the objective function in 5.7 at k − N. P_{k−N} is updated by the Riccati equation

    P_j = Q + A P_{j−1} A^T − A P_{j−1} C^T (R + C P_{j−1} C^T)^{-1} C P_{j−1} A^T    (5.9)

which is the covariance update formula of the Kalman filter (Jazwinski 1970), subject to the initial condition P_0.
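A compact sketch of one MHE step for a hypothetical scalar system with the constraint x ≥ 0 is given below; the decision variables are the window-initial state and the process noises over the horizon, and the arrival cost is the quadratic term of (5.8) with the Riccati-updated weight (5.9). For brevity only the window-initial state is bounded here; in a full implementation every state of the window would be constrained:

```python
import numpy as np
from scipy.optimize import minimize

a, c = 0.9, 1.0                 # hypothetical scalar model x+ = a x + w, y = c x + v
Q, R, P0 = 0.01, 0.04, 1.0
N = 5                           # horizon length

def mhe_step(ys, x_prior, P_prior):
    """ys: the N+1 measurements of the window; x_prior, P_prior: arrival cost."""
    def rollout(z):
        x = np.empty(N + 1)
        x[0] = z[0]
        for j in range(N):
            x[j + 1] = a * x[j] + z[1 + j]   # x_{j+1} = a x_j + w_j
        return x
    def cost(z):
        x, w = rollout(z), z[1:]
        v = ys - c * x                       # measurement residuals
        return (np.sum(w**2) / Q + np.sum(v**2) / R
                + (z[0] - x_prior)**2 / P_prior)   # arrival cost term of (5.8)
    bnds = [(0.0, None)] + [(None, None)] * N      # x_{k-N} >= 0 only, for brevity
    z = minimize(cost, np.zeros(N + 1), bounds=bnds, method='L-BFGS-B').x
    return rollout(z)[-1]

ys = np.array([0.5, 0.4, 0.3, 0.2, 0.1, 0.05])     # hypothetical measurements
x_hat = mhe_step(ys, x_prior=0.5, P_prior=P0)

# Arrival-cost weight update for the next window, scalar form of (5.9):
P_next = Q + a * P0 * a - (a * P0 * c)**2 / (R + c * P0 * c)
```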

5.4.2 Stability check

Rao [2] shows that under the following conditions the constrained moving horizon estimator is an asymptotically stable observer for the system 5.4:

• R, Q and the initial condition P_0 are positive definite.

• The pair (A, C) is observable.

• The horizon length N is greater than n, the number of states. This condition guarantees that the arrival cost is bounded.

• For an infinite amount of data generated by the system 5.4, there is always a feasible state and disturbance trajectory that yields bounded cost.

It has been proved that when there are no inequality constraints, MHE is equivalent to the Kalman filter [2].

5.4.3 Conclusion

The results are shown in A.11 on page 98, example A.11.3. Like the CKF and the Full Information Estimator, MHE bounds the states even though an unknown disturbance enters the system. Moreover, it is shown that MHE and Full Information Estimation are equivalent if the system is unconstrained.

5.5 Multi-scale state estimation

In industrial processes, control problems usually involve important controlled variables and quality-related measurements that are sampled at different rates or with significant time delays. A considerable amount of research on multi-rate sampled data has been done in the fields of navigation and the aerospace industry [21]. This research had relatively little impact on industrial process control; in practice, infrequent measurements are most of the time skipped or taken into account in an ad hoc manner [29]. The reason multi-rate sampled data received little attention in process control was the failure of the developed multi-rate methods to deal with process control issues (e.g. model uncertainty, constraints and model disturbances) [11]. Since the 90s and the emergence of MPC controllers, which address these issues, the topic of multi-rate sampled data has attracted more attention. Research on multi-rate sampled data has been conducted in two major directions: 1) the development of new online sensors, and 2) the development of state observers which can estimate the states of the system using measurements with time delays and sampling rates different from the controller's [28].

Krämer (2005) in [18] mentions two approaches for updating the estimation error of systems with infrequent measurements. The first approach is called the fixed structure: the error is updated whenever the infrequent measurement is received, otherwise it is kept constant. This approach is similar to zero-order-hold signal reconstruction. In the second approach, called variable-structure multi-rate estimation, the error is calculated whenever the infrequent measurement is received, otherwise it is set to zero. The fixed-structure estimation was introduced for the first time by Ohno and Horwitz ([13] following [18]).
This method has been used for non-aggressive regulation where the controller sampling rate is faster than the measurement sampling rate. Krämer et al. in [26] have extended this approach to multi-rate measurements.

5.5.1 Problem Formulation

Krämer in [18] has formulated the multi-rate problem for a system with two measurement vectors, one received at a fast sampling period and the other at a slow sampling period. In this work a sampled system with three measurement vectors is assumed: one is available at every sample time and the other two are available at slower rates. Apart from this difference, the problem is formulated as in Krämer's formulation. For simplicity it is assumed that there is no time delay and that the two slow sampling times always coincide with the fast sampling time, i.e. the two slow sampling periods are integer multiples of the fast sampling period. Figure 5.1 depicts three measurements with different sample times. ∆t represents the controller sampling interval. The estimation error is updated at each sample time. The fast measurement is received at every sample time, while the two slow measurements are received every T_s1 and T_s2 sampling intervals, respectively. Since the slow measurements are assumed to coincide with the fast measurement, T_s1 and T_s2 must be integers.

Figure 5.1: Discrete system with 3 measurement vectors and different sample times

Based on the mentioned assumptions, the variable and fixed structures are defined as follows:

• Variable structure: K_var = ( K_var^f  K_var^s1  K_var^s2 )

    x̂_{j+1} = A x̂_j + B u_j + K_var^f (y_j − C^F x̂_j)                                       if j/T_s1 ∉ ℕ, j/T_s2 ∉ ℕ
    x̂_{j+1} = A x̂_j + B u_j + K_var^f (y_j − C^F x̂_j) + K_var^s1 (y_l1 − C^s1 x̂_l1)         if j/T_s1 ∈ ℕ, j/T_s2 ∉ ℕ
    x̂_{j+1} = A x̂_j + B u_j + K_var^f (y_j − C^F x̂_j) + K_var^s2 (y_l2 − C^s2 x̂_l2)         if j/T_s1 ∉ ℕ, j/T_s2 ∈ ℕ
    x̂_{j+1} = A x̂_j + B u_j + K_var ( y_j − C^F x̂_j ; y_l1 − C^s1 x̂_l1 ; y_l2 − C^s2 x̂_l2 )  if j/T_s1 ∈ ℕ, j/T_s2 ∈ ℕ    (5.10)

• Fixed structure: K_fix = ( K_fix^f  K_fix^s1  K_fix^s2 )

    x̂_{j+1} = A x̂_j + B u_j + K_fix ( y_j − C^F x̂_j ; y_l1 − C^s1 x̂_l1 ; y_l2 − C^s2 x̂_l2 )    (5.11)

where l1 = T_s1 · ⌊j/T_s1⌋ and l2 = T_s2 · ⌊j/T_s2⌋. The function ⌊·⌋ rounds its argument to the nearest integer toward minus infinity; in other words, l1 and l2 count the two slow measurement sampling intervals while j counts the fast sampling intervals. C^F consists of the rows of C corresponding to the fast measurement; similarly, C^s1 and C^s2 consist of the rows of C corresponding to the first and second slow measurements.

Stability Condition

It is assumed that the pair (C, A) is detectable and the pair (A, B) is stabilizable; otherwise the system cannot be stabilized with the given measurements and manipulated variables [11].

Relation between Fixed Structure and Variable Structure

Krämer [18], in Theorem 5.3, shows that if K_fix and K_var are tuned properly, the fixed-structure and variable-structure state estimators of a sampled dynamic linear system yield the same steady-state value and the same convergence behavior at the slow sampling points. In this work, the Kalman filter is preferred for the state estimation: "Applying a Kalman filter to a multi-rate system always yields a variable structure KF" [18] (page 89). Finally, the multi-rate state estimation is designed as follows:

1. The new state and the Kalman filter matrix are predicted:

    x̂_p = A x̂_{j−1} + B u_j    (5.12)
    P_p = A P_{j−1} A^T + Q    (5.13)

2. The estimator gain of the variable structure is calculated as:

    K_var = P_p C_mr^T (C_mr P_p C_mr^T + R)^{-1}    (5.14)

where C_mr is defined based on the received measurements:

    C_mr = C^F               if j/T_s1 ∉ ℕ, j/T_s2 ∉ ℕ
    C_mr = ( C^F ; C^s1 )    if j/T_s1 ∈ ℕ, j/T_s2 ∉ ℕ
    C_mr = ( C^F ; C^s2 )    if j/T_s1 ∉ ℕ, j/T_s2 ∈ ℕ
    C_mr = C                 if j/T_s1 ∈ ℕ, j/T_s2 ∈ ℕ    (5.15)

3. To calculate the estimator gain of the fixed structure, the measurement covariances of the two slow measurements must be adjusted as follows:

    R_fix = diag( R^F , ∆t_1 R_var^s1 , ∆t_2 R_var^s2 )    (5.16)

where ∆t_1 and ∆t_2 are, respectively, the time interval between two consecutive first slow measurements and the time interval between two consecutive second slow measurements.
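The variable-structure update can be sketched as follows for a hypothetical 2-state system with one fast and two slow outputs (the matrices and sampling periods are assumptions): at each step the available rows of C and the corresponding measurement covariance are selected, and the gain (5.14) is rebuilt for that selection:

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # row 0 fast, rows 1-2 slow
Q = 0.01 * np.eye(2)
R_full = np.diag([0.04, 0.09, 0.09])
Ts1, Ts2 = 3, 5                        # slow periods as integer multiples of dt

def received_rows(j):
    rows = [0]                         # the fast measurement is always received
    if j % Ts1 == 0:
        rows.append(1)
    if j % Ts2 == 0:
        rows.append(2)
    return rows

def var_update(x_hat, P, j, y_full):
    rows = received_rows(j)
    C_mr, R_mr = C[rows], R_full[np.ix_(rows, rows)]
    x_p = A @ x_hat                    # prediction (B u omitted for brevity)
    P_p = A @ P @ A.T + Q
    K = P_p @ C_mr.T @ np.linalg.inv(C_mr @ P_p @ C_mr.T + R_mr)   # (5.14)
    x_new = x_p + K @ (y_full[rows] - C_mr @ x_p)
    P_new = (np.eye(2) - K @ C_mr) @ P_p
    return x_new, P_new

x_new, P_new = var_update(np.zeros(2), np.eye(2), 3, np.array([1.0, 0.5, 1.5]))
```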

5.5.2 Results

In A.11.4 on page 101, example 23 compares the fixed and variable structures; the results are depicted in figures A.42 and A.43 on page 103. As can be seen in the examples, the fixed structure is smoother than the variable structure because it has a smaller gain. On the other hand, for systems in which the slow measurements are very slow compared to the fast measurement, the variable structure yields better results than the fixed structure, because the fixed structure cannot eliminate the estimation error in the inter-sampling intervals and yields an offset. It is also shown that at the sampling times at which all measurements are received, the fixed and variable structures coincide.

5.6 Constrained Multi-rate state estimation

To incorporate constraints into the multi-rate state estimation, an optimization-based state estimation approach can be combined with the multi-rate state estimation. For nonlinear systems, Moving Horizon Estimation yields better results. In this work, however, a simple constrained Kalman filter is preferred because it has a lower computational cost than Moving Horizon Estimation, and for linear systems Moving Horizon Estimation does not yield better results than the constrained Kalman filter.

5.6.1 Problem Formulation

Using equation 5.3, the multi-rate CKF estimation has the following algorithm:

• Prediction:

    x̂_p = A x̂_{j−1} + B u_j
    P_p = A P_{j−1} A^T + Q

• The multi-rate CKF objective function is built using equation 5.3, C_mr from equation 5.15 and y_k^mr, the vector of received measurements:

    y_k^mr = y_k^f               if j/T_s1 ∉ ℕ, j/T_s2 ∉ ℕ
    y_k^mr = ( y_k^f ; y_k^s1 )  if j/T_s1 ∈ ℕ, j/T_s2 ∉ ℕ
    y_k^mr = ( y_k^f ; y_k^s2 )  if j/T_s1 ∉ ℕ, j/T_s2 ∈ ℕ
    y_k^mr = y_k                 if j/T_s1 ∈ ℕ, j/T_s2 ∈ ℕ    (5.17)

The objective function is:

    ŵ_{k−1|k} = arg min  ŵ_{k−1|k}^T P_{k|k−1}^{-1} ŵ_{k−1|k} + (y_k^mr − C_mr x̂_{k|k−1} − C_mr ŵ_{k−1|k})^T R^{-1} (y_k^mr − C_mr x̂_{k|k−1} − C_mr ŵ_{k−1|k})

subject to:

    x̂_{k|k} = x̂_{k|k−1} + ŵ_{k−1|k}
    y_min − C_mr x̂_{k|k−1} < C_mr ŵ_{k−1|k} < y_max − C_mr x̂_{k|k−1}
    x_min < x̂_{k|k} < x_max

• Correction of the estimator matrix:

    P_j = ( I − P_p C^T (C P_p C^T + R)^{-1} C ) P_p

5.6.2 Conclusion

In A.11.5 on page 101, example 24 implements a multi-rate CKF and compares it with a variable-structure estimator. It is shown that the multi-rate CKF yields the same result as the variable structure. Indeed, the multi-rate CKF minimizes the process disturbance w_k to estimate the states of the current time step. The fixed-structure estimator places the error of the slow measurement on a zero-order hold and propagates this error to the next steps. Unlike the fixed structure, the variable structure and the multi-rate CKF do not spread the current measurement error to the next time steps and use only the current measurements.

Chapter 6

Applications of MPC: Control of the Inflow of a Tank

In this chapter, an industrial plant with a linear model is controlled by an MPC regulator and its states are estimated by optimization-based state estimation.

6.1 Problem formulation

The system is depicted in figure 6.1 and consists of three tanks. All levels are measured and thus the system is observable.

6.1.1 Control objectives

The control objectives are:

• The inflow to tank 3, V̇_in3, does not change by more than 10%. It is preferred to limit the rate of change of V̇_in3 rather than the inflow itself, although the inflow itself is subject to the physical limitations.

• The levels in tank 1 and tank 2, h_1 and h_2, stay constant (soft constraints).

6.2 Model

The outflows of the first and second tanks are controlled by control valves. A control valve is modeled as:

    V̇_out = ( K / (τ s + 1) ) v

where v, τ and K are the valve position, the valve time constant and the valve gain, respectively. The inflow to tank 3, V̇_in3, is the sum of the outflows of the other two tanks, V̇_out1 and V̇_out2. The first tank is 25 times larger than the second one.


Figure 6.1: Example 1

According to the mentioned assumptions, the differential equations of the system are:

    dh_1/dt = V̇_in1/A_1 − V̇_out1/A_1
    dh_2/dt = V̇_in2/A_2 − V̇_out2/A_2
    dV̇_out1/dt = ( −V̇_out1 + K_1 v_1 ) / τ_1
    dV̇_out2/dt = ( −V̇_out2 + K_2 v_2 ) / τ_2

where:

• The manipulated variables are the valve positions v_1 and v_2.

• The states are the tank levels h_1, h_2 and the outflows of the tanks V̇_out1 and V̇_out2.

• The controlled variables are the levels and the flows h_1, h_2, V̇_out1, V̇_out2 and V̇_in3.

• Besides the manipulated variables, the inflows V̇_in1 and V̇_in2 are inputs of the system. Since these variables are not measured, they are treated as unmeasurable disturbances which must be estimated by the augmented observer.

Finally, the state-space model of the system is:
    ẋ = [0 0 −1/A_1 0; 0 0 0 −1/A_2; 0 0 −1/τ_1 0; 0 0 0 −1/τ_2] x + [0 0; 0 0; K_1/τ_1 0; 0 K_2/τ_2] (v_1 ; v_2) + [1/A_1 0; 0 1/A_2; 0 0; 0 0] (V̇_in1 ; V̇_in2)

    y = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 1 1] x    (6.1)

with x = (h_1, h_2, V̇_out1, V̇_out2)^T and y = (h_1, h_2, V̇_out1, V̇_out2, V̇_in3)^T.
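Model (6.1) can be built and discretized with a zero-order hold as sketched below; the numeric values of A_1, A_2, τ and K are assumptions for illustration (with A_1 = 25·A_2 as stated in the text):

```python
import numpy as np
from scipy.linalg import expm

A2_, A1_ = 1.0, 25.0                   # tank cross sections, tank 1 is 25x larger
tau1, tau2, K1, K2 = 10.0, 10.0, 1.0, 1.0

Ac = np.array([[0, 0, -1 / A1_, 0],
               [0, 0, 0, -1 / A2_],
               [0, 0, -1 / tau1, 0],
               [0, 0, 0, -1 / tau2]])
Bc = np.array([[0, 0], [0, 0], [K1 / tau1, 0], [0, K2 / tau2]])
Bd = np.array([[1 / A1_, 0], [0, 1 / A2_], [0, 0], [0, 0]])   # disturbance input
Cc = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 1]])          # last row: V_in3 = V_out1 + V_out2

# Zero-order-hold discretization via the matrix exponential of the
# augmented matrix [[Ac, Bc], [0, 0]].
dt = 1.0
n, m = Ac.shape[0], Bc.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = Ac, Bc
Md = expm(M * dt)
Ad, Bdk = Md[:n, :n], Md[:n, n:]
```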

6.3 A priori information and user-defined parameters

As described in section 4.4, in order to tune the MPC controller, the table of a priori information 4.2 must be available. This table contains the limitations, the set-points and parameters which are partially obtained from system identification and partially set by the user based on the goals, assumptions and known facts about the system:

• y_min and y_max: It is desired to keep the level in the second tank constant, while the level in the first tank may vary by ±10%. This is interpreted as limitations on h_1 and h_2.

• ∆y_min and ∆y_max: In this example, unlike the previous examples, it is desired to limit the rate of change of one of the controlled variables, i.e. the inflow of the third tank. For this purpose, a new set of constraints which limits the rate of change of the outputs must be added to the MPC regulator.

• u_min and u_max: The valve positions can vary between 0 and 100.

• ∆u_min and ∆u_max: The rate of change of the manipulated variables could be limited; for simplicity, these limitations are removed.

• Set-points: The set-points of the controlled and manipulated variables are set according to the operating points.

• Importance: The level of the second tank is the most important controlled variable. The outflows of the first and second tanks are supposed to be regulated such that their sum stays constant, which means that they can move in different directions or stay constant, but it is not essential to bring them to a specific target; therefore they have the smallest values in the importance weighting vector. For the manipulated variables there is no preference, so their relative importance weights are equal.

• σ and model mismatch: These parameters tune the Kalman filter and are found by system identification.

• Costs: The economic optimization is turned off, i.e. the cost vectors are zero.

• Violation: The level constraints may be violated if required.

Finally, the MPC controller is designed based on the information in table 6.1.
The closed-loop system is simulated under the following conditions:

• The rate of change of V̇_in3 is limited to [−0.1, 0.1].

• The valve positions are kept constant, i.e. the manipulated variables do not change.

• The inflow to the second tank changes during the process.

• The disturbance model is subject to error.

In figure 6.2, the result of the process regulation is depicted. The level of the second tank, h_2, is regulated properly to its targets, and the rate of change of V̇_in3 is limited as desired. The system is then tested once more with a narrower admissible region for the rate of change of V̇_in3: ∆y ∈ [−0.01, 0.01]. The result is depicted in figure 6.3: the rate of change of V̇_in3 is kept nearly constant, but h_1 violates its limitations.

Table 6.1: A priori data of Example 1 — limitation min/max, ∆min/∆max, set-points, importance weights, measurement errors, model mismatch, costs, and violation-of-limits settings for the outputs (h1, h2, V̇out1, V̇out2, V̇in3) and the manipulated variables (v1, v2).

Figure 6.2: System 1, the rate of change of V̇in3 is limited to [−0.1, 0.1].

The reason is that the constraint on V̇in3 is expressed as a hard constraint: it is not relaxed. The constraints on the upper and lower limits of the level, in contrast, are soft constraints, and it is possible to violate them. Therefore, when the constraint on V̇in3 is hard, the regulator violates the soft constraints in order to keep the hard constraint satisfied.

Figure 6.3: System 1, the rate of change of V̇in3 is limited to [−0.01, 0.01].

Appendix A

A.1 Controllability and Observability

Controllability is an important property of a control system (e.g. system 2.2 on page 3) and plays a central role in many control problems, in particular optimal control and state-space control.

A.1.1 Controllability and stability

A system is controllable when an external input vector u is able to affect the states x of the system and move them from any initial state x(k = 0) = x0 to any other state x(ks) in a finite time interval. The controllability of a system can be checked using Hautus' or Kalman's criterion. According to the Kalman criterion, a linear state-space system is controllable if its controllability matrix has full rank [19]. The controllability matrix Qc of a system is defined as:

Qc = ( B  AB  A²B  …  A^(n−1)B )    (A.1)

An uncontrollable system is stabilisable when its uncontrollable modes (i.e. its uncontrollable eigenvalues) are stable.

A.1.2 Observability and stability

A system is called observable if there is a finite T such that knowledge of the manipulated variable u(t) and the measurements y(t), for all t ∈ [0, T], suffices to determine the initial state x(k = 0) = x0 of the system. If a system is not observable, it is not possible to reconstruct its unmeasured states using an observer. Observability, like controllability, can be checked using Hautus' or Kalman's criterion [19]. When a system is observable, its observability matrix

Qo has full rank; the observability matrix is defined as:

Qo = ( C ; CA ; CA² ; … ; CA^(n−1) )    (A.2)

An unobservable system is detectable if its unobservable modes (i.e. its unobservable eigenvalues) are stable.
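The two rank criteria above can be checked numerically. A small sketch with illustrative matrices (not the thesis system):

```python
import numpy as np

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    """Kalman observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

print(np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0])  # controllable
print(np.linalg.matrix_rank(obsv(A, C)) == A.shape[0])  # observable
```

Full rank of the respective matrix means the rank criterion is satisfied; a rank defect identifies the number of uncontrollable (resp. unobservable) modes.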

A.2 Defining the Model of the System

Since the modeling and identification of the system are based on experiments, they are subject to errors, which should be taken into consideration. The model of the real system introduced in equation 2.2 on page 3 is as follows:

x_{k+1} = A x_k + B u_k + E z_k + B_d d_k,   x_{k=0} = x0    (A.3)
y_k = C x_k + D u_k

with x ∈ R^n, u ∈ R^r, z ∈ R^p, y ∈ R^m and A ∈ R^(n×n), B ∈ R^(n×r), E ∈ R^(n×p), C ∈ R^(m×n), B_d ∈ R^(n×m).

Remark: matrices without the index r are matrices of the model; those with the index r are the matrices of the real system.

A.3 Luenberger Observer

A.3.1 An Example of a Luenberger Observer with an inaccurate initial condition

Example 10 (Luenberger observer with inaccurate initial condition). Consider the following sampled system with:
• a Luenberger observer;
• an exact model;
• different initial values;
• no disturbance.



x_{k+1} = [ 0.8187  0.1725  0.0085 ;  0  0.9092  0.0863 ;  0  0.0863  0.8230 ] x_k + [ 0.0094 ; 0.0998 ; 0.0953 ] u_k

Figure A.1: System with Luenberger observer and different initial values.

y_k = ( 1  0  0 ) x_k

Since the observer has an exact model, the matrices of the model are the same as those of the real system. Choosing L = [ 0.58  0.39  0.19 ], the observer is stable and the convergence condition is satisfied.

x̂_{k+1} = [ 0.8187  0.1725  0.0085 ;  0  0.9092  0.0863 ;  0  0.0863  0.8230 ] x̂_k + [ 0.0094 ; 0.0998 ; 0.0953 ] u_k + Lᵀ (y_k − C x̂_k)
ŷ_k = ( 1  0  0 ) x̂_k

The initial value of the real system is x0 = [ 1, 2, 5 ] and the initial value of the observer is x̂0 = [ 0, 0, 0 ]. As shown in figure A.1, the estimated states start from a different initial value, but after a while the states of the model approach the states of the real system.
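A simulation sketch of this example, using the matrices reconstructed above (the constant input is an assumption for illustration):

```python
import numpy as np

# Example 10 sketch: the observer starts from the wrong initial state and
# its estimation error decays because A - L C is stable.
A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
B = np.array([0.0094, 0.0998, 0.0953])
C = np.array([1.0, 0.0, 0.0])
L = np.array([0.58, 0.39, 0.19])

x = np.array([1.0, 2.0, 5.0])   # real system initial state
xh = np.zeros(3)                # observer starts at the origin
for k in range(200):
    u = 1.0                     # constant input (illustrative)
    y = C @ x
    xh = A @ xh + B * u + L * (y - C @ xh)
    x = A @ x + B * u

print(np.linalg.norm(x - xh) < 1e-3)  # estimation error has decayed
```

The error dynamics are e_{k+1} = (A − LC) e_k, so the decay rate is set entirely by the eigenvalues of A − LC, independently of the input.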

A.3.2 An Example of a Luenberger Observer with model mismatch

Example 11 (Luenberger observer and model mismatch). The system introduced in example A.3.1 is assumed, with:
• a Luenberger observer;
• model mismatch;
• identical initial values;
• no disturbance.

Figure A.2: System with Luenberger observer and model mismatch.



The observer has the same structure as in example 10:

x̂_{k+1} = [ 0.8187  0.1725  0.0085 ;  0  0.9092  0.0863 ;  0  0.0863  0.8230 ] x̂_k + [ 0.0094 ; 0.0998 ; 0.0953 ] u_k + Lᵀ (y_k − C x̂_k)

The model differs from the real system. The observer with the inaccurate model is: 







x̂_{k+1} = [ 0.8353  0.1725  0.0085 ;  0  0.8913  0.0858 ;  0  0.0858  0.8312 ] x̂_k + [ 0.0094 ; 0.0989 ; 0.0957 ] u_k + L (y_k − ŷ_k),   ŷ_k = ( 1  0  0 ) x̂_k

Using the pole placement method, the matrix L is calculated such that the poles of the observer are moved toward the origin of the unit circle; this means that the Luenberger observer is made faster than the system. The resulting matrix is L = [ 0.76, 1.25, 1.09 ].
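A pole-placement computation of L can be sketched with SciPy, using the duality between observer and state-feedback design (place on the pair (Aᵀ, Cᵀ)); the desired pole locations below are illustrative, not the ones used in the thesis:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.8187, 0.1725, 0.0085],
              [0.0,    0.9092, 0.0863],
              [0.0,    0.0863, 0.8230]])
C = np.array([[1.0, 0.0, 0.0]])

# Place the observer poles closer to the origin than eig(A) (faster observer).
desired = np.array([0.4, 0.35, 0.3])
res = place_poles(A.T, C.T, desired)
L = res.gain_matrix.T                 # observer gain, shape (3, 1)

# The eigenvalues of A - L C are the placed poles:
print(np.allclose(sorted(np.linalg.eigvals(A - L @ C).real), sorted(desired)))
```

Placing the poles too close to the origin makes the observer very fast but also very sensitive to measurement noise, which is the usual trade-off in choosing L.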

A.4 Kalman Filter

Example 12 (Predictor-corrector form of the Kalman filter). The system introduced in example A.3.1 on page 64 is assumed, with:
• a disturbance observer;
• an exact model;
• identical initial values;
• an unmeasured disturbance d;
• a predictor-corrector Kalman filter.

The system with disturbance observer and predictor-corrector Kalman filter is tested under two conditions:
• B_d is modeled exactly and assumed to be equal to B;
• B_d is modeled inaccurately.

In the first case, the disturbance observer estimates the disturbance accurately, and accordingly the states and output of the observer match the states and output of the real system (Figure A.3). In the second case, the observer does not estimate the disturbance correctly; consequently, the states are not estimated precisely, but the estimator still tracks the output (Figure A.4 on page 69).

A.5 State Regulator

Example 13 (Pre-filter and state regulator). The system introduced in example A.3.1 on page 64 is assumed, with:
• an observer (predictor-corrector Kalman filter);
• an exact model;
• identical initial values: [ 2, 0, 1 ];
• a state regulator with K = [ 0.12, 0.75, 0.43 ];
• a pre-filter with gain M = 1.49.

A step reference with an amplitude of 2 is applied to the system. As shown in Figure A.5 on page 70, the system without pre-filter has a distinctive offset (first chart), while the second and third charts show that the pre-filter eliminates the offset and the output meets the reference value.

A.6 Target Calculation

Example 14 (Measured disturbance). A step reference with the value of 2 is applied to the system of example A.3.1 on page 64 under the following conditions:


Figure A.3: Predictor-corrector form of the KF with disturbance observer and exact model.

• an exact model;
• identical initial values;
• a measurable disturbance z;
• a state regulator;
• a predictor-corrector KF;
• target calculation and origin-shifting.

Equation 2.53 on page 18 is applied to find the target values for the states and input with respect to the known disturbance vector z. As shown in Figure A.6 on page 71, the controller removes the offset and the output matches the reference value.

A.7 Disturbance Observer and Disturbance Compensator

Without loss of generality, the term E z_k is omitted in the following equations. The whole control system, including a pre-filter, can be described as follows:

x_{k+1} = A_r x_k + B_r u_k    (A.4)
x̂_{k+1} = A x̂_k + B u_k + B d̂_k + L_x (y_k − ŷ_k)    (A.5)


Figure A.4: Predictor-corrector form of the KF with disturbance observer and model mismatch.

d̂_{k+1} = d̂_k + L_d (y_k − ŷ_k)    (A.6)
u_k = −K x̂_k + M w − d̂_k    (A.7)

y_k = C_r x_k    (A.8)
ŷ_k = C x̂_k    (A.9)
M = (C (I − A + BK)^(−1) B)^(−1)    (A.10)

In the steady state y = w, therefore:

x = A_r x + B_r u    (A.11)
x̂ = A x̂ + B u + B d̂ + L_x (C_r x − C x̂)    (A.12)
d̂ = d̂ + L_d (C_r x − C x̂)    (A.13)
u = −K x̂ + M w − d̂    (A.14)

From A.13 it can be seen that, to obtain a stationary d̂ in the steady state, C_r x = C x̂ and hence ŷ = y must hold. Then d̂ is the exact steady disturbance, provided the corresponding condition on the augmented matrices (A, B, L_x, C) is satisfied.

• For rDX > 1, the elements of Q_KF which penalize the disturbance are amplified and those which penalize the states are reduced; consequently, the disturbance estimation is faster than the state estimation. For rDX = 10, the disturbance estimation is so aggressive that the estimated disturbance overshoots.
• Conversely, for rDX < 1, the elements of Q_KF which penalize the disturbance are decreased and the penalty on the states is increased. Consequently, the estimation of the unknown disturbance is slower than the state estimation. For rDX = 0.046, the disturbance estimation is so slow that the estimated disturbance never converges to the real disturbance.

Example 19 (Tuning rKF). In this example, rDX is set to 2 and rKF is varied. The default value of rKF is 2.0891.

For rKF = 0.1, the resulting R_KF(rKF = 0.1) is 0.01; rKF has no influence on Q_KF. For rKF = 2, Q_KF is calculated as:

Q_KF(rDX = 2) = diag( 0.0195, 0.0638, 0.2500, 4.0000 )

The results are shown in figures A.30 and A.31. For rKF = 2.0891, which is the default value, the resulting R_KF(rKF = 2.0891) is 4.3643; the results are depicted in figures A.32 and A.33. For rKF = 10, the resulting R_KF(rKF = 10) is 100; the results are shown in figures A.34 and A.35. Comparing figures A.30–A.35 and the resulting R_KF, it can be concluded that:

• For a large rKF, the resulting R_KF is larger, which means that the estimated state and disturbance converge to the real values slowly.

Figure A.26: MPC regulator with rDX = 10 and default values for sCr and rKF.
Figure A.27: MPC regulator with rDX = 10 and default values for sCr and rKF.
Figure A.28: MPC regulator with rDX = 0.046 and default values for sCr and rKF.
Figure A.29: MPC regulator with rDX = 0.046 and default values for sCr and rKF.
Figure A.30: MPC regulator with rKF = 0.1, rDX = 2 and the default value for sCr.
Figure A.31: MPC regulator with rKF = 0.1, rDX = 2 and the default value for sCr.
Figure A.32: MPC regulator with rKF = 2.0891, rDX = 2 and the default value for sCr.
Figure A.33: MPC regulator with rKF = 2.0891, rDX = 2 and the default value for sCr.
Figure A.34: MPC regulator with rKF = 10, rDX = 2 and the default value for sCr.
Figure A.35: MPC regulator with rKF = 10, rDX = 2 and the default value for sCr.

• For a small rKF, R_KF is small and the state and disturbance estimation is fast. For rKF = 0.2, the estimated disturbance and states converge to the real values so fast that the estimated disturbance overshoots.

A.11 Constrained State Estimation

A.11.1 Constrained Kalman filter

Example 20 (Constrained Kalman filter).

ẋ = [ −0.1087  −0.5586  0 ;  0.2793  0.5429  0 ;  0.2285  0.8125  1 ] x + [ 0.2793 ; 0.2285 ; 0.0937 ] u + [ 0.2793 ; 0.2285 ; 0.0937 ] d
y = ( 0  0  1.6667 ) x

A sample time of 1/6 of the smallest time constant is selected: Ts ≈ 0.682/6. The regulator uses a normal Kalman filter for the state estimation; in parallel, a constrained Kalman filter estimates the states. As depicted in figure A.36, the simple Kalman filter and the constrained Kalman filter arrive at exactly the same result when the system is unbounded.

The same system is tested with the Kalman filter and the constrained Kalman filter in the presence of output constraints. The controlled variable is bounded to −1.1 and 0.9, which imposes a limitation on the states as well. As can be seen in figure A.37, the estimated states of the CKF are bounded while the estimated states of the Kalman filter are not. At t = 20 s, an unmeasurable disturbance enters the system, and it takes 9 seconds for the Kalman filter to estimate it. This means that until about t = 30 s the controller is not aware that the states, and consequently the controlled variable, have violated the bounds. The constrained Kalman filter, however, keeps the estimated states within the bounds. One use of the constrained estimator is therefore to keep the estimated states in the admissible region when an unknown disturbance shifts the states and pushes them towards the boundaries.

A.11.2 Full Information Estimator

Example 21 (Full information estimator). The linear system introduced in example 20 is considered. A Kalman filter and a full information estimator estimate the states. The system is tested once without constraints (figure A.38) and once in the presence of output constraints (figure A.39). For the unconstrained system, full information estimation yields exactly the same result as the Kalman filter. In figure A.39, the controlled variables are bounded.

A.11.3 Moving Horizon Estimation

Example 22 (Moving horizon estimator).

Figure A.36: CKF in comparison with the KF on an unconstrained system.
Figure A.37: CKF in comparison with the KF in the presence of output constraints.
Figure A.38: Unconstrained full information estimator in comparison with the KF.
Figure A.39: Constrained full information estimator in comparison with the KF.

The linear system introduced in example 20 is used. A Kalman filter and a moving horizon estimator estimate the states. The system is tested once without constraints (figure A.40) and once in the presence of output constraints (figure A.41). For the unconstrained system, the MHE is equivalent to the Kalman filter. In figure A.41, the controlled variables, and consequently the states, are bounded.
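For the unconstrained case, the equivalence of MHE and batch estimation can be illustrated by writing the horizon problem as a least-squares fit in the state at the start of the horizon. This is a sketch: process noise is neglected, and the arrival-cost weight `mu` and the matrices are illustrative:

```python
import numpy as np

def mhe_initial_state(A, C, ys, x_prior, mu=1.0):
    """Least-squares fit of the horizon-start state to N measurements
    plus a prior (arrival cost) term weighted by mu."""
    n = A.shape[0]
    rows = [np.sqrt(mu) * np.eye(n)]          # arrival cost rows
    rhs = [np.sqrt(mu) * x_prior]
    Ak = np.eye(n)
    for y in ys:
        rows.append(C @ Ak)                   # y_j = C A^j x0
        rhs.append(y)
        Ak = A @ Ak
    H = np.vstack(rows)
    r = np.concatenate([np.atleast_1d(v) for v in rhs])
    x0, *_ = np.linalg.lstsq(H, r, rcond=None)
    return x0

A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
x_true = np.array([1.0, -0.5])
ys, x = [], x_true.copy()
for _ in range(6):                            # noise-free measurements
    ys.append(C @ x)
    x = A @ x
x0 = mhe_initial_state(A, C, ys, x_prior=np.zeros(2), mu=1e-6)
print(np.allclose(x0, x_true, atol=1e-3))
```

Adding state bounds turns this least-squares problem into the inequality-constrained QP that distinguishes the constrained MHE from the Kalman filter.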

A.11.4 Multi-Rate State Estimation

Example 23 (Multi-rate state estimation). The following discrete system is assumed:

x_{k+1} = [ −0.70660  −0.17650  −1.15500 ;  0.18010  −0.67020  0.05246 ;  1.15500  −0.06373  −0.70580 ] x_k + [ 0.00000  −0.01318  −0.25760 ;  −0.54780  −0.58030  0.00000 ;  0.00000  2.13600  1.77000 ] u_k + [ 0.00000  −0.01318  −0.25760 ;  −0.54780  −0.58030  0.00000 ;  0.00000  2.13600  1.77000 ] d_k    (A.61)

y_k = [ 0.3255  1.2700  −0.1390 ;  −1.1190  0.0000  −1.1630 ;  0.6204  0.1352  1.1840 ] x_k    (A.62)

The sampling rate of the controller is 1 second. The fast measurement is received at every sample time; similar to the scheme depicted in 5.1, the second and third measurements are received every 2 and every 5 time steps, respectively. For a better comparison between the fixed and the variable structure, both are depicted in figure A.44. At the sampling times at which all measurements are received (in this example, every 10th sample time), the fixed and the variable structure coincide (see the Krämer theorem in 5.5.1).
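The variable-structure idea can be sketched as a Kalman update that uses only the measurement rows that actually arrived at the current sample time. This is a static illustrative example without a prediction step, not the thesis system:

```python
import numpy as np

def available_rows(k):
    """Fast channel every step, slower channels every 2nd and 5th step."""
    rows = [0]
    if k % 2 == 0: rows.append(1)
    if k % 5 == 0: rows.append(2)
    return rows

def multirate_update(xh, P, C_full, R_full, y_full, rows):
    """KF measurement update restricted to the arrived measurement rows."""
    C = C_full[rows, :]
    R = R_full[np.ix_(rows, rows)]
    y = y_full[rows]
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    xh = xh + K @ (y - C @ xh)
    P = (np.eye(len(xh)) - K @ C) @ P
    return xh, P

C_full = np.eye(3)
R_full = 0.1 * np.eye(3)
xh, P = np.zeros(3), np.eye(3)
for k in range(10):
    y_full = np.array([1.0, 2.0, 3.0])   # noise-free, for illustration
    xh, P = multirate_update(xh, P, C_full, R_full, y_full, available_rows(k))
print(np.allclose(xh, [1.0, 2.0, 3.0], atol=0.2))
```

The fixed-structure alternative would instead keep the full C at every step and substitute held (old) values for the missing measurements; the two coincide exactly at the steps where every channel delivers a fresh sample.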

A.11.5 Constrained Multi-Rate State Estimation

Example 24 (Multi-rate CKF state estimation). Figure A.45 shows the multi-rate CKF estimation. In figure A.46, the multi-rate CKF and the variable structure are compared; they yield the same result.

Figure A.40: Unconstrained MHE in comparison with the KF.
Figure A.41: Constrained MHE in comparison with the KF.
Figure A.42: Multi-rate state estimation with variable structure.
Figure A.43: Multi-rate state estimation with fixed structure.
Figure A.44: A comparison between the fixed and the variable structure.
Figure A.45: Multi-rate state estimation combined with the constrained Kalman filter.
Figure A.46: A comparison between the multi-rate CKF and the variable structure.

Bibliography

[1] A. H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, New York and London, 1970.
[2] C. V. Rao, J. B. Rawlings, and J. H. Lee. Constrained linear state estimation – a moving horizon approach. Automatica, 37, 2001.
[3] J. B. Froisy. Model predictive control – building a bridge between theory and practice. Computers and Chemical Engineering, 30:1426–1435, 2006.
[4] G. Pannocchia and J. B. Rawlings. Robustness of MPC and disturbance models for multivariable ill-conditioned processes. TWMCC, Texas–Wisconsin Modeling and Control Consortium, 2001.
[5] G. Pannocchia, N. Laachi, and J. B. Rawlings. A candidate to replace PID control: SISO-constrained LQ control. AIChE Journal, 51:1178–1189, 2005.
[6] G. Pannocchia and J. B. Rawlings. Disturbance models for offset-free model-predictive control. AIChE Journal, February 2003.
[7] G. Pannocchia, J. B. Rawlings, and S. J. Wright. Model predictive control with active steady-state input constraints: existence and computation. TWMCC, Texas–Wisconsin Modeling and Control Consortium, 2001.
[8] J. B. Rawlings and R. Amrit. Optimizing process economic performance using model predictive control. Lecture Notes in Control and Information Sciences, 2009.
[9] J. B. Rawlings and B. R. Bakshi. Particle filtering and moving horizon estimation. Computers and Chemical Engineering, 30:1529–1541, 2006.
[10] J. B. Rawlings, D. Bonné, J. B. Jørgensen, A. N. Venkat, and S. B. Jørgensen. Unreachable setpoints in model predictive control. IEEE Control Systems Society, 2008.
[11] J. H. Lee, M. S. Gelormino, and M. Morari. Model predictive control of multi-rate sampled-data systems: a state space approach. International Journal of Control, 55, 1992.
[12] J. Prakash, S. C. Patwardhan, and S. L. Shah. Constrained state estimation using the ensemble Kalman filter. American Control Conference, 2008.
[13] K. Ohno and R. Horowitz. A variable structure multi-rate state estimator for seeking control of hard disk drives. 13, 2005.
[14] K. R. Muske and J. B. Rawlings. Model predictive control with linear models. AIChE Journal, 39:262–287, 1993.
[15] K. R. Muske and J. B. Rawlings. Nonlinear moving horizon state estimation. NATO ASI Series, Kluwer Academic, 293, 1994.
[16] K. R. Muske and T. A. Badgwell. Disturbance modeling for offset-free linear model predictive control. Journal of Process Control, 12:617–632, 2002.
[17] K. R. Muske and J. B. Rawlings. Receding horizon recursive state estimation. American Control Conference, 1993.
[18] S. Krämer. Heat Balance Calorimetry and Multirate State Estimation Applied to Semi-Batch Emulsion Copolymerisation to Achieve Optimal Control, volume 3. Shaker Verlag, Aachen, 2005.
[19] J. Lunze. Regelungstechnik 2. Springer Verlag, Heidelberg, 4th edition, 2004.
[20] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: stability and optimality. Automatica, 36, 2000.
[21] M. Berg, N. Amit, and J. D. Powell. Multirate digital system design. IEEE Transactions on Automatic Control, 33, 1988.
[22] K. R. Muske. Steady-state target optimization in linear model predictive control. Proceedings of the American Control Conference, pages 3597–3601, 1997.
[23] P. O. M. Scokaert and J. B. Rawlings. Constrained linear quadratic regulation. IEEE Transactions on Automatic Control, 43:1163–1169, 1998.
[24] J. B. Rawlings. Tutorial overview of model predictive control. IEEE Control Systems Magazine, pages 38–52, 2000.
[25] S. J. Qin and T. A. Badgwell. A survey of industrial model predictive control technology. Control Engineering Practice, 11:733–764, 2003.
[26] S. Krämer and R. Gesthuisen. Multi-rate state estimation using moving horizon estimation. IFAC, 2005.
[27] S. Skogestad and A. Faanes. Offset-free tracking with MPC with model mismatch: experimental results. Industrial and Engineering Chemistry Research, 2005.
[28] S. Tatiraju, M. Soroush, and B. A. Ogunnaike. Multi-rate nonlinear state estimation with application to a polymerization reactor. AIChE Journal, 45, 1999.
[29] W. L. Luyben. Parallel cascade control. Industrial and Engineering Chemistry Fundamentals, 12, 1973.
