Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference Shanghai, P.R. China, December 16-18, 2009

WeC14.5

Closed-loop Subspace Predictive Control for Hammerstein Systems

B. Kulcsár, J. W. van Wingerden, J. Dong and M. Verhaegen

Abstract— In this paper, a closed-loop Subspace Predictive Control algorithm is developed for Linear Time Invariant (LTI) systems with a static nonlinearity at the plant input, i.e. for Hammerstein LTI systems (H-SPC). The elaborated method is a data-driven approach combining subspace identification with optimal predictive control. Using the closed-loop structure of the subspace identification technique, an input-output predictor is derived from the estimated Markov parameters and the identified static input nonlinearity. The identification makes use of the kernel of the data space via Least Squares Support Vector Machines (LS-SVM). A nominal prediction is then created, and the goal of the control is to optimize the associated quadratic performance criterion in a receding horizon fashion. A simulation example shows the effectiveness of the proposed methodology.

Index Terms— Subspace identification, Predictive control, State space model-free technique, Hammerstein systems, Subspace based system identification

[Figure 1: block diagram — the reference r_k and the output y_k enter the controller; the control input u_k passes through the static nonlinearity f(·) into the LTI block (A, B, C, D), driven by the innovation e_k, producing y_k.]
Fig. 1. Schematic representation of a Hammerstein system operating in closed loop.

I. INTRODUCTION

Traditional model predictive control approaches are mostly based on a given model. The model can be derived either from theoretical principles or, for real-life problems, through system identification. Subspace Predictive Control for Linear Time Invariant systems (SPC LTI), recently elaborated in [1], has gained ground mostly because of its model-free nature. The idea behind the open-loop SPC approach is to combine subspace identification [2] with input-output (I/O) optimal predictive control, without explicit use of the model parameters. [1] proposes a controller-dependent but closed-loop SPC solution, and [3], [4], [5] extend the closed-loop results to controller-independent, robust or fault-tolerant forms. All of the previously mentioned publications pre-assume an LTI plant structure, which in certain cases may approximate the behavior of the system well.

There are cases, however, when the purely linear SPC approach is no longer valid and the system behavior can only be well approximated by a nonlinear description. LTI systems with Hammerstein nonlinearity, i.e. LTI systems with an input-separable static nonlinearity, cover a large set of these models [6]. This model structure under feedback control is illustrated in Figure 1.

Model Predictive Control of Hammerstein systems is an active research topic, because current approaches suffer from dependency on a priori system information, sometimes small closed-loop stability regions, or insufficient handling of input constraints [7]. Another important issue with respect to systems with Hammerstein nonlinearity is the derivation of a proper model [8]. Identification of this class of nonlinear systems has developed considerably during the last few years. Notable results have been published in e.g. [9], where componentwise ARX SISO and MIMO plants are estimated in the kernel of the data space. N4SID subspace identification has been extended to Hammerstein systems in [10] using the method of Least Squares Support Vector Machines (LS-SVM). The idea behind that solution is to over-parametrize the product of the nonlinearity and the unknown parameters and to solve the resulting least-squares problem in the feature (kernel) space.

The main contribution of this paper is a data-driven (state space model-free) predictive control solution for LTI systems with Hammerstein nonlinearity. Instead of following the traditional MPC design steps for Hammerstein systems, a macroscopic I/O predictor is formulated using the identified Hammerstein nonlinearity and Markov parameters, without explicitly identifying the local model parameter matrices (A, B, C, D). Accordingly, the method does not require model order selection; we only need to identify the I/O relationship. Subspace-based methods can be efficiently used for MIMO closed-loop model identification formulated in the dual data space by means of LS-SVM [11]. As shown, this approach can also be used to create a MIMO I/O predictor. Apart from the identification of the Markov parameters in the feature space, the paper shows how the Hammerstein nonlinearity can be estimated up to a similarity transformation by performing a low-rank approximation of a specific combination of the kernel matrices. Unconstrained optimal receding horizon control is then solved without taking the nonlinearity into account in the optimization.

The paper is organized as follows. Notation, assumptions and the Hammerstein data-driven control problem are described in Section II.
The nominal and optimal H-SPC is given in terms of the identified Markov parameters in the central Sections III and IV. The optimal control problem with the I/O predictor is described in Section V. Finally, the proposed batchwise method is demonstrated in a simulation example.

The authors are with the Delft Center for Systems and Control, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands. {B.A.Kulcsar; J.W.vanWingerden; j.dong}@tudelft.nl, [email protected]


II. PROBLEM FORMULATION

In this section we first define the system class, introduce the assumptions and notation used throughout the paper, and formulate the closed-loop Subspace Predictive Control problem for systems with Hammerstein nonlinearity.

A. Problem statement

Let us consider the following innovation-type, discrete-time Hammerstein LTI system:

x_{k+1} = A x_k + B f(u_k) + K e_k,   (1)
y_k = C x_k + e_k,   (2)

where $x_k \in \mathbb{R}^n$, $u_k \in \mathbb{R}^r$ and $y_k \in \mathbb{R}^\ell$ are the state, input and output vectors, and $e_k \in \mathbb{R}^\ell$ denotes the zero-mean white innovation noise with non-singular covariance. The matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times r}$, $C \in \mathbb{R}^{\ell \times n}$ and $K \in \mathbb{R}^{n \times \ell}$ are the system, input, output and noise matrices, respectively. Finally, $f(\cdot): \mathbb{R}^r \to \mathbb{R}^r$ is a static, smooth nonlinear function with the additional constraint $f(0) = 0$, assumed to be invertible^1. Denote $v_k = f(u_k)$; it cannot be measured directly and will therefore be referred to as the effective control input throughout the paper. We can rewrite (1)-(2) in the predictor form:

\hat{x}_{k+1} = \tilde{A}\hat{x}_k + B f(u_k) + K y_k,   (3)
y_k = C\hat{x}_k + e_k,   (4)

with $\tilde{A} = A - KC$.

^1 The existence of a unique inverse is a requirement of the proposed optimal control synthesis and not of the identification itself; see Section V.

B. Assumptions and notation

Assumption 1: $\tilde{A}$ is stable and the system is minimal.

We define the past identification window and the prediction window by $p$ and $f$, with $p \geq f$. These windows are used to define the stacked data matrices:

f(U_P) = \begin{bmatrix} f(U_0) \\ f(U_1) \\ \vdots \\ f(U_{p-1}) \end{bmatrix}, \qquad Y_P = \begin{bmatrix} Y_0 \\ Y_1 \\ \vdots \\ Y_{p-1} \end{bmatrix}.

More precisely, the data matrices are organized as:

f(U_i) = [f(u_i), f(u_{i+1}), \dots, f(u_{i+N-p})],
Y_i = [y_i, y_{i+1}, \dots, y_{i+N-p}],
X_i = [x_i, x_{i+1}, \dots, x_{i+N-p}],

where $i = 0, \dots, p$. The first columns of $f(U_P)$ and $Y_P$ are denoted by $f(\underline{U}_p)$ and $\underline{Y}_p$, respectively; the future output is denoted by $Y_p$. The noise sequence is collected into:

E_p = [e_p, e_{p+1}, \dots, e_N].

The extended controllability matrix is defined by $\mathcal{K} = [\mathcal{K}_u, \mathcal{K}_y]$, and the Markov parameters of the closed-loop observer are given by:

C\mathcal{K}_u = [C\tilde{A}^{p-1}B, C\tilde{A}^{p-2}B, \dots, CB],   (5)
C\mathcal{K}_y = [C\tilde{A}^{p-1}K, C\tilde{A}^{p-2}K, \dots, CK].   (6)

Assumption 2: $\|\tilde{A}^p\|_2 \ll 1$, meaning that the past horizon is large enough to neglect the effect of the initial state. The bias of the identification can be made arbitrarily small by selecting a large $p$. We refer to [12] for the asymptotic properties of closed-loop subspace identification.

Problem 1: The control problem can now be formulated as follows. Given the input sequence $u_k$ and the output sequence $y_k$ over a time window $k = \{0, \dots, N-1\}$, identify the Markov parameters in (5)-(6) and the nonlinearity $f(\cdot)$ of the closed-loop system (1)-(2), and create an I/O predictor. The aim of the optimal control is then to minimize a predefined quadratic cost over the future window $f$ in a receding horizon fashion.

The objective of the paper is to present a data-driven control candidate for a class of LTI systems with an input-separable nonlinearity (see Figure 1). The elaborated method can be fed with data originating from closed-loop measurements [12]. Here, closed loop refers not only to a controller being in the loop but also to the closed-loop observer form ($\tilde{A}$).

III. CLOSED-LOOP SUBSPACE BASED I/O PREDICTOR

This section presents the derivation of the I/O predictor required for the optimal control solution. In our case, the state sequence $X_p$ is an intermediate variable.

A. Linear regression

The first objective of Subspace Predictive Control for Hammerstein systems (H-SPC) is to construct a predictor based on past input and output sequences. To this end we introduce the state and use it as a latent variable. The state $X_p$ is given by:

X_p = \tilde{A}^p X_0 + [\mathcal{K}_u, \mathcal{K}_y] \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix}.

The key approximation in this algorithm is that we assume $\tilde{A}^j \approx 0$ for all $j \geq p$ (by Assumption 2). With this, the state sequence $X_p$ is approximately given by:

X_p \approx [\mathcal{K}_u, \mathcal{K}_y] \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix}.

This step is standard in a number of other LTI subspace methods (e.g. N4SID, SSARX, PBSID_opt, CVA) [13]. Furthermore, the input-output behavior is now approximated by:

Y_p \approx C[\mathcal{K}_u, \mathcal{K}_y] \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix} + E_p.

This equation is a linear regression problem. When the nonlinearity is known, an estimate of the matrix $C\mathcal{K}$ can be obtained by solving a linear least-squares problem. However, the nonlinearity is not known, which motivates the kernel-space based solution of the next section for estimating the nonlinearity before constructing the predictor.
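To make the bookkeeping above concrete, the stacked data matrices and the regression blocks can be formed as in the following minimal numpy sketch. It assumes, for illustration only, that $f(\cdot)$ is known (which it is not in practice; see Section IV); the function and variable names are illustrative, and the column count $N - p$ is one possible convention:

import numpy as np

def stack_past(u, y, p, f_nl):
    # u: r x N input sequence, y: ell x N output sequence,
    # f_nl: the (here assumed known) static nonlinearity f(.)
    N = u.shape[1]
    cols = N - p                        # number of columns per block row
    fU_P = np.vstack([f_nl(u[:, i:i + cols]) for i in range(p)])  # f(U_P)
    Y_P = np.vstack([y[:, i:i + cols] for i in range(p)])         # Y_P
    Y_p = y[:, p:p + cols]              # "future" output block Y_p
    return fU_P, Y_P, Y_p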


In this section we continue by pre-assuming that the nonlinearity $f(\cdot)$ has already been estimated. If the matrix $[f(U_P)^T, Y_P^T]^T$ has full row rank and the nonlinear function $f(\cdot)$ is given, the matrix $C\mathcal{K}$ can be estimated by solving the following linear regression problem:

\min_{C\mathcal{K}} \left\| Y_p - C\mathcal{K} \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix} \right\|_F^2,   (7)

where $\|\cdot\|_F$ denotes the Frobenius norm [14]. Again, the solution of the above LS problem requires knowledge of $V_P = f(U_P)$ and results in:

\widehat{C\mathcal{K}} = Y_p \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix}^\dagger,   (8)

where $\dagger$ stands for the pseudo-inverse. This is a consistent estimate of $C\mathcal{K}$ (with the bias neglected by Assumption 2), no matter whether the measurements are acquired from open- or closed-loop experiments [15].
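The estimate (8) is then a single pseudo-inverse; a minimal sketch, under the same assumption that $f(U_P)$ is available:

import numpy as np

def estimate_markov(fU_P, Y_P, Y_p):
    # eq. (8): CK_hat = Y_p [f(U_P); Y_P]^dagger
    Z = np.vstack([fU_P, Y_P])          # regressor [f(U_P); Y_P]
    return Y_p @ np.linalg.pinv(Z)      # ell x p(r + ell) Markov block row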

B. The predictor

Applying the sequence of estimated Markov parameters, we can express the estimated output at time instant $k$ by:

\hat{y}_k = \underbrace{\begin{bmatrix} \widehat{C\mathcal{K}_u} & \widehat{C\mathcal{K}_y} \end{bmatrix}}_{\Xi_0} \begin{bmatrix} f(\underline{U}_{k-p}) \\ \underline{Y}_{k-p} \end{bmatrix}.   (9)

A nominal and deterministic predictor can then be built over the future horizon $f$. Neglecting the bias and the error on the estimated Markov parameters, the predictor is given by:

Y_{k+f} = \Gamma \begin{bmatrix} f(\underline{U}_{k-p}) \\ \underline{Y}_{k-p} \end{bmatrix} + \Lambda f(U_{k+f}),   (10)

\Gamma = \begin{bmatrix} \Gamma_0 \\ \Gamma_1 \\ \vdots \\ \Gamma_{f-1} \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \Lambda_1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ \Lambda_{f-1} & \cdots & \Lambda_1 & 0 \end{bmatrix},

where $Y_{k+f} = [\hat{y}_k^T \ \dots \ \hat{y}_{k+f-1}^T]^T$ and $f(U_{k+f}) = [f(u_k)^T \ \dots \ f(u_{k+f-1})^T]^T$. The parameters $\Gamma$ and $\Lambda$ describe the open-loop prediction of the nominal output, expressed in terms of the estimated closed-loop Markov parameters [3]. The blocks $\Gamma_j$ and $\Lambda_j$ ($j = 1, \dots, f-1$) can be calculated by the following recursive formulas:

\Gamma_0 = \Xi_0, \quad \Gamma_i = \Xi_i + \sum_{t=0}^{i-1} \widehat{C\tilde{A}^{i-t-1}K}\,\Gamma_t,
\Lambda_1 = \widehat{CB}, \quad \Lambda_i = \widehat{C\tilde{A}^{i-1}B} + \sum_{t=0}^{i-1} \widehat{C\tilde{A}^{i-t-1}K}\,\Lambda_t,   (11)

where the identified closed-loop Markov parameters are collected in $\Xi_0$ and we define:

\Xi_i = \begin{bmatrix} 0_{\ell\times ir} & \widehat{C\tilde{A}^{p-1}B} & \dots & \widehat{C\tilde{A}^{i}B} & 0_{\ell\times i\ell} & \widehat{C\tilde{A}^{p-1}K} & \dots & \widehat{C\tilde{A}^{i}K} \end{bmatrix},

i.e. the zero-padded $\Xi_0$ over $f$, following the PBSID_opt idea based on Assumption 2. The nominal and deterministic predictor (10) is an I/O relationship between the past measured data, the future (unknown) input sequence and the future outputs; a sketch of the construction of $\Gamma$ and $\Lambda$ follows.
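The recursion (11) translates directly into code. The sketch below assumes the estimated Markov parameters are available as lists CAB[i] ≈ $\widehat{C\tilde{A}^iB}$ and CAK[i] ≈ $\widehat{C\tilde{A}^iK}$ (our naming, not the paper's), and returns the stacked $\Gamma$ and the block lower-triangular $\Lambda$ of (10):

import numpy as np

def build_predictor(CAB, CAK, p, f):
    ell, r = CAB[0].shape
    def Xi(i):
        # zero-padded Markov parameter block row Xi_i
        blkB = [CAB[j] for j in range(p - 1, i - 1, -1)]   # CA^{p-1}B .. CA^iB
        blkK = [CAK[j] for j in range(p - 1, i - 1, -1)]   # CA^{p-1}K .. CA^iK
        return np.hstack([np.zeros((ell, i * r))] + blkB +
                         [np.zeros((ell, i * ell))] + blkK)
    Gamma = [Xi(0)]                      # Gamma_0 = Xi_0
    Lam = [np.zeros((ell, r))]           # Lambda_0 := 0 (no feedthrough)
    for i in range(1, f):
        Gamma.append(Xi(i) + sum(CAK[i - t - 1] @ Gamma[t] for t in range(i)))
        Lam.append(CAB[i - 1] + sum(CAK[i - t - 1] @ Lam[t] for t in range(i)))
    G = np.vstack(Gamma)                 # stacked Gamma of (10)
    L = np.zeros((f * ell, f * r))       # block lower-triangular Lambda
    for i in range(f):
        for j in range(i):               # block (i, j) equals Lambda_{i-j}
            L[i*ell:(i+1)*ell, j*r:(j+1)*r] = Lam[i - j]
    return G, L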

Therefore, the future (nominal) evolution of the system, $Y_{k+f}$, can be predicted without explicitly deriving the "local" system parameters $(\hat{A}, \hat{B}, \hat{C}, \hat{K})$.

IV. IDENTIFICATION OF THE CLOSED-LOOP MARKOV PARAMETERS AND THE HAMMERSTEIN NONLINEARITY

In the previous section it was shown that we have to solve a linear problem to create a predictor. However, we assumed the nonlinearity $f(\cdot)$ to be known. Unfortunately, $v_k = f(u_k)$ is unmeasurable, and therefore the solution of (8) is no longer linear in the (primal) data space $\{u_i, y_i\}$. Instead of stating a nonlinear problem, we reformulate it and build a kernel version, based on the ideas of Least Squares Support Vector Machines [16], to estimate the components of the predictor. This section is the key part of the paper, presenting a solution to overcome the difficulty caused by $f(\cdot)$. First, the optimization problem in the primal data space is dualized using the kernel method. By over-parametrization and by appropriate selection of the nonlinear basis functions in the dual (kernel or feature) space, the optimization problem can be solved; the LS-SVM optimality results have been proven to be identical to the results in the primal space [16]. The estimated Markov parameters and the input nonlinearity can then be derived.

A. Least squares support vector machines (LS-SVM)

The reformulation of the original problem in a dual space gives rise to the dual optimization problem. The primal problem (8) can be rewritten into its dual equivalent as:

\min_{\alpha} \left\| Y_p - \alpha \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix}^T \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix} \right\|_F^2,   (12)

with $\alpha \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix}^T = C\mathcal{K}$.

The problem formulated in (12) is still nonlinear. We will now approximate the matrix $\begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix}^T \begin{bmatrix} f(U_P) \\ Y_P \end{bmatrix}$ by a centered componentwise kernel. These kernels are defined as follows:

Definition 1 (Centered componentwise kernels): Given two data matrices

E = \begin{bmatrix} e_1 \\ \vdots \\ e_\nu \end{bmatrix} \in \mathbb{R}^{\nu \times N_\nu}, \qquad F = \begin{bmatrix} f_1 \\ \vdots \\ f_\tau \end{bmatrix} \in \mathbb{R}^{\tau \times N_\tau},

we define the following componentwise kernel:

K(E, F) = \sum_{i=1}^{\nu} \sum_{j=1}^{\tau} K(e_i, f_j),

with $K(a, b) \in \mathbb{R}^{N_\nu \times N_\tau}$ a centered kernel, defined for $a \in \mathbb{R}^{N_\nu}$ and $b \in \mathbb{R}^{N_\tau}$ as:

K(a, b) = \begin{bmatrix} q_{a_1,b_1} - q_{a_1,0} & \cdots & q_{a_1,b_{N_\tau}} - q_{a_1,0} \\ \vdots & \ddots & \vdots \\ q_{a_{N_\nu},b_1} - q_{a_{N_\nu},0} & \cdots & q_{a_{N_\nu},b_{N_\tau}} - q_{a_{N_\nu},0} \end{bmatrix},

where $q_{i,j}$ is a kernel function with arguments $i$ and $j$. For instance, we choose radial basis function (RBF) kernels, $q_{i,j} = \exp(-(i-j)^2/\sigma)$ ($\sigma$ a kernel constant), and linear kernels, $q_{i,j} = i \cdot j$. The matrix $K$ has the property that $K(a, 0) = 0$, which is referred to as the centering property.

With this definition of the kernels, we can restate (12) in the feature space of the collected data^2. The componentwise kernel basis function is not unique, which offers freedom of choice (user and system dependent). There is a large amount of literature on how to choose these 'basis' functions; see [16] and the references therein. We give no specific recommendation for the basis functions in this paper; instead, a generic structural property of Hammerstein LTI systems is exploited. We distinguish between the basis functions associated with the inputs and with the outputs: linear kernel functions, denoted $K^{(l)}(\cdot)$, characterize the effect of the outputs in the feature space, while nonlinear RBF kernels, denoted $K(\cdot)$, are used to accurately fit the Hammerstein nonlinearity at the plant input. Accordingly:

\min_{\alpha} \left\| Y_p - \alpha \underbrace{\left[ K(U_P, U_P) + K^{(l)}(Y_P, Y_P) \right]}_{W} \right\|_F^2.   (13)

^2 The introduced centered kernels ensure that the same class of nonlinearity is estimated, because we over-parametrize in the feature space with the nonlinear basis functions. A possible solution to center the estimated nonlinearity was introduced in [10], [9] as an equality constraint of the dual optimization. Instead, we restrict ourselves already at this stage to centered kernels, which inherently yields centered function approximations; consequently, no centering constraint has to be added when computing the optimal solution.
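Definition 1 and the kernel choices above map directly to code; the sketch below (names ours) forms the centered componentwise RBF kernel and the componentwise linear kernel, the latter being centered by construction since $q_{i,0} = 0$:

import numpy as np

def centered_rbf_kernel(E, F, sigma=1.0):
    # K(E, F) = sum_ij [q(e_i, f_j) - q(e_i, 0)], q the RBF kernel;
    # rows of E (nu x N_nu) and F (tau x N_tau) are signal channels
    K = np.zeros((E.shape[1], F.shape[1]))
    for ei in E:
        for fj in F:
            q = np.exp(-(ei[:, None] - fj[None, :])**2 / sigma)
            q0 = np.exp(-ei[:, None]**2 / sigma)   # centering term q_{a,0}
            K += q - q0                            # ensures K(a, 0) = 0
    return K

def linear_kernel(E, F):
    # componentwise linear kernel, q_{i,j} = i * j
    return sum(np.outer(ei, fj) for ei in E for fj in F)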

This least-squares problem is linear in the selected kernel matrices but can be ill-conditioned; its regularized solution is:

\hat{\alpha} = Y_p W^T \left( W W^T + \tfrac{1}{\gamma} I \right)^{-1},   (14)

where $\gamma$ is the regularization parameter and $\hat{\alpha}$ contains the estimated coefficients (Lagrange multipliers) representing the minimum-norm solution of the fitted data in the selected basis of the dual space. The key observation needed for the remainder of the paper is that the products of the Markov parameters with the static nonlinearity can be expressed from the solution of (13), via $\hat{\alpha}$ and the dual kernel components.
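With $W = K(U_P, U_P) + K^{(l)}(Y_P, Y_P)$ formed from the kernel helpers sketched above, the regularized solution (14) is a single line of numpy:

import numpy as np

def solve_dual(Y_p, W, gamma=10.0):
    # eq. (14): alpha_hat = Y_p W^T (W W^T + (1/gamma) I)^{-1}
    m = W.shape[0]
    return Y_p @ W.T @ np.linalg.inv(W @ W.T + np.eye(m) / gamma)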

Since by definition we have:

\hat{\alpha} K(U_P, \cdot) = \widehat{C\mathcal{K}_u f}(\cdot),

and due to the componentwise formulation of the kernels, the following equalities also hold:

\hat{\alpha} K^{(l)}(Y_i, \cdot) = \widehat{C\tilde{A}^{p-i-1}K}, \qquad \hat{\alpha} K(U_i, \cdot) = \widehat{C\tilde{A}^{p-i-1}B f}(\cdot),   (15)

for all $i \in \{0, \dots, p-1\}$. At this point, it is important to notice that the Markov parameters depending on $K$ can be expressed directly from the dualized results. However, although $\widehat{C\mathcal{K}_y}$ is thus available, the computation of $\widehat{C\mathcal{K}_u}$ and of the nonlinear function is still in question. In the next subsection we show how to obtain the terms depending on $B$ up to a similarity transformation.

B. Estimation of the Hammerstein nonlinearities

To obtain an estimate of the Hammerstein nonlinearity we suggest solving an SVD. Based only on the previously defined kernel matrices, matched with $U_p$, we can construct the following matrix:

\begin{bmatrix} \widehat{CB} \\ \widehat{C\tilde{A}B} \\ \vdots \\ \widehat{C\tilde{A}^{p-1}B} \end{bmatrix} f(U_p) = \begin{bmatrix} \hat{\alpha} K(U_0, U_p) \\ \hat{\alpha} K(U_1, U_p) \\ \vdots \\ \hat{\alpha} K(U_{p-1}, U_p) \end{bmatrix}.

This combination of the nonlinear kernels describes the product of the Markov parameters with the same nonlinear input sequence $f(U_p)$. Since it is a low-rank matrix, we can obtain the Hammerstein nonlinearity by performing an SVD:^3

\begin{bmatrix} \hat{\alpha} K(U_0, U_p) \\ \vdots \\ \hat{\alpha} K(U_{p-1}, U_p) \end{bmatrix} = \begin{bmatrix} U_f & U_{f,\perp} \end{bmatrix} \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_f \\ V_{f,\perp} \end{bmatrix}.

Here $\Sigma_r$ is a diagonal matrix of dimension $r$ [13]. As a result, we now have the Markov parameters depending on $B$ and the nonlinearity up to a similarity transformation $T_u$, since:

\begin{bmatrix} \widehat{CBT_u} \\ \widehat{C\tilde{A}BT_u} \\ \vdots \\ \widehat{C\tilde{A}^{p-1}BT_u} \end{bmatrix} = U_f, \qquad \widehat{T_u^{-1} f}(U_p) = U_f^\dagger \begin{bmatrix} \hat{\alpha} K(U_0, U_p) \\ \vdots \\ \hat{\alpha} K(U_{p-1}, U_p) \end{bmatrix},   (16)

where $\dagger$ represents the left pseudo-inverse. We can now introduce a dummy variable $v_k$, the input of the linear dynamics of the Hammerstein model, given by:

\hat{v}_k = \widehat{T_u^{-1} f}(u_k).   (17)

^3 In the noisy case, a gap between the singular values will appear.
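The estimation procedure of this subsection can be sketched as follows: stack the kernel products, take the rank-$r$ SVD, and read off $\widehat{T_u^{-1}f}(U_p)$ (names illustrative; in the noisy case $r$ is chosen at the singular-value gap):

import numpy as np

def estimate_nonlinearity(alpha, U_list, U_p, r, sigma=1.0):
    # U_list holds the data matrices U_0 .. U_{p-1}
    M = np.vstack([alpha @ centered_rbf_kernel(Ui, U_p, sigma)
                   for Ui in U_list])     # stacked kernel products, rank ~ r
    Uf, s, Vt = np.linalg.svd(M, full_matrices=False)
    Uf_r = Uf[:, :r]                      # spans the CB-type Markov blocks
    v_hat = np.diag(s[:r]) @ Vt[:r, :]    # = Uf_r^dagger M = T_u^{-1} f(U_p)
    return Uf_r, v_hat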

This allows us to build up the predictor defined in (10), using not only (16) but also (15). Observe that, by assumption, the nonlinearity is invertible, which makes the mapping between $u_k$ and $v_k$ unique. The next section shows how to combine the deterministic predictor output (10) with the inverse of the static mapping (17) to solve the H-SPC problem.

V. PREDICTIVE CONTROL

Following the Subspace Predictive Control idea, this section describes the Hammerstein Subspace Predictive Control (H-SPC) algorithm using the identified predictor. In our case, the optimal control problem minimizes a pre-defined performance function subject to the equality constraint formulated in (10).



Since the nonlinear function is assumed to be uniquely invertible, the cost is expressed not in the variable $u$ but in the effective (and unmeasured) control input $v$. Let us first define the goal of the control in terms of a quadratic cost function:

J_k = Y_{k+f}^T Q Y_{k+f} + V_{k+f}^T R V_{k+f} + \Delta V_{k+f}^T R_\Delta \Delta V_{k+f},   (18)

with weights $Q, R, R_\Delta \succ 0$ and $V_{k+f} = f(U_{k+f})$. Moreover,

\Delta V_{k+f} = \begin{bmatrix} v_k - v_{k-1} \\ \vdots \\ v_{k+f-1} - v_{k+f-2} \end{bmatrix} = \begin{bmatrix} f(u_k) - f(u_{k-1}) \\ \vdots \\ f(u_{k+f-1}) - f(u_{k+f-2}) \end{bmatrix}.   (19)

The relationship between $V_{k+f}$ and $\Delta V_{k+f}$ is given by:

\Delta V_{k+f} = S_\Delta V_{k+f} - S_1 \begin{bmatrix} f(\underline{U}_p) \\ \underline{Y}_p \end{bmatrix},   (20)

where

S_\Delta = \begin{bmatrix} I_r & 0 & \cdots & 0 \\ -I_r & I_r & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & -I_r & I_r \end{bmatrix},   (21)

$I_r$ is the $r$-dimensional identity matrix, and $S_1$ is a selector matrix selecting $f(u_{k-1})$ from the data matrix $f(\underline{U}_p)$. Now we can write the optimal closed-loop SPC problem as:

\min_{V_{k+f}} J_k,   (22)

subject to the nominal predictor (10) and to (20). The analytic solution is given by:

V_{k+f}^* = -\left( \Lambda^T Q \Lambda + S_\Delta^T R_\Delta S_\Delta + R \right)^{-1} \left( \Lambda^T Q \Gamma - S_\Delta^T R_\Delta S_1 \right) \begin{bmatrix} f(\underline{U}_p) \\ \underline{Y}_p \end{bmatrix}.   (23)

The first entry of the vector $V_{k+f}^*$ is denoted by $v_k^*$ and is mapped through the estimated inverse of the nonlinear map $f(\cdot)$ (based on (17)). A straightforward way to express $u$ from $v$ is to create a data set based on (16): we cover (not necessarily with an equidistant grid) the admissible control input set $u_{min} \leq u \leq u_{max}$, which yields a look-up table (LUT) $\{u_i, v_i\}$ based on the nonlinear dual-space expression (17). The inverse function can then be evaluated by means of interpolation. The benefit of a cost function that depends on the effective control input is that the optimization is convex in $v$. Even though the real actuation point is $u$ and not $v$, we suggest separating the nonlinearity and solving the optimization problem w.r.t. $v$; afterwards, using the inverse relationship between $v_k$ and $u_k$, $u_k^*$ can be computed.^{4,5}

^4 Eq. (17) might also be taken into account during the optimization (22). The advantage of that approach is certainly to obtain the result directly in $u$. Unfortunately, this idea requires non-convex optimization solvers.
^5 Apart from the unconstrained solution, the method is easily extendable when hard constraints on the I/O are needed.
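Collecting the pieces, the unconstrained control law (23) and the LUT-based inversion of $f(\cdot)$ can be sketched as below (matrix names are ours; $S_\Delta$ is built as in (21)):

import numpy as np

def s_delta(f, r):
    # difference operator (21): identity diagonal, -I_r on the subdiagonal
    S = np.eye(f * r)
    for i in range(1, f):
        S[i*r:(i+1)*r, (i-1)*r:i*r] = -np.eye(r)
    return S

def hspc_gain(Gamma, Lam, Q, R, Rd, Sd, S1):
    # eq. (23): V* = -(L'QL + Sd'Rd Sd + R)^{-1} (L'Q G - Sd'Rd S1) z,
    # where z = [f(U_past); Y_past]
    H = Lam.T @ Q @ Lam + Sd.T @ Rd @ Sd + R
    F = Lam.T @ Q @ Gamma - Sd.T @ Rd @ S1
    return -np.linalg.solve(H, F)          # V*_{k+f} = gain @ z

def invert_lut(v_star, u_grid, v_grid):
    # map v* back to u channel by channel through the identified static map
    order = [np.argsort(vg) for vg in v_grid]
    return np.array([np.interp(v_star[i], v_grid[i][order[i]],
                               u_grid[i][order[i]])
                     for i in range(len(v_star))])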



0.95 −0.01   0.03  0 0  0.82  0.1   0  1 0  −0.9 0.77

 −0.81 −0.06  0.25  xk +, 0  0.7  0.38 −0.04  −0.49 ek , 0.05  −0.17  0.01 0 x + ek , 0 0.01 k

0.01 0.02 0.82 0.93 −0.05 0.1 0.03 0.77 0 0 0 0.8 0 0 0   −0.81 −0.61 −0.12 −0.06   0.25  vk + −0.36  −0.23 0 1 0.04 −0.03 −0.18

−0.37 −0.8

euk (1) − 1 , euk (1) + 1 euk (2) − 1 −20 · u (2) , e k +1 10 ·

with additional white noise sequence ek ∼ N(0, Iℓ · 10−6). The batchwise H-SPC design is decomposed into two major parts. Since in this paper here we introduced the batchwise solution, the identification of the Markov parameters and nonlinearity can be done off-line. Indeed, under changing model conditions, recursive identification is required [5]. During the data collection phase, N = 2000 measurement points were acquired over p = f = 10 horizon. Past horizon was selected to be twice as large as the state space dimension n. The regularization factor has been chosen to equal γ = 10, the size of the kernel matrix W became Nk × (N − p) with Nk = 1900. RBF kernels were used with σ = 1 [16]6 . No controller action was performed during the data collection phase7 in order to properly excite the nonlinear input functions (utrain ∼ N(0, 9 · Iℓ ). Regularization was found as a crucial tuning parameter of the identification process. Using the concepts introduced in Section IV, the nonlinearity and the Markov parameters were identified up to an unknown similarity transformation. In the controller implementation phase, the closed loop H-SPC was simulated with the synthesized controller in the loop (integrator type of controller, I=0.01). The following weights were used Q = 103 · In , R = 10−1 · Ir , R∆ = 104 · Ir . The idea was to remove the nonlinearity from the plant input and minimize the performance criteria (18). The data driven unconstrained optimal solution was computed then by the process written in the Section (V). Figure 2 depicts the H-SPC simulation results. First, I controller is applied when  the simple-minded  x0 = 100 · 1 1 1 1 1 (dash line). On the other hand, 6 Note that with the present computing power, the computation of the componentwise centered kernel matrices K(·) is very memory demanding if Nk is chosen too large. 7 For sake of simplicity.



[Figure 2 shows three panels over samples k = 0, ..., 100: the control inputs u1 and u2, the measured output y(1)_k, and the measured output y(2)_k, comparing the H-SPC (solid) with the I-type controller (dashed).]
Fig. 2. Performance results.

[Figure 3 shows the absolute error between v and v̂ over samples k = 0, ..., 100 for the two channels v(1)_k and v(2)_k.]
Fig. 3. Input comparison.

In the latter case, the first $p$ samples serve only for data acquisition, stacking the measurements into $[f(\underline{U}_{k-p})^T\ \underline{Y}_{k-p}^T]^T$. At sample 11, $v_k^*$ was applied and a new measurement was taken into $[f(\underline{U}_{k+1-p})^T\ \underline{Y}_{k+1-p}^T]^T$. From then on, the H-SPC was active and replaced the original basic controller. Since the control law is computed from the identified Markov parameters and the last $p$ collected I/O measurements, it has a low computational burden. Figure 3 shows comparative plots of the difference between the 'real' and the estimated effective control actions, i.e. the absolute error between $v$ and $\hat{v}$. The example shows that the nominal H-SPC is a useful data-driven candidate when the input nonlinearity and the Markov parameters are accurately estimated.

VII. CONCLUSIONS

A state space model-free control approach has been presented for MIMO Hammerstein systems. The proposed algorithm covers not only the identification of the observer's Markov parameters but also the estimation of the Hammerstein nonlinearity. The performance criterion has been redefined, and an optimal data-driven control law has been obtained for Hammerstein LTI systems. Despite the encouraging results of this nominal and model-independent technique, many open questions remain (e.g. closed-loop stability for a finite horizon, persistency of excitation, non-uniquely invertible input nonlinearities, smoothness of the nonlinearity). Since the current paper applies a nominal and deterministic predictor, the implementation of robust control results might be a relevant extension. A recursive scheme, or the extension of the current idea to a different model family, are possible further research directions.

REFERENCES

[1] W. Favoreel, B. De Moor, M. Gevers, and P. Van Overschee, "Closed-loop model-free subspace-based LQG-design," in IEEE Mediterranean Conference on Control and Automation, Haifa, Israel, 1999.
[2] P. Van Overschee and B. De Moor, Subspace Identification for Linear Systems: Theory, Implementation, Applications. Dordrecht: Kluwer Academic Publishers, 1996.
[3] J. Dong and M. Verhaegen, "On the equivalence of closed loop subspace predictive control with LQG," in IEEE Conference on Decision and Control, 2008, pp. 4085-4090.
[4] B. Woodley, Model Free Subspace Based H∞ Control, PhD dissertation, Stanford University, 2001.
[5] J. Dong, M. Verhaegen, and E. Holweg, "Closed-loop subspace predictive control for fault tolerant MPC design," in 17th IFAC World Congress, Seoul, Korea, 2008.
[6] E. Eskinat, S. H. Johnson, and W. L. Luyben, "Use of Hammerstein models in identification of nonlinear systems," AIChE Journal, vol. 37, no. 2, pp. 255-268, 1991.
[7] H.-T. Zhang, H. Li, and G. Chen, "Dual mode predictive control algorithm for constrained Hammerstein systems," International Journal of Control, vol. 81, no. 10, pp. 1609-1625, 2008.
[8] F. Ding and T. Chen, "Identification of Hammerstein nonlinear ARMAX systems," Automatica, vol. 41, no. 9, pp. 1479-1489, 2005.
[9] I. Goethals, K. Pelckmans, J. A. K. Suykens, and B. De Moor, "Identification of MIMO Hammerstein models using least squares support vector machines," Automatica, vol. 41, no. 7, pp. 1263-1272, 2005.
[10] ——, "Subspace identification of Hammerstein systems using least squares support vector machines," IEEE Transactions on Automatic Control, vol. 50, no. 10, pp. 1509-1519, 2005.
[11] J. W. van Wingerden and M. Verhaegen, "Closed-loop identification of MIMO Hammerstein models using LS-SVM," in Proceedings of the 15th IFAC Symposium on System Identification, Saint-Malo, France, July 2009.
[12] A. Chiuso, "On the relation between the CCA and the predictor-based subspace identification," IEEE Transactions on Automatic Control, vol. 52, pp. 1795-1812, 2007.
[13] M. Verhaegen and V. Verdult, Filtering and System Identification: An Introduction. Cambridge University Press, 2007.
[14] G. H. Golub and C. F. Van Loan, Matrix Computations. The Johns Hopkins University Press, 1996.
[15] A. Chiuso, "The role of vector auto regressive modeling in predictor based subspace identification," Automatica, vol. 43, no. 6, pp. 1034-1048, 2007.
[16] J. A. K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle, Least Squares Support Vector Machines. World Scientific, 2002.
