Recursive Subspace Identification Algorithm for Closed-Loop Systems

2013 American Control Conference (ACC) Washington, DC, USA, June 17-19, 2013

Recursive Subspace Identification Approach of a Closed-loop Model Jia Wang, Hong Gu, Hongwei Wang

Abstract— A subspace model identification algorithm for closed-loop experimental conditions is presented in this paper; it can be implemented to recursively identify and update the system model. A new updating scheme is developed that obtains the projected data matrix recursively through a sliding-window technique and linear equations. Based on the propagator-type method from array signal processing, the subspace spanned by the column vectors of the extended observability matrix is estimated without singular value decomposition. The proposed method remains feasible for closed-loop systems contaminated with colored noise. A numerical example shows the effectiveness of the proposed algorithm.

I. INTRODUCTION

Subspace Model Identification (SMI) has received much interest over the past two decades [1], [2]. Most of these methods are characterized as open-loop identification: they construct input-output block Hankel matrices in order to retrieve certain subspaces related to the system matrices [3]. Since there are many cases where open-loop experiments are impossible for reasons of safety and stability, researchers have tried to apply subspace methods to closed-loop system identification. The main difficulty in the identification of closed-loop systems is the correlation between the plant inputs and the disturbances, which results in biased estimates of the system parameters [4]. More recent work can achieve consistent estimates with closed-loop data, such as [5]-[8]. However, these algorithms, which are appropriate for off-line identification, are difficult to implement on-line due to the heavy computational load of the singular value decomposition (SVD). In many cases it is necessary to have a model of the system available on-line while the system is in operation [9]. One possibility is to identify the system under closed-loop conditions using a recursive update of the model. Because an updating technique is used, this also satisfies the off-line requirement that one of the dimensions of the block Hankel matrices involved go to infinity. The main problem in implementing recursive SMI algorithms under closed-loop conditions is to find an alternative to the SVD. Even with updating algorithms for the SVD, the problem of correlation between plant inputs and disturbances remains in closed-loop subspace identification. Combining updating techniques with remedies for this correlation problem, several recursive algorithms for closed-loop subspace identification have been

J. Wang is with the School of Control Science and Engineering, Dalian University of Technology, Dalian, Liao Ning 116023, China.
H. Gu is with the Faculty of Control Science and Engineering, Dalian University of Technology, Dalian, Liao Ning 116023, China.

[email protected]

H. W. Wang is with the Faculty of Control Science and Engineering, Dalian University of Technology, Dalian, Liao Ning 116023, China.

978-1-4799-0178-4/$31.00 ©2013 AACC

developed. In order to obtain unbiased parameter estimates, the vector autoregressive with exogenous inputs (VARX) model from the work of [10] was introduced, and the Projection Approximation Subspace Tracking (PAST) algorithm was used to update the signal subspace in [11]. In [12], recursive closed-loop subspace identification is formulated as two linear optimization problems, and subspace tracking techniques are developed for recursive SMI based on the relationship between sensor-array signal processing and SMI problems.

In this paper, a recursive subspace identification method under closed-loop conditions is presented based on the orthogonal decomposition, inspired by the off-line method of [8]. We present a new updating scheme for the projected data matrix and develop a recursive construction of the LQ decomposition. Based on the propagator-type method from array signal processing, we obtain the subspace spanned by the column vectors of the extended observability matrix without singular value decomposition.

The remainder of this paper is organized as follows. In Section II, we state the problem formulation and introduce the state-space model and notation. In Section III, the projected data matrix is constructed on-line by an updating technique. In Section IV, a recursive identification algorithm is presented. In Section V, a numerical example is given and the results show the effectiveness of the proposed algorithm. Finally, in Section VI, we present conclusions and directions for future research.

II. PROBLEM FORMULATION AND NOTATION

A. Problem formulation

Consider the closed-loop system depicted in Fig. 1. The plant is assumed to be modelled by P(z), represented as a finite-dimensional state-space model in the following. Let us assume that the signals u, r2 ∈ R^m and y, r1 ∈ R^p are observed. The disturbance signals η_u and η_y are zero-mean white noise. The matrices H_c(z) and H_p(z) are the respective noise filters.
Furthermore, it is assumed that:
A1: The feedback system is well-posed in the sense that (u, y) are uniquely determined once all the external signals are given.
A2: The controller C(z) is known and stabilizes the closed-loop system.
A3: The exogenous inputs (r1, r2) satisfy persistent excitation (PE) conditions and are uncorrelated with the white noises η_u and η_y.
A4: The exogenous inputs (r1, r2) are zero-mean, second-order jointly stationary processes.
A5: There is no feedback from (u, y) to (r1, r2).
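To make the configuration of Fig. 1 concrete, the loop can be simulated with a simple first-order plant and a proportional controller. All numbers below (plant pole a, gain b, controller gain k, noise level) are illustrative stand-ins of ours, not the paper's values; the point is only that feedback makes the plant input correlated with the output disturbance, which is the core difficulty for closed-loop SMI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative first-order plant x(t+1) = a x(t) + b u(t), y(t) = x(t) + eta_y(t),
# under the proportional feedback u = r2 + k (r1 - y).
a, b, k = 0.8, 1.0, 0.5
N = 2000
r1 = rng.standard_normal(N)           # exogenous reference entering at the output
r2 = rng.standard_normal(N)           # exogenous reference entering at the input
eta_y = 0.3 * rng.standard_normal(N)  # white output disturbance

x = 0.0
u = np.zeros(N)
y = np.zeros(N)
for t in range(N):
    y[t] = x + eta_y[t]                # measured output
    u[t] = r2[t] + k * (r1[t] - y[t])  # controller closes the loop
    x = a * x + b * u[t]               # plant state update

# Feedback routes the disturbance eta_y into the plant input u, so the two
# are correlated -- exactly what biases naive open-loop subspace methods.
print(abs(np.corrcoef(u, eta_y)[0, 1]) > 0.05)
```

Because the exogenous signals r1, r2 remain uncorrelated with η_y, projecting (u, y) onto the space spanned by (r1, r2), as done below, removes the noise-induced part.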


Fig. 1. Block schematic representation of the closed-loop configuration

Based on the configuration of Fig. 1, we state the closed-loop identification problem as follows: given the exogenous inputs r1, r2 and the input-output data u, y, derive a subspace identification method that estimates a state-space model of the plant P(z) recursively.

B. State-space model

Define
$$r = \begin{bmatrix} r_1 \\ r_2 \end{bmatrix}, \qquad w = \begin{bmatrix} y \\ u \end{bmatrix}.$$
The Hilbert spaces generated by the second-order random variables of the exogenous inputs r and of the joint input-output signal w are denoted by R = span{r(t) | t ∈ Z} and W = span{w(t) | t ∈ Z}. We take the orthogonal projection of w onto R and onto its complement,
$$w_d(t) = \hat{E}\{w(t)\,|\,\mathcal{R}\}, \qquad w_s(t) = w(t) - w_d(t) \tag{1}$$
where Ê{· | R} denotes the orthogonal projection onto R. Under the assumption that the exogenous inputs are feedback-free, the joint input-output process w has the orthogonal decomposition
$$w(t) = w_d(t) + w_s(t) \tag{2}$$
or
$$y(t) = y_d(t) + y_s(t), \qquad u(t) = u_d(t) + u_s(t). \tag{3}$$
Moreover, w_d and w_s are mutually uncorrelated,
$$E\left\{ w_s(t) w_d^T(\tau) \right\} = 0, \qquad \forall\, t, \tau = 0, \pm 1, \pm 2, \cdots.$$
The plant of the closed-loop system shown in Fig. 1 can be expressed in transfer-function form as
$$y(t) = P(z)\, u(t) + H_p(z)\, \eta_y(t). \tag{4}$$
Since the exogenous input signal r is uncorrelated with the noise η_y, all elements of Ê{η_y(t) | R} approach 0 as N → ∞. Then we rewrite (4) as
$$y_d(t) = P(z)\, u_d(t) \tag{5}$$
where y_d(t) = Ê{y(t) | R} and u_d(t) = Ê{u(t) | R}. The plant is expressed in the deterministic state-space form
$$x_d(t+1) = A x_d(t) + B u_d(t), \qquad y_d(t) = C x_d(t) + D u_d(t) \tag{6}$$
where the system matrices satisfy A ∈ R^{n_d×n_d}, B ∈ R^{n_d×m}, C ∈ R^{p×n_d} and D ∈ R^{p×m}. Define R_{[0,T]}, the finite history of second-order random variables of the exogenous inputs, which is a subspace of R over the time period [0, T]. Then we take the orthogonal projection of (6) onto the space R_{[0,T]} and obtain
$$\hat{x}_d(t+1) = A \hat{x}_d(t) + B \hat{u}_d(t), \qquad \hat{y}_d(t) = C \hat{x}_d(t) + D \hat{u}_d(t) \tag{7}$$
where û_d(t) = Ê{u_d(t) | R_{[0,T]}} and ŷ_d(t) = Ê{y_d(t) | R_{[0,T]}}.

C. Notation

Given the current data sequence {r(i), u(i), y(i), i = 0, 1, …, t}, we construct the block Hankel matrix of exogenous inputs as
$$R_{0,2i,t} = \begin{bmatrix} r(0) & r(1) & \cdots & r(t-2i+1) \\ r(1) & r(2) & \cdots & r(t-2i+2) \\ \vdots & \vdots & \ddots & \vdots \\ r(i-1) & r(i) & \cdots & r(t-i) \\ r(i) & r(i+1) & \cdots & r(t-i+1) \\ r(i+1) & r(i+2) & \cdots & r(t-i+2) \\ \vdots & \vdots & \ddots & \vdots \\ r(2i-1) & r(2i) & \cdots & r(t) \end{bmatrix} = \begin{bmatrix} R_p \\ R_f \end{bmatrix}$$
where R_{0,2i,t} ∈ R^{2i(m+p)×(t−2i+2)} and the number of block rows i is a user-defined index. The subscripts p and f denote past and future, respectively. In a similar way, we construct the block Hankel matrices U_{0,2i,t} ∈ R^{2im×(t−2i+2)} and Y_{0,2i,t} ∈ R^{2ip×(t−2i+2)}. The extended observability matrix Γ_i^d is defined as
$$\Gamma_i^d = \begin{bmatrix} C^T & (CA)^T & \cdots & (CA^{i-1})^T \end{bmatrix}^T.$$

III. RECURSIVE UPDATE OF THE PROJECTED DATA MATRIX
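Before turning to the recursive update, the block-Hankel construction of Section II-C can be sketched directly. The helper below is a minimal NumPy illustration of ours (function name and toy sizes are not from the paper); the first i block rows form the "past" part and the last i the "future" part.

```python
import numpy as np

def block_hankel(sig, i, t):
    """Stack 2i block rows of sig (shape: dim x (t+1)) into the
    block Hankel matrix with t - 2i + 2 columns, as in R_{0,2i,t}."""
    dim = sig.shape[0]
    cols = t - 2 * i + 2
    H = np.zeros((2 * i * dim, cols))
    for k in range(2 * i):
        # block row k holds sig(k), sig(k+1), ..., sig(k + cols - 1)
        H[k * dim:(k + 1) * dim, :] = sig[:, k:k + cols]
    return H

# Toy check: scalar signal r(0..9), i = 2  ->  a 4 x 7 Hankel matrix
r = np.arange(10, dtype=float).reshape(1, -1)
H = block_hankel(r, i=2, t=9)
print(H.shape)           # (4, 7)
print(H[0, 0], H[3, 6])  # r(0) = 0.0 and r(9) = 9.0
```

The same helper applied to u and y yields U_{0,2i,t} and Y_{0,2i,t} with the stated row dimensions.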

A. Construction of the projected data matrix

Based on the orthogonal projection of the previous section, we define the projected data matrices
$$\hat{U}_d = \begin{bmatrix} \hat{U}_{d,p} \\ \hat{U}_{d,f} \end{bmatrix}, \qquad \hat{Y}_d = \begin{bmatrix} \hat{Y}_{d,p} \\ \hat{Y}_{d,f} \end{bmatrix},$$
and the stacked matrix Ŵ = [Û_d; Ŷ_d].

Lemma: Given the block Hankel matrices R_{0,2i,t}, U_{0,2i,t} and Y_{0,2i,t} built from the data sequence {r(i), u(i), y(i), i = 0, 1, …, t}, the projected data matrix Ŵ can be obtained via the LQ decomposition
$$\begin{bmatrix} R_{0,2i,t} \\ U_{0,2i,t} \\ Y_{0,2i,t} \end{bmatrix} = \begin{bmatrix} L_{11} & & \\ L_{21} & L_{22} & \\ L_{31} & L_{32} & L_{33} \end{bmatrix} \begin{bmatrix} Q_1^T \\ Q_2^T \\ Q_3^T \end{bmatrix} \tag{8}$$
where L_{11} ∈ R^{2i(p+m)×2i(p+m)}, L_{22} ∈ R^{2im×2im} and L_{33} ∈ R^{2ip×2ip}. The projected matrix Ŵ is then obtained as
$$\hat{W} = \begin{bmatrix} \hat{U}_d \\ \hat{Y}_d \end{bmatrix} = \begin{bmatrix} L_{21} \\ L_{31} \end{bmatrix} Q_1^T \tag{9}$$
where L_{21} ∈ R^{2im×2i(p+m)} and L_{31} ∈ R^{2ip×2i(p+m)}.

Proof: In the general case, the orthogonal projection of the row space of a matrix A onto the row space of a matrix B is
$$A/B = L_A L_B^T \left( L_B L_B^T \right)^{\dagger} L_B Q^T$$
where A and B are expressed as linear combinations in terms of the LQ decomposition, A = L_A Q^T and B = L_B Q^T. Applying this result, the orthogonal projection of the row space of U_{0,2i,t} onto that of R_{0,2i,t} is
$$\hat{U}_d = U_{0,2i,t}/R_{0,2i,t} = L_{21} L_{11}^T \left( L_{11} L_{11}^T \right)^{-1} L_{11} Q_1^T = L_{21} Q_1^T.$$
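As a quick numerical check of (8)-(9): the LQ factorization is simply the transpose of a QR factorization, and the block [L21; L31] Q1ᵀ must coincide with the direct projection of [U; Y] onto the row space of R. The sketch below uses toy dimensions and variable names of ours, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: n_r rows of R, n_u of U, n_y of Y, N data columns
n_r, n_u, n_y, N = 4, 2, 2, 50
R = rng.standard_normal((n_r, N))
U = rng.standard_normal((n_u, N)) + 0.5 * R[:n_u]  # make U correlated with R
Y = rng.standard_normal((n_y, N)) + 0.5 * R[:n_y]

G = np.vstack([R, U, Y])
# LQ decomposition via QR of the transpose: G = L Qm with L lower triangular
Q, LT = np.linalg.qr(G.T)
L, Qm = LT.T, Q.T                      # G == L @ Qm

L21 = L[n_r:n_r + n_u, :n_r]
L31 = L[n_r + n_u:, :n_r]
Q1 = Qm[:n_r, :]

W_hat = np.vstack([L21, L31]) @ Q1     # eq. (9): projection via the L blocks

# Same projection computed directly: [U; Y] R^T (R R^T)^{-1} R
W_direct = np.vstack([U, Y]) @ R.T @ np.linalg.solve(R @ R.T, R)
print(np.allclose(W_hat, W_direct))    # True
```

The agreement holds because the rows of Q1 span the row space of R whenever L11 is invertible, which the PE assumption on the exogenous inputs guarantees generically.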

In a similar way, we also obtain
$$\hat{Y}_d = Y_{0,2i,t}/R_{0,2i,t} = L_{31} L_{11}^T \left( L_{11} L_{11}^T \right)^{-1} L_{11} Q_1^T = L_{31} Q_1^T.$$
To ensure the condition rank(Û_d) = 2im, we have assumed that r_2 satisfies a PE condition of order 2m; the component û_d then also satisfies a PE condition of order 2m [13]. □

B. Updating the projected data matrix by a sliding window

At the current time instant t, the data sequence {r(i), u(i), y(i), i = 0, 1, …, t} is given. We define
$$G(t) = \begin{bmatrix} R_{0,2i,t} \\ U_{0,2i,t} \\ Y_{0,2i,t} \end{bmatrix} = \begin{bmatrix} G_1 & G_2 & \cdots & G_j \end{bmatrix} \tag{10}$$
where each column is G_k = [g_{1k}\ g_{2k}\ \cdots\ g_{Nk}]^T, 1 ≤ k ≤ j, with j = t − 2i + 2 and N = 4i(m + p). According to the Lemma above, we have the LQ decomposition G(t) = L(t) Q(t)^T, and the corresponding projected data matrix is
$$\begin{bmatrix} \hat{U}_d(t) \\ \hat{Y}_d(t) \end{bmatrix} = \begin{bmatrix} L_{21}(t) \\ L_{31}(t) \end{bmatrix} Q_1^T(t).$$
At the (t + 1)th time instant, the new observations {r(t + 1), u(t + 1), y(t + 1)} are used to construct the data vector
$$\varphi_{R,(t+1)} = \begin{bmatrix} r(t - 2i + 2) & r(t - 2i + 3) & \cdots & r(t + 1) \end{bmatrix}^T.$$
In a similar way, we obtain the new data vectors φ_{U,(t+1)} and φ_{Y,(t+1)}. Thus
$$G_{j+1} = \begin{bmatrix} \varphi_{R,(t+1)} \\ \varphi_{U,(t+1)} \\ \varphi_{Y,(t+1)} \end{bmatrix} = \begin{bmatrix} g_{1(j+1)} & g_{2(j+1)} & \cdots & g_{N(j+1)} \end{bmatrix}^T.$$
The new column G_{j+1} is appended and the old column G_1 is discarded, giving the new data matrix
$$G(t+1) = \begin{bmatrix} G_2 & G_3 & \cdots & G_{j+1} \end{bmatrix}.$$
The data matrix G(t + 1) admits the LQ decomposition G(t + 1) = L(t + 1) Q(t + 1)^T, where L(t + 1) and Q(t + 1) are unknown. We introduce the intermediate matrix G* = [G_1\ G_2\ \cdots\ G_j\ G_{j+1}] and define A = [L(t)\ G_{j+1}] and B = [G_1\ L(t + 1)]. The ith row of A is
$$a_i = \begin{bmatrix} l_{i1}(t) & l_{i2}(t) & \cdots & l_{ii}(t) & 0 & \cdots & 0 & g_{i(j+1)} \end{bmatrix}$$
and the ith row of B is
$$b_i = \begin{bmatrix} g_{i1} & l_{i1}(t+1) & \cdots & l_{ii}(t+1) & 0 & \cdots & 0 \end{bmatrix}.$$
Clearly, the elements of the row a_i are known, while b_i is mostly unknown. By the definition of the LQ decomposition, the intermediate matrix G* satisfies
$$G^* = \begin{bmatrix} L(t)Q(t)^T & G_{j+1} \end{bmatrix} = \mathcal{A} \begin{bmatrix} Q(t)^T & 0 \\ 0^T & I \end{bmatrix}$$
and
$$G^* = \begin{bmatrix} G_1 & L(t+1)Q(t+1)^T \end{bmatrix} = \mathcal{B} \begin{bmatrix} I & 0 \\ 0^T & Q(t+1)^T \end{bmatrix}.$$
Due to the orthogonality of the matrices Q(t) and Q(t + 1), it is easy to obtain
$$\mathcal{A}\mathcal{A}^T = \mathcal{B}\mathcal{B}^T. \tag{11}$$
Expanding (11) and equating the corresponding elements on both sides, each element of the updated matrix L(t + 1) can be computed; the matrix Q(t + 1) is obtained similarly.

C. Recursive estimation of a basis of Γ_i^d

Consider the noise-free case. Since the extended observability matrix Γ_i^d ∈ R^{ip×n_d} with ip > n_d, the matrix Γ_i^d has at least n_d linearly independent rows, and the following partition of the observation matrix Ξ_{32}(t + 1) can be introduced:
$$\Xi_{32}(t+1) = \begin{bmatrix} \Xi_{32,1} \\ \Xi_{32,2} \end{bmatrix} = \begin{bmatrix} \Gamma^i_{d,1} \\ \Gamma^i_{d,2} \end{bmatrix} \hat{X}_f \Theta_2^T, \qquad \Xi_{32,1} \in \mathbb{R}^{n_d \times i(m+p)},\ \ \Xi_{32,2} \in \mathbb{R}^{(ip-n_d) \times i(m+p)} \tag{12}$$
where Ξ_{32,1} and Ξ_{32,2} are the components corresponding to the first n_d rows and the remaining (ip − n_d) rows of Ξ_{32}(t + 1), respectively. Thus there exists a unique operator P_f ∈ R^{n_d×(ip−n_d)}, called the propagator, such that
$$\Gamma^i_{d,2} = P_f^T\, \Gamma^i_{d,1}.$$
Moreover, an estimate of the extended observability matrix is available from an estimate P̂_f as Γ_i^d = [I_{n_d}\ P̂_f]^T. In order to develop a recursive minimization algorithm, we introduce the forgetting factor β and obtain the cost function as a finite exponentially weighted sum:
$$J(P_f) = \sum_{s=0}^{N} \beta^{N-s} \left\| \Xi_{32,2} - P_f^T \Xi_{32,1} \right\|_F^2. \tag{13}$$
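Minimizing (13) is a standard exponentially weighted least-squares problem in P_f. The sketch below is a generic multi-output RLS loop of ours on synthetic data (sizes, names, and the noise level are illustrative, not tied to the paper's Ξ matrices): each "observation" pairs a regressor xi1 (the role of a column of Ξ_{32,1}) with a response xi2 (a column of Ξ_{32,2}).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ground truth: xi2 = Pf_true^T xi1 + small noise
nd, rest = 2, 3                      # n_d rows vs. (ip - n_d) rows
Pf_true = rng.standard_normal((nd, rest))

beta = 0.99                          # forgetting factor
P = 1e3 * np.eye(nd)                 # inverse-covariance, large initial value
Pf = np.zeros((nd, rest))            # propagator estimate

for s in range(2000):
    xi1 = rng.standard_normal(nd)
    xi2 = Pf_true.T @ xi1 + 0.01 * rng.standard_normal(rest)
    # classic exponentially weighted RLS update with a shared regressor
    g = P @ xi1 / (beta + xi1 @ P @ xi1)   # gain vector
    Pf += np.outer(g, xi2 - Pf.T @ xi1)    # correct all columns at once
    P = (P - np.outer(g, xi1 @ P)) / beta  # covariance downdate

print(np.max(np.abs(Pf - Pf_true)) < 0.05)  # converges to the true propagator
```

Once P̂_f has converged, stacking [I_{n_d}; P̂_f^T] gives a basis estimate of the extended observability matrix without any SVD, which is the computational point of the propagator approach.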

The minimization of such a criterion is recursively feasible by applying a classic recursive least-squares approach.

IV. NUMERICAL EXAMPLES

In order to illustrate the performance of the proposed method, we consider the closed-loop system depicted in Fig. 1. Three cases of the pole of the plant P(z) are considered: 0.8, 1 and 1.3, respectively. The noise models are given by
$$H_p(z) = \frac{z^3 - 1.56 z^2 + 1.045 z - 0.3338}{z^3 - 2.35 z^2 + 2.09 z - 0.6675}, \qquad H_c(z) = 0.$$
The reference inputs r_1 and r_2 are zero-mean Gaussian white signals with variances σ²_{r1} = σ²_{r2} = 1. The noise input η_y is uncorrelated with the reference signals and is also a zero-mean Gaussian white signal with variance σ²_{ηy} = 0.1. For the data Hankel matrices, the user-defined index is chosen as i = 8. The order of the plant is assumed to be known as n = 1. Each simulation generates 4000 data points.

In the first experiment, the trajectories of the estimated coefficients of the plant P(z) in the three cases, averaged over 10 independent Monte Carlo simulations (MCS), are displayed in Fig. 2. For cases 1 and 2, Fig. 2 demonstrates that the proposed recursive algorithm has a remarkable ability to track the poles: as the number of samples increases, the estimated coefficients converge to the true values. Case 3 presents some difficulty and shows large fluctuations in the result. We note that the plant P(z) of case 3 is an unstable system whose pole lies outside the unit circle. If the noise variance σ²_{ηy} is reduced to zero, the identification of the unstable plant P(z) follows the true coefficient trajectory much better, as shown by the noise-free case 3 in Fig. 2.

Fig. 2. Trajectories of the estimated coefficients of three transfer functions (panels: case 1, case 2, case 3 and case 3 with no noise; estimated vs. true value over 4000 samples)

The second experiment compares the representative subspace algorithms EIVPM [14], MOESP-PO and RPB [12] with our method. The results in Fig. 3 show the average estimated magnitude of the frequency response of the three transfer functions. As can be seen from Fig. 3, our method yields satisfactory results and better performance than the others.

Fig. 3. Bode diagrams of three identified systems with EIVPM, RPB, MOESP-PO and our method

V. CONCLUSION AND FUTURE WORK

Under the assumption that the order of the plant to be identified is known a priori, we have derived a recursive closed-loop subspace model identification algorithm. The proposed algorithm updates the projected data matrix through linear equations and estimates the extended observability matrix based on the propagator method. The effectiveness of the algorithm has been demonstrated by two numerical simulation experiments. However, it should be noted that, since the projection is onto the finite data space R_{[0,T]}, the projected data {û_d, ŷ_d} are not purely deterministic and do contain some residuals, or noise. As the window size T → ∞,
$$\hat{u}_d(t) = \lim_{T \to \infty} \hat{E}\{u_d(t)\,|\,\mathcal{R}_{[0,T]}\} = u_d(t),$$
and we would recover the perfectly noise-free input signals. In practice, this is not feasible. Thus, in order to cope with the residuals, future work will develop a recursive subspace algorithm based on correlation-function estimates.

REFERENCES

[1] S. J. Qin, An Overview of Subspace Identification, Computers and Chemical Engineering, vol. 30, no. 10-12, 2006, pp. 1502-1513.
[2] J. Wang and S. J. Qin, A New Subspace Identification Approach Based on Principal Component Analysis, Journal of Process Control, vol. 12, no. 8, 2002, pp. 841-855.
[3] P. Van Overschee and B. De Moor, Subspace Identification for Linear Systems: Theory, Implementation, Applications. Kluwer Academic Publishers, 1996.
[4] M. Gilson and G. Mercère, Subspace-based Optimal IV Method for Closed-loop System Identification, The 14th IFAC Symposium on System Identification, Newcastle, Australia, 2006.
[5] S. J. Qin, W. Lin and L. Ljung, A Novel Subspace Identification Approach with Enforced Causal Models, Automatica, vol. 41, no. 12, 2005, pp. 2043-2053.
[6] A. Chiuso and G. Picci, Consistency Analysis of Some Closed-loop Subspace Identification Methods, Automatica, vol. 41, no. 3, 2005, pp. 377-391.
[7] G. Mercère, L. Bako and S. Lecoeuche, Propagator-based Methods for Recursive Subspace Model Identification, Signal Processing, vol. 88, no. 3, 2008, pp. 468-491.
[8] T. Katayama, H. Kawauchi and G. Picci, Subspace Identification of Closed Loop Systems by the Orthogonal Decomposition Method, Automatica, vol. 41, no. 5, 2005, pp. 863-872.
[9] L. Ljung, System Identification: Theory for the User, Prentice-Hall, Upper Saddle River, 2002.
[10] T. Gustafsson, Instrumental Variable Subspace Tracking Using Projection Approximation, IEEE Transactions on Signal Processing, vol. 46, no. 3, 1998, pp. 669-681.
[11] P. Wu, C. Yang and Z. Song, Recursive Subspace Model Identification Based on Vector Autoregressive Modelling, Proceedings of the 17th IFAC World Congress, Seoul, Korea, 2008, pp. 8872-8877.
[12] I. Houtzager, J. W. van Wingerden and M. Verhaegen, Fast-array Recursive Closed-loop Subspace Model Identification, The 15th IFAC Symposium on System Identification, Saint-Malo, France, 2009, pp. 96-101.
[13] I. Markovsky, J. C. Willems, P. Rapisarda and B. De Moor, Algorithms for Deterministic Balanced Subspace Identification, Automatica, vol. 41, no. 5, 2005, pp. 755-766.
[14] G. Mercère, S. Lecoeuche and C. Vasseur, A New Recursive Method for Subspace Identification of Noisy Systems: EIVPM, The 13th IFAC Symposium on System Identification, Rotterdam, 2003, pp. 1637-1642.
