International Journal of Control, Automation, and Systems (2013) 11(4):711-717 DOI 10.1007/s12555-012-0228-5
ISSN:1598-6446 eISSN:2005-4092 http://www.springer.com/12555
Data Filtering based Least Squares Algorithms for Multivariable CARAR-like Systems

Dong-Qing Wang*, Feng Ding, and Da-Qi Zhu

Abstract: This paper focuses on the identification problem of multivariable controlled autoregressive autoregressive (CARAR-like) systems. The corresponding identification model contains a parameter vector and a parameter matrix, so the conventional least squares methods cannot be applied directly to estimate the parameters of such systems. By using the hierarchical identification principle, this paper presents a hierarchical generalized least squares algorithm and a filtering based hierarchical least squares algorithm for multivariable CARAR-like systems. The simulation results show that the two hierarchical least squares algorithms are effective.

Keywords: Filtering, hierarchical identification, least squares, multivariable system, parameter estimation.
1. INTRODUCTION

Multivariable systems widely exist in various applications, such as flight control systems [1,2], robots [3,4], complex networks [5] and industrial processes [6,7]. The commonly used identification methods include the least squares methods [8], the gradient based methods [9], the bias compensation methods [10,11] and the maximum likelihood methods [12,13], and they can be used for estimating the parameters of multivariable systems.

In general, one uses various identification techniques or principles to derive estimation algorithms for different systems. For example, the polynomial transformation technique can deal with missing-data systems [14]; the auxiliary model identification idea can solve the identification problem of systems with unknown intermediate variables [15]; the multi-innovation identification theory makes sufficient use of the input-output data and can improve the parameter estimation accuracy [16,17]; the filtering technique filters the input-output data of a system so as to transform the system into subsystems and to reduce the computational load of the identification algorithms [18-20]; the hierarchical identification principle estimates the system parameters alternately, and can improve the computational efficiency for output error systems [21] and for multivariable systems [22,23].

For multivariable systems, there exist many identification methods, e.g., the coupled-least-squares algorithms for multivariable systems [22], the hierarchical least squares algorithms for discrete-time systems [23], the auxiliary model based multi-innovation stochastic gradient algorithms for multiple-input systems [15], and the polynomial transformation based identification algorithms for multirate systems [24].

This paper presents a hierarchical generalized least squares (HGLS) algorithm and a filtering based hierarchical least squares (F-HLS) algorithm to estimate the parameters of a multivariable controlled autoregressive autoregressive (CARAR-like) system. The corresponding identification model of the CARAR-like system contains one parameter matrix and one parameter vector. By using the hierarchical identification principle, an HGLS algorithm is proposed to alternately estimate the system parameter matrix and the parameter vector. Further, by filtering the input-output data of the system, an F-HLS algorithm is presented. The basic idea of the F-HLS algorithm is to use a rational polynomial to filter the input-output data of the system, resulting in two identification models: a multivariable ARX-like model and a multivariable AR model. The dimensions of the covariance matrices of the proposed F-HLS algorithm become smaller than those of the HGLS algorithm, and thus the F-HLS algorithm has a higher computational efficiency.

The rest of the paper is organized as follows. Section 2 gives the HGLS algorithm for multivariable CARAR-like systems. Section 3 employs the filtering technique and presents the F-HLS identification algorithm. Section 4 gives an example for the HGLS and the F-HLS algorithms. Finally, concluding remarks are given in Section 5.

__________
Manuscript received May 25, 2012; revised December 18, 2012 and March 17, 2013; accepted April 17, 2013. Recommended by Editorial Board member Soohee Han under the direction of Zengqi Sun. This paper was supported by the National Natural Science Foundation of China (61273194, 61104001), the Shandong Provincial Natural Science Foundation (ZR2010FM024), the Qingdao Municipal Science and Technology Development Program (12-1-4-2-(3)-jch), the Shandong Province Higher Educational Science and Technology Program (J10LG12) and the 111 Project (B12018).
Dong-Qing Wang is with the College of Automation Engineering, and with the Shandong Provincial Key Laboratory of Industrial Control Technique, Qingdao University, Qingdao 266071, China (e-mail: [email protected]). Feng Ding is with the School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China (e-mail: [email protected]). Da-Qi Zhu is with the Laboratory of Underwater Vehicles and Intelligent Systems, Shanghai Maritime University, Shanghai 201306, China (e-mail: [email protected]).
* Corresponding author.
© ICROS, KIEE and Springer 2013

2. THE HIERARCHICAL GENERALIZED LEAST SQUARES ALGORITHM

Let us introduce some notation first. The symbol I_n stands for an identity matrix of order n and I is an identity matrix of appropriate size; 1_n (or 1_{m×n}) represents an n-dimensional column vector (or an m×n matrix) whose entries are all 1; the superscript T denotes the matrix transpose; the norm of a matrix X is defined by ‖X‖² := tr[XX^T].

Consider a multivariable CARAR-like system,

α(z)y(t) = Q(z)u(t) + [1/C(z)] v(t),   (1)
where y(t) ∈ ℝ^m is the system output vector, u(t) ∈ ℝ^r is the system input vector, v(t) ∈ ℝ^m is a stochastic noise vector with zero mean, α(z) is the system characteristic polynomial in the unit backward shift operator z^{-1} [z^{-1}y(t) = y(t-1)], Q(z) is a matrix polynomial in z^{-1}, and C(z) is a polynomial in z^{-1}; they are defined as

α(z) := 1 + α_1 z^{-1} + α_2 z^{-2} + ... + α_{n_α} z^{-n_α},  α_i ∈ ℝ^1,
Q(z) := Q_1 z^{-1} + Q_2 z^{-2} + ... + Q_{n_q} z^{-n_q},  Q_i ∈ ℝ^{m×r},
C(z) := 1 + c_1 z^{-1} + c_2 z^{-2} + ... + c_{n_c} z^{-n_c},  c_i ∈ ℝ^1.

In (1), the noise w(t) := [1/C(z)]v(t) ∈ ℝ^m is an autoregressive (AR) process. It can be expressed as

w(t) = -Σ_{i=1}^{n_c} c_i w(t-i) + v(t).   (2)

Then (1) can be written as

α(z)y(t) = Q(z)u(t) + w(t),   (3)

or

y(t) + Σ_{i=1}^{n_α} α_i y(t-i) + Σ_{i=1}^{n_c} c_i w(t-i) = Σ_{i=1}^{n_q} Q_i u(t-i) + v(t).   (4)

Define the parameter vector ϑ, the parameter matrix θ, the input information vector ϕ(t) and the information matrix ψ(t) as

ϑ := [α; c] ∈ ℝ^{n_α+n_c},  α := [α_1, α_2, ..., α_{n_α}]^T ∈ ℝ^{n_α},  c := [c_1, c_2, ..., c_{n_c}]^T ∈ ℝ^{n_c},
θ^T := [Q_1, Q_2, ..., Q_{n_q}] ∈ ℝ^{m×(n_q r)},
ϕ(t) := [u^T(t-1), u^T(t-2), ..., u^T(t-n_q)]^T ∈ ℝ^{n_q r},
ψ(t) := [ψ_s(t), ψ_n(t)] ∈ ℝ^{m×(n_α+n_c)},
ψ_s(t) := [y(t-1), y(t-2), ..., y(t-n_α)] ∈ ℝ^{m×n_α},
ψ_n(t) := [w(t-1), w(t-2), ..., w(t-n_c)] ∈ ℝ^{m×n_c}.

Equation (4) can be written as

y(t) = -ψ_s(t)α + θ^T ϕ(t) + w(t)   (5)
     = -ψ_s(t)α - ψ_n(t)c + θ^T ϕ(t) + v(t)
     = -ψ(t)ϑ + θ^T ϕ(t) + v(t).   (6)

Equation (6) is the identification model of the multivariable CARAR-like system in (1); it contains one parameter vector ϑ, which consists of the coefficients of the two polynomials α(z) and C(z), and one parameter matrix θ, which consists of the coefficients of the matrix polynomial Q(z).

Define the two quadratic cost functions

J_1(ϑ) := Σ_{j=1}^{t} ‖y(j) + ψ(j)ϑ - θ^T ϕ(j)‖²,
J_2(θ) := Σ_{j=1}^{t} ‖y(j) + ψ(j)ϑ - θ^T ϕ(j)‖².

Let ϑ̂(t) := [α̂(t); ĉ(t)] and θ̂(t) represent the estimates of ϑ = [α; c] and θ at time t. Referring to [23] and minimizing the cost functions J_1(ϑ) and J_2(θ) yields the following least squares algorithm:

ϑ̂(t) = ϑ̂(t-1) + L_1(t)[y(t) + ψ(t)ϑ̂(t-1) - θ^T ϕ(t)],   (7)
L_1(t) = -P_1(t-1)ψ^T(t)[I + ψ(t)P_1(t-1)ψ^T(t)]^{-1},   (8)
P_1(t) = [I + L_1(t)ψ(t)]P_1(t-1),   (9)
θ̂(t) = θ̂(t-1) + L_2(t)[y(t) + ψ(t)ϑ - θ̂^T(t-1)ϕ(t)]^T,   (10)
L_2(t) = P_2(t-1)ϕ(t)/[1 + ϕ^T(t)P_2(t-1)ϕ(t)],   (11)
P_2(t) = [I - L_2(t)ϕ^T(t)]P_2(t-1).   (12)

However, the problems inherent in the above algorithm are that the right-hand side of (7) contains the unknown parameter matrix θ, the right-hand side of (10) contains the unknown parameter vector ϑ, and the information matrix ψ(t) contains the unmeasurable noise terms w(t-i), so the algorithm in (7)-(12) cannot be implemented.

Using the hierarchical identification principle and replacing θ and α in (5) with their estimates θ̂(t) and α̂(t), the estimate ŵ(t) of w(t) can be computed by

ŵ(t) = y(t) + ψ_s(t)α̂(t) - θ̂^T(t)ϕ(t).

Replace θ and ϑ in the right-hand sides of (7) and (10) with their estimates θ̂(t-1) and ϑ̂(t), respectively, and ψ(t) in (7)-(10) with

ψ̂(t) := [ψ_s(t), ŵ(t-1), ŵ(t-2), ..., ŵ(t-n_c)].

We then obtain the hierarchical generalized least squares (HGLS) algorithm for estimating ϑ and θ:

ϑ̂(t) = ϑ̂(t-1) + L_1(t)[y(t) + ψ̂(t)ϑ̂(t-1) - θ̂^T(t-1)ϕ(t)],   (13)
L_1(t) = -P_1(t-1)ψ̂^T(t)[I + ψ̂(t)P_1(t-1)ψ̂^T(t)]^{-1},   (14)
P_1(t) = [I + L_1(t)ψ̂(t)]P_1(t-1),  P_1(0) = I,   (15)
θ̂(t) = θ̂(t-1) + L_2(t)[y(t) + ψ̂(t)ϑ̂(t) - θ̂^T(t-1)ϕ(t)]^T,   (16)
L_2(t) = P_2(t-1)ϕ(t)/[1 + ϕ^T(t)P_2(t-1)ϕ(t)],   (17)
P_2(t) = [I - L_2(t)ϕ^T(t)]P_2(t-1),  P_2(0) = I,   (18)
ψ̂(t) = [y(t-1), ..., y(t-n_α), ŵ(t-1), ..., ŵ(t-n_c)],   (19)
ψ_s(t) = [y(t-1), y(t-2), ..., y(t-n_α)],   (20)
ϕ(t) = [u^T(t-1), u^T(t-2), ..., u^T(t-n_q)]^T,   (21)
ŵ(t) = y(t) + ψ_s(t)α̂(t) - θ̂^T(t)ϕ(t).   (22)
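For readers who want to experiment with the algorithm, the following Python/NumPy sketch shows one way the information vector ϕ(t) in (21) and the information matrices ψ_s(t) and ψ̂(t) in (20) and (19) could be assembled from recorded data. The function names are ours, and samples before t = 0 are simply treated as zero (a simplification of the initialization used in the paper); this is an illustrative sketch, not code from the paper.

```python
import numpy as np

def phi_vec(u_hist, t, nq):
    """phi(t) = [u^T(t-1), ..., u^T(t-nq)]^T as in (21); u_hist is a (T, r) array."""
    r = u_hist.shape[1]
    parts = [u_hist[t - i] if t - i >= 0 else np.zeros(r) for i in range(1, nq + 1)]
    return np.concatenate(parts)                      # shape (nq*r,)

def psi_s_mat(y_hist, t, n_alpha):
    """psi_s(t) = [y(t-1), ..., y(t-n_alpha)] as in (20); columns are past outputs."""
    m = y_hist.shape[1]
    cols = [y_hist[t - i] if t - i >= 0 else np.zeros(m) for i in range(1, n_alpha + 1)]
    return np.column_stack(cols)                      # shape (m, n_alpha)

def psi_hat_mat(y_hist, w_hat_hist, t, n_alpha, n_c):
    """psi_hat(t) = [y(t-1),...,y(t-n_alpha), w_hat(t-1),...,w_hat(t-n_c)] as in (19)."""
    m = y_hist.shape[1]
    y_cols = [y_hist[t - i] if t - i >= 0 else np.zeros(m) for i in range(1, n_alpha + 1)]
    w_cols = [w_hat_hist[t - i] if t - i >= 0 else np.zeros(m) for i in range(1, n_c + 1)]
    return np.column_stack(y_cols + w_cols)           # shape (m, n_alpha + n_c)
```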
The procedures of computing ϑ̂(t) and θ̂(t) in the HGLS algorithm are listed as follows (a code sketch of these steps is given after the list):
① Let t = 1, ϑ̂(0) = 1_{n_α+n_c}/p_0, θ̂(0) = 1_{n_q r×m}/p_0, ŵ(i) = 1_m/p_0, u(i) = 0 and y(i) = 0 for i ≤ 0, with p_0 = 10^6.
② Collect the data u(t) and y(t), and form ϕ(t), ψ_s(t) and ψ̂(t) by (21), (20) and (19), respectively.
③ Compute L_1(t), P_1(t), L_2(t) and P_2(t) by (14), (15), (17) and (18), respectively.
④ Update the parameter estimates ϑ̂(t) and θ̂(t) by (13) and (16), respectively.
⑤ Compute ŵ(t) by (22).
⑥ Increase t by 1 and go to Step ②.
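As an illustration only (not the authors' code), the steps above can be turned into the following NumPy sketch of the HGLS recursion (13)-(22). It reuses the helper functions phi_vec, psi_s_mat and psi_hat_mat sketched after (22), and the initialization follows Step ① with p_0 = 10^6.

```python
import numpy as np

def hgls(u, y, n_alpha, n_c, nq, p0=1e6):
    """HGLS estimates of vartheta = [alpha; c] and theta ((nq*r) x m) per (13)-(22)."""
    T, m = y.shape
    r = u.shape[1]
    n_v = n_alpha + n_c
    vartheta = np.ones(n_v) / p0                       # vartheta_hat(0)
    theta = np.ones((nq * r, m)) / p0                  # theta_hat(0)
    P1, P2 = np.eye(n_v), np.eye(nq * r)               # P1(0) = I, P2(0) = I
    w_hat = np.ones((T, m)) / p0                       # noise estimates w_hat(t)
    for t in range(T):
        phi = phi_vec(u, t, nq)                        # (21)
        psi_s = psi_s_mat(y, t, n_alpha)               # (20)
        psi = psi_hat_mat(y, w_hat, t, n_alpha, n_c)   # (19)
        # (14)-(15): gain and covariance for the vartheta subsystem
        L1 = -P1 @ psi.T @ np.linalg.inv(np.eye(m) + psi @ P1 @ psi.T)
        P1 = (np.eye(n_v) + L1 @ psi) @ P1
        # (13): update vartheta_hat with the previous theta_hat
        vartheta = vartheta + L1 @ (y[t] + psi @ vartheta - theta.T @ phi)
        # (17)-(18): gain and covariance for the theta subsystem
        L2 = P2 @ phi / (1.0 + phi @ P2 @ phi)
        P2 = (np.eye(nq * r) - np.outer(L2, phi)) @ P2
        # (16): update theta_hat with the freshly updated vartheta_hat
        theta = theta + np.outer(L2, y[t] + psi @ vartheta - theta.T @ phi)
        # (22): estimate the unmeasurable noise w(t)
        alpha_hat = vartheta[:n_alpha]
        w_hat[t] = y[t] + psi_s @ alpha_hat - theta.T @ phi
    return vartheta, theta
```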
3. THE FILTERING BASED HIERARCHICAL LEAST SQUARES ALGORITHM

Define the filtered input vector u_f(t), the filtered output vector y_f(t), the filtered information vector ϕ_f(t) and the filtered information matrix ψ_f(t) as

u_f(t) := C(z)u(t),  y_f(t) := C(z)y(t),
ϕ_f(t) := [u_f^T(t-1), u_f^T(t-2), ..., u_f^T(t-n_q)]^T ∈ ℝ^{n_q r},
ψ_f(t) := [y_f(t-1), y_f(t-2), ..., y_f(t-n_α)] ∈ ℝ^{m×n_α}.

Multiplying both sides of (1) by C(z) gives

α(z)C(z)y(t) = Q(z)C(z)u(t) + v(t),

or

α(z)y_f(t) = Q(z)u_f(t) + v(t),

which is an equation error model and can be rewritten as

y_f(t) + Σ_{i=1}^{n_α} α_i y_f(t-i) = Σ_{i=1}^{n_q} Q_i u_f(t-i) + v(t),   (23)

or

y_f(t) + ψ_f(t)α = θ^T ϕ_f(t) + v(t).   (24)

Equation (2) can be written as

w(t) = -ψ_n(t)c + v(t).   (25)

To this point, we have obtained two identification models in (24) and (25) for the multivariable CARAR-like system. For the models in (24) and (25), based on the hierarchical identification principle [23], we define three cost functions,

J_3(α) := Σ_{j=1}^{t} ‖y_f(j) + ψ_f(j)α - θ^T ϕ_f(j)‖²,
J_4(θ) := Σ_{j=1}^{t} ‖y_f(j) + ψ_f(j)α - θ^T ϕ_f(j)‖²,
J_5(c) := Σ_{j=1}^{t} ‖w(j) + ψ_n(j)c‖².

Minimizing the cost functions J_3(α), J_4(θ) and J_5(c) yields the following filtering based least squares algorithm [23]:

α̂(t) = α̂(t-1) + L_3(t)[y_f(t) + ψ_f(t)α̂(t-1) - θ^T ϕ_f(t)],   (26)
L_3(t) = -P_3(t-1)ψ_f^T(t)[I + ψ_f(t)P_3(t-1)ψ_f^T(t)]^{-1},   (27)
P_3(t) = [I + L_3(t)ψ_f(t)]P_3(t-1),   (28)
θ̂(t) = θ̂(t-1) + L_4(t)[y_f(t) + ψ_f(t)α - θ̂^T(t-1)ϕ_f(t)]^T,   (29)
L_4(t) = P_4(t-1)ϕ_f(t)/[1 + ϕ_f^T(t)P_4(t-1)ϕ_f(t)],   (30)
P_4(t) = [I - L_4(t)ϕ_f^T(t)]P_4(t-1),   (31)
ĉ(t) = ĉ(t-1) + L_5(t)[w(t) + ψ_n(t)ĉ(t-1)],   (32)
L_5(t) = -P_5(t-1)ψ_n^T(t)[I + ψ_n(t)P_5(t-1)ψ_n^T(t)]^{-1},   (33)
P_5(t) = [I + L_5(t)ψ_n(t)]P_5(t-1).   (34)
Similarly, the right-hand sides of (26) and (29) contain the unknown parameter matrix θ and the unknown parameter vector α. Moreover, the polynomial C(z) is unknown, so u_f(t) and y_f(t), and hence the information vector ϕ_f(t) and the information matrix ψ_f(t), are unknown, and the estimates α̂(t) and θ̂(t) in (26) and (29) cannot be computed. The information matrix ψ_n(t) contains the unavailable noise terms w(t-i), so the parameter estimate ĉ(t) in (32) cannot be computed either. We adopt the hierarchical identification principle: replace θ in (26) and α in (29) with their estimates θ̂(t-1) and α̂(t), and replace the unknown filtered vectors u_f(t) and y_f(t) with their estimates û_f(t) and ŷ_f(t).

From (5), we have

w(t) = y(t) + ψ_s(t)α - θ^T ϕ(t).   (35)

Replacing the unknown α and θ on the right-hand side of (35) with the estimates α̂(t-1) and θ̂(t-1), the estimate ŵ(t) of w(t) can be computed by

ŵ(t) = y(t) + ψ_s(t)α̂(t-1) - θ̂^T(t-1)ϕ(t).

Use ŵ(t-i) to construct the estimate of ψ_n(t):

ψ̂_n(t) := [ŵ(t-1), ŵ(t-2), ..., ŵ(t-n_c)] ∈ ℝ^{m×n_c}.

Let

ĉ(t) := [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T,  Ĉ(t, z) := 1 + ĉ_1(t)z^{-1} + ĉ_2(t)z^{-2} + ... + ĉ_{n_c}(t)z^{-n_c}.   (36)

Filtering u(t) and y(t) with Ĉ(t, z) gives the estimates of u_f(t) and y_f(t):

û_f(t) = Ĉ(t, z)u(t),  ŷ_f(t) = Ĉ(t, z)y(t),

which can be recursively computed by

û_f(t) = u(t) + ĉ_1(t)u(t-1) + ... + ĉ_{n_c}(t)u(t-n_c),
ŷ_f(t) = y(t) + ĉ_1(t)y(t-1) + ... + ĉ_{n_c}(t)y(t-n_c).

Construct the estimates of ϕ_f(t) and ψ_f(t) with û_f(t-i) and ŷ_f(t-i) as follows:

ϕ̂_f(t) = [û_f^T(t-1), û_f^T(t-2), ..., û_f^T(t-n_q)]^T ∈ ℝ^{n_q r},
ψ̂_f(t) = [ŷ_f(t-1), ŷ_f(t-2), ..., ŷ_f(t-n_α)] ∈ ℝ^{m×n_α}.

Replacing the unknown information matrix ψ_f(t) in (26)-(29) with ψ̂_f(t), ϕ_f(t) in (26) and (29)-(31) with ϕ̂_f(t), ψ_n(t) in (32)-(34) with ψ̂_n(t), the unknown filtered output y_f(t) in (26) and (29) with ŷ_f(t), and the unknown vector w(t) in (32) with ŵ(t), we obtain the filtering based hierarchical least squares (F-HLS) algorithm for estimating the parameter vectors α, c and the parameter matrix θ of the multivariable CARAR-like systems:

α̂(t) = α̂(t-1) + L_3(t)[ŷ_f(t) + ψ̂_f(t)α̂(t-1) - θ̂^T(t-1)ϕ̂_f(t)],   (37)
L_3(t) = -P_3(t-1)ψ̂_f^T(t)[I + ψ̂_f(t)P_3(t-1)ψ̂_f^T(t)]^{-1},   (38)
P_3(t) = [I + L_3(t)ψ̂_f(t)]P_3(t-1),  P_3(0) = I,   (39)
θ̂(t) = θ̂(t-1) + L_4(t)[ŷ_f(t) + ψ̂_f(t)α̂(t) - θ̂^T(t-1)ϕ̂_f(t)]^T,   (40)
L_4(t) = P_4(t-1)ϕ̂_f(t)/[1 + ϕ̂_f^T(t)P_4(t-1)ϕ̂_f(t)],   (41)
P_4(t) = [I - L_4(t)ϕ̂_f^T(t)]P_4(t-1),  P_4(0) = I,   (42)
ĉ(t) = ĉ(t-1) + L_5(t)[ŵ(t) + ψ̂_n(t)ĉ(t-1)],   (43)
L_5(t) = -P_5(t-1)ψ̂_n^T(t)[I + ψ̂_n(t)P_5(t-1)ψ̂_n^T(t)]^{-1},   (44)
P_5(t) = [I + L_5(t)ψ̂_n(t)]P_5(t-1),  P_5(0) = I,   (45)
ϕ(t) = [u^T(t-1), u^T(t-2), ..., u^T(t-n_q)]^T,   (46)
ψ_s(t) = [y(t-1), y(t-2), ..., y(t-n_α)],   (47)
ϕ̂_f(t) = [û_f^T(t-1), û_f^T(t-2), ..., û_f^T(t-n_q)]^T,   (48)
ψ̂_f(t) = [ŷ_f(t-1), ŷ_f(t-2), ..., ŷ_f(t-n_α)],   (49)
ψ̂_n(t) = [ŵ(t-1), ŵ(t-2), ..., ŵ(t-n_c)],   (50)
û_f(t) = u(t) + ĉ_1(t)u(t-1) + ... + ĉ_{n_c}(t)u(t-n_c),   (51)
ŷ_f(t) = y(t) + ĉ_1(t)y(t-1) + ... + ĉ_{n_c}(t)y(t-n_c),   (52)
ŵ(t) = y(t) + ψ_s(t)α̂(t-1) - θ̂^T(t-1)ϕ(t),   (53)
α̂(t) = [α̂_1(t), α̂_2(t), ..., α̂_{n_α}(t)]^T,   (54)
ĉ(t) = [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T,   (55)
θ̂^T(t) = [Q̂_1(t), Q̂_2(t), ..., Q̂_{n_q}(t)].   (56)

The procedures of computing α̂(t), ĉ(t) and θ̂(t) in the F-HLS algorithm are listed as follows (a code sketch of these steps is given after the list):
① Let t = 1, ϑ̂(0) = 1_{n_α+n_c}/p_0 (i.e., α̂(0) = 1_{n_α}/p_0 and ĉ(0) = 1_{n_c}/p_0), θ̂(0) = 1_{n_q r×m}/p_0, û_f(i) = 1_r/p_0, ŷ_f(i) = 1_m/p_0, ŵ(i) = 1_m/p_0, u(i) = 0 and y(i) = 0 for i ≤ 0, with p_0 = 10^6.
② Collect the data u(t) and y(t), and form ϕ(t), ψ_s(t), ϕ̂_f(t), ψ̂_f(t) and ψ̂_n(t) by (46)-(50).
③ Compute ŵ(t) by (53), the gain matrix L_5(t) by (44), and the covariance matrix P_5(t) by (45).
④ Update the parameter estimate ĉ(t) by (43).
⑤ Compute û_f(t) and ŷ_f(t) by (51) and (52), respectively.
⑥ Compute L_3(t) and P_3(t) by (38) and (39), and L_4(t) and P_4(t) by (41) and (42), respectively.
⑦ Update the parameter estimates α̂(t) and θ̂(t) by (37) and (40), respectively.
⑧ Increase t by 1 and go to Step ②.
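Again purely as an illustration (our reading of Steps ①-⑧, not the authors' code), a NumPy sketch of the F-HLS recursion (37)-(56) might look as follows. It reuses phi_vec and psi_s_mat from the Section 2 sketch (ψ̂_n(t) is built by applying psi_s_mat to the ŵ history), and samples before t = 0 are treated as zero.

```python
import numpy as np

def fhls(u, y, n_alpha, n_c, nq, p0=1e6):
    """F-HLS estimates of alpha, c and theta ((nq*r) x m) per (37)-(56)."""
    T, m = y.shape
    r = u.shape[1]
    alpha = np.ones(n_alpha) / p0
    c = np.ones(n_c) / p0
    theta = np.ones((nq * r, m)) / p0
    P3, P4, P5 = np.eye(n_alpha), np.eye(nq * r), np.eye(n_c)
    w_hat = np.ones((T, m)) / p0       # noise estimates w_hat(t)
    uf = np.ones((T, r)) / p0          # filtered input estimates, (51)
    yf = np.ones((T, m)) / p0          # filtered output estimates, (52)
    for t in range(T):
        phi = phi_vec(u, t, nq)                        # (46)
        psi_s = psi_s_mat(y, t, n_alpha)               # (47)
        psi_n = psi_s_mat(w_hat, t, n_c)               # (50): [w_hat(t-1),...,w_hat(t-n_c)]
        # (53): estimate the unmeasurable noise w(t) with the previous estimates
        w_hat[t] = y[t] + psi_s @ alpha - theta.T @ phi
        # (43)-(45): update c_hat with the AR model (25)
        L5 = -P5 @ psi_n.T @ np.linalg.inv(np.eye(m) + psi_n @ P5 @ psi_n.T)
        P5 = (np.eye(n_c) + L5 @ psi_n) @ P5
        c = c + L5 @ (w_hat[t] + psi_n @ c)
        # (51)-(52): filter the raw data with C_hat(t, z)
        uf[t] = u[t] + sum(c[i] * u[t - 1 - i] for i in range(n_c) if t - 1 - i >= 0)
        yf[t] = y[t] + sum(c[i] * y[t - 1 - i] for i in range(n_c) if t - 1 - i >= 0)
        phi_f = phi_vec(uf, t, nq)                     # (48)
        psi_f = psi_s_mat(yf, t, n_alpha)              # (49)
        # (37)-(39): update alpha_hat with the filtered ARX-like model (24)
        L3 = -P3 @ psi_f.T @ np.linalg.inv(np.eye(m) + psi_f @ P3 @ psi_f.T)
        P3 = (np.eye(n_alpha) + L3 @ psi_f) @ P3
        alpha = alpha + L3 @ (yf[t] + psi_f @ alpha - theta.T @ phi_f)
        # (40)-(42): update theta_hat with the freshly updated alpha_hat
        L4 = P4 @ phi_f / (1.0 + phi_f @ P4 @ phi_f)
        P4 = (np.eye(nq * r) - np.outer(L4, phi_f)) @ P4
        theta = theta + np.outer(L4, yf[t] + psi_f @ alpha - theta.T @ phi_f)
    return alpha, c, theta
```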
The convergence analysis of the F-HLS algorithm is described as follows. We assume that {v(t), ℱ_t} is a martingale difference sequence defined on a probability space {Ω, ℱ, P}, where {ℱ_t} is the σ-algebra sequence generated by v(t), i.e., ℱ_t = σ(v(t), v(t-1), v(t-2), ...) [25]. The sequence {v(t)} satisfies

(A1) E[v(t) | ℱ_{t-1}] = 0, a.s.;
(A2) E[‖v(t)‖² | ℱ_{t-1}] = σ²(t) ≤ σ̄² < ∞, a.s.;
(A3) limsup_{t→∞} (1/t) Σ_{i=1}^{t} v(i)v^T(i) ≤ σ̄² I < ∞, a.s.

Theorem 1: For the F-HLS algorithm in (37)-(56), suppose that (A1)-(A3) hold and that the input-output data matrices ψ̂_f(t), ψ̂_n(t) and the vector ϕ̂_f(t) are persistently exciting, i.e., there exist constants 0 < c_l < c_h < ∞ and an integer N > n_0 such that for t > n_0 the following strong persistent excitation conditions hold [26]:

(A4) c_l I ≤ (1/N) Σ_{j=0}^{N-1} ψ̂_f^T(t+j)ψ̂_f(t+j) ≤ c_h I, a.s.,
(A5) c_l I ≤ (1/N) Σ_{j=0}^{N-1} ϕ̂_f(t+j)ϕ̂_f^T(t+j) ≤ c_h I, a.s.,
(A6) c_l I ≤ (1/N) Σ_{j=0}^{N-1} ψ̂_n^T(t+j)ψ̂_n(t+j) ≤ c_h I, a.s.

Then the parameter estimates α̂(t), θ̂(t) and ĉ(t) consistently converge to the true parameters α, θ and c.

Theorem 1 can be proved in a way similar to the method in [23]; the proof is omitted here.

4. EXAMPLE

Consider a two-input two-output CARAR-like model,

α(z)y(t) = Q(z)u(t) + [1/C(z)]v(t),
y(t) = [y_1(t); y_2(t)],  u(t) = [u_1(t); u_2(t)],  v(t) = [v_1(t); v_2(t)],
α(z) = 1 + 0.40z^{-1} + 0.15z^{-2},  C(z) = 1 + 0.35z^{-1},
Q(z) = [1.75  0.20; 0.50  -0.50] z^{-1}.

The inputs {u_1(t)} and {u_2(t)} are taken as two random sequences with zero mean and unit variance, and {v_1(t)} and {v_2(t)} are taken as white noise sequences with zero mean and variances σ_1² = σ_2² = 0.50² and σ_1² = σ_2² = 1.00², respectively. Applying both the HGLS algorithm and the F-HLS algorithm to estimate the parameters of this example system, the parameter estimates and the estimation errors

δ := [(‖ϑ̂(t) - ϑ‖² + ‖θ̂(t) - θ‖²)/(‖ϑ‖² + ‖θ‖²)]^{1/2} × 100%  for the HGLS algorithm,
δ := [(‖α̂(t) - α‖² + ‖ĉ(t) - c‖² + ‖θ̂(t) - θ‖²)/(‖α‖² + ‖c‖² + ‖θ‖²)]^{1/2} × 100%  for the F-HLS algorithm,

are shown in Tables 1 and 2 and Figs. 1-3.

Table 1. The HGLS parameter estimates and errors δ versus t with different σ².

σ²      t      α_1       α_2      Q_1(1,1)  Q_1(1,2)  Q_1(2,1)  Q_1(2,2)  c_1      δ (%)
0.50²   10     -0.31408  0.36735  0.76494   -0.14653  0.12987    0.08320  0.76108  76.61582
0.50²   50      0.27125  0.22829  1.68345    0.11075  0.25089   -0.31627  0.35785  18.30796
0.50²   100     0.37297  0.18543  1.77053    0.16935  0.36083   -0.37192  0.24375  11.35822
0.50²   500     0.41894  0.14850  1.79447    0.17133  0.46697   -0.48644  0.24804   6.16085
0.50²   1000    0.40465  0.13801  1.76961    0.18599  0.49898   -0.50018  0.29832   2.95763
1.00²   10     -0.09137  0.22739  1.02787   -0.23128  0.19819    0.34177  0.73881  69.83657
1.00²   50      0.28035  0.23109  1.86220    0.02265  0.19620   -0.19400  0.43929  25.73731
1.00²   100     0.38090  0.20428  1.90651    0.13524  0.31747   -0.26631  0.32229  17.57536
1.00²   500     0.44788  0.14988  1.85912    0.14322  0.45255   -0.46980  0.25139   8.80762
1.00²   1000    0.41732  0.13102  1.80024    0.17324  0.50817   -0.49825  0.30524   3.91016
True values     0.40000  0.15000  1.75000    0.20000  0.50000   -0.50000  0.35000

Table 2. The F-HLS parameter estimates and errors δ versus t with different σ².

σ²      t      α_1       α_2      Q_1(1,1)  Q_1(1,2)  Q_1(2,1)  Q_1(2,2)  c_1      δ (%)
0.50²   10      0.05824  0.23796  1.00788   -0.25315  0.17550   -0.51742  0.37847  50.25940
0.50²   50      0.29305  0.19299  1.66725    0.14882  0.32110   -0.39488  0.39748  13.19773
0.50²   100     0.34845  0.16705  1.73646    0.17449  0.39316   -0.38820  0.39723   8.75160
0.50²   500     0.39116  0.14878  1.76482    0.18869  0.45185   -0.47772  0.36777   3.01730
0.50²   1000    0.38829  0.14114  1.75630    0.19767  0.48819   -0.49374  0.36696   1.36407
1.00²   10      0.16123  0.20592  1.20880   -0.48246  0.14300   -0.79578  0.46918  51.78634
1.00²   50      0.29854  0.21115  1.82353    0.06808  0.27782   -0.39382  0.37262  15.83623
1.00²   100     0.35010  0.18255  1.86007    0.13414  0.35978   -0.33950  0.36391  12.95840
1.00²   500     0.39816  0.14814  1.82146    0.17896  0.42239   -0.46767  0.33749   5.71905
1.00²   1000    0.39329  0.13721  1.78800    0.19578  0.48723   -0.49568  0.34829   2.17892
True values     0.40000  0.15000  1.75000    0.20000  0.50000   -0.50000  0.35000

From Tables 1 and 2 and Figs. 1-3, we can draw the following conclusions:
• The parameter estimation errors become (generally) smaller as the data length t increases, and the parameter estimation accuracy of the F-HLS algorithm is higher than that of the HGLS algorithm.
• The parameter estimates given by the F-HLS algorithm converge to their true values faster than those given by the HGLS algorithm.
• The proposed F-HLS algorithm requires a lower computational load than the HGLS algorithm: the dimensions of the covariance matrix P_4(t) in the F-HLS algorithm and of the covariance matrix P_2(t) in the HGLS algorithm are the same, but the dimensions of the covariance matrices P_3(t) and P_5(t) in the F-HLS algorithm are smaller than those of the covariance matrix P_1(t) in the HGLS algorithm, since P_3(t) ∈ ℝ^{n_α×n_α}, P_5(t) ∈ ℝ^{n_c×n_c} and P_1(t) ∈ ℝ^{(n_α+n_c)×(n_α+n_c)}; the saving is especially significant for large n_α and n_c.
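For orientation only, the following sketch shows how a comparison of this kind could be set up in code: it simulates the two-input two-output example system above, runs the hypothetical fhls routine sketched in Section 3, and evaluates the relative error δ. The numbers depend on the noise realization and on implementation details, so this is not expected to reproduce Tables 1 and 2 digit for digit, and it is not the code used to generate them.

```python
import numpy as np

rng = np.random.default_rng(0)
T, m, r = 1000, 2, 2
alpha_true = np.array([0.40, 0.15])          # alpha(z) = 1 + 0.40 z^-1 + 0.15 z^-2
c_true = np.array([0.35])                    # C(z) = 1 + 0.35 z^-1
Q1 = np.array([[1.75, 0.20], [0.50, -0.50]]) # Q(z) = Q1 z^-1
theta_true = Q1.T                            # theta^T = [Q1], so theta has shape (nq*r, m)
u = rng.standard_normal((T, r))              # unit-variance inputs
v = 0.50 * rng.standard_normal((T, m))       # sigma_1 = sigma_2 = 0.50

# Simulate (2)-(3): w(t) = -c1 w(t-1) + v(t),
#                   y(t) = -a1 y(t-1) - a2 y(t-2) + Q1 u(t-1) + w(t)
w = np.zeros((T, m))
y = np.zeros((T, m))
for t in range(T):
    w[t] = v[t] - (c_true[0] * w[t - 1] if t >= 1 else 0)
    y[t] = w[t]
    if t >= 1:
        y[t] += Q1 @ u[t - 1] - alpha_true[0] * y[t - 1]
    if t >= 2:
        y[t] -= alpha_true[1] * y[t - 2]

# Identify with the F-HLS sketch and evaluate the relative error delta (F-HLS form)
a_hat, c_hat, th_hat = fhls(u, y, n_alpha=2, n_c=1, nq=1)
num = (np.linalg.norm(a_hat - alpha_true) ** 2
       + np.linalg.norm(c_hat - c_true) ** 2
       + np.linalg.norm(th_hat - theta_true) ** 2)
den = (np.linalg.norm(alpha_true) ** 2
       + np.linalg.norm(c_true) ** 2
       + np.linalg.norm(theta_true) ** 2)
delta = np.sqrt(num / den) * 100.0
print(f"delta = {delta:.2f} %")
```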
Fig. 1. The HGLS estimation errors δ versus t (curves for σ² = 0.50² and σ² = 1.00²).

Fig. 2. The F-HLS estimation errors δ versus t (curves for σ² = 0.50² and σ² = 1.00²).

Fig. 3. The HGLS and F-HLS estimation errors δ versus t (σ² = 0.50²).

5. CONCLUSION

In this paper, we employ the hierarchical identification principle to present an HGLS algorithm and an F-HLS algorithm for CARAR-like systems. The proposed F-HLS algorithm requires a lower computational load than the HGLS algorithm because the dimensions of the covariance matrices of the F-HLS algorithm are reduced by filtering the input-output data of the system.

REFERENCES
[1] D. T. W. Yau, E. H. K. Fung, Y. K. Wong, and H. H. T. Liu, "Multivariable identification and controller design of an integrated flight control system," Applied Mathematical Modelling, vol. 31, no. 12, pp. 2733-2743, December 2007.
[2] H. C. Kim, H. R. Dharmayanda, T. Kang, A. Budiyono, G. G. Lee, and W. Adiprawita, "Parameter identification and design of a robust attitude controller using H∞ methodology for the Raptor E620 small-scale helicopter," International Journal of Control, Automation, and Systems, vol. 10, no. 1, pp. 88-101, February 2012.
[3] Z. S. Lim, S. T. Kwon, and M. G. Joo, "Multi-object identification for mobile robot using ultrasonic sensors," International Journal of Control, Automation, and Systems, vol. 10, no. 3, pp. 589-593, June 2012.
[4] D. Q. Zhu, Q. Liu, and Z. Hu, "Fault-tolerant control algorithm of the manned submarine with multi-thruster based on quantum-behaved particle swarm optimization," International Journal of Control, vol. 84, no. 11, pp. 1817-1829, November 2011.
[5] C. X. Fan, F. W. Yang, and Y. Zhou, "State estimation for coupled output discrete-time complex network with stochastic measurements and different inner coupling matrices," International Journal of Control, Automation, and Systems, vol. 10, no. 3, pp. 498-505, June 2012.
[6] J. Eynard, S. Grieu, and M. Polit, "Modular approach for modeling a multi-energy district boiler," Applied Mathematical Modelling, vol. 35, no. 8, pp. 3926-3957, August 2011.
[7] S. Karacan, H. Hapoğlu, and M. Alpbaz, "Multivariable system identification and generic model control of a laboratory scale packed distillation column," Applied Thermal Engineering, vol. 27, no. 5-6, pp. 1017-1028, 2007.
[8] L. Ljung, System Identification: Theory for the User, 2nd Edition, Prentice-Hall, Englewood Cliffs, NJ, 1999.
[9] L. Y. Wang, L. Xie, and X. F. Wang, "The residual based interactive stochastic gradient algorithms for controlled moving average models," Applied Mathematics and Computation, vol. 211, no. 2, pp. 442-449, May 2009.
[10] Y. Zhang, "Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods," Mathematical and Computer Modelling, vol. 53, no. 9-10, pp. 1810-1819, May 2011.
[11] Y. Zhang and G. M. Cui, "Bias compensation methods for stochastic systems with colored noise," Applied Mathematical Modelling, vol. 35, no. 4, pp. 1709-1716, August 2011.
[12] J. C. Agüero, J. I. Yuz, G. C. Goodwin, and R. A. Delgado, "On the equivalence of time and frequency domain maximum likelihood estimation," Automatica, vol. 46, no. 2, pp. 260-270, February 2010.
[13] T. Söderström, M. Hong, J. Schoukens, and R. Pintelon, "Accuracy analysis of time domain maximum likelihood method and sample maximum likelihood method for errors-in-variables and output error identification," Automatica, vol. 46, no. 4, pp. 721-727, April 2010.
[14] J. Ding, L. L. Han, and X. M. Chen, "Time series AR modeling with missing observations based on the polynomial transformation," Mathematical and Computer Modelling, vol. 51, no. 5-6, pp. 527-536, March 2010.
[15] Y. J. Liu, Y. S. Xiao, and X. L. Zhao, "Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model," Applied Mathematics and Computation, vol. 215, no. 4, pp. 1477-1483, October 2009.
[16] F. Ding, "Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling," Applied Mathematical Modelling, vol. 37, no. 4, pp. 1694-1704, April 2013.
[17] Y. S. Xiao, G. L. Song, Y. W. Liao, and R. F. Ding, "Multi-innovation stochastic gradient parameter estimation for input nonlinear controlled autoregressive models," International Journal of Control, Automation, and Systems, vol. 10, no. 3, pp. 639-643, June 2012.
[18] D. Q. Wang and F. Ding, "Input-output data filtering based recursive least squares identification for CARARMA systems," Digital Signal Processing, vol. 20, no. 4, pp. 991-999, July 2010.
[19] L. Xie, H. Z. Yang, and F. Ding, "Recursive least squares parameter estimation for non-uniformly sampled systems based on the data filtering," Mathematical and Computer Modelling, vol. 54, no. 1-2, pp. 315-324, July 2011.
[20] Y. S. Xiao and N. Yue, "Parameter estimation for nonlinear dynamical adjustment models," Mathematical and Computer Modelling, vol. 54, no. 5-6, pp. 1561-1568, September 2011.
[21] F. Ding, "Decomposition based fast least squares algorithm for output error systems," Signal Processing, vol. 93, no. 5, pp. 1235-1242, May 2013.
[22] F. Ding, "Coupled-least-squares identification for multivariable systems," IET Control Theory and Applications, vol. 7, no. 1, pp. 68-79, January 2013.
[23] F. Ding and T. Chen, "Hierarchical least squares identification methods for multivariable systems," IEEE Trans. on Automatic Control, vol. 50, no. 3, pp. 397-402, March 2005.
[24] X. G. Liu and J. Lu, "Least squares based iterative identification for a class of multirate systems," Automatica, vol. 46, no. 3, pp. 549-554, March 2010.
[25] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[26] L. Y. Wang, F. Ding, and X. P. Liu, "Consistency of HLS estimation algorithms for MIMO ARX-like systems," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1081-1093, July 2007.
Dong-Qing Wang received her Ph.D. degree from the College of Automation Engineering, Tianjin University, Tianjin, China, in 2006. She has been with the College of Automation Engineering, Qingdao University, Qingdao, China, since 1988, where she is currently a professor. Her research interests are stochastic systems, system identification, process modeling and control.

Feng Ding received his B.Sc. degree from the Hubei University of Technology, Wuhan, China, in 1984, and his M.Sc. and Ph.D. degrees in Automatic Control from the Department of Automation, Tsinghua University, in 1991 and 1994, respectively. He has been a Professor in the School of Internet of Things Engineering, Jiangnan University, Wuxi, China, since 2004. He is a Colleges and Universities "Blue Project" Middle-Aged Academic Leader, Jiangsu, China. His current research interests include model identification and adaptive control. He authored the book System Identification — New Theory and Methods (Science University Press, Beijing, 2013), and has published over 108 SCI papers on modeling and identification.

Da-Qi Zhu received his B.S. degree in Physics from Huazhong University of Science and Technology and his Ph.D. degree in Electrical Engineering from Nanjing University of Aeronautics and Astronautics, in 1992 and 2002, respectively. He has been a professor in the Information Engineering College, Shanghai Maritime University. His current research interests include neural networks, underwater vehicles and fault diagnosis.