IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 44, NO. 2, FEBRUARY 1999
Frequency-Domain Identification of Linear Systems Using Arbitrary Excitations and a Nonparametric Noise Model J. Schoukens, G. Vandersteen, R. Pintelon, and P. Guillaume
Abstract—This paper presents a generalized frequency domain identification method to identify single-input/single-output (SISO) systems combining two previously published extensions in one method: arbitrary but persistent excitations are allowed and a nonparametric noise model is extracted from the same data that are used to identify the system. The method is directly applicable to identification in feedback if an external persistently exciting reference signal is available. Index Terms— Arbitrary excitations, closed-loop identification, frequency domain.
I. INTRODUCTION

In the literature, frequency-domain system identification methods are presented requiring periodic excitations and assuming that the noise covariance matrix is known [7]. Recently, two complementary generalizations were published: the first one extends the method to arbitrary excitations, still assuming that the noise covariance matrix is a priori available [8]; the second one replaces the exact covariance matrix of the noise by a nonparametric noise model that is extracted from the data used to identify the system, assuming explicitly a periodic excitation [10]. In this paper both generalizations are combined: arbitrary (but persistent) excitations are allowed and a nonparametric noise model is automatically generated during the estimation of the plant model. The method is applicable under the same experimental conditions as time domain identification. Compared with prediction error methods [5], [13], the major advantage is that no parametric noise model is needed, so that one model selection problem is eliminated. The method is directly applicable to the identification of systems captured in a feedback loop, without needing any knowledge about the controller. For generality we use this framework to explain the new method.

Recently, it was pointed out that models intended for control design should preferably be identified in closed loop [2], [4]. Classical prediction error methods deal with this situation provided that the true noise model belongs to the model set [5], [13]. Only under special conditions, like white process noise and a time delay in the loop, can the noise model be omitted. Alternative methods making explicit use of a known reference signal were presented recently [15], but either they need to know the controller or they make use of multistep approaches requiring multiple model selections. Another alternative is to generate the instruments of an instrumental variable approach starting from the known reference signal [12], [5], [13]. The method presented in this paper can also be interpreted in such a frame. Compared with [12], a nonparametric frequency weighting is generated, improving the finite sample behavior of the method significantly.

Manuscript received June 6, 1997. Recommended by Associate Editor, J. C. Spall. This work was supported by the Belgian National Fund for Scientific Research, the Flemish government (GOA-IMMI), and the Belgian government as a part of the Belgian program on Interuniversity Poles of Attraction (IUAP4/2) initiated by the Belgian State, Prime Minister's Office, Science Policy Programming. J. Schoukens, G. Vandersteen, and R. Pintelon are with the Department ELEC, Vrije Universiteit Brussel, 1050 Brussels, Belgium. P. Guillaume is with the Department WERK, Vrije Universiteit Brussel, 1050 Brussels, Belgium. Publisher Item Identifier S 0018-9286(99)01303-3.

II. PLANT SETUP

A system G(q) is captured in a feedback loop with controller C(q) and driven by a known reference signal r(k) through the system D(q) (Fig. 1). In open loop (C(q) = 0), D(q) can be the actuator characteristic, while in closed loop it can model a feedforward control.

All systems are discrete time, linear, and time invariant. The measurements u(k) and y(k) are broken in M subrecords of equal length N (u^[l](k) = u(k + (l − 1)N), l = 1, …, M, and k = 1, …, N) and transformed to the frequency domain using the discrete Fourier transform (DFT)

R^[l](k) = (1/√N) Σ_{n=0}^{N−1} r^[l](n) e^{−jω_k n}
U^[l](k) = (1/√N) Σ_{n=0}^{N−1} u^[l](n) e^{−jω_k n}
Y^[l](k) = (1/√N) Σ_{n=0}^{N−1} y^[l](n) e^{−jω_k n}   (1)

with ω_k = 2πk/N, k = 0, …, N − 1. Due to the DFT scaling factor 1/√N, the DFT spectrum of a stationary random signal is an O(N^0). Define the vector

R(k) = (R^[1](k), R^[2](k), …, R^[M](k))^t   (2)

(with t the transpose) and similarly for U(k) and Y(k). Define U(k) = U₀(k) + N_U(k) and Y(k) = Y₀(k) + N_Y(k). N_U(k), N_Y(k) model all disturbing noise sources on U(k) and Y(k); U₀(k), Y₀(k) are the Fourier vectors of the noise free signals (n_G = n_C = m_u = m_y = 0). Define also the vector Z^t = (R^t, U^t, Y^t) containing all observations of
Fig. 1. Plant setup: G plant to be identified, C controller, D actuator.

R^[l](k), U^[l](k), Y^[l](k), ∀k, l, and Z₀ being the vector with the exact values.

Assumption 1: The noise on the Fourier coefficients is zero mean complex normally distributed [6].

1) N_U^[l](k) is i.i.d. over [l] and N_c(0, σ²_U(k)) (N_c stands for complex normally distributed), N_Y^[l](k) is i.i.d. over [l] and N_c(0, σ²_Y(k)); E[N_Y^[l](k) N_U^[l](k)^H] = σ²_YU(k).

2) For k₁ ≠ k₂: E[N_U^[l](k₁) N_U^[l](k₂)^H] = 0, E[N_Y^[l](k₁) N_Y^[l](k₂)^H] = 0, and E[N_Y^[l](k₁) N_U^[l](k₂)^H] = 0.

Remark: These assumptions are asymptotically (N → ∞) met after a DFT for filtered white noise (which is the common time domain model). In [11] it is shown that the frequency domain and time domain cost functions are asymptotically equivalent under these conditions.

III. GENERALIZED MODEL STRUCTURE: ARBITRARY EXCITATIONS IN THE FREQUENCY DOMAIN

The input–output DFT spectra of linear systems can be exactly related by adding an additional transient term to the model [8] that accounts for the leakage errors. Consider a discrete-time system

G(z^−1) = B(z^−1)/A(z^−1) = Σ_{m=0}^{n_b} b_m z^−m / Σ_{m=0}^{n_a} a_m z^−m,
θ = (a₀, …, a_{n_a}, b₀, …, b_{n_b})^t, a₀ = 1   (3)

excited with u(k) and a response y(k). The relation between U₀^[l](k), Y₀^[l](k), and G(z^−1)|_{z=e^{jω_k}} is [8]

A(e^{−jω_k}) Y₀^[l](k) = B(e^{−jω_k}) U₀^[l](k) + T^[l](e^{−jω_k})   (4)

with

T^[l](z^−1) = Σ_{n=0}^{n_t} t_n^[l] z^−n, n_t = max(n_a − 1, n_b − 1).   (5)

Because the transients T^[l](z^−1) do not grow with the length of the experiment, it is seen from (1) that the coefficients t_n^[l] are an O(N^−1/2), showing that the impact of the transients disappears as an O(N^−1/2) for N → ∞. Define also the vector T(k) using the formalism of (2).

IV. SEPARATING SIGNAL AND NOISE: GENERATION OF A NONPARAMETRIC NOISE MODEL

For periodic signals the true signal is separated from the noise using the sample mean and the sample variance (which is the nonparametric noise model) over the repeated periods. In [10] it is shown that it is enough to measure four periods to prove consistency, even in the presence of colored cross-correlated input and output noise. The goal of this paper is to combine this result with the generalized model of Section III so that arbitrary persistent excitations can be used. The price for this generalization is the need for a known external reference signal, which was not needed before. Since for arbitrary excitations the expected value of the sample mean can be zero, a more general projection p(k) is needed to separate signal and noise:

p(k) = R(k)/√(R^H(k)R(k)), with p(k) ∈ C^{M×1}
P(k) = p(k)(p^H(k)p(k))^−1 p^H(k), with P(k) ∈ C^{M×M}

and

Q(k) = (I_M − P(k))   (6)

with Q(k) ∈ C^{M×M}. Using these matrices we define a generalized sample mean and sample variance.

Definition 1:

Û(k) = p^H U(k), Ŷ(k) = p^H Y(k)
σ̂²_U(k) = (1/(M − 1)) U^H(k) Q U(k)
σ̂²_YU(k) = (1/(M − 1)) U^H(k) Q Y(k)
σ̂²_Y(k) = (1/(M − 1)) Y^H(k) Q Y(k).
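The projection machinery of (6) and Definition 1 is straightforward to prototype. The sketch below (variable names are mine; numpy is assumed to be available) builds p(k), P(k), and Q(k) from an M-vector R(k) at a single frequency and evaluates the generalized sample mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6  # number of subrecords (illustrative value)

# DFT coefficients at one frequency k: reference R(k) and input U(k), M subrecords.
R = rng.standard_normal(M) + 1j * rng.standard_normal(M)
U = rng.standard_normal(M) + 1j * rng.standard_normal(M)

p = R / np.sqrt((R.conj() @ R).real)        # p(k): unit vector along R(k)
P = np.outer(p, p.conj())                   # rank-1 projector onto span{R(k)}
Q = np.eye(M) - P                           # Q(k) of eq. (6), rank M - 1

U_mean = p.conj() @ U                       # generalized sample mean  p^H U
U_var = (U.conj() @ Q @ U).real / (M - 1)   # generalized sample variance
```

For a periodic excitation all entries of R(k) are equal, Q reduces to I_M − (1/M)·ones, and U_var collapses to the classical sample variance over the M repeated periods, which is how Definition 1 generalizes the periodic case of [10].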
Remarks:

1) p(k) can also be interpreted as instrumental variables based on the known reference signal.

2) Least squares interpretation: Consider the linear relation between R and U: U(k) = R(k)L_RU(k) + ε_U(k), with L_RU(k) ∈ C. ε_U(k) = N_U(k) + T(k)/A(e^{−jω_k}) models all terms that cannot be explained by the model L_RU(k). Notice that lim_{N→∞} ε_U(k) = N_U(k) because the transients are an O(N^−1/2) and the noise N_U(k) an O(N^0). The least squares estimates for L_RU, U₀, and N_U are then given by (omitting the k-dependency)

L̂_RU = (R^H R)^−1 R^H U
Û₀ = R L̂_RU = R(R^H R)^−1 R^H U = p(p^H p)^−1 p^H U = P U, with p^H Û₀ = Û
N̂_U = (I_M − P) U.   (7)
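The least squares interpretation in (7) can be verified numerically: the fitted value R·L̂_RU coincides with the projection P·U, and the residual with (I_M − P)·U. A minimal check (numpy assumed, names mine):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 5
R = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # reference DFT vector
U = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # input DFT vector

# Scalar least squares fit of U on R, as in eq. (7):
L_RU = (R.conj() @ U) / (R.conj() @ R)   # (R^H R)^{-1} R^H U
U0_hat = R * L_RU                        # fitted "noise-free" part
NU_hat = U - U0_hat                      # residual, the noise estimate

# Projection form of the same estimates:
p = R / np.sqrt((R.conj() @ R).real)
P = np.outer(p, p.conj())
```

Both routes give the same split of U into a part along R(k) and an orthogonal residual, which is exactly why p(k) can be read as an instrumental variable built from the reference.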
Also the generalized sample variance can be linked again to the classical definitions. Define σ²_Û = E[(Û(k) − E[Û(k)])^H (Û(k) − E[Û(k)])]. Then the following property is valid.

Property 1: σ²_Û = E[σ̂²_U(k)] = σ²_U(k), for N → ∞.

The proof follows from classical results of linear algebra, noticing that Q(k)P(k) = 0, P^H(k)Q(k) = 0, and Q(k)p(k) = 0; P(k) and Q(k) are Hermitian symmetric, idempotent matrices of rank 1 and M − 1, respectively; Q(k)R(k) = 0; Y₀(k) = U₀(k)G(k) with U₀(k) ≈ R(k)L_RU(k), Y₀(k) ≈ R(k)L_RY(k), Q(k)U₀(k) = 0, and Q(k)Y₀(k) = 0 for N → ∞ (the finite sample behavior is studied in Section V).

Property 2: If the noise N_X obeys Assumption 1, then p^H N_X(k) and N_X(k)^H P N_X(k) are independent of Q N_X(k) and N_X(k)^H Q N_X(k).

Proof: N_X is normally distributed, zero mean, i.i.d., and P and Q are mutually orthogonal projection matrices [14].

V. A WEIGHTED LEAST SQUARES ESTIMATOR

The weighted least squares estimator is based on the equation error

A(e^{−jω_k}, θ) Y(k) − B(e^{−jω_k}, θ) U(k) = ε(k, θ).   (8)
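The structural claim behind (8) is that of Section III: on noise-free data, the equation error at the true parameters is not zero but exactly the transient polynomial of (4) and (5), of order n_t = max(n_a − 1, n_b − 1), created by the begin/end discontinuity of the finite record (leakage). This is checkable numerically; the example system below is my own, and numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
a = [1.0, -0.9, 0.5]   # A(z^-1), n_a = 2 (stable, illustrative)
b = [0.3, 0.2, 0.1]    # B(z^-1), n_b = 2

u = rng.standard_normal(N)
y = np.zeros(N)
for n in range(N):     # difference equation A y = B u, zero initial conditions
    y[n] = (sum(b[m] * u[n - m] for m in range(3) if n - m >= 0)
            - sum(a[m] * y[n - m] for m in range(1, 3) if n - m >= 0))

z = np.exp(-2j * np.pi * np.arange(N) / N)   # z^{-1} at omega_k = 2*pi*k/N
U = np.fft.fft(u) / np.sqrt(N)               # DFT scaling of eq. (1)
Y = np.fft.fft(y) / np.sqrt(N)

A = a[0] + a[1] * z + a[2] * z**2            # A(e^{-j omega_k})
B = b[0] + b[1] * z + b[2] * z**2
E = A * Y - B * U                            # equation error at the true theta

# E(k) must lie exactly in span{1, z^{-1}}: n_t = max(n_a - 1, n_b - 1) = 1.
V = np.column_stack([np.ones(N), z])
t_hat, *_ = np.linalg.lstsq(V, E, rcond=None)
residual = np.linalg.norm(E - V @ t_hat)
```

The fit residual is at floating-point level although E itself is not small, confirming that (4) and (5) hold exactly for a finite record; the fitted t_hat are the (1/√N-scaled) transient coefficients t_n^[l].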
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 44, NO. 2, FEBRUARY 1999
Define

ε̂(k, θ) = p^H ε(k, θ) = A(e^{−jω_k}, θ) Ŷ(k) − B(e^{−jω_k}, θ) Û(k)
ε₀(k, θ) = A(e^{−jω_k}, θ) Y₀(k) − B(e^{−jω_k}, θ) U₀(k)
N_ε(k, θ) = A(e^{−jω_k}, θ) N_Y(k) − B(e^{−jω_k}, θ) N_U(k).   (9)

Then ε has the following property and definition.

Property 3: Q(k)ε₀(k, θ) = 0 and Q(k)ε(k, θ) = Q(k)N_ε(k, θ) for N → ∞.

Definition 2: σ̂²_ε(k, θ) = (1/(M − 1)) ε(k, θ)^H Q(k) ε(k, θ).

σ̂²_ε(k, θ) is directly related to σ̂²_U(k), σ̂²_Y(k), and σ̂²_YU(k).

Property 4: For N → ∞

σ̂²_ε(k, θ) = |A(e^{−jω_k}, θ)|² σ̂²_Y(k) + |B(e^{−jω_k}, θ)|² σ̂²_U(k) − 2 Real(B(e^{−jω_k}, θ)^H A(e^{−jω_k}, θ) σ̂²_YU(k)).

Since σ̂²_U(k), σ̂²_Y(k), and σ̂²_YU(k) are directly calculated from the raw data, they are independent from model errors. Because ε(k, θ) is a linear combination of normally distributed variables, it is clear that it is also normally distributed.

Property 5: Under Assumption 1, for N → ∞, ε(k, θ) is N_c(ε₀(k, θ), σ²_ε(k, θ) I_M) (complex normally distributed) with σ²_ε(k) = E[σ̂²_ε(k, θ)]. Moreover, Pε(k, θ) and Qε(k, θ) are independent variables.

The estimator is the minimizer of a cost function calculated on a (sub)set of frequencies Ω_k, k = 1, …, F.

Definition 3:

θ̂ = arg min_{θ ∈ D} K̂(ε(θ)), K̂(ε(θ)) = (1/F) Σ_{k=1}^{F} ε̂(Ω_k, θ)^H ε̂(Ω_k, θ) / σ̂²_ε(Ω_k, θ)   (10)

where D = {θ ∈ R^{n_a+n_b+2} : σ²_ε(Ω_k, θ) ≠ 0 and a₀ = 1}.

Remark: For some realizations σ̂²_ε(Ω_k, θ) can be zero; however, this is an event with probability zero, so that the cost function is regular with probability one.

First we study the asymptotic properties (N → ∞, F = O(N)) (the transient effects are negligible); next we show how to improve the finite sample behavior by dealing with the transients.

A. Asymptotic Properties

In order to study the properties of the estimator, θ̂ is compared to

θ* = arg min_{θ ∈ D} K(ε₀(θ)) = arg min_{θ ∈ D} (1/F) Σ_{k=1}^{F} ε̂₀(Ω_k, θ)^H ε̂₀(Ω_k, θ) / σ²_ε(Ω_k, θ)

which is the minimum of the cost function based on the noise free data U₀, Y₀ and the exact nonparametric noise model σ²_ε(Ω_k, θ) instead of the sample variance-based model. If the model set includes the exact model G₀(z^−1), then G(θ*, z^−1) = G₀(z^−1). Moreover, if no overparameterization is done, θ* = θ₀, with θ₀ the exact parameters. For simplicity we exclude overparameterization and guarantee a unique solution by the following assumptions.

Assumption 3: D is an identifiable parameter set for the exact measurements ε₀; there exists a unique global minimum point θ* = arg min_{θ ∈ D} K(ε₀(θ)), ∀N > N₀, including N = ∞.

Assumption 4: θ̂ is an interior point of D.

Assumption 5: The experiments are informative (persistent excitation) so that E[K̂(θ, Z)] = O(N^0).

Theorem 1: Under Assumptions 1–5, θ̂ converges strongly to θ*.

Proof: The proof is completely similar to that of [10] by noticing that

ε̂^H ε̂ / σ̂²_ε = (M − 1) (ε₀ + N_ε)^H P (ε₀ + N_ε) / ((ε₀ + N_ε)^H Q (ε₀ + N_ε))
= (M − 1) [ε₀^H P ε₀ + ε₀^H P N_ε + N_ε^H P ε₀ + N_ε^H P N_ε] / (N_ε^H Q N_ε)   (11)

using Q ε₀ = 0 (Property 3), and that (omitting the k-dependency, and using that N_ε^H Q N_ε / (σ²_ε / 2) is χ²(2M − 2) distributed)

(M − 1) E[ε₀^H P ε₀ / (N_ε^H Q N_ε)] = 2(M − 1) (ε₀^H P ε₀ / σ²_ε) E[(σ²_ε / 2) / (N_ε^H Q N_ε)] = ((M − 1)/(M − 2)) ε₀^H P ε₀ / σ²_ε
E[ε₀^H P N_ε / (N_ε^H Q N_ε)] = ε₀^H E[P N_ε] E[1 / (N_ε^H Q N_ε)] = 0
(M − 1) E[N_ε^H P N_ε / (N_ε^H Q N_ε)] = (M − 1) E[N_ε^H P N_ε / σ²_ε] E[σ²_ε / (N_ε^H Q N_ε)] = (M − 1)/(M − 2).   (12)

The expected value of the cost function becomes

E[K̂(ε(θ))] = E[(1/F) Σ_{k=1}^{F} ε̂(Ω_k, θ)^H ε̂(Ω_k, θ) / σ̂²_ε(Ω_k, θ)] = ((M − 1)/(M − 2)) K(ε₀(θ)) + C(M)   (13)

which proves that

arg min_{θ ∈ D} E[K̂(ε(θ))] = θ*.

B. Improving the Finite Sample Behavior

Although the previous section proves consistency of θ̂, it is still an open question what the impact of the transients on the finite sample behavior is. Since the transients behave as an O(N^−1/2), they might even be the dominating error mechanism for short data records, so that they should be included in the method. A generalized model is needed, including the transient behavior in the system description, and the transient impact on the sample variances should be compensated. The first problem is easily solved using a model extension that is linear in the parameters. The variances are corrected in a multiple step procedure to approximate the exact weighting.

Step 1: Identify the generalized model, including the transients, using the sample mean/sample variance information (using Definition 1) obtained by a straightforward analysis of the raw data.

Step 2: Eliminate the transient influence from the raw data using the previous estimates and calculate the sample variance (NOT the sample mean) from these corrected data. Make a new estimate using the old sample mean and the improved sample variance.

Repeat Step 2 until convergence is obtained. The procedure always restarts from
Fig. 2. Simulation results: (a) N = 128, M = 5; (b) N = 256, M = 5. 1: Ĝ1; 2: Ĝ2; 3: Ĝ3; - - -: ĜrefT; · · ·: ĜrefP.
the original raw data to avoid a wind-up effect: repetitively added corrections could push away the original information. For the same reason, the sample mean is always calculated from the original data and not from the corrected data.

Generalized Model: Applying (4), (5), and (8) to Û(k) and Ŷ(k) results in

A(e^{−jω_k}, θ) Ŷ(k) = B(e^{−jω_k}, θ) Û(k) + T̂(k, θ_T), with T̂(k, θ_T) = p^H(k) T(k, θ_T).   (14)

T(k, θ_T) is the vector that is obtained by ordering the transients T^[l](e^{−jω_k}, θ_T^[l]) of the subrecords in a vector [see (2)], where T^[l](e^{−jω_k}, θ_T^[l]) is defined as T^[l](e^{−jω_k}, θ_T^[l])/A(e^{−jω_k}, θ₀) and θ_T groups all the parameters θ_T^[l], l = 1, …, M. In (14) the term A(e^{−jω_k}, θ)/A(e^{−jω_k}, θ₀) is omitted. The new equation error is

ε̂_T(k) = A(e^{−jω_k}, θ) Ŷ(k) − B(e^{−jω_k}, θ) Û(k) − T̂(k, θ_T)
= ε̂₀(k, θ) + N̂_ε(k, θ) − T̂(k, θ_T).   (15)

Finite sample behavior of the cost function: Next we show that the errors on θ̂ caused by the wrong weighting are an O(N^−1), to be compared to the noise errors, which are an O(N^−1/2). The sample variance calculated from the raw data (without transient compensation) is

σ̂²_ε(k, θ) = (1/(M − 1)) (N_ε(k, θ) + A(e^{−jω_k}, θ)T(k))^H Q(k) (N_ε(k, θ) + A(e^{−jω_k}, θ)T(k))   (16)

and the kth term of the cost function becomes, after grouping all parameters in θ_a,

ε_T(Ω_k, θ_a)^H P(Ω_k) ε_T(Ω_k, θ_a) / ((M − 1) σ̂²_ε(Ω_k, θ))
= (e₀(θ_a, Ω_k) + N_ε(θ, Ω_k))^H P(Ω_k) (e₀(θ_a, Ω_k) + N_ε(θ, Ω_k)) / ((N_ε(k, θ) + A(e^{−jω_k}, θ)T(k))^H Q(k) (N_ε(k, θ) + A(e^{−jω_k}, θ)T(k)))   (17)

with e₀(Ω_k, θ_a) = ε₀(Ω_k, θ) − T(k, θ_T). The expected value of (17) with respect to the noise becomes

(1/(M − 1)) E[ε_T^H P ε_T / σ̂²_ε] = (e₀^H P e₀ / σ²_ε + E[N_ε^H P N_ε / σ²_ε]) E[σ²_ε / ((N_ε + AT)^H Q (N_ε + AT))].   (18)

Since N_ε^H P N_ε / σ²_ε is proportional to a standard χ² distribution within a θ-independent constant, its expected value is independent from θ. The term (N_ε + AT)^H Q (N_ε + AT)/σ²_ε is a noncentral χ² distribution centered around AT (which is an O(N^−1/2)), and hence its moments depend only on squared or higher powers of AT [14]. Consequently

E[σ²_ε / ((N_ε + AT)^H Q (N_ε + AT))] = C(M)(1 + O_θ(N^−1))   (19)

where the subscript points to the dependency on θ. Hence the kth term of the cost function is

E[ε_T^H P ε_T / σ̂²_ε] = ((e₀^H P e₀ / σ²_ε) C₁(M) + C₂(M)) (1 + O_θ(N^−1)).   (20)

Sensitivity of the parameter estimates to the presence of transients: From (20) it is seen that the presence of the transients shifts the minimizer of the expected value of the cost. A distortion analysis shows that this shift is an O(N^−1), to be compared to the noise sensitivity of the parameters, which is an O(N^−1/2). Define θ_T = arg min_{θ ∈ D} K(ε_T(θ)); then θ_T = θ* + Δ(N), where Δ(N) is an O(N^−1). Since the transients are an O(N^−1/2), it is possible to compensate the transients on the raw data for N large enough and to extract improved sample variances. This process can be repeated a few times.

VI. SIMULATION RESULTS

To check the previous results, a simulation is set up to identify a plant in a feedback loop as given in Fig. 1. The models of the systems are given by their [numerator; denominator] polynomials in z^−1: G: [0.1085 0.2171 0.0985; 1.0000 0.5969 0.8442], C: [0.6625 0.6625; 1.0000 0.3249], and D: [0 6.6471; 1.0000 −0.2647 0.7753]. The noise is filtered white noise with noise shaping filters m_u: [0.8625 −0.6625; 1.0000 −0.3249], m_y: [0.7792 −0.5792; 1.0000 −0.1584], n_G: [0.8409 0.5000 0.2276 0.5000 0.6362; 1.0000 −0.0000 0.0013 0.0000 0.7239], and n_C: [1; 1]. Two simulations were performed. In the first one M = 5, N = 128, while in the second one N = 256.
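Before interpreting the simulation numbers, the mechanism that makes the error bounds meaningful, Property 1 of Section IV, can be sanity-checked: when the noise-free input lies along the reference direction, the generalized sample variance is an unbiased estimate of the true noise variance. A small Monte Carlo sketch (setup and values are mine, not the paper's simulation; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
M, trials = 8, 4000
sigma2 = 0.25                              # true circular complex noise variance (assumed)

R = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # reference at one frequency
p = R / np.sqrt((R.conj() @ R).real)
Q = np.eye(M) - np.outer(p, p.conj())

U0 = R * (1.2 - 0.4j)                      # noise-free input, proportional to R(k)
est = np.empty(trials)
for i in range(trials):
    NU = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    est[i] = ((U0 + NU).conj() @ Q @ (U0 + NU)).real / (M - 1)
```

Since Q U0 = 0, only the noise survives the projection, and the average of est approaches sigma2 as the number of trials grows, regardless of the (arbitrary, non-periodic) reference direction.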
Fig. 3. Experimental verification on a mechanical system: (a) periodic excitation, (b) random excitation; reference measurement, model, x: magnitude of complex error, - - -: uncertainty on reference measurement.
The system is driven with a white binary noise source with an amplitude of one. The standard deviations of the noise sources were the same in both simulations: σ_G = 0.01, σ_C = 0.01, σ_u = 0.001, σ_y = 0.001, resulting in an SNR between 10 and 30 dB at the output of the system. During the simulation five different results were stored. Ĝ1: no transient model, no transient compensated weighting; Ĝ2: transient model, no transient compensated weighting; Ĝ3: transient model, using a transient compensated weighting; ĜrefT: transient model, using an a priori given exact weighting; and ĜrefP: periodic excitation, a priori given exact weighting. The last two results are used as a reference value. ĜrefT is obtained identifying the extended model but using the exact noise variances, so that the impact of errors in the determination of the weighting function is completely removed. ĜrefP is obtained using periodic excitations, again assuming that the noise model is known a priori. Under these conditions, this estimator is the maximum likelihood estimator [7] and the error bound is almost equal to the Cramér–Rao lower bound. Each simulation is repeated 300 times, and the root mean square errors are plotted in Fig. 2. The errors were completely due to variance; the bias was negligibly small (for example, 40 dB below the stochastic errors).

From this simulation it is seen that the method (Ĝ3) gives error bounds that are very close to the lowest obtainable bound. Compared to the straightforward application of the classical frequency domain method (Ĝ1), a considerable gain in accuracy is obtained. The figure also shows that it is not sufficient to add the transients to the model; it is also necessary to correct the weighting function (compare Ĝ2 with Ĝ3). The errors of Ĝ1 for N = 256 are 6 dB below those found with N = 128. This confirms that the transient errors on the parameters are an O(N^−1). The lowest error bounds decreased with 3 dB, which is in agreement with their stochastic nature.

VII. EXPERIMENTAL VERIFICATION

The transfer function between the force (input) and acceleration (output) of a vibrating mechanical system is identified. An amplifier plus electromechanical shaker is used as actuator. The reference signal (stored in the computer memory) is applied to the amplifier through a zero-order-hold reconstruction. In the first part of the experiment a periodic reference signal is applied to get a nonparametric reference (50 periods of 1024 points), and in the second part a random sequence with the same power spectrum is applied and measured in 50 × 1024 points. From both parts a parametric model is derived (numerator order = denominator order = 23) and compared to the nonparametric reference measurement. Fig. 3 shows that the method provides good results even on this complex example with lightly damped poles (and hence significant transients). However, at some frequencies (at almost completely hidden resonances/antiresonances) the errors are significantly larger than those obtained with the periodic excitations.
VIII. CONCLUSIONS

In this paper a method is proposed to identify linear dynamic systems, starting from the measured input and output and a known external reference signal. A nonparametric noise model is extracted from the data, and it is used as a weighting in the cost function. The major advantage compared with the classic time domain methods is the automatic generation of a noise model from the raw data. A second important advantage is that no knowledge is required about the controller if a system in a feedback loop is identified. The identification method is also independent of the complexity of this controller. It is clear that this simplifies the practical application of the method considerably.

REFERENCES

[1] D. R. Brillinger, Data Analysis and Theory. New York: McGraw-Hill.
[2] M. Gevers, "Identification for control," in Proc. ACASP95, Budapest, Hungary, pp. 1–12.
[3] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: The Johns Hopkins Univ. Press, 1990.
[4] H. Hjalmarsson, M. Gevers, and F. Debruyne, "For model-based control design, closed loop identification gives better performance," Automatica, vol. 32, pp. 1659–1674, 1996.
[5] L. Ljung, System Identification—Theory for the User. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[6] B. Picinbono, Random Signals and Systems. Englewood Cliffs, NJ: Prentice-Hall, 1994.
[7] R. Pintelon, P. Guillaume, Y. Rolain, J. Schoukens, and H. Van hamme, "Parametric identification of transfer functions in the frequency domain—A survey," IEEE Trans. Automat. Contr., vol. 39, pp. 2245–2260, Nov. 1994.
[8] R. Pintelon, J. Schoukens, and G. Vandersteen, "Frequency domain system identification using arbitrary signals," IEEE Trans. Automat. Contr., 1997.
[9] J. Schoukens, R. Pintelon, and H. Van hamme, "Identification of linear dynamic systems using piecewise constant excitations: Use, misuse and alternatives," Automatica, vol. 30, no. 7, pp. 1153–1169, 1994.
[10] J. Schoukens, R. Pintelon, and Y. Rolain, "Frequency domain system identification using nonparametric noise models estimated from a small number of data sets," Automatica, vol. 33, no. 6, pp. 1073–1086, 1997.
[11] ——, "Study of conditional ML estimators in time and frequency domain system identification," in Proc. ECC97, Brussels, Belgium, July 1–4, 1997.
[12] T. Söderström and P. Stoica, "Comparison of some instrumental variable methods—Consistency and accuracy aspects," Automatica, vol. 17, no. 1, pp. 101–115, 1981.
[13] ——, System Identification. Hemel Hempstead, U.K.: Prentice-Hall, 1989.
[14] A. Stuart and J. K. Ord, Advanced Theory of Statistics. London, U.K.: Griffin, 1986.
[15] P. M. J. Van den Hof and R. J. P. Schram, "Identification and control—Closed loop issues," Automatica, vol. 31, pp. 1751–1770, 1995.
Authorized licensed use limited to: Rik Pintelon. Downloaded on December 4, 2008 at 09:08 from IEEE Xplore. Restrictions apply.