Automatica 36 (2000) 101-109
Brief Paper
Subspace-based fault detection algorithms for vibration monitoring†

Michèle Basseville*, Maher Abdelghani, Albert Benveniste¹

IRISA, Campus Universitaire de Beaulieu, 35042 Rennes Cedex, France

Received 21 November 1997; revised 7 January 1999; received in final form 8 April 1999
Abstract

We address the problem of detecting faults modeled as changes in the eigenstructure of a linear dynamical system. This problem is of primary interest for structural vibration monitoring. The purpose of the paper is to describe and analyze new fault detection algorithms, based on recent stochastic subspace-based identification methods and the statistical local approach to the design of detection algorithms. A conceptual comparison is made with another detection algorithm based on the instrumental variables identification method, previously proposed by two of the authors. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Fault detection; Subspace-based stochastic identification methods; Statistical local approach; Structural vibration monitoring
1. Introduction

Before outlining the paper, we describe our motivations and introduce the models which we use throughout.

1.1. Motivations

The problem of fault detection and isolation (FDI) is a crucial issue which has been investigated with different types of approaches, as can be seen from the survey papers (Willsky, 1976; Frank, 1990) and the books (Patton, Frank & Clark, 1989; Basseville & Nikiforov, 1993), among other references. Moreover, an increasing interest in condition-based maintenance has appeared in a large number of industrial applications. The key idea there is to replace regular systematic inspections by condition-based inspections, i.e. inspections decided upon the continuous monitoring of the considered system (machine, structure, process or plant), based on the sensor data, in order to prevent a possible malfunction or damage

† This work is supported by Eureka project no. 1562 SINOPSYS (Model based Structural monitoring using in-operation system identification), coordinated by LMS, Leuven, Belgium. This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor H. Hjalmarsson under the direction of Editor T. Söderström.
* Corresponding author. Tel.: +33-2-99-84-72-36; fax: +33-2-99-84-71-71. E-mail address: [email protected] (M. Basseville)
¹ M.B. is also with CNRS; M.A. was and A.B. is also with INRIA.
before it happens and optimize the maintenance cost. Condition-based maintenance typically involves the monitoring of components, and not only of sensors and actuators. It has been recognized that a relevant approach to condition-based maintenance consists in the early detection of slight deviations with respect to a (parametric) characterization of the system in usual working conditions, without artificial excitation, slowing down, or stopping. Processing long samples of multi-sensor output measurements is often necessary for this purpose.

It turns out that, in many applications, the FDI problem of interest is to detect changes in the eigenstructure of a linear dynamical system. An important example is structural vibration monitoring (Basseville et al., 1993). Vibrating structures are classically modeled as a continuous-time linear system driven by an excitation, whose output vector is the set of accelerometer measurements. Their vibrating characteristics (modes and modal shapes) coincide with the system eigenstructure. The key issue is to identify and monitor vibrating characteristics of mechanical structures subject to unmeasured and non-stationary natural excitation. Typical examples are offshore structures subject to swell, buildings subject to wind or earthquake, bridges, dams, wings subject to flutter in flight, and turbines subject to steam turbulence, friction in bearings, and imperfect balancing.

A relevant approach to the vibration monitoring problem has been shown to be based on the modeling of modes and modal shapes through state space representations (Prevosto et al., 1991), the use of output-only and
0005-1098/00/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved.
PII: S0005-1098(99)00093-X
covariance-driven identification methods (such as instrumental variables or balanced realization algorithms) (Benveniste & Fuchs, 1985), and the computation of specific χ²-type tests based on the so-called instrumental statistics (Basseville et al., 1987). In practice, these tests turn out to be robust w.r.t. non-stationary excitation. Moreover, the design of these χ²-tests is a particular case of a general approach to FDI which builds a detector from a convenient estimating function (Benveniste et al., 1987; Zhang et al., 1994; Basseville, 1998).

On the other hand, during the last decade there has been a growing interest in subspace-based linear system identification methods (Viberg, 1995; Van Overschee & De Moor, 1996) and their relationship with instrumental variables (IV) (Viberg et al., 1997). These methods are well suited for capturing the system eigenstructure. Moreover, we know from practice that this family of methods is robust w.r.t. the non-stationarity of the zero part. Because of what has been argued above, the question arises of designing FDI algorithms based on this class of identification methods and of investigating their theoretical properties. It is the purpose of this paper to address these two issues, concentrating on the detection problem. The interested reader is referred to (Basseville et al., 1987; Basseville, 1997; Basseville, 1998) for solutions to the isolation problem.
1.2. System models and parameters We consider linear multi-variable systems described by a discrete-time state space model:
X_{k+1} = F X_k + e_k,
Y_k     = H X_k + v_k,                                        (1)
where the state X and the observed output Y have dimensions m and r, respectively. The state noise process (e_k)_k is an unmeasured Gaussian white noise sequence with zero mean. We assume the noise e_k to be stationary, that is, of constant covariance matrix. The issue of robustness with respect to non-stationary excitation is addressed in Section 2.5. The measurement noise process (v_k)_k is assumed to be an unmeasured MA(ι) Gaussian sequence with zero mean. In the sequel, we use the notational convention that ι = -1 for no measurement noise, and ι = 0 for white (i.i.d.) measurement noise. Note that, with this MA assumption for its structure, the measurement noise does not affect² the eigenstructure of system (1).
² The same would not hold true with an AR assumption for the measurement noise structure.
Let G := E(X_k Y_k^T) be the cross-correlation between the state X_k and the observation Y_k, and let

O_p = [ H
        H F
        ⋮
        H F^{p-1} ]    and    C_p = ( G  F G  ⋯  F^{p-1} G )    (2)
be the pth-order observability matrix of system (1) and the controllability matrix of the pair (F, G), respectively. We assume that, for p large enough, both observability and controllability matrices have full rank m.

The problem we consider is to monitor the eigenstructure of the observed system, that is, the collection of m pairs (λ, φ_λ), where λ ranges over the set of eigenvalues of the state transition matrix F, φ_λ = H u_λ, and u_λ is the corresponding eigenvector. In all that follows, we assume that the system has no multiple eigenvalues, and thus that the λ's and u_λ's are pairwise complex conjugate. In particular, 0 is not an eigenvalue of the state transition matrix F. We stress that the collection of (λ, φ_λ) provides us with a canonical parameterization of the pole part of system (1). In particular, it does not depend on the state space basis (Benveniste & Fuchs, 1985). In the sequel, referring to vibration monitoring (Prevosto et al., 1991), such a pair (λ, φ_λ) is called a mode. The set of the m modes is considered as the system parameter θ:

θ := [ Λ
       vec Φ ].    (3)
In Eq. (3), Λ is the vector whose elements are the eigenvalues λ, Φ is the matrix whose columns are the φ_λ's, and vec is the column-stacking operator. Parameter θ has size (r+1)m. The problem is to detect changes in the parameter vector θ.

This detection problem can also be addressed using an input-output ARMA rewriting of state space model (1) (Akaike, 1974):

Y_k = Σ_{i=1}^{p} A_i Y_{k-i} + Σ_{j=0}^{p+ι} B_j E_{k-j},    (4)

where (E_k)_k is a Gaussian white noise sequence with zero mean and identity covariance matrix, and where the AR coefficients A_i are related to the pair (H, F) of model (1) via

H F^p = Σ_{i=1}^{p} A_i H F^{p-i}.    (5)

Note that the state noise process (e_k)_k in Eq. (1) is reflected only in the MA coefficients (Benveniste & Fuchs, 1985; Prevosto et al., 1991). The problem is then to detect
changes in the eigenstructure of a multi-variable ARMA process with unknown MA part. We stress that models (1) and (4) are fully relevant w.r.t. the physics of vibration monitoring (Basseville et al., 1987), and should not be considered as black-box models for this application.
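To make the parameterization concrete, here is a minimal numpy sketch of extracting the modes (λ, φ_λ) from a pair (F, H), together with the AR relation (5) in the simplest case p = 1 with H invertible. The matrices are toy placeholders chosen for illustration, not taken from the paper.

```python
import numpy as np

def modes(F, H):
    """Modes (lambda, phi_lambda) of system (1): eigenvalues lambda of F
    and observed mode shapes phi_lambda = H u_lambda."""
    lam, U = np.linalg.eig(F)
    return lam, H @ U

def ar_matrix(F, H):
    """Special case p = 1 of Eq. (5) with H invertible: H F = A_1 H."""
    return H @ F @ np.linalg.inv(H)

# Toy example: F diagonal, so its eigenvectors are the canonical basis.
F = np.diag([0.9, 0.5])
H = np.array([[1.0, 2.0],
              [3.0, 4.0]])
lam, Phi = modes(F, H)
A1 = ar_matrix(F, H)
# A_1 = H F H^{-1} is similar to F, so eigenanalysis of Eq. (5) recovers
# the same eigenvalues: the modes do not depend on the state basis.
print(np.sort(lam.real), np.sort(np.linalg.eigvals(A1).real))
```

This also illustrates why θ is a canonical parameterization of the pole part: the eigenvalues survive any similarity transformation of the state.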
1.3. Outline of the paper

Section 2 is devoted to the ideal case where the true system order is known and used in designing the tests. We first discuss system theoretic issues, we describe how subspace-based tests can be built, and we properly formalize this design, based on the statistical local approach. Finally, we exhibit several particular tests, and discuss robustness w.r.t. non-stationary zeroes. Section 3 points out the remarkable and somewhat unexpected fact that all subspace-based FDI algorithms with known system order are equivalent. Section 4 provides additional elements for implementing the tests. Then, Section 5 discusses how subspace tests should be adapted in the more realistic case of model reduction. Relation to other works is discussed in Section 6, and Section 7 concludes.

2. Subspace-based FDI algorithms: known system order

Throughout this section, we assume that the true system order is known. Of course, this is somewhat unrealistic, but it allows us to introduce the key tools of our design of subspace-based tests. The more realistic case where the true system order is not known is discussed in Section 5.

2.1. System theoretic issues

The arguments presented here are well known (Viberg et al., 1997); we state them here for the sake of clarity and completeness. We are given a sequence of covariances R_j := E(Y_{k+j} Y_k^T) of the output Y_k of state space model (1). For q ≥ p+1, let H_{p+1,q} be the block-Hankel matrix:

H_{p+1,q} = [ R_{ι+1}    R_{ι+2}  ⋯  R_{ι+q}
              R_{ι+2}    R_{ι+3}  ⋯  R_{ι+q+1}
              ⋮                   ⋱  ⋮
              R_{ι+p+1}  ⋯           R_{ι+p+q} ].    (6)

As mentioned above, the integer ι reflects the assumed correlation in the measurement noise sequence (v_k)_k. It should be considered as a design parameter for the proposed algorithms.

Choosing the eigenvectors of F as a basis³ for the state space of model (1) yields the following particular representation of the observability matrix introduced in Eq. (2):

O_{p+1}(θ) = [ Φ
               Φ Δ
               ⋮
               Φ Δ^p ],    (7)
where the diagonal matrix Δ is defined as Δ = diag(Λ), and Λ and Φ are as in Eq. (3). For any other state basis, the observability matrix O_{p+1} can be written as

O_{p+1} = O_{p+1}(θ) T    (8)

for a suitable m×m invertible matrix T. Because of the definition of H_{p+1,q}, O_p and C_p in Eqs. (6) and (2), a direct computation of the R_j's from the model equations, R_{ι+j+1} = H F^{ι+j+1} G for j ≥ 0, leads to the following well-known factorization property:

H_{p+1,q} = O_{p+1} (F^{ι+1} C_q).    (9)

From Eqs. (9), (7) and (8), we get that the following property characterizes whether a nominal parameter θ_0 agrees with a given output covariance sequence (R_j)_j (Viberg et al., 1997):

O_{p+1}(θ_0) and H_{p+1,q} have the same left kernel space⁴ with co-rank m.    (10)
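As a numerical illustration (a toy example with assumed matrices, not taken from the paper), the factorization (9) and the characterization (10) can be checked directly: an orthonormal basis of the left kernel of O_{p+1}(θ_0), obtained by SVD, also annihilates the block-Hankel matrix.

```python
import numpy as np

# Toy modal system: F diagonal, so the state basis is the modal basis (Phi = H).
lam = np.array([0.9, 0.5])                      # eigenvalues (m = 2)
F = np.diag(lam)
H = np.array([[1.0, 2.0], [3.0, 4.0]])          # r = 2 outputs
G = np.array([[0.5, 1.0], [1.0, 0.5]])          # G = E(X_k Y_k^T)

# Exact covariances R_j = H F^j G (convention iota = -1: no measurement noise).
R = [H @ np.linalg.matrix_power(F, j) @ G for j in range(6)]

# Block-Hankel matrix of Eq. (6) with p = 2, q = 3: block (i, j) is R_{i+j}.
p, q, m = 2, 3, 2
Hank = np.vstack([np.hstack([R[i + j] for j in range(q)]) for i in range(p + 1)])

# Observability matrix O_{p+1}(theta) of Eq. (7) and controllability C_q.
Op1 = np.vstack([H @ np.diag(lam ** k) for k in range(p + 1)])
Cq = np.hstack([np.linalg.matrix_power(F, j) @ G for j in range(q)])
assert np.allclose(Hank, Op1 @ Cq)              # factorization (9) with iota = -1

# Orthonormal left-kernel basis S of O_{p+1}(theta_0) via SVD, s = (p+1)r - m.
S = np.linalg.svd(Op1)[0][:, m:]
print(np.allclose(S.T @ S, np.eye(4)),          # orthonormality
      np.abs(S.T @ Op1).max() < 1e-10,          # S spans the left kernel of O
      np.abs(S.T @ Hank).max() < 1e-10)         # same left kernel: property (10)
```

The SVD step anticipates the construction of the matrix S described next.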
Property (10) can be checked as follows:

(1) From θ_0 as in Eq. (3), form O_{p+1}(θ_0), and pre-multiply it by some invertible weighting matrix W_1.

(2) Pick an orthonormal basis of the left kernel space of the matrix W_1 O_{p+1}(θ_0), in terms of the columns of some matrix S of co-rank m such that

S^T S = I_s,    (11)
S^T W_1 O_{p+1}(θ_0) = 0.    (12)

Matrix S has dimensions (p+1)r × s, where s = (p+1)r - m, and it is not unique; it can be obtained, for example, from the SVD factorization of W_1 O_{p+1}(θ_0). Two such matrices are related through post-multiplication by an orthonormal matrix U. Also, the choice of such a matrix depends on the weighting matrix W_1. We comment further on this below. We stress that, because of Eq. (12), matrix S depends implicitly on the parameter θ_0. We bring the reader's attention to the fact that, even though S is not unique, in the sequel we treat it as a function of the parameter θ, denoted by S(θ); this is fully justified in Section 3.

³ This is called the modal basis in the vibration monitoring application (Basseville et al., 1987).
⁴ The left kernel space of a matrix M is the kernel space of the matrix M^T.
(3) The parameter θ_0 which actually corresponds to the output covariance sequence (R_j)_j of model (1) is characterized by

S^T(θ_0) W_1 H_{p+1,q} W_2^T = 0,    (13)

where W_2 is another invertible weighting matrix of choice. This matrix is introduced here by reference to usual subspace identification methods (Van Overschee & De Moor, 1996).

The choice of the weighting matrix W_1 only influences the particular basis of the left kernel space of O_{p+1}(θ_0), which is given by the rows of S^T W_1, where S is orthonormal (11). However, characterization (10) does not depend, of course, on the particular weighting matrices W_1 and W_2. We stress that the above stands only in the case of known system order. How this should be modified in the (more realistic) case of model reduction is discussed in Section 5.

2.2. Subspace-based tests: a simple derivation
We now explain how subspace-based detectors can be designed. Assume we have at hand a nominal model θ_0 and newly collected data Y_1, …, Y_n. Compute the empirical covariance sequence R̂_j := 1/(n-j) Σ_{k=1}^{n-j} Y_{k+j} Y_k^T. Then perform steps 1 and 2 of Section 2.1, and replace step 3 by:

(3) Define the residual vector

ζ_n(θ_0) := √n vec(S^T(θ_0) W_1 Ĥ_{p+1,q} W_2^T),    (14)

where Ĥ_{p+1,q} is the empirical block-Hankel matrix obtained by substituting R̂_j for R_j in Eq. (6).

From Eq. (13) we already know the following. Let θ denote the actual value of the parameter, for the system which generated the new data sample. Then

E_θ(ζ_n(θ_0)) = 0 iff θ = θ_0.    (15)

Thus, it is natural to replace criterion (13) by the requirement that the statistic ζ_n(θ_0) should have zero mean; it consequently plays the role of a residual. Based on a data sample, testing whether this hypothesis is valid requires knowledge of the statistical properties of the residual ζ_n(θ_0). This is addressed next.

2.3. Subspace-based tests: theoretical design

We need to know the distribution of ζ_n(θ_0) when the actual parameter for the new data sample is θ. Unfortunately, this distribution is unknown in general. One way to circumvent this difficulty is to use a local approach, that is, to assume close hypotheses:

(Safe system)   H_0: θ = θ_0,
(Faulty system) H_1: θ = θ_0 + B/√n.    (16)

In Eq. (16), the vector B is unknown but fixed. Note that for large n, hypothesis H_1 corresponds to small deviations in θ. As formally stated below, it turns out that the residual ζ_n in Eq. (14) is asymptotically Gaussian distributed under both H_0 and H_1, making its evaluation easy. More precisely, let E_θ and cov_θ be the expectation and the covariance, respectively, when the actual system parameter is θ. We define the mean residual sensitivity

M(θ_0) := -(1/√n) ∂/∂θ E_{θ_0} ζ_n(θ)|_{θ=θ_0}    (17)
        = +(1/√n) ∂/∂θ E_θ ζ_n(θ_0)|_{θ=θ_0},    (18)

and the residual covariance matrix Σ(θ_0) := lim_{n→∞} E_{θ_0}(ζ_n ζ_n^T). Matrix M is a Jacobian matrix, matrix Σ captures the uncertainty in ζ_n, Eq. (18) holds true because of Eq. (15), and the limit in the definition of Σ exists. The following central limit theorem (CLT) holds (Basawa, 1985; Benveniste et al., 1987; Benveniste et al., 1990; Zhang et al., 1994; Delyon et al., 1997).

Theorem 2.1 (CLT). Provided that Σ(θ_0) is positive definite, the residual ζ_n in Eq. (14) is asymptotically Gaussian distributed with the same covariance under both hypotheses in Eq. (16); that is,

ζ_n(θ_0) →_{n→∞}  N(0, Σ(θ_0))         under H_0,
                  N(M(θ_0) B, Σ(θ_0))  under H_1.

A deviation in the system parameter θ is thus reflected in a change in the mean value of ζ_n. Note that the matrices M(θ_0) and Σ(θ_0) depend on neither the sample size n nor the fault vector B in hypothesis H_1. Thus we do not need to re-estimate them when testing the hypotheses; they can be estimated prior to testing, using data from the safe system (exactly as the reference parameter θ_0). As shown in Section 4, the Jacobian matrix M can be estimated from a data sample using a sample average. The estimation of the covariance matrix Σ is trickier (Zhang et al., 1994). We recommend using an empirical estimate based on a simple version of the jackknife method (Politis, 1998; Basseville et al., 1993; Gach-Devauchelle, 1991). Let M̂, Σ̂ be consistent estimates of M, Σ. The detection problem, namely deciding whether the residual ζ_n is significantly different from zero, can be solved as follows.⁵

⁵ Theorem 2.1 is also a basis for performing isolation (Benveniste et al., 1987; Basseville, 1998; Basseville, 1997).

Theorem 2.2 (χ²-test). Assume additionally that the Jacobian matrix M(θ_0) is full column rank (f.c.r.). Then the test between the hypotheses H_0 and H_1 defined in Eq. (16) is achieved through

χ_n² := ζ_n^T Σ̂^{-1} M̂ (M̂^T Σ̂^{-1} M̂)^{-1} M̂^T Σ̂^{-1} ζ_n,    (19)

which should be compared to a threshold. In Eq. (19), the dependence on θ_0 has been dropped for simplicity. The test statistic χ_n² is asymptotically distributed as a χ² variable with rank(M) degrees of freedom, and with non-centrality parameter B^T M^T Σ^{-1} M B under H_1.

2.4. IV-, BR- and CVA-based FDI algorithms

We now illustrate the above design of subspace-based FDI algorithms. First, we recall an FDI algorithm previously proposed by two of the authors (Basseville et al., 1987), which is based on the IV approach to eigenstructure identification. Then, we give two other examples of the family of subspace-based FDI algorithms introduced above, namely the BR- and CVA-based algorithms.
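Before detailing these particular choices, the residual (14) and the test statistic (19) can be sketched in a few lines of numpy. The system matrices are an assumed toy example, and M̂ and Σ̂ below are placeholder estimates (in practice they are obtained as described in Section 4), so the sketch only illustrates the mechanics of the test.

```python
import numpy as np

def residual(S, Hank_hat, n):
    """Residual of Eq. (14) with W1 = W2 = I: zeta_n = sqrt(n) vec(S^T H_hat)."""
    return np.sqrt(n) * (S.T @ Hank_hat).reshape(-1, order="F")  # column stacking

def chi2_stat(zeta, M, Sigma):
    """Test statistic of Eq. (19), to be compared with a chi^2 threshold."""
    Si = np.linalg.inv(Sigma)
    return float(zeta @ Si @ M @ np.linalg.inv(M.T @ Si @ M) @ M.T @ Si @ zeta)

# Toy modal system with exact covariances (assumed example).
lam = np.array([0.9, 0.5])
F = np.diag(lam)
H = np.array([[1.0, 2.0], [3.0, 4.0]])
G = np.array([[0.5, 1.0], [1.0, 0.5]])
R = [H @ np.linalg.matrix_power(F, j) @ G for j in range(6)]
Hank = np.vstack([np.hstack([R[i + j] for j in range(3)]) for i in range(3)])
Op1 = np.vstack([H @ np.diag(lam ** k) for k in range(3)])
S = np.linalg.svd(Op1)[0][:, 2:]               # left-kernel basis, Eqs. (11)-(12)

n = 1000
rng = np.random.default_rng(1)
M_hat = rng.standard_normal((24, 6))           # placeholder f.c.r. Jacobian
Sigma_hat = np.eye(24)                         # placeholder covariance

z_safe = residual(S, Hank, n)                                        # ~ 0
z_fault = residual(S, Hank + 0.05 * rng.standard_normal((6, 6)), n)  # shifted mean
print(chi2_stat(z_safe, M_hat, Sigma_hat) < 1e-8,
      chi2_stat(z_fault, M_hat, Sigma_hat) > chi2_stat(z_safe, M_hat, Sigma_hat))
```

At the nominal parameter the residual is numerically zero, so the χ² value is negligible; a perturbed Hankel matrix shifts the residual mean and inflates the statistic.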
2.4.1. IV-based FDI algorithm

The statistic ζ_n based on the input-output representation (4), introduced in (Basseville et al., 1987), falls in the category of the subspace-based FDI algorithms defined above. Actually, it is well known that the identification of the AR parameters of a multi-dimensional ARMA model (4) can be achieved by solving, in the least-squares sense if q > p, the following system of delayed Yule-Walker equations:

(A^T  -I_r) H_{p+1,q} = 0,    (20)

where A^T := (A_p ⋯ A_1). From the estimated AR matrices, a modal parameter θ (3) is deduced by eigenanalysis of Eq. (5). We stress that the matrix A is an implicit function of θ, defined by

(A^T(θ)  -I_r) O_{p+1}(θ) = 0,    (21)

which can easily be shown to take the form (12) with a matrix S satisfying Eq. (11). To this end, we consider the QR-decomposition:

[ A(θ)
  -I_r ] = (S  S̃) [ B
                    0 ] = S B,    (22)

where the matrices (S S̃) and B are orthonormal and upper triangular, respectively. Of course, the matrices S, S̃ and B depend on θ. From the first equality, we deduce that s = r. From the second equality, we deduce that the matrices S and B are orthonormal (namely S^T S = I_r) and invertible, respectively, and that (21) writes B^T S^T O_{p+1}(θ) = 0. Turning back to Eqs. (20) and (14), we get the IV-based test statistic ζ_n(θ_0) = √n vec(B^T(θ_0) S^T(θ_0) Ĥ_{p+1,q}), with B and S defined in Eq. (22). The dimension of ζ_n is now qrs = qr². Note that, thanks to the invariance property stated below in Section 3, the particular choice of the invertible matrix B plays no role in the resulting χ²-test.

2.4.2. BR- and CVA-based FDI algorithms

We now turn to the state-space model (1) and to subspace-based test statistics ζ_n of the form (14)-(12)-(11), and we discuss two particular choices of the weighting matrices W_1 and W_2. The first choice corresponds to the BR identification method, for which the weights W_1 and W_2 are identity matrices. The second choice corresponds to the CVA identification method (Akaike, 1973; Arun & Kung, 1986; Van Overschee & De Moor, 1996; Lindquist & Picci, 1996), for which the weights are given by W_1 = T_{p+1,+}^{-1/2} and W_2 = T_{q,-}^{-1/2}, where T_{p+1,+} = cov(Y⁺_{k,p+1}), T_{q,-} = cov(Y⁻_{k,q}), and where

Y⁺_{k,p+1} := [ Y_k ; ⋯ ; Y_{k+p} ],    Y⁻_{k,q} := [ Y_{k-ι-1} ; ⋯ ; Y_{k-ι-q} ]    (23)

are the vectors containing the future and past data, respectively. Note that, thanks to the stationarity assumption, the Hankel matrix writes H_{p+1,q} = E(Y⁺_{k,p+1} Y⁻ᵀ_{k,q}) for all k.

The main difference with the IV-based FDI algorithm is in the dimension of the orthonormal matrix S, and thus in the number of degrees of freedom of the χ²-test. This is further discussed in Section 4. Additional differences might result from numerical issues on the one hand, and from the effect of model reduction on the other. The latter issue is addressed in Section 5.
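For the IV variant of Section 2.4.1, the matrix S of Eq. (22) can be obtained by a plain QR factorization. A toy sketch (assumed matrices) with p = 1 and r = m = 2, where H is invertible so that Eq. (5) gives A_1 = H F H^{-1}:

```python
import numpy as np

# Toy case p = 1, r = m = 2 (assumed example): Eq. (5) reduces to H F = A_1 H.
F = np.diag([0.9, 0.5])
H = np.array([[1.0, 2.0], [3.0, 4.0]])
A1 = H @ F @ np.linalg.inv(H)

# QR factorization of the stacked matrix (A(theta); -I_r), as in Eq. (22).
S, B = np.linalg.qr(np.vstack([A1.T, -np.eye(2)]))
# S has orthonormal columns (S^T S = I_r) and B is upper triangular.

O2 = np.vstack([H, H @ F])                     # O_{p+1}(theta)
# Yule-Walker relation (21): (A^T  -I_r) O_{p+1}(theta) = 0, hence S^T O = 0.
print(np.allclose(S.T @ S, np.eye(2)),
      np.allclose(B, np.triu(B)),
      np.abs(S.T @ O2).max() < 1e-10)
```

Since B is invertible, S spans the same left kernel as the row block (A^T -I_r), which is exactly the form (11)-(12) with s = r.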
2.5. Subspace-based tests: robustness w.r.t. system zeroes

As discussed in (Basseville, 1998), Theorem 2.1 is a particular case of a more general result on local tests (Benveniste et al., 1987; Basseville & Nikiforov, 1993). Special theorems for the test related to the IV method are found in (Moustakides & Benveniste, 1986), which show that this method for eigenstructure monitoring is robust against possible time-variations of the covariance matrix of the excitation noise e_k in model (1), or, equivalently, against possible time-variations of the MA coefficients B_j in model (4). This is an important feature for vibration monitoring applications (Basseville et al., 1993). Moreover, the same robustness for the IV and BR identification methods was proved in (Benveniste & Fuchs, 1985). This, together with experiments in vibration monitoring (Abdelghani et al., 1999; Basseville et al., 1986), supports the evidence that subspace-based methods for eigenstructure monitoring are robust w.r.t. the non-stationarity of system zeroes.
3. Invariance of subspace-based FDI algorithms: known order

We now discuss the role of the design matrices in the three steps of the construction of the residual ζ_n(θ_0) in Eq. (14), assuming known system order. The weighting matrix W_1 is introduced in step 1. In step 2, we perform the SVD of W_1 O_{p+1}(θ_0) and collect the left singular
vectors associated with the singular value 0. This specifies a particular choice for S^T(θ_0); other matrices S satisfying (11)-(12) are then obtained by post-multiplying S(θ_0) by an orthonormal matrix U. The weighting matrix W_2 is introduced in step 3. We make these three design choices explicit in the following notation:

ζ_{n;U,W_1,W_2}(θ_0) := √n vec(U^T S^T(θ_0) W_1 Ĥ_{p+1,q} W_2^T).    (24)

A straightforward calculation shows that

ζ_{n;U,W_1,W_2} = (W_2 ⊗ V_1) ζ_{n;I,I,I},    (25)

where V_1 is any invertible matrix⁶ such that U^T S^T(θ_0) W_1 = V_1 S^T(θ_0), and ⊗ is the Kronecker product. Let χ²_{n;U,W_1,W_2} be the χ²-test (19) associated with ζ_{n;U,W_1,W_2}. From Eq. (25), we get that χ²_{n;U,W_1,W_2} = χ²_{n;I,I,I}, since the invertible matrix (W_2 ⊗ V_1) factors out in Eq. (19). Thus, for eigenstructure monitoring, all the proposed subspace-based methods are equivalent when using the true system order.
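This invariance can be verified numerically. The sketch below (toy system and placeholder M̂, Σ̂, all assumed) builds the residual for trivial and for arbitrary design choices, checks the transformation (25) with V_1 as in footnote 6, and confirms that the two χ² statistics coincide once M̂ and Σ̂ are transformed consistently.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy modal system (assumed) and a noisy "empirical" Hankel matrix.
lam = np.array([0.9, 0.5])
F = np.diag(lam)
H = np.array([[1.0, 2.0], [3.0, 4.0]])
G = np.array([[0.5, 1.0], [1.0, 0.5]])
R = [H @ np.linalg.matrix_power(F, j) @ G for j in range(6)]
Hank_hat = np.vstack([np.hstack([R[i + j] for j in range(3)])
                      for i in range(3)]) + 0.01 * rng.standard_normal((6, 6))
Op1 = np.vstack([H @ np.diag(lam ** k) for k in range(3)])

def kernel_basis(M, m=2):
    return np.linalg.svd(M)[0][:, m:]          # orthonormal left-kernel basis

def chi2_stat(zeta, M, Sigma):
    Si = np.linalg.inv(Sigma)
    return float(zeta @ Si @ M @ np.linalg.inv(M.T @ Si @ M) @ M.T @ Si @ zeta)

# Residual with trivial design choices U = W1 = W2 = I ...
S0 = kernel_basis(Op1)
z0 = (S0.T @ Hank_hat).reshape(-1, order="F")
M0 = rng.standard_normal((z0.size, 6))         # placeholder f.c.r. Jacobian
Sig0 = np.eye(z0.size)

# ... and with arbitrary invertible W1, W2 and orthonormal U.
W1 = rng.standard_normal((6, 6)) + 3 * np.eye(6)
W2 = rng.standard_normal((6, 6)) + 3 * np.eye(6)
U = np.linalg.qr(rng.standard_normal((4, 4)))[0]
S1 = kernel_basis(W1 @ Op1)
z1 = (U.T @ S1.T @ W1 @ Hank_hat @ W2.T).reshape(-1, order="F")

# Eq. (25) with V1 = U^T S^T(theta_0) W1 S(theta_0) (footnote 6); the chi^2
# value is unchanged when M and Sigma are transformed by T = W2 kron V1.
V1 = U.T @ S1.T @ W1 @ S0
T = np.kron(W2, V1)
print(np.allclose(z1, T @ z0),
      np.isclose(chi2_stat(z0, M0, Sig0), chi2_stat(z1, T @ M0, T @ Sig0 @ T.T)))
```

The invertible factor (W_2 ⊗ V_1) cancels in the quadratic form (19), which is the content of the equivalence statement above.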
4. Estimation and rank of the Jacobian matrix

In this section, we investigate the estimation and the rank of the Jacobian matrix M(θ) of a residual of the form (24). Two methods for estimating this Jacobian can be considered, corresponding to Eqs. (17) and (18), respectively.

4.1. First estimation method

From Eq. (17), we get

M(θ_0) = -(W_2 H^T_{p+1,q} W_1^T ⊗ U^T) ∂𝒮(θ)/∂θ|_{θ=θ_0},    (26)

where 𝒮 := vec(S^T), and ∂𝒮(θ)/∂θ|_{θ=θ_0} is obtained by differentiating (11)-(12) with respect to θ_0. Writing O'_{p+1}(θ) := ∂ vec O_{p+1}(θ)/∂θ, we easily get that ∂𝒮(θ)/∂θ is a solution of the system:

(O^T_{p+1}(θ) W_1^T ⊗ I_s) ∂𝒮(θ)/∂θ + (I_m ⊗ S^T(θ) W_1) O'_{p+1}(θ) = 0,
(I_s ⊗ S^T(θ)) ∂𝒮(θ)/∂θ = 0.

Even though a solution ∂𝒮(θ)/∂θ is non-unique, the right-hand side of Eq. (26) is unique, since

Ker(O^T_{p+1}(θ) W_1^T ⊗ I_s) ⊂ Ker(W_2 H^T_{p+1,q} W_1^T ⊗ U^T).

A consistent estimate M̂ of M(θ_0), based on a data sample and a solution ∂𝒮(θ)/∂θ|_{θ=θ_0}, is obtained by substituting Ĥ_{p+1,q} for H_{p+1,q} in Eq. (26). The uniqueness of M̂ is then guaranteed only asymptotically.
⁶ For example, V_1 = U^T S^T(θ_0) W_1 S(θ_0).
4.2. Second estimation method

From Eq. (18), we get

M(θ_0) = ∂/∂θ vec(U^T S^T(θ_0) W_1 H_{p+1,q} W_2^T)|_{θ=θ_0},

which also writes

M(θ_0) = ∂/∂θ vec(U^T S^T(θ_0) W_1 O_{p+1}(θ) F^{ι+1} C_q W_2^T)|_{θ=θ_0},    (27)

thanks to factorization (9) of the Hankel matrix. Hence:

M(θ) = (W_2 C_q^T (F^{ι+1})^T ⊗ U^T S^T(θ) W_1) O'_{p+1}(θ)
     = (W_2 ⊗ U^T S^T(θ) W_1) (C_q^T (F^{ι+1})^T ⊗ I_{(p+1)r}) O'_{p+1}(θ)
     = (W_2 ⊗ U^T S^T(θ) W_1) (H^T_{p+1,q} (O†_{p+1}(θ))^T ⊗ I_{(p+1)r}) O'_{p+1}(θ).    (28)

The last equality, where O†_{p+1}(θ) is the pseudo-inverse of O_{p+1}(θ), is obtained using factorization (9) again. A consistent estimate M̂, based on a data sample, is obtained by substituting Ĥ_{p+1,q} for H_{p+1,q} in Eq. (28).

Some comments are in order on formula (28), which also apply to M̂. First, it is easily checked that M(θ) does not depend on the particular choice of the eigenvectors φ_λ contained in the parameter vector θ defined in Eq. (3). Multiplying the φ_λ's by constant complex numbers amounts to post-multiplying the observability matrix O_{p+1}(θ) in Eq. (7) by an invertible diagonal matrix D, to post-multiplying the matrix (O†_{p+1}(θ))^T by D^{-1}, and to pre-multiplying the matrix O'_{p+1}(θ) by (D ⊗ I_{(p+1)r}); all the terms in D cancel out in Eq. (28). Second, the matrix O'_{p+1}(θ) has full rank (r+1)m, as can be checked directly from the following explicit formula:

O'_{p+1}(θ) = ( diag(Λ'_1(p) ⊗ φ_1, …, Λ'_m(p) ⊗ φ_m)  |  diag(Λ_1(p) ⊗ I_r, …, Λ_m(p) ⊗ I_r) ),

where

Λ'_i(p)^T := (0  1  2λ_i  ⋯  p λ_i^{p-1}),    Λ_i(p)^T := (1  λ_i  λ_i²  ⋯  λ_i^p),

for 1 ≤ i ≤ m. Conditions regarding the rank of M(θ_0) are as follows.

4.3. Conditions ensuring full rank

Formula (27) also writes

M(θ_0) = (W_2 C_q^T (F^{ι+1})^T ⊗ U^T) (∂/∂θ vec(S^T(θ_0) W_1 O_{p+1}(θ))|_{θ=θ_0}) =: A B,
where the matrices A and B have dimensions qrs × sm and sm × (r+1)m, respectively. A necessary and sufficient condition for M to be f.c.r. is: B f.c.r. and Ker A ∩ Range B = {0}. Now, for the matrix B to be f.c.r., it is necessary that s ≥ r+1. Moreover, since the matrices W_2, C_q^T (F^{ι+1})^T and U are invertible, f.c.r. and orthonormal, respectively, the matrix A is f.c.r. Thus, a necessary and sufficient condition for the matrix M to be f.c.r. is: B f.c.r. A necessary condition is sm ≥ (r+1)m = size θ. Note that, in the case of the IV-based FDI algorithm described in Section 2.4, we have s = r, and thus the Jacobian matrix M is not f.c.r.
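The explicit formula for O'_{p+1}(θ) given in Section 4.2 can be implemented directly. The following sketch (toy values assumed) builds the two block-diagonal parts and confirms the full column rank (r+1)m:

```python
import numpy as np

def obs_jacobian(lam, Phi, p):
    """Explicit formula of Section 4.2 for O'_{p+1}(theta) = d vec O_{p+1} / d theta,
    theta = (Lambda, vec Phi): block i of the Lambda part is Lambda'_i(p) kron phi_i,
    block i of the Phi part is Lambda_i(p) kron I_r."""
    m, r = len(lam), Phi.shape[0]
    L = np.array([[l ** k for l in lam] for k in range(p + 1)])    # Lambda_i(p)
    Ld = np.array([[k * l ** (k - 1) if k else 0.0 for l in lam]
                   for k in range(p + 1)])                         # Lambda'_i(p)
    J = np.zeros(((p + 1) * r * m, (r + 1) * m))
    for i in range(m):
        rows = slice(i * (p + 1) * r, (i + 1) * (p + 1) * r)
        J[rows, i] = np.kron(Ld[:, i], Phi[:, i])                  # d / d lambda_i
        J[rows, m + i * r: m + (i + 1) * r] = np.kron(L[:, i][:, None], np.eye(r))
    return J

lam = np.array([0.9, 0.5])
Phi = np.array([[1.0, 2.0], [3.0, 4.0]])
J = obs_jacobian(lam, Phi, p=2)
print(J.shape, np.linalg.matrix_rank(J))   # full column rank (r+1)m = 6
```

Since Λ'_i(p) is never proportional to Λ_i(p), the eigenvalue column of each block is independent of the mode-shape columns, which is where the full column rank comes from.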
5. Subspace-based FDI algorithms: model reduction
Most practical situations correspond to the case where the actual data are generated by a system of higher order than that of the nominal model θ_0. Thus, the nominal model has reduced order. Then, a new question arises: what does it mean, for a nominal model θ_0, to match a given data sample when model reduction is enforced? Of course, the system theoretic characterization (10), or (13), is no longer valid, and the same is true for the definition of the residual in Eq. (14). Other definitions are needed, which we introduce now.

5.1. Revisiting the nominal model characterization

Since rank H_{p+1,q} > rank O_{p+1}(θ_0) := m, condition (10) for perfect matching cannot be satisfied. What can be required, instead, is that the left kernel space of O_{p+1}(θ_0) is orthogonal to the mth order principal subspace⁷ of H_{p+1,q}. Steps 1 and 2 of Section 2.1 are unchanged, but step 3 is modified accordingly, and reformulated as follows:

(3)
Factorize W_1 H_{p+1,q} W_2^T as W_1 H_{p+1,q} W_2^T = (P  P) D