49th IEEE Conference on Decision and Control, December 15-17, 2010, Hilton Atlanta Hotel, Atlanta, GA, USA

Adaptive extended Kalman filter for recursive identification under missing data

I. Peñarrocha and R. Sanchis
Departament d'Enginyeria de Sistemes Industrials i Disseny
Universitat Jaume I, Castelló, Spain
{ipenarro,rsanchis}@esid.uji.es

Abstract— In this work, the parameter identification of systems with scarce measurements is addressed. A linear plant is assumed, whose output is available only at sporadic instants of time and is affected by measurement noise. The identification is carried out by estimating the missing outputs in order to construct the regression vector needed by the parameter estimation algorithm, and by using the available output information not only to update the estimated parameter vector, but also to update the regression vector, in order to speed up the convergence of the algorithm. The problem is addressed with an adaptive extended Kalman filter that estimates and corrects both the parameters and the regression vector, improving the convergence speed with respect to other existing algorithms in the literature, as shown with several examples.

Index Terms— Parameter estimation; Output estimation; Least squares; Randomly missing outputs; Kalman filter; Networked Control Systems; Pseudo-Linear Recursive Identification; Algorithm initialization.

I. INTRODUCTION

In many industrial applications the control signal is updated at a fixed rate, but the output is not available at every sampling time due to communication errors, shared or slow sensors, or the use of destructive measuring methods. Different authors have dealt with the identification of such systems when the measurement pattern is regular (periodic). This allows the implementation of a standard RLS algorithm in which the regression vector is constructed with only measured variables, using a multirate model of the process. If the pattern of data availability is not regular, the multirate approach cannot be used. In that case, the regression vector cannot be filled with only measured values, and it becomes necessary to include estimates of the unmeasured outputs in the regression vector. This results in a pseudo-linear recursive (PLR) algorithm, which has a convergence problem due to the existence of wrong attractors.

The study of pseudo-linear identification algorithms for estimating the parameters of the discrete transfer function of the process from scarcely sampled output measurements has been addressed by the authors in previous works [1], assuming that the availability of data may be irregular. A basic PLR algorithm was introduced, and was later generalized in [2] by the introduction of different possible predictors to obtain the estimates of the unmeasured outputs. The convergence of that algorithm is analyzed in [3], where the existence of wrong attractors in which the


algorithm can be caught, depending on the initialization of the estimated parameter vector, is demonstrated. In [4] the initialization of such algorithms is addressed and different strategies are established, both for the initialization and for rapid parameter adaptation to new control periods. Those strategies showed good results in avoiding the wrong attractors for small sampling periods, but failed for large sampling periods due to the use of interpolators that do not exploit the model to estimate the missing outputs. In [5], [6] the problem of parameter identification and output estimation with both periodically and irregularly missing output data is addressed using output error models, replacing the unmeasurable outputs with the output of an auxiliary model. The convergence properties of the parameter estimates and output predictions are established under weak persistent excitation conditions and unbounded noise variance. However, that algorithm presents a slow convergence rate, and the presence of wrong attractors and how to avoid them is not investigated. In [7] a Kalman filter based method is proposed for parameter estimation, filling the regression vector with the output estimates of the identified model when the measurements are not available, and with the measurements when they are. The convergence properties are analyzed, showing a low performance for low signal to noise ratios; furthermore, the examples are developed with relatively few missing data. In [8] an extended Kalman filter as a parameter estimator for linear systems was developed and analyzed in depth, assuming standard sampling. The algorithm is demonstrated to have wrong attractors for output error models, and a modification based on an innovations model (which includes the residuals and the Kalman filter gain for the state update among the parameters to be identified) is discussed to avoid them, leading to an unbiased estimation of the parameters but at the cost of a slower algorithm in the initialization phase. The modification is related to the difficulty of establishing the covariance matrices needed by the Kalman filter, and is based on the online estimation of the steady-state optimal Kalman filter gain for the state estimation update. However, this approach cannot be used with irregular sampling, as the Kalman gain does not reach a steady-state value: it must be calculated depending on the intersampling period, taking into account the open loop evolution of the disturbances (and, therefore, of the state estimation error) between the irregularly taken samples.


The Kalman filter is the optimal state estimator when the covariance matrices of both the state disturbances and the measurement noise are known. The problem resides in how to obtain those matrices when no information about the model is known, i.e., in the identification phase. Several authors [9], [10] have dealt with the adaptive Kalman filter in order to obtain online the estimates of the covariance matrices needed by the Kalman filter, leading to the gain that minimizes the estimation error even if the real covariance matrices are not exactly identified. However, those approaches found in the literature cannot be applied to the irregular sampling case.

In this work, an adaptive extended Kalman filter for both parameter and output estimation in systems with scarce measurements is developed, overcoming the convergence problems of previous works related to the presence of wrong attractors or to slow convergence for low signal to noise ratios. The parameter estimation algorithm uses an extended Kalman filter that includes both the state and the parameters in an extended system state. The algorithm also calculates the disturbance covariance matrix that must be used to properly estimate the parameter estimation error and, therefore, to obtain a better initialization of the algorithm and adaptation to changes in the parameter vector.

The layout of the paper is as follows. The problem statement, defining the different algorithms found in the literature for parameter identification with missing data, together with an analysis of their properties and known problems, is given in Section II. The adaptive extended Kalman filter for parameter estimation under missing data is presented in Section III. Illustrative examples comparing the results of the new algorithm with those found in the literature are worked out in Section IV. Conclusions are summarized in the last section.

II. PROBLEM STATEMENT

Consider a SISO continuous-time linear system of order n whose input is updated at period T by a computer with a zero-order hold, the output being measured synchronously with the input update, resulting in the following discrete difference equation at period T that defines the dynamics of the system:

  y_t = φ_t^T θ                                                    (1)

where θ = [a_1 ... a_n b_1 ... b_n]^T is the parameter vector, and φ_t = [y_{t−1} ... y_{t−n} u_{t−1} ... u_{t−n}]^T (with y_t = y(tT), u_t = u(tT)) is the regression vector. Let us assume that the output measurements are obtained only at scarce instants of time and are affected by a measurement noise, leading to

  m_t = y_t + v_t

where m_t is the measurement signal and v_t is the measurement noise, assumed to be a zero mean variable with known variance σ_v². Let us define the availability factor α_t as the signal that indicates at each sampling period whether the noisy output measurement m_t is available, taking the value α_t = 1 when it is available and α_t = 0 when it is not.
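For concreteness, this setup can be simulated in a few lines. The following is a minimal sketch (our own illustration, not from the paper; all variable names are ours) that builds the difference equation (1) for n = 2 with the parameter values of Example 1, the noisy measurements m_t, and an availability pattern α_t in which only one out of every three samples is measured:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.6, -0.8, 0.4, 0.3])   # true [a1 a2 b1 b2] (Example 1 values)
T_sim, sigma_v = 5000, 1.0

u = rng.standard_normal(T_sim)            # persistently exciting input, sigma_u^2 = 1
y = np.zeros(T_sim)
for t in range(2, T_sim):
    phi = np.array([y[t-1], y[t-2], u[t-1], u[t-2]])  # regression vector of (1)
    y[t] = phi @ theta                                 # noise-free output y_t
m = y + sigma_v * rng.standard_normal(T_sim)           # noisy measurements m_t
alpha = (np.arange(T_sim) % 3 == 0).astype(int)        # availability factor alpha_t
```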

The general structure of a Kalman filter based identification algorithm for this model is defined at each step t as [11]

  ŷ_t = φ̂_t^T θ̂_{t−1}                                            (2a)
  e_t = m_t − ŷ_t                                                 (2b)
  P_t^- = P_{t−1} / λ                                             (2c)
  L_t = P_t^- φ̂_t α_t / (φ̂_t^T P_t^- φ̂_t + σ_v²)                  (2d)
  P_t = (I − L_t φ̂_t^T) P_t^-                                     (2e)
  θ̂_t = θ̂_{t−1} + L_t e_t α_t                                     (2f)
  φ̂_{t+1} = f(m_t, α_t, θ̂_t, φ̂_t)                                (2g)

where ŷ_t is the a priori estimate of the process output, φ̂_t is the estimated regression vector (to be defined depending on the algorithm used) containing information about the past of the process, θ̂_t is the estimate of the process parameters with the information available up to instant t, e_t is the a priori prediction error, m_t is the output measurement, L_t is the updating gain, and P_t is an estimate of the covariance of the parameter estimation error (or at least converges to that value). λ ∈ (0, 1] is the forgetting factor, which should be chosen to achieve proper convergence properties. In fact, in order to achieve the optimal parameter identification (due to the Kalman filter based structure), it should be designed such that matrix P_t represents the covariance of the parameter estimation error at each sampling instant, i.e., P_t = E{θ̃_t θ̃_t^T}, with θ̃_t = θ − θ̂_t the parameter estimation error. Most of the convergence problems of the algorithms found in the literature are related to a wrong estimation of E{θ̃_t θ̃_t^T}. Note that if matrix P_t is properly calculated, the Kalman filter based identification algorithm leads to a low gain L_t when the parameter estimation error is low, which means that the current parameter estimates are kept almost unchanged. On the other hand, when the parameter estimation error is high (at the initialization of the algorithm or when a change in the real parameters occurs), the algorithm leads to a high gain L_t that corrects the model parameters properly, i.e., giving a higher weight to the most recent measurements. When no forgetting factor is used, or when it is higher than needed for the optimal estimation, the value of P_t decreases rapidly (becoming much smaller than E{θ̃_t θ̃_t^T}), leading to low gains L_t and, therefore, to small variations in θ̂_t, which implies a low convergence rate. Conversely, if the forgetting factor λ is very low, matrix P_t can grow exponentially, leading to an unstable algorithm.

The regression vector is defined in general as φ̂_t = [x_{t−1} ··· x_{t−n} u_{t−1} ··· u_{t−n}], where the elements x_t that fill it depend on the current parameter estimates and the available measurements. When dealing with conventional periodic sampling, the output related regression vector elements are equal to the


measured output,

  x_t = m_t                                                       (3)

leading to the conventional RLS algorithm. This algorithm presents an important bias in the parameter estimates when dealing with low signal to noise ratios. For that reason, the model reference approach, consisting of using the output estimates to construct the regression vector (x_t = ŷ_t), is also found in the literature. The bias problem is then solved at the expense of a lower convergence rate.

When dealing with a sampling scenario with scarce measurements, the regression vector must be filled with output estimates when the measurements are not available. Different approaches have been developed in this sense. In [5], [6], the elements x_t are updated every time the parameter vector θ̂_t is updated at time t, running the model in open loop recursively from t′ = t − (n − 1) to t′ = t:

  φ̂_{t′} = [x_{t′−1} ··· x_{t′−n}]                                 (4a)
  x_{t′} = φ̂_{t′}^T θ̂_t                                            (4b)

thus updating the whole regression vector making use of the updated parameters θ̂_t. In that work a convergence analysis is carried out in which the convergence rate is bounded depending on the sampling pattern, and convergence for low signal to noise ratios is also assured. However, the examples show a very low convergence rate that can only be increased by means of a forgetting factor, which is not taken into account in the convergence analysis and whose value is defined heuristically, without any relation to the parameter estimation error at each step.

In [2], [7] the elements of the regression vector are calculated only once at every sampling time, and they are equal to the measurements when these are available, or to the outputs estimated through the available model when they are not:

  x_t = α_t m_t + (1 − α_t) φ̂_t^T θ̂_{t−1}.                          (5)

The algorithm presents poor properties when dealing with low signal to noise ratios, and convergence is not assured.

In [3] the elements of the regression vector are estimated initially in open loop, but corrected any time a new measurement is available (when the model parameters are also updated), improving the regression vector for future estimations. In this case, the update is done through the equations

  x_t = φ̂_t^T θ̂_t + l_1 (m_t − φ̂_t^T θ̂_t) α_t                       (6a)
  x_{t−i} = x_{t−i} + l_{i+1} (m_t − φ̂_t^T θ̂_t) α_t,  0 < i < n     (6b)

where the gains l_i (i = 1, ..., n) must be defined to obtain a desired predictor dynamics (note that (5) is a special case of this one with l_1 = 1 and l_{j>1} = 0). In that work the presence of wrong attractors, depending on the initial values θ̂_0, is demonstrated. However, no strategy to calculate the l_i online is established, neither related to the accuracy of the model estimate nor to the convergence rate.
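To make the preceding algorithms concrete, the following is a minimal sketch (our own NumPy formulation; all names are ours) of one step of the generic recursion (2) together with the regression-vector fill strategy (5). The strategies (3), (4) and (6) differ only in how the fill function is defined:

```python
import numpy as np

def identification_step(theta_hat, P, phi_hat, m_t, alpha_t, sigma_v2, lam=1.0):
    """One step of the Kalman filter based recursion (2a)-(2f)."""
    y_hat = phi_hat @ theta_hat                       # (2a) a priori output estimate
    e = m_t - y_hat                                   # (2b) prediction error
    P_minus = P / lam                                 # (2c) forgetting factor
    L = P_minus @ phi_hat * alpha_t / (phi_hat @ P_minus @ phi_hat + sigma_v2)  # (2d)
    P = P_minus - np.outer(L, phi_hat) @ P_minus      # (2e)
    theta_hat = theta_hat + L * e * alpha_t           # (2f) update only if measured
    return theta_hat, P, y_hat

def fill_element(alpha_t, m_t, phi_hat, theta_hat):
    """Regression element x_t per strategy (5): measurement when available,
    model-based estimate otherwise."""
    return alpha_t * m_t + (1 - alpha_t) * (phi_hat @ theta_hat)
```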

III. PROPOSED APPROACH

In this work, inspired by the model reference approach with regression vector update, the problem of estimating the correct value of the error covariance matrix P_t, and of calculating the optimal regression vector update taking that matrix into account, is addressed through an adaptive extended Kalman filter. The idea is to update the estimates of the parameters (θ), of the output related regression vector elements (x_t), and of the covariance matrix of the parameter estimation errors each time a new measurement is available.

Let us first adapt the extended Kalman filter for parameter estimation [8] to the scarce measurements scenario. Assume a given parameterized internal realization of a disturbance-free model given by

  x_t = A(θ_t) x_{t−1} + B(θ_t) u_{t−1}                            (7)
  y_t = C(θ_t) x_t                                                 (8)
  m_t = y_t + v_t                                                  (9)

where x_t is the state, u_t the input, y_t the output, m_t the measurement variable, and v_t the measurement noise (assumed to be a zero mean random signal with variance σ_v²). Assume that the output measurement is only available at certain instants, identified by the binary variable α_t defined before (α_t = 1 if the measurement is available, α_t = 0 if not). Then the estimation of both the state and the parameter vector is done through the recursive equations

  ẑ_t^- = f(ẑ_{t−1}, u_t)                                         (10a)
  ŷ_t^- = h(ẑ_t^-)                                                (10b)
  e_t^- = m_t − ŷ_t^-                                             (10c)
  P_t^- = F_{t−1} P_{t−1} F_{t−1}^T + W_t                         (10d)
  L_t = (P_t^- H_t^T)(H_t P_t^- H_t^T + σ_v²)^{−1} α_t            (10e)
  ẑ_t = ẑ_t^- + L_t e_t^-                                         (10f)
  P_t = (I − L_t H_t) P_t^-                                       (10g)

where z_t is the extended state vector including the process parameters, z_t = [x_t^T θ_t^T]^T, e_t^- is the a priori output estimation error, and the functions f and h are defined as

  f(z_t, u_t) = [ A(θ_t) x_t + B(θ_t) u_t ; θ_t ]                  (11)
  h(z_t) = C(θ_t) x_t.                                             (12)

Matrices F_t and H_t are the Jacobians of these functions evaluated at the extended state estimate:

  F_t = ∂f(z_t, u_t)/∂z_t |_{z_t = ẑ_t} = [ A(θ̂_t)  M_t ; 0  I ]  (13)
  H_t = ∂h(z_t)/∂z_t |_{z_t = ẑ_t} = [ C(θ̂_t)  D_t ]              (14)

where

  M_t = ∂(A(θ_t) x̂_t + B(θ_t) u_t)/∂θ_t |_{θ_t = θ̂_t}             (15)
  D_t = ∂(C(θ_t) x̂_t)/∂θ_t |_{θ_t = θ̂_t}                          (16)
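A minimal sketch of one recursion of (10) follows (our own formulation; f, h and the Jacobians (13)-(16) are passed in as callables, since the paper gives their closed forms only later, for the canonical realization used in (20)):

```python
import numpy as np

def ekf_step(z_hat, P, u_t, m_t, alpha_t, f, h, jac_F, jac_H, W, sigma_v2):
    """One recursion of the extended Kalman filter (10a)-(10g) under
    missing data: alpha_t = 0 zeroes the gain, so only the prediction runs.
    For a SISO output, jac_H returns H as a 1-D array (a row vector)."""
    z_minus = f(z_hat, u_t)                        # (10a) extended state prediction
    y_minus = h(z_minus)                           # (10b) output prediction
    e_minus = m_t - y_minus                        # (10c) a priori error
    F = jac_F(z_hat, u_t)                          # (13) evaluated at the estimate
    H = jac_H(z_minus)                             # (14)
    P_minus = F @ P @ F.T + W                      # (10d)
    L = (P_minus @ H) / (H @ P_minus @ H + sigma_v2) * alpha_t   # (10e)
    z_hat = z_minus + L * e_minus                  # (10f) correction
    P = (np.eye(len(z_hat)) - np.outer(L, H)) @ P_minus          # (10g)
    return z_hat, P, e_minus
```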

Matrix W_t is the covariance matrix that includes the covariance of both the state disturbances and the parameter variations. Knowing the value of that matrix (with standard sampling) when the system is in the identification phase is very difficult, as highlighted in [8]. For that reason, different techniques were proposed (for example, assuming a null matrix W_t, or using an innovations model), but they were only valid for constant systems (i.e., θ_t = θ_0). Those approaches were shown to have convergence problems depending on the initialization of the covariance matrix P_t and of the parameters θ̂_0. Furthermore, letting W_t = 0 when dealing with scarce measurements (that is, updating P_t and θ̂_t only when measurements are available, as proposed above) leads to a large parameter bias and a high probability of the algorithm being caught by wrong attractors, as will be shown in the examples section. The second solution proposed in that work (the innovations model approach) includes in the model the steady-state Kalman filter gain for the state updates as parameters to be identified (that is, the steady-state values of the first n elements of L_t, those used to update x̂_t^-), and presents a better performance than the previous approach. However, that approach cannot be applied to the irregular sampling scenario because, in that case, the Kalman filter gain does not reach a steady-state value: it must keep varying indefinitely with time, depending on the number of periods between available samples, in order to achieve a stable (and optimal) predictor.

The solution presented in this work consists of estimating online the value of W_t, to help the proposed estimation algorithm (10) converge faster and avoid the wrong attractors when dealing with scarce measurements. If the sensor noise variance is known, an optimal estimation would be achieved if, at every sampling period, P_t and W_t fulfilled:

• P_t equals the extended state estimation error covariance, i.e.,

  P_t = E{(z_t − ẑ_t)(z_t − ẑ_t)^T} = E{ [x_t − x̂_t ; θ̃_t][x_t − x̂_t ; θ̃_t]^T }

• W_t equals the extended system disturbance covariance matrix, including the process disturbances and the process parameter variations, i.e.,

  W_t = E{ [w_{x,t} ; w_{θ,t}][w_{x,t} ; w_{θ,t}]^T }

where w_{x,t} and w_{θ,t} denote the state disturbance (if any) and the incremental parameter variation with time, respectively.

However, when applying the algorithm to a real system, the values of (z_t − ẑ_t), w_{x,t} and w_{θ,t} cannot be known. The only available measurement that can be used to indicate how well the algorithm is estimating the process parameters is the a priori error e_t^-, that is, the difference between the sensor samples and the model output estimates. If P_t and W_t fulfilled the above conditions, then the following condition would also be fulfilled:

  E{(e_t^-)²} = E{H_t P_t^- H_t^T} + σ_v²
              = E{H_t (F_{t−1} P_{t−1} F_{t−1}^T + W_t) H_t^T} + σ_v²    (17)

where P_t^- represents the covariance of the a priori state estimation error, and

  σ_{ê,t}² = E{H_t (F_{t−1} P_{t−1} F_{t−1}^T) H_t^T}

would estimate exactly the a priori output error variance due to the state and parameter estimation errors. Note that if the system is constantly excited, algorithm (10) with W_t = 0 will naturally make matrix P_t decrease indefinitely. In order to reach a value of P_t that approaches the correct parameter estimation error covariance, the following value for matrix W_t is proposed:

  W_t = w_t I = { 0,                                          if σ_{e,t}² < σ_v² + σ_{ê,t}²
                  ((σ_{e,t}² − σ_v² − σ_{ê,t}²)/(H_t H_t^T)) I,  otherwise }

where W_t = w_t I has been chosen as a diagonal matrix with all diagonal elements equal to w_t, σ_{e,t}² is an estimate of the total output estimation error variance, and σ_{ê,t}² is an estimate of the a priori output error variance due to the state and parameter estimation errors. With this definition of W_t, matrix P_t is increased by a value such that (17) is fulfilled. The idea is not to increase the actual value of P_t if the a priori error is smaller than the one expected with matrix P_t, allowing the algorithm to decrease P_t with future measurements (and, therefore, to acquire information about the process), and to increase it with W_t when the expected a priori error is lower than the real one, in such a way that the algorithm is ready again to acquire new information and forget the oldest.

Now the resulting proposed algorithm for the canonical observable realization of the process (1) is presented. The system dynamics is written as

  x_t = [ a_{1,t} 1 0 ··· 0 ;
          a_{2,t} 0 1 ··· 0 ;
          ⋮              ⋮  ;
          a_{n,t} 0 0 ··· 0 ] x_{t−1} + [ b_{1,t} ; ⋮ ; b_{n,t} ] u_{t−1}   (18)

      = [ θ_{a,t}  I_{n,n−1} ] x_{t−1} + θ_{b,t} u_{t−1},                   (19)

where a_{i,t} and b_{i,t} are the parameters to be estimated (assumed to vary slowly with time), and θ_{a,t}, θ_{b,t} are vectors containing them. The output measurement of the system is given by

  m_t = [1 0 ··· 0] x_t + v_t = c′ x_t + v_t

and is assumed to be available only at scarce, irregular sampling instants. Taking into account that for this realization H_t = c = [c′ 0 ··· 0] and c c^T = 1, the proposed


algorithm is defined by the equations

  F_{t−1} = [ A_{t−1}   x̂_{1,t−1} I   u_{t−1} I ;
              0         I             0         ;
              0         0             I         ]                  (20a)

  A_{t−1} = [ θ̂_{a,t−1}  I_{n,n−1} ],   B_{t−1} = θ̂_{b,t−1}        (20b)

  [ x̂_t^- ; θ̂_{a,t}^- ; θ̂_{b,t}^- ] = [ A_{t−1} x̂_{t−1} + B_{t−1} u_{t−1} ; θ̂_{a,t−1} ; θ̂_{b,t−1} ]   (20c)

  e_t = α_t (m_t − c x̂_t^-) + (1 − α_t) e_{t−1}                    (20d)

  ê_t² = c F_{t−1} P_{t−1} F_{t−1}^T c^T                           (20e)

  N(t) = { t,  t ≤ N̄ ;  N̄,  t > N̄ }                                (20f)

  σ_{e,t}² = σ_{e,t−1}² + (e_t² − σ_{e,t−1}²)/N(t)                  (20g)

  σ_{ê,t}² = σ_{ê,t−1}² + (ê_t² − σ_{ê,t−1}²)/N(t)                  (20h)

  W_t = { 0,                            if σ_{e,t}² < σ_v² + σ_{ê,t}²
          (σ_{e,t}² − σ_v² − σ_{ê,t}²) I,  otherwise }              (20i)

  P_t^- = F_{t−1} P_{t−1} F_{t−1}^T + W_t                           (20j)

  L_t = P_t^- c^T α_t / (c P_t^- c^T + σ_v²)                        (20k)

  P_t = (I − L_t c) P_t^-                                           (20l)

  [ x̂_t ; θ̂_{a,t} ; θ̂_{b,t} ] = [ x̂_t^- ; θ̂_{a,t}^- ; θ̂_{b,t}^- ] + L_t (m_t − c x̂_t^-)   (20m)

where definitions (13) and (14) have been applied, and a zero-order hold of the a priori error is used between available measurements (equation (20d)). The expected values of the error variances needed by the algorithm are filtered in such a way that, during the first N̄ samples, all values are equally weighted, to reach a good (and fast) estimate. The window size N̄ must be large enough to avoid quick random variations of the estimated parameters. If it is very large, the ability to adapt to changes in the parameters is delayed (until the P_t increasing condition is fulfilled), but not canceled.
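The variance bookkeeping (20f)-(20i) that drives the adaptation is simple to write down. The sketch below is our own illustration (variable names are ours; for this realization c c^T = 1, so the division by H_t H_t^T in the general rule disappears) and returns the W_t to plug into (20j):

```python
import numpy as np

def adapt_W(e_t, e_hat_sq, sig_e2, sig_ehat2, t, N_bar, sigma_v2, dim):
    """Adaptive disturbance covariance per (20f)-(20i).

    e_t       : a priori error from (20d)
    e_hat_sq  : predicted error variance c F P F^T c^T from (20e)
    sig_e2, sig_ehat2 : running variance estimates, carried by the caller
    """
    N = t if t <= N_bar else N_bar                 # (20f) equal weights at start-up
    sig_e2 += (e_t**2 - sig_e2) / N                # (20g) total error variance
    sig_ehat2 += (e_hat_sq - sig_ehat2) / N        # (20h) model-induced variance
    w = max(sig_e2 - sigma_v2 - sig_ehat2, 0.0)    # (20i) inflate P only when the
    return w * np.eye(dim), sig_e2, sig_ehat2      #       observed error is too large
```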

IV. EXAMPLES

Two examples with different systems and signal conditions are now presented, in which all the algorithms studied in this work are applied, that is:

A1  the standard RLS algorithm with no data missing, i.e., with the regression vector filled with the noisy measurements (defined by (2) and (3));
A2  the model reference approach developed in [6] (defined by (2) and (4));
A3  the PRLS approach developed in [3] (defined by (2) and (6));
A4  the EKF approach developed in [8], adapted to the missing data case and assuming W_t = 0 (defined by (10));
A5  the AEKF proposed in this paper (defined by (20)), with N̄ = 1000.

Algorithms A1, A2 and A3 are initialized with P_0 = 1·10³ I, while A4 and A5 use P_0 = 0.

A. Example 1

In this example, the faster convergence of the proposed algorithm is shown, even for low signal to noise ratios and scarce measurements. Consider the following time-varying system to be identified:

  y_t = (0.4 z^{−1} + 0.3 z^{−2}) / (1 − 1.6 z^{−1} + 0.8 z^{−2}) u_t,   0 ≤ t < 2500
  y_t = (0.5 z^{−1} + 0.2 z^{−2}) / (1 − 1.5 z^{−1} + 0.9 z^{−2}) u_t,   t ≥ 2500

whose output is measured through a noisy sensor with σ_v² = 1. Let us assume that the system is persistently excited with a sequence {u_t} of zero mean and unit variance (σ_u² = 1). The noise to signal ratio is then δ_ns = 40.38%. Assume that the outputs are available only every 3 control periods, that is, α_t = 1 only for t = 3i (with i ∈ ℕ). As a measure of the parametric estimation error, let us define the index δ_t = ||θ̂_t − θ_t|| / ||θ_t||. Applying all the algorithms discussed in this work to the same data series, we obtain the results summarized in Table I, which shows the value δ̄_t = Σ_{t=1}^{t_sim} δ_t / t_sim, where t_sim is the simulation time, i.e., the average of the index δ_t.

TABLE I
RESULTS COMPARISON FOR PARAMETER ESTIMATION ERROR (δ̄_t) (Example 1)

  Algorithm   λ      δ̄_t
  A1          1      1.206
  A2          1      3.858
  A2          0.99   1.748
  A3          1      4.032
  A3          0.99   1.924
  A4          -      3.734
  A5          -      0.573

The evolution of the parameter estimation error for all the algorithms can be observed in Figure 1a, while the estimated parameter values are shown in Figure 1b. It is clear that the proposed algorithm achieves a much faster convergence rate than the other algorithms, although the initial transient shows a high oscillation during a few samples. The low signal to noise ratio produces a poor performance of the standard RLS, even though it is applied with no missing data. Algorithm A2 shows a very slow convergence rate for λ = 1; it converges somewhat faster to the vicinity of the real parameters for λ = 0.99, but with a high oscillation. Algorithm A3 (with l_1 = 1 and l_2 = 0, i.e., a substitution predictor) works better than A2 for λ = 1, but has a lower convergence rate (and the same oscillations) than A2 for λ = 0.99. When the parameters change at instant 2500, the proposed algorithm takes about 70 samples to detect the change, and then matrix P is increased, allowing the algorithm to converge to the new parameters. As can be seen in the parameter estimation evolution (Figure 1b, with λ = 1 for the other algorithms), the algorithm corrects the value of matrix P at instant t = 2570 in order to acquire new data, and the parameters then start to converge to the new values. The traces of matrix P_t (dotted lines) and of θ̃_t θ̃_t^T (solid lines) are shown in Figure 1c for all the algorithms (with λ = 0.99). All the other algorithms decrease the trace of P_t quickly while there is still a high parametric error. The proposed algorithm, however, tries to fit the value of tr(θ̃_t θ̃_t^T) with tr(P_t), as can be appreciated at the beginning of the run (which is what makes it converge quickly) and at sample 2570.
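As a hypothetical illustration of how the Table I figure of merit can be computed, the index δ_t and its average can be evaluated as follows (our own helper; the parameter histories are assumed stored row-wise, one row per period):

```python
import numpy as np

def delta_index(theta_hat_hist, theta_true_hist):
    """Relative parameter error delta_t = ||theta_hat - theta|| / ||theta||
    per period, and its simulation average (the Table I figure of merit)."""
    err = np.linalg.norm(theta_hat_hist - theta_true_hist, axis=1)
    ref = np.linalg.norm(theta_true_hist, axis=1)
    delta = err / ref
    return delta, delta.mean()
```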


[Fig. 1. a) Parameter estimation error, b) parameter estimates, and c) trace of matrix P_t (dotted lines) and trace of θ̃_t θ̃_t^T (solid lines), for Example 1.]

[Fig. 2. Parameter estimation error for Example 2.]

B. Example 2

This example shows how the proposed identification algorithm is able to escape from the wrong attractors in which the other algorithms have been shown to be caught. Consider the system G(s) = 100/(s² + 2s + 1), to be identified with a sampling period of T = 0.05. Assume the same signal conditions as in Example 1 (σ_u² = 1, σ_v² = 1), and apply the five algorithms with λ = 1 (with l_1 = 0.75 and l_2 = 0 in A3). Assume, as in Example 1, that the process output is only available every 3 control periods. The results are summarized in Table II and Figure 2. Note that the proposed algorithm is able to escape from the wrong attractors, as can be appreciated during the first 500 periods, and converges to the right parameters, while the other algorithms remain caught in the wrong attractors.

TABLE II
RESULTS COMPARISON FOR PARAMETER ESTIMATION ERROR (δ̄_t) (Example 2)

  Algorithm   δ̄_t
  A1          1.2663
  A2          5.8415
  A3          4.3937
  A4          7.0396
  A5          0.3142

V. CONCLUSIONS

In this work, the parameter identification of systems with scarce measurements has been addressed. A new algorithm based on an adaptive extended Kalman filter for parameter estimation of linear plants has been defined and compared with other algorithms in the literature that also apply when dealing with scarce measurements. It has been shown through several examples that the proposed algorithm converges faster than the existing ones and is able to track parameter variations with time. It is also able to avoid the wrong attractors that have been demonstrated to exist in previous works. The convergence analysis of this new algorithm has not been carried out; it is a matter for future work, as is the study of other possibilities for the evaluation of the matrix W_t that lead to an optimal estimation and may improve the transient performance.

ACKNOWLEDGMENT

This work was supported by CICYT project number DPI2008-06731-C02-02/DPI.

REFERENCES

[1] R. Sanchis, A. Sala, and P. Albertos, "Scarce data operating conditions: Process model identification," SYSID IFAC Symposium on System Identification, 1997.
[2] P. Albertos, R. Sanchis, and A. Sala, "Output prediction under scarce data operation: Control applications," Automatica, vol. 35, pp. 1671–1681, 1999.
[3] R. Sanchis and P. Albertos, "Recursive identification under scarce measurements. Convergence analysis," Automatica, vol. 38, pp. 535–544, 2002.
[4] P. Albertos, R. Sanchis, and I. Peñarrocha, "Initializing parameter estimation algorithms under scarce measurements," 13th IFAC Symposium on System Identification (SYSID 2003), 2003.
[5] F. Ding and T. Chen, "Combined parameter and output estimation of dual-rate systems using an auxiliary model," Automatica, vol. 40, pp. 1739–1748, 2004.
[6] F. Ding and J. Ding, "Least squares parameter estimation for systems with irregularly missing data," International Journal of Adaptive Control and Signal Processing, doi:10.1002/acs.1141, 2009.
[7] Y. Shi, H. Fang, and M. Yan, "Kalman filter-based adaptive control for networked systems with unknown parameters and randomly missing outputs," International Journal of Robust and Nonlinear Control, vol. 19, pp. 1976–1992, 2009.
[8] L. Ljung, "Asymptotic behaviour of the extended Kalman filter as a parameter estimator for linear systems," IEEE Transactions on Automatic Control, vol. AC-24, pp. 36–50, 1979.
[9] R. Mehra, "On the identification of variances and adaptive Kalman filtering," IEEE Transactions on Automatic Control, vol. AC-15, no. 2, pp. 175–184, 1970.
[10] V. Fathabadi, M. Shahbazian, K. Salahshour, and L. Jargani, "Comparison of adaptive Kalman filter methods in state estimation of a nonlinear system using asynchronous measurements," Proceedings of the World Congress on Engineering and Computer Science 2009, Vol. II, 2009.
[11] L. Ljung, System Identification: Theory for the User. Prentice-Hall, 1987.
