
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 61, NO. 5, MAY 2016

Convergence and Stability of a Constrained Partition-Based Moving Horizon Estimator

René Schneider and Wolfgang Marquardt

Abstract—A novel type of iterative partition-based moving horizon estimator (PMHE) is proposed, the estimates of which approach those of a centralized moving horizon estimator as the number of iterations increases. It uses a deterministic setting and can handle known inputs as well as bounds on the estimated state. We derive conditions on the system, its partitions, and the scalar regularization parameter which guarantee convergence towards the optimal centralized state estimate as well as stability of the estimation error dynamics, even with a finite number of iterations at each time step. Finally, numerical simulations demonstrate both the features and the competitive estimation quality of the proposed method.

Index Terms—Convergence, partition-based moving horizon estimation (PMHE), stability.

I. INTRODUCTION

Due to its duality to model predictive control and the possibility to include constraints, moving horizon estimation (MHE) has become a popular tool to infer the state variables of a system from its measurements. More recently, there is an interest in applying MHE directly to the subsystems of an overall system, which need not be in the same location. For example, instead of estimating the complete state of a chemical plant at once, it may be more desirable to estimate the states of its interacting process units individually. Since the states of these subsystems are partitions of the overall system's state, it was proposed to term the corresponding estimation methods partition-based [1]. Contrary to decentralized estimation schemes, partition-based estimators are able to communicate and share their estimates in order to improve estimation quality. Just like centralized moving horizon estimators (CMHE), partition-based moving horizon estimators (PMHE) are typically formulated either in a stochastic or in a deterministic setting.
In the stochastic setting, all variables of a system are considered to be random and are often assumed to be unbounded. Estimation methods for this setting include the centralized MHE proposed in [2], non-iterative PMHE schemes [3], as well as an iterative PMHE method [4], which can be combined with distributed model predictive control for consistent distributed output feedback [5]. In the deterministic setting, on the other hand, the noise acting on the system is assumed to be unknown but subject to known bounds. Hence, asymptotic or bounded stability of the estimation error can be established similar to traditional observers. Furthermore, ideas from regularization are often used, e.g., in [6] for linear and in [7] for nonlinear CMHE, as well as in [3] for a non-iterative PMHE method called PMHE3, and in [8] for a Chebyshev approximation-based PMHE method for discretized PDE systems.

In this paper, a novel iterative PMHE algorithm in the deterministic setting is proposed, called sensitivity-driven PMHE (μS-PMHE). The name indicates that, similar to our previous algorithm [4], [5], the novel method is based upon one subsystem's knowledge about the sensitivity of its neighbors' objective functions with respect to its own local optimization variables. Unlike our previous algorithm, however, the proposed method uses the deterministic setting and a scalar regularization or tuning parameter μ. Therefore, we now discuss the advantages of μS-PMHE over the other deterministic PMHE methods presented in [3], [8]. Beginning with PMHE3 [3], the first advantage of μS-PMHE is that it is iterative, which means that, in contrast to PMHE3, the proposed method is able to approach the optimal centralized estimation performance obtained with CMHE [6]. Another advantage is that μS-PMHE can readily handle known inputs to the system, which renders our algorithm suitable for closed-loop control applications. Furthermore, the conditions for asymptotic stability of the estimation error of μS-PMHE are structurally different from those of PMHE3. Hence, if PMHE3 is unstable for a particular system, μS-PMHE may be stable. We now turn to the PMHE method in [8], which has been designed for state estimation of sparse (multi-)banded linear systems arising from the discretization of PDE systems. This restriction is a disadvantage compared to μS-PMHE, which can also be applied to more complex interconnection structures. Nevertheless, both methods are able to approximate the optimal centralized estimation performance obtained with CMHE [6] arbitrarily well. Whereas this is accomplished by increasing the number of iterations in μS-PMHE, the PMHE method in [8] achieves it via a Chebyshev approximation.

Manuscript received March 27, 2015; revised May 27, 2015; accepted July 21, 2015. Date of publication August 24, 2015; date of current version April 22, 2016. This work was supported by Cybernetica AS, Norway. Recommended by Associate Editor M. L. Corradini. R. Schneider is with the AVT Process Systems Engineering, RWTH Aachen University, 52056 Aachen, Germany (e-mail: [email protected]). W. Marquardt was with the AVT Process Systems Engineering, RWTH Aachen University, 52056 Aachen, Germany. He is now with Forschungszentrum Jülich GmbH, 52425 Jülich, Germany (e-mail: [email protected]). Digital Object Identifier 10.1109/TAC.2015.2471775
Unfortunately, the Chebyshev approximation introduces additional estimation errors as soon as known inputs are applied to the measurement-generating system. Clearly, this is a disadvantage in closed-loop applications. Finally, μS-PMHE provides more general stability results for the estimation error than the PMHE method in [8], and it is able to handle constraints on the estimated state. The ability to incorporate constraints on the estimated state is also an important improvement over our previous algorithm in [4] and [5]. In addition, we will present convergence and stability conditions not only for the theoretical case of an infinite number of iterations but also for a finite and time-varying number of iterations at each time step.

To this end, the paper is structured as follows: The linear system and its partitions are introduced in Section II. Our novel algorithm is presented in Section III. Sections IV and V contain the main convergence and stability results in the absence or presence of constraints on the initial state estimate, respectively. A numerical comparison of our novel method to the PMHE3 method proposed in [3] is given in Section VI. The paper concludes in Section VII. Finally, the appendices contain the proofs of the main results.

0018-9286 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

II. PROBLEM FORMULATION

This text deals with the problem of estimating the state variables x_i ∈ R^{n_xi} from the measurements y_i ∈ R^{n_yi} and bounded inputs
u_i ∈ R^{n_ui} of a number of interacting linear subsystems S_i, i ∈ N ≡ {1, . . . , N}, given by

x_{i,k+1} = A_ii x_{i,k} + B_ii u_{i,k} + Σ_{j∈N\i} (A_ij x_{j,k} + B_ij u_{j,k})   (1a)
y_{i,k} = C_ii x_{i,k},   x_{i,0} = x_i(0)   (1b)

where k ∈ ℕ denotes the discrete time steps and all subsystem matrices are assumed to be bounded and of compatible dimensions. Furthermore, we assume that all subsystems S_i, i ∈ N, satisfy the state constraints

x_{i,k} ∈ X_i ⊆ R^{n_xi}   (2)

for all k, where all sets X_i are known, non-empty and convex. Later in this text, a more compact description of the subsystem dynamics (1) is employed. To this end, consider the variable compositions z = col(z_1, . . . , z_N) for vectors z ∈ {x, u, y}, and the block matrix compositions X = [X_ij] for all i, j ∈ N and matrices X ∈ {A, B, C}. In terms of these variables, the dynamics of the subsystems S_i, i ∈ N, can be re-written as the dynamics of the composite system S

x_{k+1} = A x_k + B u_k,   x_0 = x(0)   (3a)
y_k = C x_k   (3b)

where, at all time steps k, x_k satisfies the constraint

x_k ∈ X ≡ X_1 × · · · × X_N.   (4)

For later comparison and in order to introduce some additional notation, we now briefly recall the CMHE proposed for system S in [6] and [7].

Algorithm 1 (CMHE): At each time step t ≥ K, the following constrained optimization problem must be solved

x̂*_{t−K} = arg min_{x̂_{t−K} ∈ X} Φ_t   (5)

where the objective function is defined as

Φ_t = (μ/2) ‖x̂_{t−K} − x̄_{t−K}‖² + (1/2) Σ_{k=t−K}^{t} ‖y_k − ŷ_k‖²   (6a)
    = (μ/2) ‖x̂_{t−K} − x̄_{t−K}‖² + (1/2) ‖Y_{t−K:t} − G U_{t−K:t−1} − O x̂_{t−K}‖².   (6b)

‖·‖ is the Euclidean norm, and μ ∈ R is a finite and non-negative scalar tuning or regularization parameter. Furthermore, the finite and non-negative integer K denotes the length of the estimation horizon and ŷ_k is the predicted measurement. We have also introduced the aggregated measurement and input vectors Y_{t−K:t} = col(y_{t−K}, . . . , y_t) and U_{t−K:t−1} = col(u_{t−K}, . . . , u_{t−1}), respectively. They are related through the extended observability and input matrices, defined as

O = [Cᵀ (CA)ᵀ · · · (CA^K)ᵀ]ᵀ

G = [ 0            0            · · ·   0
      CB           0            · · ·   0
      CAB          CB           · · ·   0
      ⋮            ⋮            ⋱       ⋮
      CA^{K−1}B    CA^{K−2}B    · · ·   CB ].

Finally, x̄_{t−K} denotes an a priori estimate of x̂_{t−K}. It is typically computed from the estimate at the previous time step t − 1 as

x̄_{t−K} = A x̂_{t−K−1|t−1} + B u_{t−K−1}.

III. μS-PMHE

Before the novel μS-PMHE algorithm can be presented, each subsystem must be assigned an individual objective function. To this end, we first split the objective function of the CMHE such that Φ_t = Σ_{i∈N} Φ_{i,t}. In particular,

Φ_{i,t} = (1/2) x̂ᵀ_{i,t−K} (μI + (OᵀO)_ii) x̂_{i,t−K} + (1/2) Σ_{j∈N\i} x̂ᵀ_{j,t−K} (OᵀO)_ji x̂_{i,t−K}
         − ((Y_{t−K:t} − G U_{t−K:t−1})ᵀ O_{:,i} + μ x̄ᵀ_{i,t−K}) x̂_{i,t−K}
         + (1/2) (Y_{i,t−K:t} − G_i U_{t−K:t−1})ᵀ (Y_{i,t−K:t} − G_i U_{t−K:t−1}) + (μ/2) x̄ᵀ_{i,t−K} x̄_{i,t−K}   (7)

and (OᵀO)_ij ∈ R^{n_xi × n_xj} denotes the i, j-block of the matrix OᵀO. Furthermore, O_{:,i} are the columns of O corresponding to x_i; Y_{i,t−K:t} = col(y_{i,t−K}, . . . , y_{i,t}), and G_i denotes the rows of G, picked such that G_i U_{t−K:t−1} reflects the effect of the inputs U_{t−K:t−1} on the measurements of subsystem S_i. The proposed algorithm is then given as follows:

Algorithm 2 (μS-PMHE): At any time step t ≥ K, and at each iteration l(t) ∈ {0, . . . , L(t)}, each subsystem S_i, i ∈ N, computes an estimate of its state at the beginning of the current estimation horizon, denoted by x̂^{[l+1]}_{i,t−K}. To that end, it solves the optimization problem

x̂^{[l+1]}_{i,t−K} = arg min_{x̂_{i,t−K} ∈ X_i} Φ̃^{[l+1]}_{i,t}.   (8)

More precisely, the objective function in (8) is defined as

Φ̃^{[l+1]}_{i,t} = Φ_{i,t} |_{x̂_{j,t−K} = x̂^{[l]}_{j,t−K}, ∀j≠i} + ( Σ_{j∈N\i} ∂Φ_{j,t}/∂x̂ᵀ_{i,t−K} |_{x̂_{t−K} = x̂^{[l]}_{t−K}} ) (x̂_{i,t−K} − x̂^{[l]}_{i,t−K})   (9)

where the sensitivity term in (9) takes the influence of the local optimization variables on the neighboring subsystems' objective functions into account. It can be computed as

Σ_{j∈N\i} ∂Φ_{j,t}/∂x̂ᵀ_{i,t−K} |_{x̂_{t−K} = x̂^{[l]}_{t−K}} = (1/2) Σ_{j∈N\i} ((OᵀO)_ij x̂^{[l]}_{j,t−K})ᵀ.

At each time step t, the iterations are initialized as x̂^{[0]}_{i,t−K} = x̄_{i,t−K}, where, similar to CMHE, x̄_{i,t−K} denotes an a priori estimate of x̂_{i,t−K}, given by

x̄_{i,t−K} = A_ii x̂^{[L(t−1)]}_{i,t−K−1|t−1} + B_ii u_{i,t−K−1} + Σ_{j∈N\i} (A_ij x̂^{[L(t−1)]}_{j,t−K−1|t−1} + B_ij u_{j,t−K−1}).   (10)

Finally, the desired estimate for the subsystem state x_{i,t} can be computed from x̂^{[L]}_{i,t−K} via repetitive forward prediction similar to (1a).

Different from our previous algorithm used in [4] and [5], the algorithm proposed here assumes a deterministic state evolution. Hence, the local optimization problems no longer contain covariance matrices, nor do they require any Riccati recursions. Instead, the scalar regularization parameter μ can be tuned for convergence of the iterations involved and for stability of the estimation error. In contrast to [4] and
[5], this deterministic framework also allows us to prove convergence and stability in the presence of constraints on the estimated states, as we shall see later. To conclude this section, we highlight three further differences to our previous algorithms [4] and [5]. First, the subsystem optimization problems (8) do not include any equality constraints. Rather, the system dynamics are directly included in the objective function. As a side effect, the objective function no longer contains any Lagrange multipliers, making the theoretical analysis much more tractable. Second, the novel algorithm requires that each subsystem knows the complete dynamics of the aggregated system as well as the measurements and inputs of the other subsystems. However, it suffices to transmit this information once per time step, because it remains constant at all iterations. Third, since the initial state is now the only optimization variable, much less information needs to be transmitted between iterations.
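Before turning to the analysis, the centralized baseline of Algorithm 1 can be made concrete with a small numerical sketch. The snippet below builds the extended observability and input matrices O and G introduced after (6b) and solves the unconstrained CMHE problem (5) via its normal equations (μI + OᵀO) x̂_{t−K} = μ x̄_{t−K} + Oᵀ(Y_{t−K:t} − G U_{t−K:t−1}). This is our own NumPy illustration; the function names and the tiny two-state example are assumptions for demonstration, not part of the paper.

```python
import numpy as np

def extended_matrices(A, B, C, K):
    """Extended observability matrix O and input matrix G for a horizon
    covering K+1 measurements (K >= 1), as used in the CMHE objective (6b)."""
    n = A.shape[0]
    obs_blocks, G_rows = [], []
    Ak = np.eye(n)
    for k in range(K + 1):
        obs_blocks.append(C @ Ak)  # block row C A^k of O
        # block row k of G: [C A^{k-1} B, ..., C A B, C B, 0, ..., 0]
        row = [C @ np.linalg.matrix_power(A, k - 1 - j) @ B if j < k
               else np.zeros((C.shape[0], B.shape[1]))
               for j in range(K)]
        G_rows.append(np.hstack(row))
        Ak = A @ Ak
    return np.vstack(obs_blocks), np.vstack(G_rows)

def cmhe_estimate(O, G, Y, U, x_bar, mu):
    """Unconstrained CMHE (5)-(6): minimize
    (mu/2)||x - x_bar||^2 + (1/2)||Y - G U - O x||^2
    by solving (mu I + O^T O) x = mu x_bar + O^T (Y - G U)."""
    n = O.shape[1]
    H = mu * np.eye(n) + O.T @ O
    rhs = mu * x_bar + O.T @ (Y - G @ U)
    return np.linalg.solve(H, rhs)
```

In the noise-free setting of (3), Y_{t−K:t} − G U_{t−K:t−1} = O x_{t−K}, so with (A, C) observable over K + 1 steps the minimizer recovers the horizon-initial state exactly, even for μ = 0.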

IV. CONVERGENCE AND STABILITY: UNCONSTRAINED CASE

In this and the subsequent section, conditions are derived for convergence and stability of the estimation error of μS-PMHE in the absence or presence of constraints on the initial state estimate, respectively. This section defines and establishes convergence and stability for unconstrained μS-PMHE, i.e., of Algorithm 2 when X = R^n. Since μS-PMHE is an iterative algorithm, we first study its convergence at a fixed time step t. Subsequently, we examine the estimation error when t → ∞.

A. Convergence

We start by re-stating a common definition of convergence:

Definition 1 (Convergence, [9, Appendix A]): A sequence {z^{[l]}} is said to converge to z* if, for every ε > 0, there exists some non-negative integer l̄ such that ‖z^{[l]} − z*‖ ≤ ε ∀l ≥ l̄.

Since Φ_t = Σ_{i∈N} Φ_{i,t}, we expect that there is some relationship between the fixed point of μS-PMHE and the solution of the CMHE problem. To validate this conjecture, we make the following assumption:

Assumption 1: One or both of the following are true: (i) μ > 0, (ii) the pair (A, C) is observable from measurements at K + 1 time steps, i.e., O has full rank.

This assumption has implications for the solution of (5):

Lemma 1: If Assumption 1 holds, then (i) the matrix (μI + OᵀO) is bounded and symmetric positive definite, (ii) the objective function Φ_t of the CMHE problem (5) is strictly convex on X, and (iii) the solution x̂*_{t−K} is unique.

Proof: Part (i) follows from either μ > 0 or the observability of (A, C), which implies full rank of O. Part (ii) follows from part (i) and the definition of Φ_t in (6). A proof of part (iii) can be found, e.g., in [9, Proposition A.35].

Similarly, Assumption 1 affects the μS-PMHE problem (8):

Lemma 2: If Assumption 1 holds, then, for any i ∈ N, (i) the matrices M_i = (μI + OᵀO)_ii are bounded and symmetric positive definite, (ii) the objective functions Φ̃^{[l]}_{i,t} of μS-PMHE are strictly convex on X_i for any l ≥ 1, and (iii) the corresponding subsystem state estimates x̂^{[l]}_{i,t−K} are unique.

Proof: Part (i) follows from the positive definiteness of (μI + OᵀO) proved in Lemma 1, and parts (ii) and (iii) follow accordingly.

This leads to the following convergence result for unconstrained μS-PMHE:

Theorem 1 (Necessary and Sufficient Conditions for Convergence of Unconstrained μS-PMHE): Suppose that X_i = R^{n_i} ∀i ∈ N. Then, at any fixed time step t, the collective state estimate of unconstrained μS-PMHE is given in terms of the true state x_{t−K} by the iterative sequence

(μI + (OᵀO)_d) x̂^{[l+1]}_{t−K} = −(OᵀO)_r x̂^{[l]}_{t−K} + μ x̄_{t−K} + OᵀO x_{t−K}   (11)

where the block diagonal matrix (OᵀO)_d = diag((OᵀO)_11, . . . , (OᵀO)_NN) and (OᵀO)_r is its complement, i.e., (OᵀO)_d + (OᵀO)_r = OᵀO. Suppose further that Assumption 1 holds. Then, as L(t) → ∞, the iterates converge to the CMHE solution x̂*_{t−K} if and only if the spectral radius

ρ((μI + (OᵀO)_d)^{−1} (OᵀO)_r) < 1.   (12)

Proof: The proof is provided in Appendix A.

Remark 1 (Design for Convergence): Condition (12) in Theorem 1 can be seen as a design criterion: If μ is fixed, the system must be partitioned such that (12) is satisfied, though this may not always be possible. Conversely, if the system partition is fixed, μ can always be chosen sufficiently large to satisfy (12). It is also interesting to note that (11) can be interpreted as a block Jacobi method to solve the first-order necessary conditions of optimality of problem (5).

B. Stability

We are now ready to study stability of the estimation error of μS-PMHE as t → ∞. We begin with the definition of an asymptotically stable observer, adopted from [2]:

Definition 2 (Asymptotic Stability): An estimator is said to be an asymptotically stable observer for the measurement-generating system (1) or (3) if for any ε > 0 there is a number δ > 0 and a positive integer t̄ such that if ‖x_0 − x̄(0)‖ ≤ δ, then ‖x_{t−K} − x̂_{t−K}‖ ≤ ε for all t ≥ t̄ and x̂_{t−K} → x_{t−K} as t → ∞.

We can now derive sufficient conditions for unconstrained μS-PMHE being an asymptotically stable observer if L(t) = ∞ at all time steps t:

Proposition 1: Suppose that Assumption 1 holds, X_i = R^{n_i} ∀i ∈ N, and that the system partitioning satisfies condition (12). If L(t) = ∞ at all time steps t, then the estimation error of unconstrained μS-PMHE evolves as

e_{t−K} = (μI + OᵀO)^{−1} μA e_{t−K−1}.   (13)

Consequently, unconstrained μS-PMHE is an asymptotically stable observer if and only if the spectral radius

ρ((μI + OᵀO)^{−1} μA) < 1.   (14)

Proof: The first part follows directly from Theorem 1 above and Proposition 1 in [6]. The second part is the standard stability condition for LTI systems applied to (13).

Corollary 1 (Existence of Stabilising μ, [6, Remark 1]): Assume further that (A, C) is observable in K + 1 steps, i.e., O has full rank. Then, as noticed in [6], there always exists a μ ≥ 0 such that the stability condition (14) is satisfied: For very contractive systems, i.e., for systems satisfying ‖A‖ ≤ 1, any μ ≥ 0 satisfies condition (14). Otherwise, μ can always be chosen small enough, i.e., such that 0 ≤ μ < λ_min(OᵀO)/(‖A‖ − 1).

Remark 2 (Design Conflict for Weakly Contractive Systems): In the case of weakly contractive systems, i.e., if ‖A‖ > 1, the conditions on
μ for convergence (12) and for stability (14) are conflicting: For convergence μ should be sufficiently large, whereas for stability μ should be sufficiently small. In this case, it may be necessary (but not always possible) to re-partition the system in such a way that the convergence condition (12) is satisfied for some μ < λ_min(OᵀO)/(‖A‖ − 1).

C. Approximate Unconstrained μS-PMHE

In practice, L(t) < ∞, and we have the following result:

Theorem 2: Suppose that X_i = R^{n_i} ∀i ∈ N. Then the collective estimation error e^{[l]}_{t−K} = x_{t−K} − x̂^{[l]}_{t−K} of unconstrained μS-PMHE evolves with t and l as

(μI + (OᵀO)_d) e^{[l+1]}_{t−K} = −(OᵀO)_r e^{[l]}_{t−K} + μA e_{t−K−1}.   (15)

Suppose further that Assumption 1 holds and define the convergence rate κ as

κ := ‖(μI + (OᵀO)_d)^{−1} (OᵀO)_r‖.   (16)

Then, the estimation error is bounded as

‖e^{[l]}_{t−K}‖ ≤ ((μ + κ^l λ_max(OᵀO)) / (μ + λ_min(OᵀO))) ‖A e_{t−K−1}‖   ∀t ≥ 0   (17)

where (OᵀO)_d = diag((OᵀO)_11, . . . , (OᵀO)_NN), and λ_min and λ_max denote the smallest and largest eigenvalue of a matrix, respectively.

Proof: The proof is provided in Appendix B.

Hence, μS-PMHE with a finite number of iterations L(t) at each time step is asymptotically stable if

((μ + κ^{L(t)} λ_max(OᵀO)) / (μ + λ_min(OᵀO))) ‖A‖ < 1.   (18)

Thus, in the case of unconstrained μS-PMHE, asymptotic stability of the estimation error is possible also with finitely many iterations per time step. For example, consider the subset of configurations satisfying (12) for which κ < 1. If they also satisfy (18) for L(t) = 1, i.e., if

((μ + κ λ_max(OᵀO)) / (μ + λ_min(OᵀO))) ‖A‖ < 1   (19)

then unconstrained μS-PMHE is asymptotically stable for all L(t) ≥ 1, i.e., independently of the number of iterations per time step. Furthermore, the upper bound on the estimation error norm decreases as L(t) increases. In this case, unconstrained μS-PMHE becomes an anytime algorithm, making it very suitable for on-line applications if the available time permits the execution of at least a single iteration.

V. CONVERGENCE AND STABILITY: CONSTRAINED CASE

This section focuses on convergence and stability of the estimation error of constrained μS-PMHE, i.e., of Algorithm 2 when X_i ⊂ R^{n_i} for at least one i ∈ N. In contrast to the unconstrained case covered in the previous section, the corresponding first-order necessary conditions of optimality no longer form a linear system of equations, but involve additional inequalities. Therefore, our previous arguments are no longer valid, and convergence and stability of constrained μS-PMHE need to be established differently, as shown in this section.

A. Convergence

As before, we first study convergence of μS-PMHE at a fixed time step t:

Theorem 3 (Sufficient Conditions for Convergence of μS-PMHE): Suppose that t is fixed and Assumption 1 holds. Then, as l → ∞, the sequence of state estimates generated by the constrained μS-PMHE Algorithm 2 converges to the CMHE solution x̂*_{t−K}, i.e.,

lim_{l→∞} col(x̂^{[l]}_{1,t−K}, . . . , x̂^{[l]}_{N,t−K}) = x̂*_{t−K}

if

λ_min((OᵀO)_d) > (λ_max(OᵀO) − μ)/2.   (20)

Proof: The proof is provided in Appendix C.

Similar to the unconstrained case and the discussion in Remark 1, μ can always be chosen sufficiently large to guarantee convergence.

B. Stability

Proposition 2 (Asymptotic Stability for Constrained μS-PMHE): Suppose that (A, C) is observable in K + 1 steps and that L(t) = ∞ at all time steps t. Then, the squared norm of the μS-PMHE estimation error e_{t−K} = x_{t−K} − x̂^{[L(t)]}_{t−K} obeys

‖e_{t−K}‖² ≤ ζ_{t−K},   t ≥ K

where the sequence {ζ_t} is given in [7, Corollary 1]. In particular, if μ is selected such that

8‖A‖² μ < λ_min(OᵀO) + μ   (21)

then μS-PMHE is asymptotically (even exponentially) stable, i.e., ‖e_{t−K}‖ → 0 as t → ∞. If 8‖A‖² > 1, then condition (21) is satisfied if μ is chosen as 0 ≤ μ < λ_min(OᵀO)/(8‖A‖² − 1).

Remark 4 (Design Conflict for Weakly Contractive Systems): Similar to Remark 2, the conditions on μ for convergence (20) and for stability (21) are potentially conflicting for weakly contractive systems, i.e., if 8‖A‖² > 1: For convergence μ should be sufficiently large, whereas for stability μ should be sufficiently small. In this case, it may be necessary (but not always possible) to re-partition the system in such a way that the convergence condition (20) is satisfied for some μ < λ_min(OᵀO)/(8‖A‖² − 1).

C. Approximate μS-PMHE

In case that L(t) < ∞, we have the following result:

Proposition 3 (Stability of Approximate MHE [7, Theorem 2]): Suppose that the constrained μS-PMHE is terminated after a finite number L(t) of iterations, such that, ∀t ≥ K,

Φ_t(x̂^{[L(t)]}_{t−K}) − Φ_t(x̂*_{t−K}) ≤ ε   (22)

and that the pair (A, C) is observable in K + 1 steps, i.e., O has full rank. Then, the estimation error of μS-PMHE with finitely many iterations is bounded by

‖e_{t−K}‖² ≤ ζ̄_{t−K}   (23)

where the sequence {ζ̄_t} is given in [7, Theorem 2]. In particular, if μ is chosen such that condition (21) is satisfied, the sequence {ζ̄_t} converges exponentially to

ζ∞(μ) = 2ε / (λ_min(OᵀO) + (1 − 8‖A‖²) μ).   (24)

Furthermore, if ζ̄_t > ζ∞(μ), then ζ̄_{t+1} < ζ̄_t, t = 0, 1, . . ..

TABLE I. SPECTRAL RADIUS OF ESTIMATION ERROR PROPAGATION MATRIX E(l)
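The conditions above lend themselves to a quick numerical check. The sketch below, our own illustration rather than code from the paper, splits OᵀO into (OᵀO)_d and (OᵀO)_r, and evaluates the spectral radius of the block-Jacobi iteration matrix from condition (12), the convergence rate κ from (16), and the finite-iteration error gain from (18) for a candidate μ.

```python
import numpy as np

def blocks_d_r(OtO, sizes):
    """Block-diagonal part (O^T O)_d of O^T O for the given subsystem
    state dimensions, and the off-diagonal remainder (O^T O)_r."""
    d = np.zeros_like(OtO)
    edges = np.cumsum([0] + list(sizes))
    for lo, hi in zip(edges[:-1], edges[1:]):
        d[lo:hi, lo:hi] = OtO[lo:hi, lo:hi]
    return d, OtO - d

def check_mu(A, OtO, sizes, mu, L=1):
    """Return (rho, kappa, gain): rho is the Jacobi spectral radius from
    (12), kappa the contraction rate from (16), and gain the factor that
    must be < 1 in the finite-iteration stability condition (18)."""
    d, r = blocks_d_r(OtO, sizes)
    n = OtO.shape[0]
    J = np.linalg.solve(mu * np.eye(n) + d, r)
    rho = max(abs(np.linalg.eigvals(J)))   # convergence requires rho < 1
    kappa = np.linalg.norm(J, 2)           # kappa from (16), spectral norm
    lam = np.linalg.eigvalsh(OtO)          # eigenvalues of O^T O, ascending
    gain = (mu + kappa**L * lam[-1]) / (mu + lam[0]) * np.linalg.norm(A, 2)
    return rho, kappa, gain
```

Sweeping μ reproduces the trade-off of Remarks 2 and 4: a larger μ shrinks the Jacobi radius in (12), but for weakly contractive systems it inflates the gain in (18).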

Proof: This result follows directly from [7, Theorem 2] for the special case of linear and noise-free systems.

Hence, the asymptotic bound on the estimation error can be decreased in two ways: First, if μS-PMHE converges, then ε can be made arbitrarily small by increasing the number of iterations performed at each time step. Second, and similar to Remark 4, μ can be adjusted depending on the contractiveness of the system, i.e., it can be increased if 8‖A‖² < 1, or should be decreased if 8‖A‖² ≥ 1.

VI. NUMERICAL RESULTS

In this section, the estimation accuracy of μS-PMHE is numerically evaluated and compared to other methods for two different example systems from the literature. For simplicity, the focus is on the unconstrained case, and we note from (13) and (15) that the estimation error dynamics of both CMHE and μS-PMHE can be described by

e^{[l]}_{t−K} = E(l) e_{t−K−1}   (25)

where E(l) denotes the generic error propagation matrix from time step t − K − 1 to t − K. Therefore, the spectral radius ρ(E(l)) can serve as an indicator of how fast the estimation error vanishes. As the performance of a particular estimation method is highly governed by its tuning parameters, only methods with identical tuning parameters should be compared to each other. This excludes the algorithms PMHE1 and PMHE2 [3] as well as our previous algorithm [4], all of which utilize the Kalman filter tuning matrices, i.e., possess more degrees of freedom than μS-PMHE. While the method in [8] is also based on the CMHE formulation (5), we do not include it in the comparison, because it is only suited for sparse multi-banded system matrices, and because its estimation error depends on the state trajectory, i.e., it cannot be described by (25). To the best of our knowledge, these considerations leave only the PMHE3 method [3] for a fair comparison. In the unconstrained case, its estimation error dynamics obey [3, eq. (23a)], i.e.,

e_{t−K} = (μI + (O*)ᵀO*)^{−1} (μI + (O*)ᵀ(O* − O)) A e_{t−K−1}

where O* is the observability matrix of (A*, C*), i.e., the matrices formed from the diagonal blocks of A and C. To compare CMHE, μS-PMHE, and PMHE3, we have computed ρ(E(l)) for the compartmental example systems used in [3] and [4]. For different values of μ and K, the results are shown in Table I. The data shown supports the statement made in [6] for CMHE that the estimation accuracy "as a function of μ and K is generally not easy to analyze," and it seems to be true also for μS-PMHE and PMHE3. For example, when applied to the system studied in [3], the estimation error decay rate of μS-PMHE is similar to the decay rates of PMHE3
and CMHE, independent of the number of iterations employed. On the other hand, for the system used in [4], PMHE3 and μS-PMHE with few iterations perform worse than CMHE. As the number of iterations increases, the estimation error of μS-PMHE starts to decay faster than the estimation error of PMHE3 and approaches the decay rate of CMHE. However, it is fair to mention the higher computational cost of μS-PMHE compared to PMHE3 when more than a single iteration is employed. Finally, the first row of Table I demonstrates that additional iterations may even reduce the performance of μS-PMHE, when the estimation error of CMHE is larger than the one of μS-PMHE with a single iteration. From these results, we may conclude that neither PMHE3 nor μS-PMHE generally gives a smaller estimation error. The optimal method depends in a non-trivial manner on the system at hand, its partitions, the horizon length, the regularization parameter μ, and on the available computational time.

VII. CONCLUSION

We presented a novel iterative and partition-based moving horizon state estimation algorithm in a deterministic setting, which can handle known inputs as well as constraints on the estimated state. We derived conditions for convergence of the state estimates to the optimal estimates obtained with a centralized moving horizon estimator in the limiting case of infinite iterations per time step. Both for this limiting case and for the more practical scenario of finitely many iterations per time step, sufficient conditions for asymptotic or bounded stability of the estimation error have been established. Finally, numerical simulations have shown that in some cases the estimation error of our method decays faster than that of the PMHE3 method [3].

For clarity of presentation, the results have been presented only for nominal, noise-free systems. They can be extended to the case of bounded process and measurement noise in a straightforward manner along the lines of [6] and [7].
This will increase the notational burden, prohibit asymptotic stability, and generally give worse error bounds. While our method focuses on systems which have already been partitioned, it remains an interesting open problem to derive constructive partitioning guidelines such that convergence and stability can be guaranteed a priori. Finally, it may be worthwhile future work to apply our algorithm to nonlinear systems.

APPENDIX A
PROOF OF THEOREM 1

For X_i = R^{n_i}, the first-order necessary conditions of optimality for problems (8) are

∂Φ̃^{[l+1]}_{i,t} / ∂x̂_{i,t−K} = (μI + (OᵀO)_ii) x̂_{i,t−K}   (26a)
    + (1/2) Σ_{j≠i} (OᵀO)ᵀ_{ji} x̂^{[l]}_{j,t−K}   (26b)
    − Oᵀ_{:,i} (Y_{t−K:t} − G U_{t−K:t−1})   (26c)
    − μ x̄_{i,t−K} + (1/2) Σ_{j≠i} (OᵀO)_{ij} x̂^{[l]}_{j,t−K}   (26d)
    = 0,   ∀i ∈ N   (26e)

where the second term in (26d) is the derivative of the sensitivity term introduced in (9). If these conditions are instead formulated in terms of the true state at the beginning of the horizon, (26c) becomes −Σ_{j∈N} (OᵀO)ᵀ_{ji} x_{j,t−K}. Noting that OᵀO is a symmetric matrix, and stacking these conditions ∀i ∈ N, leads to the l-discrete, linear system (11), which is well known to be asymptotically stable if and only if condition (12) holds.

APPENDIX B
PROOF OF THEOREM 2

Substituting (1) into (11) yields the estimation error sequence (15). We may re-write (15) in terms of the distance of the estimation error from its steady-state value e*_{t−K}, given as

e*_{t−K} = (μI + OᵀO)^{−1} μA e_{t−K−1}   (27)

to obtain the expression

e^{[l+1]}_{t−K} − e*_{t−K} = −(μI + (OᵀO)_d)^{−1} (OᵀO)_r (e^{[l]}_{t−K} − e*_{t−K}).

Taking the norm on both sides and using the definition of κ results in

‖e^{[l]}_{t−K} − e*_{t−K}‖ ≤ κ ‖e^{[l−1]}_{t−K} − e*_{t−K}‖ ≤ κ^l ‖e^{[0]}_{t−K} − e*_{t−K}‖.   (28)

It follows from the initialization of μS-PMHE discussed in Algorithm 2 that

e^{[0]}_{t−K} = A e_{t−K−1}.   (29)

Applying the triangle inequality to (28) further yields

‖e^{[l]}_{t−K}‖ = ‖e^{[l]}_{t−K} − e*_{t−K} + e*_{t−K}‖ ≤ κ^l ‖e^{[0]}_{t−K} − e*_{t−K}‖ + ‖e*_{t−K}‖.   (30)

By means of (27) and (29), we obtain the expression

e^{[0]}_{t−K} − e*_{t−K} = OᵀO (μI + OᵀO)^{−1} A e_{t−K−1}   (31)

which, when substituted together with (27) into (30), yields (17). Finally, if condition (18) holds, then clearly lim_{t→∞} ‖e_{t−K}‖ = 0.

APPENDIX C
PROOF OF THEOREM 3

The proof consists of two parts. First, we show that the μS-PMHE Algorithm 2 is essentially the scaled gradient projection (SGP) method applied to the CMHE problem (5). Second, we show that the stated assumptions are sufficient to guarantee geometric convergence of the μS-PMHE state estimates to CMHE at any fixed time step as L(t) → ∞.

Proposition 4 (μS-PMHE is SGP Applied to CMHE): For any fixed time step t, the μS-PMHE Algorithm 2 with subsystem-separable state constraints solves the corresponding CMHE problem (5) by means of the iterative scaled gradient projection algorithm, as presented in [9, Section 3.3.4].

Proof: According to [9, Section 3.3.3, eq. (3.8)], the SGP algorithm to iteratively solve the optimization problem

min_{z∈Y} F(z)

is given by the following sequence:

z^{[l+1]} = arg min_{s∈Y} { (1/(2γ)) (s − z^{[l]})ᵀ M (s − z^{[l]}) + (s − z^{[l]})ᵀ ∇F(z^{[l]}) }   (32)

where γ > 0. When M is block-diagonal with blocks M_i, (32) can be written as [9, Section 3.3.4, eq. (3.10)]

z^{[l+1]} = arg min_{s∈Y} Σ_{i=1}^{N} { (1/(2γ)) (s_i − z_i^{[l]})ᵀ M_i (s_i − z_i^{[l]}) + (s_i − z_i^{[l]})ᵀ ∇_i F(z^{[l]}) }.   (33)

Parallelization is possible if Y = Y_1 × · · · × Y_N. Then, all summands can be minimized independently

z_i^{[l+1]} = arg min_{s_i ∈ Y_i} { (1/(2γ)) (s_i − z_i^{[l]})ᵀ M_i (s_i − z_i^{[l]}) + (s_i − z_i^{[l]})ᵀ ∇_i F(z^{[l]}) },   i ∈ N.   (34)

Replacing z by x̂_{t−K}, Y by X, F by Φ_t, M by μI + (OᵀO)_d, and setting the stepsize as γ = 1, the expression minimized in (34) corresponds to the μS-PMHE cost function Φ̃^{[l+1]}_{i,t} defined in (9) up to an additional constant.

Due to this relation between μS-PMHE and SGP, we can derive convergence criteria for μS-PMHE from those of SGP, given in [9, Proposition 3.7]. SGP converges if there exists a positive descent constant A3 = α − K_L/2, where α > 0 must satisfy the positivity condition [9, Condition (3.9)] and K_L is a Lipschitz constant introduced in [9, Assumption 3.1]. In case of μS-PMHE, and because of Assumption 1, the positivity condition is satisfied for α = λ_min(M) > 0 and the Lipschitz constant is K_L = λ_max(μI + OᵀO). Consequently, the condition on A3 translates to the μS-PMHE convergence condition (20).
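As a numerical complement to Appendix A and the block-Jacobi interpretation in Remark 1, the sketch below iterates the unconstrained update (11) and checks that its fixed point is the CMHE solution of (5). It is our own illustration, not code from the paper; the vector b stands in for Oᵀ(Y_{t−K:t} − G U_{t−K:t−1}) and the synthetic data are assumptions for demonstration.

```python
import numpy as np

def split_blocks(OtO, sizes):
    """Block-diagonal part (O^T O)_d and remainder (O^T O)_r of O^T O."""
    d = np.zeros_like(OtO)
    edges = np.cumsum([0] + list(sizes))
    for lo, hi in zip(edges[:-1], edges[1:]):
        d[lo:hi, lo:hi] = OtO[lo:hi, lo:hi]
    return d, OtO - d

def pmhe_iterations(OtO, sizes, mu, x_bar, b, L):
    """Unconstrained muS-PMHE as the block-Jacobi sweep (11):
    (mu I + (O^T O)_d) x^{[l+1]} = -(O^T O)_r x^{[l]} + mu x_bar + b,
    started at x^{[0]} = x_bar, with b = O^T (Y - G U)."""
    d, r = split_blocks(OtO, sizes)
    n = OtO.shape[0]
    M = mu * np.eye(n) + d
    x = x_bar.copy()
    for _ in range(L):
        x = np.linalg.solve(M, -r @ x + mu * x_bar + b)
    return x
```

Any fixed point satisfies (μI + (OᵀO)_d) x = −(OᵀO)_r x + μ x̄ + b, which is exactly the CMHE stationarity condition (μI + OᵀO) x = μ x̄ + b, so whenever condition (12) holds the iterates converge to the centralized estimate.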
REFERENCES

[1] M. Farina, R. Scattolini, J. Garcia, J. Espinosa, and J. Rawlings, Report on the State of the Art in Distributed State and Variance Estimation, and on Preliminary Results on Disturbance Modelling for Distributed Systems, European FP7 project HD-MPC, 2010. [Online]. Available: http://www.ict-hd-mpc.eu/deliverables/hd_mpc_D_5_1.pdf
[2] C. V. Rao, J. B. Rawlings, and J. H. Lee, "Constrained linear state estimation—A moving horizon approach," Automatica, vol. 37, no. 10, pp. 1619–1628, 2001.
[3] M. Farina, G. Ferrari-Trecate, and R. Scattolini, "Moving-horizon partition-based state estimation of large-scale systems," Automatica, vol. 46, no. 5, pp. 910–918, 2010.
[4] R. Schneider, H. Scheu, and W. Marquardt, "An iterative partition-based moving horizon estimator for large-scale linear systems," in Proc. 12th Eur. Control Conf., 2013, pp. 2621–2626.
[5] R. Schneider, H. Scheu, and W. Marquardt, "Distributed MPC and partition-based MHE for distributed output feedback," in Proc. 19th IFAC World Congr., 2014, pp. 2183–2188.
[6] A. Alessandri, M. Baglietto, and G. Battistelli, "Receding-horizon estimation for discrete-time linear systems," IEEE Trans. Autom. Control, vol. 48, no. 3, pp. 473–478, Mar. 2003.
[7] A. Alessandri, M. Baglietto, and G. Battistelli, "Moving-horizon state estimation for nonlinear discrete-time systems: New stability results and approximation schemes," Automatica, vol. 44, no. 7, pp. 1753–1765, 2008.
[8] A. Haber and M. Verhaegen, "Moving horizon estimation for large-scale interconnected systems," IEEE Trans. Autom. Control, vol. 58, no. 11, pp. 2834–2847, Nov. 2013.
[9] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Belmont, MA, USA: Athena Scientific, 1997.