International Journal of Automation and Computing
8(1), February 2011, 29-36 DOI: 10.1007/s11633-010-0551-3
Stability of Iterative Learning Control with Data Dropouts via Asynchronous Dynamical System

Xu-Hui Bu    Zhong-Sheng Hou

Advanced Control Systems Laboratory, School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing 100044, PRC

Abstract: In this paper, the stability of iterative learning control with data dropouts is discussed. By the super vector formulation, an iterative learning control (ILC) system with data dropouts can be modeled as an asynchronous dynamical system with rate constraints on events in the iteration domain. The stability condition is provided in the form of linear matrix inequalities (LMIs), based on the stability of asynchronous dynamical systems. The analysis is supported by simulations.

Keywords: Iterative learning control (ILC), networked control systems (NCSs), data dropouts, asynchronous dynamical system, robustness.

Manuscript received January 24, 2010; revised April 20, 2010. This work was supported by General Program (No. 60774022) and State Key Program (No. 60834001) of National Natural Science Foundation of China.

1  Introduction
Iterative learning control (ILC) is an attractive technique for systems that execute the same task repeatedly over a finite time interval. The key feature of the technique is to use information from previous (and/or current) operations (iterations) so that the controlled system performs progressively better from operation to operation. ILC has been the center of interest of many researchers over the last two decades[1−4]. Robustness has been studied in ILC from a number of different perspectives, such as model uncertainty[5, 6], parameter interval uncertainty[7, 8], nonlinear robustness[9, 10], the initial reset problem[11−13], stochastic noise[14−17], disturbance rejection[18, 19], and data delays[20, 21].

Data dropout is an important issue in industrial control systems, especially in networked control systems (NCSs)[22, 23]. Compared with traditional point-to-point wiring, the use of communication channels can reduce the cost of cables and power, simplify the installation and maintenance of the whole system, and increase reliability. However, feedback control loops closed over a network often suffer data dropouts caused by network failures or limited bandwidth. Such data dropouts complicate controller analysis and synthesis, as they affect the stability and performance of the controlled system. Results on control systems with data dropouts, covering stability analysis, controller design, filter design, and dropout compensation, have been reported in [24–33].

In the literature, there are basically three approaches to modeling the data dropout phenomenon in NCSs. An arguably popular approach is to view the packet loss as a binary switching sequence specified by a conditional probability distribution; the switching sequence obeys a Bernoulli distribution taking values of zero and one with certain probabilities. Some results have been published on such a model[24−27]. The second approach is to use a discrete-time linear system with Markovian jumping parameters to represent random packet losses in the network[28, 29]: the NCS with data dropout is formulated as a Markovian jump system with two operation modes, and the techniques developed for Markovian jump systems are then applied. The third approach is to replace the lost packets by zeros and construct an incompleteness matrix in the measurement; this idea has been used in [32, 33] to deal with robust filtering problems with data missing or packet losses.

The data dropout problem in the context of ILC has been studied in [34–38]. Ahn et al.[34] discussed the problem where data dropouts from a remote plant occur with the same random variable applied to each component in the multivariable output vector of the plant. Ahn et al.[35] considered the case where each component in the multivariable output vector of the plant is subject to an independent dropout. Liu et al.[36] proposed an averaging ILC algorithm to overcome random data dropouts; it is shown that ILC can perform well and achieve asymptotic convergence in ensemble average along the iteration axis. Pan et al.[37] proposed an iterative learning control approach for a class of sampled-data nonlinear systems over network communication channels. These works all focus on the problem of how to design learning schemes that achieve the desired stability and convergence properties of the closed-loop system. Although Ahn et al.[38] studied the robust stability of discrete-time ILC systems in a networked control system, they only discussed the first order ILC scheme. To the best of our knowledge, the stability of high order ILC schemes with data dropouts has not been studied, and this observation motivates the present study. In this paper, the stability of both first order and high order ILC schemes with data dropouts is analyzed.

As described in previous works [34–38], there are two different types of data dropouts in ILC systems: control input signal dropouts and output measurement signal dropouts. The first type occurs when the control input is updated: during the signal transfer from the controller to the plant, the signal may be lost due to actuator failure, network failure, or data collision. The second type
is the loss of measurement data, also called intermittent measurement: during the signal transfer from the sensor to the controller, the signal may be lost due to sensor failure or network failure. In this paper, we only consider missing measurements (intermittent measurement) for the sake of convenience, but the results can be extended to control input signal dropouts. The missing measurement is described as a binary sequence specified by a conditional probability distribution, and the binary sequence obeys a Bernoulli distribution taking values of zero and one with certain probabilities. By the super vector formulation, an ILC system with missing measurements can be modeled as an asynchronous dynamical system (ADS) with rate constraints on events in the iteration domain. The stability condition can then be given in the form of linear matrix inequalities (LMIs) based on the stability theory of ADSs.

The rest of this paper is organized as follows. In Section 2, the problem is formulated and some preliminaries are given. In Section 3, the main results of this paper are established. Numerical simulations are presented in Section 4, and some conclusions are given in Section 5.
2  Problem formulation and preliminaries
In this paper, the following discrete-time linear system is considered:

    x_k(t+1) = A x_k(t) + B u_k(t)
    y_k(t)   = C x_k(t)                                          (1)

where x_k(t) ∈ R^n, u_k(t) ∈ R^m, and y_k(t) ∈ R^l are the state, input, and output variables, and A ∈ R^(n×n), B ∈ R^(n×m), and C ∈ R^(l×n) are the matrices describing the system in state space. The subscript k denotes the iteration and t denotes time. The system is operated repeatedly in the iteration domain with a desired output y_d(t), t ∈ [0, N]. The basic assumptions on this system are: 1) every operation begins at an identical initial condition; and 2) the desired trajectory y_d(t) is iteration invariant. Assume that the relative degree of the system is 1. Applying the lifting technique to the repetitive system, we can define the super vectors u_k and y_k, resulting in the plant equation y_k = H u_k, where

    u_k = [u_k(0), u_k(1), ⋯, u_k(N−1)]^T
    y_k = [y_k(1), y_k(2), ⋯, y_k(N)]^T

and

    H = [ CB           0            0            ⋯   0
          CAB          CB           0            ⋯   0
          CA²B         CAB          CB           ⋯   0
          ⋮            ⋮            ⋮            ⋱   ⋮
          CA^(N−1)B    CA^(N−2)B    CA^(N−3)B    ⋯   CB ].

Remark 1. H is a lower-triangular Toeplitz matrix whose elements are the Markov parameters of the system (refer to [39, 40] for more details). For linear time-varying systems, and for some classes of nonlinear systems, a similar representation y_k = H u_k can also be developed, the key feature again being that the matrix H is lower triangular. Hence, the results of this paper can be extended to linear time-varying systems and some classes of nonlinear systems.
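The lifted matrix H is built directly from the Markov parameters CA^iB. The following sketch (Python with NumPy; the system matrices are arbitrary placeholders, not taken from this paper) illustrates one way to assemble it.

```python
import numpy as np

def lifted_matrix(A, B, C, N):
    """Assemble the lower-triangular Toeplitz matrix H whose (i, j) block
    (i >= j) is the Markov parameter C A^(i-j) B."""
    n = A.shape[0]
    markov = []                              # C B, C A B, ..., C A^(N-1) B
    Ak = np.eye(n)
    for _ in range(N):
        markov.append(C @ Ak @ B)
        Ak = Ak @ A
    l, m = markov[0].shape                   # output / input dimensions
    H = np.zeros((N * l, N * m))
    for i in range(N):
        for j in range(i + 1):
            H[i * l:(i + 1) * l, j * m:(j + 1) * m] = markov[i - j]
    return H

# Illustrative placeholder system (not the example of Section 4)
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
H = lifted_matrix(A, B, C, N=5)
```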
2.1  First order ILC scheme with intermittent measurement

Let us consider the first order ILC law with the following update equation:

    u_{k+1}(t) = u_k(t) + γ e_k(t+1)                              (2)

where e_k(t+1) = y_d(t+1) − y_k(t+1), and γ is a constant learning gain. As shown in Fig. 1, during the transmission of the measurement signal from the sensor to the controller, the signal may be lost due to sensor failure, network failure, or data collision. It is further supposed that the plant can detect whether the output measurement y_k(t+1) is dropped or not. That is, if y_k(t+1) is delivered, the control signal is updated as u_{k+1}(t) = u_k(t) + γ e_k(t+1); otherwise, it is updated as u_{k+1}(t) = u_k(t). This can be represented by

    u_{k+1}(t) = u_k(t) + η(t) γ e_k(t+1)                         (3)

where η(t) ∈ {0, 1}, and whether it takes the value zero or one is random owing to the stochastic nature of the data loss. Obviously, η(t) is uncorrelated with u_k(t) and e_k(t). If η(t) = 0, there is a measurement dropout, and if η(t) = 1, there is no data dropout. It is assumed that

    P{η(t) = 1} = E{η(t)} = η̄

where P{·} denotes probability, E{·} denotes mathematical expectation, and η̄ is the successful data transfer rate, a known constant with 0 ≤ η̄ ≤ 1. In what follows, for simplicity, we omit the time index t of η(t).
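As a minimal sketch of the update law (3), the controller simply holds the previous input wherever the measurement packet is lost; a Bernoulli draw with mean η̄ plays the role of η(t). The numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_order_update(u_k, e_k, gamma, eta_bar):
    """One iteration of u_{k+1}(t) = u_k(t) + eta(t) * gamma * e_k(t+1),
    where e_k is already shifted so that e_k[t] stores e_k(t+1)."""
    eta = rng.binomial(1, eta_bar, size=len(u_k))    # 1 = packet delivered
    return u_k + eta * gamma * e_k

# Illustrative call with arbitrary data
u_next = first_order_update(np.zeros(10), np.ones(10), gamma=0.8, eta_bar=0.8)
```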
Fig. 1  The configuration of networked control systems
2.2  High order ILC scheme with intermittent measurement
Let us consider the high order ILC law with the following update equation:

    u_{k+1}(t) = ρ_1 u_k(t) + ρ_2 u_{k−1}(t) + ⋯ + ρ_n u_{k−n+1}(t)
                 + θ_1 e_k(t+1) + θ_2 e_{k−1}(t+1) + ⋯ + θ_n e_{k−n+1}(t+1)          (4)

where ρ_1, ρ_2, ⋯, ρ_n and θ_1, θ_2, ⋯, θ_n are constant gains. The control input u_{k+1}(t) is calculated using the past operation information u_k(t), ⋯, u_{k−n+1}(t) and e_k(t+1), ⋯, e_{k−n+1}(t+1). In this update, the output measurements y_k(t+1), ⋯, y_{k−n+1}(t+1) may be missed because of network transfer or sensor failure. It is again supposed that the plant can detect whether the output measurements y_k(t+1), ⋯, y_{k−n+1}(t+1) are dropped or not. That is, if y_i(t+1) is delivered, then e_i(t+1) = y_d(t+1) − y_i(t+1); otherwise, e_i(t+1) is set to 0 rather than to e_i(t+1) = y_d(t+1) − 0. This can be represented by

    u_{k+1}(t) = ρ_1 u_k(t) + ρ_2 u_{k−1}(t) + ⋯ + ρ_n u_{k−n+1}(t)
                 + η_1(t) θ_1 e_k(t+1) + η_2(t) θ_2 e_{k−1}(t+1) + ⋯ + η_n(t) θ_n e_{k−n+1}(t+1)          (5)

where the data dropout indicator η_i(t), i = 1, 2, ⋯, n, is a binary random parameter with the same meaning as η(t) in (3). If η_i(t) = 0, then y_{k−i+1}(t+1) is missed; if η_i(t) = 1, then y_{k−i+1}(t+1) is delivered successfully. In what follows, for simplicity, we also omit the time index t of η_i(t). Because of the stochastic nature of the data loss, the measurement missing processes in two different iterations are uncorrelated. Thus, the η_i are mutually independent random variables, i.e., E{η_i η_j} = E{η_i}E{η_j} for i ≠ j. It is also assumed that E{η_i} = η̄_i and that η̄_i is known.
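A sketch of the high order update (5): the controller keeps the last n input and error super vectors and multiplies each stored error by an independent dropout indicator, so a lost measurement enters the update as zero. The gains, horizon, and dropout rates below are placeholders.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

def high_order_update(u_hist, e_hist, rho, theta, eta_bar):
    """u_hist, e_hist: deques of the last n input / error vectors, most
    recent first; rho, theta: gains rho_1..rho_n, theta_1..theta_n;
    eta_bar: successful transfer rates eta_bar_1..eta_bar_n."""
    u_next = np.zeros_like(u_hist[0])
    for i in range(len(rho)):
        eta_i = rng.binomial(1, eta_bar[i])          # dropout indicator
        u_next = u_next + rho[i] * u_hist[i] + eta_i * theta[i] * e_hist[i]
    return u_next

# Illustrative second order call (n = 2) with arbitrary numbers
u_hist = deque([np.zeros(10), np.zeros(10)], maxlen=2)
e_hist = deque([np.ones(10), np.ones(10)], maxlen=2)
u_next = high_order_update(u_hist, e_hist, rho=[0.5, 0.5],
                           theta=[0.6, 0.2], eta_bar=[0.8, 0.8])
```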
3  Stability analysis of the intermittent ILC system

In this section, we first introduce some results on ADSs, and then use these results to analyze the stability of the intermittent ILC system.

3.1  Asynchronous dynamical systems

ADSs, like hybrid systems, incorporate both continuous and discrete dynamics. The continuous dynamics are governed by differential or difference equations, whereas the discrete dynamics are governed by finite automata that are driven asynchronously by external discrete events with fixed rates[41]. Consider a simplified ADS with rate constraints, described by a set of difference equations

    x(k+1) = f_s(x(k)),  s = 1, 2, ⋯, M                           (6)

where x(k) ∈ R^n is the continuous-valued state, and s = 1, 2, ⋯, M indexes the set of discrete states, which has a corresponding set of rates r_1, r_2, ⋯, r_M. These rates represent the fraction of time, on average, that each discrete state occurs; thus Σ_{i=1}^{M} r_i = 1.

Before proceeding, a basic stability definition from [41] is repeated here. The ADS is said to be exponentially stable if its trajectories satisfy

    lim_{k→∞} α^k ‖x(k)‖ = 0                                      (7)

for some α > 1. The largest such α > 1 is referred to as the decay rate of the system. Clearly, exponential stability implies uniform asymptotic stability. The stability of such an ADS can be established by the following theorem[41].

Theorem 1. Given an ADS defined by (6), if there exist a Lyapunov function V(x): R^n → R_+ and scalars α_1, α_2, ⋯, α_M corresponding to each rate such that

    α_1^{r_1} α_2^{r_2} ⋯ α_M^{r_M} ≥ α > 1                        (8)

and

    V(x(k+1)) − V(x(k)) ≤ (α_s^{−2} − 1) V(x(k)),  s = 1, 2, ⋯, M   (9)

then the ADS is exponentially stable.

Proof. Suppose that the discrete state transitions of any trajectory of the system occur at times 0 = t_1 < t_2 < t_3 < ⋯. Then, for t ∈ [t_k, t_{k+1}], condition (9) gives

    V(x(t_{k+1})) ≤ α_s^{−2} V(x(t_k))   or   ln V(x(t_{k+1})) ≤ −2 ln α_s + ln V(x(t_k)).   (10)

Note that whenever a discrete state of the ADS occurs, a term α_s contributes to the right-hand side of (10). Summing these inequalities for k = 1, 2, ⋯, K − 1, and noting that in the limit the total number of times the i-th discrete event occurs equals r̃_i K as K → ∞, we obtain

    ln V(x(K)) − ln V(x(0)) ≤ −2 r̃_1 K ln α_1 − ⋯ − 2 r̃_M K ln α_M

or, by (8),

    ln V(x(K)) − ln V(x(0)) ≤ −2K ln α

so that

    V(x(k)) ≤ α^{−2k} V(x(0)).                                    (11)

From the definition of V(x(k)), there exist β_2 ≥ β_1 > 0 such that

    β_1 ‖x(k)‖² ≤ V(x(k)) ≤ β_2 ‖x(k)‖².                          (12)

Now, using (11) and (12), we get

    α^k ‖x(k)‖ ≤ √(β_2/β_1) ‖x(0)‖

in other words, lim_{k→∞} α^k ‖x(k)‖ = 0.  □

Remark 2. Theorem 1 only requires the ADS to be stable on average: it does not require every difference equation of the ADS to be stable individually, but it guarantees that the ADS is stable as a whole.

Remark 3. If the discrete state dynamics are linear, x(k+1) = Φ_s x(k) for s = 1, 2, ⋯, M, the search for a Lyapunov function of the form V(x(k)) = x^T(k) P x(k) and scalars α_1, α_2, ⋯, α_M can be cast as a bilinear matrix inequality (BMI) problem[41]. Inequalities (8) and (9) can be rewritten as

    r_1 log α_1 + r_2 log α_2 + ⋯ + r_M log α_M > 0               (13)

and

    Φ_s^T P Φ_s ≤ α_s^{−2} P,  s = 1, 2, ⋯, M.                    (14)
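Remark 3 suggests fixing the scalars α_s first; (14) then becomes an LMI in P, and (13) is a scalar check. A minimal feasibility sketch is given below, assuming CVXPY with an SDP-capable solver such as SCS is available; the function name and tolerance are my own.

```python
import numpy as np
import cvxpy as cp

def ads_lmi_feasible(Phis, rates, alphas, eps=1e-6):
    """Check (13)-(14) for fixed alphas: sum_s r_s log(alpha_s) > 0 and
    Phi_s' P Phi_s <= alpha_s^{-2} P for a common P > 0."""
    if sum(r * np.log(a) for r, a in zip(rates, alphas)) <= 0:
        return False, None                            # condition (13) fails
    n = Phis[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> eps * np.eye(n)]
    for Phi, a in zip(Phis, alphas):
        cons.append(a ** (-2) * P - Phi.T @ P @ Phi >> 0)   # condition (14)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    ok = prob.status in ("optimal", "optimal_inaccurate")
    return ok, (P.value if ok else None)
```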
3.2  Stability of the first order intermittent ILC

In the super vector formulation, (3) has the form

    u_{k+1} = u_k + Γ̂ e_k                                         (15)

where Γ̂ = η Γ, Γ = diag{γ}_{N×N}, e_k = y_d − y_k, and

    e_k = [e_k(1), e_k(2), ⋯, e_k(N)]^T
    y_d = [y_d(1), y_d(2), ⋯, y_d(N)]^T.

As y_{k+1} = H u_{k+1} and e_{k+1} = y_d − y_{k+1}, from (15) we have

    e_{k+1} = (I − H Γ̂) e_k.                                      (16)

Notice that

    P{I − H Γ̂ = I − H Γ} = η̄,   P{I − H Γ̂ = I} = 1 − η̄.          (17)

Hence, (16) can be considered as an ADS with two events, e_{k+1} = (I − H Γ) e_k and e_{k+1} = e_k. The average occurrence rate of the event e_{k+1} = (I − H Γ) e_k is η̄, and the average occurrence rate of the event e_{k+1} = e_k is 1 − η̄. The following theorem can be used to test the stability of such an ILC process.

Theorem 2. For the system setup (16), if there exist a Lyapunov function V(e_k) = e_k^T P e_k and constants α_1, α_2 such that

    α_1^{η̄} α_2^{1−η̄} > 1
    (I − H Γ)^T P (I − H Γ) ≤ α_1^{−2} P                          (18)
    P ≤ α_2^{−2} P

then the system is exponentially stable.

Proof. It follows directly from Theorem 1.  □

Remark 4. Different from the system (6), (16) is an ADS with rate constraints on events in the iteration domain. The occurrence rate of an event represents the fraction of the total number of iterations in which the corresponding discrete state occurs.
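Theorem 2 is Theorem 1 applied to the two iteration-domain events of (16), so the feasibility sketch given after Remark 3 can be reused directly. The plant, gain, and trial scalars below are placeholders.

```python
import numpy as np
# Reuses lifted_matrix(...) and ads_lmi_feasible(...) from the earlier sketches.

A = np.array([[0.5, 0.1], [0.0, 0.3]])       # placeholder plant
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
N, gamma, eta_bar = 5, 0.8, 0.8

H = lifted_matrix(A, B, C, N)
Gamma = gamma * np.eye(N)
Phi1 = np.eye(N) - H @ Gamma                 # measurement delivered
Phi2 = np.eye(N)                             # measurement dropped

ok, P = ads_lmi_feasible([Phi1, Phi2], rates=[eta_bar, 1 - eta_bar],
                         alphas=(1.10, 0.85))  # trial decay scalars
print("Theorem 2 LMIs feasible:", ok)
```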
3.3  Stability of the high order intermittent ILC

In the super vector formulation, (5) has the form

    u_{k+1} = Λ_1 u_k + Λ_2 u_{k−1} + ⋯ + Λ_n u_{k−n+1}
              + Ξ_1 Θ_1 e_k + Ξ_2 Θ_2 e_{k−1} + ⋯ + Ξ_n Θ_n e_{k−n+1}          (19)

where Λ_i = diag{ρ_i}_{N×N}, Θ_i = diag{θ_i}_{N×N}, and Ξ_i = diag{η_i}_{N×N}. Define Y_k, U_k, E_k, Y_d, and H as

    Y_k = [y_k^T, y_{k−1}^T, ⋯, y_{k−n+1}^T]^T
    U_k = [u_k^T, u_{k−1}^T, ⋯, u_{k−n+1}^T]^T
    E_k = [e_k^T, e_{k−1}^T, ⋯, e_{k−n+1}^T]^T
    Y_d = [y_d^T, y_d^T, ⋯, y_d^T]^T

and H = diag{H, H, ⋯, H}. Then Y_k = H U_k, and (19) can be written as

    U_{k+1} = A U_k + F E_k

with

    A = [ Λ_1   Λ_2   ⋯   Λ_n
          I     0     ⋯   0
          ⋮           ⋱   ⋮
          0     ⋯     I   0 ],
    F = [ Ξ_1 Θ_1   Ξ_2 Θ_2   ⋯   Ξ_n Θ_n
          0         0         ⋯   0
          ⋮                   ⋱   ⋮
          0         ⋯         0   0 ].

Without loss of generality, we assume that Y_d = 0. Noting that H A = A H, we have

    E_{k+1} = Y_d − H A U_k − H F E_k = (A − H F) E_k.            (20)

Notice that

    F = [ η_1 Θ_1   η_2 Θ_2   ⋯   η_n Θ_n
          0         0         ⋯   0
          ⋮                   ⋱   ⋮
          0         ⋯         0   0 ].

Since η_i ∈ {0, 1}, F takes 2^n possible values for the different realizations of the η_i. We assume that F = F_j occurs with probability γ_j, j = 1, 2, ⋯, 2^n; thus Σ_{j=1}^{2^n} γ_j = 1.

Remark 5. γ_j is the probability of F = F_j, and it can be computed from the η̄_i. For n = 2, F has the four cases

    F_1 = [ Θ_1  Θ_2 ;  0  0 ],   F_2 = [ 0  Θ_2 ;  0  0 ],
    F_3 = [ Θ_1  0   ;  0  0 ],   F_4 = [ 0  0   ;  0  0 ]

and the corresponding probabilities are

    γ_1 = η̄_1 η̄_2,          γ_2 = (1 − η̄_1) η̄_2,
    γ_3 = η̄_1 (1 − η̄_2),    γ_4 = (1 − η̄_1)(1 − η̄_2).

Hence, (20) can be considered as an ADS with 2^n events, where the event E_{k+1} = (A − H F_j) E_k has probability γ_j. The following theorem can be used to test the stability of such a high order ILC process.

Theorem 3. For the system setup (20), if there exist a Lyapunov function V(E_k) = E_k^T P E_k and constants α_1, α_2, ⋯, α_{2^n} such that

    α_1^{γ_1} α_2^{γ_2} ⋯ α_{2^n}^{γ_{2^n}} > 1
    (A − H F_j)^T P (A − H F_j) ≤ α_j^{−2} P,  j = 1, 2, ⋯, 2^n   (21)

then the system is exponentially stable.
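The 2^n realizations F_j and their probabilities γ_j can be enumerated mechanically from Θ_1, ⋯, Θ_n and η̄_1, ⋯, η̄_n; the sketch below does so (the enumeration order and names are mine and need not match the indexing of Remark 5).

```python
import numpy as np
from itertools import product

def dropout_patterns(Thetas, eta_bars):
    """Return all (F_j, gamma_j): F_j is the lifted dropout matrix whose
    first block row is [eta_1*Theta_1 ... eta_n*Theta_n], and gamma_j is
    its probability under independent Bernoulli indicators."""
    n, N = len(Thetas), Thetas[0].shape[0]
    patterns = []
    for etas in product((1, 0), repeat=n):            # all 2^n realizations
        top = np.hstack([eta * Th for eta, Th in zip(etas, Thetas)])
        Fj = np.vstack([top, np.zeros(((n - 1) * N, n * N))])
        gamma = np.prod([eb if eta else 1.0 - eb
                         for eta, eb in zip(etas, eta_bars)])
        patterns.append((Fj, gamma))
    return patterns

# n = 2 illustration in the spirit of Remark 5 (numerical values assumed)
pats = dropout_patterns([0.6 * np.eye(3), 0.2 * np.eye(3)], [0.8, 0.8])
print(sum(g for _, g in pats))                        # probabilities sum to 1
```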
4  Simulations

In this section, some examples are provided to illustrate the validity of the analysis. Let us consider the following discrete-time system:

    x_k(t+1) = [ 0.25  0.6 ] x_k(t) + [ 1 ] u_k(t)
               [ 0.6   0   ]          [ 0 ]

    y_k(t) = [ 1  −1.3 ] x_k(t).
The desired repetitive reference trajectory is

    y_d(t) = sin(8.0(t−1)/10),  t = 1, 2, ⋯, 10.

The initial conditions are y_k(0) = 0 and u_k(0) = 0 for all k. The system can be written in the super vector form with

    H = [ CB      0       0      ⋯   0
          CAB     CB      0      ⋯   0
          CA²B    CAB     CB     ⋯   0
          ⋮       ⋮       ⋮      ⋱   ⋮
          CA⁹B    CA⁸B    CA⁷B   ⋯   CB ].
4.1  The first order intermittent ILC
Consider the first order ILC update law u_{k+1}(t) = u_k(t) + 0.8 e_k(t+1). In the super vector formulation, Γ = diag{0.8}_{10×10}. Three missing-measurement cases are considered: Case 1 (η̄ = 0.8), Case 2 (η̄ = 0.5), and Case 3 (η̄ = 0.3). We solve the LMI problem of Theorem 2 with Matlab and find:

Case 1. α_1 = 1.1, α_2 = 0.85, and P is 1.0 E+8 ×

    [  2.81 −0.17  0.05  0.02  0.00  0.00  0.00  0.00  0.00  0.00
      −0.17  3.03 −0.25  0.02  0.03  0.00  0.00  0.00  0.00  0.00
       0.05 −0.25  3.08 −0.25  0.02  0.03  0.00  0.00  0.00  0.00
       0.02  0.02 −0.25  3.09 −0.25  0.02  0.03  0.00  0.00  0.00
       0.00  0.03  0.02 −0.25  3.09 −0.25  0.02  0.03  0.00  0.00
       0.00  0.00  0.03  0.02 −0.25  3.09 −0.25  0.02  0.03  0.00
       0.00  0.00  0.00  0.03  0.02 −0.25  3.09 −0.25  0.02  0.03
       0.00  0.00  0.00  0.00  0.03  0.02 −0.25  3.09 −0.25  0.02
       0.00  0.00  0.00  0.00  0.00  0.03  0.02 −0.25  3.09 −0.25
       0.00  0.00  0.00  0.00  0.00  0.00  0.03  0.02 −0.25  3.09 ].
Case 2. α_1 = 1.3, α_2 = 0.9, and P is 1.0 E+8 ×

    [  2.81 −0.16  0.04  0.02  0.00  0.00  0.00  0.00  0.00  0.00
      −0.16  3.01 −0.22  0.02  0.03  0.00  0.00  0.00  0.00  0.00
       0.04 −0.22  3.05 −0.22  0.02  0.03  0.00  0.00  0.00  0.00
       0.02  0.02 −0.22  3.06 −0.22  0.01  0.03  0.00  0.00  0.00
       0.00  0.03  0.02 −0.22  3.06 −0.22  0.01  0.03  0.00  0.00
       0.00  0.00  0.03  0.01 −0.22  3.06 −0.22  0.01  0.03  0.00
       0.00  0.00  0.00  0.03  0.01 −0.22  3.06 −0.22  0.01  0.03
       0.00  0.00  0.00  0.00  0.03  0.01 −0.22  3.06 −0.22  0.01
       0.00  0.00  0.00  0.00  0.00  0.03  0.01 −0.22  3.06 −0.22
       0.00  0.00  0.00  0.00  0.00  0.00  0.03  0.01 −0.22  3.06 ].
Case 3. α_1 = 1.2, α_2 = 0.95, and P is 1.0 E+8 ×

    [  2.43 −0.29  0.10  0.03  0.00  0.00  0.00  0.00  0.00  0.00
      −0.29  2.85 −0.47  0.08  0.04 −0.01  0.00  0.00  0.00  0.00
       0.10 −0.47  2.99 −0.51  0.08  0.05 −0.01  0.00  0.00  0.00
       0.03  0.08 −0.51  3.04 −0.53  0.08  0.05 −0.01  0.00  0.00
       0.00  0.04  0.08 −0.53  3.05 −0.53  0.08  0.05 −0.01  0.00
       0.00 −0.01  0.05  0.08 −0.53  3.05 −0.53  0.08  0.05 −0.01
       0.00  0.00 −0.01  0.05  0.08 −0.53  3.05 −0.53  0.08  0.05
       0.00  0.00  0.00 −0.01  0.05  0.08 −0.53  3.05 −0.53  0.08
       0.00  0.00  0.00  0.00 −0.01  0.05  0.08 −0.53  3.06 −0.53
       0.00  0.00  0.00  0.00  0.00 −0.01  0.05  0.08 −0.53  3.06 ].

The above proves the stability of the system when the first order ILC is used in all three cases. This means that even when only part of the measurements is delivered to the controller, the stability of the first order ILC system can still be guaranteed. Figs. 2–4 show the simulation results: Case 1 corresponds to 20 % missing measurements, Case 2 to 50 %, and Case 3 to 70 %. From Figs. 2–4, we can see that convergence is ensured in the presence of missing measurements.

Fig. 2  Simulation result of Case 1 for the first order ILC system
Fig. 3  Simulation result of Case 2 for the first order ILC system
Fig. 4  Simulation result of Case 3 for the first order ILC system
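The Case 1 experiment can be reproduced with a short script; the sketch below reuses lifted_matrix from the Section 2 sketch, and the number of iterations and the random seed are my own choices, so the realized error trajectory will differ from Figs. 2–4.

```python
import numpy as np

rng = np.random.default_rng(2011)

# System, reference, and learning gain from Section 4
A = np.array([[0.25, 0.6], [0.6, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, -1.3]])
N = 10
t = np.arange(1, N + 1)
yd = np.sin(8.0 * (t - 1) / 10.0)

H = lifted_matrix(A, B, C, N)                 # from the Section 2 sketch
gamma, eta_bar = 0.8, 0.8                     # first order law, Case 1

u = np.zeros(N)
for k in range(50):                           # 50 iterations (assumed)
    e = yd - H @ u                            # error of iteration k
    eta = rng.binomial(1, eta_bar, size=N)    # per-sample dropout indicator
    u = u + eta * gamma * e                   # update law (3)
    if k % 10 == 0:
        print(f"iteration {k:2d}: max |e| = {np.max(np.abs(e)):.4f}")
```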
4.2  The high order intermittent ILC

Consider the second order ILC update scheme

    u_{k+1}(t) = 0.5 u_k(t) + 0.5 u_{k−1}(t) + 0.6 e_k(t+1) + 0.2 e_{k−1}(t+1).
In the super vector formulation, we have

    Λ_1 = diag{0.5}_{10×10},   Λ_2 = diag{0.5}_{10×10}
    Θ_1 = diag{0.6}_{10×10},   Θ_2 = diag{0.2}_{10×10}
    Ξ_1 = diag{η_1}_{10×10},   Ξ_2 = diag{η_2}_{10×10}

and

    A − H F = [ Λ_1 − η_1 H Θ_1    Λ_2 − η_2 H Θ_2
                I                  0               ].

To examine various data dropout rates, three cases are again tested:

Case 1: η̄_1 = η̄_2 = 0.8.
Case 2: η̄_1 = η̄_2 = 0.5.
Case 3: η̄_1 = η̄_2 = 0.3.

For this ILC scheme, the LMIs in Theorem 3 take the form

    α_1^{γ_1} α_2^{γ_2} α_3^{γ_3} α_4^{γ_4} > 1
    R_j^T P R_j ≤ α_j^{−2} P,  j = 1, 2, 3, 4

where

    R_1 = [ Λ_1 − H Θ_1   Λ_2 − H Θ_2 ;  I   0 ]
    R_2 = [ Λ_1           Λ_2 − H Θ_2 ;  I   0 ]
    R_3 = [ Λ_1 − H Θ_1   Λ_2         ;  I   0 ]
    R_4 = [ Λ_1           Λ_2         ;  I   0 ]

and

    γ_1 = η̄_1 η̄_2,          γ_2 = (1 − η̄_1) η̄_2
    γ_3 = η̄_1 (1 − η̄_2),    γ_4 = (1 − η̄_1)(1 − η̄_2).

We solve the LMI problem of Theorem 3 with Matlab and find:

Case 1. α_1 = 1.3, α_2 = 0.75, α_3 = 0.8, α_4 = 0.95, and P = P_1.
Case 2. α_1 = 1.1, α_2 = 1.1, α_3 = 0.9, α_4 = 0.95, and P = P_2.
Case 3. α_1 = 1.1, α_2 = 0.9, α_3 = 0.85, α_4 = 1.2, and P = P_3.

The matrices P_1, P_2, and P_3 are given in the Appendix. The above proves the stability of the system when the second order ILC is used in all three cases. This means that even when only part of the measurements is delivered to the controller, the stability of the high order ILC system can still be guaranteed. Figs. 5–7 show the simulation results.

Fig. 5  Simulation result of Case 1 for the high order ILC system
Fig. 6  Simulation result of Case 2 for the high order ILC system
Fig. 7  Simulation result of Case 3 for the high order ILC system
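For the second order scheme, the event matrices R_j and rates γ_j can be formed explicitly and passed to the feasibility sketch from Section 3; the script below does this for the Case 1 rates, assuming H from the previous sketch and ads_lmi_feasible from Section 3 are in scope, and taking the α_j as trial values.

```python
import numpy as np

Lam1, Lam2 = 0.5 * np.eye(10), 0.5 * np.eye(10)
The1, The2 = 0.6 * np.eye(10), 0.2 * np.eye(10)
I10, Z = np.eye(10), np.zeros((10, 10))
eta1_bar = eta2_bar = 0.8                        # Case 1 rates

Abig = np.block([[Lam1, Lam2], [I10, Z]])
Hbig = np.block([[H, Z], [Z, H]])                # H built in the previous sketch

Rs, rates = [], []
for e1, e2 in [(1, 1), (0, 1), (1, 0), (0, 0)]:  # F_1..F_4 ordering of Remark 5
    F = np.block([[e1 * The1, e2 * The2], [Z, Z]])
    Rs.append(Abig - Hbig @ F)
    rates.append((eta1_bar if e1 else 1 - eta1_bar) *
                 (eta2_bar if e2 else 1 - eta2_bar))

ok, P = ads_lmi_feasible(Rs, rates, alphas=(1.3, 0.75, 0.8, 0.95))
print("Theorem 3 LMIs feasible:", ok)
```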
5  Conclusions

This paper has discussed the stability of ILC systems with data dropouts. Using the super vector formulation, the ILC system is modeled as an ADS in the iteration domain. Sufficient conditions for stability are given in terms of LMIs; the main theoretical result is that if the LMIs are solvable, the ILC system is exponentially stable. Simulation examples illustrate the validity of the analysis.
Appendix

The matrices P_1, P_2, and P_3 are the positive definite solutions (each scaled by 1.0 E+8) of the LMIs of Theorem 3 for Cases 1, 2, and 3 of the second order ILC scheme (entries omitted).
References [1] S. Arimoto, S. Kawamura, F. Miyazaki. Bettering operation of robots by learning. Journal of Robotic Systems, vol. 1, no. 2, pp. 123–140, 1984. [2] Z. Bien, J. X. Xu. Iterative Learning Control: Analysis, Design, Integration and Applications, Dordrecht, Holland: Kluwer Academic Publishers, 1998. [3] Y. Chen, C. Wen. Iterative Learning Control: Convergence, Robustness and Applications, Lecture Notes in Control and Information Sciences, Springer, 1999. [4] X. E Ruan, Z. Z. Bien, K. H. Park. Decentralized iterative learning control to large-scale industrial processes for nonrepetitive trajectories tracking. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 38, no. 1, pp. 238–252, 2008. [5] R. H. Chi, Z. S. Hou, S. L. Sui, L. Y, W. L. Yao. A new adaptive iterative learning control motivated by discretetime adaptive control. International Journal of Innovative Computing, Information and Control, vol. 4, no. 6, pp. 1267–1274, 2008. [6] A. Tayebi, M. B. Zaremba. Robust iterative learning control design is straightforward for uncertain LTI systems satisfying the robust performance condition. IEEE Transactions on Automatic Control, vol. 48, no. 1, pp. 101–106, 2003.
[7] H. S. Ahn, K. L. Moore, Y. Chen. Stability analysis of discrete-time iterative learning control systems with interval uncertainty. Automatica, vol. 43, no. 5, pp. 892–902, 2007. [8] H. S. Ahn, K. L. Moore, Y. Chen. Iterative Learning Control: Robustness and Monotonic Convergence for Interval Systems, Germany: Springer, 2007. [9] J. X. Xu , Z. H. Qu. Robust iterative learning control for a class of nonlinear systems. Automatica, vol. 34, no. 8, pp. 983–988, 1998. [10] J. X. Xu, Y. Tan. Robust optimal design and convergence properties analysis of iterative learning control approaches. Automatica, vol. 38, no. 11, pp. 1867–1880, 2002. [11] K. H. Park, Z. Bien. A generalized iterative learning controller against initial state error. International Journal of Control, vol. 73, no. 10, pp. 871–881, 2000. [12] Y. Q. Chen, C. Wen, Z. Gong, M. Sun. An iterative learning controller with initial state learning. IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 371–376, 1999. [13] M. X. Sun, D. Wang. Initial shift issues on discrete-time iterative learning control with system relative degree. IEEE Transactions on Automatic Control, vol. 48, no. 1, pp. 144– 148, 2003.
[14] S. S. Saab. Stochastic P-type/D-type iterative learning control algorithms. International Journal of Control, vol. 76, no. 2, pp. 139–148, 2003. [15] S. S. Saab. On a discrete-time stochastic learning control algorithm. IEEE Transactions on Automatic Control, vol. 46, no. 8, pp. 1333–1336, 2001. [16] S. S. Saab. A discrete-time stochastic learning control algorithm. IEEE Transactions on Automatic Control, vol. 46, no. 6, pp. 877–887, 2001. [17] H. F. Chen. Almost sure convergence of iterative learning control for stochastic systems. Science in China Series F: Information Sciences, vol. 46, no. 1, pp. 67–79, 2003. [18] M. Norrlof, S. Gunnarsson. Disturbance aspects of iterative learning control. Engineering Applications of Artificial Intelligence, vol. 14, no. 1, pp. 87–94, 2001. [19] M. Butcher, A. Karimi, R. Longchamp. A statistical analysis of certain iterative learning control algorithms. International Journal of Control, vol. 81, no. 1, pp. 156–166, 2008. [20] W. S. Chen. Novel adaptive learning control of linear systems with completely unknown time delays. International Journal of Automation and Computing, vol. 6, no. 2, pp. 177–185, 2009. [21] D. Meng, Y. Jia, J. Du, S. Yuan. Feedback approach to design fast iterative learning controller for a class of timedelay systems. IET Control Theory and Applications, vol. 3, no. 2, pp. 225–238, 2009. [22] T. C. Yang. Networked control system: A brief survey. IEE Proceedings: Control Theory and Applications, vol. 153, no. 4, pp. 403–412, 2006. [23] J. P. Hespanha, P. Naghshtabrizi, Y. G. Xu. A Survey of recent results in networked control systems. Proceedings of the IEEE, vol. 95, no. 1, pp. 138–162, 2007. [24] W. Zhang, M. S. Branicky, S. M. Phillips. Stability of networked control systems. IEEE Control Systems Magazine, vol. 21, no. 1, pp. 84–99, 2001. [25] M. Yu, L. Wang, T. G. Chu, G. M. Xie. Stabilization of Networked Control Systems with Data Packet Dropout and Network Delays via Switching System Approach. In Proceedings of the 43rd IEEE Conference on Decision and Control, IEEE, Atlantis, Bahamas, pp. 3539–3544, 2004. [26] Q. Ling, M. D. Lemmon. Power spectral analysis of networked control systems with data dropouts. IEEE Transactions on Automatic Control, vol. 49, no. 6, pp. 955–959, 2004. [27] B. F. Wang, G. Guo. Kalman filtering with partial Markovian packet losses. International Journal of Automation and Computing, vol. 6, no. 4, pp. 395–400, 2009. [28] P. Seiler, R. Sengupta. An H∞ approach to networked control. IEEE Transactions on Automatic Control, vol. 50, no. 3, pp. 356–364, 2005. [29] J. Wu, T. W. Chen. Design of networked control systems with packet dropouts. IEEE Transactions on Automatic Control, vol. 52, no. 7, pp. 1314–1319, 2007. [30] M. Sahebsara, T. W. Chen, S. L. Shah. Optimal filtering in networked control systems with multiple packet dropout. IEEE Transactions on Automatic Control, vol. 52, no. 8, pp. 1508–1513, 2007. [31] Y. C. Tian, D. Levy. Compensation for control packet dropout in networked control systems. Information Sciences, vol. 178, no. 5, pp. 1263–1278, 2008. [32] A. V. Savkin, I. R. Petersen. Robust filtering with missing data and a deterministic description of noise and uncertainty. International Journal of System Science, vol. 28, no. 4, pp. 373–378, 1997. [33] A. V. Savkin, I. R. Petersen, S. O. R. Moheimani. Model validation and state estimation for uncertain continuoustime systems with missing discrete-continuous data. Computers and Electrical Engineering, vol. 25, no. 1, pp. 29–43, 1999.
[34] H. S. Ahn, Y. Chen, K. L. Moore. Intermittent iterative learning control. In Proceedings of IEEE International Symposium on Intelligent Control, IEEE, Munich, Germany, pp. 832–837, 2006. [35] H. S. Ahn, Y. Q. Chen, K. L. Moore. Discrete-time intermittent iterative learning control with independent data dropouts. In Proceedings of the 17th IFAC World Congress, Seoul, Korea, pp. 12442–12447, 2008. [36] C. P. Liu, J. X. Xu, J. Wu. Iterative learning control for network systems with communication delay or data dropout. In Proceedings of the 48th IEEE Conference on Decision and Control, IEEE, Shanghai, PRC, pp. 4858–4863, 2009. [37] Y. J. Pan, J. M. Horacio, T. W. Chen, L. Sheng. Effects of network communications on a class of learning controlled non-linear systems. International Journal of Systems Science, vol. 40, no. 7, pp. 757–767, 2009. [38] H. S. Ahn, K. L. Moore, Y. Q. Chen. Stability of discretetime iterative learning control with random data dropouts and delayed controlled signals in networked control systems. In Proceedings of International Conference on Control, Automation, Robotics and Vision, IEEE, Hanoi, Vietnam, pp. 757–762, 2008. [39] K. L. Moore, Y. Q. Chen, H. S. Ahn. Iterative learning control: A tutorial and big picture view. In Proceedings of the 45th IEEE Conference on Decision and Control, IEEE, San Diego, USA, pp. 2352–2357, 2006. [40] K. L. Moore. An observation about monotonic convergence in discrete-time, P-type iterative learning control. In Proceedings of IEEE International Symposium on Intelligent Control, IEEE, Mexico, USA, pp. 45–49, 2001. [41] A. Hassibi, S. P. Boyd, J. P. How. Control of asynchronous dynamical systems with rate constraints on events. In Proceedings of the 38th IEEE Conference on Decision and Control, IEEE, Phoenix, USA, pp. 1345–1351, 1999. Xu-Hui Bu received the bachelor and master degrees from Henan Polytechnic University, Jiaozuo, PRC in 2004 and 2007, respectively. He is currently a Ph. D. candidate in Beijing Jiaotong University, Beijing, PRC. His research interests include model free adaptive control, iterative learning control, and robust control. E-mail:
[email protected]

Zhong-Sheng Hou received the bachelor and master degrees in applied mathematics from Jilin University of Technology, Changchun, PRC in 1983 and 1988, respectively, and the Ph.D. degree in control theory from Northeastern University, Shenyang, PRC in 1994. From 1988 to 1992, he was a lecturer with the Department of Applied Mathematics, Shenyang Polytechnic University. He was a postdoctoral fellow with the Harbin Institute of Technology, Harbin, PRC from 1995 to 1997 and a visiting scholar with Yale University, New Haven, USA from 2002 to 2003. In 1997, he joined Beijing Jiaotong University, Beijing, PRC, where he is currently a full professor with the Department of Automatic Control and the Advanced Control Systems Laboratory, School of Electronics and Information Engineering. He is the author of the monograph Nonparametric Model and Its Adaptive Control Theory published by Science Press of China, and the holder of the invention patent Model Free Control Technique (Chinese Patent ZL 94 112504.1) issued in 2000. His research interests include model free adaptive control, learning control, and intelligent transportation systems. E-mail:
[email protected] (Corresponding author)