Robust Design of the Uncertainty and Disturbance Estimator ⋆

A. Castillo, R. Sanz, P. Garcia, P. Albertos
Universitat Politècnica de València, 46020 Valencia, Spain.
[email protected],
[email protected],
[email protected],
[email protected]
Abstract: In this paper, the robust design of the Uncertainty and Disturbance Estimator is tackled for a class of nonlinear systems. This strategy combines state-feedback control with a reduced-order disturbance observer. The design procedure is derived in the state-space framework, in contrast to the frequency-based design methodologies already studied in the literature. Based on Lyapunov theory, sufficient conditions to ensure closed-loop stability are given in terms of Linear Matrix Inequalities. Furthermore, a computable criterion is derived in order to obtain both the feedback gain matrix and the observer tuning ensuring robust asymptotic stability.

Keywords: Disturbance rejection, Robust control, Linear matrix inequalities, Uncertainty and Disturbance Estimator, Lyapunov stability.

⋆ This work was partially supported by projects PROMETEOII/2013/004, Conselleria de Educacion, Generalitat Valenciana, and TIN2014-56158-C4-4-P-AR, Ministerio de Economia y Competitividad, Spain.

1. INTRODUCTION

A wide variety of dynamical systems are affected by unmeasurable external disturbances or by unmodeled, possibly nonlinear, dynamics. These disturbances degrade the nominal control performance and may even destabilize the whole system, Xie and Guo (2000). Disturbance rejection is the main control objective in some industrial processes, as they operate at a fixed set-point over long periods of time while being subject to external disturbances. Therefore, due to its practical interest, disturbance rejection is a key objective in control system design, Chen et al. (2016).

Over the past decades, control techniques based on disturbance observation have attracted the attention of the research community, as evidenced by the large number of publications in this area, Chen et al. (2016). Control techniques based on the Extended State Observer (ESO) (Guo and Zhao (2011); Li et al. (2012)), the Disturbance Observer (DOB) (Jo et al. (2014); Yang et al. (2011)), the Equivalent Input Disturbance (EID) (She et al. (2011)), the Active Disturbance Rejection Control (ADRC) (Han (2009); Huang and Xue (2014)) or the Uncertainty and Disturbance Estimator (UDE) (Zhong and Rees (2004)) are all built on the idea of observing the disturbance by using the system input-output information.

Among the aforementioned strategies, the UDE-based control relies on the fact that the total disturbance can be approximated by filtering techniques. It was originally introduced by Zhong and Rees (2004) as an extension of the Time Delay Control (TDC) proposed by Youcef-Toumi and Ito (1988). Since then, many important results have been developed.
For example, the UDE was used by Talole and Phadke (2009) in order to robustify nonlinear control based on Input-Output Linearization (IOL). Several years later, Zhong et al. (2011) revealed the two-degree-of-freedom nature of the UDE. The performance of UDE-based control using different kinds of filtering techniques has also been analyzed, see Chandar and Talole (2014); Kodhanda and Talole (2016). A number of practical implementations have been carried out, demonstrating very good disturbance rejection properties, Kuperman et al. (2011); Kolhe et al. (2013); Ren and Zhong (2013); Sanz et al. (2017, 2016).

Most works concerning the UDE have been developed in the frequency domain. In this paper, the UDE-based control is designed in state-space form for a class of nonlinear systems. Based on Lyapunov theory, sufficient stability conditions are given in terms of Linear Matrix Inequalities (LMIs). Moreover, a design procedure for the UDE control parameters, based on an optimization problem subject to LMI constraints, is proposed. The optimization problem can be easily solved by numerical methods, and the resulting controller satisfies a balance between robustness, feedback gain norm and observer bandwidth. This approach differs from other traditional UDE design methodologies, where the feedback gain is chosen based on the desired closed-loop performance of the nominal model while the observer bandwidth is adjusted independently.

This paper is structured as follows. In Section 2 the system under consideration and the main assumptions are presented and discussed. In Section 3 the state-space formulation of the UDE-based control strategy is developed. A sufficient stability criterion in terms of LMIs is presented in Section 4. In Section 5 a design procedure to obtain the feedback gain and the observer bandwidth is presented. In order to clarify the main ideas, two numerical examples are presented in Section 6. Finally, in Section 7 the main conclusions of the present work are drawn.
2. PROBLEM FORMULATION AND PRELIMINARIES

Consider the class of systems represented by

ẋ(t) = f(x, t) = Ax(t) + B(u(t) + w(x)),   x(t0) = x0,   (1)
where x(t) ∈ R^n is the system state, u(t) ∈ R^m is the control action, A ∈ R^{n×n}, B ∈ R^{n×m} are known constant matrices and w : R^n → R^m is an unknown, possibly nonlinear, state-dependent function. System (1) constrains the uncertainty to belong to the range space of the matrix B. This is the so-called matched uncertainty assumption, which is rather conventional in disturbance-observer-based applications, Chen et al. (2016).

Assumption 1. The pair (A, B) is stabilizable.

Assumption 2. There exists a scalar β ≥ 0 such that

‖∂w(x)/∂x‖ ≤ β,   ∀x ∈ Br,   (2)

where Br ≜ {x ∈ R^n : ‖x‖ ≤ r}, 0 < r ≤ ∞.

Assumption 1 implies the existence of a Lyapunov function for the nominal system. Assumption 2 implies Lipschitz continuity of w(x) and also represents a sufficient condition for the existence and uniqueness of the solution of (1). In fact, direct application of Theorem 3.3 of Khalil (2002) states that if it is known that, for some x0 ∈ Br, the solution x(t) cannot leave the ball Br, then the solution is unique and defined for all t ∈ [t0, ∞). This fact will be addressed later in Theorems 1-2, where the initial state is required to belong to a region, W, such that x(t) ∈ Br.

3. PROPOSED CONTROL STRATEGY

Let ŵ(t) be an estimate of the unknown term w(x). The proposed control strategy makes use of such an estimate in order to compute the control action as

u(t) = Kx(t) − ŵ(t),   (3)
where K ∈ R^{m×n}. The first term in (3) is a state feedback that should be designed to stabilize the nominal system, while the second term can be seen as a feedforward action intended to cancel out the effect of the uncertainties.

In this paper, the estimate ŵ(t) is obtained by constructing a reduced-order observer from the system (1). Following the traditional UDE design procedure proposed by Zhong and Rees (2004), the unknown term is obtained as

w(x) = B⁺[ẋ(t) − Ax(t) − Bu(t)],   (4)

where B⁺ = (BᵀB)⁻¹Bᵀ is the left pseudoinverse of B.

Remark 1. The pseudoinverse B⁺ always exists if B has full column rank. This holds in all well-defined systems, since it means that each control input has a different effect on the system. Note that if this condition were not satisfied, then at least two columns bi, bj of B would satisfy bi = αbj, α ∈ R, and therefore ui and uj (being u = [u1, ..., ui, ..., uj, ..., um]ᵀ) would have the same effect on the system (i.e., one of them would not be necessary).
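As a small check of Remark 1 (the matrix below is an arbitrary example of ours, not taken from the paper), the left pseudoinverse B⁺ = (BᵀB)⁻¹Bᵀ used in (4) can be computed directly and verified to be a left inverse:

```python
import numpy as np

# Arbitrary full-column-rank input matrix (n = 3 states, m = 2 inputs).
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 1.0]])

B_plus = np.linalg.inv(B.T @ B) @ B.T          # left pseudoinverse, as in (4)

print(np.allclose(B_plus @ B, np.eye(2)))      # True: B+ is a left inverse of B
print(np.allclose(B_plus, np.linalg.pinv(B)))  # True: coincides with the Moore-Penrose pseudoinverse
```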
The expression (4) cannot be directly computed because the state derivative ẋ(t) is not accessible. To circumvent this issue, a filtered estimate, ŵ(t), is proposed such that

ŵ̇(t) ≜ −Ωŵ(t) + Ωw(x) = −Ωŵ(t) + ΩB⁺[ẋ(t) − Ax(t) − Bu(t)],   (5)

where Ω ≜ diag{Ω1, Ω2, ..., Ωm}, being Ωj > 0.
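As a side note (not stated explicitly at this point in the paper, but an immediate consequence of (5)), taking Laplace transforms of (5) with zero initial conditions gives, channel by channel,

ŵj(s) = Ωj/(s + Ωj) wj(s),   j = 1, ..., m,

so each Ωj is the bandwidth of a unity-DC-gain first-order low-pass filter acting on the corresponding disturbance channel; this is the usual frequency-domain reading of the UDE filter.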
Explicit knowledge of ẋ(t) can be avoided by performing the following change of variable:

ξ(t) ≜ ŵ(t) − ΩB⁺x(t).   (6)

Differentiating (6) and using (1), (5) leads to the following reduced-order observer:

ξ̇(t) = −Ωξ(t) − (Ω²B⁺ + ΩB⁺A)x(t) − Ωu(t),   ξ(0) = ξ0,
ŵ(t) = ξ(t) + ΩB⁺x(t).   (7)

Remark 2. Notice that the reduced-order observer (7) is computed using only information from the actual state x(t) and the control input u(t).

Remark 3. In order to avoid the peaking phenomenon, the initial state of the reduced-order observer (7) should be chosen as ξ0 = −ΩB⁺x0.

The error between the actual uncertainty and its estimate is defined as

e(t) ≜ w(x) − ŵ(t),   (8)

and, with (5) and (8), the dynamics of the estimation error are obtained as

ė(t) = −Ωe(t) + ẇ(x).   (9)

Remark 4. It can be seen from (9) that the estimation error decays (assuming ẇ(x) bounded) with an exponential rate α = min λ(Ω) = min Ωj. Therefore, one would be interested in choosing the bandwidths Ωj as large as possible. However, large values of Ωj could amplify the input noise and degrade the system performance, Sanz et al. (2016).

Plugging (3) into (1) and using (8) yields

ẋ(t) = (A + BK)x(t) + Be(t).   (10)
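From an implementation standpoint, (3) and (7) amount to a handful of matrix operations on the measured state x(t) and the observer state ξ(t). A minimal NumPy sketch (function name and signature are ours) makes the mapping explicit:

```python
import numpy as np

def ude_control(x, xi, A, B, K, Om):
    """Evaluate the control law (3) and the observer dynamics (7).

    x  : measured plant state, shape (n,)
    xi : observer state xi(t), shape (m,)
    Returns the control input u and the derivative xi_dot to be integrated.
    """
    Bp = np.linalg.inv(B.T @ B) @ B.T                # left pseudoinverse B+
    w_hat = xi + Om @ Bp @ x                         # output equation of (7)
    u = K @ x - w_hat                                # control law (3)
    xi_dot = -Om @ xi - (Om @ Om @ Bp + Om @ Bp @ A) @ x - Om @ u   # state equation of (7)
    return u, xi_dot
```

Only x(t) and u(t) are used, in line with Remark 2, and initializing xi = -Om @ Bp @ x0 implements the peaking-free choice of Remark 3.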
Let us define the augmented state η(t) = [xᵀ(t), eᵀ(t)]ᵀ. Then, the closed-loop dynamics can be obtained from (9)-(10), leading to

η̇(t) = [A + BK, B; 0, −Ω] η(t) + [0; Im] ẇ(x) = Āη(t) + Γ̄ẇ(x),   (11)

being Im ∈ R^{m×m} the identity matrix.

The closed-loop system (11) shows that the separation principle holds in this UDE-based control strategy, as the poles of the nominal closed loop are the poles of the observer together with the poles of the state-feedback-controlled system. However, the closed-loop stability depends on the time derivative of the disturbance term. Note that, if the system (1) were controlled with a conventional state feedback, u(t) = Kx(t), then the stability of the closed loop would depend on the term w(x). By incorporating the UDE, the stability depends instead on the time derivative of w(x), rather than on its absolute value.
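Since Ā in (11) is block upper triangular, its spectrum is the union of the spectra of A + BK and −Ω, which is exactly the separation property described above. A quick numerical check (the matrices below are illustrative placeholders of ours, not the gains designed later in the paper):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -2.0]])     # placeholder stabilizing gain
Om = np.diag([5.0])              # placeholder observer bandwidth

# Augmented matrix Abar of (11): block upper triangular.
Abar = np.block([[A + B @ K, B],
                 [np.zeros((1, 2)), -Om]])

eig_feedback = np.linalg.eigvals(A + B @ K)   # poles placed by the state feedback
eig_observer = np.linalg.eigvals(-Om)         # poles of the estimation-error dynamics (9)
eig_augmented = np.linalg.eigvals(Abar)

print(np.sort(eig_augmented))
print(np.sort(np.concatenate([eig_feedback, eig_observer])))   # same multiset of poles
```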
4. CLOSED-LOOP ANALYSIS

In this section, a sufficient condition for the robust asymptotic stability of (11) is given in terms of an LMI. The stability condition provides an upper bound for β in (2), denoted by β∗, such that the system is asymptotically stable for any w(x) satisfying β < β∗.

First, in order to obtain less conservative stability conditions, the system (11) is transformed into the descriptor form, He et al. (2005),

η̇(t) = µ(t),
0 = −µ(t) + Āη(t) + Γ̄ẇ(x),   (12)

where µ ∈ R^{n+m}.
Theorem 1. Consider the system (1) controlled by (3), (7) with given stabilizing K and Ω. Under Assumptions 1-2, the closed loop is robustly asymptotically stable for any x0 ∈ W ≜ {x0 ∈ Br : x(t) ∈ Br, ∀t ∈ [t0, ∞)} (since K and Ω are stabilizing matrices, the point x∗ = 0 is an attraction point of (1), so it is always possible to find a subset W ⊆ Br such that any trajectory x(t) of (1) starting from any x0 ∈ W lies entirely in Br) and any β such that 0 ≤ β ≤ β∗ ≜ 1/√γ∗, if the following problem is feasible:

γ∗ = argmin_{γ>0} γ   s.t.  Ψ ≺ 0,

with

Ψ = [ N2ᵀĀ + ĀᵀN2,  P − N2ᵀ + ĀᵀN3,  N2ᵀΓ̄,  0 ;
      (∗),           −N3 − N3ᵀ,        N3ᵀΓ̄,  Φᵀ ;
      (∗),           (∗),              −Im,    0 ;
      (∗),           (∗),              (∗),    −γIn ],

Φ = [In  0n×m],
being P ∈ R^{(n+m)×(n+m)} a positive definite matrix and N2, N3 ∈ R^{(n+m)×(n+m)} free matrices.

Proof. First of all, by Assumption 2 and the fact that x0 ∈ W, the derivative of the uncertainty, ẇ(x), can be norm-bounded as

‖ẇ(x)‖ = ‖(∂w(x)/∂x) ẋ‖ ≤ β‖ẋ‖,   ∀t ∈ [t0, ∞).

Hence,

‖ẇ(x)‖² ≤ β²‖ẋ‖²,   ∀t ∈ [t0, ∞),

which, by (11)-(12), can be written as the following quadratic inequality:

[µ(t); ẇ(x)]ᵀ [β²ΦᵀΦ, 0; 0, −Im] [µ(t); ẇ(x)] ≥ 0,   (13)

where Φ = [In, 0n×m] is defined such that ẋ(t) = Φµ(t).

On the other hand, to prove stability, the Lyapunov candidate function V(t) = ηᵀ(t)Pη(t) with P ≻ 0 is proposed. Its derivative is given by V̇(t) = 2ηᵀ(t)Pη̇(t). Using (12), it follows that

V̇(t) = 2ηᵀ(t)Pµ(t) + 2[ηᵀ(t)N2ᵀ + µᵀ(t)N3ᵀ][−µ(t) + Āη(t) + Γ̄ẇ(x)]
     = q̄ᵀ [ N2ᵀĀ + ĀᵀN2,  P − N2ᵀ + ĀᵀN3,  N2ᵀΓ̄ ;
             (∗),           −N3 − N3ᵀ,        N3ᵀΓ̄ ;
             (∗),           (∗),              0 ] q̄,
where q̄ ≜ [ηᵀ(t), µᵀ(t), ẇᵀ(x)]ᵀ and N2, N3 are free matrices of appropriate size. By the S-procedure, if (13) holds, then V̇(t) < 0 is implied by the existence of a positive scalar τ > 0 such that

[ N2ᵀĀ + ĀᵀN2,  P − N2ᵀ + ĀᵀN3,        N2ᵀΓ̄ ;
  (∗),           −N3 − N3ᵀ + τβ²ΦᵀΦ,    N3ᵀΓ̄ ;
  (∗),           (∗),                    −τIm ] ≺ 0.   (14)

Dividing the LMI (14) by τ (which is equivalent to scaling the matrices P, N2, N3), and applying the Schur complement to the term β²ΦᵀΦ, the LMI Ψ ≺ 0 is obtained, where γ = 1/β². Finally, if γ is taken as a design parameter, then the minimum value of γ such that Ψ ≺ 0 holds provides the maximum admissible value of β.

5. CLOSED-LOOP CONTROLLER DESIGN

In this section, an optimization problem is proposed. Its optimal solution provides the feedback controller gain, K, the observer bandwidth, Ω, and the maximum value β∗ ensuring asymptotic stability over Br. The optimization problem is based on the minimization of a cost function. The proposed cost function is a weighted sum of three terms, related to the robustness bound β∗ and to the maximum values of ‖K‖ and ‖Ω‖.

For the controller design, the matrix Ā is decomposed as

Ā = [A, B; 0, 0] + [B, 0; 0, −Im] [K, 0; 0, Ω] = A0 + B0K0.   (15)

According to (15), the augmented system is decentralized. For such systems, the design of the block-diagonal matrix K0 can be easily performed by using block-diagonal matrices in the LMI, as explained in Zečević and Šiljak (2008).

Theorem 2. Consider the system (1) controlled by (3), (7). Under Assumptions 1-2, the closed loop is robustly asymptotically stable for any x0 ∈ W and any β such that 0 ≤ β ≤ β∗ ≜ 1/√γ∗ if the following problem is feasible:

γ∗, ǫ∗ = argmin_{γ>0, ǫ>0}  λγγ + λK(κyK + κxK) + λΩ(κyΩ + κxΩ)
s.t.  Ψ̄ ≺ 0,  ΣxK ≺ 0,  ΣyK ≻ 0,  ΣxΩ ≺ 0,  ΣyΩ ≻ 0,

with

Ψ̄ = [ Ψ11,   Y1 − Y2 + ǫ(Y2ᵀA0ᵀ + XᵀB0ᵀ),   Γ̄,    0 ;
       (∗),   −ǫ(Y2ᵀ + Y2),                   ǫΓ̄,   Y2ᵀΦᵀ ;
       (∗),   (∗),                            −Im,   0 ;
       (∗),   (∗),                            (∗),   −γIn ],

Ψ11 = A0Y2 + B0X + Y2ᵀA0ᵀ + XᵀB0ᵀ,
Y2 = diag{Y2K, Y2Ω},   X = diag{XK, XΩ},
ΣxK = [−κxK In, XKᵀ; (∗), −Im],   ΣyK = [Y2K, In; (∗), κyK In],
ΣxΩ = [−κxΩ Im, XΩᵀ; (∗), −Im],   ΣyΩ = [Y2Ω, Im; (∗), κyΩ Im],

where Y1 ∈ R^{(n+m)×(n+m)} and Y2K ∈ R^{n×n} are symmetric positive definite matrices, Y2Ω = diag{y2Ω1, ..., y2Ωm} with y2Ωj ∈ R≥0, XK ∈ R^{m×n} is a real matrix, XΩ = diag{xΩ1, ..., xΩm} with xΩj ∈ R, and ǫ, γ, κxK, κyK, κxΩ, κyΩ, λγ, λK, λΩ ∈ R≥0 are positive parameters. Furthermore, the stabilizing feedback control gain and the observer bandwidth are given by

K = XK(Y2K)⁻¹,   Ω = XΩ(Y2Ω)⁻¹,

respectively, satisfying ‖K‖ < √(κyK κxK) and ‖Ω‖ < √(κyΩ κxΩ).

Proof. Let us set N2 = diag{N2K, N2Ω} and N3 = ǫN2 in (14), where N2K ∈ R^{n×n} is a symmetric positive definite matrix, N2Ω = diag{n2Ω1, ..., n2Ωm} with n2Ωj ∈ R≥0, and ǫ is a tuning scalar. The inequality (14) is then a Bilinear Matrix Inequality (BMI), as Ā, N2, P, β, τ and ǫ are undetermined parameters. So, for control design purposes, (14) needs to be linearized. Let us define Y2 ≜ N2⁻¹ = diag{N2K⁻¹, N2Ω⁻¹} and Y1 ≜ Y2ᵀPY2. Dividing (14) by τ, multiplying it by diag{Y2ᵀ, Y2ᵀ, Im} and its transpose from the left and right, respectively, and using the Schur complement on the quadratic term β²Y2ᵀΦᵀΦY2, results in

[ ĀY2 + Y2ᵀĀᵀ,   Y1 − Y2 + ǫY2ᵀĀᵀ,   Γ̄,    0 ;
  (∗),            −ǫ(Y2ᵀ + Y2),        ǫΓ̄,   Y2ᵀΦᵀ ;
  (∗),            (∗),                 −Im,   0 ;
  (∗),            (∗),                 (∗),   −γIn ] ≺ 0.   (16)

Plugging (15) into (16) and defining X ≜ K0Y2 = diag{XK, XΩ}, the LMI Ψ̄ ≺ 0 follows. Finally, as suggested in Šiljak and Stipanović (2000), the conditions XᵀX < κxI and Y2⁻¹ < κyI, which are equivalent to

[−κxI, Xᵀ; (∗), −I] ≺ 0   and   [Y2, I; (∗), κyI] ≻ 0,   (17)

respectively, ensure that ‖K0‖² = ‖Y2⁻¹XᵀXY2⁻¹‖ < κxκy². Therefore, conditions like (17) are imposed on each submatrix of X and Y2, leading to the restrictions ΣxK ≺ 0, ΣyK ≻ 0, ΣxΩ ≺ 0 and ΣyΩ ≻ 0. The theorem follows by constructing the cost index in order to find the global minimum of the weighted sum of γ, (κyK + κxK), which is related to ‖K‖max, and (κyΩ + κxΩ), which is related to ‖Ω‖max.

Remark 5. The weights in the minimization problem of Theorem 2 can be adjusted. Increasing λγ leads to a higher value of β∗, while increasing λK or λΩ contributes to reducing the norm of K or Ω, respectively.

Remark 6. In the minimization problem of Theorem 2 there are two degrees of freedom: γ and the parameter ǫ. For a fixed ǫ, the constraints are LMIs, and the optimal value of ǫ can be easily found by numerical optimization because the cost exhibits a convex behavior with respect to ǫ, as explained in Remark 5 of Fridman and Shaked (2002). This point will be illustrated in Case 1 of the next section (see Fig. 1).
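For a fixed ǫ, the problem in Theorem 2 is a standard semidefinite program and can be prototyped with an off-the-shelf modeling tool such as cvxpy. The sketch below is only an illustration under several assumptions of ours (cvxpy with the SCS solver available, the Case 1 plant and weights of Section 6, ǫ fixed at 1.1, small margins used to emulate strict inequalities); it is not the implementation used by the authors:

```python
import numpy as np
import cvxpy as cp

# Case 1 plant of Section 6 (double integrator), used here purely as an illustration.
n, m = 2, 1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Augmented matrices of (15) and the selectors used in (11)-(13).
A0 = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
B0 = np.block([[B, np.zeros((n, m))], [np.zeros((m, m)), -np.eye(m)]])
Gam = np.vstack([np.zeros((n, m)), np.eye(m)])
Phi = np.hstack([np.eye(n), np.zeros((n, m))])
N = n + m

eps = 1.1                                  # tuning scalar, fixed (outer line search, Remark 6)
lg, lK, lO = 0.95, 0.22, 0.22              # cost weights as in Case 1

# Decision variables of Theorem 2 (for m = 1 the Omega-blocks are scalars).
Y1 = cp.Variable((N, N), symmetric=True)
Y2K = cp.Variable((n, n), symmetric=True)
y2O = cp.Variable((m, m), nonneg=True)     # diagonal for m > 1
XK = cp.Variable((m, n))
xO = cp.Variable((m, m))                   # diagonal for m > 1
gam = cp.Variable(nonneg=True)
kxK, kyK, kxO, kyO = (cp.Variable(nonneg=True) for _ in range(4))

Y2 = cp.bmat([[Y2K, np.zeros((n, m))], [np.zeros((m, n)), y2O]])
X = cp.bmat([[XK, np.zeros((m, m))], [np.zeros((m, n)), xO]])

Psi11 = A0 @ Y2 + B0 @ X + Y2.T @ A0.T + X.T @ B0.T
Psi12 = Y1 - Y2 + eps * (Y2.T @ A0.T + X.T @ B0.T)
Psi = cp.bmat([                            # Psi_bar of Theorem 2, symmetric by construction
    [Psi11, Psi12, Gam, np.zeros((N, n))],
    [Psi12.T, -eps * (Y2.T + Y2), eps * Gam, Y2.T @ Phi.T],
    [Gam.T, eps * Gam.T, -np.eye(m), np.zeros((m, n))],
    [np.zeros((n, N)), Phi @ Y2, np.zeros((n, m)), -gam * np.eye(n)],
])

dim = 2 * N + m + n
constraints = [
    Psi << -1e-6 * np.eye(dim),                                          # Psi_bar < 0
    cp.bmat([[-kxK * np.eye(n), XK.T], [XK, -np.eye(m)]]) << 0,          # Sigma_xK < 0
    cp.bmat([[Y2K, np.eye(n)], [np.eye(n), kyK * np.eye(n)]]) >> 0,      # Sigma_yK > 0
    cp.bmat([[-kxO * np.eye(m), xO.T], [xO, -np.eye(m)]]) << 0,          # Sigma_xOmega < 0
    cp.bmat([[y2O, np.eye(m)], [np.eye(m), kyO * np.eye(m)]]) >> 0,      # Sigma_yOmega > 0
    Y1 >> 1e-6 * np.eye(N),
]
cost = lg * gam + lK * (kyK + kxK) + lO * (kyO + kxO)
cp.Problem(cp.Minimize(cost), constraints).solve(solver=cp.SCS)

K = XK.value @ np.linalg.inv(Y2K.value)
Om = xO.value @ np.linalg.inv(y2O.value)
print("beta* =", 1.0 / np.sqrt(gam.value), "K =", K, "Omega =", Om)
```

The outer search over ǫ suggested in Remark 6 would simply wrap this block in a loop over a grid of ǫ values and keep the solution with the smallest cost; the printed values should be comparable to (19), although the exact numbers depend on the solver and its tolerances.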
Fig. 1. Convex behavior of the cost with respect to the parameter ǫ.

6. NUMERICAL EXAMPLE

Let us consider the dynamic system

ÿ − w(ẏ, y) = u,   (18)

where w(ẏ, y) represents an uncertain, possibly nonlinear, function. The system (18) can be expressed in the form of (1) by the change of variables x = [x1, x2]ᵀ, x1 = y, x2 = ẏ, leading to

ẋ = [0, 1; 0, 0] x + [0; 1] (u(t) + w(x)) = Ax + B(u(t) + w(x)),

where it can be seen that Assumption 1 holds, as the pair (A, B) is stabilizable.

Case 1: Let us suppose the following linear parametric uncertainty, w(x) = β1x2 + β2x1, where β1, β2 ∈ R are uncertain parameters. Let us suppose that β1 ∈ [0, 0.9], β2 ∈ [0, 0.4], and let us set Br = R². Then,

∂w/∂x = (β2  β1),   ∀x ∈ R²,

‖∂w/∂x‖ ≤ β = 0.98,   ∀x ∈ R²,

where the maximum value of β is obtained for β1 = 0.9 and β2 = 0.4. Hence, Assumption 2 is satisfied.

Theorem 2 is used to design a robust stabilizing controller. The weights are chosen as λγ = 0.95, λK = λΩ = 0.22 (normalized so that λγ² + λK² + λΩ² = 1). As said in Remark 6, the minimization problem has two degrees of freedom, but it can be easily solved because the cost exhibits a convex behavior with respect to ǫ. In order to find the global minimum, the minimization problem is solved for different fixed values of ǫ. The results can be seen in Fig. 1, where the minimum is obtained for ǫ = 1.1. Then, for ǫ = 1.1, solving Theorem 2 yields an optimal β∗ = 1.67 and the matrices

K = [−0.47  −1.94],   Ω = 1.01 rad/s.   (19)

Note that the term w(x) could destabilize the system. However, as 0 ≤ β ≤ β∗, Theorem 2 ensures that the resulting controlled system is asymptotically stable over R² for all the possible values of β1, β2 inside the bounds and for any x0 ∈ R² (note that, here, as Br = R², W is also R²).

Simulation results of the closed-loop response are presented in Figs. 2-4, where the initial state has been set to x0 = [1, 1]ᵀ and the controller matrices are those given by (19). In Fig. 2 the green line represents the nominal response (i.e., the undisturbed system, β1 = β2 = 0) and the blue line the robust response (with the parameters set to β1 = 0.9, β2 = 0.4). For the robust response, the corresponding evolution of the disturbance observation error, e = w(x) − ŵ(t), is depicted in Fig. 3, where it can be seen that the observation error tends to zero.

Figure 4 is included in order to find the real limit of stability. Theorem 2 ensures stability for any β ≤ β∗. Note that there is no restriction on the relation between β1 and β2 (the restriction is only on the norm). In Fig. 4 the value β1 has been fixed to zero, and β2 has been increased until the limit of stability. The figure shows that when the condition β ≤ β∗ is satisfied the system is always stable (as expected) and, in this particular case, instability appears when β = 1.37β∗.

Fig. 2. Nominal and robust responses of the system (states x1, x2 and control u versus time, nominal and robust cases).

Fig. 3. Disturbance estimation error e = w(x) − ŵ(t).

Fig. 4. System response, up to the limit of stability, when β1 = 0 (responses shown for β = 0.5β∗, β = β∗ and β = 1.37β∗).
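Before moving on to Case 2, the following self-contained sketch (ours; integration settings chosen arbitrarily) simulates the Case 1 robust scenario with the gains (19) and the worst-case parameters β1 = 0.9, β2 = 0.4, which should qualitatively reproduce the robust response of Fig. 2:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Bp = np.linalg.inv(B.T @ B) @ B.T
K = np.array([[-0.47, -1.94]])      # feedback gain from (19)
Om = np.array([[1.01]])             # observer bandwidth from (19), rad/s
b1, b2 = 0.9, 0.4                   # worst-case parameters of Case 1

def rhs(t, z):
    x, xi = z[:2], z[2:]
    w_hat = xi + Om @ Bp @ x                        # output equation of (7)
    u = K @ x - w_hat                               # control law (3)
    w = np.array([b1 * x[1] + b2 * x[0]])           # Case 1 uncertainty
    dx = A @ x + B @ (u + w)                        # plant (1)
    dxi = -Om @ xi - (Om @ Om @ Bp + Om @ Bp @ A) @ x - Om @ u   # observer (7)
    return np.concatenate([dx, dxi])

x0 = np.array([1.0, 1.0])
xi0 = -Om @ Bp @ x0                                 # peaking-free initialization (Remark 3)
sol = solve_ivp(rhs, (0.0, 25.0), np.concatenate([x0, xi0]), max_step=0.01)
print(sol.y[:2, -1])                                # both states settle close to the origin
```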
Case 2: Now, let us consider the following nonlinear uncertainty, w(x) = β1 sin x1 + β2 tanh x2 + β3, being β1, β2, β3 ∈ R uncertain parameters. Let us suppose that β1 ∈ [0, 2], β2 ∈ [0, 2] and β3 ∈ [0, 1], and let us also set Br = R². Now,

∂w/∂x = (β1 cos x1   β2(1 − tanh² x2)),   ∀x ∈ R²,

‖∂w/∂x‖ ≤ β = 2.83,   ∀x ∈ R²,

where the maximum value of β is obtained for x1 = x2 = 0 and β1 = β2 = 2 (β = √(2² + 2²) ≈ 2.83). Assumption 2 is therefore also satisfied over R².

In this case, the disturbance has a stronger effect on the nominal system. Notice that, now, stability is not guaranteed by the controller (19), as β > β∗. For that reason the weights are modified and a new controller is designed. First, in order to increase the robustness, λγ is increased. Moreover, λK is reduced in order to obtain a more aggressive feedback gain. The controller has been designed with λγ = 0.99, λK = 0.03 and λΩ = 0.1. The optimal solution is obtained for ǫ = 1.4, yielding an optimal β∗ = 3.08 and the matrices

K = [−0.91  −6],   Ω = 0.7 rad/s.

A simulation of the closed-loop response is presented in Fig. 5, where the initial state has been set to x0 = [π/2, 1]ᵀ. The green line represents the nominal response (β1 = β2 = β3 = 0) and the blue line the robust response, where the parameters have been set to β1 = β2 = 2, β3 = 1.
Fig. 5. Nominal and robust responses of the system (states x1, x2 and control u versus time, nominal and robust cases).

7. CONCLUSIONS

In this paper, the design problem of the UDE-based controller has been tackled. The UDE is designed in the time domain, and sufficient conditions to prove robust asymptotic stability are given in terms of LMIs. Also, an optimization problem is proposed in order to directly obtain the feedback gain, K, and the observer bandwidth, Ω, satisfying a given trade-off between robustness, feedback gain norm and observer bandwidth.

REFERENCES

Chandar, T. and Talole, S. (2014). Improving the performance of UDE-based controller using a new filter design. Nonlinear Dynamics, 77(3), 753-768.
Chen, W.H., Yang, J., Guo, L., and Li, S. (2016). Disturbance-observer-based control and related methods - an overview. IEEE Transactions on Industrial Electronics, 63(2), 1083-1095.
Fridman, E. and Shaked, U. (2002). An improved stabilization method for linear time-delay systems. IEEE Transactions on Automatic Control, 47(11), 1931-1937.
Guo, B.Z. and Zhao, Z.L. (2011). On the convergence of an extended state observer for nonlinear systems with uncertainty. Systems & Control Letters, 60(6), 420-430.
Han, J. (2009). From PID to active disturbance rejection control. IEEE Transactions on Industrial Electronics, 56(3), 900-906.
He, Y., Wu, M., and She, J.H. (2005). Improved bounded-real-lemma representation and H∞ control of systems with polytopic uncertainties. IEEE Transactions on Circuits and Systems II: Express Briefs, 52(7), 380-383.
Huang, Y. and Xue, W. (2014). Active disturbance rejection control: methodology and theoretical analysis. ISA Transactions, 53(4), 963-976.
Jo, N.H., Joo, Y., and Shim, H. (2014). A study of disturbance observers with unknown relative degree of the plant. Automatica, 50(6), 1730-1734.
Khalil, H.K. (2002). Nonlinear Systems. Pearson.
Kodhanda, A. and Talole, S. (2016). Performance analysis of UDE-based controllers employing various filters. IFAC-PapersOnLine, 49(1), 83-88.
Kolhe, J.P., Shaheed, M., Chandar, T., and Talole, S. (2013). Robust control of robot manipulators based on uncertainty and disturbance estimation. International Journal of Robust and Nonlinear Control, 23(1), 104-122.
Kuperman, A., Zhong, Q.C., and Stobart, R.K. (2011). Robust control of wing rock motion. In Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, 5659-5664. IEEE.
Li, S., Yang, J., Chen, W.H., and Chen, X. (2012). Generalized extended state observer based control for systems with mismatched uncertainties. IEEE Transactions on Industrial Electronics, 59(12), 4792-4802.
Ren, B. and Zhong, Q.C. (2013). UDE-based robust control of variable-speed wind turbines. In Industrial Electronics Society, IECON 2013 - 39th Annual Conference of the IEEE, 3818-3823. IEEE.
Sanz, R., Garcia, P., Zhong, Q.C., and Albertos, P. (2016). Robust control of quadrotors based on an uncertainty and disturbance estimator. Journal of Dynamic Systems, Measurement, and Control, 138(7), 071006.
Sanz, R., García, P., Zhong, Q.C., and Albertos, P. (2017). Predictor-based control of a class of time-delay systems and its application to quadrotors. IEEE Transactions on Industrial Electronics, 64(1), 459-469.
She, J.H., Xin, X., and Pan, Y. (2011). Equivalent-input-disturbance approach - analysis and application to disturbance rejection in dual-stage feed drive control system. IEEE/ASME Transactions on Mechatronics, 16(2), 330-340.
Šiljak, D. and Stipanović, D. (2000). Robust stabilization of nonlinear systems: the LMI approach. Mathematical Problems in Engineering, 6(5), 461-493.
Talole, S.E. and Phadke, S.B. (2009). Robust input-output linearisation using uncertainty and disturbance estimation. International Journal of Control, 82(10), 1794-1803.
Xie, L.L. and Guo, L. (2000). How much uncertainty can be dealt with by feedback? IEEE Transactions on Automatic Control, 45(12), 2203-2217.
Yang, J., Chen, W.H., and Li, S. (2011). Non-linear disturbance observer-based robust control for systems with mismatched disturbances/uncertainties. IET Control Theory & Applications, 5(18), 2053-2062.
Youcef-Toumi, K. and Ito, O. (1988). A time delay controller for systems with unknown dynamics. In American Control Conference, 1988, 904-913. IEEE.
Zečević, A. and Šiljak, D. (2008). Control design with arbitrary information structure constraints. Automatica, 44(10), 2642-2647.
Zhong, Q.C., Kuperman, A., and Stobart, R. (2011). Design of UDE-based controllers from their two-degree-of-freedom nature. International Journal of Robust and Nonlinear Control, 21(17), 1994-2008.
Zhong, Q.C. and Rees, D. (2004). Control of uncertain LTI systems based on an uncertainty and disturbance estimator. Journal of Dynamic Systems, Measurement, and Control, 126(4), 905-910.