Submitted to the 2002 American Control Conference
ACC02-IEEE1208
Randomization in RH∞: An Approach to Controller Design with Hard/Soft Performance Specifications

Giuseppe Calafiore∗ and Fabrizio Dabbene†
Abstract. In this paper, we consider a robust controller design problem where the design objectives are divided into two categories: hard specifications and soft specifications. Hard specifications need to be met robustly against uncertainties, and are addressed using a classical H∞/µ-synthesis approach. Soft specifications are given as average performance requirements with respect to the uncertainty, and are addressed using a probabilistic design method. The key element of this approach is an algorithm for the random generation of stable transfer matrices with a bound on the H∞ norm.
1 Introduction
In the standard robust controller design problem, a nominal plant model is augmented with uncertainty and design-objective masks, to obtain an extended plant G. The control system is then usually described by means of a feedback connection of the extended plant with a block of structured uncertainties, which may include dynamic as well as static, real and complex blocks; see Figure 1. The objective of robust controller design in this setting is to determine a controller K that achieves JH(∆, K) ≤ γ for all ∆ ∈ ∆H, where JH is some performance index, usually the structured singular value of the closed-loop system, and ∆H is a norm-bounded structured uncertainty set, as described for instance in [10].
∗ Author to whom correspondence should be sent. Dipartimento di Automatica e Informatica, Politecnico di Torino. Tel.: +39-011-564.7066; Fax: +39-011-564.7099; E-mail: calafi [email protected]
† IRITI - CNR, Politecnico di Torino. Tel.: +39-011-564.7065; Fax: +39-011-564.7099; E-mail: [email protected]
Figure 1: Extended plant.

Two well-known drawbacks of this approach are its computational complexity (µ-synthesis is in general NP-hard) and the difficulty of treating, for instance, L1, L2, or time-domain design objectives. Also, the complexity of µ-synthesis grows with the number of uncertainty or performance channels (number of structured blocks in ∆) that are included in the extended model. On the other hand, the probabilistic approach [2, 4, 5, 7] to robust design is suitable for treating more general design objectives, but provides only probabilistic guarantees of performance. When used for design, this approach requires a double randomization: in the space of uncertainties and in the space of controller parameters, see for instance [9]. One of the drawbacks of this technique is that a randomly generated controller could turn out to be destabilizing for the system. More generally, the problem arises of how to appropriately select the search space for the randomized controllers. To address this problem, we here follow a strategy where the random search on the controller parameters is restricted using a suitable controller parameterization, discussed in Section 2. A similar technique has been introduced in [3].

The design approach proposed in this paper is a mixed deterministic/probabilistic one: we consider two performance channels wH → zH and wS → zS, as shown in Figure 2, where wH → zH is the channel related to the “hard” performance specifications, which need to be addressed using deterministic robust design methods (H∞ or µ-synthesis). The channel wS → zS represents additional “soft” performance specifications, which express design objectives that should be achieved on average with respect to a subset ∆S of the uncertainties ∆H. We proceed as follows: first, a controller is designed, using H∞ robust design techniques, to attain the “hard” specifications. Then, we consider a well-known parameterization KQ(s) of all robustly stabilizing controllers, in terms of a Q parameter, as discussed in Section 2.
Figure 2: Extended plant with hard and soft performance channels.

For every Q(s) such that ‖Q(s)‖∞ ≤ 1, we have a corresponding controller KQ(s) that attains the hard specifications. At this stage, we proceed with a probabilistic design, performing a double randomization on ∆ ∈ ∆S and Q(s), in order to achieve the soft specifications up to a given probabilistic confidence level, see Section 3. The main tools in this latter stage are (a) the randomization of the uncertainties, which may be performed using the techniques discussed in [1] and [2]; (b) the randomization on the Q parameter, which is discussed in Section 4 of this paper; and (c) the probabilistic evaluation of performance, which is done using the Learning Theory framework proposed in [8]. A design example illustrating this methodology is presented in Section 5.

Notation. Given the configuration in Figure 2, we denote by TH(s, K) the closed-loop transfer matrix between wH and zH when u = K(y), and by TS(s, K, ∆) the closed-loop transfer matrix between wS and zS when u = K(y) and wH = ∆zH. To indicate that the matrices (A, B, C, D) are a state-space realization of a transfer function G(s), we use the notation
\[
G(s) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] := C(sI - A)^{-1}B + D.
\]
For a square matrix X, X ≻ 0 (resp. X ⪰ 0) means that X is symmetric and positive definite (resp. positive semidefinite). In denotes the n × n identity matrix, while On,m denotes the n × m zero matrix.
2 Robust controller parameterization
The first step of the proposed approach focuses on the “hard” specifications, described in terms of H∞ performance. More precisely, for unstructured uncertainty, the controller K(s) has to be designed in order to guarantee that the closed loop satisfies the inequality
\[
\|T_H(s,K)\|_\infty \le \gamma, \tag{1}
\]
where γ represents the desired performance level. This corresponds to guaranteeing robust stability for all ∆ : ‖∆‖∞ ≤ 1/γ. In a more general setting, the uncertainty ∆ can be assumed to belong to a structured set of the type
\[
\Delta := \{\Delta : \Delta = \mathrm{blockdiag}(q_1 I_{r_1}, \ldots, q_s I_{r_s}, \Delta_1, \ldots, \Delta_b)\}, \tag{2}
\]
where qi, i = 1, . . . , s are repeated scalar parameters with multiplicities r1, . . . , rs, and ∆i, i = 1, . . . , b are full blocks. The robust performance specification then takes the form
\[
\sup_{\omega} \mu_{\Delta}(T_H(j\omega, K)) \le \gamma, \tag{3}
\]
where the structured singular value µ∆ is defined as
\[
\mu_{\Delta}(T_H) := \frac{1}{\min\{\bar\sigma(\Delta) : \Delta \in \Delta,\ \det(I - T_H \Delta) = 0\}}. \tag{4}
\]
The following well-known lemma provides a parameterization of all possible stabilizing controllers guaranteeing the bound (1), see for instance [10]. For the sake of readability, we limit our discussion to the case when the so-called regularity assumptions hold. The reader interested in the general case may refer for instance to [10].

Lemma 1 Consider a system described as
\[
G(s) = \left[\begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & 0 & D_{12} \\ C_2 & D_{21} & 0 \end{array}\right], \tag{5}
\]
with (A, B1) controllable, (A, B2) stabilizable, (C1, A) observable and (C2, A) detectable. Suppose further that the following regularity assumptions hold
\[
D_{12}^T \begin{bmatrix} C_1 & D_{12} \end{bmatrix} = \begin{bmatrix} 0 & I \end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} D_{21}^T = \begin{bmatrix} 0 \\ I \end{bmatrix}.
\]
Then the system (5) admits a stabilizing controller K(s) that guarantees ‖TH(s, K)‖∞ ≤ γ if and only if there exist matrices X ⪰ 0 and Y ⪰ 0 such that
\[
A^T X + XA + X(\gamma^{-2} B_1 B_1^T - B_2 B_2^T)X + C_1^T C_1 = 0, \tag{6}
\]
\[
AY + YA^T + Y(\gamma^{-2} C_1^T C_1 - C_2^T C_2)Y + B_1 B_1^T = 0, \tag{7}
\]
\[
\rho(XY) < \gamma^2. \tag{8}
\]
Moreover, the family of all controllers KQ(s) guaranteeing ‖TH(s, KQ)‖∞ ≤ γ may be parameterized as
\[
K_Q(s) = F(\hat K(s), Q(s)), \tag{9}
\]
where F(K̂(s), Q(s)) denotes the standard lower fractional transformation of K̂ over Q, Q(s) ∈ Q with Q := {Q(s) ∈ RH∞ : ‖Q(s)‖∞ ≤ 1}, and K̂(s) is a transfer matrix given by
\[
\hat K = \left[\begin{array}{c|cc}
A + (\gamma^{-2} B_1 B_1^T - B_2 B_2^T)X - ZYC_2^T C_2 & ZYC_2^T & ZB_2 \\ \hline
-B_2^T X & 0 & I \\
-C_2 & I & 0
\end{array}\right], \tag{10}
\]
with Z := (I − γ^{-2}Y X)^{-1}.

The importance of the above lemma is that it explicitly defines the set of all controllers KQ(s) that achieve the desired closed-loop performance (1), in terms of a single parameter Q(s) ∈ RH∞. A similar parameterization may also be easily found for problems with structured uncertainty (µ-synthesis problems). In fact, for any controller K(s) and scaling D(s) ∈ RH∞ such that D^{-1}(s) ∈ RH∞, D(s)∆ = ∆D(s), and ‖D TH(s, K) D^{-1}‖∞ ≤ γ, it is well known that the closed loop is stable and satisfies (3). Therefore, applying Lemma 1 to the augmented system
\[
\tilde G := \begin{bmatrix} D(s) & 0 \\ 0 & I \end{bmatrix} G \begin{bmatrix} D^{-1}(s) & 0 \\ 0 & I \end{bmatrix},
\]
we obtain a family of stabilizing controllers achieving the hard specifications (3). Notice, however, that in the structured case the resulting family of controllers does not contain all possible stabilizing controllers satisfying (3).
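For concreteness, the construction of the generator K̂ in (10), and of a member KQ of the family (9) for a given Q, can be sketched in Matlab code as follows. This is only a minimal illustrative sketch under the assumptions of Lemma 1: the Riccati solutions of (6)–(7) are obtained here from the stable invariant subspaces of the associated Hamiltonian matrices, ss and lft from the Control System Toolbox are assumed available, and the routine names are illustrative only.

```matlab
function KQ = kq_from_q(A, B1, B2, C1, C2, gam, Q)
% Sketch: builds the generator Khat of (10) and returns KQ = F(Khat, Q),
% cf. (9), for a given stable LTI parameter Q with ||Q||_inf <= 1.
n  = size(A, 1);
X  = ric([A,  gam^(-2)*(B1*B1') - B2*B2'; -C1'*C1, -A']);   % Riccati (6)
Y  = ric([A', gam^(-2)*(C1'*C1) - C2'*C2; -B1*B1', -A ]);   % Riccati (7)
assert(max(abs(eig(X*Y))) < gam^2, 'Coupling condition (8) is violated.');
Z  = inv(eye(n) - gam^(-2)*Y*X);
Ak = A + (gam^(-2)*(B1*B1') - B2*B2')*X - Z*Y*(C2'*C2);
Bk = [Z*Y*C2', Z*B2];
Ck = [-B2'*X; -C2];
m2 = size(B2, 2);  p2 = size(C2, 1);
Dk = [zeros(m2, p2), eye(m2); eye(p2), zeros(p2, m2)];
Khat = ss(Ak, Bk, Ck, Dk);      % transfer matrix (10)
KQ   = lft(Khat, Q);            % lower LFT of Khat over Q, eq. (9)
end

function X = ric(H)
% Stabilizing solution of F'X + XF + XRX + Q = 0 from the Hamiltonian
% H = [F, R; -Q, -F'], via its stable invariant subspace.
n = size(H, 1)/2;
[V, E] = eig(H);
U = V(:, real(diag(E)) < 0);
X = real(U(n+1:end, :) / U(1:n, :));
end
```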
3 Controller design with hard/soft performance specifications
The controllers achieving the desired hard performance JH are parameterized by the set Q defined above. This set is now chosen as the search space for controllers, in order to achieve the so-called soft specifications in a probabilistic way. Formally, given a controller KQ(s) parameterized by Q ∈ Q, and an uncertainty ∆ ∈ ∆S ⊆ ∆H, we define a cost function
\[
J_S(Q, \Delta) : \{Q \times \Delta_S\} \to [0, 1] \tag{11}
\]
relative to the soft performance channel TS(s, KQ, ∆). For instance, these performance requirements may be expressed as a function of the H2 or H∞ norm of the wS → zS channel as
\[
J_S(Q, \Delta) = \frac{\|T_S(s, K_Q, \Delta)\|_a}{1 + \|T_S(s, K_Q, \Delta)\|_a}, \qquad \text{with } a = 2 \text{ or } \infty.
\]
In general, it is also possible to consider other types of specifications on the transfer function TS(s, KQ, ∆), such as time-domain specifications with respect to a given input signal (“fuel” consumption, overshoot, etc.). The performance JS is not intended to be achieved in a worst-case sense against all ∆ ∈ ∆S, but has to be met in an average sense. To this aim, we first introduce a probability density function (pdf) f∆(∆) on the set ∆S. The objective of the controller is then to minimize the expected value of JS with respect to f∆, that is, the average performance of the controller KQ when the uncertainty is distributed according to f∆(∆). Since the above problem is computationally intractable in general, we seek an approximate solution of the problem
\[
\phi = \min_{Q \in Q} E_\Delta[J_S(Q, \Delta)] \tag{12}
\]
by means of a randomized algorithm, as proposed in [9]. This technique is based on randomization both on the uncertainty set ∆S and on the controller set parameterized by Q. First, M independent identically distributed (iid) random samples Q1, Q2, . . . , QM ∈ Q are generated. For each Qk, N iid samples ∆1, ∆2, . . . , ∆N ∈ ∆S are drawn according to f∆(∆). Then, the solution of (12) is approximated by
\[
\hat\phi_{NM} = \min_{k=1,\ldots,M} \hat E_N(Q^k), \tag{13}
\]
where the empirical mean ÊN(Qk) is an estimate of E∆[JS(Qk, ∆)] given by
\[
\hat E_N(Q^k) := \frac{1}{N} \sum_{i=1}^N J_S(Q^k, \Delta^i). \tag{14}
\]
The following lemma states the probabilistic properties of the solution (13).

Lemma 2 (Vidyasagar, [9]) Let JS(Q, ∆) : {Q × ∆S} → [0, 1]. Given δ, ε, α ∈ (0, 1), choose
\[
M \ge \frac{\log\frac{2}{\delta}}{\log\frac{1}{1-\alpha}}, \qquad N \ge \frac{\log\frac{4M}{\delta}}{2\varepsilon^2}.
\]
Then, with confidence 1 − δ, we can say that φ̂NM given by (13) is an approximate near minimum of φ in (12) to accuracy ε and level α. That is, the event
\[
\Pr\{\hat\phi_{NM} \ge \phi + \alpha\} \le \varepsilon
\]
occurs with probability greater than 1 − δ.
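As a quick illustration, the sample bounds of Lemma 2 are readily evaluated. The short computation below (an illustrative sketch; variable names are ours) uses the accuracy, level and confidence values adopted later in the example of Section 5, and essentially reproduces the rounded choices M = 50, N = 500 made there.

```matlab
% Sample-size bounds of Lemma 2 (illustration).
delta = 0.01;  epsil = 0.1;  alpha = 0.1;     % confidence 1-delta, accuracy, level
M = ceil(log(2/delta) / log(1/(1 - alpha)))   % = 51 controller samples
N = ceil(log(4*M/delta) / (2*epsil^2))        % = 497 uncertainty samples per controller
```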
A possible drawback of the preceding approach is that the number of samples N in the space ∆S turns out to be a function of the number M of samples taken in the space of the controllers; in other words, the closer we want to get to the actual minimizing controller, the more accurate the estimate of the performance function has to be. In order to avoid this disadvantage, an approach based on results from Learning Theory has been introduced in [9]. Summarizing, the probabilistic controller KQp, with
\[
Q_p := \arg\min_{k=1,\ldots,M} \frac{1}{N}\sum_{i=1}^N J_S(Q^k, \Delta^i),
\]
is guaranteed to robustly satisfy the hard specification, and to provide approximately optimal average performance on the soft channel. In the next section, we study an algorithm to randomly generate the controller parameters Qk(s).
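In pseudo-Matlab notation, the selection of the probabilistic parameter Qp just described can be outlined as follows. This is only a schematic sketch: random_Q stands for the generation procedure of Section 4, sample_DeltaS for a sampling scheme on ∆S according to f∆ (see [1, 2]), and soft_cost for the chosen performance function JS; these routine names are placeholders, not functions defined in this paper.

```matlab
% Schematic double randomization implementing (13)-(14) and the choice of Qp.
phi_hat = inf;
for k = 1:M
    Qk = random_Q();                         % placeholder: Algorithm 1, ||Q||_inf <= 1
    Ehat = 0;
    for i = 1:N
        Di = sample_DeltaS();                % placeholder: sample Delta^i ~ f_Delta on Delta_S
        Ehat = Ehat + soft_cost(Qk, Di);     % placeholder: evaluates J_S(Q^k, Delta^i)
    end
    Ehat = Ehat / N;                         % empirical mean (14)
    if Ehat < phi_hat
        phi_hat = Ehat;                      % running minimum (13)
        Qp = Qk;                             % current best parameter Q_p
    end
end
```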
4 Random generation of transfer matrices in RH∞
In this section, we present an efficient algorithm for the random generation of stable transfer matrices with a bound on the H∞ norm. Our technique is based on a parameterization of all stable transfer matrices of fixed order, and relies on a modification of the well-known Bounded-Real Lemma (see for instance [10]), which is recalled below. A different approach for generating uniform transfer functions in RH∞ is presented in [6]: this method is based on finite impulse response approximations, and appears to be currently limited to the single-input single-output case.

Lemma 3 (Bounded-Real Lemma) Let γ > 0 be given. The following two statements are equivalent:
1. G(s) = [A B; C D] ∈ RH∞^{no,ni}, and ‖G(s)‖∞ ≤ γ;
2. There exists P ≻ 0 such that the following LMI in the matrix variable P is satisfied
\[
\begin{bmatrix} PA + A^T P & PB & C^T \\ B^T P & -\gamma I & D^T \\ C & D & -\gamma I \end{bmatrix} \prec 0. \tag{15}
\]

We now state the following key corollary.
Corollary 1 Let γ > 0 be given, and let G(s) be a rational and proper transfer matrix with ni inputs, no outputs, and McMillan degree n. The following two statements are equivalent:
1. G(s) ∈ RH∞^{no,ni}, and ‖G(s)‖∞ ≤ γ;
2. There exists a minimal realization A, B, C, D of G(s), with A ∈ R^{n,n}, B ∈ R^{n,ni}, C ∈ R^{no,n}, D ∈ R^{no,ni}, such that the following LMI in the matrix variables A, B, C, D is satisfied
\[
\begin{bmatrix} A + A^T & B & C^T \\ B^T & -\gamma I & D^T \\ C & D & -\gamma I \end{bmatrix} \prec 0. \tag{16}
\]
Proof. The implication from 2. to 1. is immediately proved using Lemma 3 with P = I. To prove the implication from 1. to 2., consider a minimal realization Ā, B̄, C̄, D̄ of G(s); then, by Lemma 3, there exists P ≻ 0 such that
\[
\begin{bmatrix} P\bar A + \bar A^T P & P\bar B & \bar C^T \\ \bar B^T P & -\gamma I & \bar D^T \\ \bar C & \bar D & -\gamma I \end{bmatrix} \prec 0. \tag{17}
\]
Let P be factored as P = R^T R, where R is nonsingular, and consider the congruence transformation obtained by multiplying (17) by diag(R^{-T}, I, I) on the left and diag(R^{-1}, I, I) on the right. The LMI (17) holds if and only if the following one holds
\[
\begin{bmatrix} R\bar A R^{-1} + R^{-T}\bar A^T R^T & R\bar B & R^{-T}\bar C^T \\ \bar B^T R^T & -\gamma I & \bar D^T \\ \bar C R^{-1} & \bar D & -\gamma I \end{bmatrix} \prec 0.
\]
The statement is then proved taking
\[
A = R\bar A R^{-1}, \quad B = R\bar B, \quad C = \bar C R^{-1}, \quad D = \bar D,
\]
that is, the realization A, B, C, D is obtained from Ā, B̄, C̄, D̄ by means of a suitable similarity transformation. ✷

Remark: Notice that the previous corollary states that every stable transfer matrix G(s) of fixed order, with H∞ norm bound γ, admits a state-space realization which belongs to the feasible set of the LMI (16). In other words, this feasible set represents a convex parameterization of all fixed-order, stable transfer matrices bounded in the H∞ norm. In this setting, we may also deal with further “realistic” constraints on the transfer matrix in a very natural way. For instance, the following proposition considers the case where a bound on the spectral radius (maximum modulus of the poles) is imposed on G(s).
Proposition 1 Let λ, γ > 0 be given, and let A ∈ R^{n,n}, B ∈ R^{n,ni}, C ∈ R^{no,n}, D ∈ R^{no,ni} be such that the following LMIs are satisfied
\[
\begin{bmatrix} A + A^T & B & C^T \\ B^T & -\gamma I & D^T \\ C & D & -\gamma I \end{bmatrix} \prec 0; \tag{18}
\]
\[
\begin{bmatrix} \lambda^2 I & A \\ A^T & I \end{bmatrix} \succ 0. \tag{19}
\]
Then, the transfer matrix G(s) = [A B; C D] is stable, has H∞ norm less than γ, and the modulus of its poles is bounded by λ.

Proof. The proof of this proposition is immediate, noticing that the norm bound ‖A‖ < λ imposed by (19) implies a bound on the spectral radius of A. ✷

The above proposition is our main tool for generating random transfer matrices, which is the topic discussed in the next section.
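Since (18)–(19) involve only constant matrices, the conditions of Proposition 1 are immediate to verify numerically for a given realization, for instance through eigenvalue computations as in the following sketch (an illustrative helper, not part of the algorithm below):

```matlab
function ok = check_prop1(A, B, C, D, gam, lam)
% Numerically checks the LMIs (18)-(19) for given constant matrices.
n  = size(A, 1);  ni = size(B, 2);  no = size(C, 1);
M18 = [A + A',  B,             C';
       B',     -gam*eye(ni),   D';
       C,       D,            -gam*eye(no)];
M19 = [lam^2*eye(n),  A;
       A',            eye(n)];
ok = (max(eig((M18 + M18')/2)) < 0) && (min(eig((M19 + M19')/2)) > 0);
end
```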
4.1 Algorithm for random generation of transfer matrices
The algorithm we propose is divided into three main steps. In the first step, we randomly generate a stable system matrix A which satisfies the Lyapunov condition A + A^T ≺ 0 and the norm bound (19). Next, we generate the D matrix, uniformly in the spectral norm ball ‖D‖ < γ, or set D = 0 if strictly proper transfer matrices are required. In the third step, we randomly select the B and C matrices in the feasible set of the LMI (18), A and D being fixed. The detailed algorithm is reported below, using pseudo-Matlab code notation when needed. In the following, ‖X‖_F denotes the Frobenius norm of a matrix X, and ‖X‖ denotes its spectral (maximum singular value) norm.

Algorithm 1 Given n, ni, no, γ, λ > 0, returns a random transfer matrix G(s) ∈ RH∞^{no,ni} of degree n, such that ‖G(s)‖∞ < γ, and with magnitude of the poles less than λ.

1. Generation of A
 (a) Generate A = randn(n, n) (matrix of independent, zero-mean, unit-variance Gaussian variables), and w = rand (uniform random variable in [0, 1]). Set
\[
A \leftarrow \lambda\,\frac{A}{\|A\|_F}\,w^{1/n^2}.
\]
 (b) If A is unstable, then reflect the unstable eigenvalues of A with respect to the imaginary axis;
 (c) Solve for P ≻ 0 the Lyapunov equation PA + A^T P = −I, and factor P as P = R^T R;
 (d) Apply the similarity transformation A ← RAR^{-1}. Notice that, after the similarity transformation, we have that A + A^T = −R^{-T}R^{-1}.

2. Generation of D
 (a) If a strictly proper transfer matrix is required, set D = 0;
 (b) Else, generate a random matrix D ∈ R^{no,ni}, uniformly in the spectral norm ball ‖D‖ < γ, see comments below.

3. Generation of X := [B C^T]
 (a) Generate a normalized random direction in the space of X: X = randn(n, ni + no), X ← X/‖X‖_F.
 (b) Compute the step size α, such that αX is on the boundary of the feasible set of the LMI
\[
\begin{bmatrix} A + A^T & \alpha X \\ \alpha X^T & -R_D \end{bmatrix} \prec 0, \qquad
R_D := \begin{bmatrix} \gamma I & -D^T \\ -D & \gamma I \end{bmatrix}. \tag{20}
\]
 This may be done as follows:
 • Let Y = R^T(X R_D^{-1} X^T)R;
 • Compute the step size: α = 1/√(λ_max(Y));
 • X ← αX.
 (c) Shrink X inside the feasible set, multiplying it by the volumetric factor z = w^{1/(n(ni+no))}, where w is uniformly distributed in [0, 1]: X ← zX.

Comments to the algorithm. The result of step 1.(a) of the algorithm is a matrix A drawn uniformly from the Frobenius norm ball of radius λ, see for instance [2]. Notice that the norm constraint ‖A‖_F ≤ λ implies that the spectral radius of A (maximum modulus of the eigenvalues of A) is bounded by λ, as required. The further steps 1.(b)–1.(d) do not modify the moduli of the eigenvalues of A. In step 2.(b), the uniform generation of D in the spectral norm ball may be done using the techniques described in [2]. In step 3, the feasible set of the LMI (20) in the variable X is a symmetric convex set centered at the origin. In step 3.(b) we compute a point X̄ on the boundary of this set, and then, in step 3.(c), generate a random point lying on the segment joining the origin and X̄. Notice also that the proposed algorithm is not restricted to SISO transfer functions, but works without difficulty with MIMO transfer matrices.
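A direct Matlab implementation of Algorithm 1 may look as follows. This is an illustrative sketch only: lyap and chol are assumed available (lyap requires the Control System Toolbox), and, for simplicity, D is drawn here by rejection sampling from the entrywise box [−γ, γ], a simple (if possibly inefficient) substitute for the uniform generation technique of [2].

```matlab
function [A, B, C, D] = rand_rhinf(n, ni, no, gam, lam, strictly_proper)
% Random G(s) = [A B; C D] of degree n with G stable, ||G||_inf < gam,
% and poles of modulus less than lam (sketch of Algorithm 1).

% 1. Generation of A: uniform in the Frobenius norm ball of radius lam, ...
A = randn(n, n);
w = rand;
A = lam * (A / norm(A, 'fro')) * w^(1/n^2);
% ... reflection of the unstable eigenvalues across the imaginary axis, ...
[V, E] = eig(A);
e = diag(E);
idx = real(e) > 0;
e(idx) = e(idx) - 2*real(e(idx));
A = real(V * diag(e) / V);
% ... and similarity transformation enforcing A + A' < 0.
P = lyap(A', eye(n));                 % solves P*A + A'*P = -I, with P > 0
R = chol(P);                          % P = R'*R
A = R * A / R;                        % now A + A' = -inv(R')*inv(R)

% 2. Generation of D (rejection sampling in the spectral ball ||D|| < gam).
if strictly_proper
    D = zeros(no, ni);
else
    D = 2*gam*rand(no, ni) - gam;
    while norm(D) >= gam
        D = 2*gam*rand(no, ni) - gam;
    end
end

% 3. Generation of X = [B C']: random direction, step to the boundary of
%    the LMI (20), then volumetric shrinking inside the feasible set.
X  = randn(n, ni + no);
X  = X / norm(X, 'fro');
RD = [gam*eye(ni), -D'; -D, gam*eye(no)];
Y  = R' * (X / RD * X') * R;
X  = X / sqrt(max(eig((Y + Y')/2)));  % alpha*X on the boundary of (20)
X  = X * rand^(1/(n*(ni + no)));      % volumetric factor z
B  = X(:, 1:ni);
C  = X(:, ni+1:end)';
end
```

A generated sample can then be checked a posteriori, for instance with the check_prop1 helper sketched after Proposition 1, or by evaluating norm(ss(A,B,C,D),inf).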
5 Example
We considered a simplified third order model of the altitude dynamics of an aircraft. The extended plant, which includes design specification masks, as well as a description of additive real uncertainty on the plant matrix, is given as follows
z1 z2 zS = y
1 0 0 0 0 0 0 0 −0.001 0 0 0 0 0.001 0 0.0881 0 0 0 1 0 −0.1 −0.5 0 0 0 0 0 0.0881 0 0 0 0 2 0 0 0 0 0 0 0 0.0881 0 0 0 0 0.5 0 0.1 0 0.5 −1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0.0045 0 0 0 0 0 0 0 0 0 0 0 0 0.0045 0 0 0 0 0 0 0 0 0 0 0 0.0045 0 0 0 0 0 0 0 0 0 0.1 0 0.5 −1 0 0 −0.001 0 0 0 1 0 −0.1 0 −0.5 1
w1 w2 , wS u
where w1 ∈ R^3, z1 ∈ R^2 take into account the design specifications, w2 ∈ R^3, z2 ∈ R^3 refer to the plant uncertainty, and wS ∈ R, zS ∈ R represent the soft channel, which in this case is the reference-to-altitude transfer function. The control signal u ∈ R is the elevator angle, and the measured output y is the tracking error. This design setup is shown in Figure 3. In order to approximately take into account the two-block structure of the uncertainty, we included in the extended plant above suitable (constant) scalings for µ-synthesis.
Figure 3: Design setup for the example.

An H∞ design was then performed on the generalized plant above. This corresponds to closing the hard channel between wH := [w1^T w2^T]^T and zH := [z1^T z2^T]^T on a full complex uncertainty in the set ∆H = {∆ ∈ C^{6,5} : ‖∆‖ ≤ 1}. The resulting central H∞ controller
\[
K(s) = \frac{-293.4838\,(s + 0.000696)(s^2 - 0.2545s + 0.7328)}{(s + 140.4)(s + 0.0008966)(s^2 + 2.677s + 6.343)}
\]
guarantees ‖TH‖∞ ≤ 1. However, the nominal closed-loop step response on the soft channel wS → zS, reported in Figure 4, shows undesirable oscillations and overshoots.
Figure 4: Nominal step response for the central controller.

At this point, we use the method previously discussed to optimize the average (with respect to the plant uncertainty) behavior of this step response. To this end, notice that the actual plant uncertainty was originally represented by a real matrix ∆2 ∈ R^{3,3}; therefore, we choose ∆S ⊆ ∆H as the following set:
\[
\Delta_S = \left\{ \begin{bmatrix} O_{3,2} & 0 \\ 0 & \Delta_2 \end{bmatrix} : \Delta_2 \in \mathbb{R}^{3,3},\ \|\Delta_2\| \le 1 \right\},
\]
and assume a uniform probability distribution on ∆S. To optimize the time response, we choose JS = ŝ/(1 + ŝ), where ŝ denotes the sum of the magnitudes of the relative undershoot and overshoot of the response, and consider Q parameters of order nQ = 3, with spectral radius bound λ = 100. The randomized algorithm, with M = 50, N = 500 (corresponding to α = ε = 0.1, δ = 0.01 in Lemma 2), yielded the controller
\[
K(s) = \frac{1.7073\,(s - 98.72)(s + 0.0006964)(s^2 - 1.465s + 1.811)(s^2 + 3.717s + 13.81)}{(s + 140.4)(s + 1.83)(s + 0.0008963)(s^2 + 1.321s + 2.924)(s^2 + 8.959s + 32.97)}.
\]
This controller still guarantees ‖TH‖∞ ≤ 1, and provides the optimized time response depicted in Figure 5.
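The soft cost used in this example can be evaluated, for a given closed-loop model Ts of the channel wS → zS (obtained for fixed Q and ∆), along the lines sketched below. This is an illustration only, which relies on stepinfo from the Control System Toolbox to extract the overshoot and undershoot of the step response.

```matlab
function J = soft_cost_step(Ts)
% Soft cost J_S = s_hat/(1 + s_hat), with s_hat the sum of the magnitudes
% of the relative undershoot and overshoot of the step response of Ts.
S = stepinfo(Ts);                                  % Overshoot/Undershoot in percent
s_hat = abs(S.Overshoot)/100 + abs(S.Undershoot)/100;
J = s_hat / (1 + s_hat);                           % maps [0, Inf) into [0, 1)
end
```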
Figure 5: Nominal step response for the randomized controller.
6 Conclusions
In this paper we presented a mixed deterministic/probabilistic approach for robust controller design. The key feature of this method is that the search for probabilistic performance on the “soft” channel is done while maintaining deterministic robust performance on the “hard” channel. We remark that the tools presented here may also be used to solve µ-synthesis problems in a probabilistic setting: for an n-block µ problem, we first solve an H∞ synthesis problem for, say, the first block alone. Then, we parameterize all such controllers, and notice that the controller for the initial n-block problem, if it exists, must be in the parameterized family. We may then use a randomized approach to search in this parameterized family for a controller achieving the n-block robust design. The presented method relies on a technique for randomization in RH∞, which is described in Section 4. Similar results and an analogous algorithm may be easily derived for the case of discrete-time systems. Further analysis of the statistical properties of the random samples generated by means of Algorithm 1 is the subject of ongoing research.
References

[1] G. Calafiore and F. Dabbene. Polynomial-time random generation of uniform real matrices in the spectral norm ball. In Proceedings of the IFAC Symposium on Robust Control Design, Prague, June 2000.

[2] G. Calafiore, F. Dabbene, and R. Tempo. Randomized algorithms for probabilistic robustness with real and complex structured uncertainty. IEEE Trans. Aut. Control, 45(12):2218–2235, December 2000.

[3] G. Calafiore, F. Dabbene, and R. Tempo. Randomized algorithms for reduced order H∞ controller design. In Proceedings of the American Control Conference, Chicago, June 2000.

[4] X. Chen and K. Zhou. Order statistics and probabilistic robust control. Systems and Control Letters, 35:175–182, 1998.

[5] P.P. Khargonekar and A. Tikku. Randomized algorithms for robust control analysis have polynomial time complexity. In Proc. IEEE Conf. on Decision and Control, Kobe, Japan, December 1996.

[6] C.M. Lagoa, M. Sznaier, and B.R. Barmish. An algorithm for generating transfer functions uniformly distributed over H∞ balls. Personal communication. To appear in the proceedings of the 40th CDC, December 2001.

[7] R. Tempo, E.W. Bai, and F. Dabbene. Probabilistic robustness analysis: Explicit bounds for the minimum number of samples. Systems and Control Letters, 30:237–242, 1997.

[8] M. Vidyasagar. A Theory of Learning and Generalization. Springer-Verlag, London, 1997.

[9] M. Vidyasagar. Randomized algorithms for robust controller synthesis using statistical learning theory. Automatica, 37:1515–1528, 2001.

[10] K. Zhou. Essentials of Robust Control. Prentice-Hall, Upper Saddle River, 1998.