Linear Matrix Inequalities in Control

Carsten Scherer
Delft Center for Systems and Control (DCSC)
Delft University of Technology, The Netherlands

This course has been developed in collaboration with Siep Weiland, Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands.


Optimization and Control

Classically, optimization and control are highly intertwined:
• Optimal control (Pontryagin/Bellman)
• LQG-control or H2-control
• H∞-synthesis and robust control
• Model Predictive Control

Main Theme: View the control input signal and/or the feedback controller as the decision variable of an optimization problem. Desired specs are imposed as constraints on the controlled system.


Sketch of Issues

• How to distinguish easy from difficult optimization problems?
• What are the consequences of convexity in optimization?
• What is robust optimization?
• Which performance measures can be incorporated?
• How can controller synthesis be convexified?
• How can we check robustness by convex optimization?
• What are the limits for the synthesis of robust controllers?
• How can we perform systematic gain-scheduling?


Outline

• From Optimization to Convex Semi-Definite Programming
• Convex Sets and Convex Functions
• Linear Matrix Inequalities (LMIs)
• Truss-Topology Design
• LMIs and Stability
• A First Glimpse at Robustness

Optimization

The ingredients of any optimization problem are:
• A universe of decisions x ∈ X
• A subset S ⊂ X of feasible decisions
• A cost function or objective function f : S → R

Optimization Problem / Programming Problem: Find a feasible decision x ∈ S with the least possible cost f(x). In short such a problem is denoted as

    minimize f(x) subject to x ∈ S.

Questions in Optimization

• What is the least possible cost? Compute the optimal value

    fopt := inf x∈S f(x) ≥ −∞.

Convention: If S = ∅ then fopt = +∞. If fopt = −∞ the problem is said to be unbounded from below.

• How can we compute almost optimal solutions? For any chosen positive absolute error ε, determine xε ∈ S with

    fopt ≤ f(xε) ≤ fopt + ε.

By definition of the infimum such an xε exists for all ε > 0.

Questions in Optimization

• Is there an optimal solution? Does there exist xopt ∈ S with f(xopt) = fopt? If yes, the minimum is attained and we write

    fopt = min x∈S f(x).

The set of all optimal solutions is denoted as

    arg min x∈S f(x) = {x ∈ S : f(x) = fopt}.

• Is the optimal solution unique?

Recap: Infimum and Minimum of Functions

Any f : S → R (with S ≠ ∅) has an infimum l ∈ R ∪ {−∞}, denoted inf x∈S f(x). The infimum is uniquely defined by the following properties:
• l ≤ f(x) for all x ∈ S.
• l finite: for all ε > 0 there exists x ∈ S with f(x) ≤ l + ε.
  l = −∞: for all ε > 0 there exists x ∈ S with f(x) ≤ −1/ε.

If there exists x0 ∈ S with f(x0) = inf x∈S f(x) we say that f attains its minimum on S and write l = min x∈S f(x). (For example, f(x) = eˣ on S = R has infimum 0 but attains no minimum.) If it exists, the minimum is uniquely defined through the properties:
• l ≤ f(x) for all x ∈ S.
• There exists some x0 ∈ S with f(x0) = l.

Nonlinear Programming (NLP)

With decision universe X = Rⁿ, the feasible set S is often defined by constraint functions g1 : X → R, ..., gm : X → R as

    S = {x ∈ X : g1(x) ≤ 0, ..., gm(x) ≤ 0}.

The general nonlinear optimization problem is formulated as

    minimize f(x) subject to x ∈ X, g1(x) ≤ 0, ..., gm(x) ≤ 0.

By exploiting particular properties of f, g1, ..., gm (e.g. smoothness), optimization algorithms typically allow one to iteratively compute locally optimal solutions x0: there exists an ε > 0 such that x0 is optimal on S ∩ {x ∈ X : ‖x − x0‖ ≤ ε}.

Example: Quadratic Program

f : Rⁿ → R is quadratic iff there exists a symmetric matrix P with

    f(x) = [1; x]ᵀ P [1; x]

(where [1; x] denotes the column vector stacking 1 above x).

Quadratically constrained quadratic program:

    minimize [1; x]ᵀ P0 [1; x]
    subject to x ∈ Rⁿ, [1; x]ᵀ Pk [1; x] ≤ 0, k = 1, ..., m

where P0, P1, ..., Pm ∈ S^{n+1}.

Linear Programming (LP)

With the decision vector x = (x1 · · · xn)ᵀ ∈ Rⁿ consider the problem

    minimize   c1x1 + · · · + cnxn
    subject to a11x1 + · · · + a1nxn ≤ b1
               ...
               am1x1 + · · · + amnxn ≤ bm

The cost is linear and the set of feasible decisions is defined by finitely many affine inequality constraints.

Simplex algorithms or interior point methods allow one to efficiently
1. decide whether the constraint set is feasible (value < ∞),
2. decide whether the problem is bounded from below (value > −∞),
3. compute an (always existing) optimal solution if 1. & 2. are true.
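To make this concrete, here is a minimal sketch of how such an LP could be posed with YALMIP (the parser used later in this course), in its current optimize/value syntax; the data c, A, b are made up for illustration only.

    % Hypothetical LP data: minimize c'*x subject to A*x <= b.
    c = [1; 2];
    A = [-1 0; 0 -1; 1 1];
    b = [0; 0; 4];
    x = sdpvar(2,1);                 % decision vector
    diagnostics = optimize([A*x <= b], c'*x);
    if diagnostics.problem == 0      % 0 means the solver reports success
        disp(value(x))               % an optimal decision
    end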


Linear Programming (LP)

Major early contributions in the 1940s by
• George Dantzig (simplex method)
• John von Neumann (duality)
• Leonid Kantorovich (economics applications)

Leonid Khachiyan proved polynomial-time solvability in 1979. Narendra Karmarkar introduced an interior point method in 1984.

LP has numerous applications, for example in economic planning problems (business management, flight scheduling, resource allocation, finance). LPs have spurred the development of optimization theory and appear as subproblems in many optimization algorithms.


Recap

For a real or complex matrix A the inequality A ⪯ 0 means that A is Hermitian and negative semi-definite.

• A is defined to be Hermitian if A = A* = Āᵀ. If A is real then this amounts to A = Aᵀ and A is then called symmetric. All eigenvalues of Hermitian matrices are real.
• By definition A is negative semi-definite if A = A* and x*Ax ≤ 0 for all complex vectors x ≠ 0. A is negative semi-definite iff all its eigenvalues are non-positive.
• A ⪯ B means by definition: A, B are Hermitian and A − B ⪯ 0.
• A ≺ B, A ⪰ B, A ≻ B are defined/characterized analogously.
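As a quick numerical companion (not part of the original slides), the relation A ⪯ 0 can be tested in MATLAB via the Hermitian structure and the largest eigenvalue, up to a rounding tolerance:

    % Sketch: numerical test of A <= 0 (Hermitian and negative semi-definite).
    A = [-2 1; 1 -1];                          % made-up symmetric example
    tol = 1e-10;
    isHerm = norm(A - A', 'fro') <= tol;       % A = A* ?
    isNSD  = isHerm && max(eig((A + A')/2)) <= tol;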


Recap

Let A be partitioned with square diagonal blocks as

    A = [A11 · · · A1m; ... ; Am1 · · · Amm].

Then A ≺ 0 implies A11 ≺ 0, ..., Amm ≺ 0.

Prototypical Proof. Choose any vector zi ≠ 0 of length compatible with the size of Aii. Define z = (0, ..., 0, ziᵀ, 0, ..., 0)ᵀ ≠ 0 with the zero blocks compatible in size with A11, ..., Amm. This construction implies zᵀAz = ziᵀAii zi. Since A ≺ 0, we infer zᵀAz < 0. Therefore ziᵀAii zi < 0. Since zi ≠ 0 was arbitrary we infer Aii ≺ 0.

Semi-Definite Programming (SDP)

Let us now assume that the constraint functions G1, ..., Gm map X into the set of symmetric matrices, and define the feasible set S as

    S = {x ∈ X : G1(x) ⪯ 0, ..., Gm(x) ⪯ 0}.

The general semi-definite program (SDP) is formulated as

    minimize f(x) subject to x ∈ X, G1(x) ⪯ 0, ..., Gm(x) ⪯ 0.

• This includes NLPs as a special case.
• It is called convex if f and G1, ..., Gm are convex.
• It is called a linear matrix inequality (LMI) optimization problem or linear SDP if f and G1, ..., Gm are affine.

Outline

• From Optimization to Convex Semi-Definite Programming
• Convex Sets and Convex Functions
• Linear Matrix Inequalities (LMIs)
• Truss-Topology Design
• LMIs and Stability
• A First Glimpse at Robustness

Recap: Affine Sets and Functions

The set S in the vector space X is affine if λx1 + (1 − λ)x2 ∈ S for all x1, x2 ∈ S, λ ∈ R.

The matrix-valued function F defined on S is affine if the domain of definition S is affine and if F(λx1 + (1 − λ)x2) = λF(x1) + (1 − λ)F(x2) for all x1, x2 ∈ S, λ ∈ R.

For points x1, x2 ∈ S recall that
• {λx1 + (1 − λ)x2 : λ ∈ R} is the line through x1, x2;
• {λx1 + (1 − λ)x2 : λ ∈ [0, 1]} is the line segment between x1, x2.

Recap: Convex Sets and Functions

The set S in the vector space X is convex if λx1 + (1 − λ)x2 ∈ S for all x1, x2 ∈ S, λ ∈ (0, 1).

The symmetric-valued function F defined on S is convex if the domain of definition S is convex and if

    F(λx1 + (1 − λ)x2) ⪯ λF(x1) + (1 − λ)F(x2) for all x1, x2 ∈ S, λ ∈ (0, 1).

F is strictly convex if ⪯ can be replaced by ≺.

If F is real-valued, the inequalities ⪯ and ≺ are the same as the usual inequalities ≤ and < between real numbers. Therefore our definition captures the usual one for real-valued functions!

Examples of Convex and Non-Convex Sets

[Figure slide: illustrations only.]

Examples of Convex Sets

The intersection of any family of convex sets is convex.

• With a ∈ Rⁿ \ {0} and b ∈ R, the hyperplane {x ∈ Rⁿ : aᵀx = b} is affine while the half-space {x ∈ Rⁿ : aᵀx ≤ b} is convex.
• The intersection of finitely many hyperplanes and half-spaces defines a polyhedron. Any polyhedron is convex and can be described as {x ∈ Rⁿ : Ax ≤ b, Dx = e} with suitable matrices A, D and vectors b, e. A compact polyhedron is said to be a polytope.
• The set of negative semi-definite (or negative definite) matrices is convex.

Convex Combination and Convex Hull

• x ∈ X is a convex combination of x1, ..., xl ∈ X if

    x = λ1x1 + · · · + λlxl with λk ≥ 0, λ1 + · · · + λl = 1.

A convex combination of convex combinations is a convex combination.

• The convex hull co(S) of any subset S ⊂ X is defined in one of the following equivalent fashions:
  1. Set of all convex combinations of points in S.
  2. Intersection of all convex sets that contain S.

For arbitrary S the convex hull co(S) is convex.

Explicit Description of Polytopes

Any point in the convex hull of the finite set {x1, ..., xl} is given by a (not necessarily unique) convex combination of the generators x1, ..., xl.

The convex hull of finitely many points co{x1, ..., xl} is a polytope. Any polytope can be represented in this way.

In this fashion one can explicitly describe polytopes, in contrast to an implicit description such as {x ∈ Rⁿ : Ax ≤ b}. Note that the implicit description is often preferable for reasons of computational complexity. Example: {x ∈ Rⁿ : a ≤ x ≤ b} is defined by 2n inequalities but requires 2ⁿ generators for its description as a convex hull.

Examples of Convex and Non-Convex Functions

[Figure slide: illustrations only.]

On Checking Convexity of Functions

All real-valued or Hermitian-valued affine functions are convex. It is often not simple to verify whether a non-affine function is convex. The following fact (for S ⊂ Rⁿ with interior points) might help.

    The C²-map f : S → R is convex iff ∂²f(x) ⪰ 0 for all x ∈ S.

It is not easy to find convexity tests for Hermitian-valued functions in the literature. The following reduction to real-valued functions often helps.

    The Hermitian-valued map F defined on S is convex iff x ↦ z*F(x)z ∈ R is convex on S for all complex vectors z ∈ Cᵐ.

Convex Constraints

Suppose that F defined on S is convex. For all Hermitian H the strict or non-strict sub-level sets "of level H"

    {x ∈ S : F(x) ≺ H} and {x ∈ S : F(x) ⪯ H}

are convex. Note that the converse is in general not true!

• The feasible set of the convex SDP defined earlier is convex.
• Convex sets are typically described as the finite intersection of such sub-level sets. If the involved functions are even affine this is often called an LMI representation.
• Convexity is necessary for a set to have an LMI representation.

Example: Quadratic Functions

The quadratic function

    f(x) = [1; x]ᵀ [q sᵀ; s R] [1; x] = q + 2sᵀx + xᵀRx

is convex iff R is positive semi-definite. The zero sub-level set of a convex quadratic function is a half-space (R = 0) or an ellipsoid. The ellipsoid is compact if R ≻ 0.

For later use: f(x) = q + 2sᵀx + xᵀRx with R = Rᵀ is nonnegative for all x ∈ Rⁿ iff its defining matrix satisfies

    [q sᵀ; s R] ⪰ 0.
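A small numerical sanity check of this fact (the data are made up, not from the slides): if the defining matrix is positive semi-definite, sampled values of f should be nonnegative.

    % f(x) = q + 2*s'*x + x'*R*x with [q s'; s R] PSD by construction.
    q = 2; s = [1; 0]; R = eye(2);
    assert(min(eig([q s'; s R])) >= -1e-10);   % defining matrix is PSD
    for i = 1:1000
        x = randn(2,1);
        assert(q + 2*s'*x + x'*R*x >= -1e-10); % f(x) >= 0 on random samples
    end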


Important Properties

Property 1: Jensen's Inequality. If F defined on S is convex then for all x1, ..., xl ∈ S and λ1, ..., λl ≥ 0 with λ1 + · · · + λl = 1 we infer λ1x1 + · · · + λlxl ∈ S and

    F(λ1x1 + · · · + λlxl) ⪯ λ1F(x1) + · · · + λlF(xl).

A source of many inequalities in mathematics! Proof: a convex combination of convex combinations is a convex combination.

Property 2. If F and G defined on S are convex then F + G and αF for α ≥ 0, as well as x ↦ λmax(F(x)) ∈ R, are all convex. If F and G are scalar-valued then max{F, G} is convex. There are many other operations on functions that preserve convexity.

A Key Consequence

Let F be convex. Then F(x) ≺ 0 for all x ∈ co{x1, ..., xl} if and only if F(xk) ≺ 0 for all k = 1, ..., l.

Proof. We only need to show "if". Choose x ∈ co{x1, ..., xl}. Then there exist λ1 ≥ 0, ..., λl ≥ 0 that sum up to one with x = λ1x1 + · · · + λlxl. By convexity and Jensen's inequality we infer

    F(x) ⪯ λ1F(x1) + · · · + λlF(xl) ≺ 0

since the set of negative definite matrices is convex.

General Remarks on Convex Programs

• Solvers for general nonlinear programs typically determine locally optimal solutions. There are no guarantees for global optimality.
• Main feature of convex programs: locally optimal solutions are globally optimal. Convexity alone guarantees neither that the optimal value is finite, nor that there exist an optimal solution or efficient solution algorithms.
• In general, convex programs can be solved with guarantees on accuracy if one can compute (sub)gradients of the objective/constraint functions.
• Strictly feasible linear semi-definite programs are convex and can be solved very efficiently, with guarantees on accuracy at termination.

Outline

• From Optimization to Convex Semi-Definite Programming
• Convex Sets and Convex Functions
• Linear Matrix Inequalities (LMIs)
• Truss-Topology Design
• LMIs and Stability
• A First Glimpse at Robustness

Linear Matrix Inequalities (LMIs)

With the decision vector x = (x1 · · · xn)ᵀ ∈ Rⁿ a system of LMIs is

    A^1_0 + x1·A^1_1 + · · · + xn·A^1_n ⪯ 0
    ...
    A^m_0 + x1·A^m_1 + · · · + xn·A^m_n ⪯ 0

where the A^i_0, A^i_1, ..., A^i_n, i = 1, ..., m, are real symmetric data matrices.

LMI feasibility problem: Test whether there exist x1, ..., xn that render the LMIs satisfied.

LMI optimization problem: Minimize c1x1 + · · · + cnxn over all x1, ..., xn that satisfy the LMIs.

Only simple cases can be treated analytically → numerical techniques.
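In YALMIP such a feasibility test could look as follows; the data matrices are invented for illustration, and a symmetric matrix expression on the left of <= 0 is interpreted by YALMIP as a semi-definite constraint.

    % Feasibility of A0 + x(1)*A1 + x(2)*A2 <= 0 for made-up symmetric data.
    A0 = [1 0; 0 -1]; A1 = [-1 0; 0 0]; A2 = [0 0; 0 -1];
    x = sdpvar(2,1);
    F = A0 + x(1)*A1 + x(2)*A2;         % symmetric => semi-definite constraint
    diagnostics = optimize([F <= 0], []);
    if diagnostics.problem == 0
        disp(value(x))                   % one feasible decision
    end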


LMI Optimization Problems

    minimize   c1x1 + · · · + cnxn
    subject to A^i_0 + x1·A^i_1 + · · · + xn·A^i_n ⪯ 0, i = 1, ..., m

• A natural generalization of LPs with inequalities defined by the cone of positive semi-definite matrices. A considerably richer class than LPs.
• The i-th constraint can be equivalently expressed as λmax(A^i_0 + x1·A^i_1 + · · · + xn·A^i_n) ≤ 0.
• Interior-point or bundle methods make it possible to effectively decide feasibility/boundedness and to determine almost optimal solutions.
• The problem must be strictly feasible: there exists some decision x for which the constraint inequalities are strictly satisfied.

Testing Strict Feasibility

Introduce the auxiliary variable t ∈ R and consider

    A^1_0 + x1·A^1_1 + · · · + xn·A^1_n ⪯ tI
    ...
    A^m_0 + x1·A^m_1 + · · · + xn·A^m_n ⪯ tI.

Find the infimal value t* of t over these LMI constraints.

• This problem is strictly feasible. We can hence compute t* efficiently.
• If t* is negative then the original problem is strictly feasible.
• If t* is non-negative then the original problem is not strictly feasible.
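A sketch of this test with the same made-up data as above; the box bound on x only keeps the toy problem bounded and is not part of the recipe.

    % Minimize t subject to A(x) <= t*I (plus a box to bound the toy example).
    A0 = [1 0; 0 -1]; A1 = [-1 0; 0 0]; A2 = [0 0; 0 -1];
    x = sdpvar(2,1); t = sdpvar(1,1);
    F = A0 + x(1)*A1 + x(2)*A2;
    optimize([F <= t*eye(2), -10 <= x <= 10], t);
    tstar = value(t);   % tstar < 0 certifies strict feasibility of A(x) <= 0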


LMI Optimization Problems

Developments
• Bellman/Fan initialized the derivation of optimality conditions (1963)
• Jan Willems coined the terminology LMI and revealed the relation to dissipative dynamical systems (1971/72)
• Nesterov/Nemirovski exhibited the essential feature (self-concordance) for the existence of a polynomial-time solution algorithm (1988)
• Interior-point methods: Alizadeh (1992), Kamath/Karmarkar (1992)

Suggested books: Boyd/El Ghaoui (1994), El Ghaoui/Niculescu (2000), Ben-Tal/Nemirovski (2001), Boyd/Vandenberghe (2004)


What are LMIs good for?

• Many engineering optimization problems can be (often but not always easily) translated into LMI problems.
• Various computationally difficult optimization problems can be effectively approximated by LMI problems.
• In practice the description of the data is affected by uncertainty. Robust optimization problems can be handled/approximated by standard LMI problems.

In this course: How can we solve (robust) control problems with LMIs?


Outline

• From Optimization to Convex Semi-Definite Programming
• Convex Sets and Convex Functions
• Linear Matrix Inequalities (LMIs)
• Truss-Topology Design
• LMIs and Stability
• A First Glimpse at Robustness

Truss Topology Design

[Figure: example truss layout.]

Example: Truss Topology Design

• Connect nodes with N bars of lengths l = col(l1, ..., lN) (fixed) and cross-sections x = col(x1, ..., xN) (to be designed).
• Impose bounds ak ≤ xk ≤ bk on the cross-sections and lᵀx ≤ v on the total volume (weight). Abbreviate a = col(a1, ..., aN), b = col(b1, ..., bN).
• If external forces f = col(f1, ..., fM) (fixed) are applied to the nodes, the construction reacts with the node displacements d = col(d1, ..., dM). Mechanical model: A(x)d = f where A(x) is the stiffness matrix, which depends linearly on x and has to be positive semi-definite.
• The goal is to maximize stiffness, for example by minimizing the elastic stored energy f ᵀd.

Example: Truss Topology Design

Find x ∈ R^N which minimizes f ᵀd subject to the constraints

    A(x) ⪰ 0, A(x)d = f, lᵀx ≤ v, a ≤ x ≤ b.

Features
• Data: scalar v, vectors f, a, b, l, and symmetric matrices A1, ..., AN which define the linear mapping A(x) = A1x1 + · · · + ANxN.
• Decision variables: vectors x and d.
• Objective function: d ↦ f ᵀd, which happens to be linear.
• Constraints: semi-definite constraint A(x) ⪰ 0, nonlinear equality constraint A(x)d = f, and linear inequality constraints lᵀx ≤ v, a ≤ x ≤ b. The latter are interpreted elementwise!

From Truss Topology Design to LMIs

Render the LMI inequality strict. The equality constraint A(x)d = f allows us to eliminate d, which results in

    minimize f ᵀA(x)⁻¹f subject to A(x) ≻ 0, lᵀx ≤ v, a ≤ x ≤ b.

Push the objective into the constraints with an auxiliary variable:

    minimize γ subject to γ > f ᵀA(x)⁻¹f, A(x) ≻ 0, lᵀx ≤ v, a ≤ x ≤ b.

Trouble: Nonlinear inequality constraint γ > f ᵀA(x)⁻¹f.

Recap: Congruence Transformations

Given a Hermitian matrix A and a square non-singular matrix T, A → T*AT is called a congruence transformation of A.

    If T is square and non-singular then A ≺ 0 if and only if T*AT ≺ 0.

The following more general statement is also easy to remember.

    If A is Hermitian and T is nonsingular, the matrices A and T*AT have the same number of negative, zero, and positive eigenvalues.

What is true if T is not square? ... if T has full column rank?

Recap: Schur-Complement Lemma

The Hermitian block matrix

    [Q S; Sᵀ R] is negative definite

if and only if Q ≺ 0 and R − SᵀQ⁻¹S ≺ 0, if and only if R ≺ 0 and Q − SR⁻¹Sᵀ ≺ 0.

Proof. The first equivalence follows from

    [I 0; −SᵀQ⁻¹ I] [Q S; Sᵀ R] [I −Q⁻¹S; 0 I] = [Q 0; 0 R − SᵀQ⁻¹S].

The proof reveals a more general relation between the number of negative, zero, and positive eigenvalues of the three matrices.
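The lemma is easy to probe numerically; a minimal MATLAB check on made-up data:

    % Block matrix negative definite <=> Q < 0 and R - S'*inv(Q)*S < 0.
    Q = [-2 0; 0 -3]; S = [1; 0.5]; R = -1;
    cond1 = max(eig([Q S; S' R])) < 0;              % direct test
    cond2 = max(eig(Q)) < 0 && (R - S'*(Q\S)) < 0;  % Schur-complement test
    assert(cond1 == cond2)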


From Truss Topology Design to LMIs

Render the LMI inequality strict. The equality constraint A(x)d = f allows us to eliminate d, which results in

    minimize f ᵀA(x)⁻¹f subject to A(x) ≻ 0, lᵀx ≤ v, a ≤ x ≤ b.

Push the objective into the constraints with an auxiliary variable:

    minimize γ subject to γ > f ᵀA(x)⁻¹f, A(x) ≻ 0, lᵀx ≤ v, a ≤ x ≤ b.

Linearize with the Schur lemma to obtain the equivalent LMI problem

    minimize γ subject to [γ fᵀ; f A(x)] ≻ 0, lᵀx ≤ v, a ≤ x ≤ b.

Yalmip-Coding: Truss Topology Design

    minimize γ subject to [γ fᵀ; f A(x)] ≻ 0, lᵀx ≤ v, a ≤ x ≤ b.

Suppose A(x) = x1·m1m1ᵀ + · · · + xN·mNmNᵀ with the vectors mk collected in the matrix M.

The following code with Yalmip commands solves the LMI problem:

    gamma = sdpvar(1,1);
    x = sdpvar(N,1,'full');
    lmi = set([gamma f'; f M*diag(x)*M'] > 0);
    % remaining constraints as stated above (reconstructed; old Yalmip set-syntax):
    lmi = lmi + set(l'*x < v) + set(a < x) + set(x < b);
    solvesdp(lmi, gamma);

LMIs and Stability

The LTI system ẋ(t) = Ax(t) is exponentially stable if and only if there exists K with K ≻ 0 and AᵀK + KA ≺ 0.

Proof of sufficiency via trajectories. If K ≻ 0 satisfies AᵀK + KA ≺ 0, there exists ε > 0 such that AᵀK + KA + εK ≺ 0. Let x(·) be any state trajectory of the system. Then

    x(t)ᵀ(AᵀK + KA)x(t) + ε·x(t)ᵀKx(t) ≤ 0 for all t ∈ R

and hence (using ẋ(t) = Ax(t))

    d/dt [x(t)ᵀKx(t)] + ε·x(t)ᵀKx(t) ≤ 0 for all t ∈ R

and hence (integrating factor e^{εt})

    x(t)ᵀKx(t) ≤ x(t0)ᵀKx(t0)·e^{−ε(t−t0)} for all t ≥ t0.

Since λmin(K)‖x‖² ≤ xᵀKx ≤ λmax(K)‖x‖² we can conclude that

    ‖x(t)‖ ≤ √(λmax(K)/λmin(K)) · ‖x(t0)‖ · e^{−ε(t−t0)/2} for t ≥ t0.


Algebraic Proof

Sufficiency. Let λ ∈ λ(A). Choose a complex eigenvector x ≠ 0 with Ax = λx. Then the LMIs imply x*Kx > 0 and

    0 > x*(AᵀK + KA)x = λ̄·x*Kx + x*Kx·λ = 2Re(λ)·x*Kx.

This guarantees Re(λ) < 0. Therefore all eigenvalues of A are in C⁻.

Necessity if A is diagonalizable. Suppose all eigenvalues of A are in C⁻. Since A is diagonalizable there exists a complex nonsingular T with TAT⁻¹ = Λ = diag(λ1, ..., λn). Since Re(λk) < 0 for k = 1, ..., n we infer Λ* + Λ ≺ 0 and hence

    (T*)⁻¹AᵀT* + TAT⁻¹ ≺ 0.

If we left-multiply with T* and right-multiply with T (congruence) we infer Aᵀ(T*T) + (T*T)A ≺ 0. Hence K = T*T ≻ 0 satisfies the LMIs.


Algebraic Proof

Necessity if A is not diagonalizable. If A is not diagonalizable it can be transformed by similarity into its Jordan form: there exists a nonsingular T with TAT⁻¹ = Λ + J where Λ is diagonal and J has either ones or zeros on the first upper diagonal. For any ε > 0 one can even choose Tε with TεATε⁻¹ = Λ + εJ. Since Λ has the eigenvalues of A on its diagonal we still infer Λ* + Λ ≺ 0. Therefore it is possible to fix a sufficiently small ε > 0 with

    0 ≻ Λ* + Λ + ε(Jᵀ + J) = (Λ + εJ)* + (Λ + εJ).

As before we can conclude that K = Tε*Tε satisfies the LMIs.
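The stability LMI above is exactly the kind of feasibility problem YALMIP handles; a sketch with a made-up Hurwitz A. The normalization K ⪰ I and the −10⁻⁶·I margin emulate the strict inequalities, which solvers cannot impose directly.

    % Search K with K > 0 and A'*K + K*A < 0 for a made-up stable A.
    A = [0 1; -2 -3];                                % eigenvalues -1, -2
    K = sdpvar(2,2,'symmetric');
    F = [K >= eye(2), A'*K + K*A <= -1e-6*eye(2)];
    diagnostics = optimize(F, []);
    stable = (diagnostics.problem == 0);             % feasibility certifies stability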


Stability of Discrete-Time LTI Systems

The linear time-invariant dynamical system x(t+1) = Ax(t), t = 0, 1, 2, ..., is exponentially stable if and only if there exists K with K ≻ 0 and AᵀKA − K ≺ 0.

Recall how "negativity" of d/dt [x(t)ᵀKx(t)] in continuous time leads to

    AᵀK + KA = [I; A]ᵀ [0 K; K 0] [I; A] ≺ 0.

Now "negativity" of x(t+1)ᵀKx(t+1) − x(t)ᵀKx(t) leads to

    AᵀKA − K = [I; A]ᵀ [−K 0; 0 K] [I; A] ≺ 0.
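The discrete-time test is the same YALMIP sketch with the Stein-type LMI (again a made-up Schur-stable A):

    % Search K with K > 0 and A'*K*A - K < 0.
    A = [0.5 0.2; -0.1 0.8];                         % eigenvalues 0.6, 0.7
    K = sdpvar(2,2,'symmetric');
    optimize([K >= eye(2), A'*K*A - K <= -1e-6*eye(2)], []);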


Outline

• From Optimization to Convex Semi-Definite Programming
• Convex Sets and Convex Functions
• Linear Matrix Inequalities (LMIs)
• Truss-Topology Design
• LMIs and Stability
• A First Glimpse at Robustness

A First Glimpse at Robustness

With some compact set A ⊂ R^{n×n} consider the family of LTI systems

    ẋ(t) = Ax(t) with A ∈ A.

A is said to be quadratically stable if there exists K such that

    K ≻ 0 and AᵀK + KA ≺ 0 for all A ∈ A.

Why the name? V(x) = xᵀKx is a quadratic Lyapunov function.

Why relevant? It implies that all A ∈ A are Hurwitz. Even stronger: it implies, for any piecewise continuous A : R → A, exponential stability of the time-varying system ẋ(t) = A(t)x(t).

Computational Verification

If A has infinitely many elements, testing quadratic stability amounts to verifying the feasibility of an infinite number of LMIs.

Key question: How to reduce this to a standard LMI problem?

Let A be the convex hull of {A1, ..., AN}: for each A ∈ A there exist coefficients λ1 ≥ 0, ..., λN ≥ 0 with λ1 + · · · + λN = 1 such that A = λ1A1 + · · · + λNAN.

    If A is the convex hull of {A1, ..., AN} then A is quadratically stable iff there exists some K with K ≻ 0 and AᵢᵀK + KAᵢ ≺ 0 for all i = 1, ..., N.

Proof. See the Key Consequence slide.
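This vertex test is a finite LMI problem; a sketch for two made-up Hurwitz vertex matrices, searching a common quadratic Lyapunov matrix:

    % Common quadratic Lyapunov matrix for all vertices of co{A1, A2}.
    A1 = [-1 0.5; 0 -1]; A2 = [-1 0; 1 -2];
    K = sdpvar(2,2,'symmetric');
    F = [K >= eye(2)];
    for Ai = {A1, A2}
        F = F + [Ai{1}'*K + K*Ai{1} <= -1e-6*eye(2)];
    end
    diagnostics = optimize(F, []);
    quadStable = (diagnostics.problem == 0);   % true => quadratic stability certified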


Lessons to be Learnt

• Many interesting engineering problems are LMI problems.
• Variables can live in an arbitrary vector space. In control: variables are typically matrices. Problems can involve equality and inequality constraints. Just check whether the cost function and constraints are affine & verify strict feasibility.
• Translation to input for a solution algorithm is done by a parser (e.g. Yalmip). One can choose among many efficient LMI solvers (e.g. SeDuMi).
• Main trick in removing nonlinearities so far: the Schur Lemma.
