Lecture 10: Convex Optimization - Matematikcentrum

Linear and Combinatorial Optimization Fredrik Kahl Matematikcentrum

Lecture 10: Convex Optimization The material from this lecture: • Stephen Boyd and Lieven Vandenberghe: Convex Optimization. The book is available online - see link on the course homepage.

Extensions to Linear Programming LP is easy to solve. Are general problems hard to solve? The border between easy and hard problems lies not between linear and non-linear problems, but between convex and non-convex problems.

Convex optimization problems A convex optimization problem in standard form:

min z = f_0(x)    (1)
f_i(x) ≤ 0,  i = 1, …, m    (2)
Ax = b    (3)

if f_0, …, f_m are convex functions. Note: • The objective function is convex. • The inequalities are given by convex functions. • The equalities are given by affine functions. The domain is then convex (intersection of hyperplanes and convex sets). Lecture 1: Every local minimum is also a global minimum.

Quadratic functions and forms • The quadratic function

f(x) = x^T P x + 2 q^T x + r = [x; 1]^T [ P  q ; q^T  r ] [x; 1]

is convex if and only if P ⪰ 0, where A ⪰ B means A − B is positive semidefinite. • The quadratic form f(x) = x^T P x is convex if and only if P ⪰ 0. • For the Euclidean norm f(x) = ||Ax + b||, f^2 is a convex quadratic function.

Least squares problems Minimize the Euclidean norm ||Ax − b||^2, with A of full column rank. Solution: x_opt = (A^T A)^{-1} A^T b
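As a quick numerical sanity check (not part of the lecture; made-up data, NumPy assumed), the normal-equations formula can be compared against a library least-squares solve:

```python
import numpy as np

# Least squares: minimize ||Ax - b||^2 for a full-rank A.
# Small made-up 4x2 overdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
b = rng.standard_normal(4)

# Closed-form solution via the normal equations: x = (A^T A)^{-1} A^T b.
x_opt = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check against NumPy's least-squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_opt, x_lstsq)
```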

Minimum norm solution Minimize ||x||^2 subject to Ax = b, with A of full row rank. Solution: x_opt = A^T (A A^T)^{-1} b
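Again as an illustration only (made-up underdetermined system, NumPy assumed), the minimum-norm formula can be checked: the solution is feasible, and moving along the null space of A only increases the norm:

```python
import numpy as np

# Minimum-norm solution: minimize ||x||^2 subject to Ax = b,
# for a wide, full-row-rank A (made-up 2x4 system).
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 4))
b = rng.standard_normal(2)

# Closed form: x = A^T (A A^T)^{-1} b.
x_opt = A.T @ np.linalg.solve(A @ A.T, b)

assert np.allclose(A @ x_opt, b)            # feasible
# x_opt lies in the row space of A, so any null-space step grows the norm:
null_step = np.linalg.svd(A)[2][-1]         # a vector with A @ null_step ≈ 0
assert np.linalg.norm(x_opt) <= np.linalg.norm(x_opt + 0.5 * null_step) + 1e-12
```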

Minimize a linear function with quadratic constraints

min z = c^T x    (4)
x^T A x ≤ 1    (5)

where A = A^T ≻ 0.

Solution: x_opt = −A^{-1} c / √(c^T A^{-1} c)

Proof: Change variables y = A^{1/2} x, c̃ = A^{-1/2} c:

min z = c̃^T y    (7)
y^T y ≤ 1    (8)

The solution is y_opt = −c̃ / ||c̃|| and x_opt = A^{-1/2} y_opt.
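The closed-form minimizer above can be verified numerically (illustration only; the positive definite A and the vector c are made up):

```python
import numpy as np

# Check x* = -A^{-1} c / sqrt(c^T A^{-1} c) for min c^T x s.t. x^T A x <= 1.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)        # A = A^T > 0 (made-up instance)
c = rng.standard_normal(3)

Ainv_c = np.linalg.solve(A, c)
x_opt = -Ainv_c / np.sqrt(c @ Ainv_c)

# x* lies on the boundary of the ellipsoid ...
assert np.isclose(x_opt @ A @ x_opt, 1.0)
# ... and beats random feasible boundary points.
for _ in range(100):
    y = rng.standard_normal(3)
    y = y / np.sqrt(y @ A @ y)     # scaled onto the boundary x^T A x = 1
    assert c @ x_opt <= c @ y + 1e-9
```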

Quadratic programming (QP) Quadratic objective function, linear constraints.

min z = x^T P x + 2 q^T x + r    (10)
Ax ≤ b    (11)
Fx = g    (12)

Convex problem if P ⪰ 0. Very hard problem if P is not positive semidefinite.
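For the special case with only the equality constraints (the inequalities Ax ≤ b dropped) and P ≻ 0, the minimizer can be read off from the KKT conditions. A minimal sketch on made-up data, not part of the lecture:

```python
import numpy as np

# Equality-constrained QP: min x^T P x + 2 q^T x + r  s.t.  F x = g.
# Setting the gradient of the Lagrangian to zero gives the linear KKT system
#   [2P  F^T] [x]   [-2q]
#   [F    0 ] [y] = [ g ]
rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
P = M @ M.T + np.eye(3)            # P > 0 (made-up instance)
q = rng.standard_normal(3)
F = rng.standard_normal((1, 3))
g = rng.standard_normal(1)

K = np.block([[2 * P, F.T], [F, np.zeros((1, 1))]])
rhs = np.concatenate([-2 * q, g])
sol = np.linalg.solve(K, rhs)
x_opt = sol[:3]

assert np.allclose(F @ x_opt, g)   # feasibility holds
# Moving along the null space of F can only increase the objective:
obj = lambda x: x @ P @ x + 2 * q @ x
v = np.linalg.svd(F)[2][-1]        # direction with F @ v ≈ 0
assert obj(x_opt) <= obj(x_opt + 0.1 * v) + 1e-9
```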

QCQP Quadratic objective function, quadratic constraints (QCQP).

min z = x^T P_0 x + 2 q_0^T x + r_0    (13)
x^T P_i x + 2 q_i^T x + r_i ≤ 0,  i = 1, …, L    (14)

Convex problem if all P_i ⪰ 0. Very hard problem if some P_i is not positive semidefinite.

Second order cone programming (SOCP)

min z = c^T x    (15)
||A_i x + b_i|| ≤ e_i^T x + d_i,  i = 1, …, L    (16)

This problem class also includes LP, QP and QCQP.

Robust linear programming

min z = c^T x    (17)
a_i^T x ≤ b_i    (18)

Suppose a_i is uncertain: a_i ∈ E_i = { ā_i + F_i u : ||u|| ≤ 1 }.

min z = c^T x    (19)
a_i^T x ≤ b_i  ∀ a_i ∈ E_i    (20)

This can be written as an SOCP, since the worst case over E_i is sup_{||u|| ≤ 1} (ā_i + F_i u)^T x = ā_i^T x + ||F_i^T x||:

min z = c^T x    (21)
ā_i^T x + ||F_i^T x|| ≤ b_i    (22)
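The supremum identity behind this reformulation can be spot-checked numerically (made-up data, NumPy assumed; the maximizer u* = F^T x / ||F^T x|| is the Cauchy–Schwarz choice):

```python
import numpy as np

# Check sup_{||u|| <= 1} (a_bar + F u)^T x = a_bar^T x + ||F^T x||.
rng = np.random.default_rng(4)
a_bar = rng.standard_normal(3)
F = rng.standard_normal((3, 2))
x = rng.standard_normal(3)

closed_form = a_bar @ x + np.linalg.norm(F.T @ x)

# Sample many u with ||u|| <= 1; none should exceed the closed form.
samples = rng.standard_normal((10000, 2))
samples /= np.maximum(np.linalg.norm(samples, axis=1, keepdims=True), 1.0)
sampled_sup = np.max((a_bar + samples @ F.T) @ x)
assert sampled_sup <= closed_form + 1e-9

# The bound is attained at u* = F^T x / ||F^T x||.
u_star = F.T @ x / np.linalg.norm(F.T @ x)
assert np.isclose((a_bar + F @ u_star) @ x, closed_form)
```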

Semidefinite programming (SDP)

min z = c^T x    (23)
F(x) = F_0 + Σ_{i=1}^n x_i F_i ⪰ 0    (24)
Ax = b    (25)

where F_i = F_i^T are symmetric p × p matrices. • SDP is a convex optimization problem. • Several semidefinite constraints can be combined into one. • Many non-linear convex problems can be written as an SDP.

LP → SDP

min c^T x    (26)
Ax ≤ b    (27)

can be written as an SDP

min c^T x    (28)
diag(b − Ax) ⪰ 0    (29)

Minimize maximal eigenvalue

min z = λ_max(A(x))    (30)
A(x) = A_0 + Σ_{i=1}^n x_i A_i    (31)

can be written as an SDP

min z = t    (32)
A(x) − tI ⪯ 0    (33)
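The epigraph constraint can be sanity-checked numerically: λ_max(A) ≤ t holds exactly when tI − A is positive semidefinite (made-up symmetric matrix, NumPy assumed):

```python
import numpy as np

# lambda_max(A) <= t  iff  t*I - A is PSD, which the SDP constraint encodes.
rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                  # symmetric made-up matrix

lam_max = np.linalg.eigvalsh(A)[-1]  # eigvalsh returns ascending eigenvalues

def is_psd(X, tol=1e-9):
    return np.linalg.eigvalsh(X).min() >= -tol

assert is_psd(lam_max * np.eye(4) - A)              # t = lambda_max: feasible
assert not is_psd((lam_max - 0.1) * np.eye(4) - A)  # slightly smaller t: infeasible
```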

Schur complement Consider the block matrix

X = X^T = [ A  B ; B^T  C ]

• S = C − B^T A^{-1} B is called the Schur complement of A in X (provided det(A) ≠ 0). • Useful for constructing semidefinite constraints. Exercise: • X ≻ 0 if and only if A ≻ 0 and S ≻ 0. • If A ≻ 0, then X ⪰ 0 if and only if S ⪰ 0.
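The exercise can be illustrated numerically by choosing C so that the Schur complement is forced to be ±I (made-up blocks; NumPy assumed):

```python
import numpy as np

# With A > 0: X = [[A, B], [B^T, C]] is PSD iff S = C - B^T A^{-1} B is PSD.
rng = np.random.default_rng(6)
M = rng.standard_normal((2, 2))
A = M @ M.T + np.eye(2)            # A > 0 (made-up block)
B = rng.standard_normal((2, 2))

# Force S = I  =>  X should be PSD.
C = B.T @ np.linalg.solve(A, B) + np.eye(2)
X = np.block([[A, B], [B.T, C]])
assert np.linalg.eigvalsh(X).min() >= -1e-9

# Force S = -I  =>  X should NOT be PSD.
C_bad = B.T @ np.linalg.solve(A, B) - np.eye(2)
X_bad = np.block([[A, B], [B.T, C_bad]])
assert np.linalg.eigvalsh(X_bad).min() < 0
```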

Example: Quadratic inequality. (Ax + b)^T (Ax + b) − c^T x − d ≤ 0 is equivalent to

[ I  Ax + b ; (Ax + b)^T  c^T x + d ] ⪰ 0

QCQP as an SDP The problem with quadratic objective function and quadratic constraints,

min z = (A_0 x + b_0)^T (A_0 x + b_0) − c_0^T x − d_0    (34)
(A_i x + b_i)^T (A_i x + b_i) − c_i^T x − d_i ≤ 0,  i = 1, …, L    (35)

can be written as an SDP

min t
[ I  A_0 x + b_0 ; (A_0 x + b_0)^T  c_0^T x + d_0 + t ] ⪰ 0
[ I  A_i x + b_i ; (A_i x + b_i)^T  c_i^T x + d_i ] ⪰ 0,  i = 1, …, L.

Cones as SDPs The constraint ||Ax + b|| ≤ e^T x + d is equivalent to the semidefinite constraint

[ (e^T x + d)I  Ax + b ; (Ax + b)^T  e^T x + d ] ⪰ 0

A non-linear example

min z = (c^T x)^2 / (d^T x)    (37)
Ax ≤ b,  d^T x > 0    (38)

can be written as

min z = t    (39)
Ax ≤ b    (40)
t − (c^T x)^2 / (d^T x) ≥ 0    (41)

and then, using the Schur complement, as an SDP

min t
[ diag(b − Ax)  0  0 ; 0  t  c^T x ; 0  c^T x  d^T x ] ⪰ 0
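The 2×2 Schur-complement step used here, namely that for d^T x > 0 the scalar inequality t − (c^T x)^2 / (d^T x) ≥ 0 holds exactly when [ t  c^T x ; c^T x  d^T x ] ⪰ 0, can be spot-checked on random data (illustration only; near-boundary cases are skipped for numerical robustness):

```python
import numpy as np

# For d^T x > 0:  t - (c^T x)^2 / (d^T x) >= 0  iff  [[t, cx], [cx, dx]] is PSD.
rng = np.random.default_rng(8)
c = rng.standard_normal(3)
d = rng.standard_normal(3)

checked = 0
for _ in range(200):
    x = rng.standard_normal(3)
    t = rng.standard_normal() ** 2 * 5
    val = t - (c @ x) ** 2 / (d @ x) if d @ x > 0 else None
    if d @ x < 0.1 or abs(val) < 1e-3:
        continue                    # equivalence assumes d^T x > 0; skip boundary
    Y = np.array([[t, c @ x], [c @ x, d @ x]])
    psd_ok = np.linalg.eigvalsh(Y).min() > -1e-9
    assert (val > 0) == psd_ok
    checked += 1
```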

Minimizing matrix norm

min z = ||A(x)||    (42)
A(x) = A_0 + Σ_{i=1}^m x_i A_i

where the matrix norm is given by ||A|| = σ_1(A) = (λ_max(A^T A))^{1/2}, can be written as an SDP

min t
[ tI  A(x) ; A(x)^T  tI ] ⪰ 0    (43)
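The norm bound ||A|| ≤ t holds exactly when the block matrix in (43) is positive semidefinite; a quick check with NumPy on a made-up matrix:

```python
import numpy as np

# sigma_1(A) <= t  iff  [[t*I, A], [A^T, t*I]] is PSD
# (its eigenvalues are t ± sigma_i(A), plus t itself).
rng = np.random.default_rng(7)
A = rng.standard_normal((3, 2))
t = np.linalg.norm(A, 2)           # spectral norm sigma_1(A)

def lmi_min_eig(A, t):
    m, n = A.shape
    X = np.block([[t * np.eye(m), A], [A.T, t * np.eye(n)]])
    return np.linalg.eigvalsh(X).min()

assert lmi_min_eig(A, t) >= -1e-9        # t = sigma_1: feasible
assert lmi_min_eig(A, t - 0.1) < -1e-9   # smaller t: infeasible
```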

Algorithms There are efficient methods for solving SDPs, and fast (polynomial-time) convergence can be shown for some of them. Publicly available solvers for SDP (and SOCP) include: http://www.stanford.edu/~boyd/SDPSOL.html http://cs.nyu.edu/cs/faculty/overton/sdppack/sdppack.html