ECE 3650: Optimal Control (3 Credits, Fall 2015)

Lecture 2: Static Optimization, September 10, 2015

Zhi-Hong Mao, Associate Professor of ECE and Bioengineering, University of Pittsburgh, Pittsburgh, PA

Outline
• Homework 1
• Matrix calculus
• Optimization without constraints
• Optimization with equality constraints

Homework 1
• Problem 1: Let P1 = (x1, y1) and P2 = (x2, y2) be two given points. Find the third point P3 = (x3, y3) such that d1 = d2 and d1 is minimized, where d1 is the distance from P3 to P1 and d2 is the distance from P3 to P2. (Only the method using a Lagrange multiplier will be accepted.)
• Problem 2: Minimize

L = \frac{1}{2} x^T \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} x + \frac{1}{2} u^T \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} u

if

x = \begin{bmatrix} 1 \\ 3 \end{bmatrix} + \begin{bmatrix} 2 & 2 \\ 1 & 0 \end{bmatrix} u.

Find x^*, u^*, \lambda^*, L^*.

Homework 1
• Problem 3: Let the scalar plant x_{k+1} = x_k u_k have performance index

J = \frac{1}{2} \sum_{k=0}^{N-1} u_k^2

with final time N = 2. Given x_0, it is desired to make x_2 = 0.
a. Write the state and costate equations with u_k eliminated.
b. Assume the final costate \lambda_2 is known. Solve for \lambda_1 in terms of \lambda_2 and the state. Use this to express x_2 in terms of \lambda_2 and x_0. Hence find a quartic equation for \lambda_2 in terms of the initial state x_0.
c. If x_0 = 1, find the optimal state and costate sequences, the optimal control, and the optimal value of the performance index.

Homework 1
• Due Thursday 10/1 in class (three weeks from now; I will be out of town during the fourth week of class)

Matrix calculus
• Some definitions
– Let x be a vector, x = [x_1, x_2, …, x_n]^T. The differential of x is dx = [dx_1, dx_2, …, dx_n]^T.
– The derivative of x with respect to a scalar t (which could be time) is

\frac{dx}{dt} = \begin{bmatrix} dx_1/dt \\ dx_2/dt \\ \vdots \\ dx_n/dt \end{bmatrix}

– If a scalar s is a function of x, then the gradient of s with respect to x is the column vector

s_x \triangleq \frac{\partial s}{\partial x} = \begin{bmatrix} \partial s/\partial x_1 \\ \partial s/\partial x_2 \\ \vdots \\ \partial s/\partial x_n \end{bmatrix}

Example: for s(x_1, x_2) = x_1 + x_2^2, the gradient is s_x = [1, 2x_2]^T.

Remark (directional derivative): in vector calculus, the gradient of a scalar field is a vector field that points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is that greatest rate of change.
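The remark above can be checked numerically. The sketch below uses the slide's example as I have reconstructed it, s(x1, x2) = x1 + x2^2 (the signs are my reading of the garbled slide), verifies the gradient by central differences, and confirms that among unit directions the rate of increase is largest along the gradient direction.

```python
import numpy as np

# Reconstructed slide example (signs assumed): s(x1, x2) = x1 + x2**2
def s(x):
    return x[0] + x[1] ** 2

def grad_s(x):
    # Analytic gradient: s_x = [ds/dx1, ds/dx2]^T = [1, 2*x2]^T
    return np.array([1.0, 2.0 * x[1]])

x0 = np.array([1.0, 2.0])
g = grad_s(x0)

# Central-difference check of each partial derivative.
eps = 1e-6
g_fd = np.array([(s(x0 + eps * e) - s(x0 - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
assert np.allclose(g, g_fd, atol=1e-6)

# Among unit directions u, the directional derivative g . u is largest
# when u points along the gradient (the "steepest ascent" direction).
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
rates = dirs @ g
best = dirs[np.argmax(rates)]
assert np.allclose(best, g / np.linalg.norm(g), atol=0.02)
```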

– The total differential of s is

ds = \left(\frac{\partial s}{\partial x}\right)^T dx = \sum_{i=1}^{n} \frac{\partial s}{\partial x_i}\, dx_i

– The Hessian matrix of s with respect to x is the second derivative

s_{xx} \triangleq \frac{\partial^2 s}{\partial x^2} = \left[ \frac{\partial^2 s}{\partial x_i\, \partial x_j} \right]

which is a symmetric n × n matrix.

Exercise: find the gradient and Hessian of s(x_1, x_2) = 0.5 x_1^2 + x_1 x_2 + x_2^2 + x_2.
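A worked version of the exercise, under the assumption that the garbled slide reads s(x1, x2) = 0.5*x1^2 + x1*x2 + x2^2 + x2 (the signs are my reconstruction). The analytic gradient and Hessian are verified by finite differences:

```python
import numpy as np

# Exercise function as reconstructed (signs assumed):
# s(x1, x2) = 0.5*x1**2 + x1*x2 + x2**2 + x2
def s(x):
    return 0.5 * x[0] ** 2 + x[0] * x[1] + x[1] ** 2 + x[1]

def grad_s(x):
    # s_x = [x1 + x2, x1 + 2*x2 + 1]^T
    return np.array([x[0] + x[1], x[0] + 2 * x[1] + 1])

H = np.array([[1.0, 1.0],
              [1.0, 2.0]])   # s_xx, constant because s is quadratic

# Finite-difference verification at an arbitrary point.
x0 = np.array([0.3, -0.7])
eps = 1e-5
g_fd = np.array([(s(x0 + eps * e) - s(x0 - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
H_fd = np.array([(grad_s(x0 + eps * e) - grad_s(x0 - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
assert np.allclose(g_fd, grad_s(x0), atol=1e-6)
assert np.allclose(H_fd, H, atol=1e-6)
assert np.allclose(H, H.T)  # the Hessian is symmetric, as the slide notes
```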

– Let f(x) be an m-vector function of x. The Jacobian of f with respect to x is the m × n matrix

f_x \triangleq \frac{\partial f}{\partial x} = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} & \dfrac{\partial f}{\partial x_2} & \cdots & \dfrac{\partial f}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 & \cdots & \partial f_1/\partial x_n \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 & \cdots & \partial f_2/\partial x_n \\ \vdots & & & \vdots \\ \partial f_m/\partial x_1 & \partial f_m/\partial x_2 & \cdots & \partial f_m/\partial x_n \end{bmatrix}
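The Jacobian definition can be checked numerically; the 3-by-2 vector function below is my own illustration, not one from the slides. Column j of the finite-difference Jacobian is the partial of f with respect to x_j:

```python
import numpy as np

# Illustrative m = 3, n = 2 vector function (my example):
# f(x) = [x1*x2, sin(x1), x2**2]
def f(x):
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def jac_f(x):
    # Analytic Jacobian: rows follow f, columns follow x (m-by-n = 3-by-2).
    return np.array([[x[1],         x[0]],
                     [np.cos(x[0]), 0.0],
                     [0.0,          2 * x[1]]])

x0 = np.array([0.5, 2.0])
eps = 1e-6
# Column j of the Jacobian is the partial of f with respect to x_j.
J_fd = np.column_stack([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                        for e in np.eye(2)])
assert jac_f(x0).shape == (3, 2)
assert np.allclose(J_fd, jac_f(x0), atol=1e-6)
```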

Matrix calculus
• Taylor series expansion
– The Taylor series expansion of s(x) about x_0 is

s(x) = s(x_0) + \left(\frac{\partial s}{\partial x}\right)^T (x - x_0) + \frac{1}{2} (x - x_0)^T\, \frac{\partial^2 s}{\partial x^2}\, (x - x_0) + O(3)

where O(3) represents terms of order three, and s_x and s_{xx} are evaluated at x_0.

Exercise: expand s(x_1, x_2) = 0.5 x_1^2 + x_1 x_2 + x_2^2 + x_2 about a point x_0.
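For a quadratic such as the exercise function, the second-order Taylor expansion is exact, so the O(3) term vanishes. A quick check, assuming the reconstructed signs of the exercise function:

```python
import numpy as np

# Quadratic exercise function (signs assumed as reconstructed):
# s(x1, x2) = 0.5*x1**2 + x1*x2 + x2**2 + x2
def s(x):
    return 0.5 * x[0] ** 2 + x[0] * x[1] + x[1] ** 2 + x[1]

def grad_s(x):
    return np.array([x[0] + x[1], x[0] + 2 * x[1] + 1])

H = np.array([[1.0, 1.0],
              [1.0, 2.0]])   # constant Hessian

x0 = np.array([1.0, 0.0])
x = np.array([2.5, 0.5])
dx = x - x0
# Second-order Taylor expansion about x0, with s_x and s_xx at x0.
taylor2 = s(x0) + grad_s(x0) @ dx + 0.5 * dx @ H @ dx
assert np.isclose(taylor2, s(x))   # exact for a quadratic: O(3) = 0
```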

Matrix calculus
• Some useful results
– Derivative of a matrix inverse:

\frac{d}{dt}\left(A^{-1}\right) = -A^{-1}\, \frac{dA}{dt}\, A^{-1}
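The matrix-inverse derivative identity can be verified with a finite difference in t; the parameterized matrix A(t) below is an arbitrary invertible example of mine:

```python
import numpy as np

# Check d/dt (A^{-1}) = -A^{-1} (dA/dt) A^{-1} for an invertible A(t).
def A(t):
    return np.array([[2.0 + t, 1.0],
                     [t ** 2,  3.0]])

def dA(t):
    # Elementwise derivative of A with respect to t.
    return np.array([[1.0,     0.0],
                     [2.0 * t, 0.0]])

t0, eps = 1.0, 1e-6
# Finite-difference derivative of A(t)^{-1} at t0.
lhs = (np.linalg.inv(A(t0 + eps)) - np.linalg.inv(A(t0 - eps))) / (2 * eps)
Ainv = np.linalg.inv(A(t0))
rhs = -Ainv @ dA(t0) @ Ainv
assert np.allclose(lhs, rhs, atol=1e-6)
```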

– The chain rule for a product of vector functions:

\frac{\partial}{\partial x}\left[f^T(x)\, y(x)\right] = f_x^T y + y_x^T f

– Some useful gradients:

\frac{\partial}{\partial x}\left(y^T x\right) = \frac{\partial}{\partial x}\left(x^T y\right) = y

\frac{\partial}{\partial x}\left(y^T A x\right) = A^T y

\frac{\partial}{\partial x}\left(y^T f(x)\right) = \frac{\partial}{\partial x}\left(f^T(x)\, y\right) = f_x^T y

\frac{\partial}{\partial x}\left(x^T A x\right) = A x + A^T x
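A finite-difference sanity check of the gradient table above; the random A, x, and y are my own test data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
y = rng.standard_normal(n)

def num_grad(fun, x, eps=1e-6):
    # Central-difference gradient of a scalar function with respect to x.
    return np.array([(fun(x + eps * e) - fun(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])

# d/dx (y^T x) = y
assert np.allclose(num_grad(lambda x: y @ x, x), y, atol=1e-6)
# d/dx (y^T A x) = A^T y
assert np.allclose(num_grad(lambda x: y @ A @ x, x), A.T @ y, atol=1e-5)
# d/dx (x^T A x) = A x + A^T x
assert np.allclose(num_grad(lambda x: x @ A @ x, x), A @ x + A.T @ x, atol=1e-5)
```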

• If Q is symmetric,

\frac{\partial}{\partial x}\left(x^T Q x\right) = 2 Q x

\frac{\partial}{\partial x}\left[(x - y)^T Q (x - y)\right] = 2 Q (x - y)

– Some useful Hessians:

\frac{\partial^2}{\partial x^2}\left(x^T A x\right) = A + A^T

• If Q is symmetric,

\frac{\partial^2}{\partial x^2}\left(x^T Q x\right) = 2 Q

\frac{\partial^2}{\partial x^2}\left[(x - y)^T Q (x - y)\right] = 2 Q

– Some useful Jacobians:

\frac{\partial}{\partial x}\left(A x\right) = A

\frac{\partial x}{\partial x} = ?
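A numeric check of the Jacobian identity, which also suggests the answer to the slide's question: the identity map x -> x has Jacobian I (the standard result; stated here as my answer, since the slide leaves it open).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.standard_normal((m, n))
x0 = rng.standard_normal(n)

def num_jac(fun, x, eps=1e-6):
    # Central-difference Jacobian: column j is the partial w.r.t. x_j.
    return np.column_stack([(fun(x + eps * e) - fun(x - eps * e)) / (2 * eps)
                            for e in np.eye(len(x))])

# d/dx (A x) = A
assert np.allclose(num_jac(lambda x: A @ x, x0), A, atol=1e-6)
# The slide's question: the identity map has Jacobian I.
assert np.allclose(num_jac(lambda x: x, x0), np.eye(n), atol=1e-6)
```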

Optimization without constraints
• Problem formulation
– A scalar performance index L(u) is given as a function of a control or decision vector u ∈ R^m.
– We want to find min_u L(u).

• Solution
– Taylor series expansion for an increment in L:

dL = L_u^T\, du + \frac{1}{2}\, du^T L_{uu}\, du + O(3)

where the Hessian matrix L_{uu} is also called the curvature matrix.

– A critical or stationary point occurs when the increment dL is zero to first order for all increments du in the control:

L_u = 0

At a critical point, if L_{uu} is positive definite (L_{uu} > 0), the critical point is a local minimum; if L_{uu} is negative definite (L_{uu} < 0), a local maximum; and if L_{uu} is indefinite, a saddle point.
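The classification above can be coded by testing the eigenvalues of the symmetric Hessian. The semidefinite "inconclusive" branch is my addition; the slide covers only the definite and indefinite cases:

```python
import numpy as np

# Classify a critical point by the definiteness of the Hessian L_uu,
# checked via its eigenvalues (eigvalsh assumes L_uu is symmetric).
def classify(Luu, tol=1e-12):
    eig = np.linalg.eigvalsh(Luu)
    if np.all(eig > tol):
        return "local minimum"      # L_uu > 0
    if np.all(eig < -tol):
        return "local maximum"      # L_uu < 0
    if eig.min() < -tol and eig.max() > tol:
        return "saddle point"       # indefinite
    return "inconclusive"           # semidefinite: higher-order terms decide

assert classify(np.array([[2.0, 0.0], [0.0, 1.0]])) == "local minimum"
assert classify(np.array([[-1.0, 0.0], [0.0, -3.0]])) == "local maximum"
assert classify(np.array([[1.0, 0.0], [0.0, -1.0]])) == "saddle point"
```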

Example:

L(u) = \frac{1}{2} u^T \begin{bmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{bmatrix} u + \begin{bmatrix} s_1 & s_2 \end{bmatrix} u = \frac{1}{2} u^T Q u + S^T u
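For this quadratic example, if Q is symmetric then L_u = Qu + S, so the stationary point is u* = -Q^{-1}S, a minimum when Q = L_uu is positive definite. A sketch with numeric values of my own choosing:

```python
import numpy as np

# L(u) = 0.5*u^T Q u + S^T u with symmetric Q (example values are mine).
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
S = np.array([1.0, -1.0])

# Stationary point: L_u = Q u + S = 0  =>  u* = -Q^{-1} S
u_star = -np.linalg.solve(Q, S)
assert np.allclose(Q @ u_star + S, 0)          # L_u(u*) = 0
assert np.all(np.linalg.eigvalsh(Q) > 0)       # L_uu > 0: a minimum

def L(u):
    return 0.5 * u @ Q @ u + S @ u

# Random perturbations should never beat u*.
rng = np.random.default_rng(2)
for _ in range(100):
    assert L(u_star) <= L(u_star + 0.5 * rng.standard_normal(2)) + 1e-12
```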

Optimization with equality constraints
• Problem formulation
– A scalar performance index L(x, u) is given as a function of a control vector u ∈ R^m and an auxiliary (state) vector x ∈ R^n.
– The problem is

min L(x, u) subject to f(x, u) = 0

where f(x, u) = 0 is the constraint equation, a set of n scalar equations (f ∈ R^n).

• Solution
– Define the Hamiltonian function

H(x, u, \lambda) = L(x, u) + \lambda^T f(x, u)

where λ ∈ R^n is a Lagrange multiplier.

– Necessary conditions for a minimum point of L(x, u) that also satisfies the constraint f(x, u) = 0:

\frac{\partial H}{\partial \lambda} = f = 0

\frac{\partial H}{\partial x} = L_x + f_x^T \lambda = 0

\frac{\partial H}{\partial u} = L_u + f_u^T \lambda = 0

"We have thus been able to replace the problem of minimizing L(x, u) subject to the constraint f(x, u) = 0 with the problem of minimizing the Hamiltonian without constraints." (From the textbook.) Is this statement correct?
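A minimal scalar illustration of the mechanics of these conditions (my own example, deliberately simpler than the homework): minimize L(x, u) = 0.5*x^2 + 0.5*u^2 subject to f(x, u) = x + u - 1 = 0. For a quadratic L and linear f, the three necessary conditions form a linear system in (x, u, lambda):

```python
import numpy as np

# H = L + lam*f with L = 0.5*x**2 + 0.5*u**2 and f = x + u - 1.
# Necessary conditions:
#   dH/dlam = f = x + u - 1 = 0
#   dH/dx   = x + lam       = 0
#   dH/du   = u + lam       = 0
# Written as M @ [x, u, lam] = b:
M = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])
x, u, lam = np.linalg.solve(M, b)

assert np.isclose(x + u, 1.0)   # constraint satisfied
assert np.isclose(x, 0.5) and np.isclose(u, 0.5) and np.isclose(lam, -0.5)
```

The multiplier lam = -0.5 measures the sensitivity of the optimal cost to a relaxation of the constraint, which is one way to probe the textbook's claim quoted above.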

• Examples
– The "milkmaid problem" (figure omitted)

– The shapes of red blood cells
• Their shapes are believed to minimize curvature energy at fixed volume V and area A:

\min \int_M \frac{1}{2} k_c \left(2C + c_0\right)^2 dA + \lambda V + \mu A

where λ and μ are Lagrange multipliers, C is the mean curvature, k_c is the bending modulus, c_0 is the spontaneous curvature (biasing C), and the integral is over M, the membrane surface.


References
• F. L. Lewis, D. Vrabie, and V. L. Syrmos. Optimal Control, 3rd Edition, John Wiley and Sons, 2012.
• http://en.wikipedia.org/wiki/Gradient
• http://faculty.jsd.claremont.edu/sjensen/teaching/tutorials/Lagrange.html
• https://hvelink.saintlukeshealthsystem.org/library/healthguide/enus/images/media/medical/hw/h5551158.jpg
• http://www.nhlbi.nih.gov/health/dci/Diseases/Sca/SCA_WhatIs.html
• http://www.mtholyoke.edu/~mpeterso/reu/89/reu89.html