Instructor: Dr. Zhi-Hong Mao. (Office) 1204 Benedum Hall. (Email) [email protected]. (Phone) 412-624-9674.
ECE 3650: Optimal Control (3 Credits, Fall 2015)
Lecture 1: Course Organization and Introduction to Optimal Control September 3, 2015
Zhi-Hong Mao, Associate Professor of ECE and Bioengineering, University of Pittsburgh, Pittsburgh, PA
Outline
• Course description
• Course organization
• Brief review of optimization methods
• What is optimal control?
• Why optimal control?
• Approaches to optimal control
• An example
Course description
• This course introduces:
  – Fundamental mathematics of optimal control theory
  – Implementation of optimal controllers for practical applications

Question: How many of you have taken ECE 2646 (or equivalent courses)?

ECE 1673: Linear control systems → ECE 2646: Linear system theory → Nonlinear control / Robust control / Optimal control / Adaptive control / …
Course description
• This course introduces:
  – Fundamental mathematics of optimal control theory
  – Implementation of optimal controllers for practical applications
• This course covers:
  – Static optimization
  – Optimal control of discrete-time systems
  – Optimal control of continuous-time systems
  – Dynamic programming
Course organization
• Time: Thursday 5:20 pm−7:50 pm
• Instructor: Dr. Zhi-Hong Mao
  – (Office) 1204 Benedum Hall
  – (Email) [email protected]
  – (Phone) 412-624-9674
  – (Office hours) Monday 3 pm−5 pm
Course organization
• Time: Thursday 5:20 pm−7:50 pm
• Instructor: Dr. Zhi-Hong Mao
• Textbook
  – F. L. Lewis, D. Vrabie, and V. L. Syrmos, Optimal Control, 3rd Edition, John Wiley and Sons, New York, 2012 (or 2nd Edition, 1995)
• Lecture notes available at
  – http://www.pitt.edu/~zhm4/ECE3650
• Email list
  – Urgent notices will be sent to you via email
Course organization
• Time: Thursday 5:20 pm−7:50 pm
• Instructor: Dr. Zhi-Hong Mao
• Textbook
• Lecture notes
• Email list
• Course evaluation
  – Homework and class participation: 30% (late homework will not be accepted)
  – Midterm: 30%
  – Final exam: 40%
Brief review of optimization methods
• Formulation of optimization problems
  – Performance index or objective function
  – Control or decision variables
  – Constraints

    minimize L(u)
    subject to fᵢ(u) ≤ 0, i = 1, …, n

Question: Are all these ingredients necessary?
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example: A child's rectangular play yard is to be built next to the house. Twenty-four feet of fencing are available to make the three sides of the play-pen. What should the dimensions of the sides be to maximize the area?

[Figure: play yard against the house, with two sides of length x and one side of length y]
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example

    maximize xy
    subject to 2x + y = 24
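As a quick check on the toy example, a short script (a sketch of mine, not from the lecture) substitutes the constraint y = 24 − 2x into the objective and confirms the calculus answer on a grid:

```python
# Play-yard problem: maximize x*y subject to 2x + y = 24.
# Substituting y = 24 - 2x gives a one-variable problem:
# A(x) = x*(24 - 2x); dA/dx = 24 - 4x = 0 gives x = 6, y = 12, area = 72.
def area(x):
    return x * (24 - 2 * x)

# Coarse numeric confirmation over a grid of candidate widths.
best_x = max((k / 100 for k in range(0, 1201)), key=area)
best_y = 24 - 2 * best_x
print(best_x, best_y, area(best_x))  # 6.0 12.0 72.0
```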
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example
  – The diet problem (one of the first modern optimization problems): In the 1930s–1940s, the Army wanted a low-cost diet that would meet the nutritional needs of a soldier

"If you go on a diet and lose five pounds, only to gain back ten the following month, how many infuriating, godforsaken pounds do you weigh?"
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example
  – The diet problem

    minimize cost of food
    subject to: total calories ≥ minimum requirement,
                amount of vitamins ≥ minimum requirement,
                amount of minerals ≥ minimum requirement, etc.

    9 inequalities, 77 decision variables
    Solution: The minimum cost of food is $ per year!
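The diet problem is a linear program. A toy version (two foods, two nutrients; the costs and nutrient contents below are made-up illustrative numbers, not Stigler's actual data) can be solved with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet problem: minimize food cost c'x subject to N x >= r, x >= 0.
# All numbers are hypothetical.
c = np.array([0.5, 0.3])          # cost per unit of food 1 and food 2
N = np.array([[400.0, 200.0],     # calories per unit of each food
              [2.0,   4.0]])      # vitamin units per unit of each food
r = np.array([2000.0, 10.0])      # daily minimum requirements

# linprog takes A_ub x <= b_ub, so negate the >= constraints.
res = linprog(c, A_ub=-N, b_ub=-r, bounds=[(0, None)] * 2)
print(res.x, res.fun)             # cheapest feasible diet and its cost
```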
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example
  – The diet problem
  – A remark about "large-scale"
    • In the fifties, a traveling-salesman problem (TSP) through 49 cities (corresponding to 1,176 variables in the standard IP formulation) was considered large-scale, while today the world record for solving a TSP is 13,509 cities (91,239,786 variables)
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example
  – The diet problem
  – A remark about "large-scale"
    • "Large-scale" depends not only on the number of variables or constraints but also on the structure of the problem (e.g., convex vs. nonconvex programming, ℓ0 vs. ℓ1 minimization)
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example
  – The diet problem
  – A remark about "large-scale"
  – "Save wire" organizing principle: At multiple hierarchical levels (brain, ganglion, individual cell), the physical placement of neural components appears consistent with a single, simple goal: to minimize the cost of connections among the components
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
  – A toy example
  – The diet problem
  – A remark about "large-scale"
  – "Save wire" organizing principle
  – Optimization in biology
    • Optimization theory not only explains current adaptations of biological systems, but also helps to predict new designs that may yet evolve
    • The biological world may provide solutions to engineering problems: the structures, movements, and behaviors of animals, and their life histories, have been shaped by the optimization processes of evolution or of learning by trial and error
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
• Optimization methods
  – Extremum of a smooth function
  – Gradient search
  – Simplex algorithm
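Of these, gradient search is the easiest to sketch. The toy function below is my own example (not from the lecture); the iteration descends a smooth convex function whose minimizer is known:

```python
# Gradient search on f(u) = (u1 - 3)^2 + (u2 + 1)^2, minimized at (3, -1).
def grad(u):
    return [2 * (u[0] - 3), 2 * (u[1] + 1)]

u = [0.0, 0.0]          # initial guess
step = 0.1              # fixed step size, small enough for convergence here
for _ in range(200):
    g = grad(u)
    u = [u[0] - step * g[0], u[1] - step * g[1]]
print(u)                # approaches [3.0, -1.0]
```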
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
• Optimization methods
  – Extremum of a smooth function
  – Gradient search
  – Simplex algorithm
  – Lagrangian methods and Lagrange multipliers
  – Randomized algorithms (e.g., genetic algorithms)
Brief review of optimization methods
• Formulation of optimization problems
• Examples of optimization problems
• Optimization methods
  – Extremum of a smooth function
  – Gradient search
  – Simplex algorithm
  – Lagrangian methods and Lagrange multipliers
  – Randomized algorithms
  – Energy-function-based optimization
    • With applications in protein folding problems, Hopfield neural networks, robotic path planning, etc.

Question (Steiner's problem): How to find a point inside a triangle that gives the shortest sum of distances to the vertices?
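Steiner's question can itself be attacked in the energy-function spirit: treat the sum of distances as the energy and follow its negative gradient. The triangle below is my own example; at the minimizer (the Fermat point) the three unit vectors toward the vertices balance out.

```python
import math

# Sum-of-distances "energy" for a point p and a fixed triangle.
verts = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]   # example triangle (assumed)

def total_dist(p):
    return sum(math.dist(p, v) for v in verts)

p = [2.0, 1.0]                                  # start inside the triangle
step = 0.01
for _ in range(5000):
    # Gradient of sum ||p - v||: sum of unit vectors from each vertex to p.
    gx = sum((p[0] - vx) / math.dist(p, (vx, vy)) for vx, vy in verts)
    gy = sum((p[1] - vy) / math.dist(p, (vx, vy)) for vx, vy in verts)
    p = [p[0] - step * gx, p[1] - step * gy]
print(p, total_dist(p))
```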
[Figure: protein folding]
[Figure: hemoglobin]
What is optimal control?
• Definition
  – Optimal control is the problem of finding optimal ways to control a dynamic system
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
    • The system is modeled as a set of first-order differential equations (representation of the dynamics of an nth-order system using n first-order differential equations)

      ẋ = Ax + Bu
      y = Cx + Du
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
    • The system is modeled as a set of first-order differential equations (representation of the dynamics of an nth-order system using n first-order differential equations)
    • Example: Newton's second law

      m d²y(t)/dt² = u(t)

      ẋ = Ax + Bu
      y = Cx + Du

Question: What are A, B, C, and D for this example?

      x1(t) = y(t)
      x2(t) = dy(t)/dt = dx1(t)/dt

      [ẋ1(t)]   [0  1][x1(t)]   [ 0 ]
      [ẋ2(t)] = [0  0][x2(t)] + [1/m] u(t)

      y(t) = [1  0][x1(t)]
                   [x2(t)]
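The state-space form can be checked numerically. The sketch below (the mass value and input are arbitrary choices of mine) integrates ẋ = Ax + Bu with a simple Euler scheme and compares the output with the closed form y = ut²/(2m) for a constant force:

```python
import numpy as np

# State-space model of m * y'' = u with x1 = y, x2 = dy/dt.
m = 2.0                                  # mass (assumed value)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])               # D = 0 for this system

x = np.zeros((2, 1))                     # start at rest at the origin
dt, u = 0.001, 1.0                       # time step and constant force
for _ in range(1000):                    # simulate 1 second
    x = x + dt * (A @ x + B * u)         # Euler step of x' = Ax + Bu
y = (C @ x).item()
print(y)                                 # near u*t^2/(2m) = 0.25 at t = 1
```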
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
  – Objective functions or performance indices
    • Example 1: Suppose that the objective is to control a dynamical system modeled by the equations

      ẋ = Ax + Bu, x(t0) = x0
      y = Cx

      on a fixed interval [t0, tf] so that the components of the state vector are "small." A suitable performance index to be minimized would be

      J1 = ∫_{t0}^{tf} xᵀ(t) x(t) dt
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
  – Objective functions or performance indices
    • Example 1
    • Example 2: If the objective is to control the system so that the components of the output, y(t), are small, then we could use the performance index

      J2 = ∫_{t0}^{tf} yᵀ(t) y(t) dt = ∫_{t0}^{tf} xᵀ(t) CᵀC x(t) dt = ∫_{t0}^{tf} xᵀ(t) Q x(t) dt

      where the weight matrix Q = CᵀC is symmetric positive semidefinite
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
  – Objective functions or performance indices
    • Example 1
    • Example 2
    • Example 3: If the objective is to control the system so that the components of the input, u(t), are small, then we could use the performance index

      J3 = ∫_{t0}^{tf} uᵀ(t) u(t) dt   or   J3 = ∫_{t0}^{tf} uᵀ(t) R u(t) dt

      where the weight matrix R is symmetric positive definite
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
  – Objective functions or performance indices
    • Example 1
    • Example 2
    • Example 3
    • Example 4: If we wish the final state x(tf) to be as close as possible to 0, then we could use the performance index

      J4 = xᵀ(tf) F x(tf)

      where F is a symmetric positive definite matrix
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
  – Objective functions or performance indices
  – LQR (linear quadratic regulator) problem
    • The control aim is to keep the state "small," the control "not too large," and the final state as near to 0 as possible. The resulting performance index is

      J = xᵀ(tf) F x(tf) + ∫_{t0}^{tf} [xᵀ(t) Q x(t) + uᵀ(t) R u(t)] dt

      Minimizing this performance index subject to

      ẋ = Ax + Bu, x(t0) = x0
      y = Cx

      is called the LQR problem
What is optimal control?
• Definition
• Formulation of optimal control problems
  – State-space description of a system
  – Objective functions or performance indices
  – LQR (linear quadratic regulator) problem

      J = xᵀ(tf) F x(tf) + ∫_{t0}^{tf} [xᵀ(t) Q x(t) + uᵀ(t) R u(t)] dt

Question: What if the desired state is not 0 but xd(tf)?
What is optimal control?
• Definition
• Formulation of optimal control problems
• Comparison with conventional optimization problems

Question: What are the decision variables and constraints of an optimal control problem?
Why optimal control?
• Problems with classical control system design
  – Classical design is a trial-and-error process
  – Classical design determines the parameters of an "acceptable" system
  – Classical design is essentially restricted to single-input single-output LTI systems
Why optimal control?
• Problems with classical control system design
• Why optimal control?
  – Based on the state-space description of systems and applicable to control problems involving multi-input multi-output systems and time-varying situations
  – "Optimal" design (in optimal control) vs. "acceptable" design (in classical control)
  – Optimal control theory provides strong analytical tools
Why optimal control?
• Problems with classical control system design
• Why optimal control?
• Applications of optimal control
  – In engineering system design
  – In the study of biology (including neuroscience)
  – In management science and economics
Why optimal control?
• Problems with classical control system design
• Why optimal control?
• Applications of optimal control
• Word of caution
  – Optimal control design assumes that the system model is exactly known and that there are no disturbances
  – Lack of intuition in design
  – Optimal control should not be viewed as a replacement for classical analytic methods; rather, it should be considered an addition that complements the older tools of classical control
Approaches to optimal control
• Calculus of variations
  – Pontryagin's maximum principle
• Dynamic programming
  – Hamilton-Jacobi-Bellman equation
Approaches to optimal control
• Calculus of variations
• Dynamic programming
• Linear quadratic regulator
  – For an LTI system described by

      ẋ = Ax + Bu, x(0) = x0

      with a quadratic cost function defined as

      J = ∫_0^∞ [xᵀ Q x + uᵀ R u] dt

      the feedback control law that minimizes the value of the cost is u = −Kx, where K is given by K = R⁻¹BᵀP and P is found by solving the algebraic Riccati equation (ARE):

      AᵀP + PA + Q − PBR⁻¹BᵀP = 0
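For a concrete instance, SciPy's `solve_continuous_are` solves the ARE directly. The double-integrator system and unit weights below are my own example; for this case the optimal gain works out to K = [1, √3]:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR for the double integrator x'' = u with unit weights (example of mine).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                  # state weight
R = np.array([[1.0]])          # control weight

# Solve A'P + PA + Q - P B R^{-1} B' P = 0, then K = R^{-1} B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print(K)                       # [[1.0, 1.7320508...]], i.e. [1, sqrt(3)]
cl_eigs = np.linalg.eigvals(A - B @ K)
print(cl_eigs.real)            # negative: the closed loop is stable
```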
An example

[Figure: DCM with feedforward control]

[Figure: DCM with feedback control]
An example

[Figure: DCM under optimal control; motor torque Km·i]

      J = ∫_0^∞ [xᵀ Q x + uᵀ R u] dt

      with u = Va and x = (q, i, ω)′:

      J = ∫_0^∞ [20 q(t)² + ω(t)² + 0.01 Va(t)²] dt
References
• J. R. Banga. Optimization in computational systems biology. BMC Systems Biology, 2008.
• J. W. Chinneck. Practical Optimization: A Gentle Introduction. Available online at http://www.sce.carleton.ca/faculty/chinneck/po.html
• G. B. Dantzig. The diet problem. Interfaces 20, 43–47, 1990.
• R. J. Jagacinski and J. M. Flach. Control Theory for Humans: Quantitative Approaches to Modeling Performance. Lawrence Erlbaum Associates, Mahwah, NJ, 2003.
• F. L. Lewis, D. L. Vrabie, and V. L. Syrmos. Optimal Control, 3rd Edition, John Wiley and Sons, 2012.
• D. E. Kirk. An introduction to dynamic programming. IEEE Transactions on Education E-10, 212–219, 1967.
• A. Martin. Large-scale optimization. Optimization and Operations Research, in Encyclopedia of Life Support Systems, Eolss Publishers, Oxford, UK, 2004.
• S. H. Zak. Systems and Control. Oxford University Press, 2003.
• http://asweknowit.net/images_edu/dwa5%20brain%20cells%20non-neuronal.jpg
• http://en.wikipedia.org/wiki/Conjugate_gradient_method
• http://en.wikipedia.org/wiki/File:Hb-animation2.gif
• http://en.wikipedia.org/wiki/Linear-quadratic_regulator
• http://en.wikipedia.org/wiki/Protein_folding
• http://faculty.cs.tamu.edu/amato/dsmft/research/folding/index.shtml.OLD2
• http://molsim.chem.uva.nl/gallery/index.html
• http://www.johnrdixonbooks.com/images/Optimization.pdf
• http://www.mathworks.com/products/control/demos.html?file=/products/demos/shipping/control/dcdemo.html