Optimal Control
Prof. Lutz Hendricks, Econ 720
October 3, 2017
Topics
Optimal control is a method for solving dynamic optimization problems in continuous time.
Example: Growth Model

A household chooses optimal consumption to

  max ∫_0^T e^{-ρt} u[c(t)] dt    (1)

subject to

  k̇(t) = r k(t) − c(t)    (2)
  c(t) ∈ [0, c̄]    (3)
  k(0) = k_0, given    (4)
  k(T) ≥ 0    (5)
Generic Optimal Control Problem

Choose functions of time c(t) and k(t) so as to

  max ∫_0^T v[k(t), c(t), t] dt    (6)

Constraints:
1. Law of motion of the state variable k(t): k̇(t) = g[k(t), c(t), t]    (7)
2. Feasible set for the control variable c(t): c(t) ∈ Y(t)    (8)
3. Boundary conditions, such as: k(0) = k_0, given    (9)
   and k(T) ≥ k_T    (10)
Generic Optimal Control Problem

- c and k can be vectors.
- Y(t) is a compact, nonempty set.
- T could be infinite.
  - Then the boundary conditions change.
- Important: the state cannot jump; the control can.
A Recipe for Solving Optimal Control Problems
A Recipe

Step 1. Write down the Hamiltonian

  H(t) = v(k, c, t) + µ(t) g(k, c, t)    (11)

where g(k, c, t) = k̇(t). µ is essentially a Lagrange multiplier (called a co-state).

Step 2. Derive the first order conditions, which are necessary for an optimum:

  ∂H/∂c = 0    (12)
  ∂H/∂k = −µ̇    (13)
A Recipe

Step 3. Impose the transversality condition:
- for finite horizon: µ(T) = 0    (14)
- for infinite horizon: lim_{t→∞} H(t) = 0    (15)

This depends on the terminal condition (see below).
A Recipe

Step 4. A solution is a set of functions [c(t), k(t), µ(t)] which satisfy
- the FOCs,
- the law of motion for the state,
- the boundary / transversality conditions.
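As an illustration of Steps 1 and 2 (not part of the original slides), here is a minimal Python/sympy sketch that forms the Hamiltonian for the simple example with k̇ = rk − c and a generic utility function u, and derives the two first order conditions symbolically. All names and forms are assumptions made for the illustration.

```python
import sympy as sp

t, rho, r = sp.symbols('t rho r', positive=True)
k, c, mu = (sp.Function(n)(t) for n in ('k', 'c', 'mu'))
u = sp.Function('u')                       # generic period utility

# Step 1: Hamiltonian for the example with law of motion kdot = r*k - c
H = sp.exp(-rho * t) * u(c) + mu * (r * k - c)

# Step 2: first order conditions
foc_c = sp.Eq(sp.diff(H, c), 0)                  # dH/dc = 0  ->  e^{-rho t} u'(c) = mu
foc_k = sp.Eq(sp.diff(H, k), -sp.diff(mu, t))    # dH/dk = -mudot  ->  r*mu = -mudot

print(foc_c)
print(foc_k)
```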
Intuition

∂H/∂c = 0: maximize the Hamiltonian with respect to the control.
- v(k, c, t) picks up the current utility of c.
- µ(t) is the marginal value of additional "future" k.
- µ(t) g(k, c, t) picks up the change in the continuation value (the change in k, i.e. k̇, times the value of future k).

∂H/∂k = −µ̇:
- think of this as [∂H/∂k]/µ = −µ̇/µ
- ∂H/∂k is the value of additional k
- [∂H/∂k]/µ is like a rate of return (give up c now, how much future value do you get?)
- µ̇/µ is the growth rate of marginal utility
Example: Growth Model

  max ∫_0^∞ e^{-ρt} u(c(t)) dt    (16)

subject to

  k̇(t) = f(k(t)) − c(t) − δ k(t)    (17)
  k(0) given    (18)
Growth Model: Hamiltonian

  H(k, c, µ) = e^{-ρt} u(c(t)) + µ(t) [f(k(t)) − c(t) − δ k(t)]    (19)

Necessary conditions:

  H_c = e^{-ρt} u'(c) − µ = 0
  H_k = µ [f'(k) − δ] = −µ̇
Growth Model

Substitute out the co-state:

  µ̇ = e^{-ρt} u''(c) ċ − ρ µ    (20)

so that

  ċ = (µ̇ + ρ µ) / (e^{-ρt} u''(c))    (21)
    = − [u'(c)/u''(c)] [f'(k) − δ − ρ]    (22)

Solution: c(t), k(t) that solve the Euler equation and the resource constraint, plus the boundary conditions.
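As a numerical illustration (not from the slides), the sketch below solves the (c, k) system under assumed functional forms u(c) = c^{1−θ}/(1−θ) and f(k) = k^α, using bisection shooting on c(0) so that the path converges to the steady state. All parameter values are invented for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed forms: u(c) = c**(1-theta)/(1-theta), f(k) = k**alpha; made-up parameters
alpha, delta, rho, theta = 0.3, 0.05, 0.03, 2.0
k0, T = 1.0, 200.0
kstar = (alpha / (rho + delta)) ** (1 / (1 - alpha))   # steady state: f'(k*) = rho + delta

def rhs(t, z):
    k, c = z
    dk = k ** alpha - c - delta * k                               # resource constraint (17)
    dc = (c / theta) * (alpha * k ** (alpha - 1) - delta - rho)   # Euler equation (22)
    return [dk, dc]

def k_floor(t, z):          # stop the integration if capital collapses
    return z[0] - 0.01
k_floor.terminal = True

def shoot(c0):
    """Integrate forward from (k0, c0); return k(T) - kstar."""
    sol = solve_ivp(rhs, (0.0, T), [k0, c0], rtol=1e-8, events=k_floor)
    return sol.y[0, -1] - kstar

# Bisection: c0 too low -> k overshoots kstar (error > 0); too high -> k collapses (error < 0)
lo, hi = 1e-6, k0 ** alpha
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if shoot(mid) > 0 else (lo, mid)
print("saddle-path c(0) is approximately", 0.5 * (lo + hi))
```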
Details
First order conditions are necessary, not sufficient. They are necessary only if we assume that
1. a continuous, interior solution exists;
2. the objective function v and the constraint function g are continuously differentiable.
Acemoglu (2009), ch. 7, offers some insight into why the FOCs are necessary.
Details

If there are multiple states and controls, simply write down one FOC for each separately:

  ∂H/∂c_i = 0
  ∂H/∂k_j = −µ̇_j

There is a large variety of cases, depending on the length of the horizon (finite or infinite) and the kinds of boundary conditions. Each has its own transversality condition (see Leonard and Van Long 1992).
Next steps
Typical useful next steps:
1. Eliminate µ from the system to obtain two differential equations in (c, k).
2. Find the steady state by imposing ċ = k̇ = 0.
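For instance, assuming f(k) = k^α (a functional form chosen only for illustration), the steady state of the growth model follows directly:

  ċ = 0  ⟹  f'(k*) = ρ + δ  ⟹  α (k*)^{α−1} = ρ + δ  ⟹  k* = [α/(ρ + δ)]^{1/(1−α)}
  k̇ = 0  ⟹  c* = f(k*) − δ k* = (k*)^α − δ k*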
Sufficient conditions
First-order conditions are sufficient if the programming problem is concave. This can be checked in various ways.
Sufficient Conditions I

- The objective function and the constraints are concave functions of the controls and the states.
- The co-state must be positive.

This condition is easy to check, but very stringent. In the growth model:
- u(c) is concave in c (and, trivially, k)
- f(k) − δ k − c is concave in c and k
- µ = u'(c) > 0
Sufficient Conditions II
(Mangasarian) First-order conditions are sufficient if the Hamiltonian is concave in controls and states, where the co-state is evaluated at the optimal level (and held fixed). This, too, is very stringent.
In the growth model
  ∂H/∂c = u'(c) − µ
  ∂H/∂k = µ [f'(k) − δ]
  ∂²H/∂c² = u''(c) < 0
  ∂²H/∂k² = µ f''(k) < 0
  ∂²H/∂c∂k = 0

Therefore: weak joint concavity (because we know that µ > 0).
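One quick way to check this (a sketch under assumed CRRA utility and Cobb-Douglas technology, not part of the lecture) is to compute the Hessian of the current value Hamiltonian symbolically:

```python
import sympy as sp

c, k, mu, delta, theta, alpha = sp.symbols('c k mu delta theta alpha', positive=True)

# Assumed forms for illustration: CRRA utility and Cobb-Douglas technology
u = c**(1 - theta) / (1 - theta)
f = k**alpha
H = u + mu * (f - delta * k - c)       # current value Hamiltonian

print(sp.hessian(H, (c, k)))
# Diagonal entries: -theta*c**(-theta-1) and alpha*(alpha-1)*mu*k**(alpha-2); off-diagonals 0.
# Both diagonal terms are negative for 0 < alpha < 1 and mu > 0, so H is jointly concave in (c, k).
```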
Sufficient Conditions III

Arrow and Kurz (1970):
- First-order conditions are sufficient if the maximized Hamiltonian is concave in the states.
- If the maximized Hamiltonian is strictly concave in the states, the optimal path is unique.

Maximized Hamiltonian: substitute the controls out, so that the Hamiltonian is only a function of the states. This is less stringent and by far the most useful set of sufficient conditions.
In the growth model
Optimal consumption obeys u'(c) = µ, or c = u'^{-1}(µ). Maximized Hamiltonian:

  Ĥ = u(u'^{-1}(µ)) + µ [f(k) − δ k − u'^{-1}(µ)]    (23)

We have ∂Ĥ/∂k > 0 and ∂²Ĥ/∂k² = µ f''(k) < 0, so Ĥ is strictly concave in k. The necessary conditions yield a unique optimal path.
Discounting: Current value Hamiltonian
Problems with discounting

Current utility depends on time only through an exponential discounting term e^{-ρt}. The generic discounted problem is

  max ∫_0^T e^{-ρt} v[k(t), c(t)] dt    (24)

subject to the same constraints as above.
Applying the Recipe

  H(t) = e^{-ρt} v(k, c) + µ̂(t) g(k, c)    (25)

  ∂H/∂c_t = 0  ⟹  e^{-ρt} v_c(k_t, c_t) = −µ̂_t g_c(k_t, c_t)    (26)

  ∂H/∂k_t = e^{-ρt} v_k(k_t, c_t) + µ̂_t g_k(k_t, c_t) = −dµ̂_t/dt    (27)
Applying the Recipe

Let

  µ_t = e^{ρt} µ̂_t    (28)

and multiply (26) through by e^{ρt}:

  v_c(t) = −µ_t g_c(t)

This is the standard FOC, but with µ instead of µ̂.
Applying the Recipe

Multiplying (27) by e^{ρt}:

  v_k(t) + e^{ρt} µ̂_t g_k(t) = −e^{ρt} dµ̂_t/dt    (29)

Substitute out dµ̂_t/dt using

  µ̇_t = d(e^{ρt} µ̂_t)/dt = ρ µ_t + e^{ρt} dµ̂_t/dt

to obtain

  v_k(t) + µ_t g_k(t) = −µ̇_t + ρ µ_t

This is the standard condition with an additional ρ µ term.
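This algebra can be checked symbolically. The sketch below (assuming, purely for illustration, v = ln c and g = k^α − c − δk) substitutes µ̂ = e^{-ρt} µ into the co-state condition and recovers the current value form with the extra ρ µ term (here v_k = 0, so only µ g_k appears on the left):

```python
import sympy as sp

t, rho, delta, alpha = sp.symbols('t rho delta alpha', positive=True)
k, c, mu = (sp.Function(n)(t) for n in ('k', 'c', 'mu'))

v = sp.log(c)                    # assumed period payoff v(k, c)
g = k**alpha - c - delta * k     # assumed law of motion g(k, c)
muhat = sp.exp(-rho * t) * mu    # present-value co-state, muhat = e^{-rho t} * mu

# Present-value Hamiltonian and its co-state condition dH/dk = -d(muhat)/dt,
# with both sides multiplied by e^{rho t}
H_pv = sp.exp(-rho * t) * v + muhat * g
lhs = sp.simplify(sp.exp(rho * t) * sp.diff(H_pv, k))        # v_k + mu*g_k
rhs = sp.simplify(sp.exp(rho * t) * (-sp.diff(muhat, t)))    # rho*mu - mudot
print(sp.Eq(lhs, rhs))
```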
Shortcut

We now have a shortcut for discounted problems. Hamiltonian (drop the discounting term):

  H = v(k, c) + µ g(k, c)    (30)

FOCs:

  ∂H/∂c = 0    (31)
  ∂H/∂k = ρ µ(t) − µ̇(t)    (32)

where the ρ µ term is the one that gets added, and the TVC

  lim_{T→∞} e^{-ρT} µ(T) k(T) = 0    (33)
Equality constraints

Equality constraints of the form

  h[c(t), k(t), t] = 0    (34)

are simply added to the Hamiltonian, as in a Lagrangian problem:

  H(t) = v(k, c, t) + µ(t) g(k, c, t) + λ(t) h(k, c, t)    (35)

The FOCs are unchanged:

  ∂H/∂c = 0
  ∂H/∂k = −µ̇

For inequality constraints: h(c, k, t) ≥ 0 with λ h = 0 (complementary slackness).    (36)
Transversality Conditions
Finite horizon: Scrap value problems

The horizon is T. The objective function assigns a scrap value to the terminal state variable, e^{-ρT} φ(k(T)):

  max ∫_0^T e^{-ρt} v[k(t), c(t), t] dt + e^{-ρT} φ(k(T))    (37)

Hamiltonian and FOCs: unchanged. The TVC is

  µ(T) = φ'(k(T))    (38)

Intuition: µ is the marginal value of the state k.
Scrap value examples

1. Household with bequest motive:

  U = ∫_0^T e^{-ρt} u(c(t)) dt + V(k_T)    (39)

with k̇ = w + r k − c.

2. Maximizing the present value of earnings:

  Y = ∫_0^T e^{-rt} w h(t) [1 − l(t)] dt    (40)

subject to ḣ(t) = A h(t)^α l(t)^β − δ h(t). The scrap value is 0. TVC: µ(T) = 0.
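To see how the scrap value TVC pins down the path, here is a sketch for a version of the bequest example, assuming u(c) = ln c and a scrap value in the form of the previous slide with φ(k) = b ln k (all parameter values invented). The Euler equation then gives c(t) = c(0) e^{(r−ρ)t}, and the TVC µ(T) = φ'(k(T)) reduces to c(T) = k(T)/b; we shoot on c(0) until it holds.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Assumed problem: max  int_0^T e^{-rho t} ln c dt + e^{-rho T} b ln k(T)
#                  s.t. kdot = w + r*k - c     (parameter values made up)
rho, r, w, b, T, k0 = 0.03, 0.05, 1.0, 2.0, 40.0, 1.0

def tvc_gap(c0):
    c = lambda t: c0 * np.exp((r - rho) * t)                  # Euler equation path
    sol = solve_ivp(lambda t, k: [w + r * k[0] - c(t)], (0.0, T), [k0], rtol=1e-8)
    return c(T) - sol.y[0, -1] / b                            # zero exactly when mu(T) = b/k(T)

c0_star = brentq(tvc_gap, 1e-6, 50.0)
print("optimal initial consumption c(0):", c0_star)
```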
Infinite horizon TVC
The finite horizon TVC with the boundary condition k(T) ≥ k_T is µ(T) = 0.
- Intuition: capital has no value at the end of time.

But the infinite horizon boundary condition is NOT lim_{t→∞} µ(t) = 0. The next example illustrates why.
Infinite horizon TVC: Example

  max ∫_0^∞ [ln(c(t)) − ln(c*)] dt

subject to

  k̇(t) = k(t)^α − c(t) − δ k(t)
  k(0) = 1
  lim_{t→∞} k(t) ≥ 0

c* is the maximum steady state (golden rule) consumption. There is no discounting; subtracting ln(c*) makes utility finite.
Infinite horizon TVC
Hamiltonian:

  H(k, c, λ) = ln c − ln c* + λ [k^α − c − δ k]    (41)

Necessary FOCs:

  H_c = 1/c − λ = 0    (42)
  H_k = λ [α k^{α−1} − δ] = −λ̇    (43)
Infinite horizon TVC

We show: lim_{t→∞} c(t) = c* [why?]. The limiting steady state solves

  λ̇/λ = −[α k^{α−1} − δ] = 0
  k̇ = k^α − 1/λ − δ k = 0

The solution is the golden rule:

  k* = (α/δ)^{1/(1−α)}    (44)

Verify that this maximizes steady state consumption.
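A small symbolic check (not from the slides) confirms that k* maximizes steady state consumption c(k) = k^α − δk:

```python
import sympy as sp

k, alpha, delta = sp.symbols('k alpha delta', positive=True)
c_ss = k**alpha - delta * k                  # steady-state consumption from kdot = 0
kstar = sp.solve(sp.diff(c_ss, k), k)[0]     # FOC: alpha*k**(alpha-1) = delta
print(kstar)                                 # (delta/alpha)**(1/(alpha-1)), i.e. (alpha/delta)**(1/(1-alpha))
print(sp.diff(c_ss, k, 2))                   # alpha*(alpha-1)*k**(alpha-2) < 0 for alpha < 1: a maximum
```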
Infinite horizon TVC
Implications for the TVC: λ(t) = 1/c(t) implies lim_{t→∞} λ(t) = 1/c*. Therefore, neither λ(t) nor λ(t) k(t) converges to 0. The correct TVC is

  lim_{t→∞} H(t) = 0    (45)

The only reason why the standard TVC does not work here: there is no discounting in the example.
Infinite horizon TVC: Discounting

With discounting, the TVC is easier to check. Assume:
- the objective function is e^{-ρt} v[k(t), c(t)]
- it depends on t only through the discount factor
- v and g are weakly monotone

Then the TVC becomes

  lim_{t→∞} e^{-ρt} µ(t) k(t) = 0    (46)

where µ is the co-state of the current value Hamiltonian. This is exactly analogous to the discrete time version

  lim_{t→∞} β^t u'(c_t) k_t = 0    (47)
Example: Renewable resource

  max ∫_0^∞ e^{-ρt} u(y(t)) dt    (48)

subject to

  ẋ(t) = −y(t)    (50)
  x(0) = 1    (51)
  x(t) ≥ 0    (52)
Example: Renewable resource

Current value Hamiltonian:

  H(x, y, µ) = u(y) − µ(t) y(t)

Necessary FOCs:

  H_y = u'(y) − µ = 0
  H_x = 0 = ρ µ − µ̇
Example: Renewable resource

The co-state FOC therefore implies

  µ(t) = µ(0) e^{ρt}    (53)

and

  y(t) = u'^{-1}[µ(0) e^{ρt}]    (54)
Solution

The optimal path exhausts the stock: lim_{t→∞} x(t) = 0, or equivalently (since x(0) = 1)

  ∫_0^∞ y(t) dt = 1
  ∫_0^∞ u'^{-1}[µ(0) e^{ρt}] dt = 1    (55)

This solves for µ(0).
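For a concrete case (assuming log utility, so u'^{-1}(x) = 1/x and y(t) = e^{-ρt}/µ(0)), µ(0) can be found numerically; it should match the closed form µ(0) = 1/ρ:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rho = 0.05   # illustrative value

def total_extraction(mu0):
    # integral of y(t) = exp(-rho*t)/mu0 over the infinite horizon (log utility assumed)
    val, _ = quad(lambda t: np.exp(-rho * t) / mu0, 0, np.inf)
    return val

mu0 = brentq(lambda m: total_extraction(m) - 1.0, 1e-6, 1e3)
print(mu0, "closed form 1/rho =", 1 / rho)   # both equal 20 here
```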
Example: Renewable resource

TVC for the infinite horizon case:

  lim_{t→∞} e^{-ρt} µ(0) e^{ρt} x(t) = 0    (56)

This is equivalent to

  lim_{t→∞} x(t) = 0    (57)
Reading

- Acemoglu (2009), ch. 7. Proves the theorems of optimal control.
- Barro and Sala-i-Martin (1995), appendix.
- Leonard and Van Long (1992): a fairly comprehensive treatment; contains many variations on boundary conditions.
References

Acemoglu, D. (2009): Introduction to Modern Economic Growth, MIT Press.
Arrow, K. J. and M. Kurz (1970): "Optimal Growth with Irreversible Investment in a Ramsey Model," Econometrica, 331–344.
Barro, R. J. and X. Sala-i-Martin (1995): Economic Growth, McGraw-Hill.
Leonard, D. and N. Van Long (1992): Optimal Control Theory and Static Optimization in Economics, Cambridge University Press.