Stochastic Optimal Control in Finance

H. Mete Soner
Koç University, Istanbul, Turkey
[email protected]


For my son, Mehmet Ali.

Preface

These notes are the extended version of the Cattedra Galileiana lectures I gave in April 2003 at the Scuola Normale, Pisa. I am grateful to the Society of Amici della Scuola Normale for the funding, and to Professors Maurizio Pratelli, Marzia De Donno and Paolo Guasoni for organizing these lectures and for their hospitality.

In these notes, I give a very quick introduction to stochastic optimal control and the dynamic programming approach to control. This is done through several important examples that arise in mathematical finance and economics. The theory of viscosity solutions of Crandall and Lions is also demonstrated in one example. The choice of problems is driven by my own research and by the desire to illustrate the use of dynamic programming and viscosity solutions. In particular, great emphasis is given to the problem of super-replication, as it provides an unusual application of these methods. Of course, there are a number of other very important examples of optimal control problems arising in mathematical finance, such as passport options and American options. The omission of these examples, and of other methods for solving them, does not reflect in any way on the importance of these problems and techniques.

Most of the original work presented here was obtained in collaboration with Professor Nizar Touzi of Paris. I would like to thank him for the fruitful collaboration, his support and friendship.

Oxford, March 2004.


Contents

1 Examples and Dynamic Programming
  1.1 Optimal Control
  1.2 Examples
      1.2.1 Deterministic minimal time problem
      1.2.2 Merton's optimal investment-consumption problem
      1.2.3 Finite time utility maximization
      1.2.4 Merton problem with transaction costs
      1.2.5 Super-replication with portfolio constraints
      1.2.6 Buyer's price and the no-arbitrage interval
      1.2.7 Super-replication with gamma constraints
  1.3 Dynamic Programming Principle
      1.3.1 Formal Proof of DPP
      1.3.2 Examples for the DPP
  1.4 Dynamic Programming Equation
      1.4.1 Formal Derivation of the DPE
      1.4.2 Infinite horizon
  1.5 Examples for the DPE
      1.5.1 Merton Problem
      1.5.2 Minimal Time Problem
      1.5.3 Transaction costs
      1.5.4 Super-replication with portfolio constraints
      1.5.5 Target Reachability Problem

2 Super-Replication under Portfolio Constraints
  2.1 Solution by Duality
      2.1.1 Black-Scholes Case
      2.1.2 General Case
  2.2 Direct Solution
      2.2.1 Viscosity Solutions
      2.2.2 Supersolution
      2.2.3 Subsolution
      2.2.4 Terminal Condition or "Face-Lifting"

3 Super-Replication with Gamma Constraints
  3.1 Pure Upper Bound Case
      3.1.1 Supersolution
      3.1.2 Subsolution
      3.1.3 Terminal Condition
  3.2 Double Stochastic Integrals
  3.3 General Gamma Constraint
  3.4 Examples
      3.4.1 European Call Option
      3.4.2 European Put Option
      3.4.3 European Digital Option
      3.4.4 Up and Out European Call Option
  3.5 Guess for the Dual Formulation

Chapter 1
Examples and Dynamic Programming

In this chapter, we will outline the basic structure of an optimal control problem. Then, this structure will be explained through several examples, mainly from mathematical finance. The analysis and solution of these problems will be provided later.

1.1 Optimal Control

In very general terms, an optimal control problem consists of the following elements:

• State process Z(·). This process must capture the minimal information needed to describe the problem. Typically Z(t) ∈ ℝ^d and, in the deterministic case, the state evolves according to

    dZ(t)/dt = f(Z(t), ν(t)),  t > 0,   Z(0) = z,

where f is a given vector field and ν : [0, ∞) → U is the control process. We always assume that f is regular enough so that, for a given control process ν, the above equation has a unique solution Z_z^ν(·).

• Admissible controls A. A control process satisfying the constraints of the problem is called admissible; the set of all admissible controls is denoted by A.

For a given target set T ⊂ ℝ^d, the minimal time problem, taken up in the examples below, is to steer the state process into T as quickly as possible.
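As a simple illustration (this worked example is not part of the original text), consider the minimal time problem in which the state can be steered in any direction with unit speed:

    dZ(t)/dt = ν(t),  |ν(t)| ≤ 1,   Z(0) = z ∈ ℝ^d,

and let u(z) denote the smallest time at which an admissible trajectory Z_z^ν(·) can reach the target set T. Travelling in a straight line at full speed is clearly optimal, so u(z) = dist(z, T), and outside T this value function solves the eikonal equation |Du(z)| = 1 with u = 0 on T, a first instance of the dynamic programming equations derived later in this chapter.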

Again assuming that (2.2.11) holds on [0, s*] for some s*, we get

    Ĝ(s) = (s/s*)^b Ĝ(s*),   ∀ s ≤ s*.

Assume that Ĝ(s) = G(s) for s ≥ s*. Then we have the following claim:

    Ĝ(s) = h(s) := { (s − K)^+,              s ≥ s*,
                     (s/s*)^b (s* − K)^+,    s < s*,

where s* > K is the unique point for which h ∈ C¹, i.e., h_s(s*) = G_s(s*) = 1. Then

    s* = bK / (b − 1).

It is easy to show that h (with s* as above) satisfies (2.2.9). It remains to show that h is the minimal function satisfying (2.2.9). This will be done through a general formula below. □

Theorem 2.2.3 Assume that G is nonnegative, continuous and grows at most linearly. Let Ĝ be given by (2.2.10). Then,

    V̄(s) = V̲(s) = Ĝ(s).   (2.2.12)
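For concreteness, the following Python sketch (not part of the notes) evaluates the face-lifted call payoff h above and checks numerically that the two branches match in a C¹ fashion at s* = bK/(b−1); here b > 1 is assumed to be the exponent appearing in (2.2.11).

    import numpy as np

    def facelifted_call(s, K=1.0, b=2.0):
        # Face-lifted European call payoff under the portfolio constraint parameter b > 1.
        s_star = b * K / (b - 1.0)                  # the unique C^1 matching point
        s = np.asarray(s, dtype=float)
        inner = (s / s_star) ** b * (s_star - K)    # branch for s < s_star
        outer = np.maximum(s - K, 0.0)              # branch for s >= s_star
        return np.where(s < s_star, inner, outer)

    # Crude C^1 check: both one-sided difference quotients at s_star should be close to 1.
    K, b = 1.0, 2.0
    s_star = b * K / (b - 1.0)
    eps = 1e-6
    left = (facelifted_call(s_star) - facelifted_call(s_star - eps)) / eps
    right = (facelifted_call(s_star + eps) - facelifted_call(s_star)) / eps
    print(float(left), float(right))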

In particular, v(t, ·) converges to Ĝ uniformly on compact sets. Moreover,

    Ĝ(s) = sup_{ν ∈ K̃} e^{−δ_K(ν)} G(s e^{−ν}),

and

    v(t, s) = E[ Ĝ(S_{t,s}(T)) | F_t ],
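This representation is straightforward to evaluate by Monte Carlo. The sketch below (not part of the notes) assumes, as elsewhere in these notes, zero interest rate and the dynamics dS = σ S dW, and takes any face-lifted payoff G_hat as input, for instance the function computed in the previous sketch.

    import numpy as np

    def bs_price_of_modified_claim(G_hat, t, s, T, sigma, n_paths=200_000, seed=0):
        # Monte Carlo estimate of E[ G_hat(S_{t,s}(T)) ] for dS = sigma * S dW (r = 0).
        rng = np.random.default_rng(seed)
        tau = T - t
        W = rng.normal(0.0, np.sqrt(tau), size=n_paths)
        S_T = s * np.exp(sigma * W - 0.5 * sigma**2 * tau)   # exact lognormal solution
        return float(np.mean(G_hat(S_T)))

    # Example usage with the face-lifted call from the previous sketch:
    # v_approx = bs_price_of_modified_claim(facelifted_call, t=0.0, s=1.0, T=1.0, sigma=0.2)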

i.e., v is the unconstrained (Black-Scholes) price of the modified claim Ĝ. This is proved in Broadie-Cvitanić-Soner [5]. A presentation is also given in Karatzas & Shreve [15] (Chapter 5.7). A proof based on the dual formulation is simpler and is given in both of the above references. A PDE-based proof of the formula for Ĝ(·) is also available; a complete proof in the case of a gamma constraint is given in the next chapter. Here we only prove the final statement through a PDE argument. Set

    u(t, s) = E[ Ĝ(S_{t,s}(T)) | F_t ].

Then,

    −u_t = (1/2) σ² s² u_ss,   ∀ t < T, s > 0,

and

    u(T, s) = Ĝ(s).

It is clear that u is smooth. Set

    w(t, s) := b u(t, s) − s u_s(t, s).

Then, since u(T, s) = Ĝ(s) satisfies the constraints,

    w(T, s) ≥ 0,   ∀ s ≥ 0.

Also, using the equation satisfied by u, we calculate that

    −w_t = −b u_t + s u_ts
         = (1/2) σ² s² [b u_ss] − (1/2) σ² s [s² u_ss]_s
         = (1/2) σ² s² [b u_ss − 2 u_ss − s u_sss]
         = (1/2) σ² s² w_ss,

since w_ss = (b − 2) u_ss − s u_sss. So by the Feynman-Kac formula,

    b u(t, s) − s u_s(t, s) = w(t, s) = E[ w(T, S_{t,s}(T)) | F_t ] ≥ 0,   ∀ t ≤ T, s ≥ 0.
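The algebraic step above, namely that w = b u − s u_s solves the same heat-type equation as u, amounts to the operator identity (∂_t + (1/2)σ²s²∂_ss)(b u − s u_s) = (b − s ∂_s)(u_t + (1/2)σ²s²u_ss). The following sympy sketch (not part of the notes) verifies this identity symbolically.

    import sympy as sp

    t, s, sigma, b = sp.symbols('t s sigma b', positive=True)
    u = sp.Function('u')(t, s)

    # the heat-type operator appearing in the Black-Scholes equation with r = 0
    heat = lambda f: sp.diff(f, t) + sp.Rational(1, 2) * sigma**2 * s**2 * sp.diff(f, s, 2)

    w = b * u - s * sp.diff(u, s)
    lhs = heat(w)
    rhs = b * heat(u) - s * sp.diff(heat(u), s)

    print(sp.simplify(lhs - rhs))   # prints 0, confirming the identity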

Similarly, we can show that

    a u(t, s) + s u_s(t, s) ≥ 0,   ∀ t ≤ T, s ≥ 0.

Therefore,

    F(t, s, u(t, s), u_t(t, s), u_s(t, s), u_ss(t, s)) = 0,   t < T, s > 0,

where F is as in the previous section. Since v, the value function, also solves this equation, and v(T, s) = u(T, s) = Ĝ(s), a comparison result for the above equation allows us to conclude that u = v, the value function.
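The operator F of the previous section is not reproduced in this extract. Consistent with the Black-Scholes operator and the two inequalities just derived, it presumably has the form

    F(t, s, v, v_t, v_s, v_ss) = min{ −v_t − (1/2) σ² s² v_ss ;  b v − s v_s ;  a v + s v_s },

so that F(t, s, u, u_t, u_s, u_ss) = 0 collects the Black-Scholes equation together with the two constraint inequalities established above.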


Chapter 3
Super-Replication with Gamma Constraints

This chapter is taken from Cheridito-Soner-Touzi [7, 8] and Soner-Touzi [19]. For brevity of presentation, we again consider a market with only one stock, and assume that the mean return rate µ = 0, by a Girsanov transformation, and that the interest rate r = 0, by appropriately discounting all prices. Then the stock price S(·) follows

    dS(t) = σ S(t) dW(t),

and the wealth process Z(·) solves

    dZ(t) = Y(t) dS(t),

where the "portfolio process" Y(·) is a semimartingale,

    dY(t) = dA(t) + γ(t) dS(t).

The control processes A(·) and γ(·) are required to satisfy

    Γ_*(S(t)) ≤ γ(t) ≤ Γ^*(S(t))   ∀ t, a.s.,

for given functions Γ_* and Γ^*. The triplet ν = (Y(t) = y, A(·), γ(·)) is the control. Further restrictions on A(·) will be specified later. Then the minimal super-replicating cost v(t, s) is

    v(t, s) = inf{ z : ∃ ν ∈ A s.t. Z^ν_{t,z}(T) ≥ G(S_{t,s}(T)) a.s. }.
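To make these dynamics concrete, here is a small Euler-type simulation sketch (not part of the notes); the function names, the choice dA = a_rate dt, and the constant target gamma are illustrative assumptions only.

    import numpy as np

    def simulate_wealth(z0, y0, s0, sigma, T, n_steps, a_rate, gamma_target,
                        gamma_low, gamma_up, seed=0):
        # Euler discretization of dS = sigma*S dW, dZ = Y dS, dY = dA + gamma dS,
        # with gamma clipped to the admissible band [gamma_low(S), gamma_up(S)].
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        S, Z, Y = s0, z0, y0
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            dS = sigma * S * dW
            gamma = min(max(gamma_target(S), gamma_low(S)), gamma_up(S))  # enforce the constraint
            Z += Y * dS                         # dZ = Y dS
            Y += a_rate(S) * dt + gamma * dS    # dY = dA + gamma dS, with dA = a_rate dt
            S += dS
        return Z, S

    # Illustrative run: a constant target gamma held inside the band [-1, 1].
    Z_T, S_T = simulate_wealth(z0=1.0, y0=0.5, s0=1.0, sigma=0.2, T=1.0, n_steps=252,
                               a_rate=lambda s: 0.0, gamma_target=lambda s: 0.3,
                               gamma_low=lambda s: -1.0, gamma_up=lambda s: 1.0)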


3.1 Pure Upper Bound Case

We consider the case with no lower bound, i.e., Γ_* ≡ −∞. Since we formally expect Y^*(t) = v_s(t, S(t)) to be the optimal portfolio, we also expect γ^*(t) = v_ss(t, S(t)). Hence, the gamma constraint translates into the differential inequality

    v_ss(t, s) ≤ Γ^*(s).

In view of the portfolio constraint example, the expected DPE is

    min{ −v_t − (1/2) σ² s² v_ss ;  −s² v_ss + s² Γ^*(s) } = 0.   (3.1.1)

We could eliminate the s² factor in the second term, but we choose to write the equation this way for reasons that will become clear later.

Theorem 3.1.1 Assume G is non-negative, lower semicontinuous and grows at most linearly. Then v is a viscosity solution of (3.1.1).

We will prove the supersolution and subsolution parts separately.
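To see why an upper gamma bound must affect the terminal condition, recall the Black-Scholes call price with r = 0 (a standard formula, not restated in the notes at this point), whose gamma is

    C_ss(t, s) = φ(d₁) / ( s σ √(T − t) ),   d₁ = [ ln(s/K) + (1/2) σ² (T − t) ] / ( σ √(T − t) ),

where φ is the standard normal density. At s = K this behaves like 1 / ( K σ √(2π (T − t)) ) and blows up as t ↑ T, so any finite bound Γ^*(s) is eventually violated near maturity; this is exactly why the terminal payoff has to be replaced by a modified function Ĝ, as shown in Section 3.1.3 below.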

3.1.1 Supersolution

Consider a test function ϕ and (t₀, s₀) ∈ (0, T) × (0, ∞) so that

    (v_* − ϕ)(t₀, s₀) = min (v_* − ϕ) = 0.

Choose (t_n, s_n) → (t₀, s₀) so that v(t_n, s_n) → v_*(t₀, s₀). Further, choose ν_n = (y_n, A_n(·), γ_n(·)) so that, with z_n := v(t_n, s_n) + 1/n²,

    Z^{ν_n}_{t_n, z_n}(t_n + 1/n) ≥ v(t_n + 1/n, S_{t_n, s_n}(t_n + 1/n))   a.s.

The existence of ν_n follows from the dynamic programming principle with stopping time θ_n = t_n + 1/n. Set θ_n := t_n + 1/n and S_n(·) := S_{t_n, s_n}(·). Since v ≥ v_* ≥ ϕ, Itô's rule yields

    z_n + ∫_{t_n}^{θ_n} Y^{ν_n}_{t_n, y_n}(u) dS(u) ≥ ϕ(t_n, s_n) + ∫_{t_n}^{θ_n} [ Lϕ(u, S_n(u)) du + ϕ_s(u, S_n(u)) dS(u) ].   (3.1.2)


Taking the expected value implies

    n E ∫_{t_n}^{θ_n} ( −Lϕ(u, S_n(u)) ) du ≥ n [ϕ(t_n, s_n) − z_n] = n [ϕ − v](t_n, s_n) − 1/n.

We may choose (t_n, s_n) so that n [ϕ − v](t_n, s_n) → 0. Hence, by taking the limit we obtain

    −Lϕ(t₀, s₀) ≥ 0.

To obtain the second inequality, we return to (3.1.2) and use Itô's rule on ϕ_s together with the dynamics of Y(·). The result is

    α_n + ∫_{t_n}^{θ_n} a_n(u) du + ∫_{t_n}^{θ_n} ∫_{t_n}^{u} b_n(t) dt dS(u) + ∫_{t_n}^{θ_n} ∫_{t_n}^{u} c_n(t) dS(t) dS(u) + ∫_{t_n}^{θ_n} d_n dS(u) ≥ 0,

where

    α_n       = z_n − ϕ(t_n, s_n),
    a_n(u)    = −Lϕ(u, S_n(u)),
    b_n(u) du = dA_n(u) − Lϕ_s(u, S_n(u)) du,
    c_n(u)    = γ_n(u) − ϕ_ss(u, S_n(u)),
    d_n       = y_n − ϕ_s(t_n, s_n).
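One way to see where this display comes from (a restatement, not in the original at this point): applying Itô's rule to ϕ_s(·, S_n(·)) and using dY_n = dA_n + γ_n dS gives

    Y^{ν_n}_{t_n, y_n}(u) − ϕ_s(u, S_n(u)) = d_n + ∫_{t_n}^{u} b_n(t) dt + ∫_{t_n}^{u} c_n(t) dS(t),

and substituting this decomposition into (3.1.2) and rearranging yields the inequality above.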

We now need to define the set of admissible controls in a precise way and then use our results on double stochastic integrals. We refer to the paper by Cheridito-Soner-Touzi for the definition of the set of admissible controls. We can then use the results of the next section to conclude that

    −Lϕ(t₀, s₀) = lim sup_{u↓t₀} a_n(u) ≥ 0,   and   lim sup_{u↓t₀} c_n(u) ≥ 0.

In particular, this implies that

    ϕ_ss(t₀, s₀) ≤ Γ^*(s₀).

Hence, v is a viscosity supersolution of

    min{ −v_t − (1/2) σ² s² v_ss ;  −s² v_ss + s² Γ^*(s) } ≥ 0.

3.1.2 Subsolution

Consider a smooth ϕ and (t₀, s₀) so that

    (v^* − ϕ)(t₀, s₀) = max (v^* − ϕ) = 0.

Suppose that

    min{ −ϕ_t(t₀, s₀) − (1/2) σ² s₀² ϕ_ss(t₀, s₀) ;  −s₀² ϕ_ss(t₀, s₀) + s₀² Γ^*(s₀) } > 0.

We will obtain a contradiction to prove the subsolution property. We proceed exactly as in the constrained portfolio case, using the controls

    y = ϕ_s(t₀, s₀),   dA(u) = Lϕ_s(u, S(u)) du,   γ(u) = ϕ_ss(u, S(u)).

Then, in a neighborhood of (t₀, s₀), Y(u) = ϕ_s(u, S(u)) and γ(·) satisfies the constraint. Since −Lϕ > 0 in this neighborhood, we can proceed exactly as in the constrained portfolio problem.

3.1.3 Terminal Condition

In this subsection, we will show that the terminal condition is satisfied with a modified function Ĝ. This result is similar to the one in the portfolio constraint case, and the chief reason for modifying the terminal data is to make it compatible with the constraints. Assume

    0 ≤ s² Γ^*(s) ≤ c^* (s² + 1).   (3.1.3)

Theorem 3.1.2 Under the previous assumptions on G,

    lim_{t₀→T, s₀→s} v(t₀, s₀) = Ĝ(s),

where Ĝ is the smallest function satisfying (i) Ĝ ≥ G and (ii) Ĝ_ss(s) ≤ Γ^*(s). Let γ^*(s) be a smooth function satisfying

    γ^*_ss(s) = Γ^*(s).


Then,

    Ĝ(s) = h^conc(s) + γ^*(s),   h(s) := G(s) − γ^*(s),

where h^conc(s) is the concave envelope of h, i.e., the smallest concave function above h.

Proof: Consider a sequence (t_n, s_n) → (T, s₀). Choose ν_n ∈ A(t_n) so that

    Z^{ν_n}_{t_n, v(t_n, s_n) + 1/n}(T) ≥ G(S_{t_n, s_n}(T)).

Take the expected value to arrive at

    v(t_n, s_n) + 1/n ≥ E( G(S_{t_n, s_n}(T)) | F_{t_n} ).

Hence, by Fatou's lemma,

    V(s₀) := lim inf_{(t_n, s_n)→(T, s₀)} v(t_n, s_n) ≥ G(s₀).

Moreover, since v is a supersolution of (3.1.1), for every t,

    v_ss(t, s) ≤ Γ^*(s)   in the viscosity sense.

By the stability of the viscosity property,

    V_ss(s) ≤ Γ^*(s)   in the viscosity sense.

Hence, we have proved that V is a viscosity supersolution of

    min{ V(s) − G(s) ;  −V_ss(s) + Γ^*(s) } ≥ 0.

Set w(s) = V(s) − γ^*(s). Then w is a viscosity supersolution of

    min{ w(s) − h(s) ;  −w_ss(s) } ≥ 0.

Therefore, w is concave and w ≥ h. Hence w ≥ h^conc, and consequently

    V(s) = w(s) + γ^*(s) ≥ h^conc(s) + γ^*(s) = Ĝ(s).

To prove the reverse inequality, fix (t, s). For λ > 0 set

    γ^λ(t, s) := (1 − λ) γ^*(s) + c (T − t) s²/2.

Set

    γ(u) = γ^λ_ss(u, S(u)),   dA(u) = Lγ^λ_s(u, S(u)) du,   y₀ = γ^λ_s(t, s) + p₀,

where p₀ is any point in the superdifferential of the concave function h^{λ,conc} at s, with h^λ := G − (1 − λ) γ^*. Then,

    p₀ (s' − s) + h^{λ,conc}(s) ≥ h^{λ,conc}(s'),   ∀ s'.

For any z, consider Z^ν_{t,z} with ν = (y₀, A(·), γ(·)). Note that, for t near T, ν ∈ A(t), and

    Z^ν_{t,z}(T) = z + ∫_t^T ( γ^λ_s(t, s) + p₀ ) dS(u) + ∫_t^T [ ∫_t^u Lγ^λ_s(r, S(r)) dr + ∫_t^u γ^λ_ss(r, S(r)) dS(r) ] dS(u)
                 = z + p₀ (S(T) − s) + ∫_t^T γ^λ_s(u, S(u)) dS(u)
                 ≥ z + h^{λ,conc}(S(T)) − h^{λ,conc}(s) + γ^λ(T, S(T)) − γ^λ(t, s) − ∫_t^T Lγ^λ(u, S(u)) du.

We directly calculate that, with c^* as in (3.1.3),

    Lγ^λ(t, s) = −(c/2) s² + (1/2) σ² s² ( (1 − λ) Γ^*(s) + c (T − t) )
               = −(s²/2) { c − σ² [ (1 − λ) Γ^*(s) + c (T − t) ] }
               ≤ −(s²/2) { c − σ² [ (1 − λ) c^* + c (T − t) ] } + (1/2) σ² c^*
               ≤ (1/2) σ² c^*,

provided that t is sufficiently near T and c is sufficiently large. Hence, with

    z₀ = h^{λ,conc}(s) + γ^λ(t, s) + (1/2) σ² c^* (T − t),

we have

    Z^ν_{t,z₀}(T) ≥ h^{λ,conc}(S(T)) + γ^λ(T, S(T)) ≥ G(S(T)) − (1 − λ) γ^*(S(T)) + (1 − λ) γ^*(S(T)) = G(S(T)).

Therefore,

    z₀ = h^{λ,conc}(s) + γ^λ(t, s) + (1/2) σ² c^* (T − t) ≥ v(t, s),

for all λ > 0, c large, and t sufficiently close to T. Hence, for every λ > 0,

    lim sup_{(t,s)→(T,s₀)} v(t, s) ≤ h^{λ,conc}(s₀) + γ^λ(T, s₀)
                                  = h^{λ,conc}(s₀) + (1 − λ) γ^*(s₀)
                                  = ( G − (1 − λ) γ^* )^conc(s₀) + (1 − λ) γ^*(s₀)
                                  =: Ĝ^λ(s₀).

It is easy to prove that

    lim_{λ↓0} Ĝ^λ(s₀) = Ĝ(s₀).

□
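The formula Ĝ = h^conc + γ^* of Theorem 3.1.2 is easy to evaluate on a grid. The following Python sketch (not part of the notes) does this for an illustrative call payoff and a constant gamma bound Γ^*(s) ≡ Γ̄, for which one may take γ^*(s) = Γ̄ s²/2.

    import numpy as np

    def concave_envelope(x, y):
        # Upper concave envelope of the points (x[i], y[i]), x strictly increasing,
        # evaluated back on the grid x via an upper monotone-chain hull.
        hull = []
        for xi, yi in zip(x, y):
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                # drop the middle point if it lies on or below the chord from (x1, y1) to (xi, yi)
                if (yi - y1) * (x2 - x1) >= (y2 - y1) * (xi - x1):
                    hull.pop()
                else:
                    break
            hull.append((xi, yi))
        hx = np.array([p[0] for p in hull])
        hy = np.array([p[1] for p in hull])
        return np.interp(x, hx, hy)

    def facelift_gamma(G_vals, s_grid, gamma_bar):
        # G_hat = (G - gamma_star)^conc + gamma_star with gamma_star(s) = gamma_bar * s^2 / 2.
        gamma_star = 0.5 * gamma_bar * s_grid**2
        return concave_envelope(s_grid, G_vals - gamma_star) + gamma_star

    # Illustration: call payoff with strike K = 1 and gamma bound 2.
    s = np.linspace(0.0, 3.0, 601)
    G_hat = facelift_gamma(np.maximum(s - 1.0, 0.0), s, gamma_bar=2.0)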

3.2 Double Stochastic Integrals

In this section, we study the asymptotic behavior of certain double stochastic integrals as t ↓ 0. These properties were used in the derivation of the viscosity property of the value function. Here we only give some of the results and proofs; a detailed discussion is given in the recent manuscript of Cheridito, Soner and Touzi.

For a predictable, bounded process b, let

    V^b(t) := ∫_0^t ∫_0^u b(ρ) dW(ρ) dW(u),   h(t) := 2 t ln ln(1/t).

For another predictable, bounded process m, let

    M(t) := ∫_0^t m(u) dW(u),   V^b_m(t) := ∫_0^t ∫_0^u b(ρ) dM(ρ) dM(u).

In the easy case when b(t) = β, t ≥ 0, for some constant β ∈ ℝ, we have

    V^b(t) = (β/2) ( W(t)² − t ),   t ≥ 0.
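This identity is just Itô's formula, ∫_0^t W dW = (W(t)² − t)/2, and it is easy to confirm numerically. The sketch below (not part of the notes) discretizes the double stochastic integral for constant b ≡ β.

    import numpy as np

    def check_constant_b_identity(beta=1.5, t=1.0, n_steps=100_000, seed=0):
        rng = np.random.default_rng(seed)
        dW = rng.normal(0.0, np.sqrt(t / n_steps), size=n_steps)
        W = np.concatenate(([0.0], np.cumsum(dW)))          # W at the grid points
        # V^b(t) = int_0^t ( int_0^u beta dW(rho) ) dW(u) ~= sum_k beta * W(u_k) * dW_k
        V = float(np.sum(beta * W[:-1] * dW))
        return V, 0.5 * beta * (W[-1]**2 - t)

    print(check_constant_b_identity())   # the two numbers agree up to discretization error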

If β ≥ 0, it follows from the law of the iterated logarithm for Brownian motion that

    lim sup_{t↘0} 2 V^β(t) / h(t) = β,   (3.2.4)

where h(t) := 2 t log log(1/t), t > 0, and the equality in (3.2.4) is understood in the almost sure sense. On the other hand, if β < 0, it can be deduced from the fact that almost all paths of a one-dimensional standard Brownian motion cross zero on all time intervals (0, ε], ε > 0, that

    lim sup_{t↘0} 2 V^β(t) / t = −β.   (3.2.5)

The purpose of this section is to derive formulae similar to (3.2.4) and (3.2.5) when b = {b(t), t ≥ 0} is a predictable matrix-valued stochastic process.

Lemma 3.2.1 Let λ and T be two positive parameters with 2λT < 1, and let {b(t), t ≥ 0} be an M_d-valued, F-predictable process such that |b(t)| ≤ 1 for all t ≥ 0. Then

    E[ exp( 2λ V^b(T) ) ] ≤ E[ exp( 2λ V^{I_d}(T) ) ].
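A rough one-dimensional Monte Carlo illustration of the lemma (not part of the notes): take d = 1 and b(t) = sin(W(t)), so that |b| ≤ 1, with 2λT = 1/2 < 1. The comparison then holds up to Monte Carlo error.

    import numpy as np

    def mc_exp_moments(lam=0.25, T=1.0, n_steps=400, n_paths=50_000, seed=0):
        # Compare E[exp(2*lam*V^b(T))] for b(t) = sin(W(t)) with the b = 1 benchmark.
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        W = np.zeros(n_paths)        # W(u) at the current grid point
        inner = np.zeros(n_paths)    # int_0^u sin(W(rho)) dW(rho)
        V_b = np.zeros(n_paths)      # int_0^u inner dW(u)
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            V_b += inner * dW                  # left-endpoint (Ito) discretization
            inner += np.sin(W) * dW
            W += dW
        V_id = 0.5 * (W**2 - T)                # b = 1: V^{I_d}(T) = (W(T)^2 - T)/2
        return float(np.mean(np.exp(2 * lam * V_b))), float(np.mean(np.exp(2 * lam * V_id)))

    print(mc_exp_moments())   # the first value should not exceed the second, up to MC error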

0

where Y (0) ∈