arXiv:1205.0672v1 [math.PR] 3 May 2012

The Annals of Applied Probability 2012, Vol. 22, No. 2, 608–669. DOI: 10.1214/11-AAP781. © Institute of Mathematical Statistics, 2012.

DOWNSIDE RISK MINIMIZATION VIA A LARGE DEVIATIONS APPROACH

By Hideo Nagai (Osaka University)

We consider minimizing the probability of falling below a target growth rate of the wealth process up to a time horizon T in an incomplete market model, and then study the asymptotic behavior of the minimizing probability as T → ∞. This problem is closely related to an ergodic risk-sensitive stochastic control problem in the risk-averse case. Indeed, in our main theorem we relate the former problem, concerning the asymptotics of risk minimization, to the latter as its dual. As a result, we obtain an expression for the limit value of the probability as the Legendre transform of the value of the control problem, which is characterized as the solution to an H-J-B equation of ergodic type, in the case of a Markovian incomplete market model.

1. Introduction. Risk management is a main topic in the study of finance. In the present paper, we consider the problem of minimizing the downside risk associated with an investor's total wealth in a certain incomplete market model. More precisely, let $S_t^0$ be the price of a riskless asset with the dynamics $dS_t^0 = r_t S_t^0\,dt$, let $(S_t^1, \ldots, S_t^m)$ be the prices of the risky assets, and let $N_t^i$, $i = 0, \ldots, m$, be the number of shares of the $i$th security. Then the total wealth that the investor possesses is defined as
$$V_t = \sum_{i=0}^m N_t^i S_t^i,$$
and we assume a self-financing condition,
$$dV_t = \sum_{i=0}^m N_t^i\, dS_t^i.$$
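As a toy illustration of the object studied in this paper, the following Monte Carlo sketch estimates the downside probability $P\big(\frac1T\log\frac{V_T(h)}{S_T^0}\le\kappa\big)$ for a constant-proportion strategy in a Black–Scholes market. All parameters (μ, r, σ) and the exact Gaussian law of the log-wealth ratio are assumptions of this sketch, not the paper's incomplete-market model.

```python
import numpy as np

# Toy Black-Scholes market (illustrative assumptions): dS/S = mu dt + sigma dW,
# riskless rate r, and a constant proportion h of wealth held in the risky asset.
mu, r, sigma = 0.08, 0.02, 0.2
T, kappa = 30.0, 0.01
rng = np.random.default_rng(0)

def downside_prob(h, n_paths=200_000):
    """Monte Carlo estimate of P((1/T) log(V_T(h)/S_T^0) <= kappa).

    For constant h, log(V_T(h)/S_T^0) is Gaussian with mean
    (h*(mu - r) - h^2*sigma^2/2)*T and variance h^2*sigma^2*T,
    so the terminal value can be sampled exactly."""
    m = h * (mu - r) - 0.5 * h**2 * sigma**2
    z = rng.standard_normal(n_paths)
    g = m * T + h * sigma * np.sqrt(T) * z   # log(V_T / S_T^0)
    return np.mean(g / T <= kappa)

# The trivial strategy h = 0 (all wealth in the riskless asset) gives
# log(V_T/S_T^0) = 0 <= kappa, so the probability is exactly 1.
print(downside_prob(0.0))   # 1.0
print(downside_prob(0.5))   # strictly below 1 for a risky position
```

With $h=0$ (the benchmark strategy discussed below) the probability is identically 1; a risky position can push it strictly below 1, and the large deviations analysis quantifies the attainable exponential decay rate.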

Received September 2009; revised April 2011.
Supported in part by a Grant-in-Aid for Scientific Research 20340019, JSPS.
AMS 2000 subject classifications. 35J60, 49L20, 60F10, 91B28, 93E20.
Key words and phrases. Large deviation, long-term investment, risk-sensitive stochastic control, H-J-B equation of ergodic type.

This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Applied Probability, 2012, Vol. 22, No. 2, 608–669. This reprint differs from the original in pagination and typographic detail.


When setting the proportion of the portfolio invested in the $i$th security as $h_t^i = N_t^i S_t^i / V_t$, we have
$$\frac{dV_t}{V_t} = \sum_{i=0}^m h_t^i \frac{dS_t^i}{S_t^i},$$
and the total wealth is denoted by $V_t = V_t(h)$, the solution to this stochastic differential equation for a given strategy $h_t$. Let us consider minimizing the probability
$$P\left(\frac{1}{T}\log\frac{V_T(h)}{S_T^0} \le \kappa\right) \tag{1.1}$$
for a given target growth rate $\kappa$ by selecting the portfolio choice $h$. Let us make clear the meaning of this probability. If we choose the strategy $(h_t^0, h_t^1, \ldots, h_t^m) = (1, 0, \ldots, 0)$, then we have
$$d\log V_t = \frac{dV_t}{V_t} = \frac{dS_t^0}{S_t^0} = d\log S_t^0.$$
Thus the probability is always 1 for large time $T$ and $\kappa > 0$. Accordingly, in considering the above minimization, we investigate the extent to which we can improve the probability by selecting a strategy, as compared with the trivial strategy of investing the total wealth in the riskless asset. The latter strategy is considered the benchmark in terms of finance. We shall consider the asymptotic behavior of the probability
$$\lim_{T\to\infty}\frac{1}{T}\inf_{h_\cdot}\log P\left(\frac{1}{T}\log\frac{V_T(h)}{S_T^0} \le \kappa\right). \tag{1.2}$$
According to the theory of large deviations, it is natural to relate (1.2) to
$$\hat\chi(\gamma) := \lim_{T\to\infty}\frac{1}{T}\inf_{h_\cdot}\log E\big[e^{\gamma\log(V_T(h)/S_T^0)}\big] \tag{1.3}$$
for $\gamma < 0$. Namely, as $T \to \infty$,
$$\frac{1}{T}\inf_{h_\cdot}\log P\left(\frac{1}{T}\log\frac{V_T(h)}{S_T^0} \in (-\infty,\kappa]\right) \to -\inf_{k\in(-\infty,\kappa]}\sup_{\gamma<0}\{\gamma k - \hat\chi(\gamma)\}.$$

Here $|f|_{2\rho} = \sup_{x\in B_{2\rho}}|f(x)|$, $C$ is a universal constant and $B_\rho = \{x \in \mathbb R^n;\ |x| < \rho\}$.

For $\hat h(t, x)$, we consider the stochastic differential equation

$$dX_t = \{\beta(X_t) + \gamma\lambda\sigma^*(X_t)\hat h(t, X_t)\}\,dt + \lambda(X_t)\,dW_t^{\hat h}, \qquad X_0 = x,$$
and define $\hat h_t := \hat h(t, X_t)$ for the solution $X_t$ of this stochastic differential equation. Note that the solution of this stochastic differential equation is obtained by a change of measure from the solution of (2.3). Indeed, we can see that $\nabla v$ has at most linear growth under assumptions (2.10) and (2.11) from the above gradient estimates, and therefore
$$E\Big[e^{\gamma\int_0^T \hat h(s,X_s)^*\sigma(X_s)\,dW_s - (\gamma^2/2)\int_0^T \hat h(s,X_s)^*\sigma\sigma^*(X_s)\hat h(s,X_s)\,ds}\Big] = 1$$
holds. Thus
$$P^{\hat h}(A) := E\Big[e^{\gamma\int_0^T \hat h(s,X_s)^*\sigma(X_s)\,dW_s - (\gamma^2/2)\int_0^T \hat h(s,X_s)^*\sigma\sigma^*(X_s)\hat h(s,X_s)\,ds};\ A\Big]$$


defines a probability measure. Under this measure, $X_t$ turns out to be a solution of the above stochastic differential equation. The following is a verification theorem, the proof of which is almost the same as the proof of Proposition 2.1 in [28], and thus is omitted here.

Proposition 2.1 ([28]). Assume (2.10) and (2.11). Then $\hat h^{(\gamma,T)} \equiv \hat h_t := \hat h(t, X_t) \in \mathcal A(T)$ and it is optimal:
$$v(0,x) = \inf_{h_\cdot} \log E\big[e^{\gamma(\log V_T(h) - \log S_T^0)}\big] = \log E\big[e^{\gamma(\log V_T(\hat h) - \log S_T^0)}\big]. \tag{2.17}$$
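The normalization $E[e^{\gamma\int\hat h^*\sigma\,dW - (\gamma^2/2)\int\hat h^*\sigma\sigma^*\hat h\,ds}] = 1$ above is exactly what makes $P^{\hat h}$ a probability measure. A minimal Monte Carlo sanity check with a constant integrand (the constant $a$ stands in for the role of $\gamma\,\sigma^*\hat h$; the constant-integrand simplification is an assumption of this sketch, not the paper's setting):

```python
import numpy as np

# Girsanov-type normalization check (toy, constant integrand):
# E[exp(a*W_T - a^2*T/2)] = 1, since exp(a*W_T - a^2*T/2) is the density
# of an equivalent measure change.  Here W_T ~ N(0, T).
rng = np.random.default_rng(1)
a, T, n = -0.5, 1.0, 1_000_000
w_T = np.sqrt(T) * rng.standard_normal(n)
z = np.exp(a * w_T - 0.5 * a**2 * T)
print(z.mean())   # close to 1 up to Monte Carlo error
```

The sample mean concentrates around 1, illustrating why $P^{\hat h}(A)$ defined through this exponential has total mass one.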

Let us consider an H-J-B equation of ergodic type that is thought to be the limit equation of (2.14). Namely,
$$\chi = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 w] + \beta_\gamma^* Dw + \tfrac12 (Dw)^*\lambda N_\gamma^{-1}\lambda^* Dw - U_\gamma. \tag{2.18}$$
Set
$$G(x) := \beta(x) - \lambda\sigma^*(\sigma\sigma^*)^{-1}\hat\alpha(x),$$
and assume that
$$G(x)^* x \le -c_G|x|^2 + c'_G, \qquad c_G,\ c'_G > 0, \tag{2.19}$$
and
$$\hat\alpha^*(\sigma\sigma^*)^{-1}\hat\alpha \to \infty \qquad \text{as } |x| \to \infty. \tag{2.20}$$

Under these assumptions we have a solution to the H-J-B equation of ergodic type; the proof is given in Proposition 3.1 in Section 3.

Proposition 2.2. Assume (2.10), (2.11), (2.19) and (2.20). Then (2.18) has a solution $(\chi, w^{(\gamma)})$ such that $w \in C^2(\mathbb R^n)$,
$$w(x) \to -\infty \qquad \text{as } |x| \to \infty,$$
and the solution satisfying this condition is unique up to additive constants with respect to $w$.

We further assume that
$$\hat\alpha^*(\sigma\sigma^*)^{-1}\hat\alpha \ge c_0|x|^2 - c'_0, \qquad c_0,\ c'_0 > 0. \tag{2.21}$$

Then we have the following theorem; the proof is given after Proposition 4.2 in Section 4.

Theorem 2.2. Under assumptions (2.10), (2.11), (2.19) and (2.21), we have
$$\hat\chi(\gamma) = \lim_{T\to\infty}\frac{1}{T}\, v(0,x;T) = \chi(\gamma).$$

The following results are important to prove our main results; the proofs are given in Lemma 6.3, Lemma 7.1 and Corollary 4.1.


Theorem 2.3. Let $(\chi, w^{(\gamma)})$ be a solution to (2.18). Then, under the assumptions of Theorem 2.2, $\chi(\gamma)$ and $w^{(\gamma)}$ are differentiable with respect to $\gamma$, and $\chi(\gamma)$ is convex. Their derivatives satisfy
$$\chi'(\gamma) = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 w_\gamma] + \big(\beta_\gamma^* + (Dw^{(\gamma)})^*\lambda N_\gamma^{-1}\lambda^*\big) Dw_\gamma + \frac{1}{2(1-\gamma)^2}\{\hat\alpha + \sigma\lambda^* Dw^{(\gamma)}\}^*(\sigma\sigma^*)^{-1}\{\hat\alpha + \sigma\lambda^* Dw^{(\gamma)}\}, \tag{2.22}$$
where $w_\gamma = \frac{\partial w^{(\gamma)}}{\partial\gamma}$. Furthermore,
$$\lim_{\gamma\to-\infty}\chi'(\gamma) = 0.$$
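The convexity of $\chi(\gamma)$ in Theorem 2.3 is what makes the Legendre-transform duality between (1.2) and (1.3) workable. A toy numerical sketch, assuming a Gaussian growth model with $\hat\chi(\gamma) = \gamma\mu + \gamma^2\sigma^2/2$ (an assumption of this sketch, not the paper's $\hat\chi$); in that model $-\sup_{\gamma<0}\{\gamma\kappa - \hat\chi(\gamma)\}$ has the closed form $-(\kappa-\mu)^2/(2\sigma^2)$ for $0 < \kappa < \mu$:

```python
import numpy as np

# Toy duality sketch: if (1/T) log(V_T/S_T^0) behaves like N(mu, sigma^2/T),
# then chi_hat(gamma) = gamma*mu + gamma^2*sigma^2/2 (convex), and the
# large-deviation rate is its Legendre transform over gamma < 0.
mu, sigma = 0.05, 0.2

def chi_hat(g):
    return g * mu + 0.5 * g**2 * sigma**2

def J(kappa, grid=np.linspace(-50.0, -1e-6, 2_000_001)):
    """J(kappa) = -sup_{gamma<0} {gamma*kappa - chi_hat(gamma)} on a grid."""
    return -np.max(grid * kappa - chi_hat(grid))

kappa = 0.02                                   # inside (0, mu)
exact = -(kappa - mu)**2 / (2 * sigma**2)      # Gaussian (Cramer) rate
print(J(kappa), exact)
```

The maximizer $\gamma^* = (\kappa-\mu)/\sigma^2$ is interior to $(-\infty,0)$ exactly when $0 < \kappa < \mu = \hat\chi'(0-)$, mirroring the role of $\hat\chi'(0-)$ in Theorem 2.4 below being the right endpoint of the admissible target rates.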

Remark 2.2. It is important to know the limit value $\lim_{\gamma\to-\infty}\chi'(\gamma)$, since it determines the left endpoint of the interval of the target growth rate $\kappa$ which makes $J(\kappa)$ finite. Here we compare the results above with those to be expected for the case without a benchmark, considering the asymptotics
$$\check\chi(\gamma) := \lim_{T\to\infty}\frac{1}{T}\inf_{h_\cdot}\log E\big[e^{\gamma\log V_T(h)}\big].$$
The H-J-B equation of ergodic type for this problem becomes
$$\check\chi = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\check w] + \beta_\gamma^* D\check w + \tfrac12(D\check w)^*\lambda N_\gamma^{-1}\lambda^* D\check w - U_\gamma + \gamma r(x),$$
and we can obtain its derivative
$$\check\chi'(\gamma) = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\check w_\gamma] + \big(\beta_\gamma^* + (D\check w)^*\lambda N_\gamma^{-1}\lambda^*\big) D\check w_\gamma + \frac{1}{2(1-\gamma)^2}\{\hat\alpha + \sigma\lambda^* D\check w\}^*(\sigma\sigma^*)^{-1}\{\hat\alpha + \sigma\lambda^* D\check w\} + r$$
through almost the same arguments as those provided to obtain the results in the present article. The difference appears in considering the asymptotics of $\check\chi'(\gamma)$ as $\gamma \to -\infty$. Indeed,
$$\lim_{\gamma\to-\infty}\check\chi'(\gamma) = \lim_{\gamma\to-\infty}\int r(x)\,\check m_\gamma(dx) < \infty$$
could be seen as in [22], where $\check m_\gamma(dx)$ is the invariant measure of the $\check L$-diffusion process and $\check L$ is defined by
$$\check L\psi = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\psi] + (\beta_\gamma + \lambda N_\gamma^{-1}\lambda^* D\check w)^* D\psi.$$
Note that $\check L$ corresponds to $\bar L$ defined by (4.16) in the present paper and can be shown to be ergodic under suitable conditions in a manner similar to the proof of Proposition 4.3.

Now we can state our main theorems. The proofs are given in Sections 7 and 8.


Theorem 2.4. Under the assumptions of Theorem 2.2, for $0 < \kappa < \hat\chi'(0-)$,
$$J(\kappa) = -\inf_{k\in(-\infty,\kappa]}\sup_{\gamma<0}\{\gamma k - \hat\chi(\gamma)\} = -\sup_{\gamma<0}\{\gamma\kappa - \hat\chi(\gamma)\}. \tag{2.23}$$

From the maximum principle as well as (3.7), we see that $0 \le v_\varepsilon \le \varphi_\varepsilon$. □

Set
$$\psi_\delta(x) := e^{\delta|x|^2}, \qquad \delta > 0.$$

Then, by taking $\delta$ sufficiently small, we can see that there exists $R_1$ such that, for $R > R_1$,
$$L\psi_\delta(x) < -1 \qquad \text{in } B_R^c.$$

Therefore, we see that $L$ and $\psi_\delta$ satisfy assumption (A.3) in the last section. Set $K(x; \psi_\delta) = -L\psi_\delta$.

Then $\operatorname{tr}[\lambda\lambda^* D^2\tau] \ge -4n c_2/r^2$, $(D\tau)^*\lambda\lambda^* D\tau \le (16c_2/r^2)\tau$ and $|D\tau|^2 \le (16c_2/(c_1 r^2))\tau$. Let $\bar x$ be the maximum point of $\tau F$ in $B_r(x_0)$. Then $D(\tau F)(\bar x) = 0$ and $\operatorname{tr}[\lambda\lambda^* D^2(\tau F)](\bar x) \le 0$. Therefore, from the maximum principle we have
$$0 \le -\tfrac12(\lambda\lambda^*)^{ij} D_{ij}(\tau F) - \beta^i D_i(\tau F) + Q^{ij} D_j w\, D_i(\tau F)$$
$$= \tau\Big(-\tfrac12(\lambda\lambda^*)^{ij} D_{ij}F - \beta^i D_i F + Q^{ij} D_j w\, D_i F\Big) - \tfrac12(\lambda\lambda^*)^{ij} D_{ij}\tau\, F - (\lambda\lambda^*)^{ij} D_i\tau D_j F - (\beta^i D_i\tau)F + (Q^{ij} D_j w\, D_i\tau)F$$
$$\le \tau\Big(-\frac{1}{2nc_2}\big(-2\chi - 2\beta^i D_i w + D_i w\, Q^{ij} D_j w - 2U\big)^2 + \frac{2c_2}{\delta}|\nabla w|^2 + 2|\nabla\beta||\nabla w|^2 + |\nabla Q||\nabla w|^3 + 2|\nabla U||\nabla w|\Big) - F\Big(\tfrac12(\lambda\lambda^*)^{ij} D_{ij}\tau - \frac{(\lambda\lambda^*)^{ij} D_i\tau D_j\tau}{\tau} - \beta^i D_i\tau + Q^{ij} D_j w\, D_i\tau\Big).$$
Since $\frac{1}{1-\gamma}\lambda\lambda^* \le Q \le \lambda\lambda^*$, by taking $\delta$ sufficiently small,
$$c(\gamma)|Dw|^2 \le -2\beta^* Dw + (Dw)^* Q\, Dw + \frac1\delta|\beta|^2 \le (c_2+1)|Dw|^2 + \Big(1 + \frac1\delta\Big)|\beta|^2$$
for a positive constant
$$c(\gamma) = \frac{c_1}{1-\gamma} - \delta > 0. \tag{3.15}$$
Therefore, it follows that
$$0 \le -\tau\Big(-2\beta^* Dw + (Dw)^* Q\, Dw + \frac1\delta|\beta|^2\Big)^2 + 2\tau\Big(-2\beta^* Dw + (Dw)^* Q\, Dw + \frac1\delta|\beta|^2\Big)\Big(\frac1\delta|\beta|^2 + 2U + 2\chi\Big) - \tau\Big(\frac1\delta|\beta|^2 + 2U + 2\chi\Big)^2 + \tau\big(2|\nabla\beta||\nabla w|^2 + |\nabla Q||\nabla w|^3 + 2|\nabla U||\nabla w|\big)$$
$$\quad + \frac{2nc_2}{r^2}F + \frac{16c_2}{r^2}F + |\beta|\frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}\tau F^{1/2} + \frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}\tau|Q|F^{3/2}$$
$$\le -\tau^2 c(\gamma)^2|\nabla w|^4 + 2\tau\Big((c_2+1)|\nabla w|^2 + \Big(1 + \frac1\delta\Big)|\beta|^2\Big)\Big(\frac1\delta|\beta|^2 + 2U\Big) + \frac{2nc_2}{r^2}F + \frac{16c_2}{r^2}F + |\beta|\frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}\tau F^{1/2} + \frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}\tau|Q|F^{3/2} + \tau\big(2|\nabla\beta|F + |\nabla Q|F^{3/2} + 2|\nabla U|F^{1/2}\big).$$
We can assume $F \ge |\beta|^2$ and $F \ge |\nabla U|$; thus,
$$0 \le -c(\gamma)^2\tau^2 F^2 + 2\Big(c_2 + 2 + \frac1\delta\Big)\Big(\frac1\delta|\beta|^2 + 2U\Big)\tau F + \Big(\frac{2nc_2}{r^2} + \frac{16c_2}{r^2}\Big)F + |\beta|\frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}\tau F + \frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}\tau|Q|F^{3/2} + \tau\big(2|\nabla\beta|F + |\nabla Q|F^{3/2} + 2F^{3/2}\big).$$
Accordingly, we have
$$0 \le -c(\gamma)^2\tau F + \Big(|\nabla Q| + \frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}|Q| + 2\Big)(\tau F)^{1/2} + 2\Big(c_2 + 2 + \frac1\delta\Big)\Big(\frac1\delta|\beta|^2 + 2U\Big) + \frac{2nc_2}{r^2} + \frac{16c_2}{r^2} + \frac{4|\beta|\sqrt{c_2}}{\sqrt{c_1}\,r} + 2|\nabla\beta|.$$
Therefore, we obtain
$$\frac12 c(\gamma)^2\tau F \le \frac{1}{2c(\gamma)^2}\Big(|\nabla Q| + \frac{4\sqrt{c_2}}{\sqrt{c_1}\,r}|Q| + 2\Big)^2 + c_\delta\Big(\frac1\delta|\beta|^2 + 2U\Big) + \frac{c}{r^2} + \frac{c|\beta|}{r} + 2|\nabla\beta|,$$
with $c_\delta = 2(c_2 + 2 + \frac1\delta)$ and universal constant $c > 0$. Including the case where $|\beta|^2 \ge F$ or $|\nabla U| \ge F$, we obtain
$$F(x_0) = \tau(x_0)F(x_0) \le (\tau F)(\bar x) \le \frac{c}{c(\gamma)^4}\Big(|\nabla Q|^2 + \frac1{r^2}|Q|^2 + c\Big) + \frac{c'_\delta}{c(\gamma)^2}\Big(\frac1{r^2} + |\nabla\beta| + |\beta|^2 + U + \frac{|\beta|}{r} + |\nabla U|\Big),$$
and (3.13) has been proved.


Now let us prove (3.12). For each $\rho > 0$ take a point $x_\rho \in \mathbb R^n$ such that $|x_\rho| = \rho$. Set
$$R(x) = c_\rho\Big(1 - \frac{4|x - x_\rho|^2}{\rho^2}\Big) \qquad \text{in } D_\rho = \Big\{x;\ |x - x_\rho| \le \frac{\rho}{2}\Big\},$$
where $c_\rho$ is a positive constant determined later. Then $R(x) \ge 0$ in $D_\rho$ and $R(x) = 0$ on $\partial D_\rho$. Set $z(x) = \bar w(x) - R(x)$. Then
$$z(x) = \bar w(x) \ge 0, \qquad x \in \partial D_\rho.$$
Note that
$$\xi^*\lambda N_\gamma^{-1}\lambda^*\xi \le c_2|\xi|^2, \qquad \xi \in \mathbb R^n.$$
Then we have
$$-\chi - \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 z] - \beta^* Dz = -\tfrac12(D\bar w)^*\lambda N_\gamma^{-1}\lambda^* D\bar w + U_\gamma + \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 R] + \beta^* DR$$
$$= -\tfrac12 D(\bar w + R)^*\lambda N_\gamma^{-1}\lambda^* D(\bar w - R) - \tfrac12(DR)^*\lambda N_\gamma^{-1}\lambda^* DR + U_\gamma + \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 R] + \beta^* DR$$
$$\ge -\tfrac12 D(\bar w + R)^*\lambda N_\gamma^{-1}\lambda^* Dz - \frac{c_2}{2}|DR|^2 + U_\gamma + \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 R] + \beta^* DR.$$
Noting that $|\beta_\gamma(x)| \le c\rho$, $x \in D_\rho$, for a positive constant $c$ independent of $\gamma$,
$$-\chi - \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 z] - \beta^* Dz + \tfrac12 D(\bar w + R)^*\lambda N_\gamma^{-1}\lambda^* Dz \ge -\frac{c_2}{2}|DR|^2 + U(x) - \frac{4c_\rho}{\rho^2}\operatorname{tr}[\lambda\lambda^*] - 4cc_\rho$$
$$\ge -\frac{8c_2 c_\rho^2}{\rho^2} + \frac{-\gamma}{2(1-\gamma)}c_0\Big(\frac{|\rho|^2}{4} + 1\Big) - \frac{4c_2 n c_\rho}{\rho^2} - 4cc_\rho$$
$$\ge -\Big(8c_2\frac{c_\rho^2}{\rho^2} + 4c_2 n\frac{c_\rho}{\rho^2} + 4cc_\rho\Big) + \frac{-\gamma_0 c_0}{8(1-\gamma_0)}\rho^2 + \frac{-\gamma_0 c_0}{2(1-\gamma_0)}.$$
By setting $c_\rho = c(\gamma_0)\rho^2$ with $c(\gamma_0)$ such that $8c_2 c(\gamma_0)^2 + 4cc(\gamma_0) < \frac{-\gamma_0 c_0}{8(1-\gamma_0)}$ and $4c_2 n c(\gamma_0) < \frac{-\gamma_0 c_0}{2(1-\gamma_0)}$, we see that
$$-\tfrac12\operatorname{tr}[\lambda\lambda^* D^2 z] - \beta^* Dz + \tfrac12 D(\bar w + R)^*\lambda N_\gamma^{-1}\lambda^* Dz \ge M > 0 \qquad \text{in } D_\rho$$
for some positive constant $M$ and sufficiently large $\rho$. Then $z$ is superharmonic in $D_\rho$ and $z(x) \ge 0$, $x \in \partial D_\rho$. Therefore $z(x) \ge 0$, $x \in D_\rho$, from which we have
$$z(x_\rho) = \bar w(x_\rho) - c_\rho \ge 0.$$
Hence, $\bar w(x_\rho) \ge c(\gamma_0)\rho^2$. □

4. H-J-B equations and related stochastic control problems. Let us come back to H-J-B equation (2.15). According to assumption (2.10), we have a positive constant $c_\beta$ such that $|\beta_\gamma(x)|^2 \le c_\beta(|x|^2 + 1)$. We strengthen condition (2.20) to (2.21). Then we have the following lemma.

Lemma 4.1. Assume (2.10), (2.11), (2.21) and $v_0 \ge 1$. Then for each $t < T$ there exist positive constants $k = k(T-t)$ and $k' = k'(T-t)$ such that
$$\bar v(t,x;T) \ge k|x|^2 - k'. \tag{4.1}$$

Proof. Choose a positive constant $c$ such that
$$c_\gamma - \frac{c}{2}c_\beta > 0, \qquad \text{where } c_\gamma = \frac{-\gamma c_0}{2(1-\gamma)},$$
and set $b = c_\gamma - \frac{c}{2}c_\beta$ and
$$R(t,x) := \tfrac12 x^* P(t)x + q(t),$$
where $P(t)$ is a solution to the Riccati equation
$$\dot P(t) - \Big(\frac{c_2}{1-\gamma} + \frac1c\Big)P(t)I_n P(t) + bI_n = 0, \qquad P(T) = 0, \tag{4.2}$$
and $q(t)$ is a solution to the ordinary differential equation
$$\dot q(t) + \frac{c_1}{2}\operatorname{tr}[P(t)] - \frac{cc_\beta}{2} - c'_\gamma = 0, \qquad q(T) = -\gamma\log v_0, \tag{4.3}$$
where $c'_\gamma = \frac{-\gamma c'_0}{2(1-\gamma)}$.

Set
$$z(t,x) := \bar v(t,x) - R(t,x).$$
Then
$$-\frac{\partial z}{\partial t} - \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 z] - \beta_\gamma^* Dz = -\tfrac12(D\bar v)^*\lambda N_\gamma^{-1}\lambda^* D\bar v + U_\gamma + \frac{\partial R}{\partial t} + \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 R] + \beta^* DR$$
$$= -\tfrac12 D(\bar v + R)^*\lambda N_\gamma^{-1}\lambda^* D(\bar v - R) - \tfrac12(DR)^*\lambda N_\gamma^{-1}\lambda^* DR + U_\gamma + \frac{\partial R}{\partial t} + \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 R]+ \beta^* DR$$
$$\ge -\tfrac12 D(\bar v + R)^*\lambda N_\gamma^{-1}\lambda^* Dz - \frac{c_2}{2(1-\gamma)}(DR)^* I_n DR + c_\gamma|x|^2 - c'_\gamma + \tfrac12 x^*\dot P(t)x + \dot q(t) + \frac{c_1}{2}\operatorname{tr}[P(t)] - \frac{c}{2}\beta_\gamma^*\beta_\gamma - \frac{1}{2c}(DR)^* DR.$$
Therefore,
$$-\frac{\partial z}{\partial t} - \tfrac12\operatorname{tr}[\lambda\lambda^* D^2 z] - \beta^* Dz + \tfrac12 D(\bar v + R)^*\lambda N_\gamma^{-1}\lambda^* Dz$$
$$\ge \tfrac12 x^*\Big(\dot P(t) - \Big(\frac{c_2}{1-\gamma} + \frac1c\Big)P(t)I_n P(t)\Big)x + \Big(c_\gamma - \frac{cc_\beta}{2}\Big)|x|^2 + \dot q(t) + \frac{c_1}{2}\operatorname{tr}[P(t)] - \frac{cc_\beta}{2} - c'_\gamma \ge \frac12\Big(c_\gamma - \frac{cc_\beta}{2}\Big)|x|^2 \ge 0.$$
Thus we see that $z(t,x)$ is superharmonic in $[0,T)\times\mathbb R^n$, and $z(T,x) = 0$. Therefore we have $z(t,x) = \bar v(t,x) - R(t,x) \ge 0$, that is,
$$\bar v(t,x) \ge R(t,x) = \tfrac12 x^* P(t)x + q(t).$$
Since $P(t)$ is positive definite,
$$\bar v(t,x) \ge k|x|^2 - k', \qquad k = k(T-t),\ k' = k'(T-t) > 0. \qquad \square$$
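Lemma 4.1's quadratic lower bound rests on the terminal-value Riccati equation (4.2) having a positive (definite) solution on $[0, T]$. A scalar prototype with the same structure is $p'(t) = a\,p(t)^2 - b$, $p(T) = 0$, where $a, b > 0$ are illustrative stand-ins for the coefficient combinations in (4.2) (an assumption of this sketch). Integrating backward with a hand-rolled RK4 and checking against the closed form $p(t) = \sqrt{b/a}\,\tanh(\sqrt{ab}\,(T-t))$:

```python
import numpy as np

# Scalar prototype of the terminal-value Riccati equation (4.2):
#   p'(t) = a*p(t)^2 - b,   p(T) = 0,
# whose solution p(t) = sqrt(b/a)*tanh(sqrt(a*b)*(T-t)) is positive
# for t < T, mirroring P(t) being positive definite in the lemma.
a, b, T = 2.0, 0.5, 1.0

def riccati_backward(n_steps=10_000):
    """RK4 integration of p' = a*p^2 - b from t = T backward to t = 0."""
    h = -T / n_steps          # negative step: march from T down to 0
    p = 0.0
    f = lambda p: a * p * p - b
    for _ in range(n_steps):
        k1 = f(p)
        k2 = f(p + 0.5 * h * k1)
        k3 = f(p + 0.5 * h * k2)
        k4 = f(p + h * k3)
        p += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p                  # p(0) > 0

exact = np.sqrt(b / a) * np.tanh(np.sqrt(a * b) * T)
print(riccati_backward(), exact)
```

The positivity of $p(0)$ is the scalar analogue of the positive definiteness of $P(t)$ used to conclude $\bar v(t,x) \ge k|x|^2 - k'$.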

Let us rewrite (2.15) as
$$0 = \frac{\partial\bar v}{\partial t} + \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\bar v] + G^* D\bar v - \tfrac12(\lambda^* D\bar v - \Sigma^*\hat\alpha)^* N_\gamma^{-1}(\lambda^* D\bar v - \Sigma^*\hat\alpha) + \tfrac12\hat\alpha^*\Sigma\Sigma^*\hat\alpha, \qquad \bar v(T,x) = -\gamma\log v_0. \tag{4.4}$$
Noting that
$$-\tfrac12(\lambda^* D\bar v - \Sigma^*\hat\alpha)^* N_\gamma^{-1}(\lambda^* D\bar v - \Sigma^*\hat\alpha) = \inf_{z\in\mathbb R^{n+m}}\Big\{\tfrac12 z^* N_\gamma z - z^*\Sigma^*\hat\alpha + (\lambda z)^* D\bar v\Big\}$$
$$= \inf_{z\in\mathbb R^{n+m}}\Big\{\tfrac12\{z + N_\gamma^{-1}(\lambda^* D\bar v - \Sigma^*\hat\alpha)\}^* N_\gamma\{z + N_\gamma^{-1}(\lambda^* D\bar v - \Sigma^*\hat\alpha)\} - \tfrac12(\lambda^* D\bar v - \Sigma^*\hat\alpha)^* N_\gamma^{-1}(\lambda^* D\bar v - \Sigma^*\hat\alpha)\Big\},$$
we can rewrite it again as
$$0 = \frac{\partial\bar v}{\partial t} + \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\bar v] + G^* D\bar v + \inf_{z\in\mathbb R^{n+m}}\{(\lambda z)^* D\bar v + \varphi(x,z)\}, \qquad \bar v(T,x) = -\gamma\log v_0, \tag{4.5}$$
where
$$\varphi(x,z) = \tfrac12 z^* N_\gamma z - z^*\Sigma^*\hat\alpha + \tfrac12\hat\alpha^*\Sigma\Sigma^*\hat\alpha, \qquad N_\gamma = I - \gamma\Sigma^*(\Sigma\Sigma^*)^{-1}\Sigma.$$
This H-J-B equation corresponds to the following stochastic control problem, the value of which is defined as
$$\inf_{Z_\cdot\in\tilde{\mathcal A}(T)} E\Big[\int_0^T \varphi(Y_s, Z_s)\,ds - \gamma\log v_0\Big], \tag{4.6}$$
where $Y_t$ is a controlled process governed by the stochastic differential equation
$$dY_t = \lambda(Y_t)\,dW_t + \{G(Y_t) + \lambda(Y_t)Z_t\}\,dt, \qquad Y_0 = x, \tag{4.7}$$
with control $Z_t \in \tilde{\mathcal A}(T)$. Here, $\tilde{\mathcal A}(T)$ is the set of all $\mathbb R^{n+m}$-valued progressively measurable processes $Z$ such that
$$E\Big[\int_0^T |Z_s|^2\,ds\Big] < \infty.$$

To study this problem, we introduce a value function for $0 \le t \le T$:
$$v_*(t,x) = \inf_{Z_\cdot\in\tilde{\mathcal A}(T-t)} E\Big[\int_0^{T-t}\varphi(Y_s, Z_s)\,ds - \gamma\log v_0\Big].$$
By the verification theorem, the solution $\bar v$ to (4.5) can be identified with the value function $v_*$. Indeed, set
$$\hat z(s,x) = -N_\gamma^{-1}(\lambda^* D\bar v - \Sigma^*\hat\alpha)(s,x),$$
which attains the infimum in (4.5), and consider the stochastic differential equation
$$d\hat Y_t = \lambda(\hat Y_t)\,dW_t + \{G(\hat Y_t) + \lambda(\hat Y_t)\hat z(t, \hat Y_t)\}\,dt, \qquad \hat Y_0 = x. \tag{4.8}$$
From the estimates obtained in Theorem 2.1, we see that (4.8) has a unique solution. It is also seen, by using Itô's formula, that
$$\bar v(0,x) = E\Big[\int_0^T \varphi(\hat Y_s, \hat Z_s)\,ds - \gamma\log v_0\Big]$$
holds, where $\hat Z_s = \hat z(s, \hat Y_s)$. In a similar way, we can see that
$$\bar v(0,x) \le E\Big[\int_0^T \varphi(Y_s, Z_s)\,ds - \gamma\log v_0\Big]$$
for each $Z_\cdot \in \tilde{\mathcal A}(T)$; hence, $\bar v(0,x) = v_*(0,x)$.


Let us consider the following stochastic control problem with the averaging cost criterion:
$$\rho(\gamma) = \inf_{Z_\cdot\in\tilde{\mathcal A}}\limsup_{T\to\infty}\frac1T E\Big[\int_0^T \varphi(Y_s, Z_s)\,ds\Big], \tag{4.9}$$
where $Y_t$ is a controlled process governed by the controlled stochastic differential equation (4.7) with control $Z_t$. The solution $Y_t$ of (4.7) is sometimes written as $Y_t^{(Z)}$ to make clear the dependence on the control $Z_t$, and the set $\tilde{\mathcal A}$ of all admissible controls is defined as follows:
$$\tilde{\mathcal A} = \Big\{Z_\cdot;\ Z_t \text{ is an } \mathbb R^{n+m}\text{-valued progressively measurable process such that } \limsup_{T\to\infty}\frac1T E[|Y_T^{(Z)}|^2] = 0,\ E\Big[\int_0^T |Z_s|^2\,ds\Big] < \infty,\ \forall T\Big\}.$$
Corresponding to this stochastic control problem, the H-J-B equation of ergodic type (3.1) can be written as
$$-\chi(\gamma) = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\bar w] + G^* D\bar w + \inf_{z\in\mathbb R^{n+m}}\{(\lambda z)^* D\bar w + \varphi(x,z)\}. \tag{4.10}$$
We then set
$$\hat z(x) = -N_\gamma^{-1}(\lambda^* D\bar w - \Sigma^*\hat\alpha)(x), \tag{4.11}$$
and consider the stochastic differential equation
$$d\bar Y_t = \lambda(\bar Y_t)\,dW_t + \{G(\bar Y_t) + \lambda(\bar Y_t)\hat z(\bar Y_t)\}\,dt = \lambda(\bar Y_t)\,dW_t + \{\beta_\gamma - \lambda N_\gamma^{-1}\lambda^* D\bar w\}(\bar Y_t)\,dt, \qquad \bar Y_0 = x. \tag{4.12}$$
We shall prove the following proposition.

Proposition 4.1. $-\chi(\gamma) = \rho(\gamma)$ and
$$\rho(\gamma) = \lim_{T\to\infty}\frac1T E\Big[\int_0^T \varphi(\bar Y_s, \bar Z_s)\,ds\Big], \tag{4.13}$$
where $\bar Z_s = \hat z(\bar Y_s)$.
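Proposition 4.1 identifies $-\chi(\gamma)$ with a long-run average of the running cost along the optimally controlled ergodic diffusion. The following sketch illustrates such an ergodic time average on a toy Ornstein–Uhlenbeck process with cost $\varphi(y) = y^2$ (both the diffusion and the cost are assumptions of this sketch, not the paper's (4.12)); the time average converges to $\int \varphi\,dm = 1/2$, the second moment of the $N(0, 1/2)$ invariant law.

```python
import numpy as np

# Ergodic-average sketch (toy): dY = -Y dt + dW via Euler-Maruyama,
# running cost phi(y) = y^2.  The averaged cost (1/T) int_0^T phi(Y_s) ds
# converges to the invariant-measure integral, which equals 1/2 here.
rng = np.random.default_rng(2)
dt, n_steps = 0.01, 500_000          # T = n_steps * dt = 5000
noise = rng.standard_normal(n_steps)
sq = np.sqrt(dt)
y, acc = 0.0, 0.0
for i in range(n_steps):
    acc += y * y * dt                # accumulate int phi(Y_s) ds
    y += -y * dt + sq * noise[i]     # Euler-Maruyama step
avg = acc / (n_steps * dt)
print(avg)   # close to 0.5
```

This is the discrete analogue of (4.13): for large $T$ the normalized cost no longer depends on the starting point, only on the invariant measure of the controlled diffusion.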

For the proof of this proposition, we prepare the following lemma.

Lemma 4.2. Under assumptions (2.10), (2.11), (2.19) and (2.21) the following estimates hold. For each $\gamma_1 < \gamma_0 < 0$ there exist positive constants $\delta > 0$ and $C > 0$, independent of $T$ and of $\gamma$ with $\gamma_1 \le \gamma \le \gamma_0$, such that
$$E\big[e^{\delta\bar w(\bar Y_T)}\big] \le C \tag{4.14}$$
and also
$$E\big[e^{\delta|\bar Y_T|^2}\big] \le C. \tag{4.15}$$

Proof. Let us set

$$\bar L\psi = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\psi] + (G + \lambda\hat z)^* D\psi = \tfrac12\operatorname{tr}[\lambda\lambda^* D^2\psi] + (\beta_\gamma - \lambda N_\gamma^{-1}\lambda^* D\bar w)^* D\psi. \tag{4.16}$$
Then we have
$$-\chi(\gamma) = \bar L\bar w + \varphi(x, \hat z(x)) = \bar L\bar w + \tfrac12(\lambda^* D\bar w - \Sigma^*\hat\alpha)^* N_\gamma^{-1}(\lambda^* D\bar w - \Sigma^*\hat\alpha) + (\lambda^* D\bar w - \Sigma^*\hat\alpha)^* N_\gamma^{-1}\Sigma^*\hat\alpha + \tfrac12\hat\alpha^*\Sigma\Sigma^*\hat\alpha$$
$$= \bar L\bar w + \tfrac12(\lambda^* D\bar w)^* N_\gamma^{-1}\lambda^* D\bar w - \frac{\gamma}{2(1-\gamma)}\hat\alpha^*\Sigma\Sigma^*\hat\alpha.$$

Therefore, by applying Itô's formula, we have
$$e^{\delta\bar w(\bar Y_t)} - e^{\delta\bar w(\bar Y_0)} = \delta\int_0^t\Big\{\bar L\bar w(\bar Y_s) + \frac\delta2(D\bar w)^*\lambda\lambda^* D\bar w\Big\}e^{\delta\bar w}(\bar Y_s)\,ds + \delta\int_0^t e^{\delta\bar w}(D\bar w)^*\lambda(\bar Y_s)\,dW_s$$
$$= \delta\int_0^t\Big\{-\chi - \tfrac12(D\bar w)^*\lambda N_\gamma^{-1}\lambda^* D\bar w + \frac{\gamma}{2(1-\gamma)}\hat\alpha^*\Sigma\Sigma^*\hat\alpha + \frac\delta2(D\bar w)^*\lambda\lambda^* D\bar w\Big\}e^{\delta\bar w}(\bar Y_s)\,ds + \delta\int_0^t e^{\delta\bar w}(D\bar w)^*\lambda(\bar Y_s)\,dW_s.$$

Thus, for $p > 0$,
$$d\big(e^{\delta\bar w(\bar Y_t)}e^{p\delta t}\big) = e^{p\delta t}\,de^{\delta\bar w(\bar Y_t)} + p\delta e^{p\delta t}e^{\delta\bar w(\bar Y_t)}\,dt$$
$$= e^{p\delta t}\delta\Big\{-\chi - \tfrac12(D\bar w)^*\lambda N_\gamma^{-1}\lambda^* D\bar w + \frac{\gamma}{2(1-\gamma)}\hat\alpha^*\Sigma\Sigma^*\hat\alpha + \frac\delta2(D\bar w)^*\lambda\lambda^* D\bar w + p\Big\}e^{\delta\bar w}(\bar Y_t)\,dt + \delta e^{p\delta t}e^{\delta\bar w(\bar Y_t)}(D\bar w)^*\lambda(\bar Y_t)\,dW_t.$$


Taking into account (2.21) and (2.16), for $\delta > 0$ sufficiently small (so that the coefficient of $e^{\delta\bar w(\bar Y_s)}\,ds$ above is dominated by $-k_1|\bar Y_s|^2 + k_2$ for positive constants $k_1, k_2$), we obtain
$$e^{\delta\bar w(\bar Y_t)+p\delta t} \le e^{\delta\bar w(x)} + \delta\int_0^t e^{p\delta s+\delta\bar w(\bar Y_s)}\{-k_1|\bar Y_s|^2 + k_2\}\,ds + \delta\int_0^t e^{p\delta s+\delta\bar w(\bar Y_s)}(D\bar w)^*\lambda(\bar Y_s)\,dW_s.$$
Therefore, taking
$$\tau = \tau_R := \inf\{t;\ |\bar Y_t| \ge R\},$$
and setting
$$k_3 = \sup_{|y| \le \sqrt{k_2/k_1}} \bar w(y),$$
we see that
$$E\big[e^{\delta\bar w(\bar Y_{t\wedge\tau})+p\delta(t\wedge\tau)}\big] \le e^{\delta\bar w(x)} + \delta E\Big[\int_0^{t\wedge\tau} e^{p\delta s+\delta\bar w(\bar Y_s)}\{-k_1|\bar Y_s|^2 + k_2\}\,ds\Big]$$
$$\le e^{\delta\bar w(x)} + \delta k_2 E\Big[\int_0^{t\wedge\tau} e^{p\delta s+\delta\bar w(\bar Y_s)}\,\mathbb 1_{\{|\bar Y_s|^2 \le k_2/k_1\}}\,ds\Big] \le e^{\delta\bar w(x)} + \delta k_2 e^{\delta k_3}E\Big[\int_0^{t\wedge\tau} e^{p\delta s}\,ds\Big] = e^{\delta\bar w(x)} + k_2 e^{\delta k_3}E\Big[\frac1p\big(e^{p\delta(t\wedge\tau)} - 1\big)\Big].$$
By letting $R$ tend to $\infty$, we have
$$E\big[e^{\delta\bar w(\bar Y_t)+p\delta t}\big] \le e^{\delta\bar w(x)} + \frac1p k_2 e^{\delta k_3}\big(e^{p\delta t} - 1\big).$$
Hence,
$$E\big[e^{\delta\bar w(\bar Y_t)}\big] \le e^{-p\delta t+\delta\bar w(x)} + \frac1p k_2 e^{\delta k_3}\big(1 - e^{-p\delta t}\big) \le e^{\delta\bar w(x)} + \frac1p k_2 e^{\delta k_3}.$$
Finally, we see that (4.15) follows from (4.14) because of (3.12). □

Proof of Proposition 4.1.
From (4.10) it follows that

¯ + G∗ D w ¯ + (λz)∗ D w ¯ + ϕ(x, z) −χ(γ) ≤ 12 tr[λλ D w]





28

H. NAGAI

for each z ∈ Rn+m . Therefore, for each control Zt we have Z T Z T ∗ ¯ s ) ds {G(Ys ) + λ(Ys )Zs }∗ D w(Y (D w(Y ¯ s )) λ(Ys ) dWs + w(Y ¯ t ) − w(x) ¯ = 0

0

+ ≥ Thus,

Z

1 2 T

0

Z

T

tr[λλ∗ D 2 w](Y ¯ s ) ds

0 ∗

(D w(Y ¯ s )) λ(Ys ) dWs − χT −

w(x) ¯ − χT ≤ E

Z

T 0

Z

T

ϕ(Ys , Zs ) ds. 0

 ϕ(Ys , Zs ) ds + w(Y ¯ T) ,

from which we obtain 1 −χ ≤ lim sup E T →∞ T

Z

1 = lim sup E T →∞ T

Z

T

 ϕ(Ys , Zs ) ds + w(Y ¯ T)

0 T

ϕ(Ys , Zs ) ds 0



2 ≤ c|y|2 + c′ . Namely, we have −χ(γ) ≤ ρ(γ). On the other hand, since |w(y)| ¯ by taking Zt = Z¯t , we have Z T Z T ϕ(Y¯s , Z¯s ) ds, (D w( ¯ Y¯s ))∗ λ(Y¯s ) dWs − χT − w( ¯ Y¯t ) − w(x) ¯ = 0

0

and thus

w(x) ¯ − χT = E

Z

T 0

 ¯ ¯ ¯ ϕ(Ys , Zs ) ds + w( ¯ YT ) .

Lemma 4.2 implies 1 −χ = lim sup E T →∞ T

Z

0

T

 ¯ ¯ ϕ(Ys , Zs ) ds ≥ ρ,

and we see that −χ(γ) = ρ(γ).  Let us define (4.17)

1 χ(γ) ¯ = lim sup inf E T Z∈A˜ T →∞

Z

T



ϕ(Ys , Zs ) ds = lim sup 0

Then we can see that χ ¯ ≤ ρ(γ) = −χ(γ).

T →∞

1 v¯(0, x; T ). T

DOWNSIDE RISK MINIMIZATION

Proposition 4.2.

29

Assume (2.10), (2.11), (2.19) and (2.21). Then χ(γ) ¯ = ρ(γ) = −χ(γ).

The proof of Theorem 2.2 follows directly from this proposition since χ(γ) ¯ = −χ(γ) ˆ because of Proposition 2.1. For the proof of the present proposition, we prepare some lemmas. (T ) For each T > 0, we take the controlled process Yˆt = Yˆt defined by (4.8) (T ) and control zˆ(t, Yˆt ). Taking a sequence {Tn } such that  Z Tn 1 1 (T ) (T ) n n )) dt , , zˆ(t, Yˆt ϕ(Yˆt v¯(0, x; Tn ) = lim E χ ¯ = lim Tn →∞ Tn Tn →∞ Tn 0 we have the following lemma. Lemma 4.3. we have

Under the assumptions of Proposition 4.2, for each t > 0

(4.18)

lim inf Tn →∞

1 (Tn ) 2 | ] = 0. E[|YˆTn −t Tn

Proof. Set ˆ := 1 tr[λλ∗ D 2 ψ] + (G + λˆ Lψ z (t, x))∗ Dψ. 2 Then v¯(T − t, YˆT −t ; T ) − v¯(0, x; T )  Z T −t Z T −t  ∂¯ v ˆ ˆ (D¯ v )(s, Yˆs ) dWs + L(s, Ys ) ds + = ∂t 0 0 Z T −t Z T −t (D¯ v (s, Yˆs ))∗ λ(Yˆs ) dWs . ϕ(Yˆs , zˆ(s, Yˆs )) ds + =− 0

0

Therefore, v¯(0, x; Tn ) = E

Z

Tn −t

0

 (Tn ) (Tn ) (Tn ) ˆ ˆ ˆ ϕ(Ys , zˆ(s, Ys )) ds + v¯(Tn − t, YTn −t ; Tn ) .

Since 1 lim sup E Tn →∞ Tn

Z

0

Tn −t

 (Tn ) (Tn ) ˆ ˆ ϕ(Ys , zˆ(s, Ys ) ds ≥ χ, ¯

we have (4.19)

lim inf Tn →∞

1 (Tn ) ; Tn )] = 0. E[¯ v (Tn − t, YˆTn −t Tn

30

H. NAGAI

Noting that v¯(Tn − t, x; Tn ) ≥ k|x|2 − k′ , k = k(t) and k′ = k′ (t), because of Lemma 4.1, we obtain 0 ≤ lim inf Tn →∞

1 (Tn ) 2 | ] ≤ 0, kE[|YˆTn −t Tn

and our lemma has been proved.  Lemma 4.4. Under the assumptions of Proposition 4.2, there exists a subsequence {Tn′ } ⊂ {Tn } such that 1 (T ′ ) E[|YˆT ′ n |2 ] = 0. ′ n Tn →∞ Tn lim ′

Proof. (T )

(T )

|YˆT |2 − |YˆT −t |2 = 2

Z

T

T −t

Z

+2 +

Z

(Yˆs(T ) )∗ λ(Yˆs(T ) ) dWs T

T −t T

T −t

Yˆs(T ) {G(Yˆ (T ) ) + λ(Yˆs(T ) )ˆ z (s, Yˆs(T ) )} ds

tr[λλ(Yˆs(T ) )] ds.

Therefore, (T ) (T ) E[|YˆT |2 ] = E[|YˆT −t |2 ] + 2E

Z

T

T −t

z (s, Yˆs(T ) )} ds Yˆs(T ) {G(Yˆ (T ) ) + λ(Yˆs(T ) )ˆ +

Z

T T −t

 (T ) ˆ tr[λλ(Ys )] ds .

By using the gradient estimates in Theorem 2.1 and (2.19) we obtain y ∗ G(y) + y ∗ λ(y)ˆ z (s, y) ≤ c(|y|2 + 1) for some positive constant c and (T ) E[|YˆT |2 ] ≤

(T ) E[|YˆT −t |2 ] + cE (T )

Z

≤ E[|YˆT −t |2 ] + c′ E

T T −t T

Z

|Yˆs(T ) |2 ds

T −t



ϕ(Yˆs(T ) , zˆ(s, Yˆs(T ) )) ds



since (2.21) is assumed and 1 1 ϕ(x, zˆ(t, x)) = zˆ(t, x)∗ Nγ zˆ(t, x) − zˆ(t, x)∗ Σ∗ α ˆ+ α ˆ ΣΣ∗ α ˆ 2 2 1 v − Σ∗ α ˆ )∗ Nγ (λ∗ D¯ v − Σ∗ α ˆ) = (λ∗ D¯ 2

31

DOWNSIDE RISK MINIMIZATION

1 ˆ ΣΣ∗ α ˆ + (λ∗ D¯ v − Σ∗ α ˆ )∗ Nγ−1 Σ∗ α ˆ+ α 2 γ 1 v )∗ λNγ−1 λ∗ D¯ v− α ˆ ΣΣ∗ α ˆ. = (D¯ 2 2(1 − γ)

By Itˆo’s formula,

v¯(T, YˆT ; T ) − v¯(T − t, YˆT −t ; T ) Z Z T ˆ ˆ ϕ(Ys , zˆ(s, Ys )) ds + =− T −t

T

(D¯ v (s, Yˆs ))∗ λ(Yˆs ) dWs ,

T −t

and we have E

Z

T T −t

 ϕ(Yˆs , zˆ(s, Yˆs )) ds = E[¯ v (T − t, YˆT −t ; T )].

Take a subsequence {Tn′ } ⊂ {Tn } such that

1 E[¯ v (Tn′ − t, YˆTn′ −t ; Tn′ )] = 0 Tn →∞ Tn′ lim ′

and 1 (Tn′ ) 2 E[|YˆT ′ −t | ] = 0. ′ n Tn →∞ Tn lim ′

Then we have 1 1 (T ′ ) (Tn′ ) 2 | ] E[|YˆT ′ −t 0 ≤ lim sup ′ E[|YˆT ′ n |2 ] ≤ lim ′ ′ n n Tn →∞ Tn Tn′ →∞ Tn  Z T ′ n 1 ′ (Tn′ ) (Tn′ ) ˆ ˆ + c lim ϕ(Ys , zˆ(s, Ys )) ds = 0, E Tn′ →∞ Tn′ Tn′ −t and thus the lemma has been proved.  Proof of Proposition 4.2.

For each ε there exists Tε such that

(T )

w(x) ¯ < εTε , E[|YˆTε ε |2 ] < εTε ,  Z Tε 1 (Tε ) ˆ (Tε ) χ < ε. ˆ ¯ − ϕ( Y , Z ) ds E s s Tε 0

Set

Zs(ε) (ε)

and consider Yt (ε)

dYt

=



Zˆs(Tε ) , 0,

s < Tε , Tε ≤ s,

defined by (ε)

(ε)

(ε)

(ε)

= λ(Yt ) dWt + {G(Yt ) + λ(Yt )Zt } dt,

(ε)

Y0

= x.

32

H. NAGAI

Then, for t ≥ Tε , (ε) Yt

(ε) = YTε

+

t



and

λ(Ys(ε) ) dWs

+

Z

t Tε

G(Ys(ε) ) ds

(ε)

(ε)

(ε)

(ε)

(ε)

Z

d|Yt |2 = 2(Yt )∗ λ(Yt ) dWt + 2(Yt )∗ G(Yt ) +

Z

t



(ε)

G(Yt ) dt

(ε) + tr[λλ∗ (Yt )] dt.

Therefore, (ε)

ept |Yt |2 (ε)

= epTε |YTε |2 + 2 +

Z

t



pTε

≤e

t

Z



eps (Ys(ε) )∗ λ(Ys(ε) ) dWs

eps {2(Ys(ε) )∗ G(Ys(ε) ) + p|Ys(ε) |2 + tr[λλ∗ (Ys(ε) )]} ds

(ε) |YTε |2

+2

t

Z

ps

e



(Ys(ε) )∗ λ(Ys(ε) ) dWs

+

Z

t



eps {−k1 |Ys(ε) |2 + k2 } ds

for some positive constants k1 , k2 > 0. By using stopping time arguments, for t ≥ Tε , we have  Z t (ε) 2 ps pTε pt 2 e k2 ds E[e |Yt | ] ≤ E[e |YTε | ] + E Tε

(ε)

= E[epTε |Yt |2 ] +

k2 pt (e − epTε ). p

Thus, we see that (ε)

(ε)

E[|Yt |2 ] ≤ E[|YTε |2 ] +

k2 , p

(ε) from which we obtain lim supt→∞ 1t E[|Yt |2 ] = 0. Hence, Z (ε) ∈ A˜∞ . Now, by applying Itˆo’s formula, we have Z Tε (ε) (D w(Y ¯ s(ε) ))∗ λ(Ys(ε) ) dWs ¯ = w(Y ¯ Tε ) − w(x) 0

+



Z



0

1 + 2 Z Tε 0

Z

{G(Ys(ε) ) + λ(Ys(ε) )Zs(ε) }∗ D w(Y ¯ s(ε) ) ds Tε

0

tr[λλ∗ D 2 w](Y ¯ s(ε) ) ds

(D w(Y ¯ s(ε) ))∗ λ(Ys(ε) ) dWs − χTε −

Z

Tε 0

ϕ(Ys(ε) , Zs(ε) ) ds.

33

DOWNSIDE RISK MINIMIZATION

Therefore, w(x) ¯ − χTε ≤ E

Z

0





(ε) ϕ(Ys(ε) , Zs(ε) ) ds + w(Y ¯ Tε )

,

from which we have 1 −χ ≤ E Tε

Z

Tε 0

≤ε+χ ¯ + cε

ϕ(Ys(ε) , Zs(ε) ) ds



+

1 (ε) E[w(Y ¯ Tε )] Tε

for some positive constant c > 0. Therefore, −χ ≤ χ ¯ + cε for any ε, and we have −χ ≤ χ. ¯ This completes the proof of the proposition.  The following is a direct consequence of Proposition 4.1. Corollary 4.1. Under the assumptions of Proposition 4.2, ρ(γ) is a concave function on (−∞, 0), and χ(γ) ˆ is a convex function. Indeed, 1 ϕ = z∗z − 2 is a concave function of tions ρ(γ) is concave. Proposition 4.3. godic.

γ ∗ ∗ 1 z σ (σσ ∗ )−1 σz − z ∗ Σ∗ α ˆ+ α ˆ ΣΣ∗ α ˆ 2 2 γ, and the infimum of a family of concave func-

¯ is erUnder the assumptions of Proposition 3.1, L

Proof. γ 1 ¯w ¯ ∗ λNγ−1 λ∗ D w ¯+ α ˆ ∗ ΣΣ∗ α ˆ − χ → −∞ L ¯ = − (D w) 2 2(1 − γ)

¯w as |x| → ∞, and L ¯ ≤ −c, |x| ≫ 1 and c > 0. Moreover, w(x) ¯ → ∞, |x| → ∞, and the Hasminskii conditions (cf. [17]) hold.  5. Derived Poisson equation. We are going to consider a Poisson equation formally obtained by differentiating H-J-B equation (3.1) of ergodic type with respect to γ. Namely, we consider −θ(γ) =

1 tr[λλ∗ D 2 u] + G∗ Du − (λ∗ D w ¯ − Σ∗ α ˆ )∗ Nγ−1 λ∗ Du 2 1 (λ∗ D w ¯ − Σ∗ α ˆ )∗ Σ∗ (ΣΣ∗ )−1 Σ(λ∗ D w ¯ − Σ∗ α ˆ ). − 2(1 − γ)2

34

H. NAGAI

Since −

1 (λ∗ D w ¯ − Σ∗ α ˆ )∗ Σ∗ (ΣΣ∗ )−1 Σ(λ∗ D w ¯ − Σ∗ α ˆ) 2(1 − γ)2 =−

we can write (5.1)

1 (σλ∗ D w ¯−α ˆ )∗ (σσ ∗ )−1 (σλ∗ D w ¯−α ˆ ), 2(1 − γ)2

¯ − − θ(γ) = Lu

1 (σλ∗ D w ¯−α ˆ )∗ (σσ ∗ )−1 (σλ∗ D w ¯−α ˆ ). 2(1 − γ)2

¯ is ergodic in light of Proposition 4.3, and the pair (u, θ(γ)) of Note that L a function u and a constant θ(γ) is the solution to (5.1). Let us set D = BR0 = {x ∈ Rn ; |x| < R0 },

and let R0 be sufficiently large so that 1 γ K(x; w) ¯ := (D w) ¯ ∗ λNγ−1 λ∗ D w ¯− α ˆ ∗ ΣΣ∗ α ˆ + χ > 0, 2 2(1 − γ) (5.2)

x ∈ Dc ,

for γ ≤ γ0 < 0, which is possible because of assumption (2.21). Therefore, ¯ and w we see that L ¯ satisfy assumption (A.3) in the Appendix and also |f (γ) (x)| < ∞, ¯ x∈D c K(x : w) sup

for 1 (σλ∗ D w ¯−α ˆ )∗ (σσ ∗ )−1 (σλ∗ D w ¯−α ˆ ). 2(1 − γ)2 In the following we always take a solution w ¯ to (3.1) such that w(x) ¯ > 0. Thus, according to Proposition A.4 we can show the existence of the solution (u, θ(γ)) to (5.1). f (γ) = −

Corollary 5.1.

Equation (5.1) has a solution (u, θ(γ)) such that |u| 2,p sup < ∞, u ∈ Wloc , ¯ x∈D c w

and 1 (σλ∗ D w ¯ − α) ˆ ∗ (σσ ∗ )−1 (σλ∗ D w ¯−α ˆ )mγ (y) dy. 2(1 − γ)2 Moreover, this solution u is unique up to additive constants. θ(γ) =

Z

Proof. It can be clearly seen that 1 (σλ∗ D w ¯−α ˆ )∗ (σσ ∗ )−1 (σλ∗ D w ¯−α ˆ ) ∈ FK 2(1 − γ)2 and Proposition A.4 applies. 

35

DOWNSIDE RISK MINIMIZATION

6. Differentiability of H-J-B equation. Under the assumptions of Proposition 4.2, Z 2 eδ|x| mγ (dx) ≤ c,

Lemma 6.1. (6.1)

where c and δ are positive constants independent of γ1 ≤ γ ≤ γ0 < 0. Proof. Inequality (6.1) is a direct consequence of (4.15) in Lemma 4.2 since Y¯t is an ergodic diffusion process with the invariant measure mγ (dx).  In the following, we always work under the assumptions of Theorem 2.2 (Proposition 4.2). Lemma 6.2. Let (w ¯(γ) , χ(γ)) and (w ¯(γ+∆) , χ(γ +∆)) be solutions to (3.1) with γ and γ + ∆, respectively, such that w ¯(γ) (0) = w ¯ (γ+∆) (0) = cw > 0. Then (γ+∆) (γ) 1 w ¯ converges to w ¯ in Hloc strongly and uniformly on each compact set. Proof. We have kw ¯ (γ+∆) kL∞ (B2r ) ≤ 2rk∇w ¯(γ+∆) kL∞ (B2r ) ≤ c1 (γ, r) and |w ¯ (γ+∆) (x) − w ¯ (γ+∆) (y)| ≤ |x − y|k∇w ¯ (γ+∆) kL∞ (B2r ) ≤ c2 (γ, r), x, y ∈ B2r , for each r in light of (3.11) and (3.15), where ci (γ, r) is a positive constant independent of ∆, i = 1, 2. Therefore it follows that {w ¯(γ+∆) }∆ is bounded 1 1 in H (B2r ) and converges to some w ˜ in H (B2r ) weakly for each r and also uniformly on each compact set by taking a subsequence if necessary. Note that χ(γ + ∆) converges to χ(γ) because χ(γ) is convex on (−∞, 0) and thus continuous. Take a function τ ∈ C0∞ (B2r ) such that τ (x) ≡ 1, x ∈ Br , and 0 ≤ τ ≤ 1. With (w ¯(γ+∆) − w)τ ˜ , we test ∗ −χ(γ + ∆) = 21 tr[λλ∗ D 2 w ¯(γ+∆) ] + βγ+∆ Dw ¯ (γ+∆)

−1 λ∗ D w ¯ (γ+∆) + Uγ+∆ ¯(γ+∆) )∗ λNγ+∆ − 12 (D w

and obtain Z −

B2r

χ(γ + ∆)(w ¯(γ+∆) − w)τ ˜ dx

=−

Z

B2r

1 (λλ∗ )ij Di w ¯ (γ+∆) Dj ((w ¯ (γ+∆) − w)τ ˜ ) dx 2

36

H. NAGAI

+

Z

B2r

1 − 2 Z +

Z

∗ β˜γ+∆ Dw ¯ (γ+∆) (w ¯(γ+∆) − w)τ ˜ dx

B2r

B2r

−1 λ∗ D w ¯ (γ+∆) (w ¯(γ+∆) − w)τ ˜ dx (D w ¯ (γ+∆) )∗ λNγ+∆

Uγ+∆ (w ¯ (γ+∆) − w)τ ˜ dx,

P ∗ )ij where β˜γi = βγi − 21 j ∂(λλ . Therefore, ∂xj Z 1 (λλ∗ )ij Di (w ¯(γ+∆) − w)D ˜ j (w ¯ (γ+∆) − w)τ ˜ dx 2 B2r Z 1 (λλ∗ )ij Di wD ˜ j (w ¯(γ+∆) − w)τ ˜ dx =− 2 B2r Z 1 (λλ∗ )ij Di w ¯(γ+∆) Dj τ (w ¯(γ+∆) − w) ˜ dx − B2r 2 Z ∗ β˜γ+∆ Dw ¯(γ+∆) (w ¯(γ+∆) − w)τ ˜ dx + B2r

1 − 2 Z +

Z

B2r

B2r

−1 λ∗ D w ¯(γ+∆) (w ¯(γ+∆) − w)τ ˜ dx (D w ¯(γ+∆) )∗ λNγ+∆

Uγ+∆ (w ¯(γ+∆) − w)τ ˜ dx +

Z

B2r

χ(γ + ∆)(w ¯(γ+∆) − w)τ ˜ dx.

Since all terms of the right-hand side converge to 0, we see that D(w ¯(γ+∆) − w) ˜ 2 (γ+∆) 1 converges strongly to 0 in L (Br ) and w ¯ to w ˜ strongly in H (Br ). Thus, we obtain our present lemma because (w, ¯ χ(γ)) satisfies (3.1), and the solution is unique up to additive constants with respect to w. ¯  Lemma 6.3. (6.2) where

Let (u(γ+∆) , θ(γ + ∆)) be a solution to ¯ + ∆)u(γ+∆) + f (γ+∆) , − θ(γ + ∆) = L(γ

f (γ+∆) = −

1 (λ∗ D w ¯ (γ+∆) − Σ∗ α) ˆ ∗ 2(1 − γ − ∆)2

× Σ∗ (ΣΣ∗ )−1 Σ(λ∗ D w ¯ (γ+∆) − Σ∗ α ˆ ).

Then, as |∆| → 0, θ(γ + ∆) converges to θ(γ) and u(γ+∆) converges to u(γ) 1 strongly and uniformly on each compact set, where (u(γ) , θ(γ)) is in Hloc a solution to (5.1). Proof. Note that |f (γ+∆) (x)| ≤ c(1+|x|2 ), ∃c > 0, and that f (γ+∆) (x) → f (γ) almost everywhere by taking a subsequence, if necessary, since w(γ+∆)

DOWNSIDE RISK MINIMIZATION

37

1 to w (γ) by Lemma 6.2. Moreover, we note that converges strongly in Hloc {mγ+∆ (dx)} = {mγ+∆ (x) dx} is tight because of Lemma 6.1. Therefore, it converges weakly to some probability measure m(dx) ˜ by taking a subsequence if necessary. The limit can be identified with mγ (dx) R = mγ (x) dx, ¯ and mγ (x) dx = 1. and mγ (x) is the only function satisfying (A.24) for L(γ) Thus mγ+∆ (x) dx converges to mγ (x) dx weakly. Therefore, Z θ(γ + ∆) = − f (γ+∆) (x)mγ+∆ (x) dx

converges to θ(γ). On the other hand, since u(γ+∆) is a solution to (6.2) it satisfies |u(γ+∆) | < ∞, ¯(γ+∆) x∈D c w sup

and we have |w ¯(γ+∆) | ≤ c(1 + |x|2 ). Therefore, we see that u(γ+∆) is locally bounded by a constant independent of ∆. Then, by testing (6.2) with u(γ+∆) τ , we can see that ku(γ+∆) kH 1 (Br ) is bounded for each r. Therefore, 1 weakly to by taking a subsequence if necessary, u(γ+∆) converges in Hloc (γ) a function u ˜, which turns out to be the solution u to (5.1). By similar 1 arguments as in the proof of Lemma 6.2, we can see it converges in Hloc strongly. Furthermore, by Theorem 9.11 in [16], we see that ku(γ+∆) kW 2,p (Br ) ≤ C(ku(γ+∆) kLp (B2r ) + kf (γ+∆) + θ(γ + ∆)kLp (B2r ) ) ≤ C(ku(γ+∆) kL∞ (B2r ) + kf (γ+∆) + θ(γ + ∆)kLp (B2r ) ) for each r > 0, where C is a constant depending on c1 , c2 and the L∞ (B2r ) ¯ + ∆). Thus, by the Sobolev imbedding norms of the coefficients of L(γ (γ+∆) theorem, {u } is equicontinuous, and thus u(γ+∆) converges uniformly to u(γ) on each compact set.  Lemma 6.4. Let (w ¯(γ) , χ(γ)) and (w ¯(γ+∆) , χ(γ +∆)) be solutions to (3.1) (γ+∆) −w ¯ (γ) and ζ (∆) = w¯ . with γ and γ +∆, respectively. Set χ(∆) = χ(γ+∆)−χ(γ) ∆ ∆ Then lim χ(∆) = θ(γ)

|∆|→0

and lim ζ (∆) (x) = u(γ) (x),

|∆|→0

Here, (u(γ) , θ(γ)) is the solution to (5.1).

x ∈ Rn .


H. NAGAI

Proof. Here we abbreviate $u^{(\gamma)}$ to $u$. From (3.1) it follows that
$$\begin{aligned}
-\chi(\gamma+\Delta)+\chi(\gamma)
&=\tfrac12\operatorname{tr}\big[\lambda\lambda^*D^2(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})\big]+\beta_{\gamma+\Delta}^*D\bar w^{(\gamma+\Delta)}-\beta_\gamma^*D\bar w^{(\gamma)}\\
&\quad-\tfrac12(D\bar w^{(\gamma+\Delta)})^*\lambda N_{\gamma+\Delta}^{-1}\lambda^*D\bar w^{(\gamma+\Delta)}+\tfrac12(D\bar w^{(\gamma)})^*\lambda N_\gamma^{-1}\lambda^*D\bar w^{(\gamma)}+U_{\gamma+\Delta}-U_\gamma\\
&=\tfrac12\operatorname{tr}\big[\lambda\lambda^*D^2(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})\big]+\beta_\gamma^*D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})\\
&\quad-\tfrac12(D\bar w^{(\gamma)})^*\lambda N_\gamma^{-1}\lambda^*D\bar w^{(\gamma)}-(D\bar w^{(\gamma)})^*\lambda N_\gamma^{-1}\lambda^*D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})\\
&\quad+(D\bar w^{(\gamma)})^*\lambda N_\gamma^{-1}\lambda^*D\bar w^{(\gamma+\Delta)}-\tfrac12(D\bar w^{(\gamma+\Delta)})^*\lambda N_{\gamma+\Delta}^{-1}\lambda^*D\bar w^{(\gamma+\Delta)}\\
&\quad+(\beta_{\gamma+\Delta}-\beta_\gamma)^*D\bar w^{(\gamma+\Delta)}+U_{\gamma+\Delta}-U_\gamma\\
&=\bar L(\gamma)(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})-\tfrac12 D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})^*\lambda N_{\gamma+\Delta}^{-1}\lambda^*D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})\\
&\quad+(\beta_{\gamma+\Delta}-\beta_\gamma)^*D\bar w^{(\gamma+\Delta)}+\tfrac12(D\bar w^{(\gamma)})^*\lambda(N_{\gamma+\Delta}^{-1}-N_\gamma^{-1})\lambda^*D\bar w^{(\gamma)}\\
&\quad-(D\bar w^{(\gamma)})^*\lambda(N_{\gamma+\Delta}^{-1}-N_\gamma^{-1})\lambda^*D\bar w^{(\gamma+\Delta)}+U_{\gamma+\Delta}-U_\gamma.
\end{aligned}$$
Therefore we have
$$-\chi^{(\Delta)}=\bar L(\gamma)\zeta^{(\Delta)}+f_1^{(\Delta)}(x)-g^{(\Delta)}(x), \tag{6.3}$$
where
$$f_1^{(\Delta)}(x)=\frac{(\beta_{\gamma+\Delta}-\beta_\gamma)^*}{\Delta}D\bar w^{(\gamma+\Delta)}+\frac12(D\bar w^{(\gamma)})^*\lambda\frac{N_{\gamma+\Delta}^{-1}-N_\gamma^{-1}}{\Delta}\lambda^*D\bar w^{(\gamma)}-(D\bar w^{(\gamma)})^*\lambda\frac{N_{\gamma+\Delta}^{-1}-N_\gamma^{-1}}{\Delta}\lambda^*D\bar w^{(\gamma+\Delta)}+\frac{U_{\gamma+\Delta}-U_\gamma}{\Delta}$$
and
$$g^{(\Delta)}(x)=\frac{1}{2\Delta}D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})^*\lambda N_{\gamma+\Delta}^{-1}\lambda^*D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)}).$$
Note that $f_1^{(\Delta)}$ is dominated by $c(1+|x|^2)$ with a certain positive constant $c$ and that it converges almost everywhere to $f^{(\gamma)}$ by taking a subsequence, if necessary, because
$$\frac{\partial\beta_\gamma}{\partial\gamma}=\frac{1}{(1-\gamma)^2}\lambda\Sigma^*\hat\alpha,\qquad
\frac{\partial N_\gamma^{-1}}{\partial\gamma}=\frac{1}{(1-\gamma)^2}\Sigma^*(\Sigma\Sigma^*)^{-1}\Sigma,\qquad
\frac{\partial U_\gamma}{\partial\gamma}=-\frac{1}{2(1-\gamma)^2}\Sigma^*(\Sigma\Sigma^*)^{-1}\Sigma.$$

Therefore,
$$-\chi_1^{(\Delta)}:=\int f_1^{(\Delta)}(x)\,m_\gamma(dx)\ \to\ \int f(x)\,m_\gamma(dx)=-\theta(\gamma),\qquad |\Delta|\to0. \tag{6.4}$$
Moreover, for $\Delta>0$,
$$-\chi_1^{(\Delta)}\ge-\chi^{(\Delta)}. \tag{6.5}$$

Let us consider the equation
$$-\chi_1^{(\Delta)}=\bar L(\gamma)u_1^{(\Delta)}+f_1^{(\Delta)}. \tag{6.6}$$
We shall see that $u_1^{(\Delta)}-u\to0$ as $\Delta\to0$ by specifying suitable ambiguity constants. For that, set
$$z^{(\Delta)}:=u_1^{(\Delta)}-u,\qquad F^{(\Delta)}:=f_1^{(\Delta)}-f+\chi_1^{(\Delta)}-\theta(\gamma).$$
Then we have
$$\bar L(\gamma)z^{(\Delta)}+F^{(\Delta)}=0,\qquad z^{(\Delta)}\in W^{2,p}_{\mathrm{loc}},\qquad \sup_{D^c}\frac{|z^{(\Delta)}|}{\bar w^{(\gamma)}}<\infty.$$
By constructing the solution to this equation according to the proof of Proposition A.4 in the Appendix, we see that $z^{(\Delta)}\to0$ as $\Delta\to0$. To begin, let $\Psi^{(\Delta)}$ be the solution to (A.19) for $L_0=\bar L(\gamma)$, $f=F^{(\Delta)}$, and $\xi^{(\Delta)}$ the solution to (A.20) for $L_0=\bar L(\gamma)$, $f=F^{(\Delta)}$ and $\Psi=\Psi^{(\Delta)}$. The operator $T$ is defined as
$$TF^{(\Delta)}(x)=\xi^{(\Delta)}(x),\qquad x\in\Gamma_1,$$
and the operator $P$ is defined in the same way as in (A.10) by replacing $L_0$ with $\bar L(\gamma)$ in (A.4) and (A.9). Then, starting with $\zeta_0^{(\Delta)}=\Psi^{(\Delta)}$, $\eta_0^{(\Delta)}=\xi^{(\Delta)}$, we define the sequence $\zeta_k^{(\Delta)},\eta_k^{(\Delta)}$, $k=1,2,\ldots,$ successively as the solution to (A.9) with $\phi=\eta_{k-1}^{(\Delta)}$ and $L_0=\bar L(\gamma)$, and as the solution to (A.4) with $h=\zeta_k^{(\Delta)}$ and $L_0=\bar L(\gamma)$, respectively. Then we obtain
$$\bar\eta^{(\Delta)}(x)=\sum_{k=0}^{\infty}\eta_k^{(\Delta)}\big|_{\Gamma_1}=\sum_{k=0}^{\infty}P^k(TF^{(\Delta)})(x)$$
and the estimate for $\bar\eta^{(\Delta)}$,
$$\|\bar\eta^{(\Delta)}\|_{L^\infty(\Gamma_1)}\le K\|TF^{(\Delta)}\|_{L^\infty(\Gamma_1)}\,\frac{1}{1-e^{-\rho}}.$$
To estimate $\|TF^{(\Delta)}\|_{L^\infty(\Gamma_1)}$, we set $\xi_1^{(\Delta)}$ to be the solution to
$$\begin{cases}\bar L(\gamma)\xi_1^{(\Delta)}+F^{(\Delta)}=0,& D^c,\\ \xi_1^{(\Delta)}|_\Gamma=0,&\end{cases}$$

and $\xi_2^{(\Delta)}=\xi^{(\Delta)}-\xi_1^{(\Delta)}$. Then $\xi_2^{(\Delta)}$ satisfies
$$\begin{cases}\bar L(\gamma)\xi_2^{(\Delta)}=0,& D^c,\\ \xi_2^{(\Delta)}|_\Gamma=\Psi^{(\Delta)}|_\Gamma,&\end{cases}$$
and we have
$$\|\xi_2^{(\Delta)}\|_{L^\infty(\Gamma_1)}\le\|\xi_2^{(\Delta)}\|_{L^\infty(D^c)}\le\|\Psi^{(\Delta)}\|_{L^\infty(\Gamma)}\le K_1\|F^{(\Delta)}\|_{L^\infty(D_1)}$$
for some constant $K_1>0$. On the other hand, to estimate $\|\xi_1^{(\Delta)}\|_{L^\infty(\Gamma_1)}$, we set
$$\xi_1^{(\Delta)}:=v^{(\Delta)}(\bar w^{(\gamma)})^\alpha,\qquad \alpha>1.$$
We can assume that $D$ is sufficiently large so that
$$-U_\gamma-\tfrac12(D\bar w^{(\gamma)})^*\lambda N_\gamma^{-1}\lambda^*D\bar w^{(\gamma)}-\chi(\gamma)+\frac{\alpha-1}{\bar w^{(\gamma)}}(D\bar w^{(\gamma)})^*\lambda\lambda^*D\bar w^{(\gamma)}<-M,\qquad x\in D^c,$$
for some $M>0$. Since
$$\bar L(\gamma)\xi_1^{(\Delta)}=(\bar w^{(\gamma)})^\alpha\bar L(\gamma)v^{(\Delta)}+\alpha v^{(\Delta)}(\bar w^{(\gamma)})^{\alpha-1}\bar L(\gamma)\bar w^{(\gamma)}+\alpha(Dv^{(\Delta)})^*\lambda\lambda^*\frac{D\bar w^{(\gamma)}}{\bar w^{(\gamma)}}(\bar w^{(\gamma)})^\alpha+\alpha(\alpha-1)v^{(\Delta)}\Big(\frac{D\bar w^{(\gamma)}}{\bar w^{(\gamma)}}\Big)^{\!*}\lambda\lambda^*\frac{D\bar w^{(\gamma)}}{\bar w^{(\gamma)}}(\bar w^{(\gamma)})^\alpha,$$
$v^{(\Delta)}$ satisfies
$$\begin{cases}\displaystyle \bar L(\gamma)v^{(\Delta)}+\alpha\Big(\frac{D\bar w^{(\gamma)}}{\bar w^{(\gamma)}}\Big)^{\!*}\lambda\lambda^*Dv^{(\Delta)}+\frac{\alpha}{\bar w^{(\gamma)}}\Big[\bar L(\gamma)\bar w^{(\gamma)}+\frac{\alpha-1}{\bar w^{(\gamma)}}(D\bar w^{(\gamma)})^*\lambda\lambda^*D\bar w^{(\gamma)}\Big]v^{(\Delta)}=-\frac{F^{(\Delta)}}{(\bar w^{(\gamma)})^\alpha},\\[6pt] v^{(\Delta)}|_{\partial D}=0.\end{cases}$$
Noting that
$$\bar L(\gamma)\bar w^{(\gamma)}+\frac{\alpha-1}{\bar w^{(\gamma)}}(D\bar w^{(\gamma)})^*\lambda\lambda^*D\bar w^{(\gamma)}=-U_\gamma-\tfrac12(D\bar w^{(\gamma)})^*\lambda N_\gamma^{-1}\lambda^*D\bar w^{(\gamma)}-\chi(\gamma)+\frac{\alpha-1}{\bar w^{(\gamma)}}(D\bar w^{(\gamma)})^*\lambda\lambda^*D\bar w^{(\gamma)}<-M,\qquad x\in D^c,$$
we have
$$|v^{(\Delta)}(x)|\le K_2\sup_{D^c}\frac{|F^{(\Delta)}|}{(\bar w^{(\gamma)})^\alpha},$$

and thus
$$\|\xi_1^{(\Delta)}\|_{L^\infty(\Gamma_1)}\le K_2\sup_{D^c}\frac{|F^{(\Delta)}|}{(\bar w^{(\gamma)})^\alpha}\,\big\|(\bar w^{(\gamma)})^\alpha\big\|_{L^\infty(\Gamma_1)}.$$
Moreover, $\xi_1^{(\Delta)}=v^{(\Delta)}(\bar w^{(\gamma)})^\alpha\to0$ uniformly on each compact set as $\Delta\to0$. Therefore, $\xi^{(\Delta)}\to0$ uniformly on each compact set, and we also obtain the estimates
$$\|\xi^{(\Delta)}\|_{L^\infty(\Gamma_1)}\le K_1\|F^{(\Delta)}\|_{L^\infty(D_1)}+K_2\sup_{D^c}\frac{|F^{(\Delta)}|}{(\bar w^{(\gamma)})^\alpha}\,\big\|(\bar w^{(\gamma)})^\alpha\big\|_{L^\infty(\Gamma_1)}$$
and
$$\|\bar\eta^{(\Delta)}\|_{L^\infty(\Gamma_1)}\le K_1'\|F^{(\Delta)}\|_{L^\infty(D_1)}+K_2'\sup_{D^c}\frac{|F^{(\Delta)}|}{(\bar w^{(\gamma)})^\alpha}\,\big\|(\bar w^{(\gamma)})^\alpha\big\|_{L^\infty(\Gamma_1)}.$$
Let $\tilde\zeta^{(\Delta)}$ be the solution to (A.26) for $L_0=\bar L(\gamma)$, $f=F^{(\Delta)}$ and $\bar\eta=\bar\eta^{(\Delta)}$. Then $\tilde\zeta^{(\Delta)}\to0$ uniformly as $\Delta\to0$ since $\|\bar\eta^{(\Delta)}\|$ is estimated as shown above. Let $\tilde\eta^{(\Delta)}$ be the solution to (A.20) for $L_0=\bar L(\gamma)$, $f=F^{(\Delta)}$ and $\Psi=\tilde\zeta^{(\Delta)}$. Then, as above, $\tilde\eta^{(\Delta)}\to0$ uniformly on each compact set as $\Delta\to0$. Since $z^{(\Delta)}=\tilde\zeta^{(\Delta)}$ in $D_1$ and $z^{(\Delta)}=\tilde\eta^{(\Delta)}$ in $D^c$, we conclude that $z^{(\Delta)}\to0$ uniformly on each compact set.

In a similar manner, we have
$$\begin{aligned}
-\chi^{(\Delta)}&=\bar L(\gamma+\Delta)\zeta^{(\Delta)}+\frac{(\beta_{\gamma+\Delta}-\beta_\gamma)^*}{\Delta}D\bar w^{(\gamma)}+\frac12(D\bar w^{(\gamma)})^*\lambda\frac{N_\gamma^{-1}-N_{\gamma+\Delta}^{-1}}{\Delta}\lambda^*D\bar w^{(\gamma)}+\frac{U_{\gamma+\Delta}-U_\gamma}{\Delta}\\
&\quad+\frac{1}{2\Delta}D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)})^*\lambda N_{\gamma+\Delta}^{-1}\lambda^*D(\bar w^{(\gamma+\Delta)}-\bar w^{(\gamma)}).
\end{aligned}$$

By setting
$$f_2^{(\Delta)}(x):=\frac{(\beta_{\gamma+\Delta}-\beta_\gamma)^*}{\Delta}D\bar w^{(\gamma)}+\frac12(D\bar w^{(\gamma)})^*\lambda\frac{N_\gamma^{-1}-N_{\gamma+\Delta}^{-1}}{\Delta}\lambda^*D\bar w^{(\gamma)}+\frac{U_{\gamma+\Delta}-U_\gamma}{\Delta},$$
we have
$$-\chi^{(\Delta)}=\bar L(\gamma+\Delta)\zeta^{(\Delta)}+f_2^{(\Delta)}(x)+g^{(\Delta)}(x). \tag{6.7}$$
Since $m_{\gamma+\Delta}(x)\,dx$ converges to $m_\gamma(x)\,dx$ weakly, and $f_2^{(\Delta)}$ converges almost everywhere to $f(x)$ by taking a subsequence if necessary, as above, we have
$$-\chi_2^{(\Delta)}:=\int f_2^{(\Delta)}(x)\,m_{\gamma+\Delta}(x)\,dx\ \to\ \int f(x)\,m_\gamma(x)\,dx=-\theta(\gamma) \tag{6.8}$$
as $|\Delta|\to0$.

Moreover, for $\Delta>0$,
$$-\chi_2^{(\Delta)}\le-\chi^{(\Delta)}. \tag{6.9}$$

We consider
$$-\chi_2^{(\Delta)}=\bar L(\gamma+\Delta)u_2^{(\Delta)}+f_2^{(\Delta)}. \tag{6.10}$$
Then, in the same manner as above, we see that $u_2^{(\Delta)}-u^{(\gamma+\Delta)}\to0$ as $|\Delta|\to0$, by specifying ambiguity constants. Since $u^{(\gamma+\Delta)}$ converges to $u$, so does $u_2^{(\Delta)}$. From (6.4), (6.5), (6.8) and (6.9), it follows that
$$\lim_{\Delta\downarrow0}\big(-\chi^{(\Delta)}\big)=-\theta(\gamma). \tag{6.11}$$
The converse inequalities of (6.5) and (6.9) hold for $\Delta<0$, and we have
$$\lim_{\Delta\uparrow0}\big(-\chi^{(\Delta)}\big)=-\theta(\gamma).$$
Hence, we see that $-\chi^{(\Delta)}\to-\theta(\gamma)$ as $|\Delta|\to0$. From (6.3) and (6.6), we have
$$-\chi_1^{(\Delta)}+\chi^{(\Delta)}=\bar L(\gamma)\big(u_1^{(\Delta)}-\zeta^{(\Delta)}\big)+g^{(\Delta)},$$

and through arguments similar to those above, we see that
$$\liminf_{\Delta\downarrow0}\,\big(u_1^{(\Delta)}(x)-\zeta^{(\Delta)}(x)\big)\ge0, \tag{6.11$'$}$$
since $g^{(\Delta)}(x)\ge0$ for $\Delta>0$ and $\chi^{(\Delta)}-\chi_1^{(\Delta)}\to0$ as $|\Delta|\to0$. Similarly, from (6.7) and (6.10), we have
$$-\chi_2^{(\Delta)}+\chi^{(\Delta)}=\bar L(\gamma+\Delta)\big(u_2^{(\Delta)}-\zeta^{(\Delta)}\big)-g^{(\Delta)}$$
and see that
$$\limsup_{\Delta\downarrow0}\,\big(u_2^{(\Delta)}(x)-\zeta^{(\Delta)}(x)\big)\le0.$$
Therefore,
$$\lim_{\Delta\downarrow0}\zeta^{(\Delta)}(x)=u(x).$$
We likewise have
$$\limsup_{\Delta\uparrow0}\,\big(u_1^{(\Delta)}(x)-\zeta^{(\Delta)}(x)\big)\le0$$
and
$$\liminf_{\Delta\uparrow0}\,\big(u_2^{(\Delta)}(x)-\zeta^{(\Delta)}(x)\big)\ge0,$$
since $g^{(\Delta)}(x)\le0$ for $\Delta<0$. Therefore, we obtain
$$\lim_{\Delta\uparrow0}\zeta^{(\Delta)}(x)=u(x),$$
and Lemma 6.4 follows. $\square$

Remark 6.1. Since $u=u^{(\gamma)}=\frac{\partial\bar w}{\partial\gamma}$ is a solution to (5.1), it has polynomial growth order. More precisely, we have
$$|u(x)|\le C(1+|x|^2)\qquad\text{for some }C>0;$$

cf. Corollary 5.1 and (3.11). Furthermore, we can see that $\frac{\partial u}{\partial x_l}$ also has polynomial growth order for each $l$. Indeed, $u_l:=\frac{\partial u}{\partial x_l}$ satisfies
$$0=\tfrac12 D_i\big(a^{ij}D_ju_l\big)+B^iD_iu_l-D_lf+\tfrac12 D_i\big(a^{ij}_lD_ju\big)+B^i_lD_iu,$$
where
$$a^{ij}(x)=(\lambda\lambda^*)^{ij}(x),\qquad B^j(x)=\beta^j_\gamma(x)-\tfrac12 D_i\big((\lambda\lambda^*)^{ij}\big)-\big(\lambda N_\gamma^{-1}\lambda^*D\bar w\big)^j,$$
$$f=\frac{1}{2(1-\gamma)^2}\big(\lambda^*D\bar w-\Sigma^*\hat\alpha\big)^*\Sigma^*(\Sigma\Sigma^*)^{-1}\Sigma\big(\lambda^*D\bar w-\Sigma^*\hat\alpha\big),\qquad a^{ij}_l=\frac{\partial a^{ij}}{\partial x_l},\qquad B^i_l=\frac{\partial B^i}{\partial x_l}.$$
Therefore, if we set $f_i=-\tfrac12 a^{ij}_lD_ju$, $i\ne l$, and $f_l=-\tfrac12 a^{lj}_lD_ju-f$, then $u_l$ satisfies
$$\int\Big(\tfrac12 a^{ij}D_ju_l-f_i\Big)\xi_{x_i}\,dx+\int\big(B^iD_iu_l+B^i_lD_iu\big)\xi\,dx=0 \tag{6.14}$$
for each $\xi\in W^{1,2}_0(B_\rho(x_0))$, $\rho>0$, $x_0\in\mathbb{R}^n$. We note that
$$\|f_i\|_{L^{p/2}(B_\rho(x_0))},\ \|B^i\|_{L^{p/2}(B_\rho(x_0))},\ \|B^i_lD_iu\|_{L^{p/2}(B_\rho(x_0))}\le\mu(x_0)\le C(1+|x_0|^{m_0}),\qquad \exists C>0,\ m_0>0,$$
which can be seen in light of our assumptions since $u$ is a solution to (5.1) and $\bar w$ is a solution to (3.1). Equation (6.14) corresponds to (13.4) in [25], Chapter 3, Section 13. Therefore, by the same arguments as in that work,
$$\int_{A_{k,\rho}}|\nabla u_l|^2\zeta^2\,dx\le\gamma(x_0)\Big(\int_{A_{k,\rho}}(u_l-k)^2|\nabla\zeta|^2\,dx+(k^2+1)|A_{k,\rho}|^{1-2/p}\Big)$$
is seen to hold, where $\zeta$ is a cut-off function supported by $B_\rho(x_0)$, $A_{k,\rho}=\{x\in B_\rho(x_0);\ u_l(x)>k\}$, and $\gamma(x_0)$ is a constant dominated by $C(1+|x_0|^{m_1})$, $C>0$, $m_1>0$. From this inequality we obtain inequality (5.12) for $u=u_l$ in [25], Chapter 2, Section 5. Hence, similarly to the proof of Lemma 5.4 in [25], Chapter 2, we see that $u_l$ has polynomial growth order.


7. Proof of Theorem 2.4. We first state the following lemma.

Lemma 7.1. Under the assumptions of Theorem 2.3,
$$\chi'(-\infty):=\lim_{\gamma\to-\infty}\chi'(\gamma)=0. \tag{7.1}$$

Proof. Note that
$$0\le\lim_{\gamma\to-\infty}\chi'(\gamma)\le\chi'(\gamma_0)$$
for $\gamma_0<0$ and that $\chi'(\gamma)$ is nondecreasing. Therefore, $\chi'(-\infty)$ exists. Furthermore,
$$\frac{\chi(\gamma)}{\gamma}=-\frac1\gamma\int_{\gamma_0}^{\gamma}\chi'(t)\,dt+\frac{\chi(\gamma_0)}{\gamma},$$
and $\lim_{\gamma\to-\infty}\frac{\chi(\gamma)}{\gamma}=0$ since $-\chi_0\le\chi(\gamma)\le0$; here $\chi_0$ is the constant defined by (3.10). Since $\chi'(t)\ge\chi'(-\infty)\ge0$ on $[\gamma,\gamma_0]$ and $1/\gamma<0$, the identity gives $\chi(\gamma)/\gamma\le(\gamma_0/\gamma-1)\chi'(-\infty)+\chi(\gamma_0)/\gamma\to-\chi'(-\infty)$, while $\chi(\gamma)/\gamma\ge0$; hence $\chi'(-\infty)=0$ and we obtain (7.1). $\square$

We next give the proof of Theorem 2.4. For $\gamma<0$, we have
$$\begin{aligned}
\log P\Big(\frac{\log V_T(h)-\log S_T^0}{T}\le\kappa\Big)
&=\log P\Big(\Big(\frac{V_T(h)}{S_T^0}\Big)^{\!\gamma}\ge e^{\gamma\kappa T}\Big)\\
&\le\log\Big(E\Big[\Big(\frac{V_T(h)}{S_T^0}\Big)^{\!\gamma}\Big]e^{-\gamma\kappa T}\Big)
=\log E\Big[\Big(\frac{V_T(h)}{S_T^0}\Big)^{\!\gamma}\Big]-\gamma\kappa T.
\end{aligned}$$

Therefore,
$$\inf_h\log P\Big(\frac{\log V_T(h)-\log S_T^0}{T}\le\kappa\Big)\le\inf_h\log E\Big[\Big(\frac{V_T(h)}{S_T^0}\Big)^{\!\gamma}\Big]-\gamma\kappa T\le v(0,x;T)-\gamma\kappa T,$$
from which we obtain
$$\liminf_{T\to\infty}\frac1T\inf_h\log P\Big(\frac{\log V_T(h)-\log S_T^0}{T}\le\kappa\Big)\le\chi(\gamma)-\gamma\kappa$$
for all $\gamma<0$. Hence, we have
$$\liminf_{T\to\infty}\frac1T\inf_h\log P\Big(\frac{\log V_T(h)-\log S_T^0}{T}\le\kappa\Big)\le\inf_{\gamma<0}\{\chi(\gamma)-\gamma\kappa\}.$$
Let us take $\varepsilon>0$ such that $\kappa-\varepsilon>0$. Then there exists $\gamma_\varepsilon$ such that
$$\inf_{\gamma<0}\{\chi(\gamma)-\gamma(\kappa-\varepsilon)\}=\chi(\gamma_\varepsilon)-\gamma_\varepsilon\chi'(\gamma_\varepsilon). \tag{7.2}$$
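The infimum in (7.2) is a Legendre-type transform of $\chi$. As a quick numerical sanity check, the sketch below evaluates it for a toy convex $\chi$ (an illustrative assumption, not the HJB value of the paper): $\chi(\gamma)=-\chi_0(1-e^{\gamma})$ is nondecreasing and convex with $\chi'(-\infty)=0$, matching Lemma 7.1, so for $0<\kappa<\chi'(0)=\chi_0$ the infimum is attained at the interior point where $\chi'(\gamma)=\kappa$.

```python
import numpy as np

# Toy chi (assumption for illustration): chi(g) = -chi0*(1 - exp(g)), g < 0.
# chi' is nondecreasing with chi'(-inf) = 0 and chi'(0) = chi0, as in Lemma 7.1.
chi0, kappa = 1.0, 0.3
chi = lambda g: -chi0 * (1.0 - np.exp(g))

g = np.linspace(-20.0, -1e-8, 400001)
rate = (chi(g) - g * kappa).min()       # inf_{g<0} {chi(g) - g*kappa}, cf. (7.2)

# closed form: the stationary point solves chi'(g) = chi0*exp(g) = kappa
g_star = np.log(kappa / chi0)
closed = chi(g_star) - g_star * kappa
assert g_star < 0 and abs(rate - closed) < 1e-6
```

The grid minimum agrees with the stationary-point value, and the resulting rate is negative, consistent with its role as an exponential decay rate of the downside probability.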

Under the measure $\tilde P$ we then have, for $\gamma<0$,
$$\begin{aligned}
\tilde P\Big(\frac1T(\log V_T(h)-\log S_T^0)>\kappa\Big)
&\le\tilde P\Big(\frac1T\log v+\frac1T\int_0^TV_1(X_s)\,ds>\chi'(\gamma)+\frac\varepsilon3\Big)\\
&\quad+\tilde P\Big(\frac1T\Big(M_T^h-\frac12\langle M^h\rangle_T\Big)>\frac\varepsilon3\Big)\\
&\quad+\tilde P\Big(\frac1T\int_0^T\frac{1}{1-\gamma}\{(\sigma\sigma^*)^{-1}(\hat\alpha+\sigma\lambda^*Dw)\}^*\sigma(X_s)\,d\tilde W_s>\frac\varepsilon3\Big).
\end{aligned}$$
Taking Lemma 4.2 into account, we have
$$\tilde P\Big(\frac1T\int_0^T\frac{1}{1-\gamma}\{(\sigma\sigma^*)^{-1}(\hat\alpha+\sigma\lambda^*Dw)\}^*\sigma(X_s)\,d\tilde W_s>\frac\varepsilon3\Big)\le\frac{9}{\varepsilon^2T^2}\,\tilde E\Big[\int_0^TV_1(X_s)\,ds\Big]\le\frac{C}{\varepsilon^2T}$$
for some positive constant $C$, and
$$\tilde P\Big(\frac1T\Big(M_T^h-\frac12\langle M^h\rangle_T\Big)>\frac\varepsilon3\Big)\le e^{-\varepsilon T/3}\,\tilde E\big[e^{M_T^h-(1/2)\langle M^h\rangle_T}\big]\le e^{-\varepsilon T/3}.$$
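Both estimates are standard Chebyshev-type bounds; spelling out the steps (our reconstruction, with $g$ denoting the integrand of the stochastic integral):

```latex
% Chebyshev's inequality combined with the L^2 isometry for the stochastic
% integral I_T = \int_0^T g(X_s)^* d\tilde W_s:
\tilde P\Big(\tfrac1T |I_T| > \tfrac{\varepsilon}{3}\Big)
  \le \frac{9}{\varepsilon^2 T^2}\,\tilde E[I_T^2]
  = \frac{9}{\varepsilon^2 T^2}\,\tilde E\!\int_0^T |g(X_s)|^2\,ds;
% the integrand is dominated by a multiple of V_1 (Lemma 4.2), giving C/(eps^2 T).
% Exponential Chebyshev together with the supermartingale property of the
% stochastic exponential, \tilde E[e^{M^h_T - \frac12\langle M^h\rangle_T}] \le 1:
\tilde P\Big(M^h_T-\tfrac12\langle M^h\rangle_T > \tfrac{\varepsilon T}{3}\Big)
  \le e^{-\varepsilon T/3}\,
      \tilde E\big[e^{M^h_T-\frac12\langle M^h\rangle_T}\big]
  \le e^{-\varepsilon T/3}.
```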

Thus, by using the following lemma, we can see that
$$\tilde P\Big(\frac1T(\log V_T(h)-\log S_T^0)>\kappa\Big)<\varepsilon \tag{7.6}$$
for sufficiently large $T$.

Lemma 7.2. For sufficiently large $T$ we have
$$\tilde P\Big(\frac1T\log v+\frac1T\int_0^TV_1(X_s)\,ds>\chi'(\gamma)+\frac\varepsilon3\Big)\le\frac\varepsilon2.$$

Proof. By Itô's formula,
$$w_\gamma(X_T)-w_\gamma(X_0)=\int_0^T\{\bar L(\gamma)w_\gamma(X_s)\}\,ds+\int_0^T(\nabla w_\gamma(X_s))^*\lambda(X_s)\,d\tilde W_s=-\int_0^TV_1(X_s)\,ds+\chi'(\gamma)T+\int_0^T(\nabla w_\gamma(X_s))^*\lambda(X_s)\,d\tilde W_s.$$
Therefore,
$$\frac1T\int_0^TV_1(X_s)\,ds=\chi'(\gamma)+\frac1T\{w_\gamma(x)-w_\gamma(X_T)\}+\frac1T\int_0^T(\nabla w_\gamma(X_s))^*\lambda(X_s)\,d\tilde W_s.$$
Thus,
$$\begin{aligned}
&\tilde P\Big(\frac1T\log v+\frac1T\int_0^TV_1(X_s)\,ds>\chi'(\gamma)+\frac\varepsilon3\Big)\\
&\qquad=\tilde P\Big(\frac1T\log v+\frac1T\{w_\gamma(x)-w_\gamma(X_T)\}+\frac1T\int_0^T(\nabla w_\gamma(X_s))^*\lambda(X_s)\,d\tilde W_s>\frac\varepsilon3\Big)\\
&\qquad\le\tilde P\Big(\frac1T\log v>\frac\varepsilon9\Big)+\tilde P\Big(\frac1T\{w_\gamma(x)-w_\gamma(X_T)\}>\frac\varepsilon9\Big)+\tilde P\Big(\frac1T\int_0^T(\nabla w_\gamma(X_s))^*\lambda(X_s)\,d\tilde W_s>\frac\varepsilon9\Big)\\
&\qquad\le\frac{81}{\varepsilon^2T^2}E\big[|w_\gamma(x)-w_\gamma(X_T)|^2\big]+\frac{81}{\varepsilon^2T^2}E\Big[\int_0^T(Dw_\gamma)^*\lambda\lambda^*Dw_\gamma(X_s)\,ds\Big].
\end{aligned}$$
Hence, by taking $T$ and $R$ to be sufficiently large, we obtain the present lemma because of Lemma 4.2; cf. Remark 6.1. $\square$

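The proof rests on the ergodic-average principle: the time average of $V_1$ along the diffusion concentrates at its mean, with the boundary and martingale terms from Itô's formula vanishing at rates $1/T$ and $1/\sqrt{T}$. A minimal simulation sketch of this effect, using an Ornstein-Uhlenbeck process $dX=-X\,dt+d\tilde W$ and $V_1(x)=x^2$ as stand-ins (assumptions chosen for the demo, not the paper's factor process; the invariant mean of $X^2$ here is $1/2$):

```python
import numpy as np

# Euler-Maruyama simulation of dX = -X dt + dW (toy stand-in); the ergodic
# theorem gives (1/T) \int_0^T X_s^2 ds -> E_pi[X^2] = 1/2, mirroring the role
# played by chi'(gamma) in the proof of Lemma 7.2.
rng = np.random.default_rng(1)
dt, T = 1e-3, 500.0
n = int(T / dt)
noise = np.sqrt(dt) * rng.standard_normal(n)
x, acc = 0.0, 0.0
for k in range(n):
    acc += x * x * dt          # accumulate \int V1(X_s) ds with V1(x) = x^2
    x += -x * dt + noise[k]
time_avg = acc / T
assert abs(time_avg - 0.5) < 0.15   # fluctuation of the average is O(1/sqrt(T))
```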


Let us complete the proof of Theorem 2.4 for 0 < κ < χ′ (0−). Set  Z t γ γ ∗ ∗ −1 ˜ ˜s Mt = α ˆ Σ + (Dw) λNγ (Xs ) dW 1−γ 0 and ˜ γ ≥ −εT }, A1 = {−M T

A2 = {− 12 hM γ iT ≥ (χ(γ) − γχ′ (γ) − ε)T },   1 0 A3 = (log VT (h) − log ST ) ≤ κ . T

Then 

 1 0 P (log VT (h) − ST ) ≤ κ T   ˜ γ −(1/2)hM γ iT 1 0 −M ˜ T ; (log VT (h) − ST ) ≤ κ =E e T ˜γ

˜ −MT −(1/2)hM ≥ E[e

γi T

(χ(γ)−γχ′ (γ)−2ε)T

; A1 ∩ A2 ∩ A3 ]

P˜ (A1 ∩ A2 ∩ A3 ) ′ ≥ e(χ(γ)−γχ (γ)−2ε)T {1 − P˜ (Ac1 ) − P˜ (Ac2 ) − P˜ (Ac3 )}. ≥e

We have seen that P˜ (Ac3 ) < ε holds for sufficiently large T in (7.6), and it is straightforward to see that likewise P˜ (Ac1 ) < ε for sufficiently large T . Therefore, taking the following lemma into account as well, we have   1 ′ 0 (log VT (h) − log ST ) ≤ κ ≥ e(χ(γ)−γχ (γ)−2ε)T (1 − 3ε) ∀h ∈ H(T ), P T for sufficiently large T , and   log VT (h) − log ST0 1 lim inf log P ≤ κ ≥ χ(γ) − γχ′ (γ) − 2ε T T h∈H(T ) T →∞ = χ(γ) − γ(κ − ε) − 2ε

≥ inf {χ(γ) − γ(κ − ε)} − 2ε γ εT D(w − γwγ )(Xs ) λ(Xs ) dW ∗



   εT εT ≤ P˜ (w − γwγ )(XT ) > + P˜ −(w − γwγ )(x) > 3 3  Z t  ˜ s > εT . + P˜ − D(w − γwγ )(Xs )∗ λ(Xs ) dW 3 0

Hence, we obtain the present lemma in the same way as Lemma 7.2.  For κ < 0, we shall prove Theorem 2.4. By convexity, we have That is,

χ(−1) ≥ χ(γ) + χ′ (γ)(−1 − γ),

γ < −1.

χ(γ) − γκ ≤ χ(−1) + χ′ (γ) + γ(χ′ (γ) − κ).

χ′ (γ) is monotonically nondecreasing, and χ′ (γ) → 0 as γ → −∞. Therefore, we see that Hence,

χ(γ) − γκ → −∞

as γ → −∞.

inf {χ(γ) − γκ} = −∞.

γ 0, x ∈ BR −L0 ψ − (Dψ)∗ aDψ > 0, 0 a (A.3) 0 ψ   c . L0 ψ < −1, x ∈ BR 0 (A.2)

54

H. NAGAI

Set K(x; ψ) = −L0 ψ,

  |u(x)| 2,p 0, there exists Rε such that |g| ≤ ε, BR ε Rε ∨ R0 . Then we see that |g| ≤ ε,

BR ∩ D c ,

since ψ > 0 and K(x; ψ) > 0 in D c . Thus, we see that g = 0 because ε is arbitrary.

DOWNSIDE RISK MINIMIZATION

55

Let us show the existence of the solution to (A.4). We can assume h ≥ 0. Consider the following Dirichlet problem for R > R0 :  c −L0 ξR = 0, BR ∩ D , (A.7) ξR |Γ = h, ξR |∂BR = 0. Then we have (A.8)

kξR kL∞ (BR ∩Dc ) ≤ khkL∞ (Γ) .

It is clear that ξR ≤ ξR′ and R < R′ by the maximum principle. Therefore, there exists ξ ∈ L∞ (Rn ∩ D c ) and ξR → ξ,

kξkL∞ (Dc ) ≤ khkL∞ (Γ) .

When taking ˜ ⊂ BR ∩ D c , D ∗ ⊂⊂ D we see that ′ ′ kξR kW 2,p (D∗ ) ≤ ckξR kLp (D) ˜ ≤ c kξR kL∞ (D) ˜ ≤ c khkL∞ (Γ) . 1,q Thus, ξR converges to ξ weakly in Wloc . Regularity theorems show that 2,q ξ ∈ Wloc . 

Let us take a bounded domain D1 such that D ⊂ D1 and a bounded Borel function φ on Γ1 = ∂D1 . We consider a Dirichlet problem  −L0 ζ = 0, D1 , (A.9) ζ|Γ1 = φ, which admits a solution ζ ∈ W 2,p (D1 ) ∩ L∞ . For this solution, we consider exterior Dirichlet problem (A.4) with h = ζ. Then we introduce an operator P : B(Γ1 ) 7→ B(Γ1 ) defined by (A.10)

P φ(x) = ξ(x),

x ∈ Γ1 ,

where ξ(x) is the solution to (A.4) with h = ζ. In a similar manner to Lemma 5.1 in [1], Chapter II, we have sup

λx,y (B) < 1,

B∈B(Γ1 ),x,y∈Γ1

where λx,y (B) = P χB (x) − P χB (y),

B ∈ B(Γ1 ).

Moreover, we have the following proposition; cf. Theorem 4.1, Chapter II in [1].

56

H. NAGAI

Proposition A.2. erties. (A.11)

Operator P defined above satisfies the following prop-

kP φkL∞ (Γ1 ) ≤ kφkL∞ (Γ1 ) ,

P 1(x) = 1,

and for some δ > 0, (A.12)

P χB (x) − P χB (y) ≤ 1 − δ,

x, y ∈ Γ1 , B ∈ B(Γ1 ).

Furthermore, there exists a probability measure π(dx) on (Γ1 , B(Γ1 )) such that Z n P φ(x) − φ(x)π(dx) ≤ KkφkL∞ e−ρn , (A.13) 2 1 ,K = , ρ = log 1−δ 1−δ and Z Z (A.14) φ(x)π(dx) = P φ(x)π(dx) for any bounded Borel function φ.

Consider an exterior Dirichlet problem for a given function f ∈ FK :  c −L0 u = f, x∈D , (A.15) u|Γ = 0. Then we have the following proposition. Proposition A.3. For a given function f ∈ FK , there exists a unique 2,p solution u ∈ Wloc , 1 < p < ∞, to (A.15) such that |u(x)| < ∞. x∈D c ψ(x) sup

Proof. Assume that f ≥ 0, f ∈ FK . For R > R0 we consider a Dirichlet problem on BR ∩ D c :  c −L0 uR = f, x ∈ BR ∩ D , (A.16) uR |Γ = 0, uR |∂BR = 0. c

There exists a unique solution uR ∈ W02,p (BR ∩ D ). Set cf = ess sup x∈D c

|f (x)| . K(x; ψ)

Then we have (A.17)

0 ≤ uR ≤ cf ψ.

DOWNSIDE RISK MINIMIZATION

57

To see that, set u ˜R := uR − cf ψ. Then

−L0 u ˜R = −L0 uR + cf L0 ψ

Therefore,

c

D ∩ BR .

= f − cf K(x; ψ) ≤ 0, c

u ˜R ≤ 0,

D ∩ BR ,

c

˜R ≤ 0 on Γ ∪ ∂BR . Hence, we have since u ˜R is subharmonic in D ∩ BR and u uR ≤ cf ψ. On the other hand, c

−L0 uR = f ≥ 0,

BR ∩ D .

Hence, uR is superharmonic and uR = 0 on Γ ∩ ∂BR . Thus, c

uR ≥ 0,

BR ∩ D ,

and (A.17) holds. If f ≤ 0, f ∈ FK and cf = ess supx∈Dc guments for −f , we obtain

|f (x)| K(x;ψ) ,

then, through the same ar-

−cf ψ ≤ u− R ≤ 0,

where −u− R is the corresponding solution to (A.16). Therefore, for general f = f + − f − , we have −cf ψ ≤ uR ≤ cf ψ.

+ + Let u+ R be a solution to (A.16) for f . Then, uR is nondecreasing with respect to R because of the maximum principle. Indeed, for R < R′ , we have + −L0 (u+ R′ − uR ) = 0, + u+ R′ − uR ≥ 0,

c

BR ∩ D , Γ ∪ ∂BR .

+ Since u+ R is dominated by cf ψ, there exists u such that

u+ (x) = lim u+ R (x), R→∞

u+ (x)|Γ = 0.

Let us show that u+ (x) satisfies Set

−L0 u+ = f + , c

Then we have

D ∗ := BR ∩ D ,

c

D .

D ′ ⊂⊂ D ∗ ∪ ∂D ∗ .

ku+ k2,p;D′ ≤ c(ku+ kp;D∗ + kf + kp;D∗ )

≤ c′ (ku+ k∞;D∗ + kf + kp;D∗ .

58

H. NAGAI

np , is compact. ThereFor D ′′ ⊂ D ′ injection W 2,p (D ′ ) ֒→ W 1,q (D ′′ ), 1 ≤ q ≤ n−p 1,q + + + fore, uR → u weakly in Wloc for each 1 ≤ q < ∞ and u is a weak solution to  c −L0 u+ = f + , Rn ∩ D , u+ |Γ = 0. 2,p By the regularity theorem u+ ∈ Wloc , ∀p > 1. 2,p − Similarly, we have u ∈ Wloc , which is a solution to  c −L0 u− = f − , Rn ∩ D , − u |Γ = 0.

Now let us prove uniqueness. For i = 1, 2, we assume that ui is a solution to (A.15) such that −cf ψ ≤ ui ≤ cf ψ,

Then u = u1 − u2 satisfies  −L0 u = 0, (A.18) u|Γ = 0,

2,p ui ∈ Wloc .

c

D , 2,p −2cf ψ ≤ u ≤ 2cf ψ, u ∈ Wloc .

We shall prove that u satisfying (A.18) is trivial, u ≡ 0. For this purpose, we set u = vψ α ,

α = 1 + ca > 1,

where ca > 0 is the constant that appears in (A.3). Since −L0 u = 0 we have     Dψ ∗ αv α−1 ∗ L0 v + 2a aDv + (Dψ) aDψ = 0. L0 ψ + ψ ψ ψ Note that −L0 ψ −

α−1 α−1 (Dψ)∗ aDψ = K(x; ψ) − (Dψ)∗ aDψ ≥ 0 ψ ψ

for |x| ≫ 1 under assumption (A.3). Moreover, u v= α →0 as |x| → ∞. ψ Hence, from the maximum principle, we see that v ≡ 0 as in the proof of Proposition A.1.  Let f be a function on Rn such that f is bounded in D and f ∈ FK (D c ), and D1 a bounded domain such that D ⊂ D1 . We consider  −L0 Ψ = f, D1 , (A.19) Ψ|Γ1 = 0,

59

DOWNSIDE RISK MINIMIZATION

and (A.20)



c

Rn ∩ D ,

−L0 ξ = f, ξ|Γ = Ψ|Γ .

Then we set T f (x) = ξ(x),

x ∈ Γ1 ,

and (A.21) We further consider (A.22)

R

ν(f ) = RΓ1

Γ1

  −L0 z = f, 2,p

 z ∈ Wloc ,

T f (σ)π(dσ) T 1(σ)π(dσ)

.

|z| < ∞. x∈D c ψ sup

Then, as in the proof of Theorem 5.3, in [1], Chapter II, we obtain the following proposition. Here, we only give the proof of the existence of the solution for use in Section 6. Proposition A.4. Equation (A.22) has a solution unique up to additive constants if and only if ν(f ) = 0. Moreover, Z (A.23) ν(f ) = m(y)f (y) dy for m ∈ L1 (Rn ), m ≥ 0 and −L∗0 m = 0 in distribution sense Z 2,p (A.24) m(y)(−L0 z) dy = 0, z ∈ Wloc ,

such that z ∈ Fψ and −L0 z ∈ FK . Furthermore, m(x) is the only function in L1 satisfying (A.24) and Z m(x) dx = 1. Proof of existence. Let ζ0 = Ψ and η0 = ξ, where Ψ (resp., ξ) is the solution to (A.19) [resp., (A.20)]. For each k = 1, 2, . . . define ζk and ηk as follows. Let ζk be the solution to (A.9) for φ = ηk−1 , and ηk the solution to (A.4) for h = ζk . Then η0 (x)|Γ1 = T f (x), ηn (x)|Γ1 = P n (T f )(x), R Since Γ1 T f (x)π(dx) = 0, we have

|P n (T f )(x)| ≤ KkT f kL∞ (Γ1 ) e−ρn

n = 1, 2, . . . .

60

H. NAGAI

by (A.13). Set η˜n (x) =

˜

Pn

k=0 ηk (x), ζn (x) =

Pn

k=0 ζk (x). n

Then

η˜n |Γ1 = T f + P (T f ) + · · · + P (T f ).

Therefore, we see that there exists η¯ ∈ C(Γ1 ) such that k˜ ηn − η¯kL∞ (Γ1 ) → 0, n → 0. Moreover, we have 1 k˜ ηn kL∞ (Γ1 ) ≤ KkT f kL∞ (Γ1 ) . 1 − e−ρ Note that ζ˜n is the solution to  −L0 ζ˜n = f, D1 , (A.25) ˜ ζn |Γ1 = η˜n−1 |Γ1 , and η˜n the solution to (A.20) with Ψ(x) = ζ˜n . Noting that kζ˜n − ζ˜m kL∞ (D1 ) ≤ kζ˜n − ζ˜m kL∞ (Γ1 ) ≤ k˜ ηn−1 − η˜m−1 kL∞ (Γ1 ) ,

1,q 2,p we see that ζ˜n converges in C(D1 ) and weakly in Wloc since its Wloc norm is bounded; cf. Theorem 9.11 in [16]. By the regularity theorems, the limit 2,p ζ¯ ∈ Wloc ∩ C(D 1 ) and satisfies  −L0 ζ˜ = f, D1 , (A.26) ˜ ζ|Γ1 = η¯. ˜ On Pn the other hand, η˜n − ξ is the solution to (A.4) with h = ζn − Ψ|Γ = i=1 ζi |Γ and

n

n−1

X

X



ζj k˜ ηn − ξkL∞ (Γ) ≤ ≤ ηj ≤ k˜ ηn−1 kL∞ (Γ1 ) .



∞ j=1

L (Γ)

j=0

L (Γ1 )

Thus, we see that

k˜ ηn − ξkL∞ (Dc ) ≤ k˜ ηn−1 kL∞ (Γ1 ) ,

1,q 2,p and η˜n converges in C(D c ) and weakly in Wloc (D c ). The limit η˜ ∈ Wloc (D c )∩ ˜ Setting z = ζ˜ in D1 and z = η˜ in D c , C(D c ) and satisfies (A.20) with Ψ = ζ. we have a solution to (A.22). 

Acknowledgments. The author thanks the anonymous referee for careful reading the manuscript and giving valuable suggestions. The author also thanks Dr. Y. Watanabe for fruitful discussions. REFERENCES [1] Bensoussan, A. (1988). Perturbation Methods in Optimal Control. Wiley, Chichester. MR0949208 [2] Bensoussan, A. (1992). Stochastic Control of Partially Observable Systems. Cambridge Univ. Press, Cambridge. MR1191160 [3] Bensoussan, A., Frehse, J. and Nagai, H. (1998). Some results on risk-sensitive control with full observation. Appl. Math. Optim. 37 1–41. MR1485061

DOWNSIDE RISK MINIMIZATION

61

[4] Bielecki, T. R. and Pliska, S. R. (1999). Risk-sensitive dynamic asset management. Appl. Math. Optim. 39 337–360. MR1675114 [5] Bielecki, T. R., Pliska, S. R. and Sheu, S.-J. (2005). Risk sensitive portfolio management with Cox–Ingersoll–Ross interest rates: The HJB equation. SIAM J. Control Optim. 44 1811–1843 (electronic). MR2193508 [6] Browne, S. (1999). Beating a moving target: Optimal portfolio strategies for outperforming a stochastic benchmark. Finance Stoch. 3 275–294. MR1842287 [7] Browne, S. (1999). The risk and rewards of minimizing shortfall probability. J. Portfolio Management 25 76–85. [8] Bucy, R. S. and Joseph, P. D. (1987). Filtering for Stochastic Processes with Applications to Guidance, 2nd ed. Chelsea, New York. MR0879420 [9] Davis, M. and Lleo, S. (2008). Risk-sensitive benchmarked asset management. Quant. Finance 8 415–426. MR2435642 [10] Dembo, A. and Zeitouni, O. (1998). Large Deviations Techniques and Applications, 2nd ed. Applications of Mathematics (New York) 38. Springer, New York. MR1619036 [11] Donsker, M. D. and Varadhan, S. R. S. (1976). On the principal eigenvalue of second-order elliptic differential operators. Comm. Pure Appl. Math. 29 595–621. MR0425380 [12] Fleming, W. H. and Rishel, R. W. (1975). Deterministic and Stochastic Optimal Control. Springer, Berlin. MR0454768 [13] Fleming, W. H. and Sheu, S.-J. (1999). Optimal long term growth rate of expected utility of wealth. Ann. Appl. Probab. 9 871–903. MR1722286 [14] Fleming, W. H. and Sheu, S. J. (2000). Risk-sensitive control and an optimal investment model. Math. Finance 10 197–213. MR1802598 [15] Fleming, W. H. and Sheu, S. J. (2002). Risk-sensitive control and an optimal investment model. II. Ann. Appl. Probab. 12 730–767. MR1910647 [16] Gilbarg, D. and Trudinger, N. S. (1983). Elliptic Partial Differential Equations of Second Order, 2nd ed. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 224. Springer, Berlin. 
MR0737190 [17] Has’minski˘ı, R. Z. (1980). Stochastic Stability of Differential Equations. Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics and Analysis 7. Sijthoff & Noordhoff, Alphen aan den Rijn. MR0600653 [18] Hata, H. and Iida, Y. (2006). A risk-sensitive stochastic control approach to an optimal investment problem with partial information. Finance Stoch. 10 395– 426. MR2244352 [19] Hata, H., Nagai, H. and Sheu, S.-J. (2010). Asymptotics of the probability minimizing a “down-side” risk. Ann. Appl. Probab. 20 52–89. MR2582642 [20] Hata, H. and Sekine, J. (2005). Solving long term optimal investment problems with Cox–Ingersoll–Ross interest rates. Adv. Math. Econ. 8 231–255. [21] Hata, H. and Sekine, J. (2010). Explicit solution to a certain non-ELQG risk-sensitive stochastic control problem. Appl. Math. Optim. 62 341–380. MR2727339 [22] Hata, H. and Sheu, S. J. (2009). “Down-side risk” probability minimization problem for a multidimensional model with stochastic volatilty. Preprint. [23] Kaise, H. and Nagai, H. (1999). Ergodic type Bellman equations of risk-sensitive control with large parameters and their singular limits. Asymptot. Anal. 20 279– 299. MR1715337 [24] Kuroda, K. and Nagai, H. (2002). Risk-sensitive portfolio optimization on infinite time horizon. Stoch. Stoch. Rep. 73 309–331. MR1932164

62

H. NAGAI

[25] Ladyzhenskaya, O. A. and Ural’tseva, N. N. (1968). Linear and Quasilinear Elliptic Equations. Academic Press, New York. MR0244627 [26] Nagai, H. (1996). Bellman equations of risk-sensitive control. SIAM J. Control Optim. 34 74–101. MR1372906 [27] Nagai, H. (2001). Risk-sensitive dynamic asset management with partial information. In Stochastics in Finite and Infinite Dimensions 321–339. Birkh¨ auser, Boston, MA. MR1797094 [28] Nagai, H. (2003). Optimal strategies for risk-sensitive portfolio optimization problems for general factor models. SIAM J. Control Optim. 41 1779–1800 (electronic). MR1972534 [29] Nagai, H. (2011). Asymptotics of the probability minimizing a “down-side” risk under partial information. Quant. Finance 11 789–803. [30] Nagai, H. and Peng, S. (2002). Risk-sensitive dynamic portfolio optimization with partial information on infinite time horizon. Ann. Appl. Probab. 12 173–195. MR1890061 [31] Pham, H. (2003). A large deviations approach to optimal long term investment. Finance Stoch. 7 169–195. MR1968944 [32] Pham, H. (2003). A risk-sensitive control dual approach to a large deviations control problem. Systems Control Lett. 49 295–309. MR2011524 [33] Rishel, R. (1999). Optimal portfolio management with partial observations and power utility function. In Stochastic Analysis, Control, Optimization and Applications 605–619. Birkh¨ auser, Boston, MA. MR1702984 [34] Stettner, L. (2004). Duality and risk sensitive portfolio optimization. In Mathematics of Finance (G. Yin and Q. Zhang, eds.). Contemp. Math. 351 333–347. Amer. Math. Soc., Providence, RI. MR2076552 [35] Stutzer, M. (2003). Portfolio choice with endogenous utility: A large deviations approach. J. Econometrics 116 365–386. MR2002529 Division of Mathematical Science for Social Systems Graduate School of Engineering Science Osaka University Toyonaka, 560-8531 Japan E-mail: [email protected]

Suggest Documents