Backward Stochastic Differential Equations driven by càdlàg martingales

Raffaella Carbone (1), Benedetta Ferrario (1), Marina Santacroce (2)

(1) Dipartimento di Matematica dell'Università di Pavia, via Ferrata 1, 27100 Pavia, Italy. E-mail: [email protected], [email protected]
(2) Dipartimento di Matematica del Politecnico di Torino, corso Duca degli Abruzzi 24, 10129 Torino, Italy. E-mail: [email protected]

This work was partially supported by the IMATI-CNR-Pavia (Italy).

Abstract. Backward Stochastic Differential Equations arise in many financial problems. Although there is a growing number of papers considering general financial markets, the theory of BSDEs has been developed essentially in the Brownian setting. We consider Backward Stochastic Differential Equations driven by an $\mathbb R^d$-valued càdlàg martingale and we study the properties of the solutions in the case of a possibly non-uniform Lipschitz generator.
1 Introduction
Backward Stochastic Differential Equations (BSDEs) were introduced by Bismut in 1973 ([1]) with a linear generator. The existence and uniqueness of the solutions in the case of non-linear generators satisfying Lipschitz conditions was proved by Pardoux and Peng ([16]) for a Brownian driver. These results were generalized by Kobylanski ([10]) to generators with quadratic growth. The interest in BSDEs has grown rapidly in the last decades, since BSDEs arise naturally in many financial problems, e.g. in the valuation and hedging of contingent claims, or in the theory of recursive utility, see [7] by El Karoui, Peng and Quenez. It is well known that in a complete financial market the dynamics of the portfolio which replicates a claim can be described by a BSDE with a linear generator. On the contrary, when the market is incomplete, the problem of pricing, which in general leads to an interval of arbitrage-free prices, cannot be characterized in terms of linear BSDEs (see, e.g., [8]); also in utility maximization problems the BSDEs characterizing the solutions have generators with quadratic growth (among others, it is worth mentioning [18], by Rouge and El Karoui).

Although there is a growing number of papers considering general financial markets, i.e. where the price processes are modelled by semimartingales in a general filtration (requiring only a no-arbitrage condition), the theory of BSDEs is only rarely used beyond the Brownian setting. A backward equation in a semimartingale setting was first derived by Chitashvili ([4], 1983), as a stochastic version of the Bellman equation in an optimal control problem. The reader is referred also to [5]. More recently, semimartingale BSDEs have been used to characterize the solution of optimal portfolio problems based on utility functions: either giving the solution of the dual problem in terms of optimal martingale measures (see e.g. [12] and [14]) or directly, for instance, in the case of mean-variance and exponential hedging (see [2], [13], [15]). For semimartingale BSDEs there exist very few results, even for generators with linear growth ([6], by El Karoui and Huang in 1997, as far as we are aware).

In the present paper, we want to study, on a
complete probability space $(\Omega,\mathcal F,P)$, equipped with a complete, right-continuous, quasi-left-continuous filtration $(\mathcal F_t)_{t\in[0,T]}$, a backward stochastic differential equation of the form
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,d\langle M\rangle_s - \int_t^T Z_s\,dM_s - N_T + N_t,\qquad t\in[0,T], \tag{1}$$
driven by the square integrable (càdlàg) martingale $(M_t)_{t\in[0,T]}$, where $T$ is a fixed time horizon and $\mathcal F_T=\mathcal F$. By $[M]$ and $\langle M\rangle$ we denote, respectively, the quadratic variation process and its predictable compensator (dual predictable projection). The coefficients $f$ and $\xi$ are the parameters of the equation: $f$ is called the generator of (1) and the random variable $\xi$ is the terminal condition. The solution is a process $(Y_t,Z_t,N_t)_{t\in[0,T]}$ satisfying equation (1) $P$-a.s., with $Z$ predictable, $N$ a square integrable martingale orthogonal to $M$, and verifying some integrability conditions that we will make precise in the next section.

Under suitable conditions, we prove the existence and uniqueness of the solution of (1) in a proper space (the integrability conditions on the coefficients will guarantee similar conditions for the solution) when $f$ is a (possibly non-uniform) Lipschitz generator. In Section 2, we provide the main result and some further properties of the solutions (such as a priori estimates, comparison principles, stability) in the particular case of a uniformly Lipschitz generator and in dimension 1. For the sake of clarity, we give a detailed description of the one-dimensional case; techniques and ideas are analogous in the general case. Then, in Section 3, we explain how our proofs can be extended to the multidimensional case with a non-uniform Lipschitz generator, similar to the one considered in [6]. We use a slightly different non-uniform Lipschitz condition (which is required for a correct proof in the multidimensional case), and we do not need the quadratic variation $[M]$ of the driving martingale to be predictable; the main improvement with respect to [6], however, is the proof of existence and uniqueness of the solution under much weaker integrability conditions on the parameters.
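For orientation (this special case plays no role in the sequel), we note that in the Brownian case equation (1) reduces to the classical BSDE studied in [16]: if $M=W$ is a one-dimensional Brownian motion, then $\langle M\rangle_t=t$ and, by the predictable representation property of the Brownian filtration, the orthogonal component $N$ can be taken identically zero, so that (1) reads
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dW_s,\qquad t\in[0,T].$$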
2 Existence and uniqueness for Lipschitz generators
As mentioned in the Introduction, in this section we study equation (1) in dimension 1, with a uniformly Lipschitz generator. Recall that we work on a complete probability space $(\Omega,\mathcal F,P)$, with a complete, right-continuous, quasi-left-continuous filtration $(\mathcal F_t)_{t\in[0,T]}$. We adopt notations similar to the ones used in [7] (for the Brownian case). To describe the parameters and the solution of the equation we introduce the following spaces:

- $L^2_\beta = L^2_\beta(\mathcal F_T)$, $\beta>0$, the space of the $\mathcal F_T$-measurable random variables $\xi$ with $\|\xi\|^2_\beta = E[e^{\beta\langle M\rangle_T}|\xi|^2]<\infty$;
- $H^1_T = H^1_T(\mathbb R)$, the space of the $\mathbb R$-valued processes $\phi$ such that $\|\phi\|_{H^1_T} = E\big(\int_0^T|\phi_t|^2\,d\langle M\rangle_t\big)^{1/2}<\infty$;
- $H^2_T = H^2_T(\mathbb R)$, the space of the $\mathbb R$-valued processes $\phi$ such that $\|\phi\|^2_T = E\int_0^T|\phi_t|^2\,d\langle M\rangle_t<\infty$;
- $H^2_{T,\beta} = H^2_{T,\beta}(\mathbb R)$, $\beta>0$, the space of the processes $\phi$ such that $\|\phi\|^2_{T,\beta} = E\int_0^T e^{\beta\langle M\rangle_t}|\phi_t|^2\,d\langle M\rangle_t<\infty$ ($H^2_{T,\beta}$ is obviously a subspace of $H^2_T$).

We also denote by $\mathcal L^2$ the space of $L^2$-bounded martingales and by $\mathcal L^2_\beta$ the subspace of $\mathcal L^2$ containing all martingales $N$ such that $\|N\|^2_\beta := E\int_0^T e^{\beta\langle M\rangle_t}\,d[N]_t<+\infty$.

It is clear from the definitions that each element of these spaces is really an equivalence class, even if we will call it a process. Notice also that $H^2_{T,\beta_1}\subset H^2_{T,\beta_2}\subset H^2_T$ for $\beta_1>\beta_2>0$.
As in [7], we say that the coefficients $(f,\xi)$ of equation (1) are $\beta$-standard when $\xi$ is in $L^2_\beta$, $f(\cdot,0,0)\in H^2_{T,\beta}$, and $f$ is uniformly Lipschitz; we call $L$ the minimum real constant such that
$$|f(\omega,t,y_0,z_0)-f(\omega,t,y_1,z_1)| \le L\big(|y_0-y_1|+|z_0-z_1|\big)\qquad P\text{-a.e. }\omega,\ \text{for all } y_0,y_1,z_0,z_1,t.$$
The solution is an $\mathbb R^3$-valued process $(Y_t,Z_t,N_t)_{t\in[0,T]}$ satisfying equation (1) $P$-a.s. such that $Y$ is in $H^2_{T,\beta}$, $Z$ is a predictable process in $H^2_{T,\beta}$ and $N$ is a martingale in $\mathcal L^2_\beta$ orthogonal to $M$.

Our aim is to prove the existence and uniqueness of this solution when the coefficients $(f,\xi)$ are $\beta$-standard, for suitable $\beta$ (Theorem 2.1); the admissible values of $\beta$ will depend on the Lipschitz constant of the generator. The basic ideas of the proof can be sketched as follows: a result in [6] is exploited to solve the case of a generator with null Lipschitz constant; then we prove a "good" a priori estimate by resorting to a suitable Itô formula; this allows us to reach the desired result by a fixed point theorem.

Remark. We point out that, since $([M]-\langle M\rangle)$ is a martingale, if $Z$ is a predictable element of $H^2_{T,\beta}$, we have (see Th. 11.7 of [9])
$$E\int_0^T e^{\beta\langle M\rangle_t}|Z_t|^2\,d[M]_t = E\int_0^T e^{\beta\langle M\rangle_t}|Z_t|^2\,d\langle M\rangle_t<\infty.$$
Moreover, it is worth remarking that the set of uniformly bounded processes is always contained in $H^2_T$, since $M$ is square integrable, while it is contained in $H^2_{T,\beta}$, $\beta>0$, if and only if $E[e^{\beta\langle M\rangle_T}]<\infty$. Indeed $d(e^{\beta\langle M\rangle_t}) = \beta e^{\beta\langle M\rangle_t}\,d\langle M\rangle_t$, so
$$E\int_0^T e^{\beta\langle M\rangle_t}\,d\langle M\rangle_t = \beta^{-1}\big(E[e^{\beta\langle M\rangle_T}]-1\big).$$
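As a simple illustration (not needed in the sequel): if $\langle M\rangle_T$ is bounded by a deterministic constant $c$, as happens for instance when $M$ is a Brownian motion on $[0,T]$ (so that $\langle M\rangle_T=T$), then for every $\beta>0$
$$E[e^{\beta\langle M\rangle_T}]\le e^{\beta c}<\infty,$$
so bounded processes belong to $H^2_{T,\beta}$ for every $\beta>0$ and every bounded terminal condition belongs to $L^2_\beta$.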
We start by considering a particular form of (1), with a generator $g$ independent of the solution:
$$Y_t = \xi + \int_t^T g_s\,d\langle M\rangle_s - \int_t^T Z_s\,dM_s - N_T + N_t,\qquad t\in[0,T]. \tag{2}$$
The following proposition resembles Proposition 6.1 of [6], adapted to our context.

Proposition 2.1. For $\xi$ in $L^2_\beta$ and $g$ in $H^2_{T,\beta}$, equation (2) has a unique solution in $H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$.

Proof. If the solution exists, the process $n_t = \int_0^t Z_s\,dM_s + N_t - N_0$ is the martingale component of $Y$, so
$$Y_t = E\Big[\xi + \int_t^T g_s\,d\langle M\rangle_s\,\Big|\,\mathcal F_t\Big],\qquad t\in[0,T]. \tag{3}$$
Remark that
$$\Big(\int_{t_1}^{t_2} A_s\,d\langle M\rangle_s\Big)^2 \le \frac1a\big(e^{-a\langle M\rangle_{t_1}}-e^{-a\langle M\rangle_{t_2}}\big)\int_{t_1}^{t_2} A_s^2\,e^{a\langle M\rangle_s}\,d\langle M\rangle_s,\qquad\text{a.s.},$$
for any process $A$ and positive constant $a$. So
$$\Big|\int_t^T g_s\,d\langle M\rangle_s\Big| \le a^{-1/2}\,e^{-\frac a2\langle M\rangle_t}\Big(\int_t^T g_s^2\,e^{a\langle M\rangle_s}\,d\langle M\rangle_s\Big)^{1/2}. \tag{4}$$
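For the reader's convenience, one way to check the above remark is via the Cauchy–Schwarz inequality: writing $A_s = \big(A_s e^{\frac a2\langle M\rangle_s}\big)e^{-\frac a2\langle M\rangle_s}$,
$$\Big(\int_{t_1}^{t_2}A_s\,d\langle M\rangle_s\Big)^2 \le \Big(\int_{t_1}^{t_2}A_s^2 e^{a\langle M\rangle_s}\,d\langle M\rangle_s\Big)\Big(\int_{t_1}^{t_2}e^{-a\langle M\rangle_s}\,d\langle M\rangle_s\Big) = \frac1a\big(e^{-a\langle M\rangle_{t_1}}-e^{-a\langle M\rangle_{t_2}}\big)\int_{t_1}^{t_2}A_s^2 e^{a\langle M\rangle_s}\,d\langle M\rangle_s;$$
inequality (4) then corresponds to $t_1=t$, $t_2=T$, $A=g$, after bounding $e^{-a\langle M\rangle_t}-e^{-a\langle M\rangle_T}$ by $e^{-a\langle M\rangle_t}$.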
Since $\xi$ is in $L^2_\beta$ and $g$ in $H^2_{T,\beta}$, the previous estimate assures that we can define the process $Y$ as in (3) and that it is an adapted process in $L^2(\Omega,\mathcal F,P)$. Then, by the martingale representation theorem, there exist a unique predictable process $Z$ in $H^2_T$ and a unique $L^2$-bounded martingale $N$ orthogonal to $M$ which verify (2). Therefore, for any $\xi$ in $L^2_\beta$ and $g$ in $H^2_{T,\beta}$, the process $(Y,Z,N)$ is uniquely determined. We have to show that it is in $H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$.

Now, by (4) with $a=\beta/2$ and Jensen's inequality,
$$Y_t^2 \le 2E[\xi^2\,|\,\mathcal F_t] + 4\beta^{-1}e^{-\frac\beta2\langle M\rangle_t}\,E\Big[\int_t^T g_s^2\,e^{\frac\beta2\langle M\rangle_s}\,d\langle M\rangle_s\,\Big|\,\mathcal F_t\Big].$$
Then recall that, for a positive measurable process $K$ and an increasing process $A$, $E[\int_0^T K_t\,dA_t] = E[\int_0^T E[K_t|\mathcal F_t]\,dA_t]$ (see, e.g., Th. 57 of Ch. VI in [3]); hence
$$\|Y\|^2_{T,\beta} \le 2E\Big[\int_0^T\Big(e^{\beta\langle M\rangle_t}\xi^2 + \frac2\beta\,e^{\frac\beta2\langle M\rangle_t}\int_t^T g_s^2\,e^{\frac\beta2\langle M\rangle_s}\,d\langle M\rangle_s\Big)\,d\langle M\rangle_t\Big] \le \frac2\beta\|\xi\|^2_\beta + \frac8{\beta^2}\|g\|^2_{T,\beta}. \tag{5}$$
We now show that the processes $Z$ and $N$ are in $H^2_{T,\beta}$ and $\mathcal L^2_\beta$, respectively. We compute the differential
$$d\big(e^{\beta\langle M\rangle_t}([n]_T-[n]_t)\big) = \beta e^{\beta\langle M\rangle_t}([n]_T-[n]_t)\,d\langle M\rangle_t - e^{\beta\langle M\rangle_t}\,d[n]_t;$$
then, integrating,
$$E\int_0^T e^{\beta\langle M\rangle_s}\,d[n]_s = E[n]_T + \beta E\int_0^T e^{\beta\langle M\rangle_s}([n]_T-[n]_s)\,d\langle M\rangle_s,$$
that is
$$\|Z\|^2_{T,\beta} + \|N\|^2_\beta = \|Z\|^2_{T,0} + \|N\|^2_0 + \beta E\Big[\int_0^T e^{\beta\langle M\rangle_t}\,E\big[[n]_T-[n]_t\,\big|\,\mathcal F_t\big]\,d\langle M\rangle_t\Big].$$
We still have to handle the third term; since $(n^2-[n])$ is a martingale,
$$E\big[[n]_T-[n]_t\,\big|\,\mathcal F_t\big] = E\big[(n_T-n_t)^2\,\big|\,\mathcal F_t\big] = E\Big[\Big(\xi-Y_t+\int_t^T g_s\,d\langle M\rangle_s\Big)^2\,\Big|\,\mathcal F_t\Big] \le 8\,E\Big[\xi^2+\Big(\int_t^T g_s\,d\langle M\rangle_s\Big)^2\,\Big|\,\mathcal F_t\Big],$$
so we conclude with some easy computations, similarly as in (5).

Lemma 2.1. Assume $(Y,Z,N)\in H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$ is a solution of (1) for some $\beta$-standard parameters $(f,\xi)$. Then the random variable $\sup_{t\in[0,T]}\big(e^{\frac\beta2\langle M\rangle_t}|Y_t|\big)$ is in $L^2$. Moreover, the process $\int_0^\cdot e^{\beta\langle M\rangle_s}Y_{s-}\,dH_s$ is a martingale for any martingale $H$ in $\mathcal L^2_\beta$.
Proof. Inequality (4) with $a=\beta$ and $g_s=f(s,Y_s,Z_s)$, and Doob's inequality, imply
$$E\big[\sup_{t\in[0,T]} e^{\beta\langle M\rangle_t}Y_t^2\big] \le 2E\big[\sup_t E[\xi e^{\frac\beta2\langle M\rangle_T}\,|\,\mathcal F_t]^2\big] + \frac2\beta E\Big[\sup_t E\Big[\Big(\int_0^T f^2(s,Y_s,Z_s)\,e^{\beta\langle M\rangle_s}\,d\langle M\rangle_s\Big)^{1/2}\,\Big|\,\mathcal F_t\Big]^2\Big]$$
$$\le 8\|\xi\|^2_\beta + 8\beta^{-1}E\int_0^T f^2(s,Y_s,Z_s)\,e^{\beta\langle M\rangle_s}\,d\langle M\rangle_s \le 8\|\xi\|^2_\beta + 24\beta^{-1}\big(\|f(\cdot,0,0)\|^2_{T,\beta} + L^2\|Y\|^2_{T,\beta} + L^2\|Z\|^2_{T,\beta}\big).$$
This completes the proof of the first part. Now, for the integral process, we recall that, if $K$ is an $L^2$-bounded martingale and the filtration is quasi-left continuous, then for any predictable process $A$ such that $E\big[\big(\int_0^T A_s^2\,d\langle K\rangle_s\big)^{1/2}\big]<\infty$ the process $\int_0^\cdot A_s\,dK_s$ is a martingale. So we have to prove that this condition holds for $A=e^{\beta\langle M\rangle}Y_-$ and $K=H$. Indeed, we have
$$E\Big[\Big(\int_0^T e^{2\beta\langle M\rangle_s}Y_{s-}^2\,d\langle H\rangle_s\Big)^{1/2}\Big] \le E\Big[\sup_{t\in[0,T]}\big(e^{\frac\beta2\langle M\rangle_t}|Y_t|\big)\Big(\int_0^T e^{\beta\langle M\rangle_s}\,d\langle H\rangle_s\Big)^{1/2}\Big] \le \big\|\sup_{t\in[0,T]}e^{\frac\beta2\langle M\rangle_t}|Y_t|\big\|_{L^2}\,\|H\|_\beta,$$
which is finite by the previous considerations.

Proposition 2.2 (A priori estimates). Let $(f_1,\xi_1)$ and $(f_2,\xi_2)$ be two $\beta$-standard parameters, with $\beta>2L+L^2$, and suppose that there exist the corresponding solutions $(Y^1,Z^1,N^1)$ and $(Y^2,Z^2,N^2)$ of (1), respectively, in the space $H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$. For any couple of processes $X^1$ and $X^2$, let $\delta X$ be the difference process $X^1-X^2$, and let $\delta_2 f_t = (f_1-f_2)(t,Y^2_t,Z^2_t)$. Then there are two positive constants $K$ and $\mu$, depending on $L$ and $\beta$ as in (7), such that
$$\|\delta Y\|^2_{T,\beta} + \|\delta Z\|^2_{T,\beta} + \|\delta N\|^2_\beta \le K\big(\|\xi_1-\xi_2\|^2_\beta + \mu^{-2}\|\delta_2 f\|^2_{T,\beta}\big).$$
Proof. We have $\delta Y = Y^1-Y^2$, where
$$Y^i_t = \xi_i + \int_t^T f_i(s,Y^i_s,Z^i_s)\,d\langle M\rangle_s - \int_t^T Z^i_s\,dM_s - N^i_T + N^i_t,\qquad i=1,2.$$
Writing the Itô formula for $e^{\beta\langle M\rangle_s}\delta Y_s^2$, we find
$$d\big(e^{\beta\langle M\rangle_s}\delta Y_s^2\big) = e^{\beta\langle M\rangle_s}\Big[\beta|\delta Y_s|^2\,d\langle M\rangle_s - 2\delta Y_s\big(f_1(s,Y^1_s,Z^1_s)-f_2(s,Y^2_s,Z^2_s)\big)\,d\langle M\rangle_s + \delta Z_s^2\,d[M]_s + d[\delta N]_s + 2\delta Y_{s-}\delta Z_s\,dM_s + 2\delta Y_{s-}\,d\delta N_s\Big].$$
The last two terms are martingales (just apply Lemma 2.1 with $H_t=\int_0^t\delta Z_s\,dM_s+\delta N_t$). So, if we take the expectation of the integral on $[0,T]$ and rearrange some terms, we deduce that
$$E[\delta Y_0^2] + E\Big[\int_0^T e^{\beta\langle M\rangle_s}\big(\beta\,\delta Y_s^2+\delta Z_s^2\big)\,d\langle M\rangle_s\Big] + E\Big[\int_0^T e^{\beta\langle M\rangle_s}\,d[\delta N]_s\Big]$$
$$= E\big[e^{\beta\langle M\rangle_T}\delta Y_T^2\big] + 2E\Big[\int_0^T e^{\beta\langle M\rangle_s}\delta Y_s\big(f_1(s,Y^1_s,Z^1_s)-f_2(s,Y^2_s,Z^2_s)\big)\,d\langle M\rangle_s\Big]$$
$$\le E\big[e^{\beta\langle M\rangle_T}\delta Y_T^2\big] + 2E\Big[\int_0^T e^{\beta\langle M\rangle_s}|\delta Y_s|\big(L|\delta Y_s|+L|\delta Z_s|+|\delta_2 f_s|\big)\,d\langle M\rangle_s\Big]$$
$$\le E\big[e^{\beta\langle M\rangle_T}\delta Y_T^2\big] + E\Big[\int_0^T e^{\beta\langle M\rangle_s}\Big(\delta Y_s^2\big(2L+L\lambda^2+\mu^2\big) + \delta Z_s^2\,\frac L{\lambda^2} + \delta_2 f_s^2\,\frac1{\mu^2}\Big)\,d\langle M\rangle_s\Big], \tag{6}$$
for any $\lambda,\mu>0$. Rearranging the previous inequality (notice that $E[\delta Y_0^2]<\infty$, always by Lemma 2.1),
$$\big(\beta-2L-L\lambda^2-\mu^2\big)\|\delta Y\|^2_{T,\beta} + \big(1-L\lambda^{-2}\big)\|\delta Z\|^2_{T,\beta} + \|\delta N\|^2_\beta \le E\big[e^{\beta\langle M\rangle_T}|\delta Y_T|^2\big] + \mu^{-2}\|\delta_2 f\|^2_{T,\beta}.$$
Now we conclude by choosing $\lambda$, $\mu$ and $K$ such that
$$1-L\lambda^{-2}>0,\qquad \beta-2L-L\lambda^2-\mu^2>0,\qquad K=\max\big\{(1-L\lambda^{-2})^{-1},\,(\beta-2L-L\lambda^2-\mu^2)^{-1}\big\}. \tag{7}$$
We remark that, for $\beta>2L+L^2$, it is always possible to choose $\lambda$ and $\mu$ that verify (7): take, e.g., $\lambda^2$ slightly larger than $L$ and $\mu^2$ small enough.

An easy consequence of the inequalities in Proposition 2.2 is a kind of continuity of the solutions with respect to the parameters. We say that $(f_\alpha,\xi_\alpha)$, $\alpha\in\mathbb R$, is a continuous family of $\beta$-standard parameters if, for $\alpha\to\alpha_0$, we have $\xi_\alpha\to\xi_{\alpha_0}$ in $L^2_\beta$ and $f_\alpha(\cdot,Y^{\alpha_0}_\cdot,Z^{\alpha_0}_\cdot)\to f_{\alpha_0}(\cdot,Y^{\alpha_0}_\cdot,Z^{\alpha_0}_\cdot)$ in $H^2_{T,\beta}$. Indeed, it is immediate to verify the following

Corollary 2.1 (Stability). Let $(f_\alpha,\xi_\alpha)$, $\alpha\in\mathbb R$, be a continuous family of $\beta$-standard parameters of the BSDE such that all the functions $f_\alpha$ are equi-Lipschitz with Lipschitz constant $L$. Suppose also that for each equation there exists a unique solution $(Y_\alpha,Z_\alpha,N_\alpha)$ in $H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$ for some $\beta>2L+L^2$. Then the map $\alpha\in\mathbb R\mapsto(Y_\alpha,Z_\alpha,N_\alpha)\in H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$ is continuous.

We now prove the main result: the existence and uniqueness theorem.

Theorem 2.1. Let $(f,\xi)$ be $\beta$-standard parameters, with $\beta>\beta_0(L)$, where
$$\beta_0(L) = \begin{cases} 2\sqrt2\,L & \text{if } L\le\sqrt2/2,\\ 2L^2+1 & \text{if } L>\sqrt2/2.\end{cases}$$
Then equation (1) has a unique solution $(Y,Z,N)$ in the space $H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$. Moreover, the random variable $\sup_t\big(e^{\frac\beta2\langle M\rangle_t}|Y_t|\big)$ is in $L^2(\Omega,\mathcal F,P)$.
Proof. We prove the statement by means of a fixed point theorem. Let us define the map
$$\psi: H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta \to H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta,\qquad \psi(y,z,n)=(Y,Z,N),$$
where $(Y,Z,N)$ solves
$$Y_t = \xi + \int_t^T f(s,y_s,z_s)\,d\langle M\rangle_s - \int_t^T Z_s\,dM_s - N_T + N_t. \tag{8}$$
By Proposition 2.1, $\psi$ is well defined. We want to prove that $\psi$ is a contraction on the space $H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$. We consider $(Y^i,Z^i,N^i)=\psi(y^i,z^i,n^i)$, $i=1,2$. According to Proposition 2.2 (notice that $\beta>\beta_0(L)\ge2L+L^2$), we have, for all positive constants $K$ and $\mu^2$ such that $\beta>\mu^2$ and $K=\max\{1,(\beta-\mu^2)^{-1}\}$,
$$\|\delta N\|^2_\beta + \|\delta Y\|^2_{T,\beta} + \|\delta Z\|^2_{T,\beta} \le \frac K{\mu^2}\,\|f(\cdot,y^1_\cdot,z^1_\cdot)-f(\cdot,y^2_\cdot,z^2_\cdot)\|^2_{T,\beta} \le \frac{2KL^2}{\mu^2}\big(\|\delta y\|^2_{T,\beta}+\|\delta z\|^2_{T,\beta}\big).$$
This implies that $\psi$ is contractive if we can choose the constants $K$ and $\mu$ such that
$$\mu^2<\beta,\qquad K=\max\{1,(\beta-\mu^2)^{-1}\},\qquad \frac{2KL^2}{\mu^2}<1,$$
the last condition amounting to $\mu^2>2L^2$ and $\beta>\mu^2+\frac{2L^2}{\mu^2}$. Elementary calculations show that this is possible if and only if $\beta>\beta_0(L)$. Hence, in this case, there exists a unique fixed point for the map $\psi$ in the space $H^2_{T,\beta}\times H^2_{T,\beta}\times\mathcal L^2_\beta$, which is the solution to equation (1).
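For the reader's convenience, here is one way to carry out the elementary calculations mentioned above. One needs a $\mu^2>2L^2$ with $\beta>\mu^2+2L^2/\mu^2$; the function $x\mapsto x+2L^2/x$ is decreasing on $(0,\sqrt2\,L]$ and increasing on $[\sqrt2\,L,\infty)$, so
$$\inf_{\mu^2>2L^2}\Big(\mu^2+\frac{2L^2}{\mu^2}\Big) = \begin{cases} \sqrt2\,L+\sqrt2\,L = 2\sqrt2\,L & \text{if } 2L^2\le\sqrt2\,L,\ \text{i.e. } L\le\sqrt2/2,\\ 2L^2+1 & \text{if } L>\sqrt2/2,\end{cases}$$
which is exactly $\beta_0(L)$.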
2.1 The linear case
As a particular Lipschitz case, we deal with the linear case. We have the following representation for the first component of the solution.

Lemma 2.2 (Linear BSDEs). Let $a$, $b$, $c$ be predictable bounded processes, denote by $M^c$ the continuous part of $M$, and define
$$\gamma_t = \exp\Big(\int_0^t a_s\,d\langle M\rangle_s\Big),\qquad \mathcal E_t = \exp\Big(-\int_0^t\frac{b_s^2}2\,d\langle M\rangle_s + \int_0^t b_s\,dM^c_s\Big),\qquad \Gamma_t = \gamma_t\,\mathcal E_t.$$
Suppose that (i) $E\big[\exp\big(\frac12\int_0^T b_s^2\,d\langle M\rangle_s\big)\big]<\infty$, (ii) $E\big[(\sup_t\gamma_t)^2\,\mathcal E_T^2\big]<\infty$. If the linear backward equation
$$dY_t = -(a_tY_t+b_tZ_t+c_t)\,d\langle M\rangle_t + Z_t\,dM_t + dN_t,\qquad Y_T=\xi, \tag{9}$$
has a solution $(Y,Z,N)$ in $H^2_T\times H^2_T\times\mathcal L^2$, then $Y$ is given by
$$Y_t = E\Big[\xi\,\frac{\Gamma_T}{\Gamma_t} + \int_t^T\frac{\Gamma_s}{\Gamma_t}\,c_s\,d\langle M\rangle_s\,\Big|\,\mathcal F_t\Big],\qquad 0\le t\le T. \tag{10}$$
We point out that the process $\mathcal E$ is the Doléans exponential associated to the stochastic integral of the process $b$ with respect to the continuous component of the martingale $M$. We know that $\mathcal E$ is a uniformly integrable martingale when the Novikov condition (i) is fulfilled; actually, (i) could be replaced by any other condition implying the uniform integrability of $\mathcal E$.
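For the reader's convenience, we recall the standard criterion behind this claim (a sketch, in the notation above): for a continuous local martingale $X$, the Doléans exponential $\mathcal E(X)_t = \exp\big(X_t-\tfrac12\langle X\rangle_t\big)$ is a uniformly integrable martingale whenever Novikov's condition $E[\exp(\tfrac12\langle X\rangle_T)]<\infty$ holds; here $X=\int_0^\cdot b_s\,dM^c_s$ and $\langle X\rangle = \int_0^\cdot b_s^2\,d\langle M^c\rangle_s$, so condition (i) is sufficient because $d\langle M^c\rangle_s\le d\langle M\rangle_s$.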
Proof. We introduce the probability measure $Q$ (equivalent to $P$) which has density $\mathcal E_T$ with respect to $P$. By the Girsanov theorem, the $\mathcal F$-adapted process $\widetilde M$, defined by $\widetilde M_t := M_t - \int_0^t b_s\,d\langle M\rangle_s$, is a $Q$-martingale, and $\langle\widetilde M\rangle = \langle M\rangle$. Equation (9), under the measure $Q$, becomes $(Q)\;\; dY_t = -(a_tY_t+c_t)\,d\langle M\rangle_t + Z_t\,d\widetilde M_t + dN_t$, $Y_T=\xi$. So we apply the Itô formula to the process $\gamma Y$ and obtain
$$(Q)\qquad \gamma_tY_t - \gamma_0Y_0 + \int_0^t c_s\gamma_s\,d\langle M\rangle_s = \int_0^t\gamma_sZ_s\,d\widetilde M_s + \int_0^t\gamma_s\,dN_s. \tag{11}$$
By using condition (ii), one can easily see that the r.h.s. of (11) is a $Q$-martingale, and so is the l.h.s. We obtain, as usual, that
$$\gamma_tY_t = E_Q\Big[\xi\gamma_T + \int_t^T c_s\gamma_s\,d\langle M\rangle_s\,\Big|\,\mathcal F_t\Big],\qquad\text{and consequently}\qquad \Gamma_tY_t = E\Big[\xi\Gamma_T + \int_t^T c_s\Gamma_s\,d\langle M\rangle_s\,\Big|\,\mathcal F_t\Big].$$
This concludes the proof.

Going back through the proof of the previous lemma, we can also see that, whenever the expression (10) makes sense, it is a solution of the linear equation (9) provided the l.h.s. of (11) is an $L^2(Q)$-martingale.
The linear case allows us to obtain a comparison result. Consider two processes $(Y^i,Z^i,N^i)$ in $H^2_T\times H^2_T\times\mathcal L^2$, $i=1,2$, verifying
$$Y^i_t = \xi_i + \int_t^T f_i(s,Y^i_s,Z^i_s)\,d\langle M\rangle_s - \int_t^T Z^i_s\,dM_s + N^i_t - N^i_T.$$
We define $\delta Y_t = Y^1_t-Y^2_t$ and $\delta Z_t = Z^1_t-Z^2_t$, and introduce the processes
$$\Delta_Y f^1_t = \frac{f_1(t,Y^1_t,Z^1_t)-f_1(t,Y^2_t,Z^1_t)}{\delta Y_t}\,1_{\delta Y_t\ne0},\qquad \Delta_Z f^1_t = \frac{f_1(t,Y^2_t,Z^1_t)-f_1(t,Y^2_t,Z^2_t)}{\delta Z_t}\,1_{\delta Z_t\ne0}.$$
Then it is easy to verify that the process $\delta Y$ solves the following linear BSDE:
$$-d\delta Y_t = \big(\Delta_Y f^1_t\,\delta Y_t + \Delta_Z f^1_t\,\delta Z_t + \delta_2 f_t\big)\,d\langle M\rangle_t - \delta Z_t\,dM_t - d\delta N_t,\qquad \delta Y_T = \xi_1-\xi_2.$$
According to Lemma 2.2, if the coefficients $\Delta_Y f^1$ and $\Delta_Z f^1$ verify the suitable conditions (i) and (ii), we have that
$$\Gamma_t\,\delta Y_t = E\Big[\Gamma_T(\xi_1-\xi_2) + \int_t^T\delta_2 f_s\,\Gamma_s\,d\langle M\rangle_s\,\Big|\,\mathcal F_t\Big]. \tag{12}$$
Assume now that $\xi_1\ge\xi_2$ and, for any $t$, $\delta_2 f_t\ge0$ $P$-a.s. Then, for any $t$, $Y^1_t\ge Y^2_t$ $P$-a.s. We have proved the following comparison result.

Theorem 2.2. Let $(Y^1,Z^1,N^1)$, $(Y^2,Z^2,N^2)$ be as above and suppose the coefficients $\Delta_Y f^1$ and $\Delta_Z f^1$ verify conditions (i) and (ii) of Lemma 2.2. Assume that $\xi_1\ge\xi_2$ and, for any $t$, $\delta_2 f_t(\omega)\ge0$ $P$-a.s. Then, for any $t$, $Y^1_t\ge Y^2_t$ $P$-a.s.

Example. Finally, we would like to consider an easy linear equation
$$dY_t = -aY_t\,d\langle M\rangle_t + Z_t\,dM_t + dN_t,\qquad Y_T=\xi, \tag{13}$$
with $a$ constant and $\xi$ in $L^2$. Here it is quite easy to understand the natural links between the integrability conditions on the parameter $\xi$ and on the solution $Y$. We have $\Gamma_t=\gamma_t=e^{a\langle M\rangle_t}$, so the proof above works if $E[e^{2a\langle M\rangle_T}]<\infty$ (condition (ii)). Then it is immediate to verify that $Y_t = E\big[\xi e^{a(\langle M\rangle_T-\langle M\rangle_t)}\,\big|\,\mathcal F_t\big]$ is the unique process in $H^2_T$ verifying the equation, when $\xi$ is in $L^2_{2a}$. Moreover,
$$\|Y\|^2_{T,\beta} \le E\Big[\xi^2 e^{2a\langle M\rangle_T}\int_0^T e^{(\beta-2a)\langle M\rangle_t}\,d\langle M\rangle_t\Big] \;\begin{cases} \le(\beta-2a)^{-1}E[\xi^2e^{\beta\langle M\rangle_T}] & \text{if }\beta>2a;\\ \le(2a-\beta)^{-1}E[\xi^2e^{2a\langle M\rangle_T}] & \text{if }\beta<2a;\\ =E[\xi^2e^{2a\langle M\rangle_T}\langle M\rangle_T] & \text{if }\beta=2a.\end{cases}$$
This implies that, if $\xi$ is in $L^2_\beta$ for some $\beta>2a$, then $Y$ is in $H^2_{T,\beta}$, while, if $\xi$ is in $L^2_{2a}$, then $Y$ is in $H^2_{T,\beta}$ at least for all $\beta<2a$.
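For the reader's convenience, the formula for $Y$ in this example is simply (10) specialized to $a_t\equiv a$, $b\equiv0$, $c\equiv0$ (a direct check under these choices):
$$Y_t = E\Big[\xi\,\frac{\Gamma_T}{\Gamma_t}\,\Big|\,\mathcal F_t\Big] = E\big[\xi\,e^{a(\langle M\rangle_T-\langle M\rangle_t)}\,\big|\,\mathcal F_t\big],$$
since $\mathcal E\equiv1$ when $b\equiv0$, and hence $\Gamma_t=\gamma_t=e^{a\langle M\rangle_t}$.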
3 Non-uniform Lipschitz generators and multidimensional case
We consider now the case of a non-uniform Lipschitz generator in the multidimensional context, with $M=(M_t)_{t\in[0,T]}$ a square integrable $\mathbb R^n$-valued martingale; we will recover some of the results obtained in the previous section and explain why the same proofs also work under these more general conditions. We adopt the same formulation of the problem used in [6]: we consider the same equation (equation (14) below) and the same spaces, and we will see that our proofs guarantee the existence and uniqueness of the solution under weaker integrability conditions on the parameters.

Here $[M]$ and $\langle M\rangle$ are processes taking values in $\mathbb R^{n\times n}$. Since the filtration is quasi-left continuous, we suppose there exists a continuous increasing process $C$, the one appearing in the BSDE (14), such that
$$\langle M\rangle_t = \int_0^t m_sm_s^*\,dC_s,\qquad\text{with } m \text{ an } \mathbb R^{n\times n}\text{-valued process}.$$
The BSDE can now be written
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,dC_s - \int_t^T Z_s^*\,dM_s - N_T + N_t,\qquad t\in[0,T], \tag{14}$$
with generator $f:\Omega\times\mathbb R_+\times\mathbb R^d\times\mathbb R^{n\times d}\to\mathbb R^d$ and an $\mathbb R^d$-valued terminal condition $\xi$. By $*$ we denote transposition. We use $|\cdot|$ to denote the Euclidean norm of vectors and matrices; if $z$ is in $\mathbb R^{n\times d}$, its norm is defined by $|z|=\sqrt{\operatorname{trace}(z^*z)}$. We suppose $f$ satisfies a Lipschitz condition of the type
$$|f(t,y_1,z_1)-f(t,y_2,z_2)| \le r_t|y_1-y_2| + \theta_t|m_t^*(z_1-z_2)|\qquad dC_t\otimes dP\text{-a.s.}, \tag{15}$$
where $r_t$ and $\theta_t$ are two positive predictable processes. We denote $\alpha_t^2 = r_t+\theta_t^2>0$ and $A_t = \int_0^t\alpha_s^2\,dC_s$.

The natural spaces for the solutions of the BSDE (14) will now be (we use the hat to distinguish them from those of the previous section):

- $\hat H^2_{T,\beta}$, $\beta>0$, the subspace of the $\mathbb R^d$-valued processes $\phi$ such that $|\phi|^2_{\beta,T} = E\int_0^T e^{\beta A_t}|\phi_t|^2\,dC_t<\infty$;
- $\hat{\mathcal L}^2_\beta$, the space of $L^2$-bounded $\mathbb R^d$-valued martingales $N$ such that $|N|^2_\beta := E\int_0^T e^{\beta A_t}\,d\operatorname{trace}[N]_t<+\infty$.

We will denote by $|Z|^2_{\beta,T}$ the norm of the martingale $Z\cdot M$, i.e. $|Z|^2_{\beta,T} := E\int_0^T e^{\beta A_t}|m_t^*Z_t|^2\,dC_t$. The parameters of the equation are $\beta$-standard if

- $\xi$ is in the space $\hat L^2_\beta$ of the $\mathcal F_T$-measurable $\mathbb R^d$-valued random variables such that $|\xi|^2_\beta = E[e^{\beta A_T}|\xi|^2]<\infty$;
- $f$ satisfies (15) and is such that $|\alpha^{-2}f(\cdot,0,0)|_{\beta,T}$ is finite.

The solution $(Y,Z,N)$ will be a process in the space
$$S_\beta = \big\{(Y,Z,N)\ \text{s.t.}\ \alpha Y,\,Z\in\hat H^2_{T,\beta},\ N\in\hat{\mathcal L}^2_\beta,\ [N,M]=0\big\},\qquad |(Y,Z,N)|_\beta = |\alpha Y|_{\beta,T} + |Z|_{\beta,T} + |N|_\beta.$$
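To connect with Section 2 (a purely illustrative specialization, under the stated choices): in dimension $d=n=1$ one may take $C=\langle M\rangle$ and $m\equiv1$, and a uniformly Lipschitz generator with constant $L$ corresponds to the constant processes $r_t\equiv L$, $\theta_t\equiv L$, so that
$$\alpha_t^2 = L+L^2,\qquad A_t = (L+L^2)\,\langle M\rangle_t,$$
and the weights $e^{\beta A_t}$ of this section are the weights $e^{\beta'\langle M\rangle_t}$ of Section 2 with $\beta' = \beta(L+L^2)$.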
We underline that the only differences with respect to the setting of [6] are the following: the quadratic variation is not necessarily predictable, and the non-uniform Lipschitz condition (15) is slightly different but, as we said in the Introduction, it is necessary in order to prove the main result in the multidimensional case. With respect to the previous section, the main difference is that relaxing the Lipschitz condition requires the use of norms (for the parameters and the solution) which depend also on the "Lipschitz" processes, while before the Lipschitz constant simply influenced the choice of $\beta$. In this new context, the main result can be written in the following form.

Theorem 3.1. Let $(f,\xi)$ be $\beta$-standard parameters, with $\beta>3$. Then equation (14) has a unique solution $(Y,Z,N)$ in the space $S_\beta$. Moreover, the random variable $\sup_t\big(e^{\frac\beta2 A_t}|Y_t|\big)$ is in $L^2(\Omega,\mathcal F,P)$.

Remark. This gives a better result than the corresponding existence and uniqueness theorem in [6], since there the same statement was proved only for $\beta>90$ (see Theorem 6.1 in [6], writing explicitly the condition "for $\beta$ large enough").

For the proof, we simply have to revisit the results of the previous section:

- Proposition 2.1 does not change, and hence neither does Lemma 2.1, which is a quite direct consequence of that proposition.
- For the a priori estimates, and hence for the existence and uniqueness theorem, the previous proof works; we only need to rewrite these results in the new notation: the norms have changed, and so the choice of $\beta$ will change.

We sketch here briefly the same steps in the new context.

Proposition 3.1 (A priori estimates). Let $(f_1,\xi_1)$ and $(f_2,\xi_2)$ be two $\beta$-standard parameters, with $\beta>3$, and suppose that there exist the corresponding solutions $(Y^1,Z^1,N^1)$ and $(Y^2,Z^2,N^2)$ of (14), respectively, in the space $S_\beta$. Then, for some positive constants $K$ and $\mu$,
$$|(\delta Y,\delta Z,\delta N)|^2_\beta \le K\big(|\xi_1-\xi_2|^2_\beta + \mu^{-2}|\alpha^{-2}\delta_2 f|^2_{\beta,T}\big).$$

Proof. We now have to write the Itô formula for $e^{\beta A_s}|\delta Y_s|^2$; we find
$$d\big(e^{\beta A_s}|\delta Y_s|^2\big) = e^{\beta A_s}\Big[\beta\alpha_s^2|\delta Y_{s-}|^2\,dC_s - 2\,\delta Y_{s-}^*\big(f_1(s,Y^1_s,Z^1_s)-f_2(s,Y^2_s,Z^2_s)\big)\,dC_s + |m_s^*\delta Z_s|^2\,dC_s + d\operatorname{trace}[\delta N]_s + 2\,\delta Y_{s-}^*\delta Z_s^*\,dM_s + 2\,\delta Y_{s-}^*\,d\delta N_s\Big].$$
The last two terms are always martingales, so, if we take the expectation of the integral on $[0,T]$, we have
|2β,T
+
|δZ ∗ |2β,T
+ |δN |β ≤ E[e
βAT
Z
2
|δYT | ] + 2E[
T ∗ eβAs δYs− (f1 (s, Ys1 , Zs1 ) − f2 (s, Ys2 , Zs2 ))dCs ]
0
≤ |ξ1 −
ξ2 |2β
Z + 2E[
Z ≤ |ξ1 − ξ2 |2β + E[
T
eβAs |δYs− | (rs |δYs | + θs |m∗s δZs | + |δ2 fs |)dCs ]
0 T
eβAs (|δYs |2 (2rs + θs2 λ2 + αs2 µ2 ) + |m∗s δZs |2 λ−2 + µ−2 |αs−2 δ2 fs |2 )dCs ],
0
9
(16)
for any $\lambda,\mu>0$. So
$$\big(\beta-2-\lambda^2-\mu^2\big)|\alpha\,\delta Y|^2_{\beta,T} + \big(1-\lambda^{-2}\big)|\delta Z|^2_{\beta,T} + |\delta N|^2_\beta \le |\xi_1-\xi_2|^2_\beta + \mu^{-2}|\alpha^{-2}\delta_2 f|^2_{\beta,T}.$$
For $\beta>3$, we can choose $\lambda$, $\mu$ and $K$ such that
$$1-\lambda^{-2}>0,\qquad \beta-2-\lambda^2-\mu^2>0,\qquad K=\max\big\{(1-\lambda^{-2})^{-1},\,(\beta-2-\lambda^2-\mu^2)^{-1}\big\}. \tag{17}$$
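For instance (one possible choice; any admissible one works), for $\beta>3$ one can take
$$\lambda^2 = 1+\frac{\beta-3}3,\qquad \mu^2 = \frac{\beta-3}3,\qquad\text{so that}\quad \beta-2-\lambda^2-\mu^2 = \frac{\beta-3}3>0\quad\text{and}\quad \lambda^{-2}<1.$$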
Proof of Theorem 3.1. Following the scheme of the corresponding Theorem 2.1 of the previous section, we define a function $\psi$ in the same way. In order to prove that $\psi$ is a contraction, we just take the first inequality in (16) and recall that, in this case, it becomes
$$\beta|\alpha\,\delta Y|^2_{\beta,T} + |\delta Z|^2_{\beta,T} + |\delta N|^2_\beta \le 2E\Big[\int_0^T e^{\beta A_s}\,\delta Y_{s-}^*\big(f(s,y^1_s,z^1_s)-f(s,y^2_s,z^2_s)\big)\,dC_s\Big]$$
$$\le 2E\Big[\int_0^T e^{\beta A_s}|\delta Y_{s-}|\big(r_s|\delta y_s| + \theta_s|m_s^*\delta z_s|\big)\,dC_s\Big] \le E\Big[\int_0^T e^{\beta A_s}\big(2\mu^2\alpha_s^2|\delta Y_s|^2 + \alpha_s^2\mu^{-2}|\delta y_s|^2 + \mu^{-2}|m_s^*\delta z_s|^2\big)\,dC_s\Big],$$
that is
$$\big(\beta-2\mu^2\big)|\alpha\,\delta Y|^2_{\beta,T} + |\delta Z|^2_{\beta,T} + |\delta N|^2_\beta \le \mu^{-2}\big(|\alpha\,\delta y|^2_{\beta,T} + |\delta z|^2_{\beta,T}\big).$$
Since $\beta>3$, we can choose $\mu^2\in(1,(\beta-1)/2]$, so that $\beta-2\mu^2\ge1$ and $\mu^{-2}<1$; hence $\psi$ is a contraction and its unique fixed point in $S_\beta$ is the solution of (14).
Acknowledgments. We would like to thank Michael Mania for interesting discussions.
References

[1] J.M. Bismut, Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl. 44 (1973), 384–404.
[2] O. Bobrovnytska, M. Schweizer, Mean-variance hedging and stochastic control: beyond the Brownian setting. IEEE Trans. Automat. Control 49 (2004), no. 3, 396–408.
[3] C. Dellacherie, P.A. Meyer, Probabilities and Potential B. Theory of Martingales. North-Holland, Amsterdam, 1982.
[4] R. Chitashvili, Martingale ideology in the theory of controlled stochastic processes, 73–92, Lecture Notes in Math. 1021, Springer, New York, 1983.
[5] R. Chitashvili, M. Mania, Optimal locally absolutely continuous change of measure. Finite set of decisions. Stoch. Stochastics Rep. 21 (1987), 131–185 (Part I), 187–229 (Part II).
[6] N. El Karoui, S.J. Huang, A general result of existence and uniqueness of backward stochastic differential equations, 27–36, Backward Stochastic Differential Equations (Paris, 1995–1996), Pitman Res. Notes Math. Ser. 364, Longman, Harlow, 1997.
[7] N. El Karoui, S. Peng, M.C. Quenez, Backward stochastic differential equations in finance. Math. Finance 7 (1997), no. 1, 1–71.
[8] N. El Karoui, M.C. Quenez, Dynamic programming and pricing of contingent claims in an incomplete market. SIAM J. Control Optim. 33 (1995), no. 1, 29–66.
[9] R. Elliott, Stochastic Calculus and Applications. Springer, New York, 1982.
[10] M. Kobylanski, Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 28 (2000), no. 2, 558–602.
[11] R.Sh. Liptser, A.N. Shiryayev, Theory of Martingales. Kluwer Academic Publishers, Dordrecht, 1989.
[12] M. Mania, M. Santacroce, R. Tevzadze, A semimartingale BSDE related to the minimal entropy martingale measure. Finance Stoch. 7 (2003), no. 3, 385–402.
[13] M. Mania, M. Schweizer, Dynamic exponential utility indifference valuation. Ann. Appl. Probab. (2005), to appear.
[14] M. Mania, R. Tevzadze, A semimartingale Bellman equation and the variance-optimal martingale measure. Georgian Math. J. 7 (2000), no. 4, 765–792.
[15] M. Mania, R. Tevzadze, Backward stochastic PDE and imperfect hedging. Int. J. Theor. Appl. Finance 6 (2003), no. 7, 663–692.
[16] E. Pardoux, S.G. Peng, Adapted solution of a backward stochastic differential equation. Systems Control Lett. 14 (1990), no. 1, 55–61.
[17] D. Revuz, M. Yor, Continuous Martingales and Brownian Motion. Third edition. Grundlehren der Mathematischen Wissenschaften 293, Springer, Berlin, 1999.
[18] R. Rouge, N. El Karoui, Pricing via utility maximization and entropy. INFORMS Applied Probability Conference (Ulm, 1999). Math. Finance 10 (2000), no. 2, 259–276.