Mean Reflected Stochastic Differential Equations with Jumps


arXiv:1803.10165v1 [math.PR] 27 Mar 2018

PHILIPPE BRIAND, ABIR GHANNOUM, AND CÉLINE LABART

Abstract. This paper is devoted to the study of reflected Stochastic Differential Equations with jumps when the constraint acts not on the paths of the solution but on its law. This type of reflected equation was introduced recently by Briand, Elie and Hu [BEH18] in the context of BSDEs, when no jumps occur. In [BCdRGL16], the authors study a numerical scheme based on particle systems to approximate these reflected SDEs. In this paper, we prove existence and uniqueness of solutions to this kind of reflected SDE with jumps and we generalize the results obtained in [BCdRGL16] to this context.

1. Introduction

Reflected stochastic differential equations were introduced in the pioneering work of Skorokhod (see [Sko61]), and their numerical approximation by Euler schemes has been widely studied (see [Slo94], [Slo01], [Lep95], [Pet95], [Pet97]). Reflected stochastic differential equations driven by a Lévy process have also been studied in the literature (see [MR85], [KH92]). More recently, reflected backward stochastic differential equations with jumps have been introduced and studied (see [HO03], [EHO05], [HH06], [Ess08], [CM08], [QS14]), as well as their numerical approximation (see [DL16a] and [DL16b]). The main particularity of our work comes from the fact that the constraint acts on the law of the process $X$ rather than on its paths. The study of such equations is linked to mean field games theory, which was introduced by Lasry and Lions (see [LL07a], [LL07b], [LL06b], [LL06a]) and whose probabilistic point of view is studied in [CD18a] and [CD18b].

Stochastic differential equations with mean reflection were introduced by Briand, Elie and Hu in their backward forms in [BEH18]. In that work, they show that mean reflected stochastic processes exist and are uniquely defined by the associated system of equations of the following form:
$$
\begin{cases}
X_t = X_0 + \displaystyle\int_0^t b(X_s)\,ds + \int_0^t \sigma(X_s)\,dB_s + K_t, & t \ge 0,\\[1ex]
\mathbb{E}[h(X_t)] \ge 0, \quad \displaystyle\int_0^t \mathbb{E}[h(X_s)]\,dK_s = 0, & t \ge 0.
\end{cases}
\tag{1.1}
$$

Date: March 23, 2018.

Due to the fact that the reflection process $K$ depends on the law of the position, the authors of [BCdRGL16], inspired by mean field games, study the convergence of a numerical scheme based on particle systems to compute numerically solutions to (1.1). In this paper, we extend these results to the case of jumps, i.e. we study existence and uniqueness of solutions to the following mean reflected stochastic differential equation (MR-SDE in the sequel):


$$
\begin{cases}
X_t = X_0 + \displaystyle\int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\!\!\int_E F(X_{s^-},z)\,\tilde N(ds,dz) + K_t, & t \ge 0,\\[1ex]
\mathbb{E}[h(X_t)] \ge 0, \quad \displaystyle\int_0^t \mathbb{E}[h(X_s)]\,dK_s = 0, & t \ge 0,
\end{cases}
\tag{1.2}
$$
where $E = \mathbb{R}^*$, $\tilde N$ is a compensated Poisson measure, $\tilde N(ds,dz) = N(ds,dz) - \lambda(dz)\,ds$, and $B$ is a Brownian motion independent of $N$. We also propose a numerical scheme based on a particle system to compute numerically solutions to (1.2), and we study the rate of convergence of this scheme.

Our main motivation for studying (1.2) comes from financial problems subject to risk measure constraints. Given any position $X$, its risk measure $\rho(X)$ can be seen as the amount of own funds needed by the investor to hold the position. For example, we can consider the following risk measure:
$$\rho(X) = \inf\{m : \mathbb{E}[u(m + X)] \ge p\},$$
where $u$ is a utility function (concave and increasing) and $p$ is a given threshold (we refer the reader to [ADEH99] and to [FS02] for more details on risk measures). Suppose that we are given a portfolio $X$ of assets whose dynamics, when there is no constraint, follow the jump diffusion model
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t + \int_E F(X_{t^-},z)\,\tilde N(dt,dz), \quad t \ge 0.$$
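To fix ideas, the unconstrained jump-diffusion dynamics above can be simulated with a basic Euler scheme. The sketch below is only illustrative: the coefficients `b`, `sigma` and `F`, the jump intensity `lam` and the Gaussian jump marks are our assumptions, not objects specified by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative coefficients (assumptions, not taken from the paper)
b = lambda x: -0.5 * x        # drift
sigma = lambda x: 0.4         # diffusion coefficient (constant here)
F = lambda x, z: 0.2 * z      # jump size as a function of the state and the mark z
lam = 2.0                     # finite total jump intensity lambda(E)

def euler_jump_path(x0, T, n):
    """One Euler path of dX = b dt + sigma dB + int_E F dN~ on an n-step grid."""
    dt = T / n
    x = x0
    for _ in range(n):
        k = rng.poisson(lam * dt)                        # number of jumps on the step
        jumps = sum(F(x, z) for z in rng.standard_normal(k))
        # With F(x, z) = 0.2 z and centered marks, the compensator
        # lam * dt * E[F(x, Z)] of the jump term vanishes.
        x = x + b(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal() + jumps
    return x

x_T = euler_jump_path(x0=1.0, T=1.0, n=200)
```

With this choice of coefficients the mean solves $m'(t) = -0.5\,m(t)$, which gives a quick sanity check of the scheme.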

Given a risk measure $\rho$, one can require that $X_t$ remain an acceptable position at each time $t$. The constraint rewrites as $\mathbb{E}[h(X_t)] \ge 0$ for $t \ge 0$, where $h = u - p$. In order to satisfy this constraint, the agent has to add cash to the portfolio over time, and the dynamics of the portfolio's wealth become
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t + \int_E F(X_{t^-},z)\,\tilde N(dt,dz) + dK_t, \quad t \ge 0,$$

where $K_t$ is the amount of cash added to the portfolio up to time $t$ to balance the "risk" associated to $X_t$. Of course, the agent wants to cover the risk in a minimal way, adding cash only when needed: this leads to the Skorokhod condition $\mathbb{E}[h(X_t)]\,dK_t = 0$. Putting all the conditions together, we end up with a dynamics of the form (1.2) for the portfolio.

The paper is organized as follows. In Section 2, we show that, under Lipschitz assumptions on $b$, $\sigma$ and $F$ and a bi-Lipschitz assumption on $h$, the system admits a unique strong solution, i.e. there exists a unique pair of processes $(X,K)$ satisfying system (1.2) almost surely, the process $K$ being increasing and deterministic. Then, we show that, by adding some regularity on the function $h$, the Stieltjes measure $dK$ is absolutely continuous with respect to the Lebesgue measure, and we obtain the explicit expression of its density. In Section 3, we show that the system (1.2) can be seen as the limit of an interacting particle system with oblique reflection of mean field type. This result allows us to define in Section 4 an algorithm based on this interacting particle system, together with a classical Euler scheme, which gives a strong approximation of the solution of (1.2). When $h$ is bi-Lipschitz, this leads to an approximation error in the $L^2$-sense proportional to $n^{-1} + N^{-1/2}$, where $n$ is the number of points of the discretization grid and $N$ is the number of particles. When $h$ is smooth, we get an approximation error proportional to $n^{-1} + N^{-1}$. In passing, we improve the speed of convergence obtained in [BCdRGL16]. Finally, we illustrate these results numerically in Section 5.

2. Existence, uniqueness and properties of the solution

Throughout this paper, we consider the following set of assumptions.


Assumption (A.1). (i) Lipschitz assumption: there exists a constant $C_p > 0$ such that for all $x, x' \in \mathbb{R}$ and $p > 0$, we have
$$|b(x) - b(x')|^p + |\sigma(x) - \sigma(x')|^p + \int_E |F(x,z) - F(x',z)|^p\,\lambda(dz) \le C_p\,|x - x'|^p.$$

(ii) The random variable $X_0$ is square integrable and independent of $(B_t)$ and $(N_t)$.

Assumption (A.2). (i) The function $h : \mathbb{R} \to \mathbb{R}$ is increasing and there exist $0 < m \le M$ such that
$$\forall x \in \mathbb{R},\ \forall y \in \mathbb{R}, \quad m|x-y| \le |h(x) - h(y)| \le M|x-y|.$$
(ii) The initial condition $X_0$ satisfies $\mathbb{E}[h(X_0)] \ge 0$.

Assumption (A.3). There exists $p > 4$ such that $X_0$ belongs to $L^p$: $\mathbb{E}[|X_0|^p] < \infty$.

Assumption (A.4). The mapping $h$ is a twice continuously differentiable function with bounded derivatives.

2.1. Preliminary results. Define the function
$$H : \mathbb{R} \times \mathcal{P}(\mathbb{R}) \ni (x,\nu) \mapsto \int h(x+z)\,\nu(dz), \tag{2.1}$$
the inverse function in space of $H$ evaluated at $0$, namely
$$\bar G_0 : \mathcal{P}(\mathbb{R}) \ni \nu \mapsto \inf\{x \in \mathbb{R} : H(x,\nu) \ge 0\}, \tag{2.2}$$
as well as $G_0$, the positive part of $\bar G_0$:
$$G_0 : \mathcal{P}(\mathbb{R}) \ni \nu \mapsto \inf\{x \ge 0 : H(x,\nu) \ge 0\}. \tag{2.3}$$
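Since $h$ is increasing, so is $x \mapsto H(x,\nu)$, and $G_0(\nu)$ can therefore be computed by bisection once a bracket of the root is found. A minimal sketch for an empirical measure $\nu$, where the choice of $h$ and the sample are illustrative assumptions:

```python
import numpy as np

def G0(h, sample, iters=80):
    """G0(nu) = inf{x >= 0 : H(x, nu) >= 0} for the empirical measure of `sample`,
    where H(x, nu) is the sample mean of h(x + z). Assumes h is increasing."""
    H = lambda x: np.mean(h(x + sample))
    if H(0.0) >= 0:              # constraint already satisfied: G0(nu) = 0
        return 0.0
    lo, hi = 0.0, 1.0
    while H(hi) < 0:             # bracket the root of the increasing map H
        hi *= 2.0
    for _ in range(iters):       # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) < 0 else (lo, mid)
    return hi

h = lambda x: x - 1.0                # illustrative bi-Lipschitz h (m = M = 1)
sample = np.array([0.2, 0.4, 0.6])   # atoms of the empirical measure nu
x_star = G0(h, sample)               # here H(x, nu) = x - 0.6, so G0(nu) = 0.6
```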

We start by studying some properties of $H$ and $G_0$.

Lemma 1. Under (A.2), we have:
(i) For all $\nu$ in $\mathcal{P}(\mathbb{R})$, the mapping $H(\cdot,\nu) : \mathbb{R} \ni x \mapsto H(x,\nu)$ is a bi-Lipschitz function, namely:
$$\forall x, y \in \mathbb{R}, \quad m|x-y| \le |H(x,\nu) - H(y,\nu)| \le M|x-y|. \tag{2.4}$$
(ii) For all $x$ in $\mathbb{R}$, the mapping $H(x,\cdot) : \mathcal{P}(\mathbb{R}) \ni \nu \mapsto H(x,\nu)$ satisfies the following Lipschitz estimate:
$$\forall \nu, \nu' \in \mathcal{P}(\mathbb{R}), \quad |H(x,\nu) - H(x,\nu')| \le \left| \int h(x+\cdot)\,(d\nu - d\nu') \right|. \tag{2.5}$$

Proof. The proof is straightforward from the definition of $H$ (see (2.1)). $\square$

Note that, thanks to the Monge–Kantorovich theorem, assertion (2.5) implies that for all $x$ in $\mathbb{R}$, the function $H(x,\cdot)$ is Lipschitz continuous w.r.t. the Wasserstein-1 distance. Indeed, for two probability measures $\nu$ and $\nu'$, the Wasserstein-1 distance between $\nu$ and $\nu'$ is defined by:
$$W_1(\nu,\nu') = \sup_{\varphi\ 1\text{-Lipschitz}} \left| \int \varphi\,(d\nu - d\nu') \right| = \inf_{X \sim \nu,\ Y \sim \nu'} \mathbb{E}[|X - Y|].$$
Therefore
$$\forall \nu, \nu' \in \mathcal{P}(\mathbb{R}), \quad |H(x,\nu) - H(x,\nu')| \le M\,W_1(\nu,\nu'). \tag{2.6}$$
Then, we have the following result about the regularity of $G_0$:


Lemma 2. Under (A.2), the mapping $G_0 : \mathcal{P}(\mathbb{R}) \ni \nu \mapsto G_0(\nu)$ is Lipschitz continuous in the following sense:
$$|G_0(\nu) - G_0(\nu')| \le \frac{1}{m} \left| \int h(\bar G_0(\nu) + \cdot)\,(d\nu - d\nu') \right|, \tag{2.7}$$
where $\bar G_0(\nu)$ is the inverse of $H(\cdot,\nu)$ at point $0$. In particular,
$$|G_0(\nu) - G_0(\nu')| \le \frac{M}{m}\,W_1(\nu,\nu'). \tag{2.8}$$

Proof. The proof is given in [BCdRGL16, Lemma 2.5]. $\square$
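Inequality (2.8) can be checked numerically on empirical measures: in dimension one, the $W_1$ distance between two equally weighted samples of the same size is attained by matching order statistics. In the sketch below, the linear choice $h(x) = 2x$ (so $m = M = 2$, and $G_0(\nu) = (-\int z\,\nu(dz))^+$ in closed form) and the Gaussian samples are illustrative assumptions:

```python
import numpy as np

def W1_empirical(a, b):
    """W1 between two equally weighted 1-d samples of equal size:
    the optimal coupling pairs the order statistics."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# For h(x) = 2x we have H(x, nu) = 2(x + mean(nu)),
# hence G0(nu) = max(0, -mean(nu)) exactly.
G0 = lambda sample: max(0.0, -np.mean(sample))

rng = np.random.default_rng(1)
nu = rng.normal(-1.0, 1.0, 500)        # sample from nu
nu_prime = rng.normal(-1.2, 0.8, 500)  # sample from nu'
lhs = abs(G0(nu) - G0(nu_prime))
rhs = (2.0 / 2.0) * W1_empirical(nu, nu_prime)   # (M/m) * W1(nu, nu')
```

Here `lhs <= rhs` holds for any pair of samples, which is exactly the content of (2.8).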

2.2. Existence and uniqueness of the solution of (1.2). We emphasize that the existence and uniqueness results hold only under (A.1), which is the standard assumption for SDEs, and (A.2), which is the assumption used in [BEH18]. The convergence of particle systems requires only an additional integrability assumption on the initial condition, namely (A.3). We sometimes add the smoothness assumption (A.4) on $h$ in order to improve some of the results. We first state the existence and uniqueness result in the case of SDEs.

Definition 1. A couple of processes $(X,K)$ is said to be a flat deterministic solution to (1.2) if $(X,K)$ satisfies (1.2) with $K$ being a non-decreasing deterministic function with $K_0 = 0$.

Given this definition, we have the following result.

Theorem 1. Under Assumptions (A.1) and (A.2), the mean reflected SDE (1.2) has a unique deterministic flat solution $(X,K)$. Moreover,
$$\forall t \ge 0, \quad K_t = \sup_{s \le t} \inf\{x \ge 0 : \mathbb{E}[h(x + U_s)] \ge 0\} = \sup_{s \le t} G_0(\mu_s), \tag{2.9}$$
where $(U_t)_{0 \le t \le T}$ is the process defined by
$$U_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\!\!\int_E F(X_{s^-},z)\,\tilde N(ds,dz), \tag{2.10}$$
and $(\mu_t)_{0 \le t \le T}$ is the family of marginal laws of $(U_t)_{0 \le t \le T}$.

Proof. The proof for the case of continuous backward SDEs is given in [BEH18]. For the ease of the reader, we sketch the proof for the forward case with jumps.

Let $\hat X$ be a given process such that, for all $t > 0$, $\mathbb{E}\big[\sup_{s \le t} |\hat X_s|^2\big] < \infty$. We set
$$\hat U_t = X_0 + \int_0^t b(\hat X_{s^-})\,ds + \int_0^t \sigma(\hat X_{s^-})\,dB_s + \int_0^t\!\!\int_E F(\hat X_{s^-},z)\,\tilde N(ds,dz),$$
and define the function $K$ by setting
$$K_t = \sup_{s \le t} \inf\{x \ge 0 : \mathbb{E}[h(x + \hat U_s)] \ge 0\} = \sup_{s \le t} G_0(\hat\mu_s). \tag{2.11}$$
The function $K$ being given, let us define the process $X$ by the formula
$$X_t = X_0 + \int_0^t b(\hat X_{s^-})\,ds + \int_0^t \sigma(\hat X_{s^-})\,dB_s + \int_0^t\!\!\int_E F(\hat X_{s^-},z)\,\tilde N(ds,dz) + K_t.$$
Let us check that $(X,K)$ satisfies the constraint in (1.2). By definition of $K$, $\mathbb{E}[h(X_t)] \ge 0$, and we have, $dK$-almost everywhere,
$$K_t = \sup_{s \le t} \inf\{x \ge 0 : \mathbb{E}[h(x + \hat U_s)] \ge 0\} > 0,$$
so that $\mathbb{E}[h(X_t)] = \mathbb{E}[h(\hat U_t + K_t)] = 0$ $dK$-a.e., since $h$ is continuous and nondecreasing.

Next, we consider the set $\mathcal{C}^2 = \{X \text{ càdlàg}, \ \mathbb{E}(\sup_{t \le T} |X_t|^2) < \infty\}$ and the map $\Xi : \mathcal{C}^2 \longrightarrow \mathcal{C}^2$


which associates to $\hat X$ the process $X$. Let us show that $\Xi$ is a contraction. Let $\hat X, \hat X' \in \mathcal{C}^2$ be given, and define $K$ and $K'$ as above, using the same Brownian motion and the same Poisson measure. From Assumption (A.1) and the Cauchy–Schwarz and Doob inequalities, we have
$$
\begin{aligned}
\mathbb{E}\Big[\sup_{t \le T} |X_t - X'_t|^2\Big]
&\le 4\,\mathbb{E}\Big[\sup_{t \le T}\Big(\int_0^t \big(b(\hat X_{s^-}) - b(\hat X'_{s^-})\big)\,ds\Big)^2\Big]
+ 4\,\mathbb{E}\Big[\sup_{t \le T}\Big(\int_0^t \big(\sigma(\hat X_{s^-}) - \sigma(\hat X'_{s^-})\big)\,dB_s\Big)^2\Big]\\
&\quad + 4\,\mathbb{E}\Big[\sup_{t \le T}\Big(\int_0^t\!\!\int_E \big(F(\hat X_{s^-},z) - F(\hat X'_{s^-},z)\big)\,\tilde N(ds,dz)\Big)^2\Big]
+ 4\sup_{t \le T} |K_t - K'_t|^2\\
&\le C\Big\{ T\,\mathbb{E}\int_0^T \big|b(\hat X_{s^-}) - b(\hat X'_{s^-})\big|^2\,ds
+ \mathbb{E}\int_0^T \big|\sigma(\hat X_{s^-}) - \sigma(\hat X'_{s^-})\big|^2\,ds\\
&\qquad + \mathbb{E}\int_0^T\!\!\int_E \big|F(\hat X_{s^-},z) - F(\hat X'_{s^-},z)\big|^2\,\lambda(dz)\,ds
+ \sup_{t \le T} |K_t - K'_t|^2 \Big\}\\
&\le C\big(T^2 C_1 + T C_2\big)\,\mathbb{E}\Big[\sup_{t \le T} |\hat X_t - \hat X'_t|^2\Big] + C \sup_{t \le T} |K_t - K'_t|^2.
\end{aligned}
$$
From the representation (2.11) of the process $K$ and Lemma 2, we have
$$\sup_{t \le T} |K_t - K'_t|^2 \le \frac{M^2}{m^2}\,\mathbb{E}\Big[\sup_{t \le T} |\hat U_t - \hat U'_t|^2\Big] \le C\big(T^2 C_1 + T C_2\big)\,\mathbb{E}\Big[\sup_{t \le T} |\hat X_t - \hat X'_t|^2\Big].$$
Therefore,
$$\mathbb{E}\Big[\sup_{t \le T} |X_t - X'_t|^2\Big] \le C(1+T)\,T\,\mathbb{E}\Big[\sup_{t \le T} |\hat X_t - \hat X'_t|^2\Big].$$
Hence, there exists a positive $T^*$, depending on $b$, $\sigma$, $F$ and $h$ only, such that for all $T < T^*$, the map $\Xi$ is a contraction. We first deduce the existence and uniqueness of the solution on $[0,T]$, and then on $\mathbb{R}_+$ by iterating the construction. $\square$

2.3. Regularity results on $K$, $X$ and $U$.


Remark 1. Note that from this construction, we deduce that for all $0 \le s < t$:
$$K_t - K_s = \sup_{s \le r \le t} \inf\Big\{x \ge 0 : \mathbb{E}\Big[h\Big(x + X_s + \int_s^r b(X_{u^-})\,du + \int_s^r \sigma(X_{u^-})\,dB_u + \int_s^r\!\!\int_E F(X_{u^-},z)\,\tilde N(du,dz)\Big)\Big] \ge 0\Big\}.$$

Proof. From the representation (2.9) of the process $K$, we have, writing (with a slight abuse of notation) $G_0(U_r)$ for $G_0$ applied to the law of $U_r$,
$$
\begin{aligned}
K_t &= \sup_{r \le t} \inf\{x \ge 0 : \mathbb{E}[h(x + U_r)] \ge 0\} = \sup_{r \le t} G_0(U_r)\\
&= \max\Big(\sup_{r \le s} G_0(U_r),\ \sup_{s \le r \le t} G_0(U_r)\Big) = \max\Big(K_s,\ \sup_{s \le r \le t} G_0(U_r)\Big)\\
&= \max\Big(K_s,\ \sup_{s \le r \le t} G_0(X_s - K_s + U_r - U_s)\Big)
= \max\Big(K_s,\ \sup_{s \le r \le t} \big(\bar G_0(X_s - K_s + U_r - U_s)\big)^+\Big).
\end{aligned}
$$
By the definition of $\bar G_0$, we observe that for all $y \in \mathbb{R}$, $\bar G_0(X + y) = \bar G_0(X) - y$, so we get
$$
K_t = \max\Big(K_s,\ \sup_{s \le r \le t} \big(K_s + \bar G_0(X_s + U_r - U_s)\big)^+\Big)
= K_s + \max\Big(0,\ \sup_{s \le r \le t} \Big(\big(K_s + \bar G_0(X_s + U_r - U_s)\big)^+ - K_s\Big)\Big).
$$
Note that $\sup_r (f(r)^+) = (\sup_r f(r))^+ = \max(0, \sup_r f(r))$ for every function $f$, and, since $K_s \ge 0$,
$$
K_t = K_s + \sup_{s \le r \le t} \Big(\big(K_s + \bar G_0(X_s + U_r - U_s)\big)^+ - K_s\Big)^+
= K_s + \sup_{s \le r \le t} \big(\bar G_0(X_s + U_r - U_s)\big)^+
= K_s + \sup_{s \le r \le t} G_0(X_s + U_r - U_s),
$$
and so
$$K_t - K_s = \sup_{s \le r \le t} G_0(X_s + U_r - U_s). \qquad \square$$

In the following, we make intensive use of this representation formula for the process $K$.

Let $(\mathcal{F}_s)_{s \ge 0}$ be a filtration on $(\Omega, \mathcal{F}, \mathbb{P})$ such that $(X_s)_{s \ge 0}$ is an $(\mathcal{F}_s)_{s \ge 0}$-adapted process.

Proposition 1. Suppose that Assumptions (A.1) and (A.2) hold. Then, for all $p \ge 2$, there exists a positive constant $K_p$, depending on $T$, $b$, $\sigma$, $F$ and $h$, such that
(i) $\mathbb{E}\big[\sup_{t \le T} |X_t|^p\big] \le K_p\big(1 + \mathbb{E}[|X_0|^p]\big)$;
(ii) $\forall\, 0 \le s \le t \le T$, $\mathbb{E}\big[\sup_{s \le u \le t} |X_u|^p \,\big|\, \mathcal{F}_s\big] \le C\big(1 + |X_s|^p\big)$.


Remark 2. Under the same conditions, we conclude that
$$\mathbb{E}\Big[\sup_{t \le T} |U_t|^p\Big] \le K_p\big(1 + \mathbb{E}[|X_0|^p]\big).$$

Proof of (i). We have
$$\mathbb{E}\Big[\sup_{t \le T} |X_t|^p\Big] \le 5^{p-1}\Big\{ \mathbb{E}|X_0|^p + \mathbb{E}\sup_{t \le T}\Big(\int_0^t |b(X_{s^-})|\,ds\Big)^p + \mathbb{E}\sup_{t \le T}\Big|\int_0^t \sigma(X_{s^-})\,dB_s\Big|^p + \mathbb{E}\sup_{t \le T}\Big|\int_0^t\!\!\int_E F(X_{s^-},z)\,\tilde N(ds,dz)\Big|^p + K_T^p \Big\}.$$
Let us first consider the last term, $K_T = \sup_{t \le T} G_0(\mu_t)$. From the Lipschitz property of $G_0$ (Lemma 2) and the definition of the Wasserstein metric, we have
$$\forall t \ge 0, \quad |G_0(\mu_t)| \le \frac{M}{m}\,\mathbb{E}[|U_t - U_0|],$$
since $G_0(\mu_0) = 0$ (as $\mathbb{E}[h(X_0)] \ge 0$), where $U$ is defined by (2.10). Therefore
$$|K_T|^p = \Big|\sup_{t \le T} G_0(\mu_t)\Big|^p \le 3^{p-1}\Big(\frac{M}{m}\Big)^p \Big\{ \mathbb{E}\sup_{t \le T}\Big(\int_0^t |b(X_{s^-})|\,ds\Big)^p + \mathbb{E}\sup_{t \le T}\Big|\int_0^t \sigma(X_{s^-})\,dB_s\Big|^p + \mathbb{E}\sup_{t \le T}\Big|\int_0^t\!\!\int_E F(X_{s^-},z)\,\tilde N(ds,dz)\Big|^p \Big\},$$
and so
$$\mathbb{E}\Big[\sup_{t \le T} |X_t|^p\Big] \le C(p,M,m)\,\mathbb{E}\Big[|X_0|^p + \sup_{t \le T}\Big(\int_0^t |b(X_{s^-})|\,ds\Big)^p + \sup_{t \le T}\Big|\int_0^t \sigma(X_{s^-})\,dB_s\Big|^p + \sup_{t \le T}\Big|\int_0^t\!\!\int_E F(X_{s^-},z)\,\tilde N(ds,dz)\Big|^p\Big].$$
Hence, using Assumption (A.1) and the Cauchy–Schwarz, Doob and Burkholder–Davis–Gundy inequalities, we get
$$
\mathbb{E}\Big[\sup_{t \le T} |X_t|^p\Big] \le C\Big\{ \mathbb{E}|X_0|^p + T^{p-1}\,\mathbb{E}\int_0^T (1+|X_{s^-}|)^p\,ds + C_1\,\mathbb{E}\Big(\int_0^T (1+|X_{s^-}|)^2\,ds\Big)^{p/2} + C_2\,\mathbb{E}\int_0^T (1+|X_{s^-}|)^p\,ds \Big\}
\le C_1\big(1 + \mathbb{E}|X_0|^p\big) + C_2 \int_0^T \mathbb{E}\Big[\sup_{t \le r} |X_t|^p\Big]\,dr,
$$
and from Gronwall's lemma, we conclude that for all $p \ge 2$ there exists a positive constant $K_p$, depending on $T$, $b$, $\sigma$, $F$ and $h$, such that
$$\mathbb{E}\Big[\sup_{t \le T} |X_t|^p\Big] \le K_p\big(1 + \mathbb{E}[|X_0|^p]\big).$$

Proof of (ii). We first write
$$X_u = U_u + K_u = X_s + (U_u - U_s) + (K_u - K_s) = X_s + \int_s^u b(X_{r^-})\,dr + \int_s^u \sigma(X_{r^-})\,dB_r + \int_s^u\!\!\int_E F(X_{r^-},z)\,\tilde N(dr,dz) + (K_u - K_s).$$


Set $\mathbb{E}_s[\,\cdot\,] = \mathbb{E}[\,\cdot\,|\mathcal{F}_s]$. Then we get
$$
\begin{aligned}
\mathbb{E}_s\Big[\sup_{s \le u \le t} |X_u|^p\Big]
&\le 5^{p-1}\Big\{ |X_s|^p + \mathbb{E}_s\sup_{s \le u \le t}\Big(\int_s^u |b(X_{r^-})|\,dr\Big)^p + \mathbb{E}_s\sup_{s \le u \le t}\Big|\int_s^u \sigma(X_{r^-})\,dB_r\Big|^p\\
&\qquad + \mathbb{E}_s\sup_{s \le u \le t}\Big|\int_s^u\!\!\int_E F(X_{r^-},z)\,\tilde N(dr,dz)\Big|^p + (K_t - K_s)^p \Big\}\\
&\le C\Big\{ |X_s|^p + T^{p-1}\,\mathbb{E}_s\int_s^t |b(X_{r^-})|^p\,dr + \mathbb{E}_s\Big(\int_s^t |\sigma(X_{r^-})|^2\,dr\Big)^{p/2} + \mathbb{E}_s\int_s^t\!\!\int_E |F(X_{r^-},z)|^p\,\lambda(dz)\,dr + 2^p K_T^p \Big\}\\
&\le C(T)\Big\{ |X_s|^p + C_1 \int_s^t \mathbb{E}_s\big[1 + |X_{r^-}|^p\big]\,dr + 2^p K_T^p \Big\}
\le C_1\big(1 + |X_s|^p\big) + C_2 \int_s^t \mathbb{E}_s\Big[\sup_{s \le u \le r} |X_u|^p\Big]\,dr.
\end{aligned}
$$
Finally, from Gronwall's lemma, we deduce that for all $0 \le s \le t \le T$ there exists a constant $C$, depending on $p$, $T$, $b$, $\sigma$, $F$ and $h$, such that
$$\mathbb{E}\Big[\sup_{s \le u \le t} |X_u|^p \,\Big|\, \mathcal{F}_s\Big] \le C\big(1 + |X_s|^p\big). \qquad \square$$

Proposition 2. Let $p \ge 2$ and let Assumptions (A.1), (A.2) and (A.3) hold. There exists a constant $C$ depending on $p$, $T$, $b$, $\sigma$, $F$ and $h$ such that
(i) $\forall\, 0 \le s < t \le T$, $|K_t - K_s| \le C|t-s|^{1/2}$;
(ii) $\forall\, 0 \le s \le t \le T$, $\mathbb{E}\big[|U_t - U_s|^p\big] \le C|t-s|$;
(iii) $\forall\, 0 \le r < s < t \le T$, $\mathbb{E}\big[|U_s - U_r|^p\,|U_t - U_s|^p\big] \le C|t-r|^2$.

Remark 3. Under the same conditions, we conclude that
$$\forall\, 0 \le s \le t \le T, \quad \mathbb{E}\big[|X_t - X_s|^p\big] \le C|t-s|.$$

Proof of (i). Let us recall that, for any random variable $X$,
$$\bar G_0(X) = \inf\{x \in \mathbb{R} : \mathbb{E}[h(x+X)] \ge 0\}, \qquad G_0(X) = \big(\bar G_0(X)\big)^+ = \inf\{x \ge 0 : \mathbb{E}[h(x+X)] \ge 0\}.$$
From Remark 1, we have
$$K_t - K_s = \sup_{s \le r \le t} G_0(X_s + U_r - U_s). \tag{2.12}$$
Hence, from this representation of $K_t - K_s$, we will deduce the $\tfrac12$-Hölder property of the function $t \mapsto K_t$. Indeed, since by definition $G_0(X_s) = 0$, if $s < t$, using Lemma 2,
$$|K_t - K_s| = \sup_{s \le r \le t} G_0(X_s + U_r - U_s) = \sup_{s \le r \le t} \big[G_0(X_s + U_r - U_s) - G_0(X_s)\big] \le \frac{M}{m} \sup_{s \le r \le t} \mathbb{E}[|U_r - U_s|],$$


and so
$$
\begin{aligned}
|K_t - K_s| &\le C\Big\{ \mathbb{E}\sup_{s \le r \le t}\Big|\int_s^r b(X_{u^-})\,du\Big| + \mathbb{E}\Big[\sup_{s \le r \le t}\Big|\int_s^r \sigma(X_{u^-})\,dB_u\Big|^2\Big]^{1/2} + \mathbb{E}\Big[\sup_{s \le r \le t}\Big|\int_s^r\!\!\int_E F(X_{u^-},z)\,\tilde N(du,dz)\Big|^2\Big]^{1/2} \Big\}\\
&\le C\Big\{ \mathbb{E}\int_s^t |b(X_{u^-})|\,du + \Big(\mathbb{E}\int_s^t |\sigma(X_{u^-})|^2\,du\Big)^{1/2} + \Big(\mathbb{E}\int_s^t\!\!\int_E |F(X_{u^-},z)|^2\,\lambda(dz)\,du\Big)^{1/2} \Big\}\\
&\le C\Big\{ |t-s|\,\mathbb{E}\Big[1 + \sup_{u \le T}|X_u|\Big] + |t-s|^{1/2}\,\mathbb{E}\Big[1 + \sup_{u \le T}|X_u|^2\Big]^{1/2} \Big\}.
\end{aligned}
$$
Therefore, if $X_0 \in L^p$ for some $p \ge 2$, it follows from Proposition 1 that $|K_t - K_s| \le C|t-s|^{1/2}$. $\square$

Proof of (ii). We have
$$
\begin{aligned}
\mathbb{E}\big[|U_t - U_s|^p\big] &\le 4^{p-1}\,\mathbb{E}\Big[\Big(\int_s^t |b(X_{r^-})|\,dr\Big)^p + \Big|\int_s^t \sigma(X_{r^-})\,dB_r\Big|^p + \Big|\int_s^t\!\!\int_E F(X_{r^-},z)\,\tilde N(dr,dz)\Big|^p\Big]\\
&\le C\Big\{ |t-s|^{p-1}\,\mathbb{E}\int_s^t (1 + |X_{u^-}|)^p\,du + C_1\,\mathbb{E}\Big(\int_s^t (1 + |X_{u^-}|)^2\,du\Big)^{p/2} + C_2\,\mathbb{E}\int_s^t (1 + |X_{u^-}|)^p\,du \Big\}\\
&\le C_1\,\mathbb{E}\Big[1 + \sup_{t \le T}|X_t|^p\Big]\,|t-s|^p + C_2\,\mathbb{E}\Big[1 + \sup_{t \le T}|X_t|^2\Big]^{p/2}\,|t-s|^{p/2} + C_3\,\mathbb{E}\Big[1 + \sup_{t \le T}|X_t|^p\Big]\,|t-s|.
\end{aligned}
$$
Finally, if $X_0 \in L^p$ for some $p \ge 2$, we conclude that there exists a constant $C$, depending on $p$, $T$, $b$, $\sigma$, $F$ and $h$, such that
$$\forall\, 0 \le s \le t \le T, \quad \mathbb{E}\big[|U_t - U_s|^p\big] \le C|t-s|. \qquad \square$$


Proof of (iii). Let $0 \le r < s < t \le T$. We have
$$\mathbb{E}\big[|U_s - U_r|^p\,|U_t - U_s|^p\big] = \mathbb{E}\Big[|U_s - U_r|^p\,\mathbb{E}_s\big[|U_t - U_s|^p\big]\Big] \le C\,\mathbb{E}\Big[|U_s - U_r|^p\Big\{ \mathbb{E}_s\Big(\int_s^t |b(X_{u^-})|\,du\Big)^p + \mathbb{E}_s\Big|\int_s^t \sigma(X_{u^-})\,dB_u\Big|^p + \mathbb{E}_s\Big|\int_s^t\!\!\int_E F(X_{u^-},z)\,\tilde N(du,dz)\Big|^p \Big\}\Big].$$
Then, from the Burkholder–Davis–Gundy inequality, we get
$$
\begin{aligned}
\mathbb{E}\big[|U_s - U_r|^p\,|U_t - U_s|^p\big]
&\le C\,\mathbb{E}\Big[|U_s - U_r|^p\Big\{ \mathbb{E}_s\Big(\int_s^t |b(X_{u^-})|\,du\Big)^p + \mathbb{E}_s\Big(\int_s^t |\sigma(X_{u^-})|^2\,du\Big)^{p/2} + \mathbb{E}_s\int_s^t\!\!\int_E |F(X_{u^-},z)|^p\,\lambda(dz)\,du \Big\}\Big]\\
&\le C\,\mathbb{E}\Big[|U_s - U_r|^p\Big\{ |t-s|^p\Big(1 + \mathbb{E}_s\sup_{s \le u \le t}|X_u|^p\Big) + |t-s|^{p/2}\Big(1 + \mathbb{E}_s\sup_{s \le u \le t}|X_u|^2\Big)^{p/2} + |t-s|\Big(1 + \mathbb{E}_s\sup_{s \le u \le t}|X_u|^p\Big) \Big\}\Big]\\
&\le C\,\mathbb{E}\Big[|U_s - U_r|^p\,|t-s|\Big(1 + \mathbb{E}_s\sup_{s \le u \le t}|X_u|^p\Big)\Big].
\end{aligned}
$$
Thus, from (i) and Proposition 1, we obtain
$$
\begin{aligned}
\mathbb{E}\big[|U_s - U_r|^p\,|U_t - U_s|^p\big]
&\le C_1|t-s|\,\mathbb{E}\big[|U_s - U_r|^p\big] + C_2|t-s|\,\mathbb{E}\big[|U_s - U_r|^p\,|X_s|^p\big]\\
&\le C_1|t-s||s-r| + C_2|t-s|\,\mathbb{E}\big[|U_s - U_r|^p\,\big(|X_s - X_r| + |X_r|\big)^p\big]\\
&\le C_1|t-r|^2 + C_2|t-s|\,2^{p-1}\,\mathbb{E}\big[|U_s - U_r|^p\,\big(|U_s - U_r| + |K_s - K_r|\big)^p\big] + C_3|t-s|\,\mathbb{E}\big[|U_s - U_r|^p\,|X_r|^p\big]\\
&\le C_1|t-r|^2 + C_2|t-s|\,\mathbb{E}\big[|U_s - U_r|^{2p}\big] + C_3|t-s||s-r|^{p/2}\,\mathbb{E}\big[|U_s - U_r|^p\big] + C_4|t-s|\,\mathbb{E}\big[|U_s - U_r|^p\,|X_r|^p\big]\\
&\le C_1|t-r|^2 + C_4|t-s|\,\mathbb{E}\Big[|X_r|^p\,\mathbb{E}_r\big[|U_s - U_r|^p\big]\Big].
\end{aligned}
$$
Following the proof of (ii), we can also get
$$\mathbb{E}_r\big[|U_s - U_r|^p\big] \le C|s-r|\Big(1 + \mathbb{E}_r\sup_{r \le u \le s}|X_u|^p\Big).$$


Then
$$\mathbb{E}\big[|U_s - U_r|^p\,|U_t - U_s|^p\big] \le C_1|t-r|^2 + C_2|t-s||s-r|\,\mathbb{E}\Big[|X_r|^p\Big(1 + \mathbb{E}_r\sup_{r \le u \le s}|X_u|^p\Big)\Big] \le C_1|t-r|^2 + C_2|t-r|^2\,\mathbb{E}\Big[|X_r|^p\Big(1 + \sup_{r \le u \le s}|X_u|^p\Big)\Big].$$
Under (A.3), we conclude that
$$\mathbb{E}\big[|U_s - U_r|^p\,|U_t - U_s|^p\big] \le C|t-r|^2, \quad \forall\, 0 \le r < s < t \le T. \qquad \square$$

2.4. Density of $K$. Let $\mathcal{L}$ be the second order linear partial differential operator defined, for any twice continuously differentiable function $f$, by
$$\mathcal{L}f(x) := b(x)\frac{\partial}{\partial x}f(x) + \frac12\,\sigma\sigma^*(x)\frac{\partial^2}{\partial x^2}f(x) + \int_E \big( f(x + F(x,z)) - f(x) - F(x,z)f'(x) \big)\,\lambda(dz). \tag{2.13}$$

Proposition 3. Suppose that Assumptions (A.1), (A.2) and (A.4) hold and let $(X,K)$ denote the unique deterministic flat solution to (1.2). Then the Stieltjes measure $dK$ is absolutely continuous with respect to the Lebesgue measure (see Proposition 2), with density
$$k : \mathbb{R}_+ \ni t \mapsto \frac{\big(\mathbb{E}[\mathcal{L}h(X_{t^-})]\big)^-}{\mathbb{E}[h'(X_{t^-})]}\,\mathbf{1}_{\mathbb{E}[h(X_t)]=0}. \tag{2.14}$$

Let us admit for the moment the following result, which will be useful for our proof.

Lemma 3. The functions $t \mapsto \mathbb{E}[h(X_t)]$ and $t \mapsto \mathbb{E}[\mathcal{L}h(X_t)]$ are continuous.

Lemma 4. If $\varphi$ is a continuous function such that, for some $C \ge 0$ and $p \ge 1$,
$$\forall x \in \mathbb{R}, \quad |\varphi(x)| \le C(1 + |x|^p),$$
then the function $t \mapsto \mathbb{E}[\varphi(X_t)]$ is continuous.

The proof of Lemma 4 is given in Appendix A.1. We may now proceed to the proof of Proposition 3.

Proof. Let $t$ be in $[0,T]$. For all positive $r$, we have
$$X_{t+r} = X_t + \int_t^{t+r}\Big( b(X_{s^-}) - \int_E F(X_{s^-},z)\,\lambda(dz) \Big)\,ds + \int_t^{t+r} \sigma(X_{s^-})\,dB_s + \int_t^{t+r}\!\!\int_E F(X_{s^-},z)\,N(ds,dz) + K_{t+r} - K_t.$$


Under (A.4), and thanks to Itô's formula, we get
$$
\begin{aligned}
h(X_{t+r}) - h(X_t) &= \int_t^{t+r} b(X_{s^-})h'(X_{s^-})\,ds + \int_t^{t+r} \sigma(X_{s^-})h'(X_{s^-})\,dB_s + \int_t^{t+r}\!\!\int_E F(X_{s^-},z)h'(X_{s^-})\,\tilde N(ds,dz)\\
&\quad + \int_t^{t+r} h'(X_{s^-})\,dK_s + \frac12\int_t^{t+r} \sigma^2(X_{s^-})h''(X_{s^-})\,ds\\
&\quad + \int_t^{t+r}\!\!\int_E \big( h(X_{s^-} + F(X_{s^-},z)) - h(X_{s^-}) - F(X_{s^-},z)h'(X_{s^-}) \big)\,\lambda(dz)\,ds\\
&\quad + \int_t^{t+r}\!\!\int_E \big( h(X_{s^-} + F(X_{s^-},z)) - h(X_{s^-}) - F(X_{s^-},z)h'(X_{s^-}) \big)\,\tilde N(ds,dz)\\
&= \int_t^{t+r} \mathcal{L}h(X_{s^-})\,ds + \int_t^{t+r} h'(X_{s^-})\,dK_s + \int_t^{t+r} \sigma(X_{s^-})h'(X_{s^-})\,dB_s\\
&\quad + \int_t^{t+r}\!\!\int_E \big( h(X_{s^-} + F(X_{s^-},z)) - h(X_{s^-}) \big)\,\tilde N(ds,dz),
\end{aligned}
$$
where $\mathcal{L}$ is given by (2.13). Taking expectations, we obtain
$$\mathbb{E}\Big[\int_t^{t+r} h'(X_{s^-})\,dK_s\Big] = \mathbb{E}h(X_{t+r}) - \mathbb{E}h(X_t) - \int_t^{t+r} \mathbb{E}\mathcal{L}h(X_{s^-})\,ds. \tag{2.15}$$
Suppose, on the one hand, that $\mathbb{E}h(X_t) > 0$. Then, by the continuity of $t \mapsto \mathbb{E}h(X_t)$, there exists a positive $R$ such that for all $r \in [0,R]$, $\mathbb{E}h(X_{t+r}) > 0$. This implies in particular, from the definition of $K$, that $dK([t,t+r]) = 0$ for all $r$ in $[0,R]$.

On the other hand, suppose that $\mathbb{E}h(X_t) = 0$; then two cases arise. Let us first assume that $\mathbb{E}\mathcal{L}h(X_t) > 0$. Hence, we can find a positive $R'$ such that for all $r \in [0,R']$, $\mathbb{E}\mathcal{L}h(X_{t+r}) > 0$. We thus deduce from our assumptions and (2.15) that $\mathbb{E}h(X_{t+r}) > 0$ for all $r \in (0,R']$. Therefore, $dK([t,t+r]) = 0$ for all $r \in [0,R']$.

Suppose next that $\mathbb{E}\mathcal{L}h(X_t) \le 0$. By continuity of $t \mapsto \mathbb{E}\mathcal{L}h(X_t)$, there exists a positive $R''$ such that for all $r \in [0,R'']$ it holds that $\mathbb{E}\mathcal{L}h(X_{t+r}) \le 0$. Since $\mathbb{E}h(X_{t+r})$ must remain non-negative on this set, the process $K$ has to compensate, and $K_{t+r}$ is then positive for all $r \in [0,R'']$. Moreover, the compensation must be minimal, i.e. such that $\mathbb{E}h(X_{t+r}) = 0$. Equation (2.15) becomes
$$\mathbb{E}\Big[\int_t^{t+r} h'(X_{s^-})\,dK_s\Big] = -\int_t^{t+r} \mathbb{E}\mathcal{L}h(X_{s^-})\,ds \quad \text{on } [0,R''].$$
Dividing both sides by $r$ and letting $r \to 0$ gives (by continuity):
$$dK_t = -\frac{\mathbb{E}[\mathcal{L}h(X_{t^-})]}{\mathbb{E}[h'(X_{t^-})]}\,dt.$$
Thus, $dK$ is absolutely continuous w.r.t. the Lebesgue measure with density
$$k_t = \frac{\big(\mathbb{E}[\mathcal{L}h(X_{t^-})]\big)^-}{\mathbb{E}[h'(X_{t^-})]}\,\mathbf{1}_{\mathbb{E}[h(X_t)]=0}. \qquad \square$$
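As a hedged illustration (this worked example is ours, not the paper's), take the linear constraint $h(x) = x - c$ with $c \in \mathbb{R}$. Then $h' \equiv 1$, $h'' \equiv 0$, and the integrand of the jump term in (2.13) vanishes, so the density (2.14) takes a fully explicit form:

```latex
\mathcal{L}h(x) = b(x), \qquad
k_t = \big(\mathbb{E}[b(X_{t^-})]\big)^-\,\mathbf{1}_{\mathbb{E}[X_t]=c}.
```

In words: cash is injected exactly when the mean constraint is saturated and the drift pushes the mean downward.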


Remark 4. This justifies, at least under the smoothness Assumption (A.4) on the constraint function $h$, the non-negativity hypothesis imposed on $h'$.

Proof of Lemma 3. Under Assumption (A.2), and by using Lemma 4, we obtain the continuity of the function $t \mapsto \mathbb{E}h(X_t)$. Under Assumptions (A.1), (A.2) and (A.4), we observe that $x \mapsto \mathcal{L}h(x)$ is a continuous function such that, for all $x \in \mathbb{R}$, there exist constants $C_1, C_2, C_3 > 0$ with $|b(x)h'(x)| \le C_1(1+|x|)$, $|\sigma^2(x)h''(x)| \le C_2(1+|x|^2)$, and
$$
\Big| \int_E \big( h(x + F(x,z)) - h(x) - F(x,z)h'(x) \big)\,\lambda(dz) \Big|
\le C_3 \int_E |F(x,z)|\,\lambda(dz)
\le C_3 \int_E |F(x,z) - F(0,z)|\,\lambda(dz) + C_3 \int_E |F(0,z)|\,\lambda(dz)
\le C_3(1 + |x|),
$$
where the last step uses the Lipschitz Assumption (A.1) with $p = 1$. Finally, by using Lemma 4, we conclude the continuity of $t \mapsto \mathbb{E}\mathcal{L}h(X_t)$. $\square$

3. Mean reflected SDE as the limit of an interacting reflected particle system

With the notation introduced at the beginning of Section 2, and especially equation (2.9), we can write the unique solution of the SDE (1.2) as
$$X_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\!\!\int_E F(X_{s^-},z)\,\tilde N(ds,dz) + \sup_{s \le t} G_0(\mu_s), \tag{3.1}$$
where $\mu_t$ stands for the law of
$$U_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\!\!\int_E F(X_{s^-},z)\,\tilde N(ds,dz).$$
We are here interested in the particle approximation of such a system. Our candidates are the particles, for $1 \le i \le N$,
$$X^i_t = \bar X^i_0 + \int_0^t b(X^i_{s^-})\,ds + \int_0^t \sigma(X^i_{s^-})\,dB^i_s + \int_0^t\!\!\int_E F(X^i_{s^-},z)\,\tilde N^i(ds,dz) + \sup_{s \le t} G_0(\mu^N_s), \tag{3.2}$$
where the $B^i$ are independent Brownian motions, the $\tilde N^i$ are independent compensated Poisson measures, the $\bar X^i_0$ are independent copies of $X_0$, and $\mu^N_s$ denotes the empirical distribution at time $s$ of the particles
$$U^i_t = \bar X^i_0 + \int_0^t b(X^i_{s^-})\,ds + \int_0^t \sigma(X^i_{s^-})\,dB^i_s + \int_0^t\!\!\int_E F(X^i_{s^-},z)\,\tilde N^i(ds,dz), \quad 1 \le i \le N,$$
namely $\mu^N_s = \frac1N \sum_{i=1}^N \delta_{U^i_s}$. It is worth noticing that
$$G_0(\mu^N_s) = \inf\Big\{ x \ge 0 : \frac1N \sum_{i=1}^N h(x + U^i_s) \ge 0 \Big\}.$$


In order to prove that there is indeed a propagation-of-chaos effect, we introduce the following independent copies of $X$:
$$\bar X^i_t = \bar X^i_0 + \int_0^t b(\bar X^i_{s^-})\,ds + \int_0^t \sigma(\bar X^i_{s^-})\,dB^i_s + \int_0^t\!\!\int_E F(\bar X^i_{s^-},z)\,\tilde N^i(ds,dz) + \sup_{s \le t} G_0(\mu_s), \quad 1 \le i \le N,$$
and we couple these particles with the previous ones by choosing the same Brownian motions and the same Poisson measures. In order to do so, we introduce the decoupled particles $\bar U^i$, $1 \le i \le N$:
$$\bar U^i_t = \bar X^i_0 + \int_0^t b(\bar X^i_{s^-})\,ds + \int_0^t \sigma(\bar X^i_{s^-})\,dB^i_s + \int_0^t\!\!\int_E F(\bar X^i_{s^-},z)\,\tilde N^i(ds,dz).$$
Note that the particles $\bar U^i$ are i.i.d., and let us denote by $\bar\mu^N_t$ the empirical measure associated to this system of particles.

Remark 5. (i) Under our assumptions, we have $\mathbb{E}[h(\bar X^i_0)] = \mathbb{E}[h(X_0)] \ge 0$. However, there is no reason to have
$$\frac1N \sum_{i=1}^N h(\bar X^i_0) \ge 0,$$
even if $N$ is large. As a consequence,
$$G_0(\mu^N_0) = \inf\Big\{ x \ge 0 : \frac1N \sum_{i=1}^N h(x + \bar X^i_0) \ge 0 \Big\}$$
is not necessarily equal to $0$. As a byproduct, we have $X^i_0 = \bar X^i_0 + G_0(\mu^N_0)$, and the non-decreasing process $\sup_{s \le t} G_0(\mu^N_s)$ is not equal to $0$ at time $t = 0$. Written in this way, the particles defined by (3.2) cannot be interpreted as the solution of a reflected SDE. To view the particles as the solution of a reflected SDE, instead of (3.2) one has to solve
$$X^i_t = \bar X^i_0 + G_0(\mu^N_0) + \int_0^t b(X^i_{s^-})\,ds + \int_0^t \sigma(X^i_{s^-})\,dB^i_s + \int_0^t\!\!\int_E F(X^i_{s^-},z)\,\tilde N^i(ds,dz) + K^N_t,$$
$$\frac1N \sum_{i=1}^N h(X^i_t) \ge 0, \qquad \int_0^t \frac1N \sum_{i=1}^N h(X^i_s)\,dK^N_s = 0,$$
with $K^N$ non-decreasing and $K^N_0 = 0$. Since we do not use this point in the sequel, we will work with the form (3.2).
(ii) Following the same proof as that of Theorem 1, it is easy to demonstrate the existence and uniqueness of a solution to the approximating particle system (3.2).

We have the following result concerning the approximation of (1.2) by the interacting particle system.

Theorem 2. Let Assumptions (A.1) and (A.2) hold and let $T > 0$.
(i) Under Assumption (A.3), there exists a constant $C$, depending on $b$, $\sigma$ and $F$, such that, for each $j \in \{1,\dots,N\}$,
$$\mathbb{E}\Big[\sup_{s \le T} |X^j_s - \bar X^j_s|^2\Big] \le C \exp\Big( C\Big(1 + \frac{M^2}{m^2}\Big)(1+T^2) \Big)\,\frac{M^2}{m^2}\,N^{-1/2}.$$
(ii) If Assumption (A.4) is in force, then there exists a constant $C$, depending on $b$, $\sigma$ and $F$, such that, for each $j \in \{1,\dots,N\}$,
$$\mathbb{E}\Big[\sup_{s \le T} |X^j_s - \bar X^j_s|^2\Big] \le C \exp\Big( C\Big(1 + \frac{M^2}{m^2}\Big)(1+T^2) \Big)\,\frac{(1+T^2)M^2}{m^2}\Big(1 + \mathbb{E}\Big[\sup_{s \le T} |X_s|^2\Big]\Big)\,N^{-1}.$$
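Before turning to the proof, the particle system (3.2), combined with an Euler discretization in time (the scheme studied in Section 4 and, without jumps, in [BCdRGL16]), can be sketched as follows. Everything concrete here is an illustrative assumption: the coefficients, the constraint $h$, the jump intensity and the grids. The reflection increment is computed, in the spirit of Remark 1, as $G_0$ of the empirical measure of one unreflected Euler step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data (assumptions, not from the paper)
b = lambda x: -x               # drift
sig = 0.3                      # constant diffusion coefficient
h = lambda x: x - 0.5          # increasing bi-Lipschitz constraint function
lam = 1.0                      # finite total jump intensity lambda(E)
fjump = 0.1                    # F(x, z) = fjump * z, with standard normal marks z

def G0_emp(u):
    """G0 of the empirical measure: inf{x >= 0 : (1/N) sum_i h(x + u_i) >= 0}."""
    H = lambda x: np.mean(h(x + u))
    if H(0.0) >= 0:
        return 0.0
    lo, hi = 0.0, 1.0
    while H(hi) < 0:           # bracket, then bisect (h is increasing)
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) < 0 else (lo, mid)
    return hi

def particle_scheme(N=2000, T=1.0, n=50, x0=1.0):
    """Euler scheme for the interacting particle system approximating (1.2).
    Returns the terminal particles and the accumulated reflection K_T."""
    dt = T / n
    X = np.full(N, x0)
    K = G0_emp(X)              # the reflection may already act at t = 0 (Remark 5)
    X = X + K
    for _ in range(n):
        dB = np.sqrt(dt) * rng.standard_normal(N)
        k = rng.poisson(lam * dt, size=N)
        # sum of k i.i.d. N(0,1) marks equals sqrt(k) * N(0,1) in law;
        # the compensator vanishes since the marks are centered
        J = fjump * np.sqrt(k) * rng.standard_normal(N)
        U = X + b(X) * dt + sig * dB + J       # one unreflected Euler step
        dK = G0_emp(U)                         # reflection increment
        X = U + dK
        K += dK
    return X, K

X_T, K_T = particle_scheme()
```

By construction, each step pushes the empirical mean of $h(X)$ back to (at least) zero, which mimics the Skorokhod condition at the particle level.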


Proof. Let $t > 0$. We have, for $r \le t$,
$$\big|X^j_r - \bar X^j_r\big| \le \Big|\int_0^r \big(b(X^j_{s^-}) - b(\bar X^j_{s^-})\big)\,ds\Big| + \Big|\int_0^r \big(\sigma(X^j_{s^-}) - \sigma(\bar X^j_{s^-})\big)\,dB^j_s\Big| + \Big|\int_0^r\!\!\int_E \big(F(X^j_{s^-},z) - F(\bar X^j_{s^-},z)\big)\,\tilde N^j(ds,dz)\Big| + \Big|\sup_{s \le r} G_0(\mu^N_s) - \sup_{s \le r} G_0(\mu_s)\Big|.$$
Taking into account the fact that
$$\Big|\sup_{s \le r} G_0(\mu^N_s) - \sup_{s \le r} G_0(\mu_s)\Big| \le \sup_{s \le r} \big|G_0(\mu^N_s) - G_0(\mu_s)\big| \le \sup_{s \le t} \big|G_0(\mu^N_s) - G_0(\mu_s)\big| \le \sup_{s \le t} \big|G_0(\mu^N_s) - G_0(\bar\mu^N_s)\big| + \sup_{s \le t} \big|G_0(\bar\mu^N_s) - G_0(\mu_s)\big|,$$
we get the inequality
$$\sup_{r \le t} \big|X^j_r - \bar X^j_r\big| \le I_1 + \sup_{s \le t} \big|G_0(\mu^N_s) - G_0(\bar\mu^N_s)\big| + \sup_{s \le t} \big|G_0(\bar\mu^N_s) - G_0(\mu_s)\big|, \tag{3.3}$$
where we have set
$$I_1 = \sup_{r \le t}\Big|\int_0^r \big(b(X^j_{s^-}) - b(\bar X^j_{s^-})\big)\,ds\Big| + \sup_{r \le t}\Big|\int_0^r \big(\sigma(X^j_{s^-}) - \sigma(\bar X^j_{s^-})\big)\,dB^j_s\Big| + \sup_{r \le t}\Big|\int_0^r\!\!\int_E \big(F(X^j_{s^-},z) - F(\bar X^j_{s^-},z)\big)\,\tilde N^j(ds,dz)\Big|.$$
On the one hand we have, using Assumption (A.1) and the Doob and Cauchy–Schwarz inequalities,
$$\mathbb{E}\big[I_1^2\big] \le C\Big\{ t\,\mathbb{E}\int_0^t \big|b(X^j_{s^-}) - b(\bar X^j_{s^-})\big|^2\,ds + \mathbb{E}\int_0^t \big|\sigma(X^j_{s^-}) - \sigma(\bar X^j_{s^-})\big|^2\,ds + \mathbb{E}\int_0^t\!\!\int_E \big|F(X^j_{s^-},z) - F(\bar X^j_{s^-},z)\big|^2\,\lambda(dz)\,ds \Big\} \le C(1+t)\int_0^t \mathbb{E}\big[|X^j_s - \bar X^j_s|^2\big]\,ds,$$
where $C$ depends only on $b$, $\sigma$ and $F$ and may change from line to line. On the other hand, by using Lemma 2,
$$\sup_{s \le t} \big|G_0(\mu^N_s) - G_0(\bar\mu^N_s)\big| \le \frac{M}{m}\,\sup_{s \le t} \frac1N \sum_{i=1}^N \big|U^i_s - \bar U^i_s\big| \le \frac{M}{m}\,\frac1N \sum_{i=1}^N \sup_{s \le t} \big|U^i_s - \bar U^i_s\big|,$$
and the Cauchy–Schwarz inequality gives, since the variables are exchangeable,
$$\mathbb{E}\Big[\sup_{s \le t} \big|G_0(\mu^N_s) - G_0(\bar\mu^N_s)\big|^2\Big] \le \frac{M^2}{m^2}\,\frac1N \sum_{i=1}^N \mathbb{E}\Big[\sup_{s \le t} \big|U^i_s - \bar U^i_s\big|^2\Big] = \frac{M^2}{m^2}\,\mathbb{E}\Big[\sup_{s \le t} \big|U^j_s - \bar U^j_s\big|^2\Big].$$
Since
$$U^j_s - \bar U^j_s = \int_0^s \big(b(X^j_{r^-}) - b(\bar X^j_{r^-})\big)\,dr + \int_0^s \big(\sigma(X^j_{r^-}) - \sigma(\bar X^j_{r^-})\big)\,dB^j_r + \int_0^s\!\!\int_E \big(F(X^j_{r^-},z) - F(\bar X^j_{r^-},z)\big)\,\tilde N^j(dr,dz),$$


the same computations as before lead to
$$\mathbb{E}\Big[\sup_{s \le t} \big|G_0(\mu^N_s) - G_0(\bar\mu^N_s)\big|^2\Big] \le C\,\frac{M^2}{m^2}(1+t)\int_0^t \mathbb{E}\big[|X^j_s - \bar X^j_s|^2\big]\,ds.$$
Hence, with the previous estimates, coming back to (3.3) we get
$$\mathbb{E}\Big[\sup_{r \le t} \big|X^j_r - \bar X^j_r\big|^2\Big] \le K \int_0^t \mathbb{E}\Big[\sup_{r \le s} \big|X^j_r - \bar X^j_r\big|^2\Big]\,ds + 4\,\mathbb{E}\Big[\sup_{s \le t} \big|G_0(\bar\mu^N_s) - G_0(\mu_s)\big|^2\Big],$$
where $K = C(1+t)(1 + M^2/m^2)$. Thanks to Gronwall's lemma,
$$\mathbb{E}\Big[\sup_{r \le t} \big|X^j_r - \bar X^j_r\big|^2\Big] \le C e^{Kt}\,\mathbb{E}\Big[\sup_{s \le t} \big|G_0(\bar\mu^N_s) - G_0(\mu_s)\big|^2\Big].$$
By Lemma 2 we know that
$$\mathbb{E}\Big[\sup_{s \le t} \big|G_0(\bar\mu^N_s) - G_0(\mu_s)\big|^2\Big] \le \frac{1}{m^2}\,\mathbb{E}\Big[\sup_{s \le t} \Big| \int h(\bar G_0(\mu_s) + \cdot)\,(d\bar\mu^N_s - d\mu_s) \Big|^2\Big],$$
from which we deduce that
$$\mathbb{E}\Big[\sup_{r \le t} \big|X^j_r - \bar X^j_r\big|^2\Big] \le C e^{Kt}\,\frac{1}{m^2}\,\mathbb{E}\Big[\sup_{s \le t} \Big| \int h(\bar G_0(\mu_s) + \cdot)\,(d\bar\mu^N_s - d\mu_s) \Big|^2\Big]. \tag{3.4}$$

Proof of (i). Since the function $h$ is, at least, a Lipschitz function, we understand that the rate of convergence follows from the convergence of the empirical measure of i.i.d. diffusion processes. The crucial point here is that we consider a uniform (in time) convergence, which may possibly damage the usual rate of convergence. We will see, however, that we are able to preserve the optimal rate here. Indeed, in full generality (i.e. if we only suppose that (A.2) holds), we get:
$$\mathbb{E}\Big[\sup_{s \le t} \Big| \int h(\bar G_0(\mu_s) + \cdot)\,(d\bar\mu^N_s - d\mu_s) \Big|^2\Big] \le M^2\,\mathbb{E}\Big[\sup_{s \le t} W_1(\bar\mu^N_s, \mu_s)^2\Big].$$
Thanks to the additional Assumption (A.3), and to Remark 2 and Proposition 2, we adapt and simplify the proof of Theorem 10.2.7 of [STR98], using the recent results of [FG15] on the control of the Wasserstein distance between the empirical measure of an i.i.d. sample and the true law, to obtain
$$\mathbb{E}\Big[\sup_{s \le t} W_1(\bar\mu^N_s, \mu_s)^2\Big] \le C N^{-1/2}.$$
The reader can refer to [BCdRGL16, Theorem 3.2, proof of (i)] for this result.

Proof of (ii). In the case where $h$ is a twice continuously differentiable function with bounded derivatives (i.e. under (A.4)), we take advantage of the fact that $\bar\mu^N$ is the empirical measure associated to i.i.d. copies of a diffusion process; in particular, we can get rid of the supremum in time. In view of (3.4), we need a sharp estimate of
$$\mathbb{E}\Big[\sup_{s \le t} \Big| \int h(\bar G_0(\mu_s) + \cdot)\,(d\bar\mu^N_s - d\mu_s) \Big|^2\Big]. \tag{3.5}$$
Let us first observe that $s \mapsto \bar G_0(\mu_s)$ is locally Lipschitz continuous. Indeed, since by definition $H(\bar G_0(\mu_t), \mu_t) = 0$, if $s < t$, using (2.4),

MEAN REFLECTED SDES WITH JUMPS

17

¯ 0 (µs ) − G ¯ 0 (µt )| ≤ 1 |H(G ¯ 0 (µs ), µt ) − H(G ¯ 0 (µt ), µt )| |G m 1 ¯ 0 (µs ), µt )|, = |H(G m 1 ¯ 0 (µs ) + Ut )]| = |E[h(G m    Z tZ Z t Z t 1 ˜ (dr, dz) . ¯ 0 (µs ) + Us + F (Xr− , z)N σ(Xr− )dBr + b(Xr− )dr + = E h G m s E s s We get from Itô’s formula, setting ∂ 1 ∂2 L¯y f (x) := b(y) f (x) + σσ ∗ (y) 2 f (x) + ∂x 2 ∂x

Z 

 f x + F (y, z) − f (x) − F (y, z)f (x) λ(dz), 

0

E

Z t Z t     ¯ 0 (µs ) + Ur− dr + ¯ 0 (µs ) + Ur− dBr ¯ 0 (µs ) + Ut ) = h(G ¯ 0 (µs ) + Us ) + σ Xr− h0 G h(G b Xr− h0 G s s Z tZ Z t  0    1 ¯ 0 (µs ) + Ur− N ˜ (dr, dz) + ¯ 0 (µs ) + Ur− dr F Xr− , z h G + σ 2 Xr− h00 G 2 s s E Z tZ Z tZ ˜ (dr, dz), + m(r, z)λ(dz)dr + m(r, z)N s

E

s

E

with      0  ¯ ¯ ¯ m(r, z) = h G0 (µs ) + Ur− + F (Xr− , z) − h G0 (µs ) + Ur− − F Xr− , z h G0 (µs ) + Ur− . Thus, we obtain Z t Z t ¯ ¯ ¯ ¯ ¯ 0 (µs ) + Ur− )dBr h(G0 (µs ) + Ut ) = h(G0 (µs ) + Us ) + LXr− h(G0 (µs ) + Ur− )dr + σ(Xr− )h0 (G s s  Z tZ    ¯ 0 (µs ) + Ur− + F (Xr− , z) − h G ¯ 0 (µs ) + Ur− N ˜ (dr, dz), + h G s

E

and so Z t ¯ ¯ 0 (µs ) + Ur− )]dr ¯ E[h(G0 (µs ) + Ut )] = E[h(G0 (µs ) + Us )] + E[L¯Xr− h(G s Z t ¯ 0 (µs ), µs ) + ¯ 0 (µs ) + Ur− )]dr = H(G E[L¯Xr− h(G s Z t ¯ 0 (µs ) + Ur− )]dr. = E[L¯Xr− h(G s

Since h has bounded derivatives and sups≤T |Xs | is a square integrable random variable for each T > 0 (see Proposition 1), the result follows easily. Now, we deal with (3.5). ¯ 0 (µ). By definition, we have, denoting Let us denote by ψ the Radon-Nikodym derivative of G

18

PH. BRIAND, A. GHANNOUM, AND C. LABART

¯ 0 (µs ) + U ¯ i , since U ¯ i are independent copies of U , by V i the semi-martingale s 7−→ G s

Z RN (s) :=

N    1 X ¯ N ¯ ¯i − E h G ¯ 0 (µs ) + Us h(G0 (µs ) + ·)(d¯ µs − dµs ) = h G0 (µs ) + U s N

=

1 N

i=1 N X

    ¯ 0 (µs ) + U ¯si ¯ 0 (µs ) + U ¯si − E h G h G

i=1

N    1 X = h Vsi − E h Vsi , N i=1

It follows from Itô’s formula

h

Vsi



Z s    0 i  i ¯ i− h0 V i− dB i ¯ σ X b Xr− h Vr− dr + h =h + r r r 0 0 Z sZ 0 Z s   i   ¯ i− , z h0 V i− N ˜ (dr, dz) + 1 ¯ i− h00 V i− dr + F X σ2 X r r r r 2 0  Z0 s ZE   i i i i 0 i ¯ ¯ + h Vr− + F (Xr− , z) − h(Vr− ) − F (Xr− , z)h (Vr− ) λ(dz)dr 0 E  Z sZ   i i i i 0 i ¯ ¯ ˜ i (dr, dz) + h Vr− + F (Xr− , z) − h(Vr− ) − F (Xr− , z)h (Vr− ) N 0 E Z s Z s Z   0 i   1 s 2 ¯ i  00 i  0 i i i ¯ h Vr− ψr dr + b Xr− h Vr− dr + = h V0 + σ Xr− h Vr− dr 2 0 0 0  Z sZ   ¯ i− , z) − h(V i− ) − F (X ¯ i− , z)h0 (V i− ) λ(dz)dr + h Vri− + F (X r r r r 0 E  Z s Z sZ   0 i  i  i i i i ¯ ¯ ˜ i (dr, dz) + σ Xr− h Vr− dBr + h Vr− + F (Xr− , z) − h(Vr− ) N 0 Z s Z 0s E Z s      i 0 i ¯ i− dBri = h V0 + h Vr− ψr dr + L¯X¯ i h Vri− dr + h0 Vri− σ X r r− 0 0 0  Z sZ    ¯ i− , z) − h V i− N ˜ i (dr, dz) + h Vri− + F (X r r 0 E Z s Z s   0 i     ¯ i− dBri = h V0i + h Vr− ψr + L¯X¯ i h Vri− dr + h0 Vri− σ X r r− 0 0  Z sZ    ¯ i− , z) − h V i− N ˜ i (dr, dz), + h Vri− + F (X r r V0i



0

Z

s

0

Vri−



 ¯ 0 µr + dG

Z

s

E

Taking expectation gives     Z   i i E h Vs = E h V0 +

s

   E h0 Vri− ψr + L¯X¯ i h Vri− dr − r 0 Z s  0 i   ¯ 0 (µ0 ), µ0 ) + = H(G E h Vr− ψr + L¯X¯ i h Vri− dr r− 0 Z s  0 i   =0+ E h Vr− ψr + L¯X¯ i h Vri− dr. 0

r−

MEAN REFLECTED SDES WITH JUMPS

19

We deduce immediately that N N Z 1 X 1 X s i i C (r)dr + MN (s) + LN (s) RN (s) = h(V0 ) + N N i=1 i=1 0  Z s X N N 1 X 1 i i = h(V0 ) + C (r) dr + MN (s) + LN (s), N N 0 i=1

i=1

where we have set      C i (r) = h0 Vri− ψr + L¯X¯ i h Vri− − E h0 Vri− ψr + L¯X¯ i h Vri− , r−

r−

N Z 1 X s 0 i  ¯i  i MN (s) = h Vr− σ Xr− dBr , N i=1 0 N Z Z    i 1 X s ˜ (dr, dz). ¯ i− , z) − h V i− N LN (s) = h Vri− + F (X r r N 0 E i=1

As a byproduct, Z N 1 X i sup |RN (s)| ≤ h(V0 ) + sup N s≤t s≤t 1 ≤ N

i=1 N X i=1

0

Z i h(V0 ) +

0

N X i 1 C (r) dr + sup |MN (s)| + sup |LN (s)| N s≤t s≤t

s

i=1

N t X i dr + sup |MN (s)| + sup |LN (s)|. 1 C (r) N s≤t s≤t i=1

¯ i are i.i.d, We get, using Cauchy-Schwartz inequality, since U i and X (  2    Z t X   N 1 N i 1 X i 2 h(V0 ) + E C (r) dr E sup |RN (s)| ≤ 4 V N N s≤t 0 i=1 i=1    ) + E sup |MN (s)|2 + E sup |LN (s)|2 s≤t

s≤t

(    Z t X N 1 N i 2 1 X i ≤4 V h(V0 ) + tE C (r) dr N 0 N i=1 i=1    ) 2 2 + E sup |MN (s)| + E sup |LN (s)| s≤t

s≤t

(    Z t  X N N 1 X 1 i i h(V0 ) + t V C (r) dr =4 V N N 0 i=1 i=1    ) 2 2 + E sup |MN (s)| + E sup |LN (s)| . s≤t

s≤t

Thus, we get       Z 4t t 4 E sup |RN (s)|2 ≤ V[h(V0 )] + V(C(r))dr + 4E sup |MN (s)|2 + 4E sup |LN (s)|2 N N 0 s≤t s≤t s≤t Z t   4 4t = V[h(V0 )] + V(h0 Vr− ψr + L¯Xr− h Vr− )dr N N 0     2 2 + 4E sup |MN (s)| + 4E sup |LN (s)| . s≤t

s≤t

20

PH. BRIAND, A. GHANNOUM, AND C. LABART

Since MN is a martingale with N Z  1 X t 0 i ¯ i− ) 2 dr, hMN it = 2 h (Vr− )σ(X r N 0 i=1

Doob’s inequality leads to     E sup |MN (s)|2 ≤ 4E |MN (t)|2 s≤t

  N Z 2 4 X t 0 i i ¯ E dr h (V )σ( X ) r− r− N2 0 i=1   Z 2 4 t 0 E h (Vr− )σ(Xr− ) dr. = N 0

=

Then, using Doob inequality for the last martingale LN , we obtain     2 E sup |LN (s)| ≤ 4E |LN (t)|2 s≤t

2   Z t Z  N   i 4 X i i i ˜ ¯ = 2 h Vr− + F (Xr− , z) − h Vr− N (dr, dz) E N 0 E i=1 Z tZ  X   i 8 ˜ (dr, dz) ¯ i− , z) − h V i− N + 2 E h Vri− + F (X r r N 0 E 1≤i 0, N and n be two non-negative integers and let Assumptions (A.1), (A.2) and (A.3) hold. There exists a constant C, depending on T , b, σ, F , h and X0 but independent of N , such that: for all i = 1, . . . , N     i i 2 −1 −1/2 ˜ . E sup Xs − Xs ≤C n +N s≤t

(ii) Moreover, if Assumption (A.4) is in force, there exists a constant C, depending on T , b, σ, F , h and X0 but independent of N , such that: for all i = 1, . . . , N     i i 2 −1 −1 ˜ . E sup Xs − Xs ≤C n +N s≤t

Proof. Let us fix i ∈ 1, . . . , N and T > 0. We have, for t ≤ T , Z s Z s  i i i i i i ˜ si ≤ ˜ ˜ Xs − X σ(Xr− ) − σ(Xr− ) dBr b(Xr− ) − b(Xr− )dr + 0 0 Z sZ   i i i ˜ ˜ µN + F (Xr− , z) − F (Xr− , z) N (dr, dz) + sup G0 (µN r ) − G0 (˜ r ) . 0

r≤s

E

Hence, using Assumption (A.1), Cauchy-Schwartz, Doob and BDG inequalities we get: ( Z  2     2 Z s   s i i i i 2 i i i ˜ ˜ ˜ ≤ 4E sup b(Xr− ) − b(Xr− ) dr + E sup Xs − Xs σ(Xr− ) − σ(Xr− ) dBr s≤t

s≤t

0

0

) 2 Z sZ   2 i N N i i ˜ (dr, dz) + sup G0 (µ ) − G0 (˜ ˜ − , z) N µr ) + F (Xr− , z) − F (X r r r≤s 0 E (  Z 2  2  Z t t i i i i ˜ ˜ b(X ) − b( X ≤C E t ) ds + E σ(X ) − σ( X ) − − − − s s s s ds 0 0 2 ) Z tZ   2 ˜ i− , z) λ(dz)ds + E sup G0 (µN F (X i− , z) − F (X +E µN s ) − G0 (˜ s ) s s s≤t 0 E (   Z t  Z t  i i i 2 i 2 ˜ ˜ ≤ C T C1 E Xs − Xs ds + C1 E Xs − Xs ds 0

Z + C1

t

0

   ) i 2 2 i N N ˜ ds + E sup G0 (µ ) − G0 (˜ µs ) E Xs − X s s s≤t

0

Z ≤C

t

    i i 2 N N 2 ˜ E Xs − Xs ds + 4E sup G0 (µs ) − G0 (˜ µs ) . s≤t

0

(4.2) Denoting by (µit )0≤t≤T the family of marginal laws of (Uti )0≤t≤T and (˜ µit )0≤t≤T the family of ˜ i )0≤t≤T , we have marginal laws of (U t (     2 2 N N i 2 E sup G0 (µs ) − G0 (˜ µs ) ≤ 3 E sup G0 (µN ) − G (µ ) + sup G0 (µis ) − G0 (˜ µis ) 0 s s s≤t

s≤t

 ) 2 i N + E sup G0 (˜ µs ) − G0 (˜ µs ) , s≤t

s≤t

24

PH. BRIAND, A. GHANNOUM, AND C. LABART

and from Lemma 2, (

Z 2   2  1 i N i ¯ 0 (µs ) + ·)(dµs − dµs ) + M E sup h(G sup W12 (µis , µ ˜is ) ≤3 2 m m s≤t s≤t Z 2 )  1 ¯ 0 (˜ µis ) + ·)(d˜ + 2 E sup h(G µN µis ) s − d˜ m s≤t (  Z 2  2   i i i N i ˜ ¯ ≤ C E sup h(G0 (µs ) + ·)(dµs − dµs ) + sup E Us − Us s≤t

s≤t

Z 2 )  ¯ 0 (˜ + E sup h(G µN µis ) µis ) + ·)(d˜ . s − d˜ s≤t

Proof of (i). Following the Proof of (i) in Theorem 2, we obtain Z 2     2 N i i N i ¯ E sup h(G0 (µs ) + ·)(dµs − dµs ) ≤ CE sup W1 (µs , µs ) ≤ CN −1/2 , s≤t

s≤t

2  Z    N i N i 2 i ¯ µs − d˜ µs ) ≤ CE sup W1 (˜ ˜s ) ≤ CN −1/2 . E sup h(G0 (˜ µs ) + ·)(d˜ µs , µ s≤t

s≤t

From which, we can derive the inequality 2     i N N 2 i ˜ µs ) ≤ C1 sup E Us − Us + C2 N −1/2 E sup G0 (µs ) − G0 (˜ s≤t s≤t 2  2     i i i i ˜ ˜ ˜ ≤ C1 sup E Us − Us + sup E Us − Us + C2 N −1/2 . s≤t

s≤t

For the first term of the right hand side, we can observe that, 2  2    i i i i ˜s ˜s sup E Us − U ≤ E sup Us − U s≤t

s≤t

( Z  ≤ 3E sup s≤t

0

s

b(Xri− )

 2 Z i ˜ − b(Xr− ) dr +

0

s

σ(Xri− )

2  i i ˜ − σ(Xr− ) dBr

Z sZ  2   i i i ˜ ˜ + F (Xr− , z) − F (Xr− , z) N (dr, dz) (0  EZ 2  2  Z t t i i i i ˜ − ) ds + E ˜ − ) ds b(X − ) − b(X σ(X − ) − σ(X ≤C E t s s s s 0 0 2 ) Z tZ i i ˜ − , z) λ(dz)ds F (X − , z) − F (X +E s s 0 E (   ) Z t  Z t  i 2 2 ˜ i dr + 2C1 ˜ i dr ≤ C T C1 E Xs − X E Xsi − X s s 0

Z ≤C 0

t

  ˜ si 2 ds. E Xsi − X

0

MEAN REFLECTED SDES WITH JUMPS

25

2   i i ˜ ˜ Using Assumption (A.1), the second term sups≤t E Us − Us becomes (  Z 2   i ˜ −U ˜ i ≤ 3 sup E sup E U s s s≤t

s≤t

s

s

2 Z i ˜ b(Xr− )dr +

s

s

2 Z i i ˜ σ(Xr− )dBr +

sZ E

s

2 ) ˜ i− , z)N ˜ i (dr, dz) F (X r

2 ) sZ ˜ i− , z) λ(dz)dr F (X r s≤t s E (  2 2 ) Z s 2 i 2 i i i i 2 ˜ s ) s − s + σ(X ˜ − | )dr ˜ s ) Bs − Bs + C ≤ 3 sup E b(X (1 + |X r

(  2 2 Z 2 i i i 2 i ˜ ˜ ≤ 3 sup E b(Xs ) s − s + σ(Xs ) Bs − Bs +

s≤t

s

(        i T 2 i 2 i 2 i 2 ˜ ˜ E sup b(Xr− ) ≤ 3 sup + E Bs − Bs E σ(Xs ) n s≤t s≤r≤s )    T ˜ ri |2 ) E sup (1 + |X +C n s≤r≤s  2       i T i 2 i 2 i 2 ˜ ˜ ≤ C1 E sup b(Xs ) + C2 sup E Bs − Bs E sup σ(Xs ) n s≤t s≤T s≤T     T ˜ i |2 ) E sup(1 + |X + C3 s n s≤T       2   i 2 i 2 i 2 T i ˜ ˜ + C2 sup E Bs − Bs 1 + E sup X ≤ C1 1 + E sup X s s n s≤t s≤T s≤T     i 2 T ˜ + C3 , 1 + E sup X s n s≤T and from Proposition 1, we get 2       i i 2 T i i ˜ −U ˜ ≤ C1 sup E U + C2 sup E Bs − Bs . s s n s≤t s≤t Then, by using BDG inequality, we obtain    Z i 2 i sup E Bs − Bs = sup E s≤t

s≤t

s

dBui

2  ≤ sup |s − s| ≤ s≤t

s

T . n

Therefore, we conclude 2   i i ˜ ˜ sup E Us − Us ≤ C1 n−1 + C2 n−1 s≤t

−1

≤ Cn

(4.3)

,

from which we derive the inequality (    ) Z t  2 i 2 N N −1 −1/2 i ˜ ds , E sup G0 (µs ) − G0 (˜ µs ) ≤ C n + N + E Xs − X s s≤t

(4.4)

0

and taking into account (4.2) we get (    ) Z t  i 2 i 2 i i −1 −1/2 ˜s ≤ C n + N ˜ ds . E sup Xs − X + E Xs − X s s≤t

0

(4.5)

26

PH. BRIAND, A. GHANNOUM, AND C. LABART

Since       i i i i 2 i 2 i 2 ˜ ˜ ˜ ˜ E Xs − Xs ≤ 2E Xs − Xs + 2E Xs − Xs     2 i 2 i i i ˜ −U ˜ , ˜ s + 2E U = 2E Xs − X s s it follows from (4.3) and (4.5) that (    ) Z t  i 2 i 2 i −1 −1/2 i ˜s ≤ C n + N ˜ s ds . E sup Xs − X + E Xs − X s≤t

0

and finally, we conclude the proof of (i) with Gronwall’s Lemma.

Proof of (ii). Following the proof of (ii) in Theorem 2, we obtain Z 2   i N i ¯ E sup h(G0 (µs ) + ·)(dµs − dµs ) ≤ CN −1 , s≤t

2  Z  N i i ¯ µs − d˜ µs ) ≤ CN −1 . µs ) + ·)(d˜ E sup h(G0 (˜ s≤t

According to the same strategy applied to proof of (i) in Theorem 4, the result follows easily:     i i 2 −1 −1 ˜ E sup Xs − Xs ≤C n +N . s≤t

 Theorem 3. Let T > 0, N and n be two non-negative integers. Let assumptions (A.1), (A.2) and (A.3) hold. (i) There exists a constant C, depending on T , b, σ, F , h and X0 but independent of N , such that: for all i = 1, . . . , N ,     i −1 −1/2 i 2 ¯ ˜ E sup Xt − Xt ≤C n +N . t≤T

(ii) If in addition (A.4) holds, there exists a positive constant C, depending on T , b, σ, F , h and X0 but independent of N , such that: for all i = 1, . . . , N ,     i i 2 −1 −1 ¯ ˜ E sup Xt − Xt ≤C n +N . t≤T

Proof. The proof is straightforward writing i i ¯ −X ˜ i ≤ X ¯ − X i + X i − X ˜ i , X t t t t t t and using Theorem 2 and Proposition 4.



MEAN REFLECTED SDES WITH JUMPS

27

5. Numerical illustrations. Throughout this section, we consider, on [0, T ] the following sort of processes: Z tZ Z t Z t   ˜ (ds, dz) + Kt ,  c(z)(ηs + θs Xs− )N (σ + γ X )dB + (β + a X )ds + X = X − − − s s s s s s s 0  t 0 E 0 0 Z t    E[h(Xt )] ≥ 0, E[h(Xs )] dKs = 0, t ≥ 0. 0

(5.1) where (βt )t≥0 , (at )t≥0 , (σt )t≥0 , (γt )t≥0 , (ηt )t≥0 and (θt )t≥0 are bounded adapted processes. This sort of processes allow us to make some explicit computations leading us to illustrate the algorithm. Our results are presented for different diffusions and functions h that are summarized below. Linear constraint. We first consider cases where h : R 3 x 7−→ x − p ∈ R. Case (i) Drifted Brownian motion and compensated Poisson process: βt = β > 0, at = γt = θt = 0, σt = σ > 0, ηt = η > 0, X0 = x0 ≥ p, c(z) = z and   1 (ln z)2 f (z) = √ 1{0 0, γt = γ > 0, θt = θ > 0, c(z) = δ1 (z). Then Kt = ap(t − t∗ )1t≥t∗ , where t∗ = a1 (ln(x0 ) − ln(p)), and Z t Xt = Yt + Yt Ys−1 dKs , 0

where Y is the process defined by:   Yt = X0 exp − (a + γ 2 /2 + λθ)t + γBt (1 + θ)Nt . Nonlinear constraint. Secondly, we illustrate the case of non-linear function h: h : R 3 x 7−→ x + α sin(x) − p ∈ R, −1 < α < 1, and we illustrate this case with Case (iii) Ornstein Uhlenbeck process: βt = β > 0, at = a > 0, γt = θt = 0, σt = σ > 0, ηt = η > 0, X0 = x0 with x0 > |α| + p, c(z) = δ1 (z). We obtain dKt = e−at d sup(Fs−1 (0))+ , s≤t

t ≥ 0,

28

PH. BRIAND, A. GHANNOUM, AND C. LABART

where for all t in [0, T ], (   at     2 e −1 −at −at σ Ft : R 3 x 7−→ e x0 − β + x + α exp − e sinh(at) a 2a "    −at     e −1 1 iη −iη −at x0 − (β + λη) × exp(λt(e − 1)) + exp(λt(e − 1)) sin e +x 2 a   −at     # e −1 1 iη −iη −at x0 − (β + λη) exp(λt(e − 1)) − exp(λt(e − 1)) cos e +x + 2i a ) −p Remark 7. These examples have been chosen in such a way that we are able to give an analytic form of the reflecting process K. This enables us to compare numerically the “true” process K and ˆ . When an exact simulation of the underlying process is available, its empirical approximation K we compute the approximation rate of our algorithm. 5.1. Proofs of the numerical illustrations. In order to have closed, or almost closed, expression for the compensator K we introduce the process Y solution to the non-reflected SDE Z t Z t Z tZ ˜ (ds, dz). Yt = X0 − (βs + as Ys− )ds + (σs + γs Ys− )dBs + c(z)(ηs + θs Ys− )N 0

0

0

Rt

E

eAt Xt

and eAt Yt , we get By letting As = 0 as ds and applying Itô’s formula on Z t Z t Z t At As As e Xt = X0 + e Xs as ds + e (−βs − as Xs− )ds + eAs (σs + γs Xs− )dBs 0 0 0 Z tZ Z t ˜ (ds, dz) + + eAs c(z)(ηs + θs Xs− )N eAs dKs 0 E 0 Z t Z tZ Z t Z t As As As ˜ eAs dKs , e c(z)(ηs + θs Xs− )N (ds, dz) + e (σs + γs Xs− )dBs + e βs ds + = X0 − 0

0

0

0

E

in the same way, t

Z

At

Z

As

e Yt = X0 −

t

Z tZ

As

e (σs + γs Ys− )dBs +

e βs ds + 0

0

0

˜ (ds, dz), eAs c(z)(ηs + θs Ys− )N

E

and so Xt = Yt + e

−At

Z

t As

−At

Z

e dKs + e 0

t As

−At

Z tZ

e γs (Xs− + Ys− )dBs + e 0

0

˜ (ds, dz). eAs c(z)θs (Xs− + Ys− )N

E

Remark 8. In all of cases, we have at = a i.e. At = at, so we get    Z t Z t Z tZ −at as as as ˜ E[Yt ] = E e x0 − e βds + e (σs + γs Ys− )dBs + e c(z)(ηs + θs Ys− )N (ds, dz) 0 0 0 E   Z t −at as =e x0 − e βds 0   at  e −1 −at =e x0 − β . a Proof of assertions (i). From Proposition 3 and Remark 8, we have kt = β1E(Xt )=p = β1E(Yt )+Kt −p=0 = β1x0 −βt+Kt −p=0 ,

MEAN REFLECTED SDES WITH JUMPS

29

so, we obtain that Z

t

kt dt

Kt = 0

Z =

t

β1Kt =p+βt−x0 dt, 0

and as Kt ≥ 0, we conclude that Kt = (p + βt − x0 )+ . Next, we have   1 (ln z)2 f (z) = √ exp − , 2 2πz the density function of a lognormal random variable, so we can obtain Z Z zf (z)dz = ληE(ξ) ηzλ(dz) = λη E

E

where ξ ∼ lognormal(0, 1), and we conclude that Z √ ηzλ(dz) = λη e. E

Finally, we deduce the exact solution Nt X √ Xt = X0 − (β + λ e)t + σBt + ηξi + Kt , i=0

where Nt ∼ P(λt) and ξi ∼ lognormal(0, 1).



Proof of assertions (ii). In this case, and using the same Proposition and Remark, we have kt = (E(−aXt ))− 1E(Xt )=p , which −at

Z

t

eas dKs = 0 Z t −at −at eas dKs ⇐⇒ −x0 e +p=e

E(Xt ) = p ⇐⇒ E(Yt ) − p + e

0

0

⇐⇒ Ks = ap, and Kt ≥ 0 ⇐⇒ −x0 e−at + p ≥ 0 p ⇐⇒ e−at ≤ x0 1 ⇐⇒ t ≥ (ln(x0 ) − ln(p)) := t∗ . a ∗ So, we conclude that Kt = ap(t − t )1t≥t∗ , where t∗ = a1 (ln(x0 ) − ln(p)). Next, by the definition of the process Yt in this case, ˜t , dYt = −aYt− dt + γYt− dBs + θYt− dN we have   Yt = X0 exp − (a + γ 2 /2 + λθ)t + γBt (1 + θ)Nt .

30

PH. BRIAND, A. GHANNOUM, AND C. LABART

Thanks to Itô formula we get      X 1 1 1 1 1 1 2 2 2 d = − 2 dYt + − γ Yt dt + d + 2 ∆Ys Yt 2 Yt3 Ys− + ∆Ys Ys− Yt Ys − s≤t   X 1 γ θ ˜ a γ2 1 θ , = dt − dBt − dNt + dt + d − + Yt Yt Yt− Yt (1 + θ)Ys− Ys − Ys− s≤t

and so  X θ2 Ys−1 = (a + γ − − + d − 1+θ s≤t     λθ2 θ −1 −1 2 ˜ = a+γ + Y dt − γYt dBt − Y −1 − dNt . 1+θ t 1+θ t Then, using integration by parts formula, we obtain dYt−1

2

)Yt−1 dt

γYt−1 dBt

˜ θYt−1 − dNt



−1 d(Xt Yt−1 ) = Xt− dYt−1 + Yt−1 ]t − dXt + d[X, Y

˜ = (a + γ 2 )Xt Yt−1 dt − γXt Yt−1 dBt − θXt− Yt−1 − dNt +



 X θ2 Xs− Ys−1 d − 1+θ s≤t

aXt Yt−1 dt

=

˜ − + γXt Yt−1 dBt + θXt− Yt−1 − dNt  2  X θ − γ 2 Xt Yt−1 dt − d Xs− Ys−1 − 1+θ s≤t Yt−1 dKt .

+

Yt−1 dKt

Finally, we deduce that t

Z Xt = Yt + Yt

Ys−1 dKs .

0

 Proof of assertions (iii). In that case, we have   at  Z t Z t e −1 ˜s eas dBs + e−at ηs eas dN Yt = e−at x0 − β + σs e−at a 0 0   at  Z t Z t e −1 −at =e x0 − (β + λη) eas dBs + e−at ηs eas dNs + σs e−at a 0 0 := ft + Gt + Ft , and ¯ t, Xt = Yt + e−at K

¯t = K

Z

t

eas dKs .

0

Hence ¯ t + α sin(Yt + e−at K ¯ t) − p h(Xt ) = Yt + e−at K   ¯ t + α sin(Yt ) cos(e−at K ¯ t ) + cos(Yt ) sin(e−at K ¯ t) − p = Yt + e−at K h n ¯ t + α cos(e−at K ¯ t ) sin(ft ) cos(Gt ) cos(Ft ) + cos(ft ) sin(Gt ) cos(Ft ) = Yt + e−at K o n ¯ t ) cos(ft ) cos(Gt ) cos(Ft ) + cos(ft ) cos(Gt ) sin(Ft ) − sin(ft ) sin(Gt ) sin(Ft ) + sin(e−at K oi − sin(ft ) sin(Gt ) sin(Ft ) − sin(ft ) cos(Gt ) sin(Ft ) − cos(ft ) sin(Gt ) sin(Ft ) − p.

MEAN REFLECTED SDES WITH JUMPS

31

On one side, since Gt is a centered gaussian random variable with variance V given by V = σ2

1 − e−2at sinh(at) = σ 2 e−at , 2a a

we obtain that

and

 iGt e E[cos(Gt )] = E

E[eiGt ] = e−V /2 ,   iGt e − e−iGt = 0, E[sin(Gt )] = E 2i    2 + e−iGt iGt −at σ = E(e ) = exp − e sinh(at) =: g(t). 2 2a

On the other side, iFt

E[e

   Z t as −at e dNs , ] = E exp iηe 0

by taking ‘a’small, we get    Z t iFt dNs E[e ] ≈ E exp iη 0 h  i ≈ E exp iηNt   ≈ exp λt(eiη − 1) , and so

    exp λt(eiη − 1) − exp λt(e−iη − 1) E[sin(Ft )] ≈

=: m(t),

2i     exp λt(eiη − 1) + exp λt(e−iη − 1)

E[cos(Ft )] ≈

=: n(t). 2 Using Remark 8, we conclude that, for small ‘a’,   ¯ t + α g(t)m(t) cos(ft + e−at K ¯ t ) + g(t)n(t) sin(ft + e−at K ¯ t) − p E[h(Xt )] ≈ E[Yt ] + e−at K ¯ t ). := Ft (K Therefore,  + ¯ t = sup F −1 (0) K s

and

s≤t

 + dKt = e−at d sup Fs−1 (0) . s≤t

 5.2. Illustrations. This computation works as follows. Let 0 = T0 < T1 < · · · < Tn = T be a subdivision of [0, T ] of step size T /n, n being a positive integer, let X be the unique solution ˜ i )0≤k≤n be its numerical approximation given by of the MRSDE (5.1) and let, for a given i, (X Tk ¯ l )0≤l≤L and (X ˜ i,l )0≤l≤L , L independent copies Algorithm 1. For a given integer L, we draw (X i 2 ˜ . We then approximate the L -error of Theorem 3 by: of X and X L 2 X ¯l i,l ˆ= 1 ˜ E max X − X Tk Tk . 0≤k≤n L l=1

32

PH. BRIAND, A. GHANNOUM, AND C. LABART

Figure 1. Case (i). n = 500, N = 100000, T = 1, β = 2, σ = 1, λ = 5, x0 = 1, p = 1/2.

ˆ w.r.t. log(N ). Data: E ˆ when N varies Figure 2. Case (i). Regression of log(E) from 100 to 2200 with step size 300. Parameters: n = 100, T = 1, β = 2, σ = 1, λ = 5, x0 = 1, p = 1/2, L = 1000.

MEAN REFLECTED SDES WITH JUMPS

Figure 3. Case (ii). Parameters: n = 500, N = 10000, T = 1, β = 0, a = 3, γ = 1, η = 1, λ = 2, x0 = 4, p = 1.

ˆ w.r.t. log(N ). Data: E ˆ when N varies Figure 4. Case (ii). Regression of log(E) from 100 to 800 with step size 100. Parameters: n = 1000, T = 1, β = 0, a = 3, γ = 1, η = 1, λ = 2, x0 = 4, p = 1, L = 1000.

33

34

PH. BRIAND, A. GHANNOUM, AND C. LABART

Figure 5. Case (iii). Parameters: n = 1000, N = 100000, T = 15, β = 10−2 , σ = 1, p = π/2, α = 0.9, a = 10−2 , x0 is the unique solution of x+α sin(x)−p = 0 plus 10−1 .

MEAN REFLECTED SDES WITH JUMPS

35

Appendix A. Appendices A.1. Proof of Lemma 4. Let s and t in [0, T ] such that s ≤ t. Firstly, we suppose that ϕ is a continuous function with compact support. In this case, there exists a sequence of Lipschitz continuous functions ϕn with compact support which converges uniformly to ϕ. Therefore, by using Proposition 2, we get |E[ϕ(Xt )] − E[ϕ(Xs )]| ≤ |E[ϕ(Xt )] − E[ϕn (Xt )]| + |E[ϕn (Xt )] − E[ϕn (Xs )]| + |E[ϕn (Xs )] − E[ϕ(Xs )]| ≤ E[|(ϕ − ϕn )(Xt )|] + Cn E[|Xt − Xs |] + |E[(ϕn − ϕ)(Xs )]| ≤ 2E[k ϕn − ϕ k∞ ] + Cn (E[|Xt − Xs |2 ])1/2 ≤ 2E[k ϕn − ϕ k∞ ] + Cn |t − s|1/2 . Thus, we obtain that lim sup |E[ϕ(Xt )] − E[ϕ(Xs )]| ≤ 2E[k ϕn − ϕ k∞ ]. t→s

This result is true for all n ≥ 1, so we deduce that lim sup |E[ϕ(Xt )] − E[ϕ(Xs )]| = 0, t→s

then we conclude the continuity of the function t 7−→ E[ϕ(Xt )]. Secondly, we consider the case that ϕ is a continuous function such that ∀x ∈ R, ∃C ∈ R, ϕ(x) ≤ C(1 + |x|p ). We define a sequence of functions ϕn , such that for all n ≥ 1 and x ∈ R, ϕn (x) = ϕ(x)θn (x) with θn smooth such that ( 1 if |x| ≤ n θn (x) = 0 if |x| > n + 1 Based on this definition, ϕn is a continuous function with compact support. Then we get |E[ϕ(Xt )] − E[ϕ(Xs )]| ≤ E [|ϕ − ϕn | (Xt )] + |E[ϕn (Xt )] − E[ϕn (Xs )]| + E [|ϕ − ϕn | (Xs )]     ≤ E 2|ϕ(Xt )|1|Xt |>n + |E[ϕn (Xt )] − E[ϕn (Xs )]| + E 2|ϕ(Xs )|1|Xs |>n " ! # ≤ CE

1 + sup |Xt |p t≤T

1supt≤T |Xt |>n + |E[ϕn (Xt )] − E[ϕn (Xs )]| .

Thus, by using the first part of this Lemma, we obtain that " lim sup |E[ϕ(Xt )] − E[ϕ(Xs )]| ≤ CE t→s

1 + sup |Xt |p t≤T

!

# 1supt≤T |Xt |>n .

This result is true for all n ≥ 1, then by using the dominated convergence theorem, we deduce that lim sup |E[ϕ(Xt )] − E[ϕ(Xs )]| = 0, t→s

and we conclude the continuity of the function t 7−→ E[ϕ(Xt )].

36

PH. BRIAND, A. GHANNOUM, AND C. LABART

References [ADEH99]

Philippe Artzner, Freddy Delbaen, Jean-Marc Eber, and David Heath. Coherent measures of risk. Math. Finance, 9(3):203–228, 1999. [BCdRGL16] Philippe Briand, Paul-Eric Chaudru de Raynal, Arnaud Guillin, and Céline Labart. Particles systems and numerical schemes for mean reflected stochastic differential equations. arXiv:1612.06886, 2016. Submitted. [BEH18] Philippe Briand, Romuald Elie, and Ying Hu. BSDEs with mean reflexion. Ann. Appl. Probab., 28(1):482–510, 2018. [CD18a] R. Carmona and F. Delarue. Probabilistic Theory of Mean Field Games with Applications I. Springer, 2018. [CD18b] R. Carmona and F. Delarue. Probabilistic Theory of Mean Field Games with Applications II. Springer, 2018. [CM08] Stéphane Crépey and Anis Matoussi. Reflected and doubly reflected BSDEs with jumps: a priori estimates and comparison. Ann. Appl. Probab., 18(5):2041–2069, 2008. [DL16a] R. Dumitrescu and C. Labart. Numerical approximation of doubly reflected bsdes with jumps and rcll obstacles. Journal of Math. Anal. and Appl., 442(1):206–243, 2016. [DL16b] R. Dumitrescu and C. Labart. Reflected scheme for doubly reflected bsdes with jumps and rcll obstacles. Journal of Comput. and Appl. Math., 296:827–839, 2016. [EHO05] E.H. Essaky, N. Harraj, and Y. Ouknine. Backward stochastic differential equation with two reflecting barriers and jumps. Stochastic Analysis and Applications, 23:921–938, 2005. [Ess08] E.H. Essaky. Reflected backward stochastic differential equation with jumps and RCLL obstacle. Bulletin des Sciences Mathématiques, (132):690–710, 2008. [FG15] Nicolas Fournier and Arnaud Guillin. On the rate of convergence in wasserstein distance of the empirical measure. Probab. Theory Related Fields, (162(3-4)):707—-738, 2015. [FS02] Hans Föllmer and Alexander Schied. Convex measures of risk and trading constraints. Finance Stoch., 6(4):429–447, 2002. [HH06] S. Hamadène and M. Hassani. 
BSDEs with two reacting barriers driven by a Brownian motion and an independent Poisson noise and related Dynkin game. Electronic Journal of Probability, 11:121– 145, 2006. [HO03] S. Hamadène and Y. Ouknine. Reflected backward stochastic differential equation with jumps and random obstable. Elec. Journ. of Prob., 8:1–20, 2003. [KH92] Arturo Kohatsu-Higa. Reflecting stochastic differential equations with jumps. Technical report, 1992. [Lep95] D. Lepingle. Euler scheme for reflected stochastic differential equations. Math. Comput. Simulation, 38:119–126, 1995. [LL06a] J.-M. Lasry and P.-L. Lions. Jeux à champ moyen. i. le cas stationnaire. C.R. Math. Acad. Sci. Paris, 343(9):619–625, 2006. [LL06b] J.-M. Lasry and P.-L. Lions. Jeux à champ moyen. ii. horizon fini et contrôle optimal. C.R. Math. Acad. Sci. Paris, 343(10):679–684, 2006. [LL07a] J.-M. Lasry and P.-L. Lions. Large investor trading impacts on volatility. Ann. Inst. H. Poincaré, Anal. Non Linéaire, 24(2):311–323, 2007. [LL07b] J.-M. Lasry and P.-L. Lions. Mean field games. Japanese Journal Math., 2:229–260, 2007. [MR85] José-Luis Menaldi and Maurice Robin. Reflected diffusion processes with jumps. The Annals of Probability, 13(2):319–341, 1985. [Pet95] R. Petterson. Approximations for stochastic differential equations with reflecting convex boundaries. Stoc. Proc. Appl, 59(2):295–308, 1995. [Pet97] R. Petterson. Penalization schemes for reflecting stochastic differential equations. Bernoulli, 3(4):403–414, 1997. [QS14] M.C. Quenez and A. Sulem. Reflected BSDEs and robust optimal stopping for dynamic risk measures with jumps. Stoc. Proc. Appl., (124(9)):3031–3054, 2014. [Sko61] A.V. Skorokhod. Stochastic equations for diffusion processes in a bounded region. Theory Probab. Appl, 6:264–274, 1961. [Slo94] L. Slominski. On approximation of solutions of multidimensional sdes with reflecting boundary conditions. Stoc. Proc. Appl, 50(2):197–219, 1994. [Slo01] L. Slominski. 
Euler’s approximations of solutions of sdes with reflecting boundary. Stoc. Proc. Appl, 92(2):317–337, 2001. [STR98] Ludger Rüschendorf S. T. Rachev. Mass transportation problems. Vol. 1: Theory. Vol. 2: Applications. 1998. [YS12] Hui Yu and Minghui Song. Numerical solutions of stochastic differential equations driven by poisson random measure with non-lipschitz coefficients. Journal of Applied Mathematics, 2012, 2012.

MEAN REFLECTED SDES WITH JUMPS

Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France E-mail address: [email protected] Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France Univ. Libanaise, LaMA-Liban, P.O. Box 37, Tripoli, Liban E-mail address: [email protected] Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France E-mail address: [email protected]

37