arXiv:1408.6471v1 [math.PR] 27 Aug 2014
Rate of convergence and asymptotic error distribution of Euler approximation schemes for fractional diffusions

Yaozhong Hu∗, Yanghui Liu, David Nualart†
Department of Mathematics, The University of Kansas, Lawrence, Kansas 66045, USA
Abstract

For a stochastic differential equation driven by a fractional Brownian motion with Hurst parameter H > 1/2 it is known that the existing (naive) Euler scheme has the rate of convergence n^{1-2H}, which means no convergence to zero of the error when H is formally set to 1/2 (the standard Brownian motion case). In this paper we introduce a new (modified Euler) approximation scheme which is closer to the classical Euler scheme for diffusion processes, and it has the rate of convergence γ_n^{-1}, where γ_n = n^{2H-1/2} when H < 3/4, γ_n = n/\sqrt{\log n} when H = 3/4, and γ_n = n if H > 3/4. In particular, the rate of convergence becomes n^{-1/2} when H is formally set to 1/2. Furthermore, we study the asymptotic behavior of the fluctuations of the error. More precisely, if {X_t, 0 ≤ t ≤ T} is the solution of a stochastic differential equation driven by a fractional Brownian motion and if {X_t^n, 0 ≤ t ≤ T} is its approximation obtained by the new modified Euler scheme, then we prove that γ_n(X^n − X) converges stably to the solution of a linear stochastic differential equation driven by a matrix-valued Brownian motion, when H ∈ (1/2, 3/4]. In the case H > 3/4, we show the L^p convergence of n(X_t^n − X_t), and the limiting process is identified as the solution of a linear stochastic differential equation driven by a matrix-valued Rosenblatt process. The rate of weak convergence is also deduced for this scheme. We also apply our approach to the naive Euler scheme. The main tools are fractional calculus, Malliavin calculus, and the fourth moment theorem.
1 Introduction
Consider the following stochastic differential equation (SDE) on R^d:
\[ X_t = x + \int_0^t b(X_s)\,ds + \sum_{j=1}^m \int_0^t \sigma^j(X_s)\,dB_s^j, \qquad t \in [0,T], \tag{1.1} \]
where x ∈ R^d, B = (B^1, . . . , B^m) is an m-dimensional fractional Brownian motion (fBm) with Hurst parameter H ∈ (1/2, 1), and b, σ^1, . . . , σ^m : R^d → R^d are continuous functions. The above stochastic integrals are pathwise Riemann–Stieltjes integrals. If σ^1, . . . , σ^m are continuously differentiable, their partial derivatives are bounded and locally Hölder continuous of order δ > 1/H − 1, and b is Lipschitz, then equation (1.1) has a unique solution, which is Hölder continuous of order γ for any 0 < γ < H. This result was first proved by Lyons in [13] using Young integrals (see [30]) and p-variation estimates, and later by Nualart and Rascanu in [22] using fractional calculus (see [31]).
∗Y. Hu is partially supported by a grant from the Simons Foundation #209206. †D. Nualart is supported by the NSF grant DMS1208625 and the ARO grant FED0070445.
Keywords: fractional Brownian motion, stochastic differential equations, Euler scheme, rate of convergence, fractional calculus, Malliavin calculus, stable convergence, fourth moment theorem, Rosenblatt processes.
We are interested in numerical approximations for the solution to equation (1.1). For simplicity of the presentation we consider uniform partitions of the interval [0, T]: t_i = iT/n, i = 0, . . . , n. For every positive integer n, we define η(t) = t_i when t_i ≤ t < t_i + T/n. The following naive Euler numerical approximation scheme has been previously studied:
\[ X_t^n = x + \int_0^t b(X_{\eta(s)}^n)\,ds + \sum_{j=1}^m \int_0^t \sigma^j(X_{\eta(s)}^n)\,dB_s^j, \qquad t \in [0,T]. \tag{1.2} \]
This scheme can also be written as
\[ X_t^n = X_{t_k}^n + b(X_{t_k}^n)(t - t_k) + \sum_{j=1}^m \sigma^j(X_{t_k}^n)(B_t^j - B_{t_k}^j), \qquad t_k \le t \le t_{k+1},\ k = 0, 1, \dots, n-1, \]
with X_0^n = x.
It was proved by Mishura [16] that for any real number ε > 0 there exists a random variable C such that, almost surely,
\[ \sup_{0 \le t \le T} |X_t^n - X_t| \le C n^{1-2H+\varepsilon}. \]
Moreover, the convergence rate n^{1−2H} is sharp for this scheme, in the sense that n^{2H−1}[X_t^n − X_t] converges almost surely to a finite and nonzero limit. This has been proved in the one-dimensional case by Nourdin and Neuenkirch in [17] using the Doss representation of the solution (see also Theorem 10.1 below). Notice that if H = 1/2, then 2H − 1 = 0. This shows that the numerical scheme (1.2) has a rate of convergence different from that of the Euler–Maruyama scheme (see [6, 11]) for the classical Brownian motion. This is not surprising, because for H = 1/2 the sequence X_t^n converges to the solution of the corresponding Itô equation. It is then natural to ask the following question: can we find a numerical scheme that generalizes the Euler–Maruyama scheme to the fBm case? In this paper we introduce a new approximation scheme that we call the modified Euler scheme:
\[ X_t^n = x + \int_0^t b(X_{\eta(s)}^n)\,ds + \sum_{j=1}^m \int_0^t \sigma^j(X_{\eta(s)}^n)\,dB_s^j + H \sum_{j=1}^m \int_0^t (\nabla \sigma^j \sigma^j)(X_{\eta(s)}^n)\,(s - \eta(s))^{2H-1}\,ds, \tag{1.3} \]
or, equivalently, for any t ∈ [t_k, t_{k+1}],
\[ X_t^n = X_{t_k}^n + b(X_{t_k}^n)(t - t_k) + \sum_{j=1}^m \sigma^j(X_{t_k}^n)(B_t^j - B_{t_k}^j) + \frac{1}{2}\sum_{j=1}^m (\nabla \sigma^j \sigma^j)(X_{t_k}^n)(t - t_k)^{2H}, \]
with X_0^n = x. Here ∇σ^j denotes the d × d matrix \big( \partial \sigma^{j,i} / \partial x^k \big)_{1 \le i,k \le d}, and (\nabla \sigma^j \sigma^j)^i = \sum_{k=1}^d \frac{\partial \sigma^{j,i}}{\partial x^k} \sigma^{j,k}.
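A sketch of the corresponding recursion in Python (our naming, not from the paper; `grad_sigma_sigma(x)` is assumed to return the d × m matrix whose j-th column is (∇σ^j σ^j)(x)):

```python
import numpy as np

def modified_euler(x0, b, sigma, grad_sigma_sigma, dB, T, H):
    """One path of the modified Euler scheme (1.3) on the uniform grid.
    The extra drift adds (1/2) (t_{k+1} - t_k)^{2H} * (nabla sigma^j sigma^j),
    summed over j, matching the recursion displayed above."""
    n = dB.shape[0]
    dt = T / n
    corr = 0.5 * dt**(2 * H)   # (t_{k+1} - t_k)^{2H} / 2
    x = np.asarray(x0, dtype=float).copy()
    path = [x.copy()]
    for k in range(n):
        x = (x + b(x) * dt + sigma(x) @ dB[k]
             + corr * grad_sigma_sigma(x).sum(axis=1))
        path.append(x.copy())
    return np.array(path)
```

With H formally set to 1/2 the correction weight becomes dt/2, i.e. the usual Stratonovich-to-Itô drift correction.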
Notice that if we formally set H = 1/2 and replace B by a standard Brownian motion W, this is the classical Euler scheme for the Stratonovich SDE
\[ X_t = x + \int_0^t b(X_s)\,ds + \sum_{j=1}^m \int_0^t \sigma^j(X_s)\,dW_s^j = x + \int_0^t b(X_s)\,ds + \sum_{j=1}^m \int_0^t \sigma^j(X_s)\,\delta W_s^j + \frac{1}{2}\int_0^t \sum_{j=1}^m (\nabla \sigma^j \sigma^j)(X_s)\,ds. \]
In the above and throughout this paper, d denotes the Stratonovich integral and δ denotes the Itô (or Skorohod) integral.
For our new modified Euler scheme (1.3) we shall prove the following estimate:
\[ \sup_{0 \le t \le T} \big( E|X_t - X_t^n|^p \big)^{1/p} \le C \gamma_n^{-1}, \tag{1.4} \]
for any p ≥ 1, where
\[ \gamma_n = \begin{cases} n^{2H - 1/2} & \text{if } \tfrac12 < H < \tfrac34, \\ n/\sqrt{\log n} & \text{if } H = \tfrac34, \\ n & \text{if } \tfrac34 < H < 1. \end{cases} \tag{1.5} \]
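The rate (1.5) can be packaged as a small helper (our naming), useful when normalizing empirical errors in simulation studies of the scheme:

```python
import math

def gamma_n(n, H):
    """Normalizing rate gamma_n from (1.5); the L^p error of the modified
    Euler scheme is of order 1/gamma_n."""
    if 0.5 < H < 0.75:
        return n**(2 * H - 0.5)
    if H == 0.75:
        return n / math.sqrt(math.log(n))
    if 0.75 < H < 1.0:
        return float(n)
    raise ValueError("gamma_n is defined for H in (1/2, 1)")
```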
Note that in (1.4), if we formally set H = 1/2, then the convergence rate is n^{−1/2}, which is exactly the convergence rate of the classical Euler–Maruyama scheme in the Brownian motion case. This suggests that the modified Euler scheme is the natural generalization of the Euler–Maruyama scheme (1.2) to the fBm setting. The proof of this result combines the techniques of Malliavin calculus with classical fractional calculus. On the other hand, we make use of uniform estimates for the moments of all orders of the processes X, X^n and their first and second order Malliavin derivatives, which can be obtained using techniques of fractional calculus, following the approach used, for instance, by Hu and Nualart in [7]. The idea of the proof is to properly decompose the error X_t − X_t^n into a weighted quadratic variation term plus a higher order term, that is,
\[ X_t - X_t^n = \sum_{i,j=1}^m \sum_{k=0}^{\lfloor nt/T \rfloor} f^{i,j}(t_k) \int_{t_k}^{t_{k+1}} \int_{t_k}^{s} \delta B_u^i\, \delta B_s^j + R_t^n, \tag{1.6} \]
where ⌊x⌋ denotes the integer part of a real number x. The weighted quadratic variation term provides the desired rate of convergence in L^p.
To further study this new scheme and compare it with the classical Brownian motion case, it is natural to ask the following questions: is the rate of convergence (1.4) exact? Namely, does the quantity γ_n(X_t − X_t^n) have a nonzero limit? If so, how can the limit be identified, and is there an analogy with the classical Brownian motion case? In the second part of the paper we give a complete answer to these questions. The weighted variation term in (1.6) is still a key ingredient in our study of the scheme. As in the Breuer–Major theorem, the behavior differs in the cases H ∈ (1/2, 3/4] and H ∈ (3/4, 1). If H ∈ (1/2, 3/4], we show that γ_n(X_t − X_t^n) converges stably to the solution of a linear stochastic differential equation driven by a matrix-valued Brownian motion W independent of B. The main tools in this case are Malliavin calculus and the fourth moment theorem. We also make use of a recent limit theorem in law for weighted sums proved in [3]. In the case H ∈ (3/4, 1), we show the convergence of γ_n(X_t − X_t^n) in L^p to the solution of a linear stochastic differential equation driven by a matrix-valued Rosenblatt process. Again we use the techniques of Malliavin calculus and the convergence in L^p of weighted sums, which is obtained by applying the approach introduced in [3]. We refer to [18] for a discussion of the asymptotic behavior of some weighted Hermite variations of one-dimensional fBm, which are related to the results proved here.
We also obtain a weak approximation result for our new numerical scheme. In this case, the rate is n^{−1} for all values of H. More precisely, we show that n[E(f(X_t)) − E(f(X_t^n))] converges to a finite nonzero limit which can be explicitly computed.
Let us mention that the techniques of Malliavin calculus also allow us to provide an alternative and simpler proof of the fact that the rate of convergence of the numerical scheme (1.2) is of the order n^{1−2H} and that this rate is optimal, extending to the multidimensional case the results by Neuenkirch and Nourdin in [17]. If the driving process is a standard Brownian motion, similar problems have been studied in [9, 12] and the references therein. See also [2] for the precise L^2-limit and for a discussion of the "best" partition. In the case 1/4 < H < 1/2 the SDE (1.1) can be solved using the theory of rough paths introduced by Lyons (see [14]). There are also a number of results on the rate of convergence of Euler-type numerical schemes in this case (see, for instance, the paper by Deya, Neuenkirch and Tindel [4] for a Milstein-type scheme without Lévy area in the case 1/3 < H < 1/2, and the monograph by Friz and Victoir [5]).
The paper is organized as follows. The next section contains some basic materials on fractional calculus and Malliavin calculus that will be used along the paper, and introduces a matrix-valued Brownian motion and a generalized Rosenblatt process, both of which are key ingredients in our results on the asymptotic behavior of the error (see Section 6 and Section 8). In Section 3, we derive the necessary estimates for the uniform norms and Hölder seminorms of the processes X, X^n and their Malliavin derivatives. In Section 4, we prove our result on the rate of convergence in L^p for the numerical scheme (1.3). In Section 5, we prove a central limit theorem for weighted quadratic sums, and then in Section 6 we apply this result to the study of the asymptotic behavior of the error γ_n(X_t − X_t^n) in the case H ∈ (1/2, 3/4]. In Section 7, we study the L^p-convergence of some weighted random sums. In Section 8, we apply the results of Section 7 to establish the L^p-limit of n(X_t − X_t^n) in the case H ∈ (3/4, 1). The weak approximation result is discussed in Section 9. In Section 10, we deal with the numerical scheme (1.2). In Section 11, we prove some auxiliary results. In Section 12, we compare some simulation results obtained by the Euler scheme and the modified Euler scheme.
2 Preliminaries and notations
Throughout the paper we consider a fixed time interval [0, T]. To simplify the presentation we only deal with the uniform partition of this interval, that is, for each n ≥ 1 and i = 0, 1, . . . , n we set t_i = iT/n. We use C and K to represent constants that are independent of n and whose values may change from line to line.
2.1 Elements of fractional calculus
In this subsection we introduce the definitions of the fractional integral and derivative operators and review some of their properties. Let a, b ∈ [0, T] with a < b and let β ∈ (0, 1). We denote by C^β(a, b) the space of β-Hölder continuous functions on the interval [a, b]. For a function x : [0, T] → R, ‖x‖_{a,b,β} denotes the β-Hölder seminorm of x on [a, b], that is,
\[ \|x\|_{a,b,\beta} = \sup\left\{ \frac{|x_u - x_v|}{(v - u)^\beta} : a \le u < v \le b \right\}. \]
We will also make use of the following seminorm:
\[ \|x\|_{a,b,\beta,n} = \sup\left\{ \frac{|x_u - x_v|}{(v - u)^\beta} : a \le u < v \le b,\ \eta(u) = u \right\}. \tag{2.1} \]
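For sampled paths these seminorms can be evaluated directly; a small illustration (ours, not from the paper):

```python
import numpy as np

def holder_seminorm(t, x, beta):
    """Discrete beta-Hoelder seminorm of a path sampled at grid points t:
    the maximum of |x_u - x_v| / (v - u)^beta over all pairs u < v."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(t), k=1)   # all index pairs with t_i < t_j
    return np.max(np.abs(x[j] - x[i]) / (t[j] - t[i])**beta)
```

Restricting the pairs so that the left point is a partition point η(u) = u gives the discrete analogue of (2.1).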
Recall that for each n ≥ 1 and i = 0, 1, . . . , n, t_i = iT/n, and η(t) = t_i when t_i ≤ t < t_i + T/n. We will denote the uniform norm of x on the interval [a, b] by ‖x‖_{a,b,∞}. When a = 0 and b = T, we will simply write ‖x‖_∞ for ‖x‖_{0,T,∞} and ‖x‖_β for ‖x‖_{0,T,β}.
Let f ∈ L^1([a, b]) and α > 0. The left-sided and right-sided fractional Riemann–Liouville integrals of f of order α are defined, for almost all t ∈ (a, b), by
\[ I_{a+}^\alpha f(t) = \frac{1}{\Gamma(\alpha)} \int_a^t (t - s)^{\alpha-1} f(s)\,ds \]
and
\[ I_{b-}^\alpha f(t) = \frac{(-1)^{-\alpha}}{\Gamma(\alpha)} \int_t^b (s - t)^{\alpha-1} f(s)\,ds, \]
respectively, where (−1)^α = e^{−iπα} and Γ(α) = \int_0^\infty r^{\alpha-1} e^{-r}\,dr is the Gamma function. Let I_{a+}^α(L^p) (resp. I_{b−}^α(L^p)) be the image of L^p([a, b]) by the operator I_{a+}^α (resp. I_{b−}^α). If f ∈ I_{a+}^α(L^p) (resp. f ∈ I_{b−}^α(L^p)) and 0 < α < 1, then the fractional Weyl derivatives are defined as
\[ D_{a+}^\alpha f(t) = \frac{1}{\Gamma(1-\alpha)} \left( \frac{f(t)}{(t-a)^\alpha} + \alpha \int_a^t \frac{f(t) - f(s)}{(t-s)^{\alpha+1}}\,ds \right) \tag{2.2} \]
and
\[ D_{b-}^\alpha f(t) = \frac{(-1)^\alpha}{\Gamma(1-\alpha)} \left( \frac{f(t)}{(b-t)^\alpha} + \alpha \int_t^b \frac{f(t) - f(s)}{(s-t)^{\alpha+1}}\,ds \right), \tag{2.3} \]
where a < t < b.
Suppose that f ∈ C^λ(a, b) and g ∈ C^µ(a, b) with λ + µ > 1. Then, according to [30], the Riemann–Stieltjes integral \int_a^b f\,dg exists. The following proposition can be regarded as a fractional integration by parts formula, and provides an explicit expression for the integral \int_a^b f\,dg in terms of fractional derivatives. We refer to [31] for additional details.

Proposition 2.1 Suppose that f ∈ C^λ(a, b) and g ∈ C^µ(a, b) with λ + µ > 1. Let λ > α and µ > 1 − α. Then the Riemann–Stieltjes integral \int_a^b f\,dg exists and it can be expressed as
\[ \int_a^b f\,dg = (-1)^\alpha \int_a^b D_{a+}^\alpha f(t)\, D_{b-}^{1-\alpha} g_{b-}(t)\,dt, \tag{2.4} \]
where g_{b−}(t) = 1_{(a,b)}(t)(g(t) − g(b−)).

The notion of Hölder continuity and the above result on the existence of Riemann–Stieltjes integrals can be generalized to functions taking values in normed spaces. We fix a probability space (Ω, F, P) and denote by ‖·‖_p the norm in the space L^p := L^p(Ω), where p ≥ 1.

Definition 2.2 Let f = {f(t), t ∈ [0, T]} be a stochastic process such that f(t) ∈ L^p for all t ∈ [0, T]. We say that f is Hölder continuous of order β > 0 in L^p if
\[ \|f(t) - f(s)\|_p \le C|t - s|^\beta \tag{2.5} \]
for all s, t ∈ [0, T].

The following result shows that, under proper Hölder continuity assumptions on f and g, the Riemann–Stieltjes integral \int_0^T f\,dg exists and equation (2.4) holds.

Proposition 2.3 Let the positive numbers p_0, λ, µ, p, q satisfy p_0 ≥ 1, λ + µ > 1, 1/p + 1/q = 1, p_0 p > 1/µ and p_0 q > 1/λ. Assume that f = {f(t), t ∈ [0, T]} and g = {g(t), t ∈ [0, T]} are Hölder continuous stochastic processes of order µ and λ in L^{p_0 p} and L^{p_0 q}, respectively, and f(0) ∈ L^{p_0 p}. Let π : 0 = t_0 < t_1 < · · · < t_N = T be a partition of [0, T], and let ξ_i satisfy t_{i−1} ≤ ξ_i ≤ t_i. Then the sum \sum_{i=1}^N f(\xi_i)[g(t_i) - g(t_{i-1})] converges in L^{p_0} to the Riemann–Stieltjes integral \int_0^T f\,dg as |π| tends to zero, where |π| = \max_{1 \le i \le N} |t_i - t_{i-1}|, and equation (2.4) holds.

Proposition 2.3 can be proved through a slight modification of the proof in the real-valued case given in [31], using Hölder's inequality.
2.2 Elements of Malliavin calculus
We briefly recall some basic facts about the stochastic calculus of variations with respect to an fBm. We refer the reader to [19] for further details. Let B = {(B_t^1, . . . , B_t^m), t ∈ [0, T]} be an m-dimensional fBm with Hurst parameter H ∈ (1/2, 1), defined on some complete probability space (Ω, F, P). Namely, B is a mean zero Gaussian process with covariance
\[ E(B_t^i B_s^j) = \frac12 \big( t^{2H} + s^{2H} - |t-s|^{2H} \big)\delta_{ij}, \qquad i, j = 1, . . . , m, \]
for all s, t ∈ [0, T], where δ_{ij} is the Kronecker symbol. Let \mathcal H be the Hilbert space defined as the closure of the set of step functions on [0, T] with respect to the scalar product
\[ \langle 1_{[0,t]}, 1_{[0,s]} \rangle_{\mathcal H} = \frac12 \big( t^{2H} + s^{2H} - |t-s|^{2H} \big). \]
It is easy to see that the covariance of the fBm can be written as
\[ \alpha_H \int_0^t \int_0^s |u - v|^{2H-2}\,du\,dv, \]
where α_H = H(2H − 1). This implies that
\[ \langle \psi, \phi \rangle_{\mathcal H} = \alpha_H \int_0^T \int_0^T \psi_u \phi_v |u - v|^{2H-2}\,du\,dv \]
for any pair of step functions φ and ψ on [0, T]. The elements of the Hilbert space \mathcal H, or more generally, of the space \mathcal H^{\otimes l}, may not be functions but distributions (see [26] and [27]). We can find a linear space of functions contained in \mathcal H^{\otimes l} in the following way. Let |\mathcal H|^{\otimes l} be the linear space of measurable functions φ on [0, T]^l ⊂ R^l such that
\[ \|\phi\|^2_{|\mathcal H|^{\otimes l}} := \alpha_H^l \int_{[0,T]^{2l}} |\phi_u|\,|\phi_v|\,|u_1 - v_1|^{2H-2} \cdots |u_l - v_l|^{2H-2}\,du\,dv < \infty, \]
where u = (u_1, . . . , u_l), v = (v_1, . . . , v_l) ∈ [0, T]^l. Suppose φ ∈ L^{1/H}([0, T]^l). Then the following estimate holds:
\[ \|\phi\|_{|\mathcal H|^{\otimes l}} \le b_{H,l}\, \|\phi\|_{L^{1/H}([0,T]^l)} \tag{2.6} \]
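As a quick numerical sanity check (ours, not from the paper), the closed form of the inner product ⟨1_{[0,t]}, 1_{[0,s]}⟩_H can be compared with its double-integral representation with density α_H|u − v|^{2H−2}:

```python
import numpy as np

def inner_closed(t, s, H):
    """<1_[0,t], 1_[0,s]>_H = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

def inner_integral(t, s, H, n=2000):
    """alpha_H * int_0^t int_0^s |u - v|^{2H-2} du dv by the midpoint rule.
    The singularity at u = v is integrable since 2H - 2 > -1 for H > 1/2;
    a tiny floor guards against accidental midpoint coincidences."""
    a_H = H * (2 * H - 1)
    u = (np.arange(n) + 0.5) * (t / n)
    v = (np.arange(n) + 0.5) * (s / n)
    vals = np.maximum(np.abs(u[:, None] - v[None, :]), 1e-9)
    return a_H * np.sum(vals**(2 * H - 2)) * (t / n) * (s / n)
```

The midpoint rule converges slowly near the diagonal, so agreement is approximate for moderate grid sizes.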
for some constant b_{H,l} > 0 (the case l = 1 was proved in [15]; the extension to the general case is easy, see [8, equation (2.5)]).
The mapping 1_{[0,t_1]} × · · · × 1_{[0,t_m]} \mapsto (B_{t_1}^1, . . . , B_{t_m}^m) can be extended to a linear isometry between \mathcal H^m and the Gaussian space spanned by B. We denote this isometry by h \mapsto B(h). In this way, {B(h), h ∈ \mathcal H^m} is an isonormal Gaussian process indexed by the Hilbert space \mathcal H^m. Let S be the set of smooth and cylindrical random variables of the form F = f(B_{s_1}, . . . , B_{s_N}), where N ≥ 1 and f ∈ C_b^\infty(R^{m \times N}). For each j = 1, . . . , m and t ∈ [0, T], the derivative operator D^j on F ∈ S is defined as the \mathcal H-valued random variable
\[ D_t^j F = \sum_{i=1}^N \frac{\partial f}{\partial x_i^j}(B_{s_1}, . . . , B_{s_N})\, 1_{[0,s_i]}(t), \qquad t \in [0, T]. \]
We can iterate this procedure to define higher order derivatives D^{j_1,...,j_l} F, which take values in \mathcal H^{\otimes l}. For any p ≥ 1 and any integer k ≥ 1, we define the Sobolev space \mathbb D^{k,p} as the closure of S with respect to the norm
\[ \|F\|_{k,p}^p = E[|F|^p] + E\Bigg[ \Bigg( \sum_{l=1}^k \sum_{j_1,...,j_l=1}^m \big\| D^{j_1,...,j_l} F \big\|_{\mathcal H^{\otimes l}}^2 \Bigg)^{p/2} \Bigg]. \]
If V is a Hilbert space, \mathbb D^{k,p}(V) denotes the corresponding Sobolev space of V-valued random variables. For any j = 1, . . . , m we denote by δ^j the adjoint of the derivative operator D^j. We say that u ∈ Dom δ^j if there is a δ^j(u) ∈ L^2(Ω) such that for any F ∈ \mathbb D^{1,2} the following duality relationship holds:
\[ E\langle u, D^j F \rangle_{\mathcal H} = E(\delta^j(u) F). \tag{2.7} \]
The random variable δ^j(u) is also called the Skorohod integral of u with respect to the fBm B^j, and we use the notation δ^j(u) = \int_0^T u_t\, \delta B_t^j. Let F ∈ \mathbb D^{1,2} and let u be in the domain of δ^j such that Fu ∈ L^2(Ω; \mathcal H). Then (see [20]) Fu belongs to the domain of δ^j and the following equality holds:
\[ \delta^j(Fu) = F\delta^j(u) - \langle D^j F, u \rangle_{\mathcal H}, \tag{2.8} \]
provided the right-hand side of (2.8) is square integrable.
Suppose that u = {u_t, t ∈ [0, T]} is a stochastic process whose trajectories are Hölder continuous of order γ > 1 − H. Then, for any j = 1, . . . , m, the Riemann–Stieltjes integral \int_0^T u_t\,dB_t^j exists. On the other hand, if u ∈ \mathbb D^{1,2}(\mathcal H), the derivative D_s^j u_t exists and satisfies, almost surely,
\[ \int_0^T \int_0^T |D_s^j u_t|\,|t - s|^{2H-2}\,ds\,dt < \infty, \]
and E\|D^j u\|^2_{L^{1/H}([0,T]^2)} < \infty, then (see Proposition 5.2.3 in [20]) the Skorohod integral \int_0^T u_t\,\delta B_t^j exists and we have the following relationship between these two stochastic integrals:
\[ \int_0^T u_t\,dB_t^j = \int_0^T u_t\,\delta B_t^j + \alpha_H \int_0^T \int_0^T D_s^j u_t\, |t - s|^{2H-2}\,ds\,dt. \tag{2.9} \]
The following result is Meyer's inequality for the Skorohod integral (see, for example, Proposition 1.5.7 of [20]): given p > 1 and an integer k ≥ 1, there is a constant c_{k,p} such that
\[ \|\delta^k(u)\|_p \le c_{k,p}\, \|u\|_{\mathbb D^{k,p}(\mathcal H^{\otimes k})} \qquad \text{for all } u \in \mathbb D^{k,p}(\mathcal H^{\otimes k}). \tag{2.10} \]
Applying (2.6) and then the Minkowski inequality to the right-hand side of (2.10) yields
\[ \|\delta^k(u)\|_p \le C\, \Big\| \|u\|_{L^{1/H}([0,T]^k)} \Big\|_p + C \sum_{l=1}^k \sum_{j_1,...,j_l=1}^m \Big\| \|D^{j_1,...,j_l} u\|_{L^{1/H}([0,T]^{k+l})} \Big\|_p \tag{2.11} \]
for all u ∈ \mathbb D^{k,p}(\mathcal H^{\otimes k}), provided pH ≥ 1.
2.3 Stable convergence

Let Y_n, n ∈ N, be a sequence of random variables defined on a probability space (Ω, F, P) with values in a Polish space (E, E). We say that Y_n converges stably to the limit Y, where Y is defined on an extension (Ω', F', P') of the original probability space, if and only if for any bounded F-measurable random variable Z it holds that (Y_n, Z) ⇒ (Y, Z) as n → ∞, where ⇒ denotes convergence in law. Note that stable convergence is stronger than weak convergence but weaker than convergence in probability. We refer to [10] and [1] for more details on this concept.
2.4 A matrix-valued Brownian motion
The aim of this subsection is to define a matrix-valued Brownian motion that will play a fundamental role in our central limit theorem. First, we introduce two constants Q and R which depend on H. Denote by µ the measure on R^2 with density |s − t|^{2H−2}. Define, for each p ∈ Z,
\[ Q(p) = T^{4H} \int_0^1 \int_p^{p+1} \int_0^t \int_p^s \mu(dv\,du)\,\mu(ds\,dt), \qquad R(p) = T^{4H} \int_0^1 \int_p^{p+1} \int_t^1 \int_p^s \mu(dv\,du)\,\mu(ds\,dt). \]
It is not difficult to check that for 1/2 < H < 3/4 the series \sum_{p \in \mathbb Z} Q(p) and \sum_{p \in \mathbb Z} R(p) are convergent, and that for H = 3/4 they diverge at the rate log n. Then we set (we omit the explicit dependence of Q and R on H to simplify the notation)
\[ Q = \sum_{p \in \mathbb Z} Q(p), \quad R = \sum_{p \in \mathbb Z} R(p), \qquad \text{if } H \in (\tfrac12, \tfrac34); \qquad Q = \lim_{n \to \infty} \frac{\sum_{|p| \le n} Q(p)}{\log n} = \frac{T^{4H}}{2}, \quad R = \lim_{n \to \infty} \frac{\sum_{|p| \le n} R(p)}{\log n} = \frac{T^{4H}}{2}, \qquad \text{if } H = \tfrac34. \tag{2.12} \]

Lemma 2.4 The constants Q and R satisfy R ≤ Q.

Proof: If H = 3/4, we see from (2.12) that these two constants are both equal to T^{4H}/2. Suppose H ∈ (1/2, 3/4). Consider the functions on R^2 defined by φ_p(v, s) = 1_{\{p \le v \le s \le p+1\}} and ψ_p(v, s) = 1_{\{p \le s \le v \le p+1\}}, p ∈ Z. Then
\[ \Big\| \frac{1}{\sqrt n} \sum_{p=0}^{n-1} (\varphi_p - \psi_p) \Big\|_{L^2(\mathbb R^2, \mu)}^2 = \frac{2}{n} \sum_{p,q=0}^{n-1} \Big( \big\langle 1_{\{p \le v \le s \le p+1\}},\, 1_{\{q \le v \le s \le q+1\}} \big\rangle_{L^2(\mathbb R^2,\mu)} - \big\langle 1_{\{p \le v \le s \le p+1\}},\, 1_{\{q \le s \le v \le q+1\}} \big\rangle_{L^2(\mathbb R^2,\mu)} \Big) = \frac{2}{n} \sum_{p,q=0}^{n-1} \big( Q(p-q) - R(p-q) \big), \]
which converges to 2(Q − R) as n tends to infinity. Since the left-hand side is nonnegative, we conclude Q ≥ R. □
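The constants Q(p) and R(p) can be estimated numerically. The Monte Carlo sketch below (ours, assuming the integration domains as written above) rewrites each quadruple integral as an expectation over uniformly sampled points:

```python
import numpy as np

def qr_mc(p, H, T=1.0, which="Q", n=200_000, seed=0):
    """Monte Carlo estimate of Q(p) (which="Q") or R(p) (which="R"):
    sample t ~ U(0,1), s ~ U(p,p+1), v ~ U(p,s), and u ~ U(0,t) for Q
    or u ~ U(t,1) for R, then average the integrand weighted by the
    volume of the inner (u, v) box."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, 1.0, n)
    s = rng.uniform(p, p + 1.0, n)
    v = rng.uniform(p, s)                        # v_i ~ U(p, s_i)
    if which == "Q":
        u, w = rng.uniform(0.0, t), t * (s - p)
    else:
        u, w = rng.uniform(t, 1.0), (1.0 - t) * (s - p)
    f = np.abs(u - v)**(2*H - 2) * np.abs(s - t)**(2*H - 2)
    return T**(4 * H) * np.mean(w * f)
```

For 1/2 < H < 3/4 the estimates decay in p, consistent with the summability of \sum_p Q(p) and \sum_p R(p).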
Let \widetilde W^{0,ij} = \{\widetilde W_t^{0,ij}, t \in [0, T]\}, i ≤ j, i, j = 1, . . . , m, and \widetilde W^{1,ij} = \{\widetilde W_t^{1,ij}, t \in [0, T]\}, i, j = 1, . . . , m, be independent standard Brownian motions. When i > j, we define \widetilde W_t^{0,ij} = \widetilde W_t^{0,ji}. The matrix-valued Brownian motion (W^{ij})_{1 \le i,j \le m} is defined as follows:
\[ W^{ii} = \frac{\alpha_H}{\sqrt T} \sqrt{Q + R}\; \widetilde W^{1,ii} \qquad \text{and} \qquad W^{ij} = \frac{\alpha_H}{\sqrt T} \Big( \sqrt{Q - R}\; \widetilde W^{1,ij} + \sqrt{R}\; \widetilde W^{0,ij} \Big) \quad \text{when } i \ne j. \]
Notice that this definition makes sense because R ≤ Q. The random matrix W_t is not symmetric when H < 3/4 (see the plot and table below). For i, j, i', j' = 1, . . . , m, the covariance E(W_t^{ij} W_s^{i'j'}) is equal to
\[ (t \wedge s)\, \frac{\alpha_H^2}{T} \big( R\,\delta_{ji'}\delta_{ij'} + Q\,\delta_{jj'}\delta_{ii'} \big), \]
where δ is the Kronecker function. In the following plot and table, we consider the two quantities
\[ q = \frac{\alpha_H^2}{T^{4H}} Q \qquad \text{and} \qquad r = \frac{\alpha_H^2}{T^{4H}} R \]
for H ∈ (1/2, 3/4). We see that the values of q and r approach 0.5 and 0, respectively, as H tends to 1/2, and both of them tend to infinity as H gets closer to 3/4.

[Figure: q and r plotted against H; x-axis "Value of H" from 0.5 to 0.75, y-axis "Value of Q and R" from 0 to 0.8.]

H: 0.5010, 0.5260, 0.5510, 0.5760, 0.6010, 0.6260, 0.6510, 0.6760, 0.7010, 0.7260
q: 0.4990, 0.4763, 0.4580, 0.4445, 0.4369, 0.4375, 0.4522, 0.4852, 0.5669, 0.7290
r: 9.9868×10⁻⁴, 0.0256, 0.0503, 0.0763, 0.1053, 0.1400, 0.1845, 0.2551, 0.3689, 0.6149

2.5 A matrix-valued generalized Rosenblatt process
In this subsection we introduce a generalized Rosenblatt process which will appear in the limiting result proved in Section 8 when H > 3/4. Consider an m-dimensional fBm B_t = (B_t^1, . . . , B_t^m) with Hurst parameter H ∈ (3/4, 1). Define, for i_1, i_2 ∈ {1, . . . , m},
\[ Z_n^{i_1,i_2}(t) := n \sum_{j=1}^{\lfloor nt/T \rfloor} \int_{t_j}^{t_{j+1}} (B_s^{i_1} - B_{t_j}^{i_1})\, \delta B_s^{i_2}. \]
When i_1 = i_2 = i we can write
\[ Z_n^{i,i}(t) = \frac{T^{2H}}{2 n^{2H-1}} \sum_{j=1}^{\lfloor nt/T \rfloor} H_2(\xi_j^{n,i}), \]
where H_2(x) = x^2 − 1 is the second degree Hermite polynomial and ξ_j^{n,i} = T^{-H} n^H (B_{t_{j+1}}^i - B_{t_j}^i). It is well known (see [18]) that for each i = 1, . . . , m, the process Z_n^{i,i}(t) converges in L^2 to the Rosenblatt process R(t). We refer the reader to [28] and [29] for further details on the Rosenblatt process.
When i_1 ≠ i_2, the stochastic integral \int_{t_j}^{t_{j+1}} (B_s^{i_1} - B_{t_j}^{i_1})\, \delta B_s^{i_2} cannot be written as the second Hermite polynomial of a Gaussian random variable. Nevertheless, the process Z_n^{i_1,i_2}(t) is still convergent in L^2. Indeed, for any positive integers n and n', we have
\[ E\big[ Z_n^{i_1 i_2}(t) Z_{n'}^{i_1 i_2}(t) \big] = n n' E\Bigg[ \sum_{k=0}^{\lfloor nt/T \rfloor} \sum_{k'=0}^{\lfloor n't/T \rfloor} \int_{kT/n}^{(k+1)T/n} (B_s^{i_1} - B_{kT/n}^{i_1})\, \delta B_s^{i_2} \int_{k'T/n'}^{(k'+1)T/n'} (B_s^{i_1} - B_{k'T/n'}^{i_1})\, \delta B_s^{i_2} \Bigg] = n n' \alpha_H^2 \sum_{k=0}^{\lfloor nt/T \rfloor} \sum_{k'=0}^{\lfloor n't/T \rfloor} \int_{kT/n}^{(k+1)T/n} \int_{k'T/n'}^{(k'+1)T/n'} \int_{kT/n}^{t} \int_{k'T/n'}^{s} \mu(dv\,du)\, \mu(ds\,dt) \to \frac{T^2 \alpha_H^2}{4} \int_0^t \int_0^t |u - v|^{4H-4}\,du\,dv = c_H t^{4H-2}, \]
as n', n → +∞, where c_H = \frac{T^2 H^2 (2H-1)}{4(4H-3)}. This allows us to conclude that Z_n^{i_1 i_2}(t) is a Cauchy sequence in L^2. We denote by Z_t^{i_1 i_2} the L^2-limit of Z_n^{i_1 i_2}(t). Then Z_t^{i_1 i_2} can be considered as a generalized Rosenblatt process. It is easy to show that
\[ E\big[ |Z_t^{i_1 i_2} - Z_s^{i_1 i_2}|^2 \big] \le C|t - s|^{4H-2}, \]
and by the hypercontractivity property, we deduce
\[ E\big[ |Z_t^{i_1 i_2} - Z_s^{i_1 i_2}|^p \big] \le C_p |t - s|^{p(2H-1)} \tag{2.13} \]
for any p ≥ 2 and s, t ∈ [0, T]. By the Kolmogorov continuity criterion this implies that Z^{i_1 i_2} has a Hölder continuous version of exponent λ for any λ < 2H − 1.
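For i_1 = i_2 the Hermite-sum representation lends itself to direct simulation. A sketch (ours, not from the paper), sampling the fBm increments exactly via Cholesky factorization:

```python
import numpy as np

def rosenblatt_approx(n, T, H, t, seed=0):
    """Approximate Z_n^{i,i}(t) = T^{2H}/(2 n^{2H-1}) * sum_j H2(xi_j),
    with H2(x) = x^2 - 1 and xi_j = T^{-H} n^H (B_{t_{j+1}} - B_{t_j})
    the standardized increments of a one-dimensional fBm."""
    rng = np.random.default_rng(seed)
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
    rho = 0.5 * ((d + 1)**(2*H) + np.abs(d - 1)**(2*H) - 2*d**(2*H))
    L = np.linalg.cholesky((T/n)**(2*H) * rho + 1e-12 * np.eye(n))
    dB = L @ rng.standard_normal(n)
    xi = (n / T)**H * dB                  # unit-variance increments
    J = int(np.floor(n * t / T))
    return T**(2*H) / (2 * n**(2*H - 1)) * np.sum(xi[:J]**2 - 1.0)
```

As n grows (with H > 3/4 fixed) the values approximate the Rosenblatt process R(t) in L².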
3 Estimates for solutions of some SDE's
The purpose of this section is to provide upper bounds for the Hölder seminorms of solutions of two types of SDE's. The first type (see (3.1)) covers equation (1.1) and its Malliavin derivatives, as well as all the other SDE's involving only continuous integrands which we will encounter in this paper. The second type (see (3.13)) deals with the case where the integrands are step processes. These SDE's arise from approximation schemes such as (1.2) and (1.3).
For any integers k, N, M ≥ 1, we denote by C_b^k(R^M; R^N) the space of k times continuously differentiable functions f : R^M → R^N which are bounded together with their first k partial derivatives. If N = 1 we simply write C_b^k(R^M). In order to simplify the notation we only consider the case when the fBm is one-dimensional, that is, m = 1. All results of this section can be generalized to the case m > 1. Throughout the remaining part of the paper we let β be any number satisfying 1/2 < β < H.
The first two lemmas are pathwise results, and they still hold when B is replaced by a general Hölder continuous function of index γ > β. The constants appearing in the lemmas depend on β, H, T, and the uniform and Hölder seminorms of the coefficients. We fix a time interval [τ, T], and to simplify we omit the dependence on τ and T of the uniform norm and β-Hölder seminorm on the interval [τ, T].

Lemma 3.1 Fix τ ∈ [0, T). Let V = {V_t, t ∈ [τ, T]} be an R^M-valued process satisfying
\[ V_t = S_t + \int_\tau^t \big[ g_1(V_u) + U_u^1 V_u \big]\,du + \int_\tau^t \big[ g_2(V_u) + U_u^2 V_u \big]\,dB_u, \tag{3.1} \]
where g_1 ∈ C_b(R^M; R^M), g_2 ∈ C_b^1(R^M; R^M), and U^i = {U_t^i, t ∈ [τ, T]}, i = 1, 2, and S = {S_t, t ∈ [τ, T]} are R^{M×M}-valued and R^M-valued processes, respectively. We assume that S has β-Hölder continuous trajectories, and that the processes U^i, i = 1, 2, are uniformly bounded by a constant C.
(i) If U^1 = U^2 = 0, then we can find constants K and K' such that (t − s)^β ‖B‖_β ≤ K, τ ≤ s < t ≤ T, implies
\[ \|V\|_{s,t,\beta} \le K'(\|B\|_\beta + 1) + 2\|S\|_\beta. \]
(ii) Suppose that there exist constants K_0 and K_0' such that (t − s)^β ‖B‖_β ≤ K_0, τ ≤ s < t ≤ T, implies
\[ \|U^2\|_{s,t,\beta} \le K_0'(\|B\|_\beta + 1). \tag{3.2} \]
Then there exists a positive constant K such that
\[ \max\{\|V\|_\infty, \|V\|_\beta\} \le K e^{K\|B\|_\beta^{1/\beta}} \big( |S_\tau| + \|S\|_\beta + 1 \big). \tag{3.3} \]
Proof: Let τ ≤ s < t ≤ T. By the definition of V,
\[ V_t - V_s = S_t - S_s + \int_s^t \big[ g_1(V_u) + U_u^1 V_u \big]\,du + \int_s^t \big[ g_2(V_u) + U_u^2 V_u \big]\,dB_u. \tag{3.4} \]
Applying Lemma 11.1(ii) to the vector-valued function f : (u, v) → g_2(v) + uv and the integrator z = B, and taking β' = β, yields
\[ |V_t - V_s| \le \|S\|_\beta (t-s)^\beta + (\|g_1\|_\infty + C\|V\|_{s,t,\infty})(t-s) + K_1 (\|g_2\|_\infty + C\|V\|_{s,t,\infty}) \|B\|_\beta (t-s)^\beta + K_2 \big[ (\|\nabla g_2\|_\infty + C)\|V\|_{s,t,\beta} + \|V\|_{s,t,\infty}\|U^2\|_{s,t,\beta} \big] \|B\|_\beta (t-s)^{2\beta}. \tag{3.5} \]
Step 1. In the case U^1 = U^2 = 0 (which means that we can take C = 0 and ‖U^2‖_{s,t,β} = 0), dividing both sides of (3.5) by (t − s)^β and taking the Hölder seminorm on the left-hand side, we obtain
\[ \|V\|_{s,t,\beta} \le \|S\|_\beta + c_1 (t-s)^{1-\beta} + K_1 c_1 \|B\|_\beta + K_2 c_1 \|V\|_{s,t,\beta} \|B\|_\beta (t-s)^\beta, \tag{3.6} \]
where, here and throughout this section, we denote
\[ c_1 = \max\{ C, \|g_1\|_\infty, \|g_2\|_\infty, \|\nabla g_2\|_\infty \}. \tag{3.7} \]
Take K = \frac12 (K_2 c_1)^{-1}. Then for any τ ≤ s < t ≤ T such that (t − s)^β ‖B‖_β ≤ K, we have
\[ \|V\|_{s,t,\beta} \le 2\|S\|_\beta + 2c_1 (t-s)^{1-\beta} + 2K_1 c_1 \|B\|_\beta, \]
which implies (i).
Step 2. As in Step 1, we divide (3.5) by (t − s)^β and then take the Hölder seminorm on the left-hand side to obtain
\[ \|V\|_{s,t,\beta} \le \|S\|_\beta + c_1 (1 + \|V\|_{s,t,\infty})(t-s)^{1-\beta} + K_1 c_1 (1 + \|V\|_{s,t,\infty}) \|B\|_\beta + 2K_2 c_1 \|V\|_{s,t,\beta} \|B\|_\beta (t-s)^\beta + K_2 \|V\|_{s,t,\infty} \|U^2\|_{s,t,\beta} \|B\|_\beta (t-s)^\beta. \tag{3.8} \]
If (t − s)^β ‖B‖_β ≤ \frac14 (K_2 c_1)^{-1}, then the coefficient of ‖V‖_{s,t,β} on the right-hand side of (3.8) is less than or equal to 1/2. Thus, we obtain
\[ \|V\|_{s,t,\beta} \le 2\|S\|_\beta + 2c_1 (1 + \|V\|_{s,t,\infty})(t-s)^{1-\beta} + 2K_1 c_1 (1 + \|V\|_{s,t,\infty}) \|B\|_\beta + 2K_2 \|V\|_{s,t,\infty} \|U^2\|_{s,t,\beta} \|B\|_\beta (t-s)^\beta. \]
On the other hand, assuming (t − s)^β ‖B‖_β ≤ K_0 and applying (3.2), we obtain
\[ \|V\|_{s,t,\beta} \le 2\|S\|_\beta + C_1 (1 + \|B\|_\beta)(1 + \|V\|_{s,t,\infty}), \tag{3.9} \]
for some constant C_1. This implies
\[ \|V\|_{s,t,\infty} \le |V_s| + 2(t-s)^\beta \|S\|_\beta + C_1 (t-s)^\beta (1 + \|B\|_\beta)(1 + \|V\|_{s,t,\infty}). \]
Assuming (t − s)^β ‖B‖_β ≤ \frac{1}{4C_1} and (t − s)^β ≤ \frac{1}{4C_1} \wedge \frac12, we obtain
\[ \|V\|_{s,t,\infty} \le 2|V_s| + 2\|S\|_\beta + 1. \tag{3.10} \]
Take Δ = \min\Big\{ \|B\|_\beta^{-1/\beta} \big[ \frac{1}{4K_2 c_1} \wedge \frac{1}{4C_1} \wedge K_0 \big]^{1/\beta},\; \big[ \frac{1}{4C_1} \wedge \frac12 \big]^{1/\beta} \Big\}. We divide the interval [τ, T] into N = \lfloor \frac{T - \tau}{\Delta} \rfloor + 1 subintervals and denote by s_1, s_2, . . . , s_N the left endpoints of these intervals, and s_{N+1} = T. Applying the inequality (3.10) to each interval [s_i, s_{i+1}] for i = 1, . . . , N yields
\[ \|V\|_\infty \le 2^{N+1} \big( |S_\tau| + 2\|S\|_\beta + 1 \big). \tag{3.11} \]
From the definition of Δ we get
\[ N \le 1 + \frac{T}{\Delta} \le 1 + T \max\big\{ C_2, C_3 \|B\|_\beta^{1/\beta} \big\}, \tag{3.12} \]
for some constants C_2 and C_3. From inequalities (3.11) and (3.12) we obtain the desired estimate for ‖V‖_∞.
If t, s ∈ [τ, T] satisfy 0 ≤ t − s ≤ Δ, then from (3.9) and from the upper bound of ‖V‖_∞ we can estimate \frac{|V_t - V_s|}{(t-s)^\beta} by the right-hand side of (3.3) for some constant K. On the other hand, if t − s > Δ, then
\[ \frac{|V_t - V_s|}{(t-s)^\beta} \le 2\|V\|_\infty \Delta^{-1}. \]
We can obtain a similar estimate from the upper bound of ‖V‖_∞ and from the definition of Δ. This gives the desired estimate for ‖V‖_β and hence completes the proof of (ii). □

For the second lemma we fix n and consider the partition of [0, T] given by t_i = iT/n, i = 0, 1, . . . , n. Define η(t) = t_i if t_i ≤ t < t_i + T/n and ε(t) = t_i + T/n if t_i < t ≤ t_i + T/n.

Lemma 3.2 Suppose that S, g_i, U^i, i = 1, 2, are the same as in Lemma 3.1. Let g ∈ C([0, T]). Let V = {V_t, t ∈ [τ, T]} be an R^M-valued process satisfying the equation
\[ V_t = S_t + \int_{\varepsilon(\tau)}^{t \vee \varepsilon(\tau)} \big[ g_1(V_{\eta(u)}) + U_{\eta(u)}^1 V_{\eta(u)} \big] g(u - \eta(u))\,du + \int_{\varepsilon(\tau)}^{t \vee \varepsilon(\tau)} \big[ g_2(V_{\eta(u)}) + U_{\eta(u)}^2 V_{\eta(u)} \big]\,dB_u. \tag{3.13} \]
(i) If U^1 = U^2 = 0, then we can find constants K and K' such that (t − s)^β ‖B‖_β ≤ K, τ ≤ s < t ≤ T, implies
\[ \|V\|_{s,t,\beta,n} \le K'(\|B\|_\beta + 1) + 2\|S\|_\beta. \]
(ii) Suppose that there exist constants K_0 and K_0' such that (t − s)^β ‖B‖_β ≤ K_0, τ ≤ s < t ≤ T, implies
\[ \|U^2\|_{s,t,\beta,n} \le K_0'(\|B\|_\beta + 1). \tag{3.14} \]
Then, there exists a constant K such that
\[ \max\{\|V\|_\infty, \|V\|_\beta\} \le K e^{K\|B\|_\beta^{1/\beta}} \big( |S_\tau| + \|S\|_\beta + 1 \big). \]
Remark 3.3 The proof of this result is similar to that of Lemma 3.1. Nevertheless, since the integral is discrete, we need to replace the Hölder seminorm ‖·‖_{s,t,β} by the seminorm ‖·‖_{s,t,β,n} introduced in (2.1).

Proof: Let s, t ∈ [τ, T] be such that s < t and s = η(s). This implies s ≥ ε(τ). As in the proof of (3.5), applying Lemma 11.1(i) (instead of Lemma 11.1(ii)) yields
\[ |V_t - V_s| \le \|S\|_\beta (t-s)^\beta + (\|g_1\|_\infty + C\|V\|_{s,t,\infty}) \|g\|_\infty (t-s) + K_1 (\|g_2\|_\infty + C\|V\|_{s,t,\infty}) \|B\|_\beta (t-s)^\beta + K_3 \big[ (\|\nabla g_2\|_\infty + C)\|V\|_{s,t,\beta,n} + \|V\|_{s,t,\infty}\|U^2\|_{s,t,\beta,n} \big] \|B\|_\beta (t-s)^{2\beta}. \]
Dividing both sides of the above inequality by (t − s)^β and taking the Hölder seminorm on the left-hand side, we obtain
\[ \|V\|_{s,t,\beta,n} \le \|S\|_\beta + (\|g_1\|_\infty + C\|V\|_{s,t,\infty}) \|g\|_\infty (t-s)^{1-\beta} + K_1 (\|g_2\|_\infty + C\|V\|_{s,t,\infty}) \|B\|_\beta + K_3 \big[ (\|\nabla g_2\|_\infty + C)\|V\|_{s,t,\beta,n} + \|V\|_{s,t,\infty}\|U^2\|_{s,t,\beta,n} \big] \|B\|_\beta (t-s)^\beta. \tag{3.15} \]
Step 1. In the case U^1 = U^2 = 0, (3.15) becomes
\[ \|V\|_{s,t,\beta,n} \le \|S\|_\beta + c_1 \|g\|_\infty (t-s)^{1-\beta} + K_1 c_1 \|B\|_\beta + K_3 c_1 \|V\|_{s,t,\beta,n} \|B\|_\beta (t-s)^\beta, \]
where c_1 is defined in (3.7). Taking K = \frac12 (K_3 c_1)^{-1}, for any τ ≤ s < t ≤ T such that (t − s)^β ‖B‖_β ≤ K, we have
\[ \|V\|_{s,t,\beta,n} \le 2\|S\|_\beta + 2c_1 \|g\|_\infty (t-s)^{1-\beta} + 2K_1 c_1 \|B\|_\beta. \]
This completes the proof of (i).
Step 2. In the general case, we follow the proof of Lemma 3.1, except that we assume s = η(s) and use the seminorm ‖·‖_{s,t,β,n} instead of ‖·‖_{s,t,β}. We also apply (3.14) instead of (3.2). In this way we obtain the inequality (3.9) with ‖V‖_{s,t,β} replaced by ‖V‖_{s,t,β,n}, that is,
(3.16)
for some constant C1 . The inequality (3.10) remains the same kV ks,t,∞ ≤ 2|Vs | + 2kSkβ + 1,
(3.17)
provided $s = \eta(s)$ and both $t - s$ and $(t-s)^\beta \|B\|_\beta$ are bounded by some constant $C_4$. Take $\Delta = \big( C_4^{1/\beta} \|B\|_\beta^{-1/\beta} \big) \wedge C_4$. We are going to consider two cases, depending on the relation between $\Delta$ and $\frac{2T}{n}$.

If $\Delta > \frac{2T}{n}$, we take $N = \lfloor \frac{2(T - \epsilon(\tau))}{\Delta} \rfloor$ and divide the interval $[\epsilon(\tau), \epsilon(\tau) + \frac{N\Delta}{2}]$ into $N$ subintervals of length $\frac{\Delta}{2}$. Since the length of each of these subintervals is larger than $\frac{T}{n}$, we are able to choose $N$ points $s_1, s_2, \dots, s_N$, one from each of these intervals, such that $s_1 = \epsilon(\tau)$ and $\eta(s_i) = s_i$, $i = 1, 2, \dots, N$. On the other hand, we have $s_{i+1} - s_i \le \Delta$ for all $i = 1, \dots, N-1$. Applying the inequality (3.17) to each of the intervals $[s_1, s_2], [s_2, s_3], \dots, [s_{N-1}, s_N], [s_N, T]$ yields
\[ \|V\|_{\epsilon(\tau), T, \infty} \le 2^{N+1} \big( |S_{\epsilon(\tau)}| + 2\|S\|_\beta + 1 \big). \tag{3.18} \]
From the definition of $\Delta$ we have
\[ N \le \frac{2T}{\Delta} \le K + K\|B\|_\beta^{1/\beta}, \tag{3.19} \]
for some constant $K$ depending on $T$ and $C_4$. From (3.18) and (3.19), and taking into account that
\[ \|V\|_{\tau, \epsilon(\tau), \infty} = \|S\|_{\tau, \epsilon(\tau), \infty} \le |S_\tau| + T^\beta \|S\|_\beta, \tag{3.20} \]
we obtain the desired estimate for $\|V\|_\infty$.

If $\Delta \le \frac{2T}{n}$, that is, when $n \le \frac{2T}{\Delta} \le K + K\|B\|_\beta^{1/\beta}$, then by equation (3.13) we have
\begin{align*}
|V_t| &\le |V_{\eta(t)}| + |S_t - S_{\eta(t)}| + (c_1 + C|V_{\eta(t)}|) \|g\|_\infty (T/n) + (c_1 + C|V_{\eta(t)}|) \|B\|_\beta (T/n)^\beta \\
&\le A_n + B_n |V_{\eta(t)}|,
\end{align*}
for any $t \in [\tau, T]$, where $A_n = \|S\|_\beta (T/n)^\beta + c_1 \|g\|_\infty (T/n) + c_1 \|B\|_\beta (T/n)^\beta$ and $B_n = 1 + C\|g\|_\infty (T/n) + C\|B\|_\beta (T/n)^\beta$. Iterating this estimate, we obtain
\[ \|V\|_{\epsilon(\tau), T, \infty} \le |S_{\epsilon(\tau)}| B_n^n + n A_n B_n^{n-1} \le K \big( |S_{\epsilon(\tau)}| + \|S\|_\beta + 1 \big) e^{K\|B\|_\beta^{1/\beta}}, \tag{3.21} \]
for some constant $K$ independent of $n$, where we have used the inequality
\[ B_n^n \le e^{K(1 + \|B\|_\beta) n^{1-\beta}} \]
and the fact that $n \le K + K\|B\|_\beta^{1/\beta}$ for some constant $K$. Taking into account (3.20), we obtain the desired upper bound for $\|V\|_\infty$.

In order to show the upper bound for $\|V\|_{\tau,T,\beta}$, we notice that if $0 \le t - s \le \Delta$, then from (3.16) and from the upper bound of $\|V\|_{\tau,T,\infty}$ we have
\[ \|V\|_{\epsilon(s), t, \beta, n} \le K \big( |S_\tau| + \|S\|_\beta + 1 \big) e^{K\|B\|_\beta^{1/\beta}} \]
for some constant $K$. Thus
\[ \frac{|V_t - V_s|}{(t-s)^\beta} \le \|V\|_{\epsilon(s), t, \beta, n} + \frac{|V_{\epsilon(s)} - V_s|}{(\epsilon(s) - s)^\beta} \le K \big( |S_\tau| + \|S\|_\beta + 1 \big) e^{K\|B\|_\beta^{1/\beta}}. \]
If $t - s \ge \Delta$, we can obtain the upper bound of $\|V\|_\beta$ by a similar argument as in the proof of Lemma 3.1. The proof of (ii) is now complete. □

The following result gives upper bounds for the norms of the Malliavin derivatives of the solutions of the two types of SDEs (3.1) and (3.13). Given a process $P = \{P_t, t \in [\tau, T]\}$ such that $P_t \in \mathbb{D}^{N,2}$ for each $t$ and some $N \ge 1$, we use the notation $D^*_N P$ to denote the random variable
\[ \sup_{r_0, \dots, r_N \in [\tau, T]} \max\big\{ |P_{r_0}|,\ |D_{r_1} P_{r_0}|,\ \dots,\ |D^N_{r_1, \dots, r_N} P_{r_0}| \big\} \tag{3.22} \]
and we use $D_N P$ to denote
\[ \sup_{r_0, \dots, r_N \in [\tau, T]} \max\big\{ |P_{r_0}|,\ |D_{r_1} P_{r_0}|,\ \dots,\ |D^N_{r_1, \dots, r_N} P_{r_0}|,\ \|P\|_\beta,\ \|D_{r_1} P\|_{r_1, T, \beta},\ \dots,\ \|D^N_{r_1, \dots, r_N} P\|_{r_1 \vee \cdots \vee r_N, T, \beta} \big\}. \tag{3.23} \]
If $N = 0$ we simply write $D_0^* P = \|P\|_\infty$ and $D_0 P = \max(\|P\|_\infty, \|P\|_\beta)$.
Lemma 3.4 (i) Let $V$ be the solution of equation (3.1). Assume that $g_1 = g_2 = 0$. Suppose that $U^1$ and $U^2$ are uniformly bounded by a constant $C$, and assume that there exist constants $K_0$ and $K_0'$ such that $(t-s)^\beta \|B\|_\beta \le K_0$, $\tau \le s < t \le T$, implies
\[ \|U^2\|_{s,t,\beta} \le K_0' (\|B\|_\beta + 1). \tag{3.24} \]
Suppose that $S, U^1, U^2 \in \mathbb{D}^{N,2}$, where $N \ge 0$ is an integer, that $D_r S_t = D_r U^i_t = 0$, $i = 1, 2$, if $0 \le t < r \le T$, and that there exists a constant $K > 0$ such that the random variables $D_N S$, $D^*_N U^1$, $D_N U^2$ are less than or equal to $K e^{K\|B\|_\beta^{1/\beta}}$. Then there exists a constant $K' > 0$ such that $D_N V$ is less than $K' e^{K'\|B\|_\beta^{1/\beta}}$.
(ii) Let $V$ be the solution of equation (3.13). Then conclusion (i) still holds true under the same assumptions, except that in (3.24) we replace $\|U^2\|_{s,t,\beta}$ by $\|U^2\|_{s,t,\beta,n}$.

Proof: We first show point (i). The upper bounds of $\|V\|_\infty$ and $\|V\|_\beta$ follow from Lemma 3.1(ii). The Malliavin derivative $D_r V_t$ satisfies the equation (see Proposition 7 in [23])
\[ D_r V_t = S^{(1)}_t + \int_r^t U^1_u D_r V_u \,du + \int_r^t U^2_u D_r V_u \,dB_u \]
for $t \in [r \vee \tau, T]$, and $D_r V_t = 0$ otherwise, where
\[ S^{(1)}_t := D_r S_t + U^2_r V_r + \int_r^t [D_r U^1_u] V_u \,du + \int_r^t [D_r U^2_u] V_u \,dB_u \tag{3.25} \]
for $t \in [r \vee \tau, T]$. Lemma 3.1(ii), applied to the time interval $[r, T]$, where $r \ge \tau$, implies that
\[ \max\big\{ \|D_r V\|_{r,T,\infty},\ \|D_r V\|_{r,T,\beta} \big\} \le K e^{K\|B\|_\beta^{1/\beta}} \big( |S^{(1)}_r| + \|S^{(1)}\|_{r,T,\beta} + 1 \big). \]
Therefore, to obtain the desired upper bound it suffices to show that there exists a constant $K$, independent of $r$, such that both $\|S^{(1)}\|_{r,T,\infty}$ and $\|S^{(1)}\|_{r,T,\beta}$ are less than or equal to $K e^{K\|B\|_\beta^{1/\beta}}$. Applying Lemma 11.1(ii) to the second integral in (3.25), and noticing that $\|D_r U^2\|_\infty$, $\|D_r U^2\|_{r,T,\beta}$, $\|V\|_\infty$, $\|V\|_{r,T,\beta}$ are bounded by $K e^{K\|B\|_\beta^{1/\beta}}$, we see that $\|S^{(1)}\|_\infty$ is bounded by $K e^{K\|B\|_\beta^{1/\beta}}$. On the other hand, in order to bound $\|S^{(1)}\|_{r,T,\beta}$, we use (3.25) to estimate the quotient $\frac{S^{(1)}_t - S^{(1)}_s}{(t-s)^\beta}$:
\[ \frac{|S^{(1)}_t - S^{(1)}_s|}{(t-s)^\beta} \le \|D_r S\|_{r,T,\beta} + (t-s)^{-\beta} \left| \int_s^t [D_r U^1_u] V_u \,du \right| + (t-s)^{-\beta} \left| \int_s^t [D_r U^2_u] V_u \,dB_u \right|. \]
Now we can estimate each term on the above right-hand side as before. Taking the supremum over $s, t \in [r, T]$ yields the upper bound for $\|S^{(1)}\|_{r,T,\beta}$.

We turn to the second derivative. As before, we can find the equation satisfied by $D^2_{r_1, r_2} V_t$ (see Proposition 7 in [23]). The estimates of $D^2_{r_1, r_2} V_t$ can then be obtained in the same way as above, applying Lemma 3.1(ii) together with the estimates just obtained for $V_t$ and $D_s V_t$, as well as the assumptions on $S$ and $U^i$. The estimates of the higher-order derivatives of $V$ are obtained analogously.

The proof of (ii) follows the same lines, except that we use Lemma 3.2(ii) and Lemma 11.1(i) instead of Lemma 3.1(ii) and Lemma 11.1(ii). □
Remark 3.5 Since $\beta > \frac12$, Fernique's theorem implies that $K e^{K\|B\|_\beta^{1/\beta}}$ has finite moments of any order. So Lemma 3.4 implies that the uniform norms and Hölder seminorms of the solutions of (3.1) and (3.13), and of their Malliavin derivatives, have finite moments of any order. We will need this fact in many of our arguments.
The next proposition is an immediate consequence of Lemma 3.4. Recall that the random variables $D^*_N P$ and $D_N P$ are defined in (3.22) and (3.23).

Proposition 3.6 Let $X$ be the solution of equation (1.1) and let $X^n$ be the solution of the Euler scheme (1.2). Fix $N \ge 0$ and suppose that $b \in C_b^N(\mathbb{R}^d, \mathbb{R}^d)$ and $\sigma \in C_b^{N+1}(\mathbb{R}^d, \mathbb{R}^d)$ (recall that we assume $m = 1$). Then there exists a positive constant $K$ such that the random variables $D_N X$ and $D_N X^n$ are bounded by $K e^{K\|B\|_\beta^{1/\beta}}$ for all $n \in \mathbb{N}$. If we further assume $\sigma \in C_b^{N+2}(\mathbb{R}^d, \mathbb{R}^d)$, then the same upper bound holds for the modified Euler scheme (1.3).

Proof: We first consider the process $X$, the solution to equation (1.1). The upper bounds for $\|X\|_\infty$ and $\|X\|_\beta$ follow from Lemma 3.1(ii). The Malliavin derivative $D_r X_t$ satisfies the linear stochastic differential equation
\[ D_r X_t = \sigma(X_r) + \int_r^t \nabla b(X_u) D_r X_u \,du + \int_r^t \nabla\sigma(X_u) D_r X_u \,dB_u \tag{3.26} \]
for $0 < r \le t \le T$, and $D_r X_t = 0$ otherwise. Then it suffices to show that
\[ \sup_{r \in [0,T]} D_M(D_r X) \le K e^{K\|B\|_\beta^{1/\beta}} \tag{3.27} \]
for $M = N - 1$. We prove estimate (3.27) by induction on $N \ge 1$. Set $S_t = \sigma(X_r)$, $U^1_t = \nabla b(X_t)$ and $U^2_t = \nabla\sigma(X_t)$. Applying Lemma 3.1(i) to $X$, we obtain that $U^2$ satisfies (3.24). Therefore, Lemma 3.4 implies that (3.27) holds for $M = 0$. Now assume that
\[ \sup_{r \in [0,T]} D_M(D_r X) \le K e^{K\|B\|_\beta^{1/\beta}} \]
for some $0 \le M \le N - 2$. It is then easy to see that
\[ D^*_{M+1}(U^1) \vee D_{M+1}(U^2) \vee D_{M+1}(S) \le K e^{K\|B\|_\beta^{1/\beta}}, \]
taking into account that $b \in C_b^N(\mathbb{R}^d; \mathbb{R}^d)$ and $\sigma \in C_b^{N+1}(\mathbb{R}^d; \mathbb{R}^d)$, which enables us to apply Lemma 3.4 to (3.26) to obtain the upper bound of $\sup_{r \in [0,T]} D_{M+1}(D_r X)$.

The estimates for the Euler scheme and the modified Euler scheme and their derivatives can be obtained in the same way. We omit the proof and only point out that one more derivative of $\sigma$ is needed for the modified Euler scheme, because the function $\nabla\sigma$ is involved in its equation. □
4 Rate of convergence for the modified Euler scheme and related processes

The main result of this section is the convergence rate of the scheme defined by (1.3) to the solution of the SDE (1.1). Recall that $\gamma_n$ is the function of $n$ defined in (1.5).

Theorem 4.1 Let $X$ and $X^n$ be the solutions of equations (1.1) and (1.3), respectively. Assume $b \in C_b^3(\mathbb{R}^d; \mathbb{R}^d)$ and $\sigma \in C_b^4(\mathbb{R}^d; \mathbb{R}^{d\times m})$. Then for any $p \ge 1$ there exists a constant $C$, independent of $n$ (but dependent on $p$), such that
\[ \sup_{0 \le t \le T} \mathbb{E}\big[ |X^n_t - X_t|^p \big]^{1/p} \le C \gamma_n^{-1}. \]
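Equations (1.1)–(1.3) are not reproduced in this chunk, but the form of the modified scheme can be inferred from the decomposition in Step 1 below: the correction term $H(\nabla\sigma^j\sigma^j)(X^n_{\eta(s)})(s-\eta(s))^{2H-1}\,ds$, integrated over one step of length $T/n$, contributes $\frac12(\nabla\sigma^j\sigma^j)(X^n_{t_k})(T/n)^{2H}$. The following sketch (an illustration under that assumption, not the authors' code) simulates an fBm path by Cholesky factorization of its covariance and runs the naive and modified one-dimensional Euler iterations:

```python
# Illustrative sketch only: the per-step correction (1/2)(sigma' sigma)(x) dt^{2H}
# is inferred from the term H*(grad sigma^j sigma^j)(X^n_{eta(s)})(s-eta(s))^{2H-1}
# appearing in Step 1 of the proof of Theorem 4.1.
import numpy as np

def fbm_path(n, T, H, rng):
    """Sample (B_0, B_{t_1}, ..., B_{t_n}) of an fBm on [0, T] by Cholesky
    factorization of R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

def euler_scheme(x0, b, sigma, dsigma, dB, dt, H, modified=False):
    """One-dimensional Euler iteration driven by the increments dB.
    modified=False gives the naive scheme; modified=True adds the
    (1/2)(sigma' sigma)(x) dt^{2H} correction of the modified scheme."""
    x = x0
    for db in dB:
        x_new = x + b(x) * dt + sigma(x) * db
        if modified:
            x_new += 0.5 * dsigma(x) * sigma(x) * dt ** (2 * H)
        x = x_new
    return x

# Demo: dX = X dB with X_0 = 1; for H > 1/2 the (Young) solution is exp(B_T).
rng = np.random.default_rng(0)
H, T, n = 0.7, 1.0, 500
B = fbm_path(n, T, H, rng)
x_mod = euler_scheme(1.0, lambda x: 0.0, lambda x: x, lambda x: 1.0,
                     np.diff(B), T / n, H, modified=True)
print(x_mod, np.exp(B[-1]))  # the two values should be close
```

Replacing the sampled fBm by a smooth deterministic path makes the limit exactly computable, which is a convenient way to sanity-check the iteration.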
Proof: Denote $Y := X - X^n$. Notice that $Y$ depends on $n$, but for notational simplicity we omit the explicit dependence on $n$ for $Y$ and some other processes when there is no ambiguity. The idea of the proof is to decompose $Y$ into seven terms (see (4.7) below) and then study their convergence rates individually.

Step 1. By the definitions of the processes $X$ and $X^n$, we can write
\begin{align*}
Y_t ={}& \int_0^t \big[ b(X_s) - b(X^n_s) + b(X^n_s) - b(X^n_{\eta(s)}) \big]\,ds + \sum_{j=1}^m \int_0^t \big[ \sigma^j(X_s) - \sigma^j(X^n_s) + \sigma^j(X^n_s) - \sigma^j(X^n_{\eta(s)}) \big]\,dB^j_s \\
&- H \sum_{j=1}^m \int_0^t (\nabla\sigma^j \sigma^j)(X^n_{\eta(s)}) (s - \eta(s))^{2H-1}\,ds \\
={}& \int_0^t b_1(s) Y_s \,ds + \sum_{j=1}^m \int_0^t \sigma_1^j(s) Y_s \,dB^j_s + \int_0^t \big[ b(X^n_s) - b(X^n_{\eta(s)}) \big]\,ds \\
&+ \sum_{j=1}^m \int_0^t \big[ \sigma^j(X^n_s) - \sigma^j(X^n_{\eta(s)}) \big]\,dB^j_s - H \sum_{j=1}^m \int_0^t \sigma_0^j(s) (s - \eta(s))^{2H-1}\,ds,
\end{align*}
where
\[ \sigma_0^j(s) = (\nabla\sigma^j \sigma^j)(X^n_{\eta(s)}), \qquad b_1(s) = \int_0^1 \nabla b(\theta X_s + (1-\theta) X^n_s)\,d\theta, \qquad \sigma_1^j(s) = \int_0^1 \nabla\sigma^j(\theta X_s + (1-\theta) X^n_s)\,d\theta. \]
Let $\Lambda^n = \{\Lambda^n_t, t \in [0,T]\}$ be the $d \times d$ matrix-valued solution of the linear SDE
\[ \Lambda^n_t = I + \int_0^t b_1(s) \Lambda^n_s \,ds + \sum_{j=1}^m \int_0^t \sigma_1^j(s) \Lambda^n_s \,dB^j_s, \tag{4.1} \]
where $I$ is the $d \times d$ identity matrix. Applying the chain rule for the Young integral to $\Gamma^n_t \Lambda^n_t$, where $\Gamma^n = \{\Gamma^n_t, t \in [0,T]\}$ is the unique solution of the equation
\[ \Gamma^n_t = I - \int_0^t \Gamma^n_s b_1(s)\,ds - \sum_{j=1}^m \int_0^t \Gamma^n_s \sigma_1^j(s)\,dB^j_s, \quad t \in [0,T], \tag{4.2} \]
we see that $\Gamma^n_t \Lambda^n_t = \Lambda^n_t \Gamma^n_t = I$ for all $t \in [0,T]$. Therefore $(\Lambda^n_t)^{-1}$ exists and coincides with $\Gamma^n_t$. We can express the process $Y_t$ in terms of $\Lambda^n_t$ as follows:
\begin{align}
Y_t ={}& \int_0^t \Lambda^n_t \Gamma^n_s \big[ b(X^n_s) - b(X^n_{\eta(s)}) \big]\,ds + \sum_{j=1}^m \int_0^t \Lambda^n_t \Gamma^n_s \big[ \sigma^j(X^n_s) - \sigma^j(X^n_{\eta(s)}) \big]\,dB^j_s \notag\\
&- H \sum_{j=1}^m \int_0^t \Lambda^n_t \Gamma^n_s \sigma_0^j(s) (s - \eta(s))^{2H-1}\,ds. \tag{4.3}
\end{align}
The first two terms on the right-hand side of equation (4.3) can be further decomposed as follows:
\begin{align}
\int_0^t \Lambda^n_t \Gamma^n_s \big[ \sigma^j(X^n_s) - \sigma^j(X^n_{\eta(s)}) \big]\,dB^j_s ={}& \int_0^t \Lambda^n_t \Gamma^n_s b_2^j(s)(s - \eta(s))\,dB^j_s + \sum_{i=1}^m \int_0^t \Lambda^n_t \Gamma^n_s \sigma_2^{j,i}(s)(B^i_s - B^i_{\eta(s)})\,dB^j_s \notag\\
&+ \int_0^t \Lambda^n_t \Gamma^n_s \sigma_3^j(s)(s - \eta(s))^{2H}\,dB^j_s \notag\\
:={}& I_{2,j}(t) + \sum_{i=1}^m I_{3,j,i}(t) + I_{4,j}(t), \tag{4.4}
\end{align}
where
\begin{align*}
b_2^j(s) &= \int_0^1 \nabla\sigma^j\big(\theta X^n_s + (1-\theta) X^n_{\eta(s)}\big)\, b(X^n_{\eta(s)})\,d\theta, \\
\sigma_2^{j,i}(s) &= \int_0^1 \nabla\sigma^j\big(\theta X^n_s + (1-\theta) X^n_{\eta(s)}\big)\, \sigma^i(X^n_{\eta(s)})\,d\theta, \\
\sigma_3^j(s) &= \frac12 \int_0^1 \nabla\sigma^j\big(\theta X^n_s + (1-\theta) X^n_{\eta(s)}\big) \sum_{l=1}^m \sigma_0^l(s)\,d\theta,
\end{align*}
and
\begin{align}
\int_0^t \Lambda^n_t \Gamma^n_s \big[ b(X^n_s) - b(X^n_{\eta(s)}) \big]\,ds ={}& \int_0^t \Lambda^n_t \Gamma^n_s b_3(s) \Big[ b(X^n_{\eta(s)})(s - \eta(s)) + \sum_{j=1}^m \sigma^j(X^n_{\eta(s)})(B^j_s - B^j_{\eta(s)}) \notag\\
&\qquad + \frac12 \sum_{j=1}^m \sigma_0^j(s)(s - \eta(s))^{2H} \Big]\,ds \notag\\
:={}& I_{11}(t) + \sum_{j=1}^m I_{12,j}(t) + I_{13}(t), \tag{4.5}
\end{align}
where $b_3(s) = \int_0^1 \nabla b\big(\theta X^n_s + (1-\theta) X^n_{\eta(s)}\big)\,d\theta$. We also denote
\[ I_{5,j}(t) = -H \Lambda^n_t \int_0^t \Gamma^n_s \sigma_0^j(s)(s - \eta(s))^{2H-1}\,ds. \tag{4.6} \]
Substituting equations (4.4), (4.5) and (4.6) into (4.3) yields
\[ Y = I_{11} + \sum_{j=1}^m I_{12,j} + I_{13} + \sum_{j=1}^m I_{2,j} + \sum_{j,i=1}^m I_{3,j,i} + \sum_{j=1}^m I_{4,j} + \sum_{j=1}^m I_{5,j}. \tag{4.7} \]
Step 2. Denote by $(\Lambda^n)_i$, $i = 1, \dots, d$, the $i$-th column of $\Lambda^n$. We claim that $(\Lambda^n)_i$ satisfies the conditions of Lemma 3.4 with $M = d$, $\tau = 0$, $U^1_t = b_1(t)$, $U^2_t = \sigma_1^j(t)$ and $N = 2$. We first show that $U^2$ satisfies (3.24). Taking into account that $b \in C_b^3(\mathbb{R}^d; \mathbb{R}^d)$ and $\sigma \in C_b^4(\mathbb{R}^d; \mathbb{R}^{d\times m})$, it suffices to show that both $X$ and $X^n$ satisfy (3.24). This is clear for $X$ because of Lemma 3.1(i). It follows from Lemma 3.2(i) that there exist constants $K$ and $K'$ such that $(t-s)^\beta \|B\|_\beta \le K$, $0 \le s < t \le T$, implies
\[ \|X^n\|_{s,t,\beta,n} \le K'(\|B\|_\beta + 1). \]
Notice that
\[ \frac{|X^n_t - X^n_s|}{(t-s)^\beta} \le \frac{|X^n_t - X^n_{\epsilon(s)}|}{(t - \epsilon(s))^\beta} + \frac{|X^n_{\epsilon(s)} - X^n_s|}{(\epsilon(s) - s)^\beta} \le \|X^n\|_{s,t,\beta,n} + \frac{|X^n_{\epsilon(s)} - X^n_s|}{(\epsilon(s) - s)^\beta} \]
for $t, s$ with $t \ge \epsilon(s)$, where we recall that $\epsilon(s) = t_{k+1}$ when $s \in (t_k, t_{k+1}]$. Therefore, to verify (3.24) for $X^n$ it suffices to show that
\[ \|X^n\|_{s,t,\beta} \le K'(\|B\|_\beta + 1) \quad \text{for } s, t \in [t_k, t_{k+1}] \text{ for some } k. \]
But this follows immediately from (1.3). On the other hand, the fact that $D_2^* U^1$ and $D_2 U^2$ are less than $K e^{K\|B\|_\beta^{1/\beta}}$ for some $K$ follows from Proposition 3.6 and the assumption that $b \in C_b^3(\mathbb{R}^d; \mathbb{R}^d)$, $\sigma \in C_b^4(\mathbb{R}^d; \mathbb{R}^{d\times m})$, where $D_2^*$ and $D_2$ are defined in (3.22) and (3.23), respectively. In the same way we can show that the columns of $\Gamma^n$ satisfy the assumptions of Lemma 3.4. As a consequence, it follows from Lemma 3.4 that
\[ D_2 \Lambda^n \vee D_2 \Gamma^n \le K e^{K\|B\|_\beta^{1/\beta}}. \tag{4.8} \]
Step 3. From (4.8) and from the fact that $b \in C_b^3(\mathbb{R}^d; \mathbb{R}^d)$ and $\sigma \in C_b^4(\mathbb{R}^d; \mathbb{R}^{d\times m})$, it follows that
\[ \mathbb{E}(|I_{11}(t)|^p)^{1/p} \le C n^{-1} \quad \text{and} \quad \mathbb{E}(|I_{13}(t)|^p)^{1/p} \le C n^{-2H}. \tag{4.9} \]
Notice that $n^{-1}$ and $n^{-2H}$ are bounded by $\gamma_n^{-1}$. Applying estimates (11.4) and (11.5), inequality (4.8), and Proposition 3.6, we have for any $j$
\[ \mathbb{E}(|I_{12,j}(t)|^p)^{1/p} \le C n^{-1}, \qquad \mathbb{E}(|I_{2,j}(t)|^p)^{1/p} \le C n^{-1} \quad \text{and} \quad \mathbb{E}(|I_{4,j}(t)|^p)^{1/p} \le C n^{-2H}. \tag{4.10} \]
Now, to complete the proof of the theorem it suffices to show that, for any $j$, $\mathbb{E}\big( \big| \sum_{i=1}^m I_{3,j,i}(t) + I_{5,j}(t) \big|^p \big)^{1/p} \le C \gamma_n^{-1}$. For any fixed $j$ we make the decomposition
\[ \sum_{i=1}^m I_{3,j,i} + I_{5,j} = E_{1,j} + E_{2,j} + E_{3,j}, \tag{4.11} \]
where
\begin{align}
E_{1,j}(t) &= \Lambda^n_t \sum_{i=1}^m \int_0^t \big[ \Gamma^n_s \sigma_2^{j,i}(s) - \Gamma^n_{\eta(s)} (\nabla\sigma^j \sigma^i)(X^n_{\eta(s)}) \big] (B^i_s - B^i_{\eta(s)})\,dB^j_s, \notag\\
E_{2,j}(t) &= \Lambda^n_t \sum_{i=1}^m \int_0^t \Gamma^n_{\eta(s)} (\nabla\sigma^j \sigma^i)(X^n_{\eta(s)}) (B^i_s - B^i_{\eta(s)})\,dB^j_s - H \Lambda^n_t \int_0^t \Gamma^n_{\eta(s)} \sigma_0^j(s)(s - \eta(s))^{2H-1}\,ds, \tag{4.12}\\
E_{3,j}(t) &= H \Lambda^n_t \int_0^t (\Gamma^n_{\eta(s)} - \Gamma^n_s) \sigma_0^j(s)(s - \eta(s))^{2H-1}\,ds. \notag
\end{align}
Applying (4.8), it is easy to see that $\mathbb{E}(|E_{3,j}(t)|^p)^{1/p} \le C n^{1-2H-\beta}$ for any $\frac12 < \beta < H$. On the other hand, applying estimate (11.14) from Lemma 11.5 to $E_{1,j}$, we obtain $\mathbb{E}(|E_{1,j}(t)|^p)^{1/p} \le C n^{1-3\beta}$ for any $\frac12 < \beta < H$. Notice that the rates $n^{1-2H-\beta}$ and $n^{1-3\beta}$ are bounded by $\gamma_n^{-1}$ if $\beta$ is sufficiently close to $H$.
Taking into account the relationship between the Skorohod and the pathwise integral, we can express the term $E_{2,j}$ as follows:
\[ E_{2,j}(t) = \Lambda^n_t \sum_{i=1}^m \sum_{k=0}^{\lfloor nt/T \rfloor} F^{n,i,j}_{t_k} \int_{t_k}^{t_{k+1} \wedge t} \int_{t_k}^s \delta B^i_u \, \delta B^j_s, \tag{4.13} \]
for $t \in [0,T]$, where $F^{n,i,j}_t = \Gamma^n_t (\nabla\sigma^j \sigma^i)(X^n_t)$, and we define $t_{n+1} = (n+1)\frac{T}{n}$. From (4.8) and Proposition 3.6, we have
\[ \max\big\{ |F^{n,i,j}_t|,\ |D_{r_1} F^{n,i,j}_t|,\ |D_{r_2} D_{r_1} F^{n,i,j}_t| \big\} \le K e^{K\|B\|_\beta^{1/\beta}}. \tag{4.14} \]
Hence, applying estimate (11.7) from Lemma 11.4 to $E_{2,j}(t)$, we obtain $\mathbb{E}(|E_{2,j}(t)|^p)^{1/p} \le C \gamma_n^{-1}$. The proof is now complete. □

The following result provides a rate of convergence for the Malliavin derivatives of the modified scheme and some related processes. Recall that $\beta$ satisfies $\frac12 < \beta < H$.

Lemma 4.2 Let $X$ and $X^n$ be the processes defined by (1.1) and (1.3), respectively. Suppose that $\sigma \in C_b^5(\mathbb{R}^d; \mathbb{R}^{d\times m})$ and $b \in C_b^4(\mathbb{R}^d; \mathbb{R}^d)$. Let $p \ge 1$. Then:
(i) There exists a constant $C$ such that for all $u, r, s, t \in [0,T]$ and $n \in \mathbb{N}$,
\[ \max\big\{ \|D_s X_t - D_s X^n_t\|_p,\ \|D_r D_s X_t - D_r D_s X^n_t\|_p,\ \|D_u D_r D_s X_t - D_u D_r D_s X^n_t\|_p \big\} \le C n^{1-2\beta}. \tag{4.15} \]
(ii) Let $V$ and $V^n$ be $d$-dimensional processes satisfying the equations
\begin{align}
V_t &= V_0 + \int_0^t f_1(X_u, X_u) V_u \,du + \sum_{j=1}^m \int_0^t f_2^j(X_u, X_u) V_u \,dB^j_u, \tag{4.16}\\
V^n_t &= V_0 + \int_0^t f_1(X_u, X^n_u) V^n_u \,du + \sum_{j=1}^m \int_0^t f_2^j(X_u, X^n_u) V^n_u \,dB^j_u, \notag
\end{align}
where $f_1 \in C_b^3(\mathbb{R}^d \times \mathbb{R}^d; \mathbb{R}^{d\times d})$ and $f_2^j \in C_b^4(\mathbb{R}^d \times \mathbb{R}^d; \mathbb{R}^{d\times d})$. Then there exists a constant $C$ such that for all $r, s, t \in [0,T]$ and $n \in \mathbb{N}$,
\[ \max\big\{ \|V_t - V^n_t\|_p,\ \|D_s V_t - D_s V^n_t\|_p,\ \|D_r D_s V_t - D_r D_s V^n_t\|_p \big\} \le C n^{1-2\beta}. \tag{4.17} \]
Remark 4.3 The above results still hold when the approximation process $X^n$ is replaced by the one defined by the recursive scheme (1.2). The proof follows exactly the same lines.

Proof: (i) Taking the Malliavin derivative on both sides of (4.3), we obtain
\begin{align*}
D_r(X_t - X^n_t) ={}& \int_0^t D_r\big( \Lambda^n_t \Gamma^n_s \big[ b(X^n_s) - b(X^n_{\eta(s)}) \big] \big)\,ds + \sum_{j=1}^m \int_0^t D_r\big( \Lambda^n_t \Gamma^n_s \big[ \sigma^j(X^n_s) - \sigma^j(X^n_{\eta(s)}) \big] \big)\,dB^j_s \\
&+ \sum_{j=1}^m \Lambda^n_t \Gamma^n_r \big[ \sigma^j(X^n_r) - \sigma^j(X^n_{\eta(r)}) \big] - H \sum_{j=1}^m \int_0^t D_r\big( \Lambda^n_t \Gamma^n_s \sigma_0^j(s) \big) (s - \eta(s))^{2H-1}\,ds.
\end{align*}
Proposition 3.6 and equation (4.8) imply that the first, third and last terms on the above right-hand side have $L^p$-norms bounded by $C n^{1-2H}$. Applying estimate (11.15) from Lemma 11.5 to the second term, and noticing that $\|X\|_\beta$ and $\sup_{r \in [0,T]} \|D_r X\|_\beta$ have finite moments of any order, we see that its $L^p$-norm is also bounded by $C n^{1-2\beta}$. Similarly, we can take the second derivative in (4.3) and estimate each term individually as before to obtain that $\|D_r D_s X_t - D_r D_s X^n_t\|_p$ is bounded by $C n^{1-2\beta}$.

(ii) Using the chain rule for the Young integral, we derive the following explicit expression for $V_t - V^n_t$:
\begin{align}
V_t - V^n_t ={}& \int_0^t \Upsilon_t \Upsilon_s^{-1} \big( f_1(X_s, X_s) - f_1(X_s, X^n_s) \big) V^n_s \,ds \notag\\
&+ \sum_{j=1}^m \int_0^t \Upsilon_t \Upsilon_s^{-1} \big( f_2^j(X_s, X_s) - f_2^j(X_s, X^n_s) \big) V^n_s \,dB^j_s, \tag{4.18}
\end{align}
where $\Upsilon = \{\Upsilon_t, t \in [0,T]\}$ is the $\mathbb{R}^{d\times d}$-valued process that satisfies
\[ \Upsilon_t = I + \int_0^t f_1(X_s, X_s) \Upsilon_s \,ds + \sum_{j=1}^m \int_0^t f_2^j(X_s, X_s) \Upsilon_s \,dB^j_s. \]
Lemma 3.4 implies that there exists a constant $K$ such that for all $n \in \mathbb{N}$ and $u, r, s, t \in [0,T]$,
\[ \max\big\{ \Upsilon_t,\ D_s \Upsilon_t,\ D_r D_s \Upsilon_t,\ D_u D_r D_s \Upsilon_t \big\} \le K e^{K\|B\|_\beta^{1/\beta}}. \tag{4.19} \]
Therefore, applying estimate (11.4) to the second integral in (4.18) with $\nu = 0$ and taking into account (4.15), we obtain $\|V - V^n\|_p \le C n^{1-2\beta}$. Taking the Malliavin derivative on both sides of (4.18), and then applying estimate (11.4) from Lemma 11.3 and (4.15) as before, we obtain the desired estimate for $\|D_s V_t - D_s V^n_t\|_p$. The estimate for $\|D_r D_s V_t - D_r D_s V^n_t\|_p$ can be obtained in a similar way. □

We define $\Lambda = \{\Lambda_t, t \in [0,T]\}$ as the solution of the limiting equation of (4.1), that is,
\[ \Lambda_t = I + \int_0^t \nabla b(X_s) \Lambda_s \,ds + \sum_{j=1}^m \int_0^t \nabla\sigma^j(X_s) \Lambda_s \,dB^j_s. \tag{4.20} \]
The inverse of the matrix $\Lambda_t$, denoted by $\Gamma_t$, exists and satisfies
\[ \Gamma_t = I - \int_0^t \Gamma_s \nabla b(X_s)\,ds - \sum_{j=1}^m \int_0^t \Gamma_s \nabla\sigma^j(X_s)\,dB^j_s. \]
It follows from Lemma 4.2 that if we assume $\sigma \in C_b^5(\mathbb{R}^d; \mathbb{R}^{d\times m})$ and $b \in C_b^4(\mathbb{R}^d; \mathbb{R}^d)$, then estimate (4.17) holds with the pair $(V, V^n)$ replaced by $(\Gamma_i, \Gamma^n_i)$ or $(\Lambda_i, \Lambda^n_i)$, $i = 1, \dots, d$, where the subindex $i$ denotes the $i$-th column of each matrix.
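The identity $\Gamma_t \Lambda_t = \Lambda_t \Gamma_t = I$ underlying the variation-of-constants arguments above can be checked numerically. The sketch below (an illustration, not from the paper) replaces the rough driver by an arbitrary smooth matrix $A(t)$, standing in for $\nabla b(X_t)\,dt + \sum_j \nabla\sigma^j(X_t)\,dB^j_t$, and solves the forward equation for $\Lambda$ and the companion equation for $\Gamma$ by Euler steps:

```python
# Numerical sanity check (illustration only): the flows Lambda' = A(t) Lambda
# and Gamma' = -Gamma A(t), both started at the identity, are mutual inverses.
import numpy as np

def flows(A, T=1.0, n=4000):
    """Euler discretization of Lambda' = A(t) Lambda and Gamma' = -Gamma A(t)
    with Lambda_0 = Gamma_0 = I."""
    d = A(0.0).shape[0]
    Lam, Gam = np.eye(d), np.eye(d)
    dt = T / n
    for k in range(n):
        t = k * dt
        Lam = Lam + A(t) @ Lam * dt
        Gam = Gam - Gam @ A(t) * dt
    return Lam, Gam

# A(t) is a hypothetical smooth coefficient chosen for the demo.
A = lambda t: np.array([[0.0, np.sin(t)], [np.cos(t), -0.1 * t]])
Lam, Gam = flows(A)
print(np.round(Gam @ Lam, 4))  # close to the 2x2 identity matrix
```

The product stays within the Euler discretization error of the identity, mirroring the exact relation $\Gamma^n_t \Lambda^n_t = I$ used in (4.2)–(4.3).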
5 Central limit theorem for weighted sums

Our goal in this section is to prove a central limit result for weighted sums (see Proposition 5.6 below) that will play a fundamental role in the proof of Theorem 6.1 in the next section. This result is of independent interest, and we devote this entire section to it.

We recall that $B = \{B_t, t \in [0,T]\}$ is an $m$-dimensional fBm, and we assume that the Hurst parameter satisfies $H \in (\frac12, \frac34]$. For any $n \ge 1$ we set $t_j = \frac{jT}{n}$, $j = 0, \dots, n$. Recall that $\eta(s) = t_k$ if $t_k \le s < t_{k+1}$. Consider the $d \times d$ matrix-valued process
\[ \Xi^{n,i,j}_t = \gamma_n \sum_{k=0}^{\{t\}} \int_{t_k}^{t_{k+1}} (B^i_s - B^i_{\eta(s)})\,\delta B^j_s, \qquad 1 \le i, j \le m, \]
where we denote $\{t\} = \lfloor \frac{nt}{T} \rfloor$ for $t \in [0,T)$ and $\{T\} = n - 1$.

Proposition 5.1 The following stable convergence holds as $n$ tends to infinity:
\[ (\Xi^n, B) \to (W, B), \]
where $W = \{W_t, t \in [0,T]\}$ is the matrix-valued Brownian motion introduced in Section 2.4, and $W$ and $B$ are independent.

Proof: From inequality (11.7) in Lemma 11.4 it follows that
\[ \mathbb{E}\big| \Xi^n_{t_k} - \Xi^n_{t_j} \big|^4 \le C \Big( \frac{k-j}{n} \Big)^2, \tag{5.1} \]
for any $j \le k$. This implies the tightness of $(\Xi^n, B)$. It remains to show the convergence of the finite-dimensional distributions of $(\Xi^n, B)$ to those of $(W, B)$. To do this, we fix a finite set of points $r_1, \dots, r_{L+1} \in [0,T]$ such that $0 = r_1 < r_2 < \cdots < r_{L+1} \le T$ and define the random vectors $B_L = (B_{r_2} - B_{r_1}, \dots, B_{r_{L+1}} - B_{r_L})$, $\Xi^n_L = (\Xi^n_{r_2} - \Xi^n_{r_1}, \dots, \Xi^n_{r_{L+1}} - \Xi^n_{r_L})$ and $W_L = (W_{r_2} - W_{r_1}, \dots, W_{r_{L+1}} - W_{r_L})$. We claim that, as $n$ tends to infinity, the following convergence in law holds:
\[ (\Xi^n_L, B_L) \Rightarrow (W_L, B_L). \tag{5.2} \]
For notational simplicity, we add one term to each component of $\Xi^n_L$ and define
\[ \Theta^n_l(i,j) := \Xi^{n,i,j}_{r_{l+1}} - \Xi^{n,i,j}_{r_l} + \zeta^{i,j}_{\{r_l\},n} = \gamma_n \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \zeta^{i,j}_{k,n}, \qquad 1 \le l \le L, \quad 1 \le i, j \le d, \tag{5.3} \]
where
\[ \zeta^{i,j}_{k,n} = \int_{t_k}^{t_{k+1}} (B^i_s - B^i_{t_k})\,\delta B^j_s. \]
Then Slutsky's lemma implies that the convergence in law in (5.2) is equivalent to
\[ \big( \Theta^n_l(i,j),\ 1 \le i, j \le d,\ 1 \le l \le L,\ B_L \big) \Rightarrow (W_L, B_L). \]
According to [25] (see also Theorem 6.2.3 in [24]), to show the convergence in law of $(\Theta^n_L, B_L)$ it suffices to show the convergence of each component of $(\Theta^n_L, B_L)$ to the corresponding component of $(W_L, B_L)$, together with the convergence of the covariance matrix. The convergence of the covariance matrix of $\Theta^n_L$ follows from Propositions 5.2 and 5.3 below. The convergence in law of each component to a Gaussian distribution follows from Proposition 5.5 below and the fourth moment theorem (see [21] and also Theorem 5.2.7 in [24]). This completes the proof. □
In order to show the convergence of the covariance matrix and of the fourth moment of $\Theta^n$, we first introduce the following notation:
\begin{align}
D_k &= \big\{ (s,t,v,u) : t_k \le v \le s \le t_{k+1},\ u, t \in [0,T] \big\}, \tag{5.4}\\
D_{k_1,k_2} &= \big\{ (s,t,v,u) : t_{k_2} \le v \le s \le t_{k_2+1},\ t_{k_1} \le u \le t \le t_{k_1+1} \big\}. \notag
\end{align}
The next two propositions provide the convergence of the covariance $\mathbb{E}[\Theta^n_{l'}(i',j')\,\Theta^n_l(i,j)]$ in the cases $l = l'$ and $l \ne l'$, respectively. We denote $\beta_{k/n}(s) = \mathbf{1}_{[t_k, t_{k+1}]}(s)$.

Proposition 5.2 Let $\Theta^n_l(i,j)$ be defined in (5.3). Then
\[ \mathbb{E}[\Theta^n_l(i',j')\,\Theta^n_l(i,j)] \to \alpha_H^2 \, \frac{r_{l+1} - r_l}{T} \, \big( R\,\delta_{ji'}\delta_{ij'} + Q\,\delta_{jj'}\delta_{ii'} \big) \quad \text{as } n \to +\infty. \tag{5.5} \]
Here $\delta_{ii'}$ is the Kronecker delta, $\alpha_H = H(2H-1)$, and $Q$ and $R$ are the constants defined in (2.12).

Proof: The proof will be done in several steps.

Step 1. Applying the integration by parts formula (2.7) twice, we have
\[ \mathbb{E}[\Theta^n_l(i',j')\,\Theta^n_l(i,j)] = \alpha_H^2 \gamma_n \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \int_{D_k} D^i_u D^j_t \Theta^n_l(i',j')\,\mu(dv\,du)\,\mu(ds\,dt), \tag{5.6} \]
where we recall that $\{t\} = \lfloor nt/T \rfloor$ for $t \in [0,T)$ and $\{T\} = n-1$, and $D_k$ is defined in (5.4). Since
\[ D^i_u D^j_t \Theta^n_l(i',j') = \gamma_n \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \big[ \mathbf{1}_{[t_k,t]}(u)\,\beta_{k/n}(t)\,\delta_{jj'}\delta_{ii'} + \mathbf{1}_{[t_k,u]}(t)\,\beta_{k/n}(u)\,\delta_{ji'}\delta_{ij'} \big], \tag{5.7} \]
the left-hand side of (5.5) equals
\[ \alpha_H^2 \gamma_n^2 \sum_{k,k'=\{r_l\}}^{\{r_{l+1}\}} \int_{D_k} \big[ \mathbf{1}_{[t_{k'},t]}(u)\,\beta_{k'/n}(t)\,\delta_{jj'}\delta_{ii'} + \mathbf{1}_{[t_{k'},u]}(t)\,\beta_{k'/n}(u)\,\delta_{ji'}\delta_{ij'} \big]\,\mu(dv\,du)\,\mu(ds\,dt) := \alpha_H^2 \gamma_n^2 \big( G_1\,\delta_{jj'}\delta_{ii'} + G_2\,\delta_{ji'}\delta_{ij'} \big). \]
In the next two steps we compute the limits of $\gamma_n^2 G_1$ and $\gamma_n^2 G_2$ as $n$ tends to infinity, first in the case $H \in (\frac12, \frac34)$ and then in the case $H = \frac34$.

Step 2. We consider the case $H \in (\frac12, \frac34)$. Recall that
\[ Q(p) = T^{4H} \int_{\substack{p \le v \le s \le p+1 \\ 0 \le u \le t \le 1}} \mu(dv\,du)\,\mu(ds\,dt) = n^{4H} \int_{D_{k_0,\,k_0+p}} \mu(dv\,du)\,\mu(ds\,dt), \]
which is independent of $n$, where the set $D_{k_1,k_2}$ is defined in (5.4). We can express $\gamma_n^2 G_1$ in terms of $Q(p)$ as follows:
\[ \gamma_n^2 G_1 = n^{4H-1} \sum_{k,k'=\{r_l\}}^{\{r_{l+1}\}} \int_{D_{k',k}} \mu(dv\,du)\,\mu(ds\,dt) = \frac1n \sum_{p=\{r_l\}-\{r_{l+1}\}}^{\{r_{l+1}\}-\{r_l\}} \ \sum_{k'=(\{r_l\}-p)\vee\{r_l\}}^{(\{r_{l+1}\}-p)\wedge\{r_{l+1}\}} Q(p) = \sum_{p=-\infty}^{\infty} \Psi^n_l(p)\,Q(p), \]
where
\[ \Psi^n_l(p) = \frac{(\{r_{l+1}\}-p)\wedge\{r_{l+1}\} - (\{r_l\}-p)\vee\{r_l\}}{n}\,\mathbf{1}_{[\{r_l\}-\{r_{l+1}\},\,\{r_{l+1}\}-\{r_l\}]}(p). \]
The term $\Psi^n_l(p)$ is uniformly bounded and converges to $\frac{r_{l+1}-r_l}{T}$ as $n$ tends to infinity for any fixed $p$. Therefore, taking into account that $\sum_{p=-\infty}^{\infty} Q(p) = Q < \infty$, the dominated convergence theorem implies
\[ \lim_{n\to\infty} \gamma_n^2 G_1 = \frac{r_{l+1}-r_l}{T}\,Q. \]
Similarly, we can show that
\[ \lim_{n\to\infty} \gamma_n^2 G_2 = \frac{r_{l+1}-r_l}{T}\,R. \]

Step 3. In the case $H = \frac34$ we can write
\begin{align*}
\gamma_n^2 G_1 &= \frac{n^2}{\log n} \sum_{k,k'=\{r_l\}}^{\{r_{l+1}\}} \int_{D_{k',k}} \mu(dv\,du)\,\mu(ds\,dt) \\
&= \frac{1}{n \log n} \sum_{p=\{r_l\}-\{r_{l+1}\}}^{\{r_{l+1}\}-\{r_l\}} \ \sum_{k'=\{r_l\}}^{\{r_{l+1}\}} Q(p) - \frac{1}{n \log n} \Bigg( \sum_{p=1}^{\{r_{l+1}\}-\{r_l\}} \ \sum_{k'=\{r_{l+1}\}-p+1}^{\{r_{l+1}\}} Q(p) + \sum_{p=\{r_l\}-\{r_{l+1}\}}^{0} \ \sum_{k'=\{r_l\}}^{\{r_l\}-p-1} Q(p) \Bigg) \\
&:= G_{11} + G_{12}.
\end{align*}
Taking into account that $Q(p)$ behaves like $1/|p|$ as $|p|$ tends to infinity, it is easy to see that $G_{12}$ converges to zero. On the other hand, recall that
\[ Q = \lim_{n\to+\infty} \frac{1}{\log n} \sum_{|p| \le n} Q(p). \]
This implies that $G_{11}$ converges to $\frac{Q}{T}(r_{l+1}-r_l)$, which gives the limit of $\gamma_n^2 G_1$. The limit of $\gamma_n^2 G_2$ can be obtained similarly. □

Proposition 5.3 Let $l, l' \in \{1, \dots, L\}$ be such that $l \ne l'$, and let $\Theta^n$ be defined as in (5.3). Then
\[ \lim_{n\to\infty} \mathbb{E}[\Theta^n_{l'}(i',j')\,\Theta^n_l(i,j)] = 0. \tag{5.8} \]

Proof: Without loss of generality, we assume $l' < l$. As in (5.6) we have
\[ \mathbb{E}[\Theta^n_{l'}(i',j')\,\Theta^n_l(i,j)] = \alpha_H^2 \gamma_n \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \int_{D_k} D^i_u D^j_t \Theta^n_{l'}(i',j')\,\mu(dv\,du)\,\mu(ds\,dt). \]
Taking into account (5.7), we can write
\begin{align*}
\mathbb{E}[\Theta^n_{l'}(i',j')\,\Theta^n_l(i,j)] &= \alpha_H^2 \gamma_n^2 \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \ \sum_{k'=\{r_{l'}\}}^{\{r_{l'+1}\}} \int_{D_k} \big[ \mathbf{1}_{[t_{k'},t]}(u)\,\beta_{k'/n}(t)\,\delta_{jj'}\delta_{ii'} + \mathbf{1}_{[t_{k'},u]}(t)\,\beta_{k'/n}(u)\,\delta_{ji'}\delta_{ij'} \big]\,\mu(dv\,du)\,\mu(ds\,dt) \\
&:= \alpha_H^2 \gamma_n^2 \big( \widetilde G_1\,\delta_{jj'}\delta_{ii'} + \widetilde G_2\,\delta_{ji'}\delta_{ij'} \big).
\end{align*}
In the case $H \in (\frac12, \frac34)$ we have
\[ \gamma_n^2 \widetilde G_1 = n^{4H-1} \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \ \sum_{k'=\{r_{l'}\}}^{\{r_{l'+1}\}} \int_{D_k} \mathbf{1}_{[t_{k'},t]}(u)\,\beta_{k'/n}(t)\,\mu(dv\,du)\,\mu(ds\,dt) = \frac1n \sum_{p=\{r_l\}-\{r_{l'+1}\}}^{\{r_{l+1}\}-\{r_{l'}\}} \ \sum_{k'=(\{r_l\}-p)\vee\{r_{l'}\}}^{\{r_{l'+1}\}\wedge(\{r_{l+1}\}-p)} Q(p) = \sum_{p=-\infty}^{\infty} \Phi^n_l(p)\,Q(p), \]
where
\[ \Phi^n_l(p) = \frac{\max\big\{ (\{r_{l'+1}\}-p)\wedge\{r_{l+1}\} - (\{r_l\}-p)\vee\{r_{l'}\},\ 0 \big\}}{n}\,\mathbf{1}_{[\{r_l\}-\{r_{l'+1}\},\,\{r_{l+1}\}-\{r_{l'}\}]}(p). \]
The term $\Phi^n_l(p)$ is uniformly bounded and converges to $0$ as $n$ tends to infinity for any fixed $p$, because $l' < l$. Therefore, taking into account that $\sum_{p=-\infty}^{\infty} Q(p) = Q < \infty$, the dominated convergence theorem implies that $\gamma_n^2 \widetilde G_1$ converges to zero as $n$ tends to infinity. Similarly, we can show that $\gamma_n^2 \widetilde G_2$ converges to zero as $n$ tends to infinity.

In the case $H = \frac34$ we can write
\begin{align*}
\gamma_n^2 \widetilde G_1 &= \frac{n^2}{\log n} \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \ \sum_{k'=\{r_{l'}\}}^{\{r_{l'+1}\}} \int_{D_{k',k}} \mu(dv\,du)\,\mu(ds\,dt) = \frac{1}{n \log n} \sum_{p=\{r_l\}-\{r_{l'+1}\}}^{\{r_{l+1}\}-\{r_{l'}\}} \ \sum_{k'=(\{r_l\}-p)\vee\{r_{l'}\}}^{\{r_{l'+1}\}\wedge(\{r_{l+1}\}-p)} Q(p) \\
&\le \frac{1}{n \log n} \sum_{p=\{r_l\}-\{r_{l'+1}\}}^{\{r_{l+1}\}-\{r_{l'}\}} \ \sum_{k'=\{r_{l'}\}}^{\{r_{l+1}\}-p} Q(p) \le \frac{1}{n \log n} \sum_{p=-n}^{0} (|p|+1)\,Q(p) \le \frac{C}{\log n}.
\end{align*}
The last inequality follows from $Q(p) = O(1/|p|)$. This shows that $\gamma_n^2 \widetilde G_1$ converges to zero as $n$ tends to infinity. In the same way we can show that $\gamma_n^2 \widetilde G_2$ converges to zero. □
The following estimate is needed in the calculation of the fourth moment of $\Theta^n_l(i,j)$ in Proposition 5.5.

Lemma 5.4 Let $H \in (\frac12, \frac34]$. We have the estimate
\[ \sum_{k_1,k_2,k_3,k_4=0}^{n-1} \langle \beta_{k_1/n}, \beta_{k_2/n} \rangle_{\mathcal H} \langle \beta_{k_2/n}, \beta_{k_3/n} \rangle_{\mathcal H} \langle \beta_{k_3/n}, \beta_{k_4/n} \rangle_{\mathcal H} \langle \beta_{k_1/n}, \beta_{k_4/n} \rangle_{\mathcal H} \le C n^{-2} \gamma_n^{-2}. \]

Proof: Since the indices $k_1, k_2, k_3, k_4$ are symmetric, we only need to consider the case $k_1 \le k_2 \le k_3 \le k_4$. By the definition of the inner product we can write
\begin{align*}
&\sum_{0 \le k_1 \le k_2 \le k_3 \le k_4 \le n-1} \langle \beta_{k_1/n}, \beta_{k_2/n} \rangle_{\mathcal H} \langle \beta_{k_2/n}, \beta_{k_3/n} \rangle_{\mathcal H} \langle \beta_{k_3/n}, \beta_{k_4/n} \rangle_{\mathcal H} \langle \beta_{k_1/n}, \beta_{k_4/n} \rangle_{\mathcal H} \\
&\quad = \frac{T^{8H}}{2^4 n^{8H}} \sum_{k_1=0}^{n-1} \sum_{k_2=k_1}^{n-1} \sum_{k_3=k_2}^{n-1} \sum_{k_4=k_3}^{n-1} \big( |k_2-k_1+1|^{2H} + |k_2-k_1-1|^{2H} - 2|k_2-k_1|^{2H} \big) \\
&\qquad \times \big( |k_3-k_2+1|^{2H} + |k_3-k_2-1|^{2H} - 2|k_3-k_2|^{2H} \big) \\
&\qquad \times \big( |k_4-k_3+1|^{2H} + |k_4-k_3-1|^{2H} - 2|k_4-k_3|^{2H} \big) \\
&\qquad \times \big( |k_4-k_1+1|^{2H} + |k_4-k_1-1|^{2H} - 2|k_4-k_1|^{2H} \big).
\end{align*}
Let $p_1 = k_2 - k_1$, $p_2 = k_3 - k_2$, $p_3 = k_4 - k_3$. Then the above sum is bounded by
\[ C n^{1-8H} \sum_{p_1,p_2,p_3=1}^{n-1} p_1^{2H-2} p_2^{2H-2} p_3^{2H-2} (p_1+p_2+p_3)^{2H-2}, \]
which is in turn bounded by
\[ C n^{1-8H} \sum_{p_1,p_2,p_3=1}^{n-1} p_1^{2H-2} p_2^{2H-2} p_3^{4H-4}. \]
In the case $H \in (\frac12, \frac34)$ the series $\sum_{p_3=1}^{n-1} p_3^{4H-4}$ is convergent; when $H = \frac34$ it is bounded by $C \log n$. So the above sum is bounded by $C n^{-4H-1}$ if $\frac12 < H < \frac34$, and by $C n^{-4} \log n$ if $H = \frac34$. The proof of the lemma is complete. □

The following proposition contains a result on the convergence of the fourth moment of $\Theta^n_l(i,j)$.

Proposition 5.5 The fourth moment of $\Theta^n_l(i,j)$ and $3\,\mathbb{E}(|\Theta^n_l(i,j)|^2)^2$ converge to the same limit as $n \to \infty$.

Proof:
Applying the integration by parts formula (2.7) yields
\begin{align*}
\mathbb{E}[\Theta^n_l(i,j)^4] &= \alpha_H^2 \gamma_n \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \int_{D_k} \mathbb{E}\big[ D^i_u D^j_t \Theta^n_l(i,j)^3 \big]\,\mu(dv\,du)\,\mu(ds\,dt) \\
&= \alpha_H^2 \gamma_n \sum_{k=\{r_l\}}^{\{r_{l+1}\}} \int_{D_k} \mathbb{E}\Big\{ 3\,\Theta^n_l(i,j)^2\, D^i_u D^j_t[\Theta^n_l(i,j)] + 6\,\Theta^n_l(i,j)\, D^j_t[\Theta^n_l(i,j)]\, D^i_u[\Theta^n_l(i,j)] \Big\}\,\mu(dv\,du)\,\mu(ds\,dt) \\
&:= G_1 + G_2.
\end{align*}
Since $D^i_u D^j_t[\Theta^n_l(i,j)]$ is deterministic, it is easy to see that $G_1 = 3\,\mathbb{E}(|\Theta^n_l(i,j)|^2)^2$. We have shown the convergence of $\mathbb{E}(|\Theta^n_l(i,j)|^2)$ in Proposition 5.2, so it remains to show that $G_2 \to 0$ as $n \to \infty$. Applying the integration by parts formula (2.7) again yields
\[ G_2 = 6 \alpha_H^4 \gamma_n^2 \sum_{k,k'=\{r_l\}}^{\{r_{l+1}\}} \int_{D_k \times D_{k'}} \mathbb{E}\Big\{ D^{i}_{u'} D^{j}_{t'} \big( D^j_t[\Theta^n_l(i,j)]\, D^i_u[\Theta^n_l(i,j)] \big) \Big\}\,\mu(dv'\,du')\,\mu(ds'\,dt')\,\mu(dv\,du)\,\mu(ds\,dt). \]
Using equation (5.7), we derive the inequalities
\begin{align*}
G_2 &\le 6 \alpha_H^4 \gamma_n^4 \sum_{k,k',h,h'=\{r_l\}}^{\{r_{l+1}\}} \int_{D_k \times D_{k'}} \big[ \beta_{h/n}(t)\beta_{h/n}(t')\beta_{h'/n}(u)\beta_{h'/n}(u') + \beta_{h/n}(t)\beta_{h/n}(u')\beta_{h'/n}(u)\beta_{h'/n}(t') \big]\,\mu(dv'\,du')\,\mu(ds'\,dt')\,\mu(dv\,du)\,\mu(ds\,dt) \\
&\le 12 \alpha_H^4 \gamma_n^4 \sum_{k,k',h,h'=\{r_l\}}^{\{r_{l+1}\}} \langle \beta_{h/n}, \beta_{k/n} \rangle_{\mathcal H} \langle \beta_{h'/n}, \beta_{k/n} \rangle_{\mathcal H} \langle \beta_{h/n}, \beta_{k'/n} \rangle_{\mathcal H} \langle \beta_{h'/n}, \beta_{k'/n} \rangle_{\mathcal H}.
\end{align*}
The convergence of $G_2$ to zero now follows from Lemma 5.4. □

We can now establish a central limit theorem for weighted sums, based on the previous proposition. Recall that $\zeta^{i,j}_{k,n} = \int_{t_k}^{t_{k+1}} (B^i_s - B^i_{t_k})\,\delta B^j_s$, $k = 0, \dots, n-1$, and $\zeta^{i,j}_{n,n} = 0$.

Proposition 5.6 Let $f = \{f_t, t \in [0,T]\}$ be a stochastic process with values in the space of $d \times d$ matrices and with Hölder continuous trajectories of index greater than $\frac12$. Set, for $i, j = 1, \dots, m$,
\[ \Psi^{i,j}_n(t) = \sum_{k=0}^{\{t\}} f^{i,j}_{t_k} \zeta^{i,j}_{k,n}. \]
Then the following stable convergence in the space $D([0,T])$ holds as $n$ tends to infinity:
\[ \big\{ \gamma_n \Psi_n(t),\ t \in [0,T] \big\} \to \Big\{ \Big( \int_0^t f^{i,j}_s\,dW^{ij}_s \Big)_{1 \le i,j \le m},\ t \in [0,T] \Big\}, \]
where $W$ is a matrix-valued Brownian motion independent of $B$, with the covariance introduced in Section 2.4.

Proof: This proposition is an immediate consequence of the central limit result for weighted random sums proved in [3]. In fact, the process $\Psi^{i,j}_n(t)$ satisfies the required conditions thanks to Proposition 5.1 and the estimate (5.1). □
6 CLT for the modified Euler scheme in the case H ∈ (1/2, 3/4]

The following central limit type result shows that in the case $H \in (\frac12, \frac34]$, the process $\gamma_n(X - X^n)$ converges stably, as $n$ tends to infinity, to the solution of a linear stochastic differential equation driven by a matrix-valued Brownian motion independent of $B$.

Theorem 6.1 Let $H \in (\frac12, \frac34]$ and let $X$, $X^n$ be the solutions of the SDE (1.1) and of the recursive scheme (1.3), respectively. Let $W = \{W_t, t \in [0,T]\}$ be the matrix-valued Brownian motion introduced in Section 2.4. Assume $\sigma \in C_b^5(\mathbb{R}^d; \mathbb{R}^{d\times m})$ and $b \in C_b^4(\mathbb{R}^d; \mathbb{R}^d)$. Then the following stable convergence in the space $C([0,T])$ holds as $n$ tends to infinity:
\[ \big\{ \gamma_n (X_t - X^n_t),\ t \in [0,T] \big\} \to \big\{ U_t,\ t \in [0,T] \big\}, \tag{6.1} \]
where $\{U_t, t \in [0,T]\}$ is the solution of the linear $d$-dimensional SDE
\[ U_t = \int_0^t \nabla b(X_s) U_s\,ds + \sum_{j=1}^m \int_0^t \nabla\sigma^j(X_s) U_s\,dB^j_s + \sum_{i,j=1}^m \int_0^t (\nabla\sigma^j \sigma^i)(X_s)\,dW^{ij}_s. \tag{6.2} \]

Remark 6.2 It follows from [12] that when $B$ is replaced by a standard Brownian motion and $b = 0$, the process $\sqrt{n}(X - X^n)$ converges in law to the unique solution of the $d$-dimensional SDE
\[ dU_t = \sum_{j=1}^m \nabla\sigma^j(X_t) U_t\,dB^j_t + \sqrt{\frac{T}{2}} \sum_{j,i=1}^m (\nabla\sigma^j \sigma^i)(X_t)\,dW^{ij}_t, \qquad U_0 = 0. \tag{6.3} \]
Here $W^{ij}$, $i, j = 1, \dots, m$, are independent one-dimensional Brownian motions, independent of $B$. To compare our Theorem 6.1 with this result, we let the Hurst parameter $H$ converge to $\frac12$. Then the constant $R$ converges to $0$ and $\frac{\alpha_H}{\sqrt{T}}\sqrt{Q - R}$ converges to $\sqrt{\frac{T}{2}}$. This formally recovers equation (6.3).

Remark 6.3 The process $U$ defined in (6.2) is given explicitly by
\[ U_t = \sum_{i,j=1}^m \int_0^t \Lambda_t \Gamma_s (\nabla\sigma^j \sigma^i)(X_s)\,dW^{ij}_s, \qquad t \in [0,T], \tag{6.4} \]
where we recall that Λ is defined in (4.20) and Γ is its inverse. Proof of Theorem 6.1. Recall that Yt = Xt − Xtn . We would like to show that the process {γn Yt , Bt , t ∈ [0, T ]} converges weakly in C([0, T ]; Rd+m ) to {Ut , Bt , t ∈ [0, T ]}. To do this, it suffices to prove the following two statements: (i) Convergence of the finite dimensional distributions of {γn Yt , Bt , t ∈ [0, T ]}; (ii) Tightness of the process {γn Yt , Bt , t ∈ [0, T ]}. We first show (i). Recall the decomposition of Yt given in (4.7) and (4.11) and recall the estimates obtained for each term in the decomposition of Yt . Since the other terms converge to zero in Lp for p ≥ 1, from the Slutsky theorem it suffices to consider the convergence of the finite dimensional Pm distributions of {γn j=1 E2,j (t), Bt , t ∈ [0, T ]}, where E2,j is defined by (4.12). Set Fsi,j := Λnt Γns (∇σ j σ i )(Xsn ) − Λt Γs (∇σ j σ i )(Xs ).
(6.5)
It follows from Lemma 4.2 and Remark 4.3 that
i,j
i,j i,j sup
Ft ∨ Ds Ft ∨ Dr Ds Ft ≤ Cn1−2β . r,s,t∈[0,T ]
p
p
p
Denote nt
e2,j (t) = Λt E
c m bX T X
Z
j i
tk+1
Z
s
Γtk (∇σ σ )(Xtk ) tk
i=1 k=0
δBui δBsj ,
(6.6)
tk
e2,j (T ) = E e2,j (T −). Then applying Lemma 11.4 (11.8) with F i,j defined by for t ∈ [0, T ), and E (6.5), we obtain that
e2,j (t) γn E2,j (t) − E
≤ Cγn n−H n1−2β , p
which converges to zero as n → ∞ since β can be taken as close as possible to H. By Slutsky theorem again, it suffices to consider the convergence of the finite dimensional distributions of m X e2,j (t), Bt , t ∈ [0, T ] . γn E (6.7) j=1
28
Applying Proposition 5.6 to the family of processes $f^{i,j}_t = \Gamma_t (\nabla \sigma^j \sigma^i)(X_t)$, we obtain the convergence of the finite dimensional distributions of $\{\gamma_n \sum_{j=1}^m \Gamma_t \widetilde E_{2,j}(t), B_t, t \in [0,T]\}$ to those of $\{\Gamma_t U_t, B_t, t \in [0,T]\}$. This implies the convergence of the finite dimensional distributions of $\{\gamma_n \sum_{j=1}^m E_{2,j}(t), B_t, t \in [0,T]\}$ to those of $\{U_t, B_t, t \in [0,T]\}$.

To show (ii), we prove the following tightness condition:
$$
\sup_{n \ge 1} \mathrm{E}\, \big| \gamma_n (X_t - X^n_t) - \gamma_n (X_s - X^n_s) \big|^4 \le C (t-s)^2. \tag{6.8}
$$
Taking into account (4.7) and (4.11), we only need to show the above inequality for $\gamma_n I_1^1$, $\gamma_n I_1^{2,j}$, $\gamma_n I_1^3$, $\gamma_n I_{2,j}$, $\gamma_n I_{4,j}$, $\gamma_n E_{1,j}$, $\gamma_n E_{2,j}$ and $\gamma_n E_{3,j}$. The tightness for the terms $\gamma_n I_1^1$, $\gamma_n I_1^3$ and $\gamma_n E_{3,j}$ is clear. Now we consider the tightness of the term $I_{2,j}$. We write
$$
I_{2,j}(t) - I_{2,j}(s) = (\Lambda^n_t - \Lambda^n_s) \int_0^t \Gamma^n_s b_2^j(s)(s - \eta(s))\,dB^j_s + \Lambda^n_s \int_s^t \Gamma^n_u b_2^j(u)(u - \eta(u))\,dB^j_u.
$$
Then it follows from Lemma 11.3 (11.4) that
$$
\mathrm{E}\left( \Big| \gamma_n (\Lambda^n_t - \Lambda^n_s) \int_0^t \Gamma^n_s b_2^j(s)(s - \eta(s))\,dB^j_s \Big|^4 \right) \le C (t-s)^{4\beta} \big( \mathrm{E}\|\Lambda^n\|_\beta^8 \big)^{\frac12} = C (t-s)^{4\beta}.
$$
Lemma 11.3 (11.4) also implies that the fourth moment of the second term is bounded by $C(t-s)^{4H}$. The tightness for $\gamma_n I_1^{2,j}$, $\gamma_n I_{4,j}$, $\gamma_n E_{1,j}$, $\gamma_n E_{2,j}$ can be obtained in a similar way by applying the estimates (11.5) and (11.4) from Lemma 11.3, (11.14) from Lemma 11.5, and (11.7) from Lemma 11.4, respectively. $\Box$
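The tightness condition (6.8) suffices by the classical Kolmogorov moment criterion, recalled here for convenience in the form used above (our restatement, not part of the original proof):

```latex
% Kolmogorov moment criterion for tightness in C([0,T]):
% if the continuous processes Z^n satisfy, for some a, b, C > 0,
%   sup_n E |Z^n(t) - Z^n(s)|^a <= C |t - s|^{1+b},  s, t in [0,T],
% and the sequence Z^n(0) is tight, then {Z^n} is tight in C([0,T]).
\sup_{n \ge 1} \mathrm{E}\,\bigl|Z^n(t) - Z^n(s)\bigr|^{a} \le C\,|t - s|^{1 + b}
\ \Longrightarrow\ \{Z^n\}_{n \ge 1} \text{ is tight in } C([0,T]).
```

Condition (6.8) is the case $Z^n = \gamma_n(X - X^n)$, $a = 4$, $b = 1$, with $Z^n(0) = 0$.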
7 A limit theorem in $L^p$ for weighted sums
Following the methodology used in [3], we can show the following limit result for random weighted sums. The proof uses the techniques of fractional calculus and the classical decomposition into large and small blocks. Consider a double sequence of random variables $\zeta = \{\zeta_{k,n}, n \in \mathbb{N}, k = 0, 1, \dots, n\}$ and, for each $t \in [0,T]$, denote
$$
g_n(t) := \sum_{k=0}^{\lfloor nt/T \rfloor} \zeta_{k,n}. \tag{7.1}
$$
Proposition 7.1 Fix $\lambda > 1-\beta$, where $0 < \beta < 1$. Let $p \ge 1$ and $p', q' > 1$ be such that $\frac{1}{p'} + \frac{1}{q'} = 1$, $pp' > \frac{1}{\beta}$ and $pq' > \frac{1}{\lambda}$. Let $g_n$ be the sequence of processes defined in (7.1). Suppose that the following conditions hold true:

(i) For each $t \in [0,T]$, $g_n(t) \to z(t)$ in $L^{pq'}$.

(ii) For any $j, k = 0, 1, \dots, n$ we have
$$
\mathrm{E}\big( |g_n(kT/n) - g_n(jT/n)|^{pq'} \big) \le C \big( |k-j|/n \big)^{\lambda pq'}.
$$

Let $f = \{f(t), t \in [0,T]\}$ be a process such that $\mathrm{E}(\|f\|_\beta^{pp'}) \le C$ and $\mathrm{E}(|f(0)|^{pp'}) \le C$. Then for each $t \in [0,T]$,
$$
F(t) := \sum_{k=0}^{\lfloor nt/T \rfloor} f(t_k)\, \zeta_{k,n} \to \int_0^t f(s)\,dz(s) \quad \text{in } L^p \text{ as } n \to \infty. \tag{7.2}
$$
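A simple deterministic instance may help fix ideas (this toy example is ours, not from the paper): take $\zeta_{k,n} = t_{k+1}^{\lambda} - t_k^{\lambda}$ with $\lambda = 0.6$ and $T = 1$, so that $g_n(t) \to z(t) = t^{\lambda}$, and $f(t) = t$ (Hölder continuous of any order $\beta < 1$). Then (7.2) predicts $F(1) \to \int_0^1 s\,d(s^{\lambda}) = \lambda/(\lambda+1)$.

```python
# Toy deterministic instance of Proposition 7.1 (illustrative choices, T = 1):
#   zeta_{k,n} = t_{k+1}^lam - t_k^lam  ->  z(t) = t^lam  (Holder of order lam)
#   f(t) = t                            ->  F(1) -> int_0^1 s d(s^lam)
lam = 0.6
n = 5000
t = [k / n for k in range(n + 1)]

# Riemann-Stieltjes-type weighted sum F(1) = sum_k f(t_k) * zeta_{k,n}
F1 = sum(t[k] * (t[k + 1] ** lam - t[k] ** lam) for k in range(n))

limit = lam / (lam + 1)  # = 0.375
assert abs(F1 - limit) < 1e-2
```

The error of the sum is of order $n^{-1}$ here because $f$ is Lipschitz; the proposition covers the much rougher random case.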
Remark 7.2 The integral $\int_0^t f(s)\,dz(s)$ is interpreted as a Young integral in the sense of Proposition 2.3, which is well defined because $f$ and $z$, as functions on $[0,T]$ with values in $L^{pp'}$ and $L^{pq'}$, are Hölder continuous of order $\beta$ and $\lambda$, respectively (conditions (i) and (ii) together imply the Hölder continuity of $z$). Recall that the Hölder continuity of a function with values in $L^p$ is defined in (2.5).
Remark 7.3 The convergence (7.2) still holds true if the condition $\mathrm{E}(\|f\|_\beta^{pp'}) \le C$ is weakened by assuming that $f$ is Hölder continuous of order $\beta$ in $L^{pp'}$. The proof is similar to that of Proposition 7.1.

Proof: Given two natural numbers $m < n$, we consider the associated partitions of the interval $[0,T]$ given by $t_k = \frac{kT}{n}$, $k = 0, 1, \dots, n$, and $u_l = \frac{lT}{m}$, $l = 0, 1, \dots, m$. Then we have the decomposition
$$
F(t) = \sum_{l=0}^{\lfloor mt/T \rfloor} f(u_l) \sum_{k \in I_m(l)} \zeta_{k,n} + \sum_{l=0}^{\lfloor mt/T \rfloor} \sum_{k \in I_m(l)} [f(t_k) - f(u_l)]\, \zeta_{k,n}, \tag{7.3}
$$
where $I_m(l) := \{k : 0 \le k \le \lfloor nt/T \rfloor,\ t_k \in [u_l, u_{l+1})\}$.

Because of condition (i) and the assumption that $\mathrm{E}(|f(t)|^{pp'}) \le C$ for all $t \in [0,T]$, the first term on the right-hand side of the above expression converges in $L^p$, as $n$ tends to infinity, to
$$
\sum_{l=0}^{\lfloor mt/T \rfloor} f(u_l)\, [z(u_{l+1}) - z(u_l)].
$$
Applying Proposition 2.3 to $f$ and $z$, we obtain that the above Riemann-Stieltjes sum converges to the Young integral $\int_0^t f(s)\,dz(s)$ in $L^p$ as $m$ tends to infinity. To show the convergence (7.2), it therefore suffices to show that
$$
\lim_{m \to \infty} \sup_{n \in \mathbb{N}} \mathrm{E}\Big| \sum_{l=0}^{\lfloor mt/T \rfloor} \sum_{k \in I_m(l)} [f(t_k) - f(u_l)]\, \zeta_{k,n} \Big|^p = 0. \tag{7.4}
$$
Notice that $k$ belongs to $I_m(l)$ if and only if $u_l \le t_k < \epsilon(u_{l+1})$ and $t_k \le \eta(t)$. Recall that $\epsilon(u) = t_{k+1}$ if $t_k < u \le t_{k+1}$ and $\eta(u) = t_k$ if $t_k \le u < t_{k+1}$. As a consequence, we can write
$$
\sum_{l=0}^{\lfloor mt/T \rfloor} \sum_{k \in I_m(l)} [f(t_k) - f(u_l)]\, \zeta_{k,n} = \sum_{l=0}^{\lfloor mt/T \rfloor} \int_{(a_l, b_l)} [f(s) - f(a_l)]\, dg_n(s),
$$
where $a_l = u_l$ and $b_l = \epsilon(u_{l+1}) \wedge \big( \eta(t) + \frac{T}{n} \big)$. By the fractional integration by parts formula,
$$
\int_{(a_l, b_l)} [f(s) - f(a_l)]\, dg_n(s) = (-1)^\alpha \int_{a_l}^{b_l} D^{\alpha}_{a_l+}[f(s) - f(a_l)]\, D^{1-\alpha}_{b_l-}[g_n(s) - g_n(b_l-)]\, ds, \tag{7.5}
$$
where we take $\alpha \in (1-\lambda, \beta)$. By (2.2), it is easy to show that
$$
\big| D^{\alpha}_{a_l+}[f(s) - f(a_l)] \big| \le \frac{1}{\Gamma(1-\alpha)} \frac{\beta}{\beta - \alpha}\, \|f\|_\beta\, (s - a_l)^{\beta - \alpha} \le C \|f\|_\beta\, m^{\alpha - \beta}. \tag{7.6}
$$
On the other hand, by (2.3) we have
$$
D^{1-\alpha}_{b_l-}[g_n(s) - g_n(b_l-)] = \frac{1}{\Gamma(\alpha)} \left( \frac{g_n(s) - g_n(b_l-)}{(b_l - s)^{1-\alpha}} + (1-\alpha) \int_s^{b_l} \frac{g_n(s) - g_n(u)}{(u - s)^{2-\alpha}}\, du \right). \tag{7.7}
$$
We can calculate the integral in the above equation explicitly:
$$
\begin{aligned}
\int_s^{b_l} \frac{g_n(s) - g_n(u)}{(u - s)^{2-\alpha}}\, du
&= \int_{\epsilon(s)}^{b_l} \frac{g_n(s) - g_n(u)}{(u - s)^{2-\alpha}}\, du
= \sum_{k :\, t_k \in [\epsilon(s), b_l)} [g_n(s) - g_n(t_k)] \int_{t_k}^{t_{k+1}} (u - s)^{\alpha - 2}\, du \\
&= \sum_{k :\, t_k \in [\epsilon(s), b_l)} [g_n(s) - g_n(t_k)]\, \frac{1}{1-\alpha}\, \big[ (t_k - s)^{\alpha - 1} - (t_{k+1} - s)^{\alpha - 1} \big].
\end{aligned} \tag{7.8}
$$
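The elementary antiderivative identity behind the last equality of (7.8) can be confirmed numerically (a quick sanity check with illustrative values of $\alpha$, $s$, $t_k$; not part of the original proof):

```python
# Numerical check of
#   int_{t_k}^{t_{k+1}} (u - s)^(alpha - 2) du
#     = ((t_k - s)^(alpha-1) - (t_{k+1} - s)^(alpha-1)) / (1 - alpha),
# the identity used in the last step of (7.8).
alpha, s, tk, tk1 = 0.4, 0.0, 0.5, 0.6

# closed form
closed = ((tk - s) ** (alpha - 1) - (tk1 - s) ** (alpha - 1)) / (1 - alpha)

# midpoint Riemann sum on a fine grid
N = 200_000
h = (tk1 - tk) / N
riemann = sum((tk + (i + 0.5) * h - s) ** (alpha - 2) * h for i in range(N))

assert abs(closed - riemann) < 1e-6
```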
Substituting (7.6), (7.7) and (7.8) into (7.5), we obtain
$$
\begin{aligned}
\Big| \int_{(a_l, b_l)} [f(s) - f(a_l)]\, dg_n(s) \Big|
&\le C \|f\|_\beta\, m^{\alpha - \beta} \int_{a_l}^{b_l} \big| D^{1-\alpha}_{b_l-}[g_n(s) - g_n(b_l-)] \big|\, ds \\
&\le C \|f\|_\beta\, m^{\alpha - \beta} \sum_{k :\, t_k \in [\eta(a_l), b_l)} \int_{t_k}^{t_{k+1}} \big| D^{1-\alpha}_{b_l-}[g_n(s) - g_n(b_l-)] \big|\, ds \\
&\le C \|f\|_\beta\, m^{\alpha - \beta} \Bigg( \sum_{k :\, t_k \in [\eta(a_l), b_l)} |g_n(t_k) - g_n(b_l-)| \int_{t_k}^{t_{k+1}} (b_l - s)^{\alpha - 1}\, ds \\
&\qquad + \sum_{k,j :\, \eta(a_l) \le t_k < t_j < b_l} |g_n(t_k) - g_n(t_j)| \int_{t_k}^{t_{k+1}} \big[ (t_j - s)^{\alpha - 1} - (t_{j+1} - s)^{\alpha - 1} \big]\, ds \Bigg).
\end{aligned}
$$

[…]

Lemma 11.2 Suppose that $\beta' > \alpha > 1 - \beta$. Then for any $s, t \in [0,T]$ such that $s < t$ and $s = \eta(s)$, there exists a constant $K_4$ depending on $\alpha$, $\beta$ and $T$, such that
$$
\int_s^t (t - r)^{\alpha + \beta - 1} \int_s^r \frac{|\eta(r) - \eta(u)|^{\beta'}}{(r - u)^{\alpha + 1}}\, du\, dr \le K_4 (t - s)^{\beta + \beta'}.
$$
Proof: Without loss of generality, we let $T = 1$. Note that when $\eta(s) = s < t \le \eta(s) + \frac1n$, the double integral equals zero. In the following we assume $t > \eta(s) + \frac1n$. We can write
$$
\begin{aligned}
\int_s^t (t - r)^{\alpha + \beta - 1} \int_s^r \frac{|\eta(r) - \eta(u)|^{\beta'}}{(r - u)^{\alpha + 1}}\, du\, dr
&= \int_{\eta(s) + \frac1n}^t (t - r)^{\alpha + \beta - 1} \int_{\eta(s)}^{\eta(r)} \frac{|\eta(r) - \eta(u)|^{\beta'}}{(r - u)^{\alpha + 1}}\, du\, dr \\
&= \int_{\eta(s) + \frac1n}^t (t - r)^{\alpha + \beta - 1} \left( \int_{\eta(r) - \frac1n}^{\eta(r)} + \int_{\eta(s)}^{\eta(r) - \frac1n} \right) \frac{|\eta(r) - \eta(u)|^{\beta'}}{(r - u)^{\alpha + 1}}\, du\, dr \\
&=: J_1 + J_2.
\end{aligned}
$$
On one hand, notice that in the term $J_2$ we always have $r - u > \frac1n$, and thus $\eta(r) - \eta(u) \le r - u + \frac1n \le 2(r - u)$. Therefore,
$$
J_2 \le \int_{\eta(s) + \frac1n}^t (t - r)^{\alpha + \beta - 1} \int_{\eta(s)}^{\eta(r) - \frac1n} \frac{2^{\beta'} (r - u)^{\beta'}}{(r - u)^{\alpha + 1}}\, du\, dr \le K (t - s)^{\beta + \beta'}.
$$
On the other hand,
$$
\begin{aligned}
J_1 &= \int_{\eta(s) + \frac1n}^t (t - r)^{\alpha + \beta - 1} \int_{\eta(r) - \frac1n}^{\eta(r)} \frac{|\eta(r) - \eta(u)|^{\beta'}}{(r - u)^{\alpha + 1}}\, du\, dr \\
&\le K n^{-\beta'} \int_{\eta(s) + \frac1n}^t (t - r)^{\alpha + \beta - 1} \left[ \frac{1}{(r - \eta(r))^{\alpha}} - \frac{1}{(r - \eta(r) + \frac1n)^{\alpha}} \right] dr \\
&\le K n^{-\beta'} (t - s)^{\alpha + \beta - 1} \int_{\eta(s) + \frac1n}^t \frac{dr}{(r - \eta(r))^{\alpha}} \\
&\le K n^{-\beta'} (t - s)^{\alpha + \beta - 1}\, \frac{(\eta(t) + \frac1n) - (\eta(s) + \frac1n)}{1/n}\, n^{\alpha - 1} \\
&\le K (t - s)^{\beta + \beta'}.
\end{aligned}
$$
The lemma is now proved. $\Box$
11.2 Estimates for some special Young and Skorohod integrals
In this section we derive estimates for some specific Young and Skorohod integrals. We fix $n \in \mathbb{N}$ and consider the uniform partition of $[0,T]$.

Lemma 11.3 Let $B = \{B_t, t \in [0,T]\}$ be a one-dimensional fBm with Hurst parameter $H > \frac12$. Fix $\nu \ge 0$ and $p \ge \frac1H$. Let $F = \{F_t, t \in [0,T]\}$ be a stochastic process whose trajectories are Hölder continuous of order $\gamma > 1 - H$ and such that $F_t \in \mathbb{D}^{1,q}$, $t \in [0,T]$, for some $q > p$. For any $\rho > 1$ we set
$$
F_{1,\rho} = \sup_{s,t \in [0,T]} \big( \|F_t\|_\rho \vee \|D_s F_t\|_\rho \big).
$$
Then there exists a constant $C$ (independent of $F$) such that the following inequalities hold for all $0 \le s < t \le T$: […]

Lemma 11.4 Let $B = \{B_t, t \in [0,T]\}$ be a one-dimensional fBm with Hurst parameter $H > \frac12$. Fix $p \ge \frac1H$. Let $F = \{F_t, t \in [0,T]\}$ be a stochastic process such that $F_t \in \mathbb{D}^{2,q}$, $t \in [0,T]$, for some $q > p$. For any $\rho > 1$ we set
$$
F_{2,\rho} = \sup_{r,s,t \in [0,T]} \big( \|F_t\|_\rho \vee \|D_s F_t\|_\rho \vee \|D_r D_s F_t\|_\rho \big).
$$
Set also
$$
F_* = \sup_{r,s,t \in [0,T]} \big( |F_t| \vee |D_s F_t| \vee |D_r D_s F_t| \big).
$$
Then there exists a constant $C$ (independent of $F$) such that the following holds for all $0 \le s < t \le T$, $i,j = 1, \dots, m$:
$$
\Big\| \sum_{k = \lfloor ns/T \rfloor}^{\lfloor nt/T \rfloor} F_{t_k} \int_{t_k \vee s}^{t_{k+1} \wedge t} \int_{t_k}^u \delta B^i_v\, \delta B^j_u \Big\|_p \le C \gamma_n^{-1} (t - s)^{\frac12}\, \|F_*\|_q, \tag{11.7}
$$
$$
\Big\| \sum_{k = \lfloor ns/T \rfloor}^{\lfloor nt/T \rfloor} F_{t_k} \int_{t_k \vee s}^{t_{k+1} \wedge t} \int_{t_k}^u \delta B^i_v\, \delta B^j_u \Big\|_p \le C n^{-H} (t - s)^H\, F_{2,q}. \tag{11.8}
$$
Proof: Using (2.8), we can write
$$
\sum_{k = \lfloor ns/T \rfloor}^{\lfloor nt/T \rfloor} F_{t_k} \int_{t_k \vee s}^{t_{k+1} \wedge t} \int_{t_k}^u \delta B^i_v\, \delta B^j_u = \int_s^t F_{\eta(u)} (B^i_u - B^i_{\eta(u)})\, \delta B^j_u + \alpha_H \int_s^t \int_0^T D^j_r F_{\eta(u)}\, (B^i_u - B^i_{\eta(u)})\, \mu(dr\, du). \tag{11.9}
$$
Applying (11.5) to the second integral on the right-hand side of (11.9), with $F_u$ replaced by $\int_0^T D^j_r F_{\eta(u)} |r - u|^{2H-2}\, dr$ (notice that here we do not need the Hölder continuity of the integrand for the Young integral to be well defined), yields
$$
\Big\| \int_s^t \int_0^T D^j_r F_{\eta(u)} (B^i_u - B^i_{\eta(u)})\, \mu(dr\, du) \Big\|_p \le C n^{-1} (t - s)^H F_{2,q} \sup_{u \in [0,T]} \int_0^T |r - u|^{2H-2}\, dr \le C n^{-1} (t - s)^H F_{2,q}. \tag{11.10}
$$
This implies both of the estimates (11.7) and (11.8). Applying (2.8) to the first summand on the right-hand side of (11.9) yields
$$
\int_s^t F_u (B^i_u - B^i_{\eta(u)})\, \delta B^j_u = \int_s^t \int_{\eta(u)}^u F_u\, \delta B^i_v\, \delta B^j_u + \alpha_H \int_s^t \left( \int_0^T \int_{\eta(u)}^u D^i_r F_u\, \mu(dr\, dv) \right) \delta B^j_u. \tag{11.11}
$$
Now we apply (2.11) to the second term on the right-hand side of (11.11) and obtain
$$
\Big\| \int_s^t \left( \int_0^T \int_{\eta(u)}^u D^i_r F_u\, \mu(dr\, dv) \right) \delta B^j_u \Big\|_p \le C F_{2,p}\, \Big\| 1_{[s,t]}(u) \int_0^T \int_{\eta(u)}^u \mu(dr\, dv) \Big\|_{L^{\frac1H}([0,T])} \le C F_{2,p}\, n^{-1} (t - s)^H. \tag{11.12}
$$
Again, this inequality implies both of the estimates (11.7) and (11.8). It remains to estimate the term $I_{s,t} := \int_s^t \int_{\eta(u)}^u F_u\, \delta B^i_v\, \delta B^j_u$. It follows from (2.11) that
$$
\| I_{s,t} \|_p \le C F_{2,p}\, \big\| 1_{[s,t]}(u)\, 1_{[\eta(u),u]}(v) \big\|_{L^{\frac1H}([0,T]^2)} \le C F_{2,p}\, n^{-H} (t - s)^H,
$$
which completes the proof of (11.8). To derive (11.7) we need a more accurate estimate. Meyer's inequality implies that
$$
\begin{aligned}
\| I_{s,t} \|_p &\le C \Big( \big\| \| 1_{[s,t]}(u) 1_{[\eta(u),u]}(v) F_u \|_{\mathfrak{H}^{\otimes 2}} \big\|_p + \big\| \| 1_{[s,t]}(u) 1_{[\eta(u),u]}(v) D_r F_u \|_{\mathfrak{H}^{\otimes 3}} \big\|_p + \big\| \| 1_{[s,t]}(u) 1_{[\eta(u),u]}(v) D_{r'} D_r F_u \|_{\mathfrak{H}^{\otimes 4}} \big\|_p \Big) \\
&\le C \| F_* \|_p\, \big\| 1_{[s,t]}(u) 1_{[\eta(u),u]}(v) \big\|_{\mathfrak{H}^{\otimes 2}}.
\end{aligned}
$$
Therefore, to complete the proof, it suffices to show that
$$
\big\| 1_{[s,t]}(u) 1_{[\eta(u),u]}(v) \big\|^2_{\mathfrak{H}^{\otimes 2}} = \alpha_H^2 \int_s^t \int_s^t \int_{\eta(u')}^{u'} \int_{\eta(u)}^u \mu(dv\, dv')\, \mu(du\, du') \le C (t - s)\, \gamma_n^{-2}. \tag{11.13}
$$
In the case $t - s \ge \frac1n$,
$$
\begin{aligned}
\int_s^t \int_s^t \int_{\eta(u')}^{u'} \int_{\eta(u)}^u \mu(dv\, dv'\, du\, du')
&\le \sum_{k, k' = \lfloor ns/T \rfloor}^{\lfloor nt/T \rfloor} \int_{t_{k'}}^{t_{k'+1}} \int_{t_k}^{t_{k+1}} \int_{t_{k'}}^{u'} \int_{t_k}^u \mu(dv\, dv'\, du\, du') \\
&\le \sum_{k = \lfloor ns/T \rfloor}^{\lfloor nt/T \rfloor} \sum_{p = 1-n}^{n-1} \int_{t_{k+p}}^{t_{k+p+1}} \int_{t_k}^{t_{k+1}} \int_{t_{k+p}}^{u'} \int_{t_k}^u \mu(dv\, dv'\, du\, du') \\
&= n^{-4H} \sum_{k = \lfloor ns/T \rfloor}^{\lfloor nt/T \rfloor} \sum_{p = 1-n}^{n-1} Q(p) \le C (t - s)\, \gamma_n^{-2},
\end{aligned}
$$
where we recall that $Q(p)$ is defined in Section 2.4, and the inequality (11.13) follows. In the case $t - s \le \frac1n$, we have the crude estimate
$$
\int_s^t \int_s^t \int_{\eta(u')}^{u'} \int_{\eta(u)}^u \mu(dv\, dv'\, du\, du') \le \frac{1}{n^{2H}} \int_s^t \int_s^t \mu(du\, du') = \frac{1}{n^{2H}} (t - s)^{2H},
$$
and $n^{-2H} (t - s)^{2H} \le (t - s)\, \gamma_n^{-2}$. So (11.13) is also true in this case. The proof of the lemma is now complete. $\Box$
Lemma 11.5 Let $B = \{B_t, t \in [0,T]\}$ be a one-dimensional fBm with Hurst parameter $H > \frac12$. Suppose that $F = \{F_t, t \in [0,T]\}$ and $G = \{G_t, t \in [0,T]\}$ are processes that are Hölder continuous of order $\beta \in (\frac12, H)$. Then there exists a constant $C$ (not depending on $F$ or $G$) such that for all $0 \le s < t \le T$ and $\nu \ge 0$,
$$
\Big| \int_s^t F_u (G_u - G_{\eta(u)})(B_u - B_{\eta(u)})\, dB_u \Big| \le C (\|F\|_\infty + \|F\|_\beta) \|G\|_\beta \|B\|^2_\beta\, n^{1 - 3\beta} (t - s)^\beta, \tag{11.14}
$$
$$
\Big| \int_s^t F_u (G_u - G_{\eta(u)})(u - \eta(u))^\nu\, dB_u \Big| \le C (\|F\|_\infty + \|F\|_\beta) \|G\|_\beta \|B\|_\beta\, n^{1 - 2\beta - \nu} (t - s)^\beta. \tag{11.15}
$$
Proof of (11.14): We assume first that $s, t \in [t_k, t_{k+1}]$ for some $k = 0, 1, \dots, n-1$. By Lemma 11.1(ii),
$$
\begin{aligned}
\Big| \int_s^t F_u (G_u - G_{t_k})(B_u - B_{t_k})\, dB_u \Big|
&\le K_1 \sup_{u \in [s,t]} |F_u (G_u - G_{t_k})(B_u - B_{t_k})|\, \|B\|_\beta (t - s)^\beta \\
&\quad + K_2 \sup_{u \in [s,t]} \Big( |F_u (G_u - G_{t_k})|\, \|B\|^2_\beta (t - s)^{2\beta} + |F_u (B_u - B_{t_k})|\, \|G\|_\beta \|B\|_\beta (t - s)^{2\beta} \\
&\qquad\qquad\quad + |(G_u - G_{t_k})(B_u - B_{t_k})|\, \|F\|_\beta \|B\|_\beta (t - s)^{2\beta} \Big) \\
&\le C \kappa_\beta(F, G)\, n^{-2\beta} (t - s)^\beta,
\end{aligned} \tag{11.16}
$$
where $\kappa_\beta(F, G) = (\|F\|_\infty + \|F\|_\beta) \|G\|_\beta \|B\|^2_\beta$. In the general case, we can write
$$
\int_s^t F_u (G_u - G_{\eta(u)})(B_u - B_{\eta(u)})\, dB_u = \left( \int_s^{\epsilon(s)} + \sum_{k = \lfloor ns/T \rfloor + 1}^{\lfloor nt/T \rfloor} \int_{t_k}^{t_{k+1}} + \int_{\eta(t)}^t \right) F_u (G_u - G_{\eta(u)})(B_u - B_{\eta(u)})\, dB_u,
$$
and hence
$$
\begin{aligned}
\Big| \int_s^t F_u (G_u - G_{\eta(u)})(B_u - B_{\eta(u)})\, dB_u \Big|
&\le C \kappa_\beta(F, G)\, n^{-2\beta} \Big( (\epsilon(s) - s)^\beta + (t - \eta(t))^\beta + \sum_{k = \lfloor ns/T \rfloor + 1}^{\lfloor nt/T \rfloor} (T/n)^\beta \Big) \\
&\le C \kappa_\beta(F, G)\, n^{-2\beta} \big[ (\epsilon(s) - s)^\beta + (t - \eta(t))^\beta + (\eta(t) - \epsilon(s))\, n^{1-\beta} \big] \\
&\le C \kappa_\beta(F, G)\, n^{1 - 3\beta} (t - s)^\beta,
\end{aligned} \tag{11.17}
$$
where the first inequality follows from (11.16).

Proof of (11.15): This estimate can be proved by following the lines of the proof of (11.14), noticing that $(u - \eta(u))^\nu$ has finite $\nu$-Hölder seminorm on $(t_k, t_{k+1})$ for each $k = 1, \dots, n-1$. $\Box$
12 Simulation
[Figure: sample paths of the exact solution, the naive Euler scheme, and the modified Euler scheme on $[0,1]$.]
In the figure, the black curve is a sample path of the solution of the equation $dX_t = X_t\, dt + X_t\, dB_t$, $t \in [0,1]$, obtained from the explicit form of the solution, where $B$ is an fBm with Hurst parameter $H = 0.55$. The blue curve is a sample path obtained by the naive Euler scheme, while the red curve is a sample path obtained by the modified Euler scheme from the same simulation of the fractional Brownian motion. The modified Euler scheme produces a much better approximation than the naive Euler scheme.
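An experiment of this kind can be reproduced along the following lines. The sketch below is our own illustration, not the authors' code: it simulates the fBm by Cholesky factorization of its covariance, uses the pathwise (Young) solution $X_t = X_0 \exp(t + B_t)$ of the equation above, and implements the naive Euler scheme together with a modified scheme carrying the correction term $\frac12 \sigma\sigma'(X^n_{t_k})(T/n)^{2H}$, which in this scalar example equals $\frac12 X^n_{t_k}(T/n)^{2H}$; the exact form of the correction should be checked against the definition of the modified Euler scheme in the paper.

```python
import numpy as np

# Illustrative reproduction of the Section 12 experiment for
#   dX_t = X_t dt + X_t dB_t,  X_0 = 1,  H = 0.55,
# whose pathwise (Young) solution is X_t = exp(t + B_t).
# The modified-Euler correction used here, 0.5 * X * dt**(2H), is our
# reading of the scheme, not a verbatim transcription from the paper.

H, T, n = 0.55, 1.0, 200
t = np.linspace(0.0, T, n + 1)
dt = T / n

# fBm on the grid via Cholesky factorization of the covariance
# R(s, u) = (s^{2H} + u^{2H} - |u - s|^{2H}) / 2.
s_grid = t[1:]
cov = 0.5 * (s_grid[:, None] ** (2 * H) + s_grid[None, :] ** (2 * H)
             - np.abs(s_grid[:, None] - s_grid[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov)

rng = np.random.default_rng(0)
n_paths = 20
err_naive, err_mod = 0.0, 0.0
for _ in range(n_paths):
    B = np.concatenate([[0.0], L @ rng.standard_normal(n)])
    exact = np.exp(t + B)                     # pathwise solution
    x_naive, x_mod = 1.0, 1.0
    for k in range(n):
        dB = B[k + 1] - B[k]
        x_naive += x_naive * dt + x_naive * dB
        x_mod += x_mod * dt + x_mod * dB + 0.5 * x_mod * dt ** (2 * H)
    err_naive += abs(x_naive - exact[-1]) / n_paths
    err_mod += abs(x_mod - exact[-1]) / n_paths

print(err_naive, err_mod)
```

On such runs the modified scheme is markedly more accurate, in line with the rates $n^{1-2H}$ (naive) versus $n^{\frac12-2H}$ (modified) for $H < \frac34$.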
References

[1] Aldous, D. J. and Eagleson, G. K. (1978). On mixing and stability of limit theorems. Ann. Probab. 6, 325-331.

[2] Cambanis, S. and Hu, Y. (1996). Exact convergence rate of the Euler-Maruyama scheme, with application to sampling design. Stochastics Stochastics Rep. 59, 211-240.

[3] Corcuera, J. M., Nualart, D. and Podolskij, M. (2013). Asymptotics of weighted random sums. Preprint. arXiv:1402.1414v1.

[4] Deya, A., Neuenkirch, A. and Tindel, S. (2012). A Milstein-type scheme without Lévy area terms for SDEs driven by fractional Brownian motion. Ann. Inst. Henri Poincaré Probab. Stat. 48, 518-550.

[5] Friz, P. and Victoir, N. (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge University Press, Cambridge.

[6] Hu, Y. (1996). Strong and weak order of time discretization schemes of stochastic differential equations. Séminaire de Probabilités XXX, Lecture Notes in Math. 1626, 218-227, Springer, Berlin.

[7] Hu, Y. and Nualart, D. (2007). Differential equations driven by Hölder continuous functions of order greater than 1/2. Stochastic Anal. Appl. 349-413, Abel Symp. 2, Springer, Berlin.

[8] Hu, Y. and Nualart, D. (2009). Stochastic heat equation driven by fractional noise and local time. Probab. Theory Related Fields 143, 285-328.

[9] Jacod, J. and Protter, P. (1998). Asymptotic error distributions for the Euler method for stochastic differential equations. Ann. Probab. 26, 267-307.

[10] Jacod, J. and Shiryaev, A. N. (1987). Limit Theorems for Stochastic Processes. Springer, Berlin.

[11] Kloeden, P. E. and Platen, E. (1992). Numerical Solution of Stochastic Differential Equations. Springer, New York.

[12] Kurtz, T. G. and Protter, P. (1991). Wong-Zakai corrections, random evolutions, and simulation schemes for SDEs. Stochastic Analysis, 331-346. Academic Press, Boston, MA.

[13] Lyons, T. (1994). Differential equations driven by rough signals. I. An extension of an inequality of L. C. Young. Math. Res. Lett. 1, 451-464.

[14] Lyons, T. and Qian, Z. (2002). System Control and Rough Paths. Oxford University Press, Oxford.

[15] Mémin, J., Mishura, Y. and Valkeila, E. (2001). Inequalities for the moments of Wiener integrals with respect to a fractional Brownian motion. Statist. Probab. Lett. 51, 197-206.

[16] Mishura, Y. (2008). Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer-Verlag, Berlin.

[17] Neuenkirch, A. and Nourdin, I. (2007). Exact rate of convergence of some approximation schemes associated to SDEs driven by a fractional Brownian motion. J. Theoret. Probab. 20, 871-899.

[18] Nourdin, I., Nualart, D. and Tudor, C. A. (2010). Central and non-central limit theorems for weighted power variations of fractional Brownian motion. Ann. Inst. Henri Poincaré Probab. Stat. 46, 1055-1079.

[19] Nualart, D. (2003). Stochastic integration with respect to fractional Brownian motion and applications. Contemp. Math. 336, 3-39.

[20] Nualart, D. (2006). The Malliavin Calculus and Related Topics. Second Edition. Springer-Verlag, Berlin.

[21] Nualart, D. and Peccati, G. (2005). Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 33, 177-193.

[22] Nualart, D. and Rascanu, A. (2002). Differential equations driven by fractional Brownian motion. Collect. Math. 53, 55-81.

[23] Nualart, D. and Saussereau, B. (2009). Malliavin calculus for stochastic differential equations driven by a fractional Brownian motion. Stochastic Process. Appl. 119, 391-409.

[24] Nourdin, I. and Peccati, G. (2012). Normal Approximations with Malliavin Calculus: From Stein's Method to Universality. Cambridge University Press, Cambridge.

[25] Peccati, G. and Tudor, C. (2005). Gaussian limits for vector-valued multiple stochastic integrals. Séminaire de Probabilités XXXVIII, 247-262, Lecture Notes in Math. 1857, Springer-Verlag, Berlin.

[26] Pipiras, V. and Taqqu, M. S. (2000). Integration questions related to fractional Brownian motion. Probab. Theory Related Fields 118, 251-291.

[27] Pipiras, V. and Taqqu, M. S. (2001). Are classes of deterministic integrands for fractional Brownian motion on an interval complete? Bernoulli 7, 873-897.

[28] Rosenblatt, M. (2011). Selected Works of Murray Rosenblatt. Springer, New York.

[29] Tudor, C. A. (2008). Analysis of the Rosenblatt process. ESAIM Probab. Stat. 12, 230-257.

[30] Young, L. C. (1936). An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 67, 251-282.

[31] Zähle, M. (1998). Integration with respect to fractal functions and stochastic calculus. I. Probab. Theory Related Fields 111, 333-374.