Gaussian approximation of multivariate Lévy processes with applications to simulation of tempered and operator stable processes

Serge Cohen∗

Jan Rosiński†

May 10, 2005

Abstract. The problem of simulating multivariate Lévy processes is investigated. The method, based on shot noise series expansions of such processes combined with a Gaussian approximation of the remainder, is established in full generality. Formulas that can be used for the simulation of tempered stable, operator stable and other multivariate processes are obtained.

Key-words: Lévy processes, Gaussian approximation, shot noise series expansions, simulation, tempered stable processes, operator stable processes. AMS classification (2000): 60G51, 60G52, 68U20, 60F05.

1 Introduction

In this work we consider the general problem of simulating multivariate Lévy processes, with applications to stable-like and other processes. Simulation of stochastic processes is widely used in science, engineering and economics to model complex phenomena. There is a vast literature on this subject; we only mention the classical monographs [9], on numerical solutions of SDEs, and [6], on simulation of one-dimensional stable processes. Applications of Lévy processes to stochastic finance and physics have created the need for efficient simulation schemes for such processes; in this regard we refer the reader to the recent monograph [3] and the references therein. Besides applications, computer simulation can be useful for understanding the structure of some multivariate Lévy processes for which we now have many interesting theoretical results, but for which our empirical understanding of the sample paths is limited. For instance, we expect that simulation of multivariate tempered stable and operator stable processes will improve the understanding of these processes and of the ways they can be used as models.

Contrary to the one-dimensional case, closed formulas for the simulation of increments of multidimensional Lévy processes are rarely available, so one needs to use approximate methods. At the first level of approximation of a Lévy process one can use an appropriate compound Poisson process. However, when the Lévy process has paths of infinite variation, the error (the remainder process) of such an approximation can be significant. In the one-dimensional case it was shown in [1] that the remainder process can often be approximated by a Brownian motion with small variance. Adding such a small Brownian motion to the compound Poisson process accounts for the variability between the epochs of the latter process, improving the approximation in general (see [1]).

There are two main issues related to the extension of this method to the multidimensional setting. The first is the construction of a family of successive compound Poisson approximations of a multivariate Lévy process. We use generalized shot noise series expansions of Lévy processes (cf. [15]) for this purpose. Such expansions can also be related to Lévy copulas (see [3], Sect. 6.6), but we do not consider these here.

∗ Université Paul Sabatier, Laboratoire de Statistique et de Probabilités, 118, Route de Narbonne, 31062 Toulouse, France.
† Department of Mathematics, University of Tennessee, Knoxville, TN 37996, USA. Research supported, in part, by a grant from the National Science Foundation. The work was begun during his visit at the Université Paul Sabatier.
The second issue is the availability of a Gaussian approximation for the remainder process in the multivariate setting. We want to approximate the remainder by a Brownian motion with a small covariance matrix. Computational problems are the first feature distinguishing the multidimensional from the one-dimensional case. Verifying that a given process admits a Gaussian approximation can be involved and may constitute a result by itself; see the operator stable case in Section 3.4. Once a Gaussian approximation of the remainder is established, we seek to replace the original normalizing matrices by their asymptotics, wherever possible, to give simpler formulas for simulation (cf. the tempered stable case, Section 3.3). Another difference is that the remainder process in the one-dimensional case is simply a small-jump part of the Lévy process (cf. [1]). In the multidimensional case such an approach is too restrictive and may lead to substantial technical

difficulties (see Section 3.3 and Remark 3.5). Generally, the choice of the remainder depends on the geometry of the Lévy measure of the process under consideration.

The paper is organized as follows. The general problem of Gaussian approximation is considered in Section 2. In Section 3 we give applications to multivariate stable, tempered stable and operator stable processes. Simulation of stochastic integral processes driven by an infinitely divisible random measure is discussed in Section 4. In this article theoretical issues are addressed; practical questions, as well as the efficiency of the simulation, will be considered in an upcoming article.

2 Gaussian approximation

In this section we give necessary and sufficient conditions for the normal approximation of multivariate infinitely divisible distributions. Let $\{X_\epsilon\}_{\epsilon\in(0,1]}$ be a family of infinitely divisible random vectors in $\mathbb{R}^d$ with characteristic functions of the form
$$\mathbb{E}\, e^{i\langle y, X_\epsilon\rangle} = \exp\Big(\int_{\mathbb{R}^d} \big[e^{i\langle y,x\rangle} - 1 - i\langle y,x\rangle\big]\, \nu_\epsilon(dx)\Big) \tag{2.1}$$
where
$$\int_{\mathbb{R}^d} \|x\|^2\, \nu_\epsilon(dx) < \infty. \tag{2.2}$$
$X_\epsilon$ has zero mean and covariance matrix $\Sigma_\epsilon$, which can also be written as
$$\Sigma_\epsilon = \int_{\mathbb{R}^d} x x^\top\, \nu_\epsilon(dx). \tag{2.3}$$
In the sequel we will assume that $\Sigma_\epsilon$ is nonsingular. It is helpful to relate this assumption to the properties of $\nu_\epsilon$ and the distribution $\mathcal{L}(X_\epsilon)$ of $X_\epsilon$. Let us first state a general lemma.

Lemma 2.1 Let $\nu$ be a measure such that $\int_{\mathbb{R}^d} \|x\|^2\, \nu(dx) < \infty$ and $\Sigma = \int_{\mathbb{R}^d} x x^\top\, \nu(dx)$. The following conditions are equivalent:
(i) $\Sigma$ is nonsingular;
(ii) the smallest linear space supporting $\nu$ equals $\mathbb{R}^d$.

Proof: Suppose $\Sigma$ is singular, that is, $\Sigma y = 0$ for some $y$ with $\|y\| = 1$. Consider $V = \{x \in \mathbb{R}^d : \langle y, x\rangle = 0\}$. For any $z \in \mathbb{R}^d$ we have
$$0 = \langle \Sigma y, z\rangle = \int_{\mathbb{R}^d} \langle y,x\rangle \langle z,x\rangle\, \nu(dx) = \int_{V^c} \langle y,x\rangle \langle z,x\rangle\, \nu(dx),$$
where $V^c = \mathbb{R}^d \setminus V$. In particular,
$$\int_{V^c} \langle y,x\rangle^2\, \nu(dx) = 0.$$
Since $|\langle y,x\rangle| > 0$ when $x \in V^c$, $\nu(V^c) = 0$, which means that $\nu$ is concentrated on the proper subspace $V$ of $\mathbb{R}^d$. Conversely, if $\nu$ is concentrated on a proper subspace $V$ of $\mathbb{R}^d$ and $y$ is any unit vector perpendicular to $V$, then the above computation shows that $\langle \Sigma y, z\rangle = 0$ for every $z \in \mathbb{R}^d$. Therefore $\Sigma$ is singular. □

If we apply this equivalence to $\Sigma_\epsilon$, then not only does the smallest linear space supporting $\nu_\epsilon$ equal $\mathbb{R}^d$, but also $\mathcal{L}(X_\epsilon)$ is not concentrated on any proper hyperplane of $\mathbb{R}^d$, which follows from Proposition 24.17 (ii-3) in [17].

In the sequel $I_d$ will denote the identity matrix of rank $d$, $N(0, I_d)$ will stand for a standard normal vector in $\mathbb{R}^d$, and "$\overset{(d)}{\longrightarrow}$" will mean convergence in distribution. Our first theorem is the starting point of a method that will be further developed in more specific situations.

Theorem 2.2 Suppose that $\Sigma_\epsilon$ is nonsingular for every $\epsilon \in (0,1]$. Then, as $\epsilon \to 0$,
$$\Sigma_\epsilon^{-1/2} X_\epsilon \overset{(d)}{\longrightarrow} N(0, I_d) \tag{2.4}$$
if and only if for every $\kappa > 0$
$$\int_{\langle \Sigma_\epsilon^{-1} x, x\rangle > \kappa} \langle \Sigma_\epsilon^{-1} x, x\rangle\, \nu_\epsilon(dx) \to 0. \tag{2.5}$$

Proof: We have
$$\mathbb{E}\, e^{i\langle y, \Sigma_\epsilon^{-1/2} X_\epsilon\rangle} = \exp\Big(\int_{\mathbb{R}^d} \big[e^{i\langle y, \Sigma_\epsilon^{-1/2}x\rangle} - 1 - i\langle y, \Sigma_\epsilon^{-1/2}x\rangle\big]\, \nu_\epsilon(dx)\Big)$$
$$= \exp\Big(i\langle y, b_\epsilon\rangle + \int_{\mathbb{R}^d} \big[e^{i\langle y,x\rangle} - 1 - i\langle y,x\rangle \mathbf{1}(\|x\|\le 1)\big]\, \tilde\nu_\epsilon(dx)\Big),$$
where $\tilde\nu_\epsilon = \nu_\epsilon \circ \Sigma_\epsilon^{1/2}$ is the push-forward of $\nu_\epsilon$ by the map $x \mapsto \Sigma_\epsilon^{-1/2} x$ and
$$b_\epsilon = -\int_{\|x\|>1} x\, \tilde\nu_\epsilon(dx) = -\int_{\|\Sigma_\epsilon^{-1/2}x\|>1} \Sigma_\epsilon^{-1/2} x\, \nu_\epsilon(dx).$$
To prove that (2.5) implies (2.4) we verify the conditions of Theorem 15.14 in [8]. We thus need to show that, as $\epsilon \to 0$,
$$b_\epsilon \to 0 \tag{2.6}$$
$$\int_{\|x\|\le 1} x x^\top\, \tilde\nu_\epsilon(dx) \to I_d \tag{2.7}$$
$$\tilde\nu_\epsilon(\|x\| \ge \kappa) \to 0, \quad \forall \kappa > 0. \tag{2.8}$$
(2.6) holds because
$$\|b_\epsilon\| \le \int_{\|\Sigma_\epsilon^{-1/2}x\|>1} \|\Sigma_\epsilon^{-1/2}x\|\, \nu_\epsilon(dx) \le \int_{\|\Sigma_\epsilon^{-1/2}x\|>1} \|\Sigma_\epsilon^{-1/2}x\|^2\, \nu_\epsilon(dx) = \int_{\langle \Sigma_\epsilon^{-1}x,x\rangle>1} \langle \Sigma_\epsilon^{-1}x,x\rangle\, \nu_\epsilon(dx) \to 0$$
by the assumption (2.5). Notice that
$$\int_{\mathbb{R}^d} x x^\top\, \tilde\nu_\epsilon(dx) = \int_{\mathbb{R}^d} (\Sigma_\epsilon^{-1/2}x)(\Sigma_\epsilon^{-1/2}x)^\top\, \nu_\epsilon(dx) = \Sigma_\epsilon^{-1/2} \int_{\mathbb{R}^d} x x^\top\, \nu_\epsilon(dx)\, \Sigma_\epsilon^{-1/2} = I_d. \tag{2.9}$$
Denoting also by $\|\cdot\|$ the operator norm, we get
$$\Big\|I_d - \int_{\|x\|\le 1} x x^\top\, \tilde\nu_\epsilon(dx)\Big\| = \Big\|\int_{\|\Sigma_\epsilon^{-1/2}x\|>1} (\Sigma_\epsilon^{-1/2}x)(\Sigma_\epsilon^{-1/2}x)^\top\, \nu_\epsilon(dx)\Big\| \le \int_{\|\Sigma_\epsilon^{-1/2}x\|>1} \|\Sigma_\epsilon^{-1/2}x\|^2\, \nu_\epsilon(dx) \to 0$$
as above, which proves (2.7). To get (2.8) we observe that
$$\tilde\nu_\epsilon(\|x\| > \kappa) \le \kappa^{-2} \int_{\|x\|>\kappa} \|x\|^2\, \tilde\nu_\epsilon(dx) = \kappa^{-2} \int_{\|\Sigma_\epsilon^{-1/2}x\|>\kappa} \|\Sigma_\epsilon^{-1/2}x\|^2\, \nu_\epsilon(dx) \to 0.$$

Therefore, we have proved that (2.5) implies (2.4). To obtain the converse, note that by Theorem 15.14 in [8] we must have, for every $\kappa > 0$,
$$\int_{\|x\|\le \kappa} x x^\top\, \tilde\nu_\epsilon(dx) \to I_d$$
as $\epsilon \to 0$. By (2.9) we then have
$$\int_{\|x\|>\kappa} x x^\top\, \tilde\nu_\epsilon(dx) \to 0.$$
We infer that
$$\int_{\langle \Sigma_\epsilon^{-1}x,x\rangle>\kappa} \langle \Sigma_\epsilon^{-1}x,x\rangle\, \nu_\epsilon(dx) = \int_{\|\Sigma_\epsilon^{-1/2}x\|>\kappa^{1/2}} \|\Sigma_\epsilon^{-1/2}x\|^2\, \nu_\epsilon(dx) = \int_{\|x\|>\kappa^{1/2}} \|x\|^2\, \tilde\nu_\epsilon(dx)$$
$$= \sum_{i=1}^d \int_{\|x\|>\kappa^{1/2}} \langle e_i, x\rangle^2\, \tilde\nu_\epsilon(dx) = \sum_{i=1}^d \Big\langle e_i, \int_{\|x\|>\kappa^{1/2}} x x^\top\, \tilde\nu_\epsilon(dx)\, e_i\Big\rangle \to 0,$$
where $\{e_i\}_{i=1}^d$ is an orthonormal basis of $\mathbb{R}^d$. The proof of Theorem 2.2 is complete. □

Note that in order to verify (2.5) it is enough to show it for some lower bound of $\Sigma_\epsilon$. We state this simple observation for convenient reference.

Lemma 2.3 Suppose that for every $\kappa > 0$ there exist $\epsilon(\kappa) > 0$ and a family of positive definite matrices $\{\tilde\Sigma_\epsilon : \epsilon \in (0, \epsilon(\kappa))\}$ such that for all $\epsilon \in (0, \epsilon(\kappa))$
$$\Sigma_\epsilon \ge \tilde\Sigma_\epsilon \tag{2.10}$$
and
$$\lim_{\epsilon\to 0} \int_{\langle \tilde\Sigma_\epsilon^{-1}x,x\rangle>\kappa} \langle \tilde\Sigma_\epsilon^{-1}x,x\rangle\, \nu_\epsilon(dx) = 0. \tag{2.11}$$
Then (2.5) holds.

Proof: From (2.10) we have $\Sigma_\epsilon^{-1} \le \tilde\Sigma_\epsilon^{-1}$. Hence, for $0 < \epsilon < \epsilon(\kappa)$,
$$\int_{\langle \Sigma_\epsilon^{-1}x,x\rangle>\kappa} \langle \Sigma_\epsilon^{-1}x,x\rangle\, \nu_\epsilon(dx) \le \int_{\langle \tilde\Sigma_\epsilon^{-1}x,x\rangle>\kappa} \langle \tilde\Sigma_\epsilon^{-1}x,x\rangle\, \nu_\epsilon(dx) \to 0$$
as $\epsilon \to 0$. □

Corollary 2.4 Let $d = 1$ and $\nu_\epsilon(dx) = \mathbf{1}_{(-\epsilon,\epsilon)}(x)\, \nu(dx)$, where $\nu$ is a Lévy measure on $\mathbb{R}$. Put
$$\sigma^2(\epsilon) = \int_{(-\epsilon,\epsilon)} x^2\, \nu(dx).$$
By Theorem 2.2,
$$\sigma^{-1}(\epsilon)\, X_\epsilon \overset{(d)}{\longrightarrow} N(0, 1) \tag{2.12}$$
if and only if for every $\kappa > 0$
$$\sigma^{-2}(\epsilon) \int_{\{|x|>\kappa^{1/2}\sigma(\epsilon),\ |x|<\epsilon\}} x^2\, \nu(dx) \to 0, \tag{2.13}$$
which is equivalent to: for every $\kappa > 0$,
$$\frac{\sigma(\epsilon \wedge \kappa\sigma(\epsilon))}{\sigma(\epsilon)} \to 1. \tag{2.14}$$
A simpler, sufficient condition for (2.12) is
$$\lim_{\epsilon\to 0} \frac{\sigma(\epsilon)}{\epsilon} = \infty.$$
This is also a necessary condition when $\nu$ does not have atoms in a neighborhood of zero; see [1].

We can extend (2.14) (equivalently, (2.13)) to a more general setting as follows. Suppose $\nu_\epsilon$ is given in polar coordinates as
$$\nu_\epsilon(dr, du) = \mu_\epsilon(dr\,|u)\, \lambda(du), \quad r > 0,\ u \in S^{d-1}, \tag{2.15}$$
where $\lambda$ is a finite measure on the unit sphere $S^{d-1}$ and $\{\mu_\epsilon(\cdot\,|u) : u \in S^{d-1}\}$ is a measurable family of Lévy measures on $(0,\infty)$. Define
$$\sigma_\epsilon^2(u) = \int_0^\infty r^2\, \mu_\epsilon(dr\,|u). \tag{2.16}$$

Theorem 2.5 Let $\nu_\epsilon$ be Lévy measures on $\mathbb{R}^d$ given by (2.15) such that the support of $\lambda$ is not contained in any proper subspace of $\mathbb{R}^d$. Suppose there exists a function $b : (0,1] \to (0,\infty)$ such that
$$\liminf_{\epsilon\to 0} \frac{\sigma_\epsilon(u)}{b(\epsilon)} > 0 \quad \lambda\text{-a.e.} \tag{2.17}$$
and for every $\kappa > 0$
$$\lim_{\epsilon\to 0}\, b(\epsilon)^{-2} \int_{\|x\|>\kappa b(\epsilon)} \|x\|^2\, \nu_\epsilon(dx) = 0. \tag{2.18}$$
Then $\Sigma_\epsilon$ is nonsingular for sufficiently small $\epsilon$ and
$$\Sigma_\epsilon^{-1/2} X_\epsilon \overset{(d)}{\longrightarrow} N(0, I_d) \quad \text{as } \epsilon \to 0.$$

Proof: Consider
$$\Lambda = \int_{S^{d-1}} u u^\top\, \lambda(du).$$
$\Lambda$ is nonsingular by Lemma 2.1. Hence $\inf_{v\in S^{d-1}} \langle \Lambda v, v\rangle =: 2a > 0$. For any Borel subset $B$ of $S^{d-1}$ put
$$\Lambda_B = \int_B u u^\top\, \lambda(du). \tag{2.19}$$
There exists a $\delta > 0$ such that $\|\Lambda - \Lambda_B\| < a$ whenever $\lambda(S^{d-1} \setminus B) < \delta$. For such a set $B$ and any $v \in S^{d-1}$,
$$\langle \Lambda_B v, v\rangle \ge \langle \Lambda v, v\rangle - \|\Lambda - \Lambda_B\| > a.$$
Hence
$$\Lambda_B \ge a I_d \tag{2.20}$$
whenever $\lambda(S^{d-1} \setminus B) < \delta$. From (2.17) we can find $\epsilon_0, a_1 \in (0,1]$ such that the set
$$B := \Big\{u \in S^{d-1} : \inf_{0<\epsilon\le\epsilon_0} b(\epsilon)^{-2}\sigma_\epsilon^2(u) > a_1\Big\}$$
satisfies $\lambda(S^{d-1} \setminus B) < \delta$. Then, for $\epsilon \in (0, \epsilon_0]$,
$$\Sigma_\epsilon \ge \int_B \sigma_\epsilon^2(u)\, u u^\top\, \lambda(du) \ge a_1 b(\epsilon)^2\, \Lambda_B \ge a\, a_1\, b(\epsilon)^2\, I_d =: \tilde\Sigma_\epsilon.$$
Thus for any $\kappa > 0$ we have
$$\int_{\langle \tilde\Sigma_\epsilon^{-1}x,x\rangle>\kappa} \langle \tilde\Sigma_\epsilon^{-1}x,x\rangle\, \nu_\epsilon(dx) \le a^{-1} a_1^{-1}\, b(\epsilon)^{-2} \int_{\|x\|>(a a_1 \kappa)^{1/2} b(\epsilon)} \|x\|^2\, \nu_\epsilon(dx).$$

Since the last expression converges to 0 by (2.18), Lemma 2.3 concludes the proof. □

Typically $\nu_\epsilon$ is of the form
$$\nu_\epsilon = \mathbf{1}_{D_\epsilon}\, \nu \tag{2.21}$$
and
$$\Sigma_\epsilon = \int_{D_\epsilon} x x^\top\, \nu(dx), \tag{2.22}$$
where $\nu$ is a Lévy measure and $D_\epsilon$ is a bounded neighborhood of the origin. This is a natural extension of the case studied in [1] for $d = 1$. However, contrary to the one-dimensional case, a centered ball is not always the best choice for $D_\epsilon$; we illustrate this point with examples in the next section. On the other hand, the case when $D_\epsilon$ is a ball of radius $\epsilon$ is typical and important. Suppose a Lévy measure $\nu$ has the representation in polar coordinates
$$\nu(dr, du) = \mu(dr\,|u)\, \lambda(du), \quad r > 0,\ u \in S^{d-1}. \tag{2.23}$$
Here $\{\mu(\cdot\,|u) : u \in S^{d-1}\}$ is a measurable family of Lévy measures on $(0,\infty)$ and $\lambda$ is a finite measure on the unit sphere $S^{d-1}$ such that the support of $\lambda$ is not contained in any proper subspace of $\mathbb{R}^d$.

Theorem 2.6 Let $\nu$ be a Lévy measure on $\mathbb{R}^d$ given by (2.23). Consider $\nu_\epsilon(dx) = \mathbf{1}_{D_\epsilon}(x)\, \nu(dx)$ where $D_\epsilon = \{x \in \mathbb{R}^d : \|x\| < \epsilon\}$. If
$$\lim_{\epsilon\to 0}\, \epsilon^{-2} \int_{(0,\epsilon)} r^2\, \mu(dr\,|u) = \infty \quad \lambda\text{-a.e.} \tag{2.24}$$
then $\Sigma_\epsilon$ is nonsingular and
$$\Sigma_\epsilon^{-1/2} X_\epsilon \overset{(d)}{\longrightarrow} N(0, I_d) \quad \text{as } \epsilon \to 0,$$
where $\Sigma_\epsilon$ is given by (2.22).

Proof: We have
$$\sigma_\epsilon^2(u) = \int_{(0,\epsilon)} r^2\, \mu(dr\,|u).$$

Therefore, condition (2.24) says that
$$\lim_{\epsilon\to 0} \frac{\sigma_\epsilon(u)}{\epsilon} = \infty \quad \lambda\text{-a.e.} \tag{2.25}$$
We can choose a sequence $\epsilon_k \searrow 0$ such that the sets
$$B_k := \Big\{u \in S^{d-1} : \inf\Big\{\frac{\sigma_\epsilon(u)}{\epsilon} : 0 < \epsilon \le \epsilon_k\Big\} > k^2\Big\}$$
satisfy $\lambda(B_k) > \lambda(S^{d-1})(1 - 2^{-k})$, $k = 1, 2, \dots$. Then $S_0 = \bigcup_{n=1}^\infty \bigcap_{k=n}^\infty B_k$ has full $\lambda$-measure. Put $b(\epsilon) = k\epsilon$ for $\epsilon_{k+1} < \epsilon \le \epsilon_k$. It is clear that
$$\lim_{\epsilon\to 0} \frac{b(\epsilon)}{\epsilon} = \infty, \tag{2.26}$$
and for each $u \in S_0$ there is an $n$ such that $u \in B_k$ for all $k \ge n$. Hence, for $\epsilon_{k+1} < \epsilon \le \epsilon_k \le \epsilon_n$,
$$\frac{\sigma_\epsilon(u)}{b(\epsilon)} = \frac{\sigma_\epsilon(u)}{k\epsilon} > k,$$
which proves that
$$\lim_{\epsilon\to 0} \frac{\sigma_\epsilon(u)}{b(\epsilon)} = \infty \quad \lambda\text{-a.e.}$$
Thus (2.17) of Theorem 2.5 holds. Using (2.26) and the form of $\nu_\epsilon$ we infer that, for each $\kappa > 0$ and sufficiently small $\epsilon > 0$,
$$\int_{\|x\|\ge\kappa b(\epsilon)} \|x\|^2\, \nu_\epsilon(dx) = \int_{\{\|x\|\ge\kappa b(\epsilon),\ \|x\|<\epsilon\}} \|x\|^2\, \nu(dx) = 0,$$
so (2.18) holds as well, and Theorem 2.5 concludes the proof. □

3 Applications

3.1 The general scheme

Let $X = \{X(t) : t \ge 0\}$ be a Lévy process in $\mathbb{R}^d$ without Gaussian part, whose characteristic function is of the form
$$\mathbb{E}\, e^{i\langle y, X(t)\rangle} = \exp\Big(t\Big(i\langle y, b\rangle + \int_{\mathbb{R}^d} \big[e^{i\langle y,x\rangle} - 1 - i\langle y,x\rangle \mathbf{1}(\|x\|\le 1)\big]\, \nu(dx)\Big)\Big). \tag{3.1}$$
Decompose the Lévy measure as
$$\nu = \nu_\epsilon + \nu^\epsilon, \tag{3.2}$$
where $\nu_\epsilon$ satisfies (2.2). Then
$$X \overset{(d)}{=} a_\epsilon + X_\epsilon + N^\epsilon, \tag{3.3}$$
where $X_\epsilon = \{X_\epsilon(t) : t \ge 0\}$ is a Lévy process with $X_\epsilon(1)$ distributed as in (2.1), the deterministic shift is
$$a_\epsilon(t) = t\Big(b + \int_{\|x\|>1} x\, \nu_\epsilon(dx) - \int_{\|x\|\le 1} x\, \nu^\epsilon(dx)\Big), \tag{3.4}$$
and $N^\epsilon = \{N^\epsilon(t) : t \ge 0\}$ is a compound Poisson process with Lévy measure $\nu^\epsilon$, independent of $X_\epsilon$. There are various methods to simulate the process $N^\epsilon$. If $\Sigma_\epsilon^{-1/2} X_\epsilon(1) \overset{(d)}{\longrightarrow} N(0, I_d)$, then we may approximate $X_\epsilon$ by $W_\epsilon$, a centered Brownian motion with covariance matrix $\Sigma_\epsilon$, where
$$\Sigma_\epsilon = \int_{\mathbb{R}^d} x x^\top\, \nu_\epsilon(dx).$$

Sometimes $\Sigma_\epsilon$ may be too complicated for practical use. In this case we may try to replace it by a simpler asymptotic. The method of approximation of $X$ is stated precisely in the following proposition.

Proposition 3.1 Let $X = \{X(t) : t \ge 0\}$ be a Lévy process in $\mathbb{R}^d$ determined by (3.1). Suppose that
$$\Sigma_\epsilon^{-1/2} X_\epsilon(1) \overset{(d)}{\longrightarrow} N(0, I_d) \quad \text{as } \epsilon \to 0$$
and that for some family of nonsingular matrices $\{A_\epsilon\}_{\epsilon\in(0,1]}$ of rank $d$ we have
$$A_\epsilon^{-1} \Sigma_\epsilon (A_\epsilon^{-1})^\top \to I_d \quad \text{as } \epsilon \to 0. \tag{3.5}$$
Let $a_\epsilon$, $N^\epsilon$ be as in (3.3), and let $W = \{W(t) : t \ge 0\}$ be a standard Brownian motion in $\mathbb{R}^d$ independent of $N^\epsilon$. Then for every $\epsilon \in (0,1]$ there exists a cadlag process $Y_\epsilon = \{Y_\epsilon(t) : t \ge 0\}$ such that
$$X \overset{(d)}{=} a_\epsilon + A_\epsilon W + N^\epsilon + Y_\epsilon \tag{3.6}$$
in the sense of equality of finite-dimensional distributions and such that for each $T > 0$
$$\sup_{t\in[0,T]} \|A_\epsilon^{-1} Y_\epsilon(t)\| \overset{(P)}{\longrightarrow} 0 \quad \text{as } \epsilon \to 0. \tag{3.7}$$

Proof: Consider the polar decomposition
$$A_\epsilon^{-1} \Sigma_\epsilon^{1/2} = C_\epsilon U_\epsilon,$$
where $C_\epsilon$ is positive definite and $U_\epsilon$ is an orthogonal matrix; see, e.g., [4], Theorem 12.2.22. Then
$$C_\epsilon^2 = C_\epsilon C_\epsilon^\top = A_\epsilon^{-1} \Sigma_\epsilon (A_\epsilon^{-1})^\top \to I_d.$$
Let $\epsilon_n \to 0$. There exists a subsequence $\{\epsilon_{n_k}\}$ such that $U_{\epsilon_{n_k}} \to U$ for some orthogonal matrix $U$. Hence
$$A_{\epsilon_{n_k}}^{-1} \Sigma_{\epsilon_{n_k}}^{1/2} = C_{\epsilon_{n_k}} U_{\epsilon_{n_k}} \to U.$$
Consequently,
$$A_{\epsilon_{n_k}}^{-1} X_{\epsilon_{n_k}}(1) = \big(A_{\epsilon_{n_k}}^{-1} \Sigma_{\epsilon_{n_k}}^{1/2}\big)\, \Sigma_{\epsilon_{n_k}}^{-1/2} X_{\epsilon_{n_k}}(1) \overset{(d)}{\longrightarrow} N(0, U U^\top) = N(0, I_d).$$
Thus $A_\epsilon^{-1} X_\epsilon(1) \overset{(d)}{\longrightarrow} N(0, I_d)$. By a theorem of Skorohod (cf. [8], Theorem 15.17) there exist Lévy processes $Z_\epsilon = \{Z_\epsilon(t) : t \ge 0\}$ such that $Z_\epsilon \overset{(d)}{=} A_\epsilon^{-1} X_\epsilon$ and
$$\sup_{t\in[0,T]} \|Z_\epsilon(t) - W(t)\| \overset{(P)}{\longrightarrow} 0 \quad \text{as } \epsilon \to 0 \tag{3.8}$$
for each $T > 0$. Making $W$ and $N^\epsilon$ depend on different coordinates of a large enough probability space, we may also assume that $Z_\epsilon$ and $N^\epsilon$ are independent. Put $Y_\epsilon = A_\epsilon(Z_\epsilon - W)$. Then
$$X \overset{(d)}{=} a_\epsilon + X_\epsilon + N^\epsilon \overset{(d)}{=} a_\epsilon + A_\epsilon Z_\epsilon + N^\epsilon = a_\epsilon + A_\epsilon W + N^\epsilon + Y_\epsilon.$$
This proves (3.6); (3.7) follows from (3.8). □

Conditions (3.6)–(3.7) make precise the statement
$$X \approx a_\epsilon + A_\epsilon W + N^\epsilon. \tag{3.9}$$
There are two practical issues related to the implementation of (3.9). The first is how to define $\nu_\epsilon$ so that $\Sigma_\epsilon$ (or its asymptotic) is computable. The second is how to generate $N^\epsilon$ efficiently. We illustrate these issues in the following examples.
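The assembly in (3.9) can be sketched programmatically. The following minimal Python sketch is our illustration (the function name `approx_levy_path` and its argument conventions are hypothetical, and the compound Poisson jumps are supplied by the caller); it builds $a_\epsilon t + A_\epsilon W(t) + N^\epsilon(t)$ on a time grid:

```python
import numpy as np

# Sketch of the approximation (3.9): X(t) ~ a_eps*t + A_eps*W(t) + N_eps(t),
# evaluated on a grid, with the compound Poisson jumps supplied by the caller.
# All names here are illustrative placeholders, not the paper's notation
# made executable.

rng = np.random.default_rng(0)

def approx_levy_path(T, n, a_eps, A_eps, jump_times, jump_sizes):
    """Values of a_eps*t + A_eps W(t) + (sum of jumps up to t) on n grid points."""
    d = len(a_eps)
    dt = T / n
    t = np.linspace(dt, T, n)
    # Brownian part with covariance A_eps A_eps^T per unit time
    dW = rng.standard_normal((n, d)) * np.sqrt(dt)
    W = np.cumsum(dW, axis=0)
    path = a_eps * t[:, None] + W @ A_eps.T
    for s, x in zip(jump_times, jump_sizes):   # add the compound Poisson jumps
        path[t >= s] += x
    return path

# toy 2-d example: two big jumps, small isotropic Gaussian remainder
a = np.array([0.1, -0.2])
A = 0.05 * np.eye(2)
path = approx_levy_path(1.0, 1000, a, A, [0.3, 0.7],
                        [np.array([1.0, 0.0]), np.array([0.0, -1.0])])
print(path.shape)  # (1000, 2)
```

In practice the big-jump times and sizes would come from a shot noise series such as (3.11) below, and $A_\epsilon$ from the asymptotics of $\Sigma_\epsilon$.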

3.2 Stable processes

The Lévy measure $\nu$ of an $\alpha$-stable process $X$ in $\mathbb{R}^d$ has the polar form
$$\nu(dr, du) = \alpha r^{-\alpha-1}\, dr\, \lambda(du) \tag{3.10}$$
where $\alpha \in (0,2)$ and $\lambda$ is a finite measure on $S^{d-1}$. Denote $\|\lambda\| = \lambda(S^{d-1})$. Assume that $X(1)$ is not concentrated on a proper hyperplane of $\mathbb{R}^d$, which by Lemma 2.1 means that $\lambda$ is not concentrated on a proper subspace of $\mathbb{R}^d$.

The compound Poisson process $N^\epsilon$ of (3.3) can be generated from a shot noise expansion of a stable process. Namely, let $\{e'_j\}$ be an iid sequence of exponential random variables with parameter 1 and $\gamma_j = e'_1 + \cdots + e'_j$; then $\{\gamma_j\}$ forms a Poisson point process on $(0,\infty)$ with the Lebesgue intensity measure. Let $\{\tau_j\}$ be an iid sequence of random variables uniform on $[0,T]$. Finally, let $\{v_j\}$ be an iid sequence of random vectors taking values in $S^{d-1}$ with the common distribution $\lambda/\|\lambda\|$. Assume that $\{v_j\}$, $\{\tau_j\}$, and $\{e'_j\}$ are independent. For $\epsilon \in (0,1]$ and $t \in [0,T]$ define
$$N^\epsilon(t) = \sum_{\gamma_j \le \epsilon^{-1}} \mathbf{1}_{(0,t]}(\tau_j) \Big(\frac{\gamma_j}{T\|\lambda\|}\Big)^{-1/\alpha} v_j. \tag{3.11}$$
It is elementary to check that $N^\epsilon$ is a compound Poisson process with characteristic function
$$\mathbb{E} \exp\big(i\langle y, N^\epsilon(t)\rangle\big) = \exp\Big(t \int_{\|x\| \ge \epsilon_0^{1/\alpha}} \big(e^{i\langle y,x\rangle} - 1\big)\, \nu(dx)\Big) \tag{3.12}$$
where $\epsilon_0 := \epsilon\|\lambda\| T$.

Therefore, we define $\nu_\epsilon(dx) = \mathbf{1}_{D_\epsilon}(x)\, \nu(dx)$, where $D_\epsilon = \{x : \|x\| < \epsilon_0^{1/\alpha}\}$; $\nu_\epsilon$ is of the form (2.21). Then, by (2.22),
$$\Sigma_\epsilon = \frac{\alpha}{2-\alpha}\, \epsilon_0^{2/\alpha-1}\, \Lambda \tag{3.13}$$
where
$$\Lambda = \int_{S^{d-1}} u u^\top\, \lambda(du). \tag{3.14}$$
Since $\nu$ is of the form (2.23) with $\mu(dr|u) = \alpha r^{-1-\alpha}\, dr$ and
$$\epsilon^{-2} \int_{(0,\epsilon)} r^2\, \mu(dr|u) = \frac{\alpha}{2-\alpha}\, \epsilon^{-\alpha} \to \infty,$$
we infer from Theorem 2.6 that
$$\Sigma_\epsilon^{-1/2} X_\epsilon(1) \overset{(d)}{\longrightarrow} N(0, I_d).$$
Therefore, the approximation (3.9) applies. By Proposition 3.1 we have

Proposition 3.2 Let $X$ be an $\alpha$-stable Lévy process with Lévy measure given by (3.10). Suppose that the support of $\lambda$ is not contained in a proper subspace of $\mathbb{R}^d$. Let $T$ be fixed and recall that $\epsilon_0 := \epsilon\|\lambda\| T$. Let $N^\epsilon$ be given by (3.11), let $W$ be a standard Brownian motion in $\mathbb{R}^d$ independent of $N^\epsilon$, and let $a_\epsilon$ be the shift determined by (3.4). Then, for every $\epsilon \in (0,1]$ there exists a cadlag process $Y_\epsilon$ such that on $[0,T]$
$$X \overset{(d)}{=} a_\epsilon + \epsilon_0^{1/\alpha-1/2} \Big(\frac{\alpha}{2-\alpha}\Big)^{1/2} \Lambda^{1/2}\, W + N^\epsilon + Y_\epsilon \tag{3.15}$$
in the sense of equality of finite-dimensional distributions and such that
$$\epsilon_0^{1/2-1/\alpha} \sup_{t\in[0,T]} \|Y_\epsilon(t)\| \overset{(P)}{\longrightarrow} 0 \quad \text{as } \epsilon \to 0. \tag{3.16}$$

Proof: We apply Proposition 3.1 to $A_\epsilon = \Sigma_\epsilon^{1/2}$, where $\Sigma_\epsilon$ is given by (3.13). We obtain (3.15) and
$$\|Y_\epsilon(t)\| \le \|A_\epsilon\| \|A_\epsilon^{-1} Y_\epsilon(t)\| \le \epsilon_0^{1/\alpha-1/2} \Big(\frac{\alpha}{2-\alpha}\Big)^{1/2} \|\Lambda^{1/2}\|\, \|A_\epsilon^{-1} Y_\epsilon(t)\|.$$
Since $\Lambda$ is nonsingular, (3.7) yields (3.16). □

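A minimal Python sketch of Proposition 3.2's recipe, for $d = 2$ with $\lambda$ uniform on the unit circle (so that $\Lambda = (\|\lambda\|/2) I_2$) and with the shift $a_\epsilon$ omitted, since it vanishes for this symmetric process. This is an illustrative sketch under those assumptions, not a tuned implementation:

```python
import numpy as np

# Sketch of (3.11) + (3.15) in d = 2, lambda uniform on the unit circle
# (Lambda = (|lambda|/2) I_2); a_eps = 0 by symmetry.  Illustrative only.

rng = np.random.default_rng(1)
alpha, T, lam_mass, eps = 1.5, 1.0, 1.0, 1e-3
eps0 = eps * lam_mass * T

# Poisson arrival times gamma_j up to 1/eps (shot noise series (3.11))
gammas = []
g = 0.0
while True:
    g += rng.exponential()
    if g > 1.0 / eps:
        break
    gammas.append(g)
gammas = np.array(gammas)
theta = rng.uniform(0, 2 * np.pi, size=len(gammas))   # directions ~ lambda/|lambda|
v = np.stack([np.cos(theta), np.sin(theta)], axis=1)
radii = (gammas / (T * lam_mass)) ** (-1.0 / alpha)   # radial parts of the jumps

def X_T():
    # N_eps(T): every tau_j <= T, so all jumps contribute at t = T
    N = (radii[:, None] * v).sum(axis=0)
    # Gaussian remainder from (3.15):
    # eps0^(1/alpha - 1/2) * (alpha/(2-alpha))^(1/2) * Lambda^(1/2) * W(T)
    Lam_sqrt = np.sqrt(lam_mass / 2.0) * np.eye(2)
    scale = eps0 ** (1.0 / alpha - 0.5) * np.sqrt(alpha / (2 - alpha))
    G = scale * Lam_sqrt @ rng.standard_normal(2) * np.sqrt(T)
    return N + G

x = X_T()
print(x.shape)  # (2,)
```

For a nonsymmetric $\lambda$ the shift (3.4) must be added, and $\Lambda^{1/2}$ computed from the actual spectral measure.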

3.3 Tempered stable processes

Recall that the Lévy measure of a tempered $\alpha$-stable process $X$ in $\mathbb{R}^d$ is of the form
$$\nu(dr, du) = \alpha r^{-\alpha-1} q(r, u)\, dr\, \lambda(du) \tag{3.17}$$
in polar coordinates, where $\alpha \in (0,2)$, $\lambda$ is a finite measure on $S^{d-1}$, and $q : (0,\infty) \times S^{d-1} \to (0,\infty)$ is a Borel function such that, for each $u \in S^{d-1}$, $q(\cdot, u)$ is completely monotone with $q(0+, u) = 1$ and $q(\infty, u) = 0$. For this and further facts on tempered stable distributions and processes see [16]. As in the previous section, assume that $\lambda$ is not concentrated on a proper subspace of $\mathbb{R}^d$ and put $\|\lambda\| = \lambda(S^{d-1})$. Define a finite measure $Q$ on $\mathbb{R}_0^d := \mathbb{R}^d \setminus \{0\}$ by
$$Q(A) := \int_{S^{d-1}} \int_0^\infty \mathbf{1}_A(ru)\, Q(dr|u)\, \lambda(du),$$
where $\{Q(\cdot|u)\}_{u\in S^{d-1}}$ is a measurable family of probability measures on $\mathbb{R}_+$ determined by $q(r, u) = \int_0^\infty e^{-rs}\, Q(ds|u)$. Note that $Q(\mathbb{R}_0^d) = \|\lambda\|$. Let $\{v_j\}$ be an iid sequence in $\mathbb{R}_0^d$ with the common distribution $Q/\|\lambda\|$. Let $\{u_j\}$ and $\{e_j\}$ be iid sequences of uniform on $(0,1)$ and exponential with parameter 1 random variables, respectively. Let $\{\gamma_j\}$ and $\{\tau_j\}$ be as in the previous section. Assume that $\{v_j\}$, $\{u_j\}$, $\{e_j\}$, $\{\gamma_j\}$ and $\{\tau_j\}$ are independent of each other.

Again, it is natural to take as the process $N^\epsilon$ a partial sum of a shot noise representation of a tempered stable process. Such a representation is given in [16], Theorem 5.4. Therefore, we have, for $\epsilon \in (0,1]$ and $t \in [0,T]$,
$$N^\epsilon(t) = \sum_{\gamma_j \le \epsilon^{-1}} \mathbf{1}_{(0,t]}(\tau_j) \left[\Big(\frac{\gamma_j}{T\|\lambda\|}\Big)^{-1/\alpha} \wedge\ e_j u_j^{1/\alpha} \|v_j\|^{-1}\right] \frac{v_j}{\|v_j\|}. \tag{3.18}$$
$N^\epsilon$ is a compound Poisson process with characteristic function
$$\mathbb{E} \exp\big(i\langle y, N^\epsilon(t)\rangle\big) = \exp\Big(t \int_{\mathbb{R}_0^d} \big(e^{i\langle y,x\rangle} - 1\big)\, \nu^\epsilon(dx)\Big).$$
From the proof of Theorem 5.1 in [16], or otherwise, one can verify that $\nu^\epsilon(dr, du) = k^\epsilon(r, u)\, dr\, \lambda(du)$ in polar coordinates, where
$$k^\epsilon(r, u) = \begin{cases} \epsilon_0^{-1}\, \alpha \Big[r^{-1} q(r, u) - r^{\alpha-1} \displaystyle\int_r^\infty \alpha s^{-\alpha-1} q(s, u)\, ds\Big], & 0 < r < \epsilon_0^{1/\alpha}, \\[1ex] \alpha r^{-\alpha-1} q(r, u), & r \ge \epsilon_0^{1/\alpha}, \end{cases}$$
and $\epsilon_0 = \epsilon T\|\lambda\|$, as in (3.12). Notice that the process $N^\epsilon$ has both large and small jumps, which differs from the stable case treated in the previous section. Moreover, since $k^\epsilon(r, u) \le \epsilon_0^{-1} \alpha r^{-1} q(r, u) \le \alpha r^{-\alpha-1} q(r, u)$ if $0 < r < \epsilon_0^{1/\alpha}$, we have $\nu^\epsilon \le \nu$. Hence $\nu_\epsilon = \nu - \nu^\epsilon$ has the polar representation $\nu_\epsilon(dr, du) = k_\epsilon(r, u)\, dr\, \lambda(du)$, where
$$k_\epsilon(r, u) = \alpha\big(r^{-\alpha-1} - \epsilon_0^{-1} r^{-1}\big) q(r, u) + \epsilon_0^{-1} \alpha\, r^{\alpha-1} \int_r^\infty \alpha s^{-\alpha-1} q(s, u)\, ds$$
if $0 < r < \epsilon_0^{1/\alpha}$, and $k_\epsilon(r, u) = 0$ if $r \ge \epsilon_0^{1/\alpha}$. Thus $\nu_\epsilon$ is of the form (2.15) (but not of the form (2.21)). We will use Theorem 2.5 to show the normal approximation for $X_\epsilon(1)$. We begin with an estimate of $\sigma_\epsilon^2(u)$.

We have
$$\sigma_\epsilon^2(u) = \int_0^\infty r^2\, k_\epsilon(r, u)\, dr \ge \alpha\, q(\epsilon_0^{1/\alpha}, u) \int_0^{\epsilon_0^{1/\alpha}} \big(r^{-\alpha+1} - \epsilon_0^{-1} r\big)\, dr = \frac{\alpha^2}{2(2-\alpha)}\, \epsilon_0^{2/\alpha-1}\, q(\epsilon_0^{1/\alpha}, u).$$
Therefore, condition (2.17) holds with $b(\epsilon) = \epsilon_0^{1/\alpha-1/2}$. Condition (2.18) trivially holds because $\nu_\epsilon$ is concentrated on a ball of radius $\epsilon_0^{1/\alpha}$, so that
$$\int_{\|x\|>\kappa \epsilon_0^{1/\alpha-1/2}} \|x\|^2\, \nu_\epsilon(dx) = \int_{\{\|x\|>\kappa \epsilon_0^{1/\alpha-1/2},\ \|x\|<\epsilon_0^{1/\alpha}\}} \|x\|^2\, \nu(dx) = 0$$
for all sufficiently small $\epsilon$. Hence, by Theorem 2.5,
$$\Sigma_\epsilon^{-1/2} X_\epsilon(1) \overset{(d)}{\longrightarrow} N(0, I_d).$$
Next we show that
$$\epsilon_0^{1-2/\alpha}\, \Sigma_\epsilon \to \frac{\alpha}{2-\alpha}\, \Lambda \quad \text{as } \epsilon \to 0, \tag{3.19}$$
so that, by Proposition 3.1,
$$A_\epsilon = \epsilon_0^{1/\alpha-1/2} \Big(\frac{\alpha}{2-\alpha}\Big)^{1/2} \Lambda^{1/2} \tag{3.20}$$
satisfies (3.5). Since $k_\epsilon(r, u) \le \alpha r^{-\alpha-1} q(r, u) \le \alpha r^{-\alpha-1}$, we get the upper bound
$$\epsilon_0^{1-2/\alpha}\, \Sigma_\epsilon = \epsilon_0^{1-2/\alpha} \int_{S^{d-1}} \sigma_\epsilon^2(u)\, u u^\top\, \lambda(du) \le \frac{\alpha}{2-\alpha}\, \Lambda. \tag{3.21}$$

To get a lower bound, write
$$\sigma_\epsilon^2(u) = \alpha \int_0^{\epsilon_0^{1/\alpha}} r^2 \big(r^{-\alpha-1} q(r, u) - \epsilon_0^{-1} \ell(r, u)\big)\, dr \ge \frac{\alpha}{2-\alpha}\, \epsilon_0^{2/\alpha-1}\, q(\epsilon_0^{1/\alpha}, u) - \alpha \epsilon_0^{-1} \int_0^{\epsilon_0^{1/\alpha}} r^2\, \ell(r, u)\, dr, \tag{3.22}$$
where
$$\ell(r, u) = r^{-1} q(r, u) - r^{\alpha-1} \int_r^\infty \alpha s^{-\alpha-1} q(s, u)\, ds.$$
Then, since $r^2 \le \epsilon_0^{2/\alpha}$ on the domain of integration,
$$\alpha \epsilon_0^{-1} \int_0^{\epsilon_0^{1/\alpha}} r^2\, \ell(r, u)\, dr \le \alpha\, \epsilon_0^{2/\alpha-1} \int_0^{\epsilon_0^{1/\alpha}} \ell(r, u)\, dr$$
$$= \epsilon_0^{2/\alpha-1} \int_0^{\epsilon_0^{1/\alpha}} \Big(-\frac{\partial}{\partial r}\Big)\Big[r^\alpha \int_r^\infty \alpha s^{-\alpha-1} q(s, u)\, ds\Big]\, dr = \epsilon_0^{2/\alpha-1} \Big[1 - \epsilon_0 \int_{\epsilon_0^{1/\alpha}}^\infty \alpha s^{-\alpha-1} q(s, u)\, ds\Big] =: \epsilon_0^{2/\alpha-1}\, k(\epsilon_0^{1/\alpha}, u).$$
Notice that $0 \le k(\epsilon_0^{1/\alpha}, u) \le 1$ and $\lim_{\epsilon\to 0} k(\epsilon_0^{1/\alpha}, u) = 0$. Combining the above estimate with (3.22) we get
$$\sigma_\epsilon^2(u) \ge \epsilon_0^{2/\alpha-1} \Big[\frac{\alpha}{2-\alpha}\, q(\epsilon_0^{1/\alpha}, u) - k(\epsilon_0^{1/\alpha}, u)\Big],$$
which together with (3.21) yields
$$\frac{\alpha}{2-\alpha}\, \Lambda \ \ge\ \epsilon_0^{1-2/\alpha}\, \Sigma_\epsilon \ \ge\ \frac{\alpha}{2-\alpha}\, \Lambda - \int_{S^{d-1}} \Big[\frac{\alpha}{2-\alpha}\big(1 - q(\epsilon_0^{1/\alpha}, u)\big) + k(\epsilon_0^{1/\alpha}, u)\Big]\, u u^\top\, \lambda(du).$$
Applying now the Dominated Convergence Theorem, we establish (3.19), and so (3.20). In conclusion, using Proposition 3.1 we obtain

Theorem 3.3 Let $X$ be a tempered $\alpha$-stable Lévy process with Lévy measure given by (3.17). Suppose that the support of $\lambda$ is not contained in a proper subspace of $\mathbb{R}^d$. Let $T$ be fixed and let $\epsilon_0$ be as in (3.12). Let $N^\epsilon$ be given by (3.18), let $W$ be a standard Brownian motion in $\mathbb{R}^d$ independent of $N^\epsilon$, and let $a_\epsilon$ be the shift determined by (3.4).

Then, for every $\epsilon \in (0,1]$ there exists a cadlag process $Y_\epsilon$ such that on $[0,T]$
$$X \overset{(d)}{=} a_\epsilon + \epsilon_0^{1/\alpha-1/2} \Big(\frac{\alpha}{2-\alpha}\Big)^{1/2} \Lambda^{1/2}\, W + N^\epsilon + Y_\epsilon \tag{3.23}$$
in the sense of equality of finite-dimensional distributions and such that
$$\epsilon_0^{1/2-1/\alpha} \sup_{t\in[0,T]} \|Y_\epsilon(t)\| \overset{(P)}{\longrightarrow} 0 \quad \text{as } \epsilon \to 0. \tag{3.24}$$

The basic difference between (3.23) and (3.15) is in the form of $N^\epsilon$ (and, implicitly, in $a_\epsilon$ and $Y_\epsilon$).
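For a concrete instance of (3.18), take the tempering function $q(r, u) = e^{-\theta r}$; then $Q(\cdot|u)$ is the point mass at $\theta$, so $v_j = \theta w_j$ with $w_j \sim \lambda/\|\lambda\|$ and $\|v_j\| = \theta$. A Python sketch of the value at $t = T$ (our illustration, $d = 2$, $\lambda$ uniform on the circle):

```python
import numpy as np

# Sketch of the tempered stable series (3.18) with q(r,u) = exp(-theta*r):
# Q(.|u) is a point mass at theta, hence v_j = theta*w_j, |v_j| = theta,
# and the tempering term in (3.18) becomes e_j * u_j^(1/alpha) / theta.
# Illustrative sketch, d = 2, lambda uniform.

rng = np.random.default_rng(2)
alpha, T, lam_mass, theta, eps = 0.8, 1.0, 1.0, 2.0, 1e-3

def N_eps_at_T():
    total = np.zeros(2)
    g = 0.0
    while True:
        g += rng.exponential()              # gamma_j: unit-rate Poisson arrivals
        if g > 1.0 / eps:
            return total
        phi = rng.uniform(0, 2 * np.pi)
        w = np.array([np.cos(phi), np.sin(phi)])   # direction v_j / |v_j|
        e = rng.exponential()               # e_j
        u = rng.uniform()                   # u_j
        # radial part: minimum of the stable term and the tempering term
        r = min((g / (T * lam_mass)) ** (-1.0 / alpha),
                e * u ** (1.0 / alpha) / theta)
        total += r * w                      # every tau_j <= T, so all terms count

x = N_eps_at_T()
print(x)
```

For a general completely monotone $q$ one would instead sample the radius of $v_j$ from $Q(\cdot|u)$, which is the only place the tempering enters the series.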

3.4 Operator stable processes

A Lévy process $X$ as in (3.1) is said to be operator stable with exponent $B \in GL(\mathbb{R}^d)$ if for every $t > 0$ there exists a vector $b(t) \in \mathbb{R}^d$ such that
$$X(t) \overset{(d)}{=} t^B X(1) + b(t) \tag{3.25}$$
where $t^B = \exp(B \log t)$. A comprehensive introduction to operator stable laws can be found in the monographs [7] and [12]. Since $X$ has no Gaussian part, the necessary and sufficient condition for $B$ to be an exponent is that all the roots of the minimal polynomial of $B$ have real parts greater than $1/2$ (cf. [7], Theorem 4.6.12).

For a description of the Lévy measure of $X$ it is convenient to use a norm $\|\cdot\|_B$ on $\mathbb{R}^d$ that depends on $B$ as follows:
$$\|x\|_B = \int_0^1 \|s^B x\|\, s^{-1}\, ds.$$
Let $S_B$ denote the unit sphere in $\mathbb{R}^d$ with respect to $\|\cdot\|_B$. Then there exists a finite Borel measure $\lambda$ on $S_B$ such that the Lévy measure $\nu$ of $X$ is of the form
$$\nu(A) = \int_{S_B} \int_0^\infty \mathbf{1}_A(s^B u)\, s^{-2}\, ds\, \lambda(du) \tag{3.26}$$
where $A \in \mathcal{B}(\mathbb{R}^d)$ (cf. [7], Proposition 4.3.4).

To establish the approximation (3.9) we start with the compound Poisson process $N^\epsilon$. It is natural to look at a shot noise representation of $X$. As suggested by a remark following Corollary 4.4 of [14], we may take
$$N^\epsilon(t) = \sum_{\gamma_j \le \epsilon^{-1}} \mathbf{1}_{(0,t]}(\tau_j) \Big(\frac{\gamma_j}{T\|\lambda\|}\Big)^{-B} v_j. \tag{3.27}$$
This is a complete analogue of (3.11), with the same notation, except that $1/\alpha$ is replaced by $B$ and $S^{d-1}$ by $S_B$. It is elementary to check that $N^\epsilon$ is a compound Poisson process with characteristic function
$$\mathbb{E} \exp\big(i\langle y, N^\epsilon(t)\rangle\big) = \exp\Big(t \int_{S_B} \int_{\epsilon_0}^\infty \big(e^{i\langle y, s^B u\rangle} - 1\big)\, s^{-2}\, ds\, \lambda(du)\Big)$$
where $\epsilon_0 = \epsilon\|\lambda\| T$, as in (3.12). Combining this with (3.26) we have
$$\nu_\epsilon(A) = \int_{S_B} \int_0^{\epsilon_0} \mathbf{1}_A(s^B u)\, s^{-2}\, ds\, \lambda(du).$$
From (2.3) we get
$$\Sigma_\epsilon = \int_{S_B} \int_0^{\epsilon_0} (s^B u)(s^B u)^\top s^{-2}\, ds\, \lambda(du) = \int_0^{\epsilon_0} s^B \Lambda\, (s^B)^\top s^{-2}\, ds \tag{3.28}$$
where $\Lambda$ is given by (3.14) (with $S^{d-1}$ replaced by $S_B$). We also observe that
$$\Sigma_\epsilon = \epsilon_0^{-1} \int_0^1 (\epsilon_0 r)^B \Lambda\, \big((\epsilon_0 r)^B\big)^\top r^{-2}\, dr = \epsilon_0^{-1}\, \epsilon_0^B\, \Sigma_1\, (\epsilon_0^B)^\top. \tag{3.29}$$
Let $\mathrm{lin}_B(\mathrm{supp}\,\lambda)$ denote the smallest $B$-invariant subspace of $\mathbb{R}^d$ containing the support of $\lambda$. If $\nu$ is as in (3.26), then the support of $\nu$ is not contained in a proper subspace of $\mathbb{R}^d$ if and only if
$$\mathrm{lin}_B(\mathrm{supp}\,\lambda) = \mathbb{R}^d; \tag{3.30}$$
cf. [7], Corollary 4.3.5. If $\lambda$ does not lie in a proper subspace of $\mathbb{R}^d$, then (3.30) clearly holds.

Theorem 3.4 Let $X$ be an operator stable Lévy process with exponent $B$ and Lévy measure given by (3.26) such that (3.30) holds. Let $T$ be fixed and let $\epsilon_0$ be as in (3.12). Let $N^\epsilon$ be as in (3.27), let $W$ be a standard Brownian motion in $\mathbb{R}^d$ independent of $N^\epsilon$, and let $a_\epsilon$ be the shift determined by (3.4). Then, for every $\epsilon \in (0,1]$ there exists a cadlag process $Y_\epsilon$ such that on $[0,T]$
$$X \overset{(d)}{=} a_\epsilon + \epsilon_0^{-1/2}\, \epsilon_0^B\, \Sigma_1^{1/2}\, W + N^\epsilon + Y_\epsilon \tag{3.31}$$
in the sense of equality of finite-dimensional distributions and such that for every $\delta > 0$
$$\epsilon_0^{1/2-\min\{b_1,\dots,b_d\}+\delta} \sup_{t\in[0,T]} \|Y_\epsilon(t)\| \overset{(P)}{\longrightarrow} 0 \quad \text{as } \epsilon \to 0, \tag{3.32}$$
where $b_1, \dots, b_d$ are the real parts of the eigenvalues of $B$.

Proof: First we prove that $\Sigma_1$ is nonsingular. By Lemma 2.1 we need to show that the support of $\nu_1$ spans $\mathbb{R}^d$. Following Corollary 4.3.5 of [7] we have
$$\mathrm{supp}\,\nu_1 = \{x : x = s^B u,\ 0 \le s \le 1,\ u \in \mathrm{supp}\,\lambda\}.$$
Let $L = \mathrm{lin}(\mathrm{supp}\,\nu_1)$ be the linear space spanned by $\mathrm{supp}\,\nu_1$. We will show that $L$ is $B$-invariant. To this end it is enough to show that if $x = s^B u \in \mathrm{supp}\,\nu_1$ with $0 < s \le 1$ and $u \in \mathrm{supp}\,\lambda$, then $B s^B u \in L$. Take $\theta < 1$, so that $\theta s < s \le 1$. Then $(\theta s)^B u \in \mathrm{supp}\,\nu_1$, hence $(\theta s)^B u - s^B u \in L$, and
$$B s^B u = \lim_{\theta \nearrow 1} \frac{(\theta s)^B u - s^B u}{\log \theta} \in L.$$
Since $L$ is $B$-invariant and contains the support of $\lambda$, $L = \mathbb{R}^d$ by (3.30). Thus $\Sigma_1$ is nonsingular. We infer that $\Sigma_1 \ge c_1 I_d$, where $c_1 = \min_{\|x\|=1} \langle \Sigma_1 x, x\rangle > 0$. By (3.29) we get
$$\Sigma_\epsilon \ge c_1\, \epsilon_0^{-1}\, \epsilon_0^B\, (\epsilon_0^B)^\top. \tag{3.33}$$

We will use Lemma 2.3 in the proof of the normal approximation. To this end we need to find a lower bound $\tilde\Sigma_\epsilon$ of $\Sigma_\epsilon$ which satisfies condition (2.11). The Jordan decomposition of the exponent $B$ states that
$$B = D + N \tag{3.34}$$
where $D$ is semisimple and $N$ is a nilpotent matrix such that $DN = ND$; see, e.g., Theorem 2.1.18 of [12]. Then, for any $s > 0$,
$$s^B (s^B)^\top = s^D s^N (s^N)^\top (s^D)^\top.$$
Choose $\delta > 0$, to be determined later. Since $-N^\top$ is also nilpotent, the real parts of all eigenvalues of $-N^\top$ are zero. Therefore, there exists $c_2 > 0$ such that for all $s \in (0,1]$
$$\|s^{-N^\top}\| \le c_2 s^{-\delta} \tag{3.35}$$
(see [11], for instance). Hence
$$\|x\| \le \|s^{-N^\top}\|\, \|s^{N^\top} x\| \le c_2 s^{-\delta} \|s^{N^\top} x\|$$
for $s \in (0,1]$. This yields $\|x\|^2 \le c_2^2 s^{-2\delta} \langle s^N (s^N)^\top x, x\rangle$, or
$$s^N (s^N)^\top \ge c_3 s^{2\delta} I_d \quad \text{for } s \in (0,1],\ \ c_3 = c_2^{-2}.$$
Consequently, for $s \in (0,1]$,
$$s^B (s^B)^\top \ge c_3 s^{2\delta}\, s^D (s^D)^\top. \tag{3.36}$$
Since $D$ is semisimple, $D = U E U^{-1}$, where $U$ is a unitary matrix and $E = \mathrm{diag}(e_1, \dots, e_d)$ is a diagonal matrix with diagonal $e_1, \dots, e_d$, where $e_k = b_k + i b'_k$ are the eigenvalues of $B$. Thus
$$s^D (s^D)^\top = (U s^E U^{-1})(U s^{E^*} U^{-1}) = U\, \mathrm{diag}(s^{2b_1}, \dots, s^{2b_d})\, U^{-1},$$
where $E^*$ is the complex conjugate transpose of $E$. Combining this with (3.36) we have, for $s \in (0,1]$,
$$s^B (s^B)^\top \ge c_3\, U\, \mathrm{diag}(s^{2b_1+2\delta}, \dots, s^{2b_d+2\delta})\, U^{-1}.$$
Combining this with (3.33) we get, for $0 < \epsilon_0 \le 1$,
$$\Sigma_\epsilon \ge c_4\, \epsilon_0^{2\delta-1}\, U\, \mathrm{diag}(\epsilon_0^{2b_1}, \dots, \epsilon_0^{2b_d})\, U^{-1} =: \tilde\Sigma_\epsilon,$$
where $c_4 = c_1 c_3$.

To verify condition (2.11) we first estimate $\langle \tilde\Sigma_\epsilon^{-1} x, x\rangle$ for $x = s^B u$, where $0 < s \le \epsilon_0 \le 1$ and $u \in S_B$. We get
$$\langle \tilde\Sigma_\epsilon^{-1} s^B u, s^B u\rangle = \langle (s^D)^\top \tilde\Sigma_\epsilon^{-1} s^D s^N u, s^N u\rangle \le c_5 s^{-2\delta} \big\|(s^D)^\top \tilde\Sigma_\epsilon^{-1} s^D\big\|$$
$$= c_5 c_4^{-1}\, \epsilon_0^{1-2\delta} s^{-2\delta} \Big\|U\, \mathrm{diag}\Big(\Big(\frac{s}{\epsilon_0}\Big)^{2b_1}, \dots, \Big(\frac{s}{\epsilon_0}\Big)^{2b_d}\Big) U^{-1}\Big\| = c_5 c_4^{-1}\, \epsilon_0^{1-4\delta} \Big\|\mathrm{diag}\Big(\Big(\frac{s}{\epsilon_0}\Big)^{2b_1-2\delta}, \dots, \Big(\frac{s}{\epsilon_0}\Big)^{2b_d-2\delta}\Big)\Big\|.$$
In the first inequality we used the fact that the bound (3.35) holds for any nilpotent matrix (with a possibly different constant) and the fact that $S_B$ is bounded. If $\delta = 1/8$ and $c_6 = c_5 c_4^{-1}$, then the above bound shows that
$$\langle \tilde\Sigma_\epsilon^{-1} s^B u, s^B u\rangle \le c_6\, \epsilon_0^{1/2} \tag{3.37}$$
whenever $0 < s \le \epsilon_0 \le 1$ and $u \in S_B$. Then, for every $\kappa > 0$ and $\epsilon_0 \le 1$, we have
$$\int_{\langle \tilde\Sigma_\epsilon^{-1}x,x\rangle>\kappa} \langle \tilde\Sigma_\epsilon^{-1}x,x\rangle\, \nu_\epsilon(dx) = \iint_{\{(s,u)\in(0,\epsilon_0]\times S_B :\, \langle \tilde\Sigma_\epsilon^{-1}s^B u, s^B u\rangle>\kappa\}} \langle \tilde\Sigma_\epsilon^{-1} s^B u, s^B u\rangle\, s^{-2}\, ds\, \lambda(du) = 0$$
when $\epsilon_0 < c_6^{-2} \kappa^2$ (or, by (3.12), when $\epsilon < c_6^{-2} \kappa^2 \|\lambda\|^{-1} T^{-1}$). Indeed, in view of (3.37) the region of integration is empty for $\epsilon_0 < c_6^{-2} \kappa^2$. Therefore, (2.11) trivially holds and $\Sigma_\epsilon^{-1/2} X_\epsilon(1) \overset{(d)}{\longrightarrow} N(0, I_d)$. Applying Proposition 3.1 with
$$A_\epsilon = \epsilon_0^{-1/2}\, \epsilon_0^B\, \Sigma_1^{1/2},$$
we get (3.31) and
$$\sup_{t\in[0,T]} \|A_\epsilon^{-1} Y_\epsilon(t)\| \overset{(P)}{\longrightarrow} 0 \quad \text{as } \epsilon \to 0. \tag{3.38}$$
Proceeding as above we can find positive constants $c'_1$ and $c'_2$ such that for every $\epsilon_0 \le 1$ and $\delta > 0$
$$A_\epsilon A_\epsilon^\top = \Sigma_\epsilon \le c'_1\, \epsilon_0^{-1}\, \epsilon_0^B (\epsilon_0^B)^\top \le c'_2\, \epsilon_0^{-1-2\delta}\, \epsilon_0^D (\epsilon_0^D)^\top = c'_2\, \epsilon_0^{-1-2\delta}\, U\, \mathrm{diag}(\epsilon_0^{2b_1}, \dots, \epsilon_0^{2b_d})\, U^{-1} \le c'_2\, \epsilon_0^{-1-2\delta+2\min\{b_1,\dots,b_d\}}\, I_d.$$
This yields
$$\|A_\epsilon\| \le c\, \epsilon_0^{-1/2-\delta+\min\{b_1,\dots,b_d\}},$$
where $c = \sqrt{c'_2}$. Therefore,
$$\|Y_\epsilon(t)\| \le \|A_\epsilon\|\, \|A_\epsilon^{-1} Y_\epsilon(t)\| \le c\, \epsilon_0^{-1/2-\delta+\min\{b_1,\dots,b_d\}}\, \|A_\epsilon^{-1} Y_\epsilon(t)\|,$$
which together with (3.38) yields (3.32). The proof is complete. □
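The series (3.27) is straightforward to simulate whenever $t^B$ can be computed. As an illustration (our choice, not from the paper), take a diagonal exponent $B = \mathrm{diag}(1/\alpha_1, 1/\alpha_2)$ with $\alpha_1, \alpha_2 \in (0,2)$, so that $s^B = \mathrm{diag}(s^{1/\alpha_1}, s^{1/\alpha_2})$ requires no matrix exponential; for simplicity the directions are drawn uniformly on the Euclidean circle rather than on $S_B$:

```python
import numpy as np

# Sketch of the operator stable series (3.27) with a diagonal exponent
# B = diag(1/a1, 1/a2), so s^B = diag(s^(1/a1), s^(1/a2)) componentwise.
# Directions drawn uniformly on the Euclidean circle for simplicity
# (the paper uses the |.|_B-sphere S_B).  Illustrative sketch only.

rng = np.random.default_rng(3)
a1, a2, T, lam_mass, eps = 1.7, 0.9, 1.0, 1.0, 1e-3
B_diag = np.array([1.0 / a1, 1.0 / a2])

def s_pow_B(s):
    return s ** B_diag          # diagonal of the matrix s^B

def N_eps_at_T():
    total = np.zeros(2)
    g = 0.0
    while True:
        g += rng.exponential()                  # gamma_j
        if g > 1.0 / eps:
            return total
        phi = rng.uniform(0, 2 * np.pi)
        v = np.array([np.cos(phi), np.sin(phi)])
        # jump (gamma_j/(T*|lambda|))^(-B) v_j, computed componentwise
        total += s_pow_B((g / (T * lam_mass)) ** (-1.0)) * v

x = N_eps_at_T()
print(x)
```

For a non-diagonalizable exponent, $s^B = \exp(B \log s)$ would be computed with a matrix exponential routine instead.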

Remark 3.5 One may also consider
$$\tilde N^\epsilon(t) = \sum_j \mathbf{1}_{(0,t]}(\tau_j)\, \mathbf{1}\Big\{\Big\|\Big(\frac{\gamma_j}{T\|\lambda\|}\Big)^{-B} v_j\Big\| \ge \epsilon\Big\} \Big(\frac{\gamma_j}{T\|\lambda\|}\Big)^{-B} v_j.$$
That is, the corresponding remainder process $\tilde X_\epsilon$ has jumps of magnitude less than $\epsilon$. From both the theoretical and the computational point of view, the compound Poisson approximation $N^\epsilon$ of (3.27) is more tractable than the above. As a matter of fact, we were unable to establish the Gaussian approximation of $\tilde X_\epsilon$ in full generality for any operator stable process.

4 Other applications

4.1 Independent marginals

The normal approximation for independent marginals can be deduced from the one-dimensional result of [1]. The aim of this example is to show how the approximation follows from our general scheme. Suppose that the coordinates of the multivariate Lévy process $X(t) = (X_1(t), \dots, X_d(t))$ are independent. Let $\nu_i$ be the Lévy measure of $X_i$. Then
$$\nu(dx_1, \dots, dx_d) = \sum_{i=1}^d \delta(dx_1) \otimes \cdots \otimes \nu_i(dx_i) \otimes \cdots \otimes \delta(dx_d),$$
where the $\delta$'s are Dirac masses at $0$. Let $D_\epsilon = \{x \in \mathbb{R}^d : \|x\| < \epsilon\}$ and $\nu_\epsilon = \nu|_{D_\epsilon}$. Define $\sigma_i^2(\epsilon) = \int_{D_\epsilon} |x_i|^2\, \nu(dx)$. Then
$$\Sigma_\epsilon = \mathrm{diag}\big(\sigma_1^2(\epsilon), \dots, \sigma_d^2(\epsilon)\big),$$
where $\mathrm{diag}(a_i)$ is the diagonal matrix with diagonal $a_1, \dots, a_d$. In view of Corollary 2.4, (2.5) holds if and only if for every $i = 1, \dots, d$ and every $\kappa > 0$
$$\frac{\sigma_i(\kappa\sigma_i(\epsilon) \wedge \epsilon)}{\sigma_i(\epsilon)} \to 1 \quad \text{as } \epsilon \to 0. \tag{4.1}$$
If (4.1) holds, then by Proposition 3.1
$$\mathrm{diag}\big(\sigma_1^{-1}(\epsilon), \dots, \sigma_d^{-1}(\epsilon)\big)\, X_\epsilon \overset{(d)}{\longrightarrow} W.$$
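As a toy numerical check (our illustration, not from the paper), a measure whose atoms all lie on the coordinate axes produces a diagonal truncated covariance $\Sigma_\epsilon$, mirroring the product structure above:

```python
import numpy as np

# Numerical check that the truncated covariance Sigma_eps is diagonal when
# nu charges only the coordinate axes, as in the independent-marginals case.
# nu here is a discrete toy measure with atoms +-r e_i of weight w.
# Illustrative example, not from the paper.

atoms = [(+0.5, 0), (-0.5, 0), (+0.2, 1), (-0.2, 1)]   # (signed radius r, axis i)
weights = [1.0, 1.0, 2.0, 2.0]
eps = 0.6

Sigma = np.zeros((2, 2))
for (r, i), w in zip(atoms, weights):
    x = np.zeros(2)
    x[i] = r
    if np.linalg.norm(x) < eps:            # restrict nu to the ball D_eps
        Sigma += w * np.outer(x, x)

# off-diagonal entries vanish because every atom sits on a coordinate axis
assert np.allclose(Sigma, np.diag(np.diag(Sigma)))
print(np.diag(Sigma))  # sigma_1^2(eps), sigma_2^2(eps)
```

The diagonal entries are exactly the one-dimensional truncated moments $\sigma_i^2(\epsilon)$, so each coordinate can be normalized separately, as in (4.1).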

4.2 Normal asymptotic for stochastic integral processes

Let $X = \{X(t) : t \in \mathbb{T}\}$ be a stochastic process represented as a stochastic integral
$$X(t) = \int_S f(t, s)\, M(ds),$$
where $f$ is a deterministic function and $M$ is an infinitely divisible random measure with no Gaussian part. For more information on such integrals, see [13]. To simulate $X$ it is useful to have the normal asymptotic of the part corresponding to the "small jumps" of $M$. Then one can write an approximation of the type (3.9) with $W$ being a Gaussian process. Such an approximation was used in [10, 2] to simulate locally self-similar processes defined by a fractional integral of a Lévy random measure.

To illustrate how this method fits into our general framework, we consider the following simple situation. Let $N$ be a Poisson random measure on $S \times \mathbb{R}$, where $S$ is a Borel set. Let $m \otimes Q$ be the intensity measure of $N$, where $\int x^2\, Q(dx) < \infty$ and $Q(\{0\}) = 0$. Consider the random measure
$$M(A) = \int_A \int_{\mathbb{R}} x\, [N(ds, dx) - m(ds)Q(dx)], \quad A \in \mathcal{B}(S),\ m(A) < \infty.$$
Clearly, the process
$$X(t) = \int_S f(t, s)\, M(ds) = \int_S \int_{\mathbb{R}} x f(t, s)\, [N(ds, dx) - m(ds)Q(dx)]$$
is well defined when $f(t, \cdot) : S \to \mathbb{R}$ is a Borel function such that
$$\int_S \int_{\mathbb{R}} x^2 |f(t, s)|^2\, m(ds)\, Q(dx) < \infty$$
for every $t \in \mathbb{T}$. Put
$$M_\epsilon(A) = \int_A \int_{|x|<\epsilon} x\, [N(ds, dx) - m(ds)Q(dx)].$$
