Periodica Mathematica Hungarica Vol. 47 (1–2), 2003, pp. 1–15

APPROXIMATIONS FOR THE MAXIMUM OF A VECTOR-VALUED STOCHASTIC PROCESS WITH DRIFT

Alexander Aue∗ (Köln) and Lajos Horváth∗∗ (Salt Lake City)

[Communicated by: István Berkes]

Abstract

Giving a generalization of Berkes and Horváth (2003), we consider the Euclidean norm of vector-valued stochastic processes which can be approximated by a vector-valued Wiener process with a linear drift. The supremum of the Euclidean norm of such a process is not far from the norm of the process at the rightmost point. We also obtain an approximation for the supremum of the weighted Euclidean norm in terms of a scalar Wiener process.

1. Introduction

The central limit theorem for the maximum of partial sums with positive mean was established by Teicher (1973), and the law of the iterated logarithm is due to Chow and Hsiung (1976). The central limit theorem for partial sums with drift is strongly related to the first passage times studied by Siegmund (1968). The central limit theorem and the law of the iterated logarithm were extended by Vervaat (1972) to functional limit theorems. Similar results were obtained by Chow, Hsiung and Yu (1980) for partial sums which can be observed only with errors. Berkes and Horváth (2003) studied approximations for the maxima of partial sums of a univariate sequence {Xi} of independent, identically distributed random variables with

(1.1)    EX1 = µ > 0  and  0 < Var X1 = σ² < ∞.

They obtained the following strong approximation:

Theorem 1.1. Let {Xi} be a sequence of independent, identically distributed random variables such that (1.1) holds. If

(1.2)    0 ≤ α < 1

and E|X1|^ν < ∞ with some ν > 2, then

    max_{1≤k≤m} T(k)/k^α − T(m)/m^α = o(m^{1/ν−α})    a.s.  (m → ∞),

where T(k) = Σ_{i=1}^{k} Xi.
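To get a feel for Theorem 1.1, the following small simulation (our own illustration, not from the paper; the Gaussian choice of Xi, the values µ = 1, α = 1/4 and the sample size are all assumptions) computes the gap max_{1≤k≤m} T(k)/k^α − T(m)/m^α for a positive-drift sequence. Since the drift makes T(k)/k^α grow like k^{1−α}, the maximum is attained essentially at k = m and the gap stays small:

```python
import math
import random

def max_drift_gap(xs, alpha):
    """max_{1<=k<=m} T(k)/k^alpha - T(m)/m^alpha for the partial sums T(k)."""
    t, best = 0.0, -math.inf
    for k, x in enumerate(xs, start=1):
        t += x
        best = max(best, t / k ** alpha)
    return best - t / len(xs) ** alpha

random.seed(0)
m, alpha = 100_000, 0.25
# iid increments with EX1 = 1 > 0 and Var X1 = 1, so (1.1) holds
xs = [1.0 + random.gauss(0.0, 1.0) for _ in range(m)]
gap = max_drift_gap(xs, alpha)
# Gaussian increments have all moments, so the o(m^{1/nu - alpha}) rate
# holds for every nu > 2, and the gap should be small here.
print(gap)
```

The gap is nonnegative by construction (the maximum includes k = m) and, on typical runs, orders of magnitude smaller than T(m)/m^α itself.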

Introducing a stochastic process Θ(t), defined on [1, ∞), which can be approximated by a Wiener process with a positive drift, Berkes and Horváth (2003) could derive Theorem 1.1 from a much more general result. They assumed that there is a Wiener process W and constants τ > 0 and γ such that

(1.3)    |Θ(t) − (τ W(t) + γt)| = o(t^{1/ν})    a.s. with some ν > 2  (t → ∞)

and

(1.4)    γ > 0,

and proved the following theorem:

Theorem 1.2. If (1.2)–(1.4) hold, then

    sup_{1≤t≤m} Θ(t)/t^α − Θ(m)/m^α = o(m^{1/ν−α})    a.s.  (m → ∞)

and

    sup_{1≤t≤m} Θ(t)/t^α − γ m^{1−α} − τ W(m)/m^α = o(m^{1/ν−α})    a.s.  (m → ∞).

In this paper we consider multivariate versions of Theorems 1.1 and 1.2.

2. Results

Let Xi ∈ ℝ^p (1 ≤ i < ∞) be a sequence of independent, identically distributed random vectors with EX1 = γ = (γ1, ..., γp)′ and Cov X1 = Σ. Next we define the partial sums

(2.1)    S(k) = Σ_{i=1}^{k} Xi.


Finally, let

    ‖z‖ = ( Σ_{1≤j≤p} zj² )^{1/2}

be the standard Euclidean norm in ℝ^p. We will prove the following analogue of Theorem 1.1 in the present paper.

Theorem 2.1. Let {Xi} be a sequence of independent, identically distributed random vectors. If (1.2) holds,

(2.2)    γj ≠ 0    for all 1 ≤ j ≤ p,

and

(2.3)    E‖X1‖^ν < ∞ with some ν > 2,

then we have

    max_{1≤k≤m} ‖S(k)‖/k^α − ‖S(m)‖/m^α = o(m^{1/ν−α})    a.s.  (m → ∞).
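A multivariate analogue of the univariate check can illustrate Theorem 2.1 (this is our own sketch, not part of the paper; the bivariate Gaussian increments and the drift γ = (1, −0.5) are assumptions chosen so that condition (2.2) holds, i.e. every coordinate drifts):

```python
import math
import random

def vector_max_gap(xs, alpha):
    """max_{1<=k<=m} ||S(k)||/k^alpha - ||S(m)||/m^alpha, S(k) = X_1 + ... + X_k."""
    s = [0.0] * len(xs[0])
    best = -math.inf
    for k, x in enumerate(xs, start=1):
        s = [si + xi for si, xi in zip(s, x)]
        best = max(best, math.hypot(*s) / k ** alpha)
    return best - math.hypot(*s) / len(xs) ** alpha

random.seed(1)
m, alpha = 50_000, 0.25
gamma = (1.0, -0.5)  # both coordinates have non-zero drift, as (2.2) requires
xs = [tuple(g + random.gauss(0.0, 1.0) for g in gamma) for _ in range(m)]
gap = vector_max_gap(xs, alpha)
print(gap)
```

Since ‖S(k)‖ grows like ‖γ‖k, the running maximum of ‖S(k)‖/k^α sits at (or very near) k = m, which is exactly the content of the theorem.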

Condition (2.2) of Theorem 2.1 requires that all coordinates of the sums have drift terms. In the next theorem we obtain a more general result, but with a somewhat weaker upper bound for the rate of convergence, when instead of (2.2) only ‖γ‖ > 0 is required.

Theorem 2.2. Let {Xi} be a sequence of independent, identically distributed random vectors. If (1.2), (2.3) hold and

(2.4)    ‖γ‖ > 0,

then

    max_{1≤k≤m} ‖S(k)‖/k^α − ‖S(m)‖/m^α = o(m^{β−α})    a.s.  (m → ∞)

with any β > max(1/4, 1/ν).

Theorems 2.1 and 2.2 will follow from a more general approach. Let Γ(t) = (Γ1(t), ..., Γp(t))′ be a p-dimensional stochastic process on [1, ∞). We assume that there exists a vector-valued Wiener process W(t) = (W1(t), ..., Wp(t))′ such that

(2.5)    Cov(W(t), W(s)) = min(t, s)Σ

and

(2.6)    ‖Γ(t) − (W(t) + tγ)‖ = o(t^{1/ν})    a.s. with some ν > 2  (t → ∞).

We note that (2.6) holds for the sums of a large class of random vectors, including independent vectors (Einmahl (1989)), mixing vectors (Kuelbs and Philipp (1980)) and martingale differences (Eberlein (1986)). The following theorem specifies the behaviour of the supremum of ‖Γ(t)‖.


Theorem 2.3. Let Γ(t) be a p-dimensional stochastic process on [1, ∞). If (1.2), (2.2) and (2.5)–(2.6) hold, then

    sup_{1≤t≤T} ‖Γ(t)‖/t^α − ‖Γ(T)‖/T^α = o(T^{1/ν−α})    a.s.  (T → ∞)

and

    sup_{1≤t≤T} ‖Γ(t)‖/t^α − ‖W(T) + Tγ‖/T^α = o(T^{1/ν−α})    a.s.  (T → ∞).

Similarly to Theorem 2.2 we have the following result:

Theorem 2.4. Let Γ(t) be a p-dimensional stochastic process on [1, ∞). If (1.2) and (2.4)–(2.6) hold, then

    sup_{1≤t≤T} ‖Γ(t)‖/t^α − ‖Γ(T)‖/T^α = o(T^{β−α})    a.s.  (T → ∞)

and

    sup_{1≤t≤T} ‖Γ(t)‖/t^α − ‖W(T) + Tγ‖/T^α = o(T^{β−α})    a.s.  (T → ∞)

with any β > max(1/4, 1/ν).

Comparing Theorems 2.3 and 2.4, we see that the rate in the general case ‖γ‖ > 0 is weaker than when (2.2) is assumed. The next result shows that the same rate is reached in both cases when α = 0.

Theorem 2.5. Let Γ(t) be a p-dimensional stochastic process on [1, ∞) such that (2.4)–(2.6) hold. Then

    sup_{1≤t≤T} ‖Γ(t)‖ − ‖Γ(T)‖ = o(T^{1/ν})    a.s.  (T → ∞)

and

    sup_{1≤t≤T} ‖Γ(t)‖ − ‖W(T) + γT‖ = o(T^{1/ν})    a.s.  (T → ∞).

We prove in the next theorem that ‖W(t) + γt‖ can be approximated by a scalar Wiener process with a positive drift.

Theorem 2.6. If (2.4) holds, then there is a scalar Wiener process W(t) such that

(2.7)    ‖W(t) + γt‖ − (τ W(t) + ‖γ‖t) = O(log log t)    a.s.  (t → ∞),

where

(2.8)    τ = (γ′Σγ)^{1/2} / ‖γ‖.
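The constant τ in (2.8), read here as τ = (γ′Σγ)^{1/2}/‖γ‖, is the standard deviation scale of the projection of W(t) onto the drift direction γ/‖γ‖: to first order, ‖W(t) + γt‖ − ‖γ‖t ≈ γ′W(t)/‖γ‖. A quick Monte Carlo sanity check of this (our own illustration; the particular γ, Σ, t and sample size are arbitrary choices):

```python
import math
import random

def tau(gamma, sigma):
    """tau = sqrt(gamma' Sigma gamma) / ||gamma||, as in (2.8)."""
    p = len(gamma)
    quad = sum(gamma[i] * sigma[i][j] * gamma[j] for i in range(p) for j in range(p))
    return math.sqrt(quad) / math.sqrt(sum(g * g for g in gamma))

random.seed(2)
gamma = (1.0, 2.0)
sigma = [[2.0, 0.5], [0.5, 1.0]]
# Cholesky factor L of the 2x2 matrix sigma, so that L z ~ N(0, sigma) for z ~ N(0, I)
l11 = math.sqrt(sigma[0][0])
l21 = sigma[1][0] / l11
l22 = math.sqrt(sigma[1][1] - l21 * l21)

t, n = 10_000.0, 4_000
vals = []
for _ in range(n):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    # W(t) ~ N(0, t * sigma)
    w = (math.sqrt(t) * l11 * z1, math.sqrt(t) * (l21 * z1 + l22 * z2))
    vals.append(math.hypot(w[0] + gamma[0] * t, w[1] + gamma[1] * t)
                - t * math.hypot(*gamma))
mean = sum(vals) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
print(sd / math.sqrt(t), tau(gamma, sigma))
```

The sample standard deviation of ‖W(t) + γt‖ − ‖γ‖t, rescaled by √t, should come out close to τ, matching the scalar approximation τ W(t) + ‖γ‖t of Theorem 2.6.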


Combining Theorems 2.4 and 2.6, we obtain the weak convergence of sup_{1≤t≤T} t^{−α}‖Γ(t)‖ and the law of the iterated logarithm immediately.

Corollary 2.1. If (1.2) and (2.4)–(2.6) hold, then

    lim sup_{T→∞} ( T^α / √(2T log log T) ) ( sup_{1≤t≤T} ‖Γ(t)‖/t^α − ‖γ‖T^{1−α} ) = τ    a.s.,

where τ is defined in (2.8).

Corollary 2.2. Let Γ(t) be a p-dimensional stochastic process on [1, ∞), W(t) be a scalar Wiener process and τ be defined by (2.8). We assume that (1.2) and (2.4)–(2.6) hold.

(i) If 0 ≤ α < 1/2, then

    T^{α−1/2} ( sup_{1≤t≤[Tu]+1} ‖Γ(t)‖/t^α − ‖γ‖([Tu] + 1)^{1−α} ) → τ W(u)/u^α    in D[0, 1].

(ii) If 1/2 < α < 1, then

    T^{α−1/2} ( sup_{1≤t≤[Tu]+1} ‖Γ(t)‖/t^α − ‖γ‖([Tu] + 1)^{1−α} ) → τ W(u)/u^α    in D[1, ∞).

(iii) If α = 1/2 and 0 < c1 < c2 < ∞, then

    T^{α−1/2} ( sup_{1≤t≤[Tu]+1} ‖Γ(t)‖/t^α − ‖γ‖([Tu] + 1)^{1−α} ) → τ W(u)/u^α    in D[c1, c2].

The proofs of Theorems 2.1–2.6 and their implications are given in the next section.

3. Proofs

Proof of Theorem 2.3. We can assume without loss of generality that γ1, ..., γ_{p(1)} > 0 and γ_{p(1)+1}, ..., γ_p < 0. Clearly,

(3.1)    ‖Γ(T)‖/T^α ≤ sup_{1≤t≤T} ‖Γ(t)‖/t^α
             ≤ { Σ_{i=1}^{p(1)} ( sup_{1≤t≤T} |Γi(t)|/t^α )² + Σ_{i=p(1)+1}^{p} ( sup_{1≤t≤T} |Γi(t)|/t^α )² }^{1/2}.

By the law of the iterated logarithm for Brownian motions we have

(3.2)    lim sup_{T→∞} ( 1/√(T log log T) ) sup_{1≤t≤T} ‖W(t)‖ = C1    a.s.


with some constant C1. Hence there is a random variable T1 such that

(3.3)    { Σ_{i=1}^{p(1)} ( sup_{1≤t≤T} |Γi(t)|/t^α )² + Σ_{i=p(1)+1}^{p} ( sup_{1≤t≤T} |Γi(t)|/t^α )² }^{1/2}
             = { Σ_{i=1}^{p(1)} ( sup_{1≤t≤T} Γi(t)/t^α )² + Σ_{i=p(1)+1}^{p} ( sup_{1≤t≤T} (−Γi(t))/t^α )² }^{1/2},

if T ≥ T1. Since −Wi(t) is also a Brownian motion, Theorem 1.2 yields

(3.4)    { Σ_{i=1}^{p(1)} ( sup_{1≤t≤T} Γi(t)/t^α )² + Σ_{i=p(1)+1}^{p} ( sup_{1≤t≤T} (−Γi(t))/t^α )² }^{1/2}
             = { Σ_{i=1}^{p(1)} ( Γi(T)/T^α + o(T^{1/ν−α}) )² + Σ_{i=p(1)+1}^{p} ( −Γi(T)/T^α + o(T^{1/ν−α}) )² }^{1/2}    a.s.

The mean value theorem and (3.2) give that

    | { Σ_{i=1}^{p(1)} ( Γi(T)/T^α + o(T^{1/ν−α}) )² + Σ_{i=p(1)+1}^{p} ( −Γi(T)/T^α + o(T^{1/ν−α}) )² }^{1/2} − { Σ_{i=1}^{p} Γi²(T)/T^{2α} }^{1/2} |

(3.5)    = o(T^{1/ν−α}) ( 1/T^{1−α} ) Σ_{i=1}^{p} |Γi(T)|/T^α = o(T^{1/ν−α})    a.s.

as T → ∞, completing the proof. □

Proof of Theorem 2.4. Without loss of generality, we can assume that γ1, ..., γ_{p(1)} > 0, γ_{p(1)+1}, ..., γ_{p(1)+p(2)} < 0 and γ_{p(1)+p(2)+1}, ..., γ_p = 0. Set q = p(1) + p(2) and let η(T) be the location of the largest value of t^{−α}‖Γ(t)‖ on the interval [1, T], i.e.

    η(T) = sup{ t : ‖Γ(t)‖/t^α = sup_{1≤s≤T} ‖Γ(s)‖/s^α }.


From (2.6) and (3.2) we obtain that for almost all ω there is a random variable T0 = T0(ω) such that

(3.6)    sup_{1≤t≤T−c√(T log log T)} ‖Γ(t)‖/t^α ≤ C1 sup_{1≤t≤T} √(t log log t)/t^α + ‖γ‖ ( T − c√(T log log T) )^{1−α},

if T ≥ T0. On choosing c1 = c/2 for a large enough c, we see that the upper bound in (3.6) is smaller than ‖γ‖T^{1−α} − c1 √(T log log T)/T^α. On the other hand,

(3.7)    sup_{1≤t≤T} ‖Γ(t)‖/t^α ≥ ‖γ‖T^{1−α} − C1 √(T log log T)/T^α,

if T ≥ T2 with some random variable T2. Now, given a large enough c, the upper bound for the supremum in (3.6) is smaller than the bound in (3.7). Hence

(3.8)    T − η(T) = O(√(T log log T))    a.s.  (T → ∞).
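The bound (3.8) says that the location η(T) of the maximum of t^{−α}‖Γ(t)‖ lags the right endpoint by at most O(√(T log log T)). A small simulation of a drifting random walk (our own sketch, not from the paper; discrete time, Gaussian increments and the drift γ = (1, 0), so that only the first coordinate drifts, are assumptions within the scope of Theorem 2.4) shows the argmax sitting very close to the endpoint:

```python
import math
import random

def argmax_location(xs, alpha):
    """Location eta(m) of the largest value of k^{-alpha} ||S(k)||, 1 <= k <= m."""
    s1 = s2 = 0.0
    best, loc = -math.inf, 1
    for k, (x1, x2) in enumerate(xs, start=1):
        s1 += x1
        s2 += x2
        v = math.hypot(s1, s2) / k ** alpha
        if v >= best:  # ties resolved toward the latest index, matching sup{t : ...}
            best, loc = v, k
    return loc

random.seed(4)
m, alpha = 50_000, 0.25
xs = [(1.0 + random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(m)]
eta = argmax_location(xs, alpha)
# compare the lag m - eta with the scale sqrt(m log log m) from (3.8)
print(m - eta, math.sqrt(m * math.log(math.log(m))))
```

On typical runs the lag m − η(m) is far below the √(m log log m) scale, consistent with (3.8).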

Similarly to (3.1), we have

    ‖Γ(T)‖/T^α ≤ sup_{1≤t≤T} ‖Γ(t)‖/t^α = ‖Γ(η(T))‖/η^α(T)
    ≤ ( 1/η^α(T) ) { Σ_{i=1}^{q} Γi²(η(T)) + Σ_{i=q+1}^{p} Γi²(η(T)) }^{1/2},

if T ≥ T3 with some random variable T3. Using (3.2) and the upper bound for the increments of a Wiener process (cf. Csörgő and Révész (1981), p. 30), we conclude that for any q + 1 ≤ i ≤ p

    |Γi²(η(T)) − Γi²(T)| = |Γi(η(T)) − Γi(T)| |Γi(η(T)) + Γi(T)|
    = O( (T log log T)^{1/4} (log T)^{1/2} √(T log log T) )    a.s.
    = O( T^{3/4} (log T)^{1/2} (log log T)^{3/4} )    a.s.


as T → ∞. Hence

    ( 1/η^α(T) ) { Σ_{i=1}^{q} Γi²(η(T)) + Σ_{i=q+1}^{p} Γi²(η(T)) }^{1/2}
    = ( 1/η^α(T) ) { Σ_{i=1}^{q} Γi²(η(T)) + Σ_{i=q+1}^{p} Γi²(T) + O( T^{3/4} (log T)^{1/2} (log log T)^{3/4} ) }^{1/2}    a.s.
    = ( 1/η^α(T) ) { Σ_{i=1}^{q} Γi²(η(T)) + Σ_{i=q+1}^{p} Γi²(T) }^{1/2} + O( T^{3/8−α} (log T)^{1/4} (log log T)^{3/8} )    a.s.

Using Theorem 1.2 again, we get that

    ( 1/η^α(T) ) { Σ_{i=1}^{q} Γi²(η(T)) + Σ_{i=q+1}^{p} Γi²(T) }^{1/2}
    = { Σ_{i=1}^{q} Γi²(η(T))/η^{2α}(T) + Σ_{i=q+1}^{p} Γi²(T)/η^{2α}(T) }^{1/2}
    ≤ { Σ_{i=1}^{q} ( Γi(T)/T^α + o(T^{1/ν−α}) )² + Σ_{i=q+1}^{p} Γi²(T)/η^{2α}(T) }^{1/2}.

Applying (3.8) and the law of the iterated logarithm for Γi(t) (q < i ≤ p), we get

    | 1/η^{2α}(T) − 1/T^{2α} | Σ_{i=q+1}^{p} Γi²(T) = O( T log log T √(T log log T)/T^{2α+1} )    a.s.
    = O( T^{1/2−2α} (log log T)^{3/4} )    a.s.

as T → ∞. Hence we consider

    { Σ_{i=1}^{q} ( Γi(T)/T^α + o(T^{1/ν−α}) )² + Σ_{i=q+1}^{p} Γi²(T)/T^{2α} + O( T^{1/2−2α} (log log T)^{3/4} ) }^{1/2}
    ≤ { Σ_{i=1}^{q} ( Γi(T)/T^α + o(T^{1/ν−α}) )² + Σ_{i=q+1}^{p} Γi²(T)/T^{2α} }^{1/2} + O( T^{1/4−α} (log log T)^{3/8} ).

Proceeding as in the proof of Theorem 2.3, we arrive at

    | { Σ_{i=1}^{q} ( Γi(T)/T^α + o(T^{1/ν−α}) )² + Σ_{i=q+1}^{p} Γi²(T)/T^{2α} }^{1/2} − { Σ_{i=1}^{p} Γi²(T)/T^{2α} }^{1/2} | = o(T^{1/ν−α})    a.s.

as T → ∞. This completes the proof. □

The proof of Theorem 2.5 is based on the following lemma.

Lemma 3.1. If (2.5) holds, then

    sup_{1≤t≤T} ‖W(t) + γt‖ − ‖W(T) + γT‖ = O(log T)    a.s.

as T → ∞.

Proof. As in the proof of Theorem 2.3, we can assume that all non-zero drift terms are positive. Let ξ1, ..., ξp be independent, identically distributed random variables, uniform on the interval (−√3, √3). Then

    Eξ = 0    and    Eξξ′ = I_{p×p},

where ξ = (ξ1, ..., ξp)′. Because Σ is a positive semi-definite matrix, Σ^{1/2} exists and we can define η = Σ^{1/2}ξ. Clearly,

    Eη = 0    and    Eηη′ = Σ.

Moreover, there is a constant a such that

    |ηi| ≤ a    (1 ≤ i ≤ p),

where η = (η1, ..., ηp)′. Let γ1, ..., γ_{p(1)} > 0 and γ_{p(1)+1}, ..., γ_p = 0. We can assume that γ1 = min{ γi : γi > 0 } and p(1) < p. Next we define

(3.9)    c = ( a²/γ1² )( p + 1 ).
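The construction of η in the proof above is easy to reproduce numerically. The sketch below (our own illustration; it uses the Cholesky factor L of Σ rather than the symmetric root Σ^{1/2}, which serves the same purpose since LL′ = Σ also gives Eηη′ = Σ, and the specific Σ is an arbitrary choice) draws bounded vectors η = Lξ with the required moments:

```python
import math
import random

sigma = [[2.0, 0.5], [0.5, 1.0]]
# Cholesky factor L with L L' = sigma
l11 = math.sqrt(sigma[0][0])
l21 = sigma[1][0] / l11
l22 = math.sqrt(sigma[1][1] - l21 * l21)

s3 = math.sqrt(3.0)  # xi_j uniform on (-sqrt(3), sqrt(3)): E xi = 0, E xi xi' = I

def draw_eta():
    x1, x2 = random.uniform(-s3, s3), random.uniform(-s3, s3)
    return (l11 * x1, l21 * x1 + l22 * x2)

random.seed(3)
n = 200_000
samples = [draw_eta() for _ in range(n)]
# empirical maximum; theory bounds |eta_i| by a = sqrt(3) * max_i sum_j |L_ij|
a = max(max(abs(e0), abs(e1)) for e0, e1 in samples)
c11 = sum(e0 * e0 for e0, _ in samples) / n
c12 = sum(e0 * e1 for e0, e1 in samples) / n
c22 = sum(e1 * e1 for _, e1 in samples) / n
print(a, c11, c12, c22)
```

The sample covariance matches Σ and every coordinate stays below a deterministic bound a, which is exactly the property that the choice of c in (3.9) relies on.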


We write

    sup_{1≤t≤T} ‖W(t) + γt‖ = sup_{1/c≤t≤T/c} ‖W(ct) + cγt‖ = √c sup_{1/c≤t≤T/c} ‖W∗(t) + √c γt‖,

where W∗(t) = c^{−1/2} W(ct) is a Wiener process. We note that

    EW∗(t) = 0    and    EW∗(t)W∗′(s) = min(t, s)Σ.

By Einmahl (1989) there are independent, identically distributed random vectors η1, η2, ..., distributed as η, such that

(3.10)    ‖W∗(t) − Σ_{i=1}^{⌊t⌋} ηi‖ = O(log t)    a.s.  (t → ∞).

Using (3.9), we get that

(3.11)    ‖ Σ_{i=1}^{k+1} (ηi + √c γ) ‖² − ‖ Σ_{i=1}^{k} (ηi + √c γ) ‖²
    = ( η_{k+1,1} + √c γ1 ) ( 2 Σ_{i=1}^{k} ( η_{i,1} + √c γ1 ) + η_{k+1,1} + √c γ1 )
      + ... + ( η_{k+1,p(1)} + √c γ_{p(1)} ) ( 2 Σ_{i=1}^{k} ( η_{i,p(1)} + √c γ_{p(1)} ) + η_{k+1,p(1)} + √c γ_{p(1)} )
      + η_{k+1,p(1)+1} ( 2 Σ_{i=1}^{k} η_{i,p(1)+1} + η_{k+1,p(1)+1} )
      + ... + η_{k+1,p} ( 2 Σ_{i=1}^{k} η_{i,p} + η_{k+1,p} )
    ≥ (2k + 1)( −a + √c γ1 )( a + √c γ1 ) − (p − 1) a² (2k + 1)
    > 0.

Putting together (3.10) and (3.11) we get

    sup_{1≤t≤T} ‖W(t) + γt‖ = √c sup_{1/c≤t≤T/c} ‖W∗(t) + √c γt‖
    = √c sup_{1/c≤t≤T/c} ‖ Σ_{i=1}^{⌊t⌋} ηi + √c γt ‖ + O(log T)    a.s.
    = √c ‖ Σ_{i=1}^{⌊T/c⌋} ηi + √c γT/c ‖ + O(log T)    a.s.
    = √c ‖ W∗(T/c) + √c γT/c ‖ + O(log T)    a.s.
    = ‖W(T) + γT‖ + O(log T)    a.s.

as T → ∞ by the definition of W∗. □

Proof of Theorem 2.5. It is an immediate consequence of condition (2.6) and Lemma 3.1. □

Proof of Theorem 2.6. Using the mean value theorem, we get that

    ‖W(t) + γt‖ − ‖γ‖t = ( 1/(2√ξ) ) ( Σ_{i=1}^{p} Wi²(t) + 2t γ′W(t) ),

where ξ satisfies

(3.12)    | ξ − ‖γ‖²t² | ≤ | ‖W(t) + γt‖² − ‖γt‖² | ≤ Σ_{i=1}^{p} Wi²(t) + 2t |γ′W(t)|.

By the law of the iterated logarithm, we have

(3.13)    lim sup_{t→∞} ( 1/(t log log t) ) Σ_{i=1}^{p} Wi²(t) < ∞    a.s.

and

(3.14)    lim sup_{t→∞} t |γ′W(t)| / ( t^{3/2} (log log t)^{1/2} ) < ∞    a.s.

Proof of Corollary 2.2. By Theorem 2.4, for any C > 0 we get that

(3.17)    T^{α−1/2} sup_{0≤u≤C} | sup_{1≤t≤[Tu]+1} ‖Γ(t)‖/t^α − ‖W([Tu] + 1) + γ([Tu] + 1)‖/([Tu] + 1)^α | = o(1)    a.s.

By Theorem 2.6, for any 0 ≤ α < 1/2 we have

    T^{α−1/2} sup_{0≤u≤1} | ‖W([Tu] + 1) + γ([Tu] + 1)‖/([Tu] + 1)^α − ‖γ‖([Tu] + 1)^{1−α} − τ W([Tu] + 1)/([Tu] + 1)^α |
    = O( T^{α−1/2} sup_{0≤u≤1} log |log([Tu] + 1)| / ([Tu] + 1)^α )    a.s.


    = O( T^{α−1/2} log log T )    a.s.
    = o(1)    a.s.

By the scale transformation of the Wiener process we have for all C > 0

(3.18)    { T^{α−1/2} W([Tu] + 1)/([Tu] + 1)^α : 0 ≤ u ≤ C } =_D { W(([Tu] + 1)/T) / ( ([Tu] + 1)/T )^α : 0 ≤ u ≤ C }.

Clearly, [Tu]/T + 1/T → u uniformly on [0, C], so the almost sure uniform continuity of t^{−α}W(t) on [0, 1] implies Corollary 2.2(i). In case of 1/2 < α < 1, Theorem 2.4 yields for all 0 < C < ∞ that

    T^{α−1/2} sup_{C≤u} | sup_{1≤t≤[Tu]+1} ‖Γ(t)‖/t^α − ‖W([Tu] + 1) + γ([Tu] + 1)‖/([Tu] + 1)^α |
