Linear interpolation of random processes and extremes of a sequence of Gaussian nonstationary processes

Vladimir Piterbarg, Oleg Seleznjev
Faculty of Mechanics and Mathematics, Moscow State University, 119899 Moscow, Russia

Abstract

We consider the limit distribution of maxima of a sequence of Gaussian nonstationary processes. As an application we investigate the linear interpolation error for a.s. continuous and mean square differentiable Gaussian processes with stationary increments. The relation between the smoothness of the incremental variance function, $d(t) = E[(X(s+t) - X(s))^2]$, and the interpolation errors in the mean square and uniform metrics is studied. The approach can also be applied to the analysis of different methods of interpolation. Finally, we discuss an application of the approximation of Gaussian processes to the calculation of the distribution functions of their maxima with given precision.

Keywords: Maximum of nonstationary Gaussian process; Point process; Linear interpolation; Simulation

1 Introduction

Let $X_n(t)$ be a sequence of Gaussian zero mean processes on $[0,T]$, $T = T(n)$, $n = 1,2,\dots$, with continuous or differentiable sample paths. The aim of this paper is to study the limit tail distribution $P\{M_n(T) > u\}$ of the maximum $M_n(T) = \max_{[0,T]}|X_n(t)|$ for a class of nonstationary random processes $X_n(t)$, when $T = T(n) \to \infty$ and $u = u(n) \to \infty$ as $n \to \infty$, and to apply these results to some random approximation problems. The class of Gaussian processes is defined by the condition that the smoothness of the variance function near its maximum points coincides with that of the correlation function near these points. This problem is important in particular for an investigation of the approximation accuracy for Gaussian processes with differentiable sample paths.†

† Supported in part by ONR Grant N00014-93-1-0043. E-mail: [email protected]; fax: (095) 939-20-90.

A similar problem and an approach for the linear interpolation of continuous Gaussian processes are considered in Seleznjev (1993). The tail probability $P\{\max_{[0,T]}|X(t)| > u\}$ for a Gaussian process $X(t)$ was investigated by Cramér and Leadbetter (1967) and Pickands (1969) (see Leadbetter, Lindgren and Rootzén (1983), Piterbarg (1988) and Adler (1990) for further references). Rudzkis (1985) studied the tail distribution of the maximum for some classes of nonstationary processes. The distribution of extreme values for one class of nonstationary differentiable processes is considered in Konstant and Piterbarg (1993). But for some problems (e.g. the approximation of sample paths) it is important to investigate the distribution of extreme values for a sequence of random processes. Limit theorems for a sequence of stationary Gaussian processes and the approximation of a Gaussian stationary process by trigonometric polynomials were considered in Seleznjev (1991).

In this paper we investigate an important class of sequences of Gaussian nonstationary processes with continuous and differentiable sample paths. These results allow us to study different methods of interpolation (e.g. by splines or polynomials) with given precision. In that case, even for the linear interpolation of a stationary random process, we have to investigate a sequence of nonstationary deviation processes as the number of interpolation knots tends to infinity, and their variance functions have maximum points between the knots. In order to demonstrate this general approach, the linear piecewise interpolation of a mean square differentiable Gaussian process with stationary increments is considered in the mean square and uniform metrics. It is well known that this widely used method of interpolation provides the best rate of approximation in the uniform metric for some classes of continuous and continuously differentiable nonrandom (deterministic) functions (see e.g. Powell (1981)). The optimal properties of linear interpolation for sample paths of random processes are investigated in Su and Cambanis (1993). Some methods of approximation and their applications to the simulation of random processes are considered also in Eplett (1986) and Hasofer (1987, 1989). Belyaev and Simonyan (1979) investigated the approximation of a Gaussian stationary process by regression interpolation broken lines.

2 Results

For each $n = 1,2,\dots$, denote by $\sigma_n(t)^2$ and $r_n(t,s)$ the variance and correlation functions, respectively, of the Gaussian process $X_n(t)$, and set $\bar\sigma_n = \max_{[0,T]}\sigma_n(t)$, $t,s \in [0,T]$. The following conditions are connected with the interpolation of sample paths of Gaussian processes (cf. conditions (A1)-(A5) in Seleznjev (1993)), especially in the differentiable case. In that case the variance of the normalized deviation process has a sequence of separated local maximum points, conditions (A1)-(A4) and (B1)-(B4) below (cf. Piterbarg (1988)), and the smoothness of the variance and correlation functions near the variance maximum points is the same. Let $\sigma(t)$ be a continuous function, $0 \le \sigma(t) \le 1$, $t \in [0,\infty)$. Introduce the following conditions on $\sigma(t)$.

(A1) Assume that $\{t : \sigma(t) = 1\} = \{t_k;\ t_{k+1} - t_k > h_0 > 0,\ k = 1,2,\dots\}$.

(A2) Denote by $m_h$ the number of points $t_k$ in the interval $[0,h]$, i.e. $m_h = \#\{t_k : t_k \in [0,h]\}$. Assume that there exists a constant $K > 0$ such that $T \le K m_T$.

(A3) Let $J_k^\delta$ denote the interval $\{t : |t - t_k| < \delta\}$. Assume that $\sigma(t)$ can be expanded near any $t_k$ in the form
$$\sigma(t) = 1 - (a + \rho_k(t))|t - t_k|^\beta, \qquad a, \beta > 0,$$ (1)
where, for the sequence of functions $\rho_k(t)$, $k = 1,2,\dots$, and for any $\epsilon > 0$, there exists $\delta_0 > 0$ such that $\max\{|\rho_k(t)| : t \in J_k^{\delta_0},\ k = 1,2,\dots\} < \epsilon$.

(A4) There exists $h_1 > 0$ such that $t_k \in ((k-1)h_1, kh_1)$, $k = 1,2,\dots$, and $T = m_T h_1$.
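As a concrete template (our own illustration, not from the original text), the periodic function

```latex
\sigma(t) = \bar\sigma^{-1}\,S(\{t\})^{1/2}, \qquad
S(t) = t(1-t)\bigl(1 - t^{1+\gamma} - (1-t)^{1+\gamma}\bigr), \qquad \{t\} = t - \lfloor t\rfloor,
```

satisfies (A1)-(A4) with $t_k = k - \tfrac12$, $h_0 = h_1 = 1$ and $\beta = 2$; up to normalization this is exactly the variance shape of the interpolation deviation process studied below (cf. Lemma 1).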

We introduce also the following conditions on the variance, covariance and correlation functions of $X_n(t)$ and on a sequence of levels $u = u(n)$.

(B1) There exists $\delta_0 > 0$ such that
$$\sigma_n(t) = \sigma(t)(1 + \delta_n(t)), \qquad \bar\sigma_n > 0,$$ (2)
where $\sup_{[0,T]}|\delta_n(t)| \to 0$ as $n \to \infty$ and $\bar\delta_n u^2/\bar\sigma_n^2 = o(1)$ as $n \to \infty$, with $\bar\delta_n = \sup\{|\delta_n(t)| : t \in J_k^{\delta_0},\ k = 1,\dots,m_T\}$.

(B2) For any $\epsilon > 0$ there exists $\delta_0 > 0$ such that
$$r_n(t,s) = 1 - (b_n + \rho_n(t,s))|t - s|^\alpha, \qquad 0 < \alpha \le 2,$$ (3)
where $b_n \to b > 0$ as $n \to \infty$, the function $\rho_n(t,s)$ is continuous at the points $(t_k,t_k)$, $k = 1,2,\dots,m_T$, and $\sup\{|\rho_n(t,s)| : t,s \in J_k^{\delta_0},\ k = 1,\dots,m_T,\ n = 1,2,\dots\} < \epsilon$.

(B3) There exist $\gamma_1 > 0$ and $C > 0$ such that for any $t,s \in J_k^{\delta_0}$, $k = 1,\dots,m_T$, $n = 1,2,\dots$,
$$E[(X_n(t) - X_n(s))^2] \le C|t - s|^{\gamma_1}.$$ (4)

(B4) For any $v > 0$,
$$\theta(v) = \sup\{|r_n(t,s)| : v \le |t - s| \le T,\ n = 1,2,\dots\} < 1.$$ (5)

(B5) $\theta(v)(\log v)^{1+2/\alpha} \to 0$ as $v \to \infty$.

The case $0 < \alpha < \min(2,\beta)$ has been considered in Seleznjev (1993) in connection with some interpolation problems for Gaussian processes with continuous sample paths. Here we investigate the case $\alpha = \beta$. This case, and even the more special one $\alpha = \beta = 2$, arises for example in interpolation problems for differentiable Gaussian processes, which will be investigated below. Let $\eta(t)$ be a Gaussian process with continuous sample paths (Lévy-Schoenberg) and $E[\eta(t)] = -|t|^\alpha$, $\mathrm{cov}(\eta(t),\eta(s)) = |t|^\alpha + |s|^\alpha - |t-s|^\alpha$, $0 < \alpha \le 2$. We write
$$H^d(s,t) = E\bigl[\exp\bigl(\max_{[s,t]}(\eta(v) - d|v|^\alpha)\bigr)\bigr], \qquad H^d = \lim_{t\to\infty}H^d(-t,t),$$
and, for $\alpha = 2$,
$$H_2^{a/b} = \Bigl(1 + \frac{b}{a}\Bigr)^{1/2}$$
(see e.g. Konstant and Piterbarg (1993)). Denote $\mu(u) = 2H^{a/b}\varphi(u)/u$, where $\varphi(u)$ is the standard Gaussian density.
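For $\alpha = 2$ the closed form of $H_2^{a/b}$ can be verified by a short computation (ours, included for the reader's convenience); it uses only the representation of $\eta$ above and the identity $E[e^{\lambda\zeta^2}] = (1-2\lambda)^{-1/2}$, $\lambda < 1/2$, for a standard Gaussian $\zeta$:

```latex
\alpha = 2:\quad \operatorname{cov}(\eta(t),\eta(s)) = 2ts
  \;\Rightarrow\; \eta(v) = \sqrt{2}\,\zeta v - v^2,
\max_{v}\bigl(\eta(v) - d\,v^2\bigr)
  = \max_{v}\bigl(\sqrt{2}\,\zeta v - (1+d)v^2\bigr)
  = \frac{\zeta^2}{2(1+d)}, \qquad d = a/b,
H_2^{a/b} = E\exp\Bigl(\frac{\zeta^2}{2(1+d)}\Bigr)
  = \Bigl(\frac{1+d}{d}\Bigr)^{1/2}
  = \Bigl(1 + \frac{b}{a}\Bigr)^{1/2}.
```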

Theorem 1. Let $T = T(n)$, $u = u(n)$, $\min(T, u/\bar\sigma_n) \to \infty$ as $n \to \infty$. If (B1)-(B4) with (A1)-(A3) hold, then
(i) for any fixed $h > 0$,
$$P\{M_n(h) > u\} \sim m_h\,\mu(u/\bar\sigma_n) \quad\text{as } n \to \infty;$$
(ii) if additionally (B5) holds and $m_T\,\mu(u/\bar\sigma_n) \to \tau \ge 0$ as $n \to \infty$, then
$$P\{M_n(T) > u\} \to 1 - e^{-\tau} \quad\text{as } n \to \infty.$$

Denote also $M_n(A) = \max_A|X_n(t)|$ for any set $A \subset [0,T]$. Define a point process $N_n(B)$ for any Borel set $B$ by $N_n(B) := \#\{k : t_k \in B \text{ and } M_n(J_k) > u\}$, where $J_k = ((k-1)h_1, kh_1]$, and the normalized point process $N_T^*(B) = N_n(T \cdot B)$. Let the level $u = u(n)$ be chosen such that $\min(T, u/\bar\sigma_n) \to \infty$ and $m_T\,\mu(u/\bar\sigma_n) \to \tau$ as $n \to \infty$, $0 < \tau < \infty$. Let $N$ be a Poisson process with intensity $\tau$. In the next theorem the asymptotic properties of the point process $N_n(B)$ for the sequence of random processes $X_n(t)$ are considered.

Theorem 2. If (B1)-(B5) with (A1)-(A4) hold, then $N_T^*$ converges in distribution to $N$ as $n \to \infty$.

We apply Theorems 1 and 2 to the investigation of a sequence of deviation random processes as the number of interpolation knots tends to infinity. Let $X(t)$, $t \in [0,1]$, be a Gaussian zero mean process with stationary increments and continuously differentiable sample paths. Let $L_n(t)$ be the interpolation broken line for the given values $X(hk)$, $k = 0,\dots,n$, $h = 1/n$,
$$L_n(t) = X(hk)(1 - \tau) + X(h(k+1))\tau, \qquad t \in [hk, h(k+1)],\ t = h(k + \tau),\ k = 0,\dots,n-1.$$
Denote by $\Delta_n(t) = X(t) - L_n(t)$ the deviation random process; a schematic implementation is sketched below.
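The following minimal Python sketch (our own illustration, not part of the original paper) builds the broken line $L_n$ and estimates the maximal mean square deviation by simulation. The Matérn-3/2 covariance $R(t) = (1+|t|)e^{-|t|}$ is our choice of a process satisfying condition (C1) below with $\gamma = 1$, since then $d(t) = 2(1 - R(t)) = t^2 - \tfrac23|t|^3 + O(t^4)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def matern32_cov(t, s):
    # R(t - s) = (1 + |t - s|) exp(-|t - s|): once mean-square differentiable;
    # d(t) = 2(1 - R(t)) = t^2 - (2/3)|t|^3 + O(t^4), so (C1) holds with gamma = 1
    r = np.abs(np.subtract.outer(t, s))
    return (1.0 + r) * np.exp(-r)

n, sub, reps = 16, 50, 4000             # knots, fine points per cell, replicates
m = sub * n + 1
grid = np.linspace(0.0, 1.0, m)
C = np.linalg.cholesky(matern32_cov(grid, grid) + 1e-10 * np.eye(m))
X = C @ rng.standard_normal((m, reps))  # columns are sample paths of X

Xk = X[::sub]                           # knot values X(k/n), k = 0..n
k = np.minimum((grid * n).astype(int), n - 1)
tau = grid * n - k                      # local coordinate in [0, 1]
Ln = Xk[k] * (1.0 - tau)[:, None] + Xk[k + 1] * tau[:, None]

mse = np.max(np.mean((X - Ln) ** 2, axis=1))  # empirical max_t E[Delta_n(t)^2]
print(mse, n ** (-3) / 12.0)            # c * sigma^2 * n^{-(2+gamma)} = n^{-3}/12
```

Under the constants reconstructed in Lemma 1 below, $\bar\sigma_n^2 \approx c\,\bar\sigma^2 n^{-(2+\gamma)}$ with $c = 2/3$ and $\bar\sigma^2 = 1/8$ here, so the two printed numbers should approximately agree, up to sampling error and higher-order terms.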

Now Theorems 1 and 2 provide a general method for investigating the deviation in the mean square metric, $\bar\sigma_n^2 = \max_{[0,1]}E[\Delta_n(t)^2]$, as well as in the uniform metric, $P\{\max_{[0,1]}|\Delta_n(t)| \le \epsilon\}$, as $n \to \infty$ and $\epsilon \to 0$ accordingly. We introduce the following conditions on the incremental variance $d(t)$, $d(t-s) = E[(X(t) - X(s))^2]$:

(C1) there exist $c > 0$ and $0 < \gamma < 2$ such that
$$d''(t) = d''(0) - |t|^\gamma\bigl(c(2+\gamma)(1+\gamma) + f(t)\bigr), \qquad f(t)\log|t| = o(1) \text{ as } t \to 0;$$

(C2) for any $t \in [-1,1]\setminus\{0\}$ the fourth derivative $d^{(4)}(t)$ exists and
$$d^{(4)}(t)\,|t|^{2-\gamma} = O(1) \quad\text{as } t \to 0.$$

In order to motivate our method of approximation of $X(t)$, we note that it is usually desirable that the rate of approximation of $X(t)$ by an approximating process $L_n(t)$ be optimal in some sense and that the construction of $L_n(t)$ not be too complicated. Linear interpolation is one of the simplest methods of interpolation, and at the same time its rate of approximation in mean square is optimal for the class of continuous and continuously differentiable random processes satisfying (C1) (see Seleznjev (1989a)). Henceforth we study the approximation by broken lines, although the scheme of investigation is common to other local interpolation methods (e.g. Hermite splines, Lagrange polynomials). We apply Theorems 1 and 2 to the normalized deviation process $X_n(s) = \Delta_n(hs)/\bar\sigma_n$, $s \in [0,T]$, where now $T = m_T = n$. Let the level $u = \epsilon/\bar\sigma_n \to \infty$ as $n \to \infty$, and denote by $r_n(t,s)$ and $\sigma_n(t)$ the correlation and variance functions of the process $X_n(t)$, respectively. Then

$$P\Bigl\{\max_{[0,1]}|\Delta_n(t)| \le \epsilon\Bigr\} = P\Bigl\{\max_{[0,T]}|X_n(s)| \le u\Bigr\}.$$ (6)

The condition of stationarity of increments implies that $r_n(t+k, s+k) = r_n(t,s)$ and $\sigma_n(t+k) = \sigma_n(t)$, $k = 0,1,\dots,n-1$, $t,s \in [0,1]$ (the process $X_n(t)$ is cyclostationary, Konstant and Piterbarg (1993)). So it is sufficient to verify the local conditions (B1)-(B3) only for $t,s$ in the interval $[0,1]$. We investigate the conditions of Theorems 1 and 2 in Lemmas 1 and 2. Denote $S(t) = \sigma(t)^2 = t(1-t)(1 - t^{1+\gamma} - (1-t)^{1+\gamma})$, and note that $S(t) = S(1-t)$, $t \in [0,1]$. Then $\bar\sigma^2 = \max_{[0,1]}S(t) = S(\tfrac12) = 2^{-2}(1 - 2^{-\gamma})$, and $S(t)$ has the single maximum point $t_0 = 2^{-1}$. The function $S(t)$ can be represented around the point $t_0$ in the following form:
$$S(t) = \bar\sigma^2 - (\tilde c + \rho(t))|t - t_0|^2,$$
where $\rho(t) \to 0$ as $t \to t_0$ and $\tilde c = 1 + (\gamma(\gamma+1) - 2)2^{-(\gamma+1)}$.

Lemma 1. If (C1) and (C2) hold, then
(i) the variance $\sigma_n(t)^2$ satisfies condition (B1) of Theorem 1 and $\bar\sigma_n \sim (c\bar\sigma^2)^{1/2}\,n^{-(1+\gamma/2)}$ as $n \to \infty$, with parameters $\beta = 2$, $a = \tfrac12\bar\sigma^{-2}\tilde c$, and
$$\sigma_n(t)^2 = \bar\sigma^{-2}S(t)(1 + \delta_n(t));$$
(ii) the covariance function $r_n(t,s)$ satisfies condition (B2) and
$$r_n(t,s) = 1 - (b_n + \rho_n(t,s))|t - s|^2,$$
where $b_n \to b = b(\gamma)$ as $n \to \infty$,
$$b(\gamma) = \bar\sigma^{-2}\Bigl(\frac{\gamma+2}{2^{\gamma+1}} - \frac12\Bigr) > 0, \qquad 0 < \gamma < 2;$$
(iii) there exists $C > 0$ such that for any $t,s \in [k, k+1]$, $k = 0,\dots,n-1$, $n = 1,2,\dots$,
$$E[(X_n(t) - X_n(s))^2] \le C|t - s|.$$

Thus the deviation in the mean square metric has the rate of approximation $n^{-(1+\gamma/2)}$. Clearly, for the convergence in mean square it is not necessary that the initial random process $X(t)$ be Gaussian. The rate of approximation is also the same as for the approximation of differentiable periodic Gaussian stationary processes by trigonometric polynomials, see Seleznjev (1991).
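As a worked instance of (i) (our own check, under the constants reconstructed above), take the Matérn-3/2 type incremental variance used in the simulation sketch following the definition of $L_n$:

```latex
d(t) = t^2 - \tfrac23|t|^3 + O(t^4)
  \;\Rightarrow\; \gamma = 1,\ c = \tfrac23, \qquad
\bar\sigma^2 = 2^{-2}(1 - 2^{-1}) = \tfrac18, \qquad
\bar\sigma_n^2 \approx c\,\bar\sigma^2\,n^{-(2+\gamma)} = \tfrac{1}{12}\,n^{-3},
\qquad b(1) = 8\bigl(\tfrac34 - \tfrac12\bigr) = 2 .
```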

Lemma 2. Let (C1) and (C2) hold. Then there exists $K > 0$ such that the correlation function $r_n(t,s)$ satisfies (B4) and (B5), and for all $n$
$$|r_n(t,s)| \le K|t - s|^{\gamma - 2}, \qquad s,t \in [0,T],\ t \ne s.$$

Thus by Lemmas 1 and 2 the sequence of Gaussian processes $X_n(t)$, $t \in [0,T]$, $n = 1,2,\dots$, satisfies the conditions of Theorems 1 and 2. Let the level $\epsilon = \epsilon(n)$ be chosen such that $u = \epsilon/\bar\sigma_n \to \infty$ and $n\mu(u) \to \tau$ as $n \to \infty$, $0 \le \tau \le \infty$. We can enumerate the absolute maximum points of the variance, $t_k = k - \tfrac12$, and the corresponding intervals $J_k = [k-1, k)$, $t_k \in J_k$, $k = 1,2,\dots$, where $[0,\infty) = \bigcup_{k=1}^\infty J_k$. As in Theorem 2, for any Borel set $B$ denote by $N_n(B)$ the number of points $t_k \in B$ for which the deviation $\max_{t \in J_k}|\Delta_n(th)|$ is greater than $\epsilon$. Let $N_n^*$ be the point process defined by $N_n^*(B) = N_n(nB)$. As a corollary of Theorems 1 and 2 and (6) we immediately obtain the following theorem.

Theorem 3. Assume that the incremental variance $d(t)$ satisfies (C1) and (C2), and that $n\mu(u) \to \tau \ge 0$ and $u = \epsilon/\bar\sigma_n \to \infty$ as $n \to \infty$.
(i) If $0 \le \tau \le \infty$, then $P\{\max_{[0,1]}|\Delta_n(t)| \le \epsilon\} \to e^{-\tau}$ as $n \to \infty$.
(ii) If $0 < \tau < \infty$, then $N_n^*$ converges in distribution to $N$ as $n \to \infty$.
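In practice Theorem 3(i) gives the approximation $P\{\max_{[0,1]}|\Delta_n(t)| \le \epsilon\} \approx \exp(-n\mu(\epsilon/\bar\sigma_n))$. A small helper (ours; the Matérn-3/2 example with $\gamma = 1$, $c = 2/3$ and the constants reconstructed above are assumptions, not part of the original text):

```python
import numpy as np

# P{max |Delta_n| <= eps} ~ exp(-n mu(u)), u = eps / sigma_n, by Theorem 3(i);
# gamma = 1 (Matern-3/2 example), constants as reconstructed in Section 2.
gamma, c = 1.0, 2.0 / 3.0
sigma2 = 0.25 * (1.0 - 2.0 ** (-gamma))                  # \bar\sigma^2 = S(1/2)
a = 0.5 * (1.0 + (gamma * (gamma + 1) - 2.0) * 2.0 ** (-(gamma + 1))) / sigma2
b = ((gamma + 2.0) * 2.0 ** (-(gamma + 1)) - 0.5) / sigma2
H = np.sqrt(1.0 + b / a)                                 # H_2^{a/b}

def p_uniform_dev(n, eps):
    """Approximate P{max_[0,1] |Delta_n(t)| <= eps} via Theorem 3(i)."""
    sigma_n = np.sqrt(c * sigma2) * n ** (-(1.0 + gamma / 2.0))
    u = eps / sigma_n
    mu = 2.0 * H * np.exp(-0.5 * u * u) / (np.sqrt(2.0 * np.pi) * u)
    return np.exp(-n * mu)

print(p_uniform_dev(100, 5e-3))
```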

Denote
$$a_n = (2\log n)^{1/2}, \qquad b_n = a_n + a_n^{-1}\bigl[-\tfrac12\log(\log n) + \log(H_2^{a/b}\pi^{-1/2})\bigr].$$
Since the level $u = u(n) = x/a_n + b_n$ satisfies the condition $n\mu(u) \to \tau = e^{-x}$ as $n \to \infty$, we have the following corollary (cf. Theorem 12.3.5 in Leadbetter, Lindgren and Rootzén (1983)).

Corollary 1. Let (C1) and (C2) hold. Then
$$P\Bigl\{a_n\Bigl(\max_{[0,1]}|\Delta_n(t)|/\bar\sigma_n - b_n\Bigr) \le x\Bigr\} \to \exp\{-e^{-x}\} \quad\text{as } n \to \infty.$$

Example 1. Let
$$X(t) = \int_0^t (\xi_0(v+1) - \xi_0(v))\,dv, \qquad t \in [0,1],$$
where $\xi_0(t)$ is a fractional Brownian motion (Lévy-Schoenberg) with zero mean and covariance function $E[\xi_0(t)\xi_0(s)] = |t|^\gamma + |s|^\gamma - |t-s|^\gamma$, $0 < \gamma < 2$. Then for the incremental variance we have
$$d(t-s) = (t-s)^2\int_0^1\!\!\int_0^1\bigl(|(v_2-v_1)(t-s) - 1|^\gamma + |(v_2-v_1)(t-s) + 1|^\gamma - 2|t-s|^\gamma|v_2-v_1|^\gamma\bigr)\,dv_1\,dv_2,$$
and the conditions (C1), (C2) hold with $c = 4(2+\gamma)^{-1}(1+\gamma)^{-1}$ and $d''(t) = 2(|t-1|^\gamma + |t+1|^\gamma - 2|t|^\gamma)$.

One of the applications of the previous approximation results is the estimation of the distribution function of the maximum of a Gaussian process, $F(a) = P\{\max_{[0,1]}X(t) \le a\}$, for a fixed level $a$. For any approximation process $L_n(t)$ we denote $F_n(a) = P\{\max_{[0,1]}L_n(t) \le a\}$ and $G_n(\epsilon) = P\{\max_{[0,1]}|\Delta_n(t)| > \epsilon\}$. Then it follows immediately (cf. Eplett (1986), Hasofer (1987)) that

$$F_n(a - \epsilon) - G_n(\epsilon) \le F(a) \le F_n(a + \epsilon) + G_n(\epsilon).$$
If we use for the approximation the broken line $L_n(t)$, then $\max_{[0,1]}L_n(t) = \max_k X(kh)$, and
$$F_n(a) \ge F(a) \ge F_n(a - \epsilon) - G_n(\epsilon) \ge F_n(a) - G(a,\epsilon) - G_n(\epsilon),$$ (7)
where $G(a,\epsilon) = \epsilon\,\varphi(a-\epsilon)/(1 - \Phi(a))$ and $\Phi(a)$ denotes the standard normal distribution function. The estimate in the right-hand inequality follows from Ylvisaker (1968). As a consequence of (7) we obtain an estimate of the rate of convergence of $F_n(a)$ to $F(a)$. In fact, if $\epsilon = C_0(2\log n)^{1/2}n^{-(1+\gamma/2)}$ with a sufficiently large constant $C_0$, then
$$|F(a) - F_n(a)| \le K(\log n)^{1/2}n^{-(1+\gamma/2)},$$
where $K = K(a) > 0$. The distribution function $F_n(a)$ is determined by only the finite number of random variables $X(kh)$, $k = 0,\dots,n$, and can be estimated by simulation with a controlled variance. The estimate for the distribution function $G_n(\epsilon)$ of the uniform deviation follows from Theorem 3 or Corollary 1. So we have upper and lower bounds for the distribution function of the maximum of the continuous Gaussian process $X(t)$. This approach was used by Hasofer (1987) for the estimation of the distribution function $F(a)$ of a stationary differentiable Gaussian process approximated by trigonometric polynomials.
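A Monte Carlo sketch of this bound (ours; the covariance is again the illustrative Matérn-3/2 choice, and `eps` follows the rate above with an arbitrary constant):

```python
import numpy as np

rng = np.random.default_rng(1)

def matern32_cov(t, s):
    r = np.abs(np.subtract.outer(t, s))
    return (1.0 + r) * np.exp(-r)

def fn_hat(a, n, reps=200_000):
    """Monte Carlo estimate of F_n(a) = P{max_k X(k/n) <= a}; for the broken
    line, max L_n = max_k X(k/n), so only the n+1 knot values are needed."""
    knots = np.linspace(0.0, 1.0, n + 1)
    C = np.linalg.cholesky(matern32_cov(knots, knots) + 1e-12 * np.eye(n + 1))
    X = C @ rng.standard_normal((n + 1, reps))
    return float(np.mean(X.max(axis=0) <= a))

n, gamma, a = 64, 1.0, 2.0
eps = 2.0 * np.sqrt(2.0 * np.log(n)) * n ** (-(1.0 + gamma / 2.0))
# by (7): fn_hat(a, n) is an upper bound for F(a); subtracting the two small
# corrections G(a, eps) and G_n(eps) gives the corresponding lower bound
print(fn_hat(a, n), eps)
```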

3 Proofs

3.1 Limit theorems for a sequence of Gaussian processes

Let $X(t)$, $t \in [0,T]$, be an a.s. continuous Gaussian zero mean process with variance, covariance and correlation functions $\sigma(t)^2$, $R(t,s)$ and $r(t,s)$, respectively. We recall that $M(h) = \max_{[0,h]}|X(t)|$ and define $\Psi(u) = \varphi(u)/u$. Write the following conditions for the process $X(t)$ (cf. Piterbarg and Prisyazhnyuk (1979)):

(D1) there exists a single point $t_{\max} \in (0,h)$ such that $\sup_{[0,h]}\sigma(t) = \sigma(t_{\max}) = 1$ and
$$\sigma(t) = 1 - a|t - t_{\max}|^\beta + o(|t - t_{\max}|^\beta) \quad\text{as } t \to t_{\max}, \qquad a > 0,\ \beta > 0;$$

(D2) there exists the asymptotic expansion of the correlation function
$$r(t,s) = 1 - (b + \rho(t,s))|t - s|^\alpha, \qquad b > 0,\ \alpha > 0,$$
where $\rho(t,s) \to 0$ as $t \to t_{\max}$, $s \to t_{\max}$;

(D3) $r(t,s) < 1$ for all $t \ne s$;

(D4) there exist $\gamma_1 > 0$ and $C > 0$ such that for any $t,s \in [0,h]$,
$$E[(X(t) - X(s))^2] \le C|t - s|^{\gamma_1}.$$

The next proposition will be used to prove Theorem 1.

Proposition 1. (i) Let (D4) hold with $h = T$, and assume $\sigma(t) \le 1$. Let $T = T(u)$ be a monotone function, $T \ge T_0u^{-2/\gamma_1}$, where $T_0 > 0$. Then for any $K > K_0 = 2\exp(5.2/\gamma_1)(2\pi)^{1/\gamma_1}$ and $u > u_0 = \max(1.2\gamma_1^{-2}, 2)$,
$$P\{M(T) > u\} \le K C^{1/\gamma_1}\,T u^{2/\gamma_1}\,\Psi(u).$$
(ii) Let (D1)-(D4) hold and $\alpha = \beta$. Then for any fixed $h > 0$,
$$P\{M(h) > u\} \sim \mu(u) \quad\text{as } u \to \infty,$$
where $\mu(u)$ is defined as in Theorem 1.

Proof of Proposition 1. The assertions are direct consequences of Lemma 6.4 and Theorem 8.1 of Piterbarg (1988) (see also Piterbarg and Prisyazhnyuk (1979), Berman (1985), Talagrand (1988), Dobrić et al. (1988), Berman and Kono (1989), Weber (1989), Adler (1990)). The estimates of the constants $K_0$ and $u_0$ follow from Seleznjev (1989b).

Recall that we consider $u = u(n)$ and $T = T(n)$, with $u \to \infty$ as $n \to \infty$. The main idea of the proof of Theorem 1, as in the case of a sequence of stationary processes in Seleznjev (1991), is to show that, under the appropriate assumptions, for the sequence of Gaussian processes $X_n(t)$, $n = 1,2,\dots$, and for the auxiliary Gaussian process $Y(t)$ introduced below, the following relations are valid:
$$P\{M_n(h) > u\} \sim P\{\widetilde M(h) > u\}, \qquad P\{M_n(T) > u\} \sim P\{\widetilde M(T) > u\},$$
for a fixed $h$ and for $T \to \infty$ as $n \to \infty$, where $\widetilde M(h) = \max_{[0,h]}|Y(t)|$.

for a xed h and T ! 1 as n ! 1, where M (h) = max[0;h] jY (t)j. Proof of Theorem 1. Let q = q (u) = cu?2= as n ! 1; c > 0. Denote by

Pq fM (h) > ug = P fmax fjY (jq)j; jq 2 [0; h]g > ug; j

Pq fMn (h) > ug = P fmax fjXn(jq)j; jq 2 [0; h]g > ug: j

(8)

Consider the main steps of the proof. 1. For every interval Jj , the assumption (A1), we select the subinterval Jj ; tj 2 Jj  Jj small enough to use Slepian's inequality and we show, that

jY (t)j > ug as n ! 1; jXn(t)j > ug  P fmax P fmax jXn(t)j > ug  P fmax   J j

Jj

Jj

where Y (t) = X (t)s0(t); t 2 [0; 1), X (t) is a Gaussian stationary process with zero mean, unit variance and the correlation function r(t) = expf?jtj g, for any k,

s0 (t) = 1 + ajt1? t j > 0; t 2 Jk ; s0(t)  0 otherwise: k

(9)

2. In the proof we use the method based on Berman's inequality (see Leadbetter, Lindgren and Rootzén (1983), p. 168) for Gaussian processes with discrete time. So we show that the probability $P\{\max_A|Y(t)| > u\}$ is well approximated by the similar probability for the set $A \cap R_q$, where
$$R_q = \{0, q, 2q, \dots\}.$$ (10)
In what follows, probabilities of events like (8) related to the grid $R_q$ are denoted by $P_q$.

3. We apply the standard mixing arguments based on Berman's inequality to approximate the tail distributions $P_q\{M_n(T) > u\}$ by $P_q\{\widetilde M(T) > u\}$, $T = T(n) \to \infty$ as $n \to \infty$.

We come now to the detailed presentation of the proof. Consider first the case $\sigma_n \equiv 1$ and $0 < \tau < \infty$. For any $k = 1,2,\dots$, write $J_k^\delta = \{s : |s - t_k| \le \delta\}$ and for any $h > 0$ denote $T_\delta^h = \bigcup_{k=1}^{m_h}J_k^\delta$. Then (B1) implies that

$$\sigma(t)(1 - \bar\delta_n) < \sigma_n(t) < \sigma(t)(1 + \bar\delta_n)$$ (11)
for any $n = 1,2,\dots$ and $t \in [0,T]$. Furthermore, for any $\delta > 0$ there exist $\epsilon > 0$ and $n_0 > 0$ such that for any $t \in [0,T]\setminus T_\delta$ and $n > n_0$ the variance $\sigma_n(t) < 1 - \epsilon$. By (A3) and Proposition 1(i) we find
$$P\Bigl\{\max_{[0,T]\setminus T_\delta}|X_n(t)| > u\Bigr\} \le CTu^{2/\gamma_1}\Psi(u/(1-\epsilon)) = o(T\mu(u)) = o(1),$$
$$P\Bigl\{\max_{[0,T]}|X_n(t)| > u\Bigr\} = P\Bigl\{\max_{T_\delta}|X_n(t)| > u\Bigr\} + o(1) \quad\text{as } n \to \infty.$$ (12)
The assumption (B2) implies that for any $\kappa > 1$ and $\vartheta$, $1 > \vartheta > 0$, there exists $\delta_0 = \delta_0(\kappa,\vartheta)$ such that for any $\delta_0 > \delta > 0$ and $|t - t_k| < \delta$, $|s - t_k| < \delta$,
$$r(\kappa(t-s)) \le r_n(t,s) \le r(\vartheta(t-s)).$$ (13)
Then, using Slepian's inequality (see e.g. Leadbetter, Lindgren and Rootzén (1983), p. 156) and the inequality for the variances, we compare the corresponding probabilities for the random process $X_n(t)$ and for the processes

$$X_n^+(t) = \widetilde X(t)\sigma(t)(1 + \bar\delta_n) = \widetilde X^+(t)\sigma^+(t)(1 + \bar\delta_n), \qquad X_n^-(t) = \widetilde X(t)\sigma(t)(1 - \bar\delta_n) = \widetilde X^-(t)\sigma^-(t)(1 - \bar\delta_n),$$
where $0 < \sigma^-(t) < \sigma(t) < \sigma^+(t)$, $t \in J_k^\delta$, and
$$\sigma^+(t) = \frac{1}{1 + (a - \epsilon_1)|t - t_k|^\beta}, \qquad \sigma^-(t) = \frac{1}{1 + (a + \epsilon_1)|t - t_k|^\beta}.$$
From (13) it then follows that for all sufficiently large $n \ge n_0$ we have, for $k = 1,2,\dots,m_T$,
$$P\Bigl\{\max_{J_k^\delta}|X_n^-(t)| > u\Bigr\} \le P\Bigl\{\max_{J_k^\delta}|X_n(t)| > u\Bigr\} \le P\Bigl\{\max_{J_k^\delta}|X_n^+(t)| > u\Bigr\}.$$
Write $u^+ = u/(1 + \bar\delta_n)$ and $u^- = u/(1 - \bar\delta_n)$. Then for the processes $\widetilde X^+(t)\sigma^+(t)$ and $\widetilde X^-(t)\sigma^-(t)$ the assumptions (D1)-(D4) of Proposition 1(ii) hold, and for all sufficiently large $n$ we have
$$P\Bigl\{\max_{J_k^\delta}|X_n^+(t)| > u\Bigr\} = P\Bigl\{\max_{J_k^\delta}|\widetilde X^+(t)\sigma^+(t)| > u^+\Bigr\} \le (1 + \epsilon)\mu(u^+),$$ (14)
and analogously
$$P\Bigl\{\max_{J_k^\delta}|X_n^-(t)| > u\Bigr\} \ge (1 - \epsilon)\mu(u^-).$$ (15)

Finally, note that by (B1) we get $\bar\delta_n u^2 = o(1)$ as $n \to \infty$, and therefore $\mu(u^+) \sim \mu(u^-) \sim \mu(u)$ as $n \to \infty$. Letting $\kappa \to 1$ and $\vartheta \to 1$ in (14) and (15), we obtain
$$\lim_{n\to\infty}\max_{k=1,2,\dots,m_T}\bigl|P\{M_n(J_k^\delta) > u\}\,\mu(u)^{-1} - 1\bigr| = 0.$$ (16)
Further, we have
$$p_{ki} = P\Bigl\{\max_{J_k^\delta}X_n(t) > u,\ \max_{J_i^\delta}X_n(t) > u\Bigr\} \le P\Bigl\{\max_{J_k^\delta\times J_i^\delta}(X_n(t) + X_n(s)) > 2u\Bigr\}.$$ (17)
By (A1), for $k \ne i$ the variance of the random field $X_n(t) + X_n(s)$, $t \in J_k^\delta$, $s \in J_i^\delta$, is not greater than $4 - \epsilon$ for some $\epsilon > 0$. So, applying Proposition 1(i), we can estimate $p_{ki} \le K\exp\{-2u^2/(4-\epsilon)\}$, and
$$P\Bigl\{\max_{T_\delta^h}|X_n(t)| > u\Bigr\} = 2P\Bigl\{\max_{T_\delta^h}X_n(t) > u\Bigr\}(1 + o(1)) = 2\sum_{k=1}^{m_h}P\Bigl\{\max_{J_k^\delta}X_n(t) > u\Bigr\}(1 + o(1)) = (1 + o(1))\,m_h\,P\Bigl\{\max_{J_1^\delta}|X_n(t)| > u\Bigr\} = m_h\,\mu(u)(1 + o(1)) \quad\text{as } n \to \infty,$$
and Theorem 1(i) follows.

We come now to the proof of Theorem 1(ii). Consider the sequence $X_n(jq)$, where $q \sim cu^{-2/\alpha}$, $c > 0$, $j = 1,2,\dots,N_h$, and $qN_h \le h < q(N_h + 1)$ (cf. Piterbarg and Prisyazhnyuk (1979)). Denote $\Delta_S(u) = \{t : |t - t_1| \le Su^{-2/\beta}\}$, $S > 0$. Then we have
$$P\{M_n(J_1^\delta) > u\} = P\{M_n(\Delta_S(u)) > u\} + I(u),$$
where
$$I(u) = P_q\{M_n(\Delta_S(u)) \le u,\ M(J_1^\delta \cap \{|t - t_1| > Su^{-2/\beta}\}) > u\} \le P\{M(J_1^\delta \cap \{|t - t_1| > Su^{-2/\beta}\}) > u\} \le 2P\{M([Su^{-2/\beta}, \delta]) > u\},$$ (18)
as in the proof of Theorem 8.1 in Piterbarg (1988). We estimate $I(u)$ in the following way:
$$I(u) \le 2\sum_{k=1}^N P(A_k),$$
where $N = [\delta u^{2/\beta}/S] + 1$ and
$$A_k = \bigl\{M(\Delta_k) \ge u(1 + a|kSu^{-2/\beta}|^\beta)\bigr\}, \qquad \Delta_k = [kSu^{-2/\beta}, (k+1)Su^{-2/\beta}].$$
Applying Proposition 1(i), we find that there exists $L > 0$ such that
$$P(A_k) \le LS\,\Psi(u)\exp(-a|kS|^\beta),$$
and therefore for any $S > 0$
$$I(u) \le 2\Psi(u)LS\sum_{k=1}^\infty\exp(-a|kS|^\beta) = \Psi(u)\,\epsilon(S),$$ (19)
where $\epsilon(S) \to 0$ as $S \to \infty$.

where (S ) ! 0 as S ! 1. Consider now the di erence

D(u) = P fM (S (u)) > ug ? Pq fM (S (u)) > ug = P fM (S (u)) > u;  max Y (t)  ug (u)\ S

q

for suciently large S > 0. Then by standard arguments (see e.g. Leadbetter et al. (1983), Lemma 12.2.3 , Piterbarg and Prisyazhnyuk (1981)) we nd that ulim !1 D(u)= (u) =

Z1 ez P fmax ((t) ? bjtj ) > z; max jtjS fjtjS g\ 0

14

c

((t) ? bjtj )  z gdz = c : (20)

The last relation follows from weak convergence of the conditional distributions of Gaussian process u (t) = u(Y (tu?2= ) ? u) + z with given Y (0) = u ? z=u, i.e. u (0) = 0, to the distributions of the Gaussian process (t) ? bjtj . Since sample paths of (t) are continuous, then for a xed S the probability in (20) tends to zero as c ! 0. We have also that

P fmax ((t) ? bjtj ) > z; max ((t) ? bjtj )  z g  P fmax((t) ? bjtj ) > z g; jtjS fjtjS g\ jtjS c

and therefore by the dominated convergence theorem we nd that c ! 0 as c ! 0. Thus by (19) and (20) we obtain, lim sup jP fM (J1 ) > ug ? Pq fM (J1 ) > ugj (u)?1 = c ; n!1

where c ! 0 as c ! 0. Therefore there exist K1 and K2 such that

jP fjt?max Y (t)  ug ? Pq fjt?max Y (t)  ugj  K1 c(u); t j t j 1

1

and also by standard arguments we nd that

jP fjt?max jY (t)j > ug ? Pq fjt?max jY (t)j > ugj  K2c(u): t j t j 1

0

By choosing again appropriate constants 1 >  > 0;  > 1 as in (13) and then letting ;  ! 1 we obtain lim sup max jP fMn (Jk ) > ug ? Pq fMn(Jk ) > ugj=(u)  c; n!1 k

(21)

and

jP fMn (T) > ug ? Pq fMn(T) > ugj  mT max jP fMn (Jk ) > ug ? Pq fMn(Jk ) > ugj  k c mT (u)O(1) = c O(1) as n ! 1:

(22)

The relation (22) is valid also for the process $Y(t)$. Then, applying one of the variants of Berman's inequality (see Lemma 11.1.2 in Leadbetter, Lindgren and Rootzén (1983)), we obtain
$$\Bigl|P_q\Bigl\{\max_{T_\delta}|Y(t)| \le u\Bigr\} - \prod_{k=1}^{m_T}P_q\Bigl\{\max_{J_k^\delta}|Y(t)| \le u\Bigr\}\Bigr| \le K_1\sum_{i\ne j}|r_{i-j}|\exp\Bigl\{-\frac{u^2(s_0(iq)^{-2} + s_0(jq)^{-2})}{2(1 + |r_{i-j}|)}\Bigr\} =: S(u,T),$$ (23)
where $r_k = r(kq)$ and the sum is taken over the points of the grid $R_q$ lying in $T_\delta$. Splitting this sum into the parts with $|i-j|q \le v$ and $|i-j|q > v$ and using (B4) and (B5), we find that
$$S(u,T) = o(1) \quad\text{as } n \to \infty.$$ (24)
Analogously,
$$m_T\max_k\Bigl|P_q\Bigl\{\max_{J_k^\delta}|Y(t)| > u\Bigr\} - P_q\Bigl\{\max_{J_1^\delta}|Y(t)| > u\Bigr\}\Bigr| \le \epsilon_c\,m_T\,\mu(u) = \epsilon_c\,O(1)$$ (26)
as $n \to \infty$. The assertions (23) and (26) imply
$$\Bigl|P\Bigl\{\max_{T_\delta}|Y(t)| > u\Bigr\} - 1 + P_q\Bigl\{\max_{J_1^\delta}|Y(t)| \le u\Bigr\}^{m_T}\Bigr| \le \Bigl|P_q\Bigl\{\max_{T_\delta}|Y(t)| > u\Bigr\} - 1 + P_q\Bigl\{\max_{J_1^\delta}|Y(t)| \le u\Bigr\}^{m_T}\Bigr| + \epsilon_c\,O(1) = o(1) + \epsilon_c\,O(1)$$ (27)

as $n \to \infty$. Thus, letting $c \to 0$, we find
$$P\Bigl\{\max_{T_\delta}|Y(t)| \le u\Bigr\} = \Bigl(1 - P\Bigl\{\max_{J_1^\delta}|Y(t)| > u\Bigr\}\Bigr)^{m_T} + o(1) = e^{-\tau} + o(1) \quad\text{as } n \to \infty.$$
For the sequence $X_n(t)$, $n = 1,2,\dots$, we use arguments similar to those in Seleznjev (1991) for a sequence of stationary processes, namely
$$\Bigl|P_q\Bigl\{\max_{T_\delta}|X_n(t)| > u\Bigr\} - P_q\Bigl\{\max_{T_\delta}|Y(t)| > u\Bigr\}\Bigr| \le K_1\sum_{i<j}|r_n(iq,jq) - r(jq - iq)|\exp\Bigl\{-\frac{u^2}{1 + \rho_{ij}}\Bigr\} =: \widetilde S(u,T),$$ (28)
where $\rho_{ij}$ lies between $|r_n(iq,jq)|$ and $|r(jq - iq)|$, and the sum $\widetilde S(u,T)$ is estimated as in (24). The conditions (B2) and (B5) imply that there exists $t_0 > 0$ such that for any $t > t_0$,
$$0 \le \kappa(t) = \sup\{|\rho_n(s,s)|\,|\log s| : s \ge t,\ n = 1,2,\dots\} < \kappa_0 < \infty.$$
Consequently $\kappa(t) \to 0$ as $t \to \infty$, and the corresponding terms are bounded by $\kappa(t)|\log t|^{-1}$ for all $n$. Thus, as in (23), we have
$$\Bigl|P\Bigl\{\max_{T_\delta}|X_n(t)| > u\Bigr\} - P\Bigl\{\max_{T_\delta}|Y(t)| > u\Bigr\}\Bigr| = \frac{T}{q}\,\widetilde S(u,T) + \epsilon_c\,O(1) = o(1)$$
as $n \to \infty$, after letting $c \to 0$. So Theorem 1(ii) follows for $0 < \tau < \infty$.

Let $\tau = \infty$, i.e. $m_T\,\mu(u) \to \infty$ as $n \to \infty$. Write $\tau_\epsilon = \log(2/\epsilon)$ for any fixed $\epsilon > 0$ and $T_\epsilon = \tau_\epsilon/\mu(u)$. Then for all sufficiently large $n \ge n_0$ we obtain $T \ge T_\epsilon$, and by the first part of (ii)
$$P\Bigl\{\max_{[0,T_\epsilon]}|X_n(t)| \le u\Bigr\} \le \exp\{-\tau_\epsilon\} + \epsilon/2 = \epsilon/2 + \epsilon/2 = \epsilon.$$
Hence (ii) is valid also in this case, since
$$P\Bigl\{\max_{[0,T]}|X_n(t)| \le u\Bigr\} \le P\Bigl\{\max_{[0,T_\epsilon]}|X_n(t)| \le u\Bigr\}$$
for any fixed $\epsilon > 0$ and $n \ge n_0$. The case $\tau = 0$ follows immediately from the estimate in Proposition 1(i). Finally, in the general case, when $\bar\sigma_n \not\equiv 1$, (i) and (ii) follow from the relation

$$P\{M_n(h) > u\} = P\Bigl\{\max_{[0,h]}|X_n(t)/\bar\sigma_n| > u/\bar\sigma_n\Bigr\},$$
and the proof is completed.

We turn our attention to the proof of Theorem 2 and investigate the asymptotic properties of the point process $N_n(B)$. Recall that $0 < \tau < \infty$ and
$$N_n(B) = \#\{k : t_k \in B \text{ and } M_n(J_k) > u\}.$$

and Rootzen (1983), so consider the main steps only. We have to verify that for any 0 < c < d  nlim !1 E [NT ((c; d])] = E [N ((c; d])] =  (d ? c); S and if Ei = (ci ; di]; i = 1; 2; : : :; k, (disjoint) Uk = ki=1 Ei; then k Y  nlim !1 P fNT ((Uk ) = 0g = P fN (Uk ) = 0g = i=1 expf (di ? ci )g: The relation (29) follows from the de nition of the point process NT and Theorem p2 X  E [NT ((c; d])] = E [Nn((Tc; Td])] = P fMn (Ji ) > ug; i=p1

(29)

(30) 1(ii) since

where p1 = Tc=h1 + O(1); p2 = Td=h1 + O(1) as n ! 1. Therefore

E [NT ((c; d])]  (d ? c) hT ( u ) = (d ? c)mT ( u )   (d ? c) as n ! 1: 1

n

n

Furthermore for the intervals Ei = (ci; di]; i = 1; 2; : : :; k, the de nition of the point process

Nn(B) yields

k \

k \

i=1

i=1

P f fNn(Ei) = 0gg = P f fMn(Ei)  ugg: 18

The proof of the following assertion repeats that of (28), so we find
$$P\Bigl\{\bigcap_{i=1}^k\{M_n(TE_i) \le u\}\Bigr\} - \prod_{i=1}^k P\{M_n(TE_i) \le u\} = o(1) \quad\text{as } n \to \infty.$$
Thus we need the corresponding result only for the auxiliary process $Y(t)$; the proof repeats that of Lemma 9.1.1 in Leadbetter, Lindgren and Rootzén (1983).

3.2 Approximation of a random process by a broken line

First we introduce some additional notation. For every continuous function $g(t)$ define the difference operators $\Delta_h g(t) = g(t+h) - g(t)$ and $\Delta^2_{h_1,h_2}g(t) = \Delta_{h_1}(\Delta_{h_2}g(t))$. It will be convenient to use the following interpretation of the broken line $L_n(t)$, $t \in [0,1]$. Let $\xi$ be a Bernoulli random variable, $P\{\xi = 1\} = t$, $P\{\xi = 0\} = 1 - t$, with the distribution function $B(t)$. As noted before, we consider at first the deviation process only on the interval $[0,1]$. Then we have $E_t[f(\xi)] = f(1)t + f(0)(1-t)$ and $L_n(ht) = E_t[X(h\xi)]$. Denote by $Y_n(t)$ the deviation zero mean random process,
$$Y_n(t) = X(ht) - L_n(ht) = E_t[(X(ht) - X(h\xi))],$$
with covariance function $R_n(t,s)$ and variance $S_n(t) = R_n(t,t)$. Represent $R_n(t,s)$ in the following form:
$$R_n(t,s) = \tfrac12 E_{t,s}\bigl[d(h(t - \eta)) + d(h(s - \xi)) - d(h(\xi - \eta)) - d(h(t - s))\bigr],$$ (31)
where $\xi$ and $\eta$ are independent Bernoulli random variables with the distribution functions $B(t)$ and $B(s)$, respectively.
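Relation (31) is the polarization identity for processes with stationary increments; the following one-line check (ours, included for the reader's convenience) uses $(x_a - x_b)(x_c - x_d) = \tfrac12[(x_a - x_d)^2 + (x_b - x_c)^2 - (x_a - x_c)^2 - (x_b - x_d)^2]$:

```latex
E[(X(a)-X(b))(X(c)-X(d))]
  = \tfrac12\bigl[\,d(a-d) + d(b-c) - d(a-c) - d(b-d)\,\bigr],
R_n(t,s) = E_{t,s}\,E\bigl[(X(ht)-X(h\xi))(X(hs)-X(h\eta))\bigr]
  = \tfrac12 E_{t,s}\bigl[d(h(t-\eta)) + d(h(s-\xi)) - d(h(\xi-\eta)) - d(h(t-s))\bigr].
```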

We will use the following consequence of (C1) and Taylor's formula for the incremental variance $d(t)$:
$$d(t) = \tfrac12 d''(0)t^2 - |t|^{2+\gamma}(c + F(t)),$$ (32)
where
$$F(t) = \int_0^1 v^\gamma f(vt)(1 - v)\,dv, \qquad F^{(k)}(0) = 0,\ k = 0,1,2,$$
and also $F(t)\log|t| = o(1)$ as $t \to 0$. It follows directly from (31) that for any $t,s \in [0,1]$ and $k = 0,1,\dots$,
$$R_n(t, s+k) = -\tfrac12 E_{t,s}\bigl[\Delta^2_{\xi - t,\,s - \eta}\,d(h(t - s - k))\bigr] = -\tfrac12 E_{t,s}\bigl[\Delta^2_{\xi - t,\,s - \eta}\,d_0(h(t - s - k))\bigr],$$ (33)
where $d_0(t) = d(t) - \tfrac12 d''(0)t^2$ and the increments in $\Delta^2$ are taken in the argument of $d$ scaled by $h$. For the sake of simplicity we suppose in what follows that $d_0(t) = d(t)$, i.e. $d''(0) = 0$.

Proof of Lemma 1. It follows directly from conditions (C1)-(C2) and relation (31) that for the variance function $S_n(t)$ the following relation is valid:
$$S_n(t) = c\,h^{2+\gamma}S(t)(1 + \delta_n(t)),$$
where $\delta_n(t) \to 0$ as $n \to \infty$ uniformly in $t \in [0,1]$; recall that $S(t) = t(1-t)(1 - t^{1+\gamma} - (1-t)^{1+\gamma})$. In fact, by the cyclostationarity property of the process $Y_n(t)$ and (C1) there exists $K > 0$ such that
$$\bar\delta_n = \max_{[0,T]}|\delta_n(t)| = \max_{[0,1]}|\delta_n(t)| \le K\max_{[0,1]}|f(th)| = o(1/\log n).$$
On the other hand, the condition $n\mu(u) \to \tau \ge 0$ implies that $u^2 = O(\log n)$ as $n \to \infty$. Hence $\bar\delta_n u^2 = o(1)$ as $n \to \infty$, and Lemma 1(i) follows. It follows from (31) and (33) that the covariance function $R_n(t+v, t-v)$ can be represented in the following form:

$$R_n(t+v, t-v) = S_n(t) - \tfrac12 d(2vh) - v^2 d(h) + \tfrac12 E_t\bigl[\Delta^2_{2v,v}\,d(h(t - v - \xi))\bigr] + \tfrac12 v\,\Delta^2_{1,2v}\,d(h(t - v - 1)) = S_n(t) - c\,h^{2+\gamma}v^2 D_n(t,v),$$ (34)
where $D_n(t,v) = I_1 + I_2 + I_3 + I_4$, and for any $\epsilon > 0$, uniformly in $n > n_0(\epsilon)$, we have $I_1 = O(v^\gamma)$, $I_2 = -1 + o(1)$, $I_3 = O(1)$, and
$$I_4 = \tfrac12 v^{-1}\Delta^2_{1,2v}|t - v - 1|^{\gamma+2} + g_n(t,v) = (\gamma+2)\bigl((1-t)^{\gamma+1} + t^{\gamma+1}\bigr) + O(v) + g_n(t,v), \qquad |g_n(t,v)| \le \epsilon,$$ (35)
as $v \to 0$, $t \to \tfrac12$, uniformly in $n > n_0$. Analogously, we find for the variance function $S_n(t)$:

$$S_n(t + v) = S_n(t) + E_t[\Delta_v d(h(t - \xi))] - v\,\Delta_1 d(h(t + v - 1)) + d(h)(v^2 - v(1 - 2t)) = S_n(t) + g_n(t,v).$$ (36)
By (33) we have
$$g_n(t,v)\,g_n(t,-v) = -c^2v^2h^{4+2\gamma}A_n(t,v),$$ (37)
where $A_n(t,v) \to 0$ as $v \to 0$, $t \to \tfrac12$, uniformly in $n > n_0$. Analogously we obtain
$$g_n(t,v) + g_n(t,-v) = E_t\bigl[\Delta^2_{2v,v}d(h(t - v - \xi))\bigr] - v\,\Delta^2_{1,2v}d(h(t - v - 1)) = 2c\,v^2h^{2+\gamma}B_n(t,v),$$ (38)
where $B_n(t,v) = I_3 - I_4$. Therefore
$$(S_n(t+v)S_n(t-v))^{1/2} = S_n(t)\Bigl(1 + c\,v^2h^{2+\gamma}\frac{B_n(t,v)}{S_n(t)} + \rho_n(t,v)\Bigr),$$
where $\rho_n(t,v) \to 0$ as $v \to 0$, $t \to \tfrac12$, uniformly in $n > n_0$. As a consequence of the relations (34), (36), (37) and (38), we get that the correlation function $r_n(t,s)$ can be represented in the following form:
$$r_n(t-v, t+v) = R_n(t+v, t-v)/(S_n(t+v)S_n(t-v))^{1/2} = 1 - C_n(t+v, t-v)(2v)^2,$$
where $v = (t-s)/2$,
$$C_n(t,s) = \frac{c\,h^{2+\gamma}(I_2 + I_4)}{2S_n(\bar t)} + \rho_n(t,s),$$
$\bar t = (t+s)/2$, and $\rho_n(t,s) \to 0$ as $t,s \to \tfrac12$, uniformly in $n > n_0$. Observe that by the relation (36) for the variance $S_n(t)$ there exist $K_1 > 0$ and $0 < \delta < 1$ such that
$$S_n(\tfrac12 + v) = c\,h^{2+\gamma}(\bar\sigma^2 + v^2\tilde g_n(v)),$$
where $|\tilde g_n(v)| \le K_1$ for all $|v| \le \delta/2$. By (35) we have $C_n(t,s) = b(\gamma) + \rho_n(t,s)$, where
$$b(\gamma) = \bar\sigma^{-2}\Bigl(\frac{\gamma+2}{2^{\gamma+1}} - \frac12\Bigr) > 0, \qquad 0 < \gamma < 2,$$

and $\rho_n(t,s) \to 0$ as $t,s \to \tfrac12$ uniformly in $n$, and Lemma 1(ii) follows. The assertion of Lemma 1(iii) follows immediately from the definition of the normalized deviation process $X_n(t) = Y_n(t)/\bar\sigma_n$. In fact, there exists $K > 0$ such that for any $t,s \in [0,1]$,
$$(E[(X_n(t) - X_n(s))^2])^{1/2} \le \bar\sigma_n^{-1}\bigl((E[(X(th) - X(sh))^2])^{1/2} + (E[(L_n(th) - L_n(sh))^2])^{1/2}\bigr) \le \bar\sigma_n^{-1}\bigl(d(h(t-s))^{1/2} + |t-s|\,d(h)^{1/2}\bigr) \le K|t-s|.$$

Now we investigate the behavior of the correlation function $r_n(t,s)$ when $t \in [0,1]$ and $s \in [k, k+1]$, $s = v + k$, $k \ge 1$.

Proof of Lemma 2. Represent the covariance function $R_n(t,s)$ in the following form:

Now we investigate the behavior of the correlation function rn (t; s) when t 2 [0; 1] and s 2 [k; k + 1]; s = v + k; k  1. Proof of Lemma 2. Represent the covariance function Rn (t; s) in the following form

Rn (t; s) = 21 Et;v [d(h(t ?  ? k)) + d(h( ? v ? k)) ? d(h(t ? v ? k)) ? d(h( ?  ? k))] = ? 21 Et;v [42?t;v? d(h(t ? v ? k))]: Then there exist K1; K2 > 0 and 0 < i < 1; i = 1; 2; 3, such that

Rn(t; s) = 21 h2 Et;v [(t ? )(v ?  )d(2)(h(t ? s + 1 ( ? t) + 2 (v ?  ))] =

1 h4 E [(t ?  )(v ?  ) ( ? t) +  (v ?  )2 d(4)(h(t ? s +  ( ( ? t) +  (v ?  ))]: 3 1 2 1 2 4 t;v Further, (C2) implies, k > 3, 2+ 2+ jRn(t; s)j  K1 h 2?  K2 h 2? (k ? 3) jt ? sj

uniformly in n; t; s:

Thus the rst part of Lemma 2 follows. In particular, we have that for some t0 > 0 and for all n and any jt ? sj > t0 ,

jrn(t; s)j < 0 < 1: Then as in proof of Lemma 1 we have that for t; v 2 [0; 1]; s = v + k; k  k0,

$$c^{-1}h^{-(2+\gamma)}R_n(t,s) \to R_\infty(t,s) = \tfrac12 E_{t,v}\bigl[\Delta^2_{\xi - t,\,v - \eta}|t - s|^{2+\gamma}\bigr]$$ (39)
as $n \to \infty$, uniformly in $t,v \in [0,1]$, $k \le k_0 < \infty$.

Denote by $Z(t)$, $t \in [0,k_0]$, a Gaussian zero mean process with covariance and correlation functions $R_\infty(t,s)$ and $r_\infty(t,s)$, respectively. The process $Z(t)$ is nondegenerate. This can be verified directly by using the differentiability properties of the covariance function $R_\infty(t,s)$: otherwise there would exist points $t_i$ and constants $c_i$, $i = 1,\dots,p$, such that for any $t \ge 0$
$$R_\infty(t, t_0) = \sum_{i=1}^p c_i R_\infty(t, t_i)$$
(see also Pflug (1982)). Therefore for any $\delta > 0$ there exists $\theta_1 < 1$ such that for the correlation function $r_\infty(t,s)$ we obtain
$$|r_\infty(t,s)| < \theta_1 < 1 \qquad\text{uniformly for } |t - s| > \delta.$$
So for sufficiently large $n$ the process $X_n(t)$ is also nondegenerate in this sense, and the second part of Lemma 2 follows.

References

Adler, R.J. (1990) An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes. IMS Lecture Notes-Monograph Series, 12.

Belyaev, Yu.K. and Simonyan, A.H. (1979) Asymptotic properties of deviations of a sample path of a Gaussian process from approximation by broken line for decreasing width of quantization. In: Random Processes and Fields, ed. Yu.K. Belyaev, Moscow University Press, 9-21.

Berman, S.M. (1985) The maximum of a Gaussian process with nonconstant variance. Ann. Inst. H. Poincaré Probab. Statist., 21, 383-391.

Berman, S.M. and Kono, N. (1989) The maximum of a Gaussian process with nonconstant variance: a sharp bound for the distribution tail. Ann. Probab., 17, 632-650.

Cramér, H. and Leadbetter, M.R. (1967) Stationary and Related Stochastic Processes. Wiley, New York.

Dobrić, V., Marcus, M.B. and Weber, M.J.G. (1988) The distribution of large values of the supremum of a Gaussian process. Astérisque, 157-158, 95-127.

Eplett, W.T. (1986) Approximation theory for simulation of continuous Gaussian processes. Prob. Theory Rel. Fields, 73, 159-181.

Hasofer, A.M. (1987) Distribution of the maximum of a Gaussian process by a Monte Carlo method. J. Sound Vibr., 112, 283-293.

Hasofer, A.M. (1989) Continuous simulation of Gaussian processes with given spectrum. In: Proceedings of the 5th Int. Conf. on Structural Safety and Reliability, San Francisco, Aug. 7-11, ASCE, New York.

Konstant, D.G. and Piterbarg, V.I. (1993) Extreme values of the cyclostationary Gaussian random processes. J. Appl. Prob., 30, 82-97.

Leadbetter, M.R., Lindgren, G. and Rootzén, H. (1983) Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York.

Pickands, J. III (1969) Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc., 145, 51-73.

Piterbarg, V.I. and Prisyazhnyuk, V.P. (1979) Asymptotics of the probability of large excursions for a nonstationary Gaussian process. Theory Prob. Math. Stat., 18, 131-143.

Piterbarg, V.I. (1988) Theory of Asymptotic Methods in Gaussian Random Processes and Fields. Moscow University Press, Moscow.

Piterbarg, V.I. and Seleznjev, O.V. (1994) Linear interpolation of random processes and extremes of a sequence of Gaussian nonstationary processes. Univ. N. Carolina, Chapel Hill, Center Stoch. Proc. Tech. Rep. 446, 1-26.

Pflug, G. (1982) A statistically important Gaussian process. Stoch. Proc. Appl., 13, 45-57.

Powell, M.J.D. (1981) Approximation Theory and Methods. Cambridge University Press, Cambridge.

Rudzkis, R.O. (1985) On the probability of a large excursion of a nonstationary Gaussian process. Lith. Math. J., 25, 143-154.

Seleznjev, O.V. (1989a) The best approximation of random processes and approximation of periodic random processes. Univ. Lund Stat. Res. Rep., 6, 1-32.

Seleznjev, O.V. (1989b) Estimate of tail probability for maximum of Gaussian field. Univ. Lund Stat. Res. Rep., 10, 1-10.

Seleznjev, O.V. (1991) Limit theorems for maxima and crossings of a sequence of Gaussian processes and approximation of random processes. J. Appl. Prob., 28, 17-32.

Su, Y. and Cambanis, S. (1993) Sampling designs for estimation of a random process. Stoch. Proc. Appl., 46, 47-89.

Talagrand, M. (1988) Small tails for the supremum of a Gaussian process. Ann. Inst. H. Poincaré Probab. Statist., 24, 307-315.

Weber, M. (1989) The supremum of Gaussian processes with constant variance. Prob. Theory Rel. Fields, 81, 585-591.

Ylvisaker, N.D. (1968) A note on the absence of tangencies in Gaussian sample paths. Ann. Math. Stat., 39, 261-262.
