HOW MISLEADING CAN SAMPLE ACF'S OF STABLE MA'S BE? (VERY!)

SIDNEY RESNICK, GENNADY SAMORODNITSKY AND FANG XUE

Abstract. For the stable moving average process
$$X_t = \int_{-\infty}^{\infty} f(t+x)\,M(dx), \qquad t = 1, 2, \ldots,$$
we find the weak limit of its sample autocorrelation function as the sample size $n$ increases to $\infty$. It turns out that, as a rule, this limit is random! This shows how dangerous it is to rely on sample correlations as a model fitting tool in the heavy tailed case. We discuss for what functions $f$ this limit is non-random for all (or only some, and this can be the case, too!) lags.

1. Introduction

The sample autocorrelation function (acf) of a stationary process $\{X_t\}_{t=1}^{\infty}$ is one of the standard tools of classical time series analysis. It is given by

(1.1) $\hat\rho_n(h) = \hat\gamma_n(h)/\hat\gamma_n(0), \qquad h = 0, 1, \ldots,$

where

(1.2) $\hat\gamma_n(h) = \frac{1}{n} \sum_{t=1}^{n} X_t X_{t+h}$

is the (uncentered) sample covariance at lag $h$. For the moving average process $X_t = \sum_{j=-\infty}^{\infty} c_j Z_{t-j}$ with iid noise $\{Z_j\}$, the sample acf converges to a constant whether or not the noise distribution has a second moment (see e.g. Brockwell and Davis, 1991; Davis and Resnick, 1985): for every $h \geq 0$,
$$\hat\rho_n(h) \xrightarrow{P} \rho(h),$$

1991 Mathematics Subject Classification. Primary 62M10; Secondary 60E07, 60G70.
Key words and phrases. heavy tails, ARMA processes, stable processes, sample correlation, acf, infinite variance, moving average.
This research was partly supported by NSF Grant DMS-97-04982 at Cornell University. S. Resnick and G. Samorodnitsky were also partially supported by NSF Grant DMI-9713549 at Cornell.


where "$\xrightarrow{P}$" denotes convergence in probability and

$$\rho(h) = \frac{\sum_{j=-\infty}^{\infty} c_j c_{j+h}}{\sum_{j=-\infty}^{\infty} c_j^2}$$

is a constant. However, if for some heavy tailed processes the sample acf loses this desirable feature of converging to a constant, the usual model fitting and diagnostic tools, such as the Akaike Information Criterion or Yule-Walker estimators, will be of questionable applicability. In this case, the mischief potential for misspecifying a model is great, and more care must be taken in using the sample correlations for model fitting and estimation (see e.g. Resnick, 1997b). Recent studies seem to indicate that processes with asymptotically degenerate sample acf (like MA($\infty$)) form a very limited class in the heavy tailed world. For bilinear time series and some variations of MA($\infty$) (the sum of two MA($\infty$)'s, coefficient permutation with reset), it has been shown that the sample acf converges in finite dimensional distribution to a random limit (Davis and Resnick, 1996; Resnick and Van Den Berg, 1998; Cohen et al., 1997). In order to understand what happens to sample correlations in heavy tailed cases, it is natural to look at stationary $\alpha$-stable processes, $0 < \alpha < 2$. This class of processes can be viewed as a heavy tailed analog of Gaussian processes, and its structure is relatively well understood. Cohen et al. (1997) conducted empirical studies on two examples of ergodic symmetric $\alpha$-stable (S$\alpha$S) processes of the form

$$X_t = \int_E f_t(x)\,M(dx), \qquad t = 1, 2, \ldots,$$

where $M$ is an S$\alpha$S random measure on $E$ with $\sigma$-finite control measure $m$, $f_t \in L^{\alpha}(E, m)$ for all $t$, and $0 < \alpha < 2$ (see Samorodnitsky and Taqqu, 1994). Simulation evidence was found in both cases that the limit of the sample acf, as the sample size $n$ goes to $\infty$, is random. This article focuses on the class of $\alpha$-stable moving average processes

(1.3) $X_t = \int_{-\infty}^{\infty} f(t+x)\,M(dx), \qquad t = 1, 2, \ldots,$

where $f \in L^{\alpha}(\mathbb{R}^1)$, $M$ is an S$\alpha$S random measure on $\mathbb{R}^1$ with Lebesgue control measure, and $0 < \alpha < 2$. Although one might think this class is quite close to the MA($\infty$) class, that is not the case. We will evaluate for these processes the weak limits of the sample acfs, using the series representation of $\{X_t\}$ and certain results on tetrahedral multi-linear forms provided by Samorodnitsky and Szulga (1989). Despite $\{X_t\}$'s kinship with MA($\infty$), these limits are usually (with notable exceptions) random, thus confirming the empirical results. The limits, of course, depend on the lag $h$ and the function $f$. In §2, we give the series representation of the sample covariance $\hat\gamma_n(h)$ and write it as a sum of "diagonal" and "off-diagonal" parts; §3 finds the weak limit of the diagonal part under suitable normalization; §4 shows that the off-diagonal part, when compared with the diagonal part, can be neglected. In §5, we summarize our findings and discuss when the weak limit of $\hat\rho_n(h)$ is degenerate. Examples are used to demonstrate the arbitrary limit behavior of acfs when different lags are studied. In particular we construct examples which show that the sample acf may be asymptotically constant for some lags, but asymptotically random for other lags. A simulation result is presented in Figure 1 for one particular stable moving average process, which can be written as a sum of two MA($\infty$) processes. The sample acfs of eight independent copies are drawn in the first eight plots and overlaid in the last. For this process, the sample acfs appear to have a degenerate limit for lags no bigger than 10, but randomness takes over afterwards, as
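The randomness of the limiting sample acf is easy to see in a short simulation. The sketch below is not the authors' code: the coefficient vectors, the truncation to two finite moving averages, and the Chambers-Mallows-Stuck generator for S$\alpha$S noise are our own illustrative assumptions. With $\alpha = 1.5$ and two component moving averages whose individual acfs differ, the sample acf values vary visibly across independent copies of the process.

```python
import math, random

def sas(alpha, rng):
    """Chambers-Mallows-Stuck sample of a standard SaS variable (alpha != 1)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def sample_acf(x, max_lag):
    """Uncentered sample acf: sum_t x_t x_{t+h} / sum_t x_t^2."""
    denom = sum(v * v for v in x)
    return [sum(x[t] * x[t + h] for t in range(len(x) - h)) / denom
            for h in range(max_lag + 1)]

def two_ma_path(n, alpha, c1, c2, rng):
    """X_t = sum_k c1[k] Z'_{t-k} + sum_k c2[k] Z''_{t-k}, Z', Z'' iid SaS."""
    q = max(len(c1), len(c2))
    z1 = [sas(alpha, rng) for _ in range(n + q)]
    z2 = [sas(alpha, rng) for _ in range(n + q)]
    return [sum(c * z1[t + q - k] for k, c in enumerate(c1))
            + sum(c * z2[t + q - k] for k, c in enumerate(c2))
            for t in range(n)]

rng = random.Random(42)
# hypothetical coefficient pairs whose component acfs differ
lag1 = [sample_acf(two_ma_path(3000, 1.5, [2, 9, 4], [1, 0, 8], rng), 2)[1]
        for _ in range(4)]
print(lag1)  # the lag-1 sample acf fluctuates across independent copies
```

Repeating the experiment with larger $n$ does not shrink the spread of the lag-1 values, in contrast with the finite variance case.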


indicated by the fuzziness in the last plot. Evidence continues to accumulate which casts doubt on the appropriateness of the acf as a tool for model fitting and parameter estimation in heavy tailed models.

2. Decomposition of the Series Representation of Covariance Functions

Let $q(x)$ be any density function that is strictly positive on $\mathbb{R}^1$. A change of variable (Samorodnitsky and Taqqu, 1994) in (1.3) gives

(2.1) $\{X_t,\ t = 1, 2, \ldots\} \stackrel{d}{=} \left\{\int_{-\infty}^{\infty} f(t+x)\,q(x)^{-1/\alpha}\,M_1(dx),\ t = 1, 2, \ldots\right\}$

(the equality is in the sense of finite dimensional distributions), where $M_1$ is a symmetric $\alpha$-stable random measure on $\mathbb{R}^1$ whose control measure has density $q(x)$ with respect to the Lebesgue measure. Unlike $M$, $M_1$ has a finite control measure, hence has the following series representation (Samorodnitsky and Taqqu, 1994):

$$\{M_1(A);\ A \in \mathcal{B}\} \stackrel{d}{=} \left\{C_{\alpha}^{1/\alpha} \sum_{i=1}^{\infty} \epsilon_i \Gamma_i^{-1/\alpha}\,1(V_i \in A);\ A \in \mathcal{B}\right\},$$

where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^1$,

(2.2) $C_{\alpha} := \left(\int_0^{\infty} x^{-\alpha} \sin x\,dx\right)^{-1} = \begin{cases} \dfrac{1-\alpha}{\Gamma(2-\alpha)\cos(\pi\alpha/2)}, & \text{if } \alpha \neq 1, \\ 2/\pi, & \text{if } \alpha = 1, \end{cases}$

is a constant, and

(2.3) $\{\epsilon_j\}$ are iid Rademacher random variables with $P[\epsilon_i = 1] = P[\epsilon_i = -1] = 1/2$;

(2.4) $\{\Gamma_j\}$ are the arrival times of a Poisson process with unit rate on $[0, \infty)$;

(2.5) $\{V_j\}$ are iid random variables with the density $q(x)$.

All of the above three sequences are independent. We now write down the series representation of $X_t$. Define

(2.6) $S_t := C_{\alpha}^{1/\alpha} \sum_{i=1}^{\infty} \epsilon_i \Gamma_i^{-1/\alpha} f(V_i + t)\,q(V_i)^{-1/\alpha}, \qquad t = 1, 2, \ldots.$
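For concreteness, the series representation (2.2)-(2.6) can be simulated directly. The sketch below is an illustration under our own assumptions, not the authors' code: we pick the standard Cauchy density for the auxiliary density $q$ (any density strictly positive on $\mathbb{R}$ works), a triangular kernel for $f$, and a finite truncation of the infinite series.

```python
import math, random

def series_X(ts, f, alpha, n_terms, rng):
    """Truncated series (2.6) for X_t (alpha != 1), with q = standard Cauchy."""
    c_alpha = (1 - alpha) / (math.gamma(2 - alpha) * math.cos(math.pi * alpha / 2))
    # arrival times Gamma_i of a unit-rate Poisson process: cumulative Exp(1) sums
    gammas, g = [], 0.0
    for _ in range(n_terms):
        g += rng.expovariate(1.0)
        gammas.append(g)
    eps = [rng.choice((-1.0, 1.0)) for _ in range(n_terms)]                 # Rademacher
    vs = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n_terms)]  # V_i ~ q
    q = lambda x: 1.0 / (math.pi * (1.0 + x * x))
    return [c_alpha ** (1 / alpha)
            * sum(e * gi ** (-1 / alpha) * f(v + t) * q(v) ** (-1 / alpha)
                  for e, gi, v in zip(eps, gammas, vs))
            for t in ts]

f = lambda x: max(0.0, 1.0 - abs(x))   # a triangular kernel in L^alpha(R) (our choice)
x = series_X(range(1, 4), f, 1.5, 4000, random.Random(7))
print(x)   # one approximate realization of (X_1, X_2, X_3)
```

The truncation level controls the quality of the approximation; the largest terms come from the smallest arrival times $\Gamma_i$.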

Then the series in (2.6) converges almost surely (Samorodnitsky and Taqqu, 1994), and $\{X_t,\ t \geq 1\} \stackrel{d}{=} \{S_t,\ t \geq 1\}$. With $\hat\gamma_n(h)$ defined by (1.2), we have for all $H \geq 0$,

(2.7) $\{n \hat\gamma_n(h);\ h = 0, 1, \ldots, H\} \stackrel{d}{=} \left\{\sum_{t=1}^{n} S_t S_{t+h};\ h = 0, 1, \ldots, H\right\}.$

From (2.6), the following holds almost surely.

(2.8) $\displaystyle\sum_{t=1}^{n} S_t S_{t+h} = \sum_{t=1}^{n} \left(C_{\alpha}^{1/\alpha} \sum_{i=1}^{\infty} \epsilon_i \Gamma_i^{-1/\alpha} f(V_i+t)\,q(V_i)^{-1/\alpha}\right) \left(C_{\alpha}^{1/\alpha} \sum_{j=1}^{\infty} \epsilon_j \Gamma_j^{-1/\alpha} f(V_j+t+h)\,q(V_j)^{-1/\alpha}\right)$
$$= C_{\alpha}^{2/\alpha} \sum_{i=1}^{\infty} \Gamma_i^{-2/\alpha} \sum_{t=1}^{n} f(V_i+t) f(V_i+t+h)\,q(V_i)^{-2/\alpha}$$
$$\quad + C_{\alpha}^{2/\alpha} \sum_{i=1}^{\infty} \sum_{j \neq i} \epsilon_i \epsilon_j \Gamma_i^{-1/\alpha} \Gamma_j^{-1/\alpha} \sum_{t=1}^{n} f(V_i+t) f(V_j+t+h)\,q(V_i)^{-1/\alpha} q(V_j)^{-1/\alpha} =: Y_n'(h) + Y_n''(h),$$

where

(2.9) $Y_n'(h) = \dfrac{C_{\alpha}^{2/\alpha}}{C_{\alpha/2}^{2/\alpha}} \left(C_{\alpha/2}^{2/\alpha} \sum_{i=1}^{\infty} \Gamma_i^{-2/\alpha} \sum_{t=1}^{n} f(V_i+t) f(V_i+t+h)\,q(V_i)^{-2/\alpha}\right)$

is the sum of the "diagonal" terms, where $i = j$ in the double sum $\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}$, and $Y_n''(h)$ is the sum of the "off-diagonal" terms. We will see that both series converge almost surely. As a matter of fact, for all $H \geq 0$,

(2.10) $\{Y_n'(h);\ h = 0, 1, \ldots, H\} \stackrel{d}{=} \left\{\dfrac{C_{\alpha}^{2/\alpha}}{C_{\alpha/2}^{2/\alpha}} \displaystyle\int_{-\infty}^{\infty} \sum_{t=1}^{n} f(t+x) f(t+h+x)\,q(x)^{-2/\alpha}\,\tilde{M}_1(dx);\ h = 0, 1, \ldots, H\right\},$

where $\tilde{M}_1$ is a positive strictly $\alpha/2$-stable random measure on $\mathbb{R}^1$ whose control measure has density $q(x)$ with respect to Lebesgue measure (Samorodnitsky and Taqqu, 1994). Being the series representation of the stable integrals in (2.10), the series of the diagonal terms (2.9) converges almost surely to $Y_n'(h)$. So the series of the off-diagonal terms also converges almost surely. But a Rademacher series converges unconditionally whenever it converges almost surely (Samorodnitsky and Szulga, 1989). Hence the convergence to $Y_n''(h)$ is unconditional. This will enable us to rewrite this sum with an arbitrary deterministic change of order. With (2.7), (2.8) and a change of variable in (2.10), we have the following.

Proposition 2.1. For any $H \geq 0$ and any $n > 0$,
$$(n \hat\gamma_n(h);\ h = 0, 1, \ldots, H) \stackrel{d}{=} (Y_n'(h) + Y_n''(h);\ h = 0, 1, \ldots, H),$$
with

(2.11) $Y_n'(h) \stackrel{d}{=} \dfrac{C_{\alpha}^{2/\alpha}}{C_{\alpha/2}^{2/\alpha}} \displaystyle\int_{-\infty}^{\infty} \sum_{t=1}^{n} f(t+x) f(t+x+h)\,\tilde{M}(dx),$

where $\tilde{M}$ is a positive strictly $\alpha/2$-stable random measure on $\mathbb{R}^1$ with Lebesgue control measure, and

(2.12) $Y_n''(h) = C_{\alpha}^{2/\alpha} \displaystyle\sum_{i=1}^{\infty} \sum_{j \neq i} \epsilon_i \epsilon_j \Gamma_i^{-1/\alpha} \Gamma_j^{-1/\alpha} \sum_{t=1}^{n} f(V_i+t) f(V_j+t+h)\,q(V_i)^{-1/\alpha} q(V_j)^{-1/\alpha}.$

[Figure 1. Stable Moving Average: Sample Correlation Functions. Nine panels (Stable MA: Length = 3000, Alpha = 1.5) plot the sample ACF against Lag (0 to 30) for eight independent copies of the process, with the ninth panel overlaying all eight.]

3. The Diagonal Part

In this section we find the weak limit, under suitable normalization, of the diagonal part $Y_n'(h)$. For $0 < p \leq 1$ write $a^{\langle p \rangle} := |a|^p\,\mathrm{sign}(a)$.

Lemma 3.2. Suppose $\psi$ is a measurable function with $\int_{-\infty}^{\infty} |\psi(x)|^{\alpha/2}\,dx < \infty$, and let
$$A_n = \frac{1}{2n} \int_{-\infty}^{\infty} \left(\sum_{t=1}^{2n} \psi(t+x)\right)^{\langle \alpha/2 \rangle} dx, \qquad D = \int_0^1 \left(\sum_{t=-\infty}^{\infty} \psi(t+x)\right)^{\langle \alpha/2 \rangle} dx.$$
Then $\lim_{n \to \infty} A_n = D$.

Proof. Set also
$$B_n = \frac{1}{2n} \int_{-n}^{n} \left(\sum_{t=-n}^{n-1} \psi(t+x)\right)^{\langle \alpha/2 \rangle} dx, \qquad C_n = \int_0^1 \left(\sum_{t=-n}^{n-1} \psi(t+x)\right)^{\langle \alpha/2 \rangle} dx.$$
Since $(\sum_i |a_i|)^{\alpha/2} \leq \sum_i |a_i|^{\alpha/2}$ for $\alpha/2 \leq 1$,
$$\left|\left(\sum_{t=-n}^{n-1} \psi(t+x)\right)^{\langle \alpha/2 \rangle}\right| \leq \sum_{t=-\infty}^{\infty} |\psi(t+x)|^{\alpha/2} \quad \text{and} \quad \int_0^1 \sum_{t=-\infty}^{\infty} |\psi(t+x)|^{\alpha/2}\,dx = \int_{-\infty}^{\infty} |\psi(x)|^{\alpha/2}\,dx < \infty,$$
so the dominated convergence theorem gives

(3.3) $\lim_{n \to \infty} C_n = D.$

Moreover, by shift invariance of Lebesgue measure,
$$A_n = \frac{1}{2n} \int_{-\infty}^{\infty} \left(\sum_{t=-n}^{n-1} \psi(t+x)\right)^{\langle \alpha/2 \rangle} dx,$$
so that
$$|A_n - B_n| = \frac{1}{2n} \left|\int_{|x|>n} \left(\sum_{t=-n}^{n-1} \psi(t+x)\right)^{\langle \alpha/2 \rangle} dx\right| \leq \frac{1}{2n} \sum_{t=-n}^{n-1} \int_{|x|>n} |\psi(t+x)|^{\alpha/2}\,dx \leq \frac{1}{2n} \sum_{t=1}^{2n} \int_{|x|>t-1} |\psi(x)|^{\alpha/2}\,dx.$$
But the last expression is the Cesaro mean of the sequence $\int_{|x|>n} |\psi(x)|^{\alpha/2}\,dx$, which goes to zero as $n$ goes to $\infty$, so

(3.4) $\lim_{n \to \infty} |A_n - B_n| = 0.$

Next we estimate the distance between $B_n$ and $C_n$. Splitting $(-n, n)$ into unit intervals and shifting each to $(0, 1)$,
$$|B_n - C_n| \leq \frac{1}{2n} \sum_{j=-n}^{n-1} \int_0^1 \left|\left(\sum_{t=-n+j}^{n+j-1} \psi(t+x)\right)^{\langle \alpha/2 \rangle} - \left(\sum_{t=-n}^{n-1} \psi(t+x)\right)^{\langle \alpha/2 \rangle}\right| dx.$$
Applying Lemma 3.1 to each summand and bounding the resulting tail sums by quantities of the form $\int_{|x|>j} |\psi(x)|^{\alpha/2}\,dx$, the right hand side is again dominated by Cesaro means of sequences tending to zero, and with the same reasoning as applied to (3.4),

(3.5) $\lim_{n \to \infty} |B_n - C_n| = 0.$

From (3.3), (3.4) and (3.5), $A_n \to D$ as $n \to \infty$.

Proposition 3.3. Suppose $\tilde{M}$ is a positive strictly stable random measure on $\mathbb{R}^1$ with index $\alpha/2$ and Lebesgue control measure, and

(3.6) $\hat\gamma(h) := \dfrac{C_{\alpha}^{2/\alpha}}{C_{\alpha/2}^{2/\alpha}} \displaystyle\int_0^1 \sum_{t=-\infty}^{\infty} f(t+x) f(t+x+h)\,\tilde{M}(dx).$

Then for all $H \geq 0$,

(3.7) $\{n^{-2/\alpha} Y_n'(h);\ h = 0, 1, \ldots, H\} \Longrightarrow \{\hat\gamma(h);\ h = 0, 1, \ldots, H\}$, as $n \to \infty$,

where "$\Longrightarrow$" denotes weak convergence.

Proof. For any real $\theta_0, \theta_1, \ldots, \theta_H$, if we take $\psi(x) = f(x) \sum_{h=0}^{H} f(x+h)\,\theta_h$ in Lemma 3.2, then (2.11) shows that both the scale parameter and the skewness parameter of the strictly $\alpha/2$-stable random variable $n^{-2/\alpha} \sum_{h=0}^{H} \theta_h Y_n'(h)$ converge to the corresponding parameters of $\sum_{h=0}^{H} \theta_h \hat\gamma(h)$, which is also a strictly $\alpha/2$-stable random variable. So

(3.8) $n^{-2/\alpha} \displaystyle\sum_{h=0}^{H} \theta_h Y_n'(h) \Longrightarrow \sum_{h=0}^{H} \theta_h \hat\gamma(h)$, as $n \to \infty$.
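The conclusion $A_n \to D$ of Lemma 3.2 can be checked numerically for a concrete kernel. The sketch below is our own illustration, not part of the paper: we take $\psi$ to be the triangular kernel, whose integer shifts sum to 1 at every point (so $D = 1$), set $\alpha = 1.5$ so that $p = \alpha/2 = 0.75$, and evaluate both integrals by Riemann sums.

```python
import math

def spow(a, p):
    """Signed power a^{<p>} = |a|^p sign(a)."""
    return math.copysign(abs(a) ** p, a) if a else 0.0

def A_n(psi, n, p, lo, hi, step=1e-3):
    """(1/2n) * integral of (sum_{t=1}^{2n} psi(t+x))^{<p>} dx over [lo, hi],
    where [lo, hi] contains the support of the integrand."""
    total, x = 0.0, lo
    while x < hi:
        s = sum(psi(t + x) for t in range(1, 2 * n + 1))
        total += spow(s, p) * step
        x += step
    return total / (2 * n)

def D(psi, p, T=50, step=1e-3):
    """integral_0^1 (sum_{t=-T}^{T} psi(t+x))^{<p>} dx, T truncating the sum."""
    total, x = 0.0, 0.0
    while x < 1.0:
        s = sum(psi(t + x) for t in range(-T, T + 1))
        total += spow(s, p) * step
        x += step
    return total

psi = lambda x: max(0.0, 1.0 - abs(x))   # integer shifts of psi sum to 1
p = 0.75                                  # alpha/2 for alpha = 1.5
print(D(psi, p))                          # close to 1.0
print(A_n(psi, 5, p, -11.0, 0.0), A_n(psi, 10, p, -21.0, 0.0))  # both near 1.0
```

As $n$ grows, only the two boundary strips of the integration range contribute a discrepancy, and their weight $O(1/n)$ vanishes, which is exactly the mechanism of the proof.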

Thus (3.7) follows from the Cramer-Wold device (see e.g. Billingsley, 1995).

4. The Off-Diagonal Part

In this section, we need the following notation:
$$\ln^{+} x := \begin{cases} \ln x, & \text{if } x > 1, \\ 0, & \text{otherwise.} \end{cases}$$

Lemma 4.1 (Samorodnitsky and Szulga, 1989, Proposition 5.1). If $\{\epsilon_j\}_{j=1}^{\infty}$ ...

Corollary 5.3. Suppose that
(i) there exists $q \geq 0$ such that $f(x) = 0$ whenever $x < 0$ or $x > q + 1$;
(ii) $f(x)$ is continuous on $(k, k+1)$ for all $k \in \mathbb{Z}$;
(iii) $g_0(f, x) > 0$ for all $x \in (0, 1)$;
(iv) there exist constants $\rho_h$, $h \geq 0$, such that $\hat\rho(h) := \hat\gamma(h)/\hat\gamma(0) = \rho_h$ almost surely.
Then there exist constants $c_0, c_1, \ldots, c_q$ and a sequence of iid S$\alpha$S random variables $\{Z_k\}$ such that $\{X_t\}$ is a classical finite moving average; in particular, $\rho_h = 0$ for $h > q$.

The following examples indicate that assumptions (ii) and (iii) are necessary in Corollary 5.3.

Example 5.3. In the setting of Example 5.2, let $m = 2$, $A_1 = [0, 1/2)$, $A_2 = [1/2, 1)$, and $c_k' = c_k'' = 0$ if $k < 0$ or $k > 2$,
$$c_0' = 2,\ c_1' = 9,\ c_2' = 4, \qquad c_0'' = 1,\ c_1'' = 6,\ c_2'' = 8,$$
where $c_k'$ denotes $c_k^{(1)}$ and $c_k''$ denotes $c_k^{(2)}$ for every $k \in \mathbb{Z}$. In this case, (ii) of Corollary 5.3 fails. But since $g_0(x) = 101$, $g_1(x) = 54$, $g_2(x) = 8$ are all constants and $g_k(x) = 0$ if $k < 0$ or $k > 2$, $\hat\rho(h)$ is nonrandom for all $h$. However, this process is not a classical finite moving average.
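The claim in Example 5.3 is easy to verify numerically: the quantities $g_h$ agree on both coefficient branches, so the limiting acf is the same constant whichever branch $x$ falls in, while perturbing one branch breaks the agreement. A minimal check (our own illustrative code; the perturbed vector `c3` is a hypothetical example, not from the paper):

```python
def g(c, h):
    """g_h for a finitely supported coefficient sequence: sum_k c_k c_{k+h}."""
    return sum(c[k] * c[k + h] for k in range(len(c) - h))

c1, c2 = [2, 9, 4], [1, 6, 8]            # the two branches of Example 5.3
print([g(c1, h) for h in range(4)])       # [101, 54, 8, 0]
print([g(c2, h) for h in range(4)])       # [101, 54, 8, 0]

c3 = [1, 6, 9]                            # hypothetical perturbed branch
print([g(c3, h) for h in range(4)])       # [118, 60, 9, 0]: ratios g_h/g_0 differ
```

Since $g_h/g_0$ is $54/101$ and $8/101$ on both original branches, the limit acf is degenerate; with `c3` in place of `c2` the two branches would contribute different acf ratios and the limit would be random.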


Example 5.4. The process of Example 5.3 has, up to a multiplicative constant, another representation. In the notation of that example, let
$$f(x) = \begin{cases} c_{[x]}'\,\{x\}, & \text{if } \{x\} \leq 0, \\ c_{[x]}''\,\{x\}, & \text{if } \{x\} > 0, \end{cases}$$
where $[x] := \max(\mathbb{Z} \cap (0, x])$, $\{x\} := x - [x] - 1/2$, and $c_k'$ and $c_k''$ are as defined in Example 5.3. Here $f$ is continuous on $(k, k+1)$ for all $k \in \mathbb{Z}$, but (iii) of Corollary 5.3 fails, as $g_h(f, 1/2) = 0$.

The next example shows that without assumption (i) in Corollary 5.3, assumptions (ii), (iii) and (iv) are not enough to guarantee that the process is a classical moving average of finite or infinite order.

Example 5.5. Let $\lambda : (0, 1) \to (0, 1)$ be any continuous function. For all $x \in (0, 1)$, the function $F_x(z) := \exp\left(\lambda(x)\left(z - z^{-1}\right)\right)$ is analytic on $\{z : 0 < |z| < \infty\}$, and thus has a Laurent expansion (see e.g. Ahlfors, 1979)

(5.7) $F_x(z) = \displaystyle\sum_{k=-\infty}^{\infty} a_k(x)\,z^k,$

where
$$a_k(x) = \frac{1}{2\pi i} \oint_{|z|=1} \frac{F_x(z)}{z^{k+1}}\,dz = \begin{cases} \displaystyle\sum_{j=0}^{\infty} \frac{(-1)^j \lambda(x)^{2j+k}}{j!\,(j+k)!}, & \text{if } k \geq 0, \\ (-1)^k a_{|k|}(x), & \text{if } k < 0. \end{cases}$$

Let $f(x+k) = a_k(x)$ for all $x \in (0, 1)$ and $k \in \mathbb{Z}$. Then $f(x)$ satisfies (ii) and (iii) of Corollary 5.3, and is in $L^{\alpha}(\mathbb{R})$, since
$$\int_{-\infty}^{\infty} |f(x)|^{\alpha}\,dx = \sum_{k=-\infty}^{\infty} \int_0^1 |a_k(x)|^{\alpha}\,dx \leq \sum_{k=-\infty}^{\infty} \int_0^1 \left(\sum_{j=0}^{\infty} \frac{\lambda(x)^{2j+|k|}}{j!\,(j+|k|)!}\right)^{\alpha} dx \leq \sum_{k=-\infty}^{\infty} \left(\sum_{j=0}^{\infty} \frac{1}{j!\,(j+|k|)!}\right)^{\alpha} < \infty.$$

Moreover, for all $x \in (0, 1)$, the Laurent series $F_x(z^{-1}) = \sum_{k=-\infty}^{\infty} a_k(x)\,z^{-k}$ and (5.7) both converge absolutely on $\{z : 0 < |z| < \infty\}$, so
$$1 = F_x(z^{-1})\,F_x(z) = \left(\sum_{k=-\infty}^{\infty} a_k(x)\,z^{-k}\right)\left(\sum_{k=-\infty}^{\infty} a_k(x)\,z^k\right) = \sum_{h=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} a_k(x)\,a_{k+h}(x)\,z^h = \sum_{h=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} f(x+k)\,f(x+k+h)\,z^h = \sum_{h=-\infty}^{\infty} g_h(f, x)\,z^h.$$
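The identity $\sum_k a_k(x)\,a_{k+h}(x) = \delta_{0h}$ behind the display above can be confirmed numerically. The sketch below is our own illustrative check, not from the paper; the fixed value $\lambda = 0.7$ and the truncation levels are arbitrary assumptions.

```python
import math

def a(k, lam, terms=40):
    """Laurent coefficient a_k of exp(lam * (z - 1/z)); a_{-k} = (-1)^k a_k."""
    if k < 0:
        return (-1) ** (-k) * a(-k, lam, terms)
    return sum((-1) ** j * lam ** (2 * j + k)
               / (math.factorial(j) * math.factorial(j + k))
               for j in range(terms))

lam, K = 0.7, 25
for h in range(3):
    s = sum(a(k, lam) * a(k + h, lam) for k in range(-K, K - h + 1))
    print(h, s)   # close to 1.0 for h = 0 and close to 0.0 for h > 0
```

The coefficients decay superexponentially in $|k|$, so a modest truncation already reproduces the orthogonality to high precision.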

From the uniqueness of the Laurent series of the constant function 1, we have $g_0(f, x) = 1$ and $g_h(f, x) = 0$ for all $h > 0$. So $\hat\rho(0) = 1$ almost surely, and for all $h > 0$, $\hat\rho(h) = 0$ almost surely. However, $\{X_t\}$ is rarely a classical moving average process (not, e.g., if $\lambda(x) = x$, since the spectral measure of $(X_1, X_2)$ is not discrete).

Remark 5.1. This example shows that one classical method of testing whether data comes from an iid model, namely testing whether $\hat\rho_n(h) \approx 0$, $h > 0$, is extremely unreliable. The process in Example 5.5 is far from iid.

Our final result considers special cases of Example 5.2. It is significant because it shows the variety of the asymptotic behavior of acfs, which seriously questions the viability of the sample correlation function as an appropriate tool for statistical estimation or model fitting of heavy tailed time series models.

Proposition 5.4. Under the setting of Example 5.2 with $m = 2$, let $\mathbb{N}$ be the set of positive integers and $A$, $B$ subsets of $\mathbb{N}$. If $A$ and $B$ satisfy any one of the following three conditions, then we can choose $c_k'$ and $c_k''$ such that $\hat\rho(h)$ is degenerate when $h \in A$ and random when $h \in B$.
(i) $A = \{H, H+1, \ldots\}$, $B = \mathbb{N} \setminus A$, $H \in \mathbb{N}$.
(ii) $B = \{H, H+1, \ldots\}$, $A = \mathbb{N} \setminus B$, $H \in \mathbb{N}$.
(iii) $A$, $B$ are finite and $A \cap B = \emptyset$.

Proof. As we have seen in Example 5.2, $\hat\rho(h)$ is degenerate if and only if

(5.8) $\dfrac{\sum_{k=-\infty}^{\infty} c_k'\,c_{k+h}'}{\sum_{k=-\infty}^{\infty} c_k'^2} = \dfrac{\sum_{k=-\infty}^{\infty} c_k''\,c_{k+h}''}{\sum_{k=-\infty}^{\infty} c_k''^2},$

where $c_k'$ denotes $c_k^{(1)}$ and $c_k''$ denotes $c_k^{(2)}$ for every $k \in \mathbb{Z}$. There are many ways to choose $c_k'$ and $c_k''$ to make (5.8) hold when $h \in A$ and fail when $h \in B$. We will show just one example.

(i) If $c_k' = c_k'' = 0$ whenever $k > H$ or $k \leq 0$, then (5.8) holds for all $h \in A$. Most choices of $c_k'$ and $c_k''$ will fail (5.8) when $h \in B$.

(ii) Let $c_k' = c_k'' \neq 0$ if $k = 0, -1, -2, \ldots, -H+1$, with $c_0' \neq -5/3$ and $c_0' \neq 5$; $c_H' = 2$, $c_H'' = 3$; $c_{kH}' = 2^{3-k}$, $c_{kH}'' = 2^{1-k}$ if $k = 2, 3, \ldots$; and $c_k' = c_k'' = 0$ otherwise. With these $c_k'$ and $c_k''$,
$$\sum_{k=-\infty}^{\infty} c_k'^2 = \sum_{k=1-H}^{0} c_k'^2 + 4 + \sum_{k=2}^{\infty} 2^{6-2k} = \sum_{k=1-H}^{0} c_k''^2 + 9 + \sum_{k=2}^{\infty} 2^{2-2k} = \sum_{k=-\infty}^{\infty} c_k''^2.$$
If $h < H$,
$$\sum_{k=-\infty}^{\infty} c_k'\,c_{k+h}' = \sum_{k=1-H}^{-h} c_k'\,c_{k+h}' = \sum_{k=1-H}^{-h} c_k''\,c_{k+h}'' = \sum_{k=-\infty}^{\infty} c_k''\,c_{k+h}''.$$
If $h = k_1 H + k_2$, $k_1 \geq 1$, $k_2 = 1, 2, \ldots, H-1$,
$$\sum_{k=-\infty}^{\infty} c_k'\,c_{k+h}' = c_{-k_2}'\,c_{k_1 H}' \neq c_{-k_2}''\,c_{k_1 H}'' = \sum_{k=-\infty}^{\infty} c_k''\,c_{k+h}'',$$


since $c_{-k_2}' = c_{-k_2}'' \neq 0$ and $c_{k_1 H}' \neq c_{k_1 H}''$. If $h = k_1 H$, $k_1 \geq 1$, it can be similarly checked that $\sum_{k=-\infty}^{\infty} c_k'\,c_{k+h}' \neq \sum_{k=-\infty}^{\infty} c_k''\,c_{k+h}''$.

(iii) Suppose $l = \max(A \cup B)$. Pick $c_0', \ldots, c_l'$ such that $c_k' = 0$ if $k < 0$ or $k > l$ and $c_0'^2 + c_l'^2 > 0$. Define $a_h' = \sum_{t=0}^{l-h} c_t'\,c_{t+h}'$ and

(5.9) $a_h'' = \begin{cases} a_h' + \epsilon, & \text{if } h \in B, \\ a_h', & \text{otherwise,} \end{cases}$

where $\epsilon$ awaits to be decided. Let $A' = \left(a_{|j-k|}'\right)_{j,k=0}^{l}$ and $A'' = \left(a_{|j-k|}''\right)_{j,k=0}^{l}$ be two $(l+1) \times (l+1)$ matrices. Linear algebra shows that $A'$ is positive definite. Since all the main subdeterminants of $A''$ are continuous functions of $\epsilon$, we can find an $\epsilon \neq 0$ that keeps $A''$ positive definite. This achieved, there must exist $c_0'', c_1'', \ldots, c_l''$ such that $a_h'' = \sum_{t=0}^{l-h} c_t''\,c_{t+h}''$, $h = 0, 1, \ldots, l$. The last assertion can be proved via linear algebra or through a probability approach (see Brockwell and Davis, 1991, Theorem 1.5.1, Proposition 3.2.1, Theorem 3.2.1). With $c_k'$ and $c_k''$ chosen this way, (5.8) becomes

(5.10) $\dfrac{a_h'}{a_0'} = \dfrac{a_h''}{a_0''}.$

From (5.9), we have that (5.10) fails if and only if $h \in B$.

References

L. V. Ahlfors (1979): Complex Analysis: an Introduction to the Theory of Analytic Functions of One Complex Variable. McGraw-Hill, New York, 3rd edition.

P. Billingsley (1995): Probability and Measure. J. Wiley & Sons, New York, 3rd edition.

P. Brockwell and R. Davis (1991): Time Series: Theory and Methods. Springer-Verlag, New York, 2nd edition.

J. Cohen, S. Resnick and G. Samorodnitsky (1997): Sample correlations of infinite variance time series models: an empirical and theoretical study. Technical Report 1202, Department of Operations Research and Industrial Engineering, Cornell University. Available at http://www.orie.cornell.edu/trlist/trlist.html as TR1202.ps.Z.

R. Davis and S. Resnick (1985): Limit theory for moving averages of random variables with regularly varying tail probabilities. Annals of Probability 13:179-195.

R. Davis and S. Resnick (1996): Limit theory for bilinear processes with heavy tailed noise. Annals of Applied Probability 6:1191-1210.

D. Duffy, A. McIntosh, M. Rosenstein and W. Willinger (1993): Analyzing telecommunications data from working common channel signaling subnetworks. In Proceedings of the 25th Interface. Interface Foundation of North America, San Diego.

D. Duffy, A. McIntosh, M. Rosenstein and W. Willinger (1994): Statistical analysis of CCSN/SS7 traffic data from working CCS subnetworks. IEEE Journal on Selected Areas in Communications 12:544-551.

R. Durrett (1996): Probability: Theory and Examples. Duxbury Press, Belmont, California, 2nd edition.

K. Meier-Hellstern, P. Wirth, Y. Yan and D. Hoeflin (1991): Traffic models for ISDN data users: office automation application. In Teletraffic and Datatraffic in a Period of Change, Proceedings of the 13th ITC, A. Jensen and V. B. Iversen, editors. North Holland, Amsterdam, The Netherlands.


S. Resnick (1997a): Heavy tail modeling and teletraffic data. Annals of Statistics 25:1805-1869.

S. Resnick (1997b): Why non-linearities can ruin the heavy tailed modeler's day. In A Practical Guide to Heavy Tails: Statistical Techniques for Analyzing Heavy Tailed Distributions, R. Adler, R. Feldman and M. Taqqu, editors. Birkhauser, Boston.

S. Resnick and E. Van Den Berg (1998): Sample correlation behavior for the heavy tailed general bilinear process. Preprint.

G. Samorodnitsky and J. Szulga (1989): An asymptotic evaluation of the tail of a multiple symmetric $\alpha$-stable integral. Annals of Probability 17:1503-1520.

G. Samorodnitsky and M. Taqqu (1994): Stable Non-Gaussian Random Processes. Stochastic Modeling. Chapman & Hall, New York.

W. Willinger, M. Taqqu, R. Sherman and D. Wilson (1997): Self-similarity through high variability: statistical analysis of Ethernet LAN traffic at the source level. IEEE/ACM Transactions on Networking 5:71-96.

Sidney Resnick and Gennady Samorodnitsky, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 14853

E-mail address: [email protected], [email protected]

Fang Xue, Center for Applied Mathematics, Cornell University, Ithaca, NY 14853

E-mail address :

[email protected]