Communications of the Korean Statistical Society 2010, Vol. 17, No. 4, 507–513

Complete Moment Convergence of Moving Average Processes Generated by Negatively Associated Sequences

Mi-Hwa Ko^{1,a}

^a Department of Mathematics Education, Daebul University

Abstract

Let $\{X_i, -\infty < i < \infty\}$ be a doubly infinite sequence of identically distributed and negatively associated random variables with mean zero and finite variance, and let $\{a_i, -\infty < i < \infty\}$ be an absolutely summable sequence of real numbers. Define the moving average process $Y_n = \sum_{i=-\infty}^{\infty} a_{i+n} X_i$, $n \ge 1$, and $S_n = Y_1 + \cdots + Y_n$. In this paper we prove that $E|X_1|^r h(|X_1|^p) < \infty$ implies $\sum_{n=1}^{\infty} n^{r/p-2-q/p} h(n) E\{\max_{1\le k\le n}|S_k| - \epsilon n^{1/p}\}_+^q < \infty$ and $\sum_{n=1}^{\infty} n^{r/p-2} h(n) E\{\sup_{k\ge n}|k^{-1/p} S_k| - \epsilon\}_+^q < \infty$ for all $\epsilon > 0$ and all $q > 0$, where $h(x) > 0$ $(x > 0)$ is a slowly varying function, $1 \le p < 2$ and $r > 1 + p/2$.

Keywords: Moving average process, negatively associated, complete moment convergence, doubly infinite sequence.

1. Introduction

We assume that $\{X_i, -\infty < i < \infty\}$ is a doubly infinite sequence of identically distributed random variables with mean zero and finite variance and that $\{a_i, -\infty < i < \infty\}$ is an absolutely summable sequence of real numbers. We define the moving average process $\{Y_n, n \ge 1\}$ by
$$Y_k = \sum_{i=-\infty}^{\infty} a_{i+k} X_i, \qquad k \ge 1. \tag{1.1}$$
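To make the definition concrete, here is a minimal simulation sketch (not part of the paper). Everything in it is an illustrative assumption: the weights $a_m = 0.5^{|m|}$ are truncated to $|m| \le M$ so that the absolutely summable series can be evaluated, and i.i.d. standard normal innovations stand in for a general NA sequence $\{X_i\}$.

```python
# Illustrative sketch only: simulate Y_k from (1.1) with hypothetical truncated
# weights and i.i.d. normal innovations in place of a general NA sequence.
import numpy as np

rng = np.random.default_rng(0)
M, N = 30, 1000                                  # truncation range, number of Y_k's
a = 0.5 ** np.abs(np.arange(-M, M + 1))          # weights a_m, m = -M, ..., M

# The change of variable m = i + k turns (1.1) into Y_k = sum_m a_m X_{m-k}.
# X_buf[j] stores X_{j - M - N}, i.e. the innovations X_{-M-N}, ..., X_{M-1}.
X_buf = rng.standard_normal(2 * M + N)

Y = np.array([a @ X_buf[N - k: N - k + 2 * M + 1] for k in range(1, N + 1)])
S = np.cumsum(Y)                                 # partial sums S_n = Y_1 + ... + Y_n
print(Y[:3], S[-1])
```

The same construction applies unchanged when the innovations are genuinely NA, for example when they are obtained by sampling without replacement as in the sketch following the next paragraph.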

Under suitable conditions, many limit results for the moving average process $\{Y_n, n \ge 1\}$ have been obtained. For example, Burton and Dehling (1990) obtained a large deviation principle for $\{Y_n, n \ge 1\}$; Zhang (1996) obtained complete convergence and Kim and Ko (2008) proved complete moment convergence under a φ-mixing assumption; Kim and Baek (2001) established a central limit theorem under a linearly positive quadrant dependence condition; Baek et al. (2003) obtained complete convergence and Li and Zhang (2004) discussed complete moment convergence under a negative association assumption. Recently, Chen et al. (2009) improved the result in Zhang (1996) and Zhou (2010) improved the result in Kim and Ko (2008).

A family of random variables $\{X_i, 1 \le i \le n\}$ is said to be negatively associated (NA) if for every pair of disjoint subsets $A$ and $B$ of $\{1, 2, \ldots, n\}$,
$$\mathrm{Cov}\{f(X_i, i \in A),\, g(X_j, j \in B)\} \le 0$$
whenever $f$ and $g$ are coordinatewise non-decreasing and the covariance exists. An infinite family is NA if every finite subfamily is NA. This notion was introduced by Joag-Dev and Proschan (1983), who also pointed out and proved that a number of well-known multivariate distributions possess the NA property, such as (a) the multinomial distribution, (b) the multivariate hypergeometric distribution, (c) the negatively correlated normal distribution and (d) random sampling without replacement.
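As a small empirical illustration of this definition (again a sketch under assumptions not taken from the paper), one can draw $(X_1, \ldots, X_n)$ by random sampling without replacement from a fixed finite population, which Joag-Dev and Proschan (1983) show is NA, and estimate the covariance of two coordinatewise non-decreasing functions evaluated on disjoint index sets; the estimate should be non-positive up to Monte Carlo error.

```python
# Empirical illustration of the NA property; a sketch only, with hypothetical
# choices of population, index sets and monotone functions.
import numpy as np

rng = np.random.default_rng(1)
population = np.arange(10.0)                 # fixed finite population {0, 1, ..., 9}
n, reps = 6, 100_000
A, B = [0, 1, 2], [3, 4, 5]                  # disjoint index sets

f_vals = np.empty(reps)
g_vals = np.empty(reps)
for r in range(reps):
    X = rng.choice(population, size=n, replace=False)   # sampling without replacement
    f_vals[r] = X[A].sum()                   # coordinatewise non-decreasing f
    g_vals[r] = X[B].max()                   # coordinatewise non-decreasing g

print(np.cov(f_vals, g_vals)[0, 1])          # should be <= 0 up to Monte Carlo error
```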

^1 Full-time non-tenure-track faculty, Department of Mathematics Education, Daebul University, Jeonnam 526-720, Korea. E-mail: [email protected]


Because of its wide applications in multivariate statistical analysis and reliability theory, the notion of NA has received considerable attention in recent years.

We say that a sequence $\{U_n, n \ge 1\}$ satisfies complete moment convergence of order $q\,(>0)$ if
$$\sum_{n=1}^{\infty} E\{|U_n|\}_+^q < \infty, \qquad \text{where } E\{|U_n|\}_+^q = \int_0^{\infty} P\{|U_n|^q > x\}\,dx \tag{1.2}$$
and $a_+$ denotes $\max(a, 0)$. In particular, when $q = 1$ and (1.2) holds we say that the sequence $\{U_n, n \ge 1\}$ satisfies complete moment convergence. Moreover, we say that the sequence $\{U_n, n \ge 1\}$ satisfies complete convergence if $\sum_{n=1}^{\infty} P\{|U_n| > \epsilon\} < \infty$ for all $\epsilon > 0$. Note that complete moment convergence implies complete convergence (see Li and Zhang, 2004).

The purpose of this paper is to establish complete moment convergence of order $q$ for the maximum and the supremum of the partial sums of moving average processes generated by negatively associated sequences.
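The tail-integral representation used in (1.2), namely that the expectation of a non-negative random variable equals the integral of its tail probabilities, can be checked numerically. The sketch below is purely illustrative: $U \sim N(0,1)$ and $q = 2$ are hypothetical choices, and the identity is applied to the non-negative variable $(U)_+^q$.

```python
# Purely illustrative check of E[V] = int_0^inf P{V > x} dx for V = (U_+)^q,
# the identity behind (1.2). Hypothetical choices: U ~ N(0, 1), q = 2.
import numpy as np

rng = np.random.default_rng(2)
q = 2.0
U = rng.standard_normal(200_000)
V = np.maximum(U, 0.0) ** q                  # non-negative random variable (U_+)^q

direct = V.mean()                            # direct Monte Carlo estimate of E[V]

xs = np.linspace(0.0, 30.0, 601)             # P{V > 30} is negligible for this V
tail = np.array([(V > x).mean() for x in xs])
via_tail_integral = float(((tail[:-1] + tail[1:]) / 2).sum() * (xs[1] - xs[0]))

print(direct, via_tail_integral)             # E[(U_+)^2] = 1/2, so both are near 0.5
```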

2. Main Result and Lemmas

The following theorem is the main result of this paper; its proof is given in Section 3.

Theorem 1. Suppose that $\{Y_n, n \ge 1\}$ is defined as in (1.1), where $\{a_i, -\infty < i < \infty\}$ is a sequence of real numbers with $\sum_{i=-\infty}^{\infty}|a_i| < \infty$ and $\{X_i, -\infty < i < \infty\}$ is a sequence of identically distributed NA random variables with mean zero and finite second moment. Let $h(x) > 0$ $(x > 0)$ be a slowly varying function, $1 \le p < 2$ and $r > 1 + p/2$. If $E|X_1|^r h(|X_1|^p) < \infty$ then, for all $\epsilon > 0$ and all $q > 0$,
$$\text{(i)}\qquad \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n)\, E\Big\{\max_{1\le k\le n}|S_k| - \epsilon n^{\frac{1}{p}}\Big\}_+^q < \infty \qquad\text{and}$$
$$\text{(ii)}\qquad \sum_{n=1}^{\infty} n^{\frac{r}{p}-2} h(n)\, E\Big\{\sup_{k\ge n}\big|k^{-\frac{1}{p}} S_k\big| - \epsilon\Big\}_+^q < \infty.$$

Remark 1. (1) Theorem 1.1 in Li and Zhang (2004) is the special case of (i) in Theorem 1 with $q = 1$. (2) From (i), the complete moment convergence for the supremum of the partial sums of the moving average process under the NA assumption, that is (ii), is obtained.

Next we state some lemmas which play important roles in the proof of our main result.

Lemma 1. (Burton and Dehling, 1990) Let $\sum_{i=-\infty}^{\infty} a_i$ be an absolutely convergent series of real numbers with $a = \sum_{i=-\infty}^{\infty} a_i$ and let $k \ge 1$. Then
$$\lim_{n\to\infty} \frac{1}{n} \sum_{i=-\infty}^{\infty} \Bigg| \sum_{j=i+1}^{i+n} a_j \Bigg|^k = |a|^k.$$
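Lemma 1 can be illustrated numerically. The sketch below is only a sanity check under illustrative assumptions (a hypothetical geometrically decaying weight sequence truncated to $|i| \le M$, and exponent $k = 2$): the averages should approach $|a|^k$ as $n$ grows.

```python
# Numerical illustration of Lemma 1 (Burton and Dehling, 1990); a sketch only,
# with hypothetical weights a_i = 0.5^{|i|} truncated to |i| <= M and k = 2.
import numpy as np

M, k = 40, 2
a = 0.5 ** np.abs(np.arange(-M, M + 1))      # absolutely summable weights
a_total = a.sum()                            # a = sum_i a_i

def cesaro_average(n):
    """(1/n) * sum_i |sum_{j=i+1}^{i+n} a_j|^k; the inner sum vanishes outside [-M, M]."""
    total = 0.0
    for i in range(-M - n, M + 1):           # only windows [i+1, i+n] meeting [-M, M]
        lo, hi = max(i + 1, -M), min(i + n, M)
        inner = a[lo + M: hi + M + 1].sum() if lo <= hi else 0.0
        total += abs(inner) ** k
    return total / n

for n in (10, 100, 1000, 10000):
    print(n, round(cesaro_average(n), 4), "vs", round(a_total ** k, 4))
```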


Lemma 2. (Shao, 2000) Let $\{X_i, i \ge 1\}$ be a sequence of negatively associated random variables with mean zero and finite second moment. Let $S_n = \sum_{i=1}^{n} X_i$ and $B_n = \sum_{i=1}^{n} EX_i^2$. Then for all $x > 0$ and $y > 0$,
$$P\Big(\max_{1\le k\le n}|S_k| \ge x\Big) \le 2P\Big(\max_{1\le k\le n}|X_k| \ge y\Big) + 4\exp\Big(-\frac{x^2}{8B_n}\Big) + 4\Big(\frac{B_n}{4(xy + B_n)}\Big)^{\frac{x}{12y}}.$$

In the proof of Theorem 1.1 and Remark 1.2 of Li and Zhang (2004), replacing $|S_n|$ by $\max_{1\le k\le n}|S_k|$ yields the following complete convergence result for the maximum of the partial sums of the moving average process based on a negatively associated sequence.

Lemma 3. Suppose that $\{Y_n, n \ge 1\}$ is a moving average process defined as in (1.1), where $\{a_i, -\infty < i < \infty\}$ is a sequence of real numbers with $\sum_{i=-\infty}^{\infty}|a_i| < \infty$ and $\{X_i, -\infty < i < \infty\}$ is a sequence of identically distributed NA random variables with mean zero and finite second moment. Let $h(x) > 0$ $(x > 0)$ be a slowly varying function, $1 \le p < 2$ and $r > 1 + p/2$. Then $E|X_1|^r h(|X_1|^p) < \infty$ implies
$$\sum_{n=1}^{\infty} n^{\frac{r}{p}-2} h(n)\, P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}}\Big\} < \infty, \qquad \text{for all } \epsilon > 0. \tag{2.1}$$
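Before turning to the proof, the following small sketch (illustrative only; all numbers are hypothetical) evaluates the right-hand side of the bound in Lemma 2 with the choice $y = \beta x$ used in Section 3, under which the exponent $x/(12y)$ becomes $1/(12\beta)$. The probability $P(\max_{1\le k\le n}|X_k| \ge y)$ is left as an input because it depends on the law of the $X_i$.

```python
# Right-hand side of the maximal inequality in Lemma 2 (Shao, 2000); sketch only.
import math

def shao_bound(x, y, B_n, p_max_exceeds_y):
    """Upper bound for P(max_{1<=k<=n} |S_k| >= x) given in Lemma 2."""
    term1 = 2.0 * p_max_exceeds_y                                     # 2 P(max_k |X_k| >= y)
    term2 = 4.0 * math.exp(-x * x / (8.0 * B_n))                      # 4 exp(-x^2 / (8 B_n))
    term3 = 4.0 * (B_n / (4.0 * (x * y + B_n))) ** (x / (12.0 * y))   # 4 (B_n/(4(xy+B_n)))^{x/(12y)}
    return term1 + term2 + term3

# As in the proof of Theorem 1, take y = beta * x (hypothetical numbers below).
beta, x, B_n = 0.05, 40.0, 100.0
print(shao_bound(x, beta * x, B_n, p_max_exceeds_y=0.0))
```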

3. Proof of Theorem 1

Proof of (i): We use the standard method. Observe that $\sum_{k=1}^{n} Y_k = \sum_{i=-\infty}^{\infty}\sum_{k=1}^{n} a_{k+i} X_i = \sum_{i=-\infty}^{\infty} a_{ni} X_i$, where $a_{ni} = \sum_{k=1}^{n} a_{k+i}$. By Lemma 1 we may assume, without loss of generality, that $\sum_{i=-\infty}^{\infty}|a_{ni}| \le n$ for $n \ge 1$ and $\tilde{a} = \sum_{i=-\infty}^{\infty}|a_i| \le 1$. Let $\lambda = (EX_1^2)^{-1/2}$. Then $B_n = \sum_{i=-\infty}^{\infty} a_{ni}^2\, EX_i^2 \le n\lambda^{-2}$. Using Lemma 2 with $y = \beta x$, where $\beta > 0$ will be specified later, we have
$$\sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n)\, E\Big\{\max_{1\le k\le n}|S_k| - \epsilon n^{\frac{1}{p}}\Big\}_+^q
= \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_0^{\infty} P\Big\{\max_{1\le k\le n}|S_k| - \epsilon n^{\frac{1}{p}} > x^{\frac{1}{q}}\Big\}\,dx \tag{3.1}$$
$$= \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_0^{n^{q/p}} P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}} + x^{\frac{1}{q}}\Big\}\,dx
+ \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_{n^{q/p}}^{\infty} P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}} + x^{\frac{1}{q}}\Big\}\,dx
= I + J.$$

For $I$, letting $y = x^{\frac{1}{q}}$,
$$I = \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_0^{n^{q/p}} P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}} + x^{\frac{1}{q}}\Big\}\,dx
= q \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_0^{n^{1/p}} y^{q-1}\, P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}} + y\Big\}\,dy$$
$$\le q \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_0^{n^{1/p}} y^{q-1}\, P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}}\Big\}\,dy \tag{3.2}$$
$$\le C \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}+\frac{q}{p}} h(n)\, P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}}\Big\}
\le C \sum_{n=1}^{\infty} n^{\frac{r}{p}-2} h(n)\, P\Big\{\max_{1\le k\le n}|S_k| > \epsilon n^{\frac{1}{p}}\Big\} < \infty,$$
by Lemma 3.

By modifying the proof of Theorem 1.1 in Li and Zhang (2004) we can prove that $J < \infty$; for completeness we repeat the argument here. Letting $y = x^{\frac{1}{q}}$,
$$J \le \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_{n^{q/p}}^{\infty} P\Big\{\max_{1\le k\le n}|S_k| > x^{\frac{1}{q}}\Big\}\,dx
= q \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_{n^{1/p}}^{\infty} y^{q-1}\, P\Big\{\max_{1\le k\le n}|S_k| > y\Big\}\,dy. \tag{3.3}$$
Applying Lemma 2 with $x = y$ and the second parameter equal to $\beta y$, and using $B_n \le n\lambda^{-2}$, we get
$$P\Big\{\max_{1\le k\le n}|S_k| > y\Big\} \le 2\sum_{i=-\infty}^{\infty} P\big\{|a_{ni}X_i| > \beta y\big\} + 4\exp\Big(-\frac{y^2 n^{-1}\lambda^2}{8}\Big) + 4\Big(\frac{1}{4(\beta y^2 n^{-1}\lambda^2 + 1)}\Big)^{\frac{1}{12\beta}},$$
so that $J \le J_{21} + J_{22} + J_{23}$, where $J_{21}$, $J_{22}$ and $J_{23}$ are the quantities obtained from (3.3) by replacing $P\{\max_{1\le k\le n}|S_k| > y\}$ with the first, second and third term, respectively.

For $J_{21}$ we have
$$J_{21} = 2q \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_{n^{1/p}}^{\infty} \sum_{i=-\infty}^{\infty} P\big\{|a_{ni}X_i| > \beta y\big\}\, y^{q-1}\,dy.$$
Bounding the number of indices $i$ for which $|a_{ni}|$ is large by means of $\sum_{i=-\infty}^{\infty}|a_{ni}| \le n$, replacing the sum over $n$ by an integral over $t$ and substituting $x = \beta t^{1/p}$, we obtain, as in Li and Zhang (2004),
$$J_{21} \le C \int_1^{\infty} x^{r-1-q} h(x^p) \int_x^{\infty} \sum_{k=[y^p]}^{\infty} k^{\frac{1}{p}} y^{-1}\, P\{k \le |X_1|^p \le k+1\}\, y^{q-1}\,dy\,dx$$
$$\le C \int_1^{\infty} \Big(\int_1^{y} x^{r-1-q} h(x^p)\,dx\Big) \sum_{k=[y^p]}^{\infty} k^{\frac{1}{p}} y^{q-2}\, P\{k \le |X_1|^p \le k+1\}\,dy$$
$$\le C \int_1^{\infty} y^{r-2} h(y^p) \sum_{k=[y^p]}^{\infty} k^{\frac{1}{p}}\, P\{k \le |X_1|^p \le k+1\}\,dy \tag{3.4}$$
$$\le C \sum_{k=0}^{\infty} k^{\frac{1}{p}}\, P\{k \le |X_1|^p \le k+1\} \int_1^{(k+1)^{1/p}} y^{r-2} h(y^p)\,dy
\le C \sum_{k=0}^{\infty} k^{\frac{1}{p}}\, P\{k \le |X_1|^p \le k+1\}\, (k+1)^{\frac{r-1}{p}} h(k+1)$$
$$\le C \sum_{k=0}^{\infty} (k+1)^{\frac{r}{p}} h(k+1)\, P\{k \le |X_1|^p \le k+1\}
\le C\, E|X_1|^r h(|X_1|^p) + 1 < \infty.$$

Next we estimate $J_{22}$ for $r > 1 + p/2$ and $1 \le p < 2$:
$$J_{22} = 4q \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_{n^{1/p}}^{\infty} \exp\Big(-\frac{y^2 n^{-1}\lambda^2}{8}\Big) y^{q-1}\,dy
\le C \int_1^{\infty} x^{\frac{r}{p}-2-\frac{q}{p}} h(x) \int_{x^{1/p}}^{\infty} \exp\Big(-\frac{y^2 x^{-1}\lambda^2}{8}\Big) y^{q-1}\,dy\,dx$$
(letting $t = y^2 x^{-1}$)
$$\le C \int_1^{\infty} x^{\frac{r}{p}-2-\frac{q}{p}+\frac{q}{2}} h(x) \int_{x^{\frac{2}{p}-1}}^{\infty} t^{\frac{q}{2}-1} \exp\Big(-\frac{t\lambda^2}{8}\Big)\,dt\,dx
\le C \int_1^{\infty} \Big(\int_1^{t^{\frac{p}{2-p}}} x^{\frac{r}{p}-2-\frac{q}{p}+\frac{q}{2}} h(x)\,dx\Big)\, t^{\frac{q}{2}-1} \exp\Big(-\frac{t\lambda^2}{8}\Big)\,dt$$
$$\le C \Big(\frac{r}{p}-1-\frac{q}{p}+\frac{q}{2}\Big)^{-1} \int_1^{\infty} t^{\frac{r-2}{2-p}}\, h\big(t^{\frac{p}{2-p}}\big) \exp\Big(-\frac{t\lambda^2}{8}\Big)\,dt < \infty. \tag{3.5}$$

For $J_{23}$, note that the assumptions $r > 1 + p/2$ and $1 \le p < 2$ imply $(2-p)/(12(r-p)) < 1/6$, so we may choose $\beta$ with $0 < \beta < (2-p)/(12(r-p))$. Then
$$J_{23} = 4q \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_{n^{1/p}}^{\infty} \Big(\frac{1}{4(\beta y^2 n^{-1}\lambda^2 + 1)}\Big)^{\frac{1}{12\beta}} y^{q-1}\,dy
\le C \int_1^{\infty} x^{\frac{r}{p}-2-\frac{q}{p}} h(x) \int_{x^{1/p}}^{\infty} \Big(\frac{1}{4(\beta y^2 x^{-1}\lambda^2 + 1)}\Big)^{\frac{1}{12\beta}} y^{q-1}\,dy\,dx \tag{3.6}$$
$$\le C \int_1^{\infty} x^{\frac{r}{p}-2-\frac{q}{p}+\frac{1}{12\beta}} h(x) \int_{x^{1/p}}^{\infty} \Big(\frac{1}{4 y^2 \lambda^2}\Big)^{\frac{1}{12\beta}} y^{q-1}\,dy\,dx
\le C \int_1^{\infty} x^{\frac{r}{p}-2-\frac{q}{p}+\frac{1}{12\beta}} h(x) \int_{x^{1/p}}^{\infty} y^{-\frac{1}{6\beta}+q-1}\,dy\,dx$$
$$\le C \int_1^{\infty} x^{\frac{r}{p}-2-\frac{q}{p}+\frac{1}{12\beta}-\frac{1}{6\beta p}+\frac{q}{p}} h(x)\,dx
= C \int_1^{\infty} x^{\frac{r-2p}{p}-\frac{2-p}{12\beta p}} h(x)\,dx < \infty,$$
by the choice of $\beta$.

Hence, by combining (3.1)-(3.6), for $1 \le p < 2$ and $r > 1 + p/2$ we have
$$\sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n)\, E\Big\{\max_{1\le k\le n}|S_k| - \epsilon n^{\frac{1}{p}}\Big\}_+^q < \infty. \qquad \Box$$

Proof of (ii): We have
$$\sum_{n=1}^{\infty} n^{\frac{r}{p}-2} h(n)\, E\Big\{\sup_{k\ge n}\big|k^{-\frac{1}{p}}S_k\big| - \epsilon\Big\}_+^q
= \sum_{n=1}^{\infty} n^{\frac{r}{p}-2} h(n) \int_0^{\infty} P\Big\{\sup_{k\ge n}\big|k^{-\frac{1}{p}}S_k\big| > \epsilon + x^{\frac{1}{q}}\Big\}\,dx$$
$$= \sum_{i=1}^{\infty} \sum_{n=2^{i-1}}^{2^i-1} n^{\frac{r}{p}-2} h(n) \int_0^{\infty} P\Big\{\sup_{k\ge n}\big|k^{-\frac{1}{p}}S_k\big| > \epsilon + x^{\frac{1}{q}}\Big\}\,dx
\le C \sum_{i=1}^{\infty} 2^{i\left(\frac{r}{p}-1\right)} h\big(2^i\big) \int_0^{\infty} P\Big\{\sup_{k\ge 2^{i-1}}\big|k^{-\frac{1}{p}}S_k\big| > \epsilon + x^{\frac{1}{q}}\Big\}\,dx$$
$$\le C \sum_{i=1}^{\infty} 2^{i\left(\frac{r}{p}-1\right)} h\big(2^i\big) \sum_{l=i}^{\infty} \int_0^{\infty} P\Big\{\max_{2^{l-1}\le k<2^l}\big|k^{-\frac{1}{p}}S_k\big| > \epsilon + x^{\frac{1}{q}}\Big\}\,dx$$
$$\le C \sum_{l=1}^{\infty} \Big(\sum_{i=1}^{l} 2^{i\left(\frac{r}{p}-1\right)} h\big(2^i\big)\Big) \int_0^{\infty} P\Big\{\max_{2^{l-1}\le k<2^l}|S_k| > \big(\epsilon + x^{\frac{1}{q}}\big)\, 2^{\frac{l-1}{p}}\Big\}\,dx$$
$$\le C \sum_{l=1}^{\infty} 2^{l\left(\frac{r}{p}-1\right)} h\big(2^l\big) \int_0^{\infty} P\Big\{\max_{1\le k<2^l}|S_k| > 2^{\frac{l-1}{p}}\epsilon + 2^{\frac{l-1}{p}} x^{\frac{1}{q}}\Big\}\,dx$$
(letting $y = 2^{\frac{(l-1)q}{p}} x$)
$$\le C \sum_{l=1}^{\infty} 2^{l\left(\frac{r}{p}-1-\frac{q}{p}\right)} h\big(2^l\big) \int_0^{\infty} P\Big\{\max_{1\le k\le 2^l}|S_k| > 2^{-\frac{1}{p}}\, 2^{\frac{l}{p}}\epsilon + y^{\frac{1}{q}}\Big\}\,dy$$
$$\le C \sum_{n=1}^{\infty} n^{\frac{r}{p}-2-\frac{q}{p}} h(n) \int_0^{\infty} P\Big\{\max_{1\le k\le n}|S_k| > 2^{-\frac{1}{p}}\, n^{\frac{1}{p}}\epsilon + y^{\frac{1}{q}}\Big\}\,dy < \infty,$$
where the last series is finite by the proof of (i) with $\epsilon$ replaced by $2^{-\frac{1}{p}}\epsilon$. This completes the proof. $\Box$

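To close, here is a purely illustrative Monte Carlo sketch of how the summands in part (i) of Theorem 1 are formed, mirroring the simulation in Section 1. Every choice below is a hypothetical assumption rather than part of the paper: $p = 1$, $r = 2$, $q = 1$, $h \equiv 1$, $\epsilon = 1$, truncated geometric weights, and i.i.d. standard normal innovations in place of a general NA sequence. The sketch does not verify the theorem; it only estimates the first few terms $n^{r/p-2-q/p} h(n) E\{\max_{1\le k\le n}|S_k| - \epsilon n^{1/p}\}_+^q$.

```python
# Illustrative only: Monte Carlo estimates of the first few summands of the
# series in Theorem 1(i), under the hypothetical choices described above.
import numpy as np

rng = np.random.default_rng(4)
p, r, q, eps = 1.0, 2.0, 1.0, 1.0            # r > 1 + p/2 and 1 <= p < 2 hold
M = 20
a = 0.5 ** np.abs(np.arange(-M, M + 1))      # truncated, absolutely summable weights

def summand(n, reps=2000):
    vals = np.empty(reps)
    for t in range(reps):
        X = rng.standard_normal(n + 2 * M)   # innovations feeding Y_1, ..., Y_n
        Y = np.array([a @ X[j: j + 2 * M + 1] for j in range(n)])
        S = np.cumsum(Y)
        vals[t] = max(np.max(np.abs(S)) - eps * n ** (1.0 / p), 0.0) ** q
    return n ** (r / p - 2.0 - q / p) * vals.mean()

print([round(summand(n), 3) for n in (1, 2, 4, 8, 16, 32)])
```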