On the Convergence of Moments in a Martingale Central Limit Theorem Krzysztof S. Kubacki

Abstract. Let {Snk, Fnk, k ≥ 1} be a zero-mean, square-integrable martingale for each n ≥ 1, and let Xnk = Snk − Sn,k−1 (Sn0 = 0), k ≥ 1, denote the martingale differences. Let Tn be a positive stopping time w.r.t. {Fnk, k ≥ 1}, and assume that Tn −→P ∞ (convergence in probability to infinity). We give sufficient conditions for both SnTn =⇒ X (stably) and E|SnTn|^{2r} −→ E|X|^{2r} (for some r ≥ 1), where X is a mixture of normal random variables. In the special case where Tn, Xn1, Xn2, . . . are independent random variables for each n, these conditions are both sufficient and necessary for the mixing convergence of SnTn and the convergence of moments. The results obtained extend the well-known Robbins theorem to randomly stopped martingales and to the convergence of moments. As an application of our results we give an extension of Bernstein's theorem to sums with random indices and of Raikov's theorem to the convergence of moments.

1. Background  The past forty years or so have seen an intensive development of studies devoted to limit laws of randomly indexed sequences of random variables. This development has been driven by numerous applications of the resulting theorems in queueing theory, reliability theory, renewal theory, sequential analysis and other areas of mathematics. Essentially, the studies of the limit behaviour of randomly indexed sequences of random variables have been two-pronged: attempts have been made to find the limit laws for sequences of partial sums of independent or weakly dependent random variables both in the case where the sequence of random indices is independent of the summands and in the case where it may depend on them.

The first direction of research was originated by Robbins (1948). He observed that if {Xk, k ≥ 1} is a sequence of independent and identically distributed random variables with EX1 = a, Var(X1) = c² < ∞, Sk = X1 + · · · + Xk, and if {υn, n ≥ 1} is a sequence of positive, integer-valued random variables, independent of {Xk, k ≥ 1} and such that Eυn = αn, Var(υn) = βn² < ∞, then the limiting distribution of (Sυn − ESυn)/√Var(Sυn) may fail to be normal, or even infinitely divisible.


To state his results exactly, put σn² = Var(Sυn), dn = aβn/σn, hn(t) = E exp{it(Sυn − ESυn)/σn}, and gn(t) = E exp{it(υn − αn)/βn}. Robbins proved the following two statements.

(A) If σn → ∞ and βn/σn² −→ 0 as n → ∞, then for all real t

|hn(t) − gn(tdn) exp{−t²(1 − dn²)/2}| −→ 0 as n → ∞.

(B) Assume that a = EX1 = 0 and that (υn − αn)/βn =⇒ Z, a random variable with characteristic function g(·) and distribution function G(·). Clearly, hn(t) = E exp(itSυn/(c√αn)) in this case. If αn −→ ∞ and βn/αn −→ r as n → ∞, where 0 < r < ∞, then for all real t (as n → ∞)

hn(t) −→ g(irt²/2) exp(−t²/2) = ∫₀^∞ e^{−t²y/2} dG₁(y),

where G₁(y) = G((y − 1)/r) = P[rZ + 1 < y].

These results were generalized by Gnedenko and Fahim (1969), Szász and Freyer (1971), Szász (1972), Rosiński (1975), Rychlik (1976), Kubacki and Szynal (1987a, 1988a, b), Kruglov (1976, 1988), and Bethmann (1988), among others.

When υn depends on {Xk, k ≥ 1} it is in general not possible to obtain the distribution of Sυn explicitly. Intensive research has, however, been conducted in the context of the preservation of classical limit theory (laws of large numbers and central limit theorems) under random time changes. This line of investigation was originated by Anscombe (1952). He observed that if {Yn, n ≥ 1} is a sequence of random variables such that Yn =⇒ µ as n → ∞ and

∀ε > 0 ∃δ > 0 : lim sup_{n→∞} P[ max_{|i−n|≤δn} |Yi − Yn| ≥ ε ] ≤ ε,

then YNn =⇒ µ as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn/an −→P 1 as n → ∞, with a sequence {an, n ≥ 1} of positive integers going to infinity.
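Statement (B) above can be illustrated numerically. The following Python sketch uses assumed concrete choices that are not from the paper (c = 1, a two-point index distribution with βn/αn = r = 1/2): it samples Sυn/√αn and checks that its fourth moment is near the normal-mixture value 3·E(1 + rZ)² = 3.75 rather than the Gaussian value 3, so the limit law is a nondegenerate mixture.

```python
import numpy as np

def robbins_mixture_sample(n=100, r=0.5, n_samples=100_000, seed=0):
    """Sample S_{v_n}/sqrt(alpha_n) with X_k i.i.d. N(0,1), alpha_n = n, and
    v_n equal to n(1-r) or n(1+r) with probability 1/2 each, so that
    (v_n - alpha_n)/beta_n => Z with Z = -1 or +1 (illustrative choices)."""
    rng = np.random.default_rng(seed)
    lo, hi = int(n * (1 - r)), int(n * (1 + r))
    pick_hi = rng.random(n_samples) < 0.5
    out = np.empty(n_samples)
    for size, mask in ((lo, ~pick_hi), (hi, pick_hi)):
        m = int(mask.sum())
        # sum the first `size` i.i.d. N(0,1) summands, normalised by sqrt(n)
        out[mask] = rng.standard_normal((m, size)).sum(axis=1) / np.sqrt(n)
    return out

s = robbins_mixture_sample()
# mixture variance E(1 + rZ) = 1, but 4th moment 3E(1 + rZ)^2 = 3.75 > 3
print(round(s.var(), 3), round((s**4).mean(), 3))
```

The point of the check is that the second moment alone cannot distinguish the mixture from N(0,1); the fourth moment can.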

A more general result than that of Anscombe (1952) has been obtained by Aldous (1978), Csörgő and Rychlik (1981), and Kubacki and Szynal (1985, 1986), among others. Given the results of Kubacki and Szynal (1986) it is hardly surprising that these two branches of the theory are closely connected. In fact, this connection allowed them to extend some well-known results of the earlier studies to a larger class of random indices; see Kubacki and Szynal (1985, 1987b, 1988a). The procedure used there is simple, but in addition to the Anscombe type theorem a limit theorem of Robbins type has been required.

This note deals with the convergence of moments in a Robbins type limit theorem for randomly stopped martingales. The convergence of moments of randomly stopped martingales {|STn/kn|^{2r}, n ≥ 1} (with a sequence {kn, n ≥ 1} of constants going to infinity) has been considered by Irle (1987) under the assumption that the sequence {n^{−1/2}Sn, n ≥ 1} obeys the central limit theorem. Our approach is different.

2. Introduction  Let {Snk, Fnk, k ≥ 1} be a zero-mean, square-integrable martingale for each n ≥ 1, and let Xnk = Snk − Sn,k−1 (Sn0 = 0), k ≥ 1, denote the martingale differences. Put σ²nk = E(X²nk | Fn,k−1), ϕnk(t) = E(exp(itXnk) | Fn,k−1), F0 = ∩_{n=1}^∞ Fn0, and

fnk(t) = ∏_{j=1}^{k} ϕnj(t),  V²nk = Σ_{j=1}^{k} σ²nj,  b²nk = max_{1≤j≤k} σ²nj,  U²nk = Σ_{j=1}^{k} X²nj.
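For concreteness, the quantities just defined can be computed for a toy triangular array; the sketch below (Xnk = Zk/√n with Zk i.i.d. N(0,1), an assumption for illustration only, not the paper's setting) has σ²nk = 1/n, so V²nk = k/n and b²nk = 1/n, while U²nk is the cumulated sum of squared differences.

```python
import numpy as np

n, k = 1000, 1500
rng = np.random.default_rng(0)
z = rng.standard_normal(k)
x = z / np.sqrt(n)                               # martingale differences X_{n1}, ..., X_{nk}
V2 = np.cumsum(np.full(k, 1.0 / n))              # V^2_{nj} = j/n: cumulated conditional variances
U2 = np.cumsum(x**2)                             # U^2_{nj}: cumulated squared differences
b2 = np.maximum.accumulate(np.full(k, 1.0 / n))  # b^2_{nj}: largest conditional variance so far
```

By the law of large numbers U²nk stays close to V²nk, which is the mechanism behind condition (30) below.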

Now let {Tn, n ≥ 1} be a sequence of positive, integer-valued random variables (r.v.'s) such that Tn is a stopping time w.r.t. {Fnk, k ≥ 1} for each n. Assume that Tn −→P ∞. (All limits are taken as n → ∞, unless stated otherwise.) Let η² denote a positive, almost surely finite r.v., and let µ2r (r > 0) denote the 2r-th absolute moment of a r.v. X with characteristic function E exp(−t²η²/2). Obviously, X is a mixture of normal r.v.'s. This note gives sufficient conditions for both SnTn =⇒ X and E|SnTn|^{2r} −→ µ2r, for some r > 1.

Limit theorems for randomly stopped martingales can be deduced much as in the classical case if η² is deterministic; see, e.g., Gaenssler et al. (1978), Rychlik (1979) and Beśka et al. (1982). To get a limit theorem when η² is random, one usually assumes that either η² is an F0-measurable r.v. (cf., e.g., Eagleson (1975) and Kubacki (1990)) or the σ-fields Fnk increase as n increases (cf., e.g., Rootzén (1977) and Durrett and Resnick (1978)). A survey of limit theorems for martingales may be found in Hall and Heyde (1980) and Liptser and Shiryayev (1986).

Several papers which have recently appeared are devoted to the convergence of moments in random versions of the central limit theorem. The case when Tn, Xn1, Xn2, . . . are independent r.v.'s has been investigated by Marushin and Krivorukov (1984), Korolev (1985), Kruglov (1988), Kubacki and Szynal (1988b), and Kubacki (1989), while the case when Xn1, Xn2, . . . are independent r.v.'s and Tn, n ≥ 1, are stopping times has been investigated by Chow et al. (1979), Gut and Janson (1986)

and Irle (1987), among others; see also the references in their papers. A general case, where Xnk, k ≥ 1, are martingale differences and Tn, n ≥ 1, are stopping times, has been considered by Kubacki (1990) under the additional assumption that η² is an F0-measurable r.v. The aim of this note is to consider the general case without this additional assumption on η².

Throughout, I(A) denotes the indicator function of the set A. All equalities and inequalities between r.v.'s are understood in the sense "with probability one."

3. Weak Convergence of SnTn  In the sequel we shall need the following results.

PROPOSITION 1. Suppose that

V²nTn =⇒ η²,  (1)

and for all ε > 0,

Σ_{k=1}^{Tn} E(X²nk I[|Xnk| ≥ ε] | Fn,k−1) −→P 0.  (2)

Then

b²nTn −→P 0,  (3)

and for all real t,

fnTn(t) =⇒ e^{−t²η²/2}.  (4)

Proof. Only some modifications are necessary in the proof of Theorem 2.1 of Kubacki (1990) to make it applicable in this case; cf. also the proof of the first part of Theorem 3.5 in Hall and Heyde (1980).

As a simple consequence of Proposition 1 we get the following extension of Statement (B) to independent but nonidentically distributed summands (cf. also Theorem 2 of Kruglov (1976)).

THEOREM 1. Let (Xnk : k = 1, . . . , υn; n = 1, 2, . . . ) be a randomly chosen triangular array of independent r.v.'s with EXnk = 0, EX²nk = σ²nk < ∞, k ≥ 1. Suppose that υn is a positive integer-valued r.v., independent of Xnk, k ≥ 1, and such that υn −→P ∞. If V²nυn =⇒ η² and, for all ε > 0,

Σ_{k=1}^{υn} ∫_{|x|≥ε} x² dFXnk(x) −→P 0,  (5)

then b²nυn −→P 0 and Snυn =⇒ X.

Proof. Obvious; note that E exp(itSnυn) = E fnυn(t).

Remark 1. Note that in Theorem 1 we do not assume that Snkn converges in distribution (for a sequence {kn, n ≥ 1} of positive constants). Let mn = EV²nυn, c²n = Var(V²nυn) < ∞, n ≥ 1. Suppose that there exists a sequence {an, n ≥ 1} of positive constants such that mn/an −→ m and cn/an −→ c, and that (V²nυn − mn)/cn =⇒ T. Then V²nυn/an =⇒ cT + m. Moreover, if for all ε > 0,

(1/an) Σ_{k=1}^{υn} ∫_{|x|≥ε√an} x² dFXnk(x) −→P 0,

then an^{−1/2} Snυn =⇒ X.

THEOREM 2. Let (Xnk : k = 1, . . . , υn; n = 1, 2, . . . ) be a randomly chosen triangular array of independent r.v.'s as in Theorem 1. Suppose in addition that Xnk, k ≥ 1, are symmetric. Then

b²nυn/V²nυn −→P 0 and Snυn/Vnυn =⇒ N0,1  (6)

if and only if for all ε > 0,

(1/V²nυn) Σ_{k=1}^{υn} ∫_{|x|≥εVnυn} x² dFXnk(x) −→P 0.  (7)

Proof. The if part follows by Proposition 1; put Fnk = σ(υn, Xn1, . . . , Xnk). Suppose now that (6) holds. Define ϕnjk(t) = ϕnj(t/Vnk) and note that for all j ≤ k, |ϕnj(t/Vnk) − 1| ≤ ½ E(tXnj/Vnk)² = ½ t² σ²nj/V²nk. Hence

Σ_{j=1}^{υn} |ϕnjυn(t) − 1| ≤ ½ t²,  (8)

and

max_{1≤j≤υn} |ϕnjυn(t) − 1| ≤ ½ t² ( max_{1≤j≤υn} σ²nj )/V²nυn −→P 0.  (9)

From the Taylor series expansion it is seen that

Σ_{j=1}^{υn} |log(1 + znj) − znj| ≤ Σ_{j=1}^{υn} |znj|² ≤ max_{1≤j≤υn} |znj| Σ_{j=1}^{υn} |znj|.

This with znj = ϕnjυn(t) − 1, by (8) and (9), yields for any fixed t,

Σ_{j=1}^{υn} |log ϕnjυn(t) + 1 − ϕnjυn(t)| ≤ ¼ t⁴ ( max_{1≤j≤υn} σ²nj )/V²nυn −→P 0.

Thus, we have (with fnk(t) = ∏_{j=1}^{k} ϕnjk(t), k ≥ 1, n ≥ 1)

log fnυn(t) = Σ_{k=1}^{υn} log ϕnkυn(t) = Σ_{k=1}^{υn} [ϕnkυn(t) − 1] + An(t),  (10)

where for each fixed t, |An(t)| ≤ t⁴/4 and An(t) −→P 0; cf. the proof of Theorem 3.1 in Kubacki and Szynal (1988b). Since E exp(itSnυn/Vnυn) = E fnυn(t) = E exp(log fnυn(t)), (6) and (10) imply that

E exp( Σ_{k=1}^{υn} [ϕnkυn(t) − 1] ) −→ exp(−t²/2).  (11)

Since Xnk, k ≥ 1, are symmetric and 0 ≤ 1 − cos(x) ≤ x²/2, then

ϕnkυn(t) = ∫_{−∞}^{+∞} cos(tx/Vnυn) dFXnk(x) ≥ 1 − ½ t² σ²nk/V²nυn,

which together with (11) implies that

1 ≤ E exp( ½ t² + Σ_{k=1}^{υn} ∫_{−∞}^{+∞} [cos(tx/Vnυn) − 1] dFXnk(x) ) −→ 1.

Since x ≤ e^x − 1 for x ≥ 0, we get

0 ≤ E( ½ t² + Σ_{k=1}^{υn} ∫_{−∞}^{+∞} [cos(tx/Vnυn) − 1] dFXnk(x) ) −→ 0,

which implies, similarly as in the classical case (see e.g. Chung (1968), p. 190), that (7) holds.

In what follows we shall need some results on stable convergence and weak-L1 convergence from Aldous and Eagleson (1978) and Hall and Heyde (1980). Let {Yn, n ≥ 1} be a sequence of r.v.'s defined on a probability space (Ω, F, P). Suppose that Yn =⇒ Y. Then Yn =⇒ Y (stably) if and only if for all real t,

exp(itYn) −→ exp(itY′) weakly in L1,  (12)

where the r.v. Y′ has the same distribution as Y, and E[exp(itY′)I(B)] is a continuous function of t for all B ∈ F; cf. Theorem 3.1 of Hall and Heyde (1980). Moreover, Yn =⇒ Y (mixing) if and only if for all real t, exp(itYn) −→ E exp(itY) weakly in L1.

A sequence {ξn, n ≥ 1} of integrable r.v.'s is said to converge weakly in L1 to ξ (written ξn −→ ξ (ω−L1)) if for all bounded F-measurable r.v.'s Z,

EξnZ −→ EξZ.  (13)

An equivalent condition is that for all measurable sets B with P(B) > 0, EξnI(B) −→ EξI(B). We are now ready to prove the following result.

THEOREM 3. Suppose that (2) holds, and that

V²nTn −→P η².  (14)

Furthermore, suppose that for all real t and ε > 0,

[exp(itSn,Tn∧γnε)/fn,Tn∧γnε(t)] I[e^{−t²η²/2} > ε] −→ I[e^{−t²η²/2} > ε]  (ω−L1),  (15)

where γnε = γn^{t,ε} is a stopping time such that |fn,Tn∧γnε(t)| > ε/2. Then

SnTn =⇒ X (stably).  (16)

Remark 2. It is possible to replace (15) by a measurability condition on η². If η² is an F0-measurable r.v. and conditions (14) and (2) hold, then the conclusion of Theorem 3 remains true, provided the stability is dropped; cf. Kubacki (1990). However, conditions (14) and (2) alone are not sufficient to imply (16) even in the case where {Tn, n ≥ 1} is a sequence of positive constants; cf. Example 3.3.4 in Hall and Heyde (1980).

The proof of Theorem 3 is based on the following auxiliary results.

PROPOSITION 2. Suppose that (14) holds. Then (2) is satisfied if and only if (3) holds and for all real t,

fnTn(t) −→P e^{−t²η²/2}.  (17)

Proof. Only some modifications are necessary in the proof of Theorem 2.1 of Kubacki (1990) to make it applicable in this case; see also the proof of the first part of Theorem 3.5 in Hall and Heyde (1980).

LEMMA 1. Let t and ε > 0 be arbitrarily fixed. There exists a sequence {γn^{t,ε}, n ≥ 1} of stopping times w.r.t. {Fnk, k ≥ 1} such that

γn^{t,ε} ≤ Tn and |f^{−1}_{n,Tn∧γn^{t,ε}}(t)| < 2/ε.  (18)

Proof. Fix n. Put An = {|fnTn(t)| ≤ ε/2} and define

γ̂n(ω) = min{ k : |fn,Tn∧k(t)| ≤ ε/2 } if ω ∈ An, and γ̂n(ω) = ∞ if ω ∉ An.

Since An ∈ FnTn and fn,Tn∧k(t) = Σ_{j=1}^{k−1} fnj(t)I[Tn = j] + fnk(t)I[Tn ≥ k], we have for all m ≥ 1

{γ̂n = m} = { |fn,Tn∧k(t)| > ε/2 ∀k ≤ m − 1; |fn,Tn∧m(t)| ≤ ε/2 } ∩ An ∈ Fn,m−1,

so γ̂n is a stopping time w.r.t. {Fnk, k ≥ 1}. Moreover, γ̂n ≤ Tn and

|fn,Tn∧γ̂n(t)| ≤ ε/2 on the set {γ̂n < ∞}.

Let γn = (γ̂n − 1)I[γ̂n < ∞] + TnI[γ̂n = ∞]. Since

{γn = m} = {γ̂n = m + 1} ∈ Fnm for all m ≥ 1,

we conclude that γn is a stopping time w.r.t. {Fnk, k ≥ 1}. Furthermore, we have γn ≤ Tn and |fn,Tn∧γn(t)| > ε/2, which gives (18). Here and in the sequel we write γn instead of γnε or γn^{t,ε} when no misunderstanding may arise.

LEMMA 2. Let t and ε > 0 be arbitrarily fixed, and let {γn, n ≥ 1} be the sequence of stopping times defined in the proof of Lemma 1. Then (17) implies that

P[γn < Tn; e^{−t²η²/2} > ε] −→ 0.  (19)

Proof. It follows from the proof of Lemma 1 that

{γn < Tn} = {γ̂n ≤ Tn; γ̂n < ∞} ⊆ {|fnTn(t)| ≤ ε/2}.

Together with (17) this implies (19).

PROPOSITION 3. Suppose that (15) and (17) hold. Then (16) is satisfied.

Proof. Fix t. In view of (12) it suffices to prove that for all B ∈ F,

E exp(itSnTn)I(B) −→ E exp(−t²η²/2)I(B).

Since

|E{exp(itSnTn) − exp(−t²η²/2)} I[e^{−t²η²/2} ≤ ε]| ≤ 2 P[e^{−t²η²/2} ≤ ε] −→ 0 as ε → 0,

and

|E{exp(itSnTn) − exp(itSn,Tn∧γn)} I[e^{−t²η²/2} > ε]| ≤ 2 P[γn < Tn; e^{−t²η²/2} > ε] −→ 0 by (19),

it remains to prove that for all B ∈ F,

E exp(itSn,Tn∧γn) I[e^{−t²η²/2} > ε; B] −→ E e^{−t²η²/2} I[e^{−t²η²/2} > ε; B].  (20)

But the left hand side of (20) is equal to

E{ [exp(itSn,Tn∧γn)/fn,Tn∧γn(t)] I[e^{−t²η²/2} > ε; B] (fn,Tn∧γn(t) − e^{−t²η²/2}) }
+ E{ [exp(itSn,Tn∧γn)/fn,Tn∧γn(t)] e^{−t²η²/2} I[e^{−t²η²/2} > ε; B] }
= J1,n + J2,n, say.

It follows from (17), (18) and (19) that

|J1,n| ≤ (2/ε) E|fn,Tn∧γn(t) − e^{−t²η²/2}| I[e^{−t²η²/2} > ε; B]
≤ (2/ε) E|fnTn(t) − e^{−t²η²/2}| + (4/ε) P[γn < Tn; e^{−t²η²/2} > ε] −→ 0.

Furthermore, (13) and (15) imply that

J2,n −→ E e^{−t²η²/2} I[e^{−t²η²/2} > ε; B].

This shows that (20) holds.

The proof of Theorem 3 is easily based on Propositions 2 and 3 and is not detailed here. As a simple consequence of Theorem 3 we get

THEOREM 4. Suppose that (2), (14) with η² = 1, and (15) are satisfied. Then SnTn =⇒ N0,1 (mixing).

From Theorem 3 one can also obtain an extension of Corollary 3.1 of Hall and Heyde (1980) to randomly stopped martingales.

COROLLARY 1. Suppose that (2) and (14) hold, and that

Fnk ⊆ Fn+1,k for k ≥ 1 and n ≥ 1.  (21)

Then (16) is satisfied.
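The situation covered by Corollary 1 can be sketched numerically. The construction below is illustrative and not taken from the paper: with one fixed i.i.d. N(0,1) sequence {Xk} and Xnk = Xk/√n, the σ-fields Fnk = σ(X1, . . . , Xk) do not depend on n, so (21) holds; the stopping time Tn is determined by X1 alone, V²nTn = Tn/n equals η² = 1 + 2·I[X1 ≥ 0], which is genuinely random (and not F0-measurable), and SnTn approaches the normal mixture with variance η².

```python
import numpy as np

def stopped_martingale_sample(n=400, n_samples=100_000, seed=2):
    """Sample S_{nT_n} for X_{nk} = X_k/sqrt(n) and T_n = n*(1 + 2*I[X_1 >= 0])."""
    rng = np.random.default_rng(seed)
    x1 = rng.standard_normal(n_samples)
    eta2 = np.where(x1 >= 0.0, 3.0, 1.0)      # limit variance eta^2 = 1 + 2*I[X_1 >= 0]
    t = (n * eta2).astype(int)                # stopping times T_n = n or 3n
    # given X_1, the remaining T_n - 1 summands are i.i.d. N(0,1), so their
    # normalised sum is exactly N(0, (T_n - 1)/n); we sample it directly
    rest = rng.standard_normal(n_samples) * np.sqrt((t - 1) / n)
    return x1 / np.sqrt(n) + rest             # S_{nT_n}

s = stopped_martingale_sample()
# the mixture has variance E(eta^2) = 2 and 4th moment 3*E(eta^4) = 15,
# whereas a normal law with variance 2 would have 4th moment 12
print(round(s.var(), 2), round((s**4).mean(), 2))
```

The fourth moment separates the stable mixture limit from a plain normal limit with the same variance.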

The proof of Corollary 1 is based on the following auxiliary results.

LEMMA 3. If η² is a.s. constant, then (17) implies that for all real t,

E exp(itSnTn) −→ exp(−t²η²/2).

Proof. See Beśka et al. (1982).

LEMMA 4. Let {τn, n ≥ 1} be a sequence of stopping times w.r.t. {Fnk, k ≥ 1} such that

V²nτn −→P 0.  (22)

Then fnτn(t) −→P 1 and Snτn −→P 0.

Proof. From (22) it follows that for all real t, fnτn(t) −→P 1. Hence and by Lemma 3 we conclude that Snτn −→P 0.

LEMMA 5. Suppose that (2) and (14) hold. Then there exists a nondecreasing sequence {τn, n ≥ 1} of stopping times w.r.t. {Fnk, k ≥ 1} such that

τn −→P ∞ and V²nτn −→P 0.  (23)

Proof. Since (2) implies (3), b²nTn −→P 0. One can now choose a nondecreasing sequence {mn, n ≥ 1} of positive integers such that

mn −→ ∞,  P[Tn < mn] −→ 0 and mn b²nTn −→P 0.

Since V²nmn ≤ mn b²nmn, we have for all ε > 0

P[V²nmn > ε] ≤ P[mn b²nmn > ε, mn ≤ Tn] + P[V²nmn > ε, mn > Tn]
≤ P[mn b²nTn > ε] + P[mn > Tn] −→ 0.

This completes the proof of Lemma 5 (take τn = mn).

PROPOSITION 4. Suppose that (2), (14) and (21) hold. Then (15) is satisfied. Moreover, for all real t and ε > 0,

Z^n := Z^{n,t,ε} = exp(itSn,Tn∧γnε)/fn,Tn∧γnε(t) −→ 1  (ω−L1).  (24)

Proof. Fix t and ε > 0. Note that Z^n_k = exp(itSn,Tn∧γn∧k)/fn,Tn∧γn∧k(t), k ≥ 1, forms a uniformly bounded martingale w.r.t. {Fnk, k ≥ 1}. Moreover, for the sequence {τn, n ≥ 1} defined in Lemma 5, we have V²n,Tn∧γn∧τn −→P 0. Hence and by Lemma 4, Z^n_{τn} −→P 1, so

Z^n_{τn} −→ 1 in L1.  (25)

Let m ≥ 1 be fixed and let B ∈ Fmτm; then by (21), B ∈ Fnτn for all n ≥ m. For such an n, E[Z^n I(B)] = E[I(B) E(Z^n | Fnτn)] = E[Z^n_{τn} I(B)]. Hence and by (25), for all B ∈ Fmτm we have

E[Z^n I(B)] = E[Z^n_{τn} I(B)] −→ P(B).  (26)

Let F∞ = ∨_{n=1}^∞ Fnτn be the σ-field generated by ∪_{n=1}^∞ Fnτn. For any B ∈ F∞ and any δ > 0 there exist an m and a B′ ∈ Fmτm such that P(B△B′) < δ (△ denotes the symmetric difference). Since

sup_{n≥1} |E[Z^n I(B′)] − E[Z^n I(B)]| ≤ sup_{n≥1} E[|Z^n| I(B△B′)] < (2/ε)δ,

the left hand side can be made arbitrarily small by choosing δ sufficiently small. It now follows from (26) that for any B ∈ F∞, E[Z^n I(B)] → P(B). This in turn implies that for any bounded F∞-measurable r.v. Y, E[Z^n Y] −→ E(Y). Finally, if B ∈ F, then

E[Z^n I(B)] = E[Z^n E(I(B)|F∞)] −→ E[E(I(B)|F∞)] = P(B).

This establishes (24).

The proof of Corollary 1 is easily based on Theorem 3 and Proposition 4 and is not detailed here.

Obviously, conditions ensuring the central limit theorem for randomly stopped martingales ought to reduce to the classical ones when {Tn, n ≥ 1} is a sequence of positive integers. (Corollary 1 reduces to Corollary 3.1 of Hall and Heyde (1980) if {Tn, n ≥ 1} is a sequence of positive integers.) There are, however, martingale arrays (Snk, Fnk) and sequences {Tn, n ≥ 1} of stopping times such that SnTn satisfies the central limit theorem but for no sequence {kn, n ≥ 1} of positive integers does Snkn have a nondegenerate limiting distribution.

Example 1. Let U, X1, X2, . . . be independent r.v.'s such that U has a uniform distribution on (0, 1) and, for each k, Xk has a normal distribution with mean 0 and variance 1. Put

Xnk = υn^{−1/2} Xk,  Snk = Σ_{j=1}^{k} Xnj,  Fnk = σ{υ1, . . . , υn, X1, . . . , Xk},

where

υ2n−1 = n I[U < 1/2] + 2n I[U ≥ 1/2], . . .

COROLLARY 3. . . . If, for all ε > 0,

s^{−2}υn Σ_{k=1}^{υn} ∫_{|x|≥εsυn} x² dFXk(x) −→P 0,

then s^{−1}υn Sυn =⇒ N0,1 (mixing).

Proof. Put Xnk = s^{−1}υn Xk, Fnk = σ{υ1, . . . , υn, X1, . . . , Xk}, and use Corollary 2.

Example 2. Let X, X1, X2, . . . be independent and identically distributed r.v.'s with EX = 0, EX² = 1, and let {υn, n ≥ 1} be a sequence of positive integer-valued r.v.'s independent of {Xk, k ≥ 1} and such that υn −→P ∞. Put

Xnk = υn^{−1/2} Xk,  Snk = Σ_{j=1}^{k} Xnj,  Fnk = σ{υn, X1, . . . , Xk}, k ≥ 1.

Then the martingale triangular array (Snk, Fnk : k = 1, . . . , υn; n = 1, 2, . . . ) does not satisfy (21). But it follows from Corollary 3 that Snυn := υn^{−1/2} Sυn =⇒ N0,1 (mixing). Then for all real t,

exp(itSnυn) −→ exp(−t²/2)  (ω−L1).  (27)

On the other hand, since V²nυn = 1 and E( Σ_{k=1}^{υn} X²nk I[|Xnk| ≥ ε] ) −→ 0 for all ε > 0, it follows from Proposition 2 that for all real t, fnυn(t) −→P exp(−t²/2). This together with (27) implies (24) and, consequently, (15).

Remark 3. Condition (15) is necessary for (16), provided that (17) holds. By using the same arguments as in the proof of Lemma 3.1 of Hall and Heyde (1980) one can deduce a different set of conditions for stable convergence: (16) holds if max_{1≤k≤Tn} |Xnk| −→P 0, U²nTn −→P η², and for all real t,

JnTn(t) = ∏_{k=1}^{Tn} (1 + itXnk) −→ 1  (ω−L1).  (28)

Note that

exp(itSnTn)/fnTn(t) = JnTn(t) exp{ −½ t² (U²nTn − V²nTn) − ½ t² Σ_{k=1}^{Tn} E(X²nk A(tXnk) | Fn,k−1) − R1,n − R2,n },

where the function A of the real variable x is defined by e^{ix} = 1 + ix − ½x² + ½x²A(x), R1,n = Σ_{k=1}^{Tn} {log ϕnk(t) − (ϕnk(t) − 1)} and R2,n = Σ_{k=1}^{Tn} {log(1 + itXnk) − itXnk − t²X²nk/2}; cf. Rychlik (1979), and Hall and Heyde (1980), p. 72. It is now easy to see that in some cases (e.g. if (2) holds and E|V²nTn − η²| −→ 0) condition (28) implies (15).

4. The convergence of moments of SnTn  In the sequel we shall need the following results.

THEOREM 5. Let (Snk, Fnk : k = 1, . . . , Tn; n = 1, 2, . . .) be a randomly chosen martingale triangular array. Suppose that

V²nTn −→P η² and EV²nTn −→ Eη².  (29)

Then the following three conditions are equivalent:

U²nTn −→P η²;  (30)

b²nTn −→P 0 and, for all ε > 0, Σ_{k=1}^{Tn} E(X²nk I[|Xnk| ≥ ε] | Fn,k−1) −→P 0;  (31)

b²nTn −→P 0 and, for all real t, fnTn(t) −→P e^{−t²η²/2}.  (32)

Proof. From Proposition 2 it follows that conditions (31) and (32) are equivalent. The equivalence of (30) and (31) follows via Lemmas 3.1–3.2 of Kubacki (1990).

It follows from Remark 2 that if η² is an F0-measurable r.v. then each of conditions (30)–(32) together with (29) implies that SnTn =⇒ X. In the special case where Tn, Xn1, Xn2, . . . are independent r.v.'s and η² = 1, the reverse implication holds true.

THEOREM 6. Let (Xnk : k = 1, . . . , υn; n = 1, 2, . . . ) be a randomly chosen triangular array of independent r.v.'s with EXnk = 0 and EX²nk = σ²nk < ∞. Suppose that {υn, n ≥ 1} is a sequence of positive integer-valued r.v.'s independent of Xnk, k ≥ 1, and such that υn −→P ∞. Suppose further that

EV²nυn −→ 1.  (33)

Then the following four conditions are equivalent:

b²nυn −→P 0, V²nυn −→P 1 and U²nυn −→P 1;  (34)

V²nυn −→P 1 and, for all ε > 0, Σ_{k=1}^{υn} ∫_{|x|≥ε} x² dFXnk(x) −→P 0;  (35)

b²nυn −→P 0 and, for all real t, fnυn(t) −→P e^{−t²/2};  (36)

b²nυn −→P 0 and Snυn =⇒ N0,1.  (37)

Proof. The equivalence of (34) and (35) follows from Theorem 5. Clearly, (35) =⇒ (36) and (36) =⇒ (37). It remains to prove that (37) =⇒ (35). The proof of the last implication is heavily based on the results of Bethmann (1988); see also Szász (1972) and Rosiński (1975). Define the q-quantile of the r.v. υn as the greatest integer ln(q) that satisfies the inequality P[υn < ln(q)] ≤ q < P[υn ≤ ln(q)].

LEMMA 6. For every n ≥ 1, let {ank, k ≥ 1} be a nondecreasing sequence of nonnegative real numbers, and let υn be a positive integer-valued r.v. Then anυn −→P a if and only if anln(q) −→ a for all q ∈ (0, 1).

PROPOSITION 5. Let (Xnk) and {υn, n ≥ 1} be as in Theorem 6. Suppose that b²nυn −→P 0 and EV²nυn −→ 1. Then Snυn =⇒ N0,1 if and only if Snln(q) =⇒ N0,1 for all q ∈ (0, 1).

Suppose (37). For every ε > 0, we have max_{1≤k≤υn} P[|Xnk| ≥ ε] ≤ b²nυn/ε² −→P 0. In view of Proposition 5 and Lemma 6, (37) implies that, for all q ∈ (0, 1),

Snln(q) =⇒ N0,1 and max_{1≤k≤ln(q)} P[|Xnk| ≥ ε] −→ 0.  (38)

Hence and by the Normal Convergence Criterion in Loève (1960), p. 316, we have for arbitrarily fixed τ > 0

lim_{n→∞} Σ_{k=1}^{ln(q)} σ²nk(τ) = 1,  (39)

where σ²nk(τ) = ∫_{|x|<τ} x² dFXnk(x) − ( ∫_{|x|<τ} x dFXnk(x) )², and V²nln(q) ≥ Σ_{k=1}^{ln(q)} σ²nk(τ). . . .

THEOREM 7. Let (Snk, Fnk : k = 1, . . . , Tn; n = 1, 2, . . .) be a randomly chosen martingale triangular array, and let r > 1. Suppose that

E|V²nTn − η²|^r −→ 0 and E(η^{2r}) < ∞.  (40)

If η² is not a.s. constant, assume also that (15) holds. Then the following three conditions are equivalent:

E|U²nTn − η²|^r −→ 0;

b²nTn −→P 0 and E{ Σ_{k=1}^{Tn} |Xnk|^{2r} } −→ 0;

b²nTn −→P 0, fnTn(t) −→P e^{−t²η²/2} and E|SnTn|^{2r} −→ µ2r.

Moreover, under (15) and (40), each of these implies that SnTn =⇒ X (stably) and E|SnTn|^{2r} −→ µ2r.

Proof. Only some modifications are necessary in the proof of Theorem 1.1 (the case r > 1) in Kubacki (1990) to make it applicable in this case; see also the proof of Theorem 3.5 in Hall and Heyde (1980).

From Theorems 6 and 7 it is easy to obtain the following result, which extends Bernstein's theorem to sums with random indices and Raikov's theorem to the convergence of moments.

THEOREM 8. Let (Xnk : k = 1, . . . , υn; n = 1, 2, . . . ) be a randomly chosen triangular array of independent r.v.'s with EXnk = 0 and EX²nk = σ²nk < ∞. Suppose that {υn, n ≥ 1} is a sequence of positive integer-valued r.v.'s independent of Xnk, k ≥ 1, and such that υn −→P ∞. Let r > 1. Then the following three conditions are equivalent:

b²nυn −→P 0, E|V²nυn − 1|^r −→ 0 and E|U²nυn − 1|^r −→ 0;

E|V²nυn − 1|^r −→ 0 and E{ Σ_{k=1}^{υn} E|Xnk|^{2r} } −→ 0;

b²nυn −→P 0, Snυn =⇒ N0,1 and E|Snυn|^{2r} −→ µ*2r := E|N0,1|^{2r}.

The following result extends Theorem 1 to the convergence of moments.

THEOREM 9. Let (Xnk : k = 1, . . . , υn; n = 1, 2, . . . ) be a randomly chosen triangular array of independent r.v.'s as in Theorem 8. If V²nυn =⇒ η² and, for some r > 1, EV^{2r}nυn −→ Eη^{2r} and

E{ Σ_{k=1}^{υn} E|Xnk|^{2r} } −→ 0,

then Snυn =⇒ X and E|Snυn|^{2r} −→ E|X|^{2r}.

Proof. Only some modifications are necessary in the proof of Theorem 3.2 of Kubacki and Szynal (1988b) to make it applicable in this case.

As a simple consequence of Theorem 2 and Theorem 1.4 of Kubacki (1990) we obtain the following

COROLLARY 4. Let (Xnk : k = 1, . . . , υn; n = 1, 2, . . . ) be a randomly chosen triangular array of independent r.v.'s as in Theorem 2. Let r > 1. Then the following three conditions are equivalent:

b²nυn/V²nυn −→P 0 and E|U²nυn/V²nυn − 1|^r −→ 0;

E{ V^{−2r}nυn Σ_{k=1}^{υn} E|Xnk|^{2r} } −→ 0;

b²nυn/V²nυn −→P 0, Snυn/Vnυn =⇒ N0,1 and E|Snυn/Vnυn|^{2r} −→ µ*2r.

Proof. Obvious; use Theorem 1.4 of Kubacki (1990) and Theorem 2.

We end this note with some applications of Theorem 8 to the convergence of moments in Robbins type theorems.

5. Applications  Let (Ynk : k = 1, . . . , υn; n = 1, 2, . . . ) be a randomly chosen triangular array of independent r.v.'s with EYnk = ank, Var(Ynk) = σ²nk < ∞. Suppose that υn is independent of Ynk, k ≥ 1, and that υn −→P ∞. Put

S̃nk = Σ_{j=1}^{k} Ynj,  Lnk = Σ_{j=1}^{k} anj,  V²nk = Σ_{j=1}^{k} σ²nj,  b²nk = max_{1≤j≤k} σ²nj,

and An = ES̃nυn, σn² = Var(S̃nυn), Δn² = Var(Lnυn), ϱn = EV²nυn. Note that An = ELnυn and σn² = ϱn + Δn²; cf. Kubacki and Szynal (1988b). Define dn² = Δn²/σn² and assume that dn² > 0 for all n ≥ 1. The following results generalize Theorem 2 of Kruglov (1988) to independent and nonidentically distributed r.v.'s and strengthen Theorem 3.2 of Kubacki and Szynal (1988b).

THEOREM 10. Suppose that dn² := E|σn^{−1}(Lnυn − An)|² −→ 0. Then

σn^{−2} b²nυn −→P 0 and σn^{−1}(S̃nυn − An) =⇒ N0,1

if and only if σn^{−2} V²nυn −→P 1 and, for all ε > 0,

σn^{−2} Σ_{k=1}^{υn} ∫_{|x|≥εσn} x² dFYnk(x + ank) −→P 0.

Let r > 1. Suppose that E|σn^{−1}(Lnυn − An)|^{2r} → 0. Then σn^{−2} b²nυn −→P 0, σn^{−1}(S̃nυn − An) =⇒ N0,1 and E|σn^{−1}(S̃nυn − An)|^{2r} −→ µ*2r if and only if

E|σn^{−2} V²nυn − 1|^r −→ 0 and E{ σn^{−2r} Σ_{k=1}^{υn} E|Ynk − ank|^{2r} } −→ 0.

(41)

Furthermore, suppose that P

2 %−1 n Vnυn −→ 1

(42)

and for all ε > 0,

%−1 n

υn Z X

P

√ k=1 |x|≥ε %n

x2 dFYnk (x + ank ) −→ 0.

(43)

Then there exists a r.v. Z such that

σn^{−1}(S̃nυn − An) =⇒ Z  (44)

if and only if, for a r.v. Y,

Δn^{−1}(Lnυn − An) =⇒ Y.  (45)

Let r > 1. Suppose that

E|ϱn^{−1} V²nυn − 1|^r −→ 0 and E{ ϱn^{−r} Σ_{k=1}^{υn} E|Ynk − ank|^{2r} } −→ 0.  (46)

Then there exists a r.v. Z such that

σn^{−1}(S̃nυn − An) =⇒ Z and E|σn^{−1}(S̃nυn − An)|^{2r} −→ E|Z|^{2r}  (47)

if and only if, for a r.v. Y,

Δn^{−1}(Lnυn − An) =⇒ Y and E|Δn^{−1}(Lnυn − An)|^{2r} −→ E|Y|^{2r}.  (48)

Proof. It follows from Kubacki and Szynal (1988b) that conditions (41), (42) and (43) together imply

E exp(itσn^{−1}(S̃nυn − An)) − exp(−t²(1 − d²)/2) hn(td) −→ 0,

where hn(t) = E exp(itΔn^{−1}(Lnυn − An)). Hence and by the convergence theorem for characteristic functions, conditions (44) and (45) are equivalent. This completes the proof of the first part of the theorem.

Now let r > 1 and assume (46). It follows from Theorem 8 that

ϱn^{−1/2}(S̃nυn − Lnυn) =⇒ N0,1 and E|ϱn^{−1/2}(S̃nυn − Lnυn)|^{2r} −→ µ*2r.

Together with (41) this implies the uniform integrability of the sequence {|σn^{−1}(S̃nυn − Lnυn)|^{2r}, n ≥ 1}. Since

σn^{−1}(S̃nυn − An) = σn^{−1}(S̃nυn − Lnυn) + (σn^{−1}Δn)[Δn^{−1}(Lnυn − An)],

the sequence {|σn^{−1}(S̃nυn − An)|^{2r}, n ≥ 1} is uniformly integrable if and only if {|Δn^{−1}(Lnυn − An)|^{2r}, n ≥ 1} is uniformly integrable; cf. Lemma 1 in Kruglov (1988). Together with the first part of the theorem this establishes the equivalence of (47) and (48).

5.1. Some results for i.i.d. r.v.'s  The following theorem has been proved by Kruglov (1988) and Kubacki (1989).

THEOREM 12. Let X, X1, X2, . . . be independent and identically distributed r.v.'s with EX = 0, EX² = 1. Let {υn, n ≥ 1} be a sequence of positive integer-valued r.v.'s independent of Xk, k ≥ 1, and such that υn −→P ∞. Assume further that E(υn) = αn < ∞, n ≥ 1. Then

αn^{−1/2} Sυn =⇒ N0,1 and E|αn^{−1/2} Sυn|^{2r} −→ µ*2r

if and only if E|αn^{−1} υn − 1|^r −→ 0.

Proof. Obvious; use Proposition 4 with both Theorem 6 (for r = 1) and Theorem 8 (for r > 1).

THEOREM 13. Let X, X1, X2, . . . be i.i.d. r.v.'s with EX = a ≠ 0, Var(X) = 1. Let {υn, n ≥ 1} be a sequence of positive integer-valued r.v.'s independent of Xk, k ≥ 1, and such that υn −→P ∞. Assume further that E(υn) = αn < ∞, n ≥ 1. Then

αn^{−1/2}(S̃υn − ES̃υn) =⇒ N0,1 and E|αn^{−1/2}(S̃υn − ES̃υn)|^{2r} −→ µ*2r

(49)

if and only if

E|αn^{−1/2}(υn − αn)|^{2r} −→ 0.  (50)
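Theorem 12 above lends itself to a quick Monte-Carlo check. The sketch below uses assumed concrete choices that are not from the paper: Rademacher summands and Poisson indices υn ~ Poisson(αn), which are independent of the summands and satisfy E|αn^{−1}υn − 1|^r → 0, so both the CLT and the moment convergence should hold with limit moments those of N0,1.

```python
import numpy as np

def stopped_sum_moments(alpha=400, n_samples=50_000, seed=1):
    """Empirical 2nd and 4th moments of alpha^{-1/2} S_v with v ~ Poisson(alpha)
    and X_k i.i.d. Rademacher signs (illustrative choices)."""
    rng = np.random.default_rng(seed)
    v = rng.poisson(alpha, n_samples)        # random indices v_n
    heads = rng.binomial(v, 0.5)             # number of +1 signs among v summands
    s = (2.0 * heads - v) / np.sqrt(alpha)   # alpha^{-1/2} S_v = (heads - tails)/sqrt(alpha)
    return s.var(), (s**4).mean()

m2, m4 = stopped_sum_moments()
print(round(m2, 2), round(m4, 2))
```

Both moments should be close to the standard normal values 1 and 3, in line with the moment-convergence part of the theorem.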

Proof. If (50) holds, then αn^{−1}υn = [αn^{−1/2}(υn − αn)]αn^{−1/2} + 1 −→P 1 and, moreover, E|αn^{−1}υn − 1|^r ≤ E|αn^{−1/2}(υn − αn)|^r −→ 0. Theorem 12 now implies that

αn^{−1/2}(S̃υn − aυn) =⇒ N0,1 and E|αn^{−1/2}(S̃υn − aυn)|^{2r} −→ µ*2r,  (51)

and since αn^{−1/2}(S̃υn − ES̃υn) = αn^{−1/2}(S̃υn − aυn) + aαn^{−1/2}(υn − αn), (49) follows.

Conversely, if (49) holds, then (cf. Kruglov (1988), the proof of the only if part of Theorem 2)

sup_{n≥1} E|aαn^{−1/2}(υn − αn)|^{2r} ≤ sup_{n≥1} E|αn^{−1/2}(S̃υn − aαn)|^{2r} < ∞,  (52)

and then

sup_{n≥1} αn^{−1} Var(υn) = sup_{n≥1} E|αn^{−1/2}(υn − αn)|² < ∞.

Since (49) implies that αn^{−1}[αn + a² Var(υn)] = E[αn^{−1/2}(S̃υn − ES̃υn)]² −→ 1, we get αn^{−1} Var(υn) −→ 0 and consequently

αn^{−1/2}(υn − αn) −→P 0.  (53)

It follows now from (52) that E|αn^{−1}(υn − αn)|^{2r} −→ 0, and then Theorem 12 implies (51). Conditions (49) and (51) imply the uniform integrability of {|αn^{−1/2}(υn − αn)|^{2r}, n ≥ 1}. Together with (53) this implies (50).

Theorem 13 has also been proved by a different method in Kubacki (1989).

THEOREM 14. Let {Xk, k ≥ 1} and {υn, n ≥ 1} be as in Theorem 13. Assume further that Var(υn) = βn² < ∞, n ≥ 1, and denote σn² := Var(S̃υn) = αn + a²βn², n ≥ 1. Then

σn^{−1}(S̃υn − ES̃υn) =⇒ N0,1 and E|σn^{−1}(S̃υn − ES̃υn)|^{2r} −→ µ*2r  (54)

and

E|σn^{−2} υn − 1|^r −→ 0  (55)

if and only if

E|σn^{−1}(υn − αn)|^{2r} −→ 0.  (56)

Proof. It follows from (56) that

σn^{−2} αn = 1 − σn^{−2} a²βn² −→ 1,  (57)

and consequently E|αn^{−1/2}(υn − αn)|^{2r} ≤ E|σn^{−1}(υn − αn)|^{2r}(αn^{−1}σn²)^r −→ 0. Theorem 13 now implies (49). Together with (57) this implies (54). Furthermore,

E|σn^{−2} υn − 1|^r ≤ 2^{r−1}{ E|σn^{−2}(υn − αn)|^r + |σn^{−2} αn − 1|^r } −→ 0

by (56) and (57), which establishes (55). Conversely, if (55) holds, then it follows from Theorem 8 that σn^{−1}(S̃υn − aυn) =⇒ N0,1 and E|σn^{−1}(S̃υn − aυn)|^{2r} −→ µ*2r. Together with (54) this implies that {|σn^{−1}(υn − αn)|^{2r}, n ≥ 1} is uniformly integrable. Since σn^{−2} αn = σn^{−2} E(S̃υn − aυn)² −→ 1,

σn−2 a2 βn2 −→ 0, and consequently σn−1 (υn − αn ) −→ 0. Together with uniform integrability this implies (56). Theorem 14 extends Theorem 2(I) of Kruglov (1988) to a larger class of random indices. The following result extends Statement (B) to the convergence of moments. THEOREM 15. Let X, X1 , X2 , . . . be i.i.d. r.v.’s with EX = 0, EX 2 = 1. Let {υn , n ≥ 1} be a sequence of positive integer-valued r.v.’s independent of Xk , k ≥ 1, and such that βn−1 (υn − αn ) =⇒ T, where αn = E(υn ), and βn2 = Var(υn ) < ∞, n ≥ 1. Assume further that E|βn−1 (υn − αn )|r −→ E|T |r , and that there exists a sequence {an , n ≥ 1} of positive numbers such that αn /an −→ m and βn /an −→ c. Then a−1/2 Sυn =⇒ X n where E exp(itX) =

R∞ 0

and

E|a−1/2 Sυn |2r −→ µ2r , n

exp(−t2 y/2) d P[c T + m < y].
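A concrete instance of Theorem 15 can be simulated (a hypothetical numerical sketch, not part of the paper; the choice of $\upsilon_n$ is an assumption): take $\upsilon_n$ equal to $n$ or $2n$ with probability $1/2$ each, so that $\alpha_n = 3n/2$, $\beta_n = n/2$ and, with $a_n = n$, $m = 3/2$, $c = 1/2$ and $cT + m \in \{1, 2\}$. The limit $X$ is then an equal mixture of $N(0,1)$ and $N(0,2)$, with $EX^2 = 3/2$ and $EX^4 = 3(1^2 + 2^2)/2 = 7.5$, strictly larger than the value $3(3/2)^2 = 6.75$ a normal law with the same variance would give:

```python
import numpy as np

# Hypothetical simulation of the mixture limit in Theorem 15 (not from the
# paper).  X_k = +/-1 Rademacher (EX = 0, EX^2 = 1); v_n is n or 2n with
# probability 1/2 each, independent of the X_k.  With a_n = n we have m = 3/2,
# c = 1/2 and cT + m in {1, 2}, so a_n^{-1/2} S_{v_n} converges to an equal
# mixture of N(0,1) and N(0,2).

rng = np.random.default_rng(1)
n, reps = 2000, 20000

v = rng.choice([n, 2 * n], size=reps)   # random indices, each value w.p. 1/2
s = 2.0 * rng.binomial(v, 0.5) - v      # S_{v_n}: sum of v Rademacher variables
z = s / np.sqrt(n)                      # a_n^{-1/2} S_{v_n}

print((z ** 2).mean())  # close to E X^2 = 3/2
print((z ** 4).mean())  # close to E X^4 = 7.5; a normal law with variance 3/2
                        # would give 6.75, so the limit is not normal
```

The fourth moment is what separates the mixture from a single normal law here, which is why the moment-convergence half of Theorem 15 carries extra information beyond the weak convergence.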

Proof. Obvious; use Theorem 9.

Acknowledgment. This work was partially supported by KBN (grant DNS-P/05/067/90) and AR (grant TZM/DS-1/91).


References

[1] Aldous, D. J. and Eagleson, G. K. (1978). On mixing and stability of limit theorems. Ann. Probab. 6, No. 2, 325–331.
[2] Anscombe, F. J. (1952). Large-sample theory of sequential estimation. Proc. Cambridge Philos. Soc. 48, 600–607.
[3] Beśka, M., Kłopotowski, A. and Słomiński, L. (1982). Limit theorems for random sums of dependent d-dimensional random vectors. Z. Wahrsch. Verw. Gebiete 61, No. 1, 43–57.
[4] Bethmann, J. (1988). The Lindeberg–Feller theorem for sums of a random number of independent random variables in a triangular array. Theory Probab. Appl. 33, No. 2, 354–359.
[5] Chow, Y. S., Hsiung, Ch. A. and Lai, T. L. (1979). Extended renewal theory and moment convergence in Anscombe's theorem. Ann. Probab. 7, No. 2, 304–318.
[6] Chung, K. L. (1968). A Course in Probability Theory. Harcourt, New York.
[7] Csörgő, M. and Rychlik, Z. (1981). Asymptotic properties of randomly indexed sequences of random variables. Canad. J. Statist. 9, No. 1, 101–107.
[8] Durrett, R. and Resnick, S. I. (1978). Functional limit theorems for dependent variables. Ann. Probab. 6, No. 5, 829–846.
[9] Eagleson, G. K. (1975). Martingale convergence to mixtures of infinitely divisible laws. Ann. Probab. 3, No. 4, 557–562.
[10] Gaenssler, P., Strobel, J. and Stute, W. (1978). On central limit theorems for martingale triangular arrays. Acta Math. Acad. Sci. Hungar. 31, No. 3-4, 205–216.
[11] Gnedenko, B. V. and Fahim, G. (1969). On a transfer theorem. Dokl. Akad. Nauk SSSR 187, No. 1, 15–17.
[12] Gut, A. and Janson, S. (1986). Converse results for existence of moments and uniform integrability for stopped random walks. Ann. Probab. 14, No. 4, 1296–1317.
[13] Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and its Application. Academic Press, New York.
[14] Helland, I. S. (1982). Central limit theorems for martingales with discrete or continuous time. Scand. J. Statist. 9, 79–94.
[15] Irle, A. (1987). Uniform integrability in Anscombe's theorem for martingales. In: Mathematical Statistics and Probability Theory, Vol. A. Puri, M. L., Révész, P. and Wertz, W. (eds.). Reidel, Dordrecht, pp. 201–207.
[16] Korolev, V. Yu. (1985). The convergence of moments of randomly indexed sums of independent random variables. Theory Probab. Appl. 30, No. 2, 361–364.
[17] Kruglov, V. M. (1976). On the convergence of distributions of sums of a random number of independent random variables to the normal distribution. Vestnik Moskov. Univ. 5, 5–12.
[18] Kruglov, V. M. (1988). The convergence of moments of random sums. Theory Probab. Appl. 33, No. 2, 339–342.
[19] Kruglov, V. M. and Korolev, V. Yu. (1990). Limit Theorems for Random Sums. Izdat. Mosk. Univ., Moskva.
[20] Kubacki, K. S. (1989). Convergence of moments in a random central limit theorem. Ukr. Math. J. 41, No. 9, 1105–1108.
[21] Kubacki, K. S. (1990). The convergence of moments in the martingale central limit theorem. In: Limit Theorems in Probability and Statistics. Berkes, I., Csáki, E. and Révész, P. (eds.). North-Holland, Amsterdam, pp. 327–348.
[22] Kubacki, K. S. and Szynal, D. (1985). Weak convergence of randomly indexed sequences of random variables. Bull. Pol. Ac.: Math. 32, No. 3-4, 201–210.
[23] Kubacki, K. S. and Szynal, D. (1986). On a random version of the Anscombe condition and its applications. Probab. Math. Statist. 7, No. 2, 125–147.
[24] Kubacki, K. S. and Szynal, D. (1987a). On a random central limit theorem of Robbins type. Bull. Pol. Ac.: Math. 35, No. 3-4, 223–231.
[25] Kubacki, K. S. and Szynal, D. (1987b). On the rate of convergence in a random version of the central limit theorem. Bull. Pol. Ac.: Math. 35, No. 9-10, 607–616.
[26] Kubacki, K. S. and Szynal, D. (1988a). On the rate of convergence in a random central limit theorem. Probab. Math. Statist. 9, No. 2, 95–103.
[27] Kubacki, K. S. and Szynal, D. (1988b). The convergence of moments in a random limit theorem of H. Robbins type. Theory Probab. Appl. 33, No. 1, 83–93.
[28] Liptser, R. Sh. and Shiryayev, A. N. (1986). Martingale Theory. Nauka, Moskva. (in Russian)
[29] Loève, M. (1960). Probability Theory. Van Nostrand, Princeton.
[30] Marushin, M. N. and Krivorukov, V. P. (1984). Some remarks on the central limit theorem for moments of even order of sums of a random number of independent identically distributed random variables. Ukrain. Mat. Sbornik 36, 22–28.
[31] Robbins, H. (1948). The asymptotic distribution of the sum of a random number of random variables. Bull. Amer. Math. Soc. 54, No. 12, 1151–1161.
[32] Rosiński, J. (1975). Limit theorems for randomly indexed sums of random vectors. Colloq. Math. 34, No. 1, 91–107.
[33] Rootzén, H. (1977). A note on convergence to mixtures of normal distributions. Z. Wahrsch. Verw. Gebiete 38, 211–216.
[34] Rychlik, Z. (1976). A central limit theorem for sums of a random number of independent random variables. Colloq. Math. 35, No. 1, 147–158.
[35] Rychlik, Z. (1979). Martingale random central limit theorem. Acta Math. Acad. Sci. Hungar. 34, No. 1-2, 129–139.
[36] Szász, D. (1972). Limit theorems for the distributions of the sums of a random number of random variables. Ann. Math. Statist. 43, No. 6, 1902–1913.
[37] Szász, D. and Freyer, B. (1971). A problem of summation with random indices. Litovsk. Mat. Sb. 11, 181–187. (in Russian)