Limit Theorems for Randomly Indexed Sequences of Random Variables
Krzysztof S. Kubacki
Institute of Applied Mathematics, Academy of Agriculture, P.O. Box 158, 20-950 Lublin, Poland
E-mail address: [email protected]
2000 Mathematics Subject Classification. Primary 60F05; Secondary 62E20, 60E15.
Key words and phrases. Random limit theorems, randomly stopped martingales, random-sums limit theorems, rate of convergence, convergence of moments.
The author was supported in part by the Kosciuszko Foundation, NY, and the Committee for Scientific Research, Warsaw.

ABSTRACT. This book is a survey of work on limit theorems for randomly indexed sequences of random variables.
To my teacher, Professor Dominik Szynal, on the occasion of his 70th birthday
Contents

Introduction 5

Chapter 1. On a problem of Tate 9
1.1. Early history 9
1.2. Characterizations of weak convergence and convergence in probability 13
1.3. Some characterizations of essential convergence in law and almost sure convergence 13
1.4. A characterization of stable convergence 19

Chapter 2. On a problem of Anscombe 23
2.1. The Anscombe theorem and its first generalizations 23
2.2. The Anscombe random condition 27
2.3. The Anscombe type theorem 30
2.4. Usefulness of the Anscombe random condition 33
2.5. An application to random sums of independent random variables 42
2.6. Applications to the stable convergence 52

Chapter 3. On a Robbins' type theorem 55
3.1. The classical results 55
3.1.1. Some consequences of the First Robbins Theorem 57
3.2. On the limit behaviour of random sums of independent random variables 59
3.2.1. Introduction and notation 59
3.3. A Note on a Katz–Petrov Type Theorem 61
3.3.1. A Katz–Petrov type theorem 61
3.3.2. Nonuniform estimates for random sums 62
3.3.3. Proof of Theorem 3.6 66

Chapter 4. Weak Convergence to Infinitely Divisible Laws 73
4.1. The case of finite variances 73
4.1.1. The main results 74
4.2. The case of not necessarily finite variances 77
4.2.1. The main results 78
4.2.2. Proofs 80

Bibliography 89
Introduction

The past fifty years or so have seen an intensive development of the studies devoted to limit laws of randomly indexed sequences of random variables. This development has been driven by numerous applications of the resulting theorems in queueing theory, reliability theory, renewal theory, sequential analysis and other branches of mathematics.

Essentially, the studies of the limit behaviour of randomly indexed sequences of random variables have been two-pronged. Attempts have been made to find the limit laws for sequences of partial sums of independent or weakly dependent random variables in those cases where the sequence of random indices is either independent of, or may depend on, the summands.

The first direction of research was originated by H. Robbins (1948). He observed that if {Xk, k ≥ 1} is a sequence of independent and identically distributed random variables with EX1 = a, Var(X1) = c² < ∞, and if {υn, n ≥ 1} is a sequence of positive, integer-valued random variables, independent of {Xk, k ≥ 1} and such that Eυn = αn, Var(υn) = βn² < ∞, then the limiting distribution of

(Sυn − ESυn)/√Var(Sυn)

may not be normal or even infinitely divisible, although (Sn − ESn)/√Var(Sn) =⇒ N0,1 converges weakly to a normal random variable N0,1 with mean 0 and variance 1. To state his result exactly, put σn² = Var(Sυn), dn = |a|βn/σn, and

hn(t) = E exp{it(Sυn − ESυn)/σn},
gn(t) = E exp{it(υn − αn)/βn}.
Robbins proved the following two statements.

(A) If σn → ∞ and βn/σn² → 0 as n → ∞, then for all real t

|hn(t) − gn(tdn) exp{−t²(1 − dn²)/2}| → 0 as n → ∞.

(B) Assume that a = EX1 = 0, and that (υn − αn)/βn =⇒ Z converges weakly to a random variable Z with characteristic function g(·) and distribution function G(·). If αn → ∞ and βn/αn → r as n → ∞, where 0 < r < ∞, then for all real t (as n → ∞)

hn(t) = E exp(itSυn/(c√αn)) → g(irt²/2) exp(−t²/2) = ∫_0^∞ e^{−t²y/2} dG1(y) = E exp(−t²(rZ + 1)/2),

where G1(y) = G((y − 1)/r) = P[rZ + 1 < y].

Observe that if P[υn = n] = P[υn = 3n] = 1/2, then αn = 2n, βn² = n², and σn² = 2nc² + n²a². Moreover,

P[(υn − αn)/βn = −1] = P[(υn − αn)/βn = 1] = 1/2,
so gn(t) = E exp{it(υn − αn)/βn} = (e^{−it} + e^{it})/2 = cos(t). If a = EX1 ≠ 0, then dn = |a|βn/σn → 1 and βn/σn² → 0, and statement (A) yields hn(t) → cos(t), which proves that (Sυn − ESυn)/σn =⇒ Z converges weakly to a random variable Z equal to ±1 with probability 1/2 each. Clearly, the distribution of the random variable Z is not infinitely divisible (see, e.g., [108, Theorem 5.1.1]).

On the other hand, if a = EX1 = 0, then σn² = 2nc² = c²αn and βn/αn → 1/2. It follows from statement (B) that

hn(t) → E exp(−t²(Z/2 + 1)/2) = (exp(−3t²/4) + exp(−t²/4))/2,

which implies that Sυn/σn =⇒ X converges weakly to a random variable X (X = I[Z = 1]N0,3/2 + I[Z = −1]N0,1/2), a mixture of two independent normal random variables N0,3/2 and N0,1/2. Clearly, the distribution of the random variable X is not infinitely divisible (see, e.g., Lukacs (1979), p. 376). Robbins type theorems will be presented in Chapter 3. They will include results due to J. Rosiński (1975), Z. Rychlik (1976), K.S. Kubacki and D. Szynal (1987, 1988) and V.M. Kruglov (1988), among others.

When υn depends on {Xk, k ≥ 1}, it is in general not possible to obtain the distribution of Sυn explicitly. However, substantial research has been conducted on the preservation of the classical limit theory (laws of large numbers and central limit theorems) under random time changes. This line of investigation was originated by F.J. Anscombe (1952) and R.F. Tate (1952). R.F. Tate considered the following problem. Suppose that
Yn → Y in some sense as n → ∞, and υn → ∞ in some sense as n → ∞.
When can one conclude that Yυn → Y in some sense as n → ∞? His results were generalized by W. Richter (1965), D. Szynal and W. Zięba (1986) and W. Zięba (1988), among others, and will be presented in Chapter 1 (where "in some sense" means one of the classical convergence modes (in distribution, in probability or almost surely), or stable convergence, introduced by A. Rényi (1958), or essential convergence in law, introduced by D. Szynal and W. Zięba (1974)). F.J. Anscombe investigated the case where

(1)
Yn =⇒ µ as n → ∞
(converges weakly).
He observed that if (2)
∀ε > 0 ∃δ > 0 : lim sup_{n→∞} P[ max_{|i−n|≤δn} |Yi − Yn| ≥ ε ] ≤ ε,
then for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that

(3) Nn/an →P 1,

where an → ∞ are constants,
we have (4)
YNn =⇒ µ as n → ∞ .
It should be noticed that no assumption is made about the independence of the sequences {Yn, n ≥ 1} and {Nn, n ≥ 1}. We will always denote by {υn, n ≥ 1} a sequence of positive, integer-valued random variables independent of the underlying sequence {Yn, n ≥ 1}. Condition (2), called the Anscombe condition, plays a very important role in proofs of limit theorems for randomly indexed sequences of random variables. D.J. Aldous (1978) pointed out that condition (2) is also necessary for (4) when (3) holds. More general results than those of Anscombe (1952) and Aldous (1978) have been obtained by M. Csörgő and Z. Rychlik (1980, 1981) and K.S. Kubacki and D. Szynal (1985, 1986), among others, and will be presented in Chapter 2.

Given the results of Kubacki and Szynal (1986), it is hardly surprising that these two branches of the theory are closely connected. In fact, this connection allowed them to extend some well-known results of the earlier studies to a larger class of random indices; see Kubacki and Szynal (1985, 1987b, 1988a). The procedure used there is simple, but in addition to the Anscombe type theorem a limit theorem of Robbins type is required. The main idea focuses on the following problem. Suppose that we do not know whether (1) holds (or suppose even that the sequence {Yn, n ≥ 1} does not converge weakly), but we do know that there is a sequence {υn, n ≥ 1} of positive, integer-valued random variables, independent of the underlying sequence {Yn, n ≥ 1}, such that Yυn =⇒ µ converges weakly to a measure µ of interest, e.g. a Gaussian measure. When, in such cases, can one conclude that YNn =⇒ µ?
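Robbins' example above (υn equal to n or 3n with probability 1/2 each) is easy to check by simulation. The sketch below is ours, not from the text; it takes a = 0 and c = 1, and uses the fact that for N(0, 1) summands Sυn given υn = k is exactly N(0, k). It confirms that Sυn/σn has variance 1 but fourth moment near 15/4 rather than 3, so its limit cannot be normal.

```python
import numpy as np

# Monte Carlo sketch of Robbins' example with a = EX1 = 0, c = 1 (our own
# choice of parameters).  P[v_n = n] = P[v_n = 3n] = 1/2, independent of the X's.
rng = np.random.default_rng(0)
n, m = 2_000, 50_000                 # index size and number of replications

v = rng.choice([n, 3 * n], size=m)   # the random index v_n
# For N(0,1) summands, S_{v_n} given v_n = k is exactly N(0, k), so we may
# draw it directly instead of summing k variables.
S = rng.normal(0.0, np.sqrt(v))

sigma2 = 2.0 * n                     # sigma_n^2 = Var(S_{v_n}) = 2n c^2
T = S / np.sqrt(sigma2)

# The limit law is the mixture (N(0,3/2) + N(0,1/2))/2: variance 1, but
# fourth moment 15/4 = 3.75 > 3, hence not normal (and not inf. divisible).
var, m4 = T.var(), np.mean(T**4)
print(round(var, 2), round(m4, 2))
```

With a ≠ 0 the same simulation, recentred at aαn and normed by σn² = 2nc² + n²a², reproduces instead the two-point ±1 limit obtained from statement (A).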
CHAPTER 1
On a problem of Tate

In this chapter we give some characterizations of almost sure convergence, essential convergence in law and stable convergence in terms of the convergence of randomly indexed sequences.

1.1. Early history
Let {Yn, n ≥ 1} be a sequence of random variables defined on a probability space (Ω, F, P), and let {Nn, n ≥ 1} be a sequence of positive, integer-valued random variables defined on the same probability space. We make no assumption about the independence of the sequences {Yn, n ≥ 1} and {Nn, n ≥ 1}. We will always denote by {υn, n ≥ 1} a sequence of positive, integer-valued random variables independent of the underlying sequence {Yn, n ≥ 1}. Suppose that Yn → Y
in some sense (as n → ∞ ),
and Nn → ∞ in some sense (as n → ∞).
When can we conclude that YNn → Y in some sense (as n → ∞)? The first elementary results of this kind belong to R.F. Tate [180].
THEOREM 1.1. [180] Suppose that

(1.1) Yn =⇒ Y as n → ∞.
If {υn, n ≥ 1} is a sequence of positive, integer-valued random variables, independent of {Yn, n ≥ 1} and such that

(1.2) υn →P ∞ as n → ∞,

then¹

(1.3) Yυn =⇒ Y as n → ∞.
PROOF. Let ϕX denote the characteristic function of the random variable X. By the independence assumption we have

ϕYυn(t) = ∑_{k=1}^∞ E(exp{itYk} | υn = k) P[υn = k] = ∑_{k=1}^∞ ϕYk(t) P[υn = k].
¹Condition (1.2) is equivalent to the condition υn =⇒ ∞ used throughout [180].
Now, choose k0 so large that

|ϕYk(t) − ϕY(t)| ≤ ε for k ≥ k0,

and then n0 so large that

P[υn ≤ k0] ≤ ε for n ≥ n0.

We then obtain

|ϕYυn(t) − ϕY(t)| ≤ ∑_{k=1}^{k0} |ϕYk(t) − ϕY(t)| P[υn = k] + ∑_{k=k0+1}^∞ |ϕYk(t) − ϕY(t)| P[υn = k]
≤ 2 ∑_{k=1}^{k0} P[υn = k] + ε ∑_{k=k0+1}^∞ P[υn = k]
≤ 2P[υn ≤ k0] + ε ≤ 3ε,

which, in view of the arbitrariness of ε, proves the conclusion (1.3).
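The mixture identity at the heart of this proof can be observed numerically. The concrete distributions below are our own illustrative choices (Yk normal with variance 1 + 1/k, so ϕYk(t) = exp(−t²(1 + 1/k)/2), and υn uniform on {n, ..., 2n}); the simulation checks both the identity and the convergence (1.3).

```python
import numpy as np

# Numerical sketch of phi_{Y_{v_n}}(t) = sum_k phi_{Y_k}(t) P[v_n = k] and of
# the convergence (1.3).  Illustrative choices (ours): Y_k ~ N(0, 1 + 1/k),
# v_n uniform on {n, ..., 2n}, independent of the Y's.
rng = np.random.default_rng(1)
n, m, t = 50, 200_000, 1.3

v = rng.integers(n, 2 * n + 1, size=m)         # the random index v_n
Y = rng.normal(0.0, np.sqrt(1.0 + 1.0 / v))    # a draw of Y_{v_n} given v_n
emp = np.exp(1j * t * Y).mean()                # empirical char. function of Y_{v_n}

ks = np.arange(n, 2 * n + 1)
mix = np.mean(np.exp(-t**2 * (1.0 + 1.0 / ks) / 2.0))  # the mixture of phi_{Y_k}
lim = np.exp(-t**2 / 2.0)                      # phi_Y(t) for the limit Y ~ N(0,1)

print(abs(emp - mix) < 0.01, abs(mix - lim) < 0.02)
```

The first comparison is the mixture identity up to Monte Carlo error; the second shows the mixture already sitting near ϕY(t), exactly as the 3ε estimate predicts once υn is large with high probability.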
Theorem 1.1 has also been proved by A. Rényi [132] in the special case where Yn = (X1 + ... + Xn)/√n for each n and X1, X2, ... are independent and identically distributed with EX1 = 0 and EX1² = 1; see also A. Rényi [134], p. 472. We will formulate Rényi's result and prove it (by a different method) in the section on the Robbins Theorem.

THEOREM 1.2. [180] Suppose that {υn, n ≥ 1} is as in Theorem 1.1. If

(1.4) Yn →P Y as n → ∞,

then

(1.5) Yυn →P Y as n → ∞.

PROOF. Observe that condition (1.4) is equivalent to Yn − Y →P 0, which in turn is equivalent to Yn − Y =⇒ 0. By Theorem 1.1 we have

Yυn − Y =⇒ 0 as n → ∞,

which proves the assertion.

The next results are due to W. Richter [138] (R.F. Tate [180] proved the same result, but for sequences {υn, n ≥ 1} only).
THEOREM 1.3. [138] Let {Yn, n ≥ 1} be a sequence of random variables such that

(1.6) Yn →a.s. Y as n → ∞.

Suppose further that {Nn, n ≥ 1} is a sequence of positive, integer-valued random variables such that

(1.7) Nn →a.s. ∞ as n → ∞.

Then

(1.8) YNn →a.s. Y as n → ∞.
PROOF. Let A = {ω : Yn(ω) ↛ Y(ω)}, B = {ω : Nn(ω) ↛ ∞} and C = {ω : YNn(ω) ↛ Y(ω)}. Then C ⊂ A ∪ B, which proves the assertion.

THEOREM 1.4. [138] Let {Yn, n ≥ 1} be as in Theorem 1.3. Suppose that {Nn, n ≥ 1} is a sequence of positive, integer-valued random variables such that

(1.9) Nn →P ∞ as n → ∞.
Then

(1.10) YNn →P Y as n → ∞.
PROOF. In the proof of Theorem 1.4 we shall use the following well known result (see e.g. [24], Theorem 20.5): a sequence {ξn, n ≥ 1} of random variables converges in probability to a random variable ξ if and only if each subsequence of {ξn, n ≥ 1} contains a further subsequence which converges to ξ almost surely.

Since Nn →P ∞, we have Nnk →P ∞ for every subsequence {nk, k ≥ 1}. Furthermore, from each subsequence we can always select a further subsequence {nk(j), j ≥ 1} such that

Nnk(j) →a.s. ∞ as j → ∞.

Finally, since Yn →a.s. Y, it follows from Theorem 1.3 that

YNnk(j) →a.s. Y as j → ∞.

The converse of Theorem 1.3 fails, even in the case where {Nn, n ≥ 1} is a sequence of positive, integer-valued random variables independent of {Yn, n ≥ 1}. We have:

(1.11) Yn →a.s. Y and Yυn →a.s. Y need not imply υn →a.s. ∞.

Indeed, let Yn = Y and P[υn = 1] = P[υn = 2] = 1/2. Then Yn = Yυn = Y for all n, but υn ↛ ∞.
Furthermore,

(1.12) Yυn →a.s. Y and υn →a.s. ∞ need not imply Yn →a.s. Y.

To see this, let Yn = Y for n = 2k, k ≥ 1, and Yn = Y + 1 for n = 2k − 1, k ≥ 1. Let

υn = n if n = 2k, and υn = n − 1 if n = 2k − 1, for k ≥ 1.

Then Yυn = Y for all n, and υn →a.s. ∞. However, the sequence {Yn, n ≥ 1} oscillates between Y and Y + 1.

The following example shows that the conclusion of Theorem 1.4 cannot be sharpened, i.e. that

(1.13) Yn →a.s. Y and Nn →P ∞ need not imply YNn →a.s. Y.

EXAMPLE 1.5. Let P[Yn = 1/n] = 1. Clearly Yn →a.s. 0. Note that for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables we have YNn = 1/Nn, which converges almost surely (respectively, in probability) to zero as n → ∞ if and only if Nn → ∞ almost surely (respectively, in probability).
Also,

(1.14) Yn →P Y and Nn →a.s. ∞ need not imply YNn →P Y.

EXAMPLE 1.6. [138] Let Ω = [0, 1], F = the σ-field of measurable subsets of Ω, and let P be the Lebesgue measure on [0, 1]. Set

Yn(ω) = 1 if j2^{−m} ≤ ω < (j + 1)2^{−m}, and Yn(ω) = 0 otherwise,

where n = 2^m + j, 0 ≤ j ≤ 2^m − 1, and let Nn = min{k ≥ 2^n : Yk > 0}. Then

(i) Yn →P 0, but not almost surely;
(ii) Nn →a.s. ∞ as n → ∞;
(iii) YNn = 1 a.s. for all n.

The proof of these facts is left to the reader.
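A reader who prefers to see the "typewriter" mechanism of Example 1.6 in action can check it directly on a grid of ω values. The sketch below is ours; it uses half-open intervals (a null-set difference from the text) and small parameters for speed.

```python
import numpy as np

# A direct check of Example 1.6 on a grid of omega values.  The "typewriter"
# sequence Y_n sweeps ever-thinner intervals, so Y_n ->P 0; N_n jumps ahead to
# the next index whose interval covers omega, so Y_{N_n} = 1 identically.
def Y(n, omega):
    m = int(np.log2(n))                  # n = 2^m + j with 0 <= j < 2^m
    j = n - 2**m
    return 1.0 if j / 2**m <= omega < (j + 1) / 2**m else 0.0

omegas = (np.arange(1000) + 0.5) / 1000  # grid over [0, 1)

# P[Y_n = 1] = 2^{-m}: for n = 300 we have m = 8, so only ~4 of 1000 points.
frac = sum(Y(300, w) for w in omegas) / len(omegas)
print(frac < 0.01)

# Y_{N_n}(omega) = 1 for every omega: scan forward from 2^n until the
# sweeping interval covers omega again.
def N(n, omega):
    k = 2**n
    while Y(k, omega) == 0.0:
        k += 1
    return k

print(all(Y(N(3, w), w) == 1.0 for w in omegas))
```

The same scan shows Nn ≥ 2^n, so Nn →a.s. ∞, which is exactly the combination (1.14) rules out strengthening.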
1.2. Characterizations of weak convergence and convergence in probability

The following results are implied by Theorem 1.1 and Theorem 1.2.

THEOREM 1.7. The following conditions are equivalent:
(i) Yn =⇒ Y as n → ∞;
(ii) Yυn =⇒ Y as n → ∞ for every sequence {υn, n ≥ 1} of positive, integer-valued random variables, independent of {Yn, n ≥ 1} and such that υn →P ∞ as n → ∞.

PROOF. The implication (ii) =⇒ (i) is obvious: put υn = n. The reverse implication, (i) =⇒ (ii), follows by Theorem 1.1.

THEOREM 1.8. The following conditions are equivalent:
(i) Yn →P Y as n → ∞;
(ii) Yυn →P Y as n → ∞ for every sequence {υn, n ≥ 1} of positive, integer-valued random variables, independent of {Yn, n ≥ 1} and such that υn →P ∞ as n → ∞.
Moreover, the following results are implied by Theorem 1.3.

THEOREM 1.9. The following conditions are equivalent:
(i) Yn →a.s. Y as n → ∞;
(ii) Yυn →a.s. Y as n → ∞ for every sequence {υn, n ≥ 1} of positive, integer-valued random variables, independent of {Yn, n ≥ 1} and such that υn →a.s. ∞ as n → ∞.

THEOREM 1.10. The following conditions are equivalent:
(i) Yn →a.s. Y as n → ∞;
(ii) YNn →a.s. Y as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →a.s. ∞.

In the next section we shall see that the analogous statement to Theorem 1.10 for weak convergence or convergence in probability is false. Thus Theorem 1.7 (respectively, Theorem 1.8) is the only analogue of Theorem 1.10 for weak convergence (respectively, convergence in probability).

1.3. Some characterizations of essential convergence in law and almost sure convergence

The concept of essential convergence in law was introduced by D. Szynal and W. Zięba [176].
DEFINITION 1.11. [176] A sequence {Yn, n ≥ 1} of random variables is said to converge essentially in law to a random variable Y (written Yn →E.D. Y) if for every continuity point x of the distribution function FY(·) of Y we have²

P[ lim inf_{n→∞} {Yn < x} ] = P[ lim sup_{n→∞} {Yn < x} ] = FY(x).

It is well known that:

LEMMA 1.12. If Yn →E.D. Y, then Yn =⇒ Y.

PROOF. Note by Fatou's lemma that for every continuity point x of FY

P[Y < x] = P[ lim inf_{n→∞} {Yn < x} ] ≤ lim inf_{n→∞} P[Yn < x] ≤ lim sup_{n→∞} P[Yn < x] ≤ P[ lim sup_{n→∞} {Yn < x} ] = P[Y ≤ x],

so P[Yn < x] → P[Y < x] as n → ∞ for every continuity point x of FY.
The following example shows that, in general, Yn =⇒ Y does not imply Yn →E.D. Y.

EXAMPLE 1.13. [176] Let X, X1, X2, ... be independent and identically distributed nondegenerate random variables. Obviously, Xn =⇒ X. Now, if x is a continuity point of FX such that 0 < FX(x) = p < 1, then

lim_{n→∞} P[ ⋂_{k≥n} {Xk < x} ] = 0 ≠ P[X < x] = p > 0.

Thus Xn does not converge essentially in law to X.

LEMMA 1.14. If Yn →a.s. Y, then Yn →E.D. Y.
PROOF. Note that lim sup_{n→∞} {Yn < x} ⊆ {Y ≤ x}, so

P[ lim sup_{n→∞} {Yn < x} ] ≤ P[Y ≤ x].

Conversely, {Y < x} ⊆ lim inf_{n→∞} {Yn < x}, so

P[Y < x] ≤ P[ lim inf_{n→∞} {Yn < x} ].

²The function FY(·) defined by FY(x) = P[Y < x] for all real x is said to be the distribution function of Y. The definition FY(x) = P[Y ≤ x] is also customary. This induces only minor modifications in its properties; e.g. the latter function is continuous from the right, while P[Y < x] is continuous from the left.
It follows that for every continuity point x of FY

FY(x) = P[Y < x] ≤ P[ lim inf_{n→∞} {Yn < x} ] ≤ P[ lim sup_{n→∞} {Yn < x} ] ≤ P[Y ≤ x] = FY(x),

i.e., Yn →E.D. Y.

The following example shows that, in general, Yn →E.D. Y does not imply Yn →a.s. Y or even Yn →P Y.

EXAMPLE 1.15. [176] Let Ω = [0, 1] and suppose that Yn(ω) ≡ Y(ω) ≡ ω for each n, where the random variable Y is uniformly distributed on Ω. Define a random variable X, also uniformly distributed on Ω, by the formula

X(ω) = ω + a if 0 ≤ ω < 1 − a, and X(ω) = ω + a − 1 if 1 − a ≤ ω ≤ 1, where 0 < a < 1.

Then

P[ lim inf_{n→∞} {Yn < x} ] = P[ lim sup_{n→∞} {Yn < x} ] = P[X < x],

but Yn does not converge in probability to X, since P[ω : |Yn(ω) − X(ω)| ≥ ε] = 1 for all ε < min(a, 1 − a) and each n.
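The rotation trick of Example 1.15 can be verified by simulation. The sketch below is our own (with the specific choice a = 0.3): the rotated variable has the same law, so the liminf/limsup probabilities match FX, yet the pointwise distance never shrinks.

```python
import numpy as np

# Numerical companion to Example 1.15: Y uniform on [0,1), X is Y rotated by
# a (mod 1).  X has the same law as Y, so Y_n = Y converges to X essentially
# in law, yet |Y_n - X| is bounded away from 0, so Y_n does not ->P X.
rng = np.random.default_rng(2)
a = 0.3                                  # any 0 < a < 1 works; our choice
Y = rng.random(100_000)
X = (Y + a) % 1.0

# Same distribution: empirical CDFs agree at a few test points.
same_law = all(abs(np.mean(X < x) - np.mean(Y < x)) < 0.01 for x in (0.2, 0.5, 0.8))

# No convergence in probability: |Y - X| equals a or 1 - a, never smaller.
gap = np.abs(Y - X).min()
print(same_law, round(gap, 1))
```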
Moreover, Yn →P Y does not imply Yn →E.D. Y.

EXAMPLE 1.16. [176] Let Ω = [0, 1], F = the σ-field of measurable subsets of Ω, and let P be the Lebesgue measure on [0, 1]. Write IA for the indicator function of the set A. Define Y1 = 1, Y2 = I[0,1/2], Y3 = I[1/2,1] and, in general, with bm = 2^m, define

Ybm+k = I[k/2^m, (k+1)/2^m] for k = 0, 1, ..., 2^m − 1; m = 0, 1, 2, ...

Then Yn →P 0 (since P[Yn ≠ 0] = 2^{−m} → 0, where n = 2^m + k). But

P[ ⋃_{k≥n} {Yk ∈ [1/2, 1]} ] = 1,

which implies that Yn does not converge essentially in law to 0. Thus, the following implications hold true:
(1.15)

(Yn →a.s. Y) =⇒ (Yn →P Y)
      ⇓              ⇓
(Yn →E.D. Y) =⇒ (Yn =⇒ Y)
Furthermore, D. Partyka and D. Szynal [122] have proved the following:

THEOREM 1.17. [122] If Yn →a.s. Y, then Yn →E.D. Y′ for every random variable Y′ with the same distribution as Y.

PROOF. Obvious; use Lemma 1.14.

THEOREM 1.18. [122] If Yn →E.D. Y, then there exists a random variable Y′ with the same distribution as Y and such that Yn →a.s. Y′.

It follows from Theorems 1.17 and 1.18 that Yn →E.D. Y if and only if there exists a random variable Y′ with the same distribution as Y such that Yn →a.s. Y′. Therefore, Yn →E.D. Y and Yn →P Y hold at the same time if and only if Yn →a.s. Y [182].

Due to D. Szynal and W. Zięba [177] and W. Zięba [183] we have the following characterization of essential convergence in law:

THEOREM 1.19. [177] The following conditions are equivalent:
(i) Yn →E.D. Y as n → ∞;
(ii) YNn =⇒ Y as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →a.s. ∞.
PROOF. Suppose that Yn →E.D. Y. Then there exists a random variable Y′ with³ L(Y′) = L(Y) such that Yn →a.s. Y′. Hence, by Theorem 1.10, for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →a.s. ∞ we have YNn →a.s. Y′. Since L(Y′) = L(Y), this implies that YNn =⇒ Y.

Conversely, suppose that (ii) holds. Let x be a continuity point of the distribution function FY of Y. Define

τn,x(ω) = inf{k ≥ n : Yk < x} if ω ∈ ⋃_{k=n}^∞ [Yk < x], and τn,x(ω) = n if ω ∉ ⋃_{k=n}^∞ [Yk < x].

Then τn,x →a.s. ∞ as n → ∞ and, therefore, Yτn,x =⇒ Y as n → ∞. Hence P[Yτn,x < x] → P[Y < x] as n → ∞, and since

P(Yτn,x < x) = P( ⋃_{k=n}^∞ [Yk < x] ),

we obtain

P( ⋃_{k=n}^∞ [Yk < x] ) → P[Y < x] as n → ∞.

³L(Y′) = L(Y) means that Y′ and Y have the same distribution.
Similarly one can get

P( ⋂_{k=n}^∞ [Yk < x] ) → P[Y < x] as n → ∞,

which proves that Yn →E.D. Y as n → ∞.
REMARK 1.20. Condition (ii) of Theorem 1.19 is equivalent to the following condition:
(iii) YNn =⇒ Y as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞.

PROOF. Obviously, (iii) =⇒ (ii). It remains only to prove the reverse implication. Suppose that (ii) holds true and that (iii) does not hold; we will obtain a contradiction.

Let Nn →P ∞ and YNn ≠⇒ Y. Then there exist ε > 0 and a subsequence {YNn(k), k ≥ 1} such that Nn(k) →P ∞ and

L(YNn(k), Y) > ε for all k ≥ 1,

where L(X, Y) denotes the Lévy metric on the space of real valued random variables⁴. We can choose a subsequence {Nn(kj), j ≥ 1} such that Nn(kj) →a.s. ∞ as j → ∞. But YNn(kj) ≠⇒ Y as j → ∞, and a contradiction is obtained. Thus (ii) implies (iii).
The following result follows immediately from Theorem 1.18, Theorem 1.19 and Remark 1.20; it has also been proved by E. Rychlik [144].

COROLLARY 1.21. [144] If YNn =⇒ Y for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞, then there exists a random variable X such that L(X) = L(Y) and Yn →a.s. X.

Now, for the moment, we specialize the random variable Y to a constant.

COROLLARY 1.22. [183] Let C be a constant. The following conditions are equivalent:
(i) Yn →a.s. C as n → ∞;
(ii) YNn →P C as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞.

PROOF. The assertion follows by Theorem 1.19, Remark 1.20 and the fact that Yn →E.D. C if and only if Yn →a.s. C (see Theorems 1.17 and 1.18).

Corollary 1.22 suggests the following more general result (a similar result has also been proved by A. Dvoretzky [53] under the additional assumption that Nn, n ≥ 1, are stopping times).

⁴L(X, Y) = inf{h > 0 : FX(x) ≤ FY(x + h) + h, FY(x) ≤ FX(x + h) + h for all x}.
THEOREM 1.23. [177] The following conditions are equivalent:
(i) Yn →a.s. Y as n → ∞;
(ii) YNn →P Y as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞.

PROOF. The implication (i) =⇒ (ii) follows by Theorem 1.4. Suppose now that (ii) holds true. Then, obviously, Yn →P Y. On the other hand, by Theorem 1.19 and Remark 1.20 we see that Yn →E.D. Y, which further implies that there exists a random variable X with L(X) = L(Y) such that Yn →a.s. X. Hence Yn →P X. But also Yn →P Y, so that X = Y a.s., which implies that Yn →a.s. Y.
COROLLARY 1.24. Let {Yn, n ≥ 1} be a sequence of random variables such that Y1 ≤ Y2 ≤ ... a.s. The following conditions are equivalent:
(i) Yn =⇒ Y as n → ∞;
(ii) Yn →E.D. Y as n → ∞;
(iii) YNn =⇒ Y as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞.

PROOF. The equivalence of (ii) and (iii) follows by Theorem 1.19 and Remark 1.20. Furthermore, it is obvious that for every point y (of continuity of FY) we have

P( ⋂_{k=n}^∞ [Yk < y] ) − P[Y < y] ≤ P[Yn < y] − P[Y < y] ≤ P( ⋃_{k=n}^∞ [Yk < y] ) − P[Y < y].

Therefore (ii) implies (i). Suppose now that (i) holds. Since Y1 ≤ Y2 ≤ ... a.s., the sequence of events {[Yn < y], n ≥ 1} decreases with n. Hence

lim_{n→∞} P( ⋂_{k=n}^∞ [Yk < y] ) = lim_{n→∞} P( ⋃_{k=n}^∞ [Yk < y] ),

and by (i) we conclude that for every continuity point y of FY

lim_{n→∞} P( ⋃_{k=n}^∞ [Yk < y] ) = lim_{n→∞} P[Yn < y] = P[Y < y].

Thus, for every continuity point y of FY we have

P( lim inf_{n→∞} [Yn < y] ) = P( lim sup_{n→∞} [Yn < y] ) = P[Y < y],

which means that Yn →E.D. Y.
1.4. A characterization of stable convergence

Let {Yn, n ≥ 1} be a sequence of random variables defined on a probability space (Ω, F, P). We write Yn =⇒ Y (stably) if Yn =⇒ Y and, for every set B ∈ F with P(B) > 0, there exists a distribution function FB such that for every continuity point y of FB we have

(1.16) lim_{n→∞} P[Yn < y | B] = FB(y),

where P(D|B) = P(D ∩ B)/P(B). In the special case where FB(y) = FY(y) for all B, we write Yn =⇒ Y (mixing). These concepts are originally due to A. Rényi ([131, 133, 135]). A survey of stable and mixing limit theorems, and applications of these concepts, may be found e.g. in [?].

It is well known that if Yn →P Y, then Yn =⇒ Y (stably). Indeed, for every set B ∈ F with P(B) > 0 and every continuity point y of FB we have

lim_{n→∞} P[Yn < y | B] = lim_{n→∞} P[(Yn − Y) + Y < y | B] = P[Y < y | B],

since Yn − Y →PB 0,
where PB(A) = P(A|B). Furthermore, Yn =⇒ Y (stably) need not imply Yn →P Y.

EXAMPLE 1.25. [131] Let X, X1, X2, ... be independent and identically distributed random variables with EX = 0 and EX² = 1. Let Sn = X1 + ... + Xn. Then Sn/√n =⇒ N0,1 (mixing), but Sn/√n does not converge in probability.
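The mixing property in Example 1.25 is easy to probe by simulation: the normal limit should survive conditioning on any fixed event of positive probability. The sketch below is ours; in it the summands are standard normal and the conditioning event B = {X1 > 0} is an arbitrary illustrative choice.

```python
import numpy as np

# Monte Carlo sketch of Example 1.25: S_n/sqrt(n) => N(0,1) *mixing*, i.e.
# the normal limit persists conditionally on any fixed positive-probability
# event.  B = {X_1 > 0} is our illustrative conditioning event.
rng = np.random.default_rng(3)
n, m = 400, 20_000
X = rng.normal(size=(m, n))
T = X.sum(axis=1) / np.sqrt(n)           # S_n / sqrt(n), m replications

B = X[:, 0] > 0.0                        # P(B) = 1/2, B in sigma(X_1)
pB = np.mean(T[B] < 0.5)                 # P[S_n/sqrt(n) < 0.5 | B]
p = np.mean(T < 0.5)                     # unconditional probability

# Both should be close to Phi(0.5) ~ 0.6915: conditioning on X_1 shifts S_n
# only by O(1), which vanishes after division by sqrt(n).
print(abs(pB - 0.6915) < 0.04, abs(p - 0.6915) < 0.02)
```

The same experiment with T replaced by, say, X[:, 0] itself would fail the conditional comparison, which is exactly the distinction between weak and mixing convergence.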
THEOREM 1.26. [183] The following conditions are equivalent:
(i) Yn =⇒ Y (stably) as n → ∞;
(ii) YNn =⇒ Y (stably) as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that (a) Nn →P ∞ and (b) the σ-field σ(Nn, n ≥ 1) generated by the random variables Nn, n ≥ 1, is at most countable⁵.

PROOF. The implication (ii) =⇒ (i) is obvious: put Nn = n. Suppose (i), and let {Nn, n ≥ 1} be a sequence of positive, integer-valued random variables such that Nn →P ∞ and the σ-field σ(Nn, n ≥ 1) is at most countably infinite, generated by a partition {Bi, i ≥ 1}, say, where Bi ∈ F, Bi ∩ Bj = ∅ for i ≠ j, and ⋃_{i=1}^∞ Bi = Ω. For every ε > 0 there exists a positive integer L such that

(1.17) ∑_{i=L+1}^∞ P(Bi) < ε/8.

⁵By "countable" we always mean "finite or countably infinite".
Furthermore, by (i), we have for every i ≥ 1 and every continuity point x of the distribution function FB∩Bi

|P[Yn < x, B ∩ Bi] − FB∩Bi(x)P(B ∩ Bi)| → 0 as n → ∞.

Therefore, there exists a positive integer M such that for all k > M

(1.18) max_{1≤i≤L} |P[Yk < x, B ∩ Bi] − FB∩Bi(x)P(B ∩ Bi)| < ε/(8L).

Of course, since Nn →P ∞, we have for sufficiently large n

(1.19) P[Nn < M] ≤ ε/8.

For all i ≥ 1 and ω ∈ Bi let Nn(ω) = kn,i, and put Dn = {i : 1 ≤ i ≤ L, kn,i ≥ M}. We note that

P[Y < x, B] = ∑_{i=1}^∞ FB∩Bi(x)P(B ∩ Bi) = ∑_{i∈Dn} FB∩Bi(x)P(B ∩ Bi) + ∑_{i∉Dn} FB∩Bi(x)P(B ∩ Bi),

where by (1.17) and (1.19) we have

∑_{i∉Dn} FB∩Bi(x)P(B ∩ Bi) = ∑_{i:1≤i≤L, kn,i<M} FB∩Bi(x)P(B ∩ Bi) + ∑_{i=L+1}^∞ FB∩Bi(x)P(B ∩ Bi) ≤ P[Nn < M] + ε/8 ≤ ε/4.

Hence, by (1.18), for all continuity points x, sufficiently large n and every ε > 0 we have (the indices i ∉ Dn contribute at most ε/4 to P[YNn < x, B], exactly as above)

|P[Y < x, B] − P[YNn < x, B]| ≤ |P[Y < x, B] − ∑_{i∈Dn} P[Ykn,i < x, B ∩ Bi]| + ε/4
≤ |∑_{i∈Dn} ( FB∩Bi(x)P(B ∩ Bi) − P[Ykn,i < x, B ∩ Bi] )| + ε/2
≤ ∑_{i∈Dn} |FB∩Bi(x)P(B ∩ Bi) − P[Ykn,i < x, B ∩ Bi]| + ε/2
≤ L·ε/(8L) + ε/2 < ε,

which proves that YNn =⇒ Y (stably).
By the same method one can prove:

THEOREM 1.27. [?] The following conditions are equivalent:
(i) Yn =⇒ Y (mixing) as n → ∞;
(ii) YNn =⇒ Y (mixing) as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞ and the σ-field σ(Nn, n ≥ 1) is at most countable.

The following results are quite useful:

COROLLARY 1.28. (cf. Corollary 1.22) Let C be a constant. The following conditions are equivalent:
(i) Yn →P C as n → ∞;
(ii) YNn →P C as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞ and the σ-field σ(Nn, n ≥ 1) is at most countable.

COROLLARY 1.29. Let {Xk, k ≥ 1} be a sequence of independent random variables with zero means and finite variances. Let Sn = X1 + ... + Xn, sn² = Var(Sn). Then the following conditions are equivalent:
(i) Sn/sn =⇒ N0,1 as n → ∞;
(ii) SNn/sNn =⇒ N0,1 (mixing) as n → ∞ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables such that Nn →P ∞ and the σ-field σ(Nn, n ≥ 1) is at most countable.

REMARK 1.30. It follows from Corollary 1.29 that, under (i), for any positive random variable λ with a discrete distribution, S[λn]/s[λn] =⇒ N0,1 (mixing).
CHAPTER 2
On a problem of Anscombe

In this chapter we give a generalization of Anscombe's theorem on the asymptotic behaviour of randomly indexed sequences of random variables. As an application of our result we give an extension of the random central limit theorem of J.R. Blum, D.L. Hanson, J.I. Rosenblatt [25] and J.A. Mogyoródi [114] to sequences of independent but nonidentically distributed random variables.

2.1. The Anscombe theorem and its first generalizations
Let {Yn, n ≥ 1} be a sequence of random variables defined on a probability space (Ω, F, P). Suppose that there exists a probability measure µ such that

(2.1) Yn =⇒ µ as n → ∞.

Let {Nn, n ≥ 1} be a sequence of positive, integer-valued random variables defined on the same probability space (Ω, F, P). The well known Anscombe theorem [7] gives conditions on the sequences {Yn, n ≥ 1} and {Nn, n ≥ 1} under which

(2.2) YNn =⇒ µ as n → ∞.
They are as follows.

THEOREM 2.1. [7] Let {Yn, n ≥ 1} be a sequence of random variables satisfying (2.1). If {Yn, n ≥ 1} also satisfies the so-called "Anscombe condition": for every ε > 0 there exists δ > 0 such that

(A) lim sup_{n→∞} P[ max_{|i−n|≤δn} |Yi − Yn| ≥ ε ] ≤ ε,

and {Nn, n ≥ 1} is a sequence of positive, integer-valued random variables such that

(2.3) Nn/an →P 1 as n → ∞,

where {an, n ≥ 1} is a sequence of positive integers with an → ∞, then condition (2.2) holds.

D.J. Aldous [?] has pointed out that condition (A) is also necessary for (2.2) when (2.3) holds.

THEOREM 2.2. [?] The following conditions are equivalent:
(i) the sequence {Yn, n ≥ 1} satisfies (2.1) and the Anscombe condition (A);
(ii) YNn =⇒ µ for every sequence {Nn, n ≥ 1} of positive, integer-valued random variables satisfying (2.3).

Let {Xk, k ≥ 1} be a sequence of independent random variables with zero means and finite variances. Let Sn = X1 + ... + Xn, sn² = Var(Sn), and Yn = Sn/sn, n ≥ 1. From the above results one can only anticipate that if 0 < Var(Xk) = σ² < ∞ (k ≥ 1), then Sn/(σ√n) =⇒ N(0, 1), and if Nn/an →P 1 as n → ∞, where {an, n ≥ 1} is a sequence of positive integers with an → ∞, then SNn/(σ√Nn) =⇒ N(0, 1). A more general and stronger result than that of [7] and [?] has been given by M. Csörgő and Z. Rychlik [46].
Full Screen
Close
Quit
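The conclusion of Theorem 2.1 can be checked empirically. The following Python sketch (not part of the original text; the uniform summands, the law of the random index, and all sample sizes are arbitrary illustrative choices) simulates S_{N_n}/(σ√N_n) for random indices with N_n/n close to 1 and checks that the first two empirical moments are near those of N(0, 1):

```python
import math
import random

random.seed(0)

def random_index_clt(n, trials=2000):
    """Monte Carlo sketch of Theorem 2.1: X_k i.i.d. uniform(-1, 1)
    (so sigma^2 = 1/3), N_n uniform on the integers of [0.9n, 1.1n],
    hence N_n/n -> 1 in probability; S_{N_n}/(sigma*sqrt(N_n)) should
    be approximately N(0, 1)."""
    sigma = math.sqrt(1.0 / 3.0)
    vals = []
    for _ in range(trials):
        N = random.randint(int(0.9 * n), int(1.1 * n))
        s = sum(random.uniform(-1.0, 1.0) for _ in range(N))
        vals.append(s / (sigma * math.sqrt(N)))
    return vals

vals = random_index_clt(400)
mean = sum(vals) / len(vals)
var = sum(v * v for v in vals) / len(vals) - mean * mean
print(round(mean, 2), round(var, 2))
```

The empirical mean and variance should come out close to 0 and 1 respectively.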
THEOREM 2.3. [46] Let {k_n, n ≥ 1} be a non-decreasing sequence of positive numbers. The following conditions are equivalent:
(i) the sequence {Y_n, n ≥ 1} satisfies (2.1) and the so-called "generalized Anscombe condition" with norming sequence {k_n, n ≥ 1}: for every ε > 0 there exists δ > 0 such that

(A′)    lim sup_{n→∞} P[ max_{|k_i^2 − k_n^2| ≤ δ k_n^2} |Y_i − Y_n| ≥ ε ] ≤ ε;

(ii) Y_{N_n} =⇒ µ for every sequence {N_n, n ≥ 1} of positive, integer-valued random variables satisfying

(2.4)    k_{N_n}^2 / k_{a_n}^2 −→ 1 in probability as n → ∞,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞.

It is easy to see that in the special case where k_n^2 = n (n ≥ 1), (A′) and (2.4) reduce to (A) and (2.3), respectively.

PROOF OF THEOREM 2.3. (i) =⇒ (ii). Let C be the set of bounded and continuous real-valued functions defined on the real line. Let C_0 be the subset of C consisting of functions f which satisfy (for all real x and y)

|f(x)| ≤ 1/2,    |f(x) − f(y)| ≤ |x − y|.

It is well known ([23], the proof of Theorem 2.1, or [24], the proof of Theorem 25.8) that if E f(Y_n) −→ E f(Y) for all f ∈ C_0, then Y_n =⇒ Y. Given f ∈ C_0 and ε > 0, choose δ as in (A′). Then
E|f(Y_{N_n}) − f(Y_{a_n})| ≤ ε + P[|Y_{N_n} − Y_{a_n}| > ε]
    ≤ ε + P[|k_{N_n}^2 − k_{a_n}^2| > δ k_{a_n}^2] + P[ max_{|k_i^2 − k_{a_n}^2| ≤ δ k_{a_n}^2} |Y_i − Y_{a_n}| ≥ ε ],

and so, by (A′) and (2.4),

lim sup_{n→∞} E|f(Y_{N_n}) − f(Y_{a_n})| ≤ 2ε,

and (ii) follows.
(ii) =⇒ (i). For the converse, it is clear that Y_n =⇒ µ, so suppose that (A′) fails. Then there exist ε > 0 and a subsequence b_1 < b_2 < . . . of positive integers such that

(2.5)    P[ max_i |Y_i − Y_{b_n}| ≥ ε ] > ε  for all n ≥ 1,

where the maximum is taken either over all i such that k_{b_n}^2 ≤ k_i^2 ≤ (1 + 1/n) k_{b_n}^2 or over all i such that (1 − 1/n) k_{b_n}^2 ≤ k_i^2 ≤ k_{b_n}^2. We shall only consider the first case, as the second one can be treated in the same way.
Let {G_i, 1 ≤ i ≤ M} be disjoint open sets such that 0 < µ(G_i) = µ(Ḡ_i) (Ḡ_i the closure of G_i), µ(⋃_{i=1}^{M} G_i) > 1 − ε/2, and diameter(G_i) < ε/2. Thus, by (2.5), there exist a set G ∈ {G_i, 1 ≤ i ≤ M} and a subsequence {b′_n, n ≥ 1} of {b_n, n ≥ 1} such that

(2.6)    P[ Y_{b′_n} ∈ G, max_i |Y_i − Y_{b′_n}| ≥ ε ] ≥ ε/(2M),

where the maximum is taken over all i such that k_{b′_n}^2 ≤ k_i^2 ≤ (1 + 1/n) k_{b′_n}^2. Define N_m = min(C_m^1, C_m^2), where

C_m^1 = max{ i : k_i^2 ≤ (1 + 1/m) k_{b′_m}^2 },    C_m^2 = min{ i ≥ b′_m : Y_i ∉ G }.

Then k_{N_m}^2 / k_{b′_m}^2 −→ 1 a.s. as m → ∞, so (2.4) holds, and hence Y_{N_m} =⇒ µ as m → ∞. But, by (2.6), we have

P[Y_{N_m} ∉ G] = P[Y_{b′_m} ∉ G] + P[Y_{b′_m} ∈ G, Y_{N_m} ∉ G] ≥ P[Y_{b′_m} ∉ G] + ε/(2M),

i.e., there exists ε > 0 for which

lim sup_{m→∞} P[Y_{N_m} ∉ G] ≥ µ(Ω \ G) + ε/(2M),

which proves that Y_{N_m} ≠⇒ µ. This is a contradiction to (ii), and it proves that the sequence {Y_n, n ≥ 1} must satisfy (A′).
REMARK 2.4. Let {X_k, k ≥ 1} be a sequence of independent random variables with zero means and finite variances. Let S_n = X_1 + . . . + X_n, s_n^2 = Var(S_n), and Y_n = S_n/s_n, n ≥ 1. Then the sequence {Y_n, n ≥ 1} of random variables satisfies (A′) with norming sequence {k_n, n ≥ 1}, where k_n^2 = s_n^2, n ≥ 1.
PROOF. For every n and 0 < δ < 1, define stopping times

T_1 = min{ k : s_k^2 ≥ (1 − δ) s_n^2 }  and  T_2 = max{ k : s_k^2 ≤ (1 + δ) s_n^2 }.

Note that, by Kolmogorov's inequality, we have for all ε > 0

P[ max_{T_1 ≤ i ≤ T_2} |S_i| ≥ ε ] ≤ s_{T_2}^2 ε^{−2} ≤ (1 + δ) s_n^2 ε^{−2}

and

P[ max_{T_1 ≤ i ≤ T_2} |S_i − S_{T_1}| ≥ ε ] ≤ (s_{T_2}^2 − s_{T_1}^2) ε^{−2} ≤ 2δ s_n^2 ε^{−2}.
Hence, for every ε > 0 and 0 < δ ≤ 1/3, we get

P[ max_{|s_i^2 − s_n^2| ≤ δ s_n^2} |Y_i − Y_n| ≥ ε ]
    ≤ P[ max_{|s_i^2 − s_n^2| ≤ δ s_n^2} |S_i − S_n| ≥ ε s_n / 2 ] + P[ max_{|s_i^2 − s_n^2| ≤ δ s_n^2} |S_i (s_i − s_n)| / (s_i s_n) ≥ ε/2 ]
    ≤ 8δ(4 + δ) ε^{−2},

which proves the conclusion.
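Remark 2.4 can be probed numerically: for independent, non-identically distributed summands, the fluctuation of Y_i over the window |s_i^2 − s_n^2| ≤ δ s_n^2 is small with high probability. A minimal Monte Carlo sketch (not from the text; the variance pattern, δ, ε, and the sample sizes are illustrative assumptions):

```python
import math
import random

random.seed(1)

def anscombe_window_prob(n=500, delta=0.1, eps=1.0, trials=1000):
    """Estimate P[ max_{|s_i^2 - s_n^2| <= delta*s_n^2} |Y_i - Y_n| >= eps ]
    for independent X_k ~ N(0, sigma_k^2) with sigma_k^2 = 1 + (k % 3)
    (a non-identically distributed toy choice), where Y_i = S_i/s_i."""
    variances = [1.0 + (k % 3) for k in range(1, 2 * n)]
    cum, tot = [], 0.0
    for v in variances:
        tot += v
        cum.append(tot)              # cum[i-1] = s_i^2
    sn2 = cum[n - 1]
    window = [i for i in range(1, 2 * n) if abs(cum[i - 1] - sn2) <= delta * sn2]
    hits = 0
    for _ in range(trials):
        s, partial = 0.0, []
        for v in variances:
            s += random.gauss(0.0, math.sqrt(v))
            partial.append(s)
        y_n = partial[n - 1] / math.sqrt(sn2)
        m = max(abs(partial[i - 1] / math.sqrt(cum[i - 1]) - y_n) for i in window)
        hits += m >= eps
    return hits / trials

p = anscombe_window_prob()
print(p)
```

For these parameters the estimated probability stays far below the crude bound 8δ(4 + δ)/ε² of the proof.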
It follows from Remark 2.4 that if S_n/s_n =⇒ µ, then S_{N_n}/s_{N_n} =⇒ µ for every sequence {N_n, n ≥ 1} of positive, integer-valued random variables such that s_{N_n}^2 / s_{a_n}^2 −→ 1 in probability, where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞.
We see that in Theorem 2.1, as well as in Theorem 2.3, the assumption Y_n =⇒ µ (cf. condition (2.1)) is essential for proving that Y_{N_n} =⇒ µ. In practical situations, however, we very often do not know whether (2.1) holds, and sometimes we even know that the sequence {Y_n, n ≥ 1} does not converge weakly. When, in such cases, can we state that a sequence {Y_{N_n}, n ≥ 1} (with random indices) converges weakly to a measure µ of interest, e.g. a Gaussian measure?
EXAMPLE 2.5. [149] Let {X_k, k ≥ 1} be a sequence of independent random variables defined by

P[ X_{2^{2^n}} = 2^{2^{n−1}} ] = P[ X_{2^{2^n}} = −2^{2^{n−1}} ] = 1/2  (n ≥ 1),

while X_k, for k ≠ 2^{2^n} (n ≥ 1, k ≥ 1), has the normal distribution N(0, 1) with mean zero and variance one. Put

S_n = ∑_{k=1}^{n} X_k,    s_n^2 = ∑_{k=1}^{n} Var(X_k),    Y_n = S_n / s_n,    n ≥ 1.

Then

Y_{2^{2^n}−1} =⇒ N_{0,1}  (as n → ∞)    and    Y_{2^{2^n}} =⇒ X,

where N_{0,1} denotes a normal random variable with mean 0 and variance 1, and X = (Z + N_{0,1})/√2, where Z is independent of N_{0,1} and

P[Z = −1] = P[Z = 1] = 1/2.

(The random variable X has the characteristic function ϕ_X(t) = cos(t/√2) exp(−t^2/4).)
Let {N′_n, n ≥ 1} be a sequence of positive, integer-valued random variables such that

(2.7)    P[ N′_n = 2^{2^n} − 1 ] = 1 − 1/n,    P[ N′_n = 2^{2^n} ] = 1/n    (n ≥ 1).
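The limit law X of Example 2.5 can be checked against its characteristic function by simulation. The sketch below (illustrative only; the sample size is an arbitrary choice) compares the empirical characteristic function of (Z + N_{0,1})/√2 with the closed form cos(t/√2) exp(−t²/4):

```python
import cmath
import math
import random

random.seed(2)

# Empirical characteristic function of X = (Z + N)/sqrt(2), where Z = +-1
# with probability 1/2 each and N is standard normal, compared with the
# closed form cos(t/sqrt(2)) * exp(-t^2/4) quoted in Example 2.5.
samples = [
    (random.choice((-1.0, 1.0)) + random.gauss(0.0, 1.0)) / math.sqrt(2.0)
    for _ in range(200_000)
]
diffs = []
for t in (0.5, 1.0, 2.0):
    emp = sum(cmath.exp(1j * t * x) for x in samples) / len(samples)
    exact = math.cos(t / math.sqrt(2.0)) * math.exp(-t * t / 4.0)
    diffs.append(abs(emp - exact))
print([round(d, 3) for d in diffs])
```

The discrepancies should be on the order of the Monte Carlo error, i.e. well below 0.02 at each tested t.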
Theorem 2.3 does not allow us to confirm the weak convergence of the randomly indexed sequence {Y_{N′_n}, n ≥ 1}, although it is known [149] that Y_{N′_n} =⇒ N_{0,1} (see also Example 2.17 later in this chapter). This example suggests that, in studying the limit behaviour of randomly indexed sequences of random variables, the assumption (2.1) should be replaced by a weaker one, e.g., that Y_{τ_n} =⇒ µ for a sequence {τ_n, n ≥ 1} of positive, integer-valued random variables such that τ_n −→ ∞ in probability, or its particular case: Y_{a_n} =⇒ µ for a sequence {a_n, n ≥ 1} of positive integers tending to infinity. When, in such a case (when we know that the sequence {Y_n, n ≥ 1} does not converge weakly), can we state that a randomly indexed sequence {Y_{N_n}, n ≥ 1} converges weakly? The next section gives an answer to this question and contains, as particular cases, the results quoted above.

2.2. The Anscombe random condition
Let {Y_n, n ≥ 1} be a sequence of random variables defined on a probability space (Ω, F, P), and let {α_n, n ≥ 1} be a non-decreasing sequence of positive random variables (0 < α_n ≤ α_{n+1} a.s., n ≥ 1) defined on the same probability space (Ω, F, P). Furthermore, let {τ_n, n ≥ 1} be a sequence of positive, integer-valued random variables defined on (Ω, F, P) and such that τ_n −→ ∞ in probability.
DEFINITION 2.6 (Anscombe random condition). A sequence {Y_n, n ≥ 1} is said to satisfy the Anscombe random condition with norming sequence {α_n, n ≥ 1} of positive random variables and filtering sequence {τ_n, n ≥ 1} of positive, integer-valued random variables if for every ε > 0 there exists δ > 0 such that

(A∗)    lim sup_{n→∞} P[ max_{|α_i^2 − α_{τ_n}^2| ≤ δ α_{τ_n}^2} |Y_i − Y_{τ_n}| ≥ ε ] ≤ ε.
One can easily see that in the special case where α_n = k_n and τ_n = n (n ≥ 1), (A∗) reduces to (A′), and hence, when k_n^2 = n (n ≥ 1), (A∗) reduces to (A). Moreover, we notice that if a sequence {Y_n, n ≥ 1} satisfies (A), then for any sequence {a_n, n ≥ 1} of positive integers with a_n → ∞ we have

∀ε > 0 ∃δ > 0 :  lim sup_{n→∞} P[ max_{|i − a_n| ≤ δ a_n} |Y_i − Y_{a_n}| ≥ ε ] ≤ ε,

i.e. it satisfies (A∗) with norming sequence {√n, n ≥ 1} (i.e. α_n^2 = n) and any filtering sequence {a_n, n ≥ 1} of positive integers such that a_n → ∞. An analogous remark applies to (A′): namely, if a sequence {Y_n, n ≥ 1} satisfies (A′) with norming sequence {k_n, n ≥ 1}, then it satisfies (A∗) with the same norming sequence {k_n, n ≥ 1} (i.e. α_n = k_n) and any filtering sequence {a_n, n ≥ 1} of positive integers such that a_n → ∞. The following lemma generalizes these remarks.

LEMMA 2.7. If a sequence {Y_n, n ≥ 1} satisfies (A′) with the norming sequence {k_n, n ≥ 1}, then it satisfies (A∗) with the same norming sequence {k_n, n ≥ 1} and any filtering sequence {τ_n, n ≥ 1} independent of {Y_n, n ≥ 1}.
PROOF. Let M > 0 be fixed. By the independence assumption,

P[ max_{|k_i^2 − k_{τ_n}^2| ≤ δ k_{τ_n}^2} |Y_i − Y_{τ_n}| ≥ ε ] ≤ P[τ_n ≤ M] + ∑_{r=M+1}^{∞} P[τ_n = r] P[ max_{|k_i^2 − k_r^2| ≤ δ k_r^2} |Y_i − Y_r| ≥ ε ].

But

lim_{n→∞} P[τ_n ≤ M] = 0,

since τ_n −→ ∞ in probability. Choosing then M so large that for all r > M

P[ max_{|k_i^2 − k_r^2| ≤ δ k_r^2} |Y_i − Y_r| ≥ ε ] ≤ ε,

we obtain the desired result.
There arises, obviously, a question about the conditions which the filtering sequence {τ_n, n ≥ 1} should satisfy if, in Lemma 2.7, we reject the assumption that {τ_n, n ≥ 1} is independent of {Y_n, n ≥ 1}. The following lemma gives an answer to this question.

LEMMA 2.8. If a sequence {Y_n, n ≥ 1} satisfies (A′) with the norming sequence {k_n, n ≥ 1}, then it satisfies (A∗) with the same norming sequence {k_n, n ≥ 1} and any filtering sequence {N_n, n ≥ 1} satisfying (2.4).

PROOF. By the assumption and the remark after Definition 2.6, we conclude that the sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {k_n, n ≥ 1} and any filtering sequence {a_n, n ≥ 1} of positive integers such that a_n → ∞. Let {N_n, n ≥ 1} be a sequence of positive, integer-valued random variables satisfying (2.4), i.e.

(2.8)    k_{N_n}^2 / k_{a_n}^2 −→ 1 in probability,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞. Put

B_n = [ |k_{N_n}^2 − k_{a_n}^2| ≤ η k_{a_n}^2 ]    (n ≥ 1),

where η is a fixed positive number. Then for every ε > 0 and δ > 0 we have

P[ max_{|k_i^2 − k_{N_n}^2| ≤ δ k_{N_n}^2} |Y_i − Y_{N_n}| ≥ ε ]
    ≤ P(B_n^c) + P[ max_{|k_i^2 − k_{N_n}^2| ≤ δ k_{N_n}^2} |Y_i − Y_{a_n}| ≥ ε/2, B_n ] + P[ |Y_{N_n} − Y_{a_n}| ≥ ε/2, B_n ]
    ≤ P(B_n^c) + P[ max_{|k_i^2 − k_{a_n}^2| ≤ δ∗ k_{a_n}^2} |Y_i − Y_{a_n}| ≥ ε/2 ] + P[ max_{|k_i^2 − k_{a_n}^2| ≤ η k_{a_n}^2} |Y_i − Y_{a_n}| ≥ ε/2 ],

where δ∗ = δ(1 + η) + η. Since the sequence {Y_n, n ≥ 1} satisfies condition (A∗) with the norming sequence {k_n, n ≥ 1} and filtering sequence {a_n, n ≥ 1}, and since, by (2.8), P(B_n^c) → 0, the last inequality proves that the sequence {Y_n, n ≥ 1} satisfies (A∗) with norming sequence {k_n, n ≥ 1} and filtering sequence {N_n, n ≥ 1}.

REMARK 2.9. It will be shown later that condition (2.4) is, in a sense, necessary for (A′) to imply (A∗). It will be proved in Section 2.4 that it is not enough to assume k_{N_n}^2 / k_{τ_{a_n}}^2 −→ 1 in probability as n → ∞.

LEMMA 2.10. If a sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {α_n, n ≥ 1} and filtering sequence {n, n ≥ 1} (i.e. τ_n = n a.s., n ≥ 1), then it satisfies (A∗) with the same norming sequence {α_n, n ≥ 1} and any filtering sequence {υ_n, n ≥ 1} independent of {(α_k, Y_k), k ≥ 1}.
The proof of this lemma runs similarly to the proof of Lemma 2.7, so it is omitted. The following lemma generalizes Lemma 2.8.

LEMMA 2.11. [99] If a sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {α_n, n ≥ 1} and filtering sequence {τ_n, n ≥ 1}, then it satisfies (A∗) with the same norming sequence {α_n, n ≥ 1} and any filtering sequence {N_n, n ≥ 1} such that

(2.9)    α_{N_n}^2 / α_{τ_{a_n}}^2 −→ 1 in probability as n → ∞,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞.

PROOF. Put B_n = [ |α_{N_n}^2 − α_{τ_{a_n}}^2| ≤ η α_{τ_{a_n}}^2 ] (n ≥ 1), where η is a fixed positive number. Then for every ε > 0 and δ > 0, we have
P[ max_{|α_i^2 − α_{N_n}^2| ≤ δ α_{N_n}^2} |Y_i − Y_{N_n}| ≥ ε ]
    ≤ P(B_n^c) + P[ max_{|α_i^2 − α_{τ_{a_n}}^2| ≤ δ∗ α_{τ_{a_n}}^2} |Y_i − Y_{τ_{a_n}}| ≥ ε/2 ] + P[ max_{|α_i^2 − α_{τ_{a_n}}^2| ≤ η α_{τ_{a_n}}^2} |Y_i − Y_{τ_{a_n}}| ≥ ε/2 ],

where δ∗ = δ(1 + η) + η. The assumption and this inequality imply the desired result.

2.3. The Anscombe type theorem
The following theorem generalizes Theorem 2.3.

THEOREM 2.12. [99] Let {α_n, n ≥ 1} be a non-decreasing sequence of positive random variables and let {τ_n, n ≥ 1} be a sequence of positive, integer-valued random variables such that τ_n −→ ∞ in probability. The following conditions are equivalent:
(i) Y_{τ_n} =⇒ µ and the sequence {Y_n, n ≥ 1} satisfies the Anscombe random condition (A∗) with norming sequence {α_n, n ≥ 1} and filtering sequence {τ_n, n ≥ 1};
(ii) Y_{N_n} =⇒ µ for every {N_n, n ≥ 1} satisfying

(2.10)    α_{N_n}^2 / α_{τ_{a_n}}^2 −→ 1 in probability as n → ∞,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞.

PROOF. Let ε > 0 and a closed set A ⊂ R be given. Then, for every δ > 0, we have

P[Y_{N_n} ∈ A] ≤ P[Y_{τ_{a_n}} ∈ A^ε] + P[Y_{N_n} ∈ A, Y_{τ_{a_n}} ∉ A^ε]
    ≤ P[Y_{τ_{a_n}} ∈ A^ε] + P[|Y_{N_n} − Y_{τ_{a_n}}| ≥ ε]
    ≤ P[Y_{τ_{a_n}} ∈ A^ε] + P[|α_{N_n}^2 − α_{τ_{a_n}}^2| > δ α_{τ_{a_n}}^2] + P[ max_{|α_i^2 − α_{τ_{a_n}}^2| ≤ δ α_{τ_{a_n}}^2} |Y_i − Y_{τ_{a_n}}| ≥ ε ],

where A^ε = {x : ρ(x, A) ≤ ε}, ρ(x, A) = inf{|x − y| : y ∈ A}. Since ε > 0 can be chosen arbitrarily small, we see by (2.10), (A∗) and the assumption Y_{τ_n} =⇒ µ that for every closed set A ⊂ R we have

lim sup_{n→∞} P[Y_{N_n} ∈ A] ≤ µ(A),

i.e. Y_{N_n} =⇒ µ.
If (ii) holds, then putting N_n = τ_n (n ≥ 1), we conclude that Y_{τ_n} =⇒ µ. Suppose that (A∗) with the norming sequence {α_n, n ≥ 1} and filtering sequence {τ_n, n ≥ 1} fails. Then there exist an ε > 0 and a subsequence n_1 < n_2 < . . . of positive integers (n_j → ∞ as j → ∞) such that for all j ≥ 1

(2.11)    P[ max_{i ∈ B_{n(j)}} |Y_i − Y_{τ_{n(j)}}| ≥ ε ] > ε,

where B_{n(j)} = {i : α_{τ_{n(j)}}^2 ≤ α_i^2 ≤ (1 + 1/j) α_{τ_{n(j)}}^2} or B_{n(j)} = {i : (1 − 1/j) α_{τ_{n(j)}}^2 ≤ α_i^2 ≤ α_{τ_{n(j)}}^2}.
We shall only consider the first case, as the second one can be treated similarly. Let {G_i, 1 ≤ i ≤ M} be disjoint open sets such that

0 < µ(G_i) = µ(Ḡ_i),    µ(⋃_{i=1}^{M} G_i) > 1 − ε/2,

and diameter(G_i) < ε/2. Thus, by (2.11), there exist a set G ∈ {G_i, 1 ≤ i ≤ M} and a subsequence {m(j), j ≥ 1} of the sequence {n(j), j ≥ 1} such that for all j ≥ 1

(2.12)    P[ Y_{τ_{m(j)}} ∈ G, max_{α_{τ_{m(j)}}^2 ≤ α_i^2 ≤ (1+1/j) α_{τ_{m(j)}}^2} |Y_i − Y_{τ_{m(j)}}| ≥ ε ] ≥ ε/(2M).

Let N_j = min(C_j^1, C_j^2), where

C_j^1 = max{ i : α_i^2 ≤ (1 + 1/j) α_{τ_{m(j)}}^2 },    C_j^2 = min{ i ≥ τ_{m(j)} : Y_i ∉ G }.

Since the sequence {α_n, n ≥ 1} is non-decreasing, we have (for j ≥ 1)

α_{τ_{m(j)}}^2 ≤ α_{C_j^1}^2,    α_{τ_{m(j)}}^2 ≤ α_{C_j^2}^2,    α_{N_j}^2 ≤ (1 + 1/j) α_{τ_{m(j)}}^2.

Hence

α_{N_j}^2 / α_{τ_{m(j)}}^2 −→ 1 in probability as j → ∞,

which proves that the sequence {N_j, j ≥ 1} satisfies (2.10). But, by (2.12), we have

P[Y_{N_j} ∉ G] = P[Y_{τ_{m(j)}} ∉ G] + P[Y_{N_j} ∉ G, Y_{τ_{m(j)}} ∈ G]
    ≥ P[Y_{τ_{m(j)}} ∉ G] + P[ Y_{τ_{m(j)}} ∈ G, max_{α_{τ_{m(j)}}^2 ≤ α_i^2 ≤ (1+1/j) α_{τ_{m(j)}}^2} |Y_i − Y_{τ_{m(j)}}| ≥ ε ]
    ≥ P[Y_{τ_{m(j)}} ∉ G] + ε/(2M),

which proves that Y_{N_j} ≠⇒ µ as j → ∞. This is a contradiction to (ii), and it proves that the sequence {Y_n, n ≥ 1} must satisfy (A∗) with norming sequence {α_n, n ≥ 1} and filtering sequence {τ_n, n ≥ 1}.

It is easy to see that, putting α_n = k_n, τ_n = n (n ≥ 1), Theorem 2.12 reduces to Theorem 2.3. This is so since in this case (A∗) reduces to (A′), whereas (2.10) reduces to (2.4).

COROLLARY 2.13. Let {k_n, n ≥ 1} be a non-decreasing sequence of positive numbers and let {b_n, n ≥ 1} be a sequence of positive integers with b_n → ∞. The following conditions are equivalent:
(i) Y_{b_n} =⇒ µ and the sequence {Y_n, n ≥ 1} satisfies (A∗) with norming sequence {k_n, n ≥ 1} and filtering sequence {b_n, n ≥ 1};
(ii) Y_{N_n} =⇒ µ for every {N_n, n ≥ 1} satisfying

(2.13)    k_{N_n}^2 / k_{a_n}^2 −→ 1 in probability as n → ∞,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞ and {a_n, n ≥ 1} ⊆ {b_n, n ≥ 1}.
COROLLARY 2.14. Let {k_n, n ≥ 1} be a non-decreasing sequence of positive numbers and let {τ_n, n ≥ 1} be a sequence of positive, integer-valued random variables such that τ_n −→ ∞ in probability. The following conditions are equivalent:
(i) Y_{τ_n} =⇒ µ and the sequence {Y_n, n ≥ 1} satisfies (A∗) with norming sequence {k_n, n ≥ 1} and filtering sequence {τ_n, n ≥ 1};
(ii) Y_{N_n} =⇒ µ for every {N_n, n ≥ 1} satisfying

(2.14)    k_{N_n}^2 / k_{τ_{a_n}}^2 −→ 1 in probability as n → ∞,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞.

In the particular case where k_n^2 = n and τ_n = [λn] (n ≥ 1), λ being a positive a.s. finite random variable (P[0 < λ < ∞] = 1), we have:

COROLLARY 2.15. The following conditions are equivalent:
(i) Y_{[λn]} =⇒ µ and the sequence {Y_n, n ≥ 1} satisfies (A∗) with norming sequence {√n, n ≥ 1} and filtering sequence {[λn], n ≥ 1};
(ii) Y_{N_n} =⇒ µ for every {N_n, n ≥ 1} satisfying
(2.15)    N_n / a_n −→ λ in probability as n → ∞,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞.

COROLLARY 2.16. Let {α_n, n ≥ 1} be a non-decreasing sequence of positive random variables and let {b_n, n ≥ 1} be a sequence of positive integers with b_n → ∞. The following conditions are equivalent:
(i) Y_{b_n} =⇒ µ and the sequence {Y_n, n ≥ 1} satisfies (A∗) with norming sequence {α_n, n ≥ 1} and filtering sequence {b_n, n ≥ 1};
(ii) Y_{N_n} =⇒ µ for every {N_n, n ≥ 1} satisfying

α_{N_n}^2 / α_{a_n}^2 −→ 1 in probability as n → ∞,

where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞ and {a_n, n ≥ 1} ⊆ {b_n, n ≥ 1}.
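Corollary 2.15 can be illustrated numerically: even when N_n/n converges to a genuinely random limit λ, randomly indexed normalized sums of i.i.d. standard normals remain asymptotically N(0, 1). A hedged Python sketch (not from the text; the law of λ and all sizes are arbitrary illustrative choices):

```python
import math
import random

random.seed(4)

def mixed_index_clt(n=2000, trials=1500):
    """Sketch of Corollary 2.15: N_n = [lambda*n] with lambda a positive,
    a.s. finite random variable (here uniform on (0.5, 1.5)), X_k i.i.d.
    N(0, 1); Y_{N_n} = S_{N_n}/sqrt(N_n) should still be approximately
    N(0, 1), although N_n/n does not converge to a constant."""
    vals = []
    for _ in range(trials):
        lam = random.uniform(0.5, 1.5)
        N = max(1, int(lam * n))
        s = sum(random.gauss(0.0, 1.0) for _ in range(N))
        vals.append(s / math.sqrt(N))
    return vals

vals = mixed_index_clt()
mean = sum(vals) / len(vals)
var = sum(v * v for v in vals) / len(vals) - mean * mean
print(round(mean, 2), round(var, 2))
```

Again the empirical mean and variance should be close to 0 and 1.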
The following example elucidates the usefulness of these considerations.

EXAMPLE 2.17 (cf. Example 2.5). Let {X_k, k ≥ 1}, S_n, Y_n, and N′_n be as in Example 2.5. It is easy to see that the sequence {Y_n, n ≥ 1} satisfies (A′) with norming sequence {k_n, n ≥ 1}, where k_n = s_n (cf. Remark 2.4). Hence, by Lemma 2.7, the sequence {Y_n, n ≥ 1} also satisfies (A∗) with the norming sequence {s_n, n ≥ 1} and any filtering sequence {a_n, n ≥ 1} of positive integers with a_n → ∞. Thus, by Corollary 2.13, Y_{N_n} =⇒ N_{0,1} for every {N_n, n ≥ 1} satisfying s_{N_n}^2 / s_{a_n}^2 −→ 1 in probability, where {a_n, n ≥ 1} ⊆ {2^{2^n} − 1, n ≥ 1} and a_n → ∞; and Y_{N_n} =⇒ X for every {N_n, n ≥ 1} satisfying s_{N_n}^2 / s_{a′_n}^2 −→ 1 in probability, where {a′_n, n ≥ 1} ⊆ {2^{2^n}, n ≥ 1} and a′_n → ∞.
For the sequence {N′_n, n ≥ 1} defined by (2.7) we have

P[ |s_{N′_n}^2 / s_{2^{2^n}−1}^2 − 1| ≥ ε ] = P[ |s_{N′_n}^2 / s_{2^{2^n}−1}^2 − 1| ≥ ε, N′_n = 2^{2^n} ] ≤ P[ N′_n = 2^{2^n} ] = 1/n −→ 0  for all ε > 0,

so

s_{N′_n}^2 / s_{2^{2^n}−1}^2 −→ 1 in probability as n → ∞.

Hence we conclude that in this case Y_{N′_n} =⇒ N_{0,1}. Let us further notice that if A is an event independent of {X_k, k ≥ 1} and

υ_n = 2^{2^n} − 1 on A,    υ_n = 2^{2^n} on A^c,

then

P[Y_{υ_n} < x] −→ Φ(x) P(A) + P[X < x] P(A^c)  as n → ∞  for every real x.

Therefore,
(2.16)    Y_{υ_n} =⇒ N_{0,1} I(A) + (1/√2)(Z + N_{0,1}) I(A^c),

where N_{0,1}, I(A), and Z are independent. Moreover, the sequence {Y_n, n ≥ 1} satisfies (A∗) with norming sequence {s_n, n ≥ 1} and filtering sequence {υ_n, n ≥ 1}. Indeed, by the construction, υ_n −→ ∞ in probability and υ_n is, for every n, independent of {Y_k, k ≥ 1}. Since the sequence {Y_n, n ≥ 1} satisfies (A′) with norming sequence {s_n, n ≥ 1}, Lemma 2.7 confirms the desired result. Hence, by (2.16) and Corollary 2.14, we have

Y_{N_n} =⇒ N_{0,1} I(A) + (1/√2)(Z + N_{0,1}) I(A^c)  as n → ∞

for every {N_n, n ≥ 1} such that s_{N_n}^2 / s_{υ_{a_n}}^2 −→ 1 in probability (n → ∞), where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞. This fact implies the statements after (2.7): putting A = Ω or, equivalently, υ_n = 2^{2^n} − 1 (n ≥ 1), we obtain Y_{N_n} =⇒ N_{0,1} for every sequence {N_n, n ≥ 1} satisfying s_{N_n}^2 / s_{a_n}^2 −→ 1 in probability, where {a_n, n ≥ 1} ⊆ {2^{2^n} − 1, n ≥ 1} and a_n → ∞; putting, however, A = ∅ or, equivalently, υ_n = 2^{2^n} (n ≥ 1), we obtain Y_{N_n} =⇒ X for every sequence {N_n, n ≥ 1} satisfying s_{N_n}^2 / s_{a′_n}^2 −→ 1 in probability, where {a′_n, n ≥ 1} ⊆ {2^{2^n}, n ≥ 1} and a′_n → ∞.
2.4. Usefulness of the Anscombe random condition

Now we shall give two examples of sequences {Y_n, n ≥ 1} which fulfil (A∗) while satisfying neither (A) nor (A′). At the end of this section we shall give an example proving Remark 2.9.
EXAMPLE 2.18. Let {X_k, k ≥ 1} be a sequence of random variables defined as follows: the random variables X_k, k ≠ 2^{2^n} (n ≥ 0, k ≥ 1), are independent and have the normal distribution N(0, 1), and

X_2 = −X_1,    X_{2^{2^n}} = − ∑_{j=2^{2^{n−1}}+1}^{2^{2^n}−1} X_j    (n ≥ 1).

Let us put S_n = ∑_{j=1}^{n} X_j, Y_n = S_n/√n (n ≥ 1). Then

(2.17)    Y_{2^{2^n}} = 0  a.s. (n ≥ 0),    Y_{2^{2^n}−1} =⇒ N_{0,1}    (n → ∞).
Indeed, for every n ≥ 0 we have S_{2^{2^n}} = 0 a.s., which proves that Y_{2^{2^n}} = 0 a.s. For the proof of the second property in (2.17) we put S_n = S_n^∗ + S_n^∗∗ (n ≥ 1), where

(2.18)    S_n^∗ = ∑_{i=1, i∈N^∗}^{n} X_i,    S_n^∗∗ = ∑_{i=1, i∈N^∗∗}^{n} X_i,

and

(2.19)    N^∗ = { j ∈ N : j ≠ 2^{2^n}, n ≥ 0 },    N^∗∗ = N \ N^∗,
and note that, for every n ≥ 1, S_n^∗ is a sum of independent random variables, with S^∗_{2^{2^n}} = S^∗_{2^{2^n}−1} a.s. and S^∗∗_{2^{2^n}} = −S^∗_{2^{2^n}−1} a.s. for n ≥ 0, while

(2.20)    S_1^∗∗ = 0 a.s.  and  S^∗∗_{2^{2^n}−i} = −S^∗_{2^{2^{n−1}}−1} a.s. for 1 ≤ i ≤ 2^{2^n} − 2^{2^{n−1}}    (n ≥ 1).
Moreover, by Kolmogorov's inequality we have, for every ε > 0,

P[ |S^∗∗_{2^{2^n}−1}| / √(2^{2^n} − 1) ≥ ε ] = P[ | ∑_{i=1, i∈N^∗}^{2^{2^{n−1}}−1} X_i | ≥ ε √(2^{2^n} − 1) ] ≤ 2^{2^{n−1}} / (ε^2 (2^{2^n} − 1)) −→ 0    (n → ∞).

Hence, and by taking into account that

E exp{ it S^∗_{2^{2^n}−1} / √(2^{2^n} − 1) } = E exp{ it ∑_{j=1, j∈N^∗}^{2^{2^n}−1} X_j / √(2^{2^n} − 1) } = exp{ −t^2 (2^{2^n} − n − 1) / (2(2^{2^n} − 1)) } −→ exp{−t^2/2}    (n → ∞),

we obtain

Y_{2^{2^n}−1} = S^∗_{2^{2^n}−1} / √(2^{2^n} − 1) + S^∗∗_{2^{2^n}−1} / √(2^{2^n} − 1) =⇒ N_{0,1}    (n → ∞),
which ends the proof of (2.17).
Now we shall prove that the sequence {Y_n, n ≥ 1} does not satisfy the Anscombe condition (A). Indeed, if the sequence {Y_n, n ≥ 1} fulfilled (A), then in view of the remarks after Definition 2.6 it would fulfil (A∗) with norming sequence {√n, n ≥ 1} and any filtering sequence {a_n, n ≥ 1} of positive integers with a_n → ∞ (n → ∞), e.g. with a_n = 2^{2^n} − 1 (n ≥ 1), for which Y_{2^{2^n}−1} =⇒ N_{0,1} (n → ∞). Hence, by Corollary 2.13, Y_{N_n} =⇒ N_{0,1} (n → ∞) for every sequence {N_n, n ≥ 1} such that N_n/(2^{2^n} − 1) −→ 1 in probability (n → ∞). So then, for N_n = 2^{2^n} a.s. (n ≥ 1), we would have Y_{2^{2^n}} =⇒ N_{0,1} (n → ∞), which is a contradiction to (2.17). Thus, the sequence {Y_n, n ≥ 1} does not satisfy (A∗) with the norming sequence {√n, n ≥ 1} and filtering sequence {2^{2^n} − 1, n ≥ 1}, and then, by the earlier considerations, we conclude that the sequence {Y_n, n ≥ 1} does not satisfy (A).
And now we shall prove that the sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {√n, n ≥ 1} and filtering sequence {3^{2^n}, n ≥ 1}. To this end we note that for every ε > 0 and for every δ > 0 we have

(2.21)    P[ max_{|i − 3^{2^n}| ≤ δ 3^{2^n}} |Y_i − Y_{3^{2^n}}| ≥ ε ]
    ≤ P[ max_{|i − 3^{2^n}| ≤ δ 3^{2^n}} |S_i − S_{3^{2^n}}| ≥ ε √(3^{2^n}) / 2 ] + P[ max_{|i − 3^{2^n}| ≤ δ 3^{2^n}} |S_i| |3^{2^n} − i| / (√i · 3^{2^n}) ≥ ε/2 ]
    ≤ 2 P[ max_{[(1−δ)3^{2^n}] ≤ i ≤ [(1+δ)3^{2^n}]} |S_i − S_{[(1−δ)3^{2^n}]}| ≥ ε √(3^{2^n}) / 4 ] + P[ max_{[(1−δ)3^{2^n}] ≤ i ≤ [(1+δ)3^{2^n}]} |S_i| ≥ ε √([(1−δ)3^{2^n}]) / (2δ) ],

where [x] denotes the integral part of the real number x. Further, by (2.20) and Kolmogorov's inequality, the first term on the right-hand side of (2.21) is less than or equal to

2 P[ max_{[(1−δ)3^{2^n}] ≤ i ≤ [(1+δ)3^{2^n}]} |S_i^∗ − S^∗_{[(1−δ)3^{2^n}]}| ≥ ε √(3^{2^n}) / 4 ] ≤ 32 {[(1+δ)3^{2^n}] − [(1−δ)3^{2^n}]} / (ε^2 3^{2^n}) −→ 64δ/ε^2    (n → ∞),
since, for [(1−δ)3^{2^n}] ≤ i ≤ [(1+δ)3^{2^n}], we have

S_i^∗∗ − S^∗∗_{[(1−δ)3^{2^n}]} = ∑_{j=[(1−δ)3^{2^n}]+1, j∈N^∗∗}^{i} X_j = 0    a.s.,

while the second term on the right-hand side of (2.21) is less than or equal to

P[ max_{[(1−δ)3^{2^n}] ≤ i ≤ [(1+δ)3^{2^n}]} |S_i^∗| ≥ ε √([(1−δ)3^{2^n}]) / (4δ) ] + P[ max_{[(1−δ)3^{2^n}] ≤ i ≤ [(1+δ)3^{2^n}]} |S_i^∗∗| ≥ ε √([(1−δ)3^{2^n}]) / (4δ) ]
    ≤ 16δ^2 [(1+δ)3^{2^n}] / (ε^2 [(1−δ)3^{2^n}]) + P[ |S^∗_{2^{2^n}−1}| ≥ ε √([(1−δ)3^{2^n}]) / (4δ) ]
    ≤ 16δ^2 {[(1+δ)3^{2^n}] + 2^{2^n}} / (ε^2 [(1−δ)3^{2^n}]) −→ 16δ^2 (1+δ) / (ε^2 (1−δ))    (n → ∞),

where 16δ^2(1+δ)/(ε^2(1−δ)) ≤ 64δ/ε^2 for 0 < δ ≤ 1/2. Hence, for every ε > 0 and 0 < δ ≤ 1/2, we have

lim sup_{n→∞} P[ max_{|i − 3^{2^n}| ≤ δ 3^{2^n}} |Y_i − Y_{3^{2^n}}| ≥ ε ] ≤ 128δ/ε^2,
which proves that the sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {√n, n ≥ 1} and filtering sequence {3^{2^n}, n ≥ 1}.
Let us further notice that Y_{3^{2^n}} =⇒ N_{0,1} (n → ∞). Indeed, for every n ≥ 1 we have

Y_{3^{2^n}} = S^∗_{3^{2^n}} / √(3^{2^n}) + S^∗∗_{3^{2^n}} / √(3^{2^n}),

where, by Kolmogorov's inequality, for every ε > 0,

P[ |S^∗∗_{3^{2^n}}| / √(3^{2^n}) ≥ ε ] = P[ |S^∗_{2^{2^n}−1}| ≥ ε √(3^{2^n}) ] ≤ 2^{2^n} / (ε^2 3^{2^n}) −→ 0    (n → ∞)

and

E exp{ it S^∗_{3^{2^n}} / √(3^{2^n}) } = E exp{ it ∑_{j=1, j∈N^∗}^{3^{2^n}} X_j / √(3^{2^n}) } = exp{ −t^2 (3^{2^n} − n − 1) / (2 · 3^{2^n}) } −→ exp{−t^2/2}    (n → ∞),

we have Y_{3^{2^n}} =⇒ N_{0,1} (n → ∞). Hence, by Corollary 2.13, Y_{N_n} =⇒ N_{0,1} (n → ∞) for every {N_n, n ≥ 1} satisfying N_n/a_n −→ 1 in probability (n → ∞), where {a_n, n ≥ 1} ⊂ {3^{2^n}, n ≥ 1} (a_n → ∞, n → ∞).
EXAMPLE 2.19. Let {X_k, k ≥ 1} be a sequence of random variables defined as follows: the random variables X_k, k ≠ 2^{2^n} (n ≥ 0, k ≥ 1), are independent, have the normal distribution N(0, 1) for k ≠ 3^{2^n} (n ≥ 1), and

P[ X_{3^{2^n}} = 3^{2^{n−1}} ] = P[ X_{3^{2^n}} = −3^{2^{n−1}} ] = 1/2    (n ≥ 1),

while

X_2 = −X_1,    X_{2^{2^n}} = − ∑_{j=2^{2^{n−1}}+1}^{2^{2^n}−1} X_j    (n ≥ 1).
Let us put

k_n^2 = ∑_{j=1}^{n} Var(X_j)  for n ≠ 2^{2^m} (m ≥ 0),    and    k^2_{2^{2^n}} = k^2_{2^{2^n}−1}  for n ≥ 0,

S_n = ∑_{j=1}^{n} X_j,    and    Y_n = S_n / k_n    (n ≥ 1).

Then

(2.22)    Y_{2^{2^n}} = 0  a.s. (n ≥ 0),    Y_{2^{2^n}−1} =⇒ N_{0,1}    (n → ∞).
Indeed, for every n ≥ 0 we have S_{2^{2^n}} = 0 a.s., which proves that Y_{2^{2^n}} = 0 a.s. (n ≥ 0). For the proof of the second property in (2.22) we notice that k^2_{2^{2^n}−1} ∼ 2^{2^n} for sufficiently large n, and

Y_{2^{2^n}−1} = S^∗_{2^{2^n}−1} / k_{2^{2^n}−1} + S^∗∗_{2^{2^n}−1} / k_{2^{2^n}−1}    (n ≥ 1),

where S_n^∗ and S_n^∗∗ are defined as in (2.18). Furthermore, by (2.20), we have

Y_{2^{2^n}−1} = ( S^∗_{2^{2^n}−1} − S^∗_{2^{2^{n−1}}−1} ) / k_{2^{2^n}−1} = ∑_{j=2^{2^{n−1}}+1}^{2^{2^n}−1} X_j / k_{2^{2^n}−1}    for every n ≥ 1,
where

E exp{ it ∑_{j=2^{2^{n−1}}+1}^{2^{2^n}−1} X_j / k_{2^{2^n}−1} } = cos( t · 3^{2^{n−2}} / k_{2^{2^n}−1} ) exp{ −t^2 (2^{2^n} − 2^{2^{n−1}} − 1) / (2 k^2_{2^{2^n}−1}) } −→ exp{−t^2/2}    (n → ∞).

Hence Y_{2^{2^n}−1} =⇒ N_{0,1} (n → ∞), which completes the proof of (2.22).
From (2.22) and the fact that

k^2_{2^{2^n}} = k^2_{2^{2^n}−1}    (n ≥ 1),

we conclude that the sequence {Y_n, n ≥ 1} satisfies neither the Anscombe condition (A) nor the generalized Anscombe condition (A′) with the norming sequence {k_n, n ≥ 1}. The proofs of these facts run similarly to those in Example 2.18. Suppose, for example, that the sequence {Y_n, n ≥ 1} satisfies the generalized Anscombe condition (A′) with norming sequence {k_n, n ≥ 1}. Then, in view of the remarks after Definition 2.6, it would fulfil (A∗) with (the same) norming sequence {k_n, n ≥ 1} and any filtering sequence {a_n, n ≥ 1} of positive integers with a_n → ∞ (n → ∞), e.g. with a_n = 2^{2^n} − 1 (n ≥ 1), for which Y_{2^{2^n}−1} =⇒ N_{0,1} (n → ∞). Hence, by Corollary 2.13, we would have Y_{N_n} =⇒ N_{0,1} (n → ∞) for every sequence {N_n, n ≥ 1} such that k^2_{N_n} / k^2_{2^{2^n}−1} −→ 1 in probability (n → ∞). So then, since it follows from the assumptions that k^2_{2^{2^n}} = k^2_{2^{2^n}−1} for all n ≥ 1, we would conclude that Y_{2^{2^n}} =⇒ N_{0,1} (n → ∞). But this is a contradiction to (2.22), which states that Y_{2^{2^n}} = 0 a.s. for all n ≥ 0. Thus, the sequence {Y_n, n ≥ 1} does not satisfy (A∗) with the norming sequence {k_n, n ≥ 1} and filtering sequence {2^{2^n} − 1, n ≥ 1}, and then, by the earlier considerations, we conclude that the sequence {Y_n, n ≥ 1} does not satisfy the generalized Anscombe condition (A′) with the norming sequence {k_n, n ≥ 1} either.
Now we shall prove that the sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {k_n, n ≥ 1} and filtering sequence {3^{2^n}, n ≥ 1}. To this end we note that, for every ε > 0 and for every 0 < δ < 1, we have

(2.23)    P[ max_{|k_i^2 − k^2_{3^{2^n}}| ≤ δ k^2_{3^{2^n}}} |Y_i − Y_{3^{2^n}}| ≥ ε ]
    ≤ P[ max_{|k_i^2 − k^2_{3^{2^n}}| ≤ δ k^2_{3^{2^n}}} |S_i − S_{3^{2^n}}| ≥ ε k_{3^{2^n}} / 2 ] + P[ max_{|k_i^2 − k^2_{3^{2^n}}| ≤ δ k^2_{3^{2^n}}} |S_i| |k^2_{3^{2^n}} − k_i^2| / (k_i k^2_{3^{2^n}}) ≥ ε/2 ]
    ≤ P[ max_{k^2_{3^{2^n}} ≤ k_i^2 ≤ (1+δ) k^2_{3^{2^n}}} |S_i^∗ − S^∗_{3^{2^n}}| ≥ ε k_{3^{2^n}} / 2 ] + P[ max_{k^2_{3^{2^n}} ≤ k_i^2 ≤ (1+δ) k^2_{3^{2^n}}} |S_i^∗| ≥ ε k_{3^{2^n}} / (4δ) ] + P[ max_{k^2_{3^{2^n}} ≤ k_i^2 ≤ (1+δ) k^2_{3^{2^n}}} |S_i^∗∗| ≥ ε k_{3^{2^n}} / (4δ) ],

since k^2_{3^{2^n}−1} ∼ 3^{2^n} and k^2_{3^{2^n}} ∼ 2 · 3^{2^n} for sufficiently large n, whence

{ i ∈ N : |k_i^2 − k^2_{3^{2^n}}| ≤ δ k^2_{3^{2^n}} } = { i ∈ N : k^2_{3^{2^n}} ≤ k_i^2 ≤ (1+δ) k^2_{3^{2^n}} }

for sufficiently large n, and since for i with k^2_{3^{2^n}} ≤ k_i^2 ≤ (1+δ) k^2_{3^{2^n}} we have

S_i^∗∗ − S^∗∗_{3^{2^n}} = ∑_{j=3^{2^n}+1, j∈N^∗∗}^{i} X_j = 0    a.s.

for sufficiently large n. As S_n^∗ is a sum of independent random variables with finite variances, then, by Kolmogorov's inequality, the first term on the right-hand side of (2.23) is less than or equal to 4δ/ε^2, the second one is less than or equal to 16δ^2(1+δ) k^2_{3^{2^n}} / (ε^2 k^2_{3^{2^n}}) ≤ 32δ/ε^2, and the third term on the right-hand side of (2.23) is equal to

P[ |S^∗_{2^{2^n}−1}| ≥ ε k_{3^{2^n}} / (4δ) ] ≤ 16δ^2 k^2_{2^{2^n}−1} / (ε^2 k^2_{3^{2^n}}) −→ 0    (n → ∞).

Hence, for every ε > 0 and for every 0 < δ < 1, we have

lim sup_{n→∞} P[ max_{|k_i^2 − k^2_{3^{2^n}}| ≤ δ k^2_{3^{2^n}}} |Y_i − Y_{3^{2^n}}| ≥ ε ] ≤ 36δ/ε^2,
which proves that the sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {k_n, n ≥ 1} and filtering sequence {3^{2^n}, n ≥ 1}. Similarly it can be proved that the sequence {Y_n, n ≥ 1} satisfies (A∗) with the norming sequence {k_n, n ≥ 1} and filtering sequence {3^{2^n} − 1, n ≥ 1}. Let us further notice that

(2.24)    Y_{3^{2^n}−1} =⇒ N_{0,1},    Y_{3^{2^n}} =⇒ X    (n → ∞),    and    k^2_{3^{2^n}} / k^2_{3^{2^n}−1} −→ 2    (n → ∞),

where X is a random variable with the characteristic function

ϕ_X(t) = cos(t/√2) e^{−t^2/4}.

Then, by Corollary 2.13, Y_{N_n} =⇒ N_{0,1} (n → ∞) for every {N_n, n ≥ 1} satisfying k^2_{N_n} / k^2_{a_n} −→ 1 in probability (n → ∞), where {a_n, n ≥ 1} ⊂ {3^{2^n} − 1, n ≥ 1} (a_n → ∞, n → ∞), and Y_{N_n} =⇒ X (n → ∞) for every {N_n, n ≥ 1} satisfying k^2_{N_n} / k^2_{a_n} −→ 1 in probability (n → ∞), where {a_n, n ≥ 1} ⊂ {3^{2^n}, n ≥ 1} (a_n → ∞, n → ∞).
Let us still notice that if A is an event independent of X_k, k ≠ 2^{2^n} (n ≥ 0, k ≥ 1), and

(2.25)    υ_n = 3^{2^n} − 1 on A,    υ_n = 3^{2^n} on A^c,

then P[Y_{υ_n} < x] → Φ(x) P(A) + P[X < x] P(A^c), n → ∞, i.e.

(2.26)    Y_{υ_n} =⇒ N_{0,1} I(A) + X I(A^c)    (n → ∞),

and the sequence {Y_n, n ≥ 1} satisfies the Anscombe random condition (A∗) with the norming sequence {k_n, n ≥ 1} and filtering sequence {υ_n, n ≥ 1}. Thus, by Corollary 2.14,

Y_{N_n} =⇒ N_{0,1} I(A) + X I(A^c)    (n → ∞)

for every {N_n, n ≥ 1} satisfying k^2_{N_n} / k^2_{υ_{a_n}} −→ 1 in probability (n → ∞), where {a_n, n ≥ 1} is a sequence of positive integers with a_n → ∞ (n → ∞). That fact implies the statements after (2.24): putting A = Ω or, equivalently, υ_n = 3^{2^n} − 1 a.s. (n ≥ 1), we obtain Y_{N_n} =⇒ N_{0,1} (n → ∞) for every {N_n, n ≥ 1} satisfying k^2_{N_n} / k^2_{a_n} −→ 1 in probability (n → ∞), where {a_n, n ≥ 1} ⊂ {3^{2^n} − 1, n ≥ 1} (a_n → ∞, n → ∞); putting, however, A = ∅ or, equivalently, υ_n = 3^{2^n} a.s. (n ≥ 1), we obtain Y_{N_n} =⇒ X (n → ∞) for every {N_n, n ≥ 1} satisfying k^2_{N_n} / k^2_{a_n} −→ 1 in probability (n → ∞), where {a_n, n ≥ 1} ⊂ {3^{2^n}, n ≥ 1} (a_n → ∞, n → ∞).
The following example proves Remark 2.9.
Home Page
Title Page
EXAMPLE 2.20. [50] Let U, Z_1, Z_2, . . . be independent random variables such that U has a uniform distribution on the interval (0, 1) and, for each n ≥ 1, Z_n has a normal distribution, N(0, 1), with mean zero and variance one. Let

Y_n = n^{−1/2} ∑_{i=1}^{n} Z_i    (n ≥ 1),

J(ω) = { [2^n U(ω)] + 1, 1 ≤ n < ∞ },

and

Y′_n = Y_n I[n ∉ J]    (n ≥ 1).

The sequence {Y′_n, n ≥ 1} satisfies (A) [50] and then, by Lemma 2.8, it satisfies as well (A∗) with norming sequence {√n, n ≥ 1} and any filtering sequence {N_n, n ≥ 1} of positive, integer-valued random variables satisfying (2.3) (i.e. (2.4) with k_n^2 = n, n ≥ 1).
Let us put N_n = [2^n U], n ≥ 1. The sequence {N_n, n ≥ 1} does not satisfy (2.4) but satisfies (2.14). Indeed, putting e.g. τ_n = [2^n U] + 1, n ≥ 1, we have
(2.14′)    N_n / τ_n −→ 1 in probability    (n → ∞),

i.e. condition (2.14) with k_n^2 = n (a_n = n, n ≥ 1). Now we shall prove that the sequence {Y′_n, n ≥ 1} does not satisfy (A∗) with the norming sequence {√n, n ≥ 1} and filtering sequence {N_n, n ≥ 1}. This fact will prove Remark 2.9.
It follows from Theorem 1.1 that

(2.27)    Y_{N_n} =⇒ N_{0,1}    (n → ∞),    where N_n = [2^n U] (n ≥ 1),

since Y_n =⇒ N_{0,1} (n → ∞), N_n −→ ∞ in probability (n → ∞) and, for every n ≥ 1, the random variable N_n is independent of Z_i (i ≥ 1). Furthermore, by the construction of the set J(ω), we have
P[ Y′_{N_n} ≠ Y_{N_n} ] = P[N_n ∈ J] ≤ P[U < (n − 1)2^{−n}]
    + ∑_{k=n}^{2^n} P[ (k − 1)2^{−n} ≤ U < k 2^{−n}; [2^n U] ∈ {[2^m U] + 1, 1 ≤ m < ∞} ]
    ≤ o_n(1) + ∑_{k=n}^{2^n} P[ (k − 1)2^{−n} ≤ U < k 2^{−n}; k − 2 ≤ 2^m U < k − 1 for some m ≥ 1 ]
    ≤ o_n(1) + ∑_{k=n}^{2^n} ∑_{m=log_2 2^n(1−2/n)}^{n} P[ (k − 1)2^{−n} ≤ U < k 2^{−n}; (k − 2)2^{−m} ≤ U < (k − 1)2^{−m} ]
    ≤ o_n(1) + ∑_{m=log_2 2^n(1−2/n)}^{n} P[ (n − 2)2^{−m} ≤ U < (2^n − 1)2^{−m} ]
    = o_n(1) + (2^n − n + 1) ∑_{m=log_2 2^n(1−2/n)}^{n} 2^{−m}
    = o_n(1) + ( 2^{−n+1}(2^n − n + 1) / (1 − 2/n) ) · (2/n) −→ 0    (n → ∞),
since $o_n(1) = P[U < (n-1)2^{-n}] \to 0$ $(n \to \infty)$. (It should be noticed that the last equality above follows from the well-known property
$$\sum_{m=r}^{s} 2^{-m} = 2^{-r}\bigl(1 + 2^{-1} + \ldots + 2^{-s+r}\bigr) = 2^{-r}\,\frac{1 - (\tfrac12)^{s-r}}{\tfrac12},$$
used with $s = n$ and $r = \log_2 2^n(1-2/n)$. Since $s - r = \log_2 n/(n-2)$, in a straightforward manner we obtain
$$\sum_{m=r}^{n} 2^{-m} = 2\,\frac{2^{-n}}{1-2/n}\bigl(1 - 2^{-\log_2 n/(n-2)}\bigr) = \frac{2^{-n+1}}{1-2/n}\cdot\frac{2}{n},$$
and thus the desired equality follows.) This fact and (2.27) imply (by the law-equivalence lemma: if $Y_n - Y_n' \xrightarrow{P} 0$ or $P[Y_n \ne Y_n'] \to 0$, then the sequences $L(Y_n)$ and $L(Y_n')$ of laws are equivalent) that $Y_{N_n}' \Longrightarrow N_{0,1}$ $(n \to \infty)$. Since (2.14') holds, and $Y_{\tau_n}' = 0$ a.s. $(n \ge 1)$, where $\tau_n = [2^n U] + 1$ $(n \ge 1)$, we conclude that the sequence $\{Y_n', n \ge 1\}$ does not satisfy (A$^*$) with the norming sequence $\{\sqrt{n}, n \ge 1\}$ and filtering sequence $\{N_n, n \ge 1\}$, where
(2.28) $N_n = [2^n U]$, $n \ge 1$.

2.5. An application to random sums of independent random variables
We now show that the given results allow us to extend the random central limit theorem of J. R. Blum, D. L. Hanson, J. I. Rosenblatt [25] and J. A. Mogyoródi [114] to sequences of independent, nonidentically distributed random variables.

THEOREM 2.21. [98] Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables with $EX_k = 0$, $EX_k^2 = \sigma_k^2 < \infty$, $k \ge 1$, such that
(2.29) $Y_n \doteq S_n/s_n \Longrightarrow \mu$, $n \to \infty$,
where $S_n = \sum_{k=1}^{n} X_k$, $s_n^2 = \sum_{k=1}^{n}\sigma_k^2$, $n \ge 1$, and $s_n^2 \to \infty$ as $n \to \infty$. Suppose that $\lambda$ is a positive random variable ($P[0 < \lambda < \infty] = 1$), and that $\{\upsilon_n, n \ge 1\}$ is a sequence of positive integer-valued random variables independent of $\{\lambda, X_k, k \ge 1\}$ with $\upsilon_n \xrightarrow{P} \infty$, $n \to \infty$, such that for any given $\varepsilon > 0$ and some constant $r > 0$
(2.30) $\lim_{c\to 0^+}\limsup_{n\to\infty} P\Bigl[\Bigl|\frac{s^2_{[(\lambda+c)\upsilon_n]}}{s^2_{\upsilon_n}} - \lambda^r\Bigr| \ge \varepsilon\Bigr] = 0.$
If $\{N_n, n \ge 1\}$ is a sequence of positive integer-valued random variables such that
(2.31) $s^2_{N_n}/s^2_{\upsilon_n} \xrightarrow{P} \lambda^r$, $n \to \infty$,
then
(2.32) $Y_{N_n} = S_{N_n}/s_{N_n} \Longrightarrow \mu$, $n \to \infty$.
REMARK 2.22. Note that condition (2.30) implies that for any given $\varepsilon > 0$
(2.30') $\lim_{c\to 0^+}\limsup_{n\to\infty} P\bigl[|s^2_{[(\lambda+c)\upsilon_n]}/s^2_{[\lambda\upsilon_n]} - 1| \ge \varepsilon\bigr] = 0.$
Indeed, for any $M > 0$ and $N > 0$ we have
$$\begin{aligned}
P\bigl[|s^2_{[(\lambda+c)\upsilon_n]}/s^2_{[\lambda\upsilon_n]} - 1| \ge \varepsilon\bigr]
&\le P\bigl[|s^2_{[(\lambda+c)\upsilon_n]}/s^2_{\upsilon_n} - \lambda^r|\,(s^2_{\upsilon_n}/s^2_{[\lambda\upsilon_n]}) \ge \varepsilon/2\bigr] + P\bigl[\lambda^r|s^2_{\upsilon_n}/s^2_{[\lambda\upsilon_n]} - 1/\lambda^r| \ge \varepsilon/2\bigr] \\
&\le P\bigl[s^2_{\upsilon_n}/s^2_{[\lambda\upsilon_n]} > M\bigr] + P\bigl[|s^2_{[(\lambda+c)\upsilon_n]}/s^2_{\upsilon_n} - \lambda^r| \ge \varepsilon/(2M)\bigr] + P[\lambda^r > N] + P\bigl[|s^2_{\upsilon_n}/s^2_{[\lambda\upsilon_n]} - 1/\lambda^r| \ge \varepsilon/(2N)\bigr].
\end{aligned}$$
Since condition (2.30) with $c = 0$ implies that
$$\limsup_{n\to\infty} P\bigl[s^2_{\upsilon_n}/s^2_{[\lambda\upsilon_n]} > M\bigr] = P[\lambda^r < 1/M]$$
and that
$$\limsup_{n\to\infty} P\bigl[|s^2_{\upsilon_n}/s^2_{[\lambda\upsilon_n]} - 1/\lambda^r| \ge \varepsilon/(2N)\bigr] = 0,$$
for any $M > 0$ and $N > 0$ we obtain
$$\lim_{c\to 0^+}\limsup_{n\to\infty} P\bigl[|s^2_{[(\lambda+c)\upsilon_n]}/s^2_{[\lambda\upsilon_n]} - 1| \ge \varepsilon\bigr] \le P[\lambda^r < 1/M] + P[\lambda^r > N].$$
Letting now $M \to \infty$ and $N \to \infty$
yields (2.30'). Moreover, for every $\varepsilon > 0$ and $0 < \delta < 1/2$ we have (cf. Remark 2.9)
$$\limsup_{m\to\infty} P\Bigl[\max_{|s_i^2 - s_m^2| \le \delta s_m^2} |Y_i - Y_m| \ge \varepsilon\Bigr] \le 8\delta(4+\delta)/\varepsilon^2.$$
Hence and by Lemma 2.23, for every $\varepsilon > 0$ and any given $M > 0$, we see that
$$\begin{aligned}
\limsup_{n\to\infty} P\bigl[|Y_{[\lambda_m\upsilon_n]}(1 - s_{[\lambda_m\upsilon_n]}/s_{[\lambda\upsilon_n]})| \ge \varepsilon/2\bigr]
&\le \limsup_{n\to\infty} P\bigl[|Y_{[\lambda_m\upsilon_n]}| > M\bigr] + \limsup_{n\to\infty} P\bigl[|s_{[\lambda_m\upsilon_n]}/s_{[\lambda\upsilon_n]} - 1| \ge \varepsilon/(2M)\bigr] \\
&\le \mu\{x : |x| > M\} + \limsup_{n\to\infty} P\bigl[|s^2_{[\lambda_m\upsilon_n]}/s^2_{[\lambda\upsilon_n]} - 1| \ge \varepsilon/(2M)\bigr].
\end{aligned}$$
Moreover, since for every $n \ge 1$
$$1 \le s^2_{[\lambda_m\upsilon_n]}/s^2_{[\lambda\upsilon_n]} \le s^2_{[(\lambda+2^{-m})\upsilon_n]}/s^2_{[\lambda\upsilon_n]} \quad \text{a.s.},$$
then, by (2.30'), we get
$$\lim_{m\to\infty}\limsup_{n\to\infty} P\bigl[|s^2_{[\lambda_m\upsilon_n]}/s^2_{[\lambda\upsilon_n]} - 1| \ge \varepsilon/(2M)\bigr] = 0.$$
Hence
$$\lim_{m\to\infty}\limsup_{n\to\infty} P\bigl[|Y_{[\lambda_m\upsilon_n]}(1 - s_{[\lambda_m\upsilon_n]}/s_{[\lambda\upsilon_n]})| \ge \varepsilon/2\bigr] \le \mu\{x : |x| > M\},$$
which implies (letting $M \to \infty$) that
(2.39) $\lim_{m\to\infty}\limsup_{n\to\infty} P\bigl[|Y_{[\lambda_m\upsilon_n]}(1 - s_{[\lambda_m\upsilon_n]}/s_{[\lambda\upsilon_n]})| \ge \varepsilon/2\bigr] = 0.$
Now we see, using (2.30) and (2.30'), that for any given $\delta > 0$
(2.40)
$$\begin{aligned}
\lim_{m\to\infty}&\limsup_{n\to\infty} P\bigl[|S_{[\lambda_m\upsilon_n]} - S_{[\lambda\upsilon_n]}| \ge \varepsilon s_{[\lambda\upsilon_n]}/2\bigr] \\
&\le \lim_{m\to\infty}\limsup_{n\to\infty} P\bigl[m2^{-m} \le \lambda^r < m,\ |s^2_{[\lambda\upsilon_n]}/s^2_{\upsilon_n} - \lambda^r| \le 2^{-m},\ |s^2_{[\lambda_m\upsilon_n]}/s^2_{[\lambda\upsilon_n]} - 1| \le \delta,\ |S_{[\lambda_m\upsilon_n]} - S_{[\lambda\upsilon_n]}| \ge \varepsilon s_{[\lambda\upsilon_n]}/2\bigr] \\
&\le \lim_{m\to\infty}\limsup_{n\to\infty} P\Bigl[m2^{-m} \le \lambda^r < m,\ |s^2_{[\lambda\upsilon_n]}/s^2_{\upsilon_n} - \lambda^r| \le 2^{-m},\ \max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |S_i - S_{[\lambda\upsilon_n]}| \ge \varepsilon s_{[\lambda\upsilon_n]}/2\Bigr] \\
&\le \lim_{m\to\infty}\limsup_{n\to\infty} P\Bigl[m2^{-m} \le \lambda^r < m,\ (\lambda^r - 2^{-m})s^2_{\upsilon_n} \le s^2_{[\lambda\upsilon_n]} \le (\lambda^r + 2^{-m})s^2_{\upsilon_n},\ \max_{(1-\delta)(\lambda^r-2^{-m})s^2_{\upsilon_n} \le s_i^2 \le (1+\delta)(\lambda^r+2^{-m})s^2_{\upsilon_n}} |S_i - S_{[\lambda\upsilon_n]}| \ge \varepsilon s_{\upsilon_n}\sqrt{\lambda^r - 2^{-m}}/2\Bigr] \\
&\le \lim_{m\to\infty}\limsup_{n\to\infty}\sum_{k=m}^{m2^m-1} P\Bigl[k2^{-m} \le \lambda^r < (k+1)2^{-m},\ |s^2_{[\lambda\upsilon_n]}/s^2_{\upsilon_n} - \lambda^r| \le 2^{-m},\ \max_{(1-\delta)(k-1)2^{-m}s^2_{\upsilon_n} \le s_i^2,\,s_j^2 \le (1+\delta)(k+1)2^{-m}s^2_{\upsilon_n}} |S_i - S_j| \ge \varepsilon s_{\upsilon_n}\sqrt{(k-1)2^{-m}}/2\Bigr] \\
&= \lim_{m\to\infty}\limsup_{n\to\infty}\sum_{k=m}^{m2^m-1}\sum_{b=1}^{\infty} P[\upsilon_n = b]\,P\Bigl[k2^{-m} \le \lambda^r < (k+1)2^{-m},\ \max_{(1-\delta)(k-1)2^{-m}s_b^2 \le s_i^2,\,s_j^2 \le (1+\delta)(k+1)2^{-m}s_b^2} |S_i - S_j| \ge \varepsilon s_b\sqrt{(k-1)2^{-m}}/2\Bigr] \\
&\le \lim_{m\to\infty}\limsup_{n\to\infty}\sum_{k=m}^{m2^m-1}\sum_{b=1}^{\infty} P[\upsilon_n = b]\,P\Bigl[k2^{-m} \le \lambda^r < (k+1)2^{-m},\ \max_{(1-\delta)(k-1)2^{-m}s_b^2 \le s_i^2 \le (1+\delta)(k+1)2^{-m}s_b^2} |S_i - S_{T_b^1}| \ge \varepsilon s_b\sqrt{(k-1)2^{-m}}/4\Bigr] \\
&\le 2\lim_{m\to\infty}\sum_{k=m}^{m2^m-1} P\bigl[k2^{-m} \le \lambda^r < (k+1)2^{-m}\bigr]\,\limsup_{n\to\infty}\sum_{b=1}^{\infty} P[\upsilon_n = b]\;P\Bigl[\max_{T_b^1 \le i \le T_b^2} |S_i - S_{T_b^1}| \ge \varepsilon s_b\sqrt{(k-1)2^{-m}}/4 \,\Big|\, k2^{-m} \le \lambda^r < (k+1)2^{-m}\Bigr],
\end{aligned}$$
where
$$T_b^1 = T_b^1(\delta, k, m) = \min\{i : (1-\delta)(k-1)2^{-m}s_b^2 \le s_i^2\}, \qquad T_b^2 = T_b^2(\delta, k, m) = \max\{i : s_i^2 \le (1+\delta)(k+1)2^{-m}s_b^2\}.$$
But by Lemma 2.23 and Kolmogorov's inequality we get
$$\begin{aligned}
\limsup_{b\to\infty} P\Bigl[\max_{T_b^1 \le i \le T_b^2} |S_i - S_{T_b^1}| \ge \varepsilon s_b\sqrt{(k-1)2^{-m}}/4 \,\Big|\, k2^{-m} \le \lambda^r < (k+1)2^{-m}\Bigr]
&= \limsup_{b\to\infty} P\Bigl[\max_{T_b^1 \le i \le T_b^2} |S_i - S_{T_b^1}| \ge \varepsilon s_b\sqrt{(k-1)2^{-m}}/4\Bigr] \\
&\le 16\limsup_{b\to\infty} E(S_{T_b^2} - S_{T_b^1})^2\big/\bigl(\varepsilon^2 s_b^2(k-1)2^{-m}\bigr) \\
&\le 16\limsup_{b\to\infty}\bigl\{(1+\delta)(k+1)2^{-m}s_b^2 - (1-\delta)(k-1)2^{-m}s_b^2\bigr\}\big/\bigl(\varepsilon^2 s_b^2(k-1)2^{-m}\bigr) \\
&\le 16(2k\delta + \delta + 3)/(\varepsilon^2(k-1)) \le 64\delta/\varepsilon^2 + 48/((m-1)\varepsilon^2)
\end{aligned}$$
for $m \le k \le m2^m - 1$, $m \ge 3$. Therefore, using the Toeplitz lemma [107, p. 238], we get (for $m \ge 3$)
$$\limsup_{n\to\infty}\sum_{b=1}^{\infty} P[\upsilon_n = b]\,P\Bigl[\max_{T_b^1 \le i \le T_b^2} |S_i - S_{T_b^1}| \ge \varepsilon s_b\sqrt{(k-1)2^{-m}}/4 \,\Big|\, k2^{-m} \le \lambda^r < (k+1)2^{-m}\Bigr] \le 64\delta/\varepsilon^2 + 48/((m-1)\varepsilon^2)$$
for $m \le k \le m2^m - 1$,
and hence, by (2.40), we have
$$\lim_{m\to\infty}\limsup_{n\to\infty} P\bigl[|S_{[\lambda_m\upsilon_n]} - S_{[\lambda\upsilon_n]}| \ge \varepsilon s_{[\lambda\upsilon_n]}/2\bigr] \le 128\delta/\varepsilon^2.$$
Letting now $\delta \to 0$, we have
$$\lim_{m\to\infty}\limsup_{n\to\infty} P\bigl[|S_{[\lambda_m\upsilon_n]} - S_{[\lambda\upsilon_n]}| \ge \varepsilon s_{[\lambda\upsilon_n]}/2\bigr] = 0,$$
which together with (2.39) implies (2.38). The proof of Lemma 2.27 is complete.
PROOF OF LEMMA 2.28. Note that for any given $\varepsilon > 0$ and $\delta > 0$, we have
(2.41)
$$\begin{aligned}
P\Bigl[\max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |Y_i - Y_{[\lambda\upsilon_n]}| \ge \varepsilon\Bigr]
&\le P\Bigl[\max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |S_i - S_{[\lambda\upsilon_n]}|/s_i \ge \varepsilon/2\Bigr] + P\Bigl[\max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |S_{[\lambda\upsilon_n]}|\,|s_i^{-1} - s_{[\lambda\upsilon_n]}^{-1}| \ge \varepsilon/2\Bigr] \\
&\le P\Bigl[\max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |S_i - S_{[\lambda\upsilon_n]}| \ge \varepsilon\sqrt{1-\delta}\,s_{[\lambda\upsilon_n]}/2\Bigr] + P\bigl[|Y_{[\lambda\upsilon_n]}| \ge \varepsilon\sqrt{1-\delta}/(2\delta)\bigr].
\end{aligned}$$
By Lemma 2.27 we know that
$$\limsup_{n\to\infty} P\bigl[|Y_{[\lambda\upsilon_n]}| \ge \varepsilon\sqrt{1-\delta}/(2\delta)\bigr] = \mu\{x : |x| \ge \varepsilon\sqrt{1-\delta}/(2\delta)\}.$$
Letting $\delta \to 0$, we get
(2.42) $\lim_{\delta\to 0}\limsup_{n\to\infty} P\bigl[|Y_{[\lambda\upsilon_n]}| \ge \varepsilon\sqrt{1-\delta}/(2\delta)\bigr] = 0.$
Following the considerations of the proof of Lemma 2.27 we can show that for any given $\varepsilon > 0$, $\delta > 0$ and $m \ge 3$
$$\limsup_{n\to\infty} P\Bigl[\max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |S_i - S_{[\lambda\upsilon_n]}| \ge \varepsilon\sqrt{1-\delta}\,s_{[\lambda\upsilon_n]}/2\Bigr] \le P[\lambda^r < m2^{-m}] + P[\lambda^r > m] + 128\delta/\varepsilon^2(1-\delta) + 96/\varepsilon^2(1-\delta)(m-1).$$
Letting $m \to \infty$, we get for every $\varepsilon > 0$ and $0 < \delta \le 1/2$
(2.43) $\limsup_{n\to\infty} P\Bigl[\max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |S_i - S_{[\lambda\upsilon_n]}| \ge \varepsilon\sqrt{1-\delta}\,s_{[\lambda\upsilon_n]}/2\Bigr] \le 256\delta/\varepsilon^2 \longrightarrow 0$ as $\delta \to 0$.
Combining (2.41), (2.42) and (2.43) we get the desired result: for every $\varepsilon > 0$ there exists a $\delta > 0$ such that
$$\limsup_{n\to\infty} P\Bigl[\max_{|s_i^2 - s^2_{[\lambda\upsilon_n]}| \le \delta s^2_{[\lambda\upsilon_n]}} |Y_i - Y_{[\lambda\upsilon_n]}| \ge \varepsilon\Bigr] \le \varepsilon.$$
PROOF OF THEOREM 2.21. It follows from (2.30) and (2.31) that
(2.44) $s^2_{N_n}/s^2_{[\lambda\upsilon_n]} \xrightarrow{P} 1$ as $n \to \infty$.
By (2.35) and Lemma 2.28 we have $Y_{[\lambda\upsilon_n]} \Longrightarrow \mu$, and the sequence $\{Y_n, n \ge 1\}$ satisfies the Anscombe random condition (A$^*$) with the norming sequence $\{s_n, n \ge 1\}$ and filtering sequence $\{[\lambda\upsilon_n], n \ge 1\}$. Thus, by Corollary 2.14, we obtain $Y_{N_n} \Longrightarrow \mu$ as $n \to \infty$ for every sequence $\{N_n, n \ge 1\}$ satisfying (2.44), which is the statement of Theorem 2.21.

COROLLARY 2.29. Let $\{X_k, k \ge 1\}$ be a sequence of independent and identically distributed random variables with $EX_1 = 0$, $EX_1^2 = \sigma^2 > 0$. If $\{N_n, n \ge 1\}$ is a sequence of positive integer-valued random variables such that
(2.45) $N_n/\upsilon_n \xrightarrow{P} \lambda$ as $n \to \infty$,
where $\lambda$ is a positive random variable ($P[0 < \lambda < \infty] = 1$) and $\{\upsilon_n, n \ge 1\}$ is a sequence of positive integer-valued random variables independent of $\{\lambda, X_k, k \ge 1\}$ with $\upsilon_n \xrightarrow{P} \infty$ as $n \to \infty$, then
$$S_{N_n}/\sigma\sqrt{N_n} \Longrightarrow N_{0,1}.$$
P ROOF. It is easy to see that in this case the condition (2.30) holds true with r = 1, while the condition (2.31) is a consequence of (2.45).
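Corollary 2.29 lends itself to a quick numerical sanity check. The sketch below is illustrative only; the concrete choices (iid Rademacher summands so that $\sigma = 1$, $\upsilon_n = n$, and a two-point random index $\lambda \in \{1, 2\}$ with $N_n = \lambda n$, which makes (2.45) hold exactly) are mine, not the text's:

```python
import random
import statistics

def standardized_random_sum(rng, n):
    # λ ∈ {1, 2} with equal probability, independent of the X's
    # (a discrete positive random variable, as in Corollary 2.29)
    lam = rng.choice((1, 2))
    # N_n = λ·υ_n with υ_n = n, so N_n/υ_n -> λ holds exactly
    N = lam * n
    # S_N for iid Rademacher steps (EX = 0, σ² = 1)
    s = sum(rng.choice((-1.0, 1.0)) for _ in range(N))
    return s / N ** 0.5          # S_N / (σ √N)

rng = random.Random(7)
vals = [standardized_random_sum(rng, 200) for _ in range(4000)]
mean, var = statistics.fmean(vals), statistics.pvariance(vals)
# mean ≈ 0 and var ≈ 1, consistent with the N(0,1) limit
```

Note that the index here is genuinely random in the limit: $N_n/n$ does not converge to a constant, yet the standardized random sum is still asymptotically standard normal, which is exactly the point of the corollary.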
We note that in the case when $\lambda$ is a positive random variable having a discrete distribution, the conditions (2.30) and (2.31) in Theorem 2.21 can be replaced by the weaker condition (2.44).

THEOREM 2.30. Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables with $EX_k = 0$, $EX_k^2 = \sigma_k^2 < \infty$, $k \ge 1$, such that
(2.29') $Y_n \doteq S_n/s_n \Longrightarrow \mu$, $n \to \infty$,
where $S_n = \sum_{k=1}^{n} X_k$, $s_n^2 = \sum_{k=1}^{n}\sigma_k^2$, $n \ge 1$, and $s_n^2 \to \infty$ as $n \to \infty$. Let $\{N_n, n \ge 1\}$ be a sequence of positive integer-valued random variables such that
(2.44') $s^2_{N_n}/s^2_{[\lambda\upsilon_n]} \xrightarrow{P} 1$, $n \to \infty$,
where $\lambda$ is a positive random variable having a discrete distribution and $\{\upsilon_n, n \ge 1\}$ is a sequence of positive integer-valued random variables independent of $\{\lambda, X_k, k \ge 1\}$ with $\upsilon_n \xrightarrow{P} \infty$. Then
(2.32') $Y_{N_n} = S_{N_n}/s_{N_n} \Longrightarrow \mu$, $n \to \infty$.
2.6. Applications to the stable convergence

Recall that $Y_n \Longrightarrow \mu$ (stably) if $Y_n \Longrightarrow \mu$ and for every $B \in \mathcal{F}$ with $P(B) > 0$ there exists a probability measure $\mu_B$ such that $Y_n \Longrightarrow \mu_B$ as $n \to \infty$ under the conditional measure $P(\cdot\,|B)$. In the special case where $\mu_B = \mu$ for all $B \in \mathcal{F}$ with $P(B) > 0$ we say that $Y_n \Longrightarrow \mu$ (mixing). As a simple consequence of Theorem 2.3 we get

THEOREM 2.31. [150] The following conditions are equivalent:
(i) $Y_n \Longrightarrow \mu$ (stably) and $\{Y_n, n \ge 1\}$ satisfies $(A_0)$;
(ii) $Y_{N_n} \Longrightarrow \mu$ (stably) for every sequence $\{N_n, n \ge 1\}$ satisfying (2.4).
PROOF. The implication (ii) $\Longrightarrow$ (i) is obvious: put $N_n = n$ to get $Y_n \Longrightarrow \mu$ (stably), and use Theorem 2.3 to see that $\{Y_n, n \ge 1\}$ satisfies $(A_0)$. To prove the reverse implication, fix $B \in \mathcal{F}$ with $P(B) > 0$ and fix $\varepsilon > 0$. Given $\varepsilon' = \varepsilon P(B)$, choose $\delta > 0$ such that
$$\limsup_{n\to\infty} P\Bigl[\max_{|k_i^2 - k_n^2| \le \delta k_n^2} |Y_i - Y_n| \ge \varepsilon'\Bigr] \le \varepsilon'.$$
Then
$$\limsup_{n\to\infty} P\Bigl[\max_{|k_i^2 - k_n^2| \le \delta k_n^2} |Y_i - Y_n| \ge \varepsilon \,\Big|\, B\Bigr] \le \limsup_{n\to\infty} P\Bigl[\max_{|k_i^2 - k_n^2| \le \delta k_n^2} |Y_i - Y_n| \ge \varepsilon\Bigr]\Big/P(B) \le \limsup_{n\to\infty} P\Bigl[\max_{|k_i^2 - k_n^2| \le \delta k_n^2} |Y_i - Y_n| \ge \varepsilon'\Bigr]\Big/P(B) \le \varepsilon,$$
which proves that the sequence $\{Y_n, n \ge 1\}$ satisfies $(A_0)$ under the conditional measure $P(\cdot|B)$. Moreover, by (i), the sequence $\{Y_n, n \ge 1\}$ converges weakly (to a measure $\mu_B$) under the conditional measure $P(\cdot|B)$. Thus, by Theorem 2.3, $Y_{N_n} \Longrightarrow \mu_B$ under $P(\cdot|B)$ for every sequence $\{N_n, n \ge 1\}$ satisfying (2.4). Hence, $Y_{N_n} \Longrightarrow \mu$ (stably) for every $\{N_n, n \ge 1\}$ satisfying (2.4).
REMARK 2.32. We note that if $Y_n \Longrightarrow \mu$ (mixing), not merely stably, then in part (ii) of Theorem 2.31 we obtain $Y_{N_n} \Longrightarrow \mu$ (mixing) for every $\{N_n, n \ge 1\}$ satisfying (2.4).
COROLLARY 2.33. [150] (cf. Corollary 1.29) Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables with zero means and finite variances. Let $S_n = X_1 + \ldots + X_n$, $s_n^2 = \operatorname{Var}(S_n)$. Then the following conditions are equivalent:
(i) $S_n/s_n \Longrightarrow N_{0,1}$;
(ii) $S_{N_n}/s_{N_n} \Longrightarrow N_{0,1}$ (mixing) for every sequence $\{N_n, n \ge 1\}$ of positive, integer-valued random variables such that $s^2_{N_n}/s^2_{a_n} \xrightarrow{P} 1$, where $a_n \to \infty$ are positive integers.

In what follows we shall need the following simple consequence of Theorem 2.12.
THEOREM 2.34. [99] Let $\{\alpha_n, n \ge 1\}$ be a non-decreasing sequence of positive random variables and let $\{\tau_n, n \ge 1\}$ be a sequence of positive, integer-valued random variables such that $\tau_n \xrightarrow{P} \infty$. The following conditions are equivalent:
(i) $Y_{\tau_n} \Longrightarrow \mu$ (stably) and the sequence $\{Y_n, n \ge 1\}$ satisfies the Anscombe random condition (A$^*$) with norming sequence $\{\alpha_n, n \ge 1\}$ and filtering sequence $\{\tau_n, n \ge 1\}$;
(ii) $Y_{N_n} \Longrightarrow \mu$ (stably) for every $\{N_n, n \ge 1\}$ satisfying (2.10), i.e.
$$\alpha^2_{N_n}/\alpha^2_{\tau_{a_n}} \xrightarrow{P} 1,$$
where $a_n \to \infty$ are constants.
Theorem 2.34 together with Remark 1.1.30 yields the following generalization of Corollary 2.33.

COROLLARY 2.35. Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables with zero means and finite variances. Let $S_n = X_1 + \ldots + X_n$, $s_n^2 = \operatorname{Var}(S_n)$. Then the following conditions are equivalent:
(i) $S_n/s_n \Longrightarrow N_{0,1}$;
(ii) $S_{N_n}/s_{N_n} \Longrightarrow N_{0,1}$ (mixing) for every $\{N_n, n \ge 1\}$ such that
$$s^2_{N_n}/s^2_{[\lambda a_n]} \xrightarrow{P} 1 \quad \text{as } n \to \infty,$$
where $\lambda$ is a positive random variable with discrete distribution and $\{a_n, n \ge 1\}$ is a sequence of positive integers with $a_n \to \infty$.
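The mixing statement in (ii) can be made concrete by conditioning on a fixed event. The sketch below is an illustration of my own (not the authors' construction): with iid Rademacher steps, conditioning on $B = \{X_1 = 1\}$ perturbs $S_n/s_n$ by only $1/\sqrt{n}$, so the conditional limit law is still $N(0,1)$ and $P[S_n \le 0 \mid B]$ should be close to $1/2$:

```python
import random

rng = random.Random(11)
n, trials = 100, 10000
hits = 0
for _ in range(trials):
    # conditioning on B = {X_1 = 1} just fixes the first step;
    # the remaining n - 1 steps are unaffected by the conditioning
    s = 1 + sum(rng.choice((-1, 1)) for _ in range(n - 1))
    hits += (s <= 0)
p = hits / trials   # Monte Carlo estimate of P[S_n <= 0 | B]
```

For an unconditioned symmetric walk this probability is $1/2$ by symmetry; the point of the mixing property is that the conditioning event asymptotically does not change it.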
CHAPTER 3

On a Robbins' type theorem

3.1. The classical results
Let $\{X_k, k \ge 1\}$ be a sequence of independent, identically distributed random variables with a common distribution function $F$. For each $k \ge 1$, set
$$a \doteq EX_k = \int_{-\infty}^{+\infty} x\,dF(x), \qquad c^2 \doteq \operatorname{Var}(X_k) = \int_{-\infty}^{+\infty} x^2\,dF(x) - a^2, \qquad 0 < c^2 < +\infty, \qquad \tilde X_k = X_k - a.$$
Let $\{\upsilon_n, n \ge 1\}$ be a sequence of positive, integer-valued random variables, independent of $\{X_k, k \ge 1\}$. Assume that the distribution function of $\upsilon_n$ is well defined by the values $p_{nk}$, $k \ge 1$, where
$$p_{nk} = P[\upsilon_n = k], \quad k \ge 1; \qquad \sum_{k=1}^{\infty} p_{nk} = 1.$$
Put
$$\alpha_n \doteq E\upsilon_n = \sum_{k=1}^{\infty} k\,p_{nk}, \qquad \beta_n^2 \doteq \operatorname{Var}(\upsilon_n) = \sum_{k=1}^{\infty} k^2 p_{nk} - \alpha_n^2,$$
$$g_n(t) \doteq E\exp\{\dot\imath t(\upsilon_n - \alpha_n)/\beta_n\} = \sum_{k=1}^{\infty} \exp\{\dot\imath t(k - \alpha_n)/\beta_n\}\,p_{nk}.$$
Under these assumptions on $\upsilon_n$, the distribution functions of $S_{\upsilon_n} \doteq X_1 + \ldots + X_{\upsilon_n}$ and $\tilde S_{\upsilon_n} \doteq \tilde X_1 + \ldots + \tilde X_{\upsilon_n}$ depend on $\upsilon_n$, and
$$ES_{\upsilon_n} = \sum_{k=1}^{\infty} E(I_{[\upsilon_n=k]}S_k) = \sum_{k=1}^{\infty}(ES_k)\,p_{nk} = \sum_{k=1}^{\infty} ak\,p_{nk} = a\alpha_n,$$
$$E\tilde S_{\upsilon_n} = E(S_{\upsilon_n} - a\upsilon_n) = ES_{\upsilon_n} - a\alpha_n = 0,$$
$$\operatorname{Cov}(\tilde S_{\upsilon_n}, \upsilon_n) = E(\upsilon_n\tilde S_{\upsilon_n}) - (E\upsilon_n)(E\tilde S_{\upsilon_n}) = E(\upsilon_n\tilde S_{\upsilon_n}) = \sum_{k=1}^{\infty} E(I_{[\upsilon_n=k]}k\tilde S_k) = \sum_{k=1}^{\infty} k(E\tilde S_k)\,p_{nk} = 0,$$
$$\sigma_n^2 \doteq \operatorname{Var}(S_{\upsilon_n}) = \operatorname{Var}(\tilde S_{\upsilon_n} + a\upsilon_n) = \operatorname{Var}(\tilde S_{\upsilon_n}) + \operatorname{Var}(a\upsilon_n) + 2\operatorname{Cov}(\tilde S_{\upsilon_n}, a\upsilon_n) = \operatorname{Var}(\tilde S_{\upsilon_n}) + \operatorname{Var}(a\upsilon_n)$$
$$= E(\tilde S_{\upsilon_n})^2 + a^2\operatorname{Var}(\upsilon_n) = \sum_{k=1}^{\infty}(E\tilde S_k^2)\,p_{nk} + a^2\beta_n^2 = \sum_{k=1}^{\infty} c^2 k\,p_{nk} + a^2\beta_n^2 = c^2\alpha_n + a^2\beta_n^2,$$
$$\begin{aligned}
f_n(t) \doteq E\exp\{\dot\imath t(S_{\upsilon_n} - ES_{\upsilon_n})/\sigma_n\} &= E\exp\{\dot\imath t(S_{\upsilon_n} - a\alpha_n)/\sigma_n\} \\
&= \sum_{k=1}^{\infty}\exp\{-\dot\imath t a\alpha_n/\sigma_n\}\,\bigl(E\exp\{\dot\imath tX_1/\sigma_n\}\bigr)^k p_{nk} \\
&= \sum_{k=1}^{\infty}\exp\{\dot\imath t a(k - \alpha_n)/\sigma_n\}\,\bigl(E\exp\{\dot\imath t(X_1 - a)/\sigma_n\}\bigr)^k p_{nk} \\
&= E\bigl[\exp\{\dot\imath t a(\upsilon_n - \alpha_n)/\sigma_n\}\,\tilde\varphi(t/\sigma_n)^{\upsilon_n}\bigr],
\end{aligned}$$
where
$$\tilde\varphi(t) \doteq Ee^{\dot\imath t\tilde X_1} = Ee^{\dot\imath t(X_1 - a)} = \int_{-\infty}^{+\infty} e^{\dot\imath tx}\,dF(x + a).$$
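The variance decomposition $\sigma_n^2 = c^2\alpha_n + a^2\beta_n^2$ derived above is exact whenever $\upsilon_n$ is independent of the $X_k$ and the second moments exist, so it can be checked by simulation. In the sketch below the concrete choices (shifted Rademacher summands with $a = 1/2$, $c = 1$, and $\upsilon_n$ uniform on $\{1, \ldots, 2n-1\}$, so that $\alpha_n = n$ and $\beta_n^2 = n(n-1)/3$) are mine, made only for illustration:

```python
import random
import statistics

def empirical_var_random_sum(a, c, n, trials, seed):
    rng = random.Random(seed)
    sums = []
    for _ in range(trials):
        # υ_n uniform on {1,...,2n-1}: E υ_n = n, Var υ_n = n(n-1)/3
        nu = rng.randint(1, 2 * n - 1)
        # X_k = a + c·(Rademacher): EX_k = a, Var X_k = c²
        sums.append(sum(a + c * rng.choice((-1.0, 1.0)) for _ in range(nu)))
    return statistics.pvariance(sums)

a, c, n = 0.5, 1.0, 30
alpha_n = n
beta2_n = n * (n - 1) / 3
theory = c * c * alpha_n + a * a * beta2_n      # σ_n² = c²α_n + a²β_n²
emp = empirical_var_random_sum(a, c, n, trials=20000, seed=1)
```

The empirical variance of the random sum should agree with the closed form to within Monte Carlo error.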
Due to Robbins [140] we have the following version of the central limit theorem.

THEOREM 3.1 (First Robbins Theorem). If $\sigma_n^2 \to \infty$ and
(3.1) $\beta_n = o(\sigma_n^2)$ as $n \to \infty$,
then for all real $t$
(3.2) $\lim_{n\to\infty} f_n(t) = \lim_{n\to\infty} g_n(t\operatorname{sgn}(a)\,d_n)\exp\{-t^2(1 - d_n^2)/2\},$
where $d_n \doteq (a^2\beta_n^2/\sigma_n^2)^{1/2} = |a|\beta_n/\sigma_n = a\operatorname{sgn}(a)\beta_n/\sigma_n$, $0 \le d_n \le 1$.¹

PROOF.² It follows from (3.1) that
$$E\bigl((\upsilon_n - \alpha_n)/\sigma_n^2\bigr)^2 = \beta_n^2/\sigma_n^4 \longrightarrow 0 \quad (n \to \infty),$$
so in view of Chebyshev's inequality, for every $\varepsilon > 0$,
(3.3) $P\bigl[|\upsilon_n - \alpha_n| \ge \varepsilon\sigma_n^2\bigr] \longrightarrow 0 \quad (n \to \infty).$

¹ $\operatorname{sgn}(a) = -1$, $0$ or $1$ according as $a < 0$, $a = 0$ or $a > 0$. Obviously, $a\beta_n/\sigma_n = \operatorname{sgn}(a)\,d_n$.
² The proof presented is a slightly modified version of the original Robbins' proof.
Put
$$\psi_n(t) \doteq g_n(t\operatorname{sgn}(a)\,d_n)\,\tilde\varphi(t/\sigma_n)^{\alpha_n} = E\exp\{\dot\imath t a(\upsilon_n - \alpha_n)/\sigma_n\}\,\tilde\varphi(t/\sigma_n)^{\alpha_n}.$$
Then
$$|f_n(t) - \psi_n(t)| = \bigl|E\,e^{\dot\imath t a(\upsilon_n - \alpha_n)/\sigma_n}\bigl(\tilde\varphi(t/\sigma_n)^{\upsilon_n} - \tilde\varphi(t/\sigma_n)^{\alpha_n}\bigr)\bigr| \le E\bigl|\tilde\varphi(t/\sigma_n)^{\upsilon_n - \alpha_n} - 1\bigr| \le 2P[|\upsilon_n - \alpha_n| > b\sigma_n^2] + E\bigl|\tilde\varphi(t/\sigma_n)^{\upsilon_n - \alpha_n} - 1\bigr|\,I[|\upsilon_n - \alpha_n| \le b\sigma_n^2],$$
whence (for any given $b > 0$)
(3.4) $\lim_{n\to\infty} |f_n(t) - \psi_n(t)| \le \lim_{n\to\infty} E\bigl|\tilde\varphi(t/\sigma_n)^{\upsilon_n - \alpha_n} - 1\bigr|\,I\Bigl[\Bigl|\frac{\upsilon_n - \alpha_n}{\sigma_n^2}\Bigr| \le b\Bigr].$
Since $EX_1 = a$, $\operatorname{Var}(X_1) = c^2 < \infty$ and
$$|e^{\dot\imath u} - 1 - \dot\imath u| \le u^2/2, \quad u \in \mathbb{R}$$
(Burrill, p. 334), it follows from (3.1) that
(3.5) $|\tilde\varphi(t/\sigma_n) - 1| = \Bigl|\int_{-\infty}^{+\infty}\bigl(e^{\dot\imath tx/\sigma_n} - 1 - \dot\imath tx/\sigma_n\bigr)\,dF(x + a)\Bigr| \le t^2 c^2/2\sigma_n^2 \longrightarrow 0, \quad n \to \infty.$
3.1.1. Some consequences of the First Robbins Theorem. Note that if $a = EX_1 = 0$, then $d_n = 0$ for all $n \ge 1$, and the assumption (3.1) reduces to the following one:
(3.6) $c^2\alpha_n \to \infty$ and $\beta_n = o(\alpha_n)$ as $n \to \infty$.
Hence and by the First Robbins Theorem we have

COROLLARY 3.2. If $a = EX_1 = 0$ and (3.6) is satisfied, then
$$\lim_{n\to\infty} f_n(t) = \lim_{n\to\infty} E\exp\bigl\{\dot\imath t S_{\upsilon_n}/\sqrt{c^2\alpha_n}\bigr\} = e^{-t^2/2}.$$
Home Page
Title Page
R EMARK 3.3. In the next subsection we will show that that the assumptions of Corollary 3.2 can be weakened, i.e. instead of (3.6) it will be ? sufficient to assume that P
αn −→ ∞ and
(3.7)
υn /αn −→ 1
as n → ∞ .
Contents
It is easy to see that (3.6) implies (3.7). Indeed, by (3.6) and the Chebyshev’s inequality, for every ε > 0 we have P [|υn − αn | > εαn ] ≤ β2n /ε2 α2n −→ 0
as n → ∞ .
But in (3.7) we do not assume that β2n = Var(υn ) exists (is finite). From the First Robbins Theorem we also immediately get the following results.
JJ
II
J
I
COROLLARY 3.4. If $a = EX_1 \ne 0$ and
(3.8) $\sigma_n \to \infty$ and $\beta_n = o(\sigma_n)$ as $n \to \infty$,
then
(3.9) $\lim_{n\to\infty} f_n(t) = \lim_{n\to\infty} E\exp\{\dot\imath t(S_{\upsilon_n} - a\alpha_n)/\sigma_n\} = e^{-t^2/2}.$
PROOF. Note that (3.8) implies $\beta_n = o(\sigma_n^2)$, i.e. (3.1) holds. It follows from (3.8) that
$$d_n = |a|\beta_n/\sigma_n \longrightarrow 0 \quad \text{as } n \to \infty.$$
Moreover, denoting $Z_n := (\upsilon_n - \alpha_n)d_n/\beta_n$, $n \ge 1$, we see that for every $n \ge 1$
$$EZ_n = 0 \quad \text{and} \quad EZ_n^2 = d_n^2 \longrightarrow 0 \quad (n \to \infty).$$
Chebyshev's inequality implies that $Z_n \xrightarrow{P} 0$. Hence
$$E\exp\{\dot\imath tZ_n\} = \sum_{k=1}^{\infty}\exp\{\dot\imath t(k - \alpha_n)d_n/\beta_n\}\,p_{nk} = g_n(td_n) \longrightarrow 1 \quad \text{as } n \to \infty.$$
Thus, using Theorem 3.1, we get
$$\lim_{n\to\infty} f_n(t) = \lim_{n\to\infty} g_n(td_n)\exp\{-t^2(1 - d_n^2)/2\} = e^{-t^2/2},$$
which proves the desired result (3.9).
3.2. On the limit behaviour of random sums of independent random variables
3.2.1. Introduction and notation. Let $(X_{nk})_{n,k\in\mathbb{N}}$ be a doubly infinite array (DIA) of random variables such that for every $n$ the random variables $X_{nk}$, $k \ge 1$, are independent; let $F_{nk}$ be the distribution function of $X_{nk}$, and let $S_{nk} = \sum_{j=1}^{k} X_{nj}$. We put
$$a_{nk} = EX_{nk} = \int_{-\infty}^{+\infty} x\,dF_{nk}(x), \qquad \sigma_{nk}^2 = \operatorname{Var}(X_{nk}) = \int_{-\infty}^{+\infty} x^2\,dF_{nk}(x) - a_{nk}^2,$$
$$L_{nk} = \sum_{j=1}^{k} a_{nj}, \qquad V_{nk}^2 = \sum_{j=1}^{k}\sigma_{nj}^2, \qquad b_{nk}^2 = \max_{1\le j\le k}\sigma_{nj}^2,$$
while for $Y_{nk} = X_{nk} - a_{nk}$, $k \ge 1$, $n \ge 1$, we write
$$\varphi_{nk}(t) = E\exp\{itY_{nk}\} = \int_{-\infty}^{+\infty}\exp\{itx\}\,dF_{nk}(x + a_{nk}), \qquad f_{nk}(t) = \prod_{j=1}^{k}\varphi_{nj}(t).$$
Now let $\{N_n, n \ge 1\}$ be a sequence of positive integer-valued random variables such that $N_n$ is, for every $n$, independent of $X_{nk}$, $k \ge 1$. We assume that the distribution function of $N_n$ is determined by the values
$$p_{nk} = P[N_n = k], \quad k \ge 1; \qquad \sum_{k=1}^{\infty} p_{nk} = 1.$$
Under these assumptions on $N_n$, the distribution function of $S_{nN_n} = \sum_{k=1}^{N_n} X_{nk}$ depends on $N_n$, and
$$ES_{nN_n} = EL_{nN_n} = \sum_{k=1}^{\infty} L_{nk}\,p_{nk} = A_n, \qquad EV_{nN_n}^2 = \sum_{k=1}^{\infty} V_{nk}^2\,p_{nk} = \rho_n,$$
$$\operatorname{Var}(L_{nN_n}) = \sum_{k=1}^{\infty} L_{nk}^2\,p_{nk} - A_n^2 = \Delta_n^2, \qquad \operatorname{Var}(S_{nN_n}) = \sum_{k=1}^{\infty} V_{nk}^2\,p_{nk} + \sum_{k=1}^{\infty} L_{nk}^2\,p_{nk} - A_n^2 = \rho_n + \Delta_n^2 = \sigma_n^2.$$
Furthermore, let $H$ be a bounded, non-decreasing function such that $H(-\infty) = 0$, $0 \le H(x) \le 1$ for all $x$, and $H(+\infty) = 1$. We write
$$g_t(x) = \begin{cases}(\exp\{itx\} - 1 - itx)/x^2 & \text{for } x \ne 0,\\ -t^2/2 & \text{for } x = 0,\end{cases}$$
(3.10) $f(t) = \exp\Bigl\{\int_{-\infty}^{+\infty} g_t(x)\,dH(x)\Bigr\}.$
Then $f(t)$ is a characteristic function (ch.f.) of an infinitely divisible distribution with zero mean and unit variance (see, e.g., [108, Theorem 5.5.3]).
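Formula (3.10) is easy to evaluate for a discrete mixing function $H$. The sketch below (my own illustration; the function names are not from the text) computes $f(t)$ for an $H$ concentrated on finitely many atoms: a unit mass at $x = 0$ recovers the standard normal characteristic function $e^{-t^2/2}$, while a unit mass at $x = 1$ gives $\exp\{e^{it} - 1 - it\}$, the characteristic function of a centred Poisson law with unit variance:

```python
import cmath

def g_t(t, x):
    # g_t(x) = (e^{itx} - 1 - itx)/x² for x ≠ 0, and g_t(0) = -t²/2
    if x == 0.0:
        return complex(-t * t / 2.0)
    return (cmath.exp(1j * t * x) - 1.0 - 1j * t * x) / (x * x)

def f(t, atoms):
    # (3.10) specialised to a discrete H: atoms = [(x_j, mass_j)], Σ mass_j = 1
    return cmath.exp(sum(mass * g_t(t, x) for x, mass in atoms))

normal_cf = f(1.0, [(0.0, 1.0)])     # = e^{-1/2}, standard normal ch.f. at t = 1
poisson_cf = f(1.0, [(1.0, 1.0)])    # = exp(e^{i} - 1 - i), centred Poisson ch.f.
```

Mixtures of atoms interpolate between these two extremes, which is one way to picture the family of infinitely divisible limit laws parametrized by $H$.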
3.3. A Note on a Katz–Petrov Type Theorem

The note contains a generalization of the well-known Katz–Petrov theorem [123] on the rate of convergence in the central limit theorem. As an application of our result we give nonuniform estimates of the rate of convergence in the random central limit theorem. The results obtained extend and strengthen some results of [4, 21, 31, 101, 120, 121, 146] and [155].

3.3.1. A Katz–Petrov type theorem. Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables such that $EX_k = 0$ and $EX_k^2 = \sigma_k^2 < \infty$ for each $k \ge 1$. Put
$$S_n = \sum_{k=1}^{n} X_k, \qquad s_n^2 = \sum_{k=1}^{n}\sigma_k^2.$$
Let $\mathcal{G}$ be the class of functions $g$, defined for all real $u$, that satisfy the following conditions:
(A) $g$ is nonnegative, even, and nondecreasing in the interval $(0, \infty)$;
(B) $u/g(u)$ does not decrease in $(0, \infty)$.
Note that $\mathcal{G}$ contains the functions $g(u) = |u|^{\delta}$, $0 \le \delta \le 1$; $g(u) = |u|I[|u| < c] + cI[|u| \ge c]$, $c > 0$; and $g(u) = c^{(1-\delta)/2}|u|^{\delta}I[|u| < c^{1/2}] + |u|I[c^{1/2} \le |u| < c] + cI[|u| \ge c]$, $0 \le \delta \le 1$, $c > 0$.

The Katz–Petrov theorem can be stated (in a general form) as follows (cf. [123] or [124, p. 141]).

THEOREM 3.5. Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables such that $EX_k = 0$, $EX_k^2 = \sigma_k^2 < \infty$, $k \ge 1$, and let $\{g_n, n \ge 1\}$ be a sequence of functions from $\mathcal{G}$ such that $EX_k^2 g_n(X_k) < \infty$ for all $1 \le k \le n$, $n \ge 1$. Then there exists a positive universal constant $C$ such that, for all $n \ge 1$ and all real $x$,
(3.11) $\Delta_{n,x} = |P[S_n < xs_n] - \Phi(x)| \le C\sum_{k=1}^{n} EX_k^2 g_n(X_k)\big/[s_n^2 g_n(s_n)].$
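Conditions (A) and (B) are easy to check numerically on a grid. The sketch below is an illustration of my own (the helper name is not from the text): it verifies membership in $\mathcal{G}$ for two of the example functions above, and shows that $g(u) = u^2$, which violates (B), is rejected:

```python
def in_class_G(g, grid, tol=1e-12):
    # grid: increasing positive abscissas; checks (A) and (B) on the grid
    vals = [g(u) for u in grid]
    ratios = [u / g(u) for u in grid]
    nonneg = all(v >= 0 for v in vals)
    even = all(abs(g(u) - g(-u)) <= tol for u in grid)
    nondecreasing = all(a <= b + tol for a, b in zip(vals, vals[1:]))         # (A)
    ratio_nondecr = all(a <= b + tol for a, b in zip(ratios, ratios[1:]))     # (B)
    return nonneg and even and nondecreasing and ratio_nondecr

grid = [i / 10 for i in range(1, 201)]          # 0.1, 0.2, ..., 20.0
g_power = lambda u: abs(u) ** 0.5               # |u|^δ with δ = 1/2
g_trunc = lambda u: min(abs(u), 3.0)            # |u|I[|u|<c] + cI[|u|≥c], c = 3
g_square = lambda u: u * u                      # u/g(u) = 1/u decreases: not in G
```

Condition (B) is what rules out functions growing faster than linearly, which is why the moment condition $EX_k^2 g_n(X_k) < \infty$ stays between the second and third moment.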
We are going to prove the following result.

THEOREM 3.6. Let $\{X_k, k \ge 1\}$ and $\{g_n, n \ge 1\}$ be as in Theorem 3.5. Then there exists a positive universal constant $C$ such that, for all $n \ge 1$ and all real $x$,
(3.12) $\Delta_{n,x} \le C\sum_{k=1}^{n} EX_k^2 g_n(X_k)\big/[s_n^2(1 + |x|)^2 g_n(s_n(1 + |x|))].$

Theorem 3.6 generalizes some results of [21], and at the same time strengthens the main result of [4]. As an application of Theorem 3.6 we give nonuniform estimates of the rate of convergence in the random central limit theorem. One can easily see that Theorem 3.6 also enables us to strengthen some results of [16] and [118]; we do not, however, go into details here.
3.3.2. Nonuniform estimates for random sums. It is well known that there are sequences $\{X_k, k \ge 1\}$ of independent random variables such that $\{S_n/s_n, n \ge 1\}$ does not converge weakly, whereas for every $x$
$$R_{n,x} = |P[S_{N_n} < xs_{N_n}] - \Phi(x)| \to 0 \quad \text{as } n \to \infty,$$
for a sequence $\{N_n, n \ge 1\}$ of positive integer-valued random variables independent of $\{X_k, k \ge 1\}$ (cf. e.g. Example 2.5 and Example 2.17 in Chapter 2). How fast does $R_{n,x}$ tend to $0$ in such cases? From Theorem 3.6 we immediately have

THEOREM 3.7. Let $\{X_k, k \ge 1\}$ and $\{g_n, n \ge 1\}$ be as in Theorem 3.5, and let $\{N_n, n \ge 1\}$ be a sequence of positive integer-valued random variables independent of $\{X_k, k \ge 1\}$. Then
(3.13) $R_{n,x} \le CE\Bigl\{\sum_{k=1}^{N_n} X_k^2 g_{N_n}(X_k)\Big/\bigl[s_{N_n}^2(1 + |x|)^2 g_{N_n}(s_{N_n}(1 + |x|))\bigr]\Bigr\}.$
The following consequences of Theorem 3.7 are sometimes useful.

THEOREM 3.8 (cf. [118], Corollary 6). Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables such that $EX_k = 0$, $EX_k^2 = \sigma_k^2$, $E|X_k|^{2+\delta} = \beta_k^{2+\delta} < \infty$, $k \ge 1$, for some $0 < \delta \le 1$. Then
$$R_{n,x} \le CE\Bigl\{\sum_{k=1}^{N_n}\beta_k^{2+\delta}\Big/\bigl[s_{N_n}^{2+\delta}(1 + |x|)^{2+\delta}\bigr]\Bigr\},$$
and, for all $p > 1/(2+\delta)$,
$$\Bigl(\int_{-\infty}^{+\infty} R_{n,x}^p\,dx\Bigr)^{1/p} \le C_p\,E\Bigl\{\sum_{k=1}^{N_n}\beta_k^{2+\delta}\big/s_{N_n}^{2+\delta}\Bigr\}.$$

PROOF. This follows from (3.13) with $g_n(u) = |u|^{\delta}$, $n \ge 1$.
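The nonuniform bound of Theorem 3.8 can be examined in a small exact computation, with no Monte Carlo at all. The sketch below is illustrative: it takes iid Rademacher $X_k$ (so $\beta_k^{2+\delta} = 1$ with $\delta = 1$ and $s_b^2 = b$) and $N_n$ uniform on $\{10, \ldots, 30\}$, computes $R_{n,x}$ exactly from the binomial distribution, and compares the weighted error $(1 + |x|)^3 R_{n,x}$ with the right-hand side, here $E\,N_n^{-1/2}$ up to the universal constant:

```python
import math

def phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cdf_sum(b, v):
    # P[S_b < v] for S_b a sum of b symmetric ±1 variables:
    # S_b = 2H - b with H ~ Binomial(b, 1/2)
    return sum(math.comb(b, h) for h in range(b + 1) if 2 * h - b < v) / 2.0 ** b

support = range(10, 31)                 # N_n uniform on {10,...,30}

def R(x):
    # R_{n,x} = |P[S_N < x s_N] - Phi(x)| with s_b = sqrt(b)
    mix = sum(cdf_sum(b, x * math.sqrt(b)) for b in support) / len(support)
    return abs(mix - phi(x))

# right-hand side of the first bound in Theorem 3.8 (δ = 1, β_k = 1): E N^{-1/2}
rhs = sum(1.0 / math.sqrt(b) for b in support) / len(support)
worst = max((1 + abs(x)) ** 3 * R(x) for x in (i / 4 for i in range(-20, 21)))
# worst / rhs gives an empirical value for the universal constant C in the bound
```

Even with the lattice effects of the $\pm 1$ walk, a single-digit constant $C$ suffices on this grid, which is in line with the universal-constant formulation of the theorem.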
THEOREM 3.9 (cf. [101], Theorem 1). Let $\{X_k, k \ge 1\}$ be a sequence of independent random variables such that $EX_k = 0$, $EX_k^2 = \sigma_k^2 < \infty$, $k \ge 1$. Then
(3.14) $R_{n,x} \le C\bigl(E\Lambda_{N_n}(s_{N_n}(1 + |x|)) + EL_{N_n}(s_{N_n}(1 + |x|))\bigr)/(1 + |x|)^2 = E\{\Psi_{N_n}(s_{N_n}(1 + |x|))\}/(1 + |x|)^2,$
and, for all $p > 1/2$,
(3.15) $\Bigl(\int_{-\infty}^{+\infty} R_{n,x}^p\,dx\Bigr)^{1/p} \le C_p\,E\Psi_{N_n}(s_{N_n}),$
where
$$\Lambda_k(u) = \frac{1}{s_k^2}\sum_{j=1}^{k}\int_{|y|\ge u} y^2\,dF_{X_j}(y), \qquad L_k(u) = \frac{1}{s_k^2 u}\sum_{j=1}^{k}\int_{|y|<u} |y|^3\,dF_{X_j}(y), \qquad \Psi_k(u) = C\bigl(\Lambda_k(u) + L_k(u)\bigr).$$

THEOREM 3.11. Let $\{X_k, k \ge 1\}$ be as in Theorem 3.8. Then
(3.16) $R_{n,x} \le E\bigl\{(B_{N_n}^{2+\delta}/s_{N_n}^{2+\delta})\,\Psi_{N_n}^*(s_{N_n}(1 + |x|))\bigr\}\big/(1 + |x|)^{2+\delta},$
and, for all $p > 1/(2+\delta)$,
(3.17) $\Bigl(\int_{-\infty}^{+\infty} R_{n,x}^p\,dx\Bigr)^{1/p} \le C_p\,E\bigl\{B_{N_n}^{2+\delta}\,\Psi_{N_n}^*(s_{N_n})\big/s_{N_n}^{2+\delta}\bigr\},$
where $B_{N_n}^{2+\delta} = \sum_{k=1}^{N_n}\beta_k^{2+\delta}$ and $\Psi_k^*$ is the function defined in the proof below.
PROOF. Using (3.13) with
$$g_n(u) = (s_n(1 + |x|))^{(1-\delta)/2}|u|^{\delta}\,I[|u| < (s_n(1 + |x|))^{1/2}] + |u|\,I[(s_n(1 + |x|))^{1/2} \le |u| < s_n(1 + |x|)] + s_n(1 + |x|)\,I[|u| \ge s_n(1 + |x|)], \quad n \ge 1,$$
we find that
$$\begin{aligned}
R_{n,x} &\le \frac{C}{(1 + |x|)^{2+\delta}}\,E\Bigl\{(B_{N_n}^{2+\delta}/s_{N_n}^{2+\delta})(s_{N_n}(1 + |x|))^{-(1-\delta)/2} + \frac{1}{s_{N_n}^{2+\delta}}\sum_{k=1}^{N_n}\int_{|y|\ge(s_{N_n}(1+|x|))^{1/2}} |y|^{2+\delta}\,dF_{X_k}(y) + \frac{1}{s_{N_n}^{2+\delta}}\sum_{k=1}^{N_n}\int_{|y|\ge s_{N_n}(1+|x|)} |y|^{2+\delta}\,dF_{X_k}(y)\Bigr\} \\
&= E\bigl\{(B_{N_n}^{2+\delta}/s_{N_n}^{2+\delta})\,\Psi_{N_n}^*(s_{N_n}(1 + |x|))\bigr\}\big/(1 + |x|)^{2+\delta},
\end{aligned}$$
where
$$\Psi_k^*(u) = C\Bigl\{u^{-(1-\delta)/2} + B_k^{-(2+\delta)}\sum_{j=1}^{k}\int_{|y|\ge u^{1/2}} |y|^{2+\delta}\,dF_{X_j}(y) + B_k^{-(2+\delta)}\sum_{j=1}^{k}\int_{|y|\ge u} |y|^{2+\delta}\,dF_{X_j}(y)\Bigr\}.$$
Furthermore, one can easily see that, for every $k \ge 1$, $\Psi_k^*$ is bounded by $C(\sigma_1^{-(1-\delta)/2} + 2)$, nonincreasing on $[\sigma_1, \infty)$, and that $\lim_{u\to\infty}\Psi_k^*(u) = 0$. The proof of Theorem 3.11 is complete, as (3.17) is an immediate consequence of (3.16).

REMARK 3.12. One can easily see that in the special case where $N_n = n$ almost surely, $n \ge 1$, Theorem 3.11 strengthens Corollary 2 to Theorem 4 of [21] and at the same time generalizes to nonidentically distributed random variables the Corollary of [120].

THEOREM 3.13. Let $\{X_k, k \ge 1\}$ be as in Theorem 3.11, and suppose that $B_{N_n}^{2+\delta}/s_{N_n}^{2+\delta} = O\bigl(E(B_{N_n}^{2+\delta}/s_{N_n}^{2+\delta})\bigr)$ with probability 1 as $n \to \infty$, and that
$$E\Bigl\{\frac{1}{B_{N_n}^{2+\delta}}\sum_{k=1}^{N_n}\int_{|y|\ge s_{N_n}/\varphi_n} |y|^{2+\delta}\,dF_{X_k}(y)\Bigr\} \to 0 \quad \text{as } n \to \infty,$$
where $\{\varphi_n, n \ge 1\}$ is a sequence of positive numbers going to infinity. Then
$$\sup_x\,(1 + |x|)^{2+\delta}R_{n,x} = o\bigl(E(B_{N_n}^{2+\delta}/s_{N_n}^{2+\delta})\bigr) \quad \text{as } n \to \infty.$$
PROOF. Only some modifications are necessary in the proof of Theorem 3.11 to make it applicable in this case: put
$$g_n(u) = (s_n(1 + |x|)/\varphi_n)^{1-\delta}|u|^{\delta}\,I[|u| < s_n(1 + |x|)/\varphi_n] + |u|\,I[s_n(1 + |x|)/\varphi_n \le |u| < s_n(1 + |x|)] + s_n(1 + |x|)\,I[|u| \ge s_n(1 + |x|)], \quad n \ge 1.$$
The following corollaries generalize some results of [31] and [121].

COROLLARY 3.14. Let $\{X_k, k \ge 1\}$ be a sequence of independent and identically distributed random variables such that $EX_1 = 0$, $EX_1^2 = 1$, and let $g_n \in \mathcal{G}$ be such that $EX_1^2 g_n(X_1) < \infty$, $n \ge 1$. Then there exists a positive universal constant $C$ such that
$$r_{n,x} = |P[S_{N_n} < xs_{N_n}] - \Phi(x)| \le CE\bigl\{X_1^2 g_{N_n}(X_1)\big/[(1 + |x|)^2 g_{N_n}(\sqrt{N_n}(1 + |x|))]\bigr\}.$$

REMARK 3.15 (cf. [121]). Obviously, if $g_n = g$, $n \ge 1$, where $g \in \mathcal{G}$ is such that $EX_1^2 g(X_1) < \infty$ and $\lim_{u\to\infty} g(u) = \infty$, and if for some $C > 0$
$$P[N_n < C/(\varepsilon_n(1 + |x|))] = O\bigl(1\big/g\bigl(((1 + |x|)/\varepsilon_n)^{1/2}\bigr)\bigr),$$
where $\{\varepsilon_n, n \ge 1\}$ is a sequence of positive numbers with $\varepsilon_n \to 0$ as $n \to \infty$, then
$$(1 + |x|)^2 r_{n,x} = O\bigl(1\big/g\bigl((C_1(1 + |x|)/\varepsilon_n)^{1/2}\bigr)\bigr),$$
where $C_1 = \min(C, 1)$.

COROLLARY 3.16. Let $\{X_k, k \ge 1\}$ be a sequence of independent and identically distributed random variables such that $EX_1 = 0$, $EX_1^2 = 1$. Then there exists a bounded and nonincreasing function $\Psi$ on $(0, \infty)$ with $\lim_{u\to\infty}\Psi(u) = 0$ such that
$$r_{n,x} \le E\Psi(\sqrt{N_n}(1 + |x|))/(1 + |x|)^2,$$
and, for all $p > 1/2$,
$$\Bigl(\int_{-\infty}^{+\infty} r_{n,x}^p\,dx\Bigr)^{1/p} \le C_p\,E\Psi(\sqrt{N_n}).$$

COROLLARY 3.17. Let $\{X_k, k \ge 1\}$ be a sequence of independent and identically distributed random variables such that $EX_1 = 0$, $EX_1^2 = 1$, $E|X_1|^{2+\delta} < \infty$ for some $0 < \delta < 1$. Then there exists a bounded and nonincreasing function $\Psi_1$ on $[1, \infty)$ with $\lim_{u\to\infty}\Psi_1(u) = 0$ such that
(3.18) $r_{n,x} \le E\bigl\{\Psi_1(\sqrt{N_n}(1 + |x|))\big/[N_n^{\delta/2}(1 + |x|)^{2+\delta}]\bigr\},$
and, for all $p > 1/(2+\delta)$,
(3.19) $\Bigl(\int_{-\infty}^{+\infty} r_{n,x}^p\,dx\Bigr)^{1/p} \le C_p\,E\bigl\{\Psi_1(\sqrt{N_n})\big/N_n^{\delta/2}\bigr\}.$

COROLLARY 3.18. Let $\{X_k, k \ge 1\}$ be as in Corollary 3.17, and suppose that $N_n^{-\delta/2} = O(EN_n^{-\delta/2})$ with probability 1 as $n \to \infty$, and that $N_n \xrightarrow{P} \infty$ (in probability) as $n \to \infty$. Then
$$\sup_x\,(1 + |x|)^{2+\delta} r_{n,x} = o(EN_n^{-\delta/2}),$$
and, for all $p > 1/(2+\delta)$,
$$\Bigl(\int_{-\infty}^{+\infty} r_{n,x}^p\,dx\Bigr)^{1/p} = o(EN_n^{-\delta/2}).$$

PROOF. This immediately follows from (3.18), (3.19), and the assumptions of the corollary.

COROLLARY 3.19. Let $\{X_k, k \ge 1\}$ be a sequence of independent and identically distributed random variables such that $EX_1 = 0$, $EX_1^2 = 1$, $E|X_1|^3 < \infty$. Then
$$\sup_x\,(1 + |x|)^3 r_{n,x} = O(EN_n^{-1/2}),$$
and, for all $p > 1/3$,
$$\Bigl(\int_{-\infty}^{+\infty} r_{n,x}^p\,dx\Bigr)^{1/p} = O(EN_n^{-1/2}).$$

REMARK 3.20. We note that [155] gives a similar order of approximation for the weak convergence in the random central limit theorem. But the order of approximation deduced for the weak convergence cannot in general be transferred to the associated strong convergence in distribution (see e.g. [28]).

3.3.3. Proof of Theorem 3.6. The proof is based on the following auxiliary result.
PROPOSITION 3.21. Let $X_k$, $1 \le k \le n$, be independent random variables such that $EX_k = 0$, $EX_k^2 = \sigma_k^2 < \infty$, $1 \le k \le n$, and let $g = g_{n,x} \in \mathcal{G}$ be such that $EX_k^2 g(X_k) < \infty$ for all $1 \le k \le n$. Then there exists a positive universal constant $C_1$ such that
(3.20) $\Delta_{n,x} \le C_1\sum_{k=1}^{n} EX_k^2 g(X_k)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))],$
where $x^* = \max(1; |x|)$.
PROOF. In the proof we use the ideas of [4, 21] and [166] (see also [124, pp. 141–144]); therefore we indicate only the changes that should be made in our case. Observe first that for all $x$ we have $\Delta_{n,x} \le 1$ and $x^2\Delta_{n,x} \le 1$ ([124, p. 151]), whence $\Delta_{n,x} \le x^{*-2}$ for all $x$. Moreover, if for some $x$
$$\sum_{k=1}^{n} EX_k^2 g(X_k)\big/[s_n^2 g(s_n(1 + |x|))] \ge 10^{-1},$$
then
$$\Delta_{n,x} \le x^{*-2} \le 10\sum_{k=1}^{n} EX_k^2 g(X_k)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))],$$
i.e. (3.20) is trivially true in this case with $C_1 = 10$. It remains to show (3.20) for the case
(3.21) $\sum_{k=1}^{n} EX_k^2 g(X_k)\big/[s_n^2 g(s_n(1 + |x|))] < 10^{-1}$
with arbitrarily fixed $x$. For $j = 1, \ldots, n$, define the truncated r.v.'s by $X_{nj} = X_j$ if $|X_j| < s_n(1 + |x|)$, and $X_{nj} = 0$ otherwise. Let
$$a_{nj} = EX_{nj} = EX_j\,I[|X_j| < s_n(1 + |x|)], \qquad c_{nj}^2 = EX_{nj}^2 - E^2 X_{nj} = EX_j^2\,I[|X_j| < s_n(1 + |x|)] - a_{nj}^2,$$
$$A_n = \sum_{j=1}^{n} a_{nj}, \qquad V_n^2 = \sum_{j=1}^{n} c_{nj}^2.$$
Then it follows from elementary calculations that
(3.22) $|a_{nj}| \le EX_j^2 g(X_j)\big/[s_n x^* g(s_n(1 + |x|))], \quad j = 1, \ldots, n,$
(3.23) $0 \le s_n^2 - V_n^2 \le 2\sum_{j=1}^{n} EX_j^2 g(X_j)\big/g(s_n(1 + |x|)).$
Furthermore, we note that
$$\begin{aligned}
V_n^2 &= s_n^2 - \sum_{j=1}^{n} EX_j^2\,I[|X_j| \ge s_n(1 + |x|)] - \sum_{j=1}^{n} a_{nj}^2 \\
&\ge s_n^2\Bigl(1 - \sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 g(s_n(1 + |x|))]\Bigr) - \Bigl(\sum_{j=1}^{n} E|X_j|\,I[|X_j| \ge s_n(1 + |x|)]\Bigr)^2 \\
&\ge \frac{9}{10}s_n^2 - s_n^2\Bigl(\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2(1 + |x|)g(s_n(1 + |x|))]\Bigr)^2 \ge \frac{89}{100}s_n^2, \quad \text{by (3.21)}.
\end{aligned}$$
Hence
(3.24) $\frac{89}{100}s_n^2 \le V_n^2 \le s_n^2.$
As in the above-mentioned papers, we obtain
(3.25) $\Delta_{n,x} \le \Bigl|P\Bigl[\sum_{j=1}^{n}(X_{nj} - a_{nj}) < yV_n\Bigr] - \Phi(y)\Bigr| + |\Phi(y) - \Phi(x)| + \sum_{j=1}^{n} P[|X_j| \ge s_n(1 + |x|)] = T_{n,1} + T_{n,2} + T_{n,3}, \text{ say},$
where
(3.26) $y = (xs_n - A_n)/V_n.$
By Chebyshev's inequality we have
(3.27) $T_{n,3} = \sum_{j=1}^{n} P[|X_j| \ge s_n(1 + |x|)] \le \sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2(1 + |x|)^2 g(s_n(1 + |x|))] \le \sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))].$
Next we observe that
(3.28) $T_{n,2} \le |\Phi(y) - \Phi(xs_n/V_n)| + |\Phi(xs_n/V_n) - \Phi(x)| \le (2\pi)^{-1/2}\Bigl(\Bigl|\int_{xs_n/V_n}^{y} e^{-t^2/2}\,dt\Bigr| + \Bigl|\int_{x}^{xs_n/V_n} e^{-t^2/2}\,dt\Bigr|\Bigr).$
Furthermore, by (3.23) and (3.24), we have
(3.29)
$$\begin{aligned}
\Bigl|\int_{x}^{xs_n/V_n} e^{-t^2/2}\,dt\Bigr| &\le |x|e^{-x^2/2}(s_n/V_n - 1) = |x|e^{-x^2/2}(s_n^2 - V_n^2)\big/[V_n(s_n + V_n)] \\
&\le 2\Bigl(\frac{89}{100}\Bigr)^{-1/2}|x|e^{-x^2/2}\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 g(s_n(1 + |x|))] \\
&= 2\Bigl(\frac{89}{100}\Bigr)^{-1/2}|x|x^{*2}e^{-x^2/2}\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))] \\
&\le 2\Bigl(\frac{89}{100}\Bigr)^{-1/2}(3/e)^{3/2}\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))],
\end{aligned}$$
and, by (3.22) and (3.24), putting $z = \min(|x|s_n/V_n;\ |y|)$,
(3.30)
$$\begin{aligned}
\Bigl|\int_{xs_n/V_n}^{y} e^{-t^2/2}\,dt\Bigr| &\le e^{-z^2/2}|A_n/V_n| \le e^{-z^2/2}\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[V_n s_n x^* g(s_n(1 + |x|))] \\
&\le \Bigl(\frac{89}{100}\Bigr)^{-1/2} x^* e^{-z^2/2}\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))] \le \Bigl(\frac{89}{100}\Bigr)^{-1/2}\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))],
\end{aligned}$$
since $x^* e^{-z^2/2} \le 1$ (this inequality is obvious for $|x| \le 1$; if $|x| > 1$, then we observe that
$$|y| = |xs_n/V_n|\,|1 - A_n/(xs_n)| \ge |x|\,|1 - A_n/(xs_n)| \ge \frac{9}{10}|x|$$
by (3.21) and (3.22), which implies that $z \ge \frac{9}{10}x^*$). Combining (3.28)–(3.30) we find that
(3.31) $T_{n,2} \le C_2\sum_{j=1}^{n} EX_j^2 g(X_j)\big/[s_n^2 x^{*2} g(s_n(1 + |x|))],$
where $C_2 = (1 + 2(3/e)^{3/2})/(1.78\pi)^{1/2} < 1.4035$.

Finally, applying Bikelis' theorem [21, Theorem 2] to the sequence $\{(X_{nj}-a_{nj}),\ 1 \le j \le n\}$ of truncated random variables, we obtain
\[
(3.32)\qquad T_{n,1} = \Bigl|P\Bigl[\sum_{j=1}^{n}(X_{nj}-a_{nj}) < yV_n\Bigr] - \Phi(y)\Bigr| \le C_3 \sum_{j=1}^{n} E|X_{nj}-a_{nj}|^3/[V_n^3(1+|y|)^3].
\]
But for $j = 1, \dots, n$,
\[
E|X_{nj}-a_{nj}|^3 \le 4(E|X_{nj}|^3 + |a_{nj}|^3) \le 8E|X_{nj}|^3 \le 8s_n(1+|x|)EX_j^2 g(X_j)/g(s_n(1+|x|)).
\]
Furthermore, using (3.26), we have
\[
|x| = |(yV_n + A_n)/s_n| \le |y| + |A_n/s_n| \quad (\text{as } V_n/s_n \le 1)
\]
\[
\le |y| + \sum_{j=1}^{n} EX_j^2 g(X_j)/[s_n^2 x_* g(s_n(1+|x|))] \quad \text{by (3.22)}, \qquad \le |y| + 10^{-1} \quad \text{by (3.21)},
\]
whence
\[
x_*^2(1+|x|)/(1+|y|)^3 \le x_*^2/(1+|y|)^2 + x_*^2/[10(1+|y|)^3] \le \tfrac{11}{10}.
\]
Consequently,
\[
T_{n,1} \le C_3 \sum_{j=1}^{n} E|X_{nj}-a_{nj}|^3/[V_n^3(1+|y|)^3] \le 8C_3 s_n(1+|x|)\sum_{j=1}^{n} EX_j^2 g(X_j)/[V_n^3(1+|y|)^3 g(s_n(1+|x|))]
\]
\[
\le 8\Bigl(\tfrac{89}{100}\Bigr)^{-3/2} C_3 \bigl(x_*^2(1+|x|)/(1+|y|)^3\bigr)\sum_{j=1}^{n} EX_j^2 g(X_j)/[s_n^2 x_*^2 g(s_n(1+|x|))]
\le 8.8\Bigl(\tfrac{89}{100}\Bigr)^{-3/2} C_3 \sum_{j=1}^{n} EX_j^2 g(X_j)/[s_n^2 x_*^2 g(s_n(1+|x|))],
\]
and combining this result with (3.25), (3.27) and (3.31) we get the desired conclusion (3.20) with $C_1 = 1 + C_2 + 8.8(89/100)^{-3/2}C_3$.
Note. Proposition 3.21 strengthens Theorem 2.1 of [4].

PROOF OF THEOREM 3.6. It follows from Proposition 3.21 that for all $x$
\[
\Delta_{n,x} \le C_1 \sum_{j=1}^{n} EX_j^2 g_n(X_j)/[s_n^2 g_n(s_n(1+|x|))], \qquad
x^2\Delta_{n,x} \le C_1 \sum_{j=1}^{n} EX_j^2 g_n(X_j)/[s_n^2 g_n(s_n(1+|x|))].
\]
Consequently,
\[
(1+x^2)\Delta_{n,x} \le 2C_1 \sum_{j=1}^{n} EX_j^2 g_n(X_j)/[s_n^2 g_n(s_n(1+|x|))],
\]
and, since $(1+|x|)^2 \le 2(1+x^2)$, it is easily seen that
\[
\Delta_{n,x} \le 4C_1 \sum_{j=1}^{n} EX_j^2 g_n(X_j)/[s_n^2(1+|x|)^2 g_n(s_n(1+|x|))].
\]
The proof of Theorem 3.6 is complete.

REMARK 3.22. We note that Theorem 3.6 can also be deduced in a different way from Theorem 4 of [21] (cf. e.g. [165]).
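The nonuniform character of the bound in Theorem 3.6, whose right-hand side decreases like $(1+|x|)^{-2}$, can be observed numerically. The following Monte Carlo sketch is illustrative only: the choice of i.i.d. standardized exponential summands, the sample sizes, and all numerical settings are assumptions of the example, not taken from the text. It estimates $\Delta_{n,x} = |P[S_n < x s_n] - \Phi(x)|$ at several points $x$:

```python
import math
import random

def phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def delta(n, x, reps=20000, rng=random.Random(1)):
    # Monte Carlo estimate of |P[S_n < x * s_n] - Phi(x)| for i.i.d.
    # standardized Exp(1) summands X_j, for which s_n^2 = n
    sn = math.sqrt(n)
    hits = sum(
        sum(rng.expovariate(1.0) - 1.0 for _ in range(n)) < x * sn
        for _ in range(reps)
    )
    return abs(hits / reps - phi(x))

n = 30
errors = [delta(n, x) for x in (0.0, 1.0, 2.0, 3.0)]
print(errors)
# the discrepancy at x = 3 is markedly smaller than at x = 0,
# in line with a bound proportional to (1 + |x|)^{-2}
```

The decreasing error profile is exactly what distinguishes a nonuniform estimate of Katz--Petrov type from the uniform Berry--Esseen bound.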
CHAPTER 4

Weak Convergence to Infinitely Divisible Laws

Let $(X_{nk})$ ($k \ge 1$, $n \ge 1$) be a doubly infinite array (DIA) of random variables (r.v.'s) defined on a common probability space $(\Omega, F, P)$. Assume that $(X_{nk})$ is adapted to an array $(F_{nk})$ ($k \ge 0$, $n \ge 1$) of row-wise increasing sub-$\sigma$-fields of $F$, i.e. $X_{nk}$ is $F_{nk}$-measurable and $F_{n,k-1} \subseteq F_{nk}$. $F_{n0}$ need not be the trivial $\sigma$-field $\{\emptyset, \Omega\}$. Now let $\{N_n, n \ge 1\}$ be a sequence of positive integer-valued r.v.'s defined on the same probability space $(\Omega, F, P)$. Let us denote
\[
S_{nN_n} = \sum_{k=1}^{N_n} X_{nk}, \qquad \varphi_{nk}(t) = E(\exp(itX_{nk})\,|\,F_{n,k-1}), \qquad f_{nN_n}(t) = \prod_{k=1}^{N_n} \varphi_{nk}(t).
\]
Recently, several papers have appeared which are devoted to the study of the limit distribution of $S_{nN_n}$ as $n \to \infty$. There have been two basic problems in the field. One arises when a limit theorem is already given and the question is about the optimal conditions ensuring that the same theorem, or a mixture version of it, remains true with random indices (see, e.g. [169], [141], [47], and their references). The other problem is to prove random-sums limit theorems directly, and to determine the class of possible limit distributions. Of course, the conditions ensuring the random limit theorem ought to reduce to the classical ones when $N_n = n$. However, there are $(X_{nk})$ such that $S_{nN_n}$ satisfies the random limit theorem whereas $S_{nn}$ does not weakly converge (see, e.g. [149, Example 1]).

The latter problem has been considered by many authors. The case when $N_n, X_{n1}, X_{n2}, \dots$ are independent r.v.'s has been investigated in [169], [141] and [149], while the case when the $N_n$ are stopping times has been investigated in [148], [20] and [80] (see also references in these papers). A general case, when $N_n$ need not be (for every $n$) independent of $X_{nk}$ or a stopping time, has been considered in [97] under the assumption that the $X_{nk}$ have finite conditional variances, and in [90, 92] without this assumption.

In the sequel we will abuse notation slightly in the interest of brevity, denoting $E(Y|F_{n,k-1})$ by $E_{k-1}Y$, where $Y$ is some variable (e.g. $Y = X_{nk}$) taken from the $n$-th row¹. Throughout, $I(A)$ denotes the indicator function of the set $A$, and the various kinds of convergence, namely with probability 1 (almost sure), in $L^r$-norm, in probability, and weak (in distribution), are denoted by $\xrightarrow{a.s.}$, $\xrightarrow{L^r}$, $\xrightarrow{P}$ and $\Rightarrow$, respectively. All equalities and inequalities between r.v.'s are considered in the sense "with probability one", and all limits are taken as $n \to \infty$, unless stated otherwise.

4.1. The case of finite variances

$(X_{nk})$ is called a martingale doubly infinite array (MDIA) if $E|X_{nk}| < \infty$ and $E(X_{nk}|F_{n,k-1}) = 0$ almost surely (a.s.) for all $k \ge 1$ and $n \ge 1$. Let us denote
\[
\sigma_{nk}^2 = E(X_{nk}^2|F_{n,k-1}), \qquad V_{N_n}^2 = \sum_{k=1}^{N_n} \sigma_{nk}^2, \qquad b_{N_n}^2 = \max_{1\le k\le N_n} \sigma_{nk}^2 .
\]

¹Hence, we will denote $P(A_{nk}|F_{n,k-1})$ by $P_{k-1}(A_{nk})$.
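To make the objects just introduced concrete, here is a small numerical sketch; every concrete choice in it (the fair-sign row, the uniform random index) is an illustrative assumption, not something taken from the text. It builds one row of an MDIA, $X_{nk} = \xi_k/\sqrt{n}$ with fair random signs $\xi_k$, stops it at a random index $N_n$ independent of the row, and checks that $E S_{nN_n} = 0$ while $\operatorname{Var} S_{nN_n} = E V_{N_n}^2 = E N_n/n$:

```python
import math
import random

rng = random.Random(7)

def random_sum(n):
    # one MDIA row: X_{nk} = xi_k / sqrt(n) with fair random signs xi_k,
    # stopped at N_n ~ uniform on {n, ..., 2n}, independent of the row;
    # here sigma^2_{nk} = 1/n, so V^2_{N_n} = N_n / n
    N = rng.randint(n, 2 * n)
    return sum(rng.choice((-1.0, 1.0)) / math.sqrt(n) for _ in range(N))

samples = [random_sum(200) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # mean near 0, variance near E N_n / n = 1.5
```

Note that $V_{N_n}^2 = N_n/n$ stays random in the limit here, which is precisely why random-sum limits are in general mixtures rather than a single normal law.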
4.1.1. The main results. To begin with, we observe that if $N_n$ is $F_{n0}$-measurable and $F_{n0}$ is the trivial $\sigma$-field, then there exists a sequence $\{k_n, n \ge 1\}$ of positive integers such that $P(N_n = k_n) = 1$ for all $n \ge 1$. So, if $N_n \xrightarrow{P} \infty$, then $k_n \to \infty$. On the other hand, if $N_n, X_{n1}, X_{n2}, \dots$ are independent random variables, then $E f_{nN_n}(t) = E\exp(itS_{nN_n})$. Thus², for a fixed real $t$, $f_{nN_n}(t) \xrightarrow{P} A(t)$ as $n \to \infty$ if and only if $E\exp(itS_{nN_n}) \to A(t)$ as $n \to \infty$, since $|f_{nN_n}(t)| \le 1$.

The following routine lemma will be useful throughout this chapter.

LEMMA 4.1 ([97]). Let $(X_{nk})$ be a DIA of r.v.'s adapted to an array $(F_{nk})$ of row-wise increasing sub-$\sigma$-fields, and let $\{N_n, n \ge 1\}$ be a sequence of positive integer-valued random variables such that
\[
(4.1)\qquad f_{nN_n}(t) \xrightarrow{P} A(t) \ \text{as } n \to \infty,
\]
for some $t$, implies
\[
(4.2)\qquad E\exp(itS^*_{nN_n})\,(f^*_{nN_n}(t))^{-1} \to 1 \ \text{as } n \to \infty,
\]
where $A(t)$ is a complex number such that $|A(t)| \ne 0$, and
\[
S^*_{nN_n} = \sum_{k=1}^{N_n} X^*_{nk}, \qquad f^*_{nN_n}(t) = \prod_{k=1}^{N_n} E_{k-1}\exp(itX^*_{nk}), \qquad X^*_{nk} = X_{nk}\, I[\,|f_{nk}(t)| \ge |A(t)|/2\,].
\]
Then, under (4.1), $E\exp(itS_{nN_n}) \to A(t)$ as $n \to \infty$.

PROOF. For each fixed $n$, let $A_{nk} = \{|f_{nk}(t)| \ge |A(t)|/2\}$. Note that $A_{n0} = \Omega \supseteq A_{n1} \supseteq A_{n2} \supseteq \dots$ and $A^c_{nk} \cap A_{ns} = \emptyset$ for $k \le s$. Furthermore,
\[
\Omega = A_{nm} + A^c_{nm} = A_{nm} + (A^c_{nm} \cap A_{n,m-1}) + A^c_{n,m-1} = \dots = A_{nm} + (A^c_{nm} \cap A_{n,m-1}) + \dots + (A^c_{n2} \cap A_{n1}) + (A^c_{n1} \cap A_{n0}).
\]

²Other cases will be discussed later on.
Then
\[
|f^*_{nN_n}(t)| = \prod_{k=1}^{N_n} |E_{k-1}\exp(itX^*_{nk})| = \prod_{k=1}^{N_n} \bigl|I(A_{nk})E_{k-1}\exp(itX_{nk}) + I(A^c_{nk})\bigr| = \prod_{k=1}^{N_n} \bigl[I(A_{nk})|\varphi_{nk}(t)| + I(A^c_{nk})\bigr]
\]
\[
= \prod_{k=1}^{N_n} |\varphi_{nk}(t)|\; I(A_{nN_n}) + \sum_{s=1}^{N_n-1} \prod_{j=1}^{s} |\varphi_{nj}(t)|\; I(A_{ns} \cap A^c_{n,s+1}) + I(A^c_{n1})
\]
\[
\ge \frac{|A(t)|}{2}\, I(A_{nN_n}) + \sum_{s=1}^{N_n-1} \frac{|A(t)|}{2}\, I(A_{ns} \cap A^c_{n,s+1}) + \frac{|A(t)|}{2}\, I(A^c_{n1}) \ge |A(t)|/2 \quad \text{a.s. for all } n,
\]
and, by (4.1),
\[
P\Bigl(\bigcup_{k=1}^{N_n} \{X_{nk} \ne X^*_{nk}\}\Bigr) = P\Bigl(\bigcup_{k=1}^{N_n} A^c_{nk}\Bigr) = P(A^c_{nN_n}) \to 0 \ \text{as } n \to \infty .
\]
Moreover, by (4.1),
\[
P\Bigl(\bigcup_{k=1}^{N_n} \{f_{nk}(t) \ne f^*_{nk}(t)\}\Bigr) = P\Bigl(\bigcup_{k=1}^{N_n}\bigcup_{j=1}^{k} A^c_{nj}\Bigr) = P(A^c_{nN_n}) \to 0
\]
as $n \to \infty$, and then
\[
(4.1')\qquad f^*_{nN_n}(t) \xrightarrow{P} A(t) \ \text{as } n \to \infty .
\]
Thus, it suffices to show that $E\exp(itS^*_{nN_n}) \to A(t)$ as $n \to \infty$.
Now we have
\[
|E\exp(itS^*_{nN_n}) - A(t)| \le \bigl|E\exp(itS^*_{nN_n})\{1 - A(t)(f^*_{nN_n}(t))^{-1}\}\bigr| + |A(t)|\cdot\bigl|E\exp(itS^*_{nN_n})(f^*_{nN_n}(t))^{-1} - 1\bigr|
\]
\[
\le (2/|A(t)|)\,E|f^*_{nN_n}(t) - A(t)| + |A(t)|\cdot\bigl|E\exp(itS^*_{nN_n})(f^*_{nN_n}(t))^{-1} - 1\bigr| \to 0 \ \text{as } n \to \infty,
\]
by (4.1'), (4.2) and the dominated convergence theorem [107, p. 125].

REMARK 4.2. Suppose that $f_{nk}(t) = f_k(t)$ for all $n$ and $k$. (Put e.g. $X_{nk} = X_k/V_n$ ($k \ge 1$, $n \ge 1$), where $V_n^2 = \sum_{j=1}^{n} E(X_j^2|F_{j-1})$, and $f_n(t) = \prod_{k=1}^{n} \varphi_k(t/V_n)$, $n \ge 1$.) Note that if $f_n(t) \xrightarrow{a.s.} A(t)$ as $n \to \infty$, where $|A(t)| \ne 0$, and if $N_n \xrightarrow{P} \infty$ (as $n \to \infty$), then (4.1) holds (see e.g. [40, Lemma 2]). Taking into account that in our case $f_n(t) \xrightarrow{P} A(t)$ is equivalent to the almost sure convergence (since $P(\liminf_{n\to\infty} A_{nk_n}) = 1$), we see that (4.1) takes place if for instance $f_{nk_n}(t) \xrightarrow{P} A(t)$, $k_n \to \infty$, and $N_n \xrightarrow{P} \infty$.

Now we are going to extend one of the results of [27] and [149]. We will assume that $(X_{nk})$ is an MDIA such that there exists a finite constant $C$ for which
\[
(4.3)\qquad \lim_{n\to\infty} P(V_{N_n}^2 > C) = 0,
\]
and
\[
(4.4)\qquad b_{N_n}^2 \xrightarrow{P} 0 \ \text{as } n \to \infty .
\]

PROPOSITION 4.3 ([97]). Let $\{N_n\}$ be a sequence of positive integer-valued random variables and suppose $(X_{nk})$ is an MDIA such that (4.3) and (4.4) hold. Then
\[
f_{nN_n}(t) \xrightarrow{P} \exp\Bigl(\int_{-\infty}^{+\infty} \bigl(e^{itx} - 1 - itx\bigr)x^{-2}\,dG(x)\Bigr) \ \text{as } n \to \infty,
\]
where $G$ is a bounded nondecreasing function, if and only if for all continuity points $a, b$ of $G$
\[
(4.5)\qquad \sum_{k=1}^{N_n} E_{k-1}X_{nk}^2\, I(a < X_{nk} \le b) \xrightarrow{P} G(b) - G(a) \ \text{as } n \to \infty .
\]

PROOF. We may assume that
\[
(4.6)\qquad V_{N_n}^2 \le C \ \text{a.s. for all } n \in \mathbb{N}.
\]
4.2. The case of not necessarily finite variances

The aim of the present section is to consider the general case without any assumptions about the existence of moments of the r.v.'s $X_{nk}$. We put
\[
(4.7)\qquad A(t) = \exp\Bigl(it\gamma + \int_{-\infty}^{+\infty} \Bigl(\exp(itx) - 1 - \frac{itx}{1+x^2}\Bigr)\frac{1+x^2}{x^2}\,dK(x)\Bigr),
\]
where $\gamma$ is a fixed real number, and $K$ is a bounded and nondecreasing real function such that $K(-\infty) = 0$. Then $A$ is the characteristic function (ch.f.) of an infinitely divisible distribution in the Lévy--Khintchine representation (see, e.g. [108]).

The following routine lemma will be useful throughout this chapter.

LEMMA 4.4 ([97]). Let $(X_{nk})$ be a DIA of r.v.'s adapted to an array $(F_{nk})$ of row-wise increasing sub-$\sigma$-fields, and let $\{N_n, n \ge 1\}$ be a sequence of positive integer-valued r.v.'s such that
\[
(4.8)\qquad f_{nN_n}(t) = \prod_{k=1}^{N_n} E_{k-1}\exp(itX_{nk}) \xrightarrow{P} A(t),
\]
for some $t$, implies
\[
(4.9)\qquad E\exp(itS^*_{nN_n})\,(f^*_{nN_n}(t))^{-1} \to 1,
\]
where
\[
S^*_{nN_n} = \sum_{k=1}^{N_n} X^*_{nk}, \qquad f^*_{nN_n}(t) = \prod_{k=1}^{N_n} E_{k-1}\exp(itX^*_{nk}), \qquad X^*_{nk} = X_{nk}\, I(|f_{nk}(t)| \ge |A(t)|/2).
\]
Then, under (4.8), $E\exp(itS_{nN_n}) \to A(t)$.

REMARK 4.5. One can easily prove that (4.8) implies (4.9) for example in the following cases:
a: $N_n$ is for every $n$ independent of $\{X_{nk}, k \ge 1\}$, and $F_{n0} = \{\emptyset, \Omega\}$, $F_{nk} = \sigma\{X_{n1}, X_{n2}, \dots, X_{nk}\}$;
b: $N_n$ is for every $n$ a stopping time w.r.t. $\{F_{nk}, k \ge 1\}$;
c: $X_{nk} = X_k$ for every $k$ and all $n$, and $N_n \xrightarrow{P} \infty$;
d: $P(k_n \le N_n \le r_n) \to 1$ and $\max_{k_n \le k \le r_n} \alpha(F_{nk}, \mathcal{A}_{nk}) \to 0$, where $k_n \le r_n$ are positive integers, $\mathcal{A}_{nk} = \{\emptyset, \Omega, \{N_n = k\}, \{N_n \ne k\}\}$, and
\[
\alpha(\mathcal{A}_1, \mathcal{A}_2) = \sup\{|P(B \cap C) - P(B)P(C)| : B \in \mathcal{A}_1,\ C \in \mathcal{A}_2\}.
\]
The proof of Remark 4.5 is based on the fact that for every $n$ the sequence $Z_k^n = \exp(itS^*_{nk})(f^*_{nk}(t))^{-1} - 1$, $k \ge 1$, forms a uniformly bounded martingale w.r.t. $\{F_{nk}, k \ge 1\}$ such that $EZ_k^n = 0$, and is not detailed here.
4.2.1. The main results. Let $(\tau_{nk})$ ($k \ge 1$, $n \ge 1$) be a DIA of nonnegative r.v.'s such that $|E_{k-1}X_{nk}I(|X_{nk}| \le \tau_{nk})| < \infty$ for every $k$ and all $n$. We put
\[
a_{nk} = E_{k-1}X_{nk}\,I(|X_{nk}| \le \tau_{nk}), \qquad Y_{nk} = X_{nk} - a_{nk}.
\]
What we consider to be the basic random limit theorem for partial sums of dependent r.v.'s can now be formulated as follows.

THEOREM 4.6. Let $(X_{nk})$ and $\{N_n, n \ge 1\}$ be as in Lemma 4.4, and assume that
\[
(4.10)\qquad \max_{1\le k\le N_n} P_{k-1}(|Y_{nk}| \ge \varepsilon) \xrightarrow{P} 0, \ \text{for all } \varepsilon > 0,
\]
\[
(4.11)\qquad \sum_{k=1}^{N_n} \Bigl(E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\Bigr)^2 \xrightarrow{P} 0,
\]
\[
(4.12)\qquad \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(Y_{nk} \le y) \xrightarrow{P} K(y)
\]
for every point $y$ of continuity of $K$ and for $y = \pm\infty$, and
\[
(4.13)\qquad \sum_{k=1}^{N_n} \Bigl(a_{nk} + E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\Bigr) \xrightarrow{P} \gamma.
\]
Then $S_{nN_n}$ weakly converges to an infinitely divisible distribution, whose ch.f. $A$ is given by (4.7).

Theorem 4.6 extends the main result of [97], Theorem 1, to partial sums of dependent r.v.'s whose first moments need not exist. In the special case when the DIA $(X_{nk})$ is "conditionally infinitesimal", i.e.
\[
(4.14)\qquad \max_{1\le k\le N_n} P_{k-1}(|X_{nk}| \ge \varepsilon) \xrightarrow{P} 0, \ \text{for all } \varepsilon > 0,
\]
and $\tau_{nk} = \tau$, where $\tau$ is a positive constant, Theorem 4.6 yields

THEOREM 4.7. Let $(X_{nk})$ and $\{N_n, n \ge 1\}$ be as in Lemma 4.4, and assume that (4.12), (4.13) and (4.14) are satisfied. Then $S_{nN_n}$ weakly converges to an infinitely divisible distribution, whose ch.f. is given by (4.7).

One can see that in the special case, when $\{N_n, n \ge 1\}$ is a sequence of constants and for every $n$ the r.v.'s $X_{n1}, X_{n2}, \dots$ are independent, (4.12) and (4.13) with $\tau_{nk} = \tau$ are just the conditions that are necessary and sufficient in order that $S_{nN_n} \Rightarrow X$, where $X$ has ch.f. $A$ (see, e.g. [107, p. 309]). In the case when for every $n$ the r.v.'s $N_n, X_{n1}, X_{n2}, \dots$ are independent, Theorem 4.7 reduces to Theorem 6.1 of [141]. Moreover, under the assumption that $\{N_n, n \ge 1\}$ is a sequence of stopping times, Theorem 4.7 is the 1-dimensional analogue of Theorem 1 from [20]. Of course, in all these cases (4.8) implies (4.9).
As simple consequences of Theorem 4.6 we also get:

COROLLARY 4.8. Let $(X_{nk})$ and $\{N_n, n \ge 1\}$ be as in Lemma 4.4, and assume that (4.10), (4.11) and (4.13) are satisfied, and that
\[
(4.15)\qquad \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} \xrightarrow{P} D,
\]
where $D$ is a positive number, and (for every real $y$)
\[
(4.16)\qquad \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(Y_{nk} \le y) \xrightarrow{P} \frac{D}{2} + \frac{D}{\pi}\arctan(y).
\]
Then $S_{nN_n}$ weakly converges to the Cauchy distribution with ch.f. $A(t) = \exp(it\gamma - D|t|)$.

COROLLARY 4.9. Let $(X_{nk})$ and $\{N_n, n \ge 1\}$ be as in Lemma 4.4, and assume that (4.10), (4.11), (4.13) and (4.15) are satisfied with $\gamma = D$, and that
\[
(4.17)\qquad \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(|Y_{nk} - 1| \ge \varepsilon) \xrightarrow{P} 0, \ \text{for all } \varepsilon > 0.
\]
Then $S_{nN_n}$ weakly converges to the Poisson distribution with parameter $\lambda = 2D$.

COROLLARY 4.10. Let $(X_{nk})$ and $\{N_n, n \ge 1\}$ be as in Lemma 4.4, and assume that (4.11), (4.13) and (4.15) are satisfied, and that
\[
(4.18)\qquad \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(|Y_{nk}| \ge \varepsilon) \xrightarrow{P} 0, \ \text{for all } \varepsilon > 0.
\]
Then $\{S_{nN_n}\}$ weakly converges to the normal distribution $N(\gamma, D)$ with mean $\gamma$ and variance $D$.

It is easy to verify that (4.18) is equivalent to
\[
(4.19)\qquad \sum_{k=1}^{N_n} P_{k-1}(|Y_{nk}| \ge \varepsilon) \xrightarrow{P} 0, \ \text{for all } \varepsilon > 0.
\]
Now we see that in the special case, when $\{N_n, n \ge 1\}$ is a sequence of constants and $\tau_{nk} = 0$, Corollary 4.10 reduces to Theorem 2.3 of [52]. From Corollary 4.10 one can deduce the following random version of the "normal convergence criterion" (cf. [107, p. 316]).

COROLLARY 4.11. Let $(X_{nk})$ and $\{N_n, n \ge 1\}$ be as in Lemma 4.4, and assume that
\[
(4.20)\qquad \sum_{k=1}^{N_n} P_{k-1}(|X_{nk}| \ge \varepsilon) \xrightarrow{P} 0, \ \text{for all } \varepsilon > 0,
\]
\[
(4.21)\qquad \sum_{k=1}^{N_n} E_{k-1}X_{nk}\,I(|X_{nk}| \le \tau) \xrightarrow{P} \gamma,
\]
\[
(4.22)\qquad \sum_{k=1}^{N_n} \bigl(E_{k-1}X_{nk}^2\,I(|X_{nk}| \le \tau) - E_{k-1}^2 X_{nk}\,I(|X_{nk}| \le \tau)\bigr) \xrightarrow{P} D,
\]
where $\tau > 0$ is finite and arbitrarily fixed. Then $S_{nN_n} \Rightarrow N(\gamma, D)$.

In the special case, when $\{N_n, n \ge 1\}$ is a sequence of stopping times, Corollary 4.11 reduces to Theorem 2.1 of [80].
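In the simplest situation covered by these corollaries, independent summands with a random index independent of the row (case (a) of Remark 4.5), the Poisson limit of Corollary 4.9 can be checked by simulation. All concrete distributions in the sketch below (Bernoulli rows, a uniformly distributed index with $N_n/n \to 1$) are illustrative assumptions of the example only:

```python
import random

rng = random.Random(3)

def random_bernoulli_sum(n, lam):
    # independent Bernoulli(lam/n) summands stopped at a random index
    # N_n ~ uniform on {n - 9, ..., n + 10}, so that N_n / n -> 1;
    # the random sum is then approximately Poisson(lam)
    N = rng.randint(n - 9, n + 10)
    return sum(rng.random() < lam / n for _ in range(N))

n, reps, lam = 500, 10000, 2.0
counts = [random_bernoulli_sum(n, lam) for _ in range(reps)]
mean = sum(counts) / reps
p0 = counts.count(0) / reps
print(mean, p0)  # mean near lam = 2.0, P(S = 0) near exp(-2)
```

Replacing the Bernoulli rows by centred summands satisfying (4.18) would, in the same way, illustrate the normal limit of Corollary 4.10.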
4.2.2. Proofs. In order to prove Theorem 4.6 we need the following auxiliary results.
LEMMA 4.12. (4.10) holds if and only if
\[
(4.23)\qquad d_{nN_n} = \max_{1\le k\le N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} \xrightarrow{P} 0.
\]
Furthermore, (4.10) implies that
\[
(4.24)\qquad b_{nN_n} = \max_{1\le k\le N_n} \Bigl|E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\Bigr| \xrightarrow{P} 0.
\]
PROOF. Obvious.

PROPOSITION 4.13. Let $(X_{nk})$ be a DIA of r.v.'s adapted to an array $(F_{nk})$ of row-wise increasing sub-$\sigma$-fields, and let $\{N_n, n \ge 1\}$ be a sequence of positive integer-valued r.v.'s satisfying (4.10), (4.11), (4.12) and (4.13). Then (4.8) holds.

PROOF. For every $t$ let us define
\[
h_t(x) = \begin{cases} \Bigl(\exp(itx) - 1 - \dfrac{itx}{1+x^2}\Bigr)\dfrac{1+x^2}{x^2} & \text{for } x \ne 0, \\[1ex] -\tfrac{1}{2}t^2 & \text{for } x = 0. \end{cases}
\]
The function $h_t$ is continuous and bounded (i.e. there exists a constant $L = L_t > 0$ such that $|h_t(x)| \le L_t$ for all $x$). Thus,
\[
(4.25)\qquad \max_{1\le k\le N_n} |E_{k-1}\exp(itY_{nk}) - 1| = \max_{1\le k\le N_n} \Bigl|it\,E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2} + E_{k-1}h_t(Y_{nk})\frac{Y_{nk}^2}{1+Y_{nk}^2}\Bigr| \le |t|\,b_{nN_n} + L_t\,d_{nN_n} \xrightarrow{P} 0,
\]
as (4.23) and (4.24) hold.
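The two properties of $h_t$ used in (4.25), continuity at $0$ with the value $h_t(0) = -t^2/2$ and boundedness on the whole line, are easy to check numerically. A small sketch (the grid and the value $t = 1.7$ are arbitrary choices made for the illustration):

```python
import cmath

def h(t, x):
    # h_t(x) = (exp(itx) - 1 - itx/(1+x^2)) * (1+x^2)/x^2 for x != 0,
    # with the continuous extension h_t(0) = -t^2/2
    if x == 0.0:
        return complex(-0.5 * t * t)
    return (cmath.exp(1j * t * x) - 1 - 1j * t * x / (1 + x * x)) * (1 + x * x) / (x * x)

t = 1.7
# continuity at 0: h_t(x) approaches h_t(0) = -t^2/2
gap = abs(h(t, 1e-4) - h(t, 0.0))
# boundedness: |h_t| stays under a finite L_t on a wide grid
L = max(abs(h(t, k / 100.0)) for k in range(-10**4, 10**4 + 1))
print(gap, L)
```

For $|x| \to \infty$ the factor $(1+x^2)/x^2$ tends to $1$ while the first factor stays bounded by $2 + |t|$, which is why a finite $L_t$ exists.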
Furthermore, from the Taylor series it is seen that
\[
\sum_{k=1}^{N_n} |\log(1+z_{nk}) - z_{nk}| \le \sum_{k=1}^{N_n} |z_{nk}|^2 .
\]
This with $z_{nk} = E_{k-1}\exp(itY_{nk}) - 1$, by (4.25), yields for fixed $t$
\[
|A_n^t| = \Bigl|\sum_{k=1}^{N_n} \log E_{k-1}\exp(itY_{nk}) - \sum_{k=1}^{N_n} \bigl[E_{k-1}\exp(itY_{nk}) - 1\bigr]\Bigr| \le \sum_{k=1}^{N_n} |E_{k-1}\exp(itY_{nk}) - 1|^2
\]
\[
\le t^2 \sum_{k=1}^{N_n} \Bigl(E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\Bigr)^2 + \bigl(2|t|L_t\,b_{nN_n} + L_t^2\,d_{nN_n}\bigr) \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} \xrightarrow{P} 0,
\]
as (4.10), (4.11) and (4.12) with $y = +\infty$ hold. Since
\[
\log f_{nN_n}(t) = it \sum_{k=1}^{N_n} \Bigl(a_{nk} + E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\Bigr) + \sum_{k=1}^{N_n} E_{k-1}h_t(Y_{nk})\frac{Y_{nk}^2}{1+Y_{nk}^2} + A_n^t ,
\]
then for the proof of (4.8) it is sufficient to show that
\[
(4.26)\qquad R_n^t = \sum_{k=1}^{N_n} E_{k-1}h_t(Y_{nk})\frac{Y_{nk}^2}{1+Y_{nk}^2} \xrightarrow{P} \int_{-\infty}^{+\infty} h_t(x)\,dK(x) .
\]
For arbitrary positive $\varepsilon_1$, $\varepsilon_2$ and $\varepsilon_3$, choose an integer $m$ sufficiently large and a subdivision $x_0 < x_1 < \dots < x_m$, all continuity points of $K$, so that
\[
|B(m)| = \Bigl|\sum_{j=1}^{m} h_t(x_{j-1})\bigl[K(x_j) - K(x_{j-1})\bigr] - \int_{-\infty}^{+\infty} h_t(x)\,dK(x)\Bigr| < \varepsilon_1 ,
\]
\[
\max_{1\le j\le m} |x_j - x_{j-1}| < \varepsilon_2 , \qquad |h_t(x_j) - h_t(x_{j-1})| < \varepsilon_3 ,
\]
and
\[
(4.27)\qquad K(x_0) + K(+\infty) - K(x_m) < \varepsilon_3 .
\]
(We recall that $K$ is a bounded and nondecreasing function such that $K(-\infty) = 0$, and that $x_0 \to -\infty$ and $x_m \to +\infty$ as $m \to \infty$.) Then, it follows that
\[
|C_n(m)| = \Bigl|\sum_{k=1}^{N_n}\sum_{j=1}^{m} E_{k-1}h_t(Y_{nk})\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(x_{j-1} < Y_{nk} \le x_j) - \sum_{k=1}^{N_n}\sum_{j=1}^{m} h_t(x_{j-1})E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(x_{j-1} < Y_{nk} \le x_j)\Bigr| \le \varepsilon_3 \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} ,
\]
\[
|D_n(m)| = \Bigl|R_n^t - \sum_{k=1}^{N_n} E_{k-1}h_t(Y_{nk})\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(x_0 < Y_{nk} \le x_m)\Bigr|
\le L_t \Biggl(\sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(Y_{nk} \le x_0) + \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} - \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(Y_{nk} \le x_m)\Biggr),
\]
where the last factor on the majorant side converges in probability to $K(x_0) + K(+\infty) - K(x_m)$, which is less than $\varepsilon_3$ by (4.27). Thus, by the relations given above, we obtain
\[
R_n^t = \int_{-\infty}^{+\infty} h_t(x)\,dK(x) + B(m) + \sum_{j=1}^{m} h_t(x_{j-1})\Biggl(\sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(x_{j-1} < Y_{nk} \le x_j) - \bigl[K(x_j) - K(x_{j-1})\bigr]\Biggr) + C_n(m) + D_n(m),
\]
which, by (4.12), gives the desired result (4.26).

The proof of Theorem 4.6 is easily based on Proposition 4.13 and Lemma 4.4 and is not detailed here. In order to prove Theorem 4.7 we need the following auxiliary results.
LEMMA 4.14. Under (4.10) and (4.12) with $y = +\infty$, (4.11) is equivalent to
\[
(4.28)\qquad \sum_{k=1}^{N_n} |E_{k-1}\exp(itY_{nk}) - 1|^2 \xrightarrow{P} 0 .
\]
PROOF. From the proof of Proposition 4.13 it follows that (4.11) implies (4.28). The reverse implication can be proved similarly.

LEMMA 4.15. Suppose that (4.14) holds, and that $\tau_{nk} = \tau$, where $\tau$ is a positive constant. Then
\[
(4.29)\qquad \max_{1\le k\le N_n} |a_{nk}| \xrightarrow{P} 0
\]
and there is a constant $M$ depending only on $\tau$ and $t$ such that for every $k$ and all $n$
\[
(4.30)\qquad |E_{k-1}\exp(itY_{nk}) - 1| \le M\,E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} .
\]
Furthermore, in this case (4.10) holds.

PROOF. From (4.14) it follows that for every $0 < \varepsilon < \tau$
\[
\max_{1\le k\le N_n} |a_{nk}| < \varepsilon + \tau \max_{1\le k\le N_n} P_{k-1}(|X_{nk}| \ge \varepsilon) \xrightarrow{P} \varepsilon,
\]
which states that (4.14) implies (4.29). Furthermore, one can see that (4.14) combined with (4.29) implies (4.10). Thus, for the proof of Lemma 4.15 we wish to show that (4.30) holds. To this end we note that by a routine technique of subsequences we can assume that
\[
(4.31)\qquad |a_{nk}| \le \frac{\tau}{2} \quad \text{for every } k \text{ and all } n.
\]
Obviously, there is no loss of generality, because if $(X_{nk})$ does not satisfy (4.31), then we can set $\tilde X_{nk} = X_{nk}\, I(|a_{nk}| \le \tau/2)$. Then $(\tilde X_{nk})$ will form a DIA of r.v.'s adapted to $(F_{nk})$ and, by (4.29),
\[
P\Bigl(\bigcup_{k=1}^{N_n} \{X_{nk} \ne \tilde X_{nk}\}\Bigr) \le P\Bigl(\max_{1\le k\le N_n} |a_{nk}| > \frac{\tau}{2}\Bigr) \to 0
\]
and
\[
P\Bigl(\bigcup_{k=1}^{N_n} \Bigl\{f_{nk}(t) \ne \prod_{j=1}^{k} E_{j-1}\exp(it\tilde X_{nj})\Bigr\}\Bigr) \le P\Bigl(\max_{1\le k\le N_n} |a_{nk}| > \frac{\tau}{2}\Bigr) \to 0.
\]
Furthermore, (4.14) holds with $X_{nk}$ replaced by $\tilde X_{nk}$, and
\[
\tilde a_{nk} := E_{k-1}\tilde X_{nk}\, I(|\tilde X_{nk}| \le \tau) = a_{nk}\, I(|a_{nk}| \le \tau/2), \qquad \tilde Y_{nk} := \tilde X_{nk} - \tilde a_{nk} = Y_{nk}\, I(|a_{nk}| \le \tau/2),
\]
\[
E_{k-1}\frac{\tilde Y_{nk}}{1+\tilde Y_{nk}^2} = E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\, I(|a_{nk}| \le \tau/2), \qquad E_{k-1}\frac{\tilde Y_{nk}^2}{1+\tilde Y_{nk}^2} = E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2}\, I(|a_{nk}| \le \tau/2),
\]
\[
E_{k-1}\exp(it\tilde Y_{nk}) - 1 = \bigl(E_{k-1}\exp(itY_{nk}) - 1\bigr)\, I(|a_{nk}| \le \tau/2),
\]
from which we conclude that (4.10)--(4.13) and (4.28)--(4.30) hold with $\tilde Y_{nk}$ replacing $Y_{nk}$, enabling use to be made of property (4.31). Accordingly, we will prove Lemma 4.15 (and later on Theorem 4.7 and Corollary 4.11) as it stands, assuming also that (4.31) holds.

We see that
\[
|E_{k-1}\exp(itY_{nk}) - 1| \le |E_{k-1}[\exp(itY_{nk}) - 1 - itY_{nk}]\,I(|Y_{nk}| \le \tau)| + |E_{k-1}[\exp(itY_{nk}) - 1]\,I(|Y_{nk}| > \tau)| + |t\,E_{k-1}Y_{nk}\,I(|Y_{nk}| \le \tau)|
\]
\[
\le \tfrac12 t^2 E_{k-1}Y_{nk}^2\, I(|Y_{nk}| \le \tau) + 2P_{k-1}(|Y_{nk}| > \tau) + |t\,E_{k-1}Y_{nk}[I(|Y_{nk}| \le \tau) - I(|X_{nk}| \le \tau)]| + |t\,E_{k-1}Y_{nk}\,I(|X_{nk}| \le \tau)|,
\]
where the first and the second terms are less than
\[
\tfrac12 t^2(1+\tau^2)\,E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} \qquad \text{and} \qquad 2\,\frac{1+\tau^2}{\tau^2}\,E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2},
\]
respectively. Moreover, we have
\[
(4.32)\qquad |E_{k-1}Y_{nk}\,I(|X_{nk}| \le \tau)| = |a_{nk} - a_{nk}P_{k-1}(|X_{nk}| \le \tau)| = |a_{nk}|\,P_{k-1}(|X_{nk}| > \tau) \le \frac{\tau}{2}\,P_{k-1}\Bigl(|Y_{nk}| \ge \frac{\tau}{2}\Bigr)
\]
and
\[
(4.33)\qquad |E_{k-1}Y_{nk}[I(|Y_{nk}| \le \tau) - I(|X_{nk}| \le \tau)]| \le E_{k-1}|Y_{nk}|\, I\Bigl(\frac{\tau}{2} \le |Y_{nk}| \le \frac{3\tau}{2}\Bigr) \le \frac{3\tau}{2}\,P_{k-1}\Bigl(|Y_{nk}| \ge \frac{\tau}{2}\Bigr),
\]
as (4.31) holds. Hence,
\[
|E_{k-1}\exp(itY_{nk}) - 1| \le \Bigl(\tfrac12 t^2(1+\tau^2) + 2(1+\tau|t|)\,\frac{1+(\tau/2)^2}{(\tau/2)^2}\Bigr)\, E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2},
\]
which gives (4.30).

PROOF OF THEOREM 4.7. We need to show that the assumptions of Theorem 4.7 imply (4.8). To this end it is sufficient to prove that (4.10) and (4.11) are satisfied. From Lemmas 4.12 and 4.15 it follows that (4.10) holds, and that
\[
\sum_{k=1}^{N_n} |E_{k-1}\exp(itY_{nk}) - 1|^2 \le M^2\, d_{nN_n} \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} \xrightarrow{P} 0,
\]
i.e. (4.28) is satisfied. Hence, and by Lemma 4.14, we get (4.11).

Corollaries 4.8, 4.9 and 4.10 easily follow from Theorem 4.6 (see also [7], p. 93). In order to prove Corollary 4.11 we need the following auxiliary results.

LEMMA 4.16. (4.20) implies (4.19).
PROOF. By $F_{n,k-1}$-measurability of $a_{nk}$ we obtain for every $\varepsilon > 0$
\[
\sum_{k=1}^{N_n} P_{k-1}(|Y_{nk}| \ge \varepsilon) \le \sum_{k=1}^{N_n} P_{k-1}\Bigl(|X_{nk}| \ge \frac{\varepsilon}{2}\Bigr) + \sum_{k=1}^{N_n} I\Bigl(|a_{nk}| \ge \frac{\varepsilon}{2}\Bigr).
\]
But
\[
P\Bigl(\sum_{k=1}^{N_n} I\Bigl(|a_{nk}| \ge \frac{\varepsilon}{2}\Bigr) \ge \varepsilon\Bigr) = P\Bigl(\max_{1\le k\le N_n} |a_{nk}| \ge \frac{\varepsilon}{2}\Bigr) \to 0
\]
as (4.29) holds. Hence, we get the desired implication.

LEMMA 4.17. (4.15) and (4.20) imply that
\[
(4.34)\qquad \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2} \xrightarrow{P} 0.
\]
PROOF. We note that for every $0 < \varepsilon < \tau$
\[
\Bigl|E_{k-1}Y_{nk}\,I(|Y_{nk}| \le \tau) - E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\Bigr| = \Bigl|E_{k-1}\frac{Y_{nk}^3}{1+Y_{nk}^2}\,I(|Y_{nk}| \le \tau) - E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\,I(|Y_{nk}| > \tau)\Bigr|
\le \varepsilon\,E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} + (\tau+1)\,P_{k-1}(|Y_{nk}| \ge \varepsilon).
\]
Hence, using (4.32) and (4.33), we get for every $0 < \varepsilon < \tau/2$
\[
\Bigl|\sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}}{1+Y_{nk}^2}\Bigr| \le \varepsilon \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} + \Bigl(1 + \tau + \frac{\tau}{2} + \frac{3\tau}{2}\Bigr)\sum_{k=1}^{N_n} P_{k-1}(|Y_{nk}| \ge \varepsilon),
\]
which by (4.15) and (4.20) gives (4.34).

PROOF OF COROLLARY 4.11. Taking into account Corollary 4.10 and Lemmas 4.15 and 4.17, we only need to show that (4.15) is satisfied. First we note that for every $k$ and all $n$
\[
E_{k-1}Y_{nk}^2\, I(|X_{nk}| \le \tau) = \mathrm{Var}_{k-1}\, X_{nk}I(|X_{nk}| \le \tau) - a_{nk}^2\, P_{k-1}(|X_{nk}| > \tau),
\]
where $\mathrm{Var}_{k-1}\, Z_{nk} = E_{k-1}Z_{nk}^2 - E_{k-1}^2 Z_{nk}$. Since by (4.31) and (4.20)
\[
\sum_{k=1}^{N_n} a_{nk}^2\, P_{k-1}(|X_{nk}| > \tau) \le (\tau/2)^2 \sum_{k=1}^{N_n} P_{k-1}(|X_{nk}| \ge \tau) \xrightarrow{P} 0,
\]
then
\[
(4.35)\qquad \sum_{k=1}^{N_n} E_{k-1}Y_{nk}^2\, I(|X_{nk}| \le \tau) \xrightarrow{P} D,
\]
as (4.22) holds. But (cf. (4.33))
\[
|E_{k-1}Y_{nk}^2[I(|Y_{nk}| \le \tau) - I(|X_{nk}| \le \tau)]| \le (3\tau/2)^2\, P_{k-1}\Bigl(|Y_{nk}| \ge \frac{\tau}{2}\Bigr).
\]
Hence, and by (4.20), we get
\[
(4.36)\qquad \sum_{k=1}^{N_n} E_{k-1}Y_{nk}^2\, I(|Y_{nk}| \le \tau) \xrightarrow{P} D.
\]
Moreover, by (4.20), we have for every $\varepsilon < \tau$
\[
\Bigl|\sum_{k=1}^{N_n} E_{k-1}Y_{nk}^2\, I(|Y_{nk}| \le \tau) - \sum_{k=1}^{N_n} E_{k-1}Y_{nk}^2\, I(|Y_{nk}| \le \varepsilon)\Bigr| \le \tau^2 \sum_{k=1}^{N_n} P_{k-1}(|Y_{nk}| \ge \varepsilon) \xrightarrow{P} 0,
\]
and the same is true for every $\varepsilon > \tau$; it suffices to interchange $\varepsilon$ and $\tau$ on the majorant side. Thus, using (4.20) and (4.36), it follows that
\[
(4.37)\qquad \sum_{k=1}^{N_n} E_{k-1}Y_{nk}^2\, I(|Y_{nk}| \le \varepsilon) \xrightarrow{P} D, \quad \text{for all } \varepsilon > 0.
\]
Since for every $\varepsilon > 0$
\[
\frac{1}{1+\varepsilon^2}\sum_{k=1}^{N_n} E_{k-1}Y_{nk}^2\, I(|Y_{nk}| \le \varepsilon) \le \sum_{k=1}^{N_n} E_{k-1}\frac{Y_{nk}^2}{1+Y_{nk}^2} \le \sum_{k=1}^{N_n} E_{k-1}Y_{nk}^2\, I(|Y_{nk}| \le \varepsilon) + \sum_{k=1}^{N_n} P_{k-1}(|Y_{nk}| \ge \varepsilon),
\]
then (4.20) and (4.37) imply (4.15).
ACKNOWLEDGEMENTS . I am indebted to Professor Dominik Szynal for having introduced me to this subject and for his constant encouragement and useful advice. Also, I am grateful to Professor Zdzisław Rychlik for helpful discussion.
Bibliography

[1] Aerts M., Callaert H., The accuracy of the normal approximation for U-statistics with a random summation index converging to a random variable. Acta Sci. Math. (Szeged) 53 (1989), No. 3-4, 385-394. MR 91a:60056
[2] Aerts M., Callaert H., The convergence rate of sequential fixed-width confidence intervals for regular functionals. Austral. J. Statist. 28 (1986), 97-106. MR 88a:62224
[3] Aerts M., Callaert H., The exact approximation order in the central limit theorem for random U-statistics. Sequential Anal. 5 (1986), 19-35. MR 87g:62031
[4] Ahmad I.A., Lin P.E., A Berry–Esseen type theorem. Utilitas Math. 11 (1977), 153-160. MR 55 #6525
[5] Aldous D.J., Weak convergence of randomly indexed sequences of random variables. Math. Proc. Camb. Philos. Soc. 83 (1978), 117-126.
[6] Aldous D.J., Eagleson G.K., On mixing and stability of limit theorems. Ann. Probab. 6 (1978), No. 2, 325-331.
[7] Anscombe F.J., Large-sample theory of sequential estimation. Proc. Cambridge Philos. Soc. 48 (1952), 600-607.
[8] Anscombe F.J., Sequential estimation. J. Royal Stat. Soc., Ser. B, 15 (1953), 1-21.
[9] Atakuziev D., On the convergence rate in a limit theorem for supercritical branching processes and its application in limit theorems for sums of a random number of summands. (in Russian) Izv. Akad. Nauk UzSSR, Ser. fiz.-mat. nauk, No. 2 (1978), 3-8.
[10] Atakuziev D., An estimate of the rate of convergence in an integral and a local transfer theorem. (in Russian) Doklady Akad. Nauk UzSSR 6 (1978), 3-5.
[11] Azlarov T.A., Atakuziev D.A., Džamirzaev A.A., A summation scheme for random variables with a geometrically distributed random index. (in Russian) In: Limit Theorems for Random Processes, Tashkent, UzSSR (1977), 6-21.
[12] Azlarov T.A., Džamirzaev A.A., On the relative stability of sums of a random number of random variables. (in Russian) Izv. Akad. Nauk UzSSR, No. 2 (1972), 7-14.
[13] Azlarov T.A., Džamirzaev A.A., Uniform estimates in a transfer theorem. (in Russian) In: Random Processes and Statistical Inference, vol. V, Tashkent, UzSSR (1975), 10-14.
[14] Babu G.J., Ghosh M., A random functional central limit theorem for martingales. Acta Math. Acad. Sci. Hungar. 27 (1976), 301-306.
[15] Barndorff-Nielsen O., On the limit distribution of the maximum of a random number of independent random variables. Acta Math. Acad. Sci. Hungar. 15 (1964), 399-403.
[16] Bartmańska B., Szynal D., On nonuniform estimates of the rate of convergence in the central limit theorem for functions of the average of independent random variables. In: Mathematical Statistics and Probability Theory, Proc. 6th Pannonian Symp. on Math. Statist., Bad Tatzmannsdorf, Austria, Sept. 14-20, 1986.
[17] Barsov S.S., On the accuracy of the normal approximation of the distributions of random sums of random vectors. (in Russian) Teor. Verojatnost. Primenen. 30 (1985), 351-354.
[18] Batirov H.B., Manevič D.V., General estimation theorems for the distribution of sums of a random number of random summands. (in Russian) Mat. Zametki 34 (1983), 145-152.
[19] Bethmann J., The Lindeberg–Feller theorem for sums of a random number of independent random variables in a triangular array. Theory Probab. Appl. 33 (1988), No. 2, 354-359.
[20] Beśka M., Kłopotowski A., Słomiński L., Limit theorems for random sums of dependent d-dimensional random vectors. Z. Wahrsch. Verw. Gebiete 61 (1982), 43-57.
[21] Bikelis A., On estimates of the remainder term in the central limit theorem. (in Russian) Liet. Mat. Rink. 6 (1966), 323-346.
[22] Billingsley P., Limit theorems for randomly selected partial sums. Ann. Math. Statist. 33 (1962), 85-92.
[23] Billingsley P., Convergence of Probability Measures. Wiley: New York 1968.
[24] Billingsley P., Probability and Measure. Wiley: New York 1979.
[25] Blum J.R., Hanson D.L., Rosenblatt J.I., On the central limit theorem for the sum of a random number of independent random variables. Z. Wahrsch. Verw. Gebiete 1 (1963), 389-393.
[26] Borodin A.N., Some limit theorems for processes with random time. (in Russian) Teor. Verojatnost. Primenen. 24 (1979), 754-770.
[27] Brown B.M., Eagleson G.K., Martingale convergence to infinitely divisible laws with finite variances. Trans. Amer. Math. Soc. 162 (1971), 449-453.
[28] Butzer P.L., Schulz D., General random sum limit theorems for martingales with large O-rates. Z. Anal. und Anwend. 2 (1983), 97-109.
[29] Butzer P.L., Schulz D., The random martingale central limit theorem and weak law of large numbers with o-rates. Acta Sci. Math. 45 (1983), 81-94.
[30] Callaert H., Janssen P., The convergence rate of fixed-width sequential confidence intervals for the mean. Sankhyā: The Indian Journal of Statist., Ser. A, 43 (1981), 211-219.
[31] Callaert H., Janssen P., A note on the convergence rate of random sums. Rev. Roum. Math. Pures Appl. 28 (1983), 148-151.
[32] Chow Y.S., Robbins H., On the asymptotic theory of fixed-width sequential confidence intervals for the mean. Ann. Math. Statist. 36 (1965), 457-462.
[33] Chow Y.S., Hsiung Ch.A., Lai T.L., Extended renewal theory and moment convergence in Anscombe's theorem. Ann. Probab. 7 (1979), 304-318.
[34] Chow Y.S., Yu K.F., The performance of a sequential procedure for the estimation of the mean. Ann. Statist. 9 (1981), 184-189.
[35] Chung K.L., A Course in Probability Theory. Harcourt: New York 1968.
[36] Csenki A., On the convergence rate of fixed-width sequential confidence intervals. Scand. Actuarial J. ?? (1980), 107-111.
[37] Csörgő M., On the strong law of large numbers and the central limit theorem for martingales. Trans. Amer. Math. Soc. 131 (1968), 259-275.
[38] Csörgő M., Csörgő S., An invariance principle for the empirical process with random sample size. Bull. Amer. Math. Soc. 76 (1970), 706-710.
[39] Csörgő M., Csörgő S., On weak convergence of randomly selected partial sums. Acta Sci. Math. 34 (1973), 53-60.
[40] Csörgő M., Fischler R., Departure from independence: the strong law, standard and random-sums central limit theorems. Acta Math. Acad. Sci. Hungar. 21 (1970), 105-114.
[41] Csörgő M., Fischler R., On mixing and the central limit theorem. Tôhoku Math. J. 23 (1971), 139-145.
[42] Csörgő M., Fischler R., Some examples and results in the theory of mixing and random-sum central limit theorems. Period. Math. Hungar. 3 (1973), 41-57.
[43] Csörgő S., On limit distributions of sequences of random variables with random indices. Acta Math. Acad. Sci. Hungar. 25 (1974), 227-232.
[44] Csörgő S., On weak convergence of the empirical process with random sample size. Acta Sci. Math. 36 (1974), 17-25.
[45] Csörgő M., Rychlik Z., On random limit and moderate deviation theorems: a collection of five papers. Carleton Math. Lecture Note 23 (1979).
[46] Csörgő M., Rychlik Z., Weak convergence of sequences of random elements with random indices. Math. Proc. Camb. Philos. Soc. 88 (1980), 171-174.
[47] Csörgő M., Rychlik Z., Asymptotic properties of randomly indexed sequences of random variables. Canad. J. Statist. 9 (1981), 101-107.
[48] Csörgő S., Rychlik Z., Rate of convergence in the strong law of large numbers. Probab. Math. Statist. 5 (1985), 99-111.
[49] Dobrushin R.L., A lemma on the limit of a composite function. (in Russian) Uspehi Mat. Nauk 10 (1955), 157-159.
[50] Durrett R.T., Resnick S.I., Weak convergence with random indices. Stochastic Processes Appl. 5 (1977), 213-220.
[51] Englund G., A remainder term estimate in a random-sum central limit theorem. Teor. Verojatnost. Primenen. 28 (1983), 143-149.
[52] Dvoretzky A., Asymptotic normality for sums of dependent random variables. Proc. Sixth Berkeley Symp. Math. Statist. Probab. 2 (1972), 79-94.
[53] Dvoretzky A., On stopping time directed convergence. Bull. Amer. Math. Soc. 82 (1976), 347-349.
[54] Eagleson G.K., Martingale convergence to mixtures of infinitely divisible laws. Ann. Probab. 3 (1975), No. 4, 557-562.
[55] Formanov Sh.K., A generalization of Robbins' theorem. (in Russian) In: Random Processes and Related Questions (S.H. Sirazhdinov, ed.), Tashkent 1970, 84-91.
[56] Gaenssler P., Strobel J., Stute W., On central limit theorems for martingale triangular arrays. Acta Math. Acad. Sci. Hungar. 31 (1978), 205-216.
[57] Gänssler P., Häusler E., Remarks on the functional central limit theorem for martingales. Z. Wahrsch. Verw. Gebiete 50 (1979), 237-243.
[58] Gafurov M.U., On an estimate of the rate of convergence in the central limit theorem for sums with a random number of summands. (in Russian) In: Random Processes and Statistical Inference, vol. IV, Tashkent, UzSSR (1974), 35-39.
[59] Gafurov M.U., Soliev Sh.T., Some estimates of the rate of convergence in limit theorems for sums of a random number of independent random variables. (in Russian) In: Random Processes and Statistical Inference, vol. IV, Tashkent, UzSSR (1974), 40-51.
[60] Galambos J., A remark on the asymptotic theory of sums with random size. Math. Proc. Camb. Philos. Soc. 79 (1976), 531.
[61] Ghosh M., Rate of convergence to normality for random means: applications to sequential estimation. Sankhyā, Ser. A, 42 (1980), Pt. 3, 231-240.
[62] Gnedenko B.V., On the connection of the theory of summation of independent random variables with problems of queueing theory and reliability theory. (in Russian) Rev. Roum. Math. Pures Appl. 12 (1967), No. 9, 1243-1253.
[63] Gnedenko B.V., On a transfer theorem. (in Russian) Dokl. Akad. Nauk SSSR 187 (1969), No. 1, 15-17.
[64] Gnedenko B.V., Limit theorems for sums of a random number of positive independent random variables. Proc. Sixth Berkeley Symp. Math. Statist. Probab., Vol. II (1972), pp. 537-549. Univ. California Press: Berkeley.
[65] Gnedenko B.V., Fahim G., On a transfer theorem. Dokl. Akad. Nauk SSSR 187 (1969), No. 1, 15-17.
[66] Goffman M., Danieljan Z.A., On the arcsine law for sums of independent random variables up to a random index (in Russian). Sbornik: Predel'nye teoremy dlja sluchainyh processov, UzSSR: Tashkent, 1977, pp. 40-46.
[67] Grishchenko V.A., Approximation of distributions of sums of a random number of random variables (in Russian). Dokl. Akad. Nauk Ukr. SSR, Ser. A (1982), No. 4, 13-14.
[68] Grishchenko V.A., On random summation. Dokl. Akad. Nauk Ukr. SSR, Ser. A (1982), No. 9, 63-64.
[69] Guiasu S., Asymptotic distribution for stochastic processes with random discrete time. Trans. Fourth Prague Conf. on Inform. Theory, Statist. Dec. Funct., Random Process. (1965), pp. 307-321. Publishing House of the Czechoslovak Acad. of Sciences: Prague.
[70] Guiasu S., On limiting distribution for stochastic processes with random discrete time. Rev. Roum. Math. Pures Appl. 10 (1965), No. 4, 481-503.
[71] Guiasu S., On the asymptotic distribution of sequences of random variables with random indices. Ann. Math. Statist. 42 (1971), 2018-2028.
[72] Gut A., On convergence in r-mean of some first passage times and randomly indexed partial sums. Ann. Probab. 2 (1974), No. 2, 321-323.
[73] Gut A., Complete convergence and convergence rates for randomly indexed partial sums with an application to some first passage times. Acta Math. Hungar. 42 (1983), 225-232. Correction: ibidem 45 (1985), No. 1-2, 235-236.
[74] Gut A., On the law of the iterated logarithm for randomly indexed partial sums with two applications. Studia Sci. Math. Hungar. 19 (1984), .... (Report 17, 1983)
[75] Gut A., Stopped Random Walks: Limit Theorems and Applications. Springer-Verlag: New York, 1987, 199 pp.
[76] Gut A., Ahlberg P., On the theory of chromatography based upon renewal theory and a central limit theorem for randomly indexed partial sums of random variables. Chemica Scripta 18 (1981), No. 5, 248-255.
[77] Gut A., Janson S., The limiting behaviour of certain stopped sums and some applications. Scand. J. Statist. 10 (1983), 281-292.
[78] Gut A., Janson S., Converse results for existence of moments and uniform integrability for stopped random walks. Ann. Probab. 14 (1986), No. 4, 1296-1317.
[79] Hall P., Heyde C.C., Martingale Limit Theory and Its Application. Academic Press: New York, 1980.
[80] Helland I.S., Central limit theorems for martingales with discrete or continuous time. Scand. J. Statist. 9 (1982), 79-94.
[81] Jagers P., A limit theorem for sums of random numbers of i.i.d. random variables. In: Mathematics and Statistics. Essays in Honour of Harald Bergström (Jagers P. and Råde L., eds.), Göteborg, 1973, pp. 33-39.
[82] Jakubowski A., Principle of conditioning in limit theorems for sums of random variables. Ann. Probab. 14 (1986), No. 3, 902-915.
[83] Komekov B., On the rate of convergence in the central limit theorem for sums of a random number of random summands (in Russian). Sbornik: Predel'nye teoremy i matematicheskaja statistika, UzSSR: Tashkent, 1976, pp. 53-61.
[84] Korolev V.Yu., Convergence of moments of sums of a random number of independent random variables (in Russian). Teor. Verojatnost. Primenen. 30 (1985), No. 2, 361-364.
[85] Korolev V.Yu. (1987), ..., Lecture Notes in Math., vol. 1233, 36-40.
[86] Krajka A., Rychlik Z., Weak convergence of random reversed martingales with o-rates. Bull. Pol. Ac.: Math. 33 (1985), No. 1-2, 65-71.
[87] Kruglov V.M., On the convergence of distributions of sums of a random number of independent random variables to the normal distribution. Vestnik Moskov. Univ., Ser. Mat. (1976), No. 5, 5-12.
[88] Kruglov V.M., The convergence of moments of sums. Teor. Verojatnost. Primenen. 33 (1988), 339-342.
[89] Kruglov V.M., Korolev V.Yu., Limit Theorems for Random Sums. Izdat. Mosk. Univ.: Moscow, 1990.
[90] Kubacki K.S., On a random-sums limit problem. In: Probability and Statistical Decision Theory (Proc. 4th Pannonian Symp. on Math. Statist., Bad Tatzmannsdorf, Austria, 1983), Vol. A (Konecny F., Mogyoródi J., Wertz W., eds.), pp. 231-263. Joint edition: Reidel Publ. Comp., Dordrecht (Holland) - Boston (Mass.), and Akadémiai Kiadó, Budapest, 1985.
[91] Kubacki K.S., On necessary and sufficient conditions for the weak convergence of randomly indexed sequences of random variables (in Polish). Doctoral dissertation, Institute of Mathematics, Polish Academy of Sciences, Warsaw, 1986.
[92] Kubacki K.S., Weak convergence of random-sums to infinitely divisible distributions. Studia Sci. Math. Hungar. 22, No. 1-2, 275-285.
[93] Kubacki K.S., A note on a Katz-Petrov type theorem. Bull. Pol. Ac.: Math. 36 (1987), 315-326.
[94] Kubacki K.S., Remarks on the convergence of moments in a random central limit theorem. Ukrain. Math. J. 41 (1989), 1105-1108. [Translated from Ukrain. Mat. Zhurnal 41 (1989), 1282-1286.]
[95] Kubacki K.S., The convergence of moments in the martingale central limit theorem. In: Limit Theorems in Probability and Statistics (Proc. Third Hungarian Colloq. on Limit Theorems in Probability and Statistics, Pécs, July 3-7, 1989) (Berkes I., Csáki E., Révész P., eds.), North-Holland: Amsterdam, 1990, pp. 327-348.
[96] Kubacki K.S., On the convergence of moments in a martingale central limit theorem. Teor. Verojatnost. Primenen. 40 (1995), 361-386. [Theory Probab. Appl. 40 (1995), 273-284]
[97] Kubacki K.S., Szynal D., Weak convergence of martingales with random indices to infinitely divisible laws. Acta Math. Hungar. 42 (1983), No. 1-2, 143-151.
[98] Kubacki K.S., Szynal D., Weak convergence of randomly indexed sequences of random variables. Bull. Pol. Ac.: Math. 33 (1985), 201-210.
[99] Kubacki K.S., Szynal D., On a random version of the Anscombe condition and its applications. Probab. Math. Statist. 7 (1986), 125-147.
[100] Kubacki K.S., Szynal D., On a random central limit theorem of Robbins type. Bull. Pol. Ac.: Math. 35 (1987), 223-231.
[101] Kubacki K.S., Szynal D., On the rate of convergence in a random version of the central limit theorem. Bull. Pol. Ac.: Math. 35 (1987), 607-616.
[102] Kubacki K.S., Szynal D., The convergence of moments in a random limit theorem of H. Robbins type. Teor. Verojatnost. Primenen. 33 (1988), 83-93.
[103] Kubacki K.S., Szynal D., On the rate of convergence in a random central limit theorem. Probab. Math. Statist. 9 (1988), 95-103.
[104] Landers D., Rogge L., The exact approximation order in the central limit theorem for random summation. Z. Wahrsch. Verw. Gebiete 36 (1976), 269-283.
[105] Landers D., Rogge L., A counterexample in the approximation theory of random summation. Ann. Probab. 5 (1977), No. 6, 1018-1023.
[106] Landers D., Rogge L., On nonuniform Gaussian approximation for random summation. Metrika 25 (1978), 95-114.
[107] Loève M., Probability Theory. Van Nostrand: Princeton, 1960.
[108] Lukacs E., Characteristic Functions. Griffin: London, 1960.
[109] Łagodowski Z.A., Rychlik Z., Complete convergence and convergence rates for randomly indexed sums of random variables with multidimensional indices. Bull. Pol. Ac.: Math. 33 (1985), 219-223.
[110] Łagodowski Z.A., Rychlik Z., Rate of convergence in the strong law of large numbers for martingales. Probab. Th. Rel. Fields 71 (1986), 467-476.
[111] Mamatov M., Nematov I., Some estimates of the remainder term in the central limit theorem for sums of a random number of independent random variables (in Russian). Izv. Akad. Nauk UzSSR, Ser. fiz.-mat. nauk (1976), No. 1, 10-14.
[112] Marushin M.N., Krivorukov V.P., Some remarks on the central limit theorem and the theory of even-order moments for sums of a random number of independent identically distributed random variables (in Russian). Ukrain. Mat. Zhurnal 36 (1984), No. 1, 22-28.
[113] Mogyoródi J., On limiting distributions for sums of a random number of independent random variables. Magyar Tudományos Akadémia Matematikai Kutató Intézet Közleményei 6A (1961), 365-371. [Publications of the Math. Institute of the Hungarian Acad. of Sciences]
[114] Mogyoródi J., A central limit theorem for the sum of a random number of independent random variables. Publications of the Mathematical Institute of the Hungarian Academy of Sciences 7A (1962), No. 3, 409-424.
[115] Mogyoródi J., Limit distributions for sequences of random variables with random indices. Trans. Fourth Prague Conf. on Inform. Theory, Statist. Dec. Funct., Random Process. (1965), pp. 463-470. Publishing House of the Czechoslovak Acad. of Sciences: Prague.
[116] Mogyoródi J., On the law of large numbers for the sum of a random number of independent random variables. Annales Univ. Sci. Budapest. de Eötvös Nom., Sect. Math. 8 (1965), 33-38.
[117] Mogyoródi J., A remark on stable sequences of random variables and a limit distribution theorem for a random sum of independent random variables. Acta Math. Acad. Sci. Hungar. 17 (1966), No. 3-4, 401-409.
[118] Morris K.W., Szynal D., On the convergence rate in the central limit theorem for some functions of the average of independent random variables. Probab. Math. Statist. 3 (1982), 85-95.
[119] Nematov I., On a uniform estimate for sums of a random number of independent random variables (in Russian). Nauchnye Trudy Tashkent. Gosudar. Univ., vypusk 460 (1974), pp. 90-96.
[120] Osipov L.V., Petrov V.V., On the estimation of the remainder in the central limit theorem (in Russian). Teor. Verojatnost. Primenen. 12 (1967), 322-329.
[121] Paditz L., Rychlik Z., A note on the Katz type theorem for random sums. Bull. Pol. Ac.: Math. 34 (1986), 723-727.
[122] Partyka D., Szynal D., On some properties of the essential convergence in law and their applications. Bull. Pol. Ac.: Math. 32 (1985), 211-217.
[123] Petrov V.V., An estimate of the deviation of the distribution of a sum of independent random variables from the normal law (in Russian). Dokl. Akad. Nauk SSSR 160 (1965), 1013-1015.
[124] Petrov V.V., Sums of Independent Random Variables. Nauka: Moscow, 1972.
[125] Prakasa Rao B.L.S., Random central limit theorems for martingales. Acta Math. Acad. Sci. Hungar. 20 (1969), No. 1-2, 217-222.
[126] Prakasa Rao B.L.S., On the rate of convergence in the random central limit theorem for martingales. Bull. Acad. Pol. Sci. 22 (1974), No. 12, 1255-1260.
[127] Prakasa Rao B.L.S., Remark on the rate of convergence in the random central limit theorem for mixing sequences. Z. Wahrsch. Verw. Gebiete 31 (1975), 157-160.
[128] Prakasa Rao B.L.S., Remarks on the rate of convergence in the invariance principle for random sums. Proc. Colloquium on Limit Theorems in Probability and Statistics, Budapest, Hungary, 1984.
[129] Prakasa Rao B.L.S., Asymptotic Theory of Statistical Inference. Wiley: New York, 1987.
[130] Rényi A., On the asymptotic distribution of the sum of a random number of independent random variables. Acta Math. Acad. Sci. Hungar. 8 (1957), 193-197.
[131] Rényi A., On mixing sequences of sets. Acta Math. Acad. Sci. Hungar. 9 (1958), 215-228.
[132] Rényi A., On the central limit theorem for the sum of a random number of independent random variables. Acta Math. Acad. Sci. Hungar. 11 (1960), 97-102.
[133] Rényi A., On stable sequences of events. Sankhyā, Ser. A, 25 (1963), 293-302.
[134] Rényi A., Probability Theory. Akadémiai Kiadó: Budapest, 1970.
[135] Rényi A., Révész P., On mixing sequences of random variables. Acta Math. Acad. Sci. Hungar. 9 (1958), 389-393.
[136] Révész P., On sequences of quasi-equivalent events, I. Publ. Math. Inst. Hungar. Acad. Sci. 8A (1963), No. 1-2, 73-83.
[137] Richter W., Übertragung von Grenzaussagen für Folgen zufälliger Elemente auf Folgen mit zufälligen Indizes. Math. Nachr. 29 (1965), 347-365.
[138] Richter W., Limit theorems for sequences of random variables with sequences of random indices (in German). Teor. Verojatnost. Primenen. 10 (1965), 82-93. [Theory Probab. Appl. 10, 74-84]
[139] Robbins H., On the asymptotic distribution of the sum of a random number of random variables. Proc. Nat. Acad. Sci. USA 34 (1948), 162-163.
[140] Robbins H., The asymptotic distribution of the sum of a random number of random variables. Bull. Amer. Math. Soc. 54 (1948), No. 12, 1151-1161.
[141] Rosiński J., Limit theorems for randomly indexed sums of random vectors. Colloquium Mathematicum 34 (1975), No. 1, 91-107.
[142] Rosiński J., Weak compactness of laws of random sums of identically distributed random vectors in Banach spaces. Colloquium Mathematicum 35 (1976), No. 2, 313-325.
[143] Rychlik E., Asymptotic distributions of sums of a random number of independent random variables. Bull. Acad. Pol. Sci., Ser. Sci. Math. 29 (1981), No. 3-4, 173-177.
[144] Rychlik E., Some limitary theorems for randomly indexed sequences of random variables. Bull. Pol. Ac.: Math. 31 (1983), No. 1-2, 81-87.
[145] Rychlik Z., On some inequalities for the concentration function of the sum of a random number of independent random variables. Bull. Acad. Pol. Sci., Ser. Sci. Math. 22 (1974), No. 1, 65-70.
[146] Rychlik Z., A central limit theorem for sums of a random number of independent random variables. Colloquium Mathematicum 35 (1976), 147-158.
[147] Rychlik Z., On the rate of convergence in the random-sum central limit theorem. Liet. Matemat. Rink. 17 (1977), No. 1, 171-178.
[148] Rychlik Z., A central limit theorem for martingales. Liet. Matemat. Rink. 18 (1978), 139-145.
[149] Rychlik Z., Martingale random central limit theorems. Acta Math. Acad. Sci. Hungar. 34 (1979), No. 1-2, 129-139.
[150] Rychlik Z., Limit distributions of randomly indexed sequences of random variables (in Polish). Habilitation thesis, UMCS: Lublin, 1980, 110 pp.
[151] Rychlik Z., A remainder term estimate in a random-sum central limit theorem. Bull. Pol. Ac.: Math. 33 (1985), No. 1-2, 57-63.
[152] Rychlik Z., Szynal D., On the limit behaviour of sums of a random number of independent random variables. Colloquium Mathematicum 28 (1973), No. 1, 147-159.
[153] Rychlik Z., Szynal D., On the convergence rates in the central limit theorem for the sums of a random number of independent identically distributed random variables. Bull. Acad. Pol. Sci., Ser. Sci. Math. 22 (1974), No. 7, 683-690.
[154] Rychlik Z., Szynal D., Convergence rates in the central limit theorem for sums of a random number of independent random variables. Teor. Verojatnost. Primenen. 20 (1975), No. 3, ...
[155] Rychlik Z., Szynal D., On the rate of approximation in the random-sums central limit theorem. Teor. Verojatnost. Primenen. 24 (1979), 614-620.
[156] Scott D.J., Central limit theorems for martingales and for processes with stationary increments using a Skorokhod representation approach. Adv. Appl. Probab. 5 (1973), 119-137.
[157] Shiryaev A.N., Probability. Nauka: Moscow, 1980.
[158] Siraždinov S.H., Gafurov M.U., Saliev Sh.T., The limiting behaviour of the distribution of sums of a random number of random variables (in Russian). Sbornik: Sluchainye processy i statisticheskie vyvody, vypusk V, UzSSR: Tashkent, 1975, pp. 141-157.
[159] Siraždinov S.H., Komekov B., On the limiting behaviour of the distribution of randomly indexed sequences of random variables (in Russian). Izv. Akad. Nauk UzSSR (1978), No. 6, 29-36.
[160] Siraždinov S.H., Orazov G., A generalization of a theorem of H. Robbins (in Russian). Sbornik: Predel'nye teoremy i statist. vyvody, Akad. Nauk UzSSR: Tashkent, 1966, 154-162.
[161] Siraždinov S.H., Orazov G., A refinement of a limit theorem of H. Robbins (in Russian). Izv. Akad. Nauk UzSSR, Ser. fiz.-mat. nauk 30 (1966), 30-39.
[162] Sreehari M., An invariance principle for random partial sums. Sankhyā 30 (1969), Ser. A, 433-442.
[163] Sreehari M., Prakasa Rao B.L.S., Rate of convergence in the invariance principle for random sums. Sankhyā 44 (1982), Ser. A, Pt. 1, 144-152.
[164] Srivastava R.C., Estimation of probability density function based on random number of observations with applications. International Statist. Review 41 (1973), No. 1, 77-86.
[165] Studnev Yu.P., A remark on the Katz-Petrov theorem (in Russian). Teor. Verojatnost. Primenen. 10 (1965), 751-753.
[166] Studnev Yu.P., Ignat Yu.I., On a refinement of the central limit theorem and its global version (in Russian). Teor. Verojatnost. Primenen. 12 (1967), 562-567.
[167] Szántai T., On limiting distributions for the sums of random number of random variables concerning the rarefaction of recurrent processes. Studia Sci. Math. Hungar. 6 (1971), 443-452.
[168] Szász D., Freyer B., On the sums of a random number of random variables. Liet. Matemat. Rink. 11 (1971), No. 1, 181-187.
[169] Szász D., On the limiting classes of distributions for sums of a random number of random variables. Teor. Verojatnost. Primenen. 17 (1972), 424-439.
[170] Szász D., Limit theorems for the distributions of the sum of a random number of random variables. Ann. Math. Statist. 43 (1972), 1902-1913.
[171] Szász D., Stability and law of large numbers for sums of a random number of random variables. Acta Sci. Math. 33 (1972), No. 1-2, 269-274.
[172] Szynal D., On almost complete convergence for the sum of a random number of independent random variables. Bull. Acad. Pol. Sci., Ser. Sci. Math. 20 (1972), No. 7, 571-574.
[173] Szynal D., On the limit behaviour of a sequence of quantiles of a sample with a random number of items. Applicationes Mathematicae 13 (1973), No. 3, 321-327.
[174] Szynal D., On the random-sums central limit theorem without moment conditions. Bull. Acad. Pol. Sci., Ser. Sci. Math. 22 (1974), No. 3, 313-318.
[175] Szynal D., Zapała, Convergence rates for weighted sums of random variables with random indices. Colloquium Mathematicum 38 (1977), No. 1, 111-127.
[176] Szynal D., Zięba W., On some type of convergence in law. Bull. Pol. Ac.: Math. 22 (1974), No. 11, 1143-1149.
[177] Szynal D., Zięba W., On some characterization of almost sure convergence. Bull. Pol. Ac.: Math. 34 (1986), No. 9-10, 635-642.
[178] Takahashi S., On the central limit theorem. Tôhoku Math. J. 3 (1951), No. 2, 316-321.
[179] Takahashi S., On the asymptotic distribution of sum of independent random variables. Proc. Japan Acad. 27 (1951), No. 8, 393-400.
[180] Tate R.F., Contributions to the theory of random numbers of random variables. Ph.D. Thesis, University of California, Berkeley, 1952.
[181] Wittenberg H., Limiting distributions of random sums of independent random variables. Z. Wahrsch. Verw. Gebiete 3 (1964), 7-18.
[182] Zięba W., On relations between modes of convergence of a sequence of random elements. Bull. Pol. Ac.: Math. 23 (1975), No. 5, 581-586.
[183] Zięba W., On types of convergence in probability theory and their characterizations (in Polish). Habilitation thesis, UMCS: Lublin, 1988.