Branching random walk with a random environment in time

Chunmao Huang^a, Quansheng Liu^{b,*}

^a Harbin Institute of Technology at Weihai, Department of Mathematics, 264209 Weihai, China
^b Université de Bretagne-Sud, UMR 6205, LMBA, F-56000 Vannes, France

arXiv:1407.7623v1 [math.PR] 29 Jul 2014

Abstract. We consider a branching random walk on R with a stationary and ergodic environment ξ = (ξ_n) indexed by time n ∈ N. Let Z_n be the counting measure of particles of generation n. For the case where the corresponding branching process {Z_n(R)} (n ∈ N) is supercritical, we establish large deviation principles, central limit theorems and a local limit theorem for the sequence of counting measures {Z_n}, and prove that the position R_n (resp. L_n) of the rightmost (resp. leftmost) particles of generation n satisfies a law of large numbers.

AMS 2010 subject classifications. 60J80, 60K37, 60F10, 60F05.

Key words: branching random walk, random environment, large deviation, central limit theorem, local limit theorem.

1 Introduction

A random environment in time is modeled as a stationary and ergodic sequence of random variables, ξ_n, indexed by the time n ∈ N, taking values in some measurable space Θ. Each realization of ξ_n corresponds to a distribution η_n = η(ξ_n) on N × R × R × ⋯. When the environment ξ = (ξ_n) is given, the process can be described as follows. At time 0, there is an initial particle ∅ of generation 0 located at S_∅ = 0 ∈ R; at time 1, it is replaced by N = N(∅) particles of generation 1, located at L_i = L_i(∅), 1 ≤ i ≤ N, where the random vector X(∅) = (N, L_1, L_2, ⋯) ∈ N × R × R × ⋯ has distribution η_0 = η(ξ_0) (given the environment ξ). In general, each particle u = u_1⋯u_n of generation n located at S_u is replaced at time n+1 by N(u) new particles ui of generation n+1, located at
$$S_{ui} = S_u + L_i(u) \quad (1 \le i \le N(u)),$$
where the random vector X(u) = (N(u), L_1(u), L_2(u), ⋯) has distribution η_n = η(ξ_n). Note that the values L_i(u) for i > N(u) play no role in our model; we introduce them only for convenience, and we can for example take L_i(u) = 0 for i > N(u). All particles behave independently conditioned on the environment ξ.

Let (Γ, P_ξ) be the probability space under which the process is defined when the environment ξ is fixed. As usual, P_ξ is called the quenched law. The total probability space can be formulated as the product space (Γ × Θ^N, P), where P = P_ξ ⊗ τ in the sense that for all measurable and positive g, we have
$$\int g \, dP = \int_{\Theta^{\mathbb N}} \left( \int_\Gamma g(\xi, y) \, dP_\xi(y) \right) d\tau(\xi),$$
where τ is the law of the environment ξ. The total probability P is usually called the annealed law. The quenched law P_ξ may be considered as the conditional probability of P given ξ.

Let U = {∅} ∪ ⋃_{n≥1} N^n be the set of all finite sequences u = u_1⋯u_n. By definition, under P_ξ, the random vectors {X(u)}, indexed by u ∈ U, are independent of each other, and each X(u) has distribution η_n = η(ξ_n) if |u| = n, where |u| denotes the length of u. Let T be the Galton-Watson tree with defining elements {N(u)}: (a) ∅ ∈ T; (b) if u ∈ T, then ui ∈ T if and only if 1 ≤ i ≤ N(u); (c) ui ∈ T implies u ∈ T. Let T_n = {u ∈ T : |u| = n} be the set of particles of generation n and
$$Z_n = \sum_{u \in T_n} \delta_{S_u}$$

* Corresponding author at: Université de Bretagne-Sud, UMR 6205, LMBA, F-56000 Vannes, France. Email addresses: [email protected] (C. Huang), [email protected] (Q. Liu).


be the counting measure of particles of generation n, so that for a subset A of R, Z_n(A) is the number of particles of generation n located in A.

For any finite sequence u, let
$$\mathcal X(u) = \sum_{i=1}^{N(u)} \delta_{L_i(u)}$$
be the counting measure corresponding to the random vector X(u), whose increasing points are L_i(u), 1 ≤ i ≤ N(u). Denote $\mathcal X_n = \mathcal X(u_0|n)$, where u_0 = (1,1,⋯) and u_0|n is its restriction to the first n terms, with the convention that u_0|0 = ∅. For simplicity, we introduce the following notation:
$$N_n = \mathcal X_n(\mathbb R), \quad m_n = E_\xi N_n, \quad P_0 = 1 \quad\text{and}\quad P_n = E_\xi Z_n(\mathbb R) = \prod_{i=0}^{n-1} m_i. \tag{1.1}$$

Let
$$\mathcal F_0 = \sigma(\xi), \qquad \mathcal F_n = \sigma\big(\xi, (X(u) : |u| < n)\big) \quad\text{for } n \ge 1,$$
be the σ-field containing all the information concerning the first n generations. It is well known that the sequence {Z_n(R)/P_n} is a non-negative martingale under P_ξ for every ξ with respect to the filtration (F_n); hence it converges almost surely (a.s.) to a random variable denoted by W. Throughout this paper we always assume that
$$E \log m_0 \in (0, \infty) \quad\text{and}\quad E \frac{N}{m_0}\log^+ N < \infty. \tag{1.2}$$
The first condition means that the corresponding branching process in a random environment, {Z_n(R)}, is supercritical; the second implies that W is non-degenerate. Hence (see e.g. Athreya and Karlin (1971, [1]))
$$P_\xi(W > 0) = P_\xi(Z_n(\mathbb R) \to \infty) = \lim_{n\to\infty} P_\xi(Z_n(\mathbb R) > 0) > 0 \quad\text{a.s.}$$
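To make the construction concrete, the following sketch simulates one realization of a branching random walk with a random environment in time under the quenched law. Every modelling choice here (an i.i.d. two-state environment, Poisson offspring numbers with an environment-dependent mean, Gaussian displacements with an environment-dependent drift, and all names and parameter values) is an illustrative assumption and not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment: i.i.d. states 0/1 (an i.i.d. sequence is a special case of
# a stationary and ergodic environment).
def sample_environment(n, p=0.5):
    return (rng.random(n) < p).astype(int)

# Environment-dependent reproduction law eta(xi): Poisson(OFFSPRING_MEAN[xi]) offspring,
# each displaced by an independent N(DRIFT[xi], 1) step from its parent.
OFFSPRING_MEAN = {0: 1.6, 1: 2.4}   # both > 1, so E log m_0 > 0 (supercritical case)
DRIFT = {0: -0.3, 1: 0.5}

def simulate_brwre(env):
    """Return a list `positions`, where positions[n] is the array of S_u, u in T_n."""
    positions = [np.array([0.0])]                  # generation 0: one particle at the origin
    for xi in env:
        parents = positions[-1]
        children = []
        for s in parents:                          # each particle reproduces independently
            n_u = rng.poisson(OFFSPRING_MEAN[xi])  # N(u) ~ eta(xi)
            if n_u > 0:
                children.append(s + DRIFT[xi] + rng.standard_normal(n_u))
        if not children:                           # extinction
            positions.append(np.array([]))
            break
        positions.append(np.concatenate(children))
    return positions

env = sample_environment(12)
Z = simulate_brwre(env)
print([len(g) for g in Z])   # Z_n(R): population sizes along one quenched realization
```

Under (1.2), the population sizes printed above grow like P_n W, and the positions stored in each generation spread linearly in n, which is the scale studied by the large deviation results below.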

In this paper, we are interested in asymptotic properties of the sequence of measures {Z_n}. Our first objective is to show a large deviation principle for {Z_n(n·)} (Theorem 3.2). Our approach uses the Gärtner-Ellis theorem. In the proof, we first demonstrate that the sequence of quenched means {E_ξ Z_n(n·)} satisfies a large deviation principle, and then show that the free energy (1/n) log Z̃_n(t), where Z̃_n(t) = Σ_{u∈T_n} e^{tS_u} denotes the partition function, converges a.s. to a limit that we calculate explicitly (Theorem 3.1). Moreover, we also show that the position R_n (resp. L_n) of the rightmost (resp. leftmost) particles of generation n satisfies a law of large numbers (Theorem 3.4): R_n/n (resp. L_n/n) converges a.s. to a limit that we determine explicitly. These results generalize those of Biggins (1977, [4]), Franchi (1995, [14]) and Chauvin & Rouault (1997, [9]) for the deterministic environment case.

Our second objective is to show central limit theorems and related results for {Z_n}. For a deterministic branching random walk, Kaplan and Asmussen (1976, [21]) proved the following central limit theorem. Assume that m = EN ∈ (1, ∞) and that EX_0(·)/m has mean 0 and variance 1. If EN(log N)^{1+ε} < ∞ for some ε > 0, then
$$m^{-n} Z_n(-\infty, \sqrt n\, x] \to \Phi(x)\, W \quad\text{a.s.} \quad \forall x \in \mathbb R, \tag{1.3}$$
where Φ(x) is the distribution function of the standard normal distribution N(0,1). They also gave a local version of (1.3) under the stronger moment condition that EN(log N)^γ < ∞ for some γ > 3/2. The formula (1.3), which was first conjectured by Harris [16], has been studied by many authors; see e.g. Stam (1966, [32]), Kaplan & Asmussen (1976, [21]), Klebaner (1982, [23]) and Biggins (1990, [7]). We shall show the following version of (1.3) (Theorem 10.2) for a branching random walk in a random environment: under certain moment conditions, the sequence of probability measures Z_n(b_n · + a_n)/Z_n(R), with (a_n, b_n) that we calculate explicitly, converges a.s. in law to the standard normal distribution N(0,1) on the survival event {Z_n(R) → ∞}. The technique of the proof is a mixture of Klebaner (1982) and Biggins (1990), considering the characteristic function and choosing an appropriate truncation function. We shall also show a corresponding local limit theorem (Theorem 10.4) under stronger moment conditions, which generalizes the result of Biggins (1990, Theorem 7) on deterministic branching random walks. From Theorem 10.4 we obtain another form of the local limit theorem (Corollary 10.5), which coincides with the result of Kaplan & Asmussen (1976, Theorem 2) for the deterministic environment case. Moreover, we shall also show large deviation principles and central limit theorems for the probability measures with different normings: E_ξ Z_n(·)/E_ξ Z_n(R), EZ_n(·)/EZ_n(R) and E[Z_n(·)/E_ξ Z_n(R)].


This paper is organized as follows. In Sections 2-5, we consider large deviations. In Section 2, we show large deviation principles for E_ξ Z_n(n·), EZ_n(n·) and E[Z_n(n·)/E_ξ Z_n(R)]. In Section 3, we state a convergence result for the free energy, a large deviation principle for Z_n(n·) and laws of large numbers for R_n and L_n. In Section 4, we prove the results of Section 3. In Section 5, we show a large deviation principle for E_ξ[Z_n(n·)/Z_n(R)]. In Sections 6-12, we study central limit theorems. In Section 6, we consider a branching random walk in a varying environment and state the corresponding limit theorems. In Sections 7 and 8, we prove the results of Section 6. From Sections 9 to 12, we return to a branching random walk in a random environment: in Section 9, we show central limit theorems for E_ξ Z_n(·)/E_ξ Z_n(R), EZ_n(·)/EZ_n(R) and E[Z_n(·)/E_ξ Z_n(R)]; in Section 10, we state a central limit theorem and a local limit theorem for Z_n(·)/Z_n(R), which are proved in Section 11; in Section 12, we show central limit theorems for E_ξ[Z_n(·)/Z_n(R)] and E[Z_n(·)/Z_n(R)].

2 Large deviations for E_ξ Z_n(n·), EZ_n(n·) and E[Z_n(n·)/E_ξ Z_n(R)]

To study large deviations of Z_n, we begin with the study of its quenched and annealed means. For n ∈ N and t ∈ R, let
$$m_n(t) := E_\xi \int e^{tx}\, \mathcal X_n(dx) = E_\xi \sum_{i=1}^{N(u)} e^{t L_i(u)} \quad (u \in T_n) \tag{2.1}$$
be the Laplace transform of the counting measure describing the evolution of the system at time n. In particular,
$$m_0(t) = E_\xi \sum_{i=1}^{N} e^{t L_i}, \qquad m_0(0) = E_\xi N = m_0.$$

We assume that

$$E|L_1| < \infty, \qquad E|\log m_0(t)| < \infty \qquad\text{and}\qquad E\left|\frac{m_0'(t)}{m_0(t)}\right| < \infty.$$
Throughout, Λ(t) := E log m_0(t) denotes the free energy function, Λ*(x) := sup_{t∈R}{tx − Λ(t)} its Legendre transform, and t_- < 0 < t_+ are the endpoints of the interval on which tΛ'(t) − Λ(t) < 0 (cf. the remark after Lemma 4.1).

Corollary 3.3. It is a.s. that
$$\lim_{n\to\infty} \frac{1}{n}\log Z_n[nx, \infty) = -\Lambda^*(x) > 0 \quad \text{if } x \in (\Lambda'(0), \Lambda'(t_+)),$$
$$\lim_{n\to\infty} \frac{1}{n}\log Z_n(-\infty, nx] = -\Lambda^*(x) > 0 \quad \text{if } x \in (\Lambda'(t_-), \Lambda'(0)).$$


For deterministic branching random walks, see Biggins (1977, [4]) and Chauvin & Rouault (1997, [9]).

Remark. x ∈ (Λ'(0), Λ'(t_+)) if and only if x > Λ'(0) and Λ*(x) < 0; x ∈ (Λ'(t_-), Λ'(0)) if and only if x < Λ'(0) and Λ*(x) < 0.
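Since Λ(t) = E log m_0(t) is an expectation over the environment, both Λ and the rate function Λ* appearing in Corollary 3.3 can be evaluated numerically. The sketch below does this for the illustrative two-state environment of the earlier simulation sketch (Poisson offspring, Gaussian steps); the closed form of m_0(t) for that toy model and all parameter values are assumptions made only for illustration.

```python
import numpy as np

# Toy environment of the earlier sketch: state xi in {0,1} with P(xi = 1) = P1,
# Poisson(OFFSPRING_MEAN[xi]) offspring, each displaced by N(DRIFT[xi], 1).
# For that model, m_0(t) = OFFSPRING_MEAN[xi] * exp(t*DRIFT[xi] + t**2/2).
P1 = 0.5
OFFSPRING_MEAN = {0: 1.6, 1: 2.4}
DRIFT = {0: -0.3, 1: 0.5}

def log_m0(t, xi):
    return np.log(OFFSPRING_MEAN[xi]) + t * DRIFT[xi] + 0.5 * t**2

def Lambda(t):
    # Lambda(t) = E log m_0(t); here the expectation over the two-state environment is explicit.
    return (1 - P1) * log_m0(t, 0) + P1 * log_m0(t, 1)

def Lambda_star(x, t_grid=np.linspace(-10.0, 10.0, 4001)):
    # Legendre transform Lambda*(x) = sup_t { t x - Lambda(t) }, approximated on a grid.
    return np.max(t_grid * x - Lambda(t_grid))

# Lambda*(x) < 0 exactly on the interval where, by Corollary 3.3, the number of
# particles near nx grows exponentially.
for x in (-1.0, 0.0, 0.5, 1.5, 3.0):
    print(x, round(Lambda_star(x), 4))
```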

If the set T_n ≠ ∅, let
$$L_n = \min_{u\in T_n} S_u \qquad \Big(\text{resp. } R_n = \max_{u\in T_n} S_u\Big)$$
be the position of the leftmost (resp. rightmost) particles of generation n. We can see that L_n (resp. R_n) satisfies a law of large numbers.

Theorem 3.4 (Asymptotic properties of L_n and R_n). It is a.s. that
$$\lim_{n\to\infty}\frac{L_n}{n} = \Lambda'(t_-), \qquad \lim_{n\to\infty}\frac{R_n}{n} = \Lambda'(t_+).$$

For deterministic branching random walks, see Biggins (1977) and Chauvin & Rouault (1997).
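A crude numerical check of Theorem 3.4 can be made by combining the two earlier sketches. The snippet below assumes that the hypothetical helpers simulate_brwre, sample_environment and Lambda from those sketches are already in scope (so it is not self-contained), and it only illustrates the orders of magnitude at a modest generation number.

```python
import numpy as np

# Predicted limit: Lambda'(t_+), with t_+ the largest t satisfying t*Lambda'(t) - Lambda(t) < 0.
t_grid = np.linspace(0.0, 10.0, 4001)
dLam = np.gradient(Lambda(t_grid), t_grid)          # finite-difference approximation of Lambda'
feasible = t_grid * dLam - Lambda(t_grid) < 0
print("predicted R_n/n ->", round(dLam[feasible][-1], 3))

# Observed R_n/n from one quenched realization of the toy model.
n = 14
positions = simulate_brwre(sample_environment(n))
if len(positions[-1]) > 0:
    print("observed  R_n/n  =", round(positions[-1].max() / (len(positions) - 1), 3))
else:
    print("population died out before generation", n)
```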

4 Proofs of Theorems 3.1 and 3.4

Let us give the proofs of Theorems 3.1 and 3.4, which are composed of several lemmas. Similar arguments have been used in Franchi (1995, [14]) and Chauvin & Rouault (1997). Observe that
$$W_n(t) := \frac{\tilde Z_n(t)}{E_\xi \tilde Z_n(t)} = \frac{\sum_{u\in T_n} e^{tS_u}}{m_0(t)\cdots m_{n-1}(t)}$$
is a martingale; therefore it converges a.s. to a random variable W(t) ∈ [0, ∞). In the deterministic environment case, this martingale has been studied by Kahane & Peyrière (1976), Biggins (1977), Durrett & Liggett (1983), Guivarc'h (1990), Lyons (1997) and Liu (1997, 1998, 2000, 2001), etc., in different contexts. The following lemma concerns the non-degeneracy of W(t).

Lemma 4.1. If t ∈ (t_-, t_+) and EW_1(t) log^+ W_1(t) < ∞, then
$$W(t) > 0 \quad\text{a.s.}$$
If t ≤ t_- or t ≥ t_+, then
$$W(t) = 0 \quad\text{a.s.}$$

Notice that t ∈ (t_-, t_+) is equivalent to tΛ'(t) − Λ(t) < 0. Therefore the lemma is an immediate consequence of Theorem 7.2 of Biggins and Kyprianou (2004) on a branching process in a random environment, or of a result of Kuhlbusch (2004, [22]) on weighted branching processes in a random environment.

Lemma 4.2. If t ∈ (t_-, t_+), then
$$\lim_{n\to\infty}\frac{1}{n}\log\tilde Z_n(t) = \Lambda(t) \quad\text{a.s.} \tag{4.1}$$

Proof. If EW_1(t) log^+ W_1(t) < ∞, then by Lemma 4.1, W(t) > 0 a.s.. Consequently,
$$\frac{1}{n}\log\tilde Z_n(t) = \frac{1}{n}\log W_n(t) + \frac{1}{n}\sum_{i=0}^{n-1}\log m_i(t) \to E\log m_0(t) = \Lambda(t) \quad\text{a.s.}$$

We now consider the general case where EW_1(t) log^+ W_1(t) may be infinite. We only consider the case t ∈ [0, t_+) (the case t ∈ (t_-, 0] can be treated in a similar way, or by considering (−L_u) instead of (L_u)). For the lower bound, we use a truncation argument. For c ∈ N, we construct a new branching random walk in a random environment (BRWRE) using X^c(u) = (N(u) ∧ c, L_1(u), L_2(u), ⋯) instead of X(u) = (N(u), L_1(u), L_2(u), ⋯), where and throughout we write a ∧ b = min(a, b). We shall apply Lemma 4.1 to the new BRWRE. We define m_n^c(t), W_n^c(t), Λ_c(t) and t_+^c for the new BRWRE just as m_n(t), W_n(t), Λ(t) and t_+ were defined for the original BRWRE.


We first show that Λ_c(t) := E log m_0^c(t) ↑ Λ(t) as c ↑ ∞. Clearly, m_0^c(t) = E_ξ Σ_{i=1}^{N∧c} e^{tL_i} ↑ m_0(t) as c ↑ ∞. This leads to E log^+ m_0^c(t) ↑ E log^+ m_0(t) by the monotone convergence theorem. On the other hand, for c ≥ 1, we have
$$\log^- m_0^c(t) \le \log^- m_0^1(t) = \log^- E_\xi e^{tL_1} \le t\, E_\xi|L_1|$$
(as E_ξ e^{tL_1} ≥ e^{−t E_ξ|L_1|} by Jensen's inequality). Therefore, by the condition E|L_1| < ∞ and the dominated convergence theorem, E log^- m_0^c(t) ↓ E log^- m_0(t).

We next prove that for c > 0 large enough, t ∈ [0, t_+^c), which is equivalent to tΛ_c'(t) − Λ_c(t) < 0. Recall that t ∈ [0, t_+) is equivalent to tΛ'(t) − Λ(t) < 0. By the definition of Λ'(t), there exists h > 0 such that
$$t\,\frac{\Lambda(t+h)-\Lambda(t)}{h} - \Lambda(t) < 0.$$
Since Λ_c ↑ Λ as c ↑ ∞, we have for c large enough,
$$t\,\frac{\Lambda_c(t+h)-\Lambda_c(t)}{h} - \Lambda_c(t) < 0. \tag{4.2}$$
The convexity of Λ_c(t) shows that
$$\Lambda_c'(t) \le \frac{\Lambda_c(t+h)-\Lambda_c(t)}{h}. \tag{4.3}$$
Combining (4.3) with (4.2), we obtain for c large enough,
$$t\Lambda_c'(t) - \Lambda_c(t) < 0. \tag{4.4}$$

We finally prove that EW_1^c(t) log^+ W_1^c(t) < ∞. Let Y = W_1^c(t). We define a random variable X whose distribution is determined by E_ξ g(X) = E_ξ Y g(Y) for all bounded and measurable functions g (notice that E_ξ Y = 1 by definition). For x ∈ R, let
$$l(x) = \begin{cases} x/e & \text{if } x < e, \\ \log x & \text{if } x \ge e. \end{cases}$$
It is clear that l is concave and log^+ x ≤ l(x) ≤ 1 + log^+ x for all x ∈ R. Thus
$$E_\xi Y\log^+ Y = E_\xi\log^+ X \le E_\xi l(X) \le l(E_\xi X) = l(E_\xi Y^2) \le 1 + \log^+ E_\xi Y^2 \le 1 + \log^+\left(\frac{c\, m_0^c(2t)}{m_0^c(t)^2}\right),$$
where the last inequality holds since (Σ_{i=1}^{N∧c} e^{tL_i})^2 ≤ (N∧c) Σ_{i=1}^{N∧c} e^{2tL_i}. Taking expectations in the above inequality, we get
$$EW_1^c(t)\log^+ W_1^c(t) = EY\log^+ Y \le 1 + E\log^+\left(\frac{c\, m_0^c(2t)}{m_0^c(t)^2}\right) \le 1 + \log c + E\log^+ m_0(2t) + 2E\log^- m_0^c(t) < \infty.$$

We have therefore proved that for c > 0 large enough, the new BRWRE satisfies the conditions of Lemma 4.1, so that
$$\lim_{n\to\infty}\frac{1}{n}\log\tilde Z_n^c(t) = E\log m_0^c(t) = \Lambda_c(t) \quad\text{a.s.}$$
Notice that $\tilde Z_n(t) \ge \tilde Z_n^c(t)$. It follows that
$$\liminf_{n\to\infty}\frac{1}{n}\log\tilde Z_n(t) \ge \Lambda_c(t) \quad\text{a.s.}$$
Letting c ↑ ∞, we obtain
$$\liminf_{n\to\infty}\frac{1}{n}\log\tilde Z_n(t) \ge \Lambda(t) \quad\text{a.s.}$$
For the upper bound, from the decomposition $\frac{1}{n}\log\tilde Z_n(t) = \frac{1}{n}\log W_n(t) + \frac{1}{n}\sum_{i=0}^{n-1}\log m_i(t)$ and the fact that W_n(t) → W(t) < ∞ a.s., we obtain
$$\limsup_{n\to\infty}\frac{1}{n}\log\tilde Z_n(t) \le \Lambda(t) \quad\text{a.s.}$$
This completes the proof.

Lemma 4.3. It is a.s. that
$$\limsup_{n\to\infty}\frac{R_n}{n} \le \Lambda'(t_+).$$

Proof. For a > Λ'(t_+), we have Λ*(a) > 0. By Theorem 2.1,
$$\lim_{n\to\infty}\frac{1}{n}\log E_\xi Z_n[an, \infty) = -\Lambda^*(a) < 0 \quad\text{a.s.}$$
This leads to $\sum_n P_\xi(Z_n[an,\infty) \ge 1) < \infty$ a.s. It follows by the Borel-Cantelli lemma that P_ξ-a.s.,
$$Z_n[an, \infty) = 0 \quad\text{for } n \text{ large enough}.$$
Therefore R_n < an for n large enough, so that a.s.,
$$\limsup_{n\to\infty}\frac{R_n}{n} \le a.$$

Letting a ↓ Λ'(t_+), we obtain the desired result.

Lemma 4.4. If t ≥ t_+, then a.s.,
$$\lim_{n\to\infty}\frac{\log\tilde Z_n(t)}{n} = t\Lambda'(t_+). \tag{4.5}$$

Proof. For the upper bound, we only consider the case where t_+ < ∞. Choose 0 < t_0 < t_+ ≤ t. Since S_u ≤ R_n for u ∈ T_n, we have tS_u ≤ t_0 S_u + (t − t_0)R_n, so that
$$\tilde Z_n(t) \le \tilde Z_n(t_0)\, e^{(t-t_0)R_n}.$$
Thus
$$\frac{\log\tilde Z_n(t)}{n} \le \frac{\log\tilde Z_n(t_0)}{n} + (t-t_0)\frac{R_n}{n}.$$
Letting n → ∞ and using Lemma 4.3, we get a.s.,
$$\limsup_{n\to\infty}\frac{\log\tilde Z_n(t)}{n} \le \Lambda(t_0) + (t-t_0)\Lambda'(t_+).$$
Letting t_0 ↑ t_+ and using Λ(t_+) − t_+Λ'(t_+) = 0, we obtain a.s.,
$$\limsup_{n\to\infty}\frac{\log\tilde Z_n(t)}{n} \le t\Lambda'(t_+).$$
For the lower bound, as log Z̃_n(t) is a convex function of t, for t_- < t_0 < t_1 < t_+ ≤ t we have
$$\frac{\log\tilde Z_n(t) - \log\tilde Z_n(t_0)}{t - t_0} \ge \frac{\log\tilde Z_n(t_1) - \log\tilde Z_n(t_0)}{t_1 - t_0}.$$
Dividing the inequality by n and applying Lemma 4.2 to t_0 and t_1, we obtain a.s.,
$$\liminf_{n\to\infty}\frac{\log\tilde Z_n(t)}{n} \ge \Lambda(t_0) + \frac{t - t_0}{t_1 - t_0}\big(\Lambda(t_1) - \Lambda(t_0)\big).$$
Letting t_1 ↓ t_0, we get a.s.,
$$\liminf_{n\to\infty}\frac{\log\tilde Z_n(t)}{n} \ge \Lambda(t_0) + (t-t_0)\Lambda'(t_0).$$
Letting t_0 ↑ t_+ and using Λ(t_+) − t_+Λ'(t_+) = 0, we obtain a.s.,
$$\liminf_{n\to\infty}\frac{\log\tilde Z_n(t)}{n} \ge t\Lambda'(t_+).$$
This completes the proof.

Lemma 4.5. It is a.s. that
$$\liminf_{n\to\infty}\frac{R_n}{n} \ge \Lambda'(t_+).$$

Proof. Noticing that S_u ≤ R_n for u ∈ T_n, we have Z̃_n(t) ≤ Z_n(R) e^{tR_n}, so that for each 0 < t < ∞,
$$\frac{\log\tilde Z_n(t)}{n} \le \frac{\log Z_n(\mathbb R)}{n} + t\,\frac{R_n}{n}. \tag{4.6}$$
If t_+ < ∞, then by Lemma 4.4 the above inequality gives, for t > t_+, a.s.,
$$\Lambda'(t_+) \le \frac{1}{t}E\log m_0 + \liminf_{n\to\infty}\frac{R_n}{n}.$$
Letting t ↑ ∞, we obtain the desired result. If t_+ = ∞, then by Lemma 4.2 the inequality (4.6) gives, for t > 0, a.s.,
$$\frac{\Lambda(t)}{t} \le \frac{1}{t}E\log m_0 + \liminf_{n\to\infty}\frac{R_n}{n}.$$
Letting t ↑ ∞, we get a.s.,
$$\liminf_{n\to\infty}\frac{R_n}{n} \ge \Lambda'(\infty) = \Lambda'(t_+).$$

The conclusions for t ≤ t_- and L_n can be obtained in a similar way, or by applying the results obtained for t ≥ t_+ and R_n to the opposite branching random walk (−S_u). Hence Theorem 3.4 holds, and (3.3) holds a.s. for each fixed t ∈ R. So a.s. (3.3) holds for all rational t, and therefore for all real t by the convexity of log Z̃_n(t). This ends the proof of Theorem 3.1.

5 Large deviations for E_ξ[Z_n(n·)/Z_n(R)]

Using the lower bound in Theorem 3.2 and the upper bound in Theorem 2.1, we have the following theorem.

Theorem 5.1. If a.s. P_ξ(N ≤ 1) = 0 and E_ξ N^{1+δ} ≤ K for some constants δ > 0 and K > 0, then a.s., for each measurable subset A of R,
$$-\inf_{x\in A^o}\tilde\Lambda^*(x) - E\log m_0 \;\le\; \liminf_{n\to\infty}\frac{1}{n}\log E_\xi\frac{Z_n(nA)}{Z_n(\mathbb R)} \;\le\; \limsup_{n\to\infty}\frac{1}{n}\log E_\xi\frac{Z_n(nA)}{Z_n(\mathbb R)} \;\le\; -\inf_{x\in\bar A}\Lambda^*(x) - E\log m_0,$$
where A^o denotes the interior of A, and Ā its closure.

Notice that Λ̃*(x) = Λ*(x) for x ∈ (Λ'(t_-), Λ'(t_+)). From Theorem 5.1 we obtain

Corollary 5.2. If a.s. P_ξ(N ≤ 1) = 0 and E_ξ N^{1+δ} ≤ K for some constants δ > 0 and K > 0, then a.s.,
$$\lim_{n\to\infty}\frac{1}{n}\log E_\xi\frac{Z_n[nx,\infty)}{Z_n(\mathbb R)} = -\Lambda^*(x) - E\log m_0 \quad\text{if } x \in (\Lambda'(0), \Lambda'(t_+)),$$
$$\lim_{n\to\infty}\frac{1}{n}\log E_\xi\frac{Z_n(-\infty,nx]}{Z_n(\mathbb R)} = -\Lambda^*(x) - E\log m_0 \quad\text{if } x \in (\Lambda'(t_-), \Lambda'(0)).$$

Theorem 5.1 is a combination of Lemmas 5.1 and 5.4 below.

Lemma 5.1 (Lower bound). It is a.s. that for each measurable subset A of R,
$$\liminf_{n\to\infty}\frac{1}{n}\log E_\xi\left(\frac{Z_n(nA)}{Z_n(\mathbb R)}\right) \ge -\inf_{x\in A^o}\tilde\Lambda^*(x) - E\log m_0. \tag{5.1}$$

Proof. By Theorem 3.2, a.s.
$$\liminf_{n\to\infty}\frac{1}{n}\log Z_n(nA) \ge -\inf_{x\in A^o}\tilde\Lambda^*(x),$$
which implies that for each ε > 0, a.s.
$$\lim_{n\to\infty} P_\xi\left(\frac{1}{n}\log\frac{Z_n(nA)}{Z_n(\mathbb R)} \ge -\inf_{x\in A^o}\tilde\Lambda^*(x) - E\log m_0 - \varepsilon\right) = 1.$$
Write f(A) = −inf_{x∈A^o} Λ̃*(x) − E log m_0. Notice that
$$E_\xi\left(\frac{Z_n(nA)}{Z_n(\mathbb R)}\right) \ge E_\xi\left(\frac{Z_n(nA)}{Z_n(\mathbb R)}\mathbf 1_{\{\frac{Z_n(nA)}{Z_n(\mathbb R)}\ge \exp(n(f(A)-\varepsilon))\}}\right) \ge \exp\big(n(f(A)-\varepsilon)\big)\, P_\xi\left(\frac{1}{n}\log\frac{Z_n(nA)}{Z_n(\mathbb R)} \ge f(A)-\varepsilon\right).$$
We have a.s.
$$\frac{1}{n}\log E_\xi\left(\frac{Z_n(nA)}{Z_n(\mathbb R)}\right) \ge f(A) - \varepsilon + \frac{1}{n}\log P_\xi\left(\frac{1}{n}\log\frac{Z_n(nA)}{Z_n(\mathbb R)} \ge f(A)-\varepsilon\right).$$
Taking the inferior limit and letting ε → 0, we obtain (5.1).

To obtain the upper bound, we need certain moment conditions.

Lemma 5.2 ([18], Theorem 3.1). If a.s. P_ξ(N ≤ 1) = 0 and E_ξ N^{1+δ} ≤ K for some constants δ > 0 and K > 0, then for each s > 0, there exists a constant C_s > 0 such that E_ξ W^{-s} ≤ C_s a.s..

Lemma 5.3. If a.s. P_ξ(N ≤ 1) = 0 and E_ξ N^{1+δ} ≤ K for some constants δ > 0 and K > 0, then for each ε > 0, a.s.
$$\lim_{n\to\infty}\frac{1}{n}\log P_\xi\big(Z_n(\mathbb R) \le e^{(E\log m_0 - \varepsilon)n}\big) = -\infty. \tag{5.2}$$

Proof. Denote W_n = Z_n(R)/P_n. Notice that for all s > 0, sup_n E_ξ W_n^{-s} = E_ξ W^{-s}. Lemma 5.2 shows that E_ξ W^{-s} < ∞ a.s.. By Markov's inequality, a.s.
$$P_\xi\big(Z_n(\mathbb R) \le e^{(E\log m_0-\varepsilon)n}\big) \le E_\xi W_n^{-s}\exp\Big(s\Big((E\log m_0-\varepsilon)n - \sum_{i=0}^{n-1}\log m_i\Big)\Big) \le E_\xi W^{-s}\exp\Big(s\Big((E\log m_0-\varepsilon)n - \sum_{i=0}^{n-1}\log m_i\Big)\Big).$$
Hence a.s.
$$\frac{1}{n}\log P_\xi\big(Z_n(\mathbb R) \le e^{(E\log m_0-\varepsilon)n}\big) \le \frac{1}{n}\log E_\xi W^{-s} + s\Big(E\log m_0 - \varepsilon - \frac{1}{n}\sum_{i=0}^{n-1}\log m_i\Big).$$
Taking the superior limit, we get a.s.
$$\limsup_{n\to\infty}\frac{1}{n}\log P_\xi\big(Z_n(\mathbb R) \le e^{(E\log m_0-\varepsilon)n}\big) \le -\varepsilon s.$$



Zn (nA) Zn (R)





   1 log e−(E log m0 −ε)n Eξ Zn (nA) + Pξ Zn (R) ≤ e(E log m0 −ε)n . n 10

Taking superior limit in the above inequality, and using Theorem 2.1 and Lemma 5.3, we obtain a.s.   1 Zn (nA) lim sup log Eξ Zn (R) n→∞ n    1 1 (E log m0 −ε)n ≤ max lim sup log Eξ Zn (nA) − E log m0 + ε, lim sup log Pξ Zn (R) ≤ e n→∞ n n→∞ n = max{− inf Λ∗ (x) − E log m0 + ε, −∞} = − inf Λ∗ (x) − E log m0 + ε. ¯ x∈A

¯ x∈A

Then let ε → 0.


6 Branching random walk in varying environment

Kaplan and Asmussen (1976, [21]) showed that under certain moment conditions, the probability measures Z_n(b_n · + a_n)/Z_n(R) satisfy a central limit theorem and a local limit theorem for a branching random walk in a deterministic environment, for some sequence (a_n, b_n). Biggins (1990, [7]) proved the same results under weaker moment conditions. We want to generalize these results to the branching random walk with a random environment in time. Instead of studying the case of a random environment directly, we first introduce the branching random walk with a varying environment in time and give some related results.

A branching random walk with a varying environment in time is modeled in a similar way as the branching random walk with a random environment in time. Let {X_n} be a sequence of point processes on R. The distribution of X_n is denoted by η_n. At time 0, there is an initial particle ∅ of generation 0 located at S_∅ = 0; at time 1, it is replaced by N = N(∅) particles of generation 1, located at L_i = L_i(∅), 1 ≤ i ≤ N, where the point process X_∅ = (N, L_1, L_2, ⋯) is an independent copy of X_0. In general, each particle u = u_1⋯u_n of generation n located at S_u is replaced at time n+1 by N(u) new particles ui of generation n+1, located at
$$S_{ui} = S_u + L_i(u) \quad (1 \le i \le N(u)),$$
where the point process formed by the number of offspring and their displacements, X(u) = (N(u), L_1(u), L_2(u), ⋯), is an independent copy of X_n. All particles behave independently; namely, the point processes {X(u)} are independent of each other. In particular, {X(u) : u ∈ T_n} are independent of each other and have the common distribution η_n.

Let Z_n = Σ_{u∈T_n} δ_{S_u} be the counting measure of particles of generation n. As in the case of a random environment, we introduce the following notation:
$$N_n = X_n(\mathbb R), \quad m_n = EN_n, \quad P_0 = 1 \quad\text{and}\quad P_n = EZ_n(\mathbb R) = \prod_{i=0}^{n-1} m_i. \tag{6.1}$$
Assume that
$$0 < m_n < \infty, \qquad \liminf_{n\to\infty}\frac{1}{n}\log P_n > 0 \qquad\text{and}\qquad \liminf_{n\to\infty}\frac{1}{n}\log m_n = 0. \tag{6.2}$$
Thus for some c > 1, there exists an integer n_0 depending on c such that
$$P_n > c^n \quad\text{for all } n > n_0. \tag{6.3}$$

Denote by Γ the probability space under which the process is defined. Let F_0 = {∅, Γ} and F_n = σ((N(u), L_1(u), L_2(u), ⋯) : |u| < n) for n ≥ 1 be the σ-field containing all the information concerning the first n generations; then the sequence {Z_n(R)/P_n} forms a non-negative martingale with respect to the filtration (F_n) and converges a.s. to a random variable W.

Let ν_n be the intensity measure of the point process X_n/m_n, in the sense that for a subset A of R,
$$\nu_n(A) = \frac{EX_n(A)}{m_n},$$
and let φ_n be the corresponding characteristic function, i.e.
$$\varphi_n(t) = \int e^{itx}\nu_n(dx) = \frac{1}{m_n}E\int e^{itx} X_n(dx). \tag{6.4}$$
The characteristic function of Z_n/P_n is defined as
$$\Psi_n(t) = \frac{1}{P_n}\int e^{itx} Z_n(dx) = \frac{1}{P_n}\sum_{u\in T_n} e^{itS_u}. \tag{6.5}$$
It is not difficult to see that φ_i and Ψ_n have the following relation:
$$E\Psi_n(t) = \prod_{i=0}^{n-1}\varphi_i(t). \tag{6.6}$$
Furthermore, denote
$$\upsilon_n(\varepsilon) = \sum_{i=0}^{n-1}\int |x|^\varepsilon \nu_i(dx). \tag{6.7}$$

Condition (A). There is a non-degenerate probability distribution L(x) and constants {a_n, b_n} with b_n → ∞ such that
$$e^{-ita_n}\prod_{i=0}^{n-1}\varphi_i(t/b_n) \to g(t) = \int e^{itx} L(dx).$$

Similar conditions were posed by Klebaner (1982, [23]) and Biggins (1990, [7]). If additionally b_{n+1}/b_n → 1, then the limit distribution belongs to what Feller (1971, [13]) calls the class L, also known as the self-decomposable distributions. Denoting G_n(x) = ν_0 ∗ ⋯ ∗ ν_{n-1}(x), we introduce another condition:

Condition (B). There exist constants {a_n, b_n} with b_n → ∞ such that G_n(b_n x + a_n) converges to a non-degenerate probability distribution L(x).

It is clear that if (B) holds with {a_n, b_n}, then (A) holds with a_n' = (a_n + o(b_n))/b_n and b_n' = b_n. Let μ_n = ∫x ν_n(dx) and σ_n² = ∫|x − μ_n|² ν_n(dx). Take a_n = Σ_{i=0}^{n-1} μ_i and b_n = (Σ_{i=0}^{n-1} σ_i²)^{1/2}; if moreover b_n satisfies b_{n+1}/b_n → 1, then G_n(b_n x + a_n) → L(x). In particular, if {ν_n} satisfies the Lindeberg or Lyapunov conditions, then the limiting distribution L is standard normal, i.e. L(x) = Φ(x) = (1/√{2π})∫_{-∞}^x e^{-t²/2} dt. We have the following result:

Theorem 6.1. For a branching random walk in a varying environment satisfying (6.2), assume that for some δ > 0,
$$\sum_n \frac{1}{m_n\, n(\log n)^{1+\delta}}\, E N_n \log^+ N_n\, (\log^+\log^+ N_n)^{1+\delta} < \infty, \tag{6.8}$$
that for some ε > 0 and γ_1 < ∞,
$$\upsilon_n(\varepsilon) = o(n^{\gamma_1}), \tag{6.9}$$
and that for some γ_2 > 0,
$$b_n^{-1} = o(n^{-\gamma_2}); \tag{6.10}$$
then
$$\Psi_n(t/b_n) - W\prod_{i=0}^{n-1}\varphi_i(t/b_n) \to 0 \quad\text{a.s.} \tag{6.11}$$
If in addition (A) holds, then
$$e^{-ita_n}\Psi_n(t/b_n) \to g(t)\,W \quad\text{a.s.,} \tag{6.12}$$
and for x a continuity point of L,
$$P_n^{-1}Z_n(-\infty, b_n(x + a_n)] \to L(x)\,W \quad\text{a.s.} \tag{6.13}$$
The null set can be taken to be independent of t in (6.12) and of x in (6.13), and (6.12) holds uniformly for t in compact sets.

Remark. The above conclusions were obtained by Biggins (1990, [7], Theorems 1 and 2) under similar hypotheses, with (6.8) replaced by a condition of the form ∫ x log x F(dx) < ∞, where F(x) := Σ_{k=0}^{[x]} sup_n P(N_n = k). In the homogeneous case, F is simply the distribution function which determines the number of offspring, but in general F does not have such a concrete expression as (6.8).

The following theorem is a local limit theorem. We use the notation a_n ∼ b_n to signify that a_n/b_n → 1 as n → ∞.


Theorem 6.2. For a branching random walk in a varying environment satisfying (6.2), assume that (A) holds with b_n ∼ θn^γ for some constants 0 < γ ≤ 1/2 and θ > 0, that g is integrable, and that for some ι > 0,
$$\sup_i \sup_{|t|\ge\iota} |\varphi_i(t)| =: c_\iota < 1. \tag{6.14}$$
If (6.9) holds and
$$\sum_n \frac{1}{m_n\, n(\log n)^{1+\delta}}\, E N_n(\log^+ N_n)^{1+\beta} < \infty \tag{6.15}$$
for some δ > 0 and β > γ, then for each h > 0,
$$\sup_{x\in\mathbb R}\left| b_n P_n^{-1} Z_n(x, x+h) - W h\, p_L(x/b_n - a_n)\right| \to 0 \quad\text{a.s.}, \tag{6.16}$$
where p_L(x) denotes the density function of L.
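When L is the standard normal law, the centering and scaling constants of Condition (B) can be computed directly from the per-generation intensity measures, as in the discussion preceding Theorem 6.1. The snippet below is a small illustration of that recipe; the particular per-generation means and variances it uses are made-up assumptions, chosen only so that b_n grows like θn^{1/2}.

```python
import numpy as np

# Hypothetical per-generation displacement means mu_i and variances sigma_i^2
# of the intensity measures nu_i (a slowly varying, bounded sequence).
n = 200
mu = 0.2 + 0.1 * np.sin(np.arange(n))           # mu_i = ∫ x nu_i(dx)
sigma2 = 1.0 + 0.5 * np.cos(np.arange(n))**2    # sigma_i^2 = ∫ |x - mu_i|^2 nu_i(dx)

# Norming constants of Condition (B): a_n = sum_i mu_i, b_n = (sum_i sigma_i^2)^(1/2).
a = np.cumsum(mu)
b = np.sqrt(np.cumsum(sigma2))

# Here b_n ~ theta * n^(1/2) and b_{n+1}/b_n -> 1, matching the hypotheses of Theorems 6.1-6.2.
print(a[-1], b[-1], b[-1] / np.sqrt(n), b[-1] / b[-2])
```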

7 Proof of Theorem 6.1

To prove Theorem 6.1, we only need to show (6.11): it is obvious that (6.12) follows directly from (6.11), and (6.13) follows from (6.12) by applying the continuity theorem; the remaining assertions follow as in Biggins (1990, [7], Theorem 2). We remark here that our proof is inspired by Biggins (1990, [7]) and Klebaner (1982, [23]).

We will use a truncation method. Let κ > 0 be a constant. Let X̃_{n,κ} be equal to X_n on {N_n(log N_n)^κ ≤ P_{n+1}} and be empty otherwise; the other notations are extended similarly. Let I_n(x) = 1_{{x(log x)^κ ≤ P_{n+1}}} and I_n^c = 1 − I_n, so m̃_{n,κ} = EN_n I_n(N_n), and
$$\tilde\varphi_{n,\kappa}(t) = \int e^{itx}\,\tilde\nu_{n,\kappa}(dx) = \frac{1}{\tilde m_{n,\kappa}}\,E\int e^{itx}X_n(dx)\, I_n(N_n).$$
The proof of Theorem 6.1 is composed of several lemmas.

Lemma 7.1. Let β ≥ 0. If $\sum_n \frac{1}{m_n\, n(\log n)^{1+\delta}}\, E N_n (\log^+ N_n)^{1+\beta}(\log^+\log^+ N_n)^{1+\delta} < \infty$ holds for some δ > 0, then for all κ, $\sum_n n^\beta\big(1 - \tilde m_{n,\kappa}/m_n\big) < \infty$.

nβ (1 −

m ˜ n,κ ) = mn = =

X nβ (mn − m ˜ n,κ ) mn n X nβ ENn Inc (Nn ) m n n

X nβ X nβ ENn Inc (Nn )1{Nn >a} + ENn Inc (Nn )1{Nn ≤a} , m m n n n n

where a is a constant. Since Pn → ∞, the convergence of the second series above is obvious. It suffices to show that of the first series for suitable a. Take f (x) = (log x)1+β (log log x)(1+δ) . f (x) is increasing and positive on (a, +∞). Noticing (6.3), we have for n large enough,

≤ ≤ ≤

nβ ENn Inc (Nn )1{Nn >a} mn nβ f (Nn (log Nn )κ ) E 1{Nn >a} mn f (Pn+1 ) C ENn (log Nn )1+β (log log Nn )1+δ 1{Nn >a} mn n(log n)1+δ C ENn (log+ Nn )1+β (log+ log+ Nn )1+δ , mn n(log n)1+δ

where C is a constant, and likePa, in general, it does not necessarily stand for the same constant throughout. 1 ENn (log+ Nn )1+β (log+ log+ Nn )1+δ implies that of the series The convergence of the series n mn n(log n)1+δ P nβ c n mn ENn In (Nn )1{Nn >a} . 13

P

Lemma 7.2 ([7], Lemma 3 (ii)). If n−1 Y i=0

n (1

−m ˜ n,κ /mn ) < ∞, then n−1 Y

φ˜i,κ (t/bn ) −

i=0

φi (t/bn ) → 0,

as n → ∞.

(7.1)

The formula (7.1) shows that we can prove (6.11) with φ˜i,κ in place of φi . For simplicity, let ζn (t) = φ˜n,κ (t)

and

ωn =

m ˜ n,κ , mn

where the value of κ will be fixed to be suitably large later. R itx (1) Let Ψu (t) := m−1 e X(u)(dx) if u ∈ Tn . Then n

Ψn+1 (t) − ωn ζn (t)Ψn (t)  1 X itSu  (1) 1 X itSu (1) Ψu (t)In (N (u)) − ωn ζn (t) Ψu (t)Inc (N (u)) + e e Pn Pn

=

u∈Tn

=

u∈Tn

: An (t) + Bn (t).

By iteration, we obtain Ψn



t bn



− Ψk



t bn

 n−1 Y

ωi ζi

i=k



t bn



=

n−1 X



Ai

i=k

t bn



+ Bi



t bn

 n−1 Y

ωj ζj

j=i+1



t bn



.

(7.2)

Thus Ψn (t/bn ) − W =

n−1 X

Ai (t/bn )

n−1 Y

i=0 n−1 Y

ζi (t/bn ) ωj ζj (t/bn ) +

j=i+1

i=k

+ Ψk (t/bn )

n−1 Y i=k

n−1 X

Bi (t/bn )

ωj ζj (t/bn )

j=i+1

i=k

ωi ζi (t/bn ) − W

n−1 Y

n−1 Y

!

ζi (t/bn ) .

i=0

(7.3)

Let α > 1. Take k = J(n) = j if j α ≤ n < (j + 1)α , so that k α ∼ n, which means k goes to infinity more slowly than n. For this k, we will show that each term in the right side of (7.3) is negligible. P ˜ n,κ /mn ) < ∞, then Lemma 7.3. If n (1 − m n−1 X

Ai (t/bn )

i=k

n−1 Y

j=i+1

ωj ζj (t/bn ) → 0

a.s.,

as n → ∞.

(7.4)

Proof. Notice that n−1 n−1 n−1 n−1 X 1 X Y X X ω ζ ≤ |A | ≤ A N (u)Iic (N (u)). j j i i P m i i i=k i=k j=i+1 i=k |u|=i

Since 

we get

E

∞ X i=0

 ∞ X X 1 1 X m ˜ i,κ N (u)Iic (N (u)) = ENi Iic (Ni ) = ) < ∞. (1 − Pi mi m mi i i i=0 |u|=i

∞ X i=0

1 X N (u)Iic (N (u)) < ∞. Pi mi |u|=i

which implies (7.4), combined with (7.5).

14

(7.5)

Lemma 7.4. If for some δ1 > 0 , X n

then

n−1 X

Bi (t/bn )

1 ENn log+ Nn < ∞, mn n1+δ1

n−1 Y

j=i+1

i=k

ωj ζj (t/bn ) → 0

a.s.,

(7.6)

as n → ∞.

(7.7)

Remark. Obviously (6.8) implies (7.6). Proof. Let Cn =

n−1 X

Bi (t/bn )

P∞

ωj ζj (t/bn ).

j=i+1

i=k

We want to show that

n−1 Y

E|Cn |2 < ∞, which implies (7.6). Since E(Bi |Fi ) = 0, we have   n−1 n−1 n−1 X X Y E|Cn |2 = var(Cn ) = var  var(Bi ), ωj ζj  ≤ Bi n=1

i=k

j=i+1

i=k

where the notation var denotes variance. Moreover, var(Bi ) =

E(var(Bi |Fi )) ≤

1 1 (1) var(Ψi Ii (Ni )) ≤ ENi2 Ii (Ni ), Pi Pi m2i

R itx (1) e Xn (dx). We denote J −1 be the inverse mapping of J, J −1 (j) = {n : J(n) = j} where Ψn (t) := m−1 n −1 and |J (j)| be the number of the elements in J −1 (j). It is not difficult to see that |J −1 (j)| = O(j α−1 ) and Pi −1 (j)| = O(iα ). Hence, j=1 |J ∞ X

n=1

E|Cn |2

≤ =

∞ n−1 X X

n=1 i=k ∞ X

1 ENi2 Ii (Ni ) Pi m2i

X

n−1 X

j=1 n∈J −1 (j) i=j

≤ = ≤ =

∞ X j=1

∞ X i=j

1 ENi2 Ii (Ni ) Pi m2i

|J −1 (j)|

1 ENi2 Ii (Ni ) Pi m2i

|J −1 (j)|

∞ X i X i=1 j=1 ∞ X

C

i=1

1 ENi2 Ii (Ni ) Pi m2i

iα ENi2 Ii (Ni ) Pi m2i

∞ ∞ X X iα iα 2 2 EN I (N )1 + C C i i {N >a} i i 2 2 ENi Ii (Ni )1{Ni ≤a} . P m P m i i i i i=1 i=1

P α The second series above converges, since i Piim2 < ∞. For the first series above, take f (x) = x(log x)−(α+1+δ1 ) . i f (x) is increasing and positive on (a, +∞). We have for i large enough,

≤ ≤ ≤

iα ENi2 Ii (Ni )1{Ni >a} Pi m2i f (Pi+1 ) iα ENi2 1{Ni >a} 2 Pi mi f ({Ni (log Ni )κ ) C ENi (log Ni )α+1+δ1 −κ 1{Ni >a} mi i1+δ1 C ENi log+ Ni , mi i1+δ1

if we take κ ≥ α + δ1 . Then by (7.6), it follows the convergence of the series 15

iα 2 i Pi m2i ENi Ii (Ni )1{Ni >a} .

P

Lemma 7.5. If

P

n (1

−m ˜ n,κ /mn ) < ∞ and (6.9), (6.10) hold, then

Ψk (t/bn )

n−1 Y i=k

ωi ζi (t/bn ) − W

n−1 Y i=0

ζi (t/bn ) → 0

as n → ∞.

a.s.,

(7.8)

Pn−1 Qn−1 P ˜ n,κ /mn ) < ∞ implies that i=k ωi → 1, so the factor i=k ωi in (2.8) can be ignored. Proof. n (1 − m Notice that !   n  n−1  n−1 n−1 n−1 n−1 Y Y Y Y Y Zk (R) Y Zk (R) ζi = Ψk − Ψk ζi − W −W ζi . ζi + ζi + W ζi − Pk Pk i=0 i=0 i=k

i=k

i=k

i=k

It suffices to prove that Ψk (t/bn ) − and

n−1 Y i=k

Zk (R) →0 Pk

ζi (t/bn ) −

n−1 Y i=0

Since |eitx − 1| ≤ C|tx|ε , we have Ψk (t/bn ) − Zn (R) Pk

a.s.,

as k → ∞.

(7.9)

ζi (t/bn ) → 0,

as k → ∞.

(7.10)

≤ ≤

1 Pk

Z tb−1 e n x − 1 Zk (dx) Z ε −ε 1 C|u| bn |x|ε Zk (dx). Pk

Assume that 0 < ε ≤ 1 (the proof for the case of ε > 1 is similar). Taking expectation in the above inequality, we obtain Z Zk (R) ε −ε α α E Ψk (t/bn ) − ≤ C|t| sup{b : k ≤ n < (k + 1) } |x|ε ν0 ∗ · · · ∗ νk−1 (dx) n Pk k−1 XZ α α ≤ C|t|ε sup{b−ε : k ≤ n < (k + 1) } |x|ε νi (dx) n i=0

=

ε

C|t|

sup{b−ε n

: k ≤ n < (k + 1) }υk (ε) = |u|ε o(k γ1 −αεγ2 ). α

α

Hence (7.9) holds if we take α large. By Lemma 7.2, we can prove (7.10) with φi in place of ζi , which holds directly by noticing that n−1 n−1 ! k−1 k−1 n−1 Y Y Y Y Y φi (t/bn ) φi (t/bn ) ≤ 1 − φi (t/bn ) 1 − φi (t/bn ) = φi (t/bn ) − i=0 i=0 i=0 i=k i=k   Zk (R) Zk (R) = E Ψk (t/bn ) − . ≤ E Ψk (t/bn ) − Pk Pk Thus (7.8) holds.

8 Proof of Theorem 6.2

We will go along the proof by following the lines of [7]. Let
$$K(x) = \frac{1}{2\pi}\left(\frac{\sin\tfrac12 x}{\tfrac12 x}\right)^2 \qquad\text{and}\qquad K_a(x) = \frac{1}{a}K\Big(\frac{x}{a}\Big)\ (a>0).$$
Then
$$\int_{\mathbb R} K(x)\,dx = 1 \qquad\text{and}\qquad \int_{\mathbb R} K_a(x)\,dx = 1.$$
The characteristic function of K_a is denoted by k_a, which vanishes outside (−1/a, 1/a), so that the characteristic function of (Z_n/P_n) ∗ K_a is integrable, and hence (Z_n/P_n) ∗ K_a has a density function D_a^{(n)}. We will obtain our result through the asymptotic properties of D_a^{(n)}.
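The kernel K above is the Fejér-type kernel whose Fourier transform is the triangular function supported on [−1, 1]; this is exactly why k_a vanishes outside (−1/a, 1/a). The short numerical check below illustrates these two facts; it is only an illustration under those standard-kernel assumptions and plays no role in the proof.

```python
import numpy as np

def K(x):
    # K(x) = (1/(2*pi)) * (sin(x/2)/(x/2))^2, with the removable singularity at x = 0.
    x = np.asarray(x, dtype=float)
    half = np.where(x == 0.0, 1.0, x / 2.0)
    s = np.where(x == 0.0, 1.0, np.sin(half) / half)
    return s**2 / (2.0 * np.pi)

x = np.linspace(-400.0, 400.0, 2_000_001)
dx = x[1] - x[0]
print(np.sum(K(x)) * dx)                 # ≈ 1: K integrates to one

# Characteristic function of K: k(t) = ∫ e^{itx} K(x) dx = max(1 - |t|, 0),
# so k_a(t) = k(a t) vanishes for |t| >= 1/a.
for t in (0.0, 0.5, 0.99, 1.5):
    k_t = np.sum(np.cos(t * x) * K(x)) * dx
    print(t, round(k_t, 3), max(1.0 - abs(t), 0.0))
```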


Lemma 8.1 (see [10]). If f(t) is a characteristic function such that |f(t)| ≤ κ as soon as b ≤ |t| ≤ 2b, then we have for |t| < b,
$$|f(t)| \le 1 - (1-\kappa^2)\frac{t^2}{8b^2}.$$

Lemma 8.2. Under the conditions of Theorem 6.2,
$$\sup_{x\in\mathbb R}\left|b_n D_a^{(n)}\big(b_n(x+a_n)\big) - W p_L(x)\right| \to 0 \quad\text{a.s., as } n\to\infty. \tag{8.1}$$

Proof. Let A be a positive constant. By the Fourier inversion theorem,
$$2\pi\left(b_n D_a^{(n)}\big(b_n(x+a_n)\big) - W p_L(x)\right) = \int\left(\Psi_n\Big(\frac{t}{b_n}\Big)\, k_a\Big(\frac{t}{b_n}\Big)\, e^{-ita_n} - W g(t)\right) dt.$$

Split the integral on the right side into |t| < A and |t| ≥ A. Using Theorem 6.1 and noticing that lim_n k_a(t/b_n) = 1, we have
$$\int_{|t|<A}\left|\Psi_n\Big(\frac{t}{b_n}\Big)\, k_a\Big(\frac{t}{b_n}\Big)\, e^{-ita_n} - W g(t)\right| dt \to 0 \quad\text{a.s.}$$
Fix ε > 0; there exist n_0 > 0 and δ > 0 such that for all n ≥ n_0 and 0 < h < δ,
$$h\big(W p_L(x) - \varepsilon\big) \le P_n^{-1}Z_n\big(b_n(x+a_n),\, b_n(x+a_n+h)\big) \le h\big(W p_L(x) + \varepsilon\big) \quad\text{a.s.,}\quad \forall x \in\mathbb R. \tag{8.9}$$

The null set can be taken to be independent of x. Now we turn to the proof of Theorem 6.2.

Proof of Theorem 6.2. Fix h > 0. For each ε > 0, take 0 < ε' < ε/h. By Lemmas 8.2 and 8.3, for this ε' > 0 there exist n_0' > 0 and δ' > 0 such that for all n ≥ n_0' and 0 < h' < δ',
$$h'\big(W p_L(x) - \varepsilon'\big) \le P_n^{-1}Z_n\big(b_n(x+a_n),\, b_n(x+a_n+h')\big) \le h'\big(W p_L(x) + \varepsilon'\big) \quad\text{a.s.,}\quad \forall x\in\mathbb R.$$
Let h' = h/b_n. Then there exists ñ_0 > 0 such that 0 < h' < δ' for n ≥ ñ_0. Taking n_0 := max{n_0', ñ_0} > 0, we have for all n ≥ n_0,
$$h\big(W p_L(x) - \varepsilon'\big) \le b_n P_n^{-1}Z_n\big(b_n(x+a_n),\, b_n(x+a_n)+h\big) \le h\big(W p_L(x) + \varepsilon'\big) \quad\text{a.s.,}\quad \forall x\in\mathbb R,$$
which implies that
$$\sup_{x\in\mathbb R}\left|b_n P_n^{-1}Z_n\big(b_n(x+a_n),\, b_n(x+a_n)+h\big) - W h\, p_L(x)\right| \le \varepsilon' h < \varepsilon \quad\text{a.s.,}$$
so that
$$\sup_{x\in\mathbb R}\left|b_n P_n^{-1}Z_n(x, x+h) - W h\, p_L(x/b_n - a_n)\right| < \varepsilon \quad\text{a.s.}$$
The proof is finished.

9 Central limit theorems for E_ξ Z_n(·)/E_ξ Z_n(R), EZ_n(·)/EZ_n(R) and E[Z_n(·)/E_ξ Z_n(R)]

Now we return to the branching random walk with a random environment in time introduced in Section 1. When the environment ξ is fixed, a branching random walk in a random environment is in fact a branching random walk in a varying environment as introduced in Section 6. We still assume (1.2), which implies that
$$\lim_{n\to\infty}\frac{1}{n}\log P_n = E\log m_0 > 0 \quad\text{and}\quad \lim_{n\to\infty}\frac{1}{n}\log m_n = 0 \quad\text{a.s.}$$
by the ergodic theorem. Hence assumption (6.2) is satisfied, so that (6.3) holds for some constant c > 1 and some integer n_0 = n_0(ξ) depending on c and ξ. Note that all the notations and results of Section 6 remain available under the quenched law P_ξ and the corresponding expectation E_ξ.

Recall that ν_n(·) = E_ξ X_n(·)/m_n is the intensity measure of X_n/m_n. Put
$$\mu_n = \int x\,\nu_n(dx) = \frac{1}{m_n}E_\xi\sum_{i=1}^{N(u)}L_i(u) \quad (u\in T_n) \tag{9.1}$$
and
$$\sigma_n^2 = \int (x-\mu_n)^2\,\nu_n(dx) = \frac{1}{m_n}E_\xi\sum_{i=1}^{N(u)}\big(L_i(u)-\mu_n\big)^2 \quad (u\in T_n). \tag{9.2}$$

Theorem 9.1 (Central limit theorem for quenched means EξξZnn(R) ). If |µ0 | < ∞ a.s. and Eσ02 ∈ (0, ∞), then Eξ Zn (−∞, bn x + an ] → Φ(x) a.s., Eξ Zn (R) Pn−1 Pn−1 where an = i=0 µi and bn = ( i=0 σi2 )1/2 .

Proof. Notice that i.e., for all t > 0 ,

Eξ Zn (·) Eξ Zn (R)

= ν0 ∗ · · · ∗ νn−1 (·). It suffices to show that {νn } satisfies Lindeberg condition, n−1 Z 1 X |x − µi |2 νi (dx) = 0 n→∞ b2 n i=0 |x−µi |>tbn

lim

By the ergodic theorem,

n−1 b2n 1X 2 σi = Eσ02 > 0 = lim n→∞ n n→∞ n i=0

lim

a.s..

a.s..

(9.3)

(9.4)

So for a positive constant a satisfying 0 < a2 < Eσ02 , there exists an integer n0 depending on a and ξ such that b2n ≥ a2 n for all n ≥ n0 . Fix a constant M > 0 . For n ≥ max{n0 , M }, we have b2n ≥ a2 n ≥ a2 M , so that n−1 Z n−1 Z 1 X 1 X 2 |x − µ | ν (dx) ≤ |x − µi |2 νi (dx). i i b2n i=0 |x−µi |>tbn a2 n i=0 |x−µi |>ta√M

Taking superior limit in the above inequality , we obtain n−1 Z 1 X |x − µi |2 νi (dx) lim sup 2 n→∞ bn |x−µ |>tb i n i=0 Z n−1 X 1 1 |x − µi |2 νi (dx) lim ≤ a2 n→∞ n i=0 |x−µi |>ta√M Z 1 = |x − µ0 |2 ν0 (dx). E √ a2 |x−µ0 |>ta M

R Let M → ∞, it obvious that E |x−µ0 |>ta√M |x − µ0 |2 ν0 (dx) → 0 by the dominated convergence theorem, since Eσ02 < ∞. This completes the proof. If the environment is i.i.d., we can obtain a central limit theorem for annealed means. 20

Theorem 9.2 (Central limit theorem for annealed means µ ¯=

1 E Em0

Z

EZn (·) EZn (R) ).

N X 1 Li E Em0 i=1

xX0 (dx) =

and 1 σ ¯ = E Em0 2

If |¯ µ| < ∞ and σ ¯ 2 ∈ (0, ∞), then √ where a ¯n = n¯ µ and ¯bn = n¯ σ. Proof. Denote ν¯n (·) =

EZn (¯ bn ·+¯ an ) . EZn (R)

ϕ¯n (t) =

Z

Z

Assume that {ξn } are i.i.d.. Let

(x − µ ¯)2 X0 (dx) =

N X 1 (Li − µ ¯)2 . E Em0 i=1

EZn (−∞, ¯bn x + a ¯n ] → Φ(x), EZn (R)

The characteristic function of ν¯n is denoted by ϕ¯n . We can calculate

eitx ν¯n (dx)

= (Em0 )−n E

Z

eitx Zn (¯bn dx + a ¯n ) ¯

= (Em0 )−n e−it¯an /bn E

n−1 Y



i=0



¯

= e−it¯an /bn

Em0 (t/¯bn ) Em0

n

Z

¯

eitx/bn Xn (dx)

,

R where mn (t) := Eξ eitx Xn (dx). The last step above is from the independency of (ξn ). Denote F (x) = EX0 (x) Em0 , then by the classic central limit theorem, we have F ∗n (¯bn x + a¯n ) → Φ(x). Therefore,

where p(x) =

2 √1 e−x /2 2π

Z

Z

eitx F ∗n (¯bn dx + a¯n ) → g(t) :=

Z

eitx p(x)dx,

is the density function of standard normal distribution. Notice that eitx F ∗n (¯bn dx + a¯n )

¯

Z

¯



= e−it¯an /bn

¯

eity/bn F ∗n (dy) Z n −it¯ an /¯ bn ity/¯ bn = e F (dy) e = e−it¯an /bn

n Em0 (t/¯bn ) = ϕ¯n (t). Em0

We in fact have obtained ϕ¯n (t) → g(t), it follows that ν¯n (x) → Φ(x) by the continuity theorem. By an argument similar to the proof of Theorem 9.2, we obtain a central limit theorem as follows: R (·) ). Assume that {ξn } are i.i.d.. Let µ ¯′ = E xν0 (dx) = Theorem 9.3 (Central limit theorem for E EZξn(R)     N N R P P 1 1 ′2 ′ 2 ′ 2 ¯ = E (x − µ ¯ ) ν0 (dx) = E m0 E m0 Li and σ (Li − µ ¯ ) . If |¯ µ′ | < ∞ and σ ¯ ′2 ∈ (0, ∞), then i=1

i=1

E

Zn (−∞, ¯b′n x + a ¯′n ] → Φ(x), Eξ Zn (R)

√ ′ σ. where a ¯′n = n¯ µ′ and ¯b′n = n¯

21

10 Central limit theorem and local limit theorem for Z_n(·)/Z_n(R)

As mentioned in the last section (Section 9), we can directly use the results of Theorems 6.1 and 6.2 by considering the quenched law P_ξ and the corresponding expectation E_ξ. However, thanks to the good properties of stationary and ergodic random processes, we have similar but simpler and more precise results than Theorems 6.1 and 6.2.

Theorem 10.1. Assume that for some ε > 0,
$$\upsilon(\varepsilon) := E\int |x|^\varepsilon\,\nu_0(dx) < \infty,$$
and that b_n = b_n(ξ) satisfies
$$b_n^{-1} = o(n^{-\gamma}) \quad\text{a.s. for some } \gamma > 0;$$
then
$$\Psi_n(t/b_n) - W\prod_{i=0}^{n-1}\varphi_i(t/b_n) \to 0 \quad\text{a.s.}$$
If in addition (A) holds with {a_n(ξ), b_n(ξ)} and g_ξ, then
$$e^{-ita_n}\Psi_n(t/b_n) \to g_\xi(t)\,W \quad\text{a.s.,} \tag{10.1}$$
and for x a continuity point of L_ξ,
$$P_n^{-1}Z_n(-\infty, b_n(x+a_n)] \to L_\xi(x)\,W \quad\text{a.s.}$$

Zn (·) Zn (R) ).

If E|µ0 |ε < ∞ for some ε > 0 and Eσ02 ∈ (0, ∞), then

Zn (−∞, bn x + an ] → Φ(x) Zn (R) where an =

Pn−1 i=0

a.s. on {Zn (R) → ∞},

(10.2)

Pn−1 µi and bn = ( i=0 σi2 )1/2 .

R Remark. If E x2 ν0 (dx) < ∞, it can be easily seen that Eµ20 < ∞ and Eσ02 < ∞.
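As a sanity check, the quenched central limit theorem (10.2) can be illustrated by simulation. The self-contained sketch below uses the same toy two-state environment as the earlier sketches (an assumption, not the general model), computes the norming constants (a_n, b_n) of (9.1)-(9.2), which are explicit for that toy model, and compares the empirical distribution function of the normalized positions with Φ.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)
P1, MEAN, DRIFT = 0.5, {0: 1.6, 1: 2.4}, {0: -0.3, 1: 0.5}
Phi = lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

n = 14
env = (rng.random(n) < P1).astype(int)          # one fixed (quenched) environment
positions = np.array([0.0])
for xi in env:                                   # grow the branching random walk
    children = []
    for s in positions:
        k = rng.poisson(MEAN[xi])                # offspring number N(u)
        if k > 0:
            children.append(s + DRIFT[xi] + rng.standard_normal(k))
    positions = np.concatenate(children) if children else np.array([])
    if positions.size == 0:
        break                                    # extinction

if positions.size:
    # For this toy model, (9.1)-(9.2) give mu_i = DRIFT[xi_i] and sigma_i^2 = 1.
    a_n = sum(DRIFT[xi] for xi in env)
    b_n = np.sqrt(n)
    for x in (-1.0, 0.0, 1.0):
        empirical = np.mean(positions <= b_n * x + a_n)   # Z_n(-inf, b_n x + a_n] / Z_n(R)
        print(x, round(empirical, 3), round(Phi(x), 3))
else:
    print("extinct before generation", n)
```

At moderate n the empirical values already track Φ(x) on the survival event, which is the content of Theorem 10.2.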

Theorem 10.2 is an extension of the results of Kaplan and Asmussen (1976, II, Theorem 1) and Biggins (1990) on deterministic branching random walks.

Similarly to the case of a varying environment, we also have local limit theorems corresponding to Theorems 10.1 and 10.2, respectively.

Theorem 10.3. Assume that ν_0 is non-lattice a.s., that (A) holds with {a_n(ξ), b_n(ξ)} satisfying b_n ∼ θn^γ a.s. for some constants 0 < γ ≤ 1/2 and θ > 0, and that g_ξ is integrable. If υ(ε) < ∞ for some ε > 0, and
$$E\frac{N}{m_0}(\log^+ N)^{1+\beta} < \infty \tag{10.3}$$
for some β > γ, then for all h > 0,
$$\sup_{x\in\mathbb R}\left|b_n P_n^{-1}Z_n(x, x+h) - W h\, p_L(x/b_n - a_n)\right| \to 0 \quad\text{a.s.,}$$
where p_L is the density function of L_ξ.

Theorem 10.4 below is a direct consequence of Theorem 10.3; to verify the conditions of Theorem 10.3, see the proof of Theorem 10.2.


Theorem 10.4 (Local limit theorem for Z_n(·)/Z_n(R)). Assume that ν_0 is non-lattice a.s. If E|μ_0|^ε < ∞ for some ε > 0, Eσ_0² ∈ (0, ∞), and
$$E\frac{N}{m_0}(\log^+ N)^{\beta} < \infty$$
for some β > 3/2, then for all h > 0,
$$\sup_{x\in\mathbb R}\left|b_n\frac{Z_n(x, x+h)}{Z_n(\mathbb R)} - h\, p\Big(\frac{x-a_n}{b_n}\Big)\right| \to 0 \quad\text{a.s. on } \{Z_n(\mathbb R)\to\infty\},$$
where a_n = Σ_{i=0}^{n-1}μ_i, b_n = (Σ_{i=0}^{n-1}σ_i²)^{1/2}, and p(x) = (1/√{2π})e^{-x²/2} is the density function of the standard normal distribution.

For the deterministic environment case, a similar result was shown by Biggins (1990). From Theorem 10.4, we immediately obtain the following corollary.

Corollary 10.5. Under the conditions of Theorem 10.4, we have for all a < b,
$$b_n\frac{Z_n(a + a_n, b + a_n)}{Z_n(\mathbb R)} \to \frac{1}{\sqrt{2\pi}}(b-a) \quad\text{a.s. on } \{Z_n(\mathbb R)\to\infty\},$$
where a_n = Σ_{i=0}^{n-1}μ_i and b_n = (Σ_{i=0}^{n-1}σ_i²)^{1/2}.

Corollary 10.5 coincides with a result of Kaplan and Asmussen (1976, II, Theorem 2) on deterministic branching random walks.

11 Proofs of Theorems 10.1-10.3

Before proving Theorem 10.1, we first establish a lemma.

Lemma 11.1. Let β ≥ 0. If $E\frac{N_0}{m_0}(\log^+ N_0)^{1+\beta} < \infty$, then for all κ, $\sum_n n^\beta\big(1 - \tilde m_{n,\kappa}/m_n\big) < \infty$ a.s..

Proof. As in the proof of Lemma 7.1, we have
$$\sum_n n^\beta\Big(1 - \frac{\tilde m_{n,\kappa}}{m_n}\Big) = \sum_n \frac{n^\beta}{m_n}E_\xi N_n I_n^c(N_n).$$
By (6.3), for n large enough, $E_\xi N_n I_n^c(N_n) \le E_\xi N_n 1_{\{N_n(\log N_n)^\kappa > c^{n+1}\}}$. Taking the expectation of the series $\sum_n \frac{n^\beta}{m_n}E_\xi N_n 1_{\{N_n(\log N_n)^\kappa > c^{n+1}\}}$, we have
$$E\Big(\sum_n \frac{n^\beta}{m_n}E_\xi N_n 1_{\{N_n(\log N_n)^\kappa > c^{n+1}\}}\Big) = \sum_n n^\beta\, E\frac{N_0}{m_0}1_{\{N_0(\log N_0)^\kappa > c^{n+1}\}} = E\frac{N_0}{m_0}\sum_n n^\beta 1_{\{N_0(\log N_0)^\kappa > c^{n+1}\}} \le C\, E\frac{N_0}{m_0}(\log^+ N_0)^{1+\beta} < \infty,$$
so that $\sum_n n^\beta\big(1 - \tilde m_{n,\kappa}/m_n\big) < \infty$ a.s..

Proof of Theorem 10.1. From the proof of Theorem 6.1, we know that, instead of (6.8), we in fact only need (7.6) and $\sum_n(1-\tilde m_{n,\kappa}/m_n) < \infty$ for a suitable κ. For the branching random walk in a stationary and ergodic random environment, Lemma 11.1 tells us that the condition $E\frac{N_0}{m_0}\log^+ N_0 < \infty$ ensures $\sum_n(1-\tilde m_{n,\kappa}/m_n) < \infty$. It also ensures (7.6), since for any δ_1 > 0,
$$E\Big(\sum_n\frac{1}{m_n\, n^{1+\delta_1}}E_\xi N_n\log^+ N_n\Big) = \sum_n\frac{1}{n^{1+\delta_1}}E\frac{N_0}{m_0}\log^+ N_0 < \infty.$$
By the ergodic theorem,
$$\lim_n\frac{\upsilon_n(\varepsilon)}{n} = \upsilon(\varepsilon) < \infty \quad\text{a.s.,}$$
hence condition (6.9) holds. Thus Theorem 10.1 is a direct consequence of Theorem 6.1.

Proof of Theorem 10.2. We will use Theorem 10.1 to prove Theorem 10.2. Assume that 0 < ε ≤ 2 (otherwise, consider min{ε, 2} instead of ε); then
$$\upsilon(\varepsilon) = E\int|x|^\varepsilon\,\nu_0(dx) \le C_\varepsilon\Big(E\int|x-\mu_0|^\varepsilon\,\nu_0(dx) + E|\mu_0|^\varepsilon\Big) < \infty.$$
By (9.4), $b_n \sim \sqrt{E\sigma_0^2}\,\sqrt n$ a.s., which implies that for any 0 < γ < 1/2, $b_n^{-1} = o(n^{-\gamma})$ a.s.. The proof of Theorem 9.1 shows that {ν_n} satisfies the Lindeberg condition, so that (A) holds with a_n' = a_n/b_n and b_n' = b_n. By Theorem 10.1,
$$P_n^{-1}Z_n(-\infty, b_n x + a_n] \to \Phi(x)\,W \quad\text{a.s.}$$
Notice that Z_n(R)/P_n → W a.s. and P(W > 0) = P(Z_n(R) → ∞). Thus (10.2) holds.

Lemma 11.2. Let A > 0 be a constant. Assume that bn ∼ θnγ a.s. for some constants 0 < γ ≤ 21 . If ν0 is non-lattice a.s.,then there exists a constant θ1 > 0 (not depending on A) such that Z n−1 Z Y 2 lim sup bn |φi (t)|dt ≤ e−θ1 t dt a.s., (11.1) n→∞

|t|≥A

U i=k

where k = J(n) the same as the proof of Theorem 6.1 and U = {t :

A bn

≤ |t| ≤ a1 }.

Proof. Take 0 < 2ǫ < a1 . Like the last part of the proof of Theorem 6.2, split U into U1 and U2 , so Z n−1 Z n−1 Z n−1 Y Y Y bn |φi (t)|dt = bn |φi (t)|dt + bn |φi (t)|dt. U i=k

U1 i=k

U2 i=k

Since νi is non-lattice a.s., we have

sup ǫ≤|t|≤a−1

Hence by Lemma 8.1, for |t| < ǫ, |φi (t)| ≤ 1 − where αi =

1−c2i 8ǫ2

|φi (t)| =: ci (ǫ, a) = ci < 1

> 0 a.s.. Using (11.2), we immediately get Z n−1 n−1 Y 2 Y |φi (t)| ≤ bn bn ci → 0 a U2 lim

log bn +

n→∞

Observe that lim

n→∞

Take 0 < θ1
0



if γ = 12 if 0 < γ
0) and E( ZZnn(R) |Zn (R) > 0): (·) (·) and E ZZnn(R) ). If E|µ0 |ε < ∞ for some ε > 0 and Theorem 12.1 (Central limit theorems for Eξ ZZnn(R) Eσ02 ∈ (0, ∞), then   Zn (−∞, bn x + an ] a.s., (12.1) Eξ Zn (R) > 0 → Φ(x) Zn (R)   Zn (−∞, bn x + an ] E (12.2) Zn (R) > 0 → Φ(x), Zn (R)

where an =

Pn−1 i=0

Pn−1 2 1/2 µi and bn = ( i=0 σi ) .

Proof. Theorem 12.1 is a consequence of Theorem 10.2. We only prove (12.2), the proof for (12.1) is similar. By Theorem 10.2,   Zn (−∞, bn x + an ] − Φ(x) 1{Zn (R)→∞} → 0 a.s.. (12.3) Zn (R) N log+ N < ∞ ensures that The condition E m 0

lim P(Zn (R) > 0) = P(Zn (R) → ∞) > 0.

n→∞

Observing that

= ≤

  E Zn (−∞, bn x + an ] Zn (R) > 0 − Φ(x) Zn (R)   1 E1{Z (R)>0} Zn (−∞, bn x + an ] − Φ(x) n P(Zn (R) > 0) Zn (R)    Zn (−∞, bn x + an ] 1 E 1{Z (R)>0} − 1{Z (R)→∞} − Φ(x) n n P(Zn (R) > 0) Zn (R)   1 E1{Z (R)→∞} Zn (−∞, bn x + an ] − Φ(x) , + n P(Zn (R) > 0) Zn (R)

we only need to show that the two terms in the right side of the inequality above tend to zero as n tends to infinity. Since Zn (−∞, bn x + an ] ≤ 1 and 0 ≤ Φ(x) ≤ 1, 0≤ Zn (R) we have

Zn (−∞, bn x + an ] ≤ 1. − Φ(x) Zn (R)

Notice (12.3), by the dominated convergence theorem, we get   E1{Z (R)→∞} Zn (Zn (−∞, bn x + an ]) − Φ(x) → 0. n Zn (R)

For the first term, we have   E(1{Z (R)>0} − 1{Z (R)→∞} ) Zn (−∞, bn x + an ] − Φ(x) n n Zn (R) ≤ E|1{Zn (R)>0} − 1{Zn (R)→∞} |

= P(Zn (R) > 0) − P(Zn (R) → ∞) → 0.

This completes the proof.


References [1] K. B. Athreya, S. Karlin, On branching processes in random environments I & II. Ann. Math. Statist. 42 (1971), 1499-1520 & 1843-1858. [2] K. B. Athreya, P. E. Ney, Branching Processes. Springer, Berlin, 1972. [3] J. D. Biggins, Martingale convergence in the branching random walk. J. Appl. prob. 14 (1977), 25-37. [4] J. D. Biggins, Chernoff’s theorem in the branching random walk. J. Appl. Probab. 14 (1977), 630-636. [5] J. D. Biggins, Growth rates in the branching random walk. Z. Wahrsch. verw. Geb. 48 (1979), 17-34. [6] J. D. Biggins, A. E. Kyprianou, Seneta-Heyde norming in the branching random walk. Ann. Prob. 25 (1997), 337-360. [7] J. D. Biggins, The central limit theorem for the supercritical branching random walk,and related results. Stoch. Proc. Appl. 34 (1990), 255-274. [8] J. D. Biggins, A. E. Kyprianou, Measure change in multitype branching. Adv. Appl. Probab. 36 (2004), 544-581. [9] B. Chauvin, A. Rouault, Boltzmann-Gibbs weights in the branching random walk. In K.B. Athreya, P. Jagers, (eds.), Classical and Modern Branching Processes, IMA Vol. Math. Appl. 84, pp. 41-50, Springer-Verlag, New York, 1997. [10] H. Cram´er, Random variables and probability distributions. Cambridge University Press, Cambridge, 1937. [11] A. Dembo, O. Zeitouni, Large deviations Techniques and Applications. Springer, New York, 1998. [12] R. Durrett, T. Liggett, Fixed points of the smoothing transformation. Z. Wahrsch. verw. Geb. 64 (1983), 275-301. [13] W. Feller, An introduction to probability theory and its applications, Vol.II. Wiley, New York, 1971. [14] J. Franchi, Chaos Multiplicatif: un Traitement Simple et Complet de la Fonction de Partition. S´eminaire de probabilit´es XXIX. Springer, 1995, 194-201. [15] Y. Guivarc’h, Sur une extension de la notion de loi semi-stable. Ann. Inst. H. Poincar´e. Probab. Statist. 26 (1990), 261-285. [16] T. E. Harris, The theory of branching process. Springer, Berlin, 1963. [17] B. Hambly, On the limit distribution of a supercritical branching process in a random environment. J. Appl. Prob. 29 (1992), 499-518. [18] C. Huang, Q. Liu, Moments, moderate and large deviations for a branching process in a random environment. Stoch. Proc. Appl. 122 (2012), 522-545. [19] P. Jagers, Galton-Watson processes in varying environments. J. Appl. Prob. 11 (1974), 174-178. [20] J. P. Kahane, J. Peyri`ere, Sur certaines martingales de Benoit Mandelbrot. Adv. Math. 22 (1976), 131-145. [21] N. Kaplan, S. Asmussen, Branching random walks I & II. Stoch. Proc. Appl. 4 (1976), 1-13 & 15-31. [22] D. Kuhlbusch, On weighted branching processes in random environment. Stoch. Proc. Appl. 109 (2004), 113-144. [23] C. F. Klebaner, Branching random walk in varying environment. Adv. Appl. Proba. 14 (1982), 359-367. [24] Q. Liu, Sur une ´equation fonctionnelle et ses applications: une extension du th´eor`eme de Kesten-Stigum concernant des processus de branchement. Adv. Appl. Prob. 29 (1997), 353-373. [25] Q. Liu, Fixed points of a generalized smoothing transformation and applications to branching processes. Adv. Appl. Prob. 30 (1998), 85-112. [26] Q. Liu, On generalized multiplicative cascades. Stoch. Proc. Appl. 86 (2000), 61-87. 26

[27] Q. Liu, A. Rouault, Limit theorems for Mandelbrot’s multiplicative cascades. Ann. Appl. Proba. 10 (2000), 218-239. [28] Q. Liu, Asymptotic properties absolute continuity of laws stable by random weighted mean. Stoch. Proc. Appl. 95 (2001), 83-107. [29] Q. Liu, Branching Random Walks in Random Environment. Proceedings of the 4th International Congress of Chinese Mathematicians, 2007 (ICCM 2007), Vol. II, 702-719. Eds: L. Ji, K. Liu, L. Yang, S.-T. Yau. [30] R. Lyons, A simple path to Biggins’s martingale convergence for branching random walk. In K.B. Athreya, P. Jagers, (eds.), Classical and Modern Branching Processes, IMA Vol. Math. Appl. 84, pp. 217-221, Springer-Verlag, New York, 1997. [31] W. L. Smith, W. Wilkinson, On branching processes in random environments. Ann. Math. Statist. 40 (1969), 814-827. [32] A. J. Stam, On a conjecture of Harris, Z. Wahrsch. Verw. Geb. 5 (1966) 202-206. [33] C. Stone, A local limit theorem for nonlattice multi-dimensional distribution functions. Ann. Math. Statist. 36 (1965), 546-551. [34] D. Tanny, Limit theorems for branching processes in a random environment. Ann. Proba. 5 (1977), 100-116. [35] D. Tanny, A necessary and sufficient condition for a branching process in a random environment to grow like the product of its means. Stoch. Proc. Appl. 28 (1988), 123-139.
