Stochastic averaging for multiscale Markov processes with an application to branching random walk in random environment

Martin Hutzenthaler and Peter Pfaffelhuber

arXiv:1504.01508v1 [math.PR] 7 Apr 2015

April 8, 2015

Abstract
Let Z = (Z_t)_{t∈[0,∞)} be an ergodic Markov process and, for n ∈ N, let Z^n = (Z_{n²t})_{t∈[0,∞)} drive a process X^n. Classical results show, under suitable conditions, that the sequence of non-Markovian processes (X^n)_{n∈N} converges to a Markov process, and they give its infinitesimal characteristics. Here, we consider a general sequence (Z^n)_{n∈N}. Using a general result on stochastic averaging from [Kur92], we derive conditions which ensure that the sequence (X^n)_{n∈N} converges as in the classical case. As an application, we consider the diffusion limit of branching random walk in quickly evolving random environment.
1 Introduction
Stochastic averaging is a well-known concept and was introduced long ago (see e.g. [Kha66]). Consider a sequence of bivariate Markov processes (X^n, Z^n)_{n∈N}. The general idea is that the processes (Z^n)_{n∈N} (subsequently called the fast variables) converge quickly to an equilibrium and that the non-Markovian processes (X^n)_{n∈N} (subsequently called the slow variables) evolve on a slower timescale, only sense this equilibrium in the limit as n → ∞ and, thus, converge to a Markov process. As an example reference, we mention the work of [EK86] on random evolutions: Proposition 12.2.2 (a general result if the union of the state spaces of (X^n)_{n∈N} is compact), Theorem 12.2.4 (again for a compact union of state spaces of (X^n)_{n∈N}) and Theorem 12.3.1 (with non-compact state space), where (X^n)_{n∈N} are deterministic processes; see also [EN80, EN88] for similar results in discrete time. Further references include, e.g., [AV10, PV01, PV03, VK12]. Theorem 2.13 in the recent paper [KKP14] also treats processes with three different timescales under different assumptions. All of these references assume that the fast variables converge in a suitable sense to an equilibrium process or to an equilibrium distribution (depending on the current state of the slow variables).

This paper is motivated by the observation that in many applications the fast variables do not converge to an equilibrium process or an equilibrium distribution. Still, the slow variables can be approximated by a Markov process. Our intuition is that the slow variables depend on the fast variables only through certain functions, and for the processes to converge it suffices that these functions of the fast variables converge suitably. Now, different functions of the fast variables could converge at different speeds. For this reason we consider three timescales.
More precisely, we assume for every n ∈ N that the pre-generator L^n of the Markov process (X^n, Z^n) satisfies for all f ∈ Dom(L^n) that
\[ L^n f = L^{0,n} f + n \cdot L^{1,n} f + n^2 \cdot L^{2,n} f, \tag{1.1} \]
AMS 2010 subject classification: 60F05 (Primary) 60K37, 60J80 (Secondary) Key words and phrases: Stochastic averaging, random walk in random environment, martingale problem
where Dom(L^n) is the domain of the pre-generator L^n. For every n ∈ N, we think of n² L^{2,n} as the pre-generator of the fast variable Z^n evolving on timescale O(n²), and we think of L^{0,n} + n L^{1,n} as the pre-generator of the slow variable X^n given the fast variable Z^n. The form of this pre-generator indicates that certain functions of the fast variables converge on timescale O(1) and some functions of the fast variables converge on timescale O(n). We will show in our main result, Theorem 2.4 below, under suitable assumptions, that the non-Markov processes (X^n)_{n∈N} converge to a Markov process. Moreover, we give a non-trivial application to branching random walk in random environment in Theorem 3.5 below. Theorem 2.4 below is a corollary of Theorem 2.1 in [Kur92], which is a general result on stochastic averaging. The main contribution of our paper is to demonstrate how to apply the abstract result of [Kur92] to settings where the occupation measures of the driving processes (Z^n)_{n∈N} might not converge. For example, for branching random walk in random environment we will require that the first two moments of the offspring distributions converge appropriately.

We explain our approach with a simple example. Let a random walker on the real line move at constant speed (in R, indicating positive or negative direction) for an exponentially distributed time period, then choose a new speed according to a given distribution, and continue in this fashion. If the exponential waiting times become shorter and shorter and the distributions of the random speeds are suitable, then these processes converge to a Brownian motion with drift. More formally, let N be a Poisson process with rate 1 and, for every n ∈ N, let Z̄^n_1, Z̄^n_2, ... be independent and identically distributed real-valued random variables with distribution π_n having mean μ_n ∈ R and variance σ_n²/2 ∈ [0, ∞). We assume that lim_{n→∞} nμ_n = a ∈ R and that lim_{n→∞} σ_n² = σ² ∈ (0, ∞).
For every n ∈ N define Z^n = (Z^n_t)_{t∈[0,∞)} and X^n = (X^n_t)_{t∈[0,∞)} by
\[ Z^n_t := \bar{Z}^n_{N_{n^2 t}} \qquad\text{and}\qquad X^n_t := n \int_0^t Z^n_s \, ds \qquad \text{for every } t \in [0,\infty). \tag{1.2} \]
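The scaling in (1.2) can be explored numerically. The following sketch is illustrative only: here π_n is taken to be a normal distribution with mean a/n and variance σ²/2 (an arbitrary choice satisfying the moment conditions above); for moderate n the empirical law of X^n_t should already be close to N(at, σ²t).

```python
import numpy as np

def simulate_Xn(n, t, a, sigma, rng):
    """One sample of X^n_t = n * int_0^t Z^n_s ds, where Z^n is piecewise
    constant between the jumps of a rate-1 Poisson process run at speed n^2,
    with i.i.d. speeds of mean a/n and variance sigma^2/2 (illustrative pi_n)."""
    total, s = 0.0, 0.0
    rate = n ** 2
    while s < t:
        hold = rng.exponential(1.0 / rate)          # time until next speed change
        z = rng.normal(a / n, sigma / np.sqrt(2.0))  # current speed ~ pi_n
        total += z * (min(s + hold, t) - s)
        s += hold
    return n * total

rng = np.random.default_rng(0)
a, sigma, t, n = 1.0, 1.0, 1.0, 20
samples = np.array([simulate_Xn(n, t, a, sigma, rng) for _ in range(1000)])
# Empirical mean and variance should be near a*t = 1 and sigma^2*t = 1.
print(samples.mean(), samples.var())
```

The empirical mean is exactly a·t in expectation, while the empirical variance is slightly below σ²t for finite n, in line with the variance formula of Lemma 3.10 below.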
For each n ∈ N the pre-generator of the bivariate Markov process (X^n, Z^n) satisfies for all f ∈ C_c^∞(R², R) that L^n f = n · L^{1,n} f + n² · L^{2,n} f, where
\[ (L^{1,n} f)(x,z) = z \, \frac{\partial f}{\partial x}(x,z), \qquad (L^{2,n} f)(x,z) = \int_{\mathbb{R}} f(x,y) \, \pi_n(dy) - f(x,z) \tag{1.3} \]
for all (x,z) ∈ R². Of course, a corollary of the celebrated Lindeberg–Feller theorem shows that the finite-dimensional distributions of (X^n)_{n∈N} converge to those of a Brownian motion with drift if and only if Lindeberg's condition is satisfied or, equivalently, if for all ε ∈ (0,1) it holds that lim_{n→∞} E[(Z^n_0)² 1_{{|Z^n_0| > εn}}] = 0. In our stochastic averaging result, Theorem 2.4 below, we will obtain convergence in distribution on the space of càdlàg functions, and we will assume that there exists δ ∈ (0,1] such that sup_{n∈N} E[|Z^n_0|^{2+δ}] < ∞. The following heuristic explains with a pre-generator calculation why the only possible limit process of the sequence (X^n)_{n∈N} is (at + σW_t)_{t∈[0,∞)}, where W is a real-valued standard Brownian motion. Since sup_{s∈[0,∞)} sup_{n∈N} E[(Z^n_s)²] < ∞, the occupation measures (see Definition 2.1 below) of the processes (Z^n)_{n∈N} are relatively compact. Moreover, any limit Γ satisfies for all f ∈ C_c(R, R) that
\[ \mathbb{E}\Big[ \int_0^\infty \!\! \int_{\mathbb{R}} f(s) \, z^2 \, \Gamma(ds,dz) \Big] = \lim_{n\to\infty} \mathbb{E}\Big[ \int_0^\infty f(s) \, (Z^n_s)^2 \, ds \Big] = \int_0^\infty f(s) \, ds \, \lim_{m\to\infty} \mathbb{E}\big[(Z^m_0)^2\big] = \int_0^\infty f(s) \, ds \, \frac{\sigma^2}{2}. \]
Consequently, using for every f ∈ C_c²(R, R) and n ∈ N that L^{2,n} f ≡ 0, we get
for all f ∈ C_c²(R, R), approximately in the limit n → ∞, that
\begin{align*}
[0,\infty) \ni t &\mapsto \big(f + \tfrac{1}{n} L^{1,n} f\big)(X^n_t, Z^n_t) - \int_0^t \Big( L^n \big(f + \tfrac{1}{n} L^{1,n} f\big) \Big)(X^n_s, Z^n_s) \, ds \\
&= f(X^n_t) + \tfrac{1}{n} Z^n_t f'(X^n_t) - \int_0^t \big( n L^{1,n} f + L^{1,n} L^{1,n} f + n L^{2,n} L^{1,n} f \big)(X^n_s, Z^n_s) \, ds \\
&= f(X^n_t) + \tfrac{1}{n} Z^n_t f'(X^n_t) - \int_0^t \Big[ (L^{1,n} L^{1,n} f)(X^n_s, Z^n_s) + n \int_{\mathbb{R}} (L^{1,n} f)(X^n_s, y) \, \pi_n(dy) \Big] \, ds \\
&= f(X^n_t) + \tfrac{1}{n} Z^n_t f'(X^n_t) - \int_0^t \Big[ f''(X^n_s) (Z^n_s)^2 + f'(X^n_s) \, n\mu_n \Big] \, ds \tag{1.4} \\
&\approx f(X^n_t) + \tfrac{1}{n} Z^n_t f'(X^n_t) - \int_0^t \Big[ f''(X^n_s) \tfrac{\sigma^2}{2} + f'(X^n_s) \, n\mu_n \Big] \, ds \\
&\approx f(X^n_t) - \int_0^t \Big[ f''(X^n_s) \tfrac{\sigma^2}{2} + f'(X^n_s) \, a \Big] \, ds
\end{align*}
is a local martingale. So we recognize the pre-generator of the Brownian motion with drift (at + σW_t)_{t∈[0,∞)}. Note that the second derivative appears as (σ²/2) f''(x) = lim_{n→∞} E[(L^{1,n} L^{1,n} f)(x, Z^n_0)], where x ∈ R and f ∈ C_c²(R, R). An analogous iterated operator also appears in the case of branching random walk in random environment; see Remark 3.7 for more details. Moreover, we emphasize that the sequence (π_n)_{n∈N} does not need to have any convergence properties except for suitable convergence of the first and second moments.

Next we explain our approach in the abstract setting of the second paragraph of this introduction where, for simplicity, we assume for every n ∈ N that L^{0,n} = 0. For this, fix a function f in a dense subset of the continuous and bounded functions on the state space of the limiting Markov process. We assume for every n ∈ N (identifying f with a function in the domain of L^n which is constant in the second argument) that there exist a function h_n ∈ Dom(L^n) and a measure π_n on the state space of Z^n (typically the ergodic equilibrium of Z^n) such that for all (x,z) in the state space of (X^n, Z^n) it holds that (L^{2,n} f)(x,z) = 0 and
\[ (L^{2,n} h_n)(x,z) = \int (L^{1,n} f)(x, \cdot) \, d\pi_n - (L^{1,n} f)(x,z). \tag{1.5} \]
Then for all n ∈ N it follows from L^n being the pre-generator of (X^n, Z^n) that
\begin{align*}
[0,\infty) \ni t &\mapsto \big(f + \tfrac{1}{n} h_n\big)(X^n_t, Z^n_t) - \int_0^t \Big( L^n \big(f + \tfrac{1}{n} h_n\big) \Big)(X^n_s, Z^n_s) \, ds \\
&= \big(f + \tfrac{1}{n} h_n\big)(X^n_t, Z^n_t) - \int_0^t \Big[ n (L^{1,n} f)(X^n_s, Z^n_s) + (L^{1,n} h_n)(X^n_s, Z^n_s) + n (L^{2,n} h_n)(X^n_s, Z^n_s) \Big] \, ds \\
&= \big(f + \tfrac{1}{n} h_n\big)(X^n_t, Z^n_t) - \int_0^t \Big[ n \int (L^{1,n} f)(X^n_s, \cdot) \, d\pi_n + (L^{1,n} h_n)(X^n_s, Z^n_s) \Big] \, ds \tag{1.6}
\end{align*}
is a local martingale. Moreover, we assume that the sequence (1/n) h_n converges suitably to 0. Now, as in our application in Section 3, the sequence of functions (n · L^{1,n} f)_{n∈N} might not converge, but the sequence of averaged functions does. So we additionally assume for every x in the state space of the limiting Markov process that the limit
\[ \lim_{n\to\infty} n \int (L^{1,n} f)(x, \cdot) \, d\pi_n =: (A_1 f)(x) \tag{1.7} \]
exists. Moreover, we assume for every n ∈ N that there exist a function g_n on the state space of Z^n and a suitable function A_2 f such that for all t ∈ [0,∞), in the limit n → ∞, it holds that
\[ \int_0^t (L^{1,n} h_n)(X^n_s, Z^n_s) \, ds \approx \int_0^t (A_2 f)(X^n_s, g_n(Z^n_s)) \, ds = \int_0^t \!\! \int (A_2 f)(X^n_s, z) \, \Gamma^{g_n(Z^n)}(ds, dz), \tag{1.8} \]
where for each n ∈ N we used the occupation measure Γ^{g_n(Z^n)} of g_n(Z^n). The reason for introducing the functions (g_n)_{n∈N} is that the occupation measures of the processes (Z^n)_{n∈N} might not converge, but the occupation measures of (g_n(Z^n))_{n∈N} (which possibly have a much smaller state space) could converge. Finally, we assume that the sequence (X^n)_{n∈N} satisfies the compact containment condition and, for every t ∈ [0,∞), that the family {g_n(Z^n_s) : n ∈ N, s ∈ [0,t]} is tight. Then the sequence (X^n, Γ^{g_n(Z^n)})_{n∈N} is tight, and (1.6), (1.7) and (1.8) suggest that every limit point (X, Γ) satisfies that
\[ [0,\infty) \ni t \mapsto f(X_t) - \int_0^t (A_1 f)(X_s) \, ds - \int_0^t \!\! \int (A_2 f)(X_s, z) \, \Gamma(ds, dz) \tag{1.9} \]
is a local martingale, suggesting the form of the pre-generator for X. The technicalities of this reasoning are carried out in the next section.
2 Main result
Before we state Theorem 2.4, we fix some notation, including the occupation measure of a stochastic process.

Definition 2.1. Let (E, d_E) be a complete and separable metric space.

1. We denote by C(E, R) (resp. C_b(E, R)) the set of continuous (resp. bounded and continuous) functions f : E → R, and we denote by D([0,∞), E) the set of càdlàg functions f : [0,∞) → E.

2. For a linear function A : Dom(A) ⊆ C_b(E, R) → C(E, R), we say that an E-valued stochastic process X = (X_t)_{t∈[0,∞)} solves the (local) D([0,∞), E)-martingale problem for A if X has càdlàg paths and
\[ \Big( f(X_t) - f(X_0) - \int_0^t (Af)(X_s) \, ds \Big)_{t\in[0,\infty)} \tag{2.1} \]
is a (local) martingale for all f ∈ Dom(A). In this case, we say that A is a pre-generator for the process X.

3. A sequence (X^n_t)_{t∈[0,∞)}, n ∈ N, of E-valued stochastic processes is said to satisfy the compact containment condition if for every ε, t > 0 there exists a compact set K ⊆ E with
\[ \inf_{n\in\mathbb{N}} \mathbb{P}\big( X^n_s \in K \text{ for all } s \in [0,t] \big) > 1 - \varepsilon. \]

4. Let B(E) be the Borel σ-algebra and M(E) the space of measures on (E, B(E)), endowed with the weak topology (convergence in which is denoted by ⇒), and let M_1(E) ⊂ M(E) be the subset of probability measures.

5. Denote the set of occupation measures by
\[ L_m(E) := \big\{ \Gamma \in \mathcal{M}([0,\infty) \times E) : \Gamma([0,t] \times E) = t \text{ for all } t \in [0,\infty) \big\}. \tag{2.2} \]

6. For an E-valued stochastic process X = (X_t)_{t∈[0,∞)} with càdlàg paths, its occupation measure is the unique L_m(E)-valued random variable Γ^X such that for all t ∈ [0,∞) and all B ∈ B(E) it holds that
\[ \Gamma^X([0,t] \times B) = \int_0^t \mathbf{1}_B(X_s) \, ds. \]
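For intuition, the occupation measure of a piecewise-constant càdlàg path can be computed explicitly. The following sketch (a hypothetical helper, not from the paper) returns the mass Γ^X([0,t] × {v}) for each visited value v:

```python
from collections import defaultdict

def occupation_measure(jump_times, values, t):
    """Occupation measure of a piecewise-constant cadlag path:
    X_s = values[k] for s in [jump_times[k], jump_times[k+1]).

    Returns a dict mapping each visited value v to the Lebesgue time spent
    at v before time t; the masses sum to t, as in Definition 2.1.5.
    """
    mass = defaultdict(float)
    for k, v in enumerate(values):
        left = jump_times[k]
        right = jump_times[k + 1] if k + 1 < len(jump_times) else float("inf")
        mass[v] += max(0.0, min(right, t) - min(left, t))
    return dict(mass)

# A path that sits at 1 on [0, 0.5), at -1 on [0.5, 2.0), then at 3:
gamma = occupation_measure([0.0, 0.5, 2.0], [1, -1, 3], t=3.0)
print(gamma)  # {1: 0.5, -1: 1.5, 3: 1.0}
```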
We now describe the setting we are working in as well as some basic assumptions for our main result.

Assumption 2.2.

1. Let (θ_n)_{n∈N} ⊂ (0,∞) be a sequence of real numbers with θ_n → ∞ as n → ∞.

2. Let (S, d_S) and (E, d_E) be complete and separable metric spaces and let S_1, S_2, ... ⊆ S and E_1, E_2, ... ⊆ E be Borel measurable subsets. For every n ∈ N, let L^n : Dom(L^n) ⊆ C(S_n × E_n, R) → C(S_n × E_n, R) be a linear function and let L^{0,n}, L^{1,n}, L^{2,n} : Dom(L^n) → C(S_n × E_n, R) be functions such that for all n ∈ N it holds that
\[ L^n = L^{0,n} + \theta_n L^{1,n} + \theta_n^2 L^{2,n}. \tag{2.3} \]

3. For every n ∈ N, let (X^n, Z^n) = (X^n_t, Z^n_t)_{t∈[0,∞)} be a solution of the D([0,∞), S_n × E_n)-martingale problem for L^n.

4. The sequence (X^n)_{n∈N} of S-valued stochastic processes satisfies the compact containment condition.

5. Let (H, d_H) be a complete and separable metric space and let g_n : E_n → H, n ∈ N, be Borel measurable functions such that the family (Γ^{g_n(Z^n)})_{n∈N} is tight in L_m(H).

Remark 2.3. Lemma 1.3 in [Kur92] implies that Assumption 2.2.5 is fulfilled if for every t ∈ (0,∞) the family {g_n(Z^n_s) : n ∈ N, s ∈ [0,t]} is relatively compact.
Theorem 2.4. Let the setting from Assumption 2.2 be given, let Dom(A) ⊆ C_b(S, R) be a dense set in the topology of uniform convergence on compact sets, and let A_1 : Dom(A) → C_b(S, R) and A_2 : Dom(A) → C(S × H, R) be functions. Suppose for every f ∈ Dom(A) that there exist f_n, h_n ∈ Dom(L^n), n ∈ N, and π_n ∈ M_1(E_n), n ∈ N, such that for all n ∈ N it holds that L^{2,n} f_n = 0, such that for all t ∈ [0,∞)
\[ \lim_{n\to\infty} \mathbb{E}\Big[ \sup_{s\in[0,t]} \Big| f(X^n_s) - \big(f_n + \tfrac{1}{\theta_n} h_n\big)(X^n_s, Z^n_s) \Big| \Big] = 0, \tag{2.4} \]
and such that the following integrals are well-defined and
\[ \lim_{n\to\infty} \mathbb{E}\bigg[ \sup_{s\in[0,t]} \Big| \int_0^s \big(\theta_n L^{2,n} h_n + \theta_n L^{1,n} f_n + L^{0,n} f_n\big)(X^n_r, Z^n_r) - \int_{E_n} \big(\theta_n L^{1,n} f_n + L^{0,n} f_n\big)(X^n_r, y) \, \pi_n(dy) \, dr \Big| \bigg] = 0. \tag{2.5} \]
Moreover, assume for all t ∈ [0,∞) that there exists p ∈ (1,∞) with
\[ \sup_{n\in\mathbb{N}} \int_0^t \mathbb{E}\big[ \big| (A_2 f)(X^n_s, g_n(Z^n_s)) \big|^p \big] \, ds < \infty \tag{2.6} \]
and assume for all t ∈ [0,∞) that
\[ \lim_{n\to\infty} \mathbb{E}\bigg[ \sup_{s\in[0,t]} \Big| \int_0^s (A_1 f)(X^n_r) - \int_{E_n} \big(\theta_n L^{1,n} f_n + L^{0,n} f_n\big)(X^n_r, y) \, \pi_n(dy) \, dr \Big| \bigg] = 0, \tag{2.7} \]
\[ \lim_{n\to\infty} \mathbb{E}\bigg[ \sup_{s\in[0,t]} \Big| \int_0^s \Big[ (A_2 f)(X^n_r, g_n(Z^n_r)) - \big( L^{1,n} h_n + \tfrac{1}{\theta_n} L^{0,n} h_n \big)(X^n_r, Z^n_r) \Big] \, dr \Big| \bigg] = 0. \tag{2.8} \]
Then (X^n, Γ^{g_n(Z^n)})_{n∈N} is relatively compact, and for every limit point ((X_t)_{t∈[0,∞)}, Γ) and for every f ∈ Dom(A) it holds that
\[ \Big( f(X_t) - \int_0^t (A_1 f)(X_s) \, ds - \int_0^t \!\! \int_H (A_2 f)(X_s, y) \, \Gamma(ds, dy) \Big)_{t\in[0,\infty)} \tag{2.9} \]
is a martingale.

Remark 2.5 (How to choose (f_n)_{n∈N}, (g_n)_{n∈N} and (h_n)_{n∈N} in applications). Let the setting from Theorem 2.4 be given. Assume for every n ∈ N and every f ∈ Dom(A) that f|_{S_n} ∈ Dom(L^n), and assume for every n ∈ N that there exists π_n ∈ M_1(E_n) such that for every f ∈ Dom(L^n) and every (x,z) ∈ S_n × E_n it holds that (L^{2,n} f)(x,z) = ∫_{E_n} f(x,y) π_n(dy) − f(x,z). Then (2.5) holds with f_n := f|_{S_n}, n ∈ N, and h_n := L^{1,n} f_n, n ∈ N. Moreover, if for every t ∈ [0,∞) we have that
\[ \lim_{n\to\infty} \sup_{x\in S_n} \Big| (A_1 f)(x) - \int_{E_n} \big(\theta_n L^{1,n} f_n + L^{0,n} f_n\big)(x, y) \, \pi_n(dy) \Big| = 0, \tag{2.10} \]
\[ \lim_{n\to\infty} \mathbb{E}\bigg[ \int_0^t \sup_{x\in S_n} \Big| (A_2 f)(x, g_n(Z^n_r)) - \big( L^{1,n} h_n + \tfrac{1}{\theta_n} L^{0,n} h_n \big)(x, Z^n_r) \Big| \, dr \bigg] = 0, \tag{2.11} \]
then the assumptions (2.7) and (2.8) are satisfied. For more general operators (L^{2,n})_{n∈N}, it is sufficient for condition (2.5) to solve for every n ∈ N the Poisson equation
\[ L^{2,n} h_n = \int_{E_n} \big( L^{1,n} f_n + \tfrac{1}{\theta_n} L^{0,n} f_n \big)(\cdot, y) \, \pi_n(dy) - \big( L^{1,n} f_n + \tfrac{1}{\theta_n} L^{0,n} f_n \big). \tag{2.12} \]
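For instance, in the resampling case of Remark 2.5 the choice h_n = L^{1,n} f_n can be verified by a short computation (a sketch; it uses the additional assumption, satisfied in the application of Section 3, that L^{0,n} f_n does not depend on the second argument):

```latex
% Resampling case: (L^{2,n}g)(x,z) = \int_{E_n} g(x,y)\,\pi_n(dy) - g(x,z).
% With h_n := L^{1,n}f_n, the terms of order \theta_n combine to
\theta_n (L^{2,n}h_n)(x,z) + \theta_n (L^{1,n}f_n)(x,z)
  \;=\; \theta_n \int_{E_n} (L^{1,n}f_n)(x,y)\,\pi_n(dy),
% so the integrand appearing in (2.5) reduces to
(L^{0,n}f_n)(x,z) \;-\; \int_{E_n} (L^{0,n}f_n)(x,y)\,\pi_n(dy),
% which vanishes identically whenever L^{0,n}f_n does not depend
% on its second argument.
```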
This Poisson equation has been frequently studied, e.g. in the context of Stein's method, and there exist conditions implying existence of a solution; see, e.g., [PV01, PV03, VK12]. Moreover, if proving tightness of (Γ^{Z^n})_{n∈N} in L_m(E) is feasible, then one can choose H = E and (g_n)_{n∈N} to be the identity functions in Assumption 2.2.5. In our application of Theorem 2.4 in Section 3 below, informally speaking, the processes (X^n)_{n∈N} sense the equilibria of the processes (Z^n)_{n∈N} only via certain real-valued functions (g_n)_{n∈N}. Proving tightness of (Γ^{g_n(Z^n)})_{n∈N} in L_m(R) is in our application easier than proving tightness of (Γ^{Z^n})_{n∈N} in L_m(E).

Proof of Theorem 2.4. We will apply Theorem 2.1 in [Kur92] to the sequence ((X^n, g_n(Z^n)))_{n∈N} and first check its assumptions. By Assumption 2.2.4, the sequence (X^n)_{n∈N} satisfies the compact containment condition. Note also that the proof of Theorem 2.1 in [Kur92] only requires the relative compactness of (Γ^{g_n(Z^n)})_{n∈N}, so the stronger assumption of relative compactness of the family {g_n(Z^n_t) : n ∈ N, t ∈ [0,∞)} is not needed. Next, fix f ∈ Dom(A) for the rest of the proof. By assumption, there exist f_n, h_n ∈ Dom(L^n), n ∈ N, and π_n ∈ M_1(E_n), n ∈ N, such that L^{2,n} f_n = 0 for all n ∈ N and such that (2.4) and
(2.5) hold. For every n ∈ N, Dom(L^n) is a vector space, so that f_n + (1/θ_n) h_n ∈ Dom(L^n). Define
\[ \varepsilon^n_t := \big(f_n + \tfrac{1}{\theta_n} h_n\big)(X^n_t, Z^n_t) - f(X^n_t) + \int_0^t \Big[ (A_1 f)(X^n_s) + (A_2 f)(X^n_s, g_n(Z^n_s)) - \Big( L^n \big(f_n + \tfrac{1}{\theta_n} h_n\big) \Big)(X^n_s, Z^n_s) \Big] \, ds \tag{2.13} \]
for all t ∈ [0,∞) and all n ∈ N. Then Assumption 2.2.3 implies for every n ∈ N that the process
\begin{align*}
\Big( f(X^n_t) &- \int_0^t \big[ (A_1 f)(X^n_s) + (A_2 f)(X^n_s, g_n(Z^n_s)) \big] \, ds + \varepsilon^n_t \Big)_{t\in[0,\infty)} \\
&= \Big( \big(f_n + \tfrac{1}{\theta_n} h_n\big)(X^n_t, Z^n_t) - \int_0^t \Big( L^n \big(f_n + \tfrac{1}{\theta_n} h_n\big) \Big)(X^n_s, Z^n_s) \, ds \Big)_{t\in[0,\infty)} \tag{2.14}
\end{align*}
is a martingale. By Assumption (2.6) and global boundedness of A_1 f, for every t ∈ [0,∞) there exists a real number p ∈ (1,∞) such that
\[ \sup_{n\in\mathbb{N}} \int_0^t \mathbb{E}\big[ \big| (A_1 f)(X^n_s) + (A_2 f)(X^n_s, g_n(Z^n_s)) \big|^p \big] \, ds < \infty. \tag{2.15} \]
Moreover, recall that L^n is a linear function and that L^{2,n} f_n = 0, so that
\begin{align*}
\Big( L^n \big(f_n + \tfrac{1}{\theta_n} h_n\big) \Big)(x,z) &= (L^n f_n)(x,z) + \tfrac{1}{\theta_n} (L^n h_n)(x,z) \\
&= \big( L^{0,n} f_n + \theta_n L^{1,n} f_n \big)(x,z) + \big( L^{1,n} h_n + \tfrac{1}{\theta_n} L^{0,n} h_n \big)(x,z) + \theta_n (L^{2,n} h_n)(x,z) \\
&= \int_{E_n} \big( \theta_n L^{1,n} f_n + L^{0,n} f_n \big)(x,y) \, \pi_n(dy) + \big( L^{1,n} h_n + \tfrac{1}{\theta_n} L^{0,n} h_n \big)(x,z) \tag{2.16} \\
&\quad + \big( \theta_n L^{2,n} h_n + L^{0,n} f_n + \theta_n L^{1,n} f_n \big)(x,z) - \int_{E_n} \big( \theta_n L^{1,n} f_n + L^{0,n} f_n \big)(x,y) \, \pi_n(dy).
\end{align*}
Therefore, we infer for all t ∈ [0,∞) and all n ∈ N that
\begin{align*}
\mathbb{E}\Big[ \sup_{s\in[0,t]} |\varepsilon^n_s| \Big] &\le \mathbb{E}\Big[ \sup_{s\in[0,t]} \Big| \big(f_n + \tfrac{1}{\theta_n} h_n\big)(X^n_s, Z^n_s) - f(X^n_s) \Big| \Big] \\
&\quad + \mathbb{E}\bigg[ \sup_{s\in[0,t]} \Big| \int_0^s (A_1 f)(X^n_r) - \int_{E_n} \big( \theta_n L^{1,n} f_n + L^{0,n} f_n \big)(X^n_r, y) \, \pi_n(dy) \, dr \Big| \bigg] \\
&\quad + \mathbb{E}\bigg[ \sup_{s\in[0,t]} \Big| \int_0^s (A_2 f)(X^n_r, g_n(Z^n_r)) - \big( L^{1,n} h_n + \tfrac{1}{\theta_n} L^{0,n} h_n \big)(X^n_r, Z^n_r) \, dr \Big| \bigg] \tag{2.17} \\
&\quad + \mathbb{E}\bigg[ \sup_{s\in[0,t]} \Big| \int_0^s \big( \theta_n L^{2,n} h_n + \theta_n L^{1,n} f_n + L^{0,n} f_n \big)(X^n_r, Z^n_r) - \int_{E_n} \big( \theta_n L^{1,n} f_n + L^{0,n} f_n \big)(X^n_r, y) \, \pi_n(dy) \, dr \Big| \bigg].
\end{align*}
Then the assumptions (2.4), (2.5), (2.7) and (2.8) yield for every t ∈ [0,∞) that the left-hand side of (2.17) converges to 0 as n → ∞. Having checked all assumptions of Theorem 2.1 in [Kur92], the assertion now follows from that theorem. This finishes the proof of Theorem 2.4.
3 Branching random walk in random environment
We will give a non-trivial application of our main theorem. More precisely, we analyse the diffusion approximation of a sequence of branching random walks in random environment (BRWRE). Informally speaking, we consider BRWRE where the offspring distribution changes more and more often over time and where the involved offspring distributions are independent of the deme and of previous offspring distributions. Note that the corresponding result in a non-spatial setting, for branching processes in random environment, was shown in [Kur78]; see also [Kei75, Bor03, BS11]. The limiting diffusion process was studied in [BH12, Hut11, BMS13]. Next we describe the dynamics of BRWRE.

Definition 3.1 (Ingredients for BRWRE).

1. Let D be a countable set (of demes).
2. Let γ : D → (0,∞). Define ‖x‖_{ℓ^γ} := Σ_{i∈D} γ_i |x_i| for every x ∈ R^D and
\[ S := \ell^\gamma := \ell^\gamma(\mathbb{R}^D) := \big\{ x \in \mathbb{R}^D : \|x\|_{\ell^\gamma} < \infty \big\}. \tag{3.1} \]
Note that (ℓ^γ, ‖·‖_{ℓ^γ}) is a separable Banach space.

3. Let a : D × D → [0,∞) be a function (the jump rates for the random walk) satisfying for all j ∈ D and some μ ∈ (0,∞) that Σ_{i∈D} a(j,i) = μ = Σ_{i∈D} a(i,j) and, for some c ∈ [0,∞) and all j ∈ D,
\[ \sum_{i\in D} \gamma_i \, a(i,j) \le c \, \gamma_j. \tag{3.2} \]
4. For every n ∈ N let S_n := S ∩ ((1/n) · N_0)^D and let E be the set of probability measures on (N_0, {A ⊆ N_0}). For z ∈ E, let m(z) and v(z) be the mean and the variance of z, respectively.

Definition 3.2 (Scaled BRWRE). Let the setting from Definition 3.1 be given, let β ∈ (0,∞) and let N be a Poisson process with rate 1. For every n ∈ N, let (Z̄^n_i)_{i∈N_0} be independent and identically distributed, E-valued random variables which are independent of N, and define Z^n_t := Z̄^n_{N_{n²t/β²}} for all t ∈ [0,∞). Then, for every n ∈ N and conditioned on Z^n = z ∈ E, let nX^n be a branching random walk with migration matrix a^T, with migration and branching rate 1 and with offspring distribution process z; that is, let X^n = (X^n(i))_{i∈D} be a (time-inhomogeneous) Markov process with càdlàg sample paths and with state space S_n such that, given X^n_t(i) = x ∈ N_0/n, each of the xn ∈ N_0 individuals at time t ∈ [0,∞) in deme i ∈ D

• is replaced (independently of all other individuals) at rate 1 by k ∈ N_0 individuals with probability z_t(k), and

• jumps to deme j at rate a(j,i) independently of all other events.

For every n ∈ N, nX^n conditioned on Z^n is a continuous-time BRW with time-dependent offspring distributions and, therefore, is well-defined. To avoid a distinction of cases, we adopt the convention that zero times an undefined quantity is zero. Thereby, expressions such as x f(x − 1/N) are defined for x = 0, all N ∈ N and all functions f : [0,∞) → R. For every n ∈ N, the pre-generator L^n of the time-homogeneous Markov process (X^n, Z^n) has domain
\[ \mathrm{Dom}(L^n) := \big\{ f : S_n \times E \to \mathbb{R} \,\big|\, f \text{ is bounded and depends only on finitely many coordinates} \big\}, \tag{3.3} \]
has the form (2.3) with θ_n = n and, using the i-th unit vector e_i, i ∈ D, for all f ∈ Dom(L^n) and all (x,z) ∈ S_n × E it holds that
\begin{align*}
(L^{0,n} f)(x,z) &= n \sum_{i,j\in D} a(j,i) \, x_i \, \big( f(x - \tfrac{1}{n} e_i + \tfrac{1}{n} e_j, z) - f(x,z) \big), \\
(L^{1,n} f)(x,z) &= n \sum_{i\in D} \sum_{l=0}^\infty x_i \, \big( f(x + \tfrac{l-1}{n} e_i, z) - f(x,z) \big) \, z(l), \tag{3.4} \\
(L^{2,n} f)(x,z) &= \mathbb{E}\big[ f(x, Z^n_0) \big] - f(x,z).
\end{align*}
In order to obtain convergence, we assume the following properties of (Z^n_0)_{n∈N}.

Assumption 3.3 (The distribution of (Z^n_0)_{n∈N}). Let the setting from Definition 3.1 be given. For the E-valued sequence (Z^n_0)_{n∈N}, we assume that there exist α ∈ R, σ_b², σ_e² ∈ [0,∞) and p ∈ (2,∞) such that
\[ \lim_{n\to\infty} n \cdot \mathbb{E}\big[ m(Z^n_0) - 1 \big] = \alpha, \qquad \lim_{n\to\infty} \mathrm{Var}\big( m(Z^n_0) \big) = \sigma_e^2, \qquad \lim_{n\to\infty} \mathbb{E}\big[ v(Z^n_0) \big] = \sigma_b^2, \qquad \sup_{n\in\mathbb{N}} \mathbb{E}\big[ |m(Z^n_0) - 1|^p \big] < \infty. \]
Example 3.4 (How to fulfill Assumption 3.3). Let the setting from Definition 3.1 be given, let σ_e² < 1, α ∈ R, and let (Z^n_0)_{n∈N} be a sequence of E-valued random variables such that for all n ∈ N ∩ [|α|/(1−σ_e), ∞) it holds that
\[ \mathbb{P}\Big( Z^n_0(0) = \tfrac{1}{2} - \tfrac{\alpha}{2n} \mp \tfrac{\sigma_e}{2}, \; Z^n_0(2) = \tfrac{1}{2} + \tfrac{\alpha}{2n} \pm \tfrac{\sigma_e}{2} \Big) = \tfrac{1}{2}. \]
Here, we get for all n ∈ N ∩ [|α|/(1−σ_e), ∞) that
\[ \mathbb{P}\Big( m(Z^n_0) - 1 = \tfrac{\alpha}{n} \pm \sigma_e \Big) = \mathbb{P}\Big( v(Z^n_0) = 1 - \big( \tfrac{\alpha}{n} \pm \sigma_e \big)^2 \Big) = \tfrac{1}{2}. \]
Therefore (Z^n_0)_{n∈N} satisfies Assumption 3.3 with σ_b² = 1 − σ_e². For general σ_b², σ_e², we have to choose distributions with support bigger than {0, 2}.

The following theorem, Theorem 3.5, proves weak convergence of branching random walk in global random environment to a system of interacting branching diffusions in global random environment. The one-dimensional case |D| = 1 is well known and has been established in [Kur78] and, under more general assumptions, in [Bor03]. To the best of our knowledge, interacting branching diffusions in (non-trivial) random environment have not been studied before.

Theorem 3.5 (Convergence of branching random walk in random environment). Let the setting from Definition 3.2 be given and let (Z^n_0)_{n∈N} satisfy Assumption 3.3. If X^n(0) ⇒ X(0) as n → ∞, then X^n ⇒ X as n → ∞, where X = (X(i))_{i∈D} is the unique weak solution of
\[ dX_t(i) = \sum_{j\in D} a(i,j) \big( X_t(j) - X_t(i) \big) \, dt + \big( \alpha + \sigma_e^2 \big) X_t(i) \, dt + \sqrt{\sigma_b^2 \, X_t(i)} \, dW_t(i) + \sqrt{2 \sigma_e^2} \, X_t(i) \, dW'_t, \tag{3.5} \]
where W' and (W(i))_{i∈D} are independent real-valued Brownian motions.
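For readers who want to experiment with the limit, the following is a minimal Euler–Maruyama sketch of (3.5) on two demes. The parameters, the step size and the clipping at 0 are ad-hoc illustrative choices, not part of the theorem; note that the branching noise is independent per deme while the environmental noise uses one common Brownian motion.

```python
import numpy as np

def step_brwre_diffusion(x, a, alpha, sigma_b2, sigma_e2, dt, rng):
    """One Euler-Maruyama step for the limiting SDE (3.5) on finitely many
    demes: migration, drift (alpha + sigma_e^2) X, branching noise
    sqrt(sigma_b^2 X(i)) dW(i), and common environmental noise
    sqrt(2 sigma_e^2) X(i) dW'."""
    migration = a @ x - a.sum(axis=1) * x  # sum_j a(i,j) (x_j - x_i)
    dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # independent per deme
    dWp = rng.normal(0.0, np.sqrt(dt))               # one common W'
    x_new = (x + (migration + (alpha + sigma_e2) * x) * dt
             + np.sqrt(sigma_b2 * np.maximum(x, 0.0)) * dW
             + np.sqrt(2.0 * sigma_e2) * x * dWp)
    return np.maximum(x_new, 0.0)  # the Euler scheme can overshoot below 0

rng = np.random.default_rng(1)
a = np.array([[0.0, 1.0], [1.0, 0.0]])  # two demes, symmetric migration
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = step_brwre_diffusion(x, a, alpha=0.0, sigma_b2=1.0,
                             sigma_e2=0.5, dt=1e-3, rng=rng)
print(x)
```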
Remark 3.6 (Global versus local environment). The processes (Z^n)_{n∈N} describe global environments in the sense that at any time all demes have the same offspring distribution. Another case would be that the environments on different demes are different, e.g. by using independent environments. We note that, although results can easily be conjectured, approximation results for the latter case of a local environment are harder to obtain. In particular, it is not clear how to adapt our proof of the compact containment condition.

Remark 3.7. Analogously to the example in the introduction, the summands in (3.5) involving σ_e² appear as the limit of certain iterated operators. More precisely, for f ∈ C_c²(R^D, R) depending only on finitely many coordinates and for x ∈ S it holds that
\[ \sigma_e^2 \bigg( \sum_{i\in D} x_i \, \frac{\partial f}{\partial x_i}(x) + \sum_{i,j\in D} x_i x_j \, \frac{\partial^2 f}{\partial x_i \, \partial x_j}(x) \bigg) = \lim_{n\to\infty} \mathbb{E}\big[ (L^{1,n} L^{1,n} f)(x, Z^n_0) \big]. \tag{3.6} \]
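A heuristic derivation of (3.6) reads as follows (a sketch with error terms omitted; a rigorous proof requires the moment bounds of Assumption 3.3):

```latex
% First-order expansion of L^{1,n} applied to f (depending on x only):
(L^{1,n}f)(x,z) \;=\; (m(z)-1) \sum_{i\in D} x_i\,\frac{\partial f}{\partial x_i}(x)
   \;+\; O(\tfrac1n).
% Applying L^{1,n} once more and using the product rule:
(L^{1,n}L^{1,n}f)(x,z) \;=\; (m(z)-1)^2 \Big( \sum_{i\in D} x_i\,
   \frac{\partial f}{\partial x_i}(x)
   + \sum_{i,j\in D} x_i x_j\,\frac{\partial^2 f}{\partial x_i\,\partial x_j}(x)
   \Big) \;+\; O(\tfrac1n).
% Finally, E[(m(Z_0^n)-1)^2] = Var(m(Z_0^n)) + (E[m(Z_0^n)-1])^2
%   -> sigma_e^2 + 0   by Assumption 3.3, which yields (3.6).
```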
See also (3.32) for more details.

Before we give the proof, we provide an auxiliary result which will be useful for proving (2.4).

Lemma 3.8. Let X_0, X_1, X_2, ... be independent and identically distributed [0,∞)-valued random variables and let M be a Poisson distributed random variable with parameter ρ ∈ [0,∞) which is independent of the sequence X_0, X_1, .... Then for all α ∈ (0,∞) and all p ∈ (1,∞) it holds that
\[ \mathbb{E}\Big[ \max_{i\in\{0,1,\ldots,M\}} X_i \Big] \le 2\alpha + (1+\rho) \, \mathbb{E}[X_1^p] \, \frac{\alpha^{1-p}}{p-1}. \tag{3.7} \]

Proof of Lemma 3.8. Independence of all involved random variables implies for all x ∈ (0,∞) that
\begin{align*}
\mathbb{P}\Big( \max_{i\in\{0,1,\ldots,M\}} X_i > x \Big) &= 1 - \mathbb{P}\Big( \max_{i\in\{0,1,\ldots,M\}} X_i \le x \Big) \\
&= 1 - \mathbb{E}\big[ \big( \mathbb{P}(X_1 \le x) \big)^{1+M} \big] \tag{3.8} \\
&= 1 - \exp\big( -\rho (1 - \mathbb{P}(X_1 \le x)) \big) \, \mathbb{P}(X_1 \le x) \\
&\le \mathbb{P}(X_1 > x) + 1 - \exp\big( -\rho \, \mathbb{P}(X_1 > x) \big).
\end{align*}
Consequently, Fubini's theorem and the Markov inequality imply for all α ∈ (0,∞) and all p ∈ (1,∞) that
\begin{align*}
\mathbb{E}\Big[ \max_{i\in\{0,1,\ldots,M\}} X_i \Big] &= \int_0^\infty \mathbb{P}\Big( \max_{i\in\{0,1,\ldots,M\}} X_i > x \Big) \, dx \\
&\le \int_0^\infty \mathbb{P}(X_1 > x) + 1 - \exp\big( -\rho \, \mathbb{P}(X_1 > x) \big) \, dx \\
&\le \int_0^\alpha 2 \, dx + (1+\rho) \int_\alpha^\infty \mathbb{P}(X_1 > x) \, dx \tag{3.9} \\
&\le \int_0^\alpha 2 \, dx + (1+\rho) \int_\alpha^\infty x^{-p} \, \mathbb{E}[X_1^p] \, dx \\
&= 2\alpha + (1+\rho) \, \mathbb{E}[X_1^p] \, \frac{\alpha^{1-p}}{p-1}.
\end{align*}
This finishes the proof of Lemma 3.8.
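As a sanity check of (3.7), one can compare a Monte Carlo estimate against the right-hand side (a sketch with X_i exponentially distributed and ρ = 3, p = 2, α = 1, all illustrative choices; for Exp(1) one has E[X²] = 2):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, p, alpha = 3.0, 2.0, 1.0
trials = 20000

# X_i ~ Exp(1), M ~ Poisson(rho); estimate E[max_{i in {0,...,M}} X_i].
maxima = np.empty(trials)
for k in range(trials):
    m = rng.poisson(rho)
    maxima[k] = rng.exponential(1.0, size=m + 1).max()
estimate = maxima.mean()

# Right-hand side of (3.7): 2*alpha + (1+rho)*E[X_1^p]*alpha^(1-p)/(p-1).
bound = 2 * alpha + (1 + rho) * 2.0 * alpha ** (1 - p) / (p - 1)
print(estimate, bound)
```

The bound is crude (here it equals 10 while the true expectation is near 2), but that is all the proof of Theorem 3.5 needs.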
Our next lemma states a well-known characterization of compact subsets of S; see, e.g., (a simplified version of) Lemma 3.3 in [KM12].

Lemma 3.9. A subset K ⊆ S is relatively compact if and only if

(i) sup_{x∈K} ‖x‖_{ℓ^γ} < ∞ and

(ii) for every δ ∈ (0,∞) there exists a finite subset E ⊆ D such that sup_{x∈K} ‖x 1_{D\E}‖_{ℓ^γ} < δ.

For proving the compact containment condition we will use that the associated random walks satisfy the compact containment condition. This is the subject of Lemma 3.11 below. First we derive a formula for the variance of linearly interpolated random walks.

Lemma 3.10. Let (Y_i)_{i∈N_0} be a sequence of independent and identically distributed real-valued random variables with finite second moment E[(Y_0)²] < ∞. Moreover, let ξ = (ξ_t)_{t≥0} be a Poisson process with parameter ρ ∈ (0,∞) and jump times (T_k)_{k∈N} which is independent of (Y_i)_{i∈N_0}. Then, with T_0 := 0, almost surely it holds for all t ∈ [0,∞) that E[∫_0^t Y_{ξ_r} dr | ξ] = E[Y_0] t and
\[ \mathrm{Var}\Big( \int_0^t Y_{\xi_r} \, dr \, \Big| \, \xi \Big) = \mathrm{Var}[Y_0] \sum_{k=0}^\infty (t \wedge T_{k+1} - t \wedge T_k)^2, \qquad \mathrm{Var}\Big( \int_0^t Y_{\xi_r} \, dr \Big) = \mathrm{Var}[Y_0] \, \Big( \frac{2t}{\rho} - \frac{2}{\rho^2} \big( 1 - e^{-\rho t} \big) \Big). \tag{3.10} \]
Proof. Fix t ∈ [0,∞) for the rest of the proof. Fubini's theorem and the assumption that (Y_i)_{i∈N_0} are identically distributed imply that E[∫_0^t Y_{ξ_r} dr | ξ] = E[Y_0] t. Next, note that the real-valued random variables T_{k+1} − T_k, k ∈ N_0, are independent and exponentially distributed with parameter ρ. Writing [0,t] = ∪_{k=0}^∞ [t∧T_k, t∧T_{k+1}] and using independence, we get that
\begin{align*}
\mathrm{Var}\Big( \int_0^t Y_{\xi_r} \, dr \, \Big| \, \xi \Big) &= \mathbb{E}\bigg[ \Big( \int_0^t Y_{\xi_r} \, dr - \mathbb{E}\Big[ \int_0^t Y_{\xi_r} \, dr \, \Big| \, \xi \Big] \Big)^2 \, \bigg| \, \xi \bigg] \\
&= \mathbb{E}\bigg[ \sum_{k,l=0}^\infty \bigg( \int_{t\wedge T_k}^{t\wedge T_{k+1}} Y_k - \mathbb{E}[Y_k] \, dr \bigg) \bigg( \int_{t\wedge T_l}^{t\wedge T_{l+1}} Y_l - \mathbb{E}[Y_l] \, dr \bigg) \, \bigg| \, \xi \bigg] \tag{3.11} \\
&= \mathbb{E}\big[ (Y_0 - \mathbb{E}[Y_0])^2 \big] \sum_{k=0}^\infty (t \wedge T_{k+1} - t \wedge T_k)^2.
\end{align*}
Next we analyze the expected sum of squared waiting times up to time t. Conditional on ξ_t = m, the m jump times are independently and uniformly distributed over [0,t] for every m ∈ N. Since the interjump times are exchangeable, we obtain that
\begin{align*}
\sum_{k=0}^\infty \mathbb{E}\big[ (t \wedge T_{k+1} - t \wedge T_k)^2 \big] &= \sum_{m=0}^\infty \mathbb{P}[\xi_t = m] \sum_{k=0}^m \mathbb{E}\big[ (t \wedge T_{k+1} - t \wedge T_k)^2 \, \big| \, \xi_t = m \big] \\
&= \sum_{m=0}^\infty \mathbb{P}[\xi_t = m] \, (m+1) \, \mathbb{E}\big[ (t \wedge T_1)^2 \, \big| \, \xi_t = m \big] = \mathbb{E}\big[ (\xi_t + 1)(t \wedge T_1)^2 \big] \\
&= \mathbb{P}[\xi_t = 0] \, t^2 + \int_0^t \mathbb{E}[\xi_{t-s} + 2] \, s^2 \rho e^{-\rho s} \, ds = e^{-\rho t} t^2 + \int_0^t \big( \rho(t-s) + 2 \big) s^2 \rho e^{-\rho s} \, ds, \tag{3.12}
\end{align*}
where we used the Markov property of the Poisson process: given T_1 = s ≤ t, we have ξ_t = 1 + ξ̃_{t−s} with ξ̃ a fresh copy of ξ, so that E[ξ_t + 1 | T_1 = s] = ρ(t−s) + 2. An elementary calculation yields
\[ \int_0^t \big( \rho(t-s) + 2 \big) s^2 \rho e^{-\rho s} \, ds = \frac{2t}{\rho} - \frac{2}{\rho^2} \big( 1 - e^{-\rho t} \big) - e^{-\rho t} t^2. \tag{3.13} \]
More precisely, the second derivatives with respect to t of both sides are equal to 4ρt e^{−ρt} − ρ²t² e^{−ρt}, the first derivatives of both sides vanish at t = 0, and both sides vanish at t = 0. This implies (3.13). Taking expectations in (3.11) and inserting (3.13) into (3.12) and (3.12) into (3.11) results in
\[ \mathrm{Var}\Big( \int_0^t Y_{\xi_r} \, dr \Big) = \mathrm{Var}[Y_0] \, \Big( \frac{2t}{\rho} - \frac{2}{\rho^2} \big( 1 - e^{-\rho t} \big) \Big), \tag{3.14} \]
which implies the assertion and finishes the proof of Lemma 3.10.
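The closed form Var(∫_0^t Y_{ξ_r} dr) = Var[Y_0](2t/ρ − (2/ρ²)(1 − e^{−ρt})) can be confirmed by simulation (a sketch with standard normal Y_i and illustrative parameters ρ = 2, t = 1.5):

```python
import numpy as np

rng = np.random.default_rng(3)
rho, t, trials = 2.0, 1.5, 20000

def integral_sample():
    """One sample of int_0^t Y_{xi_r} dr with Y_i ~ N(0,1) i.i.d. and
    xi a rate-rho Poisson process independent of (Y_i)."""
    total, s = 0.0, 0.0
    while s < t:
        hold = rng.exponential(1.0 / rho)  # exponential interjump time
        total += rng.normal() * (min(s + hold, t) - s)
        s += hold
    return total

samples = np.array([integral_sample() for _ in range(trials)])

# Closed form of the variance with Var[Y_0] = 1:
closed = 2 * t / rho - (2 / rho ** 2) * (1 - np.exp(-rho * t))
print(samples.var(), closed)
```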
The next lemma, Lemma 3.11, is a first (simple) application to continuously interpolated random walks in continuous time.

Lemma 3.11. Let N be a Poisson process with rate 1 and, for every n ∈ N, let Z̄^n_0, Z̄^n_1, Z̄^n_2, ... be independent and identically distributed real-valued random variables with distribution π_n having mean μ_n ∈ R and variance σ_n²/2 ∈ [0,∞). For every n ∈ N define (Z^n_t)_{t∈[0,∞)} by Z^n_t = Z̄^n_{N_{n²t}} and define (X^n_t)_{t∈[0,∞)} for every t ∈ [0,∞) by X^n_t := n ∫_0^t Z^n_s ds. If sup_{n∈N} n|μ_n| < ∞ and sup_{n∈N} σ_n² < ∞, then (X^n)_{n∈N} satisfies the compact containment condition.
Proof. Let (T_k)_{k∈N_0} denote the jump times of the Poisson process N, with T_0 := 0. For every n ∈ N, conditioned on N, the process
\[ \mathbb{N}_0 \ni k \mapsto n \int_0^{T_k/n^2} Z^n_s - \mathbb{E}[Z^n_0] \, ds = \frac{1}{n} \int_0^{T_k} \bar{Z}^n_{N_s} - \mathbb{E}[Z^n_0] \, ds = \frac{1}{n} \sum_{l=0}^{k-1} (T_{l+1} - T_l) \big( \bar{Z}^n_l - \mathbb{E}[Z^n_0] \big) \tag{3.15} \]
is a martingale due to integrability and independence of the increments. This, the fact that the supremum over a linearly interpolated discretely defined function is equal to the supremum over this function, Doob’s submartingale inequality, the fact that t − TNt is bounded stochastically by an exponentially distributed random variable with parameter 1 for every t ∈ (0, ∞) and
Lemma 3.10 imply for all n, m ∈ N and t ∈ (0,∞) satisfying tn|E[Z^n_0]| < m that
\begin{align*}
\mathbb{P}\Big( \sup_{s\in[0,t]} |X^n_s| \ge 2m \Big) &\le \mathbb{P}\bigg( \sup_{s\in[0, T_{N_{n^2(t+1)}}/n^2]} \Big| n \int_0^s Z^n_u - \mathbb{E}[Z^n_0] \, du \Big| \ge m \bigg) + \mathbb{P}\big( T_{N_{n^2(t+1)}} < t n^2 \big) \\
&= \mathbb{E}\bigg[ \mathbb{P}\bigg( \sup_{k\in\mathbb{N}_0\cap[0, N_{n^2(t+1)}]} \Big| \tfrac{1}{n} \int_0^{T_k} \bar{Z}^n_{N_u} - \mathbb{E}[Z^n_0] \, du \Big| \ge m \, \bigg| \, N \bigg) \bigg] + \mathbb{P}\big( n^2(t+1) - T_{N_{n^2(t+1)}} > n^2 \big) \\
&\le \frac{1}{m^2} \, \mathbb{E}\bigg[ \mathbb{E}\bigg[ \Big( \tfrac{1}{n} \int_0^{T_{N_{n^2(t+1)}}} \bar{Z}^n_{N_u} - \mathbb{E}[Z^n_0] \, du \Big)^2 \, \bigg| \, N \bigg] \bigg] + e^{-n^2} \tag{3.16} \\
&= \frac{1}{m^2 n^2} \, \mathbb{E}\bigg[ \mathrm{Var}(Z^n_0) \sum_{k=0}^{N_{n^2(t+1)}-1} (T_{k+1} - T_k)^2 \bigg] + e^{-n^2}
\end{align*}
and
\begin{align*}
\mathbb{P}\Big( \sup_{s\in[0,t]} |X^n_s| \ge 2m \Big) &\le \frac{1}{m^2 n^2} \, \mathrm{Var}(Z^n_0) \sum_{k=0}^\infty \mathbb{E}\Big[ \big( (n^2(t+1)) \wedge T_{k+1} - (n^2(t+1)) \wedge T_k \big)^2 \Big] + e^{-n^2} \\
&= \frac{1}{m^2 n^2} \, \mathrm{Var}(Z^n_0) \Big( 2 n^2 (t+1) - 2 \big( 1 - e^{-n^2(t+1)} \big) \Big) + e^{-n^2} \tag{3.17} \\
&\le \frac{1}{m^2} \, \sigma_n^2 \, (t+1) + e^{-n^2}.
\end{align*}
Therefore, using sup_{n∈N} n|μ_n| < ∞ and sup_{n∈N} σ_n² < ∞, for all t ∈ (0,∞) it holds that
\[ \limsup_{m\to\infty} \limsup_{n\to\infty} \mathbb{P}\Big( \sup_{s\in[0,t]} |X^n_s| \ge 2m \Big) = 0. \tag{3.18} \]
This implies that (X^n)_{n∈N} satisfies the compact containment condition and finishes the proof of Lemma 3.11.

Proof of Theorem 3.5. We will apply Theorem 2.4 and first check its assumptions. We start with proving the compact containment condition. According to Lemma 3.9, it suffices to show for all ε, t ∈ (0,∞) that there exist k ∈ N and a finite subset E ⊆ D such that
\[ \limsup_{n\to\infty} \mathbb{P}\Big( \sup_{0\le s\le t} \|X^n_s\|_{\ell^\gamma} \ge k \Big) < \varepsilon, \tag{3.19} \]
\[ \limsup_{n\to\infty} \mathbb{P}\Big( \sup_{0\le s\le t} \|X^n_s \mathbf{1}_{D\setminus E}\|_{\ell^\gamma} \ge \varepsilon \Big) < \varepsilon. \tag{3.20} \]
Let I be the identity on S and let (πi )i∈D be the projections on S, that is, for all i ∈ D and x ∈ S it holds that πi x = xi . Using for all n ∈ N, i ∈ D, (x, z) ∈ S × E and s ∈ [0, ∞) that −s(a−µI) −s(a−µI) d e π (x) = − e (a − µI)x (i) ds (L0,n πi )(x, z) = ((a − µI)x) (i)
(3.21)
(L1,n πi )(x, z) = xi (m(z) − 1) (L2,n πi )(x, z) = 0,
Itˆ o’s formula for continuous semimartingales implies that the processes {M n,t,E : E ⊆ D, t ∈ [0, ∞), n ∈ N} defined for all t ∈ [0, ∞), s ∈ [0, t], n ∈ N, E ⊆ D by Z s X (m(Zrn ) − 1) dr Msn,t,E := γi e(t−s)(a−µI) πi (Xsn ) exp − n (3.22) 0
i∈E
are continuous non-negative local martingales and, therefore, continuous supermartingales. For t ∈ [0,∞), n ∈ ℕ and E ⊆ D, let (τ_k^{n,t,E})_{k∈ℕ} be a localizing sequence of stopping times for M^{n,t,E}. In addition, define Y^n_s := n ∫_0^s (m(Z^n_r) − 1) dr for all s ∈ [0,∞) and all n ∈ ℕ. Lemma 3.11 implies that the sequence (Y^n)_{n∈ℕ} satisfies the compact containment condition. Furthermore, it is straightforward to prove for nonempty subsets A ⊆ [0,∞) and B ⊆ (0,∞) and every r, s ∈ ℝ that
\[
\sup_{u\in A} u \ge rs
\quad\Longrightarrow\quad
\sup_{(u,v)\in A\times B} uv \ge r
\;\text{ or }\;
\sup_{v\in B} 1/v \ge s.
\tag{3.23}
\]
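For completeness, a short argument for this implication (our sketch, by contraposition) reads:

```latex
% Contrapositive of (3.23): assume both suprema on the right-hand side are
% strictly smaller than r and s, respectively. Since A \subseteq [0,\infty) and
% B \subseteq (0,\infty), all quantities below are nonnegative; moreover
% s > \sup_{v\in B} 1/v > 0 and r > \sup_{(u,v)} uv \ge 0. Hence for every
% u \in A and any fixed v \in B,
\[
  u \;=\; (uv)\cdot\frac{1}{v}
    \;\le\; \Big( \sup_{(u,v)\in A\times B} uv \Big)
            \Big( \sup_{v\in B} \tfrac{1}{v} \Big)
    \;<\; rs ,
\]
% so that \sup_{u\in A} u < rs, contradicting the left-hand side of (3.23).
```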
Moreover, a ≥ 0 implies for all t ∈ [0,∞), i ∈ D and x ∈ S that (e^{t(a−μI)}π_i)(x) ≥ e^{−μt} x_i. Using this, Doob's inequality, X^n(0) ⇒ X(0) as n → ∞ and inequality (3.2), we then obtain for all t, r, u ∈ (0,∞) and all E ⊆ D that
\[
\begin{aligned}
&\limsup_{n\to\infty} \mathbb{P}\Big( \sup_{s\in[0,t]} \|X^n_s 1_E\|_{\ell_\gamma} \ge r^2 \Big)
\le \limsup_{n\to\infty} \mathbb{P}\Big( \sup_{s\in[0,t]} \sum_{i\in E} \gamma_i \big( e^{(t-s)(a-\mu I)} \pi_i \big)(X^n_s) \ge r^2 e^{-\mu t} \Big) \\
&\quad\le \limsup_{n\to\infty} \lim_{k\to\infty} \mathbb{P}\Big( \sup_{s\in[0,t]} M^{n,t,E}_{s\wedge\tau^{n,t,E}_k} \ge \tfrac{r^2}{u e^{\mu t}} \Big)
 + \limsup_{n\to\infty} \mathbb{P}\Big( \sup_{s\in[0,t]} \exp\Big( n \int_0^s (m(Z^n_r)-1)\, dr \Big) > u \Big) \\
&\quad\le \limsup_{n\to\infty} \Big[ \lim_{k\to\infty} \mathbb{P}\Big( \sup_{s\in[0,t]} M^{n,t,E}_{s\wedge\tau^{n,t,E}_k} \ge \tfrac{r^2}{u e^{\mu t}},\; M^{n,t,E}_0 \le r \Big)
 + \mathbb{P}\big( M^{n,t,E}_0 > r \big)
 + \mathbb{P}\Big( \sup_{s\in[0,t]} \exp\Big( n \int_0^s (m(Z^n_r)-1)\, dr \Big) > u \Big) \Big] \\
&\quad\le \limsup_{n\to\infty} \frac{\mathbb{E}\big[ \min\{ M^{n,t,E}_0, r \} \big]\, u e^{\mu t}}{r^2}
 + \mathbb{P}\Big( \sum_{i\in E} \gamma_i \big( e^{t(a-\mu I)} \pi_i \big)(X_0) > r \Big)
 + \sup_{n\in\mathbb{N}} \mathbb{P}\Big( \sup_{s\in[0,t]} Y^n_s > \log(u) \Big) \\
&\quad\le \frac{\mathbb{E}\big[ \min\big\{ \sum_{i\in E} \gamma_i ( e^{t(a-\mu I)} \pi_i )(X_0),\, r \big\} \big]\, u e^{\mu t}}{r^2}
 + \mathbb{P}\Big( \sum_{i\in E} \gamma_i \big( e^{t(a-\mu I)} \pi_i \big)(X_0) > r \Big)
 + \sup_{n\in\mathbb{N}} \mathbb{P}\Big( \sup_{s\in[0,t]} Y^n_s > \log(u) \Big).
\end{aligned}
\tag{3.24}
\]
Now, choosing E = D and u = √r, using for all t ∈ (0,∞) that Σ_{i∈D} γ_i (e^{t(a−μI)}π_i)(X_0) ≤ e^{ct} ||X_0||_{ℓ_γ} < ∞ and letting r → ∞ implies (3.19). Moreover, choosing r = √ε and E = D \ Ẽ with a finite subset Ẽ ⊆ D, using the dominated convergence theorem together with Σ_{i∈D} γ_i (e^{t(a−μI)}π_i)(X_0) ≤ e^{ct} ||X_0||_{ℓ_γ} < ∞, letting Ẽ ↑ D and letting u → ∞ gives (3.20) and shows that Assumption 2.2.4 is satisfied.
Next, we define g_n(z) := (m(z) − 1)² ∈ ℝ for every n ∈ ℕ and every z ∈ E. Then Remark 2.3 together with
\[
\sup_{s\in[0,\infty)} \sup_{n\in\mathbb{N}} \mathbb{P}\big( g_n(Z^n_s) \ge k \big)
= \sup_{n\in\mathbb{N}} \mathbb{P}\big( g_n(Z^n_0) \ge k \big)
\le \frac{1}{k}\, \sup_{n\in\mathbb{N}} \mathbb{E}\big[ g_n(Z^n_0) \big]
\tag{3.25}
\]
for all k ∈ ℕ and together with Assumption 3.3 implies that Assumption 2.2.5 is satisfied. Moreover, Lemma 3.9 and the density of C_c²(ℝ^d, ℝ) in C_b(ℝ^d, ℝ) in the topology of uniform convergence on compact sets for every d ∈ ℕ imply that
\[
\mathrm{Dom}(A) := \{ f \in C_c^2(\mathbb{R}^D, \mathbb{R}) : f \text{ depends only on finitely many coordinates} \}
\tag{3.26}
\]
is dense in C_b(S, ℝ) in the topology of uniform convergence on compact sets. Define
\[
(A_1 f)(x) := \sum_{i,j\in D} a(i,j)\, (x_j - x_i)\, \frac{\partial f}{\partial x_i}(x)
 + \alpha \sum_{i\in D} x_i\, \frac{\partial f}{\partial x_i}(x)
 + \frac{\sigma_b^2}{2} \sum_{i\in D} x_i\, \frac{\partial^2 f}{\partial x_i^2}(x),
\]
\[
(A_2 f)(x, r) := r\, \Big( \sum_{i\in D} x_i\, \frac{\partial f}{\partial x_i}(x)
 + \sum_{i,j\in D} x_i x_j\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x) \Big)
\tag{3.27}
\]
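The structure of A_2 is precisely the square of the first-order operator Σ_{i∈D} x_i ∂/∂x_i generated by the environment term L_{1,n}; by the product rule (our computation, for smooth f depending on finitely many coordinates):

```latex
\[
  \Big( \sum_{i\in D} x_i \frac{\partial}{\partial x_i} \Big)
  \Big( \sum_{j\in D} x_j \frac{\partial f}{\partial x_j} \Big)(x)
  = \sum_{i,j\in D} x_i \Big( \delta_{ij}\, \frac{\partial f}{\partial x_j}(x)
      + x_j\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x) \Big)
  = \sum_{i\in D} x_i \frac{\partial f}{\partial x_i}(x)
  + \sum_{i,j\in D} x_i x_j \frac{\partial^2 f}{\partial x_i \partial x_j}(x) .
\]
% This is the expression that appears when L_{1,n} is applied twice in the
% verification of (2.8), with the factor (m(z)-1)^2 = g_n(z) in front.
```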
for all f ∈ Dom(A), r ∈ ℝ and all x ∈ S. Then A_1 : Dom(A) → C_c(S, ℝ) and A_2 : Dom(A) → C(S × ℝ, ℝ) are well-defined functions. Fix f ∈ Dom(A) for the rest of the proof. For every n ∈ ℕ define f_n, h_n ∈ Dom(L_n) and π_n ∈ M_1(E) by f_n := f|_{S_n}, by h_n := L_{1,n} f_n and by π_n(·) := P(Z^n_0 ∈ ·). Lemma 3.8 yields for all ε, t ∈ (0,∞) and all n ∈ ℕ that
\[
\mathbb{E}\Big[ \sup_{s\in[0,t]} \frac{|m(Z^n_s) - 1|}{n} \Big]
\le 2\varepsilon + \frac{1 + n^2 t}{n^p}\, \frac{\mathbb{E}\big[ |m(Z^n_0) - 1|^p \big]}{(p-1)\, \varepsilon^{p-1}}.
\tag{3.28}
\]
This together with p > 2, Assumption 3.3 and f ∈ Dom(A) yields that
\[
\lim_{n\to\infty} \mathbb{E}\Big[ \sup_{s\in[0,t]} \frac{|h_n(X^n_s, Z^n_s)|}{n} \Big]
\le \sup_{x\in S} \Big| \sum_{i\in D} x_i\, \frac{\partial f}{\partial x_i}(x) \Big|\,
 \lim_{n\to\infty} \mathbb{E}\Big[ \sup_{s\in[0,t]} \frac{|m(Z^n_s) - 1|}{n} \Big] = 0.
\tag{3.29}
\]
This shows that (2.4) is satisfied. Clearly it holds for every n ∈ ℕ that L_{2,n} f_n = 0 and for x ∈ S_n, y, z ∈ E that (L_{0,n} f_n)(x,y) = (L_{0,n} f_n)(x,z) and that (L_{2,n} h_n)(x,z) = ∫_E (L_{1,n} f_n)(x,y) π_n(dy) − (L_{1,n} f_n)(x,z). This shows that (2.5) is satisfied. Assumption 3.3 implies for all t ∈ (0,∞) that
\[
\sup_{n\in\mathbb{N}} \int_0^t \mathbb{E}\Big[ \big| (A_2 f)(X^n_s, g_n(Z^n_s)) \big|^{p/2} \Big]\, ds
\le t\, \sup_{x\in S} \big| (A_2 f)(x, 1) \big|^{p/2}\, \sup_{n\in\mathbb{N}} \mathbb{E}\big[ |m(Z^n_0) - 1|^p \big] < \infty.
\tag{3.30}
\]
This shows that (2.6) is satisfied. Next, Assumption 3.3 and Taylor's theorem yield that
\[
\begin{aligned}
&\lim_{n\to\infty} \sup_{x\in S_n} \Big| (A_1 f)(x) - \int_{E_n} \big( L_{0,n} f_n + n L_{1,n} f_n \big)(x,y)\, \pi_n(dy) \Big| \\
&\quad\le \lim_{n\to\infty} \sup_{x\in S_n} \sum_{i,j\in D} a(j,i)\, x_i\, \Big| \frac{\partial f}{\partial x_j}(x) - \frac{\partial f}{\partial x_i}(x) - \frac{f\big(x - \tfrac{1}{n} e_i + \tfrac{1}{n} e_j\big) - f(x)}{1/n} \Big| \\
&\qquad + \lim_{n\to\infty} \sup_{x\in S_n} \sum_{i\in D} \sum_{l=0}^{\infty} \Big| x_i\, \frac{\partial f}{\partial x_i}(x)\, \frac{l-1}{n} + \frac{1}{2}\, \frac{\partial^2 f}{\partial x_i^2}(x)\, \Big( \frac{l-1}{n} \Big)^2 - \Big( f\big(x + \tfrac{l-1}{n} e_i\big) - f(x) \Big) \Big|\, \frac{\mathbb{E}[Z^n(l)]}{1/n^2} = 0.
\end{aligned}
\tag{3.31}
\]
This shows that Assumption (2.7) is satisfied. In addition, Assumption 3.3 and Taylor's theorem yield for all t ∈ (0,∞) that
\[
\begin{aligned}
&\lim_{n\to\infty} \mathbb{E}\Big[ \int_0^t \sup_{x\in S_n} \Big| (A_2 f)(x, g_n(Z^n_r)) - \big( L_{1,n} h_n + \tfrac{1}{n} L_{0,n} h_n \big)(x, Z^n_r) \Big|\, dr \Big] \\
&\quad= \lim_{n\to\infty} \mathbb{E}\Big[ \int_0^t \sup_{x\in S_n} \Big| (A_2 f)(x, g_n(Z^n_r)) - \big( L_{1,n} L_{1,n} f_n + \tfrac{1}{n} L_{0,n} L_{1,n} f_n \big)(x, Z^n_r) \Big|\, dr \Big] \\
&\quad\le \lim_{n\to\infty} \mathbb{E}\Big[ \int_0^t \sup_{x\in S_n} \Big| g_n(Z^n_r)\, \sum_{i\in D} x_i\, \frac{\partial}{\partial x_i} \Big( \sum_{j\in D} x_j\, \frac{\partial f}{\partial x_j}(x) \Big) \\
&\qquad\qquad - \sum_{i,j\in D} \sum_{k,l=0}^{\infty} x_i \big( x_j + \tfrac{l-1}{n} 1_{i=j} \big)\, \frac{ f\big(x + \tfrac{k-1}{n} e_j + \tfrac{l-1}{n} e_i\big) - f\big(x + \tfrac{l-1}{n} e_i\big) - \big( f\big(x + \tfrac{k-1}{n} e_j\big) - f(x) \big) }{1/n^2}\, Z^n_r(l)\, Z^n_r(k) \Big|\, dr \Big] \\
&\qquad + t\, \sup_{x\in S} \Big| \sum_{i,j\in D} a(j,i)\, (x_j - x_i)\, \frac{\partial}{\partial x_i} \Big( \sum_{k\in D} x_k\, \frac{\partial f}{\partial x_k}(x) \Big) \Big|\, \lim_{n\to\infty} \frac{1}{n}\, \mathbb{E}\big[ |m(Z^n_0) - 1| \big] = 0.
\end{aligned}
\tag{3.32}
\]
This shows that Assumption (2.8) is satisfied. Having checked all assumptions of Theorem 2.4, Theorem 2.4 now implies that (X^n, Γ_{g_n(Z^n)})_{n∈ℕ} is relatively compact. Assumption 3.3 implies for every limit point Γ of (Γ_{g_n(Z^n)})_{n∈ℕ} and every f ∈ C_c(ℝ, ℝ) that
\[
\begin{aligned}
\mathbb{E}\Big[ \int_{[0,\infty)\times\mathbb{R}} f(t)\, y\, \Gamma(dt, dy) \Big]
&= \lim_{n\to\infty} \mathbb{E}\Big[ \int_{[0,\infty)\times\mathbb{R}} f(t)\, y\, \Gamma_{g_n(Z^n)}(dt, dy) \Big]
= \lim_{n\to\infty} \int_0^\infty f(t)\, \mathbb{E}\big[ g_n(Z^n_t) \big]\, dt \\
&= \int_0^\infty f(t)\, \lim_{n\to\infty} \mathbb{E}\big[ g_n(Z^n_0) \big]\, dt
= \int_0^\infty f(t)\, dt\, \sigma_e^2,
\end{aligned}
\tag{3.33}
\]
that is, ∫_ℝ y Γ(dt, dy) = σ_e² dt. Consequently, Theorem 2.4 implies that any limit point X of the sequence (X^n)_{n∈ℕ} is a solution of the D([0,∞), S)-martingale problem for the pre-generator Dom(A) ∋ f̃ ↦ A_1 f̃ + (A_2 f̃)(·, σ_e²). In particular, this implies existence of a weak solution of the SDE (3.5). In addition, standard Yamada-Watanabe-type arguments yield pathwise uniqueness of this SDE; cf., e.g., Lemma 3.3 in [HW07], where the assumed independence of the Brownian motions is not used in the proof. Therefore, uniqueness of the weak solution of the SDE (3.5) follows from a Yamada-Watanabe-type argument; see, e.g., Theorem 2.2 in [SS80]. Finally, since any limit point of (X^n)_{n∈ℕ} is a weak solution of the SDE (3.5), the sequence (X^n)_{n∈ℕ} converges weakly to the unique solution of the SDE (3.5). This finishes the proof of Theorem 3.5.
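The limit just identified can be illustrated numerically. The following Euler-Maruyama sketch simulates the one-colony specialization of the pre-generator A_1 + (A_2 ·)(·, σ_e²) reconstructed above (so the migration terms a(i,j) drop out), which formally corresponds to the SDE dX = (α + σ_e²) X dt + √(σ_b² X + 2σ_e² X²) dW. The function name, parameter values, and the reduction to one colony are our assumptions for illustration, not statements of the paper.

```python
import math
import random


def simulate_limit_sde(x0=1.0, alpha=0.1, sigma_e2=0.04, sigma_b2=0.5,
                       t_max=1.0, dt=1e-3, seed=0):
    """Euler-Maruyama sketch of the (hypothetical) one-colony version of the
    limiting martingale problem: generator
        A f(x) = (alpha + sigma_e2) * x * f'(x)
                 + (sigma_e2 * x**2 + sigma_b2 * x / 2) * f''(x),
    i.e. dX = (alpha + sigma_e2) X dt + sqrt(sigma_b2 X + 2 sigma_e2 X^2) dW.
    Purely illustrative; parameter names are ours, not the paper's."""
    rng = random.Random(seed)
    x = x0
    for _ in range(int(t_max / dt)):
        drift = (alpha + sigma_e2) * x
        var = sigma_b2 * x + 2.0 * sigma_e2 * x * x  # squared diffusion coefficient
        dw = rng.gauss(0.0, math.sqrt(dt))
        # clamp at 0, the absorbing state of the branching diffusion
        x = max(x + drift * dt + math.sqrt(max(var, 0.0)) * dw, 0.0)
    return x
```

One can check, for instance, that with all coefficients set to zero the path stays constant, and that a simulated path remains nonnegative and finite.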
Acknowledgements The authors are deeply indebted to Tom Kurtz for very valuable discussions.
References

[AV10] N. Abourashchi and A. Yu. Veretennikov. On stochastic averaging and mixing. Theory Stoch. Process., 16(32):111–129, 2010.

[BH12] C. Böinghoff and M. Hutzenthaler. Branching diffusions in random environment. Markov Process. Related Fields, 18(2):269–310, 2012.

[BMS13] V. Bansaye, J. C. Pardo Millan, and C. Smadi. On the extinction of continuous state branching processes with catastrophes. Electron. J. Probab., 18(106):1–31, 2013.

[Bor03] K. Borovkov. A note on diffusion-type approximation to branching processes in random environments. Theory Probab. Appl., 47(1):132–138, 2003.

[BS11] V. Bansaye and F. Simatos. On the scaling limits of Galton-Watson processes in varying environment. arXiv preprint arXiv:1112.2547, 2011.

[EK86] S. N. Ethier and T. G. Kurtz. Markov processes: Characterization and convergence. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York, 1986.

[EN80] S. N. Ethier and T. Nagylaki. Diffusion approximations of Markov chains with two time scales and applications to population genetics. Adv. Appl. Prob., 12:14–49, 1980.

[EN88] S. N. Ethier and T. Nagylaki. Diffusion approximations of Markov chains with two time scales and applications to population genetics, II. Adv. Appl. Prob., 20:525–545, 1988.

[Hut11] M. Hutzenthaler. Supercritical branching diffusions in random environment. Electron. Commun. Probab., 16:781–791, 2011.

[HW07] M. Hutzenthaler and A. Wakolbinger. Ergodic behavior of locally regulated branching populations. Ann. Appl. Probab., 17(2):474–501, 2007.

[Kei75] N. Keiding. Extinction and exponential growth in random environments. Theor. Pop. Biol., 8:49–63, 1975.

[Kha66] R. Z. Khashminskii. A limit theorem for the solutions of differential equations with random right-hand sides. Theory Probab. Appl., 11(11):390–406, 1966.

[KKP14] H.-W. Kang, T. G. Kurtz, and L. Popovic. Central limit theorems and diffusion approximations for multiscale Markov chain models. Ann. Appl. Probab., 24(2):449–497, 2014.

[KM12] A. Klenke and L. Mytnik. Infinite rate mutually catalytic branching in infinitely many colonies: Construction, characterization and convergence. Probab. Theory Related Fields, 154(3):533–584, 2012.

[Kur78] T. G. Kurtz. Diffusion approximations for branching processes. In Branching Processes (Conf., Saint Hippolyte, Que., 1976), volume 5 of Adv. Probab. Related Topics, pages 269–292. Dekker, New York, 1978.

[Kur92] T. G. Kurtz. Averaging for martingale problems and stochastic approximation. In Applied Stochastic Analysis (New Brunswick, NJ, 1991), volume 177 of Lecture Notes in Control and Inform. Sci., pages 186–209. Springer, Berlin, 1992.

[PV01] E. Pardoux and A. Yu. Veretennikov. On Poisson equation and diffusion approximation 1. Ann. Probab., 29:1061–1085, 2001.

[PV03] E. Pardoux and A. Yu. Veretennikov. On Poisson equation and diffusion approximation 2. Ann. Probab., 31:1166–1192, 2003.

[SS80] T. Shiga and A. Shimizu. Infinite-dimensional stochastic differential equations and their applications. J. Math. Kyoto Univ., 20(3):395–416, 1980.

[VK12] A. Yu. Veretennikov and A. M. Kulik. Extended Poisson equation for weakly ergodic Markov processes. Theor. Probability and Math. Statist., 85:23–39, 2012.