Competing super-Brownian motions as limits of interacting particle systems

Richard Durrett^1, Leonid Mytnik^2, Edwin Perkins^3

^1 Department of Mathematics, Cornell University, Ithaca NY 14853. E-mail address: [email protected]
^2 Faculty of Industrial Engineering and Management, Technion – Israel Institute of Technology, Haifa 32000, Israel. E-mail address: [email protected]
^3 Department of Mathematics, The University of British Columbia, 1984 Mathematics Road, Vancouver, B.C., Canada V6T 1Z2. E-mail address: [email protected]

Abstract. We study two-type branching random walks in which the birth or death rate of each type can depend on the number of neighbors of the opposite type. This competing species model contains variants of Durrett’s predator-prey model and Durrett and Levin’s colicin model as special cases. We verify in some cases convergence of scaling limits of these models to a pair of super-Brownian motions interacting through their collision local times, constructed by Evans and Perkins.

January 2, 2008

AMS 2000 subject classifications. 60G57, 60G17.
Keywords and phrases. super-Brownian motion, interacting branching particle system, collision local time, competing species, measure-valued diffusion.
Running head. Competing super-Brownian motions

1. Partially supported by NSF grants from the probability program (0202935) and from a joint DMS/NIGMS initiative to support research in mathematical biology (0201037).
2. Supported in part by the U.S.-Israel Binational Science Foundation (grant No. 2000065). The author also thanks the Fields Institute, Cornell University and the Pacific Institute for the Mathematical Sciences for their hospitality while carrying out this research.
3. Supported by an NSERC Research grant.
1, 2, 3. All three authors gratefully acknowledge the support of the Banff International Research Station, which provided a stimulating venue for the completion of this work.

1 Introduction

Consider the contact process on the fine lattice $\mathbb{Z}_N \equiv \mathbb{Z}^d/(\sqrt{N}M_N)$. Sites are either occupied by a particle or vacant.
• Particles die at rate $N$ and give birth at rate $N + \theta$.
• When a birth occurs at $x$ the new particle is sent to a site $y \neq x$ chosen at random from $x + \mathcal{N}$, where $\mathcal{N} = \{z \in \mathbb{Z}_N : \|z\|_\infty \le 1/\sqrt{N}\}$ is the set of neighbors of 0.
• If $y$ is vacant a birth occurs there. Otherwise, no change occurs.

The $\sqrt{N}$ in the definition of $\mathbb{Z}_N$ scales space to account for the fact that we are running time at rate $N$. The $M_N$ serves to soften the interaction between a site and its neighbors so that we can get a nontrivial limit. From work of Bramson, Durrett, and Swindle (1989) it is known that one should take
$$M_N = \begin{cases} N^{3/2} & d = 1,\\ (N\log N)^{1/2} & d = 2,\\ N^{1/d} & d \ge 3. \end{cases}$$
Mueller and Tribe (1995) studied the case $d = 1$ and showed that if we assign each particle mass $1/N$ and the initial conditions converge to a continuous limiting density $u(x,0)$, then the rescaled particle system converges to the stochastic PDE
$$du = \Big(\frac{u''}{6} + \theta u - u^2\Big)\,dt + \sqrt{2u}\,dW,$$
where $dW$ is a space-time white noise.

Durrett and Perkins (1999) considered the case $d \ge 2$. To state their result we need to introduce super-Brownian motion with branching rate $b$, diffusion coefficient $\sigma^2$, and drift coefficient $\beta$. Let $M_F = M_F(\mathbb{R}^d)$ denote the space of finite measures on $\mathbb{R}^d$ equipped with the topology of weak convergence. Let $C_b^\infty$ be the space of infinitely differentiable functions on $\mathbb{R}^d$ with bounded partial derivatives of all orders. Then the above super-Brownian motion is the $M_F$-valued process $X_t$ which solves the following martingale problem: for all $\phi \in C_b^\infty$, if $X_t(\phi)$ denotes the integral of $\phi$ with respect to $X_t$, then
$$(1.1)\qquad Z_t(\phi) = X_t(\phi) - X_0(\phi) - \int_0^t X_s(\sigma^2\Delta\phi/2 + \beta\phi)\,ds$$
is a martingale with quadratic variation $\langle Z(\phi)\rangle_t = \int_0^t X_s(b\phi^2)\,ds$.

Durrett and Perkins showed that if the initial conditions converge to a nonatomic limit then the rescaled empirical measures, formed by assigning mass $1/N$ to each site occupied by the rescaled contact processes, converge to the super-Brownian motion with $b = 2$, $\sigma^2 = 1/3$, and $\beta = \theta - c_d$. Here $c_2 = 3/(2\pi)$ and in $d \ge 3$, $c_d = \sum_{n=1}^\infty P(U_n \in [-1,1]^d)/2^d$, where $U_n$ is a random walk that takes steps uniform on $[-1,1]^d$. Note that the $-u^2$ interaction term in $d = 1$ becomes $-c_d u$ in $d \ge 2$. This occurs because the environments seen by well separated particles in a small macroscopic ball are almost independent, so by the law of large numbers mass is lost due to collisions (births onto occupied sites) at a rate proportional to the amount of mass there.
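The constant $c_d$ above can be checked numerically. The following is a minimal Monte Carlo sketch (not from the paper) estimating $c_3$ by truncating the series at a few hundred steps; since the terms decay like $n^{-3/2}$, the truncation error is small. All parameter values are illustrative.

```python
import numpy as np

def estimate_c3(n_walks=4000, n_steps=200, seed=0):
    """Monte Carlo estimate of c_3 = sum_{n>=1} P(U_n in [-1,1]^3) / 2^3,
    where U_n is a random walk with i.i.d. steps uniform on [-1,1]^3.
    The series is truncated at n_steps; the terms decay like n^(-3/2)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_walks, 3))
    total = 0.0
    for _ in range(n_steps):
        pos += rng.uniform(-1.0, 1.0, size=(n_walks, 3))
        # fraction of walks currently inside the cube [-1,1]^3
        total += np.all(np.abs(pos) <= 1.0, axis=1).mean()
    return total / 2 ** 3

print(estimate_c3())
```

The first few terms can be computed exactly (the $n = 1$ term is $1/8$, the $n = 2$ term is $(3/4)^3/8$), which gives a quick consistency check on the simulation.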

There has been a considerable amount of work constructing measure-valued diffusions with interactions in which the parameters $b$, $\sigma^2$ and $\beta$ in (1.1) depend on $X$ and may involve one or more interacting populations. State dependent $\sigma$'s, or more generally state dependent spatial motions, can be characterized and constructed as solutions of a strong equation driven by a historical Brownian motion (see Perkins (1992), (2002)), characterized as solutions of a martingale problem for historical superprocesses (Perkins (1995)), or more simply by the natural extension of (1.1) (see Donnelly and Kurtz (1999)). (A historical superprocess is a measure-valued process in which all the genealogical histories of the current population are recorded in the form of a random measure on path space.) State dependent branching in general seems more challenging. Many of the simple uniqueness questions remain open, although there has been some recent progress in the case of countable state spaces (Bass and Perkins (2004)). In Dawson and Perkins (1998) and Dawson et al (2002), a particular case of a pair of populations exhibiting local interaction through their branching rates (called mutually catalytic or symbiotic branching) is analyzed in detail thanks to a couple of special duality relations. State dependent drifts ($\beta$) which are not "singular" and can model changes in birth and death rates within one or between several populations can be analyzed through the Girsanov techniques introduced by Dawson (1978) (see also Ch. IV of Perkins (2002)). Evans and Perkins (1994, 1998) study a pair of interacting measure-valued processes which compete locally for resources through an extension of (1.1) discussed below (see the remark after Theorem 1.1). In two or three dimensions these interactions involve singular drifts $\beta$ for which it is believed the change of measure methods cited above will not work.
In 3 dimensions this is known to be the case (see Theorem 4.14 of Evans and Perkins (1994)). Corresponding models with infinite variance branching mechanisms and stable migration processes have been constructed by Fleischmann and Mytnik (2003). Given this work on interacting continuum models, it is natural to consider limits of multitype particle systems. The simplest idea is to consider a contact process with two types of particles for which births can only occur on vacant sites and each site can support at most one particle. However, this leads to a boring limit: independent super-processes. This can be seen from Section 5 in Durrett and Perkins (1999), which shows that in the single type contact process "collisions between distant relatives can be ignored." To obtain an interesting interaction, we will follow Durrett and Levin (1998) and consider two types of particles that modify each other's death or birth rates. In order to concentrate on the new difficulties that come from the interaction, we will eliminate the restriction of at most one particle per site and let $\xi_t^{i,N}(x)$ be the number of particles of type $i$ at $x$ at time $t$. Having changed from a contact process to a branching process, we do not need to let $M_N \to \infty$, so we will again simplify by considering the case $M_N \equiv M$. Let $\sigma^2$ denote the variance of the uniform distribution on $(\mathbb{Z}/M)\cap[-1,1]$. Letting $x^+ = \max\{0,x\}$ and $x^- = \max\{0,-x\}$, the dynamics of our competing species model may be formulated as follows:
• When a birth occurs, the new particle is of the same type as its parent and is born at the same site.
• For $i = 1, 2$, let $n_i(x)$ be the number of individuals of type $i$ in $x + \mathcal{N}$. Particles of type $i$ give birth at rate $N + \gamma_i^+ 2^{-d} N^{d/2-1} n_{3-i}(x)$ and die at rate $N + \gamma_i^- 2^{-d} N^{d/2-1} n_{3-i}(x)$.


Here $3 - i$ denotes the opposite type. It is natural to think of the case in which $\gamma_1 < 0$ and $\gamma_2 < 0$ (resource competition), but in some cases the two species may have a synergistic effect: $\gamma_1 > 0$ and $\gamma_2 > 0$. Two important special cases that have been considered earlier are:
(a) the colicin model: $\gamma_2 = 0$. In Durrett and Levin's paper, $\gamma_1 < 0$, since one type of E. coli produced a chemical (colicin) that killed the other type. We will also consider the case in which $\gamma_1 > 0$, which we will call colicin.
(b) predator-prey model: $\gamma_1 < 0$ and $\gamma_2 > 0$. Here the prey 1's are eaten by the predator 2's, which have increased birth rates when there is more food.
Two related examples fall outside of the current framework, but similar results should hold for them:
(c) epidemic model. Here 1's are susceptible and 2's are infected. 1's and 2's are individually branching random walks. 2's infect 1's (and change them to 2's) at rate $\gamma 2^{-d} N^{d/2} n_2(x)$, while 2's revert to being 1's at rate 1.
(d) voter model. One could also consider branching random walks in which individuals give birth to their own types but switch type at rates proportional to the number of neighbours of the opposite type.

The scaling $N^{d/2-1}$ is chosen on the basis of the following heuristic argument. In a critical branching process that survives to time $N$ there will be roughly $N$ particles. In dimensions $d \ge 3$, if we tile the integer lattice with cubes of side 1, there will be particles in roughly $N$ of the $N^{d/2}$ cubes within distance $\sqrt{N}$ of the origin. Thus there is probability $1/N^{d/2-1}$ of a cube containing a particle. To have an effect over the time interval $[0, N]$, a neighbor of the opposite type should produce changes at rate $N^{-1}N^{d/2-1}$, or, on the speeded up time scale, at rate $N^{d/2-1}$. In $d = 2$ an occupied square has about $\log N$ particles, so there will be particles in roughly $N/(\log N)$ of the $N$ squares within distance $\sqrt{N}$ of the origin. Thus there is probability $1/(\log N)$ of a square containing a particle, but when it does it contains $\log N$ particles. To have an effect, interactions should produce changes at rate $1/N$, or, on the speeded up time scale, at rate $1 = N^{d/2-1}$. In $d = 1$ there are roughly $\sqrt{N}$ particles in each interval $[x, x+1]$, so each particle should produce changes at rate $N^{-1}N^{-1/2}$, or, on the speeded up time scale, at rate $N^{-1/2} = N^{d/2-1}$.

Our guess for the limit process comes from work of Evans and Perkins (1994, 1998), who studied some of the processes that will arise as limits of our particle systems. We first need a concept that was introduced by Barlow, Evans, and Perkins (1991) for a class of measure-valued diffusions dominated by a pair of independent super-Brownian motions. Let $(Y^1, Y^2)$ be an $M_F^2$-valued process. Let $p_s(x)$, $s \ge 0$, be the transition density function of Brownian motion with variance $\sigma^2 s$. For any $\phi \in B_b(\mathbb{R}^d)$ (bounded Borel functions on $\mathbb{R}^d$) and $\delta > 0$, let
$$(1.2)\qquad L_t^\delta(Y^1, Y^2)(\phi) \equiv \int_0^t\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} p_\delta(x_1 - x_2)\,\phi((x_1+x_2)/2)\,Y_s^1(dx_1)\,Y_s^2(dx_2)\,ds, \quad t \ge 0.$$

The collision local time of $(Y^1, Y^2)$ (if it exists) is a continuous non-decreasing $M_F$-valued stochastic process $t \mapsto L_t(Y^1, Y^2)$ such that
$$L_t^\delta(Y^1, Y^2)(\phi) \to L_t(Y^1, Y^2)(\phi) \quad\text{as } \delta \downarrow 0 \text{ in probability},$$
for all $t > 0$ and $\phi \in C_b(\mathbb{R}^d)$, the bounded continuous functions on $\mathbb{R}^d$. It is easy to see that if $Y_s^i(dx) = y_s^i(x)dx$ for some Borel densities $y_s^i$ which are uniformly bounded on compact time

intervals, then $L_t(Y^1, Y^2)(dx) = \int_0^t y_s^1(x)y_s^2(x)\,ds\,dx$. However, the random measures we will be dealing with will not have densities for $d > 1$. The final ingredient we need to state our theorem is the assumption on our initial conditions. Let $B(x, r) = \{w \in \mathbb{R}^d : |w - x| \le r\}$, where $|z|$ is the $L^\infty$ norm of $z$. For any $0 < \delta < 2\wedge d$ we set
$$\varrho_\delta^N(\mu) \equiv \inf\Big\{\varrho : \sup_x \mu(B(x,r)) \le \varrho\,r^{(2\wedge d)-\delta} \text{ for all } r \in [N^{-1/2}, 1]\Big\},$$

where the lower bound on $r$ is dictated by the lattice $\mathbb{Z}^d/(\sqrt{N}M)$. We say that a sequence of measures $\{\mu^N, N \ge 1\}$ satisfies condition $UB_N$ if
$$\sup_{N\ge 1}\varrho_\delta^N(\mu^N) < \infty \quad\text{for all } 0 < \delta < 2\wedge d.$$

We say that a measure $\mu \in M_F(\mathbb{R}^d)$ satisfies condition UB if for all $0 < \delta < 2\wedge d$,
$$\varrho_\delta(\mu) \equiv \inf\Big\{\varrho : \sup_x \mu(B(x,r)) \le \varrho\,r^{(2\wedge d)-\delta} \text{ for all } r \in (0, 1]\Big\} < \infty.$$
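Returning to the collision local time: when both measures have smooth densities, the $\delta \downarrow 0$ limit of (1.2) can be computed explicitly, which gives a concrete check on the definition. The sketch below (illustrative only, not from the paper) takes $d = 1$, $t = 1$, $\phi \equiv 1$, $\sigma = 1$, and both densities equal to the standard normal density $n$, so the limit is $\int n(x)^2\,dx = 1/(2\sqrt{\pi})$.

```python
import numpy as np

def L_delta(delta, grid=np.linspace(-6, 6, 1201)):
    """Riemann-sum approximation of L^delta_1(Y^1, Y^2)(1) when
    Y^i_s(dx) = n(x)dx, n the standard normal density (d = 1, t = 1).
    The time integral contributes a factor of 1 since n is time-independent."""
    dx = grid[1] - grid[0]
    n = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
    diff = grid[:, None] - grid[None, :]
    # heat kernel p_delta with variance delta
    p_delta = np.exp(-diff ** 2 / (2 * delta)) / np.sqrt(2 * np.pi * delta)
    return float(n @ p_delta @ n * dx * dx)

limit = 1 / (2 * np.sqrt(np.pi))  # = integral of n(x)^2 dx
for d in (0.5, 0.1, 0.02):
    print(d, L_delta(d), limit)
```

For this Gaussian example $L^\delta$ equals $1/\sqrt{2\pi(2+\delta)}$ exactly (the density at 0 of the difference of two independent normals smoothed by $p_\delta$), so the printed values increase monotonically to the limit as $\delta \downarrow 0$.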

If $S$ is a metric space, $C_S$ and $D_S$ are the spaces of continuous $S$-valued paths and càdlàg $S$-valued paths, respectively, the former with the topology of uniform convergence on compacts and the latter with the Skorokhod topology. $C_b^k(\mathbb{R}^d)$ denotes the set of functions in $C_b(\mathbb{R}^d)$ whose partial derivatives of order $k$ or less are also in $C_b(\mathbb{R}^d)$. The main result of the paper is the following. If $X = (X^1, X^2)$, let $\mathcal{F}_t^X$ denote the right-continuous filtration generated by $X$.

Theorem 1.1 Suppose $d \le 3$. Define measure-valued processes by
$$X_t^{i,N}(\phi) = (1/N)\sum_x \xi_t^{i,N}(x)\phi(x).$$
Suppose $\gamma_1 \le 0$ and $\gamma_2 \in \mathbb{R}$. If $\{X_0^{i,N}\}$, $i = 1, 2$, satisfy $UB_N$ and converge to $X_0^i$ in $M_F$ for $i = 1, 2$, then $\{(X^{1,N}, X^{2,N}), N \ge 1\}$ is tight on $D_{M_F^2}$. Each limit point $(X^1, X^2)$ lies in $C_{M_F^2}$ and satisfies the following martingale problem $M_{X_0^1,X_0^2}^{\gamma_1,\gamma_2}$: for $\phi_1, \phi_2 \in C_b^2(\mathbb{R}^d)$,
$$(1.3)\qquad X_t^1(\phi_1) = X_0^1(\phi_1) + M_t^1(\phi_1) + \int_0^t X_s^1\Big(\frac{\sigma^2}{2}\Delta\phi_1\Big)\,ds + \gamma_1 L_t(X^1, X^2)(\phi_1),$$
$$\qquad\quad X_t^2(\phi_2) = X_0^2(\phi_2) + M_t^2(\phi_2) + \int_0^t X_s^2\Big(\frac{\sigma^2}{2}\Delta\phi_2\Big)\,ds + \gamma_2 L_t(X^1, X^2)(\phi_2),$$
where the $M^i$ are continuous $(\mathcal{F}_t^X)$-local martingales such that
$$\langle M^i(\phi_i), M^j(\phi_j)\rangle_t = \delta_{i,j}\,2\int_0^t X_s^i(\phi_i^2)\,ds.$$

Remark. Barlow, Evans, and Perkins (1991) constructed the collision local time for two super-Brownian motions in dimensions $d \le 5$, but Evans and Perkins (1994) showed that no solutions to the martingale problem (1.3) exist in $d \ge 4$ for $\gamma_2 \le 0$. Given the previous theorem, we will have convergence whenever we have a unique limit process. The next theorem gives uniqueness in the case of no feedback, i.e., $\gamma_1 = 0$. In this case, the first process provides an environment that alters the birth or death rate of the second one.

Theorem 1.2 Let $\gamma_1 = 0$ and $\gamma_2 \in \mathbb{R}$, $d \le 3$, and let $X_0^i$, $i = 1, 2$, satisfy condition UB. Then there is a unique in law solution to the martingale problem $M_{X_0^1,X_0^2}^{\gamma_1,\gamma_2}$.

The uniqueness for $\gamma_1 = 0$ and $\gamma_2 \le 0$ above was proved by Evans and Perkins (1994) (Theorem 4.9), who showed that the law is the natural one: $X^1$ is a super-Brownian motion and, conditional on $X^1$, $X^2$ is the law of a super-$\xi$-process, where $\xi$ is Brownian motion killed according to an inhomogeneous additive functional with Revuz measure $X_s^1(dx)ds$. We prove the uniqueness for $\gamma_2 > 0$ in Section 5 below. Here $X^1$ is a super-Brownian motion and, conditional on $X^1$, $X^2$ is the superprocess in which there is additional birthing according to the inhomogeneous additive functional with Revuz measure $X_s^1(dx)ds$. Such superprocesses are special cases of those studied by Dynkin (1994) and Kuznetsov (1994), although it will take a bit of work to connect their processes with our martingale problem. Another case where uniqueness was already known is $\gamma_1 = \gamma_2 < 0$.

Theorem 1.3 (Mytnik (1999)) Let $\gamma_1 = \gamma_2 < 0$, $d \le 3$, and let $X_0^i$, $i = 1, 2$, satisfy Condition UB. Then there is a unique in law solution to the martingale problem (1.3).

Hence as an (almost) immediate corollary to the above theorems we have:

Corollary 1.4 Assume $d \le 3$, $\gamma_1 = 0$ or $\gamma_1 = \gamma_2 < 0$, and $\{X_0^{i,N}\}$, $i = 1, 2$, satisfy $UB_N$ and converge to $X_0^i$ in $M_F$ for $i = 1, 2$. If $X^{i,N}$ is defined as in Theorem 1.1, then $(X^{1,N}, X^{2,N})$ converges weakly in $D_{M_F^2}$ to the unique in law solution of (1.3).

Proof. We only need point out that, by elementary properties of weak convergence, $X_0^i$ will satisfy UB since $\{X_0^{i,N}\}$ satisfies $UB_N$. The result now follows from the above three theorems.

For $d = 1$, uniqueness of solutions to (1.3) for $\gamma_i \le 0$ and with initial conditions satisfying
$$\int\int \log^+(1/|x_1 - x_2|)\,X_0^1(dx_1)\,X_0^2(dx_2) < \infty$$
(this is clearly weaker than each $X_0^i$ satisfying UB) is proved in Evans and Perkins (1994) (Theorem 3.9).
In this case solutions can be bounded above by a pair of independent super-Brownian motions (as in Theorem 5.1 of Barlow, Evans and Perkins (1991)), from which one can readily see that $X_t^i(dx) = u_t^i(x)dx$ for $t > 0$ and $L_t(X^1, X^2)(dx) = \int_0^t u_s^1(x)u_s^2(x)\,ds\,dx$. In this case $u^1, u^2$ are also the unique in law solution of the stochastic partial differential equation
$$du^i = \Big(\frac{\sigma^2 (u^i)''}{2} + \theta u^i + \gamma_i u^1 u^2\Big)\,dt + \sqrt{2u^i}\,dW^i, \quad i = 1, 2,$$
where $W^1$ and $W^2$ are independent white noises. (See Proposition IV.2.3 of Perkins (2002).) Turning next to $\gamma_2 > 0$ in one dimension, we have the following result:

Theorem 1.5 Assume $\gamma_1 \le 0 \le \gamma_2$, $X_0^1 \in M_F$ has a continuous density with compact support, and $X_0^2$ satisfies Condition UB. Then for $d = 1$ there is a unique in law solution to $M_{X_0^1,X_0^2}^{\gamma_1,\gamma_2}$ which is absolutely continuous with respect to the law of the pair of super-Brownian motions satisfying $M_{X_0^1,X_0^2}^{0,0}$. In particular, $X^i(t, dx) = u^i(t, x)dx$ for continuous maps $u^i : (0,\infty) \to C_K$ taking values in the space $C_K$ of continuous functions on $\mathbb{R}$ with compact support, $i = 1, 2$.

We will prove this result in Section 5 using Dawson's Girsanov theorem (see Theorem IV.1.6(a) of Perkins (2002)). We have not attempted to find optimal conditions on the initial measures. As before, the following convergence theorem is then immediate from Theorem 1.1.

Corollary 1.6 Assume $d = 1$, $\gamma_1 \le 0$, $\{X_0^{i,N}\}$ satisfy $UB_N$ and converge to $X_0^i \in M_F$, $i = 1, 2$, where $X_0^1$ has a continuous density with compact support. If $X^{i,N}$ are as in Theorem 1.1, then $(X^{1,N}, X^{2,N})$ converges weakly in $D_{M_F^2}$ to the unique solution of $M_{X_0^1,X_0^2}^{\gamma_1,\gamma_2}$.

Having stated our results, the natural next question is: what can be said about uniqueness in other cases?

Conjecture 1.7 Uniqueness holds in $d = 2, 3$ for any $\gamma_1, \gamma_2$.

For $\gamma_i \le 0$, Evans and Perkins (1998) prove uniqueness of the historical martingale problem associated with (1.3). The particle systems come with an associated historical process, as one simply puts mass $N^{-1}$ on the path leading up to the current position of each particle at time $t$. It should be possible to prove tightness of these historical processes and show each limit point satisfies the above historical martingale problem. It would then follow that in fact one has convergence of empirical measures in Theorem 1.1 (for $\gamma_i \le 0$) to the natural projection of the unique solution to the historical martingale problem onto the space of continuous measure-valued processes.

Conjecture 1.8 Theorem 1.1 continues to hold for $\gamma_1 > 0$ in $d = 2, 3$. There is no solution in $d \ge 4$. The solution explodes in finite time in $d = 1$ when $\gamma_1, \gamma_2 > 0$.

In addition to expanding the values of $\gamma$ that can be covered, there is also the problem of considering more general approximating processes.

Conjecture 1.9 Our results hold for the long-range contact process with modified birth and death rates.

Returning to what we know, our final task in this Introduction is to outline the proofs of Theorems 1.1 and 1.2. Suppose $\gamma_1 \le 0$ and $\gamma_2 \in \mathbb{R}$, and set $\bar\gamma_1 = 0$ and $\bar\gamma_2 = \gamma_2^+$. Proposition 2.2 below will show that the corresponding measure-valued processes can be constructed on the same space so that $X^{i,N} \le \bar X^{i,N}$ for $i = 1, 2$. Here $(\bar X^{1,N}, \bar X^{2,N})$ are the sequence of processes corresponding to parameter values $(\bar\gamma_1, \bar\gamma_2)$.
Tightness of our original sequence of processes then easily reduces to tightness of this sequence of bounding processes, because increasing the measures will both increase the mass far away (compact containment) and also increase the time variation in the integrals of test functions with respect to these measure-valued processes; see the approximating martingale problem (2.12) below. Turning now to $(\bar X^{1,N}, \bar X^{2,N})$, we first note that the tightness of the first coordinate (and convergence to super-Brownian motion) is well known, so let us focus on the second. The first key ingredient we will need is a bound on the mean measure, including of course its total mass. We will do this by conditioning on the branching environment $\bar X^{1,N}$. The starting point here will be the Feynman-Kac formula for this conditional mean measure given below in (2.17). In order to handle tightness of the discrete collision measure for $\bar X^{2,N}$ we will need a concentration inequality for the rescaled branching random walk $\bar X^{1,N}$, i.e., a uniform bound on the mass in small balls. A more precise result was given for super-Brownian motion in Theorem 4.7 of Barlow, Evans and Perkins (1991). The result we need is stated below as Proposition 2.4 and proved in Section 6. Once tightness of $(X^{1,N}, X^{2,N})$ is established, it is not hard to see that the limit points satisfy a martingale problem similar to our target, (1.3), but with some increasing continuous measure-valued process $A$ in place of the collision local time. To identify $A$ with the collision local time of the limits, we take limits in a Tanaka formula for the approximating discrete local times (Section 4 below) and derive the Tanaka formula for the limiting collision local time. As this will involve a number of singular integrals with respect to our random measures, the concentration inequality for $\bar X^{1,N}$ will again play an important role. This is reminiscent of the approach in Evans and Perkins (1994) to prove the existence of solutions to the limiting martingale problem when $\gamma_i \le 0$. However, the discrete setting here is a bit more involved, since it requires checking the convergence of integrals of discrete Green functions with respect to the random measures. The case of $\gamma_2 > 0$ forces a different approach, as we have not been able to derive a concentration inequality for this process, and so we must proceed by calculation of second moments; Lemma 2.3 below is the starting point here. The Tanaka formula derived in Section 5 (see Remark 5.2) is new in this setting. Theorem 1.2 is proved in Section 5 by using the conditional martingale problem of $X^2$ given $X^1$ to describe the Laplace functional of $X^2$ given $X^1$ in terms of an associated nonlinear equation involving a random semigroup depending on $X^1$. The latter shows that, conditional on $X^1$, $X^2$ is a superprocess with immigration given by the collision local time of a Brownian path in the random field $X^1$.

Convention. As our results only hold for $d \le 3$, we will assume $d \le 3$ throughout the rest of this work.

2 The Rescaled Particle System – Construction and Basic Properties

We first write down a more precise description corresponding to the per-particle birth and death rates used in the previous section to define our rescaled interacting particle systems. We let $p^N$ denote the uniform distribution on $\mathcal{N}$, that is,
$$(2.1)\qquad p^N(z) = \frac{1(|z| \le 1/\sqrt{N})}{(2M+1)^d}, \quad z \in \mathbb{Z}_N.$$
Let $P^N\phi(x) = \sum_y p^N(y-x)\phi(y)$ for $\phi : \mathbb{Z}_N \to \mathbb{R}$ for which the right-hand side is absolutely summable. Set $M' = M + (1/2)$. The per-particle rates in Section 1 lead to a process $(\xi^1, \xi^2) \in \mathbb{Z}_+^{\mathbb{Z}_N}\times\mathbb{Z}_+^{\mathbb{Z}_N}$ such that for $i = 1, 2$,
$$\xi_t^i(x) \to \xi_t^i(x) + 1 \quad\text{at rate } N\xi_t^i(x) + N^{d/2-1}\xi_t^i(x)\gamma_i^+(M')^d\sum_y p^N(y-x)\xi_t^{3-i}(y),$$
$$\xi_t^i(x) \to \xi_t^i(x) - 1 \quad\text{at rate } N\xi_t^i(x) + N^{d/2-1}\xi_t^i(x)\gamma_i^-(M')^d\sum_y p^N(y-x)\xi_t^{3-i}(y),$$
and
$$(\xi_t^i(x), \xi_t^i(y)) \to (\xi_t^i(x)+1,\, \xi_t^i(y)-1) \quad\text{at rate } N p^N(x-y)\xi_t^i(y).$$
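The per-particle rates above translate directly into a Gillespie-type simulation of the jump chain. The following sketch is illustrative only (it is not the paper's construction): it works in $d = 1$ with integer-indexed sites and no $\sqrt{N}$ space rescaling, and all parameter values are assumptions chosen for a quick run.

```python
import random
from collections import defaultdict

def neighborhood_count(xi, x, M):
    # number of particles of the given configuration in x + {-M, ..., M}
    return sum(xi.get(x + j, 0) for j in range(-M, M + 1))

def simulate(xi0, N=20, M=2, gamma=(-1.0, 1.0), T=0.05, d=1, seed=0):
    """Gillespie simulation of the two-type system. Per particle of type i:
    birth rate N + gamma_i^+ 2^-d N^(d/2-1) n_{3-i}(x),
    death rate N + gamma_i^- 2^-d N^(d/2-1) n_{3-i}(x),
    migration rate N (move to a uniform site in the neighborhood)."""
    rng = random.Random(seed)
    xi = [defaultdict(int, xi0[0]), defaultdict(int, xi0[1])]
    scale = 2 ** (-d) * N ** (d / 2 - 1)
    t = 0.0
    while True:
        events = []  # (rate, type, site, kind)
        for i in (0, 1):
            gp, gm = max(gamma[i], 0.0), max(-gamma[i], 0.0)
            for x, k in list(xi[i].items()):
                n3 = neighborhood_count(xi[1 - i], x, M)
                events.append((k * (N + gp * scale * n3), i, x, "birth"))
                events.append((k * (N + gm * scale * n3), i, x, "death"))
                events.append((k * N, i, x, "move"))
        total = sum(e[0] for e in events)
        if total == 0.0:
            break  # both populations extinct
        t += rng.expovariate(total)
        if t > T:
            break
        # choose the next event proportionally to its rate
        u, acc = rng.random() * total, 0.0
        for rate, i, x, kind in events:
            acc += rate
            if acc >= u:
                break
        if kind == "birth":
            xi[i][x] += 1
        else:
            xi[i][x] -= 1
            if xi[i][x] == 0:
                del xi[i][x]
            if kind == "move":
                xi[i][x + rng.randint(-M, M)] += 1
    return [dict(xi[0]), dict(xi[1])]
```

Rebuilding the event list after every jump is wasteful but keeps the sketch faithful to the state-dependent rates; a production version would update only the rates in the neighborhood of the changed site.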

The factors of $(M')^d$ may look odd, but they combine with the kernels $p^N$ to produce the factors of $2^{-d}$ in our interactive birth and death rates.

Such a process can be constructed as the unique solution of an SDE driven by a family of Poisson point processes. For $x, y \in \mathbb{Z}_N$, let $\Lambda_x^{i,+}$, $\Lambda_x^{i,-}$, $\Lambda_{x,y}^{i,+,c}$, $\Lambda_{x,y}^{i,-,c}$, $\Lambda_{x,y}^{i,m}$, $i = 1, 2$, be independent Poisson point processes on $\mathbb{R}_+^2$, $\mathbb{R}_+^2$, $\mathbb{R}_+^3$, $\mathbb{R}_+^3$, and $\mathbb{R}_+^2$, respectively. Here $\Lambda_x^{i,\pm}$ governs the birth and death rates at $x$, $\Lambda_{x,y}^{i,\pm,c}$ governs the additional birthing or killing at $x$ due to the influence of the other type at $y$, and $\Lambda_{x,y}^{i,m}$ governs the migration of particles from $y$ to $x$. The rates of $\Lambda_x^{i,\pm}$ are $N\,ds\,du$; the rates of $\Lambda_{x,y}^{i,\pm,c}$ are $N^{d/2-1}(M')^d p^N(y-x)\,ds\,du\,dv$; the rates of $\Lambda_{x,y}^{i,m}$ are $N p^N(x-y)\,ds\,du$. Let $\mathcal{F}_t$ be the canonical right-continuous filtration generated by this family of point processes and let $\mathcal{F}_t^i$ denote the corresponding filtrations generated by the point processes with superscript $i$, $i = 1, 2$. Let $\xi_0 = (\xi_0^1, \xi_0^2) \in \mathbb{Z}_+^{\mathbb{Z}_N}\times\mathbb{Z}_+^{\mathbb{Z}_N}$ be such that $|\xi_0^i| \equiv \sum_x \xi_0^i(x) < \infty$ for $i = 1, 2$ (denote this set of initial conditions by $S_F$), and consider the following system of stochastic jump equations for $i = 1, 2$, $x \in \mathbb{Z}_N$ and $t \ge 0$:
$$(2.2)\qquad \xi_t^i(x) = \xi_0^i(x) + \int_0^t\!\int 1(u\le\xi_{s-}^i(x))\,\Lambda_x^{i,+}(ds,du) - \int_0^t\!\int 1(u\le\xi_{s-}^i(x))\,\Lambda_x^{i,-}(ds,du)$$
$$\qquad + \sum_y\int_0^t\!\int\!\int 1(u\le\xi_{s-}^i(x),\,v\le\gamma_i^+\xi_{s-}^{3-i}(y))\,\Lambda_{x,y}^{i,+,c}(ds,du,dv)$$
$$\qquad - \sum_y\int_0^t\!\int\!\int 1(u\le\xi_{s-}^i(x),\,v\le\gamma_i^-\xi_{s-}^{3-i}(y))\,\Lambda_{x,y}^{i,-,c}(ds,du,dv)$$
$$\qquad + \sum_y\int_0^t\!\int 1(u\le\xi_{s-}^i(y))\,\Lambda_{x,y}^{i,m}(ds,du) - \sum_y\int_0^t\!\int 1(u\le\xi_{s-}^i(x))\,\Lambda_{y,x}^{i,m}(ds,du).$$

Assuming for now that there is a unique solution to this system of equations, the reader can easily check that the solution does indeed have the jump rates described above. These equations are similar to corresponding systems studied in Mueller and Tribe (1994), but for completeness we will now show that (2.2) has a unique $\mathcal{F}_t$-adapted $S_F$-valued solution. Associated with (2.2), introduce the increasing $\mathcal{F}_t$-adapted $\mathbb{Z}_+\cup\{\infty\}$-valued process
$$J_t = \sum_{i=1}^2\Big[\,|\xi_0^i| + \sum_x\int_0^t\!\int 1(u\le\xi_{s-}^i(x))\,\Lambda_x^{i,+}(ds,du) + \sum_x\int_0^t\!\int 1(u\le\xi_{s-}^i(x))\,\Lambda_x^{i,-}(ds,du)$$
$$\qquad + \sum_{x,y}\int_0^t\!\int\!\int 1(u\le\xi_{s-}^i(x),\,v\le\gamma_i^+\xi_{s-}^{3-i}(y))\,\Lambda_{x,y}^{i,+,c}(ds,du,dv)$$
$$\qquad + \sum_{x,y}\int_0^t\!\int\!\int 1(u\le\xi_{s-}^i(x),\,v\le\gamma_i^-\xi_{s-}^{3-i}(y))\,\Lambda_{x,y}^{i,-,c}(ds,du,dv)$$
$$\qquad + \sum_{x,y}\int_0^t\!\int 1(u\le\xi_{s-}^i(y))\,\Lambda_{x,y}^{i,m}(ds,du)\Big].$$

Set $T_0 = 0$ and let $T_1$ be the first jump time of $J$. This is well-defined, as any solution to (2.2) cannot jump until $T_1$, and so the solution is identically $(\xi_0^1, \xi_0^2)$ until $T_1$. Therefore a short calculation shows that $T_1$ is exponential with rate at most
$$(2.3)\qquad \sum_{i=1}^2 4N|\xi_0^i| + M' N^{d/2-1}|\gamma_i||\xi_0^i|.$$
At time $T_1$, (2.2) prescribes a unique single jump at a single site for any solution $\xi$, and $J$ increases by 1. Now proceed inductively, letting $T_n$ be the $n$th jump time of $J$. Clearly the solution $\xi$ to (2.2) is unique up until $T_\infty = \lim_n T_n$. Moreover,
$$(2.4)\qquad \sup_{s\le t}|\xi_s^1| + |\xi_s^2| \le J_t \quad\text{for all } t \le T_\infty.$$
Finally note that (2.3) and the corresponding bounds for the rates at subsequent times show that $J$ is stochastically dominated by a pure birth process starting at $|\xi_0^1| + |\xi_0^2|$ with per-particle birth rate $4N + M' N^{d/2-1}(|\gamma_1| + |\gamma_2|)$. Such a process cannot explode and in fact has finite $p$th moments for all $p > 0$ (see Ex. 6.8.4 in Grimmett and Stirzaker (2001)). Therefore $T_\infty = \infty$ a.s. and we have proved (use (2.4) to get the moments below):

Proposition 2.1 For each $\xi_0 \in S_F$, there is a unique $\mathcal{F}_t$-adapted solution $(\xi^1, \xi^2)$ to (2.2). Moreover this process has càdlàg $S_F$-valued paths and satisfies
$$(2.5)\qquad E\Big(\sup_{s\le t}(|\xi_s^1| + |\xi_s^2|)^p\Big) < \infty \quad\text{for all } p, t \ge 0.$$
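The non-explosion bound rests on the standard fact that a pure birth (Yule) process with per-particle rate $\lambda$ started from $z_0$ particles has mean $z_0 e^{\lambda t}$ and finite moments of all orders. A quick simulation check of the mean (all parameter values illustrative):

```python
import math
import random

def yule(z0, lam, t_max, rng):
    """Pure birth process: each of the z current particles splits at rate lam,
    so the next split arrives after an Exp(lam * z) waiting time."""
    z, t = z0, 0.0
    while True:
        t += rng.expovariate(lam * z)
        if t > t_max:
            return z
        z += 1

rng = random.Random(0)
runs = [yule(5, 1.0, 1.0, rng) for _ in range(4000)]
mean = sum(runs) / len(runs)
print(mean, 5 * math.e)  # E[Z_1] = z0 * e^(lam * t)
```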

The following "Domination Principle" will play an important role in this work.

Proposition 2.2 Assume $\gamma_i^+ \le \bar\gamma_i$, $i = 1, 2$, and let $\xi$, respectively $\bar\xi$, denote the corresponding unique solutions to (2.2) with initial conditions $\xi_0^i \le \bar\xi_0^i$, $i = 1, 2$. Then $\xi_t^i \le \bar\xi_t^i$ for all $t \ge 0$, $i = 1, 2$, a.s.

Proof. Let $J$ and $T_n$ be as in the previous proof but for $\bar\xi$. One then argues inductively on $n$ that $\xi_t^i \le \bar\xi_t^i$ for $t \le T_n$. Assuming the result for $n$ ($n = 0$ holds by our assumption on the initial conditions), then clearly neither process can jump until $T_{n+1}$. To extend the comparison to $T_{n+1}$ we only need consider the cases where $\xi^i$ jumps upward at a single site $x$ for which $\xi_{T_{n+1}-}^i(x) = \bar\xi_{T_{n+1}-}^i(x)$, or $\bar\xi^i$ jumps downward at a single site $x$ for which the same equality holds. As only one type and one site can change at any given time, we may assume the processes do not change in any other coordinates. It is now a simple matter to analyze these cases using (2.2) and show that in either case the other process (the one not assumed to jump) must in fact mirror the jump taken by the jumping process, and so the inequality is maintained at $T_{n+1}$. As we know $T_n \to \infty$ a.s., this completes the proof.

Denote dependence on $N$ by letting $\xi^N = (\xi^{1,N}, \xi^{2,N})$ be the unique solution to (2.2) with a given initial condition $\xi_0^N$ and let
$$(2.6)\qquad X_t^{i,N} = \frac{1}{N}\sum_{x\in\mathbb{Z}_N}\xi_t^{i,N}(x)\,\delta_x, \quad i = 1, 2,$$
denote the associated pair of empirical measures, each taking values in $M_F$. We will not be able to deal with the case of symbiotic systems where both $\gamma_i > 0$, so we will assume from now on that $\gamma_1 \le 0$. As we prefer to write positive parameters, we will in fact replace $\gamma_1$ with $-\gamma_1$ and therefore assume $\gamma_1 \ge 0$. We will let $\bar\xi^{i,N}$ and $\bar X^{i,N}$ denote the corresponding particle system and empirical measures with $\bar\gamma = (0, \gamma_2^+)$. We call $(\bar X^{1,N}, \bar X^{2,N})$ a positive colicin process, as $\bar X^{1,N}$ is just a rescaled branching random walk which has a non-negative local influence on $\bar X^{2,N}$. The above Domination Principle implies
$$(2.7)\qquad X^{i,N} \le \bar X^{i,N} \quad\text{for } i = 1, 2.$$

In order to obtain the desired limiting martingale problem we will need to use a bit of jump calculus to derive the martingale properties of $X^{i,N}$. Define the discrete collision local time for $X^{i,N}$ by
$$(2.8)\qquad L_t^{i,N}(\phi) = 2^{-d}\int_0^t\int_{\mathbb{R}^d}\phi(x)\,N^{d/2} X_s^{3-i,N}(B(x, N^{-1/2}))\,X_s^{i,N}(dx)\,ds.$$
We denote the corresponding quantity for our bounding positive colicin process by $\bar L^{i,N}$. These integrals all have finite means by (2.5) and, in particular, are a.s. finite. Let $\tilde\Lambda$ denote the predictable compensator of a Poisson point process $\Lambda$ and let $\hat\Lambda = \Lambda - \tilde\Lambda$ denote the associated martingale measure. If $\psi^i : \mathbb{R}_+\times\Omega\times\mathbb{Z}_N \to \mathbb{R}$ are $\mathcal{F}_t$-predictable, define a discrete inner product by
$$\nabla_N\psi_s^1\cdot\nabla_N\psi_s^2(x) = N\sum_y p^N(y-x)\big(\psi^1(s,y)-\psi^1(s,x)\big)\big(\psi^2(s,y)-\psi^2(s,x)\big)$$
and write $\nabla_N^2\psi_s^i(x)$ for $\nabla_N\psi_s^i\cdot\nabla_N\psi_s^i(x)$. Next define
$$(2.9)\qquad M_t^{i,N}(\psi^i) = \frac{1}{N}\Big[\sum_x\int_0^t\!\int \psi^i(s,x)1(u\le\xi_{s-}^i(x))\,\hat\Lambda_x^{i,+}(ds,du) - \sum_x\int_0^t\!\int \psi^i(s,x)1(u\le\xi_{s-}^i(x))\,\hat\Lambda_x^{i,-}(ds,du)$$
$$\qquad + \sum_{x,y}\int_0^t\!\int\!\int \psi^i(s,x)1(u\le\xi_{s-}^i(x),\,v\le\gamma_i^+\xi_{s-}^{3-i}(y))\,\hat\Lambda_{x,y}^{i,+,c}(ds,du,dv)$$
$$\qquad - \sum_{x,y}\int_0^t\!\int\!\int \psi^i(s,x)1(u\le\xi_{s-}^i(x),\,v\le\gamma_i^-\xi_{s-}^{3-i}(y))\,\hat\Lambda_{x,y}^{i,-,c}(ds,du,dv)$$
$$\qquad + \sum_{x,y}\int_0^t\!\int \psi^i(s,x)1(u\le\xi_{s-}^i(y))\,\hat\Lambda_{x,y}^{i,m}(ds,du) - \sum_{x,y}\int_0^t\!\int \psi^i(s,x)1(u\le\xi_{s-}^i(x))\,\hat\Lambda_{y,x}^{i,m}(ds,du)\Big].$$

To deal with the convergence of the above sum, note that its predictable square function is
$$(2.10)\qquad \langle M^{i,N}(\psi^i)\rangle_t = \int_0^t X_s^{i,N}\big(2(\psi_s^i)^2\big)\,ds + \frac{|\gamma_i|}{N}L_t^{i,N}\big((\psi^i)^2\big) + \frac{1}{N}\int_0^t X_s^{i,N}\big(\nabla_N^2\psi_s^i\big)\,ds.$$
If $\psi$ is bounded, the above is easily seen to be square integrable by (2.5), and so $M_t^{i,N}(\psi^i)$ is an $L^2$ $\mathcal{F}_t$-martingale. More generally, whenever the above expression is a.s. finite for all $t > 0$, $M_t^{i,N}(\psi)$ is an $\mathcal{F}_t$-local martingale. The last two terms are minor error terms. We write $\bar M^{i,N}$ for the corresponding martingale measures for our dominating positive colicin processes.

Let $\Delta_N$ be the generator of the "motion" process $B_\cdot^N$, which takes steps according to $p^N$ at rate $N$:
$$\Delta_N\phi(x) = N\sum_{y\in\mathbb{Z}_N}(\phi(y)-\phi(x))\,p^N(y-x).$$

Let $\Pi_{s,x}^N$ be the law of this process starting from $x$ at time $s$. We adopt the convention $\Pi_x^N = \Pi_{0,x}^N$. It follows from Lemma 2.6 of Cox, Durrett and Perkins (2000) that if $\sigma^2$ is as defined in Section 1, then for $\phi \in C_b^{1,3}([0,T]\times\mathbb{R}^d)$,
$$(2.11)\qquad \Delta_N\phi(s,x) \to \frac{\sigma^2}{2}\Delta\phi(s,x) \quad\text{uniformly in } s \le T \text{ and } x \in \mathbb{R}^d \text{ as } N \to \infty.$$

Let $\phi_1, \phi_2 \in C_b([0,T]\times\mathbb{Z}_N)$ with $\dot\phi_i = \partial\phi_i/\partial t$ also in $C_b([0,T]\times\mathbb{Z}_N)$. It is now fairly straightforward to multiply (2.2) by $\phi_i(t,x)/N$, sum over $x$, and integrate by parts to see that $(X^{1,N}, X^{2,N})$ satisfies the following martingale problem $M_{X_0^{1,N},X_0^{2,N}}^{N,\gamma_1,\gamma_2}$:
$$(2.12)\qquad X_t^{1,N}(\phi_1(t)) = X_0^{1,N}(\phi_1(0)) + M_t^{1,N}(\phi_1) + \int_0^t X_s^{1,N}\big(\Delta_N\phi_1(s) + \dot\phi_1(s)\big)\,ds - \gamma_1 L_t^{1,N}(\phi_1), \quad t \le T,$$
$$\qquad\quad X_t^{2,N}(\phi_2(t)) = X_0^{2,N}(\phi_2(0)) + M_t^{2,N}(\phi_2) + \int_0^t X_s^{2,N}\big(\Delta_N\phi_2(s) + \dot\phi_2(s)\big)\,ds + \gamma_2 L_t^{2,N}(\phi_2), \quad t \le T,$$
where the $M^{i,N}$ are $\mathcal{F}_t$-martingales such that
$$\langle M^{i,N}(\phi_i), M^{j,N}(\phi_j)\rangle_t = \delta_{i,j}\Big(\int_0^t X_s^{i,N}\big(2\phi_i(s)^2\big)\,ds + \frac{|\gamma_i|}{N}L_t^{i,N}(\phi_i^2) + \frac{1}{N}\int_0^t X_s^{i,N}\big(\nabla_N^2\phi_i(s)\big)\,ds\Big).$$

Let
$$g_N(\bar X_s^{1,N}, x) = 2^{-d}N^{d/2}\,\bar X_s^{1,N}\big(B(x, N^{-1/2})\big).$$
To derive the conditional mean of $\bar X^{2,N}$ given $\bar X^{1,N}$ we first note that $\bar\xi^{1,N}$ is in fact $\mathcal{F}_t^1$-adapted: the equations for $\bar\xi^{1,N}$ are autonomous since $\bar\gamma_1 = 0$, and so the pathwise unique solution will be adapted to this smaller filtration. Note also that if $\bar{\mathcal F}_t = \mathcal{F}^1_\infty\vee\mathcal{F}^2_t$, then $\hat\Lambda^{2,\pm}$, $\hat\Lambda^{2,\pm,c}$, $\hat\Lambda^{2,m}$ are all $\bar{\mathcal F}_t$-martingale measures, and so $\bar M^{2,N}(\psi)$ will be a $\bar{\mathcal F}_t$-martingale whenever $\psi:[0,T]\times\Omega\times\mathbb{Z}_N\to\mathbb{R}$ is bounded and $\bar{\mathcal F}_t$-predictable. Therefore if $\psi,\dot\psi:[0,T]\times\Omega\times\mathbb{Z}_N\to\mathbb{R}$ are bounded, continuous in the first and third variables for a.a. choices of the second, and $\bar{\mathcal F}_t$-predictable in the first two

variables for each point in $\mathbb{Z}_N$, then we can repeat the derivation of the martingale problem for $(X^{1,N},X^{2,N})$ and see that
$$\bar X_t^{2,N}(\psi_t) = \bar X_0^{2,N}(\psi_0) + \bar M_t^{2,N}(\psi) + \int_0^t \bar X_s^{2,N}\Big(\Delta_N\psi_s + \gamma_2^+ g_N(\bar X_s^{1,N},\cdot)\psi_s + \dot\psi_s\Big)\,ds,\quad t\le T,$$
where $\bar M_t^{2,N}(\psi)$ is now an $\bar{\mathcal F}_t$-local martingale because the right-hand side of (2.10) is a.s. finite for all $t>0$.

Fix $t>0$ and a map $\phi:\mathbb{Z}_N\times\Omega\to\mathbb{R}$ which is $\mathcal{F}^1_\infty$-measurable in the second variable and satisfies
(2.13)
$$\sup_{x\in\mathbb{Z}_N}|\phi(x)| < \infty\quad\text{a.s.}$$

Let $\psi$ satisfy
$$\frac{\partial\psi_s}{\partial s} = -\Delta_N\psi_s - \gamma_2^+ g_N(\bar X_s^{1,N},\cdot)\psi_s,\quad 0\le s\le t,\qquad \psi_t = \phi.$$
One can check that $\psi_s$, $s\le t$, is given by
(2.14)
$$\psi_s(x) = P^{g_N}_{s,t}(\phi)(x) \equiv \Pi^N_{s,x}\Big[\phi(B_t^N)\exp\Big(\int_s^t \gamma_2^+ g_N(\bar X_r^{1,N}, B_r^N)\,dr\Big)\Big],$$
which indeed satisfies the above conditions on $\psi$. Therefore for $\psi,\phi$ as above,
(2.15)
$$\bar X_t^{2,N}(\phi) = \bar X_0^{2,N}\big(P^{g_N}_{0,t}(\phi)\big) + \bar M_t^{2,N}(\psi).$$
For each $K>0$,

$$E\big(\bar M_t^{2,N}(\psi)\,\big|\,\bar{\mathcal F}_0\big) = E\big(\bar M_t^{2,N}(\psi\wedge K)\,\big|\,\bar{\mathcal F}_0\big) = 0\ \text{a.s. on}\ \Big\{\sup_{s\le t}|\psi_s|\le K\Big\}\in\bar{\mathcal F}_0,$$
and hence, letting $K\to\infty$, we get
(2.16)
$$E\big(\bar M_t^{2,N}(\psi)\,\big|\,\bar{\mathcal F}_0\big) = 0\quad\text{a.s.}$$
This and (2.15) imply
(2.17)
$$E\big[\bar X_t^{2,N}(\phi)\,\big|\,\bar X^{1,N}\big] = \bar X_0^{2,N}\big(P^{g_N}_{0,t}(\phi)(\cdot)\big) = \int_{\mathbb{R}^d}\Pi_{0,x}\Big[\phi(B_t^N)\exp\Big(\int_0^t\gamma_2^+ g_N(\bar X_r^{1,N},B_r^N)\,dr\Big)\Big]\,\bar X_0^{2,N}(dx).$$

It will also be convenient to use (2.15) to prove a corresponding result for conditional second moments.


Lemma 2.3 Let $\phi_i:\mathbb{Z}_N\times\Omega\to\mathbb{R}$ $(i=1,2)$ be $\mathcal{F}^1_\infty$-measurable in the second variable and satisfy (2.13). Then
(2.18)
$$E\big[\bar X_t^{2,N}(\phi_1)\bar X_t^{2,N}(\phi_2)\,\big|\,\bar X^{1,N}\big] = \bar X_0^{2,N}\big(P^{g_N}_{0,t}(\phi_1)(\cdot)\big)\,\bar X_0^{2,N}\big(P^{g_N}_{0,t}(\phi_2)(\cdot)\big)$$
$$\quad + E\Big[\int_0^t\int_{\mathbb{R}^d} 2P^{g_N}_{s,t}(\phi_1)(x)\,P^{g_N}_{s,t}(\phi_2)(x)\,\bar X_s^{2,N}(dx)\,ds\,\Big|\,\bar X^{1,N}\Big]$$
$$\quad + E\Big[\int_0^t\int_{\mathbb{R}^d}\sum_{y\in\mathbb{Z}_N}\big(P^{g_N}_{s,t}(\phi_1)(y)-P^{g_N}_{s,t}(\phi_1)(x)\big)\big(P^{g_N}_{s,t}(\phi_2)(y)-P^{g_N}_{s,t}(\phi_2)(x)\big)\,p^N(x-y)\,\bar X_s^{2,N}(dx)\,ds\,\Big|\,\bar X^{1,N}\Big]$$
$$\quad + E\Big[\frac{\gamma_2^+}{N}\,\bar L_t^{2,N}\big(P^{g_N}_{\cdot,t}(\phi_1)(\cdot)\,P^{g_N}_{\cdot,t}(\phi_2)(\cdot)\big)\,\Big|\,\bar X^{1,N}\Big].$$

Proof. Argue just as in the derivation of (2.16) to see that
$$E\big(\bar M_t^{2,N}(\phi_1)\bar M_t^{2,N}(\phi_2)\,\big|\,\bar{\mathcal F}_0\big) = E\big(\langle\bar M^{2,N}(\phi_1),\bar M^{2,N}(\phi_2)\rangle_t\,\big|\,\bar{\mathcal F}_0\big)\quad\text{a.s.}$$
The result is now immediate from this, (2.15) and (2.10).

Now we use the Taylor expansion of the exponential in (2.14) to see that for $\phi:\mathbb{Z}_N\times\Omega\to[0,\infty)$ as above and $0\le s<t$,
$$P^{g_N}_{s,t}(\phi)(x) = \Pi^N_{s,x}\Big[\phi(B_t^N)\sum_{n=0}^\infty\frac{(\gamma_2^+)^n}{n!}\int_s^t\!\cdots\!\int_s^t\prod_{i=1}^n g_N(\bar X_{s_i}^{1,N}, B_{s_i}^N)\,ds_1\ldots ds_n\Big]$$
(2.19)
$$= \sum_{n=0}^\infty(\gamma_2^+)^n\int_{\mathbb{R}_+^n}1(s<s_1<s_2<\ldots<s_n\le t)\Big[\int_{\mathbb{R}^{dn}}p^{(n)}_x(s_1,\ldots,s_n,t,y_1,\ldots,y_n,\phi)\prod_{i=1}^n\bar X_{s_i}^{1,N}(dy_i)\Big]\,ds_1\ldots ds_n.$$
Here
$$p^{(n)}_x(s_1,\ldots,s_n,t,y_1,\ldots,y_n,\phi) = 2^{-dn}N^{dn/2}\,\Pi^N_x\Big(\phi(B_t^N)\,1\big(|y_i-B_{s_i}^N|\le 1/\sqrt N,\ i=1,\ldots,n\big)\Big).$$
We now state the concentration inequality for our rescaled branching random walks $\bar X^{1,N}$ which will play a central role in our proofs. The proof is given in Section 6.

Proposition 2.4 Assume that the non-random initial measures $\{\bar X_0^{1,N}\}$ satisfy $UB_N$. For $\delta>0$, define
$$H_{\delta,N} \equiv \sup_{t\ge 0}\varrho^N_\delta\big(\bar X_t^{1,N}\big).$$

Then for any $\delta>0$, $H_{\delta,N}$ is bounded in probability uniformly in $N$; that is, for any $\epsilon>0$ there exists $M(\epsilon)$ such that $P(H_{\delta,N}\ge M(\epsilon))\le\epsilon$ for all $N\ge 1$.
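The expansion (2.19) above rests on the deterministic identity that the $n$-th time-ordered integral of $g(s_1)\cdots g(s_n)$ over $\{s<s_1<\cdots<s_n\le t\}$ equals $(\int_s^t g)^n/n!$, so that the series sums to $\exp(\int_s^t g)$. A quick numerical sanity check, with an arbitrary illustrative integrand (not from the paper):

```python
import math

# Check the identity behind (2.19): the time-ordered integral of
# g(s1)g(s2) over {0 < s1 < s2 < 1} equals (1/2!)*(int_0^1 g)^2,
# and summing (int g)^n / n! recovers exp(int g).
# g is an arbitrary illustrative integrand.

def g(r):
    return 1.0 + 0.5 * math.cos(3.0 * r)

def riemann(f, a, b, n=2000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

I = riemann(g, 0.0, 1.0)

# n = 2 time-ordered integral, by a midpoint double sum over s1 < s2
m = 400
h = 1.0 / m
ordered2 = 0.0
for j in range(m):
    s2 = (j + 0.5) * h
    ordered2 += g(s2) * riemann(g, 0.0, s2, 200) * h  # inner int over (0, s2)

print(ordered2, I**2 / 2.0)  # the two should agree
print(sum(I**n / math.factorial(n) for n in range(30)), math.exp(I))
```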

Throughout the rest of the paper we will assume:

Assumption 2.5 The sequences of measures $\{X_0^{i,N}, N\ge 1\}$, $i=1,2$, satisfy condition $UB_N$, and for each $i$, $X_0^{i,N}\to X_0^i$ in $M_F$ as $N\to\infty$.

It follows from $\mathcal{M}^{N,0,0}_{X_0^{1,N},X_0^{2,N}}$ and the above assumption that $\sup_s\bar X_s^{1,N}(1)$ is bounded in probability uniformly in $N$. For example, it is a non-negative martingale with mean $X_0^{1,N}(1)\to X_0^1(1)$, and so one can apply the weak $L^1$ inequality for non-negative martingales. It therefore follows from Proposition 2.4 that (suppressing dependence on $\delta>0$)
(2.20)
$$R_N = H_{\delta,N} + \sup_s\bar X_s^{1,N}(1)$$
is also bounded in probability uniformly in $N$; that is, for any $\epsilon>0$ there is an $M_\epsilon>0$ such that
(2.21)
$$P(R_N\ge M_\epsilon)\le\epsilon\quad\text{for all } N.$$
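The martingale argument for the total mass can be illustrated on a toy model: for a critical Galton–Watson process (an illustrative stand-in for the paper's rate-$N$ branching random walk) the population $Z_n$ is a non-negative martingale, so the weak $L^1$ maximal inequality gives $P(\sup_n Z_n\ge\lambda Z_0)\le 1/\lambda$:

```python
import random

# Toy illustration of the weak L^1 bound for the total-mass martingale:
# critical Galton-Watson offspring (0 or 2, each with prob. 1/2) is a
# stand-in for the paper's branching particle system.

random.seed(0)

def gw_sup(z0, gens):
    z, m = z0, z0
    for _ in range(gens):
        z = sum(2 for _ in range(z) if random.random() < 0.5)
        m = max(m, z)
        if z == 0:
            break
    return m

z0, lam, trials = 100, 4.0, 500
hits = sum(gw_sup(z0, 50) >= lam * z0 for _ in range(trials))
print(hits / trials, "<=", 1 / lam)  # empirical frequency vs maximal bound
```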

The next two sections deal with tightness and Tanaka's formula, respectively. In the course of the proofs we will use some technical lemmas, proved in Sections 7 and 8, which involve a non-decreasing $\sigma(\bar X^{1,N})$-measurable process $\bar R_N(t,\omega)$ whose definition (value) may change from line to line and which satisfies
(2.22)
$$\text{for each } t>0,\ \bar R_N(t)\ \text{is bounded in probability uniformly in } N.$$

3 Tightness of the Approximating Systems

It will be convenient in Section 4 to also work with the symmetric collision local time defined by
$$L^N_t(\phi) = 2^{-d}\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}1\big(|x_1-x_2|\le N^{-1/2}\big)\,N^{d/2}\,\phi\big((x_1+x_2)/2\big)\,X_s^{1,N}(dx_1)\,X_s^{2,N}(dx_2)\,ds.$$
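For empirical measures carrying finitely many particles of mass $1/N$, the time-derivative of $L^N_t(\phi)$ is a double sum over particle pairs. A minimal computational sketch (the particle positions are hypothetical illustrative data, d = 1):

```python
import math

# Integrand of the symmetric collision local time L^N_t(phi) at a fixed
# time s, for X^{i,N} = (1/N) * sum of point masses at the given points.
# Particle positions below are hypothetical.

def collision_rate(pts1, pts2, phi, N, d=1):
    h = 1.0 / math.sqrt(N)
    total = 0.0
    for x1 in pts1:
        for x2 in pts2:
            if abs(x1 - x2) <= h:          # |x1 - x2| <= N^{-1/2}
                total += phi((x1 + x2) / 2.0)
    # mass 1/N per particle (hence 1/N^2 per pair), kernel weight 2^{-d} N^{d/2}
    return 2.0**(-d) * N**(d / 2.0) * total / N**2

N = 100
pts1 = [0.0, 0.05, 0.5]
pts2 = [0.04, 0.58, 1.2]
print(collision_rate(pts1, pts2, lambda x: 1.0, N))
```

Only pairs within $N^{-1/2}$ of each other contribute, which is exactly the mechanism that makes $L^N$ a discrete approximation of the collision local time.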

This section is devoted to the proof of the following proposition.

Proposition 3.1 Let $\gamma_1\ge 0$, $\gamma_2\in\mathbb{R}$ and $\{(X^{1,N},X^{2,N}) : N\in\mathbb{N}\}$ be as in (2.6) with $\{X_0^{i,N}, N\in\mathbb{N}\}$, $i=1,2$, satisfying Assumption 2.5. Then $\{(X^{1,N},X^{2,N},L^{2,N},L^{1,N},L^N), N\in\mathbb{N}\}$ is tight on $D_{M_F^5}$, and each limit point $(X^1,X^2,A,A,A)\in C_{M_F^5}$ satisfies the following martingale problem $\mathcal{M}^{-\gamma_1,\gamma_2,A}_{X_0^1,X_0^2}$: For $\phi_i\in C_b^2(\mathbb{R}^d)$, $i=1,2$,
$$X_t^1(\phi_1) = X_0^1(\phi_1) + M_t^1(\phi_1) + \int_0^t X_s^1\Big(\frac{\sigma^2}{2}\Delta\phi_1\Big)\,ds - \gamma_1 A_t(\phi_1),$$
$$X_t^2(\phi_2) = X_0^2(\phi_2) + M_t^2(\phi_2) + \int_0^t X_s^2\Big(\frac{\sigma^2}{2}\Delta\phi_2\Big)\,ds + \gamma_2 A_t(\phi_2),$$
where the $M^i$ are continuous local martingales such that
$$\langle M^i(\phi_i), M^j(\phi_j)\rangle_t = \delta_{i,j}\int_0^t 2X_s^i\big(\phi_i^2\big)\,ds.$$

As has already been noted, the main step will be to establish Proposition 3.1 for the positive colicin process $(\bar X^{1,N},\bar X^{2,N})$ which bounds $(X^{1,N},X^{2,N})$. Recall that this process solves the following martingale problem: For bounded $\phi:\mathbb{Z}_N\to\mathbb{R}$,
(3.1)
$$\bar X_t^{1,N}(\phi) = \bar X_0^{1,N}(\phi) + \bar M_t^{1,N}(\phi) + \int_0^t\bar X_s^{1,N}(\Delta_N\phi)\,ds,$$
(3.2)
$$\bar X_t^{2,N}(\phi) = \bar X_0^{2,N}(\phi) + \bar M_t^{2,N}(\phi) + \int_0^t\bar X_s^{2,N}(\Delta_N\phi)\,ds + \gamma_2^+\int_0^t\int_{\mathbb{R}^d}\phi(x)\,\bar L^{2,N}(dx,ds),$$

where
(3.3)
$$\langle\bar M^{1,N}(\phi)\rangle_t = 2\int_0^t\bar X_s^{1,N}(\phi^2)\,ds + \int_0^t\int_{\mathbb{R}^d}\Big(\sum_{x\in\mathbb{Z}_N}\big(\phi(x)-\phi(y)\big)^2 p^N(x-y)\Big)\,\bar X_s^{1,N}(dy)\,ds,$$
(3.4)
$$\langle\bar M^{2,N}(\phi)\rangle_t = 2\int_0^t\bar X_s^{2,N}(\phi^2)\,ds + \int_0^t\int_{\mathbb{R}^d}\Big(\sum_{y\in\mathbb{Z}_N}\big(\phi(y)-\phi(x)\big)^2 p^N(x-y)\Big)\,\bar X_s^{2,N}(dx)\,ds + \frac{\gamma_2^+}{N}\,\bar L_t^{2,N}\big(\phi^2\big).$$



Proposition 3.2 The sequence $\{\bar X^{1,N}, N\ge 1\}$ converges weakly in $D_{M_F}$ to super-Brownian motion with parameters $b=2$, $\sigma^2$, and $\beta=0$.

Proof This result is standard in super-Brownian motion theory; see e.g. Theorem 15.1 of Cox, Durrett, and Perkins (1999).

Most of the rest of this section is devoted to the proof of the following proposition.

Proposition 3.3 The sequence $\{\bar X^{2,N}, N\ge 1\}$ is tight in $D_{M_F}$ and each limit point is supported by $C_{M_F}$.

Recall the following lemma (Lemmas 2.7, 2.8 of Durrett and Perkins (1999)) which gives conditions for tightness of a sequence of measure-valued processes.

Lemma 3.4 Let $\{P^N\}$ be a sequence of probabilities on $D_{M_F}$ and let $Y_t$ denote the coordinate variables on $D_{M_F}$. Assume $\Phi\subset C_b(\mathbb{R}^d)$ is a separating class which is closed under addition.
(a) $\{P^N\}$ is tight in $D_{M_F}$ if and only if the following conditions hold:
(i) For each $T,\epsilon>0$, there is a compact set $K_{T,\epsilon}\subset\mathbb{R}^d$ such that
$$\limsup_N P^N\Big(\sup_{t\le T}Y_t\big(K_{T,\epsilon}^c\big)>\epsilon\Big)<\epsilon.$$

(ii) $\lim_{M\to\infty}\sup_N P^N\big(\sup_{t\le T}Y_t(1)>M\big)=0$.
(iii) If $P^{N,\phi}(A)\equiv P^N(Y_\cdot(\phi)\in A)$, then for each $\phi\in\Phi$, $\{P^{N,\phi}, N\ge 1\}$ is tight in $D_{\mathbb{R}}$.
(b) If $P^N$ satisfies (i), (ii), and (iii), and for each $\phi\in\Phi$ all limit points of $P^{N,\phi}$ are supported by $C_{\mathbb{R}}$, then $P^N$ is tight in $D_{M_F}$ and all limit points are supported on $C_{M_F}$.

Notation. We choose constants
(3.5)
$$0<\delta<\tilde\delta<1/6,$$
and define
(3.6)
$$l_d\equiv(d/2-1)^+,\qquad \hat\varrho^N_\delta(\mu)\equiv\varrho^N_\delta(\mu)+\mu(1).$$
By Proposition 2.4,
$$\sup_{x,t}\bar X_t^{1,N}\big(B(x,r)\big)\le H_{\delta,N}\,r^{(2\wedge d)-\delta},\quad\forall r\in[1/\sqrt N,1],$$
and hence for $\phi:\mathbb{R}_+\times\mathbb{Z}_N\to[0,\infty)$,
(3.7)
$$\frac{1}{N}\bar L_t^{2,N}(\phi)\le H_{\delta,N}\,N^{-1+l_d+\delta/2}\,2^{-d}\int_0^t\int_{\mathbb{R}^d}\phi(x)\,\bar X_s^{2,N}(dx)\,ds.$$

Recall our convention regarding $\bar R_N(t,\omega)$ from the end of Section 2. The proof of the following bound on the semigroup $P^{g_N}_{s,t}$ is deferred until Section 7.

Proposition 3.5 Let $\phi:\mathbb{Z}_N\to[0,\infty)$. Then for $0\le s<t$:
(a)
$$\int P^{g_N}_{s,t}(\phi)(x_1)\,\mu^N(dx_1)\le\int\Pi^N_{s,x_1}\big[\phi(B_t^N)\big]\,\mu^N(dx_1) + \hat\varrho^N_\delta(\mu^N)\,\bar R_N(t)\int_s^t(s_n-s)^{-l_d-\tilde\delta}\sup_{|z_n|\le N^{-1/2}}\int\Pi^N_{s_n,y_n+z_n}\big[\phi(B_t^N)\big]\,\bar X_{s_n}^{1,N}(dy_n)\,ds_n.$$
(b)
$$P^{g_N}_{s,t}(\phi)(x_1)\le\|\phi\|_\infty\,\bar R_N(t),\qquad x_1\in\mathbb{Z}_N.$$

As simple consequences of the above we have the following bounds on the conditional mean measures of $\bar X^{2,N}$.

Corollary 3.6 If $\phi:\mathbb{Z}_N\to[0,\infty)$, then
(a)
$$E\big[\bar X_t^{2,N}(\phi)\,\big|\,\bar X^{1,N}\big]\le\int\Pi^N_{x_1}\big[\phi(B_t^N)\big]\,\bar X_0^{2,N}(dx_1) + \hat\varrho^N_\delta(\bar X_0^{2,N})\,\bar R_N(t)\int_0^t s_n^{-\tilde\delta-l_d}\sup_{|z_n|\le N^{-1/2}}\int\Pi^N_{y_n+z_n}\big[\phi(B_{t-s_n}^N)\big]\,\bar X_{s_n}^{1,N}(dy_n)\,ds_n.$$
(b)
$$E\big[\bar X_t^{2,N}(\phi)\,\big|\,\bar X^{1,N}\big]\le\|\phi\|_\infty\,\bar X_0^{2,N}(1)\,\bar R_N(t).$$

Proof Immediate from (2.17) and Proposition 3.5.

The next lemma gives a bound for a particular test function $\phi$ and is essential for bounding the first moments of the approximate local times.

Lemma 3.7 (a)
$$\int_{\mathbb{Z}_N}P^{g_N}_{0,t}\Big(N^{d/2}1\big(|\cdot-y|\le 1/\sqrt N\big)\Big)(x_1)\,\mu^N(dx_1)\le\hat\varrho^N_\delta(\mu^N)\,\bar R_N(t)\,t^{-l_d-\tilde\delta},\quad\forall t>0,\ y\in\mathbb{Z}_N,\ N\ge 1.$$
(b)
$$\int_{\mathbb{Z}_N}P^{g_N}_{0,t}\Big(N^{d/2}1\big(|\cdot-y|\le 1/\sqrt N\big)\Big)(x_1)\,\mu^N(dy)\le\hat\varrho^N_\delta(\mu^N)\,\bar R_N(t)\,t^{-l_d-\tilde\delta},\quad\forall t>0,\ x_1\in\mathbb{Z}_N,\ N\ge 1.$$

Proof Deferred to Section 7.

Lemma 3.8 For any $\epsilon,\epsilon_1>0$, $T>0$, there exists $r_1$ such that
(3.8)
$$P\Big(\sup_{t\le T}E\big[\bar X_t^{2,N}\big(B(0,r)^c\big)\,\big|\,\bar X^{1,N}\big]\le\epsilon^3\Big)\ge 1-\epsilon_1,\quad\forall N,\ \forall r\ge r_1.$$

Proof Corollary 3.6(a) implies
$$E\big[\bar X_t^{2,N}\big(B(0,r)^c\big)\,\big|\,\bar X^{1,N}\big]\le\int\Pi^N_{x_1}\big(|B_t^N|>r\big)\,\bar X_0^{2,N}(dx_1) + \hat\varrho^N_\delta(\bar X_0^{2,N})\,\bar R_N(t)\int_0^t s_n^{-\tilde\delta-l_d}\sup_{|z_n|\le N^{-1/2}}\int\Pi^N_{s_n,y_n+z_n}\big(|B_t^N|>r\big)\,\bar X_{s_n}^{1,N}(dy_n)\,ds_n$$
$$\equiv I_t^{1,N}(r)+I_t^{2,N}(r).$$

Now we have
$$I_t^{1,N}(r)\le\bar X_0^{2,N}\big(B(0,r/2)^c\big)+\bar X_0^{2,N}(1)\,\Pi^N_0\big(|B_t^N|>r/2\big).$$
Clearly, for any compact $K\subset\mathbb{R}^d$,
(3.9)
$$\big\{\Pi^N_y : y\in K,\ N\ge 1\big\}\ \text{is tight on } D_{\mathbb{R}^d}.$$
By (3.9) and Assumption 2.5 we get that for all $r$ sufficiently large and all $N\in\mathbb{N}$,
(3.10)
$$\sup_{t\le T}I_t^{1,N}(r)\le\frac{\epsilon^3}{2}.$$
Arguing in a similar manner for $I^{2,N}$, we get
$$\sup_{t\le T}I_t^{2,N}(r)\le\hat\varrho^N_\delta(\bar X_0^{2,N})\,\bar R_N(T)\int_0^T s^{-\tilde\delta-l_d}\Big(\sup_{s\le T}\bar X_s^{1,N}\big(B(0,r/2)^c\big)+\sup_{s\le T}\bar X_s^{1,N}(1)\,\Pi^N_0\big(|B_t^N|>r/2-N^{-1/2}\big)\Big)\,ds.$$
Again, by (3.9), our assumptions on $\{\bar X_0^{2,N}, N\ge 1\}$, and tightness of $\{\bar R_N(T), N\ge 1\}$ and $\{\bar X^{1,N}, N\ge 1\}$, we get that for all $r$ sufficiently large and all $N$,
(3.11)
$$P\Big(\sup_{t\le T}I_t^{2,N}(r)\le\frac{\epsilon^3}{2}\Big)\ge 1-\epsilon_1,$$
and we are done.

Lemma 3.9 For any $\epsilon,\epsilon_1>0$, $T>0$, there exists $r_1$ such that
(3.12)
$$P\Big(E\big[\bar L_T^{2,N}\big(B(0,r)^c\big)\,\big|\,\bar X^{1,N}\big]\le\epsilon^2\Big)\ge 1-\epsilon_1,\quad\forall N\in\mathbb{N},\ \forall r\ge r_1.$$

Proof
$$E\big[\bar L_T^{2,N}\big(B(0,r)^c\big)\,\big|\,\bar X^{1,N}\big]\le 2^{-d}E\Big[\int_0^T\int_{|x|\ge r}\int_{z\in\mathbb{R}^d}N^{d/2}1\big(|x-z|\le N^{-1/2}\big)\,\bar X_s^{2,N}(dx)\,\bar X_s^{1,N}(dz)\,ds\,\Big|\,\bar X^{1,N}\Big]$$
$$= 2^{-d}\int_0^T\int_{|z|\ge r-N^{-1/2}}\int_{\mathbb{Z}_N}P^{g_N}_{0,s}\Big(N^{d/2}1\big(|\cdot-z|\le 1/\sqrt N\big)\Big)(x_1)\,\bar X_0^{2,N}(dx_1)\,\bar X_s^{1,N}(dz)\,ds\qquad\text{(by (2.17))}$$
$$\le\bar R_N(T)\,\hat\varrho^N_\delta(\bar X_0^{2,N})\,\sup_{s\le T}\bar X_s^{1,N}\big(B(0,r-N^{-1/2})^c\big)\int_0^T s^{-l_d-\tilde\delta}\,ds\qquad\text{(by Lemma 3.7(a))}$$
$$=\bar R_N(T)\,\hat\varrho^N_\delta(\bar X_0^{2,N})\,\sup_{s\le T}\bar X_s^{1,N}\big(B(0,r-N^{-1/2})^c\big).$$
Now recalling Assumption 2.5 and the tightness of $\{\bar R_N(T), N\ge 1\}$ and $\{\bar X^{1,N}, N\ge 1\}$, we complete the proof as in Lemma 3.8.

Notation For any $r>1$ let $f_r:\mathbb{R}^d\to[0,1]$ be a $C^\infty$ function such that
$$B(0,r)\subset\{x: f_r(x)=0\}\subset\{x: f_r(x)<1\}\subset B(0,r+1).$$

Lemma 3.10 For each $T,\epsilon>0$, there is an $r=r_2>0$ sufficiently large such that
(3.13)
$$\limsup_{N\to\infty}P\Big(\sup_{t\le T}\bar X_t^{2,N}\big(B(0,r_2)^c\big)>\epsilon\Big)\le\epsilon.$$

Proof Apply Chebychev's inequality to each term of the martingale problem (3.2) for $\bar X^{2,N}$ and then Doob's inequality to get
(3.14)
$$P\Big(\sup_{t\le T}\bar X_t^{2,N}(f_r)>\epsilon\,\Big|\,\bar X^{1,N}\Big)\le\Big\{1\big(\bar X_0^{2,N}(f_r)>\epsilon/4\big)+\frac{c}{\epsilon^2}E\big[\langle\bar M^{2,N}_\cdot(f_r)\rangle_T\,\big|\,\bar X^{1,N}\big]$$
$$\qquad +\frac{c}{\epsilon}E\Big[\int_0^T\bar X_s^{2,N}\big(B(0,r)^c\big)\,ds\,\Big|\,\bar X^{1,N}\Big]+\frac{c\gamma}{\epsilon}E\Big[\int_0^T\int_{\mathbb{R}^d}f_r(x)\,\bar L^{2,N}(dx,ds)\,\Big|\,\bar X^{1,N}\Big]\Big\}\wedge 1.$$
Hence by tightness of $\{\bar X_0^{2,N}\}$, (3.4) and Lemmas 3.8, 3.9, we may take $r$ sufficiently large such that the right-hand side of (3.14) is less than $\epsilon/2$ with probability at least $1-\epsilon/2$ for all $N$. This completes the proof.

Lemma 3.11 For any $\phi\in B_{b,+}(\mathbb{R}^d)$, $t>0$,
(3.15)
$$E\big[\bar L_t^{2,N}(\phi)\,\big|\,\bar X^{1,N}\big]\le\|\phi\|_\infty\,\sup_{s\le t}\hat\varrho^N_\delta(\bar X_s^{1,N})\,\bar X_0^{2,N}(1)\int_0^t s^{-l_d-\tilde\delta}\,ds\le\|\phi\|_\infty\,\bar R_N(t)\,\bar X_0^{2,N}(1).$$

Proof By (2.17),
$$E\big[\bar L_t^{2,N}(\phi)\,\big|\,\bar X^{1,N}\big]\le\|\phi\|_\infty\,2^{-d}\int_0^t\int_{x_1\in\mathbb{R}^d}\int_{z\in\mathbb{R}^d}P^{g_N}_{0,s}\Big(N^{d/2}1\big(|\cdot-z|\le 1/\sqrt N\big)\Big)(x_1)\,\bar X_0^{2,N}(dx_1)\,\bar X_s^{1,N}(dz)\,ds$$
$$\le\|\phi\|_\infty\,\sup_{s\le t}\hat\varrho^N_\delta(\bar X_s^{1,N})\,\bar X_0^{2,N}(1)\int_0^t s^{-l_d-\tilde\delta}\,ds,$$
where the last inequality follows by Lemma 3.7(b) with $\mu^N=\bar X_s^{1,N}$. As we may assume
$$\sup_{s\le t}\hat\varrho^N_\delta(\bar X_s^{1,N})\le\bar R_N(t)$$
(the left-hand side is bounded in probability uniformly in $N$ by Propositions 2.4 and 3.2), the result follows.

Lemma 3.12 For any $T>0$,
$$\lim_{K\to\infty}\sup_N P\Big(\sup_{t\le T}\bar X_t^{2,N}(1)>K\Big)=0.$$

Proof Applying Chebychev's inequality to each term of the martingale problem (3.2) for $\bar X^{2,N}$ and then Doob's inequality, one sees that
(3.16)
$$P\Big(\sup_{t\le T}\bar X_t^{2,N}(1)>K\,\Big|\,\bar X^{1,N}\Big)\le\Big\{1\big(\bar X_0^{2,N}(1)>K/3\big)+\frac{c}{K^2}E\big[\langle\bar M^{2,N}_\cdot(1)\rangle_T\,\big|\,\bar X^{1,N}\big]+\frac{c\gamma_2^+}{K}E\big[\bar L_T^{2,N}(1)\,\big|\,\bar X^{1,N}\big]\Big\}\wedge 1.$$
Now apply Assumption 2.5, (3.4), Lemma 3.11, and Corollary 3.6(b) to finish the proof.

Lemma 3.13 The sequence $\{\bar L^{2,N}, N\ge 1\}$ is tight in $C_{M_F}$.

Proof Lemmas 3.9 and 3.11 imply conditions (i) and (ii), respectively, of Lemma 3.4. Now let us check (iii). Let $\Phi\subset C_b(\mathbb{R}^d)$ be a separating class of functions. We will argue by Aldous' tightness criterion (see Theorem 6.8 of Walsh (1986)). First, by (2.22) and Lemma 3.11 we immediately get that for any $\phi\in\Phi$, $t\ge 0$, $\{\bar L_t^{2,N}(\phi) : N\in\mathbb{N}\}$ is tight. Next, let $\{\tau_N\}$ be an arbitrary sequence of stopping times bounded by some $T>0$, and let $\{\epsilon_N, N\ge 1\}$ be a sequence such that $\epsilon_N\downarrow 0$ as $N\to\infty$. Then arguing as in Lemma 3.11 it is easy to verify
$$E\big[\bar L_{\tau_N+\epsilon_N}^{2,N}(\phi)-\bar L_{\tau_N}^{2,N}(\phi)\,\big|\,\bar X^{1,N},\mathcal{F}_{\tau_N}\big]\le\|\phi\|_\infty\,\bar R_N(T)\,\bar X_{\tau_N}^{2,N}(1)\int_{\tau_N}^{\tau_N+\epsilon_N}(s-\tau_N)^{-l_d-\tilde\delta}\,ds.$$
Then by (2.22) and Lemma 3.12 we immediately get that
$$\bar L_{\tau_N+\epsilon_N}^{2,N}(\phi)-\bar L_{\tau_N}^{2,N}(\phi)\to 0$$
in probability as $N\to\infty$. Hence by Aldous' criterion for tightness we get that $\{\bar L^{2,N}(\phi)\}$ is tight in $D_{\mathbb{R}}$ for any $\phi\in\Phi$. Note $\bar L^{2,N}(\phi)\in C_{\mathbb{R}}$ for all $N$, and so $\{\bar L^{2,N}(\phi)\}$ is tight in $C_{\mathbb{R}}$, and we are done.

The next lemma will be used in the proof of Proposition 3.1. The processes $X^{i,N}$, $L^{i,N}$ and $L^N$ are all as in that result.

Lemma 3.14 The sequences $\{L^{i,N}, N\ge 1\}$, $i=1,2$, and $\{L^N, N\ge 1\}$ are tight in $C_{M_F}$, and moreover for any uniformly continuous function $\phi$ on $\mathbb{R}^d$ and $T>0$,
(3.17)
$$\sup_{t\le T}\big|L_t^{i,N}(\phi)-L_t^N(\phi)\big|\to 0,\ \text{in probability as } N\to\infty,\ i=1,2.$$

Proof First, by Proposition 2.2,
(3.18)
$$L_t^{2,N}\le\bar L_t^{2,N},\qquad L_t^{2,N}-L_s^{2,N}\le\bar L_t^{2,N}-\bar L_s^{2,N},\quad\forall\,0\le s\le t,$$
where $\bar L^{2,N}$ is the approximating collision local time for the $(\bar X^{1,N},\bar X^{2,N})$ solving $\mathcal{M}^{N,0,\gamma_2^+}_{X_0^{1,N},X_0^{2,N}}$. By the proof of Lemma 3.13, $\bar L^{2,N}$ is tight in $C_{M_F}$, and hence, by (3.18), $L^{2,N}$ is tight as well (see Lemma 3.13). To finish the proof it is enough to show that for any uniformly continuous function $\phi$ on $\mathbb{R}^d$ and $T>0$,
(3.19)
$$\sup_{t\le T}\big|L_t^{1,N}(\phi)-L_t^{2,N}(\phi)\big|\to 0,\ \text{as } N\to\infty,\ \text{in probability},$$
(3.20)
$$\sup_{t\le T}\big|L_t^{N}(\phi)-L_t^{2,N}(\phi)\big|\to 0,\ \text{as } N\to\infty,\ \text{in probability}.$$
We will check only (3.19), since the proof of (3.20) goes along the same lines. By trivial calculations we get
$$\sup_{t\le T}\big|L_t^{1,N}(\phi)-L_t^{2,N}(\phi)\big|\le\sup_{|x-y|\le N^{-1/2}}|\phi(x)-\phi(y)|\,L_T^{2,N}(1)\le\sup_{|x-y|\le N^{-1/2}}|\phi(x)-\phi(y)|\,\bar L_T^{2,N}(1),\quad\forall N\ge 1,$$
where the last inequality follows by (3.18). The result follows by the uniform continuity assumption on $\phi$ and Lemma 3.13.

Now we are ready to present the

Proof of Proposition 3.3 We will check conditions (i)–(iii) of Lemma 3.4. By Lemmas 3.10 and 3.12, conditions (i) and (ii) of Lemma 3.4 are satisfied. Turning to (iii), fix a $\phi\in C_b^3(\mathbb{R}^d)$. Then using the Aldous criterion for tightness along with Lemma 3.12 and (2.11), and arguing as in Lemma 3.13, it is easy to verify that $\{\int_0^\cdot\bar X_s^{2,N}(\Delta_N\phi)\,ds\}$ is a tight sequence of processes in $C_{\mathbb{R}}$. By Lemma 3.13 and the uniform convergence of $P^N\phi$ to $\phi$ we also see that $\{\gamma_2^+\bar L_\cdot^{2,N}(P^N\phi)\}$ is a tight sequence of processes in $C_{\mathbb{R}}$.

Turning now to the local martingale term in (3.2), arguing as above, now using $|\nabla\phi|^2\le C_\phi<\infty$ and Lemma 3.13 as well, we see from (3.4) that $\{\langle\bar M^{2,N}(\phi)\rangle_\cdot, N\ge 1\}$ is a tight sequence of processes in $C_{\mathbb{R}}$. Note also that by definition,
$$\sup_{t\le T}\big|\Delta\bar M_t^{2,N}(\phi)\big|\le 2\|\phi\|_\infty N^{-1}.$$

Theorem VI.4.13 and Proposition VI.3.26 of Jacod and Shiryaev (1987) show that $\{\bar M_t^{2,N}(\phi), N\ge 1\}$ is a tight sequence in $D_{\mathbb{R}}$ and all limit points are supported in $C_{\mathbb{R}}$. The above results with (3.2) and Corollary VI.3.33 of Jacod and Shiryaev (1987) now show that $\bar X_\cdot^{2,N}(\phi)$ is tight in $D_{\mathbb{R}}$ and all the limit points are supported in $C_{\mathbb{R}}$. Lemma 3.4(b) completes the proof.

Proof of Proposition 3.1 Arguing as in the proof of Proposition 3.3 and using Proposition 2.2 and Lemma 3.14, we can easily show that $\{(X^{1,N},X^{2,N},L^{2,N},L^{1,N},L^N), N\ge 1\}$ is tight on $D_{M_F^5}$, and any limit point belongs to $C_{M_F^5}$. Let $\{(X^{1,N_k},X^{2,N_k},L^{2,N_k},L^{1,N_k},L^{N_k}), k\ge 1\}$ be any convergent subsequence of $\{(X^{1,N},X^{2,N},L^{2,N},L^{1,N},L^N), N\ge 1\}$. By Lemma 3.14, if $(X^1,X^2,A)$ is the limit of $\{(X^{1,N_k},X^{2,N_k},L^{2,N_k}), k\ge 1\}$, then
(3.21)
$$(X^{1,N_k},X^{2,N_k},L^{2,N_k},L^{1,N_k},L^{N_k})\Rightarrow(X^1,X^2,A,A,A)$$
as $k\to\infty$. By Skorohod's theorem, we may assume that the convergence in (3.21) is a.s. in $D_{M_F^5}$ to a continuous limit. To complete the proof we need to show that $(X^1,X^2)$ satisfies the martingale problem $\mathcal{M}^{-\gamma_1,\gamma_2,A}_{X_0^1,X_0^2}$. Let $\phi_i\in C_b^3(\mathbb{R}^d)$, $i=1,2$. Recalling from (2.11) that

0

∆N φi →

(3.22)

σ2 ∆φi uniformly on Rd , 2

−γ1 ,γ2 ,A ,−γ1 ,γ2 , except we see that all the terms in MNk1,N 2,Nk converge to the corresponding terms in MX 1 ,X 2 k X0

,X0

0

0

,γ1 ,γ2 perhaps the local martingale terms. By convergence of the other terms in MNk1,N 2,Nk we see that k X0

Mti,Nk (φi )



Mti (φi )

=

Xti (φi )



X0i (φi )

Z − 0

t

Xsi (

,X0

σ 2 ∆φi ) ds − (−1)i γi At (φi ) a.s. in DR , i = 1, 2. 2

These local martingales have jumps bounded by N2k kφi k∞ , and square functions which are bounded in probability uniformly in Nk by Proposition 3.2 and Lemma 3.12. Therefore they are locally bounded using stopping times {TnNk } which become large in probability as n → ∞ uniformly in Nk . One can now proceed in a standard manner (see, e.g. the proofs of Lemma 2.10 and Proposition 2 in Durrett and Perkins (1999)) to show that M i (φ) have the local martingale property and square 1 ,γ2 ,A functions claimed in M−γ . Finally we need to increase the class of test functions from Cb3 to X 1 ,X 2 0

0

Cb2 . For φi ∈ Cb2 apply the martingale problem with Pδ φi (Pδ is the Brownian semigroup) and let 1 ,γ2 ,A δ → 0. As Pδ ∆φi → ∆φi in the bounded pointwise sense, we do get M−γ for φi in the limit X01 ,X02 and so the proof is complete.

4 Convergence of the approximating Tanaka formulae

Define $K_N=N^{d/2}(M+\frac{1}{2})^d$. Then for any $\phi:\mathbb{Z}_N\to\mathbb{R}$ bounded or non-negative define
(4.1)
$$G^\alpha_N\phi(x_1,x_2)=\Pi^N_{x_1}\times\Pi^N_{x_2}\Big[\int_0^\infty e^{-\alpha s}K_N\,p^N\big(B_s^{1,N}-B_s^{2,N}\big)\,\phi\Big(\frac{B_s^{1,N}+B_s^{2,N}}{2}\Big)\,ds\Big],$$
where $\alpha\ge 0$ for $d=3$ and $\alpha>0$ for $d\le 2$. These conditions on $\alpha$ will be implicitly assumed in what follows. Note that for any bounded $\phi$ we have
(4.2)
$$\big|G^\alpha_N\phi(x_1,x_2)\big|\le\|\phi\|_\infty\,\Pi^N_{x_1}\times\Pi^N_{x_2}\Big[\int_0^\infty e^{-\alpha s}K_N\,p^N\big(B_s^{1,N}-B_s^{2,N}\big)\,ds\Big]\equiv\|\phi\|_\infty\,G^\alpha_N 1(x_1,x_2)$$
$$=\|\phi\|_\infty\,N^{d/2}2^{-d}\sum_{|z|\le N^{-1/2}}\int_0^\infty e^{-\alpha s}\,p^N_{2s}(x_1-x_2-z)\,ds,$$
where $p^N_\cdot$ is the transition probability function of the continuous time random walk $B^N$ with generator $\Delta_N$. For $0<\epsilon<1$, define
(4.3)
$$\psi^\epsilon_N(x_1,x_2)\equiv G^\alpha_N 1(x_1,x_2)\,1\big(|x_1-x_2|\le\epsilon\big),\qquad h_d(t)\equiv\begin{cases}1, & \text{if } d=1,\\ 1+\ln^+(1/t), & \text{if } d=2,\\ t^{1-d/2}, & \text{if } d=3.\end{cases}$$
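The $t^{1-d/2}$ singularity recorded in $h_d$ for $d=3$ comes from the Brownian Green kernel that $G^\alpha_N 1$ approximates: for $d=3$ and $\alpha=0$, $\int_0^\infty p_{2s}(x)\,ds = 1/(4\pi\sigma^2|x|)$, where $p_t$ is the density of the process with generator $(\sigma^2/2)\Delta$. A quick numerical confirmation ($\sigma=1$ and the quadrature grid are illustrative choices, not from the paper):

```python
import math

# Check that, for d = 3, alpha = 0, the kernel int_0^infty p_{2s}(x) ds
# equals 1 / (4 * pi * sigma^2 * |x|); p_t is the Gaussian density of the
# process with generator (sigma^2/2) * Laplacian. sigma = 1 here.

sigma = 1.0

def p(t, r, d=3):
    v = sigma * sigma * t
    return (2.0 * math.pi * v) ** (-d / 2.0) * math.exp(-r * r / (2.0 * v))

def green(r, y0=-20.0, y1=14.0, n=40000):
    # int_0^infty p_{2s}(r) ds with the substitution s = e^y (midpoint rule)
    h = (y1 - y0) / n
    total = 0.0
    for k in range(n):
        s = math.exp(y0 + (k + 0.5) * h)
        total += p(2.0 * s, r) * s * h
    return total

r = 0.5
print(green(r), 1.0 / (4.0 * math.pi * sigma**2 * r))
```

The logarithmic substitution resolves both the short-time peak and the heavy $s^{-3/2}$ tail of the integrand with a single grid.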

Let $(X^{1,N},X^{2,N})$ be as in (2.6) as usual. Recall
$$L_t^{2,N}(\phi)=2^{-d}\int_0^t\int_{\mathbb{R}^d}\phi(x)\,N^{d/2}X_s^{1,N}\big(B(x,N^{-1/2})\big)\,X_s^{2,N}(dx)\,ds,$$
$$L_t^{1,N}(\phi)=2^{-d}\int_0^t\int_{\mathbb{R}^d}\phi(x)\,N^{d/2}X_s^{2,N}\big(B(x,N^{-1/2})\big)\,X_s^{1,N}(dx)\,ds,$$
$$L_t^{N}(\phi)=2^{-d}\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}1\big(|x_1-x_2|\le N^{-1/2}\big)\,N^{d/2}\phi\big((x_1+x_2)/2\big)\,X_s^{1,N}(dx_1)\,X_s^{2,N}(dx_2)\,ds.$$
We introduce
(4.4)
$$\mathbb{X}_t^N=X_t^{1,N}\times X_t^{2,N},\quad\forall t\ge 0.$$
Then arguing as in Lemma 5.2 of Barlow, Evans, and Perkins (1991), where an Itô formula for a pair of interacting super-Brownian motions was derived, we can easily verify the following approximate Tanaka formula for $\phi:\mathbb{Z}_N\to\mathbb{R}$ bounded:
(4.5)
$$\mathbb{X}_t^N\big(G^\alpha_N\phi\big)=\mathbb{X}_0^N\big(G^\alpha_N\phi\big)-\gamma_1\int_0^t\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}G^\alpha_N\phi(x_1,x_2)\,X_s^{2,N}(dx_2)\,L^{1,N}(ds,dx_1)$$
$$\quad+\gamma_2\int_0^t\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}G^\alpha_N\phi(x_1,x_2)\,X_s^{1,N}(dx_1)\,L^{2,N}(ds,dx_2)+\alpha\int_0^t\mathbb{X}_s^N\big(G^\alpha_N\phi\big)\,ds$$
$$\quad+\int_0^t\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}G^\alpha_N\phi(x_1,x_2)\,\big(X_s^{1,N}(dx_1)\,M^{2,N}(ds,dx_2)+X_s^{2,N}(dx_2)\,M^{1,N}(ds,dx_1)\big)-L_t^N(\phi),$$

where $M^{i,N}$, $i=1,2$, are the martingale measures in $\mathcal{M}^{N,\gamma_1,\gamma_2}_{X_0^{1,N},X_0^{2,N}}$. Let $(X^1,X^2,A,A,A)\in C_{M_F^5}$ be an arbitrary limit point of $(X^{1,N},X^{2,N},L^{2,N},L^{1,N},L^N)$ (they exist by Proposition 3.1), and to simplify the notation we assume $(X^{1,N},X^{2,N},L^{2,N},L^{1,N},L^N)\Rightarrow(X^1,X^2,A,A,A)$ as $N\to\infty$. Moreover, throughout this section, by the Skorohod representation theorem, we may assume that
(4.6)
$$(X^{1,N},X^{2,N},L^{2,N},L^{1,N},L^N)\to(X^1,X^2,A,A,A),\ \text{in } D_{M_F^5},\quad P\text{-a.s.}$$
Recall that $p_t(\cdot)$ is the transition density of the Brownian motion $B$ with generator $\frac{\sigma^2}{2}\Delta$. Let $\Pi_x$ be the law of $B$ with $B_0=x$ and denote its semigroup by $P_t$. If $\phi:\mathbb{R}^d\to\mathbb{R}$ is Borel and bounded, define
$$G^\alpha\phi(x_1,x_2)\equiv\lim_{\epsilon\downarrow 0}\Pi_{x_1}\times\Pi_{x_2}\Big[\int_0^\infty e^{-\alpha s}p_\epsilon\big(B_s^1-B_s^2\big)\,\phi\Big(\frac{B_s^1+B_s^2}{2}\Big)\,ds\Big]=\int_0^\infty e^{-\alpha s}p_{2s}(x_1-x_2)\,P_{s/2}\phi\Big(\frac{x_1+x_2}{2}\Big)\,ds.$$
A change of variables shows this agrees with the definition of $G^\alpha\phi$ in Section 5 of Barlow, Evans and Perkins (1991) and so is finite (and the above limit exists) for all $x_1\ne x_2$, and for all $(x_1,x_2)$ if $d=1$. For $\phi\equiv 1$, $G^\alpha 1(x_1,x_2)$ is bounded by $c\big(1+\log^+(1/|x_1-x_2|)\big)$ if $d=2$ and by $c|x_1-x_2|^{-1}$ if $d=3$ (see (5.5)–(5.7) of Barlow, Evans and Perkins (1991)). In this section we intend to prove the following proposition.

Proposition 4.1 Let $(X^1,X^2,A)$ be an arbitrary limit point as described above. Then for $\phi\in C_b(\mathbb{R}^d)$,
(4.7)
$$\mathbb{X}_t\big(G^\alpha\phi\big)=\mathbb{X}_0\big(G^\alpha\phi\big)-\gamma_1\int_0^t\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_s^2(dx_2)\,A(ds,dx_1)+\gamma_2\int_0^t\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_s^1(dx_1)\,A(ds,dx_2)$$
$$\quad+\alpha\int_0^t\mathbb{X}_s\big(G^\alpha\phi\big)\,ds+\tilde M_t(\phi)-A_t(\phi),$$
where $\tilde M_t(\phi)$ is a continuous $\mathcal{F}_t^{X,A}$-local martingale.

To verify the proposition we will establish the convergence of all the terms in (4.5) through a series of lemmas. If $\mu\in M_F(\mathbb{Z}_N)$, $\mu*p^N(dx)$ denotes the convolution measure on $\mathbb{Z}_N$. The proof of the following lemma is trivial and hence is omitted.

Lemma 4.2 If $\mu\in M_F(\mathbb{Z}_N)$ then
$$\sup_x\mu*p^N\big(B(x,r)\big)\le\varrho_\delta(\mu)\,r^{(2\wedge d)-\delta},\quad\forall r\in[0,1].$$

Now let us formulate a number of helpful results whose proofs are deferred to Section 8.

Lemma 4.3 For $0<\epsilon<1$,
(4.8)
$$\sup_{x_1}\int_{\mathbb{Z}_N}\psi^\epsilon_N(x,x_1)\,\mu^N(dx)\le c\,\hat\varrho^N_\delta(\mu^N)\,\epsilon^{1-l_d-\tilde\delta},\quad\forall N\ge\epsilon^{-2}.$$

Lemma 4.4 If $0<\eta<2/7$, then for all $0<\epsilon<1/2$,
$$E\Big[\int_0^t\int_{\mathbb{Z}_N}\int_{\mathbb{Z}_N}\psi^\epsilon_N(x_1,x)\,\bar X_s^{2,N}(dx_1)\,\bar L^{1,N}(ds,dx)\,\Big|\,\bar X^{1,N}\Big]\le\bar R_N(t)\,\hat\varrho^N_\delta\big(\bar X_0^{2,N}\big)^2\,\epsilon^{\eta(1-l_d-3\tilde\delta)}\,t^{1-l_d-\tilde\delta},\quad\forall t>0,\ N\ge\epsilon^{-2}.$$

Define
(4.9)
$$q^N(x)=1_0(x)+p^N(x).$$

Lemma 4.5 If $0<\eta<2/7$, then for all $0<\epsilon<1/2$,
$$E\Big[\int_{\mathbb{R}^d}\Big(\int_{\mathbb{R}^d}\psi^\epsilon_N(x,x_1)\,\bar X_t^{2,N}(dx_1)\Big)\Big(\int_{\mathbb{R}^d}G^\alpha_N 1(x,x_2)\,\bar X_t^{2,N}(dx_2)\Big)\,\bar X_t^{1,N}*q^N(dx)\,\Big|\,\bar X^{1,N}\Big]$$
$$\le\bar R_N(t)\,\big[\hat\varrho^N_\delta\big(\bar X_0^{2,N}\big)^2+1\big]\,\epsilon^{\eta(1-l_d-3\tilde\delta)},\quad\forall t\ge 0,\ N\ge\epsilon^{-2}.$$

Now, for any $\hat\delta>0$ define
$$G^{\alpha,\hat\delta}_N\phi(x_1,x_2)\equiv\Pi^N_{x_1}\times\Pi^N_{x_2}\Big[\int_{\hat\delta}^\infty e^{-\alpha s}K_N\,p^N\big(B_s^{1,N}-B_s^{2,N}\big)\,\phi\big((B_s^{1,N}+B_s^{2,N})/2\big)\,ds\Big],$$
$$G^{\alpha,\hat\delta}\phi(x_1,x_2)\equiv\int_{\hat\delta}^\infty e^{-\alpha s}p_{2s}(x_1-x_2)\,P_{s/2}\phi\Big(\frac{x_1+x_2}{2}\Big)\,ds.$$
Unlike $G^\alpha\phi$, $G^{\alpha,\hat\delta}\phi$ is bounded on $\mathbb{R}^{2d}$ for bounded Borel $\phi:\mathbb{R}^d\to\mathbb{R}$.

Lemma 4.6 (a) For any $\phi\in C_b(\mathbb{R}^d)$,
(4.10)
$$G^\alpha_N\phi(\cdot,\cdot)\to G^\alpha\phi(\cdot,\cdot),\ \text{as } N\to\infty,$$
uniformly on compact subsets of $\mathbb{R}^d\times\mathbb{R}^d\setminus\{(x_1,x_2): x_1=x_2\}$.
(b) For any $\phi\in C_b(\mathbb{R}^d)$, $\hat\delta>0$,
(4.11)
$$G^{\alpha,\hat\delta}_N\phi(\cdot,\cdot)\to G^{\alpha,\hat\delta}\phi(\cdot,\cdot),\ \text{as } N\to\infty,$$

uniformly on compact subsets of $\mathbb{R}^d\times\mathbb{R}^d$.

Proof: Let $\epsilon_0\in(0,1)$. The key step will be to show
(4.12)
$$\lim_{\epsilon\downarrow 0}\,\sup_{N,\,|x_1-x_2|\ge\epsilon_0}N^{d/2}\int_0^\epsilon\Pi^N_{x_1}\times\Pi^N_{x_2}\big(|B_s^{1,N}-B_s^{2,N}|\le N^{-1/2}\big)\,ds=0.$$
Once (4.12) is established we see that the contribution to $G^\alpha_N\phi$ from times $s\le\epsilon$ is small uniformly for $|x_1-x_2|\ge\epsilon_0$ and $N$. A straightforward application of the continuous time local central limit theorem (it is easy to check that (5.2) in Durrett (2004) works in continuous time) and Donsker's theorem shows that, uniformly in $x_1,x_2$ in compacts,
$$\lim_{N\to\infty}G^{\alpha,\epsilon}_N\phi(x_1,x_2)=\lim_{N\to\infty}\int_\epsilon^\infty e^{-\alpha s}\,2^{-d}N^{d/2}\,\Pi^N_{x_1}\times\Pi^N_{x_2}\Big[1\big(|B_s^{1,N}-B_s^{2,N}|\le N^{-1/2}\big)\phi\Big(\frac{B_s^{1,N}+B_s^{2,N}}{2}\Big)\Big]\,ds$$
$$=\int_\epsilon^\infty e^{-\alpha s}p_{2s}(x_1-x_2)\,P_{s/2}\phi\Big(\frac{x_1+x_2}{2}\Big)\,ds=G^{\alpha,\epsilon}\phi(x_1,x_2).$$
This immediately implies (b), and, together with (4.12), also gives (a).

It remains to establish (4.12). Assume $|x|\equiv|x_1-x_2|\ge\epsilon_0$ and let $\{S_j\}$ be as in Lemma 7.1. Then for $N^{-1/2}<\epsilon_0$, use Lemma 7.1 to obtain
$$N^{d/2}\int_0^\epsilon\Pi^N_{x_1}\times\Pi^N_{x_2}\big(|B_s^{1,N}-B_s^{2,N}|\le N^{-1/2}\big)\,ds=\frac{N^{d/2-1}}{2}\int_0^{2N\epsilon}\sum_{j=1}^\infty e^{-u}\frac{u^j}{j!}\,P\Big(\frac{S_j}{\sqrt j}\in\sqrt{\frac{N}{j}}\,x+[-j^{-1/2},j^{-1/2}]^d\Big)\,du$$
(4.13)
$$\le N^{d/2-1}\,C\sum_{j=1}^\infty\exp\big\{-c\big((N/j)|x|^2\wedge\sqrt N|x|\big)\big\}\,j^{-d/2}\int_0^{2N\epsilon}e^{-u}\frac{u^j}{j!}\,du.$$
Now use Stirling's formula to conclude that for $j\ge 1$ and $\epsilon_0'=2e\epsilon$,
$$\int_0^{2N\epsilon}e^{-u}\frac{u^j}{j!}\,du\le\frac{(2N\epsilon)^j}{j!}\le\frac{c_0}{\sqrt j}\Big(\frac{e2N\epsilon}{j}\Big)^j\le c_0\Big(\frac{N\epsilon_0'}{j}\Big)^j,$$
and so conclude
$$\int_0^{2N\epsilon}e^{-u}\frac{u^j}{j!}\,du\le c_0\min\Big(1,\Big(\frac{N\epsilon_0'}{j}\Big)^j\Big).$$
Use this to bound (4.13) by
$$C'N^{d/2-1}\Big[e^{-c\sqrt N\epsilon_0}\sum_{1\le j\le\sqrt N\epsilon_0}j^{-d/2}+\sum_{j\ge 2\epsilon_0'N}2^{-j}\Big]+C'N^{-1}\sum_{\sqrt N\epsilon_0<j<2\epsilon_0'N}\big((j+1)/N\big)^{-d/2}\exp\big\{-c\epsilon_0^2/(j/N)\big\},$$
and it is elementary to check that this bound is small, uniformly in $N$ and $|x_1-x_2|\ge\epsilon_0$, once $\epsilon$ (and hence $\epsilon_0'=2e\epsilon$) is small. This gives (4.12).

Lemma 4.7 For any $\phi\in C_{b,+}(\mathbb{R}^d)$ and $T>0$,
(4.14)
$$\sup_{t\le T}\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)<\infty,\quad P\text{-a.s.},$$
(4.15)
$$\sup_{t\le T}\Big|\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha_N\phi(x_1,x_2)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)-\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)\Big|\to 0$$
in probability as $N\to\infty$, and
(4.16)
$$t\mapsto\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)\ \text{is continuous on } [0,T],\quad P\text{-a.s.}$$

Proof We first show that for any $\delta_1,\delta_2>0$, there exists $\epsilon_*>0$ such that
(4.17)
$$P\Big(\sup_{t\le T}\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha_N\phi(x_1,x_2)\,1\big(|x_1-x_2|\le 2\epsilon\big)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)>\delta_1\Big)\le\delta_2,$$

for any $\epsilon\le\epsilon_*$ and $N\ge\epsilon_*^{-2}$.

As in the previous section, for $1/2>\epsilon>0$, $f_\epsilon\in[0,1]$ is a $C^\infty$ function such that
(4.18)
$$f_\epsilon(x)=\begin{cases}1, & \text{if } |x|\le\epsilon,\\ 0, & \text{if } |x|>2\epsilon.\end{cases}$$
By Lemma 4.6(b) and the convergence $(X^{1,N},X^{2,N})\to(X^1,X^2)$ in $D_{M_F^2}$, $P$-a.s., with $(X^1,X^2)\in C_{M_F^2}$, we get
$$P\Big(\sup_{t\le T}\int_{\mathbb{R}^d\times\mathbb{R}^d}G^{\alpha,\hat\delta}\phi(x_1,x_2)\,f_\epsilon(x_1-x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)>\delta_1\Big)$$
$$\le\lim_{N\to\infty}P\Big(\sup_{t\le T}\int_{\mathbb{R}^d\times\mathbb{R}^d}G^{\alpha,\hat\delta}_N\phi(x_1,x_2)\,f_\epsilon(x_1-x_2)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)>\delta_1\Big)$$
$$\le\lim_{N\to\infty}P\Big(\sup_{t\le T}\int_{\mathbb{R}^d\times\mathbb{R}^d}G^{\alpha}_N\phi(x_1,x_2)\,f_\epsilon(x_1-x_2)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)>\delta_1\Big)\le\delta_2.$$
Since $\hat\delta>0$ was arbitrary we can take $\hat\delta\downarrow 0$ in the above to get
(4.19)
$$P\Big(\sup_{t\le T}\int_{\mathbb{R}^d\times\mathbb{R}^d}G^{\alpha}\phi(x_1,x_2)\,f_\epsilon(x_1-x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)>\delta_1\Big)\le\delta_2.$$

Now
$$P\Big(\sup_{t\le T}\Big|\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha_N\phi(x_1,x_2)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)-\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)\Big|\ge 3\delta_1\Big)$$
$$\le P\Big(\sup_{t\le T}\Big|\int G^\alpha_N\phi(x_1,x_2)\big(1-f_\epsilon(x_1-x_2)\big)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)-\int G^\alpha\phi(x_1,x_2)\big(1-f_\epsilon(x_1-x_2)\big)\,X_t^1(dx_1)\,X_t^2(dx_2)\Big|\ge\delta_1\Big)$$
$$\quad+P\Big(\sup_{t\le T}\int G^\alpha_N\phi(x_1,x_2)\,f_\epsilon(x_1-x_2)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)\ge\delta_1\Big)$$
$$\quad+P\Big(\sup_{t\le T}\int G^\alpha\phi(x_1,x_2)\,f_\epsilon(x_1-x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)\ge\delta_1\Big).$$
Now let $N\to\infty$. Apply Lemma 4.6(a), the convergence of $X^{1,N},X^{2,N}$, and (4.17), (4.19) to get
$$\limsup_{N\to\infty}P\Big(\sup_{t\le T}\Big|\int G^\alpha_N\phi(x_1,x_2)\,X_t^{1,N}(dx_1)\,X_t^{2,N}(dx_2)-\int G^\alpha\phi(x_1,x_2)\,X_t^1(dx_1)\,X_t^2(dx_2)\Big|\ge 3\delta_1\Big)\le 2\delta_2,$$
and since $\delta_1,\delta_2>0$ were arbitrary, (4.15) follows. Now (4.14) follows immediately from (4.15) and (4.17). Weak continuity of $t\to X_t^i$ and the fact that $(1-f_\epsilon(x_1-x_2))G^\alpha\phi(x_1,x_2)$ is bounded and continuous imply the continuity of
$$t\to\int_{\mathbb{R}^{2d}}G^\alpha\phi(x_1,x_2)\big(1-f_\epsilon(x_1-x_2)\big)\,X_t^1(dx_1)\,X_t^2(dx_2)$$
for any $\epsilon>0$. (4.16) now follows from (4.19).

Lemma 4.8 For any $\phi\in C_{b,+}(\mathbb{R}^d)$ and $T>0$,
(4.20)
$$\int_0^T\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_s^2(dx_2)\,A(ds,dx_1)<\infty,\quad P\text{-a.s.},$$
and
(4.21)
$$\sup_{t\le T}\Big|\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha_N\phi(x_1,x_2)\,X_s^{2,N}(dx_2)\,L^{1,N}(ds,dx_1)-\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}G^\alpha\phi(x_1,x_2)\,X_s^2(dx_2)\,A(ds,dx_1)\Big|\to 0,$$
in probability, as $N\to\infty$.

Rd × Rd

0

for any  ≤ ∗ , N ≥ −2 ∗ . Then, arguing as in the derivation of (4.19) in Lemma 4.7, we get T

Z (4.23)

Z

α

P Rd × Rd

0

G φ(x1 , x2 )f (x1 −

x2 ) Xs2 (dx2 ) A(ds, dx1 )

 > δ1

≤ δ2 ,

and hence, again as in Lemma 4.7, Z t Z lim sup P sup GαN φ(x1 , x2 ) Xs2,N (dx2 ) L1,N (ds, dx1 ) d d N →∞ t≤T 0 R ×R  Z tZ α 2 − G φ(x1 , x2 ) Xs (dx2 ) A(ds, dx1 ) ≥ 3δ1 0

Rd × Rd

≤ 2δ2 . Since δ1 , δ2 were arbitrary we are done, as in Lemma 4.7.

Lemma 4.9 For any φ ∈ Cb,+ (Rd ), T > 0, Z

T

Z

(4.24) 0

Rd × Rd

Gα φ(x1 , x2 ) Xs1 (dx1 ) A(ds, dx2 ) < ∞, P − a.s.,

and (4.25)

Z t Z Z sup GαN φ(x1 , x2 )Xs1,N (dx1 ) L2,N (ds, dx2 ) t≤T 0 Rd Rd Z tZ α 1 G φ(x1 , x2 ) Xs (dx1 ) A(ds, dx2 ) → 0, − 0

Rd × Rd

in probability, as N → ∞.

30

Proof

Let f ∈ [0, 1] be a continuous function on Rd satisfying (4.18). Then Z t Z Z GαN φ(x1 , x2 )Xs1,N (dx1 ) L2,N (ds, dx2 ) sup t≤T

Rd

0

Rd

Z tZ −

G Rd × Rd

0

α



φ(x1 , x2 ) Xs1 (dx1 ) A(ds, dx2 )

Z t Z Z GαN φ(x1 , x2 )(1 − f (x1 − x2 ))Xs1,N (dx1 ) L2,N (ds, dx2 ) ≤ sup t≤T 0 Rd Rd Z tZ α 1 − G φ(x1 , x2 )(1 − f (x1 − x2 )) Xs (dx1 ) A(ds, dx2 ) Z

0 Rd × Rd Z T Z

+ Rd

0

Z (4.26)

T

Rd

Z

+ ≡ I

0 1,N,

Rd × Rd 2,N,

+I

GαN φ(x1 , x2 )f (x1 − x2 )Xs1,N (dx1 ) L2,N (ds, dx2 ) Gα φ(x1 , x2 )f (x1 − x2 ) Xs1 (dx1 ) A(ds, dx2 )

+ I 3,

With Lemma 4.6(a) at hand, it is easy to check that for any compact K ⊂ Rd |GαN φ(x1 , x2 )(1 − f (x1 − x2 )) − Gα φ(x1 , x2 )(1 − f (x1 − x2 ))| → 0.

sup (x1 x2 )∈K×K

Therefore by the convergence (X 1,N , L2,N ) → (X 1 , A), in DMF 2 , P − a.s. with (X 1 , A) ∈ CMF 2 and uniform boundedness of Gα φ away from the diagonal, we easily get I 1,N, → 0, P − a.s., ∀ > 0.

(4.27)

By Lemma 4.3, Proposition 2.2, and Proposition 2.4 we get that for all N > −2 ) ( Z 2,N α 1,N 2,N, ¯ (1) sup I ≤ L G φ(x1 , x2 )1(|x1 − z| ≤ 2)X (dx1 ) T

x2 ,s≤T

Rd ˜

¯ 2,N (1)RN 1−ld −δ ≤ cL T =

N

s

recall (2.20)

¯ 2,N (1)RN 1−ld −δ˜. cL T

Hence by Lemma 3.11, for any δ1 , δ2 > 0 there exists ∗ such that  (4.28) P I 2,N, > δ1 ≤ δ2 , ∀N ≥ −2 ∗ ,  ≤ ∗ . Arguing as in the derivation of (4.19) in Lemma 4.7, we get  (4.29) P I 3, > δ1 ≤ δ2 , ∀ ≤ ∗ . Now combine (4.27), (4.28), (4.29) to complete the proof.

31

Lemma 4.10 For any φ ∈ Cb,+ (Rd ) and T > 0, 2 Z T Z Z α 2 (4.30) G φ(x1 , x2 ) Xs (dx2 ) Xs1 (dx1 ) ds < ∞, P − a.s., 0

Rd

Rd

and for pˆN (z) = pN (z) or pˆN (z) = 10 (z) we have Z Z Z 2 t  α 2,N sup GN φ(x1 , x2 ) Xs (dx2 ) (4.31) Xs1,N ∗ pˆN (dx1 ) ds t≤T 0 Rd Rd 2 Z t Z Z Gα φ(x1 , x2 ) Xs2 (dx2 ) Xs1 (dx1 ) ds → 0, − d d 0 R R in probability, as N → ∞. Proof

Let f be continuous function of Rd given by (4.18). Recall that  ψN (x1 , x2 ) = GαN 1(x1 , x2 )1(|x1 − x2 | ≤ )

and let ψ  (x1 , x2 ) = Gα 1(x1 , x2 )1(|x1 − x2 | ≤ ). Then by simple algebra Z Z Z 2 t  α 2,N sup GN φ(x1 , x2 ) Xs (dx2 ) Xs1,N ∗ pˆN (dx1 ) ds d d t≤T 0 R R 2 Z t Z Z − Gα φ(x1 , x2 ) Xs2 (dx2 ) Xs1 (dx1 ) ds 0 Rd Rd Z Z Z 2 t  α 2,N ≤ sup GN φ(x1 , x2 )(1 − f (x1 − x2 )) Xs (dx2 ) Xs1,N ∗ pˆN (dx1 ) ds t≤T 0 Rd Rd  2 Z tZ Z − Gα φ(x1 , x2 )(1 − f (x1 − x2 )) Xs2 (dx2 ) Xs1 (dx1 ) ds d d 0 R R Z TZ   + 3 kφk2∞ ψN (x1 , x2 )Xs2,N (dx2 )GαN 1(x1 , x02 ) Xs2,N (dx02 ) Xs1,N ∗ pˆN (dx1 ) ds 0 R3d Z TZ + 3 kφk2∞ ψ  (x1 , x2 )Xs2 (dx2 )Gα 1(x1 , x02 ) Xs2 (dx02 ) Xs1 (dx1 ) ds ≡ I 1,N, + I

0 2,N,

R3d 3,

+I

.

Therefore, by Lemma 4.6(a) and the convergence $(X^{1,N},X^{2,N})\to(X^1,X^2)$ in $D_{M_F^2}$, $P$-a.s., with $(X^1,X^2)\in C_{M_F^2}$, as in the previous proof we get
\[
(4.32)\qquad I^{1,N,\varepsilon}\to 0,\quad P\text{-a.s., for all }\varepsilon>0.
\]
By Lemma 4.5, Assumption 2.5 and Proposition 2.2, for any $\delta_1,\delta_2>0$ there exists $\varepsilon_*$ such that
\[
(4.33)\qquad P\big(I^{2,N,\varepsilon}>\delta_1\big)\le\delta_2\quad\text{for all }N\ge\varepsilon_*^{-2},\ \varepsilon\le\varepsilon_*.
\]
Then, arguing as in the derivation of (4.19) in Lemma 4.7, we get
\[
(4.34)\qquad P\big(I^{3,\varepsilon}>\delta_1\big)\le\delta_2\quad\text{for all }\varepsilon\le\varepsilon_*.
\]
Now combine (4.32), (4.33) and (4.34) to complete the proof.

Lemma 4.11 For any $\phi\in C_{b,+}(\mathbb R^d)$ and $T>0$,
\[
(4.35)\qquad \int_0^T\int_{\mathbb R^d}\Big(\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^1_s(dx_1)\Big)^2 X^2_s(dx_2)\,ds<\infty,\quad P\text{-a.s.},
\]
and for $\hat p_N(z)=p_N(z)$ or $\hat p_N(z)=\mathbf 1_0(z)$,
\[
(4.36)\qquad \sup_{t\le T}\Big|\int_0^t\int_{\mathbb R^d}\Big(\int_{\mathbb R^d}G^\alpha_N\phi(x_1,x_2)\,X^{1,N}_s(dx_1)\Big)^2 X^{2,N}_s*\hat p_N(dx_2)\,ds-\int_0^t\int_{\mathbb R^d}\Big(\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^1_s(dx_1)\Big)^2 X^2_s(dx_2)\,ds\Big|\to 0,
\]
in probability, as $N\to\infty$.

Proof The proof goes along the same lines as that of Lemma 4.10, the only difference being that we use Lemmas 4.3 and 4.2 instead of Lemma 4.5.

Before formulating the next lemma, let us introduce the following notation for the martingale terms in the approximate Tanaka formula (4.5):
\[
(4.37)\qquad \tilde M^N_t(\phi)\equiv\sum_{i=1}^2\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha_N\phi(x_1,x_2)\,X^{3-i,N}_s(dx_1)\,M^{i,N}(ds,dx_2),\qquad \phi\in C_b(\mathbb R^d).
\]

Lemma 4.12 For any $\phi\in C_b(\mathbb R^d)$ there is a continuous $\mathcal F^{X,A}_t$-local martingale $\tilde M_t(\phi)$ such that for any $T>0$,
\[
\sup_{t\le T}\big|\tilde M^N_t(\phi)-\tilde M_t(\phi)\big|\to 0,
\]
in probability, as $N\to\infty$.

Proof Lemmas 4.7, 4.8 and 4.9 show that all the terms in (4.5), except perhaps $\tilde M^N_t(\phi)$, converge in probability, uniformly for $t$ in compact time sets, to an a.s. continuous limit as $N\to\infty$. Hence there is an a.s. continuous $\mathcal F^{X,A}_t$-adapted process $\tilde M_t(\phi)$ such that $\sup_{t\le T}|\tilde M^N_t(\phi)-\tilde M_t(\phi)|\to 0$ in probability as $N\to\infty$.


(4.2) and Lemma 7.2 below imply
\[
|G^\alpha_N\phi(x_1,x_2)|\le\|\phi\|_\infty\,2^{-d}\int_0^\infty e^{-\alpha s}N^{d/2}\,P\big(B^N_{2s}\in x_2-x_1+[-N^{-1/2},N^{-1/2}]^d\big)\,ds
\le c\|\phi\|_\infty\int_0^\infty e^{-\alpha s}\Big(\frac{N}{Ns+1}\Big)^{d/2}ds
\le c\|\phi\|_\infty N^{1/2}.
\]
Therefore
\[
\sup_s|\Delta\tilde M^N_s(\phi)|\le N^{-1}\sup_{x_2}\sum_{i=1}^2\int G^\alpha_N\phi(x_1,x_2)\,X^{i,N}_s(dx_1)\le c\|\phi\|_\infty N^{-1/2}\sum_{i=1}^2 X^{i,N}_s(1).
\]
In view of $|\Delta X^{i,N}_s(1)|\le N^{-1}$ and Proposition 3.1, we see that if
\[
T^N_n=\inf\Big\{s:\ |\tilde M^N_s(\phi)|+\sum_{i=1}^2 X^{i,N}_s(1)\ge n\Big\},
\]
then
\[
(4.38)\qquad |\tilde M^N_{s\wedge T^N_n}|\ \text{is uniformly bounded, and as }n\to\infty,\ T^N_n\ \text{is large in probability, uniformly in }N.
\]

Therefore $\tilde M^N_{\cdot\wedge T^N_n}$ is a uniformly bounded continuous $\mathcal F^{X^N}_t$-local martingale, and from this and (4.38) it is easy and standard to check that $\tilde M_t(\phi)$ is a continuous $\mathcal F^{X,A}_t$-local martingale.

Proof of Proposition 4.1 Immediate from the approximate Tanaka formula (4.5), Lemmas 4.7, 4.8, 4.9 and 4.12, and the convergence of $L^N$ to $A$.

5 Proofs of Theorems 1.1, 1.2 and 1.5

Lemma 5.1 Let $(X^1,X^2,A)$ be any limit point of $(X^{1,N},X^{2,N},L^{2,N})$. Then the collision local time $L(X^1,X^2)$ exists and for any $\phi\in C_b(\mathbb R^d)$,
\begin{align*}
(5.1)\qquad X_t(G^\alpha\phi)&=X_0(G^\alpha\phi)-\gamma_1\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^2_s(dx_2)\,A(ds,dx_1)\\
&\quad+\gamma_2\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^1_s(dx_1)\,A(ds,dx_2)\\
&\quad+\alpha\int_0^t X_s(G^\alpha\phi)\,ds-L_t(X^1,X^2)(\phi)\\
&\quad+\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\big(X^1_s(dx_1)\,M^2(ds,dx_2)+X^2_s(dx_1)\,M^1(ds,dx_2)\big),
\end{align*}
where the stochastic integral term is a continuous local martingale with quadratic variation
\[
\int_0^t\int_{\mathbb R^d}\Big(\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^2_s(dx_1)\Big)^2 X^1_s(dx_2)\,ds+\int_0^t\int_{\mathbb R^d}\Big(\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^1_s(dx_1)\Big)^2 X^2_s(dx_2)\,ds.
\]
Proof

Define
\[
G^\alpha_\varepsilon\phi(x_1,x_2)=\Pi_{x_1}\times\Pi_{x_2}\Big(\int_0^\infty e^{-\alpha s}\,p_\varepsilon(B^1_s-B^2_s)\,\phi\Big(\frac{B^1_s+B^2_s}{2}\Big)\,ds\Big).
\]
As in Section 5 of Barlow, Evans and Perkins (1991), $M^{\gamma_1,\gamma_2,A}_{X^1_0,X^2_0}$ implies
\begin{align*}
(5.2)\qquad X_t(G^\alpha_\varepsilon\phi)&=X_0(G^\alpha_\varepsilon\phi)-\gamma_1\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha_\varepsilon\phi(x_1,x_2)\,X^2_s(dx_2)\,A(ds,dx_1)\\
&\quad+\gamma_2\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha_\varepsilon\phi(x_1,x_2)\,X^1_s(dx_1)\,A(ds,dx_2)+\alpha\int_0^t X_s(G^\alpha_\varepsilon\phi)\,ds\\
&\quad+\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha_\varepsilon\phi(x_1,x_2)\big(X^1_s(dx_1)\,M^2(ds,dx_2)+X^2_s(dx_1)\,M^1(ds,dx_2)\big)\\
&\quad-\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}p_\varepsilon(x_1-x_2)\,\phi\big((x_1+x_2)/2\big)\,X^1_s(dx_1)\,X^2_s(dx_2)\,ds.
\end{align*}
Now apply Lemmas 4.7, 4.8, 4.9, 4.10 and 4.11 (trivially dropping the non-negativity hypothesis on $\phi$), and argue as in Section 5 of Barlow, Evans and Perkins (1991), essentially using dominated convergence, to show that all the terms in (5.2) (except possibly the last one) converge in probability to the corresponding terms of (5.1), as $\varepsilon\downarrow 0$. Hence the last term in (5.2), $L^\varepsilon_t(X^1,X^2)(\phi)$, converges in probability to, say, $L_t(\phi)$ for each $\phi\in C_b(\mathbb R^d)$. This gives (5.1) with $L_t(\phi)$ in place of $L_t(X^1,X^2)(\phi)$. As each term in (5.1) is a.s. continuous in $t$ (use (4.16) for the left-hand side), the same is true of $t\to L_t(\phi)$. This implies uniform convergence in probability for $t\le T$ of $L^\varepsilon_t(X^1,X^2)(\phi)$ to $L_t(\phi)$ for each $\phi\in C_b(\mathbb R^d)$. It is now easy to construct $L$ as a random non-decreasing continuous $M_F$-valued process, using a countable convergence determining class, and hence we see that by definition $L_t=L_t(X^1,X^2)$.

Proof of Theorem 1.1 In view of Proposition 3.1, it only remains to show that $L_t(X^1,X^2)$ exists and equals $A_t$. This, however, now follows from Lemma 5.1, Proposition 4.1, and the uniqueness of the decomposition of the continuous semimartingale $X_t(G^\alpha\phi)$.

Remark 5.2 Since $A_t=L_t(X^1,X^2)$, Lemma 5.1 immediately gives us the following form of Tanaka's formula for $(X^1,X^2)$:
\begin{align*}
X_t(G^\alpha\phi)&=X_0(G^\alpha\phi)-\gamma_1\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^2_s(dx_2)\,L(ds,dx_1)\\
&\quad+\gamma_2\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^1_s(dx_1)\,L(ds,dx_2)\\
&\quad+\alpha\int_0^t X_s(G^\alpha\phi)\,ds-L_t(X^1,X^2)(\phi)\\
&\quad+\int_0^t\int_{\mathbb R^d}\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\big(X^1_s(dx_1)\,M^2(ds,dx_2)+X^2_s(dx_1)\,M^1(ds,dx_2)\big),
\end{align*}
where the stochastic integral term is a continuous local martingale with quadratic variation
\[
\int_0^t\int_{\mathbb R^d}\Big(\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^2_s(dx_1)\Big)^2 X^1_s(dx_2)\,ds+\int_0^t\int_{\mathbb R^d}\Big(\int_{\mathbb R^d}G^\alpha\phi(x_1,x_2)\,X^1_s(dx_1)\Big)^2 X^2_s(dx_2)\,ds.
\]

Proof of Theorem 1.2 As mentioned in the Introduction, the case $\gamma_2\le 0$ is proved in Theorem 4.9 of Evans and Perkins (1994), and so we assume $\gamma_2>0$. Let $X^i_0$ satisfy UB, $i=1,2$, and assume $P$ is a law on $C_{M_F^2}$ under which the canonical variables $(X^1,X^2)$ satisfy $M^{0,\gamma_2}_{X^1_0,X^2_0}$. Then $X^1$ is a super-Brownian motion with branching rate 2, variance parameter $\sigma^2$, and law $P^{X^1}$, say. By Theorem 4.7 of Barlow, Evans and Perkins (1991),
\[
(5.3)\qquad H_\delta\equiv\sup_{t\ge 0}\rho_\delta(X^1_t)<\infty\quad\text{for all }\delta>0,\ P^{X^1}\text{-a.s.}
\]
Let $q_\eta:\mathbb R_+\to\mathbb R_+$ be a $C^\infty$ function with support in $[0,\eta]$ such that $\int q_\eta(u)\,du=1$. For $\varepsilon>0$ we will choose an appropriate $\eta=\eta(\varepsilon)\le\varepsilon$ below, and so may define
\[
h_\varepsilon(X^1,s,x)=\gamma_2\int_0^\infty q_\eta(u-s)\,p_\varepsilon*X^1_u(x)\,du\equiv p_\varepsilon*X^{\varepsilon,1}_s(x).
\]
Then (5.3) implies
\[
(5.4)\qquad \sup_{t\ge 0,\,\varepsilon>0}\rho_\delta(X^{\varepsilon,1}_t)\le\gamma_2 H_\delta<\infty\quad\text{for all }\delta>0\ \text{a.s.}
\]
Let $E_{r,x}$ denote expectation with respect to a Brownian motion $B$ beginning at $x$ at time $r$, and let $P_{r,t}f(x)=E_{r,x}(f(B_t))$ for $r\le t$. It is understood that $B$ is independent of $(X^1,X^2)$. Define
\[
\ell^\varepsilon_t(B,X^1)=\int_0^t h_\varepsilon(X^1,s,B_s)\,ds,
\]
where the integrand is understood to be $0$ for $s<r$ under $P_{r,x}$. The additional smoothing in time in our definition of $h_\varepsilon$ does force some minor changes, but it is easy enough to modify the arguments in Theorems 4.1 and 4.7 of Evans and Perkins (1994), using (5.4), to see there is a Borel map $\ell:C_{\mathbb R^d}\times C_{M_F}\to C_{\mathbb R_+}$ such that
\[
(5.5)\qquad \lim_{\varepsilon\downarrow 0}\ \sup_{r\ge 0,\,x\in\mathbb R^d}E_{r,x}\Big(\sup_{t\ge r}|\ell^\varepsilon_t(B,X^1)-\ell_t(B,X^1)|^2\Big)=0\quad P^{X^1}\text{-a.s.},
\]
and for a.a. $X^1$, $t\to\ell_t(B,X^1)$ is an increasing continuous additive functional of $B$. It is easy to use (5.4) and (5.5) to see that for a.a. $X^1$, $\ell_t$ is an admissible continuous additive functional in the sense of Dynkin and Kuznetsov (see $(K_1)$ in Kuznetsov (1994)). Let
\[
C_\ell=\{\phi\in C_b(\mathbb R^d):\ \phi\ \text{has a finite limit at}\ \infty\},
\]

and let $C_\ell^+$ be the set of non-negative functions in $C_\ell$. Let $\phi:C_{M_F}\to C_\ell^+$ be Borel and let $V_{r,t}=V^{X^1,\phi}_{r,t}$ be the unique continuous $C_\ell^+$-valued solution of
\[
(5.6)\qquad V_{r,t}(x)=P_{r,t}\phi(x)+E_{r,x}\Big(\int_r^t V_{s,t}(B_s)\,\ell(B,X^1)(ds)-\int_r^t V_{s,t}(B_s)^2\,ds\Big),\qquad r\le t.
\]

The existence and uniqueness of such solutions is implicit in Kuznetsov (1994) and may be shown by making minor modifications to classical fixed point arguments (e.g. in Theorems 6.1.2 and 6.1.4 of Pazy (1983)), using the $L^2$ bounds on $\ell_t$ from (5.5) and the negative quadratic term in (5.6) to see that explosions cannot occur. The construction shows $V^{X^1}_{r,t}$ is Borel in $X^1$; we will not comment on such measurability issues in what follows. Theorems 1 and 2 of Kuznetsov (1994) and the aforementioned a.s. admissibility of $\ell(B,X^1)$ give the existence of a unique right continuous measure-valued Markov process $X$ such that
\[
(5.7)\qquad E_{X_0}\big(e^{-X_t(\phi)}\big)=e^{-X_0(V_{0,t}\phi)},\qquad \phi\in C_\ell^+.
\]
If $P_{X_0|X^1}$ is the associated law on path space, we will show
\[
(5.8)\qquad P(X^2\in\cdot\,|X^1)=P_{X^2_0|X^1}(\cdot),
\]
and hence establish uniqueness of solutions to $M^{0,\gamma_2}_{X^1_0,X^2_0}$. The proof of the following lemma is similar to its discrete counterpart, Proposition 3.5(b), and so is omitted ((5.4) is used in place of Proposition 2.4).
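The fixed-point step behind existence for equations of the form (5.6) can be illustrated on a deterministic caricature. The sketch below (an illustration under simplifying assumptions, not the paper's construction) drops the Brownian expectation and the additive functional, keeping only the scalar integral equation $V(r)=\phi+\int_r^t(aV(s)-V(s)^2)\,ds$, where the constant $a$ is a hypothetical stand-in for the local-time density. Its Picard iterates converge to the explicit Bernoulli-ODE solution, and the negative quadratic term keeps the iterates bounded:

```python
import numpy as np

# Deterministic caricature of the fixed-point step behind (5.6):
#   V(r) = phi + \int_r^t ( a*V(s) - V(s)^2 ) ds,   0 <= r <= t.
# `a` is a hypothetical constant replacing the local-time term.
a, phi, t = 0.5, 0.3, 1.0
r = np.linspace(0.0, t, 2001)

def picard_step(V):
    g = a * V - V**2                     # integrand on the grid
    # \int_r^t g(s) ds = total integral minus \int_0^r, via trapezoid rule
    cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(r))))
    return phi + (cum[-1] - cum)

V = np.full_like(r, phi)                 # start iteration at the constant phi
for _ in range(60):
    V = picard_step(V)

# Closed-form solution of V' = -a V + V^2 with V(t) = phi (Bernoulli ODE)
exact = 1.0 / (1.0 / a + (1.0 / phi - 1.0 / a) * np.exp(a * (r - t)))
print(np.max(np.abs(V - exact)))         # small: Picard iterates converge
```

On a short time horizon the integral operator is a sup-norm contraction, which is the mechanism the classical fixed-point theorems exploit.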

Lemma 5.3
\[
\sup_{\varepsilon>0}\ \sup_{r\le T,\,x\in\mathbb R^d}E_{r,x}\big(e^{\lambda\ell^\varepsilon_T}\big)<\infty\quad\text{for all }T,\lambda>0,\ P^{X^1}\text{-a.s.}
\]

Let $P^{X^1}_{r,t}f(x)=E_{r,x}\big(f(B_t)e^{\ell_t}\big)$ and $P^{\varepsilon,X^1}_{r,t}f(x)=E_{r,x}\big(f(B_t)e^{\ell^\varepsilon_t}\big)$. By Lemma 5.3 these are well-defined inhomogeneous semigroups for bounded Borel functions $f:\mathbb R^d\to\mathbb R$. It follows from (5.5), Lemma 5.3, and a dominated convergence argument that for each $t>0$ and bounded Borel $f$,
\[
(5.9)\qquad \sup_{r\le t,\,x\in\mathbb R^d}\big|P^{\varepsilon,X^1}_{r,t}f(x)-P^{X^1}_{r,t}f(x)\big|\le\|f\|_\infty\sup_{r\le t,\,x\in\mathbb R^d}E_{r,x}\big(|e^{\ell^\varepsilon_t}-e^{\ell_t}|\big)\to 0\ \text{as }\varepsilon\downarrow 0,\quad P^{X^1}\text{-a.s.}
\]

Let $D(\Delta/2)$ be the domain of the generator of $B$ acting on the Banach space $C_\ell$, and let $D(\Delta/2)^+$ be the non-negative functions in this domain. Assume now that $\phi:C_{M_F}\to D(\Delta/2)^+$ is Borel. Let $V^\varepsilon_{r,t}=V^{\varepsilon,X^1}_{r,t}$ be the unique continuous $C_\ell^+$-valued solution of
\[
(5.10)\qquad V^\varepsilon_{r,t}(x)=P_{r,t}\phi(x)+E_{r,x}\Big(\int_r^t V^\varepsilon_{s,t}(B_s)\,\ell^\varepsilon(B,X^1)(ds)-\int_r^t V^\varepsilon_{s,t}(B_s)^2\,ds\Big),\qquad r\le t.
\]
We claim $V^\varepsilon_{r,t}$ also satisfies
\[
(5.11)\qquad V^\varepsilon_{r,t}(x)=P^{\varepsilon,X^1}_{r,t}\phi(x)-\int_r^t P^{\varepsilon,X^1}_{r,s}\big((V^\varepsilon_{s,t})^2\big)(x)\,ds.
\]

Theorem 6.1.5 of Pazy (1983) shows that $V^\varepsilon_{r,t}\in D(\Delta/2)$ for $r\le t$, is continuously differentiable in $r<t$ as a $C_\ell$-valued map, and satisfies
\[
(5.12)\qquad \frac{\partial V^\varepsilon_{r,t}}{\partial r}(x)=-\Big(\frac{\sigma^2\Delta}{2}+h_\varepsilon(r,x)\Big)V^\varepsilon_{r,t}(x)+\big(V^\varepsilon_{r,t}(x)\big)^2,\quad r<t,\qquad V^\varepsilon_{t,t}=\phi.
\]
Then (5.11) is just the mild form of (5.12) and follows as in Section 4.2 of Pazy (1983). Note here that $h_\varepsilon$ is continuously differentiable in $s$ thanks to the convolution with $q_\eta$, and so Theorem 6.1.5 of Pazy (1983) does apply. We next show that
\[
(5.13)\qquad \lim_{\varepsilon\downarrow 0}\sup_{r\le t}\|V^\varepsilon_{r,t}\phi-V_{r,t}\phi\|_\infty=0\quad P^{X^1}\text{-a.s.}
\]

First use (5.11) and Lemma 5.3 to see that for each $t>0$,
\[
(5.14)\qquad \sup_{r\le t,\,x\in\mathbb R^d,\,\varepsilon>0}V^\varepsilon_{r,t}(x)\le\sup_{r\le t,\,x\in\mathbb R^d,\,\varepsilon>0}P^{\varepsilon,X^1}_{r,t}\phi(x)\le\|\phi(X^1)\|_\infty\sup_{r\le t,\,x\in\mathbb R^d,\,\varepsilon>0}E_{r,x}\big(e^{\ell^\varepsilon_t}\big)=c_{\phi,t}(X^1)<\infty\quad P^{X^1}\text{-a.s.}
\]

Using (5.11) again, we see that
\begin{align*}
(5.15)\qquad \|V^\varepsilon_{r,t}-V^{\varepsilon'}_{r,t}\|_\infty&\le\|P^{\varepsilon,X^1}_{r,t}\phi-P^{\varepsilon',X^1}_{r,t}\phi\|_\infty+\int_r^t\big\|\big(P^{\varepsilon,X^1}_{r,s}-P^{\varepsilon',X^1}_{r,s}\big)\big((V^\varepsilon_{s,t})^2\big)\big\|_\infty\,ds\\
&\quad+\int_r^t\big\|P^{\varepsilon',X^1}_{r,s}\big((V^\varepsilon_{s,t}+V^{\varepsilon'}_{s,t})(V^\varepsilon_{s,t}-V^{\varepsilon'}_{s,t})\big)\big\|_\infty\,ds\\
&\equiv T^{\varepsilon,\varepsilon'}_1+T^{\varepsilon,\varepsilon'}_2+T^{\varepsilon,\varepsilon'}_3.
\end{align*}
(5.14) shows that
\[
T^{\varepsilon,\varepsilon'}_2\le\int_r^t\sup_x E_{r,x}\big(|e^{\ell^\varepsilon_s}-e^{\ell^{\varepsilon'}_s}|\big)\,c_{\phi,t}(X^1)^2\,ds\to 0\ \text{uniformly in }r\le t\ \text{as }\varepsilon,\varepsilon'\downarrow 0,\ P^{X^1}\text{-a.s. (by (5.5) and Lemma 5.3)}.
\]
(5.9) implies $T^{\varepsilon,\varepsilon'}_1\to 0$ uniformly in $r\le t$ as $\varepsilon,\varepsilon'\downarrow 0$, and (5.14) also implies
\[
T^{\varepsilon,\varepsilon'}_3\le 2c_{\phi,t}(X^1)\int_r^t\|V^\varepsilon_{s,t}-V^{\varepsilon'}_{s,t}\|_\infty\,ds.
\]

The above bounds and a simple Gronwall argument now show that $V^\varepsilon_{r,t}(x)$ converges, uniformly in $x\in\mathbb R^d$ and $r\le t$, to a continuous $C_\ell^+$-valued map as $\varepsilon\downarrow 0$. It is now easy to let $\varepsilon\downarrow 0$ in (5.10) and use (5.5) to see that this limit is $V_{r,t}$, the unique solution to (5.6). This completes the derivation of (5.13).

As usual, $\mathcal F^{X^i}_t$ is the canonical right-continuous filtration generated by $X^i$. Consider also the enlarged filtration $\bar{\mathcal F}_t=\mathcal F^{X^1}_\infty\times\mathcal F^{X^2}_t$. Argue as in the proof of Theorem 4.9 of Evans and Perkins (1994), using the predictable representation property of $X^1$, to see that for $\phi\in D(\Delta/2)$, $M^2_t(\phi)$ is a continuous $\bar{\mathcal F}_t$-local martingale such that $\langle M^2(\phi)\rangle_t=\int_0^t X^2_s(2\phi^2)\,ds$. The usual extension of the orthogonal martingale measure $M^2$ now shows that if $f:\mathbb R_+\times\Omega\times\mathbb R^d\to\mathbb R$ is $\mathcal P(\bar{\mathcal F}_\cdot)\times$Borel-measurable (here $\mathcal P(\bar{\mathcal F}_\cdot)$ is the $\bar{\mathcal F}_t$-predictable $\sigma$-field) and such that $\int_0^t X^2_s(f_s^2)\,ds<\infty$ for all $t>0$ a.s., then
\[
M^2_t(f)\equiv\int_0^t\int f(s,\omega,x)\,M^2(ds,dx)\ \text{is a well-defined continuous }\bar{\mathcal F}_t\text{-local martingale with}\ \langle M^2(f)\rangle_t=\int_0^t X^2_s(2f_s^2)\,ds.
\]

It is easy to extend the martingale problem for $X^2$ to bounded $f:[0,t]\times\mathbb R^d\to\mathbb R$ such that $f_s\in C_b^2(\mathbb R^d)$ for $s\le t$ and $\frac{\partial f_s}{\partial s},\,(\Delta/2)f_s\in C_b([0,t]\times\mathbb R^d)$ (e.g. argue as in Proposition II.5.7 of Perkins (2002)). For such an $f$ one has
\[
(5.16)\qquad X^2_u(f_u)=X^2_0(f_0)+M^2_u(f)+\int_0^u X^2_s\Big(\frac{\sigma^2}{2}\Delta f_s+\frac{\partial f_s}{\partial s}\Big)\,ds+\gamma_2 L_u(X^1,X^2)(f),\qquad u\le t.
\]

Next we claim (5.16) holds for $f(s,x,X^1)$, where $f:[0,t]\times\mathbb R^d\times C_{M_F}\to\mathbb R$ is a Borel map such that for $P^{X^1}$-a.a. $X^1$,
\[
(5.17)\qquad f(s,\cdot,X^1)\in D(\Delta/2)\ \text{for all }s\le t,\quad (\Delta/2)f_s,\ \frac{\partial f_s}{\partial s}\in C_b([0,t]\times\mathbb R^d),\quad
\sup_{s\le t,\,x\in\mathbb R^d}\Big(|f(s,x,X^1)|+\Big|\frac{\Delta}{2}f(s,x,X^1)\Big|+\Big|\frac{\partial f}{\partial s}(s,x,X^1)\Big|\Big)=C(X^1)<\infty.
\]
To see this, for $f:[0,t]\times\mathbb R^d\times C_{M_F}\to\mathbb R$ bounded, introduce
\[
f^\delta(s,x,X^1)=\int_0^\infty p_\delta*f_{u,X^1}(x)\,q_\delta(u-s)\,du.
\]

Note that if $f_n\to f$ in the bounded pointwise sense (where the bound may depend on $X^1$), then $\frac{\partial f_n^\delta}{\partial u}\to\frac{\partial f^\delta}{\partial u}$ and $\frac{\Delta}{2}f_n^\delta\to\frac{\Delta}{2}f^\delta$ in the same sense as $n\to\infty$. By starting with $f(u,x,X^1)=f_1(u,x)f_2(X^1)$, where the $f_i$ are bounded and Borel, and using a monotone class argument, we obtain (5.16) for $f^\delta$, where $f$ is any Borel map on $\mathbb R_+\times\mathbb R^d\times C_{M_F}$ with $\sup_{s\le t,x}|f(s,x,X^1)|<\infty$ for each

$X^1$. If $f$ is as in (5.17), then it is easy to let $\delta\downarrow 0$ to obtain (5.16) for $f$. Note we are using the extension of the martingale measure $M^2$ to the larger filtration $\bar{\mathcal F}_t$ in these arguments. Now recall that $\phi:C_{M_F}\to D(\Delta/2)^+$ is Borel and $V^\varepsilon_{r,t}$ is the unique solution to (5.10). Recall from (5.12) that $V^\varepsilon_{r,t}$ is a classical solution of the non-linear pde, and in particular (5.17) is valid for $f(r,x)=V^\varepsilon_{r,t}(x)$. Therefore we may use (5.12) in (5.16) to get

\begin{align*}
(5.18)\qquad X^2_s(V^\varepsilon_{s,t})&=X^2_0(V^\varepsilon_{0,t})+\int_0^s\int V^\varepsilon_{r,t}(x)\,M^2(dr,dx)+\int_0^s X^2_r\big((V^\varepsilon_{r,t})^2\big)\,dr\\
&\quad+\int_0^s\int V^\varepsilon_{r,t}(x)\big[\gamma_2 L(X^1,X^2)(dr,dx)-h_\varepsilon(X^1,r,x)\,X^2_r(dx)\,dr\big],\qquad s\le t.
\end{align*}

We claim the last term in (5.18) approaches $0$ uniformly in $s\le t$, $P$-a.s., as $\varepsilon=\varepsilon_k\downarrow 0$ for an appropriate choice of $\eta_k=\eta(\varepsilon_k)$ in the definition of $h_\varepsilon$. The definition of collision local time allows us to select $\varepsilon_k\downarrow 0$ so that $L^{\varepsilon_k}(X^1,X^2)\to L(X^1,X^2)$ in $M_F(\mathbb R_+\times\mathbb R^d)$ a.s. Note that
\begin{align*}
(5.19)\qquad &\Big|\gamma_2\int_0^s\!\int\!\int V_{r,t}\Big(\frac{x_1+x_2}{2}\Big)p_{\varepsilon_k}(x_1-x_2)\,X^1_r(dx_1)\,X^2_r(dx_2)\,dr-\int_0^s\!\int V^{\varepsilon_k}_{r,t}(x_2)\,h_{\varepsilon_k}(X^1,r,x_2)\,X^2_r(dx_2)\,dr\Big|\\
&\le\gamma_2\int_0^s\!\int\!\int\Big|V_{r,t}\Big(\frac{x_1+x_2}{2}\Big)-V^{\varepsilon_k}_{r,t}(x_2)\Big|\,p_{\varepsilon_k}(x_1-x_2)\,X^1_r(dx_1)\,X^2_r(dx_2)\,dr\\
&\quad+\gamma_2\int_0^s\!\int\Big|p_{\varepsilon_k}*X^1_r(x_2)-\int_0^\infty q_{\eta(\varepsilon_k)}(u-r)\,p_{\varepsilon_k}*X^1_u(x_2)\,du\Big|\,V^{\varepsilon_k}_{r,t}(x_2)\,X^2_r(dx_2)\,dr\\
&\equiv I^k_1(s)+I^k_2(s).
\end{align*}

Let $\delta_0>0$. By (5.13) and the uniform continuity of $(r,x)\to V_{r,t}(x)$ there is a $k_0=k_0(X^1)\in\mathbb N$ a.s. such that
\[
\sup_{r\le t}\Big|V_{r,t}\Big(\frac{x_1+x_2}{2}\Big)-V^{\varepsilon_k}_{r,t}(x_2)\Big|<\delta_0\quad\text{for }k>k_0\ \text{and}\ |x_1-x_2|<\varepsilon_{k_0}.
\]
By considering $|x_1-x_2|<\varepsilon_{k_0}$ and $|x_1-x_2|\ge\varepsilon_{k_0}$ separately, one can easily show there is a $k_1=k_1(X^1)$ so that
\[
\sup_{s\le t}I^k_1(s)<\delta_0\quad\text{if }k>k_1.
\]
Next use the upper bound in (5.14) and the continuity of $u\to p_{\varepsilon_k}*X^1_u(x_2)$ to choose $\eta$ so that $\eta_k=\eta(\varepsilon_k)\downarrow 0$ fast enough that
\[
\sup_{s\le t}I^k_2(s)\to 0\quad P\text{-a.s. as }k\to\infty.
\]

The above bounds show the left-hand side of (5.19) converges to $0$ a.s. as $k\to\infty$. The a.s. convergence of $L^{\varepsilon_k}(X^1,X^2)$ to $L(X^1,X^2)$ therefore shows that
\[
\lim_{k\to\infty}\sup_{s\le t}\Big|\gamma_2\int_0^s\!\int V^{\varepsilon_k}_{r,t}(x)\,L(X^1,X^2)(dr,dx)-\int_0^s\!\int V^{\varepsilon_k}_{r,t}(x)\,h_{\varepsilon_k}(X^1,r,x)\,X^2_r(dx)\,dr\Big|=0\quad\text{a.s.}
\]
The uniform convergence in (5.13) now shows that the last term in (5.18) approaches $0$ uniformly in $s\le t$, $P$-a.s., as $\varepsilon=\varepsilon_k\to 0$. (5.13) also allows us to let $\varepsilon_k\downarrow 0$ in (5.18), taking perhaps a further subsequence to handle the martingale term, and conclude
\[
X^2_s(V_{s,t})=X^2_0(V_{0,t})+\int_0^s\int V_{r,t}(x)\,M^2(dr,dx)+\int_0^s X^2_r\big((V_{r,t})^2\big)\,dr\quad\text{for all }s\le t\ \text{a.s.}
\]

Now apply Itô's lemma to conclude
\[
e^{-X^2_s(V_{s,t})}=e^{-X^2_0(V_{0,t})}-\int_0^s\int e^{-X^2_r(V_{r,t})}\,V_{r,t}(x)\,M^2(dr,dx),\qquad s\le t.
\]
The stochastic integral is a bounded $\bar{\mathcal F}_t$-local martingale and therefore is an $\bar{\mathcal F}_t$-martingale. This proves, for $t_1<t$,
\[
E\big(e^{-X^2_t(\phi)}\,\big|\,\bar{\mathcal F}_{t_1}\big)=e^{-X^2_{t_1}(V_{t_1,t}\phi)}\quad\text{for any Borel }\phi:C_{M_F}\to D(\Delta/2)^+.
\]
Now $V_{t_1,t}\phi:C_{M_F}\to D(\Delta/2)^+$ is also Borel, and so if $\phi_1:C_{M_F}\to D(\Delta/2)^+$ is Borel, we can also conclude
\[
E\big(e^{-X^2_t(\phi)-X^2_{t_1}(\phi_1)}\,\big|\,\bar{\mathcal F}_0\big)=E\big(e^{-X^2_{t_1}((V_{t_1,t}\phi)+\phi_1)}\,\big|\,\bar{\mathcal F}_0\big)=\exp\big(-X^2_0(V_{0,t_1}((V_{t_1,t}\phi)+\phi_1))\big).
\]
This uniquely identifies the joint distribution of $(X^2_{t_1},X^2_t)$ conditional on $X^1$. Iterating the above a finite number of times, we identify the finite-dimensional distributions of $X^2$ conditional on $X^1$, and in fact have shown that, conditional on $X^1$, $X^2$ has the law of the measure-valued process considered by Dynkin and Kuznetsov in (5.7).

Proof of Theorem 1.5

Let $(X^1,X^2)$ be a solution to $M^{-\gamma_1,\gamma_2}_{X^1_0,X^2_0}$ and let $P$ denote its law on the canonical space of $M_F^2$-valued paths. Conditionally on $X$, let $Y$ denote a super-Brownian motion with immigration $\gamma_1 L_t(X^1,X^2)$ (see Theorem 1.1 of Barlow, Evans and Perkins (1991)), constructed perhaps on a larger probability space. This means for $\phi\in C_b^2(\mathbb R^d)$,
\[
Y_t(\phi)=\gamma_1 L_t(X^1,X^2)(\phi)+M^Y_t(\phi)+\int_0^t Y_s\Big(\frac{\sigma^2\Delta\phi}{2}\Big)\,ds,
\]
where
\[
\langle M^Y(\phi)\rangle_t=\int_0^t Y_s(2\phi^2)\,ds,
\]
and $M^Y$ is orthogonal with respect to the $M^{X^i}$, $i=1,2$. All these martingales are martingales with respect to a common filtration. Then it is easy to check that $\bar X^1=X^1+Y$ satisfies the martingale problem characterizing super-Brownian motion starting at $X^1_0$, i.e., is as in the first component in $M^{0,0}_{X^1_0,X^2_0}$. Therefore there is a jointly continuous function $\bar u^1(t,x)$, with compact support, such that $\bar X^1_t(dx)=\bar u^1(t,x)\,dx$ (see, e.g., Theorem III.4.2 and Corollary III.1.7 of Perkins (2002)), and so there is a bounded function of compact support, $u^1(t,x)$, with $X^1_t(dx)=u^1(t,x)\,dx$, by the domination $X^1\le\bar X^1$. Let $\phi\in C_b(\mathbb R^d)$. Then Lebesgue's differentiation theorem implies that
\[
\lim_{\delta\to 0}\int p_\delta(x_1-x_2)\,\phi\Big(\frac{x_1+x_2}{2}\Big)\,u^1(s,x_1)\,dx_1=\phi(x_2)\,u^1(s,x_2)\quad\text{for Lebesgue-a.a. }(s,x_2),\ \text{a.s.}
\]
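The approximate-identity step here is elementary to illustrate numerically: convolving a bounded continuous density with a narrow Gaussian kernel recovers the density in sup norm. A small sketch (the density below is an arbitrary smooth example chosen for illustration, not the $u^1$ of the theorem):

```python
import numpy as np

# Approximate identity: p_delta * u -> u as delta -> 0, for continuous u.
# p_delta is the centered Gaussian density with variance delta; u is an
# arbitrary smooth, bounded example "density" (illustration only).
x = np.linspace(-5, 5, 4001)
dx = x[1] - x[0]
u = np.exp(-x**2) * (2 + np.sin(3 * x))

def mollify(u_vals, delta):
    # discrete convolution with the Gaussian kernel p_delta on the grid
    kern = np.exp(-x**2 / (2 * delta)) / np.sqrt(2 * np.pi * delta)
    return np.convolve(u_vals, kern, mode="same") * dx

errs = [np.max(np.abs(mollify(u, d) - u)) for d in (0.1, 0.01, 0.001)]
print(errs)  # sup-norm error shrinks as delta decreases
```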

Moreover, the approximating integrals are uniformly bounded by $\|\phi\|_\infty\|u^1\|_\infty$, and so by dominated convergence one gets from the definition of $L(X^1,X^2)$ that
\[
L_t(X^1,X^2)(\phi)=\int_0^t\int\phi(x_2)\,u^1(s,x_2)\,X^2_s(dx_2)\,ds.
\]
Evans and Perkins (1994) (Theorem 3.9) used Dawson's Girsanov theorem to show there is a unique in law solution to $M^{-\gamma_1,0}_{X^1_0,X^2_0}$ in our one-dimensional setting. If $P^{-\gamma_1,0}$ denotes this unique law on the canonical path space of measures, then they also showed
\[
(5.20)\qquad X^1_t(dx)=u^1(t,x)\,dx,\quad P\text{-a.s. and }P^{-\gamma_1,0}\text{-a.s.}
\]
The latter is a special case of our argument when $\gamma_2=0$. This allows us to apply Dawson's Girsanov theorem (see Theorem IV.1.6(a) of Perkins (2002)) to conclude that
\[
\frac{dP}{dP^{-\gamma_1,0}}\Big|_{\mathcal F_t}=\exp\Big\{\int_0^t\int u^1(s,x)/2\,dM^{X^2}(ds,dx)-\frac18\int_0^t\int u^1(s,x)^2\,X^2_s(dx)\,ds\Big\}.
\]
Here $M^{X^2}$ is the martingale measure associated with $X^2$ and $u^1$ is the density of $X^1$, both under $P^{-\gamma_1,0}$. Although the Girsanov theorem quoted above considered absolute continuity with respect to $P_{X^1_0}\times P_{X^2_0}$, the same proof gives the above result. This proves uniqueness of $P$ and, together with (5.20), shows that $P$ is absolutely continuous with respect to $P_{X^1_0}\times P_{X^2_0}$. This gives the required properties of the densities of the $X^i$, as they are well known for super-Brownian motion (see Theorem III.4.2 of Perkins (2002)).

6 Proof of the Concentration Inequality–Proposition 2.4

As we will be proving the concentration inequality for the ordinary rescaled critical branching random walk $\bar X^{1,N}$, in order to simplify the notation we will write $X^N$ for $\bar X^{1,N}$, and write $\xi^N$, or just $\xi$, for $\bar\xi^{1,N}$. Dependence of the expectation on the initial measure $X^N_0=\bar X^{1,N}_0$ will be denoted by $E_{X^N_0}$. $\{P^N_u,u\ge 0\}$ continues to denote the semigroup of our rescaled continuous time random walk $B^N$.

Notation. If $\psi:\mathbb Z_N\to\mathbb R$, let $P^N\psi(x)=\sum_y p_N(y-x)\psi(y)$ and let
\[
R\psi(x)=R_N\psi(x)=\sum_{k=0}^\infty 2^{-k}(P^N)^k\psi(x).
\]
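The operator $R$ is a discrete resolvent: each $(P^N)^k$ is a transition kernel, so $\|R\psi\|_\infty\le 2\|\psi\|_\infty$, and for $\psi\ge 0$ one has $P^NR\psi\le 2R\psi$ (the inequality exploited in (6.9) below). A quick numerical sanity check on a toy kernel (a lazy symmetric walk on a cycle, an arbitrary stand-in for $p_N$, not the paper's kernel):

```python
import numpy as np

# Toy stand-in for p_N: a lazy symmetric walk on a cycle of n sites.
n = 50
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

def R(psi, terms=60):
    # R psi = sum_{k>=0} 2^{-k} P^k psi, truncated (tail <= 2^-terms ||psi||)
    out, Pk = np.zeros_like(psi), psi.copy()
    for k in range(terms):
        out += 2.0**(-k) * Pk
        Pk = P @ Pk
    return out

psi = np.abs(np.sin(np.arange(n)))           # arbitrary non-negative function
Rpsi = R(psi)
print(np.max(R(np.ones(n))))                 # R applied to 1 gives 2
print(np.all(P @ Rpsi <= 2 * Rpsi + 1e-12))  # the bound behind (6.9)
```

The second check is just the index shift $P R\psi=\sum_k 2^{-k}P^{k+1}\psi=2(R\psi-\psi)+\text{tail}\le 2R\psi$ for $\psi\ge 0$.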

To bound the mass in a fixed small ball we will need good exponential bounds. Here is a general exponential bound whose proof is patterned after an analogous bound for super-Brownian motion (see e.g. Lemma III.3.6 of Perkins (2002)). The discrete setting does complicate the proof a bit.

Proposition 6.1 Let $f:\mathbb Z_N\to\mathbb R_+$ be bounded and define
\[
\bar f^N(u)=\|P^N_uf\|_\infty,\qquad \bar I^Nf(t)=\int_0^t\bar f^N(u)\,du.
\]
If $t>0$ satisfies
\[
(6.1)\qquad \bar I^Nf(t)\le\frac1{14}\exp\big(-4\|f\|_\infty/N\big),
\]
then
\[
(6.2)\qquad E_{X^N_0}\big(\exp(X^N_t(f))\big)\le\exp\big(2X^N_0(P^N_tRf)\big).
\]
Proof

Assume $\phi:[0,t]\times\mathbb Z_N\to\mathbb R_+$ is such that $\phi$ and $\dot\phi\in C_b([0,t]\times\mathbb Z_N)$. Then $M^{N,0,0}_{X^{1,N}_0,X^{2,N}_0}$ and (2.9) imply
\begin{align*}
(6.3)\qquad X^N_t(\phi_t)&=X^N_0(\phi_0)+\int_0^t X^N_s(\Delta_N\phi_s+\dot\phi_s)\,ds\\
&\quad+\frac1N\sum_x\int_0^t\!\int\phi(s,x)\,\mathbf 1(u\le\xi_{s-}(x))\,\big[\hat\Lambda^{1,+}_x(ds,du)-\hat\Lambda^{1,-}_x(ds,du)\big]\\
&\quad+\frac1N\sum_{x,y}\int_0^t\!\int\big[\phi(s,x)-\phi(s,y)\big]\,\mathbf 1(u\le\xi_{s-}(y))\,\hat\Lambda^{1,m}_{x,y}(ds,du).
\end{align*}

A short calculation using Itô's lemma for Poisson point processes (see p. 66 of Ikeda and Watanabe (1981)) shows that
\[
(6.4)\qquad \exp\big(X^N_t(\phi_t)\big)-\exp\big(X^N_0(\phi_0)\big)=\int_0^t\exp\big(X^N_s(\phi_s)\big)\big[X^N_s(\Delta_N\phi_s+\dot\phi_s)+d^N_s\big]\,ds+\tilde M^N_t,
\]
where
\begin{align*}
d^N_s&=N\sum_x\xi_s(x)\Big[\exp\Big(\frac1N\phi(s,x)\Big)+\exp\Big(-\frac1N\phi(s,x)\Big)-2\Big]\\
&\quad+N\sum_y\xi_s(y)\sum_x p_N(y-x)\Big[\exp\Big(\frac1N(\phi(s,x)-\phi(s,y))\Big)-\frac1N(\phi(s,x)-\phi(s,y))-1\Big],
\end{align*}
and $\tilde M^N$ is a locally bounded local martingale. In fact
\begin{align*}
\tilde M^N_t&=\sum_x\int_0^t\!\int\Big[\exp\Big(X^N_{s-}(\phi_s)+\frac1N\mathbf 1(u\le\xi_{s-}(x))\phi(s,x)\Big)-\exp\big(X^N_{s-}(\phi_s)\big)\Big]\,\hat\Lambda^{1,+}_x(ds,du)\\
&\quad+\sum_x\int_0^t\!\int\Big[\exp\Big(X^N_{s-}(\phi_s)-\frac1N\mathbf 1(u\le\xi_{s-}(x))\phi(s,x)\Big)-\exp\big(X^N_{s-}(\phi_s)\big)\Big]\,\hat\Lambda^{1,-}_x(ds,du)\\
&\quad+\sum_{x,y}\int_0^t\!\int\Big[\exp\Big(X^N_{s-}(\phi_s)+\frac1N\mathbf 1(u\le\xi_{s-}(y))(\phi(s,x)-\phi(s,y))\Big)-\exp\big(X^N_{s-}(\phi_s)\big)\Big]\,\hat\Lambda^{1,m}_{x,y}(ds,du).
\end{align*}


Assume now that for some $c_0>0$,
\[
(6.5)\qquad \|\phi\|_\infty\le 4c_0N,
\]
and note that if $|w|\le 4c_0$, then
\[
(6.6)\qquad 0\le e^w-1-w=w^2\sum_{k=0}^\infty\frac{w^k}{(k+2)!}\le w^2\,\frac{e^{4c_0}}{2}.
\]
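The elementary bound (6.6) is easy to verify numerically on a grid (the constant $c_0$ below is chosen arbitrarily for illustration):

```python
import numpy as np
import math

# Check (6.6): 0 <= e^w - 1 - w <= w^2 e^{4 c0} / 2 for |w| <= 4 c0.
c0 = 0.25                                  # arbitrary illustrative constant
w = np.linspace(-4 * c0, 4 * c0, 10001)
lhs = np.expm1(w) - w                      # e^w - 1 - w, computed stably
print(bool(np.all(lhs >= 0)))              # lower bound (convexity of e^w)
print(bool(np.all(lhs <= w**2 * math.exp(4 * c0) / 2)))  # upper bound
```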

Now use (6.6) with $w=\phi(s,x)/N\in[0,4c_0]$ or $w=\frac{\phi(s,x)-\phi(s,y)}N\in[-4c_0,4c_0]$ to see that
\begin{align*}
(6.7)\qquad d^N_s&\le e^{4c_0}X^N_s(\phi_s^2)+\frac{e^{4c_0}}2\,\frac1N\sum_y\xi_s(y)\sum_x p_N(y-x)\big(\phi(s,x)-\phi(s,y)\big)^2\\
&\le e^{4c_0}X^N_s(\phi_s^2)+\frac{e^{4c_0}}2\,X^N_s\big(\phi_s^2+P^N(\phi_s^2)\big)\\
&=\frac{3e^{4c_0}}2\,X^N_s(\phi_s^2)+\frac{e^{4c_0}}2\,X^N_s\big(P^N(\phi_s^2)\big).
\end{align*}

Now assume $t>0$ satisfies (6.1), let $c_1=7\exp(4\|f\|_\infty/N)$ and define $\kappa(u)=(1-c_1\bar I^Nf(t-u))^{-1}$ for $u\in[0,t]$. Introduce
\[
\phi(u,x)=P^N_{t-u}Rf(x)\,\kappa(u),\qquad u\le t,\ x\in\mathbb Z_N.
\]

As convolving $P^N$ with $P^N_t$ amounts to running $B^N$ until the first jump after time $t$, one readily sees that these operators commute, and hence so do $R$ and $P^N_t$. Therefore
\[
(6.8)\qquad |P^N_u(Rf)(x)|=|R(P^N_uf)(x)|\le 2\bar f^N(u),\qquad u\le t.
\]
We also have
\[
(6.9)\qquad P^N(P^N_uRf)(x)=P^NR(P^N_uf)(x)=\sum_{k=0}^\infty 2^{-k}(P^N)^{k+1}(P^N_uf)(x)\le 2R(P^N_uf)(x)=2P^N_u(Rf)(x).
\]

By (6.1) and (6.8), for $u\le t$,
\[
|\phi(u,x)|\le 2\bar f^N(t-u)\kappa(u)\le 4\bar f^N(t-u)\le 4\|f\|_\infty=\frac{4\|f\|_\infty}N\,N,
\]
and so (6.5) holds with $c_0=\|f\|_\infty/N$. Clearly $\phi,\dot\phi\in C_b([0,t]\times\mathbb Z_N)$, and so (6.7) is valid with this choice of $c_0$. We therefore have
\begin{align*}
X^N_s(\dot\phi_s+\Delta_N\phi_s)+d^N_s&=X^N_s(P^N_{t-s}Rf)\,\dot\kappa_s+d^N_s\\
&=-X^N_s(P^N_{t-s}Rf)\,\kappa_s^2\,c_1\bar f^N(t-s)+d^N_s\\
&\le\kappa_s^2\,X^N_s\Big(\frac{3e^{4c_0}}2\big(P^N_{t-s}Rf\big)^2+\frac{e^{4c_0}}2P^N\big((P^N_{t-s}Rf)^2\big)-c_1\bar f^N(t-s)P^N_{t-s}Rf\Big)\quad\text{(by (6.7))}\\
&\le\kappa_s^2\,\bar f^N(t-s)\,X^N_s\Big(3e^{4c_0}P^N_{t-s}Rf+e^{4c_0}P^N P^N_{t-s}Rf-c_1P^N_{t-s}Rf\Big)\quad\text{(by (6.8))}\\
&\le\kappa_s^2\,\bar f^N(t-s)\big(5e^{4c_0}-c_1\big)\,X^N_s\big(P^N_{t-s}Rf\big)\quad\text{(by (6.9))}\\
&\le 0,
\end{align*}
the last by the definition of $c_1$. Now return to (6.4) with the above choice of $\phi$. By choosing stopping times $T^N_k\uparrow\infty$ as $k\to\infty$ such that $E(\tilde M^N_{t\wedge T^N_k})=0$, and using Fatou's lemma, we get from the above that
\[
E\big(\exp(X^N_t(f))\big)\le E\big(\exp(X^N_t(Rf))\big)\le\exp\Big[\frac{X^N_0(P^N_tRf)}{1-c_1\bar I^Nf(t)}\Big]\le\exp\big(2X^N_0(P^N_tRf)\big)\quad\text{(by (6.1))}.
\]

We now specialize the above to obtain exponential bounds on the mass in a ball of radius $r$. In fact we will use this bound for the ball in a torus, and so present the result in a more general framework. Lemma 7.3 below will show that the key hypothesis, (6.10) below, is satisfied in this context.

Corollary 6.2 Let $c_{6.10}\ge 1$, $T>0$ and $\delta\in(0,2\wedge d)$. There is an $r_0=r_0(c_{6.10},\delta,T)\in(0,1]$ such that the following holds: for any $N$, any $r\in[N^{-1/2},r_0]$, and any $C\subset\mathbb R^d$ satisfying
\[
(6.10)\qquad \sup_x\Pi^N_x\big(B^N_u\in C\big)\le c_{6.10}\,r^d\big(1+u^{-d/2}\big)\qquad\text{for all }u\in(0,T],
\]
we have, for any $c>0$,
\[
E_{X^N_0}\big[\exp\big(r^{\delta-2\wedge d}X^N_t(C)\big)\big]\le e^{2c},
\]
for all $0\le t\le T$ satisfying
\[
(6.11)\qquad X^N_0\big(P^N_tR\mathbf 1_C\big)\le c\,r^{2\wedge d-\delta}.
\]

Proof We want to apply Proposition 6.1 with $f=r^{\delta-2\wedge d}\mathbf 1_C$, where $N^{-1/2}\le r\le 1$ and $C\subset\mathbb R^d$ satisfies (6.10). Note that
\[
(6.12)\qquad \|f\|_\infty\le r^{\delta-2\wedge d}\le N^{\frac{2\wedge d}2-\frac\delta2}\le N,
\]
and by (6.10), for $t\le T$,
\begin{align*}
\bar I^Nf(t)&\le r^{\delta-2\wedge d}c_{6.10}\Big[\int_0^t r^d+\big(r^du^{-d/2}\big)\wedge 1\,du\Big]\\
&\le r^{\delta-2\wedge d}c_{6.10}\Big[r^dt+\int_0^{r^2}du+r^d\int_{r^2}^tu^{-d/2}\,du\,\mathbf 1(t>r^2)\Big].
\end{align*}
If $\phi_d(r,t)=1+t+[\log^+(t/r^2)\mathbf 1(d=2)]$, then a simple calculation leads to
\[
\bar I^Nf(t)\le 3c_{6.10}\,r^\delta\,\phi_d(r,t),\qquad t\le T.
\]
In view of (6.12), condition (6.1) in Proposition 6.1 will hold for all $t\le T$ if $r\le r_0(c_{6.10},\delta,T)$ for some $r_0>0$. Therefore Proposition 6.1 now implies the required result.

Corollary 6.3 If $0<\theta\le N$ and $t\le e^{-4}(14\theta)^{-1}$, then $E_{X^N_0}\big(\exp(\theta X^N_t(1))\big)\le\exp\big(4\theta X^N_0(1)\big)$.

Proof Take $f\equiv\theta$ ($\theta$ as above) in Proposition 6.1. Note that condition (6.1) holds iff $t\le\exp(-4\theta/N)(14\theta)^{-1}$, and so for $\theta\le N$ is implied by our bound on $t$. As $P^N_tRf=2\theta$, Proposition 6.1 gives the result.

Remark 6.4 The above corollary is of course well known, as $X^N_t(1)=Z_{Nt}/N$, where $\{Z_u,u\ge 0\}$ is a rate 1 continuous time Galton-Watson branching process starting with $NX^N_0(1)$ particles and undergoing critical binary branching. It is easy to show, e.g. by deriving a simple non-linear o.d.e. in $t$ for $E(u^{Z_t}|Z_0=1)$ (see (9.1) in Ch. V of Harris (1963)), that for $\theta>0$ and $t<[N(e^{\theta/N}-1)]^{-1}$, or $\theta\le 0$ and all $t\ge 0$,
\[
(6.13)\qquad E_{X^N_0}\big[\exp\big(\theta X^N_t(1)\big)\big]=\Big[1+\frac{e^{\theta/N}-1}{1-tN(e^{\theta/N}-1)}\Big]^{NX^N_0(1)}.
\]
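A quick numerical consistency check of (6.13), with illustrative values of $N$, $t$ and $X^N_0(1)$: the value at $\theta=0$ must be $1$, and the $\theta$-derivative at $0$ must equal $E\,X^N_t(1)=X^N_0(1)$, since the total mass is a martingale.

```python
import math

# Exponential-moment formula (6.13) for the rescaled critical binary
# Galton-Watson total mass X_t^N(1) = Z_{Nt}/N.
def moment_613(theta, t, N, x0):
    w = math.expm1(theta / N)            # e^{theta/N} - 1, computed stably
    assert theta <= 0 or t < 1.0 / (N * w), "outside the domain of (6.13)"
    return (1.0 + w / (1.0 - t * N * w)) ** (N * x0)

N, t, x0 = 200, 0.3, 1.5                 # illustrative parameters
h = 1e-6                                 # finite-difference step at theta = 0
deriv = (moment_613(h, t, N, x0) - moment_613(-h, t, N, x0)) / (2 * h)
print(moment_613(0.0, t, N, x0))         # value 1 at theta = 0
print(deriv)                             # approximately x0 (martingale mean)
```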

The above exponential bounds will allow us to easily obtain the required concentration inequality on a mesh of times and spatial locations. To interpolate between these space-time points we will need a uniform modulus of continuity for the individuals making up our branching random walk. For this it will be convenient to explicitly label these individuals by multi-indices
\[
\beta\in I\equiv\bigcup_{n=0}^\infty\{0,1,\dots,|\xi_0|-1\}\times\{0,1\}^n.
\]
If $\beta=(\beta_0,\dots,\beta_n)\in I$, let $\pi\beta=(\beta_0,\dots,\beta_{n-1})$ if $n\ge 1$ be the parent index of $\beta$, set $\pi\beta=\emptyset$ if $n=0$, and let $\beta|i=(\beta_0,\dots,\beta_i)$ if $0\le i\le n\equiv|\beta|$. Define $\beta|(-1)=\emptyset$. Let $\{\tau^\beta:\beta\in I\}$, $\{b^\beta:\beta\in I\}$ be two independent collections of i.i.d. random variables, with $\tau^\beta$ exponentially distributed with rate $2N$ and $P(b^\beta=0)=P(b^\beta=2)=1/2$. Let $T^\emptyset=0$ and define
\[
T^\beta=\sum_{0\le i\le|\beta|}\tau^{\beta|i}\prod_{j=1}^{i-1}\mathbf 1\big(b^{\beta|j}>0\big).
\]

We will think of $[T^{\pi\beta},T^\beta)$ as the lifetime of particle $\beta$, so that $T^\beta=T^{\pi\beta}$ (iff $b^{\beta|j}=0$ for some $j<|\beta|$) means particle $\beta$ never existed, while $b^\beta$ is the number of offspring of particle $\beta$. Write $\beta\sim t$ iff particle $\beta$ is alive at time $t$, i.e., iff $T^{\pi\beta}\le t<T^\beta$. Let $\{B^k_0:0\le k<|\xi_0|\}$ be points in $\mathbb Z_N$ satisfying $\sum_k\mathbf 1(B^k_0=x)=\xi_0(x)$ for all $x\in\mathbb Z_N$. Now condition on $\{T^\beta:\beta\in I\}$, and let the spatial increments $\{B^\beta_s-B^\beta_{T^{\pi\beta}}:s\in[T^{\pi\beta},T^\beta)\}$, $T^{\pi\beta}<T^\beta$, be distributed as independent copies of the random walk $B^N$ over the particles' lifetimes, so that $B^\beta_t$ denotes the position of particle $\beta$ at time $t$.

Proposition 6.5 For each $\varepsilon\in(0,1/2)$ there are $\delta_{N,\varepsilon}>0$ and constants $c_i(\varepsilon)>0$, $i=1,2$, such that
\[
(6.14)\qquad P\big(\delta_{N,\varepsilon}<2^{-n}\big)\le c_1(\varepsilon)\exp\big(-c_2(\varepsilon)2^{n\varepsilon}\big)\quad\text{for all }2^{-n}\ge N^{-1},
\]
and whenever $\delta_{N,\varepsilon}\ge 2^{-n}\ge N^{-1}$, for $n\in\mathbb N$,
\[
(6.15)\qquad |B^\beta_t-B^\beta_{j2^{-n}}|\le 2^{-n(\frac12-\varepsilon)}\quad\text{for any }j\in\{0,1,\dots,n2^n\},\ t\in[j2^{-n},(j+1)2^{-n}]\ \text{and}\ \beta\sim t.
\]
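The labeling scheme is straightforward to implement. The sketch below (illustrative, with a small truncation depth and seeded randomness; the parameter values are assumptions, not the paper's) builds the lifetimes $T^\beta$ from the i.i.d. clocks $\tau^\beta$ and offspring variables $b^\beta$, and checks the bookkeeping: a label is alive at $t$ iff $T^{\pi\beta}\le t<T^\beta$, and $T^\beta=T^{\pi\beta}$ exactly when some strict ancestor had no offspring.

```python
import random

random.seed(7)
N = 10            # clocks tau^beta are Exp(2N)
GENS = 6          # truncate the index set I at this depth (illustration only)

tau, b, T = {}, {}, {(): 0.0}   # the empty tuple plays the role of pi(beta_0)

def build(beta):
    # tau^beta ~ Exp(2N); b^beta is 0 or 2 with probability 1/2 each
    tau[beta] = random.expovariate(2 * N)
    b[beta] = random.choice([0, 2])
    parent = beta[:-1] if len(beta) > 1 else ()
    # the lifetime adds tau^beta only if every strict ancestor reproduced
    alive_line = all(b[beta[:j]] > 0 for j in range(1, len(beta)))
    T[beta] = T[parent] + (tau[beta] if alive_line else 0.0)
    if len(beta) < GENS:
        build(beta + (0,)); build(beta + (1,))

build((0,))        # a single initial particle, label beta_0 = 0

def alive(beta, t):
    parent = beta[:-1] if len(beta) > 1 else ()
    return T[parent] <= t < T[beta]

t = 0.05
pop = [beta for beta in tau if alive(beta, t)]
print(len(pop))
# bookkeeping: "never existed" iff some strict ancestor had b = 0
for beta in tau:
    parent = beta[:-1] if len(beta) > 1 else ()
    dead_line = any(b[beta[:j]] == 0 for j in range(1, len(beta)))
    assert (T[beta] == T[parent]) == dead_line
```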

Proof In Section 4 of Dawson, Iscoe and Perkins (1989) a more precise result is proved for a system of branching Brownian motions in discrete time. Our approach is the same, and so we will only point out the changes needed in the argument. Let $h(r)=r^{\frac12-\varepsilon}$. In place of the Brownian tail estimates we use the following exponential bound for our continuous time random walks, which may be proved using classical methods:
\[
(6.16)\qquad \Pi^N_0\big(|B^N(2^{-n})|>h(2^{-n})\big)\le 3d\exp\big(-c_{6.16}2^{n\varepsilon}\big)\quad\text{for }2^{-n}\ge N^{-1}\ \text{and some positive }c_{6.16}(\varepsilon).
\]
We write $\beta'>\beta$ iff $\beta'$ is a descendant of $\beta$, i.e., if $\beta=\beta'|i$ for some $i\le|\beta'|$. The next ingredient is a probability bound on
\[
\Gamma^n(j2^{-n})=\mathrm{card}\big\{\beta\sim j2^{-n}:\exists\,\beta'>\beta\ \text{s.t.}\ \beta'\sim(j+1)2^{-n}\big\},
\]
the number of ancestors at time $j2^{-n}$ of the population at $(j+1)2^{-n}$. The probability of a particular individual having descendants at time $2^{-n}$ in the future is $(1+N2^{-n})^{-1}$ (use (6.13) with $\theta=-\infty$). From this and easy binomial probability estimates one gets
\[
(6.17)\qquad P\big(\exists\,j\le n2^n\ \text{s.t.}\ \Gamma^n(j2^{-n})\ge[e^2X^N_{j2^{-n}}(1)+1]2^n\big)\le n2^n\exp(-2^n).
\]

Define $n_1=n_1(N)$ by $2^{-n_1}\ge\frac1N>2^{-n_1-1}$, and for $1\le n\le n_1$ define
\[
A_n=\Big\{\max_{1\le j\le n2^n}\ \max_{\beta\in\Gamma^n(j2^{-n})}\big|B^\beta_{j2^{-n}}-B^\beta_{(j-1)2^{-n}}\big|>h(2^{-n})\Big\}.
\]
Set $A_0=\Omega$. Next introduce
\[
A^*_{n_1}=\Big\{\max_{j\le n_12^{n_1}}\ \sup_{t\in[j2^{-n_1},(j+1)2^{-n_1}],\,\beta\sim t}\big|B^\beta_t-B^\beta_{j2^{-n_1}}\big|>h(2^{-n_1})\Big\}.
\]

Set $\delta_{N,\varepsilon}=2^{-n}$ iff $\omega\notin A^*_{n_1}\cup(\cup_{n'=n}^{n_1}A_{n'})$ and $\omega\in A_{n-1}$, for $n\in\mathbb N$. If $\omega\in A^*_{n_1}$, set $\delta_{N,\varepsilon}=1$. A standard binomial expansion argument (as in Dawson, Iscoe and Perkins (1989)) shows that (6.15) holds, with some universal constant $c$ in front of the bound on the right-hand side. The latter is easily removed by adjusting $\varepsilon$. It follows easily from (6.16) and (6.17) that $P(A_n)\le c_1\exp(-c_22^{n\varepsilon})$. Therefore, to show (6.14) and so complete the proof, it remains to show
\[
(6.18)\qquad P(A^*_{n_1})\le c_1\exp\big(-c_22^{n_1\varepsilon}\big).
\]
This bound is the only novel feature of this argument. It arises due to our continuous time setting. Let $0\le j\le n_12^{n_1}$ and condition on $N_j=NX^N_{j2^{-n_1}}(1)$. For each of these $N_j$ particles, run it until its first branching event. The particle is allowed to give birth iff it splits into two before time $(j+1)2^{-n_1}$. If it splits after this time, or if it dies at the branching event, it is killed. Note that particles will split into two with probability
\[
q=\frac12\int_0^{2^{-n_1}}e^{-2Ns}\,2N\,ds=\frac12\big(1-e^{-2N2^{-n_1}}\big)\le\frac12\big(1-e^{-4}\big)<\frac12,
\]

the last inequality by our definition of $n_1$. Let $Z_1$ be the size of the population after this one generation. Now repeat the above branching mechanism, allowing a split only if it occurs before the end of the next time interval of length $2^{-n_1}$, and continue this branching mechanism until the population becomes extinct. In this way we get a subcritical discrete time Galton-Watson branching process with mean offspring size $\mu=2q<1$. Let $T_j$ denote the extinction time of this process. Then $P(T_j>m)\le E(Z_m)=N_j\mu^m$ and so, integrating out the conditioning, we have
\[
(6.19)\qquad P\Big(\max_{j\le n_12^{n_1}}T_j>m\Big)\le n_12^{n_1}NX^N_0(1)\,\mu^m.
\]
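The first-moment (Markov) bound $P(T_j>m)\le\mu^m$ per initial particle can be checked against the exact extinction-time distribution: for offspring $0$ or $2$ with $P(2)=q$, the probability of extinction by generation $m$ starting from one ancestor is the $m$-fold iterate $f^{(m)}(0)$ of the generating function $f(s)=1-q+qs^2$. A small numerical check ($q$ chosen arbitrarily below $1/2$):

```python
# Subcritical binary Galton-Watson: offspring 0 or 2, P(2) = q < 1/2.
# Extinction by generation m (one ancestor) is f^{(m)}(0), f(s)=1-q+q s^2.
q = 0.45
mu = 2 * q                       # mean offspring size, < 1

def p_extinct_by(m):
    s = 0.0
    for _ in range(m):
        s = (1 - q) + q * s * s
    return s

for m in (1, 5, 10, 20):
    surv = 1 - p_extinct_by(m)   # P(T > m)
    print(m, surv, mu**m)        # survival probability vs the bound mu^m
```

Since the process is subcritical, extinction is certain and the survival probability decays geometrically at rate $\mu$, which is exactly what drives (6.18).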

In keeping track of this branching process until $T_j$ we may in fact be tracing ancestral lineages beyond the interval $[j2^{-n_1},(j+1)2^{-n_1}]$, which is of primary interest to us, but any branching events in this interval will certainly be included in our Galton-Watson branching process, as they must occur within time $2^{-n_1}$ of their parent's branching time. The resulting key observation is therefore that $T_j$ is an upper bound for the largest number of binary splits in $[j2^{-n_1},(j+1)2^{-n_1}]$ over any line of descent of the entire population. Since an offspring can move at most $1/\sqrt N$ from the location of its parent at each branching event, $T_j/\sqrt N$ gives a (crude) upper bound on the maximum displacement of any particle in the population over the above time interval. It therefore follows from (6.19) that
\[
P\Big(\max_{j\le n_12^{n_1}}\ \sup_{t\in[j2^{-n_1},(j+1)2^{-n_1}],\,\beta\sim t}\big|B^\beta_t-B^\beta_{j2^{-n_1}}\big|>mN^{-1/2}\Big)\le n_12^{n_1}NX^N_0(1)\,\mu^m\le n_12^{2n_1+1}X^N_0(1)\,\mu^m.
\]
Now take $m$ to be the integer part of $\sqrt N\,h(2^{-n_1})$ in the above. A simple calculation (recall Assumption 2.5) now gives (6.18) and so completes the proof.

Proof of Proposition 2.4 Let $\varepsilon\in(0,1/2)$ and $T\in\mathbb N$. We use $n_0=n_0(\varepsilon,T)$ to denote a lower bound on $n$ which may increase in the course of the proof. Let $h(r)=r^{\frac12-\varepsilon}$ (as above) and $\phi(r)=r^{2\wedge d-\varepsilon}$ for $r>0$. Following Barlow, Evans and Perkins (1991), let $B_n(y)=B(y,3h(2^{-n}))$, $\mathcal B_n=\{B_n(y):y\in 2^{-n}\mathbb Z^d\}$, and for $n\ge n_0$ define a class $\mathcal C_n$ of subsets of $\mathbb R^d$ such that, for some constants $c_{6.22}$ and $c_{6.23}$ depending only on $\{X^N_0\}$ and $\varepsilon$,
\begin{align*}
&(6.20)\quad \forall B\in\mathcal B_n\ \exists C\in\mathcal C_n\ \text{s.t.}\ B\subset C,\\
&(6.21)\quad \forall C\in\mathcal C_n\ \exists B\in\mathcal B_n\ \text{s.t.}\ C\subset\tilde B\equiv\cup_{k\in\mathbb Z^d}(k+B),\\
&(6.22)\quad \forall C\in\mathcal C_n,\ N\in\mathbb N,\quad X^N_0\big(P^N_{j2^{-n}}R\mathbf 1_C\big)\le c_{6.22}\,\phi(h(2^{-n}))\ \text{for }j\in\{0,\dots,T2^n\},\\
&(6.23)\quad \mathrm{card}(\mathcal C_n)\le c_{6.23}\,T\,2^{c(d)n}.
\end{align*}

To construct $\mathcal C_n$, note first that, as in Lemma 4.2, for $B\in\mathcal B_n$,
\[
(6.24)\qquad \mu^N_t(B)\equiv X^N_0\big(P^N_tR\mathbf 1_B\big)\le 2\sup_x X^N_0\big(B(x,3h(2^{-n}))\big)\le c_0(\{X^N_0\},\varepsilon)\,\phi(h(2^{-n})).
\]
Here we have used Assumption 2.5. For a fixed $y\in(2^{-n}\mathbb Z^d)\cap[0,1)^d$, let $C$ be a union of balls in $\mathcal B_n$ of the form $k+B_n(y)$, $k\in\mathbb Z^d$, where such balls are added until $\max_{0\le j\le T2^n}\mu^N_{j2^{-n}}(C)>\phi(h(2^{-n}))$. It follows from (6.24) that for each $0\le j\le 2^nT$, $\mu_{j2^{-n}}(C)\le\phi(h(2^{-n}))+c_0\phi(h(2^{-n}))$, and so (6.22) holds with $c_{6.22}=1+c_0$. Continue the above process with new integer translates of $B_n(y)$ until every such translate is contained in a unique $C$ in $\mathcal C_n$. If $n_0$ is chosen so that $6h(2^{-n_0})<1$, then these $C$'s will be disjoint and all but one will satisfy $\max_{0\le j\le 2^nT}\mu_{j2^{-n}}(C)>\phi(h(2^{-n}))$. Therefore

(6.25)

number of C’s constructed from Bn (y) ≤

2 T X

˜n (y))φ(h(2−n ))−1 + 1 µj2−n (B

j=0

≤ X0N (1)T 2n φ(h(2−n ))−1 + 1. Now repeat the above for each y ∈ (2−n Zd ) ∩ [0, 1)d . Then (6.25) gives (6.23), and (6.20) and (6.21) are clear from the construction. Let 0 ≤ t ≤ T , x ∈ Rd and assume N1 ≤ 2−n ≤ δN , where δN = δN,ε is as in Proposition 6.5. Choose 0 ≤ j ≤ 2n T and y ∈ 2−n Zd so that t ∈ [j2−n , (j + 1)2−n ) and |y − x| ≤ 2−n ≤ h(2−n ), respectively. Let β ∼ t and Btβ ∈ B(x, h(2−n ). Then Proposition 6.5 implies β β β β |Bj2 ≤ |Bj2 −n − y| −n − Bt | + |Bt − x| + |x − y|

≤ 3h(2−n ), and so, (6.26)

β Bj2 −n ∈ Bn (y) ⊂ C

50

for some C ∈ Cn ,

where the last inclusion holds by (6.20). For $C\in\mathcal C_n$, let
\[
X_t^{N,j2^{-n},C}(A)=\frac1N\sum_{\beta\sim t}1\big(B^\beta_{j2^{-n}}\in C,\ B_t^\beta\in A\big),\qquad t\ge j2^{-n},
\]
denote the time-$t$ distribution of descendants of particles which were in $C$ at time $j2^{-n}$. Use (6.26) to see that $X_t^N(B(x,h(2^{-n})))\le X_t^{N,j2^{-n},C}(1)$, and therefore
\[
\sup_{0\le t\le T,\,x\in\mathbb R^d}X_t^N\big(B(x,h(2^{-n}))\big)\le\max_{j\le 2^nT,\,C\in\mathcal C_n}\ \sup_{t\le 2^{-n}}X^{N,j2^{-n},C}_{t+j2^{-n}}(1).\tag{6.27}
\]
The process $t\to X^{N,j2^{-n},C}_{t+j2^{-n}}(1)$ evolves like a continuous time critical Galton--Watson branching process (conditional on $\mathcal F_{j2^{-n}}$ it has law $P_{X^N_{j2^{-n}}|_C}$) and in particular is a martingale. If $\theta_n,\lambda_n>0$ we may therefore apply the weak $L^1$ inequality to the submartingale $\exp\big(\theta_n X^{N,j2^{-n},C}_{t+j2^{-n}}(1)\big)$ to see that (6.27) implies
\[
P\Big(\sup_{t\le T,\,x\in\mathbb R^d}X_t^N\big(B(x,h(2^{-n}))\big)>\lambda_n,\ 2^{-n}<\delta_N\Big)\le e^{-\theta_n\lambda_n}\sum_{j\le 2^nT}\sum_{C\in\mathcal C_n}E\Big[E_{X^N_{j2^{-n}}|_C}\big(\exp(\theta_n X^N_{2^{-n}}(1))\big)\Big].\tag{6.28}
\]
Let $\theta_n=2^{n(\frac{d\wedge2}2-3\varepsilon)}$ and $\lambda_n=2^{-n(\frac{d\wedge2}2-4\varepsilon)}$. For $n\ge n_0$ we may assume $2^{-n}\le e^{-4}(14\theta_n)^{-1}$, and as we clearly have $\theta_n\le 2^n\le N$, Corollary 6.3 implies
\[
E_{X^N_{j2^{-n}}|_C}\big(\exp(\theta_n X^N_{2^{-n}}(1))\big)\le\exp\big(4\theta_n X^N_{j2^{-n}}(C)\big).\tag{6.29}
\]
Next we apply Corollary 6.2 with $C\in\mathcal C_n$ as above, $\delta=\varepsilon$, and $r=h(2^{-n})\in[N^{-1/2},h(2^{-n_0})]$. Lemma 7.3 below and (6.21) imply hypothesis (6.10) of Corollary 6.2, and (6.22) implies
\[
X_0^N(P_{j2^{-n}}R1_C)\le c_{6.22}\,\phi(r)=c_{6.22}\,r^{d\wedge2-\varepsilon}\qquad\forall\,0\le j2^{-n}\le T.
\]
Note that for $n\ge n_0$,
\[
r^{\varepsilon-d\wedge2}=2^{-n(\frac12-\varepsilon)(\varepsilon-d\wedge2)}\ge 2^{n(\frac{d\wedge2}2-\varepsilon(d\wedge2))}\ge 4\theta_n.
\]
Therefore Corollary 6.2 shows that for $n\ge n_0(\varepsilon,T)$,
\[
E\big[\exp\big(4\theta_n X^N_{j2^{-n}}(C)\big)\big]\le e^{2c_{6.22}}\qquad\forall\,0\le j2^{-n}\le T.
\]
Use this in (6.29) and apply the resulting inequality and (6.23) in (6.28) to see that for $n\ge n_0(\varepsilon,T)$,
\begin{align*}
P\Big(\sup_{t\le T,\,x\in\mathbb R^d}X_t^N\big(B(x,h(2^{-n}))\big)\ge 2^{-n(\frac{d\wedge2}2-4\varepsilon)}\Big)&\le(2^nT+1)\,c_{6.23}\,T\exp(c(d)n)\exp(-2^{n\varepsilon})\,e^{2c_{6.22}}+P(\delta_N\le 2^{-n})\\
&\le T^2c(\{X_0^N\},\varepsilon)\exp\big(c'(d)n-2^{n\varepsilon}\big)+c_1(\varepsilon)\exp\big(-c_2(\varepsilon)2^{n\varepsilon}\big).
\end{align*}
We have used (6.14) from Proposition 6.5 in the last line. If $r_0(\varepsilon,T)=h(2^{-n_0})$, the above bound is valid for (recall $n\ge n_0$ and $2^{-n}\ge N^{-1}$)
\[
h(2^{-n})\in[N^{-\frac12+\varepsilon},\,r_0(\varepsilon,T)].
\]
A calculation shows that $2^{-n(1-4\varepsilon)}\le h(2^{-n})^{d\wedge2-8\varepsilon}\,2^{-n\varepsilon}$. Therefore elementary Borel--Cantelli and interpolation arguments (in $r$) show that if
\[
H_{\delta,N}(\varepsilon,T,r_0)=\sup_{0\le t\le T,\,x\in\mathbb R^d}\ \sup_{N^{-\frac12+\varepsilon}\le r\le r_0}\frac{X_t^N(B(x,r))}{r^{d\wedge2-\delta}},
\]
then for any $\eta>0$ there is an $M=M(\eta,\varepsilon,T)$ such that
\[
\sup_N P\big(H_{8\varepsilon,N}(\varepsilon,T,r_0(\varepsilon))\ge M\big)\le\eta.
\]
Here we are also using
\[
P\Big(\sup_{0\le t\le T}X_t^N(1)\ge K\Big)\le X_0^N(1)/K\le C/K
\]
by the weak $L^1$ inequality. The latter also allows us to replace $r_0(\varepsilon)$ by $1$ in the above bound. If we set $r'=r^{\frac1{1-2\varepsilon}}\le r$, so that $r\ge N^{-\frac12+\varepsilon}$ iff $r'\ge N^{-1/2}$, one can easily show
\[
\sup_{0\le t\le T,\,x\in\mathbb R^d}\ \sup_{N^{-1/2}\le r'\le1}\frac{X_t^N(B(x,r'))}{r'^{\,d\wedge2-10\varepsilon}}\le\sup_{0\le t\le T,\,x\in\mathbb R^d}\ \sup_{N^{-\frac12+\varepsilon}\le r\le1}\frac{X_t^N(B(x,r))}{r^{d\wedge2-8\varepsilon}},
\]
that is, $H_{10\varepsilon,N}(0,T,1)\le H_{8\varepsilon,N}(\varepsilon,T,1)$. Therefore we have shown that for each $T\in\mathbb N$ and $\varepsilon>0$, $H_{10\varepsilon,N}(0,T,1)=\sup_{0\le t\le T}\rho^N_{10\varepsilon}(X_t^N)$ is bounded in probability uniformly in $N$. The fact that $X^N_\cdot$ has a finite lifetime which is bounded in probability uniformly in $N$ (e.g. let $\theta\to-\infty$ in (6.13)) now lets us take $T=\infty$ and so complete the proof of Proposition 2.4.
7

Proof of Proposition 3.5 and Lemma 3.7

In this section we will verify some useful moment bounds on the positive colicin models which, through the domination in Proposition 2.2, will be important for the proof of our theorems. As usual, $|x|$ is the $L^\infty$ norm of $x$.

Lemma 7.1 Let $S_n=\sum_{i=1}^nX_i$, where $\{X_i,\ i\ge1\}$ are i.i.d. random variables, distributed uniformly on $(\mathbb Z^d/M)\cap[0,1]^d$. Then there exist $c,C>0$ such that
\[
P\Big(\frac{S_n}{\sqrt n}\in x+[-r,r]^d\Big)\le Ce^{-c(|x|^2\wedge(|x|\sqrt n))}\,r^d\tag{7.1}
\]
for all $(x,r)\in\mathbb R^d\times[n^{-1/2},1]$.
Proof. It suffices to bound $P(S_n/\sqrt n=x)$ for $x\in\mathbb Z^d/(M\sqrt n)$ by the right-hand side with $r=n^{-1/2}$, for then we can add up the points in the cube to get the result. For this, note that if $|x|\le2$ we can ignore the exponentials on the right-hand side, and for $|x|>2$ we have $|x'|>|x|/2$ for all $x'\in x+[-r,r]^d$ and any $r\le1$. The local central limit theorem implies
\[
\sup_zP(S_n=z)\le Cn^{-d/2}.\tag{7.2}
\]
Suppose now that $n=2m$ is even. For $1\le i\le d$,
\[
P(|S_n^i|\ge an,\ S_n=y)\le 2P(|S_m^i|\ge am,\ S_n=y),\tag{7.3}
\]
since for $|S_n^i|\ge an$ we must have $|S_m^i|\ge am$ or $|S_n^i-S_m^i|\ge am$, and the increments of $S_k^i$ conditional on $S_n=y$ are exchangeable. A standard large deviations result (see Section 1.9 in Durrett (2004)) implies
\[
P(|S_m^i|\ge am)\le e^{-\gamma(a)m},
\]
where $\gamma(a)=\sup_{\theta>0}\big(\theta a-\log\phi(\theta)\big)$ and $\phi(\theta)=E\exp(\theta X_1^i)$. $\gamma$ is convex with $\gamma(0)=0$, $\gamma'(0)=0$, and $\gamma''(0)>0$, so $\gamma(a)\ge c(a^2\wedge a)$. Let $y=x\sqrt n$ and choose $i$ so that $|y_i|=|y|$. Taking $a=|x|/\sqrt n$ it follows that
\[
P(|S_m^i|\ge am)\le Ce^{-c(|x|^2\wedge|x|\sqrt n)},
\]
so using (7.2) and (7.3) we have
\begin{align*}
P(S_n=y)=P(|S_n^i|\ge an,\ S_n=y)&\le 2P(|S_m^i|\ge am,\ S_n=y)\\
&\le 2E\big[1(|S_m^i|\ge am)\,P(S_m=y-S_m(\omega))\big]\\
&\le Ce^{-c(|x|^2\wedge|x|\sqrt n)}\,n^{-d/2}.
\end{align*}
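The lower bound $\gamma(a)\ge c(a^2\wedge a)$ used above follows from the stated properties of $\gamma$; the following sketch records the elementary argument (the constants $c_1$ and the threshold $a_0$ are generic, not constants from the paper):

```latex
% gamma convex, gamma(0) = gamma'(0) = 0, gamma''(0) > 0.
% Near the origin, Taylor's theorem gives gamma(a) >= c_1 a^2 on [0, a_0].
% For a >= a_0, convexity and gamma(0) = 0 make a -> gamma(a)/a nondecreasing,
% so gamma(a) >= (gamma(a_0)/a_0) a there.
\gamma(a)\;\ge\;
\begin{cases}
c_1\,a^2, & 0\le a\le a_0,\\[2pt]
\dfrac{\gamma(a_0)}{a_0}\,a, & a\ge a_0,
\end{cases}
\qquad\text{hence}\qquad
\gamma(a)\;\ge\;c\,(a^2\wedge a)\ \ \text{with } c=c_1\wedge\frac{\gamma(a_0)}{a_0}.
```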
The result for odd times $n=2m+1$ follows easily from the observation that if $S_n=x$ then $S_{n-1}=y$ where $y$ is a neighbor of $x$, and on step $n$ the walk must jump from $y$ to $x$.

Using the local central limit theorem for the continuous time random walk to estimate the probability of being at a given point leads easily to:

Lemma 7.2 There is a constant $c$ such that for all $s>0$
\[
\sup_x\Pi_0^N\big(B_s^N\in x+[-r,r]^d\big)\le\frac{cr^dN^{d/2}}{(Ns+1)^{d/2}},\qquad\forall r\ge N^{-1/2},\tag{7.4}
\]
and in particular for $r=1/\sqrt N$ and any $N\ge1$ we have
\[
\sup_x\Pi_0^N\big(B_s^N\in x+[-1/\sqrt N,1/\sqrt N]^d\big)\le\frac{c}{(Ns+1)^{d/2}}.\tag{7.5}
\]
We also used the following version of Lemma 7.2 for the torus in the proof of the concentration inequality in the previous section. This time we give the details.

Notation. If $x\in\mathbb R^d$, $r>0$, let $C(x,r)=\cup_{k\in\mathbb Z^d}B(x+k,r)$.

Lemma 7.3 There is a constant $C$ such that
\[
\sup_{x\in\mathbb R^d}\Pi_0^N\big(B_u^N\in C(x,r)\big)\le Cr^d(1+u^{-d/2})
\]
for all $u>0$, $r\in[N^{-1/2},1]$.
Proof. Note first it suffices to consider $x\in[-1,1]^d$ and
\[
N^{-1/2}\le r\le\sqrt u/2.\tag{7.6}
\]
If $\{S_n\}$ is as in Lemma 7.1, we may define $B_u^N=S_{\tau_u^N}/\sqrt N$, where $\tau^N$ is an independent rate $N$ Poisson process. We have
\[
\Pi_0^N\big(B_u^N\in C(x,r)\big)\le P\big(\tau_u^N\notin[Nu/2,2Nu]\big)+\sum_{Nu/2\le j\le 2Nu}P(\tau_u^N=j)\sum_{k\in\mathbb Z^d}P\Big(\frac{S_j}{\sqrt j}\in B\Big(\sqrt{\tfrac Nj}(x+k),\sqrt{\tfrac Nj}\,r\Big)\Big).\tag{7.7}
\]
Standard exponential bounds imply $P(\tau_u^N\notin[Nu/2,2Nu])\le e^{-cNu}$ for some $c>0$. By (7.6), we have $j^{-1/2}\le r\sqrt{N/j}\le1$ for $Nu/2\le j\le 2Nu$, and so we may use Lemma 7.1 to bound the series on the right-hand side of (7.7) by
\begin{align}
\sum_{Nu/2\le j\le 2Nu}P(\tau_u^N=j)\sum_{k\in\mathbb Z^d}C\exp\Big(-c\Big(\frac{|x+k|^2N}{j}\wedge|x+k|\sqrt N\Big)\Big)(N/j)^{d/2}r^d&\le 2^{d/2}C\sum_{k\in\mathbb Z^d}\exp\Big(-c\Big(\frac{|x+k|^2}{2u}\wedge|x+k|\sqrt N\Big)\Big)u^{-d/2}r^d\notag\\
&\le C\sum_{k\in\mathbb Z^d}\Big(\exp\Big(\frac{-c|x+k|^2}{u}\Big)+\exp(-c|x+k|)\Big)u^{-d/2}r^d\tag{7.8}\\
&\le C(u^{d/2}+1)\,u^{-d/2}r^d.\notag
\end{align}
In the last line we have carried out an elementary calculation (recall $|x|\le1$) and as usual are changing constants from line to line. Use the above and (7.8) in (7.7) to get
\[
\sup_x\Pi_0^N\big(B_u^N\in C(x,r)\big)\le e^{-cNu}+C(1+u^{-d/2})r^d\le C(Nu)^{-d/2}+C(1+u^{-d/2})r^d\le Cr^du^{-d/2}+C(1+u^{-d/2})r^d,
\]
since $r\ge N^{-1/2}$. The result follows.

Eliminating times near $0$ and small $|x|$ we can improve Lemma 7.2.
≤ C(N u)−d/2 + C(1 + u−d/2 )rd ≤ Crd u−d/2 + C(1 + u−d/2 )rd , since r ≥ N −1/2 . The result follows. Eliminating times near 0 and small |x| we can improve Lemma 7.2. Lemma 7.4 Let 0 < δ ∗ < 1/2, 0 < δ 0 < 1. Then for any a ≥ 0, there exists c = c(a, δ, δ ∗ )) > 0 such that   d N ΠN B ∈ x + [−r, r] ≤ crd (sa ∧ 1), 0 s 0



for all r ≥ N −1/2 , s ≥ N δ −1 , and |x| ≥ s1/2−δ .

54

Proof. As before, it suffices to consider $r=N^{-1/2}$. Let $\tau_s^N$ be the number of jumps of $B^N$ up to time $s$. Then, as in (7.8), there exists $c>0$ such that
\[
\Pi_0^N\big(\tau_s^N>2Ns\ \text{or}\ \tau_s^N<Ns/2\big)\le e^{-csN}.\tag{7.9}
\]
Now for $Ns/2\le j\le 2Ns$ and $r\ge N^{-1/2}$, Lemma 7.1 implies
\[
P\Big(\frac{S_j}{\sqrt j}\in\sqrt{\tfrac Nj}\big(x+[-N^{-1/2},N^{-1/2}]^d\big)\Big)\le C\exp\Big(-c\Big(\frac{|x|^2N}{j}\wedge|x|\sqrt N\Big)\Big)j^{-d/2}.
\]
Using $2Ns\ge j\ge Ns/2$, $|x|\ge s^{1/2-\delta^*}$, and calculus, the above is at most
\[
C\exp\big(-c(|x|^2/s\wedge|x|/\sqrt s)\big)N^{-d/2}s^{-d/2}\le C\big[\exp(-cs^{-2\delta^*})+\exp(-cs^{-\delta^*})\big]N^{-d/2}s^{-d/2}\le C(a,\delta^*)(s^a\wedge1)N^{-d/2}.
\]
To combine this with (7.9) to get the result, we note that $(s^a\wedge1)^{-1}e^{-csN/2}$ is decreasing in $s$, so if $s\ge N^{\delta'-1}$, then
\[
\frac{e^{-csN/2}}{s^a\wedge1}\le\frac{e^{-cN^{\delta'}/2}}{N^{a(\delta'-1)}}\le C(\delta',a).
\]
Using $e^{-cN^{\delta'}/2}\le c(\delta')N^{-d/2}$ we see that the right-hand side of (7.9) is also bounded by
\[
C(\delta',a)(s^a\wedge1)N^{-d/2},
\]
and the proof is completed by combining the above bounds.

Lemma 7.5 For any $0<\delta^*<1/2$, let
\[
Q_N=\big\{(s,x):|x|\ge s^{1/2-\delta^*}\vee N^{-1/2+\delta^*}\big\}.
\]
Let $I_N(s,x)=N^{d/2}\Pi_0^N\big(|B_s^N-x|\le1/\sqrt N\big)$. Then there exists a constant $c_{7.10}=c_{7.10}(\delta^*)$ such that
\[
\sup_{N\ge1}\ \sup_{(s,x)\in Q_N}I_N(s,x)\le c_{7.10}.\tag{7.10}
\]
Proof. Fix $0<\delta'<\delta^*$. If $s\ge N^{\delta'-1}$ then the result follows from Lemma 7.4. For $s\le N^{\delta'-1}$ and $|x|\ge N^{-1/2+\delta^*}$, $|B_s^N-x|\le1/\sqrt N$ implies $|B_s^N|>N^{-1/2+\delta^*}/2$. Therefore for $p\ge2$, a simple martingale inequality implies
\[
I_N(s,x)\le N^{d/2}\Pi_0^N\big(|B_s^N|>N^{-1/2+\delta^*}/2\big)\le c_pN^{d/2}N^{p(1/2-\delta^*)}\Pi_0^N\big[|B_s^N|^p\big]\le c_pN^{d/2}N^{p(1/2-\delta^*)}s^{p/2}\le c_pN^{d/2+p(\delta'/2-\delta^*)}\le c_p
\]
if $p$ is large enough so that $d/2+p(\delta'/2-\delta^*)<0$.

Recall that in (3.5) we fixed the constants $\delta,\tilde\delta$ such that $0<\delta<\tilde\delta<1/6$.

Lemma 7.6 There exists $c_{7.11}=c_{7.11}(\delta,\tilde\delta)$ such that for all $s>0$, $z'\in Z_N$ and $\mu^N\in M_F(Z_N)$,
\[
N^{d/2}\sup_{|z_1|\le1/\sqrt N}\int_{\mathbb R^d}\Pi_0^N\big(|B_s^N+z'+z_1-y|\le1/\sqrt N\big)\,\mu^N(dy)\le c_{7.11}\,\hat\rho_\delta^N(\mu^N)\,s^{-l_d-\tilde\delta}.\tag{7.11}
\]
Proof. Recall $l_d=(d/2-1)^+$. Let $s\ge1$. Then from Lemma 7.2 we immediately get that
\[
N^{d/2}\sup_{|z_1|\le1/\sqrt N}\int_{\mathbb R^d}\Pi_0^N\big(|B_s^N+z'+z_1-y|\le1/\sqrt N\big)\,\mu^N(dy)\le N^{d/2}\frac{c}{(Ns+1)^{d/2}}\,\mu^N(1)\le c\mu^N(1)s^{-d/2}\le c\mu^N(1)s^{-l_d-\tilde\delta},
\]
and hence for $s\ge1$ we are done.

Now let $0<s<1$ and fix $0<\delta^*<1/2$ such that $\delta/2+\delta^*((2\wedge d)-\delta)=\tilde\delta$. A little algebra converts the condition into
\[
-d/2+((2\wedge d)-\delta)(1/2-\delta^*)=-l_d-\tilde\delta.\tag{7.12}
\]
Define $A_N=\{y:|z'-y|\le(s^{1/2-\delta^*}\vee N^{-1/2+\delta^*})+N^{-1/2}\}$. Let $I_N^1$ and $I_N^2$ be the contributions to (7.11) from the integrals over $A_N$ and $A_N^c$, respectively. Let $s\ge1/N$. Then we apply Lemma 7.2 to bound $I_N^1$ as follows:
\[
I_N^1\le N^{d/2}\frac{c}{(Ns+1)^{d/2}}\,\mu^N(A_N)\le\hat\rho_\delta^N(\mu^N)\frac{c}{(s+1/N)^{d/2}}\big((s^{1/2-\delta^*}\vee N^{-1/2+\delta^*})+N^{-1/2}\big)^{2\wedge d-\delta}\le c\,\hat\rho_\delta^N(\mu^N)\frac{(2s^{1/2-\delta^*})^{2\wedge d-\delta}}{s^{d/2}}\le c\,\hat\rho_\delta^N(\mu^N)s^{-l_d-\tilde\delta},
\]
where (7.12) is used in the last line. If $s\le1/N$ we get (using (7.12) again)
\[
I_N^1\le N^{d/2}\mu^N\big(B(z',N^{-1/2+\delta^*}+N^{-1/2})\big)\le c\,\hat\rho_\delta^N(\mu^N)N^{d/2}N^{(-1/2+\delta^*)(2\wedge d-\delta)}=c\,\hat\rho_\delta^N(\mu^N)N^{l_d+\tilde\delta}\le c\,\hat\rho_\delta^N(\mu^N)s^{-l_d-\tilde\delta}.
\]
As for the second term $I_N^2$, note that for any $y\in A_N^c$ and $|z_1|\le1/\sqrt N$ we have
\[
|z'+z_1-y|\ge s^{1/2-\delta^*}\vee N^{-1/2+\delta^*}.\tag{7.13}
\]
Then by (7.13) and Lemma 7.5 we get
\[
I_N^2\le c_{7.10}\,\mu^N(1)\le c_{7.10}\,s^{-l_d-\tilde\delta}\mu^N(1),
\]
where the last inequality is trivial since $s<1$. Summing our bounds on $I_N^1$ and $I_N^2$ gives the required inequality for $0<s<1$ and the proof is complete.
Return now to the proof of Proposition 3.5. Let $\phi:Z\to[0,\infty)$. By our choice of $\tilde\delta$, $\tilde l_d\equiv1-l_d-\tilde\delta>0$. From (2.19), Lemma 7.6, and the definition of $H_{\delta,N}$ we get the following bounds by using the Markov property of $B^N$:
\begin{align}
P^{g_N}_{s,t}(\phi)(x_1)&\le\Pi^N_{s,x_1}\big[\phi(B_t^N)\big]+\gamma_2^+2^{-d}\int_s^t\int_{Z_N}N^{d/2}\sum_{|z|\le N^{-1/2}}\Pi^N_{s,x_1}\big[B^N_{s_1}=y_1+z\big]\,\Pi^N_{s_1,y_1+z}\big[\phi(B_t^N)\big]\,\bar X^{1,N}_{s_1}(dy_1)\,ds_1\notag\\
&\quad+\sum_{n=2}^\infty2^{-dn}(\gamma_2^+)^nc_{7.11}^n\Big(H_{\delta,N}+\sup_{s\le u\le t}\bar X_u^{1,N}(1)\Big)^n\int_{\mathbb R_+^n}1(s<s_1<\dots<s_n\le t)\prod_{i=1}^n(s_i-s_{i-1})^{-l_d-\tilde\delta}\,\sup_x\Pi^N_{s_n,x}\big[\phi(B_t^N)\big]\,ds_1\dots ds_n,\tag{7.14}
\end{align}
and for $\mu^N\in M_F(Z_N)$,
\begin{align}
\int P^{g_N}_{s,t}(\phi)(x_1)\,\mu^N(dx_1)&\le\int\Pi^N_{s,x_1}\big[\phi(B_t^N)\big]\mu^N(dx_1)+\hat\rho_\delta^N(\mu^N)\sum_{n=1}^\infty2^{-dn}(\gamma_2^+)^nc_{7.11}^n\Big(H_{\delta,N}+\sup_{s\le u\le t}\bar X_u^{1,N}(1)\Big)^{n-1}\notag\\
&\quad\times\int_{\mathbb R_+^n}1(s<s_1<\dots<s_n\le t)\prod_{i=1}^n(s_i-s_{i-1})^{-l_d-\tilde\delta}\int\sup_{|z_n|\le N^{-1/2}}\Pi^N_{s_n,y_n+z_n}\big[\phi(B_t^N)\big]\,\bar X^{1,N}_{s_n}(dy_n)\,ds_1\dots ds_n.\tag{7.15}
\end{align}
In the above integrals, $s_0=0$ and we have distributed the $\bar X^{1,N}_{s_i}$ integrations among the $B^N$ increments in different manners in (7.14) and (7.15). Define
\[
J_n(s_n)\equiv\int_{\mathbb R_+^{n-1}}1(s_1<\dots<s_n)\prod_{i=1}^n(s_i-s_{i-1})^{-l_d-\tilde\delta}\,ds_1\dots ds_{n-1},\qquad n\ge1.
\]
Some elementary calculations give
\[
J_n(s_n)=\frac{\Gamma(\tilde l_d)^n}{\Gamma(n\tilde l_d)}\,s_n^{(n-2)\tilde l_d+1-2l_d-2\tilde\delta},\qquad\forall n\ge1.\tag{7.16}
\]
Define
\begin{align}
R^N_{7.17}(t,\omega)&\equiv\sum_{n=2}^\infty\big(R_N(\omega)2^{-d}\gamma_2^+c_{7.11}\Gamma(\tilde l_d)\big)^n\,\frac{t^{(n-2)\tilde l_d+1-2l_d}}{\Gamma(n\tilde l_d)},\tag{7.17}\\
R^N_{7.18}(t,\omega)&\equiv\sum_{n=1}^\infty\big(2^{-d}\gamma_2^+c_{7.11}\Gamma(\tilde l_d)\big)^nR_N(\omega)^{n-1}\,\frac{t^{(n-1)\tilde l_d}}{\Gamma(n\tilde l_d)}.\tag{7.18}
\end{align}
Note that
\[
R^N_{7.17}(t,\omega)+R^N_{7.18}(t,\omega)=\sup_{s\le t}\big(R^N_{7.17}(s,\omega)+R^N_{7.18}(s,\omega)\big)<\infty,\quad P\text{-a.s.},\tag{7.19}
\]
where the last inequality follows since $C^n/\Gamma(n\epsilon)\to0$ as $n\to\infty$ for any $C>0$ and $\epsilon>0$ (recall that $d\le3$, $\tilde l_d>0$, $1-2l_d\ge0$). Moreover, (2.21) immediately yields
\[
\text{for each }t>0,\ R^N_{7.17}(t,\omega)+R^N_{7.18}(t,\omega)\text{ is bounded in probability uniformly in }N\text{ (tight)}.\tag{7.20}
\]
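The finiteness in (7.19) comes down to the fact that $C^n/\Gamma(n\epsilon)\to0$ for any $C>0$, $\epsilon>0$, so series of the shape (7.17)--(7.18) converge for every value of $R_N(\omega)$. A quick stdlib-only numerical illustration (the values $C=2$, exponent $0.5$ and $t=1$ below are arbitrary test choices, not constants from the paper):

```python
import math

def term(n, C=2.0, l=0.5, t=1.0):
    # n-th term of sum_n (C t^l)^n / Gamma(n l): the generic shape of
    # R_{7.17} and R_{7.18}.  Gamma(n l) grows super-geometrically, so the
    # terms eventually crush any geometric factor.
    return (C * t ** l) ** n / math.gamma(n * l)

terms = [term(n) for n in range(1, 200)]
partial_sums, s = [], 0.0
for x in terms:
    s += x
    partial_sums.append(s)

assert terms[-1] < 1e-10                                 # terms decay to zero ...
assert abs(partial_sums[-1] - partial_sums[-50]) < 1e-8  # ... and the series converges
```

Increasing $C$ only delays, and never prevents, the decay, which mirrors why (7.19) holds almost surely.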
Now we are ready to give some bounds on expectations.

Lemma 7.7 Let $\phi:Z\to[0,\infty)$. Then, for $0\le s<t$,
\begin{align*}
\Pi^N_{s,x_1}\Big[\phi(B_t^N)\exp\Big(\int_s^t\gamma_2^+g_N(\bar X_r^{1,N},B_r^N)\,dr\Big)\Big]&\le\Pi^N_{s,x_1}\big[\phi(B_t^N)\big]+\gamma_2^+2^{-d}\int_s^t\int_{Z_N}N^{d/2}\sum_{|z|\le N^{-1/2}}\Pi^N_{s,x_1}\big[B^N_{s_1}=y_1+z\big]\,\Pi^N_{s_1,y_1+z}\big[\phi(B_t^N)\big]\,\bar X^{1,N}_{s_1}(dy_1)\,ds_1\\
&\quad+R^N_{7.17}(t)\int_s^t(s_n-s)^{-2\tilde\delta}\,\sup_x\Pi^N_{s_n,x}\big[\phi(B_t^N)\big]\,ds_n.
\end{align*}
Proof. Immediate from (7.14) and (7.16).
Proof of Proposition 3.5. (a) Immediate from (7.15), (7.18) and (7.20) with $\bar R_N(t)=R^N_{7.18}(t)$.
(b) By Lemma 7.7, Lemma 7.6 and the definition of $R_N$ we get
\begin{align*}
\Pi^N_{s,x_1}\Big[\phi(B_t^N)\exp\Big(\int_s^t\gamma_2^+g_N(\bar X_r^{1,N},B_r^N)\,dr\Big)\Big]&\le\|\phi\|_\infty\Big(1+\gamma_2^+2^{-d}\int_s^t\int_{Z_N}N^{d/2}\Pi^N_{s,x_1}\big(|B^N_{s_1}-y_1|\le N^{-1/2}\big)\,\bar X^{1,N}_{s_1}(dy_1)\,ds_1+R^N_{7.17}(t)\int_s^t(s_n-s)^{-2\tilde\delta}\,ds_n\Big)\\
&\le\|\phi\|_\infty\Big(1+2^{-d}\gamma_2^+c_{7.11}R_N\int_s^t(s_1-s)^{-l_d-\tilde\delta}\,ds_1+R^N_{7.17}(t)\,t^{1-2\tilde\delta}\Big)\\
&\le\|\phi\|_\infty c\big(1+R_Nt^{1-l_d-\tilde\delta}+R^N_{7.17}(t)\,t^{1-2\tilde\delta}\big)\\
&=\|\phi\|_\infty\bar R_N(t),
\end{align*}
and we are done.
|zn |≤N −1/2

By Lemma 7.6 the first term in (7.21) is bounded by ˜

N −δ−ld c7.11 %ˆN δ (µ )t

(7.22) uniformly on x1 .

Now it is easy to check that Z t ˜ ˜ (7.23) s−2δ (t − s)−ld −δ ds ≤ c(t), and c(t) is bounded uniformly on the compacts. 0

¯ N (t) may change from line to line, to show Apply (7.23) and Lemma 7.6, and recall that R that the second term in (7.21) is bounded by (7.24)

N ¯ c(t)ˆ %N δ (µ )RN (t)RN

≤ ≤

Z

t

˜

0 N N ¯ 1−2ld −2δ˜ %ˆδ (µ )RN (t)t ˜ d N ¯ −δ−l %ˆN δ (µ )RN (t)t

Now put together (7.22, (7.24) to get that (7.21) is bounded by ˜

N ¯ −ld −δ %ˆN δ (µ )RN (t)t

and we are done.

59

˜

δ−ld s− (t − sn )−ld −δ dsn n

R (b) Apply Lemma 7.7 with φ(x) = N d/2 Z 1(|x − y| ≤ N −1/2 )µN (dy) to get Z   √  gN P0,t N d/2 1 |· − y| ≤ 1/ N (x1 )µN (dy) ZN

gN = P0,t φ(x1 ) Z  √  N N ≤ 1/ B − y ≤ N d/2 ΠN N µ (dy) t x1 Rd Z tZ X √   d/2 N  N + −d N ≤ 1/ B − y N d/2 ΠN B = y + z N Π (7.25) + γ2 2 N 1 t−s1 s1 y1 +z x1 0

2 ZN

|z|≤N −1/2

¯ s1,N (dy1 )µN (dy) ds1 ×X 1   Z t Z −2δ˜ N d/2 ¯ sn sup Πx1 N + RN (t) x1

0

ZN

 √ N Bt−s − y ≤ 1/ N µN (dy) dsn . n

By Lemma 7.6 the first term in (7.25) is bounded by ˜

N −ld −δ c7.11 %ˆN δ (µ )t

(7.26)

uniformly on x1 . Let us consider the second term in (7.25). First apply Lemma 7.6 to bound the integrand. Z X √  1,N  d/2 N  N N ¯ (dy1 )µN (dy) N d/2 ΠN N X B = y + z N Π B ≤ 1/ − y 1 x1 s1 s1 y1 +z t−s1 2 ZN

|z|≤N −1/2

 ≤

Z

sup N z1 Z ZN N d/2 × ZN

d/2

ΠN z1

  √  N N Bt−s − y ≤ 1/ N µ (dy) 1

X

 1,N N ¯ ΠN x1 Bs1 = y1 + z Xs1 (dy1 )

|z|≤N −1/2 ˜

˜

−ld −δ N (t − s1 )−ld −δ . ≤ cˆ %N δ (µ )RN s1

Hence the second term in (7.25) is bounded by Z t ˜ ˜ N N ¯ N ¯ 1−2ld −2δ˜ d −δ %ˆδ (µ )RN (t) s−l (t − s1 )−ld −δ ds1 ≤ cˆ %N . δ (µ )RN (t)t 1 0

N ¯ −ld −δ˜ ≤ %ˆN δ (µ )RN (t)t

(7.27)

Now apply (7.23) and Lemma 7.6 to show that the third term in (7.25) is bounded by (7.28)

N ¯ c(t)ˆ ρN δ (µ )RN (t).

Now put together (7.26), (7.27), (7.28) to bound (7.25) by N ¯ −ld −δ˜ %ˆN δ (µ )RN (t)t

and we are done.

60

8

Proof of Lemmas 4.3, 4.4 and 4.5

We start with the Proof of Lemma 4.3  ψN (x , x1 )

= 2

−d

N

d/2

Z

X

≤ 

+

e−αs N d/2

e−αs pN 2s (x1 − x − z) ds1 (|x − x1 | ≤ )

0

|z|≤N −1/2 ∞

Z



X

pN 2s (x1 − x − z) ds1 (|x − x1 | ≤ )

|z|≤N −1/2 X Z  N d/2 e−αs pN 2s (x1 0 −1/2 |z|≤N

− x − z) ds

≡ I1,N (x, x1 , z1 ) + I2,N (x, x1 , z1 ) . By Lemma 7.2 we get the following bound on I1,N : Z ∞ 1,N I (x, x1 , z1 ) ≤ c s−d/2 e−αs ds1 (|x − x1 | ≤ ) 

≤ chd ()1 (|x − x1 | ≤ ) . Now use condition UBN to get Z Z I1,N (x, x1 , z1 )µN (dx) ≤ ch () d 0 ZN

1 (|x − x1 | ≤ ) µN (dx)

ZN

≤ ≤

(8.1)

N (2∧d)−δ˜ cˆ %N δ (µ )hd () N 1−δ˜ , ∀x1 ∈ ZN cˆ %N δ (µ )

.

As for I2,N , apply Lemma 7.6 to get Z  Z ˜ N (µ ) s−ld −δ ds I2,N (x, x1 )µN (dx) ≤ c7.11 %ˆN δ ZN



(8.2)

0 N N 1−ld −δ˜ cˆ %δ (µ ) ,

∀x1 ∈ ZN .

By combining (8.1) and (8.2) we are done. Recall from (4.3) that   1, 1 + ln+ (1/t), hd (t) ≡  1−d/2 t ,

if d = 1, if d = 2, if d = 3.

Recall also that α > 0 if d ≤ 2, and α ≥ 0 if d = 3. Lemma 8.1 There is a c = cα > 0 such that  α  N ≤ chd (t) ∀t > 0, ∀x1 , x2 ∈ ZN . ΠN x1 GN 1(Bt , x2 ) 61

Proof

First apply Lemma 7.2 to get    α N  d/2 ΠN = ΠN x1 GN 1(Bt , x2 ) x1 N

t

Z

X

 N  e−αs pN 2s (Bt − x2 − z) ds

0

|z|≤N −1/2

  d/2 + ΠN x1 N ≤ ct−d/2

t

e−αs ds + c

≤ c(t



Z

N  e−αs pN 2s (Bt − x2 − z) ds

e−αs s−d/2 ds

t

0 1−d/2



t

|z|≤N −1/2

Z



Z

X

1(t ≤ 1) + t

−d/2

Z

t

e−αs ds1(t ≥ 1)) + chd (t) .

0

Now a trivial calculation shows that c(t

1−d/2

1(t ≤ 1) + t

−d/2

Z

t

e−αs ds1(t ≥ 1)) ≤ chd (t)

0

and we are done.

Notation, assumptions. N, 

Until the end of this section we will make the following assumption on 0 <  < 1/2, N ≥ −2 .

(8.3)

We will typically reserve notation µN for measures in MF (ZN ) and c(t) will denote a constant depending on t ≥ 0 which is bounded on compacts. Also recall that δ, δ˜ satisfy (3.5). Lemma 8.2 If 0 < η < (2/d) ∧ 1, there exists a constant c = cη > 0 such that     chd (t), if 0 < t ≤ η N ΠN ψ (B , x ) ≤ 2 x1 N t 1−dη/2 c otherwise, uniformly in x1 , x2 ∈ Z, N ≥ −2 . Proof

First let us treat the case t ≤ η . By Lemma 8.1, for all x1 , x2 ∈ ZN ,     α  N N ΠN ≤ ΠN x1 ψN (Bt , x2 ) x1 GN 1(Bt , x2 ) ≤ chd (t) .

(8.4)

Now let us turn to the case t > η . Then    d/2   N  N ΠN = ΠN x1 ψN (Bt , x2 ) x1 1 Bt − x2 ≤  N

X |z|≤N −1/2

Z





N  e−αs pN 2s (Bt − x2 − z) ds



  d/2  N + ΠN x1 1 Bt − x2 ≤  N

X |z|≤N −1/2

≡ I1 + I2 . 62

Z 0



 N  e−αs pN 2s (Bt − x2 − z) ds

Then for any x1  I1

X

 d/2  N ≤ sup ΠN x1 1 Bt − x ≤  N x

|z|≤N −1/2

 Z N  N ≤ sup Πx1 1 Bt − x ≤  c x



−αs −d/2

e



Z

s

 N  e−αs pN 2s (Bt − x − z) ds



 ds

(by (7.5))



≤ chd ()d t−d/2

(by Lemma 7.2)

d−ηd/2

(since t > η )

≤ chd ()

≤ c1−ηd/2 , where the last inequality follows by the definition of hd . Now apply again Lemma 7.2 and the assumption t > η to get   X Z  N  d/2  I2 ≤ ΠN e−αs pN x1 N 2s (Bt − x2 − z) ds |z|≤N −1/2

≤ ct−d/2

Z

0



ds 0

≤ c−ηd/2  and we are done. We will also use the trivial estimate ˜

hd (t) ≤ ct−ld −δ , t ≤ 1.

(8.5)

The proof of the following trivial lemma is omitted. Lemma 8.3 Let f (t) be a function such that  chd (t), f (t) ≤ c1−dη/2

if 0 < t ≤ η otherwise.

Then for 0 < η < 1/2 and some c(η, t), bounded for t in compacts, we have Z t ˜ ˜ (8.6) s−2δ f (t − s) ds ≤ c(t)η(1−ld −3δ) . 0

The next several lemmas will give some bounds on the Green’s functions GαN used in Section 4. ¯ N (t)}t≥0 (possibly depending on η), such that for Lemma 8.4 For any 0 < η < 2/7 there exists {R all 0 <  < 1/2,  ¯ N (t), hd (t − s)R if 0 < t − s ≤ η , gN  Ps,t (ψN (·, x2 )) (x1 ) ≤ ˜ ¯ if t − s > η , η(1−ld −3δ) R N (t) uniformly in x1 , x2 , N ≥ −2 . 63

Proof gN Ps,t

First, by Lemma 7.7 we get     N (ψN (·, x2 )) (x1 ) ≤ ΠN s,x1 ψN (x2 , Bt ) Z tZ + −d + γ2 2 N d/2 s

ZN

X

 N    N N ΠN s,x1 Bs1 = y1 + z Πs1 ,y1 +z ψN (x2 , Bt )

|z|≤N −1/2

¯ 1,N (dy1 ) ds1 ×X s1 Z t   ¯ N (t) (sn − s)−2δ˜ sup ΠN ψ  (x2 , B N ) dsn +R x1 N t−sn x1 ,x2

s

≡ I

(8.7)

1,N

(s, t) + I

2,N

(s, t) + I 3,N (s, t).

Note that I 1,N (s, t) = I 1,N (t − s), and hence, if t − s ≥ η , then by Lemma 8.2, I 1,N (s, t) in (8.7) is bounded by ˜

c1−dη/2 ≤ cη(1−ld −3δ) , ∀ ≤ 1,

(8.8)

˜ by our assumptions on η, δ. ˜ To bound where the inequality follows since 1 − dη/2 ≥ η(1 − ld − 3δ) 1,N η I (s, t) for t − s ≤  use again Lemma 8.2 and hence obtain (8.9)

I

1,N

 (s, t) ≤

if 0 < t − s ≤ η , if t − s > η .

chd (t − s), ˜ cη(1−ld −3δ)

Next consider I 2,N (s, t). First bound the integrand: Z X  N    1,N N N ¯ N d/2 ΠN s,x1 Bs1 = y1 + z Πs1 ,y1 +z ψN (x2 , Bt ) Xs1 (dy1 ) ZN

|z|≤N −1/2



sup x∈ZN

Z ×

ΠN x

!    N ψN (x2 , Bt−s1 )

N d/2

ZN

X

 1,N N ¯ ΠN x1 Bs1 −s = y1 + z Xs1 (dy1 )

|z|≤N −1/2

   ˜ N ≤ c7.11 RN (s1 − s)−ld −δ sup ΠN x ψN (x2 , Bt−s1 ) ,

(8.10)

s ≤ s1 ≤ t,

x∈ZN

where the last inequality follows by Lemma 7.6 and definition of RN . Now for t − s ≤ η we apply (8.10), (8.5), and Lemma 8.2 to get (8.11)

I

2,N

(t) ≤

γ2+ 2−d c7.11 RN Z

t

Z

t

˜

(s1 − s)−ld −δ

sup x,x2 ∈ZN

s ˜

˜

   N ΠN x ψN (x2 , Bt−s1 ) ds1

(s1 − s)−ld −δ (t − s1 )−ld −δ ds1 s Z 1 ˜ ˜ 1−2ld −2δ˜ ≤ cRN (t − s) u−ld −δ (1 − u)−ld −δ du ≤ cRN

0

≤ c(t)RN hd (t − s), t − s ≤ η . 64

Now, for t − s ≥ η use (8.10) to get (8.12)

I

2,N

Z

γ2+ 2−d c7.11 RN

(s, t) ≤

t−2η

Z ≤ cRN

s

t

   ˜ N (s1 − s)−ld −δ sup ΠN x ψN (x2 , Bt−s1 ) ds1 x∈ZN

   ˜ N (s1 − s)−ld −δ sup ΠN x ψN (x2 , Bt−s1 ) ds1 x∈ZN

s

Z

t

+ cRN t−2η

   ˜ N (s1 − s)−ld −δ sup ΠN x ψN (x2 , Bt−s1 ) ds1 . x∈ZN

Let us estimate the first integral on the right hand side of (8.12). By Lemma 8.2 with 2η < 2/3 in place of η, it is bounded by

1−ηd

Z

t−2η

c

˜

(s1 − s)−ld −δ ds1

s

= c(t)1−ηd ˜

≤ c(t)η(1−ld −3δ) .

(8.13)

The last line follows as in (8.8) and uses η < 2/7. Now, let us estimate the second integral on the right hand side of (8.12). Since t − s ≥ η , we see that for  < 1/2, there is a constant c = c(η) such that t − 2η ≥ s + η − 2η ≥ s + cη , and hence the second integral on the right hand side of (8.12) is bounded by Z t    ˜ η(−ld −δ) N c sup ΠN (8.14) x ψN (x2 , Bt−s1 ) ds1 t−2η x∈ZN

˜ 2η(1−ld −δ) ˜ η(−ld −δ)

≤ c



(by (8.5) and Lemma 8.2).

Since ld ≤ 1/2, the right-hand side of (8.14) is bounded by ˜

cη(1−ld −3δ) .

(8.15) Combining (8.11)—(8.15) we get (8.16)

I

2,N

 (s, t) ≤

c(t)RN hd (t − s), ˜ c(t)RN η(1−ld −3δ)

if 0 < t − s ≤ η , if t − s > η .

For I 3,N (t), we apply Lemmas 8.2 and 8.3 to get ˜ ¯ N (t)η(1−ld −3δ) I 3,N (s, t) ≤ R

(8.17) ˜

Moreover, since η(1−ld −3δ) ≤ hd (t − s) for t − s ≤ 1, we obtain   ˜ ¯ N (t). (8.18) I 3,N (s, t) ≤ η(1−ld −3δ) 1(t − s ≥ η ) + hd (t − s)1(t − s ≤ η ) R 65

Now put together (8.9), (8.16), (8.18) to bound (8.7) by   ˜ ¯ N (t), (8.19) η(1−ld −3δ) 1(t − s ≥ η ) + hd (t − s)1(t − s ≤ η ) R and we are done.

Lemma 8.5 There is a c = cα such that Z N N sup GαN 1(x , x1 )µN (dx) ≤ cˆ %N δ (µ ), ∀µ ∈ MF (ZN ), ∀N ∈ N , x1

Proof

ZN

By Lemma 7.2 we get GαN 1(x , x1 )

≤ N

1

Z

X

d/2

e−αs pN 2s (x1 − x − z) ds

0

|z|≤N −1/2 ∞ −αs −d/2

Z +c

e

1

≡ N d/2

(8.20)

s X Z

|z|≤N −1/2

1

ds

e−αs pN 2s (x1 − x − z) ds + c8.20 (α),

0

for all x, x1 ∈ ZN . Apply Lemma 7.6 to get Z Z N α N N GN 1(x , x1 )µ (dx) ≤ c7.11 %ˆδ (µ ) ZN

1

˜

s−ld −δ ds + c8.20 (α)µN (1)

0



N cˆ %N δ (µ ) ,

∀ x1 ∈ ZN ,

and we are done.

¯ N (t) so that Lemma 8.6 There is an R Z gN N ¯ Ps,t (GαN 1(x , ·)) (x1 )µN (dx) ≤ c(t)ˆ %N δ (µ )RN (t) , ZN

for all t ≥ s ≥ 0, x1 ∈ ZN , µN ∈ MF (ZN ), and N ∈ N . Proof

The result is immediate by Lemma 8.5 and Proposition 3.5(b).

Lemma 8.7 There is a c8.7 such that if µN ∈ MF (ZN ), then Z  α  N N sup√ ΠN x1 +z1 GN 1(x , Bt ) µ (dx1 ) ZN |z1 |≤1/ N

N ≤ c8.7 %ˆN δ (µ ) , ∀t ≥ 0, x ∈ ZN , N ∈ N.

66

Proof

Use (8.20) to get    α N  ΠN ≤ N d/2 ΠN x1 +z1 GN 1(x , Bt ) x1 +z1

Z

X

|z|≤N −1/2

(8.21)

1

 N  e−αs pN 2s (Bt − x − z) ds

0

+ c8.20 (α),

for all t ≥ 0, x, x1 , z1 ∈ ZN . If µN ∈ MF (ZN ), we have   Z X Z 1 N   N N d/2 sup√ ΠN e−αs pN x1 +z1 2s (Bt − x − z) ds µ (dx1 ) ZN |z1 |≤1/ N 1

Z ≤

N

d/2

Z

|z|≤N −1/2

0

 √  N Bt+2s sup√ ΠN − x ≤ 1/ N µN (dx1 ) ds x1 +z1

Rd |z1 |≤1/ N

0

≤ c8.22 ρˆδ (µN ), ∀t ≥ 0, x ∈ ZN ,

(8.22)

where the last inequality follows by Lemma 7.6. This and (8.21) imply Z  α  N N sup√ ΠN x1 +z1 GN 1(x , Bt ) µ (dx1 ) ZN |z1 |≤1/ N

≤ c8.22 ρˆδ (µN ) + c8.20 (α)µN (1) N ≤ cˆ %N δ (µ ) , ∀t ≥ 0, x ∈ ZN ,

and the proof is finished.

¯ N (t) such that Lemma 8.8 There is an R Z gN N ¯ (GαN 1(x , ·)) (x1 )µN (dx1 ) ≤ %ˆN P0,t δ (µ )RN (t) , ZN

for all t ≥ 0, x ∈ ZN , µN ∈ MF (ZN ), N ∈ N . Proof Z

First, by Lemma 7.7 we get

gN P0,t (GαN 1(x , ·)) (x1 )µN (dx1 ) ZN Z  α  ≤ ΠN GN 1(x , BtN ) µN (dx1 ) 0,x 1 Rd Z tZ X   N  α + −d N N (8.23) + γ2 2 N d/2 ΠN x1 Bs1 = y1 + z Πy1 +z GN 1(x , Bt−s1 ) 0

2 ZN

|z|≤N −1/2

¯ 1,N (dy1 )µN (dx1 ) ds1 + R ¯ N (t)µN (1) ×X s1 ≡ I

1,N

+I

2,N

+I

3,N

. 67

Z 0

t

 α  δ˜ N s−2 sup ΠN n x1 GN 1(x , Bt−sn ) dsn x1 ,x

By Lemma 8.7, I 1,N

(8.24)

N ≤ c(t)ˆ %N δ (µ ).

Now, let us consider the second term in (8.23). By Lemma 8.7 and Lemma 7.6 we get ! Z tZ   α N I 2,N ≤ γ2+ 2−d sup ΠN y1 +z1 GN 1(x , Bt−s1 ) 0

ZN

|z1 |≤N −1/2



 Z

×

X

N d/2

ZN

N ≤ c7.11 %ˆN δ (µ )

|z|≤N −1/2 t

Z 0

N ≤ cˆ %N δ (µ )RN

Z 0

(8.25)

 N N  ¯ 1,N ΠN x1 Bs1 = y1 + z µ (dx1 ) Xs1 (dy1 ) ds1

t

˜

d −δ ¯ 1,N s−l c8.7 %ˆN δ (Xs1 ) ds1 1

˜

d −δ s−l ds1 1

N ≤ c(t)ˆ %N δ (µ )RN , ∀t ≥ 0.

Apply (7.23), (8.5) and Lemma 8.1 to get that I 3,N

(8.26)

¯ N (t)µN (1), ≤ R

¯ N (·) may change from line to line. Now combine (8.24), (8.25), (8.26) where we recall again that R to get that the expression in (8.23) is bounded by N ¯ %ˆN δ (µ )RN (t),

(8.27) and we are done.

¯ N (·) such that for all µN ∈ MF (ZN ), Lemma 8.9 There is an R Z gN  (8.28) P0,t (ψN (·, x)) (x1 )µN (dx) ZN

˜

N ¯ −2 ≤ 1−ld −δ c(t)ˆ %N . δ (µ )RN (t) , ∀t ≥ 0, x1 ∈ ZN , N ≥ 

Proof

 . The result is immediate by Lemma 4.3, Proposition 3.5(b) and the symmetry of ψN

gN We next need a bound on the convolution of Ps,t with pN . Let

¯ 1,N , x) = 2−d N d/2 X ¯ 1,N (B(x, 2N −1/2 )). g¯N (X s s Clearly gN g¯N Ps,t (φ) ≤ Ps,t (φ) , ∀s ≤ t, φ : ZN → [0, ∞).

(8.29)

Lemma 8.10 For any φ : ZN → [0, ∞)., and s ≤ t X g X g¯ Ps,tN (φ) (x1 + x)pN (x1 ) ≤ Ps,tN (φ(x1 + ·)) (x)pN (x1 ) x1 ∈ZN

x1 ∈ZN

68

Proof

It is easy to see that X g Ps,tN (φ) (x1 + x)pN (x1 ) x1 ∈ZN

X

= |x1

p

N

(x1 )ΠN x

 Z  N φ Bt−s + x1 exp 0

|≤N −1/2

 Z  N pN (x1 )ΠN φ B + x exp 1 x t−s

X



0

|x1 |≤N −1/2

t−s

¯ 1,N γ2+ gN (X r+s

, BrN

 + x1 ) dr

¯ 1,N , B N ) dr γ2+ g¯N (X r r+s



g¯N Ps,t (φ(x1 + ·)) (x)pN (x1 ).

X

=

t−s

x1 ∈ZN

Proof of Lemma 4.4 "Z

 ψN (x1 , x)1(|x2

(8.30) E 3 ZN

Z ≤ 3 ZN

By Lemma 2.3 (see below) and (3.7), − x| ≤ 1/ N )N

gN  (ψN (·, x)) (x1 )N d/2 P0,t

gN P0,t

#





d/2

¯ 1,N ¯ 2,N (dx1 )X ¯ 2,N (dx2 )X ¯ 1,N (dx)|X X t t t

√  1(|· − x| ≤ 1/ N ) (x2 )

¯ 1,N (dx) ¯ 2,N (dx2 )X ¯ 2,N (dx1 )X ×X t 0 0 ! i hZ t h gN  sup Ps,t (ψN (·, x)) (y) + c 1 + Hδ,N N −1+`d +δ/2 E 0

Z

y∈ZN

i √  gN ¯ 2,N (dz) ds X ¯ 1,N (dx)|X ¯ 1,N 1(|· − x| ≤ 1/ N ) (z)X N d/2 Ps,t s t 

(8.31) × 2 ZN

+ cE

t

hZ

! gN  sup Ps,t (ψN (·, x)) (y)

y∈ZN

0

 Z

X

× 2 ZN



gN N d/2 Ps,t



  i √ ¯ s2,N (dz) ds X ¯ 1,N (dx)|X ¯ 1,N . 1(|· − x| ≤ 1/ N ) (y)pN (z − y) X t

y∈ZN

Here we have noted that if φ1 , φ2 ≥ 0 in Lemma 2.3, then the third term on the right-hand side of (2.18) is at most E

Z t Z 0

ZN

X

  gN gN gN gN ¯ 2,N (dz)ds|X ¯ 1,N . Ps,t φ1 (y)Ps,t φ2 (y)pN (z − y) + Ps,t φ1 (z)Ps,t φ2 (z)X s

y

69

The first term on the right hand side of (8.30) is bounded by Z gN  ¯ 2,N (dx1 )X ¯ 1,N (dx) (8.32) P0,t (ψN (·, x)) (x1 )X t 0 2 ZN

 × sup N x

d/2

gN P0,t



 √  2,N ¯ 1(|· − x| ≤ 1/ N ) (x2 )X0 (dx2 )

1−ld −δ˜

2 −ld −δ˜ ¯ 2,N (1)ˆ ¯ 2,N ¯ ≤  X %N δ (X0 )RN RN (t) t 0 ¯ N (t)1−ld −δ˜%ˆN (X ¯ 2,N )2 t−ld −δ˜ = R δ

0

where the first inequality follows by Lemmas 8.9 and 3.7(a), and the second is immediate conse¯ 2,N quence of the definition of %ˆN δ (X0 ). Now we consider the second term on the right-hand side of (8.30). By Lemma 8.4, for η ∈ (0, 2/7), it is bounded by Z Z t  ˜ ¯ N (t) (8.33) η(1−ld −3δ) 1(t − s ≥ η ) + hd (t − s)1(t − s ≤ η ) c(1 + RN )R ZN 0 Z   √  d/2 gN 2,N 1,N ¯ ¯ ¯ 1,N (dx). ×E N Ps,t 1(|· − x| ≤ 1/ N ) (z)Xs (dz)|X ds X t ZN

Now use (2.17) to evaluate the conditional expectation inside the integral with respect to s to get Z   √  gN ¯ 2,N (dz)|X ¯ 1,N 1(|· − x| ≤ 1/ N ) (z)X N d/2 Ps,t E (8.34) s ZN Z  √  gN ¯ 2,N (dz) 1(|· − x| ≤ 1/ N ) (z)X N d/2 P0,t = 0 ZN

¯ 2,N −ld −δ˜, ∀x ∈ ZN , ¯ N (t)ˆ ≤ R %N δ (X0 )t where the last inequality follows by Lemma 3.7(a). This and (8.5) allows us to bound (8.33) by   ˜ ˜ ¯ N (t)2 tη(1−ld −3δ) ¯ 2,N ¯ 1,N (1)t−ld −δ˜R (8.35) + cη(1−ld −δ) c(1 + RN )ˆ %N δ (X0 )Xt ˜ −ld −δ˜ N ¯ 2,N ¯ N (t)η(1−ld −3δ) = R t %ˆδ (X0 ).

Now we consider the third term on the right-hand side of (8.30). We use Lemmas 8.4, 8.10 to show that it is bounded by Z t  ˜ ¯ RN (t) η(1−ld −3δ) 1(t − s ≥ η ) + hd (t − s)1(t − s ≤ η ) 0    Z Z   X √ g¯N  × E N d/2 Ps,t 1(|· + y − x| ≤ 1/ N ) (z)pN (y) (8.36) ZN

ZN

y∈ZN

 ¯ 2,N (dz)|X ¯ 1,N ds X ¯ 1,N (dx). ×X s t

70

Now use (2.17), (8.29), Chapman-Kolmogorov and Lemma 3.7(a) to get that   X  Z √  2,N 1,N d/2 g¯N ¯ ¯ pN (y) E N Ps,t 1(|· + y − x| ≤ 1/ N ) (z)Xs (dz)|X ZN

y∈ZN

X Z

=

N

d/2

ZN

y∈ZN

g¯N P0,t





  2,N ¯ 1(|· + y − x| ≤ 1/ N ) (z)X0 (dz) pN (y)

−ld −δ˜ ¯ 2,N ¯ ≤ %ˆN , δ (X0 )RN (t)t

where in the last inequality we applied Lemma 3.7(a) for the P g¯N semigroup. This, (8.5) and (8.36) imply that the third term is bounded by ˜ ¯ 2,N −ld −δ˜η(1−ld −3δ) ¯ N (t)X ¯ 1,N (1)ˆ . R %N δ (X0 )t t

(8.37)

Combine (8.32) and (8.35), (8.37) to get the desired result. Proof of Lemma 4.5 It is easy to use Lemma 2.3 and (3.7), as in (8.30), to show that Z  Z Z  2,N 2,N  α 1,N ¯ ¯ ¯ ¯ 1,N ∗ q N (dx) E ψN (x, x1 )Xt (dx1 ) GN 1(x , x2 )Xt (dx2 )|X X t d Rd Rd Z R   gn gn  ¯ 1,N ∗ q N (dx) ¯ 2,N (dx2 ) X ¯ 2,N (dx1 )X (GαN 1(x , ·)) (x2 )X (ψN (x , ·)) (x1 )P0,t ≤ P0,t t 0 0 3 ZN

"Z h i + c 1 + Hδ,N N −1+`d +δ/2 E 0

Z

Z

×

sup ZN

(8.38) ≡ I

1,N

+I

z∈ZN 2,N

ZN

gn Ps,t

t

! gn  sup Ps,t (ψN (x , ·)) (z)

z,x∈ZN

(GαN 1(x , ·)) (z)



! #  1,N 2,N 1,N N ¯ ¯ ¯ (dx) Xs (dz1 ) ds|X Xt ∗ q

.

In the above the sup over z in the last integrand avoids the additional convolution with pN we had in (8.30). First, apply Lemmas 8.8, 8.9, and 4.2 to get Z   gn 1,N N ¯ 2,N ¯  ¯ 1,N ∗ q N (dx)X ¯ 2,N (dx1 ) I ≤ %ˆδ (X0 )RN (t) P0,t (8.39) (ψN (x , ·)) (x1 ) X t 0 2 ZN

1−ld −δ˜ N ¯ 1,N ¯ 2,N ¯ 2,N ¯ ≤ %ˆN %ˆδ (Xt )X0 (1) δ (X0 )RN (t) ¯ 2,N )2 R ¯ N (t)1−ld −δ˜. ≤ %ˆN (X δ

0

71

Now, let us bound I 2,N . By Lemmas 8.4, 8.6, and 4.2 we get for η ∈ (0, 2/7), I 2,N

Z t  ˜ ¯ N (t) ≤ c(1 + RN )R η(1−ld −3δ) 1(t − s ≥ η ) + hd (t − s)1(t − s ≤ η ) 0 Z    gN 1,N α N ¯ × sup Ps,t (GN 1(x , ·)) (z) Xt ∗ q (dx) z∈ZN

ZN

  2,N ¯ s (1)|X ¯ 1,N ds ×E X ¯ 1,N )R ¯ N (t) ≤ %ˆN δ (Xt Z t    ˜ ¯ s2,N (1)|X ¯ 1,N ds. × η(1−ld −3δ) 1(t − s ≥ η ) + hd (t − s)1(t − s ≤ η ) E X 0

Apply Corollary 3.6(b) to get I

2,N

(8.40)

≤ ≤

¯ 2,N (1) ¯ 2,N )R ¯ N (t)2 X %ˆN δ (Xt 0

Z t

 ˜ η(1−ld −3δ) 1(t − s ≥ η ) + hd (t − s)1(t − s ≤ η ) ds

0 ˜ η(1−ld −3δ) N ¯ 2,N ¯ , %ˆδ (X0 )RN (t)

where (8.5) is used in the last line. Combine this with (8.39) to finish the proof.

References M.T. Barlow, S.N. Evans, and E.A. Perkins. Collision local times and measure-valued processes. Canadian J. Math., 43: 897–938, 1991. R. Bass, and E.A. Perkins. Countable systems of degenerate stochastic differential equations with applications to super-markov chains. Elect. J. Prob., 9:Paper 22, 634–673, 2004. M. Bramson, R. Durrett, and G. Swindle. Statistical mechanics of Crabgrass. Annals of Probability, 17:444–481, 1989. J. Cox, R. Durrett, and E. Perkins. Rescaled Particle Systems Converging to Super-Brownian Motion. In M. Bramson and R. Durrett, editors, Perplexing Problems in Probability. Festschrift in Honor of Harry Kesten, pages 269–284, Birkh¨auser Boston, 1999. J. Cox, R. Durrett, and E. Perkins. Rescaled voter models converge to super-Brownian motion. Annals of Probability, 28:185–234, 2000. D.A. Dawson. Geostochastic calculus. Can. J. Statistics, 6:143–168, 1978. D.A. Dawson, A. Etheridge, K. Fleischmann, L. Mytnik, E. Perkins and J. Xiong(2002). Mutually catalytic super-Brownian motion in the plane. Annals of Probability, 30:1681–1762, 2002. D.A. Dawson, and E. Perkins. Long-time behavior and coexistence in a mutually catlytic branching model. Annals of Probability, 26:1088-1138, 1998.


P. Donnelly and T.G. Kurtz. Particle representations for measure-valued population models. Annals of Probability, 27:166–205, 1999.

R. Durrett. Predator-prey systems. In K.D. Elworthy and N. Ikeda, editors, Asymptotic Problems in Probability Theory: Stochastic Models and Diffusions on Fractals, pages 37–58. Pitman Research Notes 283, Longman, Essex, England, 1992.

R. Durrett and S. Levin. Allelopathy in spatially distributed populations. J. Theor. Biol., 185:165–172, 1997.

R. Durrett and E.A. Perkins. Rescaled contact processes converge to super-Brownian motion in two or more dimensions. Probability Theory and Related Fields, 114:309–399, 1999.

R. Durrett. Probability: Theory and Examples. Third edition. Duxbury Press, Belmont, CA, 2004.

E.B. Dynkin. An Introduction to Branching Measure-Valued Processes. CRM Monographs 6, Amer. Math. Soc., 1994.

S.N. Ethier and T.G. Kurtz. Markov Processes: Characterization and Convergence. John Wiley and Sons, New York, 1986.

S.N. Evans and E.A. Perkins. Measure-valued branching diffusions with singular interactions. Canadian J. Math., 46:120–168, 1994.

S.N. Evans and E.A. Perkins. Collision local times, historical stochastic calculus, and competing superprocesses. Elect. J. Prob., 3: Paper 5, 120 pp., 1998.

K. Fleischmann and L. Mytnik. Competing species superprocesses with infinite variance. Elect. J. Prob., 8: Paper 8, 59 pp., 2003.

T. Harris. The Theory of Branching Processes. Springer-Verlag, Berlin, 1963.

N. Ikeda and S. Watanabe. Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam, 1981.

J. Jacod and A.N. Shiryaev. Limit Theorems for Stochastic Processes. Springer-Verlag, New York, 1987.

S. Kuznetsov. Regularity properties of a supercritical superprocess. In The Dynkin Festschrift, pages 221–235. Birkhäuser, Boston, 1994.

T. Liggett. Interacting Particle Systems. Springer-Verlag, New York, 1985.

C. Mueller and R. Tribe. Stochastic p.d.e.'s arising from the long range contact and long range voter processes. Probability Theory and Related Fields, 102:519–546, 1995.

L. Mytnik. Uniqueness for a competing species model. Canadian J. Math., 51:372–448, 1999.

E. Perkins. Dawson-Watanabe Superprocesses and Measure-valued Diffusions. In École d'Été de Probabilités de Saint-Flour 1999, Lecture Notes in Math. 1781, Springer-Verlag, 2002.

J. Walsh. An Introduction to Stochastic Partial Differential Equations. In École d'Été de Probabilités de Saint-Flour 1984, Lecture Notes in Math. 1180, Springer, Berlin, 1986.