Limit theorems for random walks in dynamic random environment


arXiv:1106.4181v2 [math.PR] 5 Jul 2011

Frank Redig∗

Florian Völlering†

July 6, 2011

We study a general class of random walks driven by a uniquely ergodic Markovian environment. Under a coupling condition on the environment we obtain strong ergodicity properties and concentration inequalities for the environment as seen from the position of the walker, i.e., the environment process. We also obtain unique ergodicity of the environment process, as well as continuity of its ergodic measure as a function of the jump rates of the walker. As a consequence we obtain several limit theorems, such as a law of large numbers, an Einstein relation, a central limit theorem and concentration properties for the position of the walker.

Keywords: environment process, coupling, random walk, concentration estimates, backwards martingales.

AMS classification: 82C41 (Primary), 60F17 (Secondary)

1 Introduction

In recent years random walks in dynamic random environment have been studied by several authors. Motivation comes, among others, from non-equilibrium statistical mechanics (derivation of the Fourier law) [4] and large deviation theory [9]. In principle, random walk in dynamic random environment contains random walk in static random environment as a particular case. However, in turning to dynamic environments, authors mostly concentrate on environments with sufficient mixing properties. In that case the fact that the environment is dynamic helps to obtain self-averaging properties that ensure standard limiting behavior of the walk, i.e., law of large numbers and central limit theorem.

∗ University of Nijmegen, IMAPP, Heyendaalse weg 135, 6525 AJ Nijmegen, The Netherlands, [email protected]
† Leiden University, Mathematical Institute, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands, [email protected]

In the study of the limiting behavior of the walker, the environment process, i.e., the environment as seen from the position of the walker, plays a crucial role. See also [6], [8] for the use of the environment process in related contexts. In a translation invariant setting the environment process is a Markov process, and its ergodic properties fully determine the corresponding ergodic properties of the walk, since the position of the walker equals an additive functional of the environment process plus a controllable martingale.

The main theme of this paper is precisely the study of the following natural question: if the environment is uniquely ergodic, with a sufficient speed of mixing, does the environment process share similar properties? In several works ([2], [3], [1]) this transfer of "good properties of the environment" to "similar properties of the environment process" is made via a perturbative argument, and therefore holds only in a regime where the environment and the walker are weakly coupled. An exception is [5], in the case of an environment consisting of independent Markov processes. In this paper we consider the context of general Markovian uniquely ergodic environments which are such that the semigroup contracts at a minimal speed in a norm of variation type. Examples of such environments include interacting particle systems in "the $M < \varepsilon$ regime" [7] and weakly interacting diffusion processes on a compact manifold.

Our conditions on the environment are formulated in the language of coupling. More precisely, we impose that for the environment there exists a coupling such that the distance between every pair of initial configurations in this coupling decays fast enough, so that multiplied with $t^d$ it is still integrable in time. As a result we then obtain that for the environment process there exists a coupling such that the distance between every pair of initial configurations in this coupling decays at a speed which is at least integrable in time.
In fact we show more: in going from the environment to the environment process we essentially lose a factor $t^d$. For example, if for the environment there is a coupling in which the distance decays exponentially, then the same holds for the environment process (with possibly another rate). Once we have controllable coupling properties of the environment process, we can draw strong conclusions for the position of the walker. More precisely, we prove a law of large numbers with an asymptotic speed that depends continuously on the rates, and a central limit theorem with a controllable asymptotic covariance matrix. We also prove recurrence in $d = 1$ under the condition of zero speed, and transience when the speed is non-zero, as well as an Einstein relation.

Along the way, we develop a general formalism to derive concentration inequalities for functions of a Markov process, based on martingales adapted to quantities like the position of the walker, which is not exactly an additive functional of the environment process. Concentration inequalities for the position of the walker are crucial to obtain transience or recurrence. The formalism to derive these concentration inequalities is in the spirit of [10], but based now on "backwards" martingales. These "backwards" martingales are different from the classical martingales associated to the generator, or arising in the "martingale problem". Using them leads, however, to a controllable expression for the variance of the walker, as well as for additive functionals of the environment process. We believe that the use of these backwards martingales is interesting per se, and can also play an important role in controlling the deviation from macroscopic behavior (such as


hydrodynamic limits) in the context of interacting particle systems.

Our paper is organized as follows. The model and the necessary notation are introduced in Section 2. Section 3 is dedicated to lifting properties of the environment to the environment process; Theorem 3.1 in particular is of central importance and is used frequently in the later sections. Based on the results of Section 3, the law of large numbers and an Einstein relation are obtained in Section 4. A functional central limit theorem is proven in Section 5, utilizing the martingale approach detailed in the Appendix. Section 6 further applies the general methods from the Appendix to the specific case of a random walk in dynamic random environment to obtain concentration estimates, and thereby statements about recurrence and transience. Finally, the Appendix is dedicated to the study of the non-time-homogeneous "backwards" martingales and the concentration inequalities they facilitate.

2 The model

2.1 Environment

We are interested in studying a random walk $(X_t)_{t\geq 0}$ on the lattice $\mathbb{Z}^d$ which is driven by a second process $(\eta_t)_{t\geq 0}$ on $E^{\mathbb{Z}^d}$, the (dynamic) environment. This can be interpreted as the random walk moving through the environment, with its transition rates determined by the local environment around the random walk.

To be more precise, the environment $(\eta_t)_{t\geq 0}$ is a Feller process on the state space $\Omega := E^{\mathbb{Z}^d}$, where $(E, \rho)$ is a compact Polish space with metric $\rho$ (examples in mind are $E = \{0,1\}$ or $E = [0,1]$). We assume that the distance $\rho$ on $E$ is bounded from above by 1. The generator of the Markov process is denoted by $L^E$ and its semigroup by $S^E_t$, both considered on the space of continuous functions $C(\Omega;\mathbb{R})$. We assume that the environment is translation invariant, i.e.
$$ \mathbb{P}^E_\eta(\theta_x \eta_t \in \cdot) = \mathbb{P}^E_{\theta_x \eta}(\eta_t \in \cdot), $$
with $\theta_x$ denoting the shift operator $\theta_x \eta(y) = \eta(y-x)$ and $\mathbb{P}^E_\eta$ the path space measure of the process $(\eta_t)_{t\geq 0}$ starting from $\eta$. Later on we will formulate the precise conditions necessary to obtain our results.

2.2 Lipschitz functions

For practical purposes, we introduce the set of pairs in $\Omega$ which differ at one specific site:
$$ (\Omega\times\Omega)_x := \left\{ (\eta,\xi) \in \Omega^2 : \eta(x) \neq \xi(x) \text{ and } \eta(y) = \xi(y)\ \forall\, y \in \mathbb{Z}^d\setminus\{x\} \right\}, \quad x \in \mathbb{Z}^d. $$

Definition 2.1. For any $f : \Omega \to \mathbb{R}$, we denote by $\delta f(x)$ the Lipschitz constant of $f$ when only site $x$ is changed, with respect to the distance $\rho$, i.e.
$$ \delta f(x) := \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \frac{|f(\eta) - f(\xi)|}{\rho(\eta(x),\xi(x))}. $$
We write
$$ |||f||| := \sum_{x\in\mathbb{Z}^d} \delta f(x). \qquad (1) $$

Note that $|||f||| < \infty$ implies that $f$ is globally bounded and that the value of $f$ depends uniformly weakly on sites far away. A rougher semi-norm we also use is the oscillation (semi-)norm
$$ \| f \|_{osc} := \sup_{\eta,\xi\in\Omega} \left( f(\eta) - f(\xi) \right). $$
The two semi-norms are related by the inequality $\| f \|_{osc} \leq |||f|||$.
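To make Definition 2.1 concrete, here is a toy numerical illustration (ours, not from the paper): take $E = \{0,1\}$ with the discrete metric, so that $\rho(\eta(x),\xi(x)) = 1$ for every pair in $(\Omega\times\Omega)_x$, and a cylinder function $f$ depending only on finitely many sites. The window of sites and the particular $f$ below are hypothetical choices for illustration.

```python
from itertools import product

# Toy illustration of Definition 2.1 (hypothetical example, not from the paper):
# E = {0,1} with the discrete metric, so rho(eta(x), xi(x)) = 1 whenever the
# pair differs at x, and delta_f(x) = sup |f(eta) - f(xi)| over such pairs.

SITES = [-1, 0, 1]  # finite window standing in for Z^d; f ignores all other sites

def f(eta):
    # example cylinder function depending on sites 0 and 1 only
    return eta[0] + 0.5 * eta[0] * eta[1]

def delta_f(g, x):
    # Lipschitz constant of g in the site-x direction (brute force over the window)
    best = 0.0
    for vals in product([0, 1], repeat=len(SITES)):
        eta = dict(zip(SITES, vals))
        xi = dict(eta)
        xi[x] = 1 - xi[x]                 # flip only site x
        best = max(best, abs(g(eta) - g(xi)))
    return best

def triple_norm(g):
    # |||g||| = sum of delta_g(x) over all sites (zero outside the window here)
    return sum(delta_f(g, x) for x in SITES)

def osc_norm(g):
    # ||g||_osc = sup over pairs of configurations of g(eta) - g(xi)
    values = [g(dict(zip(SITES, vals))) for vals in product([0, 1], repeat=len(SITES))]
    return max(values) - min(values)
```

With this $f$ one finds $\delta f(-1) = 0$, $\delta f(0) = 1.5$, $\delta f(1) = 0.5$, hence $|||f||| = 2$, while $\|f\|_{osc} = 1.5$, consistent with $\|f\|_{osc} \leq |||f|||$.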

2.3 The random walker and assumptions on the rates

The random walker $X_t$ is a process on $\mathbb{Z}^d$ whose transition rates depend on the state of the environment as seen from the walker. More precisely, the rate to jump from site $x$ to site $x+z$, given that the environment is in state $\eta$, is $\alpha(\theta_{-x}\eta, z)$. We make two assumptions on the jump rates $\alpha$. First, we guarantee that the walker $X_t$ has first moments by assuming
$$ \|\alpha\|_1 := \sum_{z\in\mathbb{Z}^d} \|z\| \sup_{\eta\in\Omega} |\alpha(\eta,z)| < \infty. \qquad (2) $$
More generally, as sometimes higher moments are necessary, we write
$$ \|\alpha\|_p := \sum_{z\in\mathbb{Z}^d} \|z\|^p \sup_{\eta\in\Omega} |\alpha(\eta,z)|, \qquad p \geq 1. $$
Second, we limit the sensitivity of the rates to small changes in the environment by assuming that
$$ |||\alpha||| := \sum_{z\in\mathbb{Z}^d} |||\alpha(\cdot,z)||| < \infty. \qquad (3) $$
In Section 6, we impose the following stronger condition:
$$ |||\alpha|||_1 := \sum_{z\in\mathbb{Z}^d} \|z\|\, |||\alpha(\cdot,z)||| < \infty. \qquad (4) $$

2.4 Environment process

While the random walker $X_t$ itself is not a Markov process, due to the dependence on the environment, the pair $(\eta_t, X_t)$ is a Markov process with generator
$$ Lf(\eta,x) = L^E f(\cdot,x)(\eta) + \sum_{z\in\mathbb{Z}^d} \alpha(\theta_{-x}\eta,z)\left[ f(\eta,x+z) - f(\eta,x) \right], $$
corresponding semigroup $S_t$ (considered on the space of functions continuous in $\eta\in\Omega$ and Lipschitz continuous in $x\in\mathbb{Z}^d$) and path space measure $\mathbb{P}_{\eta,x}$.

The environment as seen from the walker is of crucial importance for understanding the asymptotic behaviour of the walker itself. This process, $(\theta_{-X_t}\eta_t)_{t\geq 0}$, is called the environment process (this name is common in the literature; in the context of this paper, however, it can easily be confused with the environment $\eta_t$). It is also a Markov process, with generator
$$ L^{EP} f(\eta) = L^E f(\eta) + \sum_{z\in\mathbb{Z}^d} \alpha(\eta,z)\left[ f(\theta_{-z}\eta) - f(\eta) \right], $$
corresponding semigroup $S^{EP}_t$ (on $C(\Omega)$) and path space measure $\mathbb{P}^{EP}_\eta$. Notice that this process is well-defined only in the translation invariant context.
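The dynamics of the pair $(\eta_t, X_t)$ and the induced environment process can be sketched with a Gillespie-type simulation. The following toy model is ours, not the paper's: the environment performs independent rate-1 spin flips on a ring of $N$ sites standing in for $\mathbb{Z}^d$, and the walker jumps $+1$ at rate $1 + \eta(X)$ and $-1$ at rate $1$, a hypothetical choice of rates $\alpha$.

```python
import random

# Gillespie-style simulation sketch of the pair (eta_t, X_t) and the induced
# environment process theta_{-X_t} eta_t (toy model, not the paper's setting):
# the environment performs independent rate-1 spin flips on a ring of N sites
# standing in for Z^d, and the walker jumps +1 at rate 1 + eta(X), -1 at rate 1.

def simulate(N=50, T=100.0, seed=0):
    rng = random.Random(seed)
    eta = [rng.randint(0, 1) for _ in range(N)]
    X, t = 0, 0.0
    while True:
        flip_rate = float(N)                     # total spin-flip rate
        rate_right = 1.0 + eta[X % N]            # alpha(theta_{-X} eta, +1)
        rate_left = 1.0                          # alpha(theta_{-X} eta, -1)
        total = flip_rate + rate_right + rate_left
        t += rng.expovariate(total)
        if t > T:
            return eta, X
        u = rng.random() * total
        if u < flip_rate:
            i = rng.randrange(N)                 # a uniformly chosen site flips
            eta[i] = 1 - eta[i]
        elif u < flip_rate + rate_right:
            X += 1
        else:
            X -= 1

def seen_from_walker(eta, X, width=3):
    # the environment process state near the origin: (theta_{-X} eta)(dx)
    N = len(eta)
    return [eta[(X + dx) % N] for dx in range(-width, width + 1)]
```

The function `seen_from_walker` returns exactly the local state of $\theta_{-X_t}\eta_t$, which is the Markov process whose generator $L^{EP}$ is given above.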

3 Ergodicity of the environment process

3.1 Assumptions on the environment

In this section we show how a given Markovian coupling of the environment with a fast enough coupling speed can be used to obtain strong ergodicity properties of the environment process.

Assumption 1a. There exists a strong Markovian coupling $\widehat{\mathbb{P}}^E$ of the environment (with corresponding expectation $\widehat{\mathbb{E}}^E$) which satisfies
$$ \int_0^\infty t^d \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big)\, dt < \infty. $$

This assumption is already sufficient to obtain the law of large numbers for the position of the walker and unique ergodicity of the environment process, but it does not give quite enough control on local fluctuations. The following stronger assumption remedies that.

Assumption 1b. There exists a strong Markovian coupling $\widehat{\mathbb{P}}^E$ of the environment (with corresponding expectation $\widehat{\mathbb{E}}^E$) which satisfies
$$ \int_0^\infty t^d \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_0} \frac{\widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(x), \eta^2_t(x)\big)}{\rho(\eta(0),\xi(0))}\, dt < \infty. $$

Remark. The following telescoping argument, which will be used several times later on, shows how Assumption 1b implies Assumption 1a. Fix $\eta,\xi\in\Omega$. Let $(\zeta_n)_{n\geq 0}$ be a sequence in $\Omega$ with $\zeta_0 = \eta$, $\lim_{n\to\infty}\zeta_n = \xi$ and $|\{x\in\mathbb{Z}^d : \zeta_n(x) \neq \zeta_{n-1}(x)\}| \leq 1$ for all $n\geq 1$. This sequence interpolates between the configurations $\eta$ and $\xi$ by single site changes. Using such a sequence, we can telescope over single site changes, using the triangle inequality for each $n$:
$$ \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big) \leq \sum_{n\in\mathbb{N}} \widehat{\mathbb{E}}^E_{\zeta_{n+1},\zeta_n}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big). $$
We can assume that each site $x\in\mathbb{Z}^d$ is changed at most once in the sequence. Therefore,
$$ \sum_{n\in\mathbb{N}} \widehat{\mathbb{E}}^E_{\zeta_{n+1},\zeta_n}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big) \leq \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big). $$
Using the translation invariance of the environment and $\rho \leq 1$, we obtain the estimate
$$ \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big) \leq \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_0} \frac{\widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(x), \eta^2_t(x)\big)}{\rho(\eta(0),\xi(0))}. $$
Taking the supremum on the left hand side and integrating over time, we see that Assumption 1b indeed implies Assumption 1a.

Examples which satisfy Assumptions 1a and 1b include:
• interacting particle systems in the so-called $M < \varepsilon$ regime;
• weakly interacting diffusions on a compact manifold;
• a system of ODEs which converges uniformly to its unique stationary configuration at sufficient speed.

Note that the third example is a deterministic environment; as such, any condition which measures this convergence purely in probabilistic terms, such as the total variation distance, is bound to fail.
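As a sanity check (an illustration we add here, not taken from the paper), consider the environment in which each site independently, at rate 1, resamples its value from a fixed measure on $E = \{0,1\}$. Coupling two copies synchronously, with the same Poisson clocks and the same resampled values, a discrepancy at a site disappears at the first ring of that site's clock:

```latex
% Independent single-site resampling under the synchronous coupling:
% a discrepancy at site x survives only if x's clock has not rung by time t, so
\widehat{\mathbb{E}}^E_{\eta,\xi}\,\rho\big(\eta^1_t(x),\eta^2_t(x)\big)
  = e^{-t}\,\mathbf{1}_{\{\eta(x)\neq\xi(x)\}} .
% For (\eta,\xi)\in(\Omega\times\Omega)_0 only x = 0 contributes, and the
% integral in Assumption 1b becomes
\int_0^\infty t^d \sum_{x\in\mathbb{Z}^d}
  \sup_{(\eta,\xi)\in(\Omega\times\Omega)_0}
  \frac{\widehat{\mathbb{E}}^E_{\eta,\xi}\,\rho(\eta^1_t(x),\eta^2_t(x))}
       {\rho(\eta(0),\xi(0))}\,dt
  = \int_0^\infty t^d e^{-t}\,dt = d! < \infty .
```

This is the simplest instance of the independent-environment setting; the $M < \varepsilon$ examples satisfy the same integrability with $e^{-t}$ replaced by $Ce^{-\lambda t}$.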

3.2 Statement of the main theorem

The main result of this section is the following theorem, which tells us how the coupling property of the environment lifts to the environment process.

Theorem 3.1. Let $f : \Omega \to \mathbb{R}$ with $|||f||| < \infty$.
a) Under Assumption 1a, there exists a constant $C_a > 0$ so that
$$ \sup_{\eta,\xi\in\Omega} \int_0^\infty \left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right| dt \leq C_a\, |||f|||. $$
b) Under Assumption 1b, there exists a constant $C_b > 0$ so that
$$ \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \int_0^\infty \frac{\left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right|}{\rho(\eta(x),\xi(x))}\, dt \leq C_b\, |||f|||. $$

This theorem encapsulates the main technical details and difficulties, and Subsection 3.5 is dedicated to its proof. In Section 3.4 we generalize this result to give more information about the decay in time. Here we continue with results that can be obtained using Theorem 3.1. Most results about the environment process use only part a) of the theorem; part b) is needed to obtain the CLT in Section 5 and the more sophisticated deviation estimates in Section 6.


3.3 Existence of a unique ergodic measure and continuity in the rates

First of all, the environment process, i.e. the environment as seen from the walker, is uniquely ergodic.

Lemma 3.2. Under Assumption 1a the environment process has a unique ergodic probability measure $\mu^{EP}$.

Proof. As $E$ is compact, so is $\Omega$, and therefore the space of stationary measures is non-empty. So we only need to prove uniqueness. Assume $\mu, \nu$ are both stationary measures. Choose an arbitrary $f : \Omega \to \mathbb{R}$ with $|||f||| < \infty$. By stationarity, $\mu(f) = \mu(S^{EP}_t f)$ and $\nu(f) = \nu(S^{EP}_t f)$ for all $t$, so by Theorem 3.1, a), for any $T > 0$,
$$ T\,|\mu(f) - \nu(f)| \leq \int\!\!\int \int_0^T \left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right| dt\ \mu(d\eta)\, \nu(d\xi) \leq \sup_{\eta,\xi\in\Omega} \int_0^\infty \left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right| dt < \infty. $$
As $T$ is arbitrary, $\mu(f) = \nu(f)$. As functions $f$ with $|||f||| < \infty$ are dense in $C(\Omega)$, there is at most one stationary probability measure.

It is of interest not only to know that the environment process has a unique ergodic measure $\mu^{EP}$, but also to know how this measure depends on the rates $\alpha$.

Theorem 3.3. Under Assumption 1a, the unique ergodic measure $\mu^{EP}_\alpha$ depends continuously on the rates $\alpha$. For two transition rate functions $\alpha, \alpha'$, we have the following estimate:
$$ \left| \mu^{EP}_\alpha(f) - \mu^{EP}_{\alpha'}(f) \right| \leq \frac{C(\alpha)}{p(\alpha)}\, \|\alpha - \alpha'\|_0\, |||f|||, $$
i.e.
$$ (\alpha, f) \mapsto \mu^{EP}_\alpha(f) $$
is continuous in $\|\cdot\|_0 \times |||\cdot|||$. The functions $C(\alpha), p(\alpha)$ satisfy $C(\alpha) > 0$ and $p(\alpha) \in\, ]0,1]$. In the case that the rates $\alpha$ do not depend on the environment, i.e. $\alpha(\eta,z) = \alpha(z)$, they are given by $p(\alpha) = 1$ and
$$ C(\alpha) = \int_0^\infty \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big)\, dt. $$

As the proof is a variation of the proof of Theorem 3.1, it is postponed to the end of Section 3.5.


3.4 Speed of convergence to equilibrium in the environment process

We already know that under Assumption 1a the environment process has a unique ergodic distribution. However, we do not know at what speed the process converges to its unique stationary measure. Given the speed of convergence for the environment, it is natural to believe that the environment process inherits that speed with some form of slowdown, due to the additional self-interaction induced by the random walk. For example, if the original speed of convergence were exponential with a constant $\lambda$, then the environment process would also converge exponentially fast, but with a worse constant $\widetilde{\lambda}$. This is indeed the case.

Theorem 3.4. Let $\phi : [0,\infty[\ \to \mathbb{R}$ be a monotone increasing, continuous function satisfying $\phi(0) = 1$ and $\phi(s+t) \leq \phi(s)\phi(t)$.

a) Suppose there exists a strong Markovian coupling $\widehat{\mathbb{P}}^E$ of the environment (with corresponding expectation $\widehat{\mathbb{E}}^E$) which satisfies
$$ \int_0^\infty \phi(t)\, t^d \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big)\, dt < \infty. $$
Then there exist a constant $K_0 > 0$ and a decreasing function $C_a :\ ]K_0,\infty[\ \to [0,\infty[$ so that for any $K > K_0$ and any $f : \Omega \to \mathbb{R}$ with $|||f||| < \infty$,
$$ \sup_{\eta,\xi\in\Omega} \int_0^\infty \phi\!\left(\frac{t}{K}\right) \left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right| dt \leq C_a(K)\, |||f|||. $$

b) Suppose there exists a strong Markovian coupling $\widehat{\mathbb{P}}^E$ of the environment (with corresponding expectation $\widehat{\mathbb{E}}^E$) which satisfies
$$ \int_0^\infty \phi(t)\, t^d \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_0} \frac{\widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(x), \eta^2_t(x)\big)}{\rho(\eta(0),\xi(0))}\, dt < \infty. $$
Then there exist a constant $K_0 > 0$ and a decreasing function $C_b :\ ]K_0,\infty[\ \to [0,\infty[$ so that for any $K > K_0$ and any $f : \Omega \to \mathbb{R}$ with $|||f||| < \infty$,
$$ \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \int_0^\infty \phi\!\left(\frac{t}{K}\right) \frac{\left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right|}{\rho(\eta(x),\xi(x))}\, dt \leq C_b(K)\, |||f|||. $$

Canonical choices for $\phi$ are $\phi(t) = \exp(\lambda t)$ and $\phi(t) = (1+t)^\lambda$. In the first case, exponential decay of order $\lambda$ of the environment is lifted to exponential decay of order $\lambda/K$, for any $K > K_0$, in the environment process. In the second case, polynomial decay of order $\lambda$ becomes polynomial decay of order $\lambda - d - \epsilon$, for any $0 < \epsilon \leq \lambda - d$.
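For completeness, one can check that both canonical choices indeed satisfy the hypotheses on $\phi$ (this short verification is ours, not spelled out in the paper):

```latex
% phi(t) = e^{lambda t}, lambda > 0: monotone increasing, phi(0) = 1, and
\phi(s+t) = e^{\lambda(s+t)} = e^{\lambda s}\, e^{\lambda t} = \phi(s)\,\phi(t).
% phi(t) = (1+t)^lambda, lambda >= 0: monotone increasing, phi(0) = 1, and
% since s, t >= 0 imply st >= 0,
1+s+t \;\le\; 1+s+t+st \;=\; (1+s)(1+t)
\quad\Longrightarrow\quad
\phi(s+t) = (1+s+t)^{\lambda} \le (1+s)^{\lambda}(1+t)^{\lambda} = \phi(s)\,\phi(t).
```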


3.5 Proofs of Theorems 3.1, 3.3 and 3.4

In this section we always assume that Assumption 1a holds. We start with an outline of the idea of the proofs. We have a coupling $(\eta^1_t, \eta^2_t)$ of the environments, which we extend to include two random walkers $(X^1_t, X^2_t)$, each driven by its corresponding environment. We maximize the probability of both walkers performing the same jumps. Then Assumption 1a is sufficient to obtain a positive probability of both walkers staying together forever. If the walkers stay together, one only has to account for the difference in environments, not for the walkers as well. When the walkers split, translation invariance allows everything to be shifted so that both walkers are back at the origin, and one can try again. After a geometric number of trials it is then guaranteed that the walkers stay together.

Proposition 3.5 (Coupling construction). Given the coupling $\widehat{\mathbb{P}}^E_{\eta,\xi}$ of the environments, we extend it to a coupling $\widehat{\mathbb{P}}_{\eta,x;\xi,y}$. This coupling has the following properties:

a) (Marginals) The coupling supports two environments and corresponding random walkers:
1) $\widehat{\mathbb{P}}_{\eta,x;\xi,y}\big((\eta^1_t, X^1_t) \in \cdot\big) = \mathbb{P}_{\eta,x}\big((\eta_t, X_t) \in \cdot\big)$;
2) $\widehat{\mathbb{P}}_{\eta,x;\xi,y}\big((\eta^2_t, X^2_t) \in \cdot\big) = \mathbb{P}_{\xi,y}\big((\eta_t, X_t) \in \cdot\big)$;

b) (Extension of $\widehat{\mathbb{P}}^E_{\eta,\xi}$) The environments behave as under $\widehat{\mathbb{P}}^E_{\eta,\xi}$:
$$ \widehat{\mathbb{P}}_{\eta,x;\xi,y}\big((\eta^1_t, \eta^2_t) \in \cdot\big) = \widehat{\mathbb{P}}^E_{\eta,\xi}\big((\eta^1_t, \eta^2_t) \in \cdot\big); $$

c) (Coupling of the walkers) $X^1_t$ and $X^2_t$ perform identical jumps as much as possible; the rate of performing a different jump is $\sum_{z\in\mathbb{Z}^d} \left| \alpha(\theta_{-X^1_t}\eta^1_t, z) - \alpha(\theta_{-X^2_t}\eta^2_t, z) \right|$;

d) (Minimal and maximal walkers) In addition to the environments $\eta^1_t, \eta^2_t$ and random walkers $X^1_t, X^2_t$, the coupling supports minimal and maximal walkers $Y^-_t, Y^+_t$ as well. These two walkers have the following properties:
1) $Y^-_t \leq X^1_t - x,\ X^2_t - y \leq Y^+_t$, $\widehat{\mathbb{P}}_{\eta,x;\xi,y}$-a.s. (in dimension $d > 1$ this is to be interpreted coordinate-wise);
2) $Y^+_t, Y^-_t$ are independent of $\eta^1_t, \eta^2_t$;
3) $\widehat{\mathbb{E}}_{\eta,x;\xi,y}\, Y^+_t = t\gamma^+$ for some $\gamma^+ \in \mathbb{R}^d$;
4) $\widehat{\mathbb{E}}_{\eta,x;\xi,y}\, Y^-_t = t\gamma^-$ for some $\gamma^- \in \mathbb{R}^d$.

Proof. The construction of the coupling $\widehat{\mathbb{P}}_{\eta,x;\xi,y}$ can be done in the following way. The environments behave according to $\widehat{\mathbb{P}}^E_{\eta,\xi}$, and $X^1_0 = x$, $X^2_0 = y$, $Y^+_0 = Y^-_0 = 0$. For each $z\in\mathbb{Z}^d$ there is an independent Poisson clock with rate $\lambda_z := \sup_\eta \alpha(\eta,z)$. Whenever such a clock rings, draw an independent $U$ from the uniform distribution on $[0,1]$. If $U < \alpha(\theta_{-X^1_t}\eta^1_t, z)/\lambda_z$, then $X^1_t$ performs a jump by $z$; analogously for $X^2_t$. The upper and lower walkers $Y^+_t$ and $Y^-_t$ always jump on these clocks, but they jump by $\max(z,0)$ or $\min(z,0)$ respectively.

The properties of the coupling follow directly from the construction, except the last two, which are a consequence of the first moment condition $\|\alpha\|_1 < \infty$.

Now we show how suitable estimates on the coupling speed of the environment translate to properties of the extended coupling.

Lemma 3.6.

$$ \sup_{\eta,\xi,x,y} \widehat{\mathbb{E}}_{\eta,x;\xi,y}\, \rho\big(\eta^1_t(X^1_t), \eta^2_t(X^1_t)\big) \leq \left( \|\gamma^+ - \gamma^-\|_\infty\, t + 1 \right)^d \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big). $$

Proof. Denote by $R_t \subset \mathbb{Z}^d$ the set of sites $x$ with $Y^-_t \leq x \leq Y^+_t$ (coordinate-wise). Then
$$ \sup_{\eta,\xi,x,y} \widehat{\mathbb{E}}_{\eta,x;\xi,y}\, \rho\big(\eta^1_t(X^1_t), \eta^2_t(X^1_t)\big) \leq \sup_{\eta,\xi,x,y} \widehat{\mathbb{E}}_{\eta,x;\xi,y} \left[ \sum_{z\in R_t} \rho\big(\eta^1_t(x+z), \eta^2_t(x+z)\big) \right] \leq \widehat{\mathbb{E}}\,|R_t|\ \sup_{\eta,\xi,z} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(z), \eta^2_t(z)\big) \leq \left( \|\gamma^+ - \gamma^-\|_\infty\, t + 1 \right)^d \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big), $$
where in the first inequality we used that $X^1_t - x \in R_t$, in the second the independence of $Y^\pm_t$ from the environments (Proposition 3.5, d)), and in the last translation invariance of the environment together with $\widehat{\mathbb{E}}\,|R_t| \leq (\|\gamma^+ - \gamma^-\|_\infty\, t + 1)^d$.
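The clock construction in the proof of Proposition 3.5 can be sketched in code. The version below is a toy reduction (ours): a single jump size $z = +1$ with rate bounded by $\lambda$, and time-dependent rate functions standing in for $\alpha(\theta_{-X^i_t}\eta^i_t, z)$, which are hypothetical placeholders.

```python
import random

# Sketch of the Poisson-clock coupling from the proof of Proposition 3.5
# (toy version: one jump size z = +1 with rate bounded by lam). Both walkers
# share the same clock and the same uniform U, so they jump together whenever
# both accept; the maximal walker Y+ jumps on every ring of the clock.

def run_coupling(alpha1, alpha2, lam, T, seed=0):
    """alpha1, alpha2: callables t -> current jump rate (<= lam) of each walker."""
    rng = random.Random(seed)
    t, x1, x2, y_plus = 0.0, 0, 0, 0
    while True:
        t += rng.expovariate(lam)       # shared Poisson clock of rate lam
        if t > T:
            return x1, x2, y_plus
        u = rng.random()                # shared uniform for both acceptance tests
        if u < alpha1(t) / lam:
            x1 += 1
        if u < alpha2(t) / lam:
            x2 += 1
        y_plus += 1                     # Y+ jumps by max(z, 0) = 1 on every ring
```

When the two rate functions coincide, both walkers accept or reject each ring together and never decouple; when they differ, a ring decouples them only if $U$ falls between the two acceptance thresholds, which happens at rate $|\alpha_1(t) - \alpha_2(t)|$, matching property c) of Proposition 3.5.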

Lemma 3.7. Denote by $\tau := \inf\{t \geq 0 : X^1_t \neq X^2_t\}$ the first time the two walkers are not at the same position. Under Assumption 1a,
$$ \inf_{\eta,\xi\in\Omega} \widehat{\mathbb{P}}_{\eta,0;\xi,0}(\tau = \infty) > 0, $$
i.e., the walkers $X^1$ and $X^2$ never decouple with strictly positive probability.

Proof. Both walkers start at the origin, therefore $\tau > 0$. The probability that an exponential random variable with time dependent rate $\lambda_t$ is bigger than $T$ is given by $\exp(-\int_0^T \lambda_t\, dt)$. As the rate of decoupling is given by Proposition 3.5, c), we obtain, using Jensen's inequality,
$$ \widehat{\mathbb{P}}_{\eta,0;\xi,0}(\tau > T) = \widehat{\mathbb{E}}_{\eta,0;\xi,0} \exp\left( -\int_0^T \sum_{z\in\mathbb{Z}^d} \left| \alpha(\theta_{-X^1_t}\eta^1_t, z) - \alpha(\theta_{-X^1_t}\eta^2_t, z) \right| dt \right) \geq \exp\left( -\widehat{\mathbb{E}}_{\eta,0;\xi,0} \int_0^T \sum_{z\in\mathbb{Z}^d} \left| \alpha(\theta_{-X^1_t}\eta^1_t, z) - \alpha(\theta_{-X^1_t}\eta^2_t, z) \right| dt \right). \qquad (5) $$
By telescoping over single site changes,
$$ \widehat{\mathbb{E}}_{\eta,0;\xi,0} \sum_{z\in\mathbb{Z}^d} \left| \alpha(\theta_{-X^1_t}\eta^1_t, z) - \alpha(\theta_{-X^1_t}\eta^2_t, z) \right| \leq \widehat{\mathbb{E}}_{\eta,0;\xi,0} \sum_{z\in\mathbb{Z}^d} \sum_{x\in\mathbb{Z}^d} \rho\big(\eta^1_t(X^1_t + x), \eta^2_t(X^1_t + x)\big)\, \delta_{\alpha(\cdot,z)}(x) \leq \sup_{x\in\mathbb{Z}^d} \widehat{\mathbb{E}}_{\eta,0;\xi,0}\, \rho\big(\eta^1_t(X^1_t + x), \eta^2_t(X^1_t + x)\big)\ |||\alpha||| \leq |||\alpha|||\, \left( \|\gamma^+ - \gamma^-\|_\infty\, t + 1 \right)^d \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big), $$
where the last line follows from Lemma 3.6. With this estimate we obtain
$$ \widehat{\mathbb{P}}_{\eta,0;\xi,0}(\tau = \infty) \geq \exp\left( -|||\alpha||| \int_0^\infty \left( \|\gamma^+ - \gamma^-\|_\infty\, t + 1 \right)^d \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big)\, dt \right) > 0 $$
uniformly in $\eta, \xi$.

Proof of Theorem 3.1, part a). The idea of the proof is to use the coupling of Proposition 3.5: we wait until the walkers $X^1_t$ and $X^2_t$, which are initially at the same position, decouple, and then restart everything and try again. By Lemma 3.7 there is a positive probability of never decoupling, so this scheme is successful. Using the time of decoupling $\tau$ (as in Lemma 3.7),
$$ \left| \int_0^T \widehat{\mathbb{E}}_{\eta,0;\xi,0}\, 1_{t\geq\tau} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^2_t}\eta^2_t) \right] dt \right| = \left| \int_0^T \widehat{\mathbb{E}}_{\eta,0;\xi,0}\, 1_{t\geq\tau}\, \widehat{\mathbb{E}} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^2_t}\eta^2_t) \,\middle|\, \mathcal{F}_\tau \right] dt \right| $$
$$ = \left| \int_0^T \widehat{\mathbb{E}}_{\eta,0;\xi,0}\, 1_{t\geq\tau} \left[ S^{EP}_{t-\tau} f(\theta_{-X^1_\tau}\eta^1_\tau) - S^{EP}_{t-\tau} f(\theta_{-X^2_\tau}\eta^2_\tau) \right] dt \right| \qquad (6) $$
$$ = \left| \widehat{\mathbb{E}}_{\eta,0;\xi,0} \int_0^{(T-\tau)\vee 0} \left[ S^{EP}_t f(\theta_{-X^1_\tau}\eta^1_\tau) - S^{EP}_t f(\theta_{-X^2_\tau}\eta^2_\tau) \right] dt \right| \qquad (7) $$
$$ \leq \widehat{\mathbb{P}}_{\eta,0;\xi,0}(\tau < \infty)\ \sup_{\eta,\xi\in\Omega} \int_0^T \left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right| dt. $$

And therefore
$$ \int_0^T \left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right| dt = \int_0^T \left| \widehat{\mathbb{E}}_{\eta,0;\xi,0} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^2_t}\eta^2_t) \right] \right| dt \leq \int_0^T \left| \widehat{\mathbb{E}}_{\eta,0;\xi,0}\, 1_{t<\tau} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right] \right| dt + \int_0^T \left| \widehat{\mathbb{E}}_{\eta,0;\xi,0}\, 1_{t\geq\tau} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^2_t}\eta^2_t) \right] \right| dt. $$

Therefore Assumption 1b completes the proof.

Proof of Theorem 3.1, part b). Let $\tau := \inf\{t \geq 0 : X^1_t \neq X^2_t\}$. Then we split the


integration at $\tau$:
$$ \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \int_0^\infty \frac{\left| S^{EP}_t f(\eta) - S^{EP}_t f(\xi) \right|}{\rho(\eta(x),\xi(x))}\, dt \leq \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \int_0^\infty \frac{\left| \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau>t} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right] \right|}{\rho(\eta(x),\xi(x))}\, dt + \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \int_0^\infty \frac{\left| \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau\leq t} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^2_t}\eta^2_t) \right] \right|}{\rho(\eta(x),\xi(x))}\, dt. $$
We estimate the first term by moving the expectation out of the absolute value and forgetting the restriction to $\tau > t$:
$$ \sum_{x\in\mathbb{Z}^d}\ \sup_{(\eta,\xi)\in(\Omega\times\Omega)_x} \int_0^\infty \sum_{y\in\mathbb{Z}^d} \delta f(y)\, \frac{\widehat{\mathbb{E}}_{\eta,\xi}\, \rho\big(\eta^1_t(y+X^1_t), \eta^2_t(y+X^1_t)\big)}{\rho(\eta(x),\xi(x))}\, dt. $$
By Lemma 3.8 with $w = \delta f$, this is bounded by some constant times $|||f|||$. For the second term we start by using the Markov property:
$$ \int_0^\infty \left| \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau\leq t} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^2_t}\eta^2_t) \right] \right| dt = \int_0^\infty \left| \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau\leq t} \left[ S^{EP}_{t-\tau} f(\theta_{-X^1_\tau}\eta^1_\tau) - S^{EP}_{t-\tau} f(\theta_{-X^2_\tau}\eta^2_\tau) \right] \right| dt \leq \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau<\infty} \int_0^\infty \left| S^{EP}_t f(\theta_{-X^1_\tau}\eta^1_\tau) - S^{EP}_t f(\theta_{-X^2_\tau}\eta^2_\tau) \right| dt. $$

For the proof of Theorem 3.3, let $S^{EP,1}_t$ and $S^{EP,2}_t$ denote the semigroups of the environment processes with rates $\alpha$ and $\alpha'$ respectively, coupled as in Proposition 3.5, with $\tau$ again the decoupling time of the walkers. Then
$$ \left| S^{EP,1}_t f(\eta) - S^{EP,2}_t f(\xi) \right| = \left| \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau>t} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right] + \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau\leq t} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^2_t}\eta^2_t) \right] \right| = \left| \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau>t} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right] + \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau\leq t} \left[ S^{EP,1}_{t-\tau} f(\theta_{-X^1_\tau}\eta^1_\tau) - S^{EP,2}_{t-\tau} f(\theta_{-X^2_\tau}\eta^2_\tau) \right] \right|. $$


Therefore,
$$ \Psi(T) := \sup_{0\leq T'\leq T}\ \sup_{\eta,\xi\in\Omega} \int_0^{T'} \left| S^{EP,1}_t f(\eta) - S^{EP,2}_t f(\xi) \right| dt $$
$$ \leq \sup_{0\leq T'\leq T}\ \sup_{\eta,\xi\in\Omega} \int_0^{T'} \left| \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau>t} \left[ f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right] \right| + \widehat{\mathbb{E}}_{\eta,\xi}\, 1_{\tau\leq t} \sup_{\eta',\xi'\in\Omega} \left| S^{EP,1}_{t-\tau} f(\eta') - S^{EP,2}_{t-\tau} f(\xi') \right| dt $$
$$ \leq \sup_{0\leq T'\leq T}\ \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}_{\eta,\xi} \left[ \int_0^{\tau} \left| f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right| dt + 1_{\tau\leq T'}\, \Psi(T'-\tau) \right] $$
$$ \leq \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}_{\eta,\xi} \left[ \int_0^{\infty} \left| f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right| dt + 1_{\tau\leq T}\, \Psi(T-\tau) \right]. \qquad (15) $$

We will now exploit this recursive bound on $\Psi$.

Lemma 3.9. Let $\tau_1 := \inf\{t\geq 0 : X^1_t \neq X^{12}_t\}$ and $\tau_2 := \inf\{t\geq 0 : X^{12}_t \neq X^2_t\}$. Set
$$ \beta := \sum_{z\in\mathbb{Z}^d} \sup_{\eta\in\Omega} \left| \alpha(\eta,z) - \alpha'(\eta,z) \right|, \qquad p(\alpha) := \inf_{\eta,\xi\in\Omega} \widehat{\mathbb{P}}_{\eta,\xi}(\tau_1 = \infty), $$
$$ C(\alpha) := \int_0^\infty \left( \left\| \gamma^+(\alpha) - \gamma^-(\alpha) \right\|_\infty t + 1 \right)^d \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big)\, dt, $$
where $\gamma^+(\alpha), \gamma^-(\alpha)$ are as in Proposition 3.5 for the rates $\alpha$. Let $Y \in \{0,1\}$ be Bernoulli with parameter $p(\alpha)$ and $Y'$ exponentially distributed with parameter $\beta$. Let $Y_1, Y_2, \ldots$ be i.i.d. copies of $Y \cdot Y'$ and $N(T) := \inf\{N \geq 0 : \sum_{n=1}^N Y_n > T\}$. Then
$$ \Psi(T) \leq C(\alpha)\, |||f|||\ \mathbb{E} N(T). $$

Proof. By construction of the coupling, $\tau_2$ stochastically dominates $Y'$. As $\tau = \tau_1 \wedge \tau_2$, it follows that $\tau$ stochastically dominates $Y_1$. Using this fact together with the monotonicity of $\Psi$ in (15),
$$ \Psi(T) \leq \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}_{\eta,\xi} \left[ \int_0^\infty \left| f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right| dt + 1_{\tau\leq T}\, \Psi(T-\tau) \right] \leq \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}_{\eta,\xi} \int_0^\infty \left| f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right| dt + \mathbb{E}\, 1_{Y_1\leq T}\, \Psi(T-Y_1). $$
As $p(\alpha) > 0$ by Lemma 3.7, we can iterate this estimate until it terminates after $N(T)$ steps. Therefore we obtain
$$ \Psi(T) \leq \mathbb{E} N(T)\ \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}_{\eta,\xi} \int_0^\infty \left| f(\theta_{-X^1_t}\eta^1_t) - f(\theta_{-X^1_t}\eta^2_t) \right| dt. $$
The integral is estimated by telescoping over single site changes and Lemma 3.6 in the usual way, yielding $\Psi(T) \leq C(\alpha)\, |||f|||\ \mathbb{E} N(T)$.

To finally come back to the original question of continuity,
$$ \left| \mu^{EP}_\alpha(f) - \mu^{EP}_{\alpha'}(f) \right| = \frac{1}{T} \left| \int\!\!\int \int_0^T \left[ S^{EP,1}_t f(\eta) - S^{EP,2}_t f(\xi) \right] dt\ \mu^{EP}_\alpha(d\eta)\, \mu^{EP}_{\alpha'}(d\xi) \right| \leq \frac{1}{T}\, \Psi(T) \leq \frac{1}{T}\, \mathbb{E} N(T)\, C(\alpha)\, |||f||| \xrightarrow{T\to\infty} \frac{C(\alpha)}{\mathbb{E}[Y\, Y']}\, |||f||| = \frac{C(\alpha)}{p(\alpha)} \sum_{z\in\mathbb{Z}^d} \sup_{\eta\in\Omega} \left| \alpha(\eta,z) - \alpha'(\eta,z) \right|\ |||f|||. $$
By sending $\alpha'$ to $\alpha$, the right hand side tends to 0, so that the ergodic measure of the environment process is indeed continuous in the rates $\alpha$. It is also interesting to note that both $p(\alpha)$ and $C(\alpha)$ are rather explicit, given the original coupling of the environment. Notably, when $\alpha(\eta,z) = \alpha(z)$, i.e. the rates do not depend on the environment, $p(\alpha) = 1$ and $C(\alpha) = \int_0^\infty \sup_{\eta,\xi\in\Omega} \widehat{\mathbb{E}}^E_{\eta,\xi}\, \rho\big(\eta^1_t(0), \eta^2_t(0)\big)\, dt$.

Proof of Theorem 3.4. The proof of this theorem is mostly identical to the proof of Theorem 3.1. Hence, instead of copying the proof, we just state where the details differ. A first fact is that the conditions of a) and b) imply Assumptions 1a and 1b respectively. In the adaptation of the proof of part a), in most lines it suffices to insert a factor $\phi(\frac{t}{K})$ into the integrals. However, in line (6) we use
$$ \phi\!\left(\frac{t}{K}\right) \leq \phi\!\left(\frac{\tau}{K}\right) \phi\!\left(\frac{t-\tau}{K}\right) \qquad (16) $$
to obtain the estimate
$$ \widehat{\mathbb{E}}_{\eta,0;\xi,0}\, \phi\!\left(\frac{\tau}{K}\right) \int_0^{(T-\tau)\vee 0} \phi\!\left(\frac{t}{K}\right) \left| S^{EP}_t f(\theta_{-X^1_\tau}\eta^1_\tau) - S^{EP}_t f(\theta_{-X^2_\tau}\eta^2_\tau) \right| dt $$
instead. Thereby, in lines (7), (8) and (9) we have to change $\widehat{\mathbb{P}}_{\eta,0;\xi,0}(\tau < \infty)$ to $\widehat{\mathbb{E}}_{\eta,0;\xi,0}\, \phi\!\left(\frac{\tau}{K}\right) 1_{\tau<\infty}$.

$$ \sum_{\|z\| > \lambda} \sup_{\eta\in\Omega} \alpha(\eta,z)\, \|z\|^2. $$
By the second moment condition $\|\alpha\|_2 < \infty$, the sum on the right converges to 0 as $\lambda \to \infty$. So when we choose for example $\lambda = T^{\frac14}$, the right hand side converges to 0 as $T \to \infty$, which proves the functional CLT for $(M_T(t))_{0\leq t\leq 1}$.

Theorem 5.4. Assume Assumptions 1b and 2 and $\|\alpha\|_2 < \infty$. Let
$$ v = \int \sum_{z\in\mathbb{Z}^d} \alpha(\eta,z)\, z\ \mu^{EP}(d\eta) $$
be the asymptotic speed of the random walk $(X_t)_{t\geq 0}$. Then
$$ \left( T^{-\frac12} (X_{tT} - vtT) \right)_{0\leq t\leq 1} $$
converges in probability to a Brownian motion with covariance matrix
$$ \Sigma^2 = \int L\left[ (A_\eta + \mathrm{id}_x)(A_\eta + \mathrm{id}_x)^T \right](\eta,0)\ \mu^{EP}(d\eta), \qquad (19) $$
where $A_\eta : \Omega\times\mathbb{Z}^d \to \mathbb{R}^d$ with
$$ A_\eta(\xi,x) := \sum_{z\in\mathbb{Z}^d} z \int_0^\infty \left[ S^{EP}_t[\alpha(\cdot,z)](\theta_{-x}\xi) - S^{EP}_t[\alpha(\cdot,z)](\eta) \right] dt $$
and $\mathrm{id}_x(\xi,x) = x$.

Proof. Define the $d$-dimensional martingale
$$ M_T(t) := T^{-\frac12} \left( \mathbb{E}[X_T \mid \mathcal{F}_{tT}] - \mathbb{E}[X_T \mid \mathcal{F}_0] \right). $$

By Proposition 5.3, the projection onto any unit vector $u\in\mathbb{R}^d$ converges to a Brownian motion with variance
$$ \int L\left[ \left( \langle A_\eta, u\rangle + \langle \mathrm{id}_x, u\rangle \right)^2 \right](\eta,0)\ \mu^{EP}(d\eta) = u^T \int L\left[ (A_\eta + \mathrm{id}_x)(A_\eta + \mathrm{id}_x)^T \right](\eta,0)\ \mu^{EP}(d\eta)\ u. $$
As the projected martingales are adapted to $(\mathcal{F}_t)$, the filtration of $M_T$, this implies that $M_T$ converges to a Brownian motion with the given covariance matrix. Since $M_T(1) = T^{-\frac12}(X_T - \mathbb{E}_{\eta_0,0} X_T)$, we can already conclude a central limit theorem. To obtain the functional central limit theorem, we simply use the fact that $M_T(t)$ is close to $T^{-\frac12}(X_{tT} - vtT)$:
$$ T^{\frac12} M_T(t) = \mathbb{E}[X_T \mid \eta_{tT}, X_{tT}] - \mathbb{E}[X_T \mid \eta_0, X_0] = \int_0^{T-tT} \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\theta_{-X_{tT}}\eta_{tT})\, ds + X_{tT} - \int_0^T \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\eta_0)\, ds. $$
When we split the last integral at $tT$, we notice that the part up to $tT$ is close to $vtT$:
$$ \left| \int_0^{tT} \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\eta_0)\, ds - vtT \right| = \left| \int_0^{tT} \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\eta_0)\, ds - \int_0^{tT} \int \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\xi)\ \mu^{EP}(d\xi)\, ds \right| \leq C_a\, |||\alpha|||_1 $$
by Theorem 3.1. Similarly, the part after $tT$ almost cancels against the integral from 0 to $T - tT$:
$$ \left| \int_0^{T-tT} \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\theta_{-X_{tT}}\eta_{tT})\, ds - \int_{tT}^{T} \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\eta_0)\, ds \right| = \left| \int_0^{T-tT} \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\theta_{-X_{tT}}\eta_{tT})\, ds - \mathbb{E}_{\eta_0,0} \int_0^{T-tT} \sum_{z\in\mathbb{Z}^d} z\, S^{EP}_s[\alpha(\cdot,z)](\theta_{-X_{tT}}\eta_{tT})\, ds \right| \leq C_a\, |||\alpha|||_1. $$
So
$$ \left| M_T(t) - \frac{X_{tT} - vtT}{T^{\frac12}} \right| \leq \frac{2\, C_a\, |||\alpha|||_1}{T^{\frac12}}, $$
and the convergence of the drift-adjusted random walk to Brownian motion follows from the convergence of $M_T$.

5.3 Some remarks on the variance

A first comment is that the variance (19) is indeed non-degenerate. We recall that the generator $L$ acts here as a positive operator, as $A_\eta(\eta,0) = 0$ and $\mathrm{id}_x(\eta,0) = 0$. By splitting $L = L^E + L^{RW}$, we furthermore see that either $L^E\left[ A_\eta A_\eta^T \right](\eta)$ is $\mu^{EP}$-a.s. 0, which implies $A_\eta = 0$, or the variance is already positive by contributions from the environment alone. If $A_\eta = 0$, then the second half becomes
$$ L^{RW}\left[ \mathrm{id}_x\, \mathrm{id}_x^T \right](\eta,0) = \sum_{z\in\mathbb{Z}^d} zz^T \alpha(\eta,z), $$
which is 0 if and only if the rates $\alpha$ are 0 $\mu^{EP}$-a.s., in which case the walker is degenerate by construction.

However, we can also use the explicit formulation of the variance to study the behaviour as the speed of the environment is increased. For positive $\lambda$, denote by $L_\lambda = \lambda L^E + L^{RW}$ the generator where the environment runs at speed $\lambda$. By noticing that the new system corresponds to one where the rates are scaled by $1/\lambda$, plus a rescaling of time by $\lambda$ (or by following the proof of Theorem 3.1), one sees that $A^\lambda_\eta \in O(\lambda^{-1})$. Hence
$$ \lim_{\lambda\to\infty} \lambda L^E\left[ A^\lambda_\eta (A^\lambda_\eta)^T \right](\eta) = 0 $$
and
$$ \lim_{\lambda\to\infty} L^{RW}\left[ (A^\lambda_\eta + \mathrm{id}_x)(A^\lambda_\eta + \mathrm{id}_x)^T \right](\eta) = L^{RW}\left[ \mathrm{id}_x\, \mathrm{id}_x^T \right](\eta) = \sum_{z\in\mathbb{Z}^d} zz^T \alpha(\eta,z). $$
As the ergodic measure of the environment process converges to $\mu^E$, the ergodic measure of the environment, we obtain that the variance $\Sigma^2_\lambda$ converges to
$$ \int \sum_{z\in\mathbb{Z}^d} zz^T \alpha(\eta,z)\ \mu^E(d\eta), $$
the variance obtained when averaging over the environment.

6 Concentration estimates

In this section we obtain detailed results about the deviation from the mean of additive functionals of the environment process, and of the position of the walker itself. It is comparatively easy to obtain those from general methods. Let us give a quick overview of the general ingredients we use here to obtain concentration of additive functionals of Markov processes; they are general and not specific to this setting. One ingredient is the existence of a suitable estimate of the form
$$ [L(g - g(y))^2](y) \leq R\, |||g|||^2. \qquad (20) $$
The second is an estimate of the form
$$ \left|\left|\left| \int_0^T S_t f\, dt \right|\right|\right| \leq C_T\, |||f|||. \qquad (21) $$
In general, an estimate of $\| \int_0^T S_t f\, dt \|_{osc}$ is also convenient, but in our case that is implied, as $\|g\|_{osc} \leq |||g|||$ (note that $|||\cdot|||$ could be any kind of norm-like object in a different context). If estimates (20) and (21) are satisfied, then concentration estimates follow for additive functionals $\int_0^T f(X_t)\, dt$. The reader is referred to [] for a more in-depth look at those methods. Section 7 contains similar methods to obtain concentration for $f(X_T)$ instead of additive functionals.

In our context, Assumption 1b guarantees (21) for the environment, and Theorem 3.1, b) lifts that to the environment process. Similarly, Assumption 2 provides (20) for the environment, and Lemma 5.2 for the environment process.

6.1 Concentration of additive functionals of the environment process

Theorem 6.1. Assume Assumptions 1b and 2. Then there exists a constant $D > 0$ so that for any $f : \Omega \to \mathbb{R}$ with $|||f||| < \infty$ the deviation estimate
$$ \mathbb{P}^{EP}_{\nu_1}\left( \int_0^T f(\eta_t)\, dt > C_a |||f|||\,(r+1) + \int_0^T \nu_2\!\left(S_t^{EP} f\right) dt \right) \le e^{-\frac{r^2}{T D}} $$
holds for any $r \ge 0$ and any probability measures $\nu_1, \nu_2$ on $\Omega$. In the case that $\nu_1$ and $\nu_2$ are equal and point measures, the same holds with $(r+1)$ replaced by $r$.

The idea of the proof is to use Lemma 7.10. Before starting with the proof, we state a fact which is highly useful to prove the conditions of the general concentration estimates.

Lemma 6.2. a) Let $f : \Omega \to \mathbb{R}$ with $\|f\|_{osc} < \infty$. Then
$$ L\big[(f - f(\eta))^k\big](\eta) \le \|f\|_{osc}^{k-2}\, L\big[(f - f(\eta))^2\big](\eta); $$
b) Fix $\eta \in \Omega$, and let $g_\eta, f_\eta : E \to \mathbb{R}$ satisfy $f_\eta(\eta) = g_\eta(\eta) = 0$ and $f_\eta \le g_\eta$. Then $L f_\eta \le L g_\eta$.

Proof a) is a direct consequence of b) with $f_\eta = (f - f(\eta))^k$ and $g_\eta = \|f\|_{osc}^{k-2}\,(f - f(\eta))^2$. b) follows by the definition of $L$: $L f_\eta = \limsup_{\epsilon\to0} \frac1\epsilon S_\epsilon f_\eta \le \limsup_{\epsilon\to0} \frac1\epsilon S_\epsilon g_\eta = L g_\eta$.
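Lemma 6.2 a) is easy to sanity-check numerically on a finite-state chain, where $L$ acts through a jump-rate matrix. The rates and the function below are arbitrary toy choices of ours, purely illustrative:

```python
# Check L[(f - f(x))^k](x) <= ||f||_osc^(k-2) * L[(f - f(x))^2](x)
# for a toy 3-state jump-rate matrix (off-diagonal entries are rates).

rates = [
    [-3.0, 1.0, 2.0],
    [0.5, -1.5, 1.0],
    [2.0, 2.0, -4.0],
]
f = [0.0, 1.0, 3.0]
osc = max(f) - min(f)                      # ||f||_osc

def moment(x, k):
    """L[(f - f(x))^k](x): jump rates weighted by the k-th power of the increment."""
    return sum(rates[x][y] * (f[y] - f[x]) ** k for y in range(3) if y != x)

checks = [moment(x, k) <= osc ** (k - 2) * moment(x, 2)
          for x in range(3) for k in range(2, 7)]
```

The inequality holds term by term because $|f(y)-f(x)|^k \le \|f\|_{osc}^{k-2}\,(f(y)-f(x))^2$ and the off-diagonal rates are nonnegative, which is exactly the pointwise comparison used in part b).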

Remark This lemma is completely general and not restricted to this context.

Proof of Theorem 6.1 We want to apply Lemma 7.10. So we first prove that
$$ L^{EP}\!\left[\left(\int_0^{T'} S_t^{EP} f - S_t^{EP} f(\eta)\, dt\right)^{\!k}\right]\!(\eta) < D' \left(C_a |||f|||\right)^k \tag{22} $$
for any $T' > 0$, $\eta \in \Omega$ and $k \in \mathbb{N}$, $k \ge 2$. Recall that by Theorem 3.1 part a),
$$ \left\| \int_0^{T'} S_t^{EP} f\, dt \right\|_{osc} \le C_a |||f|||. $$
Therewith, using Lemma 6.2 a),
$$ L^{EP}\!\left[\left(\int_0^{T'} S_t^{EP} f - S_t^{EP} f(\eta)\, dt\right)^{\!k}\right]\!(\eta) \le \left(C_a |||f|||\right)^{k-2} L^{EP}\!\left[\left(\int_0^{T'} S_t^{EP} f - S_t^{EP} f(\eta)\, dt\right)^{\!2}\right]\!(\eta). $$
By Lemma 5.2,
$$ L^{EP}\!\left[\left(\int_0^{T'} S_t^{EP} f - S_t^{EP} f(\eta)\, dt\right)^{\!2}\right]\!(\eta) \le R^{EP} \left|\left|\left| \int_0^{T'} S_t^{EP} f - S_t^{EP} f(\eta)\, dt \right|\right|\right|^2. $$
Using Theorem 3.1 part b) for the estimate
$$ \left|\left|\left| \int_0^{T'} S_t^{EP} f - S_t^{EP} f(\eta)\, dt \right|\right|\right| \le C_b |||f|||, $$
we choose $D' = R^{EP} C_b^2 / C_a^2$ to obtain (22). As a direct consequence, Lemma 7.10 implies
$$ \mathbb{P}^{EP}_{\nu_1}\left( \int_0^T f(\eta_t^{EP})\, dt > C_a |||f|||\,(r+1) + \int_0^T \nu_2\!\left(S_t^{EP} f\right) dt \right) \le e^{-\frac12 \frac{r^2}{T D' + \frac13 r}}. $$
Since $|||f||| \ge \|f\|_{osc}$, the left-hand side is in fact 0 for $r \ge \frac{T}{C_a}$. So we can further estimate that probability by replacing $\frac13 r$ by $\frac{T}{3 C_a}$. Choosing $D = 2 D' + \frac{2}{3 C_a}$ completes the proof.

6.2 Concentration estimates for the position of the walker

Note how Theorem 6.1 already gives us a strong concentration property for the rates of the walker $X_t$. However, it is a bit more tricky to obtain good concentration estimates for the position of the walker itself. For such results we need moment conditions on the transition rates of the walker which are comparable to the strength of the concentration estimate. For example, if the jumps are of bounded size or have some exponential moment, then we obtain a concentration estimate which is Gaussian for small and exponential for large deviations. Note that it is impossible to obtain purely Gaussian concentration, as the example of jumps of size 1 to the right at rate 1, independent of the environment, i.e. a Poisson process, shows.

Theorem 6.3. Assume Assumptions 1b and 2. Also assume that $|||\alpha|||_1 < \infty$ and that there exists some $M < \infty$ so that the bound $\|\alpha\|_k \le M^k$ is satisfied for any $k \in \mathbb{N}$. Then there exist constants $c_1, c_2 > 0$ so that for any probability distributions $\nu_1, \nu_2$ on $\Omega$ and any $r > 0$
$$ \mathbb{P}_{\nu_1,0}\left( \| X_T - \mathbb{E}_{\nu_2,0} X_T \| > c_1 r + 2 d\, C_a |||\alpha|||_1 \right) \le 2 d\, e^{-\frac12 \frac{r^2}{T c_2 + \frac13 r}}. $$

If only some moments for the jumps of the walker exist, then we still get a concentration estimate, although it is correspondingly weaker.

Theorem 6.4. Assume Assumptions 1b and 2. Fix $p > 1$. Assume that $|||\alpha|||_1 < \infty$ and $\|\alpha\|_p < \infty$. Then there exist a constant $c > 0$ depending on the process, but not on $p$, and a constant $c_p$ depending only on $p$, so that for any probability distributions $\nu_1, \nu_2$ on $\Omega$ and any $r > 0$
$$ \mathbb{P}_{\nu_1,0}\left( \| X_T - \mathbb{E}_{\nu_2,0} X_T \| > r \right) \le c_p\, c^p\, \frac{T^{\frac p2} + 1}{r^p}. $$

The remainder of this section will deal with the proofs of the theorems. The next lemma verifies the conditions of the more general theorems in Section 7, from which the results for the walker are immediate.

Lemma 6.5. Assume Assumptions 1b and 2. For any vector $v \in \mathbb{R}^d$ with $\|v\| = 1$ we consider the function $f_v : \Omega \times \mathbb{Z}^d \to \mathbb{R}$, $f_v(\eta, x) = \langle x, v \rangle$. Then
$$ L\big[(S_t f_v - S_t f_v(\eta,x))^k\big](\eta,x) \le 2^k R^{EP} C_b^2\, C_a^{k-2}\, |||\alpha|||_1^k + 2^k \|\alpha\|_k. $$

Proof We first note that $f_v$ is in the domain of the generator $L$, with
$$ L f_v(\eta, x) = \sum_{z\in\mathbb{Z}^d} \alpha(\theta_{-x}\eta, z)\, \langle z, v \rangle. $$
Since this is uniformly bounded in $\eta$ and $x$, the condition of Proposition 7.8 is satisfied. Also observe that
$$ S_t f_v(\eta, x) = \int_0^t S_r^{EP} \underbrace{\Big[\sum_{z\in\mathbb{Z}^d} \alpha(\cdot, z)\, \langle z, v \rangle\Big]}_{=:\, g(\cdot)} (\theta_{-x}\eta)\, dr + f_v(\eta, x). $$

With this in mind, using Lemma 6.2,
$$ L\big[(S_t f_v - S_t f_v(\eta,x))^k\big](\eta,x) = L\left[\left( \int_0^t S_r^{EP} g(\theta_{-\cdot}\,\cdot) - S_r^{EP} g(\theta_{-x}\eta)\, dr + f_v - f_v(\eta,x) \right)^{\!k}\right]\!(\eta,x) $$
$$ \le 2^k\, L\left[\left( \int_0^t S_r^{EP} g(\theta_{-\cdot}\,\cdot) - S_r^{EP} g(\theta_{-x}\eta)\, dr \right)^{\!k}\right]\!(\eta,x) + 2^k\, L\big[(f_v - f_v(\eta,x))^k\big](\eta,x). $$
To estimate the right-hand term, as $f_v$ is independent of $\eta$,
$$ L\big[(f_v - f_v(\eta,x))^k\big](\eta,x) = \sum_{z\in\mathbb{Z}^d} \alpha(\theta_{-x}\eta, z)\, \langle z, v \rangle^k \le \sum_{z\in\mathbb{Z}^d} \sup_{\eta\in\Omega} \alpha(\eta, z)\, \|z\|^k = \|\alpha\|_k. $$
To treat the left-hand term, we first notice that it can be rewritten in terms of the generator of the environment process:
$$ L\left[\left( \int_0^t S_r^{EP} g(\theta_{-\cdot}\,\cdot) - S_r^{EP} g(\theta_{-x}\eta)\, dr \right)^{\!k}\right]\!(\eta,x) = L^{EP}\left[\left( \int_0^t S_r^{EP} g - S_r^{EP} g(\theta_{-x}\eta)\, dr \right)^{\!k}\right]\!(\theta_{-x}\eta). $$
Next, we use Theorem 3.1 b) with the fact that $|||g||| \le |||\alpha|||_1$ to obtain
$$ \left|\left|\left| \int_0^t S_r^{EP} g\, dr \right|\right|\right| \le C_b\, |||\alpha|||_1. $$
Together with Lemma 5.2 we therefore arrive at
$$ L^{EP}\left[\left( \int_0^t S_r^{EP} g - S_r^{EP} g(\theta_{-x}\eta)\, dr \right)^{\!2}\right]\!(\theta_{-x}\eta) \le R^{EP} C_b^2\, |||\alpha|||_1^2. $$
Higher moments we reduce to second moments via Lemma 6.2, utilizing Theorem 3.1 a):
$$ L^{EP}\left[\left( \int_0^t S_r^{EP} g - S_r^{EP} g(\theta_{-x}\eta)\, dr \right)^{\!k}\right]\!(\theta_{-x}\eta) \le \left(C_a |||\alpha|||_1\right)^{k-2} L^{EP}\left[\left( \int_0^t S_r^{EP} g - S_r^{EP} g(\theta_{-x}\eta)\, dr \right)^{\!2}\right]\!(\theta_{-x}\eta). $$

Proof of Theorem 6.3 Let $(e_i)_{1\le i\le d}$ be an orthonormal basis of $\mathbb{R}^d$, and write $e_{i+d} := -e_i$. Then
$$ \mathbb{P}_{\nu_1,0}\left( \| X_T - \mathbb{E}_{\nu_2,0} X_T \| > r + 2 d\, C_a |||\alpha|||_1 \right) \le \sum_{i=1}^{2d} \mathbb{P}_{\nu_1,0}\left( f_{e_i}(X_T) - \mathbb{E}_{\nu_2,0} f_{e_i}(X_T) > \frac{1}{2d}\, r + C_a |||\alpha|||_1 \right). $$

Those probabilities we can estimate with Corollary 7.9, whose condition is satisfied according to Lemma 6.5 and the conditions of this theorem. At this point we use the fact that
$$ S_T f_{e_i}(\eta, 0) - S_T f_{e_i}(\xi, 0) = \int_0^T S_t^{EP} g_i(\eta) - S_t^{EP} g_i(\xi)\, dt $$
with $g_i(\eta) = \sum_{z\in\mathbb{Z}^d} \alpha(\eta, z)\, f_{e_i}(z)$ to get the estimate $C_a |||\alpha|||_1$ for the constant $a$ in Corollary 7.9. The constants $c_1, c_2$ are then chosen appropriately.

Proof of Theorem 6.4 We will only give a rough sketch of the proof. It is an application of Theorem 7.11 to obtain an estimate on the $p$th moment, plus Markov's inequality. To use the estimate in Theorem 7.11, the general idea is to use the fact that
$$ \mathbb{E}_x X_t = \int_0^t S_s^{EP}\Big[ \sum_{z\in\mathbb{Z}^d} z\, \alpha(\cdot, z) \Big](x)\, ds + x, $$
together with applications of Theorem 3.1. The first term on the right-hand side of the estimate in Theorem 7.11 is the main contributor and can be estimated by
$$ 2^{\frac p2}\, T^{\frac p2} \left( C_b^p\, |||\alpha|||_1^p \left( R^E + L^J_{|||\cdot|||\to|||\cdot|||} \right)^{\frac p2} + \|\alpha\|_2^{\frac p2} \right). $$
The second term is estimated by
$$ 2^p \left( C_b^p\, |||\alpha|||_1^p + \|\alpha\|_p \right) $$
and the third by
$$ \left( C_a |||\alpha|||_1 \right)^p. $$
Combining the various constants, we then get the estimate
$$ \mathbb{E}_{\nu_1} \| X_T - \mathbb{E}_{\nu_2} X_T \|^p \le c_p\, C^p \left( T^{\frac p2} + 1 \right). $$

6.3 Transience and recurrence

Using the concentration estimates and the convergence to Brownian motion, questions like transience or recurrence of the walker can be treated rather easily. The following two statements about transience and recurrence are not meant to be exhaustive; instead they showcase typical uses of the stronger statements of the previous sections.

Theorem 6.6 (Transience). Assume Assumptions 1b and 2 and suppose $\|\alpha\|_{2+\epsilon} < \infty$ for some $\epsilon > 0$. If the asymptotic speed $v = \lim_{t\to\infty} \frac{X_t}{t}$ is non-zero, then the random walk $X_t$ is transient.

Proof Since $a := \sum_{z\in\mathbb{Z}^d} \sup_{\eta\in\Omega} \alpha(\eta, z) < \infty$, the duration the walker $X_t$ stays at the origin at each visit is at least exp($a$)-distributed. Hence it is sufficient for transience to show that $\mathbb{P}_{\eta,0}(X_t = 0)$ is integrable in $t$, which can be seen from Theorem 6.4:
$$ \mathbb{P}_{\eta,0}(X_t = 0) \le \mathbb{P}_{\eta,0}\big( | X_t - vt | \ge vt \big) \le c_{2+\epsilon}\, c^{2+\epsilon}\, \frac{t^{\frac{2+\epsilon}{2}} + 1}{(vt)^{2+\epsilon}} \wedge 1, $$
where we used the fact that $vt = \mathbb{E}_{\mu^{EP},0}\, X_t$.

Theorem 6.7 (Recurrence). Let the dimension $d = 1$ and assume Assumptions 1b and 2. Suppose the walker only has jumps of size 1, i.e.
$$ \alpha(\eta, z) = 0 \qquad \forall\, \eta \in \Omega,\ z \notin \{-1, 1\}, $$
and that it has zero speed. Then $X_t$ is recurrent.

Proof By Theorem 5.4, $T^{-\frac12} X_{tT}$ converges to Brownian motion. Hence there exists (with probability 1) an infinite sequence $t_1 < t_2 < \dots$ of times with $X_{t_{2n}} < 0$ and $X_{t_{2n+1}} > 0$, $n \in \mathbb{N}$. As the walker has only jumps of size 1, it will traverse the origin between $t_n$ and $t_{n+1}$ for any $n \in \mathbb{N}$.

7 Appendix: General concentration results

In this section we will prove general concentration results for Markov processes, forgetting the connection to random walks in dynamic random environments. To avoid a clash of notation, we will assume that $F$ is a Polish space, elements of which are denoted $x, y$, and that $(Y_t)_{t\ge0}$ is a Feller process on $F$ with generator $A$ and semigroup $(P_t)_{t\ge0}$ (on an appropriate space like $C_0(F)$, $C_b(F)$, $C(F)$, $B(F)$, ...). The canonical filtration is denoted by $(\mathcal{F}_t)_{t\ge0}$. The proofs will use a certain martingale approximation.

Notation 7.1. Given $f : F \to \mathbb{R}$, we define the martingale
$$ M_T^f(t) := \mathbb{E}\left[ f(Y_T) \mid \mathcal{F}_t \right] - \mathbb{E}\left[ f(Y_T) \mid \mathcal{F}_0 \right] = P_{T-t} f(Y_t) - P_T f(Y_0), \qquad t \in [0, T]. $$

Remark Note that this martingale is different from the canonical martingale $f(Y_t) - \int_0^t A f(Y_s)\, ds$, but there are some similarities.
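On a finite state space, the martingale property of $M_T^f$ reduces to the semigroup identity $P_s P_u = P_{s+u}$: conditionally on $\mathcal{F}_t$, the increment has mean $P_s P_{T-t-s} f(Y_t) - P_{T-t} f(Y_t) = 0$. A small numerical sketch of this identity (toy generator and function, our own choices), using a truncated series for $P_t = e^{tQ}$:

```python
# Verify P_s (P_u f) = P_{s+u} f for a toy 2-state generator, which is
# exactly what makes M_T^f(t) = P_{T-t} f(Y_t) - P_T f(Y_0) a martingale.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def semigroup(Q, t, terms=60):
    """Truncated power series for P_t = exp(t Q)."""
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # k = 0 term
    term = [row[:] for row in P]
    tQ = [[t * Q[i][j] for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        # term becomes (tQ)^k / k! at step k
        term = mat_mul(term, [[tQ[i][j] / k for j in range(n)] for i in range(n)])
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

def apply(P, f):
    return [sum(P[i][j] * f[j] for j in range(len(f))) for i in range(len(P))]

Q = [[-2.0, 2.0], [1.0, -1.0]]
f = [1.0, -1.0]
s, u = 0.3, 0.5
lhs = apply(semigroup(Q, s), apply(semigroup(Q, u), f))   # P_s (P_u f)
rhs = apply(semigroup(Q, s + u), f)                       # P_{s+u} f
gap = max(abs(a - b) for a, b in zip(lhs, rhs))
```

The gap is zero up to truncation error, confirming the tower-property computation behind Notation 7.1 in this toy case.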

To avoid questions regarding the domain of the generator A, we also introduce (nonlinear) versions of A in a natural way which are defined for all functions.


Notation 7.2. For any $f : F \to \mathbb{R}$, define
$$ \overline{A} f(x) := \limsup_{\epsilon\to0} \frac1\epsilon \left( P_\epsilon f(x) - f(x) \right); \qquad \underline{A} f(x) := \liminf_{\epsilon\to0} \frac1\epsilon \left( P_\epsilon f(x) - f(x) \right). $$
We call $\overline{A}$ the upper generator and $\underline{A}$ the lower generator.

Remark a) Note that $\overline{A} f(x), \underline{A} f(x) \in \mathbb{R} \cup \{\pm\infty\}$. But if $f \in \mathrm{dom}(A)$, then $\overline{A} f = \underline{A} f = A f$.
b) If we look at the martingale
$$ \widetilde{M}^f(t) = f(Y_t) - \int_0^t A f(Y_s)\, ds, \tag{23} $$
then its predictable quadratic variation process is given by
$$ \left\langle \widetilde{M}^f \right\rangle_t = \int_0^t A f^2(Y_s) - 2 f(Y_s)\, A f(Y_s)\, ds = \int_0^t \left[ A (f - f(Y_s))^2 \right](Y_s)\, ds \tag{24} $$
under the assumption that $f, f^2 \in \mathrm{dom}(A)$. If that is not the case, then we still have an upper (lower) bound on $\langle \widetilde{M}^f \rangle$ when using the upper generator $\overline{A}$ (lower generator $\underline{A}$) in (24).

The following lemma shows that $\big[\overline{A}(f - f(x))(g - g(x))\big](x)$ still has the property of a quadratic form.

Lemma 7.3. Let $f, g : F \to \mathbb{R}$. Then
$$ \left| \overline{A}\big[(f - f(x))(g - g(x))\big](x) \right|^2 \le \overline{A}\big[(f - f(x))^2\big](x)\; \overline{A}\big[(g - g(x))^2\big](x). $$

Proof of the lemma Utilizing the definition of $\overline{A}$ and the Cauchy-Schwarz inequality,
$$ \left| \overline{A}\big[(f - f(x))(g - g(x))\big](x) \right|^2 \le \limsup_{\epsilon\to0} \frac1{\epsilon^2} \left| \mathbb{E}_x (f(Y_\epsilon) - f(x))(g(Y_\epsilon) - g(x)) \right|^2 \le \limsup_{\epsilon\to0} \frac1{\epsilon^2}\, \mathbb{E}_x (f(Y_\epsilon) - f(x))^2\; \mathbb{E}_x (g(Y_\epsilon) - g(x))^2 \le \left[ \overline{A}(f - f(x))^2(x) \right] \left[ \overline{A}(g - g(x))^2(x) \right]. $$

Remark It is also interesting to note that $\langle f, g \rangle_x = \overline{A}\big[(f - f(x))(g - g(x))\big](x)$ is a positive semi-definite bilinear form. This form can be related to the martingales (23) by their predictable covariation process, which is given by
$$ \left\langle \widetilde{M}^f, \widetilde{M}^g \right\rangle_t = \int_0^t \langle f, g \rangle_{Y_s}\, ds. $$
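The quadratic-form property of Lemma 7.3 can again be sanity-checked on a finite-state chain, where $\overline{A}$ reduces to a jump-rate matrix; the rates and functions below are toy choices of ours:

```python
# Check (A[(f - f(x))(g - g(x))](x))^2 <= A[(f - f(x))^2](x) * A[(g - g(x))^2](x)
# for a toy 3-state jump-rate matrix; this is Cauchy-Schwarz with weights
# given by the (nonnegative) off-diagonal rates.

rates = [
    [-3.0, 1.0, 2.0],
    [0.5, -1.5, 1.0],
    [2.0, 2.0, -4.0],
]
f = [0.0, 2.0, -1.0]
g = [1.0, -1.0, 0.5]

def gamma(x, u, v):
    """A[(u - u(x))(v - v(x))](x), the bilinear form at state x."""
    return sum(rates[x][y] * (u[y] - u[x]) * (v[y] - v[x])
               for y in range(3) if y != x)

cs_holds = all(gamma(x, f, g) ** 2 <= gamma(x, f, f) * gamma(x, g, g) + 1e-12
               for x in range(3))
```

Since the off-diagonal rates are nonnegative, each `gamma(x, ., .)` is an inner product of increments with nonnegative weights, so the inequality is exactly the Cauchy-Schwarz inequality evoked in the proof.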

Before going into details about the martingale $M_T^f$ and its properties, we need the following little fact.

Lemma 7.4. Fix $\delta > 0$, $k \in \mathbb{N}$, $k \ge 2$ and a probability measure $\mu$. Let $f, g : [0, \delta] \to L^k(\mu)$. Assume that
$$ \lim_{\epsilon\to0} \frac1\epsilon \mu\big( f(\epsilon)^k \big) = a \in \mathbb{R}, \qquad \lim_{\epsilon\to0} \frac1\epsilon \mu\big( g(\epsilon)^k \big) = 0. $$
Then
$$ \lim_{\epsilon\to0} \frac1\epsilon \mu\big( (f(\epsilon) + g(\epsilon))^k \big) = a. $$
The same holds true for $\limsup$ and $\liminf$, as long as $\lim_{\epsilon\to0} \frac1\epsilon \mu\big( g(\epsilon)^k \big) = 0$.

Proof First,
$$ \mu\big( (f(\epsilon) + g(\epsilon))^k \big) = \sum_{j=0}^k \binom{k}{j} \mu\big( f(\epsilon)^j g(\epsilon)^{k-j} \big). $$
By Hölder's inequality with $p = \frac kj$, $q = \frac{k}{k-j}$,
$$ \mu\big( f(\epsilon)^j g(\epsilon)^{k-j} \big) \le \mu\big( f(\epsilon)^k \big)^{\frac jk}\, \mu\big( g(\epsilon)^k \big)^{\frac{k-j}{k}}. $$
Therewith and with the assumptions on the limits,
$$ \lim_{\epsilon\to0} \frac1\epsilon \mu\big( f(\epsilon)^j g(\epsilon)^{k-j} \big) \le \lim_{\epsilon\to0} \left( \frac1\epsilon \mu\big( f(\epsilon)^k \big) \right)^{\frac jk} \left( \frac1\epsilon \mu\big( g(\epsilon)^k \big) \right)^{\frac{k-j}{k}} = 0 $$
for any $j < k$.

Now we will state and prove the key lemma regarding the martingale from Notation 7.1. It is very parallel to Lemma 2.5 in [10], where the identical approach is used to deal with the analogous martingale constructed from $\int_0^T f(Y_t)\, dt$ instead of $f(Y_T)$.

Lemma 7.5. Fix $k \in \mathbb{N}$, $k \ge 2$. If
$$ \lim_{\epsilon\to0} \frac1\epsilon \mathbb{E}_x \big( P_{T-t-\epsilon} f(Y_\epsilon) - P_{T-t} f(Y_\epsilon) \big)^k = 0 \qquad \forall\, x \in F, $$
then:
a) $\displaystyle \limsup_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \big( M_T^f(t+\epsilon) - M_T^f(t) \big)^k \,\middle|\, \mathcal{F}_t \right] = \overline{A}\big[ (P_{T-t} f(\cdot) - P_{T-t} f(Y_t))^k \big](Y_t)$;
b) $\displaystyle \liminf_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \big( M_T^f(t+\epsilon) - M_T^f(t) \big)^k \,\middle|\, \mathcal{F}_t \right] = \underline{A}\big[ (P_{T-t} f(\cdot) - P_{T-t} f(Y_t))^k \big](Y_t)$.

Remark If $\sup_{x\in F} \overline{A} f(x) < \infty$, then the condition is satisfied for any $k \ge 2$.

Proof First,
$$ M_T^f(t+\epsilon) - M_T^f(t) = P_{T-t-\epsilon} f(Y_{t+\epsilon}) - P_T f(Y_0) - P_{T-t} f(Y_t) + P_T f(Y_0) = P_{T-t-\epsilon} f(Y_{t+\epsilon}) - P_{T-t} f(Y_{t+\epsilon}) + P_{T-t} f(Y_{t+\epsilon}) - P_{T-t} f(Y_t). $$
By our assumptions and the Markov property,
$$ \lim_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \big( P_{T-t-\epsilon} f(Y_{t+\epsilon}) - P_{T-t} f(Y_{t+\epsilon}) \big)^k \,\middle|\, \mathcal{F}_t \right] = 0. $$
Hence, by Lemma 7.4,
$$ \limsup_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \big( M_T^f(t+\epsilon) - M_T^f(t) \big)^k \,\middle|\, \mathcal{F}_t \right] = \limsup_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \big( P_{T-t} f(Y_{t+\epsilon}) - P_{T-t} f(Y_t) \big)^k \,\middle|\, \mathcal{F}_t \right] = \overline{A}\big[ (P_{T-t} f(\cdot) - P_{T-t} f(Y_t))^k \big](Y_t). $$

The argument for the $\liminf$ is analogous.

Proposition 7.6. Assume that for all $0 < t \le T$ and all $x \in F$, $(P_t f - P_t f(x))^2 \in \mathrm{dom}(A)$ and
$$ \lim_{\epsilon\to0} \frac1\epsilon \mathbb{E}_x \big( P_{t-\epsilon} f(Y_\epsilon) - P_t f(Y_\epsilon) \big)^2 = 0. $$
Then the predictable quadratic variation of $M_T^f$ is
$$ \left\langle M_T^f \right\rangle_t = \int_0^t A\big[ (P_{T-s} f(\cdot) - P_{T-s} f(Y_s))^2 \big](Y_s)\, ds. $$

Proof By Lemma 7.5,
$$ \frac{d}{dt} \left\langle M_T^f \right\rangle_t = \lim_{\epsilon\downarrow0} \frac1\epsilon\, \mathbb{E}\left[ \big( M_T^f(t+\epsilon) - M_T^f(t) \big)^2 \,\middle|\, \mathcal{F}_t \right] = A\big[ (P_{T-t} f(\cdot) - P_{T-t} f(Y_t))^2 \big](Y_t). $$

Theorem 7.7. Assume that for all $k \in \mathbb{N}$, $k \ge 2$, all $x \in F$ and all $0 < t \le T$,
$$ \lim_{\epsilon\to0} \frac1\epsilon \mathbb{E}_x \big( P_{t-\epsilon} f(Y_\epsilon) - P_t f(Y_\epsilon) \big)^k = 0. $$
Also assume that
$$ \sup_{0\le t\le T} \sup_{x\in F} \sum_{k=2}^\infty \frac1{k!}\, \overline{A}\big[ (P_t f - P_t f(x))^k \big](x) < \infty. $$

Write
$$ \overline{N}_T^f(t) := \exp\left( M_T^f(t) - \int_0^t \sum_{k=2}^\infty \frac1{k!}\, \overline{A}\big[ (P_{T-s} f - P_{T-s} f(Y_s))^k \big](Y_s)\, ds \right), \qquad 0 \le t \le T; $$
$$ \underline{N}_T^f(t) := \exp\left( M_T^f(t) - \int_0^t \sum_{k=2}^\infty \frac1{k!}\, \underline{A}\big[ (P_{T-s} f - P_{T-s} f(Y_s))^k \big](Y_s)\, ds \right), \qquad 0 \le t \le T. $$
Then $(\overline{N}_T^f(t))_{0\le t\le T}$ is a supermartingale and $(\underline{N}_T^f(t))_{0\le t\le T}$ is a submartingale. If $(P_{T-t} f - P_{T-t} f(x))^k \in \mathrm{dom}(A)$ for all $x$ and $k$, then both are equal and a martingale.

Proof It is sufficient to prove that $\limsup_{\epsilon\to0} \frac1\epsilon \mathbb{E}\big[ \overline{N}_T^f(t+\epsilon) - \overline{N}_T^f(t) \mid \mathcal{F}_t \big] \le 0$ (or the analogue with $\liminf$ for $\underline{N}_T^f$). For a shorter notation, write
$$ \psi(s, x, k) := \overline{A}\big[ (P_{T-s} f - P_{T-s} f(x))^k \big](x). $$
Then
$$ \limsup_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \overline{N}_T^f(t+\epsilon) - \overline{N}_T^f(t) \,\middle|\, \mathcal{F}_t \right] = \overline{N}_T^f(t) \limsup_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \exp\left( M_T^f(t+\epsilon) - M_T^f(t) - \int_t^{t+\epsilon} \sum_{k=2}^\infty \frac1{k!} \psi(s, Y_s, k)\, ds \right) - 1 \,\middle|\, \mathcal{F}_t \right] $$
$$ \le \overline{N}_T^f(t) \limsup_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \sum_{l=1}^\infty \frac1{l!} \left( M_T^f(t+\epsilon) - M_T^f(t) - \int_t^{t+\epsilon} \sum_{k=2}^\infty \frac1{k!} \psi(s, Y_s, k)\, ds \right)^{\!l} \,\middle|\, \mathcal{F}_t \right]. $$
First, since $\overline{N}_T^f(t) > 0$, we can ignore it. Next, we study the terms individually in $l$. For $l = 1$, as $M_T^f$ is a martingale, we get just the term
$$ - \sum_{k=2}^\infty \frac1{k!}\, \psi(t, Y_t, k). $$
For $l > 1$, we use the fact that
$$ \int_t^{t+\epsilon} \sum_{k=2}^\infty \frac1{k!}\, \psi(s, Y_s, k)\, ds \le \epsilon \sup_{0\le t\le T} \sup_{x\in F} \sum_{k=2}^\infty \frac1{k!}\, \psi(t, x, k) = \epsilon \cdot \mathrm{const}. $$
Therefore
$$ \lim_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \left( \int_t^{t+\epsilon} \sum_{k=2}^\infty \frac1{k!}\, \psi(s, Y_s, k)\, ds \right)^{\!l} \,\middle|\, \mathcal{F}_t \right] = 0, $$
and by Lemma 7.4 we can drop that term. Finally, by Lemma 7.5,
$$ \limsup_{\epsilon\to0} \frac1\epsilon\, \mathbb{E}\left[ \big( M_T^f(t+\epsilon) - M_T^f(t) \big)^l \,\middle|\, \mathcal{F}_t \right] = \psi(t, Y_t, l). $$
So the terms for $l \ge 2$ sum to $\sum_{l=2}^\infty \frac1{l!}\, \psi(t, Y_t, l)$, which cancels exactly with the term for $l = 1$.

As a direct consequence we can estimate the exponential moment of $f(Y_T)$.

Proposition 7.8. Assume that for all $k \in \mathbb{N}$, $k \ge 2$, all $x \in F$ and all $0 < t \le T$,
$$ \lim_{\epsilon\to0} \frac1\epsilon \mathbb{E}_x \big( P_{t-\epsilon} f(Y_\epsilon) - P_t f(Y_\epsilon) \big)^k = 0. $$
Let $\nu_1, \nu_2$ be two arbitrary probability measures on $F$. Then
$$ \mathbb{E}_{\nu_1} e^{f(Y_T) - \nu_2(P_T f)} \le c(\nu_1, \nu_2) \exp\left( \int_0^T \sup_{x\in F} \sum_{k=2}^\infty \frac1{k!}\, \overline{A}\big[ (P_t f - P_t f(x))^k \big](x)\, dt \right) $$
and
$$ \mathbb{E}_{\nu_1} e^{f(Y_T) - \nu_2(P_T f)} \ge c(\nu_1, \nu_2) \exp\left( \int_0^T \inf_{x\in F} \sum_{k=2}^\infty \frac1{k!}\, \underline{A}\big[ (P_t f - P_t f(x))^k \big](x)\, dt \right), $$
with
$$ c(\nu_1, \nu_2) = \int e^{P_T f(x) - \nu_2(P_T f)}\, \nu_1(dx). $$

Notably, $c(\delta_x, \delta_x) = 1$.

Proof As a first observation,
$$ \mathbb{E}_{\nu_1} e^{f(Y_T) - \nu_2(P_T f)} = \mathbb{E}_{\nu_1}\left[ e^{P_T f(Y_0) - \nu_2(P_T f)}\, \mathbb{E}\left[ e^{M_T^f(T)} \,\middle|\, \mathcal{F}_0 \right] \right]. $$
Next, we rewrite the conditional expectation as
$$ \mathbb{E}\left[ e^{M_T^f(T)} \,\middle|\, \mathcal{F}_0 \right] = \mathbb{E}\left[ \overline{N}_T^f(T)\, e^{\int_0^T \sum_{k=2}^\infty \frac1{k!} \overline{A}[(P_{T-t} f - P_{T-t} f(Y_t))^k](Y_t)\, dt} \,\middle|\, \mathcal{F}_0 \right], $$
where $\overline{N}_T^f$ is the supermartingale from Theorem 7.7. As $\overline{N}_T^f$ is positive, we can estimate that term by taking the supremum over the possible values of $Y_t$ in the integral, so that we obtain
$$ \mathbb{E}\left[ e^{M_T^f(T)} \,\middle|\, \mathcal{F}_0 \right] \le \mathbb{E}\left[ \overline{N}_T^f(T) \,\middle|\, \mathcal{F}_0 \right] e^{\int_0^T \sup_{x\in F} \sum_{k=2}^\infty \frac1{k!} \overline{A}[(P_{T-t} f - P_{T-t} f(x))^k](x)\, dt}. $$
As $\overline{N}_T^f$ is a supermartingale and $\overline{N}_T^f(0) = 1$, we finally arrive at
$$ \mathbb{E}_{\nu_1} e^{f(Y_T) - \nu_2(P_T f)} \le c(\nu_1, \nu_2) \exp\left( \int_0^T \sup_{x\in F} \sum_{k=2}^\infty \frac1{k!}\, \overline{A}\big[ (P_{T-t} f - P_{T-t} f(x))^k \big](x)\, dt \right). $$
Reversing the direction of time in the integral yields the claim, and the proof for the other bound is analogous.

Corollary 7.9. Let $f : F \to \mathbb{R}$ be in $\mathrm{dom}(A)$ with $\| A f \|_\infty < \infty$. Assume that
$$ \overline{A}\big[ (P_t f - P_t f(x))^k \big](x) \le c_1 c_2^k $$
uniformly in $x \in F$ and $t \in [0, T]$. Then, for any $r > 0$,
$$ \mathbb{P}_x\big( f(Y_T) - \mathbb{E}_x f(Y_T) > r \big) \le e^{-\frac12 \frac{(r/c_2)^2}{T c_1 + \frac13 \frac{r}{c_2}}}. $$
If $f$ is bounded, then for any probability measures $\nu_1, \nu_2$ on $F$,
$$ \mathbb{P}_{\nu_1}\big( f(Y_T) - \mathbb{E}_{\nu_2} f(Y_T) > r + \| P_T f \|_{osc} \big) \le e^{-\frac12 \frac{(r/c_2)^2}{T c_1 + \frac13 \frac{r}{c_2}}}. $$

Proof By Markov's inequality,
$$ \mathbb{P}_{\nu_1}\big( f(Y_T) - \mathbb{E}_{\nu_2} f(Y_T) > r \big) \le \mathbb{E}_{\nu_1} e^{\lambda f(Y_T) - \mathbb{E}_{\nu_2} \lambda f(Y_T)}\, e^{-\lambda r} \le e^{T c_1 \sum_{k=2}^\infty \frac{\lambda^k c_2^k}{k!} - \lambda r + \lambda \| P_T f \|_{osc}}, $$
where the last step is the result of Proposition 7.8. Note that $c(\nu_1, \nu_2) \le e^{\lambda \| P_T f \|_{osc}}$ and $c(\delta_x, \delta_x) = e^0 = 1$. For now we continue with the term $\lambda \| P_T f \|_{osc}$ present. Through optimizing $\lambda$, the exponent becomes
$$ \frac{r - \| P_T f \|_{osc}}{c_2} - T c_1 \left( \frac{r - \| P_T f \|_{osc}}{T c_1 c_2} + 1 \right) \log\left( \frac{r - \| P_T f \|_{osc}}{T c_1 c_2} + 1 \right). $$
By writing $r' = r - \| P_T f \|_{osc}$, we continue with
$$ \frac{r'}{c_2} - T c_1 \left( \frac{r'}{T c_1 c_2} + 1 \right) \log\left( \frac{r'}{T c_1 c_2} + 1 \right). $$
This step is not necessary in the case of a fixed initial configuration $x$, as then $r' = r$. To show that this term is less than $-\frac12 \frac{(r'/c_2)^2}{T c_1 + \frac13 \frac{r'}{c_2}}$, we first rewrite the claim as the following inequality:
$$ \log\left( \frac{r'}{T c_1 c_2} + 1 \right) \ge \frac{ \frac{r'}{c_2} + \frac12 \frac{(r'/c_2)^2}{T c_1 + \frac13 \frac{r'}{c_2}} }{ T c_1 + \frac{r'}{c_2} }. $$
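The comparison at the heart of this proof, a Bennett-type exponent dominated by a Bernstein-type one, can be checked numerically over a grid; the grid values below are arbitrary choices of ours:

```python
# Check that r'/c2 - T*c1*(1+u)*log(1+u), with u = r'/(T*c1*c2), is bounded
# above by the Bernstein-type exponent -(1/2)*(r'/c2)^2 / (T*c1 + r'/(3*c2)).

import math

def optimized_exponent(rp, T, c1, c2):
    u = rp / (T * c1 * c2)
    return rp / c2 - T * c1 * (1.0 + u) * math.log(1.0 + u)

def bernstein_exponent(rp, T, c1, c2):
    return -0.5 * (rp / c2) ** 2 / (T * c1 + rp / (3.0 * c2))

dominated = all(
    optimized_exponent(rp, T, c1, c2) <= bernstein_exponent(rp, T, c1, c2) + 1e-12
    for rp in (0.1, 1.0, 5.0, 50.0)
    for T in (0.5, 2.0)
    for c1 in (0.3, 1.0)
    for c2 in (0.5, 2.0)
)
```

This is the classical fact that $(1+u)\log(1+u) - u \ge \frac{u^2}{2 + \frac{2u}{3}}$, i.e. the Bennett bound is always at least as strong as the Bernstein bound.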

Through comparing the derivatives, one concludes that the left-hand side is indeed bigger than the right-hand side.

The next lemma is a slightly refined version of Corollary 2.7 in [10].

Lemma 7.10. Fix $T > 0$ and $f : F \to \mathbb{R}$. Assume that for some $T' > T$ and all $k \in \mathbb{N}$, $k \ge 2$, $\mathbb{E}_x \sup_{0\le t\le T'} f(Y_t)^k < \infty$. Assume also that for some constants $c_1, c_2 > 0$ the following estimates hold:
$$ \sup_{x,y\in F}\, \sup_{0\le T'\le T} \int_0^{T'} P_t f(x) - P_t f(y)\, dt \le c_1; $$
$$ \sup_{x\in F}\, \sup_{0\le T'\le T} \overline{A}\left[ \left( \int_0^{T'} P_t f - P_t f(x)\, dt \right)^{\!2} \right]\!(x) \le c_2 c_1^2. $$
Then
$$ \mathbb{P}_{\nu_1}\left( \int_0^T f(Y_t)\, dt > c_1 (r+1) + \int_0^T \nu_2(P_t f)\, dt \right) \le e^{-\frac12 \frac{r^2}{T c_2 + \frac13 r}} $$
for any $r > 0$ and any probability measures $\nu_1, \nu_2$ on $F$. If $\nu_1, \nu_2$ are equal and a Dirac measure, then one can replace $(r+1)$ by $r$.

The proof of this lemma is analogous to the proof of Corollary 7.9, using the corresponding exponential moment bound in Theorem 2.6 in [10].

Theorem 7.11. Let $f : F \to \mathbb{R}$ and fix $T > 0$. Assume that for all $x \in F$ and all $0$
