A Gaussian upper bound for martingale small-ball probabilities

arXiv:1405.5980v1 [math.PR] 23 May 2014

James R. Lee∗        Yuval Peres†        Charles K. Smart‡

Abstract. Consider a discrete-time martingale $\{X_t\}$ taking values in a Hilbert space $H$. We show that if for some $L \ge 1$, the bounds $E\big[\|X_{t+1} - X_t\|_H^2 \mid X_t\big] = 1$ and $\|X_{t+1} - X_t\|_H \le L$ are satisfied for all times $t \ge 0$, then there is a constant $c = c(L)$ such that for $1 \le R \le \sqrt{t}$,
$$P(\|X_t\|_H \le R \mid X_0 = x_0) \le c\,\frac{R}{\sqrt{t}}\, e^{-\|x_0\|_H^2/(6L^2 t)}.$$

Following [Lee-Peres, Ann. Probab. 2013], this has applications to diffusive estimates for random walks on vertex-transitive graphs.

1  Introduction

Let $H$ be a Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and let $\{X_n : n \ge 0\}$ denote a discrete-time $H$-valued martingale with respect to a filtration $\{\mathcal F_n\}$. We suppose that for some number $L > 0$ and all $n \ge 1$, we have $\|X_n - X_{n-1}\| \le L$ almost surely. For each $n \ge 1$, define the random variable $V_n = E\big[\|X_n - X_{n-1}\|^2 \mid \mathcal F_{n-1}\big]$. We prove small-ball estimates on such martingales under certain conditions on the sequence of conditional variances $\{V_n\}$. Specifically, we seek upper bounds on the quantity $P(\|X_n - X_0\| \le R)$ as a function of $R$ and $n$. One motivating application is to the analysis of random walks on groups [LP13], which we discuss more below.

Such martingales can exhibit surprising behavior even in the 1-dimensional case. In [GPZ13], the following is shown. For every $t \ge 1$, there is a real-valued martingale $\{X_n\}$ with $X_0 = 0$ satisfying $L = 1$ and $\frac12 \le V_n \le 1$ almost surely for all times $0 \le n \le t-1$, and such that $P(|X_t| \le 1) \ge c\,t^{-\alpha}$, where $c > 0$ is a universal constant and $\alpha < 1/2$. In other words, even under seemingly strong bounds on the conditional variances, $X_t$ can land near the origin with probability much greater than the order $t^{-1/2}$ achieved by simple random walk. Our primary goal is to prove that for $H$-valued martingales, such "non-Gaussian" behavior cannot happen if the sequence $\{V_n\}$ is deterministic.
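To make the $t^{-1/2}$ baseline concrete, here is a quick exact computation of the small-ball probability of the $\pm1$ simple random walk (an illustrative sketch in Python; it is not part of the original text):

```python
from math import comb, sqrt, pi

def srw_small_ball(t: int) -> float:
    """Exact P(|X_t| <= 1) for the +/-1 simple random walk started at 0."""
    if t % 2 == 0:
        # X_t has the parity of t, so for even t the event is {X_t = 0}.
        return comb(t, t // 2) / 2 ** t
    # For odd t the event is {X_t = -1} union {X_t = +1}.
    return 2 * comb(t, (t - 1) // 2) / 2 ** t

# The local CLT gives P(X_t = 0) ~ sqrt(2/(pi*t)) for even t, i.e. the
# small-ball probability decays at the Gaussian rate t^{-1/2}.
for t in [100, 400, 1600]:
    p = srw_small_ball(t)
    print(t, p, p * sqrt(t))   # last column approaches sqrt(2/pi) ~ 0.798
```

The [GPZ13] construction shows that a martingale with comparable conditional variances can beat this rate; the theorem below rules that out when the conditional variances are deterministic.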

∗Computer Science, University of Washington. Partially supported by NSF grant CCF-1217256.
†Microsoft Research.
‡Mathematics, MIT.


Theorem 1.1. Let $\{X_n\}$ be an $H$-valued martingale with respect to the filtration $\{\mathcal F_n\}$, and suppose there exists a sequence of numbers $\{v_n \ge 1 : n \ge 1\}$ such that for each $n \ge 1$, almost surely
$$E\big[\|X_n - X_{n-1}\|^2 \mid \mathcal F_{n-1}\big] = v_n \quad\text{and}\quad \|X_n - X_{n-1}\| \le L.$$
Then for every $n \ge 1$ and $1 \le R \le \sqrt{n}$, we have
$$P(\|X_n\| \le R \mid X_0 = x_0) \le \frac{c_L R}{\sqrt{n}}\, e^{-\|x_0\|^2/(6L^2 n)},$$
where $c_L > 0$ is a constant depending only on $L$.

Using the methods of [LP13] (which themselves are inspired by observations of Anna Erschler), this can be used to obtain a diffusive estimate for random walks on finitely-generated groups and, more generally, vertex-transitive graphs. In particular, the following result is proved in Section 3 using Theorem 1.1.

Theorem 1.2. For every infinite, locally-finite, connected, vertex-transitive graph $G$, there is a constant $C_G > 0$ such that if $\{Z_t\}$ is the simple random walk on $G$, then for every $\varepsilon > 0$ and every $t \ge 1/\varepsilon^2$,
$$P\big(\mathrm{dist}_G(Z_t, Z_0) \le \varepsilon\sqrt{t}\big) \le C_G\, \varepsilon,$$
where $\mathrm{dist}_G$ denotes the graph distance in $G$.

For the preceding theorem, one only requires the case $v_1 = v_2 = \cdots = 1$ in Theorem 1.1. We also prove a related theorem for finite vertex-transitive graphs. In that setting, one needs more general deterministic sequences $\{v_n\}$.

In an independent and concurrent work, Armstrong and Zeitouni [AZ14] give related small-ball estimates under different assumptions on the sequence $\{V_n\}$. In particular, the case $R = 1$, $v_1 = v_2 = \cdots = 1$, and $x_0 = 0$ of Theorem 1.1 is a corollary of their main theorem.
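For intuition, the hypothesis of Theorem 1.1 with $v_1 = v_2 = \cdots = 1$ and $L = 1$ is satisfied by simple random walk on $\mathbb{Z}^2$, viewed as an $\mathbb{R}^2$-valued martingale (each increment is a unit vector). The following Monte Carlo sketch compares an empirical small-ball probability against the $R/\sqrt{n}$ envelope; the sample sizes and parameters are our illustrative choices, not from the paper:

```python
import random

def srw2d_small_ball(n: int, R: float, trials: int = 5000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(||X_n|| <= R) for simple random walk on Z^2.

    The walk satisfies (M1) and (M2) with v_t = 1 and L = 1, since every
    increment is one of the four unit steps.
    """
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    hits = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(n):
            dx, dy = rng.choice(steps)
            x += dx
            y += dy
        if x * x + y * y <= R * R:
            hits += 1
    return hits / trials

# Theorem 1.1 (with x0 = 0) predicts P(||X_n|| <= R) <= c * R / sqrt(n).
n, R = 400, 2.0
p = srw2d_small_ball(n, R)
print(p, R / n ** 0.5)   # the estimate sits below the R/sqrt(n) envelope
```

Here the empirical probability is in fact much smaller than $R/\sqrt{n}$; the theorem's content is that no martingale with deterministic conditional variances can exceed that envelope by more than a constant factor.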

2  Martingale small-ball probabilities

We recall the setup of the introduction, where $H$ is a Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. The process $\{X_n : n \ge 0\}$ will denote a discrete-time $H$-valued martingale with respect to a filtration $\{\mathcal F_n\}$. We use the notation $E_n[\cdot] = E[\cdot \mid \mathcal F_n]$ and $P_n[\cdot] = P[\cdot \mid \mathcal F_n]$. For the remainder of the section, we will assume that $\{X_n\}$ satisfies the following two properties:

(M1) There is a (deterministic) sequence of numbers $\{v_n : n \ge 1\}$ such that for all $n \ge 1$, we have $v_n \ge 1$ and $E_{n-1}\|X_n - X_{n-1}\|^2 = v_n$.

(M2) For all $n \ge 1$, $\|X_n - X_{n-1}\| \le L$ almost surely.

We first prove an estimate assuming the martingale approaches the origin in a controlled manner.

Lemma 2.1. There is a universal constant $c > 0$ such that for every $\lambda \ge 1$ and all $n \ge 1$, the following holds: If $\{X_n\}$ is any martingale satisfying (M1) and (M2) and $X_0 = x_0$, then

$$P\Big[\|X_n\| \le 1 \text{ and } \|X_{n-t}\| \le t^{5/8} + \lambda \text{ for all } 0 \le t \le n\Big] \le \frac{c L^{12} \lambda^{4/5}}{\sqrt{n}}\, \exp\Big({-\frac{\|x_0\|^2}{2L^2 n}}\Big). \tag{1}$$

Proof. Fix $n \ge 1$, and for $1 \le k \le n$, define the random variable
$$\Psi_k = P_{n-k}\Big[\|X_n\| \le 1 \text{ and } \|X_{n-t}\| \le t^{5/8} + \lambda \text{ for all } 0 \le t \le k\Big].$$

Let $k_0 = \lceil C L^{24} \lambda^{8/5} \rceil$, where $C \ge 1$ is a universal constant to be chosen later. Now define, for $1 \le k \le n$, the sequence
$$s_k = L^2 \min(k, k_0) + v_{n-k+1} + v_{n-k+2} + \cdots + v_{n-k_0}. \tag{2}$$

We will prove by induction on $k$ that for all $1 \le k \le n$, the following bound holds almost surely:
$$\Psi_k \le e^{-\|X_{n-k}\|^2/(2 s_k)}\, \beta_k, \tag{3}$$
where
$$\beta_k = e \prod_{j=k_0+1}^{k} \Big(1 - \frac{s_j - s_{j-1}}{2 s_j} + \gamma L^3 j^{-9/8}\Big), \tag{4}$$

and $\gamma \ge 1$ is a universal constant to be specified later. Clearly $\beta_k = e$ for $k \le k_0$. For $k > k_0$, using the fact that $\log(1+x) \le x$ for $x > -1$, we will have
$$\log \beta_k \le 1 - \frac12 \sum_{j=k_0+1}^{k} \frac{s_j - s_{j-1}}{s_j} + \gamma L^3 \sum_{j=k_0+1}^{k} j^{-9/8} \le 1 - \frac12 \sum_{j=k_0+1}^{k} \frac{s_j - s_{j-1}}{s_j} + O(L^3)\, k_0^{-1/8} \le O(1) - \frac12 \log\Big(\frac{s_k}{s_{k_0+1}}\Big).$$

From (2) and (M2), we know that $s_{k_0+1} \le (k_0+1) L^2$, and from (M1), we have $s_k \ge k$; hence we conclude that for all $1 \le k \le n$,
$$\beta_k \le O(1)\, \frac{\sqrt{k_0+1}\cdot L}{\sqrt{k}} \le O(1)\, \frac{L^{12} \lambda^{4/5}}{\sqrt{k}}.$$
Combining this with (3) and using the fact that $s_n \le L^2 n$ yields (1). (Observe that $\Psi_n$ is precisely what we are trying to bound in (1).)

Thus we are now left to prove (3) by induction on $k$. First consider the case $1 \le k \le k_0$. If $\|X_{n-k}\| > kL + 1$, then $\Psi_k = 0$ almost surely, because the Lipschitz bound (M2) implies that the chain can move at most distance $kL$ in $k$ steps. We may also assume that $\|X_{n-k}\| > 1$, else the bound is trivially true. So consider the event $\mathcal E = \{1 < \|X_{n-k}\| \le kL+1\}$. We can apply Azuma's inequality to the 1-dimensional martingale $\big\{\big\langle \tfrac{X_{n-k}}{\|X_{n-k}\|},\, X_{n-k} - X_t\big\rangle : t = n-k, n-k+1, \ldots, n\big\}$ to conclude that almost surely,
$$\Psi_k \le P\big(\|X_n\| \le 1 \mid \mathcal E\big) \le P\Big(\Big\langle \frac{X_{n-k}}{\|X_{n-k}\|},\, X_{n-k} - X_n \Big\rangle > \|X_{n-k}\| - 1 \,\Big|\, \mathcal E\Big) \le e^{-(\|X_{n-k}\|-1)^2/(2kL^2)} \le e \cdot e^{-\|X_{n-k}\|^2/(2kL^2)},$$

where the last inequality uses our assumption that $\|X_{n-k}\| \le kL+1$ and the fact that $L \ge 1$. Thus the bound (3) is satisfied, since for $k \le k_0$ we have $\beta_k = e$ and $s_k = kL^2$ (recalling (2)).

We are thus left to prove (3) by induction for $k > k_0$. We may assume that
$$\|X_{n-k}\| \le k^{5/8} + \lambda, \tag{5}$$

since otherwise $\Psi_k = 0$. Now we will use the inductive hypothesis to calculate:
$$\Psi_k = E_{n-k}\Big[P_{n-k+1}\big[\|X_n\| \le 1 \text{ and } \|X_{n-t}\| \le t^{5/8}+\lambda \text{ for all } 0 \le t \le k\big]\Big] \le E_{n-k}\Big[P_{n-k+1}\big[\|X_n\| \le 1 \text{ and } \|X_{n-t}\| \le t^{5/8}+\lambda \text{ for all } 0 \le t \le k-1\big]\Big] = E_{n-k}[\Psi_{k-1}] \le \beta_{k-1}\, E_{n-k}\Big[e^{-\|X_{n-k+1}\|^2/(2 s_{k-1})}\Big],$$

where in the final line we have employed the inductive hypothesis. Letting $D = X_{n-k+1} - X_{n-k}$ and using the preceding inequality, we have
$$\Psi_k \le \beta_{k-1}\, E_{n-k}\Big[e^{-\|X_{n-k}+D\|^2/(2 s_{k-1})}\Big] = \beta_{k-1}\, e^{-\|X_{n-k}\|^2/(2 s_k)} \cdot E_{n-k} \exp\left(\frac{-\|D\|^2}{2 s_{k-1}} - \frac{\langle X_{n-k}, D\rangle}{s_{k-1}} + \frac{\|X_{n-k}\|^2}{2}\cdot\frac{s_{k-1}-s_k}{s_{k-1} s_k}\right). \tag{6}$$

Observe that, by (5) and assumptions (M1) and (M2), the three terms inside the exponential have orders of magnitude $O\big(\tfrac{L^2}{k}\big)$, $O\big(\tfrac{L}{k^{3/8}} + \tfrac{\lambda L}{k}\big)$, and $O\big(\tfrac{L^2}{k^{3/4}} + \tfrac{\lambda^2 L^2}{k^2}\big)$, respectively, with probability one. We will now choose the constant $C$ large enough so that for $k \ge k_0 \ge C L^{24} \lambda^{8/5}$, each of these terms is at most $1/3$, and the term $L/k^{3/8}$ is larger than the others.

We require the following basic approximation.

Lemma 2.2. If $0 \le y \le 1$, then $|e^y - (1 + y + y^2/2)| \le y^3$.

Using (M2), we have $\|D\| \le L$ almost surely. In conjunction with (5), we may apply Lemma 2.2 to write
$$E_{n-k} \exp\left(\frac{-\|D\|^2}{2 s_{k-1}} - \frac{\langle X_{n-k}, D\rangle}{s_{k-1}} + \frac{\|X_{n-k}\|^2}{2}\cdot\frac{s_{k-1}-s_k}{s_{k-1} s_k}\right) = 1 - E_{n-k}\frac{\|D\|^2}{2 s_{k-1}} + \frac{\|X_{n-k}\|^2}{2}\cdot\frac{s_{k-1}-s_k}{s_{k-1} s_k} + E_{n-k}\frac{\langle X_{n-k}, D\rangle^2}{2 s_{k-1}^2} + O(L^3 k^{-9/8}), \tag{7}$$
where we have also used the martingale property $E_{n-k} D = 0$.
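Lemma 2.2 is a standard Taylor estimate (the remainder is $\sum_{k\ge 3} y^k/k! \le (e - 5/2)\,y^3$). A quick numerical check, as an illustrative aside not from the paper:

```python
from math import exp

# Verify |e^y - (1 + y + y^2/2)| <= y^3 on a grid of y in (0, 1].
worst = 0.0
for i in range(1, 1001):
    y = i / 1000.0
    err = abs(exp(y) - (1 + y + y * y / 2))
    worst = max(worst, err / y ** 3)
print(worst)   # stays well below 1 (roughly e - 5/2 ~ 0.22 at y = 1)
```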


Now using the fact that $E_{n-k}\|D\|^2 = s_k - s_{k-1}$ (which follows from (M1) and the definition (2)), along with Cauchy–Schwarz, we can bound (7) by
$$1 - \frac{s_k - s_{k-1}}{2 s_{k-1}} + \frac{\|X_{n-k}\|^2}{2}\cdot\frac{s_{k-1}-s_k}{s_{k-1} s_k} + \frac{\|X_{n-k}\|^2}{2}\cdot\frac{s_k - s_{k-1}}{s_{k-1}^2} + O(L^3 k^{-9/8}) = 1 - \frac{s_k - s_{k-1}}{2 s_{k-1}} + \frac{\|X_{n-k}\|^2}{2}\cdot\frac{(s_k - s_{k-1})^2}{s_{k-1}^2 s_k} + O(L^3 k^{-9/8}) \le 1 - \frac{s_k - s_{k-1}}{2 s_{k-1}} + O(L^3 k^{-9/8}), \tag{8}$$

where in the final line we have used (5), the fact that $s_k - s_{k-1} \le L^2$ by (M2), the fact that $s_k s_{k-1}^2 \ge k^3$ by (M1), and our assumption that $k \ge C L^{24}\lambda^{8/5}$.

Let $\gamma$ denote the implicit constant from (8). Observing (6), we have verified that almost surely
$$\Psi_k \le \beta_{k-1}\, e^{-\|X_{n-k}\|^2/(2 s_k)}\Big(1 - \frac{s_k - s_{k-1}}{2 s_{k-1}} + \gamma L^3 k^{-9/8}\Big) \le \beta_k\, e^{-\|X_{n-k}\|^2/(2 s_k)}.$$
This completes the proof of (3) by induction.

We will use the preceding estimate to control the small-ball probability. Before that, we observe that it suffices to prove a bound for $\mathbb{R}^2$-valued martingales. The following dimension reduction lemma is a special case of the continuous-time version proved in [KS91]. We include a proof here for the convenience of the reader. A similar exposition of the discrete case appears in [KW92, Prop. 5.8.3].

Lemma 2.3. Let $\{M_t\}$ be an $H$-valued martingale. Then there exists an $\mathbb{R}^2$-valued martingale $\{N_t\}$ such that for every time $t \ge 0$, $\|N_t\| = \|M_t\|$ and $\|N_{t+1} - N_t\| = \|M_{t+1} - M_t\|$.

Proof. We prove the claim by induction on $t$. The base case is trivial: set $N_0 = (\|M_0\|, 0)$. Suppose now that we have constructed $\{N_t\}_{t \le n}$ from $\{M_t\}_{t \le n}$. We wish to specify the value of $N_{n+1}$ given $\{M_t\}_{t \le n+1}$ and $\{N_t\}_{t \le n}$ so that the required conditions hold. In the generic case, there exist two distinct points $x_1, x_2 \in \mathbb{R}^2$ satisfying
$$\|x_i\| = \|M_{n+1}\| \quad\text{and}\quad \langle N_n, x_i\rangle = \langle M_n, M_{n+1}\rangle \tag{9}$$
for $i = 1, 2$. (One can see them as the intersections of a circle and a line.) Denoting these two points by $N^{(1)}_{n+1}$ and $N^{(2)}_{n+1}$, we now let $N_{n+1}$ be $N^{(1)}_{n+1}$ (resp., $N^{(2)}_{n+1}$) with probability $\tfrac12$ each. It is clear that $\|N_{n+1}\| = \|M_{n+1}\|$. Recalling (9) and using the induction hypothesis $\|N_n\| = \|M_n\|$, we also infer that
$$\|N_{n+1} - N_n\|^2 = \|N_{n+1}\|^2 - 2\langle N_{n+1}, N_n\rangle + \|N_n\|^2 = \|M_{n+1}\|^2 - 2\langle M_{n+1}, M_n\rangle + \|M_n\|^2 = \|M_{n+1} - M_n\|^2.$$
It remains to prove that $E[N_{n+1} \mid N_1, \ldots, N_n] = N_n$. To this end, it suffices to show
$$E[\langle N_{n+1}, N_n^{\perp}\rangle \mid N_1, \ldots, N_n] = 0, \tag{10}$$
$$E[\langle N_{n+1}, N_n\rangle \mid N_1, \ldots, N_n] = \|N_n\|^2, \tag{11}$$
where $N_n^\perp$ is a unit vector with $\langle N_n^\perp, N_n\rangle = 0$. Equality (10) follows from our uniform random choice of $N_{n+1}$ over $\{N^{(1)}_{n+1}, N^{(2)}_{n+1}\}$. Since $\{M_t\}$ is a martingale, we have that $E[\langle M_{n+1}, M_n\rangle \mid M_1, \ldots, M_n] = \|M_n\|^2$. Combined with (9) and our choice of $N_{n+1}$, we obtain (11) as required.

In the degenerate case when $M_n$ and $M_{n+1}$ are proportional, there is a unique solution to (9), and we just let $N_{n+1}$ be that unique point. In the case when $N_n = 0$ but $M_{n+1} \ne 0$, there are infinitely many solutions; one can pick out two antipodal ones and let $N_{n+1}$ be uniformly random over those two points.

We now proceed to our first small-ball estimate.

Theorem 2.4. Assume that $\{X_n\}$ is an $H$-valued martingale satisfying (M1) and (M2). If $X_0 = x_0$, then for any $n \ge 1$,
$$P(\|X_n\| \le 1) \le \frac{O(L^{20})}{\sqrt{n}}\, e^{-\|x_0\|^2/(3L^2 n)}.$$

Proof. By Lemma 2.3, we may assume that $\{X_n\}$ takes values in $\mathbb{R}^2$. By induction on $n$, we will prove that
$$P(\|X_n\| \le 1) \le \frac{B}{\sqrt{n}}\, e^{-\|x_0\|^2/(3L^2 n)} \tag{12}$$
for some number $B \le O(L^{20})$ to be chosen later. The case $n = 1$ is trivial as long as $B \ge e$, since the left-hand side is $0$ for $\|x_0\| > L+1$ (by (M2)). Also observe that Azuma's inequality applied to the 1-dimensional martingale $\{\langle x_0/\|x_0\|,\, x_0 - X_t\rangle\}$ implies that
$$P(\|X_n\| \le 1) \le e^{-\max(0,\|x_0\|-1)^2/(2L^2 n)}.$$
If $\|x_0\| > 3L\sqrt{n \log n}$, then
$$e^{-\max(0,\|x_0\|-1)^2/(2L^2 n)} \le \frac{B}{\sqrt{n}}\, e^{-\|x_0\|^2/(3L^2 n)},$$
as long as $B > 0$ is a sufficiently large constant. Thus we may assume that
$$\|x_0\| \le 3L\sqrt{n \log n}. \tag{13}$$

Let $k_0 \ge 1$ be a number to be chosen later and put $\lambda = k_0 L$. We may decompose
$$P(\|X_n\| \le 1) \le P\big(\|X_n\| \le 1 \text{ and } \|X_{n-t}\| \le t^{5/8}+\lambda \text{ for } 0 \le t \le n\big) + \sum_{k=k_0}^{n} P\big(\|X_n\| \le 1 \text{ and } \|X_{n-k}\| > k^{5/8}+1\big) \le \frac{c L^{64/5} k_0^{4/5}}{\sqrt{n}}\, e^{-\|x_0\|^2/(2L^2 n)} + \sum_{k=k_0}^{n} P\big(\|X_n\| \le 1 \text{ and } \|X_{n-k}\| > k^{5/8}+1\big), \tag{14}$$
where we have bounded the first term using Lemma 2.1.

Note that if $\|X_n\| \le 1$ then by (M2), we must have $\|X_{n-k}\| \le kL+1$. Let $N_k$ denote a 1-net in the Euclidean disk of radius $kL+1$ about $0$, and observe that $|N_k| \le 4(kL+1)^2$. Thus we have
$$P\big(\|X_n\| \le 1 \text{ and } \|X_{n-k}\| > k^{5/8}+1\big) \le P\big(\|X_n\| \le 1 \mid \|X_{n-k}\| > k^{5/8}+1\big) \sum_{y \in N_k} P(\|X_{n-k} - y\| \le 1) \le P\big(\|X_n\| \le 1 \mid \|X_{n-k}\| > k^{5/8}+1\big) \sum_{y \in N_k} \frac{B}{\sqrt{n-k}}\, e^{-\|x_0 - y\|^2/(3L^2(n-k))} \le P\big(\|X_n\| \le 1 \mid \|X_{n-k}\| > k^{5/8}+1\big)\, \frac{B |N_k|}{\sqrt{n-k}}\, e^{-\max(0,\|x_0\|-(kL+1))^2/(3L^2(n-k))}, \tag{15}$$

where in the second line we have used the inductive hypothesis, and in the final line a union bound. We may then use Azuma's inequality to write
$$P\big(\|X_n\| \le 1 \mid \|X_{n-k}\| > k^{5/8}+1\big) \le P\big(\|X_n - X_{n-k}\| > k^{5/8}\big) \le e^{-k^{1/4}/(2L^2)}. \tag{16}$$
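As a sanity check of the Azuma step (16) in the simplest case $L = 1$, one can compare the exact tail of a $\pm1$ random walk over $k$ steps with the bound $e^{-k^{1/4}/2}$ (an illustrative aside, not part of the paper):

```python
from math import comb, exp

def srw_tail(k: int, a: float) -> float:
    """Exact P(|S_k| > a) for the +/-1 simple random walk S_k (an L = 1 martingale)."""
    total = 0
    for j in range(k + 1):
        if abs(2 * j - k) > a:   # S_k = 2j - k when j steps are +1
            total += comb(k, j)
    return total / 2 ** k

# (16) with L = 1 reads P(|S_k| > k^{5/8}) <= exp(-k^{1/4}/2).
for k in [64, 256, 1024]:
    lhs = srw_tail(k, k ** 0.625)
    rhs = exp(-k ** 0.25 / 2)
    print(k, lhs, rhs)   # lhs stays below rhs
```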

P(kXn k ≤ 1) ≤

cL64/5 k0 − kx02k2 √ e 2L n n n X 4(kL + 1)2 √ exp +B n − k k=k 0

−k1/4 max(0, kx0 k − (kL + 1))2 − 2L2 3L2 (n − k)

!

.

Our goal is now to prove that there is a constant α > 0 such that !   −k1/4 max(0, kx0 k − (kL + 1))2 −kx0 k2 exp ≤ α exp − . 4L2 3L2 (n − k) 3L2 n Plugging this estimate into the preceding inequality yields    n 4/5 2 2 64/5 kx k −kx0 k  X 4(kL + 1)2 cL k0 − 02 2L n √ √ exp + B exp e P(kXn k ≤ 1) ≤ α n 3L2 n n−k k=k 0

By choosing k0 ≍ L9 large enough, the sum in brackets is at most 4/5

k0 and setting B = 2cL64/5 k0 ≤ O(L20 ) shows that P(kXn k ≤ 1) ≤ proof of (12) by induction. Thus we are left to prove (17) for k ≥ k0 . Case I: kx0 k ≤ kL + 1.

1/4

2

1 √ . 2 n

B − √ e n

(17)

−k1/4 4L2

! .

Fixing this value of kx0 k2 3L2 n

, completing the

−kx0 k In this case, we need to show that exp( −k 4L2 ) ≤ α exp( 3L2 n ). We may assume that kx0 k > √ In particular, we may assume that L n, else the inequality holds trivially for some α = O(1). √ √ k > n. But then our assumption (13) that kx0 k ≤ 3L n log n shows that the inequality holds for some α = O(1) (with room to spare).


Case II: $\|x_0\| > kL+1$.

In this case, it suffices to argue that
$$\exp\left({-\frac{k^{1/4}}{4L^2}} - \frac{(\|x_0\|-(kL+1))^2}{3L^2(n-k)} + \frac{\|x_0\|^2}{3L^2 n}\right) \le O(1).$$
Expanding the square, we see that it is enough to show
$$\exp\left({-\frac{k^{1/4}}{4L^2}} + \frac{2(kL+1)\|x_0\|}{3L^2(n-k)}\right) \le O(1). \tag{18}$$
Recalling (13), that $\|x_0\| \le 3L\sqrt{n \log n}$, we have $k \le 3\sqrt{n \log n}$. Thus the positive term in (18) is $O(1)$ unless $k \ge \sqrt{n/\log n}$. But if $\sqrt{n/\log n} \le k \le 3\sqrt{n \log n}$, then we have
$$\frac{k^{1/4}}{4L^2} \ge \frac{2(kL+1)\cdot 3L\sqrt{n \log n}}{3L^2(n-k)} \ge \frac{2(kL+1)\|x_0\|}{3L^2(n-k)},$$
where we have additionally used the fact that $k \ge k_0$ and $k_0 \asymp L^9$ is chosen large enough. We have thus verified (18), completing the proof.

Finally, we extend Theorem 2.4 to larger radii.

Theorem 2.5. Assume that $\{X_n\}$ is an $H$-valued martingale satisfying (M1) and (M2), with $X_0 = x_0$. Then for any $n \ge 1$ and $1 \le R \le \sqrt{n}$, we have
$$P(\|X_n\| \le R) \le O(L^{20})\, \frac{R}{\sqrt{n}}\, e^{-\|x_0\|^2/(6L^2 n)}.$$

Proof. Consider a martingale $\{Y_t\}$ defined as follows. For $0 \le t \le n$, we set $Y_t = X_t$. Let $m = \lfloor R^2 \rfloor$. For $n < t \le n+m$, put
$$Y_t = X_n + \frac{X_n}{\|X_n\|} \sum_{j=1}^{t-n} \varepsilon_j,$$
where $\{\varepsilon_j\}$ are i.i.d. signs independent of $\{X_n\}$. Then the martingale $\{Y_t\}_{t=0}^{n+m}$ satisfies assumptions (M1) and (M2); hence by Theorem 2.4,
$$P(\|Y_{n+m}\| \le 1) \le \frac{O(L^{20})}{\sqrt{n+m}}\, e^{-\|x_0\|^2/(3L^2(n+m))}. \tag{19}$$
On the other hand, since simple random walk satisfies a local CLT, there is a constant $c > 0$ such that
$$P(\|Y_{n+m}\| \le 1) \ge \frac{c}{R}\, P(\|X_n\| \le R).$$
Combining this with (19) yields the desired result.
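As an aside, the two-point construction in the proof of Lemma 2.3 is entirely explicit and easy to implement. The sketch below (our illustration; the helper names are ours) builds the $\mathbb{R}^2$-valued process from a given finite trajectory and checks the two norm identities:

```python
import math
import random

def reduce_to_2d(M, seed=0):
    """Lemma 2.3 construction: given a trajectory M[0..T] in R^d, build
    N[0..T] in R^2 with ||N_t|| = ||M_t|| and ||N_{t+1}-N_t|| = ||M_{t+1}-M_t||.
    """
    rng = random.Random(seed)

    def norm(v):
        return math.sqrt(sum(c * c for c in v))

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    N = [(norm(M[0]), 0.0)]
    for t in range(len(M) - 1):
        r = norm(M[t + 1])               # required norm of N_{t+1}
        s = dot(M[t], M[t + 1])          # required inner product <N_t, N_{t+1}>, as in (9)
        nt = N[t]
        rho = norm(nt)
        if rho == 0.0:                   # degenerate case N_t = 0: any direction works
            u, a = (1.0, 0.0), 0.0
        else:
            u = (nt[0] / rho, nt[1] / rho)   # unit vector along N_t
            a = s / rho                      # component along u; |a| <= r by Cauchy-Schwarz
        b = math.sqrt(max(r * r - a * a, 0.0))   # component along the perpendicular
        sign = rng.choice([-1.0, 1.0])           # choose one of the two intersection points
        N.append((a * u[0] - sign * b * u[1], a * u[1] + sign * b * u[0]))
    return N

# Usage: check both identities on a random +/-1 walk in R^5.
walk_rng = random.Random(1)
M = [[0.0] * 5]
for _ in range(50):
    step = [walk_rng.choice([-1.0, 1.0]) / math.sqrt(5) for _ in range(5)]
    M.append([x + s for x, s in zip(M[-1], step)])
N = reduce_to_2d(M)
```

The uniformly random choice between the two intersection points is exactly what makes $\{N_t\}$ a martingale, per equations (10) and (11).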


3  Random walks on vertex-transitive graphs

A primary application of our small-ball estimate is to random walks on vertex-transitive graphs. We will use $\mathrm{dist}_G$ to denote the shortest-path metric on a graph $G$.

Theorem 3.1 (Diffusive random walks). For every infinite, locally-finite, connected, vertex-transitive graph $G$, there is a constant $C_G > 0$ such that if $\{Z_t\}$ is the random walk on $G$, then for every $\varepsilon > 0$ and every $t \ge 1/\varepsilon^2$,
$$P\big(\mathrm{dist}_G(Z_t, Z_0) \le \varepsilon\sqrt{t}\big) \le C_G\, \varepsilon.$$

This should be compared to the result of the first two authors [LP13], which shows that this property holds for an average $t$, i.e.,
$$\frac{1}{t} \sum_{s=0}^{t} P\big(\mathrm{dist}_G(Z_0, Z_s) \le \varepsilon\sqrt{t}\big) \le C_G\, \varepsilon.$$

If $G$ is $d$-regular and non-amenable, then the random walk has spectral radius $\rho < 1$, so
$$P\big(\mathrm{dist}_G(Z_0, Z_t) \le \varepsilon\sqrt{t}\big) \le d^{\varepsilon\sqrt{t}}\, \rho^t.$$
(See, for example, [Woe00].) The latter quantity is at most $C_{\rho,d}\, \varepsilon$ for $t \ge 1/\varepsilon^2$. Thus Theorem 3.1 follows from an analysis of the amenable case.

Theorem 3.2. If $G$ is a $d$-regular, infinite, connected, vertex-transitive graph that is also amenable and $\{Z_t\}$ is the random walk on $G$, then the following holds. For any $\varepsilon > 0$ and $t \ge 1/\varepsilon^2$,
$$P\big(\mathrm{dist}_G(Z_0, Z_t) \le \varepsilon\sqrt{t/d}\big) \le K d^{10}\, \varepsilon,$$
where the constant $K > 0$ is universal.
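For a concrete instance of the non-amenable bound above: taking the 3-regular tree, whose spectral radius is $\rho = 2\sqrt{2}/3$ (a standard value; the example and parameters are our illustration, not from the text), one can check numerically that $d^{\varepsilon\sqrt{t}}\rho^t$ stays below a modest multiple of $\varepsilon$ once $t \ge 1/\varepsilon^2$:

```python
from math import sqrt

# Spectral radius of the d-regular tree is 2*sqrt(d-1)/d; for d = 3 this is 2*sqrt(2)/3.
d, rho = 3, 2 * sqrt(2) / 3

def bound(eps: float, t: float) -> float:
    return d ** (eps * sqrt(t)) * rho ** t

worst_ratio = 0.0
for eps in [0.05, 0.1, 0.2]:
    t0 = 1.0 / eps ** 2
    for m in range(200):                   # sample t on [t0, ~50*t0]
        t = t0 * (1 + 0.25 * m)
        worst_ratio = max(worst_ratio, bound(eps, t) / eps)
print(worst_ratio)   # bounded by a modest constant C_{rho,d}
```

The point is that past $t = 1/\varepsilon^2$ the exponential decay $\rho^t$ dominates the polynomial-in-$\varepsilon$ growth of $d^{\varepsilon\sqrt{t}}$.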

Proof. Suppose that $G$ has vertex set $V$. Let $\mathrm{Aut}(G)$ denote the automorphism group of $G$. By [LP13, Thm. 3.1], there is a Hilbert space $H$ on which $\mathrm{Aut}(G)$ acts by isometries, and a non-constant equivariant harmonic mapping $\Psi : V \to H$. In other words, one has $\sigma\Psi(x) = \Psi(\sigma x)$ for all $\sigma \in \mathrm{Aut}(G)$ and $x \in V$. In particular, for any pair of vertices $x, y \in V$, we have
$$E\big[\|\Psi(Z_0) - \Psi(Z_1)\|^2 \mid Z_0 = x\big] = E\big[\|\Psi(Z_0) - \Psi(Z_1)\|^2 \mid Z_0 = y\big].$$
Since $\Psi$ is non-constant, we may normalize $\Psi$ so that $E\|\Psi(Z_0) - \Psi(Z_1)\|^2 = 1$. Writing
$$E\big[\|\Psi(Z_0) - \Psi(Z_1)\|^2 \mid Z_0\big] = \frac{1}{d} \sum_{y : \{y, Z_0\} \in E} \|\Psi(Z_0) - \Psi(y)\|^2 = 1,$$
one concludes that $\Psi$ is a $\sqrt{d}$-Lipschitz mapping from $(V, \mathrm{dist}_G)$ into $H$. Additionally, since $\Psi$ is harmonic, the process $\{X_t = \Psi(Z_t)\}$ is a martingale to which Theorem 2.5 applies, with $L = \sqrt{d}$. Thus we have
$$P\big(\mathrm{dist}_G(Z_0, Z_t) \le \varepsilon\sqrt{t/d}\big) \le P\big(\|X_0 - X_t\| \le \varepsilon\sqrt{t}\big) \le O(1)\, d^{10}\, \varepsilon.$$

One can make a similar statement about random walks on finite vertex-transitive graphs, up to the relaxation time. (The method of proof is also from [LP13].)

Theorem 3.3. Suppose $G = (V, E)$ is a finite, connected, vertex-transitive $d$-regular graph, and $\lambda$ denotes the second-largest eigenvalue of the transition matrix of the random walk on $G$. Then for every $t \le (1-\lambda)^{-1}$ and every $\varepsilon \ge 1/\sqrt{t}$,
$$P\big(\mathrm{dist}_G(Z_0, Z_t) \le \varepsilon\sqrt{t/d}\big) \le O(d^{10})\, \varepsilon.$$

Proof. We may assume that $\lambda \ge \frac12$, else the statement is vacuously true. Let $P$ be the transition matrix of the random walk on $G$, and let $\psi : V \to \mathbb{R}$ be an eigenfunction of $P$ with eigenvalue $\lambda$ and norm-squared $\sum_{u \in V} \psi(u)^2 = 1$. First, observe that $\frac12 \le \lambda < 1$ and
$$\sum_{u \in V} \frac{1}{d} \sum_{v : \{u,v\} \in E} |\lambda\psi(u) - \psi(v)|^2 = \sum_{u \in V} (1+\lambda^2)\psi(u)^2 - 2\lambda \sum_{u \in V} \psi(u)\, \frac{1}{d}\sum_{v : \{u,v\} \in E} \psi(v) = \langle \psi, ((1+\lambda^2)I - 2\lambda P)\psi \rangle = 1 + \lambda^2 - 2\lambda^2 = 1 - \lambda^2. \tag{20}$$

Consider the automorphism group $\mathrm{Aut}(G)$ of $G$ and define the map $\Psi : V \to \mathbb{R}^{|\mathrm{Aut}(G)|}$ by
$$\Psi(v) = \frac{(\psi(\sigma v))_{\sigma \in \mathrm{Aut}(G)}}{\sqrt{|\mathrm{Aut}(G)|}} \cdot \sqrt{\frac{n}{1-\lambda^2}},$$
where $n = |V|$. We claim that the process $\{\lambda^{-t}\Psi(Z_t)\}$ is a martingale. This follows from the fact that $\{\lambda^{-t}\psi(Z_t)\}$ is a martingale, which can easily be checked:
$$E[\lambda^{-t-1}\psi(Z_{t+1}) \mid Z_t] = \lambda^{-t-1}(P\psi)(Z_t) = \lambda^{-t}\psi(Z_t).$$
Next, observe that
$$E\big[\|\lambda^{-t-1}\Psi(Z_{t+1}) - \lambda^{-t}\Psi(Z_t)\|^2 \mid Z_t\big] = \lambda^{-2(t-1)}\, E\big[\|\lambda\Psi(Z_t) - \Psi(Z_{t+1})\|^2 \mid Z_t\big] = \lambda^{-2(t-1)}\,(1-\lambda^2)^{-1} \sum_{u \in V}\frac{1}{d}\sum_{v : \{u,v\}\in E} |\lambda\psi(u) - \psi(v)|^2 = \lambda^{-2(t-1)}, \tag{21}$$
where the final line uses (20). From this, we learn two things. First, for $t \ge 1$,
$$E\big[\|\lambda^{-t-1}\Psi(Z_{t+1}) - \lambda^{-t}\Psi(Z_t)\|^2 \mid Z_t\big] = \lambda^{-2(t-1)} \ge 1.$$
Secondly, we have a Lipschitz condition for small times: Consider $t \le (1-\lambda)^{-1}$ and $\{u,v\} \in E$. Then using (21), we have
$$\|\lambda^{-t-1}\Psi(u) - \lambda^{-t}\Psi(v)\| \le \sqrt{d}\cdot\Big(E\big[\|\lambda^{-t-1}\Psi(Z_{t+1}) - \lambda^{-t}\Psi(Z_t)\|^2 \mid Z_t = u\big]\Big)^{1/2} = \sqrt{d}\,\lambda^{-(t-1)} \le \sqrt{d}\,\lambda^{-\frac{1}{1-\lambda}} \le 4\sqrt{d},$$
where we have used the fact that $1 \ge \lambda \ge \frac12$. Now applying Theorem 2.5 to the martingale $\{X_t = \lambda^{-t}\Psi(Z_t)\}$ for times $t \le (1-\lambda)^{-1}$, we see that
$$P\big(\mathrm{dist}_G(Z_0, Z_t) \le \varepsilon\sqrt{t/d}\big) \le P\big(\|X_t - X_0\| \le 4\varepsilon\sqrt{t}\big) \le O(d^{10})\,\varepsilon.$$

References

[AZ14] Scott N. Armstrong and Ofer Zeitouni. Local asymptotics for controlled martingales, 2014. Preprint, arXiv:1402.2402.

[GPZ13] Ori Gurel-Gurevich, Yuval Peres, and Ofer Zeitouni. Localization for controlled random walks and martingales, 2013. Preprint, arXiv:1309.4512.

[KS91] Olav Kallenberg and Rafal Sztencel. Some dimension-free features of vector-valued martingales. Probab. Theory Related Fields, 88(2):215–247, 1991.

[KW92] Stanisław Kwapień and Wojbor A. Woyczyński. Random series and stochastic integrals: single and multiple. Probability and its Applications. Birkhäuser Boston Inc., Boston, MA, 1992.

[LP13] James R. Lee and Yuval Peres. Harmonic maps on amenable groups and a diffusive lower bound for random walks. Ann. Probab., 41(5):3392–3419, 2013.

[Woe00] Wolfgang Woess. Random walks on infinite graphs and groups, volume 138 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 2000.
