Continuous time vertex-reinforced jump processes

Burgess Davis* and Stanislav Volkov†‡

December 1, 2000

Abstract. We study the continuous time integer valued process X_t, t ≥ 0, which jumps to each of its two nearest neighbors at the rate of one plus the total time the process has previously spent at that neighbor. We show that the proportions of the time before t which this process spends at the integers j converge to positive random variables V_j, which sum to one, and whose joint distribution is explicitly described. We also show lim_{t→∞} max_{0≤s≤t} X_s / log t = 2.768....

1 Introduction

This paper introduces and studies a continuous time right-continuous integer valued stochastic process which jumps only to nearest neighbors. We call this process, which was conceived by W. Werner, a vertex-reinforced jump

---
* Department of Statistics, Purdue University, West Lafayette, IN 47907, USA. E-mail: [email protected]
† Department of Mathematics, University of Bristol, Bristol, BS8 1TW, UK. E-mail: [email protected]
‡ Most of the work was done when S.V. was working at the Fields Institute, Toronto, Canada, and EURANDOM, Eindhoven, The Netherlands, and also while visiting Purdue University.


process (VRJP), and for now designate it by X_t, t ≥ 0. Given {X_s, s ≤ t; X_t = j}, and putting A = 1 + ∫_0^t I(X_s = j−1) ds and B = 1 + ∫_0^t I(X_s = j+1) ds, the probability of a jump to j−1 (respectively j+1) at a time in (t, t+h] equals Ah + o(h) (respectively Bh + o(h)), where both o(h) depend only on A and B. Thus the time elapsed after t until the first jump from j has an exponential distribution with rate A + B, and the probability that this jump is to j−1 is A/(A+B). This determines VRJP in the sense that the generator determines a Markov process, even though a VRJP is not a Markov process, and as with Markov processes an initial distribution needs to be specified to complete its description. It is easy to construct VRJP, started, say, at 0, from a sequence of i.i.d. exponential random variables of parameter 1. Other graphs may be considered, but in this paper we will stick to the integers. We note that the first use of exponential variables in connection with (discrete time) reinforced processes was made by Herman Rubin, to couple a generalized Pólya urn with a pure birth process (see Davis [3] and Sellke [8]).

Of the discrete time reinforced random walks studied in the literature, the two that seem most fundamental are the bond-reinforced random walk first studied by Coppersmith and Diaconis in [2], the paper which originated the subject, and the vertex-reinforced random walk first studied by Pemantle in [6], and later by Pemantle and Volkov [7] and Volkov [10]. The Coppersmith-Diaconis walk on the integers starts with weight one on all the "bonds" (i, i+1), and between times n and n+1 jumps to one of the two nearest neighbors, with probabilities summing to 1 and proportional to the weights of the bonds connecting the current state with these neighbors. Coppersmith and Diaconis observed that these walks could be realized as coupled Pólya urns.
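Since the paragraph above describes the jump mechanism completely, the process is easy to simulate. The following sketch (ours; the function name and time horizon are arbitrary) runs VRJP on the integers using exactly the two facts just stated: while the process sits at j its neighbor weights A and B do not change, so the holding time is exponential with rate A + B and the jump goes to j − 1 with probability A/(A + B).

```python
import random
from collections import defaultdict

def simulate_vrjp(total_time, rng):
    """Simulate VRJP on the integers started at 0, all initial weights 1.

    While the process sits at j, the neighbor weights A = L(t, j-1) and
    B = L(t, j+1) are frozen, so the holding time at j is exponential
    with rate A + B, and the jump goes to j - 1 with probability A/(A+B).
    """
    weight = defaultdict(lambda: 1.0)    # L(t, v) = 1 + time spent at v
    occupation = defaultdict(float)      # time spent at each vertex
    pos, t = 0, 0.0
    while t < total_time:
        a, b = weight[pos - 1], weight[pos + 1]
        hold = min(rng.expovariate(a + b), total_time - t)
        weight[pos] += hold
        occupation[pos] += hold
        t += hold
        if t < total_time:
            pos += -1 if rng.random() < a / (a + b) else 1
    return occupation

occ = simulate_vrjp(200.0, random.Random(1))
proportions = {j: s / 200.0 for j, s in occ.items()}
```

Empirically, the occupation proportions settle down and the visited range stays narrow, in line with Theorems 1.1 and 1.2 below.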
This approach proves almost sure recurrence on Z¹ (see Davis [3]), which here and elsewhere in this paper will mean that every integer is almost surely visited at arbitrarily large times. Later, in a series of intricate papers, a remarkably complete description of the limiting behavior of this and many related bond-reinforced walks was provided by Tóth (see [9]). Scaled properly (not by √n, in the Coppersmith-Diaconis case) they converge to various previously unknown processes, some of them quite wild. Pemantle's vertex-reinforced random walk on the integers is the vertex-reinforced analog of the walk just described. Each integer initially has weight one, and this weight is augmented by one each time it is visited. This process jumps to one of its two nearest neighbors between times n and n+1, the relative weights of the neighbors giving the probabilities of the jumps. Not only is this walk not recurrent, but it was proved in Pemantle and Volkov [7] that it eventually gets stuck on a finite set of points, and with positive probability in exactly five states! This paragraph only scratches the surface of the subject of discrete time reinforced walks. See Davis [4], Pemantle and Volkov [7], and Tóth [9] for more, including references to papers in biology and learning theory which use discrete time reinforced walks as models, and a discussion of some processes which are limits of reinforced walks and which arose in other areas of probability. Both the walks described above, and VRJPs, are close in spirit to Pólya urns, although only for the Coppersmith-Diaconis walk is the connection explicit. Reinforced Brownian motions have also been studied; see [1] for references.

This paper began as an attempt to decide whether VRJP on the integers is recurrent. It is fairly easy to show that it does not get stuck in a finite number of states, but to show recurrence is a different matter. In the following two theorems X_t, t ≥ 0, will stand for VRJP on the integers started at 0. We omit the qualification a.s. when it clearly must hold.

Theorem 1.1 The limits V_i := lim_{t→∞} (1/t) ∫_0^t I(X_s = i) ds exist for each integer i, and are positive and sum to 1. There are i.i.d. random variables U_i, 0 < i < ∞ or −∞ < i < 0, each having the density f(x) given by

    f(x) = (1/√(2πx³)) exp( −(1/2) (√x − 1/√x)² ),

such that if we put W_i equal to ∏_{k=1}^{i} U_k if i > 0, equal to ∏_{k=i}^{−1} U_k if i < 0, and equal to 1 if i = 0, then V_i = W_i / ∑_{i=−∞}^{∞} W_i.
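An observation of ours, not stated in the paper: expanding the square gives f(x) = e^{1−(x+1/x)/2}/√(2πx³) = (2πx³)^{−1/2} e^{−(x−1)²/(2x)}, which is the inverse Gaussian density with mean 1 and shape parameter 1. A quick numerical sanity check of the normalization and the mean, using only the standard library:

```python
import math

def f(x):
    # density of the U_i in Theorem 1.1
    return math.exp(-0.5 * (math.sqrt(x) - 1.0 / math.sqrt(x)) ** 2) \
        / math.sqrt(2.0 * math.pi * x ** 3)

# trapezoidal rule; f vanishes extremely fast at both ends of (0, 60]
N, LO, HI = 200_000, 1e-6, 60.0
h = (HI - LO) / N
total = mean = 0.0
for i in range(N + 1):
    x = LO + i * h
    w = 0.5 if i in (0, N) else 1.0
    fx = f(x)
    total += w * fx
    mean += w * x * fx
total *= h
mean *= h
```

Both integrals come out equal to 1, consistent with the U_i being a probability density with unit mean.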

Theorem 1.2 Let β = 0.36... be the number explicitly given in equation (5.46). Then lim_{t→∞} max_{0≤s≤t} X_s / log t = β^{−1} ≈ 2.77.

Of course, symmetry gives the analog of Theorem 1.2 for the minimum. Thus VRJP on the integers started at 0 has range approximately a centered interval, for all large t. Each of the two theorems just stated immediately implies that VRJPs are recurrent.

2 Vertex-reinforced jump processes on {0, 1}

In exact analogy to the definition of X_t, t ≥ 0, in the previous section, we can and do define vertex-reinforced jump processes Y on any connected locally finite graph, with the initial weight of each vertex v a positive number a_v, perhaps different from one, so that the weight of v at time t is here L(t, v) := a_v + ∫_0^t I(Y_s = v) ds. We still call such a process a vertex-reinforced jump process (VRJP). In this section we study only VRJP on {0, 1} started at 0, with initial weight a at zero and b at one, and we use Z_t, t ≥ 0, to designate these processes. Where it might be ambiguous which initial weights we are dealing with on {0, 1}, we will use a, b as a superscript. The initial position is always 0 unless explicitly mentioned; in particular P^{ab} and E^{ab} refer only to VRJP started at 0. At times we will need to consider random initial weights, and we will use a similar convention.

We now recall some classical results about discrete parameter martingales. Let f_1, f_2, ... be a martingale with difference sequence d_1 = f_1, d_i = f_i − f_{i−1}, i > 1. Doob's maximal inequalities ([5], p. 308) say

    E sup_{n≥1} |f_n|^p ≤ (p/(p−1))^p sup_{n≥1} E|f_n|^p,  p > 1.    (2.1)

If E f_n² < ∞ for each n, then d_i, i ≥ 1, is an orthogonal series, and thus E(f_{n+k} − f_n)² = ∑_{i=n+1}^{n+k} E d_i². In addition, the almost sure convergence of L²-bounded and thus L¹-bounded martingales, together with the fact that f_{n+i}, i ≥ 0, is a martingale for each i, give with (2.1)

    E sup_{k≥0} (f_{n+k} − f_∞)²
      = E sup_{k≥0} [ (f_{n+k} − f_n) − lim_{k→∞} (f_{n+k} − f_n) ]²
      ≤ 4 E sup_{k≥0} |f_{n+k} − f_n|²
      ≤ 16 ∑_{k>0} E d²_{n+k} = 16 lim_{k→∞} E(f_{n+k} − f_n)²,    (2.2)

if sup_n E f_n² < ∞, where f_∞ := lim_{i→∞} f_i. In the proof of the following lemma, and throughout the paper, we adopt the usual convention that C, K etc. often stand for positive constants which may change from line to line.

Lemma 2.1 Let f_1, f_2, ..., f_n be a martingale with differences d_1, d_2, ..., d_n satisfying

    max_{1≤j≤n} E( d_j⁴ | d_i, i < j ) = γ < ∞,    (2.3)

and let ε > 0. Then there is a constant K = K(γ, ε) such that

    P( max_{1≤j≤n} |f_j| > εn ) < K/n².

Proof: This proof, which is probably known, is a close cousin of the standard proof of complete convergence of averages of i.i.d. variables with finite fourth moments. We divide the n⁴ terms of the expansion of (∑_{i=1}^n d_i)⁴ into four groups, according to the power to which the d_i of the greatest i in each term is raised, and then rearrange the sums of the terms in the groups. So E f_n⁴ = E(∑ d_i)⁴ = (I) + (II) + (III) + (IV), where

    (I)   = 4 ∑_{i=1}^n E d_i (∑_{j=1}^{i−1} d_j)³,
    (II)  = 6 ∑_{i=1}^n E d_i² (∑_{j=1}^{i−1} d_j)²,
    (III) = 4 ∑_{i=1}^n E d_i³ (∑_{j=1}^{i−1} d_j),
    (IV)  =   ∑_{i=1}^n E d_i⁴.

Now E d_i (∑_{j=1}^{i−1} d_j)³ = E[ E(d_i | d_j, j < i) (∑ d_j)³ ] = E 0 = 0, so (I) = 0. And by (2.3), E(d_k² | d_i, i < k) < C(γ) = C, so E d_i² (∑_{j=1}^{i−1} d_j)² = E[ E(d_i² | d_j, j < i) (∑ d_j)² ] < C E(∑ d_j)² < C(i−1) < Cn, and so we get (II) < Cn².

Since E(|d_i|³ | d_j, j < i) < C(γ) = C by (2.3), we similarly get E d_i³ ∑_{j=1}^{i−1} d_j ≤ C E|∑_{j=1}^{i−1} d_j| ≤ C [E(∑ d_j)²]^{1/2} = Cn^{1/2}, and so (III) < Cn^{3/2}. Finally, (2.3) implies E d_i⁴ < C, and so (IV) < Cn. Thus E f_n⁴ < Cn², which, together with the p = 4 case of (2.1) and Markov's inequality, gives Lemma 2.1.

We recall that, to keep our notation brief, all VRJPs on {0, 1} considered in this section are started at 0. We put

    τ(t) = inf{ s : L(s, 0) = t },

so that under P^{ab} we have τ(a) = 0.

Lemma 2.2 For all t ≥ a,  E^{ab} L(τ(t), 1) = (b/a) t.

A proof of this lemma is given via Laplace transforms in the appendix. We now sketch a different proof.

Proof of Lemma 2.2: We will show that y(t) := E^{ab} L(τ(t), 1) satisfies the differential equation y′ = y/t. Since y(a) = b, this implies Lemma 2.2. If δ is a positive number, L(τ(t+δ), 1) − L(τ(t), 1) is the time spent at 1 while the local time at 0 increases from t to t+δ. The probability of a jump from 0 to 1 in this time interval, given L(τ(t), 1), is δ L(τ(t), 1) + o(δ) as δ → 0, and the duration of the excursion to 1 resulting from this jump has an exponential distribution with rate between t and t+δ, so that

    lim_{δ→0} [ E L(τ(t+δ), 1) − E L(τ(t), 1) ] / δ = E L(τ(t), 1) / t,

and our differential equation is satisfied. It takes a little more work to show that the expectation of the sum of the durations of all the excursions beyond the first is o(δ). This argument is omitted.

We put m_t = m_t^{ab} = L(τ(t), 1)/t, if t ≥ a, where the superscript means that we are studying L(τ(t), 1)/t under P^{ab}.
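Lemma 2.2 can be probed by simulation (a sketch of ours; function names are hypothetical). While Z sits at 0 the jump rate to 1 is the current weight of 1, and vice versa, so a run consists of alternating exponential holding times; we stop when the local time at 0 reaches t and record the weight at 1, whose mean should be bt/a.

```python
import random

def sample_L1_at_tau(a, b, t, rng):
    """One sample of L(tau(t), 1) for VRJP on {0, 1} started at 0.

    w0 and w1 are the current weights (initial weight plus local time).
    Holding times are exponential with the *other* vertex's weight as
    rate, and that weight stays constant during the stay.
    """
    w0, w1 = a, b
    while True:
        stay = rng.expovariate(w1)   # time spent at 0 before next jump
        if w0 + stay >= t:           # local time at 0 reaches t first
            return w1
        w0 += stay
        w1 += rng.expovariate(w0)    # full excursion at 1, rate w0

rng = random.Random(7)
a, b, t, n = 1.0, 1.0, 3.0, 40_000
mean = sum(sample_L1_at_tau(a, b, t, rng) for _ in range(n)) / n
```

Lemma 2.2 gives E L(τ(3), 1) = 3 here, and Lemma 2.5 below gives variance 8, so the Monte Carlo mean over 40,000 runs should land within a few hundredths of 3.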
Corollary 2.3 The process m_t, t ≥ a, is a martingale.

Proof: This is immediate from Lemma 2.2 and the fact that, given Z_s, 0 ≤ s ≤ τ(t), the process Z_{y+τ(t)}, y ≥ 0, has the same distribution as VRJP on {0, 1} under P^{t, L(τ(t),1)}.

Corollary 2.4 Both the limits lim_{t→∞} m_t^{ab} and lim_{t→∞} L^{ab}(t,1)/L^{ab}(t,0) almost surely exist, and are equal and positive.

Proof: Note that the right continuity of the paths of Z_t, t ≥ 0, gives that if L(t, 0) = s, then

    L(τ(s), 1)/s ≤ L(t, 1)/L(t, 0) ≤ lim_{r↓s} L(τ(r), 1)/s = lim_{r↓s} L(τ(r), 1)/r.    (2.4)

Thus, since lim_{t→∞} m_t exists a.s.,

    lim_{t→∞} L^{ab}(t, 1)/L^{ab}(t, 0) exists a.s.    (2.5)

To complete the proof we will show that the latter limit is strictly positive, by showing that

    lim_{t→∞} L^{ab}(t, 0)/L^{ab}(t, 1) exists a.s.    (2.6)

This is done by noting that if σ is the time of the first jump to 1, then, conditioned on {Z_t, 0 ≤ t ≤ σ}, the process 1 − Z_{t+σ}, t ≥ 0 (i.e. we just relabel 0 as 1 and 1 as 0), has the distribution of Z_t, t ≥ 0, under P^{b, a+σ}, so that (2.6) follows from (2.5).
Our treatment of the following lemma parallels that of Lemma 2.2: it is proved in the appendix, and a different proof is sketched in this section.

Lemma 2.5 For all r ≥ a,

    E^{ab} L(τ(r), 1)² = −b/a + ((ab² + b)/a³) r².
Proof: We have L(τ(r+dr), 1) = L(τ(r), 1) + χξ, where χ is Bernoulli(L(τ(r), 1) dr), ξ is exponential(r), and χ and ξ are independent given L(τ(r), 1). Thus, noting χ² = χ, we have

    E L(τ(r+dr), 1)² = E L(τ(r), 1)² + 2 E{ L(τ(r), 1) E(χξ | L(τ(r), 1)) } + E E(χξ² | L(τ(r), 1))
                     = E L(τ(r), 1)² + (2/r) E L(τ(r), 1)² dr + (2/r²) E L(τ(r), 1) dr
                     = E L(τ(r), 1)² + (2/r) E L(τ(r), 1)² dr + (2b/(ar)) dr,

using Lemma 2.2 in the last line. Thus E L(τ(r), 1)² satisfies y′ = (2/r) y + 2b/(ar), with y(a) = b², and Lemma 2.5 follows.
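For completeness, the differential equation at the end of this proof can be solved explicitly with an integrating factor; a short derivation (ours):

```latex
y' = \frac{2}{r}\,y + \frac{2b}{ar}, \qquad y(a) = b^{2}.
% Multiply by the integrating factor r^{-2}:
\left(y\,r^{-2}\right)' = \frac{2b}{a\,r^{3}}
\quad\Longrightarrow\quad
y\,r^{-2} = -\frac{b}{a\,r^{2}} + C
\quad\Longrightarrow\quad
y = C\,r^{2} - \frac{b}{a}.
% The initial condition y(a) = b^2 forces
C = \frac{ab^{2} + b}{a^{3}},
\qquad\text{hence}\qquad
\mathbb{E}^{ab}\, L(\tau(r),1)^{2} = \frac{ab^{2}+b}{a^{3}}\,r^{2} - \frac{b}{a}.
```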
Lemma 2.5 immediately gives that the L² norm of the martingale m_t^{ab}, t ≥ a, is finite, since

    E^{ab} (m_r)² = (ab² + b)/a³ − b/(ar²),  r ≥ a.    (2.7)

The continuous version of (2.2), with m_r playing the role of f_n, along with (2.7), gives, putting m_∞ := lim_{t→∞} m_t,

    E^{ab} sup_{s≥a} (m_s − m_∞)² ≤ 16 sup_{s≥a} E(m_s − m_a)² = 16 sup_{s≥a} ( E m_s² − m_a² ) ≤ 16 b/a³,

which, together with (2.4), gives

    E^{ab} sup_{t≥0} ( L(t,1)/L(t,0) − m_∞ )² ≤ 16 b/a³.    (2.8)

Also, we have,

Lemma 2.6  E^{ab} sup_{y≥0} [ log ( L(y,1)/L(y,0) ) ]⁴ < C(a, b).
Proof: Since (log x)⁴ < x², x > 1, (2.4) together with (2.7) and the continuous version of (2.1) in the case p = 2 give, if r⁺ = max(r, 0),

    E^{ab} sup_{y≥0} [ ( log ( L(y,1)/L(y,0) ) )⁺ ]⁴ ≤ 4 (ab² + b)/a³.    (2.9)

Let σ be the time of the first jump of Z_t to 1. Then, given σ, the process L(y+σ, 0)/L(y+σ, 1), y ≥ 0, has the same distribution as L(y, 1)/L(y, 0), y ≥ 0, under P^{b, σ+a}, and from this and (2.9) it is easy to conclude that

    E^{ab} sup_{y≥0} [ ( log ( L(y,0)/L(y,1) ) )⁺ ]⁴ < C(a, b),    (2.10)

since the interval 0 ≤ y ≤ σ is easily handled. Together (2.9) and (2.10) give Lemma 2.6.

We use the superscript 1, 1+exp 1 to indicate that we are starting VRJP on {0, 1} with a random initial weight of 1 plus an exponential(1) random variable at one, and 1 at zero, so that this VRJP behaves like VRJP on {0, 1} with initial weights 1 at both zero and one, started at one, after the first jump to zero. It is easy to conclude from Lemma 2.6 and the concavity of log x that log m_t, t ≥ 0, is a supermartingale under P^{1, 1+exp 1}, satisfying

    sup_{t≥0} E^{1, 1+exp 1} |log m_t|⁴ < ∞.    (2.11)
Furthermore, we will calculate in the appendix the number β ≈ 0.36, defined by

    β = E^{1, 1+exp 1} log m_∞.    (2.12)

Now E^{1, 1+exp 1} log m_t is non-increasing as t increases, and (2.11) implies this convergence is dominated, so that

    E^{1, 1+exp 1} log m_t ↓ β as t → ∞.    (2.13)
3 VRJP on the nonnegative integers

In this section Y_t, t ≥ 0, exclusively stands for VRJP on {0, 1, 2, ...} started at 0, with initial weights all 1, and we use L^Y(t, k) to denote 1 + ∫_0^t I(Y_s = k) ds, often omitting the superscript. We let T_n = inf{t : Y_t = n}. We will need the following, which is immediate from the construction of VRJP.

Restriction principle. VRJP observed only at the times when it stays in some subset of consecutive integers A behaves the same way as VRJP restricted to the set A. Moreover, it is independent of both the path of VRJP to the right of A and the path of VRJP to the left of A. More precisely, let W_t, t ≥ 0, be a VRJP (initial weights 1) on any set of consecutive integers, and let A be a subset of consecutive integers of those integers. Let T = inf{t ≥ 0 : W_t ∈ A} be a stopping time, and let k = W_T ∈ A be the "port of entry". Put τ_A(a) = sup{t : ∫_0^t I(W_s ∈ A) ds = a}. Then H_a := W_{τ_A(a)} is a VRJP on A started at k. If B represents the states to the left or to the right of A, then W_{τ_B(a)}, a ≥ 0, is independent of W_{τ_A(a)}, a ≥ 0.

Lemma 3.1 If n > 0, P(T_n < ∞) = 1.

Proof: Since ∑_{i=0}^∞ [L(t, i) − 1] = t, if P(T_n < ∞) < 1 there must be a j, 0 ≤ j < n, such that P(L(∞, j) = ∞, L(∞, j+1) < ∞) > 0. Now we use the restriction principle on {j, j+1}, together with Corollary 2.4, with j relabeled as zero and j+1 relabeled as one, to get a contradiction.

Next we observe the following.
Lemma 3.2 Let 1 ≤ j < n. Then, given L(T_n, i), i ≥ j+1, the distribution of L(T_n, j)/L(T_n, j+1) is the distribution of m^{1, 1+exp 1}_{L(T_n, j+1)}.

Proof: Note that L(T_{j+1}, j) has the distribution 1 + exp 1, while of course L(T_{j+1}, j+1) = 1. The rest of the argument follows from the restriction principle applied to {j, j+1}, together with the observation that what happens on excursions of Y_t to the right of j+1 is not influenced by what happens to Y_t while it is on {0, 1, 2, ..., j+1}.

The following proposition establishes the recurrence of VRJP on the nonnegative integers.
Proposition 3.3 For all j ≥ 0, L(∞, j) = ∞ a.s.

Proof: Suppose that P(L(∞, j) < ∞) > 0 for some j. Then E 1/L(∞, j) > 0. By the restriction principle applied to {j, j+1, ...}, and Lemma 3.1, this expectation must be the same for all j ≥ 0. We claim that E (m_t^{1, 1+exp 1})^{−1} < ∞ for all t > 0. We know that (see the appendix)

    φ^{ab}(λ) = E exp(−λ m_t^{ab}) = e^{b(a − √(λ²/t² + 2λ + a²))}.

Since ∫_0^∞ exp(−λ m_t^{ab}) dλ = 1/m_t^{ab}, we have

    E (m_t^{1, 1+exp 1})^{−1} = ∫_0^∞ E^{1, 1+exp 1} φ(λ) dλ
      = ∫_0^∞ dλ ∫_0^∞ exp(−u) du exp( (u+1)(1 − √(λ²/t² + 2λ + 1)) )
      ≤ ∫_0^∞ dλ ∫_0^∞ exp(−u) du exp( (u+1)(1 − √(2λ + 1)) )
      = ∫_0^∞ [ exp(1 − √(2λ + 1)) / √(2λ + 1) ] dλ = 1.

On the other hand, if P(L(∞, j) < ∞) > 0, […]

[…], 0 ≤ j < n.
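The integral appearing at the end of the proof of Proposition 3.3 can be evaluated by the substitution u = √(2λ+1), du = dλ/√(2λ+1), which gives ∫_0^∞ e^{1−√(2λ+1)}/√(2λ+1) dλ = ∫_1^∞ e^{1−u} du = 1. A small numerical confirmation (our sketch, standard library only):

```python
import math

def integrand(lam):
    s = math.sqrt(2.0 * lam + 1.0)
    return math.exp(1.0 - s) / s

# trapezoidal rule; the tail beyond HI is negligible
# (the integrand decays like exp(-sqrt(2 * lam)))
N, HI = 200_000, 400.0
h = HI / N
val = 0.5 * (integrand(0.0) + integrand(HI))
val += sum(integrand(i * h) for i in range(1, N))
val *= h
```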

Proof: The first two statements of i), and iii), follow from Lemma 3.2, the definition of β, and, in the case of iii), (2.13). For the last statement of i) see formula (5.45) in the appendix. And ii) follows almost immediately from i) and the SLLN, which enables the bounding of the terms of the sum by a geometric series.

Next we state the main result of this section, the almost sure convergence, in l₁, of the empirical occupation time distribution. Put

    p_k = ∏_{i=0}^{k−1} Z_i^{−1} / ∑_{k=0}^∞ ∏_{i=0}^{k−1} Z_i^{−1}.

Theorem 3.5 The following holds:

    lim_{t→∞} ∑_{k=0}^∞ | (L(t, k) − 1)/t − p_k | = 0 a.s.
Before proving Theorem 3.5 we sketch, for motivation, a short proof of a weaker result. We have

    L(t, k)/L(t, 0) = ∏_{j=0}^{k−1} (R_j^t)^{−1} =: Γ_k^t,  k ≥ 0

(here R_j^t = L(t, j)/L(t, j+1), and Z_j = lim_{t→∞} R_j^t). Putting ω_k = ∏_{i=0}^{k−1} Z_i^{−1}, the definition of Z gives

    lim_{t→∞} Γ_k^t / Γ_{k+1}^t = ω_k / ω_{k+1} a.s.,  k ≥ 0.

Now if n is fixed and a_k^t, 0 ≤ k ≤ n, t ≥ 0, and b_k, 0 ≤ k ≤ n, are positive numbers such that

    lim_{t→∞} a_k^t / a_{k+1}^t = b_k / b_{k+1},

then

    lim_{t→∞} a_k^t / ∑_{i=0}^n a_i^t = b_k / ∑_{i=0}^n b_i,  0 ≤ k ≤ n,

which shows

    lim_{t→∞} L(t, k) / ∑_{i=0}^n L(t, i) = ω_k / ∑_{i=0}^n ω_i.

The last equality together with Proposition 3.3 implies that if 0 ≤ k ≤ n,

    lim_{t→∞} (L(t, k) − 1) / ∑_{i=0}^n (L(t, i) − 1) = ω_k / ∑_{i=0}^n ω_i,

a junior version of Theorem 3.5, since ∑_{i=0}^∞ (L(t, i) − 1) = t. The following lemma is a more precise version of the simple fact about sequences just used.
Lemma 3.6 Let a_i and b_i, 0 ≤ i ≤ n, be positive numbers, and let ε > 0. Put A = ∑_{i=0}^n a_i and B = ∑_{i=0}^n b_i. Then

    ∑_{i=0}^n |a_i − b_i| / A < ε  implies  ∑_{i=0}^n | a_i/A − b_i/B | < 2ε/(1−ε).

Proof: The hypotheses imply |A − B| < εA, and so

    ∑_{i=0}^n | a_i/A − b_i/B | = ∑_{i=0}^n | a_i(B − A) + (a_i − b_i)A | / (AB)
      ≤ |B − A|/B + ∑_{i=0}^n |b_i − a_i| / B < ε/(1−ε) + ε/(1−ε).

ni  a.s., 1  i  n:

(3.14)

and of course Din , 1  i  n, is a martingale dierence sequence. Furthermore, Lemma 3.2 and (2.11) imply (j log Rnn;i j4 j Rnn;j  1  j < i) < C , 1  i  n, which in turn implies E

E

(Di4 j Rnn;j  1  j < i) < C 1  i  n

where the C is absolute, especially it does not depend on i or n. Thus for any " > 0, according to Lemma 2.1, P

(j

n X i=1

Di j > "n) < Cn(2") 

(3.15)

P

which with (3.14) implies ( ni=1 Di + i < ( ; ")n) < C (")=n2 , or equivalently, (3.16) (log L(T  0) < ( ; ")n) < C (") : P

n

n2 This inequality, Borel-Cantelli, and the fact that L(Tn  0) ; 1 < Tn , imply P

P

(lim inf log Tn =n ) = 1 n!1 14

(3.17)

or equivalently

0st Xs  ;1 a.s. lim sup maxlog t

t!1

(3.18)

P

In the following, if a < b are not necessarily integers, we use ∑_{i=a}^b r_i to designate the sum of those r_i for all i satisfying a ≤ i ≤ b. We let κ be a fixed number between 0 and 1 satisfying κ log 2 < 1/4, which guarantees

    2^{κ(n+1)} / e^{n/4} ≤ 2 ρ^n,  where ρ := e^{κ log 2 − 1/4} < 1.    (3.19)
The proof of Theorem 3.5 will be completed by establishing the following two limits:

    sup_{t≥T_n} ∑_{i=0}^{n(1−κ)} | (L(t, i) − 1) / ∑_{j=0}^{n(1−κ)} (L(t, j) − 1) − p_i | → 0 as n → ∞,    (3.20)

and

    ∑_{i=n(1−κ)}^{n+1} L(T_{n+1}, i) / e^{n/4} → 0 as n → ∞.    (3.21)

To see that (3.20) and (3.21) imply Theorem 3.5, observe that to prove Theorem 3.5 it suffices to prove

    sup_{T_n≤t≤T_{n+1}} ∑_{k=0}^{n+1} | (L(t, k) − 1) / ∑_{j=0}^{n+1} (L(t, j) − 1) − p_k | → 0 as n → ∞,

since L(t, k) = 1 if k > n+1 and t ≤ T_{n+1}, and ∑_{k=0}^∞ p_k = 1. Now (3.20) obviously implies

    sup_{T_n≤t≤T_{n+1}} ∑_{k=0}^{n(1−κ)} | (L(t, k) − 1) / ∑_{j=0}^{n(1−κ)} (L(t, j) − 1) − p_k | → 0 as n → ∞.

Furthermore, if n ≥ 1,

    sup_{T_n≤t≤T_{n+1}} ∑_{k=n(1−κ)}^{n+1} | (L(t, k) − 1) / ∑_{j=0}^{n+1} (L(t, j) − 1) − p_k |
      ≤ sup_{T_n≤t≤T_{n+1}} ∑_{k=n(1−κ)}^{n+1} (L(t, k) − 1) / ∑_{j=0}^{n+1} (L(t, j) − 1) + ∑_{k=n(1−κ)}^{n+1} p_k
      ≤ ∑_{k=n(1−κ)}^{n+1} L(T_{n+1}, k) / T_n + ∑_{k=n(1−κ)}^{∞} p_k.

The second sum here clearly approaches 0 as n → ∞, and since T_n ≥ e^{n/4} for large enough n, by (3.17) and (5.46), (3.21) gives that the first does also.

We first prove (3.21), then (3.20). Using Lemma 3.2 and Corollary 2.3 we have, for 0 ≤ j ≤ n,
    E L(T_n, j) = E( E( L(T_n, j) | L(T_n, j+1) ) )
                = E( L(T_n, j+1) E( L(T_n, j)/L(T_n, j+1) | L(T_n, j+1) ) )
                = E( L(T_n, j+1) E^{1, 1+exp 1} m_{L(T_n, j+1)} )
                = E L(T_n, j+1) E^{1, 1+exp 1} m_1 = 2 E L(T_n, j+1).

Together with L(T_n, n) = 1 this gives E L(T_n, k) = 2^{n−k}, 0 ≤ k ≤ n. Thus

    ∑_{n=1}^∞ ∑_{k=n(1−κ)}^{n+1} E L(T_{n+1}, k) / e^{n/4} < ∞,

using (3.19), and (3.21) follows.

Next we prove (3.20). We observe
    sup_{t≥T_n} ∑_{i=0}^{n(1−κ)} | L(t, i) / ∑_{j=0}^{n(1−κ)} L(t, j) − (L(t, i) − 1) / ∑_{j=0}^{n(1−κ)} (L(t, j) − 1) |
      ≤ sup_{t≥T_n} ( n(1−κ) + 2 ) / ∑_{j=0}^{n(1−κ)} (L(t, j) − 1),    (3.22)

which follows immediately by putting the difference of the quotients on the LHS of (3.22) over a common denominator. Since ∑_{j=0}^{n(1−κ)} (L(t, j) − 1) ≥ T_{⌈n(1−κ)⌉} if t ≥ T_n, where ⌈·⌉ is the greatest integer function, (3.17) shows that the supremum on the right of the inequality in (3.22) approaches 0 as n → ∞, and thus the supremum on the left does too. This implies that the following is equivalent to (3.20):

    sup_{t≥T_n} ∑_{i=0}^{n(1−κ)} | L(t, i) / ∑_{j=0}^{n(1−κ)} L(t, j) − p_i | → 0 as n → ∞.    (3.23)

To prove (3.23), we rewrite it as

    sup_{t≥T_n} ∑_{k=0}^{n(1−κ)} | ∏_{i=0}^{k−1} (R_i^t)^{−1} / ∑_{j=0}^{n(1−κ)} ∏_{i=0}^{j−1} (R_i^t)^{−1} − ∏_{i=0}^{k−1} Z_i^{−1} / ∑_{j=0}^{n(1−κ)} ∏_{i=0}^{j−1} Z_i^{−1} | → 0

as n → ∞, and we note that, using Lemma 3.6, to prove (3.23) it suffices to prove

    sup_{t≥T_n} ∑_{k=0}^{n(1−κ)} | ∏_{i=0}^{k−1} (R_i^t)^{−1} − ∏_{i=0}^{k−1} Z_i^{−1} | / ∏_{i=0}^{k−1} Z_i^{−1} → 0 as n → ∞,

which reduces to

    sup_{t≥T_n} ∑_{k=0}^{n(1−κ)} | 1 − ∏_{i=0}^{k−1} ( 1 + (Z_i − R_i^t)/R_i^t ) | → 0 as n → ∞.    (3.24)

Now |1 − ∏_{i=0}^m (1 + a_i)| < exp( ∑_{i=0}^m |a_i| ) − 1 ≤ 2 ∑ |a_i| if ∑ |a_i| < 0.1, and so (3.24) follows from

    n sup_{t≥T_n} ∑_{i=0}^{n(1−κ)} |Z_i − R_i^t| / R_i^t → 0 as n → ∞.    (3.25)

The initial n in (3.25) is an upper bound (if n is large) for the number of summands k in (3.24), k = 0, 1, ..., n(1−κ). Now, exactly as we proved (3.16), we have

    P( ∑_{i=1}^{κn−1} (D_i + μ_i) < 0.2 κn ) < C/n².    (3.26)

Also, Lemma 2.1 and (3.14), or even the weaker version of (3.14) with β replaced by zero, imply

    P( inf_{κn≤k≤n} ∑_{i=κn}^k (D_i + μ_i) < −0.1 κ(1−κ) n ) < C/n².    (3.27)

Together (3.26) and (3.27) give

    P( inf_{κn−1≤k≤n} ∑_{i=0}^k (D_i + μ_i) < 0.1 κn ) < C/n²,

or, equivalently,

    P( L(T_n, i) ≥ e^{0.1 κn}, 0 ≤ i ≤ n(1−κ) + 1 ) > 1 − C/n².    (3.28)

Let G_i = G_i^n = {L(T_n, i) > e^{0.1 κn}}. Then (3.28) may be restated as

    P( ∪_{i=0}^{n(1−κ)+1} G_i^c ) < C/n².    (3.29)

where the superscript c denotes complement. Now conditioned on L(Tn  i) and L(Tn  i + 1), the distribution of Yt , t Tn, restricted to fi i + 1g has the distribution of the two state walk of Section 2, Zt , t 0, under L(Tn i+1)L(Tn i) if we relabel i + 1 as 0 and i as 1. Thus (2.8) implies P

E

=

suptTn (Zi ; Rit )2 I (Gi+1 )

E E





suptTn (Zi ; Rit )2 j L(Tn  i + 1) L(Tn  i) I (Gi+1 )

(3.30)

16L(Tn  i) I (G )  16 (e;0:1 n )2 Rn = 32 e;0:2 n  i L(Tn  i + 1)3 i+1 using Corollary 2.3 and the fact that Rin has the distribution of mL(Tn i+1) under 11+exp 1 , so Rin = 11+exp 1 m0 = 2. Thus,



E

E

P

E

E



t

E

h i1 suptTn (Zi ; Rit )2 I (Gi+1 ) 2 (3.31) i1 h supt0 (R1ti )2 2  Ce;0:1 n

suptTn ZiR;tiRi I (Gi+1 ) 

E

E

using (3.30). That suptTn (Rit );2 is nite follows from the restriction principle and the fact that supt0 (Rit );1 has the same distribution as sups1 ms E

18

under P 11 , using the continuous version of Lemma 2.1 and the sentence which includes (2.7). Finally, to complete the proof of (3.25) and thus (3.20), we have, for large enough n,

fn sup

n(1; X )

tTn i=0

S

n(1;  ) i=0

And

0



1

 ) c A Zi ; Rit e;0:1 ng @n(1; Gi Rit i=0

t Z ; R i i f sup Rt I (Gi+1) > n;1e;0:05 ng tTn i

(3.32)

0n(1; ) ( )1 t  @ sup Zi R;t Ri I (Gi+1 ) > n;1 e;0:05 n A i i=0 tTn  ! n(1; Zi ; Rit X ) ;1 ;0:05 n  sup I (Gi+1) > n e tTn Rit i=0 n(1; t X ) 0:05 n Z ; R i  ne sup Rt i I (Gi+1 )  Cn2 e;0:05 n < nC2  tTn i i=0 P

P

E

using (3.31). And this inequality, together with (3.29) and (3.32) and BorelCantelli, establish (3.25). The next theorem is a one-sided version of Theorem 1.2.

Theorem 3.7 The following holds:

    lim_{t→∞} max_{0≤s≤t} Y_s / log t = β^{−1} a.s.    (3.33)

Proof: We will show that given ε > 0, there is a constant C(ε) such that

    P( T_n > e^{n(β+ε)} ) < C(ε)/n²,  n ≥ 1.    (3.34)

This inequality together with Borel-Cantelli shows lim sup_{n→∞} log T_n / n ≤ β, which implies

    lim inf_{t→∞} max_{0≤s≤t} Y_s / log t ≥ β^{−1},
and which, with (3.18), gives (3.33).

We know that E^{1, 1+exp 1} log m_t decreases to β. Let K = E^{1, 1+exp 1} log m_1. Let δ > 0, and let the integer N satisfy

    E^{1, 1+exp 1} log m_{e^{0.1 κN}} < β(1 + δ).    (3.35)

Here and below we use the notation of the proof of Theorem 3.5. Now (3.35) implies μ_i I(G_{i+1}) < β(1 + δ), 0 ≤ i ≤ n, and so, on the intersection of the G_i, 1 ≤ i ≤ n,

    ∑_{i=1}^n μ_i = ∑_{i=1}^{κn} μ_i + ∑_{i=κn+1}^n μ_i ≤ K κn + ∑_{i=κn+1}^n μ_i.

Thus if n ≥ N,

    ∑_{i=1}^n μ_i · I( ∩_{i=1}^{n(1−κ)+1} G_i ) ≤ K κn + [ (1−κ)n + 1 ] β(1 + δ) =: n f(δ, κ),    (3.36)

where f(δ, κ) = K κ + (1−κ)(1 + δ)β + O(1/n). Lemma 2.1 gives

    P( sup_{1≤k≤n} ∑_{i=1}^k D_i > δn ) < C(δ)/n²,

and so

    P( sup_{1≤k≤n} ∑_{i=1}^k (D_i + μ_i) > n f(δ, κ) + δn, ∩_{i=1}^{n(1−κ)+1} G_i ) < C(δ)/n²

when n ≥ N. Together with (3.29), this gives that if n ≥ N,

    P( ∑_{i=1}^k (D_i + μ_i) < n f(δ, κ) + δn, 1 ≤ k ≤ n ) > 1 − C/n²,

so that

    P( log L(T_n, i) < n f(δ, κ) + δn, 0 ≤ i ≤ n ) > 1 − C/n².

Since ∑_{i=0}^{n−1} (L(T_n, i) − 1) = T_n, we get

    P( T_n < C₁ n e^{n[f(δ,κ) + δ]} ) > 1 − C/n²,  n ≥ N.

Now if we choose δ and κ so small that f(δ, κ) + δ < β + ε, this implies (3.34).

P

8 Qk + ;1 > > < i=1(Zi )  k > 0 Wk = > 1 k = 0 > Qk (Z ;);1  k < 0 : i=1 i

and put W = 1 i=;1 Wi , and Vk = Wi =W . We now prove Theorem 1.1, with Vi as just constructed. R R R Let (t) = 0t I (Xs = 0) ds, (t) = 0t I (Xs > 0) ds, and (t) = 0t I (Xs < 0) ds. Then

(t) + (t) + (t) = t 21

(4.37)

and using the restriction principle we get both

R t I (X = j ) ds lim 0 s = P1WjW  j 0 t!1 (t) + (t) i=0

and

i

(4.38)

R t I (X = j ) ds = P Wj lim 0 s

 j  0: (4.39) 1 (t) + (t) i=0 W;i Let $(t), $(t), and $(t) stand for (t)=t, (t)=t, and (t)=t respectively. Then (4.37) and the versions of (4.38) and (4.39) for j = 0 give the following three t!1

equations:

$(t) + $(t) + $(t) = 1 $ W0  limt!1 $(t)+(t)$(t) = X 1 i=0

Wi

(4.40)

$ W0 : limt!1 $(t)+(t)$(t) = X 1 Thus

i=0

W;i

limt!1 $(t) = W0 =W 1 X limt!1$(t) = Wi=W limt!1 $(t) =

i1 =1 X i=1

(4.41)

W;i =W:

The equations (4.38), (4.39), and (4.41) imply Theorem 1.1. Proof of Theorem 1.2: From (3.33) and the restriction principle we have (4.42) lim max0st Xs = ;1 a.s. t!1 log(t) + (t)] Equations (4.41) together with (4.42) prove Theorem 1.2.

22

5 Appendix

To describe the distribution of L(τ(t), 1), defined in Section 2 immediately before Lemma 2.2, we will calculate its Laplace transform φ^{ab}(λ, t) = E^{ab} e^{−λ L(τ(t),1)}, λ ≥ 0. Further, we will omit the superscript ab unless it makes our arguments ambiguous. Denote w(t) := L(τ(t), 1) and observe that

    w(t + dt) = w(t) + χξ,

where χ is a Bernoulli(w(t) dt) random variable and ξ is an exponential(t) random variable, which, given w and t, are independent of everything else. Hence

    φ(λ, t + dt) = E e^{−λ(w + χξ)} = E[ e^{−λw} E( e^{−λχξ} | w ) ].    (5.43)

The inner conditional expectation is easy to compute:

    E( e^{−λχξ} | w ) = (1 − w dt) · 1 + w dt · E(e^{−λξ}) = (1 − w dt) + w dt · t/(λ + t) = 1 − (λ/(λ + t)) w dt.

Plugging this into (5.43) yields

    φ(λ, t + dt) − φ(λ, t) = −(λ/(λ + t)) E( w e^{−λw} ) dt,

whence, since E( w(t) e^{−λw(t)} ) = −∂φ(λ, t)/∂λ,

    ∂φ/∂t = (λ/(λ + t)) ∂φ/∂λ.

The natural boundary conditions are

    φ(λ, a) = e^{−λb},  φ(0, t) = 1.

Solving this (see Section 5.1) we obtain

    φ^{ab}(λ, t) = e^{b(a − √(λ² + 2λt + a²))}.    (5.44)
Though we are not able to invert Laplace transform (5.44) for every t, it still gives us Lemmas 2.2 and 2.5 = ab t 2 2 2 ab w(t)]2 = b(t ; a + t ab) a3 E

E

ab w(t)

by dierentiating ab ( t) once and twice at  = 0. Next, we want to calculate explicitly the distribution of := m111 = limt!1 w(tt) which exists by Corollary 2.4. By interchanging the integration and the limit we obtain E

11 ; 

e

p

= tlim 11 (=t t) = e1; !1

1+2

:

Using Laplace transform, we can, for example, compute moments of : E

= 1

E

2 = 2

E

3 = 7

E

4 = 37

E

5 = 266 : : : :

The inversion of 11 e;  requires an integration on a complex plane. We omit these calculations, presenting only the result. The density of the distribution of = m111 for x > 0 is E

Z +1 p 1; 12 (x+x;1 ) 1 e 1; ;2i +1 ;i x f  (x ) = 2  e e d = p 3 ;1 2x

(5.45) p

(one can quite easily verify that its Laplace transform coincides with e1; 1+2 ). This density is also the density of m111+exp 1 , and is the density f of Theorem 1.1. Moreover, we can present the formula for c.d.f. of :  1 p  1 p  2 F (x) = 1 ; % p ; x + e 1 ; % p + x  x > 0 

x

x

where %( ) is a c.d.f. of a normal zero-one distribution. To calculate  in (2.12) observe that m111+exp 1 has the same distribution as 1=m111 = 1= . Consequently, Z1 Z1 x 1  = log(x)f1= dx = log x exp(1p; 2 ; 2x ) dx = 0:3613 : : : (5.46) 2x 0 0 24

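The moments listed above and the value of β can be reproduced numerically from the two densities (a sketch of ours, standard library only); as a byproduct, β^{−1} ≈ 2.768 matches the constant in Theorem 1.2.

```python
import math

def f_zeta(x):
    # density (5.45) of zeta = m_infinity^{11}
    return math.exp(1.0 - 0.5 * (x + 1.0 / x)) / math.sqrt(2.0 * math.pi * x ** 3)

def f_inv_zeta(x):
    # density of 1/zeta, the integrand of (5.46)
    return math.exp(1.0 - 0.5 * x - 0.5 / x) / math.sqrt(2.0 * math.pi * x)

# trapezoidal rule; both integrands vanish rapidly at the ends of (0, 120]
N, LO, HI = 200_000, 1e-6, 120.0
h = (HI - LO) / N
moments = [0.0] * 5          # E zeta^k, k = 0..4
beta = 0.0
for i in range(N + 1):
    x = LO + i * h
    w = h * (0.5 if i in (0, N) else 1.0)
    fx = f_zeta(x)
    xk = 1.0
    for k in range(5):
        moments[k] += w * xk * fx
        xk *= x
    beta += w * math.log(x) * f_inv_zeta(x)
```

The computed moments come out 1, 1, 2, 7, 37 (the zeroth being the normalization), and beta comes out near 0.3613.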
5.1 Solution of the equation −x φ′_x + (x + y) φ′_y = 0

This equation is a linear PDE to which we can apply a standard technique. We will look for a solution in the region where x ≥ 0 and y ≥ a, with the boundary condition

    φ(x, a) = e^{−bx}.

Let v(x, y) = (−x, x + y); then the equation is equivalent to

    v · ∇φ = 0,

where · denotes the scalar product and ∇φ is the gradient of φ. Thus φ(x, y) must be constant along the solutions of the equation ż = v, where z = (x, y). Solving the system

    ẋ = −x,  ẏ = x + y,

we obtain x(t) = C₁ e^{−t}, y(t) = C₂ e^t − (1/2) C₁ e^{−t}. Hence 2C₁C₂ = x(x + 2y), and any solution φ(x, y) of the PDE must be a function of the one argument x(x + 2y). If the curve (x(t), y(t)) intersects the horizontal line y = a at a point x̃ ≥ 0, then x̃(x̃ + 2a) = x(x + 2y) and x̃ = −a + √(x(x + 2y) + a²) (we took the positive sign at the square root since x̃ must be non-negative). On the other hand, φ(x̃, a) = e^{−x̃b}, therefore

    φ(x, y) = φ(x̃, a) = e^{b(a − √(x(x + 2y) + a²))}.
References

[1] Benaim, M., Ledoux, M., and Raimond, O. (2000). Self interacting diffusions. Preprint.

[2] Coppersmith, D. and Diaconis, P. (1987). Random walks with reinforcement. Unpublished manuscript.

[3] Davis, B. (1990). Reinforced random walk. Prob. Th. Rel. Fields 84, pp. 203-229.

[4] Davis, B. (1999). Reinforced and perturbed random walks. In: Random Walks, Bolyai Soc. Math. Stud. 9, Budapest, pp. 113-126.

[5] Doob, J.L. (1953). Stochastic Processes. John Wiley and Sons, New York.

[6] Pemantle, R. (1988). Phase transition in reinforced random walk and RWRE on trees. Ann. Probab. 16, pp. 1229-1241.

[7] Pemantle, R. and Volkov, S. (1999). Vertex-reinforced random walk on Z has finite range. Ann. Probab. 27, pp. 1368-1388.

[8] Sellke, T. (1994). Reinforced Random Walk on the d-dimensional Integer Lattice. Technical Report #94-26, Department of Statistics, Purdue University.

[9] Tóth, B. (1999). Self-interacting random motions - a survey. In: Random Walks, Bolyai Soc. Math. Stud. 9, Budapest, pp. 349-384.

[10] Volkov, S. (2000). Vertex-reinforced random walk on arbitrary graphs. Ann. Probab., to appear.