Stopping Times of Random Walks on a Hypercube

Cláudia Peixoto*, Diego Marcondes*

*Universidade de São Paulo, Brazil
Abstract A random walk on an N-dimensional hypercube is a discrete stochastic process whose state space is the set {−1,+1}^N and which, in one step, reaches each neighbouring state with equal probability and any non-neighbouring state with probability zero. This random walk is often studied as a process associated with the Ehrenfest Urn Model. This paper presents results about the time such a random walk takes to self-intersect and to return to a set of states. Results are also presented about the time the random walk on a hypercube takes to visit a given set and a random set of states. Asymptotic distributions and bounds are presented for those times. The coupling of random walks is widely used as a tool to prove the results.

Keywords: Stochastic Process; Coupling; Random Walk; Hypercube; Stopping Times
1 Introduction
A random walk on an N-dimensional hypercube is a discrete stochastic process whose state space is the N-dimensional hypercube, i.e., the set {−1,+1}^N. The random walk on a hypercube is often studied as a process associated with the Ehrenfest Urn Model [12], when there is an interest not only in the proportion of particles in each urn, but also in where each particle is at a given time. Asymptotic results for random walks on a hypercube have been widely studied. The time a particle takes to reach its stationary distribution was shown by [5] to occur around (1/4) N log N, and the total variation distance to stationarity at this threshold was studied. The probability of hitting a vertex a before hitting a vertex b, whenever a and b share an edge, starting at any position, was presented by [13]. The structure of the set of unvisited vertices was studied by [3] and [9], results about its transition probabilities were given by [11] and [7], and some interesting applications of random walks on a hypercube were presented by [4] and [6]. Finally, hitting times of the Ehrenfest chain were determined by [8] through the use of coupled random walks on a hypercube.

Although extensively studied and applied, the random walk on a hypercube still lacks some simple and important results, especially about its stopping times. Therefore, this paper presents results about stopping times of random walks on an N-dimensional hypercube. We treat the first self-intersection time, the time to revisit the path taken from time 0 to time N^γ, 0 < γ < 1, the time to visit a given set and the time to visit a random set. The results of this paper are based on the master's thesis [10] and on the papers [1] and [2], and complement the vast theory on random walks on a hypercube.
2 Random Walk on a Hypercube
An N-dimensional hypercube is denoted by H_N = {−1,+1}^N and its vertices are represented by η, i.e., η = (η_1,...,η_N), η_i ∈ {−1,+1}, i ∈ {1,...,N}. Given j ∈ {1,...,N} and η ∈ H_N, the vertex η^j is obtained from η by a spin (flip) of its j-th coordinate. A random walk on a hypercube is a stochastic process whose state space is the hypercube H_N, and whose transitions from one state to another are made by a spin of one coordinate. The transition probabilities of this process may be defined in two ways, dividing these random walks into two types: aperiodic and periodic.

The aperiodic random walk is denoted by σ, where σ(t) ∈ H_N is its state at time t ∈ ℕ. The random walk σ may be obtained from two sequences of independent random variables I(t) and U(t), t ∈ ℕ, defined on a probability space (Ω_N, F_N, P_N). The random variables I(t) take values in {1,...,N}, are independent and identically distributed and, for any k ∈ {1,...,N}, P_N{I(t) = k} = 1/N. Further, the random variables U(t) are independent and identically distributed, with uniform distribution on [0,1]. Then, for all t ∈ ℕ, σ(t) = (σ(t−1))^i if I(t) = i and U(t) < 1/2, and σ(t) = σ(t−1) if U(t) ≥ 1/2.

On the other hand, the periodic random walk is denoted by ξ, where ξ(t) ∈ H_N is its state at time t ∈ ℕ. The random walk ξ, defined on the probability space (Ω_0, F_0, P_0), is a random walk with transition probabilities given by

P_0( ξ(k+1) = η^i | ξ(k) = η ) = 1/N,  ∀k ≥ 0, η ∈ H_N,

where η^i is one of the N states that may be reached from η by a spin, i ∈ {1,...,N}.
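To make the two constructions concrete, the following sketch (in Python; the function names and parameter choices are illustrative and not part of the paper) simulates ξ and σ from a shared sequence of coordinate indices I(t), using the uniform variables U(t) only for the lazy walk σ.

import random

def simulate_walks(N, steps, seed=0):
    """Simulate the periodic walk xi and the aperiodic (lazy) walk sigma on
    {-1,+1}^N, driven by the same coordinate indices I(t); sigma also uses U(t)."""
    rng = random.Random(seed)
    xi = [-1] * N              # xi started at (-1,...,-1), i.e., xi^-
    sigma = [-1] * N           # sigma started at (-1,...,-1), i.e., sigma^-
    xi_path, sigma_path = [tuple(xi)], [tuple(sigma)]
    for _ in range(steps):
        i = rng.randrange(N)   # I(t), uniform over the N coordinates (0-based here)
        u = rng.random()       # U(t), uniform on [0,1]
        xi[i] = -xi[i]         # xi always flips the chosen coordinate
        if u < 0.5:            # sigma flips the chosen coordinate only when U(t) < 1/2
            sigma[i] = -sigma[i]
        xi_path.append(tuple(xi))
        sigma_path.append(tuple(sigma))
    return xi_path, sigma_path

xi_path, sigma_path = simulate_walks(N=10, steps=5)
print(xi_path[-1], sigma_path[-1])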
Note that ξ may also be defined as a function of the sequence of random variables I(t). The difference between σ and ξ is that ξ changes its state at each step with probability 1, while σ has probability 1/2 of not changing its state. Both random walks will be used to prove our results, as some proofs are more elegant for ξ and others for σ.

The random walks with initial state (−1,...,−1) are denoted by ξ^− and σ^−, those with initial state (+1,...,+1) are denoted by ξ^+ and σ^+, and the random walks with initial state η ∈ H_N are denoted by ξ^η and σ^η. When the initial state is of no importance, the random walks will be denoted simply by ξ and σ. The indexes of the probability spaces defined above will be omitted when there is no doubt about which one is being referred to.

Coupled random walks σ^η and σ^ς are constructed using the same ω, i.e., the same index I(t,ω) and the same value of U(t,ω): given σ^η(t), σ^ς(t) and I(t+1) = i,

if U(t+1) < 1/2, then σ_i^η(t+1) = +1 and σ_i^ς(t+1) = +1;
if U(t+1) ≥ 1/2, then σ_i^η(t+1) = −1 and σ_i^ς(t+1) = −1.

The distance at time t between two coupled random walks σ^η and σ^ς is defined by

D_N^{η,ς}(t) = (1/(2N)) Σ_{i=1}^N |σ_i^η(t) − σ_i^ς(t)|.

Note that D_N^{η,ς}(t) is an Ehrenfest Model on {0, 1/N, ..., D_N^{η,ς}(0) − 1/N, D_N^{η,ς}(0)}.

The time taken by the coupled aperiodic random walks σ^+ and σ^− to meet is defined by t_N^− = min(t > 0 : σ^+(t) = σ^−(t)). A well known result for t_N^− is that

lim_{N→∞} t_N^− / (N log N) = 1 in probability.  (1)

Let σ^η and σ^ς be coupled random walks on H_N. Suppose that D_N^{η,ς}(0) = [Nf]/N, 0 ≤ f ≤ 1. Then, it follows easily from (1) that

lim_{N→∞} P( σ^η(t(N)) ≠ σ^ς(t(N)) ) = 0,  (2)

for any t(N) that satisfies lim_{N→∞} t(N)/(N log N) = ∞. The result below, which also follows from (1), presents an upper bound for the rate of convergence to equilibrium:

lim_{N→∞} | P( σ^+(t(N)) = η ) − 1/2^N | = 0,  (3)

for all η ∈ H_N and t(N) satisfying lim_{N→∞} t(N)/(N log N) = ∞.
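In this coupling, a coordinate agrees in both walks from the first time it is chosen, so the meeting time t_N^− can be sampled directly. The sketch below (illustrative names and parameters, not taken from the paper) estimates t_N^−/(N log N), which by (1) should approach 1 as N grows.

import math
import random

def coupled_meeting_time(N, seed=None):
    """Run sigma^+ and sigma^- with the same I(t) and U(t) until they meet;
    return the meeting time t_N^-."""
    rng = random.Random(seed)
    plus = [+1] * N
    minus = [-1] * N
    t = 0
    while plus != minus:
        t += 1
        i = rng.randrange(N)                        # I(t)
        value = +1 if rng.random() < 0.5 else -1    # coordinate i is set according to U(t)
        plus[i] = value
        minus[i] = value
    return t

N = 200
samples = [coupled_meeting_time(N, seed=s) for s in range(20)]
print(sum(samples) / len(samples) / (N * math.log(N)))  # about 1.1 for N = 200; tends to 1 as N grows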
3 Results
Our first results treat the time of the first self-intersection of ξ, which is given by the random variable S_N = min(t ≥ 2 : ξ(t) ∈ {ξ(0),...,ξ(t−1)}), and the time of a 2l-step return of ξ, i.e., the return to a state in 2l steps. Note that such a return happens if, and only if, a 2k-step return (k < l) did not happen to that state (or any other) and a return happened at the 2l-th step. A 2l-step return may be defined as a function of the random variables {I(1),...,I(2l)} and occurs if, and only if,

1. all the vectors (I(m),...,I(m+s)), m = 1,...,2l, m+s ≤ 2l, other than the full vector (I(1),...,I(2l)), contain at least one value j ∈ {1,...,N} an odd number of times. This guarantees that ξ(t) ≠ ξ(m) for all m < t ≤ m+s, so that no 2k-step return has occurred for k < l;

2. every j ∈ {I(1),...,I(2l)} appears an even number of times in it. This guarantees that a 2l-step return has occurred, as all the coordinates will be the same as in the initial state.

In order to establish whether a 2l-step return has occurred, we may apply convenient functions to a sample of {I(1),...,I(2l)}. To this purpose, let x = (i_1,...,i_{2l}) ∈ {1,...,N}^{2l}, x_{[j,k]} = (i_j,...,i_k), 1 ≤ j < k ≤ 2l, let 1_A(·) be the usual indicator (Kronecker delta) of the set A, and define for all x ∈ {1,...,N}^{2l} the functions f_{2l}, h_{2l} and g_{2l} as

f_{2l}(x) = Π_{j=1}^{N} 1_{{0,2,...,2l}}( Σ_{k=1}^{2l} 1_{{j}}(i_k) );

h_{2l}(x) = Π_{j=1}^{2l} Π_{k=1, (j,k)≠(1,l)}^{⌊(2l+1−j)/2⌋} [ 1 − f_{2k}( x_{[j, j+2k−1]} ) ],  l > 1;

g_{2l}(x) = f_{2l}(x) h_{2l}(x) if l > 1, and g_{2l}(x) = f_{2l}(x) if l = 1.
Note that the functions h_{2l}(x) and f_{2l}(x), where x is a sample of {I(1),...,I(2l)}, indicate whether conditions 1 and 2, respectively, are satisfied by the sample, and the function g_{2l}(x) indicates whether a return in 2l steps happened in the sample. Therefore, defining the set J_l ⊂ {1,...,N}^{2l} as

J_l = { (i_1,...,i_{2l}) ∈ {1,...,N}^{2l} : g_{2l}((i_1,...,i_{2l})) = 1 },

it is easily seen that the time of the first 2l-step return of the random walk ξ may be defined by

Γ_l = min( t ≥ 2l : (I(t−(2l−1)),...,I(t)) ∈ J_l ).
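A direct transcription of these definitions may help in checking small cases. The sketch below (illustrative; the exclusion in h_{2l} is read here as skipping the full window x_{[1,2l]}) evaluates f_{2l}, h_{2l} and g_{2l} on 0-based index tuples and cross-checks g_{2l} against a direct simulation of the corresponding walk segment.

from itertools import product

def f(x):
    """f_{2l}(x) = 1 iff every value occurs an even number of times in x."""
    return int(all(x.count(j) % 2 == 0 for j in set(x)))

def h(x):
    """h_{2l}(x) = 1 iff no proper even sub-window x[j:j+2k] has all-even counts."""
    n = len(x)  # n = 2l
    for j in range(n):
        for k in range(1, (n - j) // 2 + 1):
            if (j, 2 * k) == (0, n):
                continue                      # skip the full window x_{[1,2l]}
            if f(x[j:j + 2 * k]) == 1:
                return 0
    return 1

def g(x):
    """g_{2l}(x) = 1 iff x encodes a first return of xi after exactly 2l steps."""
    return f(x) * h(x) if len(x) > 2 else f(x)

def is_first_return_by_walk(x, N):
    """Direct check: the walk driven by x returns to its start at the last step,
    with no earlier coincidence between visited states."""
    state = tuple([-1] * N)
    seen = {state}
    for step, i in enumerate(x, start=1):
        state = state[:i] + (-state[i],) + state[i + 1:]
        if step < len(x) and state in seen:
            return 0
        seen.add(state)
    return int(state == tuple([-1] * N))

# cross-check the two characterizations for all index tuples with N = 3, l = 2
N, l = 3, 2
assert all(g(x) == is_first_return_by_walk(x, N)
           for x in product(range(N), repeat=2 * l))
print("g_2l agrees with the direct walk check for every tuple with N = 3, l = 2")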
Our first result states that, as N → ∞, the first self-intersection of ξ is a 2-step return with probability tending to 1.

Theorem 1 lim_{N→∞} P(S_N = Γ_1) = 1.
From the theorem above it follows that, as N → ∞, the number of different states visited by the random walk ξ up to time N^γ, 0 < γ < 1, is N^γ + 1 with probability tending to 1.

Corollary 1 For 0 < γ < 1, lim_{N→∞} P( |{ξ(0),...,ξ(N^γ)}| = N^γ + 1 ) = 1.
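A quick Monte Carlo illustration of Theorem 1 and of the corollaries below (a sketch with illustrative parameters, not results from the paper): simulate ξ until its first self-intersection, record whether it was a 2-step return (i.e., I(t) = I(t−1)), and rescale S_N by N.

import random

def first_self_intersection(N, rng):
    """Run the periodic walk xi until it revisits a state; return (S_N, two_step)."""
    state = tuple([-1] * N)
    seen = {state}
    prev_i = None
    t = 0
    while True:
        t += 1
        i = rng.randrange(N)
        state = state[:i] + (-state[i],) + state[i + 1:]
        if state in seen:
            return t, (i == prev_i)   # two_step: the same coordinate was flipped twice in a row
        seen.add(state)
        prev_i = i

rng = random.Random(1)
N, runs = 100, 2000
results = [first_self_intersection(N, rng) for _ in range(runs)]
frac_two_step = sum(two for _, two in results) / runs
mean_scaled = sum(s for s, _ in results) / (runs * N)
print(frac_two_step)   # should be close to 1 (Theorem 1)
print(mean_scaled)     # should be close to 1, the mean of a mean one exponential law (Corollary 2)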
Still from Theorem 1, it follows that S_N, divided by N, converges weakly to an exponential law.

Corollary 2 The random variable N^{−1} S_N converges weakly to a mean one exponential law.

We now treat the return of σ^+ to its first N^γ visited states, i.e., the path taken from time 0 to time N^γ, denoted by V(0,N^γ) = {σ^+(0),...,σ^+(N^γ)}. The first return of σ^+ to V(0,N^γ) and the first visit of σ^η to V(0,N^γ) are defined, respectively, by R_N = min(t > N^γ : σ^+(t) ∈ V(0,N^γ)) and R_N^η = min(t > 0 : σ^η(t) ∈ V(0,N^γ)). Note that σ^+ and σ^η are coupled. Consider from now on that N^γ stands for [N^γ] and denote V(0,N^γ) by V. Propositions 1 through 3 give bounds for R_N and R_N^η.

Firstly, we note that, as N → ∞, N^{1+δ}, 0 < δ < 1/2, is a lower bound for R_N and R_N^η.

Proposition 1 For 0 < γ < 1, 0 < δ < 1/2 and all η ∉ V,

lim_{N→∞} P(R_N > N^{1+δ}) = lim_{N→∞} P(R_N^η > N^{1+δ}) = 1.
The lower bound above may be improved to any bound t(N) satisfying lim_{N→∞} t(N) N^γ / 2^N = 0.

Proposition 2 For 0 < γ < 1 and for all t(N) satisfying lim_{N→∞} t(N) N^γ / 2^N = 0,

lim_{N→∞} P(R_N > t(N)) = 1.
Finally, we note that any bound t(N) satisfying lim_{N→∞} N(log N + 1) / ( t(N)^{1−ε} ν(V) ) = 0, in which ε > 0 and ν(·) is the uniform measure on H_N, is an upper bound for R_N.

Proposition 3 For ε > 0 and for all t(N) satisfying lim_{N→∞} N(log N + 1) / ( t(N)^{1−ε} ν(V) ) = 0, in which ν(·) is the uniform measure on H_N,

lim_{N→∞} P(R_N ≤ t(N)) = 1.
We now turn to the weak convergence of R_N. For this purpose, define β_N = min(t ∈ ℕ : P(R_N ≥ t) ≤ e^{−1}). The next theorem states that the distribution of R_N, normalized by β_N, converges to an exponential law with rate 1.

Theorem 2 For 0 < γ < 1 and t > 0,

lim_{N→∞} P( R_N / β_N > t ) = e^{−t}.

Finally, we consider the hitting time of a random set of states, defined as follows. Let M ⊂ H_N be a random set, defined on a probability space (Ω̄, F̄, P̄): each vertex of the hypercube belongs to M with probability 1/N^γ, γ > 0, independently of the others. Let the time the process ξ takes to reach the set M be Θ = min(t > 0 : ξ(t) ∈ M). Proposition 4 states that the expected value of the probability of Θ exceeding N^γ t converges to the survival function of a mean one exponential law.

Proposition 4 For 0 < γ < 1 and t > 0,

lim_{N→∞} Ē( P(Θ > N^γ t) ) = e^{−t}.

Lastly, we note that not only does the expected value of the probability of Θ exceeding N^γ t converge to the survival function of a mean one exponential law, but the limiting distribution of Θ/N^γ is that exponential law, in the following sense.

Theorem 3 For 0 < γ < 1 and ε, t > 0,

lim_{N→∞} P̄( |P(Θ > N^γ t) − e^{−t}| > ε ) = 0.
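These statements can be illustrated by sampling the random set M lazily, deciding membership with probability 1/N^γ the first time a state is visited. The sketch below (illustrative parameters; not from the paper) estimates the mean of Θ/N^γ and the frequency of {Θ > N^γ}, which should be close to 1 and to e^{−1}, respectively.

import random

def hitting_time_random_set(N, gamma, rng):
    """Run xi until it hits a random set M, where each vertex belongs to M
    independently with probability 1/N**gamma (decided lazily as states appear)."""
    p = N ** (-gamma)
    membership = {}                 # lazily sampled indicator of M
    def in_M(state):
        if state not in membership:
            membership[state] = (rng.random() < p)
        return membership[state]
    state = tuple([-1] * N)
    t = 0
    while True:
        t += 1
        i = rng.randrange(N)
        state = state[:i] + (-state[i],) + state[i + 1:]
        if in_M(state):
            return t

rng = random.Random(2)
N, gamma, runs = 64, 0.7, 500
samples = [hitting_time_random_set(N, gamma, rng) / N ** gamma for _ in range(runs)]
print(sum(samples) / runs)                    # should be close to 1 (mean of a mean one exponential law)
print(sum(s > 1.0 for s in samples) / runs)   # should be close to exp(-1), about 0.37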
4 Proofs
Proofs of the theorems, propositions and corollaries above are presented in this section. Some lemmas are stated and proved in order to assist in the proofs of Theorems 1 and 2.
Lemma 1 For l ≥ 3,

P(Γ_l ≤ n) ≤ 8n / N^3.

Proof of Lemma 1: First, we will show that

P( (I(1),...,I(2l)) ∈ J_l ) ≤ 8 / N^3.

It is easy to see that P( (I(1),...,I(2l)) ∈ J_l ) = |J_l| / N^{2l}. Now, note that |J_l| ≤ 2N^{2l−2}, because the last two coordinates of a vector v ∈ J_l are fixed up to a permutation. Furthermore, the set J_l may be divided into two disjoint sets: the vectors in which the last three coordinates are pairwise distinct and those in which only two of the last three coordinates are distinct. These sets are denoted by J_l' and J_l'', respectively. Thus |J_l| = |J_l'| + |J_l''|; |J_l'| ≤ 3! N^{2l−3}, because the last three coordinates must be fixed up to a permutation if the vector is in J_l'; and |J_l''| ≤ N |J_{l−1}|, because if the pair occupying the last three coordinates is disregarded, exactly one return of the type counted by J_{l−1} happens, and the pair may assume N values. Therefore,

|J_l| / N^{2l} ≤ ( 3! N^{2l−3} + N |J_{l−1}| ) / N^{2l} ≤ ( 3! N^{2l−3} + N · 2N^{2l−4} ) / N^{2l} = 8 / N^3.

Now, it follows that

P(Γ_l ≤ n) = Σ_{k=2l}^{n} P( (I(j−2l+1),...,I(j)) ∉ J_l, j < k; (I(k−2l+1),...,I(k)) ∈ J_l ) ≤ n P( (I(1),...,I(2l)) ∈ J_l ) ≤ 8n / N^3.
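For small N and l, the counting above can be checked by brute force: enumerate every index tuple in {1,...,N}^{2l}, decide membership in J_l through the direct walk characterization (a closed walk of length 2l with no earlier coincidence of states), and compare |J_l|/N^{2l} with the bound 8/N^3. A sketch, with illustrative parameter choices:

from itertools import product

def is_first_return(x, N):
    """1 iff the walk driven by x returns to its start exactly at the last step,
    with no earlier coincidence between visited states (membership in J_l)."""
    state = tuple([-1] * N)
    seen = {state}
    for step, i in enumerate(x, start=1):
        state = state[:i] + (-state[i],) + state[i + 1:]
        if step < len(x) and state in seen:
            return 0
        seen.add(state)
    return int(state == tuple([-1] * N))

for N in (4, 5):
    l = 3
    J_size = sum(is_first_return(x, N) for x in product(range(N), repeat=2 * l))
    prob = J_size / N ** (2 * l)
    # print |J_l|, the empirical probability and the bound 8/N^3 from the lemma
    print(N, l, J_size, prob, 8 / N ** 3, prob <= 8 / N ** 3)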
Proof of Theorem 1: First of all, note that S_N = min_{l≥1} Γ_l, and that it is enough to prove that, for 0 < δ < 1/2,

lim_{N→∞} P( Γ_1 < N^{1+δ} ) = 1  and  lim_{N→∞} P( min_{2 ≤ l ≤ N^{1+δ}/2} Γ_l ≤ N^{1+δ} ) = 0.

On the one hand,

lim_{N→∞} P( Γ_1 < N^{1+δ} ) = lim_{N→∞} [ 1 − (1 − 1/N)^{N^{1+δ}} ] = 1.

On the other hand,

P( min_{2 ≤ l ≤ N^{1+δ}/2} Γ_l > N^{1+δ} ) = 1 − P( min_{2 ≤ l ≤ N^{1+δ}/2} Γ_l ≤ N^{1+δ} ) ≥ 1 − 2N^{1+δ}/N^2 − Σ_{l=3}^{N^{1+δ}/2} 8N^{1+δ}/N^3.  (4)

For 0 < δ < 1/2, the limit of (4) is 1 as N → ∞.
Proof of Corollary 1: It is straightforward that

lim_{N→∞} P( |{ξ(0),...,ξ(N^γ)}| = N^γ + 1 ) = lim_{N→∞} P( S_N > N^γ ) = lim_{N→∞} P( Γ_1 > N^γ ) = 1.

Proof of Corollary 2: It is enough to prove that, for every t > 0, lim_{N→∞} P(S_N > tN) = e^{−t}. Now, note that |P(S_N > tN) − P(Γ_1 > tN)| ≤ P(Γ_1 ≠ S_N). From Theorem 1, lim_{N→∞} P(Γ_1 ≠ S_N) = 0. Moreover, lim_{N→∞} P(Γ_1 > tN) = lim_{N→∞} (1 − 1/N)^{Nt} = e^{−t}.
Proof of Proposition 1: From Theorem 1, lim_{N→∞} P( ∪_{l≥2} {Γ_l < N^{1+δ}} ) = 0 and, therefore,

lim_{N→∞} P( R_N ≤ N^{1+δ} ) = lim_{N→∞} P( σ^+(N^γ+1) = σ^+(N^γ) ) ≤ lim_{N→∞} N^γ/N = 0.

Now, it is enough to prove that lim_{N→∞} | P(R_N > N^{1+δ}) − P(R_N^η > N^{1+δ}) | = 0. However, it is straightforward that

| P(R_N > N^{1+δ}) − P(R_N^η > N^{1+δ}) |
≤ P( R_N > N^{1+δ}, R_N^η ≤ N^{1+δ} ) + P( R_N^η > N^{1+δ}, R_N ≤ N^{1+δ} )
≤ sup_{η_0 ∈ H_N} P( σ^{η_0}(N^{1+δ}) ≠ σ^+(N^{1+δ}) ) → 0 as N → ∞.
Proof of Proposition 2: Again, it is straightforward that

P( R_N ≤ t(N) ) ≤ P( R_N ≤ N^{1+δ} ) + P( N^{1+δ} < R_N ≤ t(N) )
≤ P( R_N ≤ N^{1+δ} ) + P( σ^+(t) ∈ V for some N^{1+δ} < t ≤ t(N) )
≤ o(N) + Σ_{η∈H_N} ν(η) | P( σ^+(t) ∈ V for some N^{1+δ} < t ≤ t(N) ) − P( σ^η(t) ∈ V for some N^{1+δ} < t ≤ t(N) ) | + Σ_{η∈H_N} ν(η) P( σ^η(t) ∈ V for some N^{1+δ} < t ≤ t(N) )
≤ o(N) + Σ_{η∈H_N} ν(η) sup_{η_0∈H_N} P( σ^+(N^{1+δ}) ≠ σ^{η_0}(N^{1+δ}) ) + Σ_{η∈H_N} ν(η) Σ_{u=N^{1+δ}}^{t(N)} P( σ^η(u) ∈ V )
≤ o(N) + Σ_{η∈H_N} ν(η) sup_{η_0∈H_N} P( σ^+(N^{1+δ}) ≠ σ^{η_0}(N^{1+δ}) ) + (N^γ+1) t(N) / 2^N.

The limit above, as N → ∞, is zero by the hypothesis lim_{N→∞} t(N) N^γ / 2^N = 0 and by (2).
Proof of Proposition 3: Consider the events A_s = {σ^η(s) ∈ V} and the random variable Z = Σ_{s=0}^{t(N)} 1_{A_s}. Note that {Z > 0} = {R_N^η ≤ t(N)}. Applying the Paley-Zygmund inequality, for δ > 0,

P(Z > 0) ≥ [ E( Σ_{s=N^{1+δ}}^{t(N)} 1_{A_s} ) ]^2 / E[ ( Σ_{s=N^{1+δ}}^{t(N)} 1_{A_s} )^2 ] + o(N)
= [ Σ_{s=N^{1+δ}}^{t(N)} P(A_s) ]^2 / E[ Σ_{s=N^{1+δ}}^{t(N)} (1_{A_s})^2 + Σ_{u≠s} 1_{A_u} 1_{A_s} ] + o(N)
= [ Σ_{s=N^{1+δ}}^{t(N)} |V|/2^N ]^2 / [ Σ_{s=N^{1+δ}}^{t(N)} |V|/2^N + Σ_{u≠s} P(A_u ∩ A_s) ] + o(N)
= (t(N) − N^{1+δ})^2 ν(V)^2 / [ (t(N) − N^{1+δ}) ν(V) + Σ_{|u−s|>N^{1+δ}} P(A_u ∩ A_s) + Σ_{0<|u−s|≤N^{1+δ}} P(A_u ∩ A_s) ] + o(N).

Now,

Σ_{|u−s|>N^{1+δ}} P(A_u ∩ A_s) ≤ Σ_{k=N^{1+δ}}^{t(N)} (t(N) − k + 1) P( σ(k) ∈ V, σ(0) ∈ V )
= Σ_{k=N^{1+δ}}^{t(N)} (t(N) − k + 1) Σ_{η∈V} ν(η) P( σ(k) ∈ V | σ(0) = η )
= Σ_{k=N^{1+δ}}^{t(N)} (t(N) − k + 1) × { Σ_{η∈V} ν(η) Σ_{ξ∈H_N} ν(ξ) [ P( σ(k) ∈ V | σ(0) = η ) − P( σ(k) ∈ V | σ(0) = ξ ) ] + Σ_{η∈V} ν(η) Σ_{ξ∈H_N} ν(ξ) P( σ(k) ∈ V | σ(0) = ξ ) }
≤ Σ_{k=N^{1+δ}}^{t(N)} (t(N) − k + 1) [ Σ_{η∈V} ν(η) Σ_{ξ∈H_N} ν(ξ) P( σ^η(k) ≠ σ^ξ(k) ) + ν(V)^2 ]
≤ Σ_{k=N^{1+δ}}^{t(N)} (t(N) − k + 1) [ ν(V) N(log N + 1)/k + ν(V)^2 ]
≤ ν(V) N(log N + 1) Σ_{k=1}^{t(N)} t(N)/k + t(N) ν(V)^2 Σ_{k=1}^{t(N)} 1
≤ t(N) log(t(N)) ν(V) N(log N + 1) + t(N)^2 ν(V)^2
≤ t(N)^{1+ε} N(log N + 1) ν(V) + t(N)^2 ν(V)^2.

On the other hand,

Σ_{0<|u−s|≤N^{1+δ}} P(A_u ∩ A_s) = Σ_{k=1}^{N^{1+δ}} (t(N) − k + 1) P( σ(k) ∈ V, σ(0) ∈ V ).

Bounding this remaining sum in the same way and applying the hypothesis lim_{N→∞} N(log N + 1)/(t(N)^{1−ε} ν(V)) = 0, the right-hand side of the Paley-Zygmund inequality converges to 1, so that lim_{N→∞} P( R_N^η ≤ t(N) ) = 1.

Now, it is enough to prove that lim_{N→∞} | P(R_N^η > t(N)) − P(R_N > t(N)) | = 0. By Proposition 1 and (2),

| P(R_N^η > t(N)) − P(R_N > t(N)) |
≤ P( R_N > t(N), R_N^η ≤ t(N) ) + P( R_N^η > t(N), R_N ≤ t(N) )
≤ P( R_N > N^{1+δ}, R_N^η ≤ N^{1+δ} ) + P( R_N^η > N^{1+δ}, R_N ≤ N^{1+δ} )
≤ sup_{η_0∈H_N} P( σ^{η_0}(N^{1+δ}) ≠ σ^+(N^{1+δ}) ).  (5)

As N → ∞, (5) goes to zero.
Lemma 2 lim_{N→∞} P(R_N ≥ β_N) = e^{−1}.

Proof of Lemma 2: By the definition of β_N, P(R_N ≥ β_N) ≤ e^{−1} < P(R_N ≥ β_N − 1). It is easy to see that

0 ≤ P(R_N ≥ β_N − 1) − P(R_N ≥ β_N) ≤ P( β_N − 1 ≤ R_N < β_N ),

so that the proof will be completed by applying the Markov property. Note that

P( β_N − 1 ≤ R_N < β_N ) = P( β_N − 1 ≤ R_N < β_N | σ^+(β_N−1) ∉ V ) × P( σ^+(β_N−1) ∉ V )
= P( σ(1) ∈ V | σ(0) ∉ V ) × [ P( σ^+(β_N−1) ∉ V ) − Σ_{η∈H_N} ν(η) P( σ^η(β_N−1) ∉ V ) + Σ_{η∈H_N} ν(η) P( σ^η(β_N−1) ∉ V ) ]
≤ Σ_{η∈H_N} ν(η) [ P( σ^+(β_N−1) ∉ V ) − P( σ^η(β_N−1) ∉ V ) ] + (N^γ/N) (2^N − |V|)/2^N
≤ Σ_{η∈H_N} ν(η) sup_{η_0∈H_N} P( σ^+(β_N−1) ≠ σ^{η_0}(β_N−1) ) + (N^γ/N) (2^N − |V|)/2^N.

As N → ∞, the first term goes to zero by (2) and the second by Corollary 1 and the hypothesis. Therefore, lim_{N→∞} P(R_N ≥ β_N) = e^{−1}.
Lemma 3 For each integer n > 0, there exists an α satisfying e^{−1} ≤ α < 1 such that, for every N > M_n > 0, P(R_N ≥ nβ_N) ≤ α^n.

Proof of Lemma 3: The lemma will be proved by induction. For n = 1 the result is immediate from the definition. Now assume that the inequality holds for the integer n. Then, applying the Markov property,

P( R_N ≥ β_N(n+1) ) = Σ_{η∉V} P( R_N ≥ β_N n, σ(β_N n) = η ) × P( R_N ≥ β_N | σ(0) = η )
≤ P( R_N ≥ β_N n ) sup_{η_0∉V} P( R_N ≥ β_N | σ(0) = η_0 )
≤ α^n sup_{η_0∉V} P( R_N ≥ β_N | σ(0) = η_0 ) ≤ α^n e^{−1} ≤ α^{n+1}.

Therefore, for N big enough, i.e., N > M_n > 0, P( R_N ≥ β_N(n+1) ) ≤ α^{n+1}.
Proof of Theorem 2: To show that R_N, normalized by β_N, converges to an exponential law with rate 1 as N → ∞, it is enough to prove that

(i) lim_{N→∞} | P( R_N > β_N(t+s) ) − P( R_N > β_N t ) P( R_N > β_N s ) | = 0;
(ii) lim_{N→∞} E( R_N/β_N ) = 1.

Note that item (i) guarantees that if the law of R_N/β_N converges as N → ∞, then the limit must be an exponential law (possibly a degenerate one). On the other hand, Lemma 2 along with this item implies that if t is a positive rational number, then the limit lim_{N→∞} P(R_N ≥ β_N t) exists and equals e^{−t}. As the exponential law is continuous, the convergence extends to all t ∈ ℝ, which concludes the proof. Item (ii) guarantees that this exponential law has rate 1. This technique was applied in [1], where more details are presented.

Proof of (i): Firstly, the result will be proved for an initial state chosen uniformly from H_N. To this end, note the following three facts.

(a)

| Σ_{η∈H_N} ν(η) P( R_N^η > β_N(t+s) ) − Σ_{η∈H_N} ν(η) P( σ^η(u) ∉ V, ∀u ∈ {1,...,β_N t} ∪ {β_N t + N^{1+δ},...,β_N(t+s)} ) |
≤ Σ_{η∈H_N} ν(η) P( σ^η(u) ∈ V for some u ∈ {β_N t + 1,...,β_N t + N^{1+δ}} )
≤ Σ_{η∈H_N} ν(η) Σ_{u=β_N t+1}^{β_N t+N^{1+δ}} Σ_{ς∈V} P( σ^η(u) = ς ) ≤ Σ_{u=β_N t}^{β_N t+N^{1+δ}} Σ_{ς∈V} 1/2^N = N^{1+δ}(N^γ+1)/2^N → 0 as N → ∞.

(b)

| Σ_{η∈H_N} ν(η) P( R_N^η > β_N s ) − Σ_{η∈H_N} ν(η) P( σ^η(u) ∉ V, ∀u ∈ {N^{1+δ},...,β_N s} ) |
≤ Σ_{η∈H_N} ν(η) P( σ^η(u) ∈ V for some u ∈ {1,...,N^{1+δ}} )
≤ Σ_{η∈H_N} ν(η) Σ_{u=1}^{N^{1+δ}} Σ_{ς∈V} P( σ^η(u) = ς ) ≤ N^{1+δ}(N^γ+1)/2^N → 0 as N → ∞.

(c) Applying the Markov property, facts (a) and (b), and (2), we have that

| Σ_{η∈H_N} ν(η) P( R_N^η > β_N(t+s) ) − Σ_{η∈H_N} ν(η) P( R_N^η > β_N t ) Σ_{η∈H_N} ν(η) P( R_N^η > β_N s ) |
≤ Σ_{η∈H_N} Σ_{κ∉V} ν(η) P( R_N^η > β_N t, σ^η(β_N t) = κ ) × | P( σ^κ(u) ∉ V, ∀u ∈ {N^{1+δ},...,β_N s} ) − P( σ^η(u) ∉ V, ∀u ∈ {N^{1+δ},...,β_N s} ) |
≤ Σ_{η∈H_N} ν(η) sup_{κ_0∈H_N} P( σ^{κ_0}(N^{1+δ}) ≠ σ^η(N^{1+δ}) ) → 0 as N → ∞.

Now, it is enough to show that lim_{N→∞} | P( R_N^η > β_N t ) − P( R_N > β_N t ) | = 0. Note that, by Proposition 1,

| P( R_N > β_N t ) − P( σ^+(u) ∉ V, ∀u ∈ {N^{1+δ},...,β_N t} ) | ≤ | P( R_N ≤ β_N t ) − P( N^{1+δ} ≤ R_N ≤ β_N t ) | = P( R_N < N^{1+δ} ) → 0 as N → ∞,

and, analogously,

| P( R_N^η > β_N t ) − P( σ^η(u) ∉ V, ∀u ∈ {N^{1+δ},...,β_N t} ) | ≤ P( R_N^η < N^{1+δ} ) → 0 as N → ∞.

Therefore,

| P( R_N > β_N t ) − P( R_N^η > β_N t ) |
≤ | P( σ^+(u) ∉ V, ∀u ∈ {N^{1+δ},...,β_N t} ) − P( σ^η(u) ∉ V, ∀u ∈ {N^{1+δ},...,β_N t} ) |
≤ sup_{η_0∈H_N} P( σ^+(N^{1+δ}) ≠ σ^{η_0}(N^{1+δ}) ).  (6)

Applying (2), as N → ∞, (6) goes to zero.

Proof of (ii): By definition,

E( R_N/β_N ) = ∫_0^∞ P( R_N/β_N > t ) dt.

As N → ∞, because of Lemma 3, Lebesgue's Dominated Convergence Theorem may be applied and lim_{N→∞} ∫_0^∞ P( R_N/β_N > t ) dt = 1.
Proof of Proposition 4: Firstly, note that, for a given ω̄ ∈ Ω̄,

P(Θ > N^γ t) = 1_{{η∉M}} (1/N) Σ_{i_1=1}^N ( 1_{{η^{i_1}∉M}} ⋯ (1/N) Σ_{i_{N^γ t}=1}^N 1_{{η^{i_1,...,i_{N^γ t}}∉M}} ).

Let F_1(N) = { (i_1,...,i_{N^γ t}) ∈ {1,...,N}^{N^γ t} : ∃ l = 1,...,N^γ t such that η^{i_1,...,i_l} ∈ {η, η^{i_1},...,η^{i_1,...,i_{l−1}}} } and F_2(N) = {1,...,N}^{N^γ t} \ F_1(N). If i' = (i_1,...,i_{N^γ t}), then

Ē( P(Θ > N^γ t) ) = (1/N^{N^γ t}) Σ_{i'∈F_1(N)} Ē( 1_{{η∉M}} 1_{{η^{i_1}∉M}} ⋯ 1_{{η^{i_1,...,i_{N^γ t}}∉M}} ) + (1/N^{N^γ t}) Σ_{i'∈F_2(N)} Ē( 1_{{η∉M}} 1_{{η^{i_1}∉M}} ⋯ 1_{{η^{i_1,...,i_{N^γ t}}∉M}} ),

in which the first sum is at most |F_1(N)| and the second equals |F_2(N)| (1 − 1/N^γ)^{N^γ t + 1}. Applying Corollary 1,

lim_{N→∞} |F_2(N)| / N^{N^γ t} = 1 and lim_{N→∞} |F_1(N)| / N^{N^γ t} = 0.

Therefore,

lim_{N→∞} Ē( P(Θ > N^γ t) ) = lim_{N→∞} (1 − 1/N^γ)^{N^γ t + 1} = e^{−t}.
Proof of Theorem 3: Applying Chebyshev's inequality,

P̄( |P(Θ > N^γ t) − e^{−t}| > ε ) ≤ (1/ε^2) [ Ē( P(Θ > N^γ t)^2 ) − 2e^{−t} Ē( P(Θ > N^γ t) ) + e^{−2t} ].

For a given ω̄ ∈ Ω̄,

P(Θ > N^γ t)^2 = { 1_{{η∉M}} (1/N) Σ_{i_1=1}^N ( 1_{{η^{i_1}∉M}} ⋯ (1/N) Σ_{i_{N^γ t}=1}^N 1_{{η^{i_1,...,i_{N^γ t}}∉M}} ) } × { 1_{{η∉M}} (1/N) Σ_{i_1^*=1}^N ( 1_{{η^{i_1^*}∉M}} ⋯ (1/N) Σ_{i_{N^γ t}^*=1}^N 1_{{η^{i_1^*,...,i_{N^γ t}^*}∉M}} ) }.

Let G^* = {η^{i_1^*},...,η^{i_1^*,...,i_{N^γ t}^*}}. By Corollary 1, lim_{N→∞} ( |G^*| − N^γ t ) = 0. On the other hand, for a given {i_1^*,...,i_{N^γ t}^*}, by Proposition 1,

lim_{N→∞} P( {η, η^{i_1},...,η^{i_1,...,i_{N^γ t}}} ∩ {η^{i_1^*},...,η^{i_1^*,...,i_{N^γ t}^*}} ≠ ∅ ) = 0.

Therefore,

Ē( P(Θ > N^γ t)^2 ) = (1 − 1/N^γ) (1 − 1/N^γ)^{2|G^*|} + o(N).

As N → ∞, lim_{N→∞} Ē( P(Θ > N^γ t)^2 ) = e^{−2t}. Thus, by Proposition 4, lim_{N→∞} P̄( |P(Θ > N^γ t) − e^{−t}| > ε ) = 0.
Acknowledgements We would like to thank Antonio Galves for his guidance on the master's thesis [10], on which this paper is based.
References

[1] Marzio Cassandro, Antonio Galves, Enzo Olivieri, and Maria Eulália Vares, Metastable behavior of stochastic dynamics: a pathwise approach, Journal of Statistical Physics 35 (1984), no. 5-6, 603–634.

[2] Marzio Cassandro, Antonio Galves, and Pierre Picco, Dynamical phase transitions in disordered systems: the study of a random walk model, Annales de l'IHP Physique théorique, vol. 55, 1991, pp. 689–705.

[3] Colin Cooper and Alan Frieze, A note on the vacant set of random walks on the hypercube and other regular graphs of high degree, Moscow Journal of Combinatorics and Number Theory 4 (2014), no. 4, 403–426.

[4] D. W. Crowe, The n-dimensional cube and the Tower of Hanoi, The American Mathematical Monthly 63 (1956), no. 1, 29–30.

[5] Persi Diaconis, Ronald L. Graham, and John A. Morrison, Asymptotic analysis of a random walk on a hypercube with many dimensions, Random Structures and Algorithms 1 (1990), no. 1, 51–72.

[6] Edgar N. Gilbert, Gray codes and paths on the n-cube, Bell System Technical Journal 37 (1958), no. 3, 815–826.

[7] G. Letac and L. Takács, Random walks on an m-dimensional cube, Journal für die reine und angewandte Mathematik 310 (1979), 187–195.

[8] Peter Matthews, Mixing rates for a random walk on the cube, SIAM Journal on Algebraic Discrete Methods 8 (1987), no. 4, 746–752.

[9] Peter Matthews, Some sample path properties of a random walk on the cube, Journal of Theoretical Probability 2 (1989), no. 1, 129–146.

[10] Cláudia Monteiro Peixoto, Aproximação do Equilíbrio e Tempos Exponenciais para o Passeio Aleatório no Hipercubo, Master's thesis, Instituto de Matemática e Estatística da Universidade de São Paulo, São Paulo, Brasil, 1992.

[11] Benedetto Scoppola, Exact solution for a class of random walk on the hypercube, Journal of Statistical Physics 143 (2011), no. 3, 413–419.

[12] Michael Voit, Asymptotic distributions for the Ehrenfest urn and related random walks, Journal of Applied Probability 33 (1996), no. 2, 340–356.
[13] Stanislav Volkov and Timothy Wong, A note on random walks in a hypercube, Pi Mu Epsilon Journal (2008), 551–557.