Diffusive limit of asymmetric simple exclusion

R. Esposito^1, R. Marra^2, H. T. Yau^3

^1 Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, 00133 Roma, Italy. [email protected]
^2 Dipartimento di Fisica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, 00133 Roma, Italy. [email protected]
^3 Courant Institute of Mathematical Sciences, New York University, NY, NY 10012. [email protected]

Dedicated to Elliott Lieb for his sixtieth birthday

Abstract. We consider the asymmetric simple exclusion process on a cube of the lattice $\mathbb{Z}^d$ with periodic boundary conditions, for $d \ge 3$, in the diffusive space-time scaling with parameter $\varepsilon$. Assume the initial state is a product of Bernoulli measures whose density differs by order $\varepsilon$ from a fixed reference constant density $\rho$. We prove that the density at time $t$ is given to first order by $\rho - \varepsilon m(x - \varepsilon^{-1} v t, t)$, with $v$ a uniform velocity depending on $\rho$ and on the dynamics, and $m(z,t)$ satisfying the $d$-dimensional viscous Burgers equation. The diffusion matrix is given by a variational formula related to the Green-Kubo formula and is strictly bigger than the diffusion matrix of the corresponding symmetric exclusion process.

^1 Work partially supported by MURST and CNR-GNFM.
^2 Work partially supported by MURST, CNR-GNFM and INFN.
^3 Work partially supported by US-NSF Grant n. DMS 9101196, an A. P. Sloan Foundation Fellowship and a David and Lucile Packard Foundation Fellowship.


1 Introduction.

The macroscopic behavior of a large particle system is usually described in terms of the limiting behavior of some special local observables under a suitable space-time scaling. In the context of the hydrodynamical limit, these observables are the locally conserved quantities of the dynamics. The system is assumed to be initially distributed according to a probability measure characterized by a slowly varying Gibbs state. Let $\varepsilon$ represent the scale separation between the micro and macro space scales. Looking at the time evolution of the conserved local observables of the system on times of order $\varepsilon^{-1}$ corresponds to the hyperbolic (Euler) scaling, while the diffusive (Navier-Stokes) scaling corresponds to times of order $\varepsilon^{-2}$. Rigorous results on the hydrodynamical limit are available for a wide range of models, though mostly for lattice systems. We refer to [1] and [2] for a review of the known results. One of the simplest models of interacting particle systems is the simple exclusion process (SEP). It is a particle model on the lattice $\mathbb{Z}^d$ with a hard-core interaction which prevents the presence of more than one particle per site. Particles jump at random from a site to nearest neighbors according to the following rule: a particle at the site $x$, independently of the others, waits for an exponential time, then jumps with probability proportional to $p_e \ge 0$ to the site $x+e$ ($e$ being a vector of length 1 on the lattice) provided that the site $x+e$ is empty; otherwise it stays at $x$ and waits a new exponential time. We denote by $L$ the generator of such a process. The case $p_e = p_{-e}$ for all $e$ (symmetric simple exclusion process) is particularly simple and its diffusive limit is governed by the linear heat equation. To a certain extent, this model is exactly solvable. The more interesting case is the nonsymmetric one, $p_e \ne p_{-e}$, called the asymmetric simple exclusion process (ASEP). For this system the product measure is still invariant, although the dynamics is not reversible with respect to it. To understand the macroscopic behavior of the system, we start with the hyperbolic scaling. Assume that the initial profile is a product measure with density $\rho_0(\varepsilon x) \in (0,1)$ slowly varying from site to site. Then the macroscopic particle density at time $\varepsilon^{-1}t$, $\rho^\varepsilon(z,t)$ (as a function of the macroscopic position and time), in the limit $\varepsilon \to 0$, satisfies (see [1] for references) the inviscid Burgers equation

$$\partial_t \rho + \gamma \cdot \nabla_z [\rho(1-\rho)] = 0, \qquad (1.1)$$

where $\gamma$ is the vector with components $\gamma_e = p_e - p_{-e}$, $e$ being the unit vectors in the positive coordinate directions. To get an idea of the behavior in the diffusive scaling at the heuristic level, it is convenient to look at the first-order diffusive corrections in $\varepsilon$ to eq. (1.1). One expects that in the diffusive scaling this term becomes of order 1. On the other hand, the macroscopic current $J = \gamma\rho(1-\rho)$ in general carries an extra factor $\varepsilon^{-1}$ in the diffusive scaling, and it is difficult to make a conjecture on the scaling limit. If $\gamma$ is of order $\varepsilon$, which corresponds to the weakly asymmetric simple exclusion process (WASEP), the macroscopic current $J$ becomes of order $\varepsilon$ and the limiting equation can be proven to be just a diffusion with a (nonlinear) drift (see [3], [4]). This modification, however, does not manifest the asymmetric nature of the problem, and the diffusion constant so obtained is the same as the symmetric one. In fact, both in [3] and in [4] the asymmetric part of the dynamics is treated as a small perturbation of the symmetric part, and both methods rely heavily on this assumption. In this paper we consider the ASEP with $\gamma$ of order 1, but we analyze a different setting which makes the macroscopic current $J$ of order $\varepsilon$. Suppose that the initial density $\rho_0(\varepsilon x)$ is of the form $\rho_0(\varepsilon x) = \rho - \varepsilon u_0(\varepsilon x)$ with $\rho \in (0,1)$. Then, assuming that this form of the density persists at time $t$, the macroscopic current would be

$$J = \gamma\rho(1-\rho) - \varepsilon v u - \varepsilon^2 \gamma u^2, \qquad v = \gamma(1-2\rho).$$
Therefore the equation for the fluctuation of the density would become
$$\partial_t u + \varepsilon^{-1} v \cdot \nabla_z u + \gamma \cdot \nabla_z u^2 = \sum_{i,j=1}^{d} D_{i,j}\, \frac{\partial^2 u}{\partial z_i \partial z_j} \qquad (1.2)$$
with $D_{i,j}$ some diffusion matrix. The global Galilean transformation
$$m(z,t) = u(z + \varepsilon^{-1} v t,\, t)$$
allows one to remove the diverging term, and the equation for $m(z,t)$ is the viscous Burgers equation
$$\partial_t m + \gamma \cdot \nabla_z m^2 = \sum_{i,j=1}^{d} D_{i,j}\, \frac{\partial^2 m}{\partial z_i \partial z_j}. \qquad (1.3)$$
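For orientation, the following is a minimal finite-difference sketch of the viscous Burgers equation (1.3) in one space dimension with periodic boundary conditions. The values of gamma and D below are illustrative assumptions, not the coefficients produced by the particle model.

```python
import numpy as np

# Explicit finite-difference sketch of (1.3) in 1D with periodic boundary conditions.
N, gamma, D = 256, 0.8, 0.5                  # grid size and illustrative coefficients
dz = 1.0 / N
dt = 0.2 * dz ** 2 / D                       # stability-motivated time step
z = np.arange(N) * dz
m = np.sin(2 * np.pi * z)                    # smooth initial fluctuation profile u_0

def rhs(m):
    m_sq = m ** 2
    grad_msq = (np.roll(m_sq, -1) - np.roll(m_sq, 1)) / (2 * dz)   # centred d(m^2)/dz
    lap = (np.roll(m, -1) - 2 * m + np.roll(m, 1)) / dz ** 2       # centred second derivative
    return -gamma * grad_msq + D * lap

for _ in range(2000):
    m = m + dt * rhs(m)                      # forward Euler step
print(m.min(), m.max())
```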

Note that in the case $\rho = \tfrac12$ no Galilean transformation is necessary because $v = 0$. The rest of this paper is devoted to proving (1.3) under quite general technical assumptions, in dimension strictly bigger than 2, with periodic boundary conditions. The periodic boundary condition is assumed for simplicity and we believe that it can be removed by using a Fritz-type argument [5, 6]. Our methods work for a general class of models; in particular, the jumps need not be nearest neighbor. The restriction on the dimension, however, is intrinsic, since in this scaling the diffusion coefficient is expected to be infinite for $d = 1$ or $2$. It was conjectured that the correct scaling for $d = 1$ is $\varepsilon^{-3/2}$, while for $d = 2$ there is a logarithmic correction to $\varepsilon^{-2}$. See [1] for a review and references therein. Compared with previous results [3, 4], our result differs in three major ways. First of all, our dynamics is fully asymmetric and nonreversible. Furthermore, the diffusion coefficient, which is 1 in WASEP, is in our case given by a Green-Kubo formula depending on the (global) density $\rho$ in a complicated manner. Strictly speaking, there is no proof that the typical expression of the Green-Kubo formula is mathematically well-defined in this case, nor does our result establish this rigorously. We do, however, give a variational representation of the diffusion coefficient which is heuristically equivalent to the Green-Kubo formula. Finally, the WASEP has a nontrivial behavior only on the diffusive time scale, while in ASEP the diffusive scale is beyond the usual hydrodynamical time scale (namely the Euler scale). In other words, we are looking at the long time behavior of a system with a nontrivial Euler limit. Admittedly, by restricting the initial data to be a small perturbation of the global equilibrium, our setting is very special from this point of view. It nevertheless is the natural setting for the incompressible Navier-Stokes equation. The last remark is indeed the key point of this paper, and it relates to the important question of the Navier-Stokes corrections to the Euler equations for the density, velocity and temperature of a fluid. In that case the analysis of the diffusive time scaling is made difficult by the presence of a transport term of order $\varepsilon^{-1}$, like the macroscopic current in the present situation. The incompressible approximation, which means considering density, velocity and temperature constant up to terms of order $\varepsilon$ and looking at the equations for the corrections, has a well-defined limit as $\varepsilon$ goes to 0. It is actually this feature that makes it possible to prove that the incompressible Navier-Stokes equations are the diffusive scaling limit of the

Boltzmann equation (see [7], [8]). The same scaling argument can be used to obtain the incompressible Navier-Stokes equations formally from the Newton equations [9]. It should be emphasized that the diffusion coefficient (matrix) for the asymmetric process is strictly bigger than the corresponding one for the symmetric process. Since the diffusion corresponding to the symmetric part is due to the stochasticity of the dynamics, to a certain extent the diffusivity corresponding to the "deterministic" part is measured by the difference of the two diffusion coefficients. Therefore, it is important to prove that this difference is strictly positive. Our method is certainly far from being able to deal with purely deterministic systems. It nevertheless establishes the fact that the asymmetric part of the dynamics does enhance the diffusivity. The strategy of the proof is the following. The main technical tools are a modification of the relative entropy method proposed in [10] (see also [11]), the method of Varadhan [12, 13, 14] for nongradient systems, and some multiscale analysis. The general philosophy of the relative entropy method is to choose, as reference measure, the local equilibrium with parameter slowly varying in space, suitably chosen so as to reproduce the correct density in the limit. In our case the local equilibrium (for the system in the cube of size $\varepsilon^{-1}$ with periodic boundary conditions) is just a product measure with chemical potential $\lambda + \varepsilon\lambda(\varepsilon x - \varepsilon^{-1}vt, t)$, where $\rho(1-\rho)\lambda(z,t) = -m(z,t)$ and $m$ is the solution of (1.3) with a diffusion matrix to be determined. The specific entropy of this measure, whose density is denoted by $\psi_t^\varepsilon$, with respect to the global equilibrium is of order $\varepsilon^2$. This fixes the typical size of the relative entropies. Our main result states that the specific entropy of the nonequilibrium distribution $f_t^\varepsilon$ relative to $\psi_t^\varepsilon$ vanishes in the limit:
$$\lim_{\varepsilon\to 0} \varepsilon^{-2} s(f_t^\varepsilon \mid \psi_t^\varepsilon) = 0, \qquad (1.4)$$
provided that $\psi_0^\varepsilon$ is the initial measure. The bound (1.4) correctly pins down the chemical potential $\lambda$ and, together with the entropy inequality, implies a law of large numbers for the empirical density, whose limit satisfies (1.3). As usual, the relative entropy method requires the smoothness of the solution of the hydrodynamic equation, which, in the case of the viscous Burgers equation, is assured globally in time. The above outlines the standard procedure for applying the relative entropy method.

In our case, it encounters significant difficulties because the system is in the "wrong" scaling. To overcome these, we are led to pick up the correct second-order correction, which can be achieved by adding a suitable non-product correction $\varepsilon^2\Omega$, with $\Omega$ to be determined. We emphasize that such a correction changes the entropy only at orders higher than $\varepsilon^2$. Such a correction is added purely for the purpose of proving (1.4). The choice of $\Omega$ is dictated by the method of [12] for nongradient systems, which, in its original form [12, 13], applies only to diffusive and reversible systems. Recently it was extended to nonreversible systems in [14], which considered the asymmetric simple exclusion process with a mean-zero condition, i.e. the average (signed) distance a particle can jump is zero. This approach, however, cannot be applied to our setting because, among other things, it is dimension independent. Our method relies strongly on multiscale analysis and on the logarithmic Sobolev inequality (for the corresponding symmetric process). It is far more general than [14] but is restricted to $d \ge 3$. Finally we remark that one should be able to prove (1.4) via the standard "one-block" and "two-block" estimates and "tightness" [12, 13, 14], using the multiscale estimates in Sects. 5 and 6. This would allow us to treat more general initial states. We choose the relative entropy method because it is somewhat shorter and it provides an example for which one can pin down the next-order correction to the local Gibbs state. The rest of the paper is organized as follows. In Sect. 2 we state our main result. Sect. 3 is devoted to some simple large deviation bounds for local Gibbs states to be used later. The main part of the relative entropy estimate is in Sect. 4. In Sect. 5 we identify the diffusion matrix via a variational formula. Finally, the multiscale estimates are proved in Sect. 6.

2 Statement of the problem and results.

Suppose $\Lambda_L \subset \mathbb{Z}^d$ is a cubic sublattice of size $(2L+1)$. Let $d\nu_{\lambda,L}$ denote the product measure on $\Lambda_L$ with chemical potential $\lambda$, i.e.
$$d\nu_{\lambda,L} = Z_\lambda^{-1} \exp\Big[\lambda \sum_{x\in\Lambda_L} \eta_x\Big], \qquad \eta_x \in \{0,1\}, \qquad (2.1)$$

where $Z_\lambda$ is the normalization. Let $\rho$ be the equilibrium density, namely
$$\rho := \langle \eta \rangle_\lambda = \frac{\sum_{\eta=0,1} \eta\, e^{\lambda\eta}}{\sum_{\eta=0,1} e^{\lambda\eta}} = \frac{e^\lambda}{1+e^\lambda}.$$
We assume $0 < \rho < 1$. We define the generator $L$ of the simple exclusion process by $Lf = \sum_b L_b f$, with the sum running over the set of all oriented bonds $b = (x,y)$ in $\mathbb{Z}^d$ such that $y - x = e$, $e$ being one of the unit vectors in the positive coordinate directions (for $d = 3$, $e \in \{(1,0,0), (0,1,0), (0,0,1)\}$). Here $L_b$ is defined by
$$L_b f = p_e\, \eta_x\, [f(\eta^b) - f(\eta)], \qquad (2.2)$$
where $\eta^b$ denotes the configuration obtained from $\eta$ by exchanging the occupation numbers at the endpoints of the bond $b = (x,y)$:
$$(\eta^b)_z = \begin{cases} \eta_y & \text{if } z = x,\\ \eta_x & \text{if } z = y,\\ \eta_z & \text{otherwise.} \end{cases}$$
In the sequel $\sum_{e>0}$ means the sum over the positive unit vectors $e$. Let $f_t$ be the density of the microscopic dynamics relative to $d\nu_{\lambda,L}$. Then $f_t$ satisfies the forward equation
$$\partial_t f_t = \varepsilon^{-2} L^* f_t.$$
Here $\varepsilon = (2L+1)^{-1}$ and the adjoint $L^*$ is defined by
$$L^* f = \sum_b L_b^* f, \qquad L_b^* f = p_{-e}\, \eta_x\, [f(\eta^b) - f(\eta)].$$
We shall assume periodic boundary conditions on $\Lambda_L$, so that $L^*$ is really the adjoint of $L$. The initial data $f_{t=0} = f_0$ is chosen as
$$f_0 = \frac{1}{Z} \exp\Big\{\varepsilon \sum_x \lambda(\varepsilon x)\, \eta_x\Big\}, \qquad (2.3)$$
where $\lambda$ is a smooth periodic function and $Z$ is the normalization. Recall that the specific entropy of two densities $f$ and $g$ relative to $d\nu_\lambda$ is defined by
$$s(f\mid g) = \varepsilon^d \int f \log\frac{f}{g}\, d\nu_\lambda. \qquad (2.4)$$
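To make the dynamics concrete, here is a minimal simulation sketch of the exclusion process generated by (2.2) on a small periodic cube, using uniformized exponential clocks. The jump rates below are hypothetical placeholders (the paper only requires $p_e \ge 0$); the sketch is an illustration and plays no role in the proofs.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, rho = 3, 8, 0.3
side = 2 * L + 1                                   # periodic cube of side 2L+1, eps = 1/(2L+1)
p = [(0.7, 0.3), (0.5, 0.5), (0.6, 0.4)]           # hypothetical (p_e, p_{-e}) per direction

eta = (rng.random((side,) * d) < rho).astype(np.int8)   # Bernoulli(rho) initial configuration
rates = np.array([pe + pme for pe, pme in p])
total_rate = eta.size * rates.sum()                # uniformized total attempt rate

def step(eta, t):
    """One attempted jump of the continuous-time ASEP with periodic boundary conditions.
    An attempt at site x in direction +e_i (resp. -e_i) occurs with rate p_e (resp. p_{-e});
    the jump is suppressed (hard-core exclusion) unless x is occupied and the target is empty."""
    t += rng.exponential(1.0 / total_rate)
    x = tuple(rng.integers(0, side, size=d))       # uniformly chosen site
    i = rng.choice(d, p=rates / rates.sum())       # direction of the attempt
    forward = rng.random() < p[i][0] / rates[i]
    y = list(x)
    y[i] = (y[i] + (1 if forward else -1)) % side
    y = tuple(y)
    if eta[x] == 1 and eta[y] == 0:
        eta[x], eta[y] = 0, 1
    return t

t = 0.0
for _ in range(20000):
    t = step(eta, t)
print(t, eta.mean())        # the total particle number, hence the mean density, is conserved
```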

Let $v = (v_e)_{e>0}$, $v_e = (1-2\rho)\gamma_e \equiv (1-2\rho)(p_e - p_{-e})$. Our main result is the following:

Theorem 2.1. Let $f_t$ be the density at time $t$ relative to $d\nu_\lambda$ with initial data $f_0$. Let $\psi_t$ be defined by
$$\psi_t = \frac{1}{Z(t)} \exp\Big\{\varepsilon \sum_x \lambda(\varepsilon x - \varepsilon^{-1} v t,\, t)\, \eta_x\Big\}, \qquad (2.5)$$
where $Z(t)$ is the normalization (w.r.t. $d\nu_\lambda$) and $\lambda$ satisfies the equation
$$\frac{\partial \lambda}{\partial t}(z,t) - \rho(1-\rho) \sum_{e>0} \gamma_e\, (\partial_e \lambda^2)(z,t) = \sum_{e>0}\sum_{e'>0} D_{ee'}\, (\partial_e \partial_{e'} \lambda)(z,t). \qquad (2.6)$$
Here $\partial_e \lambda(z) = \partial\lambda(z)/\partial z_e$ and $0 < D < \infty$ (in the matrix sense), with $D$ given by Theorem 5.10. Then, for $d \ge 3$,
$$\lim_{\varepsilon\to 0} \varepsilon^{-2} s(f_t \mid \psi_t) = 0. \qquad (2.7)$$

The proof of this theorem is given in Sect. 4.

Remark. The factor $\rho(1-\rho)$ is due to the relation between the density and the chemical potential, as explained in the following. To understand the meaning of (2.7), let us first recall some definitions and elementary facts. Let $P(h)$ denote the pressure defined by
$$P(h) = \log \sum_{\eta=0,1} e^{h\eta} = \log(e^h + 1). \qquad (2.8)$$
Define the entropy by the Legendre transformation
$$s(\rho) = \sup_{\lambda\in\mathbb{R}} [\lambda\rho - P(\lambda)].$$
The density $\rho$ and the chemical potential $h$ are related by
$$\rho = P'(\lambda) = e^\lambda (e^\lambda + 1)^{-1} = (1 + e^{-\lambda})^{-1}, \qquad h = \log\big(\rho/(1-\rho)\big).$$
In particular, if $h = \lambda + \varepsilon\lambda'$ then the corresponding density is
$$P'(h) = P'(\lambda) + \varepsilon\lambda' P''(\lambda) + O(\varepsilon^2) = \rho + \varepsilon\lambda'\rho(1-\rho) + O(\varepsilon^2). \qquad (2.9)$$
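The relation (2.9) can be checked directly; the short numerical sketch below, with an arbitrary value of the chemical potential, compares finite-difference derivatives of the pressure (2.8) with $\rho$ and $\rho(1-\rho)$.

```python
import numpy as np

def P(h):
    """Pressure of the one-site measure, P(h) = log(e^h + 1), as in (2.8)."""
    return np.log(np.exp(h) + 1.0)

lam, h = 0.7, 1e-4                                   # arbitrary chemical potential, step size
rho = 1.0 / (1.0 + np.exp(-lam))                     # rho = P'(lam)
P1 = (P(lam + h) - P(lam - h)) / (2 * h)             # numerical first derivative
P2 = (P(lam + h) - 2 * P(lam) + P(lam - h)) / h**2   # numerical second derivative
print(P1, rho)                 # P'(lam)  = rho
print(P2, rho * (1 - rho))     # P''(lam) = rho(1-rho), the factor entering (2.9) and (2.6)
```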

Let $\psi$ be a local Gibbs state characterized by a smooth function $\lambda'$ as
$$\psi = Z_{\lambda'}^{-1} \exp\Big[\varepsilon \sum_x \lambda'(\varepsilon x)\, \eta_x\Big], \qquad (2.10)$$
$$\log Z_{\lambda'} = \log \int \exp\Big[\varepsilon \sum_x \lambda'(\varepsilon x)\, \eta_x\Big]\, d\nu_\lambda = \sum_x \big[P(\lambda + \varepsilon\lambda'(\varepsilon x)) - P(\lambda)\big]. \qquad (2.11)$$
Here and for the rest of this section, all summations are over $x \in \Lambda_L$. One can easily check that
$$s(\psi) = s(\psi\mid 1) = \varepsilon^d E^\psi[\log\psi] = \varepsilon^d \sum_x \big\{ \varepsilon E^\psi[\lambda'(\varepsilon x)\,\eta_x] - P(\lambda + \varepsilon\lambda'(\varepsilon x)) + P(\lambda) \big\} \le \mathrm{const.}\;\varepsilon^2.$$
In order to pin down the function $\lambda'$, one needs to know the entropy beyond order $\varepsilon^2$. Indeed, one has the following lemma.

Lemma 2.2. Let $f$ be a density satisfying
$$\lim_{\varepsilon\to 0} \varepsilon^{-2} s(f\mid\psi) = 0.$$
Then for any smooth function $J$ and bounded local function $F$ one has
$$\lim_{\varepsilon\to 0}\, [E^f - E^\psi]\Big[\varepsilon^{d-1} \sum_x J(\varepsilon x)\, \tau_x F\Big] = 0.$$
If, instead of $\varepsilon^{-2} s(f\mid\psi) \to 0$, one only has $\varepsilon^{-2} s(f\mid\psi) \le \mathrm{const.}$, then the following bound holds:
$$\Big|[E^f - E^\psi]\Big[\varepsilon^{d-1} \sum_x J(\varepsilon x)\, \tau_x F\Big]\Big| \le \mathrm{const.}$$
In particular, $\psi$ can be the equilibrium measure $d\nu_\lambda$.

Proof: We use the entropy inequality
$$E^f[X] \le \alpha^{-1} \varepsilon^{-d} s(f\mid\psi) + \alpha^{-1} \log E^\psi[\exp\{\alpha X\}] \qquad (2.12)$$
for any positive $\alpha$ and any random variable $X$. Let $X = \varepsilon^{d-1} \sum_x J(\varepsilon x)\,\tau_x F$ and $\alpha = q\varepsilon^{2-d}$. We claim that, for such choices of $\alpha$ and $X$, one has, for any fixed $q$,
$$\lim_{\varepsilon\to 0} \alpha^{-1} \log E^\psi\big[\exp\big\{\alpha[X - E^\psi\{X\}]\big\}\big] \le \mathrm{const.}\; q. \qquad (2.13)$$
To prove this, suppose $F$ is measurable w.r.t. the $\sigma$-algebra of a cube of size $\ell$. Let us label $x \in \Lambda_L$ by $(2\ell+1)\alpha + \beta$ with $|\beta| \le \ell$ and $\alpha \in \Lambda_{L/(2\ell+1)}$. Hence we can write $X$ as
$$X = \mathrm{Av}_\beta\, (2\ell+1)^d\, \varepsilon^{d-1} \sum_\alpha J\big[\varepsilon\{(2\ell+1)\alpha + \beta\}\big]\, \tau_{(2\ell+1)\alpha + \beta} F, \qquad (2.14)$$
where $\mathrm{Av}_\beta = (2\ell+1)^{-d} \sum_{|\beta|\le\ell}$. Without loss of generality one can assume $E^\psi[X] = 0$. Now, by the Jensen inequality (applied to $u \to \log E[e^u]$),
$$\alpha^{-1} \log E^\psi[\exp\{\alpha X\}] \le \alpha^{-1}\, \mathrm{Av}_\beta\, (2\ell+1)^d \log E^\psi\Big[\exp\Big\{ q\varepsilon \sum_\alpha J\big[\varepsilon\{(2\ell+1)\alpha+\beta\}\big]\, \tau_{(2\ell+1)\alpha+\beta} F \Big\}\Big]. \qquad (2.15)$$
Since $\psi$ is a product measure, the expectation factors into a product. Therefore, it suffices to prove that for any fixed $q$
$$\lim_{\varepsilon\to 0}\, q\varepsilon^{-2} \big( \log E^\psi[\exp\{q\varepsilon F\}] - q\varepsilon\, E^\psi[F] \big) \le \mathrm{const.}\; q. \qquad (2.17)$$
One can check (2.17) easily by expanding the exponential up to second order in $\varepsilon$. To prove the first part of Lemma 2.2, one takes $\varepsilon \to 0$ and then $q \to 0$. This proves the upper bound. To obtain the lower bound, one simply switches $X$ to $-X$. For the second part, one fixes $q$ to be some small constant instead of letting $q \to 0$. This concludes Lemma 2.2.
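As a quick sanity check of the entropy inequality used in (2.12) (written here without the $\varepsilon^d$ normalization of the specific entropy), one can verify the bound $E^f[X] \le \alpha^{-1}\{H(f\mid\psi) + \log E^\psi[e^{\alpha X}]\}$ on a toy finite state space; all the numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 6, 0.7                         # toy state space size and an arbitrary alpha > 0
psi = rng.random(n); psi /= psi.sum()     # reference probability vector
f = rng.random(n);   f /= f.sum()         # another probability vector
X = rng.normal(size=n)                    # an arbitrary observable

H = np.sum(f * np.log(f / psi))                                   # relative entropy H(f | psi)
lhs = np.sum(f * X)                                               # E^f[X]
rhs = (H + np.log(np.sum(psi * np.exp(alpha * X)))) / alpha       # entropy-inequality bound
print(lhs, rhs, lhs <= rhs + 1e-12)
```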

Corollary 2.3. Let $\rho^\varepsilon(z,t)$ be the empirical density
$$\rho^\varepsilon(z,t) = \varepsilon^{d-1} \sum_x \delta(z - \varepsilon x)\,(\eta_x - \rho).$$
Then, for $d \ge 3$, the empirical density $\rho^\varepsilon(z - \varepsilon^{-1}vt, t)$ converges weakly in probability to $m(z,t)$, with $m$ solving the equation
$$\frac{\partial m}{\partial t}(z,t) + \sum_{e>0} \gamma_e\, \partial_e\big(m(z,t)^2\big) = \sum_{e,e'>0} D_{ee'}\, (\partial_e \partial_{e'} m)(z,t). \qquad (2.18)$$
Proof: From Theorem 2.1 and Lemma 2.2 it is enough to check the convergence with respect to the local equilibrium $\psi_t$. To prove this claim, recall the entropy bound [16, 11]
$$P^f[A] \le \frac{\log 2 + \varepsilon^{-d} s(f\mid\psi)}{\log(1 + 1/P^\psi[A])}. \qquad (2.19)$$
Let $A$ be the set $A = \{\, |\varepsilon^{d-1}\sum_x J(\varepsilon x)(\eta_x - \rho + \varepsilon m(\varepsilon x - \varepsilon^{-1}vt, t))| > a \,\}$. From (2.13) and the Chebyshev inequality, $P^\psi[A]$ is bounded by $P^\psi[A] \le \exp\{-(a - \mathrm{const.}\, q)\,\alpha\}$, with $\alpha$ as in the proof of Lemma 2.2. Choose $q$ small enough so that $a - \mathrm{const.}\, q > a/2$. From (2.7), (2.19) and this bound it follows that $\lim_{\varepsilon\to 0} P^f[A] = 0$ for any choice of $a$, provided $d > 2$. This concludes Corollary 2.3.
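For intuition, here is a small numerical sketch of the empirical fluctuation field tested against a smooth function $J$, computed for a product configuration with a slowly varying density profile; the profile, the test function and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, rho, L = 3, 0.3, 40
eps = 1.0 / (2 * L + 1)
side = 2 * L + 1

# A configuration sampled from a slowly varying product profile rho - eps*m0(eps*x)
# (a hypothetical initial state of the type considered in the paper).
coords = np.indices((side,) * d)
z = eps * coords                                     # macroscopic positions in [0,1)^d
m0 = np.sin(2 * np.pi * z[0])                        # assumed smooth fluctuation profile
eta = (rng.random((side,) * d) < rho - eps * m0).astype(float)

def empirical_pairing(J_values, eta):
    """eps^{d-1} * sum_x J(eps x) (eta_x - rho): the empirical fluctuation field tested against J."""
    return eps ** (d - 1) * np.sum(J_values * (eta - rho))

J = np.sin(2 * np.pi * z[0])                         # a smooth periodic test function
print(empirical_pairing(J, eta))
# For this product state the pairing concentrates, as eps -> 0, around
# -integral J(z) m0(z) dz, which equals -1/2 for the choices above.
```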


3 Large Deviations Estimate.

In this section we prove a simple result concerning large deviations of local Gibbs states. Most of the arguments are standard [10, 17] and we only sketch a few key steps. Let $\bar\eta_{x,k}$ be the average of $\eta$ over a large block of size $2k+1$ centered at $x$,
$$\bar\eta_{x,k} = (2k+1)^{-d} \sum_{|x-y|\le k} \eta_y, \qquad (3.1)$$
where $|x| = \max_{i=1,\dots,d} |x_i|$. For the rest of this section, $k = \ell\varepsilon^{-2/d}$ with $\ell$ a large constant independent of $\varepsilon$.

Lemma 3.1. Let $\psi$ be the density given by (2.10) and let
$$\chi_{x,k} = E^\psi[\bar\eta_{x,k};\, \bar\eta_{x,k}] \equiv E^\psi[(\bar\eta_{x,k} - m_{x,k})^2],$$
where $E[A;B] = E[AB] - E[A]E[B]$ and $m_{x,k}$ is defined by
$$m_{x,k} = E^\psi[\bar\eta_{x,k}]. \qquad (3.2)$$

Then, for $k = \ell\varepsilon^{-2/d}$ and for any $q < q_0$, with $q_0$ a small fixed constant,
$$\lim_{\ell\to\infty}\lim_{\varepsilon\to 0} \varepsilon^{d-2} \log E^\psi\Big[\exp\Big\{ q \sum_x \big[(\bar\eta_{x,k} - m_{x,k})^2 - \chi_{x,k}\big] \Big\}\Big] = 0. \qquad (3.3)$$
In general, if $G_x$ is a family of bounded smooth functions such that
$$G_x(m_{x,k}) = \frac{\partial G_x}{\partial y}\Big|_{y=m_{x,k}} = 0, \qquad (3.4)$$
then for any smooth function $J$
$$\lim_{\ell\to\infty}\lim_{\varepsilon\to 0} \varepsilon^{d-2} \log E^\psi\Big[\exp\Big\{ \sum_x J(\varepsilon x)\,\big\{G_x(\bar\eta_{x,k}) - E^\psi[G_x(\bar\eta_{x,k})]\big\} \Big\}\Big] = 0. \qquad (3.5)$$
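To illustrate the objects in Lemma 3.1, the following sketch computes the block averages for a product (grand canonical) configuration and compares their empirical variance with $\rho(1-\rho)/(2k+1)^d$. It uses $d = 2$ and small sizes purely to keep the toy fast, and is not tied to the regime $k = \ell\varepsilon^{-2/d}$ of the lemma.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
d, rho, k, Lbox = 2, 0.4, 4, 256
eta = (rng.random((Lbox,) * d) < rho).astype(float)     # Bernoulli(rho) configuration

# Block average: mean of eta over the cube of side 2k+1 centred at each x, periodic wrapping.
eta_bar = uniform_filter(eta, size=2 * k + 1, mode="wrap")

m = eta_bar.mean()                          # estimates m_{x,k} = E[eta_bar_{x,k}] = rho
chi = ((eta_bar - m) ** 2).mean()           # estimates chi_{x,k} = Var(eta_bar_{x,k})
print(m, chi, rho * (1 - rho) / (2 * k + 1) ** d)
```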

Proof. Since jG(u)j  const:(u ? m)2 for G satisfying (3.4), we only have to prove the rst part of the Lemma. This lemma was proved in [10, 17] in a slightly di erent context. Divide L into non overlapping cubes of size 2k + 1 and label them by . We shall use  to denote the center of the cube, too. Hence

(x;k ? mx;k )2 =

X

x

=

X X

(+j;k ? m+j;k )2

jj jk  X Avjjjk(2k + 1)d (+j;k ? m+j;k )2 

11

By Jensen inequality for the function log E [exp(u)], the left hand side of (3.3) is bounded by #) ( " o Xn 2 d d ? 2 (+j;k ? m+j;k ) ? +j;k " Avjjjk log E exp q(2k + 1) = "d?2 Avjjjk

X



log E

n

exp

h

 n oio q(2k + 1)d (+j;k ? m+j;k )2 ? +j;k

(3.6)

Here we have used the product nature of . It is not hard to estimate the expectation w.r.t. , again since it is a product measure. For simplicity, let  = j = 0 and suppress all indices  + j . Note that k is a random variable with mean mk and variance k . Let k = (k ? mk )=k1=2. Since is a product measure, k is order k?d . Together with last identity, (3.3) follows (for d  3) if one establishes log E [expfq(2k + 1)dk k2g]  const:

(3.7)

>From the local central limit theorem and its expansion ([18], see also Lemma A.1 in [17]), k has a density p12 exp[?k2=2][1 + O(k?d=2)]. Since (2k + 1)dk is uniformly bounded by a constant, say , the integrand in (3.7) is bounded by exp[qk2]. Clearly, for q small enough, it is integrable w.r.t. p12 exp[?k2=2]. We have proved (3.7) and this concludes Lemma 3.1. We will be interested in local Gibbs states with small perturbation. More precisely, let ~ be de ned by ( ) X X ? 1 2 ~ = Z~ exp "  ("x)x + "  ("x)xF (3.8) x

x

where Z~ is the normalization,  and  are smooth function and F is a local function. We shall assume for the rest of this paper that E  [F ] = 0. The following lemma allows us to replace s(f j ) by s(f j ~).

Lemma 3.2. Suppose that f and  are probability measures on L with  the equilibrium measure de ned in (2.1). Moreover, f satis es "?2 s(f j )  const: Then for ~ and de ned

in (3.8) and (2.10), with F such that E  [F ] = 0, one has

lim "?2[s(f j ~) ? s(f j )] = 0:

"!0

12

Proof. By de nition,

~  ); "?2[s(f j ) ? s(f j ~)] = "dE f [  ("x)xF ] ? "d?2 log(Z=Z X

x

where ~ "d?2 log Z=Z



"

(

=

"d?2 log E 

?

"d?2 log E  [exp

=

"d?2 log E

"

exp "

exp

(

"

(

X

x

X

x

 ("x)x

X + "2  ("x)

x

)

xF

)#

 ("x)x ]

X "2  ("x)

x

xF

)#

:

To the leading order, ~  = "dE [  ("x)xF ] + O("2) "d?2 log Z=Z X

x

~  ! 0 as " ! 0: The rst term "d By assumption on F and Lemma 2.2, "d?2 log Z=Z E f [Px  ("x)xF ] also tends to zero as " ! 0 by assumption on F and Lemma 2.2. This proves Lemma 3.2.

4 Entropy estimate. This section is devoted to the proof of Theorem 2.1. The aim is to get an integral inequality for the relative entropy in terms of a \large deviations" term and a \variance term". The rst term will be bounded via Lemma 3.1. The second term is related to the fact that the current is non gradient. It will be handled with the method developed in [12, 13, 14] and in sect. 5 and 6. This method forces us to modify the local equilibrium by a second order non product term, which does not a ect the relative entropy up to order "2 by Lemma 3.2, but it is crucial for the variance term to vanish. We will replace t with the density ~t de ned by ~t

= Z~?1 exp t

(

" (  !~ )("x; t)x X

x

+ "2()

)

;

(4.1)

where Z~t is the normalization,  is the convolution product on Z d , and , !~ and  are chosen as follows. 13

(4.A) Let ` and k be integers, ` = `d+2 and suppose that k is divided in disjoint cubes of size (2`+ 1), with centers 2 (2`+ 1)Z d , j j  k. Let `1 = `? `1=d2 and consider the cubes `1; and ~ k = Sj jk is the region k without corridors of width 2`1=d2 . De ne !~ and ! to be the normalized characteristic functions !~ (x) = j~ k j?11(x 2 ~ k ); !(x) = (2k + 1)?d1(x 2 k ). (4.B) ("x; t) = ("x ? "?1vt; t) with  and v given by Theorem 2.1. (4.C) Choice of . () = ? Px;e>0(@e )("x; t)(~!  y F )(x); where F is a local function satisfying F^ () = 0 = @ F^ =@yjy= , F^ (u) = E h [F ], with u = P 0(h). The choice of !~ is particularly convenient to handle some boundary terms in sect. 5 and 6, although the use of `1=d2 is rather arbitrary. The choice of ` = `d+2 is arbitrary too and will be made clear in the proof of theorem 4.6 in Section 6. The choice of depends on the need of compensating a diverging transport term, according to what explained in sect. 1. Finally  is the second order corrections to and its choice will also be made clear below. We note that in the same conditions of Lemma 3.2, with ~ de ned by (4.1), we have "?2[(f j ) ? s(f j ~)] ! 0 from Lemma 3.2. To state the main result of this section, we need the following de nition. De nition 4.1. Let G be a local function supported in jxj  s0 for some integer s0. Suppose that G satis es E  [G] = 0, where  is the equilibrium state de ned in (2.1). For any y 2 [0; 1] we de ne the \variance" V`(G; y) by

V`(G; y) = (2`1 + 1)?d

*2 4

3 X

jxj`1

2

(xG ? `(G))5 (?Ls;`)?1 4

X

jxj`1

3+

(xG ? `(G))5

`;y

(4.2)

where `1 = ` ? `1=d2 and `;y is the canonical Gibbs state of (2` + 1)d sites with density y, namely, `;y = Q?`;y1 (` ? y)`: Here ` denotes the counting measure on the con gurations in ` and Q`;y is the normalizaP tion. The generator Ls;` is de ned by Ls;` = 1=2(L` + L` ) with L` = jbj` Lb and L` de ned

14

similarly. Here jbj  ` means b 2 ` and Lb is de ned by (2.2). Finally, ` (G) is a constant depending on ` de ned by `(G) = E  [Gj`]: (4.3) We de ne also the \variance" of G by

V (G; ) = lim sup E  [V`(G; `)]:;

(4.4)

`!1

We have chosen `1 < ` to avoid boundary terms. The speci c form of `1 chosen above is not important. Our main result of this section is the following theorem. Theorem 4.2 Let ~t be de ned as above and ft satisfying the assumptions of Theorem 2.1.

De ne the lattice gradient ref (x)  f (x + e) ? f (x). In particular, re 0 = e ? 0 . Then, for d  3, Z T ?2 s(f j ~ )dt + CT X V (~ ? 2 ~ qe ; ) " lim lim " s ( f j )  C lim lim t t T T `!1 "!0 `!1 "!0 0 e>0

(4.5)

Here C is a positive constant, V is de ned in (4.4) and

q~e = e(0 ? )(e ? ) ?

X

e0

D~ ee0 re0 0 ? L

X

x

x F

(4.6)

with D~ ee0 = ee0 ? Dee0 and Dee0 is de ned by (5.19) Let us recall a few basic facts from previous works.

Lemma 4.3 The density ft satis es the following bounds p (i) dtd s(ft )  ?const:"?2+d b2L Db ( ft ), where Db (g) = [g( b ) ? g()]2d () is the R

P

Dirichlet form.

(ii) For any t  0 , "?2 s(ft )  const: (iii) For any local function F and smooth function J, j[E ft ? E  ]["d?1 Px J ("x)xF ]j  const:

15

Proof. The rst two bounds (i) and (ii) were proved in [16]. The last bound (iii) follows from Lemma 2.2.

Lemma 4.4. The relative entropy s(ft j ~t ) satis es the bound (i) dtd s(ft j ~t )  "d ft ~t?1 ("?2 L ? @t ) ~t d : R

(ii) There is a constant ct independent of ft such that

lim "?2 "!0

"

d s(f j ~ ) ? "d f ("?2L ? @ ) log ~ d ? c  0: t t t t dt t t #

Z

(iii) L ~t d = 0 = @t ~t d : R

R

Proof. The bound (i) is proved in [10, 11] while (iii) follows by inspection. Hence we only have to prove (ii). By comparing (i) and (ii), it suces to prove that Z

lim "d?4 ft [L log ~t ? ~t?1L ~t ]d = ct "!0 By de nition of L; one has (omitting the subscript t)

W = L log ~ ? ~?1L ~ ?"p?ere ~("x; t)xrex + "2p?ex[(x;x+e) ? ()] = Xn

x;e

o

n

o

? xp?e exp ?"[re ~("x; t)]rex + "2 [(x;x+e) ? ()] ? 1 ; X

x;e

where ~ =  !~ . Expanding the exponential to the second order,

W=

X

x;e

p?e"4(@e ~("x; t))2x(rex)2 + "4

X

x;e

p?ex[(x;x+e) ? ()]2 + 0("6):

If one choose ct = lim"!0 "d?4 E  [W ]; then (ii) follows from (iii) of Lemma 4.3. The generator L can be decomposed as L = Ls + L(a) where

Lsf = 12 (L + L )f = L(a) f = 21 (L ? L)f =

X

b>0

X

b>0

L(bs) f =

Lb(a) f =

X

x;e>0

16

X

[f (x;x+e) ? f ()];

x;e>0

e(x ? x+e)[f (x;x+e) ? f ()]:

(4.7) (4.8)

Here the sum over b > 0 means summing over non oriented bond. One can de ne the current and its antisymmetric part by

Lx =

X

e>0

r?e wx;e; L(a) x =

X

e>0

(a) r?e wx;e

(4.9)

 and w(a) are given explicitly by with r?e g(x)  g(x) ? g(x ? e). The currents wx;e, wx;e x;e x + x+e ]: (4.10) (a) ; w  = r  ? w (a) ; w (a) =  u =  [  wx;e = rex + wx;e e x e x;e e x x+e ? x;e x;e x;e 2 We shall now compute L log ~. By de nition of ~

"d?4L log ~ = "d?4Ls log ~ ? "d?4L(a) log ~ (a)  ! ("x; t)[r?e wx;e ~] = "d?1 [@e @e ("x; t)](~!  )x ? "d?3 X

X

?

x;e>0 x;e>0 X d ? 2  " @e ("x; t)L (~!  y F )(x) + o(1) = B1 + B2 + B3 + o(1): x

(4.11)

Among the three terms in (4.11), B1 is the simplest since it is already of order one (recall "?1(x ? )  O(1)). We now compute B2. By Taylor expansion,

B2 = "d?2

X

(a) ] ["?1re ("x; t)][~!  wx;e

x;e>0

(a) ] + 1 "d?1 (a) ] (@e )("x; t)[~!  wx;e (@e@e )("x; t)[~!  wx;e 2 x;e>0 x;e>0 X d ( a ) + " a("x; t)(~!  wx;e ? c1) + const:

= "d?2

X

X

(4.12)

x;e

Here c1 is some constant and a is proportional to the third derivatives of . Here and below by const: we mean any quantity independent of the con guration and a cumulative constant will be determined at the end of this section using (iii) of Lemma 4.4. Note that from Lemma 4.3 the expectation of the third term goes to 0, by a suitable choice of c1 so we only have to concentrate on the rst two terms. In preparation to the use of the \one block" estimate (Theorem 4.6), we introduce the quantities  (2)

(1) !  y +2y+e )x ? x;k ]; (4.13) x;e = E [ux;ejx;k ]; x;e = (2 ? 1)[(~ where  =  is the equilibrium measure. We can decompose !~  u as (2) (~!  u)x;e = (1) x;e + x;e +

X

e0 >0

(x) ?  ?1 D~ ee0 (~!  re0 )(x)e?1 + !~  [gy;e e

17

X

e0 >0

D~ ee0 (re0 y )]; (4.14)

(x) is de ned by where gy;e (x) = u + (1 ? 2 )( y + y+e ?  ) ? (1) ? (1 ? 2 )( gy;e x;k ? ): y;e x;e 2

(4.15)

If we let (b = (y; y + e))

qb = uy;e + (1 ? 2)( y +2y+e ? ) ? ( ? 1) = (y ? )(y+e ? ) (x) is nothing but then gy;e

(x) = q ? E  [q j ] gy;e b b x;k

(4.16) (4.17)

One can compute (1) x;e easily. It is simply n

E  [ux;ejx;k ] = [1 + ((2k + 1)d ? 1)?1] (x;k )2 ? x;k

o

(4.18)

Lemma 4.5 Suppose J is a smooth function. Then there is constant C2 (independent of ft ) such that

d?2 E ft [ J ("x) (2) ? " (1 ? 2) lim " x;e 2 "!0 x X

X

x

@eJ ("x)x ] ? C2 = 0

Proof. By de nition of !~ and ! in (4.A), up to a constant, the expression inside the expectation can be written as

"d?2(2 ? 1)

( X

x

[(~!  J )("x)(x ? ) ? (!  J )("x)(x ? )] )

X ? 21 [re(~!  J )("x) ? "@eJ ("x)](x ? ) ; x The replacement of !  J by !~  J produces an error

(4.19)

"d?2 E ft f (x ? )[!  J ("x) ? !~  J ("x)]g X

=

x X d ? 2 f t " E f (x ? )Av [!  J ("x) ? !~  J ("x)]g; x

where !~ (x) = j`1; j?11(x 2 `1; ) and ! (x) = j`; j?11(x 2 `; (recall de ned in (4.A)). Hence

!  J ("x) ? !~  J ("x)j = jAvy2`;  [J ("(x ? y ) ? J ("(x ? )] ? Avy2`1 ; [J ("(x ? y) ? J ("(x ? )]j  const:"2`?2j@e@e J j: 18

Hence jerrorj  "d `2 Px E ft [x ? ] ! 0 because E ft [x ? ] = O("). Finally one can check the second term in (4.19) vanishes as " ! 0. This concludes Lemma 4.5. Let us summarize what we have proved so far

"?4+dL log ~t = B1 + 4 + 5 + 6 + 7 + const: + o(1) where B1 is de ned in (4.11) and i ; i = 4; 5; 6; 7 are de ned by (e = pe ? p?e)

4 = "d?2

5 =

"d?2

X

(@e )("x; t)e (1) x ;

x;e>0

X

x;e;e >0 0

(@e )("x; t)D~ ee0 (~!  re0 )(x);

n o X

6 = 1 "d?1 (@e@e) ("x; t) [(~!  we(a) )(x) + (1 ? 2)ex ; 2 x;e>0 X

7 = "d?2 (@e )("x; t) 7;x;e

x;e>0

2

7;x;e = !~  4e

g(x) ?

X

y;e

e >0 0

D~ ee0 (re0 )y

3

? L

y;y+e

X

z

z F (x)

(4.20)

5

The following theorem provides a key estimate replacing the usual \one block-two blocks" estimate in hydrodynamical limit. The traditional approach [16] fails here because the estimates are not strong enough to take into account uctuations. Our method is based on a multi-scale analysis and logarithmic Sobolev inequality and it will be presented in sect. 6.

Theorem 4.6 For any smooth function J and constant > 0, (

Z

lim lim "d?2 f `!1 "!0

X

x

J ("x)f!~  [G ? k (G)](x)gd

d ? const: " f [ (J ("x))2 V`(G; x;`)]d ? "d?4 Db ( f )  0: x b Here ` = `d+2 , k = `"?2=d and (~!  G)(x) = y !~ (x ? y)y G as in (4.A). Corollary 4.7 For any > 0 and f such that s(f )  const: "2, Z

X

X

q

)

(4.21)

P

"d?2

lim lim `!1 "!0

Z

X

x

J ("x) f!~  (G ? k (G)g (x)fd

Z q X 1 2 d ? 4  2 J (z)dz V (G; ) + "lim " Db ( f ): !0 b In particular, if ft is the density in Theorem 2.1 then for any > 0,

lim lim `!1 "!0

Z

0

T

n

dt

"d?2

Z

X

x

o

fJ ("x)~!  (G ? k (G))g (x)ft d  CTV (G; ) + const: 19

where C is some constant depending on J and . Proof: By lemma 2.2 we can replace the density f in the middle term of (4.21) by 1. The second part of the corollary is a consequence of (i) and (ii) of Lemma 4.3 Corollary 4.8 There is a constant C3 independent of ft such that

lim lim `!1 "!0

T

Z

0

E ft [ 6 ]dt ? C3 = 0

2 ?  ]. Corollary Proof. By Corollary 4.7, one can replace by (~!  we(a) )(x) by !~  [x;k x;k 4.8 then follows from Lemma 3.1 (with =  ) and the entropy inequality (2.12).

Corollary 4.9

where q~e is de ned by

lim lim `!1 "!0

T

Z

0

E ft [ 7 ]dt  CT

q~e() = eqe ?

X

e0

X

e>0

V (~qe; ) ?1 +

D~ ee0 re0 0 ? Le

X

z

z F

(4.22)

Proof. Corollary 4.9 is just a reformulation of Corollary 4.7 to our setting. Notice that the term (1) x;k ? ) has been introduced to write (4.17), where E  [qejx;k ) is x + (1 ? 2)( just the k (qe) term in the l.h.s. of (4.21) . To summarize, we have proved

"?4+d

Z

0

T

E ft [L log ~t ]dt 

Z

T

0

where 8 and 9 are de ned by

Z

dt ftf 4 + 8 + const:gd + 9 + ?1 + o(1) (4.23) d?1

8 = B1 + 5 = " 2

X

x;e;e0>0

Dee0 (@e@e0 )("x; t)(~!  )("x);

Dee0 = ee0 ? D~ ee0 ; 9 = CT We now compute @t log ~t . By de nition,

"d?2@t log ~t = ?"d?2

X

x;e>0

X

e>0

V (~qe; ):

(4.24) (4.25)

@e("x ? "?1vt; t)ve(~!  )(x) +

"d?1 @ ("x ? "?1vt; t)(~!  )(x) + "d @t x  T1 + T2 + T3 + const: X

20

X

x

d [@ ("x ? "?1vt; t)] F + const: x dt e (4.26)

The last term T3 can be estimated as follows. By de nition, the t-derivative has two terms, symbolically, d = ?"?1 X v @ + @ : e e t dt e>0

The contributions from the second term is negligible by (iii) of Lemma 4.3. The contribution from the rst term is of the form

"d?1

X

x;e>0

Je("x; t)xF  X

If one uses the same argument used for the second term, it follows that [E ft ? E  ][X ] is bounded. In order to have [E ft ? E  ][X ] negligible, one needs Corollary 4.7 and suitable subtraction. By Lemma 4.6, one can replace X by "d?1 Px;e>0 Je("x; t)F^ (x;k ). By entropy inequality (2.12) and Lemma 3.1 ( =  ); the rst term vanishes in the limit " ! 0 and ` ! 1. Finally, letting ! 1 we conclude that T3 is negligible, provided that F satis es assumption (C) in the de nition of ~t . Combining with previous estimates, one has Z

0

T

dtE ft ["d?4L ? "d?2@t ] log ~t  n

o

T

Z

0

We now estimate 4 ? T1.

4 ? T1 = "d?2

X

x;e>0

dtE ft [ 4 ? T1 + 8 ? T2 + const:] + 9 + + o(1): (4.27)

(@e )("x; t)[e (1) !  )(x)] x + ve (~

By (4.18) we can write (1) x;e as

(1) x;k ? )2 + (2 ? 1)(x;k ? ) + (2 ? ) + O(k?d) x = ( By de nition, ve = (1 ? 2)e . Hence 4 ? T1 = 10 + 11 +const: + O(`?d) (recall k = `"?2=d ), where

10 = "d?2

11 = "d?2

X

(@e )("x; t)(2 ? 1)e[x;k ? (~!  )(x)];

x;e>0 X

x;e>0

e(@e )("x; t)(x;k ? )2

(4.28)

Note that 10 is of the same form as the rst term of (4.19), hence lim"!0[E ft ?E  ][ 10 ] = 0. Therefore T

Z

0

dtE ft ["d?4 L ? "d?2@t ] log ~t  n

o

21

T

Z

0

dtE ft [ 12 + 9 + const:] + o(1)

where 12 is de ned by

12 + with

= "d?1

"?1

e (@e

@ )("x ? "?1vt; t)](~!  )(x) [(Dee0 @e@e0 ? @t

(

X

x;e;e0>0

)("x ? "?1vt; t)(

x;k

?  )2 ]

)

= "d?1

X

x;e>0

(4.29)

12;x;e(x;k ) + const: + o(1)

@ )](u ? ) + "?1 (@ )(u ? )2

12;x;e(u) = [(Dee0 @e@e0 ? @t e e

(4.30)

In the last step we have again replaced !~   by !   = x;k and the error goes to 0 by the same argument used before. By assumption  satis es equation (2.6) of Theorem 2.1. Let m() = ?(1 ? ). Then 12 can be simpli ed as

12;x;e(u) = e(@e )[2m( )(u ? ) + "?1(u ? )2] = "?1e(@e )(u ?  + "m( ))2 + const:

(4.31)

Therefore we have the equation T

Z

0

dtE ft [("d?4L ? "d?2 @t ) log

where

~]  "d?2

T

Z

0

dtE ft [ ? x;e(x;k ) + const:] + 9 + ?1 + o(1) X

x;e

? x;e(u) = e(@e )("x; t)(u ?  + "m( )("x; t))2

(4.32)

Note that  ? "m( ) is not exactly the mean E ~t [x;k ]   ? "k ("x; t). One can prove easily that the replacement of m by k produces only negligible errors. To summarize, one has Z



Z

0

0

T T

dtE ft ["d?4 ~?1L ~ ? "d?2 @t ~]d Z

dt ft["d

X

x;e

(4.33)

?x;e(x;k ) + const:]d + 9 + ?1 + o(1)

Here ? is de ned by (4.32) with m replaced by k . We now determine the constant by (iii) of Lemma 4.4. Note that (4.33) is an inequality but, if one traces the argument through, it is not hard to nd that, in case ft = ~t , (4.33) is an identity with 9 and ?1 set to zero. Hence the constant satis es the equation Z

~t ["d?2 X ?x;e(x;k ) + const:]d = o(1) x;e

22

By Lemma 3.2, one can replace ~t by t . Hence the constant is nothing but const: = ?"?2 e(@e )("x; t)x;k; with x;k de ned in Lemma 3.1. One can now apply the entropy inequality (2.12) and Lemma 3.1 and conclude that, with k = `"?2=d, Z T ? 2 ~ lim lim " s(fT j T )  `lim lim "?2s(ftj ~t )dt + 9 + T ; `!1 "!0 !1 "!0 0

(4.34)

for any > 0. Therefore Theorem 4.2 follows since is arbitrary. In next sections we prove that we can choose Dee0 and F so that 9 =0. A replacement of ~ by produces negligible errors and Theorem 2.1 follows by Gronwall lemma.

5 The di usion coecient. In this section we identify the di usion coecient and characterize the function F used in sect. 4 by a variational problem. This is done by a geometric argument which relies on the construction of the appropriate Hilbert space below. Let G be the linear space G = fgjg is a local function and g satis es (5:1)g @ g^ j : g^() = 0 = @y (5.1) y= Here g^ is de ned by g^(y) = E  [g], y = P 0( ). The condition (5.1) is equivalent to

E  [g] = 0 =

X

x

E  [g; x]

(5.2)

where E  [A; B ] = E  [AB ] ? E  [A]E  [B ]. In this section  = P 0( ); <  >= E  and we will drop the index in the measure  . Recall the de nition of V (g; ) (De nition 4.1, eq. (4.4) ). We claim that the variance V is a norm on G and it de nes an inner product via polarization.

Lemma 5.1 For g and h 2 G de ne V (g; h) = 41 [V ((g + h); ) ? V ((g ? h); )]:

(5.3)

Then V is a inner product, that we denote by   ;  . Furthermore, in the de nition 4.2 of V (g; ) the limit exists and hence lim sup can be replaced by lim.

23

We shall prove this Lemma in the following Theorem 5.2 { Corollary 5.6. Theorem 5.2 is the key input and we shall prove it in sect. 6. Throughout this section ` will denote an integer (independent on ").

Theorem 5.2 The variance V (g) satis es the variational principle Here

1 V (g) = sup 2 e 2R;G2G

(

X X ete(g) + hg; Gi0 ? 41 E  [ ere + re xG]2 x e>0 e>0 X

hg; Gi0 = E [g

X

x

xG]; te (g) = `lim E  [g !1

X

x2`

(e  x)x]:

)

(5.4) (5.5)

Note the right side is independent of `, for ` large enough, since E [gx ] = 0 if g does not depend on x.

Lemma 5.3. LG  G ; LG  G and LsG  G . Proof. We shall only prove LG  G . It suces to check (5.2) for Lg. The rst equality

of (5.2) is trivial. The second one follows since our dynamics conserves the total particle number.

We start with the de nition of the inner product on a subspace of G and then extend it to G . The following lemma is a simple consequence of Theorem 5.2 Lemma 5.4.For g; G  G : X X (5.6) V (Lsg) = 21 E  [(re xg)2] e>0 hre; Gi0 = 0 (5.7) te0 (re) = e;e0 V (re) = E  [(re)e] = 12 E  [(re)2] (5.8) where h  ;  i0 is de ned in (5.5). Hence   ;  = V (g; h) de nes an inner product on G (1) = G (0) + LsG where G (0) = fPe>0 ereg. Furthermore G (0) ? LsG and, for  2 G (1) ,

 Lsg;  = ?hg;  i0:

(5.9)

Proof. By (5.5), te(Ls g) = 0. Hence the sup in (5.4) is attained for e = 0 and this proves (5.6). (5.7) follows from the de nition of h  ;  i0. By (5.7) the sup in (5.4) for g = re is attained for G = 0 and this proves (5.8). >From (5.6), X X X  LsG; Lsg = 21 E  [(re xg)(re xg)] = ?hg; LsGi0 x x e>o

24

Let  = Pe aere + LsG. Since  Lsg; re = 0, (5.9) is proven.

Lemma 5.5. Let g; G 2 G and  =

e>0 e re  + Ls G 2 G

(1) .

P

Sg ( ) =

X

e>0

De ne Sg ( ) by

ete(g) + hg; Gi0:

(5.10)

Then Sg ( ) is a bounded linear functional on G (1) : Hence Sg de nes an element in G (1) via the Riesz representation theorem, which we still denote by Sg : The map S : g ! Sg is a linear transformation from G to G (1) . Furthermore, S = I when restricted on G (1) ; namely Sg = g for g 2 G (1) : Proof. The only thing we need to prove is the boundedness of Sg ( ). It suces to prove that for any g xed,

jhg; Gi0j2  const:(g) By de nition,

X

e>0

hg; Gi0 = E  [g

E  [(re X

x

X

x

xG)2 ]:

x G]:

Since both g and G are local functions, we can truncate the summation of x at, say, jxj  M: By Lemma 6.1,

E  [g

X

jxjM

xG]2  const:(g)

X

b

E  [(rb

X

jxjM

xG)2 jbj?d?1=2 ]

(5.11)

Now let M ! 1 and use the fact that for all b one has

E  [(rb

X

x

x G)2] = E  [(re

X

x

xG)2 ]

for some e > 0. Hence the right side of (5.11) is bounded by const:(g)E  [Pe>0(re Px xG)2 ]. This proves the boundedness of Sg . Let N be the kernel of S in G . Then G =N is isomorphic to G (1) . Note that g 2 N i V (g) = 0. Hence one can identify G with G =N .

Corollary 5.6. G = G (1) and  ;  de nes an inner product on G so that  g; h =

V (g; h). This proves Lemma 5.1 and also establishes the structure of G . In consequence of above discussion we can think of the elements of G , roughly speaking, as sums of a gradient term re and a uctuation term LsG. 25

We note that up to now we have only used the symmetric part of the generator. In our model the generator is not symmetric. To determine the di usion coecient, we need to decompose the elements of G in terms of a gradient and LG. The way to do that is not straightforward, and the rst step is the lemma 5.7 below. Recall the de nition (4.10) of the current we = w0;e. Note that we 62 G . We introduce the function e = (re) + e (0 ? )(e ? ) (5.12) so that e 2 G and hg; wei0 = hg; ei0 for g 2 G . Note that (5.2) is crucial for the last identity to hold. Similarly, one can de ne e = (re) ? e(0 ? )(e ? ) so that hg; wei0 = hg; ei0:

Lemma 5.7. Suppose g 2 G . Then te(Lg) = `lim E  [Lg !1

X

(x  e)x] = ?hg; wei0 = ?hg; ei0;

(5.13)

jxj`

te(Lg) = ?hg; wei0 = ?hg; ei0

(5.14)

Proof. Since Lg is a local function, one can x a M much larger than the support of g and replace Lg by LM g with M  `. One can now perform integration by parts and

E  [LM g

X

(x  e)x] = hgLM

jxj`

X

8 < X

(x  e)x i + E  [g :

jxj`

y2@ `

y sign(y)

9 = X ;

(x  e)x];

jxj`

where sign(y) = 1 depending on which boundary y lies. The choice of sign is not important; the crucial point is that opposite boundaries have opposite signs. The rst term on the right is just ?hg; wei0: Hence it remains to prove the last term is zero. We claim that the contribution from opposite boundaries cancel exactly and sum up to zero. To see this let g depend only on the con guration in some  and z 62  . Let z0 be the re ection of z along the axis e. Note that for any x

E  [gz x] = E  [gz0 x]

(5.15)

unless x = z or x = z0 . Hence

E  [g(z ? z0 ) (x  e)x] = E [g(z ? z0 ) f(z  e)z + (z0  e)z0 g] = E  [g; F (z ; z0 )]; X

x

26

where F (z ; z0 ) = (z ? z0 ) f(z  e)z + (z0  e)z0 g : Since  is a product measure and g is independent of z and z0 ; the correlation of g and F is zero. We have thus proved (5.15) and the Lemma.

Lemma 5.8

 e; re0  = te0 (e ) = V (re)e;e0 = te(e0 )

Proof. By de nition

E  [e te0 (e) = `lim !1

X

(x  e0)x]

jxj`

Clearly, if e0 6= e then te0 (e) = 0. Hence we assume e = e0. Since  is a product measure, E  [ex] = 0 unless x = 0; or e. Hence

te(e ) = E [e e] = E  [(re)e] + eE [(0 ? )(e ? )e]: Clearly, the last term is zero. Thus we prove Lemma 5.8 using Lemma 5.4

Theorem 5.9. (i) LG + G (0) = G = L G + G (0) (ii) Let Gw = fPe>0 ee g. Then Gw + LG = G = Gw + LsG . Furthermore,

V ( ee + LG )  V ( ere + Lsg) X

X

e>0

e>0

(5.16)

Similarly, Gw + L G = G and

V ( ee + Lg)  V ( ere + Lsg) X

X

e>0

e>0

(5.17)

(iii) Let T be the linear transformation from G to G s.t.

T ( ee + Lg) = X

X

e>0

e>0

ere + Lsg:

Then T is bounded above by 1 and

(A) (B )

T re ? LG 8e  T re ; e0 = ee0 V (re) 27

(5.18)

(iv) Let Q be the matrix Qee0 = re; T re0  . The di usion matrix D is given by

D = V (re)Q?1 > 1:

(5.19)

(v) Let R be the linear transform from G to G s.t.

R ( e re  + L s g ) = X

X

e>0

e>0

ee + Lg

(5.20)

Then R is a bounded linear transform, R = T ?1 and in consequence D is bounded above. Remark: At formal level one can think to the maps T and R as T = LsL?1 and R = LL?s 1 . With this in mind, it is easy to understand the statements of Theorems 5.9 and 5.10 below and the calculus of the di usion matrix. Unfortunately, we do not know how to give sense to L?1, so the arguments based on it are only formal and we had to use the above maps to make the computation rigorous. Proof of (i). Clearly, one only has to check that the orthogonal projection from LG to LsG is onto. Suppose that, for some h 2 G ,  Lg; Lsh = 0 for all g 2 G . In particular,  Lh; Lsh = 0. By Lemma 5.4,

 Lh; Lsh = ?hLh; hi0 = ?hLsh; hi0 = Lsh; Lsh  This proves Lsh = 0 in G and hence (i).

Proof of (ii). We shall only prove the bound of V , namely (5.16). By Theorem 5.2 the left side of (5.16) is equal to 9 8 = 0 e>0 e>0 e0 >0 In particular, let G = ?g and = . Hence from Lemma 5.4 we can bound the last expression from below by X X X X (5.21) e0 te0 ( ee + Lg) ? h ee + Lg; gi0 ? 21 V ( ere + Lsg) e>0 e>0 e>0 e0 By de nition of L; hLg; gi0 = hLsg; gi0. Also, from (5.13) te0 (Lg) = ?hg; we0 i0. To summarize, we have 1 V (X  + Lg)  X 0 t 0 (X  ) ? 0 hg; w i ? X h ; gi e e 0 e e e e e e0 0 2 e>0 e e e>0 e>0 e0 X (5.22) ? hLsg; gi0 ? 21 V ( ere + Lsg): e>0

28

Since g 2 G (1) , he; gi0 = hwe; gi0 by (5.14). By de nition we + we = 2re and hence hg; we + wei0 = 0. By Lemma 5.8, te0 (e) = e;e0 V (re). Therefore 1 V (X  + Lg)  V (X r ) ? hL g; gi e e s 0 2 e>0 e e e>0 X X ? 21 V ( ere + Lsg) = 21 V ( ere + Lsg): e>0 e>0 Note that this inequality is precisely the statement that T in (iii) is bounded by 1. P Proof of (A) of (iii). Suppose that re  = e0 aee0 e0 + LF with F 2 G . In general, we may not have such a decomposition and some approximation is needed. Then

 T re ; Lg = aee0 re0  ; Lg  +  LsF; Lg  By (5.9)

 LsF ; Lg = ?hF; Lgi0 = ?hLF; gi0

Note that no boundary term arise when performing integrations by parts because g and F are local functions. Also, by Lemma 5.7, te (Lg) = re; Lg = ?he ; gi0. Hence

 T re ; Lg = ?h

X

e0 >0

aee0 e0 + LF; gi0 = hre ; gi0 = 0

Proof of (B) of (iii). By de nition of T

 T re ; e =

X

e0 >0

aee0 re0  ; e  +  LsF ; e 

For Lemma 5.4 and (5.13), for all e;  LsF ; e = ?hF; e i0 = te(LF ) = LF; re . Here the last identity follows from the isomorphism of G and G (1) (Cor. 5.6) and the structure of G (1) (Lemma 5.4). By Lemma 5.8,  re0 ; e = e;e0 V (re) = e0 ; re . Hence

 T re; e =

X

e >0 0

aee0 e0 + LF; re = re; re = eeV (re):

Proof of (iv). The di usion coecient is such that e ? Pe0 >0 Dee0 re 2 L G ; hence by (A) of (iii), it is characterized by

 e ?

X

e>0

Dee0 re; T re = 0 29

(5.23)

for all e0 and e: Solving (5.23), one has

Dee0 = V (re)(Q?1)ee0 :

(5.24)

Since T is bounded above by 1, so is Q (as a quadratic form). Hence D  1. Finally, one has to prove the strict inequality holds. Note that D = 1 i  T re; re0  = ee0 V (re). Since T is bounded by 1, this implies T re = re: Hence by (A) of (iii), re ? LG and leads to contradiction. Proof of (v). First of all, we claim that T has a bounded inverse. This follows from general abstract arguments as follows. Since T  ( the adjoint of T w.r.t.   ;  ) is surjective from (ii), T is a bijection from G to G . The inverse mapping theorem [20] then assures that T ?1 is bounded. It is now trivial to check that R = T ?1. Theorem 5.10: For the map T de ned in (iii) of Theorem 5.9 the following variational P formula holds ( = e>0 ere + Lsg:

 ; T = sup inf e ;k + 2

e ;h

nX

e>0

e( e ? e)E  [(re)2] ? 2hg; hi0 ?

[ ehe; ki0 + ehe; hi0] + 2hLh; ki0 +

X

e>0

X

e>0

E  [( ere + re

X

x

o

xk)2 ] (5.25)

where k and k are local functions. In particular, if  = e>0 e re , the second term on the right side of (5.25) is zero. Hence we have the following representation of the di usion matrix: P

X

e;e0>0

+ 2

(D?1)ee0 e e0 = V (re) sup inf e ;k e ;h

nX

e>0

e( e ? e)E  [(re)2] ?

[ ehe; ki0 + ehe; hi0] + 2hLh; ki0 +

X

e>0

X

e>0

E  [( ere + re

X

x

o

xk)2 ] (5.26)

Remark: By the formal relation T = LsL?1 , we have (D?1 )ee0 = V (re ) hre; L?1 re0 i?0 1 . Hence (5.26) is a way to give a rigorous sense to this expression. Proof. By de nition of R, we have the identity  ; T = ; R?1 =  ; (R?1)s , with (R?1)s = [R?1 + (R?1)]=2 and (R?1) is the adjoint of R?1 w.r.t. ; . Hence we have the variational formula

 ; (R?1)s = sup f2  ; u  ?  u; [(R?1)s]?1u  g: u2G

30

(5.27)

It is a general fact that

[(R?1)s]?1 = RRs?1 R:

(5.28)

One can check (5.28) straightforwardly by simply computing [R?1 + (R?1 )]R Rs?1R = [R?1 R + 1]Rs?1 R = R?1[R + R]Rs?1 R = 2R?1R = 2: We can now rewrite  u; [(R?1)s]?1 u  by

 u; [(R?1)s]?1u = sup f2  Ru;   ?  ; Rs  g:  2G

(5.29)

By the structure theorem on G , we can assume u = Pe>0 ere + Lsh and  = Pe>0 ere + Lsk. Hence, by de nition of R

 ; Rs = ; R = X

e>0

X

e:e0

e e0  re; e0  +

e[ re; Lk  +  Lsk; e0 ]+  Lsk; Lk  :

>From Lemma 5.7,  re; Lk = ?hk; ei0. Together with (5.9)

 re; Lk  +  Lsk; re0  = ?[hk; e i0 + hk; ei0] = ?hk; rei0 = 0: Combining that with Lemma 5.8 and the fact that  Lsk; Lk = Lsk; Lsk  = Pe>0 E  [re Px xk]2 we conclude that

 ; Rs =

X

e>0

E [ ere + re

X

x

x k ]2 :

(5.30)

To get the variational formula for  ; T , it remains to compute  Ru;  . From the de nition of R, (5.20), Lemma 5.7 and Lemma 5.8,

 Ru;   =  =

X

e>0

X

e>0

ee + Lh;

X

e0 >0

e eV (re) ?

X

e>0

e0 re0  + Lsk  ehe; ki0 ?

X

e0 >0

e0 hh; e0 i0 ? hLh; ki0: (5.31)

Combining (5.27){(5.31) and recalling that  ; u = Pe>0 e eV (re) ? hg; hi0 (Lemma 5.4), we conclude the proof.

31

6 Multiscale analysis. We are interested in estimating E  [V`(g; y)] for g 2 G with ` = `d+2 as in Theorem 4.6. We will use `1 = `? `1=d2 . First of all, one can rewrite E  [V`(g; y)] by a variational principle as 8

9

1 E  [V (g; y)] = (2` + 1)?d From the local limit theorem and its expansion ((3.11) of [17]) one has gn = g^ + 0(q?nd):. Hence E  [gn; gn]  0(q?2nd) + E  [^g(qn ); g^(qn )] Note that our assumption (i) and (ii) are the same as (3.4) with m x;k replaced by m. From (3.5) one has log E  [expfqng^g]  const. With the same argument, E  [^g(qn ); g^(qn )]  const:q?2nd. Hence X E  [(Tnb )2]  const:q2n?2nd: (6.6) b2(n+1)

Step 3. Finally, let ~ b = Pn Tn;b : One can check from Step 3 that the bound (6.3) holds for d  3. This concludes Lemma 6.1.

Once Lemma 6.1 is established, Theorem 5.2 can be proved as in [12, 13]. The only di erence is that the function ~ b in [13] vanishes for all but nitely many b . We have, however, the decay bound (6.3) at our disposal. Hence we can follow exactly the same argument given in [12, 13]. In the following, we sketch a proof which is essentially the same as in [12, 13]. 33

Proof of Theorem 5.2. From the variational principle, there exists a function u such that

1 E  [V (g; y)]  Av E  [( g)u] ? 1 Av E  (r u)2] + ; b x ` jxj`1 2 4 jbj` where  is a small positive constant and `1 = ` ? `1=d2 . By Lemma 6.1, X

x

E  [(xg)u] =

X

b

E  [ ~ b(x g)rbu] X

x

(6.7)

(6.8)

Since E  [~ b (xg)2] = E  [f~ x b(g)g2]; Px ~ b(x g) converges strongly to an element b in L2 ( ) and b is translationally covariant in b, in the sense that b (x) = x b(). Therefore, the tail of the summation on x in (6.8) is not important. Furthermore, it follows from the Schwartz inequality, (6.7) and the translational covariance of b that 1 E  [V (g; y)]  ?Av E  [( 1 r u ?  )2] + X E  [2 ] +   X E  [2 ] + : (6.9) b ` jbj` e e 2 2 b e>0 e>0 As a corollary, we also have

Avjbj`E  [(rbu)2]  10

X

e>0

E  [2e ]:

(6.10)

Suppose for the moment that e is measurable w.r.t. the {algebra F (s) generated by fy jyj  sg with s a xed integer independent on `. Then, from the translational covariance of b one has the identity

E  [y e()ry eu()] = E  fe ()re(y u)()g:

(6.11)

Averaging (6.11) over jyj  `1and summing over e > 0, Avjbj`1 E  [b()rbu()] =

X

e>0

E  [e ()reu()]:

Here u = Avjyj`1 y u. Let e(`) = reu. One can check that

E  [(e(`) )2]  Avjbj`E  [(rb u)2]

(6.12)

Hence E  [(e(`))2 ] is uniformly bounded and it converges (up to subsequences) weakly to an element e in L2 ( ) such that lim Av  E  [b ()rbu()] = `!1 jbj`1 34

X

e>0

E  [b ()e];

lim inf Avjbj`E  [(rbu)2]   `!1

We have thus proved that for any  > 0,

X

e>0

E  [(e)2]:

1 E  [V (g;  )]  E  [X  (g) ] ? 1 E  [X( )2 ] + : lim e e ` ` 4 e>0 e `!1 2 e>0 Finally we can let  ! 0. Because a weak limit is taken, we do not known whether e is of the form re Px xu. The key remark from [12, 13] is that e can be well approximated by function of the form ere + re Px x G. More precisely, de ne b = xe for b = (x; x + e). Then b satis es the compatibility conditions reb = rb e. From [13], Theorem 9.2, e can be approximated by functions of the form just described. Clearly, by adjusting e, one can replace G by G ? a ? b0 without changing  . Hence we can assume G 2 G and lim sup 21 E  [V`(g; `)]  `!1(

X X sup E  [ e (g)f ere + re x Gg] ? ;G2G e>0

)

1 E  [X( r  + r X  G)2] : x e e e 4 e

One can check that

E  [e(g)re] = te(g);

X

e>0

E  [e (g)re

X

xG] = hg; Gi0

This concludes the upper bound for Theorem 5.2 provided that b is measurable w.r.t. F (s). For the general case, let b = (bs) + (bs) where (bs) is measurable in F (s). Hence 1 E  [V (g;  )]  Av E  [(s) r u] ? 1 ? Av E  [(r u)2] + +  ` b ` jxj`1 jbj` b b 2 4

= Avjbj`1 E  [ (bs) rbu] ? 4 Avjbj`E  [(rbu)2]: >From the Schwartz inequality, ! 0 as s ! 1 for any > 0 xed. This provides the cuto we need to complete our proof of the upper bound. The lower bound is much easier and can be proved by plugging in a test function u of the form Px2k e(e  x)x + P x G. Proof of Theorem 4.6. Consider rst the eigenvalue problem Z

q

U (f ) = q !~  [y G ? k (G)]fdk ? "?2Avjbjk Db( f ) 35

(6.13)

where dk is the canonical Gibbs state with k xed and G is a local function. Recall that k = `"?2=d. Divide the cube k into disjoint subcubes of size 2`d+2+s + 1 and label each subcube by (; s), s = 0; 1; : : : ; n with 2`d+2+s + 1 = 2k + 1. The choice of the exponent d + 2 + s is made for convenience and will be made clear later. Let (s) be the cube labeled by (; s) and denote its density by (s) , namely (s) = Avx2(s) x. Let us require that these decompositions are compatible in the sense that, for each (; s), there is a (~; s + 1) such that (s)  (~s+1). Denote by G(s) the function

G(s)(u) = E [Gj(s) = u]

(6.14)

where (s) = Avjxj`d+2+s x. We can now decompose the average over y as X

y

!~ (y)y G =

1

X

s=0

n

o

Av~ Av(;s)2(~;s+1)G(s) ((s)) ? G(~s+1) ((~s+1)) :

(6.15)

By de nition, G(0) y = y G. In the r.h.s. of (6.15) the average over  for the rst level s = 0, is taken in (1)  such that 1=d2 the distance from the boundary, jx ? @ (1)  j, is greater than ` . Since G is a local function, this will ensure the support of xG lies strictly inside (1)  for ` large and it is consistent with the de nition of !~ given in Section 4. Actually, !~ has been chosen in this way to avoid boundary terms arising from the corridors. Taking into account the decomposition (6.15) and the analogous for the Dirichlet form, we can bound U from above by taking the sup on each cube conditioned on ~( s+1) = . Then U is bounded by 1

X

s=0

U~(s+1) ()

=

U (s+1) =

Z

1

X

s=0

Av~ U~(s+1) (~( s+1))fdk ;

Z

R

n

o

sup q Av(;s)2(~;s+1) G(s) ((s)) ? G(~s+1) () h2 d

h2 d =1

? 4(s +1 1)2 "?2Avjbj`d

+3+s

Db(h):

Here d is the canonical Gibbs state in (~s+1) with density . The factor (s + 1)?2 makes the sum over s nite. Note that G(~s+1) () is a constant w.r.t. d. Hence we can rede ne G(s) so that G(s)() = 0. Furthermore, by subtracting a linear term c()(;s ? ) from G(~s) , 36

we can assume that G(~s) satis es (3.4) with mk replaced by . Such a subtraction is justi ed since the error vanishes after sum over . Case 1, s  1. From the logarithmic Sobolev inequality for symmetric simple exclusion [19], Avjbj``~Db(h)  const:(``~)?2 s(h2j);

(6.16)

where `~ = `d+s+2. Hence by the entropy inequality (2.12),

U (s+1)

(

)

Z 1 ? 2 ? ( d +2) ~  const: (s + 1)2 " (2`` + 1) log d exp[Q] ? qG(~s+1) ; Q = q(s + 1)2"2(2``~ + 1)(d+2) Av(;s) G(s)((s) )

(6.17) (6.18)

>From the Jensen inequality, the average over  can be moved to the front of the log in (6.17). Therefore, U (s+1) is bounded by

U (s+1)  (2`~ + 1)?d" ?1Av log E  [expfq"(2`~ + 1)d G(s)((s)g] ? qG(~s+1)

(6.19)

where " = (s + 1)2"2(2``~ + 1)(d+2) (2`~ + 1)?d  4(s + 1)2 "2`2+d`~2 . We claim that (6.19) is bounded above by

U (s+1)  const:q2"`~?d  const:q2"2`2+d (`~)2?d:

(6.20)

This implies the bound 1

X

s=1

U (s+1)  const:

1

X

s=1

(s + 1)2q2"2ld+2 (`~)2?d  const:"2`?1;

(6.21)

provided that d  3. Note that it is the uctuation `?d that helps the convergence. Note also the choice of `~ to have the last factor `?1. We now prove (6.20). Our strategy is as follows. Since " is small, in fact " < "2?4=d , one can expand the exponential of (6.19) provided (2`~ + 1)dG(s)((s) ) is of order 1. Assuming this, one gets (6.20) by expanding the exponential up to the second order in ". Note that the rst order term cancels due to the subtraction of qG(~s+1)(). We now prove that  2 = (2`~ + 1)dG(s)((s) ) is of order 1. As in the proof of Lemma 3.1 it is enough to consider the special case G(s) ((s)) = ((s) ? )2 since G satis es (3.4). If one replaces the canonical Gibbs state d by the grand canonical Gibbs state (namely the product state), it follows from the 37

local central limit theorem that exp[q" 2] is integrable and  is of order 1 (see the proof of Lemma 3.1). To extend this result to d, let

X = (2`~ + 1)d=2 ((s) ? ); Y = [2(` ? 1)`~]?d=2

X

x=2

(x ? ):

Clearly, (2``~ + 1)d=2 ((~s+1) ? ) = X + Y with = (2`~ + 1)d=2(2``~ + 1)?d=2 and = [2(` ? 1)`~]d=2 (2``~ + 1)?d=2. Note that X and Y are independent random variables (w.r.t. d ) and the densities g(x) and h(y) of X and Y respectively are given by the local central limit theorem [19, 18] as 2 2 g(x) = p 1 expf? 2x2 g[1 + O(`~?d=2)]; h(y) = p 1 expf? 2y2 g[1 + O(`~?d=2)]: (6.22) 2 2 where  is the variance of  ?  w.r.t. d . Therefore, the density of X w.r.t.d is simply the marginal density of X conditioned on X + Y = 0. Since and are bounded by a constant depending only on ` (and independent of `~ and "), one can prove easily from (6.22) that exp[const:(`)X 2 ] is integrable w.r.t. d for some const(`) depending on ` only. This provides the bound one needs to complete the proof of (6.20). Case 2, s=0. From the regular perturbation theory [20], U(1) is bounded by

U (1) 

 "2 q 2

Z

fV`(G; ;`)dk + const:(`)O("4):

We can now average over  to have an upper bound on U (0) . Together with the bound on U (s) for s  1 in (6.21), we conclude Theorem 4.6 by averaging w.r.t.J .

Acknowledgements: We thank S. Olla and H. Spohn for very helpful discussions. We

also thank J. Lebowitz and IHES, where part of this work was done, for the kind hospitality.

References

[1] H. Spohn, Large Scale Dynamics of Interacting Particles, Springer-Verlag New York (1991). [2] A. De Masi and E. Presutti, \Mathematical Methods for Hydrodynamic Limits", Springer-Verlag, Berlin, Lect. Notes Math., 1501, (1991). 38

[3] A. De Masi, E. Presutti and E. Scacciatelli, \The weakly asymmetric simple exclusion", Ann. Instit. H. Poincare A, 25, 1{38, (1989). [4] C. Kipnis, S. Olla and R. S. R. Varadhan, \Hydrodynamics and large deviations for simple exclusion process", Commun. Pure Appl. Math,42, 115{137, (1989). [5] J. Fritz, On the di usive nature of entropy ow in in nite systems: remarks to a paper by Guo{Papanicolau{Varadhan. Commun. Math. Phys. 133 331-352 (1990). [6] H. T. Yau, Metastability of Ginzburg-Landau model with a conservation law, preprint (1992). [7] A. De Masi, R. Esposito, and J. L. Lebowitz, \Incompressible Navier-Stokes and Euler limits of the Boltzmann equation", Commun. Pure Appl. Math., 42, 1189{1214, (1989). [8] C. Bardos, , F. Golse, D. Levermore, \Fluid dynamical limits of kinetic equations I. Formal derivations", J. Stat. Phys., 63, 323{344, (1991). [9] R .Esposito and R. Marra in preparation. [10] H. T. Yau, \Relative entropy and hydrodynamics of Ginsburg-Landau models" Letters Math. Phys., 22, 63{80, (1991). [11] S. Olla, S. R. S. Varadhan and H. T. Yau, \Hydrodynamical limit for a Hamiltonian system with weak noise", (1991). [12] S. R. S. Varadhan, \Nonlinear di usion limit for a system with nearest neighbor interactions II", in Proc. Taniguchi Symp., Kyoto, (1990). [13] J. Quastel \Di usion of color in the simple exclusion process", Commun. Pure Appl. Math., 40, 623{679, (1992). [14] L. Xu, \Di usion limit for the lattice gas with short range interactions", Thesis, New York University, (1993). [15] R. S. R. Varadhan, H. T. Yau, in preparation. [16] M. Guo, G. C. Papanicolau and S. R. S. Varadhan, \Non linear di usion limit for a system with nearest neighbor interactions", Comm. Math. Phys. 118, 31{59, (1988). [17] C. C. Chang and H. T. Yau, \Fluctuations of one-dimensional Ginzburg-Landau models in non equilibrium", Commun. Math. Phys. 145, 209{234, (1992). [18] V. V .Petrov, Sums of independent random variables Springer Berlin (1975). [19] S. Lu, H. T. Yau, \Logarithmic Sobolev inequality for the Kawasaki dynamics", in preparation. [20] M. Reed and B. Simon, Methods of Modern Mathematical Physics 1-4, Academic Press, New York,

39

Suggest Documents