
Kinetic Theory Estimates for the Kolmogorov-Sinai Entropy, and the Largest Lyapunov Exponents for Dilute, Hard-Ball Gases and for Dilute, Random Lorentz Gases

H. van Beijeren and R. van Zon

arXiv:chao-dyn/9909034v2 6 Mar 2000

Institute for Theoretical Physics, University of Utrecht, Princetonplein 5, 3584 CC Utrecht, The Netherlands

J. R. Dorfman
Institute for Physical Science and Technology, and Department of Physics, University of Maryland, College Park, MD 20742, USA

(February 5, 2008)

The kinetic theory of gases provides methods for calculating Lyapunov exponents and other quantities, such as Kolmogorov-Sinai entropies, that characterize the chaotic behavior of hard-ball gases. Here we illustrate the use of these methods for calculating the Kolmogorov-Sinai entropy, and the largest positive Lyapunov exponent, for dilute hard-ball gases in equilibrium. The calculation of the largest Lyapunov exponent makes interesting connections with the theory of propagation of hydrodynamic fronts. Calculations are also presented for the Lyapunov spectrum of dilute, random Lorentz gases in two and three dimensions, which are considerably simpler than the corresponding calculations for hard-ball gases. The article concludes with a brief discussion of some interesting open problems.

I. INTRODUCTION

The purpose of this article is to demonstrate that familiar methods from the kinetic theory of gases can be extended in order to provide good estimates for the chaotic properties of dilute, hard-ball gases.¹ The kinetic theory of gases has a long history, extending over a period of a century and a half, and is responsible for many central insights into, and results for, the properties of gases, both in and out of thermodynamic equilibrium [1]. Strictly speaking, there are two familiar versions of kinetic theory, an informal version and a formal version. The informal version is based upon very elementary considerations of the collisions suffered by molecules in a gas, and upon elementary probabilistic notions regarding the velocity and free path distributions of the molecules. In the hands of Maxwell, Boltzmann, and others, the informal version of kinetic theory led to such important predictions as the independence of the viscosity of a gas from its density at low densities, and to qualitative results for the equilibrium thermodynamic properties, the transport coefficients, and the structure of microscopic boundary layers in a dilute gas. The more formal theory is also due to Maxwell and Boltzmann, and may be said to have had its beginning with the development of the

¹ We will use the term hard-ball to denote hard-core systems in any number of dimensions, rather than using the term hard-disk for two-dimensional, hard-core systems, etc.


Boltzmann transport equation in 1872 [2]. At that time Boltzmann obtained, by heuristic arguments, an equation for the time dependence of the spatial and velocity distribution function for particles in the gas. This equation provided a formal foundation for the informal methods of kinetic theory. It leads directly to the Maxwell-Boltzmann velocity distribution for the gas in equilibrium. For non-equilibrium systems, the Boltzmann equation leads to a version of the Second Law of Thermodynamics (the Boltzmann H-theorem), as well as to the Navier-Stokes equations of fluid dynamics, with explicit expressions for the transport coefficients in terms of the intermolecular potentials governing the interactions between the particles in the gas [3]. It is not an exaggeration to state that the kinetic theory of gases was one of the great successes of nineteenth century physics. Even now, the Boltzmann equation remains one of the main cornerstones of our understanding of nonequilibrium processes in fluid as well as solid systems, both classical and quantum mechanical. It continues to be a subject of investigation in both the mathematical and physical literature, and its predictions often serve as a way of distinguishing different molecular models employed to calculate gas properties. However, there is still no rigorous derivation of the Boltzmann equation that demonstrates its validity over the long time scales typically used in applications. Nevertheless, the Boltzmann equation has been generalized to higher density, and insofar as they are available, the predictions of the generalized Boltzmann equation are in accord with experiments and with numerical simulations of the properties of moderately dense gases [4].
In spite of the many successes of the Boltzmann equation it cannot be a strict consequence of mechanics, since it is not time reversal invariant, as are the equations of mechanics, and it does not exhibit other mechanical phenomena, such as Poincaré recurrences [5]. Boltzmann realized that the equation has to be understood as representing the typical behavior of a gas, as sampled from an ensemble of similarly prepared gases, rather than the exact behavior of a particular laboratory gas. He also understood that the fluctuations about the typical behavior should be very small, and not important for laboratory experiments. To support his arguments, he developed the foundations of statistical mechanics, introducing what we now call the micro-canonical ensemble. This ensemble is described by giving all systems in it a fixed total energy, and then assuming that the probability of finding a system in a certain small region on the constant energy surface is proportional to the dynamically invariant measure of the small region, given by the Lebesgue measure of the region divided by the magnitude of the gradient of the Hamiltonian function at the point of interest on the surface. As we know, this micro-canonical ensemble forms the starting point for all statistical calculations of the thermodynamic properties of fluid and many other systems. In his attempt to provide a mechanical argument for the effectiveness of the microcanonical ensemble for calculating the thermodynamic properties of fluids, Boltzmann formulated the ergodic hypothesis, which, in its modern form, states that the time average of any Lebesgue-integrable dynamic quantity of an isolated, many particle system approaches the ensemble average of the quantity, taken with respect to the micro-canonical ensemble [6]. This hypothesis is the subject of several articles in this Volume, but its value for the foundations of statistical thermodynamics is often questioned [7].
The questionable points can be summarized in a few items: (1) The ergodic hypothesis applies to classical systems while nature is fundamentally quantum mechanical. (2) Even granting the approximate validity of classical mechanics for many purposes, no laboratory system is truly isolated from the rest of the universe. Instead, laboratory systems are constantly perturbed by outside influences,

and these sources of randomness, together with the simple laws of large numbers for systems with many degrees of freedom, may be responsible for the utility of the micro-canonical ensemble for the calculation of equilibrium properties. (3) Even granting the ability to isolate a laboratory system from the rest of the universe, the time it would take for a system’s phase space trajectory to sample all of the available phase space on the constant energy surface is just too long for ergodic behavior to be a physically reasonable explanation for the effectiveness of the micro-canonical ensemble. It seems more likely that the equilibrium behavior of a system of many particles depends on a number of these factors, including perturbations from the environment, the fact that thermodynamic systems have a large number of degrees of freedom, and the fact that the physically relevant quantities are projections of phase space quantities onto a subspace of a few dimensions. Thus even though the time scale for ergodic behavior on the full phase space may be unreasonably long, the projected behavior on relevant subspaces may not take very much time for the establishment of equilibrium values of the measurable quantities, such as pressure, temperature, etc. Pending the further clarification of these and similar issues, it seems fair to say that the complete understanding of the reasons for the validity of the micro-canonical ensemble as the basic understructure for statistical thermodynamics has yet to be achieved. In addition to resolving the various issues needed to complete our understanding of the equilibrium behavior of fluids, we would also like to understand the dynamics of the approach to thermodynamic equilibrium on as deep a level as possible. Such an understanding would enable us to provide a justification of the Boltzmann equation and its many generalizations, as well as of the successful use of hydrodynamic or stochastic equations in nonequilibrium situations. 
The counterpart of Boltzmann’s ergodic hypothesis for nonequilibrium phenomena is the assumption that a dynamical system should be mixing, in the sense of Gibbs. That is, given some initial distribution of points on the constant energy surface in phase space, in a region of positive measure, a system is mixing if the distribution of the points eventually becomes uniform over the energy surface, with respect to the invariant measure [8]. If one can prove that an isolated dynamical system with a large number of degrees of freedom is mixing, then one can show that the phase space distribution function for the system will approach an equilibrium distribution at long times, and consequently, quantities averaged with respect to this distribution function will approach their equilibrium values. Needless to say, the same concerns listed above suggest that the approach to equilibrium may be a very complicated affair with a number of possible factors acting in concert or individually as circumstances require. One might also ask why a phase space distribution function is needed at all, since a laboratory system corresponds to a point in phase space at any given time, and not to a distribution of points in phase space. The usual argument given in statistical mechanics texts is that it is easier to describe the average behavior of an ensemble of points than to solve the complete set of the equations of motion of a single system, and to draw conclusions from such a solution. Of course, computer simulations of fluid systems are often attempts to solve the equations of motion for an individual system, but they too are influenced by noise in the form of round-off errors, and so do not really describe an isolated dynamical system, unless one resorts to certain lattice-type models that can be treated by integer arithmetic on the computer. 
Our hope, often unstated, is that the properties we explore using the methods of statistical mechanics are somehow typical of the behavior of an individual system in the laboratory, even if we know that this cannot be strictly true.

We hope, and occasionally can prove, that the deviations from typical behavior are small. In any case, it is clearly important to know what role the dynamics of the fluid system might play in the approach to equilibrium. Certainly the equilibrium and nonequilibrium properties of the system are sensitive to the underlying molecular structure of the fluid and to the interactions between the particles of which it is composed [9,10]. Our experience with the Boltzmann equation also assures us that the role of molecular collisions cannot be overestimated, even if we are not entirely certain why the Boltzmann equation works for an individual laboratory system. Therefore, when faced with a plausible model of a fluid, one would like to know if the dynamics of the model is ergodic, mixing, K, or Bernoulli. It is of considerable interest to establish the dynamical properties of a large isolated system of particles as the starting point for our investigation of the foundations of nonequilibrium statistical mechanics, and then to look at the consequences of: (a) external noise on the system, and (b) the restriction of our interest to only a small class of functions of the dynamical variables needed for physical applications. As a step in this direction we show here that kinetic theory can be used to demonstrate (not prove) that isolated hard-ball systems are chaotic dynamical systems [11]. We will show, in fact, that, at low densities at least, hard-ball systems have a positive Kolmogorov-Sinai entropy, which we can estimate, and that the largest positive Lyapunov exponent can also be estimated. These estimates, in fact, are in good agreement with the results of numerical simulations. What these estimates are unable to tell us is whether or not the systems are indeed ergodic, mixing, K, or Bernoulli, since we cannot use these kinetic theory techniques to show that the phase space consists of only one invariant region and not a countable number of them.
In fact there is some evidence that if the intermolecular potential is not discontinuous but smoother than a hard sphere potential, then there may be some elliptic islands of positive measure in the phase space [12]. Strictly speaking, the ergodic hypothesis is not valid for such potentials. It remains to be seen whether or not this phenomenon is ultimately of some importance for statistical mechanics, especially for systems with large numbers of particles, and at temperatures where quantum effects may be neglected. As a by-product of the analysis given here we will also be able to calculate the largest Lyapunov exponents and Kolmogorov-Sinai entropy for a dilute Lorentz gas with one moving particle in an array of randomly placed (but non-overlapping) fixed hard-ball scatterers [13,14]. This system is much easier to analyze than a gas where all the particles are in motion, and was the first type of hard-ball system whose chaotic dynamics were studied in detail, either by rigorous methods or by kinetic theory. The plan of this article is as follows: In Section II we will set up the equations of motion for hard-ball systems that will enable us to analyze the separation of initially close (infinitesimally close, actually) trajectories in phase space. The dynamics of the separation of trajectories in phase space is, of course, an essential ingredient in analyzing Lyapunov exponents and Kolmogorov-Sinai (KS) entropies. In Section III we will apply these results to a calculation of the KS entropy of a hard-ball gas using informal kinetic theory rather than a formal Boltzmann equation approach. This informal method gives the leading density behavior of the KS entropy, but more formal methods are needed to go further. We will outline the more formal method based upon an extension of the Boltzmann equation, but we will not go too deeply into its solution, since it rapidly becomes very technical.
In Section IV we outline a method for estimating the largest Lyapunov exponent for a dilute hard-ball

gas using a mean field theory based upon the Boltzmann equation [15,16]. In Section V we apply these methods, both informal and formal, to the calculation of the KS entropy and largest Lyapunov exponents for a dilute Lorentz gas with fixed hard-ball scatterers. We conclude in Section VI with remarks and a discussion of outstanding open problems.

II. THE DYNAMICS OF HARD-BALL SYSTEMS

In this section we will present a method due to Dellago, Posch and Hoover [17] for describing the dynamical behavior of infinitesimally close trajectories in phase space for hard-ball systems. We begin with a consideration of a system of N identical hard balls in d dimensions, each of mass m, and diameter a. Their positions and velocities are denoted by $\vec r_i$ and $\vec v_i$, respectively, where $i = 1, 2, \ldots, N$ labels the particles. For simplicity we can imagine that the particles are all placed in some cubical volume $V = L^d$ and that periodic boundary conditions are applied at the faces of the cube. The dynamics consists of periods of free motion of the particles separated by instantaneous binary collisions between some pair of particles. During free motion, the equations of the system are
$$\dot{\vec r}_i = \vec v_i, \qquad \dot{\vec v}_i = 0. \eqno(1)$$

At the instant of collision between particles i and j, say, there is an instantaneous change in the velocities. It is convenient to write the dynamics in terms of the center of mass motion $(\vec R_{ij}, \vec V_{ij})$ and relative motion $(\vec r_{ij}, \vec v_{ij})$,
$$\vec R_{ij} = (\vec r_i + \vec r_j)/2, \qquad \vec V_{ij} = (\vec v_i + \vec v_j)/2, \qquad \vec r_{ij} = \vec r_i - \vec r_j, \qquad \vec v_{ij} = \vec v_i - \vec v_j.$$

The change can be described by the equations
$$\vec R\,'_{ij} = \vec R_{ij}, \qquad \vec r\,'_{ij} = \vec r_{ij} \equiv a\hat\sigma, \qquad \vec V\,'_{ij} = \vec V_{ij}, \qquad \vec v\,'_{ij} = M_{\hat\sigma}\cdot\vec v_{ij},$$
where the matrix $M_{\hat\sigma}$ describes a specular reflection on a plane with normal $\hat\sigma$, i.e., the unit vector in the direction from particle j to i at the instant of the (i, j) collision,
$$M_{\hat\sigma} \equiv \mathbb 1 - 2\hat\sigma\hat\sigma. \eqno(2)$$

Nondotted products of vectors are dyadic products and $\mathbb 1$ is the identity matrix. In terms of the individual particle velocities, the dynamics is described by
$$\vec v\,'_i = \vec v_i - (\vec v_{ij}\cdot\hat\sigma)\hat\sigma, \qquad \vec v\,'_j = \vec v_j + (\vec v_{ij}\cdot\hat\sigma)\hat\sigma. \eqno(3)$$
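The collision rule of Eq. (3) lends itself to a short numerical illustration. The following Python sketch is ours, not part of the original article, and the sample velocities are arbitrary test values:

```python
import math

def collide(vi, vj, sigma):
    """Post-collision velocities of two equal-mass hard balls, Eq. (3).

    vi, vj : velocities of particles i and j (lists of floats)
    sigma  : unit vector from the center of particle j to that of
             particle i at the moment of contact
    """
    vij = [x - y for x, y in zip(vi, vj)]            # relative velocity
    proj = sum(x * s for x, s in zip(vij, sigma))    # v_ij . sigma
    vi_new = [x - proj * s for x, s in zip(vi, sigma)]
    vj_new = [y + proj * s for y, s in zip(vj, sigma)]
    return vi_new, vj_new

# A head-on collision in d = 2: the particles exchange the velocity
# components along the line of centers.
vi, vj = [1.0, 0.5], [-1.0, 0.5]
sigma = [1.0, 0.0]
vi_p, vj_p = collide(vi, vj, sigma)

# The specular rule conserves total momentum and total kinetic energy.
assert all(math.isclose(a + b, c + d)
           for a, b, c, d in zip(vi, vj, vi_p, vj_p))
assert math.isclose(sum(x * x for x in vi + vj),
                    sum(x * x for x in vi_p + vj_p))
```

For this test case $\vec v_{ij}\cdot\hat\sigma = 2$, so the particles simply exchange their x-components of velocity.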

These equations (plus the dynamics at the boundaries) are all that one needs in order to determine the trajectory of the system in phase space. However, in order to determine Lyapunov exponents and other chaotic properties of the system, we need to examine two infinitesimally close trajectories, and obtain the equations that govern their rate of separation.

Equations for the rate of separation of two phase space trajectories of hard-ball systems have been developed by Sinai using differential geometry [9,18]. This leads to an expression for the rate of separation in terms of an operator that is expressed as a continued fraction. Here we adopt a somewhat different but equivalent approach that is nicely suited to kinetic theory calculations. To obtain the equations we need, we consider two infinitesimally close phase space points, Γ (the reference point) and Γ + δΓ (the adjacent point), given by $(\vec r_1, \vec v_1, \vec r_2, \vec v_2, \ldots, \vec r_N, \vec v_N)$ and by $(\vec r_1 + \delta\vec r_1, \vec v_1 + \delta\vec v_1, \vec r_2 + \delta\vec r_2, \vec v_2 + \delta\vec v_2, \ldots, \vec r_N + \delta\vec r_N, \vec v_N + \delta\vec v_N)$, respectively. The 2N infinitesimal deviation vectors $\delta\vec r_i, \delta\vec v_i$ describe the displacement in phase space between the two trajectories. The velocity deviation vectors are not all independent, since we will restrict their values by requiring that the total momentum and total kinetic energy of the two trajectories be the same. That is,
$$\sum_{i=1}^N \delta\vec v_i = 0, \qquad \sum_{i=1}^N \vec v_i\cdot\delta\vec v_i = 0, \eqno(4)$$
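Numerically, the constraints of Eq. (4) can be imposed on an arbitrary set of velocity deviation vectors by projection. The sketch below is our own, with random test data; it uses the fact that when the total momentum vanishes, the normals of the momentum and energy constraints are mutually orthogonal, so the two projections can be done one after the other:

```python
import random

random.seed(1)
N, d = 8, 2

# Random velocities, shifted so that the total momentum vanishes; then the
# constraint normals of Eq. (4) are mutually orthogonal in Nd dimensions.
v = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(N)]
for al in range(d):
    mean = sum(vi[al] for vi in v) / N
    for vi in v:
        vi[al] -= mean

# An arbitrary set of velocity deviation vectors.
dv = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(N)]

# Project out the momentum constraint: sum_i delta v_i = 0.
for al in range(d):
    mean = sum(x[al] for x in dv) / N
    for x in dv:
        x[al] -= mean

# Project out the energy constraint: sum_i v_i . delta v_i = 0.
vv = sum(vi[al] * vi[al] for vi in v for al in range(d))
vdv = sum(vi[al] * x[al] for vi, x in zip(v, dv) for al in range(d))
for vi, x in zip(v, dv):
    for al in range(d):
        x[al] -= (vdv / vv) * vi[al]

# Both constraints of Eq. (4) now hold; the second projection does not
# spoil the first, because the total momentum is zero.
assert all(abs(sum(x[al] for x in dv)) < 1e-12 for al in range(d))
assert abs(sum(vi[al] * x[al] for vi, x in zip(v, dv)
               for al in range(d))) < 1e-12
```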

where, in the energy equation, we have neglected second order terms in the velocity deviations. We will not use these equations in any serious way, since we will be interested in the limit of large N, in which case they are not very important. In the case of the Lorentz gas, to be discussed in a later section, the conservation of momentum equation is not relevant, but we will require that both of the two trajectories have the same energy, so that the velocity deviation vector is orthogonal to the velocity itself. Between collisions on the displaced trajectory, the deviations satisfy
$$\delta\dot{\vec r}_i = \delta\vec v_i, \qquad \delta\dot{\vec v}_i = 0. \eqno(5)$$
The treatment of the effect of the collisions on the deviation vectors is more complicated. One assumes that the two trajectories are so close together that the same sequence of binary collisions takes place on each trajectory over arbitrarily long times. Let us suppose that we consider a collision between particles i and j on each trajectory. Now, since the trajectories are slightly displaced, the (i, j) collision will take place at slightly different times on each trajectory. This slight (in fact infinitesimal) time displacement must be included in the analysis of the dynamical behavior of the deviation vectors at a collision. The dynamics of the deviations can be considered separately for the center-of-mass coordinates and the relative coordinates. The center-of-mass coordinates behave as in free flight, so that
$$\delta\vec R\,'_{ij} = \delta\vec R_{ij}, \qquad \delta\vec V\,'_{ij} = \delta\vec V_{ij}.$$
For the relative coordinates, we consider the relative coordinates of two infinitesimally close trajectories, $\vec r_{ij}, \vec v_{ij}$ and $\vec r^{\,*}_{ij}, \vec v^{\,*}_{ij}$:

reference trajectory (collision at t = 0):
$t < 0$: $\vec v_{ij}(t)$ constant, $\vec r_{ij}(t) = \vec v_{ij}\, t + a\hat\sigma$;
$t > 0$: $\vec v_{ij}(t) = M_{\hat\sigma}\cdot\vec v_{ij}$, $\vec r_{ij}(t) = M_{\hat\sigma}\cdot\vec v_{ij}\, t + a\hat\sigma$.

adjacent trajectory (collision at $t = \delta t$):
$t < \delta t$: $\vec v^{\,*}_{ij}(t)$ constant, $\vec r^{\,*}_{ij}(t) = \vec v^{\,*}_{ij}(t - \delta t) + a\hat\sigma^*$;
$t > \delta t$: $\vec v^{\,*}_{ij}(t) = M_{\hat\sigma^*}\cdot\vec v^{\,*}_{ij}$, $\vec r^{\,*}_{ij}(t) = M_{\hat\sigma^*}\cdot\vec v^{\,*}_{ij}(t - \delta t) + a\hat\sigma^*$.

The transformations at a collision in terms of the deviation vectors are found with the aid of
$$\delta\vec r_{ij} = \vec r^{\,*}_{ij}(0^-) - \vec r_{ij}(0^-), \qquad \delta\vec v_{ij} = \vec v^{\,*}_{ij}(0^-) - \vec v_{ij}(0^-),$$
$$\delta\vec r\,'_{ij} = \vec r^{\,*}_{ij}(\delta t^+) - \vec r_{ij}(\delta t^+), \qquad \delta\vec v\,'_{ij} = \vec v^{\,*}_{ij}(\delta t^+) - \vec v_{ij}(\delta t^+),$$

where the superscripts + and − indicate immediately after and immediately before the collision, respectively. We should use Eq. (5) in between collisions, i.e., from the last collision up to $t = 0^-$. Then we use the collision rule, to be derived in this section, that links $\delta\vec r\,'_i$ and $\delta\vec v\,'_i$ to $\delta\vec r_i$ and $\delta\vec v_i$. Eq. (5) is used again from $t = \delta t^+$, starting with $\delta\vec r\,'_i$ and $\delta\vec v\,'_i$. It is also possible to use Eq. (5) from $t = 0^+$, as if the values of $\delta\vec r\,'_{ij}$ and $\delta\vec v\,'_{ij}$ in the above equation were valid at $t = 0^+$. The difference is an erroneous additional term $\delta t\,\delta\vec v\,'_{ij}$ in $\delta\vec r_{ij}$ when we apply Eq. (5) from $t = 0^+$, but this additional term is quadratic in the deviations and therefore negligible. In this way, the collision may also be viewed as instantaneous for the deviation vectors. We can now write
$$\delta\vec r_{ij} = \hat\sigma^* a - \vec v^{\,*}_{ij}\delta t - \hat\sigma a = \delta\hat\sigma\, a - \vec v_{ij}\delta t,$$
where $\delta\hat\sigma = \hat\sigma^* - \hat\sigma$, and in the last equality we neglect terms quadratic in the deviations. Because $|\hat\sigma| = |\hat\sigma^*| = 1$, we have $\delta\hat\sigma\cdot\hat\sigma = 0$. Taking the inner product of the above formula with $\hat\sigma$ gives
$$\delta t = -\frac{\hat\sigma\cdot\delta\vec r_{ij}}{\hat\sigma\cdot\vec v_{ij}}, \qquad \delta\hat\sigma = \frac{(\hat\sigma\cdot\vec v_{ij})\mathbb 1 - \vec v_{ij}\hat\sigma}{a(\hat\sigma\cdot\vec v_{ij})}\cdot\delta\vec r_{ij}.$$

Substitution of these results into $\delta\vec r\,'_{ij} = \hat\sigma^* a - \hat\sigma a - M_{\hat\sigma}\cdot\vec v_{ij}\delta t$ and $\delta\vec v\,'_{ij} = M_{\hat\sigma^*}\cdot\vec v^{\,*}_{ij} - M_{\hat\sigma}\cdot\vec v_{ij}$ yields
$$\delta\vec r\,'_{ij} = M_{\hat\sigma}\cdot\delta\vec r_{ij}, \qquad \delta\vec v\,'_{ij} = -2 Q_{\hat\sigma}(i,j)\cdot\delta\vec r_{ij} + M_{\hat\sigma}\cdot\delta\vec v_{ij},$$
where $Q_{\hat\sigma}(i,j)$ is the matrix
$$Q_{\hat\sigma}(i,j) = \frac{[(\hat\sigma\cdot\vec v_{ij})\mathbb 1 + \hat\sigma\vec v_{ij}]\cdot[(\hat\sigma\cdot\vec v_{ij})\mathbb 1 - \vec v_{ij}\hat\sigma]}{a(\hat\sigma\cdot\vec v_{ij})}. \eqno(6)$$

In terms of the individual particle deviations, the collision dynamics reads
$$\delta\vec r\,'_i = \delta\vec r_i - (\delta\vec r_{ij}\cdot\hat\sigma)\hat\sigma, \qquad \delta\vec v\,'_i = \delta\vec v_i - (\delta\vec v_{ij}\cdot\hat\sigma)\hat\sigma - Q_{\hat\sigma}(i,j)\cdot\delta\vec r_{ij},$$
$$\delta\vec r\,'_j = \delta\vec r_j + (\delta\vec r_{ij}\cdot\hat\sigma)\hat\sigma, \qquad \delta\vec v\,'_j = \delta\vec v_j + (\delta\vec v_{ij}\cdot\hat\sigma)\hat\sigma + Q_{\hat\sigma}(i,j)\cdot\delta\vec r_{ij}. \eqno(7)$$

Eqs. (5), (6) and (7) are the dynamical equations that govern the time dependence of the deviation vectors, $\{\delta\vec r_i, \delta\vec v_i\}$. They have to be solved together with the equations for the $\{\vec r_i, \vec v_i\}$ in order to have a completely determined system. That is, in order to follow the deviation vectors in time, one needs to know when, where, and with what velocities the various collisions take place in the gas. In the next section, we will use these equations to provide a first estimate of the Kolmogorov-Sinai entropy for a dilute gas of hard balls in d dimensions.
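As a consistency check on Eqs. (3), (6) and (7) (our own sketch, with arbitrary test values, not part of the paper), one can verify numerically that a binary collision preserves the constraints of Eq. (4): the sum of the velocity deviations and the quantity $\sum_i \vec v_i\cdot\delta\vec v_i$ are unchanged.

```python
import random

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]

def q_matrix(vij, sigma, a):
    """The matrix Q_sigma(i,j) of Eq. (6)."""
    d = len(vij)
    c = dot(sigma, vij)
    A = [[(c if p == q else 0.0) + sigma[p] * vij[q] for q in range(d)]
         for p in range(d)]
    B = [[(c if p == q else 0.0) - vij[p] * sigma[q] for q in range(d)]
         for p in range(d)]
    return [[sum(A[p][k] * B[k][q] for k in range(d)) / (a * c)
             for q in range(d)] for p in range(d)]

random.seed(2)
a = 1.0
vi, vj = [1.0, 0.2, -0.3], [-0.5, 0.1, 0.4]
sigma = [-1.0, 0.0, 0.0]          # unit vector from particle j to particle i
dvi = [random.gauss(0.0, 0.01) for _ in range(3)]
dvj = [random.gauss(0.0, 0.01) for _ in range(3)]
dri = [random.gauss(0.0, 0.01) for _ in range(3)]
drj = [random.gauss(0.0, 0.01) for _ in range(3)]

vij, drij, dvij = sub(vi, vj), sub(dri, drj), sub(dvi, dvj)
Q = q_matrix(vij, sigma, a)
Qdr = [dot(row, drij) for row in Q]
t = dot(dvij, sigma)
c = dot(vij, sigma)

# Eq. (3) for the velocities and Eq. (7) for the velocity deviations.
vi_p = [vi[k] - c * sigma[k] for k in range(3)]
vj_p = [vj[k] + c * sigma[k] for k in range(3)]
dvi_p = [dvi[k] - t * sigma[k] - Qdr[k] for k in range(3)]
dvj_p = [dvj[k] + t * sigma[k] + Qdr[k] for k in range(3)]

# Both constraints of Eq. (4) are preserved by the collision.
assert max(abs(x - y)
           for x, y in zip(add(dvi_p, dvj_p), add(dvi, dvj))) < 1e-12
before = dot(vi, dvi) + dot(vj, dvj)
after = dot(vi_p, dvi_p) + dot(vj_p, dvj_p)
assert abs(after - before) < 1e-12
```

The cancellation in the energy constraint follows from the identity $(M_{\hat\sigma}\cdot\vec v_{ij})\cdot Q_{\hat\sigma}(i,j) = 0$, which holds for the matrix in Eq. (6).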

III. ESTIMATES OF THE KOLMOGOROV-SINAI ENTROPY FOR A DILUTE GAS

We consider the hard-ball system from the previous section when the gas is dilute, i.e. $na^d \ll 1$, with n the density N/V, and in equilibrium with no external forces acting on it. Since the hard-ball system is a conservative, Hamiltonian system, one can easily show that all nonzero Lyapunov exponents are paired with a corresponding exponent of identical magnitude but of opposite sign [9,10,19]. Obviously there are an equal number of positive and negative Lyapunov exponents, and the sum of all the Lyapunov exponents must be equal to zero. For the system we consider here, the Kolmogorov-Sinai (KS) entropy is equal to the sum of the positive Lyapunov exponents, by Pesin's theorem [20]. We will compute the KS entropy per particle in the thermodynamic limit, for a dilute hard-ball gas, using methods of kinetic theory [11,21]. To carry out this calculation we will use the fact that when an infinitesimally small 2Nd-dimensional volume in phase space is projected onto the Nd-dimensional subspace corresponding to the velocity directions, the volume of this projection must grow exponentially with time t, in the long time asymptotic limit, with an exponent that is the sum of all of the positive Lyapunov exponents. That is, when we denote this projected volume element by $\delta V_v(t)$, as $t\to\infty$,
$$\frac{\delta V_v(t)}{\delta V_v(0)} = \exp\Big(t \sum_{\lambda_i > 0} \lambda_i\Big). \eqno(8)$$

The same result also holds for another small volume element, denoted by $\delta V_r(t)$, that is the projection onto the Nd-dimensional subspace corresponding to the position directions.² The advantage of using an element in velocity space resides in the fact that the velocity deviation vectors do not change during the time intervals between collisions in the gas, but only at the instants of collisions. We will make use of this fact shortly. Our first step will be to obtain a general formula for the KS entropy of a hard-ball system, which, in principle, should describe the complete density dependence of this quantity. Then we will apply this result to the low density case, using both informal and formal kinetic theory methods.

A. The KS Entropy as an Ensemble Average

Our object here is to express the KS entropy for a hard-ball system as an equilibrium ensemble average of an appropriate microscopic quantity, which in turn can be evaluated by

² It is worth pointing out that while both $\delta V_v$ and $\delta V_r$ grow exponentially in time, their combined volume, i.e. the original 2Nd-dimensional volume, stays constant. This seemingly paradoxical statement can be understood by realizing that almost all projections of the 2Nd-dimensional volume onto Nd-dimensional subspaces will grow exponentially in time, with an exponent given by the sum of the largest Nd Lyapunov exponents.
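The seemingly paradoxical statement in the footnote can be illustrated with a toy example of our own (a single hyperbolic matrix standing in for the full phase-space dynamics): the determinant, which plays the role of the full volume, stays at unity, while the projection of an evolved deviation vector onto one axis grows at the rate of the positive Lyapunov exponent $\ln[(3+\sqrt5)/2]$.

```python
import math

# Hyperbolic toy map with unit determinant: the full 2-volume of a pair
# of deviation vectors is conserved, but a 1-dimensional projection grows.
M = [[2.0, 1.0], [1.0, 1.0]]
lam_plus = math.log((3.0 + math.sqrt(5.0)) / 2.0)   # positive exponent

u = [1.0, 0.0]        # an arbitrary initial deviation vector
n = 60
for _ in range(n):
    u = [M[0][0] * u[0] + M[0][1] * u[1],
         M[1][0] * u[0] + M[1][1] * u[1]]

# The projection onto the x-axis grows as exp(n * lam_plus) for large n.
rate = math.log(abs(u[0])) / n
assert abs(rate - lam_plus) < 1e-2

# The full volume never grows: det M = 1 exactly.
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert detM == 1.0
```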


standard methods of statistical mechanics. To do this we rewrite Eq. (8) as a time average of a dynamical quantity,
$$\sum_{\lambda_i>0} \lambda_i = \lim_{t\to\infty} \frac{1}{t}\ln\frac{\delta V_v(t)}{\delta V_v(0)} = \lim_{t\to\infty}\frac{1}{t}\int_0^t d\tau\, \frac{d}{d\tau}\ln\frac{\delta V_v(\tau)}{\delta V_v(0)} = \left\langle\frac{d}{d\tau}\ln\frac{\delta V_v(\tau)}{\delta V_v(0)}\right\rangle, \eqno(9)$$

where the angular brackets denote an average over an appropriate equilibrium ensemble to be specified further on. Here we have assumed that the hard-ball system under consideration is ergodic, so that long time averages may be replaced by ensemble averages. Now we can use elementary kinetic theory arguments to give a somewhat more explicit form to the ensemble average appearing in Eq. (9). Since the volume element in velocity space does not change during the time between any two binary collisions, and since the binary collisions in a hard-ball gas are instantaneous, the ensemble average of the time derivative may be written as
$$\sum_{\lambda_i>0} \lambda_i = \sum_i \ldots$$

… $k_0$ can be identified with the tail of the distribution. To get good statistics (which is difficult because there are not many particles in the head), we measure this distribution at several times, and average the result. As the velocity distribution $P_{\rm head}(\vec v)$ only involves clock values, there is no density dependence and a simulation at one density is enough. We ran a simulation with N = 128 particles and d = 2. The distribution of the clock values was obtained, and a value of $k_0 = 7$ seemed a reasonable start of the tail of the distribution. We have checked that the results do not change much when instead we take $k_0 = 6$ or $k_0 = 8$. Combining Eqs. (65) and (63) gives the explicit prediction for the velocity distribution in the head:



$$P_{\rm head}(v) = \left(0.63943 + 0.16289\,v^2 + 0.004351\,v^4\right) v\, e^{-v^2/2}. \eqno(66)$$
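As a quick numerical check of Eq. (66) (ours, not in the original article): the quoted coefficients normalize the distribution to unity, and the head indeed has a larger mean speed than the Maxwell distribution of the full gas.

```python
import math

def p_head(u):
    """Eq. (66): predicted speed distribution in the head (d = 2)."""
    return (0.63943 + 0.16289 * u**2 + 0.004351 * u**4) \
        * u * math.exp(-u**2 / 2)

def p_maxwell(u):
    """Two-dimensional Maxwell speed distribution, v exp(-v^2/2)."""
    return u * math.exp(-u**2 / 2)

def integrate(f, lo, hi, n=20000):
    """Plain trapezoidal rule; the integrands are negligible beyond v = 12."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + k * h) for k in range(1, n))
    return s * h

norm = integrate(p_head, 0.0, 12.0)
mean_head = integrate(lambda u: u * p_head(u), 0.0, 12.0)
mean_full = integrate(lambda u: u * p_maxwell(u), 0.0, 12.0)

assert abs(norm - 1.0) < 1e-3          # the coefficients normalize Eq. (66)
assert mean_head > mean_full           # the head is faster than the bulk
assert abs(mean_full - math.sqrt(math.pi / 2)) < 1e-6
```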

This prediction has been plotted in Fig. 6 together with the distribution found from the simulations, and the velocity distribution in the whole gas. Even though the statistics is not perfect, there is very good agreement between the two. It should be noted that leaving out the $v^4$ term in Eq. (66) gives only a small difference. It is also evident that the velocity distribution in the head of the clock distribution has a higher mean velocity than the velocity distribution characterizing the full gas, which is also plotted in Fig. 6.

FIG. 6. Comparison of the velocity distributions in the leading edge. The histogram results from the simulations for N = 128, in d = 2 dimensions. The solid curve is a plot of the prediction (66). The dotted curve is the two-dimensional Maxwell distribution, $v\exp(-v^2/2)$.


V. THE DILUTE, RANDOM HARD-BALL LORENTZ GAS

A very useful and simple example of a dynamical system which is a dispersing billiard, rather than a semi-dispersing billiard, is provided by the Lorentz gas model. In this model one places fixed, hard balls of radius a in a d-dimensional space, and then considers the motion of a point particle of mass m, and with initial velocity $\vec v$, in the free space between the scatterers [3]. The particle moves freely between collisions and makes specular, energy conserving collisions with the scatterers. We consider here only the case where the scatterers are not allowed to overlap each other. One can imagine that the scatterers are placed with their centers on the sites of a regular lattice, or that the scatterers are placed randomly in space. Here we consider the case where N scatterers are placed randomly in a volume V in such a way that the average distance between scatterers is much greater than their radius a. That is, the scatterers form a quenched, dilute gas with $na^d \ll 1$. In this section we show that the simplifications in the dynamics of this model produce simplifications in the calculations of the Lyapunov exponents and KS entropies of the moving particle, due primarily to the dispersing nature of the collisions of the moving particle with the scatterers. A Lorentz gas in d dimensions may have at most d − 1 positive Lyapunov exponents. That is, the phase space dimension for the moving particle is 2d, the requirement of constant energy removes one degree of freedom, and the direction along the trajectory in phase space has a zero Lyapunov exponent associated with it, since two points on the same trajectory will not separate or approach each other in the course of time. Thus there can be at most 2d − 2 nonzero Lyapunov exponents, of which only d − 1 can be positive, since the exponents come in plus-minus pairs for this Hamiltonian system.
To analyze the Lyapunov spectrum we consider two infinitesimally close trajectories in phase space and follow the spatial and velocity deviation vectors $\delta\vec r, \delta\vec v$ separating the two trajectories in time. By arguments almost identical to (but simpler than) those presented in Section II for the case where all particles move, we have the following equations of motion for the position $\vec r$, velocity $\vec v$, spatial deviation vector $\delta\vec r$, and velocity deviation vector $\delta\vec v$ of the moving particle, between collisions:
$$\dot{\vec r} = \vec v, \qquad \dot{\vec v} = 0, \qquad \delta\dot{\vec r} = \delta\vec v, \qquad \delta\dot{\vec v} = 0. \eqno(67)$$

At a collision with a scatterer, these dynamical quantities change according to
$$\vec r\,' = \vec r, \qquad \vec v\,' = \vec v - 2(\vec v\cdot\hat\sigma)\hat\sigma, \qquad \delta\vec r\,' = M_{\hat\sigma}\cdot\delta\vec r, \qquad \delta\vec v\,' = M_{\hat\sigma}\cdot\delta\vec v - 2 Q_{\hat\sigma}\cdot\delta\vec r. \eqno(68)$$

Again, the primed variables denote values immediately after a collision, while the unprimed ones denote values immediately before the collision. The vector from the center of the scatterer to the moving particle, at collision, is $a\hat\sigma$, and the tensor $Q_{\hat\sigma}$ is given by
$$Q_{\hat\sigma} = \frac{[(\hat\sigma\cdot\vec v)\mathbb 1 + \hat\sigma\vec v]\cdot[(\hat\sigma\cdot\vec v)\mathbb 1 - \vec v\hat\sigma]}{a(\hat\sigma\cdot\vec v)}. \eqno(69)$$

It should be noted that if the velocity deviation vector and the velocity vector are orthogonal before a collision, then they will be orthogonal after the collision as well. That is, if $\vec v\cdot\delta\vec v = 0$, then $\vec v\,'\cdot\delta\vec v\,' = 0$, also. Thus we can take $\delta\vec v$ to be perpendicular to $\vec v$ for all time, and without loss of generality, we can take $\delta\vec r$ to be perpendicular to $\vec v$ as well. Of course, the condition that $\vec v\cdot\delta\vec v = 0$ is simply the statement that the two trajectories are on the same constant energy surface. We will denote the spatial and velocity deviation vectors for the Lorentz gas with a subscript ⊥ to indicate that they are defined in a plane perpendicular⁶ to $\vec v$. It follows that we may replace the $d\times d$ ROC matrix defined by
$$\delta\vec r = \rho\cdot\delta\vec v \eqno(70)$$
by a $(d-1)\times(d-1)$ matrix $\rho_\perp$ defined by
$$\delta\vec r_\perp = \rho_\perp\cdot\delta\vec v_\perp. \eqno(71)$$

For the two dimensional case ρ⊥ is a simple scalar which we denote by ρ. One easily finds that between collisions ρ grows with time as

$$\rho(t) = \rho(0) + t.\tag{72}$$

The change in ρ at a collision satisfies the “mirror” equation [18]

$$\frac{1}{\rho\,'} = \frac{1}{\rho} + \frac{2v}{a\cos\phi}.\tag{73}$$

Here the primes denote values immediately after a collision, and v is the magnitude of the velocity of the particle. If the radius of curvature ρ is positive initially, it will always remain positive, and it also follows from Eq. (73) that the value of vρ′ just after a collision is less than half the radius of the scatterers. Consequently the radius of curvature typically grows to be of the order of the mean free time between collisions, and becomes much smaller immediately after each collision with a scatterer. For three dimensional systems a similar situation results. Now ρ⊥ is a 2 × 2 matrix which satisfies the free motion equation

$$\rho_\perp(t) = \rho_\perp(0) + t\,\mathbf 1,\tag{74}$$

and changes at a collision according to

$$[\rho\,'_\perp]^{-1} = M_{\hat\sigma}\cdot\left\{[\rho_\perp]^{-1} - \frac{2}{a}\left[\frac{v^2\hat\sigma\hat\sigma}{(\vec v\cdot\hat\sigma)} + \vec v\hat\sigma + \hat\sigma\vec v - (\vec v\cdot\hat\sigma)\,\mathbf 1\right]\right\}\cdot M_{\hat\sigma}.\tag{75}$$
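In the two dimensional case, the interplay of the free streaming rule (72) and the mirror equation (73) is easy to see numerically. The short sketch below (plain Python; the parameter values are illustrative, not from the text) iterates them over a sequence of collisions, confirming that vρ′ immediately after each collision stays below (a cos φ)/2, while between collisions ρ grows to the order of the mean free time.

```python
import math, random

random.seed(2)
a, v, n = 1.0, 1.0, 1e-3            # scatterer radius, speed, density (n a^2 << 1)
nu = 2 * n * a * v                  # low density collision frequency in 2D
rho = 1.0 / nu                      # start at a typical value, of order the mean free time

for _ in range(10000):
    tau = random.expovariate(nu)                  # exponentially distributed free flight time
    rho += tau                                    # free streaming, Eq. (72)
    phi = math.asin(random.uniform(-1.0, 1.0))    # impact angle, flux-weighted (~ cos phi)
    rho = 1.0 / (1.0 / rho + 2 * v / (a * math.cos(phi)))   # mirror equation, Eq. (73)
    assert v * rho < 0.5 * a * math.cos(phi)      # post-collision curvature is always small
print("post-collision v*rho:", v * rho, " mean free path:", v / nu)
```

The assertion is exactly the statement in the text: after every collision the radius of curvature collapses to less than half a scatterer radius, however large it had grown during the preceding free flight.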

Here the inverse radius of curvature matrices [ρ′⊥]⁻¹ and [ρ⊥]⁻¹ are defined in planes perpendicular to ~v′ and to ~v, respectively. In the hybrid notation of Eq. (75), in which both d × d matrices and (d − 1) × (d − 1) matrices figure, the inverse matrices in the directions along ~v′ and ~v, respectively, may be defined by [ρ′⊥]⁻¹~v′ = 0 and [ρ⊥]⁻¹~v = 0. The final matrix on the right-hand side of Eq. (75) can then be restricted to the plane perpendicular to ~v′ straightforwardly.

⁶ Of course, the components of δ~v, δ~r in the direction of ~v are not related to the non-zero Lyapunov exponents or the KS entropy, since these components do not grow exponentially.

It is worth pointing out some important differences between the ROC matrices defined here for the Lorentz gas and those defined earlier for the regular gas of moving particles. Here the ROC matrices are defined in a subspace orthogonal to the velocity of the moving particle. Further, the change in the matrix elements at a collision is from a typically large value, on the order of a mean free time, to an always small value, on the order of the time it takes to move a distance equal to half the radius of a scatterer. This latter property is characteristic of dispersing billiards. For the regular gas, only a few of the elements of the ROC matrices become small after a collision, which means that one cannot find an accurate approximation to the ROC matrices by considering only one collision. That property is associated with semi-dispersing billiards, where a reflection from a scatterer does not change at all the diagonal components of the ROC matrix that correspond to the flat directions of the scatterer.

A. Informal Calculation of the KS Entropy and Lyapunov Exponents for the Dilute, Random Lorentz Gas

Here we show that simple kinetic theory methods allow us to compute the Lyapunov exponents and KS entropies of Lorentz gases in two and three dimensions [14]. To do so we use methods similar to those in Section III. That is, we consider the equations for the deviation vectors, Eqs. (67)-(69) above. The velocity deviation vector changes only upon collision with a scatterer. We will base our calculation on the exponential growth rate of the magnitude of the velocity deviation vector and, for three dimensional systems, on the exponential growth rate of the volume element in velocity space. We begin by writing the spatial deviation vector just before a collision as

$$\delta\vec r = t\,\delta\vec v + \delta\vec r(0),\tag{76}$$

where δ~r(0) is the spatial deviation vector just after the previous collision with a scatterer. This equation is essentially the same as Eq. (18), but now it is a good approximation to neglect the spatial deviation vector δ~r(0), since it is of relative order a/vt compared to the term tδ~v, in all directions of δ~r perpendicular to ~v. Thus we neglect this term and insert Eq. (76) into the last equality of Eq. (68) to obtain⁷

$$\delta\vec v\,' = M_{\hat\sigma}\cdot\delta\vec v - 2t\,Q_{\hat\sigma}\cdot\delta\vec v \equiv a\cdot\delta\vec v,\tag{77}$$

where we have defined a matrix a that gives the change in the velocity deviation vector at collision. Then we can express the velocity deviation vector at some time t in terms of its initial value as

⁷ In principle, the term M_σ̂·δ~v in Eq. (77) can be neglected also, but only for directions perpendicular to the velocity. If one is careful to consider only deviations δ~v in the subspace perpendicular to the velocity of the particle, it is possible to carry out the calculation with this term neglected.

$$\delta\vec v(t) = a_N\cdot a_{N-1}\cdots a_1\cdot\delta\vec v(0),\tag{78}$$

where we have labeled the successive collisions by the subscripts 1, 2, ..., N. We can determine the largest Lyapunov exponent by examining the growth of the magnitude of the velocity deviation vector with time, and the KS entropy from the growth of the volume element with time. Therefore, with the approximations mentioned above,

$$\lambda_{\max} = \lim_{t\to\infty}\frac{1}{t}\ln\frac{|\delta\vec v(t)|}{|\delta\vec v(0)|} = \lim_{t\to\infty}\frac{N}{t}\,\frac{1}{N}\sum_{i=1}^{N}\ln\frac{|\delta\vec v_i^{\,+}|}{|\delta\vec v_{i-1}^{\,+}|},\tag{79}$$

where δ~v_i⁺ is the velocity deviation vector immediately after the collision labeled by the subscript i. Similarly, the sum of the positive Lyapunov exponents is given by

$$\sum_{\lambda_i>0}\lambda_i = h_{KS} = \lim_{t\to\infty}\frac{N}{t}\,\frac{1}{N}\sum_{i=1}^{N}\ln|\det a_i|.\tag{80}$$
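Equations (78)-(80) suggest the standard numerical bookkeeping for products of collision matrices: multiply a deviation vector through the product, renormalize after every factor to avoid overflow, and accumulate the logarithms of the growth factors. A minimal sketch (plain Python; the function name and the diagonal test matrix are our own, not from the text):

```python
import math

def lyap_from_matrices(mats, dv0, total_time):
    """Largest-exponent bookkeeping for a product as in Eq. (78):
    apply each collision matrix to the deviation vector, renormalize,
    and accumulate the log growth factors as in Eq. (79)."""
    dv = dv0[:]
    log_growth = 0.0
    for A in mats:
        dv = [sum(a * x for a, x in zip(row, dv)) for row in A]
        norm = math.sqrt(sum(x * x for x in dv))
        log_growth += math.log(norm)
        dv = [x / norm for x in dv]        # renormalize; only the logarithm is kept
    return log_growth / total_time

# sanity check on a known product: A = diag(2, 1/2) applied N times
A = [[2.0, 0.0], [0.0, 0.5]]
N = 50
lam = lyap_from_matrices([A] * N, [1.0, 1.0], total_time=N)
print(lam)   # tends to ln 2, the log of the largest eigenvalue
```

In the Lorentz gas application the matrices a_i would be built from sampled free times and collision angles, with total_time the sum of the free times.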

To evaluate the sums appearing in Eqs. (79, 80), we note that to leading order in the density none of the collisions are correlated with any previous collision; that is, the leading contribution to the Lyapunov exponents comes from collision sequences in which the moving particle does not encounter the same scatterer more than once⁸. Therefore we can treat each term in the sums in Eqs. (79, 80) as independent of the other terms in the sum. We have expressed λ_max and the sum of the positive Lyapunov exponents as arithmetic averages, but for long times and with independently distributed terms in the average, we can replace the arithmetic averages by ensemble averages over a suitable equilibrium ensemble. That is,

$$\lambda_{\max} = \nu\left\langle\ln\frac{|\delta\vec v^{\,+}|}{|\delta\vec v^{\,-}|}\right\rangle\tag{81}$$

and

$$\sum_{\lambda_i>0}\lambda_i = \nu\,\langle\ln|\det a|\rangle,\tag{82}$$

where δ~v⁻ and δ~v⁺ are the velocity deviation vectors before and after a collision, respectively, ν is the low density value of the collision frequency, N/t as t becomes large, and the angular brackets denote an equilibrium average. We now consider a typical collision of the moving particle with one of the scatterers. The free time between one collision and the next is sampled from the normalized equilibrium distribution of free times [3], P(τ), given at low densities by

⁸ In two dimensions the particle will hit the same scatterer an infinite number of times. However, the effects of such processes are of higher order in the density, and can be neglected here since the number of collisions between successive collisions with the same scatterer typically becomes very large as the density of scatterers approaches zero.

$$P(\tau) = \nu e^{-\nu\tau}.\tag{83}$$
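Two properties of this distribution are used repeatedly below: its mean is 1/ν, and ⟨ln(ντ)⟩ = −C, with C ≈ 0.5772 Euler's constant, which is where the constant C in the results below originates. A quick Monte Carlo check (our own sketch, plain Python, with an illustrative value of ν):

```python
import math, random

random.seed(3)
nu = 0.25                              # illustrative collision frequency
samples = [random.expovariate(nu) for _ in range(200000)]
mean_tau = sum(samples) / len(samples)
mean_log = sum(math.log(nu * t) for t in samples) / len(samples)

C = 0.5772156649015329                 # Euler's constant
assert abs(mean_tau - 1 / nu) < 0.05 / nu      # <tau> = 1/nu
assert abs(mean_log + C) < 0.02                # <ln(nu tau)> = -C
print(mean_tau, mean_log)
```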

The construction of the matrix a requires some geometry and depends on the number of dimensions of the system. In any case we take the velocity vector before collision, ~v, to be directed along the z-axis, and take σ̂·~v = −v cos φ, where −π/2 ≤ φ ≤ π/2. The velocity deviation before collision, δ~v⁻, is perpendicular to the z-axis. Then it is a simple matter to compute |δ~v⁺|/|δ~v⁻| and |det a|. For two-dimensional systems δ~v⁻ and the matrix a are given in this representation by⁹

$$\delta\vec v^{\,-} = \begin{pmatrix}1\\0\end{pmatrix}|\delta\vec v^{\,-}|;\qquad a = \begin{pmatrix}(1+\Lambda)\cos 2\phi & \sin 2\phi\\(1+\Lambda)\sin 2\phi & -\cos 2\phi\end{pmatrix},\tag{84}$$

where we have introduced Λ = (2vτ)/(a cos φ). To leading order in vτ/a we find that

$$\frac{|\delta\vec v^{\,+}|}{|\delta\vec v^{\,-}|} = \Lambda;\qquad |\det a| = \Lambda.\tag{85}$$
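A quick numerical check (our own, plain Python, with illustrative parameter values) confirms that the matrix a of Eq. (84) satisfies |δ~v⁺|/|δ~v⁻| = 1 + Λ and |det a| = 1 + Λ exactly, both of which reduce to Λ at leading order in vτ/a:

```python
import math

def a_matrix(Lam, phi):
    """The 2D collision matrix of Eq. (84)."""
    c2, s2 = math.cos(2 * phi), math.sin(2 * phi)
    return [[(1 + Lam) * c2, s2],
            [(1 + Lam) * s2, -c2]]

v_tau_over_a = 250.0                       # v tau / a >> 1 at low density
phi = 0.7
Lam = 2 * v_tau_over_a / math.cos(phi)     # Lambda = 2 v tau / (a cos phi)
A = a_matrix(Lam, phi)

# incoming deviation along (1, 0) as in Eq. (84)
dv_plus = [A[0][0], A[1][0]]
growth = math.sqrt(sum(x * x for x in dv_plus))
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

assert abs(growth - (1 + Lam)) < 1e-9 * (1 + Lam)
assert abs(abs(det) - (1 + Lam)) < 1e-9 * (1 + Lam)
print("growth factor and |det a| both equal 1 + Lambda =", 1 + Lam)
```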

For three dimensional systems the unit vector σ̂ can be represented as σ̂ = −cos φ ẑ + sin φ cos α x̂ + sin φ sin α ŷ. Now the ranges of the angles φ and α are 0 ≤ φ ≤ π/2 and 0 ≤ α ≤ 2π. There is an additional angle ψ in the x, y plane such that the velocity deviation before collision is δ~v⁻ = |δ~v⁻|[x̂ cos ψ + ŷ sin ψ]. It is somewhat more convenient to use a symmetric matrix, ã = (1 − 2σ̂σ̂)·a, given by

$$\tilde a = \begin{pmatrix}1+\Lambda(\cos^2\phi+\sin^2\phi\cos^2\alpha) & \Lambda\sin^2\phi\cos\alpha\sin\alpha & 0\\ \Lambda\sin^2\phi\cos\alpha\sin\alpha & 1+\Lambda(\cos^2\phi+\sin^2\phi\sin^2\alpha) & 0\\ 0 & 0 & 1\end{pmatrix}.\tag{86}$$

One easily finds

$$\frac{|\delta\vec v^{\,+}|}{|\delta\vec v^{\,-}|} = \frac{2\tau v}{a}\left[\frac{\cos^2(\alpha-\psi)}{\cos^2\phi} + \sin^2(\alpha-\psi)\cos^2\phi\right]^{1/2},\tag{87}$$

and

$$|\det a| = |\det\tilde a| = \left(\frac{2v\tau}{a}\right)^2\tag{88}$$

to leading order in vτ/a. To complete the calculation we must evaluate the averages appearing in Eqs. (79, 80). That is, we average over the distribution of free times and over the rate at which scattering events take place at the various scattering angles. Additionally, in three dimensions an average over a stationary distribution of the angles ψ has to be performed in general. Due to the isotropy of the scattering geometry, ψ can here be absorbed in a redefinition α′ = α − ψ of the azimuthal angle α. This will no longer be true if the isotropy of velocity space is broken (e.g. by an external field). The appropriate average of a quantity F takes the simple form

$$\langle F\rangle = \frac{1}{J}\int_0^\infty d\tau\int d\hat\sigma\,\cos\phi\,P(\tau)\,F,\tag{89}$$

⁹ In contrast to the ROC matrices ρ, a is a d × d matrix. If one chooses one of the basis vectors of a perpendicular to ~v, the remaining ones form the basis of the corresponding (d − 1) × (d − 1) matrix, from which one can also obtain Eq. (85) and, in the three dimensional case, Eqs. (87) and (88).

where P(τ) is the free time distribution given by Eq. (83) and J is a normalization factor obtained by setting F = 1 in the numerator. The integration over the unit vector σ̂, i.e., over the appropriate solid angle, ranges over −π/2 ≤ φ ≤ π/2 in two dimensions and over 0 ≤ φ ≤ π/2 and 0 ≤ α ≤ 2π in three dimensions. After carrying out the required integrations we find that

$$\lambda^{+} = \lambda_{\max} = 2nav\left[-\ln(2na^2) + 1 - C\right] + \cdots\tag{90}$$

for two dimensions. Here C is Euler’s constant, and the terms not given explicitly in Eq. (90) are of higher order in the density. Similarly, for the three dimensional Lorentz gas we obtain

$$\lambda^{+}_{\max} = na^2v\pi\left[-\ln(\tilde n/2) + \ln 2 - \tfrac{1}{2} - C\right] + \cdots,\tag{91}$$

$$\lambda^{+}_{\max} + \lambda^{+}_{\min} = 2na^2v\pi\left[-\ln(\tilde n/2) - C\right] + \cdots,\tag{92}$$

from which it follows that

$$\lambda^{+}_{\min} = na^2v\pi\left[-\ln(\tilde n/2) - \ln 2 + \tfrac{1}{2} - C\right] + \cdots,\tag{93}$$

where ñ = na³π. We have therefore determined the Lyapunov spectrum for the equilibrium Lorentz gas at low densities in both two and three dimensions [13,14]. We note that the two positive Lyapunov exponents for three dimensions differ slightly, and that we were able to obtain individual values because we could calculate both the largest exponent and the sum of the two exponents. We could not determine all of the Lyapunov exponents for a d > 3 dimensional Lorentz gas this way. Moreover, for a spatially inhomogeneous system, such as those considered in the application of escape-rate methods, the simple kinetic arguments used here are not sufficient, and Boltzmann-type methods are essential for the determination of the Lyapunov exponents and KS entropies. In Fig. 7 we compare the results obtained above for the Lyapunov exponents of the dilute Lorentz gas in both two and three dimensions, as functions of the reduced density of the scatterers, with the numerical simulations of Dellago and Posch [31,32]. As one can see, the agreement is excellent.
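Since the averages in Eqs. (81), (82) factorize into a free-time part and an angular part, the coefficients in Eqs. (90) and (91) can also be checked directly by Monte Carlo. The sketch below (plain Python; the densities, sample size, and tolerances are our illustrative choices) samples τ from P(τ) and the collision angles from the flux-weighted measure of Eq. (89), and compares ν⟨ln(|δ~v⁺|/|δ~v⁻|)⟩, built from the leading-order growth factors (85) and (87), with the closed forms:

```python
import math, random

random.seed(7)
C = 0.5772156649015329            # Euler's constant
a = v = 1.0
M = 400000

# --- two dimensions: nu = 2 n a v, Lambda = 2 v tau / (a cos phi)
n2 = 1e-3
nu2 = 2 * n2 * a * v
acc = 0.0
for _ in range(M):
    tau = random.expovariate(nu2)
    phi = math.asin(random.uniform(-1.0, 1.0))   # weight ~ cos(phi) on (-pi/2, pi/2)
    acc += math.log(2 * v * tau / (a * math.cos(phi)))
lam2_mc = nu2 * acc / M
lam2_th = 2 * n2 * a * v * (-math.log(2 * n2 * a * a) + 1 - C)    # Eq. (90)
assert abs(lam2_mc - lam2_th) < 0.02 * lam2_th

# --- three dimensions: nu = n pi a^2 v, growth factor from Eq. (87)
n3 = 1e-3
nu3 = n3 * math.pi * a * a * v
acc = 0.0
for _ in range(M):
    tau = random.expovariate(nu3)
    sphi = math.sqrt(random.uniform(0.0, 1.0))   # weight ~ cos(phi) sin(phi) on (0, pi/2)
    cphi = math.sqrt(1.0 - sphi * sphi)
    alpha = random.uniform(0.0, 2 * math.pi)     # alpha' = alpha - psi, uniform
    g = (2 * tau * v / a) * math.sqrt((math.cos(alpha) / cphi) ** 2
                                      + (math.sin(alpha) * cphi) ** 2)
    acc += math.log(g)
lam3_mc = nu3 * acc / M
ntil = n3 * a ** 3 * math.pi
lam3_th = n3 * a * a * v * math.pi * (-math.log(ntil / 2) + math.log(2) - 0.5 - C)  # Eq. (91)
assert abs(lam3_mc - lam3_th) < 0.02 * lam3_th

print("2D:", lam2_mc, "vs", lam2_th, "  3D:", lam3_mc, "vs", lam3_th)
```

The 2% tolerances are comfortable for this sample size; the statistical error is well below one percent.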


FIG. 7. A plot of the Lyapunov exponents, in units of v/a, for the moving particle in a random, dilute Lorentz gas in two dimensions (top) and three dimensions (bottom), as functions of the density n, in units of a⁻ᵈ. The solid lines are the results given by kinetic theory, Eq. (90), respectively Eqs. (91) and (93), and the data points are the numerical results of Dellago and Posch.

B. Formal Kinetic Theory for the Low Density Lorentz Gas

The formal theory for the KS entropy of the regular gas is easily applied to the Lorentz gas, which is, of course, considerably simpler. Thus by following the arguments leading to Eq. (13) for the sum of the positive Lyapunov exponents for the regular gas, we find that the KS entropy for the equilibrium Lorentz gas is given by

$$\sum_{\lambda_i>0}\lambda_i = a^{d-1}\int dx\,d\rho\,d\vec R\,d\hat\sigma\,\Theta(-\vec v\cdot\hat\sigma)\,|\vec v\cdot\hat\sigma|\,\delta(\vec r-\vec R-a\hat\sigma)\,\ln\left|\det\left[M_{\hat\sigma}+2Q_{\hat\sigma}\cdot\rho\right]\right|\,F_2(x,\vec R,\rho).\tag{94}$$

Here R⃗ denotes the location of the scatterer with which the moving particle is colliding, and F₂ is the pair distribution function for the moving particle to have coordinate ~r, velocity ~v, and ROC matrix ρ, while the center of the scatterer is located at R⃗. At low densities we may assume that the moving particle and the scatterer are uncorrelated, so that the density expansion for F₂, immediately before a collision, has the form

$$F_2(x,\vec R,\rho) = nF_1(x,\rho) + \ldots,\tag{95}$$

where n is the number density of the scatterers and F₁(x, ρ) is the equilibrium single particle distribution function for the moving particle. We may easily construct an extended Lorentz-Boltzmann equation (ELBE) for F₁ along the lines used previously for the extended Boltzmann equation in Section III. The ELBE is given by [13,14]

$$\left[\frac{\partial}{\partial t}+L_0\right]F_1(x,\rho) = a^{d-1}\int d\vec R\,d\hat\sigma\,\Theta(-\vec v\cdot\hat\sigma)\,|\vec v\cdot\hat\sigma|\left[\delta(\vec r-\vec R-a\hat\sigma)\int d\rho'\,\delta(\rho-\rho(\rho'))\,P'_{\hat\sigma} - \delta(\vec r-\vec R+a\hat\sigma)\right]F_1(x,\rho).\tag{96}$$

Here the operator P′_σ̂ is a substitution operator that replaces the velocity ~v and the ROC matrix ρ by their restitution values, i.e., the values needed before a collision with a scatterer with collision vector σ̂ in order to produce the values ~v and ρ after that collision. Also, the free particle streaming operator on the left-hand side of Eq. (96) is given by

$$L_0 = \vec v\cdot\frac{\partial}{\partial\vec r} + \sum_{\alpha=1}^{d}\frac{\partial}{\partial\rho_{\alpha\alpha}},\tag{97}$$

since between collisions ~r varies as ~r(0) + t~v and ρ varies as ρ(0) + t1. Returning for the moment to Eq. (94) for the KS entropy, we see by evaluating the determinant in the integrand on the right side that not all components of ρ contribute. In fact, only the components of ρ in the plane perpendicular to ~v contribute to h_KS. Here we will work out the details of the calculation of h_KS for the two dimensional case, leaving the details of the three dimensional case to the literature [14]. For d = 2 we can easily evaluate the determinant in Eq. (94), and find that it is

$$\left|\det\left[M_{\hat\sigma}+2Q_{\hat\sigma}\cdot\rho\right]\right| = 1+\frac{2v\rho}{a\cos\phi}.\tag{98}$$

Here the scalar ρ is the component of the ρ matrix given by ρ = v̂⊥·ρ·v̂⊥, where v̂⊥ is a unit vector orthogonal to ~v. Moreover, for low densities we can use the approximation nF₁(x, ρ) for F₂. Then the expression for h_KS becomes, at low densities,

$$h_{KS} = an\int dx\,d\rho\,d\hat\sigma\,\Theta(-\vec v\cdot\hat\sigma)\,|\vec v\cdot\hat\sigma|\,\ln\left[1+\frac{2v\rho}{a\cos\phi}\right]F_1(x,\rho),\tag{99}$$

where x = ~r, ~v, and we have to determine F₁(x, ρ) as the solution of the ELBE in which we have integrated over all components of ρ except the one diagonal component ρ. The ELBE then becomes, in the spatially homogeneous, equilibrium case,

$$\frac{\partial}{\partial\rho}F_1(\vec r,\vec v,\rho) = nav\int_{-\pi/2}^{\pi/2}d\phi\,\cos\phi\left[\int_0^\infty d\rho'\,\delta\!\left(\rho-\frac{a\cos\phi}{2v+a\cos\phi/\rho'}\right)F_1(\vec r,\vec v\,',\rho') - F_1(\vec r,\vec v,\rho)\right].\tag{100}$$

The argument of the δ function is simply obtained by using the mirror formula given by Eq. (73), but now the unprimed variable is the value after the collision, and the primed variable is the value of ρ before the collision. A further, and useful, simplification results from the observation that ρ′ is typically of the order of the mean free time between collisions, which, for low density, is much larger than a/v. Therefore the delta function can be replaced by

$$\delta\!\left(\rho-\frac{a\cos\phi}{2v+a\cos\phi/\rho'}\right) \approx \delta\!\left(\rho-\frac{a\cos\phi}{2v}\right).\tag{101}$$

In a spatially homogeneous and isotropic equilibrium state, F₁ has to be a function of the magnitude of the velocity |~v| and the radius of curvature ρ only. Using the fact that the magnitude of the velocity always stays the same, we know that there is a solution of the form

$$F_1(\vec r,\vec v,\rho) = \varphi(\vec v)\,\psi(\rho),\tag{102}$$

with φ(~v) = (2πV)⁻¹δ(|~v| − v₀) the normalized equilibrium spatial and velocity distribution function for the moving particle, v₀ its constant speed, and V the volume of the system. Now all we have to do is to determine ψ(ρ). An inspection of Eq. (100), with the approximation Eq. (101), shows that ψ(ρ) satisfies the equation

$$\frac{\partial\psi(\rho)}{\partial\rho} + 2nav\,\psi(\rho) = 0\tag{103}$$

for ρ ≥ a/(2v), with solution

$$\psi(\rho) = (1/t_0)\,e^{-\rho/t_0}\qquad\text{for }\rho\ge a/(2v).\tag{104}$$

Here t₀ is the mean free time, given by t₀ = (2nav)⁻¹. In the case that ρ < a/(2v), one can easily solve the full equation with the delta function to find

$$\psi(\rho) = (1/t_0)\left\{1-\left[1-\frac{2v\rho}{a}\right]^{1/2}\right\}\qquad\text{for }\rho< a/(2v).\tag{105}$$
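As a numerical cross-check (our own sketch, plain Python; the density value and grid sizes are illustrative choices), inserting ψ(ρ) of Eqs. (104)-(105) into the two dimensional expression (99) and evaluating the double integral with the trapezoidal rule reproduces Eq. (90) to well within a couple of percent at na² = 10⁻³:

```python
import math

a = v = 1.0
n = 1e-3                       # n a^2 = 1e-3
t0 = 1.0 / (2 * n * a * v)     # mean free time, Eq. (104)
C = 0.5772156649015329         # Euler's constant

def psi(rho):
    """Radius-of-curvature distribution, Eqs. (104)-(105)."""
    if rho < a / (2 * v):
        return (1 / t0) * (1 - math.sqrt(1 - 2 * v * rho / a))
    return (1 / t0) * math.exp(-rho / t0)

def trapz(f_vals, xs):
    return sum(0.5 * (f_vals[i] + f_vals[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# rho grid: fine below a/(2v), coarser over the exponential tail
rho_grid = [0.5 * (i / 400.0) for i in range(401)] \
         + [0.5 + (40 * t0 - 0.5) * (i / 3000.0) for i in range(1, 3001)]
psi_vals = [psi(r) for r in rho_grid]

phis = [-math.pi / 2 + math.pi * i / 200.0 for i in range(201)]
outer_vals = []
for phi in phis:
    c = math.cos(phi)
    if c < 1e-12:                      # integrand vanishes at the endpoints
        outer_vals.append(0.0)
        continue
    inner = trapz([p * math.log(1 + 2 * v * r / (a * c))
                   for p, r in zip(psi_vals, rho_grid)], rho_grid)
    outer_vals.append(c * inner)

h_ks = n * a * v * trapz(outer_vals, phis)      # Eq. (99), homogeneous 2D case
h_th = 2 * n * a * v * (-math.log(2 * n * a * a) + 1 - C)   # Eq. (90)
assert abs(h_ks - h_th) < 0.02 * h_th
print("h_KS from Eq. (99):", h_ks, "  Eq. (90):", h_th)
```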

Combining these results with Eq. (102) and inserting them into Eq. (99), we recover the result Eq. (90) for the KS entropy, equivalently the positive Lyapunov exponent, of the low density, equilibrium, random Lorentz gas. A similar but somewhat more elaborate calculation can be carried out for the three dimensional case, yielding exactly the same result for the KS entropy as obtained in the informal theory. To obtain the largest Lyapunov exponent by more formal methods, one has to resort to methods for determining the largest eigenvalue of products of random matrices. This is well described in the literature [14] and we will not pursue the issue further here.

VI. CONCLUSIONS AND OPEN PROBLEMS

In this article we have given a survey of the applications of the kinetic theory of dilute, hard-ball gases to the calculation of quantities that characterize the chaotic behavior of such systems. Results have been obtained for the KS entropy per particle and for the largest Lyapunov exponent of dilute hard-ball systems, as well as for the Lyapunov spectrum of the moving particle in the dilute, random Lorentz gas with nonoverlapping, fixed, hard-ball scatterers. All of these results are in good to excellent agreement with the results of computer simulations. In the study of the largest Lyapunov exponent, we have developed a very interesting clock model which seems to explain many features of the behavior of this exponent. Moreover, the method for treating the clock model reveals a deep and perhaps unexpected connection between the theory of Lyapunov exponents and the theory of hydrodynamic fronts. We emphasize that the results given here apply to a dilute gas in equilibrium, but nonequilibrium situations have been treated by these methods as well. For example, it is possible to calculate the Lyapunov spectrum for a dilute, random Lorentz gas in a nonequilibrium steady state produced by a thermostatted electric field, at least for small fields, and to obtain results that are in excellent agreement with computer simulations [33,34,32]. Calculations are currently underway for the largest Lyapunov exponent of a hard-ball gas subjected to a thermostatted external force that maintains a steady shear flow in the gas. Furthermore, one can use kinetic theory to calculate the Lyapunov spectrum for trajectories on the fractal repeller of a Lorentz gas with open, absorbing boundaries [13,35]. Such results are useful for understanding escape-rate methods, which relate chaotic quantities for trajectories on a fractal repeller of an open system to the transport properties of the system, as described by Gaspard and Nicolis [9,10,36].
These results will eventually be extended to hard-ball gases as well. Of course, many problems remain to be solved. Here we mention some of the most immediate ones:

1. All of the results described here apply to hard-core systems; that is, the particles interact with a potential energy that is either zero, beyond a given separation, or infinite, below that separation. It is worth studying the properties of dilute systems with smoother potential energies for a number of reasons: (a) The results of Rom-Kedar and Turaev [12] suggest that the dynamics of particles with short ranged but smooth potentials may exhibit regions of nonhyperbolic behavior. It is important to know more about these regions and to assess their effect on the overall chaotic behavior of gases interacting with such potentials. (b) We know very little about the chaotic properties of dilute gases interacting with long range forces, such as Maxwell molecules and Coulomb gases.

2. An open problem, even for dilute gases, is to obtain the complete spectrum of Lyapunov exponents for a hard-ball gas. This is a very challenging problem in mathematical physics, and no easy approach is in sight. There is a very tantalizing set of numerical results by Posch and coworkers [37] showing that the smallest nonzero Lyapunov exponents have a hydrodynamic structure, in that the exponents themselves seem to scale as the inverse of the linear size of the system, and that the spatial deviation vectors seem to form collective modes of both transverse and longitudinal types. Eckmann and Gat [38] have proposed an explanation of this hydrodynamic-like behavior of the lowest Lyapunov modes using techniques from the theory of random matrices. It would be useful to understand and to extend their results using kinetic theory methods.

3. The extension of these results to gases at higher densities remains an open, challenging problem. The kinetic theory of dense gases has exposed a number of effects caused by long range dynamical correlations between the particles. Such effects include nonanalytic terms in the density expansion of transport coefficients and long time tail phenomena in time correlation functions, among others [4]. It is of some interest to see the effect of these dynamical correlations on the chaotic properties of the gas as well. Furthermore, a high density hard-ball system may form a glass, and useful information about the glassy state could be obtained if one could study the chaotic behavior of the gas through the glass transition.

4. The extension of the clock model to higher density systems can also be expected to reveal new and interesting phenomena connected to the effects of density and other fluctuations in the gas upon the clock speed and related quantities. At present there are some weak numerical indications [39] that the largest Lyapunov exponent might diverge in the limit of large numbers of particles as ln N, where N is the number of particles in the system. It might be possible to confirm or to rule out this possibility by extending the clock model so as to include the effects of fluctuations in the fluid.

5. Lyapunov exponents and KS entropies are not the only properties that characterize chaotic systems. There are many more quantities, such as topological pressures, fractal dimensions, etc., that remain to be explored by the methods outlined here.

6. A subject of considerable interest and activity is the physics of gases that make inelastic collisions, i.e. the physics of granular materials [40]. One would like to know what the chaotic properties of such systems might be, or, more generally, how to define such properties in nonstationary systems.

In conclusion, we have described in this article only the first steps, taken over the last few years, towards developing useful methods for the calculation of Lyapunov exponents and KS entropies for systems of particles that can be treated by kinetic theory. We are particularly delighted that kinetic theory has something to contribute to the field of the chaotic behavior of large systems of particles, and that the ideas of Maxwell and Boltzmann are still finding new and fruitful applications.

Acknowledgements: The authors would like to thank Professor Harald A. Posch, Professor Christoph Dellago, and Dr. Arnulf Latz for many helpful conversations, and for collaborations on much of the research described here. We are also grateful to Professor Christoph Dellago and Professor Harald A. Posch for supplying Figs. 1, 2 and 7. H. v. B. and R. v. Z. are supported by FOM, SMC and by the NWO Priority Program Non-Linear Systems, which are financially supported by the “Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO)”. J. R. D. would like to thank the National Science Foundation for support under grant PHY-9600428.

[1] S. G. Brush, The Kind of Motion We Call Heat: A history of the kinetic theory of gases in the 19th century, Studies in Statistical Mechanics vol. 6, North-Holland, Amsterdam (1976).
[2] L. Boltzmann, Lectures on Gas Theory, S. G. Brush, transl., Dover Publications, New York (1995).
[3] S. Chapman and T. Cowling, The Mathematical Theory of Non-uniform Gases, third edition, Cambridge University Press, Cambridge (1970).
[4] J. R. Dorfman and H. van Beijeren, in Statistical Physics, Part B, B. J. Berne, ed., Plenum Publishing Co., New York (1977), p. 65.
[5] P. Ehrenfest and T. Ehrenfest, The Conceptual Foundations of the Statistical Approach in Mechanics, Cornell University Press, Ithaca (1959).
[6] G. E. Uhlenbeck and G. W. Ford, Lectures in Statistical Mechanics, American Mathematical Society, Providence (1963).
[7] J. L. Lebowitz, Physica A 194, 1 (1993).
[8] V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics, W. A. Benjamin, Inc., New York (1968).
[9] P. Gaspard, Chaos, Scattering Theory and Statistical Mechanics, Cambridge University Press, Cambridge (1998).
[10] J. R. Dorfman, An Introduction to Chaos in Non-equilibrium Statistical Mechanics, Cambridge University Press, Cambridge (1999).
[11] H. van Beijeren, J. R. Dorfman, Ch. Dellago and H. A. Posch, Phys. Rev. E 56, 5272 (1997).
[12] V. Rom-Kedar and D. Turaev, Physica D 130, 187 (1999).
[13] H. van Beijeren and J. R. Dorfman, Phys. Rev. Lett. 74, 4412 (1995); Erratum Phys. Rev. Lett. 76, 3238 (1995).
[14] H. van Beijeren, A. Latz and J. R. Dorfman, Phys. Rev. E 57, 4077 (1998).
[15] R. van Zon, H. van Beijeren and Ch. Dellago, Phys. Rev. Lett. 80, 2035 (1998).
[16] R. van Zon, H. van Beijeren and J. R. Dorfman, Kinetic Theory of Dynamical Systems, preprint chao-dyn/9906040 (1999).
[17] Ch. Dellago, H. A. Posch and W. G. Hoover, Phys. Rev. E 53, 1485 (1996).
[18] Ya. G. Sinai, Russian Math. Surveys 25, 137 (1970); Ya. G. Sinai (ed.), Dynamical Systems, World Scientific Publishing (1991).
[19] E. Ott, Chaos in Dynamical Systems, Cambridge University Press, Cambridge (1993).
[20] Ja. B. Pesin, Sov. Math. Doklady 17, 196 (1976); reprinted in R. S. MacKay and J. D. Meiss, Hamiltonian Dynamical Systems, Adam Hilger, Bristol (1987).
[21] J. R. Dorfman, A. Latz and H. van Beijeren, Chaos 8, 444 (1998).
[22] J. R. Dorfman (to be published).
[23] H. van Beijeren, H. Kruis, D. Panja and J. R. Dorfman (to be published).
[24] C. Cercignani, Theory and Application of the Boltzmann Equation, Scottish Academic Press, Edinburgh/London (1975).
[25] W. van Saarloos, Phys. Rev. A 37, 211 (1988); U. Ebert and W. van Saarloos, Phys. Rev. Lett. 80, 1650 (1998); U. Ebert and W. van Saarloos, Front propagation into unstable states: Universal algebraic convergence towards uniformly translating pulled fronts, Physica D (to appear).
[26] Ch. Dellago and H. A. Posch, in [27], pp. 68–83 (1997).
[27] Special issue on the Proceedings of the Euroconference on The Microscopic Approach to Complexity in Non-Equilibrium Molecular Simulations, CECAM at ENS-Lyon, France, 1996, edited by M. Mareschal, Physica (Amsterdam) 240A (1997).
[28] G. A. Bird, Molecular Gas Dynamics, Clarendon, Oxford (1976).
[29] E. Brunet and B. Derrida, Phys. Rev. E 56, 2597 (1997).
[30] A. Lemarchand and B. Nawakowski, J. Chem. Phys. 109, 7028 (1998).
[31] Ch. Dellago and H. A. Posch, Phys. Rev. E 52, 2401 (1995).
[32] Ch. Dellago and H. A. Posch, Phys. Rev. Lett. 78, 211 (1997).
[33] H. van Beijeren, J. R. Dorfman, E. G. D. Cohen, Ch. Dellago and H. A. Posch, Phys. Rev. Lett. 77, 1974 (1996).
[34] A. Latz, H. van Beijeren and J. R. Dorfman, Phys. Rev. Lett. 78, 207 (1997).
[35] H. van Beijeren, A. Latz and J. R. Dorfman (to be published).
[36] P. Gaspard and G. Nicolis, Phys. Rev. Lett. 65, 1693 (1990).
[37] Lj. Milanović, H. A. Posch and Wm. G. Hoover, Mol. Phys. 95, 281 (1998); H. A. Posch and R. Hirschl, Simulation of billiards and hard-body fluids, preprint (1999).
[38] J.-P. Eckmann and O. Gat, preprint chao-dyn/9908018 (1999).
[39] D. J. Searles, D. J. Evans and D. J. Isbister, in [27], pp. 96–104 (1997).
[40] M. H. Ernst, in Proceedings of the NATO-ASI on Dynamics: Models and Kinetic Methods for Non-equilibrium Many Body Systems, J. Karkheck (ed.) (1999).