LIAPUNOV SPECTRA FOR INFINITE CHAINS OF NONLINEAR OSCILLATORS

Jean-Pierre Eckmann, Département de Physique Théorique, Université de Genève, CH-1211 Geneva 4, Switzerland
C. Eugene Wayne, Department of Mathematics, Pennsylvania State University, University Park, PA 16802, USA

Abstract.

We argue that the spectrum of Liapunov exponents for long chains of nonlinear oscillators, at large energy per mode, may be well approximated by the Liapunov exponents of products of independent random matrices. If, in addition, statistical mechanics applies to the system, the elements of these random matrices have a distribution which may be calculated from the potential and the energy alone. Under a certain isotropy hypothesis (which is not always satisfied) we argue that the Liapunov exponents of these random matrix products can be obtained from the density of states of a typical random matrix. This construction uses an integral equation first derived by Newman. We then derive and discuss a method to compute the spectrum of a typical random matrix. Putting the pieces together we see that the Liapunov spectrum can be computed from the potential between the oscillators.


1. Introduction

In the paper [7], Livi et al. discuss a numerical experiment concerning Liapunov exponents for Hamiltonian systems with many degrees of freedom. They plot, for a variety of such systems, the graph of $\lambda_i$ versus $i/N$, where $\lambda_i$ is the $i$-th Liapunov exponent (ordered $\lambda_1 \ge \cdots \ge \lambda_N$) in a system with $N$ degrees of freedom. They observe that the curves thus obtained seem to be independent of $N$ for large $N$ and, to a lesser extent, also independent of the system under analysis. Furthermore, they observe that the graphs form essentially straight lines. We propose here an explanation of these findings. Our explanation is based on a twofold reduction:

- We first argue that in a Hamiltonian system describing a long chain of non-linear oscillators, in which equipartition of the energy holds, and which is ergodic, the tangent matrices $d\Phi^\tau$ to the Hamiltonian flow $\Phi^\tau$ are symplectic matrices which are, for short times $\tau$, well approximated by

$$ d\Phi^\tau \approx S(\tau) \equiv \exp\left[ \tau \begin{pmatrix} 0 & -A \\ 1 & 0 \end{pmatrix} \right] , \eqno(1.1) $$

where $A$ is a tri-diagonal matrix of the form

$$ A = \begin{pmatrix} \omega_1 & -\omega_1 & 0 & \cdots & 0 \\ -\omega_1 & \omega_1+\omega_2 & -\omega_2 & \cdots & 0 \\ 0 & -\omega_2 & \omega_2+\omega_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \omega_{N-1} \end{pmatrix} , \eqno(1.2) $$

and the $\omega_i$'s are independent, identically distributed random variables with density $F(\omega)$. Because of equipartition, the density $F$ depends only on the ergodic distribution of the coordinate of one oscillator, which in turn depends on the coupling potential. The importance and existence of an equipartition threshold has been discussed in Livi et al. [6], but our description will be more detailed and goes beyond previous studies.

- Assuming now that the matrices have a known distribution, the problem remains of finding the distribution of Liapunov exponents for the product

$$ \prod_{j=0}^{n-1} d\Phi^\tau(j\tau) $$

of the tangent maps $d\Phi^\tau(j\tau)$ at times $t = j\tau$, for $j = 0, \ldots, n-1$. We shall show that at large energy per oscillator and sufficient anharmonicity, there is a choice of $\tau$ for which the approximation (1.1) is valid, and where, furthermore, successive matrices are statistically independent. We derive an integral equation which describes the connection between the density $F$ of the $\omega_i$'s and the integrated density $K$ of eigenvalues of $A$, and we show that it has a unique solution. The density of states can be read off this solution by a result of Simon and Taylor [12]. We then make the assumption that successive matrices of the form $S(\tau)$ tend to rotate tangent vectors isotropically in space. Numerical evidence, presented in Section 6, shows that this assumption may, or may not, be satisfied for a given system. For those systems which do satisfy the isotropy hypothesis, we can extend the theory of Newman [8], [9], in which the distribution of Liapunov exponents for products of random matrices is studied. We arrive at a relation between the expected integrated density of states $K$ for large random matrices and the corresponding integrated density of Liapunov exponents for their (random) product. These ideas lead to an explicit algorithm, described in Section 2, for determining the integrated density of Liapunov exponents for chains of oscillators. Applying this algorithm shows that the Liapunov spectrum is close to being a straight line. It depends on the probability distribution of the tangent matrices, and hence on the potential.

2. The Determination of the Liapunov Spectrum

In order to compute the density of Liapunov exponents, we shall show in Section 4 that we need to compute the density of states for a random tridiagonal matrix of the form

$$ A = \begin{pmatrix} \omega_1 & -\omega_1 & 0 & \cdots & 0 \\ -\omega_1 & \omega_1+\omega_2 & -\omega_2 & \cdots & 0 \\ 0 & -\omega_2 & \omega_2+\omega_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \omega_{N-1} \end{pmatrix} , $$

where the $\omega_i$ are independent, identically distributed random variables, with density $F(\omega)$.
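For the numerical checks below it is convenient to be able to sample such matrices. A short sketch (assuming Python with NumPy; a uniform density on (0.5, 1.5) is only a placeholder for $F$) builds $A$ and verifies that when all $\omega_i$ are positive, $A$ is symmetric positive semidefinite, since $x^T A x = \sum_i \omega_i (x_i - x_{i+1})^2$:

```python
import numpy as np

def random_tridiagonal(N, sample_omega, rng):
    """Build the N x N tridiagonal matrix A of Eq. (1.2) from i.i.d. couplings omega_i."""
    omega = sample_omega(rng, N - 1)            # omega_1 .. omega_{N-1}
    diag = np.zeros(N)
    diag[0] = omega[0]                          # boundary rows carry a single coupling
    diag[-1] = omega[-1]
    diag[1:-1] = omega[:-1] + omega[1:]         # interior: omega_{i-1} + omega_i
    A = np.diag(diag)
    A -= np.diag(omega, 1) + np.diag(omega, -1)
    return A

rng = np.random.default_rng(0)
# Placeholder density with support on one side of 0: uniform on (0.5, 1.5).
A = random_tridiagonal(200, lambda r, n: 0.5 + r.random(n), rng)
evals = np.linalg.eigvalsh(A)
# With all omega_i > 0 the quadratic form is a sum of squares, hence A >= 0.
```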

We make the following assumptions on F :


F1. $\int dx\, F(x) = 1$; $F(x) \ge 0$.
F2. $\sup_\zeta F(\zeta)(1 + \zeta^2) < \infty$.
F3. The support of $F$ lies on one side of 0.

Remark. The function $F$ is, in the setting of Section 3, related to the second derivative of the interaction potential $V$. The conditions F1–F3 can then be easily re-expressed in terms of $V$; in particular, F3 follows from the convexity of the potential.
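To make this remark concrete, here is a small numerical sketch (assumptions: Python with NumPy, the convex placeholder potential $V(x) = x^2/2 + x^4/4$, inverse temperature $\beta = 1$, and plain rejection sampling for the weight $e^{-\beta V}$); it checks that the sampled couplings $\omega = V''(x)$ all lie on one side of 0, as F3 requires:

```python
import numpy as np

def sample_gibbs_x(beta, n, rng, width=5.0):
    """Rejection-sample x with density proportional to exp(-beta * V(x)) on [-width, width]."""
    V = lambda x: 0.5 * x**2 + 0.25 * x**4       # convex placeholder potential
    out = []
    while len(out) < n:
        x = rng.uniform(-width, width, size=4 * n)
        keep = rng.random(4 * n) < np.exp(-beta * V(x))   # V >= 0, so the ratio is <= 1
        out.extend(x[keep].tolist())
    return np.array(out[:n])

rng = np.random.default_rng(1)
x = sample_gibbs_x(beta=1.0, n=2000, rng=rng)
omega = 1.0 + 3.0 * x**2                         # omega = V''(x): strictly positive by convexity
```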

Under the assumptions F1–F3, the integrated density of Liapunov exponents $H(\lambda)$ in the limit $N \to \infty$ is given by the following algorithm:

A1. For every $\lambda \in \mathbf{R}$, $\lambda \ne 0$, find a positive function $u_\lambda$ such that

$$ u_\lambda(t) = \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 u_\lambda(t') , $$

and

$$ \int_{-\infty}^{\infty} dt'\, u_\lambda(t') = 1 . $$

A2. Determine $L(\lambda)$ by

$$ L(\lambda) = 1 - \int_0^1 dz \int_{-\infty}^{\infty} d\omega\, u_{-\lambda}(\omega(z-1))\, |\omega| F(\omega) . $$

($L$ is the integrated density of eigenvalues of matrices of the form of $A$. The condition F3 is only needed in this step.)

A3. Now compute $K(\mu)$, the integrated density of eigenvalues of the absolute value of a matrix of the form (1.1), as follows: Given $\lambda$, we define $\nu = \lambda^{1/2}$ and then $\mu_-(\lambda)$ as the square root of the smaller of the two eigenvalues of the matrix

$$ \begin{pmatrix} \cos^2(\tau\nu) + \nu^{-2}\sin^2(\tau\nu) & (\nu^{-1}-\nu)\cos(\tau\nu)\sin(\tau\nu) \\ (\nu^{-1}-\nu)\cos(\tau\nu)\sin(\tau\nu) & \cos^2(\tau\nu) + \nu^{2}\sin^2(\tau\nu) \end{pmatrix} . $$

Then we define

$$ K(\mu) = \begin{cases} 0 , & \text{for } \mu \le 0 , \\ L(\mu_-^{-1}(\mu)) , & \text{for } 0 \le \mu \le 1 , \\ 1 - K(\mu^{-1}) , & \text{for } \mu > 1 . \end{cases} \eqno(2.1) $$

Here, $\mu_-^{-1}$ denotes the inverse function of $\mu_-$.
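The map $\lambda \mapsto \mu_-(\lambda)$ in step A3 is easy to evaluate numerically. A minimal sketch (assuming Python with NumPy; $\tau$ is a free parameter here) computes both singular values and checks the pairing $\mu_-\mu_+ = 1$:

```python
import numpy as np

def mu_pair(lam, tau):
    """Singular-value pair of S(tau) on the eigenvalue lam of A, via the 2x2 matrix of step A3."""
    nu = np.sqrt(lam)
    c, s = np.cos(tau * nu), np.sin(tau * nu)
    M = np.array([[c**2 + s**2 / nu**2, (1.0 / nu - nu) * c * s],
                  [(1.0 / nu - nu) * c * s, c**2 + nu**2 * s**2]])
    e = np.linalg.eigvalsh(M)                # eigenvalues of S^T S, ascending
    return np.sqrt(e[0]), np.sqrt(e[1])      # (mu_minus, mu_plus)

lo, hi = mu_pair(lam=2.0, tau=0.3)
```

The 2x2 matrix has determinant 1, so the two singular values are always reciprocal to each other.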

Remark. The numbers $\mu_-(\lambda)$ and $\mu_+(\lambda) = 1/\mu_-(\lambda)$ are eigenvalues of $|S|$ when $\lambda$ is an eigenvalue of $A$. This follows by explicit calculation and observing that

$$ S(\tau) = \begin{pmatrix} \cos(\tau\Omega) & -\Omega\sin(\tau\Omega) \\ \Omega^{-1}\sin(\tau\Omega) & \cos(\tau\Omega) \end{pmatrix} , \eqno(2.2) $$

with $\Omega = A^{1/2}$. Note that only integer powers of $A$ occur in $S(\tau)$.

A4. Determine $\lambda_{\max}$ by

$$ \lambda_{\max} = \frac12 \log \int d\mu\, \mu^2 K'(\mu) , $$

and $\lambda_{\min} = -\lambda_{\max}$. Then, $H(\lambda) = 0$ if $\lambda \le \lambda_{\min}$, $H(\lambda) = 1$ if $\lambda \ge \lambda_{\max}$, and for all other $\lambda$, the function $H(\lambda)$ is the nonzero solution of the equation

$$ \int ds\, K'(s)\, \frac{s^2}{H(\lambda) e^{2\lambda} + (1 - H(\lambda)) s^2} = 1 . \eqno(2.3) $$

In Section 5 we show the existence of solutions to the equation in A1 and the existence of the integral in A2 under the conditions F1–F2. In Section 6, we apply these formulas to a case similar to that considered by Livi et al., and compare them with direct numerical studies.
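In step A4, once $K$ is known, Eq. (2.3) is a scalar root-finding problem for each $\lambda$. The sketch below (assumptions: Python with NumPy; $K$ approximated by a few atoms $s_i$ with weights $w_i$, chosen in symplectic pairs $s \leftrightarrow 1/s$; for $0 < \lambda < \lambda_{\max}$ the left-hand side of (2.3) dips below 1 near $H = 0$ and exceeds 1 at $H = 1$, so the nonzero root can be found by bisection):

```python
import numpy as np

def solve_H(lam, s, w):
    """Nonzero solution H in (0,1) of Eq. (2.3), for dK approximated by atoms s with weights w."""
    g = lambda H: np.sum(w * s**2 / (H * np.exp(2 * lam) + (1 - H) * s**2))
    # g(0) = 1 is the trivial solution; the nonzero root is where g crosses 1 from below.
    grid = np.linspace(1e-9, 1.0, 2001)
    vals = np.array([g(H) for H in grid])
    a, b = grid[np.argmin(vals)], 1.0        # g dips below 1, then rises above it
    for _ in range(200):
        m = 0.5 * (a + b)
        if g(m) < 1.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Symplectic spectrum: singular values come in pairs s, 1/s with equal weight.
s = np.array([0.5, 2.0])
w = np.array([0.5, 0.5])
lam_max = 0.5 * np.log(np.sum(w * s**2))
H = solve_H(0.5 * lam_max, s, w)
```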

3. The Random Matrix Approximation

In this section, we argue that for a long chain of coupled oscillators, at large energy per oscillator, and in a regime where equipartition of the energy holds, one can approximate the spectrum of Liapunov exponents by studying random products of random matrices of the form of those appearing in Section 2. Our arguments make precise some ideas which appear in Paladin and Vulpiani [10]. Given a system with Hamiltonian

$$ H(p, q) = \sum_{i=1}^{N} p_i^2/2m + \sum_{i=1}^{N-1} V(q_i - q_{i+1}) , \eqno(3.1) $$

we let $\Phi^t$ be the associated Hamiltonian flow. The Liapunov exponents are then given by the large time behavior of $d\Phi^t$. For example, Oseledec's theorem tells us that if the system has an ergodic invariant measure, then for almost every choice of the initial point $(p^0, q^0)$ (w.r.t. the invariant measure),* the largest Liapunov exponent is given by

$$ \lambda_1 = \lim_{t\to\infty} \frac{1}{t} \log \| d\Phi^t(p^0, q^0) \| . \eqno(3.2) $$

Suppose $\tau > 0$ is some time interval. Then if we set

$$ (p^n, q^n) = \Phi^{n\tau}(p^0, q^0) , \eqno(3.3) $$

the chain rule of differentiation implies

$$ d\Phi^{n\tau}(p^0, q^0) = \prod_{j=0}^{n-1} d\Phi^\tau(p^j, q^j) . \eqno(3.4) $$

We now study (3.4) when $\tau$ is small. Then we obtain a good approximation to the solution of the equations of motion in the time interval $[\tau l, \tau(l+1)]$ by approximating the Hamiltonian by its Taylor series about the point $(p^l, q^l)$ to second order. This gives an effective Hamiltonian of the form

$$ H^{(2)}(p, q) = \frac12 \sum_{i=1}^{N} (p_i - p_i^l)^2 + \sum_{i=1}^{N} (p_i - p_i^l) p_i^l + \frac12 \sum_{i=1}^{N} (p_i^l)^2 + \mathcal{V}(q^l) + \sum_{i=1}^{N} (q_i - q_i^l) \frac{\partial \mathcal{V}}{\partial q_i}(q^l) + \frac12 \sum_{i,j=1}^{N} (q_i - q_i^l)(q_j - q_j^l) \frac{\partial^2 \mathcal{V}}{\partial q_i \partial q_j}(q^l) , $$

with $\mathcal{V}(q) = \sum_{i=1}^{N-1} V(q_i - q_{i+1})$. We have set the masses $m$ equal to 1. Note that the equations of motion for this reduced Hamiltonian are





$$ \frac{d}{dt} \begin{pmatrix} p \\ q \end{pmatrix} = K^l + M^l \begin{pmatrix} p \\ q \end{pmatrix} . $$

Here, $K^l$ is the column vector whose components are

$$ (K^l)_i = \begin{cases} -\dfrac{\partial \mathcal{V}}{\partial q_i}(q^l) + \displaystyle\sum_{j=1}^{N} q_j^l \dfrac{\partial^2 \mathcal{V}}{\partial q_i \partial q_j}(q^l) , & \text{for } i = 1, \ldots, N , \\ 0 , & \text{for } i = N+1, \ldots, 2N . \end{cases} $$

* If we suppose ergodicity, this is just the Liouville measure restricted to the energy shell, cf. e.g. [4].

and $M^l$ is the matrix

$$ M^l = \begin{pmatrix} 0 & -\dfrac{\partial^2 \mathcal{V}}{\partial q_i \partial q_j}(q^l) \\ 1 & 0 \end{pmatrix} . $$

We can solve these equations explicitly, and if we denote the corresponding flow by $\Phi_l^t$, we have

$$ \Phi_l^t \begin{pmatrix} p \\ q \end{pmatrix} = e^{M^l t} \begin{pmatrix} p \\ q \end{pmatrix} + e^{M^l t} \int_0^t dt'\, e^{-M^l t'} K^l . $$

Hence,

$$ d\Phi^\tau(p^l, q^l) \approx d\Phi_l^\tau = e^{\tau M^l} , \eqno(3.5) $$

when $\tau$ is sufficiently small. We then expect that the Liapunov exponents of our Hamiltonian should be well approximated by those of the product of matrices

$$ \prod_{l=0}^{n-1} e^{\tau M^l} . \eqno(3.6) $$

Note that because of the short range nature of $V$, we can compute the matrix

$$ (A(q^l))_{ij} \equiv \frac{\partial^2 \mathcal{V}}{\partial q_i \partial q_j}(q^l) , $$

and we find

$$ (A(q^l))_{ij} = \begin{cases} V''(q_i^l - q_{i+1}^l) + V''(q_{i-1}^l - q_i^l) , & \text{if } i = j , \\ -V''(q_i^l - q_j^l) , & \text{if } |i - j| = 1 , \\ 0 , & \text{otherwise.} \end{cases} $$

(The terms at the boundary ($i = 1, N$ or $j = 1, N$) are different, but this is irrelevant for what follows.) Note that $\sum_j (A(q^l))_{ij} = 0$ and $\sum_j (A(q^l))_{ji} = 0$ for $i = 2, \ldots, N-1$. Note also that the matrices $\exp(\tau M^l)$ are symplectic. We now make a number of precise physical assumptions about the behavior of the system (3.1). They will show that

- The numbers $V''(q_i^l - q_{i+1}^l)$ occurring in the matrices $A(q^l)$ can be viewed as random variables, $\omega_i$.
- There is a range of $\tau$ for which the approximation (3.5) is valid and for which successive matrices $M^l$ and $M^{l+1}$ are statistically independent.
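The symplecticity of $\exp(\tau M^l)$ holds because $M^l$ is a Hamiltonian matrix (its lower-left block is the identity and its upper-right block $-A$ is symmetric). A quick numerical sketch (assuming Python with NumPy; the couplings are placeholders, and a truncated Taylor series is adequate for the matrix exponential at small $\tau$) checks $S^T J S = J$:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for small norm(M))."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

rng = np.random.default_rng(2)
N = 5
omega = 0.5 + rng.random(N - 1)                 # placeholder couplings, all > 0
A = np.diag(np.r_[omega[0], omega[:-1] + omega[1:], omega[-1]])
A -= np.diag(omega, 1) + np.diag(omega, -1)
tau = 0.1
M = np.block([[np.zeros((N, N)), -A], [np.eye(N), np.zeros((N, N))]])
S = expm_series(tau * M)
J = np.block([[np.zeros((N, N)), np.eye(N)], [-np.eye(N), np.zeros((N, N))]])
# S is symplectic iff S^T J S = J.
```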

Our assumptions are:

E1. The energy of the orbit is $E \cdot N$ and the energy per oscillator, $E$, is large.
E2. The system is in a state of equipartition of the energy.*
E3. For large $q$, the function $V$ behaves like $V(q) \approx q^\alpha$, with $\alpha > 2$.

From equipartition and the virial theorem, we deduce that the energy in the system will be divided equally between the oscillators and between kinetic and potential energy. Using E1–E3, we see that typically the momenta and positions will have values

$$ p \approx E^{1/2} , \qquad q \approx E^{1/\alpha} . $$

In the following paragraphs, we perform calculations of orders of magnitude. We therefore omit all indices, and assume that the statistical properties of $q_i$ and of $q_i - q_{i+1}$ are the same. We also use interchangeably $V$ and $\mathcal{V}$ to denote the potential. If we consider the equations of motion corresponding to the Hamiltonian (3.1), we see that they are

$$ \dot p = -\mathrm{grad}\, V(q) , \qquad \dot q = p . $$

The solutions to these equations for times $t$ may thus be approximated by

$$ p(t) \approx p(0) - t\, V'(q(0)) , \qquad q(t) \approx q(0) + t\, p(0) . \eqno(3.7) $$

We estimate the time $t$ over which this approximation remains valid by comparing the above values of $p(t)$ and $q(t)$ to the values we would have obtained if we further subdivided the time interval. We find

$$ p(t) \approx p(\tfrac t2) - \tfrac t2 V'(q(\tfrac t2)) \approx p(0) - \tfrac t2 V'(q(0)) - \tfrac t2 V'\!\left( q(0) + \tfrac t2 p(0) \right) \approx p(0) - t\, V'(q(0)) - \left( \tfrac t2 \right)^2 V''(q(0))\, p(0) . $$

* In [6], numerical evidence is presented for the existence of a finite equipartition threshold as $N \to \infty$.

Similarly,

$$ q(t) \approx q(\tfrac t2) + \tfrac t2 p(\tfrac t2) \approx q(0) + \tfrac t2 p(0) + \tfrac t2 \left( p(0) - \tfrac t2 V'(q(0)) \right) \approx q(0) + t\, p(0) - \left( \tfrac t2 \right)^2 V'(q(0)) . $$

Thus, the approximation Eq. (3.5) will be good if

$$ |(t)^2 V''(q(0))\, p(0)| \ll |t\, V'(q(0))| , \eqno(3.8) $$
$$ |(t)^2 V'(q(0))| \ll |t\, p(0)| . \eqno(3.9) $$

These are implied by

$$ |t| \ll \min\left( \frac{V'(q(0))}{p(0)\, V''(q(0))} , \frac{p(0)}{V'(q(0))} \right) \approx \min\left( \frac{E^{1-1/\alpha}}{E^{1/2} E^{1-2/\alpha}} , \frac{E^{1/2}}{E^{1-1/\alpha}} \right) \approx E^{(1/\alpha)-(1/2)} . \eqno(3.10) $$

Note that the flow Eq. (3.7) gives an expression for the tangent map which is

$$ d\Phi^t \begin{pmatrix} p \\ q \end{pmatrix} \approx \begin{pmatrix} 1 & -t\, V''(q) \\ t & 1 \end{pmatrix} \approx \exp\left[ t \begin{pmatrix} 0 & -V''(q) \\ 1 & 0 \end{pmatrix} \right] , \eqno(3.11) $$

provided $|(t)^2 V''(q)| \ll 1$, which is valid if

$$ |t| \ll E^{1/\alpha - 1/2} . \eqno(3.12) $$

Note that this is the same inequality as the one in (3.10).

Remark. We should also check that when we further subdivide the time interval we do not change the tangent map to the flow very much either. To see this, note that the tangent map to the flow computed with a time interval $t/2$ is just

$$ \begin{pmatrix} 1 - (\tfrac t2)^2 V''(q(0)) & -t\, V''(q(0)) - (\tfrac t2)^2 V'''(q(0))\, p(0) \\ t & 1 - (\tfrac t2)^2 V''(q(0)) \end{pmatrix} . $$

This is close to the approximation we previously derived, provided

$$ |(t)^2 V''(q(0))| \ll 1 , \qquad |(t)^2 V'''(q(0))\, p(0)| \ll |t\, V''(q(0))| . $$

Both of these inequalities are satisfied if $|t| \ll E^{1/\alpha - 1/2}$. We now compute the amount by which the components, $V''$, of the tangent matrix change during a time $t$. This change is of the order

$$ t\, \frac{d}{dt} V'' \approx t\, V'''(q)\, \dot q \approx t\, V'''(q)\, p \approx t\, E^{1/2} E^{1 - 3/\alpha} . $$

Suppose now we assume $t \approx E^{1/\alpha - 1/2}$, in accordance with (3.12). Then the change in the tangent matrix elements will be of order

$$ E^{1/\alpha - 1/2}\, E^{1/2}\, E^{1 - 3/\alpha} \approx E^{1 - 2/\alpha} . $$

This is of the same order of magnitude as the size of $V''$. Hence, under the assumptions E1–E3, we see that for $t \gtrsim E^{2/\alpha - 1}$ the elements in the tangent map $d\Phi^{\Delta t}$ at $t = 0$ will be very different from those in $d\Phi^{\Delta t}$ at $t = \Delta t$, and so we may choose them to be randomly and independently distributed with respect to their previous values and among each other. Note that the distribution $F$ of the elements of $d\Phi^{\Delta t}$ depends on the potential $V$ and on the invariant measure of the flow. We make the further assumption

E4. Statistical mechanics holds at large energies for systems with many degrees of freedom.

We can then go one step further and compute the density $F$ at energy $E$ per degree of freedom from a knowledge of the potential alone. First note that the quantities $V''(q_i^l - q_{i+1}^l)$ and $V''(q_{i-1}^l - q_i^l)$ are uncorrelated. Thus we can treat the various elements of the matrices $A(q^l)$ as independent random variables. Because of equipartition, the density $F$ of these random variables can be determined numerically by sampling the coordinates of two oscillators. Furthermore, we can compute the density from a knowledge of the potential alone. The argument goes as follows: The integral of the function $F$ up to $\omega$ is, according to what we have said above, the probability that $V''(q_i - q_{i+1})$ takes a value less than $\omega$. We denote $G(\omega) = \int_{-\infty}^{\omega} d\omega'\, F(\omega')$. In the canonical ensemble, we first calculate

$$ G_\beta(\omega) = \frac{\int_{E(\omega)} e^{-\beta H_N(p,q)}\, dp\, dq}{\int e^{-\beta H_N(p,q)}\, dp\, dq} , \eqno(3.13) $$

where

$$ E(\omega) = \left\{ (p, q) \in \mathbf{R}^{2N} \mid V''(q_{N/2} - q_{N/2+1}) < \omega \right\} . \eqno(3.14) $$

We may integrate over the $p$'s and we get

$$ G_\beta(\omega) = \frac{\int_{E(\omega)} e^{-\beta \sum_{i=1}^{N-1} V(q_i - q_{i+1})}\, dq}{\int e^{-\beta \sum_{i=1}^{N-1} V(q_i - q_{i+1})}\, dq} , \eqno(3.15) $$

which, by a change of variables to $q_i - q_{i+1}$, leads to

$$ G_\beta(\omega) = \frac{\int_{V''(x) < \omega} e^{-\beta V(x)}\, dx}{\int e^{-\beta V(x)}\, dx} . \eqno(3.16) $$

Observe that if $V$ is convex, then the support of $G$ is contained in $\{\omega > 0\}$, which is our condition F3. By the standard techniques of statistical mechanics, we have then

$$ G(\omega) = G_{\beta^*}(\omega) , $$

where $\beta^*$ is determined by the equation

$$ E N = \frac{\int H_N(p, q)\, e^{-\beta^* H_N(p,q)}\, dp\, dq}{\int e^{-\beta^* H_N(p,q)}\, dp\, dq} , \eqno(3.17) $$

and $E$ is the energy per oscillator. The simple set of Eqs. (3.13)–(3.17) together with our earlier results show: The density $H$ of Liapunov exponents of a chain of non-linear oscillators at high energy per oscillator can be predicted through a series of integral transforms of the interaction potential, by solving first Eq. (3.17) to obtain the probability distribution of the elements of the matrices (3.11) and then applying the algorithm A1–A4 to the matrices (3.11).
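As an illustration of Eq. (3.16), the following sketch (assumptions: Python with NumPy, the placeholder potential $V(x) = x^4/4$ with $V''(x) = 3x^2$, $\beta$ fixed at 1 rather than solved from (3.17), and plain Riemann summation on a truncated grid) evaluates $G_\beta(\omega)$:

```python
import numpy as np

def G_beta(omega, beta, xs):
    """Eq. (3.16) for V(x) = x**4/4 (so V''(x) = 3*x**2), by Riemann sum on the grid xs."""
    w = np.exp(-beta * xs**4 / 4.0)
    mask = 3.0 * xs**2 < omega              # the region E(omega): V''(x) < omega
    return (w * mask).sum() / w.sum()       # uniform grid: the spacing cancels in the ratio

xs = np.linspace(-10, 10, 20001)
vals = [G_beta(om, beta=1.0, xs=xs) for om in (0.0, 1.0, 5.0, 50.0)]
```

As expected for a distribution function, the values increase monotonically from 0 toward 1 as $\omega$ grows.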


4. Liapunov Exponents for Products of Symplectic Matrices

In the present section, we argue that one can apply a modification of the theory developed by Wachter [13] and Newman [9] to evaluate the distribution of Liapunov exponents of a random product of matrices in terms of the density of eigenvalues of a single matrix. We begin by studying symplectic matrices of size $2N \times 2N$ and derive one of the formulas in Newman for this class of matrices. We can then apply his theory to take the thermodynamic limit $N \to \infty$. Let $S_j(\tau)$ be a set of independent, identically distributed random matrices of the form of Eq. (1.1); however, the index $j$ denotes now different matrices, and not the size of a time interval $\tau$, which we assume fixed. We also define

$$ S_n \equiv \prod_{j=1}^{n} S_j(\tau) . $$

If we assume that the Liapunov exponents of the product are ordered as $\lambda_1 \ge \lambda_2 \ge \ldots$, then for almost every choice of (linearly independent) normalized vectors $x_1, \ldots, x_k$ we have

$$ \lambda_1 + \cdots + \lambda_k = \lim_{n\to\infty} \frac1n \log \left( \| S_n x_1 \wedge \cdots \wedge S_n x_k \| \right) . $$

Let $x_l(j)$, for $l = 1, \ldots, k$ and $j = 0, \ldots, n$, be defined by $x_l(0) = x_l$ and $x_l(j) = S_j x_l(j-1)$. Then

$$ \lambda_1 + \cdots + \lambda_k = \lim_{n\to\infty} \frac1n \sum_{j=1}^{n} \log \frac{\| S_j x_1(j-1) \wedge \cdots \wedge S_j x_k(j-1) \|}{\| x_1(j-1) \wedge \cdots \wedge x_k(j-1) \|} $$
$$ = \lim_{n\to\infty} \frac1n \left[ \sum_{j=1}^{n} \log \left\| \frac{S_j x_1(j-1)}{\|x_1(j-1)\|} \wedge \cdots \wedge \frac{S_j x_k(j-1)}{\|x_k(j-1)\|} \right\| - \sum_{j=1}^{n} \log \left\| \frac{x_1(j-1)}{\|x_1(j-1)\|} \wedge \cdots \wedge \frac{x_k(j-1)}{\|x_k(j-1)\|} \right\| \right] . \eqno(4.1) $$

It is a consequence of Oseledec's theorem that $\lambda_1 + \cdots + \lambda_k$ is almost surely independent of the choice of $x_1, \ldots, x_k$. Therefore, we treat $x_1, \ldots, x_k$ as independent random vectors, uniformly distributed on the unit sphere. Denoting the expectation with respect to these variables as $E_x$, we have

$$ \lambda_1 + \cdots + \lambda_k = \lim_{n\to\infty} \frac1n E_x\left( \log \| S_n x_1 \wedge \cdots \wedge S_n x_k \| \right) $$

$$ = \lim_{n\to\infty} \frac1n \left[ \sum_{j=1}^{n} E_x\!\left( \log \left\| \frac{S_j x_1(j-1)}{\|x_1(j-1)\|} \wedge \cdots \wedge \frac{S_j x_k(j-1)}{\|x_k(j-1)\|} \right\| \right) - \sum_{j=1}^{n} E_x\!\left( \log \left\| \frac{x_1(j-1)}{\|x_1(j-1)\|} \wedge \cdots \wedge \frac{x_k(j-1)}{\|x_k(j-1)\|} \right\| \right) \right] . \eqno(4.2) $$

We now make the following hypothesis.

R1. The vectors $e_l(j) \equiv x_l(j-1)/\|x_l(j-1)\|$, $j = 1, 2, \ldots$, are, for fixed $l$ and almost every choice of $x_l$ and $S_1, S_2, \ldots$, uniformly distributed on the unit sphere.

Remark. This hypothesis means that multiplication of a random vector by a random sequence of matrices of the form $S_j$ does not tend to rotate that vector into any preferred directions. This hypothesis seems not to be satisfied in general for the systems under consideration. Numerical experiments rather suggest that the vectors are localized in the following sense: If one of the components of $e_l(j)$ is large (in modulus) then nearby components tend to be large, too. As $j$ varies, the components at which these maxima occur move in an apparently random fashion. In the absence of a theory of these time-dependent localization problems, we continue our analysis under the hypothesis R1. In those cases where numerical evidence indicates that R1 is most nearly satisfied, we verify that we obtain good agreement between theory and numerical experiment, see Section 6.

Consider now the first term in the expression (4.2) in the light of this hypothesis. We first note that it can be rewritten as

$$ \lim_{n\to\infty} \frac1n \sum_{j=1}^{n} E_x\left( \log \| S_j e_1(j) \wedge \cdots \wedge S_j e_k(j) \| \right) , $$

where the $e_l(j)$ are random, uniformly distributed unit vectors in $\mathbf{R}^{2N}$. Note that $e_l(j)$ is independent of $e_{l'}(j)$ if $l \ne l'$, since these two vectors are the product of a sequence of invertible matrices and two independent random variables. Therefore, we have

$$ \lim_{n\to\infty} \frac1n \sum_{j=1}^{n} E\left( \log \| S_j e_1(j) \wedge \cdots \wedge S_j e_k(j) \| \right) = E\left( \log \| S e_1 \wedge \cdots \wedge S e_k \| \right) , $$

where $E$ denotes the expectation with respect to both the random matrices $S$ and the independent, uniformly distributed, unit vectors $x_l$. We have omitted the indices $j$ over which these expectations are taken.

Let $X_l(j)$ be $2N$-dimensional vectors whose entries are Gaussian random variables of mean zero and variance 1. Note that the subspace spanned by $\{x_1(j), \ldots, x_k(j)\}$ is equidistributed with that spanned by $\{X_1(j), \ldots, X_k(j)\}$. Thus, we have

$$ E(\log \| S e_1 \wedge \cdots \wedge S e_k \|) = E(\log \| S X_1 \wedge \cdots \wedge S X_k \|) - k\, E(\log \|X\|) . $$

On the l.h.s., $E$ denotes the expectation w.r.t. the probability distribution of the random matrices $S$ and the random unit vectors $x_l$. On the r.h.s., it denotes expectation w.r.t. the random matrices $S$ and the Gaussian random variables $X_l$. The term $k E(\log \|X\|)$ comes from normalizing the $X_l$'s. Following Newman, we let $G(\sigma_1, \ldots, \sigma_{2N})$ be the $2N \times 2N$ matrices whose entries are independent mean zero Gaussian random variables, such that elements of the $l$-th row have variance $\sigma_l^2$. Then

$$ E(\log \| S X_1 \wedge \cdots \wedge S X_k \|) = E(\log \| S G(1,\ldots,1) v_1 \wedge \cdots \wedge S G(1,\ldots,1) v_k \|) = \tfrac12 E\big( \log \det( P_k G^T(1,\ldots,1)\, S^T S\, G(1,\ldots,1) P_k ) \big) . $$

Here, $v_1, \ldots, v_{2N}$ are the natural basis vectors in $\mathbf{R}^{2N}$, and $P_k$ is the projection onto the span of $v_1, \ldots, v_k$. Let $O_j$ be the orthogonal matrix which diagonalizes $S_j^T S_j$, i.e., $O_j S_j^T S_j O_j^T = D_j^2$. Then

$$ \tfrac12 E\big( \log \det( P_k G^T(1,\ldots,1)\, S^T S\, G(1,\ldots,1) P_k ) \big) = \tfrac12 E\big( \log \det( P_k G^T(1,\ldots,1)\, O^T D^2 O\, G(1,\ldots,1) P_k ) \big) = \tfrac12 E\big( \log \det( P_k G^T(1,\ldots,1)\, D^2\, G(1,\ldots,1) P_k ) \big) , $$

because $O G(1,\ldots,1)$ is equidistributed with $G(1,\ldots,1)$. If we let $\mu_1(j), \ldots, \mu_{2N}(j)$ be the eigenvalues of $D_j$, i.e., the eigenvalues of $|S_j|$, and we define

$$ \chi_k(\sigma_1, \ldots, \sigma_{2N}) = \tilde E\big( \log \| G(\sigma_1,\ldots,\sigma_{2N}) v_1 \wedge \cdots \wedge G(\sigma_1,\ldots,\sigma_{2N}) v_k \| \big) , $$

where $\tilde E$ denotes an average over the entries of $G$ with $\sigma_1, \ldots, \sigma_{2N}$ fixed, we obtain

$$ \lim_{n\to\infty} \frac1n \sum_{j=1}^{n} \log \| S_j e_1(j) \wedge \cdots \wedge S_j e_k(j) \| = E\big( \log \| D G(1,\ldots,1) v_1 \wedge \cdots \wedge D G(1,\ldots,1) v_k \| \big) - k\, E(\log \|X\|) = E\big( \log \| G(\mu_1,\ldots,\mu_{2N}) v_1 \wedge \cdots \wedge G(\mu_1,\ldots,\mu_{2N}) v_k \| \big) - k\, E(\log \|X\|) = \hat E\big( \chi_k(\mu_1, \ldots, \mu_{2N}) \big) - k\, E(\log \|X\|) . $$

Here, $\hat E$ denotes an average over the random eigenvalues $\mu_1, \ldots, \mu_{2N}$. A similar argument shows that

$$ \lim_{n\to\infty} \frac1n \sum_{j=1}^{n} \log \left\| \frac{x_1(j-1)}{\|x_1(j-1)\|} \wedge \cdots \wedge \frac{x_k(j-1)}{\|x_k(j-1)\|} \right\| = \chi_k(1, \ldots, 1) - k\, E(\log \|X\|) . $$

Thus, we obtain

$$ \lambda_1 + \cdots + \lambda_k = \hat E\big( \chi_k(\mu_1, \ldots, \mu_{2N}) \big) - \chi_k(1, \ldots, 1) , \eqno(4.3) $$

which is precisely Eq. (1.12) of [9]. This allows us to apply the powerful result of Newman [9, Theorem 2.11]. His work uses an interesting study of the density of states of random matrices due to Wachter [13]. We repeat Newman's result in its original form (adapted to our notation), and then state which assumptions are changed in our application.

Theorem 4.1. Let $\lambda_1^N \ge \cdots \ge \lambda_N^N$ denote the Liapunov exponents for the product $M_t \cdots M_1$ of $N \times N$ real i.i.d. random matrices, and let $H_N$ denote the empirical distribution of the $\lambda_i^N$'s,

$$ H_N(\lambda) = N^{-1} \cdot (\text{number of } i \text{ with } \lambda_i^N \le \lambda) . $$

Suppose $M_1^T M_1$ has, for each $N$, a rotation invariant distribution. Let $K_N$ denote the (random) empirical distribution of the eigenvalues $\mu_i^N$ of $|M_1|$. Suppose that there exist nonrandom $\epsilon > 0$ and $\delta < \infty$ such that, with probability one, $\epsilon \le \mu_i^N \le \delta$ for all $i$ and $N$. Suppose further that there is a nonrandom distribution function $K$ such that with probability one, $K_N \to K$ (at all continuity points of $K$). Then $H_N(\lambda) \to H(\lambda)$ for all $\lambda$, where $H$ is a continuous distribution function satisfying:

$$ H(\lambda) = 0 , \quad \text{for } \lambda \le -\tfrac12 \log \int \mu^{-2}\, dK(\mu) ; $$
$$ H(\lambda) = 1 , \quad \text{for } \lambda \ge \tfrac12 \log \int \mu^{2}\, dK(\mu) . $$

$H(\lambda)$ is, for all other $\lambda$, the unique solution in $(0, 1)$ of

$$ \int dK(\mu)\, \frac{\mu^2}{H e^{2\lambda} + (1 - H)\mu^2} = 1 . $$

The discussion leading to (4.3) shows how the assumption on rotational invariance is replaced by the weaker assumption R1. The assumption that $|S|$ has eigenvalues uniformly bounded away from 0 and $\infty$ is obviously fulfilled in our case, since $S$ is symplectic and bounded, i.e., its eigenvalues come in pairs $\mu \leftrightarrow \mu^{-1}$.
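The pairing $\mu \leftrightarrow \mu^{-1}$ forces the limiting Liapunov spectrum of such symplectic products to be symmetric about 0; in particular $\lambda_1 \ge 0$. A small Monte Carlo sketch (assumptions: Python with NumPy; single-oscillator $2\times 2$ matrices of the form (2.2), with frequencies drawn uniformly from (0.5, 1.5) as a stand-in for the $S_j(\tau)$) estimates $\lambda_1$ by norm growth with renormalization:

```python
import numpy as np

def S_of_tau(omega, tau):
    """One-oscillator version of Eq. (2.2): a random 2x2 symplectic matrix."""
    nu = np.sqrt(omega)
    c, s = np.cos(tau * nu), np.sin(tau * nu)
    return np.array([[c, -nu * s], [s / nu, c]])

def top_lyapunov(n_steps, tau, rng):
    """Estimate lambda_1 of the product S_n ... S_1 by log-norm growth with renormalization."""
    v = np.array([1.0, 0.0])
    acc = 0.0
    for _ in range(n_steps):
        v = S_of_tau(0.5 + rng.random(), tau) @ v
        norm = np.linalg.norm(v)
        acc += np.log(norm)
        v /= norm
    return acc / n_steps

rng = np.random.default_rng(3)
lam1 = top_lyapunov(20000, tau=0.4, rng=rng)
```

Since $\lambda_1 + \lambda_2 = 0$ for these determinant-one matrices, the estimate should be non-negative up to Monte Carlo error.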


5. Random Tridiagonal Matrices

In this section we justify steps A1 and A2 of the algorithm of Section 2. We want to study the distribution of eigenvalues of the matrix (1.2), and therefore we consider first the eigenvalue problem for such matrices. Recall that

$$ A = \begin{pmatrix} \omega_1 & -\omega_1 & 0 & \cdots & 0 \\ -\omega_1 & \omega_1+\omega_2 & -\omega_2 & \cdots & 0 \\ 0 & -\omega_2 & \omega_2+\omega_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \omega_{N-1} \end{pmatrix} . $$

The eigenvalue problem $Ax = \lambda x$ can be rewritten as the system of equations in the unknowns $x_1, \ldots, x_N$,

$$ -\omega_{n-1} x_{n-1} + (\omega_{n-1} + \omega_n) x_n - \omega_n x_{n+1} = \lambda x_n , \eqno(5.1) $$

for $n = 2, \ldots, N-1$, and with obvious modifications for $n = 1$ and $n = N$. We rewrite the problem, defining $z_n = x_{n+1}/x_n$, as a system of transformations

$$ z_n = \frac{1}{\omega_n} \left( \omega_{n-1} + \omega_n - \lambda - \frac{\omega_{n-1}}{z_{n-1}} \right) . \eqno(5.2) $$

If we denote the distribution of $z_n$ by $G_n$ and that of the random variables $\omega_n$ by $F$, and assume that we are given $G_1$, then we can write

$$ G_{n+1}(z_{n+1}) = \int \prod_{j=1}^{n+1} F(\omega_j)\, d\omega_j \prod_{j=1}^{n} dz_j\, G_1(z_1) \prod_{j=2}^{n+1} \delta\!\left( z_j - \frac{1}{\omega_j}\left( \omega_{j-1} + \omega_j - \lambda - \frac{\omega_{j-1}}{z_{j-1}} \right) \right) . \eqno(5.3) $$
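The recursion (5.2) can be checked directly: generating the ratios $z_n$ from (5.2) and rebuilding $x$ via $x_{n+1} = z_n x_n$ produces a vector that satisfies the interior equations (5.1) identically. A sketch (assuming Python with NumPy; taking $\lambda = -1$, which lies below the spectrum of the positive semidefinite $A$, keeps all $z_n > 1$ and the recursion free of zero divisions):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8
w = 0.5 + rng.random(N - 1)                   # couplings omega_1 .. omega_{N-1} in (0.5, 1.5)
A = np.diag(np.r_[w[0], w[:-1] + w[1:], w[-1]])
A -= np.diag(w, 1) + np.diag(w, -1)

lam = -1.0                                    # below the spectrum of A >= 0
z = np.empty(N - 1)
z[0] = 1.5                                    # arbitrary starting ratio
for i in range(1, N - 1):                     # the transfer recursion, Eq. (5.2)
    z[i] = (w[i - 1] + w[i] - lam - w[i - 1] / z[i - 1]) / w[i]

x = np.concatenate(([1.0], np.cumprod(z)))    # rebuild x from the ratios x_{n+1} = z_n x_n
residual = A @ x - lam * x                    # interior rows vanish by construction
```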

Suppose we define inductively a sequence of functions of two variables by

$$ B_1(\omega, z) = G_1(z) , \eqno(5.4) $$
$$ B_{j+1}(\omega, z) = \int F(\omega') B_j(\omega', z')\, \delta\!\left( z - \frac{1}{\omega}\left( \omega' + \omega - \lambda - \frac{\omega'}{z'} \right) \right) d\omega'\, dz' . \eqno(5.5) $$

Clearly, we find

$$ G_2(z_2) = \int F(\omega_2) F(\omega_1) G_1(z_1)\, \delta\!\left( z_2 - \frac{1}{\omega_2}\left( \omega_1 + \omega_2 - \lambda - \frac{\omega_1}{z_1} \right) \right) d\omega_1\, d\omega_2\, dz_1 = \int F(\omega_2) B_2(\omega_2, z_2)\, d\omega_2 , \eqno(5.6) $$

and, by induction,

$$ G_n(z_n) = \int F(\omega_n) B_n(\omega_n, z_n)\, d\omega_n , \eqno(5.7) $$

for all $n \ge 2$. Instead of studying $G_n$, for which we want to show the existence of a limit as $n \to \infty$, we study the functions $B_n$. We rewrite Eq. (5.5) as

$$ B_{j+1}(\omega, z) = \int F(\omega') B_j(\omega', z')\, \delta\!\left( \omega(z-1) + \omega'\left( \frac{1}{z'} - 1 \right) + \lambda \right) |\omega|\, d\omega'\, dz' , \eqno(5.8) $$

and we see that $|\omega|^{-1} B_{j+1}(\omega, z)$ is a function of the combination $\omega(z-1)$ alone. We thus define

$$ U_k(\omega(z-1)) = B_k(\omega, z)\, |\omega|^{-1} \quad \text{for } k = 1, 2, \ldots , \eqno(5.9) $$

and we set

$$ t = \omega(z-1) , \qquad t' = \omega'(z'-1) . \eqno(5.10) $$

Note that the Jacobian of the transformation $(\omega', z') \to (\omega', t')$ is $|\omega'|$ and that $z' = 1 + t'/\omega'$, and

$$ \omega'\left( \frac{1}{z'} - 1 \right) = \omega'\, \frac{-t'/\omega'}{1 + t'/\omega'} = -\frac{t'\omega'}{t' + \omega'} . \eqno(5.11) $$

Therefore, we get

$$ U_{j+1}(t) = \int dt'\, d\omega'\, F(\omega') U_j(t')\, \delta\!\left( t - \frac{t'\omega'}{t' + \omega'} + \lambda \right) . \eqno(5.12) $$

We want to eliminate $\omega'$ by integrating over the $\delta$-function. Note that the $\delta$-function has support on

$$ \omega' = -\frac{t'(t+\lambda)}{t - t' + \lambda} , $$

and that

$$ \partial_{\omega'}\left( t - \frac{t'\omega'}{t'+\omega'} + \lambda \right) = -\frac{t'}{t'+\omega'} + \frac{t'\omega'}{(t'+\omega')^2} = -\frac{t'^2}{(t'+\omega')^2} . \eqno(5.13) $$

Therefore,

$$ \delta\!\left( t - \frac{t'\omega'}{t'+\omega'} + \lambda \right) = \left( \frac{t'+\omega'}{t'} \right)^2 \delta\!\left( \omega' + \frac{t'(t+\lambda)}{t - t' + \lambda} \right) . \eqno(5.14) $$

Using now

$$ \frac{t'}{t'+\omega'} = \frac{t'}{t' - \frac{t'(t+\lambda)}{t - t' + \lambda}} = -\frac{t - t' + \lambda}{t'} , \eqno(5.15) $$

we see finally that

$$ U_{j+1}(t) = \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) U_j(t') \left( \frac{t'}{t'-t-\lambda} \right)^2 . \eqno(5.16) $$

We define the operator $T$ (which depends on $F$) by

$$ (Tu)(t) = \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 u(t') . \eqno(5.17) $$

Note that $T$ maps non-negative functions to non-negative functions; in fact, it is "positivity improving", i.e., $(Tu)(t) > 0$ for all $t$, if $u(t) \ge 0$ for all $t$ and $u$ does not vanish identically (cf. the remark after (5.26)). We want to find the fixed points of Eq. (5.16), which is equivalent to finding the fixed points of (5.17). This will allow us to find the density of states of $A$ by applying ideas which are originally due to Schmidt [11], and have been more recently elaborated by Simon and Taylor [12]. Let $\{e_N^\omega(j)\}_{j=1,\ldots,N}$ be the eigenvalues of the system (5.1). If we define

$$ L(\lambda) = \lim_{N\to\infty} \frac1N \cdot \text{number of } \{ j \mid e_N^\omega(j) < \lambda \} , $$

then $L(\lambda)$ exists and is almost surely independent of $\omega$, cf. Benderskii, Pastur [1], Kirsch, Martinelli [5].
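Numerically, the fixed point $U^*$ of $T$ (Theorem 5.2 below) can be approximated by iterating a discretized version of (5.17), as in step A1 of the algorithm. A rough sketch (assumptions: Python with NumPy, a narrow Gaussian centered at $\omega = 1$ as a placeholder for $F$, a truncated grid, a crude guard at the kernel's singularity, and renormalization at every step to compensate for truncation):

```python
import numpy as np

def apply_T(u, t, lam, F):
    """Discretized Eq. (5.17): (Tu)(t) = int dt' F(t'(t+lam)/(t'-t-lam)) (t'/(t'-t-lam))^2 u(t')."""
    dt = t[1] - t[0]
    Tu = np.empty_like(u)
    for i, ti in enumerate(t):
        denom = t - ti - lam
        denom = np.where(np.abs(denom) < 1e-9, 1e-9, denom)   # crude guard at the singularity
        Tu[i] = np.sum(F(t * (ti + lam) / denom) * (t / denom) ** 2 * u) * dt
    return Tu

F = lambda x: np.exp(-0.5 * (x - 1.0) ** 2 / 0.01) / np.sqrt(2 * np.pi * 0.01)
t = np.linspace(-8.0, 8.0, 801)
lam = 0.5
u = np.ones_like(t)
u /= u.sum() * (t[1] - t[0])
for _ in range(50):
    u = apply_T(u, t, lam, F)
    u /= u.sum() * (t[1] - t[0])               # keep the normalization of A1 on the truncated grid
```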

If we rewrite (5.1) as (5.2), and are given the distribution $G_1(z_1)$, then (5.3) determines $G_n(z_n)$. Using now the condition F3 on $F$, i.e., that the support of $F$ lies on one side of zero, we see that $A$ is a definite matrix, and by Corollary 2.2 of [12] we have

$$ 1 - L(-\lambda) = \lim_{M\to\infty} \frac1M \sum_{n=1}^{M} \int_0^1 dz\, G_n(z) . $$

(In [12], the operator is considered with opposite signs in the off-diagonal elements; hence we have here $1 - L(-\lambda)$ rather than $L(\lambda)$.) By Eq. (5.7) and the definition of $U_n$, we have

$$ G_n(z) = \int_{-\infty}^{\infty} d\omega\, F(\omega) |\omega|\, U_n(\omega(z-1)) . $$

On the other hand, given $G_1$ we can determine $U_1$ and then $U_{n+1} = T^n U_1$. Thus,

$$ G_{n+1}(z) = \int_{-\infty}^{\infty} d\omega\, F(\omega) |\omega|\, T^n U_1(\omega(z-1)) . $$

In particular, we have the

Proposition 5.1. If $\lim_{M\to\infty} \frac1M \sum_{n=0}^{M-1} T^n U_1 = U^*$, then the integrated density of states for $A$ is given by

$$ 1 - L(-\lambda) = \int_0^1 dz \int_{-\infty}^{\infty} d\omega\, U^*(\omega(z-1)) |\omega| F(\omega) . $$

(Recall that $T$, and hence $U^*$, depend on $\lambda$.) We now prove that the above limit exists. We will use the notation $\|f\|_p = \left( \int |f(x)|^p dx \right)^{1/p}$, for $1 \le p < \infty$, and $\|f\|_\infty = \sup_x |f(x)|$. Our first result is

Theorem 5.2. Assume that $\sup_\zeta F(\zeta)\zeta^2 < \infty$ and that $\int dx\, F(x) = 1$. If $\lambda \ne 0$, then the limit $\lim_{M\to\infty} \frac1M \sum_{n=0}^{M-1} T^n U$ exists for every function $U$ in $L^1$ with $\|U\|_1 = 1$, and is equal to a function $U^*$ in $L^1$. In fact, $U^*$ is the unique eigenvector of $T$ with eigenvalue 1.

Proof. This proof is similar to the one given by Delyon, Kunz, and Souillard [2]. It is based on the following two lemmas:

Lemma 5.3. The operator $T$ is a contraction on $L^1$.

and

Lemma 5.4. For $\lambda \ne 0$, the operator $T^2$ is bounded from $L^1$ to $L^\infty$ by $\mathrm{const} \cdot \lambda^{-2}$, where the constant depends only on $F$.

Proof of Lemma 5.3. We have

$$ \|Tu\|_1 \le \int dt \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 |u(t')| . \eqno(5.18) $$

Changing variables to $t'$ and $z$ with

$$ z = \frac{t'(t+\lambda)}{t'-t-\lambda} , \eqno(5.19) $$

we have

$$ dz = \left( \frac{t'}{t'-t-\lambda} \right)^2 dt , \eqno(5.20) $$

and hence

$$ \|Tu\|_1 \le \int dz\, dt'\, F(z) |u(t')| \le \|u\|_1 , $$

since $F$ is a probability measure. The proof is complete.

Remark. The Eq. (5.20) also shows that if $u \ge 0$, then $\|Tu\|_1 = \|u\|_1$.

Proof of Lemma 5.4. We assume, without loss of generality, that $u \ge 0$. Then

$$ (Tu)(t) = \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 u(t') \le \frac{\|u\|_1}{|t+\lambda|^2} \sup_\zeta F(\zeta)\zeta^2 \equiv \frac{B_0 \|u\|_1}{|t+\lambda|^2} . \eqno(5.21) $$

If $t + \lambda$ is bounded away from 0, (5.21) and Lemma 5.3 lead to a bound for $T^2 u$:

$$ |T^2 u(t)| \le \frac{B_0 \|u\|_1}{|t+\lambda|^2} . \eqno(5.22) $$

We next consider the integral

$$ (T^2 u)(t) = \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 (Tu)(t') , \eqno(5.23) $$

which we separate into the regions $|t'| \ge 2|t+\lambda|$ and $|t'| \le 2|t+\lambda|$. It is convenient to introduce the notation $q = t + \lambda$. In the region $|t'| \ge 2|q|$, we find

$$ \left| \frac{t'}{t'-t-\lambda} \right| \le 2 , $$

and hence

$$ \int_{|t'| \ge 2|q|} dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 (Tu)(t') \le 4 \|F\|_\infty \|Tu\|_1 . \eqno(5.24) $$

To discuss the region $|t'| \le 2|q|$ we change variables to

$$ z = \frac{t' q}{t' - q} , \eqno(5.25) $$

so that $t' = zq/(z-q)$. We get the bound, using (5.21),

$$ \int_{|t'| \le 2|q|} dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 (Tu)(t') = \int_{|z/(z-q)| \le 2} dz\, F(z) \left( \frac{z}{z-q} \right)^2 (Tu)\!\left( \frac{zq}{z-q} \right) \le B_0 \|u\|_1 \int_{|z/(z-q)| \le 2} dz\, F(z) \left( \frac{z}{zq + \lambda(z-q)} \right)^2 . $$

If $|q| > \lambda/4$, the assertion of the lemma follows already from (5.22). So we may assume $|q| < \lambda/4$. This, together with $|z/(z-q)| \le 2$, implies

$$ \left( \frac{z}{zq + \lambda(z-q)} \right)^2 < B_2 \lambda^{-2} . $$

The proof of Lemma 5.4 is complete.

We now proceed to the

Proof of Theorem 5.2. We have already seen that

$$ \|T\|_1 \le 1 , \eqno(a) $$

and that

$$ (T^n u) \text{ is uniformly bounded in } L^\infty \text{ for } n = 2, 3, \ldots . \eqno(b) $$

By Theorem IV.8.9 in [3], a set $C \subset L^1(\mathbf{R}; \mu(ds))$ is weakly sequentially compact if it is bounded and if, for each decreasing sequence of sets $\{E_n\}$ with empty intersection, the limit

$$ \lim_{n\to\infty} \int_{E_n} f(s)\, \mu(ds) = 0 $$

is uniform for $f$ in $C$. If the functions $f$ in $C$ are all uniformly bounded in $L^\infty$, this condition is satisfied. Thus, the sequence $\{T^n U\}_{n \ge 2}$ is weakly sequentially compact for any $U$ in the unit ball of $L^1$. By (a), $\frac1M \sum_{n=0}^{M-1} T^n U$ is bounded for $U$ in the unit ball, and $T^n U/n \to 0$ as $n \to \infty$. Since the unit ball is a fundamental set of $L^1$, these facts, plus Corollary VII.5.3 in [3], imply that

$$ \lim_{M\to\infty} \frac1M \sum_{n=0}^{M-1} T^n U = U^* . $$

Since $\|TU\|_1 = \|U\|_1 = 1$ and $T$ maps positive functions to positive functions, we see that for positive $U$ the limit $U^*$ is not zero. Thus 1 is an eigenvalue of $T$. Suppose now that the eigenvalue 1 is not simple. Then there exist $u$ and $v$ in $L^1$, both real, and normalized to $\|u\|_1 = \|v\|_1 = 1$, with $u \ne v$, such that $Tu = u$ and $Tv = v$. Then

$$ \|u - v\|_1 = \|Tu - Tv\|_1 = \int dt\, \big| (Tu)(t) - (Tv)(t) \big| = \int dt\, \left| \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 \big( u(t') - v(t') \big) \right| . $$

Since $u \ne v$ and $\|u\|_1 = \|v\|_1$, $u(x) - v(x)$ is not everywhere positive (nor everywhere negative). Thus, assume $u(x) - v(x) \ge 0$ for $x \in E_1$ and $u(x) - v(x) < 0$ for $x \in E_2$, with $E_1$ and $E_2$ of positive measure. By the triangle inequality,

$$ \|u - v\|_1 \le \int dt \int dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 |u(t') - v(t')| , $$

and this inequality is strict if

$$ \int dt \int_{E_2} dt'\, F\!\left( \frac{t'(t+\lambda)}{t'-t-\lambda} \right) \left( \frac{t'}{t'-t-\lambda} \right)^2 |u(t') - v(t')| \ne 0 . \eqno(5.26) $$

Note now that (5.26) holds if, for each $t' \in E_2$, we can find a $t$ such that

$$ t'(t+\lambda)/(t'-t-\lambda) \in \mathrm{supp}\, F . $$

Clearly, such a $t$ always exists, and hence (5.26) holds. Set now $z = t'(t+\lambda)/(t'-(t+\lambda))$. Then

$$ dz = \frac{t'^2}{(t'-t-\lambda)^2}\, dt . $$

Thus

$$ \|u - v\|_1 $$