Multidimensional nonhyperbolic attractors

Marcelo Viana



January 1995

Abstract

We construct smooth transformations and diffeomorphisms exhibiting nonuniformly hyperbolic attractors with multidimensional sensitivity on initial conditions: typical orbits in the basin of attraction have several expanding directions. These systems also illustrate a new robust mechanism of sensitive dynamics: despite the nonuniform character of the expansion, the attractor persists in a full neighbourhood of the initial map.

1 Introduction

We consider smooth maps $\varphi\colon M \to M$ on a manifold $M$, with some compact invariant region $U$: $\varphi(U)\subset \operatorname{int}(U)$. We say that $\varphi$ has expanding behaviour if typical points $x\in U$ have tangent vectors $v$ whose iterates grow exponentially fast: $\frac1n\log\|D\varphi^n(x)v\|$ has a positive limit (or $\liminf$) as $n\to+\infty$. In general, we call the values of this limit, over all nonzero tangent vectors $v$, the Lyapunov exponents of $\varphi$ at $x$. Clearly, Lyapunov exponents measure the asymptotic exponential rate at which (infinitesimally) nearby points approach or move away from each other as time increases to $+\infty$. Hence, the presence of positive exponents indicates sensitive dependence of trajectories starting near $x$ on the initial point ("chaotic" dynamics). Well-known examples are the uniformly hyperbolic (or Axiom A, see [Sm]) diffeomorphisms with nonperiodic attractors. In this case the number of positive Lyapunov exponents is constant on the basin of attraction $U$, and the dynamics on the attractor is (structurally) stable. In particular, any nearby map also has a nonperiodic attractor, close to the initial one and with the same number of positive exponents. The mathematical study of nonuniform expanding behaviour is much more incomplete; in fact, it has been mostly restricted to systems with a unique positive



Partially supported by a J. S. Guggenheim Foundation Fellowship.


Lyapunov exponent. A first important result was due to [Ja], who showed that many quadratic maps of the interval admit an absolutely continuous invariant probability measure $\mu$. Such maps then have a positive Lyapunov exponent at all $\mu$-generic points (a positive Lebesgue measure set). Other proofs of this result were given e.g. by [CE], [BC1]. In higher dimensions, [BC2] showed that many Hénon diffeomorphisms of the plane have strange attractors containing dense orbits on which the diffeomorphism has a positive Lyapunov exponent. See [BY], [MV], [Vi] for further developments and [JN] for a more recent approach. In all these cases, expanding behaviour exhibits a rather subtle form of persistence: "many" means positive Lebesgue measure in parameter space. On the other hand, [Yo] constructed open sets of nonuniform hyperbolicity in a space of linear cocycles.

The purpose of this paper is to introduce the study of certain dynamical systems exhibiting nonuniform multidimensional expansion. We construct smooth maps, both invertible and noninvertible, having nonuniformly hyperbolic attractors with a high-dimensional character: the map has several expanding directions at Lebesgue almost every point $x$ in the basin $U$. More precisely, we may write $T_xM = E^+ \oplus E^-$ with
$$\liminf_{n\to+\infty} \frac1n \log\|D\varphi^n(x)v^+\| \ \ge\ c \ >\ 0\ >\ -c\ \ge\ \limsup_{n\to+\infty} \frac1n \log\|D\varphi^n(x)v^-\|$$
for all $v^\pm \in E^\pm\setminus\{0\}$ ($c$ independent of $x$ or $v$) and $\dim E^+ > 1$ (this is what we call the number of expanding directions, or of positive Lyapunov exponents). The basic strategy is to couple nonuniform models, namely quadratic or Hénon maps, with convenient "fast" systems such as expanding maps or solenoid diffeomorphisms. The expanding behaviour observed in these multidimensional examples originates from a different mechanism, of a statistical type, which makes them much more robust than their low-dimensional counterparts: the expanding attractor persists in a whole $C^3$-neighbourhood of the initial map. Although the main ingredients are quite general, we illustrate this strategy through some concrete situations, in order to keep our presentation as transparent as possible. Further extensions of these methods are briefly discussed at the end of this section.

First we describe the simpler, noninvertible case. Let $\varphi_\alpha\colon S^1\times \mathbb{R} \to S^1\times \mathbb{R}$ be a $C^3$ map given by $\varphi_\alpha(\theta,x) = (\hat g(\theta),\, a(\theta) - x^2)$, where $\hat g\colon S^1\to S^1$ is an expanding map of the circle and $a(\theta) = a_0 + \alpha\psi(\theta)$. Here $\psi(\theta)$ is some Morse function and $a_0\in(1,2)$ is fixed such that $x=0$ is a preperiodic point for the map $h(x) = a_0 - x^2$. For the sake of definiteness, we take $\psi(\theta) = \sin 2\pi\theta$ and we also suppose $\hat g$ to be linear, $\hat g(\theta) = d\theta \bmod 1$ for some $d\ge 2$. Note that we identify $S^1 = \mathbb{R} \bmod 1$. It is easy to check that, since $a_0 < 2$, there exists a compact interval $I_0\subset(-2,2)$ such that $\varphi_\alpha(S^1\times I_0)\subset \operatorname{int}(S^1\times I_0)$ for any small $\alpha$.
Theorem A. Assume $d$ to be large enough, $d\ge 16$ say. Then for every sufficiently small $\alpha > 0$ the map $\varphi_\alpha$ has two positive Lyapunov exponents at Lebesgue almost every point $(\theta,x)\in S^1\times I_0$. Moreover, the same holds for every map $\varphi$ sufficiently close to $\varphi_\alpha$ in $C^3(S^1\times\mathbb{R})$.
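To make the construction concrete, here is a minimal numerical sketch of the skew product $\varphi_\alpha$ together with Birkhoff-average estimates of its two Lyapunov exponents. The values $a_0 = 1.9$, $\alpha = 10^{-3}$, $d = 16$ are stand-ins chosen purely for illustration (the theorem requires $a_0$ to be a Misiurewicz-type parameter, with $x = 0$ preperiodic under $h(x) = a_0 - x^2$, which we do not verify here).

```python
import math

def phi(theta, x, alpha=1e-3, d=16, a0=1.9):
    """One iterate of phi_alpha(theta, x) = (d*theta mod 1, a(theta) - x^2)."""
    a = a0 + alpha * math.sin(2 * math.pi * theta)
    return (d * theta) % 1.0, a - x * x

def lyapunov_estimates(n=50_000, alpha=1e-3, d=16, a0=1.9, theta=0.123, x=0.0):
    """Birkhoff averages of log|g'| = log d and log|d_x f| = log|2x| along an orbit.

    The derivative of phi_alpha is triangular in the (theta, x) splitting, so
    these two averages estimate the two Lyapunov exponents of the skew product.
    """
    base = fibre = 0.0
    for _ in range(n):
        theta, x = phi(theta, x, alpha, d, a0)
        base += math.log(d)
        fibre += math.log(max(abs(2 * x), 1e-300))  # guard against hitting x = 0 exactly
    return base / n, fibre / n
```

The base exponent is exactly $\log d$, since $\hat g$ is linear; Theorem A asserts that, for a genuinely Misiurewicz $a_0$ and small $\alpha$, the fibre average is also positive for Lebesgue almost every initial point.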

Here $C^3(S^1\times\mathbb{R})$ denotes the space of all $C^3$ maps from $S^1\times\mathbb{R}$ to itself: in the second part of the theorem $\varphi$ is not assumed to have a skew-product form. Note that maps such as the ones above can easily be embedded into compact manifolds. Moreover, examples of the same kind having any preassigned number of positive Lyapunov exponents may be obtained just by replacing the factor map $\hat g$ by other hyperbolic transformations, e.g. expanding maps on the $m$-torus, $m\ge 2$. Our arguments generalize to these situations essentially without change, cf. remarks near the end of Section 2. See also the discussion below. Similar statements apply in the context of Theorem B.

Now we describe a corresponding construction for diffeomorphisms. We take $\varphi_{\alpha,b}\colon T^3\times\mathbb{R}^2 \to T^3\times\mathbb{R}^2$ given by $\varphi_{\alpha,b}(\Theta,X) = (\hat g(\Theta),\, \hat f_{\alpha,b}(\Theta,X))$, where

- $T^3 = S^1\times B^2$ is the solid 3-torus, $\Theta = (\theta,T)$, and $\hat g\colon T^3\to T^3$ is a solenoid diffeomorphism $\hat g(\theta,T) = (d\theta \bmod 1,\, G(\theta,T))$, see [Sm];
- $\hat f_{\alpha,b}(\Theta,X) = (a(\theta) - x^2 + by,\, -bx)$, with $X = (x,y)$ and $a(\theta)$ defined in the same way as before.

Clearly, $\varphi_{\alpha,b}$ is a diffeomorphism (onto its image) for every nonzero value of $b$. In the same way as before, $a_0 < 2$ ensures the existence of some compact interval $I_0\subset(-2,2)$ such that $\varphi_{\alpha,b}(T^3\times I_0^2)\subset \operatorname{int}(T^3\times I_0^2)$ for every small $\alpha$ and $b$.
Theorem B. Assume $d$ to be large enough. Then there exists an open set of (small positive) values of $(\alpha,b)$ for which $\varphi_{\alpha,b}$ has two positive Lyapunov exponents at Lebesgue almost every point $(\Theta,X)\in T^3\times I_0^2$. Moreover, the same holds for every $\varphi$ close enough to $\varphi_{\alpha,b}$ in $C^3(T^3\times\mathbb{R}^2)$.
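The invertible case can be probed numerically in the same spirit. The sketch below couples a concrete solenoid on the solid torus (the standard embedding $G(\theta, z) = \lambda z + \frac12(\cos 2\pi\theta, \sin 2\pi\theta)$ with $\lambda = 1/4$, an assumption, since the statement leaves $G$ unspecified) with the Hénon-like fibre map $\hat f_{\alpha,b}$, and estimates the full Lyapunov spectrum by the usual QR (Benettin) method; $a_0 = 1.9$ is again a stand-in parameter.

```python
import numpy as np

D, LAM = 16, 0.25                 # base expansion d and solenoid contraction (assumed)
A0, ALPHA, B = 1.9, 1e-3, 1e-4    # stand-in parameters; the paper fixes a Misiurewicz a0

def step(w):
    """One iterate of phi_{alpha,b} on T^3 x R^2, with a concrete solenoid G."""
    th, z1, z2, x, y = w
    a = A0 + ALPHA * np.sin(2 * np.pi * th)
    return np.array([(D * th) % 1.0,
                     LAM * z1 + 0.5 * np.cos(2 * np.pi * th),
                     LAM * z2 + 0.5 * np.sin(2 * np.pi * th),
                     a - x * x + B * y,
                     -B * x])

def jac(w):
    """Jacobian matrix of step at w."""
    th, z1, z2, x, y = w
    s, c = np.sin(2 * np.pi * th), np.cos(2 * np.pi * th)
    return np.array([
        [D,                        0.0, 0.0,  0.0, 0.0],
        [-np.pi * s,               LAM, 0.0,  0.0, 0.0],
        [np.pi * c,                0.0, LAM,  0.0, 0.0],
        [2 * np.pi * ALPHA * c,    0.0, 0.0, -2*x, B],
        [0.0,                      0.0, 0.0,  -B,  0.0]])

def lyapunov_spectrum(n=4000, w=None):
    """QR (Benettin) estimate of the five Lyapunov exponents, sorted decreasingly."""
    if w is None:
        w = np.array([0.123, 0.0, 0.0, 0.0, 0.0])
    Q = np.eye(5)
    acc = np.zeros(5)
    for _ in range(n):
        Q, R = np.linalg.qr(jac(w) @ Q)   # push the frame by D(phi), re-orthonormalize
        acc += np.log(np.abs(np.diag(R)))
        w = step(w)
    return np.sort(acc / n)[::-1]
```

The top exponent converges to $\log d$, coming from the $\theta$-direction; Theorem B is about the second exponent, carried by the fibre, which this naive experiment cannot certify but can suggest.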

Examples of [Sh] and [Ma], see also [BD] for a new construction, show that certain topological features of the dynamics, such as transitivity, may be persistent under perturbations even in the absence of uniform hyperbolicity. These examples are obtained by deformation of Anosov diffeomorphisms and retain many uniform features of the initial map (continuous invariant cone fields, invariant foliations), the obstruction to Axiom A coming from the existence of saddle points with different stable indices. In contrast, the maps in Theorems A and B inherit from the nonuniform models (quadratic maps, Hénon diffeomorphisms) involved in their construction a coexistence of hyperbolic dynamics (uniform expansion, resp. invariant cone fields) in portions of phase space, together with highly nonhyperbolic behaviour (infinite contraction, resp. breakdown of cone fields due to interchange of expanding and contracting directions) in certain critical regions.

The presence of critical regions is a major obstacle to expanding behaviour; let us briefly comment on the way this is dealt with in the proof of our theorems. In the context of real quadratic maps, expanding behaviour relies on a delicate control of the recurrence of the critical orbit, more precisely, on appropriate lower bounds for the distance between the critical point and its $n$-th iterate, $n\ge 1$. This translates into a sequence of conditions on the parameter, which must then be proven to hold for a positive measure set of parameter values, [Ja]. A similar approach, in a more sophisticated form, is also central in the study of Hénon maps with small jacobian. Indeed, a main guideline in [BC2] is to try and view such maps as a kind of perturbation of 1-dimensional maps, and the accuracy of this point of view itself depends on bounding the recurrence of the "critical set". However, the nature of the critical regions renders such a strategy of recurrence control hopeless for these multidimensional systems. This is easy to see in the setting of Theorem A, where the critical region $\{\det D\varphi_\alpha = 0\}$ is a codimension-1 submanifold and, hence, is likely to intersect its iterates. Clearly, such intersections cannot be destroyed by small parameter variations, regardless of the number of parameters involved, which means that critical points cannot be prevented from hitting the critical region again in finite time, with the corresponding accumulation of nonhyperbolic effects. Instead, our arguments are based on a statistical (large deviations type) analysis of these returns to the vicinity of the critical region. Roughly, we prove that, for most trajectories, the total nonhyperbolicity associated with returns is not strong enough to annihilate the hyperbolicity acquired at iterates far from the critical region.

Also, these arguments must have an essentially high-dimensional character in the case of Theorem B, as it cannot be considered "nearly 1-dimensional": the necessity to allow for arbitrarily close returns means that we must deal with tangent vectors of unbounded slope right from early iterates. This analysis relies on the fact that the driving maps $\hat g$ are taken to be fast mixing. On the other hand, other properties of expanding maps and solenoids are used in apparently much less crucial ways. In view of this, we expect an extension of these arguments to apply when such hyperbolic maps are replaced in our construction by more general systems with fast decay of correlations. A natural example we have in mind is the coupling of nonuniformly hyperbolic maps of the interval, e.g. $\varphi(\theta,x) = (g(\theta),\, a(\theta) - x^2)$, $g$ a unimodal (or multimodal) map as in [Ja] or [BC1].

Acknowledgements. I am thankful to J.-C. Yoccoz, L.-S. Young and, especially, L. Carleson, for fruitful conversations. Most of this work was done during visits to the University of Paris-Sud, the Mittag-Leffler Institute, UCLA, and Princeton University, whose hospitality and support are gratefully acknowledged.
2 Proof of Theorem A

First we assume that $\varphi\colon S^1\times\mathbb{R}\to S^1\times\mathbb{R}$ has the form

(1) $\varphi(\theta,x) = (g(\theta),\, f(\theta,x))$, with $\partial_x f(\theta,x) = 0$ if and only if $x = 0$,

and we prove that the conclusion of the theorem holds as long as $\varphi$ is $C^2$ and

(2) $\|\varphi - \varphi_\alpha\|_{C^2} \le \alpha$ on $S^1\times I_0$.

Then we explain how to remove assumption (1). Our basic strategy goes as follows. We call $\hat X\subset S^1\times I_0$ an admissible curve if $\hat X = \operatorname{graph}(X)$, with $X\colon S^1\to I_0$ satisfying

1. $X$ is $C^2$ except, possibly, for being discontinuous on the left at $\theta = \tilde\theta$;
2. $|X'(\theta)|\le\alpha$ and $|X''(\theta)|\le\alpha$ at every $\theta\in S^1$.

Here $\tilde\theta$ denotes the fixed point of $g$ close to $\theta = 0$ and we also assume that $S^1 = \mathbb{R} \bmod 1$ has the induced orientation.

Let $\hat X = \operatorname{graph}(X)$ be any admissible curve and denote $\hat X_j(\theta) = \varphi^j(\theta, X(\theta))$, for $j\ge 0$ and $\theta\in S^1$. Clearly, $\|D\varphi^n(\hat X(\theta))v\| \ge \operatorname{const}|(g^n)'(\theta)|$ grows exponentially whenever $v$ is a nonvertical (meaning, non-colinear to $\partial/\partial x$) tangent vector. On the other hand, we prove that there are positive constants $c$, $C_0$, and $\gamma$ such that for every sufficiently large $n$ we have

(3) $\displaystyle\Big\|D\varphi^n(\hat X(\theta))\frac{\partial}{\partial x}\Big\| = \prod_{j=0}^{n-1} \big|\partial_x f(\hat X_j(\theta))\big| \ \ge\ e^{cn},$

except for a set $E_n$ of values of $\theta$ with Lebesgue measure $m(E_n)\le C_0\, e^{-\gamma\sqrt n}$. We take $E = \bigcap_{n\ge 1}\bigcup_{k\ge n} E_k$ and then
$$m\Big(\bigcup_{k\ge n} E_k\Big) \ \le\ \sum_{k\ge n} C_0\, e^{-\gamma\sqrt k} \ \le\ \operatorname{const} e^{-(\gamma/2)\sqrt n} \quad\text{for all } n,$$
implying $m(E) = 0$. Moreover, by construction, $\varphi$ has two positive Lyapunov exponents at $\hat X(\theta)$ for every $\theta\in S^1\setminus E$. Since the admissible curve $\hat X\subset S^1\times I_0$ is arbitrary, this proves the theorem.

Now we come to a detailed exposition of the arguments leading to (3). Except where otherwise stated, all constants to appear below are independent of $\alpha$. Moreover, our statements always assume $\alpha$ to be sufficiently small. For future reference, we remark that these conditions on $\alpha$ never involve the value of $d$. We begin by introducing the Markov partitions $\mathcal P_n$, $n\ge 1$, of $S^1$, defined by

- $\mathcal P_1 = \{[\tilde\theta_{j-1}, \tilde\theta_j)\colon 1\le j\le d\}$, where $\tilde\theta_0, \tilde\theta_1, \ldots, \tilde\theta_d = \tilde\theta_0$ are the pre-images of $\tilde\theta$ under $g$ (ordered according to the orientation of $S^1$);
- $\mathcal P_{n+1} = \{\text{connected components of } g^{-1}(\omega)\colon \omega\in\mathcal P_n\}$, for each $n\ge 1$.

The following simple fact is to be used several times below. Given $\hat X = \operatorname{graph}(X)$ and $\omega\subset S^1$ we denote $\hat X|\omega = \operatorname{graph}(X|\omega)$.

Lemma 2.1. If $\hat X$ is an admissible curve and $\omega\in\mathcal P_n$ then $\varphi^n(\hat X|\omega)$ is also an admissible curve.

Proof: The first property in the definition is obvious. As for the second one, it suffices to observe that it is preserved at each iteration. Define $Y\colon S^1\to I_0$ by $Y(g(\theta)) = f(\theta, X(\theta))$, $\theta\in\omega\in\mathcal P_1$. Note that (2) implies (take $\alpha$ small)
$$|g'|\ \ge\ d-\alpha\ \ge\ 15, \qquad |g''|\le\alpha, \qquad |\partial_x f|\le |2x| + \alpha \le 4, \qquad |\partial_\theta f|\le \alpha|\psi'| + \alpha \le 8\alpha,$$
$$|\partial_{xx} f|\le 2+\alpha\le 3, \qquad |\partial_{\theta\theta} f|\le \alpha|\psi''| + \alpha \le 50\alpha, \qquad |\partial_{\theta x} f|\le\alpha.$$
Then a direct calculation gives
$$|Y'| = \Big|\frac{1}{g'}\big(\partial_\theta f + \partial_x f\, X'\big)\Big| \ \le\ \frac{1}{15}\,(8\alpha + 4\alpha) \ \le\ \alpha \quad\text{and, analogously,}$$
$$|Y''| = \Big|\frac{1}{(g')^2}\big(\partial_{\theta\theta} f + 2\partial_{\theta x} f\, X' + \partial_{xx} f\,(X')^2 + \partial_x f\, X'' - Y' g''\big)\Big| \ \le\ \alpha,$$
and the lemma follows. □

Lemma 2.2. Let $\hat X = \operatorname{graph}(X)$ be an admissible curve and denote $\hat X(\theta) = (\theta, X(\theta))$, $\hat Z = \varphi(\hat X)$ and $\hat Z(\theta) = \varphi(\hat X(\theta)) = (g(\theta), Z(\theta))$. Then, given any interval $I\subset I_0$, we have
$$m\{\theta\in S^1\colon \hat Z(\theta)\in S^1\times I\} \ \le\ 4\,|I|/\alpha + 2\sqrt{|I|/\alpha}.$$

Proof: Let $A_1 = \{\theta\in S^1\colon |\sin 2\pi\theta|\le 1/3\}$ and $A_2 = S^1\setminus A_1$. Note that each of $A_1$ and $A_2$ has exactly two connected components. Suppose first that $\theta\in A_1$. Then $|\cos 2\pi\theta|\ge 11/12$, which implies

(4) $|\partial_\theta f(\hat X(\theta))| \ \ge\ \alpha|\psi'(\theta)| - \alpha \ \ge\ \frac92\,\alpha$, and so $|Z'(\theta)| \ \ge\ \frac92\,\alpha - 4\alpha = \frac\alpha2$.

It follows that $m(\{\theta\in A_1\colon Z(\theta)\in I\}) \le 2|I|/(\alpha/2) = 4|I|/\alpha$. On the other hand, if $\theta\in A_2$ then $|\partial_{\theta\theta} f(\hat X(\theta))| \ge \alpha|\psi''(\theta)| - \alpha \ge 10\alpha$, implying $|Z''(\theta)|\ge 4\alpha$. Hence, $m(\{\theta\in A_2\colon Z(\theta)\in I\}) \le 4\sqrt{|I|/(4\alpha)} = 2\sqrt{|I|/\alpha}$. □
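Both lemmas are easy to probe numerically. The sketch below pushes an admissible curve through one atom of $\mathcal P_1$ and checks the derivative bound of Lemma 2.1 by finite differences, then estimates the measure in Lemma 2.2 by sampling. Here $a_0 = 1.9$ is a stand-in parameter, and the particular curve $X$ and interval $I$ are arbitrary illustrative choices.

```python
import numpy as np

ALPHA, D, A0 = 1e-2, 16, 1.9   # a0 is a stand-in; the paper needs 0 preperiodic for a0 - x^2

def curve(theta):
    """An admissible curve: |X'| <= alpha and |X''| <= alpha."""
    return (ALPHA / (4 * np.pi**2)) * np.sin(2 * np.pi * theta)

def f(theta, x):
    """Fibre map f(theta, x) = a(theta) - x^2."""
    return A0 + ALPHA * np.sin(2 * np.pi * theta) - x**2

def pushforward_on_atom(n=200_001):
    """Lemma 2.1: phi maps the piece of the curve over [0, 1/d) to a graph Y over S^1.

    Since g(s) = d*s identifies [0, 1/d) with S^1, Y(t) = f(t/d, X(t/d)).
    Returns Y and a finite-difference bound on sup|Y'|.
    """
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    Y = f(t / D, curve(t / D))
    return Y, np.max(np.abs(np.gradient(Y, t)))

def return_measure(lo, hi, n=1_000_000):
    """Lebesgue measure of {theta : Z(theta) in [lo, hi]} for Z = f(., X(.)),
    to compare with the bound 4|I|/alpha + 2 sqrt(|I|/alpha) of Lemma 2.2."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    Z = f(t, curve(t))
    return float(np.mean((Z >= lo) & (Z <= hi)))
```

In this linear-base situation the observed slope bound is roughly $2\pi\alpha/d$, comfortably below $\alpha$, which is exactly the contraction of slopes exploited in the proof of Lemma 2.1.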

Corollary 2.3. There is $C_1 > 0$ such that, given an admissible curve $\hat X = \operatorname{graph}(X)$ and an interval $I\subset I_0$ with $|I|\le\alpha$, we have
$$m\{\theta\in S^1\colon \hat X_j(\theta)\in S^1\times I\} \ \le\ C_1\sqrt{|I|/\alpha} \quad\text{for every } j\ge 1.$$

Proof: Let $\omega\in\mathcal P_{j-1}$ (resp. $\omega = S^1$ in case $j=1$), $\hat X_\omega = \varphi^{j-1}(\hat X|\omega)$ and $\hat Z_\omega = \varphi(\hat X_\omega)$. By the previous lemma, the measure of $\{\theta\in S^1\colon \hat Z_\omega(\theta)\in S^1\times I\}$ is bounded above by $4|I|/\alpha + 2\sqrt{|I|/\alpha} \le 6\sqrt{|I|/\alpha}$. Now $\hat X_j(\theta) = \hat Z_\omega(g^{j-1}(\theta))$ for every $\theta\in\omega$ and so
$$m(\{\theta\in\omega\colon \hat X_j(\theta)\in S^1\times I\}) \ \le\ 6C\,\sqrt{|I|/\alpha}\; m(\omega),$$
where $C$ is some uniform bound for the metric distortion of the iterates of $g$. □

Given $(\theta,x)\in S^1\times I_0$ and $j\ge 0$ we let $(\theta_j, x_j) = \varphi^j(\theta,x)$. We also introduce positive constants $0 < \eta \le 1/3$ and $0 < \delta_1 < 1$, whose values will be made precise below (in terms of the map $h(x) = a_0 - x^2$ alone).

Lemma 2.4. There are $\delta_1 > 0$ and $\sigma_1 > 1$ satisfying:

a) For each small $\alpha > 0$ there is $N = N(\alpha)\ge 1$ such that $\prod_{j=0}^{N-1} |\partial_x f(\theta_j, x_j)| \ge |x|\,\alpha^{-1+\eta}$ whenever $|x| < 2\sqrt\alpha$.

b) For each $(\theta,x)\in S^1\times I_0$ with $\sqrt\alpha \le |x| < \delta_1$ there is $p(x)\le N$ such that $\prod_{j=0}^{p(x)-1} |\partial_x f(\theta_j, x_j)| \ge \frac14\,\sigma_1^{p(x)}$.

Proof: Throughout the proof we use $C$ to denote any large constant depending only on the quadratic map $h$. Take $l\ge 1$ minimum such that $q = h^l(0)$ is a periodic point of $h$, let $k\ge 1$ be its period, and denote $\lambda_k = |(h^k)'(q)|$. Note that it must be $\lambda_k > 1$, by [Si]. We fix $1 < \sigma_1 < \lambda_k^{1/(4k)}$, write $\Lambda = \lambda_k^{1+\eta/2}$, and take $\delta_0 > 0$ small enough so that
$$\lambda_k^{1-\eta/2} \ <\ \prod_{j=0}^{k-1} \big|\partial_x f(\varphi^j(\theta,y))\big| \ <\ \lambda_k^{1+\eta/2} \quad\text{whenever } |y - q| < \delta_0$$
(and $\alpha$ is sufficiently small). Given $(\theta,x)\in S^1\times I_0$ we denote $d_i = |x_{l+ki} - q|$, for $i\ge 0$. We suppose $\delta_1 > 0$ and $\alpha$ small so that $|x| < \delta_1$ implies $d_0 \le Cx^2 + C\alpha < \delta_0$. Now let $(\theta,x)$ and $i\ge 1$ be such that $|x| < \delta_1$ and $d_0, \ldots, d_{i-1} < \delta_0$. Then $d_i \le \Lambda d_{i-1} + C\alpha$ and so, by induction,

(5) $d_i \ \le\ (1 + \Lambda + \cdots + \Lambda^{i-1})\,C\alpha + \Lambda^i d_0 \ \le\ \Lambda^i(C\alpha + Cx^2).$

Suppose first $|x| < 2\sqrt\alpha$: then (5) becomes $d_i \le \Lambda^i C\alpha$. We set $\tilde N = \tilde N(\alpha)\ge 1$ to be the minimum integer such that $\Lambda^{\tilde N} C\alpha \ge \delta_0$ and then define $N = l + k\tilde N$. The previous argument implies that $d_i < \delta_0$ for every $0\le i\le \tilde N - 1$ and, using
$$\prod_{j=0}^{N-1} \big|\partial_x f(\theta_j, x_j)\big| \ =\ \Big(\prod_{j=0}^{l-1} \big|\partial_x f(\theta_j, x_j)\big|\Big)\,\prod_{i=0}^{\tilde N-1}\Big(\prod_{j=0}^{k-1} \big|\partial_x f(\theta_{l+ki+j},\, x_{l+ki+j})\big|\Big),$$
we get
$$\prod_{j=0}^{N-1} \big|\partial_x f(\theta_j, x_j)\big| \ \ge\ \frac1C\,|x|\,\lambda_k^{(1-\eta/2)\tilde N} \ \ge\ \frac1C\,|x|\,\Big(\frac{\delta_0}{C\alpha}\Big)^{\frac{1-\eta/2}{1+\eta/2}} \ \ge\ |x|\,\alpha^{-1+\eta},$$
which proves the first part of the lemma. Suppose now that $|x|\ge\sqrt\alpha$: then (5) gives $d_i \le \Lambda^i C x^2$. We let $\tilde p(x)\ge 1$ be minimum such that $\Lambda^{\tilde p(x)} C x^2 \ge \delta_0$ and we define $p(x) = l + k\tilde p(x)$. Then, in the same way as before,
$$\prod_{j=0}^{p(x)-1} \big|\partial_x f(\theta_j, x_j)\big| \ \ge\ \frac1C\,|x|\,\lambda_k^{(1-\eta/2)\tilde p(x)} \ \ge\ \frac{\delta_0^{1/2}}{C}\,\lambda_k^{(\frac12-\frac{3\eta}4)\tilde p(x)} \ \ge\ \frac14\,\sigma_1^{p(x)},$$
where, for the last inequality, we use $\sigma_1^k < \lambda_k^{1/4}\le\lambda_k^{\frac12-\frac{3\eta}4}$ (recall $\eta\le 1/3$) and the fact that $\tilde p(x)$ is large (uniformly) as long as $\delta_1$ is small. We conclude the proof by taking $\delta_1$ sufficiently small. □

Lemma 2.5. There are $\sigma_2 > 1$ and $C_2 > 0$ such that $\prod_{j=0}^{n-1} |\partial_x f(\theta_j, x_j)| \ge C_2\sqrt\alpha\,\sigma_2^{\,n}$ for all $(\theta,x)\in S^1\times I_0$ with $|x_0|, \ldots, |x_{n-1}| \ge \sqrt\alpha$. If, in addition, $|x_n| < \delta_1$ then we even have $\prod_{j=0}^{n-1} |\partial_x f(\theta_j, x_j)| \ge C_2\,\sigma_2^{\,n}$.

Proof: We fix $\delta_1$ as above and keep the notations from the previous lemma. Since $h$ has negative schwarzian derivative, there are $\sigma_0 > 1$ and $m_0\ge 1$ such that $|(h^{m_0})'(y)| > \sigma_0^{m_0}$ whenever $|y|, \ldots, |h^{m_0-1}(y)| \ge \delta_1$, see e.g. [MS, Theorem III.3.3]. Then, by continuity (suppose $\alpha$ small enough), a similar fact holds for $\partial_x f$:

(6) $\prod_{j=0}^{m_0-1} |\partial_x f(\theta_j, y_j)| \ge \sigma_0^{m_0}$ whenever $(\theta,y)\in S^1\times I_0$ has $|y_0|, \ldots, |y_{m_0-1}| \ge \delta_1$.

As a consequence, there is $A > 0$ such that

(7) $\prod_{j=0}^{n-1} |\partial_x f(\theta_j, y_j)| \ge A\,\sigma_0^{\,n}$ for all $n\ge 1$ and $(\theta,y)$ with $|y_0|, \ldots, |y_{n-1}| \ge \delta_1$.

Moreover, reducing $\delta_1 > 0$ and $\sigma_0 > 1$ if necessary, $|(h^n)'(y)| \ge \sigma_0^{\,n}$ whenever $|y|, \ldots, |h^{n-1}(y)| \ge \delta_1 > |h^n(y)|$: this follows from [No], together with (6) and a continuity argument. Then we invoke continuity once more to conclude a similar statement for $\partial_x f$. Combining this with (6) we get

(8) $\prod_{j=0}^{n-1} |\partial_x f(\theta_j, y_j)| \ge \sigma_0^{\,n}$ whenever $|y_0|, \ldots, |y_{n-1}| \ge \delta_1 > |y_n|$.

Now let $(\theta,x)$ be as in the statement and let $j_1 < \cdots < j_s$ be the values of $j\in\{0,\ldots,n-1\}$ for which $|x_j| < \delta_1$. Clearly, we may suppose $s > 0$, for otherwise the lemma follows immediately from (7), (8). When $|x_n| < \delta_1$ we also set $j_{s+1} = n$. On the other hand, we denote $p_i = p(x_{j_i})$, $i = 1, \ldots, s$, and then Lemma 2.4 gives

(9) $\prod_{j=j_i}^{j_i + p_i - 1} |\partial_x f(\theta_j, x_j)| \ \ge\ \frac14\,\sigma_1^{\,p_i},$

for all $i < s$. Moreover, (9) holds also for $i = s$ if $j_s + p_s \le n$; note that this is necessarily the case if $|x_n| < \delta_1$, since our definition of $p(x)$ implies $j_i + p_i \le j_{i+1}$ for all $i$, see above. On the other hand, by (8),

(10) $\prod_{j=0}^{j_1-1} |\partial_x f(\theta_j, x_j)| \ \ge\ \sigma_0^{\,j_1}$ and $\prod_{j=j_i+p_i}^{j_{i+1}-1} |\partial_x f(\theta_j, x_j)| \ \ge\ \sigma_0^{\,j_{i+1}-j_i-p_i},$

for all $i < s$ and, again, the second inequality remains valid for $i = s$ when $|x_n| < \delta_1$. At this point we take $\sigma_2 = \min\{\sigma_0, \sigma_1\}$ and get
$$\prod_{j=0}^{n-1} |\partial_x f(\theta_j, x_j)| \ \ge\ \sigma_0^{\,j_1}\,\prod_{i=1}^{s}\Big(\frac14\,\sigma_1^{\,p_i}\,\sigma_0^{\,j_{i+1}-j_i-p_i}\Big) \ \ge\ C_2\,\sigma_2^{\,n}$$
whenever $|x_n| < \delta_1$ (the factors $\frac14$ being absorbed since each $p_i\ge N$ is large). This proves the second part of the lemma. As for the first one, it follows from a similar calculation and the remark that in general (that is, even if (9), (10) are not valid for $i = s$)
$$\prod_{j=j_s}^{n-1} |\partial_x f(\theta_j, x_j)| \ \ge\ (2-\alpha)\,|x_{j_s}|\,A\,\sigma_0^{\,n-j_s-1} \ \ge\ \sqrt\alpha\,A\,\sigma_0^{\,n-j_s-1},$$
as a consequence of (7). □

We are now in a position to make explicit our choice of $\eta$: having in mind the proof of Lemma 2.6 below, we take $\eta = \log\sigma_2/(4\log 32)$. On the other hand, we introduce $M = M(\alpha)$ to be the maximum integer such that $32^M\alpha \le 1$; note that $M < N$, since $\lambda_k \le \sup|h'| \le 4$, recall also the proof of Lemma 2.4. Finally, for $r\ge 0$ we denote $J(r) = \{x\in\mathbb{R}\colon |x| < \sqrt\alpha\, e^{-r}\}$.

Lemma 2.6. There are $C_3 > 0$ and $\beta > 0$ such that, given any admissible curve $\hat Y = \operatorname{graph}(Y)$ and any $r \ge (\frac12 - 2\eta)\log\frac1\alpha$,
$$m\{\theta\in S^1\colon \hat Y_M(\theta)\in S^1\times J(r-2)\} \ \le\ C_3\, e^{-\beta r}.$$

Before proving this lemma let us state and prove the following auxiliary result. We take $\hat X = \operatorname{graph}(X)$ to be an admissible curve and for $1\le j\le d$ we denote $\hat Z_j = \varphi(\hat X|[\tilde\theta_{j-1}, \tilde\theta_j)) = \operatorname{graph}(Z_j)$.

Lemma 2.7. There are $H_1, H_2\subset\{1,\ldots,d\}$ with $\#H_1, \#H_2 \ge [d/16]$ such that $|Z_{j_1}(\theta) - Z_{j_2}(\theta)| \ge \alpha/100$ for all $\theta\in S^1$, $j_1\in H_1$, and $j_2\in H_2$.

Proof: Let $\hat Z(\theta) = (g(\theta), Z(\theta)) = \varphi(\hat X(\theta))$ and $\theta_1 < \theta_2$ be the two critical points of $Z$, recall the proof of Lemma 2.2. We set $l = [d/16]$ and define $k_i$ by $\theta_i\in[\tilde\theta_{k_i-1}, \tilde\theta_{k_i})$, $i = 1, 2$. If neither of $\theta_1$, $\theta_2$ belongs to $[1/4, 3/4]$ then we take $H_1 = \{k_1+1, \ldots, k_1+l\}$ and $H_2 = \{k_2-l, \ldots, k_2-1\}$. Observe that $\tilde\theta_{k_1+l}$ stays to the left of $\frac12 - \frac{1}{2\pi}\arcsin\frac13$ and, analogously, $\tilde\theta_{k_2-l-1}$ stays to the right of $\frac12 + \frac{1}{2\pi}\arcsin\frac13$. Moreover, $Z$ is monotone on $[\tilde\theta_{k_1}, \tilde\theta_{k_2-1})$. Hence, using also $|Z'| \ge \alpha/2$ on $A_1$, we get
$$\inf Z\,|[\tilde\theta_{j_1-1}, \tilde\theta_{j_1}) \ -\ \sup Z\,|[\tilde\theta_{j_2-1}, \tilde\theta_{j_2}) \ \ge\ \frac\alpha2\cdot\frac{1}{2\pi}\arcsin\frac13 \ \ge\ \frac\alpha{100}$$
for every $j_1\in H_1$, $j_2\in H_2$. Clearly, this proves the lemma in this case. On the other hand, if $\theta_1\in[1/4,3/4]$ (resp. $\theta_2\in[1/4,3/4]$), we take $H_1 = \{k_1-l, \ldots, k_1-1\}$, $H_2 = \{1, \ldots, l\}$ (resp. $H_1 = \{d-l+1, \ldots, d\}$, $H_2 = \{k_2+1, \ldots, k_2+l\}$) and then similar estimates yield the same conclusion as in the previous case. □

Proof of Lemma 2.6: Let us begin by giving a brief sketch of the proof. By Lemma 2.1, $\hat Y_M$ is the union of $d^M$ admissible curves. We fix a constant $\beta_1 > 0$ and organize the set of these admissible curves into subsets in such a way that curves belonging to a same subset are spread along the $x$-direction: at most $d^{M-k(r)}(d - [d/16])^{k(r)}$ of them, with $k(r)\ge\beta_1 r$, are within $|J(r-2)|$ of each other. We obtain this by combining the previous result with the expansion given by Lemma 2.5. Then at most that many curves intersect each $\{\theta\}\times J(r-2)$, hence $m(\{\theta\colon Y_M(\theta)\in J(r-2)\}) \le \operatorname{const}\big((d-[d/16])/d\big)^{k(r)}$, from which the lemma follows.

Now we come to the details. Let $\hat Y_j(\theta) = \varphi^j(\theta, Y(\theta)) = (g^j(\theta), Y_j(\theta))$. We also use $C$ to represent any large positive constant depending only on $h$. Note first that $\operatorname{osc}(Y_0)\le\alpha$ and $\operatorname{osc}(Y_j)\le 4\operatorname{osc}(Y_{j-1}) + 2\alpha$, where $\operatorname{osc}(Y_j) = \sup Y_j - \inf Y_j$. As a consequence, $\operatorname{osc}(Y_j) \le 2\alpha\, 4^j \le 2\cdot 32^{-M} 4^j$, so that $\operatorname{osc}(Y_M) \le 2\cdot 8^{-M} \le C\alpha^{3/5} < \sqrt\alpha$. Clearly, we may suppose that $|Y_M(\theta_*)| < \sqrt\alpha$ at some $\theta_*\in S^1$ (otherwise the conclusion of the lemma is obvious) and then

(11) $|Y_M(\theta)| \le 2\sqrt\alpha\ (<\delta_1)$ for every $\theta\in S^1$.

Let us denote $O = \{h^i(0)\colon i\ge 1\}$ and $\Delta_j(\theta) = \operatorname{dist}(Y_j(\theta), O)$. The same argument as in (5) yields, for all $\theta\in S^1$, $0\le j\le M-1$, and $1\le i\le M-j$,

(12) $\Delta_{j+i}(\theta) \ \le\ C\big(4^i\alpha + |Y_j(\theta)|^2\big).$

We claim that $|Y_j(\theta)| \ge \sqrt\alpha$ for every $\theta\in S^1$ and $0\le j\le M-1$. Indeed, if it were not so then $\Delta_M(\theta) \le C\,4^{M-j}\alpha + C\alpha \le C\,4^M\alpha \le C\alpha^{3/5}$ for some $\theta\in S^1$. Up to assuming $\alpha$ sufficiently small, this would contradict (11) (recall that $O$ is a finite set not containing zero) and so our claim is justified. Note that this argument proves somewhat more: (taking $C\ge 1/\operatorname{dist}(0, O)$)

(13) $4^{M-j}\,|Y_j(\theta)|^2 \ \ge\ \frac1C$ for all $\theta\in S^1$ and $0\le j\le M-1$.

Now we derive a uniform bound for the distortion of $\partial_x f$ on iterates of $\hat Y$. Note that by (1), (2), we may write $\partial_x f(\theta,x) = x\,\chi(\theta,x)$ with $|\chi + 2| \le \sqrt\alpha$ at every point. Then, given $0\le j\le M-1$, points $(\theta_j, x_j), (\theta_j, y_j)\in\hat Y_j$, and $1\le i\le M-j$,

(14) $\dfrac{\partial_x f^i(\theta_j, x_j)}{\partial_x f^i(\theta_j, y_j)} \ =\ \prod_{m=j}^{j+i-1} \dfrac{x_m}{y_m}\cdot\dfrac{\chi(\theta_m, x_m)}{\chi(\theta_m, y_m)}.$

Our previous estimates imply
$$\Big|\frac{x_m}{y_m} - 1\Big| \ \le\ 2\alpha\,4^m\,\sqrt{C\,4^{M-m}} \ \le\ 2\sqrt C\,\alpha\,4^M \ \le\ C\alpha^{3/5} \quad\text{and}\quad \Big|\frac{\chi(\theta_m, x_m)}{\chi(\theta_m, y_m)} - 1\Big| \ \le\ \frac{2\sqrt\alpha}{2 - \sqrt\alpha} \ <\ 2\sqrt\alpha.$$
Hence, (14) is bounded by $(1 + C\alpha^{1/2})^{2i} \le e^{2MC\sqrt\alpha} \le 2$ (we use $M\le\log\frac1\alpha$ and also assume $\alpha$ small enough). Altogether, this proves that, given any $0\le j\le M-1$ and $1\le i\le M-j$,
$$\frac{\partial_x f^i(\theta_j, x_j)}{\partial_x f^i(\theta_j, y_j)} \ \le\ 2 \quad\text{for every } (\theta_j, x_j), (\theta_j, y_j)\in\hat Y_j.$$
We fix an arbitrary $\hat y\in\hat Y$ and let $\lambda_j = |\partial_x f^{M-j}(\varphi^j(\hat y))|$. Lemma 2.5, together with (11), gives $\lambda_j \ge C_2\,\sigma_2^{M-j}$ for $0\le j\le M-1$. On the other hand, the previous inequality gives

(15) $\frac12\,\lambda_j \ \le\ |\partial_x f^{M-j}(\theta_j, x_j)| \ \le\ 2\,\lambda_j$ for all $(\theta_j, x_j)\in\hat Y_j$.

We fix $K = 400e^2$ and consider $t_1 < t_2 < \cdots \le M$ given by $t_1 = 1$ and
$$t_{i+1} \ =\ \min\{s\colon t_i < s\le M \text{ and } \lambda_{t_i} \ge 2K\lambda_s\} \quad\text{(if it exists).}$$
Moreover, given $r \ge (\frac12 - 2\eta)\log\frac1\alpha$ we set $k = k(r) = \max\{i\colon \lambda_{t_i} \ge 2K e^{-r}/\sqrt\alpha\}$ and then we claim that $k(r)\ge\beta_1 r$ for some constant $\beta_1 > 0$. Indeed, we have $\lambda_{t_i} \le 2K\lambda_{t_{i+1}-1} \le 8K\lambda_{t_{i+1}}$ for all $i$ and so $\lambda_{t_{k+1}} \ge C_2\,\sigma_2^{M-1}(8K)^{-k}$. Also, by definition, $\lambda_{t_{k+1}} \le 2K e^{-r}/\sqrt\alpha$. Putting these two inequalities together we get (recall also the definitions of $\eta$ and $M$)
$$k\log(8K) \ \ge\ r + M\log\sigma_2 - \frac12\log\frac1\alpha - C \ \ge\ r\,\Big(1 - \frac{\frac12 - 4\eta}{\frac12 - 2\eta}\Big) - C \ \ge\ \beta_1\log(8K)\, r,$$

1

0

p YM (l; ) ? YM (m;  )  4e ?r for all  2 S : 2

1

this implies that Y^M (l) and Y^M (m ) can not both intersect a same vertical segment fg  J (r ? 2). By Lemma 2.7 there are H 0 ; H 00  f1; : : : ; dg with #H 0 ; #H 00  [d=16] such that given any l0 2 H 0 and l00 2 H 00 jY (l0 ; l ; : : : ; lM ; ) ? Y (l00 ; l ; : : : ; lM ; )j  100 for all  2 S and l ; : : : ; lM . Then, by (15),  4e ?r p for 2 S jYM (l0 ; l ; : : : ; lM ; ) ? YM (l00 ; l ; : : : ; lM ; )j  2 100 (because 1  k(r)), that is (l0 ; l ; : : : ; lM ) and (l00; l ; : : : ; lM ) are incompatible for every l ; : : : ; lM . In fact, we claim that all pairs (l0 ; l ; : : : ; lt2 ? ; lt02 ; : : : ; lM0 ), (l00; l ; : : : ; lt2 ? ; lt002 ; : : : ; lM00 ) are incompatible. Observe that, 1

1

1

1

1

1

1

1

1

2

1

2

1

1

2

1

2

2

1

2

1

2

2

1

1

2

2

1

1

2

1

1

1

1

 4e ; jYt2 (l0 ; l ; : : : ; lM ; ) ? Yt2 (l00 ; l ; : : : ; lM ; )j  2 100 1

2

1

1

2

2

t2

as a consequence of (15) and the de nition of t . On the other hand, 2

0 Yt2 (l ; l ; : : : ; lt2 ? ; lt02 ; : : : ; lM0 )() ? Yt2 (l0 ; l ; : : : ; lt2 ? ; lt2 ; : : : ; lM )()   osc ('(Y^t2? (l0 ; l ; : : : ; lt2 ? )))  8 1

2

1

1

1

1

2

2

1

1

for all  2 S , and a similar fact holds for Yt2 (l00 ; : : :). Therefore, 1

1

0 YM (l ; l ; : : : ; lt2 ? ; lt02 ; : : : ; lM0 ; )p? YM (l00; l ; : : : ; lt2 ? ; lt002 ; : : : ; lM00 ; )   t2 (4e ? 16)  4e ?r 1

2 1 2

1

1

2

2

1

2

(because 2  k(r)), proving our claim. Now we just repeat this argument for each of the successive ti: at the ith step and for each xed Li = (l ; : : : ; lti ? ), we nd Hi0; Hi00 with #Hi0; #Hi00  [d=16] such that given any lt0i 2 Hi0 and lt00i 2 Hi00 then all pairs (Li ; lt0i ; lti ; : : : ; lM ), (Li; lt00i ; lti ; : : : ; lM ) are incompatible and, in fact, the same is true for every pair (Li; lt0i ; lti ; : : : ; lti+1? ; lt0i+1 ; : : : ; lM0 ), (Li; lt00i ; lti ; : : : ; lti+1? ; lt00i+1 ; : : : ; lM00 ) as long as i + 1  k(r). In this way we conclude that each segment fg J (r ? 2) intersects at most dM ?k r  (d ? [d=16])k r 1

+1

+1

+1

+1

1

1

1

( )

12

( )

admissible curves Y^ (l). Using also j(gM )0j  (d ? )M  const dM (because M  const log ), we conclude 1

  M =d)k r  const  99  1 r m f : Y^M () 2 S  J (r ? 2)g  d ((d (?d [?d= 16]) )M 100   and the lemma follows by taking = 1 log . tu Now we use the previous lemmas to complete the proof of Theorem A. In all that follows we let n  1 be xed, suciently large. We de ne m  1 by m  n < (m + 1)pand take also l = m ? M , where M = M ( ) is as above. Note that l  m  n as long as n  log . Recall also that we are considering an arbitrary admissible curve X^ . Given 1    n and ! l 2 P l , we set

= ' (X^ j! l). Then we say that  is  a In-situation for  2 ! l if \ (S  J (0)) 6= ; but \ (S  J (m)) = ;;  a IIn-situation for  2 ! l if \ (S  J (m)) 6= ;. Note that, by Lemma 2.1, is the graph of a function de ned on g (! l ) 2 Pl and whose derivative is bounded abovepby . Therefore, its diameter in the xdirection is bounded by (d ? )?l  e?m . This means that whenever  is a IIn-situation for ! l then  (S  J (m ? 1)). At this point we introduce B (n) = f 2 S : some 1    n is a IIn-situation for g and then Corollary 2.3 gives s p (16) m(B (n))  nC jJ (m ? 1)j  const ? = ne?m=  const e? n= : ( )

1

100 99

5

2

2

1

0

0

+

+

+

1

+

1

1

+

+

1

2

2

1

+

1 4

1

2

4

>From now on we consider only values of  2 S nB (n) that is, having no IIn-situations in [1; n]. Let 1   <    < s  n be the In-situations of . Note that our de nition of N (recall the proof of Lemma 2.4) implies i  i + N for every i; in particular (s ? 1)N  n. For each  = i we x r = ri 2 f1; : : : ; mg minimum such that \ (S  J (r)) = ;. Then, by Lemma 2.4, 1
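Estimates of the type (16), and the bound for $m(B_2(n))$ obtained below, are fed into the Borel-Cantelli argument at the beginning of the section, which only needs the elementary fact that the tails $\sum_{k\ge n} e^{-\gamma\sqrt k}$ are of order $e^{-\gamma\sqrt n}$ up to a polynomial factor. A quick numerical sanity check of that fact ($\gamma = 1$ is an arbitrary choice):

```python
import math

def tail(n, gamma=1.0):
    """Sum of e^{-gamma*sqrt(k)} for k >= n, truncated once terms fall below 1e-18."""
    s, k = 0.0, n
    while True:
        t = math.exp(-gamma * math.sqrt(k))
        if t < 1e-18:
            return s
        s += t
        k += 1
```

Comparing with $2(\sqrt n + 1)e^{-\sqrt n}$ (the value of the corresponding integral) confirms the decay rate used for $m(\bigcup_{k\ge n} E_k)$.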

2

1

+1

1

i +Y N ?1 i

@x f (X^ j ())  e?r ? =  ; i

1 2+

for each 1  i < s. On the other hand, Lemma 2.5 gives Y 1 ?1 1

@xf (X^j ())  C 1 ? and  +1Y? @x f (X^ j ())  C  +1? ?N ; 2

2

1

i

1

i +N

13

2

2

i

i

for every 1  i < s and also

n Y @xf (X^j ())  (2 ? ) jx j C p n?  const e?r n? :  Altogether, this yields the following lower bound for log Qnj @xf (X^j ()) :  1  s  1 X 3 1 (n ? (s ? 1)N ) log  + ?  log ? r i ? s const ? log : 2 2 i We consider G = fi: ri  ( ? 2) log g (note that it depends on ) and then  1  s  1 X X X 1 ?  log ? r r ri + Ns i ? i + s log  ? 2 i i2G i2G for some > 0 independent of or n (because N  const log ). Replacing 2

s

2

s

s

s

2

s

=1

2

=1

1 2

1

2

=1

1

2

above we get log

n Y @x f (X^ j ())  3cn ? X ri ? s const ? 3 log 1  2cn ? X ri 2 i2G

1

i2G

where c = minf ; log  g and we use n  const log  N  1. Now we introduce B (n) = f 2 S : Pi2G ri  cng and set En = B (n) [ B (n). Then 1 3

2

1

log

1

2 1

1

n Y @xf (X^j ())  cn for every  2 S nEn:

2

1

1

p

Now, in view also of (16), we are left to prove that m(B (n))  const e? n for some > 0. We deduce this from Lemma 2.6 by means of a large deviations argument. First we let 0  q  m ? 1 be xed and denote Gq = fpi 2 G: i  q mod mg. We also take mq = maxfj : mj + q  ng (note mq  m  n) and for each 0  j  mq we let r^j = ri if mj + q = i, some i 2 Gq , and r^j = 0 otherwise. Observe that Gq and the r^j are, in fact, functions of . Then we introduce

q ( ; : : : ; mq ) = f 2 S nB (n): r^j = j for 0  j  mq g 1

1

0

2





where for each j either j = 0 or j  ? 2 log ; we also assume the j not to be simultaneously zero. Consider 0  j  mq and !mj q l 2 Pmj q l . Recall that our construction is such that the value of r^j is constant on !mj q l. Now Y^ = 'mj q l(X^ j!mj q l) is an admissible curve and we have de ned l in such a way that mj + q + l = m(j + 1) + q ? M . Therefore, we are in a position to apply Lemma 2.6 to this curve and obtain in this way   m (f 2 !mj q l: r^j = g)  CC e?  for all   21 ? 2 log 1 : 1 2

1

+ +

+ +

+ +

0

+ +

0

+ +

+ +

+1

3

14

5

Here, as before, C is a uniform upper bound for the metric distortion of the iterates of g. Repeating this reasoning for each 0  i  mq we conclude that

P   m q ( ; : : : ; mq )  C  e? j ; 0

5

4

where C = CC and  = #fj : j 6= 0g. As a consequence, 4

Z

3

e



2

P

r

2

i Gq i

d 

X j

C  e? 3

4

P

j

X



;R

C   (; R)e? R; 3

4

where the integral is taken over $\bigcup\Omega_q(\rho_0,\rho_1,\dots)$ and $\tau(\sigma,R)$ is the number of integer solutions of the equation $x_1+\cdots+x_\sigma=R$ satisfying $x_j\ge\frac12\log\frac1\alpha$ for all $j$. Now, for some absolute constant $K>0$,
$$\tau(\sigma,R)\le\frac{(R+\sigma)!}{R!\,\sigma!}\le K\,\frac{(R+\sigma)^{R+\sigma}}{R^R\,\sigma^{\sigma}}=K\Bigl(1+\frac{\sigma}{R}\Bigr)^{R}\Bigl(1+\frac{R}{\sigma}\Bigr)^{\sigma}\le e^{\frac18 R}.$$
For the last inequality we use the fact that $R/\sigma\ge\mathrm{const}\log\frac1\alpha$ and so all three factors can be made arbitrarily close to 1 by taking $\alpha$ sufficiently small. For this same reason we may also suppose $C_4^{\sigma}\le e^{\frac18 R}$. It follows that
$$\int e^{\frac12\sum_{i\in G_q}r_i}\,dm\;\le\;\sum_{\sigma,R}e^{-\frac14 R}\;\le\;\sum_{R}R\,e^{-\frac14 R}\;\le\;1,$$
since $\sigma\le R$ and $R\ge\frac12\log\frac1\alpha\ge1$ (because $\alpha\ll1$). Therefore,
$$m\Bigl(\Bigl\{\theta\colon\sum_{i\in G_q}r_i\ge\frac{c_1n}{m}\Bigr\}\Bigr)\;\le\;e^{-\frac{c_1n}{2m}}\int e^{\frac12\sum_{i\in G_q}r_i}\,dm\;\le\;e^{-\frac{c_1n}{2m}}.\tag{17}$$
Now, clearly, $\theta\in B(n)$ implies $\sum_{i\in G_q}r_i\ge\frac{c_1n}{m}$ for some $0\le q\le m-1$ and so
$$m(B(n))\le m\,e^{-\frac{c_1n}{2m}}\le e^{-\gamma\sqrt n},\quad\text{for }\gamma=\frac{c_1}4.$$
This concludes the proof of the theorem, under the simplifying assumption (1). Now we proceed to remove this assumption: we take $\varphi$ to be any map of the form $\varphi(\theta,x)=(g(\theta,x),f(\theta,x))$ satisfying $\|\varphi-\varphi_\alpha\|\le\varepsilon$, where $\varepsilon>0$ is small with respect to $\alpha$, and explain how the conclusion of the theorem may be obtained for $\varphi$ by a variation of the previous argument. The first step is to show that such a $\varphi$ always admits an invariant foliation $\mathcal F^c$ by nearly vertical smooth curves. This is a direct consequence of the fact that the vertical straight lines $\{\theta=\mathrm{const}\}$ constitute a normally expanding invariant foliation for $\varphi_\alpha$, see [HPS], but we sketch the main points in the proof, since results on persistence of normally hyperbolic objects are somewhat less standard in this setting of non-invertible dynamics. Let $\mathcal X$ be the space of continuous maps $\xi\colon S^1\times I\to[-1,1]$, endowed with the sup-norm, and define $F\colon\mathcal X\to\mathcal X$ by
$$(F\xi)(z)=\frac{\partial_x f(z)\,\xi(\varphi(z))-\partial_x g(z)}{-\partial_\theta f(z)\,\xi(\varphi(z))+\partial_\theta g(z)},\qquad z=(\theta,x)\in S^1\times I.$$
Note that $F$ is indeed well defined:
$$|F\xi(z)|\le\frac{(4+\varepsilon)+\varepsilon}{(d-\varepsilon)-(\mathrm{const}\,\alpha+\varepsilon)}<1,$$
and, moreover, it is a contraction on $\mathcal X$: $|F\xi-F\zeta|$ is bounded above by
$$\frac{|\det D\varphi|\,|\xi-\zeta|}{\bigl|(-\partial_\theta f\,\xi\circ\varphi+\partial_\theta g)(-\partial_\theta f\,\zeta\circ\varphi+\partial_\theta g)\bigr|}\le\frac{\bigl((d+\varepsilon)(4+\varepsilon)+\varepsilon\bigr)\,|\xi-\zeta|}{(d-\mathrm{const}\,\alpha)^2}\le\frac12\,|\xi-\zeta|.$$
Let $\xi^c\in\mathcal X$ be the fixed point of $F$. Then we take $\mathcal F^c$ to be the integral foliation of the vector field $z\mapsto(\xi^c(z),1)$. Note that we defined $F$ in such a way that, for every $z\in S^1\times I$, $D\varphi(z)$ maps $E^c(z)=\mathrm{span}\{(\xi^c(z),1)\}$ to $E^c(\varphi(z))$, and this implies that $\mathcal F^c$ is invariant. It also follows from the methods of [HPS] that the leaves of $\mathcal F^c$ are as smooth as the map $\varphi$, i.e. they are $C^3$-embedded intervals;

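The fixed-point construction of the invariant slope field can be illustrated numerically. The sketch below is our own, not part of the paper: it iterates a discretized graph-transform operator for a small non-skew perturbation of the model map and checks that the iterates converge geometrically to a slope field of sup-norm less than 1. All parameter values ($d=16$, $a_0=1.9$, $\alpha=0.05$, $\varepsilon=0.05$, the grid sizes) are assumptions made for the experiment.

```python
import math

# Numerical sketch (not from the paper): iterate a discretized graph
# transform for phi(theta, x) = (d*theta + eps*sin(2*pi*x) mod 1,
# a(theta) - x^2).  Here d_theta g = d, d_x g = 2*pi*eps*cos(2*pi*x),
# d_x f = -2x, d_theta f = a'(theta), and the slope field xi satisfies
#   (F xi)(z) = (d_x f * xi(phi(z)) - d_x g) / (d_theta g - d_theta f * xi(phi(z))).
d, a0, alpha, eps = 16, 1.9, 0.05, 0.05
NT, NX, XMAX = 64, 64, 2.2   # grid resolution (an assumption)

def a(theta):
    return a0 + alpha * math.sin(2 * math.pi * theta)

def phi(theta, x):
    return (d * theta + eps * math.sin(2 * math.pi * x)) % 1.0, a(theta) - x * x

def lookup(xi, theta, x):
    # nearest-gridpoint evaluation of the discretized field
    i = int(theta * NT) % NT
    j = min(NX - 1, max(0, int((x + XMAX) / (2 * XMAX) * NX)))
    return xi[i][j]

def F(xi):
    out = [[0.0] * NX for _ in range(NT)]
    for i in range(NT):
        theta = (i + 0.5) / NT
        for j in range(NX):
            x = -XMAX + (j + 0.5) / NX * 2 * XMAX
            xi_next = lookup(xi, *phi(theta, x))
            dth_f = alpha * 2 * math.pi * math.cos(2 * math.pi * theta)
            dx_g = eps * 2 * math.pi * math.cos(2 * math.pi * x)
            out[i][j] = (-2 * x * xi_next - dx_g) / (d - dth_f * xi_next)
    return out

xi = [[0.0] * NX for _ in range(NT)]
deltas = []
for _ in range(20):
    new = F(xi)
    deltas.append(max(abs(new[i][j] - xi[i][j]) for i in range(NT) for j in range(NX)))
    xi = new
sup = max(abs(v) for row in xi for v in row)
print(sup < 1, deltas[-1] < 1e-6)
```

The contraction factor is roughly $\sup|{-2x}|/d$, so a large base degree $d$ is what makes the slope field both well defined and unique.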
moreover, they approach vertical segments, uniformly in the $C^1$ topology, as $\varepsilon\to0$. Note that, in general, $\mathcal F^c$ is not a smooth foliation (its holonomy maps may not even be Lipschitz continuous). Existence of such an invariant foliation replaces the skew-product assumption of (1) in the general case of the theorem. More precisely, what we do now is to show that almost every $z=(\theta,x)\in S^1\times I$ has a positive Lyapunov exponent along the direction of $E^c(z)$. In order to do this we introduce $\lambda(z)$ defined by $D\varphi(z)(\xi^c(z),1)=\lambda(z)\,(\xi^c(\varphi(z)),1)$, i.e. $\lambda(z)=\partial_\theta f(z)\,\xi^c(z)+\partial_x f(z)$. We also need an analog of the second part of (1). Let the critical set $\mathcal C$ of $\varphi$ be defined by $\mathcal C=\{z\in S^1\times I\colon\lambda(z)=0\}$. We claim that $\mathcal C=\mathrm{graph}(\eta)$ for some $C^2$ map $\eta\colon S^1\to I$ and, moreover, $\eta$ is $C^2$-close to zero if $\varepsilon>0$ is small. Indeed, it is clear that $z\in\mathcal C$ implies $\det D\varphi(z)=0$ and the converse is also easy to deduce: if $z$ is such that $\det D\varphi(z)=0$ then the image of $D\varphi(z)$ is a one-dimensional subspace with slope $|\partial_\theta f(z)/\partial_\theta g(z)|\ll1$; on the other hand, $D\varphi(z)(\xi^c(z),1)$ is colinear to $(\xi^c(\varphi(z)),1)$, a nearly vertical vector; hence, it must be $D\varphi(z)(\xi^c(z),1)=0$. Therefore, our claim follows directly from an implicit function argument applied to $\det D\varphi(\theta,x)=0$. Now, this means that up to a $C^2$ change of coordinates $C^2$-close to the identity we may suppose $\eta\equiv0$ and, hence, write $\lambda(\theta,x)=x\,\psi(\theta,x)$ with $|\psi+2|$ close to zero if $\varepsilon$ and $\alpha$ are small. We define admissible curve in just the same way as before, but a few words are required concerning the definition of the partitions $P_n$ in the present setting. Indeed, since $\mathcal F^c$ is usually not a smooth
foliation, there is no natural smooth structure (let alone smooth expanding action of the dynamics) on the space of its leaves, as happened in the previous case. Instead, we let $\ell_0$ denote the leaf of $\mathcal F^c$ close to $\{\theta=0\}$ which is fixed under $\varphi$, and we define $P_n$ to be the set of all intervals $[\theta',\theta'')$ such that $(\theta',\theta'')$ is a connected component of $\hat X_n^{-1}\bigl((S^1\times I)\setminus\ell_0\bigr)$, where $\hat X_n=\varphi^n\circ\hat X$. Note that this depends on the admissible curve $\hat X$ (in an unimportant way). On the other hand, it is easy to check that $(d+\mathrm{const}\,\varepsilon)^{-n}\le|\omega|\le(d-\mathrm{const}\,\varepsilon)^{-n}$ for every $\omega\in P_n$. At this point we may use the same argument as before, with $\partial_x f(\theta,x)$ replaced by $\lambda(\theta,x)$, to show that $\prod_{j=1}^n\lambda(\varphi^j(\theta,x))$ grows exponentially almost surely. The proof of the theorem is complete.

Finally, we explain how these arguments can be easily adapted to a higher-dimensional version of our construction. We consider $\varphi_0\colon T^m\times\mathbb R\to T^m\times\mathbb R$, $(\theta,x)\mapsto(\hat g(\theta),\hat f(\theta,x))$, where $\hat g$ is an expanding map on the $m$-torus $T^m$ and $\hat f(\theta,x)=a_0+\alpha\,\psi(\theta)-x^2$, $a_0$ as before. For simplicity, we take $\hat g$ to be linear and to have a unique largest eigenvalue $u$. Then we suppose the function $\psi$ to vary in a Morse fashion along the corresponding eigendirection $\omega_u$. In this setting we take admissible curve to mean a curve of the form $\{(\theta(t)=\theta_0+t\,\omega_u,\,X(t))\}\subset T^m\times\mathbb R$ with $|X'|$, $|X''|$ small. Then, up to assuming $u$ sufficiently large (depending only on the Morse function $\psi$), the same arguments as before prove that for $\alpha$ small enough the map $\varphi_0$ has $m+1$ positive Lyapunov exponents at $\varphi_0(\theta(t),X(t))$, for almost every $t$. Moreover, the same remains true for all small perturbations of $\varphi_0$, as long as every eigenvalue of $\hat g$ is larger than 4 (this is to ensure that the invariant foliation $\{\theta=\mathrm{const}\}$ is normally expanding, recall the argument above).
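The one-dimensional mechanism behind the theorem can be explored numerically. The sketch below is not part of the proof: it iterates the skew product $\varphi(\theta,x)=(d\theta\bmod1,\,a(\theta)-x^2)$ and estimates the fibre exponent $\frac1n\sum\log|\partial_x f|$ along a few random orbits. The values $d=16$, $a_0=1.9$, $\alpha=0.01$ are illustrative assumptions ($a_0$ is chosen so that the fibre interval is robustly invariant; it is not the Misiurewicz parameter required by the theorem, so the sign of the estimate is not guaranteed here).

```python
import math, random

# Numerical sketch (not from the paper): estimate the fibre Lyapunov
# exponent for phi(theta, x) = (d*theta mod 1, a(theta) - x^2) with
# a(theta) = a0 + alpha*sin(2*pi*theta).  Parameters are assumptions.
d, a0, alpha = 16, 1.9, 0.01

def phi(theta, x):
    return (d * theta) % 1.0, a0 + alpha * math.sin(2 * math.pi * theta) - x * x

def fibre_exponent(theta, x, n=200000, burn=1000):
    for _ in range(burn):
        theta, x = phi(theta, x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(-2.0 * x) + 1e-300)   # |d_x f| = |-2x|
        theta, x = phi(theta, x)
    return s / n

random.seed(1)
samples = [fibre_exponent(random.random(), random.uniform(-0.5, 0.5))
           for _ in range(5)]
print(samples)
```

With these parameters the orbit stays in an invariant interval, so the time averages are finite; near Misiurewicz parameters one typically observes a positive value, in line with the statement being proved.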

3 Proof of Theorem B

In proving Theorem B we follow a similar global strategy as for Theorem A, but have to deal with several additional difficulties arising, fundamentally, from the higher-dimensional nature of the $X$-variable. We fully present the new ingredients required to bypass such difficulties and refer the reader to the previous section for many details which are common to both proofs. First we derive the conclusion of the theorem for $\varphi(\vartheta,X)=\varphi_{\alpha,b}(\vartheta,X)=(\hat g(\vartheta),\hat f_{\alpha,b}(\vartheta,X))$. Extension to all maps in a neighbourhood of $\varphi_{\alpha,b}$ follows precisely the same lines as before, as we comment near the end of the present section. For the sake of notational simplicity we write $g=\hat g$ and $f=\hat f_{\alpha,b}$. In all that follows we let $\alpha$ and $b$ be small, more precisely $0<b\ll\alpha\le c_0$ for some $c_0\ll1$. The constant $c_0$ is determined by a number of conditions which we state along the way. We point out that none of these conditions involves the value of $d$, cf. also the remark preceding Lemma 2.1. In addition, for fixed $d$ we assume that $b$ is large enough with respect to $1/\sqrt d$, say $b^2\sqrt d\ge1000$. Clearly, this last condition
is compatible with the previous one, provided $d$ be large enough. It is not clear whether it is strictly necessary or just a requirement of our method.

Recall that $\vartheta=(\theta,T)\in\mathbb T=S^1\times B^2$ and $X=(x,y)\in I^2$. Here we call $X$-vector any tangent vector of $\mathbb T\times I^2$ which is a linear combination of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$. An $X$-vector $\dot x\frac{\partial}{\partial x}+\dot y\frac{\partial}{\partial y}$ will also be denoted, simply, by $(\dot x,\dot y)$. By admissible curve we now mean any subset $\hat X$ of $\mathbb T\times I^2\times S^1$ which is the graph of some $X\colon S^1\to B^2\times I^2\times S^1$, $X(\theta)=(T(\theta),X(\theta),\eta(\theta))$, satisfying
1. $T$, $X$, $\eta$ are $C^2$ except, possibly, for being left-discontinuous at $\tilde\theta=0$;
2. $|X'|\le\alpha$, $|X''|\le\alpha$, $|\eta'|\le b^3/\sqrt d$ and $|\eta''|\le b^3/\sqrt d$.
We think of $\mathbb T\times I^2\times S^1$ as the bundle over $\mathbb T\times I^2$ whose fibers are the unit balls of $X$-vectors and we let $\Phi$ denote the action induced by $D\varphi$ on this bundle,
$$\Phi(\vartheta,X,\eta)=\Bigl(g(\vartheta),\,f(\vartheta,X),\,\frac{\partial_X f(\vartheta,X)\,\eta}{\|\partial_X f(\vartheta,X)\,\eta\|}\Bigr).$$
In the same way as we did for Theorem A, we reduce the proof of Theorem B to a main claim stated in terms of admissible curves. We let $\hat X_0=\mathrm{graph}(X_0)$, $X_0=(T_0,X_0,\eta_0)$, be any admissible curve with $\eta_0(\theta)\equiv\frac{\partial}{\partial y}$. For every $j\ge0$ we denote $\hat X_j(\theta)=(g^j(\theta),T_j(\theta),X_j(\theta),\eta_j(\theta))=\Phi^j(\theta,X_0(\theta))$ and also $W_j(\theta)=\partial_X f^j(\hat X_0(\theta))\,\eta_0(\theta)$. Note that $W_0(\theta)=\eta_0(\theta)\equiv\frac{\partial}{\partial y}$ and $W_j(\theta)=\|W_j(\theta)\|\,\eta_j(\theta)$. Now, clearly,
$$\Bigl\|D\varphi^n(\hat X_0(\theta))\frac{\partial}{\partial\theta}\Bigr\|\ge d^n\quad\text{for every }\theta\in S^1\text{ and }n\ge1.$$
We claim that for convenient constants $c,C,\gamma>0$ and all large $n$ we have $\|W_n(\theta)\|\ge e^{cn}$ except on a set $E_n$ of values of $\theta$ with $m(E_n)\le Ce^{-\gamma\sqrt n}$. The proof of this statement occupies most of the present section. On the other hand, it easily yields the theorem, cf. the situation in the previous section. Markov partitions $P_n$ for $\theta\mapsto g_0(\theta)=d\theta\bmod1$ are defined as before.

Lemma 3.1 If $\hat X$ is an admissible curve and $\omega\in P_n$ then $\Phi^n(\hat X|\omega)$ is also an admissible curve.

Proof: Clearly, it is sufficient to consider the case $n=1$. Observe that if $\hat X=\mathrm{graph}(T,X,\eta)$ then $\Phi(\hat X|\omega)=\mathrm{graph}(T_1,X_1,\eta_1)$ where $T_1(d\theta)=G(\theta,T(\theta))$, $X_1(d\theta)=\bigl(a(\theta)-x(\theta)^2+by(\theta),\,-bx(\theta)\bigr)$ and $\eta_1(d\theta)$ is the unit vector obtained dividing
$$\begin{pmatrix}-2x(\theta)&b\\-b&0\end{pmatrix}\eta(\theta)$$
by its norm. Property 1 is clear and so we proceed to check 2. Note first that
$$|X_1'|=\frac1d\,\bigl|\bigl(a'-2xx'+by',\,-bx'\bigr)\bigr|\le\frac1d\bigl(2\pi\alpha+4\alpha+b\alpha\bigr)\le\frac{12\alpha}d\le\alpha$$
and a similar calculation gives $|X_1''|\le(48\alpha/d^2)\le\alpha$. On the other hand, we write $\eta=(\cos\psi,\sin\psi)$ and $\eta_1=(\cos\psi_1,\sin\psi_1)$ and then
$$\mathrm{tg}\,\psi_1(d\theta)=\frac{b\cos\psi(\theta)}{2x(\theta)\cos\psi(\theta)-b\sin\psi(\theta)},$$
which leads to
$$\psi_1'(d\theta)=\frac1d\cdot\frac{b^2\psi'-2bx'\cos^2\psi}{(2x\cos\psi-b\sin\psi)^2+(b\cos\psi)^2}.\tag{18}$$
Note that the denominator is bounded from below by $\|\partial_X f^{-1}\|^{-2}\ge(b^2/5)$ and, clearly, also by $b^2\cos^2\psi$. In this way we find
$$|\psi_1'|\le\frac1d\,\frac5{b^2}\bigl(b^2|\psi'|+2b|x'|\bigr)\le\frac5d\Bigl(\frac{b^3}{\sqrt d}+\frac{2\alpha}b\Bigr)\le\frac{b^3}{\sqrt d};\tag{19}$$
recall that $b^2\sqrt d\ge50$. On the other hand, taking derivatives in (18) and performing the same kind of estimations as before, we get
$$|\psi_1''|\le\frac2{d^2b^4}\bigl(\mathrm{const}\,b^2|\psi''|+\mathrm{const}\,b\,|x''|+\mathrm{const}\,b\,|x'||\psi'|\bigr)+\frac2{d^2b^4}\bigl(\mathrm{const}\,|\psi'|+\mathrm{const}\,b\,|x'|\bigr)\bigl(\mathrm{const}\,|x'|+\mathrm{const}\,b^{-1}|\psi'|\bigr),$$
where const always replaces some numerical constant. (For the deduction of this inequality note also that $(2x\cos\psi-b\sin\psi)^2+(b\cos\psi)^2\ge\mathrm{const}\,b^2\,|\sin\psi||\cos\psi|$.) It follows that $|\psi_1''|\le(\mathrm{const}\,b+\mathrm{const}\,\alpha)\,b^3/\sqrt d\le b^3/\sqrt d$. □

Given $(\vartheta,X)\in\mathbb T\times I^2$ and $j\ge0$ we denote $(\vartheta_j,X_j)=\varphi^j(\vartheta,X)$ and also $\vartheta_j=(\theta_j,T_j)$, $X_j=(x_j,y_j)$. In what follows $\delta>0$, $\epsilon_1>0$, $N=N(\delta)\ge1$, and $\sigma_1>1$ have the same meaning as in Section 2. Given an $X$-vector $V=(\dot x,\dot y)$ we denote $\mathrm{slope}\,V=\dot y/\dot x$.

Lemma 3.2 There is $C_5>0$ and, given $(\vartheta,X)\in\mathbb T\times I^2$ and $k\ge1$ with $|x_0|<2\sqrt\alpha$, $|x_j|\ge\sqrt\alpha$ for $1\le j\le k$ and $|x_{k+1}|<\sqrt\alpha$, there exists $\hat S=\hat S_k(Z)$ an $X$-vector at $Z=(\vartheta_1,X_1)$ satisfying
a) For any $X$-vector $U$ at $Z$ with $|\mathrm{slope}\,U|\le c_0$ and every $1\le j\le k$, we have $|\mathrm{slope}\,\partial_X f^j(Z)U|\le c_0$ and $\|\partial_X f^j(Z)U\|\ge C_5\,\sigma_1^{j-N}\sqrt\alpha\,\|U\|$; in addition, $\|\partial_X f^{k+1}(Z)U\|\ge C_5\,\sigma_1^{k}\,\|U\|$.
b) $\hat S=(\hat s,1)$ with $|\hat s|\le b$ and $\partial_X f^k(Z)\hat S=(0,\hat r)$ with $|\hat r|\le b^{2k}/C_5$; moreover $|\mathrm{slope}\,\partial_X f^j(Z)\hat S|\ge1/c_0$ for all $1\le j\le k$.

Proof: For simplicity we denote $L_j=\partial_X f^j(Z)$, $1\le j\le k$. Observe first that, given any point $Y\in\{|x|\ge\sqrt\alpha\}$ and any $X$-vector $V=(\dot x,\dot y)$ at $Y$ with $|\mathrm{slope}\,V|\le c_0$, then $\|\partial_X f(Y)V\|\ge|-2x\dot x+b\dot y|\ge(1-c_0)\,|-2x|\,\|V\|$ and
$$|\mathrm{slope}\,\partial_X f(Y)V|=\Bigl|\frac b{2x-b\,\mathrm{slope}\,V}\Bigr|\le\frac b{2\sqrt\alpha-bc_0}\le\frac b{\sqrt\alpha}\le c_0.\tag{20}$$
In view of this, and up to assuming $c_0$ small, the arguments in Lemma 2.4 give
$$\|L_jU\|\ge\mathrm{const}\,\sigma_1^{j}\,\|U\|\ \text{ for }1\le j<N\quad\text{and}\quad\|L_NU\|\ge\alpha^{-(1+\epsilon_1)}\,\|U\|.\tag{21}$$
An analog of Lemma 2.4b) also follows from those arguments. Moreover, we take $m_0\ge1$, $\sigma_2>1$, $\delta_1>0$ as in Lemma 2.5 and then, by continuity (take $c_0$ small),
1. $\|\partial_X f^{m_0}(Y)V\|\ge\sigma_2^{m_0}\,\|V\|$ if $\varphi^j(Y)\in\{|x|\ge\delta_1\}$ for $0\le j\le m_0-1$;
2. $\|\partial_X f^{l}(Y)V\|\ge\sigma_2^{l}\,\|V\|$ if $\varphi^j(Y)\in\{|x|\ge\delta_1\}$ for $0\le j\le l-1$ and $\varphi^l(Y)\in\{|x|<\delta_1\}$, with $l<m_0$.
Now the same reasoning as in Lemma 2.5 yields
$$\|L_kU\|\ge C_2\,\sigma_2^{k-N}\,\alpha^{-(1+\epsilon_2)}\,\|U\|\quad\text{and}\quad\|L_jU\|\ge C_2\,\sigma_2^{j-N}\,\alpha^{-\frac12+\epsilon_2}\,\|U\|\tag{22}$$
for all $N<j<k$. Recall (Lemmas 2.4, 2.5) that $\alpha^{-(1+\epsilon_1)}\ge\mathrm{const}\,\sigma_1^{N}\ge\mathrm{const}\,\sigma_2^{N}$. This, together with (20)–(22), completes the proof of a). In order to prove b) we introduce $\hat s_j,\hat r_j$ defined by $L_j(\hat s_j,1)=(0,\hat r_j)$, every $j\ge1$, and then we take $\hat s=\hat s_k$, $\hat r=\hat r_k$. Note first that
$$\begin{pmatrix}-2x_1&b\\-b&0\end{pmatrix}\begin{pmatrix}\hat s_1\\1\end{pmatrix}=\begin{pmatrix}0\\\hat r_1\end{pmatrix}\quad\text{gives}\quad\hat s_1=\frac b{2x_1}\le b\quad\text{and}\quad\hat r_1=-\frac{b^2}{2x_1},\ |\hat r_1|\le b^2,$$
since $x_1\approx a_0>1$. In general, we write $L_j=\begin{pmatrix}A_j&B_j\\C_j&D_j\end{pmatrix}$ and then $\hat s_j=-B_j/A_j$ and $\hat r_j=D_j-B_jC_j/A_j=\det L_j/A_j$. Note that $A_{j+1}=-2x_{j+1}A_j+bC_j$ and $B_{j+1}=-2x_{j+1}B_j+bD_j$, and so $\hat s_{j+1}-\hat s_j=-b\det L_j/(A_jA_{j+1})$. On the other hand, the estimates in part a) imply $|A_j|\ge(1-c_0)\,\|L_j(1,0)\|\ge\mathrm{const}$. In this way we get $|\hat r_j|\le\mathrm{const}\,b^{2j}$ and $|\hat s_{j+1}-\hat s_j|\le\mathrm{const}\,b^{2j+1}$ for every $j\ge1$ and this last inequality also gives $|\hat s_k|\le(b/2)+\sum_j\mathrm{const}\,b^{2j+1}\le b$. Finally, if it were $|\mathrm{slope}\,L_j\hat S_k|<1/c_0$ for some $j<k$ then the same calculation as in (20) would give $|\mathrm{slope}\,L_k\hat S_k|\le c_0$, contradicting our definition of $\hat S_k$. □

Remark 1: If we drop the assumptions on $x_0$, $x_{k+1}$ (keeping only $|x_j|\ge\sqrt\alpha$ for $1\le j\le k$) then the same arguments yield the following slightly weaker conclusions, which will be of use below. For any $X$-vector $U$ with $|\mathrm{slope}\,U|\le c_0$ and any $1\le j\le k$, we have $|\mathrm{slope}\,\partial_X f^j(Z)U|\le c_0$ and $\|\partial_X f^j(Z)U\|\ge\mathrm{const}\,\sqrt\alpha\,\sigma_2^{j}\,\|U\|$ (cf. Lemma 2.5). A vector $\hat S_k=(\hat s_k,1)$ is defined, satisfying $|\hat s_k|\le(b/\sqrt\alpha)$, $\partial_X f^k(Z)\hat S_k=(0,\hat r_k)$ with $|\hat r_k|\le\mathrm{const}\,b^{2k}/\sqrt\alpha$, and $|\mathrm{slope}\,\partial_X f^j(Z)\hat S_k|\ge1/c_0$ for $1\le j\le k$. Moreover, if $|x_j|\ge\sqrt\alpha$ for all $j\ge1$ then the corresponding sequence $(\hat s_j,1)$ converges to some $X$-vector $\hat S_\infty=(\hat s_\infty,1)$ such that $|\hat s_\infty|\le c_0$, $|\mathrm{slope}\,\partial_X f^j(Z)\hat S_\infty|\ge1/c_0$, and $\|\partial_X f^j(Z)\hat S_\infty\|\le\mathrm{const}\,b^{2j}/\sqrt\alpha$ for all $j\ge1$.

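The linear-algebra identities behind Lemma 3.2b) can be tested directly on random products of the matrices $\begin{pmatrix}-2x&b\\-b&0\end{pmatrix}$. The script below is an illustration of ours, not from the paper; the sample values of $b$ and of the $x_j$ are assumptions.

```python
import random

# Numerical sketch (not from the paper): for products L_{j+1} = Df(x_{j+1}) L_j
# with Df(x) = [[-2x, b], [-b, 0]], check the identities of Lemma 3.2b):
# writing L_j = [[A_j, B_j], [C_j, D_j]] and s_j = -B_j/A_j, the vector
# (s_j, 1) is mapped by L_j to (0, r_j) with r_j = det(L_j)/A_j, and
# s_{j+1} - s_j = -b * det(L_j) / (A_j * A_{j+1}).
b = 0.01
random.seed(2)

def step(x):
    return [[-2.0 * x, b], [-b, 0.0]]

def mul(M, N):
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def det(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

xs = [random.uniform(0.3, 1.9) for _ in range(8)]   # plays the role of |x_j| >= sqrt(alpha)
L = step(xs[0])
ok = True
for j in range(1, 8):
    Lnext = mul(step(xs[j]), L)
    s = -L[0][1] / L[0][0]
    snext = -Lnext[0][1] / Lnext[0][0]
    img = (L[0][0]*s + L[0][1], L[1][0]*s + L[1][1])   # L_j (s_j, 1)
    ok = ok and abs(img[0]) < 1e-12
    ok = ok and abs(img[1] - det(L)/L[0][0]) < 1e-12
    ok = ok and abs((snext - s) + b*det(L)/(L[0][0]*Lnext[0][0])) < 1e-12
    L = Lnext
print(ok)
```

Since $\det Df(x)=b^2$ at every step, $\det L_j=b^{2j}$, which is the source of the $b^{2j}$ factors in the estimates above.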
We also need to consider pairs of points $(\vartheta^{(i)},X^{(i)})\in\mathbb T\times I^2$, $i=1,2$, and the corresponding objects $\vartheta_j^{(i)}=(\theta_j^{(i)},T_j^{(i)})$, $X_j^{(i)}=(x_j^{(i)},y_j^{(i)})$, and $Z^{(i)}=(\vartheta_1^{(i)},X_1^{(i)})$. Let $[Z^{(1)},Z^{(2)}]$ be the straight line segment connecting $Z^{(1)}$, $Z^{(2)}$. Given any $Z\in[Z^{(1)},Z^{(2)}]$, we define $\vartheta_j=(\theta_j,T_j)$ and $X_j=(x_j,y_j)$ by $(\vartheta_j,X_j)=\varphi^{j-1}(Z)$, every $j\ge1$.
Lemma 3.3 There exists C >p0 such that the following holds. Let  =  and k  1 be such that jxj j  for any 1  j  k and any Z 2 [Z ; Z ]. Let  = minfjx j; jx jg. Then a) s^k (Z ) ? s^k (Z )  b x ? x ? C pb3 (1) 1 (1)

6

(1) 1

(2) 1 (2)

(2) 1

(1)

(1) 1

(2)





8

(2) 1





6





(1) ? X1(2)

. b) s^k (Z (1) ) ? s^k (Z (2) )  2b2 x(1) ? x(2) 1 1 + C6 b X1

Proof: We keep the notations of the previous lemma. First, we observe b x ? x  s^ (Z ) ? s^ (Z ) = bjx ? x j  b x ? x : 8 j2x x j 2 p Recall that s^j ? s^j = ?b j =(Aj Aj ). Using p jA j  P  and jAj j  const for all j (see Remark 1), js^k ? s^ j  const (b = ) + j const (b j = )  p const (b = ): This proves a). Now, let D denote derivative with respect to the X {variable. A simple induction argument shows that kDxj k, jAj j  4j and kDAj k  4 j . PHence, kD(^sj ? s^j )k  const b j 4 j ? = and so kD(^sk ? s^ )k  const j b j 4 j ? =  const b: Then b) follows from our (1) 1

(2) 1

1

(1)

1

2 +1

+1

(1) 1

(2)

(1) 1

+1

3

1

3

1

2

1

1

(2) 1

(2) 1

(1) 1

2

1

2 +1

2

2 +1

+1 2 +1 2 +2 3 2

(2) 1

2 +2

3 2

rst estimate and the mean value theorem (note that the s^j are independent of T , and  is constant on [Z ; Z ]). tu Recall that X^ = graph (T ; X ; ) is an arbitrary admissible curve with  @y@ . Let n be any large integer and M = M ( ), J (r), be as in Section 2. Let j () = (gj (); Tj ()), Xj () = (xj (); yj ()), and j () = (cos j (); sin j ()). Given 1    npand p!?M 2 P?M we say that  is a return for (every)p 2 !p?M if x (!?M ) \ (? ; p) 6= ;. Note that this implies x (!?M )  (?2 ; 2 ), since osc (x j!?M )  , recall (11). For completeness we set Pi = fS g for all i  0; observe also that  < M can only occur for, at most, one return. Our goal at this point is to introduce a function  (), de ned on  2 !?M , bounding the amount of expansion lost by @X f at ( (); X ()). In what follows !i() denotes the element of the partition Pi containing . Firstpwe p take k = k ()  1 minimum such that  =  +k +1 satis es x (!?M ())\(? ; ) 6= ; (in other words,  is the next return of ). Then we let S () = (s (); 1) = S^k ( (); X ()), given by Lemma 3.2 (use Remark 1 if k = 1), and we decompose W = kW? k (h (1; 0) + v (s ; 1)) and W? = kW? k (cos  ; sin  ). 1

(1)

1

0

(2)

0

0

0

0

1

+1

+1

1

1

21

1

Now we de ne (23)

 () = cosh (()) = ?2x () + b tg  () + bs () 

(note that W = kW? k = @X f ( ; X )  = (?2x cos  + b sin  ; ?b cos  )). A main property of these  () is to be stated in Lemma 3.7, which is an analog of Lemma 2.6 in this context. First we need a few auxiliary results. For 1  q  k () we de ne ;q () = ?2x () + b tg  () + bs;q (), with s;q () = s^q ( (); X ()) as in Lemmas 3.2, 3.3. We denote by Aj , Bj , Cj , Dj the entries of @X f j ( (); X ()). Finally, as in the previous section, we let A = f: jsin 2j  1=3g and A = S nA . Lemma 3.4 Let  2 !?M be such that jb tg  ()j  10p . Then for every 1  q  k (), a) 0;q ()  jAqbA2qq?1 j d q? if d q?  2 A and 1

+1

+1

+1

+1

1

1

2

+

+

1

1

10





1

1

b) 00;q ()  10 jAqbA2qq?1 j d2(+q?1) if d+q?1  2 A2 .

Proof: We detail the proof of a): statement b) is proved along precisely the same

lines and we omit the corresponding calculations. Note rst that our assumption on  () implies (recall also Lemma 3.1)  100  b ! 24 0 j(?2x + b tg  ) j  d + b 1 + b d d  25 d? : Moreover, 25 d?  ( =10)b q 4 ? q d q?  ( =10)(b q = jAq Aq? j)d q? for every q  1, since b d  1000 and jAj j  4j . Therefore, it suces to prove that 0 b q?  q? whenever d q?  2 A : s;q ()  5 jA A j d q q? Recall, from the proof of Lemma 3.2, that s; = b=(2x ) = ?b=A and also s;j ? s;j = ?b j =(Aj Aj ) for all j . Hence, ! j 0 A0j A0j b A b 0 0 0 s; = A A and s;j = s;j + A A A + A j j j j and so, by induction, s0;q is equal to ! qX ? bj ? A0 q? A0 qX j j? ! A0j A0j b b b j q 0 s; + A A A + A = A A A + A A A + A A j j j j q q? q j j j j j j? j (convention A = 1). In order to estimate the right-hand side we introduce Ej = Aj =Aj? and note that E = ?2x and Ej = ?2x j ? b =Ej? . It follows, by induction, 1

2

1

2

1

2

+

1

2

2

2

1

+

1

+

1

1

+1

1

1

2 +1

1

+1

=1

+1

2 +1

+1

1

+1

2

+1

+1

+1

+1

1

1

1

2 +1

1

1

22

2

+1

=1

0

1

1

+1

1

1

1

1

1

2 +1

+

1

+

2

1

1

1

p

(i)  jEj j  4 for all j  1 ; moreover, no two consecutive values of jEj j can be less than 1 (because no two consecutive jxi j are less than 3=5).

(ii) Ej0  25 d j? for j  1 (note that x0 j  12 d j? , cf. Lemma 3.1). +

1

+

+

1

Now, d q? )j  1=3. Then (compare the proof of Lemma 2.2) 0 suppose 0 jsin(2 x q = a (d q? )d q? ? 2x q? x0 q? ? by0 q?  (3 =2)d q? and so E 0  3 d q? ? (b = )25 d q?  (5 =2)d q? . As a consequence, q +

+

+

1

1

+

1

+

1

2

+

+

1

q?1

+

=1

1

2

1

1

j ?q)

2(

1

1

0 1 qX ? 5 25 i?q A  d @ ?p d 8 i 2 1

q?1 :

+

!j?q Pj jE 0 =E j db s s s jEi Ei j ( =2)d q?  20 4 : j

qY ?1

2

=1

+1

i=

1

+

1

=1

On the other hand, 2

+

+

1

j? 0 , q? 0 Aj b Aq b Aj Aj? Aj Aq Aq? Aq  b

1

2

0 0 qX Aq  Eq ? ? Ei0  d Aq Eq i Ei

(24)

+

+

1

2

In order to check this, use jEij  4 and (i), (ii) above if s < j , and qY ?1 i=j

qY ? jEi Ei j Ej0 =Ej = jEj j jEiEi j Ej0  4 1

+1

+1

+1

i=j +1

q?j )?1  25 d +s?1

2(

otherwise. In a similar fashion,

j , !j?q A0j b q? A0q db b : Aj Aj Aj Aq Aq? Aq  5 4 2

2 +1

2

1

+1

2

1

Using b d  1000 once more, we get 2

(25)

!q?j 1 ? q? 0 b q? A0q 0 qX 4 s;q  A A A @1 ? 25 b d A  jAb A j 4 d q q? q q q? j 2

1

1

1

2

2

2

=1

1

q?1 ;

+

1

and our argument is complete. tu Remark 2: The same type of estimates (cf. (25), (24), and (ii) above) also gives the following upper bound which will be useful below: 0 b q? A0q p b q? d q?  const p b q? d q? :  100 (26) s;q  2 Aq Aq? Aq jAq Aq? j 2

1

2

1

1

+

1

2

1

+

1

1

Lemma 3.5 Let 1    n and !?M 2 P?M be as above and let ! be any element of P contained in !?M . 23

p

p

a) If jb tg  ( )j  10 for some  2 ! then j ()j  2 for every  2 ! .

 1=10 b) m (f 2 ! :  () 2 J (r)g)  b?1=4 jJb(2r)j m(! ) for every r  0.

Proof: Suppose rst that the assumption of a) is satis ed. Since j! j = d? and p (by Lemma 3.1) j 0 j  (b =d)d , we must havepjb tg p()j  8 for all  2 ! p and then a) follows immediately: j ()j  ?4 + 8 ? b  2 . Nowpwe prove b). Note that, in view of a), it is no restriction to assume jb tg j  10 2



and we do so in what follows. We start with a few simple remarks concerning the Aj , Ej introduced above. Let 1  j  q. It follows from the proof of Lemma 3.4, namely from (ii), that jEj ( ) ? Ej ( )j  25 dj?q whenever  ;  belong to a same ! q? 2 P q? . Hence, under this assumption we have 1

+

1

+

2

1

2

1

jAq ( )j = Yq jEj ( )j  Yq 1 + 25p dj?q   p2: jAq ( )j j jEj ( )j j

(27)

1

1

2

2

=1

=1

We also observe that

k? j q q X j;q ?  j = Ab A  jA2b A j  p2b jA bA j ; j j q q q q? 1

(28)

j =q

2 +2

2 +2

+1

+1

2

2

1

since (recall (i) above)

j , j b b Aj Aj Aj Aj?

= b  pb < 1  for all j: (29) jEj Ej j 2 p We de ne  = (r) to be the maximum integer satisfying e?r  16(b=4) . In what follows we assume   1, as the bound in b) is trivial when  = 0. Observe also that we de ned k in such a way that k () = k implies k ( ) = k for every  2 ! k ?M (), in particular for every  2 ! k? (). For 1  k   ? 1 we let Fk be the family of all intervals ! k? 2 P k? , with ! k?  ! , satisfying  k () = k for (every)  2 ! k? and  ( ) 2 J (r) for some  2 ! k? . Then we also let F be the set of all ! ? 2 P ? , ! ?  ! , such that  k ()   for (every)  2 ! ? and  ( ) 2 J (r) for some  2 ! ? . 2 +2 +1

2

2

1

2

+1

2

+ +1

+

+

1

+

+

1

1

+

1

1

+

+

1

+

1

+

+

1

+

1

1

1

We claim that 1) Given 1  q  , each ! q? 2 P q? contains at most (100b = d)?q elements of F. In particular (taking q = 1) #F  (100b = d)? . +

1

+

24

1

1 4

1 4

1

We prove this statement by induction on  ? q. Note that case q =  is trivial. Let q <  and assume that 1) holds for q + 1 (i.e. for all ! q 2 P q ). Given ! q? an arbitrary element of P q? , we count the intervals ! q 2 P q with ! q  ! q? and containing some elementp of F. Let ! q be any such interval: then there exists  2 ! q with j ( )j  e?r and hence, in view of (28), (29) and the fact that q + 1  , q q q ! p 2 b 6 b 6 b b j;q ( )j  e?r + jA A ( )j  jA A ( )j  p max jA A j ; q q q q q q? where the maximum is taken on ! q? . In other words, ! q must intersect +

+

1

+

+

+

+

1

+

1

+

+

+

2 +2

2 +2

+1

( Z =  2 !

2

2

+1

+

1

1

+

!)

bq 6b max p : j  (  ) j  q? ;q jAq Aq? j : Now, using Lemma 3.4, the same arguments as in Lemma 2.2, Corollary 2.3, give 2

2

+

1

1

p

b = ) max (b q = jAq Aq? j) m(Z )  6 ( =(12 10) min (b q = jAq Aq? j) d  q? 2

2

2

2( +

1

p

!=

1 2

1

1)

:

Therefore, recall (27), m(Z )  6 (240b =( )) = d??q  95b = d??q . On the other hand, Lemma 3.4 implies that (;q j! q? ) is at most 3-to-1 and so Z has no more than 3 connected components. Hence, there are at most 95b = d + 6  100b = d intervals ! q as above. This, together with the induction hypothesis, implies that 1) holds for ! q? and so the proof of our claim is complete. Now we observe that the same argument also proves 2) Given 1  q  k < , each ! q? 2 P q? contains at most (100b = d)k?q elements of Fk . In particular, #Fk  (100b = d)k? . Moreover, using Lemma 3.4 (with q = k), along with the calculations of Lemma 2.2 and Corollary 2.3, we get (the minimum is over ! k? ) 2

1 2

+1

+

1 4

1 4

1 4

1

+

+

+

1

1

+

1

1 4

1 4

1

+

1

p

?r m (f 2 ! k? :  () 2 J (r)g)  6 ( =10) min(b 2k = j e Ak Ak? j) d +

p

+1

1

2

p

1

!=

1 2

 k?1)

2( +

:

Now, e?r  16(b =4 )  4(b = jAA? j)  4(b = )?k (b k = jAk Ak? j), recall (29). It follows that the Lebesgue measure of f 2 ! k? :  () 2 J (r)g p  ? k is bounded above by 6((80= )(b = ) ) = d??q  (60b = )?k d??k . Altogether, this gives 2

2

2

2

1

2

1 2

2

1

+ 1 1 4

+1

+1

 m (f 2 ! :  () 2 J (r)g)  P (100b = d)k? (60b = )?k d??k k  3(100b = )? d?  b? = (100b = ) m(! ): 1 4

=1

1 4

25

1

1

1 4

1 4

+1

1 4

Finally, up to assuming c small enough, the p maximality condition in the de nition of  implies (100b = )  (b=4)=  ( e?r =b ) = . tu Let X^ = graph (X ) be an admissible curve and Z^() = '(X^()). We write ^ Z = (g; U; Z; ), Z = (z; w) and  = (cos ; sin ). For 1  i  d denote ^ [~i? ; i)) = graph (Zi ), with Zi = (Ui ; Zi; i), Zi = (zi ; wi), and Z^i = (Xj i = (cos i ; sin i ). Observe that z() = zi(g()) for  2 [~i? ; i), and similarly for w, wi, and , i . Given j  1 we let Aj , Bj , Cj , Dj , be the entries of @X f j and (recall Lemmas 3.2, 3.3), we write s^j = ?Bj =Aj and r^j = det @X f=Aj . 1 4

0

5

2 1 10

1

1

Lemma 3.6 Let jz()j  p4 for all  2 S . Then there are H ; H  f1; : : : ; dg with #H ; #H  [d=100], such that a) jzi1 () ? zi2 ()j  =250 for all  2 S , i 2 H and i 2 H , p b) j cotg i () ? s^k (; Ui (); Zi())j  b for all  2 S and i 2 H [ H , p for any k  1 such that 'j (X^()) 2 fjxj  g for all 1  j  k and all  2 S . Proof: Let l = [d=100] and de ne H~ s = f2sl + 1; : : : ; 2sl + lg for s = 0; 1; 2. Observe that [~i? ; ~i)  [0; 1=20]  A for all 1  i  5l (recall that ~i = i=d). Moreover, as in (4), jz0 ()j  =2 for all  2 A . Hence, given any i 2 H~ r , i 2 H~ s with r = 6 s, we have inf S1 jzi1 ? zi2 j  ( =2)(l=d)  =250, which gives us a). Now 1

1

1

2

2

1

1

1

2

2

2

1

1

2

1

1

1

1

1

2

we claim that given any such i , i , at least one of them satis es b). Note that the lemma is a direct consequence: one obtains sets H , H as in the statement just by choosing appropriate elements from H~ , H~ , H~ . We prove the claim p by contradiction. Suppose there is  2 S such that j cotg i ( ) ? si( )j  b pfor both i = i , i = i (we write si( ) = s^j (; Upi ( ); Zi( ))p). Since js^k j  b= (recall Remark 1), it follows j cotg i( )j  (b= ) + (b )  1 for i = i ; i (if c is small). Using (19), i =   (gj[~i? ; ~i ))? , and the mean value theorem, we get b ; j cotg i1 ( ) ? cotg i2 ( )j  db2 50 b d 5l  2000 1

2

1

0

1

1

1

2

2

2

2

2

0

1

p

1

1

2

2

p

recall that b d  100. It follows that jsi1 ( ) ? si2 ( )j  2b + (b =2000)  (b =1500), for small enough c . But Lemma 3.3a) gives (note that our assumpp 4 tions imply   ) 2

0

b ? const b =  b ; jsi1 ( ) ? si2 ( )j  8b jzi1 ( ) ? zi2 ( )j ? const b =  800 1000 if c is small. We have reached a contradiction, thus proving our claim. tu Now we are in a position to prove our last lemma. Given Y^ = graph (Y ) an admissible curve and j  0, we write Y^j = j (Y^ ) and Y^j () = j (; Y ()) = 3

5 4

3 4

0

0

0

26

0

0

(gj (); Sj (); Yj (p); ?j ()), with Yj = (j ; j ) and ?j = (cos p j ; sin j ). We suppose that jM ( )j < for some  2 S . Then jM ()j < 2 for everyp 2 S : this is because osc (j )  2 4j for all j  0, in particular osc (M ) < , cf. (11). To the curve Y^M we associate M () = ?2M () + b tg M () + bsM () with sM () = s^(gM (); SM (); YM ()), as in (23). 1

+1

+1

1

+1

Lemma 3.7 There are C > 0 and > 0 such that, given any admissible curve Y^ = graph (Y ) as above and any r  ( ? ) log , 7

0

0

1

1 2

  m f 2 S : M () 2 J (r ? 2)g  C e? r : 1

7

5

Proof: The argument has two parts, which can be sketched as follows. The

rst step is parallel to the proof of Lemma 2.6. For each l = (l ; : : : ; lM ) in f1; : : : ; dgM we let Y^j (l) = graph (Yj (l)) = j (Y^ j!(l)) and also introduce the objects Sj (l; ), Yj (l; ) = (j (l; ); (l; )), ?j (l; ) = (cos j (l; ); sin j (l; )), sM (l; ), and M (l; ) corresponding to it. We x r  ( ? 2) log and say that l and m are incompatible if p jM (l; ) ? M (m;  )j  4e ?r for every  2 S : (30) 1

0

1 2

2

1

1

Using Lemma 3.6a) we prove that each l is incompatible with all but, at most, dM ((d ? [d=100])=d) r elements m 2 f1; : : : ; dgM . Then, in a second step, we prove that the incompatible pairs l, m obtained in this way also satisfy p jM (l; ) ? M (m;  )j  2e ?r for every  2 S ; (31) const

2

1

thus ensuring that M (l; ) and (m;  ) can not both belong in J (r ? 2). This is done by checking that the two last terms in M have a neglectable e ect, which relies on the property in Lemma 3.6b). The lemma follows directly from the combination of these conclusions.p First we note that jj ()j  4 for all  2 S and 0  j  M ? 1. Indeed, let j () = dist (j (); O) + jj ()j, recall that O  IR is the post-critical orbit of h(x) = a ? x . In the same way as in (12), j i()  C 4i( + jj ()j ) for 0  j  M ? 1 and 1  i  M ? j (throughout, C denotes any positive constant depending onlypon h). Then, by the same argument as in (13), one gets jj ()j  const 4j?M  . Let y^ 2 Y^ be xed and write y^j = j Q (^y ) = (^j ; S^j ; Y^j ; ?^ j ), with Y^j = (^j ; ^j ) and ?^ j = (cos ^j ; sin ^j ). De ne also j = Mi ?j j? 2^ij. Then the same continuity argument as in Lemmas 2.5 and 3.2, gives j  const M ?j for all 0  j  M ? 1. We let K = 1000e and de ne t < t <     M by t = 1 and 1

0

2

+

2

0

0

=

2

1

2

1

2

1

ti = minfs: ti < s < M ? 5 and ti  2Ksg (if it exists): +1

27

2

p

Then we set k = k(r) = maxfi: ti  2Ke?r = g. In precisely the same way as in the proof of Lemma 2.6, we deduce that k(r)  r where = = log 8K . p 4 We already remarked that jj ()j  for all  2 S and 0  j  M ? 1. Hence, we are in a position to apply Lemma 3.6 to obtain H 0 ; H 00  f1; : : : ; dg with #H 0 ; #H 00  [d=100], such that for all  2 S ; j (l0 ; ) ?  (l00 ; )j  250 for all l0 = (l0 ; l ; : : : ; lM ) and l00 = (l00; l ; : : : ; lM ) with l0 2 H 0 , l00 2 H 00, and arbitrary l ; : : : ; lM . Since osc ( ) = b osc ( )  b , we also have for all  2 S : j (l0 ; ) ?  (l00 ; )j  b < p 250 By induction (using the same kind of calculation as in (20)), jj (l0 ; ) ? j (l00; )j  p jj (l0 ; ) ? j (l00 ; )j 1

1

1

1

1

1

1

1

1

1

1

1

2

1

1

1

2

1

1

1

1

1

2

1

1

1

1

0

1

1

1

1

1

for every j  1 and  2 S . Combining this with

1

1

osc ( j )  p 2 4j  C 2j ^j C 4j?M

M

+

 C 4M  p

(take c small for the last inequality), we nd jj (l0 ; ) ? j (lp00; )j   j ? 2^j j(1 ? p )jj (l0 ; ) ? j (l00 ; )j ? bjj (l0 ; ) ? j (l00; )j (32)  j ? 2^j j(1 ? 2 )jj (l0 ; ) ? j (l00 ; )j: 0

+1

+1

1

1

1

1

1

p

1

1

1

p

Hence, using log(1 ? 2 )M  const log  ? log 2 (if c is small), jM (l0 ; ) ? M (l00; )j   (1 ? 2p )M ?pj (l0 ; ) ?  (l00 ; )j    4e ?r (because k(r)  1), which means that any such l0 ; l00 are incompatible. Moreover, 1

1

1 2

0

1

1

2

1 250

1

1

1

1

1

1

 4e ; jt2 (l0 ; ) ? t2 (l00 ; )j  12  250 1

1

1

2

t2

by the de nition of t . Also, given any m 0 = (l0 ; l ; : : : ; lt2 ? ; lt02 ; : : : ; lm0 ) and m 00 = (l00; l ; : : : ; lt2 ? ; lt002 ; : : : ; lm00 ), we have jt2 (l0 ; ) ? t2 (m 0 ; )j  osc (t2 (l0 ; l ; : : : ; lt2 ? ))  8 2

1

1

2

1

1

2

1

1

1

1

1

28

2

1

and analogously for l00 , m 00 . Therefore, m 0 , m 00 are incompatible: jM (m 0 ; ) ? M (m 00 ; )j  21 t2 (4e ? 16)  4e ?r p (if k(r)  2). Now, proceeding as in the proof of Lemma 2.6, we conclude that each l 2 f1; : : : ; dgM is compatible with not more than dM ((d ? [d=100])=d) 1r elements m 2 f1; : : : ; dgM , as claimed above. Starting the second part of the proof, we note that it is sucient to consider the case er  4 b? . Indeed, otherwise the lemma is an immediate consequence of Lemma 3.5b) (take  1=20). Let l, m be a pair of incompatible sequences as constructed above. More precisely, for some 1  i  k(r) we have lj = mj for j < ti and lti 2 Hi0, mti 2 Hi00. For notational simplicity, we shall write t = ti and let s~i(l; ), r~i(l; ), A~i (l; ), C~i(l; ), be the values of s^M ?t, r^M ?t, AM ?t, CM ?t , respectively, at the point (; St (l; ); Yt(l; )) 2 T  I . We write ?t(l; ) = (cos t(l; ); sin t(l; )) = hi(1; 0) + vi(~si(l; ); 1) and then ?M (l; ) = (cos M (l; ); sin M (l; )) = hi (A~i(l; ); C~i(l; )) + vi(0; r~i(l; )), which yields ~ b tg M (l; ) = b C~i (l; ) + b hvi r~~i (l; ): (33) Ai i Ai Note that (i) j(~ri=A~i)(l; )j  const b M ?t , by Lemma 3.4; p (ii) jhi=vij = j cotg t (l; ) ? s~i (l; )j  b , by Lemma 3.6b). Hence, the last term in (33) is bounded by p p p const b M ?t =(b )  const b =  e ?r (using M ? t > 5 and er  4 b? and assuming c small enough). Clearly, the same arguments and estimates hold also for m . In order to control the rst term in (33) we introduce uj (l; ) = slope @X f j?t(; St(l; ); Yt(l; ))(1; 0), and let uj (m;  ) be the analogous object for m . Note that ut = 0, uM = C~i=A~i, and juj j  c for all t  j  M (by Lemma 3.2a) andp Remark 1). Note also that uj = b=(2j ? buj ) for all j . Hence, using jj j  4  bc  jbuj j, p p  )j + bjuj (l; ) ? uj (m;  )j: uj (l; ) ? uj (m;  )  bjj (l; ) ?  (m; 1

1

1

1

1

2

1

2

9

3

2(

2 0

)

2

2(

)+1

2

11

9

2

0

0

0

+1

+1



+1


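The slope recurrence just used can be checked directly: by the formula for $\hat\varphi_{\alpha,b}$ recalled in the Introduction, the Jacobian in the $X = (x,y)$ variables is $\begin{pmatrix} -2x & b \\ -b & 0 \end{pmatrix}$, so a tangent vector $(1, u)$ is sent to $(-2x + bu, -b)$, whose slope is $b/(2x - bu)$. The following sketch (illustrative parameter values only, not the constants fixed in the text) iterates this recurrence along two $x$-itineraries that agree except in one entry: the slopes are $O(b)$, and so is their final difference, in line with the $\sqrt b$-contraction exploited above.

```python
# Numerical sketch (not part of the proof): iterate the slope recurrence
# u_{j+1} = b / (2*x_j - b*u_j) from the text along two x-itineraries
# agreeing except in the last entry. The value of b and the itineraries
# are illustrative, chosen only so that |x_j| stays well away from 0.
b = 1e-4

def slopes(xs, u0=0.0):
    """Iterate u_{j+1} = b / (2*x_j - b*u_j) starting from u0."""
    us = [u0]
    for x in xs:
        us.append(b / (2 * x - b * us[-1]))
    return us

xs_l = [0.9, -0.7, 0.8, -0.6, 0.9]
xs_m = [0.9, -0.7, 0.8, -0.6, 0.5]   # differs only in the last entry

ul, um = slopes(xs_l), slopes(xs_m)
diffs = [abs(a - c) for a, c in zip(ul, um)]
# The slopes are O(b), and only the last difference is nonzero; it is
# O(b) as well, reflecting the contraction in the recurrence estimate.
print(diffs)
```

The point of the sketch is the strong dissipation: because the recurrence divides by a quantity bounded away from zero while multiplying by $b$, a perturbation of one entry of the itinerary changes the accumulated slope only at order $b$.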

By recurrence,
$$|u_M(l,\theta) - u_M(\mathbf m,\theta)| \ \le\ \sum_{s=1}^{M-t} (\sqrt{b}\,)^s\, |x_{M-s}(l,\theta) - x_{M-s}(\mathbf m,\theta)|.$$
Moreover (recall (32)), $|x_{M-s}(l,\theta) - x_{M-s}(\mathbf m,\theta)| \le (2/\sigma_1^{s})\,|x_M(l,\theta) - x_M(\mathbf m,\theta)|$ and $2/\sigma_1^{s}$ is uniformly bounded from above. This gives
$$\Big| b\,\frac{\tilde C_i}{\tilde A_i}(l,\theta) - b\,\frac{\tilde C_i}{\tilde A_i}(\mathbf m,\theta) \Big| \ \le\ \sum_{s=1}^{M-t} \mathrm{const}\,(\sqrt{b}\,)^s\, |x_M(l,\theta) - x_M(\mathbf m,\theta)| \ \le\ \tfrac18\, |x_M(l,\theta) - x_M(\mathbf m,\theta)|.$$

Altogether, we conclude that
$$\big| b\operatorname{tg}\theta_M(l,\theta) - b\operatorname{tg}\theta_M(\mathbf m,\theta) \big| \ \le\ \tfrac18\,\big| x_M(l,\theta) - x_M(\mathbf m,\theta) \big| + 2e^{-2r}. \tag{34}$$
Now we estimate $\big| b\,s_{M+1}(l,\theta) - b\,s_{M+1}(\mathbf m,\theta) \big|$. First we recall that by definition $s_{M+1}(l,\theta) = \hat s_k(g(\theta), S_{M+1}(l,\theta), Y_{M+1}(l,\theta))$, where $k = k(l,\theta)$ is the number of iterates before the next return. Clearly, up to taking $c$ small enough, we may suppose $k(l,\theta) \ge 4$ for all $\theta \in S^1$. Then the proof of Lemma 3.2 gives
$$\big| b\,s_{M+1}(l,\theta) - b\,\hat s_4(g(\theta), S_{M+1}(l,\theta), Y_{M+1}(l,\theta)) \big| \ \le\ \mathrm{const}\,\sqrt{b} \ \le\ e^{-2r}$$

(we use $e^r \le \tfrac14\,b^{-1/4}$ once more). Of course, the same holds for $\mathbf m$. On the other hand, Lemma 3.3b) gives (here we have $b \le 1$)
$$\big| b\,\hat s_4(g(\theta), S_{M+1}(l,\theta), Y_{M+1}(l,\theta)) - b\,\hat s_4(g(\theta), S_{M+1}(\mathbf m,\theta), Y_{M+1}(\mathbf m,\theta)) \big| \ \le\ \tfrac{b}{2}\,\big| S_{M+1}(l,\theta) - S_{M+1}(\mathbf m,\theta) \big| + \mathrm{const}\, b\,\big| Y_{M+1}(l,\theta) - Y_{M+1}(\mathbf m,\theta) \big| \ \le\ \tfrac18\,\big| x_M(l,\theta) - x_M(\mathbf m,\theta) \big|,$$
using the inequalities $|Y_{M+1}(l,\theta) - Y_{M+1}(\mathbf m,\theta)| \le |x_M(l,\theta) - x_M(\mathbf m,\theta)|$ and $|S_{M+1}(l,\theta) - S_{M+1}(\mathbf m,\theta)| \le \mathrm{const}\,|x_M(l,\theta) - x_M(\mathbf m,\theta)|$. We conclude that

$$\big| b\,s_{M+1}(l,\theta) - b\,s_{M+1}(\mathbf m,\theta) \big| \ \le\ \tfrac18\,\big| x_M(l,\theta) - x_M(\mathbf m,\theta) \big| + 2e^{-2r}. \tag{35}$$

Altogether, (30), (34), (35) imply (31) and so our argument is complete. $\square$

Finally, we derive Theorem B from the previous lemmas. In the rest of the paper $\hat X = \operatorname{graph}(T_0, X_0)$ is any admissible curve, with $\partial/\partial y$ the corresponding vertical direction, and the notations are as introduced at the beginning of this section. We take $l, m \le n$, $l = m - M$, as before. For any return $1 \le \nu \le n$ (recall the definition above) and $\theta \in \omega_{\nu-M}$ we define $r = r(\nu, \theta) \ge 0$ to be the smallest integer such that $x_\nu(\omega_{\nu+l}(\theta)) \cap J(r) = \emptyset$. Recall that $\omega_i(\theta)$ denotes the element of $\mathcal P_i$ containing $\theta$. Then we say that $\nu$ is a 0-situation, resp. a $\mathrm I_n$-situation, resp. a $\mathrm{II}_n$-situation, for $\theta$ if $r = 0$, resp. $1 \le r \le \sqrt m$, resp. $r > \sqrt m$. Clearly, Lemma 3.5a) asserts that $|b\operatorname{tg}\theta_\nu| \ge \sqrt{b}/10$ can only occur in a 0-situation. On the other hand, if $\nu$ is a $\mathrm{II}_n$-situation then $x_\nu(\omega_{\nu+l}(\theta)) \subset J(\sqrt m - 1)$, because
(i) $\operatorname{osc}(x_\nu \mid \omega_{\nu+l}(\theta)) \le d^{-l} \le e^{-\sqrt m}$;
(ii) $\operatorname{osc}(b\operatorname{tg}\theta_\nu \mid \omega_{\nu+l}(\theta)) \le b\,\big(1 + (100\sqrt{b}/b)\big)\,(\sqrt b/d)\,d^{-l} \le e^{-\sqrt m}$;
(iii) $\operatorname{osc}(b\,s_\nu \mid \omega_{\nu+l}(\theta)) \le e^{-\sqrt m}$.

Indeed, (i) and (ii) are direct consequences of the definition of admissible curve and Lemma 3.1, and (iii) can be justified as follows. If $k_\nu(\theta') \le l/2\ (< \sqrt m)$ for some $\theta' \in \omega_{\nu+l}(\theta)$ then, by definition, $k_\nu$ is constant on $\omega_{\nu+l}(\theta)$ and then (26) gives
$$\operatorname{osc}(b\,s_\nu \mid \omega_{\nu+l}(\theta)) \ \le\ \mathrm{const}\,(\sqrt b\,)^{k}\, d^{\,k-l} \ \le\ \mathrm{const}\, d^{-l/2} \ \le\ e^{-\sqrt m}.$$
If $k_\nu(\theta') > l/2$ for all $\theta' \in \omega_{\nu+l}(\theta)$, we take $q = [l/2]$ and, using also $|s_\nu - s_{\nu,q}| \le \mathrm{const}\,(\sqrt b\,)^{q}$, we get
$$\operatorname{osc}(b\,s_\nu \mid \omega_{\nu+l}(\theta)) \ \le\ \mathrm{const}\,(\sqrt b\,)^{q}\, d^{\,q-l} + \mathrm{const}\,(\sqrt b\,)^{q} \ \le\ e^{-\sqrt m}.$$
Claim (iii) is proved. Now we let $B_1(n) = \{\theta \in S^1 \colon \text{some } 1 \le \nu \le n \text{ is a } \mathrm{II}_n\text{-situation for } \theta\}$ and then Lemma 3.5b) gives

$$m(B_1(n)) \ \le\ n\, b^{-1/2}\, e^{-\sqrt m/10} \ \le\ \mathrm{const}\, e^{-\sqrt n/20}$$

for all $n$ sufficiently large (with respect to $\alpha$ and $b$). From now on we may restrict ourselves to those values of $\theta \in S^1$ having no $\mathrm{II}_n$-situations in $[1, n]$. Let $1 \le \nu_1 < \cdots < \nu_s \le n$ be the returns of $\theta$ and denote $r_i = r(\nu_i, \theta)$. We write $W_\nu(\theta) = \|W_{\nu-1}(\theta)\|\,\big(h_\nu\,(1, 0) + v_\nu\,(s_\nu(\theta), 1)\big)$, for each $\nu = \nu_i$. Then Lemma 3.2 and the definition of $s_\nu$ in (23) give
$$\|W_{\nu+j}(\theta)\| \ \ge\ (1 - \sqrt c\,)^{j}\, \|W_{\nu-1}(\theta)\|\, |h_\nu|\, \|\partial_X f^{\,j}(Z_\nu(\theta))(1,0)\| \ \ge\ (1 - \sqrt c\,)^{j}\, \|W_{\nu-1}(\theta)\|\, |\cos\theta_\nu(\theta)|\,|x_\nu(\theta)|\, \mathrm{const}\,\sigma_1^{\,j},$$
for all $0 \le j < \nu_{i+1} - \nu_i$, where $Z_\nu(\theta) = \varphi^{\nu}(\theta, T_0(\theta), X_0(\theta))$. Moreover, taking $j = \nu_{i+1} - \nu_i - 1$ and writing $\nu = \nu_i$,
$$\|W_{\nu_{i+1}-1}(\theta)\| \ \ge\ (1 - \sqrt c\,)^{\nu_{i+1}-\nu_i}\, \|W_{\nu-1}(\theta)\|\, |\cos\theta_\nu(\theta)|\, |x_\nu(\theta)|\, \mathrm{const}\,\sigma_1^{\,\nu_{i+1}-\nu_i}\,\sigma_1^{-N}.$$

By recurrence (recall that $W_0 = \partial/\partial x$),
$$\|W_n(\theta)\| \ \ge\ \sigma_2^{\,n-(s-1)N}\ \prod_{i=1}^{s}\Big(\mathrm{const}\,|x_{\nu_i}(\theta)|\,\sigma_1^{1/2}\,\sigma_2^{-3/2}\Big)$$

and so $\log\|W_n(\theta)\|$ is bounded from below by
$$(n - (s-1)N)\,\log\sigma_2\ +\ \sum_{i=1}^{s}\Big(\tfrac12\,\log\sigma_1 - r_i\Big)\ -\ s\,\mathrm{const}\ -\ \tfrac32\,s\,\log\sigma_2.$$
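The shape of this lower bound, a full-orbit expansion term minus a bounded loss at each return, can be made concrete in a toy computation. Everything below is hypothetical and for illustration only: the values of $\sigma_1$, $\sigma_2$, $N$, the constant $C$, and the return depths $r_i$ are made-up stand-ins, not the quantities fixed in the text.

```python
# Toy bookkeeping (illustrative only) for a lower bound of the form
#   (n - (s-1)N) log sigma2 + sum_i (1/2 log sigma1 - r_i) - s*C,
# schematically mirroring the estimate in the text. All constants are
# hypothetical stand-ins.
import math
import random

random.seed(1)
n = 10_000
sigma1, sigma2 = 4.0, 1.5      # hypothetical expansion rates
N, C = 10, 0.1                 # hypothetical bounded-loss constants

s = 50                                                # few returns...
depths = [random.expovariate(1.0) for _ in range(s)]  # ...mostly shallow

bound = ((n - (s - 1) * N) * math.log(sigma2)
         + sum(0.5 * math.log(sigma1) - r for r in depths)
         - s * C)

# With few, shallow returns the expansion term dominates and the
# bound is a definite fraction of n.
print(bound / n)
```

This is exactly the dichotomy the argument below exploits: either the total depth of returns is a small fraction of $n$ (and the bound above is linear in $n$), or $\theta$ belongs to an exceptional set of small measure.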

From here on the argument is completely analogous to the one in Section 2, with Lemma 3.7 replacing Lemma 2.6. We get $\log\|W_n(\theta)\| \ge 2cn - \sum_{i\in G} r_i$, where $c > 0$ and $G = \{i \colon r_i \ge \tfrac12(\log\sigma_1 - 2)\}$, and we prove that the Lebesgue measure of $B_2(n) = \{\theta \in S^1 \colon \sum_{i\in G} r_i \ge cn\}$ is bounded by $\mathrm{const}\, e^{-\mathrm{const}\sqrt n}$ (if $\nu_1 < M$ then Lemma 3.7 can not be used at time $\nu_1$, but this is irrelevant for the conclusion; recall that $\nu_i > M$ for all $i > 1$). Then $E_n = B_1(n) \cup B_2(n)$ satisfies the claim we

stated at the beginning of this section. This completes the proof of the theorem in the case $\varphi = \varphi_{\alpha,b}$. Moreover, it is not difficult to see that all these arguments remain valid for arbitrary diffeomorphisms in a sufficiently small neighbourhood of $\varphi_{\alpha,b}$ (depending on $\alpha$ and $b$), if one uses the same approach as for Theorem A. Indeed, since $\mathcal F = \{\theta = \mathrm{const}\}$ is a normally hyperbolic invariant foliation for $\varphi_{\alpha,b}$, any nearby diffeomorphism $\varphi$ also admits such a foliation $\mathcal F(\varphi)$. Moreover, the leaves of $\mathcal F(\varphi)$ converge to those of $\mathcal F$ as $\varphi$ approaches $\varphi_{\alpha,b}$. Hence, we may reproduce the previous calculations for $\varphi$, just taking $X$-vector to mean any vector tangent to a leaf of $\mathcal F(\varphi)$ and making straightforward adjustments in the notations. As our argument is based on analysing pieces of orbits whose length is bounded independently of $n$ (this remark is particularly clear e.g. in the situation of Lemma 3.2), all our estimates remain valid, by continuity, if $\varphi$ is close enough to $\varphi_{\alpha,b}$. Therefore, the proof of Theorem B is complete.
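For readers who want to see the phenomenon numerically, the following sketch iterates a non-invertible model map of the type underlying the construction, $(\theta, x) \mapsto (d\theta \bmod 1,\ a(\theta) - x^2)$, here with $a(\theta) = a_0 + \alpha\sin 2\pi\theta$ as an illustrative choice. The parameter values ($d = 17$, $a_0 = 1.9$, $\alpha = 0.01$) are assumptions made for this demonstration, not the parameters required by the theorems; they merely exhibit the multidimensional expansion: an exact exponent $\log d$ in the base and, empirically, a second positive exponent along the fibre.

```python
# Numerical sketch (illustrative parameters, not the theorem's):
# iterate (theta, x) -> (d*theta mod 1, a(theta) - x^2) with
# a(theta) = a0 + alpha*sin(2*pi*theta), and estimate the Lyapunov
# exponent along the fibre as the time average of log |d/dx (a - x^2)|.
import math

d, a0, alpha = 17, 1.9, 0.01   # hypothetical values; d not a power of 2,
theta, x = 0.123456789, 0.5    # which avoids floating-point collapse of
                               # d*theta mod 1

def step(theta, x):
    x_new = a0 + alpha * math.sin(2 * math.pi * theta) - x * x
    return (d * theta) % 1.0, x_new

for _ in range(1_000):          # discard a transient
    theta, x = step(theta, x)

acc = 0.0
n_iter = 100_000
for _ in range(n_iter):
    acc += math.log(max(abs(2 * x), 1e-300))  # fibre derivative is -2x
    theta, x = step(theta, x)

lyap_base = math.log(d)         # base derivative is constantly d
lyap_fibre = acc / n_iter
print(lyap_base, lyap_fibre)
```

In our runs the fibre exponent comes out positive despite the orbit repeatedly passing near the critical region $x \approx 0$; the rigorous counterpart of this observation, for the parameters treated in the text, is the content of the theorems above.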

References

[BC1] M. Benedicks, L. Carleson, On iterations of $1 - ax^2$ on $(-1,1)$, Ann. Math. 122 (1985), 1-25.
[BC2] M. Benedicks, L. Carleson, The dynamics of the Hénon map, Ann. Math. 133 (1991), 73-169.
[BY] M. Benedicks, L.-S. Young, SBR-measures for certain Hénon maps, Invent. Math. 112 (1993), 541-576.
[BD] C. Bonatti, L. J. Díaz, Persistent nonhyperbolic transitive diffeomorphisms, preprint Univ. Dijon, 1994.
[CE] P. Collet, J.-P. Eckmann, On the abundance of aperiodic behaviour, Comm. Math. Phys. 73 (1980), 115-160.
[HPS] M. Hirsch, C. Pugh, M. Shub, Invariant Manifolds, Lect. Notes Math. 583, Springer Verlag, 1977.
[Ja] M. Jakobson, Absolutely continuous invariant measures for one-parameter families of one-dimensional maps, Comm. Math. Phys. 81 (1981), 39-88.
[JN] M. Jakobson, S. Newhouse, Strange attractors in strongly dissipative surface diffeomorphisms, in preparation.
[Ma] R. Mañé, Contributions to the stability conjecture, Topology 17 (1978), 383-396.
[MS] W. de Melo, S. van Strien, One-Dimensional Dynamics, Springer Verlag, 1993.
[MV] L. Mora, M. Viana, Abundance of strange attractors, Acta Math. 171 (1993), 1-71.
[No] T. Nowicki, A positive Lyapunov exponent for the critical value of an S-unimodal mapping implies uniform hyperbolicity, Ergod. Th. & Dynam. Sys. 8 (1988), 425-435.
[Sh] M. Shub, Topologically transitive diffeomorphisms on $T^4$, Lect. Notes in Math. 206 (1971), 39, Springer Verlag.
[Si] D. Singer, Stable orbits and bifurcations of maps of the interval, SIAM J. Appl. Math. 35 (1978), 260-267.
[Sm] S. Smale, Differentiable dynamical systems, Bull. Am. Math. Soc. 73 (1967), 747-817.
[Vi] M. Viana, Strange attractors in higher dimensions, Bull. Braz. Math. Soc. 24 (1993), 13-62.
[Yo] L.-S. Young, Some open sets of nonuniformly hyperbolic cocycles, Ergod. Th. & Dynam. Sys. 13 (1993), 409-415.

Marcelo Viana
IMPA, Est. D. Castorina 110, Jardim Botânico, 22460 Rio de Janeiro, Brazil
Tel: +55(21)294-9032   Fax: +55(21)512-4115
E-mail: [email protected]
