On the number of closed solutions for polynomial ODE's and a special case of Hilbert's 16th problem

Marta Calanchi and Bernhard Ruf
Dip. di Matematica, Università di Milano

Abstract

In this paper we prove that the equation $\frac{du}{dt} + \sum_{i=1}^{n} a_i(t)u^i = f(t)$, $t \in [0,1]$, $u(0) = u(1)$, has for every continuous $f$ at most $n$ solutions, provided that $n$ is odd and the continuous coefficients $a_i$ satisfy $|a_n(t)| \ge \delta > 0$ and $|a_i(t)| \le \eta$, $i = 1, \dots, n-1$, with $\eta > 0$ sufficiently small. Furthermore, we show that this result implies that for a restricted subclass of polynomial vector fields of order $n$ in $\mathbb{R}^2$ the maximal number of limit cycles is $n$. This constitutes a special case of Hilbert's 16th problem.

1 Introduction

In this paper we look for upper bounds on the number of closed solutions for the following ordinary differential equation with polynomial nonlinearity

$$\frac{dx}{dt} + a_n(t)x^n + \dots + a_1(t)x = f(t), \quad 0 \le t \le 1 \tag{1.1}$$

where $a_1, \dots, a_n, f : [0,1] \to \mathbb{R}$ are continuous functions. We call a solution $x(t)$ of (1.1) closed if $x(0) = x(1)$; if $f, a_1, \dots, a_n$ are 1-periodic, then a closed solution is clearly 1-periodic. A.L. Neto [9] attributes the following problem to C. Pugh: does there exist a number $N(n)$, depending only on the degree $n$ of the polynomial, such that (1.1) has at most $N$ closed solutions (if not all solutions defined on $[0,1]$ are closed)?
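Closed solutions in this sense are exactly the fixed points of the time-one map $x(0) \mapsto x(1)$. As a numerical illustration (ours, not from the paper), one can integrate (1.1) with a basic RK4 stepper for an illustrative Riccati-type choice of coefficients and count sign changes of $x(1) - x(0)$ over a grid of initial values; all names and coefficient choices below are hypothetical:

```python
# Numerical sketch (not from the paper): closed solutions of
#   dx/dt + a2(t) x^2 + a1(t) x = f(t),  0 <= t <= 1,
# are fixed points of the time-one map  T: x(0) -> x(1).
# We integrate with a basic RK4 stepper and locate sign changes
# of T(x0) - x0 on a grid of initial values.

import math

def rhs(t, x):
    # illustrative coefficients; any continuous a_i, f would do
    a2, a1, f = 1.0, 0.5, math.cos(2 * math.pi * t)
    return f - a2 * x * x - a1 * x         # dx/dt

def time_one_map(x0, steps=200):
    """Integrate from t=0 to t=1 with RK4; returns x(1), or None on blow-up."""
    h, t, x = 1.0 / steps, 0.0, x0
    for _ in range(steps):
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h * k1 / 2)
        k3 = rhs(t + h / 2, x + h * k2 / 2)
        k4 = rhs(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        if abs(x) > 1e6:                   # solution escaped: not closed
            return None
    return x

# count sign changes of T(x0) - x0  ->  closed solutions on this grid
grid = [i * 0.05 - 3.0 for i in range(121)]
vals = [(x0, time_one_map(x0)) for x0 in grid]
vals = [(x0, x1 - x0) for x0, x1 in vals if x1 is not None]
crossings = sum(1 for (_, d1), (_, d2) in zip(vals, vals[1:]) if d1 * d2 < 0)
print("closed solutions detected:", crossings)   # Smale's bound: at most 2
```

The count obtained this way respects the bound $N(2) = 2$ quoted below; a production version would of course use an adaptive integrator rather than fixed-step RK4.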

Pugh's question is motivated by the following results, apparently due to S. Smale and proved in [9]: if not all solutions are closed, then the Riccati equation

$$\frac{dx}{dt} + a_2(t)x^2 + a_1(t)x = f(t), \quad 0 \le t \le 1 \tag{1.2}$$

has at most 2 closed solutions, i.e. $N(2) = 2$, and the Abel equation

$$\frac{dx}{dt} + a_3(t)x^3 + a_2(t)x^2 + a_1(t)x = f(t), \quad 0 \le t \le 1 \tag{1.3}$$

has at most 3 closed solutions provided that $a_3(t) > 0$, $t \in [0,1]$ (see also Cafagna-Donati [2]).

In [9] A.L. Neto gives a negative answer to Pugh's problem; he gives examples of equations of the form

$$\frac{dx}{dt} = a_3(t)x^3 + a_2(t)x^2, \quad 0 \le t \le 1 \tag{1.4}$$

having at least $k$ closed solutions (and not all solutions closed), where $k$ is any given positive integer, and $a_2, a_3$ are polynomials in $t$ or in $\cos(t)$ and $\sin(t)$. From these examples it follows also that there exist equations of the form

$$\frac{dx}{dt} = x^4 + a_3(t)x^3 + a_2(t)x^2 + a_1(t)x + a_0, \quad 0 \le t \le 1 \tag{1.5}$$

which have at least $k$ closed solutions.

In this paper we specify a restricted class of polynomial nonlinearities of degree $n$ for which $N(n) = n$. More precisely, we prove the following result:

THEOREM 1.1 Suppose that $n$ is odd. Assume that $f, a_1, \dots, a_n : [0,1] \to \mathbb{R}$ are continuous, and that $\min_{[0,1]} |a_n(t)| \ge \delta$, for some $\delta > 0$. Then there exists some $\eta > 0$ such that if $\max_{[0,1]} |a_i(t)| \le \eta$, $i = 1, \dots, n-1$, then (1.1) has at most $n$ closed solutions, for all right-hand sides $f(t) \in C[0,1]$.

As pointed out in [9], the problem of finding bounds for the number of closed solutions of equation (1.1) is related to Hilbert's 16th problem; indeed, it can be viewed as a particular case of it. In the second part of Hilbert's 16th problem one reads: "This is the question as to the maximum number and position of Poincaré's boundary cycles (cycles limites) for a differential equation of the first order and degree of the form

$$\frac{dy}{dx} = \frac{Q}{P} \tag{1.6}$$

where $P, Q$ are entire rational integral functions of the $n$th degree in $x, y$", cf. D. Hilbert [5].


We recall that Poincaré limit cycles are isolated periodic solutions, i.e. periodic solutions which have an annulus-like neighborhood free of other periodic solutions in the $(x,y)$-plane. Ilyashenko and Yakovenko mention in [7] three versions of this problem:

Individual finiteness problem (or Dulac's problem): equation (1.6) always has at most a finite number of limit cycles.

Existential Hilbert problem: there exists a number $H(n)$ such that equation (1.6) has at most $H(n)$ limit cycles, for any polynomial functions $P(x,y)$, $Q(x,y)$ of degree $\le n$.

Constructive Hilbert problem: give a formula or estimate for $H(n)$.

Up to now, only Dulac's problem has been solved positively, in independent proofs by Yu. Ilyashenko [6] and J. Écalle [4]. We now show that Theorem 1.1 yields $H(n) = n$ (in the Constructive Hilbert problem) for a restricted subclass of polynomials $P, Q$ of odd order $n$. Suppose that the polynomial vector field $Y(x,y) = (P(x,y), Q(x,y))$ has a unique singular point, which we may assume to be in $(0,0)$, i.e. $(P(0,0), Q(0,0)) = (0,0)$. Then $Y$ can be written in polar coordinates $Y = (Y_r, Y_\theta)$, see [9], where

$$Y_r(r,\theta) = \cos\theta\; P(r\cos\theta, r\sin\theta) + \sin\theta\; Q(r\cos\theta, r\sin\theta) \tag{1.7}$$

and

$$Y_\theta(r,\theta) = \frac{1}{r}\big[\cos\theta\; Q(r\cos\theta, r\sin\theta) - \sin\theta\; P(r\cos\theta, r\sin\theta)\big] \tag{1.8}$$

We can then state the following theorem:

THEOREM 1.2 Suppose that $Y = (Y_r, Y_\theta)$ is as above, and that $Y_r$ and $Y_\theta$ satisfy, for some $k \in \{0, 2, \dots, n-1\}$, with $k$ even and $n$ odd:

1) $Y_\theta(r,\theta) = r^k f(\theta)$, with $f(\theta) \ge \delta > 0$

2) $Y_r(r,\theta) = r^n a_n(\theta) + r^{n-1}a_{n-1}(\theta) + \dots + r^k a_k(\theta)$, with

2a) $a_n(\theta) \ge \delta > 0$

2b) $|a_j(\theta)| \le \eta$, $k+1 \le j \le n-1$

(on $a_k(\theta)$ no further restrictions are required)


Then, if $\eta > 0$ is sufficiently small, the number $H(k,n)$ of limit cycles of the vector field $Y$ is bounded by

$$H(k,n) \le n - k$$

PROOF: Assumptions 1) and 2) imply

$$\frac{dr}{d\theta} = \frac{Y_r}{Y_\theta} = r^{n-k}\,\tilde a_n(\theta) + \dots + \tilde a_k(\theta), \quad \text{where } \tilde a_j(\theta) = \frac{a_j(\theta)}{f(\theta)}.$$

The result now follows by Theorem 1.1, since $\tilde a_n(\theta) \ge \delta / \max|f(\theta)| > 0$, and since the bounds $|\tilde a_j(\theta)| \le \eta/\delta$, $k+1 \le j \le n-1$, are sufficiently small. $\Box$

EXAMPLE 1.1 Suppose that the vector field $(P,Q)$ in the plane is given as follows:

$$P(x,y) = -y + x^5 + xy^4 + \sum_{i=1}^{4}\big(b_i x^i + c_i x y^{i-1}\big) + d_1 y$$

$$Q(x,y) = x + x^4 y + y^5 + \sum_{i=1}^{4}\big(b_i y x^{i-1} + c_i y^i\big) + d_2 x$$

with $|b_i|, |c_i|, |d_i| \le \eta$ sufficiently small. Then some easy calculations yield

$$Y_\theta(r,\theta) = \cos^2\theta + \sin^2\theta + d_2\cos^2\theta - d_1\sin^2\theta = f(\theta) \ge 1 - 2\eta > 0$$

and

$$Y_r(r,\theta) = r^5 a_5(\theta) + \sum_{i=1}^{4} r^i a_i(\theta)$$

with

$$a_5(\theta) = \cos^4\theta + \sin^4\theta \ge \tfrac12 > 0 \quad \text{and} \quad |a_i(\theta)| \le c\,\eta, \quad i = 1, \dots, 4.$$

Hence, the vector field $(P,Q)$ satisfies the hypotheses of Theorem 1.2 with $k = 0$, and thus, if $\eta > 0$ is sufficiently small, the vector field $(P,Q)$ has at most 5 limit cycles.
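The polar reduction of Theorem 1.2 can be tried out numerically on the unperturbed part of this example. With all perturbation coefficients $b_i = c_i = d_i = 0$ the orbit equation is $dr/d\theta = r^5(\cos^4\theta + \sin^4\theta)$, and the return map $r(0) \mapsto r(2\pi)$ has no fixed point for $r(0) > 0$. A minimal sketch (ours, not from the paper; RK4 over one revolution):

```python
# Sketch (not from the paper): for Example 1.1 with all perturbation
# coefficients b_i = c_i = d_i = 0 the orbit equation reduces to
#   dr/dtheta = r^5 (cos^4 th + sin^4 th),
# so r(2*pi) > r(0) for every small r(0) > 0: no limit cycle survives,
# consistent with the origin becoming a degenerate singular point.

import math

def drdtheta(th, r):
    c, s = math.cos(th), math.sin(th)
    return r**5 * (c**4 + s**4)     # = Y_r / Y_theta with eta = 0

def return_map(r0, steps=2000):
    """RK4 integration of r over one revolution, theta in [0, 2*pi]."""
    h, th, r = 2 * math.pi / steps, 0.0, r0
    for _ in range(steps):
        k1 = drdtheta(th, r)
        k2 = drdtheta(th + h / 2, r + h * k1 / 2)
        k3 = drdtheta(th + h / 2, r + h * k2 / 2)
        k4 = drdtheta(th + h, r + h * k3)
        r += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        th += h
    return r

for r0 in [0.1, 0.2, 0.3, 0.4]:
    assert return_map(r0) > r0      # strictly expanding: no closed orbit
print("no fixed point of the return map for small r0 > 0")
```

(Initial radii are kept below roughly 0.45 because the exact solution blows up before $\theta = 2\pi$ for larger $r(0)$.)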

The situation in the example can be visualized as follows: the vector field $(P_0, Q_0) = (-y, x)$ is the degenerate situation, where all solutions are closed (it describes the harmonic oscillator). Adding the term of order five, $(x^5 + xy^4, x^4y + y^5)$, all the closed solutions disappear, and the point $(0,0)$ becomes a degenerate singular point (of order 5). Theorem 1.2 says that a perturbation of this situation by (sufficiently small) lower-order terms allows for at most 5 closed orbits.

We remark that in [10] a result similar to Theorem 1.1 was proved for the partial differential equation on a bounded domain $\Omega \subset \mathbb{R}^n$

$$-\Delta u + \sum_{i=1}^{n} a_i(x)u^i = f(x) \quad \text{in } \Omega, \qquad \frac{\partial u}{\partial \nu} = 0 \quad \text{on } \partial\Omega \tag{1.9}$$

The proof of Theorem 1.1 follows the same lines as in [10], but differs substantially in the estimates.

2 The Reduction Method

We now proceed with the proof of Theorem 1.1. We assume throughout the paper that $a_n(t) \ge \delta > 0$; the case $a_n(t) \le -\delta$ is treated analogously. To study the local behaviour of equation (1.1) we make use of a Lyapunov-Schmidt reduction (cf. [1]). We consider the Banach spaces $E = \{u \in C^1[0,1] : u(0) = u(1)\}$ and $F = C[0,1]$, with the usual norms. Note that $E \subset F$ is dense. We split the space $F$ into the direct sum $F = \tilde F \oplus \mathbb{R}$, where $\tilde F = \{u \in F : \int_0^1 u\,dt = 0\}$. We introduce the projections $Q : F \to \mathbb{R}$, $Qu = \int_0^1 u(t)\,dt$, and $P : F \to \tilde F$, $Pu = u - Qu$, and write $u \in E$ as

$$u = s + y = Qu + Pu$$

Then $u$ is a closed solution of equation (1.1) if and only if $u = s + y$ solves the following system of equations

$$\dot y + Pg(s+y) = Pf =: f_1 \qquad (P1)$$

$$Qg(s+y) = Qf \qquad (Q1) \tag{2.10}$$

where $g(u) = \sum_{j=1}^{n} a_j(t)u^j$. We first solve equation (P1) for fixed $s \in \mathbb{R}$:

PROPOSITION 2.1 For every fixed $s \in \mathbb{R}$ there exists a unique solution $y = y(s)$ of equation (P1).

PROOF: Existence: We apply the Leray-Schauder principle, [3], [8], transforming equation (P1) into an equivalent integral equation. Integrating (P1) from $0$ to $t$ we have

$$y(t) - y(0) = -\int_0^t g(s+y)\,d\tau + t\int_0^1 g(s+y)\,d\tau + \int_0^t f_1\,d\tau =: K(y) \tag{2.11}$$

Since $\int_0^1 y(t)\,dt = 0$ we see that

$$-y(0) = \int_0^1 K(y)\,dt$$

Thus, we can write equation (P1) as

$$y = K(y) - \int_0^1 K(y)\,dt \tag{2.12}$$

R Note that a solution y 2 F~ of (2.12) is closed, and that K~ (y ) := K (y ) ? 01 K (y )dt maps F~ into F~ . Furthermore, K~ is a compact mapping. Thus, it remains to show that there exists an R > 0 such that

y = K~ (y) ;  2 (0; 1)

(2.13)

implies ky k1 < R. Equation (2.13) is equivalent to 

y_ +  g(s + y) ?

1

Z

0



g(s + y)dt ? f1 = 0

(2.14)

Multiplying (2.14) by y and integrating yields 1

Z

0

y_ y dt + 

" n Z X

1

i=1 0

ai (t)(s + y)i ydt ?

1

Z

0

#

f1 ydt = 0

(2.15)

The first term being zero and $\lambda \ne 0$, we have

$$\int_0^1 a_n(t)(s+y)^n y\,dt = -\sum_{i=1}^{n-1}\int_0^1 a_i(t)(s+y)^i y\,dt + \int_0^1 f_1 y\,dt \tag{2.16}$$

Calculating the various terms, using $a_n(t) \ge \delta$, $|a_i(t)| \le c$, $i = 1, \dots, n$, $|f_1(t)| \le c$, and Hölder's inequality, we obtain

$$\delta\int_0^1 y^{n+1}\,dt \le \sum_{j=1}^{n}\int_0^1 |b_j(s,t)\,y^j|\,dt \le b\sum_{j=1}^{n}\int_0^1 |y|^j\,dt \le b\sum_{j=1}^{n}\left(\int_0^1 y^{n+1}\,dt\right)^{\frac{j}{n+1}} \tag{2.17}$$

where $b$ is a constant which depends only on $s$ and $f_1$. This inequality clearly implies (recalling that $n+1$ is even) that there exists a constant $c$, depending only on $s$ and $f_1$, such that

$$\|y\|_{n+1}^{n+1} = \int_0^1 y^{n+1}\,dt \le c, \tag{2.18}$$

for all solutions of (P1). We show next that also $\|y\|_\infty \le c$ for all solutions $y$: by equation (P1) we have

$$|\dot y| \le |Pg(s+y)| + |f_1| = \Big|g(s+y) - \int_0^1 g(s+y)\,dt\Big| + |f_1|$$

Integrating on $[0,1]$, using the definition of $g$ and (2.18), we get

$$\int_0^1 |\dot y|\,dt \le 2\sum_{j=1}^{n}\int_0^1 |a_j(t)||s+y|^j\,dt + \int_0^1 |f_1|\,dt \le 2b\sum_{j=1}^{n}\left(\int_0^1 y^{n+1}\,dt\right)^{\frac{j}{n+1}} + c \le c(s,f_1)$$

Finally, since $y$ has mean value zero, we obtain

$$\|y\|_\infty \le \|\dot y\|_1 \le c(s,f_1) \tag{2.19}$$

Uniqueness: Let $s \in \mathbb{R}$ be fixed. Assume that $y$ and $z$ are two solutions in $\tilde E$ of equation (P1):

$$\dot y + g(s+y) = f_1 + \int_0^1 g(s+y)\,dt$$

$$\dot z + g(s+z) = f_1 + \int_0^1 g(s+z)\,dt$$

It is not restrictive to assume $z(0) > y(0)$. Since $Qz = Qy = 0$, there exists $t_1 \in (0,1)$ such that $z(t_1) = y(t_1)$ and $\dot z(t_1) \le \dot y(t_1)$, and hence by the equations:

$$0 \le \dot y(t_1) - \dot z(t_1) = \int_0^1 \big(g(s+y) - g(s+z)\big)\,dt$$

Moreover, there exists $t_2 \in (t_1, 1)$ such that $z(t_2) = y(t_2)$ and $\dot z(t_2) \ge \dot y(t_2)$, and then

$$0 \ge \dot y(t_2) - \dot z(t_2) = \int_0^1 \big(g(s+y) - g(s+z)\big)\,dt$$

Hence we get $\int_0^1 g(s+y)\,dt = \int_0^1 g(s+z)\,dt$. Thus, $y$ and $z$ satisfy the same Cauchy problem; in particular, from the point $t_1$ emanates a unique solution, contradicting the assumption. $\Box$
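Proposition 2.1 also suggests a simple numerical scheme: fixed-point (Picard) iteration on the compact operator $\tilde K$ of (2.12). The following sketch is ours, not from the paper; the discretization, the coefficient choices and the convergence of the iteration (which holds here because the lower-order coefficients are small) are all illustrative assumptions:

```python
# Sketch (illustrative): solve (P1) for fixed s by iterating y <- K~(y),
# where K(y)(t) = -\int_0^t g(s+y) + t \int_0^1 g(s+y) + \int_0^t f_1
# and K~(y) = K(y) - \int_0^1 K(y) dt.  Grid + left Riemann sums.

import math

N  = 2000
h  = 1.0 / N
ts = [i * h for i in range(N)]

s  = 0.0
a  = {3: 1.0, 2: 0.05, 1: 0.05}                 # a_3 dominant, others small

def g(u):
    return sum(c * u**j for j, c in a.items())

f1 = [math.cos(2 * math.pi * t) for t in ts]    # already mean zero

def mean(w):
    return sum(w) / N

def cumint(w):                                   # t -> \int_0^t w
    out, acc = [], 0.0
    for wi in w:
        out.append(acc)
        acc += h * wi
    return out

y = [0.0] * N
for _ in range(100):                             # Picard iteration
    gu  = [g(s + yi) for yi in y]
    Ig  = cumint(gu)
    If  = cumint(f1)
    tot = mean(gu)                               # \int_0^1 g(s+y) dt
    K   = [-Ig[i] + ts[i] * tot + If[i] for i in range(N)]
    m   = mean(K)
    y   = [Ki - m for Ki in K]

# residual of (P1):  y' + g(s+y) - \int g(s+y) - f_1  should be small
gu  = [g(s + yi) for yi in y]
tot = mean(gu)
res = max(abs((y[i + 1] - y[i]) / h + gu[i] - tot - f1[i])
          for i in range(N - 1))
print("max (P1) residual:", res)
assert res < 1e-2
assert abs(mean(y)) < 1e-9                       # y stays in F~
```

For large or sign-changing leading coefficients the iteration need not converge; the paper's existence proof goes through the Leray-Schauder principle precisely to avoid any contraction hypothesis.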

Thus, we have proved that for each $s \in \mathbb{R}$ there exists a unique $y = y(s)$, solution of equation (P1). By the implicit function theorem one can prove the following:

PROPOSITION 2.2 The function $s \in \mathbb{R} \mapsto y(s) \in \tilde E$ is real analytic.

PROOF: (see G. Tarantello [11])

We define the map $G : \tilde E \times \mathbb{R} \to \tilde F$,

$$G(y,s) = \dot y + g(s+y) - \int_0^1 g(s+y)\,dt - f_1$$

We claim that $G$ is analytic in $\tilde E \times \mathbb{R}$.

For $s_0 \in \mathbb{R}$ let $y_0 = y_0(s_0)$ denote the unique solution of equation (P1), i.e. $G(y_0, s_0) = 0$. Consider the derivative of $G$ with respect to $y$ in direction $w \in \tilde E$:

$$\frac{\partial G}{\partial y}(y_0, s_0)[w] = \dot w + g'(s_0+y_0)w - \int_0^1 g'(s_0+y_0)w\,dt$$

$\frac{\partial G}{\partial y}(y_0, s_0)$ is injective; indeed, suppose $w \in \tilde E$ is such that $\frac{\partial G}{\partial y}(y_0, s_0)[w] = 0$; then one shows, with an argument similar to the uniqueness part of Proposition 2.1, that $w = 0$. By a standard application of the Fredholm alternative we conclude that $\frac{\partial G}{\partial y}(y_0, s_0)$ defines an invertible operator from $\tilde E$ to $\tilde F$. Then, by the implicit function theorem, there exist $\epsilon > 0$ and an analytic map $\psi : (s_0-\epsilon, s_0+\epsilon) \to \tilde E$ such that $\psi(s_0) = y_0$ and $G(\psi(s), s) = 0$ for all $s \in (s_0-\epsilon, s_0+\epsilon)$. By uniqueness it follows that $y(s) = \psi(s)$, and hence $y(s)$ is analytic. $\Box$

For $s \in \mathbb{R}$, let now $y(s) = y(s, f_1)$ denote the unique solution of equation (P1). We insert this solution into equation (Q1) and obtain an equation in one dimension:

$$\int_0^1 g(s + y(s))\,dt = \int_0^1 f(t)\,dt \tag{2.20}$$
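The reduction of this section rests on the elementary splitting $u = Qu + Pu$ into mean value plus mean-zero part. As a quick numerical illustration (all names here are ours, not the paper's):

```python
# Sketch of the splitting F = F~ (+) R used in the reduction:
# Qu is the mean value of u on [0,1], Pu = u - Qu has mean zero.

import math

N  = 1000
ts = [(i + 0.5) / N for i in range(N)]               # midpoint grid on [0,1]

def mean(vals):
    return sum(vals) / len(vals)

u = [2.0 + math.sin(2 * math.pi * t) for t in ts]    # a sample function
s = mean(u)                                           # s = Qu
y = [ui - s for ui in u]                              # y = Pu, mean zero

assert abs(s - 2.0) < 1e-6           # Q picks out the mean value
assert abs(mean(y)) < 1e-9           # Pu lies in F~ (zero mean)
assert all(abs((s + yi) - ui) < 1e-12 for yi, ui in zip(y, u))   # u = s + y
print("u = Qu + Pu verified on the sample grid")
```

The one-dimensional unknown $s$ is exactly the quantity that the reduced equation (2.20) determines.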

3 The Reduced Equation

To study equation (2.20), we introduce the following function $\Gamma : \mathbb{R} \to \mathbb{R}$:

$$\Gamma(s) = \int_0^1 g(s + y(s))\,dt - \int_0^1 f(t)\,dt \tag{3.21}$$

In order to show that equation (1.1) has at most $n$ closed solutions, it is sufficient, in view of Proposition 2.1, to show that (3.21) has at most $n$ zeros. First, we study the local behaviour of $\Gamma$. Suppose that $\Gamma(s_0) = 0$. If $\Gamma'(s_0) \ne 0$, then $\Gamma$ is locally invertible and $\Gamma(s) = 0$ has a unique solution in a neighborhood of $s_0$. Suppose now that $\Gamma'(s_0) = 0$; then $u_0 = s_0 + y(s_0)$ is a singular point of the mapping $u \mapsto \dot u + g(u)$, i.e.

$$\dot v + g'(u_0)v = 0, \quad \text{for some } v \in \tilde E \setminus \{0\} \tag{3.22}$$

Indeed, differentiating equation (P1) with respect to $s$, we have

$$\dot y_s(s) + Pg'(s+y(s))(1+y_s(s)) = 0, \quad \forall s \in \mathbb{R} \tag{3.23}$$

Since

$$\Gamma'(s) = \int_0^1 g'(s+y(s))(1+y_s(s))\,dt \tag{3.24}$$

we get by adding (3.23) and (3.24)

$$\Gamma'(s) = \dot y_s + g'(s+y(s))(1+y_s(s)) = \dot v + g'(u)v \tag{3.25}$$

Equation (3.25) shows that if $\Gamma'(s_0) = 0$, then $v = 1 + y_s(s_0)$ is the unique solution of (3.22) with $\int_0^1 v\,dt = 1$, and vice versa. Indeed, a general solution of (3.22) has the form $c\,e^{-\int_0^t g'(u_0)\,d\tau}$. Thus,

$$v = 1 + y_s = \frac{e^{-\int_0^t g'(u_0)\,d\tau}}{\int_0^1 e^{-\int_0^t g'(u_0)\,d\tau}\,dt}$$

Also, for the corresponding adjoint equation we have

$$-\dot v^* + g'(u_0)v^* = 0, \quad \text{with } v^* = d\,e^{\int_0^t g'(u_0)\,d\tau} \tag{3.26}$$

Choosing $d = \int_0^1 e^{-\int_0^t g'(u_0)\,d\tau}\,dt$ we have $v^* = 1/v$.
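The explicit formulas for $v$ and $v^* = 1/v$ can be checked numerically. The sketch below is ours, not from the paper: $g'(u_0(t))$ is replaced by an arbitrary continuous sample function (at an actual singular point one would additionally have $\int_0^1 g'(u_0)\,dt = 0$, making $v$ periodic; that property is not tested here), and the verification only concerns the ODE and the normalization $\int_0^1 v\,dt = 1$:

```python
# Sketch: with G(t) = \int_0^t g'(u_0) dtau, the function
#   v(t) = exp(-G(t)) / \int_0^1 exp(-G(t)) dt
# satisfies  v' + g'(u_0) v = 0  and  \int_0^1 v dt = 1,
# while v* = 1/v satisfies the adjoint equation -v*' + g'(u_0) v* = 0.

import math

def gprime(t):                       # stand-in for g'(u_0(t))
    return 1.0 + 0.5 * math.sin(2 * math.pi * t)

N  = 20000
h  = 1.0 / N
ts = [i * h for i in range(N + 1)]

# cumulative trapezoid rule for G(t)
G = [0.0]
for i in range(N):
    G.append(G[-1] + h * (gprime(ts[i]) + gprime(ts[i + 1])) / 2)

Z = sum(h * (math.exp(-G[i]) + math.exp(-G[i + 1])) / 2 for i in range(N))
v = [math.exp(-Gi) / Z for Gi in G]

# mean value of v is 1
mean_v = sum(h * (v[i] + v[i + 1]) / 2 for i in range(N))
assert abs(mean_v - 1.0) < 1e-6

# v' + g'(u_0) v = 0 (centered differences at sample points)
for i in range(1, N, 500):
    dv = (v[i + 1] - v[i - 1]) / (2 * h)
    assert abs(dv + gprime(ts[i]) * v[i]) < 1e-4

# v* = 1/v solves the adjoint equation -v*' + g'(u_0) v* = 0
vstar = [1.0 / vi for vi in v]
for i in range(1, N, 500):
    dvs = (vstar[i + 1] - vstar[i - 1]) / (2 * h)
    assert abs(-dvs + gprime(ts[i]) * vstar[i]) < 1e-4
print("v and v* verified numerically")
```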

We want to show that the stationary points of $\Gamma$ are degenerate at most to degree $n-1$; that is, we prove

PROPOSITION 3.1 Suppose that the assumptions of Theorem 1.1 hold. If $\Gamma'(s_0) = 0$, then $\Gamma^{(n)}(s_0) \ge C > 0$.

PROOF: Differentiating (3.25) with respect to $s$, we get by induction

$$\begin{aligned}
\Gamma^{(k)}(s) = {} & \dot y^{(k)} + \sum_{i=1}^{n} a_i\, i\, u^{i-1} y^{(k)} \\
& + \sum_{i=2}^{n} a_i u^{i-2} \sum_{q \in Q_k(k-1)} p(q)\, v^{q_1}(y^{(2)})^{q_2} \cdots (y^{(k-1)})^{q_{k-1}} \\
& + \dots \\
& + \sum_{i=k-1}^{n} a_i u^{i-(k-1)} \sum_{q \in Q_k(2)} p(q)\, v^{q_1}(y^{(2)})^{q_2} \\
& + \sum_{i=k}^{n} a_i u^{i-k}\, i(i-1)\cdots(i-(k-1))\, v^k
\end{aligned} \tag{3.27}$$

where $v = 1 + y_s$, $Q_k(m) = \{q = (q_1,\dots,q_m) \in \{0,1,\dots,k\}^m : \sum_{i=1}^{m} i\,q_i = k\}$, and $p(q) \ge 0$ are some integer coefficients.

Multiplying (3.27) by $v^* = 1/v$ and integrating we get, setting $a = \int_0^1 v^*\,dt$ and noting that $\int_0^1 \dot y^{(k)}v^*\,dt + \int_0^1 \sum_{i=1}^{n} a_i\, i\, u^{i-1} y^{(k)} v^*\,dt = \int_0^1 y^{(k)}\big(-\dot v^* + g'(u_0)v^*\big)\,dt = 0$:

$$\begin{aligned}
\Gamma^{(k)}(s)\,a = {} & \sum_{i=2}^{n}\int_0^1 a_i u^{i-2} \sum_{q \in Q_k(k-1)} p(q)\, v^{q_1-1}(y^{(2)})^{q_2} \cdots (y^{(k-1)})^{q_{k-1}}\,dt \\
& + \dots \\
& + \sum_{i=k-1}^{n}\int_0^1 a_i u^{i-(k-1)} \sum_{q \in Q_k(2)} p(q)\, v^{q_1-1}(y^{(2)})^{q_2}\,dt \\
& + \sum_{i=k}^{n}\int_0^1 a_i u^{i-k}\, i(i-1)\cdots(i-(k-1))\, v^{k-1}\,dt
\end{aligned} \tag{3.28}$$

In particular, for $k = n$ we have

$$\begin{aligned}
\Gamma^{(n)}(s)\,a = {} & \sum_{i=2}^{n}\int_0^1 a_i u^{i-2} \sum_{q \in Q_n(n-1)} p(q)\, v^{q_1-1}(y^{(2)})^{q_2} \cdots (y^{(n-1)})^{q_{n-1}}\,dt \\
& + \dots \\
& + \sum_{i=n-1}^{n}\int_0^1 a_i u^{i-(n-1)} \sum_{q \in Q_n(2)} p(q)\, v^{q_1-1}(y^{(2)})^{q_2}\,dt \\
& + \int_0^1 a_n\, n!\, v^{n-1}\,dt
\end{aligned} \tag{3.29}$$

Note that the last term in equation (3.29) can be estimated from below by

$$\int_0^1 a_n\, n!\, v^{n-1}\,dt \ge \delta\, n!\int_0^1 v^{n-1}\,dt \ge \delta\, n!\left(\int_0^1 v\,dt\right)^{n-1} = \delta\, n! \tag{3.30}$$

We now estimate the remaining terms of (3.29) from above: a generic term of expression (3.29) (except the last) has the form

$$p(q)\int_0^1 a_i\, u^{i-j}\, v^{q_1-1}(y^{(2)})^{q_2} \cdots (y^{(n-1)})^{q_{n-1}}\,dt \tag{3.31}$$

where either $2 \le j \le i \le n-1$ or $2 \le j < i = n$, and $n-1 \ge q_1 \ge 0$, $q_2 + \dots + q_{n-1} \ge 1$.

In Section 4 below the following estimates are proved. Under the assumptions of Theorem 1.1, and supposing that $c_n \le 1/2$, where $c_n = \frac{n(n-1)}{2}\frac{\eta}{\delta}(1+\beta)$ with $\beta = \max|a_n(t)|$, one has:

1) $1/2 \le v(t) \le 2$, $1/2 \le v^*(t) \le 2$, and $a = \int_0^1 v^*\,dt \le 2$ (cf. Lemma 4.3)

2) $\int_0^1 u^{n-1}\,dt \le (n-1)\,\frac{\eta}{\delta}$ (cf. Lemma 4.4)

3) $\|y^{(k)}\|_\infty \le c\,\eta$, $k = 1, 2, \dots, n$ (cf. Lemma 4.5)

With these estimates we now get, considering separately the three cases:

$2 \le j < i \le n-1$:

$$p(q)\int_0^1 a_i u^{i-j} v^{q_1-1}(y^{(2)})^{q_2} \cdots (y^{(n-1)})^{q_{n-1}}\,dt \le p(q)\,\eta\,\|v\|_\infty^{q_1-1}\|y^{(2)}\|_\infty^{q_2} \cdots \|y^{(n-1)}\|_\infty^{q_{n-1}}\int_0^1 u^{i-j}\,dt \le c\,\eta\left(\int_0^1 u^{n-1}\,dt\right)^{\frac{i-j}{n-1}} \le c\,\eta \tag{3.32}$$

$2 \le j = i \le n-1$:

$$p(q)\int_0^1 a_i v^{q_1-1}(y^{(2)})^{q_2} \cdots (y^{(n-1)})^{q_{n-1}}\,dt \le p(q)\,\eta\,\|v\|_\infty^{q_1-1}\|y^{(2)}\|_\infty^{q_2} \cdots \|y^{(n-1)}\|_\infty^{q_{n-1}} \le c\,\eta^2 \le c\,\eta \tag{3.33}$$

$2 \le j < i = n$:

$$p(q)\int_0^1 a_n u^{n-j} v^{q_1-1}(y^{(2)})^{q_2} \cdots (y^{(n-1)})^{q_{n-1}}\,dt \le p(q)\,c\,\eta\int_0^1 u^{n-j}\,dt \le c\,\eta \tag{3.34}$$

Thus, joining these estimates, we find for $\Gamma^{(n)}(s)$:

$$\Gamma^{(n)}(s) \ge \frac{1}{a}\big(\delta\,n! - c(n)\,\eta\big) \ge \frac12\big(\delta\,n! - c(n)\,\eta\big) \tag{3.35}$$

Clearly, the last term is positive for $\eta > 0$ sufficiently small, and thus the proposition is proved. $\Box$

4 Estimates

In this section we prove the estimates stated above. We assume throughout this section that the hypotheses of Theorem 1.1 are satisfied.

LEMMA 4.1 Assume that $u = s + y(s)$ is a singular point of $\dot u + g(u)$. If $0 < \eta < \frac{2\delta}{n-1}$, then

$$\int_0^1 u^{n-1} v\,dt \le \frac{\eta}{\delta}\,\frac{n-1}{2} < 1 \tag{4.36}$$

PROOF: Let $v = 1 + y_s$. Since $u = s + y(s)$ is a singular point, we have

$$\dot v + g'(u)v = 0$$

Integrating we get

$$\int_0^1 g'(u)v\,dt = 0$$

We isolate the first term:

$$n\int_0^1 a_n u^{n-1}v\,dt = -\int_0^1\big[(n-1)a_{n-1}u^{n-2}v + (n-2)a_{n-2}u^{n-3}v + \dots + 2a_2 uv + a_1 v\big]\,dt \tag{4.37}$$

Using $a_n(t) \ge \delta > 0$, $|a_i(t)| \le \eta$ for $i \le n-1$, and Hölder's inequality (with respect to the measure $v\,dt$, $\int_0^1 v\,dt = 1$) we obtain

$$\begin{aligned}
n\delta\int_0^1 u^{n-1}v\,dt \le {} & \eta\Big[(n-1)\int_0^1|u^{n-2}|v\,dt + (n-2)\int_0^1|u^{n-3}|v\,dt + \dots + 2\int_0^1|u|v\,dt + \int_0^1 v\,dt\Big] \\
\le {} & \eta\Big[(n-1)\Big(\int_0^1 u^{n-1}v\,dt\Big)^{\frac{n-2}{n-1}} + (n-2)\Big(\int_0^1 u^{n-1}v\,dt\Big)^{\frac{n-3}{n-1}} + \dots + 2\Big(\int_0^1 u^{n-1}v\,dt\Big)^{\frac{1}{n-1}} + 1\Big]
\end{aligned} \tag{4.38}$$

Setting $b = \left(\int_0^1 u^{n-1}v\,dt\right)^{\frac{1}{n-1}}$ we have

$$n\,b^{n-1} \le \frac{\eta}{\delta}\big[(n-1)b^{n-2} + (n-2)b^{n-3} + \dots + 2b + 1\big] \tag{4.39}$$

Now, if $b \ge 1$, then

$$n\,b^{n-1} \le \frac{\eta}{\delta}\big[(n-1) + (n-2) + \dots + 3 + 2 + 1\big]\,b^{n-2}$$

that is

$$n\,b \le \frac{\eta}{\delta}\,\frac{n(n-1)}{2}, \quad \text{i.e.} \quad b \le \frac{\eta}{\delta}\,\frac{n-1}{2} < 1,$$

which contradicts the assumption $b \ge 1$. Thus necessarily $b < 1$; going back to inequality (4.39) we now get

$$n\,b^{n-1} \le \frac{\eta}{\delta}\big[(n-1) + (n-2) + \dots + 1\big] = \frac{\eta}{\delta}\,\frac{n(n-1)}{2}$$

and hence (4.36). $\Box$

REMARK 4.1 With the same arguments one shows that the solution $v^*$ of the adjoint equation (3.26) satisfies

$$\int_0^1 u^{n-1}v^*\,dt \le \frac{\eta}{\delta}\,\frac{n-1}{2}\int_0^1 v^*\,dt < \int_0^1 v^*\,dt$$

LEMMA 4.2 Let $\beta = \max_{[0,1]}|a_n(t)|$, and assume (as we may, without loss of generality) that $\delta \le 1$. Then

$$\|\dot v\|_1 \le \frac{n(n-1)}{2}\,\frac{\eta}{\delta}\,(1+\beta) \tag{4.40}$$

$$\|y_s\|_\infty \le \frac{n(n-1)}{2}\,\frac{\eta}{\delta}\,(1+\beta) \tag{4.41}$$

PROOF: Inequality (4.40): since $\dot v + g'(u)v = 0$ one has

$$\int_0^1|\dot v|\,dt \le \int_0^1\big[n\,a_n u^{n-1}v + (n-1)|a_{n-1}u^{n-2}v| + \dots + 2|a_2 uv| + |a_1 v|\big]\,dt$$

By Hölder's inequality and by Lemma 4.1 (which gives $b < 1$),

$$\int_0^1|\dot v|\,dt \le n\beta\int_0^1 u^{n-1}v\,dt + \eta\big[(n-1)b^{n-2} + \dots + 2b + 1\big] \le n\beta\,\frac{\eta}{\delta}\,\frac{n-1}{2} + \eta\,\frac{n(n-1)}{2} \le \frac{n(n-1)}{2}\,\frac{\eta}{\delta}\,(\beta+1)$$

Inequality (4.41): since $\int_0^1 y_s\,dt = 0$ there exists $t_0$ such that $y_s(t_0) = 0$; then $y_s(t) = \int_{t_0}^{t}\dot v(\tau)\,d\tau$ and hence

$$\|y_s\|_\infty \le \|\dot v\|_1 \le \frac{n(n-1)}{2}\,\frac{\eta}{\delta}\,(1+\beta) \qquad \Box$$

LEMMA 4.3 Let $c_n = \frac{n(n-1)}{2}\frac{\eta}{\delta}(1+\beta)$, and suppose that $c_n \le 1/2$. Then

$$\frac12 \le 1 - c_n \le v(t) \le 1 + c_n \le 2 \tag{4.42}$$

$$\frac12 \le \frac{1}{1+c_n} \le v^*(t) = \frac{1}{v(t)} \le \frac{1}{1-c_n} \le 2 \tag{4.43}$$

PROOF: Since $v(t) = 1 + y_s(t)$ and, by Lemma 4.2, $\|y_s\|_\infty \le c_n$, the estimates follow trivially. $\Box$

LEMMA 4.4 Suppose that $c_n < 1/2$. Then

$$\int_0^1 u^{n-1}\,dt \le (n-1)\,\frac{\eta}{\delta}$$

PROOF: Since $\|y_s\|_\infty \le c_n$ we have by Lemma 4.1

$$(1-c_n)\int_0^1 u^{n-1}\,dt \le \int_0^1 u^{n-1}(1+y_s)\,dt \le \frac{\eta}{\delta}\,\frac{n-1}{2}$$

Therefore

$$\int_0^1 u^{n-1}\,dt \le \frac{\eta}{\delta}\,\frac{n-1}{2}\,\frac{1}{1-c_n} \le (n-1)\,\frac{\eta}{\delta} \qquad \Box$$

LEMMA 4.5 Assume that $u = s + y(s)$ is a singular point of $\dot u + g(u)$, i.e. $\dot v + g'(u)v = 0$ with $v = 1 + y_s$. If $\eta > 0$ is sufficiently small, then we have for $k = 1, \dots, n$

$$\|y^{(k)}\|_\infty \le c\,\eta \tag{4.44}$$

PROOF: We proceed by induction.

$k = 1$: the estimate (4.44) is true by Lemma 4.2, since $y^{(1)} = y_s$.

$k \to k+1$: differentiating equation (P1) of (2.10) $k+1$ times with respect to $s$ one obtains

$$\begin{aligned}
\dot y^{(k+1)} = -\Big[\, & P\sum_{i=1}^{n} a_i\, i\, u^{i-1} y^{(k+1)} \\
& + P\sum_{i=2}^{n} a_i u^{i-2} \sum_{q \in Q_{k+1}(k)} p(q)\, v^{q_1} \cdots (y^{(k)})^{q_k} \\
& + \dots \\
& + P\sum_{i=k}^{n} a_i u^{i-k} \sum_{q \in Q_{k+1}(2)} p(q)\, v^{q_1}(y^{(2)})^{q_2} \\
& + P\sum_{i=k+1}^{n} a_i\, i(i-1)\cdots(i-k)\, u^{i-(k+1)} v^{k+1} \Big]
\end{aligned} \tag{4.45}$$

Using that $|Pz| = |z - \int_0^1 z\,dt| \le |z| + \int_0^1 |z|\,dt$, we obtain by integrating (4.45)

$$\begin{aligned}
\int_0^1|\dot y^{(k+1)}|\,dt \le {} & 2n\int_0^1 |a_n| u^{n-1}|y^{(k+1)}|\,dt + 2\sum_{i=1}^{n-1} i\int_0^1 |a_i||u^{i-1}||y^{(k+1)}|\,dt \\
& + 2\sum_{i=2}^{n}\int_0^1 |a_i||u^{i-2}| \sum_{q \in Q_{k+1}(k)} p(q)\, v^{q_1} \cdots |y^{(k)}|^{q_k}\,dt \\
& + \dots \\
& + 2\sum_{i=k}^{n}\int_0^1 i(i-1)\cdots(i-k+1)\,|a_i||u^{i-k}| \sum_{q \in Q_{k+1}(2)} p(q)\, v^{q_1}|y^{(2)}|^{q_2}\,dt \\
& + 2\sum_{i=k+1}^{n}\int_0^1 |a_i||u^{i-(k+1)}|\,v^{k+1}\,dt
\end{aligned} \tag{4.46}$$

We now estimate separately the terms of inequality (4.46). We begin with the first line.

$i = n$: by Lemma 4.4,

$$\int_0^1 a_n u^{n-1}|y^{(k+1)}|\,dt \le \beta\,\|y^{(k+1)}\|_\infty\int_0^1 u^{n-1}\,dt \le c\,\eta\,\|y^{(k+1)}\|_\infty \tag{4.47}$$

$i = 1, \dots, n-1$:

$$\int_0^1 |a_i||u^{i-1}||y^{(k+1)}|\,dt \le \eta\,\|y^{(k+1)}\|_\infty\int_0^1 |u^{i-1}|\,dt \le \eta\,\|y^{(k+1)}\|_\infty\left(\int_0^1 u^{n-1}\,dt\right)^{\frac{i-1}{n-1}} \le c\,\eta\,\|y^{(k+1)}\|_\infty \tag{4.48}$$

We continue with the subsequent terms; these terms are of the general form

$$p(q)\int_0^1 |a_i|\,|u^{i-j}|\,v^{q_1}|y^{(2)}|^{q_2} \cdots |y^{(k)}|^{q_k}\,dt \tag{4.49}$$

with $2 \le j = i \le n-1$, resp. $2 \le j < i \le n$, and $0 \le q_1 \le k+1$, $1 \le q_2 + \dots + q_k$. We treat separately the two cases.

$2 \le j = i \le n-1$: By the induction hypothesis we have $\|y^{(m)}\|_\infty \le c\,\eta$, $m = 1, \dots, k$. Then we get from (4.49)

$$p(q)\int_0^1 |a_i|\,v^{q_1} \cdots |y^{(k)}|^{q_k}\,dt \le c\,\eta^2 \tag{4.50}$$

$2 \le j < i \le n$: By the induction hypothesis we get from (4.49) and by Lemma 4.4

$$p(q)\int_0^1 |a_i||u^{i-j}|\,v^{q_1} \cdots |y^{(k)}|^{q_k}\,dt \le c\,\eta\int_0^1 |u^{i-j}|\,dt \le c\,\eta \tag{4.51}$$

Therefore we have in all cases

$$\int_0^1 |\dot y^{(k+1)}|\,dt \le c_1\,\eta\,\|y^{(k+1)}\|_\infty + c_2\,\eta \tag{4.52}$$

Since $\int_0^1 y^{(k+1)}\,dt = 0$ we can estimate

$$\|y^{(k+1)}\|_\infty \le \|\dot y^{(k+1)}\|_1 \le c_1\,\eta\,\|y^{(k+1)}\|_\infty + c_2\,\eta \tag{4.53}$$

Therefore, assuming that $c_1\eta < 1/2$,

$$\|y^{(k+1)}\|_\infty \le \frac{c_2\,\eta}{1 - c_1\eta} \le 2c_2\,\eta \qquad \Box \tag{4.54}$$

5 Bounds on the number of zeros

In Section 3 it was shown that

$$\text{if } \Gamma'(s_0) = 0, \text{ then } \Gamma^{(n)}(s_0) > 0.$$

It is easy to see that this implies that $\Gamma(s) = 0$ has locally at most $n$ solutions. Indeed, if $\Gamma(s_0) = \Gamma'(s_0) = 0$, then for $s$ near $s_0$ we can write by Taylor's theorem

$$\Gamma(s) = \sum_{i=2}^{n-1}\Gamma^{(i)}(s_0)\,\frac{(s-s_0)^i}{i!} + \Gamma^{(n)}(s_0+\xi)\,\frac{(s-s_0)^n}{n!}$$

for some intermediate point $\xi = \xi(s)$. Since $\Gamma^{(n)}(s_0+\xi) > 0$ for $s$ near $s_0$, we conclude that there exists a neighborhood $U$ of $s_0$ such that $\Gamma(s) = 0$ has at most $n$ solutions in $U$.

To obtain a global result, we employ the following proposition, which is proved in [10]:

PROPOSITION 5.1 Suppose that $k : \mathbb{R} \to \mathbb{R}$ is a smooth function satisfying

(i) $k'(x) \ge -\epsilon$, for all $x \in \mathbb{R}$;

(ii) for any $y \in \mathbb{R}$ with $k'(y) = 0$ one has $|k^{(i)}(y)| \le c$, $i = 2, \dots, n-1$;

(iii) let $I_a = \{x \in \mathbb{R} : k'(x) < a\}$, and suppose that $|k''(x)| \le b$ and $k^{(n)}(x) \ge d > 0$, for all $x \in I_a$.

Then, if $\epsilon > 0$ is sufficiently small (for fixed positive constants $a, b, c, d$), the equation $k(x) = \tau$ has for any $\tau \in \mathbb{R}$ at most $n$ solutions.

Thus, to conclude the proof of Theorem 1.1 it remains to show that the function $\Gamma$ satisfies the hypotheses of Proposition 5.1.

LEMMA 5.1 Suppose that the assumptions of Theorem 1.1 are satisfied. Then the function $\Gamma(s) = \int_0^1 g(s+y(s))\,dt - \int_0^1 f\,dt$ satisfies the hypotheses of Proposition 5.1, for $\eta > 0$ sufficiently small.

PROOF:

(i) We recall that

$$\Gamma'(s) = \int_0^1 g'(s+y(s))(1+y_s(s))\,dt = \dot y_s + g'(s+y(s))(1+y_s(s))$$

Multiplying this equation by $1+y_s(s)$ and integrating we obtain

$$\Gamma'(s) = \Gamma'(s)\int_0^1(1+y_s(s))\,dt = \int_0^1 g'(s+y(s))(1+y_s(s))^2\,dt$$

We can estimate

$$\Gamma'(s) = \sum_{i=1}^{n}\int_0^1 i\,a_i\,u^{i-1}(1+y_s)^2\,dt \ge \int_0^1\Big[n\delta\,u^{n-1} - \eta\sum_{i=1}^{n-1} i\,|u|^{i-1}\Big](1+y_s)^2\,dt \ge \min_{r}\Big\{n\delta\,r^{n-1} - \eta\sum_{i=1}^{n-1} i\,|r|^{i-1}\Big\}\int_0^1(1+y_s)^2\,dt \ge -2n(n-1)\,\eta,$$

since for $|r| \ge 1$

$$n\delta\,r^{n-1} - \eta\sum_{i=1}^{n-1} i\,|r|^{i-1} = r^{n-1}\Big(n\delta - \eta\Big(\frac{1}{|r|^{n-1}} + \dots + \frac{n-1}{|r|}\Big)\Big) \ge r^{n-1}\Big(n\delta - \eta\,\frac{n(n-1)}{2}\Big) \ge 0 \quad \text{for } \eta \le \frac{2\delta}{n-1},$$

and for $0 \le |r| < 1$

$$n\delta\,r^{n-1} - \eta\sum_{i=1}^{n-1} i\,|r|^{i-1} \ge -\eta\,\frac{n(n-1)}{2}$$

(recall that $n-1$ is even, so $r^{n-1} \ge 0$, and that $\int_0^1(1+y_s)^2\,dt \le 4$). Since $\eta$ can be chosen as small as we like, (i) holds.

(ii) The expression for $\Gamma^{(k)}(s)\,a$ is given by (3.28). The terms are estimated as in (3.32)-(3.34); using $a \ge \frac12$, i.e. $\frac1a \le 2$, we obtain

$$|\Gamma^{(k)}(s)| \le c, \quad 2 \le k \le n-1,$$

for any $s$ with $\Gamma'(s) = 0$.

(iii) Set $I_a = \{s \in \mathbb{R} : \Gamma'(s) < a\}$ with $a > 0$ sufficiently small. One checks that this leads to the following modifications in the estimates of Section 4:

in Lemma 4.1: $\int_0^1 u^{n-1}v\,dt \le \frac{1}{\delta}\left(\frac{\eta(n-1)}{2} + \frac{a}{n}\right)$

in Lemma 4.2: $\|y_s\|_\infty \le \frac{n(n-1)}{2}\frac{\eta}{\delta}(1+\beta) + a \le c(\eta + a)$

in Lemma 4.3: no changes, if one sets $c_n = \frac{n(n-1)}{2}\frac{\eta}{\delta}(1+\beta) + a$

in Lemma 4.4: $\int_0^1 u^{n-1}\,dt \le \frac{2}{\delta}\left(\frac{\eta(n-1)}{2} + \frac{a}{n}\right) \le c(\eta + a)$

in Lemma 4.5: $\|y^{(k)}\|_\infty \le c(\eta + a)$; this is obtained by the following changes in the proof:

in line (4.47): $\int_0^1 a_n u^{n-1}|y^{(k+1)}|\,dt \le c(\eta + a)\,\|y^{(k+1)}\|_\infty$

in line (4.50): $p(q)\int_0^1 |a_i|\,v^{q_1} \cdots |y^{(k)}|^{q_k}\,dt \le c\,\eta\,(\eta + a)$

in line (4.51): $\dots \le c(\eta + a)$

which yields in lines (4.52) and (4.53): $\dots \le c_1(\eta + a)\,\|y^{(k+1)}\|_\infty + c_2(\eta + a)$

Assuming $c_1(\eta + a) < \frac12$, one finds in line (4.54): $\|y^{(k+1)}\|_\infty \le c(\eta + a)$.

One now verifies easily that $|\Gamma''(s)| \le b$ and $\Gamma^{(n)}(s) \ge d > 0$ for $s \in I_a$, if $\eta$ and $a$ are sufficiently small. Thus, the assumptions of Proposition 5.1 are satisfied, and hence the proof of Theorem 1.1 is complete. $\Box$

References

[1] Ambrosetti, A., Prodi, G., A Primer of Nonlinear Analysis, Cambridge University Press, 1993

[2] Cafagna, V., Donati, F., Un résultat global de multiplicité pour un problème différentiel non linéaire du premier ordre, C. R. Acad. Sci. Paris, Sér. I Math., 300, 1985, p. 523-526

[3] Deimling, K., Nonlinear Functional Analysis, Springer, 1985

[4] Écalle, J., Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac, Hermann, Paris, 1992

[5] Hilbert, D., Mathematical Problems, Bulletin AMS, 8, 1902

[6] Ilyashenko, Yu., Finiteness Theorems for Limit Cycles, Amer. Math. Soc., Providence, RI, 1991

[7] Ilyashenko, Yu., Yakovenko, S., Concerning the Hilbert 16th problem, Amer. Math. Soc., 1995, p. 1-19

[8] Krasnosel'skii, M.A., Topological Methods in the Theory of Nonlinear Integral Equations, Pergamon Press, 1963, p. 123-140

[9] Neto, A.L., On the number of solutions of the equation $\frac{dx}{dt} = \sum_{j=0}^{n} a_j(t)x^j$, $0 \le t \le 1$, for which $x(0) = x(1)$, Inventiones Math., 59, 1980, p. 67-76

[10] Ruf, B., Bounds on the number of solutions for elliptic problems with polynomial nonlinearities, J. Diff. Equ., 151, 1999, p. 111-133

[11] Tarantello, G., On the number of solutions for the forced pendulum equation, J. Diff. Equ., 80, 1989, p. 79-93