MAXIMAL FUNCTIONS ASSOCIATED TO FILTRATIONS

MICHAEL CHRIST AND ALEXANDER KISELEV

Abstract. Let $T$ be a bounded linear, or sublinear, operator from $L^p(Y)$ to $L^q(X)$. To any sequence of subsets $Y_j$ of $Y$ is associated a maximal operator $T^*f(x) = \sup_j |T(f\chi_{Y_j})(x)|$. Under the hypotheses that $q > p$ and the sets $Y_j$ are nested, we prove that $T^*$ is also bounded. Classical theorems of Menshov and Zygmund are obtained as corollaries. Multilinear generalizations of this theorem are also established. These results are motivated by applications to the spectral analysis of Schrödinger operators.

1. Introduction

Let $(X,\mu)$, $(Y,\nu)$ be arbitrary measure spaces. Denote by $\chi_E$ the characteristic function of a set $E$. To any sequence of measurable subsets $\{Y_n : n \in \mathbb{Z}\}$ of $Y$ and any operator $T$ defined on $L^p(Y)$ can be associated a maximal operator
$$T^* f(x) = \sup_n |T(f\chi_{Y_n})(x)|.$$
The operator norm of $T : L^p(Y) \mapsto L^q(X)$ is denoted by $\|T\|_{p,q}$. By a filtration of $Y$ we will mean any sequence of subsets $Y_n \subset Y$ which are nested: $Y_n \subset Y_{n+1}$ for every $n$. The purposes of this paper are firstly, to establish a rather general maximal theorem, Theorem 1.1, and secondly, to establish two multilinear variants.

Theorem 1.1. Let $1 \le p, q \le \infty$, and suppose that $T : L^p(Y) \mapsto L^q(X)$ is a bounded linear operator. Then for any filtration $\{Y_n : n \in \mathbb{Z}\}$ of $Y$, the maximal operator $T^*$ is bounded from $L^p(Y)$ to $L^q(X)$, provided that $p < q$. Moreover
$$\|T^*\|_{p,q} \le \big(1 - 2^{-(p^{-1} - q^{-1})}\big)^{-1} \|T\|_{p,q}.$$

Because the constant appearing in the conclusion is independent of $\{Y_n\}$, a corresponding result can be deduced for continuum filtrations, indexed by a real variable, as a direct consequence via a limiting argument. The most natural example is where $Y = \mathbb{R}$ and $Y_n = (-\infty, n]$ for each $n \in \mathbb{R}$.

Theorem 1.2. Suppose that $p < q$. Let $T : L^p(\mathbb{R}) \mapsto L^q(\mathbb{R})$ be defined by $Tf(x) = \int_{\mathbb{R}} K(x,y) f(y)\,dy$, where $K$ is locally integrable. Define
$$T^* f(x) = \sup_{s \in \mathbb{R}} \Big| \int_{y < s} K(x,y) f(y)\,dy \Big|. \tag{1.1}$$

We will construct a measurable function $\varphi : Y \mapsto [0,1]$ satisfying $\nu(\varphi^{-1}([0,t))) = t$ for all $t \in [0,1]$ and $Y_n = \varphi^{-1}([0, \nu(Y_n)))$. In particular, $\nu(\varphi^{-1}\{t\}) = 0$ for all $t$. Then define
$$B_j^m = \varphi^{-1}\big([(j-1)2^{-m},\, j2^{-m})\big).$$

To decompose $Y_n$ as in (2.2), consider for each $n$ a binary expansion $\nu(Y_n) = \sum_m r_m 2^{-m}$, with each $r_m$ equal either to $0$ or to $1$. If there happens to be more than one such expansion, choose one. There is a corresponding decomposition of the interval $[0, \nu(Y_n))$ into a union of disjoint, adjacent dyadic intervals, closed on the left and open on the right, of lengths $2^{-m_1}, 2^{-m_2}, \ldots$, where the $m_i$ are those indices for which $r_{m_i} = 1$, listed in increasing order. Applying $\varphi^{-1}$ to each of these intervals yields one of the sets $B_{j_i}^{m_i}$. This is the required decomposition of $Y_n$.

Define $\varphi(y) = 1$ for all $y \in Y \setminus \bigcup_n Y_n$, and $\varphi(y) = 0$ for all $y \in \bigcap_n Y_n$. Each remaining point $y \in Y$ belongs to $Y_n \setminus Y_{n-1} = \tilde Y_n$ for a unique index $n$. By repeated application of the divisibility property of $(Y,\nu)$, it is possible to define $\varphi : \tilde Y_n \mapsto [\nu(Y_{n-1}), \nu(Y_n))$ so that $\nu(\varphi^{-1}([\nu(Y_{n-1}), t))) = t - \nu(Y_{n-1})$ for every $t \in [\nu(Y_{n-1}), \nu(Y_n))$. The resulting function $\varphi : Y \mapsto [0,1]$ has both of the required properties.
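For concreteness, the binary digits of $t = \nu(Y_n)$ determine a maximal packing of $[0,t)$ by adjacent dyadic intervals, and $\varphi^{-1}$ of each of these is one of the sets $B_j^m$. The following small sketch (pure Python; the function name, the truncation level, and the example value of $t$ are our own illustrative choices, not part of the paper) lists the corresponding pairs $(m,j)$:

```python
def dyadic_decomposition(t, max_level=20):
    """Decompose [0, t) into disjoint adjacent dyadic intervals
    [(j-1)*2**-m, j*2**-m), read off from the binary expansion of t.

    Returns a list of (m, j) pairs, one for each binary digit r_m = 1,
    in increasing order of m (coarsest interval first).  The expansion is
    truncated at `max_level`, so the union covers [0, t) only up to an
    error of at most 2**-max_level.
    """
    assert 0.0 <= t <= 1.0
    pieces = []
    left = 0.0          # left endpoint of the next interval to place
    remainder = t
    for m in range(1, max_level + 1):
        if remainder >= 2.0 ** -m:
            # digit r_m = 1: [left, left + 2**-m) is a dyadic interval of
            # generation m, since left is a sum of strictly larger dyadic
            # lengths; its index is j = left * 2**m + 1.
            j = int(round(left * 2 ** m)) + 1
            pieces.append((m, j))
            left += 2.0 ** -m
            remainder -= 2.0 ** -m
    return pieces


if __name__ == "__main__":
    t = 0.8125            # binary 0.1101 = 1/2 + 1/4 + 1/16
    for (m, j) in dyadic_decomposition(t):
        a, b = (j - 1) * 2.0 ** -m, j * 2.0 ** -m
        print(f"m={m}, j={j}: [{a}, {b})")
```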


Remark. The proof can be recast in the following manner, which will be used in our analysis of multilinear analogues. Let $r$ be any exponent strictly greater than $p$. Define
$$Gf(x) = \sum_{m=0}^{\infty} \Big( \sum_{j=1}^{2^m} |T(f_j^m)(x)|^r \Big)^{1/r}.$$
Then $T^*f(x) \le Gf(x)$, and $\|Gf\|_{L^q(X)} \le C_{p,q,r} \|T\|_{p,q} \|f\|_{L^p(Y)}$. Indeed, when $q/r \ge 1$, Minkowski's integral inequality gives
$$\Big( \int \Big( \sum_j |T(f_j^m)(x)|^r \Big)^{q/r} dx \Big)^{r/q} \le \sum_j \|T(f_j^m)\|_{L^q(X)}^r \le \|T\|_{p,q}^r \sum_j \|f_j^m\|_{L^p(Y)}^r \le \|T\|_{p,q}^r \max_j \|f_j^m\|_{L^p(Y)}^{r-p} \sum_j \|f_j^m\|_{L^p(Y)}^p \le \|T\|_{p,q}^r \, 2^{-m(r-p)/p} \|f\|_p^r.$$
Since $G$ decreases as $r$ increases, we conclude that for any $r > p$ there exists $\delta > 0$ such that
$$\Big\| \Big( \sum_j |T(f_j^m)|^r \Big)^{1/r} \Big\|_q \le 2^{-m\delta} \|T\|_{p,q} \|f\|_p.$$
Summing over $m \ge 0$ completes the proof.

As an application we deduce a theorem of Menshov [4]. Let $\{\varphi_n : n \ge 1\}$ be an orthonormal subset of $L^2(X)$ for some measure space $X$. Then for any $p < 2$ and any sequence of coefficients $c = \{c_n\} \in \ell^p$, the partial sums of the series $\sum_n c_n \varphi_n$ converge almost everywhere in $X$.

Proof. Let $Y = \{1, 2, 3, \ldots\}$, equipped with counting measure. Let $Y_n = \{1, 2, 3, \ldots, n\} \subset Y$. Define the operator $T(c) = \sum_n c_n \varphi_n$, mapping $L^2(Y)$ boundedly to $L^2(X)$. $T$ maps the smaller space $L^p(Y)$ boundedly to $L^2(X)$, as well (since $\ell^p \subset \ell^2$). The partial sums are $S_N(c) = T(c\chi_{Y_N})$. By our theorem, $\sup_N |S_N(c)|$ defines a bounded operator from $\ell^p$ to $L^2(X)$. Combined with the obvious fact that almost everywhere convergence holds for the dense class of all finitely supported $c$, this implies almost everywhere convergence for all $c \in \ell^p$.

Menshov's theorem fails, in general, for $p = 2$. This is a further indication that our results cannot extend to the endpoint $p = q$, and are far too crude to yield Carleson's theorem.

3. Some Numerical Inequalities

The proofs of our multilinear results, Theorems 1.3 and 1.4, are based on an induction on the degree $n$ of multilinearity. In this section we analyze the asymptotic behavior of the solution of a certain numerical recursion, (3.2), which will be used in the proof of Theorem 1.4.
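The coefficients produced by this recursion will be of size roughly $n^{-n/2}$, up to geometric and fixed power factors, and the only asymptotic input needed later, when such bounds are converted into the $B^n/\sqrt{n!}$ form of Section 4, is Stirling's formula. The short script below (an illustration we have added; the range of $n$ is arbitrary) tabulates the relevant comparison $n^{-n/2}\sqrt{n!} \approx (2\pi n)^{1/4} e^{-n/2}$:

```python
import math

# Compare n**(-n/2) * sqrt(n!) with its Stirling approximation
# (2*pi*n)**(1/4) * exp(-n/2).  The ratio tends to 1, which is the only
# asymptotic fact needed to convert a bound with constants of size
# A**n * n**(-n/2) into one of the form B**n / sqrt(n!).
for n in (2, 5, 10, 20, 40, 80):
    exact = math.sqrt(math.factorial(n)) * n ** (-n / 2)
    stirling = (2 * math.pi * n) ** 0.25 * math.exp(-n / 2)
    print(f"n={n:3d}  exact={exact:.6e}  stirling={stirling:.6e}  "
          f"ratio={exact / stirling:.6f}")
```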


Lemma 3.1. There exists $\gamma \in \mathbb{R}^+$ such that the numbers $c_k$ defined by
$$c_k = k^{-k/2} k^{-\gamma} \qquad \text{for all } k \ge 2 \tag{3.1}$$
satisfy for every $k \ge 6$ the inequalities
$$c_k y^k + \sum_{j=2}^{k-2} c_j c_{k-j} x^j y^{k-j} + c_k x^k \le c_k (x^2 + y^2)^{k/2} \qquad \text{for all } x, y \ge 0. \tag{3.2}$$

For $k = 2, 3$ the inequality holds, if it is interpreted as the trivial assertion $c_k x^k + c_k y^k \le c_k (x^2 + y^2)^{k/2}$; this holds for any $c_2, c_3$. Our argument does not work for $k = 4, 5$, but we will show later that this is irrelevant for the main application. The lack of any terms associated to the indices $j = 1, k-1$ in the inequality is not a typographical error.

Proof. Define
$$\lambda_{k,j} = \frac{k^{k/2}}{j^{j/2}(k-j)^{(k-j)/2}} \Big( \frac{k}{j(k-j)} \Big)^{\gamma} \tag{3.3}$$
for $2 \le j \le k-2$, and $\lambda_{k,0} = \lambda_{k,k} = 1$. Then $c_j c_{k-j}/c_k = \lambda_{k,j}$. Assume that $k \ge 6$.

Suppose first that $k$ is even. We aim to majorize the left-hand side of (3.2), divided by $c_k$, by the binomial series $\sum_{l=0}^{k/2} \binom{k/2}{l} (x^2)^l (y^2)^{k/2 - l} = (x^2 + y^2)^{k/2}$, via a term-by-term comparison, while bearing in mind that the former series has about twice as many terms as the latter.

Suppose that $j$ is even. We write $a \approx b$ to indicate that the ratios $a/b$, $b/a$ are bounded above by a positive constant, uniformly over all relevant parameters on which $a, b$ depend. In this notation, Stirling's formula is $n! \approx n^{n+\frac12} e^{-n}$. The binomial coefficients are thus
$$\binom{k/2}{j/2} \approx (k/2)^{k/2} (j/2)^{-j/2} ((k-j)/2)^{-(k-j)/2} \Big( \frac{k}{j(k-j)} \Big)^{1/2} = k^{k/2} j^{-j/2} (k-j)^{-(k-j)/2} \Big( \frac{k}{j(k-j)} \Big)^{1/2}. \tag{3.4}$$
Thus for any preassigned positive constant $\varepsilon$,
$$\lambda_{k,j} \binom{k/2}{j/2}^{-1} \approx \Big( \frac{k}{j(k-j)} \Big)^{\gamma - \frac12} \le \varepsilon \tag{3.5}$$
uniformly in $j, k$ satisfying $2 \le j \le k-2$, provided that $\gamma$ is chosen to be sufficiently large and $k \ge 5$, because $k/(j(k-j))$ is then bounded uniformly by a constant strictly less than $1$. This breaks down for $k = 4$, $j = 2$, causing the restriction $k \ge 6$ in the statement of the lemma, since our analysis of $k = 5$ involves a reduction to $k = 4$; see below.


For $j$ odd, the above calculations tell us that
$$\binom{k/2}{(j-1)/2} \approx \frac{k^{k/2}}{(j-1)^{(j-1)/2}(k-j+1)^{(k-j+1)/2}} \Big( \frac{k}{(j-1)(k-j+1)} \Big)^{1/2} \approx \frac{k^{k/2}}{j^{(j-1)/2}(k-j)^{(k-j+1)/2}} \Big( \frac{k}{j(k-j)} \Big)^{1/2} \approx \lambda_{k,j} \Big( \frac{k}{j(k-j)} \Big)^{\frac12 - \gamma} \Big( \frac{k-j}{j} \Big)^{-1/2}. \tag{3.6}$$
Likewise
$$\binom{k/2}{(j+1)/2} \approx \frac{k^{k/2}}{(j+1)^{(j+1)/2}(k-j-1)^{(k-j-1)/2}} \Big( \frac{k}{(j+1)(k-j-1)} \Big)^{1/2} \approx \lambda_{k,j} \Big( \frac{k}{j(k-j)} \Big)^{\frac12 - \gamma} \Big( \frac{k-j}{j} \Big)^{1/2}. \tag{3.7}$$
For $j$ even we have a direct majorization
$$\lambda_{k,j} x^j y^{k-j} \le \varepsilon \binom{k/2}{j/2} x^j y^{k-j} = \varepsilon \binom{k/2}{j/2} (x^2)^{j/2} (y^2)^{(k-j)/2}. \tag{3.8}$$
For $j$ odd write
$$\lambda_{k,j} x^j y^{k-j} \le \tfrac12 \Big( \frac{j}{k-j} \Big)^{1/2} \lambda_{k,j} x^{j-1} y^{k-j+1} + \tfrac12 \Big( \frac{j}{k-j} \Big)^{-1/2} \lambda_{k,j} x^{j+1} y^{k-j-1} \le \varepsilon \binom{k/2}{(j-1)/2} (x^2)^{(j-1)/2} (y^2)^{(k-j+1)/2} + \varepsilon \binom{k/2}{(j+1)/2} (x^2)^{(j+1)/2} (y^2)^{(k-j-1)/2}. \tag{3.9}$$
Thus for even $k \ge 6$, by choosing $\varepsilon = 1/3$ we obtain
$$\sum_{j=2}^{k-2} \frac{c_j c_{k-j}}{c_k} x^j y^{k-j} \le \sum_{l=1}^{(k/2)-1} \binom{k/2}{l} (x^2)^l (y^2)^{(k/2)-l}. \tag{3.10}$$


Adding in the two remaining terms $x^k + y^k$ from the left-hand side of the inequality to be proved, we obtain on the right the full binomial series, whose sum equals $(x^2 + y^2)^{k/2}$.

The proof for odd $k$ is similar but involves an extra initial step. Start by majorizing $x^j y^{k-j}$ by the sum of appropriate coefficients times $(x^2 + y^2)^{-1/2} x^{j+1} y^{k-j}$ and $(x^2 + y^2)^{+1/2} x^{j-1} y^{k-j}$, or of $(x^2 + y^2)^{-1/2} x^j y^{k+1-j}$ and $(x^2 + y^2)^{+1/2} x^j y^{k-1-j}$. The first alternative can be applied when $j - 1 \ge 2$ and $k - j \le k - 1 - 2$; thus when $j \ge 3$. The second can be applied when $k - j \ge 3$. Since $k > 6$ and $2 \le j \le k - 2$, at least one of these two restrictions is satisfied. This reduces matters to the case where $k$ is even, and the above argument can then be applied. The details, including the choice of the coefficients that replace $[j/(k-j)]^{1/2}$ in the above reasoning, are left to the reader.
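The inequality (3.2) can also be probed numerically. The sketch below performs a brute-force grid check on the unit circle (by homogeneity of both sides this suffices), using the normalization $c_k = k^{-k/2-\gamma}$ written in (3.1) above and the sample value $\gamma = 4$; the value of $\gamma$, the range of $k$, and the grid resolution are arbitrary choices of ours, and the script is an illustration rather than a proof.

```python
import math

def c(k, gamma):
    # Normalization from (3.1): c_k = k**(-k/2 - gamma).
    return k ** (-k / 2 - gamma)

def lhs(k, x, y, gamma):
    total = c(k, gamma) * y ** k + c(k, gamma) * x ** k
    for j in range(2, k - 1):              # j = 2, ..., k-2
        total += c(j, gamma) * c(k - j, gamma) * x ** j * y ** (k - j)
    return total

def rhs(k, x, y, gamma):
    return c(k, gamma) * (x * x + y * y) ** (k / 2)

gamma = 4.0
worst = 0.0
for k in range(6, 41):
    for i in range(101):
        # Both sides are homogeneous of degree k, so it suffices to test
        # points on the unit circle in the first quadrant.
        theta = math.pi / 2 * i / 100
        x, y = math.cos(theta), math.sin(theta)
        worst = max(worst, lhs(k, x, y, gamma) / rhs(k, x, y, gamma))

# Stays below 1 for this choice of gamma; the largest ratios occur for
# the smallest admissible k.
print("largest LHS/RHS ratio found:", worst)
```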


Observe that for any $B \in \mathbb{R}^+$, if $c_k$ is replaced by $\tilde c_k = B^k c_k$ for all $k$, then the conclusion of the lemma remains valid for $\{\tilde c_k\}$. Moreover, the same goes if $B \ge 1$ and $\tilde c_k = B^k c_k$ for all $k \ge 4$, while $\tilde c_k = c_k$ for all $k < 4$. The significance of this is that for $B$ sufficiently large, $\{\tilde c_k\}$ will satisfy the inequality (3.2) for all $k$. In fact, it is easy to check that $B = 2$ is sufficient. Hence the restriction $k \ge 6$ in the preceding lemma can be eliminated.

Lemma 3.2. Suppose that the coefficients $\{c_k : k \ge 2\}$ satisfy (3.2) for every $k \ge 2$. Let $b_1 = 1$, and let $b_k = A^k c_k$ for all $k \ge 2$. Then for any nonnegative $x_1, x_2, y_1, y_2$, for any $k \ge 2$,
$$b_k x_2^k + b_1 b_{k-1} x_2^{k-1} y_1 + \sum_{j=2}^{k-2} b_j b_{k-j} x_1^j x_2^{k-j} + b_1 b_{k-1} x_1^{k-1} y_2 + b_k x_1^k \le b_k (|x| + |y|)^k. \tag{3.11}$$
Here $|x| = (x_1^2 + x_2^2)^{1/2}$, $|y| = (y_1^2 + y_2^2)^{1/2}$. Note that $x_1, x_2$ now play the roles filled by $x, y$ in the preceding lemma.

Proof. Divide both sides of the inequality by $b_k$. Using the preceding lemma to majorize the sum of all terms not involving $y_1, y_2$, and noting that $b_1 b_{k-1}/b_k = A^{-1} c_{k-1}/c_k$, we find that the resulting left-hand side is
$$\le |x|^k + A^{-1} \frac{c_{k-1}}{c_k} |y| (x_1^{2k-2} + x_2^{2k-2})^{1/2} \le |x|^k + C A^{-1} k^{1/2} |y| (x_1^{2k-2} + x_2^{2k-2})^{1/2} \le |x|^k + C A^{-1} k^{1/2} |y| \cdot 2|x|^{k-1}, \tag{3.12}$$
using the inequality $\sqrt{a+b} \le \sqrt a + \sqrt b$. On the other hand, expanding $(|x| + |y|)^k$ in binomial series and retaining only two terms, we obtain a quantity $\ge |x|^k + k |x|^{k-1} |y|$, which dominates the right-hand side of (3.12) provided that $A \ge 2C$.

4. Multilinear maximal operators on the diagonal

Let $f \in L^1(\mathbb{R})$, and consider a multilinear expression
$$M_n(f) = \int \cdots \int_{x_1 \le x_2 \le \cdots \le x_n} \prod_{i=1}^n f(x_i)\,dx_i, \tag{4.1}$$
restricted to the diagonal $f_1 = f_2 = \cdots = f_n = f$.

Definition. A martingale structure on a subinterval $I \subset \mathbb{R}$ is a collection of subintervals $\{E_j^m : m \ge 0,\ 1 \le j \le 2^m\}$ of $I$ that satisfy the following conditions, modulo endpoints.
• $I = \bigcup_j E_j^m$ for every $m$.
• $E_j^m \cap E_{j'}^m = \emptyset$ for every $j < j'$.
• If $j < j'$, $x \in E_j^m$, and $x' \in E_{j'}^m$, then $x < x'$.
• For every $m, j$, $E_j^m = E_{2j-1}^{m+1} \cup E_{2j}^{m+1}$.


Such a martingale structure is said to be adapted to $f$ in $L^p$ if $\int_{E_j^m} |f|^p = 2^{-m} \int_I |f|^p$ for every $m, j$. The second condition is actually a consequence of the third.

To any martingale structure $\{E_j^m\}$ on $\mathbb{R}$, we associate the sublinear operator
$$g(f) = \sum_{m=1}^{\infty} \Big( \sum_{j=1}^{2^m} \Big| \int_{E_j^m} f \Big|^2 \Big)^{1/2}. \tag{4.2}$$
The absolute value signs are outside the integrals; this is essential in the application [2] for which this machinery is designed.

It is obvious that $|M_n(f)| \le \|f\|_1^n / n!$, with equality for nonnegative $f$. The following variant is related but seemingly less obvious.

Proposition 4.1. There exists $B < \infty$ such that for any martingale structure on $\mathbb{R}$, for every $f \in L^1$, and for every $n \ge 1$,
$$|M_n(f)| \le B^n g(f)^n / \sqrt{n!}\,. \tag{4.3}$$
No connection is assumed between $f$ and the martingale structure; both are arbitrary.

Proof. Fix $n, f$. By Stirling's formula, it suffices to prove that $|M_n(f)| \le b_n g(f)^n$, where $b_n \approx A^n n^{-n/2-\gamma}$ are the constants in Lemma 3.2. Define $f_{m,j} = |\int_{E_j^m} f|$, and introduce the variants
$$g_{E_J^M}(f) = \Big( \sum_{m > M} \ \sum_{j : E_j^m \subset E_J^M} f_{m,j}^2 \Big)^{1/2}. \tag{4.4}$$
These will be used in conjunction with the fact that for any element $E_j^m$ of the martingale structure, the collection $\{E_J^M : E_J^M \subset E_j^m\}$ is a martingale structure on $E_j^m$.

Partition the region of integration $\Omega = \{x : x_1 < x_2 < \cdots < x_n\}$ into regions $S_j = \{x \in \Omega : x_j \in E_1^1 \text{ and } x_{j+1} \in E_2^1\}$, with $0 \le j \le n$, interpreting this to mean that $x \in S_0 \Leftrightarrow x_1 \in E_2^1$ and $x \in S_n \Leftrightarrow x_n \in E_1^1$. For $1 \le j \le n-1$,
$$\int \cdots \int_{S_j} \prod_{i=1}^n f(x_i)\,dx_i = M_j(f\chi_{E_1^1}) \cdot M_{n-j}(f\chi_{E_2^1}), \tag{4.5}$$
while for $j = 0$ it equals $M_n(f\chi_{E_2^1})$ and for $j = n$ it equals $M_n(f\chi_{E_1^1})$. Since $M_1(f) = \int f$, for $j = 1$ this simplifies to $\int_{E_1^1} f \cdot M_{n-1}(f\chi_{E_2^1})$, while for $j = n-1$ it becomes $\int_{E_2^1} f \cdot M_{n-1}(f\chi_{E_1^1})$. This yields the recursive bound
$$|M_n(f)| \le |M_n(f\chi_{E_1^1})| + \Big| \int_{E_2^1} f \Big| \cdot |M_{n-1}(f\chi_{E_1^1})| + \sum_{j=2}^{n-2} |M_j(f\chi_{E_1^1})| \cdot |M_{n-j}(f\chi_{E_2^1})| + \Big| \int_{E_1^1} f \Big| \cdot |M_{n-1}(f\chi_{E_2^1})| + |M_n(f\chi_{E_2^1})|. \tag{4.6}$$


We proceed by induction on $n$. Thus for $2 \le j \le n-2$,
$$|M_j(f\chi_{E_1^1}) \cdot M_{n-j}(f\chi_{E_2^1})| \le b_j b_{n-j}\, g_{E_1^1}(f)^j\, g_{E_2^1}(f)^{n-j}. \tag{4.7}$$
For $j = 1$ we have the simpler upper bound $f_{1,1} b_{n-1} g_{E_2^1}(f)^{n-1}$, and for $j = n-1$, the bound $f_{1,2} b_{n-1} g_{E_1^1}(f)^{n-1}$. For $j = 0, n$ we are in the same situation with which we began, except that $f$ has been replaced by its restriction to either half $E_r^1$ of the space. To handle these terms we proceed first via circular reasoning, majorizing them by the desired quantities $b_n g_{E_1^1}(f)^n$ and $b_n g_{E_2^1}(f)^n$ respectively, and will subsequently explain how this can be justified by restructuring the induction argument to eliminate the circularity.

By summing over all $0 \le j \le n$, we obtain from (4.6) and (4.7)
$$|M_n(f)| \le \sum_{j=2}^{n-2} b_j b_{n-j}\, g_{E_1^1}(f)^j\, g_{E_2^1}(f)^{n-j} + b_{n-1} f_{1,1}\, g_{E_2^1}(f)^{n-1} + b_{n-1} f_{1,2}\, g_{E_1^1}(f)^{n-1} + b_n g_{E_1^1}(f)^n + b_n g_{E_2^1}(f)^n. \tag{4.8}$$
Setting $x_t = g_{E_t^1}(f)$, $y_t = f_{1,t}$ for $t = 1, 2$, we are in the situation of Lemma 3.2, and hence conclude that
$$|M_n(f)| \le b_n \big( [f_{1,1}^2 + f_{1,2}^2]^{1/2} + [g_{E_1^1}(f)^2 + g_{E_2^1}(f)^2]^{1/2} \big)^n. \tag{4.9}$$
Now for $t = 1, 2$ we have $g_{E_t^1}(f) = \sum_{m \ge 2} B_{m,t}$, where $B_{m,t} = \big( \sum_{j : E_j^m \subset E_t^1} f_{m,j}^2 \big)^{1/2}$. Thus
$$\big( g_{E_1^1}(f)^2 + g_{E_2^1}(f)^2 \big)^{1/2} = \Big( \Big( \sum_{m \ge 2} B_{m,1} \Big)^2 + \Big( \sum_{m \ge 2} B_{m,2} \Big)^2 \Big)^{1/2} \tag{4.10}$$
is the norm of the vector $\big( \sum_{m \ge 2} B_{m,1},\, \sum_{m \ge 2} B_{m,2} \big) \in \mathbb{C}^2$, and hence is majorized by the sum of the norms:
$$\le \sum_{m \ge 2} \big( B_{m,1}^2 + B_{m,2}^2 \big)^{1/2} = \sum_{m \ge 2} \Big( \sum_{j=1}^{2^m} f_{m,j}^2 \Big)^{1/2}. \tag{4.11}$$
Therefore
$$[f_{1,1}^2 + f_{1,2}^2]^{1/2} + [g_{E_1^1}(f)^2 + g_{E_2^1}(f)^2]^{1/2} \le g(f). \tag{4.12}$$
This completes the proof, except for justifying the treatment of the regions $S_0, S_n$.

For large integers $K$ define $\Omega_K$ to be the set of all $x = (x_1, \ldots, x_n) \in \Omega$ such that there exist no $j \in \{1, 2, \ldots, 2^K\}$ and $1 \le i \le n-1$ such that both $x_i, x_{i+1}$ belong to $E_j^K$. Since $\Omega_K \uparrow \Omega$ as $K \to \infty$, it suffices to prove that $\int \cdots \int_{\Omega_K} \prod_i f(x_i)\,dx_i$ satisfies the bound desired for $M_n(f)$, for every $K$. We generalize the setup, allowing $\mathbb{R}$ to be replaced by an arbitrary subinterval, and $\{E_j^m\}$ to be replaced by an arbitrary martingale structure on that subinterval. The bound is proved by a double induction on $K, n$, doing induction on $n$ for each $K$. When $K = 1$, $M_n = 0$ for $n \ge 3$, while $M_2 = \int_{E_1^1} f \cdot \int_{E_2^1} f$; thus the induction is well founded.


The above argument still applies; the factor of the characteristic function of $\Omega_K$ causes no disturbance. Moreover, for the contributions of $S_0, S_n$, $K$ is effectively replaced by $K-1$, if one replaces $\mathbb{R}$ by $E_t^1$ and considers $\{E_j^m \subset E_t^1\}_{m \ge 2}$ to be a martingale structure on $E_t^1$. Thus the induction hypothesis applies, and yields the bounds desired for the contributions of these two exceptional regions.

The same reasoning yields a slightly more general result, in which $M_n(f)$ is modified by replacing some of the factors $f(x_i)$ by their complex conjugates. Nothing is changed in the conclusion, nor in its proof, since $\bar f_{m,j} = f_{m,j}$.

Proposition 4.1 has a maximal version. Write $M_n(f)(y, y') = M_n(f_{[y,y']})$, and define
$$M_n^*(f) = \sup_{y \le y' \in \mathbb{R}} M_n(f)(y, y'). \tag{4.13}$$
To control it, we employ a variant of $g(f)$:
$$\tilde g(f) = \sum_{m \ge 1} m \Big( \sum_{j=1}^{2^m} \Big| \int_{E_j^m} f \Big|^2 \Big)^{1/2}. \tag{4.14}$$

Proposition 4.2. There exists $B < \infty$ such that for every locally integrable function $f$ and every $n \ge 1$,
$$M_n^*(f) \le B^n \tilde g(f)^n / \sqrt{n!}\,. \tag{4.15}$$

Proof. We have shown that
$$|M_n(f)(y, y')| = |M_n(f_{[y,y']})| \le B^n g(f_{[y,y']})^n / \sqrt{n!} \tag{4.16}$$
for some constant $B$, so it suffices to show that $g(f_I) \le C \tilde g(f)$, uniformly for all intervals $I$. The definition of $g$ involves summing over $m, j$, and the summands are all nonnegative. Any martingale interval $E_j^m$ that is contained in $I$ makes the same contribution to $g(f)$ as to $g(f_I)$, and any $E_j^m$ contained in the complement of $I$ contributes nothing to $g(f_I)$, and something $\ge 0$ to $g(f)$. Thus it suffices to analyze the contributions of those $E_j^m$ that intersect both $I$ and $\mathbb{R} \setminus I$. For any particular $m$, there are at most two such values of $j$. We discuss only those $(m, j)$ for which $E_j^m$ intersects the left endpoint $y$ of $I$; the same reasoning applies to the right.

For the purposes of this discussion, we may consider the $E_j^m$ to be open intervals, since sets of measure zero have no effect on the quantities to be estimated. We must then majorize $\sum_m |\int_{I \cap E_{i(m)}^m} f|$, where $i(m)$ is the unique index such that $y \in E_{i(m)}^m$ if such an index exists, and where we understand the $m$-th term of the sum to be zero if no such index exists. As in the proof of Theorem 1.1, $I \cap E_{i(m)}^m$ can be partitioned, modulo sets of measure zero, in a unique way into a union of sets $E_J^M$, where $M$ ranges over all integers $> m$ and for each such $M$, at most two indices $J$ arise for such a partition. Indeed, if $\mathbb{R}$ is identified homeomorphically with $(0,1)$ in such a way that the $\{E_j^m\}$ become the dyadic intervals of lengths $2^{-m}$, then this partition is simply the usual Whitney decomposition of (the interior of) $I \cap E_{i(m)}^m$. A given $E_J^M$ can arise in the partitions of more than one set $E_{i(m)}^m$, but can so arise only for $m < M$.


Hence for any $M$, the total number of all $E_J^M$ arising in all such partitions, counted according to multiplicity, is $\le CM$. Majorizing $|\int_{I \cap E_{i(m)}^m} f|$ by $\sum_J |\int_{E_J^M} f|$, where the sum is over all intervals in the partition, we arrive at the desired bound.

We can now prove our diagonal-multilinear maximal theorem, Theorem 1.4.

Proof of Theorem 1.4. Let $\{E_j^m\}$ be a martingale structure on $\mathbb{R}$, adapted to $f$ in $L^p$. Define
$$G(f) = \sum_{m=1}^{\infty} m \Big( \sum_{j=1}^{2^m} |T(f\chi_{E_j^m})|^2 \Big)^{1/2}. \tag{4.17}$$
It suffices now to combine two ingredients. Firstly, $f \mapsto G(f)$ maps $L^p(\mathbb{R})$ boundedly to $L^q$, as in the first remark following the proof of Theorem 1.1; the extra factor of $m$ is harmless because there is a favorable factor of $2^{-m\delta}$ for some $\delta(p,q) > 0$ for the operator norm of the $m$-th summand in the series defining $G(f)$. Secondly, by Proposition 4.2, there is a pointwise bound $M_n^*(f) \le B^n G(f)^n / \sqrt{n!}$.

5. Proof of Theorem 1.3

The proof of Theorem 1.3 is close to that of Theorem 1.4, but some modifications are needed. With the natural change of notation from $M_n(f)$ to $M_n(f_1, \ldots, f_n)$, we wish to show that for all sufficiently large $n$,
$$M_n(f_1, \ldots, f_n) \le A^n n^{-\gamma} \prod_{i=1}^n \tilde g(f_i). \tag{5.1}$$
We will treat only $M_n(f_1, \ldots, f_n) = \int \cdots \int_{y \le x_1 \le \cdots \le x_n \le y'} \prod_{i=1}^n K(\cdot, x_i) f_i(x_i)\,dx_i$ where $y, y'$ are fixed; the bound for its maximal relative $M_n^*$ then follows as in the proof of Proposition 4.2. As in the proof of Proposition 4.1, the factor $n^{-\gamma}$ is of no consequence but makes the induction argument work more smoothly. (5.1) in turn follows from the bound $M_n(f_1, \ldots, f_n) \le A^n n^{-\gamma} \prod_{i=1}^n g(f_i)$. For small $n$, that bound is a simple consequence of the same recursive procedure used to derive Proposition 4.1; the details are left to the reader. Proceeding as in the proof of that proposition, we obtain for all large $n$
$$A^{-n} n^{\gamma} M_n(f_1, \ldots, f_n) \le \prod_{j=1}^n g_{E_1^1}(f_j) + \varepsilon \prod_{j=1}^{n-1} g_{E_1^1}(f_j) \cdot (f_n)_{1,2} + \varepsilon \sum_{i=2}^{n-2} \prod_{j=1}^{i} g_{E_1^1}(f_j) \prod_{j=i+1}^{n} g_{E_2^1}(f_j) + \varepsilon\, (f_1)_{1,1} \prod_{j=2}^{n} g_{E_2^1}(f_j) + \prod_{j=1}^n g_{E_2^1}(f_j); \tag{5.2}$$
the first and last terms are obtained via the same circular reasoning as in the earlier proof. And as in the proof of Proposition 4.1, the constant $\varepsilon$ may be made as small as desired by choosing $\gamma$ and the constant $A$ to be sufficiently large, provided $n \ge 6$.

To simplify the notation set $x_t^j = g_{E_t^1}(f_j)$ for $t = 1, 2$, and set $y_t^j = |\int_{E_t^1} f_j|$. Also set $|x^j| = [(x_1^j)^2 + (x_2^j)^2]^{1/2}$, and likewise define $|y^j|$.


In this notation, A?nn Mn(f1 ; : : : ; fn) is majorized by (5.3)

n Y j =1

xj

1+

nY ?1 j =1

xj

1

 yn +  2

n?2 Y n?i X i=2 j =1

xj

1

n Y j =n?i+1

xj

2+

n Y j =2

n Y j 1 x y + xj : 2 1

j =1

2

We focus attention temporarily on the middle term. Consider rst the case where n is even. Then after pulling out a common factor of x11 x21 x2n?1 xn2 , we are left with (5.4)   3 4 n ? 2 3 4 5 6 n ? 2 3 4 5 6 7 8 n ? 2 3 4 n ? 2 x2 x2    x2 + x1 x1 x2x2    x2 + x1x1 x1 x1 x2 x2    x2 +    + x1 x1    x1 



+ x31 x42    x2n?2 + x31 x41 x51x62    x2n?2 +    + x31    x1n?3 x2n?2 : Q

?2 jxj j. To see this We claim that each of the two terms in parentheses is  jn=3 observe that the sum of the rst two terms in the rst set of parentheses is (x31 x41 + x32 x42 )x52    x2n?2  jx3 j  jx4 j  x52    x2n?2 , by Cauchy-Schwarz. In all the other terms in the rst set of parentheses, majorize x3t x4t by jx3j  jx4 j. Now there is one fewer term than when we began; the sum of the contribution of the term just obtained by applying Cauchy-Schwarz and the next term in the sum is  jx3 j  jx4 j(x51x61 + x52 x62 )x72    xn2 . Apply Cauchy-Schwarz to bound the factor in parentheses by jx5 jjx6j, other terms by jx5 j  jx6j, and repeat the process. majorize each factor x5t x6t in the Qn?2 j Eventually the desired bound j=3 jx j is obtained. The analysis of the second line of (5.4) proceeds in the same way, except that we begin by majorizing x3t and xtn?2 by jx3 j; jxn?2j, respectively, in every term, and then Qn?2 j proceed as in the preceding paragraph. The same bound j=3 jx j is obtained. Finally, the case of odd n is the same except for minor changes in notation. Multiplying   1=2, the sum of the contributions of the two terms in by x11 x21 x2n?2 xn2 and taking Q ?2 j jx j. (5.4) is  x11 x21x2n?2 xn2 jn=3 Q The next step is to add in the terms nj=1 xjt for t = 1; 2. Cauchy-Schwarz gives us

(5.5)

n Y j =1

xj1 + x11x21 x2n?2 xn2

nY ?2 j =3



jxj j +

n Y j =1

xj2  x11 x21

Another application of Cauchy-Schwarz gives the We have thus shown that (5.6)

Ann Mn(f1 ; : : : ; fn) 

which in turn is plainly (5.7)

 (y11 + jx1 j)

nY ?1 j =2

n Y j =1

jxj j + y2n

This last product is the desired bound.

jxj j + x12 x22

j =3 Q bound nj=1 jxj j.

nY ?1 j =1

jxj j(yn + jxnj)  2

n Y

jxj j + y11

n Y j =1

n Y j =2

jxj j;

(jyj j + jxj j):

n Y j =3

jxj j:
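To make the quantities appearing in the last two sections concrete, the ordered integral $M_n$ can be evaluated by iterated cumulative integration, and $\tilde g(f)$ by summing block integrals over a dyadic martingale structure. The discretization below (grid sizes, the truncation of the sum over $m$, the sample function, and all helper names) is our own illustrative choice and is not the paper's algorithm.

```python
import math

def ordered_integral(fs, N=2000):
    """M_n(f_1,...,f_n) = int_{0<=x_1<=...<=x_n<=1} prod f_i(x_i) dx_i,
    computed by iterated cumulative (Riemann) sums on a uniform grid."""
    h = 1.0 / N
    xs = [k * h for k in range(N)]
    running = [1.0] * N                    # I_0 == 1
    for f in fs:
        vals = [f(x) * I for x, I in zip(xs, running)]
        csum, acc = [], 0.0
        for v in vals:                     # I_k(x) = int_0^x f_k(s) I_{k-1}(s) ds
            acc += v * h
            csum.append(acc)
        running = csum
    return running[-1]

def g_tilde(f, max_level=8, N=2 ** 12):
    """tilde g(f) = sum_m m * ( sum_j |int_{E_j^m} f|^2 )^(1/2) over the
    dyadic martingale structure of [0,1), truncated at max_level."""
    h = 1.0 / N
    total = 0.0
    for m in range(1, max_level + 1):
        block = N // 2 ** m                # grid points per dyadic interval
        s = 0.0
        for j in range(2 ** m):
            integral = sum(f((j * block + k) * h) for k in range(block)) * h
            s += integral ** 2
        total += m * math.sqrt(s)
    return total

if __name__ == "__main__":
    def f(x):
        return math.cos(12 * math.pi * x) + 0.3   # oscillatory sample

    n = 4
    # For identical factors, M_n(f) equals (int f)^n / n! exactly; the two
    # numbers printed here merely illustrate the sizes of the quantities
    # that Propositions 4.1 and 4.2 compare (with constants B^n omitted).
    print("M_n(f)           =", ordered_integral([f] * n))
    print("g~(f)^n/sqrt(n!) =", g_tilde(f) ** n / math.sqrt(math.factorial(n)))
```

For identical factors $M_n(f)$ reduces to $(\int f)^n/n!$; the point of Propositions 4.1 and 4.2 is that such expressions, together with their variants involving distinct factors, complex conjugates, or kernels, are controlled by $g(f)$ and $\tilde g(f)$ with constants of size $B^n/\sqrt{n!}$.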


References

[1] M. Christ and A. Kiselev, Absolutely continuous spectrum for one-dimensional Schrödinger operators with slowly decaying potentials: some optimal results, J. Amer. Math. Soc. 11 (1998), 771–797. MR 99g:34166
[2] M. Christ and A. Kiselev, WKB asymptotics of generalized eigenfunctions of one-dimensional Schrödinger operators with , preprint, January 2000.
[3] A. Kiselev, Interpolation theorem related to a.e. convergence of integral operators, Proc. Amer. Math. Soc. 127 (1999), no. 6, 1781–1788. MR 99h:34124
[4] D. Menshov, Sur les séries de fonctions orthogonales, Fund. Math. 10 (1927), 375–420.
[5] R. E. A. C. Paley, Some theorems on orthonormal functions, Studia Math. 3 (1931), 226–245.
[6] H. Smith and C. Sogge, Global Strichartz estimates for nontrapping perturbations of the Laplacian, Comm. Partial Differential Equations, to appear.
[7] T. Tao, Spherically averaged endpoint Strichartz estimates for the two-dimensional Schrödinger equation, Comm. Partial Differential Equations, to appear.
[8] A. Zygmund, Trigonometric Series, Vol. 2, Cambridge Univ. Press, Cambridge, 1977.
[9] A. Zygmund, A remark on Fourier transforms, Proc. Camb. Phil. Soc. 32 (1936), 321–327.

Michael Christ, Department of Mathematics, University of California, Berkeley, Berkeley, CA 94720, USA

E-mail address :

[email protected]

Alexander Kiselev, Department of Mathematics, University of Chicago, Chicago, Ill. 60637, USA

E-mail address :

[email protected]
