The Carleson Embedding Theorem with matrix weights


arXiv:1508.01716v1 [math.CA] 7 Aug 2015

AMALIA CULIUC AND SERGEI TREIL

Abstract. In this paper we prove the weighted martingale Carleson embedding theorem with matrix weights both in the domain and in the target space.

1. Introduction and main results

The main result of this paper is the matrix weighted martingale Carleson embedding theorem, where matrix weights appear in both the domain and the target space. The need for such a result is motivated by the attempt to generalize the two weight estimates for well localized operators from [6] to the case of matrix-valued measures. The main part of the estimate in [6] is the two weight inequality for paraproducts, and for matrix-valued measures this estimate can be reduced exactly to the embedding theorem treated in this paper.

Earlier versions of the matrix weighted Carleson embedding theorem under fairly strong additional assumptions (such as the weight belonging to the A2 class) go back to [7] and, more recently, [3], [1]. Two weight estimates with matrix weights for well localized operators, also under additional assumptions, were treated in [4] and [1] (see also [2]), but the result is still not known in full generality. The weighted embedding theorem presented in this paper does not assume any properties of the matrix weight except local boundedness, and produces an embedding constant that depends polynomially on the dimension of the space.

As in the scalar case, our embedding theorem states that the Carleson measure condition, which is just a simple testing condition, implies the embedding. For matrix weights the Carleson measure condition (condition (ii) in Theorem 1.1 or condition (iii) in Theorem 1.2) is an inequality between positive semidefinite matrices. In the case of scalar weights in the domain, the right hand side of the inequality is a multiple of the identity matrix I: in this situation, sacrificing constants, one can replace matrices by their norms, and the matrix embedding theorem trivially follows from the scalar one. Of course, the constants obtained by such a trivial reduction are far from optimal: constants of optimal order were obtained using more complicated reasoning in [5].
In our case, both sides of the Carleson measure condition are general positive semidefinite matrices, so the simple strategy of replacing matrices by norms or traces does not work. A more complicated idea, in the spirit of the argument in [5], is used to get the result.

1.1. Setup.

1.1.1. Atomic filtered spaces. Let (X, F, σ) be a sigma-finite measure space with an atomic filtration F_n, that is, a sequence of increasing sigma-algebras F_n ⊂ F such that for each F_n there exists a countable collection D_n of disjoint sets of finite measure with the property that every set of F_n is a union of sets in D_n.

2010 Mathematics Subject Classification. Primary 42B20, 60G42, 60G46.
Key words and phrases. Carleson embedding theorem, matrix weights, Bellman function.
Supported in part by the National Science Foundation under the grant DMS-1301579.


We will call the sets I ∈ D_n atoms, and denote by D the collection of all atoms, D = ∪_{n∈Z} D_n. We allow a set I to belong to several generations D_n, so formally an atom I ∈ D_n is a pair (I, n). To avoid overloading the notation, we skip the "time" n and write I instead of (I, n); if we need to "extract" the time n, we will use the symbol rk I. Namely, if I denotes the atom (I, n), then n = rk I. The inclusion I ⊂ J for atoms should be understood as inclusion for the sets together with the inequality rk I ≥ rk J. However, the union (intersection) of atoms is just the union (intersection) of the corresponding sets, and the "times" n are not taken into account.

A standard example of such a filtration is the dyadic lattice D on R^N, which explains the choice of notation. However, in what follows, D will always denote a general collection of atoms and I ∈ D will stand for an atom in D, and not necessarily for a dyadic interval.

1.1.2. Matrix-valued measures. Let F_0 be the collection of sets E ∩ F where E ∈ F and F is a finite union of atoms. A d × d matrix-valued measure W on X is a countably additive function on F_0 with values in the set of d × d positive semidefinite matrices. Equivalently, W = (w_{j,k})_{j,k=1}^d is a d × d matrix whose entries w_{j,k} are (possibly signed or even complex-valued) measures, finite on atoms, and such that for any E ∈ F_0 the matrix (w_{j,k}(E))_{j,k=1}^d is positive semidefinite. Note that the measure W is always finite on atoms.

The weighted space L²(W) is defined as the set of all measurable F^d-valued functions f (F = R or C) such that

    ‖f‖²_{L²(W)} := ∫_X ⟨W(dx)f(x), f(x)⟩_{F^d} < ∞;

as usual, we take the quotient space over the set of functions of norm 0.

1.2. Main results.

Theorem 1.1. Let W be a d × d matrix-valued measure and let Ã_I, I ∈ D, be positive semidefinite d × d matrices. The following statements are equivalent:

(i) Σ_{I∈D} ‖Ã_I^{1/2} ∫_I W(dx) f(x)‖² ≤ A ‖f‖²_{L²(W)};

(ii) Σ_{I∈D: I⊂I₀} W(I) Ã_I W(I) ≤ B W(I₀) for all I₀ ∈ D.

Moreover, for the best constants A and B we have B ≤ A ≤ CB, where C = C(d) is a constant depending only on the dimension d.

Note that the underlying measure σ is absent from the statement of the theorem: we do not need σ in the setup, we only need the filtration F_n. Alternatively, we can pick σ to make the setup more convenient. For example, if we define σ := tr W := Σ_{k=1}^d w_{k,k}, then the measures w_{j,k} are absolutely continuous with respect to σ. Thus, we can always assume that our matrix-valued measure is an absolutely continuous one, W dσ, where W is a matrix weight, i.e. a locally integrable (meaning integrable on all atoms I) matrix-valued function with values in the set of positive semidefinite matrices.

For a measurable function f we denote by ⟨f⟩_I its average,

    ⟨f⟩_I := σ(I)^{-1} ∫_I f dσ;

if σ(I) = 0 we put ⟨f⟩_I = 0. The same definition is used for vector- and matrix-valued functions. In what follows we will often use |E| for σ(E) and dx for dσ.


The theorem below is a restatement of Theorem 1.1 in this setup, if we put A_I = |I|^{-1} Ã_I. More precisely, Theorem 1.1 is just the equivalence (ii) ⟺ (iii) in Theorem 1.2. The equivalence (i) ⟺ (ii) will be explained below.

Theorem 1.2. Let W be a d × d matrix-valued weight and let A_I, I ∈ D, be a sequence of positive semidefinite d × d matrices. Then the following are equivalent:

(i) Σ_{I∈D} ‖A_I^{1/2} ⟨W^{1/2} f⟩_I‖² |I| ≤ A ‖f‖²_{L²};

(ii) Σ_{I∈D} ‖A_I^{1/2} ⟨W f⟩_I‖² |I| ≤ A ‖f‖²_{L²(W)};

(iii) |I₀|^{-1} Σ_{I∈D: I⊂I₀} ⟨W⟩_I A_I ⟨W⟩_I |I| ≤ B ⟨W⟩_{I₀} for all I₀ ∈ D.
Moreover, B ≤ A ≤ CB, where C = C(d) = e · d³(d + 1)².

Acknowledgement. The authors would like to thank F. Nazarov for suggesting the crucial idea of using a "Bellman function with a parameter" argument, similar to the one used earlier in [5].

2. Proof of the main result

2.1. Trivial reductions. The equivalence of (i) and (ii) is trivial. In (i), perform the change of variables f := W^{1/2} f to obtain (ii), and similarly, in (ii) set f := W^{-1/2} f to obtain (i). Note that here we do not need to assume that the weight W is invertible a.e.: we just interpret W^{-1/2} as the Moore–Penrose inverse of W^{1/2}.

The implication (i) ⟹ (iii) and the estimate A ≥ B are obvious by setting f = W^{1/2} 1_{I₀} e, e ∈ F^d, in (i). Equivalently, to show that (ii) ⟹ (iii) one just needs to apply (ii) to the test functions f = 1_{I₀} e. So it only remains to prove that (iii) ⟹ (i), or equivalently, that (iii) ⟹ (ii).

2.1.1. Invertibility of W. Let us notice that without loss of generality we can assume that the weight W is invertible a.e., and even that the weight W^{-1} is uniformly bounded. To show that, define for α > 0 the weight W_α by W_α(s) := W(s) + αI, and let

    A_I^α := ⟨W_α⟩_I^{-1} ⟨W⟩_I A_I ⟨W⟩_I ⟨W_α⟩_I^{-1}.

If (iii) is satisfied, then trivially

    |I₀|^{-1} Σ_{I∈D: I⊂I₀} ⟨W_α⟩_I A_I^α ⟨W_α⟩_I |I| ≤ B ⟨W⟩_{I₀} ≤ B ⟨W_α⟩_{I₀}.

If Theorem 1.2 holds for invertible weights W, we get that for all f ∈ L²(W) ∩ L²

    Σ_{I∈D} ‖(A_I^α)^{1/2} ⟨W_α f⟩_I‖² |I| ≤ A ‖f‖²_{L²(W_α)}.

Noticing that ‖f‖_{L²(W_α)} → ‖f‖_{L²(W)}, ⟨W_α f⟩_I → ⟨W f⟩_I, and A_I^α → A_I as α → 0+, we immediately get (ii) for all f ∈ L²(W) ∩ L²; taking the limit inside the sum is justified because an infinite sum of non-negative numbers is the supremum of all finite subsums, and finite sums commute with limits. Since the estimate (ii) holds on a dense set, extending the embedding operator by continuity we trivially get that it holds for all f ∈ L²(W).


2.2. The Bellman functions. By homogeneity we can assume without loss of generality that B = 1. As we discussed above, we only need to prove the implication (iii) ⟹ (i). Following a suggestion by F. Nazarov we will do so by a "Bellman function with a parameter" argument similar to the one presented in [5]. Denote

    F_I = ‖f‖²_{L²(I)} := ⟨|f|²⟩_I,                         (2.1)
    M_I := |I|^{-1} Σ_{J⊂I} ⟨W⟩_J A_J ⟨W⟩_J |J|,            (2.2)
    x_I := ⟨W^{1/2} f⟩_I.                                   (2.3)

For any real number s, 0 ≤ s < ∞, define the Bellman function

    B_s(I) = B_s(F_I, x_I, M_I) = ⟨(⟨W⟩_I + sM_I)^{-1} x_I, x_I⟩_{F^d}.   (2.4)

Notice that F_I is not involved in the definition of B_s(I), but it will be used in the estimates. The functions B_s satisfy the following properties:

(i) The range property: 0 ≤ B_s(I) ≤ F_I;
(ii) The key inequality:

    B_s(I) + sR_I(s) ≤ Σ_{I'∈ch(I)} (|I'|/|I|) B_s(I'),     (2.5)

where R_I(s) = ‖A_I^{1/2} ⟨W⟩_I (⟨W⟩_I + sM_I)^{-1} x_I‖².

The inequality B_s(I) ≥ 0 is trivial, and the inequality B_s(I) ≤ F_I follows immediately from Lemma 3.1 below. The key inequality (2.5) is a consequence of Lemma 3.3, which we also prove below.

2.3. From Bellman functions to the estimate. Let us rewrite (2.5) as

    |I| B_s(I) + |I| s R_I(s) ≤ Σ_{I'∈ch(I)} |I'| B_s(I').

Then, applying this estimate to each B_s(I'), and then to each descendant of each I', we get, going m generations down,

    |I| B_s(I) + Σ_{I'∈D: I'⊂I, rk I' < rk I + m} s R_{I'}(s) |I'| ≤ Σ_{I'∈D: I'⊂I, rk I' = rk I + m} |I'| B_s(I') ≤ ‖f 1_I‖²_{L²},

where the last inequality holds by the range property. Since B_s(I) ≥ 0, letting m → ∞ and summing over the maximal atoms I we obtain, for every s > 0,

    Σ_{I∈D} s R_I(s) |I| ≤ ‖f‖²_{L²}.                       (2.6)

To conclude, we need the following lemma.

Lemma 2.1. For every ε > 0,

    R_I(0) ≤ C(ε, d) ε^{-1} ∫₀^ε s R_I(s) ds.

Moreover, for ε = 1/(2d) we can take C(d) = e · d³(d + 1)².

Averaging (2.6) over s ∈ [0, ε] and applying the lemma, we get

    Σ_{I∈D} R_I(0) |I| ≤ e · d³(d + 1)² ‖f‖²_{L²}.

Since R_I(0) = ‖A_I^{1/2} ⟨W^{1/2} f⟩_I‖², this is exactly (i), which proves the theorem (modulo Lemma 2.1).

2.4. Proof of Lemma 2.1. Observe that it follows from the cofactor inversion formula that the entries of the matrix (⟨W⟩_I + sM_I)^{-1} are of the form p_{j,k}(s)/Q(s), where

    Q(s) = Q_I(s) = det(⟨W⟩_I + sM_I)

is a polynomial of degree at most d, and p_{j,k}(s) are polynomials of degree at most d − 1. Therefore R_I is a rational function of s, R_I(s) = P̃_I(s)/|Q_I(s)|², where P̃_I(s) is a polynomial of degree at most 2(d − 1) and P̃_I(s) ≥ 0. We can then write P̃_I(s) = |P_I(s)|², where P_I has degree at most d − 1. Therefore R_I(s) = |P_I(s)/Q_I(s)|².

By hypothesis, M_I ≤ ⟨W⟩_I, so the operator ⟨W⟩_I + sM_I is invertible for all s such that Re s > −1. Thus the zeroes of Q_I(s) all lie in the half-plane Re s ≤ −1. Let λ₁, λ₂, ..., λ_d be the roots of the polynomial Q_I(s), counting multiplicity. We have

    Q_I(s)/Q_I(0) = Π_{k=1}^d (s − λ_k)/(−λ_k).

For a fixed s ≥ 0 and Re λ_k ≤ −1, the term |s − λ_k|/|λ_k| attains its maximum, equal to 1 + s, at λ_k = −1. Therefore, on the interval [0, ε],

    |Q_I(s)/Q_I(0)| ≤ (1 + ε)^d.                            (2.7)

From the estimate above,

    ∫₀^ε s |P_I(s)/Q_I(0)|² ds ≤ (1 + ε)^{2d} ∫₀^ε s R_I(s) ds.   (2.8)

It will suffice then to find a constant C₁ = C₁(ε, d) such that for any polynomial p of degree at most d − 1

    |p(0)|² ≤ C₁ ε^{-1} ∫₀^ε s |p(s)|² ds.                  (2.9)

Note that if we do not care about the constant C(d), we can stop here: we just consider the space of polynomials of degree at most d − 1 endowed with the norm ‖p‖² := ε^{-1} ∫₀^ε s|p(s)|² ds and the linear functional p ↦ p(0). Since any linear functional on a finite-dimensional normed space is bounded, we immediately get (2.9).
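The elementary bound behind (2.7), namely |s − λ|/|λ| ≤ 1 + s whenever Re λ ≤ −1 and s ≥ 0, is easy to confirm by sampling. The check below is our illustration, not part of the proof; the sampling distributions are arbitrary.

```python
import numpy as np

# Sample roots with Re(lambda) <= -1 and points s in [0, 1], and verify
# |s - lambda| / |lambda| <= 1 + s, with equality exactly at lambda = -1.
rng = np.random.default_rng(2)
n = 10_000
lam = -1.0 - rng.exponential(1.0, n) + 1j * 5.0 * rng.standard_normal(n)
s = rng.uniform(0.0, 1.0, n)

ratio = np.abs(s - lam) / np.abs(lam)
max_slack = float(np.max(ratio - (1.0 + s)))   # should never be positive

# At the extremal root lambda = -1 the ratio equals 1 + s, e.g. s = 0.4:
boundary = abs(0.4 - (-1.0)) / abs(-1.0)
```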


If we want to estimate the constant C(d), some extra work is needed. First, making the change of variables x = 2s/ε, we can see that (2.9) is equivalent (with the same constant C₁) to

    |p(0)|² ≤ C₁ (ε/4) ∫₀² x |p(x)|² dx,

or, equivalently, to the estimate

    |p(1)|² ≤ C₁ (ε/4) ∫_{−1}^{1} (1 − x) |p(x)|² dx        (2.10)

for all polynomials p, deg p ≤ d − 1.

Consider the Jacobi polynomials P_n^{(1,0)}, which are orthogonal with respect to the weight w, w(x) = (1 − x) = (1 − x)¹(1 + x)⁰ on [−1, 1]. Denote by J_n^{(1,0)} the normalized Jacobi polynomials, J_n^{(1,0)} := ‖P_n^{(1,0)}‖_{L²(w)}^{-1} P_n^{(1,0)}.

Since P_n^{(1,0)}(1) = n + 1 and ‖P_n^{(1,0)}‖²_{L²(w)} = 2/(n + 1), we have

    J_n^{(1,0)}(1)² = (n + 1)³/2.                           (2.11)

Writing p = Σ_{n=0}^{d−1} c_n J_n^{(1,0)}, we get

    ∫_{−1}^{1} (1 − x) |p(x)|² dx = ‖p‖²_{L²(w)} = Σ_{n=0}^{d−1} |c_n|²

and, by (2.11),

    p(1) = Σ_{n=0}^{d−1} c_n (n + 1)^{3/2}/√2.

From Cauchy–Schwarz,

    |p(1)|² ≤ (Σ_{n=0}^{d−1} |c_n|²)(Σ_{n=0}^{d−1} (n + 1)³/2) = (1/8) d²(d + 1)² ‖p‖²_{L²(w)}.

Comparing this with (2.10), we can see that (2.10), and consequently (2.9), hold with C₁ = C₁(ε, d) = ε^{-1} d²(d + 1)²/2. From (2.9) and (2.8),

    R_I(0) ≤ C(ε, d) ε^{-1} ∫₀^ε s R_I(s) ds,

with C(ε, d) = ε^{-1} d²(d + 1)² (1 + ε)^{2d}/2. For ε = 1/(2d), we have indeed that

    C(d) = d³(d + 1)² (1 + 1/(2d))^{2d} ≤ e · d³(d + 1)².
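The Jacobi polynomial facts and the final constant bound used above are easy to confirm numerically. The following sketch is our illustration, not part of the proof; it checks P_n^{(1,0)}(1) = n + 1 and the norm 2/(n + 1) via SciPy, the value of the Cauchy–Schwarz sum, and the bound (1 + 1/(2d))^{2d} ≤ e.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi

# P_n^{(1,0)}(1) = n + 1 and ||P_n^{(1,0)}||^2_{L^2(w)} = 2/(n + 1)
# for the weight w(x) = 1 - x on [-1, 1].
values_at_1 = [eval_jacobi(n, 1, 0, 1.0) for n in range(6)]
norms2 = [quad(lambda x, n=n: (1 - x) * eval_jacobi(n, 1, 0, x) ** 2, -1, 1)[0]
          for n in range(6)]

# The Cauchy-Schwarz sum: sum_{n=0}^{d-1} (n+1)^3 / 2 = d^2 (d+1)^2 / 8.
d = 7
cs_sum = sum((n + 1) ** 3 for n in range(d)) / 2.0

# The elementary bound behind C(d): (1 + 1/(2k))^{2k} increases to e.
growth = [(1 + 1 / (2 * k)) ** (2 * k) for k in range(1, 40)]
```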

3. Verifying properties of B_s

It remains to show that B_s satisfies the Bellman function properties. The range property is proved in the following lemma.

Lemma 3.1. For B_s defined above in (2.4), B_s(I) ≤ F_I.


Proof. Let e ∈ F^d. Since W(x) is self-adjoint, an application of the Cauchy–Schwarz inequality gives

    |⟨⟨W^{1/2} f⟩_I, e⟩| ≤ ⟨⟨W⟩_I e, e⟩^{1/2} ⟨|f|²⟩_I^{1/2}.

Therefore, recalling the notation (2.1), (2.3), we get that for any vector e ≠ 0,

    |⟨x_I, e⟩|² / ⟨⟨W⟩_I e, e⟩ ≤ F_I.                       (3.1)

Using Lemma 3.2 below we can write

    ⟨(⟨W⟩_I + sM_I)^{-1} x_I, x_I⟩ = sup_{e≠0} |⟨x_I, e⟩|² / ⟨(⟨W⟩_I + sM_I) e, e⟩ ≤ sup_{e≠0} |⟨x_I, e⟩|² / ⟨⟨W⟩_I e, e⟩ ≤ F_I,

which means exactly that B_s(I) ≤ F_I. □

Lemma 3.2. Let A ≥ 0 be an invertible operator in a Hilbert space H. Then for any vector x ∈ H

    ⟨A^{-1} x, x⟩ = sup_{e∈H: e≠0} |⟨x, e⟩|² / ⟨Ae, e⟩.

Proof. By definition,

    ⟨A^{-1} x, x⟩ = ‖A^{-1/2} x‖² = sup_{a∈H: a≠0} |⟨A^{-1/2} x, a⟩|² / ‖a‖² = sup_{a∈H: a≠0} |⟨x, A^{-1/2} a⟩|² / ‖a‖².

Making the change of variables a = A^{1/2} e, we conclude

    ⟨A^{-1} x, x⟩ = sup_{e∈H: e≠0} |⟨x, e⟩|² / ⟨Ae, e⟩. □
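Lemma 3.2 lends itself to a quick numerical illustration (ours, not from the paper): for a random positive definite A, the Rayleigh-type quotient |⟨x, e⟩|²/⟨Ae, e⟩ never exceeds ⟨A⁻¹x, x⟩, and the supremum is attained at e = A⁻¹x.

```python
import numpy as np

# Random positive definite A on F^4 and a random vector x.
rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4))
A = G @ G.T + np.eye(4)          # positive definite
x = rng.standard_normal(4)

target = x @ np.linalg.solve(A, x)          # <A^{-1} x, x>

def quotient(e):
    # |<x, e>|^2 / <A e, e>
    return (x @ e) ** 2 / (e @ A @ e)

# Random directions never beat the target; e = A^{-1} x attains it.
samples = [quotient(rng.standard_normal(4)) for _ in range(2000)]
attained = quotient(np.linalg.solve(A, x))
```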

The main estimate (2.5) is a consequence of the following lemma.

Lemma 3.3. Let H be a Hilbert space. For x ∈ H and for U a bounded invertible positive operator in H define φ(U, x) := ⟨U^{-1} x, x⟩_H. Then the function φ is convex, and, moreover, if

    x₀ = Σ_k θ_k x_k,    ΔU := U₀ − Σ_k θ_k U_k,

where 0 ≤ θ_k ≤ 1, Σ_k θ_k = 1, then

    Σ_k θ_k φ(U_k, x_k) − φ(U₀, x₀) ≥ ⟨U₀^{-1} ΔU U₀^{-1} x₀, x₀⟩_H.   (3.2)


To see that this lemma implies (2.5), fix s > 0. Denoting

    U_I^s = ⟨W⟩_I + sM_I,    x_I = ⟨W^{1/2} f⟩_I,

we see that B_s(I) = φ(U_I^s, x_I). Let I_k, k ≥ 1, be the children of I, and let θ_k = |I_k|/|I|. Notice that ⟨W⟩_I = Σ_k θ_k ⟨W⟩_{I_k} and M_I = Σ_k θ_k M_{I_k} + ⟨W⟩_I A_I ⟨W⟩_I, so

    U_I^s − Σ_k θ_k U_{I_k}^s =: ΔU^s = s ⟨W⟩_I A_I ⟨W⟩_I.

Therefore, applying Lemma 3.3 with U₀ = U_I^s, x₀ = x_I, U_k = U_{I_k}^s, x_k = x_{I_k}, ΔU = ΔU^s, we get (3.2), which translates exactly to the estimate (2.5).

Proof of Lemma 3.3. The function φ and the right hand side of (3.2) are invariant under the change of variables

    x ↦ U₀^{-1/2} x,    U ↦ U₀^{-1/2} U U₀^{-1/2},          (3.3)

so it is sufficient to prove (3.2) only for U₀ = I. In this case define the function Φ(τ), 0 ≤ τ ≤ 1, as

    Φ(τ) = Σ_k θ_k ⟨(I + τΔU_k)^{-1}(x₀ + τΔx_k), (x₀ + τΔx_k)⟩_H − ⟨x₀, x₀⟩_H,

where Δx_k = x_k − x₀ and ΔU_k = U_k − U₀ = U_k − I. Notice that Σ_k θ_k Δx_k = Σ_k θ_k (x_k − x₀) = 0 and Σ_k θ_k ΔU_k = −ΔU. Hence, using the power series expansion of (I + τΔU_k)^{-1}, we get

    Φ(τ) = τ ⟨ΔU x₀, x₀⟩ + τ² Σ_k θ_k (‖ΔU_k x₀‖² + ‖Δx_k‖² − 2 Re⟨ΔU_k x₀, Δx_k⟩_H) + O(τ³).   (3.4)

Using the above formula for x₀ = (x₁ + x₂)/2, U₀ = (U₁ + U₂)/2 (so ΔU = 0), we get that the second differential of φ at U = I is non-negative (the function φ is clearly analytic, so all the differentials are well defined). The change of variables (3.3) implies that the second differential of φ is non-negative everywhere. In particular, this implies that Φ″(τ) ≥ 0, so Φ is convex.

Returning to the general choice of the arguments U, x, we can see from (3.4) that Φ′(0) = ⟨ΔU x₀, x₀⟩_H. Since Φ is convex and Φ(0) = 0,

    Φ(1) = Σ_k θ_k φ(U_k, x_k) − φ(I, x₀) ≥ Φ′(0) = ⟨ΔU x₀, x₀⟩_H,

which is exactly (3.2) for U₀ = I. □


References

[1] K. Bickel, B. Wick, A study of the matrix Carleson embedding theorem with applications to sparse operators, arXiv:1503.06493.
[2] K. Bickel, S. Petermichl, B. Wick, Bounds for the Hilbert transform with matrix A2 weights, arXiv:1402.3886 [math.CA].
[3] J. Isralowitz, H. Kwon, S. Pott, A matrix weighted T1 theorem for matrix kernelled Calderon-Zygmund operators, arXiv:1401.6570.
[4] R. Kerr, Martingale transforms, the dyadic shift and the Hilbert transform: a sufficient condition for boundedness between matrix weighted spaces, arXiv:0906.4028.
[5] F. Nazarov, G. Pisier, S. Treil, A. Volberg, Sharp estimates in vector Carleson imbedding theorem and for vector paraproducts, J. Reine Angew. Math. 542 (2002), 147–171.
[6] F. Nazarov, S. Treil, A. Volberg, Two weight inequalities for individual Haar multipliers and other well localized operators, Math. Res. Lett. 15 (2008), no. 3, 583–597.
[7] S. Treil, A. Volberg, Wavelets and the angle between past and future, J. Funct. Anal. 143 (1997), no. 2, 269–308.

Department of Mathematics, Brown University, Providence, RI 02912, USA
E-mail address: [email protected]

Department of Mathematics, Brown University, Providence, RI 02912, USA
E-mail address: [email protected]
