OPTIMAL RECOVERY OF TRACES IN HARDY SPACES

K. YU. OSIPENKO AND M. I. STESSIN

Abstract. We construct optimal methods of recovery of traces of analytic functions from the Hardy space $H^2(B^n)$ on the sphere of radius $\rho$ using inaccurate information about their traces on the spheres of radii $r_1$ and $r_2$, $r_1 < \rho < r_2$. We show that instead of using the whole information about the trace on the sphere of radius $r_1$ we can use only its projection onto a space of polynomials given with the same accuracy. Moreover, we construct a collection of optimal recovery methods.

1. Introduction

Let $B^n$ stand for the unit ball in $\mathbb{C}^n$,
$$B^n = \Bigl\{\, z = (z_1, \dots, z_n) \in \mathbb{C}^n : |z|^2 = \sum_{j=1}^n |z_j|^2 < 1 \,\Bigr\}.$$

Recall that the Hardy space $H^2(B^n)$ consists of all functions $f$ holomorphic in $B^n$ such that
$$\|f\|^2_{H^2(B^n)} = \sup_{0<r<1} \int_{S} |f(rz)|^2\, d\sigma(z) < \infty,$$
where $\sigma$ is the normalized surface measure on the unit sphere $S = \partial B^n$.

Since for any $C_1, C_2 > 0$
$$\sup_{\substack{f\in H^2(B^n)\\ \hat\lambda_1\|I_N f(r_1z)\|^2_{L_2(\sigma)} + \hat\lambda_2\|I^M f(r_2z)\|^2_{L_2(\sigma)} \le C_1}} \|f(\rho z)\|_{L_2(\sigma)} \;=\; \sqrt{\frac{C_1}{C_2}}\; \sup_{\substack{f\in H^2(B^n)\\ \hat\lambda_1\|I_N f(r_1z)\|^2_{L_2(\sigma)} + \hat\lambda_2\|I^M f(r_2z)\|^2_{L_2(\sigma)} \le C_2}} \|f(\rho z)\|_{L_2(\sigma)},$$
we obtain
$$\|f(\rho z) - \hat m(y_1^*, y_2^*)(\rho z)\|^2_{L_2(\sigma)} = \|g(\rho z)\|^2_{L_2(\sigma)} \le \sup_{\substack{f\in H^2(B^n)\\ \hat\lambda_1\|I_N f(r_1z)\|^2_{L_2(\sigma)} + \hat\lambda_2\|I^M f(r_2z)\|^2_{L_2(\sigma)} \le \hat\lambda_1(\delta_1+\varepsilon)^2 + \hat\lambda_2(\delta_2+\varepsilon)^2}} \|f(\rho z)\|^2_{L_2(\sigma)}$$
$$= \frac{\hat\lambda_1(\delta_1+\varepsilon)^2 + \hat\lambda_2(\delta_2+\varepsilon)^2}{S^2}\; \sup_{\substack{f\in H^2(B^n)\\ \hat\lambda_1\|I_N f(r_1z)\|^2_{L_2(\sigma)} + \hat\lambda_2\|I^M f(r_2z)\|^2_{L_2(\sigma)} \le S^2}} \|f(\rho z)\|^2_{L_2(\sigma)} \le \hat\lambda_1(\delta_1+\varepsilon)^2 + \hat\lambda_2(\delta_2+\varepsilon)^2 \le \Bigl(S + \varepsilon\sqrt{\hat\lambda_1+\hat\lambda_2}\Bigr)^2.$$
Since $\varepsilon > 0$ is arbitrary, we obtain
$$E_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) \le e_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2, \hat m) \le S.$$
This and (7) imply $E_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) = S$, and $\hat m$ is an optimal method. $\square$

First we find the error of optimal recovery. We introduce the following notation:
$$\alpha = 1 - \log\frac{r_2^2 - r_1^2}{\rho^2 - r_1^2}\Big/\Bigl(2\log\frac{r_2}{\rho}\Bigr), \qquad s_0 = (N + \lceil\alpha\rceil)_+, \tag{12}$$


where $\lceil x\rceil = \inf\{\, n \in \mathbb{Z} : n \ge x \,\}$ is the ceiling function, $(x)_+ = \max\{x, 0\}$, $a(-1) = -1$, and for $N \ge 0$
$$a(N) = \Biggl(s_0 + \log\Bigl(1 - \Bigl(\frac{\rho}{r_2}\Bigr)^{2(N-s_0+1)}\Bigr)\Big/\Bigl(2\log\frac{\rho}{r_1}\Bigr)\Biggr)_+,$$
$$\hat N = \begin{cases} \bigl(-\lceil\alpha\rceil + s_1\bigr)_+, & \delta_2 > \delta_1,\\ 0, & \delta_2 \le \delta_1, \end{cases} \qquad s_1 = \Biggl\lceil \frac{\log(\delta_2/\delta_1)}{\log(r_2/r_1)} \Biggr\rceil,$$
$$\hat M = \Biggl(s_1 - \log\frac{r_2^2 - r_1^2}{r_2^2 - \rho^2}\Big/\Bigl(2\log\frac{\rho}{r_1}\Bigr)\Biggr)_+ - 1.$$
For $N \ge M \ge -1$ consider the sets
$$\Sigma_1 = \{\, (N, M) : a(N) \le M \le N,\ M \ge 0 \,\},$$
$$\Sigma_2 = \{\, (N, M) : -1 \le M < a(N),\ N < \hat N \,\} \cup \{(-1, -1)\},$$
$$\Sigma_3 = \{\, (N, M) : -1 \le M \le \hat M,\ N \ge \hat N \,\},$$
$$\Sigma_4 = \{\, (N, M) : \hat M < M < a(N),\ N \ge \hat N \,\}.$$

Theorem 3. The error of optimal recovery is given by
$$E_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) = \sqrt{\hat\lambda_1\delta_1^2 + \hat\lambda_2\delta_2^2},$$
where
$$\hat\lambda_1 = \begin{cases} \bigl(\tfrac{\rho}{r_1}\bigr)^{2M}, & (N, M) \in \Sigma_1 \cup \Sigma_4,\\[4pt] \bigl(\tfrac{\rho}{r_1}\bigr)^{2s_0}\Bigl(1 - \bigl(\tfrac{\rho}{r_2}\bigr)^{2(N-s_0+1)}\Bigr), & (N, M) \in \Sigma_2,\\[4pt] \bigl(\tfrac{\rho}{r_1}\bigr)^{2(s_1-1)}\dfrac{r_2^2 - \rho^2}{r_2^2 - r_1^2}, & (N, M) \in \Sigma_3,\ \delta_2 > \delta_1,\\[4pt] 0, & (N, M) \in \Sigma_3,\ \delta_2 \le \delta_1, \end{cases}$$
$$\hat\lambda_2 = \begin{cases} \bigl(\tfrac{\rho}{r_2}\bigr)^{2(N+1)}, & (N, M) \in \Sigma_1 \cup \Sigma_2,\\[4pt] \bigl(\tfrac{\rho}{r_2}\bigr)^{2(s_1-1)}\dfrac{\rho^2 - r_1^2}{r_2^2 - r_1^2}, & (N, M) \in \Sigma_3,\ \delta_2 > \delta_1,\\[4pt] 1, & (N, M) \in \Sigma_3,\ \delta_2 \le \delta_1,\\[4pt] \bigl(\tfrac{\rho}{r_2}\bigr)^{2s_2}\Bigl(1 - \bigl(\tfrac{\rho}{r_1}\bigr)^{2(M-s_2)}\Bigr), & (N, M) \in \Sigma_4, \end{cases}$$


and
$$s_2 = M + \Biggl\lceil \log\frac{r_2^2 - r_1^2}{r_2^2 - \rho^2}\Big/\Bigl(2\log\frac{\rho}{r_1}\Bigr) \Biggr\rceil. \tag{13}$$
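For readers who want to experiment with the case analysis, the following Python sketch (our illustration; the function name, parameter values, and code organization are ours, not the paper's) evaluates $\alpha$, $s_0$, $s_1$, $\hat N$, $\hat M$, and $s_2$ by (12) and (13), classifies a pair $(N, M)$ into $\Sigma_1$–$\Sigma_4$, and returns $\hat\lambda_1$, $\hat\lambda_2$ together with the optimal recovery error of Theorem 3. Note that the error does not involve the dimension $n$.

```python
import math

def hardy_recovery_error(n_deg, m_deg, r1, rho, r2, d1, d2):
    """lambda_1, lambda_2, and the error of Theorem 3 for N >= M >= -1
    and 0 < r1 < rho < r2 (numerical illustration only)."""
    assert 0 < r1 < rho < r2 and d1 > 0 and d2 > 0 and n_deg >= m_deg >= -1
    # (12): alpha and s0 = (N + ceil(alpha))_+; one checks alpha <= 0 always
    alpha = 1 - math.log((r2**2 - r1**2) / (rho**2 - r1**2)) / (2 * math.log(r2 / rho))
    s0 = max(n_deg + math.ceil(alpha), 0)
    # s1, N_hat, M_hat
    s1 = math.ceil(math.log(d2 / d1) / math.log(r2 / r1))
    n_hat = max(-math.ceil(alpha) + s1, 0) if d2 > d1 else 0
    m_hat = max(s1 - math.log((r2**2 - r1**2) / (r2**2 - rho**2))
                / (2 * math.log(rho / r1)), 0) - 1

    def a(N):
        # a(-1) = -1; for N >= 0 the formula from the text
        if N < 0:
            return -1
        sN = max(N + math.ceil(alpha), 0)
        return max(sN + math.log(1 - (rho / r2)**(2 * (N - sN + 1)))
                   / (2 * math.log(rho / r1)), 0)

    aN = a(n_deg)
    # classify (N, M) and evaluate the corresponding pair of multipliers
    if m_deg >= 0 and aN <= m_deg <= n_deg:                       # Sigma_1
        l1 = (rho / r1)**(2 * m_deg)
        l2 = (rho / r2)**(2 * (n_deg + 1))
    elif n_deg < n_hat or (n_deg, m_deg) == (-1, -1):             # Sigma_2
        l1 = (rho / r1)**(2 * s0) * (1 - (rho / r2)**(2 * (n_deg - s0 + 1)))
        l2 = (rho / r2)**(2 * (n_deg + 1))
    elif m_deg <= m_hat:                                          # Sigma_3
        if d2 > d1:
            l1 = (rho / r1)**(2 * (s1 - 1)) * (r2**2 - rho**2) / (r2**2 - r1**2)
            l2 = (rho / r2)**(2 * (s1 - 1)) * (rho**2 - r1**2) / (r2**2 - r1**2)
        else:
            l1, l2 = 0.0, 1.0
    else:                                                         # Sigma_4
        # (13): s2 = M + ceil(log((r2^2-r1^2)/(r2^2-rho^2)) / (2 log(rho/r1)))
        s2 = m_deg + math.ceil(math.log((r2**2 - r1**2) / (r2**2 - rho**2))
                               / (2 * math.log(rho / r1)))
        l1 = (rho / r1)**(2 * m_deg)
        l2 = (rho / r2)**(2 * s2) * (1 - (rho / r1)**(2 * (m_deg - s2)))
    return l1, l2, math.sqrt(l1 * d1**2 + l2 * d2**2)
```

For example, with $r_1 = 0.5$, $\rho = 0.7$, $r_2 = 0.9$ the pair $(N, M) = (-1, -1)$ falls into $\Sigma_2$ and the error reduces to $\delta_2$, in agreement with the case $M = N = -1$ treated in the proof below.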

Proof. In accordance with Theorem 2 we should find $\hat\lambda_1$, $\hat\lambda_2$, and $\hat f \in H^2(B^n)$ admissible in (6) which satisfy conditions (a) and (b). Assume that $f \in H^2(B^n)$; then it can be represented in the form
$$f(z) = \sum_{j=0}^{\infty} \sum_{|\alpha|=j} c_\alpha z^\alpha.$$
Since the monomials $z^\alpha$ form an orthogonal system in $H^2(B^n)$ with
$$\|z^\alpha\|^2_{H^2(B^n)} = \frac{n!\,\alpha!}{(n + |\alpha| - 1)!}$$
(see [2], Sect. 1.4.9), we have
$$L(f, \lambda_1, \lambda_2) = \sum_{j=0}^{M} b_j r_1^{2j}\Bigl(-\bigl(\tfrac{\rho}{r_1}\bigr)^{2j} + \lambda_1\Bigr) + \sum_{j=M+1}^{N} b_j r_1^{2j}\Bigl(-\bigl(\tfrac{\rho}{r_1}\bigr)^{2j} + \lambda_1 + \lambda_2\bigl(\tfrac{r_2}{r_1}\bigr)^{2j}\Bigr) + \sum_{j=N+1}^{\infty} b_j r_1^{2j}\Bigl(-\bigl(\tfrac{\rho}{r_1}\bigr)^{2j} + \lambda_2\bigl(\tfrac{r_2}{r_1}\bigr)^{2j}\Bigr),$$
where
$$b_j = \sum_{|\alpha|=j} \frac{n!\,\alpha!\,|c_\alpha|^2}{(n + j - 1)!}.$$
Consider the set of points on the plane $\mathbb{R}^2$
$$x_j = \bigl(\tfrac{r_2}{r_1}\bigr)^{2j}, \qquad y_j = \bigl(\tfrac{\rho}{r_1}\bigr)^{2j}, \qquad j = 0, 1, \dots. \tag{14}$$
These points belong to the plot of the concave function
$$x = \bigl(\tfrac{r_2}{r_1}\bigr)^{2t}, \qquad y = \bigl(\tfrac{\rho}{r_1}\bigr)^{2t}, \qquad t \in [0, \infty).$$


Consequently, the piecewise linear function passing through the points (14) is also concave. There exists a minimal integer $s_0$, $0 \le s_0 \le N$, such that all points (14) lie under the line passing through the point $(x_{s_0}, y_{s_0})$ with slope $y_{N+1}/x_{N+1}$. The number $s_0$ may be formally defined as follows:
$$s_0 = \min\Bigl\{\, s \in \mathbb{Z}_+ : \frac{y_{s+1} - y_s}{x_{s+1} - x_s} \le \frac{y_{N+1}}{x_{N+1}} \,\Bigr\}.$$
It is easily verified that this number may also be defined by (12). Let $y = \hat\lambda_1 + \hat\lambda_2 x$ be the line
$$y - y_{s_0} = \frac{y_{N+1}}{x_{N+1}}(x - x_{s_0}),$$
that is,
$$\hat\lambda_1 = \bigl(\tfrac{\rho}{r_1}\bigr)^{2s_0}\Bigl(1 - \bigl(\tfrac{\rho}{r_2}\bigr)^{2(N-s_0+1)}\Bigr), \qquad \hat\lambda_2 = \bigl(\tfrac{\rho}{r_2}\bigr)^{2(N+1)}. \tag{15}$$
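Both characterizations of $s_0$ — the minimal-slope definition and formula (12) — and the fact that the line (15) lies above all the points (14) are easy to confirm numerically. The sketch below (our illustration; the radii and $N$ are chosen arbitrarily) does exactly this.

```python
import math

r1, rho, r2 = 0.5, 0.7, 0.9        # sample radii with r1 < rho < r2
N = 7

x = lambda j: (r2 / r1)**(2 * j)   # points (14)
y = lambda j: (rho / r1)**(2 * j)

# s0 via the minimal-slope characterization
slope_end = y(N + 1) / x(N + 1)
s0_slope = min(s for s in range(N + 1)
               if (y(s + 1) - y(s)) / (x(s + 1) - x(s)) <= slope_end)

# s0 via formula (12)
alpha = 1 - math.log((r2**2 - r1**2) / (rho**2 - r1**2)) / (2 * math.log(r2 / rho))
s0_formula = max(N + math.ceil(alpha), 0)
assert s0_slope == s0_formula

# the line (15) dominates every point (14)
lam2 = (rho / r2)**(2 * (N + 1))
lam1 = (rho / r1)**(2 * s0_slope) * (1 - (rho / r2)**(2 * (N - s0_slope + 1)))
assert all(-y(j) + lam1 + lam2 * x(j) >= -1e-9 for j in range(200))
```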

Then for all $j \in \mathbb{Z}_+$
$$-\bigl(\tfrac{\rho}{r_1}\bigr)^{2j} + \hat\lambda_1 + \hat\lambda_2\bigl(\tfrac{r_2}{r_1}\bigr)^{2j} \ge 0.$$

Assume that $M \ge 0$ and
$$\bigl(\tfrac{\rho}{r_1}\bigr)^{2M} \ge \hat\lambda_1.$$
It means that
$$M \ge \Biggl(s_0 + \log\Bigl(1 - \Bigl(\frac{\rho}{r_2}\Bigr)^{2(N-s_0+1)}\Bigr)\Big/\Bigl(2\log\frac{\rho}{r_1}\Bigr)\Biggr)_+ = a(N).$$
Then, putting
$$\hat\lambda_1^0 = \bigl(\tfrac{\rho}{r_1}\bigr)^{2M},$$
we have $L(f, \hat\lambda_1^0, \hat\lambda_2) \ge 0$ for all $f \in H^2(B^n)$. Consider the function
$$\hat f(z) = \frac{\delta_1}{r_1^M}\sqrt{\frac{(n+M-1)!}{n!\,M!}}\, z_1^M + \frac{\delta_2}{r_2^{N+1}}\sqrt{\frac{(n+N)!}{n!\,(N+1)!}}\, z_1^{N+1}.$$
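The admissibility of $\hat f$ reduces to a factorial computation: $I_N$ keeps only the degree-$M$ term of $\hat f$ and $I^M$ keeps only the degree-$(N+1)$ term, so the two norms can be evaluated directly from the monomial norms of [2], Sect. 1.4.9. A minimal numerical check (our illustration; the sample values of $n$, $N$, $M$, $r_i$, $\delta_i$ are ours):

```python
import math

def mono_norm_sq(n, k):
    # ||z_1^k||^2 in H^2(B^n): n! k! / (n + k - 1)!  (see [2], Sect. 1.4.9)
    return math.factorial(n) * math.factorial(k) / math.factorial(n + k - 1)

n, N, M = 2, 5, 3
r1, r2, d1, d2 = 0.5, 0.9, 0.1, 0.05

# coefficients of f_hat = c1 * z1^M + c2 * z1^(N+1)
c1 = d1 / r1**M * math.sqrt(1 / mono_norm_sq(n, M))
c2 = d2 / r2**(N + 1) * math.sqrt(1 / mono_norm_sq(n, N + 1))

# I_N keeps only the degree-M term; I^M keeps only the degree-(N+1) term
norm_IN = c1**2 * r1**(2 * M) * mono_norm_sq(n, M)
norm_IM = c2**2 * r2**(2 * (N + 1)) * mono_norm_sq(n, N + 1)
assert abs(norm_IN - d1**2) < 1e-12 and abs(norm_IM - d2**2) < 1e-12
```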


It is easy to verify that $\hat f \in H^2(B^n)$ is admissible in (6) and satisfies (b). Moreover, $L(\hat f, \hat\lambda_1^0, \hat\lambda_2) = 0$; hence condition (a) is also fulfilled. Applying Theorem 2 we obtain that for $(N, M) \in \Sigma_1$
$$E_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) = \sqrt{\bigl(\tfrac{\rho}{r_1}\bigr)^{2M}\delta_1^2 + \bigl(\tfrac{\rho}{r_2}\bigr)^{2(N+1)}\delta_2^2}.$$
Assume now that $M = N = -1$. Then for all $f \in H^2(B^n)$, $L(f, 0, 1) \ge 0$. For $\hat f(z) = \delta_2$,
$$\|I_{-1}\hat f(r_1 z)\|^2_{L_2(\sigma)} = 0, \qquad \|I^{-1}\hat f(r_2 z)\|^2_{L_2(\sigma)} = \delta_2^2.$$
Thus from Theorem 2 we obtain that $E_\rho^{-1,-1}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) = \delta_2$.

If $\delta_2 \le \delta_1$, then $\hat N = 0$ and $\Sigma_2 = \{(-1, -1)\}$. Suppose that $\delta_2 > \delta_1$. It is easy to show that in this case
$$\hat N = \min\Bigl\{\, N \in \mathbb{N} : \bigl(\tfrac{r_2}{r_1}\bigr)^{s_0} \ge \frac{\delta_2}{\delta_1} \,\Bigr\}.$$
Assume that $-1 \le M < a(N)$ and $N < \hat N$. Then
$$\bigl(\tfrac{\rho}{r_1}\bigr)^{2M} < \hat\lambda_1,$$
and we can choose the parameter $a$ so that $\hat f$ is admissible in (6) and satisfies (b). Hence, using Theorem 2, the case $(N, M) \in \Sigma_2$ is proved.

Now assume that
$$\frac{\delta_2}{\delta_1} \le \bigl(\tfrac{r_2}{r_1}\bigr)^{s_0}.$$
It means that $N \ge \hat N$. Let us find $s_1 \le s_0$ from the condition
$$\bigl(\tfrac{r_2}{r_1}\bigr)^{s_1-1} < \frac{\delta_2}{\delta_1} \le \bigl(\tfrac{r_2}{r_1}\bigr)^{s_1}. \tag{16}$$


Hence


$$s_1 = \Biggl\lceil \frac{\log(\delta_2/\delta_1)}{\log(r_2/r_1)} \Biggr\rceil.$$
Suppose that $s_1 \ge 1$. Consider the line which passes through the points $(x_{s_1-1}, y_{s_1-1})$ and $(x_{s_1}, y_{s_1})$. It has the form $y = \hat\lambda_1 + \hat\lambda_2 x$, where now
$$\hat\lambda_1 = \bigl(\tfrac{\rho}{r_1}\bigr)^{2(s_1-1)}\frac{r_2^2 - \rho^2}{r_2^2 - r_1^2}, \qquad \hat\lambda_2 = \bigl(\tfrac{\rho}{r_2}\bigr)^{2(s_1-1)}\frac{\rho^2 - r_1^2}{r_2^2 - r_1^2}. \tag{17}$$
In view of the concavity of the piecewise linear function passing through the points (14), for all these points
$$-y_j + \hat\lambda_1 + \hat\lambda_2 x_j \ge 0.$$
Since the values
$$\bigl(\tfrac{\rho}{r_2}\bigr)^{2(s-1)}\frac{\rho^2 - r_1^2}{r_2^2 - r_1^2}$$
decrease as $s \to \infty$, we have
$$\hat\lambda_2 \ge \bigl(\tfrac{\rho}{r_2}\bigr)^{2(s_0-1)}\frac{\rho^2 - r_1^2}{r_2^2 - r_1^2} \ge \bigl(\tfrac{\rho}{r_2}\bigr)^{2(N+1)}.$$
Thus if $M = -1$, or $M \ge 0$ and
$$\hat\lambda_1 \ge \bigl(\tfrac{\rho}{r_1}\bigr)^{2M}, \tag{18}$$
then for all $f \in H^2(B^n)$, $L(f, \hat\lambda_1, \hat\lambda_2) \ge 0$. Condition (18) can be rewritten in the form
$$M \le s_1 - 1 - \log\frac{r_2^2 - r_1^2}{r_2^2 - \rho^2}\Big/\Bigl(2\log\frac{\rho}{r_1}\Bigr).$$
Consider the function
$$\hat f(z) = a\sqrt{\frac{(n+s_1-2)!}{n!\,(s_1-1)!}}\, z_1^{s_1-1} + b\sqrt{\frac{(n+s_1-1)!}{n!\,s_1!}}\, z_1^{s_1}.$$
We want to find the parameters $a$ and $b$ from the conditions
$$\|I_N \hat f(r_1 z)\|^2_{L_2(\sigma)} = \delta_1^2, \qquad \|I^M \hat f(r_2 z)\|^2_{L_2(\sigma)} = \delta_2^2. \tag{19}$$
It means that
$$a^2 r_1^{2(s_1-1)} + b^2 r_1^{2s_1} = \delta_1^2, \qquad a^2 r_2^{2(s_1-1)} + b^2 r_2^{2s_1} = \delta_2^2.$$
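The two equalities above form a linear $2\times 2$ system in $a^2$ and $b^2$, and condition (16) is exactly what makes both solutions nonnegative. A short numerical sketch (our illustration; the sample values are ours and are chosen to satisfy (16)):

```python
import math

r1, r2 = 0.5, 0.9
d1, d2 = 0.1, 0.15                      # delta_2 > delta_1
s1 = math.ceil(math.log(d2 / d1) / math.log(r2 / r1))
# condition (16): (r2/r1)^(s1-1) < d2/d1 <= (r2/r1)^s1
assert (r2 / r1)**(s1 - 1) < d2 / d1 <= (r2 / r1)**s1

# solve  a^2 r1^(2s1-2) + b^2 r1^(2s1) = d1^2,
#        a^2 r2^(2s1-2) + b^2 r2^(2s1) = d2^2   by Cramer's rule
det = r1**(2*s1 - 2) * r2**(2*s1) - r1**(2*s1) * r2**(2*s1 - 2)
a2 = (d1**2 * r2**(2*s1) - d2**2 * r1**(2*s1)) / det
b2 = (d2**2 * r1**(2*s1 - 2) - d1**2 * r2**(2*s1 - 2)) / det
assert a2 >= 0 and b2 >= 0              # (16) makes both squares nonnegative

# the solution reproduces the right-hand sides
assert abs(a2 * r1**(2*s1 - 2) + b2 * r1**(2*s1) - d1**2) < 1e-12
assert abs(a2 * r2**(2*s1 - 2) + b2 * r2**(2*s1) - d2**2) < 1e-12
```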


It is easy to verify that condition (16) allows one to find such parameters. Thus we have constructed a function $\hat f$ which is admissible in (6) and satisfies (b). From Theorem 2 we obtain the value of the optimal recovery error.

Now suppose that $s_1 < 1$. It means that $\delta_2 \le \delta_1$. In this case $\hat M = -1$. Then for all $f \in H^2(B^n)$, $L(f, 0, 1) \ge 0$. For $\hat f(z) = \delta_2$,
$$\|I_N \hat f(r_1 z)\|^2_{L_2(\sigma)} = \delta_2^2 \le \delta_1^2, \qquad \|I^{-1}\hat f(r_2 z)\|^2_{L_2(\sigma)} = \delta_2^2.$$
Thus from Theorem 2 we obtain that $E_\rho^{N,-1}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) = \delta_2$.

Finally, consider the case when $\hat M < M < a(N)$ and $N \ge \hat N$ (that is, $(N, M) \in \Sigma_4$). Let $y = \hat\lambda_1 + \hat\lambda_2 x$ be the tangent to the piecewise linear function passing through the points (14) that passes through the point $(0, (\rho/r_1)^{2M})$. That is,
$$\hat\lambda_1 = \bigl(\tfrac{\rho}{r_1}\bigr)^{2M}, \qquad \hat\lambda_2 = \bigl(\tfrac{\rho}{r_2}\bigr)^{2s_2}\Bigl(1 - \bigl(\tfrac{\rho}{r_1}\bigr)^{2(M-s_2)}\Bigr),$$
where $s_2$ is the minimal number such that the tangent contains the point $(x_{s_2}, y_{s_2})$ from the set of points (14). Thus $s_1 \le s_2 \le s_0$. The number $s_2$ may be defined as the number satisfying the condition
$$\frac{y_{s_2} - y_{s_2-1}}{x_{s_2} - x_{s_2-1}} > \frac{y_{s_2} - (\rho/r_1)^{2M}}{x_{s_2}} \ge \frac{y_{s_2+1} - y_{s_2}}{x_{s_2+1} - x_{s_2}}.$$
It can be shown that $s_2$ may be defined by (13). Again, for all points (14)
$$-y_j + \hat\lambda_1 + \hat\lambda_2 x_j \ge 0,$$
and for all $f \in H^2(B^n)$, $L(f, \hat\lambda_1, \hat\lambda_2) \ge 0$. Consider the function
$$\hat f(z) = a\sqrt{\frac{(n+M-1)!}{n!\,M!}}\, z_1^M + b\sqrt{\frac{(n+s_2-1)!}{n!\,s_2!}}\, z_1^{s_2}.$$
Now, to satisfy conditions (19), we have to choose $a$ and $b$ such that
$$a^2 r_1^{2M} + b^2 r_1^{2s_2} = \delta_1^2, \qquad b^2 r_2^{2s_2} = \delta_2^2.$$
Taking into account (16) we have
$$\bigl(\tfrac{r_1}{r_2}\bigr)^{s_2} \le \bigl(\tfrac{r_1}{r_2}\bigr)^{s_1} \le \frac{\delta_1}{\delta_2}.$$
Hence we can find parameters $a$ and $b$ satisfying (19). It remains to use Theorem 2. $\square$


Now we construct a family of optimal recovery methods for the sets $\Sigma_j$, $j = 2, 3, 4$. Put
$$(N', M') = \begin{cases} (N, M), & (N, M) \in \Sigma_1,\\ (N, a(N)), & (N, M) \in \Sigma_2,\\ (\hat N, \hat M), & (N, M) \in \Sigma_3,\\ (a^{-1}(M), M), & (N, M) \in \Sigma_4, \end{cases}$$
where
$$a^{-1}(M) = \min\{\, N \in \mathbb{Z}_+ : a(N) \ge M \,\}.$$

Theorem 4. For all integers $\tilde N$ and $\tilde M$ such that $N' \le \tilde N \le N$ and $M \le \tilde M \le M'$, the methods $\hat m_{\tilde N,\tilde M}(y_1, y_2)$ are optimal for problem (5).

Proof. Let $f \in H^2(B^n)$, $y_1 \in Y_N(f, r_1, \delta_1)$, $y_2 \in Y^M(f, r_2, \delta_2)$, and
$$y_k(r_k z) = \sum_{j=0}^{\infty} \sum_{|\alpha|=j} r_k^j c_\alpha^{(k)} z^\alpha + \tilde y_k(z), \qquad k = 1, 2,$$
where $\tilde y_k$, $k = 1, 2$, are orthogonal to all holomorphic polynomials in $L_2(\sigma)$. Then
$$\|f(\rho z) - \hat m_{\tilde N,\tilde M}(y_1, y_2)(\rho z)\|_{L_2(\sigma)} = \|f(\rho z) - \hat m_{\tilde N,\tilde M}(y_1', y_2')(\rho z)\|_{L_2(\sigma)},$$
where
$$y_1'(z) = \sum_{j=0}^{\tilde N} \sum_{|\alpha|=j} c_\alpha^{(1)} z^\alpha, \qquad y_2'(z) = \sum_{j=\tilde M+1}^{\infty} \sum_{|\alpha|=j} c_\alpha^{(2)} z^\alpha.$$
Moreover, $y_1' \in Y_{\tilde N}(f, r_1, \delta_1)$ and $y_2' \in Y^{\tilde M}(f, r_2, \delta_2)$. Thus
$$e_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2, \hat m_{\tilde N,\tilde M}) \le e_\rho^{\tilde N,\tilde M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2, \hat m_{\tilde N,\tilde M}).$$
From this inequality we have
$$E_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) \le e_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2, \hat m_{\tilde N,\tilde M})$$
$$\le e_\rho^{\tilde N,\tilde M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2, \hat m_{\tilde N,\tilde M}) = E_\rho^{\tilde N,\tilde M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2).$$
Using the fact that
$$E_\rho^{N,M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2) = E_\rho^{\tilde N,\tilde M}(H^2(B^n), r_1, r_2, \delta_1, \delta_2),$$
we obtain that the methods $\hat m_{\tilde N,\tilde M}$ are optimal. $\square$

Returning to problem (1) we obtain

Corollary 1. For all integers $N$ and $M$ such that $\hat N \le N < \infty$ and $-1 \le M \le \hat M$, the methods $\hat m_{N,M}(y_1, y_2)$ are optimal for problem (1).


Now we would like to consider the following question: is it possible to choose, in some sense, a best method among all these optimal methods? Usually numerical algorithms (for instance, interpolation or quadrature formulae) are considered good if they are precise on some set of functions (for instance, subspaces of algebraic or trigonometric polynomials). In this connection one tries to make this set as large as possible. For example, the Gauss quadrature formula is constructed so that the degree $n$ is the largest for which all polynomials of degree at most $n$ are integrated exactly.

We say that a method $m(y_1, y_2)$ is precise in the first (second) argument at a function $f \in H^2(B^n)$ for problem (5) if
$$m(I_N f, 0) = f \qquad (m(0, I^M f) = f).$$
We say that a method $m(y_1, y_2)$ is precise in the first (second) argument on a set $W$ if it is precise in the first (second) argument at all functions from $W$.

It is easily seen that for all $N' \le \tilde N \le N$ and $M \le \tilde M \le M'$ the methods $\hat m_{\tilde N,\tilde M}(y_1, y_2)$ are precise in the first (second) argument on the spaces $\mathcal{P}_{\tilde M}$ ($\mathcal{P}^\perp_{\tilde N}$), where $\mathcal{P}_{\tilde M}$ is the space of polynomials of degree at most $\tilde M$ and $\mathcal{P}^\perp_{\tilde N}$ is the space of functions from $H^2(B^n)$ orthogonal to $\mathcal{P}_{\tilde N}$. Taking into account the arguments given above, we obtain that the method $\hat m_{N',\lfloor M'\rfloor}(y_1, y_2)$ is the best for problem (5) and the method $\hat m_{\hat N,\lfloor \hat M\rfloor}(y_1, y_2)$ is the best for problem (1).

References

[1] K. Yu. Osipenko and M. I. Stessin, "Hadamard and Schwarz type theorems and optimal recovery in spaces of analytic functions", Constr. Approx. (2008), to appear.
[2] W. Rudin, Function Theory in the Unit Ball of $\mathbb{C}^n$, Springer-Verlag, New York, 1980.

Department of Mathematics, "MATI" — Russian State Technological University, Orshanskaya 3, Moscow, 121552, Russia
E-mail address: [email protected]

Department of Mathematics and Statistics, University at Albany, Albany, NY 12222, USA
E-mail address: [email protected]