ILLINOIS JOURNAL OF MATHEMATICS Volume 22, Number 4, December 1978

SPLINE SPACES ARE OPTIMAL FOR L² n-WIDTH

BY

AVRAHAM A. MELKMAN AND CHARLES A. MICCHELLI

1. Introduction

Let $X = (X, \|\cdot\|)$ be a normed linear space, $\mathscr{A}$ a subset of $X$, and $X_n$ an $n$-dimensional linear subspace of $X$. The Kolmogorov $n$-width of $\mathscr{A}$ relative to $X$ is defined by

$$d_n(\mathscr{A}; X) = d_n(\mathscr{A}) = \inf_{X_n} \sup_{y \in \mathscr{A}} \inf_{x \in X_n} \|y - x\|.$$

$X_n$ is called an optimal subspace for $\mathscr{A}$ provided that

$$d_n(\mathscr{A}) = \sup_{y \in \mathscr{A}} \inf_{x \in X_n} \|y - x\|.$$

This concept of $n$-width was introduced by Kolmogorov in [8], and in his paper he finds the exact value of the $n$-width for

$$W_2^r[0, 1] = \{f : f^{(r-1)} \text{ abs. cont. on } (0, 1),\ \|f^{(r)}\| \le 1\}.$$

[…] ($[f_1, \ldots, f_m]$ denotes the linear space spanned by $f_1, \ldots, f_m$). Furthermore, interpolation of $f \in W_2^r[0, 1]$ at $\eta_{1,r}, \ldots, \eta_{n+r,r}$ by an element of $X_{n+r}$ is an optimal method of approximating $W_2^r[0, 1]$. In addition, the space of natural splines,

$$X_{n+r} = \{S \in [1, x, \ldots, x^{2r-1}, (x - \eta_{1,r})_+^{2r-1}, \ldots, (x - \eta_{n+r,r})_+^{2r-1}] : S^{(i)}(0) = S^{(i)}(1) = 0,\ i = r, \ldots, 2r - 1\}$$

(where $x_+^{2r-1} = x^{2r-1}$ for $x \ge 0$, and zero otherwise), is an optimal subspace for the $(n + r)$-width of $W_2^r[0, 1]$, and interpolation at $\eta_{1,r}, \ldots, \eta_{n+r,r}$ is an optimal method of approximating $W_2^r[0, 1]$.
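For intuition, the dimension count for this natural-spline space can be checked numerically. The sketch below ($r = 2$, cubic natural splines, with arbitrarily chosen knot values; all concrete numbers are my own illustrative choices, not from the paper) verifies that the $2r$ boundary conditions cut the truncated-power spanning set down to dimension $n + r$:

```python
import numpy as np

# Natural splines of degree 2r - 1 = 3 (r = 2), with n + r = 5 knots in (0, 1).
r = 2
knots = [1/6, 2/6, 3/6, 4/6, 5/6]        # hypothetical knots eta_1, ..., eta_{n+r}
n = len(knots) - r                        # n = 3

# Spanning set: 1, x, x^2, x^3, (x - eta_k)_+^3  (9 functions).  The natural
# boundary conditions are S^(i)(0) = S^(i)(1) = 0 for i = r, ..., 2r - 1,
# i.e. S''(0) = S'''(0) = S''(1) = S'''(1) = 0.
def d2(x):   # second derivatives of the basis functions at x
    return [0.0, 0.0, 2.0, 6.0 * x] + [6.0 * max(x - e, 0.0) for e in knots]

def d3(x):   # third derivatives (one-sided) of the basis functions at x
    return [0.0, 0.0, 0.0, 6.0] + [6.0 if x > e else 0.0 for e in knots]

C = np.array([d2(0.0), d3(0.0), d2(1.0), d3(1.0)])   # 4 x 9 constraint matrix
dim = C.shape[1] - np.linalg.matrix_rank(C)          # dimension of null space
print(dim)   # prints 5 = n + r
```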

We also include in Section 4 a matrix formulation of our results on totally positive integral operators, as well as, in Section 3, the computation of n-widths under restricted approximation. This latter problem allows us to answer the following question of optimal estimation: given $f$ in a certain set and sampled function values $f(x_1), \ldots, f(x_s)$, where is the best place to sample $f$ at $n$ additional places to obtain the most information about it? Some of the results presented here were announced in [10].


2. Statement of problem

Let $H$ denote the Hilbert space of real-valued, square-summable functions on $[0, 1]$. We denote the norm and inner product on $H$ by

$$\|f\|^2 = (f, f), \qquad (f, g) = \int_0^1 f(t) g(t)\, dt,$$

respectively.

Let $K(x, y)$ be a continuous function on $[0, 1] \times [0, 1]$ and define

$$\mathscr{K} = \left\{ \int_0^1 K(x, y) h(y)\, dy : \|h\| \le 1 \right\}.$$

The operator $K^*K$ is completely continuous, symmetric, and positive definite, with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge \cdots > 0$ and corresponding orthonormal eigenfunctions,

(2.1) $K^*K \phi_n = \lambda_n \phi_n, \quad (\phi_n, \phi_m) = \delta_{nm}, \quad m, n = 1, 2, \ldots.$

In addition, we define $\psi_n = K\phi_n$ and observe that

(2.2) $KK^* \psi_n = \lambda_n \psi_n, \quad (\psi_n, \psi_m) = \lambda_n \delta_{nm}, \quad n, m = 1, 2, \ldots.$
The following theorem is a familiar result for the Kolmogorov $n$-width of $\mathscr{K}$; see Shapiro [16, p. 188].

THEOREM 2.1. $d_n(\mathscr{K}; H) = \lambda_{n+1}^{1/2}$, and $X_n = [\psi_1, \ldots, \psi_n]$, the linear subspace spanned by $\psi_1, \ldots, \psi_n$, is an optimal subspace for $\mathscr{K}$.
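Section 4 gives a matrix formulation of such results; in that spirit, here is a small numerical sketch of the matrix analogue of Theorem 2.1 (the kernel, grid, and random seed are my own illustrative choices, not from the paper). For a matrix $K$, the width of the ellipsoid $\{Kh : \|h\| \le 1\}$ is the $(n+1)$st singular value $\sigma_{n+1} = \lambda_{n+1}^{1/2}$, where $\lambda_{n+1}$ is the $(n+1)$st eigenvalue of $K^*K$, and it is attained by the span of the first $n$ left singular vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize a continuous kernel on a grid; {Kh : ||h|| <= 1} becomes an ellipsoid.
t = np.linspace(0.0, 1.0, 80)
K = np.exp(-(t[:, None] - t[None, :]) ** 2)

U, s, Vt = np.linalg.svd(K)
n = 4

# Deviation of the ellipsoid from the span of the first n left singular vectors
# (the discrete analogue of psi_1, ..., psi_n):
P = U[:, :n] @ U[:, :n].T
width = np.linalg.norm(K - P @ K, 2)      # sup over ||h|| <= 1 of dist(Kh, X_n)
assert np.isclose(width, s[n])            # equals sigma_{n+1} = lambda_{n+1}^{1/2}

# No other n-dimensional subspace does better (Eckart-Young lower bound):
Q, _ = np.linalg.qr(rng.standard_normal((len(t), n)))
assert np.linalg.norm(K - Q @ Q.T @ K, 2) >= width - 1e-9
```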

To comment further on this interesting result we require the following idea. A continuous kernel $K$ is said to be totally positive provided that the Fredholm determinant

$$K\begin{pmatrix} x_1, \ldots, x_n \\ y_1, \ldots, y_n \end{pmatrix} = \det\bigl(K(x_i, y_j)\bigr)_{i,j=1}^n$$

is nonnegative for all $0 < x_1 < \cdots < x_n < 1$, $0 < y_1 < \cdots < y_n < 1$. […] $> 0$, and the corresponding orthonormal eigenfunctions, $Ku_n = \lambda_n u_n$, $n = 1, 2, \ldots$, form a Markov system on $(0, 1)$; that is, $|u_1(x)| > 0$ and

$$U\begin{pmatrix} 1, \ldots, n \\ x_1, \ldots, x_n \end{pmatrix} = \det\bigl(u_i(x_j)\bigr)_{i,j=1}^n > 0$$

for all $0 < x_1 < \cdots < x_n < 1$.
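As a concrete instance (my own example, not from the paper), the Gaussian kernel $e^{-(x-y)^2}$ is a classical strictly totally positive kernel, so the sign condition on its Fredholm determinants can be spot-checked numerically:

```python
import numpy as np
from itertools import combinations

def K(x, y):
    return np.exp(-(x - y) ** 2)    # strictly totally positive (classical example)

pts = [0.1, 0.25, 0.4, 0.6, 0.85]   # arbitrary ordered points in (0, 1)

def fredholm_det(xs, ys):
    # K(x_1, ..., x_k; y_1, ..., y_k) = det(K(x_i, y_j))
    xs, ys = np.array(xs), np.array(ys)
    return np.linalg.det(K(xs[:, None], ys[None, :]))

# combinations() yields increasing tuples, as total positivity requires
dets = [fredholm_det(xs, ys)
        for k in (1, 2, 3)
        for xs in combinations(pts, k)
        for ys in combinations(pts, k)]
print(min(dets) > 0)   # prints True
```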

Consequently, the $(n + 1)$st eigenfunction $u_{n+1}$ […] we conclude that $\lambda \le \lambda_{n+1}$. But we also have, by the definition of $\lambda$, that $\lambda_{n+1} \le \lambda$. […]

for all $0 < x_0 < \cdots < x_n < 1$ (and strictly positive for some choice of $x_0, \ldots, x_n$), so that there exists a function $g(x) = \sum_i \beta_i K(\eta_i, x)$ which weakly changes sign at $\xi_1, \ldots, \xi_n$ ($\xi_0 = 0$, $\xi_{n+1} = 1$); in particular, $g(x)(-1)^i \ge 0$ for $\xi_i < x < \xi_{i+1}$.
$\ge 0$, $x, y \in [0, 1]$, because

$$(R^*R)(x, y) = \int \cdots \int [\,\cdots\,]\, d\alpha \ge 0,$$

and we conclude that the largest eigenvalue of $R^*R$ is given by $\lambda_{\max}(R^*R) = \lambda_{n+1}$. Thus the proof is complete.

Lemma 2.3 leads also to a result similar to Theorem 2.4 for convolution operators. Thus, in Example 2.2, interpolation at the points $\sigma + j/2n$, $j = 1, \ldots, 2n$, by the subspace […] is an optimal procedure for the class […].

3. n-widths under restricted approximation

In this section we study n-widths under restricted approximation. We begin by describing our initial motivation for this problem. Suppose we sample a function $f(x)$ at $s$ points $x = (x_1, \ldots, x_s)$, $0 < x_1 < \cdots < x_s < 1$. Given only that $f \in \mathscr{K}$ and the data $f_x = (f(x_1), \ldots, f(x_s))$, we wish to find an optimal method of estimating $f(x)$. To describe what we mean by this, we let $T$ be any mapping (not necessarily linear) from $R^s$ into $C[0, 1]$. This mapping determines the estimator $Sf = Tf_x$ for $f$, and the error, given only that $f \in \mathscr{K}$, is

$$E(x; S) = \sup_{f \in \mathscr{K}} \|f - Sf\|.$$

We will say that $S_0$ is an optimal estimator for $\mathscr{K}$ provided that

$$E(x; S_0) = \inf_S E(x; S),$$

where the infimum is taken over all mappings from $R^s$ into $C[0, 1]$; see [12] for a related problem. Let $P_x$ be the orthogonal projection of $H$ onto the subspace

$$[K(x_1, \cdot), \ldots, K(x_s, \cdot)].$$

For $f = Kh$ we define $S_x f = K P_x h$. Since $P_x h = 0$ for any $h$ such that $(Kh)(x_i) = 0$, $i = 1, \ldots, s$, $S_x$ is a well-defined estimator which uses only the information

$$f_x = (f(x_1), \ldots, f(x_s)).$$

We define

$$C(x) = \sup \{\|f\| : f(x_i) = 0,\ i = 1, \ldots, s,\ f \in \mathscr{K}\}.$$

THEOREM 3.1. $S_x$ is an optimal linear estimator for $\mathscr{K}$ and $E(x) = C(x)$.
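Before turning to the proof, the construction is easy to exhibit in a discretized setting (grid, kernel, and sample sites below are my own illustrative choices, not from the paper): $P_x$ becomes the orthogonal projection onto the span of the sampled kernel sections, and $S_x f = K P_x h$ then reproduces the sampled values exactly, so it indeed depends only on the data $f_x$:

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0.0, 1.0, 60)
K = np.exp(-5.0 * (t[:, None] - t[None, :]) ** 2)   # discretized kernel K(x, y)

h = rng.standard_normal(len(t))
h /= np.linalg.norm(h)            # ||h|| <= 1, so f = Kh lies in the class
f = K @ h

idx = [5, 20, 35, 50]             # sample sites x_1 < ... < x_s
A = K[idx, :]                     # the sampled kernel sections K(x_i, .)

# P_x: orthogonal projection onto span{K(x_i, .)}
P = A.T @ np.linalg.solve(A @ A.T, A)
Sf = K @ (P @ h)                  # the estimator S_x f = K P_x h

# S_x interpolates the data: (S_x f)(x_i) = f(x_i)
assert np.allclose(Sf[idx], f[idx])
```

The interpolation property follows because $A P = A$ (the rows of $A$ already lie in the range of the projection), which is the discrete form of the observation that $P_x$ leaves the data functionals unchanged.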

Proof. Let $f \in \mathscr{K}$, $f(x_i) = 0$, $i = 1, \ldots, s$. Then for any mapping $T: R^s \to C[0, 1]$,

$$\|f\| \le \tfrac{1}{2}\bigl[\|f - T(0)\| + \|f + T(0)\|\bigr] \le \sup_{f \in \mathscr{K}} \|f - Sf\|,$$

where $Sf = Tf_x$. Hence $C(x) \le E(x)$. The reverse inequality follows from the following reasoning. First observe that

$$\|f - S_x f\| = \|Kh - KP_x h\| = \|K(h - P_x h)\|.$$

Now $K(h - P_x h) \in \mathscr{K}$, because $\|h - P_x h\| \le \|h\| \le 1$, and

[…] $\ge d_n(\mathscr{K}_x; H) = \lambda_{n+1}^{1/2}(x)$

for all $y$. To complete the proof, we will demonstrate that

(3.4) $\|K_x - K_x P_{n,x}\| = \lambda_{n+1}^{1/2}(x),$

where […]. Combining this fact with (3.3), the proof will then be complete. Let $T = K_x - K_x P_{n,x}$. Then, using the fact that […], we may compute the kernel of $T^*T$ to be

$$(T^*T)(x, y) = (TT^*)(y, x) = \frac{\displaystyle K_x K_x^* \begin{pmatrix} y, \eta_1(x), \ldots, \eta_n(x) \\ x, \eta_1(x), \ldots, \eta_n(x) \end{pmatrix}}{\displaystyle K_x K_x^* \begin{pmatrix} \eta_1(x), \ldots, \eta_n(x) \\ \eta_1(x), \ldots, \eta_n(x) \end{pmatrix}}.$$

Thus $(T^*T)(x, y)\, \operatorname{sgn} f_{n+1}(x)\, \operatorname{sgn} f_{n+1}(y) \ge 0$. But because $T = K_x - K_x P_{n,x}$, then (replacing $K$ by $K_x$) as in the proof of Theorem 2.3 we conclude that

$$T^*T f_{n+1} = \lambda_{n+1}(x) f_{n+1} \quad \text{and} \quad \lambda_{\max}(T^*T) = \lambda_{n+1}(x);$$

hence (3.4) is verified.

According to the above theorem, both

$$X^1 = [f_1, \ldots, f_n]$$

and

$$X^2 = [(K_x K_x^*)(\cdot, \eta_1(x)), \ldots, (K_x K_x^*)(\cdot, \eta_n(x))]$$

are optimal subspaces for $d_n(\mathscr{K}_x; H)$. There is, of course, a third optimal subspace, based upon the zeros of $f_{n+1}(x)$ (it can be shown that $f_{n+1}(x)$ has $n + s$ zeros), and again interpolation at the zeros of $f_{n+1}(x)$ is an optimal procedure. However, the discussion of these facts would take us too far from our discussion of optimal estimation. Instead, let us note that the results we have been considering in Section 2 and Section 3 extend when the operator $K$ maps

$$H_\alpha = \left\{ f : \int_0^1 f^2(t)\, d\alpha(t) < \infty \right\}$$

onto

$$H_\beta = \left\{ f : \int_0^1 f^2(t)\, d\beta(t) < \infty \right\}$$

[…]

(4.6) […]

Appealing to (4.5), we obtain by direct computation,

$$\det(\,\cdots\,)_{k,l=1}^{n+1} = \cdots \left|(f^k, Ae^l)\right|_{k,l=1}^{n+1} \cdots \left|(f^k, Ae^l)\right|_{k,l=1}^{n} \cdots,$$

where we define $f^{n+1} = u_i$, the $i$th unit vector ($(u_k)_l = \delta_{kl}$, $k, l = 1, \ldots, N$), and $e^{n+1} = u_j$. The matrix whose columns are composed of the vectors $e^1, \ldots, e^{n+1}$ has the property that the signs of all its $(n + 1)$st minors are $(-1)^{\cdots}$ […]. Thus (4.6) is verified and the proof of the theorem is finished.

5. Further extensions

In this section we will indicate how Theorem 2.3 can be extended to sets of the form $\mathscr{K}_r = X_r + \mathscr{K}$, where $X_r$ is some fixed $r$-dimensional subspace of $H$. Let $k_1(x), \ldots, k_r(x)$ be continuous functions on $[0, 1]$ and define

(5.1) $\mathscr{K}_r = \left\{ \sum_{j=1}^r a_j k_j(x) + \int_0^1 K(x, y) h(y)\, dy : \|h\| \le 1,\ (a_1, \ldots, a_r) \in R^r \right\}.$

The main prototype, for us, of this class of examples is the Sobolev class

$$W_2^r[0, 1] = \{f : f^{(r-1)} \text{ abs. cont. on } (0, 1),\ f^{(r)} \in L^2[0, 1],\ \|f^{(r)}\| \le 1\},$$

which may be written in the form (5.1) by using Taylor's theorem with remainder:

(5.2) $f(x) = \sum_{j=0}^{r-1} \frac{f^{(j)}(0)}{j!} x^j + \frac{1}{(r - 1)!} \int_0^1 (x - y)_+^{r-1} f^{(r)}(y)\, dy$
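Representation (5.2) can be verified numerically. In the sketch below, the test function $f = \sin$, the choice $r = 3$, and the trapezoid-rule quadrature are my own illustrative choices; the Taylor polynomial plus the truncated-power remainder integral reproduces $f(x)$:

```python
import math
import numpy as np

# f = sin, r = 3: f^(j)(0) = [0, 1, 0] for j = 0, 1, 2, and f^(3)(y) = -cos(y)
r, x = 3, 0.7
taylor = sum(d * x**j / math.factorial(j)
             for j, d in enumerate([0.0, 1.0, 0.0]))

# remainder: (1/(r-1)!) * integral over [0, 1] of (x - y)_+^(r-1) f^(r)(y) dy
y = np.linspace(0.0, 1.0, 20001)
integrand = np.maximum(x - y, 0.0) ** (r - 1) * (-np.cos(y)) / math.factorial(r - 1)
dy = y[1] - y[0]
remainder = float(np.sum((integrand[:-1] + integrand[1:]) * dy / 2.0))  # trapezoid

assert abs(taylor + remainder - math.sin(x)) < 1e-6
```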

(Here $x_+^{r-1} = x^{r-1}$ for $x \ge 0$, and zero otherwise.) Let $Q_r$ be the orthogonal projection of $H$ onto $X_r = [k_1, \ldots, k_r]$, and define $K_r = (I - Q_r)K$. Then Theorem 2.1 easily extends as follows: $K_r^* K_r$ is a completely continuous, symmetric, positive semi-definite operator with eigenvalues $\lambda_{1,r} \ge \lambda_{2,r} \ge \cdots \ge 0$ and corresponding orthonormal eigenfunctions $\phi_{1,r}, \phi_{2,r}, \ldots,$

$$K_r^* K_r \phi_{n,r} = \lambda_{n,r} \phi_{n,r}, \quad (\phi_{n,r}, \phi_{m,r}) = \delta_{mn}, \quad m, n = 1, 2, \ldots.$$

Let $\psi_{n,r} = K_r \phi_{n,r}$. Then

$$K_r K_r^* \psi_{n,r} = \lambda_{n,r} \psi_{n,r}, \quad (\psi_{n,r}, \psi_{m,r}) = \lambda_{n,r} \delta_{nm}, \quad n, m = 1, 2, \ldots,$$

and

$$d_n(\mathscr{K}_r) = \begin{cases} \infty, & n < r, \\ \lambda_{n-r+1,r}^{1/2}, & n \ge r. \end{cases}$$

When $n \ge r$, $X_n = [k_1, \ldots, k_r, \psi_{1,r}, \ldots, \psi_{n-r,r}]$ is an optimal subspace for the $n$-width of $\mathscr{K}_r$. For the analog of Theorems 2.3 and 2.4 we require the following assumptions. For any points $0 \le s_1 < \cdots < s_m \le 1$, $0 \le t_1 < \cdots < t_{r+m} \le 1$, and any


$m \ge 0$, the determinant

$$K\begin{pmatrix} 1, \ldots, r, s_1, \ldots, s_m \\ t_1, \ldots, t_r, t_{r+1}, \ldots, t_{r+m} \end{pmatrix}$$

is nonnegative. The linear spaces

$$[k_1, \ldots, k_r, K(\cdot, s_1), \ldots, K(\cdot, s_m)] \quad \text{and} \quad [k_1, \ldots, k_r, K(s_1, \cdot), \ldots, K(s_m, \cdot)]$$

have dimension $r + m$ for all $0 \le s_1 < \cdots < s_m \le 1$. […] $\ge 0$, $x, y \in [0, 1],$

and:

THEOREM 5.2. $\|K_r - K_r P\| = \lambda_{n+1,r}^{1/2}$ […]


Hence, again, interpolation is an optimal procedure for estimating the class $\mathscr{K}_r$.

Let us now apply Theorem 5.1 to an example discussed by Kolmogorov in [8].

Example 5.1. $k_i(t) = t^{i-1}$, $i = 1, \ldots, r$, $K(x, y) = (x - y)_+^{r-1}/(r - 1)!$, $x, y \in [0, 1]$. In

this case

$$\mathscr{K}_r = \{f : f^{(r-1)} \text{ abs. cont.},\ f^{(r)} \in L^2[0, 1],\ \|f^{(r)}\| \le 1\} = W_2^r[0, 1].$$

The eigenvalue equation $K_r^* K_r \phi_{n,r} = \lambda_{n,r} \phi_{n,r}$, $n = 1, 2, \ldots$, can be shown to be equivalent to

(3.6) […]