Wavelets on Closed Subsets of the Real Line

L. Andersson, N. Hall, B. Jawerth, G. Peters

Abstract. We construct orthogonal and biorthogonal wavelets on a given closed subset of the real line. We also study wavelets satisfying certain types of boundary conditions. We introduce the concept of "wavelet probing", which is closely related to our construction of wavelets. This technique allows us to very quickly perform a number of different numerical tasks associated with wavelets.

§1. Introduction

Wavelets and multiscale analysis have emerged in a number of different fields, from harmonic analysis and partial differential equations in pure mathematics to signal and image processing in computer science and electrical engineering. Typically a general function, signal, or image is broken up into linear combinations of translated and scaled versions of some simple, basic building blocks. Multiscale analysis comes with a natural hierarchical structure obtained by only considering the linear combinations of building blocks up to a certain scale. This hierarchical structure is particularly suited for fast numerical implementations; the underlying idea being that functions on a certain scale only need to be sampled at a rate approximately given by the scale they live on.

To discuss this in more concrete terms, let us consider a standard, orthogonal wavelet decomposition of a general function $f$ on the line. This is a representation in terms of linear combinations of translated dilates of a single function $\psi$:

  $f(x) = \sum_{\nu} \sum_{k} \langle f, \psi_{\nu k}\rangle\, \psi_{\nu k}(x)$,

with $\psi_{\nu k}(x) = 2^{\nu/2}\,\psi(2^{\nu} x - k)$ for integers $\nu$ and $k$. The functions

  $f_\mu(x) = \sum_{\nu \le \mu} \sum_{k} \langle f, \psi_{\nu k}\rangle\, \psi_{\nu k}(x)$

Topics in the Theory and Applications of Wavelets, Larry L. Schumaker and Glenn Webb (eds.), pp. 1-4.



represent approximations of $f$ associated with each of the scales $2^{-\mu}$. Suppose that we decide that it is enough to consider $f$ only up to a certain scale $2^{-\nu_1}$. We may obviously write

  $f_{\nu_1} = f_{\nu_0} + (f_{\nu_0+1} - f_{\nu_0}) + (f_{\nu_0+2} - f_{\nu_0+1}) + \dots + (f_{\nu_1} - f_{\nu_1-1})$,

and this gives us a representation of $f_{\nu_1}$ as a sum of a function $f_{\nu_0}$, living on a coarser scale $2^{-\nu_0}$, and a sum of terms containing the additional detail $(f_{\nu_0+i+1} - f_{\nu_0+i})$ needed to pass from $f_{\nu_0+i}$ to $f_{\nu_0+i+1}$. If now $f_{\nu_1}$ is represented by $N$ samples, then we only need roughly half as many to represent $f_{\nu_1-1}$, a quarter as many for $f_{\nu_1-2}$, and so on. Similarly, each of the detail terms needs only half as many samples for its representation as the previous one when we pass from finer to coarser scales. This hierarchical structure is easy to turn into a fast, $O(N)$ numerical algorithm for calculating the wavelet coefficients $\langle f_{\nu_1}, \psi_{\nu k}\rangle$ of $f_{\nu_1}$.

In this paper we shall study procedures which utilize the inherent hierarchical structure in a different way. These procedures share the feature that they involve probing and gathering information from several different scales while keeping the location (essentially) fixed. Typically, they lead to numerical algorithms which are very fast, with a complexity proportional to the number of levels being processed.

There are a number of examples of this kind of wavelet probing. The one we shall consider in greatest detail is the construction of wavelets on closed subsets of the real line. We shall also show how similar ideas lead to extremely quick algorithms for splitting and merging functions defined on intervals. Splitting allows us to find the wavelet coefficients associated with different subintervals of a function originally defined on a larger interval. Merging goes in the opposite direction and involves starting with the wavelet coefficients on the smaller intervals and finding the wavelet coefficients associated with the union of the smaller intervals.
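The telescoping sum above can be sketched concretely in the Haar case, where the approximation $f_\nu$ is simply a vector of $2^\nu$ local averages. This is a minimal sketch of our own (the helper names are illustrative, not from the paper):

```python
import random

def coarsen(samples):
    """One averaging step: the 2^nu samples of f_nu become the 2^(nu-1) samples of f_(nu-1)."""
    return [(samples[2 * i] + samples[2 * i + 1]) / 2.0 for i in range(len(samples) // 2)]

def telescope(samples, levels):
    """Split f_nu1 into a coarse approximation f_nu0 plus one detail term per level."""
    approx = [samples]
    for _ in range(levels):
        approx.append(coarsen(approx[-1]))
    # details[i] carries the information needed to pass from approx[i+1] back to approx[i]
    details = [[fine[j] - coarse[j // 2] for j in range(len(fine))]
               for fine, coarse in zip(approx, approx[1:])]
    return approx[-1], details

samples = [random.random() for _ in range(16)]
coarse, details = telescope(samples, 3)
# Sizes halve at every level (16, 8, 4 detail entries on top of 2 coarse samples),
# so the total work is N + N/2 + N/4 + ... < 2N, i.e. O(N).
```

Reconstruction simply reverses the telescoping: each finer approximation is the coarser one plus the stored detail term.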
Other examples we shall discuss are algorithms for very quick, smoothness preserving extensions of functions, pointwise evaluation of wavelet decompositions, and the construction of wavelets satisfying different boundary conditions.

The paper is organized as follows. In the next section we briefly review multiresolution analysis on the line and recall some basic facts about Daubechies' compactly supported wavelets. In Section 3 we give an algorithm for pointwise evaluation of a function which is represented in terms of its wavelet coefficients. This is our first and perhaps simplest example of wavelet probing. Then in Section 4 we go through our basic construction of orthonormal wavelets on $[0,1]$ (this construction was independently discovered by A. Cohen, I. Daubechies, and P. Vial [6], [7]) and, more generally, on any interval $[A,B]$ with rational endpoints. This leads to characterizations for smoothness spaces such as $\operatorname{Lip}\alpha\,[A,B]$ ($0 < \alpha < 1$) as well. Splitting and merging of wavelet decompositions is the topic of Section 5. In Section 6 the basic construction in Section 4 is modified to incorporate wavelets satisfying certain types of boundary conditions. After that, in Section 7, we use


wavelets on closed intervals to construct smoothness preserving extensions of functions. The results in this section are analogous to facts established by Auscher [1], [2] in the case of Meyer's wavelets on $[0,1]$. It is well known (cf. [5], [4]) that biorthogonal wavelets offer very useful extra flexibility in the construction of wavelet decompositions. In Section 8 we discuss compactly supported biorthogonal wavelets on quite general closed subsets of the real line. One of the main results of this section, which has been split into Theorems 8.1 and 8.2, states that it is possible to start from any biorthogonal, compactly supported wavelets on the line and construct wavelets on any interval $[A,B]$ while maintaining numerical stability. There are other ways as well to add flexibility and obtain more general wavelet type bases. One is to generate wavelets recursively; a discussion of this is also in Section 8. Finally, in Section 9 we have gathered a number of other thoughts and concluding remarks.

§2. Orthogonal wavelets on the line

Wavelets on the line can be generated in several ways. One is through multiresolution analysis, cf. [17], [18]. We shall focus, at least at first, on multiresolution analyses of $L^2(\mathbb{R})$ generated by a compactly supported function $\phi$. More specifically, we shall assume that we have a sequence $\{V_\nu\}_{\nu \in \mathbb{Z}}$ of closed spaces $V_\nu \subset L^2(\mathbb{R})$ with the following properties:

1) $\dots \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \dots$;
2) $\lim_{\nu \to -\infty} V_\nu = \{0\}$ and $\lim_{\nu \to +\infty} V_\nu = L^2(\mathbb{R})$;
3) $v(x) \in V_\nu \iff v(2x) \in V_{\nu+1}$;
4) $v(x) \in V_0 \iff v(x+1) \in V_0$;
5) there exists a function $\phi$ with $\int \phi(y)\,dy = 1$ such that the collection $\{\phi(x-k)\}_{k \in \mathbb{Z}}$ is an orthonormal basis for $V_0$ and

  $\phi(x) = \sum_{k=0}^{2N-1} h_k \sqrt{2}\,\phi(2x-k)$        (2-1)

for some integer $N \ge 1$ and coefficients $h_k$;
6) the coefficients $h_k$ satisfy

  $\sum_{k=0}^{2N-1} (-1)^k h_k\, k^\alpha = 0$  for $0 \le \alpha \le N-1$.        (2-2)

The existence of such multiresolution analyses was established by Daubechies [11]. We shall refer to (2-1) as the refinement relation satisfied by $\phi$. It will sometimes be convenient to assume that $h_k$ is defined for all $k \in \mathbb{Z}$ by letting $h_k = 0$ if $k < 0$ or $k \ge 2N$. It is easy to see that the refinement relation implies, in particular, that $\phi$ has support given by

  $\operatorname{supp}\phi = \overline{\{x : \phi(x) \ne 0\}} = [0, 2N-1]$.        (2-3)
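As a sanity check on the normalization used in (2-1) and (2-2), one can verify numerically that the classical Daubechies filter for $N = 2$, with coefficients $(1\pm\sqrt{3})/(4\sqrt{2})$ and $(3\pm\sqrt{3})/(4\sqrt{2})$, satisfies $\sum_k h_k = \sqrt{2}$, the moment condition (2-2), and the orthonormality relations. The check below is ours, not from the paper:

```python
import math

s3 = math.sqrt(3.0)
# Daubechies filter for N = 2 in the sqrt(2)-normalization of (2-1)
h = [(1 + s3) / (4 * math.sqrt(2)), (3 + s3) / (4 * math.sqrt(2)),
     (3 - s3) / (4 * math.sqrt(2)), (1 - s3) / (4 * math.sqrt(2))]

filter_sum = sum(h)                                            # should equal sqrt(2)
moments = [sum((-1) ** k * h[k] * k ** a for k in range(4))    # condition (2-2)
           for a in range(2)]                                  # alpha = 0, ..., N-1
norm = sum(hk * hk for hk in h)          # <phi(.-k), phi(.-k)> = 1
shift_orth = h[0] * h[2] + h[1] * h[3]   # <phi(.), phi(.-1)> = 0
```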


If we let

  $\phi_{\nu k}(x) = 2^{\nu/2}\,\phi(2^{\nu} x - k)$, $k \in \mathbb{Z}$,        (2-4)

then $\{\phi_{\nu k}\}_{k \in \mathbb{Z}}$ is an orthonormal basis for $V_\nu$. The orthogonal complement of $V_\nu$ in $V_{\nu+1}$ is denoted by $W_\nu$. Hence,

  $V_{\nu+1} = V_\nu \oplus W_\nu$.

Furthermore,

  $L^2(\mathbb{R}) = \bigoplus_{\nu=-\infty}^{+\infty} W_\nu$

and, for any finite integer $\nu_0$,

  $L^2(\mathbb{R}) = V_{\nu_0} \oplus \bigoplus_{\nu=\nu_0}^{+\infty} W_\nu$.

The function $\psi$ defined by

  $\psi(x) = \sum_k g_k \sqrt{2}\,\phi(2x-k)$, $g_k = (-1)^k h_{1-k}$,        (2-5)

is the wavelet associated with the multiresolution analysis. Let $\psi_{\nu k}(x) = 2^{\nu/2}\,\psi(2^{\nu} x - k)$, $\nu, k \in \mathbb{Z}$. It is easy to see that $\{\psi_{\nu k}\}_{k \in \mathbb{Z}}$ is in fact an orthonormal basis for $W_\nu$. This implies that a general function $f \in L^2(\mathbb{R})$ can be written as

  $f(x) = \sum_{\nu,k} \langle f, \psi_{\nu k}\rangle\,\psi_{\nu k}(x)$,        (2-6)

and also as

  $f(x) = \sum_k \langle f, \phi_{\nu_0 k}\rangle\,\phi_{\nu_0 k}(x) + \sum_{\nu \ge \nu_0,\,k} \langle f, \psi_{\nu k}\rangle\,\psi_{\nu k}(x)$.        (2-7)

We refer to either of the representations (2-6) and (2-7) as a wavelet decomposition of $f$. The mappings $f \mapsto \{\langle f, \psi_{\nu k}\rangle\}_{\nu,k \in \mathbb{Z}}$ and $f \mapsto \{\langle f, \phi_{\nu_0 k}\rangle\}_{k \in \mathbb{Z}} \cup \{\langle f, \psi_{\nu k}\rangle\}_{\nu \ge \nu_0,\,k \in \mathbb{Z}}$ are the corresponding wavelet transforms of $f$.

For later reference we also note the following. If $f \in V_{\nu+1}$, then it uniquely splits into two orthogonal pieces, one in $V_\nu$ and one in $W_\nu$. In particular, this is true for the functions $\phi_{\nu+1,l}$, $l \in \mathbb{Z}$. In fact, by (2-7), we have

  $\phi_{\nu+1,l} = \sum_k \langle \phi_{\nu+1,l}, \phi_{\nu k}\rangle\,\phi_{\nu k} + \sum_k \langle \phi_{\nu+1,l}, \psi_{\nu k}\rangle\,\psi_{\nu k} = \sum_k h_{l-2k}\,\phi_{\nu k} + \sum_k g_{l-2k}\,\psi_{\nu k}$.        (2-8)
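The splitting (2-8) is exactly what one step of the fast wavelet transform computes. A small sketch of our own (the setup, with the Daubechies $N = 2$ filter and coefficients stored in dictionaries indexed over $\mathbb{Z}$, is an assumption for illustration) checks that analysis followed by synthesis reproduces the fine-scale coefficients:

```python
import math, random

s3 = math.sqrt(3.0)
# Daubechies N = 2 filter; g_k = (-1)^k h_{1-k} as in (2-5)
h = {k: c / (4 * math.sqrt(2)) for k, c in enumerate([1 + s3, 3 + s3, 3 - s3, 1 - s3])}
g = {k: (-1) ** k * h.get(1 - k, 0.0) for k in range(-2, 2)}

def analyze(alpha_fine):
    """alpha_{nu k} = sum_l h_{l-2k} alpha_{nu+1,l};  beta_{nu k} = sum_l g_{l-2k} alpha_{nu+1,l}."""
    ks = range(min(alpha_fine) - 2, max(alpha_fine) // 2 + 2)
    alpha = {k: sum(h.get(l - 2 * k, 0.0) * a for l, a in alpha_fine.items()) for k in ks}
    beta = {k: sum(g.get(l - 2 * k, 0.0) * a for l, a in alpha_fine.items()) for k in ks}
    return alpha, beta

def synthesize(alpha, beta, ls):
    """alpha_{nu+1,l} = sum_k h_{l-2k} alpha_{nu k} + g_{l-2k} beta_{nu k}, per (2-8)."""
    return {l: sum(h.get(l - 2 * k, 0.0) * a for k, a in alpha.items())
               + sum(g.get(l - 2 * k, 0.0) * b for k, b in beta.items()) for l in ls}

fine = {l: random.random() for l in range(16)}
alpha, beta = analyze(fine)
back = synthesize(alpha, beta, fine.keys())
```

Because the rows $\{h_{\cdot-2k}\}$ and $\{g_{\cdot-2k}\}$ form an orthonormal system in $\ell^2(\mathbb{Z})$, the round trip is exact up to floating-point error.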


§3. A simple example

We fix a multiresolution analysis, generated by a function $\phi$ satisfying the conditions discussed in the previous section. Now, suppose we pick a function $f \in V_{\nu_1}$ and suppose we know its wavelet decomposition

  $f(x) = \sum_k \alpha_{\nu_0 k}\,\phi_{\nu_0 k}(x) + \sum_{\nu_0 \le \nu \le \nu_1-1}\sum_k \beta_{\nu k}\,\psi_{\nu k}(x)$,        (3-1)

with $\nu_0 < \nu_1$. Since the function $f \in V_{\nu_1}$, it also has a representation

  $f(x) = \sum_k \alpha_{\nu_1 k}\,\phi_{\nu_1 k}(x)$

for some coefficients $\{\alpha_{\nu_1 k}\}_k$. To find these coefficients in terms of the known ones, $\{\alpha_{\nu_0 k}\}_k \cup \{\beta_{\nu k}\}_{\nu_0 \le \nu \le \nu_1-1,\,k}$, we may use the relation

  $\alpha_{\nu+1,k} = \sum_l h_{k-2l}\,\alpha_{\nu l} + g_{k-2l}\,\beta_{\nu l}$, $\nu_0 \le \nu \le \nu_1 - 1$,        (3-2)

recursively. At each level we start from vectors $\{\alpha_{\nu k}\}_k$, $\{\beta_{\nu k}\}_k$ and generate a vector $\{\alpha_{\nu+1,k}\}_k$. Going through all the levels then yields $\{\alpha_{\nu_1 k}\}_k$. This procedure is the Inverse Fast Wavelet Transform (IFWT).

Let us now change our point of view slightly and instead consider the problem of finding just one of the coefficients, $\alpha_{\nu_1 k}$, for a certain $k$. The relation (3-2) may still be used for this. Of course, when $k$ is fixed most of the terms in the sums on the right hand side are zero. In fact, to find $\alpha_{\nu_1 k}$ we only need the $\alpha_{\nu_1-1,l}$'s with $0 \le k-2l \le 2N-1$ in the first sum and the $\beta_{\nu_1-1,l}$'s with $-(2N-2) \le k-2l \le 1$ in the second. The coefficients $\alpha_{\nu_1-1,l}$ are similarly determined by the $\alpha_{\nu_1-2,j}$'s with $0 \le l-2j \le 2N-1$ and the $\beta_{\nu_1-2,j}$'s with $-(2N-2) \le l-2j \le 1$. Continuing this, we see that on level $\nu_1 - i$ we need the $\alpha_{\nu_1-i,j}$'s only for $j$'s satisfying

  $\frac{k}{2^i} - (2N-1)\left(\frac{1}{2} + \frac{1}{2^2} + \dots + \frac{1}{2^i}\right) \le j \le \frac{k}{2^i}$,

and the $\beta_{\nu_1-i,j}$'s only for

  $\frac{k}{2^i} - (2N-1)\left(\frac{1}{2} + \frac{1}{2^2} + \dots + \frac{1}{2^i}\right) + N - 1 \le j \le \frac{k}{2^i} + N - 1$.

Hence, on each level there are approximately $4N$ terms, independent of the level, and finding $\alpha_{\nu_1 k}$ requires approximately $4N \cdot (\nu_1 - \nu_0)$ operations. As a consequence, if we need the values of $\alpha_{\nu_1 k}$ for values of $k$ that are sparsely scattered, we see that it is faster to process them in this second way rather than using the "full" IFWT. Another possibility is that we need $\alpha_{\nu_1 k}$ for clusters of $k$-indices; a hybrid procedure obtained by processing all the indices in each


of the clusters at once, using the IFWT and the above outlined pointwise process, leads to a much quicker algorithm than the "full" IFWT.

There is a close relation between finding the values of $\alpha_{\nu_1 k}$ for certain $k$'s and pointwise evaluation of $f$. This is clear in case the pointwise values of $f$ are directly related to the values of $\alpha_{\nu_1 k}$: $f(2^{-\nu_1} k) = \alpha_{\nu_1 k}\, 2^{\nu_1/2}$. For example, if the function $\phi$ is such that $\phi(0) = 1$, $\phi(k) = 0$ if $k \ne 0$, then the representation $f(x) = \sum_k \alpha_{\nu_1 k}\,\phi_{\nu_1 k}$ has this interpolation property. This is the case for some of the recursively defined multiresolution analyses, see Section 9, and there are other instances as well. In general, if we are interested in finding the value of $f(2^{-\nu_1} k)$ for some fixed $k$, then we may use the relation $f(2^{-\nu_1} k) = \sum_l \alpha_{\nu_1 l}\,\phi(k-l)\,2^{\nu_1/2}$. This then requires that we find the $\alpha_{\nu_1 l}$'s with $0 < k-l < 2N-1$; these values can be found with approximately $(\nu_1 - \nu_0) \cdot 4N$ operations. We note that there are other quantities associated with $f$, such as the $n$:th derivative $f^{(n)}$ (assuming that the $n$:th derivative of $\phi$ makes sense), that can quickly be evaluated at a specific point $2^{-\nu_1} k$ in an analogous way.
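In the Haar case ($N = 1$) the probing procedure of this section is particularly transparent: a single coefficient $\alpha_{\nu_1 k}$ is recovered by walking down one branch of the pyramid, a few operations per level, instead of running the full IFWT. The code below is an illustrative sketch of ours, not the authors' implementation:

```python
import math, random

r2 = math.sqrt(2.0)
h = [1 / r2, 1 / r2]      # Haar refinement coefficients h_0, h_1
g = [1 / r2, -1 / r2]     # g_k = (-1)^k h_{1-k}

def ifwt_full(alpha0, betas):
    """Full inverse fast wavelet transform via (3-2): O(N) work overall."""
    alpha = alpha0
    for beta in betas:
        alpha = [h[l % 2] * alpha[l // 2] + g[l % 2] * beta[l // 2]
                 for l in range(2 * len(alpha))]
    return alpha

def probe(alpha0, betas, k):
    """Recover the single coefficient alpha_{nu1,k}: O(1) terms per level."""
    levels = len(betas)
    value = alpha0[k >> levels]
    for i, beta in enumerate(betas):
        idx = k >> (levels - 1 - i)   # index of the one needed coefficient on this level
        value = h[idx % 2] * value + g[idx % 2] * beta[idx // 2]
    return value

alpha0 = [random.random(), random.random()]
betas = [[random.random() for _ in range(2 ** j)] for j in range(1, 4)]
```

For longer filters roughly $4N$ coefficients per level survive, as computed above, but the per-level cost still does not depend on the total number of samples.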

§4. Orthogonal wavelets on [0,1]

4.1. Some background

Standard wavelet analysis involves constructing bases for collections of functions on the real line $\mathbb{R}$, such as the square integrable functions on the real line, $L^2(\mathbb{R})$, and the Lipschitz continuous functions, $\operatorname{Lip}\alpha\,(\mathbb{R})$. For many applications it is necessary, or at least more natural, to work on a subset of the real line. In this section we shall start the study of a construction which allows us to obtain wavelets and wavelet type bases on closed subsets of the real line.¹

Let us start by considering orthogonal wavelets on $[0,1]$. Clearly, by restricting the functions in each $V_\nu$ to $[0,1]$ we obtain spaces $V_\nu^{\mathrm{restr}}$ which form a multiresolution analysis of $L^2([0,1])$. In this setting, Y. Meyer [19] has shown how to construct orthonormal bases $\{\varphi_{\nu k}\}_k$ and $\{\psi_{\nu k}\}_k$ for $V_\nu^{\mathrm{restr}}$ and for the orthogonal complement $W_\nu^{\mathrm{restr}}$ of $V_\nu^{\mathrm{restr}}$ in $V_{\nu+1}^{\mathrm{restr}}$, respectively. The basic idea is to prove that the restrictions to $[0,1]$ of the functions $\phi_{\nu k}$ form a basis of $V_\nu^{\mathrm{restr}}$ for each fixed $\nu$, and then to orthonormalize these with a Gram-Schmidt type process.

It turns out that Meyer's elegant construction has a couple of drawbacks. The restrictions of some of the $\phi_{\nu k}$ only have small tails that intersect $[0,1]$. As a consequence, the collection of the restrictions of the $\phi_{\nu k}$, although a basis, is almost linearly dependent. This means that the matrix that corresponds to a change of basis, from the restrictions of the $\phi_{\nu k}$ to the corresponding orthonormal basis, is ill conditioned. Furthermore, it is easy to check that the dimension of $V_\nu^{\mathrm{restr}}$ is equal to $2^\nu + 2N - 2$ while the dimension of $W_\nu^{\mathrm{restr}}$ is $2^\nu$. This inherent imbalance between $V_\nu^{\mathrm{restr}}$ and $W_\nu^{\mathrm{restr}}$ is sometimes inconvenient in applications.

¹ The idea behind this construction was discovered, independently, by Cohen-Daubechies-Vial and was announced in [6]. Cf. also [7].


We remark that there are other constructions of wavelets on $[0,1]$. In fact, for some historical perspective it is interesting to note that Franklin's original construction [13] was given for $[0,1]$. Also, in the case of semiorthogonal spline-wavelets, there is a construction due to C. Chui and E. Quak [3]. The construction we shall study here is in some ways simpler than that of Meyer and overcomes the difficulties just mentioned. As we shall see, it also has some other interesting features.

4.2. Scaling functions on [0,1]

Because of the condition 6) above, all polynomials of degree $\le N-1$ can be obtained as linear combinations of the functions $\{\phi_{\nu k}\}_{k \in \mathbb{Z}}$:

  $\mathcal{P}_{N-1} \subset \mathcal{S}\{\phi_{\nu k}\}_{k \in \mathbb{Z}}$.        (4-1)

(We shall use the notation $\mathcal{S}A$ in two different ways: sometimes it will denote general linear combinations of the elements of $A$ and sometimes the closure of finite linear combinations in $L^2$. Which one of the two is meant should be clear from the context.) Since this fact is directly linked to many of the approximation properties of wavelets, any construction of wavelets on an interval should preserve this property. We shall take this observation as the starting point for our construction.

To fix ideas, let us consider the case of the unit interval $[0,1]$. We shall first construct the spaces $V_\nu[0,1]$ which yield the multiresolution analysis of $L^2[0,1]$. For this purpose we will need the functions $\phi_{\nu k}$ whose supports have nonempty intersections with the interior of $[0,1]$. With this in mind we let

  $S_\nu = \{k : \operatorname{supp}\phi_{\nu k} \cap (0,1) \ne \emptyset\} = \{k : -(2N-2) \le k \le 2^\nu - 1\}$.        (4-2)

We also let $L$ and $R$ be two fixed nonnegative integers and define

  $S_{\nu,L} = \{k : -(2N-2) \le k \le L-1\}$,        (4-3)

  $S_{\nu,R} = \{k : 2^\nu - (2N-2) - R \le k \le 2^\nu - 1\}$,        (4-4)

and

  $S_{\nu,I} = \{k : \operatorname{supp}\phi_{\nu,k-L} \text{ and } \operatorname{supp}\phi_{\nu,k+R} \subset [0,1]\} = \{k : L \le k \le 2^\nu - (2N-1) - R\}$.        (4-5)

We shall assume that the scaling parameter $\nu$ is sufficiently large so that the sets $S_{\nu,L}$ and $S_{\nu,R}$ are disjoint. We let $\nu_0$ be the smallest such $\nu$:

  $\nu_0 = \min\{\nu : S_{\nu,L} \cap S_{\nu,R} = \emptyset\}$.        (4-6)

In particular, the scale is so small that a function $\phi_{\nu k}$ can intersect at most one of the endpoints. Hence, $S_{\nu,L}$, $S_{\nu,I}$, and $S_{\nu,R}$ break $S_\nu$ into three disjoint sets:

  $S_\nu = S_{\nu,L} \cup S_{\nu,I} \cup S_{\nu,R}$,   $\nu \ge \nu_0$.
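The index sets (4-2)-(4-5) and the threshold (4-6) are straightforward to tabulate. The following sketch (our own helper names, not from the paper) checks the partition $S_\nu = S_{\nu,L} \cup S_{\nu,I} \cup S_{\nu,R}$:

```python
def index_sets(N, L, R, nu):
    """S_nu and its left / interior / right pieces, per (4-2)-(4-5)."""
    S = set(range(-(2 * N - 2), 2 ** nu))
    S_L = set(range(-(2 * N - 2), L))
    S_R = set(range(2 ** nu - (2 * N - 2) - R, 2 ** nu))
    S_I = set(range(L, 2 ** nu - (2 * N - 1) - R + 1))
    return S, S_L, S_I, S_R

def nu0(N, L, R):
    """Smallest nu with S_{nu,L} and S_{nu,R} disjoint, per (4-6)."""
    nu = 0
    while index_sets(N, L, R, nu)[1] & index_sets(N, L, R, nu)[3]:
        nu += 1
    return nu

N, L, R = 2, 0, 0
nu = max(nu0(N, L, R), 4)
S, S_L, S_I, S_R = index_sets(N, L, R, nu)
```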


Suppose now that $P$ is an arbitrary polynomial of degree $\le N-1$. Because of (4-1) we may write

  $P(x)|_{[0,1]} = \sum_{k \in S_\nu} \langle P, \phi_{\nu k}\rangle\,\phi_{\nu k}(x)|_{[0,1]} = \left(\sum_{k \in S_{\nu,L}} + \sum_{k \in S_{\nu,I}} + \sum_{k \in S_{\nu,R}}\right) \langle P, \phi_{\nu k}\rangle\,\phi_{\nu k}(x)|_{[0,1]}$.

Let us put $p_{\nu,L} = \{p_\nu(k)\}_{k \in S_{\nu,L}}$ and $p_{\nu,R} = \{p_\nu(k)\}_{k \in S_{\nu,R}}$ with $p_\nu(k) = \langle P, \phi_{\nu k}\rangle$, and define

  $P_{\nu,L}(x) = \sum_{k \in S_{\nu,L}} p_\nu(k)\,\phi_{\nu k}(x)|_{[0,1]}$,  $P_{\nu,R}(x) = \sum_{k \in S_{\nu,R}} p_\nu(k)\,\phi_{\nu k}(x)|_{[0,1]}$.

The polynomial $P$ can now be written in the following way:

  $P(x)|_{[0,1]} = P_{\nu,L}(x) + \sum_{k \in S_{\nu,I}} \langle P, \phi_{\nu k}\rangle\,\phi_{\nu k}(x)|_{[0,1]} + P_{\nu,R}(x)$.        (4-7)

Note that the definitions of $P_{\nu,L}$ and $P_{\nu,R}$ imply that they are supported near the left and right endpoints, respectively.

Suppose we pick a collection $\{P^\mu\}$ of $M$ polynomials of degree $\le N-1$. This gives us two corresponding collections of $M$ functions $\{P^\mu_{\nu,L}\}$ and $M$ functions $\{P^\mu_{\nu,R}\}$. The functions $P^\mu_{\nu,L}$ are linearly independent exactly when the associated $M$ vectors $p^\mu_{\nu,L}$ are linearly independent. This is due to the fact, established by Meyer [19], that the restrictions of the functions $\phi_{\nu k}$ to $[0,1]$ are linearly independent (the proof of this fact is in fact easy in our case when the scale is so small that the functions $\phi_{\nu k}$ only intersect one endpoint at a time). Similarly, the functions $P^\mu_{\nu,R}$ are linearly independent exactly when the vectors $p^\mu_{\nu,R}$ are.

For example, let us choose the (normalized) monomials $P^\mu(x) = 2^{\nu/2}(2^\nu x)^\mu$ with $0 \le \mu \le M-1$ for some $M \le N$. By a change of variables it follows that

  $x^\mu_\nu(k) = \langle 2^{\nu/2}(2^\nu x)^\mu, \phi_{\nu k}\rangle = \int (y+k)^\mu\,\phi(y)\,dy = \sum_{i=0}^{\mu} \binom{\mu}{i}\,k^i\,M_{\mu-i}$,        (4-8)

where $M_i = \int y^i\,\phi(y)\,dy$ is the $i$:th moment of $\phi$.

Let us make a brief digression and discuss how these moments can be expressed explicitly in terms of the coefficients $h_k$. Using the refinement relation (2-1) we obtain

  $M_i = \sum_m h_m \int x^i\,\phi(2x-m)\,\sqrt{2}\,dx = \sum_m h_m\,\frac{1}{2^{i+1/2}} \int (x+m)^i\,\phi(x)\,dx = \frac{1}{2^{i+1/2}} \sum_{j=0}^{i} \binom{i}{j}\,m_{i-j}\,M_j$,

where

  $m_i = \sum_k h_k\,k^i$.

The fact that $\int \phi = 1$ readily implies that $m_0 = \sqrt{2}$. Hence, we obtain the recursive formula

  $(2^i - 1)\,M_i = \frac{1}{2^{1/2}} \sum_{j=0}^{i-1} \binom{i}{j}\,m_{i-j}\,M_j$.        (4-9)

Now, since $M_0 = \int \phi = 1$, the coefficient in front of $k^\mu$ in (4-8) is 1. This observation and induction over the degree yield that the linear span of the vectors $x^\mu_{\nu,L} = \{x^\mu_\nu(k)\}_{k \in S_{\nu,L}}$ is equal to the linear span of the vectors $\{k^\mu\}_{k \in S_{\nu,L}}$:

  $\mathcal{S}\{x^\mu_{\nu,L}\}_{0 \le \mu \le M-1} = \mathcal{S}\{\{k^\mu\}_{k \in S_{\nu,L}}\}_{0 \le \mu \le M-1}$,        (4-10)

and, similarly,

  $\mathcal{S}\{(x-1)^\mu_{\nu,R}\}_{0 \le \mu \le M-1} = \mathcal{S}\{\{(k-2^\nu)^\mu\}_{k \in S_{\nu,R}}\}_{0 \le \mu \le M-1}$.        (4-11)
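The recursion (4-9) determines all moments of $\phi$ from the filter coefficients alone. As a check of our own: for the Haar filter, $\phi$ is the indicator of $[0,1]$, so the recursion must return $M_1 = 1/2$ and $M_2 = 1/3$:

```python
from math import comb, sqrt

def moments(h, imax):
    """Moments M_i of phi from the filter, via the recursion (4-9)."""
    m = [sum(hk * k ** i for k, hk in enumerate(h)) for i in range(imax + 1)]
    M = [1.0]  # M_0 = int phi = 1
    for i in range(1, imax + 1):
        M.append(sum(comb(i, j) * m[i - j] * M[j] for j in range(i))
                 / (sqrt(2.0) * (2 ** i - 1)))
    return M

haar = [1 / sqrt(2.0), 1 / sqrt(2.0)]   # m_0 = sqrt(2), as required
M = moments(haar, 2)
```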

Moreover, since the vectors $\{k^\mu\}_{k \in S_{\nu,L}}$ clearly are linearly independent, the same holds for the $x^\mu_{\nu,L}$. The analogous fact is true at the right endpoint as well. We let

  $X^\mu_{\nu,L}(x) = \sum_{k \in S_{\nu,L}} x^\mu_\nu(k)\,\phi_{\nu k}(x)|_{[0,1]}$,
  $(X-1)^\mu_{\nu,R}(x) = \sum_{k \in S_{\nu,R}} (x-1)^\mu_\nu(k)\,\phi_{\nu k}(x)|_{[0,1]}$,        (4-12)

and define the spaces $V_\nu[0,1]$, $\nu \ge \nu_0$, by

  $V_\nu[0,1] = \mathcal{S}\left(\{X^\mu_{\nu,L}\}_{0 \le \mu \le N-1} \cup \{\phi_{\nu k}\}_{k \in S_{\nu,I}} \cup \{(X-1)^\mu_{\nu,R}\}_{0 \le \mu \le N-1}\right)$.        (4-13)

Note that the restriction to $[0,1]$ of each polynomial $P$ of degree $\le N-1$ is in $V_\nu[0,1]$.

As we shall see, it is quite easy to find an orthonormal basis for $V_\nu[0,1]$. We first observe that the collections $\{X^\mu_{\nu,L}\}_{0 \le \mu \le N-1}$, $\{\phi_{\nu k}\}_{k \in S_{\nu,I}}$, and $\{(X-1)^\mu_{\nu,R}\}_{0 \le \mu \le N-1}$ are mutually orthogonal. This is a consequence of our assumption that the functions $\phi_{\nu k}$ are orthogonal with respect to the inner product on $\mathbb{R}$, since

  $\langle \phi_{\nu k}, \phi_{\nu l}\rangle_{[0,1]} = \langle \phi_{\nu k}, \phi_{\nu l}\rangle_{\mathbb{R}} = 0$, if $k \in S_{\nu,L}$, $l \in S_{\nu,I} \cup S_{\nu,R}$.

The functions in $\{\phi_{\nu k}\}_{k \in S_{\nu,I}}$ are orthonormal by assumption. Hence, the functions in the three collections are linearly independent, and to obtain


an orthonormal basis for $V_\nu[0,1]$ we just have to find orthonormal bases for $\{X^\mu_{\nu,L}\}_{0 \le \mu \le N-1}$ and $\{(X-1)^\mu_{\nu,R}\}_{0 \le \mu \le N-1}$, respectively. This can be accomplished with a Gram-Schmidt type process. More specifically, let us consider the left endpoint and set

  $\varphi^\mu_{\nu,L} = \sum_{\lambda=0}^{N-1} A_{\mu\lambda}\,X^\lambda_{\nu,L}$

for some $N \times N$ matrix $A = A_{\nu,L} = \{A_{\mu\lambda}\}_{\mu,\lambda=0}^{N-1}$. These functions form an orthonormal set exactly when

  $\delta_{\mu\mu'} = \langle \varphi^\mu_{\nu,L}, \varphi^{\mu'}_{\nu,L}\rangle_{[0,1]} = \sum_{\lambda,\lambda'} A_{\mu\lambda}\,A_{\mu'\lambda'}\,\langle X^\lambda_{\nu,L}, X^{\lambda'}_{\nu,L}\rangle_{[0,1]}$.        (4-14)

If we define the matrix $M = M_{\nu,L} = \{M_{\lambda\lambda'}\}_{\lambda,\lambda'=0}^{N-1}$ by letting

  $M_{\lambda\lambda'} = \langle X^\lambda_{\nu,L}, X^{\lambda'}_{\nu,L}\rangle_{[0,1]}$,        (4-15)

then we may rewrite the orthonormality condition (4-14) as

  $I_{N \times N} = AMA^*$.

Now note that $M$ is positive definite and symmetric and, hence, has a Cholesky decomposition $M = CC^*$. The choice

  $A = C^{-1}$

guarantees that the functions in $\{\varphi^\mu_{\nu,L}\}_{\mu=0}^{N-1}$ are orthonormal.

Next we claim that the spaces $V_\nu[0,1]$ form an increasing sequence.

PROPOSITION 4.1. $V_{\nu_0}[0,1] \subset V_{\nu_0+1}[0,1] \subset \dots$.

Proof: To show this we rewrite the refinement relation (2-1) as follows:

  $\phi_{\nu k} = \sum_m h_{m-2k}\,\phi_{\nu+1,m}$.

Now, if $k \in S_{\nu,I}$ and $h_{m-2k} \ne 0$, then

  $2L \le 2k \le m \le 2k + 2N - 1 \le 2^{\nu+1} - (2N-1) - 2R$,

and, in particular, $m \in S_{\nu+1,I}$. This implies that

  $\{\phi_{\nu k}\}_{k \in S_{\nu,I}} \subset \mathcal{S}\{\phi_{\nu+1,k}\}_{k \in S_{\nu+1,I}}$.        (4-16)


We also have

  $X^\mu_{\nu,L} = \sum_{k \in S_{\nu,L}} x^\mu_\nu(k)\,\phi_{\nu k}|_{[0,1]} = \sum_{k \in S_{\nu,L}} x^\mu_\nu(k) \sum_{m \in S_{\nu+1}} h_{m-2k}\,\phi_{\nu+1,m}|_{[0,1]}$
  $= \left(\sum_{m \in S_{\nu+1,L}} + \sum_{m \in S_{\nu+1,I}} + \sum_{m \in S_{\nu+1,R}}\right) \sum_{k \in S_{\nu,L}} x^\mu_\nu(k)\,h_{m-2k}\,\phi_{\nu+1,m}|_{[0,1]}$
  $= I + II + III$.        (4-17)

Using (2-8) and the fact that the functions $\psi_{\nu k}$ are orthogonal to polynomials of degree $\le N-1$, cf. (4-1), we see that

  $x^\mu_{\nu+1}(m) = \langle 2^{(\nu+1)/2}(2^{\nu+1}x)^\mu, \phi_{\nu+1,m}\rangle = 2^{\mu+1/2} \sum_k h_{m-2k}\,\langle 2^{\nu/2}(2^\nu x)^\mu, \phi_{\nu k}\rangle = 2^{\mu+1/2} \sum_k h_{m-2k}\,x^\mu_\nu(k)$.

Furthermore, we note that $m \in S_{\nu+1,L}$ and $h_{m-2k} \ne 0$ implies that $k \in S_{\nu,L}$. Hence,

  $I = \sum_{m \in S_{\nu+1,L}} \sum_k x^\mu_\nu(k)\,h_{m-2k}\,\phi_{\nu+1,m} = 2^{-(\mu+1/2)} \sum_{m \in S_{\nu+1,L}} x^\mu_{\nu+1}(m)\,\phi_{\nu+1,m} = 2^{-(\mu+1/2)}\,X^\mu_{\nu+1,L}$.

$II$ clearly only involves terms that are in $\{\phi_{\nu+1,m}\}_{m \in S_{\nu+1,I}}$, and, finally, (4-6) readily gives that $III = 0$. This shows that $X^\mu_{\nu,L} \in V_{\nu+1}[0,1]$; in the same way we obtain that $(X-1)^\mu_{\nu,R} \in V_{\nu+1}[0,1]$, and this finishes the proof of our claim. □

Let us order the basis elements of $V_\nu[0,1]$ as follows:

  $\varphi_{\nu k} = \begin{cases} \varphi^\mu_{\nu,L} & \text{if } k = k_{\nu,L} - 1 - \mu \text{ for } \mu = 0,\dots,N-1, \\ \phi_{\nu k} & \text{if } k \in S_{\nu,I}, \\ \varphi^\mu_{\nu,R} & \text{if } k = k_{\nu,R} + 1 + \mu \text{ for } \mu = 0,\dots,N-1, \end{cases}$

with $k_{\nu,L} = L$ and $k_{\nu,R} = 2^\nu - (2N-1) - R$ being the smallest respectively largest integer in $S_{\nu,I}$. It will be convenient to use the following notation. We let

  $\mathcal{L}_\nu = \{k : k_{\nu,L} - N \le k \le k_{\nu,L} - 1\}$,  $\mathcal{R}_\nu = \{k : k_{\nu,R} + 1 \le k \le k_{\nu,R} + N\}$,

and $I_\nu = \mathcal{L}_\nu \cup S_{\nu,I} \cup \mathcal{R}_\nu$.
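The Cholesky-based orthonormalization used above, (4-14)-(4-15) with $A = C^{-1}$, is plain linear algebra: $AMA^* = C^{-1}CC^*C^{-*} = I$. A pure-Python sketch of ours, run on a toy (hypothetical) Gram matrix rather than the actual inner products of the $X^\mu_{\nu,L}$:

```python
def cholesky(M):
    """Lower-triangular C with M = C C^T, for symmetric positive definite M."""
    n = len(M)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = M[i][j] - sum(C[i][k] * C[j][k] for k in range(j))
            C[i][j] = s ** 0.5 if i == j else s / C[j][j]
    return C

def lower_inverse(C):
    """Inverse of a lower-triangular matrix by forward substitution."""
    n = len(C)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0 / C[i][i]
        for j in range(i):
            A[i][j] = -sum(C[i][k] * A[k][j] for k in range(j, i)) / C[i][i]
    return A

# Toy positive definite Gram matrix (illustrative stand-in for M_{nu,L}):
M = [[2.0, 1.0, 0.5], [1.0, 2.0, 1.0], [0.5, 1.0, 2.0]]
C = cholesky(M)
A = lower_inverse(C)   # rows of A give the orthonormal combinations
```

Because $C$ is triangular, the change of basis is itself a Gram-Schmidt-type process: the $\mu$-th new function involves only $X^0_{\nu,L}, \dots, X^\mu_{\nu,L}$.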


In particular, $\varphi_{\nu k}$ is defined for $k \in I_\nu$. The fact that the spaces $V_\nu[0,1]$ form an increasing sequence implies, in particular, that there is a refinement relation between the corresponding basis elements. Hence, there is a matrix $H = H^\nu = \{H_{kl}\}$ such that

  $\varphi_{\nu k} = \sum_{l \in I_{\nu+1}} H_{kl}\,\varphi_{\nu+1,l}$, $k \in I_\nu$.        (4-18)

In fact, the proof of Proposition 4.1 readily gives us an explicit description of $H$. The general shape of $H$ is

  $H = \begin{pmatrix} H^{LL} & H^{LI} & 0 \\ 0 & H^{II} & 0 \\ 0 & H^{RI} & H^{RR} \end{pmatrix}$

Here $H^{II} = \{H^{II}_{kl}\}$, with

  $H^{II}_{kl} = h_{l-2k}$, $k \in S_{\nu,I}$, $l \in S_{\nu+1,I}$,

is a band matrix and makes up the main part. The matrices $H^{LL}$ and $H^{RR}$ are $N \times N$ matrices given by $H^{LL} = A_{\nu,L}\,\Lambda_L\,C_{\nu+1,L}$ and $H^{RR} = A_{\nu,R}\,\Lambda_R\,C_{\nu+1,R}$, where $\Lambda_L = \operatorname{diag}(2^{-(N-1/2)}, \dots, 2^{-1/2})$ and $\Lambda_R = \operatorname{diag}(2^{-1/2}, \dots, 2^{-(N-1/2)})$. Most of the elements in $H^{LI}$ and $H^{RI}$ are zero. The nonzero elements can be traced back to the term $II$ in (4-17) (and the corresponding expansion for the right endpoint). Explicitly, we have

  $H^{LI}_{kl} = \sum_{\lambda=0}^{N-1} (A_{\nu,L})_{\mu\lambda} \sum_{j \in S_{\nu,L}} x^\lambda_\nu(j)\,h_{l-2j}$, $k = k_{\nu,L} - 1 - \mu$,

for $k \in \mathcal{L}_\nu$ and $k_{\nu+1,L} \le l \le k_{\nu+1,L} + 2N + L - 3$. Similarly,

  $H^{RI}_{kl} = \sum_{\lambda=0}^{N-1} (A_{\nu,R})_{\mu\lambda} \sum_{j \in S_{\nu,R}} (x-1)^\lambda_\nu(j)\,h_{l-2j}$, $k = k_{\nu,R} + 1 + \mu$,

for $k \in \mathcal{R}_\nu$ and $k_{\nu+1,R} - 2N - R + 3 \le l \le k_{\nu+1,R}$.


4.3. Wavelets on [0,1]

Next we consider the detail spaces $W_\nu[0,1]$ and the associated wavelets. We define the space $W_\nu[0,1]$, $\nu \ge \nu_0$, to be the orthogonal complement of $V_\nu[0,1]$ in $V_{\nu+1}[0,1]$. We easily calculate the dimensions of the spaces involved:

  $\dim V_\nu[0,1] = 2^\nu + 2 - R - L$,
  $\dim W_\nu[0,1] = \dim V_{\nu+1}[0,1] - \dim V_\nu[0,1] = 2^\nu$.        (4-19)

There are certain functions $\psi_{\nu k}$ that are both completely supported inside $[0,1]$ and belong to $V_{\nu+1}[0,1]$. Indeed, the definition (2-5) is equivalent to

  $\psi_{\nu k}(x) = \sum_m g_{m-2k}\,\phi_{\nu+1,m}(x)$.        (4-20)

Hence, if the integer $k$ is such that $g_{m-2k} \ne 0$ implies $m \in S_{\nu+1,I}$, then $\psi_{\nu k} \in V_{\nu+1}[0,1]$. This is exactly the case when $k$ belongs to the set

  $\{k : N - 1 + L/2 \le k \le 2^\nu - N - R/2\}$.        (4-21)

These functions $\psi_{\nu k}$ are thus in $W_\nu[0,1]$. Comparing this against the dimension of $W_\nu[0,1]$, we see that we still need to find approximately

  $2^\nu - \left(2^\nu - N - R/2 - (N - 1 + L/2) + 1\right) = 2N - 2 + (R + L)/2$

functions. Roughly half of these are missing at the left endpoint and half at the right.

To find the remaining functions we shall identify functions in $V_{\nu+1}[0,1]$ that cannot be written as combinations of either the functions in $V_\nu[0,1]$ or the functions $\psi_{\nu k}$ we have already found. First we note that (4-17) shows that the functions $X^\mu_{\nu+1,L}$ and $(X-1)^\mu_{\nu+1,R}$ are combinations of functions in $V_\nu[0,1]$ and functions from the collection $\{\phi_{\nu+1,k}\}_{k \in S_{\nu+1,I}}$. Hence, it will be sufficient to consider the functions in this latter collection. Using the relation (2-8) and a simple calculation, we see that $\phi_{\nu+1,k}$ with $2L + 2(N-1) \le k \le 2^{\nu+1} - 2(2N-1) - 2R + 1$ can be written as combinations of functions in $V_\nu[0,1]$ and the functions $\psi_{\nu k}$ we have already identified in $W_\nu[0,1]$. However, for the functions $\phi_{\nu+1,k}$ with $L \le k \le 2L + 2(N-1) - 1$ or $2^{\nu+1} - 2(2N-1) - 2R + 2 \le k \le 2^{\nu+1} - (2N-1) - R$ this is not the case. To simplify the argument below somewhat, we shall assume that $\nu$ is sufficiently large for these two cases not to have any common elements $k$:

  $2^\nu \ge L + R + 3(N-1)$.        (4-22)

Let $\nu_1$ denote the smallest such $\nu$. Note that $\nu_1 \ge \nu_0$.

Let us now consider the left endpoint for example, which corresponds to the first of the two cases. By (2-8) we have

  $\phi_{\nu+1,2L+2N-3} = h_{2N-1}\,\phi_{\nu,L-1} + h_{2N-3}\,\phi_{\nu,L} + \dots$
  $\phi_{\nu+1,2L+2N-4} = h_{2N-2}\,\phi_{\nu,L-1} + h_{2N-4}\,\phi_{\nu,L} + \dots$        (4-23)
  $\phi_{\nu+1,2L+2N-5} = h_{2N-1}\,\phi_{\nu,L-2} + h_{2N-3}\,\phi_{\nu,L-1} + \dots$
  ⋮


How this sequence of relations ends depends on whether $L$ is odd or even. If $L = 2t_L$ is even, then

  ⋮
  $\phi_{\nu+1,L+1} = h_{2N-1}\,\phi_{\nu,t_L-N+1} + h_{2N-3}\,\phi_{\nu,t_L-N+2} + \dots$
  $\phi_{\nu+1,L} = h_{2N-2}\,\phi_{\nu,t_L-N+1} + h_{2N-4}\,\phi_{\nu,t_L-N+2} + \dots$

and if $L = 2t_L - 1$ is odd,

  ⋮
  $\phi_{\nu+1,L+1} = h_{2N-2}\,\phi_{\nu,t_L-N+1} + h_{2N-4}\,\phi_{\nu,t_L-N+2} + \dots$
  $\phi_{\nu+1,L} = h_{2N-1}\,\phi_{\nu,t_L-N} + h_{2N-3}\,\phi_{\nu,t_L-N+1} + \dots$

In this sequence of functions, every second one is linearly dependent on the previous ones (modulo functions in $V_\nu[0,1]$). To see this, we need to recall that

  $\sum_k h_k h_{k-2l} = \langle \phi_{\nu 0}, \phi_{\nu l}\rangle = \delta_{0l}$,
  $\sum_k h_k g_{k-2l} = \langle \phi_{\nu 0}, \psi_{\nu l}\rangle = 0$.

As a consequence, if we multiply the first relation in (4-23) by $h_1$ and the second one by $h_0$, we obtain that

  $\left(h_0\,\phi_{\nu+1,2L+2N-4} + h_1\,\phi_{\nu+1,2L+2N-3}\right)|_{[0,1]} = (h_1 h_{2N-1} + h_0 h_{2N-2})\,\phi_{\nu,L-1}|_{[0,1]} \bmod V_\nu[0,1] = 0 \bmod V_\nu[0,1]$.

Similarly, we have

  $\left(h_0\,\phi_{\nu+1,2L+2N-6} + h_1\,\phi_{\nu+1,2L+2N-5} + h_2\,\phi_{\nu+1,2L+2N-4} + h_3\,\phi_{\nu+1,2L+2N-3}\right)|_{[0,1]} = 0 \bmod V_\nu[0,1]$,

and so on. The missing functions in $W_\nu[0,1]$ at the left endpoint are now given by, for example,

  $\psi^1_{\nu,L} = \phi_{\nu+1,2L+2N-3}|_{[0,1]} - \operatorname{proj}_{V_\nu[0,1]} \phi_{\nu+1,2L+2N-3}$
  $\psi^2_{\nu,L} = \phi_{\nu+1,2L+2N-5}|_{[0,1]} - \operatorname{proj}_{V_\nu[0,1]} \phi_{\nu+1,2L+2N-5}$        (4-24)
  ⋮

Whether $L = 2t_L$ is even or $L = 2t_L - 1$ is odd, this yields $N - 1 + t_L$ new functions. In the same way we get $N - 1 + t_R$ new functions at the right endpoint if $R = 2t_R$ or $R = 2t_R - 1$.


Let us order the basis elements of $W_\nu[0,1]$ as follows:

  $\psi_{\nu k} = \begin{cases} \psi^\mu_{\nu,L} & \text{if } k = k_{\nu,l} - \mu \text{ for } \mu = 1,\dots,N-1+t_L, \\ \psi_{\nu k} & \text{if } k_{\nu,l} \le k \le k_{\nu,r}, \\ \psi^\mu_{\nu,R} & \text{if } k = k_{\nu,r} + \mu \text{ for } \mu = 1,\dots,N-1+t_R, \end{cases}$

where $k_{\nu,l}$ is the smallest integer $\ge N - 1 + L/2$ and $k_{\nu,r}$ is the largest integer $\le 2^\nu - N - R/2$, cf. (4-21).

With an argument similar to the one above, it is simple to orthonormalize the functions $\psi_{\nu k}$, $k \in J_\nu$. Given a matrix $B = B^\nu = \{B_{kl}\}_{k,l \in J_\nu}$, we let

  $\tilde\psi_{\nu k} = \sum_l B_{kl}\,\psi_{\nu l}$.

The orthonormality of these functions $\tilde\psi_{\nu k}$ is equivalent to

  $I = B\mathcal{N}B^*$,

where $\mathcal{N} = \mathcal{N}^\nu$ is the matrix with entries

  $\mathcal{N}_{kl} = \langle \psi_{\nu k}, \psi_{\nu l}\rangle_{[0,1]}$.        (4-25)

If $\mathcal{N} = DD^*$ is the Cholesky decomposition of $\mathcal{N}$, then one possible choice for $B$ is

  $B = D^{-1}$.        (4-26)

After this orthogonalization procedure most of the functions $\psi_{\nu k}$ remain unchanged. In particular, most of the functions $\tilde\psi_{\nu k}$ coincide with some wavelets on the line, $\psi_{\nu k}$. We let $T_{\nu,I}$ be the corresponding index set:

  $T_{\nu,I} = \{k : \lambda_{\nu,L} \le k \le \lambda_{\nu,R}\} = \{k : \tilde\psi_{\nu k} = \psi_{\nu k}\}$.

(Note that by using the lower triangular Cholesky decomposition at the left edge and the upper triangular Cholesky decomposition at the right, we can always arrange so that the wavelets $\psi_{\nu k}$, $k_{\nu,l} \le k \le k_{\nu,r}$, remain the same and, in particular, $\{k : k_{\nu,l} \le k \le k_{\nu,r}\} = T_{\nu,I}$.) We also define the index sets corresponding to the "exceptional" wavelets on the left and right, respectively:

  $\mathcal{L}_\nu = \{k : k_{\nu,l} - (N-1) - t_L \le k \le \lambda_{\nu,L} - 1\}$,
  $\mathcal{R}_\nu = \{k : \lambda_{\nu,R} + 1 \le k \le k_{\nu,r} + (N-1) + t_R\}$.

The union of these three sets, which we shall denote by $J_\nu$, is thus the full index set for the wavelet functions:

  $J_\nu = \mathcal{L}_\nu \cup T_{\nu,I} \cup \mathcal{R}_\nu = \{k : k_{\nu,l} - (N-1) - t_L \le k \le k_{\nu,r} + (N-1) + t_R\}$.

Let us summarize most of the above as follows.


PROPOSITION 4.2. The space $W_\nu[0,1]$, $\nu \ge \nu_1$, is spanned by the functions $\psi_{\nu k}$, $N - 1 + L/2 \le k \le 2^\nu - N - R/2$, and the functions $\psi^\mu_{\nu,L}$, $1 \le \mu \le N - 1 + t_L$, and $\psi^\mu_{\nu,R}$, $1 \le \mu \le N - 1 + t_R$, defined by (4-24).

For the $\nu$'s with $\nu_0 \le \nu < \nu_1$, the missing functions can be found in a similar way. The main difference is that we can not consider (4-23) and the corresponding relations at the right endpoint separately. In this case there is a block of functions $\phi_{\nu+1,k}$ in the middle that all have to be included and projected. Outside the middle block, there is a left and a right block where, as before, every second function can be ignored.

Using the refinement matrix $H$ for the $\varphi_{\nu k}$ functions, we may write the projections (4-24) in terms of the functions $\varphi_{\nu+1,k}$ as follows:

  $\psi_{\nu,k} = \varphi_{\nu+1,l} - \sum_n \langle \varphi_{\nu+1,l}, \varphi_{\nu n}\rangle\,\varphi_{\nu n} = \varphi_{\nu+1,l} - \sum_n \sum_m H_{nl}\,H_{nm}\,\varphi_{\nu+1,m}$
  $= \sum_{m \in \mathcal{L}_{\nu+1}} \tilde G^{LL}_{km}\,\varphi_{\nu+1,m} + \sum_{m \in S_{\nu+1,I}} \tilde G^{LI}_{km}\,\varphi_{\nu+1,m}$        (4-27)

with $k$ and $l$ related by $l = 2k + 2L + 2N - 2k_{\nu,l} - 1$. The matrices $\tilde G^{LL}$ and $\tilde G^{LI}$ have entries defined by

  $\tilde G^{LL}_{km} = -\sum_{n \in \mathcal{L}_\nu} H^{LI}_{nl}\,H^{LL}_{nm}$

and

  $\tilde G^{LI}_{km} = \delta_{ml} - \sum_{n \in \mathcal{L}_\nu} H^{LI}_{nl}\,H^{LI}_{nm} - \sum_{n=k_{\nu,L}}^{k_{\nu,L}+N-2} h_{l-2n}\,h_{m-2n}$.

At the other endpoint, $x = 1$, we similarly have

  $\psi_{\nu,k} = \sum_{m \in S_{\nu+1,I}} \tilde G^{RI}_{km}\,\varphi_{\nu+1,m} + \sum_{m \in \mathcal{R}_{\nu+1}} \tilde G^{RR}_{km}\,\varphi_{\nu+1,m}$.

Since $W_\nu[0,1] \subset V_{\nu+1}[0,1]$ there is a matrix $G = G^\nu = \{G_{kl}\}$ such that

  $\tilde\psi_{\nu k} = \sum_{l \in I_{\nu+1}} G_{kl}\,\varphi_{\nu+1,l}$, $k \in J_\nu$.        (4-28)

The general shape is

  $G = \begin{pmatrix} G^{LL} & G^{LI} & 0 \\ 0 & G^{II} & 0 \\ 0 & G^{RI} & G^{RR} \end{pmatrix}$


Here $G^{II} = \{g_{l-2k}\}$ is a band matrix and makes up the main part, and

  $G^{LL} = B\tilde G^{LL}$, $G^{LI} = B\tilde G^{LI}$, ⋮

where $B$ is given by (4-26).

For numerical applications it is important that there is a fast algorithm for calculating the wavelet decomposition of a function. Suppose that $f = \sum_k \gamma_{\nu+1,k}\,\varphi_{\nu+1,k}$ is in $V_{\nu+1}[0,1]$ for some integer $\nu \ge \nu_0$. Since $V_{\nu+1}[0,1] = V_\nu[0,1] \oplus W_\nu[0,1]$, $f = f_\nu + g_\nu$ with $f_\nu = \sum_k \gamma_{\nu k}\,\varphi_{\nu k} \in V_\nu[0,1]$ and $g_\nu = \sum_k \delta_{\nu k}\,\tilde\psi_{\nu k} \in W_\nu[0,1]$. This splitting corresponds to the transforms

  $\gamma_{\nu k} = \sum_l H_{kl}\,\gamma_{\nu+1,l}$        (4-29)

and

  $\delta_{\nu k} = \sum_l G_{kl}\,\gamma_{\nu+1,l}$.        (4-30)

The inverse transform is given by

  $\gamma_{\nu+1,l} = \sum_k H_{kl}\,\gamma_{\nu k} + G_{kl}\,\delta_{\nu k}$.        (4-31)

By repeatedly applying the forward transforms we obtain the wavelet decomposition of the original function, and by using the inverse we can get from the wavelet decomposition back to the function. Clearly, these transforms are fast, $O(N)$. The difference compared to the usual transform on the whole line is restricted to a different behavior of the small corner matrices in $H$ and $G$.
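For the Haar filter ($N = 1$) the corner corrections are empty, and the one-level pair (4-29)-(4-30) on $[0,1]$ becomes the orthogonal matrix obtained by stacking $H$ (averages) on top of $G$ (differences). This tiny example, assembled by us for illustration with $2^{\nu+1} = 8$, checks the orthogonality that makes (4-31) the inverse:

```python
import math

r2 = math.sqrt(2.0)
n = 8  # number of coefficients gamma_{nu+1, k} on the fine level
H = [[1 / r2 if l in (2 * k, 2 * k + 1) else 0.0 for l in range(n)]
     for k in range(n // 2)]
G = [[1 / r2 if l == 2 * k else (-1 / r2 if l == 2 * k + 1 else 0.0)
      for l in range(n)] for k in range(n // 2)]
T = H + G  # full one-level transform: rows implement (4-29), then (4-30)
```

Since the rows of T are orthonormal, applying T and then its transpose, which is exactly (4-31), returns the original coefficients.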

4.4. Intervals with rational endpoints

Next let us discuss wavelets associated with more general intervals $[A,B]$. The easiest way to obtain a wavelet decomposition of a function $f$ on $[A,B]$ is of course to make a change of variables and reduce this case to the case of the unit interval. However, there are situations when this is not appropriate or possible. Given the interval $[A,B]$, $A < B$, we may follow the procedure outlined above in the case of $[0,1]$ and define

  $S_\nu = \{k : \operatorname{supp}\phi_{\nu k} \cap (A,B) \ne \emptyset\} = \{k : 2^\nu A - (2N-1) < k < 2^\nu B\}$.        (4-32)

For fixed $L$ and $R$ we also let

  $S_{\nu,L} = \{k : 2^\nu A - (2N-1) < k < 2^\nu A + L\}$,        (4-33)

  $S_{\nu,R} = \{k : 2^\nu B - (2N-1) - R < k < 2^\nu B\}$,        (4-34)

and

  $S_{\nu,I} = \{k : \operatorname{supp}\phi_{\nu,k-L} \text{ and } \operatorname{supp}\phi_{\nu,k+R} \subset (A,B)\} = \{k : 2^\nu A + L \le k \le 2^\nu B - (2N-1) - R\}$.        (4-35)

We shall still assume that the scaling parameter $\nu$ is sufficiently large so that the sets $S_{\nu,L}$ and $S_{\nu,R}$ are disjoint, and $\nu_0$ will denote the smallest such $\nu$. Clearly, $S_\nu = S_{\nu,L} \cup S_{\nu,I} \cup S_{\nu,R}$ when $\nu \ge \nu_0$. The spaces $V_\nu[A,B]$, $\nu \ge \nu_0$, are given by

V [A; B] = f(X ? A) ;L g0 N ?1 [ fk gk2S;I [ f(X ? B) ;R g0 N ?1: To obtain an orthonormal basis for V [A; B] we can proceed as before. Similarly, it is straightforward to de ne the spaces W [A; B] and nd an orthonormal basis of wavelets k . This procedure yields an orthonormal basis for L2[A; B] for an arbitrary interval [A; B]. However, we also want these bases to be unconditional bases for Sobolev spaces and other smoothness spaces, and there should also be a characterizations of smoothness spaces in terms of the coecients in the wavelet decomposition. To show that the bases we have constructed do indeed have these properties, we need to have a good control of the matrices H and G involved as the scale  varies. The proof of the uncondionality and the characterization in terms of coecients then follow using standard arguments (cf. [14], [15],[18]). One way to obtain the necessary estimates of the matrices is to assume that the endpoints A and B are rational numbers. To explain why this assumption is helpful, let us consider the H matrices for example. Clearly, we have a very precise understanding of the center part of H , since the entries are given by Hkl = hl?2k ; this is the same as saying that the functions k with support inside [A; B] are not e ected by a restriction from the line to [A; B]. Now, the functions (X ? A) ;L and (X ? B) ;R , which are involved at the endpoints, only a ect H in the upper left and lower right corners. Part of H comes from the matrices M , see (4{15), so let us look at the entries of this matrix. We have

$$M^{\alpha\beta}_\nu = \Big\langle (X-A)^\alpha_{\nu,L},\, (X-A)^\beta_{\nu,L} \Big\rangle = \sum_{k\in S_{\nu,L}} \sum_{l\in S_{\nu,L}} \big((x-A)^\alpha \star \varphi\big)(k)\, \big((x-A)^\beta \star \varphi\big)(l)\, \langle \varphi_k, \varphi_l \rangle_{[A,B]}.$$
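Orthonormalizing a finite family of functions, as is done for the boundary functions here, can be phrased entirely in terms of their Gram matrix. The following sketch is our own illustration (the $3\times 3$ Gram matrix is hypothetical, not data from the paper) of the standard Cholesky route.

```python
import numpy as np

# Hypothetical Gram matrix G[i, j] = <f_i, f_j> of three linearly
# independent boundary functions; any symmetric positive definite
# matrix would do.
G = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 1.5]])

# Write G = L L^T.  The functions g_i = sum_j (L^{-1})[i, j] f_j then
# have Gram matrix L^{-1} G L^{-T} = I, i.e., they are orthonormal.
L = np.linalg.cholesky(G)
Linv = np.linalg.inv(L)
G_new = Linv @ G @ Linv.T

print(np.round(G_new, 10))
```

The same computation applies to the corner matrices $M_\nu$; as explained below, for rational endpoints only finitely many such matrices occur, so only finitely many factorizations are ever needed.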

By a change of variables and using that $2^\nu B - k > 2N - 1$ whenever $k \in S_{\nu,L}$, we see that

$$\langle \varphi_k, \varphi_l \rangle_{[A,B]} = \int_{2^\nu A - k}^{+\infty} \varphi(y)\,\varphi(y + k - l)\, dy.$$

Similarly, we have

$$\big((x-A)^\alpha \star \varphi\big)(k) = \int_{-\infty}^{+\infty} (y + k - 2^\nu A)^\alpha \varphi(y)\, dy,$$

and an analogous expression for $((x-A)^\beta \star \varphi)(l)$. If $A$ is rational, $2^\nu A \bmod \mathbb{Z}$ only takes on finitely many different values as $\nu$ varies ($\nu \ge \nu_0$). The same is then true for the above integrals and for the analogous expressions at the right endpoint. As a consequence, there are only finitely many different matrices $M_\nu$ and corner matrices making up the $H_\nu$ matrices. The same is true for the $G_\nu$ matrices. Now, this implies that there are finitely many functions $\varphi^i$ so that the orthonormal basis functions for each of the spaces $V_\nu[A,B]$, $\nu \ge \nu_0$, are obtained by a translation and dilation of one of these: $\varphi_{\nu k} = 2^{\nu/2}\varphi^i(2^\nu x - k)$ for some $i$. There are also finitely many functions $\psi^i$ so that the basis functions for $W_\nu[A,B]$, $\nu \ge \nu_0$, are translations and dilations of one of the $\psi^i$'s: $\psi_{\nu k}(x) = 2^{\nu/2}\psi^i(2^\nu x - k)$ for some $i$. Assume now that the functions $\varphi$ and $\psi$ that we start from are sufficiently "nice" (what is needed is, roughly, that $\varphi, \psi \in B^{\alpha}_{1,1} \cap B^{-\alpha}_{1,1}$, where $B^{-\alpha}_{1,1}$ is the predual of $\mathrm{Lip}\,\alpha = B^{\alpha}_{\infty,\infty}$). In a standard way (cf. [15]) we may then prove characterizations of different function spaces in terms of their wavelet coefficients. As a typical example we state the following.

THEOREM 4.3. Let $0 < \alpha < 1$ and let $[A,B]$ be an interval with rational endpoints. We have

$$\|f\|_{\mathrm{Lip}\,\alpha} \approx \sup_i \sup_k \big|\langle f, \varphi^i_{\nu_0 k}\rangle\big| + \sup_i \sup_{\nu \ge \nu_0,\, k} 2^{\nu(\alpha + 1/2)} \big|\langle f, \psi^i_{\nu k}\rangle\big|.$$
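The finiteness that drives these estimates is elementary to observe numerically: for rational $A$, the fractional parts of $2^\nu A$ are eventually periodic in $\nu$. A quick check (our own illustration; the endpoint $A = 5/12$ is an arbitrary example):

```python
from fractions import Fraction

A = Fraction(5, 12)  # hypothetical rational endpoint

# Fractional parts of 2**nu * A; for rational A these are eventually
# periodic, so only finitely many distinct values ever occur.
frac_parts = {(2**nu * A) % 1 for nu in range(20)}
print(sorted(frac_parts))
```

For this example only four distinct values occur, so only four distinct corner-matrix configurations can arise, no matter how many scales are used.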

§5. Splitting and merging decompositions

Suppose we start with an interval $I = [A,B]$ and split it into two subintervals $I^{(1)} = [A,C]$ and $I^{(2)} = [C,B]$, $A < C < B$. A function $f$ on $I$ then similarly splits into two functions $f^{(1)}$ and $f^{(2)}$. For example,

$$f^{(1)}(x) = \begin{cases} f(x) & \text{if } x \in I^{(1)}, \\ 0 & \text{otherwise,} \end{cases}$$

and $f(x) = f^{(1)}(x) + f^{(2)}(x)$, $x \in I$. Associated with the full interval $I$ there is one wavelet decomposition of $f$:

$$f(x) = \sum_{k \in I_{\nu_0}} \lambda_{\nu_0 k}\, \varphi_{\nu_0 k}(x) + \sum_{\nu \ge \nu_0} \sum_{k \in J_\nu} \gamma_{\nu k}\, \psi_{\nu k}(x).$$

Another possibility is to represent $f$ using the wavelet decompositions of $f^{(1)}$ and $f^{(2)}$, associated with the subintervals $I^{(1)}$ and $I^{(2)}$:

$$f^{(i)}(x) = \sum_{k \in I^{(i)}_{\nu_0}} \lambda^{(i)}_{\nu_0 k}\, \varphi^{(i)}_{\nu_0 k}(x) + \sum_{\nu \ge \nu_0} \sum_{k \in J^{(i)}_\nu} \gamma^{(i)}_{\nu k}\, \psi^{(i)}_{\nu k}(x), \qquad i = 1, 2.$$

The purpose of this section is to investigate the relation between the wavelet decomposition of $f$ on the full interval $I$, on one hand, and the wavelet decompositions of $f^{(1)}$ and $f^{(2)}$ on the two subintervals $I^{(1)}$ and $I^{(2)}$, on the other. To make our analysis easier, we shall assume that $I$ and the two subintervals $I^{(1)}$ and $I^{(2)}$ are sufficiently long, to make the endpoints independent in the sense we have discussed in Section 4. This means, in particular, that away from the right endpoint of $I^{(1)}$, the basic functions $\varphi^{(1)}_{\nu_0 k}$ and $\psi^{(1)}_{\nu k}$, used in the wavelet decomposition on $I^{(1)}$, are the same as certain functions $\varphi_{\nu_0 k}$ and $\psi_{\nu k}$ used in the wavelet decomposition on $I$. Similarly, away from the left endpoint of $I^{(2)}$ the basic functions on $I^{(2)}$ coincide with some of those on $I$. More specifically, we have

$$\varphi^{(1)}_{\nu_0 k} = \varphi_{\nu_0 k} \ \text{ if } k \in L^{(1)}_{\nu_0} \cup S^{(1)}_{\nu_0,I}, \qquad \psi^{(1)}_{\nu k} = \psi_{\nu k} \ \text{ if } k \in L^{(1)}_{\nu} \cup T^{(1)}_{\nu,I}, \tag{5-1}$$

and

$$\varphi^{(2)}_{\nu_0 k} = \varphi_{\nu_0 k} \ \text{ if } k \in S^{(2)}_{\nu_0,I} \cup R^{(2)}_{\nu_0}, \qquad \psi^{(2)}_{\nu k} = \psi_{\nu k} \ \text{ if } k \in T^{(2)}_{\nu,I} \cup R^{(2)}_{\nu}. \tag{5-2}$$

Because of the orthonormality, the coefficients $WT(f) = \{\lambda_{\nu_0 k}\}_{k \in I_{\nu_0}} \cup \{\gamma_{\nu k}\}_{k \in J_\nu,\, \nu \ge \nu_0}$ and $WT(f^{(i)}) = \{\lambda^{(i)}_{\nu_0 k}\}_{k \in I^{(i)}_{\nu_0}} \cup \{\gamma^{(i)}_{\nu k}\}_{k \in J^{(i)}_\nu,\, \nu \ge \nu_0}$, $i = 1, 2$, for a given function $f$ on $I$ are all given by appropriate inner products. Let us consider the operator $Sp$ that takes the sequence $WT(f)$ and "splits" it into the two sequences $WT(f^{(1)}) \cup WT(f^{(2)})$:

$$Sp : WT(f) \mapsto WT(f^{(1)}) \cup WT(f^{(2)}). \tag{5-3}$$

The relations (5-1) and (5-2) imply that most of the coefficients in $WT(f^{(1)}) \cup WT(f^{(2)})$ coincide with certain coefficients in $WT(f)$. In fact, only the coefficients that are associated with the edge between $I^{(1)}$ and $I^{(2)}$ are (possibly) different. Let us be more precise. We define

$$f^{\mathrm{edge}}(x) = \sum_{k \in E_{\nu_0}} \lambda_{\nu_0 k}\, \varphi_{\nu_0 k}(x) + \sum_{\nu \ge \nu_0} \sum_{k \in E_\nu} \gamma_{\nu k}\, \psi_{\nu k}(x),$$

with

$$E_{\nu_0} = S^{(1)}_{\nu_0,R} \cup S^{(2)}_{\nu_0,L}$$

and

$$E_\nu = \big( L^{(1)}_\nu \cup T^{(1)}_{\nu,I} \cup T^{(2)}_{\nu,I} \cup R^{(2)}_\nu \big)^c.$$

We also let

$$f^{(1),\mathrm{edge}}(x) = \sum_{k \in R^{(1)}_{\nu_0}} \lambda^{(1)}_{\nu_0 k}\, \varphi^{(1)}_{\nu_0 k}(x) + \sum_{\nu \ge \nu_0} \sum_{k \in R^{(1)}_\nu} \gamma^{(1)}_{\nu k}\, \psi^{(1)}_{\nu k}(x)$$

and

$$f^{(2),\mathrm{edge}}(x) = \sum_{k \in L^{(2)}_{\nu_0}} \lambda^{(2)}_{\nu_0 k}\, \varphi^{(2)}_{\nu_0 k}(x) + \sum_{\nu \ge \nu_0} \sum_{k \in L^{(2)}_\nu} \gamma^{(2)}_{\nu k}\, \psi^{(2)}_{\nu k}(x).$$

Note that $WT(f)$, $WT(f^{(1)})$, and $WT(f^{(2)})$ each break up into two pairwise disjoint sets,

$$WT(f) = WT(f^{\mathrm{edge}}) \cup WT(f - f^{\mathrm{edge}}),$$
$$WT(f^{(1)}) = WT(f^{(1),\mathrm{edge}}) \cup WT(f^{(1)} - f^{(1),\mathrm{edge}}),$$
$$WT(f^{(2)}) = WT(f^{(2),\mathrm{edge}}) \cup WT(f^{(2)} - f^{(2),\mathrm{edge}}).$$

The relations (5-1) and (5-2) imply that

$$f(x) - f^{\mathrm{edge}}(x) = f^{(1)}(x) - f^{(1),\mathrm{edge}}(x) + f^{(2)}(x) - f^{(2),\mathrm{edge}}(x),$$
$$f^{\mathrm{edge}}(x) = f^{(1),\mathrm{edge}}(x) + f^{(2),\mathrm{edge}}(x).$$

They also show that

$$WT(f - f^{\mathrm{edge}}) = WT(f^{(1)} - f^{(1),\mathrm{edge}}) \cup WT(f^{(2)} - f^{(2),\mathrm{edge}}).$$

Hence, when restricted to the subsequence $WT(f - f^{\mathrm{edge}})$ of $WT(f)$, the operator $Sp$ reduces to the identity. In other words, the only nontrivial relation is the one between $WT(f^{\mathrm{edge}})$ and each of $WT(f^{(1),\mathrm{edge}})$ and $WT(f^{(2),\mathrm{edge}})$. To obtain these relations, we may simply take the inverse wavelet transform of $f^{\mathrm{edge}}$ on $[A,B]$, split $f^{\mathrm{edge}} = f^{(1),\mathrm{edge}} + f^{(2),\mathrm{edge}}$, and then take the forward wavelet transforms of $f^{(1),\mathrm{edge}}$ and $f^{(2),\mathrm{edge}}$ on $[A,C]$ and $[C,B]$, respectively. However, there is a much more efficient way than this to get from $WT(f^{\mathrm{edge}})$ to $WT(f^{(1),\mathrm{edge}})$ and $WT(f^{(2),\mathrm{edge}})$. The key is to observe that there is a lot of redundancy in the scheme we just described. When we apply the inverse wavelet transform recursively to $f^{\mathrm{edge}}$, many of the coefficients we create coincide with some that we need later when we recursively apply the forward transforms to $f^{(1),\mathrm{edge}}$ and $f^{(2),\mathrm{edge}}$. This is due to the fact that, as we go through the recursion, many of the coefficients are not really affected by the edge, and for these we have relations similar to (5-1) and (5-2). Instead of creating these coefficients twice, we put them aside at each step and only apply the inverse transform to the remaining set. Later, when we apply the forward transforms, we pick them up again as we need them. Also, note that we only need the part of the forward transforms that creates edge-related coefficients. The resulting algorithm has a complexity proportional to the number of levels we need to process times the filter length $2N$. See Figure 5.1.
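The mechanism is easiest to see in the simplest possible case: for the orthonormal Haar system (filter length 2), splitting at a dyadic point leaves every detail coefficient untouched, and only the coarsest data must be recomputed. The sketch below is our own demonstration of this effect, not the paper's algorithm (which handles general filter length $2N$).

```python
import math

def haar_step(s):
    """One level of the orthonormal Haar transform: averages and details."""
    r2 = math.sqrt(2.0)
    a = [(s[2 * i] + s[2 * i + 1]) / r2 for i in range(len(s) // 2)]
    d = [(s[2 * i] - s[2 * i + 1]) / r2 for i in range(len(s) // 2)]
    return a, d

def haar_wt(s):
    """Full Haar transform: detail vectors (fine to coarse) plus final average."""
    details = []
    while len(s) > 1:
        s, d = haar_step(s)
        details.append(d)
    return details, s[0]

f = [4.0, 2.0, 5.0, 7.0, 1.0, 0.0, 3.0, 6.0]
mid = len(f) // 2
det_full, avg_full = haar_wt(f)
det_l, avg_l = haar_wt(f[:mid])
det_r, avg_r = haar_wt(f[mid:])

# All detail coefficients of the two halves reappear verbatim in the
# transform of the full signal ("Sp is the identity off the edge") ...
for lev in range(len(det_l)):
    assert det_full[lev] == det_l[lev] + det_r[lev]

# ... and only the coarsest data differ: the full transform's last detail
# and average are a rotation of the two halves' averages.
r2 = math.sqrt(2.0)
assert abs(det_full[-1][0] - (avg_l - avg_r) / r2) < 1e-12
assert abs(avg_full - (avg_l + avg_r) / r2) < 1e-12
```

For longer filters the coefficients within roughly $2N$ positions of the edge form the nontrivial part, which is what the level-by-level recursion of Figure 5.1 processes.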

Figure 5.1. The split algorithm. At each level $\nu$, the coefficients $\lambda^{(1+2)}_{\nu k}$, $\gamma^{(1+2)}_{\nu k}$ with $k \in E_\nu$ are processed further, while the coefficients with $k \notin R^{(1)}_\nu$ and $k \notin L^{(2)}_\nu$ are passed through unchanged.
The B-splines $N_m$, $m > 1$, can be defined inductively by using the relation

$$N_m(x) = N_{m-1} * N_1(x).$$

Let us fix one of these and set $\varphi(x) = N_m(x)$. There are now in fact several possible choices for the dual function $\tilde\varphi$, see [5]. The choice depends on how much smoothness and what symmetry properties we want the dual function to have, whether $\tilde V_\nu$ must contain polynomials up to a certain degree, etc. Such properties can often be exploited in different ways when constructing wavelets on a closed subset of the line. To illustrate this we shall, in some detail, consider the B-splines of order 2, the so-called hat functions, cf. [4]. In this case, the refinement relation is given by

$$\varphi(x) = \tfrac12\varphi(2x) + \varphi(2x-1) + \tfrac12\varphi(2x-2),$$

i.e.,

$$h_0 = \frac{\sqrt2}{4}, \quad h_1 = \frac{\sqrt2}{2}, \quad h_2 = \frac{\sqrt2}{4}, \tag{8-15}$$

and $\mathrm{supp}\,\varphi = [0,2]$. For the dual function $\tilde\varphi$ we pick the function satisfying the refinement relation with

$$\tilde h_{-1} = \tilde h_3 = -\frac{\sqrt2}{8}, \qquad \tilde h_0 = \tilde h_2 = \frac{\sqrt2}{4}, \qquad \tilde h_1 = \frac{3\sqrt2}{4}.$$

This implies, in particular, that $\mathrm{supp}\,\tilde\varphi = [-1,3]$. Note that

$$\sum_k h_k = \sum_k \tilde h_k = \sqrt2, \qquad \sum_k k\, h_k = \sum_k k\, \tilde h_k = \sqrt2. \tag{8-16}$$

Hence, by (4-9), the 0th and 1st moments of $\varphi$ and $\tilde\varphi$ are

$$M_0 = \tilde M_0 = 1, \qquad M_1 = \tilde M_1 = 1. \tag{8-17}$$

Also note that

$$\sum_k (-1)^k h_k = \sum_k (-1)^k \tilde h_k = 0, \qquad \sum_k (-1)^k k\, h_k = \sum_k (-1)^k k\, \tilde h_k = 0,$$

which implies, in particular, that a general linear function can be obtained through linear combinations of either $\{\varphi_{\nu k}\}$ or $\{\tilde\varphi_{\nu k}\}$:

$$P_1 \subset \overline{\mathrm{span}}\,\{\varphi_{\nu k}\}_{k \in \mathbb{Z}} \quad \text{and} \quad P_1 \subset \overline{\mathrm{span}}\,\{\tilde\varphi_{\nu k}\}_{k \in \mathbb{Z}}.$$
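The identities above are easy to confirm mechanically. A small sanity check (our own script; the filters are those of (8-15) and the dual filter just given):

```python
import math

r2 = math.sqrt(2.0)
h = {0: r2 / 4, 1: r2 / 2, 2: r2 / 4}                          # (8-15)
ht = {-1: -r2 / 8, 0: r2 / 4, 1: 3 * r2 / 4, 2: r2 / 4, 3: -r2 / 8}

for c in (h, ht):
    # sums and first moments, cf. (8-16)
    assert abs(sum(c.values()) - r2) < 1e-12
    assert abs(sum(k * v for k, v in c.items()) - r2) < 1e-12
    # alternating sums vanish: linear polynomials are reproduced
    assert abs(sum((-1) ** k * v for k, v in c.items())) < 1e-12
    assert abs(sum((-1) ** k * k * v for k, v in c.items())) < 1e-12

print("filter identities verified")
```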

For the basic shape of the scaling functions and the corresponding wavelets on the real line we refer to Figure 8.1.

Figure 8.1. Scaling functions $\varphi_{\nu k}$, $\tilde\varphi_{\nu k}$ and wavelets $\psi_{\nu k}$, $\tilde\psi_{\nu k}$ on the line ($\nu = -5$; $k = 3$ for the scaling functions and $k = 4$ for the wavelets).

To fix ideas we shall design the $V_\nu[A,B]$ and $\tilde V_\nu[A,B]$ spaces in such a way that they both contain the linear functions on $[A,B]$. Another possibility would be to relax this somewhat and only preserve constants on the edges. We choose $\delta_L = \delta_R = 1$ and $\tilde\delta_L = \tilde\delta_R = 0$. If we instead had picked $\delta_L = 1$, $\delta_R = 2$ and $\tilde\delta_L = 0$, $\tilde\delta_R = 1$, then we would have gained that $\dim V_\nu[A,B] = \dim W_\nu[A,B]$ when the endpoints are dyadic, but the construction of the basic functions would be less symmetric. We shall use the notation $\lceil x\rceil$ for the smallest integer strictly greater than $x$ and $\lfloor x\rfloor$ for the largest integer strictly less than $x$.

We have

$$S_\nu = \{\,k : \lceil 2^\nu A\rceil - 2 \le k \le \lfloor 2^\nu B\rfloor\,\}$$

and

$$S_{\nu,I} = \{\,k : \lceil 2^\nu A\rceil + 2 \le k \le \lfloor 2^\nu B\rfloor - 4\,\} = \{\,k : k_{\nu,L} \le k \le k_{\nu,R}\,\},$$
$$S_{\nu,L} = \{\,k : \lceil 2^\nu A\rceil - 2 \le k \le \lceil 2^\nu A\rceil + 1\,\},$$
$$S_{\nu,R} = \{\,k : \lfloor 2^\nu B\rfloor - 3 \le k \le \lfloor 2^\nu B\rfloor\,\}.$$

Similarly, $\tilde S_\nu = \{\,k : \lceil 2^\nu A\rceil - 3 \le k \le \lfloor 2^\nu B\rfloor + 1\,\}$ and

$$\tilde S_{\nu,I} = \{\,k : \lceil 2^\nu A\rceil + 2 \le k \le \lfloor 2^\nu B\rfloor - 4\,\},$$
$$\tilde S_{\nu,L} = \{\,k : \lceil 2^\nu A\rceil - 3 \le k \le \lceil 2^\nu A\rceil + 1\,\},$$
$$\tilde S_{\nu,R} = \{\,k : \lfloor 2^\nu B\rfloor - 3 \le k \le \lfloor 2^\nu B\rfloor + 1\,\}.$$

For the basic functions in $V_\nu[A,B]$ we pick

$$\varphi_{\nu k} = \begin{cases} (X-A)^\alpha_{\nu,L} & \text{if } k = k_{\nu,L} - 1 - \alpha \text{ for } \alpha = 0,1, \\ \varphi_{\nu k} & \text{if } k_{\nu,L} \le k \le k_{\nu,R}, \\ (X-B)^\alpha_{\nu,R} & \text{if } k = k_{\nu,R} + 1 + \alpha \text{ for } \alpha = 0,1, \end{cases}$$

and for the basic functions in $\tilde V_\nu[A,B]$ we may choose

$$\tilde\varphi_{\nu k} = \begin{cases} \widetilde{(X-A)}{}^\alpha_{\nu,L} & \text{if } k = k_{\nu,L} - 1 - \alpha \text{ for } \alpha = 0,1, \\ \tilde\varphi_{\nu k} & \text{if } k_{\nu,L} \le k \le k_{\nu,R}, \\ \widetilde{(X-B)}{}^\alpha_{\nu,R} & \text{if } k = k_{\nu,R} + 1 + \alpha \text{ for } \alpha = 0,1. \end{cases}$$

Figure 8.2. Condition number of $M_L$ (values ranging from about 15 to 17 as the endpoint offset varies between $-0.75$ and $-0.25$).

To obtain a biorthogonal set out of these functions we need the matrices $M_L = M_{\nu,L} = \{M^{\alpha\beta}_\nu\}_{\alpha,\beta=0}^{1}$ and $M_R = M_{\nu,R}$, cf. (4-15). In fact, it is easy to find an explicit formula for these. If we consider the left endpoint, for example, we have

$$M_L = \Big\langle (X-A)^\alpha_{\nu,L},\, \widetilde{(X-A)}{}^\beta_{\nu,L} \Big\rangle_{[A,B]} = \Big\langle (X-A)^\alpha_{\nu,L},\, 2^{\nu(\beta+1/2)}(x-A)^\beta \Big\rangle_{[A,B]},$$

since the part of $(x-A)^\beta$ which is not included in $\widetilde{(X-A)}{}^\beta_{\nu,L}$ is orthogonal to $(X-A)^\alpha_{\nu,L}$.

Figure 8.3. Left endpoint scaling functions $\varphi_{\nu k}$ and $\tilde\varphi_{\nu k}$ ($\nu = -5$; $k = -1$ and $k = 0$).

By the definition of $(X-A)^\alpha_{\nu,L}$ and (8-5), the right-hand side equals

$$\sum_{k \in S_{\nu,L}} \Big\langle P^\alpha(2^\nu A - k)\, \varphi_{\nu k},\, 2^{\nu(\beta+1/2)}(x-A)^\beta \Big\rangle_{[A,B]}.$$

By (8-17) and the definition (8-6) of $P^\alpha$, we have $P^0(x) \equiv 1$ and $P^1(x) = 1 + x$. The inner products $\langle \varphi_{\nu k}, 2^{\nu(\beta+1/2)}(x-A)^\beta \rangle_{[A,B]}$ are easily calculated, and, putting this together, we see that

$$M_L = \begin{pmatrix} \dfrac{5+x}{2} & \dfrac{19+15x+3x^2}{6} \\[2mm] \dfrac{6+5x+x^2}{2} & \dfrac{30+37x+15x^2+2x^3}{6} \end{pmatrix} \tag{8-18}$$

with $x = 2^\nu A - \lceil 2^\nu A\rceil$; in Figure 8.2 the condition number of $M_L$ is plotted as a function of $x$ for all possible endpoints $A$. Now, to find a biorthogonal set of functions, one possibility is to keep the functions $\varphi_{\nu k}$ and use the inverses of the matrices $M_L$ and $M_R$ to find $\tilde\varphi_{\nu k}$ as appropriate linear combinations of the $\tilde\varphi_{\nu k}$'s; cf. Figures 8.3-8.4. To find the corresponding biorthogonal wavelets $\psi_{\nu k}$ and $\tilde\psi_{\nu k}$ we may repeat the argument used in the orthogonal case, with some obvious modifications.
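Taking the entries of (8-18) as reconstructed above at face value, the behavior shown in Figure 8.2 can be examined numerically; the sketch below (our own, not from the paper) evaluates the condition number of $M_L$ over a range of offsets $x$ and confirms that the matrix stays invertible.

```python
import numpy as np

def M_L(x):
    # Entries as in (8-18); x is the fractional offset of the left endpoint.
    return np.array([
        [(5 + x) / 2,            (19 + 15 * x + 3 * x**2) / 6],
        [(6 + 5 * x + x**2) / 2, (30 + 37 * x + 15 * x**2 + 2 * x**3) / 6],
    ])

for x in np.linspace(-0.9, -0.1, 9):
    M = M_L(x)
    print(f"x = {x:+.2f}   det = {np.linalg.det(M):.4f}   cond = {np.linalg.cond(M):.2f}")
```

Boundedness of the condition number over all admissible $x$ is exactly what makes the biorthogonalization via $M_L^{-1}$ numerically safe, uniformly in the endpoint.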

Figure 8.4. Left endpoint wavelets $\psi_{\nu k}$ and $\tilde\psi_{\nu k}$ ($\nu = -5$; $k = 0, 1$ and $k = 2$).

8.4. Recursively defined wavelets on an interval

Let us fix a scaling function $\varphi$ and a corresponding dual function $\tilde\varphi$ with the coefficients in the refinement relations given by $\{h_k\}$ and $\{\tilde h_k\}$, respectively. Suppose now that we start with two families of functions, $\{\varphi^{\nu_{\max}}_k\}$ and $\{\tilde\varphi^{\nu_{\max}}_k\}$, which are biorthogonal but not necessarily generated by the scaling functions $\varphi$ and $\tilde\varphi$. In an obvious way we may now, at least formally, imitate the definitions of $V_{\nu_{\max}}$ and $\tilde V_{\nu_{\max}}$, respectively. Suppose that we define the functions $\varphi^\nu_k$ and $\tilde\varphi^\nu_k$ recursively via the refinement relations

$$\varphi^\nu_k = \sum_m h_{m-2k}\, \varphi^{\nu+1}_m, \qquad \nu < \nu_{\max}, \tag{8-19}$$

and similarly for $\tilde\varphi^\nu_k$. Then we may use these functions to generate spaces $V_\nu$ and $\tilde V_\nu$, $\nu < \nu_{\max}$. Furthermore, we may define wavelets $\psi^\nu_k$ and $\tilde\psi^\nu_k$ using the recursive relation

$$\psi^\nu_k = \sum_m g_{m-2k}\, \varphi^{\nu+1}_m, \qquad \nu < \nu_{\max}, \tag{8-20}$$
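Starting the recursion (8-19) from $\chi_{[0,1]}$ at the finest level is precisely the cascade algorithm. The sketch below (our own illustration) runs it for the hat-function mask and checks that the exact hat function is reproduced on the dyadic grid.

```python
# Cascade iteration for the hat-function mask: start from a single
# coefficient (the box function at the finest level, on its own grid)
# and repeatedly upsample and convolve with m_k = sqrt(2) * h_k, i.e.,
# the mask (1/2, 1, 1/2) from the refinement relation of the hat.
mask = [0.5, 1.0, 0.5]

def cascade_step(c):
    up = []
    for v in c:                 # upsample by 2
        up.extend([v, 0.0])
    up = up[:-1]
    out = [0.0] * (len(up) + len(mask) - 1)
    for i, u in enumerate(up):  # convolve with the mask
        for j, m in enumerate(mask):
            out[i + j] += u * m
    return out

n = 6
c = [1.0]
for _ in range(n):
    c = cascade_step(c)

# After n steps, c[k] = phi((k + 1) / 2**n) for the hat function phi,
# supported on [0, 2]; for this mask the sampled values are exact.
assert abs(c[2**n - 1] - 1.0) < 1e-12        # phi(1) = 1 (the peak)
assert abs(c[2**(n - 1) - 1] - 0.5) < 1e-12  # phi(1/2) = 1/2
assert abs(c[0] - 1.0 / 2**n) < 1e-12        # phi(2**-n) = 2**-n
print("cascade reproduces the hat function")
```

The same recursion underlies the finite multiresolution analysis below; on an interval, the index set is simply restricted near the endpoints.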

and similarly for $\tilde\psi^\nu_k$. Under quite general conditions on the families $\{\varphi^{\nu_{\max}}_k\}$ and $\{\tilde\varphi^{\nu_{\max}}_k\}$, the above indicated procedure leads to a (finite) multiresolution analysis which, in the appropriate sense, converges to the one generated by the scaling functions $\varphi$ and $\tilde\varphi$, see [11], [4]. We may ask whether it is possible to generate such finite, recursively defined multiresolution analyses on an interval $[A,B]$ as well. This is indeed possible, both in the case of Meyer's original construction (at least as long as the endpoints $A$ and $B$ are rational) and in the case of the construction we have studied here. Again, for simplicity, we shall only illustrate this in the case of the hat functions. There is really only one added difficulty. Suppose we go ahead and apply the same argument as in Section 4 for orthogonal wavelets and as outlined above in the biorthogonal case. We then find that the analogue of the relations (4-23) in fact yields too many $\psi_{\nu k}$ and $\tilde\psi_{\nu k}$ functions. In other words, some of the functions $\psi_{\nu k}$ and $\tilde\psi_{\nu k}$ that we obtain from (4-23) must be linearly dependent. To see where this linear dependence comes from, we shall first introduce some notation. Let us fix an interval $[A,B]$ with integer endpoints and let us choose the finest level $\nu_{\max} = 0$. We pick

$$\varphi^0(x) = \tilde\varphi^0(x) = \chi_{[0,1]}(x),$$

and define $\varphi^\nu$ and $\tilde\varphi^\nu$, $\nu < 0$, inductively, using (8-19) and the coefficients (8-15). With these choices we have $\mathrm{supp}\,\varphi^\nu = [a_\nu, b_\nu]$ and $\mathrm{supp}\,\tilde\varphi^\nu = [\tilde a_\nu, \tilde b_\nu]$, with

$$a_\nu = \frac{a_{\nu+1}}{2} = 0, \qquad b_\nu = \frac{b_{\nu+1}}{2} + 1,$$

and

$$\tilde a_\nu = \frac{\tilde a_{\nu+1}}{2} - \frac{1}{2}, \qquad \tilde b_\nu = \frac{\tilde b_{\nu+1}}{2} + \frac{3}{2}.$$

Of course, $a_0 = \tilde a_0 = 0$ and $b_0 = \tilde b_0 = 1$. For $\nu = 0$ we pick $S_{0,I} = \tilde S_{0,I} = \{k : A \le k < B\}$. This implies then that $S_{0,L} = S_{0,R} = \emptyset$ and $\tilde S_{0,L} = \tilde S_{0,R} = \emptyset$. For $\nu < 0$ we have

$$S_{\nu,I} = \{\,k : \lceil 2^\nu A - a_\nu\rceil + \delta_L + 1 \le k \le \lfloor 2^\nu B - b_\nu\rfloor - \delta_R - 1\,\}$$

and

$$\tilde S_{\nu,I} = \{\,k : \lceil 2^\nu A - \tilde a_\nu\rceil + \tilde\delta_L + 1 \le k \le \lfloor 2^\nu B - \tilde b_\nu\rfloor - \tilde\delta_R - 1\,\}.$$

It is convenient to arrange for these sets to coincide. This translates into conditions on $\delta_L, \delta_R$ and $\tilde\delta_L, \tilde\delta_R$:

$$\delta_L - \tilde\delta_L = \lceil 2^\nu A - \tilde a_\nu\rceil - \lceil 2^\nu A - a_\nu\rceil, \qquad \delta_R - \tilde\delta_R = \lfloor 2^\nu B - b_\nu\rfloor - \lfloor 2^\nu B - \tilde b_\nu\rfloor.$$

The added difficulty arises when we try to find the missing $\psi$ and $\tilde\psi$ functions we need to span $W_{-1}[A,B]$ and $\tilde W_{-1}[A,B]$. Proceeding as before, we find that $\psi^{-1}_k$, $(A+3)/2 \le k \le (B-1-3)/2$, are all in $V_0[A,B]$ and have support inside $[A,B]$. Similarly, $\tilde\psi^{-1}_k$, $(A+2)/2 \le k \le (B-1-2)/2$, are in $\tilde W_{-1}[A,B]$. At this point let us make some specific choices to simplify the discussion somewhat. Let us only consider the left endpoint; the other endpoint is handled in the same way. Let us also assume that $\delta_L = \delta_R = 1$; other $\delta$'s are less interesting and can be treated similarly. Finally, it is easiest to consider the cases $A$ even and $A$ odd separately; we pick the case $A$ even and omit $A$ odd. By (8-3) we have

$$\begin{aligned} \varphi^0_A &= \tilde h_2\, \varphi^{-1}_{A/2-1} + \tilde h_0\, \varphi^{-1}_{A/2} + \cdots \\ \varphi^0_{A+1} &= \tilde h_3\, \varphi^{-1}_{A/2-1} + \tilde h_1\, \varphi^{-1}_{A/2} + \tilde h_{-1}\, \varphi^{-1}_{A/2+1} + \cdots \\ \varphi^0_{A+2} &= \tilde h_2\, \varphi^{-1}_{A/2} + \tilde h_0\, \varphi^{-1}_{A/2+1} + \cdots \\ \varphi^0_{A+3} &= \tilde h_3\, \varphi^{-1}_{A/2} + \tilde h_1\, \varphi^{-1}_{A/2+1} + \cdots \\ &\ \,\vdots \end{aligned} \tag{8-21}$$

Based on this, we would initially pick $\varphi^0_A$ and $\varphi^0_{A+2}$ to project on $V_{-1}[A,B]$. However, if we also consider (8-3) for $\varphi^0_{A-1}$, $\varphi^0_{A-2}$, and $\varphi^0_{A-3}$ and use that these three functions are identically zero on $[A,B]$, then we find that

$$\begin{aligned} 0 \equiv \varphi^0_{A-3}\big|_{[A,B]} &= \tilde h_{-1}\, \varphi^{-1}_{A/2-1}\big|_{[A,B]} + \cdots \\ 0 \equiv \varphi^0_{A-2}\big|_{[A,B]} &= \tilde h_0\, \varphi^{-1}_{A/2-1}\big|_{[A,B]} + \cdots \\ 0 \equiv \varphi^0_{A-1}\big|_{[A,B]} &= \tilde h_1\, \varphi^{-1}_{A/2-1}\big|_{[A,B]} + \tilde h_{-1}\, \varphi^{-1}_{A/2}\big|_{[A,B]} + \cdots \end{aligned}$$

If we multiply the first two equations by $h_1$ and $h_2$, respectively, we first see that $\varphi^{-1}_{A/2-1} \equiv 0 \bmod V_{-1}[A,B]$, and then, with a similar argument, we conclude that $\varphi^{-1}_{A/2} \equiv 0 \bmod V_{-1}[A,B]$. This in turn implies that $\varphi^0_A \equiv \varphi^0_{A+2} \equiv 0 \bmod V_{-1}[A,B]$. Now, this means that we have not produced any new functions by following our earlier procedure, and, in fact, by a direct calculation we can verify that we already have all the functions at the left endpoint and no additional ones are needed.

Figure 8.5. Recursively defined wavelets $\psi_{\nu k}$ and $\tilde\psi_{\nu k}$ at the left endpoint ($k = 0$; $\nu = -1, -3, -5$).

§9. Concluding remarks

The approximation properties of wavelet decompositions depend on the property (4-1). Loosely speaking, this allows us to find a good local polynomial approximation of a given function in the spaces $V_\nu$, on segments of size approximately $2^{-\nu}$. The wavelet transform thus provides us with an algorithm for finding these local polynomial approximations very quickly. In polynomial approximation and spline theory, one technique for improving approximation properties is to allow the mesh size to vary. For example, splines with variable knots have been studied extensively. Let us discuss how an analogous theory can be worked out in the context of the multiresolution analysis considered here.


Let us start by fixing partitions $S_\nu = \{S_{\nu,i}\}_i$ of the integers $\mathbb{Z}$, one for each level $\nu$. We shall assume that these partitions satisfy

1) $\mathbb{Z} = \cup_i S_{\nu,i}$;
2) $S_{\nu,i} \cap S_{\nu,j} = \emptyset$ when $i \ne j$.

For each $\nu$ and $i$, with $\alpha_i = \min(N, \#S_{\nu,i}) - 1$, we define the functions $X^\alpha_{\nu,i}$, $0 \le \alpha \le \alpha_i$, by

$$X^\alpha_{\nu,i}(x) = \sum_{k \in S_{\nu,i}} x^\alpha_\nu(k)\, \varphi_{\nu,k}(x).$$

These functions are thus generated by the vectors $x^\alpha_{\nu,i} = \{x^\alpha_\nu(k)\}_{k \in S_{\nu,i}}$, $0 \le \alpha \le \alpha_i$. Notice also that for each fixed $\nu$, the functions corresponding to different $i$'s are orthogonal. Furthermore, polynomials can still be obtained by suitable linear combinations:

$$P_{N-1} \subset \overline{\mathrm{span}}\,\{X^\alpha_{\nu,i}\}_{0 \le \alpha \le \alpha_i,\, i}.$$

The corresponding spaces $V_\nu = V_{\nu,S}$ are obtained by taking the closure in $L^2$ of finite linear combinations of these functions:

$$V_\nu = \bigoplus_i V_{\nu,i}, \qquad V_{\nu,i} = \overline{\mathrm{span}}\,\{X^\alpha_{\nu,i}\}_{0 \le \alpha \le \alpha_i}.$$

To find an orthonormal basis for $V_\nu$ is easy. Since the spaces $V_{\nu,i}$ are pairwise orthogonal, it is sufficient to orthonormalize the functions $X^\alpha_{\nu,i}$, $0 \le \alpha \le \alpha_i$, for fixed $\nu$ and $i$. This is equivalent to finding an orthonormal basis (with respect to the standard inner product on $\mathbb{R}^{\#S_{\nu,i}}$) for the span of the vectors $x^\alpha_{\nu,i}$, $0 \le \alpha \le \alpha_i$. This orthonormal basis corresponds to the (discrete and orthogonal) Gram polynomials based on the set $S_{\nu,i}$. Next let us investigate under what conditions on the partitions $S_\nu$ the spaces $V_\nu$ form an increasing sequence. This is of course a necessary requirement for these spaces to form the starting point for a multiresolution analysis. We start with one of the generating functions $X^\alpha_{\nu,i}$. Then by using the refinement relation (2-1) we see that

$$X^\alpha_{\nu,i}(x) = \sum_{k \in S_{\nu,i}} \sum_m h_{m-2k}\, x^\alpha_\nu(k)\, \varphi_{\nu+1,m} = \sum_j \sum_{m \in S_{\nu+1,j}} \sum_{k \in S_{\nu,i}} h_{m-2k}\, x^\alpha_\nu(k)\, \varphi_{\nu+1,m}.$$
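The discrete Gram polynomials mentioned above can be computed with a QR factorization of the matrix whose columns are the vectors $x^\alpha_{\nu,i}$. A sketch (our own; the index set $S_{\nu,i}$ is a hypothetical example):

```python
import numpy as np

# A hypothetical index set S_{nu,i} and the vectors x^alpha(k) = k**alpha.
S = np.array([3, 4, 6, 7, 10])
N = 3                                                # use alpha = 0, 1, 2
V = np.vander(S, N, increasing=True).astype(float)   # columns x^0, x^1, x^2

# QR gives an orthonormal basis Q for span{x^alpha}; the columns of Q are
# the values of the discrete, orthonormal Gram polynomials on S.
Q, R = np.linalg.qr(V)

assert np.allclose(Q.T @ Q, np.eye(N))   # orthonormal on S
# R upper triangular with nonzero diagonal: the Gram polynomial of degree
# alpha is a combination of x^0, ..., x^alpha only.
assert np.allclose(R, np.triu(R)) and np.all(np.abs(np.diag(R)) > 1e-12)
```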

Let $H = \{H_{kl}\}$ be the matrix with entries $H_{kl} = h_{l-2k}$. Then a necessary and sufficient condition for $X^\alpha_{\nu,i}$ to be in the span of the functions $X^\beta_{\nu+1,j}$ is that

$$H^{*} x^\alpha_{\nu,i}\big|_{S_{\nu+1,j}} \in \mathrm{span}\,\{x^\beta_{\nu+1,j}\}_{0 \le \beta \le \alpha_j} \tag{9-1}$$

for each $j$. Let us consider an example. Suppose that each of the sets $S_{\nu+1,j}$ of our partition at level $\nu+1$ satisfies one of the following two conditions. Either we have

$$m \in S_{\nu+1,j} \ \text{and}\ 0 \le m - 2k \le 2N-1 \quad \text{implies} \quad k \in S_{\nu,i}$$

for some i = i(j ), or

$$\#S_{\nu+1,j} = \dim V_{\nu+1,j}.$$

Then by a simple argument, similar to the one used in the discussion of (4-17), we see that the condition (9-1) is satisfied. In this context, with building blocks $X^\alpha_{\nu,i}$, it is possible to carry over a number of different concepts from spline theory: knot removal, smoothing splines, variable knots, etc. In many applications, it is useful to construct wavelets, or wavelet type bases, which are adapted to the specific problem at hand. One such procedure is the construction of wavelet packets due to Coifman, Meyer, Wickerhauser, and their collaborators [8], [9]. It is possible to carry out many of the arguments in this paper in the context of wavelet packets. For more on wavelet packets on closed intervals, we refer to [12]. We point out that "wavelet probing" makes sense for much more general decompositions than just the biorthogonal wavelet decompositions considered here. The crucial prerequisite is of a geometric nature: each of the basic building blocks used in the decomposition should be associated with a certain location, and the other building blocks at the same "level" should have a quickly diminishing influence at that location. What we have discussed in this paper can easily be extended to tensor product sets in higher dimensions. This has some interesting applications. For example, in two dimensions, splitting and merging allows us to split an image into smaller, independent subimages. Each subimage can then be processed; merging the wavelet coefficients from the subimages, we obtain the wavelet coefficients of the full image. Similarly, we may start from a matrix, decompose it into smaller blocks, process each block, and then quickly merge the information into a result for the full matrix. For example, this seems to be of interest for matrix compression. Also, it is well known that for large classes of matrices there is a close connection between several fundamental quantities, such as singular values, and wavelet coefficients for the matrix.
Perhaps splitting and merging would make it possible to pass from such quantities for the blocks to the corresponding ones for the full matrix and vice versa. Although interesting, tensor product domains are really too easy and the real challenge is more complicated higher dimensional sets.

Acknowledgments. The first, third, and fourth authors were partially supported by ONR Grant N0001-90-J-1343. The third author was also partially supported by AFOSR Grant 89-0455, DARPA Grant AFOSR 89-0455, and Summus, Ltd. The second author was supported by Summus, Ltd.

References

1. Auscher, P., Wavelets with boundary conditions on the interval, in Wavelets: A Tutorial in Theory and Applications, C. K. Chui (ed.), Academic Press, Boston, 1992, 217-236.
2. Auscher, P., to appear in J. Func. Anal.
3. Chui, C. and E. Quak, Wavelets on a bounded interval, in Numerical Methods of Approximation Theory, D. Braess and L. L. Schumaker (eds.), Birkhäuser, Basel, 1992, 1-24.
4. Cohen, A., Biorthogonal wavelets, in Wavelets: A Tutorial in Theory and Applications, C. K. Chui (ed.), Academic Press, Boston, 1992, 123-152.
5. Cohen, A., I. Daubechies, and J. Feauveau, Bi-orthogonal bases of compactly supported wavelets, to appear in Comm. Pure and Appl. Math.
6. Cohen, A., I. Daubechies, B. Jawerth, and P. Vial, Multiresolution analysis, wavelets and fast algorithms on an interval, C. R. Acad. Sci. Paris 316 (1992), 417-421.
7. Cohen, A., I. Daubechies, and P. Vial, in preparation.
8. Coifman, R. R. and Y. Meyer, Orthonormal wavelet packet bases, preprint.
9. Coifman, R. R., Y. Meyer, and M. V. Wickerhauser, Size properties of wavelet packets, in Wavelets and Their Applications, M. B. Ruskai et al. (eds.), Jones and Bartlett, 1992, 453-470.
10. Dahmen, W. and C. Micchelli, personal communication.
11. Daubechies, I., Orthonormal bases of compactly supported wavelets, Comm. Pure and Appl. Math. 41 (1988), 909-996.
12. Deng, B., B. Jawerth, G. Peters, and W. Sweldens, in preparation.
13. Franklin, P., A set of continuous orthogonal functions, Math. Ann. 100 (1928), 522-529.
14. Frazier, M. and B. Jawerth, A discrete transform and decompositions of distribution spaces, J. Func. Anal. 93 (1990), 34-170.
15. Frazier, M., B. Jawerth, and G. Weiss, Littlewood-Paley Theory and the Study of Function Spaces, CBMS Regional Conference Series in Mathematics 79, AMS, 1991.
16. Gautschi, W., Norm estimates for inverses of Vandermonde matrices, Numer. Math. 23 (1975), 337-347.
17. Mallat, S. G., Multifrequency channel decompositions of images and wavelet models, IEEE Trans. Acoust. Speech Signal Process. 37(12) (1989), 2091-2110.
18. Meyer, Y., Ondelettes et Opérateurs, I: Ondelettes, II: Opérateurs de Calderón-Zygmund, III (with R. Coifman): Opérateurs Multilinéaires, Hermann, Paris, 1990.
19. Meyer, Y., Ondelettes sur l'intervalle, to appear in Rev. Mat. Iberoamericana.
