Regularity of Refinable Function Vectors


Albert Cohen

Ingrid Daubechies

Gerlind Plonka

We study the existence and regularity of compactly supported solutions $\Phi = (\phi_\nu)_{\nu=0}^{r-1}$ of vector refinement equations. The space spanned by the translates of the $\phi_\nu$ can only provide approximation order if the refinement mask $P$ has certain particular factorization properties. We show how the factorization of $P$ can lead to decay of $|\hat\phi_\nu(u)|$ as $|u| \to \infty$. The results on decay are used in order to prove uniqueness of solutions and convergence of the cascade algorithm.

Subject classification: 39B62, 42A05, 42A38.

Keywords: Refinable function vectors, Refinement mask, Fourier transform, Spectral radius

1. Introduction

In this paper we shall discuss the smoothness of refinable function vectors. These are solutions to functional equations of the type
$$\Phi(x) = \sum_{n=0}^{N} P_n\,\Phi(2x - n), \qquad (1.1)$$

where the "coefficients" $P_n$ are $r \times r$ matrices ($r \in \mathbb{N}$, $r \ge 1$), and where $\Phi := (\phi_0, \dots, \phi_{r-1})^T$ is an $r$-dimensional function vector. Equations of type (1.1) are natural generalizations of the refinement equations studied in e.g. Cavaretta, Dahmen, and Micchelli (1991), where $r = 1$; therefore we shall call them refinement equations as well, or occasionally vector refinement equations. Vector refinement equations have come up in several papers. The oldest example is probably the multiwavelet construction by Alpert and Rokhlin (1991) (see also Alpert (1993)), where the $\phi_\nu$ are all supported on $[0,1]$ and are polynomials of degree $r-1$ on their support. In this example the smoothness of the $\phi_\nu$ is of course known; equation (1.1) is useful as a computational tool in going from one multiresolution level to the next. Matrix generalizations of type (1.1) were also discussed in more generality in Goodman, Lee, and Tang (1993) and Goodman and Lee (1994), including how to define wavelets once the scaling functions were known. However,

it was not clear how to construct smooth non-polynomial examples, let alone how to connect smoothness with properties of the $P_n$. This was in marked contrast with the case $r = 1$, where the link between smoothness of $\phi$ and properties of the $P_n$, or of the refinement mask
$$P(u) := \frac{1}{2} \sum_n P_n\,e^{-iun}, \qquad (1.2)$$
is well understood, and where this connection can be exploited to construct $\phi$ with arbitrary pre-assigned smoothness as well as many other properties (see Daubechies (1992)). Donovan, Geronimo, Hardin, and Massopust (1994) (hereafter referred to as DGHM) were the first to construct continuous non-polynomial refinable function vectors. They gave examples of special bases of selfsimilar wavelets, generated by continuous scaling functions that satisfy an equation of type (1.1). In their paper, the iterated function technique used in the construction was the key to deriving smoothness, rather than properties of the $P_n$. This first example triggered several other constructions (e.g. Strang and Strela (1994)), as well as work on the filter bank implications of (1.1) (Vetterli and Strang (1994), Heller et al. (1994)) and a systematic study of the approximation order of solutions of (1.1) (Heil, Strang, and Strela (1994), Plonka (1995.a)). This last work contains the key to understanding how solutions of (1.1) can be smooth. As shown in Plonka (1995.a), the space spanned by the functions $\phi_\nu(x-n)$ ($n \in \mathbb{Z}$) can only have approximation order $m$ if $P(u)$ has certain particular factorization properties. (We assume that the $\phi_\nu(x-n)$ are also (algebraically) linearly independent.) This is reminiscent of the case $r = 1$, where similarly linearly independent translates of a refinable function $\phi$ can only provide approximation order $m$ if the refinement mask, often denoted by $m_0(u)$, can be factored as
$$m_0(u) = \left(\frac{1 + e^{-iu}}{2}\right)^m q(u), \qquad (1.3)$$
where $q(0) = 1$, $q$ is $2\pi$-periodic and non-singular for $u = \pi$.
By iterating the formula
$$\hat\phi(u) = m_0\!\left(\frac{u}{2}\right)\hat\phi\!\left(\frac{u}{2}\right),$$
which is obtained by Fourier transformation from (1.1), and exploiting the factorization (1.3), one then finds
$$|\hat\phi(u)| = \prod_{j=1}^{\infty} |m_0(2^{-j}u)| \le C\,(1+|u|)^{-m} \prod_{j=1}^{\infty} |q(2^{-j}u)|,$$
where the infinite products converge uniformly on compact sets if $m_0$ or, equivalently, $q$ is Hölder continuous in $u = 0$. Together with estimates of the type $\sup_u |q(u)| \le B$ or, more generally, $\sup_u |q(2^{k-1}u)\,q(2^{k-2}u) \cdots q(u)| \le B^k$ for some $k \in \mathbb{N} \setminus \{0\}$, this leads to
$$|\hat\phi(u)| \le C\,(1+|u|)^{-m+\log_2 B} \qquad (1.4)$$

(see e.g. Daubechies (1988, 1992), Cohen and Conze (1992)). The factorization (1.3), together with estimates on the factor $q(u)$, therefore leads to decay for $\hat\phi$, and hence to smoothness estimates for $\phi$. By using more sophisticated methods involving transfer operators, one can refine the brute-force estimates (1.4) and formulate necessary and sufficient conditions on $q(u)$ ensuring that $\phi$ lies in some Sobolev space $W^s$ (see Conze and Raugi (1990), Villemoes (1994), Eirola (1992), Gripenberg (1993), Hervé (1994), Cohen and Daubechies (1994)). Here again, the factorization (1.3) is a key ingredient. In this paper, we shall see that the factorization for the matrix $P(u)$ discovered by Plonka (1995.a) for the case $r > 1$ can play a similar role, although the discussion is more intricate. We shall assume that the $\phi_\nu(x-n)$, $\nu = 0, \dots, r-1$, $n \in \mathbb{Z}$, form a linearly independent basis for their closed linear span $V_0$, and that they provide approximation order $m$, i.e., for $f \in W^m$ one has

$$\|f - \mathrm{Proj}_{V_j} f\|_{L^2} \le C\,2^{-jm}\,\|f\|_{W^m},$$

where $V_j$ is the scaled space

$$V_j = \{g \in L^2(\mathbb{R}) : g(2^{-j}\cdot) \in V_0\}. \qquad (1.5)$$
Then it is shown in Plonka (1995.a) that there exist $r \times r$ matrices $C_0(u), \dots, C_{m-1}(u)$ (constructed explicitly in Plonka (1995.a); see also below) such that $P(u)$, defined in (1.2), factors as
$$P(u) = \frac{1}{2^m}\,C_0(2u) \cdots C_{m-1}(2u)\,P^{(m)}(u)\,C_{m-1}(u)^{-1} \cdots C_0(u)^{-1}, \qquad (1.6)$$
where $P^{(m)}(u)$ is well-defined. By Fourier transform of (1.1) we obtain
$$\hat\Phi(u) = P\!\left(\frac{u}{2}\right)\hat\Phi\!\left(\frac{u}{2}\right). \qquad (1.7)$$
This can be iterated again, and we find
$$\hat\Phi(u) = P\!\left(\frac{u}{2}\right)P\!\left(\frac{u}{4}\right) \cdots P\!\left(\frac{u}{2^n}\right)\hat\Phi\!\left(\frac{u}{2^n}\right). \qquad (1.8)$$
Substituting (1.6) in (1.8) leads to
$$\hat\Phi(u) = 2^{-mn}\,C_0(u) \cdots C_{m-1}(u)\,P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)C_{m-1}\!\left(\frac{u}{2^n}\right)^{-1} \cdots C_0\!\left(\frac{u}{2^n}\right)^{-1}\hat\Phi\!\left(\frac{u}{2^n}\right). \qquad (1.9)$$
Even at this stage, the case $r > 1$ is more complicated than $r = 1$. The matrices $P(2^{-j}u)$ or $P^{(m)}(2^{-j}u)$ do not commute, and the discussion of the convergence of

an infinite product definition for $\hat\Phi(u)$ is therefore more complex. Hervé (1994) studied the convergence of the matrices
$$\Pi_n(u) := P\!\left(\frac{u}{2}\right)P\!\left(\frac{u}{4}\right) \cdots P\!\left(\frac{u}{2^n}\right) \qquad (1.10)$$
as $n \to \infty$, and showed that convergence is assured if $P(0) = \mathrm{diag}(1, \lambda_1, \dots, \lambda_{r-1})$, with $|\lambda_l| < 1$ for $l = 1, \dots, r-1$, or if $P(0)$ is similar to such a matrix, i.e., $P(0) = M\,\mathrm{diag}(1, \lambda_1, \dots, \lambda_{r-1})\,M^{-1}$ for some non-singular $M$. This already excludes the case where $P(0)$ is not diagonalizable. Moreover, our matrices $P^{(m)}(0)$ may well have a spectral radius larger than 1, so that Hervé's results cannot be used for the products $P^{(m)}(u/2) \cdots P^{(m)}(2^{-n}u)$ in (1.8). Heil and Colella (1994) discuss not only the convergence of $\Pi_n(u)$ (with results similar to Hervé (1994)) but also the convergence of $\Pi_n(u)\,v$, where $v$ is a fixed $r$-dimensional vector. If $v$ is an eigenvector of $P(0)$ with eigenvalue 1, then $\Pi_n(u)\,v$ may converge even if the spectral radius $\rho_0$ of $P(0)$ is strictly larger than 1; Heil and Colella call this constrained convergence. They prove constrained convergence if $\rho_0 < 2$ and if the largest eigenvalue of $P(0)$ is nondegenerate. We use a different technique that proves convergence of $\Pi_n(u)\,v$ if $\rho_0 < 2$, without non-degeneracy condition, and that extends to some cases where $\rho_0 \ge 2$, if $P(u)$ has vanishing derivatives at $u = 0$. Once convergence of (1.8) or (1.9) is established, we can proceed to the main topic of this paper, namely how the factorization (1.6), together with estimates on $P^{(m)}(u)$, can lead to decay of $|\hat\phi_\nu(u)|$ ($\nu = 0, \dots, r-1$) as $|u| \to \infty$. As in the case $r = 1$, this can be exploited to prove $L^2$-convergence and pointwise convergence theorems (in the "x-domain") similar to those in Daubechies (1988). One can also introduce matrix transfer operators to prove more precise estimates, as in the case $r = 1$. This paper is organized as follows. In section 2, we recall the precise results on the factorization of $P(u)$ obtained in Plonka (1995.a).
We also show that this factorization is necessary in order to obtain smooth functions $\phi_0, \dots, \phi_{r-1}$. In section 3, we discuss the pointwise convergence of $\Pi_n(u)\,a$, as $n \to \infty$, for a fixed vector $a$. In section 4, we exploit the factorization (1.6) to prove, under certain additional conditions, that $\lim_{n\to\infty} \Pi_n(u)\,a$ decays, as a function of $u$, for $|u| \to \infty$. We show, in section 5, how transfer operators can also be used to evaluate the regularity of the scaling functions. Section 6 gives a short uniqueness discussion: in the previous sections an infinite product solution for (1.7) has been constructed; if this has sufficient decay, then its inverse Fourier transform gives a solution to (1.1). Theorem 6.1 shows that, under certain conditions on the mask, this solution is unique in a wide class of functions. In section 7 we show how the decay estimates proved earlier can be used to translate the pointwise convergence of $\Pi_n(u)\,a$ into convergence of the cascade and subdivision algorithms in the "x-domain". Finally, section 8 studies several examples: we apply our analysis to see how the (known) smoothness of spline functions and of the DGHM scaling functions can be recovered, and we construct some new examples with controlled smoothness.
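Before turning to the matrix case, the scalar mechanism behind (1.3) and (1.4) can be checked numerically. The following sketch (an illustration only, not part of the paper's analysis) uses the Haar mask $m_0(u) = (1+e^{-iu})/2$, i.e. (1.3) with $m = 1$ and $q \equiv 1$, truncates the infinite product, and observes the decay of order $(1+|u|)^{-1}$ predicted by (1.4) with $B = 1$:

```python
import numpy as np

def m0(u):
    # Haar refinement mask: m0(u) = (1 + e^{-iu})/2, i.e. (1.3) with m = 1, q = 1
    return 0.5 * (1.0 + np.exp(-1j * u))

def phi_hat(u, n=60):
    # truncated infinite product  prod_{j=1}^n m0(2^{-j} u)
    p = 1.0 + 0j
    for j in range(1, n + 1):
        p *= m0(u / 2.0**j)
    return p

u = 3.0
# for the Haar mask the product has the closed form (1 - e^{-iu})/(iu)
exact = (1.0 - np.exp(-1j * u)) / (1j * u)
print(abs(phi_hat(u) - exact))          # ~ 0
# decay of order (1+|u|)^{-1}, consistent with (1.4) for m = 1, B = 1
for v in [10.0, 100.0, 1000.0]:
    print(abs(phi_hat(v)) * (1.0 + v))  # stays bounded
```

For masks with $q \not\equiv 1$ the same loop applies, with the bound degraded by $\log_2 B$ exactly as in (1.4).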

2. Factorization of the refinement mask

We want to recall some results of Plonka (1995.a). We start with some definitions. Let $r \in \mathbb{N}$ be fixed, and let $y \in \mathbb{R}^r$ be a vector of length $r$ with $y \ne 0$. Here and in the following, $0$ denotes the zero vector of length $r$. We suppose that $y$ is of the form
$$y = (y_0, \dots, y_{l-1}, 0, \dots, 0)^T \qquad (2.1)$$
with $1 \le l \le r$ and $y_\nu \ne 0$ for $\nu = 0, \dots, l-1$. Introducing the direct sum of square matrices $A \oplus B := \mathrm{diag}(A, B)$, we define the matrix $C_y$ by
$$C_y(u) := \tilde{C}_y(u) \oplus I_{r-l}, \qquad (2.2)$$

where $I_{r-l}$ is the $(r-l) \times (r-l)$ unit matrix. If $l > 1$, then $\tilde{C}_y(u)$ is defined by
$$\tilde{C}_y(u) := \begin{pmatrix} y_0^{-1} & -y_0^{-1} & 0 & \cdots & 0 \\ 0 & y_1^{-1} & -y_1^{-1} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & & \ddots & y_{l-2}^{-1} & -y_{l-2}^{-1} \\ -e^{-iu}/y_{l-1} & 0 & \cdots & 0 & y_{l-1}^{-1} \end{pmatrix}; \qquad (2.3)$$
if $l = 1$, i.e., if $y := (y_0, 0, \dots, 0)^T$ with $y_0 \ne 0$, then $\tilde{C}_y(u)$ is the scalar $(1 - e^{-iu})/y_0$, so that $C_y(u)$ is a diagonal matrix of the form
$$C_y(u) := \mathrm{diag}\left(\frac{1 - e^{-iu}}{y_0},\,1, \dots, 1\right).$$

It can easily be observed that $C_y(u)$ is invertible for $u \ne 0$. Further, the matrix $C_y$ is chosen such that $y^T C_y(0) = 0^T$. We introduce
$$E_y(u) := (1 - e^{-iu})\,C_y^{-1}(u). \qquad (2.4)$$
Assuming that $y$ is of the form (2.1), we obtain that $E_y(u) = \tilde{E}_y(u) \oplus (1 - e^{-iu})\,I_{r-l}$ with
$$\tilde{E}_y(u) := \begin{pmatrix} y_0 & y_1 & y_2 & \cdots & y_{l-1} \\ y_0 z & y_1 & y_2 & \cdots & y_{l-1} \\ y_0 z & y_1 z & \ddots & & \vdots \\ \vdots & \vdots & \ddots & y_{l-2} & y_{l-1} \\ y_0 z & y_1 z & \cdots & y_{l-2} z & y_{l-1} \end{pmatrix} \qquad (z := e^{-iu}). \qquad (2.5)$$
Note that $E_y$ can be written in the form
$$E_y(u) = E_y(0) - i\,(1 - e^{-iu})\,(DE_y)(0), \qquad (2.6)$$

where $D$ denotes the differential operator with respect to $u$, $D := d/du$. The vector $y$ need not be such that its zero entries always occur at the tail. If the non-zero entries of the vector $y$ are given in a different order than in (2.1), then the matrices $C_y$ and $E_y$ are defined just by reshuffling the rows and columns accordingly. We can now formulate the factorization results for $P(u)$. The following theorem is a special case of a theorem proved in Plonka (1995.a):

Theorem 2.1 Let $\Phi := (\phi_\nu)_{\nu=0}^{r-1}$ be a refinable vector of compactly supported functions, and let $\{\phi_\nu(\cdot - n) : n \in \mathbb{Z},\ \nu = 0, \dots, r-1\}$ form a linearly independent basis of their closed linear span $V_0$. Then $V_0$ provides approximation order $m$ if and only if the refinement mask $P$ of $\Phi$ satisfies the following conditions: the elements of $P$ are trigonometric polynomials, and there are vectors $y^k \in \mathbb{R}^r$, $y^0 \ne 0$ ($k = 0, \dots, m-1$), such that for $n = 0, \dots, m-1$ we have
$$\sum_{k=0}^{n} \binom{n}{k}\,(y^k)^T\,(2i)^{k-n}\,(D^{n-k}P)(0) = 2^{-n}\,(y^n)^T, \qquad (2.7)$$
$$\sum_{k=0}^{n} \binom{n}{k}\,(y^k)^T\,(2i)^{k-n}\,(D^{n-k}P)(\pi) = 0^T.$$
Furthermore, the equalities (2.7) imply that there are vectors $x^k \ne 0$ ($k = 0, \dots, m-1$) such that $P$ factorizes as
$$P(u) = \frac{1}{2^m}\,C_{x^0}(2u) \cdots C_{x^{m-1}}(2u)\,P^{(m)}(u)\,C_{x^{m-1}}(u)^{-1} \cdots C_{x^0}(u)^{-1}, \qquad (2.8)$$
where the $(r \times r)$-matrices $C_{x^k}$ are defined by $x^k$ ($k = 0, \dots, m-1$) via (2.2) and $P^{(m)}(u)$ is an $(r \times r)$-matrix with trigonometric polynomials as entries.

The vectors $x^l$ ($l = 0, \dots, m-1$) in Theorem 2.1 are completely defined in terms of the vectors $y^k$ ($k = 0, \dots, m-1$). In particular, we have $(x^0)^T = (y^0)^T$ and $(x^1)^T = (-i)\,(y^0)^T\,(DC_{y^0})(0) + (y^1)^T\,C_{y^0}(0)$ (cf. Plonka (1995.a)). With the assumptions in Theorem 2.1, approximation order $m$ is equivalent with exact reproduction of algebraic polynomials of degree $m-1$ in $V_0$. Vice versa, if algebraic polynomials of degree $m-1$ can be exactly reproduced in $V_0$, i.e., if there are vectors $y^n_l \in \mathbb{R}^r$ ($l \in \mathbb{Z}$, $n = 0, \dots, m-1$) such that
$$\sum_{l \in \mathbb{Z}} (y^n_l)^T\,\Phi(x - l) = x^n \qquad (x \in \mathbb{R},\ n = 0, \dots, m-1),$$

then $y^n_l$ can be written in the form
$$y^n_l = \sum_{k=0}^{n} \binom{n}{k}\,l^{n-k}\,y^k_0,$$
and the vectors $y^k_0$ ($k = 0, \dots, m-1$) satisfy the equalities (2.7) with respect to the refinement mask $P$ of $\Phi$.
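The definitions (2.2)-(2.5) lend themselves to a small numerical sanity check. The sketch below (with a hypothetical vector $y = (2,3,5)^T$, $l = r = 3$, chosen only for illustration) builds $\tilde C_y(u)$ and $\tilde E_y(u)$ and verifies the identity $E_y(u)\,C_y(u) = (1-e^{-iu})\,I$ implicit in (2.4), as well as $y^T C_y(0) = 0^T$:

```python
import numpy as np

def C_tilde(y, u):
    # l x l matrix from (2.3); y must have no zero entries
    l = len(y); z = np.exp(-1j * u)
    C = np.zeros((l, l), dtype=complex)
    for i in range(l - 1):
        C[i, i] = 1.0 / y[i]
        C[i, i + 1] = -1.0 / y[i]
    C[l - 1, l - 1] = 1.0 / y[l - 1]
    C[l - 1, 0] = -z / y[l - 1]
    return C

def E_tilde(y, u):
    # l x l matrix from (2.5): row i carries the factor z on its first i entries
    l = len(y); z = np.exp(-1j * u)
    E = np.tile(np.asarray(y, dtype=complex), (l, 1))
    for i in range(l):
        E[i, :i] *= z
    return E

y = np.array([2.0, 3.0, 5.0])
u = 0.7
# E_y(u) = (1 - e^{-iu}) C_y(u)^{-1}, equivalently E_y(u) C_y(u) = (1 - e^{-iu}) I
lhs = E_tilde(y, u) @ C_tilde(y, u)
print(np.allclose(lhs, (1.0 - np.exp(-1j * u)) * np.eye(3)))  # True
# the defining property y^T C_y(0) = 0^T
print(np.allclose(y @ C_tilde(y, 0.0), 0.0))                  # True
```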

Now, assume that $\Phi$ is a refinable function vector with a refinement mask $P$ satisfying the conditions (2.7) for the vectors $y^0, \dots, y^{m-1}$ ($y^0 \ne 0$). Further, let $M \in \mathbb{R}^{r \times r}$ be an invertible matrix and $\Phi^\sharp(x) := M\,\Phi(x)$. Then $\Phi^\sharp$ is also a refinable function vector with the refinement mask $P^\sharp(u) := M\,P(u)\,M^{-1}$, since
$$\hat\Phi^\sharp(u) = M\,\hat\Phi(u) = M\,P(u/2)\,\hat\Phi(u/2) = M\,P(u/2)\,M^{-1}\,\hat\Phi^\sharp(u/2).$$
Observe that $P^\sharp$ is obtained by a similarity transformation from $P$, i.e., $P$ and $P^\sharp$ possess the same spectrum. Furthermore, $P^\sharp(u)$ satisfies the conditions (2.7) for $n = 0, \dots, m-1$ with vectors $y^{\sharp 0}, \dots, y^{\sharp m-1}$ given by
$$(y^{\sharp\nu})^T = (y^\nu)^T\,M^{-1} \qquad (\nu = 0, \dots, m-1).$$
Hence, $P^\sharp$ can also be factored as in (2.8) with $C$-matrices defined by certain vectors $x^{\sharp 0}, \dots, x^{\sharp m-1}$. In particular, we have $(x^{\sharp 0})^T := (y^{\sharp 0})^T = (y^0)^T\,M^{-1}$. Note that this implies that the factorization (2.8) is not invariant under basis transformations. For instance, in the case where we consider a single factorization,
$$P(u) = \frac{1}{2}\,C_{y^0}(2u)\,P^{(1)}(u)\,C_{y^0}(u)^{-1} \qquad (2.9)$$
with $y^0 = (y_\nu)_{\nu=0}^{r-1}$ ($y_0 \ne 0$), we could choose instead to carry out first the basis transformation
$$M = \begin{pmatrix} y_0 & y_1 & y_2 & \cdots & y_{r-1} \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}. \qquad (2.10)$$
For $P^\sharp(u) = M\,P(u)\,M^{-1}$ the equations (2.7) now hold with $(y^{\sharp 0})^T = (1, 0, \dots, 0)$, and we can factor $P^\sharp(u)$ accordingly. Multiplying the factored expression by $M^{-1}$ on the left and $M$ on the right, we obtain
$$P(u) = \frac{1}{2}\,D_{y^0}(2u)\,Q^{(1)}(u)\,D_{y^0}(u)^{-1}, \qquad (2.11)$$
where $D_{y^0}(u)$ is now defined by
$$D_{y^0}(u) := M^{-1}\,\mathrm{diag}(1-z,\,1,\dots,1)\,M = \begin{pmatrix} 1-z & -\frac{y_1}{y_0}z & -\frac{y_2}{y_0}z & \cdots & -\frac{y_{r-1}}{y_0}z \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.$$

Other choices of $M$ would lead to yet other factorizations. In most applications, the original factorization (2.8) turns out to be the most useful. We shall use the existence of the different factorization (2.11) as a tool to study the spectrum of $P^{(1)}(0)$. In the second part of this section, we show that the factorization of the refinement mask is necessary in order to obtain smooth functions.

Lemma 2.2 Let $\Phi := (\phi_\nu)_{\nu=0}^{r-1}$ be a refinable vector of compactly supported functions, i.e., we have
$$\Phi(x) = \sum_{n=0}^{N} P_n\,\Phi(2x - n). \qquad (2.12)$$

Further, let $\{\phi_\nu(\cdot - n) : n \in \mathbb{Z},\ \nu = 0, \dots, r-1\}$ form a Riesz basis of their closed linear span $V_0$. If $\phi_\nu \in C^{m-1}(\mathbb{R})$ ($\nu = 0, \dots, r-1$), then $V_0$ provides approximation order $m$. In particular, there are vectors $x^0, \dots, x^{m-1}$ ($x^\nu \ne 0$) such that the refinement mask $P$ of $\Phi$ factorizes in the form (2.8).

Proof: From the Riesz basis property, there exist dual scaling functions $\tilde\phi_\nu \in V_0$, $\nu = 0, \dots, r-1$, such that $\langle \phi_\mu(\cdot - k),\,\tilde\phi_\nu(\cdot - l)\rangle = \delta_{\mu\nu}\,\delta_{kl}$, where $\langle\cdot,\cdot\rangle$ denotes the usual scalar product in $L^2(\mathbb{R})$. These functions are defined by

$$(\hat{\tilde\phi}_0(u), \dots, \hat{\tilde\phi}_{r-1}(u))^T = G(u)^{-1}\,\hat\Phi(u),$$
where the matrix elements of $G(u)$ are defined by
$$g_{\mu\nu}(u) = \sum_{n \in \mathbb{Z}} \hat\phi_\mu(u + 2\pi n)\,\overline{\hat\phi_\nu(u + 2\pi n)} = \sum_{k \in \mathbb{Z}} \langle \phi_\mu,\,\phi_\nu(\cdot - k)\rangle\,e^{iku} \qquad (\mu, \nu = 0, \dots, r-1).$$

The Riesz basis property is equivalent with the fact that $G(u)$ is uniformly nonsingular. Since its entries are trigonometric polynomials, it follows that the functions $\tilde\phi_\nu$ have exponential decay. We thus can define the polynomials
$$p_{n,\nu}(x) = \int_{-\infty}^{\infty} y^n\,\tilde\phi_\nu(y - x)\,dy.$$

We shall prove that for $n < m$, we have
$$x^n = \sum_{k \in \mathbb{Z}} \sum_{\nu=0}^{r-1} p_{n,\nu}(k)\,\phi_\nu(x - k) \qquad (2.13)$$
for all $x \in \mathbb{R}$. (Note that for a fixed $x$, the above sum has a finite number of non-zero terms.)

We proceed by induction on $n$. For $n = 0$, we remark that $\Phi$ cannot vanish at every integer: by repeated application of the refinement equation (2.12), we would obtain that $\Phi$ vanishes at all dyadic rationals $2^{-j}k$ ($j \in \mathbb{N}$, $k \in \mathbb{Z}$) and thus is identically zero. Let $l$ and $\nu_0$ be such that $\phi_{\nu_0}(l) = C \ne 0$, and define $f_j = \phi_{\nu_0}(2^{-j}\cdot + l)$. For $j \ge 0$, we have $f_j \in V_{-j} \subset V_0$ (with $V_{-j}$ as in (1.5)), and thus
$$f_j(x) = \sum_{k \in \mathbb{Z}} \sum_{\nu=0}^{r-1} \langle f_j,\,\tilde\phi_\nu(\cdot - k)\rangle\,\phi_\nu(x - k). \qquad (2.14)$$

As $j$ goes to $+\infty$, $f_j(x)$ tends to $C$ uniformly on every compact set, and for a fixed $k \in \mathbb{Z}$, $\langle f_j,\,\tilde\phi_\nu(\cdot - k)\rangle$ tends to $C\,p_{0,\nu}(k)$. We thus obtain (2.13) from (2.14), by letting $j$ go to infinity. Now suppose that (2.13) is proved up to order $n-1$. For the same reason as above, we can find $l$ and $\nu_0$ such that $D^n\phi_{\nu_0}(l) = C \ne 0$. We then define
$$f_j(x) = 2^{nj}\,n!\left[\phi_{\nu_0}(2^{-j}x + l) - \sum_{s=0}^{n-1} D^s\phi_{\nu_0}(l)\,(2^{-j}x)^s/s!\right].$$

From the recursion hypothesis, we have
$$f_j(x) = \sum_{k \in \mathbb{Z}} \sum_{\nu=0}^{r-1} \langle f_j,\,\tilde\phi_\nu(\cdot - k)\rangle\,\phi_\nu(x - k). \qquad (2.15)$$
As $j$ goes to $+\infty$, $f_j(x)$ tends to $C\,x^n$ uniformly on every compact set, and for a fixed $k \in \mathbb{Z}$, $\langle f_j,\,\tilde\phi_\nu(\cdot - k)\rangle$ tends to $C\,p_{n,\nu}(k)$. We thus obtain (2.13) from (2.15), by letting $j$ go to infinity. Hence, we have proved that all polynomials of degree $m-1$ are linear combinations of the functions $\phi_\nu(\cdot - k)$ ($\nu = 0, \dots, r-1$). By Theorem 2.2 in Plonka (1995.a), it follows that $V_0$ provides approximation order $m$. Hence, by Theorem 2.1, $P(u)$ can be factorized as in Theorem 2.1.

3. Convergence of infinite matrix products

Ultimately, we are interested in $L^1$-solutions $\Phi(x)$ of (1.1), and their smoothness, if they have any. We also want the space spanned by the $\phi_\nu(x-n)$ ($\nu = 0, \dots, r-1$, $n \in \mathbb{Z}$) to have a certain approximation order. For $\Phi \in L^1$, the Fourier transform $\hat\Phi$ is a well-defined and continuous vector-valued function that must satisfy (1.7) for all $u$. In particular, we must have

$$\hat\Phi(0) = P(0)\,\hat\Phi(0).$$
On the other hand, if we want any non-zero approximation order, then we must have $\hat\Phi(0) \ne 0$, since $\hat\Phi(0) = 0$ would imply $\int \phi_\nu(x-n)\,dx = 0$ for all $\nu, n$, making it impossible to construct the function 1 as a combination of the $\phi_\nu(x-n)$. Together, these two observations imply that we should take $\hat\Phi(0) = a$, where $a$ is a right eigenvector of $P(0)$ for the eigenvalue 1. Note that we know that 1 has to be an

eigenvalue of $P(0)$ because of (2.7). In all the examples we shall consider in practice, $\Phi$ will be compactly supported; more generally, $\Phi$ should have good (exponential) decay, so that $\hat\Phi$ will be smooth. This means that we expect that in
$$\hat\Phi(u) = P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^n}\right)\hat\Phi\!\left(\frac{u}{2^n}\right) = P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^n}\right)a + P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^n}\right)\left[\hat\Phi\!\left(\frac{u}{2^n}\right) - \hat\Phi(0)\right],$$
the second term should become negligibly small in the limit for $n \to \infty$. This suggests that we define
$$\hat\Phi_n(u) := P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^n}\right)a = \Pi_n(u)\,a$$
and study its limit for $n \to \infty$. In this section, we shall discuss the existence of this limit, pointwise in $u$. In what follows, $\|v\|$ will denote the Euclidean norm of $v \in \mathbb{R}^r$, i.e., $\|v\| = [v_0^2 + \dots + v_{r-1}^2]^{1/2}$, and $\|V\| := \max_{v \ne 0} \|Vv\|/\|v\|$ will be the corresponding matrix norm (spectral norm) for $V \in \mathbb{R}^{r \times r}$. Recall that the spectral norm of a matrix $V$ can be defined via the spectral radius of $V^T V$, i.e., $\|V\| = \|V\|_2 := (\rho(V^T V))^{1/2}$.

Lemma 3.1 Suppose that $a$ is an eigenvector of $P(0)$ for the eigenvalue 1. Further, suppose that $P$ satisfies
$$\|P(u) - P(0)\| \le C\,|u|^\varepsilon \qquad (3.1)$$
for some $\varepsilon > 0$, and that $\|P(0)\| < 2^\varepsilon$. Then the infinite product
$$\hat\Phi(u) := \lim_{n\to\infty} \Pi_n(u)\,a \qquad (3.2)$$
converges pointwise for any $u \in \mathbb{R}$. The convergence is uniform on compact sets.

Proof: The estimate (3.1) implies that
$$\|P(u)\| \le \|P(0)\| + C\,|u|^\varepsilon \le \|P(0)\|\,e^{C_0|u|^\varepsilon}.$$
Hence, we have
$$\left\|P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^l}\right)\right\| \le e^{C_0|u|^\varepsilon\,[2^{-\varepsilon} + \dots + 2^{-l\varepsilon}]}\,\|P(0)\|^l \le e^{C|u|^\varepsilon}\,\|P(0)\|^l,$$
since $2^{-\varepsilon} + \dots + 2^{-l\varepsilon} \le \frac{2^{-\varepsilon}}{1 - 2^{-\varepsilon}} < \infty$ for $\varepsilon > 0$. Using this estimate and observing that
$$\Pi_k(u)\,a - a = \left[P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^k}\right) - P(0)^k\right]a = \sum_{l=1}^{k} P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^{l-1}}\right)\left[P\!\left(\frac{u}{2^l}\right) - P(0)\right]P(0)^{k-l}\,a, \qquad (3.3)$$
it follows that for any $k \in \mathbb{N}$
$$\|\Pi_k(u)\,a - a\| \le C\,e^{C|u|^\varepsilon}\,|u|^\varepsilon \sum_{l=1}^{\infty}\left[\frac{\|P(0)\|}{2^\varepsilon}\right]^l \le C'\,e^{C|u|^\varepsilon}\,|u|^\varepsilon,$$
where we assume that $a$ is normalized, $\|a\| = 1$, for the sake of convenience. Now, remarking that $\Pi_{N+k}(u)\,a - \Pi_N(u)\,a = \Pi_N(u)\,[\Pi_k(2^{-N}u)\,a - a]$, we obtain
$$\|\Pi_{N+k}(u)\,a - \Pi_N(u)\,a\| \le C'\,e^{C|u|^\varepsilon(1 + 2^{-N\varepsilon})}\,|u|^\varepsilon\left[\frac{\|P(0)\|}{2^\varepsilon}\right]^N,$$
and hence $\lim_{m,n\to\infty} \|\Pi_m(u)\,a - \Pi_n(u)\,a\| = 0$. Thus, (3.2) converges pointwise for all $u \in \mathbb{R}$. The convergence is uniform on compact sets.

This result is often sufficient. Note that, when the entries of $P(u)$ are trigonometric polynomials, (3.1) is always satisfied with $\varepsilon = 1$, and can be satisfied for integer values $\varepsilon > 1$ if and only if $P(u)$ has vanishing derivatives at the origin. The argument can be pushed a little further, allowing for the replacement of $\|P(0)\|$ by the spectral radius of $P(0)$,
$$\rho_0 = \rho(P(0)) := \max\{|\lambda| : P(0)\,x = \lambda x,\ x \ne 0\}.$$
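Lemma 3.1 can be illustrated with a hypothetical $2 \times 2$ mask whose entries are trigonometric polynomials (so $\varepsilon = 1$) and with $\|P(0)\| = 1 < 2$; the products $\Pi_n(u)\,a$ then converge and, for this triangular example, the limit of the first component is known in closed form:

```python
import numpy as np

def P(u):
    # a hypothetical 2x2 mask with P(0) = diag(1, 1/2), so ||P(0)|| = 1 < 2
    z = np.exp(-1j * u)
    return np.array([[(1 + z) / 2, (1 - z) / 4],
                     [0.0,          (1 + z) / 4]])

a = np.array([1.0, 0.0])          # eigenvector of P(0) for the eigenvalue 1

def Pi_n_a(u, n):
    v = a.astype(complex)
    for j in range(n, 0, -1):     # v <- P(u/2) ... P(u/2^n) a
        v = P(u / 2.0**j) @ v
    return v

u = 1.0
v40, v50 = Pi_n_a(u, 40), Pi_n_a(u, 50)
print(np.abs(v40 - v50).max())    # ~ 0: successive products agree (Cauchy)
# with a = (1,0)^T the limit is (prod_j (1+e^{-iu/2^j})/2, 0)^T = ((1-e^{-iu})/(iu), 0)^T
print(abs(v50[0] - (1 - np.exp(-1j * u)) / (1j * u)))
```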

Theorem 3.2 Let $a$ be an eigenvector of $P(0)$ for the eigenvalue 1. Suppose that $P(u)$ satisfies (3.1), and that
$$\rho_0 < 2^\varepsilon. \qquad (3.4)$$
Then $\hat\Phi(u)$, defined by (3.2), converges pointwise for all $u \in \mathbb{R}$, and the convergence is uniform on compact sets. Moreover, $\hat\Phi(u)$ is Hölder continuous in $u = 0$.

Proof: 1. Again, we assume $\|a\| = 1$ for the sake of convenience. Let $Q(u) := P(u) - P(0)$. Then it follows from (3.1) that $\|Q(u)\| \le C\,|u|^\varepsilon$ with $\varepsilon > 0$. Further, observe that $\|P(0)^k\| \le C_\delta\,(\rho_0 + \delta)^k$. Then we have
$$\|\Pi_N(u)\| = \left\|P\!\left(\frac{u}{2}\right) \cdots P\!\left(\frac{u}{2^N}\right)\right\| = \left\|\left[P(0) + Q\!\left(\frac{u}{2}\right)\right] \cdots \left[P(0) + Q\!\left(\frac{u}{2^N}\right)\right]\right\|$$
$$\le \sum_{l=0}^{N} \sum_{m_1+\dots+m_{l+1}=N-l} \left\|P(0)^{m_1}\,Q\!\left(\frac{u}{2^{m_1+1}}\right)P(0)^{m_2}\,Q\!\left(\frac{u}{2^{m_1+m_2+2}}\right) \cdots Q\!\left(\frac{u}{2^{m_1+\dots+m_l+l}}\right)P(0)^{m_{l+1}}\right\|.$$
The second summation above is taken over all non-negative integers $m_1, \dots, m_{l+1}$ such that $m_1 + \dots + m_{l+1} = N - l$. Introducing $b = 2^{-\varepsilon} < 1$, this leads to
$$\|\Pi_N(u)\| \le \sum_{l=0}^{N} \sum_{m_1+\dots+m_{l+1}=N-l} C_\delta^{l+1}\,(\rho_0+\delta)^{N-l}\,C^l\,|u|^{l\varepsilon}\,b^{\sum_{j=1}^{l}(m_1+\dots+m_j+j)}$$
$$\le \sum_{l=0}^{N} \sum_{m_1+\dots+m_{l+1}=N-l} C_\delta\,b^{-N+l}\,b^{l(l+1)/2}\,(\rho_0+\delta)^{N-l}\,(|u|^\varepsilon\,C\,C_\delta)^l\,b^{m_1(l+1) + m_2 l + \dots + m_{l+1}}.$$

2. Next we find an upper bound for the sum over $m_1, \dots, m_{l+1}$. Consider the sum
$$A_{M,L} := \sum_{m_1+\dots+m_L=M} b^{m_1 + 2m_2 + \dots + L m_L}. \qquad (3.5)$$
For $A_{M,L}$ we find the recursion (putting $m = m_1$)
$$A_{M,L} = \sum_{m=0}^{M} b^M\,A_{M-m,L-1} = b^M \sum_{m=0}^{M} A_{M-m,L-1},$$
with $A_{M,1} = b^M$ and $A_{0,L} = 1$. We show by induction that
$$A_{M,L} \le \frac{b^M}{(1-b)^{L-1}}. \qquad (3.6)$$
For $L = 1$ and $M \in \mathbb{N}$, (3.6) is satisfied. Now, assume that (3.6) holds for $L \ge 1$ and all $M \in \mathbb{N}$. Then we obtain by the recursion formula
$$A_{M,L+1} = b^M \sum_{m=0}^{M} A_{M-m,L} \le \frac{b^M}{(1-b)^{L-1}} \sum_{m=0}^{M} b^{M-m} \le \frac{b^M}{(1-b)^L}.$$

3. Substituting (3.6) into the expression for $\|\Pi_N(u)\|$ obtained above, we find
$$\|\Pi_N(u)\| \le \sum_{l=0}^{N} C_\delta\,(\rho_0+\delta)^{N-l}\,(|u|^\varepsilon\,C\,C_\delta)^l\,b^{-N+l}\,b^{l(l+1)/2}\,A_{N-l,l+1}$$
$$\le \sum_{l=0}^{N} C_\delta\,(\rho_0+\delta)^{N-l}\,(|u|^\varepsilon\,C\,C_\delta)^l\,\frac{b^{l(l+1)/2}}{(1-b)^l}$$
$$\le C_\delta\,(\rho_0+\delta)^N \sum_{l=0}^{N}\left[\frac{|u|^\varepsilon\,C\,C_\delta}{(\rho_0+\delta)(1-b)}\right]^l b^{l(l+1)/2}. \qquad (3.7)$$
The sum in (3.7) converges uniformly for $|u| \le \Omega$, since $b < 1$. Hence, we can estimate
$$\|\Pi_N(u)\| \le C_\Omega\,(\rho_0+\delta)^N. \qquad (3.8)$$

4. Now, with the same argument as in the proof of Lemma 3.1, we have by (3.3)
$$\|\Pi_k(u)\,a - a\| \le \sum_{l=1}^{k} C\,C_\Omega\,(\rho_0+\delta)^{l-1}\left|\frac{u}{2^l}\right|^\varepsilon \le C_\delta'\,|u|^\varepsilon \sum_{l=1}^{k}\left[\frac{\rho_0+\delta}{2^\varepsilon}\right]^l. \qquad (3.9)$$
Hence, uniform boundedness of $\|\Pi_k(u)\,a - a\|$ is ensured, if $\rho_0 < 2^\varepsilon$, by choosing $\delta$ sufficiently small. Again, it follows that
$$\|\Pi_{N+k}(u)\,a - \Pi_N(u)\,a\| \le C_\delta'\,C_\Omega\,(\rho_0+\delta)^N\left|\frac{u}{2^N}\right|^\varepsilon \sum_{l=1}^{k}\left[\frac{\rho_0+\delta}{2^\varepsilon}\right]^l \le C_\delta''\,|u|^\varepsilon\left[\frac{\rho_0+\delta}{2^\varepsilon}\right]^N,$$
where the last factor is uniformly small in $k$ if $N$ is sufficiently large. Thus, for fixed $u$, $\Pi_k(u)\,a$ is a Cauchy sequence for $\rho_0 < 2^\varepsilon$, implying that we have pointwise convergence of $\hat\Phi(u)$. Moreover, the convergence is uniform on compact sets. The Hölder continuity of $\hat\Phi(u)$ in $u = 0$ directly follows from (3.9).
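Theorem 3.2 goes beyond Lemma 3.1 exactly when $\|P(0)\| \ge 2^\varepsilon$ but $\rho_0 < 2^\varepsilon$. A hypothetical example (for illustration only): $P(u) = \frac{1+e^{-iu}}{2}\,P_0$ with a non-normal $P_0$ whose spectral norm exceeds 2 while its spectral radius is 1:

```python
import numpy as np

P0 = np.array([[1.0, 3.0], [0.0, 0.5]])   # hypothetical: ||P0|| ~ 3.2 > 2, rho(P0) = 1

def P(u):
    return 0.5 * (1 + np.exp(-1j * u)) * P0

a = np.array([1.0, 0.0])                   # P0 a = a

def Pi_n_a(u, n):
    v = a.astype(complex)
    for j in range(n, 0, -1):              # v <- P(u/2) ... P(u/2^n) a
        v = P(u / 2.0**j) @ v
    return v

print(np.linalg.norm(P0, 2) > 2)                   # Lemma 3.1 hypothesis fails
print(np.abs(np.linalg.eigvals(P0)).max() < 2)     # but rho_0 < 2: Theorem 3.2 applies
u = 2.0
print(np.abs(Pi_n_a(u, 40) - Pi_n_a(u, 50)).max()) # ~ 0: Pi_n(u) a still converges
```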

4. Decay of infinite matrix products

Having shown that $\hat\Phi(u)$ is well-defined (under some conditions on $P(u)$), we now proceed to study how the factorization (2.8) of the refinement mask $P(u)$ can lead to decay in $u$ of $\hat\Phi(u)$ for $|u| \to \infty$. Let us suppose that $P(u)$ can be factored in the form
$$P(u) = \frac{1}{2^m\,(1 - e^{-iu})^m}\,C_{x^0}(2u) \cdots C_{x^{m-1}}(2u)\,P^{(m)}(u)\,E_{x^{m-1}}(u) \cdots E_{x^0}(u),$$
where the $C$- and $E$-matrices are defined as in (2.2) and (2.4) and where the vectors $x^0, \dots, x^{m-1}$ are all different from the zero vector. We can now rewrite $\hat\Phi(u)$ as
$$\hat\Phi(u) := \lim_{n\to\infty} \Pi_n(u)\,a = \lim_{n\to\infty}\left[\frac{1}{2^n\,(1 - e^{-iu/2^n})}\right]^m C_{x^0}(u) \cdots C_{x^{m-1}}(u)\,P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right)a.$$
We note again that, since $(x^0)^T P(0) = (x^0)^T$ with $x^0 \ne 0$, 1 is an eigenvalue of $P(0)$, and we take $a$ to be a right eigenvector of $P(0)$ for that eigenvalue. We also assume that $(x^0)^T a \ne 0$; if the eigenvalue 1 of $P(0)$ is nondegenerate, then this is automatically satisfied. Note that for $u = 0$ we have $\hat\Phi(0) = \lim_{n\to\infty} P(0)^n\,a = a$. We will establish conditions under which
$$\left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right)a\right\|$$
tends to a finite limit for $n \to \infty$; since $\lim_{n\to\infty} |2^n\,(1 - e^{-iu/2^n})|^{-1} = |u|^{-1}$ for $u \ne 0$, this then implies
$$\|\hat\Phi(u)\| \le C\,(1 + |u|)^{-m}\,\|C_{x^0}(u) \cdots C_{x^{m-1}}(u)\|\,\lim_{n\to\infty}\left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right)a\right\|. \qquad (4.1)$$
Let us define the vectors $e_k := (e_{k,\nu})_{\nu=0}^{r-1}$ by
$$e_{k,\nu} := \begin{cases} 1 & \text{if } x_{k,\nu} \ne 0, \\ 0 & \text{if } x_{k,\nu} = 0, \end{cases} \qquad (4.2)$$
where $x_{k,\nu}$ are the components of the vectors $x^k$ ($k = 0, \dots, m-1$) introduced above.

Theorem 4.1 Let $P$ be an $r \times r$-matrix of the form
$$P(u) = \frac{1}{2^m}\,C_{x^0}(2u) \cdots C_{x^{m-1}}(2u)\,P^{(m)}(u)\,C_{x^{m-1}}(u)^{-1} \cdots C_{x^0}(u)^{-1},$$
where the matrices $C_{x^k}$ are defined by the vectors $x^k \ne 0$ ($k = 0, \dots, m-1$) via (2.2) and where $P^{(m)}(u)$ is an $(r \times r)$ matrix with trigonometric polynomials as entries. Suppose that $P^{(m)}(0)\,e_{m-1} = e_{m-1}$, where $e_{m-1}$ is defined by (4.2). Further, suppose that
$$\rho_m := \rho(P^{(m)}(0)) < 2, \qquad (4.3)$$
and let for $k \ge 1$
$$\lambda_k := \frac{1}{k}\,\log_2\,\sup_u\left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^k}\right)\right\|. \qquad (4.4)$$
Then there exists a constant $C > 0$ such that for all $u \in \mathbb{R}$
$$\|\hat\Phi(u)\| \le C\,(1 + |u|)^{-m+\lambda_k}. \qquad (4.5)$$

Note that the requirement $P^{(m)}(0)\,e_{m-1} = e_{m-1}$ is automatically satisfied in the case of interest to us, i.e., if $P(u)$ is the refinement mask for the vector of functions $\phi_0(x), \dots, \phi_{r-1}(x)$ whose integer translates provide approximation order $m$; see Plonka (1995.a).

Proof: 1. From (2.6) it follows that
$$E_{x^{m-1}}(u) \cdots E_{x^0}(u) = E_{x^{m-1}}(0) \cdots E_{x^0}(0) + \sum_{k=1}^{m} (1 - e^{-iu})^k\,G^{(k)}_{x^{m-1}\cdots x^0}$$
with some matrices $G^{(k)}_{x^{m-1}\cdots x^0}$ depending on $E_{x^\nu}(0)$ and $(DE_{x^\nu})(0)$ ($\nu = 0, \dots, m-1$). Hence, we can write
$$P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right)a = T_{1,n}(u) + T_{2,n}(u)$$
with
$$T_{1,n}(u) := P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)E_{x^{m-1}}(0) \cdots E_{x^0}(0)\,a$$
and
$$T_{2,n}(u) := \sum_{k=1}^{m} (1 - e^{-iu/2^n})^k\,P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)v_k,$$
where $v_k := G^{(k)}_{x^{m-1}\cdots x^0}\,a$ ($k = 1, \dots, m$).

2. We can estimate the second term $T_{2,n}(u)$ with the same argument as in (3.8):
$$\|T_{2,n}(u)\| \le C \sum_{k=1}^{m} |2^{-n}u|^k\left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)\right\|\|v_k\| \le C_{|u|}\,2^{-n}\left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)\right\| \le C_{|u|}'\,2^{-n}\,(\rho_m + \delta)^n.$$
Since the spectral radius $\rho_m$ of $P^{(m)}(0)$ is supposed to be $< 2$, it follows that for all $u \in \mathbb{R}$
$$\lim_{n\to\infty} \|T_{2,n}(u)\| = 0.$$

3. We now concentrate on $T_{1,n}(u)$. From the structure of $E_{x^k}(0)$ and the definition (4.2) of $e_k$ it follows that, for any vector $b$,
$$E_{x^k}(0)\,b = [(x^k)^T b]\,e_k \qquad (k = 0, \dots, m-1).$$
Repeating this argument, we obtain
$$E_{x^{m-1}}(0) \cdots E_{x^0}(0)\,a = [(x^0)^T a]\,[(x^1)^T e_0] \cdots [(x^{m-1})^T e_{m-2}]\,e_{m-1}.$$
This leads to
$$T_{1,n}(u) = [(x^0)^T a]\,[(x^1)^T e_0] \cdots [(x^{m-1})^T e_{m-2}]\,P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)e_{m-1}.$$
Since $P^{(m)}(0)\,e_{m-1} = e_{m-1}$ and $\rho_m < 2$, we find by Theorem 3.2 that $\lim_{n\to\infty} T_{1,n}(u)$ is well-defined for all $u$, and uniformly bounded on compact sets.

4. Take now any $u \in \mathbb{R}$. If $|u| \le 1$, then by the Hölder continuity of $P^{(m)}(u)$ with Hölder exponent $\varepsilon \le 1$ there is a $C$ such that $\|T_{1,n}(u)\| \le C$. If $|u| > 1$, define $L$ such that $2^{L-1} < |u| \le 2^L$. Thus,
$$\left\|\lim_{n\to\infty} T_{1,n}(u)\right\| \le \left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^L}\right)\right\|\left\|\lim_{n\to\infty} T_{1,n}\!\left(\frac{u}{2^L}\right)\right\| \le C\left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^L}\right)\right\|.$$
By the definition of $\lambda_k$ it follows that
$$\left\|\lim_{n\to\infty} T_{1,n}(u)\right\| \le C'\,2^{L\lambda_k} \le C''\,(1 + |u|)^{\lambda_k},$$

i.e., by (4.1) and the observations above we find a constant $C$ such that
$$\|\hat\Phi(u)\| \le C\,(1 + |u|)^{-m+\lambda_k}.$$

Remarks. 1. It follows from (4.5) that the components of $\Phi(x)$ are continuous if $P$ satisfies the above conditions and if $\lambda_k < m - 1$.

2. For the proof of Theorem 4.1 we have assumed that $\rho(P^{(m)}(0)) < 2$. As we will see in Lemma 4.3 below, this can be ensured if the largest eigenvalue of $P(0)$ apart from the eigenvalue 1 is smaller than $2^{-m+1}$.

3. In order to avoid that $T_{1,n}(u)$ collapses to 0 as $n \to \infty$, i.e., $\lim_{n\to\infty} \|T_{1,n}(u)\| = 0$, which would imply $\hat\Phi(u) = 0$, we have to make sure that
$$[(x^0)^T a]\,[(x^1)^T e_0] \cdots [(x^{m-1})^T e_{m-2}] \ne 0. \qquad (4.6)$$
Note that this is already satisfied if there is an index $\nu$ ($0 \le \nu \le r-1$) such that the $\nu$-th component of $x^k$ does not vanish for all $k = 0, \dots, m-1$. On the other hand, since $x^l$ is a left eigenvector, and $e_{l-1}$ a right eigenvector of $P^{(l)}(0)$, both for the eigenvalue 1, (4.6) is also satisfied if the eigenvalue 1 of $P^{(l)}(0)$ is nondegenerate, for all $l$.

More detailed estimates show that decay of $\hat\Phi(u)$ is also possible in some cases where $\rho(P^{(m)}(0)) \ge 2$.

Corollary 4.2 Let $P$ be again an $r \times r$-matrix of the form
$$P(u) := \frac{1}{2^m}\,C_{x^0}(2u) \cdots C_{x^{m-1}}(2u)\,P^{(m)}(u)\,C_{x^{m-1}}(u)^{-1} \cdots C_{x^0}(u)^{-1},$$
where the matrices $C_{x^k}$ are defined by the vectors $x^k \ne 0$ ($k = 0, \dots, m-1$) via (2.2) and where $P^{(m)}(u)$ is an $(r \times r)$ matrix with trigonometric polynomials as entries. Suppose that $P^{(m)}(0)\,e_{m-1} = e_{m-1}$. Further, suppose that
$$\|P^{(m)}(u) - P^{(m)}(0)\| \le C\,|u|^\varepsilon \qquad (4.7)$$
and that the $E$-matrices defined in (2.4) satisfy
$$\|E_{x^{m-1}}(u) \cdots E_{x^0}(u) - E_{x^{m-1}}(0) \cdots E_{x^0}(0)\| \le C\,|u|^\mu. \qquad (4.8)$$
Now, if $\rho_m < 2^{\min\{\varepsilon,\mu\}}$, then there exists a constant $C > 0$ such that for all $u \in \mathbb{R}$
$$\|\hat\Phi(u)\| \le C\,(1 + |u|)^{-m+\lambda_k},$$
where $\lambda_k$ is defined in (4.4).

Proof: Observe that
$$\left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right)a\right\| \le \|S_n(u)\| + \|T_{1,n}(u)\|$$
with
$$S_n(u) := P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)\left[E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right) - E_{x^{m-1}}(0) \cdots E_{x^0}(0)\right]a$$
and where $T_{1,n}$ is defined as in the proof of Theorem 4.1. With the same argument as in Theorem 3.2 (cf. (3.8)) we obtain by (4.8) that
$$\|S_n(u)\| \le \left\|P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)\right\|\left\|\left[E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right) - E_{x^{m-1}}(0) \cdots E_{x^0}(0)\right]a\right\| \le C_\delta\,(\rho_m + \delta)^n\,C\left|\frac{u}{2^n}\right|^\mu.$$
Thus $S_n(u)$ tends to zero for $n \to \infty$ if $\rho_m < 2^\mu$. Further, since $e_{m-1}$ is an eigenvector of $P^{(m)}(0)$, we can apply Lemma 3.1 in order to show that $T_{1,n}$ is convergent for $(\rho_m + \delta) < 2^\varepsilon$. Hence,
$$P^{(m)}\!\left(\frac{u}{2}\right) \cdots P^{(m)}\!\left(\frac{u}{2^n}\right)E_{x^{m-1}}\!\left(\frac{u}{2^n}\right) \cdots E_{x^0}\!\left(\frac{u}{2^n}\right)a$$
is well-defined if $\rho_m < 2^{\min\{\varepsilon,\mu\}}$. Following point 4 of the proof of Theorem 4.1 we can find a constant $C$ such that
$$\|\hat\Phi(u)\| \le C\,(1 + |u|)^{-m+\lambda_k}.$$

Since $P^{(m)}(u)$ is completely determined by $P(u)$, the conditions $\rho_m < 2^\varepsilon$ or $\rho_m < 2^\mu$ are restrictions on $P(u)$. The following lemma shows that there is a simple connection between the spectra of $P(0)$ and $P^{(m)}(0)$, which makes it possible to recast bounds on $\rho_m$ as spectral bounds on $P(0)$ as well.

Lemma 4.3 Let $P(u)$ be an $r \times r$-matrix of the form
$$P(u) = \frac{1}{2}\,C_{x^0}(2u)\,P^{(1)}(u)\,C_{x^0}(u)^{-1}, \qquad (4.9)$$
where $C_{x^0}$ is defined by $x^0 \ne 0$ via (2.2), and assume that $P^{(1)}(0)\,e_0 = e_0$ (with $e_0$ defined by $x^0$ via (4.2)). Then $P(0)$ possesses a spectrum of the form $\{1, \lambda_1, \dots, \lambda_{r-1}\}$ if and only if $P^{(1)}(0)$ possesses a spectrum of the form $\{1, 2\lambda_1, \dots, 2\lambda_{r-1}\}$.

Proof: 1. First, observe that the factorization (4.9) implies that P (0) has the

eigenvalue 1 with left eigenvector x0. At the same time, x0 is a left eigenvector of P () for the eigenvalue 0, i.e., we have (x0)T P (0) = (x0)T 

(x0)T P () = 0T

(cf. Plonka (1995.a), Theorem 4.1). 2. Without loss of generality, we assume that x0 is of the form x0 = (x00 : : : x0] l;1 0 : : :  0)T with;11  l  r and x0 6= 0 for  = 0 : : : l ; 1. Now, consider P (u) := M P (u) M with 0 x x ::: x 1 BB 00 01 . . 0l;1 CC M := BBB 0.. .1. . . . 0.. CCC I r;l . . . A @ . 0 0 ::: 1 where I r;l is the (r ; l)  (r ; l) unit matrix (cf. section 2). Since M is invertible, P ](0) possesses the same spectrum as P (0). The left eigenvector of P ](0) for the eigenvalue 1 is x] := (x0)T M ;1 = (1 0 : : :  0)T . Analogously, x] is the left eigenvector of P ]() for the eigenvalue 0. Hence, we nd the factorization P ](u) = 21 C ](2u) P ] (1)(u) C ](u);1 (4.10) with C ](u) := diag (1 ; e;iu 1 : : :  1): Observe that P ](0) has the structure 1 0:::0 ! ] P (0) = r(0) R(0)  where r(0) is a vector of length r ; 1 and R(0) an (r ; 1)  (r ; 1){matrix, it follows by the factorization (4.10) that P ] (1)(0) is of the form  1 0:::0 ! ] (1) P (0) = 0 2 R(0) : Consequently, P ] (1)(0) has the spectrum f1 21  : : : 2r;1 g if and only if P ](0) has the spectrum f1 1 : : :  r;1g. 3. We show next that P ] (1)(0) and P (1)(0) have the same spectrum. The factorizations (4.9) and (4.10) imply the following connection between P ] (1)(0) and P (1)(0): P (1)(u) = A(2u) P ] (1)(u) A(u);1 18

where
\[ A(u) := C_{x^0}(u)^{-1} M^{-1} C^\sharp(u) = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ z & x^0_1 & x^0_2 & \cdots & x^0_{r-1} \\ z & 0 & x^0_2 & \cdots & x^0_{r-1} \\ \vdots & \vdots & & \ddots & \vdots \\ z & 0 & \cdots & 0 & x^0_{r-1} \end{pmatrix} \qquad (z := e^{-iu}), \]
with the lower right $(r-l) \times (r-l)$ block replaced by $I_{r-l}$.

Since $A(0)$ is invertible, it follows that $P^{\sharp(1)}(0)$ and $P^{(1)}(0)$ are similar, and thus the spectra of $P(0)$ and $P^{(1)}(0)$ are connected as stated in Lemma 4.3. $\Box$

It follows that the spectrum of $P^{(m)}(0)$ is likewise given by $\{1, 2^m \lambda_1, \dots, 2^m \lambda_{r-1}\}$. The requirement that $\rho_m < 2$ (as in Corollary 4.2) thus translates into
\[ \max\{|\lambda_1|, \dots, |\lambda_{r-1}|\} < 2^{1-m}. \qquad (4.11) \]

Remark. If $m > \gamma$ (which need not be true in general, but which we expect to be true in most cases; $\gamma = 1$ except if both $P^{(m)}(u)$ and $E_{x^{m-1}}(u) \cdots E_{x^0}(u)$ have vanishing derivatives at $u = 0$), then (4.11) automatically implies that $\rho_0$, the spectral radius of $P(0)$, equals 1. It also implies that the eigenvalue 1 of $P(0)$ is nondegenerate. Since $\mu_k \ge 0$ for all $k$, we need to have $m \ge 2$ in order to ensure decay faster than $(1+|u|)^{-1-\varepsilon}$ for $\|\hat\phi(u)\|$.
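The mechanism behind Lemma 4.3 can be illustrated numerically on the block structure used in the proof: a matrix of the form $P^\sharp(0)$ (block lower triangular, with $1$ in the upper left corner) has spectrum $\{1\} \cup \sigma(R(0))$, while the factored value $P^{\sharp(1)}(0)$ has spectrum $\{1\} \cup 2\sigma(R(0))$. A small sketch, with an illustrative block $R(0)$ chosen freely (not taken from any particular mask):

```python
import numpy as np

# hypothetical lower-right block R(0) with eigenvalues 0.5, -0.3, 0.2
R = np.diag([0.5, -0.3, 0.2])
row = np.array([[0.7], [-1.1], [0.4]])   # the vector r(0); values arbitrary

# P_sharp(0): block lower-triangular, spectrum {1} union spec(R)
P_sharp = np.block([[np.ones((1, 1)), np.zeros((1, 3))],
                    [row, R]])
# factored value P_sharp^(1)(0): same structure, lower-right block doubled
P_sharp1 = np.block([[np.ones((1, 1)), np.zeros((1, 3))],
                     [np.zeros((3, 1)), 2 * R]])

spec = np.sort(np.linalg.eigvals(P_sharp).real)
spec1 = np.sort(np.linalg.eigvals(P_sharp1).real)

assert np.allclose(spec, sorted([1.0, 0.5, -0.3, 0.2]))
assert np.allclose(spec1, sorted([1.0, 1.0, -0.6, 0.4]))
```

Note that doubling can create a degenerate eigenvalue 1 (here $2 \cdot 0.5 = 1$), which is why nondegeneracy of the eigenvalue 1 appears as a separate hypothesis later on.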

5. Transfer operators

In this section we want to investigate regularity estimates for $\phi$ in terms of Sobolev estimates, using transfer operators. The Sobolev exponent $s$ of $\phi$ is defined by
\[ s = \sup\Big\{ \beta : \int \|\hat\phi(u)\|^2 (1+|u|^2)^\beta\, du < +\infty \Big\}. \]
We assume that the factorization (2.8) and the hypothesis (4.3) of Theorem 4.1 are satisfied. As we saw in the proof of Theorem 4.1, we have
\[ \|\hat\phi(u)\| \le C (1+|u|)^{-m}\, \|T_\infty(u)\|, \]
where
\[ T_\infty(u) = \lim_{n \to +\infty} T_n(u) \]
and
\[ T_n(u) := P^{(m)}\Big(\frac u2\Big) \cdots P^{(m)}\Big(\frac u{2^n}\Big)\, E_{x^{m-1}}(0) \cdots E_{x^0}(0)\, a. \]

It follows that if we can prove an estimate of the type
\[ \int_{|u| \le 2^n \pi} \|T_\infty(u)\|^2\, du \le C\, 2^{2n\tau}, \qquad (5.1) \]
then $\phi$ is in the Sobolev space $H^s$ for all $s < m - \tau$. The estimate (5.1) is related to the spectral properties of the transition operator $T$ that acts on $2\pi$-periodic $r \times r$ matrices $M(u)$ according to
\[ (TM)(2u) := P^{(m)}(u) M(u) (P^{(m)})^*(u) + P^{(m)}(u+\pi) M(u+\pi) (P^{(m)})^*(u+\pi), \qquad (5.2) \]
where $(P^{(m)})^* := \overline{(P^{(m)})}^{\,T}$. As in the scalar case, this operator leaves a finite-dimensional space $E$ containing the identity invariant, if $P^{(m)}$ has trigonometric polynomial entries (cf. Cohen and Daubechies (1994)). Let $\rho$ be the spectral radius of $T$ restricted to $E$.

Theorem 5.1 The estimate (5.1) holds for all $\tau > \dfrac{\log \rho}{2 \log 2}$. Consequently, $\phi$ is in $H^s$ for all $s < m - \dfrac{\log \rho}{2 \log 2}$.

Proof: For all $n > 0$, we have
\[ \int_{-\pi}^{\pi} (T^n M)(u)\, du = \int_{-\pi}^{\pi} \big( T(T^{n-1} M) \big)(u)\, du = \int_{-2\pi}^{2\pi} P^{(m)}(u/2)\, (T^{n-1} M)(u/2)\, (P^{(m)})^*(u/2)\, du = \cdots \]
\[ = \int_{-2^n \pi}^{2^n \pi} P^{(m)}(u/2) \cdots P^{(m)}(2^{-n} u)\, M(2^{-n} u)\, (P^{(m)})^*(2^{-n} u) \cdots (P^{(m)})^*(u/2)\, du. \]
If we take $M = I$ and apply the trace operation, we thus obtain the estimate
\[ \int_{-2^n \pi}^{2^n \pi} \operatorname{Tr}\big[ P^{(m)}(u/2) \cdots P^{(m)}(2^{-n} u)\, (P^{(m)})^*(2^{-n} u) \cdots (P^{(m)})^*(u/2) \big]\, du \le C (\rho + \varepsilon)^n \]
for all $n > 0$. Since $\|A\|_2 = \sqrt{\operatorname{Tr}(AA^*)}$ is an equivalent norm for finite matrices, it follows that
\[ \int_{-2^n \pi}^{2^n \pi} \| P^{(m)}(u/2) \cdots P^{(m)}(2^{-n} u) \|^2\, du \le C (\rho + \varepsilon)^n. \]
This last estimate clearly implies (5.1) for all $\tau > \dfrac{\log \rho}{2 \log 2}$, if we observe that $T_\infty(u) = P^{(m)}(u/2) \cdots P^{(m)}(2^{-n} u)\, T_\infty(2^{-n} u)$ and that $T_\infty(u)$ is uniformly bounded on compact sets. $\Box$
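In the scalar case ($r = 1$) the transition operator (5.2) has an explicit matrix in the Fourier-coefficient basis: if $P(u) = \sum_n p_n e^{-inu}$ and $M(u) = \sum_j m_j e^{iju}$, a short computation gives $(TM)_\mu = 2 \sum_j a_{j - 2\mu}\, m_j$ with the autocorrelation $a_l = \sum_n p_n \bar p_{n-l}$. The following sketch (an illustration for the Haar symbol $P(u) = (1+e^{-iu})/2$, not one of the vector masks of this paper) builds this matrix on coefficients $|j| \le K$ and reads off its eigenvalues:

```python
import numpy as np

# autocorrelation of p = (1/2, 1/2): |P(u)|^2 = sum_l a_l e^{-ilu}
a = {-1: 0.25, 0: 0.5, 1: 0.25}

K = 2                      # represent symbols M(u) = sum_{|j|<=K} m_j e^{iju}
size = 2 * K + 1
T = np.zeros((size, size))
for mu in range(-K, K + 1):
    for j in range(-K, K + 1):
        T[mu + K, j + K] = 2.0 * a.get(j - 2 * mu, 0.0)   # (TM)_mu = 2 sum_j a_{j-2mu} m_j

eigs = np.sort(np.abs(np.linalg.eigvals(T)))
rho = eigs[-1]
```

For the Haar symbol the nonzero eigenvalues come out as $\{1, \tfrac12, \tfrac12\}$, so $\rho = 1$; how $\rho$ then enters the Sobolev estimate depends on how many factors have been split off into $P^{(m)}$, as in Theorem 5.1.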


6. Uniqueness

If the conditions of Theorem 4.1 are satisfied, with $\mu_k < m-1$ for some $k \ge 1$, then $\hat\phi$ is well-defined and integrable, so that $\phi(x)$, its inverse Fourier transform, is well-defined as well. Since $\hat\phi(u)$ is obviously a solution to (1.7), $\phi(x)$ is a solution to (1.1). Is it the only one? The following theorem lists some conditions that ensure uniqueness.

Theorem 6.1 Suppose that the conditions of Theorem 4.1 are satisfied, with $\inf_{k \ge 1} \mu_k < m-1$, and that the eigenvalue 1 of $P(0)$ is nondegenerate. Then $\phi(x)$ is a compactly supported continuous solution to (1.1). Moreover, if $\eta(x)$ is any other $L^1$-solution to (1.1) such that $\int \eta(x)\, dx \neq 0$ and $\int (1+|x|)\, \|\eta(x)\|\, dx < \infty$, then $\eta(x)$ is a multiple of $\phi(x)$.

Proof: 1. We assume, as in Theorem 4.1, that all the entries of $P(u)$ are trigonometric polynomials. Let us, for this point only, consider $u$ to be complex rather than real. The argument that $\|P(\frac u2) \cdots P(\frac u{2^n})\|$ is bounded uniformly in $n \ge 1$ and in $u \in \{z : |z| < 1\}$ holds for complex $u$ as well. Since $\|P(u)\| \le C e^{R |\operatorname{Im} u|}$, it then follows that for any $|u| > 1$ with $2^k \le |u| < 2^{k+1}$,
\[ \Big\| P\Big(\frac u2\Big) \cdots P\Big(\frac u{2^n}\Big)\, a \Big\| \le C^k e^{R |\operatorname{Im} u| (1/2 + 1/4 + \cdots + 1/2^k)}\, C_1 \le C' (1+|u|)^{\log_2 C} e^{R |\operatorname{Im} u|}. \]
It follows that $\hat\phi(u) = \lim_{n \to \infty} P(\frac u2) \cdots P(\frac u{2^n})\, a$ satisfies the same bound, implying that $\phi$ is a compactly supported distribution. On the other hand, $\phi(x)$ is bounded and continuous because, by Theorem 4.1, $\|\hat\phi(u)\| \le C (1+|u|)^{-1-\varepsilon}$ for real $u$. Note that $\hat\phi$ is a $C^\infty$-function since $\phi$ has compact support.

2. If $\eta(x)$ is another $L^1$-solution, then $\hat\eta(0) \neq 0$ must be an eigenvector of $P(0)$ with eigenvalue 1, so that $\hat\eta(0) = c\, a$ for some $c \neq 0$. Since $\int |x|\, \|\eta(x)\|\, dx < \infty$, we also have $\|\hat\eta(u) - \hat\eta(0)\| \le C |u|$. Hence, for any fixed $u$,
\[ \|\hat\eta(u) - c\, \hat\phi(u)\| = \lim_{n \to \infty} \Big\| P\Big(\frac u2\Big) \cdots P\Big(\frac u{2^n}\Big) \Big[ \hat\eta\Big(\frac u{2^n}\Big) - \hat\eta(0) \Big] \Big\| \]
\[ \le C' \lim_{n \to \infty} \Big\| P^{(m)}\Big(\frac u2\Big) \cdots P^{(m)}\Big(\frac u{2^n}\Big)\, E_{x^{m-1}}\Big(\frac u{2^n}\Big) \cdots E_{x^0}\Big(\frac u{2^n}\Big) \Big\|\, \Big\| \hat\eta\Big(\frac u{2^n}\Big) - \hat\eta(0) \Big\| \]
\[ \le C' \lim_{n \to \infty} \big[ 2^{(\mu_m + \varepsilon) n} C_1 \big]\, C\, |u|\, 2^{-n} = 0, \]
since $2^{\mu_m + \varepsilon} < 2$. Thus, $\eta = c\, \phi$. $\Box$

All the examples studied in the literature so far correspond to $P(u)$ in which all the entries are trigonometric polynomials, and that is why we have mostly restricted

ourselves to this case. Nevertheless, most of our analysis carries over to the non-polynomial case. In Plonka (1995.a), the original version of Theorem 2.1 does not require the $\phi_\nu$ to be compactly supported, nor the entries of $P$ to be trigonometric polynomials; only sufficient decay in $x$ for $\phi(x)$ and a sufficiently high regularity of $P(u)$ are required. As shown in section 3 (where the $P(u)$ were not restricted to trigonometric polynomials), this then implies $\|\hat\phi(u) - \hat\phi(0)\| \le C|u|$ (since $P$ is Hölder continuous at $u = 0$ with Hölder exponent at least 1). This, in turn, is the only ingredient necessary in point 2 of the proof of Theorem 6.1, which establishes uniqueness of the solution within a certain class of functions with mild decay. Compact support of $\phi(x)$ is, of course, no longer assured.

7. Convergence of the cascade algorithm

If $P(u)$ is $m$ ($m \ge 1$) times factorizable (in the sense of Plonka (1995.a)), i.e.,
\[ P(u) = \frac{1}{2^m (1 - e^{-iu})^m}\, C_{x^0}(2u) \cdots C_{x^{m-1}}(2u)\, P^{(m)}(u)\, E_{x^{m-1}}(u) \cdots E_{x^0}(u), \]
if the spectral radius of $P^{(m)}(0)$ is less than 2, and if, for some fixed $k$,
\[ \mu_k = \frac1k \log_2 \sup_u \Big\| P^{(m)}\Big(\frac u2\Big) \cdots P^{(m)}\Big(\frac u{2^k}\Big) \Big\| < m, \]
then our analysis in the previous sections has shown that
\[ \hat\phi(u) = \lim_{n \to \infty} P\Big(\frac u2\Big) \cdots P\Big(\frac u{2^n}\Big)\, a \]
(with $(x^0)^T a = 1$) is well-defined, and that
\[ \|\hat\phi(u)\| \le C (1+|u|)^{-m+\mu_k}. \]
Moreover, we have $\|\hat\phi(u) - \hat\phi(0)\| \le C|u|$ for $|u| \le 1$. So far, this convergence is only pointwise, in the Fourier domain. For practical applications one is often interested in convergence of iterative schemes that generate the function $\phi$ in the "$x$-domain". One has to distinguish two types of schemes:

The cascade algorithm, introduced in Daubechies (1988), consists in iterating the mapping $f \mapsto \sum_{n=0}^N P_n\, f(2x - n)$ on a well-chosen initial function vector $f(x)$.

The subdivision or refinement algorithm consists in iterative refinements of a "vector sequence" $s_0(k)$ by rules of the type
\[ s_n(2^{-n} k) = \sum_m P^T_{k-2m}\, s_{n-1}(2^{-n+1} m) \]
(see Cavaretta, Dahmen and Micchelli (1991) or Dyn (1992) for an overview of subdivision schemes).
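In its simplest scalar form ($r = 1$, so the matrices $P^T_{k-2m}$ are numbers) the subdivision rule above is just upsampling followed by convolution with the mask. A minimal sketch, using the hat-function mask $p = (\tfrac12, 1, \tfrac12)$ (a standard scalar example, not one of the vector masks of this paper), for which $s_n(k)$ approaches $\Delta(2^{-n}k)$:

```python
import numpy as np

p = np.array([0.5, 1.0, 0.5])   # refinement mask of the hat function
s = np.array([1.0])             # s_0 = Dirac sequence delta(k)

n = 8
for _ in range(n):
    up = np.zeros(2 * len(s) - 1)
    up[::2] = s                 # upsample: old values move to even indices
    s = np.convolve(up, p)      # s_n(k) = sum_m p_{k-2m} s_{n-1}(m)

# after n steps, s[k] approximates the hat function at x = k / 2^n
peak = s[2**n]                  # x = 1, where the hat function equals 1
```

Here `peak` equals $1 - 2^{-n}$, illustrating the uniform convergence of the scheme to the limit function $\Delta$.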

In the scalar case, it can easily be checked that $n$ iterations of the cascade algorithm, initiated on the "hat function" $\Delta(x) := \max\{0, 1-|x|\}$, are equivalent to the linear interpolation of the points generated by $n$ iterations of the subdivision algorithm, initiated on a Dirac sequence $\delta(k)$. However, the subdivision process is often preferred, because of its local nature. In the vector case, these relations are more complex: if one iterates $n$ times the cascade algorithm on an initial vector function of the type $\Delta(x)\, b$, where $b$ is a fixed vector, then the result $\phi_n$ is expressed in the Fourier domain by
\[ \hat\phi_n(u) = \hat\Delta(2^{-n} u)\, P(u/2) \cdots P(2^{-n} u)\, b. \qquad (7.1) \]
In contrast, if one iterates $n$ times the subdivision algorithm on an initial vector sequence $s_0(k)$, the resulting sequence $s_n(k)$ is related to $s_0$ by
\[ (\hat s_n(u))^T = (\hat s_0(u))^T P(u/2) \cdots P(2^{-n} u). \]

Here $(\hat s_n(u))^T$ is the row vector composed of the Fourier series of the components of $s_n$. After linear interpolation, we obtain a row vector function $(\tilde\phi_n(x))^T$ given by
\[ (\hat{\tilde\phi}_n(u))^T = \hat\Delta(2^{-n} u)\, (\hat s_0(u))^T P(u/2) \cdots P(2^{-n} u). \]
This shows that the $j$-th component of $\phi_n$ can be obtained by applying the subdivision algorithm to the initial vector sequence $s^l_0(k) = \delta_{lj} \delta_{k0}$ (i.e., the sequence $\cdots 0\,0\,1\,0\,0 \cdots$ in the $j$-th component, and the zero sequence in all other components), then taking the scalar product with the vector $b$ and interpolating linearly the resulting scalar sequence.

Let us now investigate the convergence of the cascade algorithm, keeping in mind these more sophisticated relations with subdivision schemes. In order to simplify the study of convergence, we shall use the function $\frac{\sin(\pi x)}{\pi x}$ as a starting point, rather than the hat function $\Delta(x)$; this is equivalent to considering band-limited interpolation of the sequences generated by the subdivision scheme. We thus define
\[ \hat\phi_0^{b.l.}(u) := \chi_{[-\pi,\pi]}(u)\, \hat\phi(0) = \chi_{[-\pi,\pi]}(u)\, a, \qquad \hat\phi_n^{b.l.}(u) := P\Big(\frac u2\Big)\, \hat\phi_{n-1}^{b.l.}\Big(\frac u2\Big). \qquad (7.2) \]
Note that $\hat\phi_0^{b.l.}(0) = \hat\phi(0)$. The following result deals with the convergence of the cascade algorithm in the uniform norm.

Theorem 7.1 Let $P$ be an $r \times r$ matrix mask satisfying the assumptions of Theorem 4.1. If $\mu_k$ defined in (4.4) satisfies $\mu_k < m-1$, then we have
\[ \lim_{n \to \infty} \|\hat\phi_n^{b.l.} - \hat\phi\|_{L^1} = 0. \]
As a consequence, $\phi_n^{b.l.}(x)$ converges uniformly to $\phi(x)$.


Proof: We have
\[ \hat\phi_n^{b.l.}(u) = \chi_{[-\pi,\pi]}(2^{-n} u)\, P\Big(\frac u2\Big) \cdots P\Big(\frac u{2^n}\Big)\, a. \]
With the assumptions of Theorem 4.1, we already know that $\hat\phi_n^{b.l.}$ converges pointwise (and uniformly on compact sets) to $\hat\phi$. Using the factorization of $P(u)$, we obtain
\[ \hat\phi_n^{b.l.}(u) = \chi_{[-\pi,\pi]}(2^{-n} u) \Big[ \frac{1}{2^n (1 - e^{-iu/2^n})} \Big]^m C_{x^0}(u) \cdots C_{x^{m-1}}(u)\, P^{(m)}\Big(\frac u2\Big) \cdots P^{(m)}\Big(\frac u{2^n}\Big)\, E_{x^{m-1}}\Big(\frac u{2^n}\Big) \cdots E_{x^0}\Big(\frac u{2^n}\Big)\, a, \]
and thus
\[ \|\hat\phi_n^{b.l.}(u)\| \le C_1 (1+|u|)^{-m}\, \chi_{[-\pi,\pi]}(2^{-n} u)\, \Big\| P^{(m)}\Big(\frac u2\Big) \cdots P^{(m)}\Big(\frac u{2^n}\Big) \Big\|. \]
Now, from the assumptions of Theorem 4.1, we have
\[ \chi_{[-\pi,\pi]}(2^{-n} u)\, \Big\| P^{(m)}\Big(\frac u2\Big) \cdots P^{(m)}\Big(\frac u{2^n}\Big) \Big\| \le C_2 (1+|u|)^{\mu_k}, \]
where $C_2$ does not depend on $n$. We thus have the uniform estimate
\[ \|\hat\phi_n^{b.l.}(u)\| \le C (1+|u|)^{-m+\mu_k}. \]

Since $m - \mu_k > 1$, we can apply dominated convergence and the result follows. $\Box$

Remarks. 1. Because the hat function $\Delta(x)$ and the sinc function $\frac{\sin(\pi x)}{\pi x}$ agree on the integers, one easily checks that the vector functions $\phi_n$ defined by (7.1) (with $b$ replaced by $a$) and the band-limited $\phi_n^{b.l.}$ agree on the dyadic rationals $2^{-n}\mathbb{Z}$,
\[ \phi_n(2^{-n} k) = \phi_n^{b.l.}(2^{-n} k), \qquad k \in \mathbb{Z}. \]
Now $\phi_n$ is just the linear interpolation of the $\phi_n(2^{-n} k)$; because $\phi$ is Hölder continuous, and $\sup_k \|\phi_n(2^{-n} k) - \phi(2^{-n} k)\| \to 0$ as $n \to \infty$, it follows that $\phi_n$ converges uniformly to $\phi$ as well.
2. The same arguments also give $L^2$-convergence, assuming only $\mu_k < m - \frac12$.
3. If $m - \mu_k > m' + 1$, convergence results in $C^{m'}$ can also be obtained starting from the same cardinal sine function.
4. The graphs for the examples in section 8 are, in fact, graphs of close approximations $\phi_n$ to the true solutions $\phi$, obtained by the subdivision iteration described just before Theorem 7.1.

8. Examples

In this section we want to apply the analysis of the previous sections to various examples. We will see that the known smoothness of B-splines with multiple knots and of the DGHM scaling functions can be recovered. Further, we construct a new example with controlled smoothness.

8.1. B-splines with multiple knots

Let $r \in \mathbb{N}$ and $m \in \mathbb{N}_0$ be given fixed integers. We consider equidistant knots with multiplicity $r$, $x_l := \lfloor l/r \rfloor$ ($l \in \mathbb{Z}$), where $\lfloor x \rfloor$ denotes the integer part of $x \in \mathbb{R}$. Let $N^r_{m,\nu}$ ($\nu \in \mathbb{Z}$) denote the cardinal B-spline of order $m$ and defect $r$ with respect to the knots $x_\nu, \dots, x_{\nu+m}$, given by the following formulas. For $m = 0$ and $\nu = 0, \dots, r-2$ let
\[ N^r_{0,\nu} := \frac{D^{r-1-\nu} \delta}{(r-1-\nu)!}, \]
and let $N^r_{0,r-1} := \delta$, where $\delta$ denotes the Dirac distribution. For $m \ge 1$ and $x_\nu = x_{\nu+m} = 0$, we define $N^r_{m,\nu}$ in the distributional sense by
\[ N^r_{m,\nu} := \frac{D^{r-m-1-\nu} \delta}{(r-m-1-\nu)!}. \]
Further, let $N^r_{1,r-1} := (\chi_{(0,1)} + \chi_{(0,1]})/2$. Assume that for $l \in \mathbb{Z}$ and $\nu = 0, \dots, r-1$ we have $N^r_{1,\nu+rl} := N^r_{1,\nu}(\cdot - l)$. Now, for $m \ge 2$ and $x_{\nu+m} > x_\nu$, let $N^r_{m,\nu}$ ($\nu \in \mathbb{Z}$) be defined by the recursion formula
\[ (x_{\nu+m} - x_\nu)\, N^r_{m,\nu}(x) := (x - x_\nu)\, N^r_{m-1,\nu}(x) + (x_{\nu+m} - x)\, N^r_{m-1,\nu+1}(x). \]
Note that for $\nu \in \mathbb{Z}$ and $m \in \mathbb{N}$,
\[ N^r_{m,\nu+lr} = N^r_{m,\nu}(\cdot - l) \qquad (l \in \mathbb{Z}), \]
and for $m \ge r$,
\[ \hat N^r_{m,\nu}(0) = \int_{-\infty}^{\infty} N^r_{m,\nu}(x)\, dx = \frac1m. \]
It is well known that for $m > r$ we have $N^r_{m,\nu} \in C^{m-r-1}(\mathbb{R})$. We put $N_m := (N^r_{m,\nu})_{\nu=0}^{r-1}$ and $\hat N_m := (\hat N^r_{m,\nu})_{\nu=0}^{r-1}$. In particular, we obtain
\[ \hat N_0(u) = \Big( \frac{(iu)^{r-1}}{(r-1)!}, \dots, \frac{(iu)^1}{1!}, 1 \Big)^T. \]

As shown in de Boor (1976), the spline functions $N^r_{m,\nu}(\cdot - l)$ ($m \ge r$, $l \in \mathbb{Z}$, $\nu = 0, \dots, r-1$) form a Riesz basis of their closed linear span $V_0$. Furthermore, $V_0$ provides approximation order $m$. The vector $N_m$ satisfies a vector refinement equation
\[ \hat N_m(2u) = P_m(u)\, \hat N_m(u), \]
where the refinement mask $P_m$ is of the form
\[ P_m(u) = \frac{1}{2^m}\, C_{x^0}(2u) \cdots C_{x^{m-1}}(2u)\, P_0(u)\, C_{x^{m-1}}(u)^{-1} \cdots C_{x^0}(u)^{-1} \qquad (8.1) \]
with matrices $C_{x^k}$ defined by the vectors of spline knots $x^k := (x_{m-k}, \dots, x_{m-k+r-1})^T$ ($k = 0, \dots, m-1$) via (2.2), and with the refinement mask of $N_0$,
\[ P_0(u) := P_0(0) = \operatorname{diag}(2^{r-1}, \dots, 2^0) \]
(cf. Plonka (1995.b)). In particular, we have the recursion $P_m(u) = \frac12 C(2u)\, P_{m-1}(u)\, C(u)^{-1}$ with $C$ defined by $(x_m, \dots, x_{m+r-1})^T$, where $P_{m-1}$ is the refinement mask of the B-spline vector $N_{m-1}$ of order $m-1$.

Now let us apply the theory of the previous sections to the refinement mask $P_m$. Repeated application of Lemma 4.3 yields that $P_m(0)$ possesses the spectrum $\{1, 2^{r-1-m}, \dots, 2^{-m+1}\}$. Since $\|P_0(u) - P_0(0)\| \le C|u|^\varepsilon$ holds for all $\varepsilon > 0$, we have by Lemma 3.1 pointwise convergence of
\[ \lim_{n \to \infty} \prod_{l=1}^n P_0\Big(\frac u{2^l}\Big)\, a \]
with $a := (0, \dots, 0, 1)^T$ for all $u \in \mathbb{R}$. Hence, it follows for all $m \in \mathbb{N}_0$ that
\[ \hat\phi_m(u) := \lim_{n \to \infty} \prod_{l=1}^n P_m\Big(\frac u{2^l}\Big)\, a, \qquad (8.2) \]
where $a := (\underbrace{0, \dots, 0}_{r-m-1}, \underbrace{1, \dots, 1}_{m+1})^T$ for $r > m+1$ and $a := (1, \dots, 1)^T$ for $r \le m+1$, is at least pointwise convergent. As we will see later, Theorem 6.1 implies that for $m > r$ the solution $\phi_m(x)$ coincides with $N_m(x)$. Observe that for $r = 1$ we have the well-known refinement equation for cardinal B-splines,
\[ P_m(u) = \Big( \frac{1 + e^{-iu}}{2} \Big)^m. \]
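For $r = 1$ the infinite product (8.2) can be evaluated directly: $\prod_{l \ge 1} \big((1+e^{-iu/2^l})/2\big)^m$ telescopes to $\big((1-e^{-iu})/(iu)\big)^m$, the Fourier transform of the cardinal B-spline of order $m$ supported on $[0, m]$. A quick numerical check with a truncated product:

```python
import numpy as np

def phihat_product(u, m, terms=48):
    """Truncated infinite product prod_l P_m(u / 2^l) for the scalar mask
    P_m(u) = ((1 + exp(-iu)) / 2)**m."""
    val = 1.0 + 0.0j
    for l in range(1, terms + 1):
        val *= ((1.0 + np.exp(-1j * u / 2**l)) / 2.0) ** m
    return val

def bspline_hat(u, m):
    # Fourier transform of the order-m cardinal B-spline on [0, m]
    return ((1.0 - np.exp(-1j * u)) / (1j * u)) ** m

assert abs(phihat_product(1.7, 3) - bspline_hat(1.7, 3)) < 1e-9
```

The same truncated product also vanishes at $u = 2\pi l$, $l \neq 0$, reflecting the Strang–Fix conditions satisfied by the B-splines.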

Now assume that $r \ge 2$. Then $\rho(P_0(0)) = 2^{r-1} \ge 2$, so that we cannot apply the analysis of Theorem 4.1 to the factorization (8.1). But using Lemma 4.3, we find that $P_{r-1}(0)$, with
\[ P_{r-1}(u) := \frac{1}{2^{r-1}}\, C_{x^{m-r+1}}(2u) \cdots C_{x^{m-1}}(2u)\, P_0(u)\, C_{x^{m-1}}(u)^{-1} \cdots C_{x^{m-r+1}}(u)^{-1}, \]
possesses the spectrum $\{1, 1, 2^{-1}, \dots, 2^{-r+2}\}$, i.e., $\rho(P_{r-1}(0)) = 1$. Note that $P_{r-1}$ is the refinement mask of $N_{r-1}$. Thus, we can apply Theorem 4.1 to the factorization
\[ P_m(u) = \frac{1}{2^{m-r+1}}\, C_{x^0}(2u) \cdots C_{x^{m-r}}(2u)\, P_{r-1}(u)\, C_{x^{m-r}}(u)^{-1} \cdots C_{x^0}(u)^{-1} \]
and find that
\[ \|\hat\phi_m(u)\| \le C (1+|u|)^{-m+r-1+\mu_1} \]
with
\[ \mu_1 = \log_2 \sup_u \Big\| P_{r-1}\Big(\frac u2\Big) \Big\|. \qquad (8.3) \]

Lemma 8.1 For $\mu_1$ given in (8.3) we have $\mu_1 = 0$.
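The claim of Lemma 8.1 can also be checked numerically: the proof below expresses $P_{r-1}(u)$ through the matrix $A_{r-2}(u) = \frac12 (A^0_{r-2} + A^1_{r-2} e^{-iu})$ with explicit binomial entries. The sketch below builds these matrices for the sample size $r = 4$ and verifies that all column and row sums of $A_{r-2}(0)$ equal 1, so that the spectral norm of $A_{r-2}(u)$ never exceeds 1:

```python
import numpy as np
from math import comb

r = 4                       # example size; A_{r-2} is (r-1) x (r-1)
n = r - 1
A0 = np.array([[comb(j, i) / 2**j for j in range(n)] for i in range(n)])
A1 = np.array([[comb(r - 2 - j, i - j) / 2**(r - 2 - j) if i >= j else 0.0
                for j in range(n)] for i in range(n)])

A_at_0 = 0.5 * (A0 + A1)
# all column sums and row sums of A_{r-2}(0) equal 1
assert np.allclose(A_at_0.sum(axis=0), 1.0)
assert np.allclose(A_at_0.sum(axis=1), 1.0)

# sup_u ||A(u)||_2 <= sqrt(||A(0)||_1 * ||A(0)||_inf) = 1, hence mu_1 = 0
sup2 = max(np.linalg.norm(0.5 * (A0 + A1 * np.exp(-1j * u)), 2)
           for u in np.linspace(0, 2 * np.pi, 1001))
assert sup2 <= 1.0 + 1e-9
```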

Proof: Observe that $N^r_{r-1,0}$ is defined by the knots $\underbrace{0, \dots, 0}_{r}$, which means $N^r_{r-1,0} = \delta$. Further, the $N^r_{r-1,\nu}$ ($\nu = 1, \dots, r-1$), defined by the knots $\underbrace{0, \dots, 0}_{r-\nu}, \underbrace{1, \dots, 1}_{\nu}$, coincide with the Bernstein polynomials of degree $r-2$, i.e.,
\[ N^r_{r-1,\nu}(x) = \binom{r-2}{\nu-1}\, x^{\nu-1} (1-x)^{r-1-\nu}. \]
Hence the refinement mask of $N_{r-1}$ can be explicitly given by
\[ P_{r-1}(u) = \begin{pmatrix} 1 & 0^T \\ 0 & A_{r-2}(u) \end{pmatrix} \]
with
\[ A_{r-2}(u) = \frac12 \big( A^0_{r-2} + A^1_{r-2}\, e^{-iu} \big), \]
where $A^0_{r-2}$ and $A^1_{r-2}$ are triangular matrices of the form
\[ A^0_{r-2} := \Big( \frac{1}{2^j} \binom{j}{i} \Big)_{i,j=0}^{r-2}, \qquad A^1_{r-2} := \Big( \frac{1}{2^{r-2-j}} \binom{r-2-j}{i-j} \Big)_{i,j=0}^{r-2} \]
(see e.g. Micchelli and Pinkus (1991)). Recall that the spectral norm $\|V\|_2$ of a matrix $V := (v_{ij})_{i,j=1}^n$ can be estimated by the product of the matrix 1-norm and the matrix $\infty$-norm,
\[ \|V\|_1 := \max_{1 \le j \le n} \sum_{i=1}^n |v_{ij}|, \qquad \|V\|_\infty := \max_{1 \le i \le n} \sum_{j=1}^n |v_{ij}|, \qquad (8.4) \]
i.e.,
\[ \|V\|_2^2 \le \|V\|_1\, \|V\|_\infty \]
(see e.g. Lancaster and Tismenetsky (1985)). Since all entries of $A^0_{r-2}$ and $A^1_{r-2}$ are nonnegative, it follows that
\[ \sup_u \|A_{r-2}(u)\|_1 = \|A_{r-2}(0)\|_1 = \max_{0 \le j \le r-2} \sum_{i=0}^{r-2} \Big( \frac{1}{2^{j+1}} \binom{j}{i} + \frac{1}{2^{r-1-j}} \binom{r-2-j}{i-j} \Big) = \frac12 + \frac12 = 1. \]
Analogously, we find that
\[ \sup_u \|A_{r-2}(u)\|_\infty = \|A_{r-2}(0)\|_\infty = 1. \]

Hence, we have $\sup_u \|P_{r-1}(u)\|_2 = 1$, and thus $\mu_1 = 0$. $\Box$

By Theorem 4.1 it follows for $\nu = 0, \dots, r-1$ that
\[ |\hat\phi_{m,\nu}(u)| \le C (1+|u|)^{-m+r-1}, \]
i.e., the elements of $\phi_m$ are $(m-r-1)$-times continuously differentiable. Since $\mu_1 < m-r$, Theorem 6.1, ensuring the uniqueness of the limit, yields for $m > r$ that $\phi_m(x) = N_m(x)$. Further, we also have uniform and $L^2$-convergence of the associated cascade algorithm. We want to check whether the smoothness result can be improved by Corollary 4.2. By
\[ \|E_{x^{m-r}}(u) \cdots E_{x^0}(u) - E_{x^{m-r}}(0) \cdots E_{x^0}(0)\| = \Big\| \sum_{k=1}^{m-r+1} (1 - e^{-iu})^k\, G^{(k)}_{x^{m-r}, \dots, x^0} \Big\| \]

and $\|G^{(1)}_{x^{m-r}, \dots, x^0}\| > 0$, we have (4.8) only with $\gamma = 1$. Hence, our result cannot be improved.

8.2. DGHM scaling functions

Now we consider the example of two scaling functions treated in Donovan, Geronimo, Hardin, and Massopust (1994). In the special case $s = s_1 = s_2$ of their construction, let $\hat\phi$ be a solution of (1.7) with the refinement mask
\[ P(u) := \frac12 \big( P_0 + P_1 e^{-iu} + P_2 e^{-2iu} + P_3 e^{-3iu} \big), \qquad (8.5) \]
where
\[ P_0 := \begin{pmatrix} -\dfrac{s^2-4s-3}{2(s+2)} & 1 \\[6pt] -\dfrac{3(s-1)(s+1)(s^2-3s-1)}{4(s+2)^2} & \dfrac{3s^2+s-1}{2(s+2)} \end{pmatrix}, \qquad P_1 := \begin{pmatrix} -\dfrac{s^2-4s-3}{2(s+2)} & 0 \\[6pt] -\dfrac{3(s-1)(s+1)(s^2-s+3)}{4(s+2)^2} & 1 \end{pmatrix}, \]
\[ P_2 := \begin{pmatrix} 0 & 0 \\[6pt] -\dfrac{3(s-1)(s+1)(s^2-s+3)}{4(s+2)^2} & \dfrac{3s^2+s-1}{2(s+2)} \end{pmatrix}, \qquad P_3 := \begin{pmatrix} 0 & 0 \\[6pt] -\dfrac{3(s-1)(s+1)(s^2-3s-1)}{4(s+2)^2} & 0 \end{pmatrix}. \]
The refinement mask $P(u)$ can be factorized as
\[ P(u) = \frac14\, C_{x^0}(2u)\, C_{x^1}(2u)\, P^{(2)}(u)\, C_{x^1}(u)^{-1}\, C_{x^0}(u)^{-1}, \qquad (8.6) \]
where $C_{x^0}$, $C_{x^1}$ are defined by $x^0 := \big( -\frac{3(s^2-1)}{s+2}, 1 \big)^T$ and $x^1 := (1, 1)^T$ via (2.2), and
\[ P^{(2)}(u) := \frac12 \begin{pmatrix} 2 & 0 \\[6pt] \dfrac{(s^2-3s-1) z^2 + (-10s^2-8s+6) z + (s^2-3s-1)}{s+2} & 4s(1+z) \end{pmatrix} \qquad (z := e^{-iu}). \]
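The matrices above (as reconstructed from (8.5) and (8.6)) can be sanity-checked numerically: $P(0) = \frac12 (P_0 + P_1 + P_2 + P_3)$ should have eigenvalues $\{1, s\}$ and $P^{(2)}(0)$ eigenvalues $\{1, 4s\}$, the spectra used in the discussion below. A sketch for the sample value $s = -0.2$:

```python
import numpy as np

s = -0.2
d = s + 2.0
c = -3 * (s - 1) * (s + 1) / (4 * d**2)   # common factor of the lower-left entries
P0 = np.array([[-(s**2 - 4*s - 3) / (2*d), 1.0],
               [c * (s**2 - 3*s - 1), (3*s**2 + s - 1) / (2*d)]])
P1 = np.array([[-(s**2 - 4*s - 3) / (2*d), 0.0],
               [c * (s**2 - s + 3), 1.0]])
P2 = np.array([[0.0, 0.0],
               [c * (s**2 - s + 3), (3*s**2 + s - 1) / (2*d)]])
P3 = np.array([[0.0, 0.0],
               [c * (s**2 - 3*s - 1), 0.0]])

P_at_0 = 0.5 * (P0 + P1 + P2 + P3)
eig = np.sort(np.linalg.eigvals(P_at_0).real)
assert np.allclose(eig, [s, 1.0])          # spectrum {1, s}

# P^(2)(0) is lower triangular with diagonal (1, 4s)
P2_at_0 = 0.5 * np.array([[2.0, 0.0],
                          [(2*(s**2 - 3*s - 1) + (-10*s**2 - 8*s + 6)) / d, 8*s]])
eig2 = np.sort(np.linalg.eigvals(P2_at_0).real)
assert np.allclose(eig2, sorted([4*s, 1.0]))   # spectrum {1, 4s}
```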

Since $P^{(2)}(0)$ possesses the spectrum $\{1, 4s\}$, by Lemma 4.3 the refinement mask $P(0)$ has the spectrum $\{1, s\}$. Observe that $\big( 1, \frac{(s-1)^2}{s+2} \big)^T$ is a right eigenvector of $P(0)$ for the eigenvalue 1, so that Theorem 3.2 yields that for $|s| < 2$ the infinite product
\[ \hat\phi(u) := \lim_{n \to \infty} \prod_{l=1}^n P\Big(\frac u{2^l}\Big) \begin{pmatrix} 1 \\ \frac{(s-1)^2}{s+2} \end{pmatrix} \qquad (8.7) \]
converges pointwise for $u \in \mathbb{R}$. By simple computations, we find that $P$ does not satisfy (3.1) with $\varepsilon > 1$, so that convergence of the infinite product cannot be shown for $|s| > 2$. In order to apply Theorem 4.1, we even need $|4s| < 2$ and hence $|s| < 1/2$. As before, Corollary 4.2 will not provide an improvement of the results, since $P^{(2)}$ does not satisfy (4.7) with $\gamma > 1$. We apply Theorem 4.1 and obtain for $k \in \mathbb{N}$ the estimate
\[ \|\hat\phi(u)\| \le C (1+|u|)^{-2+\mu_k} \]
with
\[ \mu_k := \frac1k \log_2 \sup_u \Big\| P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) \Big\|. \]
We show

Lemma 8.2 For a fixed $s$ with $|s| < 1/2$, there is a number $k \in \mathbb{N}$ such that $\mu_k < 1$.
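Before going through the proof, the claim of Lemma 8.2 can be tested numerically: the lower triangular structure of $P^{(2)}$ in (8.6) makes the $k$-fold product easy to evaluate on a grid of $u$-values. A sketch for $s = -0.2$ and $k = 5$ (the grid approximates the supremum from below, but the bounds derived in the proof guarantee that the true $\mu_5$ is also below 1 for this $s$):

```python
import numpy as np

s = -0.2
d = s + 2.0

def a(u):
    z = np.exp(-1j * u)
    return ((s**2 - 3*s - 1) * z**2 + (-10*s**2 - 8*s + 6) * z
            + (s**2 - 3*s - 1)) / (2 * d)

def b(u):
    return 2 * s * (1 + np.exp(-1j * u))

def P2(u):
    return np.array([[1.0, 0.0], [a(u), b(u)]])

def mu_k(k, grid=2000):
    # mu_k = (1/k) log2 sup_u || P2(u/2) ... P2(u/2^k) ||_F over one period
    sup = 0.0
    for u in np.linspace(-np.pi * 2**k, np.pi * 2**k, grid):
        M = np.eye(2, dtype=complex)
        for l in range(1, k + 1):
            M = M @ P2(u / 2**l)
        sup = max(sup, np.linalg.norm(M, 'fro'))
    return np.log2(sup) / k

assert mu_k(5) < 1.0   # continuity criterion of Lemma 8.2 holds for k = 5
```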

Proof: 1. Since $P^{(2)}(u)$ is of the form
\[ P^{(2)}(u) = \begin{pmatrix} 1 & 0 \\ a(u) & b(u) \end{pmatrix} \]
with
\[ a(u) := \frac{(s^2-3s-1) e^{-2iu} + (-10s^2-8s+6) e^{-iu} + (s^2-3s-1)}{2(s+2)}, \qquad b(u) := 2s (1 + e^{-iu}), \]
we obtain
\[ P^{(2)}\Big(\frac u2\Big)\, P^{(2)}\Big(\frac u4\Big) = \begin{pmatrix} 1 & 0 \\ a(u/2) + a(u/4)\, b(u/2) & b(u/2)\, b(u/4) \end{pmatrix}. \]
By induction it follows that
\[ P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) = \begin{pmatrix} 1 & 0 \\ \sum_{l=1}^k a\big(\frac u{2^l}\big) \prod_{n=1}^{l-1} b\big(\frac u{2^n}\big) & \prod_{l=1}^k b\big(\frac u{2^l}\big) \end{pmatrix}. \]

2. Note that the spectral norm of a matrix $V := (v_{ij})_{i,j=1}^n$ can be estimated by its Frobenius norm, i.e.,
\[ \|V\|_2 \le \Big( \sum_{i,j=1}^n |v_{ij}|^2 \Big)^{1/2} =: \|V\|_F. \]
We will show that, for any fixed $s$ with $|s| < 1/2$, there is a $k \in \mathbb{N}$ such that
\[ \frac1k \log_2 \sup_u \Big\| P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) \Big\|_F < 1. \]
First, observe that $b_k := \sup_u \big| \prod_{l=1}^k b(2^{-l} u) \big| \le |4s|^k$. Further, for
\[ a_k := \sup_u \Big| \sum_{l=1}^k a\Big(\frac u{2^l}\Big) \prod_{n=1}^{l-1} b\Big(\frac u{2^n}\Big) \Big| \]
we obtain the estimate
\[ |a_k| \le \sum_{l=1}^k \frac{\big| (s^2-3s-1)(1 + e^{-2iu/2^l}) + (-10s^2-8s+6)\, e^{-iu/2^l} \big|}{2(s+2)}\, |4s|^{l-1} \le \frac{1}{2(s+2)} \big( 2|s^2-3s-1| + |{-10s^2-8s+6}| \big) \sum_{l=1}^k |4s|^{l-1}. \]
Since $|s| < 1/2$, we have $2|s^2-3s-1| < 9/2$ and $|{-10s^2-8s+6}| < 38/5$. Thus,
\[ |a_k| \le \frac{1}{2(s+2)}\, \frac{121}{10} \sum_{l=1}^k |4s|^{l-1} \le \frac{121}{30} \sum_{l=1}^k |4s|^{l-1}. \]
For the Frobenius norm of $P^{(2)}(u/2) \cdots P^{(2)}(u/2^k)$ we find
\[ \sup_u \Big\| P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) \Big\|_F \le \big( 1 + |a_k|^2 + |b_k|^2 \big)^{1/2} \le \Big( 1 + \Big( \frac{121}{30} \sum_{l=1}^k |4s|^{l-1} \Big)^2 + |4s|^{2k} \Big)^{1/2}. \]

3. First, we consider the case that $s$ is fixed with $0 \le |s| < 1/4$. Then
\[ \sup_u \Big\| P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) \Big\|_F \le \Big( 2 + \Big( \frac{121}{30}\, k \Big)^2 \Big)^{1/2}. \]
Now, choosing $k$ such that $k > \frac12 \log_2 \big( 2 + (\frac{121}{30} k)^2 \big)$ (this is satisfied for $k \ge 5$), we obtain that
\[ \frac1k \log_2 \sup_u \Big\| P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) \Big\|_F \le \frac{1}{2k} \log_2 \Big( 2 + \Big( \frac{121}{30} k \Big)^2 \Big) < 1. \]

4. Now, we deal with the case that $s$ is fixed with $1/4 \le |s| < 1/2$. Then $|a_k| \le \frac{121}{30}\, k\, |4s|^{k-1}$, so that
\[ \sup_u \Big\| P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) \Big\|_F \le \Big( 1 + \Big( \frac{121}{30} k \Big)^2 |4s|^{2k-2} + |4s|^{2k} \Big)^{1/2}. \]
Choosing $k$ such that
\[ \log_2 |4s| < 1 - \frac{\log_2 \big( (\frac{121}{30} k)^2 + 4 \big)}{2k-2}, \]
it follows that
\[ (2k-2) \log_2 |4s| + \log_2 \Big( \Big( \frac{121}{30} k \Big)^2 + |4s|^2 \Big) < 2k - 1 \]
and hence
\[ 1 + \Big( \frac{121}{30} k \Big)^2 |4s|^{2k-2} + |4s|^{2k} < 2^{2k}. \]
For the Frobenius norm it follows that
\[ \frac1k \log_2 \sup_u \Big\| P^{(2)}\Big(\frac u2\Big) \cdots P^{(2)}\Big(\frac u{2^k}\Big) \Big\|_F \le \frac{1}{2k} \log_2 \Big( 1 + \Big( \frac{121}{30} k \Big)^2 |4s|^{2k-2} + |4s|^{2k} \Big) < 1. \qquad \Box \]

Since for $|s| < 1/2$ there is a $k \in \mathbb{N}$ such that $\mu_k < 1$, it follows that the elements of the solution $\phi$ of the vector refinement equation (1.1) with the refinement mask $P$ defined in (8.5) are continuous. The uniqueness of the solution is ensured by Theorem 6.1. Further, by the analysis in section 7, we have uniform and $L^2$-convergence of the associated cascade algorithm.

Remarks. 1. The continuity of $\phi_0$ and $\phi_1$ for $|s| < 1/2$ is also proved in DGHM (1994) by means of fractal interpolation. In their paper it is already shown that $\phi_0$, $\phi_1$ are Lipschitz continuous for $|s| < 1/2$, i.e., there exists an $M < \infty$ such that for all $x, y \in [0,1]$ we have $|\phi_\nu(x) - \phi_\nu(y)| \le M |x-y|$ ($\nu = 0, 1$). Further, if $1/2 < |s| < 1$, then $\phi_0$, $\phi_1$ have the Hölder exponent $\alpha = -\log|s| / \log 2$.
2. The solutions $\phi_0$ and $\phi_1$ are symmetric, and they have a very short support. In particular, $\operatorname{supp} \phi_0 = [0,1]$, $\operatorname{supp} \phi_1 = [0,2]$, and we have
\[ \phi_0\Big(\frac12 + x\Big) = \phi_0\Big(\frac12 - x\Big), \qquad \phi_1(1+x) = \phi_1(1-x). \]

The closed linear span $V_0$ of the integer translates of $\phi_0$ and $\phi_1$ provides approximation order 2. Using the results in section 2, this fact is a simple consequence of the factorization (8.6) of the refinement mask $P$. Note that in DGHM (1994) it is proved that the hat function $\Delta(x) := \max\{0, 1-|x|\}$ is contained in $V_0$, which already implies that $V_0$ has approximation order 2.
3. In the case $s = -0.2$, it is shown in DGHM (1994) that the integer translates of $\phi_0$ and $\phi_1$ form an orthogonal basis of $V_0$.

8.3. Scaling functions with controlled smoothness

We consider solutions of the vector refinement equation (1.1) with the refinement mask
\[ P(u) := \frac14 \begin{pmatrix} e^{-iu} + 2e^{-2iu} + e^{-3iu} & (e^{-iu} - 2e^{-3iu} + e^{-5iu})/2 \\[6pt] \frac{1}{32}\, e^{-iu} & e^{-iu} \end{pmatrix} = \frac14\, C_{x^0}(2u)\, C_{x^1}(2u)\, P^{(2)}(u)\, C_{x^1}(u)^{-1}\, C_{x^0}(u)^{-1}, \]
where
\[ C_{x^0}(u) = C_{x^1}(u) := \begin{pmatrix} 1 - e^{-iu} & 0 \\ 0 & 1 \end{pmatrix}, \qquad P^{(2)}(u) := e^{-iu} \begin{pmatrix} 1 & \frac12 \\[4pt] -\frac18\, e^{-iu} \sin^2(u/2) & 1 \end{pmatrix}. \]
Observe that $P^{(2)}(0)$ and $P(0)$ possess the spectra $\{1, 1\}$ and $\{1, 1/4\}$, respectively. Hence, by Theorem 3.2 the infinite product
\[ \hat\phi(u) := \lim_{n \to \infty} \prod_{l=1}^n P\Big(\frac u{2^l}\Big) \begin{pmatrix} 1 \\ 1/96 \end{pmatrix} \]
converges pointwise, since $(1, 1/96)^T$ is a right eigenvector of $P(0)$ for the eigenvalue 1. Further, considering the product
\[ P^{(2)}(u)\, P^{(2)}\Big(\frac u2\Big) = e^{-3iu/2} \begin{pmatrix} 1 + \frac{(1 - e^{-iu/2})^2}{64} & 1 \\[6pt] \frac{(1 - e^{-iu})^2 + (1 - e^{-iu/2})^2}{32} & 1 + \frac{(1 - e^{-iu})^2}{64} \end{pmatrix}, \]
we find for the matrix 1-norm and the matrix $\infty$-norm (defined in (8.4))
\[ \sup_u \Big\| P^{(2)}(u)\, P^{(2)}\Big(\frac u2\Big) \Big\|_1 \le \frac{33}{16}, \qquad \sup_u \Big\| P^{(2)}(u)\, P^{(2)}\Big(\frac u2\Big) \Big\|_\infty \le \frac{33}{16}. \]
Hence, for the spectral norm it follows that
\[ \sup_u \Big\| P^{(2)}(u)\, P^{(2)}\Big(\frac u2\Big) \Big\|_2 \le \Big( \sup_u \Big\| P^{(2)}(u)\, P^{(2)}\Big(\frac u2\Big) \Big\|_1\, \sup_u \Big\| P^{(2)}(u)\, P^{(2)}\Big(\frac u2\Big) \Big\|_\infty \Big)^{1/2} \le \frac{33}{16}. \]
Now, applying Theorem 4.1 with
\[ \mu_2 = \frac12 \log_2 \sup_u \Big\| P^{(2)}(u)\, P^{(2)}\Big(\frac u2\Big) \Big\|_2 \le \frac12 \log_2 \frac{33}{16} \approx 0.52219706, \]
we have that
\[ \|\hat\phi(u)\| \le C (1+|u|)^{-2+\mu_2}, \]
i.e., the elements $\phi_0$, $\phi_1$ of the solution are continuous functions. For the supports of $\phi_0$ and $\phi_1$ we obtain
\[ \operatorname{supp} \phi_0 = \Big[ \frac23, \frac{10}3 \Big], \qquad \operatorname{supp} \phi_1 = \Big[ \frac13, \frac53 \Big]. \]
Furthermore, we have the symmetry relations
\[ \phi_0(2+x) = \phi_0(2-x), \qquad \phi_1(1+x) = \phi_1(1-x). \]
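The norm bounds just derived can be confirmed numerically with the entries of $P^{(2)}$ as reconstructed above: evaluating $P^{(2)}(u)\, P^{(2)}(u/2)$ on a grid over one period, all three norms stay below $33/16$, and the resulting $\mu_2$ stays below $\frac12 \log_2 \frac{33}{16}$:

```python
import numpy as np

def P2(u):
    z = np.exp(-1j * u)
    return z * np.array([[1.0, 0.5],
                         [-z * np.sin(u / 2)**2 / 8.0, 1.0]])

sup1 = supinf = sup2 = 0.0
for u in np.linspace(-2 * np.pi, 2 * np.pi, 4001):   # the product has period 4*pi
    M = P2(u) @ P2(u / 2)
    sup1 = max(sup1, np.abs(M).sum(axis=0).max())      # matrix 1-norm
    supinf = max(supinf, np.abs(M).sum(axis=1).max())  # matrix infinity-norm
    sup2 = max(sup2, np.linalg.norm(M, 2))             # spectral norm

assert sup1 <= 33/16 + 1e-9 and supinf <= 33/16 + 1e-9
mu2 = 0.5 * np.log2(sup2)
assert mu2 <= 0.5 * np.log2(33/16) + 1e-9   # mu_2 below 0.52219706
```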

It can be shown that the integer translates of $\phi_0$ and $\phi_1$ form a Riesz basis of their closed linear span $V_0$. The factorization of the refinement mask then already implies that $V_0$ provides approximation order 2 (cf. section 2). In particular, the equalities (2.7) are satisfied with $y^0 := (1, 0)^T$ and $y^1 := (2, 0)^T$. Actually, approximation order 2 is already provided by the closed linear span of the integer translates of $\phi_0$ alone. We have indeed
\[ P(0) = \begin{pmatrix} 1 & 0 \\ \frac{1}{128} & \frac14 \end{pmatrix}, \qquad P(\pi) = \begin{pmatrix} 0 & 0 \\ -\frac{1}{128} & -\frac14 \end{pmatrix}, \]
\[ (DP)(0) = \begin{pmatrix} -2i & 0 \\ -\frac{i}{128} & -\frac i4 \end{pmatrix}, \qquad (DP)(\pi) = \begin{pmatrix} 0 & 0 \\ \frac{i}{128} & \frac i4 \end{pmatrix}, \]
so that for odd $l$,
\[ \hat\phi(2\pi l) = P(\pi)\, \hat\phi(\pi l), \]
and hence $\hat\phi_0(2\pi l) = 0$. For even $l$ we have $\hat\phi(2\pi l) = P(0)\, \hat\phi(\pi l)$, i.e., $\hat\phi_0(2\pi l) = \hat\phi_0(\pi l)$. Thus, $\hat\phi_0(2\pi l) = 0$ for $l \in \mathbb{Z} \setminus \{0\}$. Analogously, for the derivative of $\hat\phi$ it follows, for odd $l$,
\[ (D\hat\phi)(2\pi l) = \frac12 \big[ (DP)(\pi)\, \hat\phi(\pi l) + P(\pi)\, (D\hat\phi)(\pi l) \big], \]
so that $(D\hat\phi_0)(2\pi l) = 0$. For even $l$ we have
\[ (D\hat\phi)(2\pi l) = \frac12 \big[ (DP)(0)\, \hat\phi(\pi l) + P(0)\, (D\hat\phi)(\pi l) \big], \]

Figure 1: Graph of $\phi_0$

Figure 2: Graph of $\phi_1$

i.e., $(D\hat\phi_0)(2\pi l) = \frac12 (D\hat\phi_0)(\pi l)$. Thus, $\hat\phi_0$ satisfies the Strang–Fix conditions of order 2:
\[ \hat\phi_0(2\pi l) = 0 \quad (l \in \mathbb{Z} \setminus \{0\}), \qquad \hat\phi_0(0) \neq 0, \qquad (D\hat\phi_0)(2\pi l) = 0 \quad (l \in \mathbb{Z} \setminus \{0\}). \]
Finally, we note that, in this example, there is no function $f$ in the space $V_0$ which is refinable by itself. In most other examples considered in the literature, even if the elements of $\phi$ are not refinable by themselves, there exist refinable functions in the span of their integer translates: in the spline example (see section 8.1) the space $V_0$, spanned by the B-splines of order $m$ with $r$-fold knots, contains the cardinal B-splines $N_{m-k}$ ($k = 0, \dots, r-1$); in the case of the DGHM scaling functions, $V_0$ contains the hat function.

Acknowledgements

G. Plonka was supported by the Deutsche Forschungsgemeinschaft. I. Daubechies is partially supported by NSF grant DMS-9401785 and AFOSR grant F49620-95-10290.

References

B. K. Alpert (1993), A class of bases in L2 for the sparse representation of integral operators, SIAM J. Math. Anal. 24, 246-262.

B. K. Alpert and V. Rokhlin (1991), A fast algorithm for the evaluation of Legendre expansions, SIAM J. Sci. Stat. Comput. 12, 158-179.

C. de Boor (1976), Splines as linear combinations of B-splines. A survey, in Approximation Theory II, G. G. Lorentz, C. K. Chui and L. L. Schumaker (eds.), Academic Press, New York, 1-47.

A. Cavaretta, W. Dahmen and C. A. Micchelli (1991), Stationary Subdivision, Memoirs Amer. Math. Soc. 453, 1-186.

A. Cohen and J. P. Conze (1992), Régularité des bases d'ondelettes et mesures ergodiques, Rev. Mat. Iberoamericana 8, 351-366.

A. Cohen and I. Daubechies (1994), A new technique to estimate the regularity of refinable functions, submitted to Rev. Mat. Iberoamericana.

J. P. Conze and A. Raugi (1990), Fonctions harmoniques pour un opérateur de transition et applications, Bull. Soc. Math. France 118, 273-310.

I. Daubechies (1988), Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math. 41, 909-996.

I. Daubechies (1992), "Ten Lectures on Wavelets", SIAM, Philadelphia.

G. Donovan, J. S. Geronimo, D. P. Hardin and P. R. Massopust (1994), Construction of orthogonal wavelets using fractal interpolation functions, preprint.

N. Dyn (1992), Subdivision schemes in computer-aided geometric design, in Advances in Numerical Analysis II: Wavelets, Subdivision Algorithms and Radial Functions, W. A. Light (ed.), Oxford University Press, 36-104.

T. Eirola (1992), Sobolev characterization of solutions of dilation equations, SIAM J. Math. Anal. 23, 1015-1030.

T. N. T. Goodman and S. L. Lee (1994), Wavelets of multiplicity r, Trans. Amer. Math. Soc. 342, 307-324.

T. N. T. Goodman, S. L. Lee and W. S. Tang (1993), Wavelets in wandering subspaces, Trans. Amer. Math. Soc. 338, 639-654.

G. Gripenberg (1993), Unconditional bases of wavelets for Sobolev spaces, SIAM J. Math. Anal. 24, 1030-1042.

C. Heil and D. Colella (1994), Matrix refinement equations: Existence and uniqueness, preprint.

C. Heil, G. Strang and V. Strela, Approximation by translates of refinable functions, Numer. Math., to appear.

P. Heller, V. Strela, G. Strang, P. Topiwala, C. Heil and L. Hills (1995), Multiwavelet filter banks for data compression, Proc. IEEE ISCAS, Seattle, WA.

L. Hervé (1994), Multi-resolution analysis of multiplicity d. Application to dyadic interpolation, Appl. Comput. Harmon. Anal. 1, 299-315.

P. Lancaster and M. Tismenetsky (1985), "The Theory of Matrices", Academic Press, San Diego.

C. A. Micchelli and A. Pinkus (1991), Descartes systems from corner cutting, Constr. Approx. 7, 161-194.

G. Plonka (1995.a), Approximation order provided by refinable function vectors, Universität Rostock, preprint.

G. Plonka (1995.b), Factorization of refinement masks for function vectors, Universität Rostock, preprint.

G. Strang and V. Strela (1994), Orthogonal multiwavelets with vanishing moments, SPIE Proceedings, Orlando; Optical Engineering, to appear.

G. Strang and V. Strela (1994), Short wavelets and matrix dilation equations, Massachusetts Institute of Technology, Department of Mathematics, preprint.

M. Vetterli and G. Strang (1994), Time-varying filter banks and multiwavelets, preprint, Electrical Engineering Department, Berkeley.

L. Villemoes (1994), Wavelet analysis of refinement equations, SIAM J. Math. Anal. 25, 1433-1460.

Albert Cohen, Laboratoire d'Analyse Numérique, Université Pierre et Marie Curie, 4 Place Jussieu, 75005 Paris, France ([email protected])

Ingrid Daubechies, Department of Mathematics, Princeton University, Princeton, NJ 08544, USA ([email protected])

Gerlind Plonka, Fachbereich Mathematik, Universität Rostock, D-18051 Rostock, Germany ([email protected])
