CONSTRUCTIVE APPROXIMATION
Constr. Approx. (1995)
© 1994 Springer-Verlag New York Inc.

Approximation Order Provided by Refinable Function Vectors

Gerlind Plonka

Dedicated to Prof. L. Berg on the occasion of his 65th birthday
Abstract. In this paper, we consider $L_p$-approximation by integer translates of a finite set of functions $\phi_\nu$ ($\nu = 0,\dots,r-1$) which are not necessarily compactly supported, but have a suitable decay rate. Assuming that the function vector $\Phi = (\phi_\nu)_{\nu=0}^{r-1}$ is refinable, necessary and sufficient conditions for the refinement mask are derived. In particular, if algebraic polynomials can be exactly reproduced by integer translates of $\phi_\nu$, then a factorization of the refinement mask of $\Phi$ can be given. This result is a natural generalization of the result for a single function $\phi$, where the refinement mask of $\phi$ contains the factor $\big((1+e^{-iu})/2\big)^m$ if approximation order $m$ is achieved.
1. Introduction

Recently, a lot of papers have studied the so-called multiresolution analysis of multiplicity $r$ ($r \in \mathbb{N}$, $r \ge 1$), generated by dilates and translates of a finite set of functions $\phi_\nu$ ($\nu = 0,\dots,r-1$), and the construction of corresponding "multiwavelets" (cf. e.g. Donovan, Geronimo, Hardin and Massopust [10]; Goodman, Lee and Tang [12, 13]; Hervé [16]; Plonka [21]). In Alpert [1], multiwavelets are used for sparse representation of integral operators. Further applications for solving differential equations by finite element methods seem to be possible, since scaling functions and multiwavelets with very small support can be constructed (cf. e.g. Plonka [21]). For finite elements, short support is crucial. In order to obtain multiwavelets with vanishing moments, the problem remains how to choose the scaling functions $\phi_\nu$, such that algebraic polynomials of degree $< m$ ($m \in \mathbb{N}$) can be exactly reproduced by a linear combination of integer translates of $\phi_\nu$ ($\nu = 0,\dots,r-1$), or, such

Date received: ; Date revised: ; Communicated by .
AMS classification: 41A25, 41A30, 42A99, 46C99
Key words and phrases: Approximation order, Refinement mask, Strang–Fix conditions
that $\Phi := (\phi_\nu)_{\nu=0}^{r-1}$ provides controlled approximation order $m$. First ideas to solve this problem can be found in Donovan, Geronimo, Hardin and Massopust (DGHM) [10] and Strang and Strela [25, 26], where examples with two scaling functions $\phi_0, \phi_1$ with very small support are treated, such that the translates $\phi_0(\cdot - l),\ \phi_1(\cdot - l)$ ($l \in \mathbb{Z}$) are orthogonal and such that $\Phi$ provides approximation order $m = 2$. In this paper, we consider a refinable vector $\Phi$ of $r$ functions with suitable decay providing controlled approximation order $m$. We will study the consequences for the refinement mask of $\Phi$ in some detail.

Let us introduce some notations. Consider the Hilbert space $L_2 = L_2(\mathbb{R})$ of all square integrable functions on $\mathbb{R}$. The Fourier transform of $f \in L_2(\mathbb{R})$ is defined by $\hat f(u) := \int_{-\infty}^{\infty} f(x)\, e^{-iux}\, dx$. Let $BV(\mathbb{R})$ be the set of all functions which are of bounded variation over $\mathbb{R}$ and normalized by
$$\lim_{|x|\to\infty} f(x) = 0, \qquad f(x) = \frac12 \lim_{h\to 0}\,\big(f(x+h) + f(x-h)\big) \quad (-\infty < x < \infty).$$
If $f \in L_2(\mathbb{R}) \cap BV(\mathbb{R})$, then the Poisson summation formula
$$\sum_{l\in\mathbb{Z}} f(l)\, e^{-iul} = \sum_{j\in\mathbb{Z}} \hat f(u + 2\pi j) \tag{1.1}$$
is satisfied (cf. Butzer and Nessel [4]). By $C(\mathbb{R})$, we denote the set of continuous functions on $\mathbb{R}$. For a measurable function $f$ on $\mathbb{R}$ and $m \in \mathbb{N}$ let
$$\|f\|_p := \Big(\int_{-\infty}^{\infty} |f(x)|^p\, dx\Big)^{1/p}, \qquad |f|_{m,p} := \|D^m f\|_p, \qquad \|f\|_{m,p} := \sum_{k=0}^{m} \|D^k f\|_p.$$
Here and in the following, $D$ denotes the differential operator $D := \mathrm{d}/\mathrm{d}x$. Let $W_p^m(\mathbb{R})$ be the usual Sobolev space with the norm $\|\cdot\|_{m,p}$. The $l_p$-norm of a sequence $c := \{c_l\}_{l\in\mathbb{Z}}$ is defined by $\|c\|_{l_p} := (\sum_{l\in\mathbb{Z}} |c_l|^p)^{1/p}$. For $m \in \mathbb{N}$, let $E_m(\mathbb{R})$ be the space of all functions $f \in C(\mathbb{R})$ for which
$$\sup_{x\in\mathbb{R}}\,\{|f(x)|\,(1+|x|)^{1+m+\epsilon}\} < \infty \qquad (\epsilon > 0).$$
Let $l^2_{-m} := \{c := (c_k) : \sum_{k=-\infty}^{\infty} (1+|k|^2)^{-m}\, |c_k|^2 < \infty\}$ be a weighted sequence space with the corresponding norm
$$\|c\|_{l^2_{-m}} := \Big(\sum_{l=-\infty}^{\infty} (1+|l|^2)^{-m}\, |c_l|^2\Big)^{1/2}.$$
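The Poisson summation formula (1.1) can be checked numerically in the normalization used here; the following is a small sketch (not from the paper) for the Gaussian $f(x) = e^{-x^2}$, for which $\hat f(u) = \sqrt{\pi}\, e^{-u^2/4}$.

```python
import numpy as np

# Sketch (not from the paper): numerical check of the Poisson summation
# formula (1.1) for f(x) = exp(-x^2), whose Fourier transform in the
# normalization of the text is f^(u) = sqrt(pi) * exp(-u^2 / 4).
u = 0.9
l = np.arange(-40, 41, dtype=float)
lhs = np.sum(np.exp(-l ** 2) * np.exp(-1j * u * l))   # sum_l f(l) e^{-iul}
rhs = np.sum(np.sqrt(np.pi) * np.exp(-(u + 2 * np.pi * l) ** 2 / 4))
gap = abs(lhs - rhs)    # both series converge extremely fast
```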
Considering the functions $\phi_\nu \in E_m(\mathbb{R})$ ($\nu = 0,\dots,r-1$), we call the set $B(\Phi) := \{\phi_\nu(\cdot - l) : l \in \mathbb{Z};\ \nu = 0,\dots,r-1\}$ $L^2_{-m}$-stable if there exist constants $0 < A \le B < \infty$ with
$$A \sum_{\nu=0}^{r-1} \|c^\nu\|^2_{l^2_{-m}} \;\le\; \Big\| \sum_{\nu=0}^{r-1}\sum_{l\in\mathbb{Z}} c_{\nu,l}\, \phi_\nu(\cdot - l)\Big\|^2_{L^2_{-m}} \;\le\; B \sum_{\nu=0}^{r-1} \|c^\nu\|^2_{l^2_{-m}}$$
for any sequences $c^\nu = \{c_{\nu,l}\}_{l\in\mathbb{Z}} \in l^2_{-m}$ ($\nu = 0,\dots,r-1$). Here $L^2_{-m}$ denotes the weighted Hilbert space $L^2_{-m} = \{f : \|f\|_{L^2_{-m}} := \|(1+|\cdot|^2)^{-m/2}\, f\|_2 < \infty\}$. Note that, if the functions $\phi_\nu$ are compactly supported, then the (algebraic) linear independence of the integer translates of $\phi_\nu$ ($\nu = 0,\dots,r-1$) yields the $L^2_{-m}$-stability of $B(\Phi)$. For $m = 0$, the $L^2_{-m}$-stability coincides with the well-known Riesz basis property in $L_2(\mathbb{R})$. For $f \in E_m(\mathbb{R})$, the Fourier transform $\hat f$ is contained in the Sobolev space $W_2^m(\mathbb{R})$.

We want to give an example for a function $\phi_0$ with infinite support yielding an $L^2_{-m}$-stable set $B(\phi_0)$ for all $m \in \mathbb{N}$: Let $M_n$ be the cardinal symmetric B-spline of order $n$ defined by $M_1(x) := (\chi_{[-1/2,1/2)} + \chi_{(-1/2,1/2]})/2$ and for $n > 1$ by convolution $M_n(x) := (M_{n-1} * M_1)(x)$. Further, let
$$\sigma_n(u) := \sum_{l=-\infty}^{\infty} M_n(l)\, e^{iul}, \qquad \frac{1}{\sigma_n(u)} = \sum_{l=-\infty}^{\infty} \omega_l^n\, e^{iul}.$$
Observe that $\sigma_n(u)$ is a positive, real cosine polynomial for all real $u$. Then, the spline function
$$L_n(x) := \sum_{l=-\infty}^{\infty} \omega_l^n\, M_n(x - l)$$
is the fundamental function of cardinal spline interpolation satisfying $L_n(k) = \delta_{0,k}$ ($k \in \mathbb{Z}$). Observe that $L_n$ has for $n > 2$ a noncompact support but exponential decay for $|x| \to \infty$. The translates $L_n(\cdot - l)$ ($l \in \mathbb{Z}$) form an $L^2_{-m}$-stable set for $m \in \mathbb{N}_0$. This immediately follows from the results in Schoenberg [23], since the eigensplines $s$ satisfying $s(k) = 0$ ($k \in \mathbb{Z}$) are not contained in $L^2_{-m}$.

For $\phi_\nu \in E_m(\mathbb{R})$ ($\nu = 0,\dots,r-1$), we say that $\Phi$ provides the ($L_p$-)approximation order $m$ ($1 \le p \le \infty$) if for each $f \in W_p^m(\mathbb{R})$ there are sequences $c^h_\nu = \{c^h_{\nu,l}\}_{l\in\mathbb{Z}}$ ($\nu = 0,\dots,r-1$; $h > 0$) such that for a constant $c$ independent of $h$ we have:

(1) $\displaystyle \Big\|f - h^{-1/p} \sum_{\nu=0}^{r-1}\sum_{l\in\mathbb{Z}} c^h_{\nu,l}\, \phi_\nu(\cdot/h - l)\Big\|_p \le c\, h^m\, |f|_{m,p}$.
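A numerical sketch (not from the paper) of the fundamental-spline example above: the coefficients $\omega_l^n$ are the Fourier coefficients of $1/\sigma_n$, computed by FFT for $n = 3$ (the symmetric quadratic B-spline takes the values $M_3(0) = 3/4$, $M_3(\pm 1) = 1/8$ at the integers), and the interpolation property $L_3(k) = \delta_{0,k}$ is then verified.

```python
import numpy as np

# Sketch (not from the paper): the coefficients omega_l^n are the Fourier
# coefficients of 1/sigma_n(u).  For n = 3, the symmetric quadratic B-spline
# has M_3(0) = 3/4 and M_3(+-1) = 1/8, so sigma_3(u) = 3/4 + (1/4) cos u.
N = 256
u = 2 * np.pi * np.arange(N) / N
omega = np.real(np.fft.fft(1.0 / (0.75 + 0.25 * np.cos(u)))) / N

M3 = {0: 0.75, 1: 0.125, -1: 0.125}        # M_3 at the integers

def L3(k):                                 # L_3(k) = sum_l omega_l M_3(k - l)
    return sum(omega[(k - d) % N] * M3[d] for d in M3)

interp = [L3(k) for k in (0, 1, -1, 7)]    # should be delta_{0,k}
```

Here `omega` holds the (aliased) coefficients $\omega_l^3$, indexed modulo $N$; since $\sigma_3$ has no zeros, they decay exponentially and the aliasing error is negligible.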
Recently, closed shift-invariant subspaces of $L_2(\mathbb{R})$ providing a specified approximation order were characterized in de Boor, DeVore and Ron [2, 3]. In particular, in [2] it was shown that the approximation order of a finitely generated shift-invariant subspace $S(\Phi)$ of $L_2(\mathbb{R})$, generated by the integer translates of $\phi_\nu$ ($\nu = 0,\dots,r-1$), can already be realized by a specifiable principal subspace $S(f) \subseteq S(\Phi)$, where $f$ can be represented as a finite linear combination
$$f = \sum_{\nu=0}^{r-1}\sum_{l\in\mathbb{Z}} a_{\nu,l}\, \phi_\nu(\cdot - l) \qquad (a_{\nu,l} \in \mathbb{R}).$$
As known, the approximation order provided by a single function $\phi$ can be described in terms of the Fourier transform of $\phi$ by the so-called Strang–Fix conditions (cf. de Boor, DeVore and Ron [2, 3]; Dyn, Jackson, Levin and Ron [11]; Halton and Light [14, 19]; Jia and Lei [17]; Schoenberg [22]; Strang and Fix [24]). Following the notations in Jia and Lei [17], $\Phi$ provides controlled approximation order $m$ if (1) holds and, furthermore, the following conditions are satisfied:

(2) We have
$$\|c^h_\nu\|_{l_p} \le c\, \|f\|_p \qquad (\nu = 0,\dots,r-1),$$
where $c$ is independent of $h$.

(3) There is a constant $\gamma$ independent of $h$ such that for $l \in \mathbb{Z}$
$$\mathrm{dist}\,(lh,\ \mathrm{supp}\, f) > \gamma \;\Longrightarrow\; c^h_{\nu,l} = 0 \qquad (\nu = 0,\dots,r-1).$$
In Jia and Lei [17], the strong connection of controlled approximation order provided by $\phi$ and the Strang–Fix conditions for $\phi$ was shown. Note that, instead of using the definition of Jia and Lei [17], we can also take the definition of local approximation order by Halton and Light [14]. For our considerations the equivalence to the Strang–Fix conditions is important.

A function $\phi \in L_2(\mathbb{R})$ is called refinable if $\phi$ satisfies a refinement equation of the form
$$\phi = \sum_{l\in\mathbb{Z}} p_l\, \phi(2\cdot - l) \qquad (p_l \in \mathbb{R}),$$
or equivalently, if $\phi$ satisfies the Fourier transformed refinement equation $\hat\phi = P(\cdot/2)\,\hat\phi(\cdot/2)$ with the refinement mask
$$P = P(u) := \frac12 \sum_{l\in\mathbb{Z}} p_l\, e^{-ilu}. \tag{1.2}$$
Note that $P$ is a $2\pi$-periodic function. The following result holds for a single function $\phi$:
Theorem 1.1. Let $\phi \in E_m(\mathbb{R}) \cap BV(\mathbb{R})$ ($m \in \mathbb{N}$) be a refinable function and let $B(\phi)$ be $L^2_{-m}$-stable. Then $\phi$ provides controlled approximation order $m$ if and only if the refinement mask $P$ satisfies the equalities
$$D^\nu P(\pi) = 0 \qquad (\nu = 0,\dots,m-1), \tag{1.3}$$
$$P(0) = 1. \tag{1.4}$$
The assertions (1.3), (1.4) are equivalent to the following condition: The refinement mask $P$ can be factorized in the form
$$P(u) = \Big(\frac{1 + e^{-iu}}{2}\Big)^m S(u) \tag{1.5}$$
with $S(0) = 1$, where $S$ is a $2\pi$-periodic $m$-times continuously differentiable function.
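A quick numerical illustration of conditions (1.3)-(1.4) (a sketch, not from the paper): the mask of the cardinal B-spline of order $m$ is $P(u) = ((1+e^{-iu})/2)^m$, i.e. $p_l = 2^{1-m}\binom{m}{l}$, and the conditions can be evaluated directly from these coefficients.

```python
import numpy as np
from math import comb

# Sketch (not from the paper): the mask of the cardinal B-spline of order m
# is P(u) = ((1 + e^{-iu})/2)^m, i.e. p_l = 2^{1-m} * binom(m, l) in (1.2).
m = 4
p = np.array([comb(m, l) for l in range(m + 1)], dtype=float) * 2.0 ** (1 - m)

def DP(nu, u):
    # nu-th derivative of P(u) = (1/2) sum_l p_l e^{-ilu}
    l = np.arange(len(p))
    return 0.5 * np.sum(p * (-1j * l) ** nu * np.exp(-1j * l * u))

checks = [DP(0, 0.0)] + [DP(nu, np.pi) for nu in range(m)]
# checks[0] should equal 1 (condition (1.4)); the rest should vanish ((1.3))
```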
Note that, under the conditions of Theorem 1.1, controlled approximation order $m$ is ensured if and only if algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi$, i.e., if $\phi$ provides accuracy $m$. For corresponding results see also Cavaretta, Dahmen and Micchelli [5]. There is a close connection between accuracy and regularity of $\phi$. For $|S(u)| \le 1$, the condition (1.5) is equivalent to the assertion that $\phi \in C^{m-1}(\mathbb{R})$ (cf. Daubechies and Lagarias [7, 9]).

We want to generalize the result of Theorem 1.1 to a finite set of functions $\phi_\nu \in L_2(\mathbb{R})$ ($\nu = 0,\dots,r-1$). The function vector $\Phi$ with elements in $L_2(\mathbb{R})$ is refinable if $\Phi$ satisfies a refinement equation of the form
$$\Phi = \sum_{l\in\mathbb{Z}} P_l\, \Phi(2\cdot - l) \qquad (P_l \in \mathbb{R}^{r\times r}). \tag{1.6}$$
By Fourier transform we obtain
$$\hat\Phi = P(\cdot/2)\, \hat\Phi(\cdot/2) \tag{1.7}$$
with $\hat\Phi := (\hat\phi_\nu)_{\nu=0}^{r-1}$ and with the refinement mask
$$P = P(u) := \frac12 \sum_{l\in\mathbb{Z}} P_l\, e^{-ilu}. \tag{1.8}$$
Note that $P$ is now an $(r\times r)$-matrix of $2\pi$-periodic functions. The purpose of this paper is to find necessary and sufficient conditions for the refinement mask $P$ yielding approximation order $m$, analogously as for a single function $\phi$ in Theorem 1.1. In particular, we show that the approximation order $m$ provided by $\Phi$ yields a certain factorization of the refinement mask $P$ of $\Phi$. This factorization can be considered as a natural generalization of the factorization (1.5) of the refinement mask of a single function. Hence, it can be conjectured that the new factorization property of $P$ will play a similar role for further investigations and for new constructions of refinable function vectors and multiwavelets as (1.5) does for the single scaling function. The main result in this paper is the following:
Theorem 1.2. Let $\Phi := (\phi_\nu)_{\nu=0}^{r-1}$ be a refinable vector of functions $\phi_\nu \in E_m(\mathbb{R}) \cap BV(\mathbb{R})$ ($m \in \mathbb{N}$). Further, let $B(\Phi)$ be $L^2_{-m}$-stable. Then the following assertions are equivalent:
(a) The function vector $\Phi$ provides controlled approximation order $m$.
(b) Algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi_\nu$.
(c) The refinement mask $P$ of $\Phi$ satisfies the following conditions: The elements of $P$ are $m$-times continuously differentiable functions in $L^2_{2\pi}(\mathbb{R})$, and there are vectors $y_0^k \in \mathbb{R}^r$, $y_0^0 \ne 0$ ($k = 0,\dots,m-1$) such that for $n = 0,\dots,m-1$ we have
$$\sum_{k=0}^{n} \binom{n}{k}\, (y_0^k)^T\, (2i)^{k-n}\, (D^{n-k}P)(0) = 2^{-n}\, (y_0^n)^T,$$
$$\sum_{k=0}^{n} \binom{n}{k}\, (y_0^k)^T\, (2i)^{k-n}\, (D^{n-k}P)(\pi) = 0^T,$$
where $0$ denotes the zero vector.
Furthermore, if $\Phi$ provides controlled approximation order $m$, then there are nonzero vectors $x^0,\dots,x^{m-1}$ such that $P$ factorizes,
$$P(u) = \frac{1}{2^m}\, C_{m-1}(2u) \cdots C_0(2u)\, S(u)\, C_0(u)^{-1} \cdots C_{m-1}(u)^{-1},$$
where the $(r\times r)$-matrices $C_k$ are defined by $x^k$ ($k = 0,\dots,m-1$) as in (4.1)–(4.2) and $S(u)$ is an $(r\times r)$-matrix with $m$-times continuously differentiable entries in $L^2_{2\pi}(\mathbb{R})$. In particular, for the determinant of $P$ it follows that
$$\det P(u) = \Big(\frac{1 + e^{-iu}}{2^r}\Big)^m \det S(u).$$
Note that the assertions of Theorem 1.1 follow from Theorem 1.2 in the special case $r = 1$.

The paper is organized as follows. In Section 2 we show that, under mild conditions on the scaling functions $\phi_\nu$ ($\nu = 0,\dots,r-1$), the function vector $\Phi$ provides controlled approximation order $m$ ($m \in \mathbb{N}$) if and only if algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi_\nu$. In Section 3, we introduce the doubly infinite matrix $L$ containing the coefficient matrices $P_l$ which occur in the refinement mask (1.8). Assuming that algebraic polynomials of degree $< m$ can be reproduced, we derive some consequences for the eigenvalues and eigenvectors of $L$. Similar results can also be found in Strang and Strela [25]. Further, we give necessary and sufficient conditions for the refinement mask $P$ of $\Phi$ yielding controlled approximation order $m$ (cf. Theorem 3.2). These conditions use values of the refinement mask and its derivatives at the points $u = 0$ and $u = \pi$ only. Finally, in Section 4, we show that, assuming controlled approximation order $m$,
the refinement mask $P$ can be factorized. As already pointed out in Theorem 1.2, it follows that the determinant of $P(u)$ contains the factor $(1 + e^{-iu})^m$.

While working on this paper, the author obtained a new preprint of Heil, Strang and Strela [15] which contains the result of Theorem 3.2, but with another proof.
2. Approximation order and accuracy

In this section, we shall investigate the connections between the approximation order provided by $\Phi$ and the reproduction of algebraic polynomials by integer translates of $\phi_\nu$ ($\nu = 0,\dots,r-1$). Let $r \in \mathbb{N}$ and $m \in \mathbb{N}$ be fixed. The following assumptions for the scaling functions $\phi_\nu$ ($\nu = 0,\dots,r-1$) will often be needed in our further considerations:
(i) $\phi_\nu \in E_m(\mathbb{R})$.
(ii) $\phi_\nu \in BV(\mathbb{R})$.
(iii) The set $B(\Phi)$ is $L^2_{-m}$-stable.
The assumption (i) ensures that the Fourier transforms $\hat\phi_\nu$ are $m$-times continuously differentiable. If the assumption (ii) is also satisfied, then the Poisson summation formula (1.1) holds for $\phi_\nu$. We observe the following

Lemma 2.1. Let (i) and (iii) be satisfied for $\phi_\nu$ ($\nu = 0,\dots,r-1$). Assume that algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi_\nu$, i.e., there are vectors $y_l^n \in \mathbb{R}^r$ ($l \in \mathbb{Z}$; $n = 0,\dots,m-1$) such that the series $\sum_{l\in\mathbb{Z}} (y_l^n)^T\, \Phi(\cdot - l)$ are absolutely and uniformly convergent on any compact interval of $\mathbb{R}$ and
$$\sum_{l\in\mathbb{Z}} (y_l^n)^T\, \Phi(x - l) = x^n \qquad (x \in \mathbb{R};\ n = 0,\dots,m-1). \tag{2.1}$$
Then the vectors $y_l^n$ can be written in the form
$$y_l^n = \sum_{k=0}^{n} \binom{n}{k}\, l^{\,n-k}\, y_0^k. \tag{2.2}$$
Proof. First, observe that the sequences $\{y^n_{\nu,l}\}_{l\in\mathbb{Z}}$ forming the vectors $y_l^n := (y^n_{\nu,l})_{\nu=0}^{r-1}$ are contained in $l^2_{-m}$. The assertion (2.2) will be proved by induction. For $n = 0$, there are vectors $y_l^0$ ($l \in \mathbb{Z}$) with
$$\sum_{l\in\mathbb{Z}} (y_l^0)^T\, \Phi(x - l) = 1.$$
Replacing $x$ by $x + 1$, we find
$$\sum_{l\in\mathbb{Z}} (y_l^0)^T\, \Phi(x + 1 - l) = \sum_{l\in\mathbb{Z}} (y_{l+1}^0)^T\, \Phi(x - l) = 1.$$
Thus, by (iii), $y_l^0 = y_{l+1}^0$ ($l \in \mathbb{Z}$), and the assertion follows. Let now
$$\sum_{l\in\mathbb{Z}} (y_l^d)^T\, \Phi(x - l) = x^d$$
with $y_l^d$ of the form (2.2) be true for $d = 0,\dots,n < m-1$. We show that $y_l^{n+1} \in \mathbb{R}^r$ ($l \in \mathbb{Z}$) satisfying
$$\sum_{l\in\mathbb{Z}} (y_l^{n+1})^T\, \Phi(x - l) = x^{n+1} \qquad (x \in \mathbb{R}) \tag{2.3}$$
is of the form
$$y_l^{n+1} = \sum_{k=0}^{n+1} \binom{n+1}{k}\, l^{\,n+1-k}\, y_0^k.$$
Replacing $x$ by $x + 1$ in (2.3) we obtain
$$\sum_{l\in\mathbb{Z}} (y_{l+1}^{n+1})^T\, \Phi(x - l) = (x+1)^{n+1} = \sum_{s=0}^{n+1} \binom{n+1}{s}\, x^s = \sum_{s=0}^{n+1} \binom{n+1}{s} \sum_{l\in\mathbb{Z}} (y_l^s)^T\, \Phi(x - l).$$
Using the induction hypothesis and the $L^2_{-m}$-stability of the integer translates of $\phi_\nu$ ($\nu = 0,\dots,r-1$), this yields
$$y_{l+1}^{n+1} = y_l^{n+1} + \sum_{s=0}^{n} \binom{n+1}{s}\, y_l^s = y_l^{n+1} + \sum_{s=0}^{n} \binom{n+1}{s} \sum_{k=0}^{s} \binom{s}{k}\, l^{\,s-k}\, y_0^k.$$
Applying this recursion repeatedly, starting from $y_0^{n+1}$, we obtain
$$\begin{aligned} y_{l+1}^{n+1} &= y_0^{n+1} + \sum_{d=0}^{l} \sum_{s=0}^{n} \binom{n+1}{s} \sum_{k=0}^{s} \binom{s}{k}\, d^{\,s-k}\, y_0^k \\ &= y_0^{n+1} + \sum_{k=0}^{n} \binom{n+1}{k} \sum_{d=0}^{l} \sum_{s=0}^{n-k} \binom{n+1-k}{s}\, d^{\,s}\, y_0^k \\ &= y_0^{n+1} + \sum_{k=0}^{n} \binom{n+1}{k} \sum_{d=0}^{l} \big( (d+1)^{n+1-k} - d^{\,n+1-k} \big)\, y_0^k \\ &= y_0^{n+1} + \sum_{k=0}^{n} \binom{n+1}{k}\, (l+1)^{n+1-k}\, y_0^k \;=\; \sum_{k=0}^{n+1} \binom{n+1}{k}\, (l+1)^{n+1-k}\, y_0^k, \end{aligned}$$
where we have used $\binom{n+1}{s}\binom{s}{k} = \binom{n+1}{k}\binom{n+1-k}{s-k}$ and $\sum_{s=0}^{n-k}\binom{n+1-k}{s}\, d^{\,s} = (d+1)^{n+1-k} - d^{\,n+1-k}$. Thus, the assertion follows. □

Now we can show:
Theorem 2.2. Assume that the functions $\phi_\nu$ ($\nu = 0,\dots,r-1$) satisfy the assumptions (i)–(iii). Then the following conditions are equivalent:
(a) The function vector $\Phi$ provides controlled approximation order $m$ ($m \in \mathbb{N}$).
(b) Algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi_\nu$.
(c) The function vector $\Phi$ satisfies the Strang–Fix conditions of order $m$, i.e., there is a finitely supported sequence of vectors $\{a_l\}_{l\in\mathbb{Z}}$ such that
$$f := \sum_{l\in\mathbb{Z}} a_l^T\, \Phi(\cdot - l)$$
satisfies
$$\hat f(0) \ne 0, \qquad D^n \hat f(2\pi l) = 0 \qquad (l \in \mathbb{Z}\setminus\{0\};\ n = 0,\dots,m-1).$$
Proof. The equivalence of (a) and (c) is already shown in Jia and Lei [17], Theorem 1.1.
1. Let $f$ be a finite linear combination of integer translates of $\phi_\nu$ satisfying the Strang–Fix conditions of order $m$. Then by Corollary 2.3 in Jia and Lei [17], algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $f$, i.e., (b) follows from (c).
2. By (i) and (ii), the Poisson summation formula can be applied to $\phi_\nu$, and we have
$$\sum_{l\in\mathbb{Z}} \Phi(x - l)\, e^{iul} = \sum_{j\in\mathbb{Z}} [\Phi(\cdot + x)]^{\wedge}(u + 2\pi j) = e^{iux} \sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, \hat\Phi(u + 2\pi j).$$
Repeated differentiation of this equation with respect to $u$ yields for $\mu = 0,\dots,m-1$:
$$\sum_{l\in\mathbb{Z}} \Phi(x - l)\, (il)^{\mu}\, e^{iul} = \sum_{k=0}^{\mu} \binom{\mu}{k}\, (ix)^k\, e^{iux} \sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, (D^{\mu-k}\hat\Phi)(u + 2\pi j). \tag{2.4}$$
By (i), the series in (2.4) are absolutely and uniformly convergent for $x$ on any compact interval of $\mathbb{R}$.
3. We show that (c) follows from (b): Assume that algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi_\nu$, i.e., there are vectors $y_l^n$ ($y_0^0 \ne 0$) such that (2.1) with (2.2) is satisfied. Here and in the following, $0$ denotes the zero vector of length $r$. Let $f$ be defined by
$$f := \sum_{k=0}^{m-1} a_k^T\, \Phi(\cdot + k), \tag{2.5}$$
where the coefficient vectors $a_k$ are determined by
$$(a_0,\dots,a_{m-1}) := (y_0^0,\dots,y_0^{m-1})\, V^{-1} \tag{2.6}$$
with the Vandermonde matrix $V := (k^n)_{k,n=0}^{m-1}$. Hence we have
$$y_0^n = \sum_{k=0}^{m-1} k^n\, a_k \qquad (n = 0,\dots,m-1). \tag{2.7}$$
By Fourier transform of (2.5) we obtain
$$\hat f(u) = A(u)^T\, \hat\Phi(u)$$
with
$$A(u) := \sum_{k=0}^{m-1} a_k\, e^{iuk}.$$
Observe that by (2.7)
$$(D^n A)(0) = \sum_{k=0}^{m-1} (ik)^n\, a_k = i^n\, y_0^n \qquad (n = 0,\dots,m-1). \tag{2.8}$$
4. We show by induction that $f$ satisfies the Strang–Fix conditions of order $m$. For $n = 0$, we have by (2.1) and (2.2)
$$\sum_{l\in\mathbb{Z}} (y_0^0)^T\, \Phi(x - l) = 1 \qquad (x \in \mathbb{R}).$$
Using formula (2.4) for $\mu = 0$ and $u = 0$ this yields
$$(y_0^0)^T \sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, \hat\Phi(2\pi j) = 1 \qquad (x \in \mathbb{R}),$$
and so, by continuity of $\hat\Phi(u)$ and (2.8),
$$\hat f(0) = A(0)^T\, \hat\Phi(0) = (y_0^0)^T\, \hat\Phi(0) = 1, \qquad \hat f(2\pi l) = A(0)^T\, \hat\Phi(2\pi l) = (y_0^0)^T\, \hat\Phi(2\pi l) = 0 \quad (l \in \mathbb{Z}\setminus\{0\}).$$
Hence, $f$ satisfies the Strang–Fix conditions of order 1.
5. To show the Strang–Fix conditions of order 2, observe that
$$D\hat f(2\pi l) = DA(0)^T\, \hat\Phi(2\pi l) + A(0)^T\, D\hat\Phi(2\pi l) = i\, (y_0^1)^T\, \hat\Phi(2\pi l) + (y_0^0)^T\, D\hat\Phi(2\pi l).$$
By (2.1) and (2.2),
$$\sum_{\nu=0}^{1} \binom{1}{\nu}\, (y_0^\nu)^T \sum_{l\in\mathbb{Z}} l^{\,1-\nu}\, \Phi(x - l) = x.$$
Inserting (2.4) with $u = 0$ and $\mu = 1 - \nu$, we obtain
$$\sum_{j\in\mathbb{Z}} e^{2\pi i j x} \big( (y_0^1)^T\, \hat\Phi(2\pi j) + (-i)\, (y_0^0)^T\, D\hat\Phi(2\pi j) \big) + x\, (y_0^0)^T \sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, \hat\Phi(2\pi j) = x$$
and hence,
$$\sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, D\hat f(2\pi j) = 0.$$
It follows that $D\hat f(2\pi l) = 0$ for all $l \in \mathbb{Z}$.
6. Now, assume that $f$ satisfies for $0 \le \mu \le n-1$ with $2 \le n \le m-1$ the conditions
$$(D^\mu \hat f)(2\pi l) = \sum_{\nu=0}^{\mu} \binom{\mu}{\nu}\, (D^\nu A)^T(0)\, (D^{\mu-\nu}\hat\Phi)(2\pi l) = \sum_{\nu=0}^{\mu} \binom{\mu}{\nu}\, i^\nu\, (y_0^\nu)^T\, (D^{\mu-\nu}\hat\Phi)(2\pi l) = \delta_{0,l}\,\delta_{0,\mu}, \tag{2.9}$$
where $\delta$ denotes the Kronecker symbol. These conditions are even a little bit stronger than the Strang–Fix conditions of order $n$, since we even have $D^\mu \hat f(0) = 0$ for $1 \le \mu \le n-1$. But the conditions are justified by the observations in part 5 of the proof, yielding $D\hat f(2\pi l) = 0$ ($l \in \mathbb{Z}$). We show that $f$ also satisfies the Strang–Fix conditions of order $n+1$. By (2.1) and (2.2) we have
$$\sum_{\nu=0}^{n} \binom{n}{\nu}\, (y_0^\nu)^T \sum_{l\in\mathbb{Z}} l^{\,n-\nu}\, \Phi(x - l) = x^n \qquad (0 \le n \le m-1).$$
Using (2.4) with $\mu = n - \nu$ and $u = 0$ we obtain
$$\sum_{\nu=0}^{n} \binom{n}{\nu}\, (y_0^\nu)^T\, (-i)^{n-\nu} \sum_{k=0}^{n-\nu} \binom{n-\nu}{k}\, (ix)^k \sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, (D^{n-\nu-k}\hat\Phi)(2\pi j) = x^n.$$
Rearranging the double sum (with $\binom{n}{\nu}\binom{n-\nu}{k} = \binom{n}{k}\binom{n-k}{\nu}$) and using the expansion of $(D^{n-k}\hat f)(2\pi j)$ as in (2.9), this reads
$$\sum_{k=0}^{n} \binom{n}{k}\, x^k\, (-i)^{n-k} \sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, (D^{n-k}\hat f)(2\pi j) = x^n.$$
By (2.9), the summands on the left-hand side vanish for $k = 1,\dots,n-1$. Thus, since $f$ satisfies the Strang–Fix conditions of order 1,
$$(-i)^n \sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, (D^n\hat f)(2\pi j) + x^n = x^n,$$
i.e.,
$$\sum_{j\in\mathbb{Z}} e^{2\pi i j x}\, (D^n\hat f)(2\pi j) = 0.$$
It follows that $\hat f$ satisfies the Strang–Fix conditions of order $n + 1$, so that the proof by induction is complete. □

Remark. Observe that the equivalence of (a) and (b) can be shown without use of the $L^2_{-m}$-stability of the integer translates of $\phi_\nu$ ($\nu = 0,\dots,r-1$) (cf. Jia and Lei [17]).
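Two ingredients of this section can be checked numerically. The sketch below (not from the paper) verifies, with arbitrary scalar stand-ins for $y_0^k$, the recursion behind the form (2.2) from the proof of Lemma 2.1, and the Vandermonde construction (2.6)-(2.8) of the coefficient vectors $a_k$.

```python
import numpy as np
from math import comb

# Sketch (not from the paper), with arbitrary scalar stand-ins for y_0^k.
y0 = [3.0, -1.0, 0.5, 2.0, 7.0]

# (2.2): y_l^n = sum_k binom(n,k) l^{n-k} y_0^k satisfies the recursion
# y_{l+1}^{n+1} = y_l^{n+1} + sum_{s=0}^{n} binom(n+1,s) y_l^s from the proof.
def y(n, l):
    return sum(comb(n, k) * l ** (n - k) * y0[k] for k in range(n + 1))

ok_recursion = all(
    abs(y(n + 1, l + 1) - y(n + 1, l)
        - sum(comb(n + 1, s) * y(s, l) for s in range(n + 1))) < 1e-9
    for n in range(4) for l in range(-3, 4))

# (2.6)-(2.8): a = y V^{-1} with V = (k^n) gives D^n A(0) = i^n y_0^n.
m = 5
V = np.array([[k ** n for n in range(m)] for k in range(m)], dtype=float)
a = np.linalg.solve(V.T, np.array(y0))     # solves a V = y componentwise
DA = [sum(a[k] * (1j * k) ** n for k in range(m)) for n in range(m)]
ok_vandermonde = all(abs(DA[n] - 1j ** n * y0[n]) < 1e-8 for n in range(m))
```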
3. Conditions for the refinement mask

In this section, we shall derive necessary and sufficient conditions for the refinement mask $P$ yielding a function vector $\Phi$ which provides controlled approximation order $m$. Let now $\Phi = (\phi_\nu)_{\nu=0}^{r-1}$ be refinable in the sense of (1.6). The decay property (i) implies that the elements of $P(u)$ are $m$-times continuously differentiable functions in $L^2_{2\pi}(\mathbb{R})$. Following the ideas in Strang and Strela [25], we introduce the doubly infinite matrix
$$L := \begin{pmatrix} & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \\ \cdots & P_0 & P_1 & P_2 & P_3 & P_4 & P_5 & \cdots \\ \cdots & P_{-2} & P_{-1} & P_0 & P_1 & P_2 & P_3 & \cdots \\ \cdots & P_{-4} & P_{-3} & P_{-2} & P_{-1} & P_0 & P_1 & \cdots \\ & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \end{pmatrix}$$
containing the $(r\times r)$-coefficient matrices $P_l$ occurring in the refinement equation (1.6). Now, with the infinite vector $\boldsymbol{\Phi} := (\dots, \Phi(\cdot+1)^T, \Phi^T, \Phi(\cdot-1)^T, \dots)^T$ the refinement equation (1.6) can formally be written in vector form
$$L\, \boldsymbol{\Phi}(2\,\cdot) = \boldsymbol{\Phi}. \tag{3.1}$$
First we recall the following lemma, which can also be found in Strang and Strela [25] for compactly supported functions.

Lemma 3.1. Let the assumptions (i) and (iii) be satisfied for $\phi_\nu$ ($\nu = 0,\dots,r-1$). Assume that algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi_\nu$, i.e., there are vectors $y_l^n \in \mathbb{R}^r$ ($l \in \mathbb{Z}$; $n = 0,\dots,m-1$) such that (2.1) is satisfied. Then the matrix $L$ has the eigenvalues $1, 1/2, \dots, (1/2)^{m-1}$ with corresponding left eigenvectors
$$\boldsymbol{y}^n := (\dots, (y_0^n)^T, (y_1^n)^T, \dots)^T,$$
i.e.,
$$(\boldsymbol{y}^n)^T L = 2^{-n}\, (\boldsymbol{y}^n)^T \qquad (n = 0,\dots,m-1).$$

Proof. From (2.1) it follows by (3.1) for $n = 0,\dots,m-1$
$$x^n = (\boldsymbol{y}^n)^T\, \boldsymbol{\Phi}(x) = (\boldsymbol{y}^n)^T L\, \boldsymbol{\Phi}(2x), \qquad x^n = 2^{-n}\, (2x)^n = 2^{-n}\, (\boldsymbol{y}^n)^T\, \boldsymbol{\Phi}(2x) \qquad (x \in \mathbb{R}).$$
The $L^2_{-m}$-stability of the translates $\phi_\nu(\cdot - l)$ ($l \in \mathbb{Z}$; $\nu = 0,\dots,r-1$) gives
$$(\boldsymbol{y}^n)^T L = 2^{-n}\, (\boldsymbol{y}^n)^T \qquad (n = 0,\dots,m-1). \qquad \square$$
Now we can prove:
Theorem 3.2. Let $r \in \mathbb{N}$, and let $\phi_\nu$ ($\nu = 0,\dots,r-1$) be functions satisfying the assumptions (i)–(iii). Further, let $\Phi$ be refinable. Then the function vector $\Phi$ provides controlled approximation order $m$ if and only if the refinement mask $P$ of $\Phi$ in (1.8) satisfies the following conditions: There are vectors $y_0^k \in \mathbb{R}^r$, $y_0^0 \ne 0$ ($k = 0,\dots,m-1$) such that for $n = 0,\dots,m-1$ we have
$$\sum_{k=0}^{n} \binom{n}{k}\, (y_0^k)^T\, (2i)^{k-n}\, (D^{n-k} P)(0) = 2^{-n}\, (y_0^n)^T, \tag{3.2}$$
$$\sum_{k=0}^{n} \binom{n}{k}\, (y_0^k)^T\, (2i)^{k-n}\, (D^{n-k} P)(\pi) = 0^T, \tag{3.3}$$
where $0$ denotes the zero vector.
Proof. 1. Assume that $\Phi$ provides controlled approximation order $m$. Since all assumptions of Theorem 2.2 are satisfied, it follows that algebraic polynomials of degree $< m$ can be exactly reproduced by integer translates of $\phi_\nu$. Thus, applying Lemmata 2.1 and 3.1, there are vectors $y_l^n \in \mathbb{R}^r$ ($l \in \mathbb{Z}$; $n = 0,\dots,m-1$) such that with $\boldsymbol{y}^n = (\dots, (y_0^n)^T, (y_1^n)^T, \dots)^T$ we have
$$(\boldsymbol{y}^n)^T L = 2^{-n}\, (\boldsymbol{y}^n)^T \qquad (n = 0,\dots,m-1), \tag{3.4}$$
where the vectors $y_l^n$ are of the form (2.2). In particular, there is a vector $y_0^0 \in \mathbb{R}^r$ ($y_0^0 \ne 0$) such that by the Poisson summation formula it follows that
$$(y_0^0)^T \sum_{l\in\mathbb{Z}} \Phi(x - l) = (y_0^0)^T \sum_{j\in\mathbb{Z}} \hat\Phi(2\pi j)\, e^{2\pi i j x} = 1,$$
i.e., $(y_0^0)^T\, \hat\Phi(0) = 1$. Using the structure of the matrix $L$, (3.4) is equivalent to the equations
$$\sum_{l\in\mathbb{Z}} (y_{-l}^n)^T\, P_{2l} = 2^{-n}\, (y_0^n)^T \qquad (n = 0,\dots,m-1), \tag{3.5}$$
$$\sum_{l\in\mathbb{Z}} (y_{-l}^n)^T\, P_{2l+1} = 2^{-n}\, (y_1^n)^T \qquad (n = 0,\dots,m-1). \tag{3.6}$$
Note that by (i), the series in (3.5)–(3.6) are well-defined. Putting (2.2) in (3.5), we find for $n = 0,\dots,m-1$,
$$2^{-n}\, (y_0^n)^T = \sum_{l\in\mathbb{Z}} \sum_{k=0}^{n} \binom{n}{k}\, (-l)^{n-k}\, (y_0^k)^T\, P_{2l} = \sum_{k=0}^{n} \binom{n}{k}\, (y_0^k)^T\, (-2)^{k-n} \sum_{l\in\mathbb{Z}} (2l)^{n-k}\, P_{2l}. \tag{3.7}$$
Analogously, from (3.6) it follows by (2.2) for $s = 0,\dots,m-1$,
$$2^{-s}\, (y_1^s)^T = 2^{-s} \sum_{k=0}^{s} \binom{s}{k}\, (y_0^k)^T = \sum_{k=0}^{s} \binom{s}{k}\, (y_0^k)^T\, (-2)^{k-s} \sum_{l\in\mathbb{Z}} (2l)^{s-k}\, P_{2l+1}. \tag{3.8}$$
Multiplying (3.8) by $2^{-n} \binom{n}{s} (-1)^{n-s}\, 2^s$ and summing over $s = 0,\dots,n-1$ yields
$$2^{-n} \sum_{s=0}^{n-1} \binom{n}{s} (-1)^{n-s} \sum_{k=0}^{s} \binom{s}{k}\, (y_0^k)^T = \sum_{s=0}^{n-1} \binom{n}{s} (-2)^{s-n} \sum_{k=0}^{s} \binom{s}{k}\, (y_0^k)^T\, (-2)^{k-s} \sum_{l\in\mathbb{Z}} (2l)^{s-k}\, P_{2l+1}. \tag{3.9}$$
Considering the left-hand side of equation (3.9), we find
$$2^{-n} \sum_{s=0}^{n-1} \binom{n}{s} (-1)^{n-s} \sum_{k=0}^{s} \binom{s}{k}\, (y_0^k)^T = 2^{-n} \sum_{k=0}^{n-1} \binom{n}{k}\, (y_0^k)^T \sum_{s=k}^{n-1} (-1)^{n-s} \binom{n-k}{s-k} = -\,2^{-n} \sum_{k=0}^{n-1} \binom{n}{k}\, (y_0^k)^T,$$
since $\sum_{s=0}^{n-1-k} (-1)^{n-k-s} \binom{n-k}{s} = -1$. The right-hand side of (3.9) yields
$$\sum_{s=0}^{n-1} \binom{n}{s} (-2)^{s-n} \sum_{k=0}^{s} \binom{s}{k}\, (y_0^k)^T\, (-2)^{k-s} \sum_{l\in\mathbb{Z}} (2l)^{s-k}\, P_{2l+1} = \sum_{k=0}^{n-1} \binom{n}{k}\, (y_0^k)^T\, (-2)^{k-n} \sum_{l\in\mathbb{Z}} \Big( \sum_{s=k}^{n-1} \binom{n-k}{s-k}\, (2l)^{s-k} \Big) P_{2l+1}$$
$$= \sum_{k=0}^{n-1} \binom{n}{k}\, (y_0^k)^T\, (-2)^{k-n} \sum_{l\in\mathbb{Z}} \big( (2l+1)^{n-k} - (2l)^{n-k} \big)\, P_{2l+1}.$$
Hence, by addition of (3.8) with $s = n$ and (3.9) we obtain
$$\sum_{k=0}^{n} \binom{n}{k}\, (y_0^k)^T\, (-2)^{k-n} \sum_{l\in\mathbb{Z}} (2l+1)^{n-k}\, P_{2l+1} = 2^{-n}\, (y_0^n)^T. \tag{3.10}$$
Now, for the sum and the difference of the equations (3.7) and (3.10), respectively, it follows for $n = 0,\dots,m-1$:
$$\sum_{k=0}^{n} \binom{n}{k}\, (-2)^{k-n}\, (y_0^k)^T \Big( \sum_{l\in\mathbb{Z}} l^{\,n-k}\, P_l \Big) = 2^{-n+1}\, (y_0^n)^T, \tag{3.11}$$
$$\sum_{k=0}^{n} \binom{n}{k}\, (-2)^{k-n}\, (y_0^k)^T \Big( \sum_{l\in\mathbb{Z}} l^{\,n-k}\, (-1)^l\, P_l \Big) = 0^T. \tag{3.12}$$
By (i), the series in (3.11)–(3.12) are absolutely convergent, and $P$ is $m$-times continuously differentiable. By
$$(D^{n-k} P)(u) = \frac{(-i)^{n-k}}{2} \sum_{l\in\mathbb{Z}} P_l\, l^{\,n-k}\, e^{-ilu}$$
we have
$$\sum_{l\in\mathbb{Z}} l^{\,n-k}\, P_l = 2\, i^{n-k}\, (D^{n-k} P)(0), \qquad \sum_{l\in\mathbb{Z}} (-1)^l\, l^{\,n-k}\, P_l = 2\, i^{n-k}\, (D^{n-k} P)(\pi).$$
Hence, (3.11)–(3.12) are equivalent to the conditions (3.2)–(3.3).
2. Assume that $P$ is $m$-times continuously differentiable with elements in $L^2_{2\pi}(\mathbb{R})$, and that there are vectors $y_0^k \in \mathbb{R}^r$, $y_0^0 \ne 0$ ($k = 0,\dots,m-1$), such that the conditions (3.2) and (3.3) are satisfied for $n = 0,\dots,m-1$. We show that $\Phi$ provides controlled approximation order $m$. Let the function $f$ be defined as in (2.5)–(2.6) by
$$\hat f(u) := A(u)^T\, \hat\Phi(u),$$
where the vector of $2\pi$-periodic functions $A(u) := \sum_{k=0}^{m-1} a_k\, e^{iuk} \in C^\infty(\mathbb{R}^r)$ satisfies the conditions
$$D^n A(0) = i^n\, y_0^n \qquad (n = 0,\dots,m-1). \tag{3.13}$$
We show that $f$ satisfies the Strang–Fix conditions of order $m$: For the $\mu$-th derivative of $\hat f$ we find
$$(D^\mu \hat f)(2\pi l) = \sum_{s=0}^{\mu} \binom{\mu}{s}\, (D^{\mu-s} A)^T(0)\, (D^s \hat\Phi)(2\pi l) \qquad (l \in \mathbb{Z};\ \mu = 0,\dots,m-1).$$
From (3.13) and the refinement equation (1.7), it follows for $l \in \mathbb{Z}$
$$(D^\mu \hat f)(2\pi l) = \sum_{s=0}^{\mu} \binom{\mu}{s}\, i^{\mu-s}\, (y_0^{\mu-s})^T\, 2^{-s} \sum_{d=0}^{s} \binom{s}{d}\, (D^{s-d} P)(\pi l)\, (D^d \hat\Phi)(\pi l)$$
$$= \sum_{d=0}^{\mu} \sum_{s=d}^{\mu} \binom{\mu}{s} \binom{s}{d}\, i^{\mu-s}\, (y_0^{\mu-s})^T\, 2^{-s}\, (D^{s-d} P)(\pi l)\, (D^d \hat\Phi)(\pi l)$$
$$= \sum_{d=0}^{\mu} \binom{\mu}{d} \Big( \sum_{s=0}^{\mu-d} \binom{\mu-d}{s}\, i^{\mu-d-s}\, (y_0^{\mu-d-s})^T\, 2^{-s-d}\, (D^{s} P)(\pi l) \Big)\, (D^d \hat\Phi)(\pi l).$$
Using the relation (3.3) with $n = \mu - d$, we obtain for odd $l$
$$(D^\mu \hat f)(2\pi l) = 0 \qquad (\mu = 0,\dots,m-1).$$
For even $l$, we find by (3.2) with $n = \mu - d$
$$(D^\mu \hat f)(2\pi l) = 2^{-\mu} \sum_{d=0}^{\mu} \binom{\mu}{d}\, i^{\mu-d}\, (y_0^{\mu-d})^T\, (D^d \hat\Phi)(\pi l) = 2^{-\mu}\, (D^\mu \hat f)(\pi l) \qquad (\mu = 0,\dots,m-1).$$
Repeating the procedure, we obtain that
$$(D^\mu \hat f)(2\pi l) = 0 \qquad (l \in \mathbb{Z}\setminus\{0\};\ \mu = 0,\dots,m-1).$$
Finally, using the Poisson summation formula, we have
$$\hat f(0) = (y_0^0)^T\, \hat\Phi(0) = (y_0^0)^T \sum_{l\in\mathbb{Z}} \Phi(x - l) \ne 0,$$
since $B(\Phi)$ is $L^2_{-m}$-stable. Thus, by Theorem 2.2, $\Phi$ provides controlled approximation order $m$. □

Remark. 1. In the case $r = 1$, the relations (3.2) and (3.3) can be strongly simplified, and we obtain Theorem 1.1. Using the trigonometric polynomial vector $A(u)$ defined in the proof of Theorem 2.2, the conditions (3.2) and (3.3) simply read
$$D^n \big[ A(2u)^T\, P(u) \big]\big|_{u=0} = D^n A(0)^T, \qquad D^n \big[ A(2u)^T\, P(u) \big]\big|_{u=\pi} = 0^T.$$
2. Let $\Phi$ satisfy the assumptions of Theorem 3.2. In order to verify controlled approximation order $m = 1$, the following conditions must be true: There is a vector $y_0^0 \in \mathbb{R}^r$ ($y_0^0 \ne 0$) such that
$$(y_0^0)^T\, P(0) = (y_0^0)^T, \qquad (y_0^0)^T\, P(\pi) = 0^T. \tag{3.14}$$
That means, $y_0^0$ is a left eigenvector of $P(0)$ for the eigenvalue 1. At the same time, $y_0^0$ is a left eigenvector of $P(\pi)$ for the eigenvalue 0.
3. In order to verify approximation order $m = 2$ we have to show:
a) $\Phi$ provides controlled approximation order 1, i.e., (3.14) is true for some $y_0^0 \ne 0$.
b) There is a second vector $y_0^1 \in \mathbb{R}^r$ such that
$$(2i)^{-1}\, (y_0^0)^T\, (DP)(0) + (y_0^1)^T\, P(0) = 2^{-1}\, (y_0^1)^T, \qquad (2i)^{-1}\, (y_0^0)^T\, (DP)(\pi) + (y_0^1)^T\, P(\pi) = 0^T.$$
4. For proving the reverse direction in Theorem 3.2, i.e., that the relations (3.2) and (3.3) imply the controlled approximation order $m$, the $L^2_{-m}$-stability (iii) is not needed if we assume that $(y_0^0)^T\, \hat\Phi(0) \ne 0$.
5. We observe that the conditions (3.2) and (3.3) use the values of the refinement mask and its derivatives at the points $u = 0$ and $u = \pi$ only.
Example 3.1. We want to consider the example of quadratic B-splines with double knots. Let $N_0^2$ and $N_1^2$ be the B-splines defined by the knots $0, 0, 1, 1$ and $0, 1, 1, 2$, respectively, i.e., we have
$$N_0^2(x) := \begin{cases} 2x(1-x) & x \in [0,1], \\ 0 & \text{otherwise}, \end{cases} \qquad N_1^2(x) := \begin{cases} x^2 & x \in [0,1), \\ (2-x)^2 & x \in [1,2], \\ 0 & \text{otherwise}. \end{cases}$$
Let $N := (N_0^2, N_1^2)^T$. In Plonka [20], it was shown that the Fourier transformed vector
$$\hat N(u) = \frac{2}{(iu)^3} \begin{pmatrix} iu - 2 + (2 + iu)\, e^{-iu} \\ 1 - 2iu\, e^{-iu} - e^{-2iu} \end{pmatrix}$$
satisfies the Fourier transformed refinement equation $\hat N = P(\cdot/2)\, \hat N(\cdot/2)$ with the refinement mask
$$P(u) := \frac18 \begin{pmatrix} 2 + 2e^{-iu} & 2 \\ 2e^{-iu} + 2e^{-2iu} & 1 + 4e^{-iu} + e^{-2iu} \end{pmatrix}.$$
It is well known that $N$ provides the controlled approximation order $m = 3$. In fact, the conditions (3.2) and (3.3) are satisfied with $y_0^0 := (1,1)^T$, $y_0^1 := (1/2, 1)^T$, $y_0^2 := (0,1)^T$. We find for $n = 0, 1, 2$:
$$(1,1)\, \frac18 \begin{pmatrix} 4 & 2 \\ 4 & 6 \end{pmatrix} = (1,1), \qquad (1,1)\, \frac18 \begin{pmatrix} 0 & 2 \\ 0 & -2 \end{pmatrix} = (0,0),$$
$$-\frac{i}{16}\, (1,1) \begin{pmatrix} -2i & 0 \\ -6i & -6i \end{pmatrix} + \frac18\, (1/2, 1) \begin{pmatrix} 4 & 2 \\ 4 & 6 \end{pmatrix} = \frac12\, (1/2, 1),$$
$$-\frac{i}{16}\, (1,1) \begin{pmatrix} 2i & 0 \\ -2i & 2i \end{pmatrix} + \frac18\, (1/2, 1) \begin{pmatrix} 0 & 2 \\ 0 & -2 \end{pmatrix} = (0,0),$$
$$-\frac{1}{32}\, (1,1) \begin{pmatrix} -2 & 0 \\ -10 & -8 \end{pmatrix} - \frac{i}{8}\, (1/2, 1) \begin{pmatrix} -2i & 0 \\ -6i & -6i \end{pmatrix} + \frac18\, (0,1) \begin{pmatrix} 4 & 2 \\ 4 & 6 \end{pmatrix} = \frac14\, (0,1),$$
$$-\frac{1}{32}\, (1,1) \begin{pmatrix} 2 & 0 \\ -6 & 0 \end{pmatrix} - \frac{i}{8}\, (1/2, 1) \begin{pmatrix} 2i & 0 \\ -2i & 2i \end{pmatrix} + \frac18\, (0,1) \begin{pmatrix} 0 & 2 \\ 0 & -2 \end{pmatrix} = (0,0).$$
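The computations of Example 3.1 can be reproduced mechanically; the sketch below (not from the paper) evaluates the conditions (3.2)-(3.3) for $n = 0, 1, 2$ directly from the mask coefficients, and also checks that $\det P(u) = (1+e^{-iu})^3/32$, exhibiting the determinant factor $(1+e^{-iu})^m$ announced in Theorem 1.2.

```python
import numpy as np
from math import comb

# Sketch (not from the paper): mask coefficients P_l of Example 3.1, so that
# P(u) = (1/2) sum_l P_l e^{-ilu} matches the 2x2 matrix above.
P_coef = {0: np.array([[2.0, 2.0], [0.0, 1.0]]) / 4,
          1: np.array([[2.0, 0.0], [2.0, 4.0]]) / 4,
          2: np.array([[0.0, 0.0], [2.0, 1.0]]) / 4}
y = [np.array([1.0, 1.0]), np.array([0.5, 1.0]), np.array([0.0, 1.0])]

def DP(nu, u):          # nu-th derivative of the mask P at u
    return 0.5 * sum(M * (-1j * l) ** nu * np.exp(-1j * l * u)
                     for l, M in P_coef.items())

ok = True
for n in range(3):      # conditions (3.2) and (3.3) for n = 0, 1, 2
    at0 = sum(comb(n, k) * (2j) ** (k - n) * (y[k] @ DP(n - k, 0.0))
              for k in range(n + 1))
    atpi = sum(comb(n, k) * (2j) ** (k - n) * (y[k] @ DP(n - k, np.pi))
               for k in range(n + 1))
    ok = ok and np.allclose(at0, 2.0 ** (-n) * y[n]) and np.allclose(atpi, 0)

z = np.exp(-1j * 0.7)   # det P(u) = (1 + e^{-iu})^3 / 32 at a sample point
det_ok = abs(np.linalg.det(DP(0, 0.7)) - (1 + z) ** 3 / 32) < 1e-12
```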
4. Factorization of the refinement mask

As known, if a single refinable function $\phi$ provides controlled approximation order $m$, then the refinement mask of $\phi$ factorizes,
$$P(u) = \Big( \frac{1 + e^{-iu}}{2} \Big)^m S(u)$$
(cf. Theorem 1.1). In this section, we shall find a matrix factorization of the refinement mask in the case of a finite set of refinable functions.

Let $r \in \mathbb{N}$ be fixed. First, let us define the $(r\times r)$-matrix $C := (C_{j,k})_{j,k=0}^{r-1}$ by the vector $y = (y_0,\dots,y_{r-1})^T \in \mathbb{R}^r$, $y \ne 0$. Let $j_0 := \min\{j : y_j \ne 0\}$ and $j_1 := \max\{j : y_j \ne 0\}$. Further, for all $j$ with $y_j \ne 0$ let $d_j := \min\{k : k > j,\ y_k \ne 0\}$. For $j_0 < j_1$, the entries of $C$ are defined for $j, k = 0,\dots,r-1$ by
$$C_{j,k}(u) := \begin{cases} y_j^{-1} & y_j \ne 0 \text{ and } j = k, \\ 1 & y_j = 0 \text{ and } j = k, \\ -y_j^{-1} & y_j \ne 0 \text{ and } k = d_j, \\ -e^{-iu}/y_{j_1} & j = j_1 \text{ and } k = j_0, \\ 0 & \text{otherwise}. \end{cases} \tag{4.1}$$
For $j_0 = j_1$, $C$ is a diagonal matrix of the form
$$C(u) := \mathrm{diag}\,(\underbrace{1,\dots,1}_{j_0},\ (1 - e^{-iu})/y_{j_0},\ \underbrace{1,\dots,1}_{r-1-j_0}). \tag{4.2}$$
In particular, for $r = 1$ we have $C(u) := (1 - e^{-iu})/y_0$. Note that $C$ is chosen such that
$$y^T C(0) = 0^T. \tag{4.3}$$
We obtain:

Theorem 4.1. Let $\phi_\nu$ ($\nu = 0,\dots,r-1$) be functions satisfying the assumptions (i)–(iii). Further, let $\Phi$ be refinable. Then the following conditions are equivalent:
(a) $\Phi$ provides controlled approximation order 1, i.e., there is a vector $y = (y_0,\dots,y_{r-1})^T \in \mathbb{R}^r$ ($y \ne 0$) with
$$y^T \sum_{l\in\mathbb{Z}} \Phi(\cdot - l) = 1.$$
(b) The refinement mask $P$ of $\Phi$ satisfies
$$y^T P(0) = y^T, \qquad y^T P(\pi) = 0^T \tag{4.4}$$
with $y$ as in (a).
(c) The refinement mask $P$ of $\Phi$ is of the form
$$P(u) = \frac12\, C(2u)\, \tilde P(u)\, C(u)^{-1},$$
where $C(u)$ is defined by (4.1)–(4.2) with $y$ as in (a), and where $\tilde P(u)$ is an $(r\times r)$-matrix satisfying $\tilde P(0)\, e = e$ with $e := (\mathrm{sign}(|y_0|),\dots,\mathrm{sign}(|y_{r-1}|))^T$.
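The matrix $C(u)$ of (4.1)-(4.2) can be constructed programmatically; the following is a sketch (not from the paper, with a hypothetical helper `build_C`) that also checks the defining property (4.3) for a vector $y$ with an interior zero entry.

```python
import numpy as np

# Sketch (not from the paper): `build_C` is a hypothetical helper assembling
# C(u) from a vector y according to (4.1)-(4.2).
def build_C(y, u):
    r = len(y)
    nz = [j for j in range(r) if y[j] != 0]
    j0, j1 = nz[0], nz[-1]
    if j0 == j1:                                   # diagonal case (4.2)
        C = np.eye(r, dtype=complex)
        C[j0, j0] = (1 - np.exp(-1j * u)) / y[j0]
        return C
    C = np.zeros((r, r), dtype=complex)            # case (4.1)
    for j in range(r):
        if y[j] == 0:
            C[j, j] = 1.0
        else:
            C[j, j] = 1.0 / y[j]
            if j != j1:                            # d_j = next nonzero index
                C[j, min(k for k in nz if k > j)] = -1.0 / y[j]
    C[j1, j0] = -np.exp(-1j * u) / y[j1]
    return C

y = np.array([2.0, 0.0, -3.0, 5.0])
ok_43 = bool(np.allclose(y @ build_C(y, 0.0), 0))      # property (4.3)
ok_inv = abs(np.linalg.det(build_C(y, 1.0))) > 1e-12   # invertible for u != 0
```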
Proof. 1. Without loss of generality we suppose that $y \ne 0$ is of the form
$$ y = (y_0, \ldots, y_{l-1}, 0, \ldots, 0)^{\mathrm T} $$
with $1 \le l \le r$ and $y_\nu \ne 0$ for $\nu = 0, \ldots, l-1$. Introducing the direct sum of square matrices $A \oplus B := \operatorname{diag}(A, B)$, the matrix $C(u)$ reads $C(u) = C_l(u) \oplus I_{r-l}$, where $I_{r-l}$ is the $(r-l) \times (r-l)$ unit matrix and
$$ C_l(u) := \begin{pmatrix} y_0^{-1} & -y_0^{-1} & 0 & \cdots & 0 \\ 0 & y_1^{-1} & -y_1^{-1} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & y_{l-2}^{-1} & -y_{l-2}^{-1} \\ -e^{-iu}/y_{l-1} & 0 & \cdots & 0 & y_{l-1}^{-1} \end{pmatrix} \quad (l > 1), \qquad C_1(u) := \frac{1 - e^{-iu}}{y_0}. $$
It can easily be observed that $C(u)$ is invertible for $u \ne 0$, and we have $C(u)^{-1} = C_l(u)^{-1} \oplus I_{r-l}$ with
$$ C_l(u)^{-1} = \frac{1}{1-z}\, E_l(u) := \frac{1}{1-z} \begin{pmatrix} y_0 & y_1 & y_2 & \cdots & y_{l-1} \\ y_0 z & y_1 & y_2 & \cdots & y_{l-1} \\ \vdots & \ddots & \ddots & & \vdots \\ y_0 z & y_1 z & \ddots & y_{l-2} & y_{l-1} \\ y_0 z & y_1 z & \cdots & y_{l-2} z & y_{l-1} \end{pmatrix} \qquad (z := e^{-iu}). $$
We introduce $E(u) := (1 - e^{-iu})\, C(u)^{-1} = E_l(u) \oplus (1 - e^{-iu})\, I_{r-l}$.

2. The equivalence of (a) and (b) was already shown in Theorem 3.2. Assume that (4.4) holds. We show that $\tilde P(u) := 2\, C(2u)^{-1} P(u)\, C(u)$ satisfies the assertion $\tilde P(0)\, e = e$ with $e = (\underbrace{1, \ldots, 1}_{l}, \underbrace{0, \ldots, 0}_{r-l})^{\mathrm T}$. Using the rule of L'Hospital, we obtain
$$ \tilde P(0) = \lim_{u \to 0} \frac{2}{1 - e^{-2iu}}\, E(2u)\, P(u)\, C(u) = \frac{1}{i} \Bigl( 2\, (DE)(0)\, P(0)\, C(0) + E(0)\, (DP)(0)\, C(0) + E(0)\, P(0)\, (DC)(0) \Bigr). $$
Observe that $C(0)\, e = 0$ and $(DC)(0)\, e = (i/y_{l-1})\, e_l$ with the $l$-th unit vector $e_l := (\underbrace{0, \ldots, 0}_{l-1}, 1, 0, \ldots, 0)^{\mathrm T}$. Hence,
$$ \tilde P(0)\, e = \frac{1}{y_{l-1}}\, E(0)\, P(0)\, e_l. $$
By the assumption (4.4), we have $E(0)\, P(0) = E(0)$. Thus,
$$ \tilde P(0)\, e = \frac{1}{y_{l-1}}\, E(0)\, e_l = e. $$
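The structure of $C_l(u)$ and $E_l(u)$ and the inversion identity $C_l(u)^{-1} = E_l(u)/(1-z)$ can be confirmed symbolically. A small sympy sketch for the case $l = r = 2$ with a generic $y = (y_0, y_1)^{\mathrm T}$ (a check under the stated assumptions, not part of the proof):

```python
import sympy as sp

z, y0, y1 = sp.symbols('z y0 y1', nonzero=True)  # z stands for e^{-iu}

# C_2(u) and E_2(u) from (4.1)-(4.2) for y = (y0, y1)^T (case l = r = 2)
C = sp.Matrix([[1/y0, -1/y0],
               [-z/y1, 1/y1]])
E = sp.Matrix([[y0, y1],
               [y0*z, y1]])

# C(u)^{-1} = E(u)/(1 - z), i.e. C(u) E(u) = (1 - z) I
assert sp.expand(C * E) == (1 - z) * sp.eye(2)

# (4.3): y^T C(0) = 0^T (u = 0 corresponds to z = 1)
yT = sp.Matrix([[y0, y1]])
assert sp.expand(yT * C.subs(z, 1)) == sp.zeros(1, 2)
```

The same pattern extends to any $l$: each row of $C_l$ is $y_\nu^{-1}(e_\nu - e_{\nu+1})^{\mathrm T}$ except the last, which closes the cycle with the factor $e^{-iu}$.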
3. Let the refinement mask $P(u)$ be of the form
$$ P(u) = \tfrac12\, C(2u)\, \tilde P(u)\, C(u)^{-1} = \frac{1}{2(1 - e^{-iu})}\, C(2u)\, \tilde P(u)\, E(u) $$
with $C(u)$ and $E(u)$ defined by $y \ne 0$ as in part 1 of the proof, and with $\tilde P(0)\, e = e$. We show that (4.4) is satisfied for $P(u)$. Since $C(\pi)$ is regular, the second assertion in (4.4) immediately follows from (4.3). Again using the rule of L'Hospital, we find
$$ P(0) = \frac{1}{2i} \Bigl( 2\, (DC)(0)\, \tilde P(0)\, E(0) + C(0)\, (D\tilde P)(0)\, E(0) + C(0)\, \tilde P(0)\, (DE)(0) \Bigr). $$
By (4.3), $y^{\mathrm T} P(0)$ simplifies to
$$ y^{\mathrm T} P(0) = \frac{1}{i}\, y^{\mathrm T} (DC)(0)\, \tilde P(0)\, E(0). $$
Observing that $y^{\mathrm T} (DC)(0) = i\, e_1^{\mathrm T}$ with $e_1 := (1, 0, \ldots, 0)^{\mathrm T}$, and that $\tilde P(0)\, E(0) = E(0)$, we have
$$ y^{\mathrm T} P(0) = e_1^{\mathrm T}\, E(0) = y^{\mathrm T}. \qquad \square $$

Moreover, we obtain the following

Theorem 4.2. Let $m \ge 1$ be fixed. Let $P$ be an $(r \times r)$-matrix with $m$-times continuously differentiable, $2\pi$-periodic entries, and assume that $P$ satisfies (3.2) and (3.3) for $n = 0, \ldots, m-1$ with vectors $y_0^0, \ldots, y_0^{m-1}$ ($y_0^0 \ne 0$). Then the matrix $\tilde P$,

(4.5)  $\tilde P(u) := 2\, C(2u)^{-1}\, P(u)\, C(u)$,

with $C(u)$ defined as in (4.1)--(4.2) by $y_0^0$, satisfies the conditions (3.2) and (3.3) for $n = 0, \ldots, m-2$ with vectors $\tilde y_0^0, \ldots, \tilde y_0^{m-2}$ given by

(4.6)  $\displaystyle (\tilde y_0^k)^{\mathrm T} := \frac{1}{k+1} \sum_{\mu=0}^{k+1} \binom{k+1}{\mu}\, i^{\mu-k-1}\, (y_0^\mu)^{\mathrm T}\, (D^{k+1-\mu} C)(0)$

($k = 0, \ldots, m-2$). In particular, we have $\tilde y_0^0 \ne 0$.

Proof. 1. From $2\, P(u)\, C(u) = C(2u)\, \tilde P(u)$ it follows by $(n-k)$-fold differentiation for $0 \le n-k \le m-1$
$$ 2 \sum_{l=0}^{n-k} \binom{n-k}{l} (D^{n-k-l} P)(u)\, (D^l C)(u) = \sum_{l=0}^{n-k} \binom{n-k}{l}\, 2^{n-k-l}\, (D^{n-k-l} C)(2u)\, (D^l \tilde P)(u), $$
i.e.,

(4.7)  $(D^{n-k} P)(u)\, C(u) = S_1^{n-k}(u) - S_2^{n-k}(u)$

with
$$ S_1^{n-k}(u) := \frac12 \sum_{l=0}^{n-k} \binom{n-k}{l}\, 2^{n-k-l}\, (D^{n-k-l} C)(2u)\, (D^l \tilde P)(u), \qquad S_2^{n-k}(u) := \sum_{l=1}^{n-k} \binom{n-k}{l}\, (D^{n-k-l} P)(u)\, (D^l C)(u). $$
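For $n - k = 1$, identity (4.7) is just the product rule applied to $2P(u)C(u) = C(2u)\tilde P(u)$. A sympy sketch checking both the two-scale identity and this first-derivative case on the quadratic-spline data of Example 4.1 (with $y_0^0 = (1,1)^{\mathrm T}$); the helper `D` implements $d/du$ via the chain rule for functions written in $z = e^{-iu}$, and `at2u` realizes evaluation at $2u$ as $z \mapsto z^2$:

```python
import sympy as sp

z = sp.symbols('z')  # z stands for e^{-iu}

def D(M):
    """d/du of a matrix given as a function of z = e^{-iu} (chain rule: dz/du = -i z)."""
    return -sp.I * z * M.diff(z)

at2u = lambda M: M.subs(z, z**2)  # evaluation at 2u corresponds to z -> z^2

# Mask P and matrix P~ of Example 4.1; C(u) built from y_0^0 = (1,1)^T
P  = sp.Rational(1, 8) * sp.Matrix([[2 + 2*z, 2],
                                    [2*z + 2*z**2, 1 + 4*z + z**2]])
Pt = sp.Rational(1, 4) * sp.Matrix([[2 + z, 1],
                                    [z, 1 + 2*z]])
C  = sp.Matrix([[1, -1], [-z, 1]])

# Two-scale identity 2 P(u) C(u) = C(2u) P~(u)
assert sp.expand(2 * P * C - at2u(C) * Pt) == sp.zeros(2, 2)

# (4.7) for n - k = 1:  (DP)(u) C(u) = S_1^1(u) - S_2^1(u)
S1 = at2u(D(C)) * Pt + sp.Rational(1, 2) * at2u(C) * D(Pt)
S2 = P * D(C)
assert sp.expand(D(P) * C - (S1 - S2)) == sp.zeros(2, 2)
```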
2. First, we observe that by (3.3), for $u = \pi$ and $1 \le n \le m-1$,

(4.8)
$$ \sum_{k=0}^{n} \binom{n}{k} (y_0^k)^{\mathrm T} (2i)^{k-n}\, S_2^{n-k}(\pi) = \sum_{l=1}^{n} \binom{n}{l} \Biggl( \sum_{k=0}^{n-l} \binom{n-l}{k} (y_0^k)^{\mathrm T} (2i)^{k-n+l}\, (D^{n-l-k} P)(\pi) \Biggr) (2i)^{-l}\, (D^l C)(\pi) = \sum_{l=1}^{n} \binom{n}{l}\, 0^{\mathrm T}\, (2i)^{-l}\, (D^l C)(\pi) = 0^{\mathrm T}. $$

Analogously, for $u = 0$, from (3.2) it follows for $1 \le n \le m-1$, by the definition (4.6) of $\tilde y_0^{n-1}$,

(4.9)
$$ \sum_{k=0}^{n} \binom{n}{k} (y_0^k)^{\mathrm T} (2i)^{k-n}\, S_2^{n-k}(0) = \sum_{l=1}^{n} \binom{n}{l} \Biggl( \sum_{k=0}^{n-l} \binom{n-l}{k} (y_0^k)^{\mathrm T} (2i)^{k-n+l}\, (D^{n-l-k} P)(0) \Biggr) (2i)^{-l}\, (D^l C)(0) $$
$$ = \sum_{l=1}^{n} \binom{n}{l}\, 2^{-n+l}\, (y_0^{n-l})^{\mathrm T} (2i)^{-l}\, (D^l C)(0) = 2^{-n} \sum_{l=1}^{n} \binom{n}{l}\, i^{-l}\, (y_0^{n-l})^{\mathrm T} (D^l C)(0) = 2^{-n} n\, (\tilde y_0^{n-1})^{\mathrm T} - 2^{-n}\, (y_0^n)^{\mathrm T}\, C(0). $$
3. We will show the following: if $P$ satisfies (3.2)--(3.3) for a certain $n$ ($1 \le n \le m-1$) with $y_0^k$ ($k = 0, \ldots, n$), then $\tilde P$ satisfies (3.2)--(3.3) for $n-1$ with $\tilde y_0^k$ ($k = 0, \ldots, n-1$). We multiply (3.3) with $C(\pi)$ and replace $(D^{n-k} P)(\pi)\, C(\pi)$ ($k = 0, \ldots, n$) by the right-hand side of (4.7) for $u = \pi$. Then, by (4.8), and since the entries of $C$ are $2\pi$-periodic (so that $(D^j C)(2\pi) = (D^j C)(0)$), we have for $1 \le n \le m-1$
$$ \sum_{k=0}^{n} \binom{n}{k} (y_0^k)^{\mathrm T} (2i)^{k-n} \bigl( S_1^{n-k}(\pi) - S_2^{n-k}(\pi) \bigr) = \sum_{k=0}^{n} \binom{n}{k} (y_0^k)^{\mathrm T} (2i)^{k-n}\, \frac12 \sum_{l=0}^{n-k} \binom{n-k}{l}\, 2^{n-k-l}\, (D^{n-k-l} C)(0)\, (D^l \tilde P)(\pi) = 0. $$
Hence, by the definition of $\tilde y_0^{n-l-1}$,
$$ \frac12 \sum_{l=0}^{n} \binom{n}{l} \Biggl( \sum_{k=0}^{n-l} \binom{n-l}{k} (y_0^k)^{\mathrm T}\, i^{k-n+l}\, (D^{n-k-l} C)(0) \Biggr) (2i)^{-l}\, (D^l \tilde P)(\pi) = \frac{n}{2} \sum_{l=0}^{n-1} \binom{n-1}{l} (2i)^{-l}\, (\tilde y_0^{n-l-1})^{\mathrm T}\, (D^l \tilde P)(\pi) = 0. $$
The substitution $l' = n-1-l$ yields
$$ \sum_{l'=0}^{n-1} \binom{n-1}{l'} (2i)^{-n+l'+1}\, (\tilde y_0^{l'})^{\mathrm T}\, (D^{n-l'-1} \tilde P)(\pi) = 0^{\mathrm T}, $$
that is, $\tilde P$ satisfies (3.3) for $0 \le n \le m-2$ with the vectors $\tilde y_0^k$ ($k = 0, \ldots, m-2$).

4. We multiply (3.2) with $C(0)$ and replace $(D^{n-k} P)(0)\, C(0)$ ($k = 0, \ldots, n$) by the right-hand side of (4.7) for $u = 0$. Then, by (4.9), we obtain for $1 \le n \le m-1$
$$ \sum_{k=0}^{n} \binom{n}{k} (y_0^k)^{\mathrm T} (2i)^{k-n}\, S_1^{n-k}(0) = 2^{-n}\, (y_0^n)^{\mathrm T}\, C(0) + \sum_{k=0}^{n} \binom{n}{k} (y_0^k)^{\mathrm T} (2i)^{k-n}\, S_2^{n-k}(0) $$
$$ = 2^{-n}\, (y_0^n)^{\mathrm T}\, C(0) + 2^{-n} n\, (\tilde y_0^{n-1})^{\mathrm T} - 2^{-n}\, (y_0^n)^{\mathrm T}\, C(0) = n\, 2^{-n}\, (\tilde y_0^{n-1})^{\mathrm T}. $$
Hence, by the definitions of $S_1^{n-k}(0)$ and $\tilde y_0^{n-l-1}$, it follows that
$$ 2 \sum_{k=0}^{n} \binom{n}{k} (y_0^k)^{\mathrm T} (2i)^{k-n}\, S_1^{n-k}(0) = \sum_{l=0}^{n} \binom{n}{l} \Biggl( \sum_{k=0}^{n-l} \binom{n-l}{k} (y_0^k)^{\mathrm T}\, i^{k-n+l}\, (D^{n-k-l} C)(0) \Biggr) (2i)^{-l}\, (D^l \tilde P)(0) $$
$$ = n \sum_{l=0}^{n-1} \binom{n-1}{l} (2i)^{-l}\, (\tilde y_0^{n-l-1})^{\mathrm T}\, (D^l \tilde P)(0) = n \sum_{l=0}^{n-1} \binom{n-1}{l} (2i)^{-n+l+1}\, (\tilde y_0^{l})^{\mathrm T}\, (D^{n-1-l} \tilde P)(0) = n\, 2^{-n+1}\, (\tilde y_0^{n-1})^{\mathrm T}, $$
that is, $\tilde P$ satisfies (3.2) for $0 \le n \le m-2$ with the vectors $\tilde y_0^k$ ($k = 0, \ldots, m-2$).

5. Finally, we consider
$$ (\tilde y_0^0)^{\mathrm T} := (-i)\, (y_0^0)^{\mathrm T}\, (DC)(0) + (y_0^1)^{\mathrm T}\, C(0). $$
Computing this vector, we easily observe that the sum of the components of $\tilde y_0^0$ equals 1. Consequently, $\tilde y_0^0 \ne 0$. $\square$
With the help of Theorem 4.2 we find the following factorization of refinement masks:

Corollary 4.3. Let $m \in \mathbb{N}$ be fixed. Let $P$ be an $(r \times r)$-matrix with $m$-times continuously differentiable, $2\pi$-periodic entries, and assume that $P$ satisfies (3.2)--(3.3) for $n = 0, \ldots, m-1$ with vectors $y_0^0, \ldots, y_0^{m-1}$ ($y_0^0 \ne 0$). Then there are nonzero vectors $x_0, \ldots, x_{m-1}$ such that $P$ factorizes as
$$ P(u) = \frac{1}{2^m}\, C_0(2u) \cdots C_{m-1}(2u)\, S(u)\, C_{m-1}(u)^{-1} \cdots C_0(u)^{-1}, $$
where the matrices $C_k$ are defined by $x_k$ ($k = 0, \ldots, m-1$) as in (4.1)--(4.2) and $S(u)$ is an $(r \times r)$-matrix with $m$-times continuously differentiable, $2\pi$-periodic entries. In particular, for the determinant of $P$ it follows that

(4.10)  $\displaystyle \det P(u) = \Bigl( \frac{1 + e^{-iu}}{2^r} \Bigr)^{m} \det S(u). $

Proof. The first assertion can easily be observed by repeated application of Theorem 4.2. The formula (4.10) follows from
$$ \det \bigl( C_k(2u)\, C_k(u)^{-1} \bigr) = \frac{1 - e^{-2iu}}{1 - e^{-iu}} = 1 + e^{-iu} \qquad (k = 0, \ldots, m-1). \qquad \square $$
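The determinant identity used in the proof can be confirmed symbolically for a concrete $C_k$; a sympy sketch with $C$ built from the sample vector $y = (1,1)^{\mathrm T}$, writing $w$ for $e^{-iu}$:

```python
import sympy as sp

w = sp.symbols('w')  # w stands for e^{-iu}, so e^{-2iu} is w**2

# C(u) and C(2u) per (4.1)-(4.2) for the sample vector y = (1, 1)^T
C  = sp.Matrix([[1, -1], [-w, 1]])
C2 = C.subs(w, w**2)

# det(C(2u) C(u)^{-1}) = (1 - e^{-2iu})/(1 - e^{-iu}) = 1 + e^{-iu}
assert sp.cancel(C2.det() / C.det()) == 1 + w
```

The same cancellation of the simple zero at $u = 0$ occurs for every $C_k$, which is what produces the factor $(1 + e^{-iu})^m$ in (4.10).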
Remark. The vectors $x_n$ ($n = 0, \ldots, m-1$) in Corollary 4.3 can be computed by repeated application of (4.6). In particular, we obtain $x_0^{\mathrm T} := (y_0^0)^{\mathrm T}$ and
$$ x_1^{\mathrm T} := (\tilde y_0^0)^{\mathrm T} = (-i)\, (y_0^0)^{\mathrm T}\, (DC)(0) + (y_0^1)^{\mathrm T}\, C(0), $$
where $C$ is defined by $x_0 = y_0^0$.

Example 4.1. 1. Let us again consider the refinement mask of the vector of quadratic B-splines with double knots,
$$ P(u) = \frac18 \begin{pmatrix} 2 + 2e^{-iu} & 2 \\ 2e^{-iu} + 2e^{-2iu} & 1 + 4e^{-iu} + e^{-2iu} \end{pmatrix} $$
(cf. Example 3.1). By Theorem 4.2, we obtain $\tilde P$ by (4.5) with $C(u)$ defined by $y_0^0 = (1, 1)^{\mathrm T}$:
$$ \tilde P(u) = \frac{2}{1-z^2} \begin{pmatrix} 1 & 1 \\ z^2 & 1 \end{pmatrix}\, \frac18 \begin{pmatrix} 2+2z & 2 \\ 2z+2z^2 & 1+4z+z^2 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ -z & 1 \end{pmatrix} = \frac14 \begin{pmatrix} 2+z & 1 \\ z & 1+2z \end{pmatrix} \qquad (z := e^{-iu}). $$
In particular, we have $\tilde P(0) \binom{1}{1} = \binom{1}{1}$. For $\tilde y_0^0$ we obtain, with $y_0^1 = (1/2, 1)^{\mathrm T}$ (cf. Example 3.4),
$$ (\tilde y_0^0)^{\mathrm T} = (1, 1) \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + (1/2, 1) \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} = (1/2,\, 1/2). $$
Repeating this procedure, we finally obtain with $z := e^{-iu}$ the factorization
$$ P(u) = \frac18 \begin{pmatrix} 1 & -1 \\ -z^2 & 1 \end{pmatrix} \begin{pmatrix} 2 & -2 \\ -2z^2 & 2 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 1+z \end{pmatrix} \frac{1}{1-z} \begin{pmatrix} 1/2 & 1/2 \\ z/2 & 1/2 \end{pmatrix} \frac{1}{1-z} \begin{pmatrix} 1 & 1 \\ z & 1 \end{pmatrix}. $$
For the determinant of $P(u)$ we have
$$ \det P(u) = \frac{1}{32}\, (1 + e^{-iu})^3. $$

2. Now we consider the example of two scaling functions $\phi_0$ and $\phi_1$ treated in Donovan, Geronimo, Hardin, Massopust [10] and Strang and Strela [25]. The refinement mask of $\phi := (\phi_0, \phi_1)^{\mathrm T}$ is given by
$$ P(u) := \frac{1}{20} \begin{pmatrix} 6 + 6z & 8\sqrt2 \\ (-1 + 9z + 9z^2 - z^3)/\sqrt2 & -3 + 10z - 3z^2 \end{pmatrix} \qquad (z = e^{-iu}). $$
The translates $\phi_0(\cdot - l)$ and $\phi_1(\cdot - l)$ ($l \in \mathbb{Z}$) are orthogonal. We observe that $\operatorname{supp} \phi_0 = [0, 1]$ and $\operatorname{supp} \phi_1 = [0, 2]$. Further, the symmetry relations $\phi_0 = \phi_0(1 - \cdot)$ and $\phi_1 = \phi_1(2 - \cdot)$ are satisfied. The refinement mask $P$ satisfies the conditions (3.2)--(3.3) with $y_0^0 := (\sqrt2, 1)^{\mathrm T}$ and $y_0^1 := (\sqrt2/2, 1)^{\mathrm T}$. With $x_0 := (\sqrt2, 1)^{\mathrm T}$ and $x_1 := (1/2, 1/2)^{\mathrm T}$ we find the factorization
$$ P(u) = \frac{1}{40} \begin{pmatrix} \sqrt2/2 & -\sqrt2/2 \\ -z^2 & 1 \end{pmatrix} \begin{pmatrix} 2 & -2 \\ -2z^2 & 2 \end{pmatrix} \begin{pmatrix} 10 & 0 \\ -1 + 20z - z^2 & -4 - 4z \end{pmatrix} \frac{1}{1-z} \begin{pmatrix} 1/2 & 1/2 \\ z/2 & 1/2 \end{pmatrix} \frac{1}{1-z} \begin{pmatrix} \sqrt2 & 1 \\ \sqrt2\, z & 1 \end{pmatrix}. $$
A further factorization is not possible, since there is no vector $x_2 \in \mathbb{R}^2$ which is a right eigenvector of $S(0)$ for the eigenvalue 1 and, at the same time, a right eigenvector of $S(\pi)$ for the eigenvalue 0. For the determinant of $P$ it follows that
$$ \det P(u) = -\frac{1}{40}\, (1 + e^{-iu})^3. $$
In fact, it was shown in [10, 25] that $\phi$ provides controlled approximation order $m = 2$. This example shows that the factorization of the determinant given in (4.10) provides only a necessary, not a sufficient, condition for controlled approximation order.
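The five-factor product in part 1 of Example 4.1 can be multiplied out symbolically; a sympy sketch (with $z$ a formal variable standing for $e^{-iu}$, so the arguments $2u$ correspond to $z^2$):

```python
import sympy as sp

z = sp.symbols('z')  # z stands for e^{-iu}; the arguments 2u correspond to z**2

P = sp.Rational(1, 8) * sp.Matrix([[2 + 2*z, 2],
                                   [2*z + 2*z**2, 1 + 4*z + z**2]])

half = sp.Rational(1, 2)
factors = [
    sp.Matrix([[1, -1], [-z**2, 1]]),                  # C_0(2u), x_0 = (1,1)^T
    sp.Matrix([[2, -2], [-2*z**2, 2]]),                # C_1(2u), x_1 = (1/2,1/2)^T
    sp.Matrix([[2, 0], [0, 1 + z]]),                   # S(u)
    sp.Matrix([[half, half], [z/2, half]]) / (1 - z),  # C_1(u)^{-1}
    sp.Matrix([[1, 1], [z, 1]]) / (1 - z),             # C_0(u)^{-1}
]
prod = sp.Rational(1, 8)
for F in factors:
    prod = prod * F

# The product reproduces the mask P ...
assert all(sp.cancel(e) == 0 for e in (prod - P))

# ... and the determinant is (1/32)(1 + e^{-iu})^3 as stated
assert sp.factor(P.det() - (1 + z)**3 / 32) == 0
```

The apparent poles at $z = 1$ in the two inverse factors cancel against the left factors, as the exact cancellation above confirms.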
Remark. 1. In the following, let $m \in \mathbb{N}$ and $1 \le r \le m$ be given integers. We consider equidistant knots of multiplicity $r$,
$$ x_l = x_l^r := \lfloor l/r \rfloor \qquad (l \in \mathbb{Z}), $$
where $\lfloor x \rfloor$ denotes the integer part of $x \in \mathbb{R}$. Let $N_k^{m,r} \in C^{m-r-1}(\mathbb{R})$ ($1 \le r \le m$; $k \in \mathbb{Z}$) denote the normalized B-splines of order $m$ and defect $r$ with the knots $x_k, \ldots, x_{k+m}$. We introduce the spline vector $N_r^m := (N_\nu^{m,r})_{\nu=0}^{r-1}$. In Plonka [20] it was shown that the refinement mask $P_r^m(u)$ of $N_r^m$ can be factorized in the form
$$ P_r^m(u) = \frac{1}{2^m}\, A_{m-1}(z^2) \cdots A_0(z^2)\, P^{r-1}\, A_0(z)^{-1} \cdots A_{m-1}(z)^{-1} $$
with $P^{r-1} := \operatorname{diag}(2^{r-1}, \ldots, 2^0)$ and matrices $A_\mu$ ($\mu = 0, \ldots, m-1$) defined by the vector of knots $x_\mu := (x_\mu, \ldots, x_{\mu+r-1})^{\mathrm T}$ in a similar manner as $C$ in (4.1)--(4.2). In particular, we have
$$ \det P_r^m(u) = 2^{-r(m-1) + r(r-3)/2}\, (1 + e^{-iu})^m $$
(cf. [20]). Since the approximation order of the spline space generated by $N_r^m$ is $m$, it can be conjectured that the B-splines with multiple knots are optimal in the sense that, for a fixed small support $[0, \lfloor (m-1)/r \rfloor + 1]$ of $\phi_\nu$, the best possible approximation order $m$ is achieved.

2. For the construction of scaling functions providing a given approximation order, a reversion of Corollary 4.3 would also be useful. This problem will be dealt with in a forthcoming paper.

Acknowledgment. This research was supported by the Deutsche Forschungsgemeinschaft. The author would like to thank Prof. Dr. M. Tasche for various helpful suggestions. The author is also grateful to the referees for carefully reading the manuscript and for their helpful comments.
References

1. Alpert, B. K.: A class of bases in $L^2$ for the sparse representation of integral operators. SIAM J. Math. Anal. 24 (1993) 246-262
2. de Boor, C., DeVore, R. A., Ron, A.: Approximation from shift-invariant subspaces of $L_2(\mathbb{R}^d)$. Trans. Amer. Math. Soc. 341 (1994) 787-806
3. de Boor, C., DeVore, R. A., Ron, A.: The structure of finitely generated shift-invariant spaces in $L_2(\mathbb{R}^d)$. J. Funct. Anal. 119(1) (1994) 37-78
4. Butzer, P. L., Nessel, R. J.: Fourier Analysis and Approximation. Basel: Birkhäuser Verlag, 1971
5. Cavaretta, A., Dahmen, W., Micchelli, C. A.: Stationary Subdivision. Memoirs Amer. Math. Soc. 453 (1991) 1-186
6. Chui, C. K.: An Introduction to Wavelets. Boston: Academic Press, 1992
7. Daubechies, I.: Orthonormal bases of wavelets with compact support. Comm. Pure Appl. Math. 41 (1988) 909-996
8. Daubechies, I.: Ten Lectures on Wavelets. Philadelphia: SIAM, 1992
9. Daubechies, I., Lagarias, J.: Two-scale difference equations: II. Local regularity, infinite products of matrices and fractals. SIAM J. Math. Anal. 23 (1992) 1031-1079
10. Donovan, G., Geronimo, J. S., Hardin, D. P., Massopust, P. R.: Construction of orthogonal wavelets using fractal interpolation functions. SIAM J. Math. Anal. (to appear)
11. Dyn, N., Jackson, I. R. H., Levin, D., Ron, A.: On multivariate approximation by the integer translates of a basis function. Israel J. Math. 78 (1992) 95-130
12. Goodman, T. N. T., Lee, S. L.: Wavelets of multiplicity r. Report AA/921, University of Dundee, 1992
13. Goodman, T. N. T., Lee, S. L., Tang, W. S.: Wavelets in wandering subspaces. Trans. Amer. Math. Soc. 338 (1993) 639-654
14. Halton, E. J., Light, W. A.: On local and controlled approximation order. J. Approx. Theory 72 (1993) 268-277
15. Heil, C., Strang, G., Strela, V.: Approximation by translates of refinable functions. Numer. Math. (to appear)
16. Hervé, L.: Multi-resolution analysis of multiplicity d. Application to dyadic interpolation. Appl. Comput. Harmonic Anal. 1 (1994) 299-315
17. Jia, R. Q., Lei, J. J.: Approximation by multiinteger translates of functions having global support. J. Approx. Theory 72 (1993) 2-23
18. Lemarié-Rieusset, P. G.: Ondelettes généralisées et fonctions d'échelle à support compact. Revista Matemática Iberoamericana 9 (1993) 333-371
19. Light, W. A.: Recent developments in the Strang-Fix theory for approximation orders. Curves and Surfaces (P. J. Laurent, A. Le Méhauté and L. L. Schumaker, eds.), Academic Press, New York, 1991, pp. 297-306
20. Plonka, G.: Two-scale symbol and autocorrelation symbol for B-splines with multiple knots. Advances in Comp. Math. 3 (1995) 1-22
21. Plonka, G.: Generalized spline wavelets. Constr. Approx. (to appear)
22. Schoenberg, I. J.: Contributions to the problem of approximation of equidistant data by analytic functions, Part A. Quart. Appl. Math. 4 (1946) 45-99
23. Schoenberg, I. J.: Cardinal interpolation and spline functions: II. Interpolation of data of power growth. J. Approx. Theory 6 (1972) 404-420
24. Strang, G., Fix, G.: A Fourier analysis of the finite element variational method. Constructive Aspects of Functional Analysis (G. Geymonat, ed.), C.I.M.E., 1973, pp. 793-840
25. Strang, G., Strela, V.: Orthogonal multiwavelets with vanishing moments. J. Optical Eng. 33 (1994) 2104-2107
26. Strang, G., Strela, V.: Short wavelets and matrix dilation equations. IEEE Trans. on Signal Processing 43 (1995)

Gerlind Plonka
Fachbereich Mathematik
Universität Rostock
D-18051 Rostock, Germany