Interpolating polynomial wavelets on [−1, 1]

M.R. Capobianco, W. Themistoclakis∗

Abstract. In the present paper polynomial interpolating scaling functions and wavelets are constructed by using the interpolation properties of de la Vallée Poussin kernels with respect to the four kinds of Chebyshev weights. For the decomposition and reconstruction of a given function, efficient algorithms based on fast discrete cosine and sine transforms are proposed.

Keywords and phrases: Polynomial wavelets, de la Vallée Poussin means, Chebyshev polynomials, Interpolation, Fast discrete cosine and sine transforms.

MSC (AMS) Classification: 65D05, 65T60
1 Introduction
The classical wavelet theory was developed on the whole real axis: a suitable function $\psi \in L^2(\mathbb{R})$ was fixed as mother wavelet and from it all the wavelets were generated by means of dilations and translations, so as to form a basis of $L^2(\mathbb{R})$ (see e.g. [5, 14]). In adapting such a theory to the case of functions defined on a compact interval of $\mathbb{R}$, several artifices, like periodization techniques, were introduced to overcome the boundary problems (see e.g. [3, 4, 10, 11]). More recently, algebraic polynomials on the interval $[-1, 1]$ have been considered as wavelets and handled by wavelet techniques, with advantages in computational efficiency and accuracy in applications, for instance to approximation problems. With respect to other kinds of wavelets, polynomial wavelets seem to be the most natural ones to consider on compact intervals, since they are based on the properties of orthogonal polynomials, which constitute a theory that naturally lives within the bounds of the orthogonality interval. In particular, wavelet techniques for Chebyshev polynomials of the first kind have been developed in [8, 9, 12, 18]. The case of an arbitrary system of orthogonal polynomials is treated in [6, 7], where the classical multiresolution scheme due

∗ Istituto per Applicazioni del Calcolo "Mauro Picone" - Sezione di Napoli - C.N.R., Via P. Castellino, 111, 80131 Napoli, Italy. [email protected], [email protected]
to Mallat and Meyer is modified by presenting a unified approach for the construction of polynomial wavelets. Usually polynomial wavelets are constructed starting from Lagrange interpolation at the zeros of orthogonal polynomials. Here a different approach is followed. As is well known, classical approximation based on Lagrange interpolation suffers from the fact that the Lebesgue constants are unbounded. This can be seen in the Gibbs phenomenon and in large oscillations of the approximating polynomials over the whole interval. To overcome these difficulties, de la Vallée Poussin means are considered. The respective kernels have a particularly simple representation and, in the case of Bernstein–Szegő weights, they satisfy a nice interpolation property (see [20]). Thus, restricting ourselves to the four types of Chebyshev weights, polynomial interpolating scaling functions are defined by using the de la Vallée Poussin interpolation process, and polynomial wavelets are explicitly given in terms of these scaling functions. Such wavelets are not orthogonal, but they are uniquely determined by the following interpolation constraint: at each resolution level $j$, both scaling and wavelet functions are interpolating polynomials, and their interpolation knots constitute two disjoint sets whose union gives the interpolation knots of the scaling functions at the higher resolution level $j+1$. The wavelet functions we construct in this way constitute a generalization of some trigonometric interpolating wavelets studied in [13, 19]. As expected, with respect to the polynomial wavelets based on Lagrange interpolation, the ones based on de la Vallée Poussin interpolation are more localized and give a better approximation in the weighted uniform norm. Unfortunately, they are not orthogonal to each other. As a consequence, the matrices involved in the two-scale relations are not orthogonal either. Anyway, the structure of these matrices is studied in detail: the elements of the inverse matrices are explicitly known, and the matrix-vector products can be performed using fast discrete cosine transforms. The paper is organized as follows. In Section 2 we define the scaling functions and study their properties. In Section 3 we construct the wavelet functions, and finally in Section 4 we give the decomposition and reconstruction algorithms.
2 The scaling functions
Let $w$ be a Chebyshev weight, i.e. one of the following weight functions:
$$\frac{1}{\sqrt{1-x^2}}, \qquad \sqrt{1-x^2}, \qquad \sqrt{\frac{1-x}{1+x}}, \qquad \sqrt{\frac{1+x}{1-x}},$$
and let $\{p_n(w)\}_n$ be the system of orthonormal polynomials with respect to $w$ having positive leading coefficients, i.e. $p_n(w,x) = \gamma_n x^n + \dots$, with $\gamma_n > 0$. Moreover, denote by $x_r^n(w)$, or simply by $x_r^n$ or $x_r$, $r = 1,\dots,n$, the zeros of $p_n(w)$.
Starting with the well-known Darboux kernels
$$K_n(w,x,y) := \sum_{k=0}^{n} p_k(w,x)\,p_k(w,y), \qquad n \in \mathbb{N}, \qquad (2.1)$$
we consider the following de la Vallée Poussin means of Darboux kernels:
$$\varphi_n^m(x,y) := \frac{1}{2m} \sum_{j=n-m}^{n+m-1} K_j(w,x,y), \qquad n > m. \qquad (2.2)$$
By (2.1) and (2.2) it is easy to check that, equivalently, we can write
$$\varphi_n^m(x,y) = \sum_{k=0}^{n+m-1} c_k\, p_k(w,x)\, p_k(w,y), \qquad (2.3)$$
where the coefficients $c_k$ are given by
$$c_k := \begin{cases} 1 & \text{if } 0 \le k \le n-m,\\[4pt] \dfrac{n+m-k}{2m} & \text{if } n-m < k < n+m. \end{cases} \qquad (2.4)$$
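As a sanity check, the equivalence of the definitions (2.2) and (2.3)–(2.4) can be verified numerically. The sketch below does so for the first-kind Chebyshev weight $w(x) = 1/\sqrt{1-x^2}$, using the trigonometric form of its orthonormal polynomials (recalled later in the paper); the parameter values $n = 12$, $m = 4$ and the evaluation points are arbitrary.

```python
import numpy as np

def p(k, x):
    # orthonormal polynomials for w(x) = 1/sqrt(1-x^2): p_0 = 1/sqrt(pi),
    # p_k(cos t) = sqrt(2/pi) cos(k t) for k >= 1
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(k * t) * (np.sqrt(2 / np.pi) if k else 1 / np.sqrt(np.pi))

def K(n, x, y):
    # Darboux kernel (2.1)
    return sum(p(k, x) * p(k, y) for k in range(n + 1))

def vp_means(n, m, x, y):
    # de la Vallee Poussin mean of Darboux kernels (2.2)
    return sum(K(j, x, y) for j in range(n - m, n + m)) / (2 * m)

def vp_coeffs(n, m, x, y):
    # equivalent representation (2.3) with coefficients c_k from (2.4)
    c = lambda k: 1.0 if k <= n - m else (n + m - k) / (2 * m)
    return sum(c(k) * p(k, x) * p(k, y) for k in range(n + m))

n, m, x, y = 12, 4, 0.3, -0.7
assert abs(vp_means(n, m, x, y) - vp_coeffs(n, m, x, y)) < 1e-12
```

The agreement reflects the counting argument behind (2.4): a fixed index $k$ appears in exactly $n+m-k$ of the kernels $K_{n-m},\dots,K_{n+m-1}$ when $n-m < k < n+m$, and in all $2m$ of them when $k \le n-m$.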
The next proposition states an interesting interpolation property of the previous polynomials.

Proposition 2.1 For all $n, m \in \mathbb{N}$ with $n \ge m$ and $r, s = 1,\dots,n$, we have
$$\varphi_n^m(x_r^n, x_s^n) = \begin{cases} 0 & \text{if } r \ne s,\\[4pt] \displaystyle\sum_{k=0}^{n} p_k^2(w, x_r^n) & \text{if } r = s, \end{cases} \qquad (2.5)$$
where $\varphi_n^m$ is defined by (2.2) and $\{x_s^n\}_{s=1,\dots,n}$ are the zeros of $p_n(w)$.

Such a result was proved in [20] for a particular case only, but the proof can be easily extended to this more general case too. For the convenience of the reader, we give here the explicit proof.

Proof. First of all we note that, by the trigonometric form of the Chebyshev polynomials, the identity
$$p_{n+k}(w,x) + p_{n-k}(w,x) = 2\,p_n(w,x)\,T_k(x), \qquad T_k(x) := \cos(k \arccos x), \qquad (2.6)$$
holds for each $n \in \mathbb{N}$ and $k = 1,\dots,n-1$. By using (2.6) with $x := x_r^n$ and $x := x_s^n$, since these points are zeros of $p_n(w,x)$, for any $h = 1,2,\dots,m-1$ we obtain
$$p_{n+h}(w,x_r^n) = 2\,p_n(w,x_r^n)\,T_h(x_r^n) - p_{n-h}(w,x_r^n) = -p_{n-h}(w,x_r^n),$$
$$p_{n+h}(w,x_s^n) = 2\,p_n(w,x_s^n)\,T_h(x_s^n) - p_{n-h}(w,x_s^n) = -p_{n-h}(w,x_s^n),$$
i.e. for all $h = 1,2,\dots,m-1$ we have
$$p_{n+h}(w,x_r^n)\,p_{n+h}(w,x_s^n) = p_{n-h}(w,x_r^n)\,p_{n-h}(w,x_s^n). \qquad (2.7)$$
Then, since for any $l = 1,2,\dots,m-1$ we can write
$$K_{n+l}(w,x_r^n,x_s^n) = K_n(w,x_r^n,x_s^n) + \sum_{h=1}^{l} p_{n+h}(w,x_r^n)\,p_{n+h}(w,x_s^n),$$
$$K_{n-(l+1)}(w,x_r^n,x_s^n) = K_{n-1}(w,x_r^n,x_s^n) - \sum_{h=1}^{l} p_{n-h}(w,x_r^n)\,p_{n-h}(w,x_s^n),$$
by (2.7), for $l = 1,2,\dots,m-1$ we obtain
$$K_{n+l}(w,x_r^n,x_s^n) + K_{n-(l+1)}(w,x_r^n,x_s^n) = K_n(w,x_r^n,x_s^n) + K_{n-1}(w,x_r^n,x_s^n).$$
Thus, observing that
$$K_{n-1}(w,x_r^n,x_s^n) = K_n(w,x_r^n,x_s^n) = \begin{cases} 0 & \text{if } r \ne s,\\[4pt] \displaystyle\sum_{k=0}^{n} p_k^2(w,x_r^n) & \text{if } r = s, \end{cases}$$
we deduce
$$\varphi_n^m(x_r^n,x_s^n) = \frac{1}{2m}\sum_{l=n-m}^{n+m-1} K_l(w,x_r^n,x_s^n) = \frac{1}{2m}\sum_{l=0}^{m-1}\Big[K_{n+l}(w,x_r^n,x_s^n) + K_{n-(l+1)}(w,x_r^n,x_s^n)\Big] = \frac{1}{2}\Big[K_n(w,x_r^n,x_s^n) + K_{n-1}(w,x_r^n,x_s^n)\Big] = K_n(w,x_r^n,x_s^n),$$
i.e. (2.5) holds. ∎
Now define the scaling functions as the following polynomials:
$$\Phi_{n,r}^m(x) := \frac{\varphi_n^m(x_r^n, x)}{\varphi_n^m(x_r^n, x_r^n)}, \qquad r = 1,\dots,n. \qquad (2.8)$$
Taking into account (2.5) and recalling that
$$\lambda_n(x_r^n) := \left[\sum_{k=0}^{n-1} p_k^2(w,x_r^n)\right]^{-1} \qquad (2.9)$$
are the well-known Christoffel numbers, by (2.3) and (2.8) we get the following explicit form of the scaling functions:
$$\Phi_{n,r}^m(x) = \lambda_n(x_r^n) \sum_{k=0}^{n+m-1} c_k\, p_k(w,x_r^n)\, p_k(w,x), \qquad (2.10)$$
where the $c_k$ are given in (2.4). Moreover, by (2.5) we have the interpolating property¹
$$\Phi_{n,r}^m(x_s^n) = \delta_{r,s}, \qquad r,s = 1,\dots,n. \qquad (2.11)$$
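The interpolation property (2.11) can be checked numerically. The sketch below evaluates the explicit form (2.10) for the first-kind Chebyshev weight, for which the Christoffel numbers satisfy $\lambda_n(x_r^n) = \pi/n$ at every zero; the choice $n = 16$, $m = 5$ is only illustrative.

```python
import numpy as np

def p(k, x):
    # orthonormal polynomials for w(x) = 1/sqrt(1-x^2)
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(k * t) * (np.sqrt(2 / np.pi) if k else 1 / np.sqrt(np.pi))

n, m = 16, 5
x_knots = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # zeros of p_n

def Phi(r, x):
    # explicit form (2.10) of the scaling function; lambda_n(x_r) = pi/n here
    c = lambda k: 1.0 if k <= n - m else (n + m - k) / (2 * m)
    return (np.pi / n) * sum(c(k) * p(k, x_knots[r - 1]) * p(k, x)
                             for k in range(n + m))

# interpolation property (2.11): Phi_{n,r}(x_s) = delta_{r,s}
vals = np.array([[Phi(r, xs) for xs in x_knots] for r in range(1, n + 1)])
assert np.allclose(vals, np.eye(n), atol=1e-10)
```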
Such property will be fundamental in our construction of polynomial wavelets, as we will see in the next section. In Figure 1 four scaling functions corresponding to the four Chebyshev weights are graphed.

[Four panels, one per weight: $y = \Phi_{54,28}^{27}(x)$ for $w(x) = (1-x^2)^{-1/2}$; $y = \Phi_{31,16}^{7}(x)$ for $w(x) = (1-x^2)^{1/2}$; $y = \Phi_{40,20}^{13}(x)$ for $w(x) = (1-x)^{1/2}(1+x)^{-1/2}$; $y = \Phi_{40,21}^{13}(x)$ for $w(x) = (1-x)^{-1/2}(1+x)^{1/2}$.]

Figure 1: Scaling functions with respect to the four Chebyshev weights.

The reader can note the interpolation property and the localization of $\Phi_{n,r}^m$ around the knot $x_r^n$. Such localization increases as the values of $n$ and $m$ grow, as the next figure shows.

¹ $\delta_{r,s}$ is the Kronecker symbol, which equals 0 if $r \ne s$ and 1 if $r = s$.
[Three panels: $y = \Phi_{15,8}^{3}(x)$, $y = \Phi_{31,16}^{7}(x)$, $y = \Phi_{63,32}^{15}(x)$.]

Figure 2: Scaling functions with respect to $w(x) = \sqrt{1-x^2}$ corresponding to the interpolation knot $x_r^n = 0$ with increasing values of $n$ and $m$.

Now let us see other useful properties of the scaling functions.

Theorem 2.2 For all $n, m \in \mathbb{N}$ with $n \ge m$, the set $\{\Phi_{n,r}^m,\ r = 1,\dots,n\}$ constitutes a basis of the polynomial subspace
$$V_n^m := \operatorname{span}\Big(\{p_0(w),\, p_1(w),\dots,p_{n-m}(w)\} \cup \Big\{\frac{m+s}{2m}\,p_{n-s}(w) - \frac{m-s}{2m}\,p_{n+s}(w),\ s = 1,\dots,m-1\Big\}\Big). \qquad (2.12)$$
Proof. Firstly, let us note that $\dim V_n^m = n$, since the system of generators in (2.12) constitutes an orthogonal system with respect to the scalar product
$$\langle f,g\rangle := \int_{-1}^{1} f(x)\,g(x)\,w(x)\,dx.$$
On the other hand, by the interpolation property (2.11) it follows that the functions $\Phi_{n,r}^m$, $r = 1,\dots,n$, are linearly independent. Thus we have only to prove that $\Phi_{n,r}^m \in V_n^m$ for any $r = 1,\dots,n$. To this aim we observe that, by (2.10) and (2.4), taking into account that $p_n(w,x_r^n) = 0$, we obtain
$$\Phi_{n,r}^m(x) = \lambda_n(x_r^n)\left[\sum_{j=0}^{n-m} p_j(w,x_r^n)\,p_j(w,x) + \sum_{j=n-m+1}^{n-1} \frac{n+m-j}{2m}\,p_j(w,x_r^n)\,p_j(w,x) + \sum_{j=n+1}^{n+m-1} \frac{n+m-j}{2m}\,p_j(w,x_r^n)\,p_j(w,x)\right]$$
$$= \sum_{j=0}^{n-m} \lambda_n(x_r^n)\,p_j(w,x_r^n)\,p_j(w,x) + \lambda_n(x_r^n)\sum_{s=1}^{m-1}\left[\frac{m+s}{2m}\,p_{n-s}(w,x_r^n)\,p_{n-s}(w,x) + \frac{m-s}{2m}\,p_{n+s}(w,x_r^n)\,p_{n+s}(w,x)\right]$$
$$= \sum_{j=0}^{n-m} \lambda_n(x_r^n)\,p_j(w,x_r^n)\,p_j(w,x) + \sum_{s=1}^{m-1} \lambda_n(x_r^n)\,p_{n-s}(w,x_r^n)\left[\frac{m+s}{2m}\,p_{n-s}(w,x) - \frac{m-s}{2m}\,p_{n+s}(w,x)\right],$$
where in the last line we used the fact that, by (2.6), we get
$$p_{n+s}(w,x_r^n) + p_{n-s}(w,x_r^n) = 0, \qquad s = 1,\dots,n-1. \qquad (2.13)$$
In conclusion, $\Phi_{n,r}^m$ can be expressed as a linear combination of the basis functions in (2.12), and the theorem follows. ∎

Setting, for brevity,
$$q_k(w) := \begin{cases} p_k(w) & \text{if } 0 \le k \le n-m,\\[4pt] \dfrac{m+n-k}{2m}\,p_k(w) - \dfrac{m-n+k}{2m}\,p_{2n-k}(w) & \text{if } n-m < k < n, \end{cases} \qquad (2.14)$$
(2.12) can be written more simply as
$$V_n^m = \operatorname{span}\{q_0(w), q_1(w),\dots,q_{n-1}(w)\}, \qquad (2.15)$$
and in proving Theorem 2.2 we have established the following basis transformation:
$$\Phi_{n,r}^m(x) = \lambda_n(x_r^n) \sum_{k=0}^{n-1} p_k(w,x_r^n)\,q_k(w,x), \qquad r = 1,\dots,n. \qquad (2.16)$$
Roughly speaking, we can say that while $\{q_0(w), q_1(w),\dots,q_{n-1}(w)\}$ constitutes an orthogonal basis of $V_n^m$, the system of scaling functions $\Phi_{n,1}^m, \Phi_{n,2}^m,\dots,\Phi_{n,n}^m$ gives a localized interpolating basis of the same space $V_n^m$. Moreover, setting
$$\mu_h := \begin{cases} 1 & \text{if } 0 \le h \le n-m,\\[4pt] \dfrac{m^2+(n-h)^2}{2m^2} & \text{if } n-m < h < n, \end{cases} \qquad (2.17)$$
from $\langle p_h(w), p_k(w)\rangle = \delta_{h,k}$ we deduce $\langle q_h(w), q_k(w)\rangle = \delta_{h,k}\,\mu_h$ and, consequently, from (2.16) we get
$$\langle \Phi_{n,r}^m, \Phi_{n,s}^m\rangle = \lambda_n(x_r^n)\,\lambda_n(x_s^n) \sum_{h=0}^{n-1} \mu_h\, p_h(w,x_r^n)\, p_h(w,x_s^n). \qquad (2.18)$$
Riesz stability and error estimates

The polynomial space $V_n^m$ spanned by the scaling functions $\Phi_{n,r}^m$ is an intermediate space between the set $\mathbb{P}_{n-m}$ of all algebraic polynomials of degree at most $n-m$ and the set $\mathbb{P}_{n+m-1}$ of those of degree at most $n+m-1$, i.e. the following inclusions hold:
$$\mathbb{P}_{n-m} \subset V_n^m \subset \mathbb{P}_{n+m-1}, \qquad n > m. \qquad (2.19)$$
On the space $V_n^m$ we may consider the following projection:
$$v_n^m f(x) := \sum_{r=1}^{n} f(x_r^n)\,\Phi_{n,r}^m(x). \qquad (2.20)$$
Such a projection operator $v_n^m$ constitutes a generalization of the de la Vallée Poussin operator studied in [2, 20]. By (2.11), it is an interpolating operator,
$$v_n^m f(x_r^n) = f(x_r^n), \qquad r = 1,\dots,n,$$
which, by (2.19), preserves the algebraic polynomials of degree at most $n-m$:
$$v_n^m P = P, \qquad \forall P \in \mathbb{P}_{n-m}.$$
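The polynomial-preservation property can also be observed numerically: by the Gauss quadrature exactness at the knots, $v_n^m$ reproduces every polynomial of degree at most $n-m$. A minimal sketch for the first-kind weight, with illustrative parameters $n = 16$, $m = 5$ and a test polynomial of degree 5:

```python
import numpy as np

def p(k, x):
    # orthonormal polynomials for w(x) = 1/sqrt(1-x^2)
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(k * t) * (np.sqrt(2 / np.pi) if k else 1 / np.sqrt(np.pi))

n, m = 16, 5
x_knots = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # zeros of p_n

def Phi(r, x):
    # scaling function (2.10); lambda_n = pi/n for this weight
    c = lambda k: 1.0 if k <= n - m else (n + m - k) / (2 * m)
    return (np.pi / n) * sum(c(k) * p(k, x_knots[r - 1]) * p(k, x)
                             for k in range(n + m))

def vp_project(f, x):
    # the projection v_n^m f(x) = sum_r f(x_r) Phi_{n,r}(x), eq. (2.20)
    return sum(f(xr) * Phi(r, x) for r, xr in enumerate(x_knots, start=1))

P = lambda x: 3 * x**5 - x**2 + 0.5     # degree 5 <= n - m = 11
for x in np.linspace(-1, 1, 7):
    assert abs(vp_project(P, x) - P(x)) < 1e-9
```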
Moreover, if $m < n \le Cm$, where $C$ is a constant independent of $n$ and $m$, then the operator $v_n^m$ is uniformly bounded in the weighted uniform spaces $C_u^0$ consisting of all functions $f$ which are locally continuous in $[-1,1]$ and such that $\|fu\|_\infty := \max_{|x|\le 1}|f(x)u(x)| < \infty$, under suitable choices of the Jacobi weight $u$. More precisely, we can apply a result stated in [2, Th. 3.2], which gives the following

Theorem 2.3 Let $m, n \in \mathbb{N}$ be such that $m < n \le Cm$, where $C$ is a constant independent of $n$ and $m$, and let $u(x) = (1-x)^\gamma(1+x)^\delta$ be a Jacobi weight whose exponents verify the following bounds:
$$0 \le \gamma < \frac{1}{2}, \quad 0 \le \delta < \frac{1}{2} \qquad \text{if } w(x) = \frac{1}{\sqrt{1-x^2}},$$
$$0 \le \gamma < \frac{3}{2}, \quad 0 \le \delta < \frac{3}{2} \qquad \text{if } w(x) = \sqrt{1-x^2},$$
[...]

iii) $\langle \Phi_{j+1,r}, \Phi_{j,s}\rangle = \lambda_{j+1}(x_{j+1,r})\,\Phi_{j,s}(x_{j+1,r}), \qquad r = 1,\dots,n_{j+1}. \qquad (3.6)$
Proof. Statement i) follows from (2.19), taking into account that, by the settings (3.1)–(3.3), it is
$$n_j + m_j - 1 \le n_{j+1} - m_{j+1}, \qquad \forall j \in \mathbb{N}. \qquad (3.7)$$
Now let us prove ii). To this aim we recall that the explicit form of the Chebyshev zeros is well known (see e.g. [17]):
$$w(x) = \frac{1}{\sqrt{1-x^2}} \implies x_r^n(w) := \cos\frac{(2r-1)\pi}{2n}, \qquad (3.8)$$
$$w(x) = \sqrt{1-x^2} \implies x_r^n(w) := \cos\frac{r\pi}{n+1}, \qquad (3.9)$$
$$w(x) = \sqrt{\frac{1-x}{1+x}} \implies x_r^n(w) := \cos\frac{2r\pi}{2n+1}, \qquad (3.10)$$
$$w(x) = \sqrt{\frac{1+x}{1-x}} \implies x_r^n(w) := \cos\frac{(2r-1)\pi}{2n+1}. \qquad (3.11)$$
By (3.8)–(3.11) it is easy to check that the zeros of $p_n(w)$ are also zeros of $p_{3n}(w)$ in the case $w(x) = 1/\sqrt{1-x^2}$, of $p_{2n+1}(w)$ if $w(x) = \sqrt{1-x^2}$, and of $p_{3n+1}(w)$ in the other two cases $w(x) = \sqrt{(1\pm x)/(1\mp x)}$, i.e.
$$w(x) = \frac{1}{\sqrt{1-x^2}} \implies \{x_r^n\}_r \subset \{x_r^{3n}\}_r, \qquad (3.12)$$
$$w(x) = \sqrt{1-x^2} \implies \{x_r^n\}_r \subset \{x_r^{2n+1}\}_r, \qquad (3.13)$$
$$w(x) = \sqrt{\frac{1\pm x}{1\mp x}} \implies \{x_r^n\}_r \subset \{x_r^{3n+1}\}_r. \qquad (3.14)$$
Thus (3.5) follows from (3.12)–(3.14), taking into account that
$$n_{j+1} = \begin{cases} 3n_j & \text{if } w(x) = \dfrac{1}{\sqrt{1-x^2}},\\[4pt] 2n_j + 1 & \text{if } w(x) = \sqrt{1-x^2},\\[4pt] 3n_j + 1 & \text{if } w(x) = \sqrt{\dfrac{1\pm x}{1\mp x}}. \end{cases}$$
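For instance, in the first-kind case the nesting (3.12) can be checked directly from (3.8): the zero $x_r^n$ reappears among the zeros of $p_{3n}$ at index $k = 3r-1$. A short numerical confirmation:

```python
import numpy as np

n = 9
xn = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))       # zeros of p_n, (3.8)
x3n = np.cos((2 * np.arange(1, 3 * n + 1) - 1) * np.pi / (6 * n))  # zeros of p_{3n}

# the zero x_r^n reappears among the zeros of p_{3n} at index k = 3r - 1
for r in range(1, n + 1):
    assert np.isclose(xn[r - 1], x3n[3 * r - 1 - 1])
```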
Finally, let us prove iii). We recall that by (2.10) we have
$$\Phi_{j,s}(x) = \lambda_j(x_{j,s}) \sum_{k=0}^{n_j+m_j-1} c_{j,k}\, p_k(w,x_{j,s})\, p_k(w,x), \qquad (3.15)$$
where
$$c_{j,k} = \begin{cases} 1 & \text{if } 0 \le k \le n_j - m_j,\\[4pt] \dfrac{n_j+m_j-k}{2m_j} & \text{if } n_j - m_j < k < n_j + m_j. \end{cases} \qquad (3.16)$$
Then from $\langle p_h(w), p_k(w)\rangle = \delta_{h,k}$ we deduce
$$\langle \Phi_{j+1,r}, \Phi_{j,s}\rangle = \lambda_{j+1}(x_{j+1,r})\,\lambda_j(x_{j,s}) \sum_{k=0}^{n_j+m_j-1} c_{j+1,k}\, c_{j,k}\, p_k(w,x_{j+1,r})\, p_k(w,x_{j,s}),$$
which gives (3.6), taking into account that, by virtue of (3.7), it is
$$c_{j+1,k} = 1, \qquad k = 0,1,\dots,n_j+m_j-1, \quad j \in \mathbb{N}. \qquad ∎$$
Now, for all $j \in \mathbb{N}$, let us consider the wavelet spaces $W_j$ defined as the orthogonal complement of $V_j$ in $V_{j+1}$, i.e. uniquely determined by the conditions
$$V_{j+1} = V_j \oplus W_j \qquad \text{and} \qquad V_j \perp W_j. \qquad (3.17)$$
The wavelet functions provide local bases in the spaces $W_j$. In general they are not uniquely determined, and some additional condition has to be imposed. In our case, since we have interpolating scaling functions, it is natural to require that the wavelet functions verify a similar property. Precisely, we require the wavelet functions to be interpolating polynomials at those knots $\{y_{j,r}\}$ of the scaling functions $\{\Phi_{j+1,s}\}$ of level $j+1$ that are not knots of the scaling functions $\{\Phi_{j,s}\}$ of level $j$ (see (3.5)). Thus, for all resolution levels $j \in \mathbb{N}$, let us construct the wavelet functions $\{\Psi_{j,s}\}_{s=1,\dots,n_{j+1}-n_j}$ defined by the conditions
$$\langle \Psi_{j,s}, \Phi_{j,r}\rangle = 0, \qquad s = 1,\dots,n_{j+1}-n_j, \quad r = 1,\dots,n_j, \qquad (3.18)$$
$$\Psi_{j,s}(y_{j,r}) = \delta_{s,r}, \qquad s, r = 1,\dots,n_{j+1}-n_j. \qquad (3.19)$$
Of course $\Psi_{j,s} \in V_{j+1}$, and in $V_{j+1}$ we have the scaling function basis. So, by (2.11) and (3.5), we get
$$\Psi_{j,s}(x) = \sum_{k=1}^{n_{j+1}} \Psi_{j,s}(x_{j+1,k})\,\Phi_{j+1,k}(x) = \sum_{k=1}^{n_{j+1}-n_j} \Psi_{j,s}(y_{j,k})\,\Phi_{j+1}(y_{j,k},x) + \sum_{k=1}^{n_j} \Psi_{j,s}(x_{j,k})\,\Phi_{j+1}(x_{j,k},x).$$
Hence, using (3.19), we have
$$\Psi_{j,s}(x) = \Phi_{j+1}(y_{j,s},x) + \sum_{k=1}^{n_j} \Psi_{j,s}(x_{j,k})\,\Phi_{j+1}(x_{j,k},x), \qquad (3.20)$$
where the unknown values $\Psi_{j,s}(x_{j,k})$ can be easily derived by means of condition (3.18). More precisely, from (3.18), (3.6) and (2.11) we deduce
$$0 = \langle \Psi_{j,s}, \Phi_{j,k}\rangle = \langle \Phi_{j+1}(y_{j,s},\cdot), \Phi_{j,k}\rangle + \sum_{r=1}^{n_j} \Psi_{j,s}(x_{j,r})\,\langle \Phi_{j+1}(x_{j,r},\cdot), \Phi_{j,k}\rangle$$
$$= \lambda_{j+1}(y_{j,s})\,\Phi_{j,k}(y_{j,s}) + \sum_{r=1}^{n_j} \Psi_{j,s}(x_{j,r})\,\lambda_{j+1}(x_{j,r})\,\Phi_{j,k}(x_{j,r}) = \lambda_{j+1}(y_{j,s})\,\Phi_{j,k}(y_{j,s}) + \lambda_{j+1}(x_{j,k})\,\Psi_{j,s}(x_{j,k}),$$
which gives
$$\Psi_{j,s}(x_{j,k}) = -\frac{\lambda_{j+1}(y_{j,s})}{\lambda_{j+1}(x_{j,k})}\,\Phi_{j,k}(y_{j,s}). \qquad (3.21)$$
Summing up, for each $j \in \mathbb{N}$ and $s = 1,\dots,n_{j+1}-n_j$, the wavelet function $\Psi_{j,s}$ is given by
$$\Psi_{j,s}(x) = \Phi_{j+1}(y_{j,s},x) - \sum_{k=1}^{n_j} \frac{\lambda_{j+1}(y_{j,s})}{\lambda_{j+1}(x_{j,k})}\,\Phi_{j,k}(y_{j,s})\,\Phi_{j+1}(x_{j,k},x). \qquad (3.22)$$
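Formula (3.22) can be checked against the interpolation conditions (3.19). The sketch below builds the wavelets for the first-kind weight on two nested levels; the level parameters $n_j = 6$, $m_j = 2$, $n_{j+1} = 18$, $m_{j+1} = 6$ are illustrative assumptions chosen so that (3.7) holds, and for this weight $\lambda_{j+1}$ is constant, so the ratio in (3.22) equals 1.

```python
import numpy as np

def p(k, x):
    # orthonormal polynomials for w(x) = 1/sqrt(1-x^2)
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(k * t) * (np.sqrt(2 / np.pi) if k else 1 / np.sqrt(np.pi))

def scaling(n, m, knot, x):
    # scaling function (2.10) centered at 'knot' (a zero of p_n);
    # for this weight the Christoffel numbers are lambda = pi/n
    c = lambda k: 1.0 if k <= n - m else (n + m - k) / (2 * m)
    return (np.pi / n) * sum(c(k) * p(k, knot) * p(k, x) for k in range(n + m))

nj, mj, nJ, mJ = 6, 2, 18, 6        # assumed level parameters, n_{j+1} = 3 n_j
xj = np.cos((2 * np.arange(1, nj + 1) - 1) * np.pi / (2 * nj))
xJ = np.cos((2 * np.arange(1, nJ + 1) - 1) * np.pi / (2 * nJ))
yj = np.array([x for x in xJ if not np.any(np.isclose(x, xj))])  # new knots, (3.5)

def Psi(s, x):
    # wavelet (3.22); lambda_{j+1} is constant here, so the ratio equals 1
    val = scaling(nJ, mJ, yj[s - 1], x)
    for xk in xj:
        val -= scaling(nj, mj, xk, yj[s - 1]) * scaling(nJ, mJ, xk, x)
    return val

# interpolation conditions (3.19): Psi_{j,s}(y_{j,r}) = delta_{s,r}
V = np.array([[Psi(s, yr) for yr in yj] for s in range(1, len(yj) + 1)])
assert np.allclose(V, np.eye(nJ - nj), atol=1e-9)
```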
In the next figure, four wavelet functions corresponding to the four Chebyshev weights are graphed.
[Four panels, one per weight: $w(x) = (1-x^2)^{-1/2}$, $w(x) = (1-x^2)^{1/2}$, $w(x) = (1-x)^{1/2}(1+x)^{-1/2}$, $w(x) = (1-x)^{-1/2}(1+x)^{1/2}$.]

Figure 3: Wavelet functions at the resolution level $j = 3$.
4 Decomposition and reconstruction algorithms
By virtue of (3.5), the interpolating properties (2.11) and (3.19) immediately give the following two-scale relations:
$$\Phi_{j,s}(x) = \Phi_{j+1}(x_{j,s},x) + \sum_{k=1}^{n_{j+1}-n_j} \Phi_{j,s}(y_{j,k})\,\Phi_{j+1}(y_{j,k},x), \qquad (4.1)$$
$$\Psi_{j,s}(x) = \sum_{k=1}^{n_j} \Psi_{j,s}(x_{j,k})\,\Phi_{j+1}(x_{j,k},x) + \Phi_{j+1}(y_{j,s},x). \qquad (4.2)$$
Such relations can be written in a more compact vectorial form by defining the following vectors
$$\Phi_j(x) := \big(\Phi_{j,1}(x),\dots,\Phi_{j,n_j}(x)\big)^T, \qquad \Psi_j(x) := \big(\Psi_{j,1}(x),\dots,\Psi_{j,n_{j+1}-n_j}(x)\big)^T,$$
$$\Phi_{j+1}'(x) := \big(\Phi_{j+1}(x_{j,1},x),\dots,\Phi_{j+1}(x_{j,n_j},x)\big)^T, \qquad \Phi_{j+1}''(x) := \big(\Phi_{j+1}(y_{j,1},x),\dots,\Phi_{j+1}(y_{j,n_{j+1}-n_j},x)\big)^T,$$
and the matrices $A_j$ and $B_j$, whose elements are defined respectively by
$$(A_j)_{h,k} := \Phi_{j,h}(y_{j,k}), \qquad h = 1,\dots,n_j, \quad k = 1,\dots,n_{j+1}-n_j,$$
$$(B_j)_{h,k} := \Psi_{j,h}(x_{j,k}), \qquad h = 1,\dots,n_{j+1}-n_j, \quad k = 1,\dots,n_j.$$
With such notations, (4.1) and (4.2) can be rewritten as
$$\begin{pmatrix} \Phi_j \\ \Psi_j \end{pmatrix} = \begin{pmatrix} I & A_j \\ B_j & I \end{pmatrix} \begin{pmatrix} \Phi_{j+1}' \\ \Phi_{j+1}'' \end{pmatrix}, \qquad (4.3)$$
where here and in the sequel $I$ denotes the identity matrix of appropriate dimensions. The next theorem gives the inverse relations of (4.3), which express the scaling functions at level $j+1$ in terms of the scaling and wavelet functions at level $j$.

Theorem 4.1 For every resolution level $j \in \mathbb{N}$, we have
$$\begin{pmatrix} \Phi_{j+1}' \\ \Phi_{j+1}'' \end{pmatrix} = \begin{pmatrix} G_j^{-1} & -G_j^{-1}A_j \\ -B_jG_j^{-1} & I + B_jG_j^{-1}A_j \end{pmatrix} \begin{pmatrix} \Phi_j \\ \Psi_j \end{pmatrix}, \qquad (4.4)$$
where $G_j^{-1}$ is the inverse of the matrix $G_j$ defined by
$$(G_j)_{h,k} := \frac{1}{\lambda_{j+1}(x_{j,h})}\,\langle \Phi_{j,h}, \Phi_{j,k}\rangle, \qquad h,k = 1,\dots,n_j. \qquad (4.5)$$

Proof. The following property,
$$G_j = I - A_jB_j, \qquad (4.6)$$
is fundamental for proving the theorem, since from (4.6) it is easy to check that the matrix in (4.4) is the inverse of the one in (4.3), i.e.
$$\begin{pmatrix} G_j^{-1} & -G_j^{-1}A_j \\ -B_jG_j^{-1} & I + B_jG_j^{-1}A_j \end{pmatrix} \begin{pmatrix} I & A_j \\ B_j & I \end{pmatrix} = I,$$
as well as
$$\begin{pmatrix} I & A_j \\ B_j & I \end{pmatrix} \begin{pmatrix} G_j^{-1} & -G_j^{-1}A_j \\ -B_jG_j^{-1} & I + B_jG_j^{-1}A_j \end{pmatrix} = I.$$
Thus it remains only to prove (4.6). To this aim we observe that by (4.1) and (3.6) we get
$$\langle \Phi_{j,h}, \Phi_{j,k}\rangle = \langle \Phi_{j+1}(x_{j,h},\cdot), \Phi_{j,k}\rangle + \sum_{l=1}^{n_{j+1}-n_j} \Phi_{j,h}(y_{j,l})\,\langle \Phi_{j+1}(y_{j,l},\cdot), \Phi_{j,k}\rangle$$
$$= \lambda_{j+1}(x_{j,h})\,\Phi_{j,k}(x_{j,h}) + \sum_{l=1}^{n_{j+1}-n_j} \Phi_{j,h}(y_{j,l})\,\big[\lambda_{j+1}(y_{j,l})\,\Phi_{j,k}(y_{j,l})\big].$$
Hence, using (2.11) and (3.21), we obtain
$$\langle \Phi_{j,h}, \Phi_{j,k}\rangle = \lambda_{j+1}(x_{j,k})\left[\delta_{h,k} - \sum_{l=1}^{n_{j+1}-n_j} \Phi_{j,h}(y_{j,l})\,\Psi_{j,l}(x_{j,k})\right],$$
i.e. (4.6) holds. ∎
From the two-scale relations (4.3) and (4.4) we can easily deduce the decomposition and reconstruction formulas which connect the basis coefficients of a function in $V_{j+1}$ with the basis coefficients of its components in the spaces $V_j$ and $W_j$. More precisely, given an arbitrary function $f_{j+1} \in V_{j+1}$, by virtue of (3.17) we can uniquely decompose
$$f_{j+1} = f_j + g_j, \qquad \text{where } f_j \in V_j \text{ and } g_j \in W_j.$$
Now, if we write
$$f_j = \sum_{k=1}^{n_j} a_{j,k}\,\Phi_{j,k}, \qquad g_j = \sum_{k=1}^{n_{j+1}-n_j} b_{j,k}\,\Psi_{j,k},$$
$$f_{j+1} = \sum_{k=1}^{n_{j+1}} a_{j+1,k}\,\Phi_{j+1,k} = \sum_{k=1}^{n_j} a_{j+1,k}'\,\Phi_{j+1,k}' + \sum_{k=1}^{n_{j+1}-n_j} a_{j+1,k}''\,\Phi_{j+1,k}'',$$
then, by virtue of (4.3) and (4.4), the basis coefficients
$$a_j := (a_{j,1},\dots,a_{j,n_j}), \qquad b_j := (b_{j,1},\dots,b_{j,n_{j+1}-n_j}),$$
$$a_{j+1}' := (a_{j+1,1}',\dots,a_{j+1,n_j}'), \qquad a_{j+1}'' := (a_{j+1,1}'',\dots,a_{j+1,n_{j+1}-n_j}''),$$
are connected by the following formulas.

Decomposition formula:
$$(a_j,\, b_j) = (a_{j+1}',\, a_{j+1}'') \begin{pmatrix} G_j^{-1} & -G_j^{-1}A_j \\ -B_jG_j^{-1} & I + B_jG_j^{-1}A_j \end{pmatrix}. \qquad (4.7)$$

Reconstruction formula:
$$(a_{j+1}',\, a_{j+1}'') = (a_j,\, b_j) \begin{pmatrix} I & A_j \\ B_j & I \end{pmatrix}. \qquad (4.8)$$
Thus the decomposition and reconstruction of a given function reduce to matrix-vector products, where the matrices do not depend on the given function and so may be computed beforehand. Nevertheless, in the present formulation the computational cost is very high, since at each resolution level $j$ we need to invert the matrix $G_j$ and to calculate some matrix-matrix products. The problem of the inversion of $G_j$ is solved by the following theorem, which gives the explicit expression of the elements of $G_j^{-1}$.

Theorem 4.2 The elements of the inverse matrix of $G_j$ are given by
$$(G_j^{-1})_{r,s} := \lambda_{j+1}(x_{j,r}) \sum_{h=0}^{n_j-1} \frac{1}{\mu_{j,h}}\, p_h(w,x_{j,r})\, p_h(w,x_{j,s}), \qquad (4.9)$$
where $r,s = 1,\dots,n_j$ and
$$\mu_{j,h} := \begin{cases} 1 & \text{if } 0 \le h \le n_j - m_j,\\[4pt] \dfrac{m_j^2 + (n_j-h)^2}{2m_j^2} & \text{if } n_j - m_j < h < n_j. \end{cases} \qquad (4.10)$$
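Before turning to the proof, the claimed explicit inverse (4.9) can be checked numerically against $G_j = I - A_jB_j$ from (4.6). The sketch below uses the first-kind weight with the same illustrative level parameters as before ($n_j = 6$, $m_j = 2$, $n_{j+1} = 18$, $m_{j+1} = 6$); by (3.21), for this weight $B_j = -A_j^T$, since $\lambda_{j+1}$ is constant.

```python
import numpy as np

def p(k, x):
    # orthonormal polynomials for w(x) = 1/sqrt(1-x^2)
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(k * t) * (np.sqrt(2 / np.pi) if k else 1 / np.sqrt(np.pi))

nj, mj, nJ, mJ = 6, 2, 18, 6        # assumed level parameters, n_{j+1} = 3 n_j
xj = np.cos((2 * np.arange(1, nj + 1) - 1) * np.pi / (2 * nj))
xJ = np.cos((2 * np.arange(1, nJ + 1) - 1) * np.pi / (2 * nJ))
yj = np.array([x for x in xJ if not np.any(np.isclose(x, xj))])

def phi(h, x):
    # coarse scaling function (2.10); lambda_j = pi/n_j for this weight
    c = lambda k: 1.0 if k <= nj - mj else (nj + mj - k) / (2 * mj)
    return (np.pi / nj) * sum(c(k) * p(k, xj[h]) * p(k, x) for k in range(nj + mj))

A = np.array([[phi(h, yk) for yk in yj] for h in range(nj)])  # (A_j)_{h,k}
B = -A.T                        # (B_j) via (3.21); lambda_{j+1} constant here
G = np.eye(nj) - A @ B          # G_j = I - A_j B_j, eq. (4.6)

mu = np.array([1.0 if h <= nj - mj else (mj**2 + (nj - h)**2) / (2 * mj**2)
               for h in range(nj)])                           # (4.10)
Pm = np.array([[p(h, x) for x in xj] for h in range(nj)])     # p_h(w, x_{j,s})
Ginv = (np.pi / nJ) * Pm.T @ np.diag(1 / mu) @ Pm             # explicit inverse (4.9)

assert np.allclose(G @ Ginv, np.eye(nj), atol=1e-9)
```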
Proof. From (4.5) and (2.18) we get
$$(G_j)_{r,s} = \frac{\lambda_j(x_{j,r})\,\lambda_j(x_{j,s})}{\lambda_{j+1}(x_{j,s})} \sum_{h=0}^{n_j-1} \mu_{j,h}\, p_h(w,x_{j,r})\, p_h(w,x_{j,s}). \qquad (4.11)$$
Then $G_j$ has the following decomposition:
$$G_j = \Delta_j\, C_j^T\, M_j\, C_j\, D_j, \qquad (4.12)$$
where $C_j$ is the square matrix
$$(C_j)_{r,s} := \sqrt{\lambda_j(x_{j,s})}\; p_{r-1}(w,x_{j,s}), \qquad r,s = 1,\dots,n_j, \qquad (4.13)$$
and $D_j$, $M_j$, $\Delta_j$ are the following diagonal matrices:
$$D_j := \operatorname{diag}\!\left(\frac{\sqrt{\lambda_j(x_{j,r})}}{\lambda_{j+1}(x_{j,r})}\right)_{r=1,\dots,n_j}, \qquad (4.14)$$
$$\Delta_j := \operatorname{diag}\!\left(\sqrt{\lambda_j(x_{j,s})}\right)_{s=1,\dots,n_j}, \qquad (4.15)$$
$$M_j := \operatorname{diag}(\mu_{j,h-1})_{h=1,\dots,n_j}. \qquad (4.16)$$
Now we observe that $C_j$ is an orthogonal matrix. In fact we have
$$(C_j^T C_j)_{r,s} = \sqrt{\lambda_j(x_{j,r})}\,\sqrt{\lambda_j(x_{j,s})}\; K_{n_j}(w, x_{j,r}, x_{j,s}) = \delta_{r,s},$$
and
$$(C_j C_j^T)_{r,s} = \sum_{h=1}^{n_j} \lambda_j(x_{j,h})\, p_{r-1}(w,x_{j,h})\, p_{s-1}(w,x_{j,h}) = \langle p_{r-1}, p_{s-1}\rangle = \delta_{r,s}.$$
Thus, by (4.12) we get
$$G_j^{-1} = D_j^{-1}\, C_j^T\, M_j^{-1}\, C_j\, \Delta_j^{-1}, \qquad (4.17)$$
which gives the thesis. ∎

By virtue of the previous theorem we can rewrite the decomposition and reconstruction formulas (4.7) and (4.8) in the following explicit form.

Decomposition formulas. For $k = 1,\dots,n_j$:
$$a_{j,k} = \sum_{s=1}^{n_j} a_{j+1,s}'\,\lambda_{j+1}(x_{j,s})\left[\sum_{r=0}^{n_j-1} \frac{1}{\mu_{j,r}}\, p_r(w,x_{j,k})\, p_r(w,x_{j,s})\right] + \sum_{s=1}^{n_{j+1}-n_j} a_{j+1,s}''\,\lambda_{j+1}(y_{j,s})\left[\sum_{r=0}^{n_j-1} \frac{1}{\mu_{j,r}}\, p_r(w,x_{j,k})\, q_r(w,y_{j,s})\right]. \qquad (4.18)$$
For $k = 1,\dots,n_{j+1}-n_j$:
$$b_{j,k} = a_{j+1,k}'' - \sum_{s=1}^{n_j} a_{j+1,s}'\,\lambda_{j+1}(x_{j,s})\left[\sum_{r=0}^{n_j-1} \frac{1}{\mu_{j,r}}\, q_r(w,y_{j,k})\, p_r(w,x_{j,s})\right] - \sum_{s=1}^{n_{j+1}-n_j} a_{j+1,s}''\,\lambda_{j+1}(y_{j,s})\left[\sum_{r=0}^{n_j-1} \frac{1}{\mu_{j,r}}\, q_r(w,y_{j,k})\, q_r(w,y_{j,s})\right]. \qquad (4.19)$$

Reconstruction formulas. For $k = 1,\dots,n_j$:
$$a_{j+1,k}' = a_{j,k} - \frac{\lambda_j(x_{j,k})}{\lambda_{j+1}(x_{j,k})} \sum_{s=1}^{n_{j+1}-n_j} b_{j,s}\,\lambda_{j+1}(y_{j,s})\left[\sum_{r=0}^{n_j-1} p_r(w,x_{j,k})\, q_r(w,y_{j,s})\right]. \qquad (4.20)$$
For $k = 1,\dots,n_{j+1}-n_j$:
$$a_{j+1,k}'' = b_{j,k} + \sum_{s=1}^{n_j} a_{j,s}\,\lambda_j(x_{j,s})\left[\sum_{r=0}^{n_j-1} q_r(w,y_{j,k})\, p_r(w,x_{j,s})\right]. \qquad (4.21)$$
Proof of (4.18)–(4.21). Recalling (2.16) and (3.21), formulas (4.20) and (4.21) follow from (4.8). Moreover, taking into account that
$$\sum_{k=1}^{n_j} \lambda_j(x_{j,k})\, p_r(w,x_{j,k})\, p_s(w,x_{j,k}) = \langle p_r(w), p_s(w)\rangle = \delta_{r,s},$$
it is easy to check that
$$(G_j^{-1}A_j)_{r,s} = \lambda_{j+1}(x_{j,r}) \sum_{k=0}^{n_j-1} \frac{1}{\mu_{j,k}}\, p_k(w,x_{j,r})\, q_k(w,y_{j,s}),$$
$$(B_jG_j^{-1})_{r,s} = -\lambda_{j+1}(y_{j,r}) \sum_{k=0}^{n_j-1} \frac{1}{\mu_{j,k}}\, p_k(w,x_{j,s})\, q_k(w,y_{j,r}),$$
$$(B_jG_j^{-1}A_j)_{r,s} = -\lambda_{j+1}(y_{j,r}) \sum_{k=0}^{n_j-1} \frac{1}{\mu_{j,k}}\, q_k(w,y_{j,r})\, q_k(w,y_{j,s}),$$
and consequently the decomposition formulas (4.18) and (4.19) follow from (4.7). ∎

Recalling the trigonometric form of the Chebyshev polynomials, the previous decomposition and reconstruction formulas (4.18)–(4.21) can be computed efficiently using fast cosine and sine transforms. More precisely, by changing the order of the summations in (4.18)–(4.21), we obtain the following algorithms for the efficient evaluation of the decomposition and reconstruction formulas.
Decomposition Algorithm:

Step 1. Compute $\alpha_r = \sum_{s=1}^{n_j} \lambda_{j+1}(x_{j,s})\, a_{j+1,s}'\, p_r(w,x_{j,s})$, for $r = 0,\dots,n_j-1$.

Step 2. Compute $\beta_r = \sum_{s=1}^{n_{j+1}-n_j} \lambda_{j+1}(y_{j,s})\, a_{j+1,s}''\, q_r(w,y_{j,s})$, for $r = 0,\dots,n_j-1$.

Step 3. Compute $a_{j,k} = \sum_{r=0}^{n_j-1} \dfrac{\alpha_r+\beta_r}{\mu_{j,r}}\, p_r(w,x_{j,k})$, for $k = 1,\dots,n_j$.

Step 4. Compute $b_{j,k} = a_{j+1,k}'' - \sum_{r=0}^{n_j-1} \dfrac{\alpha_r+\beta_r}{\mu_{j,r}}\, q_r(w,y_{j,k})$, for $k = 1,\dots,n_{j+1}-n_j$.

Reconstruction Algorithm:

Step 1. Compute $\alpha_r = \sum_{s=1}^{n_j} \lambda_j(x_{j,s})\, a_{j,s}\, p_r(w,x_{j,s})$, for $r = 0,\dots,n_j-1$.

Step 2. Compute $\beta_r = \sum_{s=1}^{n_{j+1}-n_j} \lambda_{j+1}(y_{j,s})\, b_{j,s}\, q_r(w,y_{j,s})$, for $r = 0,\dots,n_j-1$.

Step 3. Compute $a_{j+1,k}' = a_{j,k} - \dfrac{\lambda_j(x_{j,k})}{\lambda_{j+1}(x_{j,k})} \sum_{r=0}^{n_j-1} \beta_r\, p_r(w,x_{j,k})$, for $k = 1,\dots,n_j$.

Step 4. Compute $a_{j+1,k}'' = b_{j,k} + \sum_{r=0}^{n_j-1} \alpha_r\, q_r(w,y_{j,k})$, for $k = 1,\dots,n_{j+1}-n_j$.
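The four decomposition steps and the four reconstruction steps are exact inverses of each other, so a decomposition followed by a reconstruction must return the input coefficients. A sketch of the round trip for the first-kind weight, with the illustrative level parameters used above; in the code, decomposition Step 4 includes the term $a_{j+1,k}''$ required by (4.19).

```python
import numpy as np

def p(k, x):
    # orthonormal polynomials for w(x) = 1/sqrt(1-x^2)
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(k * t) * (np.sqrt(2 / np.pi) if k else 1 / np.sqrt(np.pi))

nj, mj, nJ, mJ = 6, 2, 18, 6            # assumed level parameters, n_{j+1} = 3 n_j
lam_j, lam_J = np.pi / nj, np.pi / nJ   # Christoffel numbers (constant for this weight)
xj = np.cos((2 * np.arange(1, nj + 1) - 1) * np.pi / (2 * nj))
xJ = np.cos((2 * np.arange(1, nJ + 1) - 1) * np.pi / (2 * nJ))
yj = np.array([x for x in xJ if not np.any(np.isclose(x, xj))])

def q(k, x):
    # basis polynomials (2.14) at level j
    if k <= nj - mj:
        return p(k, x)
    return ((mj + nj - k) * p(k, x) - (mj - nj + k) * p(2 * nj - k, x)) / (2 * mj)

mu = np.array([1.0 if h <= nj - mj else (mj**2 + (nj - h)**2) / (2 * mj**2)
               for h in range(nj)])                           # (4.10)
Pm = np.array([[p(r, x) for x in xj] for r in range(nj)])     # p_r(w, x_{j,s})
Qm = np.array([[q(r, y) for y in yj] for r in range(nj)])     # q_r(w, y_{j,s})

rng = np.random.default_rng(0)
a1 = rng.standard_normal(nj)            # a'_{j+1}
a2 = rng.standard_normal(nJ - nj)       # a''_{j+1}

# decomposition, Steps 1-4
alpha = lam_J * (Pm @ a1)
beta = lam_J * (Qm @ a2)
aj = Pm.T @ ((alpha + beta) / mu)
bj = a2 - Qm.T @ ((alpha + beta) / mu)

# reconstruction, Steps 1-4
alpha2 = lam_j * (Pm @ aj)
beta2 = lam_J * (Qm @ bj)
a1_rec = aj - (lam_j / lam_J) * (Pm.T @ beta2)
a2_rec = bj + Qm.T @ alpha2

assert np.allclose(a1_rec, a1, atol=1e-9) and np.allclose(a2_rec, a2, atol=1e-9)
```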
The reader can note that each of the previous steps requires one of the following polynomial transforms:
$$\text{Type 1:}\quad \sum_{s=1}^{n_j} c_s\, p_r(w,x_{j,s}), \qquad r = 0,\dots,n_j-1,$$
$$\text{Type 2:}\quad \sum_{s=1}^{n_{j+1}-n_j} c_s\, q_r(w,y_{j,s}), \qquad r = 0,\dots,n_j-1,$$
$$\text{Type 3:}\quad \sum_{r=0}^{n_j-1} c_r\, p_r(w,x_{j,k}), \qquad k = 1,\dots,n_j,$$
$$\text{Type 4:}\quad \sum_{r=0}^{n_j-1} c_r\, q_r(w,y_{j,k}), \qquad k = 1,\dots,n_{j+1}-n_j.$$
Thus, recalling the trigonometric form of the Chebyshev polynomials and (2.14), the computational cost of these polynomial transforms can be reduced to $O(n_j \log n_j)$ using fast cosine and sine transforms (see e.g. [16]).
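For instance, for the first-kind weight the Type 3 transform is, up to normalization, a DCT-III: with $x_{j,k} = \cos((2k-1)\pi/(2n_j))$ one has $\sum_r c_r\,p_r(w,x_{j,k}) = \sqrt{n_j/\pi}\;\widehat{c}_k$, where $\widehat{c}$ is the orthonormal DCT-III of $c$. A sketch of this identity, using SciPy's `scipy.fft.dct` as the (assumed available) fast transform:

```python
import numpy as np
from scipy.fft import dct

def p(r, x):
    # orthonormal polynomials for w(x) = 1/sqrt(1-x^2)
    t = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(r * t) * (np.sqrt(2 / np.pi) if r else 1 / np.sqrt(np.pi))

n = 8
xk = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # zeros of p_n, (3.8)
c = np.random.default_rng(1).standard_normal(n)

# Type 3 transform, direct O(n^2) evaluation
direct = np.array([sum(c[r] * p(r, x) for r in range(n)) for x in xk])

# the same sums via an orthonormal DCT-III, O(n log n)
fast = np.sqrt(n / np.pi) * dct(c, type=3, norm='ortho')

assert np.allclose(direct, fast, atol=1e-10)
```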
In conclusion, for the convenience of the reader, the next table synthesizes the well-known trigonometric representation of the Chebyshev polynomials and the zeros and the Christoffel numbers corresponding to each Chebyshev weight (throughout, $x = \cos t$, and $h$ ranges over the positive integers).

$w(x) = \dfrac{1}{\sqrt{1-x^2}}$:
$n_j = 2\cdot 3^j$;
$x_{j,k} = \cos\dfrac{(2k-1)\pi}{4\cdot 3^j}$, $k = 1,\dots,n_j$;
$(y_{j,s})_s = \cos\dfrac{(2k-1)\pi}{4\cdot 3^{j+1}}$, $k = 1,\dots,3n_j$, $k+1 \ne 3h$;
$\lambda_j(x_{j,k}) = \dfrac{\pi}{2\cdot 3^j}$;
$p_n(w,x) = \sqrt{\dfrac{2}{\pi}}\cos nt$ ($= \dfrac{1}{\sqrt{\pi}}$ if $n = 0$).

$w(x) = \sqrt{1-x^2}$:
$n_j = 2^{j+2}-1$;
$x_{j,k} = \cos\dfrac{k\pi}{2^{j+2}}$, $k = 1,\dots,n_j$;
$(y_{j,s})_s = \cos\dfrac{k\pi}{2^{j+3}}$, $k = 1,\dots,2n_j+1$, $k \ne 2h$;
$\lambda_j(x_{j,k}) = \dfrac{\pi(1-x_{j,k}^2)}{2^{j+2}}$;
$p_n(w,x) = \sqrt{\dfrac{2}{\pi}}\,\dfrac{\sin(n+1)t}{\sin t}$.

$w(x) = \sqrt{\dfrac{1-x}{1+x}}$:
$n_j = \dfrac{3^{j+1}-1}{2}$;
$x_{j,k} = \cos\dfrac{2k\pi}{3^{j+1}}$, $k = 1,\dots,n_j$;
$(y_{j,s})_s = \cos\dfrac{2k\pi}{3^{j+2}}$, $k = 1,\dots,3n_j+1$, $k \ne 3h$;
$\lambda_j(x_{j,k}) = \dfrac{2\pi(1-x_{j,k})}{3^{j+1}}$;
$p_n(w,x) = \dfrac{\sin\frac{(2n+1)t}{2}}{\sqrt{\pi}\,\sin\frac{t}{2}}$.

$w(x) = \sqrt{\dfrac{1+x}{1-x}}$:
$n_j = \dfrac{3^{j+1}-1}{2}$;
$x_{j,k} = \cos\dfrac{(2k-1)\pi}{3^{j+1}}$, $k = 1,\dots,n_j$;
$(y_{j,s})_s = \cos\dfrac{(2k-1)\pi}{3^{j+2}}$, $k = 1,\dots,3n_j+1$, $k+1 \ne 3h$;
$\lambda_j(x_{j,k}) = \dfrac{2\pi(1+x_{j,k})}{3^{j+1}}$;
$p_n(w,x) = \dfrac{\cos\frac{(2n+1)t}{2}}{\sqrt{\pi}\,\cos\frac{t}{2}}$.
References

[1] D. Berthold, W. Hoppe, B. Silbermann, The numerical solution of the generalized airfoil equation, J. Integr. Eq. Appl., 4 (1992), 309–336.
[2] M.R. Capobianco, W. Themistoclakis, On the boundedness of de la Vallée Poussin operators, East J. on Approx., 7, No. 4 (2001), 417–444.
[3] C.K. Chui, E. Quak, Wavelets on a bounded interval, in Numerical Methods of Approximation Theory, eds. D. Braess and L.L. Schumaker (Birkhäuser, Basel, 1992), pp. 53–75.
[4] A. Cohen, I. Daubechies, P. Vial, Wavelets on the interval and fast wavelet transforms, Appl. Comp. Harmonic Anal., 1 (1993), 54–81.
[5] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, 1992.
[6] B. Fischer, J. Prestin, Wavelets based on orthogonal polynomials, Math. Comp., 66, No. 220 (1997), 1593–1618.
[7] B. Fischer, W. Themistoclakis, Orthogonal polynomial wavelets, to appear in Numer. Algorithms.
[8] T. Kilgore, J. Prestin, Polynomial wavelets on the interval, Constr. Approx., 12 (1996), 95–110.
[9] T. Kilgore, J. Prestin, K. Selig, Polynomial wavelets and wavelet packet bases, Studia Sci. Math. Hungar., 33 (1997), 419–431.
[10] Y. Meyer, Ondelettes sur l'intervalle, Rev. Mat. Iberoamericana, 7 (1992), 115–133.
[11] C.A. Micchelli, Y. Xu, Using the matrix refinement equation for the construction of wavelets on invariant sets, Appl. Comp. Harmonic Anal., 1 (1994), 391–401.
[12] G. Plonka, K. Selig, M. Tasche, On the construction of wavelets on the interval, Adv. Comp. Math., 4 (1995), 357–388.
[13] J. Prestin, K. Selig, Interpolating and orthonormal trigonometric wavelets, in Signal and Image Representation in Combined Spaces, Academic Press, San Diego, 1995.
[14] S. Mallat, Multiresolution approximations and wavelet orthonormal bases of L²(ℝ), Trans. Amer. Math. Soc., 315 (1989), 69–87.
[15] P. Nevai, Mean convergence of Lagrange interpolation III, Trans. Amer. Math. Soc., 282 (1984), 669–698.
[16] G. Steidl, Fast radix-p discrete cosine transform, AAECC, 3 (1992), 39–46.
[17] G. Szegő, Orthogonal Polynomials, Revised ed., AMS Colloquium Publications XXIII, Amer. Math. Soc., New York, 1959.
[18] M. Tasche, Polynomial wavelets on [−1, 1], in Approximation Theory, Wavelets and Applications (Maratea, 1994), ed. S.P. Singh (Kluwer Academic, Dordrecht, 1995), pp. 49–70.
[19] W. Themistoclakis, Trigonometric wavelet interpolation in Besov spaces, Facta Univ. (Niš) Ser. Math. Inform., 14 (1999), 49–70.
[20] W. Themistoclakis, Some interpolating operators of de la Vallée Poussin type, Acta Math. Hungar., 84 (3) (1999), 221–235.