Adaptive Wavelet Collocation Methods for Initial Boundary Value Problems of Nonlinear PDE’s
Wei Cai and Jianzhong Wang Department of Mathematics University of North Carolina at Charlotte Charlotte, NC 28223
ABSTRACT We have designed a cubic spline wavelet-like decomposition for the Sobolev space H02 (I) where I is a bounded interval. Based on a special “Point Value Vanishing” property of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT will map discrete samples of a function to its wavelet expansion coefficients in at most 7N log N operations. Using this transform, we propose a collocation method for the initial boundary value problem of nonlinear PDE’s. Then, we test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE’s.
Key Words. Wavelet Approximations, Multi-Resolution Analysis, Fast Discrete Wavelet Transform, Collocation Methods, IBV problems of PDE’s.

AMS(MOS) subject classification. 63N30, 65N13, 65F10; 41A15, 42C05, CR: G1.8.

¹ The first author has been supported by NSF grant ASC-9113895, a supercomputing grant from the North Carolina Supercomputer Center, and a faculty research grant from UNC Charlotte, and partially supported by the National Aeronautics and Space Administration under NASA contract NAS1-19480 while the author was in residence at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA 23681-0001.
1 Introduction
Wavelet approximations have attracted much attention as a potentially efficient numerical technique for solving partial differential equations [1] - [6]. Because of their advantageous localization properties in both the space and frequency domains [7] - [9], wavelets seem to be a great candidate for adaptive and multi-resolution schemes for solutions which vary dramatically in both space and time and develop singularities. However, in order to take advantage of the nice properties of wavelet approximations, we have to find an efficient way to deal with the nonlinearity and general boundary conditions in the PDE’s. After all, most of the problems of fluid dynamics and electromagnetics, which have solutions with quite different scales, are governed by nonlinear PDE’s with complicated boundary conditions. Therefore, it is our objective here to address these issues when designing wavelet numerical schemes for nonlinear PDE’s.

Most of the wavelet approximation schemes for PDE’s so far have been based on the wavelet decomposition of L^2(R) with Daubechies’ orthonormal wavelets on the whole real line R ([1]-[6]). However, in solving an initial boundary value problem, how to treat the boundary conditions is an important aspect of any numerical scheme. In [1] an embedded domain approach is used so that the boundary condition gets absorbed into the PDE’s via a penalty term. In [6], by using the primitive function of Daubechies’ wavelet and its dilations and translations, a Riesz basis is constructed for the Sobolev space H_0^1(I) defined on a bounded interval I. Another common way to achieve wavelet approximation on a bounded interval is to keep all Daubechies’ wavelets (or compactly supported spline wavelets) whose supports are totally inside the interval intact, while modifying those wavelets intersecting the boundary by an orthonormalization (semi-orthogonalization) procedure; see [10], [11] and [12].
However, we believe that a more natural approach for approximating PDE solutions in a Sobolev space H_0^2(I) is to construct directly a multi-resolution analysis (MRA) (V_0 ⊂ V_1 ⊂ V_2 ⊂ · · ·) for H_0^2(I), where V_j is generated by some scaling functions through dilations and translations. Using such an MRA, we can decompose H_0^2(I) into the form

H_0^2(I) = V_0 ⊕ (⊕_{j=0}^∞ W_j),

where ⊕ stands for orthogonal direct sum and W_j denotes the orthogonal complement of V_j in the space V_{j+1}: V_{j+1} = V_j ⊕ W_j. Then we show that the bases of all the subspaces W_j can be essentially generated from one function (the “mother wavelet”) by dilations and translations, except for the two boundary functions in each W_j, which are generated from another function located at the boundary (the “mother boundary wavelet”) by dilations and reflections. Because the inner product considered here is that of the space H_0^2(I), not of L^2(I), these two mother wavelet functions no longer have vanishing moments of the first two orders as usual wavelets in L^2 do (see (3.17)). For simplicity, we will still use the term ‘wavelet’ throughout this paper, with the understanding that it differs from the usual wavelet by its non-vanishing moment.

It is worthwhile to point out that, despite the fact that these wavelet-like functions have no vanishing moments, the projection f_j of any function f ∈ H_0^2(I) on V_j still provides a “blurred” version of f, while the projection on W_j keeps its local details. Hence the magnitude of the coefficients in the wavelet expansion of a function in H_0^2(I) does reflect the local scales and changes of the function to be approximated. It is these features that are needed for achieving adaptivity and multi-resolution approximation in practical computations. Here we would like to mention Harten’s recent work [13]: his approach of designing a multi-resolution representation of numerical data without directly using the wavelet idea has been successfully applied to reducing the computational cost of numerical fluxes in Godunov schemes for shock wave computations.

To design a wavelet collocation method for nonlinear time evolution problems, the key point is to construct a “Discrete Wavelet Transform” (DWT) which maps between function values and the wavelet coefficient space in such a way that the resulting wavelet expansion interpolates the function values. In this paper, we will use a cubic spline wavelet basis of H_0^2(I) [14] for the construction of the DWT.
A special “Point Value Vanishing” property (see (3.7)) of this wavelet basis results in O(N log N) operations for the DWT, where N is the total number of unknowns. Therefore, the nonlinear term in the PDE can be easily treated in the physical space and the derivatives of those nonlinear terms computed in the wavelet space. As a result, collocation methods will provide the flexibility of handling nonlinearity and various boundary conditions. In [15], a different method based on scale separation was suggested to compute the wavelet approximation of f(u) given a wavelet expansion with Daubechies’ wavelets for u.

The rest of this paper is divided into the following six sections. In Section 2, we introduce the cubic scaling functions φ(x), φ_b(x) and their wavelet functions ψ(x), ψ_b(x). A multi-resolution analysis (MRA) and its corresponding wavelet decomposition of the Sobolev space H_0^2(I) are constructed using φ(x), φ_b(x) and ψ(x), ψ_b(x). Then, we show how to construct a wavelet approximation for functions in the Sobolev space H^2(I). In Section 3, we introduce the fast discrete wavelet transform (DWT) between functions and their wavelet coefficients. In Section 4, we discuss the derivative matrix D for approximating differential operators. In Section 5, we present the wavelet collocation methods for nonlinear time evolution PDE’s. In Section 6, we give the CPU time performance of the DWT transforms and the numerical results of the wavelet collocation methods for linear and nonlinear PDE’s. A conclusion is given in Section 7.
2 Scaling functions φ(x), φ_b(x) and wavelet functions ψ(x), ψ_b(x)
Let I denote a finite interval, say I = [0, L], where L is a positive integer (for the sake of simplicity, we assume that L > 4), and let H^2(I) and H_0^2(I) denote the following two Sobolev spaces:

H^2(I) = {f(x), x ∈ I | ||f^{(i)}||_2 < ∞, i = 0, 1, 2},   (2.1)

H_0^2(I) = {f(x) ∈ H^2(I) | f(0) = f′(0) = f(L) = f′(L) = 0}.   (2.2)

It can be easily checked [16] that H_0^2(I) is a Hilbert space equipped with the inner product

⟨f, g⟩ = ∫_I f″(x) g″(x) dx;   (2.3)

thus,

|||f||| = √⟨f, f⟩   (2.4)

provides a norm for H_0^2(I). In order to generate an MRA for the Sobolev space H_0^2(I), we consider two scaling functions, an interior scaling function φ(x) and a boundary scaling function φ_b(x) (see Figure 1):

φ(x) = N_4(x) = (1/6) Σ_{j=0}^{4} C(4, j) (−1)^j (x − j)_+^3,   (2.5)

φ_b(x) = (3/2) x_+^2 − (11/12) x_+^3 + (3/2) (x − 1)_+^3 − (3/4) (x − 2)_+^3,   (2.6)

where N_4(x) is the 4th order B-spline [17], C(4, j) denotes the binomial coefficient, and for any real number n,

x_+^n = x^n if x ≥ 0, and 0 otherwise.
As a pair they satisfy the following two-scale relationships.

Lemma 1

φ(x) = 2^{−3} Σ_{k=0}^{4} C(4, k) φ(2x − k),
φ_b(x) = β_{−1} φ_b(2x) + Σ_{k=0}^{2} β_k φ(2x − k),   (2.7)

where

β_{−1} = 1/4,  β_0 = 11/16,  β_1 = 1/2,  β_2 = 1/8.
We summarize some properties of φ(x) and φ_b(x) in the following lemma.

Lemma 2 Let φ(x) and φ_b(x) be defined as in (2.5) and (2.6). Then we have

(1) supp(φ(x)) = [0, 4];   (2.8)
(2) supp(φ_b(x)) = [0, 3];   (2.9)
(3) φ(x), φ_b(x) ∈ H_0^2(I);   (2.10)
(4) φ′(1) = −φ′(3) = 1/2, φ′(2) = 0, φ_b′(1) = 1/4, φ_b′(2) = −1/2;   (2.11)
(5) φ(1) = φ(3) = 1/6, φ(2) = 2/3, φ_b(1) = 7/12, φ_b(2) = 1/6.   (2.12)
For any j, k ∈ Z, we define

φ_{j,k}(x) = φ(2^j x − k),  φ_{b,j}(x) = φ_b(2^j x),   (2.13)

and let V_j be the linear span of {φ_{j,k}(x), 0 ≤ k ≤ 2^j L − 4; φ_{b,j}(x), φ_{b,j}(L − x)}, namely

V_j = span{φ_{j,k}(x) | 0 ≤ k ≤ 2^j L − 4; φ_{b,j}(x), φ_{b,j}(L − x)}.   (2.14)
Theorem 1 Let V_j, j ∈ Z_+, be the linear span (2.14). Then {V_j} forms a multiresolution analysis (MRA) for H_0^2(I) equipped with the norm (2.4) in the following sense:

(i) V_0 ⊂ V_1 ⊂ V_2 ⊂ · · ·;
(ii) clos_{H_0^2}(∪_{j∈Z_+} V_j) = H_0^2(I);
(iii) ∩_{j∈Z_+} V_j = V_0; and
(iv) for each j, {φ_{j,k}(x), φ_{b,j}(x), φ_{b,j}(L − x)} is a basis of V_j.

Proof. The proof for (iii) and (iv) is straightforward and omitted here. The proof for (i) follows from (2.7) in Lemma 1. In order to prove (ii), we recall some familiar results on interpolating cubic splines for smooth functions [18] [19].

Lemma 3 Let π be the partition given by x_i = ih, 0 ≤ i ≤ n, h = (b − a)/n, and let s(x) be the cubic spline interpolating f(x) ∈ C^4[a, b] at all points in π, i.e.

s(x_i) = f(x_i),  0 ≤ i ≤ n,

and satisfying the following boundary conditions:

s′(a) = f′(a),  s′(b) = f′(b).   (2.15)

Then

1. s(x) exists uniquely and

||s^{(r)} − f^{(r)}||_∞ ≤ ε_r ||f^{(4)}||_∞ h^{4−r},  r = 0, 1, 2, 3,   (2.16)

where ε_0 = 5/384, ε_1 = 1/24, ε_2 = 3/8, ε_3 = 1.

2. If the average operator R is defined by

R(s″)(x_i) = (s″(x_{i−1}) + 10 s″(x_i) + s″(x_{i+1}))/12  for 1 ≤ i ≤ n − 1,   (2.17)

then

R(s″)(x_i) − f″(x_i) = O(h^4).   (2.18)
Proof of (ii) of Theorem 1. Let h = 1/2^j, a = 0 and b = L. Consider f(x) ∈ C_0^∞(0, L). Since C_0^∞(0, L) ⊂ C^4[0, L] ∩ H_0^2(0, L), by Lemma 3 there is a unique cubic spline s(x) corresponding to the partition π interpolating f(x). From the fact that f(0) = f(L) = f′(0) = f′(L) = 0, we have s(x) ∈ V_j, and then

s(x) = c_{−1} φ_{b,j}(x) + Σ_{k=0}^{n_j−4} c_k φ_{j,k}(x) + c_{n_j−3} φ_{b,j}(L − x)   (2.19)

such that

s(x_i) = f(x_i),  0 ≤ i ≤ n_j,   (2.20)

where n_j = 2^j L and x_i = i/2^j. Finally, from (2.16) in Lemma 3 with r = 2 we have

|||s − f||| = ||s^{(2)} − f^{(2)}||_{L^2} ≤ L ||s^{(2)} − f^{(2)}||_∞ ≤ ε_2 L 2^{−2j} ||f^{(4)}||_∞.

Therefore, as j → ∞, |||s − f||| → 0. This proves that C_0^∞(0, L) ⊂ clos_{H_0^2}(∪_{j∈Z_+} V_j). Thus, Theorem 1 (ii) follows from the fact that C_0^∞(0, L) is dense in H_0^2(0, L). □
To construct a wavelet decomposition of the Sobolev space H_0^2(I) under the inner product (2.3), we consider the following two wavelet functions ψ(x), ψ_b(x) (see Figure 2):

ψ(x) = −(3/7) φ(2x) + (12/7) φ(2x − 1) − (3/7) φ(2x − 2) ∈ V_1,   (2.21)
ψ_b(x) = (24/13) φ_b(2x) − (6/13) φ(2x) ∈ V_1.   (2.22)

It can be verified that ψ(x) and ψ_b(x) both belong to V_1 and

ψ(n) = ψ_b(n) = 0, for all n ∈ Z.   (2.23)

Property (2.23) will be important in the construction of a fast Discrete Wavelet Transform (DWT) later. Now we define

ψ_{j,k}(x) = ψ(2^j x − k),  j ≥ 0, k = 0, · · ·, n_j − 3,   (2.24)
ψ^l_{b,j}(x) = ψ_b(2^j x),  ψ^r_{b,j}(x) = ψ_b(2^j (L − x)),   (2.25)

where again n_j = 2^j L. For the sake of simplicity, we will adopt the following notations:

ψ_{j,−1}(x) = ψ^l_{b,j}(x),  ψ_{j,n_j−2}(x) = ψ^r_{b,j}(x).   (2.26)

So, for k = −1 and k = n_j − 2, the wavelet functions ψ_{j,k}(x) denote the two boundary wavelet functions, which cannot be obtained by translating and dilating ψ(x). Finally, for each j ≥ 0, we define

W_j = span{ψ_{j,k}(x) | k = −1, · · ·, n_j − 2}.   (2.27)
Theorem 2 The spaces W_j, j ≥ 0, defined in (2.27) are the orthogonal complements of V_j in V_{j+1} under the inner product (2.3), i.e.

(1) V_{j+1} = V_j ⊕ W_j for j ∈ Z_+, where ⊕ stands for V_j ⊥ W_j under the inner product (2.3) together with V_{j+1} = V_j + W_j. Therefore,
(2) W_j ⊥ W_{j+1}, j ∈ Z_+; and
(3) H_0^2(I) = V_0 ⊕ (⊕_{j∈Z_+} W_j).

Proof. (1) We only have to prove V_j ⊥ W_j for j = 0, namely, for 0 ≤ l ≤ L − 4, 0 ≤ k ≤ L − 3,

⟨φ(x − l), ψ(x − k)⟩ = 0,   (2.28)
⟨φ(x − l), ψ_b(x)⟩ = 0,   (2.29)
⟨φ_b(x), ψ(x − k)⟩ = 0,   (2.30)
⟨φ_b(x), ψ_b(x)⟩ = 0.   (2.31)
Integrating by parts twice in (2.28) and using the fact that ψ(x), φ(x) ∈ H_0^2(I), we have

⟨φ(x − l), ψ(x − k)⟩ = ∫_0^L φ″(x − l) ψ″(x − k) dx
  = φ″(x − l) ψ′(x − k)|_0^L − ∫_0^L φ^{(3)}(x − l) ψ′(x − k) dx
  = −∫_0^L φ^{(3)}(x − l) ψ′(x − k) dx
  = −φ^{(3)}(x − l) ψ(x − k)|_0^L + ∫_0^L φ^{(4)}(x − l) ψ(x − k) dx
  = ∫_0^L φ^{(4)}(x − l) ψ(x − k) dx.

Using equation (2.23) and the easily checked identity

φ^{(4)}(x) = Σ_{j=0}^{4} C(4, j) (−1)^j δ(x − j),

where δ(x) is the Dirac delta function, we have

⟨φ(x − l), ψ(x − k)⟩ = Σ_{j=0}^{4} C(4, j) (−1)^j ψ(j − (k − l)) = 0.

Equations (2.29) - (2.31) can be shown in a similar way. So (1) follows from (2.21) and (2.22) and the fact that dim V_j = 2^j L − 1 and dim W_j = 2^j L, so that dim V_{j+1} = 2^{j+1} L − 1 = (2^j L − 1) + 2^j L = dim V_j + dim W_j; (2) follows from (1); (3) follows directly from Theorem 1 (ii). □

As a consequence of Theorem 2, any function f(x) ∈ H_0^2(I) can be approximated as closely as needed by a function f_j(x) ∈ V_j = V_0 ⊕ W_0 ⊕ W_1 ⊕ · · · ⊕ W_{j−1} for a sufficiently large j, and f_j(x) has a unique orthogonal decomposition

f_j(x) = f_0 + g_0 + g_1 + · · · + g_{j−1},  where f_0 ∈ V_0, g_i ∈ W_i, 0 ≤ i ≤ j − 1.   (2.32)
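The reduction of ⟨φ(· − l), ψ(· − k)⟩ to point values of ψ can also be confirmed by direct quadrature. In the sketch below (ours), the second derivatives are piecewise linear, so Simpson's rule applied on half-integer subintervals integrates their product exactly:

```python
from math import comb

def tp1(x): return x if x > 0 else 0.0

def d2phi(x):      # phi'': piecewise linear, kinks at the integers
    return sum(comb(4, j) * (-1) ** j * tp1(x - j) for j in range(5))

def d2psi(x):      # psi'' from (2.21) and the chain rule (factor 2^2)
    return 4.0 * (-3 * d2phi(2 * x) + 12 * d2phi(2 * x - 1)
                  - 3 * d2phi(2 * x - 2)) / 7.0

def inner(l, k, L=16):
    # <phi(.-l), psi(.-k)> in H_0^2(0, L); Simpson on each half-integer
    # piece is exact because the integrand is piecewise quadratic there
    total, h, x = 0.0, 0.5, 0.0
    for _ in range(2 * L):
        a, m, b = x, x + h / 2, x + h
        total += h / 6 * (d2phi(a - l) * d2psi(a - k)
                          + 4 * d2phi(m - l) * d2psi(m - k)
                          + d2phi(b - l) * d2psi(b - k))
        x = b
    return total

worst = max(abs(inner(l, k)) for l in range(5) for k in range(5))
print("max |<phi(.-l), psi(.-k)>|:", worst)   # zero up to rounding
```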
Approximation for functions in H^2(I)

Consider the following two splines:

η_1(x) = (1 − x)_+^3,   (2.33)
η_2(x) = 2 x_+ − 3 x_+^2 + (7/6) x_+^3 − (4/3) (x − 1)_+^3 + (1/6) (x − 2)_+^3.   (2.34)

For any function f(x) ∈ H^2(I), by the Sobolev embedding theorem we have f(x) ∈ C^1(I). Therefore, we can define the following interpolating spline I_{b,j} f(x), j ≥ 0:

I_{b,j} f(x) = α_1 η_1(2^j x) + α_2 η_2(2^j x) + α_3 η_2(2^j (L − x)) + α_4 η_1(2^j (L − x)),   (2.35)

where the coefficients α_1, α_2, α_3, α_4 are determined by certain interpolating conditions stated below. Since the spline I_{b,j} f(x) is expected to approximate the nonhomogeneities of the function f(x) at the boundaries, these interpolating conditions will also be called end conditions. The following two kinds of end conditions are common in applications.

Derivative End Conditions

We can impose the following end-derivative conditions:

I_{b,j} f(0) = f(0),  I_{b,j} f(L) = f(L),   (2.36)
(I_{b,j} f)′(0) = f′(0),  (I_{b,j} f)′(L) = f′(L).   (2.37)

It can be easily verified that if we choose

α_1 = f(0),  α_2 = f′(0)/2^{j+1} + (3/2) f(0),
α_3 = −f′(L)/2^{j+1} + (3/2) f(L),  α_4 = f(L),   (2.38)

then I_{b,j} f satisfies conditions (2.36), (2.37).

In many situations, however, we do not know the values of the derivatives f′(0), f′(L). Then they have to be approximated by finite differences using only the values of f(x). To preserve the exact order of accuracy of the cubic spline approximation, we suggest using the following approximations:

f′(0) = (1/h) Σ_{k=0}^{p} c_k f(kh) + O(h^s),
f′(L) = −(1/h) Σ_{k=0}^{p} c_k f(L − kh) + O(h^s),   (2.39)

where h > 0 and p ≥ 3. For p = 3, if we take

c_0 = −11/6,  c_1 = 3,  c_2 = −3/2,  c_3 = 1/3,
then s = 3 in (2.39), and thus equation (2.37) is satisfied within an error of O(h^3). Correspondingly, the coefficients α_k, 1 ≤ k ≤ 4, for I_{b,j} f(x) become

α_1 = f(0),  α_2 = Σ_{k=0}^{p} c′_k f(kh),
α_3 = Σ_{k=0}^{p} c′_k f(L − kh),  α_4 = f(L),   (2.40)

where

c′_0 = c_0/(2^{j+1} h) + 3/2,  c′_k = c_k/(2^{j+1} h),  1 ≤ k ≤ p.
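The claimed order s = 3 of the one-sided differences (2.39) with the coefficients above is easy to observe numerically. A short sketch (ours), using f = e^x as an assumed test function:

```python
from math import exp, log

c = [-11 / 6, 3.0, -3 / 2, 1 / 3]          # the p = 3 stencil of (2.39)

def d_right(f, h):                         # one-sided estimate of f'(0)
    return sum(ck * f(k * h) for k, ck in enumerate(c)) / h

e1 = abs(d_right(exp, 0.10) - 1.0)         # exact value: exp'(0) = 1
e2 = abs(d_right(exp, 0.05) - 1.0)
order = log(e1 / e2) / log(2.0)
print("errors:", e1, e2, " observed order:", round(order, 2))  # order ~ 3
```

Halving h reduces the error by roughly a factor of 8, confirming s = 3.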
Now we have f(x) − I_{b,j} f(x) ∈ H_0^2(I), and the decomposition (2.32) can be applied. Finally, for any function f(x) ∈ H^2(I), we can find a function f_j(x) of the form

f_j(x) = I_{b,j} f + f_0 + g_0 + g_1 + · · · + g_{j−1},  f_0 ∈ V_0, g_i ∈ W_i, 0 ≤ i ≤ j − 1,   (2.41)

which approximates f(x) as closely as needed provided that j is large enough. Furthermore, by Lemma 3, the approximation order will be O(2^{−4j}) if f_j(x) is chosen as the interpolating spline of f(x).

Not-A-Knot Conditions

In many applications the solution varies dramatically near the boundary, so the approximation (2.39) to the end derivatives could result in large errors. In those cases, we prefer to use the so-called ‘not-a-knot’ end conditions [20], which amount to requiring that the spline I_{b,j} f(x) agree with the function f(x) at one additional point near each boundary. So we have the following equations for α_k, 1 ≤ k ≤ 4:

I_{b,j} f(0) = f(0),  I_{b,j} f(L) = f(L),
I_{b,j} f(τ_1) = f(τ_1),  I_{b,j} f(τ_2) = f(τ_2).   (2.42)

In our case, by choosing τ_1 = 1/2^{j+1}, τ_2 = L − 1/2^{j+1}, we have

α_1 = f(0),  α_2 = 6 f(τ_1) − (3/4) f(0),
α_3 = 6 f(τ_2) − (3/4) f(L),  α_4 = f(L).   (2.43)

Although in this case f(x) − I_{b,j} f(x) is no longer in the space H_0^2(I), an interpolating spline f_j(x) of the form (2.41) with I_{b,j} f(x) defined by (2.42) will still approximate f(x) with order O(2^{−4j}) [21].
3 Discrete Wavelet Transform (DWT)
In this section, we will introduce a fast Discrete Wavelet Transform (DWT) which maps discrete sample values of a function to its wavelet interpolant expansion. Such an expansion with the wavelet decomposition will enable us to compute an approximation of the first and second derivatives of the function.

Interpolant Operator I_{V_0} in V_0

Consider any function f(x) ∈ H_0^2(I), denote the interior knots for V_0 by

x_k^{(−1)} = k,  k = 1, · · ·, L − 1,   (3.1)

and the values of f(x) on {x_k^{(−1)}}_{k=1}^{L−1} by

f_k^{(−1)} = f(x_k^{(−1)}),  k = 1, · · ·, L − 1.   (3.2)

The cubic interpolant I_{V_0} f(x) of the data {f_k^{(−1)}} can be expressed as follows:

I_{V_0} f(x) = c_{−1} φ_b(x) + Σ_{k=0}^{L−4} c_k φ_{0,k}(x) + c_{L−3} φ_b(L − x),   (3.3)

and I_{V_0} f(x) interpolates the data f_k^{(−1)}, k = 1, · · ·, L − 1, namely

I_{V_0} f(x_k^{(−1)}) = f_k^{(−1)},  k = 1, · · ·, L − 1.   (3.4)

Let B be the transform matrix between f^{(−1)} = (f_1^{(−1)}, · · ·, f_{L−1}^{(−1)})^T and the coefficients c = (c_{−1}, · · ·, c_{L−3})^T, i.e.

f^{(−1)} = B c,   (3.5)

where, by using item (5) in Lemma 2, B is the (L − 1) × (L − 1) tridiagonal matrix

      | 7/12  1/6                       |
      | 1/6   2/3   1/6                 |
B =   |       1/6   2/3   1/6           |
      |             ...   ...   ...     |
      |             1/6   2/3   1/6     |
      |                   1/6   7/12    |
In order to obtain the coefficients c_k, −1 ≤ k ≤ L − 3, in (3.3), we have to solve the tridiagonal system (3.5), which involves 8L operations.

Interpolation Operator I_{W_j} f in W_j

Similarly, we can define the interpolation operator I_{W_j} f(x) in W_j, j ≥ 0, for any function f(x) in H_0^2(I). For this purpose, we choose the following interpolation points in I:

x_k^{(j)} = (k + 1.5)/2^j,  −1 ≤ k ≤ n_j − 2,   (3.6)

where n_j = dim W_j = 2^j L.

It can be checked that for the interpolation points {x_k^{(−1)}} for V_0 in (3.1) and {x_k^{(j)}} for W_j, j ≥ 0, in (3.6), the wavelet functions ψ_{j,k}(x) satisfy a “Point Value Vanishing” property.

Point Value Vanishing Property of ψ_{j,k}(x) For j > i, −1 ≤ k ≤ n_j − 2,

ψ_{j,k}(x_k^{(j)}) = 1,
ψ_{j,k}(x_l^{(i)}) = 0,  −1 ≤ l ≤ n_i − 2 if i ≥ 0;  1 ≤ l ≤ L − 1 if i = −1.   (3.7)

So {φ_{0,k}(x)}, {ψ_{j,k}(x)}_{j=0}^∞ form a hierarchical basis for the Sobolev space H_0^2(I) [22].
Moreover, the “Point Value Vanishing” property will be crucial in obtaining a fast Discrete Wavelet Transform (DWT). The interpolation I_{W_j} f(x) of a function f(x) ∈ H_0^2(I) in W_j, j ≥ 0, can be expressed as a linear combination of ψ_{j,k}(x), k = −1, · · ·, n_j − 2, namely

I_{W_j} f(x) = Σ_{k=−1}^{n_j−2} f̂_{j,k} ψ_{j,k}(x),   (3.8)

and

I_{W_j} f(x_k^{(j)}) = f(x_k^{(j)}),  −1 ≤ k ≤ n_j − 2.

If we denote by M_j the n_j-th order matrix which relates f̂^{(j)} = (f̂_{j,−1}, · · ·, f̂_{j,n_j−2})^T and f^{(j)} = (f(x_{−1}^{(j)}), · · ·, f(x_{n_j−2}^{(j)}))^T, then

f^{(j)} = M_j f̂^{(j)},   (3.9)

where

        |  1     −1/14                          |
        | −1/13   1     −1/14                   |
M_j =   |        −1/14   1     −1/14            |
        |               ...    ...    ...       |
        |              −1/14    1     −1/13     |
        |                     −1/14    1        |

The solution of the coefficients {f̂_{j,k}, −1 ≤ k ≤ n_j − 2} again involves solving a tridiagonal system (3.9), which costs 8 n_j operations.

Now let us assume that the values of a function f(x) ∈ H_0^2(I) are given on all the interpolation points {x_k^{(j)}} defined in (3.1) and (3.6). We intend to find the wavelet interpolation P_J f(x) ∈ V_0 ⊕ W_0 ⊕ W_1 ⊕ · · · ⊕ W_{J−1} for J − 1 ≥ 0, i.e.

P_J f(x) = f̂_{−1,−1} φ_b(x) + Σ_{k=0}^{L−4} f̂_{−1,k} φ_{0,k}(x) + f̂_{−1,L−3} φ_b(L − x)
         + Σ_{j=0}^{J−1} [ Σ_{k=−1}^{n_j−2} f̂_{j,k} ψ_{j,k}(x) ]
         = f_{−1}(x) + Σ_{j=0}^{J−1} f_j(x),   (3.10)
where

f_{−1}(x) = I_{V_0} f(x) ∈ V_0,  f_j(x) = Σ_{k=−1}^{n_j−2} f̂_{j,k} ψ_{j,k}(x) ∈ W_j, j ≥ 0,

and the following interpolating conditions hold:

P_J f(x_k^{(−1)}) = f(x_k^{(−1)}),  1 ≤ k ≤ L − 1,
P_J f(x_k^{(j)}) = f(x_k^{(j)}),  j ≥ 0, −1 ≤ k ≤ n_j − 2.   (3.11)

Let us denote by f = (f^{(−1)}, f^{(0)}, · · ·, f^{(J−1)})^T the values of f(x) on all interpolation points, i.e.

f^{(−1)} = {f(x_k^{(−1)})}_{k=1}^{L−1},  f^{(j)} = {f(x_k^{(j)})}_{k=−1}^{n_j−2},  j ≥ 0,

and by f̂ = (f̂^{(−1)}, f̂^{(0)}, · · ·, f̂^{(J−1)})^T the wavelet coefficients in the expansion (3.10):

f̂^{(−1)} = {f̂_{−1,k}}_{k=−1}^{L−3},  f̂^{(j)} = {f̂_{j,k}}_{k=−1}^{n_j−2},  j ≥ 0.

The following algorithm provides a recursive way to compute all the wavelet coefficients f̂. Note that the wavelet expansion (3.10) can be extended to include higher level wavelet spaces W_j, J ≤ j ≤ J′, by adding only terms from the higher wavelet spaces, i.e. W_J, · · ·, W_{J′−1}.

DWT transform f̂ → f

This direction of the transform is straightforward, by evaluating the expansion (3.10) at all the collocation points {x_k^{(j)}}, j ≥ −1. The “Point Value Vanishing” property (3.7) of the wavelet functions and the compactness of supp ψ_{j,k}(x) can be used to reduce the number of evaluations.

Number of Operations

Let N be the total number of collocation points, N = (L − 1) + Σ_{j=0}^{J−1} n_j = 2^J L − 1. In the evaluation of P_J f(x_k^{(j)}), values of ψ(x) and φ(x) at the dyadic points k/2^J are needed; they can be computed once for future use. Recalling (3.10) and the “Point Value Vanishing” property (3.7) of the wavelet basis functions, we have

P_J f(x_k^{(−1)}) = f_{−1}(x_k^{(−1)}),  1 ≤ k ≤ L − 1,

which needs 4(L − 1) flops. For each 0 ≤ j ≤ J − 1, computing P_J f(x_k^{(j)}), −1 ≤ k ≤ n_j − 2, needs 5 j n_j flops. Thus, it takes 4(L − 1) + Σ_{j=0}^{J−1} 5 j n_j ≤ 5 N log N flops to compute the vector f.
f → f̂

Recalling that f = (f^{(−1)}, f^{(0)}, · · ·, f^{(J−1)})^T, we proceed to the construction of P_J f(x) in the following steps.

Step 1 Define

f_{−1}(x) = I_{V_0} f^{(−1)} = f̂_{−1,−1} φ_b(x) + Σ_{k=0}^{L−4} f̂_{−1,k} φ_{0,k}(x) + f̂_{−1,L−3} φ_b(L − x),

so f_{−1}(x) interpolates f(x) at the interpolation points x_k^{(−1)}, 1 ≤ k ≤ L − 1, namely

f_{−1}(x_k^{(−1)}) = f(x_k^{(−1)}).   (3.12)

Step 2 Define

f_0(x) = I_{W_0}(f^{(0)} − (I_{V_0} f)^{(0)}) = Σ_{l=−1}^{n_0−2} f̂_{0,l} ψ_{0,l}(x),   (3.13)

where (I_{V_0} f)^{(0)} = {I_{V_0} f(x_k^{(0)})}_{k=−1}^{n_0−2}.

As a result of the “Point Value Vanishing” property (3.7) of the wavelet functions, we have ψ_{0,l}(x_k^{(−1)}) = 0, −1 ≤ l ≤ n_0 − 2, 1 ≤ k ≤ L − 1; thus

f_0(x_k^{(−1)}) = 0,  1 ≤ k ≤ L − 1.

So we have, for 1 ≤ k ≤ L − 1,

f_{−1}(x_k^{(−1)}) + f_0(x_k^{(−1)}) = f_{−1}(x_k^{(−1)}) = I_{V_0} f(x_k^{(−1)}) = f(x_k^{(−1)}),

and, for −1 ≤ k ≤ n_0 − 2,

f_{−1}(x_k^{(0)}) + f_0(x_k^{(0)}) = I_{V_0} f(x_k^{(0)}) + (f_k^{(0)} − (I_{V_0} f)_k^{(0)}) = f(x_k^{(0)}).   (3.14)

Equation (3.14) implies that the function f_{−1}(x) + f_0(x) actually interpolates f(x) on both the interpolation points {x_k^{(−1)}}_{k=1}^{L−1} for V_0 and the interpolation points {x_k^{(0)}}_{k=−1}^{L−2} for W_0.
Step 3 Generally, we define, for 1 ≤ j ≤ J − 1,

f_j(x) = I_{W_j}(f^{(j)} − (P_{j−1} f)^{(j)})   (3.15)
       = Σ_{k=−1}^{n_j−2} f̂_{j,k} ψ_{j,k}(x),   (3.16)

where (P_{j−1} f)_k^{(j)} = P_{j−1} f(x_k^{(j)}), −1 ≤ k ≤ n_j − 2.

Again, as in Step 2, we can verify that the function f_{−1}(x) + f_0(x) + · · · + f_{j−1}(x) interpolates the function f(x) on all the interpolation points {x_k^{(−1)}}_{k=1}^{L−1}, · · ·, {x_k^{(j−1)}}_{k=−1}^{n_{j−1}−2}. In particular, for j = J we have P_J f(x) = f_{−1}(x) + f_0(x) + · · · + f_{J−1}(x), which satisfies the required interpolation condition (3.11).

Number of Operations

For j = −1, the number of operations to invert (3.5) using the Thomas algorithm to obtain {f̂_{−1,k}} is 8(L − 1) flops. For 0 ≤ j ≤ J − 1, the cost of computing the coefficients f̂_{j,k} in f_j(x) = I_{W_j}(f^{(j)} − (P_{j−1} f)^{(j)}) = Σ_{−1≤k≤n_j−2} f̂_{j,k} ψ_{j,k}(x) consists of three parts: (1) evaluation of (P_{j−1} f)^{(j)} = {P_{j−1} f(x_k^{(j)})} — 5 j n_j flops; (2) calculating the difference f^{(j)} − (P_{j−1} f)^{(j)} — n_j flops; (3) inverting the matrix M_j in (3.9) — 8 n_j flops. So the total cost of finding f̂ is 8(L − 1) + Σ_{j=0}^{J−1} (5j + 9) n_j ≤ 6 N log N, where, again, N = 2^J L − 1.
Wavelet Expansion Coefficients {f̂_{j,k}}

First we consider the Fourier transform of the wavelet function ψ(x):

ψ̂(ω) = (3/7) (2 − cos(ω/2)) (sin(ω/4)/(ω/4))^4 e^{−3ωi/2}.   (3.17)

We find that ψ̂(0) = 3/7, which means that ψ(x) has no vanishing moment of the first order (see Figure 3). Since the wavelet decomposition we consider here is in the space H_0^2(I), the decay properties of the wavelet coefficients {f̂_{j,k}} ought to be related to the vanishing moments of the second derivative of ψ(x) (see Figure 4), not to those of ψ(x) itself. In order to clarify the meaning of the wavelet coefficients {f̂_{j,k}} in the finite wavelet decomposition of a function f(x) in H_0^2(I), let us consider the dual basis {ψ*_{j,k}(x)} of {ψ_{j,k}(x)} in W_j, i.e.

∫_I (ψ*_{j,k})″(x) (ψ_{j,l})″(x) dx = δ_{lk}.   (3.18)

Since W_j, j = 0, · · ·, J − 1, and V_0 are mutually orthogonal, we have

f̂_{j,k} = ∫_I f″(x) (ψ*_{j,k})″(x) dx.   (3.19)

We need to express ψ*_{j,k}(x) in terms of {ψ_{j,l}(x)}_{l=−1}^{n_j−2}, namely

ψ*_{j,k}(x) = Σ_{l=−1}^{n_j−2} β^{(j)*}_{kl} ψ_{j,l}(x),   (3.20)

where the coefficients β^{(j)*}_{kl} are determined as follows. Let G_j denote the matrix

G_j = (β^{(j)}_{kl})_{n_j×n_j},  β^{(j)}_{kl} = ∫_I ψ″_{j,k}(x) ψ″_{j,l}(x) dx,  −1 ≤ k, l ≤ n_j − 2.   (3.21)

Then, from (3.18) we have

(β^{(j)*}_{kl})_{n_j×n_j} = G_j^{−1}.   (3.22)
In order to estimate the entries of the matrix (β^{(j)*}_{kl})_{n_j×n_j}, we recall that

β^{(j)}_{kl} = ∫_I ψ″_{j,k}(x) ψ″_{j,l}(x) dx = 2^{3j} ∫_{2^j I} ψ″_{0,k}(x) ψ″_{0,l}(x) dx = 2^{3j} ∫_{2^j I} ψ^{(4)}_{0,k}(x) ψ_{0,l}(x) dx.

Using ψ_{0,l}(n) = 0, n ∈ Z, the values ψ(1/2) = −1/14, ψ(3/2) = 1, ψ(5/2) = −1/14, ψ_b(1/2) = 1, ψ_b(3/2) = −1/13, and the fact that ψ^{(4)} and ψ_b^{(4)} are combinations of Dirac masses δ(x − m/2) at the half-integer knots, we have the following:

G_j = (6/7) · 2^{3(j+2)} Γ_j,

where Γ_j is the symmetric pentadiagonal matrix

      | 896/169   9/13   −1/13                                      |
      |   9/13   54/14   10/14   −1/14                              |
      |  −1/13   10/14   54/14   10/14   −1/14                      |
Γ_j = |          −1/14   10/14   54/14   10/14   −1/14              |
      |                   ...     ...     ...     ...               |
      |                  −1/14   10/14   54/14   10/14   −1/13      |
      |                          −1/14   10/14   54/14    9/13      |
      |                                  −1/13    9/13   896/169    |
Γ_j = (γ_{kl})_{n_j×n_j} is a symmetric (Hermitian) matrix satisfying

γ_{kk} − Σ_{l=−1, l≠k}^{n_j−2} |γ_{kl}| ≥ C > 0,  −1 ≤ k ≤ n_j − 2,

where C is a constant independent of j. Hence there exist two positive constants C_1 and C_2, independent of j, such that

C_1 I_{n_j} ≤ Γ_j ≤ C_2 I_{n_j},

where I_{n_j} is the n_j × n_j identity matrix. Now denote

Γ_j^{−1} = (γ*_{kl})_{n_j×n_j};

by using Lemma 8 and Lemma 9 in [23], we assert that there are two positive constants K and γ such that

|γ*_{kl}| ≤ K exp(−γ |k − l|).

Since

(β^{(j)*}_{kl})_{n_j×n_j} = (7/6) · 2^{−3(j+2)} Γ_j^{−1},

we have

|β^{(j)*}_{kl}| ≤ K_1 2^{−3j} exp(−γ |k − l|).   (3.23)
In order to estimate the coefficients f̂_{j,k} in (3.19), we use the following result from Meyer’s book [8].

Lemma 4 Let g(x) be compactly supported, n times continuously differentiable, and have n + 1 vanishing moments:

∫_{−∞}^{∞} x^p g(x) dx = 0, for 0 ≤ p ≤ n.

Let α, 0 < α < n, be a real number that is not an integer, and let f(x) ∈ L^2. Then f(x) is uniformly Lipschitz of order α over a finite interval [a, b] if and only if for any k ∈ Z and j ∈ Z such that 2^{−j} k ∈ (a, b),

|∫ f(x) g(2^j x − k) dx| = O(2^{−(α+1)j})  as j → ∞.   (3.24)

Applying this lemma, we can prove the following result.

Lemma 5 Let 0 < α < 1 and f ∈ H_0^2(I). If the second derivative of the function f is Hölder continuous with exponent α at x_0 ∈ I, i.e.

|f″(x) − f″(x_0)| ≤ C |x − x_0|^α,  x ∈ (x_0 − δ, x_0 + δ) ⊂ I,

for some δ > 0, then for any k ∈ Z, j ∈ Z_+ such that 2^{−j} k ∈ (x_0 − δ/2, x_0 + δ/2),

|f̂_{j,k}| = O(2^{−(α+2)j}),  as j → ∞.   (3.25)
Proof. Since ψ and ψ_b, defined by (2.21) and (2.22), are two times continuously differentiable and their second derivatives have 2 vanishing moments, by Lemma 4,

|f̂_{j,k}| = |⟨f, ψ*_{j,k}⟩|
  ≤ Σ_{2^{−j} l ∈ (x_0−δ, x_0+δ)} |β^{(j)*}_{kl}| |⟨f, ψ_{j,l}⟩| + Σ_{2^{−j} l ∉ (x_0−δ, x_0+δ)} |β^{(j)*}_{kl}| |⟨f, ψ_{j,l}⟩|
  ≤ ( Σ_{2^{−j} l ∈ (x_0−δ, x_0+δ)} + Σ_{2^{−j} l ∉ (x_0−δ, x_0+δ)} ) K_1 2^{−3j} exp(−γ |k − l|) 2^{2j} |∫_R f″(x) ψ″(2^j x − l) dx|
  ≤ Σ_{2^{−j} l ∈ (x_0−δ, x_0+δ)} K_1 exp(−γ |k − l|) O(2^{−(α+2)j}) + K_2 exp(−(1/2) δ γ 2^j) Σ_{s=1}^∞ exp(−γ s)
  = O(2^{−(α+2)j}). □
Lemma 5 implies that the wavelet coefficients f̂_{j,k}, j ≥ 0, reflect the singularity of the function to be approximated. In practice, when we solve PDE’s using collocation methods, we often use the values of the functions, not their derivatives. Therefore, in order to use the wavelet coefficients to adjust the choice of wavelet basis functions, we have to establish a relation between the magnitudes of the wavelet coefficients f̂_{j,k}, j ≥ 0, and f(x). Let us first state the following result on the inverse of a tridiagonal matrix from [24].

Lemma 6 Let A be an n × n tridiagonal matrix with elements a_2, a_3, · · ·, a_n on the subdiagonal, b_1, b_2, · · ·, b_n on the diagonal and c_2, c_3, · · ·, c_n on the superdiagonal, where a_i, c_i ≠ 0. Define the two sequences {u_m}, {v_m} as follows:

u_0 = 0,  u_1 = 1,  u_m = −(1/c_m)(a_{m−1} u_{m−2} + b_{m−1} u_{m−1}),  m ≥ 2,   (3.26)
v_{n+1} = 0,  v_n = 1,  v_m = −(1/a_{m+1})(b_{m+1} v_{m+1} + c_{m+2} v_{m+2}),  m ≤ n − 1,   (3.27)

where a_1 and c_{n+1} are arbitrary nonzero constants. Then A^{−1} = (α_{ij}) is given by

α_{ij} = −(u_i v_j)/(a_1 v_0) Π_{k=2}^{j} (c_k/a_k),  i ≤ j;
α_{ij} = −(u_j v_i)/(a_1 v_0) Π_{k=2}^{j} (c_k/a_k),  i > j.   (3.28)
Corollary Let M_j be the interpolation matrix in (3.9). Then we have the following estimate on M_j^{−1} = (α_{i,j}):

|α_{i,j}| ≤ K / α^{|j−i|},   (3.29)

where K = 1.1726 and α = (14 + √192)/2 = 13.928.
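The bound (3.29) can be probed numerically by computing the columns of M_j^{−1} through tridiagonal solves against unit vectors. A sketch (ours, using the stated constants):

```python
from math import sqrt

def thomas(lo, di, up, rhs):               # O(n) tridiagonal solver
    n = len(di); d = di[:]; r = rhs[:]
    for i in range(1, n):
        w = lo[i] / d[i - 1]; d[i] -= w * up[i - 1]; r[i] -= w * r[i - 1]
    x = [0.0] * n; x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - up[i] * x[i + 1]) / d[i]
    return x

n = 40                                     # a matrix M_j of size n_j = n
lo = [0.0, -1 / 13] + [-1 / 14] * (n - 2)
di = [1.0] * n
up = [-1 / 14] * (n - 2) + [-1 / 13, 0.0]

K, alpha = 1.1726, (14 + sqrt(192)) / 2    # constants of the Corollary
worst = 0.0                                # worst |entry| * alpha^{|i-j|} / K
for j in range(n):
    e = [0.0] * n; e[j] = 1.0
    col = thomas(lo, di, up, e)            # column j of M_j^{-1}
    worst = max(worst, max(abs(col[i]) * alpha ** abs(i - j) / K
                           for i in range(n)))
print("max |alpha_ij| * alpha^|i-j| / K =", worst)   # should stay <= 1
```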
We delay the proof of (3.29) to the Appendix.

Theorem 3 Let f(x) ∈ H_0^2(0, L), M = max_I |f(x)|, and let I_{W_j} f(x) be its interpolation in W_j defined in (3.8). If for ε > 0, −1 ≤ k_1 < k_2 ≤ n_j − 2,

|f(x_k^{(j)})| ≤ ε,  for k_1 ≤ k ≤ k_2,

then, defining

Ĩ_{W_j} f(x) = Σ_{−1≤k≤n_j−2, k∉[k_1+l, k_2−l]} f̂_{j,k} ψ_{j,k}(x),   (3.30)

where l = l(ε) = min(n_j/2, −log ε / log α), we have

|Ĩ_{W_j} f(x) − I_{W_j} f(x)| ≤ C(M) ε,   (3.31)

where C(M) = (6K/(α − 1))(α + M), K = 1.1726 and α = (14 + √192)/2 = 13.928.
Proof. From (3.9) we have

f̂^{(j)} = M_j^{−1} f^{(j)},

where f̂^{(j)} = (f̂_{j,−1}, · · ·, f̂_{j,n_j−2})^T and f^{(j)} = (f(x_{−1}^{(j)}), · · ·, f(x_{n_j−2}^{(j)}))^T; thus

f̂_{j,k} = Σ_{i=1}^{n_j} α_{k,i} f(x_{i−2}^{(j)}),  −1 ≤ k ≤ n_j − 2,

so that

|f̂_{j,k}| ≤ K Σ_{i=1}^{n_j} (1/α^{|k−i|}) |f(x_{i−2}^{(j)})|.   (3.32)

For any given ε > 0, we take ℓ = min(n_j/2, −log ε / log α). For k ∈ [k_1 + ℓ, k_2 − ℓ], using (3.32) we have

|f̂_{j,k}| ≤ K [ Σ_{|k−i|≤ℓ} (1/α^{|k−i|}) |f(x_{i−2}^{(j)})| + Σ_{|k−i|>ℓ} (1/α^{|k−i|}) |f(x_{i−2}^{(j)})| ]
  ≤ K ε Σ_{|k−i|≤ℓ} (1/α^{|k−i|}) + M K Σ_{|k−i|>ℓ} (1/α^{|k−i|})
  ≤ 2Kε [1 + (1/α) + · · · + (1/α)^ℓ] + 2MK [(1/α)^{ℓ+1} + · · · + (1/α)^{n_j−ℓ}]
  = 2Kε (1 − (1/α)^{ℓ+1})/(1 − 1/α) + 2MK (1/α)^{ℓ+1} (1 − (1/α)^{n_j−2ℓ})/(1 − 1/α)
  ≤ 2Kε α/(α − 1) + 2MK ε/(α − 1)
  = C′ ε,

where C′ = (2K/(α − 1))(α + M). Finally, we have

|Ĩ_{W_j} f(x) − I_{W_j} f(x)| = | Σ_{k∈[k_1+ℓ, k_2−ℓ]} f̂_{j,k} ψ_{j,k}(x) |
  ≤ Σ_{k∈[k_1+ℓ, k_2−ℓ]} |f̂_{j,k}| |ψ_{j,k}(x)| ≤ C′ ε Σ_{k∈[k_1+ℓ, k_2−ℓ]} |ψ_{j,k}(x)|.

Note that in the last summation only three terms are nonzero for any fixed x, so we have

|Ĩ_{W_j} f(x) − I_{W_j} f(x)| ≤ 3 C′ ε = C ε,

where C = (6K/(α − 1))(α + M). This concludes the proof of Theorem 3. □
Remark. As a consequence of Theorem 3, the coefficients f̂_{j,k} of the wavelet interpolation operator I_{W_j}f(x) can be ignored whenever the magnitudes of the function f(x) at the points x_k^{(j)} ∈ [x_{k_1+ℓ}^{(j)}, x_{k_2−ℓ}^{(j)}] are less than a given error tolerance ε; this procedure only introduces an error of O(ε). For ε = 10^{−10}, ℓ = 9; for ε = 10^{−8}, ℓ = 7. In the wavelet interpolation expansion (3.10), I_{W_j} is used to interpolate the difference between a lower level interpolation P_{j−1}f(x) and f(x), i.e. P_{j−1}f(x) − f(x). Thus the function P_{j−1}f(x) − f(x) plays the role of the function in Theorem 3 (recall that in Theorem 3 this function is still denoted by f(x) for the sake of simpler notation). Hence, as j becomes larger, the coefficients f̂_{j,k} in the wavelet expansion fall below the error tolerance ε on a larger portion of the solution domain, and many terms ψ_{j,k}(x) can be discarded from the wavelet expansion of f(x). This fact will be used in a later section to achieve adaptivity in the solution of PDE's. The idea of decomposing numerical approximations into different scales has previously been used successfully in shock wave computations with uniform high order spectral methods, where ENO finite difference methods and spectral methods are combined to resolve the shocks and the high frequency components of the solution, respectively [25].

We conclude this section with the following result, which shows how to use the wavelet coefficients to estimate the data interpolated by I_{W_j}.

Theorem 4  Let I_{W_j}f(x) and f(x) be as in Theorem 3. For ε > 0 and −1 ≤ k_1 < k_2 ≤ n_j − 2, if

|f̂_{j,k}| ≤ ε   for k_1 ≤ k ≤ k_2,

then

|f(x_k^{(j)})| ≤ 3ε   for k_1 + 3 ≤ k ≤ k_2 − 3.   (3.33)

Proof. The proof follows from the definition of I_{W_j}f(x). □
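The thresholding described in the remark can be sketched as follows (illustrative only: a synthetic, decaying coefficient sequence stands in for the actual coefficients f̂_{j,k}, and M is an assumed uniform bound on |ψ_{j,k}(x)|):

```python
import numpy as np

# Illustrative sketch of the truncation remark: a synthetic, decaying
# coefficient sequence stands in for the coefficients f_hat_{j,k}.
rng = np.random.default_rng(0)
n = 256
coeffs = rng.standard_normal(n) * np.exp(-np.arange(n) / 16.0)

eps = 1e-6
kept = np.where(np.abs(coeffs) >= eps, coeffs, 0.0)   # drop small coefficients
dropped = coeffs - kept

# At any fixed x, at most three basis functions psi_{j,k} are nonzero,
# each bounded by an assumed constant M, so the pointwise error of the
# truncated expansion is at most 3*M*eps -- the O(eps) bound of Theorem 3.
M = 1.0
bound = 3 * M * eps
assert np.max(np.abs(dropped)) <= eps      # every dropped term is small
assert 0 < np.count_nonzero(kept) < n      # and some terms really drop
```

The point of the sketch is only the bookkeeping: the pointwise perturbation is controlled by the number of overlapping basis functions, not by how many coefficients are dropped.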
4  Derivative Matrices

The operation of differentiation on functions given by the wavelet expansion (3.10) can be represented by a finite dimensional matrix D. Such a matrix has been investigated in [26] for wavelet approximations of periodic functions based on Daubechies' compactly supported wavelets. The properties of the matrix D, especially its eigenvalues, strongly affect the efficiency and stability of numerical methods for the solution of time dependent PDE's. Because of the multiresolution structure of the spaces V_j, i.e. V_j ⊂ V_{j+1} and V_0 ⊕ W_0 ⊕ ··· ⊕ W_J = V_{J+1}, we can assume that J + 1 = 0, and the wavelet interpolation u_0(x) of a function u(x) can be written as a linear combination of I_{b,0}u(x) and the basis of V_0, namely

u_0(x) = I_{b,0}u(x) + û_{−1}φ_{b,0}(x) + Σ_{k=0}^{L−4} û_k φ_{0,k}(x) + û_{L−3}φ_{b,0}(L − x).   (4.1)
Here we only consider the case where I_{b,0}u(x) is defined in (2.35) with the end derivative conditions (2.36)-(2.39). The case of not-a-knot conditions can be treated similarly.

Second Derivative Matrix

Consider the second derivative matrix which approximates the second order differential operator

Lu = u_xx   (4.2)

with the boundary conditions

u(0) = u(L) = 0.   (4.3)
As u_0(x) is a cubic spline interpolant of the function u(x) on the knots x_i = i, i = 0, ···, L, with the end derivatives given by (2.39), the first and second derivative continuity conditions of u_0(x) give

u′_i = (h_i/6)u″_{i−1} + (h_i/3)u″_i + (u_i − u_{i−1})/h_i,   1 ≤ i ≤ L,   (4.4)

u′_{i−1} = −(h_i/3)u″_{i−1} − (h_i/6)u″_i + (u_i − u_{i−1})/h_i,   1 ≤ i ≤ L,   (4.5)

and

(h_i/(h_i+h_{i+1}))u″_{i−1} + 2u″_i + (h_{i+1}/(h_i+h_{i+1}))u″_{i+1}
      = (6/(h_i+h_{i+1}))[(u_{i+1} − u_i)/h_{i+1} − (u_i − u_{i−1})/h_i],   (4.6)

where 1 ≤ i ≤ L−1, u′_i = u′(x_i), u″_i = u″(x_i) and h_i = x_i − x_{i−1} = 1. Equations for the second derivatives at x_i, 1 ≤ i ≤ L−1, are provided by (4.6), while the equations for the second derivatives at the two end points can be obtained from (4.5) (i = 1) and (4.4) (i = L):

(h_1/3)u″_0 + (h_1/6)u″_1 = (u_1 − u_0)/h_1 − u̇_0,   (4.7)

(h_L/6)u″_{L−1} + (h_L/3)u″_L = u̇_L − (u_L − u_{L−1})/h_L,   (4.8)

where u̇_0, u̇_L are the approximations of the first derivative of u(x) at x_0, x_L given in (2.39). Using the notations u = (u(0), u(1), ···, u(L))^⊤ ∈ R^{L+1} and u″ = (u″(0), u″(1), ···, u″(L))^⊤ ∈ R^{L+1}, from (4.6), (4.7) and (4.8) we have

T1 u″ = T2 u + γ⃗,   (4.9)
where T1 is the (L+1)×(L+1) tridiagonal matrix whose first row is (h_1/3, h_1/6, 0, ···, 0), whose i-th interior row (1 ≤ i ≤ L−1) carries

( h_i/(h_i+h_{i+1}),  2,  h_{i+1}/(h_i+h_{i+1}) )

in columns i−1, i, i+1, and whose last row is (0, ···, 0, h_L/6, h_L/3); T2 is the tridiagonal matrix whose first row is (−1/h_1, 1/h_1, 0, ···, 0), whose i-th interior row carries

( 6/(h_i(h_i+h_{i+1})),  −(6/(h_i+h_{i+1}))(1/h_i + 1/h_{i+1}),  6/(h_{i+1}(h_i+h_{i+1})) )

in columns i−1, i, i+1, and whose last row is (0, ···, 0, 1/h_L, −1/h_L). The vector γ⃗ = (−u̇_0, 0, ···, 0, u̇_L)^⊤ collects the end derivative terms of (4.7)-(4.8); since u̇_0 and u̇_L are linear combinations of the entries of u, we can write γ⃗ = Γu, where the only nonzero rows of Γ are the first and the last, built from

α⃗ = (1/h)(c′_0, ···, c′_p, 0, ···, 0)^⊤ ∈ R^{L+1},   β⃗ = (1/h)(0, ···, 0, c′_p, ···, c′_0)^⊤ ∈ R^{L+1},

with c′_0, ···, c′_p defined in (2.39). As a result of the uniqueness of the spline u_0(x), T1 is invertible, so we have

u″ = T1^{−1}(T2 + Γ)u = D2 u.   (4.10)
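The construction (4.6)-(4.10) can be sketched numerically on the uniform mesh h_i = 1. This is a minimal illustration, not the full scheme: the end slopes u̇_0, u̇_L are passed in directly rather than approximated through (2.39), so the Γ term is folded into the right-hand side vector:

```python
import numpy as np

def spline_second_derivative(u, du0, duL):
    # Solve T1 u'' = T2 u + gamma, cf. (4.6)-(4.9), on the uniform mesh
    # h_i = 1; du0 and duL are the prescribed end slopes.
    n = len(u)
    T1 = np.zeros((n, n)); T2 = np.zeros((n, n)); gamma = np.zeros(n)
    T1[0, :2] = [1/3, 1/6];   T2[0, :2] = [-1.0, 1.0];  gamma[0] = -du0   # (4.7)
    T1[-1, -2:] = [1/6, 1/3]; T2[-1, -2:] = [1.0, -1.0]; gamma[-1] = duL  # (4.8)
    for i in range(1, n - 1):                                             # (4.6)
        T1[i, i-1:i+2] = [0.5, 2.0, 0.5]
        T2[i, i-1:i+2] = [3.0, -6.0, 3.0]
    return np.linalg.solve(T1, T2 @ u + gamma)   # u'' = T1^{-1}(T2 u + gamma)

# A clamped cubic spline with exact end slopes reproduces cubics exactly:
L = 8
x = np.arange(L + 1, dtype=float)
upp = spline_second_derivative(x**3, du0=0.0, duL=3.0 * L**2)
assert np.allclose(upp, 6.0 * x)
```

The exactness check on u(x) = x³ reflects the fact that a clamped cubic spline interpolant of a cubic, with the true end derivatives supplied, is that cubic itself.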
By eliminating the first and last rows and columns, we then obtain the second derivative matrix for the differential operator (4.2).

First Derivative Matrix

Here we consider the first derivative matrix which approximates the first order differential operator

Lu = u_x   (4.11)

with the boundary condition

u(L) = 0.   (4.12)
Using the notation u′ = (u′(0), u′(1), ···, u′(L))^⊤ ∈ R^{L+1}, and using (4.4) and (4.5), we have

u′ = H1 u″ + H2 u,   (4.13)

where H1 is the (L+1)×(L+1) matrix whose first row is (−h_1/3, −h_1/6, 0, ···, 0) (from (4.5) with i = 1) and whose i-th row, 1 ≤ i ≤ L, carries (h_i/6, h_i/3) in columns i−1, i (from (4.4)); H2 is the matrix whose first row is (−1/h_1, 1/h_1, 0, ···, 0) and whose i-th row, 1 ≤ i ≤ L, carries (−1/h_i, 1/h_i) in columns i−1, i. Finally, by using (4.10) we obtain

u′ = (H1 D2 + H2)u = D1 u,   (4.14)

and, after eliminating the last row and column of D1, we also have the first derivative matrix for the differential operator (4.11).

In Figure 15, we plot the eigenvalues of D1 for L = 8 and J = 0, 1, 2, 3, which correspond to N = 8, 16, 32, 64. The eigenvalues come in conjugate pairs, with two purely real eigenvalues. The real parts of all the eigenvalues are negative, and the largest eigenvalue increases as O(N) in magnitude. In Figure 16, we plot the eigenvalues of D2 for L = 8 and J = 0, 1, 2, 3, which correspond to N = 8, 16, 32, 64. They are all real and positive, and the largest one increases as O(N²).
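The assembly of (4.13)-(4.14) can be sketched as follows; since the coefficients c′_k of (2.39) are not reproduced here, an assumed one-sided second-order difference stands in for the end-slope approximation:

```python
import numpy as np

def derivative_matrices(L):
    # Assemble D2 = T1^{-1}(T2 + Gamma) and D1 = H1 D2 + H2 on the uniform
    # mesh x_i = i (h = 1). Stand-in end slopes (not the paper's (2.39)):
    # udot_0 ~ (-3u_0 + 4u_1 - u_2)/2, udot_L ~ (3u_L - 4u_{L-1} + u_{L-2})/2.
    n = L + 1
    T1 = np.zeros((n, n)); T2 = np.zeros((n, n)); G = np.zeros((n, n))
    T1[0, :2] = [1/3, 1/6];   T2[0, :2] = [-1.0, 1.0]
    T1[-1, -2:] = [1/6, 1/3]; T2[-1, -2:] = [1.0, -1.0]
    G[0, :3] = [1.5, -2.0, 0.5]       # first row of Gamma: -udot_0
    G[-1, -3:] = [0.5, -2.0, 1.5]     # last row of Gamma: +udot_L
    for i in range(1, n - 1):
        T1[i, i-1:i+2] = [0.5, 2.0, 0.5]
        T2[i, i-1:i+2] = [3.0, -6.0, 3.0]
    D2 = np.linalg.solve(T1, T2 + G)                       # (4.10)
    H1 = np.zeros((n, n)); H2 = np.zeros((n, n))
    H1[0, :2] = [-1/3, -1/6]; H2[0, :2] = [-1.0, 1.0]      # (4.5), i = 1
    for i in range(1, n):
        H1[i, i-1:i+1] = [1/6, 1/3]                        # (4.4)
        H2[i, i-1:i+1] = [-1.0, 1.0]
    return H1 @ D2 + H2, D2                                # (4.14), (4.10)

D1, D2 = derivative_matrices(8)
x = np.arange(9, dtype=float)
# the stand-in end slopes are exact for quadratics, so D1 and D2 are too
assert np.allclose(D1 @ x**2, 2 * x)
assert np.allclose(D2 @ x**2, 2.0)
```

With the boundary rows and columns removed, np.linalg.eigvals(D1[:-1, :-1]) and np.linalg.eigvals(D2[1:-1, 1:-1]) can be used to examine the qualitative eigenvalue behavior reported for Figures 15 and 16, keeping in mind that the stand-in end-slope closure differs from (2.39).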
5  Adaptive Wavelet Collocation Methods for PDE's
In this section we consider a collocation method, based on the DWT described in Section 3, for time dependent PDE's. Let u = u(x, t) be the solution of the following initial boundary value problem:

u_t + f_x(u) = u_xx + g(u),   x ∈ [0, L],  t ≥ 0,
u(0, t) = g_0(t),   u(L, t) = g_1(t),   (5.1)
u(x, 0) = f(x).
Here only Dirichlet boundary conditions are considered; however, the method can also be modified to treat Neumann or Robin type boundary conditions. We use the idea of the method of lines, where only the spatial derivatives are discretized by the wavelet approximation. The numerical solution u_J(x, t) will be represented by a unique decomposition in V_0 ⊕ W_0 ⊕ ··· ⊕ W_{J−1}, J − 1 ≥ 0, namely

u_J(x, t) = I_{b,J}u(x, t) + û_{−1,−1}(t)φ_b(x) + Σ_{k=0}^{L−4} û_{−1,k}(t)φ_k(x) + û_{−1,L−3}(t)φ_b(L − x)
            + Σ_{j=0}^{J−1} Σ_{k=−1}^{n_j−2} û_{j,k}(t)ψ_{j,k}(x)
          = u_{−1}(x) + Σ_{j=0}^{J−1} u_j(x),   (5.2)

where I_{b,J}u(x, t), given in (2.35), carries the nonhomogeneity of u(x, t) on both boundaries, and the coefficients û_{j,k}(t) are all functions of t. Using the DWT, we can also identify the numerical solution u_J(x, t) by its point values at all the collocation (previously called interpolation) points {x_k^{(j)}} in (3.1) and (3.6). We put all these values in a vector u = u(t),

u = u(t) = (u^{(−1)}, u^{(0)}, ···, u^{(J−1)})^⊤,

where u^{(j)} = {u(x_k^{(j)}, t)}, with 1 ≤ k ≤ L − 1 for j = −1, and −1 ≤ k ≤ n_j − 2 for j ≥ 0. To solve for the unknown solution vector u(t), we collocate the PDE (5.1) at all collocation points and obtain the following semi-discretized wavelet collocation method.

Semi-Discretized Wavelet Collocation Method

∂u_J/∂t + f_x(u_J) = R(u_{J,xx}) + g(u_J)  at x = x_k^{(j)},  −1 ≤ j ≤ J − 1,
u_J(0, t) = g_0(t),   u_J(L, t) = g_1(t),   (5.3)
u_J(x_k^{(j)}, 0) = f(x_k^{(j)}),

where 1 ≤ k ≤ L − 1 for j = −1, and −1 ≤ k ≤ n_j − 2 for j ≥ 0. The averaging operator R applied to the second derivative is used to take advantage of the superconvergence of the cubic spline at the knot points (see (2.17)); however, R should only be used on a locally uniform mesh [19]. Equation (5.3) involves a total of (2^J − 1)L + 2 unknowns in u; two of them are determined by the boundary conditions, and the rest are the solutions of the ODE system subject to their initial conditions. In order to implement a time marching scheme for this ODE system (for example, a Runge-Kutta type time integrator), we have to compute the derivative terms f_x(u_J(x_k^{(j)})) and u_{J,xx}(x_k^{(j)}) in an efficient way. Let us only discuss the first derivative, which involves the computation of the nonlinear function f(u_J(x, t)). For this purpose we first find a wavelet decomposition similar to (5.2) for f(u_J). For a general nonlinear function f(u), this can be done quite straightforwardly using the DWT of Section 3.
Computation of f_x(x_k^{(j)}) = f_x(u_J(x_k^{(j)}))

Step 1. Given u = (u^{(−1)}, u^{(0)}, ···, u^{(J−1)})^⊤, compute f^{(j)} = {f(u_k^{(j)})}, j ≥ −1, and define f = (f^{(−1)}, f^{(0)}, ···, f^{(J−1)})^⊤;

Step 2. Compute the wavelet interpolation expansion of f using the DWT,

f_J(x, t) = I_{b,J}f + f̂_{−1,−1}(t)φ_b(x) + Σ_{k=0}^{L−4} f̂_{−1,k}(t)φ_k(x) + f̂_{−1,L−3}(t)φ_b(L − x)
            + Σ_{j=0}^{J−1} Σ_{k=−1}^{n_j−2} f̂_{j,k}(t)ψ_{j,k}(x);   (5.4)

Step 3. Differentiate (5.4) and evaluate it at all the collocation points {x_k^{(j)}}, j ≥ −1:

f_x(u_J)|_{x=x_k^{(j)}} = (I_{b,J}f)′(x_k^{(j)}) + f̂_{−1,−1}(t)φ′_b(x_k^{(j)}) + Σ_{l=0}^{L−4} f̂_{−1,l}(t)φ′_l(x_k^{(j)}) − f̂_{−1,L−3}(t)φ′_b(L − x_k^{(j)})
            + Σ_{i=0}^{J−1} Σ_{l=−1}^{n_i−2} f̂_{i,l}(t)ψ′_{i,l}(x_k^{(j)}).
Cost of Computing the Derivatives. For each single collocation point, it takes 7 + 5(J + 1) = 5J + 12 flops to compute f′_J(x_k^{(j)}). Therefore, the total cost of computing all the derivatives is (5J + 12)N = O(N log N). Again, ψ′(x) and φ′(x) at the dyadic points k/2^j, 0 ≤ k ≤ 2^j L, can be precomputed for future use.

Assuming that the Euler forward difference scheme is used to discretize the time derivative in (5.3), we obtain a fully discretized wavelet collocation method.

Fully Discretized Wavelet Collocation Method

u_J^{n+1} = u_J^n + Δt[−f_x(u_J^n) + R(u_{J,xx}^n) + g(u_J^n)]|_{x=x_k^{(j)}},   −1 ≤ j ≤ J − 1,
u_J^n(0) = g_0(t_n),   u_J^n(L) = g_1(t_n),   (5.5)
u_J^0(x_k^{(j)}) = f(x_k^{(j)}),

where 1 ≤ k ≤ L − 1 for j = −1, and −1 ≤ k ≤ n_j − 2 for j ≥ 0; t_n = nΔt is the time station and Δt is the time step.

Adaptive Choice of Collocation Points

In equations (5.2) and (5.4), u_J(x) and f(u_J(x)) are expressed using the full set of collocation points {x_k^{(j)}}. As discussed in the remark after Theorem 3, most of the wavelet expansion coefficients û_{j,k} for large j can be ignored within a given tolerance ε. So we can dynamically adjust the number and the locations of the collocation points used in the wavelet expansions, reducing the cost of the scheme significantly while providing enough resolution in the regions where the solution varies dramatically. We can achieve this adaptivity in the following two ways.

Deleting Collocation Points

Let ε ≥ 0 be a prescribed tolerance, j ≥ 0, and ℓ = ℓ(ε) = min(n_j/2, −log ε/log α).

Step 1. First we locate the index ranges

(k′_1, l′_1), ···, (k′_m, l′_m),   m = m(j, ε),   (5.6)

such that

|û_{j,k}| ≤ ε,   k′_i ≤ k ≤ l′_i,   i = 1, ···, m.   (5.7)

Step 2. Following Theorems 3 and 4, we can ignore û_{j,k} in (5.2) for k_i ≤ k ≤ l_i, i = 1, ···, m, where k_i = k′_i + ℓ + 3 and l_i = l′_i − ℓ − 3; namely, we redefine u_j(x) as

u_j(x) := Σ_{−1 ≤ k ≤ n_j−2, k ∉ K_j} û_{j,k} ψ_{j,k}(x),

where K_j = ∪_{1 ≤ i ≤ m} [k_i, l_i].

Step 3. The new collocation points and unknowns will be {x_k^{(j)}} and u_J(x_k^{(j)}), with k = 1, ···, L − 1 if j = −1, and k ∈ {−1, ···, n_j − 2}\K_j if j ≥ 0.
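Steps 1-2 above amount to a run-length scan of one level's coefficient array; a sketch with hypothetical names:

```python
import numpy as np

def deletable_indices(c, eps, ell):
    # Steps 1-2 of "Deleting Collocation Points": find the maximal runs
    # [k'_i, l'_i] with |c_k| <= eps, then shrink each run by ell + 3 on
    # both sides; the surviving interior indices can be dropped safely.
    small = np.abs(c) <= eps
    K = set()
    k = 0
    while k < len(c):
        if small[k]:
            l = k
            while l + 1 < len(c) and small[l + 1]:
                l += 1
            K.update(range(k + ell + 3, l - ell - 3 + 1))  # shrunken run
            k = l + 1
        else:
            k += 1
    return K

c = np.zeros(64); c[:4] = 1.0; c[60:] = 1.0   # large coefficients at the ends
K = deletable_indices(c, eps=1e-8, ell=7)
# the long middle run [4, 59] shrinks by ell + 3 = 10 to [14, 49]
assert K == set(range(14, 50))
```

The shrink margin ℓ + 3 mirrors Theorems 3 and 4: coefficients near the edge of a small run cannot be dropped because neighboring basis functions still contribute there.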
Increasing the Level of Wavelet Spaces

Let ε ≥ 0 again be some prescribed tolerance. If

max_k |û_{J,k}^n| > ε,   (5.8)

where the superscript n indicates the solution at time t = t_n, then we can increase the number of wavelet spaces W_j in the expansion of the numerical solution u_J(x) in (5.2), say up to W_{J′}, J′ > J.

Step 1. At t = t_n, if condition (5.8) is satisfied, let J′ > J and define a new solution vector

ũ_{J′}^n := (u^{(−1)}, u^{(0)}, ···, u^{(J−1)}, u^{(J)}, ···, u^{(J′−1)})^⊤,

where u^{(j)} = {P_J u(x_k^{(j)})}_{k=−1}^{n_j−2} for J ≤ j ≤ J′ − 1.

Step 2. Use ũ_{J′}^n on the right hand side of scheme (5.5) to advance the solution to time step t_{n+1} and obtain the solution u_{J′}^{n+1}. Then u_{J′}^{n+1}(x) = P_{J′}u_{J′}^{n+1} ∈ V_0 ⊕ W_0 ⊕ ··· ⊕ W_{J′−1} will be the new numerical solution, which yields a better approximation of the exact solution of (5.1).
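Putting the pieces of this section together in the simplest possible setting, the sketch below does forward-Euler time marching as in (5.5) for u_t + u_x = 0, with a first-order upwind difference standing in for the wavelet collocation derivative and no adaptivity (all names and parameters are illustrative):

```python
import numpy as np

# Forward-Euler time marching in the spirit of (5.5), with a first-order
# upwind difference standing in for the wavelet collocation derivative.
L, N = 1.0, 200
dx = L / N
x = np.linspace(0.0, L, N + 1)
u = np.exp(-200.0 * (x - 0.3) ** 2)    # smooth initial pulse
dt = 0.5 * dx                           # CFL condition: dt <= dx for speed 1
n_steps = 100                           # advance to t = 0.25
for _ in range(n_steps):
    ux = np.zeros_like(u)
    ux[1:] = (u[1:] - u[:-1]) / dx      # upwind difference (wave speed +1)
    u = u - dt * ux                     # Euler step; f_x(u) = u_x, g = 0
    u[0] = 0.0                          # inflow boundary u(0, t) = 0
t_end = n_steps * dt                    # = 0.25
# the exact solution is the initial pulse translated to x = 0.3 + t_end
assert abs(x[np.argmax(u)] - (0.3 + t_end)) < 0.02
```

The first-order stand-in is of course far more diffusive than the cubic spline wavelet derivative; its only role here is to make the time-marching structure of (5.5) concrete.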
6  Numerical Results
CPU Performance of DWT

The theoretical operation counts for performing the DWT in both directions and for computing the derivatives at all collocation points are O(N log N), where N is the total number of terms in the wavelet expansion (3.10). We take the function in (6.1) and form its wavelet interpolation expansion (3.10) for L = 10 and J = 2, 3, ···, 10; the total number of terms (or collocation points) N = 2^J L − 1 then lies between 79 and 10240. In Figure 5, we plot the CPU time for performing the DWT back and forth in both directions ('o' in the figure) and for the computation of the derivatives at all collocation points ('+' in the figure). Also drawn in the figure is a straight line, which indicates an almost linear growth of the CPU time up to 12000 points.

Adaptive Approximation of Wavelet Interpolation Expansion

We consider the following function with high gradients:

f(x) =
    h_1(x + 1, 0.3)                if −1 ≤ x ≤ −0.7,
    0                              if −0.7 ≤ x ≤ −0.5 − δ,
    h_1(x + 0.5, δ)                if −0.5 − δ ≤ x ≤ −0.5 + δ,
    0                              if −0.5 + δ ≤ x ≤ 0,           (6.1)
    sin(5πx) h_1(x − 0.25, 0.25)   if 0 ≤ x ≤ 0.5,
    h_2((x − 0.5)/(2δ))            if 0.5 ≤ x ≤ 1,
where δ = 0.01, h_1(x, a) is an exponential hat function and h_2(x) is a step-like function, defined as

h_1(x, a) = exp(−1/(a² − x²))  if |x| < a,   0 otherwise,   (6.2)

and

h_2(x) = 0  if x < 0,   2772 ∫_0^x t^5 (1 − t)^5 dt  if 0 ≤ x ≤ 1,   1 otherwise.   (6.3)

(The factor 2772 = 1/∫_0^1 t^5(1 − t)^5 dt normalizes h_2 so that it rises smoothly from 0 to 1.)
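The building blocks h_1 and h_2 of (6.1) can be transcribed directly; here the factor 2772 is taken as the normalization 1/∫_0^1 t^5(1−t)^5 dt = 1/B(6, 6), so that h_2 ramps smoothly from 0 to 1:

```python
import numpy as np
from math import comb

def h1(x, a):
    # Exponential hat (6.2): smooth bump supported on |x| < a.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    inside = np.abs(x) < a
    out[inside] = np.exp(-1.0 / (a ** 2 - x[inside] ** 2))
    return out

def h2(x):
    # Step-like ramp (6.3): 2772 * integral_0^x t^5 (1-t)^5 dt on [0, 1],
    # extended by 0 below and 1 above; 2772 * B(6, 6) = 1, so h2(1) = 1.
    x = np.clip(np.atleast_1d(np.asarray(x, dtype=float)), 0.0, 1.0)
    # expand t^5 (1 - t)^5 binomially and integrate term by term
    val = sum((-1) ** k * comb(5, k) * x ** (6 + k) / (6 + k) for k in range(6))
    return 2772.0 * val

assert np.isclose(h2(1.0)[0], 1.0)   # normalization check
```

With these, the piecewise definition (6.1) is a direct np.select over the six intervals; only the normalization of h_2 is an inference here, made so that the branch values of (6.3) join continuously.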
First we construct the full wavelet interpolation expansion (3.10), P_J f(x), for J = 7 and L = 40; the total number of wavelet functions (or collocation points) is N + 4 = (2^J L − 1) + 4 = 2^J L + 3 = 5123 (including the four boundary functions in I_{b,J}f(x)). At the top of Figure 6, we plot f(x) (solid line) and P_J f(x) at non-interpolation points; at the bottom we show the absolute error on a logarithmic scale. In Figure 7, we plot the components f_0 ∈ V_0 and g_j(x) ∈ W_j, 0 ≤ j ≤ 6, in P_J f(x) = I_{b,J}f(x) + f_0 + g_0 + ··· + g_{J−1}. We can see that only the higher frequency part is retained in the higher wavelet spaces W_j (notice that the scale varies between plots). Then we use the procedure at the end of Section 5 to filter out the coefficients f̂_{j,k} which are less than ε in magnitude. In Figure 8, we take ε = 10^{−5}; the number of retained wavelet coefficients f̂_{j,k} reduces to 289, with the accuracy of the approximation (bottom curves) within order ε. In Figure 9, we plot the solution at the remaining interpolation points; the expected clustering of the interpolation points is seen at the locations where the function changes most dramatically. In Figure 10, we plot the magnitudes of the wavelet coefficients f̂_{j,k}, j ≥ −1, one level above another; a high density of wavelet coefficients reflects the presence of high gradients in the approximated function. In Figure 11, we take ε = 10^{−4}; the number of retained wavelet coefficients f̂_{j,k} reduces to 206, with the accuracy of the approximation (bottom curves) within order ε.

Linear Hyperbolic PDE's
We consider the problem of the linear hyperbolic partial differential equation

u_t + u_x = 0,   0 ≤ x ≤ 1,
u(0, t) = 0,   (6.4)
u(x, 0) = f(x)  or  h_2(x/(2δ)).
In Figure 12, we present the solution of (6.4) at t = 0.1 with initial condition u(x, 0) = h_2(x/(2δ)), δ = 0.004, and the numerical parameters L = 20, J = 8, ε = 10^{−4}. Third order Runge-Kutta methods are used for all numerical results presented here, and we update the mesh every 5 time steps. Figure 12a shows the numerical solution (plus signs) at 240 collocation points at time t = 0.1 against the exact solution (solid line); Figure 12b shows the distribution of the wavelet coefficients for each level of wavelet spaces. The number of collocation points fluctuates around 240, and only the wavelet coefficients of those wavelet basis functions close to the large gradients remain in the numerical solution. Figure 12c shows the errors of the numerical solution on a logarithmic scale at all remaining collocation points.

Figure 13(a-f) shows the results for (6.4) with f(x) in (6.1) as initial condition (δ = 0.02) and the numerical parameters L = 15, J = 7, ε = 10^{−4}. Figures 13(a-c) show the solutions at times t = 0.1, 0.2, 0.3, when the numbers of collocation points are 385, 394, 392, respectively. Figures 13(d-f) show the wavelet coefficients of the numerical solution at those times, i.e. t = 0.1, 0.2 and 0.3, respectively. Figures 13(g-i) show the errors of the numerical solutions on a logarithmic scale at t = 0.1, 0.2, 0.3, respectively.

Inviscid Burgers Equation

Finally, we consider the problem of the nonlinear hyperbolic partial differential equation

u_t + (u²/2)_x = 0,   −1 ≤ x ≤ 2,
u(0, t) = given,   (6.5)
u(x, 0) = f(x),

where

f(x) = −sin(πx)  if −1 ≤ x ≤ 1,   0 otherwise.   (6.6)

The solution of Burgers' equation develops a shock at time t = 1/π ≈ 0.318. In this case, we take L = 15, J = 8, ε = 10^{−4}. Every 5 iterations we change the number and locations of the collocation points according to the criteria proposed at the end of Section 5. The numbers of collocation points are 292, 295, 303 at times t = 0.3, 0.318, 0.319, respectively. Figures 14(a)-(c) show the numerical solutions at those three times; the last of these shows the solution a little while after the shock has developed at x = 0. Further integration past this time produces oscillations in the numerical solution, as the numerical method has no mechanism to capture a real shock. Figures 14(d-f) show the wavelet coefficients used in the numerical solution at times t = 0.30, 0.318 and 0.319, respectively. The numerical scheme automatically puts more collocation points near the high gradient (x = 0) and the derivative discontinuity (x = 1).
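The shock time quoted for this initial data follows from a one-line characteristics argument:

```latex
x(t) = x_0 + f(x_0)\,t
\quad\Longrightarrow\quad
t_s = -\,\frac{1}{\min_{x_0} f'(x_0)}
    = -\,\frac{1}{\min_{x_0}\bigl(-\pi\cos(\pi x_0)\bigr)}
    = \frac{1}{\pi} \approx 0.318 ,
```

since characteristics of the inviscid Burgers equation carry constant data and first cross where the compression f′(x_0) is most negative, here at x_0 = 0; this matches the reported breakdown between t = 0.318 and t = 0.319.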
7  Conclusion

In this paper, we have constructed a fast Discrete Wavelet Transform (DWT) which enables us to construct collocation methods for nonlinear PDE's. The adaptivity of the wavelet approximation is conveniently implemented by examining the wavelet coefficients. Preliminary tests on the solution of PDE's indicate that such an approach will be important in large scale computations where the solution develops extremely high gradients in isolated regions and a uniform mesh is not practical. A final note on the use of higher order splines for the construction of MRA's for Sobolev spaces: the method used in this paper can be utilized to yield higher order approximations of smooth functions. Considering the compactness of the support of the wavelet functions and the smoothness required in solving second order differential equations, we have limited our discussion to the case of cubic splines in this paper.
Acknowledgement The authors would like to thank the editor and, especially, the reviewers of this paper for their suggestions.
References

[1] R. Glowinski, A. Rieder, R. O. Wells, Jr., and X. Zhou, A wavelet multilevel method for Dirichlet boundary value problems in general domains, Technical Report 93-06, Computational Mathematics Laboratory, Rice University, 1993.

[2] E. Bacry, S. Mallat and G. Papanicolaou, A wavelet based space-time adaptive numerical method for partial differential equations, Technical Report No. 591, Robotics Report No. 257, Courant Institute of Mathematical Sciences, New York University, Nov. 1991.

[3] G. Beylkin, On wavelet-based algorithms for solving differential equations, preprint, 1992.

[4] L. Greengard and V. Rokhlin, On the numerical solution of two-point boundary value problems, Yale University Technical Report YALEU/DCS/RR-692, 1989.

[5] S. Jaffard, Wavelet methods for fast resolution of elliptic problems, SIAM Journal on Numerical Analysis, 29(4):965-986, 1992.

[6] J.-C. Xu and W.-C. Shann, Galerkin-wavelet methods for two point boundary value problems, Numer. Math., 63(1):123-142, 1992.

[7] I. Daubechies, Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math. 41 (1988), pp. 909-996.

[8] Y. Meyer, Ondelettes et opérateurs, I: Ondelettes, Hermann, Paris, 1990.

[9] C. K. Chui, An Introduction to Wavelets, Academic Press, 1992.

[10] A. Cohen, I. Daubechies, and P. Vial, Wavelets on the interval and fast wavelet transforms, submitted to Applied and Computational Harmonic Analysis.

[11] Y. Meyer, Ondelettes sur l'intervalle, Rev. Mat. Iberoamericana, 7 (1991), 115-143.

[12] C. K. Chui and E. Quak, Wavelets on a bounded interval, CAT Report 265, Department of Mathematics, Texas A&M University, March 1992.

[13] A. Harten, Multiresolution representation of data, Courant Mathematics and Computing Laboratory Report No. 93-002, June 1993.

[14] J. Z. Wang, Construction of wavelet bases in Sobolev spaces on a finite interval, report at the 1992 AMS Summer Research Conference "Wavelets and Applications", South Hadley, July 1992.

[15] G. Beylkin, Wavelets, Multiresolution Analysis and Fast Numerical Algorithms, draft of INRIA Lecture Notes, 1991.

[16] R. Adams, Sobolev Spaces, Academic Press, 1975.

[17] I. J. Schoenberg, Cardinal Spline Interpolation, CBMS-NSF Regional Conference Series in Applied Mathematics 12, SIAM, Philadelphia, 1973.

[18] C. A. Hall and W. W. Meyer, Optimal error bounds for cubic spline interpolation, J. Approx. Theory 16, 105-122 (1976).

[19] T. R. Lucas, Error bounds for interpolating cubic splines under various end conditions, SIAM J. Numer. Anal., Vol. 11, No. 3, 569-584 (1974).

[20] C. de Boor, A Practical Guide to Splines, Springer-Verlag, New York, 1978.

[21] B. K. Swartz and R. S. Varga, Error bounds for spline and L-spline interpolation, J. Approx. Theory, 6 (1972), pp. 6-49.

[22] H. Yserentant, On the multi-level splitting of finite element spaces, Numer. Math. 49, 379-412 (1986).

[23] S. Jaffard and Ph. Laurençot, Orthonormal wavelets, analysis of operators, and applications to numerical analysis, in Wavelets: A Tutorial in Theory and Applications, C. K. Chui (ed.), Academic Press, pp. 542-601 (1992).

[24] Y. Ikebe, On inverses of Hessenberg matrices, Linear Algebra and its Applications 24:93-97 (1979).

[25] W. Cai and C.-W. Shu, Uniform high order spectral methods for one and two dimensional Euler equations, J. Comput. Phys. 104 (1993), 427-443.

[26] L. Jameson, On the differentiation matrix for Daubechies-based wavelets on an interval, ICASE Report 93-94, December 1993.
Appendix

Proof of (3.29). The proof is a straightforward application of Lemma 6. For M_j in (3.9), we have (a_2, a_3, ···, a_n) = (−1/13, −1/14, ···, −1/14), (b_1, b_2, ···, b_n) = (1, 1, ···, 1) and (c_2, c_3, ···, c_n) = (−1/14, ···, −1/14, −1/13), where n = n_j = 2^j L. Therefore, the sequence {u_m} in (3.26) satisfies the relations

u_0 = 0,  u_1 = 1,  u_2 = 14,  u_3 = 2534/13,   (A.1)

and, for 4 ≤ m ≤ n,

u_m = c̄_m(−u_{m−2} + 14u_{m−1}),   (A.2)

where c̄_m = 13/14 if m = n and c̄_m = 1 otherwise.
Recursive relation (A.2) is a finite difference of order 2 whose general solution is of the following form um = c¯m (c1 αm−3 + c2 β m−3 ) where α = 7 +
√
192/2, β = 7 −
√
(A.3)
192/2 are the two distinct roots of the quadratic equation x2 − 14x + 1 = 0,
and constant c1 and c2 are chosen so equation (A.3) is valid for m = 2, 3. Therefore, u0 = 0, u1 = 1 um = c¯m (µ1 αm−3 + µ2 β m−3 ), where µ1 =
α ( 2534 α−β 13
− 14β) > 0, µ2 =
β (14α α−β
−
2534 ) 13
2≤m≤n
(A.4)
> 0.
Similarly, we can show that

v_{n+1} = 0,  v_n = 1,   (A.5)

v_m = c̄_{n−m+1}(μ_1 α^{n−2−m} + μ_2 β^{n−2−m})   for 1 ≤ m ≤ n − 1,   (A.6)

and

v_0 = −(1/a_1)(δ_1 α^{n−4} + δ_2 β^{n−4}),

where δ_1 = (13α − 1)μ_1/14 > 0 and δ_2 = (13β − 1)μ_2/14 < 0.

Finally, following (3.28) in Lemma 5, we have the following estimates on the inverse of M_j. Denote e_j, 1 ≤ j ≤ n, as
e_j = 1  if j = 1,   13/14  if 2 ≤ j ≤ n − 1,   1  if j = n.
Case 1: i ≤ j, 1 ≤ j ≤ n − 1, i = 1. Then

α_{i,j} = −e_i c̄_{n−j+1} (μ_1 α^{n−2−j} + μ_2 β^{n−2−j}) / (δ_1 α^{n−4} + δ_2 β^{n−4}),   (A.7)

so we have

|α_{i,j}| ≤ (14/13) · α^{n−2−j}(μ_1 + μ_2 z^{n−2−j}) / (α^{n−4}(δ_1 − δ_2 z^{n−4}))
          ≤ (14/13) · (α(μ_1 + μ_2/z) / (δ_1 − δ_2)) · (1/α^{j−1})
          = K_1 (1/α^{|j−i|}),

where z = β/α < 1 and K_1 ≈ 1.1666.
Case 2: i ≤ j, 1 ≤ j ≤ n − 1, 2 ≤ i ≤ n. Then

α_{i,j} = −e_i c̄_{n−j+1} c̄_i (μ_1 α^{i−3} + μ_2 β^{i−3})(μ_1 α^{n−2−j} + μ_2 β^{n−2−j}) / (δ_1 α^{n−4} + δ_2 β^{n−4}).   (A.8)

Case 3: i ≤ j, j = n, i = 1. Then

α_{i,j} = −e_i / (δ_1 α^{n−4} + δ_2 β^{n−4}).   (A.9)

Case 4: i ≤ j, j = n, 2 ≤ i ≤ n. Then

α_{i,j} = −e_i c̄_i (μ_1 α^{i−3} + μ_2 β^{i−3}) / (δ_1 α^{n−4} + δ_2 β^{n−4}).   (A.10)
Case 5: i > j, j = 1, 1 ≤ i ≤ n − 1. Then

α_{i,j} = −e_j c̄_{n−i+1} (μ_1 α^{n−2−i} + μ_2 β^{n−2−i}) / (δ_1 α^{n−4} + δ_2 β^{n−4}).   (A.11)

Case 6: i > j, j = 1, i = n. Then

α_{i,j} = −e_j / (δ_1 α^{n−4} + δ_2 β^{n−4}).   (A.12)

Case 7: i > j, 2 ≤ j ≤ n − 1, 1 ≤ i ≤ n − 1. Then

α_{i,j} = −e_j c̄_j c̄_{n−i+1} (μ_1 α^{j−3} + μ_2 β^{j−3})(μ_1 α^{n−2−i} + μ_2 β^{n−2−i}) / (δ_1 α^{n−4} + δ_2 β^{n−4}).   (A.13)

Case 8: i > j, 2 ≤ j ≤ n − 1, i = n. Then

α_{i,j} = −e_j (μ_1 α^{j−3} + μ_2 β^{j−3}) / (δ_1 α^{n−4} + δ_2 β^{n−4}).   (A.14)
For Cases 2-8, we can similarly obtain

|α_{i,j}| ≤ K_m / α^{|j−i|}   for Case m,  2 ≤ m ≤ 8,

where K_2 ≈ 1.1726, K_3 ≈ 1.1607, K_4 ≈ 1.1666, K_5 ≈ 1.1666, K_6 ≈ 1.1607, K_7 ≈ 1.1722 and K_8 ≈ 1.1666. Finally, if we choose K = 1.1726, then

|α_{i,j}| ≤ K / α^{|j−i|},   1 ≤ i, j ≤ n.   (A.15)

This concludes the proof. □
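The final estimate (A.15) can be checked numerically; the sketch below assembles M_j under the reading of (3.9) used above (off-diagonal entries −1/14, with the two boundary entries a_2 = c_n = −1/13) and verifies the decay of the inverse entries at moderate distances, where floating-point noise does not dominate:

```python
import numpy as np

# Check of (A.15): entries of M_j^{-1} decay like alpha^{-|i-j|}.
n = 64
M = (np.eye(n)
     + np.diag(np.full(n - 1, -1 / 14), 1)
     + np.diag(np.full(n - 1, -1 / 14), -1))
M[1, 0] = -1 / 13            # boundary entry a_2
M[n - 2, n - 1] = -1 / 13    # boundary entry c_n
inv = np.linalg.inv(M)

alpha = 7 + np.sqrt(192) / 2             # larger root of x^2 - 14x + 1 = 0
i, j = np.indices((n, n))
d = np.abs(i - j)
mask = d <= 8                            # farther entries fall below float noise
K_num = np.max(np.abs(inv)[mask] * alpha ** d[mask])
assert K_num <= 1.1726 + 1e-3            # within the constant K of the proof
```

The distance cutoff is needed only because the true inverse entries decay below machine precision within about a dozen bands, after which round-off, not the matrix, determines the computed values.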