Wavelet Approximations for First Kind Boundary

16 downloads 0 Views 358KB Size Report
An elliptic boundary value problem in the interior or exterior of a polygon is ...... Nedelec and J. Planchard: Une m ethode variationelle d' elements nis pour la.
Wavelet Approximations for First Kind Boundary Integral Equations on Polygons

Tobias von Petersdor 1 Department of Mathematics University of Maryland, College Park College Park, MD 20742, USA

Christoph Schwab2 Department of Mathematics & Statistics University of Maryland, Baltimore County Baltimore, MD 21228, USA

Technical Note BN-1157 Institute for Physical Science and Technology University of Maryland at College Park February 1994

1 2

partially supported by the NSF under grant DMS 91-20877 partially supported by the AFOSR under grant F49620-J-0100

Abstract An elliptic boundary value problem in the interior or exterior of a polygon is transformed into an equivalent rst kind boundary integral equation. Its Galerkin discretization with N degrees of freedom on the boundary with spline wavelets as basis functions is analyzed. A truncation strategy is presented which allows to reduce the number of nonzero elements in the sti ness matrix from O(N 2) to O(N log N ) entries. The condition numbers are bounded independently of the meshwidth. It is proved that the compressed scheme thus obtained yields in O(N (log N )2) operations approximate solutions with the same asymptotic convergence rates as the full Galerkin scheme in the boundary energy norm as well as in interior points. Numerical examples show the asymptotic error analysis to be valid already for moderate values of N .

Mathematics Subject Classi cation (1991) Primary: 65N38 Secondary: 65N55

1. Introduction For regular elliptic boundary value problems in bounded or unbounded domains, the method of reduction to the boundary via Green's identities leads to equivalent boundary integral equations. Their discretization by nite elements on the boundary manifold gives rise to the so-called boundary element method, which has become a standard tool in engineering practice by now (for a survey, we refer to [24] and the references there). While classically boundary element methods have been based mainly on the so-called equations of the second kind, it was shown more recently that a discretization based on the so-called equations of the rst kind is also possible. [6, 14, 20]. In particular, in conjunction with a Galerkin type discretization one obtains symmetric and positive de nite sti ness matrices and approximations for the unknown Cauchy data of the original boundary value problem. Distinct advantages of boundary integral equation based methods are the dimensional reduction of the computational domain by one and the discretization of exterior problems. However, the resulting sti ness matrices are dense due to the nonlocal nature of the boundary integral operators. This substantially increases the complexity of the matrix generation and of a matrix vector multiplication which is the basic step in an iterative solution of the linear system. In addition, for integral equations of the rst kind, the condition of the sti ness matrices corresponding to standard nite element bases grows as the meshwidth tends to zero, i.e. the matrices are usually ill-conditioned. A substantial reduction in the complexity of the calculation of the sti ness matrix and of the multiplication of the sti ness matrix with a vector is possible by multipole expansions of the potentials proposed in [23] and the panel clustering proposed independently in [13]. Another avenue to overcome the abovementioned drawbacks of boundary integral equation based numerical methods was indicated in [2]. It is based on multiresolution discretizations of integral operators which, through the choice of a special, so-called wavelet basis, yields numerically sparse sti ness matrices, i.e. they are still densely populated, but most of their entries are so small that they can be neglected without a ecting the overall accuracy of the discretization (so-called truncation of the sti ness matrices). In the present paper the notion `wavelet' is used in the wide sense, i.e. we refer to wavelets, pre-wavelets and biorthogonal wavelets alike. For Galerkin discretizations of rather general integral operators

of order zero it was shown in [2] that generation of the sti ness matrix and the matrix vector multiplication can be achieved with essentially optimal complexity (i.e. up to possibly a logarithmic term) up to any prescribed xed accuracy. The problem with ill-conditioning did not arise for the operators considered in [2]. These observations motivate the use of wavelet bases in boundary element methods. There, however, one is less interested in truncating the sti ness matrices in order to achieve an a-priori xed accucacy as in [2], but rather in truncation which does not decrease the asymptotic convergence rate of the overall boundary element scheme. This question was addressed recently in the periodic setting for a wide class of boundary integral operators and basis functions for general Galerkin-Petrov schemes in [8] (see also [9] for a survey of these results). It was shown there in particular, that truncation schemes are possible which preserve the asymptotic rate of convergence of the Galerkin-Petrov scheme with optimal complexity. In addition, it was shown that for operators of nonzero order, a simple diagonal preconditioning of the multiscale discretization renders the condition number of the truncated sti ness matrices bounded, i.e. the above-mentioned ill-conditioning can be completely avoided. Nevertheless, the analysis of [8] exploited the smoothness of the boundary in an essential way and an analysis at this level of generality to nonperiodic, nonsmooth domains does not seem to be straightforward. For boundary integral equations of the second kind on polyhedra in lR3, computational results for a multiscale method with piecewise linear basis functions were reported recently in [10], and it was shown that various ndings from the periodic setting seem to hold there, as well. The present paper is devoted to the analysis of wavelet based symmetric Galerkin schemes for boundary integral equations of the rst kind on polygonal domains in lR2. Rather than attempting to cover as wide a class of operators and discretizations as possible, we consider here the boundary integral operators arising with boundary value problems for the Laplace equation and piecewise linear, resp. constant, prewavelet basis functions (see, for example, [4] and the references there). For our application they appear to be better suited than the fully orthogonal wavelets due to Daubecies (see [11]) which were developed primarily for signal and image processing. The piecewise polynomial spline prewavelets considered here allow in particular for the exact computation of the entries in the sti ness matrix. We analyze the convergence of the compressed Galerkin schemes resulting from appropriate truncations of the sti ness matrices and show that optimal asymptotic convergence rates in the (boundary) energy norm can be achieved with a memory of O(N log N ) and a computational complexity of O(N (log N )2) operations (here and in what follows, N denotes the number of degrees of freedom on the boundary). Since our original intention, however, was to solve boundary value problems inside or exterior to a polygonal domain in lR2, we analyze also the asymptotic accuracy of solution values at interior points of the domain which are obtained from the solutions of the compressed Galerkin scheme via the representation formula. 
We show that this convergence rate depends more sensitively on the way the sti ness matrix is truncated, and provide a truncation strategy that ensures again the optimal asymptotic convergence rate at a computational complexity of O(N (log N )2). For this result it is essential that we considered Galerkin discretizations with prewavelet bases where the vanishing moments of both, test and trial functions, yield substantially faster o -diagonal decay for the entries in the wavelet Galerkin sti ness matrix than, say, corresponding collocation schemes. Our proofs show that elements in the Galerkin sti ness matrix which stem from wavelets supported in a corner must be truncated di erently

than those corresponding to wavelets the support of which is contained in one side of the polygon. Another consequence of the use of a multilevel basis is the preconditioning of the resulting sti ness matrices. We show theoretically and computationally that the wavelet Galerkin sti ness matrices have bounded and, in fact, quite moderate condition numbers. Even though our convergence and work estimates are conservative and of asymptotic nature, we show here and in [22] numerical results which indicate that they are nearly sharp and describe the performance of the method accurately even for a moderate number of degrees of freedom. Further, although our analysis discusses rst kind integral equations for the Laplacian in a polygon in detail, all arguments apply verbatim to a wide class of boundary integral operators arising from boundary value problems for second order strongly elliptic operators with constant coecients in the plane. We exemplify this by giving the necessary argument for the integral operators arising for the problem of plane elasticity in detail. The analysis applies also to curvilinear polygons with smoothly curved boundary pieces. Then, however, it is in general not possible anymore to evaluate the entries in the wavelet Galerkin sti ness matrix analytically and numerical quadrature has to be used. The analysis of the resulting additional consistency errors will be given elsewhere. For some of the integral equation formulations there are also integral operators on the right hand side of the equation. In this paper we only analyze the ecient discretization of the integral operators on the left hand side. The integral operators on the right hand side can be eciently discretized with the same strategies, but we do not discuss this here. Our paper is organized as follows: In Section 2 we present the model boundary value problems and their reductions to boundary integral equations of the rst kind via the so-called `direct method' and the single and double layer potential ansatz methods. In Section 3, we introduce the Galerkin discretizations of these integral equations and analyze their asymptotic convergence. We prove in particular that the solution converges in the interior with a higher rate than in the boundary energy norm. In Section 4, we introduce the prewavelet bases upon which our analysis is based. Using a basic norm equivalence from [21], we show that the Galerkin matrices in the prewavelet bases have bounded condition numbers. In Section 5, we analyze the o -diagonal decay of certain blocks in the Galerkin matrices with respect to prewavelet bases. These estimates are combined in Section 6 with the Schur Lemma to yield consistency estimates for the truncation strategies. We analyze rst the truncation of all nonvertex elements of the matrix, with all entries corresponding to vertices of the polygon being kept. Next we show that the latter entries can also be truncated, albeit more care must be used in this truncation. These results are used in Section 7 to prove our main results: essentially optimal convergence rates in the boundary energy norm and for the point values in the interior are achieved by the compressed Galerkin schemes with storage of O(N log N ) words and an operation count of O(N (log N )2). In Section 8 we show that analogous results hold also for the compression of Galerkin discretizations of the single layer potential in wavelet bases. 
In Section 9 we show that all our results hold also for rst kind boundary integral equations arising in the problem of plane elasticity on polygonal domains. We conclude in Section 10 with numerical experiments which indicate that the estimates of the consistency error due to the matrix compressions obtained here are sharp. We will use the notation C for a generic constant which can have di erent values in di erent equations.

2. Boundary integral equations Let ? be a bounded, polygonal and nonintersecting boundary with N0 straight sides ?j and vertices V = fPj0gj=1;:::;N0 . The domain  lR2 is one of the components of lR2 n ?, i.e., either the bounded interior or the unbounded exterior of the curve ?. To simplify the presentation we exclude the case of interior angles of 2 (\cracks") (the boundary integral equations have to be modi ed on the cracks, but the methods in this paper will still apply). The unit normal vector on the boundary ? which points from to c := lR2 n is denoted by n (it is de ned almost everywhere on ?). We can parametrize the boundary ? by a 1-periodic function : [0; 1] ! ? satisfying ( Ni ) = Pi0; i = 1; : : : ; N0 (2.1) 0 where the components of  are linear polynomials on each of the intervals [ iN?01 ; Ni0 ], i = 1; : : : ; N0. s ( ), s  0, we denote the usual Sobolev spaces on (see, e.g., By H s( ) and Hloc [19, 16]) and by H 0(?) = L2(?) the space of functions on ? that are square integrable with respect to the surface measure ds on ?. The spaces H k (?), k = 1; 2, are de ned as spaces of functions which are continuous on ? and which have restrictions to the sides ?j of the polygon that are in H k (?j ), j = 1; : : : ; N0. We equip H k (?) with the norm

kuk2H (?) = k

N

0 X

j =1

kuk2H (? ); k = 0; 1; 2 k

(2.2)

j

where the norms on ?j are lifted from the real line with a linear parametrization of ?j . If no ambiguity can arise, we will denote the H s(?) norm simply by kuks. Spaces of fractional order 0 < s < 2 are obtained by interpolation [16]. For ?3=2 < s < 0, H s(?) denotes the space of continuous, linear functionals on H ?s (?), i.e. H s (?) = (H ?s (?))0 where the duality pairing hu; vi for u 2 H s (?), v 2 H ?s (?) extends ? u(s)v(s) ds for u; v 2 L2(?). We de ne the spaces Hk (?), k = 0; 1; 2 as spaces of functions which have restrictions to the sides ?j of the polygon that are in H k (?j ), j = 1; : : :; N0 (without the condition of continuity at the vertices). For s 2 (0; 2) we de ne Hs (?) by interpolation; for ? 23 < s < 0 we de ne Hs (?) := H s (?). The trace of U on the boundary ? is denoted by 0U . The trace operator s+1=2 ( ) ! H s (?); 0 < s < 3=2;

0: Hloc (2.3) is continuous. The normal derivative operator 1 can be shown to be a continuous linear operator mapping H1 ;loc( ) onto H ?1=2(?) (see, e.g., [6]) where R

1 ( ) j U = 0g: H1 ;loc( ) = f U 2 Hloc Here and in the next equation U = 0 is understood in the distributional sense.

We consider the model boundary value problem U = 0 in ;

(2.4)

subject to Dirichlet boundary conditions

0 U = u on ?

(2.5)

1U =  on ?

(2.6)

or Neumann boundary conditions, where u 2 H 1=2(?) and  2 H ?1=2(?). The Dirichlet problem in a bounded domain has a unique weak solution U 2 H 1( ). For the Neumann problem in a bounded domain we need the condition

h; 1i = 0

(2.7)

Then the weak solution U 2 H 1( ) is unique up to a constant. In the case of an exterior domain we require for both Dirichlet and Neumann problem the condition U (x) = c log jxj + o(1); x ! 1: (2.8) where c is unknown. In this case the boundary value problem has a unique weak solution 1 ( ), and c = ?h; 1i =(2 ). (For the exterior Dirichlet problem one can also U 2 Hloc require U (x) = O(1) at in nity. This choice is discussed in section 9 for elasticity.) We study the numerical solution of (2.4){(2.7) via boundary integral equations of the rst kind. We denote by e(x; y) := ? 21 log jx ? yj; x; y 2 lR2 (2.9) the fundamental solution of the Laplacian and assume in the case of the Dirichlet problem that [14] diam( ) < 1: (2.10) We can de ne boundary integral operators V , K , K 0, W by V v(x) = ? e(x; y)v(y) dsy; Kv(x) = ? @n@ e(x; y)v(y) dsy (2.11) y (2.12) K 0v(x) = @n@ e(x; y)v(y) dsy; Wv(x) = ? @n@ @n@ e(x; y)v(y) dsy ? x x ? y where x 2 ?. They are continuous mappings in the following spaces [6]: Z

Z

Z

Z

V : H ?1=2(?) ! H 1=2(?); K : H 1=2(?) ! H 1=2(?)

(2.13)

K 0: H ?1=2(?) ! H ?1=2(?); W : H 1=2(?) ! H ?1=2(?): (2.14) We can reformulate the Dirichlet problem (2.4){(2.6) as a boundary integral equation in two ways.

Direct method for the Dirichlet problem: Find  2 H ?1=2(?) such that V  = ( 12 I + K )u (2.15) (where I denotes the identity operator). We then obtain the solution U by the representation formula U (x) = e(x; y)(y) ? @n@ e(x; y)u(y) dsy ; x 2 : (2.16) ? y Single layer ansatz for the Dirichlet problem: Find 2 H ?1=2(?) such that V =u (2.17) and we obtain U (x) by U (x) = ? e(x; y) (y) dsy; x 2 : (2.18) As constants are solutions of the homogeneous Neumann problem, the operator W maps constants to zero. Hence the solutions of boundary integral equations with the operator W will only be unique up to a constant. By using the operator W1 given by W1v := Wv + hv; 1i (2.19) we will obtain unique solutions with mean zero on ?. We can derive two boundary integral formulations for the Neumann problem. Direct method for the Neumann problem: Find u 2 H 1=2(?) such that W1u = ( 21 I ? K 0): (2.20) We can then obtain U (x) using (2.16). If is an exterior domain the solution u of (2.20) and the trace u := 0U can di er by a constant as u = u ? hu; 1i = j?j (where j?j := ? 1 ds). Double layer ansatz for the Neumann problem: Find  2 H 1=2(?) such that W1  =  (2.21) and we obtain U (x) by U (x) = ? @n@ e(x; y)(y)dsy; x 2 : (2.22) ? y The boundary integral operators V and W1 are self adjoint. They are also elliptic in the following sense (see, e.g., [14, 20]): There exists a positive constant C such that hV ; i  C kk2H ?1 2(?) 8 2 H ?1=2(?) (2.23) and (2.24) hW1u; ui  C kuk2H 1 2(?) 8u 2 H 1=2(?): All the considered boundary integral equations are integral equations of the rst kind. These formulations yield symmetric and elliptic operators also for other problems with symmetric and elliptic di erential operators, e.g., elasticity problems (see Section 9 ahead). Other approaches (e.g. double layer ansatz for the single layer potential) lead to integral equations of the second kind where the operators are not symmetric. !

Z

Z

R

Z

=

=

3. Galerkin boundary element methods For the approximation of the solution we choose a positive integer N and use the uniform mesh with the meshpoints ( NNj 0 ), j = 0; : : : ; NN0 ? 1. Then we de ne the space SN0 of piecewise constant functions on this mesh and the space SN1 of continuous, piecewise linear functions on this mesh. As SN0 and SN1 are subspaces of H ?1=2(?) and H 1=2(?), respectively, we can approximate the boundary integral equations (2.15) (2.17), (2.20), (2.21) with the Galerkin method. We use test and trial functions in these nite dimensional subspaces and obtain the following Galerkin approximations. Direct method for Dirichlet problem: Find N 2 SN0 such that 8' 2 SN0 (3.1) hV N ; 'i = ( 21 I + K )u; ' and de ne an approximation UN (x) for x 2 using (2.16): UN (x) = e(x; y)N (y) ? @n@ e(x; y)u(y) dsy : (3.2) ? y Ansatz method for Dirichlet problem: Find N 2 SN0 such that hV N ; 'i = hu; 'i 8' 2 SN0 (3.3) and de ne the approximation UN (x) for x 2 by E

D

!

Z

Z

UN (x) = e(x; y) N (y) dsy :

(3.4)

?

Direct method for Neumann problem: Find uN 2 SN1 with 8' 2 SN1 hW1uN ; 'i = ( 12 I ? K 0); ' and de ne the approximation UN (x) for x 2 by E

D

(3.5)

(3.6) UN (x) = e(x; y)(y) ? @n@ e(x; y)uN (y) dsy : ? y Ansatz method for Neumann problem: Find N 2 SN1 with hW1N ; 'i = h; 'i 8' 2 SN1 (3.7) and de ne the approximation UN (x) for x 2 by UN (x) = ? ? @n@ e(x; y)N (y)dsy : (3.8) y The rst kind integral equations obtained in this fashion are equivalent to the weak form of the boundary value problem (2.4){(2.6). Due to (2.23) and (2.24) the Galerkin equations are symmetric and positive de nite and admit unique solutions. Asymptotic estimates on the boundary follow from the approximation properties and are well known. Estimates in an interior point can be obtained using duality principles (see e.g., Costabel, Stephan, duality). The singularities of the exterior problem in uence the convergence rates in the case of the ansatz method, but not for the direct method. !

Z

Z

Theorem 3.1 Assume that the given data satisfy u 2 H 2(?) for the Dirichlet problem and  2 H1 (?) for the Neumann problem. Denote by !min and !max the smallest and the largest interior angle of the domain , respectively. De ne

 := !  ; c := 2 ?! max

(3.9)

min

and let for the direct methods (2.15), (2.20) with arbitrary " > 0

 := minf 32 ;  ? "g; ^ := : For the ansatz methods (2.17), (2.21) let with arbitrary " > 0  := minf 23 ;  ? "; c ? "g; ^ := minf 23 ;  ? "g Then the Galerkin approximations N , N , uN , N satisfy k ? N kH ?1 2(?)  CN ? ; k ? N kH ?1 2(?)  CN ?

(3.10)

ku ? uN kH and we have for x 2 ,

(3.13)

=

(3.11) (3.12)

=

1=2

(?)

 CN ?; k ? N kH

1=2

(?)

 CN ?

jU (x) ? UN (x)j  CxN ??^ :

(3.14) Proof : We will only consider the case of the Neumann problem. The proofs for the Dirichlet problem are completely analogous. From the regularity of the Neumann problem in (see, for example, [12]) we know that  2 H1(?) implies u 2 H 1=2+(?) with  = min(3=2;  ? "). Then the quasioptimality of the Galerkin solution and the approximation properties of SN1 in H 1=2(?) yield (3.13) for method (3.5). For method (3.7) we can split  in two parts 1 and 2 satisfying  = 1 + 2; W11 = ( 21 I ? K 0); W12 = ( 21 I + K 0): By the equivalence of (2.20) to the original Neumann problem we conclude that 1 is the trace of the solution of the Neumann problem in with data , and that 2 is the trace of the solution of the Neumann problem in c with data  (for c the normal vector on ? has the opposite direction, therefore (2.20) holds with the opposite sign for K 0). Using regularity in and c , quasioptimality, and the approximation property of SN1 we obtain (3.12) with  = minf 23 ;  ? "; c ? "g. In order to prove (3.14) for method (3.5),(3.6) we note that U (x) ? UN (x) = hu ? uN ; ^ i with ^ = ? 1e(x;  ). Let ^ be the solution of W1^ = ^ . We can split ^ as follows ^ = ^1 + ^2; W1^1 = ( 21 I ? K 0)^; W1^2 = ( 12 I + K 0)^: Here ^1 is the trace of the solution of the Neumann problem with data ^ . As x 2 , we have ^ 2 H1 (?) and hence ^1 2 H 1=2+^(?) with ^ = min( 23 ;  ? "). The function ^2 is the trace of the solution of the Neumann problem in c with data ^ . As x 2 , this solution must be 1e(x;  )j , and hence ^2 := e(x;  )j?. Since ^2 2 H 2(?) we have ^ 2 H 1=2+^(?). Now the Nitsche duality argument yields (3.15) U (x) ? UN (x) = u ? uN ; W1^ = W1(u ? uN ); ^ = W1(u ? uN ); ^ ? ^N c

D

E

D

E

D

E

with arbitrary ^N 2 SN1 by using the Galerkin equation. By choosing ^N 2 SN1 as the best approximation in H 1=2(?) and with the estimate jhW1v; v^ij  C kvkH 1 2(?) kv^kH 1 2(?) we get (3.14). The same argument applies to method (2.21). 2 For the practical computation of the Galerkin solutions one chooses bases for the boundary element spaces SN0 , SN1 . The standard B-spline basis for SN0 is given by the functions j ?1 ), ( j ) and zero '~j(N ) which are identical 1 on the segment between the nodes ( NN NN0 0 1 everywhere else. The standard basis for SN is given by the piecewise linear functions 'j(N ) which are 1 in node ( NNj 0 ) and zero in all other nodes. Then the Galerkin equations (3.1), (3.3) lead to linear systems where the matrix has the entries V '~j(N ); '~k(N ) : (3.16) The Galerkin equations (3.5), (3.7) lead to linear systems where the matrix has the entries =

D

=

E

D

W1'j(N ); 'k(N )

E

(3.17)

For a polygonal boundary ? one can nd analytical formulae for the matrix elements (3.16), (3.17). Therefore the matrices can be computed in O(N 2 ) operations. The condition number (with respect to the 2-matrix norm) of the matrices (3.16), (3.17) grows like O(N ) [24].

4. Prewavelet bases Now we choose special bases of SNi , i = 0; 1 based on hierarchical decompositions. We assume that N = 2L with a nonnegative integer L. Then the spaces V i;` := S2i , ` = 0; : : : ; L form a hierarchy of nested spaces `

V i;0  V i;1  : : :  V i;L

(4.1)

where i = 0; 1. We will later omit the index i if no ambiguity arises. The dimension of (VN i;`) is N` := 2` N0. The meshpoints for V i;` are ( Nj ). We will also use the notation '~`j := '~j , j = 1; : : :; N` for the basis of V 0;` and '`j := '(jN ), j = 1; : : : ; N` for the basis of V 1;`. `

`

`

4.1. Bases for the Neumann problem

For the Neumann problem we need the space V 1;L of piecewise linears. We de ne for ` = 1; : : : ; L the space W 1;` as the \orthogonal complement with respect to " of V 1;`?1 in the space V 1;`:

W 1;` := ' 2 V 1;` h'  ;  i = 0 8 2 V 1;`?1 ; ` = 1; : : : ; L: n

o

(4.2)

Here we use the inner product h ; i of L2[0; 1) for the functions '  ;  : [0; 1) ! lR. Note that ' 2 W 1;` implies h'  ; 1i = 0, h'  ; ti = 0. This vanishing moment property will be exploited in the compression of the Galerkin matrix in sections 5 and 6. Then V 1;`+1 = V 1;`  W 1;`+1 and we obtain the multilevel splitting

V 1;L = W 1;0  W 1;1    W 1;L

(4.3)

where we de ne W 1;0 := V 1;0. Hence every function uL 2 V 1;L admits a unique decomposition uL = w0 + w1 +  + wL ; w` 2 W 1;`; ` = 0; : : : ; L:

(4.4)

Let us denote by P` the \L2 projection with respect to " characterized by P`: L2(?) ! V 1;`; h(v ? P` v)  ; '  i 8' 2 V 1;`:

(4.5)

Let P?1 := 0. Then we have in (4.4) that w` = (P` ? P`?1 )uL and there holds dim W 1;` = dim V 1;` ? dim V 1;`?1 = N02`?1 = N`?1. A basis for W 1;` is given by W 1;` = spanf j`gNj=1?1 ; ` = 1; : : :; L (4.6) `

where (see, for example, [4], [3]) ` j

=

5

X

k=1

ak '`2j?4+k ; a = (1; ?6; 10; ?6; 1):

(4.7)

Since the functions j` are not orthogonal to their translates, they are prewavelets in the terminology of [4]. For notational convenience we de ne j0 := '0j for j = 1; : : : ; N0 and N?1 := N0 so that (4.6) also holds for ` = 0. It follows from (4.3) and (4.4) that the functions L

[

form a basis for

f j` j j = 1; : : : N`?1 g

`=0 V 1;L: Any uL 2 V 1;L

(4.8)

can be written as

uL =

L N?

` 1 X X

`=0 j =1

u`j j`:

(4.9)

The decompositions (4.4), (4.9) are stable in the following sense: Proposition 4.1 For every uL 2 V 1;L, we have for the decompositions (4.4), (4.9) for 0  s < 3=2 the norm equivalences L X L 2 ku kH (?)  C1 22s` kw`k2L2(?) `=0 s

and

L

X

`=0

22(s?1=2)`

NX ?1

 C2

L

X

`=0

L X ` 2 juj j  C3 22s` kw`k2L2(?) j =0 `=0 `

NX ?1

ju`j j2

(4.10)

 C4kuLk2H (?)

(4.11)

22(s?1=2)`

`

j =0

s

where the constants Ci do not depend on L. Proof : We note that the parametrization  gives a correspondence between functions on I := lR=ZZ and functions on ? which is an isomorphism between H s(I ) and H s(?) for 0  s  23 . Hence the assertions for 0  s < 23 follow from corresponding results on lR=ZZ. In this case, however, the left inequality in (4.10) and the right inequality in (4.11) follow

from [21, Theorem 2] (although the proof there is given for two dimensional domains, it applies verbatim to lR=ZZ). For the remaining two inequalities note that the Gram matrix D

2`=2 j`  ; 2`=2 k`  

E

j;k=0:::N ?1 `

for the uniform mesh on lR=ZZ has entries which depend only on jj ? kj and not on `. The entries for jj ? kj = 0; 1; 2; 3; : : : are 72; 40=3; ?4=3; 0; 0; : : :. Hence the matrix is strictly diagonally dominant, its largest eigenvalue is bounded and its smallest eigenvalue is bounded away from 0, uniformly for all `. 2 In the estimates in Section 6 ahead, however, we will also use a variant of (4.11) which is valid in the range 0  s  2. Proposition 4.2 Let u 2 H s (?), 0  s  2 and let uL = PLu 2 V 1;L with PL de ned in (4.5). Then there holds for the decompositions (4.4) and (4.9) of uL: L

X

`=0

22(s?1=2)`

NX ?1 `

j =0

L

ju`j j2  C5 22s` w` 20  C6L kuk2s : X

`=0





(4.12)

Proof : We apply (4.11) for s = 0 and uL = w` , 0  `  L, and get

2?`

NX ?1 `

j =0

ju`j j2  C3 w` 20 ; 0  `  L:





Multiplying by 22s` and summing over ` = 0; :::; L yields the rst inequality in (4.12). To prove the second inequality in (4.12), we observe that with P?1 := 0 and the approximation property of P` L

X

`=0



22s` w`

2 0



=

L

X

22s` k(P` ? P`?1 )uk20

`=0 L X

 2 22s` ku ? P` uk20 + ku ? P`?1 uk20 n

`=0 L X

o

 2c 22s` 2?2s` + 2?2s(`?1) kuk2s `=0 = 6c(L + 1) kuk2s : n

o

This yields the second inequality in (4.12) with a constant C6 independent of L. 2 We can use the wavelet basis (4.8) in the Galerkin equations (3.5). Writing uL in the form (4.9) we obtain the linear system D

X

`=0;:::;L k=1;:::;N ?1

W1 k` ;

`

where g := ( 21 I ? K 0).

`0 k0

E

D

u`k = g;

`0 j0

E

; `0 = 0; : : :; L; k = 1; : : : ; N`0?1

(4.13)

Let us denote the NL  NL coecient matrix by AL AL(j;`);(j0;`0 ) := hW1 j`00 ; j`i (4.14) using the pairs (j; `) and (j 0; `0) for row and column indices of the matrix AL. Denote the unknown coecient vector by ~u = (uj` )`=0;:::;L; j=1;:::;N ?1 and the right hand side vector by ~b with entries b`j = g; j` . Then we can write (4.13) as AL~u = ~b: (4.15) For the double layer ansatz method (2.21) we analogously obtain the linear system AL~ = ~c (4.16) where ~ contains the coecients of L with respect to the basis f j`g, and the components of ~c are given by c`j = ; j` . An immediate consequence of the norm equivalence (4.10) with s = 21 , the continuity (2.14) and the H 1=2(?)-ellipticity (2.24) is Proposition 4.3 Let AL be the Galerkin wavelet sti ness matrix of W1 with respect to the wavelet basis (4.7). Then we have for the condition number with respect to the Euclidian vector norm  = cond2(AL)  C < 1 (4.17) with C independent of L. We now solve the linear system (4.15) with the conjugate gradient (CG) algorithm. We start with ~u(0) = 0 and obtain a sequence of vectors ~u(k), k = 0; 1; 2; ::: . It is well known that p ? 1 k (4.18) ~u(k) ? ~u A  C ;  = p + 1 where k~vk2A = ~v>AL~v. Let uL(k) 2 V 1;L denote the function corresponding to the coecient vector ~u(k). We then have for the error in the energy norm the estimate `

E

D

E

D





L

L



uL ? uL(k)



 C ?1 W1(uL ? uL(k)); uL ? uL(k) (?) D

H1

2

=

1=2

E



= C ?1 ~u ? ~u(k)





A

L

 Ck :



If the number of CG steps satis es k  c1 + c2 log N there holds uL ? uL(k) H 1 2(?)  CN ??^ with ; ^ as in Theorem 3.1. Let U(Lk)(x) denote the approximation in the interior obtained from (3.6) with uL(k) in place of uN . Then we have with ^ = ? 1e(x;  ) 2 H ?1=2(?) as in the proof of Theorem 3.1 U (x) ? U(Lk)(x)  U (x) ? U L (x) + uL ? uL(k); ^  CN ??^ + uL ? uL(k) H 1 2(?) k^ kH ?1 2(?)  C 0N ??^ : We obtain Proposition 4.4 A number of O(log N ) conjugate gradient steps applied to the linear system (4.15) yields approximate solutions which converge with the same rates as the exact Galerkin solutions in Theorem 3.1, both for the energy norm error and for the error at an interior point.









D

E



=

=

=

4.2. Bases for the Dirichlet problem

For the Dirichlet problem we use the space V 0;L of piecewise constants. In order to obtain a larger number of vanishing moments we will use the derivatives of the wavelets used in section 4.1. The resulting basis functions form biorthogonal wavelets. We rst de ne spaces of functions with mean zero:

f u 2 H s (?) j hu  ; 1i = 0 g; s 2 [0; 2] f u 2 Hs (?) j hu; 1i = 0 g; s 2 [?1; 1] f u 2 V 1;L j hu  ; 1i = 0 g; f u 2 V 0;L j hu; 1i = 0 g: Obviously we have Hs(?) = Hs0(?)  1 and V i;L = V0i;L  1. H0s (?) Hs0(?) V01;L V00;L

:= := := :=

(4.19) (4.20) (4.21) (4.22)

We denote by D = dsd the derivative with respect to the arclength s on ?. Then it follows by interpolation and duality that D: H s (?) ! Hs?1 (?) is continuous and that

D: H0s (?) ! Hs0?1 (?)

(4.23)

is an isomorphism for s 2 [0; 2]. We also note that DV 1;L = V00;L and that D: V01;L ! V00;L is bijective. Therefore we de ne W 0;` := DW 1;` for ` = 1; : : : ; L and obtain the decompositions

V00;L = V00;0  W 0;1    W 0;L;

V 0;L = V 0;0  W 0;1    W 0;L: A basis of W 0;` (where `  1) is then given by f ~1` ; : : : ; ~N` ?1 g where

(4.24)

`

~j` = D j` =

a~k '~` 2j ?4+k ; ` k=1 h2j ?4+k 6

X

and V 0;L is given by

L

[

`=0

~a = (1; ?7; 16; ?16; 7; ?1)

f ~j` j j = 1; : : : N`?1 g

(4.25) (4.26)

where h`j = j(j=N` j denotes the length of the subintervals on ?. For notational convenience we again de ne W 0;0 := V 0;0, N?1 := N0, ~j0 := '~0j . Let u 2 Hs (s 2 [?1; 1]) and = hu; 1i = j?j where j?j = ? ds. Then we have the norm equivalence C1 kuk2H  ku ? k2H + j j2  C2 kuk2H : (4.27) Now let uL 2 V 0;L, = uL; 1 = j?j. Applying Proposition 4.1 to D?1 (uL ? ), using that D is an isomorphism and noting that V 0;0, V 1;0 are xed, nite-dimensional spaces we obtain Proposition 4.5 For every uL 2 V 0;L, we have unique decompositions R

s

D

uL =

s

s

E

L

X

`=0

w` ;

w` 2 W 0;` ` = 0; : : : ; L

(4.28)

and

u=

L N?

` 1 X X

`=0 j =1

u`j ~j`:

(4.29)

For ?1  s < 1=2, there hold the norm equivalences 2 uL

H (?)



and

s

L

X

`=0



22(s+1=2)`

L

2 C1 22(s+1)`

w`

L2(?) `=0 X

NX ?1 `

j =0

L

2



u`j  C3



X

`=0

 C2

L

X

`=0

22(s+1=2)`

NX ?1 `

j =0



22(s+1)` kw`k2L2(?)  C4 uL

2



u`j ;

(4.30)

2

(4.31)





H (?) s

where the constants Ci do not depend on L. We also have a result analogous to Proposition 4.2. Proposition 4.6 Let u 2 Hs(?), ?1  s  1, let = hu; 1i = j?j and uL = PL u := + DPL D?1 (u ? ) with PL as in (4.5). Then uL 2 V 0;L satis es L

X

`=0

22(s+1=2)`

NX ?1

L

X ` 2 juj j  C7 22s`

w`

20 j =0 `=0 `

 C8L kuk2H (?) :

(4.32)

s

Writing N in the wavelet Galerkin basis (4.26), i.e.

L =

L N?

` 1 X X

`=0 j =1

j` ~j`;

(4.33)

we obtain the linear system V ~k` ; ~k`00 k` = h; ~j`00 ; `0 = 0; : : : ; L; k = 1; : : : ; N`0?1 D

X

`=0;:::;L k=1;:::;N ?1 h := ( 21 I + K )u.

E

D

E

(4.34)

`

where

We denote the NL  NL coecient matrix of (4.34) AL(j;`);(j0;`0) := hV ~j`00 ; ~j`i

(4.35)

once more by AL, the unknown coecient vector by ~ = (`j )`=0;:::;L;j=1;:::;N ?1 and the right hand side vector by d~ with entries d`j = h; ~j` . Then we can write (4.34) as AL~ = d~: (4.36) D

E

`

Analogously we obtain for the single layer ansatz method (3.3) the linear system AL ~ = ~e (4.37) where ~ is the coecient vector of L in the basis (4.26) and ~e is given by e`j = u; ~j` . Due to the norm equivalence (4.30) with s = ? 21 , the ellipticity (2.23) and the continuity (2.13), the condition number cond2(AL) is once more bounded independently of L. D

E

5. Decay and truncation of the wavelet Galerkin matrix We saw that the wavelet basis yields Galerkin matrices with bounded condition numbers. In addition, the wavelet basis allows us to use sparse matrices: We will show that most entries in the Galerkin matrix are so small that they can be replaced by zero without a ecting the convergence rates of the resulting \compressed Galerkin scheme". We will rst consider the Neumann problem and analyze the operator W1 and postpone the analysis of V to Section 8. We begin our analysis by investigating the e ect of neglecting small o -diagonal elements in each block of the sti ness matrix AL in the wavelet basis (see (4.14)). To this end, let Sj` denote the open set f x 2 ? j j`(x) 6= 0 g. The next lemma shows that each vanishing moment of the test and trial function increases the rate of decay of AL(j;`);(j0;`0) in terms of dist(Sj`; Sj`00 ). Recall that V denotes the set of vertices.



Lemma 5.1 The elements AL(j;`);(j0;`0) of the Galerkin matrix with dist(Sj`; Sj`00 ) > 0 satisfy AL(j;`);(j0;`0 )  C1 dist(Sj`; Sj`00 )?62?3` 2?3`0 AL(j;`);(j0;`0 )  C2 dist(Sj`; Sj`00 )?42?` 2?3`0 AL(j;`);(j0;`0 )  C2 dist(Sj`; Sj`00 )?42?3` 2?`0









if Sj` \ V = ;, Sj`00 \ V = ; if Sj` \ V 6= ;, Sj`00 \ V = ; if Sj` \ V = ;, Sj`00 \ V 6= ;

(5.1) (5.2) (5.3)

Here the constants C1; C2 are independent of L; j; j 0; `; `0 . Proof : Assume that dist(Sj` ; Sj`00 ) > 0. Then we have D

W1 j`; j`00

E

=

Z

?

Z

k(x; y) ?

`0 ` j (x) j 0 (y ) dsy dsx

=

1Z 1

Z

0

with

0

k~(s; t) j`((t)) j`00 ((s)) dt ds (5.4)

k(x; y) = @n@ @n@ e(x; y) + 1; k~(s; t) = k((s); (t)) j0(t)jj0(s)j x y for x; y 2 ?nV . By the de nition of the spaces W 1;` the function j`   satis es 1 ` ((t)) tk dt = 0 0 j

Z

(5.5)

for k = 0; 1:

(5.6)

Hence there exists a second antiderivative (s) satisfying 00(s) = j`((s)) and supp = ~ s) with ~ 00(s) = j`00 ((s)) supp( j`  ) = (Sj` ). Analogously, there exists a function ( and supp ~ = supp( j`00  ) = (Sj`00 ). First assume that Sj` \ V = ;, Sj`00 \ V = ;. Then the function k~(s; t) is smooth on (Sj` )  (Sj`00 ) and we integrate by parts twice with respect to s and twice with respect to t 1 1 @2 @2 1 1 k~(s; t) j`((t)) j`00 ((s)) dt ds = k~(s; t) (t) (s) dt ds 0 0 @s2 @t2 0 0 Z

Z



Z

Z



@ 2 @ 2 k~(s; t) k k k ~ k :  max @s L1 L1 2 @t2 s2(S )



` j

(5.7)

t2(S 00 ) `

j

By using the chain and product rules for the derivatives and the estimates @ j j @ j j k(x; y)  C ; (5.8) @x @y jx ? yj2+j j+j j for partial derivatives of order j j and 0 j j with respect to x and y we see that we can estimate the maximum by C2 dist(Sj`; Sj`0 )?6. The de nitions of j` and give







` j   L1

 C 2?` ;

k kL  C 2?3`;

(5.9)

1

implying (5.1). If Sj` \ V 6= ;, then the function k~(s; t) is not smooth at t 2 (Sj` ). After integrating by parts only with respect to s we obtain (5.2). The estimate (5.3) follows analogously. 2 Now we will estimate the errors which arise if we replace certain matrix elements with zero. We will use the standard notations kM k1, kM k2 and kM k1 for the norms of a nite matrix M . Lemma 5.2 Let B an N`?1  N`0 ?1 matrix where the entries satisfy

jBk;k0 j  C dist(Sk` ; Sk`00 )?62?3` 2?3`0 :

(5.10)

dist(Sk` ; Sk`00 )   : B~k;k0 := 0B 0 ifotherwise k;k

(5.11)

Let  > 0 and de ne B~ by

(

Then

B ? B~ 1  C?62?3`2?2`0 maxf; 2?`0 g B ? B~ 1  C?62?2`2?3`0 maxf; 2?` g N (B~ )  N`?1 N`0 ?1 minfC (2?` + 2?`0 + ); 1g Here N (B~ ) denotes the number of nonzero elements in the matrix B~ .











Proof : We have

~

= max

B ? B 1 k=1;:::;N

X

jBk;k0 j  C k=1max ;:::;N ?

?1 k0 =1;:::;N 0 ?1 dist(S ;S 00 )

`

`

` k

`

k

`

1

(5.12) (5.13) (5.14)

dist(Sk` ; Sk`00 )?62?3` 2?3`0 :

X

k0 =1;:::;N 0 ?1 dist(S ;S 00 ) `

` k

`

k

We estimate the two terms in the sum which are closest to Sk` directly and majorize the remaining terms by an integral:

B ? B~





1

 C?62?3` 2?3`0 +

Z

` ; x)?6 2?3` 2?2`0 ds  C ?62?3` 2?3`0 + C ?52?3` 2?2`0 dist( S x k ?

The estimate for B ? B~ 1 follows from the previous estimate with (k; `) and (k0; `0) interchanged. For the estimate of N (B~ ) we note that there are for each xed j` at most 1 + (2?` + 2)=2?`0 values of j 0 such that dist(Sj` ; Sj`00 ) < . 2 In the same fashion we prove the following estimate which involves entries stemming from the vertex elements which exhibit slower decay. Lemma 5.3 Let B an N`?1  N`0 ?1 matrix where the entries satisfy (5.15) Bk;k0 = 0 if Sk` \ V = ; 0 ?4 ?` ?3`0 ` ` ` jBk;k0 j  C dist(Sk ; Sk0 ) 2 2 if Sk \ V 6= ;: (5.16) Let  > 0 and de ne B~ by ` ; S `00 )   0 if dist( S k ~ Bk;k0 := B 0 otherwise k : (5.17) k;k Then (5.18) B ? B~ 1  C?42?` 2?2`0 maxf; 2?`0 g (5.19) B ? B~ 1  CN0?42?` 2?3`0 N (B~ )  CN0N`0 ?1 minf + 2?` ; 1g (5.20) Here N (B~ ) denotes the number of nonzero elements in the matrix B~ . Proof : We only must show (5.20). We observe that there are CN0 values of0 j such that Sj` \ V 6= ;. For each of those values of j there are at most 1 + (2?` + 2)=2?` values of j 0 such that dist(Sj`; Sj`00 ) < . 2 Remark 5.1 The decay rate (5.16) is also obtained in collocation schemes for all nonvertex elements of the resulting sti ness matrix. In this case, the estimates (5.18), (5.19) can be used to analyze corresponding truncated collocation schemes analogous to what is done in the present work. Since, however, we focus here on Galerkin schemes, we will elaborate on this elsewhere.





(











6. Consistency estimates for the truncated matrix

Now we will de ne a truncated matrix A~L by replacing small entries in the matrix AL with zero. We will use the estimates from the previous section to obtain consistency estimates for the error between the exact and approximate bilinear form in certain Sobolev spaces. These estimates will be used in the next section to obtain optimal convergence rates for properly chosen truncation algorithms. A tool in the analysis is the Schur lemma, see, e.g., [17, Vol. II, p. 269]. Lemma 6.1 Let G = fG``0 g`;`02lN be an in nite matrix. If there exists a sequence (`) > 0 and a constant c, such that (6.1) jG``0 j (`0 )  c (`) 8` 2 lN X

`0 2lN

and

X

`2lN

jG``0 j (`)  c (`0)

8`0 2 lN

(6.2)

then the matrix G : `2  `2 ! Cl is a bounded operator with norm not greater than c.

We rst consider a truncation algorithm where we replace only matrix elements from \non0 ` L ` vertex" wavelets with zero, i.e., elements A(j;`);(j0;`0 ) with Sj \ V = ; and Sj0 \ V = ;. nn0 )`;`0 =0;:::;L containing the truncation parameters for each block of We use a matrix (`;` the Galerkin matrix and de ne ` \ V = ; and S `00 \ V = ; and dist(S ` ; S `00 )   nn0 0 if S L j k k j `;` : (6.3) ~ A(j;`);(j0;`0) := AL 0 0 otherwise (j;`);(j ;` ) (

The matrices AL; A~L with respect to the basis f j`g de ne mappings AL; A~L: V L ! (V L)0. We have the following estimate for the di erence between these mappings: Lemma 6.2 Let s; s~ 2 [ 21 ; 2] and > s +5 2 ; ~ > s~ +5 2 : (6.4) nn0 in (6.3) satisfy Assume that the truncation parameters `;` nn0  maxfa 2?L 2 (L?`) 2 ~ (L?`0 ); 2?` ; 2?`0 g: `;`

(6.5)

Then we have for u 2 H s(?), u~ 2 H s~(?)

(AL ? A~L)PLu; PL u~  Ca?5L=2NL1?s?s~ kuks ku~ks~

D

E

(6.6)

where  = 0; 1; 2 if none, one, or both of s and s~ are greater than or equal to 23 , respectively. Proof : Using Proposition 4.2 we obtain D

L ~L)PLu; PLu~E  CL=2NL1?s?s~ kuks kuks~

E L

(A ? A

2

where the matrix E L is given by

E(Lj;`);(j0;`0 ) = 2(s?1=2)(L?`)2(~s?1=2)(L?`0)(AL(j;`);(j0;`0) ? A~L(j;`);(j0;`0)):



(6.7)



We estimate E L 2 using the Schur lemma with (j;`) := 2?`=2: Let AL`;`0 denote the block (AL(j;`);(j0`0 ))j=1;:::;N ?1;j0=1;:::;N 0?1 (and similarly for A~L`;`0 ). Then we have `

X

(j 0 ;`0 )

`



E(Lj;`);(j0;`0 ) (j0;`0) =



L

X

`0 =0 L X `0 =0

2(s?1=2)(L?`)2(~s?1=2)(L?`0) AL`;`0 ? A~L`;`0



?`0 =2 2 1



nn0 )?5 2?3` 2?2`0 2?`0 =2 (6.8) 2(s?1=2)(L?`)2(~s?1=2)(L?`0)C2(`;`

nn0  2?`0 . Now we insert (6.5) and obtain with (6.4) by applying (5.12), noting that `;`

X

(j 0 ;`0 )

E(Lj;`);(j0;`0 ) (j0;`0)  Ca?5

L

X

`0 =0

2(s+2?5 )(L?`)2(~s+2?5~ )(L?`0)2?`=2  C 0a?5 (j;`)

(6.9)

The estimate for the column sums follows by a completely analogous argument using (5.13): X

(j;`)



E(Lj;`);(j0;`0) (j;`)

=

 

L

X

`=0 L X `=0

2s(L?`) 2s~(L?`0 ) AL`;`0 ? A~L`;`0 1 2?`=2





nn0 )?5 2?2` 2?3`0 2?`=2 2(s?1=2)(L?`)2(~s?1=2)(L?`0)C2(`;`

L X ? 5 Ca 2(s+2?5 )(L?`)2(~s+2?5~ )(L?`)2?`0 =2  C 0a?5 (j0;`0 )(6.10) `=0





By Lemma 6.1, estimates (6.9) and (6.10) together imply E L 2  C 0a?5.

2

Remark 6.1 If (6.4) holds for (s; s~) = (s; s~), then (6.4) and therefore (6.6) hold for all s 2 [ 21 ; s], s~ 2 [ 21 ; s~]. The following special cases of Lemma 6.2 will be used in the convergence estimates in the next section. (i) (s; s~) = ( 21 ; 12 ): We have (6.6) with  = 0 if (6.5) holds with > 12 , ~ > 21 . (ii) (s; s~) = (2; 12 ) and ( 21 ; 2): We have (6.6) for both cases with  = 1 if nn0  maxfa 2?L 2 1 (L?minf`;`0 g) 2 2 (L?maxf`;`0 g) ; 2?` ; 2?`0 g `;`

1 > 45 ; 2 > 21 : (6.11)

We also have (6.6) for (s; s~) = ( 21 ; 21 ). (iii) (s; s~) = (2; 2): We have (6.6) with  = 2 if (6.5) holds with > 45 , ~ > 54 . We also have (6.6) for (s; s~) = ( 12 ; 12 ); (2; 21 ); ( 21 ; 2). Lemma 6.3 Let ; ~ < 1. Then the matrix A~L de ned by (6.3), (6.5) has O(NL log NL) nonzero elements. Proof : We rst consider only the elements A~L(j;`);(j 0;`0) with Sj` \ V = ; and Sj`00 \ V = ;. We rst assume `;`0 = maxf2?` ; 2?`0 g. Then N (B~ )  O(N`?1 + N`0 ?1) and

N (A~L)  C

Now we assume that

N (A~L)  C

L L

X X

(2` + 2`0 )  CNL log NL:

`=0 `0 =0 `;`0 = a2?L 2 (L?`)2 ~(L?`0 ):

L L

X X

`=0 `0 =0

2` 2`0 a2?L2 (L?`) 2 ~(L?`0)  C 2( +~ ?1)L2(1? )L2(1? ~)L = 2L:

Let us now consider the elements A~L(j;`);(j0;`0 ) with Sj` \V 6= ;: For each xed `  1 there are 2N0 values of j for which Sj` \V 6= ;. Hence the total number of elements in A~L(j;`);(j0;`0 ) with

Sj` \ V 6= ; is bounded by 2(L + 1)NL. The same estimate holds for elements A~L(j;`);(j0;`0) with Sj`00 \ V 6= ; 2 By Lemma 6.3 we have in all cases of Remark 6.1 that the number of nonzero elements satis es N (A~) = O(NL log NL), i.e. we have achieved an optimal complexity (up to logarithmic terms) by truncating \nonvertex elements" alone. In computational practice, however, it turns out that the \nonvertex-vertex" and \vertex-nonvertex" matrix elements may contribute a substantial percentage of the elements kept after truncation. Therefore we consider also the truncation of the matrix elements from \nonvertex-vertex" and \vertexnv0 )`;`0 =0;:::;L and nonvertex" matrix elements. To this end, we use two additional matrices (`;` vn0 )`;`0 =0;:::;L. Then we de ne (`;` nn0 if Sj` \ V = ; and Sj`00 \ V = ; and dist(Sk` ; Sk`00 )  `;` 0 0 vn0 if Sj` \ V 6= ; and Sj`0 \ V = ; and dist(Sk` ; Sk`0 )  `;` 0 0 nv0 : if Sj` \ V = ; and Sj`0 \ V 6= ; and dist(Sk` ; Sk`0 )  `;` otherwise (6.12) nv Note that we can consider (6.3) formally as a special case of (6.12) with (`;`0 ) = 1, vn0 ) = 1 for `; `0 = 0; : : :; L. (`;`

0 A~L(j;`);(j0;`0 ) := 00 AL(j;`);(j0;`0) 8 > > > > < > > > > :

Lemma 6.4 Let s; s~ 2 [ 21 ; 2], ;  0 2 lR and > s +5 2 ;

~ > s~ +5 2 ;

1 + 3 ? 1 ? 5 + s ? s ~ + s + s ~ + vn vn vn 2 2 2 2 ~ > 3 ; ~ > 3 ; > 4 ; > 4 ; 5 1 0 0 s~ ? 12 +  0 s + 3 ? 0 nv > s + 2 +  ; ~nv > s~ + 2 ?  : ; nv > 23 ; ~nv > 3 4 4 nn vn nv Assume that the truncation parameters `;`0 , `;`0 , `;`0 in (6.12) satisfy

vn

nn0  maxfa 2?L 2 (L?`) 2 ~ (L?`0 ) ; 2?` ; 2?`0 g `;` vn0  maxfa 2?L 2 vn (L?`) 2 ~ vn (L?`0 ) ; a 2?L 2 vn (L?`) 2 ~vn (L?`0 ); 2?`0 g `;` nv0  maxfa 2?L 2 nv (L?`) 2 ~ nv (L?`0 ); a 2?L 2 nv (L?`) 2 ~nv (L?`0 ) ; 2?` g `;`

(6.13) (6.14) (6.15) (6.16) (6.17) (6.18)

Then we have for u 2 H s(?), u~ 2 H s~(?)

(AL ? A~L)PL u; PLu~  Ca?3L=2NL1?s?s~ kuks ku~ks~ :

D

E

(6.19)

where  = 0; 1; 2 if none, one, or both of s and s~ are greater than or equal to 23 , respectively. Proof : The di erence matrix D := AL ? A~L can be decomposed as D = Dnn + Dvn + Dnv where the matrix Dnn contains the elements truncated in the rst case of (6.12): L ` `0 ` `0 nn Dnn := A(j;`);(j0;`0 ) if Sj \ V = ; and Sj0 \ V = ; and dist(Sk ; Sk0 )  `;`0 : 0 otherwise 

The matrices Dvn and Dnv are similarly de ned to contain the elements truncated in the second and third case of (6.12), respectively. We estimate the contributions of the three matrices Dnn, Dvn, Dnv to (6.19) separately. The estimate for Dnn was performed in Lemma 6.2. We estimate for Dvn anlogously, using Lemma 5.3 instead of Lemma 5.2 and using the Schur lemma with (j;`) = 2`. The estimate for Dnv follows similarly by applying Lemma 5.3 to0 the transposes of the block matrices ` nv0 , and by using the Schur lemma with 2 D`;` (j;`) = 2 .

Remark 6.2 If (6.13){(6.15) hold for (s; s~) = (s; s~), then (6.13){(6.15) and therefore (6.19) hold for all s 2 [ 21 ; s], s~ 2 [ 21 ; s~]. The following special cases of Lemma 6.4 will

be used in the convergence estimates in the next section.

(i) (s; s~) = ( 12 ; 12 ): We have (6.19) with  = 0 if (6.17), (6.18) hold with vn; vn; ~nv; ~nv > 27 ; ~vn; ~vn; nv; nv > 75 :

(6.20)

This follows with  =  0 = ? 17 . (ii) (s; s~) = (2; 21 ) and ( 12 ; 2): We have (6.19) for both cases with  = 1 if (6.17), (6.18) hold with 11 ; vn; ~nv > 3 ; ~vn; nv > 1: vn; ~ nv > 75 ; ~vn; nv > 14 (6.21) 8

This follows using the values  = ? 12 ;  0 = ? 145 for (s; s~) = ( 21 ; 2), and  = ? 145 ;  0 = ? 21 for (s; s~) = (2; 12 ). We also have (6.19) for (s; s~) = ( 12 ; 12 ). (iii) (s; s~) = (2; 2): We have (6.6) with  = 2 if (6.17), (6.18) hold with vn; ~nv > 23 ; ~vn; nv > 34 ; vn; ~nv > 43 ; ~vn; nv > 1: (6.22) This follows with  =  0 = ? 21 . We also have (6.6) for (s; s~) = ( 12 ; 2); (2; 12 ); ( 21 ; 12 ).

The following Lemma estimates the number of nonzero \vertex-nonvertex" and \nonvertexvertex" elements after the truncation (6.12). The proof is analogous to the proof of Lemma 6.3 and is omitted. Lemma 6.5 Let 2 [0; 1] and ~  0. Then the matrix L ` `0 ` `0 nn A~vn := A(j;`);(j0;`0 ) if Sj \ V 6= ; and Sj0 \ V = ; and dist(Sk ; Sk0 ) < `;`0 , 0 otherwise 

with

(6.23)

nn0  maxfa2?L 2 (L?`) 2 ~(L?`0 ) ; 2?`0 g `;`

has O(NL ) nonzero elements.

Lemma 6.5 shows that the percentage of \nonvertex-vertex" and \vertex-nonvertex" elements in A~L tends to zero as NL ! 1 in each of the three cases in Remark 6.2.

7. Convergence and complexity of the compressed Galerkin scheme The Galerkin equations (3.5) and (3.7) lead to the linear systems (4.15) and (4.16) for the coecient vectors ~u and ~ of the solutions uL and L, respectively. Now we replace the matrix AL with the truncated Galerkin matrix A~L de ned by (6.12) and obtain the sparse linear systems (7.1) A~L~u = ~b; A~L~~ = ~c: The solution vectors ~u and ~~ de ne via (4.9) functions u~L and ~L. Hence u~L and ~L are functions in V L which satisfy A~Lu~L; ' = ( 21 I ? K 0); ' 8' 2 V L A~L~L; ' = h; 'i 8' 2 V L D

E

D

E

D

E

(7.2) (7.3)

where A~L: V L ! (V L)0 is the mapping induced by the matrix A~L with respect to the basis f j`g. In this section we will prove that the solutions u~L and ~L of the \compressed Galerkin scheme" converge with the same rates (up to a logarithmic factor) as the exact Galerkin solutions in Theorem 3.1.

7.1. Convergence in energy norm

The main tool in the analysis is the rst lemma of Strang in the following form. Lemma 7.1 Assume that the approximate operators A~L are stable in the sense that (7.4) A~LuL; uL  ckuLk21=2 for some c > 0 independent of L. Then there holds h(AL ? A~L)vL; v~Li L L (7.5) ku ? u~ k1=2  C v inf ku ? v k1=2 + sup1 2V 1 kv~Lk1=2 v~ 2V D

E



8
0: (8.5) @x @y jx ? yjj j+j j Hence the kernel k(x; y) of the operator ?DV D satis es (5.8). This shows that the elements AL(j;`);(j0;`0 ) with `; `0  1 of the sti ness matrix AL satisfy all estimates in Lemma 5.1 (for ` = 0 or `0 = 0, these estimates hold trivially, by adjusting the constants in the estimates of Section 5). These observations, together with the norm equivalences Proposition 4.5 and Proposition 4.6, imply two exact analogs of Lemmas 6.2 and 6.4. Lemma 8.1 Under the assumption of Lemma 6.2, we have for u 2 Hs?1 (?), u~ 2 Hs~?1(?) (AL ? A~L)PLu; PL u~  Ca?5L=2NL1?s+~s kuks?1 ku~ks~?1 ; (8.6)



D

E

where PL is as in Proposition 4.6.

Proof : De ne = hu; 1i = j?j and ~ = hu~; 1i = j?j. Then PL u = + DPL D?1 (u ? ) and likewise for u~. As 1 2 V 0;0, and, as observed above, AL(j;`);(j0;`0) = A~L(j;`);(j0;`0) if ` = 0 or `0 = 0 or both, we have D E D E E = (AL ? A~L)PLu; PLu~ = (AL ? A~L)DPL D?1 (u ? ); DPL D?1 (~u ? ~ ) D E = ?D(AL ? A~L)DPL D?1 (u ? ); PL D?1 (~u ? ~)   ~ L ? 1 ? 1 L ^ ^ = (A ? A )PLD (u ? ); PLD (~u ? ~) :

Here A^ is as in (8.4), and A^~L : V 1;L ! (V 1;L)0 is induced by the truncated sti ness matrix of A^ in the wavelet basis f j`g. Since all estimates in Section 5 are valid for A^ in place of W1 the proof of Lemma 6.2 yields











jE j  Ca?5L=2NL1?s?s~ D?1 (u ? ) H (?) D?1 (~u ? ~ ) H (?) : s

~

s

From (4.23) and (4.27) we have that



D?1 (u ? )



H (?) s

 C ku ? kH ? (?)  CC2 kukH ? (?) : s

1

s

1

This completes the proof. 2 Exactly in the same fashion we prove the analog of Lemma 6.4. Lemma 8.2 Under the assumptions of Lemma 6.4, we have for u 2 Hs (?), u~ 2 Hs~?1 (?) (AL ? A~L)PLu; PL u~  Ca?3L=2NL1?s?s~ kuks?1 ku~ks~?1 ; (8.7) D

E

This allows us to repeat the convergence analysis of Section 7 almost verbatim for the compressed wavelet Galerkin scheme corresponding to the operator V (the energy space now being H?1=2(?)). We only note that the approximation property 1 ku ? PLukH 1 2(?)  CNL2 ?s kukH (?) s 2 [ 21 ; 2] =

s

used in the proofs of Theorems 7.1 and 7.2 implies with (4.27) and the de nition of PL 1 u ? PL u H?1 2 (?)  CNL2 ?s kukH ?1(?) s 2 [ 21 ; 2]: Rather than repeating the details, we state the resulting analogs of Theorems 7.1 and 7.2. Theorem 8.1 Assume that (6.13){(6.18) hold for (s; s~) = ( 21 ; 2) and (2; 12 ) with a  a where a is independent of L. Let ~ L and ~L be the solutions of the approximate Galerkin equations (8.2) and (8.3), respectively. Then we have with  as in (3.10), (3.11) the asymptotic error estimates k ? ~ LkH ?1 2(?)  CNL? (log NL)1=2; k ? ~LkH ?1 2(?)  CNL? (log NL)1=2: (8.8)





s

=

=

=

Theorem 8.2 Assume that (6.13){(6.18) hold with s = 2, s~ = 2 and a  a where a is in-

dependent of L. Let ~ L and ~L be the solutions of the approximate Galerkin equations (8.2) and (8.3), respectively, and let U~ L(x) denote the solution in obtained from ~ L and ~L via (3.2) and (3.4), respectively. Then we have with ; ^ as in (3.10), (3.11), and for xed x 2 the error estimate

U (x) ? U~ L(x)  CxNL??^ log NL:





(8.9)

Remark 8.1 In the case of the Laplacian the operator A^ = ?DV D is in fact identical

to the operator W . The above proofs do not use this property, but only the decay properties (8.5) of the kernel. Hence the arguments remain valid for other boundary integral operators with weakly singular kernels, e.g. for linear elasticity (see next section).

9. Linear elasticity The results of the previous sections for the Laplace equation relied mainly on the behavior (5.8), (8.5) of the kernels of the integral equations. Therefore the same methods can be applied to boundary integral equations for other di erential equations. In this section we discuss the case of linear elasticity in polygons. Let ? and be as in the previous sections, then the equations for the displacement eld U : ! lR2 of a homogeneous isotropic elastic material in with Poisson ratio  2 [0; 21 ) are given by (9.1) U := U + 1 ?1 2 grad div U = 0 in : For the Dirichlet problem we use the boundary condition

U j? = u on ?

(9.2)

2

with u 2 H 1=2(?) , for the Neumann problem we use the boundary condition h

i

1U =  on ?

(9.3)

2

1 (?)]2 j with  2 H ?1=2(?) where the boundary traction operator 1 mapping f U 2 [Hloc 2 U = 0 g to H ?1=2(?) is given by h

i

h

i

1U = 1 ?22 div U j? n + 2 @U @n ? + n  curl U j? :

(9.4)

Here n denotes the normal vector pointing from to c and xx12  yy12 = x1y2 ? x2y1. We denote the rigid motions by m1(x) = 10 , m2(x) = 01 , m3(x) = ?xx12 . In the case of a Dirichlet problem in an exterior domain U has to satisfy at in nity the condition U (x) = O(1); grad U (x) = o(jxj?1) as jxj ! 1: (9.5) In the case of a Neumann problem in a bounded domain we assume that the given data  satisfy mj (x)  (x) dsx = 0; j = 1; 2; 3: (9.6) ? In the case of a Neumann problem in an exterior domain we assume this condition only for j = 1; 2, i.e., (x) dsx = 0 (9.7) ? and we require that U satis es at in nity 

 



 









Z

Z

U (x) = o(1); grad U (x) = o(jxj?1 ) as jxj ! 1:

(9.8)

In the case of a Neumann problem in a bounded domain there exists a weak solution U which is unique up to a rigid motion, in all other cases there exists a unique weak solution U [15]. The fundamental matrix e(x; y) is given by 1 (3 ? 4 ) log jx ? yj I ? (x ? y)(x ? y)> e(x; y) = 8(1?? ) jx ? yj2 and we can de ne the boundary integral operators by 

Z

V v(x) = ? e(x; y)v(y) dsy;

Z



Kv(x) = ?( 1;y e(x; y))>v(y) dsy

Z

Z

(9.9) (9.10)

1;xe(x; y)v(y) dsy; Wv(x) = ? 1;x ( 1;y e(x; y))>v(y) dsy : (9.11) ? where x 2 ?. These operators have the same properties (2.13), (2.14), (2.23), (2.24) as the operators for the Laplacian. As the rigid motions mj are in the nullspace of W we de ne the modi ed operator W1 by W1v := Wv + 3j=1 hv; mj i mj . In the case of a Neumann problem (interior or exterior) and the interior Dirichlet problem we can now use exactly the same boundary integral formulations (2.15), (2.17), (2.20), (2.21) as for the Laplacian. Note that we have to replace @n@ e(x; y) with ( 1;y e(x; y))> in (2.16) and (2.22). Also note that in method (2.20) for an exterior Neumann problem u := U j? and u can di er by a rigid motion. K 0v(x) =

?

P

y

The choice of asymptotics for elasticity at infinity for the Dirichlet problem is analogous to the asymptotics U(x) = O(1) in the case of the Laplacian. As the boundary integral formulations in Section 3 used a different behavior at infinity, a modification is necessary [15]. Let H_0^{−1/2}(Γ) be as in (4.20). Then we replace (2.15), (2.16) with the following: Find ψ ∈ [H_0^{−1/2}(Γ)]² such that

    ⟨Vψ, φ⟩ = ⟨((1/2)I + K)u, φ⟩   for all φ ∈ [H_0^{−1/2}(Γ)]².                  (9.12)

Let ω := (1/|Γ|) ∫_Γ [ Vψ − ((1/2)I + K)u ] ds and define

    U(x) = ∫_Γ [ e(x, y)ψ(y) − (T_{1,y} e(x, y))ᵀ u(y) ] ds_y + ω.                (9.13)

In the case of a Neumann problem or an interior Dirichlet problem we can obviously use a Galerkin discretization with the same spaces as for the Laplace problem and use the same wavelet bases and truncation strategies. As the kernel functions of W₁ and V satisfy (5.8) and (8.5), respectively, all the arguments for the Laplacian remain valid in the present case and we obtain the analogs of Theorems 7.1, 7.2, 8.1, 8.2. Note that the regularity parameters must now be chosen according to the singularity exponents for elasticity instead of the exponents in (3.9). In the case of the exterior Dirichlet problem we use the space V_{00}^{L} (see (4.22)). We choose an arbitrary basis for the coarsest space V_{00}^{0} and then obtain with (4.25) and (4.24) a wavelet basis for V_{00}^{L}. As only matrix elements on levels ℓ > 0 are truncated, all estimates for the truncation errors remain valid. We can evaluate all arising double integrals analytically. For the Dirichlet problem the kernel contains a term with log|x−y| which we treat as in the case of the Laplacian. The additional terms (x_i−y_i)(x_j−y_j)/|x−y|² lead to integrands of the form s²/(s² + t² − 2cst) and st/(s² + t² − 2cst) (with c < 1), which have second antiderivatives with respect to s and t in closed form in terms of elementary functions. In the case of the Neumann problem we can transform the hypersingular integral to a weakly singular integral with derivatives of the test and trial functions [18], and the resulting integral can be treated as in the case of the Dirichlet problem. Therefore Proposition 7.1 carries over to the present cases.
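The claim that these integrands have elementary antiderivatives can be checked with a computer algebra system. The sketch below is only an illustration (it is not part of the authors' implementation [22]); it computes a first antiderivative in s for a sample value c = 1/2 and returns expressions containing only rational, logarithmic and arctangent terms.

```python
# Symbolic illustration: the kernel terms from the elasticity case lead to
# integrands with elementary antiderivatives, as claimed above.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
c = sp.Rational(1, 2)          # sample value; any c < 1 behaves analogously

f1 = s**2 / (s**2 + t**2 - 2*c*s*t)
f2 = s*t  / (s**2 + t**2 - 2*c*s*t)

print(sp.integrate(f1, s))     # elementary: rational + log + atan terms
print(sp.integrate(f2, s))
```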

10. Numerical Results

In this section we present the results of numerical computations. The results show that the asymptotic convergence estimates describe the convergence behaviour accurately already for a moderate number of degrees of freedom. They also show that the constants in the estimates are of moderate size, making the method competitive with other approaches. We refer to [22] for further details of our implementation, and for additional numerical results with biorthogonal wavelets and other domains. We consider the Neumann problem (2.4), (2.6) of the Laplace equation on the square Ω = (−1/4, 1/4)², where the prescribed Neumann data are obtained as the normal derivatives of the solution

    u(x) = r² ( log(r) sin(2φ) + φ cos(2φ) ).                                     (10.1)

Here (r, φ) are polar coordinates centered at x = (1/4, 1/4). We use the direct method (3.5), and compute the values for both the Galerkin method and the "compressed Galerkin method".
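For concreteness, the Neumann data can be generated symbolically from (10.1). The sketch below is only an illustration (not the code of [22]); it evaluates the outward normal derivative of u on the top edge y = 1/4 of the square, where the outward normal is (0, 1).

```python
# Illustration only: Neumann data for (10.1) as the normal derivative of u,
# evaluated on the top edge y = 1/4 of the square (-1/4, 1/4)^2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
r   = sp.sqrt((x - sp.Rational(1, 4))**2 + (y - sp.Rational(1, 4))**2)
phi = sp.atan2(y - sp.Rational(1, 4), x - sp.Rational(1, 4))

u = r**2 * (sp.log(r) * sp.sin(2*phi) + phi * sp.cos(2*phi))

sigma_top = sp.diff(u, y).subs(y, sp.Rational(1, 4))   # du/dn on the edge y = 1/4
print(sp.simplify(sigma_top))
```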

In our example the relevant regularity parameters take the values 2 and 3/2; hence Theorem 3.1 gives the convergence rates ‖u − u_L‖_{H^{1/2}(Γ)} ≤ C N^{−3/2} and |U(x) − U_L(x)| ≤ C N^{−3} for the exact Galerkin solution u_L. For the exact Galerkin method we compute the Galerkin matrix B^L in (3.17) and the right hand side vector for the Galerkin method with the 'hat function' basis {φ_j^L}. In order to avoid quadrature errors, we used analytic expressions for the entries of the Galerkin matrix. To evaluate the right hand side, we first compute the L²-projection of the given Neumann data onto the space S_N^0 of piecewise constant functions. The L²-projection is approximated by using the Simpson rule on each subinterval. It can be shown [22] that for sufficiently smooth data the additional error caused by replacing the data by its L²-projection and the use of the Simpson rule does not affect the convergence rates in the energy norm or in the interior. The bilinear form with the operator K' and piecewise linear and piecewise constant functions is also evaluated analytically. Thus we need O(N²) operations to compute the entries of the Galerkin matrix and another O(N²) operations to compute the right hand side vector b.

We turn next to the compressed Galerkin schemes. In our experiments, we only improve the complexity of the computation of the Galerkin matrix. This is sufficient for the ansatz type methods (3.3) and (3.7), since the computation of the right hand side vectors can obviously be performed with sufficient accuracy in O(N) operations by using for example the Simpson rule on each subinterval. For the direct methods (3.1) and (3.5), however, we have to compute the result of an integral operator (K or K') acting on the given data. If this step is done as described above, it results in O(N²) complexity and storage. Nevertheless, as we show in [22], the complexity of this part of the computation can also be reduced to O(N log N) operations by using wavelet bases and suitable truncations of the operators K and K'.

In our computations we first compute the full Galerkin matrix B^L in the standard basis and transform it to the matrix A^L with respect to the wavelet basis (4.6) by applying the so-called pyramid scheme with the "filter" a from (4.6) successively to the rows and columns of B^L. Then we replace certain entries of the matrix A^L with zero to obtain the matrix Ã^L. We used (6.12) with the parameters

    δ^{nn}_{ℓ,ℓ'} = max{ a 2^{−L} 2^{L − min{ℓ,ℓ'}} 2^{δ(L − max{ℓ,ℓ'})}, 2^{−ℓ}, 2^{−ℓ'} },
    δ^{vn}_{ℓ,ℓ'} = δ^{nv}_{ℓ,ℓ'} = ∞,                                            (10.2)

i.e. we keep all contributions stemming from 'vertex-nonvertex' wavelets and truncate only entries corresponding to wavelets which do not overlap the vertices of the square. Definition (10.2) of δ^{nn} satisfies condition (i) of Remark 6.1 for δ > 0, condition (ii) for δ > 3/10, and condition (iii) for δ > 3/5. While (i) and (iii) are immediate, we observe that (10.2) with δ > 3/10 implies (6.11) with exponents 5/4 + (δ − 3/10)/2 and 1/2 + (δ − 3/10)/2, i.e. (ii) of Remark 6.1. Moreover, we remark that the truncated matrices Ã^L thus obtained have O(N log N) nonvanishing entries, provided that 0 ≤ δ < 1. Note that a truncation parameter of δ corresponds to a growth of the bandwidth by a factor of β := 2^δ in block A^L_{ℓ−1,ℓ−1} compared with block A^L_{ℓ,ℓ}. Therefore δ = 0 corresponds to β = 1, i.e., the same bandwidth is used for truncation in all diagonal blocks. Theorem 7.1 implies that we obtain the optimal convergence rate N^{−3/2} in the energy norm for β ≥ 2^{0.3} ≈ 1.23; Theorem 7.2 implies that we obtain the optimal convergence rate N^{−3} in an interior point for β ≥ 2^{0.6} ≈ 1.52.
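To illustrate the effect of the truncation rule (10.2), the following sketch estimates how many matrix entries per block survive and how the total grows with N. It is a rough counting argument only; the function name and the crude per-row neighbour count are our own simplifications, not the data structures of [22].

```python
# Illustration (simplified assumptions): estimate the number of retained entries
# under the truncation rule (10.2) and observe the roughly O(N log N) growth.
import math

def estimated_nonzeros(L, a=3, delta=0.3):
    """Crude count: ~2^l wavelets per level, each keeping the neighbours on
    level lp that lie within the cutoff distance of (10.2)."""
    total = 0
    for l in range(L + 1):
        for lp in range(L + 1):
            cutoff = max(a * 2.0**(-L) * 2.0**(L - min(l, lp))
                           * 2.0**(delta * (L - max(l, lp))),
                         2.0**(-l), 2.0**(-lp))
            per_row = min(2**lp, int(cutoff * 2**lp) + 1)   # neighbours on level lp
            total += 2**l * per_row
    return total

for L in range(4, 11):
    N = 2**L
    nnz = estimated_nonzeros(L)
    print(L, N, nnz, round(nnz / (N * math.log2(N)), 2))    # last column roughly constant
```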
In our computations, we modified the truncation algorithm (6.12) by using the distance measured along the boundary rather than the Euclidean distance, and by using the distance between the centers of S_j^ℓ and S_{j'}^{ℓ'} instead of dist(S_j^ℓ, S_{j'}^{ℓ'}). Obviously, this is of minor impact for the domain considered here. We remark that our theoretical results about the convergence rate of the compressed Galerkin scheme are of course only valid if the parameter a in (6.6) and (6.19), and hence the initial bandwidth in the block Ã^L_{L,L}, is sufficiently large (see Lemma 7.2). In all experiments reported here we selected an initial bandwidth of 3. The described approach for computing the truncated Galerkin matrix Ã^L needs of course O(N²) operations, since we first compute the complete Galerkin matrix A^L. We use this approach since we want to compute the results of the exact Galerkin method and the energy errors of the "compressed Galerkin approximations" (see (10.3)). An optimal implementation of our algorithm, however, would first set up a suitable data structure which stores only the O(N log N) nonzero elements of Ã^L, and would compute each element directly using the analytical formulae for the entries of B^L.

Let us now describe our results. We compare the condition number of the Galerkin matrix B^L in the standard basis (see (3.17)) and the Galerkin matrix A^L in the wavelet basis (see (4.14)) in Table 1. While the condition number of B^L grows linearly with N_L, the condition numbers of A^L stay bounded by a very moderate constant.
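Such an implementation could, for instance, assemble the compressed matrix directly in a sparse format. The sketch below is schematic only; the neighbour enumeration and the analytic entry routine are placeholders, not the actual routines of [22].

```python
# Sketch only: assemble the compressed matrix directly in sparse format,
# computing only the entries that survive the truncation rule.
from scipy.sparse import lil_matrix

def assemble_compressed(n, neighbours, entry):
    """n: number of wavelet basis functions;
       neighbours(i): indices j within the cutoff distance of wavelet i
                      (O(log N) per row on average under (10.2));
       entry(i, j): analytic value of the Galerkin entry in the wavelet basis."""
    A = lil_matrix((n, n))
    for i in range(n):
        for j in neighbours(i):
            A[i, j] = entry(i, j)
    return A.tocsr()
```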

    N_L          8         16        32        64        128       256       512
    cond(B^L)    13.1398   24.8472   48.4930   96.2301   192.3263  384.6534  769.308
    cond(A^L)    7.12381   7.20999   7.23110   7.23595   7.23713   7.23745   7.23754
    cond(Ã^L)    7.12381   7.21032   7.23112   7.23595   7.23714   7.23746   7.23755

    Table 1: Condition numbers of B^L, A^L, and Ã^L versus N_L.

Because of the small condition numbers of A^L the plain conjugate gradient algorithm for solving the arising sparse, symmetric, positive definite linear systems is very efficient. Figure 1 depicts the asymptotic behaviour of the energy norm errors using (6.12) with the parameters (10.2) for β = 1.0, 1.1, 1.25, 1.4, 1.55. The energy norm errors were computed using the full Galerkin matrix A^L (which is exact up to roundoff errors) and the relation

    ‖u − ũ_L‖²_E = ‖u‖²_E − 2 ū^T A^L ũ + ũ^T A^L ũ,                              (10.3)

where ‖v‖_E := ⟨W₁v, v⟩^{1/2}, A^L ū = b and Ã^L ũ = b̃. The numerical results in Figure 1 clearly show that we can achieve optimal convergence rates by using a matrix with O(N log N) elements and that the truncation of the stiffness matrix yields virtually the same energy error, even with small δ > 0. In order to see the asymptotic rates for different values of δ more clearly, one needs to use higher values of N. In Figure 2 we show the error in the interior, which is more sensitive to different choices of δ in (10.2). It is clearly visible that values of δ < 3/5, i.e., β < 1.52, lead to reduced convergence rates, compared with the exact Galerkin method, as we expect from Theorem 7.2.

Acknowledgement: The authors want to thank Dr. Wen-Jong Shyong (Univ. of Maryland, Baltimore County) for the computations.
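For completeness, a minimal sketch of how (10.3) can be evaluated, assuming the Galerkin matrices, right hand sides, and the analytically known value of ‖u‖²_E are available as NumPy/SciPy objects (all names are illustrative, not those of [22]):

```python
# Sketch: solve the full and compressed systems with conjugate gradients and
# evaluate the squared energy error via (10.3).
import numpy as np
from scipy.sparse.linalg import cg

def squared_energy_error(u_energy_sq, A_full, b_full, A_trunc, b_trunc):
    """||u - u~_L||_E^2 = ||u||_E^2 - 2 ubar^T A^L utilde + utilde^T A^L utilde."""
    ubar, _ = cg(A_full, b_full)        # exact Galerkin coefficients
    utilde, _ = cg(A_trunc, b_trunc)    # compressed Galerkin coefficients
    return u_energy_sq - 2.0 * ubar @ (A_full @ utilde) + utilde @ (A_full @ utilde)
```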

Figure 1: Squared energy norm errors ‖u − ũ_L‖²_E vs. DOF for β = 1, 1.1, 1.25, 1.4, 1.55 and the exact Galerkin method.

References

[1] B.K. Alpert: Wavelets and other bases for fast numerical linear algebra, in: Wavelets: A Tutorial in Theory and Applications, C.K. Chui (Ed.), Academic Press, New York 1992, 181-216.

[2] G. Beylkin, R. Coifman and V. Rokhlin: The fast wavelet transform and numerical algorithms, Comm. Pure Appl. Math. 44 (1991), 141-183.

[3] M. Buhmann and C.A. Micchelli: Spline prewavelets for non-uniform knots, Numer. Math. 61 (1992), 455-474.

[4] C.K. Chui and J. Wang: Introduction to wavelets, Wavelet Analysis and Its Applications, Vol. 1, Academic Press, New York 1992.

[5] P.G. Ciarlet: The Finite Element Method for Elliptic Problems, North-Holland, 1978.

[6] M. Costabel: Boundary integral operators on Lipschitz domains: elementary results, SIAM J. Math. Anal. 19 (1988).

[7] W. Dahmen and A. Kunoth: Multilevel preconditioning, Numer. Math. 63 (1992), 315-344.


Figure 2: Interior point error vs. DOF for β = 1, 1.1, 1.25, 1.4, 1.55 and the exact Galerkin method.

[8] W. Dahmen, S. Prössdorf and R. Schneider: Wavelet approximation methods for pseudodifferential equations II: Matrix compression and fast solution, Advances in Computational Mathematics 1 (1993), 259-335.

[9] W. Dahmen, S. Prössdorf and R. Schneider: Multiscale methods for pseudodifferential equations, in: Recent Advances in Wavelet Analysis, L.L. Schumaker and G. Webb (Eds.), Academic Press 1993, 191-235.

[10] W. Dahmen, B. Kleemann, S. Prössdorf and R. Schneider: A multiscale method for the double layer potential equation on a polyhedron, Preprint No. 76-1993, IAAS Berlin; in: Advances in Computational Mathematics, H.P. Dikshit and C.A. Micchelli (Eds.), World Scientific Publ. (1994), 1-45.

[11] I. Daubechies: Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math. 41 (1988), 909-996.

[12] P. Grisvard: Singularities in Boundary Value Problems, Collection RMA, Vol. 22, Masson, Paris and Springer-Verlag, Heidelberg 1992.

[13] W. Hackbusch and Z.P. Nowak: On the fast matrix multiplication in the boundary element method by panel clustering, Numer. Math. 54 (1989), 229-245.

[14] G.C. Hsiao and W.L. Wendland: A finite element method for some integral equations of the first kind, J. Math. Anal. Appl. 58 (1977), 449-481.

[15] G.C. Hsiao and W.L. Wendland: On a boundary integral method for some exterior problems in elasticity, Proceedings of Tbilisi University (1985), 31-60, Tbilisi University Press, Tbilisi.

[16] J.L. Lions and E. Magenes: Non-Homogeneous Boundary Value Problems and Applications I, Springer-Verlag, New York 1972.

[17] Y. Meyer: Ondelettes et Opérateurs, Vol. I: Ondelettes and Vol. II: Opérateurs de Calderón-Zygmund, Hermann & Cie., Paris 1990.

[18] J.C. Nédélec: Integral equations with non integrable kernels, Integral Equations Operator Theory 5 (1982), 562-572.

[19] J. Nečas: Les méthodes directes en théorie des équations elliptiques, Masson, Paris 1967.

[20] J.C. Nédélec and J. Planchard: Une méthode variationnelle d'éléments finis pour la résolution numérique d'un problème extérieur dans R³, Revue Franç. Automatique Inf. Rech. Opérationnelle 3 (1973), 105-129.

[21] P. Oswald: Stable splittings of Sobolev spaces and fast solution of variational problems, Preprint MATH 92/5, Friedrich-Schiller-Universität, D-07740 Jena, May 1992.

[22] T. von Petersdorff, C. Schwab and W.Y. Shyong: Wavelet based boundary elements for first kind boundary integral equations on polygons, in preparation.

[23] V. Rokhlin: Rapid solution of integral equations of classical potential theory, J. Comput. Phys. 60 (1985), 187-207.

[24] W.L. Wendland: Strongly elliptic boundary integral equations, in: The State of the Art in Numerical Analysis, A. Iserles and M. Powell (Eds.), Clarendon Press, Oxford 1987, 511-561.
