AN ODYSSEY INTO LOCAL REFINEMENT AND MULTILEVEL PRECONDITIONING I: OPTIMALITY OF THE BPX PRECONDITIONER∗ BURAK AKSOYLU† AND MICHAEL HOLST‡ Abstract. In this paper we give a systematic presentation of a number of modern approximation theory tools which are useful for the analysis of multilevel preconditioners. We then use these tools to establish a number of new (and old) results for the Bramble-Pasciak-Xu (BPX) preconditioner in two and three spatial dimensions, on quasiuniform and locally refined meshes, for both red-green (conforming) and red (nonconforming) mesh refinement algorithms. For example, under the assumption of H^s-stability of L_2-projection for s ≥ 1 we give a very simple optimality proof of the BPX preconditioner on quasiuniform meshes through the use of K-functionals. While the existing literature on the optimality of the BPX preconditioner is primarily restricted to uniformly refined meshes, there are some notable exceptions for two dimensions, such as the original Dahmen-Kunoth paper. One of the main results of this paper is the extension of these types of optimality results to locally refined two- and three-dimensional meshes constructed using standard red-green as well as red-only refinement algorithms. We establish a number of geometrical relationships between neighboring simplices produced by these classes of refinement algorithms, and these results are then used to show that the resulting locally enriched finite element subspace supports the construction of a scaled basis which is formally Riesz stable. The proof techniques we use are quite general; in particular, the results in this paper require no smoothness assumptions on the differential equation coefficients, and the refinement procedures may produce nonconforming meshes. Moreover, the theoretical framework supports arbitrary spatial dimension d ≥ 1; only the geometrical relationships must be re-established for spatial dimension d ≥ 4. Key words.
finite element methods, approximation theory, local mesh refinement, multilevel preconditioning, BPX, red and red-green refinement, 2D and 3D. AMS subject classifications. 65M55, 65N55, 65N22, 65F10
1. Introduction. This article is the first in a series of three articles [2, 3] on local refinement and multilevel (ML) preconditioning. (Most of the material forming this trilogy is based on the first author's Ph.D. dissertation [1].) This first paper presents some new results on the BPX preconditioner [12] in the case of locally refined meshes. In particular, we establish that the BPX preconditioner is optimal in three spatial dimensions on locally refined meshes constructed using standard red-green and red refinement. These results, which extend the original two-dimensional results of Dahmen and Kunoth, are based on establishing a number of geometrical properties of both red-green and red 3D refinement algorithms, and then using these results to show that the resulting finite element subspaces support the construction of an L_2-stable Riesz basis. Along the way we reproduce a number of results that have appeared in the literature, but we use more recent analysis techniques which yield much simpler proofs and which often give additional insight into the underlying theoretical structure. As a useful supporting result, we establish the H^1-stability of L_2-projection onto finite element spaces based on the same class of local red-green and red mesh refinement algorithms considered for the analysis of the BPX preconditioner. This question is currently undergoing intensive study in the finite element approximation theory community, and the result contained in the companion paper [3] appears to be the first result on a priori H^1-stability of L_2-projection for implementable 3D refinement algorithms. The existing results, mainly due to Carstensen [14] and Bramble-Pasciak-Steinbach [11], involve a posteriori verification of somewhat complicated mesh conditions after refinement has taken place. The problem class of interest is linear second order partial differential equations (PDE) of the form:

(1.1) -\nabla \cdot (p\,\nabla u) + q\,u = f, \qquad u \in H_0^1(\Omega).

∗The first author was supported in part by the Burroughs Wellcome Fund through the LJIS predoctoral training program at UC San Diego, in part by NSF (ACI-9721349, DMS-9872890), and in part by DOE (W-7405-ENG-48/B341492). Other support was provided by Intel, Microsoft, Alias|Wavefront, Pixar, and the Packard Foundation. The second author was supported in part by NSF CAREER Award 9875856 and in part by a UCSD Hellman Fellowship.
†Department of Computer Science, California Institute of Technology, Pasadena, CA 91125, USA ([email protected]).
‡Department of Mathematics, University of California at San Diego, La Jolla, CA 92093, USA ([email protected]).
Here, f ∈ L_2(Ω), p, q ∈ L_∞(Ω) with p : Ω → L(R^d, R^d) and q : Ω → R, where p is a symmetric positive definite matrix and q is nonnegative. Let T_0 be a shape regular and quasiuniform initial partition of Ω into a finite number of d-simplices, and generate T_1, T_2, ... by refining the initial partition using either red-green or red local refinement strategies in d = 2 or d = 3 spatial dimensions. Let S_j be the simplicial linear C^0 finite element (FE) space corresponding to T_j, equipped with zero boundary values. The set of nodal basis functions for S_j is denoted by \{\phi_i^{(j)}\}_{i=1}^{N_j}, where N_j = \dim S_j is equal to the number of interior nodes in T_j. Successively refined FE spaces form the nested sequence

(1.2) S_0 \subset S_1 \subset \cdots \subset S_j \subset \cdots \subset H_0^1(\Omega).
Although the mesh is nonconforming in the case of red refinement, S_j is used within the framework of conforming FE methods for discretizing (1.1). Let the bilinear form and the linear functional representing the weak formulation of (1.1) be denoted by

a(u, v) = \int_\Omega (p\,\nabla u \cdot \nabla v + q\,u\,v)\,dx, \qquad b(v) = \int_\Omega f\,v\,dx, \qquad u, v \in H_0^1(\Omega),

and let us consider the following Galerkin formulation: Find u ∈ S_j such that

(1.3) a(u, v) = b(v), \qquad \forall v \in S_j.
Solving the discretized form of (1.3), namely A^{(j)}u = b where u, b ∈ R^{N_j}, by iterative methods has been the subject of intensive research because of the enormous practical impact on a number of application areas in computational science. The FE community has witnessed remarkable advances in the construction of preconditioners for iterative methods over the last two decades. As motivation, we will outline the development of optimal ML methods; in particular we will focus on the BPX preconditioner, highlighting the historical development and the fundamentals of ML preconditioners. The main idea is a stable splitting of u ∈ S_J,

(1.4) u = \sum_{j=0}^{J} (\pi_j - \pi_{j-1})u,

by linear operators \pi_j : L_2 \to S_j with the following properties:

(1.5) \pi_j|_{S_j} = I,

and \pi_j \pi_k = \pi_{\min\{j,k\}}, where \pi_{-1} = 0. The splitting (1.4), or equivalently the hierarchical ordering of the nodes, gives rise to a hierarchical splitting

(1.6) S_j = S_{j-1} \oplus S_j^f,

where we will refer to S_j^f as the slice space, and where the superscript f stands for fine. The slice space S_j^f is selected as a hierarchical complement of S_{j-1} in S_j, namely S_j^f = (\pi_j - \pi_{j-1})S_j. By (1.5), the two-level direct decomposition becomes

(1.7) S_j = S_{j-1} \oplus (I - \pi_{j-1})S_j.
It can be said that this preconditioning era was initiated by the two-level direct decomposition (1.7) and the choice of the FE interpolation I_j as \pi_j. This was the introduction of the hierarchical basis (HB) methods [4, 6, 37, 39]. In local refinement, HB methods enjoy an optimal complexity of O(N_j - N_{j-1}) per iteration per level (resulting in O(N_J) overall complexity) by only using degrees of freedom (DOF) corresponding to S_j^f, by virtue of (1.7). However, HB methods suffer from suboptimal iteration counts, or equivalently a suboptimal condition number. The search for an optimal iterative method, at least in terms of the condition number, reached an important milestone when Bramble, Pasciak, and Xu [12] substituted \pi_j with Q_j, the L_2-projection onto S_j. This breakthrough, which yielded the BPX preconditioner, allowed for the construction of optimal additive methods for uniform refinement, working in 3D as well as in 2D. However, the BPX decomposition S_j = S_{j-1} \oplus (Q_j - Q_{j-1})S_j, used for example in [19], gives rise to basis functions which are not locally supported. Consequently, in the case of local refinement, the BPX preconditioner can easily be suboptimal due to the fact that the basis functions are not locally supported. Without special care, this gives rise to a suboptimal cost per iteration. Only in the presence of a geometric increase in the number of DOF does the cost per iteration remain optimal. Although the basis functions created under the BPX decomposition are not locally supported, they do have good decay rates. This allows locally supported approximations of the wavelet basis utilized in [19]. Among such methods, wavelet modifications to hierarchical basis methods by Vassilevski and Wang [32, 33] will be the subject of the companion paper [3]. A similar direct wavelet-like ML modification approach was used on general meshes by Stevenson in [29]. The splitting (1.4) will define a preconditioner B^{(J)},

(1.8) (B^{(J)}u, v) = \sum_{j=0}^{J} 2^{2jk} ((\pi_j - \pi_{j-1})u, (\pi_j - \pi_{j-1})v), \qquad u, v \in S_J, \; k = 1,

where k is the smoothness parameter to accommodate higher order PDEs, and where the factors 2^{2j} are dimension independent and simply reflect the dyadic refinement. If inversion of B^{(J)} is computationally feasible and the following spectral equivalence can be established,

(1.9) \lambda_{B^{(J)}} (B^{(J)}u, u) \le (A^{(J)}u, u) \le \Lambda_{B^{(J)}} (B^{(J)}u, u),
then the efficiency of the preconditioner will be determined by the ratio \Lambda_{B^{(J)}}/\lambda_{B^{(J)}}, since the condition number then satisfies \kappa(B^{(J)^{-1}} A^{(J)}) \le \Lambda_{B^{(J)}}/\lambda_{B^{(J)}} (cf. page 205 in [25]). Here, we should clarify that stable splitting means that the corresponding preconditioner will have favorable \lambda_{B^{(J)}} and \Lambda_{B^{(J)}}, and in the best case, optimal bounds (meaning absolute constants). A similar discussion on stable splittings can be found on page 72 in [26]. Our primary interest is the BPX preconditioner, where the BPX norm is defined as follows: (1.10)
\|u\|_{BPX}^2 \equiv \sum_{j=0}^{J} 2^{2j} \|(Q_j - Q_{j-1})u\|_{L_2}^2.
The best case in (1.9) would be to obtain absolute constants c_1, c_2 in the norm equivalence:

(1.11) c_1 \|u\|_{BPX}^2 \le \|u\|_{H^1}^2 \le c_2 \|u\|_{BPX}^2,

so that the BPX preconditioner becomes optimal. We proceed with the historical development of the theory of the BPX preconditioner, and the organization of the paper in this respect. In the original work [12] the following suboptimal result was established with regularity assumptions:

(1.12) \frac{c_1}{J} \|u\|_{BPX}^2 \le \|u\|_{H^1}^2 \le c_2\, J\, \|u\|_{BPX}^2.
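To make the spectral equivalence (1.9) and the norm (1.10) concrete, the following is a minimal numerical sketch (our illustration, not from the paper; all function names are ours) of a diagonally scaled additive BPX-type preconditioner for the 1D Laplacian on uniformly refined meshes. The diagonal scaling by D_j = diag(A_j) plays the role of the dyadic weights 2^{2j}.

```python
import numpy as np

def stiffness(n):
    # P1 stiffness matrix of -u'' on [0,1] with n interior nodes, h = 1/(n+1)
    h = 1.0 / (n + 1)
    return (1.0 / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def prolongation(nc):
    # linear interpolation from nc coarse interior nodes to 2*nc + 1 fine ones
    P = np.zeros((2 * nc + 1, nc))
    for c in range(nc):
        P[2 * c, c] += 0.5      # left fine neighbor
        P[2 * c + 1, c] = 1.0   # coincident fine node
        P[2 * c + 2, c] += 0.5  # right fine neighbor
    return P

def bpx(J):
    # additive preconditioner C = sum_j Pi_j D_j^{-1} Pi_j^T,
    # where Pi_j prolongates from level j to the finest level J
    n = 2 ** (J + 1) - 1
    C = np.zeros((n, n))
    Pi = np.eye(n)
    for j in range(J, -1, -1):
        nj = 2 ** (j + 1) - 1
        Dinv = np.diag(1.0 / np.diag(stiffness(nj)))
        C += Pi @ Dinv @ Pi.T
        if j > 0:
            Pi = Pi @ prolongation(2 ** j - 1)
    return C

J = 4
A = stiffness(2 ** (J + 1) - 1)
ev = np.linalg.eigvals(bpx(J) @ A).real  # real and positive: C and A are SPD
kappa_bpx, kappa_a = ev.max() / ev.min(), np.linalg.cond(A)
```

On this model problem κ(CA) remains bounded as J grows, while κ(A) grows like 4^J; this mirrors, in the simplest possible setting, the optimality statement (1.11).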
There have been numerous contributions, especially in the uniform refinement setting [9, 36, 38, 41], to achieving optimal upper and lower bounds (1.11) under various assumptions. The German school has been perhaps the most influential in establishing optimality results, through the use of a number of powerful abstract tools that have an established history in approximation theory. Oswald [23, 25] first proved the optimality result for both two- and three-dimensional problems in the quasiuniform setting. He pioneered the use of Besov space techniques in FE theory. These techniques [15, 23, 25] rely on the fact that the Sobolev space H^1 coincides with certain Besov spaces, and as a result the corresponding norms are equivalent. In §2, we present several approximation theory tools, namely moduli of smoothness and K-functionals, to study Besov, approximation, and interpolation spaces. The so-called Jackson and Bernstein estimates are the primary tools leading to embeddings of the spaces mentioned above. These embeddings form the foundation for establishing the norm equivalence (1.11). In §3, we present a systematic treatment showing how such embeddings are employed. It is also worth discussing exactly why the original work failed to prove optimality, yielding a critical insight into the underlying theory first pointed out by Bornemann [10]. The optimality (1.11) is achieved when the approximation and inverse inequalities are realized in H^s for s > 1 rather than s = 1. This subtle difference is discussed more fully in §3. A novel approach was taken in [9] through the use of flexible K-functionals known from interpolation spaces. This self-contained presentation bypasses local refinement details by establishing all relationships with respect to the uniform refinement case. In §4, using the technology of K-functionals, we give an extremely simple proof of optimality of the BPX preconditioner on quasiuniform meshes.
This result was motivated by the insight into the critical role played by s > 1 in the approximation and inverse inequalities, and then by utilizing H s -stability of L2 -projection for s > 1.
Most of the literature on the BPX preconditioner is restricted to uniform refinement, and the existing results on local refinement are limited. The two noteworthy efforts on local refinement belong to Dahmen and Kunoth [15] and Bornemann and Yserentant [9]. Both of the above works consider refinement strategies only in two spatial dimensions, and in particular the refinement strategies analyzed are 2D red-green refinement and 2D red refinement, respectively. In [15], a theoretical framework is established for analyzing the optimality of the BPX preconditioner in the 2D local refinement case. The main goal of this work is to extend these types of optimality results to certain realistic, implementable, local refinement procedures both in 3D and in 2D. In order to establish these results, in §5 we outline the theoretical framework, as well as the norm equivalence theorems to be established which are the core of this framework. With the theoretical framework in place, we extend the optimal 2D results in [15] to the 3D red-green refinement described in [8]. The green refinement in red-green refinement is in fact a closure process for the hanging nodes introduced by red refinement. We show that the same proof technique for red-green refinement can be used for red refinement. This extends the optimality results to the 2D and 3D versions of red refinement as well. We should also note that red refinement, seen as a subset of red-green refinement, can handle more general settings than the refinement studied in [9]. In §6 and §7, we list the refinement conditions and additional rules for tetrahedralization and triangulation for d = 3 and d = 2, respectively. We give several theorems about the generation and size relations of the neighboring simplices, thereby establishing quasiuniformity for the simplices in the support of a basis function. In §8, we give a rigorous treatment establishing the desired norm equivalence (1.11).
The technique we employ is valid for any spatial dimension d ≥ 1 and for nonsmooth PDE coefficients p ∈ L_∞(Ω), and it also supports nonconforming meshes as in the case of red refinement. Quasiuniformity of the support gives rise to a stable basis; one can then establish the Bernstein estimate. The Bernstein estimate then provides one side of the desired norm equivalence. Due to the nature of local adaptivity, it is not possible to establish a Jackson estimate, and as a result the remaining inequality in the norm equivalence is handled directly using approximation theory tools, as in the original work [15]. §9 contains a summary of our results and some conclusions, together with a guide to the subsequent articles in the trilogy.

2. Preliminaries. Let Ω ⊂ R^d be open, let k = 1, 2, ... be arbitrary and h ∈ R^d, and define the subset \Omega_{k,h} = \{x \in R^d : [x, x + kh] \subset \Omega\}. Define the directional k-th order difference of f ∈ L_p(Ω) as

\Delta_h^k f(x) = \sum_{r=0}^{k} \binom{k}{r} (-1)^{k-r} f(x + rh), \qquad x, h \in R^d.
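For intuition, here is a minimal sketch (ours, not from the paper) of the directional k-th order difference Δ_h^k in one dimension; dividing by h^k recovers the k-th derivative as h → 0, and the operator annihilates polynomials of degree below k.

```python
import math

def kth_difference(f, x, h, k):
    # Delta_h^k f(x) = sum_{r=0}^k C(k, r) (-1)^(k-r) f(x + r*h)
    return sum(math.comb(k, r) * (-1) ** (k - r) * f(x + r * h)
               for r in range(k + 1))

# the second difference of t -> t^2 equals 2*h^2 exactly,
# and the second difference of an affine function vanishes
d2_quad = kth_difference(lambda t: t * t, 0.3, 0.1, 2)      # ~ 2 * 0.1**2
d2_lin = kth_difference(lambda t: 3 * t + 1, 0.3, 0.1, 2)   # ~ 0
```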
This is a sampling of the function f in the direction of h; it is the expression that appears in the k-th derivative of f before the limit is taken. The dependence on the direction is removed by taking the supremum over all possible directions. This gives rise to a finer scale of smoothness than differentiability:

Definition 2.1. The L_p-modulus of smoothness is

\omega_k(f, t, \Omega)_p = \sup_{|h| \le t} \|\Delta_h^k f\|_{L_p(\Omega_{k,h})}.
Moduli of smoothness were introduced prior to K-functionals. In contrast to the historical evolution of this part of approximation theory, the K-functional approach seems to be more convenient and is often preferred [9]. K-functionals are used to define interpolation spaces, which play a role in approximation theory similar to that of Besov spaces.

Definition 2.2. The quantity

(2.1) K(f, t; L_p(\Omega), W_p^k(\Omega)) = \inf_{g \in W_p^k(\Omega)} \left\{ \|f - g\|_{L_p(\Omega)} + t\,|g|_{W_p^k(\Omega)} \right\},
is called the K-functional for f ∈ L_p(Ω). Both the modulus of smoothness and the K-functional give direct information on the smoothness of f. In fact, they are equivalent for a large class of domains Ω ⊂ R^d, thanks to the following.

Theorem 2.3. Let Ω satisfy the uniform cone condition. Then, for any k = 1, 2, ..., 1 ≤ p < ∞, and f ∈ L_p(Ω), we have

(2.2) \omega_k(f, t, \Omega)_p \simeq K(f, t^k; L_p(\Omega), W_p^k(\Omega)), \qquad t > 0.
Proof. See, for example, Theorem 1 in [26] or Theorem 2.4 in [16].

2.1. Spaces from approximation theory. The Besov space, the approximation space, and the interpolation space are the three crucial spaces borrowed from approximation theory for norm estimates. The Besov space becomes the primary function space setting by realizing the Sobolev space as a Besov space (see (3.3)); the analysis needed for functions in a Sobolev space is done in the Besov space. The primary motivation for employing the Besov space stems from the fact that the characterization of functions which have a given upper bound for the error of approximation sometimes calls for a finer scale of smoothness. Besov spaces are defined to be the collection of functions f ∈ L_p(Ω) with a finite Besov norm,

\|f\|_{B_{p,q}^s(\Omega)} = \|f\|_{L_p(\Omega)} + |f|_{B_{p,q}^s(\Omega)},

where the seminorm is given by

|f|_{B_{p,q}^s(\Omega)} = \big\| \{ 2^{sj} \omega_k(f, 2^{-j}, \Omega)_p \}_{j \in N_0} \big\|_{l_q},

and k is any fixed integer larger than s. The functions with finite BPX norm (1.10) also form a function space. This class corresponds to a general class in approximation theory which is referred to as the approximation space. It is the collection of functions f ∈ L_p(Ω) with a finite approximation space norm defined as follows:

\|f\|_{A_{p,q}^{s,J}(\Omega)} = \big\| \{ 2^{sj} \|(Q_j - Q_{j-1})f\|_{L_p(\Omega)} \}_{j=0}^{J} \big\|_{l_q}, \qquad \|f\|_{A_{p,q}^{s}(\Omega)} = \big\| \{ 2^{sj} \|(Q_j - Q_{j-1})f\|_{L_p(\Omega)} \}_{j \in N_0} \big\|_{l_q}.

Let X and Y be two Banach spaces. The interpolation space (X, Y)_{\sigma,q} is defined to be the collection of functions f ∈ X + Y with a finite interpolation norm,

\|f\|_{(X,Y)_{\sigma,q}} = \|f\|_X + |f|_{(X,Y)_{\sigma,q}},

where the seminorm is given by

|f|_{(X,Y)_{\sigma,q}} = \big\| \{ 2^{\sigma j} K(f, 2^{-j}; X, Y) \}_{j \in N_0} \big\|_{l_q}.

An example of interest is X = L_p(\Omega), Y = W_p^s(\Omega).
3. Estimates from approximation theory. The Bernstein estimate and the inverse inequality, respectively, describe the difference and the differentiability of functions from the approximating subspaces S_j. They are used for the upper bound in the norm estimate. In particular, the Bernstein estimate (3.4) also dictates what needs to be done for the lower bound in the following norm equivalence, which will be a part of Theorem 5.1:

(3.1) \frac{c_1}{\nu_J^{(p)}} \|u\|_{A_{p,q}^s} \le \|u\|_{B_{p,q}^s} \le c_2 \|u\|_{A_{p,q}^s}, \qquad u \in S_J.
The remedy prescribed by the Bernstein estimate involves the error of the best approximation. This is precisely where the Jackson estimate comes into play in order to pin down the lower bound to an absolute constant. Then we can say that the Bernstein (inverse) estimates and the Jackson (approximation) estimates complement each other for the purpose of achieving the norm equivalence (1.11). The intricate, perhaps subtle part of the norm equivalence is the involvement of Jackson-like estimates. Careful attention is required in the selection of the smoothness scale. The choice of s = 1 rather than s > 1 for the function space H s in which the Jackson estimate needs to be established will give rise to the suboptimal lower bound as in the original proof of the BPX preconditioner [12]. This point is discussed more completely at the end of this section. 3.1. The Bernstein estimate and inverse inequality. The Bernstein estimate or inverse inequality bounds a fine norm by a corresponding coarser (or weaker) norm in a FE space. This is the reverse of the Jackson estimate or approximation inequality. The treatment of these estimates in the analysis of the norm equivalence (1.11) is two-fold. Traditionally, the first treatment used is the approximation-Besov space relationship. We demonstrate the embedding (3.5) of the approximation space s Asp,q into the Besov space Bp,q by using the Bernstein estimate (3.4). The initial step usually taken to establish the norm equivalence (1.11) is to observe the well-known fact (3.2)
B_{p,\min\{p,2\}}^s \subset W_p^s \subset B_{p,\max\{p,2\}}^s, \qquad s > 0, \; p \ge 1.

In particular, realizing the Sobolev space as a Besov space [26] by (3.2),

(3.3) H^s(\Omega) \cong B_{2,2}^s(\Omega), \qquad s > 0,

opens the door to approximation theory, where numerous well-established tools are at our disposal. The BPX norm is directly related to the approximation space A_{p,q}^s, hence we will construct a sequence of bridges between the approximation and Besov spaces through embeddings. Since the function space setting is a Besov space, such embeddings call for estimates that involve moduli of smoothness. The central requirement is called the Bernstein estimate:

Definition 3.1. A Bernstein estimate has the form

(3.4) \omega_{k+1}(u, t)_p \le c\,(\min\{1, t\,2^J\})^{\beta} \|u\|_{L_p}, \qquad u \in S_J,

where c is independent of u and J. Usually k = the degree of the element and \beta > k. The immediate implication of the Bernstein estimate is the following embedding, which is usually referred to as the upper bound in the norm equivalence (3.1).
Theorem 3.2. If the Bernstein estimate (3.4) holds with k = 1, then (3.5)
A_{p,q}^s \hookrightarrow B_{p,q}^s, \qquad 0 < s < \beta.
Proof. See Theorem 2 in [23] for \beta = 1 + 1/p and Proposition 2 in [24] for \beta = 2 + 1/p.

A second distinct approach to establishing the norm equivalence (1.11) is both novel and enlightening, and is inspired by the equivalence of moduli of smoothness and K-functionals. It helps us identify the mechanisms hidden behind the approximation theory, especially issues pertaining to the Jackson estimate. In this approach, the approximation-Besov space association is viewed as an approximation-interpolation space relationship. Here we exhibit an embedding (3.8) similar to (3.5), with a slightly modified assumption. This is the embedding of the approximation space into the interpolation space (L_p(\Omega), Y)_{\sigma,q} by using the inverse inequality (3.7). Y is some "nice" space with

(3.6) S_J \subset Y \subset L_p(\Omega), \qquad \forall J.
The inverse inequality is the counterpart of the Bernstein estimate in the interpolation space, and is defined as follows.

Definition 3.3. Let X = L_2(\Omega) and Y be as in (3.6) with q = 2. We say that the inverse inequality holds if there exists \beta > 0 satisfying

(3.7) \|u\|_Y \le c\,(2^J)^{\beta} \|u\|_{L_2}, \qquad u \in S_J, \; \forall J.
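As a numerical sanity check (our illustration, with our own function names), the sharpest constant in an inequality of the form (3.7), taking for simplicity the H^1 seminorm on the left, can be computed for 1D linear elements as the square root of the largest eigenvalue of the stiffness/mass pencil; adding one dyadic refinement level should roughly double it, consistent with \beta = 1.

```python
import numpy as np

def p1_matrices(n):
    # stiffness A and mass M for P1 elements on n interior nodes of [0,1]
    h = 1.0 / (n + 1)
    I, U = np.eye(n), np.eye(n, k=1)
    return (1.0 / h) * (2 * I - U - U.T), (h / 6.0) * (4 * I + U + U.T)

def inverse_constant(n):
    # max |u|_{H^1} / ||u||_{L^2} over the FE space = sqrt(lambda_max(M^{-1} A))
    A, M = p1_matrices(n)
    return np.sqrt(np.linalg.eigvals(np.linalg.solve(M, A)).real.max())

# halving h (one more dyadic level) roughly doubles the constant: beta = 1
ratio = inverse_constant(63) / inverse_constant(31)
```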
Now, the following embedding holds. Theorem 3.4. If the inverse inequality (3.7) holds, then (3.8)
A_{2,q}^{\beta\sigma} \hookrightarrow (L_2, Y)_{\sigma,q}, \qquad 0 < \sigma < 1.
Proof. See Theorem 1 in [10] for q = 2.

Remark 3.1. The inverse inequality (3.7) is heavily used in FE analysis in various places. For instance, it is used to establish the strengthened Cauchy-Schwarz inequality [9], and an interesting application is in establishing the H^1-stability of the L_2-projection [13]. On the other hand, the strengthened Cauchy-Schwarz inequality is used for the lower bound in the BPX optimality [36], as well as for the upper bound [9]. It is also an essential inequality for standard techniques used, for instance, in the analysis of multiplicative Schwarz methods [32, 33].

3.2. The Jackson estimates. Both the Jackson estimate and the approximation inequality characterize the approximation error in a coarser (or weaker) norm than the native norm on the space containing the given function. For a given subset M ⊂ L_p(Ω), consider the best approximation

E_M(f)_p = \inf_{g \in M} \|f - g\|_{L_p}, \qquad f \in L_p(\Omega).
The study of best approximation is the heart of classical approximation theory. We are interested in getting estimates from above for EM (f )p when M coincides with a FE subspace. In the Besov space setting, E is characterized by modulus of smoothness,
and if a Sobolev space setting is available, then it can also be characterized by the Sobolev norm. This gives rise to different versions of the Jackson estimate.

Definition 3.5. The Jackson estimate for Besov spaces is defined as follows:

(3.9) E_{S_J}(f)_p \le c\,\omega_{\alpha}(f, 2^{-J})_p, \qquad f \in L_p(\Omega).
Definition 3.6. The analog of (3.9) in Sobolev spaces is called the best approximation inequality:

(3.10) E_{S_J}(f)_p \le c\,(2^{-J})^{\alpha} |f|_{W_p^{\alpha}}, \qquad f \in W_p^{\alpha}(\Omega),

where c is a constant independent of f and J, and \alpha is an integer. This definition can also be extended to \alpha \in R in a different setting: let X = L_2(\Omega) and Y be as in (3.6). We say that the approximation inequality holds if there exists \alpha > 0 satisfying

(3.11) E_{S_J}(u)_p \le c\,(2^{-J})^{\alpha} |u|_Y, \qquad u \in Y.
The second definition (3.10) is more general in the sense that it can be used both in the Besov and interpolation space settings. Moreover, (3.10) implies (3.9): for any g \in W_p^{\alpha},

E_{S_J}(f)_p \le \|f - g\|_{L_p} + E_{S_J}(g)_p \le c\,\big(\|f - g\|_{L_p} + (2^{-J})^{\alpha} |g|_{W_p^{\alpha}}\big) \quad \text{(by (3.10))},

and taking the infimum over g and using (2.2) gives E_{S_J}(f)_p \le c\,\omega_{\alpha}(f, 2^{-J})_p.
The Jackson estimate implies the embeddings discussed in the previous section, but in the opposite direction. This will establish the lower bound of (1.9), which completes the picture for the norm equivalence (see (3.1) and (5.3) below).

Theorem 3.7. If the Jackson estimate (3.9) holds for \alpha = 2, then

(3.12) B_{p,q}^s \hookrightarrow A_{p,q}^s, \qquad 0 < s < 2.
Proof. See Proposition 2 in [24]. Since we do not relate E to a modulus of smoothness, α can be real as in (3.11), and therefore the following embedding holds. Theorem 3.8. For α > 0, if an approximation property such as the Jackson estimate (3.11) holds for p = 2 and ∀J, then (3.13)
(L_2(\Omega), Y)_{\sigma,q} \hookrightarrow A_{2,q}^{\alpha\sigma}, \qquad 0 < \sigma < 1.
Proof. See Theorem 1 in [10] for q = 2. The Jackson estimate will play a pivotal role in reducing the lower bound to an absolute constant as in (5.4) below.
3.3. Optimality of the BPX preconditioner revisited. The embeddings (3.8) and (3.13) make it possible to characterize the approximation space by using the interpolation space. Corollary 3.9. If both (3.10) and (3.7) hold, then α ≤ β. For α = β we get the isomorphism (3.14)
A_{2,2}^{\alpha\sigma} \cong (L_2(\Omega), Y)_{\sigma,2}, \qquad 0 < \sigma < 1.
Proof. See Corollary 1 in [10]. Keeping in mind that we always aim for the norm equivalence (1.11), our discussion takes an interesting turn. Is it possible to acquire H 1 through interpolation spaces? In the light of the isomorphism (3.14) the answer is yes, (3.15)
H^1(\Omega) \cong (L_2(\Omega), Y)_{\sigma,2},
by carefully prescribing Y and σ which we have left vague so far. By the standard interpolation technique, one can attain (3.15) by choosing a smoother space than H 1 . The choice is (3.16)
Y = H^s(\Omega), \qquad 1 < s < \infty, \qquad \sigma = \frac{1}{s},
so that 0 < σ < 1 as required. With some fairly mild domain-related technical assumptions [7, 10, 31] (i.e. minimal domain smoothness and extension operator) we conclude that (3.17)
H^1(\Omega) \cong (L_2(\Omega), H^s(\Omega))_{1/s,\,2}, \qquad 1 < s < \infty.
We are dealing with linear FE spaces. One should recall that we are only able to move into the interpolation space arena as long as the inverse (3.7) and approximation (3.11) inequalities are satisfied. We then try to achieve these inequalities within the linear FE framework. One crude observation is that the smoothness of linear FE spaces can vary between 0 and 3/2, meaning that

S_J \subset H^s(\Omega), \qquad 0 \le s \le 3/2.

The requirement 1 < s < \infty then dictates the precise choice s = 1 + \epsilon, 0 < \epsilon \le 1/2. Therefore the inverse (3.7) and approximation (3.11) inequalities must be established in H^{1+\epsilon}, not in H^1. This means that linear finite elements are a little bit smoother than one may presume. By an analysis of the original BPX proof, we can see how the original analysis failed to establish optimality: in the original proof of the suboptimal BPX result [12], the corresponding estimates were established in H^1. We want to emphasize that optimality can only be achieved by viewing the Jackson estimate in H^{1+\epsilon}, \epsilon > 0 arbitrarily small. This subtle point is hidden one way or another in all proofs of optimality of the BPX preconditioner, a deep insight into the underlying theoretical structure which was first pointed out by Bornemann [10]. In the rest of this section we will borrow results from FE theory for 2D quasiuniform meshes. By the definition of the L_2-projection and the Bramble-Hilbert Lemma, the approximation inequality (3.11) holds for s = 0 and s = 2. Interpolating between s = 0 and s = 2, one can show that the approximation inequality (3.11) holds for \alpha = s = 1 + \epsilon (cf. page 2 in [10], Theorem 3.2 in [13], and Theorem 3.5 in [35]):

\|u - Q_J u\|_{L_2} \le c\,h_J^s \|u\|_{H^s} \le c\,2^{-Js} \|u\|_{H^s}, \qquad \forall u \in H^s(\Omega).
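The decay rate in the last display can be observed numerically. Below is a minimal sketch (our illustration; function names ours) that computes the L_2-projection Q_J u of a smooth function onto 1D linear elements and checks that halving h reduces the L_2 error by a factor of about 4, i.e., the s = 2 rate in (3.11).

```python
import numpy as np

def l2_projection_error(f, n):
    # L2 error of the P1 L2-projection of f on a uniform mesh of [0,1]
    # with n elements and zero boundary values
    x = np.linspace(0.0, 1.0, n + 1)
    q, w = np.polynomial.legendre.leggauss(5)
    # Gauss points/weights on every element (quadrature respects the kinks)
    mids, half = (x[:-1] + x[1:]) / 2.0, (x[1:] - x[:-1]) / 2.0
    tq = (mids[:, None] + half[:, None] * q[None, :]).ravel()
    wq = (half[:, None] * w[None, :]).ravel()
    m, h = n - 1, 1.0 / n
    M = (h / 6.0) * (4 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1))
    # load vector b_i = \int f phi_i, hat functions evaluated via np.interp
    b = np.empty(m)
    for i in range(m):
        e = np.zeros(n + 1)
        e[i + 1] = 1.0
        b[i] = np.sum(wq * f(tq) * np.interp(tq, x, e))
    c = np.linalg.solve(M, b)  # nodal values of Q_J f
    uh = np.interp(tq, x, np.concatenate(([0.0], c, [0.0])))
    return np.sqrt(np.sum(wq * (f(tq) - uh) ** 2))

f = lambda t: np.sin(np.pi * t)
rate = l2_projection_error(f, 16) / l2_projection_error(f, 32)  # ~ 4
```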
The Bernstein estimate holds for \beta = s = 1 + \epsilon (cf. page 2 in [10] and Proposition 3.3 in [35]),

\|u\|_{H^s} \le c\,h_J^{-s} \|u\|_{L_2} \le c\,2^{Js} \|u\|_{L_2}, \qquad \forall u \in S_J,

by using the Sobolev-Slobodeckii norm

\|u\|_{H^{1+\epsilon}}^2 \simeq \|u\|_{H^1}^2 + \int_\Omega \int_\Omega \frac{|\nabla u(x) - \nabla u(y)|^2}{|x - y|^{2+2\epsilon}}\,dx\,dy.

Corollary 3.9 for \alpha = \beta = 1/\sigma = s = 1 + \epsilon then implies the crucial isomorphism result (cf. Theorem 2 in [10]): A_{2,2}^1 \cong H^1(\Omega).

4. Optimality of the BPX preconditioner through K-functionals. It is well known that the L_2-projection is H^s-stable on quasiuniform meshes [13], i.e.,

(4.1)
|Q_j u|_{H^s} \le c\,|u|_{H^s}, \qquad \forall u \in H^s, \; 0 \le s \le 1,
holds. In addition, the approximation inequality (3.11) holds in the following special form:

E_{S_j}(f)_2 = \|f - Q_j f\|_{L_2} \le c\,(2^{-j})^s |f|_{H^s}, \qquad f \in H^s, \; 0 \le s \le 2.

In order to obtain the canonical suboptimal lower bound for the BPX preconditioner, \frac{1}{J}\|u\|_{BPX}^2 \le c\,|u|_{H^1}^2, we present two approaches. The first approach uses only the approximation inequality (3.11):

\|Q_j u - Q_{j-1} u\|_{L_2} \le \|u - Q_{j-1} u\|_{L_2} \quad \text{(a property of } Q_j\text{)} \le c\,(2^{-j})^s |u|_{H^s}.

The second approach uses both (3.11) and the H^s-stability of the L_2-projection:

\|Q_j u - Q_{j-1} u\|_{L_2} = \|Q_j u - Q_{j-1} Q_j u\|_{L_2} \le c\,(2^{-j})^s |Q_j u|_{H^s} \le c\,(2^{-j})^s |u|_{H^s} \quad \text{(by (4.1))}.

By using either of the above approaches, we get the canonical suboptimal lower bound for quasiuniform meshes:

\|u\|_{BPX}^2 \le c \sum_{j=0}^{J} |u|_{H^s}^2 = c\,J\,|u|_{H^s}^2, \qquad s = 1.

Next, we present a novel approach by using K-functionals to obtain the optimal and suboptimal lower bounds for quasiuniform meshes. We begin with a lemma that characterizes the components of the BPX norm in terms of the K-functional.

Lemma 4.1. Let the approximation inequality (3.11) hold; then

(4.2)
\|(Q_j - Q_{j-1})u\|_{L_2} \le c\,K(u, 2^{-(j-1)s}; L_2, H^s).
Proof. Let v ∈ H^s. Then

\|(Q_j - Q_{j-1})u\|_{L_2} \le \|u - Q_{j-1}u\|_{L_2} \le \|u - v\|_{L_2} + \|v - Q_{j-1}u\|_{L_2}
\le \|u - v\|_{L_2} + \|v - Q_{j-1}v\|_{L_2} + \|Q_{j-1}(u - v)\|_{L_2}
\le 2\|u - v\|_{L_2} + \|v - Q_{j-1}v\|_{L_2}
\le c\,\big(\|u - v\|_{L_2} + (2^{-(j-1)})^s |v|_{H^s}\big) \quad \text{(by (3.11))}
\le c\,K(u, 2^{-(j-1)s}; L_2, H^s) \quad \text{(taking } \inf_{v \in H^s}\text{)}.

See Lemma 7.1 in [9] for an alternative proof.

Lemma 4.1 requires the approximation inequality, which cannot be satisfied on locally refined meshes for the reason stated at the beginning of §8.1. That is why our optimality results will, for the moment, be limited to uniform refinement. If one can establish the H^s-stability, s > 1, of the L_2-projection, then we will show that the optimal lower bound can be obtained. This approach is taken in the next theorem, and it dramatically simplifies the optimality proof, as first suggested in [1]. However, to the authors' knowledge, the H^s-stability, s > 1, of the L_2-projection has not appeared in the literature. Although the H^s-stability for s > 1 has not been carefully studied, there is consensus [27, 34] that it can be established.

Theorem 4.2. If the L_2-projection Q_{j-1} is H^s-stable as in (4.1), then for
• s = 1 + \epsilon, we have \|u\|_{BPX}^2 \le c\,|u|_{H^s}^2;
• s = 1, we have \frac{1}{J}\|u\|_{BPX}^2 \le c\,|u|_{H^s}^2.

Proof. Q_{j-1} : L_2 \to S_{j-1}, and we know that S_{j-1} \subset H^s. Thus

K(u, 2^{-(j-1)s}; L_2, H^s) \le \|u - Q_{j-1}u\|_{L_2} + (2^{-(j-1)})^s |Q_{j-1}u|_{H^s} \le c\,2^{-(j-1)s} |u|_{H^s}.

The last step is due to the approximation inequality and the H^s-stability of the L_2-projection. Then

\sum_{j=0}^{J} 2^{2j} \|(Q_j - Q_{j-1})u\|_{L_2}^2 \le c \sum_{j=0}^{J} 2^{2j} K(u, 2^{-(j-1)s}; L_2, H^s)^2 \quad \text{(by (4.2))}
\le c\,2^{2s} |u|_{H^s}^2 \sum_{j=0}^{J} 2^{2j(1-s)} \le c\,|u|_{H^s}^2 \;\; (s = 1 + \epsilon > 1) \quad \text{or} \quad \le c\,J\,|u|_{H^s}^2 \;\; (s = 1);

indeed, for s = 1 + \epsilon the geometric series \sum_{j=0}^{J} 2^{-2j\epsilon} is bounded by 1/(1 - 2^{-2\epsilon}) independently of J, while for s = 1 every term equals one and the sum is J + 1.

The remaining piece for BPX optimality is the upper bound, and it is established by using the Bernstein estimate (3.4) in the same way as in (3.1).

Remark 4.1. It is ironic that the original BPX proof was improved to optimal bounds by realizing the fact that the Jackson estimate holds for H^s, s > 1. By noticing the role of s > 1, we see a similar pattern between the H^s-stability of the L_2-projection and the Jackson estimate. One might conjecture that there is a hidden relationship between the H^s-stability of the L_2-projection and the Jackson estimate, which is an avenue for future research.
5. Main norm equivalence theorems and optimality of the BPX preconditioner. Our discussion of the BPX preconditioner so far has been restricted to the setting of quasiuniform meshes. When dealing with quasiuniform meshes, one has a universal simplex size, inverse estimates (3.7), best approximation inequalities (3.10), and further convenient properties at one's disposal. In the presence of such convenience, conventional FE tools are easily employed, which is why most of the theoretical work on the BPX preconditioner has revolved around quasiuniform meshes.
Optimality results in §3 were given as applications of space embeddings. A different approach, namely norm equivalence, can be used for optimality results. It is fundamentally the same as the space embeddings approach, but it appears to be the right abstract framework for extending uniform refinement results to local refinement. Once again, the primary goal of this work is to analyze the BPX preconditioner on locally refined meshes in 2D or 3D. The work of Dahmen and Kunoth [15] established a theoretical framework for analyzing the optimality of BPX for 2D local refinement. Their work was in fact the only one to date which carefully examined a realistic, implementable type of 2D red-green refinement and rigorously established optimality results. Our analysis for the 3D case follows their outline for the 2D case. The focus of the analysis in the norm equivalence is the quantity $v_J^{(p)}$, defined as follows:
$$(5.1)\qquad v_{j,J}^{(p)} = \sup_{u \in S_J} \frac{\|u - Q_j u\|_{L_p}}{\omega_{k+1}(u, 2^{-j}, \Omega)_p}, \qquad v_J^{(p)} = \max\big\{1,\ v_{j,J}^{(p)} : j = 0, \dots, J\big\}.$$
The quantity $v_{j,J}^{(p)}$ can be seen as a characterization of how well one can reach the Jackson estimate (3.9) by using functions from $S_J$. The quantity $v_J^{(p)}$ helps prescribe the lower bound as seen in (3.1) in the presence of the Bernstein estimate (3.4). With the help of this quantity, both the Bernstein estimate (3.4) and the Jackson estimate (3.10) are used to achieve an optimal lower bound (5.3). Eventually, the fundamental norm equivalence between the Besov space $B_{2,2}^1$ and the approximation space $A_{2,2}^1$ is established as we collect all the results given in [15] related to the BPX preconditioner into one theorem, as follows. We would like to emphasize that the Bernstein estimate (3.4) is always the minimum requirement.
Theorem 5.1. Suppose the Bernstein estimate (3.4) holds for some real number $\beta > k$. Then
• For each $0 < s < \min\{\beta, k + 1\}$, there exist constants $0 < c_1, c_2 < \infty$ independent of $u \in S_J$, $J = 0, 1, \dots$, such that (3.1) holds:
$$\frac{c_1}{v_J^{(p)}}\,\|u\|_{A_{p,q}^s} \le \|u\|_{B_{p,q}^s} \le c_2\,\|u\|_{A_{p,q}^s}, \qquad u \in S_J.$$
• The following condition number estimate holds for the BPX preconditioner:
$$(5.2)\qquad \kappa\big({B^{(J)}}^{-1/2} A^{(J)} {B^{(J)}}^{-1/2}\big) = O\big((v_J^{(2)})^2\big), \qquad J \to \infty.$$
• If in addition the Jackson estimate (3.10) holds for an integer $\alpha > k$ and $p = 2$, then
$$(5.3)\qquad v_J^{(p)} = O(1), \qquad J \to \infty,$$
holds, and as a result the condition number is optimal:
$$(5.4)\qquad \kappa\big({B^{(J)}}^{-1/2} A^{(J)} {B^{(J)}}^{-1/2}\big) = O(1), \qquad J \to \infty.$$
Proof. See Theorem 4.1, Proposition 4.1, Theorem 3.1, and Theorem 3.2 in [15], respectively.
Due to the fact that the Jackson estimate cannot hold for local refinement, $v_J^{(p)}$ must be estimated directly (see subsection 8.1 for details). In the sections to follow, the emphasis is on extending optimality results to local refinement. We extend the 2D red-green refinement result in [15] to an analogous 3D red-green refinement procedure. The reader is directed to [8] for an in-depth analysis of this particular refinement procedure. The first 3D optimal BPX result on locally refined meshes with full detail is then given in §6. In particular, we give precise bounds on the size and generation relations between neighboring simplices in the tetrahedralization. We hope that our analysis will serve as a basis for building provably good preconditioners for realistic 3D problems and for implementable refinement procedures [1].
Motivated by applications in computer graphics and computer animation, in §7 we study 2D and 3D versions of red refinement. Since green refinement in red-green refinement is a closure process for the hanging nodes introduced by red refinement, red refinement can be interpreted as a subset of the red-green refinement approach examined in §6. Perhaps contrary to popular belief in the FE modeling community, the same analysis and proof technique used for red-green refinement naturally carries over to red refinement without violating any required assumptions. As a result, we can employ $v_J^{(p)}$ in the analysis, as it supports nonconforming meshes.
6. 3D red-green refinement. As noted in previous sections, BPX is a provably good (optimal) preconditioner for quasiuniform meshes in 2D and 3D. We turn our attention to the optimality of the BPX preconditioner for locally refined meshes in 3D. Dahmen and Kunoth [15] established an optimality statement for the BPX preconditioner using the classical red-green refinement procedure in 2D.
This procedure was popularized by Bank's PLTMG [5] package. We prove that the BPX preconditioner preserves its optimality for an analog of this procedure in 3D; in fact, it is the natural extension of red-green refinement to 3D. The details of the procedure can be found in [8]. We first list a number of geometric assumptions we make concerning the underlying mesh. (The two-dimensional case is quite standard, so we only describe the more complicated three-dimensional case here.) Let $\Omega \subset \mathbb{R}^3$ be a polyhedral domain. We assume that the triangulation $T_j$ of $\Omega$ at level $j$ is a collection of tetrahedra with mutually disjoint interiors which cover $\Omega$:
$$\Omega = \bigcup_{\tau \in T_j} \tau.$$
We want to generate successive refinements $T_0, T_1, \dots$ which satisfy the following conditions:
Assumption 6.1. Nestedness: Each tetrahedron (son) $\tau \in T_j$ is covered by exactly one tetrahedron (father) $\tau' \in T_{j-1}$, and any corner of $\tau$ is either a corner or an edge midpoint of $\tau'$.
Assumption 6.2. Conformity: The intersection of any two tetrahedra $\tau, \tau' \in T_j$ is either empty, a common vertex, a common edge, or a common face.
Assumption 6.3. Nondegeneracy: The interior angles of all tetrahedra in the refinement sequence $T_0, T_1, \dots$ are bounded away from zero.
A regular (red) refinement subdivides a tetrahedron $\tau$ into 8 equal-volume subtetrahedra. We connect the edge midpoints of each face as in 2D regular refinement. We then
cut off four subtetrahedra at the corners which are congruent to $\tau$. An octahedron with three parallelograms remains in the interior. Cutting the octahedron along the two faces of these parallelograms, we obtain four more subtetrahedra which are not necessarily congruent to $\tau$. We choose the diagonal of the parallelogram so that the successive refinements always preserve nondegeneracy [1, 8, 22, 40]. If a tetrahedron is marked for regular refinement, the resulting triangulation violates conformity A.6.2. Nonconformity is then remedied by irregular (green) refinement. In 3D, there are altogether $2^6 = 64$ possible edge refinements, of which 62 are irregular. One must pay extra attention to irregular refinement in the implementation due to the large number of possible nonconforming configurations. Bey [8] gives a methodical way of handling irregular cases. Using symmetry arguments, the 62 irregular cases can be divided into 9 different types. To ensure that the interior angles remain bounded away from zero, we enforce the following additional conditions (similar assumptions were made in [15] for the 2D analogue):
Assumption 6.4. Irregular tetrahedra are not refined further.
Assumption 6.5. Only tetrahedra $\tau \in T_j$ with $L(\tau) = j$ are refined for the construction of $T_{j+1}$, where
(6.1)
L(τ ) = min {j : τ ∈ Tj }
denotes the level of $\tau$. One should note that the restrictive character of A.6.4 and A.6.5 can be eliminated by a modification of the sequence of tetrahedralizations [8]. On the other hand, it is straightforward to enforce both assumptions in a typical local refinement algorithm by minor modifications of the supporting datastructures for tetrahedral elements (cf. [18]). In any event, the proof technique (see (8.20) and (8.21)) requires that both assumptions hold. The last refinement condition, enforced for the 62 possible irregularly refined tetrahedra, is the following:
Assumption 6.6. If three or more edges are refined and do not belong to a common face, then the tetrahedron is refined regularly.
The generation bound for simplices in $\mathbb{R}^d$ which are adjacent on $d$ vertices is the major result required on the support of a basis function so that (8.1) holds. The generation bound for simplices which are adjacent on $d-1, d-2, \dots$ vertices follows by using shape regularity and the generation bound established for $d$-vertex adjacency. We provide rigorous generation bound proofs for all the adjacency types mentioned in the lemmas to follow when $d = 3$.
Lemma 6.1. Let $\tau$ and $\tau'$ be two tetrahedra in $T_j$ sharing a common face $f$. Then
(6.2)
$$|L(\tau) - L(\tau')| \le 1.$$
Proof. (The original proof may be found in [1]; the 2D version appeared in [15].) If $L(\tau) = L(\tau')$, there is nothing to show. Without loss of generality, assume that $L(\tau) < L(\tau')$. The proof requires a detailed and systematic analysis; to show the line of reasoning, we first list the facts used in the proof:
1. $L(\tau') \le j$, because by assumption $\tau' \in T_j$.
2. $L(\tau) < j$ (since $L(\tau) < L(\tau') \le j$).
3. By assumption $\tau \in T_j$, meaning that $\tau$ was never refined from the level $L(\tau)$ at which it was born up to level $j$.
4. Let $\tau''$ be the father of $\tau'$. Then $L(\tau'') = L(\tau') - 1 < j$.
5. $L(\tau) < L(\tau')$ by assumption, implying $L(\tau) \le L(\tau'')$.
6. By (3), $\tau$ belongs to all the triangulations from $L(\tau)$ to $j$; in particular $\tau \in T_{L(\tau'')}$, where by (4) $L(\tau'') < j$.
Now $f$ is the common face of $\tau$ and $\tau'$ on level $j$. If $\tau'$ were obtained by regular refinement of its father $\tau''$, then $f$ would still be the common face of $\tau$ and $\tau''$. But by (6) both $\tau, \tau'' \in T_{L(\tau'')}$, and this would violate conformity A.6.2 on level $L(\tau'')$. Hence, $\tau'$ must be irregular. On the other hand, $L(\tau) \le L(\tau') - 1 = L(\tau'')$. Next, we proceed by eliminating the possibility that $L(\tau) < L(\tau'')$. If so, we repeat the above reasoning, and $\tau''$ becomes irregular. But $\tau''$ is already the father of the irregular $\tau'$, contradicting A.6.4 on level $L(\tau'')$. Hence $L(\tau) = L(\tau'') = L(\tau') - 1$, which concludes the proof.
By A.6.4 and A.6.5, every tetrahedron in any $T_j$ is geometrically similar to some tetrahedron in $T_0$ or to a tetrahedron arising from an irregular refinement of some tetrahedron in $T_0$. Then there exist absolute constants $c_1, c_2$ such that
$$(6.3)\qquad c_1\,\mathrm{size}(\bar\tau)\,2^{-L(\tau)} \le \mathrm{size}(\tau) \le c_2\,\mathrm{size}(\bar\tau)\,2^{-L(\tau)},$$
where $\bar\tau$ is the father of $\tau$ in the initial mesh.
Lemma 6.2. Let $\tau$ and $\tau'$ be two tetrahedra in $T_j$ sharing a common edge (two vertices). Then there exists a finite number $E$ depending on the shape regularity such that
(6.4)
$$|L(\tau) - L(\tau')| \le E.$$
Proof. Start with $\tau = \tau_1$, obtain a face-adjacent neighbor $\tau_2$ (through either of the two faces meeting at the common edge), then obtain the face-adjacent tetrahedron $\tau_3$ of $\tau_2$, and repeat this process $\delta$ times until $\tau' = \tau_\delta$ is reached. Once one of the two faces is picked, the direction of the face-adjacent tetrahedra is determined. $\delta$ is always finite due to shape regularity. The lemma follows by the face-adjacent neighbor relation (6.2).
Lemma 6.3. Let $\tau$ and $\tau'$ be two tetrahedra in $T_j$ sharing a common vertex. Then there exists a finite number $V$ depending on the shape regularity such that
(6.5)
$$|L(\tau) - L(\tau')| \le V.$$
Proof. Take one edge each from $\tau$ and $\tau'$ meeting at the common vertex. By shape regularity, there exist $\eta$ (a bounded number) edge-adjacent tetrahedra between $\tau_1 = \tau$ and $\tau_\eta = \tau'$. By the construction in Lemma 6.2, there exist $\delta_{1,2}$ face-adjacent tetrahedra between $\tau_1$ and $\tau_2$. We repeat this process until we place $\delta_{\eta-1,\eta}$ tetrahedra between $\tau_{\eta-1}$ and $\tau_\eta$. Hence, there exist $\sum_{i=1}^{\eta-1} \delta_{i,i+1}$ face-adjacent tetrahedra between $\tau$ and $\tau'$. Again, the face-adjacent neighbor relation (6.2) concludes the lemma with $V = \sum_{i=1}^{\eta-1} \delta_{i,i+1}$.
Consequently, simplices in the support of a basis function are comparable in size, as indicated in (6.6). This is usually called patchwise quasiuniformity. Furthermore, it was shown in [1] that patchwise quasiuniformity (6.6) holds for 3D marked tetrahedron bisection by Joe and Liu [20] and for 2D newest vertex bisection by Sewell [28] and Mitchell [21]. Due to the restrictive nature of the proof technique (see (8.20) and (8.21)), we focus on refinement procedures which obey A.6.4 and A.6.5.
Lemma 6.4. There is a constant $c$ depending on the shape regularity of $T_j$ and the quasiuniformity of $T_0$ such that
(6.6)
$$\frac{\mathrm{size}(\tau)}{\mathrm{size}(\tau')} \le c, \qquad \forall\, \tau, \tau' \in T_j, \quad \tau \cap \tau' \ne \emptyset.$$
Proof. $\tau$ and $\tau'$ are either face-adjacent ($d$ vertices), edge-adjacent ($d-1$ vertices), or vertex-adjacent, and these cases are handled by (6.2), (6.4), (6.5), respectively. Then
$$\frac{\mathrm{size}(\tau)}{\mathrm{size}(\tau')} \le c\,\frac{2^{-L(\tau)}\,\mathrm{size}(\bar\tau)}{2^{-L(\tau')}\,\mathrm{size}(\bar\tau')} \quad (\text{by } (6.3)) \quad \le c\,2^{|L(\tau)-L(\tau')|}\,\frac{\mathrm{size}(\bar\tau)}{\mathrm{size}(\bar\tau')} \le c\,2^{\max\{1,E,V\}}\,\gamma^{(0)} \quad (\text{by } (6.2), (6.4), (6.5) \text{ and quasiuniformity of } T_0).$$
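The regular octasection described in this section is easy to sketch in code. The following is our own illustrative implementation (the vertex ordering and the choice of the interior diagonal $m_{02}$–$m_{13}$ are one standard convention, assumed here for concreteness rather than prescribed by this paper): the four corner children are congruent to the parent scaled by $1/2$, and all eight children have volume exactly $\mathrm{vol}(\tau)/8$.

```python
import itertools
import numpy as np

def volume(t):
    """Unsigned volume of a tetrahedron given as 4 vertices in R^3."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in t)
    return abs(np.linalg.det(np.column_stack((b - a, c - a, d - a)))) / 6.0

def red_refine(t):
    """Regular (red) octasection of a tetrahedron into 8 children.

    Children 0-3 are the corner tetrahedra (similar to the parent); children
    4-7 come from splitting the interior octahedron along the diagonal
    m02-m13 (a hypothetical but standard choice that preserves nondegeneracy).
    """
    x = [np.asarray(v, dtype=float) for v in t]
    m = {(i, j): 0.5 * (x[i] + x[j]) for i, j in itertools.combinations(range(4), 2)}
    corners = [
        (x[0], m[0, 1], m[0, 2], m[0, 3]),
        (m[0, 1], x[1], m[1, 2], m[1, 3]),
        (m[0, 2], m[1, 2], x[2], m[2, 3]),
        (m[0, 3], m[1, 3], m[2, 3], x[3]),
    ]
    interior = [  # four tetrahedra sharing the diagonal m02-m13
        (m[0, 1], m[0, 2], m[0, 3], m[1, 3]),
        (m[0, 1], m[0, 2], m[1, 2], m[1, 3]),
        (m[0, 2], m[0, 3], m[1, 3], m[2, 3]),
        (m[0, 2], m[1, 2], m[1, 3], m[2, 3]),
    ]
    return corners + interior

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
children = red_refine(ref)
print([round(volume(c) / volume(ref), 4) for c in children])  # each ratio equals 0.125
```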
7. Red refinement. In red-green refinement, hanging nodes are closed by green refinement. In this sense, green refinement is auxiliary or secondary to red refinement. Red refinement has a number of attractive features when compared to red-green refinement. The type of red refinement under consideration here is exactly red-green refinement without green closure. This version of red refinement can be interpreted as a reduction of red-green refinement by deleting the impact of green closure. Hence, in essence a red mesh can be turned into a red-green mesh when needed. Observe that the red mesh in the middle is turned into a red-green mesh by green closure as in the right mesh in Figure 7.1. This correspondence is especially useful in inheriting certain properties in red refined meshes which are already established for red-green refinement.
Fig. 7.1. Left: Coarse DOF, N0 = 8. Middle: a DOF created by red refinement, N1red = 9. Right: Green closure deployed, N1red−green = 13.
Red refinement as a stand-alone procedure creates new DOF by pairwise quadrasection or octasection. The resulting hanging nodes are not closed, and therefore cannot be DOF (see the middle mesh in Figure 7.1, where a new DOF is represented by a small square). The initial triangulation $T_0$ gives rise to nested, but possibly nonconforming, triangulations; see the middle mesh in Figure 7.1. The father-son relationship (6.3) boils down to the simple expression
$$\mathrm{size}(\tau) = \mathrm{size}(\bar\tau)\,2^{-L(\tau)}.$$
Subspaces generated by red refinement are sufficiently rich to handle any desired type of singularity, but will not be as rich as the corresponding subspaces generated by red-green refinement. A function $u \in S_j$ is determined by its values at the DOF. Hanging nodes are always midpoints of edges connecting two DOF. The values at hanging nodes are computed by linear interpolation using the corresponding DOF at the ends of the edges. Although the mesh is nonconforming, we have well-defined basis functions which satisfy the Lagrange property; see Figure 7.2.
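The interpolation rule for hanging nodes can be sketched as follows (a toy 2D patch with hypothetical node numbering of our own choosing, not taken from the paper): slaving each hanging node to the DOF at the ends of its edge reproduces every globally linear function, which is precisely the Lagrange property referred to above.

```python
def slave_hanging(u_dof, hanging):
    """Extend DOF values to hanging nodes by edge-midpoint interpolation.

    u_dof   : dict vertex -> value at a degree of freedom
    hanging : dict hanging vertex -> (a, b), the two DOF at the ends of the
              refined edge whose midpoint it is
    """
    u = dict(u_dof)
    for h, (a, b) in hanging.items():
        u[h] = 0.5 * (u[a] + u[b])
    return u

# Toy red-refined patch: triangle (0, 1, 2) next to a refined neighbor that
# introduced a hanging node 3 at the midpoint of edge (1, 2).
coords = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0), 3: (0.5, 0.5)}
hanging = {3: (1, 2)}
f = lambda p: 2.0 * p[0] - 3.0 * p[1] + 1.0      # an arbitrary global linear
u_dof = {v: f(coords[v]) for v in (0, 1, 2)}
u = slave_hanging(u_dof, hanging)
assert abs(u[3] - f(coords[3])) < 1e-12          # linears are reproduced exactly
```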
Fig. 7.2. Basis functions on meshes created by two different red refinements. Left: Two DOF created on mutually edge-adjacent simplices. Right: Two DOF created on simplices which are not mutually edge-adjacent.
A simplex in the red mesh can be expressed as a union of simplices in the corresponding red-green mesh. Hence, the red FE space is a subspace of the corresponding red-green FE space. Similarly, any simplex in $T_j$ can be expressed as a union of simplices in the uniformly refined triangulation $\tilde T_j$:
$$(7.1)\qquad \tau = \bigcup_{\tilde\tau \in \tilde T_j,\ \tilde\tau \subset \tau} \tilde\tau, \qquad \tau \in T_j.$$
In a sense, simplices in $\tilde T_j$ are the building blocks for simplices in $T_j$. This property is no longer valid if red refinement is supplemented with green refinement. The simplex relationship (7.1) gives rise to the most attractive property of red refinement: $S_j$ is a true subspace of $\tilde S_j$:
$$(7.2)\qquad u = \sum_{i=1}^{N_j} u_i\,\phi_i^{(j)} \in S_j \ \Longrightarrow\ u = \sum_{i=1}^{\tilde N_j} \tilde u_i\,\tilde\phi_i^{(j)} \in \tilde S_j.$$
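A 1D analogue of the inclusion (7.2) can be sketched in a few lines (our own minimal illustration, with hypothetical mesh sizes): a hat function on $T_j$ is reproduced exactly in the uniformly refined space $\tilde S_j$, with fine-grid coefficients given by its nodal values.

```python
import numpy as np

def hat(nodes, i, t):
    """Evaluate the piecewise linear hat function of node i on a 1D mesh."""
    v = np.zeros(len(nodes))
    v[i] = 1.0
    return np.interp(t, nodes, v)

coarse = np.linspace(0.0, 1.0, 5)   # plays the role of T_j
fine = np.linspace(0.0, 1.0, 9)     # plays the role of the uniform T~_j
t = np.linspace(0.0, 1.0, 501)      # evaluation points

phi = hat(coarse, 2, t)             # a coarse basis function
# Expand in the fine basis: the coefficients are simply its fine nodal values.
coeffs = hat(coarse, 2, fine)
expansion = sum(c * hat(fine, k, t) for k, c in enumerate(coeffs))
assert np.allclose(phi, expansion)  # exact reproduction: S_j is inside S~_j
```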
Relation (7.1) implies (7.2) because basis functions which live on simplices of $T_j$ can be uniquely expressed by basis functions living on simplices of $\tilde T_j$. The fact that $S_j \subset \tilde S_j$ is quite convenient simply because the standard estimates, such as inverse inequalities and Cauchy-Schwarz-like estimates, which naturally hold for $\tilde S_j$, are inherited by $S_j$ without additional effort.
Red refinement is also preferred over red-green refinement in practical applications such as computer graphics. Two of the favorable practical properties of red refinement are as follows: the valence (number of edges meeting at a vertex) of all nodes except the coarsest ones is equal to 6, and the shape regularity constant (ratio of the diameter of the circumscribing sphere to that of the inscribed sphere) of the father simplex remains the same for any of its children in the ML hierarchy. Moreover, red refinement can be programmed using relatively simple datastructures such as quadtrees and octrees.
One has to modify the red-green refinement assumptions for red refinement. Nestedness A.6.1 is automatically satisfied. However, conformity in the sense of A.6.2 is violated. Neighbor relations as in A.6.2 are categorized as regular adjacency, and the alternative relations, namely intersection at a quarter of a face, at a half of an edge, or at a hanging node, are categorized as irregular adjacency. The latter category
will be characterized as $f/4$, $e/2$, and $h$, respectively. In order to preserve nondegeneracy A.6.3, we substitute A.6.4 with the following analogous version designed for red refinement.
Assumption 7.1. There can be at most one hanging node $h$ on a given edge in $T_j$.
A.6.5 and A.6.6 can remain as stated. A side remark about A.6.6 is that A.6.4 dictates a quadrasection rather than two bisections for a 2D simplex having two hanging nodes, and an octasection rather than irregular refinement for a 3D simplex having three or more hanging nodes that do not belong to a common face.
7.1. Generation bounds. Assume that $\tau$ is a child simplex formed by regular refinement, which now has an irregular adjacency with $\tau'$. Then green closure in the case of red-green refinement gives rise to a child $\tau''$ of $\tau'$ with $L(\tau'') = L(\tau') + 1$. By A.6.5, $\tau$ and $\tau''$ live on the same level. One returns to red refinement by undoing the green closure, and finds that $L(\tau) = L(\tau') + 1$. Consequently, we have the following generation bound for irregular adjacency:
$$(7.3)\qquad |L(\tau) - L(\tau')| = 1, \qquad \tau \cap \tau' \in \{f/4,\ e/2,\ h\}.$$
For regular adjacency, the generation bound (6.2) is still valid for face-adjacent $\tau$ and $\tau'$, neither of which has an irregular adjacency with the rest of the simplices. However, if either one, say $\tau$, has an irregular adjacency with any other simplex, then after green closure, a child $\tau''$ of $\tau'$ will satisfy the generation bound (6.2) with $\tau$. Undoing the green closure can give rise to a case which increases the generation bound by 1:
$$(7.4)\qquad |L(\tau) - L(\tau')| \le 2, \qquad \tau \cap \tau' = f.$$
Since we have the generation bounds (7.3) and (7.4) for face-adjacent simplices for irregular and regular adjacencies, respectively, a generation bound such as (6.4) for edge-adjacent simplices follows by shape regularity, with a larger bound than the red-green bound due to irregular adjacencies. Similarly, an analog of (6.5) follows by an application of the bound for edge-adjacent simplices, the bound for irregular adjacency in the case of hanging nodes, and shape regularity. Basis stability (8.11), and hence the Bernstein estimate (3.4), holds, and the ultimate goal, namely the optimal BPX lower bound for red refinement, follows by exactly the same analysis as in the case of red-green refinement.
Remark 7.1. Bornemann and Yserentant [9] studied a different version of 2D red refinement for which they established BPX optimality through the use of K-functionals. A.7.1 is replaced by the strong requirement that the levels of two simplices differ by at most 1 whenever they have at least one common node. This approach is restrictive: it brings a patchwise uniform refinement flavor, and is closer to uniform refinement than the type of red refinement discussed in this section. They exploit the more restrictive assumptions in two ways. First, local refinement details can be bypassed by relating them to the uniform refinement case: due to the patchwise uniform refinement, all the necessary estimates in local refinement can be characterized by uniform refinement estimates. Second, with $p \in C^1(\Omega)$, they can establish a strengthened Cauchy-Schwarz inequality which can also be useful for multiplicative Schwarz methods and for the BPX lower bound. Their optimality result then relies on smooth PDE coefficients. On the other hand, our refinement strategy is less restrictive. One can introduce DOF inside a given patch without uniformly refining the whole patch. This behavior
is demonstrated in Figure 7.2. Our framework supports $p \in L_\infty(\Omega)$, which is essential in many applications in science and engineering [1].
8. Establishing optimality of the BPX preconditioner. Our motivation is to form a stable basis in the following sense [26]:
$$(8.1)\qquad \Big\|\sum_{x_i \in N_j} u_i\,\phi_i^{(j)}\Big\|_{L_2(\Omega)} \simeq \big\|\{\mathrm{size}^{1/2}(\mathrm{supp}\,\phi_i^{(j)})\,u_i\}_{x_i \in N_j}\big\|_{\ell_2}.$$
The basis stability (8.1) will then guarantee that the Bernstein estimate (3.4) holds, which is the first step in establishing the norm equivalence (3.1). We would like to elaborate on (8.1). A basis function corresponding to a newer vertex will have smaller support than one corresponding to an old vertex. For a stable basis, functions with small supports have to be augmented by an appropriate scaling so that $\|\phi_i^{(j)}\|_{L_2(\Omega)}$ remains roughly the same for all basis functions. This is reflected in $\mathrm{size}(\mathrm{supp}\,\phi_i^{(j)})$ by defining
(8.2)
Lj,i = min{L(τ ) : τ ∈ Tj , xi ∈ τ }.
Then $\mathrm{size}(\mathrm{supp}\,\phi_i^{(j)}) \simeq 2^{-dL_{j,i}}$. We prefer to use an equivalent form of stability; such a basis is called an $L_2$-stable Riesz basis (see section 2 and Remark 1 in [3]):
$$(8.3)\qquad \Big\|\sum_{x_i \in N_j} \hat u_i\,\hat\phi_i^{(j)}\Big\|_{L_2(\Omega)} \simeq \big\|\{\hat u_i\}_{x_i \in N_j}\big\|_{\ell_2},$$
where $\hat\phi_i^{(j)}$ denotes the scaled basis, and the relationship between (8.1) and (8.3) is given as follows:
$$(8.4)\qquad \hat\phi_i^{(j)} = 2^{(d/2)L_{j,i}}\,\phi_i^{(j)}, \qquad \hat u_i = 2^{-(d/2)L_{j,i}}\,u_i, \qquad x_i \in N_j.$$
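The effect of the scaling (8.4) can be sketched in a 1D analogue (our own construction, not from the paper; with $d = 1$ the diagonal of the mass matrix plays the role of $2^{-dL_{j,i}}$): on a mesh graded geometrically toward the origin, the raw hat-function mass matrix becomes arbitrarily ill-conditioned as the depth $J$ grows, while the scaled basis yields a uniformly bounded condition number, i.e., an $L_2$-stable Riesz basis in the sense of (8.3).

```python
import numpy as np

def graded_mesh(J):
    """Nodes of [0,1] geometrically refined toward 0: 0, 2^-J, ..., 1/2, 1."""
    return np.array([0.0] + [2.0 ** (-j) for j in range(J, -1, -1)])

def mass_matrix(x):
    """P1 mass matrix for hat functions at the interior nodes of mesh x."""
    h = np.diff(x)
    n = len(x) - 2
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = (h[i] + h[i + 1]) / 3.0
        if i + 1 < n:
            M[i, i + 1] = M[i + 1, i] = h[i + 1] / 6.0
    return M

for J in (5, 10, 20):
    M = mass_matrix(graded_mesh(J))
    d = 1.0 / np.sqrt(np.diag(M))
    Ms = d[:, None] * M * d[None, :]   # scaled basis: each hat normalized in L2
    print(J, np.linalg.cond(M), np.linalg.cond(Ms))
```

The unscaled condition number grows roughly like the ratio of the largest to the smallest element, whereas the scaled one stays below a small constant for all depths; this is the 1D shadow of the patchwise quasiuniformity argument (6.6).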
Remark 8.1. Our construction works in any $d$-dimensional setting with the scaling (8.4). However, it is not clear how to define face-adjacency relations for $d > 3$. If such relations can be defined through some topological or geometrical abstraction, then our framework naturally extends to $d$-dimensional local refinement strategies, and hence the optimality of the BPX preconditioner can be guaranteed in $\mathbb{R}^d$, $d \ge 1$. The analysis is done purely with basis functions, completely independent of the underlying mesh geometry. Since basis functions in red refinement are well defined (see Figure 7.2), hanging nodes do not pose any problems.
The element mass matrix gives rise to the following useful formula for linear $g$:
$$(8.5)\qquad \|g\|_{L_2(\tau)}^2 = \frac{\mathrm{volume}(\tau)}{(d+1)(d+2)}\left[\sum_{i=1}^{d+1} g(x_i)^2 + \Big(\sum_{i=1}^{d+1} g(x_i)\Big)^2\right],$$
where $x_i$, $i = 1, \dots, d+1$, are the vertices of $\tau$, $d = 2, 3$. In view of (8.5), we have
$$\|\hat\phi_i^{(j)}\|_{L_2(\Omega)}^2 = 2^{dL_{j,i}}\,\frac{\mathrm{volume}(\mathrm{supp}\,\hat\phi_i^{(j)})}{(d+1)(d+2)}.$$
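Formula (8.5) is easy to check numerically; the sketch below (our own verification code, not from the paper) compares it on the reference tetrahedron against a symmetric 4-point quadrature rule that is exact for quadratics, applied to a randomly chosen linear function $g$.

```python
import numpy as np

def norm2_p1(vertex_vals, volume, d):
    """||g||_{L2(tau)}^2 for linear g on a d-simplex, via formula (8.5)."""
    g = np.asarray(vertex_vals, dtype=float)
    return volume / ((d + 1) * (d + 2)) * (np.sum(g ** 2) + np.sum(g) ** 2)

# d = 3 check on the reference tetrahedron: a symmetric 4-point rule with
# equal weights vol/4 integrates (linear)^2 exactly.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
vol = 1.0 / 6.0
a, b = (5 + 3 * np.sqrt(5)) / 20, (5 - np.sqrt(5)) / 20
bary = np.array([[a, b, b, b], [b, a, b, b], [b, b, a, b], [b, b, b, a]])
pts = bary @ verts
rng = np.random.default_rng(0)
coef = rng.standard_normal(4)   # g = c0 + c1 x + c2 y + c3 z
g = lambda p: coef[0] + p[..., 0] * coef[1] + p[..., 1] * coef[2] + p[..., 2] * coef[3]
quad = vol / 4.0 * np.sum(g(pts) ** 2)
print(quad, norm2_p1(g(verts), vol, d=3))   # the two values agree
```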
Since the min in (8.2) is attained, there exists at least one $\tau \in \mathrm{supp}\,\hat\phi_i^{(j)}$ such that $L(\tau) = L_{j,i}$. By (6.3) we have
$$(8.6)\qquad 2^{-L_{j,i}} \simeq \frac{\mathrm{size}(\tau)}{\mathrm{size}(\bar\tau)}.$$
Also,
$$(8.7)\qquad \mathrm{volume}(\mathrm{supp}\,\hat\phi_i^{(j)}) \simeq \sum_{i=1}^{M} \mathrm{size}^3(\tau_i), \qquad \tau_i \in \mathrm{supp}\,\hat\phi_i^{(j)}.$$
By (6.6), we have
$$(8.8)\qquad \mathrm{size}(\tau_i) \simeq \mathrm{size}(\tau).$$
Combining (8.7) and (8.8), we conclude
$$(8.9)\qquad \mathrm{volume}(\mathrm{supp}\,\hat\phi_i^{(j)}) \simeq M\,\mathrm{size}^3(\tau).$$
Finally, (8.6) and (8.9) yield
$$(8.10)\qquad 2^{dL_{j,i}}\,\mathrm{volume}(\mathrm{supp}\,\hat\phi_i^{(j)}) \simeq M\,\mathrm{size}^3(\bar\tau).$$
Some final comments on the constants in (8.10) are as follows. First, $M$ is a uniformly bounded constant by shape regularity. Second, we already know by A.6.4 and A.6.5 that every tetrahedron in any $T_j$ is geometrically similar to some tetrahedron in $T_0$ or to a tetrahedron arising from an irregular refinement of some tetrahedron in $T_0$, hence, to some tetrahedron of a fixed finite collection. Therefore, we can view the size of any tetrahedron in $T_0$, in particular the size of $\bar\tau$, as a constant. Combining the two arguments above, we have established that
$$(8.11)\qquad \|\hat\phi_i^{(j)}\|_{L_2(\Omega)} \simeq 1, \qquad x_i \in N_j.$$
Let $g = \sum_{x_i \in N_j} \hat u_i\,\hat\phi_i^{(j)} \in S_j$. For any $\tau \in T_j$ we have
$$(8.12)\qquad \|g\|_{L_2(\tau)}^2 \le c \sum_{x_i \in N_{j,\tau}} |\hat u_i|^2\,\|\hat\phi_i^{(j)}\|_{L_2(\Omega)}^2,$$
where $N_{j,\tau} = \{x_i \in N_j : x_i \in \tau\}$, whose cardinality is uniformly bounded in $\tau \in T_j$ and $j \in \mathbb{N}_0$. By the scaling (8.4), we get equality in the estimate below; the inequality is a standard inverse estimate, where one bounds $g(x_i)$ using formula (8.5) and handles the volume in the formula by (6.3):
$$(8.13)\qquad |\hat u_i|^2 = 2^{-dL_{j,i}}\,|g(x_i)|^2 \le c\,2^{-dL_{j,i}}\,2^{dL_{j,i}}\,\|g\|_{L_2(\tau)}^2.$$
We sum over $\tau \in T_j$ in (8.12) and (8.13), and by using (8.11) we achieve the stability (8.3). This allows us to establish the Bernstein estimate (3.4).
Lemma 8.1. For the scaled basis (8.4), the Bernstein estimate (3.4) holds for $\beta = 3/2$.
Proof. (8.11) together with (8.12) and (8.13) asserts that the scaled basis (8.4) is stable in the sense of (8.3) (see also page 15 in [26]). Hence, by Theorem 4 in [26], we recover the Bernstein estimate (3.4). Although this result can be found as Lemma 2 in [23], one should note that here the proof works independently of the spatial dimension.
8.1. Lower bound in the norm equivalence. In a locally refined mesh, the Jackson estimate (3.10) holds only for functions whose singularities are somehow well captured by the mesh geometry. For instance, if a mesh is designed to pick up the singularity at $x = 0$ of $y = 1/x$, then on the same mesh we will not be able to recover a singularity at $x = 1$ of $y = 1/(x-1)$. Hence the Jackson estimate (3.10) cannot hold for all functions $f \in W_p^k$. As pointed out above, some kind of Jackson-like estimate (3.9) can hold for special functions. In order to get the lower bound, we focus on estimating $v_J^{(2)}$ directly, as in [15] for the 2D setting.
Let $\tau \in T_j$ be a tetrahedron with vertices $x_1, x_2, x_3, x_4$. Clearly the restrictions of $\hat\phi_i^{(j)}$ to $\tau$, $x_i \in \{x_1, x_2, x_3, x_4\}$, are linearly independent over $\tau$. Then there exists a unique set of linear polynomials $\psi_1^\tau, \psi_2^\tau, \psi_3^\tau, \psi_4^\tau$ such that
$$(8.14)\qquad \int_\tau \hat\phi_k^{(j)}(x,y,z)\,\psi_l^\tau(x,y,z)\,dx\,dy\,dz = \delta_{kl}, \qquad x_k, x_l \in \{x_1, x_2, x_3, x_4\}.$$
For $x_i \in N_j$ and $\tau \in T_j$ with $x_i \in \tau$, define
$$(8.15)\qquad \xi_i^{(j)}(x,y,z) = \begin{cases} \frac{1}{M_i}\,\psi_i^\tau(x,y,z), & (x,y,z) \in \tau, \\ 0, & (x,y,z) \notin \mathrm{supp}\,\hat\phi_i^{(j)}, \end{cases}$$
where $M_i$ is the number of tetrahedra of $T_j$ in $\mathrm{supp}\,\hat\phi_i^{(j)}$. By (8.14) and (8.15), we obtain
$$(8.16)\qquad (\xi_k^{(j)}, \hat\phi_l^{(j)}) = \int_\Omega \xi_k^{(j)}(x,y,z)\,\hat\phi_l^{(j)}(x,y,z)\,dx\,dy\,dz = \delta_{kl}, \qquad x_k, x_l \in N_j.$$
We can now define a quasi-interpolant, in fact a projector onto $S_j$:
$$(8.17)\qquad (\tilde Q_j f)(x,y,z) = \sum_{x_i \in N_j} (f, \xi_i^{(j)})\,\hat\phi_i^{(j)}(x,y,z).$$
One can easily observe by (8.11) and (8.16) that
$$(8.18)\qquad \|\xi_i^{(j)}\|_{L_2(\Omega)} \simeq 1, \qquad x_i \in N_j, \quad j \in \mathbb{N}_0.$$
Letting $\Omega_{j,\tau} = \bigcup\{\tau' \in T_j : \tau \cap \tau' \ne \emptyset\}$, we can conclude from (8.11) and (8.18) that
$$(8.19)\qquad \|\tilde Q_j f\|_{L_2(\tau)} = \Big\|\sum_{x_k \in N_j} (f, \xi_k^{(j)})\,\hat\phi_k^{(j)}\Big\|_{L_2(\tau)} \le c\,\|f\|_{L_2(\Omega_{j,\tau})}.$$
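The construction (8.14)–(8.16) can be mirrored in 1D (our own toy version on an interval mesh, not from the paper): the element-local dual polynomials come from inverting the 2×2 local mass matrix, and $\xi_i$ averages them over the $M_i$ elements containing $x_i$. The resulting functions are biorthogonal to the nodal basis.

```python
import numpy as np

def dual_coeffs(h):
    """On tau = [a, b] the local P1 mass matrix is M = (h/6)[[2,1],[1,2]];
    psi_l = sum_m (M^{-1})_{lm} lambda_m satisfies int_tau lambda_k psi_l = delta_kl."""
    return np.linalg.inv(h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]]))

x = np.array([0.0, 0.25, 0.5, 1.0])          # a nonuniform 1D mesh
n, nel = len(x), len(x) - 1

def quad(f, a, b):
    """Two-point Gauss rule: exact for the (at most cubic) products below."""
    m, r = 0.5 * (a + b), 0.5 * (b - a) / np.sqrt(3.0)
    return 0.5 * (b - a) * (f(m - r) + f(m + r))

def hat(i):
    return lambda t: np.interp(t, x, np.eye(n)[i])

def xi(i):
    """xi_i as in (8.15): averaged dual polynomial over the M_i elements
    containing node x_i (M_i = 1 or 2 in 1D)."""
    elems = [e for e in range(nel) if i in (e, e + 1)]
    def f(t):
        out = 0.0
        for e in elems:
            a, b = x[e], x[e + 1]
            if a <= t <= b:
                lam = np.array([(b - t) / (b - a), (t - a) / (b - a)])
                out += dual_coeffs(b - a)[0 if i == e else 1] @ lam / len(elems)
        return out
    return f

# Verify the biorthogonality (8.16): (xi_k, phi_l) = delta_kl.
G = np.array([[sum(quad(lambda t: xi(k)(t) * hat(l)(t), x[e], x[e + 1])
                   for e in range(nel)) for l in range(n)] for k in range(n)])
assert np.allclose(G, np.eye(n), atol=1e-12)
```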
We now define the subset of the triangulation where the refinement activity stops, meaning that all tetrahedra in $T_j^*$, $j \le m$, also belong to $T_m$:
$$(8.20)\qquad T_j^* = \{\tau \in T_j : L(\tau) < j,\ \Omega_{j,\tau} \cap \tau' = \emptyset \ \ \forall \tau' \in T_j \text{ with } L(\tau') = j\}.$$
Since there is no refinement activity on $T_j^*$, the FE spaces are frozen in the refinement hierarchy; that is, $g \in S_m$ implies $g \in S_j$ for $j \le m$.
Claim 8.1. For $g \in S_j$, $\tilde Q_j g = g$.
Proof. Let $g(x,y,z) = \sum_{x_k \in N_j} \hat g_k\,\hat\phi_k^{(j)}(x,y,z)$. Then
$$(\tilde Q_j g)(x,y,z) = \sum_{x_k \in N_j} \sum_{x_l \in N_j} \hat g_l\,(\xi_k^{(j)}, \hat\phi_l^{(j)})\,\hat\phi_k^{(j)}(x,y,z) = \sum_{x_k \in N_j} \hat g_k\,\hat\phi_k^{(j)}(x,y,z) = g(x,y,z).$$
By Claim 8.1, $\tilde Q_j g = g$ on $T_j^*$. Hence,
$$(8.21)\qquad \|g - \tilde Q_j g\|_{L_2(\tau)} = 0, \qquad \tau \in T_j^*.$$
Claim 8.2. $\tilde Q_j$ fixes polynomials of degree at most 1, i.e.,
$$(\tilde Q_j P)(x,y,z) = P(x,y,z), \qquad P \in \Pi_1(\mathbb{R}^3).$$
Proof. We can uniquely represent a linear function on a tetrahedron by the basis functions corresponding to its vertices:
$$P|_\tau = \alpha_1 \phi_1^{(j)} + \alpha_2 \phi_2^{(j)} + \alpha_3 \phi_3^{(j)} + \alpha_4 \phi_4^{(j)}.$$
For $x_i \in \tau$,
$$\tilde Q_j P|_\tau = (P, \xi_1)\phi_1^{(j)} + (P, \xi_2)\phi_2^{(j)} + (P, \xi_3)\phi_3^{(j)} + (P, \xi_4)\phi_4^{(j)} = \alpha_1 \phi_1^{(j)} + \alpha_2 \phi_2^{(j)} + \alpha_3 \phi_3^{(j)} + \alpha_4 \phi_4^{(j)} = P.$$
The fact that $\tilde Q_j$ fixes polynomials (Claim 8.2) and (8.19) imply
$$(8.22)\qquad \|g - \tilde Q_j g\|_{L_2(\tau)} \le \|g - P\|_{L_2(\tau)} + \|\tilde Q_j (P - g)\|_{L_2(\tau)} \le c\,\|g - P\|_{L_2(\Omega_{j,\tau})}, \qquad \tau \in T_j \setminus T_j^*.$$
We would like to bound the right-hand side of (8.22) in terms of a modulus of smoothness in order to reach a Jackson-type estimate. Following [15], we utilize a modified modulus of smoothness:
$$\tilde\omega_k(f, t, \Omega)_p^p = t^{-d} \int_{[-t,t]^d} \|\Delta_h^k f\|_{L_p(\Omega_{k,h})}^p \, dh.$$
The two moduli can be shown to be equivalent [16]: there exist $0 < c_1, c_2 < \infty$ such that
$$c_1\,\tilde\omega_k(f,t,\Omega)_p \le \omega_k(f,t,\Omega)_p \le c_2\,\tilde\omega_k(f,t,\Omega)_p.$$
(The equivalence of the two moduli of smoothness $\omega_k$ and $\tilde\omega_k$ in the one-dimensional setting can be found in Lemma 5.1 in [16].) For $\tau$ a simplex in $\mathbb{R}^d$ and $t = \mathrm{size}(\tau)$, a Whitney estimate shows that [17, 23, 30]
$$(8.23)\qquad \inf_{P \in \Pi_{k-1}(\mathbb{R}^d)} \|f - P\|_{L_p(\tau)} \le c\,\tilde\omega_k(f, t, \tau)_p,$$
where $c$ depends only on the smallest angle of $\tau$ but not on $f$ and $t$. The reason why $\tilde Q_j$ works well for tetrahedralizations in 3D is the fact that the Whitney estimate (8.23) remains valid for any spatial dimension.
$T_j \setminus T_j^*$ is the part of the tetrahedralization $T_j$ where refinement is active at every level. Then, in view of (6.6),
$$\mathrm{size}(\Omega_{j,\tau}) \simeq 2^{-j}, \qquad \tau \in T_j \setminus T_j^*.$$
Taking the inf over $P \in \Pi_{k-1}(\mathbb{R}^3)$ in (8.22) and using the Whitney estimate (8.23), we conclude
$$\|g - \tilde Q_j g\|_{L_2(\tau)} \le c\,\tilde\omega_k(g, 2^{-j}, \Omega_{j,\tau})_2.$$
Recalling (8.21) and summing over $\tau \in T_j \setminus T_j^*$ gives
$$\|g - \tilde Q_j g\|_{L_2(\Omega)} \le c\,\tilde\omega_2(g, 2^{-j}, \Omega)_2.$$
Switching from the modified modulus of smoothness to the standard one, we consequently obtain
$$\|g - Q_j g\|_{L_2(\Omega)} \le \|g - \tilde Q_j g\|_{L_2(\Omega)} \le c\,\omega_2(g, 2^{-j}, \Omega)_2,$$
and (5.1) yields
$$v_J^{(2)} = O(1), \qquad J \to \infty.$$
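The moduli of smoothness driving the Whitney estimate can be probed numerically. The crude 1D sketch below is our own illustration (the sup over $h$ is sampled, not computed exactly); it confirms the decay $\omega_2(f, t)_2 = O(t^2)$ for a smooth $f$, which is the rate behind the $2^{-j}$ factors above.

```python
import numpy as np

def omega2(f, t, n=2000):
    """Crude numerical second-order L2 modulus of smoothness on (0, 1):
    sup over 0 < h <= t of || f(x) - 2 f(x+h) + f(x+2h) ||_{L2(0, 1-2h)}."""
    best = 0.0
    for h in np.linspace(t / 40.0, t, 40):
        x = np.linspace(0.0, 1.0 - 2.0 * h, n)
        d2 = f(x + 2.0 * h) - 2.0 * f(x + h) + f(x)
        best = max(best, np.sqrt(np.sum(d2 ** 2) * (x[1] - x[0])))
    return best

r = [omega2(np.sin, 2.0 ** -j) for j in range(2, 7)]
ratios = [r[i] / r[i + 1] for i in range(len(r) - 1)]
print(ratios)   # ratios approach 4, i.e. omega_2(f, t)_2 = O(t^2) for smooth f
```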
9. Conclusion. In this article, the first in a set of three papers [2, 3], our aim was to extend the optimality of the BPX preconditioner [12] from the uniform refinement setting to the local refinement setting; previous work had been mostly restricted to the uniform refinement setting. We established optimality results for the BPX preconditioner in three spatial dimensions on locally refined meshes constructed using certain red-green and red refinement procedures which are easily implemented using standard datastructures. The proof techniques used are quite general; in particular, the results in our framework require no smoothness assumptions on the differential equation coefficients, allowing $p \in L_\infty(\Omega)$, and the refinement procedures may produce nonconforming meshes, as in the case of red refinement. Moreover, the theoretical setting supports arbitrary spatial dimension $d \ge 1$; only the geometrical relationships must be re-established for spatial dimension $d \ge 4$.
The optimality results presented for the BPX preconditioner extend the original two-dimensional results of Dahmen and Kunoth [15]. These results were based on establishing geometrical relationships, particularly the generation bounds for simplices that are adjacent on $d$ vertices ((6.2), (7.4)) and the sizes of neighboring simplices (6.3), for 3D/2D red-green and red refinement procedures. These properties were used to show that the resulting finite element subspaces allow for the construction of a scaled $L_2$-stable Riesz basis. This naturally took us to the approximation theory arena, where the Bernstein estimate is then guaranteed to hold (see Lemma 8.1). We then presented a survey of the central approximation theory tools. By using more recent analysis techniques, we reproduced a number of results which have appeared in the literature.
In particular, the analysis of the BPX preconditioner with K-functionals by Bornemann and Yserentant [9] and the critical role of $H^s$, $s > 1$, in the Jackson estimate observed by Bornemann [10] yielded some additional insight into the underlying theoretical structure of the BPX preconditioner. Utilizing the K-functional technology and the
$H^s$-stability of the $L_2$-projection for $s > 1$, we demonstrated that the new techniques often led to notably shorter and simpler optimality proofs for the BPX preconditioner in the quasiuniform setting. The computational complexity of an iterative method that employs a preconditioner is the number of iterations times the cost per iteration. In the case of the conjugate gradient method, the number of iterations is bounded by the square root of the condition number of the preconditioned system. The cost per iteration includes both the iterative method itself and the cost of applying the preconditioner. While the results outlined above show optimality of the BPX preconditioner on certain classes of locally refined meshes, in the sense that the condition number was shown to be uniformly bounded, a standard application of the method on locally refined meshes has worse than optimal (linear) complexity. This is due to the fact that the BPX representation $u = \sum_{j=0}^{J} (Q_j - Q_{j-1})u$ gives rise to components which are not locally supported. This prevents BPX methods from being a viable tool in realistic adaptive refinement scenarios, where a geometric growth in the number of degrees of freedom (DOF) between levels is usually not present. Exploiting the good decay rates of the basis functions in the BPX decomposition, locally supported approximations have been proposed by Vassilevski and Wang [32, 33] and by Stevenson [29]. Our emphasis will be on the methods introduced in [32, 33], known as wavelet modified hierarchical basis (WMHB) methods. The lower bound in the BPX optimality estimate turns out to be the fundamental assumption in the WMHB framework. The present article therefore establishes the foundation for provably good WMHB methods for all of the local refinement procedures we considered. Optimality of the WMHB preconditioner will be studied carefully in the second article [3] of this series.
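To make the structure of the preconditioner concrete, the following is a minimal numerical sketch of BPX in its multilevel diagonal scaling form, $B = \sum_{j=0}^{J} I_j\, \mathrm{diag}(A_j)^{-1} I_j^T$, on a 1D model Poisson problem with uniform refinement. The 1D setting, the function names, and the diagonal scaling variant are our own illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def stiffness_1d(n):
    """1D Poisson stiffness matrix for P1 elements with n interior nodes."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

def prolongation_1d(nc):
    """Linear interpolation from nc interior coarse nodes to 2*nc + 1 fine nodes."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2 * j, j] += 0.5      # fine node left of coarse node j
        P[2 * j + 1, j] = 1.0   # fine node coincident with coarse node j
        P[2 * j + 2, j] += 0.5  # fine node right of coarse node j
    return P

def bpx(J):
    """Assemble B = sum_j I_j diag(A_j)^{-1} I_j^T over levels j = 0..J."""
    n = lambda j: 2 ** (j + 1) - 1
    I = [np.eye(n(J))]          # I[j]: prolongation from level j to finest level J
    for j in range(J - 1, -1, -1):
        I.insert(0, I[0] @ prolongation_1d(n(j)))
    B = np.zeros((n(J), n(J)))
    for j in range(J + 1):
        Dinv = 1.0 / np.diag(stiffness_1d(n(j)))
        B += I[j] @ (Dinv[:, None] * I[j].T)    # scale rows of I_j^T by diag(A_j)^{-1}
    return B

J = 5
A = stiffness_1d(2 ** (J + 1) - 1)
ev = np.sort(np.linalg.eigvals(bpx(J) @ A).real)
evA = np.linalg.eigvalsh(A)
print("kappa(A)   =", evA[-1] / evA[0])   # grows like 4^J
print("kappa(B A) =", ev[-1] / ev[0])     # stays modest as J grows
```

On this model problem the preconditioned condition number remains small as $J$ increases while $\kappa(A)$ grows like $4^J$; this uniform bound is exactly what the optimality results assert. Note, however, that each component $I_j\,\mathrm{diag}(A_j)^{-1} I_j^T$ acts on an entire level, which is the global-support issue discussed above.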
Since WMHB methods are modifications of HB methods, the same optimal complexity per iteration carries over to the WMHB methods; this will be explored in the third article [2]. An interesting side result of the optimality of the BPX preconditioner, which will be contained in [3], is a proof of the $H^1$-stability of the $L_2$-projection restricted to finite element spaces under the same class of local red-green and red mesh refinement algorithms. This question is currently under intensive study in the finite element community due to its relationship to multilevel preconditioning. The existing theoretical results, mainly due to Carstensen [14] and Bramble-Pasciak-Steinbach [11], involve a posteriori verification of somewhat complicated mesh conditions after refinement has taken place; if such mesh conditions are not satisfied, the mesh must be redefined. In contrast, the stability result we obtained appears to be the first a priori $H^1$-stability result for the $L_2$-projection on finite element spaces produced by easily implementable 3D/2D refinement algorithms.

Acknowledgments. The authors thank R. Bank, P. Vassilevski, and J. Xu for many enlightening discussions.

REFERENCES

[1] B. Aksoylu, Adaptive Multilevel Numerical Methods with Applications in Diffusive Biomolecular Reactions, PhD thesis, Department of Mathematics, University of California, San Diego, La Jolla, CA, 2001.
[2] B. Aksoylu, S. Bond, and M. Holst, An odyssey into local refinement and multilevel preconditioning III: Implementation and numerical experiments, in Proceedings of the 7th Copper Mountain Conference on Iterative Methods, H. van der Vorst, ed., Copper Mountain, CO, 2002; SIAM J. Sci. Comput., Copper Mountain special issue, to be submitted.
[3] B. Aksoylu and M. Holst, An odyssey into local refinement and multilevel preconditioning II: Stabilizing hierarchical basis methods, SIAM J. Numer. Anal., (2002), submitted.
[4] R. E. Bank, Hierarchical basis and the finite element method, Acta Numerica, (1996), pp. 1–43.
[5] R. E. Bank, PLTMG: A Software Package for Solving Elliptic Partial Differential Equations, User's Guide 8.0, Software, Environments, Tools, SIAM, Philadelphia, PA, 1998.
[6] R. E. Bank, T. Dupont, and H. Yserentant, The hierarchical basis multigrid method, Numer. Math., 52 (1988), pp. 427–458.
[7] J. Bergh and J. Löfström, Interpolation Spaces: An Introduction, Springer Verlag, New York, 1976.
[8] J. Bey, Tetrahedral grid refinement, Computing, 55 (1995), pp. 271–288.
[9] F. Bornemann and H. Yserentant, A basic norm equivalence for the theory of multilevel methods, Numer. Math., 64 (1993), pp. 455–476.
[10] F. A. Bornemann, Interpolation spaces and optimal multilevel preconditioners, in Proceedings of the 7th International Conference on Domain Decomposition Methods, D. Keyes and J. Xu, eds., CONM Series of the AMS, Providence, 1994, pp. 3–8.
[11] J. H. Bramble, J. E. Pasciak, and O. Steinbach, On the stability of the L2 projection in H1(Ω), Math. Comp., (2000), accepted.
[12] J. H. Bramble, J. E. Pasciak, and J. Xu, Parallel multilevel preconditioners, Math. Comp., 55 (1990), pp. 1–22.
[13] J. H. Bramble and J. Xu, Some estimates for a weighted L2 projection, Math. Comp., 56 (1991), pp. 463–476.
[14] C. Carstensen, Merging the Bramble-Pasciak-Steinbach and the Crouzeix-Thomée criterion for H1-stability of the L2-projection onto finite element spaces, Math. Comp., (2000), accepted; Berichtsreihe des Mathematischen Seminars Kiel 00-1 (2000).
[15] W. Dahmen and A. Kunoth, Multilevel preconditioning, Numer. Math., 63 (1992), pp. 315–344.
[16] R. A. DeVore and G. G. Lorentz, Constructive Approximation, Grundlehren der mathematischen Wissenschaften 303, Springer Verlag, Berlin Heidelberg, 1993.
[17] R. A. DeVore and V. A. Popov, Interpolation of Besov spaces, Trans. Amer. Math. Soc., 305 (1988), pp. 397–414.
[18] M. Holst, Adaptive numerical treatment of elliptic systems on manifolds, Advances in Computational Mathematics, 15 (2001), pp. 139–191.
[19] S. Jaffard, Wavelet methods for fast resolution of elliptic problems, SIAM J. Numer. Anal., 29 (1992), pp. 965–986.
[20] B. Joe and A. Liu, Quality local refinement of tetrahedral meshes based on bisection, SIAM J. Sci. Comput., 16 (1995), pp. 1269–1291.
[21] W. F. Mitchell, Unified Multilevel Adaptive Finite Element Methods for Elliptic Problems, PhD thesis, Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, 1988.
[22] M. E. G. Ong, Uniform refinement of a tetrahedron, SIAM J. Sci. Comput., 15 (1994), pp. 1134–1144.
[23] P. Oswald, On function spaces related to finite element approximation theory, Zeitschrift für Analysis und ihre Anwendungen, 9 (1990), pp. 43–64.
[24] P. Oswald, Hierarchical conforming finite element methods for the biharmonic equation, SIAM J. Numer. Anal., 29 (1992), pp. 1610–1625.
[25] P. Oswald, On discrete norm estimates related to multilevel preconditioners in the finite element method, in Proceedings of the International Conference on Constructive Theory of Functions, Varna 1991, K. G. Ivanov, P. Petrushev, and B. Sendov, eds., Publ. House of the Bulgarian Academy of Sciences, 1992, pp. 203–214.
[26] P. Oswald, Multilevel Finite Element Approximation: Theory and Applications, Teubner Skripten zur Numerik, B. G. Teubner, Stuttgart, 1994.
[27] J. E. Pasciak, private communication.
[28] E. G. Sewell, Automatic Generation of Triangulations for Piecewise Polynomial Approximation, PhD thesis, Department of Mathematics, Purdue University, West Lafayette, IN, 1972.
[29] R. Stevenson, A robust hierarchical basis preconditioner on general meshes, Numer. Math., 78 (1997), pp. 269–303.
[30] E. A. Storozhenko and P. Oswald, Jackson's theorem in the spaces Lp(Rk), 0 < p < 1, Siberian Math. J., 19 (1978), pp. 630–639.
[31] H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, North-Holland Pub. Co., Amsterdam, New York, 1978.
[32] P. S. Vassilevski and J. Wang, Stabilizing the hierarchical basis by approximate wavelets, I: Theory, Numer. Linear Alg. Appl., 4 (1997), pp. 103–126.
[33] P. S. Vassilevski and J. Wang, Wavelet-like methods in the design of efficient multilevel preconditioners for elliptic PDEs, in Multiscale Wavelet Methods for Partial Differential Equations, W. Dahmen, A. Kurdila, and P. Oswald, eds., Academic Press, 1997, ch. 1, pp. 59–105.
[34] J. Xu, private communication.
[35] J. Xu, Theory of Multilevel Methods, PhD thesis, Department of Mathematics, Penn State University, University Park, PA, July 1989; Technical Report AM 48.
[36] J. Xu, Iterative methods by space decomposition and subspace correction, SIAM Rev., 34 (1992), pp. 581–613.
[37] H. Yserentant, On the multilevel splitting of finite element spaces, Numer. Math., 49 (1986), pp. 379–412.
[38] H. Yserentant, Two preconditioners based on the multi-level splitting of finite element spaces, Numer. Math., 58 (1990), pp. 163–184.
[39] H. Yserentant, Old and new convergence proofs for multigrid methods, Acta Numerica, (1993), pp. 285–326.
[40] S. Zhang, Multilevel iterative techniques, PhD thesis, Pennsylvania State University, 1988.
[41] X. Zhang, Multilevel Schwarz methods, Numer. Math., 63 (1992), pp. 521–539.