USING AN ENCOMPASSING PERIODIC BOX TO PERFORM NUMERICAL CALCULATIONS ON GENERAL DOMAINS
arXiv:1601.01080v2 [math.NA] 12 Apr 2016
PATRICK GUIDOTTI
Abstract. This paper shows how numerical methods on a regular grid in a box can be used to generate numerical schemes for problems in general smooth domains contained in the box with no need for a domain specific discretization. The focus is mainly be on spectral discretizations due to their ability to accurately resolve the interaction of finite order distributions (generalized functions) and smooth functions. Mimicking the analytical structure of the relevant (pseudodifferential) operators leads to viable and accurate numerical representations and algorithms. An important byproduct of the structural insights gained in the process is the introduction of smooth kernels (at the discrete level) to replace classical singular kernels which are typically used in the (numerical) representations of the solution. The new kernel representations yield enhanced numerical resolution and, while they necessarily lead to significantly higher condition numbers, they also suggest natural and effective ways to precondition the systems.
1. Introduction It is the primary goal of this paper to develop a framework which allows one to extend the benefits of numerical spectral methods from boxes to arbitrary geometry domains. The idea is to simulate problems on domains Ω ⊂ B located inside a periodicity box B = [−π, π]N by using spectral approximation of generalized functions (distributions). This is best illustrated in the case of boundary value problems, which are also an important application of the method. Consider the boundary value problem ( Au = f in Ω, Bu = g on ∂Ω, for some generic differential operator A and boundary operator B. It is no restriction to assume that the operators and the data be defined everywhere in the box B. One obtains a numerical approximation for the boundary value problem in the following manner. First discretize the periodm mN icity box B by a regular grid Gm = {xm points if working in dimension n ∈ N, j : j ∈ ZN } with 2 then, independently, discretize the boundary Γ of the domain Ω by a subset Γn = {y1 , . . . , yn } ⊂ Γ. Choose a discretization Am of the operator A which operates on the grid Gm and find a solution v m : Gm → R of Am v m = f m in Gm for a discretization of f . With that in hand, generate numerical approximations for functions ψkm : Gm → R, k = 1, . . . , n, in the kernel of the operator AΩ on Ω and try to adjust the solution v m by a linear combination n X wm,n = wkm,n ψkm k=1
of these kernel elements in order for um,n = v m + wm,n to satisfy a discretization B m,n um,n = g n of the boundary condition for a discretization g n : Γn → R of g. Choosing B to be the trace operator Key words and phrases. Spectral methods, meshless methods, numerical analysis, boundary value problems. 1
2
PATRICK GUIDOTTI
γΓ at first for ease of presentation, this can be done as follows. Approximate δyk for k = 1, . . . , n by its spectral representation δymk on the periodic grid Gm and insist that hδymj , um,n iqm = gjn , j = 1, . . . , n, where h·, ·iqm is the discrete duality pairing (scalar product) discretizing the continuous duality pairing h·, ·iDπ0 ,Dπ between periodic distributions and test functions. Details will be given in the rest of the paper. Following the strategy outlined above leads to a system for the unknown wm,n of the form n n X X hδymj , wm,n iqm = hδymj , ψkm iqm wkm,n = Mjk wkm,n = gjn − hδymj , v m iqm , j = 1, . . . , n, k=1
k=1
One can think of δym· as the discrete kernel of the trace operator γΓ . It is therefore possible to deal with a more general boundary operator B by deriving a “natural” numerical approximation Bymj of its distributional kernel for j = 1, . . . , n. This would lead to the system n X hBymj , wm,n iqm = hBymj , ψkm iqm wkm,n = gjn − hBymj , v m iqm , j = 1, . . . , n, k=1
In order to obtain a numerical method it remains to generate the kernel functions ψkm for k = 1, . . . , m. This can be done in many different ways. In order to, at first, make a connection explicit to pseudo-differential operators, again consider B = γΓ and proceed in the following manner. Take the spectral approximation δymk for k = 1, . . . , n and set ψkm = (Am )−1 δymk . Since the Dirac distribution is “supported” on the singleton {yk }, the function ψkm will indeed “lie” in the kernel of Am over Ω. Since these functions are “peaked” at different locations yk , they will be linearly independent. The matrix M in the system for the unknown wm,n is therefore given by Mjk = hδymj , (Am )−1 δymk iqm , j, k = 1, . . . , n. Latter can be recognized as the discrete counterpart of m(y, η) = hδy , A−1 δη i, y, η ∈ Γ, the distributional kernel of a pseudodifferential operator A−1 on the boundary curve Γ. This connection is made more precise in the rest of the paper and provides a framework in which to obtain analytical proofs for the numerical methods introduced. For implementation purposes, however, it is best to proceed in a somewhat different way when constructing the kernel functions ψkm . Instead of using the “rougher” Dirac distributions used above, it is better to replace them by smooth functions ϕy˜k which are supported outside of Ω with support “centered” at y˜k = yk + δνΓ (yk ) for δ > 0 where νΓ (yk ) is the unit outer normal to the boundary Γ at the point yk . After discretization this leads to the alternative matrix fjk = hδ m , (Am )−1 ϕm iqm , j, k = 1, . . . , n, M yj
y˜k
which is the discretization of a smoothing operator with kernel m(y, e η) = hδy , A−1 ϕη˜i, y, η ∈ Γ. As such, M will be easier to capture numerically (fast convergent expansion of its kernel function) but also badly conditioned (as a smoothing and thus compact operator with unbounded inverse – read less diagonally dominant). In spite of this, “natural” and effective preconditioning procedures can be devised which completely remove this drawback. The method above can be thought of as a fully discrete boundary integral method. As such it does not rely on the availability of an explicit analytic representation of the kernels involved and is therefore applicable to non-constant coefficient differential operators. Even for situations
NUMERICS IN A BOX
3
where analytic representations of the kernel are known, the singularity shifting/removal procedure employed above offers an effective and accurate numerical discretization method which completely avoids the need to find ways to numerically deal with the singularity of the kernel function. One of the main reason for developing the method is its applicability to the numerical computation of solutions to moving boundary value problems. The fact that the domain evolves in time would in general require continuous remeshing of the varying computational domain. In the approach presented here, the encompassing computational domain remains unchanged during the evolution and computation of the moving boundary is reduced to tracking the location of its discretization points. The rest of the paper is organized as follows. In the next section some preliminary results are obtained which highlight the main features of the underlying spectral approach to approximating generalized functions and test functions in the context of periodicity and for boundary value problems, in particular; in Section 3 details are given about the discretizations used in the concrete examples studied in Section 4. The following section is dedicated to the specifics of the numerical implementation and to numerical experiments which illustrate the main theoretical insights and the advantages of the proposed method. A brief conclusion ends the paper. 2. Preliminaries 2.1. Setup. Before working with the relevant discretizations, the stage is set by fixing the analytical context which will very much guide the numerical procedures developed in the rest of the paper. Let B = [−π, π)N be the periodicity box bounding the area of interest. Extensive use will be made of distributions and of test functions. Latter are periodic smooth functions belonging to one of the following useful spaces (2.1) Dπ (B) = Dπ = ϕ ∈ C∞ (RN ) ϕ is 2π-periodic m m m N Dπ (B) = Dπ = ϕ ∈ C (R ) ϕ is 2π-periodic (2.2) ∞ N D0 (B) = D0 = ϕ ∈ C (R ) supp(ϕ) ⊂⊂ B (2.3) where m ∈ N. The first space carries its standard locally convex topology generated by the family of seminorms {pm : m ∈ N} given by pm (ϕ) = sup k∂ α ϕk∞ , |α|≤m
the second is a Banach with respect to the norm pm , and the last carries the natural inductive limit-Fr´echet topology, i.e. the coarsest topology which makes the inclusions DK (B) = ϕ ∈ C∞ (RN ) supp(ϕ) ⊂ K ,→ D0 (B) continuous for all K = K ⊂⊂ B. Notice that DK (B) is endowed with the locally convex topology induced by the seminorms pm,K (·) = sup k∂ α · k∞,K , |α|≤m
where the additional subscript indicates that the supremum norm is taken over the set K. The space Dπ0 (B) = Dπ0 = u : Dπ → K u is linear and continous is then the space of K(= R, C)-valued distributions dual to Dπ . On L2π = L2π (B) = L2 (B) there is a natural orthonormal basis (ek )k∈ZN given by ek (x) =
1 eik·x , x ∈ B, k ∈ ZN , (2π)N/2
4
PATRICK GUIDOTTI
consisting of eigenfunctions of the periodic Laplacian −∆π . It is well-known that X F : L2π → l2 (Zn ), ϕ = ϕˆk ek → 7 (ϕ) ˆ k∈ZN k∈Zn
is an isometric isomorphism where Z ϕˆk = ϕ(x)¯ ek (x) dx = hϕ, e¯k i = (ϕ|ek ). B
In particular one has that kϕkL2π = k(ϕˆk )k∈N kl2 (Zn ) and Parseval’s identity Z X ˆ for ϕ, ψ ∈ L2 . (ϕ|ψ) = ˆ ψ) ϕ ψ¯ dx = ϕˆk ψˆk = (ϕ| π B
k∈Zn
Notice that the formulæ above use the notations h·, ·i and (·|·) for the duality pairing and the scalar product, respectively. The former is clearly motivated by the natural duality pairing between distributions and test functions h·, ·i : Dπ0 × Dπ → K, (u, ϕ) 7→ hu, ϕi = u(ϕ). Observe that, if ϕ ∈ Dπ , then ∂ α ϕ ∈ L2π for all α ∈ Nn and thus X X X α ϕ) e = \ ∂α ϕˆk ek = ∂ α ϕ = (∂ (ik)α ϕˆk ek , k k k∈ZN
k∈ZN
k∈ZN
with convergence in L2π , owing to well-known properties of the Fourier transform. Introducing the periodic Bessel potential spaces via X (1 + |k|2 )s u ˆ2k < ∞ , Hsπ = Hsπ (B) = u ∈ Dπ0 k∈Zn
for s ∈ R and u ˆk = hu, e¯k i = (u|ek ), it follows easily that X ϕˆk ek → ϕ as M → ∞, |k|≤M
in
Hsπ
for any s ≥ 0 if ϕ ∈ Dπ . By the well-known embedding Hsπ ,→ Dπm = Dπm (B),
(2.4)
valid for s > N/2 + m, it then follows that the convergence of the Fourier series actually takes place in the topology of Dπ . An important consequence of this fact is the validity of the following generalized Parseval’s identity X X X hu, ϕi = hu, ϕˆk ek i = hu, ek i ϕˆk = u ˜k ϕˆk , (2.5) | {z } N N N k∈Z
(u|ϕ) = (u|
k∈Z
X k∈ZN
ϕˆk ek ) =
=:˜ uk
X
k∈Z
(u|ek )ϕˆk = (ˆ u|ϕ) ˆ
(2.6)
k∈ZN
for u ∈ Dπ0 , ϕ ∈ Dπ , and (u|ek ) = hu, e¯k i = hu, e−k i. A distribution u ∈ Dπ0 is said to be of finite order m ∈ N if it admits an estimate of the form |hu, ϕi| ≤ c pm (ϕ), ϕ ∈ Dπm , for a non-negative constant c but not for m replaced by m − 1. It follows from a density argument combined with the embedding (2.4) that any finite order distribution belongs to H−s π for some finite s ≥ 0. The upshot of this is that (2.5) can be used to evaluate the action of a finite order distribution on a test function by a fast converging series since (˜ uk )k∈ZN is polynomially bounded and (ϕˆk )k∈ZN decays faster than the reciprocal of any polynomial in k. This, combined with the
NUMERICS IN A BOX
5
choice of appropriate discretizations, will be exploited later to derive highly accurate representations of various operators (not even necessarily supported on the discretization grid itself). Indeed many useful basic operations such as differentiation, integration, evaluation/interpolation are distributions of finite order. It also turns out that, for many interesting distributions u ∈ Dπ0 , it will be possible to compute their Fourier coefficients either exactly or in a highly accurate manner. 2.2. Simple Illustrative Examples. Consider first δx0 for x0 ∈ B which is a zero order distribution. Later δx0 will be discretized on a regular grid but x0 will be allowed to be any point in the domain B. Then it holds that X X (δx0 |ek )ek . hδx0 , ek i¯ ek = δ x0 = k∈ZN
k∈ZN
Indeed hδx0 , ϕi = hδx0 ,
X
ϕˆk ek i =
k∈ZN
X
hδx0 , ek ihϕ, e¯k i = h
k∈ZN
X
(δf ¯k , ϕi, x0 ) k e
k∈ZN
where (δf x0 )k = ek (x0 ). The convergence of this evaluation series X hδx0 , ϕi = ek (x0 )ϕˆk , k∈ZN
is clearly very fast and its coefficients are known either exactly or to a high degree of accuracy. This seemingly very simple observation will play a crucial role in the derivation of a highly accurate representation of higher dimensional kernels related to boundary value problems. Remark 2.1. While in this paper it will be enough to deal with the evaluation of smooth functions, such as test functions, in [1] modifications are presented (in a non-periodic context) which make it possible to retain good convergence properties also for piecewise smooth functions, another important class of functions in applications. The next example shows how the considerations of Subsection 2.1 provide an abstract framework in which to understand spectral methods (after discretization). Let ϕ ∈ Dπ and consider computing ∂ α ϕ(x0 ) = h(−1)|α| ∂ α δx0 , ϕi at a point x0 ∈ B. In this case ∂ α ϕ(x0 ) = h(−1)|α| ∂ α δx0 , ϕi =
X
αδ ) ϕ (−1)|α| (∂^ x0 k ˆk
k∈ZN
=
X
∂ α ek (x0 )ϕˆk =
k∈ZN
X
[(ik)α ϕˆk ]ek (x0 ).
k∈ZN
It is again clear that the main advantages lie in the fact that ϕ is smooth and that the Fourier coefficients of ∂ α δx0 are known exactly. The next is an example of integration. Take x0 < x1 in the one dimensional box B. Of interest is the computation (eventually numerically) the integral Z x1 I(ϕ) = ϕ(x) dx x0
between the end points given (which are not necessarily on a numerical grid). Since I ∈ Dπ0 is a zero order distribution, it is possible compute its Fourier coefficients Z x1 1 1 ikx1 ˜ Ik = hI, ek i = ek (x) dx = [e − eikx0 ], 1/2 ik (2π) x0
6
PATRICK GUIDOTTI
again obtaining an explicit formula. Then Z x1 X ϕ(x) dx = I˜k ϕˆk , x0
k∈Z
will provide a fast converging series representation. 2.3. A simple Boundary Value Problem. This section concludes with a simple example that will make the advantages and basic principle of this approach apparent. They will reappear in the higher dimensional context with the appropriate adjustments. Consider the following two point boundary value problem ( −∂xx u = f on (x0 , x1 ) ⊂ B, (2.7) u(xj ) = uj for j = 0, 1. Notice that it will be considered as a problem embedded in the periodicity box B (which will later be discretized by a regular grid). Assume that f ∈ Dπ be given along with uj ∈ R for j = 0, 1. Choose a function ψ ∈ Dπ satisfying ψˆ0 = 1, 1 − ψ ∈ D0 (B), supp(ψ) ⊂ [x0 , x1 ]c , and define Pψ f = f − fˆ0 ψ for the datum f and accordingly for any distribution in Dπ0 . This way, a function Pψ f is obtained with vanishing average which coincides with f on (x0 , x1 ). When applied to a general distribution u that is compactly supported inside the box, this operation produces a modified distribution Pψ u which coincides with the original u on its supp(u) if, without loss of generality, it is assumed that supp(u) ∩ supp(ψ) = ∅. Next define the operator Gπ acting on f via ( 0, k = 0, \ Gπ (f )k = fˆk k2 , k 6= 0 so that −∂xx Gπ (f ) = Gπ (−∂xx f ) = f − P0 (f ), for P0 f = fˆ0 e0 , the orthogonal projection onto average free functions. A solution of (2.7) can be looked for in the form u = Gπ Pψ f + v, where v satisfies ∂xx v = 0 and v(xj ) = uj − Gπ Pψ f (xj ), j = 0, 1. All that remains is to find two linearly independent elements v0 , v1 in the kernel of −∂xx on (x0 , x1 ) and look for v in the form v = α0 v0 + α1 v1 . In order for the boundary conditions to be satisfied, one needs that α0 v0 (xj ) + α1 v1 (xj ) = uj − Gπ Pψ f (xj ) =: βj , j = 0, 1. By choosing vk = Gπ (Pψ δxk ) for k = 0, 1, this leads to the matrix M = [vk (xj )]j,k=0,1 with Mjk = hδxj , Gπ (Pψ δxk )i,
(2.8)
which is a kind of Green’s function “M = M (xj , xk )”, j, k = 0, 1. The crucial observation is that all ingredients δxj , Gπ , Pψ δxk allow for spectral representations in the periodicity interval B (regardless of whether xj for j = 0, 1 are or are not grid points after discretization). The convergence is, however, limited by the fact that, in (2.8), δxj is of zero order and that Gπ (Pψ δxk ) is of limited smoothness (slightly better than H1π ). This, however, can be alleviated by replacing Pψ δxk by either Pψ δx˜k with x ˜k = xk + δν(xk ) for ν(xk ) = (−1)k+1 and k = 0, 1,
NUMERICS IN A BOX
7
or, even better, by Pψ ϕx˜k for a smooth test function ϕx˜k ∈ D0 supported in a neighborhood Uk of x ˜k with Uk ∩ (x0 , x1 ) = ∅ and Uk ∩ supp(ψ) = ∅ to obtain fjk = hδx , Gπ Pψ ϕx˜ i. M j k It is easily checked that M is invertible for x0 6= x1 . Taking x ˜k not too far from xk and ϕx˜k ' δx˜k , f ' M is also invertible. The upshot is, clearly, that M f allows for a fast converging it follows that M representation of its entries. To conclude this simple example one has that f−1 β, u = Gπ (Pψ f ) + [v0 v1 ]M for β = [β0 β1 ]> . Remark 2.2. It is to be observed that, after discretization, all basic ingredients δxj , Gπ , Pψ ϕx˜k , ψ will have highly accurate grid representations, even if xj , x ˜k do not lie on the grid. Owing either to the availability of exact Fourier coefficients or to their smoothness, the additional discretization error incurred when going to a finite dimensional representation is as small as can be hoped for. 3. Discretization 3.1. One Dimension. In order to rip the benefits of the above considerations the interval B1 = [−π, π) is discretized at m ∈ N (even) equidistant points (xm j )j=0,...,m−1 where 2π j, j = 0, . . . , m − 1. m This will be sometimes referred to as the grid Gm 1 of size m in dimension n = 1. As pointed out in [1], the choice of grid has to be complemented by an appropriate choice of corresponding quadrature rule q m = (qjm )j=0,...,m−1 such that Z m−1 X m ϕ(x) dx as m → ∞, ϕ ∈ Dπ , h1m , ϕm iqm = 1m ·qm ϕm = q m · ϕm = ϕm q → j j xm j = −π +
B1
j=0
for the constant function 1 with value 1 and for ϕm = PP (ϕ) = ϕ(xm j ) j=0,...,m−1 , the (physical space) projection of the test function ϕ on the grid. It is also required that the quadrature rule satisfy em ¯m j ·q m e k = δjk , −m/2 ≤ j, k ≤ m/2 − 1, for the basis vectors ej , j = −m/2, . . . , m/2 − 1, where again the superscript indicates projection (by evalutation) on the grid. Definition 3.1. A discretization pair (xm , q m ) on B1 satsfying the above properties is called faithful discretization. The trapezoidal rule, for which it holds that 2π qm = (1, . . . , 1), m has this property of preserving the duality pairing and the orthogonal structure of the continuous setting. Many basic, useful distributions, such as δx0 for any x0 ∈ [−π, π), cannot be directly evaluated at points (short of obtaining a vanishing projection for all non-grid points x0 ). It is then better to use an approximation based on Fourier coefficients and given by m/2−1
um = PF (u) =
X k=−m/2
m/2−1
u ˜k e¯m k =
X k=−m/2
u ˜k PP (¯ ek ), u ∈ Dπ .
8
PATRICK GUIDOTTI
The reason for this is that, in practice, one often has analytical knowledge of the coefficients u ˜k or the ability to compute them to a high degree of accuracy. Pm/2−1 m Remark 3.2. Observing that δxm0 = k=−m/2 ek (x0 )¯ em k and assuming that x0 = xj0 is one of the grid points, one has that m/2−1
(xm δxmm j ) j
=
X
0
m/2−1
ek (xm ek (xm j0 )¯ j )
k=−m/2
=
X
i(k−j)π ei(j0 −k)π ej0 (xm ej (xm k )¯ k )e
k=−m/2 m/2−1
= ei(j0 −j)π
X
ej0 (xm ej (xm k )¯ k )=
k=−m/2
m i(j0 −j)π m e ej0 ·qm e¯m j 2π
m = δjj 2π 0 since 2π 1 1 −ikπ+ik 2π j m e = √ ei(j−k)π eij(−π+ m k) = ei(j−k)π ej (xm ek (xm k ). j )= √ 2π 2π It is seen that PF (δx0 ) evaluates exactly (to the discrete Dirac function) if x0 is a grid point, while, for x0 ∈ [−π, π) \ Gm 1 , it has oscillatory character. In any case one has that
hδxm0 , ϕm iqm → hδx0 , ϕi = ϕ(x0 ) as m → ∞, for any ϕ ∈ Dπ , with fast convergence. Remark 3.3. The alternating point trapezoidal rule of quadrature given by q m = 2π m (2, 0, . . . , 2, 0) can also be used instead of the regular trapezoidal rule as it has been observed to have the required properties in [1]. Definition 3.4. The discrete Fourier transform Fm : Cm → Cm is defined by m Fm (v) = v ·qm e¯m k k=−m/2,...,m/2−1 , v ∈ C . Remark 3.5. As the discretization is faithful, Fm is an isometric isomorphism. In fact it is easy to prove that v ·qm w = Fm (v) · Fm (w) so that Parseval’s identity carries over exactly to the discrete setting. Notice that the standard Euclidean inner product is used in the right-hand side. Proposition 3.6. For a finite order distribution u ∈ Dπ0 and a test function ϕ ∈ Dπ , it can be shown (see [1, Theorem 4.2] and the considerations preceding it) that, given any M ∈ N, one has that 1 |hu, ϕi − um ·qm ϕm | ≤ c(M, u, ϕ) M . m Notice that, while this result is proved in [1] only for compactly supported distribution and compactly supported test functions, the same arguments apply in the current context since the compact support condition is not needed in the periodic context where no boundary is present and, hence, no boundary effects (read convergence slowdown due to boundary mismatch) can occur. Remark 3.7. At the continuous level, one can think of the series 1 X ik(x−y) i(x, y) = e , x, y ∈ [−π, π), 2π k∈Z
NUMERICS IN A BOX
9
as the (generalized, since it converges in the sense of distributions only) kernel i of the identity map on L2π since clearly Z Z π 1 X ikx π −iky ϕ(x) = e e ϕ(y) dy“ = ” i(x, y)ϕ(y) dy, ϕ ∈ Dπ . 2π −π −π k∈Z
In this context, a discretization which respects the duality pairing and the orthogonality structure as decribed above yields “natural” spectral discretizations for a variety of important operators which will be exploited later. In particular, it delivers such a discretization im for the identity map given by m/2−1
(δxm |δym )qm := hδxm , δ¯ym iqm =
X
m ek (x)¯ ek˜ (y)h¯ em ˜ iq m k , ek
˜ k,k=−m/2 m/2−1
=
X
ek (x)¯ ek˜ (y)δkk˜ =
˜ k,k=−m/2
1 2π
m/2−1
X
eik(x−y) = im (x, y), (3.9)
k=−m/2
which is clearly the truncation of the series representation of the kernel i itself. Notice that x, y need not be grid points and that the approximation is thus “grid blind” and the error incurred is caused only by truncation of the series and by evaluation of the exponential function at the points of interest. If the kernel is evaluated on the grid points only, then it coincides with the kernel of discrete identity map, i.e. with the identity matrix m im (xm j , xk ) = δjk .
3.2. Higher dimensions. In higher dimensions, the periodicity box B = BN is discretized analogously in each direction by equidistant points to obtain the grid m m Gm = Gm n = G1 × · · · × G1 , {z } | N -times
2π N
with corresponding quadrature rule q m = m 1m , where now, 1m is thought of as a vector of length mN . This way, a faithful discretization respecting duality pairing and orthogonality is obtained. In particular, it follows that hem ¯m ˜, ˜ iq m = δk k k ,e k for ek (x) = (2π)1N/2 eik·x for x ∈ RN , k ∈ ZN and, again, em k = ek Gm . Dirac delta functions are approximated by tensor products δxm0 = δxm1 ⊗ · · · ⊗ δxmN , 0
0
of the corresponding one dimensional representations j = 1, . . . , N where x0 = (xj0 )j=1,...,N . As far as test functions ϕx0 supported in a neighborhood of a point x0 ∈ B go, many choices can be made. The specifics will be given in the numerical experiments performed later. For now it is only important to know that such test functions can be given explicitly by an analytical formula which allows for accurate evaluation anywhere. Consider now a general pseudodifferential operator a(x, D) with symbol a(x, k) k∈ZN defined by X X 1 ik·x e a(x, k) ϕ ˆ = ek (x)a(x, k)ϕˆk , a(x, D)ϕ = k (2π)N/2 N N δxmj , 0
k∈Z
k∈Z
10
PATRICK GUIDOTTI
and where a(·, k) : B → C is assumed to be smooth and periodic for each k ∈ ZN . Its Schwartz kernel is given by X X 1 ka (x, y) = eik·(x−y) a(x, k) = ek (x)ek (−y)a(x, k), N (2π) N N k∈Z
k∈Z
for which one has that a(x, D)ϕ = hka (x, ·), ϕi for ϕ ∈ Dπ . More suggestively one can write that ka (x, y) = a(x, D)δy |δx , justified by the validity of the formal Parseval’s identity X X \ ek (x)ek (−y)a(x, k) a(x, k)ek (−y)e−k (x) = a(x, D)δy |δx = a(x, D)δy |δbx = k∈ZN
k∈ZN
If a(x, ·) is polynomially bounded (for each x), convergence in the sense of distributions can be established. For well-known classes of symbols [2], it can be shown that ka is smooth away from the diagonal [x = y], where cancellations are responsible for the faster convergence of the series. This is the case for general differential operators and the corresponding solutions operators appearing in common boundary value problems, for instance. It turns out that, what was observed above for a ≡ 1 (leading to the identity map) in one space dimension, is valid for general pseudodifferential operators. Theorem 3.8. Given a pseudodifferential operator a(x, D) with kernel ka , it is natural to approximate it by the truncated series expansion X 1 kam (x, y) = a(x, k)eik·(x−y) , N/2 (2π) k∈ZN m N N where Zm = k ∈ Z : ki = −m/2, . . . , m/2 − 1 for i = 1, . . . , n . In this case one has that kam (x, y) = am (x, D)δym |δxm qm , (3.10) −1 m for am (x, D) = Fm a (x, ·)Fm and
am (x, k) = a(x, k), k ∈ ZN m. Proof. In one dimension, the extension of (3.9) to general symbols amounts to X am (x, D)δym |δxm qm = Fm [am (x, D)δym ] Fm (δxm ) qm = ek (−y)a(x, k)ek (x) = kam (x, y), k∈ZN m
˜ ˜ . The rest follows from this and the fact since the term δkk˜ in (3.9) is simply replaced by a(x, k)δ kk that, in higher dimensions, one has that δxm = δxm1 ⊗ · · · ⊗ δxmn and ek (z) = ek1 (z1 ) ⊗ · · · ⊗ ekn (zn ), z ∈ Rn . This simple observation is quite useful and shows how to produce grid independent “spectral” approximations of operators through an approximation of their kernels. The structure of the kernel made apparent in (3.10) provides a blue print as to how to obtain numerical approximations to kernels of discrete operators K m by simply computing (δxm |K m δym )qm = (K m δym |δxm )qm (the two coincide in the real case, which always applies in the examples considered here). This is of interest when K m is, for instance, the numerical inverse of the discretization Am of an operator A for which no analytical inverse is available.
NUMERICS IN A BOX
11
Remark 3.9. For solution operators, the convergence of the series can, in general, be quite slow even if it is stronger than in the sense of distributions. This is due to the (mildly) singular behavior of the kernel on the diagonal and typically requires special care in the numerical evaluation process. Representation (3.10), however, suggests natural ways in which to do this by regularization of the kernel through δxm |am (x, D)ϕm y˜ , where Dπ 3 ϕy˜ ' δy˜ and y˜ ' y is conveniently located. In some cases, this modification can be carried out to obtain an alternate exact representation by a smooth kernel with no approximation involved. See the boundary value problem example in the next section. 4. Two Dimensional Examples Two examples in two space dimensions are presented here which illustrate the benefits of the proposed approach. 4.1. Integration. Consider a (smooth or piecewise smooth) domain Ω ⊂ B and the numerical task of approximating the integral Z ϕ(x) dx, ϕ ∈ Dπ , I(ϕ) = Ω
of a smooth function ϕ. Since I is a finite order distribution, one has that X X I= hI, ek i¯ ek = I˜k e¯k , k∈Z2
k∈Z2
and I(ϕ) = k∈Z2 I˜k ϕˆk . If I˜k can be computed /approximated accurately by I˜km on Gm , then a numerical quadrature I m for integration over Ω could be obtained by setting X I m (ϕm ) = I˜km ϕˆm k , P
k∈Z2m m m m where ϕˆm ¯k iq = Fm (ϕ)k can be computed using the Fast Fourier transform. The k = hϕ , e 2 notation Zm is used, as before, for the appropriate set of indeces corresponding to the discretization level considered. While it appears that the problem of computing I(ϕ) has simply been replaced by that of evaluating I(ek ) for k ∈ Z2m , the analytical knowledge of the bases functions and of their properties becomes useful. Indeed for k = (0, 0) one has Z Z Z x1 1 1 1 I˜0 = dx = div dx = (x1 ν1 + x2 ν2 ) dσΓ (x) 2π Ω 4π Ω 4π Γ x2 Z 2π 1 = γ1 (t)γ˙ 2 (t) − γ˙ 1 (t)γ2 (t) dt, (4.11) 4π 0 where Γ = ∂Ω, ν is the outward unit normal to Γ, and γ1 (·), γ2 (·) is a parametrization of Γ. Here it assumed for simplicity that Γ is connected. If, on the other hand, k 6= 0, then Z Z 1 1 ik·x ˜ Ik = e dx = −∆eik·x dx 2π Ω 2π|k|2 Ω Z Z 1 i ik·x =− ν · ∇e dσ = − [k · ν]eik·x dσΓ Γ 2π|k|2 Γ 2π|k|2 Γ Z 2π i k2 γ˙ 1 (t) − k1 γ˙ 2 (t) eik·γ(t) dt. (4.12) = 2π|k|2 0
Thus, given a representation of Ω via its boundary Γ, either as a list of points (from which the relevant geometric quantities can be computed) or via an analytic expression (often available even
12
PATRICK GUIDOTTI
in pratice), the computation reduces to that of a periodic one dimensional integral which can be performed to high accuracy as already noted earlier. The advantage of this approach is that the integrand lives on B, or on Gm , and only a simple discrete representation Γn of Γ is needed in order to perform the calculation. Notice that the grids Gm and Γn do not need to have any relation whatsoever to one another. In fact, when u is smooth, m can be kept small while n will need to be chosen large in order to get a good approximation of the highly oscillatory (in general) line integral. The advantage clearly lies in the line integral being one dimensional. 4.2. Boundary Value Problems. Let again Ω be a smooth domain inside the box B and consider the classical boundary value problems ( −∆u = f in Ω, (4.13) u=g on Γ, and ( −∆u = f ∂ν u = g
in Ω, on Γ,
where it can be assumed that the data are given as f : B → R and g : Γ → R. Using 1 G(x, y) = G(x − y) = log |x − y| for x, y ∈ R2 , 2π and the classical Green’s identity Z Z (u∆G − G∆u) dx = (u∂ν G − G∂ν u) dσΓ , Ω
(4.14)
(4.15)
Γ
solution representations can be obtained from Z Z Z u(x) = G(x, y)f (y) dy − G(x, y)∂ν u(y) dσΓ (y) + g(y)∂ν G(x, y) dσΓ (y), ZΩ ZΓ Z u(x) = G(x, y)f (y) dy + u(y)∂ν G(x, y) dσΓ (y) − G(x, y)g(y) dσΓ (y), Ω
Γ
once the boundary functions u and ∂ν u are recovered, depending on whether one considers the Neumann or Dirichlet problem, respectively. While the single and double layer potentials terms Z S(u)(x) = G(x, y)∂ν u(y) dσΓ (y), x ∈ R2 \ Γ, (4.16) ZΓ D(u)(x) = u(y)∂ν G(x, y) dσΓ (y), x ∈ R2 \ Γ. (4.17) Γ
are important to understand and will appear later for their mapping properties, the construction of solutions, both analytical and numerical, presented here will proceed slightly differently. The following facts [2] will be useful Z S(u)(x) = lim S(u)(˜ x) = G(x, y)∂ν u(y) dσΓ (y), x ∈ Γ, (4.18) Γ63x ˜→x
∂ν± S(u)(x) =
lim
Ω± 3˜ x→x
Γ
1 ∂ν(˜x) S(u)(˜ x) = ∓ u(x) + N (u)(x), x ∈ Γ, 2
(4.19)
where Ω+ = Ω and Ω− = R2 \ Ω, respectively, and the normal to Γ is extended continuously in a neighborhood of Γ, and Z N (u)(x) = u(y)∂ν(x) G(x, y) dσΓ (y), x ∈ Γ. Γ
NUMERICS IN A BOX
13
Observe that that the function G(·, y) is clearly a harmonic function in Ω for any y ∈ B \ Ω and for any fundamental solution G. Now consider the Dirichlet problem above and the shifted Neumann problem given by ( u − ∆u = f ∂ν u = g
in Ω, on Γ,
(4.20)
so as to make the problem uniquely solvable. For the Dirichlet problem therefore take GD π (x, y) to be the Green’s function for the periodicity box B characterized by its symbol ( k = 0, D ˆ π (k) = 0, G 1 2 |k|2 , 0 6= k ∈ Z , and, for the Neumann problem, GN π with symbol 1 , k ∈ Z2 . 1 + |k|2 R D If f is a mean zero function, i.e. if fˆ0 = 0, then GD π ∗ f = B Gπ (·, y)f (y) dy satisfies ˆN G π (k) =
−∆GD π ∗ f = f in Ω, as desired. One also has that GD ∗ −∆u = u − P0 (u), π where P0 = (·|e0 )e0 is the orthogonal projection onto the subspace consisting of constant functions. Similarly for the Neumann problem where N (1 − ∆)GN π ∗ f = f and Gπ ∗ u − ∆u = u. A solution to the boundary value problems can therefore be sought in the form Z u(x) = Gbπ ∗ f (x) + Gbπ (x, y)h(y) dσΓ (y), x ∈ Ω, b = D, N, Γ
where the second term is a “harmonic” function in Ω and can be thought of as a superposition along the boundary of functions in the kernel of ∆Ω or 1 − ∆Ω , respectively, which generate the desired boundary behavior for the solution. The function h can indeed be determined by the requirement that u = g or ∂ν u = g on the boundary Γ, respectively, that is by insisting that D g(x) = hδx , ui = hδx , GD π ∗ f i + hδx , Gπ ∗ (hδΓ )i, x ∈ Γ,
and that N g(x) = h−ν(x) · ∇δx , ui = −h∂ν(x) δx , GN π ∗ f i − h∂νx δx , Gπ ∗ (hδΓ )i, x ∈ Γ,
where Z H ∗ (hδΓ ) =
H(x, y)h(y) dσΓ (y), Γ
N for H = GD π , Gπ . The above is justified by the fact that
∂ν u(x) = hδx , ∂ν ui = hδx ,
2 X
νj ∂j ui =
j=1
2 X hνj δx , ∂j ui j=1
2 X = hνj (x)δx , ∂j ui = −hν(x) · ∇δx , ui j=1
= −h∂ν(x) δx , ui, x ∈ Γ.
14
PATRICK GUIDOTTI
This yields an equation ( g − hδ· , GD π ∗ f i, Mb (h) = gˇ = −g − hν· · ∇δ· , GN π ∗ f i,
b = D, b = N,
for an operator Mb on Γ given by Z mb (x, y)h(y) dσΓ (y), h : Γ → R,
Mb (h) =
(4.21)
Γ
with kernel function defined by ( mD (x, y) = hδx , (−∆π )−1 Pψ δy i, b = D, for x, y ∈ Γ, mb (x, y) = −1 mN (x, y) = h∂ν(x) δx , (1 − ∆π ) δy i, b = N,
(4.22)
for the Dirichlet and Neumann problem, respectively. Here the more transparent notation (−∆π )−1 N and (1 − ∆π )−1 are used for the operation of convolution with GD π and Gπ , respectively. Pψ u denotes the projection onto mean zero functions/distributions given by Pψ u = u − u ˆ0 ψ = u − u ˜0 ψ,
(4.23)
for a nonnegative function ψ ∈ Dπ satisfying supp(ψ) ⊂ Ωc and ψˆ0 = 1.
(4.24)
Remark 4.1. Using the suggestive notation dσΓ (y) = |hdy,
∂ i|dt, ∂t
∂ for hdy, ∂t i = γ(t) ˙ when y = γ(t) to evoke the validity of Z Z 2π v(y) dσΓ (y) = v γ(t) |γ(t)| ˙ dt, Γ
0
for any parametrization γ of Γ and for any smooth integrand v : Γ → R, allows for the factor ∂ ∂ |hdy, ∂t i| to be assimilated into the unknown function h to yield |hdy, ∂t i|h as the new unknown. This is particularly convenient when working at the discrete level, where one is only eventually interested in the function Z Z 2π ∂ b x 7→ Gπ (x, y)h(y) dσΓ (y) = Gbπ (x, y)h(y)|hdy, i| dt, Ω → R ∂t Γ 0 ∂ i|h are equivalent. and the determination of h or |hdy, ∂t
The kernels mD and mN in (4.22) have the form of those considered in the previous section and are of exactly the same type as in the earlier one dimensional toy boundary value problem. Just as in that case, δy can be replaced by δy˜ for y˜ = y + δν(y), y ∈ Γ, where δ > 0 can be chosen such that a tubular neighborhood TΓδ = {x ∈ B | d(x, Γ) < 2δ} of Γ can be found with well-defined coordinates (y, s) ∈ Γ × (−2δ, 2δ) satisfying x = y + sν(y) for y = Y (x) and s = d(x, Γ), e = {˜ where Y (x) denotes the point on Γ closest to x. This corresponds to replacing Γ by Γ y | y ∈ Γ} in the evaluation of the kernel (but not in that of the boundary integral). Notice that latter
NUMERICS IN A BOX
15
supp(ϕy˜) νΓ (y)
Γ y˜ −1
Pψ δ y i
y
'
Gπ (x, y) = hδx , (−4π )
e π (x, y) = hδx , (−4π )−1 Pψ ϕy˜i G
Ω
ϕy˜ ' δy˜ ' δy
Figure 1. A pictorial illustration of the proposed kernel construction. distinction is immaterial at the discrete level where the boundary measure is assimilated in the unknown function h as described above. An even better choice is obtained by replacing δy by ϕy˜ ∈ D0 with supp(ϕy˜) ⊂ Ωc and supp(ϕy˜) ∩ supp(ψ) = ∅.
(4.25)
The kernel modification is shown pictorially in Figure 1. fb with The upshot is that the operator Mb with singular kernel is replaced by the operator M smooth kernel given by ( m e D (x, y) = hδx , (−∆π )−1 Pψ ϕy˜i, b = D, for x, y ∈ Γ. −1 m e N (x, y) = h∂ν(x) δx , (1 − ∆π ) ϕy˜i, b = N, By choosing ϕy˜ localized enough (read close to a Dirac delta function) it follows that fb ' 0, Mb − M in the strong operator sense. Notice that the projection procedure (4.23) ensures that the support of Pψ ϕy˜ lies completely outside of Ω and does thus still generate functions in the kernel of ∆Ω . Remark 4.2. The operator Mb can be shown to be smoothing of one degree of differentiability in the Dirichlet case, and of none in the Neumann case. For a proof based on symbol analysis see e.g. [2]. Remark 4.3. While it is often convenient to work with an explicit fundamental solution for −∆ and use it in order to derive the necessary boundary kernels (to be used in a numerical implementation of boundary integral type), the approach described above does not rely on the explicit −1 knowledge of a Green’s function. Indeed at the discrete level, the kernel functions, GD π = (−∆π ) N −1 m −1 m −1 and Gπ = (1 − ∆π ) in the examples, can be replaced by (AD ) and (AN ) for any dism cretizations Am to the grid G of a differential operator A obtained by spectral or finite difference b b methods for b = D, N . In the above example Am would be a standard spectral or finite difference approximations of the periodic −∆ and 1 − ∆ operators on the box B. This opens the door to applying the method to nonconstant coefficient operators and to constant coefficient operators for which no explicit Green’s function or symbol is available. Next an illustrative analytical result is proved in the Dirichlet case which will play an important role in obtaining invertibility results for the numerical schemes derived later. Lemma 4.4. The operator MD defined in (4.21) with kernel mD given by (4.22) is invertible.
16
PATRICK GUIDOTTI
Proof. First notice that GD π is a fundamental solution on the space of mean zero distributions. It follows either from Poisson’s summation formula or from the theory of pseudodifferential operators [2] that GD π is smooth away from the diagonal [x = y] and that 1 GD log |x − y| = G(x, y), x ' y ∈ B, π (x, y) ' 2π i.e., it has the same singular behavior of the full space fundamental solution G. It indeed differs from it by a smooth kernel only. Now one has that X 1 X ik·x 1 1 1 e eik·x 2 e−ik·y − ψˆk mD (x, y) = e−ik·y − ψˆk = 2 2 2 4π |k| 4π |k| 2 2 k∈Z
06=k∈Z
and thus that mD (x, y) = GD π (x, y) −
1 4π 2
X
eik·x
06=k∈Z2
ψˆk = GD π (x, y) − η(x), |k|2
where η is a smooth function. Consequently one sees that Z Z Z D e S(h) = Gπ (·, y)h(y) dσΓ (y) − η(x) h(y) dσΓ (y) = Sπ (h) − η(x) h(y) dσΓ (y). Γ
Γ
Γ
This means that Se enjoys the same classical jump relations as S (and Sπ ) given by Z S(h) = G(·, y)h(y) dσΓ (y), Γ
i.e. it holds ( e e e = γΓ S(h), = γγ− S(h) γγ+ S(h) e e ∂ν + S(h) − ∂ν − S(h) = −h, Γ
(4.26)
Γ
where the superscripts ± indicate limits taken from within and from without Ω, respectively, just as in (4.18). It also follows (see e.g. [2]) that MD is Fredholm and that it continuously maps Hs (Γ) to Hs+1 (Γ) for any s ∈ R. It is therefore enough to show that MD is injective “on smooth functions”, i.e. that Z e γΓ S(h) = γΓ mD (·, y)h(y) dσΓ (y) = 0 =⇒ h ≡ 0, Γ
e for smooth h : Γ → R. Since S(h) is defined for all x ∈ B and is harmonic in B \ Γ, unique e ≡ 0. It follows that solvability of the Dirichlet problem in Ω yields that S(h) Ω e e e ∂ν + S(h) − ∂ν − S(h) = −∂ν − S(h) = −h. Γ
Γ
Γ
Now, in B \ Ω one has that e ∆S(h) = ψ − ψˆ0 e0
Z h(y) dσΓ (y), Γ
and, consequently, that Z Z Z e − − h(y) dσΓ (y) = ∂ν S(h)dσΓ (y) =
Z |B \ Ω| ˆ e ∆S(h) = ψ0 1 − h(y) dσΓ (y), Γ 4π 2 Γ Γ B\Ω Γ R since supp(ψ) ⊂ B \ Ω and ψˆ0 = 1. This, in turn, implies that Γ h(y) dσΓ (y) = 0 because |B \ Ω| < |B| = 4π 2 . For such a h, it therefore holds that e S(h) = Sπ (h).
NUMERICS IN A BOX
17
By construction it holds that Z Sπ (h) dx = 0, B
so that Poincar´e’s inequality yields Z Z Z Z ∇Sπ (h) 2 dx = c Sπ (h)2 dx = Sπ (h)2 dx ≤ c B\Ω
B
B
∇Sπ (h) 2 dx,
B\Ω
and entails that, if Sπ (h) B\Ω is constant, then it has to vanish identically. Since Z Z Z ∇Sπ (h) 2 dx + Sπ (h) ∂νΓ Sπ (h) dσΓ , 0=− Sπ (h)∆Sπ (h) dx = Γ | {z } B\Ω B\Ω =0
it therefore follows that Sπ (h) B\Ω ≡ 0. Finally this shows that ∂ν − Sπ (h) B\Ω = h = 0, Γ
thus establishing the claim.
fD is injective provided y˜ ' y and ϕy˜ ' δy˜ for y ∈ Γ. Proposition 4.5. The modified operator M fD has smooth kernel and is therefore compact. Given any smooth h 6≡ 0, Proof. The operator M e it follows from the previous lemma that γΓ S(h) 6≡ 0. Now it holds that hδx , (−∆π )−1 Pψ ϕy˜i → hδx , (−∆π )−1 Pψ δy˜i as ϕy˜ → δy˜, pointwise everywhere in x, y ∈ Γ (in fact, uniformly). On the other hand, one also has that δy˜ → δy as y˜ → y, uniformly in y ∈ Γ in the sense of distributions (or in the sense of measures) so that hδx , (−∆π )−1 Pψ δy˜i → hδx , (−∆π )−1 Pψ δy i as y˜ → y, pointwise for x 6= y, i.e., almost everywhere. Since the limiting kernel is integrable in view of its logarithmic behavior in the singularity and provides a bound for the approximating kernels, Lebesgue’s theorem yields that hδx , (−∆π )−1 Pψ δy˜i → mD (x, y) in L1 Γ, dσΓ (y) , uniformly in x ∈ Γ, and, in fact, uniformly in |x − y| ≥ ε for any ε > 0. Consequently fD (h) → MD (h) as Γ ˜ → Γ, M uniformly in khk2 = 1 due to the mild (in particular square integrable) singularity of mD on the diagonal. This then entails that fD ) = {0}, ker(M ˜ for Γ close enough to Γ.
This useful property will remain valid after discretization, which is just an additional approximation, even if, as will be demostrated in the numerical examples, the modified and the original boundary are not that close to each other. Remark 4.6. The result shows that the functions wy˜ : Ω → R given by Z wy˜(x) = GD π (x, z)Pψ ϕy˜ (z) dz for y ∈ Γ B
e ' Γ and ϕy˜ ' δy˜. This is intuitively clear for are “linearly independent” elements of ker(∆Ω ) if Γ e Γ = Γ and ϕy˜ ' δy since, then, wy are functions with singularities at different locations x = y, yielding a “diagonally dominant” kernel (or matrix, at the discrete level).
18
PATRICK GUIDOTTI
Remark 4.7. When dealing with the Neumann problem in the classical way, the fact that the normal derivative of S is not continuous across Γ as clearly indicated by (4.18), does require care in obtaining the correct numerical formulation. By using the kernel generation procedure described in this paper, however, the problem is completely avoided, since the relevant kernel m e N is smooth thanks to the replacement of δy by ϕy˜ in its construction. Remark 4.8. Notice that the proposed kernel construction effectively replaces a pseudo-differential operator of type −1 or type 0 for b = D or b = N , respectively, with an infinitely smoothing operator. Incidentally, an operator of type k is a bounded linear operator which maps, in the above context, L2 (Γ) to H−k (Γ). This has important consequences. One is that the approximating operator is compact with unbounded inverse (more so that the approximated operator), i.e. it does not enjoy the same “functional” mapping properties. At the numerical level this will be reflected in a significant increase in the condition number of the discretized operator. It will, however, be possible to use natural “rougher” discretizations of the same operator as preconditioners, thus completely curing the conditioning issues, while maintaining the highly desirable fast converging numerical discretizations to the approximate smooth kernel and, consequently, accuracy. 5. Numerical Implementation and experiments The periodc box B = [−π, π]2 is discretized by a uniform grid Gm of m2 points by discretizing each direction by j zjm = −π + 2π , j = 0, . . . , m − 1, m where z = x1 , x2 . The boundary value problems will be posed on the unit circle centered at the origin, i.e. Ω = B(0, 1) and the padding function ψ of (4.24) is defined by 2 1 [ 2 (x1 −π)] sin2 [ 12 (x2 −π)]
ψ(x1 , x2 ) = e−200 sin
.
While it is not analytically compactly supported away from Ω, it numerically vanishes outside a neighborhood of the boundary of the periodicity box B as show in the contourplot below. The boundary of the domain Ω is discretized by n equidistant points yj = cos(θj ), sin(θj ) , j = 0, ..., n − 1, where θj = 2π nj , yielding the set Γn . Wherever required, the analytical knowledge of the boundary Γ of Ω will be used to obtain numerical quantities such as, e.g., normal and tangent vectors. In some applications these might need to be replaced by their numerical counterparts or done away with altogether by choosing as centers for the required test-functions points on the grid which are roughly located along the (numerical) outward normal.
NUMERICS IN A BOX
19
Table 1. Relative error for the Fourier quadrature rule at different discretization levels. m
n
em,n
m
n
em,n
32
128
3.78e-03
128
128
4.30e-08
256
3.78e-03
256
4.33e-08
512
3.78e-03
512
4.33e-08
128
1.80e-04
128
3.07e-07
256
1.80e-04
256
2.58e-07
512
1.80e-04
512
2.58e-07
128
8.73e-06
128
1.81e-08
256
8.73e-06
256
4.05e-08
512
8.73e-06
512
4.05e-08
64
96
192
256
At the chosen discretization level m, the discrete Laplace operator −4m on the periodicity box is represented spectrally via discrete Fast Fourier transform Fm via −1 Fm diag (|k|2 )k∈Z2m Fm . The projection Pψ of (4.23) is discretized by Pψm (um ) = um −
Fm (um )(0, 0) m ψ , Fm (ψ m )(0, 0)
where um is a grid vector, i.e. a function defined on the grid Gm and ψ m is the evaluation of ψ on it. The testfunctions ϕy˜ supported about the point y˜ ∈ B \ Ω used in the set up of the kernel are chosen of two different types: symmetric and non-symmetric. The former are defined through 2 1 [ 2 (z1 −˜ y1 )] sin2 [ 12 (z2 −˜ y2 )]
ϕy˜(z) = e−α sin
, z∈B
and are discretized by evaluation on the grid Gm and setting α = 4m in order to make the testfunction “sharper” compatibly with the resolution power of the grid. For reasons to be explained later, non-symmetric and “sharper” testfunctions are useful. Given a point y ∈ Γ = S1 , let τ = τ (y) and ν = ν(y) denote the corresponding unit tangent and normal vector, respectively. Then consider ∂ν ϕy˜,
(5.27)
where the reader is reminded that y˜ = y + δν(y), y ∈ Γ. This type of testfunction, depicted in the contourplot above, has the added adavantage of automatically having vanishing average, and plays an important role in deriving efficient numerical discretizations (see Subsection 5.2.2). 5.1. Bulk Integrals. As a first example consider the domain integral as described in Section 4.1. Letting Ω = B(0, 2) and computing the Fourier coefficients I˜k of the distribution I = χΩ just as explained in (4.11)-(4.12) by using the trapezoidal rule for the angular parametrization of S22 , one obtains a quadrature rule for integration over Ω. Table 1 summarizes the results obtained when applying the quadrature to the function π u = cos( r2 ), r = |x| > 0. 4
20
PATRICK GUIDOTTI
It appears that the number of discretization points n has less of an impact on the accuracy than the bulk discretization level m as can be expected since the integrand is radially symmetric. 5.2. Dirichlet Problem. Consider now the homogeneous Dirichlet Problem on B(0, 1) and take the right hand side to be f ≡ 1 defined on whole square B. In a first step, a grid vector v m is determined satisfying −4m v m ≡ 1. This can be done simply by taking −1 m v m = Fm diag (gπm (k))k∈Z2m Fm Pψm (1m ) =: Gm π (1 ), where 1m is the constant grid function with value 1 and ( 0, if k = (0, 0), m gπ (k) = |k|−2 , if k ∈ Z2m \ {(0, 0)}. Next the boundary weight vector wn is determined such that n
m m X m wkn Gm δyj , v + π (ϕy˜k ) q m = 0 for j = 1, . . . , n. k=1
This leads to a system of equations for the entries of wn characterized by the matrix M with entries
m Mjk = δymj , Gm π (ϕy˜k ) q m , j, k = 1, . . . , n, following the blueprint laid out in the previous section. It can be viewed as being close to the spectral discretization n 1 k m (x, y) = hδxm , (−4m )−1 Pψm (δym ˜ )iq m , x, y ∈ Γ ⊂ S .
of the smooth kernel k(x, y) = hδx , (−4)−1 Pψ (δy˜)i, x, y ∈ S1 . As mentioned earlier this discretization k m is actually independent of the grid Gm and can be evaluated anywhere in B × B, in particular on Γn × Γn . It follows from Proposition 4.5 that M is invertible for appropriate choices of y˜ for y ∈ Γn and of testfunctions ϕy˜. Once the grid vector wm is found, a numerical solution of the Dirichlet problem is given by n X m m,n m m m rΩ u = rΩ v + wkn Gm π (ϕy˜k ) , k=1 m rΩ
where denotes the restriction (of functions defined on B or of vectors defined on the grid Gm ) m to G ∩ Ω. The numerical results presented in Table 2 provide information about the relative l2 and l∞ errors em,n and em,n ∞ computed as follows 2 em,n = p
m m,n m krΩ u − rΩ uklp for p = 2, ∞, m krΩ uklp
This is done for various combined discretization levels (m, n), various distances of y˜ from y ∈ Γn , and types of testfunctions in Tables 2–4. Recorded is also the condition number of the obtained matrix M . The results with fixed distance δ = 0.4 are summarized in Table 2. It appears clearly that accuracy tends to grow for a given grid parameter m with increasing number of boundary discretization points n. This happens until the boundary discretization becomes too fine compared to the given, fixed discretization of the periodicity box. Notice that, if the parameter n is kept fixed, the accuracy improves also as a function of the discretization size m. Similarly gains stop accruing when the box discretization becomes too fine compared to the fixed boundary resolution. As the operator approximated by M is of negative order 1, the condition number of M is expected
NUMERICS IN A BOX
21
Table 2. Numerical Results for the Dirichlet Problem, δ = 0.4 m
n
em,n ∞
em,n 2
cond(M )
m
n
em,n ∞
em,n 2
cond(M )
64
64
2.12e-05
3.18e-05
2.6e+03
512
128
2.17e-11
1.71e-11
1.7e+06
80
1.33e-05
2.90e-05
1.7e+04
144
1.61e-11
4.83e-12
8.3e+06
96
1.10e-05
2.75e-05
1.1e+05
160
3.01e-11
7.16e-12
3.9e+07
112
6.11e-05
1.42e-04
1.1e+08
64
1.30e-06
1.10e-06
2.5e+03
64
1.03e-06
1.01e-06
2.5e+03
80
5.44e-08
4.74e-08
1.3e+04
80
1.94e-07
1.62e-07
1.3e+04
96
2.43e-09
2.14e-09
6.9e+04
96
2.07e-07
9.92e-08
7.0e+04
112
1.14e-10
9.89e-11
3.5e+05
64
1.26e-06
1.10e-06
2.5e+03
128
5.48e-12
4.68e-12
1.7e+06
80
5.43e-08
4.74e-08
1.3e+04
144
2.75e-13
2.24e-13
8.3e+06
96
2.32e-09
2.13e-09
6.9e+04
160
2.38e-14
6.39e-15
3.9e+07
112
1.17e-10
1.01e-10
3.5e+05
176
2.44e-14
6.30e-15
1.8e+08
128
256
1024
to grow linearly in the discretization size. Indeed increasing n enlarges the condition number. This effect is, however, compounded by the matrix M becoming less and less diagonally dominant as the boundary discretization points become denser while the support of the testfunctions remains unchanged for fixed discretization level m. Notice that, for fixed n, the condition number of M remains virtually unchanged as m changes. The “optimal” value (for the specific choice of testfunction type and support size) was chosen based on the results found in Table 3 where the arbitrary but still representative choice of m = 256 is made and a variety of discretization levels n are shown. The distance is steadily increased until it no longer leads to an improvement in the approximation quality. It can be seen that the accuracy improves with distance and that optimal distance decreases as the box discretization gets finer, thus allowing for a stronger resolution power and, consequently, a better approximation of the testfunctions. There appears to be a trade-off between condition number of M and accuracy of the outcome, where the best accuracy is obtained at the cost of a high condition number. In perfect agreement with the theoretical analysis, the condition number of M is the least when using Dirac delta functions located along the discrete boundary Γn in the numerical representation of the kernel. This is clearly evident in the data shown in Table 4 for two choices of discretization level, m = 128, 256. Again the low condition number comes at the price of a reduced accuracy (if the comparison is carried out at the same discretization level m). 5.2.1. Preconditioning. Given the dramatic increase in condition number resulting from the use of the proposed smoother kernels, it is natural to ask whether it can be mitigated by some preconditioning procedure. Denote by Mϕ and Mδ the matrix obtained discretizing the smooth kernel and the singular kernel, respectively, i.e.
m Mϕ = δymj , Gm π ϕy˜k q m , j, k = 1, . . . , n, and
m Mδ = δymj , Gm π δyk q m , j, k = 1, . . . , n. It seems natural to use the better conditioned but “rough” approximation Mδ as a preconditioner for the highly accurate but badly conditioned Mϕ . In Table 5 the condition numbers of Mϕ , Mδ ,
22
PATRICK GUIDOTTI
e for m = 8 Table 3. Dependence on δ = dist(Γ, Γ) n
δ
em,n ∞
em,n 2
cond(M )
n
δ
em,n ∞
em,n 2
cond(M )
64
0.15
9.06e-04
8.69e-04
9.6e+01
80
0.5
2.59e-09
2.09e-09
6.2e+04
0.2
2.28e-04
2.18e-04
1.9e+02
0.6
1.54e-10
1.05e-10
2.7e+05
0.3
1.58e-05
1.45e-05
7.1e+02
0.7
3.84e-11
1.12e-11
1.2e+06
0.4
1.26e-06
1.10e-06
2.5e+03
0.8
2.95e-11
1.06e-11
4.9e+06
0.5
1.15e-07
9.33e-08
8.3e+03
0.15
6.10e-06
1.02e-05
3.9e+03
0.6
1.20e-08
8.90e-09
2.7e+04
0.2
4.13e-08
3.81e-08
1.9e+04
0.7
1.42e-09
9.41e-10
8.6e+04
0.3
1.01e-10
1.02e-10
4.3e+05
0.8
1.80e-10
9.65e-11
2.6e+05
0.4
1.61e-11
4.83e-12
8.3e+06
0.9
6.59e-11
2.09e-11
7.9e+05
0.15
2.85e-08
1.40e-08
2.9e+04
0.15
2.42e-04
2.06e-04
2.1e+02
0.2
3.95e-10
3.63e-10
2.5e+05
0.2
4.24e-05
3.77e-05
5.1e+02
0.3
1.04e-13
7.99e-14
1.6e+07
0.3
1.40e-06
1.24e-06
2.7e+03
0.35
1.94e-14
2.96e-15
1.3e+08
0.4
5.43e-08
4.74e-08
1.3e+04
80
144
192
Table 4. Kernel based on Dirac delta functions supported along Γ. n
cond(M )
em,n ∞
em,n 2
4.93e-02 256
96
14.34
4.43e-02
4.67e-02
4.06e-02
2.72e-02
112
18.06
3.46e-02
3.46e-02
21.16
2.59e-02
1.57e-02
124
21.49
2.74e-02
2.51e-02
112
26.01
1.25e-02
7.50e-03
144
25.58
2.32e-02
1.81e-02
128
31.88
7.06e-03
2.31e-03
160
29.83
2.17e-02
1.36e-02
144
38.78
1.05e-02
7.72e-03
176
36.52
1.58e-02
1.03e-02
64
8.42
8.27e-02
9.57e-02
192
42.51
1.20e-02
7.80e-03
80
11.26
6.09e-02
6.61e-02
208
47.22
1.13e-02
5.66e-03
m
n
cond(M )
em,n ∞
128
64
10.9
5.49e-02
80
15.05
96
256
em,n 2
m
and C = Mδ−1 Mϕ are shown for a few discretization levels. They clearly point to an enormous benefit of preconditing. The plots in Figure 2 gives a more visual characterization of the effect of preconditioning on the diagonal dominance of the corresponding matrix. It can therefore be concluded that smoother kernels lead to higher order resolutions and more accurate numerical results at the cost of an apparent increase in condition number. Latter can, however, be completely avoided by a simple and natural preconditioning procedure.
NUMERICS IN A BOX
23
Figure 2. Contour plot of Mϕ , Mδ , and C = Mδ−1 Mϕ for m = 256, n = 128, and δ = 0.4. Table 5. Preconditioning effect of Mδ−1 on Mϕ when δ = 0.4. (m, n)
cond(Mϕ )
cond(Mδ )
cond(Mδ−1 Mϕ )
(128, 64)
2.51e+03
1.09e+01
5.24e+00
(256, 128)
1.72e+06
2.15e+01
1.01e+01
(512, 256)
4.02e+11
4.27e+01
1.96e+01
5.2.2. Effective Numerical Implementation. The necessity to project a datum onto the subspace of mean zero functions in the above procedure effectively destroys the translation invariance of the constant coefficients equation on the periodic box. This makes it necessary to compute a box solution for each entry of the matrix M . While it was chosen to illustrate the ideas using testfunctions ϕy˜ approximating Dirac distributions δy in order to harvest the benefits of the theoretical analysis ensuring injectivity (and thus invertibility) of M , it is clear that other choices are possible, such as normal derivatives of testfunctions. These are particularly suited since they are mean zero functions supported in a small neighborhood of their “center-point”. As such they do not require to be projected onto the mean free subspace. It is therefore enough to compute (−∆π )−1 ∂ν(y) ϕy˜ =
2 X
νj (y)(−∆π )−1 ∂j ϕy˜
j=1
for one point y ∈ Γ only since (−∆π )−1 ∂j ϕy˜+v = (−∆π )−1 τv (∂j ϕy˜) = τv (−∆π )−1 ∂j ϕy˜, j = 1, 2,
24
PATRICK GUIDOTTI
where τv u = u(· − v) is the translation of a periodic function u. This also gives insight into the “circulant” structure of the matrix M . It is also possible to replace the test-function centers {˜ y : y ∈ Γ} by nearby or closest (box) grid points in Gm so that the translations required to obtain the kernel from the knowledge of, say, (−∆π )−1 ∂ν(y1 ) ϕy˜1 , can be implemented efficiently (i.e. in physical space). Remark 5.1. Notice that, if ∆ is replaced by a more general elliptic non-constant coefficient differential operator, the kernel construction given above is still viable and would deliver a purely numerical boundary integral method which does not rely on the explicit analytical knowledge of a fundamental solution for the differential operator. It even allows replacing the “discrete” fundamental solution by a smooth kernel which can more accurately be captured numerically. Remarkably this can be done at effectively not cost due to the availability of the natural preconditioning procedure described above. Remark 5.2. The proposed construction of smooth kernels also suggests that iterative parallelized methods can be used in the computation of the entries of the matrix M with a small number of iterations in the case of a non-constant coefficient differential operator A, at least when the coefficients vary smoothly. This is due to the fact that the building blocks Aπ −1 ϕy˜ will be locally close to each other thus providing excellent initial guesses for an iterative solver. 5.2.3. Kernel Functions. Ultimately the accuracy of the method rests on its ability to faithfully compute linearly independent functions in the kernel of the Laplacian ∆D Ω on the domain Ω. These are known explicitly for Ω = B(0, 2) and given by r ψk (r, θ) = ( )k eikθ , r ∈ [0, 2], θ ∈ [0, 2π), k ∈ N, 2 in polar coordinates. Using the method described above, it is possible to compute a numerical approximation of these functions defined on Gm . Tables 6 and 7 give the relative errors observed for the first 33 kernel functions at two distinct discretization levels. Table 6. Resolution of the first 33 kernel functions for m = 128, n = 80, and δ = 0.4. `∞ -err
Table 6. Resolution of the first 33 kernel functions for m = 128, n = 80, and δ = 0.4.

         ℓ∞-err     ℓ2-err              ℓ∞-err     ℓ2-err              ℓ∞-err     ℓ2-err
  ψ1     1.84e-07   8.11e-08   ψ12     1.08e-05   9.60e-06   ψ23     1.43e-03   9.88e-04
  ψ2     1.72e-07   9.16e-08   ψ13     2.14e-05   1.39e-05   ψ24     2.05e-03   1.57e-03
  ψ3     3.45e-07   2.08e-07   ψ14     3.65e-05   2.28e-05   ψ25     3.68e-03   2.36e-03
  ψ4     5.99e-07   3.60e-07   ψ15     5.80e-05   3.13e-05   ψ26     4.40e-03   3.51e-03
  ψ5     1.21e-06   5.73e-07   ψ16     7.81e-05   5.81e-05   ψ27     6.87e-03   5.60e-03
  ψ6     1.23e-06   8.65e-07   ψ17     1.10e-04   7.28e-05   ψ28     1.17e-02   9.09e-03
  ψ7     2.53e-06   1.32e-06   ψ18     1.60e-04   1.04e-04   ψ29     1.97e-02   1.32e-02
  ψ8     4.29e-06   2.66e-06   ψ19     2.99e-04   1.72e-04   ψ30     2.76e-02   2.14e-02
  ψ9     5.50e-06   2.92e-06   ψ20     3.10e-04   2.63e-04   ψ31     3.84e-02   3.08e-02
  ψ10    5.59e-06   4.38e-06   ψ21     6.44e-04   4.12e-04   ψ32     4.99e-02   4.63e-02
  ψ11    1.26e-05   6.32e-06   ψ22     1.07e-03   6.35e-04   ψ33     9.99e-02   7.05e-02
Table 7. Resolution of the first 33 kernel functions for m = 512, n = 256, and δ = 0.4.

         ℓ∞-err     ℓ2-err              ℓ∞-err     ℓ2-err              ℓ∞-err     ℓ2-err
  ψ1     1.24e-13   1.09e-14   ψ12     1.57e-13   5.68e-14   ψ23     6.23e-13   2.63e-13
  ψ2     9.17e-14   1.10e-14   ψ13     1.90e-13   5.43e-14   ψ24     7.15e-13   4.11e-13
  ψ3     1.29e-13   1.63e-14   ψ14     1.34e-13   5.18e-14   ψ25     9.98e-13   4.19e-13
  ψ4     1.23e-13   1.90e-14   ψ15     2.03e-13   6.65e-14   ψ26     1.25e-12   5.38e-13
  ψ5     1.22e-13   2.00e-14   ψ16     2.73e-13   8.48e-14   ψ27     1.52e-12   6.23e-13
  ψ6     1.04e-13   2.24e-14   ψ17     2.65e-13   9.30e-14   ψ28     1.48e-12   7.26e-13
  ψ7     1.31e-13   2.58e-14   ψ18     3.68e-13   1.15e-13   ψ29     1.99e-12   9.46e-13
  ψ8     1.14e-13   2.87e-14   ψ19     3.73e-13   1.16e-13   ψ30     1.80e-12   1.04e-12
  ψ9     1.23e-13   3.03e-14   ψ20     4.03e-13   2.01e-13   ψ31     2.77e-12   1.27e-12
  ψ10    1.10e-13   3.18e-14   ψ21     5.31e-13   1.82e-13   ψ32     3.38e-12   1.80e-12
  ψ11    1.74e-13   3.64e-14   ψ22     4.84e-13   2.58e-13   ψ33     3.50e-12   1.83e-12
5.3. Neumann Problem. Next, using the same notations and discretization procedure, consider the Neumann problem

(5.28)   u − ∆u = f  in Ω = B(0, 2),    ∂_ν u = 0  on Γ = ∂B(0, 2),

for f(x) = cos(π r/2)(1 + π²/4) + (π/2) sin(π r/2)/r, where r = √(x_1² + x_2²) and x ∈ B. This problem has the exact solution u given by u(x) = cos(π r(x)/2), x ∈ Ω. In order to show that the method is robust, in the sense that it does not depend on the exact choices of its ingredients, a different cutoff function is used in order to modify the right-hand side f into a doubly periodic function which fits the periodic framework. More specifically, take

  ψ(x) = (1/2) (1 + tanh(−(5/2) [r² − (π − 0.2)²])),

which essentially vanishes close to the boundary of B and takes the value 1 on Ω. Replace then f by f̃ = fψ to obtain a periodic function which coincides with f on Ω. In numerical experiments, this is clearly performed on the grid, i.e. by replacing f^m by f̃^m = ψ^m f^m. The solution procedure is parallel to that employed for the Dirichlet problem. First the function

  v^m = (1^m − ∆^m)^{-1} f̃^m = F_m^{-1} diag( 1/(1 + |k|²) )_{k ∈ Z_m²} F_m(f̃^m)

is computed. Then the kernel matrix M is obtained as
  M_jk = ⟨−∂_{ν(y_j)} δ_{y_j}, (1^m − ∆^m)^{-1} ϕ^m_{ỹ_k}⟩_{q_m},   j, k = 1, . . . , n,

where

  (−∂_{ν(y_j)} δ_{y_j})^m = −ν_1(y_j) (δ′_{y_{j1}})^m ⊗ δ^m_{y_{j2}} − ν_2(y_j) δ^m_{y_{j1}} ⊗ (δ′_{y_{j2}})^m

is used as a discretization of the normal derivative operator at the point (y_{j1}, y_{j2}) = y_j ∈ Γ_n. Finally the weight vector w^n is determined by solving

  ⟨−∂_{ν(y_j)} δ_{y_j}, v^m + Σ_{k=1}^{n} w^n_k (1^m − ∆^m)^{-1} ϕ^m_{ỹ_k}⟩_{q_m} = 0,   j = 1, . . . , n,

i.e. z + M w^n = 0, where z_j = ⟨−∂_{ν(y_j)} δ_{y_j}, v^m⟩_{q_m}.
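A minimal sketch of the periodic box solve entering this procedure is given below (NumPy, for illustration only). It covers the solve v^m = (1^m − ∆^m)^{-1} f̃^m together with the cutoff, while the bumps ϕ_ỹ and the discrete pairings defining M and z are omitted; the FFT normalization and wavenumber ordering are NumPy's standard conventions.

    import numpy as np

    m = 256
    x = -np.pi + 2 * np.pi * np.arange(m) / m
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(X, Y)

    # right-hand side of (5.28) and the tanh cutoff making it doubly periodic;
    # sin(pi*r/2)/r is continued by its limit pi/2 at r = 0
    with np.errstate(invalid="ignore", divide="ignore"):
        f = np.cos(np.pi * r / 2) * (1 + np.pi**2 / 4) \
            + (np.pi / 2) * np.where(r > 0, np.sin(np.pi * r / 2) / r, np.pi / 2)
    psi = 0.5 * (1 + np.tanh(-2.5 * (r**2 - (np.pi - 0.2)**2)))
    f_tilde = psi * f

    # Fourier multiplier 1/(1 + |k|^2) on the integer frequencies of the box
    k = np.fft.fftfreq(m, d=1.0 / m)            # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    v = np.fft.ifft2(np.fft.fft2(f_tilde) / (1 + KX**2 + KY**2)).real
    # v is the particular box solution; the boundary correction built from the
    # bumps phi_y~ (omitted here) adjusts its normal derivative on Gamma.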
Table 8. Numerical experiments for the Neumann problem (5.28).

  m    n    δ     cond(M)    e^{m,n}_∞   e^{m,n}_2      m    n    δ     cond(M)    e^{m,n}_∞   e^{m,n}_2
  32   32   0.3   3.31e+00   4.84e-03    2.97e-03       128  64   0.3   2.82e+01   2.59e-04    2.55e-04
            0.4   6.04e+00   8.67e-03    4.94e-03                 0.4   9.83e+01   2.12e-05    2.02e-05
            0.5   9.61e+00   3.99e-03    3.86e-03                 0.5   3.27e+02   6.24e-06    4.44e-06
  48   32   0.3   9.98e+00   1.31e-02    1.20e-02       256  64   0.3   2.84e+01   2.65e-04    2.56e-04
            0.4   2.85e+01   1.22e-02    9.84e-03                 0.4   9.82e+01   1.91e-05    1.84e-05
            0.5   1.79e+02   3.37e-03    2.35e-03                 0.5   3.26e+02   1.53e-06    1.47e-06
  48   48   0.3   3.14e+00   2.18e-02    2.08e-02       256  128  0.3   2.46e+03   3.41e-08    3.34e-08
            0.4   5.45e+00   5.58e-03    5.57e-03                 0.4   3.33e+04   4.41e-10    1.75e-10
            0.5   9.38e+00   2.39e-03    2.19e-03                 0.5   4.08e+05   5.96e-10    3.87e-10
  64   48   0.3   9.36e+00   1.70e-03    1.36e-03       512  128  0.3   2.46e+03   3.43e-08    3.35e-08
            0.4   2.33e+01   5.51e-04    3.24e-04                 0.4   3.33e+04   2.65e-10    1.93e-10
            0.5   5.35e+01   2.94e-04    1.42e-04                 0.5   4.08e+05   1.05e-10    3.95e-11
  64   64   0.3   2.90e+01   1.38e-04    5.22e-05       512  256  0.3   1.88e+07   1.76e-10    7.73e-11
            0.4   9.89e+01   6.46e-04    5.20e-04                 0.4   3.88e+09   1.76e-10    7.72e-11
            0.5   3.48e+02   3.98e-04    3.53e-04                 0.5   6.47e+11   1.76e-10    7.72e-11
Results of numerical experiments similar to those performed for the Dirichlet problem are summarized in Table 8.

Remark 5.3. Notice that in all numerical experiments, radially symmetric functions were used. One reason is that radial symmetry is not readily compatible with periodicity, in that such functions cannot be represented with very few periodic modes. Another is that explicit formulæ are available.

Remark 5.4. While it might appear that, in the construction of the kernel matrix M, one needs to solve n problems in the discretized periodicity box, this is not always the case. As for the Dirichlet problem, the operator 1 − ∆ is translation invariant. It follows that it is enough to solve one such problem, e.g. for k = 1, since all other solutions are translates of the solution for k = 1. This is true because the datum ϕ_{ỹ_k} is a translate of ϕ_{ỹ_1}. To make sure that the translation is compatible with the grid G_m, the theoretical location ỹ = y + δν_Γ(y) would have to be replaced by the closest grid point in G_m (for instance); a sketch of this construction is given below.
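The following sketch illustrates the translation argument: a single box solve yields the kernel for the first (grid-snapped) center, and the kernels for all other centers are obtained by circular shifts on the grid. The Gaussian-type bump profile and the function name are assumptions for illustration, not taken from the paper.

    import numpy as np

    def box_kernels_by_translation(m, centers_idx, delta=0.4):
        # centers_idx: list of (i, j) grid indices of the snapped centers y~_k.
        x = -np.pi + 2 * np.pi * np.arange(m) / m
        X, Y = np.meshgrid(x, x, indexing="ij")
        i0, j0 = centers_idx[0]
        r2 = (X - x[i0])**2 + (Y - x[j0])**2
        phi = np.exp(-r2 / delta**2)                 # assumed smooth bump at the first center
        k = np.fft.fftfreq(m, d=1.0 / m)
        KX, KY = np.meshgrid(k, k, indexing="ij")
        K1 = np.fft.ifft2(np.fft.fft2(phi) / (1 + KX**2 + KY**2)).real
        # since the FFT solve commutes with circular shifts, every other kernel
        # is a grid translation of K1
        return [np.roll(K1, (i - i0, j - j0), axis=(0, 1)) for (i, j) in centers_idx]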
6. Conclusions. An effectively meshless approach to boundary value problems in general geometry domains is proposed, based on the use of uniform discretizations of an encompassing computational box. Exploiting a pseudodifferential operator framework, relevant kernels can be replaced by smoother kernels which allow for more accurate numerical resolution. No explicit knowledge of the kernels is required beyond their analytical structure, which is used in an essential way in order to construct their numerical counterparts. While the smooth kernels, which correspond to infinitely smoothing compact operators, and their associated discretization matrices are badly ill-conditioned, they can very effectively be preconditioned by use of their “rougher” counterparts with singular kernels, in an arguably natural way and at minimal additional cost. The methodology proposed is very general and can be employed in three space dimensions as well as for more general linear and nonlinear boundary value problems. The fact that no remeshing is required makes this method particularly appealing for free and moving boundary problems. These extensions will be the topic of forthcoming papers.

University of California, Irvine, Department of Mathematics, 340 Rowland Hall, Irvine, CA 92697-3875, USA
E-mail address:
[email protected]