FAST AND OBLIVIOUS ALGORITHMS FOR DISSIPATIVE AND 2D WAVE EQUATIONS

L. BANJAI∗, M. LÓPEZ-FERNÁNDEZ†, AND A. SCHÄDLE‡
Abstract. The use of time-domain boundary integral equations has proved very effective and efficient for three-dimensional acoustic and electromagnetic wave equations. In even dimensions, and when some dissipation is present, time-domain boundary integral equations contain an infinite memory tail. Due to this, computation for longer times becomes exceedingly expensive. In this paper we show how oblivious quadrature, initially designed for parabolic problems, can be used to significantly reduce both the cost and the memory requirements of computing this tail. We analyse Runge-Kutta based quadrature and conclude the paper with numerical experiments.

Key words. fast and oblivious algorithms, convolution quadrature, wave equations, boundary integral equations, retarded potentials, contour integral methods

AMS subject classifications. 65R20, 65L06, 65M15, 65M38
∗ The Maxwell Institute for Mathematical Sciences, School of Mathematical & Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK ([email protected]).
† Dipartimento di Matematica Guido Castelnuovo, Sapienza Università di Roma, Piazzale Aldo Moro 5, 00185 Roma, Italy ([email protected]). Partially supported by INdAM-GNCS, the Spanish grants MTM2014-54710-P and MTM2016-75465-P and by the Ramón y Cajal program of the Ministerio de Economía y Competitividad, Spain.
‡ Mathematisches Institut, Heinrich-Heine-Universität, 40255 Düsseldorf, Germany ([email protected]).

1. Introduction. Certain wave problems exhibit the property that behind the wave front travelling at a finite speed there exists a smooth tail. The simplest examples of this phenomenon are the scalar wave equation in even dimensions or the damped wave equation in any dimension. Recently the interest in the numerical solution of scalar and vector linear wave equations by means of time-domain boundary integral equations has risen sharply [2, 6, 13, 14, 17, 22, 23, 11, 28, 29, 30, 32, 33]. Two particularly popular approaches to the discretization are time-space Galerkin methods [3] and Convolution Quadrature (CQ) [25].

The difficulty that arises in applying time-domain boundary integral methods to dissipative or two-dimensional wave equations can best be appreciated by comparing the free space Green's functions for the two and three dimensional acoustic wave equation:

    k(t, d) = H(t − d) / (2π √(t² − d²)),   in two dimensions,
    k(t, d) = δ(t − d) / (4πd),             in three dimensions,      (1.1)

where H(·) is the Heaviside function, δ(·) is the Dirac delta, and d = |x − y| can be thought of as the distance between the source y and receiver x. Here we see that whereas in three dimensions the support of the Green's function is on the time-cone t = d, in two dimensions the Green's function is non-zero for all t > d and the decay in t is slow. This infinite tail forces an infinite memory on time-domain boundary integral equation based methods that results in expensive long-time computations. This will affect any numerical method based on time-domain integral equations, in particular both time-space Galerkin methods and Convolution Quadrature. The smoothness of
this tail has been used in [10] and [5] to speed up computations. More importantly for the current work, in [5] it is shown that this tail is due to an operator of parabolic type; the precise meaning of this is explained in the next section. This fact was used in [5] for purely theoretical purposes, whereas in this paper it will be used to develop a fast algorithm with reduced memory requirements. Our new algorithm applied to the tail of linear hyperbolic problems has essentially the same properties as the algorithms developed in [27, 31] for parabolic problems.

The importance of these improvements can best be appreciated by considering a motivating application in more detail. Thus, we give a brief introduction to time-domain boundary integral equations; for more background information see [12]. Let Ω ⊂ Rⁿ, n = 2, 3, be a bounded Lipschitz domain with boundary Σ = ∂Ω and let Ωᶜ = Rⁿ \ Ω be its complement. Let u be a causal solution of the dissipative wave equation in Rⁿ \ Σ:

    ∂_t²u(t, x) + α ∂_t u(t, x) − ∆u(t, x) = 0,   t ∈ [0, T], x ∈ Rⁿ \ Σ,
    u(t, x) = g(t, x),                            t ∈ [0, T], x ∈ Σ,        (1.2)
    u(0, x) = ∂_t u(0, x) = 0,                    x ∈ Rⁿ \ Σ,

where α ≥ 0 is a constant and g(t, x) is given boundary data. The solution of (1.2) can be represented as a single layer potential

    u(t, x) = ∫_0^t ∫_Σ k(t − τ, |x − y|) ϕ(τ, y) dΣ_y dτ,      (1.3)

where the boundary density ϕ is the unique solution of the boundary integral equation, see [25]: Find ϕ such that

    g(t, x) = ∫_0^t ∫_Σ k(t − τ, |x − y|) ϕ(τ, y) dΣ_y dτ,   for all (t, x) ∈ [0, T] × Σ.      (1.4)
Inspecting this equation we can appreciate the difficulties that appear if k(t, d) has an infinite tail. In particular, the infinite tail implies an infinite memory, which places great pressure on both computational and memory resources when a standard numerical algorithm is used. Next we give a brief idea of how we may exploit a favourable property of the tail in order to apply techniques for parabolic equations [27, 31] specifically developed to deal with the problem of infinite memory.

With no dissipation, i.e., α = 0, the kernel function k(·, ·) is given by (1.1). Explicit formulas for the case α > 0 are complicated, see [12], and even unavailable in the literature for two dimensions. Nevertheless, the Laplace transform of the kernel function,

    K(λ, d) = (L k)(λ, d) = ∫_0^∞ e^{−λt} k(t, d) dt,

is easily written down explicitly:

    K(λ, d) = (1/2π) K₀(√(λ² + αλ) d),      in two dimensions,
    K(λ, d) = e^{−√(λ² + αλ) d} / (4πd),    in three dimensions,      (1.5)

where K₀(·) is a modified Bessel function. What distinguishes these transfer functions as being hyperbolic is that they increase exponentially in the left half-plane, whereas
parabolic transfer functions are polynomially bounded. However, as noticed in [5], even in this hyperbolic case e^{λd} K(λ, d) is polynomially bounded in the cut complex plane C \ (−∞, 0]. As multiplication by e^{λd} in the Laplace domain corresponds to a time-shift by d in the time domain, we will be able to apply parabolic techniques once t > d. The main part of the paper is devoted to giving a rigorous, algorithmic development of this idea.

To be more precise in quantifying the improvements made in this paper, let n₀ > diam(Ω)/h, where h is the time-step and Ω is the scatterer; in other words n₀ > d/h for all d in the above. For t_{n+1} = (n + 1)h the time convolution in the definition (1.3) of u(t_{n+1}, x) is approximated by the discrete convolution

    Σ_{j=0}^{n} ∫_Σ ω_{n−j}(|x − y|) ϕ(t_j, y) dΣ_y,

with convolution weights ω_{n−j} defined in Section 3. This sum is split as

    Σ_{j=n−n₀}^{n} ∫_Σ ω_{n−j}(|x − y|) ϕ(t_j, y) dΣ_y + Σ_{j=0}^{n−n₀−1} ∫_Σ ω_{n−j}(|x − y|) ϕ(t_j, y) dΣ_y.      (1.6)
Fast CQ algorithms for hyperbolic problems can compute the first sum in O(n₀ log² n₀) or O(n₀ log n₀) time and O(n₀) memory and history; the higher time-complexity is for a marching-on-in-time method [4] and the lower for a parallel-in-time method [9]; each has advantages depending on the intended application. Denoting by N = n − n₀ the number of summands in the second sum, i.e., the number of time-steps after n₀, we discuss in the following the additional cost required to approximate this sum with an accuracy ε in the evaluation of the convolution weights. Thus here target accuracy denotes the error made in performing the discrete algorithm, and not the error in approximating the original, continuous problem that is being discretized.
• The computational complexity is O(log(1/ε) N log N).
• The number of evaluations of the transfer operator K is reduced from O(N) to O(log(1/ε) log N).
• The memory requirements are reduced from O(N) to O(log(1/ε) log N).

The structure of the paper is as follows. In the next section we consider an abstract setting that covers the motivating application described above. Next, Runge-Kutta based convolution quadrature is introduced. In the main part of the paper, an efficient scheme to compute the convolution weights is described and analysed. The paper concludes with a more detailed description of the time-domain boundary integral method and with the results of numerical experiments.

2. The abstract setting. Let K(λ, d) : U_δ × R_{>0} → C be analytic as a function of λ in the sector U_δ = {λ ∈ C : |Arg λ| < π − δ}, δ ∈ (0, π/2), and for some µ ∈ R bounded as

    |e^{λd} K(λ, d)| ≤ C(d)|λ|^µ,   λ ∈ U_δ.      (2.1)

This in turn implies the standard bound for hyperbolic operators

    |K(λ, d)| ≤ C(d)|λ|^µ,   for Re λ > 0.      (2.2)
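On the positive real axis the sectorial bound (2.1) is easy to check numerically in the two-dimensional case with α = 0, where K(λ, d) = K₀(λd)/(2π): SciPy ships the scaled Bessel function k0e(x) = eˣK₀(x), which here equals 2π e^{λd}K(λ, d) at x = λd. A minimal sketch (the values of d and λ are illustrative; the constant √(π/2) comes from the standard large-argument asymptotics of K₀):

```python
import numpy as np
from scipy.special import k0e  # k0e(x) = exp(x) * K0(x)

d = 0.5
lam = np.logspace(1, 4, 7)             # lambda on the positive real axis
shifted = k0e(lam * d) / (2 * np.pi)   # e^{lambda d} K(lambda, d) for alpha = 0

# (2.1) with mu = -1/2: the shifted transfer function decays like
# C(d) * lambda^{-1/2}; compare with K0(x) ~ sqrt(pi/(2x)) for large x
ratio = shifted * np.sqrt(lam)
predicted = np.sqrt(np.pi / (2 * d)) / (2 * np.pi)
print(ratio, predicted)
```

The ratio is nearly constant over three decades of λ, in accordance with µ = −1/2, while K(λ, d) itself without the factor e^{λd} decays exponentially on this ray.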
We are interested in computing convolutions

    u(t) = ∫_0^t k(t − s, d) g(s) ds,      (2.3)

where k(t, d) is the inverse Laplace transform of K(λ, d). If µ < −1, then k(t, d) as a function of t is continuous and the above convolution is a well-defined continuous function for integrable g. For µ ≥ −1, the convolution is defined directly via the inverse Laplace transform

    u(t) = (1/2πi) ∫_{σ+iR} e^{λt} K(λ, d) (L g)(λ) dλ,

where σ > 0 and (L g)(λ) = ∫_0^∞ e^{−λt} g(t) dt denotes the Laplace transform. For data g such that its Laplace transform decays sufficiently fast, the inverse Laplace transform above defines a continuous function, see [25].

Our aim is to describe an efficient algorithm for the computation of (2.3). An important aspect of the algorithm is that it should be effective for a range of 0 < d ≤ R, where R > 0 is given. So far in the literature, fast algorithms have been considered in the non-sectorial, hyperbolic case, i.e., for kernels satisfying (2.2), see [9, 4]. In the sectorial, parabolic case, i.e., for kernels bounded as

    |K(λ, d)| ≤ C(d)|λ|^µ,   λ ∈ U_δ,      (2.4)
fast algorithms which further allow huge memory savings are available [27, 18, 31, 19]. Our kernel is, strictly speaking, non-sectorial, but after multiplication with e^{λd} it becomes sectorial. Naturally, this special class of operators requires its own fast algorithms.

3. Runge-Kutta convolution quadrature. The time discretization methods used in this paper are based on implicit A-stable Runge-Kutta methods [15]. We employ standard notation for an s-stage Runge-Kutta discretization based on the Butcher tableau described by the matrix A = (a_{ij})_{i,j=1}^s ∈ R^{s×s} and the vectors b = (b₁, ..., b_s)ᵀ ∈ R^s and c = (c₁, ..., c_s)ᵀ ∈ [0, 1]^s. The corresponding stability function is given by

    r(z) = 1 + z bᵀ(I − zA)⁻¹ 1,      (3.1)

where 1 = (1, 1, ..., 1)ᵀ. Note that A-stability is equivalent to the condition |r(z)| ≤ 1 for Re z ≤ 0. In the following we collect all the assumptions on the Runge-Kutta method. These are satisfied by, for example, the Radau IIA and Lobatto IIIC families of Runge-Kutta methods.

Assumption 1.
(a) The Runge-Kutta method is A-stable with (classical) order p ≥ 1 and stage order q ≤ p.
(b) The stability function satisfies |r(iy)| < 1 for all real y ≠ 0.
(c) The Runge-Kutta coefficient matrix A is invertible.
(d) The Runge-Kutta method is stiffly accurate, i.e., bᵀA⁻¹ = (0, 0, ..., 1).
This implies that

    lim_{|z|→∞} r(z) = 1 − bᵀA⁻¹1 = 0

and c_s = 1. Since r(z) is a rational function, the above assumptions imply that

    r(z) = O(z⁻¹),   |z| → ∞.      (3.2)
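All of these properties can be verified directly from a Butcher tableau. A sketch for the standard 2-stage Radau IIA method (NumPy only; the tableau values are standard):

```python
import numpy as np

# Butcher tableau of the 2-stage Radau IIA method (order 3)
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
one = np.ones(2)

def r(z):
    """Stability function r(z) = 1 + z b^T (I - z A)^{-1} 1, cf. (3.1)."""
    return 1 + z * b @ np.linalg.solve(np.eye(2) - z * A, one)

print(b @ np.linalg.inv(A))        # stiff accuracy: b^T A^{-1} = (0, 1)
print(abs(r(0) - 1))               # consistency: r(0) = 1
print(abs(r(1e8)))                 # decay (3.2): r(z) = O(1/z)
print(max(abs(r(1j * y)) for y in np.linspace(0.1, 100, 500)))  # < 1 on iR \ {0}
```

Since bᵀ equals the last row of A, the method is stiffly accurate and r(∞) = 1 − bᵀA⁻¹1 = 0, as checked above.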
We define the weight matrices W_n corresponding to the operator K by

    Σ_{n=0}^{∞} W_n(d) ζⁿ = K(∆(ζ)/h, d),      (3.3)

where

    ∆(ζ) = (A + ζ/(1 − ζ) 1bᵀ)⁻¹.      (3.4)

Denoting by ω_n = (ω_n¹, ..., ω_nˢ) the last row of W_n, the approximation to the convolution integral (2.3) at time t_{n+1} = (n + 1)h is given by

    u_{n+1} = Σ_{j=0}^{n} Σ_{i=1}^{s} ω_{n−j}ⁱ(d) g(t_j + c_i h) = Σ_{j=0}^{n} ω_{n−j}(d) g_j,      (3.5)
with the column vector g_j = g(t_j + ch) = (g(t_j + c_i h))_{i=1}^s. The convergence order of this approximation has been investigated in [26] for parabolic problems, i.e., for sectorial K(λ), and in [7] and [8] for hyperbolic problems, i.e., for non-sectorial operators.

With the row vector e_n(z) = (e_n¹(z), ..., e_nˢ(z)) defined as the last row of the s × s matrix E_n(z) given by

    (∆(ζ) − zI)⁻¹ = Σ_{n=0}^{∞} E_n(z) ζⁿ,      (3.6)

we obtain an integral formula for the weights

    ω_n(d) = (h/2πi) ∫_Γ K(λ, d) e_n(hλ) dλ.      (3.7)

This representation follows from Cauchy's formula and the definition of the weights in (3.3), with the integration contour Γ chosen so that it surrounds the poles of e_n(hλ). An explicit expression for e_n is given by

    e_n(z) = r(z)ⁿ q(z),      (3.8)

with the row vector q(z) = bᵀ(I − zA)⁻¹; cf. [26, Lemma 2.4]. The A-stability assumption implies that the poles of r(z) are all in the right half-plane. Further, due to the decay of the rational function r(z), see (3.2), for n > µ + 1 the contour Γ can be deformed into the imaginary axis. For the weight matrices it holds that

    W_n(d) = (h/2πi) ∫_Γ K(λ, d) E_n(hλ) dλ.      (3.9)
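In practice the weights are often recovered from the generating identity (3.3) rather than from (3.7), by sampling K(∆(ζ)/h, d) on a circle of radius ρ < 1 and applying a scaled FFT. A minimal scalar sketch for the backward Euler method (s = 1, A = 1, b = 1, hence ∆(ζ) = 1 − ζ) and the integration kernel K(λ) = 1/λ, whose CQ weights are exactly ω_n = h (the rectangle rule); the radius ρ below is an illustrative choice balancing aliasing against round-off:

```python
import numpy as np

h, N = 0.1, 64
rho = 1e-6 ** (1 / N)             # circle radius; aliasing error ~ 1e-6 * |omega|

zeta = rho * np.exp(2j * np.pi * np.arange(N) / N)
delta = 1 - zeta                  # Delta(zeta) for backward Euler, cf. (3.4)
K = lambda lam: 1 / lam           # transfer function of time integration

vals = K(delta / h)               # K(Delta(zeta)/h) sampled on the circle
w = (np.fft.fft(vals) / N / rho ** np.arange(N)).real

# backward Euler CQ of the integral is the rectangle rule: all weights equal h
print(np.max(np.abs(w - h)))
```

The same scaled-FFT idea applies verbatim to the matrix-valued weights W_n, with K evaluated at the matrix argument ∆(ζ)/h.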
By [26, Lemma 2.4], for n ≥ 1, E_n(z) is the rank-1 matrix given by

    E_n(z) = r(z)^{n−1} (I − zA)⁻¹ 1bᵀ (I − zA)⁻¹.      (3.10)

The Runge-Kutta approximation of the inhomogeneous linear problem

    y′(t) = λy(t) + g(t),   y(0) = 0,      (3.11)

at time t_{n+1} is given by

    y_{n+1}(λ) = h Σ_{j=0}^{n} e_{n−j}(hλ) g_j      (3.12)
and thus the approximation of the convolution integral in (3.5) can be rewritten as [26, Proposition 2.4]

    u_{n+1} = (1/2πi) ∫_Γ K(λ, d) y_{n+1}(λ) dλ.      (3.13)

In the next section we discuss how to approximate the integral in (3.7) by an efficient quadrature rule. It turns out that there exists s > 0 such that for n₀ + s < n < n₀ + sB, for an offset n₀ proportional to d/h, the quadrature error decays exponentially in the number of quadrature nodes. The convergence rate depends on B > 1, but is independent of s. This is the key observation for the fast algorithm explained in Section 4.2, where we split the sum in (3.12) and use contour quadrature to approximate (3.13).

4. Efficient quadrature for the computation of weights. If the Laplace transform of the kernel k is a sectorial operator, it is shown in [18] that the contour Γ in the integral formula for the weights ω_n(d) in (3.7) can be chosen as the left branch of a hyperbola with center and foci on the real axis. In [31] both hyperbolas and Talbot contours are tested and shown to work in practice. In the case of the kernels considered here, the contour must not have a real part extending to −∞. Theorem 4 shows that cutting the hyperbola at a finite real part commits only a small error. For the proof we will require the following technical lemma.

Lemma 2. Let r(z) be the stability function of a Runge-Kutta method satisfying Assumption 1 and let

    γ(ξ) = inf_{−ξ ≤ Re z ≤ 0} log|r(z)| / Re z.

Then γ(ξ) ∈ (0, 1] for ξ > 0, it increases monotonically as ξ → 0, and

    |r(z)| ≤ |e^{γ(ξ)z}|, for all z in the strip −ξ ≤ Re z ≤ 0.

Further, there exists ρ > 0 such that

    |r(z)| ≤ |e^{2z}|, for all z in the strip 0 ≤ Re z ≤ ρ.

Proof. By assumption |r(z)| < 1 for all Re z < 0 and r(∞) = 0, hence γ(ξ) > 0. Further, γ(ξ) cannot be greater than 1, since this would contradict the approximation property r(z) = e^z + O(z^{p+1}).
Table 4.1
Numerically obtained values of γ(ξ) from Lemma 2 for different values of ξ.

                        ξ = 1    ξ = 1/2
  Backward Euler        0.69     0.811
  2-stage Radau IIA     0.90     0.984
  3-stage Radau IIA     0.94     0.997
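The entries of this table can be reproduced from the definition of γ(ξ) in Lemma 2: by the maximum principle the infimum is attained on the boundary of the strip, so it suffices to scan the line Re z = −ξ and cap the result at 1. A sketch (the grid bounds are illustrative; accuracy is limited only by the sampling of the supremum):

```python
import numpy as np

def gamma(xi, r):
    """gamma(xi) = inf over -xi <= Re z <= 0 of log|r(z)|/Re z, reduced by the
    maximum principle to min{ log(1/sup_{Re z = -xi}|r(z)|)/xi, 1 }."""
    y = np.linspace(0.0, 60.0, 400001)
    sup = np.max(np.abs(r(-xi + 1j * y)))
    return min(np.log(1.0 / sup) / xi, 1.0)

r_be = lambda z: 1.0 / (1.0 - z)                                  # backward Euler
r_radau2 = lambda z: (1 + z / 3) / (1 - 2 * z / 3 + z ** 2 / 6)   # 2-stage Radau IIA

print(gamma(1.0, r_be), np.log(2.0))   # closed form log(1 + xi)/xi
print(gamma(1.0, r_radau2))            # ~0.90, cf. Table 4.1
print(gamma(0.5, r_radau2))            # ~0.98, cf. Table 4.1
```

For backward Euler the supremum on Re z = −ξ is attained at the real axis, giving the closed form log(1 + ξ)/ξ below; for Radau IIA it is attained at a nonzero imaginary part, which is why numerical evaluation is used.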
The bound on r(z) is clear from the definition of γ(ξ). The remaining statement can be proved similarly, making sure that ρ is less than the real part of any singularity of r(z); for a similar statement see Lemma 1 in [18].

Remark 3. Due to the maximum principle the infimum in the definition of γ(·) is attained either at Re z = −ξ or at Re z = 0. Hence

    γ(ξ) = min{ (1/ξ) inf_{Re z=−ξ} log(1/|r(z)|), 1 } = min{ (1/ξ) log(1/φ_r(−ξ)), 1 },

where

    φ_r(x) = sup_{Re z=x} |r(z)|

is the error growth function appearing in [15, Chapter IV.11]. For backward Euler we have the explicit form

    γ(ξ) = log(1 + ξ)/ξ,

whereas some numerically obtained values for the Radau IIA methods are given in Table 4.1.

In order to reduce the number of constants in the estimates, we have chosen not to be as precise about the bound for the case Re z > 0. The optimality of the estimates has nevertheless not been significantly affected. With this and the assumption (2.1), the integrand in (3.7) can be bounded as

    |K(λ, d) r(λh)ⁿ| ≤ C(d)|λ|^µ |e^{−λd/n} r(λh)|ⁿ ≤ C(d)|λ|^µ e^{Re λ t_n (γ(ξ) − d/t_n)}      (4.1)
and thus it decreases exponentially with −Re λ as long as 0 < −Re λh < ξ and d/t_n < γ(ξ). This suggests replacing the contour Γ in (3.7) with a finite section of a hyperbola:

    Γ₀ = νφ([−a, a]),   φ(x) = 1 − sin(α − ix),   ν > 0.      (4.2)

We want the endpoint of the finite hyperbola to be in the left half of the complex plane, i.e.,

    Re φ(a) = 1 − sin α cosh a < 0   ⟺   cosh a > 1/sin α.

The right-most point on the hyperbola is given by

    ν sup_{x∈[−a,a]} Re φ(x) = νφ(0) = ν(1 − sin α) < ν.      (4.3)

The error committed in replacing Γ with Γ₀ is investigated next.
Fig. 1. Contour Γ0 is depicted by the solid line and Γ−1 and Γ1 by the dashed lines.
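The geometric conditions on the truncated hyperbola are easily checked for concrete values (α, a and ν below are illustrative only):

```python
import numpy as np

alpha, nu = 0.9, 2.0
phi = lambda x: 1 - np.sin(alpha - 1j * x)   # hyperbola parametrization (4.2)

a_min = np.arccosh(1 / np.sin(alpha))        # endpoint condition: cosh a > 1/sin(alpha)
a = a_min + 1.0

x = np.linspace(-a, a, 2001)
print(nu * phi(a).real)                                   # endpoint in the left half-plane
print(nu * phi(x).real.max(), nu * (1 - np.sin(alpha)))   # right-most point (4.3), at x = 0
```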
Theorem 4. Let d, δ > 0 and µ be given such that K(λ, d) satisfies (2.1), and let a and α ∈ (0, π/2 − δ) be given such that cosh a > 1/sin α. Then for h, ν₀ > 0, m > µ, 0 < ν ≤ ν₀ and t_{n−m} > d/γ(ξ) with ξ = hν₀|Re φ(a)|,

    ‖ω_n(d) − (h/2πi) ∫_{Γ₀} K(λ, d) e_n(λh) dλ‖ ≤ C |νφ(a)|^{µ−m} h^{−m} e^{ν Re φ(a)(γ(ξ)t_{n−m} − d)},

where C = const · C(d), with C(d) as in (2.1).

Proof. Let us choose the contour in (3.7) as Γ = Γ₋₁ + Γ₀ + Γ₁, where Γ₀ is defined by (4.2), Γ₋₁ = {w̄ − iϱ | ϱ ∈ [0, ∞)} and Γ₁ = {w + iϱ | ϱ ∈ [0, ∞)}, with w = νφ(a); see Figure 1. To prove the required result we need to bound

    (h/2πi) ∫_{Γ±1} K(λ, d) e_n(λh) dλ.

From the definition of Γ±1, (4.1), property (2.1) of K(λ, d) and the fact that r(z) = O(z⁻¹), it follows that for λ ∈ Γ₋₁ ∪ Γ₁

    h|K(λ, d) r(λh)ⁿ| ≤ C h^{1−m} |λ|^{µ−m} e^{Re λ(γt_{n−m} − d)} = C h^{1−m} |λ|^{µ−m} e^{ν Re φ(a)(γt_{n−m} − d)}.

Hence

    ‖(h/2πi) ∫_{Γ±1} K(λ, d) r(λh)ⁿ bᵀ(I − λhA)⁻¹ dλ‖
        ≤ C h^{1−m} e^{ν Re φ(a)(γt_{n−m} − d)} ∫_{Γ±1} |λ|^{µ−m} ‖(I − λhA)⁻¹‖ |dλ|
        ≤ C h^{−m} e^{ν Re φ(a)(γt_{n−m} − d)} ∫_{Γ±1} |λ|^{µ−m−1} |dλ|
        ≤ C |νφ(a)|^{µ−m} h^{−m} e^{ν Re φ(a)(γt_{n−m} − d)},      (4.4)

which finishes the proof.

Remark 5. Lobatto IIIC methods [15] have a stability function that decays as r(z) = O(z⁻²). Employing this better estimate in (4.4) we obtain the bound
    ‖ω_n(d) − (h/2πi) ∫_{Γ₀} K(λ, d) e_n(λh) dλ‖ ≤ C |νφ(a)|^{µ−2m} h^{−2m} e^{ν Re φ(a)(γ(ξ)t_{n−m} − d)},

where m needs to satisfy only the weaker inequality m > µ/2 compared with the original result. As we require that t_{n−m} > d/γ(ξ), this also means that the estimate can be employed earlier. The same improvement carries over to the remaining results in this section.

The next step is to define a quadrature on the interval [−a, a] and hence on the contour Γ₀. Since the integrand is small at the edges of the interval, the trapezoidal rule is a good choice, and the following result gives the quadrature error.

Theorem 6. Let d, δ > 0 and µ be given such that K(λ, d) satisfies (2.1). Let α ∈ (0, π/2 − δ) and a > 0 be such that cosh a > 1/sin α, and let 0 < b < min(α, π/2 − (δ + α)). Assume that h and ν₀ > 0 are small enough so that hν₀(1 − sin(α − b)) < ρ, with ρ as in Lemma 2. For L ∈ N and τ = a/L let

    ξ = hν₀ sup_{y∈[−b,b]} |Re φ(a + τ/2 + iy)| = −hν₀(1 − sin(α + b) cosh(a + τ/2)).      (4.5)
Then for

    f(x) = (νh/2πi) K(νφ(x), d) e_n(νφ(x)h) φ′(x)

and

    I = ∫_{−a}^{a} f(x) dx,   I_L = (a/L) Σ_{k=−L}^{L} f(x_k),

where x_k = kτ, and for any 0 < ν < ν₀, it holds that

    ‖I − I_L‖ ≤ C ( h e^{2t_nν}/(e^{2πb/τ} − 1) + τ (hν cosh(a + τ/2))^{−m} e^{ν Re φ(a+τ/2−ib)(γ(ξ)t_{n−m} − d)} ),

for m the smallest integer with m > µ, and

    C = C₁(d, α, b) ν^{1+µ} max{1, (cosh(a + τ/2))^{1+µ}}.
Proof. Let us first suppose that f is analytic and bounded as ‖f(z)‖ ≤ M for z ∈ R_τ = {w ∈ C : −a − τ ≤ Re w ≤ a + τ, −b < Im w < b}. Then it follows from Lemma 13 in the Appendix that

    ‖I − I_L‖ ≤ (2M log 2)/(e^{2πb/τ} − 1) + (τ/π) sup_{y∈[−b,b]} ‖f(a + τ/2 + iy) − f(−a − τ/2 + iy)‖.      (4.6)

To finish the proof we need to bound f in the rectangles R_τ and R_{τ/2}, in a similar fashion as in [20]. By the assumptions on K, for b such that 0 < b < min(α, π/2 − (δ + α)), the integrand is analytic in the rectangle R_τ. For any x, y ∈ R,

    |φ(x + iy)| = cosh x − sin(α + y)   and   |φ′(x + iy)| = √(cosh²x − sin²(α + y)),
leading to the following estimates for z = x + iy ∈ R_τ:

    1 − sin(α + b) ≤ |φ(x + iy)| ≤ cosh(a + τ) − sin(α − b),
    |φ′(x + iy)| ≤ √(cosh²(a + τ) − sin²(α − b)),      (4.7)
    |φ′(x + iy)/φ(x + iy)| ≤ √((1 + sin(α + b))/(1 − sin(α + b))).

Thus, if µ ≤ −1 in (2.2), by using the second part of Lemma 2 we can bound the integrand for z ∈ R_τ (and thus the constant M in (4.6)) as

    ‖f(z)‖ ≤ (h/2π) C(d) (1 + sin(α + b))^{1/2} (1 − sin(α + b))^{1/2+µ} ν^{1+µ} e^{2νt_n}.

Using now that Re φ(±(a + τ/2) + iy) < 0, we can estimate as in Theorem 4, with m = 0,

    ‖f(±(a + τ/2) + iy)‖ ≤ (1/2π) C(d) (1 + sin(α + b))^{1/2} (1 − sin(α + b))^{1/2+µ} ν^{1+µ} e^{νt_n Re φ(a+τ/2−ib)(γ(ξ) − d/t_n)}.

For µ > −1 we obtain instead

    ‖f(z)‖ ≤ (h/2π) C(d) √((1 + sin(α + b))/(1 − sin(α + b))) ν^{1+µ} (cosh(a + τ/2))^{1+µ} e^{2νt_n}

and

    ‖f(±(a + τ/2) + iy)‖ ≤ (h^{−m}/2π) C(d) √((1 + sin(α + b))/(1 − sin(α + b))) (ν cosh(a + τ/2))^{1+µ−m} e^{νt_{n−m} Re φ(a+τ/2−ib)(γ(ξ) − d/t_{n−m})}.
This finishes the proof.

Combining the two theorems gives the following result.

Theorem 7. Under the conditions of Theorem 6 and with the same definition of I_L, the error can be estimated as

    ‖ω_n(d) − I_L‖ ≤ C ( h e^{2t_nν}/(e^{2πb/τ} − 1) + (1 + τ)(hν cosh a)^{−m} e^{ν(1 − sin(α−b) cosh a)(γ(ξ)t_{n−m} − d)} ),      (4.8)

with m = ⌈µ⌉ and

    C = C₁(d, α, b) ν^{1+µ} max{1, (cosh(a + τ/2))^{1+µ}}.
4.1. Asymptotics and choice of parameters. From Theorem 7 it is straightforward to derive an error bound which is uniform for t_n belonging to an interval. Oblivious quadrature algorithms rely heavily on this property, and time is split into geometrically increasing intervals. A novelty here is that we require the first interval to start at some n₀h > d. Therefore, let t_n = nh ∈ [hn₀ + hB^ℓ, hn₀ + h2B^{ℓ+1}], ℓ ≥ 0, and denote by T_ℓ the right end point of this interval, i.e., T_ℓ = hn₀ + h2B^{ℓ+1}. Choose ν_ℓ = c₀/T_ℓ and a_ℓ = c₁ log T_ℓ for some constants B > 1, c₀ > 0, c₁ > 0. For τ small enough, i.e., L = a/τ big enough, the first term can be made arbitrarily small. In fact, for the first term to be smaller than ε₁ we need

    L ∝ log T_ℓ log(1/ε₁).

To simplify the details of the analysis of the second term, let us assume m = 0 in Theorem 7 and let ξ be given by the formula

    ξ(ν_ℓ, a_ℓ) = hν_ℓ |Re φ(a_ℓ)| = hν_ℓ (cosh a_ℓ sin α − 1) ∼ h,

where in the last step we used ν_ℓ ∼ 1/T_ℓ and a_ℓ ∼ log T_ℓ. In fact ξ is given by a somewhat more complicated formula, but the inclusion of all the details would not change the asymptotic results we give here; for an optimal choice of parameters these details are of importance. We can thus write the second term in the estimate of Theorem 7 as

    ε₂ = e^{−(ξ(ν_ℓ,a_ℓ)/h)(γ(ξ(ν_ℓ,a_ℓ))t_n − d)} ∼ e^{−(γ(h)t_n − d)}.
a(θ) , L
with
( a(θ) = arccosh
ν=
πbLθ , Λt0 a(θ)
γ(ξ)(1 − D)θ + 2Λ(1 − θ) γ(ξ)(1 − D)θ sin(α − b)
) ,
yields the uniform error estimate ( ) 2πbL(1 − θ) , |ω n (d) − IL | ≤ C exp − a(θ) where C includes all non exponentially growing terms in the bound (4.8). The above choice of parameters is independent of h. Proof. The stated choice for τ and ν guarantees that exponents in the bound (4.8) with D in place of d are equal and negative, with 2Λνt0 = θ
2πb , τ
θ ∈ (0, 1),
and (θ − 1)
2πb = γνt0 (1 − sin(α − b) cosh a) . τ 11
Remark 9. Note that γ(ξ) and a(θ) are nonlinearly related via (4.5). In our experiments we have opted to fix the value of γ(ξ) ∈ (0, 1) and then use the error estimate in Corollary 8 to compute θ, a(θ) and τ (θ). This strategy may lead to an underestimation of the stability function according to Lemma 2. Still, our numerical results show that reasonable values of γ and good choices for the rest of parameters are easy to find for prescribed accuracies. These parameters depend on δ, µ and d in (2.1) but not on the particular application of our method. Results for different values of γ(ξ) are displayed in Figures 3 and 4, for a particular example. The effect of round-off errors can be included in the analysis in the same way as in [21], leading to a minimization problem for the choice of θ. In the simplest case of analysis in [21], the propagation of the errors in the evaluation of K, that we denote by ε, is governed by the exponentially growing term εeΛσ = εϵ(θ)−θ/2 ,
with
ϵ(θ) = e−2πbL/a(θ) .
Then from Corollary 8 we deduce that the total error estimate is of the form εϵ(θ)−θ/2 + ϵ(θ)1−θ .
(4.9)
The choice θ = 1/L above guarantees the boundedness of the term in ε and the control of the error propagation, giving a convergence rate like O(e−cL/ ln L ). A better choice of θ can be obtained by minimizing (4.9) for given L, α, b, and Λ. 4.2. The fast and oblivious algorithm. Except for a shift by n0 = ⌈d/(hγ(ξ(ν0 , a0 )))⌉ as explained in the previous section the algorithm described below is the essentially the same as to the fast and oblivious algorithm in [31]. Thus the notation of [31] is used in the following. ∑Nℓ ℓ For Nℓ the smallest integer such that n < n0 + 1 + B + ℓ=2 B the convolution (3.5) is split into Nℓ sums (0)
un+1 = un+1 +
Nℓ ∑
(ℓ)
un+1 ,
(4.10)
ℓ=2
where for suitable bℓ given below (0)
un+1 =
bℓ−1 −1
n ∑
ω n−j gj
and
(ℓ)
un+1 =
∑
ω n−j gj .
j=bℓ
j=b1
In view of the discussion in Section 4.1 the [ bℓ are chosen such that ]for j = bℓ , bℓ + 1, . . . , bℓ−1 − 1 we have tn−j = (n − j)h ∈ hn0 + hB ℓ , hn0 + h2B ℓ+1 , where n0 is a fixed integer offset with n0 ≥ d/(hγ(ξ)). Inserting (3.7) and using (3.8) we obtain bℓ−1 −1 (ℓ)
un+1 =
∑
j=bℓ
=
1 2πi
h 2πi
∫
∫ K(λ, d)en−j (hλ) dλ gj
(4.11)
Γ
r(hλ)n−(bℓ−1 −1) K(λ, d)y (ℓ) (hλ; bℓ−1 , bℓ ) dλ,
(4.12)
Γ
with y (ℓ) (λ; bℓ−1 , bℓ ), similar to (3.12), the Runge-Kutta approximation ∫ to (3.11) at time t = hbℓ−1 with initial value y (ℓ) (hbℓ ) = 0. The contour integral Γ is approximated by the ℓ-dependent approximation given in Theorem 6. By changing each bℓ 12
10 -2
10 -5
rel. err.
abs. err.
10 -3
10 -7
10 -4
10 -9 10 0
10 1
10 2 10 3 t=nh
10 -6
10 4
10 0
10 1
10 2 10 3 t=nh
10 4
Fig. 2. Absolute (left) and relative (right) errors in the computation of weights ω n (d) for the 3 stage Radau IIA method of order 5 for d = 0.1, number of quadrature points L = 10 and basis B = 10 (top row) and L = 15 and B = 5 (bottom row).
in every B^ℓ-th step only, we obtain an algorithm that, instead of keeping the whole history g_j, j = 0, ..., n, in memory for evaluating the convolution, requires only three copies of the Runge-Kutta solution y^{(ℓ)}(λ_k^{(ℓ)}; b_ℓ, b_{ℓ−1}) for ℓ = 2, ..., N_ℓ and k = −L, ..., L, where λ_k^{(ℓ)} = ν_ℓ φ(τ_ℓ k). Details can be found in [31]. As N_ℓ is proportional to log(n − n₀), the memory requirement thus grows like O(log(n − n₀)) and the operation count as O((n − n₀) log(n − n₀)). The computation of u^{(0)}_{n+1} is done with standard CQ algorithms, with the number of evaluations of the kernel and the memory requirements proportional to n₀.

4.3. A numerical experiment. We illustrate Theorem 7 and Corollary 8 by considering in (4.11) the generating function K(λ, d) = K₀(λd), where K₀(·) is a modified Bessel function [1]. This function satisfies the bounds (2.2) and (2.1) with µ = −1/2, as proved in [5]. In Figure 2 we show the error in the approximation of the convolution weights in (4.11) for the Radau IIA method of order 5, along approximation intervals of the form [hn₀ + hB^ℓ, hn₀ + 2hB^{ℓ+1}], with B = 5 and B = 10, and for d = 0.1. To evaluate the absolute error |ω_n − ω_n^{ref}| and the relative error |ω_n − ω_n^{ref}|/|ω_n^{ref}|, a reference solution is obtained with L = 50 quadrature nodes. The lower row of error curves corresponds to the case B = 5, where we take L = 15 quadrature nodes on the hyperbola and consider seven approximation intervals, i.e., ℓ = 2, ..., 8. The upper row of error curves, with the larger error, corresponds to B = 10, L = 10 and ℓ = 2, ..., 6. In total we thus have 13 error curves, which are coloured differently. The other parameters determining the hyperbola are α = 0.9, γ(ξ) = 0.8, b = 0.6 and n₀ = ⌈d/γ(ξ)/h⌉. Tests with different parameters d and different Radau IIA methods show very similar results.

5. An application.
As explained in the introduction, the initial motivation for investigating operators satisfying (2.1) comes from the application of time-domain boundary integral operators to wave propagation in cases where the strong Huygens' principle does not hold. The most common examples of the latter are the propagation of acoustic waves in two dimensions or in a dissipative medium, and the propagation of viscoelastic waves. We now continue with the application described in the introduction and show how the new algorithms can be applied here. As indicated in the introduction, the transfer function satisfies the bounds (2.2) and (2.1), as first noticed in [5].

Lemma 10. For a fixed d > 0, the function K(λ, d) given in (1.5) satisfies (2.1) with

    µ = −1/2, in two dimensions,   µ = 0, in three dimensions.

Proof. The bound (2.1) is obvious in the 3D case; for the two-dimensional case it follows from the asymptotic expansion for large arguments of K₀(z), see [1]. In [5, Lemma 3.2] the proof of the result is given for the two-dimensional case with α = 0, and for the three-dimensional case with general α ≥ 0. The remaining case is a consequence of these results, as shown next:

    e^{λd} K₀(√(λ² + αλ) d) = e^{λd − √(λ²+αλ) d} e^{√(λ²+αλ) d} K₀(√(λ² + αλ) d)
        ≤ C(σ, d) |√(λ² + αλ)|^{−1/2} ≤ C(σ, d) |λ|^{−1/2}.
This now allows us to discretize the time convolution in (1.4) using convolution quadrature:

    g_n(x) = Σ_{j=0}^{n} ∫_Σ ω_{n−j}(|x − y|) ϕ_j(y) dΣ_y,   n = 0, 1, ..., T/h = N,      (5.1)

where the weights ω_{n−j}(|x|) are determined from the kernel function K(λ, |x|) and a choice of linear multistep or Runge-Kutta method, in the same way as in the previous sections. To simplify the description of the method, we confine ourselves to single-stage Runge-Kutta methods, i.e., the backward Euler method (s = 1). Numerical results will be for multistage Runge-Kutta based discretizations; details of implementing these can be found in [4]. Note that we use the notation g_n(x) = g(t_n, x) and ϕ_n(x) ≈ ϕ(t_n, x) above. In the case of a multistage method these would be vectors of size s, as before. The convergence of such an approximation of the integral operators was first analyzed in [25] for linear multistep methods and then in [7, 8] for Runge-Kutta methods.

To obtain a fully discrete system we need to discretize (5.1) in space as well. Here we make use of a standard Galerkin boundary element method. To this end, let {Σ₁, Σ₂, ..., Σ_M} with ∪Σ_j = Σ be a boundary element mesh and let S_h = Span{b₁, b₂, ..., b_M} be a subspace of H^{−1/2}(Σ); in particular, let it be the space of piecewise constant functions with the basis defined by

    b_i(x) = 1 if x ∈ Σ_i,   b_i(x) = 0 otherwise.

Writing (5.1) in variational form and discretizing by the Galerkin method, we obtain the fully discrete system: Find ϕ_j^k such that

    ∫_Σ g_n(x) b_ℓ(x) dx = Σ_{j=0}^{n} Σ_k ∫_Σ ∫_Σ ω_{n−j}(|x − y|) ϕ_j^k b_k(y) b_ℓ(x) dΣ_y dΣ_x,
for all b_ℓ ∈ S_h and n = 0, 1, ..., T/h = N. It will be convenient for the later discussion to rewrite this system in matrix notation as

    g_n = Σ_{j=0}^{n} W_{n−j} ϕ_j,      (5.2)

where

    (ϕ_j)_ℓ = ϕ_j^ℓ,   (g_n)_ℓ = ∫_Σ g_n(x) b_ℓ(x) dx,   ℓ = 1, ..., M,

and the matrix W_n is given by

    (W_n)_{kℓ} = ∫_Σ ∫_Σ ω_n(|x − y|) b_k(y) b_ℓ(x) dΣ_y dΣ_x,   k, ℓ = 1, 2, ..., M.
Note that these matrices are also given by a contour integral analogous to (3.9),

    W_n = (h/2πi) ∫_Γ K(λ) E_n(hλ) dλ,

with K(·) given by

    (K(λ))_{kℓ} = ∫_Σ ∫_Σ K(λ, |x − y|) b_k(y) b_ℓ(x) dΣ_y dΣ_x,

where K(λ, d) is the Green's function from (1.5). In case a Runge-Kutta method with s stages is chosen, g_n and W_n are tensors of dimension s × M and s × M × s × M, respectively.

The simplest way of applying the techniques developed in this paper would be to apply oblivious quadrature for times t_n > diam(Ω)/γ(ξ). A more efficient approach is to split the matrices W_j into distance classes. For example, given a positive constant d₁ < diam(Ω), we could split each matrix into two as follows:

    (W_k^{(1)})_{ij} = (W_k)_{ij} if dist(Σ_i, Σ_j) ≤ d₁, and 0 otherwise;   W_k^{(2)} = W_k − W_k^{(1)}.      (5.3)

Then the above sum could be split as

    g_n = Σ_{j=0}^{n} W^{(1)}_{n−j} ϕ_j + Σ_{j=0}^{n} W^{(2)}_{n−j} ϕ_j
and the new method applied to the first sum for t_n > d_1/γ(ξ) and to the second for t_n > diam(Ω)/γ(ξ); in this way savings are obtained earlier in the computation of the first sum.

Remark 11. In this example it is important that the quadrature used to compute the weights W_j^{(1)} or W_j^{(2)} is valid for a range of distances d. To compute the relevant parameters we proceed as explained in Section 4, taking d = d_1 or d = diam(Ω). As shown in Corollary 8, these parameters are then valid also for any d̃ < d.

With this the stage is set for applying the oblivious quadrature techniques of the previous section to the setting of time-domain boundary integral equations described above. The algorithm is adapted to solve convolution integrals such as the one in
Fig. 3. Evolution of the error: For different L with fixed γ(ξ) = 0.6 (left) and for different γ(ξ) and fixed L = 26 (right).
(1.4) in the same way as explained in [31, Section 4.2]. From (5.2) we see that the scheme is implicit in the vector of stages ϕ_n. The fast and oblivious algorithm is then applied to the evaluation of the history term on the right-hand side of the linear system
\[
W_0\, \varphi_n = g_n - \sum_{j=0}^{n-1} W_{n-j}\, \varphi_j.
\]
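To make the structure of (5.2), the splitting (5.3), and the implicit marching concrete, the following self-contained numpy sketch uses random stand-in weight matrices (the real W_n come from the boundary element quadratures above; everything here is a hypothetical illustration). Patch midpoints on the unit circle supply the distances for the two classes.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 6
# Stand-ins: patch midpoints on the unit circle and random weight matrices.
mid = np.exp(2j * np.pi * np.arange(M) / M)
dist = np.abs(mid[:, None] - mid[None, :])           # chordal distances
W = [rng.standard_normal((M, M)) for _ in range(N + 1)]
W[0] += 10 * np.eye(M)                               # keep W_0 well conditioned

d1 = np.sqrt(2.0)
W1 = [np.where(dist <= d1, Wk, 0.0) for Wk in W]     # near class, eq. (5.3)
W2 = [Wk - W1k for Wk, W1k in zip(W, W1)]            # far class

g = [rng.standard_normal(M) for _ in range(N + 1)]
# March: W_0 phi_n = g_n - sum_{j<n} W_{n-j} phi_j, history split by class.
phi = []
for n in range(N + 1):
    hist = sum(W1[n - j] @ phi[j] + W2[n - j] @ phi[j] for j in range(n))
    phi.append(np.linalg.solve(W[0], g[n] - hist))

# Sanity check: the split marching reproduces the full convolution (5.2).
res = [sum(W[n - j] @ phi[j] for j in range(n + 1)) for n in range(N + 1)]
err = max(np.linalg.norm(r - gn) for r, gn in zip(res, g))
print(err)
```

In the fast algorithm the two history sums would of course not be formed term by term: each class is handled obliviously as soon as t_n exceeds its travel time, which is the point of the splitting.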
Results of numerical experiments are given in the next section.

6. Numerical experiments. Let Ω ⊂ R² be a disk with radius one. We solve (1.2) with α = 0 and g(x, t) = t⁴e^{−2t}. We integrate from t = 0 to T = 40 with step-size h = T/400, i.e., N = 400, and discretize in space with equally large patches of size ≈ 2π/100, i.e., M = 100. Because of the symmetries of the circle, see [29], the solution depends only on the radius. The splitting of the weights into two distance classes is done by setting the parameter d₁ = √2 in Equation (5.3), so that the entries of W are divided into two equally large distance classes. In all experiments we chose the base B = 5, which defines the splitting in (4.10) together with the offset n₀ = ⌈d/(hγ(ξ))⌉, where d depends on the distance class and is either √2 or 2.

We wish to assess the error made in approximating the second term in (1.6) due to the contour integral approximation of the CQ weights. To this end, for a fixed time and space grid we measure at every time step the difference between the "exact" CQ method and the approximation computed by our fast and oblivious implementation. The computation of the "exact" CQ solution is done as described in [9]. The weights ω_n required for the first term in (1.6) are calculated as described in [24, Section 7].

The evolution of the error for contour parameters α = 0.98 and b = 0.33 for different L and γ(ξ) is shown in Figure 3. Increasing L and decreasing γ(ξ) yields more accurate results. Convergence in the number of quadrature nodes L for different γ(ξ) is shown in Figure 4; the error here is measured in the sup norm. Only if γ(ξ) is small enough does increasing L improve the result, in which case we observe exponential convergence in L, in agreement with Theorem 7. The choice of α and b is made experimentally.

Remark 12.
A better choice of parameters might be feasible by including the angle α in the optimization process and eliminating b, as is done in [34] for the numerical inversion of the Laplace transform. In our example we have tested the parameters
Fig. 4. Left: Convergence in the number of contour quadrature nodes L. Right: Results obtained by using the parameters from [34].
from [34] on a purely experimental basis, and the convergence rates are indeed better. However, we point out that the theory in [34] does not apply to our situation. Another issue with the parameters from [34] is that the propagation of the errors in the evaluations of the Laplace transform K is not under control, as can be observed in Figure 4 when the error curves reach the accuracy of the reference solution, about 10⁻¹⁰. A further study of the optimal parameters in the context of 2D and damped wave equations might be the topic of future research.

7. Appendix. We modify the proof given in [16] to show the following result.

Lemma 13. Let f be analytic and bounded as |f(z)| ≤ M for z ∈ R_{τ₀} = {w ∈ C : −a − τ₀ ≤ Re w ≤ a + τ₀, −b < Im w < b} and some τ₀ > 0. Further, let
\[
I = \int_{-a}^{a} f(x)\, dx, \qquad I_L = \frac{a}{L} \sum_{k=-L}^{L} f(x_k),
\]
where x_k = kτ, τ = a/L, and 0 < τ ≤ τ₀. Then
\[
|I - I_L| \le \frac{2M}{e^{2\pi b/\tau} - 1} + \frac{\log 2}{\pi}\, \tau \sup_{y \in [-b,b]} |g_{\tau/2}(y)|,
\]
where g_{τ/2}(y) = f(a + τ/2 + iy) − f(−a − τ/2 + iy).

Proof. Let Γ be the boundary of the rectangle R_{τ/2} ⊂ R_{τ₀}, with Γ₁ the top and bottom sides and Γ₂ the left and right sides of the rectangle. Using residue calculus it follows that I_L can be represented as
\[
I_L = \int_\Gamma f(z)\, (2i)^{-1} \cot(\pi z/\tau)\, dz.
\]
Using the analyticity of f we obtain
\[
I = \int_\Gamma f(z)\, u(z)\, dz,
\]
where u is defined by
\[
u(z) = \begin{cases} -\tfrac{1}{2}, & \operatorname{Im} z > 0, \\ \phantom{-}\tfrac{1}{2}, & \operatorname{Im} z < 0. \end{cases}
\]
Combining these two formulas we have
\[
I_L - I = \int_\Gamma f(z)\, S(z)\, dz,
\]
where S(z) = (2i)^{-1} cot(πz/τ) − u(z) or, when simplified,
\[
S(z) = \begin{cases} \dfrac{1}{1 - e^{-2i\pi z/\tau}}, & \operatorname{Im} z > 0, \\[2mm] \dfrac{1}{e^{2i\pi z/\tau} - 1}, & \operatorname{Im} z < 0. \end{cases}
\]
We now split the error into two terms, I_L − I = I₁ + I₂, corresponding to Γ₁ and Γ₂. The first term is easily bounded and gives the first term in the above error estimate. Noticing that S(±(a + τ/2) + iy) = S(τ/2 + iy), we see that
\[
|I_2| = \left| \int_{-b}^{b} g_{\tau/2}(y)\, S(\tau/2 + iy)\, dy \right| \le \sup_{y \in [-b,b]} |g_{\tau/2}(y)| \int_{-b}^{b} |S(\tau/2 + iy)|\, dy \le \sup_{y \in [-b,b]} |g_{\tau/2}(y)| \int_{-\infty}^{\infty} |S(\tau/2 + iy)|\, dy = \frac{\log 2}{\pi}\, \tau \sup_{y \in [-b,b]} |g_{\tau/2}(y)|,
\]
where we have used
\[
\int_{0}^{\infty} |S(\tau/2 + iy)|\, dy = \int_{0}^{\infty} \frac{1}{1 + e^{2\pi y/\tau}}\, dy = \tau \int_{0}^{\infty} \frac{1}{1 + e^{2\pi u}}\, du = \frac{\log 2}{2\pi}\, \tau,
\]
and similarly
\[
\int_{-\infty}^{0} |S(\tau/2 + iy)|\, dy = \frac{\log 2}{2\pi}\, \tau.
\]
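As a quick numerical illustration of the geometric accuracy this lemma quantifies (our own example, not from [16]): for the entire, rapidly decaying function f(x) = e^{−x²} on [−a, a] with a = 6, both terms of the bound become negligible once τ is moderately small, and the equispaced sum I_L reaches machine precision with a handful of points. The reference value is ∫_{−∞}^{∞} e^{−x²} dx = √π; truncation to [−6, 6] only contributes an error of order e^{−36}.

```python
import numpy as np

def I_L(L, a=6.0):
    """Equispaced sum I_L = tau * sum_{k=-L}^{L} f(k*tau), tau = a/L."""
    tau = a / L
    k = np.arange(-L, L + 1)
    return tau * np.sum(np.exp(-(k * tau) ** 2))

# Errors against sqrt(pi) for increasing L: geometric decay down to roundoff.
errs = [abs(I_L(L) - np.sqrt(np.pi)) for L in (6, 12, 20)]
print(errs)
```

Already at L = 12 the error sits at the roundoff level, consistent with the exponentially small first term and a boundary term g_{τ/2} that is tiny because f decays so fast.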
Acknowledgments. The useful comments made by the referees during the revision process helped to improve the quality of the work.

REFERENCES

[1] M. Abramowitz and I. A. Stegun, editors. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications Inc., New York, 1992.
[2] J. Ballani, L. Banjai, S. Sauter, and A. Veit. Numerical solution of exterior Maxwell problems by Galerkin BEM and Runge-Kutta convolution quadrature. Numer. Math., 123(4):643–670, 2013.
[3] A. Bamberger and T. Ha-Duong. Formulation variationelle espace-temps pour le calcul par potentiel retardé d'une onde acoustique. Math. Meth. Appl. Sci., 8:405–435, 1986.
[4] L. Banjai. Multistep and multistage convolution quadrature for the wave equation: algorithms and experiments. SIAM J. Sci. Comput., 32(5):2964–2994, 2010.
[5] L. Banjai and V. Gruhne. Efficient long time computations of time-domain boundary integrals for 2d and dissipative wave equation. J. Comput. Appl. Math., 235(14):4207–4220, 2011.
[6] L. Banjai and M. Kachanovska. Sparsity of Runge-Kutta convolution weights for the three-dimensional wave equation. BIT, 54(4):901–936, 2014.
[7] L. Banjai and C. Lubich. An error analysis of Runge-Kutta convolution quadrature. BIT, 51(3):483–496, 2011.
[8] L. Banjai, C. Lubich, and J. M. Melenk. Runge-Kutta convolution quadrature for operators arising in wave propagation. Numer. Math., 119(1):1–20, 2011.
[9] L. Banjai and S. Sauter. Rapid solution of the wave equation in unbounded domains. SIAM J. Numer. Anal., 47(1):227–249, 2008/09.
[10] J. A. M. Carrer, W. L. A. Pereira, and W. J. Mansur. Two-dimensional elastodynamics by the time-domain boundary element method: Lagrange interpolation strategy in time integration. Eng. Anal. Bound. Elem., 36(7):1164–1172, 2012.
[11] Q. Chen, P. Monk, X. Wang, and D. Weile. Analysis of convolution quadrature applied to the time-domain electric field integral equation. Commun. Comput. Phys., 11:383–399, 2012.
[12] M. Costabel. Time-dependent problems with the boundary integral equation method. Encyclopedia of Computational Mechanics, Vol. 1: Fundamentals, 2005.
[13] P. J. Davies and D. B. Duncan. Convolution-in-time approximations of time domain boundary integral equations. SIAM J. Sci. Comput., 35(1):B43–B61, 2013.
[14] P. J. Davies and D. B. Duncan. Convolution spline approximations for time domain boundary integral equations. J. Integral Equations Appl., 26(3):369–412, 2014.
[15] E. Hairer and G. Wanner. Solving Ordinary Differential Equations. II, volume 14 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, second edition, 1996. Stiff and differential-algebraic problems.
[16] M. Javed and L. N. Trefethen. A trapezoidal rule error bound unifying the Euler-Maclaurin formula and geometric convergence for periodic functions. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 470(2161):20130571, 2014.
[17] A. R. Laliena and F.-J. Sayas. Theoretical aspects of the application of convolution quadrature to scattering of acoustic waves. Numer. Math., 112(4):637–678, 2009.
[18] M. López-Fernández, C. Lubich, C. Palencia, and A. Schädle. Fast Runge-Kutta approximation of inhomogeneous parabolic equations. Numer. Math., 102(2):277–291, 2005.
[19] M. López-Fernández, C. Lubich, and A. Schädle. Adaptive, fast, and oblivious convolution in evolution equations with memory. SIAM J. Sci. Comput., 30(2):1015–1037, 2008.
[20] M. López-Fernández and C. Palencia. On the numerical inversion of the Laplace transform of certain holomorphic mappings. Appl. Numer. Math., 51(2-3):289–303, 2004.
[21] M. López-Fernández, C. Palencia, and A. Schädle. A spectral order method for inverting sectorial Laplace transforms. SIAM J. Numer. Anal., 44(3):1332–1350, 2006.
[22] M. López-Fernández and S. Sauter. Generalized convolution quadrature with variable time stepping. IMA J. Numer. Anal., 33(4):1156–1175, 2013.
[23] M. López-Fernández and S. Sauter. Generalized convolution quadrature with variable time stepping. Part II: algorithm and numerical results. Appl. Numer. Math., 94:88–105, 2015.
[24] C. Lubich. Convolution quadrature and discretized operational calculus. II. Numer. Math., 52:413–425, 1988.
[25] C. Lubich. On the multistep time discretization of linear initial-boundary value problems and their boundary integral equations. Numer. Math., 67:365–389, 1994.
[26] C. Lubich and A. Ostermann. Runge-Kutta methods for parabolic equations and convolution quadrature. Math. Comp., 60(201):105–131, 1993.
[27] C. Lubich and A. Schädle. Fast convolution for nonreflecting boundary conditions. SIAM J. Sci. Comput., 24(1):161–182, 2002.
[28] S. Sauter and A. Veit. A Galerkin method for retarded boundary integral equations with smooth and compactly supported temporal basis functions. Numer. Math., 123(1):145–176, 2013.
[29] S. Sauter and A. Veit. Retarded boundary integral equations on the sphere: exact and numerical solution. IMA J. Numer. Anal., 34(2):675–699, 2014.
[30] F.-J. Sayas. Energy estimates for Galerkin semidiscretizations of time domain boundary integral equations. Numer. Math., 124(1):121–149, 2013.
[31] A. Schädle, M. López-Fernández, and C. Lubich. Fast and oblivious convolution quadrature. SIAM J. Sci. Comput., 28(2):421–438, 2006.
[32] H. A. Ülkü, H. Bağcı, and E. Michielssen. Marching on-in-time solution of the time domain magnetic field integral equation using a predictor-corrector scheme. IEEE Trans. Antennas and Propagation, 61(8):4120–4131, 2013.
[33] F. Valdés, M. Ghaffari-Miab, F. P. Andriulli, K. Cools, and E. Michielssen. High-order Calderón preconditioned time domain integral equation solvers. IEEE Trans. Antennas and Propagation, 61(5):2570–2588, 2013.
[34] J. A. C. Weideman and L. N. Trefethen. Parabolic and hyperbolic contours for computing the Bromwich integral. Math. Comp., 76(259):1341–1356, 2007.