Spectral collocation and waveform relaxation methods with Gegenbauer reconstruction for nonlinear conservation laws

Z. Jackiewicz∗ and B. Zubik–Kowal†

November 2, 2004

Abstract. We investigate Chebyshev spectral collocation and waveform relaxation methods for nonlinear conservation laws. Waveform relaxation methods make it possible to replace the system of nonlinear differential equations resulting from the application of spectral collocation methods by a sequence of linear problems which can be integrated effectively by highly stable implicit methods. The numerical solution obtained is then enhanced on the intervals of smoothness by Gegenbauer reconstruction. The effectiveness of this approach is illustrated by numerical experiments.

Keywords: Nonlinear conservation law, pseudospectral methods, waveform relaxation iterations, Gegenbauer reconstruction.



∗ Department of Mathematics, Arizona State University, Tempe, Arizona 85287, e-mail: [email protected]. The work of this author was partially supported by the National Science Foundation under grant NSF DMS–9971164.
† Department of Mathematics, Boise State University, 1910 University Drive, Boise, Idaho 83725, e-mail: [email protected].

1 Introduction

It is the purpose of this paper to investigate spectral collocation and waveform relaxation methods for the nonlinear conservation law
$$
\begin{cases}
\dfrac{\partial}{\partial t} u(x,t) + \dfrac{\partial}{\partial x} f\big(u(x,t)\big) = 0, & -L \le x \le L, \quad t \ge t_0, \\[1ex]
u(x,t_0) = g(x), & -L \le x \le L, \\[1ex]
u(-L,t) = \alpha(t), \quad u(L,t) = \beta(t), & t \ge t_0.
\end{cases} \tag{1.1}
$$

The solution $u(x,t)$ to (1.1) is the limit of $u^\nu(x,t)$ as $\nu \to 0$, where $u^\nu(x,t)$ satisfies the boundary–value problem with added viscosity $\nu$,
$$
\begin{cases}
\dfrac{\partial}{\partial t} u^\nu(x,t) + \dfrac{\partial}{\partial x} f\big(u^\nu(x,t)\big) = \nu \dfrac{\partial^2}{\partial x^2} u^\nu(x,t), & -L \le x \le L, \quad t \ge t_0, \\[1ex]
u^\nu(x,t_0) = g(x), & -L \le x \le L, \\[1ex]
u^\nu(-L,t) = \alpha(t), \quad u^\nu(L,t) = \beta(t), & t \ge t_0,
\end{cases} \tag{1.2}
$$
and it may be beneficial to consider (1.2) with 'small' viscosity $\nu$ instead of (1.1) to stabilize the resulting schemes. Spectral and Legendre pseudospectral viscosity methods for nonlinear conservation laws have been considered by Gottlieb, Lustman and Orszag [17], Maday, Ould Kaber and Tadmor [32], and Guo, Ma and Tadmor [24].

In the next section we describe the Chebyshev pseudospectral methods for (1.1) and (1.2). This leads to a nonlinear system of ordinary differential equations (ODEs) which is integrated in time by an explicit Runge–Kutta method of order four. The resulting numerical schemes are spectrally accurate if the solution to (1.1) or (1.2) is smooth. For discontinuous problems we can improve the convergence away from the discontinuities by applying a filter of order p. High-order filters give high resolution away from the discontinuities, but with strong oscillations (the Gibbs phenomenon) in the neighborhoods of the discontinuities. Low-order filters reduce these oscillations, but at the expense of excessive blurring. In this paper we use the former approach (high-order filters) with subsequent application of Gegenbauer reconstruction, as in [24], to resolve the Gibbs phenomenon and enhance the accuracy of the resulting numerical approximations. This is described in Section 3. In Section 4 results of numerical experiments are presented which illustrate the effectiveness of pseudospectral Chebyshev methods with Gegenbauer reconstruction for discontinuous problems. In Section 5 we discuss waveform relaxation methods for (1.1) and (1.2) in conjunction with Chebyshev pseudospectral methods. These methods can be viewed as a way to replace the nonlinear systems of differential equations, obtained by semidiscretization of (1.1) or (1.2) in space by the method of lines, by a sequence of linear problems which are easier to solve. As a result, we can employ implicit numerical schemes for integration in time. These schemes have much better stability properties than the explicit formulas employed in Section 2 and, as a consequence, allow the use of much larger time steps while still satisfying stability restrictions, as compared with the explicit Runge–Kutta schemes. This is confirmed by the numerical experiments presented in Section 6. An additional advantage of waveform relaxation is that, depending on the choice of splitting, we can decouple the resulting system of differential equations, which can then be integrated efficiently in a parallel computing environment. Finally, in Section 7 some concluding remarks are given and plans for future research are briefly outlined.

2 Pseudospectral Chebyshev method

We will describe the pseudospectral Chebyshev method for (1.2), where, to simplify the notation, we suppress the dependence of the solution on the parameter $\nu$. Let M be a nonnegative integer and denote by $x_i = -L\cos(\pi i/M)$, $i = 0, 1, \ldots, M$, the Chebyshev–Gauss–Lobatto points in the interval $[-L, L]$. We discretize (1.2) in space by the method of lines, replacing $\frac{\partial}{\partial x} f(u(x,t))$ and $\frac{\partial^2}{\partial x^2} u(x,t)$ by the pseudospectral approximations
$$
\frac{\partial}{\partial x} f\big(u(x_i,t)\big) \approx f'\big(u(x_i,t)\big) \sum_{j=0}^{M} d_{ij}^{(1)} u(x_j,t) \tag{2.1}
$$
and
$$
\frac{\partial^2}{\partial x^2} u(x_i,t) \approx \sum_{j=0}^{M} d_{ij}^{(2)} u(x_j,t), \tag{2.2}
$$
$i = 0, 1, \ldots, M$. Here,
$$
D^{(k)} = \big[d_{ij}^{(k)}\big]_{i,j=0}^{M}, \qquad k = 1, 2,
$$
are the differentiation matrices of order k. Explicit expressions for these matrices are given in [7], [10] and [38]. Put $u_i(t) = u(x_i,t)$. Substituting (2.1) and (2.2) into (1.2) and taking into account that $u_0(t) = \alpha(t)$, $u_M(t) = \beta(t)$, we obtain
$$
\begin{cases}
u_i'(t) = -f'(u_i(t)) \displaystyle\sum_{j=1}^{M-1} d_{ij}^{(1)} u_j(t) + \nu \displaystyle\sum_{j=1}^{M-1} d_{ij}^{(2)} u_j(t) \\[1ex]
\qquad\quad -\, f'(u_i(t)) \big(d_{i0}^{(1)} \alpha(t) + d_{iM}^{(1)} \beta(t)\big) + \nu \big(d_{i0}^{(2)} \alpha(t) + d_{iM}^{(2)} \beta(t)\big), \\[1ex]
u_i(t_0) = g(x_i),
\end{cases} \tag{2.3}
$$
$i = 1, 2, \ldots, M-1$. Put
$$
\widehat{D}^{(k)} = \big[d_{ij}^{(k)}\big]_{i,j=1}^{M-1}, \qquad
a^{(k)} = \begin{bmatrix} d_{1,0}^{(k)} \\ \vdots \\ d_{M-1,0}^{(k)} \end{bmatrix}, \qquad
b^{(k)} = \begin{bmatrix} d_{1,M}^{(k)} \\ \vdots \\ d_{M-1,M}^{(k)} \end{bmatrix}, \qquad k = 1, 2,
$$
$$
u(t) = \big[u_1(t), \ldots, u_{M-1}(t)\big]^T, \qquad
g = \big[g(x_1), \ldots, g(x_{M-1})\big]^T, \qquad
f'(u(t)) = \big[f'(u_1(t)), \ldots, f'(u_{M-1}(t))\big]^T.
$$
Then the system (2.3) can be rewritten in the vector form
$$
\begin{cases}
u'(t) = -f'(u(t)) \cdot \widehat{D}^{(1)} u(t) + \nu \widehat{D}^{(2)} u(t) \\[1ex]
\qquad\quad -\, f'(u(t)) \cdot \big(a^{(1)} \alpha(t) + b^{(1)} \beta(t)\big) + \nu \big(a^{(2)} \alpha(t) + b^{(2)} \beta(t)\big), \\[1ex]
u(t_0) = g,
\end{cases} \tag{2.4}
$$
$t \ge t_0$, where '·' stands for componentwise multiplication. This system is then integrated in time by the classical Runge–Kutta method of order four, compare [6].
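To make the semidiscretization concrete, here is a minimal NumPy sketch (our illustration, not the authors' code; all names are hypothetical) that assembles the Chebyshev–Gauss–Lobatto points and differentiation matrices, builds the right-hand side of (2.4), and advances the system by the classical fourth-order Runge–Kutta method. The second-order matrix is formed here by squaring $D^{(1)}$, one common construction; explicit entrywise formulas are given in [7], [10], [38].

```python
import numpy as np

def cheb(M, L=1.0):
    """Differentiation matrices D^(1), D^(2) on the Chebyshev-Gauss-Lobatto
    points x_i = -L cos(pi i / M), i = 0, ..., M (ascending order)."""
    i = np.arange(M + 1)
    x = -L * np.cos(np.pi * i / M)
    c = np.where((i == 0) | (i == M), 2.0, 1.0) * (-1.0) ** i
    dx = x[:, None] - x[None, :] + np.eye(M + 1)     # avoid division by zero
    D1 = np.outer(c, 1.0 / c) / dx
    D1 -= np.diag(D1.sum(axis=1))                    # 'negative sum trick' for the diagonal
    D2 = D1 @ D1                                     # second derivative by squaring D^(1)
    return x, D1, D2

def rhs(t, u, fp, Dh1, Dh2, a1, b1, a2, b2, alpha, beta, nu):
    """Right-hand side of (2.4); '*' is componentwise multiplication.
    Dh1, Dh2 are the interior blocks of D^(1), D^(2); a1, b1, a2, b2 are
    the boundary columns (the vectors a^(k), b^(k) of the text)."""
    return (-fp(u) * (Dh1 @ u) + nu * (Dh2 @ u)
            - fp(u) * (a1 * alpha(t) + b1 * beta(t))
            + nu * (a2 * alpha(t) + b2 * beta(t)))

def rk4_step(f, t, u, dt):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For the Burgers' equation of Section 4, fp is simply the identity map fp(u) = u; the interior blocks and boundary columns are extracted as Dh1 = D1[1:-1, 1:-1], a1 = D1[1:-1, 0], b1 = D1[1:-1, -1], and similarly for D2.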

If the initial condition $g(x)$ or the solution to (1.1) or (1.2) is discontinuous, it may be necessary to use filters to stabilize the resulting scheme and to enhance the accuracy of the numerical approximations. The use of filters for discontinuous problems is discussed in [39], [17], [23], [1], [15], [36], and [16]. If the initial function $g(x)$ is discontinuous we can obtain the filtered approximation
$$
g_M^\sigma(x_i) = \sum_{j=0}^{M} f_{ij}^\sigma\, g(x_j),
$$
$i = 0, 1, \ldots, M$, where $F^\sigma = \big[f_{ij}^\sigma\big]_{i,j=0}^{M}$ is the filter matrix defined by
$$
f_{ij}^\sigma = w_j \sum_{n=0}^{M} \sigma\Big(\frac{n}{M}\Big) \frac{T_n(x_i)\, T_n(x_j)}{\widetilde{\gamma}_n}.
$$
Here, $T_n(x) = \cos(n \arccos(x))$, $n = 0, 1, \ldots$, are the Chebyshev polynomials,
$$
w_j = \begin{cases} \dfrac{\pi}{2M}, & j = 0, M, \\[1ex] \dfrac{\pi}{M}, & j = 1, 2, \ldots, M-1, \end{cases}
\qquad
\widetilde{\gamma}_n = \begin{cases} \pi, & n = 0, M, \\[1ex] \dfrac{\pi}{2}, & n = 1, 2, \ldots, M-1, \end{cases}
$$
are the weights of the Chebyshev–Gauss–Lobatto quadrature formula and the normalization constants, respectively, and $\sigma(\eta)$ is a filter of order p, i.e., a real function $\sigma \in C^{p-1}(-\infty, \infty)$ with the following properties: $\sigma(\eta) = 0$ for $|\eta| > 1$; $\sigma(0) = 1$; $\sigma(1) = 0$; and $\sigma^{(m)}(0) = \sigma^{(m)}(1) = 0$ for $m = 1, 2, \ldots, p-1$, compare [39], [15], [16]. Frequently used filters are the Cesàro, raised cosine, sharpened raised cosine, Lanczos, and exponential cut-off filters, see [39], [7], [15], [16]. The family of exponential filters of order p is defined by
$$
\sigma(\eta) = \begin{cases} 1, & |\eta| \le \eta_c, \\[1ex] \exp\bigg( -\alpha \Big( \dfrac{\eta - \eta_c}{1 - \eta_c} \Big)^{\!p} \bigg), & |\eta| > \eta_c, \end{cases}
$$
where $\alpha$ is a measure of how strongly the various modes are filtered. Usually $\alpha$ is chosen so that $\sigma(1)$ is of the order of the machine accuracy eps of the computer. This leads to $\alpha = -\ln(C \cdot \mathrm{eps})$, where C is a constant of moderate size; we have chosen C = 1 in our implementation of this filter. We have found this family of filters to be quite effective in the numerical experiments reported in Sections 4 and 6.

We can also construct matrices which combine the effects of filtering and differentiation of order k. These matrices $D^{(k),\sigma} = \big[d_{ij}^{(k),\sigma}\big]_{i,j=0}^{M}$ are defined by
$$
d_{ij}^{(k),\sigma} = w_j \sum_{n=0}^{M} \sigma\Big(\frac{n}{M}\Big) \frac{T_n^{(k)}(x_i)\, T_n(x_j)}{\widetilde{\gamma}_n},
$$
compare again [15], [16]. The general system of differential equations resulting from the semidiscretization of (1.2) in space using the filtering and differentiation matrices $D^{(k),\sigma}$ takes the form
$$
\begin{cases}
u'(t) = -f'(u(t)) \cdot \widehat{D}^{(1),\sigma} u(t) + \nu \widehat{D}^{(2),\sigma} u(t) \\[1ex]
\qquad\quad -\, f'(u(t)) \cdot \big(a^{(1),\sigma} \alpha(t) + b^{(1),\sigma} \beta(t)\big) + \nu \big(a^{(2),\sigma} \alpha(t) + b^{(2),\sigma} \beta(t)\big), \\[1ex]
u(t_0) = \widehat{F}^\sigma g,
\end{cases} \tag{2.5}
$$
$t \ge t_0$, where $\widehat{F}^\sigma = \big[f_{ij}^\sigma\big]_{i,j=1}^{M-1}$, and $\widehat{D}^{(k),\sigma}$, $a^{(k),\sigma}$, and $b^{(k),\sigma}$ are defined similarly to $\widehat{D}^{(k)}$, $a^{(k)}$, and $b^{(k)}$. Observe that for $\sigma(\eta) \equiv 1$ the system (2.5) reduces to (2.4). It is also possible to use different functions $\sigma(\eta)$ for $D^{(k),\sigma}$, $k = 1, 2$, and $F^\sigma$, but it seems difficult to decide which strategy leads to the 'optimal' results. We experimented with and compared many choices, and the results of some numerical experiments are presented in Sections 4 and 6. We have introduced the small viscosity $\nu$ in (1.2) and (2.5) in physical space. Guo, Ma and Tadmor [24] consider an alternative approach, the so-called spectral vanishing viscosity method, which is implemented directly on the high modes of the computed solution in the spectral domain.
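The filter formulas translate directly into code. The sketch below (again hypothetical, not from the paper) evaluates the exponential filter and assembles the filter matrix $F^\sigma$ on $[-1, 1]$; a convenient sanity check is that with $\sigma \equiv 1$ the matrix reduces to the identity.

```python
import numpy as np

def exp_filter(eta, p=4, eta_c=0.0, eps=2.2e-16):
    """Exponential filter of order p with cutoff eta_c; alpha = -ln(C*eps)
    with C = 1, so that sigma(1) is at the level of machine accuracy."""
    alpha = -np.log(eps)
    eta = np.asarray(eta, dtype=float)
    sigma = np.ones_like(eta)
    mask = np.abs(eta) > eta_c
    sigma[mask] = np.exp(-alpha * ((np.abs(eta[mask]) - eta_c)
                                   / (1.0 - eta_c)) ** p)
    return sigma

def filter_matrix(M, p=4, eta_c=0.0):
    """Filter matrix F^sigma:
    f_ij = w_j sum_n sigma(n/M) T_n(x_i) T_n(x_j) / gamma_n."""
    i = np.arange(M + 1)
    x = -np.cos(np.pi * i / M)                       # CGL points on [-1, 1]
    w = np.full(M + 1, np.pi / M); w[[0, -1]] = np.pi / (2 * M)
    gamma = np.full(M + 1, np.pi / 2); gamma[[0, -1]] = np.pi
    sigma = exp_filter(i / M, p, eta_c)
    T = np.cos(np.outer(i, np.arccos(x))).T          # T[i, n] = T_n(x_i)
    return (T * (sigma / gamma)) @ T.T * w[None, :]
```

In the experiments of Sections 4 and 6 the exponential filter is used with p = 4 and cutoff eta_c = 2/M.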

3 Location of discontinuities in the solution and Gegenbauer reconstruction

After the approximation $u_h(x_i, t_{end})$ to the solution $u(x_i, t_{end})$ of (1.1) or (1.2) has been computed, we can compute the spectral coefficients $a_n(t_{end})$, i.e., the coefficients of the expansion of $u_h(x_i, t_{end})$ in terms of the Chebyshev polynomials $T_n(x)$,
$$
u_h(x_i, t_{end}) = \sum_{n=0}^{M} a_n(t_{end})\, T_n\Big( \frac{x_i}{L} \Big), \tag{3.1}
$$
$i = 0, 1, \ldots, M$, from the formula listed in [17],
$$
a_n(t_{end}) = \frac{2}{M \widetilde{c}_n} \sum_{j=0}^{M} \frac{1}{\widetilde{c}_j}\, u_h(x_{M-j}, t_{end}) \cos\frac{\pi j n}{M}, \tag{3.2}
$$
where $\widetilde{c}_0 = \widetilde{c}_M = 2$ and $\widetilde{c}_n = 1$ for $n = 1, 2, \ldots, M-1$. By examining these coefficients we can then find the number S and the locations of the discontinuities in the solution u to (2.5).
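A direct transcription of (3.2) might look as follows (an illustrative sketch; for large M one would use a fast cosine transform instead of the dense matrix-vector product).

```python
import numpy as np

def chebyshev_coeffs(u, M):
    """Spectral coefficients a_n, n = 0, ..., M, from the grid values
    u[i] = u_h(x_i, t_end) at x_i = -L cos(pi i / M), following (3.2)."""
    c = np.ones(M + 1); c[0] = c[-1] = 2.0
    n = np.arange(M + 1); j = np.arange(M + 1)
    C = np.cos(np.pi * np.outer(n, j) / M)     # C[n, j] = cos(pi j n / M)
    # a_n = 2 / (M c_n) * sum_j u(x_{M-j}) cos(pi j n / M) / c_j
    return (2.0 / (M * c)) * (C @ (u[::-1] / c))
```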

This can be done in many different ways; we have adopted the approach of Gottlieb, Lustman and Orszag described in [17]. In this approach the Chebyshev coefficients $a_n(t_{end})$ of (3.2) are fitted by an expression of the form
$$
a_n(t_{end}) = \sum_{s=1}^{S} B_s A_n(X_s)
$$
for $M/3 \le n \le 2M/3$, which corresponds to the spectral representation, with coefficients $A_n(X_s)$, of a sum of Heaviside functions with discontinuities at $X_s$ and discontinuity jumps $B_s$. As explained in [17], the values of $X_s$ are related to the eigenvalues of a certain summation operator and can be determined independently of $B_s$. This leads to the exact determination of the subintervals on which the discontinuities are located; the discontinuities are then assigned to the midpoints of these subintervals. As reported in [17], the number of shocks does not have to be specified in advance, and this procedure works quite well for as many as seven shocks. Once the $X_s$ are computed, the values of $B_s$ can be found, if needed, by a least squares procedure. We refer to [17] for a complete description of this process. Another approach, advocated by Gelb and Tadmor [13] in the context where Fourier spectral coefficients are given, is based on generalized conjugate partial sums whose convergence to the jump function is accelerated by so-called concentration factors. This approach was generalized in [14] to general Jacobi polynomials, which include the Chebyshev polynomials as a special case. This results in effective edge detectors, where both the locations of the discontinuities and the discontinuity jumps are recovered. We refer again to [13], [14] for details.

After the locations of the discontinuities are detected, we use Gegenbauer reconstruction as described in [19], [20], [21] to improve the accuracy of the numerical solution on the subintervals of $[-L, L]$ where the solution to (1.1) or (1.2) is smooth. This reconstruction is based on the ultraspherical or Gegenbauer polynomials $C_n^\lambda(x)$, i.e., the polynomials which are orthonormal with respect to the inner product
$$
\langle C_k^\lambda, C_n^\lambda \rangle = \frac{1}{h_n^\lambda} \int_{-1}^{1} (1 - x^2)^{\lambda - \frac{1}{2}}\, C_k^\lambda(x)\, C_n^\lambda(x)\, dx,
$$
where, for $\lambda > 0$, $h_n^\lambda$ is defined by
$$
h_n^\lambda = \pi^{\frac{1}{2}}\, C_n^\lambda(1)\, \frac{\Gamma(\lambda + \frac{1}{2})}{\Gamma(\lambda)(n + \lambda)},
$$
and normalized so that
$$
C_n^\lambda(1) = \frac{\Gamma(n + 2\lambda)}{n!\, \Gamma(2\lambda)}.
$$
They can be computed from the three-term recurrence relation
$$
\begin{cases}
(n+1)\, C_{n+1}^\lambda(x) = 2(n + \lambda)\, x\, C_n^\lambda(x) - (n + 2\lambda - 1)\, C_{n-1}^\lambda(x), \\[1ex]
C_0^\lambda(x) = 1, \qquad C_1^\lambda(x) = 2\lambda x,
\end{cases}
$$
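The three-term recurrence translates directly into code; the following sketch (ours, not the authors') evaluates $C_n^\lambda(x)$ for moderate n, with x a scalar or a NumPy array.

```python
import numpy as np

def gegenbauer(n, lam, x):
    """Evaluate the Gegenbauer polynomial C_n^lambda(x) by the
    three-term recurrence above."""
    x = np.asarray(x, dtype=float)
    C_prev = np.ones_like(x)                   # C_0
    if n == 0:
        return C_prev
    C_curr = 2.0 * lam * x                     # C_1
    for k in range(1, n):
        # (k+1) C_{k+1} = 2(k+lam) x C_k - (k+2lam-1) C_{k-1}
        C_prev, C_curr = C_curr, (2.0 * (k + lam) * x * C_curr
                                  - (k + 2.0 * lam - 1.0) * C_prev) / (k + 1.0)
    return C_curr
```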

$n = 1, 2, \ldots$; compare [2], [8]. To illustrate this procedure, let us assume that the solution to (1.1) or (1.2) is smooth on the subinterval $[a, b] \subset [-L, L]$ and, as in [19], define the local variable $\xi$ by $x = x(\xi) = \varepsilon \xi + \delta$, where $\varepsilon = (b-a)/2$, $\delta = (b+a)/2$, which transforms the interval $[-1, 1]$ onto $[a, b]$. Denote by $\widetilde{x}_i$, $i = 0, 1, \ldots, \widetilde{M}$, the Chebyshev–Gauss–Lobatto points corresponding to $[a, b]$ and compute the Chebyshev partial sum
$$
\widetilde{u}_h(\widetilde{x}_i, t_{end}) = \sum_{n=0}^{\widetilde{M}} a_n(t_{end})\, T_n\Big( \frac{\widetilde{x}_i}{L} \Big), \tag{3.3}
$$
$i = 0, 1, \ldots, \widetilde{M}$. We next compute the Gegenbauer series on the subinterval $[a, b]$ by the formula
$$
g_m^\lambda(x, t_{end}) = \sum_{l=0}^{m} \widehat{g}_\varepsilon^\lambda(l)\, C_l^\lambda\Big( \frac{x - \delta}{\varepsilon} \Big), \tag{3.4}
$$
where $\widehat{g}_\varepsilon^\lambda(l)$ are approximations to the Gegenbauer expansion coefficients $g_\varepsilon^\lambda(l)$ based on the subinterval $[a, b]$. These coefficients are defined by
$$
g_\varepsilon^\lambda(l) = \frac{1}{h_l^\lambda} \int_{-1}^{1} (1 - \xi^2)^{\lambda - \frac{1}{2}}\, C_l^\lambda(\xi)\, \widetilde{u}_h(\varepsilon \xi + \delta, t_{end})\, d\xi.
$$
The approximations to these coefficients are computed by evaluating the above integral by the Chebyshev–Gauss–Lobatto quadrature formula. This leads to
$$
\widehat{g}_\varepsilon^\lambda(l) = \frac{1}{h_l^\lambda} \sum_{j=0}^{\widetilde{M}} w_j\, (1 - \widehat{x}_j^2)^\lambda\, C_l^\lambda(\widehat{x}_j)\, \widetilde{u}_h(\varepsilon \widehat{x}_j + \delta, t_{end}), \tag{3.5}
$$
where $\widehat{x}_j$ are the Chebyshev–Gauss–Lobatto points corresponding to the interval $[-1, 1]$ and $w_j$ are the corresponding weights.
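Combining the recurrence sketch above with the quadrature (3.5) gives the following illustrative routine for the approximate Gegenbauer coefficients, with the normalization $h_l^\lambda$ computed from the Gamma-function formulas of this section. It assumes the gegenbauer helper defined earlier and is meant for moderate m and λ only; as discussed below, the procedure is sensitive to round-off errors, and for large arguments the Gamma factors overflow.

```python
from math import gamma, pi, sqrt
import numpy as np

def gegenbauer_coeffs(u_tilde, m, lam, Mt):
    """Approximate coefficients g_hat(l), l = 0, ..., m, of (3.5), from the
    values u_tilde[j] = u_tilde_h(eps * xhat_j + delta, t_end) at the
    CGL points xhat_j = -cos(pi j / Mt) of [-1, 1]."""
    xhat = -np.cos(np.pi * np.arange(Mt + 1) / Mt)
    w = np.full(Mt + 1, np.pi / Mt); w[[0, -1]] = np.pi / (2 * Mt)
    weight = np.maximum(1.0 - xhat ** 2, 0.0) ** lam   # clip round-off negatives
    ghat = np.empty(m + 1)
    for l in range(m + 1):
        # h_l = sqrt(pi) C_l^lam(1) Gamma(lam + 1/2) / (Gamma(lam) (l + lam))
        Cl1 = gamma(l + 2 * lam) / (gamma(l + 1) * gamma(2 * lam))
        h_l = sqrt(pi) * Cl1 * gamma(lam + 0.5) / (gamma(lam) * (l + lam))
        ghat[l] = (w * weight * gegenbauer(l, lam, xhat) * u_tilde).sum() / h_l
    return ghat
```

The reconstruction (3.4) at a point x is then evaluated as sum(ghat[l] * gegenbauer(l, lam, (x - delta) / eps) for l in range(m + 1)).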

We expect that $g_m^\lambda(x, t_{end})$ will be a better approximation to $u(x, t_{end})$ than $u_h(x, t_{end})$ on the intervals of smoothness of u. This expectation is supported by the result of Gottlieb and Shu [19], who proved that the Gegenbauer reconstruction
$$
\sum_{l=0}^{m} \widehat{g}_\varepsilon^\lambda(l)\, C_l^\lambda(\xi),
$$
applied to a piecewise smooth function $u(x)$ defined on the interval $[-1, 1]$, with $\lambda = m = \beta \varepsilon M$, $\beta < 2/27$, is uniformly convergent to $u(x)$ on the interval of smoothness $[a, b]$ of u as $M \to \infty$. To be more precise, the resulting error can be bounded by
$$
\max_{-1 \le \xi \le 1} \Big| u(\varepsilon \xi + \delta) - \sum_{l=0}^{m} \widehat{g}_\varepsilon^\lambda(l)\, C_l^\lambda(\xi) \Big| \le A \big( q_T^{\,\varepsilon M} + q_R^{\,\varepsilon M} \big),
$$
where
$$
q_T = \Big( \frac{27\beta}{2} \Big)^{\beta} < 1, \qquad q_R = \Big( \frac{27\varepsilon}{32\rho} \Big)^{\beta} < 1,
$$
and A grows at most as $M^{1+2\lambda}$. Here, $\rho$ is the distance from $[a, b]$ to the nearest singularity of $u(x)$ in the complex plane. The constants $q_T$ and $q_R$ correspond to the truncation and regularization errors, see [19], [20], [21].

The implementation of Gegenbauer reconstruction is quite sensitive to round-off errors, and it is not clear how to choose m and λ to obtain 'optimal' results. Gottlieb and Shu [19] recommend the values
$$
m = 0.1\, \varepsilon M, \qquad \lambda = 0.2\, \varepsilon M, \tag{3.6}
$$
and Guo, Ma, and Tadmor [24] choose $m = \lambda = 0.05\, M$, but they did not attempt to optimize these parameters. To our knowledge, the first attempt to optimize these parameters was made by Gelb [11] for Gegenbauer reconstruction where the Fourier coefficients of a smooth but non-periodic function are given. This approach was further refined in [12] by taking into account the smoothness characteristics of the function. The determination of optimal parameters based on Chebyshev spectral coefficients is discussed in [25].

[Figure 1: Errors of the Gegenbauer reconstruction with m and λ defined by (3.6) for M = 20, 40, 80, and 160 (error versus x, logarithmic vertical scale).]

To illustrate the effectiveness of Gegenbauer reconstruction we have plotted in Fig. 1 the error of this procedure for the function $f(x)$ given by
$$
f(x) = \begin{cases} \exp(x), & x \in [-1, 0], \\[1ex] \sin(\cos(x)) + 1, & x \in (0, 1], \end{cases}
$$
with the parameters m and λ defined by (3.6) for M = 20, 40, 80, and 160. The performance of this procedure could be further improved by appropriate tuning of the parameters m and λ, compare again [11], [12], [25]. Graphs of error versus M in double logarithmic scale, which illustrate the spectral convergence of the Gegenbauer reconstruction method for functions with varying degrees of smoothness, are presented in [25] and [12].


4 Numerical experiments with pseudospectral methods and Gegenbauer reconstruction

In this section we present the results of some numerical experiments for the Burgers' equation
$$
\begin{cases}
\dfrac{\partial}{\partial t} u(x,t) + u(x,t)\, \dfrac{\partial}{\partial x} u(x,t) = 0, & -L \le x \le L, \quad t \in [t_0, t_{end}], \\[1ex]
u(x, t_0) = \begin{cases} \dfrac{x}{t_0}, & |x| < \sqrt{2At_0}, \\[1ex] 0, & |x| > \sqrt{2At_0}, \end{cases} & -L \le x \le L,
\end{cases} \tag{4.1}
$$
where $A > 0$ is a parameter. The solution to this problem is the N-wave
$$
u(x,t) = \begin{cases} \dfrac{x}{t}, & |x| < \sqrt{2At}, \\[1ex] 0, & |x| > \sqrt{2At}, \end{cases}
$$
compare [40]. This solution is the limit as $\nu \to 0$ of the solutions to the problem
$$
\frac{\partial}{\partial t} u(x,t) + u(x,t)\, \frac{\partial}{\partial x} u(x,t) = \nu \frac{\partial^2}{\partial x^2} u(x,t), \tag{4.2}
$$
$-L \le x \le L$, $t \in [t_0, t_{end}]$, with appropriate initial and boundary conditions. The solution to (4.2) is given by
$$
u(x,t) = \frac{x}{t} \Bigg( 1 + \sqrt{\frac{t}{t_0}}\, \frac{e^{x^2/(4\nu t)}}{e^{A/(2\nu)} - 1} \Bigg)^{\!-1},
$$
compare again [40]. The pseudospectral Chebyshev method applied to (4.2) leads to the system of ODEs
$$
\begin{cases}
u'(t) = -u(t) \cdot \widehat{D}^{(1),\sigma} u(t) + \nu \widehat{D}^{(2),\sigma} u(t) \\[1ex]
\qquad\quad -\, u(t) \cdot \big(a^{(1),\sigma} \alpha(t) + b^{(1),\sigma} \beta(t)\big) + \nu \big(a^{(2),\sigma} \alpha(t) + b^{(2),\sigma} \beta(t)\big), \\[1ex]
u(t_0) = \widehat{F}^\sigma g,
\end{cases} \tag{4.3}
$$
$t \ge t_0$, where g corresponds to the initial function at the grid points $x_i$ and $\widehat{F}^\sigma$, $\widehat{D}^{(k),\sigma}$, $a^{(k),\sigma}$, and $b^{(k),\sigma}$ are defined in Section 2.
0

1.5

10

1 0.5 0 −0.5 −1

10

−1 −1.5 −5

0 x

5

0

10

−2

10 −2

10

−4

10

−6

10

−5

−3

0 x

10

5

16

32 M

64

Figure 2: Numerical results for M = 64 (left graphs) and error versus M (right graph). t ≥ t0 , where g correspond to initial function at the grid points xi and Fb σ , c(k),σ , a(k),σ , and b(k),σ are defined in Section 2. We have solved the probD lem (4.3) with t0 = 1, tend = 2 and A = 1 by pseudospectral Chebyshev method (2.5). This system was integrated by the classical Runge–Kutta method of order four. The efficiency of numerical computations could be increased somewhat by the use of total–variation–diminishing (TVD) or strong stability–preserving (SSP) Runge–Kutta methods (compare [22], [35]). However, time integration is not the main focus of this paper and we do not pursue these issues in more detail here. After integrating (2.5) in time, we then computed Chebyshev spectral coefficients (3.2) which were used to locate the discontinuities in the solution

12

and for Gegenbauer reconstruction as described in Section 3. In Fig. 2 (left graphs) we present the results for M = 64, ν = 1/M , time step ∆t = 10−3 , D(1),σ = D(1) , D(2),σ = D(2) , where we filter the initial condition using F σ corresponding to the exponential filter with p = 4 and ηc = 2/M . The parameters used for Gegenbauer reconstruction were chosen by trial and error and they are m = 1 and λ = 4 for left and right subintervals and m = 2 and λ = 5 for the middle subinterval. In the left upper graph of this figure the exact solution is plotted by thick solid line, the numerical solution before postprocessing and the not–a–knot spline fit of it by black square and thin solid line and the numerical solution obtained by Gegenbauer reconstruction by white circle. In the left lower graph we have plotted the not–a–knot spline fit of the error before postprocessing by dashed line and the error after postprocessing by solid line. These global errors are defined as norms of the differences between the exact solution to (4.2) and the corresponding numerical solutions. We can see that Gegenbauer postprocessing is very effective in reconstructing the solution to (4.1) on the subintervals of smoothness. In Fig. 2 (right graph) we have plotted in double logarithmic scale error versus M (black circles connected by thin solid line) together with the spline fit (thick solid line) generated using program csaps.m from Matlab Spline Toolbox (Version 3.2.1). This graph illustrates spectral convergence of the overall numerical scheme. The parameters m and λ of Gegenbauer reconstruction were chosen again by trial and error for different values of M and different subintervals. The systematic approach to choosing these parameters is the subject of current research, compare [12].

5

Waveform relaxation methods

Waveform relaxation method is an iterative technique to compute successive approximations u(k) (t), k = 0, 1, . . ., to the solution u(t) of (2.5). To be more specific, we use previous iterate u(k) (t) in the argument of the function f 0 and the next iterate u(k+1) (t) elsewhere. This has the effect of replacing the

13

nonlinear system (2.5) by a sequence of linear problems of the form  d (k+1)  c(1),σ u(k+1) (t) + ν D c(2),σ u(k+1) (t)   u (t) = −f 0 (u(k) (t)) · D   dt    ³ ´ ³ ´ 0 (k) (1),σ (1),σ (2),σ (2),σ −f (u (t)) · a α(t) + b β(t) + ν a α(t) + b β(t) ,         u(k+1) (t ) = Fb σ g,

(5.1)

0

t ≥ t0 , k = 0, 1, . . .. We can also consider more general schemes correspondc(l),σ , i.e., ing to appropriate splittings of matrices D (l),σ

c(l),σ = D c D 1

(l),σ

c +D 2

,

l = 1, 2.

For example, choosing Gauss–Jacobi or block Gauss–Jacobi splittings (l),σ

(l),σ

c D 1

³

´

c(l),σ , . . . , D c(l),σ , = diag D 1,1 1,s

l = 1, 2, (l),σ

c c where D of dimension mj , 1,j , j = 1, 2, . . . , s, are diagonal blocks of D1 m1 + · · · + ms = M − 1, leads to the sequence of linear problems  ³ ´ d (k+1)  0 (k)  c(1),σ u(k+1) (t) + D c(1),σ u(k) (t)  u (t) = −f (u (t)) · D  1 2   dt    ´ ³    c(2),σ u(k+1) (t) + D c(2),σ u(k) (t)  +ν D 1

2

³ ´ ³ ´    0 (k) (1),σ (1),σ (2),σ (2),σ  − f (u (t)) · a α(t) + b β(t) + ν a α(t) + b β(t) ,          u(k+1) (t0 ) = Fb σ g,

(5.2)

t ≥ t0 , k = 0, 1, . . ., where u(0) (t) is an arbitrary starting function usually c(l),σ chosen as u(0) (t) = Fb σ g, t ≥ t0 . Observe that the form of the matrices D effectively decouples the resulting system (5.2) into s subsystems which can be assigned to different processors for efficient solution in a parallel computing environment. For any function u : [t0 , T ] → RM −1 define the supremum norm on the interval [t0 , T ] by kuk[t0 ,T ] = sup{ku(t)k : t ∈ [t0 , T ]}, where k · k is any norm in RM −1 . To study boundedness of the sequence u(k) defined by (5.2) we assume that kf 0 (u)k ≤ F (Q) for kuk ≤ Q, 14

(5.3)

where F (Q) is a constant depending on Q. We also introduce the following notation: µi (T ) = ka(i),σ α + b(i),σ βk[t0 ,T ] , i = 1, 2, (1),σ

and

c Ci = Ci (Q, ν, σ) = F (Q)kD i

(2),σ

c k + νkD i

k,

i = 1, 2,

C3 = C3 (Q, ν, T ) = F (Q)µ1 (T ) + νµ2 (T ). We can demonstrate that the sequence u(k) defined by (5.2) is uniformly bounded for sufficiently small values of T − t0 . To be more precise we have the following theorem. Theorem 5.1 Assume that the function f 0 satisfies (5.3) and the starting function u(0) satisfies ku(0) k[t0 ,T ] ≤ Q, for some constant Q ≥ 2kFb σ gk + 1. Then

ku(k) k[t0 ,T ] ≤ Q,

(5.4)

k = 0, 1, . . ., if T is chosen so that (T − t0 )C1
kFb σ gk + 1 and use the exponential norm n

o

kukα[t0 ,T ] := sup ku(t)ke−α(t−t0 ) : t ∈ [t0 , T ] ,

where α is chosen so that

(5.11)

n

o

α > max C1 , C3 /(kFb σ gk + 1), (C1 + C2 )Q/(Q − kFb σ gk − 1) .

We have the following theorem.

Theorem 5.3 Assume that the function $f'$ satisfies (5.3) and that the starting function $u^{(0)}$ satisfies
$$
\| u^{(0)}(t) \| \le Q, \qquad t \in [t_0, T], \tag{5.12}
$$
with $Q > \|\widehat{F}^\sigma g\|$. Then
$$
\| u^{(k)} \|^\alpha_{[t_0,T]} \le Q \tag{5.13}
$$
for $k = 0, 1, \ldots$.

Proof: Note that (5.12) implies (5.13) with k = 0. Assume (5.13) holds for k. Integrating (5.2) from $t_0$ to t, then taking norms $\|\cdot\|$ on both sides of the resulting equation and using (5.3), we obtain
$$
\begin{aligned}
\| u^{(k+1)}(t) \| &\le \| \widehat{F}^\sigma g \| + C_1 \int_{t_0}^{t} \| u^{(k+1)}(s) \|\, e^{-\alpha(s-t_0)} e^{\alpha(s-t_0)}\, ds \\
&\quad + C_2 \int_{t_0}^{t} \| u^{(k)}(s) \|\, e^{-\alpha(s-t_0)} e^{\alpha(s-t_0)}\, ds + C_3 \int_{t_0}^{t} ds \\
&\le \| \widehat{F}^\sigma g \| + C_1 \| u^{(k+1)} \|^\alpha_{[t_0,T]} \int_{t_0}^{t} e^{\alpha(s-t_0)}\, ds
 + C_2 \| u^{(k)} \|^\alpha_{[t_0,T]} \int_{t_0}^{t} e^{\alpha(s-t_0)}\, ds + C_3 (t - t_0) \\
&\le \| \widehat{F}^\sigma g \| + \frac{C_1}{\alpha} \| u^{(k+1)} \|^\alpha_{[t_0,T]}\, e^{\alpha(t-t_0)}
 + \frac{C_2}{\alpha} \| u^{(k)} \|^\alpha_{[t_0,T]}\, e^{\alpha(t-t_0)} + C_3 (t - t_0).
\end{aligned}
$$
Hence,
$$
\| u^{(k+1)}(t) \|\, e^{-\alpha(t-t_0)} \le \big( \| \widehat{F}^\sigma g \| + C_3 (t - t_0) \big) e^{-\alpha(t-t_0)}
 + \frac{C_1}{\alpha} \| u^{(k+1)} \|^\alpha_{[t_0,T]} + \frac{C_2}{\alpha} \| u^{(k)} \|^\alpha_{[t_0,T]}.
$$
Observe that since $\alpha > C_3/(\|\widehat{F}^\sigma g\| + 1)$ we have
$$
\big( \| \widehat{F}^\sigma g \| + C_3 (t - t_0) \big) e^{-\alpha(t-t_0)} \le \| \widehat{F}^\sigma g \| + 1,
$$
and it follows that
$$
\alpha\, \| u^{(k+1)} \|^\alpha_{[t_0,T]} \le \alpha \big( \| \widehat{F}^\sigma g \| + 1 \big) + C_1 \| u^{(k+1)} \|^\alpha_{[t_0,T]} + C_2 Q.
$$
This leads to
$$
\| u^{(k+1)} \|^\alpha_{[t_0,T]} \le \frac{ \alpha \big( \| \widehat{F}^\sigma g \| + 1 \big) + C_2 Q }{ \alpha - C_1 } \le Q.
$$

The last inequality follows from the definition of α. This completes the proof. □

Finally, we study the convergence of $u^{(k)}$ to the solution u of (2.5) in the exponential norm (5.11) on an arbitrary interval $[t_0, T]$. Let $e^{(k)}(t) = u^{(k)}(t) - u(t)$ and define the constants
$$
\widetilde{Q} = Q\, e^{\alpha (T - t_0)},
$$
$$
\widetilde{C}_i = \widetilde{C}_i(\widetilde{Q}, \nu, \sigma) = F(\widetilde{Q}) \| \widehat{D}_i^{(1),\sigma} \| + \nu \| \widehat{D}_i^{(2),\sigma} \|, \qquad i = 1, 2,
$$
$$
\widetilde{C}_3 = \widetilde{C}_3(\widetilde{Q}, \nu, T) = F(\widetilde{Q})\, \mu_1(T) + \nu\, \mu_2(T),
$$
$$
\widetilde{C}_4 = \widetilde{C}_4(L, \widetilde{Q}, \sigma, T) = L_{\widetilde{Q}} \big( \| \widehat{D}_1^{(1),\sigma} \| + \| \widehat{D}_2^{(1),\sigma} \| + \mu_1(T) \big).
$$
We have the following theorem.

Theorem 5.4 Assume that the function $f'$ satisfies (5.3) and (5.6). Then
$$
\| e^{(k)} \|^\alpha_{[t_0,t]} \le e^{\widetilde{C}_1 (t - t_0)}\, \frac{\big( \widetilde{C}_5 (t - t_0) \big)^k}{k!}\, \widetilde{E}_0, \tag{5.14}
$$
$k = 0, 1, \ldots$, $t \in [t_0, T]$, where $\widetilde{C}_5 = \widetilde{C}_2 + \widetilde{C}_4$ and $\widetilde{E}_0$ is some constant such that $\| e^{(0)} \|^\alpha_{[t_0,T]} \le \widetilde{E}_0$.

Proof: Observe that $\| u^{(k)} \|^\alpha_{[t_0,T]} \le Q$ implies that $\| u^{(k)}(t) \| \le \widetilde{Q}$. Subtracting (5.2) from (2.5) and using (5.13) and (5.3) with Q replaced by $\widetilde{Q}$, after some computations we obtain
$$
\begin{aligned}
\| e^{(k+1)}(t) \| &\le \widetilde{C}_1 \int_{t_0}^{t} \| e^{(k+1)}(s) \|\, e^{-\alpha(s-t_0)} e^{\alpha(s-t_0)}\, ds
 + \widetilde{C}_5 \int_{t_0}^{t} \| e^{(k)}(s) \|\, e^{-\alpha(s-t_0)} e^{\alpha(s-t_0)}\, ds \\
&\le \widetilde{C}_1\, e^{\alpha(t-t_0)} \int_{t_0}^{t} \| e^{(k+1)} \|^\alpha_{[t_0,s]}\, ds
 + \widetilde{C}_5\, e^{\alpha(t-t_0)} \int_{t_0}^{t} \| e^{(k)} \|^\alpha_{[t_0,s]}\, ds,
\end{aligned}
$$
$k = 0, 1, \ldots$, $t \in [t_0, T]$. Hence,
$$
\| e^{(k+1)} \|^\alpha_{[t_0,t]} \le \widetilde{C}_1 \int_{t_0}^{t} \| e^{(k+1)} \|^\alpha_{[t_0,s]}\, ds + \widetilde{C}_5 \int_{t_0}^{t} \| e^{(k)} \|^\alpha_{[t_0,s]}\, ds.
$$
The theorem now follows using exactly the same arguments as in the proof of Theorem 5.2. □

Similarly as before, the error bound in Theorem 5.4 can be replaced by
$$
\| e^{(k)} \|^\alpha_{[t_0,t]} \le \frac{ \widetilde{C}_5^{\,k}\, \widetilde{E}_0 }{ (k-1)! } \int_0^{t-t_0} s^{k-1} e^{\widetilde{C}_1 s}\, ds,
$$
$k = 1, 2, \ldots$, expressed in terms of the integral of the function $s^{k-1} \exp(\widetilde{C}_1 s)$, which is a sharper estimate than (5.14).

6 Numerical experiments with waveform relaxation methods and Gegenbauer reconstruction

In this section we will again use the Burgers' equations (4.1) and (4.2) as test problems for waveform relaxation. The sequence of linear systems of ODEs corresponding to (5.2) now takes the form
$$
\begin{cases}
\dfrac{d}{dt} u^{(k+1)}(t) = -u^{(k)}(t) \cdot \big( \widehat{D}_1^{(1),\sigma} u^{(k+1)}(t) + \widehat{D}_2^{(1),\sigma} u^{(k)}(t) \big) \\[1ex]
\qquad\quad +\, \nu \big( \widehat{D}_1^{(2),\sigma} u^{(k+1)}(t) + \widehat{D}_2^{(2),\sigma} u^{(k)}(t) \big) \\[1ex]
\qquad\quad -\, u^{(k)}(t) \cdot \big( a^{(1),\sigma} \alpha(t) + b^{(1),\sigma} \beta(t) \big) + \nu \big( a^{(2),\sigma} \alpha(t) + b^{(2),\sigma} \beta(t) \big), \\[1ex]
u^{(k+1)}(t_0) = \widehat{F}^\sigma g,
\end{cases} \tag{6.1}
$$
$t \ge t_0$, $k = 0, 1, \ldots$.
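The structure of a waveform relaxation sweep is easy to see in code. The sketch below is our simplified illustration, not the authors' implementation: it realizes the unsplit iteration (5.1) for f'(u) = u, stores each waveform on a uniform time grid, and advances the linear problem of every sweep by the implicit Euler method for brevity. The paper instead uses the Gauss–Jacobi splitting of (6.1), which replaces the dense solves below by s independent block solves, and the fourth-order Gauss method (6.2) described next.

```python
import numpy as np

def waveform_relaxation(u0, D1, D2, a1, b1, a2, b2, alpha, beta, nu,
                        t0, tend, dt, sweeps=3):
    """Sketch of the iteration (5.1) for f'(u) = u (Burgers' equation).
    D1, D2 are the interior differentiation blocks; each sweep solves a
    linear ODE, advanced here by implicit Euler."""
    n = int(round((tend - t0) / dt))
    t = t0 + dt * np.arange(n + 1)
    m = u0.size
    I = np.eye(m)
    U = np.tile(u0, (n + 1, 1))        # starting waveform u^(0)(t) = F^sigma g
    for _ in range(sweeps):
        V = np.empty_like(U)
        V[0] = u0
        for j in range(n):
            w = U[j + 1]               # previous iterate u^(k)(t_{j+1})
            A = -w[:, None] * D1 + nu * D2           # frozen linear operator
            r = (-w * (a1 * alpha(t[j + 1]) + b1 * beta(t[j + 1]))
                 + nu * (a2 * alpha(t[j + 1]) + b2 * beta(t[j + 1])))
            # implicit Euler step: (I - dt A) v_{j+1} = v_j + dt r
            V[j + 1] = np.linalg.solve(I - dt * A, V[j] + dt * r)
        U = V
    return t, U
```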

[Figure 3: Numerical results after three iterations of the Gauss–Jacobi waveform relaxation method with M = 64.]

To integrate these systems in time we apply the implicit Runge–Kutta method of order four based on the Gauss–Legendre quadrature formula [6]. This method can be represented by the Butcher tableau
$$
\begin{array}{c|cc}
\dfrac{3-\sqrt{3}}{6} & \dfrac{1}{4} & \dfrac{3-2\sqrt{3}}{12} \\[1.5ex]
\dfrac{3+\sqrt{3}}{6} & \dfrac{3+2\sqrt{3}}{12} & \dfrac{1}{4} \\[1.5ex]
\hline
 & \dfrac{1}{2} & \dfrac{1}{2}
\end{array} \tag{6.2}
$$
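Applied within a waveform relaxation sweep, where the coefficient matrix is known from the previous iterate, one step of the method (6.2) for a linear system u' = Au + r(t) requires the solution of a coupled pair of stage equations; the following sketch (hypothetical code, with A frozen over the step) shows the computation. Since the method is A-stable, the step size is restricted by accuracy rather than stability, which is what permits the much larger time steps used in this section.

```python
import numpy as np

def gauss4_step(A, r, t, u, dt):
    """One step of the two-stage Gauss method (6.2) for u' = A u + r(t)."""
    s3 = np.sqrt(3.0)
    c = np.array([0.5 - s3 / 6.0, 0.5 + s3 / 6.0])
    a = np.array([[0.25, 0.25 - s3 / 6.0],
                  [0.25 + s3 / 6.0, 0.25]])
    b = np.array([0.5, 0.5])
    m = u.size
    I = np.eye(m)
    # stage equations: K_i = A (u + dt * sum_j a_ij K_j) + r(t + c_i dt)
    M = np.block([[I - dt * a[0, 0] * A, -dt * a[0, 1] * A],
                  [-dt * a[1, 0] * A, I - dt * a[1, 1] * A]])
    rhs = np.concatenate([A @ u + r(t + c[0] * dt),
                          A @ u + r(t + c[1] * dt)])
    K = np.linalg.solve(M, rhs).reshape(2, m)
    return u + dt * (b[0] * K[0] + b[1] * K[1])
```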

We have solved problem (5.2) with $t_0 = 1$, $t_{end} = 2$ and $A = 1$ by the waveform relaxation method corresponding to the Gauss–Jacobi splitting of the matrices $\widehat{D}^{(l),\sigma}$, $l = 1, 2$. We then computed the Chebyshev spectral coefficients (3.2), which were used to locate the discontinuities in the solution and for Gegenbauer reconstruction as described in Section 3. In Fig. 3 (left graphs) we present the results of numerical experiments after three iterations, i.e., solving the system (6.1) for k = 0, 1 and 2, for M = 64, $\nu = 1/M$ and time step $\Delta t = 10^{-2}$, which is an order of magnitude larger than that used in Section 4 for the explicit Runge–Kutta method of order four. The matrices $\widehat{D}_1^{(l),\sigma}$, $\widehat{D}_2^{(l),\sigma}$ and $F^\sigma$ correspond to the exponential filter with p = 4 and $\eta_c = 2/M$. The parameters used for Gegenbauer reconstruction are m = 1 and λ = 3. As in Section 4, in the upper graph of this figure the exact solution is plotted by a thick solid line, the numerical solution before postprocessing and its not-a-knot spline fit by black squares and a thin solid line, and the numerical solution obtained by Gegenbauer reconstruction by white circles. In the lower graph we have plotted the not-a-knot spline fit of the error before postprocessing by a dashed line and the error after postprocessing by a solid line. These errors are defined as in Section 4. In Fig. 3 (right graph) we have plotted, on a double logarithmic scale, the error versus M (black circles connected by a thin solid line) together with a spline fit (thick solid line) generated using the program csaps.m from the Matlab Spline Toolbox (Version 3.2.1). This graph illustrates the spectral convergence of the overall numerical scheme. As in Section 4, the parameters m and λ of Gegenbauer reconstruction were chosen by trial and error for different values of M and different subintervals. Similar results are obtained if BDF3, the backward differentiation method of order three, is used with the same time step for integrating the system (6.1).


7 Concluding remarks

We have investigated Chebyshev spectral collocation and waveform relaxation methods for nonlinear conservation laws. The locations of the discontinuities in the solution are determined by fitting the spectral coefficients corresponding to the numerical solution by the spectral representation of a sum of Heaviside functions, and the numerical solution is then enhanced on the intervals of smoothness by Gegenbauer reconstruction. The systems of differential equations resulting from the application of spectral collocation methods are solved by an explicit Runge–Kutta method of order four, which requires rather small time steps for stable integration. The main advantage of waveform relaxation is that we can replace these nonlinear systems of ODEs by a sequence of linear problems which can then be integrated effectively by A-stable implicit Runge–Kutta methods or A(α)-stable backward differentiation methods. This allows much larger time steps than those used for explicit methods, as confirmed by the numerical experiments presented in Sections 4 and 6. Another advantage of waveform relaxation is that it can be applied in a parallel computing environment. Future work will address the numerical solution of nonlinear conservation laws and Gegenbauer reconstruction in two or three space dimensions, and the implementation of waveform relaxation methods with an adaptive window control strategy. Some progress on Gegenbauer reconstruction in two dimensions for rectangular domains using a tensor product approach has been reported in [30]. The theoretical background for the adaptive window strategy was formulated in [5].

Acknowledgements. The authors wish to express their gratitude to the anonymous referees for their useful comments.

References

[1] S. Abarbanel, D. Gottlieb and E. Tadmor, Spectral methods for discontinuous problems. In: Numerical Methods for Fluid Dynamics II. Proceedings of the 1985 Conference on Numerical Methods in Fluid Dynamics (K.W. Morton and M.J. Baines, eds.), pp. 129–153, Clarendon Press, Oxford 1986.

[2] M. Abramowitz and I.S. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, National Bureau of Standards, Washington 1972.

[3] K. Burrage, Parallel and Sequential Methods for Ordinary Differential Equations, Oxford University Press, Oxford 1995.

[4] K. Burrage, Z. Jackiewicz, S.P. Nørsett and R. Renaut, Preconditioning waveform relaxation iterations for differential systems, BIT 36(1996), 54–76.

[5] K. Burrage, Z. Jackiewicz and B.D. Welfert, Spectral approximation of time windows in the solution of dissipative linear differential equations, submitted to IMA J. Numer. Anal.

[6] J.C. Butcher, The Numerical Analysis of Ordinary Differential Equations. Runge–Kutta and General Linear Methods, John Wiley & Sons, Chichester, New York 1987.

[7] C. Canuto, M.Y. Hussaini, A. Quarteroni and T.A. Zang, Spectral Methods in Fluid Mechanics, Springer–Verlag, New York, Berlin, Heidelberg 1988.

[8] P.J. Davis, Methods of Numerical Integration, Academic Press, Ontario, San Diego, New York 1984.

[9] K. Dekker and J.G. Verwer, Stability of Runge–Kutta Methods for Stiff Nonlinear Differential Equations, North–Holland, Amsterdam, New York, Oxford 1984.

[10] B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge University Press, Cambridge 1996.

[11] A. Gelb, On the reduction of round-off error for the Gegenbauer reconstruction method, J. Sci. Comput. 20(2004), 433–459.

[12] A. Gelb and Z. Jackiewicz, Determining analyticity for parameter optimization of the Gegenbauer reconstruction method, submitted to SIAM J. Sci. Comput.

[13] A. Gelb and E. Tadmor, Detection of edges in spectral data, Appl. Comput. Harmon. Anal. 7(1999), 101–135.

[14] A. Gelb and E. Tadmor, Detection of edges in spectral data. II. Nonlinear enhancement, SIAM J. Numer. Anal. 38(2000), 1389–1408.

[15] D. Gottlieb and J.S. Hesthaven, Spectral methods for hyperbolic problems, J. Comput. Appl. Math. 128(2001), 83–131.

[16] D. Gottlieb and J.S. Hesthaven, Spectral Approximation of Partial Differential Equations, manuscript.

[17] D. Gottlieb, L. Lustman and S.A. Orszag, Spectral calculations of one-dimensional inviscid compressible flows, SIAM J. Sci. Stat. Comput. 2(1981), 296–310.

[18] D. Gottlieb and S.A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, Society for Industrial and Applied Mathematics, Philadelphia 1977.

[19] D. Gottlieb and C.W. Shu, On the Gibbs phenomenon IV: Recovering exponential accuracy in a subinterval from a Gegenbauer partial sum of a piecewise analytic function, Math. Comp. 64(1995), 1081–1095.

[20] D. Gottlieb and C.W. Shu, On the Gibbs phenomenon and its resolution, SIAM Rev. 39(1997), 644–668.

[21] D. Gottlieb and C.W. Shu, A general theory for the resolution of the Gibbs phenomenon, Atti dei Convegni Lincei 147(1998), 39–48.

[22] S. Gottlieb and C.W. Shu, Total-variation-diminishing Runge–Kutta schemes, Math. Comp. 67(1998), 73–85.

[23] D. Gottlieb and E. Tadmor, Recovering pointwise values of discontinuous data within spectral accuracy. In: Progress and Supercomputing in Fluid Dynamics. Proceedings of a 1984 U.S.–Israel Workshop (E.M. Murman and S.S. Abarbanel, eds.), pp. 357–375, Birkhäuser, Boston 1985.

[24] B.Y. Guo, H.P. Ma and E. Tadmor, Spectral vanishing viscosity method for nonlinear conservation laws, SIAM J. Numer. Anal. 39(2001), 1254–1268.

[25] Z. Jackiewicz, Determination of optimal parameters for the Chebyshev–Gegenbauer reconstruction method, SIAM J. Sci. Comput. 25(2003/2004), 1187–1198.

[26] Z. Jackiewicz, M. Kwapisz and E. Lo, Waveform relaxation methods for functional differential systems of neutral type, J. Math. Anal. Appl. 207(1997), 255–285.

[27] Z. Jackiewicz, B. Owren and B.D. Welfert, Pseudospectra of waveform relaxation operators, Comput. Math. Appl. 36(1998), 67–85.

[28] Z. Jackiewicz and B.D. Welfert, Stability of Gauss–Radau pseudospectral approximations of the one-dimensional wave equation, J. Sci. Comput. 18(2003), 287–313.

[29] J. Janssen, Acceleration of waveform relaxation methods for linear ordinary and partial differential equations, Ph.D. Thesis, Department of Computer Science, Katholieke Universiteit Leuven, 1997.

[30] J.H. Jung and B.D. Shizgal, Inverse polynomial reconstruction of two dimensional Fourier images, submitted to J. Sci. Comput.

[31] B. Leimkuhler, Estimating waveform relaxation convergence, SIAM J. Sci. Comput. 14(1993), 872–889.

[32] Y. Maday, S.M. Ould Kaber and E. Tadmor, Legendre pseudospectral viscosity method for nonlinear conservation laws, SIAM J. Numer. Anal. 30(1993), 321–342.

[33] U. Miekkala and O. Nevanlinna, Convergence of dynamic iteration methods for initial value problems, SIAM J. Sci. Stat. Comput. 8(1987), 459–482.

[34] O. Nevanlinna, Remarks on Picard–Lindelöf iteration, Part I, BIT 29(1989), 328–346.

[35] R. Spiteri and S. Ruuth, A new class of optimal high-order strong-stability-preserving time discretization methods, SIAM J. Numer. Anal. 40(2002), 469–491.

[36] E. Tadmor and J. Tanner, Adaptive mollifiers for high resolution recovery of piecewise smooth data from its spectral information, Found. Comput. Math. 2(2002), 155–189.

[37] L.N. Trefethen, Pseudospectra of linear operators, SIAM Rev. 39(1997), 383–406.

[38] L.N. Trefethen, Spectral Methods in MATLAB, Society for Industrial and Applied Mathematics, Philadelphia 2000.

[39] H. Vandeven, Family of spectral filters for discontinuous problems, J. Sci. Comput. 6(1991), 159–192.

[40] G.B. Whitham, Linear and Nonlinear Waves, John Wiley & Sons, New York 1999.

[41] B. Zubik–Kowal, Chebyshev pseudospectral method and waveform relaxation for differential and differential–functional parabolic equations, Appl. Numer. Math. 34(2000), 309–328.