mesh with an average of 865 nodes to the nest mesh with an average of 2498 nodes. We note ..... maxu = 1:06; (c) & (d) Mesh and solution (resp.) at nal time, t ...
Report no. 95/24
Adaptive Lagrange-Galerkin Methods for Unsteady Convection-Dominated Diusion Problems Paul Houston
Endre Suli
In this paper we derive an a posteriori error estimate in the L2(L2 ) norm for the Lagrange-Galerkin discretisation of the unsteady twodimensional convection-diusion problem. The proof of the error estimate is based on so-called strong stability estimates of an associated backward dual problem, together with the Galerkin orthogonality of the nite element method. Based on this a posteriori error estimate, we design an adaptive algorithm for determining both the space and time discretisation parameters in such a way as to yield global error control in the L2(L2 ) norm with respect to a pre-determined tolerance. Moreover, the reliability and eciency of this adaptive strategy are numerically veri ed on test problems with known analytic solutions. Key words and phrases: A posteriori error analysis, Lagrange-Galerkin nite element methods, adaptive algorithms, reliability, eciency The rst author would like to acknowledge the nancial support of the EPSRC. The work reported here forms part of the research programme of the Oxford{Reading Institute for Computational Fluid Dynamics. Oxford University Computing Laboratory Numerical Analysis Group Wolfson Building Parks Road Oxford, England OX1 3QD
May, 1996
2
Contents
1 2 3 4
Introduction Notation and basic de nitions Model problem and discretisation A posteriori error analysis 4.1 4.2 4.3 4.4
Error representation : : : : : : : : : : : : : : : : : : : : Interpolation/projection estimates for the dual problem : Strong stability of the dual problem : : : : : : : : : : : : Completion of the proof of the a posteriori error estimate
5 Computational implementation
5.1 Adaptive algorithm : : : : : : : : : : : : : : : : : : 5.2 Mesh modi cation strategy : : : : : : : : : : : : : : 5.3 Estimation of the error constants : : : : : : : : : : 5.3.1 Estimation of C t and C inv : : : : : : : : : : 5.3.2 Estimation of the interpolation constants : : 5.3.3 Estimation of the strong stability constants
: : : : : :
: : : : : :
: : : : : :
: : : :
: : : :
: : : :
: : : :
: : : :
: : : : : :
: : : : : :
: : : : : :
: : : : : :
: : : : : :
3 6 7 11
11 13 23 25
28
29 31 32 32 32 33
6 Numerical experiments
35
7 Conclusions References
49 50
Appendix
53
A Trace inequality B Inverse estimate
53 57
6.1 Experiment 1: Boundary and internal layer problem : : : : : : : : 35 6.2 Experiment 2: Rotating cone problem : : : : : : : : : : : : : : : : 39 6.3 Experiment 3: Rotating cylinder problem : : : : : : : : : : : : : : 43
3
1 Introduction The modeling of convection-diusion processes is a central problem in elds such as meteorology, oil reservoir simulation, aerodynamics, physiology and many others. In many practical applications convection essentially dominates diusion, and although the governing partial dierential equation is parabolic, it displays several characteristics of hyperbolic problems. Moreover, the solutions to these problems typically exhibit very isolated phenomena, such as internal and boundary layers. In order to accurately model the solution to these problems, with the minimum computational eort, it is desirable to combine the use of an accurate numerical method which exploits the `nearly' hyperbolic nature of these problems with an adaptive mesh strategy that is capable of automatically generating a mesh to resolve such local features of the solution. We shall focus on the development of adaptive Lagrange-Galerkin nite element methods for time-dependent convection (dominated) diusion problems. The Lagrange-Galerkin method is based on combining the method of characteristics with the Galerkin nite element method. The idea behind this technique is to rewrite the governing equations in terms of Lagrangian coordinates as de ned by the particle trajectories (or characteristics) associated with the problem. The resulting equations are then projected onto a nite dimensional subspace (see Bercovier et al. [4],[5], Douglas & Russell [9], Lesaint [28], Pironneau [30] and Suli [33],[34], for example). The advantages of the method are its high accuracy,
exibility of implementation and good stability properties. In this paper we shall derive a global a posteriori error estimate in the L2 (L2 ) norm for the Lagrange-Galerkin discretisation of the unsteady (linear) convection-diusion problem in two space dimensions. The error estimate will be constructed through the use of the residual of the underlying partial dierential equation following the general approach developed by C. Johnson and his coworkers (see [12],[13],[14],[15],[16],[17],[21],[24],[25],[26], for example). We begin with an informal outline of this approach. The two main objectives of our work are reliability and eciency. By reliability, we mean that at the end of a computation, the error calculated with respect to some norm, k k, is less than some prescribed tolerance, TOL, i.e. kek TOL: (1.1) Further, by eciency, we mean that (1.1) is achieved with the minimum of computational eort; this can be thought of as minimising the total number of degrees of freedom (nodes) in the computational mesh. To realise these objectives we need to derive sharp a posteriori error estimates of the form kek E1(uh; h; p; f ); (1.2) where E1(uh; h; p; f ) is a computable global error estimator that depends on the numerical solution uh, the mesh size h, the degree p of the approximating poly-
4 nomial and the data f of the problem. Thus, reliability can be achieved by designing the computational mesh in such a way that
E1(uh; h; p; f ) TOL:
(1.3)
To prove eciency (in a weak sense) C. Johnson et al. rely on sharp a priori error estimates of the form
ku uhk E2 (u; h);
(1.4)
where E2(u; h) depends on the exact solution u and the mesh size h. For elliptic and parabolic problems they prove that the a posteriori error estimate, E1(uh; h; p; f ), may be bounded above by a constant times the a priori estimate, E2(u; h) (see Johnson [24], Eriksson & Johnson [13] and Eriksson et al. [12], for example). Hence, it follows that by decreasing the mesh size it is possible to realize the condition (1.3) under some assumptions on the nature of the exact solution u. The basic structure of the proof of an a posteriori error estimate of the form (1.2) is as follows: 1. Representation of the error in terms of the residual of the nite element approximation and the solution of the dual problem. 2. Use of Galerkin orthogonality. 3. Local interpolation (projection) error estimates for the solution of the dual problem. 4. Strong stability estimates for the dual problem. We shall now illustrate each of the above steps for an abstract problem. Suppose that we have the following continuous problem in variational form: nd u 2 V , such that (Lu; v) = (f; v) 8v 2 V;
(1.5)
where L is a continuous linear operator and V is a Hilbert space. The discrete analogue of (1.5) is given by: nd uh 2 V h V such that (Luh; vh) = (f; vh) 8vh 2 V h;
(1.6)
where V h is some nite element space. We de ne the residual, r, to our problem by r = f Luh. Hence, from (1.6) we observe that the residual is orthogonal to the nite element space V h (we refer to this property as Galerkin orthogonality).
5 To prove an a posteriori error estimate in a norm k k related to the inner product (; ) (i.e. for v 2 V , kvk2 = (v; v)), we de ne 2 V to be the solution of the dual problem: nd 2 V such that (L; v) = (u uh; v) 8v 2 V;
(1.7)
where L is the adjoint operator of L. Thus, using (1.7), the de nition of L and (1.5), we have the following error representation:
ku uhk2 = (u uh; u uh) = (u uh; L) = (L(u uh); ) = (r; ):
(1.8)
Using the Galerkin orthogonality inherent in the nite element method, we have
ku uhk2 = (r; );
(1.9)
where is any element of V h: we choose to be the interpolant of , although for certain problems more convenient de nitions of may be appropriate. Indeed, in Section 4 we shall de ne to be the L2 -projection of . Applying the Cauchy-Schwarz inequality to (1.9), we have
ku uhk2 khsrkkh s( )k;
(1.10)
where h is a mesh function. By standard interpolation theory, it can be shown that
kh s( )k C ikDsk;
(1.11)
where C i is an interpolation constant, and Ds denotes derivatives of order s of . We now introduce the concept of strong stability: strong stability measures norms of certain derivatives of the solution of a problem in terms of norms of the given data. The term `strong' refers to the fact that derivatives of the solution to the problem are involved, in contrast with standard (or weak) stability measuring the norm of the solution itself. Thus, for the dual problem we assume that we can prove a strong stability estimate of the form
kDsk C sku uhk;
(1.12)
where C s is a strong stability constant. Substituting (1.11) and (1.12) into (1.10) and dividing by ku uhk gives the required error bound
ku uhk C i C skhsrk:
(1.13)
6 To summarise, the approach to adaptivity and a posteriori error analysis developed by C. Johnson et al. relies on the use of the Galerkin orthogonality built into the nite element method coupled with strong stability estimates of an associated dual problem. The explicit use of Galerkin orthogonality and strong stability leads to an improvement in the asymptotic sharpness of the a posteriori error estimate as h ! 0. Moreover, numerical experiments indicate that this analysis leads to an error estimator that tends to 0 as h ! 0. It is this approach to a posteriori error analysis that we shall adopt here. The outline of the rest of this paper is as follows: in Section 2 we summarise some of the notational conventions that we shall use. In Section 3 we state the model problem to be considered and formulate the direct Lagrange-Galerkin method for this problem. In Section 4 we derive an a posteriori error estimate in the L2 (L2 ) norm based on the general framework presented above. In Section 5 we shall describe how our a posteriori error estimate may be eciently implemented into a Lagrange-Galerkin nite element code to yield global error control with respect to a pre-determined tolerance. In Section 6 we shall present some numerical experiments to illustrate the performance of our adaptive strategy. Finally, in Section 7 we shall summarise the work presented in this paper.
2 Notation and basic de nitions
Let Z denote the set of integers, N the set of positive integers, N0 the set of non-negative integers, R the set of real numbers and R+ the set of positive real numbers. Let ! be a bounded open subset of Rd, where d 2 N. For 1 p 1, let Lp(!) denote the usual Lebesgue space of real-valued functions with norm k kLp(!) . For p = 2 and for u; v 2 L2 (!) we denote by (; )! the L2(!) inner product de ned as Z (u; v)! := u(x)v(x)dx: !
For ! = , we denote k k by k k, and (; )! by (; ). If = (1; : : : ; d), with each i 2 N; i = 1; : : : ; d, we write jj := 1 + : : : + d and ! := 1!2 ! : : : d!. Let D := D11 : : : Ddd and Dj = @=@xj for 1 j d. For m 2 N0, we denote by C m(!) the set of all continuous real-valued functions de ned on ! such that Du is continuous on ! for all jj m. C m (!) will denote the set of all u in C m(!) such that D u can be extended from ! to a continuous function on ! for all jj m. In particular, when m = 0 we simply write C (!) instead of C 0(!). C0m (!) will denote the set of functions in C m (!) with compact support in ! . Also, C m;1 (!) will denote the set of functions in C m(!) for which, for 0 jj m, D u is Lipschitz-continuous in ! . L2 ( )
7 Further, for m 2 N0, let W m;p(!) denote the classical Sobolev space endowed with the norm k kW m;p(!) and the semi-norm j jW m;p(!) (cf. Adams [1]). For p = 2 we write H m(!) for W m;2(!). H0m(!) will denote the closure of the space of in nitely smooth functions with compact support in ! in the norm of H m(!). The dual space of H0m(!) will be denoted by H m(!). Let X be any of the spaces just de ned. Then X 2 will denote the topological product X X .
3 Model problem and discretisation Given a nal time T > 0, we shall consider the following unsteady convectiondiusion problem: given that f 2 L2 (0; T ; H 1( )) and u0 2 L2 ( ), nd u such that: ut + a ru u = f; x 2 ; t 2 I; (3.1a) u(; t) = 0; x 2 @ ; t 2 I; (3.1b) u(x; 0) = u0(x); x 2 ; (3.1c) where is a bounded convex polygonal domain in R2 with boundary @ and I = (0; T ]. Further, we assume that the diusion coecient > 0, the velocity eld a 2 C (0; T ; C01( )2 ) and that a is incompressible, i.e. r a = 0 8x 2 ; t 2 I: (3.2) We shall now state the following existence and uniqueness result for problem (3.1). Theorem 3.1 Suppose that f 2 L2(0; T ; H 1( )) and u0 2 L2 ( ); then (3.1) has a unique (weak) solution u 2 L2 (0; T ; H01( )) \ C (0; T ; L2 ( )). Proof A proof of a general existence and uniqueness theorem for parabolic equations of which Theorem 3.1 is a special case, may be found in Renardy & Rogers [32] (Theorem 10.3 and Lemma 10.4) and Temam [35] (Theorem 3.4), for example. 2 In this section we shall formulate the Lagrange-Galerkin method for (3.1). However, we rst need to introduce the following notation. Let 0 = t0 < t1 < : : : < tM < tM +1 = T be a subdivision (not necessarily uniform) of I , with corresponding time intervals I n = (tn 1 ; tn] and timesteps kn = tn tn 1 . For each n, let T n = fK g be a partition of into closed triangles K , with corresponding mesh function hn 2 C 0;1 ( ) satisfying jrhn(x)j 8x 2 K 8K 2 T n; (3.3a) 2 n c1hK meas(K ) 8K 2 T ; (3.3b) n c2hK hn(x) hK 8x 2 K 8K 2 T ; (3.3c)
8 where hK is the diameter of K (= longest side of K ) and c1 , c2 and are positive constants. Further, h is de ned to be the global mesh function de ned by h(x; t) = hn(x), for (x; t) 2 I n and we de ne the corresponding time step function k = k(t) by k(t) = kn; t 2 I n. We note that the inequality (3.3b) expresses the usual `minimum angle' condition stating that the angles of all triangles in T n are bounded below by the constant c1. If c1 1=2 then all the triangles are more or less equilateral, while if c1 is small then some triangles will be thin with one or two small angles. For some n 2 N0, we associate with T n the set E n = f g consisting of those line segments in R2 which appear as an edge of some K 2 T n. We also denote by Ein , those in E n which are interior to (i.e. not part of @ ). Let S n = I n; for p; q 2 N0 we de ne S hn = fv 2 H01( ) : vjK 2 Pp(K ) 8K 2 T ng;
V hn = fv : v(x; t)jSn =
q X
tj vj ; vj 2 S hn g;
j =0 : v(x; t)jSn 2 V hn ;
= fv n = 1; : : : ; M + 1g; where Pp(K ) denotes the set of polynomials of degree at most p over K . In the following, we shall assume that p = 1 and q = 0. We note that if v 2 V hn for n = 1; : : : ; M + 1, then v is continuous in space at any time, but may be discontinuous in time at the discrete time levels tn. To account for this, we introduce the notation vn := slim v(tn s); !0+
Vh
and [vn] := v+n vn : To construct the Lagrange-Galerkin method, we rst de ne the particle trajectories (or characteristics) associated with problem (3.1): the path of a particle located at position x 2 at time s 2 I is de ned as the solution of the initial value problem d X(x; s; t) = a(X(x; s; t); t); (3.4a) dt X(x; s; s) = x: (3.4b) Thus,
Zt
X(x; s; t) = x + s a(X(x; s; ); )d:
(3.5)
We note that for a 2 C (0; T ; C01( )2) there exists a unique solution to (3.4), (3.5), see Lefschetz [27], for example.
9 The Lagrange-Galerkin method makes use of the material derivative, Dtu, which is de ned, for u smooth enough, as follows: Dt u(x; t) := ddt u(X(x; s; t); t) js=t = @ u(x; t) + a(x; t) ru(x; t) 8x 2 ; t 2 I: (3.6) @t Hence, using the material derivative, equation (3.1) may be rewritten in the following (weak) form: nd u(t) 2 V , such that (Dtu(; t); v) + (ru(; t); rv) = (f (; t); v) 8v 2 V; (3.7a) (u(; 0); v) = (u0(); v) 8v 2 V; (3.7b) where V = H01( ). The Lagrange-Galerkin time-discretisation involves approximating the material derivative by a divided dierence operator. The simplest appropriate discretisation is the backward Euler method, giving for n = 0; : : : ; M :
! u(; tn+1) u(X(; tn+1; tn); tn) ; v kn+1 +(ru(; tn+1); rv) (f (; tn+1); v) 8v 2 V; (3.8a) (u(; 0); v) = (u0(); v) 8v 2 V: (3.8b) If we now de ne unh to be the Galerkin nite element approximation to u(; tn) at time tn ; then applying the nite element method to (3.8) yields the LagrangeGalerkin discretisation of (3.1) as follows: nd unh+1 2 S hn+1 for 0 n M such that ! unh+1 unh(X(; tn+1; tn)) ; v kn+1 +(runh+1; rv) = (f; v) 8v 2 S hn+1 ; (3.9a) 0 h (uh; v) = (u0; v) 8v 2 S 0 ; (3.9b) where fjSn+1 := f (; tn+1). This is the same approach as that used by Bercovier et al. [4],[5], Douglas & Russell [9], Lesaint [28], Pironneau [30] and Suli [33],[34]. Alternatively, if we integrate (3.9a) with respect to t over I n+1, we obtain the following equivalent formulation: nd uh such that, for n = 0; 1; : : : ; M , uhjSn+1 2 V hn+1 and satis es (Dthuh; v)n+1 + (ruh; rv)n+1 = (f; v)n+1 8v 2 V hn+1 ; (3.10a) (u0h ; v) = (u0; v) 8v 2 V h0 ; (3.10b) where DthuhjSn+1 := (uh (X(x; tn+1; tn+1); tn+1) uh (X(x; tn+1; tn); tn))=kn+1: (3.11)
10 Further, for v; w 2 L2 (I n+1; L2 ( )), we have used (v; w)n+1 :=
Z tn+1 tn
(v; w)dt:
Remark We note that in (3.10) the space discretisation may vary in both space
and time, but the time steps are only variable in time and not in space. Hence, the mesh design for (3.10) will not be fully optimal. This is clearly a disadvantage of using a method of lines approach as opposed to using space-time elements, as for example in the discontinuous Galerkin method, where time steps variable in space can be admitted (see, for example, Eriksson & Johnson [13] and the references cited therein).
If we assume that T n is quasi-uniform, k is constant and the solution u of (3.1) satis es (a) u 2 L1(0; T ; H 2( )); (3.12) (b) @u 2 L2 (0; T ; H 2( )); @t (c)Dt2u 2 L2 (0; T ; L2( )); then we may state the following a priori error estimate (assuming p = 1 and q = 0): Theorem 3.2 Let the solution u of (3.1) satisfy the requirements (3.12), and let the Galerkin solution uh be de ned by (3.9) for p = 1 and q = 0; then there exists a positive (generic) constant C , such that
2 3
@u 5 ku uhkl1(0;T ;L2 ( )) Ch2 4kukL1(0;T ;H 2( )) +
@t
2 2 L (0;T ;H ( ))
2
+Ck D u ; t
and
L2 (0;T ;L2 ( ))
3 2
@u
5 ku uhkl1(0;T ;H 1 ( )) Ch 4kukL1(0;T ;H 2 ( )) +
@t
2 1 L (0;T ;H ( ))
2
+Ck D u ; t
where
kvkl1(0;T ;X ) := 0max kv(ti)kX ; iM +1 and X = L2 ( ) or X = H 1 ( ).
L2 (0;T ;L2 ( ))
11 This is a slight generalisation of the convergence results proved by Douglas & Russell [9]. We note that for a bounded domain , it is convenient to assume that )2 ), so that the mapping x ! X(x; s; t) is a homeomorphism of
a 2 C (0; T ; C01 (
onto itself (cf. Suli [34]). 2
Proof
4 A posteriori error analysis In this section we shall derive an a posteriori estimate for the error, e = u uh, in the L2 (0; T ; L2( )) norm, where u and uh are the solutions of (3.1) and (3.10), respectively. However, before we proceed, we shall rst introduce the following notation: for v; w 2 L2 (0; T ; L2( )) we de ne (v; w)Q :=
M Z tn+1 X
n=0 tn
(v; w)dt;
kvkQ := ((v; v)Q)1=2 ; where Q := I .
4.1 Error representation
The (backward) dual problem takes the form: nd such that
r (a)
= e u uh; x 2 ; t 2 I; (4.1a) (; t) = 0; x 2 @ ; t 2 I ; (4.1b) (x; T ) = 0; x 2 : (4.1c) In the following theorem, we give an existence and uniqueness result for problem (4.1), which guarantees sucient regularity of the solution to enable us to perform the proceeding a posteriori error analysis. t
Theorem 4.1 Suppose that e 2 L2 (0; T ; L2( )), where is assumed to be convex; then (4.1) has a unique (weak) solution 2 H 1(0; T ; L2( )) \ L2 (0; T ; H01( ) \ H 2( )). Proof From Theorem 3.1, problem (4.1) has a unique (weak) solution 2 2 L (0; T ; H01 ( )) \ C (0; T ; L2 ( )). Moreover, from Theorem 4.9 (see Section 4.3) we have that 2 H 1 (0; T ; L2 ( )). Hence, if we let g
= e + t + r (a);
then, g 2 L2 (0; T ; L2 ( )) and since = g;
12 (in the sense of distributions), we also have that 2 L2 (0; T ; L2 ( )). By elliptic regularity (cf. Brenner & Scott [6] (Section 5.5) and Hackbusch [19] (Corollary 9.1.23), for example) we have that (t) 2 H01 ( ) \ H 2 ( ) for almost every t. In particular, using Lemma 4.2 (see Section 4.2) below we have
ZT 0
j(t)j2H 2 ( ) dt
ZT 0
k(t)k2 dt:
(4.2)
Hence, since 2 L2 (0; T ; H01 ( )) \ H 1 (0; T ; L2 ( )) (see above), then with inequality (4.2) we deduce that 2 H 1 (0; T ; L2 ( )) \ L2 (0; T ; H01 ( ) \ H 2 ( )), as required. 2
We shall now proceed to prove the following error representation: multiplying (4.1a) by e and integrating by parts in both space and time, we get
kek2Q = =
M Z tn+1 X
n=0 tn M Z tn+1 X n=0 tn M X
(e; t
r (a)
(et + a re; )dt +
)dt M Z tn+1 X
n=0 tn
(re; r)dt
([unh]; (tn)) + (u0 u0h ; (0))
=
n=0 M X Z tn+1
n=0 tn M X n=0
(f a ruh; )dt
M Z tn+1 X
n=0 tn
(ruh; r)dt
([unh]; (tn)) + (u0 u0h ; (0));
where we have used (3.7a). If we now let 2 V h, then using (3.10) we have
kek2Q =
M Z tn+1 X
r
([unh]=kn+1 + a uh f; )dt n t n=0 M Z tn+1 X + ( uh; ( ))dt n=0 tn M Z tn+1 X + (Dthuh ([unh]=kn+1 + a uh); )dt n t n=0 M Z tn+1 M Z n+1 X n ]=k ; (tn ))dt + X t (f + ([ u h n+1 n=0 tn n=0 tn +(u0 u0h ; (0)) I + II + III + IV + V + VI 8 2 V h:
r r
r
=
f; )dt (4.3)
Remark We note that, since uh is de ned to be a piecewise constant function on each time interval I n+1 for n = 0; : : : ; M , we have that uht jI n+1 = 0. Therefore,
13 we include the `jump' term, [unh]=kn+1 uht , in terms I and III of the error representation (4.3). The advantage of this is an improvement in the sharpness of the error estimator, since term I represents the nite element residual of the hyperbolic part of our problem, and term III represents the error committed by approximating the material derivative, Dt uh, by Dthuh.
4.2 Interpolation/projection estimates for the dual problem We shall now choose 2 V h in (4.3) to be the space-time L2 -projection of , i.e. if we rst de ne the L2 -projections:
Pn : L2 ( ) ! S hn ; n : L2 (I n) ! P0(I n); in space and in time, respectively, by
Z
Z tn
(Pn )vdx = 0 8v 2 S hn ;
(4.4a)
(n )vdt = 0 8v 2 P0(I n);
(4.4b)
tn 1
then, we can de ne (locally) jSn 2 V hn by letting jSn = Pnn = nPn 2 V hn ;
(4.5)
where = jSn . Further, if we introduce P and by (P)jSn = Pn(jSn ); ()jSn = n (jSn );
(4.6a) (4.6b)
then we let 2 V h be = P = P 2 V h:
(4.7)
It follows from this de nition (cf. Babuska & Rheinboldt [2]) that
kkQ kkQ:
(4.8)
We shall now give error estimates for the projection operator, in order to estimate . However, rst we shall introduce the following notation: for each 2 Ein, let n denote the unit normal to in the outward direction to K , and de ne for w 2 S hn (for some n 2 N0),
"
# @w = lim (rw(x + sn ) @ n s!0+
rw(x sn )) n ; x 2 ;
14 that is, [@w=@ n ] is the jump across in the normal component of rw. Finally, we introduce the discrete second derivatives X " @w # 1 2 DhwjK = @ n h ; K 2 T n: K 2@K \Ein We may now give the following results.
Lemma 4.2 Suppose that is a bounded convex polygonal domain in R2; then jwjH 2( ) kwk 8w 2 H01( ) \ H 2( ): (4.9) We shall rst assume that w 2 C01 ( ) \ C 3 ( ). By de nition, we have
Proof
jwj2H 2 ( )
Z 2 4 =
!2 3 + @x@y + @y2 5 dx @x2
Z 2 @ 2 w !2 @ 2 w @ 2 w 3 5 dx kwk2 + 2 4 @x@y @x2 @y 2 @2w
= I + II;
!2
@2w
!2
@2w
where x = (x; y). To prove the lemma for w 2 C01( ) \ C 3 ( ), it is sucient to show that II 0. Using integration by parts twice, we have
Z
Z
@2w @2w
dx = @x@y @x@y
@w @ 2 w
@
Z
=
@
@x @x@y
d
ny s
@w
@2w
@x
@x@y
ny
Z
@w @ 3 w
dx @x@y 2
@x ! Z @2w @2w @2w n d s + dx; x @y 2
@x2 @y2
where n = (nx ; ny ) denotes the unit outward normal vector to @ . Hence, II = 2
Z
@
@w
@2w
@x
@x@y
@2w
ny
@y 2
!
nx
ds:
(4.10)
To estimate this boundary integral, we rst de ne nt = (ny ; nx) to be the unit tangent vector along @ . Further, for each open line segment @ i of @ we note that the vectors n and nt are constant. Thus, writing the derivatives @=@x and @=@y in (4.10) in terms of derivatives in the normal and tangential directions to @ , we have II = 2
XZ i
@ i
nx
@w @n
@w
+ ny @ n
t
ny
@2w
@ n@ nt
nx
@2w @ n2t
!
ds:
Since w = 0 on @ , this implies that @w=@ nt vanishes on each @ i . Moreover, as we approach the point of intersection between any two adjacent sides, @ i and @ i+1 , of @ , rwj@ = 0 in the two linearly independent directions nt j@ i and nt j@ i+1 .
15 Therefore, by continuity we deduce that rw = 0 at the intersection point. Hence, we have II = 2 = =
XZ
@ i
i
XZ i
@ i
X" i
nx ny
nx ny
nx ny
@w @ 2 w
@ n @ n@ nt
@
@ nt
" 2# @w @n
@w 2#@ Ri @n
ds
ds
@ Li
= 0; where @ Li and @ Ri denote the end points of @ i . Hence, we have shown that jwjH 2 ( ) kwk 8w 2 C01( ) \ C 3( ): (4.11) To complete the proof for any w 2 H01 ( ) \ H 2 ( ), we shall use a density argument. Since C01 ( ) \ C 3 ( ) is dense in H01 ( ) \ H 2 ( ), we may pick a sequence of functions ) \ C 3( ) which converge to w in the H 2 ( ) norm as j ! 1. Using the wj 2 C01 (
triangle inequality and (4.11), we have jwjH 2 ( ) jw wj jH 2 ( ) + jwj jH 2 ( ) kw wj kH 2 ( ) + kwj k kw wj kH 2 ( ) + k(wj w)k + kwk p (1 + 2)kw wj kH 2 ( ) + kwk: Now passing to the limit as j ! 1 we deduce (4.9). This completes the proof of the lemma. 2
Remark We note that the assumption that is convex polygonal is a stronger
condition than is actually needed. In fact, we need only assume that satis es the segment property (cf. Adams [1], p54) to ensure that C01 ( ) \ C 3( ) is dense in H01( ) \ H 2( ) ([1], Theorem 3.18). However, for smooth domains the proof of this lemma requires to be convex (see Hackbusch [19], Corollary 9.1.23).
Theorem 4.3 Let n 2 N0 and let K 2 T n, where T n satis es conditions (3.3). Further, let r in R satisfy 1 < r 1. If for w 2 W 2;r (K ) we denote the piecewise linear interpolant of w by IKn w, then jw IKn wjW m;r (K ) C ih2K jwjW 2;r(K ); 0 m 1; (4.12) where
1
C i = 2(1 1=r)
03 1 X @ jj jW m;1(K ) A ; j =1
(4.13)
16 and j ; j = 1; 2; 3, denote the basis functions associated with the nite element K. Proof
See Ciarlet [7] and Ciarlet & Raviart [8]. 2
Theorem 4.4 Let n 2 N0 and assume that T n satis es conditions (3.3). If w 2 L2 ( ), then for any positive function 2 C 1( ) such that jr j hn 1 in , for suciently small, we have
k Pnwk C k wk;
(4.14)
where C = C ( ) = C2p=(1 C1p),
C1p = 2c2 1(C1i + C2i C l C inv)=(1 c2 1)2 ; C2p = 1 + C1p; p C1i = C2i = 3, C l = 3 2 and C inv = 6=c1. Non-constructive proofs of this theorem may be found in Eriksson [11] and Eriksson & Johnson [14]. However, here we shall repeat the proof given in [11] and [14] in a constructive way, so that the size of the error constant C is known. Using the de nition of Pn and the Cauchy-Schwarz inequality, we have Proof
k
Pn w
k2 = (Pn w; =
2 P w) n (Pn w; 2 Pn w IKn ( 2 Pn w)) + (w; IKn ( 2 Pn w)) k Pn wkk 1 ( 2 Pnw IKn ( 2 Pnw))k +k wkk 1 IKn ( 2 Pn w)k:
(4.15)
Now, using the triangle inequality, we have
k
1( 2P
nw
IKn ( 2 Pnw))kL2 (K ) k 1 ( 2 Pnw IKn ( 2 )Pn w)kL2 (K ) +k 1 (IKn ( 2 )Pn w IKn (IKn ( 2 )Pn w))kL2 (K )
= I + II:
(4.16)
In order to estimate I and II, we rst note that K
where K
r kL1(K);
K + hK k
:= max (x); x2K
K
(4.17)
:= xmin (x): 2K
Using the hypothesis of the theorem and condition (3.3c), we have
kr kL1 (K ) khn 1 kL1 (K ) c2 1 hK1 K :
(4.18)
17 Substituting (4.18) into (4.17), gives K
K + c2
1
K;
i.e., for suciently small ( < c2 ), we have 1 K (1 c 1 ) K : 2 It can be shown that, for any v 2 W 1;1(K ),
kv IKn vkL1 (K )
C1i hK
(4.19)
krvkL1 (K );
(4.20)
where C1i = 3. Further, using Theorem 4.3, we also have that, for any v 2 H 2 (K ),
kv IKn vkL2 (K )
C2i h2K v
j jH 2 (K );
(4.21)
where C2i = 3. Using (4.18), (4.19) and (4.20), we have I
1 2 I n ( 2 )k 1 kP w k 2 L (K ) n L (K ) K Kk 1 C i hK k ( 2 )k 1 k 1 ( Pn w)k 2 L (K ) L (K ) K 1 2 2C1i K2 c2 1 k Pn wkL2 (K ) K i 2C1 c2 1 k P wk 2 : (1 c2 1 )2 n L (K )
r
(4.22)
Further, using (4.21), we have II C2i K1 h2K jIKn ( 2 )Pn wjH 2 (K ) : Since IKn ( 2 ) and Pn w are linear functions of x on K , it can be shown that
jIKn ( 2 )Pn wjH 2 (K ) C lkr(IKn ( 2 ))kL1 (K )krPn wkL2 (K ) ; p
(4.23) (4.24)
where C l = 3 2. Using (4.24), the inverse inequality (see Appendix B) and the fact that IKn ( 2 ) is linear, inequality (4.23) becomes II
C2i C l
r
r
1 2 n 2 K hK k (IK ( ))kL1 (K ) k Pn wkL2 (K ) C2i C l C inv K1 hK k ( 2 )kL1 (K ) kPn wkL2 (K ) :
r
Further, using (4.18) and (4.19) again, gives II
2 i l inv 2C2 C C K c2 1 k 1 ( Pn w)kL2 (K ) K 2C2i C l C inv c2 1 k P wk 2 : n L (K ) (1 c2 1 )2
(4.25)
18 Substituting (4.22) and (4.25) into equation (4.16), we have
k
1 ( 2 Pn w
IKn (
2 Pn w))k C p k 1
k
Pn w ;
(4.26)
where 2c2 1 (C1i + C2i C l C inv ) : (1 c2 1 )2 Using inequality (4.26), we deduce that p=
C1
k
1 I n ( 2 P w)k C pk n K 2
k
(4.27)
Pn w ;
where C2p = 1 + C1p. Finally, substituting (4.26) and (4.27) into (4.15), we get
k
Pn w
k2 C1pk
For suciently small (so that 1
k
Pn w
Pn w
p
k2 + C2pk wkk
C1 >
k
Pn w :
(4.28)
0), it follows from (4.28) that p
k 1 C2C p k wk;
which proves the theorem. 2
1
Corollary 1 Let n 2 N0 and suppose that T n satis es conditions (3.3). If w 2 L2 ( ) then, for = 2 with suciently small, we have khn 2(Pnw w)k C3pkhn 2(IKn w w)k; (4.29) where C3p = (1 + C2 ) and C2 = C (2). If we let = hn 2 and = 2, then for suciently small the hypothesis of Theorem 4.4 holds. Using the triangle inequality and Theorem 4.4 with w replaced by w v, for v 2 V hn , we get Proof
khn 2 (Pnw
w
)k = khn 2 (Pn w v + v w)k khn 2 (Pn w v)k + khn 2 (w v)k C2 khn 2 (w v)k + khn 2 (w v)k 8v 2 V hn ;
where C2 = C (2). The statement of the corollary follows by letting v = IKn w 2 V hn .
2
Corollary 2 Let n 2 N0 and suppose that T n satis es conditions (3.3). If w 2 H 1( ) then, for = 2 with suciently small, we have khn 1r(Pnw w)k khn 1r(IKn w w)k + C4pkhn 2(IKn w w)k; (4.30) where C4p = C inv C2 =c2 .
19 Using the triangle inequality we have that, for any v 2 V hn , khn 1 r(Pnw w)k khn 1 r(v w)k + khn 1 r(Pn w v)k = I + II: (4.31) Let us consider term II: using condition (3.3c) and the inverse inequality (see Appendix B), we have khn 1 r(Pnw v)kL2 (K ) c2 1hK1 kr(Pnw v)kL2 (K ) Proof
Further, using Theorem 4.4, we get khn 1 r(Pnw v)k Substituting (4.32) into (4.31) and
2
k
kL2 (K ) v)kL2 (K ) :
C inv c2 1 hK2 Pn w
v
C inv c2 1 hn 2 Pn w
v
C inv c2 1 hn 2 Pn w
k
k
(
(
)k
C invC2c2 1 khn 2 (w v)k: letting v = IKn w 2 V hn yields the
(4.32) desired result.
We now give the following error estimates for the projection operators P and in order to estimate = P. Lemma 4.5 Suppose that R 2 L2 (I ; L2( )) and w 2 V h then j(R; P )Qj C5pkh2RkQkkQ; (4.33a) p 2 2 j(rw; r(P ))Qj C6 kh DhwkQkkQ; (4.33b) p p p p p where C5 = C3 C2i c2 2 , C6 = C t (Cp3i c2 1 + C4 C2i c2 2 + C5 )=2c2, C2i is as de ned in the proof of Theorem 4.4, C t = 4 2=c1 , C3i = 3 and C3p and C4p are as de ned in Corollary 1 and Corollary 2, respectively. Proof First, we consider (4.33a): using the Cauchy-Schwarz inequality, we get j(R; P )Qj kh2 RkQkh 2 (P )kQ : By de nition, we have that
kh
2 (P
)kQ =
M Z tn+1 X
n=0 tn
2 (P khn+1 n+1
)k2
!1=2
:
(4.34)
Using Corollary 1, we have 2 (P )k2 (C p )2 kh 2 (I n+1 )k2 : khn+1 n+1 3 n+1 K Further, using (3.3c), Theorem 4.3 and Lemma 4.2, we have X p 2 4 4 n+1 2 (P )k2 khn+1 (C3 ) c2 hK kIK k2L2 (K ) n+1
=
K 2T n+1
X
(C3p )2 (C2i )2 c2 4 jj2H 2 (K )
K 2T n+1 (C3p)2 (C2i )2 c2 4 jj2H 2 ( ) (C3p)2 (C2i )2 c2 4 kk2 ;
(4.35)
20 where C2i is as de ned in the proof of Theorem 4.4. Substituting (4.35) into (4.34) gives the desired result. Next, we consider (4.33b): let = P , then by integrating by parts in space, we have
0 0 11 Z X B X CC (rw; r)Q = rw n dsAA dt @ @ n t n=0 K 2T n+1 2@K \Ein+1 0 0 11 Z M Z tn+1 X X X @w CC B B@ dsAA dt: = @ n 2 @ n t M Z tn+1 X B
n=0
K 2T n+1 2@K \Ein+1
From the trace inequality (cf. Appendix A),
Z
jjds C t
p where C t = 4 2=c1 ,
Z
K
Z
jr
jdx + hK1 jjdx K
;
2 @K 8K 2 T n+1;
we have
j(rw; r)Qj 0 0 11 Z Z n +1 M X t B X B X @w CA dt jjdsC @ @ A n 2 @ n n=0 t K 2T n+1 2@K \Ein+1 1 0 Z M Z tn+1 X X @ (hK jrj + jj) dxA dt; C t Dh2 w n 2 n=0 t
K
K 2T n+1
where we have used the above notation. Further, using (3.3c) and the Cauchy-Schwarz inequality we have j(rw; r)Qj 12 C tc2 1 kh2 Dh2 wkQ k(h 1 jrj + h 2 jj)kQ : (4.36) Let us now consider the second term on the right-hand side of (4.36): using the triangle inequality, we have
k(h 1 jrj + h 2 jj)kQ kh
1
rkQ + kh 2kQ
= I + II:
(4.37)
From the rst part of this lemma, we have already proved that: II C5pkkQ :
(4.38)
Let us now consider term I: by de nition, we have
kh
1
rkQ =
M Z tn+1 X
n=0 tn
1 khn+1
r
!1=2
k2 dt
:
(4.39)
21 Using Corollary 2 gives
1 rk2 kh 1 r(I n+1 khn+1 n+1 K
2 (I n+1 )k + C4pkhn+1 K
2
)k
:
Further, using (3.3c), Theorem 4.3 and Lemma 4.2, we have
00 X 1 rk2 B khn+1 @@
K 2T n+1
K 2T n+1
kr(IKn+1
0 X +C4p @
K 2T n+1
00 X B @@
=
c2 2 hK2
11=2 )k2L2 (K ) A
c2 4 hK4
11=2 12 C k2L2 (K ) A A
kIKn+1
11=2 c2 2 (C3i )2 jj2H 2 (K ) A 0 X p +C4 @
K 2T n+1 i 1 p i 2 2 2 C3 c2 + C4 C2 c2 jjH 2 ( ) i 1 p i 2 2 C3 c2 + C4 C2 c2 kk2 ;
11=2 12 C c2 4 (C2i )2 jj2H 2 (K ) A A (4.40)
where C3i = 3 and C2i is as de ned in the proof of Theorem 4.4. Substituting (4.40) into (4.39) and using (4.38) gives
k(h 1 jrj + h 2 jj)kQ which proves the lemma. 2
C3i c2 1
+ C4pC2i c2 2 + C5p kkQ ;
Lemma 4.6 Suppose that R 2 L2 (I ; L2( )) and w 2 V h then j(R; P ( ))Qj C4i kkRkQkt kQ; M Z tn+1 X j tn (R; n )dtj C5i kkRkQkt kQ; n=0
p
(rw; rP ( ))Q = 0;
(4.41a) (4.41b) (4.41c)
where C4i = 1= 2, C5i = 1 and n = (x; tn). Proof
First, we consider (4.41a): using the Cauchy-Schwarz inequality, we get
j(R; P (
))Q j kkRkQ kk 1 ((P ) (P ))kQ :
22 By reversing the order of integration, we have
kk 1 ((P ) (P ))kQ M Z tn+1 Z X =
=
n=0 tn Z X M
n=0
2 ( (P ) kn+1 n+1 n+1
2 k (P ) kn+1 n+1 n+1
(Pn+1
!1=2
))2 dxdt
(Pn+1 )k2L2 (I n+1 ) dx
!1=2
(4.42)
:
If we denote the piecewise constant interpolant of v on I n+1 at the point (tn+1 + tn )=2 by IIn+1v, then using the fact that n+1 v is L2 -projection of v onto the set of piecewise constant functions, we have
kn+1(Pn+1 ) (Pn+1 )k2L2 (I n+1 ) kIIn+1(Pn+1 ) (Pn+1 )k2L2 (I n+1) :
(4.43)
Further, it can be easily proved that
kIIn+1(Pn+1 ) (Pn+1)kL2 (I n+1 ) C4i kn+1k(Pn+1 )t kL2 (I n+1 ); p i
(4.44)
where C4 = 1= 2. Using (4.43) and (4.44), (4.42) becomes
kk 1 ((P ) (P ))kQ !1=2 Z X M i 2 2 (C ) k(Pn+1 )t kL2 (I n+1 ) dx
n=0 4 !1=2 Z X M i 2 2 = (C4 ) kPn+1 t kL2 (I n+1 ) dx :
(4.45)
n=0
If we now reverse the order of integration in (4.45) and use that
kPn+1 t kL2 ( ) kt kL2 ( ) ;
(4.46)
then this proves the rst part of the lemma. To prove inequality (4.41b), we rst apply the Cauchy-Schwarz inequality, to give
j
M Z tn+1 X
n=0 tn
(R; n
)dtj
kkRkQ
M X n=0
1 (n kkn+1
!1=2
)k2L2 (I n+1 ;L2 ( )) dt
:
(4.47)
We note that if the interpolation operator IIn+1 de ned above is taken at the point tn , rather than at the midpoint of the interval I n+1 , then it can be shown that (4.44) holds with C4i replaced by C5i = 1. Using this inequality, we have 1 (n kkn+1
)kL2 (I n+1 ;L2 ( )) C5i kt kL2 (I n+1 ;L2 ( )) :
(4.48)
23 Substituting (4.48) into (4.47) gives the desired result. Finally, we consider (4.41c): (rw; rP (
))Q M X Z tn+1
= =
n=0 tn M X n=0
r
(rw; rPn+1 (n+1
wn+1 ;
rPn+1
Z tn+1 tn
))dt
(n+1
!!
)dt
;
where wjS n+1 = wn+1 . Using the de nition of n+1 from (4.4b) we deduce that (rw; rP (
))Q = 0:
2
4.3 Strong stability of the dual problem
In this section we derive strong stability estimates for the dual problem (4.1).
Lemma 4.7 Let be the solution of (4.1); then there is a constant C1s = C1s(T; ; ) such that
kk2L1(I ;L2( )) + k1=2 rk2Q C1skek2Q; where C1s = 2 minfc2 =; eT g and c = c ( ).
(4.49)
Multiply (4.1a) by and integrate over to obtain, 1d 2 1 1=2 2 2 dt k(t)k 2 (r a(t)(t); (t)) + k r(t)k = (e(t); (t)): Using the incompressibility condition (3.2) and the Cauchy-Schwarz inequality, we have 1 d k(t)k2 + k1=2 r(t)k2 ke(t)kk(t)k (4.50) 2 dt 12 ke(t)k2 + 12 k(t)k2 : Proof
Now, integrating with respect to time over the interval (t; T ) and using (4.1c), we get
k(t)k2 + 2
ZT t
k1=2 r(s)k2 ds kek2Q +
ZT t
k(s)k2 ds;
and by applying Gronwall's lemma (cf. Girault & Raviart [18]), we have
kk2L1 (I ;L2( )) + 2k1=2 rk2Q 2eT kek2Q :
24 This proves part of the lemma. In order to get an error constant that does not grow exponentially in time, we use the Poincare-Friedrichs inequality in equation (4.50) for as follows: 1d 2 1=2 2 2 dt k(t)k + k r(t)k c ke(t)kkr(t)k 2 2c ke(t)k2 + 21 k1=2 r(t))k2 ; where c = c ( ). Hence, we have d k(t)k2 + k1=2 r(t)k2 c2 ke(t)k2 : dt We now integrate with respect to time to obtain the desired result: 2 kk2 + k1=2 rk2 2c kek2 : L1 (I ;L2 ( ))
2
Q
Q
Lemma 4.8 Let be the solution of (4.1); then there is a constant C2s = C2s(T; a; ) such that k1=2 rk2L1(I ;L2 ( )) + kk2Q C2skek2Q; (4.51) where C2s = 4exp 2kak2L2 (I ;L1 ( )) = . Multiply (4.1a) by and integrate over to obtain, 1 d k1=2 r(t)k2 + k(t)k2 = (e(t) + r (a(t)(t)); (t)): 2 dt Using the incompressibility condition (3.2) and the Cauchy-Schwarz inequality, we have 1 d k1=2 r(t)k2 + k(t)k2 ke(t) + a(t) r(t)kk(t)k 2 dt 12 ke(t) + a(t) r(t)k2 + 21 k(t)k2 : Proof
If we now apply the triangle inequality, we have d k1=2 r(t)k2 + k(t)k2 ke(t) + a(t) r(t)k2 dt 2ke(t)k2 + 2ka(t) r(t)k2 2ke(t)k2 + 2 ka(t)k2L1 ( ) k1=2 r(t)k2 : Now, integrating with respect to time over the interval (t; T ), we get
k1=2 r(t)k2 +
ZT t
k(s)k2 ds 2kek2Q 2Z T +
t
ka(s)k2L1 ( ) k1=2 r(s)k2 ds;
25 and by applying Gronwall's lemma, we have
2
k1=2 rk2L1 (I ;L2( )) + kk2Q 4kek2Q e(2=)kakL2 (I;L1( )) : 2
Lemma 4.9 Let be the solution of (4.1); then there is a constant C3s = C3s(T; ; a; ) such that ktk2Q + k1=2 r(0)k2 C3skek2Q; where C3s = (2 + 2C1s kakL1 (I ;L1 ( )) =).
and integrate over to obtain, kt (t)k2 + (a(t) r(t); t (t)) 12 ddt k1=2 r(t)k2 = (e(t); t (t)); where we have also used the incompressibility condition (3.2). Further, using the Cauchy-Schwarz inequality, we have kt (t)k2 12 ddt k1=2 r(t)k2 ke(t)kkt (t)k + ka(t)kL1 ( ) kr(t)kkt (t)k ke(t)k2 + ka(t)k2L1 ( ) kr(t)k2 + 12 kt (t)k2 : Hence, kt (t)k2 ddt k1=2 r(t)k2 2ke(t)k2 + 2ka(t)k2L1 ( ) kr(t)k2 : Now, integrating with respect to time over the interval (0; T ), and using Lemma 4.7, we get kt k2Q + k1=2 r(0)k2 2kek2Q + 2kak2L1 (I ;L1( )) krk2Q 2C s 1 2 2 + kak kek2 ;
Proof
Multiply (4.1a) by
t
which proves the lemma. 2
L1 (I ;L1 ( ))
Q
4.4 Completion of the proof of the a posteriori error estimate We shall now proceed to estimate the terms I-VI on the right-hand side of (4.3). For the rst term I, we have I =
M Z tn+1 X
n=0 tn
([unh]=kn+1 + a ruh f; )dt
= (R1; )Q = (R1; P )Q + (R1 ; P ( ))Q = I1 + I2;
26 where P and are as de ned by (4.6), and
R1jSn+1 = [unh]=kn+1 + a ruh f:
By Lemma 4.5 and Lemma 4.8, we have
jI1 j C5pkph2R1 kQkkQ p s C5 C2 kh2 R1kQkekQ:
Similarly, using Lemma 4.6 and Lemma 4.9, we have
jI2j C4i kqkR1kQktkQ C4i C3skkR1 kQkekQ: Hence,
p
p Cs q C 5 jIj 2 kh2 R1kQkekQ + C4i C3skkR1kQkekQ: Similarly, we have
II = (ruh; r( ))Q = (ruh; r(P ))Q + (ruh; rP ( ))Q = II1 + II2 :
By Lemma 4.5 and Lemma 4.8, we have
jII1j C6pkqh2 Dh2 uhkQkkQ C6p C2skh2Dh2 uhkQkekQ: Also, by Lemma 4.6 we have II2 = 0: Thus, we have that
q
jIIj C6p C2skh2Dh2 uhkQkekQ: Next, we consider term III: applying the Cauchy-Schwarz inequality and inequality (4.8), we have
jIIIj kkR3 kQkkQ p kkR3 kQkkQ T kkR3 kQkkL1(I ;L2 ( )) ;
27 where
R3jSn+1 = (Dthuh ([unh]=kn+1 + a ruh))=kn+1: Then, applying the strong stability estimate of Lemma 4.7, we get q jIIIj C1sT kkR3 kQkekQ: Next, we consider term IV: using Lemma 4.6 and Lemma 4.9, we have jIVj C5i kqkR4 kQktkQ C5i C3skkR4 kQkekQ; where R4 jSn+1 = [unh]=kn+1 = (unh+1 unh)=kn+1: Next, we consider term V: using the Cauchy-Schwarz inequality, inequality (4.8) and Lemma 4.7, we get jVj kf fkQkkQ kpf fkQkkQ qT kf fkQkkL1(I ;L2( )) C1sT kf fkQkekQ: Finally, we consider term VI: using the Cauchy-Schwarz inequality and Lemma 4.7, we get jVIj ku0 u0h kk(0)k kqu0 u0h kkkL1(I ;L2 ( )) C1sku0 u0h kkekQ: We have now proved the main result of this paper:
Theorem 4.10 Let u and uh be solutions of (3.1) and (3.10), respectively. Then where
kekQ = ku uhkQ E (uh; h; k; f );
(4.52)
E (uh; h; k; f ) = E (uh; h; k; f ) + E0(u0; u0h ; h); E (uh; h; k; f ) = C1kh2R1 kQ + C2kkR1kQ + C3kh2 R2kQ +C4kkR3kQ + C5kkR4 kQ + C6 kkR5kQ; 0 E0(u0; uh ; h) = C7ku0 u0h k;
(4.53) (4.54)
28 and
R1 jSn+1 R2 R3 jSn+1 R4 jSn+1 R5 C1 C2 C3 C4 C5 C6 C7
= = = = = = = = = = = =
[unh]=kn+1 + a ruh f; Dh2 uh; (Dthuh ([unh]=kn+1 + a ruh))=kn+1; [unh]=kn+1; (f p f)=k; C5p C2s ; q s i C4 C3 q C6p C2s; q s C1 T; q C5i C3s; q C4 = C1sT; q s C1 :
5 Computational implementation In the previous section we derived an a posteriori error estimate in the L2 (L2 ) norm for the Lagrange-Galerkin discretisation of the unsteady convection-diusion problem of the following form, kekQ = ku uhkQ E (uh; h; k; f ); (5.1)
where E (uh; h; k; f ) was a computable global error estimator (cf. Theorem 4.10). In this section we shall consider the problem of how the error estimate (5.1) may be implemented into an adaptive Lagrange-Galerkin nite element algorithm to yield global error control (reliability) with respect to a prescribed tolerance, TOL, i.e. ku uhkQ TOL; (5.2) with the minimum amount of computational eort. We begin in Section 5.1 by designing an adaptive algorithm to determine the mesh parameters h and k in such a way that (5.2) holds. Further, in Section 5.2 we describe how the space-time mesh may be locally adapted so that the actual mesh parameters h and k are `close' to those predicted by the adaptive algorithm. Finally, to improve the eciency of the adaptive algorithm, we outline in Section 5.3 how the error constants may be sharpened by numerically estimating them.
29
5.1 Adaptive algorithm
For a given tolerance, TOL, we now consider the problem of nding a discretisation in space and time S h = f(T n; tn)gn0 such that: 1. ku uhkQ TOL; 2. S h is optimal in the sense that the number of degrees of freedom is minimal. In order to satisfy these criteria we shall use the a posteriori error estimate (5.1) to choose S h such that:
1. E (uh; h; k; f ) TOL; 2. The number of degrees of freedom of S h is minimal. The term E0(u0; u0h ; h) is easily controlled at the start of a computation; so here we shall only consider the problem of constructing Sh in an ecient way to ensure that
E (uh; h; k; f ) TOL0; where TOL = TOL0 + E0(u0; u0h ; h). To solve this problem, we rst write E symbolically in terms of two residual terms: one that controls the spatial mesh and one that controls the temporal mesh, i.e. let
E (uh; h; k; f ) C10 kh2R10 kQ + C20 kkR20 kQ:
(5.3)
Similarly, we split the tolerance TOL0 into a spatial part, TOLh, and a temporal part, TOLk . Thus, for reliability we now require that the following two conditions hold:
C10 kh2R10 kQ TOLh; C20 kkR20 kQ TOLk :
(5.4) (5.5)
To design the space-time mesh S h at each time level tn, we must `split up' the norm in (5.4) over each element K 2 T n, and the norm in (5.5) over the domain
at time tn . We do this as follows:
p
C10 kh2R10 kQ C10 T 1max kh2 R0 (un)k nM +1 n 1 h p 0 2 0 n C1 NT 1max max kh R (u )k 2 ; nM +1 K 2T n n 1 h L (K ) p C20 kkR20 kQ C20 T 1max kk R0 (un)k; nM +1 n 2 h
30 where N is the number of elements in the spatial mesh at time tn. Thus, if p C10 NT kh2nR10 (unh)kL2(K ) TOLh 8K 2 T n; for n = 1; : : : ; M + 1; p C20 T kknR20 (unh)k TOLk ; for n = 1; : : : ; M + 1; are satis ed then (5.4) and (5.5) will automatically hold. For the practical implementation of this method, we consider the following adaptive algorithm for choosing S h, assuming that the nal time T is xed: for each n = 1; 2; : : : ; M + 1 with T0n a given initial mesh and kn;0 an initial time step, determine meshes Tjn with Nj elements of size hn;j (x) and time steps kn;j and corresponding approximate solution uh;j de ned on Ijn such that, for j = 0; 1; : : : ; n^ 1, C1 kh2n;j+1R1(unh;j )kL2(K ) TOLh 8K 2 T n; (5.6a) +C3kh2n;j+1R2 (unh;j )kL2 (K ) = q j Nj T C2kkn;j+1R1(unh;j )k + C4 kkn;j+1R3 (unh;j )k p k; (5.6b) +C5kkn;j+1R4(unh;j )k + C6 kkn;j+1R5 (unh;j )k = TOL T where Ijn = (tn 1 ; tn 1 + kn;j ] and TOL0 = TOLh + TOLk . We de ne T n = Tn^n, kn = kn;n^ and hn = hn;n^ , where for each n, the number of trials n^ is the smallest integer such that for j = n^, the stopping condition C1kh2n;n^ R1(unh;n^ )kL2(K ) TOL +C3kh2n;n^ R2 (unh;n^ )kL2(K ) p h 8K 2 Tn^n; (5.7a) Nn^ T n n C2kkn;n^ R1(uh;n^ )k + C4kkn;n^ R3 (uh;n^ )k p k; +C5kkn;n^ R4(unh;n^ )k + C6kkn;n^ R5 (unh;n^ )k TOL (5.7b) T is satis ed.
Remarks
1. By construction, the stopping condition (5.7) will guarantee reliability of the adaptive algorithm. For eciency, we try to ensure that (5.7) is satis ed with near equality. 2. We should note that because we assume that the nal time T is xed, the time step given by (5.6b) may need to be limited to ensure that tM + kM +1;n^ = T . 3. For the implementation of this adaptive algorithm, we shall assume that T0n = T n 1 for n = 2; 3; : : :.
31
5.2 Mesh modi cation strategy
In Section 5.1 we described how to determine the space-time mesh S h in such a way as to ensure that the desired error control, i.e. ku uhkQ TOL; is satis ed. However, to obtain the `desired' mesh we must employ a suitable mesh modi cation technique to achieve this goal. Clearly, temporal adaption is quite straightforward, since the timestep can just be set equal to kn;j+1 given by (5.6b). For the spatial mesh we use the red-green isotropic re nement strategy, see Bank [3] and Hempel [22], and the references cited therein. Here, the user must rst specify a (coarse) background mesh upon which any future re nement will be based. A red re nement corresponds to dividing a certain triangle (father) into four similar triangles (sons) by connecting the midpoints of the sides (see Figure 1). Green re nement is only temporary and is used to remove any hanging nodes caused by a red re nement (see Figure 2). We note that green re nement is only used on elements which have one hanging node. For elements with two or more hanging nodes a red re nement is performed. The advantage of this re nement strategy is that the degradation of the `quality' of the mesh is limited since red re nement is obviously harmless and green triangles can never be further re ned.
Figure 1: Red re nement.
Figure 2: Green re nement. For the practical implementation of this mesh modi cation strategy we have used the FEMLAB package developed by Eriksson et al. [12].
Remark We note that within this mesh modi cation strategy, elements may
also be removed from the mesh (i.e. dere ned) provided that they do not lie
32 in the original background mesh. Thus, to prevent an overly re ned mesh in regions where the solution is smooth the background mesh should be chosen suitably coarse.
5.3 Estimation of the error constants
The quoted values of the constants Ci, for i = 1; : : : ; 7, which appear in (4.53) and (4.54) are upper bounds, i.e. correspond to the worst case scenario and hence, are not sharp. For reasons of eciency we would like to nd minimal values of these constants. In the following we shall explain how these constants may be sharpened by numerically estimating each of their constituent parts. In particular, we shall explain how to estimate the trace constant C t , the inverse constant C inv (Section 5.3.1), the interpolation constants C2i and C3i (Section 5.3.2) and the strong stability constants Cis for i = 1; : : : ; 3, (Section 5.3.3). However, we shall not attempt to estimate numerically the interpolation constants C1i , C4i and C5i or the projection constants Cip for i = 1; : : : ; 6: for these constants we simply take the values quoted in Section 4.
5.3.1 Estimation of C t and C inv
We recall that the constants C t and C inv satisfy the following inequalities:
w 2 W 1;1(K );
Z
for jwjds for w 2 P1(K ); jwjH 1(K )
Z
r
Z
j wjdx + hK1 jwjdx K K C inv hK1kwkL2(K );
Ct
; 2 @K;
respectively, where K 2 T n. Both C t and C inv can be calculated on an element by element basis during the computation. In order to obtain a value for each constant that is representative for all elements over the entire computation we rst store the maximum value calculated for each constant at each time-level. Then, at the end of the computation these values are averaged to give an approximation to C t and C inv. The problem with this procedure is that it only gives estimates of C t and C inv a posteriori. Hence, in practice we initially guess values for C t and C inv (based on the theoretical estimates), and run the code for a small number of time steps, say 20 or 30, and calculate C t and C inv as described above. The code is then run with these estimates for the full length of the computation. In practice we found that the approximations of C t and C inv calculated in such a manner corresponded well to those calculated over the entire execution of the code.
5.3.2 Estimation of the interpolation constants In this section we shall explain how to numerically estimate the interpolation constants C2i and C3i following the work developed by Handscomb [20]. We rst
33 recall the de nition of C2i and C3i : let w 2 H 2(K ), and let I w denote the piecewise linear interpolant of w over K ; then there exists positive constants C2i and C3i such that
kw I wkL2(K ) C2i h2K jwjH 2(K ) ; kr(w I w)kL2(K ) C3i hK jwjH 2(K ) ;
(5.8) (5.9)
respectively. Alternatively, for a given nite element K we may de ne C2i and C3i (respectively) as follows: kvkL2(K ) ; C2i := sup (5.10) 2 v2H2(0) (K ) hK jv jH 2 (K ) krvkL2(K ) ; C3i := sup (5.11) v2H2(0) (K ) hK jv jH 2 (K ) 2 (K ) := fv 2 H 2 (K ) : v = 0 at the vertices of Kg. where v := w I w and H(0) To compute C2i and C3i we restrict the supremum over sets of polynomials that vanish at the three vertices of K . This leads to generalised eigenvalue problems which can easily be solved to calculate C2i and C3i . Moreover, sharper bounds for C2i can be obtained by using thin-plate spline functions instead (see Handscomb [20] for more details). If we consider an element K de ned by the two sides b and c and the included angle , then the estimates computed in [20] for C2i and C3i are shown in Table 1.
b 1 1 1 1 1
c 1 1 1 1 1
30 60 90 120 150
C2i 0.0453 0.0681 0.0422 0.0290 0.0187
C3i 0.315 0.318 0.346 0.488 0.958
b 1 1 1 1 1
c 2 2 2 2 2
30 60 90 120 150
C2i 0.0267 0.0395 0.0380 0.0275 0.0181
C3i 0.540 0.342 0.341 0.484 0.956
Table 1: Computational estimates of the interpolation constants C2i and C3i .
5.3.3 Estimation of the strong stability constants We recall from Section 4 that we have the following (backward) dual problem: nd 2 H 1(0; T ; L2( )) \ L2 (0; T ; H01( ) \ H 2( )) such that
t
r (a)
= e u uh; x 2 ; t 2 I; (; t) = 0; x 2 @ ; t 2 I; (x; T ) = 0; x 2 :
(5.12a) (5.12b) (5.12c)
34 Further, based on this problem we need to calculate the strong stability constants, Cis for i = 1; : : : ; 3; which satisfy the following inequalities
kk2L1(I ;L2( )) C1skek2Q; kk2Q C2skek2Q; ktk2Q C3skek2Q;
(5.13a) (5.13b) (5.13c)
respectively. The estimation of these strong stability constants is certainly far from being trivial, since the error function, e, is not known. We could replace e by an arbitrary function , compute the solution to the backward dual problem (5.12) numerically, calculate Cis for each and take the supremum of Cis over all such . Of course, to do this numerically we would have to choose a nite number of right-hand side functions, , and take the supremum for Cis over this set of trial functions. However, numerical computations based on this approach have shown that the individual stability constants may vary by as much as one or even two orders of magnitude depending on the function chosen to represent e. Clearly, for the purposes of both reliability and eciency, it is important to `somehow' choose a right-hand side function that is representative of the error function associated with the numerical scheme for the particular physical problem under consideration. For a steady problem we may approximate e by solving the forward problem on two consecutive meshes and setting
e eh = u ne ucoarse h h ; cf. Hansbo & Johnson [21]. Then, the corresponding (steady) dual problem is solved with eh as the right-hand side. However, to apply this strategy for the time-dependent case (cf. (3.1)) we would need to store the function enh e(x; tn) at each time level tn . Clearly, this is not very practical due to the large amount of storage that would be required. An alternative would be to only store a small number of right-hand sides in the set, fenhg, e.g. every tenth, say, and assume the error, e, is a piecewise constant function in time at time-levels where enh was not stored. Of course, this strategy may still need a large amount of memory, depending on the number of right-hand side functions that are stored. Instead, the approach that we have adopted is to only store one right-hand side function, eh, from the set fenhg, which satis es
kenhk kehk 8n:
(5.14)
We then solve the dual problem (5.12) numerically with e replaced by eh. Computational estimates of the strong stability constants will be given in the next section for each of the model problems considered. We note here that in each case the backward dual problem is solved using an explicit rst order upwind scheme for the convection terms and implicit central dierences for the
35 diusion terms (see Mayers & Morton [29]). This nite dierence scheme can be shown to be both stable and monotone if x + y 1, where x and y are the Courant numbers calculated in their respective coordinate directions. To calculate the required derivatives of the dual solution in (5.13), we interpret the nite dierence approximation, h, of to be a piecewise constant function in time and a bi-quadratic function in space.
6 Numerical experiments In this section we shall present some numerical experiments to illustrate the performance of the adaptive algorithm (5.6) on a number of convection-diusion test problems. We rst recall that our model problem from Section 3 is as follows: nd u such that:
ut + a ru u = f; x 2 ; t 2 I; u(x; 0) = u0(x); x 2 ;
(6.1a) (6.1b)
subject to appropriate Dirichlet boundary conditions, where is a bounded convex polygonal domain in R2 with boundary @ and I = (0; T ], T > 0 a nal time. In Section 6.1 we shall demonstrate the eectiveness of using adaptively re ned meshes on a problem which exhibits very local phenomena in the form of internal and boundary layers. Next, in Sections 6.2 and 6.3 we shall consider the so-called rotating cone and rotating cylinder problems, respectively. For both of these problems the exact solution for = 0 is known, hence, we can investigate both the reliability and eciency of the adaptive algorithm. We note that for the analysis presented in Section 4 to hold, we require > 0. Hence, for both of these problems the computational results will be calculated for 0 < > > 1 for 0 x 1; y = 1; > < ( x)= for 0 x ; y = 0; u(x; t) = u(x; y; t) = > 0 (6.2) for x 1; y = 0; > > x = 1; 0 y 1 ; > : 0(y 1 + )= for for x = 1; 1 y 1: We note (for small) that initially the solution to this problem has boundary layers along x = 0 and y = 1. The boundary layer along x = 0 propagates into the domain and will interact with the out ow boundary at x = 1, where a new boundary layer will develop at time, t 0:5. The combination of both the internal and boundary layers in this problem make it a very challenging problem for any numerical method, especially for the Lagrange-Galerkin method which is known to be non-monotone. We note that in the following, we shall let = 7:8125 10 3 and T = 0:55 which is the time just before the steady-state solution starts to develop. coarse To compute the strong stability constants for this problem, u ne h and uh were computed on 80 80 and 40 40 triangular meshes (respectively) with a xed timestep, t = 1:0 10 3. The backward dual problem was then solved on an 81 81 nite dierence mesh with t = 1:0 10 3. Typical values of the strong stability constants are presented in Table 2 for dierent .
1:0 10 1:0 10 1:0 10 1:0 10
2 3 4 5
pC s 1 3:7826 10 1:3881 10 7:9574 10 8:7260 10
2 2 2 2
pC s 2 8:5563 10 1:5955 10 2:0305 10 2:2003 10
1 1 2 3
pC s= 2
85:5631 159:5488 203:0504 220:0305
pC s 3 1:3160 10 1:0733 10 2:2824 10 2:4580 10
1 1 1 1
Table 2: Computational estimates of the strong stability constants for dierent , where T = 0:55. We shall now present some numerical results for = 1:0 10 3. Firstly, we specify the background mesh to be the 5 5 triangular mesh shown in Figure 3(a). This mesh is initially re ned in order to properly resolve the boundary layers along x = 0 and y = 1 at time t = 0, i.e. so that E0(u0; u0h ; h) = 0 (see Figure 3(b)). Numerical results are presented in Figure 4 for TOLh = 1:0 10 1 and TOLk = 5:0 10 2. Here, we calculated C t = 8:0, C inv = 9:8 and the estimated error, E (uh; h; p; f ) = 8:7891 10 2. In Figures 4(a), 4(b) and 4(c), 4(d) we see that the adaptive algorithm has re ned the spatial mesh in the regions of the domain where the solution has
Figure 3: (a) Background mesh, with 25 nodes and 32 elements; (b) Background mesh adapted to resolve the initial condition, with 1007 nodes and 1738 elements.

In particular, we see that the spatial mesh has been refined at the points x = 0, y = 0 and x = 1, y = 1, where the boundary conditions are `nearly' discontinuous. Further, from Figure 4(c) we see that the mesh emanating from the bottom left-hand corner (i.e. at x = 0, y = 0) is finer at the top and at the bottom of the internal layer. This is because for ε = 1.0 × 10⁻³ the spatial part of the adaptive algorithm, i.e. (5.6a), is dominated by the term C_3‖h²R_2‖_Q, which measures the curvature of the numerical solution. In Figures 4(e), 4(f) we present a history of the number of nodes against time and the timestep size against time, respectively. In Figure 4(e) we see that initially there is a large number of nodes in the spatial mesh as the very steep layers propagate into the domain. However, as these layers become smoother through the process of diffusion, the number of nodes gradually decreases down to a minimum before the development of the boundary layer at x = 1, when the number of nodes increases again. The reverse of this can be seen in Figure 4(f) for the timestep size: initially the timesteps are very small in order to correctly capture the very large gradients of the solution. However, the timestep size gradually increases as the solution becomes smoother, before becoming small again when the outflow boundary layer develops.
Figure 4: Boundary/internal layer problem for TOL_h = 1.0 × 10⁻¹ and TOL_k = 5.0 × 10⁻² with ε = 1.0 × 10⁻³ and T = 0.55: (a) & (b) Mesh and solution (resp.) at time t = 0.1218, with 3109 nodes and 5764 elements; (c) & (d) Mesh and solution (resp.) at final time t = 0.55, with 2943 nodes and 5546 elements; (e) History of nodes against time; (f) History of timestep size against time.
6.2 Experiment 2: Rotating cone problem
In this section we shall consider the rotating cone problem (cf. Priestley [31]). Here, we let Ω = (0, 1)², f = 0, a(x, y) = 2π(2y − 1, 1 − 2x)^T and

  u_0(x) =  cos²(2πr),  for r ≤ 1/4,
            0,          otherwise,                  (6.3)

where r² = (2x − 1/2)² + (2y − 1)².

Numerical estimates of the strong stability constants for ε = 1.0 × 10⁻⁵ and T = 2 (i.e. after four revolutions) are given in Table 3. Here, u_h^fine and u_h^coarse were computed on 81 × 81 and 41 × 41 triangular meshes, respectively, with a fixed timestep Δt = 1.0 × 10⁻²; the backward dual problem was solved on an 81 × 81 finite difference mesh with Δt = 9.09 × 10⁻⁴.
√C_1^s          √C_2^s          √C_2^s/ε    √C_3^s
3.0286 × 10⁻²   7.9320 × 10⁻⁴   79.3196     1.5488 × 10⁻¹
Table 3: Computational estimates of the strong stability constants for the rotating cone problem for ε = 1.0 × 10⁻⁵ and T = 2.

The background mesh for this problem is shown in Figure 5. This mesh was adaptively refined at time t = 0 to resolve the initial condition u_0(x), see Figure 6; here, E_0(u_0, u_h^0, h) = 4.6783 × 10⁻⁶.
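To make the setup concrete, the following sketch evaluates the rotating velocity field, the cone initial condition (6.3), and the pure-transport (ε = 0) solution against which the actual error is measured below. It is an illustrative sketch only: the 2π scaling of a and the backward-rotation formula are inferred from the statement that T = 2 corresponds to four revolutions, and the function names are ours.

```python
import numpy as np

def velocity(x, y):
    # Rotating field a(x, y) = 2*pi*(2y - 1, 1 - 2x)^T: clockwise rotation
    # about (1/2, 1/2) with period 1/2 (so four revolutions by T = 2).
    return 2.0 * np.pi * (2.0 * y - 1.0), 2.0 * np.pi * (1.0 - 2.0 * x)

def u0(x, y):
    # Cone initial condition (6.3): cos^2(2*pi*r) for r <= 1/4, zero otherwise,
    # with r^2 = (2x - 1/2)^2 + (2y - 1)^2.
    r = np.sqrt((2.0 * x - 0.5) ** 2 + (2.0 * y - 1.0) ** 2)
    return np.where(r <= 0.25, np.cos(2.0 * np.pi * r) ** 2, 0.0)

def u_exact(x, y, t):
    # For eps = 0 the solution is u0 transported along the characteristics,
    # i.e. u0 evaluated at the point rotated backwards about (1/2, 1/2).
    p, q = x - 0.5, y - 0.5
    c, s = np.cos(4.0 * np.pi * t), np.sin(4.0 * np.pi * t)
    p0, q0 = c * p - s * q, s * p + c * q
    return u0(p0 + 0.5, q0 + 0.5)

# After a whole number of revolutions the cone returns to its initial position.
xs, ys = np.meshgrid(np.linspace(0, 1, 41), np.linspace(0, 1, 41))
print(np.max(np.abs(u_exact(xs, ys, 2.0) - u0(xs, ys))))  # ~0, up to rounding
```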
Figure 5: Background mesh for the rotating cone/cylinder problems, with 56 nodes and 86 elements.

To investigate the reliability and efficiency of the adaptive algorithm presented in Section 5.1, we provide in Table 4 a number of experiments for ε = 1.0 × 10⁻⁵ and T = 2 for different TOL_h and TOL_k. In each case we compute the error estimator E(u_h, h, k, f), the L²(L²) norm of the actual error (based on the analytic solution computed for ε = 0) and the effectivity index, E/‖e‖_Q.
Figure 6: (a) Initial mesh, and (b) initial condition, for the rotating cone problem, with 2466 nodes and 4904 elements.

TOL_h        TOL_k        Av. Nodes   E(u_h, h, k, f)   ‖e‖_Q           E/‖e‖_Q
7.0 × 10⁻³   7.0 × 10⁻²   865         7.1381 × 10⁻²     4.3346 × 10⁻³   16.47
3.0 × 10⁻³   5.0 × 10⁻²   1497        5.0210 × 10⁻²     3.7430 × 10⁻³   13.41
1.5 × 10⁻³   4.0 × 10⁻²   2079        3.9576 × 10⁻²     3.2585 × 10⁻³   12.15
1.0 × 10⁻³   3.0 × 10⁻²   2498        2.9705 × 10⁻²     3.4339 × 10⁻³   8.65
Table 4: Comparison of the effectivity index for the rotating cone problem for ε = 1.0 × 10⁻⁵ and T = 2.
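The last column of Table 4 is simply the ratio of the estimated error to the actual error; the following snippet (illustrative only) reproduces it from the two preceding columns.

```python
# Effectivity index E / ||e||_Q for the four runs of Table 4.
E   = [7.1381e-2, 5.0210e-2, 3.9576e-2, 2.9705e-2]
err = [4.3346e-3, 3.7430e-3, 3.2585e-3, 3.4339e-3]
print([round(a / b, 2) for a, b in zip(E, err)])  # approx. [16.47, 13.41, 12.15, 8.65]
```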
To give an indication of the size of the spatial mesh we also give the average number of nodes (Av. Nodes). Here, we see that in each case E(u_h, h, k, f) over-estimates the error, which is what we expect. To measure the efficiency of the adaptive algorithm we look at the size of the effectivity index: ideally we want this to be close to one. However, as noted by Johnson & Hansbo [26], an effectivity index close to one does not necessarily imply that the adaptive algorithm is efficient. Indeed, a computational mesh that is overly refined may give effectivity indices close to one, but the over-refinement clearly indicates an inefficient adaptive algorithm. For this reason we only use the effectivity index on relatively coarse meshes to give us an indication of the sharpness of the error estimator E(u_h, h, k, f), and hence an indication of the general efficiency of our adaptive algorithm. We see from Table 4 that our error estimator E(u_h, h, k, f) over-estimates the actual error by about an order of magnitude, which is quite significant.
TOL_h        TOL_k        C_t    C_inv
7.0 × 10⁻³   7.0 × 10⁻²   4.6    8.1
3.0 × 10⁻³   5.0 × 10⁻²   5.2    5.9
1.5 × 10⁻³   4.0 × 10⁻²   5.1    5.1
1.0 × 10⁻³   3.0 × 10⁻²   5.0    5.5

Table 5: Estimated values of C_t and C_inv for the rotating cone problem for ε = 1.0 × 10⁻⁵ and T = 2.
However, as we noted in Section 1, E(u_h, h, k, f) becomes sharper as h, k → 0 (i.e. as TOL_h, TOL_k → 0). Indeed, we see here that the size of the effectivity index has decreased by a factor of about 2 in going from the coarsest mesh, with an average of 865 nodes, to the finest mesh, with an average of 2498 nodes. We note that for each experiment presented in Table 4 the computed estimates of C_t and C_inv are given in Table 5.

Finally, in Figures 7 and 8 we present the numerical results for TOL_h = 1.5 × 10⁻³ and TOL_k = 4.0 × 10⁻². Here, we see from Figures 7(a),(b) and Figures 7(c),(d) that the fine region of the mesh `follows' the position of the cone, as we would expect. However, in each case we see that the adaptive algorithm has failed to refine the region of the cone nearest the centre of rotation. As a result we can see from Figures 7(d),(e) that there is a slight `leakage' of the numerical solution towards this centre. We note that this has also occurred in the region of the cone farthest from the centre of rotation, although the loss in accuracy at the base of the cone here is very small. Indeed, it is certainly comparable with the accuracy of the numerical solution considered in the y-direction (cf. Figure 7(f)), where the mesh remained fine. We shall come back to these points when we consider the rotating cylinder problem in the next section.

From Figure 8(a) we see that the number of nodes in the spatial mesh remains fairly constant with time. Further, from Figure 8(b) we see that the timestep size remains completely unchanged throughout most of the computation; the timestep size is only altered at the end of the computation to ensure that t_M + k_{M+1,n̂} = T, cf. Section 5.1. This behaviour is expected since the initial condition u_0(x) is smooth and there are also very small diffusion effects for ε = 1.0 × 10⁻⁵.
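The adjustment of the final timestep mentioned above, so that the time levels land exactly on T, can be sketched as follows; this is a generic illustration rather than the report's algorithm (5.6), and the names are ours.

```python
def march(T, propose_timestep, t0=0.0):
    """Advance the time levels to exactly T, shortening only the last step.
    `propose_timestep(t)` stands in for the timestep the adaptive algorithm
    would otherwise select at time t."""
    t, levels = t0, [t0]
    while t < T:
        k = propose_timestep(t)
        if T - t <= k:
            t = T          # final step: shorten k so that t_M + k_{M+1} = T
        else:
            t += k
        levels.append(t)
    return levels

print(march(2.0, lambda t: 3.6e-3)[-1])  # 2.0: the last level coincides with T
```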
Figure 7: Rotating cone problem for TOL_h = 1.5 × 10⁻³ and TOL_k = 4.0 × 10⁻² with ε = 1.0 × 10⁻⁵ and T = 2: (a) & (b) Mesh and solution (resp.) at time t = 1.116, with 2084 nodes, 4129 elements, min u = −6.39 × 10⁻³ and max u = 0.993; (c) & (d) Mesh and solution (resp.) at final time t = 2, with 1584 nodes, 3122 elements, min u = −1.61 × 10⁻² and max u = 0.989; (e) Cut through solution at t = 2 where y = 0.5 and 0 ≤ x ≤ 1; (f) Cut through solution at t = 2 where x = 0.25 and 0 ≤ y ≤ 1.
Figure 8: Rotating cone problem for TOL_h = 1.5 × 10⁻³ and TOL_k = 4.0 × 10⁻² with ε = 1.0 × 10⁻⁵ and T = 2: (a) History of nodes against time; (b) History of timestep size against time.
6.3 Experiment 3: Rotating cylinder problem
In this section we shall consider the rotating cylinder problem (cf. Johnson [23]). This problem is very similar to the rotating cone problem considered in Section 6.2. The only difference here is that the initial condition is now the (discontinuous) cylinder given by

  u_0(x) =  1,  for r ≤ 1/4,
            0,  otherwise,                  (6.4)

where r² = (2x − 1/2)² + (2y − 1)².

Numerical estimates of the strong stability constants for ε = 1.0 × 10⁻⁵ and T = 2 are given in Table 6. We note that these were calculated in exactly the same way as for the rotating cone problem.
√C_1^s          √C_2^s          √C_2^s/ε    √C_3^s
3.3071 × 10⁻²   1.6864 × 10⁻³   168.6436    9.9680 × 10⁻²
Table 6: Computational estimates of the strong stability constants for the rotating cylinder problem for ε = 1.0 × 10⁻⁵ and T = 2.

The adapted mesh (based on the background mesh in Figure 5) and initial condition at time t = 0 are shown in Figure 9; here, E_0(u_0, u_h^0, h) = 5.5149 × 10⁻⁴.
Figure 9: (a) Initial mesh, and (b) initial condition, for the rotating cylinder problem, with 2536 nodes and 5044 elements.

As before, to investigate the reliability and efficiency of the adaptive algorithm formulated in Section 5.1, we present a number of experiments for ε = 1.0 × 10⁻⁵ and T = 2 for different TOL_h and TOL_k; see Table 7. Here, we see that in each case the error estimator E(u_h, h, k, f) over-estimates the actual error, which again confirms the reliability of our adaptive algorithm. However, the effectivity indices here are smaller than those calculated for the rotating cone problem by about a factor of 1.5 to 2. This is expected since the a posteriori error analysis presented in Section 4 is rather pessimistic, in the sense that the error estimator E(u_h, h, k, f) bounds the error for a whole class of analytic solutions u of (6.1). Thus, for a very smooth problem (e.g. the rotating cone problem) we expect the error estimate to be far from sharp, hence leading to large effectivity indices. In contrast, we expect the error estimate to be sharper for a non-smooth problem (e.g. the rotating cylinder problem), thus leading to smaller effectivity indices. We note that for each experiment presented in Table 7 we give computed estimates of C_t and C_inv in Table 8.

In Figures 10 and 11 we present the numerical results for TOL_h = 3.0 × 10⁻² and TOL_k = 4.5 × 10⁻¹. From Figure 10(a) we observe, as with the rotating cone problem, that the number of nodes in the spatial mesh remains fairly constant over the length of the computation. However, in contrast with this, we see from Figure 10(b) that as the solution becomes smoother the timestep slowly increases. In Figures 11(a),(b) and 11(c),(d) we again see that the mesh `follows' the position of the cylinder. However, as for the rotating cone problem, we observe from Figures 11(a),(c) that the mesh is relatively coarse in the regions of the cylinder closest to and farthest from the centre of rotation. This again leads to `leakage' of the numerical solution (cf. Figure 11(e)), which is much more evident for this test problem.
TOL_h        TOL_k        Av. Nodes   E(u_h, h, k, f)   ‖e‖_Q           E/‖e‖_Q
6.0 × 10⁻²   6.0 × 10⁻¹   1422        5.8077 × 10⁻¹     5.8796 × 10⁻²   9.87
4.0 × 10⁻²   5.0 × 10⁻¹   1667        4.8528 × 10⁻¹     5.7983 × 10⁻²   8.37
3.0 × 10⁻²   4.5 × 10⁻¹   1915        4.3510 × 10⁻¹     5.7281 × 10⁻²   7.60
2.0 × 10⁻²   4.0 × 10⁻¹   2455        3.8526 × 10⁻¹     5.7896 × 10⁻²   6.65

Table 7: Comparison of the effectivity index for the rotating cylinder problem for ε = 1.0 × 10⁻⁵ and T = 2.

TOL_h        TOL_k        C_t    C_inv
6.0 × 10⁻²   6.0 × 10⁻¹   6.2    10.3
4.0 × 10⁻²   5.0 × 10⁻¹   6.2    10.3
3.0 × 10⁻²   4.5 × 10⁻¹   6.2    10.1
2.0 × 10⁻²   4.0 × 10⁻¹   6.2    10.0
Table 8: Estimated values of C_t and C_inv for the rotating cylinder problem for ε = 1.0 × 10⁻⁵ and T = 2.

However, despite this, the two cuts taken through the numerical solution at t = 2 (see Figures 11(e),(f)) clearly show the high accuracy achieved by combining an adaptive algorithm with the Lagrange-Galerkin method.

To see why this leakage happens we need to look at the local nature of the two parts of the error estimator that govern the spatial mesh, i.e. C_1‖h²R_1‖_Q and C_3‖h²R_2‖_Q. To do this we fix the spatial mesh (Figure 12(a) shows a zoom of the spatial mesh around the cylinder), let the initial condition move forward one timestep (fixed, k = 1.0 × 10⁻³), and plot the local quantities ‖e‖_{L²(K)}, C_1‖h²R_1‖_{L²(K)} and C_3‖h²R_2‖_{L²(K)} for each element K; see Figures 12(b), (c) and (d), respectively. From Figure 12(b) we see that the local error is fairly well equidistributed around the entire perimeter of the cylinder, although we note that the magnitude of the error is slightly larger in the region of the cylinder that is farthest from the centre of rotation (i.e. the left-hand side), since the velocity field is stronger there. This behaviour is also reflected in the plot of C_3‖h²R_2‖_{L²(K)} (see Figure 12(d)). However, the magnitude of this term is very small since ε = 1.0 × 10⁻⁵, and hence it has little effect on the spatial refinement algorithm. In contrast, the plot of C_1‖h²R_1‖_{L²(K)} shown in Figure 12(c) shows that this part of the error estimator only detects the error in the top and bottom regions of the outline of the cylinder.
Figure 10: Rotating cylinder problem for TOL_h = 3.0 × 10⁻² and TOL_k = 4.5 × 10⁻¹ with ε = 1.0 × 10⁻⁵ and T = 2: (a) History of nodes against time; (b) History of timestep size against time.

Moreover, the error in the left-hand side and right-hand side regions of the cylinder, which correspond to the regions farthest from and closest to the centre of rotation (respectively), is not detected by C_1‖h²R_1‖_{L²(K)}. Hence, the adaptive algorithm does not refine the mesh in these regions, which leads to the leakage shown in Figures 11(b),(d). The reason for this is the directionality of R_1. For instance, if we let a′ = (1, a^T)^T and ∇′u_h = (∂_t u_h, (∇u_h)^T)^T, then when the vector ∇′u_h is orthogonal to a′, i.e. when a′·∇′u_h = 0, C_1‖h²R_1‖_{L²(K)} becomes zero.
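In other words, a′·∇′u_h is just the space-time directional derivative of u_h along the transport direction; written out (our unpacking of the notation above, not an additional result from the report):

```latex
% With a' = (1, a^T)^T and \nabla' u_h = (\partial_t u_h, (\nabla u_h)^T)^T:
\[
  a' \cdot \nabla' u_h \;=\; \partial_t u_h + a \cdot \nabla u_h .
\]
```

Hence the term C_1‖h²R_1‖_{L²(K)} cannot detect variations of u_h that are purely transverse to the direction (1, a^T)^T, which is the situation on the sides of the cylinder closest to and farthest from the centre of rotation.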
Figure 13: An affine transformation from the global Cartesian coordinates to the local coordinates: the triangle K, with vertices (x_1, y_1), (x_2, y_2) and (x_3, y_3), is mapped onto the canonical triangle K̂ with vertices (0, 0), (1, 0) and (0, 1).

i.e. for x̂ ∈ K̂′, the function v̂|_{K̂′} is the reflection of v̂|_{K̂} in the line ξ + η = 1. In addition, we define

  C_R¹(Ŝ) := { v̂ : Ŝ → ℝ defined by (A.5) : ŵ ∈ C¹(K̂) }.        (A.6)

Let v̂ ∈ C_R¹(Ŝ). We now consider each side of the canonical triangle K̂ in turn: first consider the side with vertices at points 1 and 3, denoted by s_{1,3}. We note that

  v̂(0, η) = v̂(ξ, η) − ∫_0^ξ (∂v̂/∂ξ)(ζ, η) dζ.

Hence, we have that

  |v̂(0, η)| ≤ |v̂(ξ, η)| + ∫_0^1 |(∂v̂/∂ξ)(ζ, η)| dζ.        (A.7)
Integrating (A.7) over Ŝ, we get

  ∫_{s_{1,3}} |ŵ(x̂)| ds ≤ ∫_0^1 ∫_0^1 |v̂(x̂)| dx̂ + ∫_0^1 ∫_0^1 |(∂v̂/∂ξ)(x̂)| dx̂
                        ≤ ∫_0^1 ∫_0^1 |v̂(x̂)| dx̂ + ∫_0^1 ∫_0^1 ( (∂v̂/∂ξ)² + (∂v̂/∂η)² )^{1/2} dx̂
                        = 2 ( ∫_{K̂} |ŵ| dx̂ + ∫_{K̂} |∇̂ŵ| dx̂ ),        (A.8)

where the final equality follows since v̂ is the reflection of ŵ, so that the integrals of |v̂| and |∇̂v̂| over Ŝ are twice the corresponding integrals of ŵ over K̂.
Next, we consider the side with vertices at points 1 and 2, denoted by s_{1,2}: again, using the fact that

  v̂(ξ, 0) = v̂(ξ, η) − ∫_0^η (∂v̂/∂η)(ξ, ζ) dζ,

we find that

  ∫_{s_{1,2}} |ŵ(x̂)| ds ≤ 2 ( ∫_{K̂} |ŵ| dx̂ + ∫_{K̂} |∇̂ŵ| dx̂ ).        (A.9)
Finally, we consider the side with vertices at points 2 and 3, denoted by s_{2,3}: we first note that

  ∫_{s_{2,3}} |v̂(x̂)| ds = √2 ∫_0^1 |v̂(ξ, 1 − ξ)| dξ.

Further, using the fact that

  v̂(ξ, 1 − ξ) = v̂(ξ, η) + ∫_η^{1−ξ} (∂v̂/∂η)(ξ, ζ) dζ,

we find that

  ∫_{s_{2,3}} |ŵ(x̂)| ds ≤ 2√2 ( ∫_{K̂} |ŵ| dx̂ + ∫_{K̂} |∇̂ŵ| dx̂ ).        (A.10)
Hence, on the canonical triangle, we have shown that for ŵ ∈ C¹(K̂),

  ∫_s |ŵ(x̂)| ds ≤ 2√2 ( ∫_{K̂} |ŵ| dx̂ + ∫_{K̂} |∇̂ŵ| dx̂ ),        (A.11)

where s = s_{1,3}, s_{1,2} or s_{2,3}. We shall now show that (A.11) holds for ŵ ∈ H¹(K̂) using a density argument similar to the one presented in Brenner & Scott [6] (Proposition 1.6.3). Since C¹(K̂) is dense in W^{1,1}(K̂), we may pick a sequence of functions ŵ_j ∈ C¹(K̂) which converges to ŵ ∈ W^{1,1}(K̂) in the W^{1,1}(K̂) norm as j → ∞. Using (A.11) and the Cauchy-Schwarz inequality, we have that
  ∫_s |ŵ_j − ŵ_k| ds ≤ 2√2 ( ∫_{K̂} |ŵ_j − ŵ_k| dx̂ + ∫_{K̂} |∇̂(ŵ_j − ŵ_k)| dx̂ ) ≤ 2√2 ‖ŵ_j − ŵ_k‖_{W^{1,1}(K̂)} → 0

as k, j → ∞. Hence, {ŵ_j} is a Cauchy sequence in L¹(s). Furthermore, since this space is complete, there must be a unique limit û ∈ L¹(s) such that ‖û − ŵ_j‖_{L¹(s)} → 0 as j → ∞. We define ŵ|_s := û.
We need to ensure that this definition does not depend on the particular sequence that we choose. Suppose we pick another sequence of functions ŵ′_j ∈ C¹(K̂) which satisfies ‖ŵ − ŵ′_j‖_{H¹(K̂)} → 0 as j → ∞. Using the triangle inequality and (A.11) again, we have that
  ‖û − ŵ′_j‖_{L¹(s)} ≤ ‖û − ŵ_j‖_{L¹(s)} + ‖ŵ_j − ŵ′_j‖_{L¹(s)}
                     ≤ ‖û − ŵ_j‖_{L¹(s)} + 2√2 ‖ŵ_j − ŵ′_j‖_{W^{1,1}(K̂)}
                     ≤ ‖û − ŵ_j‖_{L¹(s)} + 2√2 ( ‖ŵ_j − ŵ‖_{W^{1,1}(K̂)} + ‖ŵ − ŵ′_j‖_{W^{1,1}(K̂)} )
                     → 0
as j → ∞. Thus, ŵ|_s is well defined in L¹(s). We shall now check that (A.11) holds for ŵ ∈ H¹(K̂): using the triangle inequality, (A.11) and the Cauchy-Schwarz inequality, we have

  ‖ŵ‖_{L¹(s)} = ‖û‖_{L¹(s)}
              ≤ ‖û − ŵ_j‖_{L¹(s)} + ‖ŵ_j‖_{L¹(s)}
              ≤ ‖û − ŵ_j‖_{L¹(s)} + 2√2 ( ∫_{K̂} |ŵ_j| dx̂ + ∫_{K̂} |∇̂ŵ_j| dx̂ )
              ≤ ‖û − ŵ_j‖_{L¹(s)} + 2√2 ( ∫_{K̂} |ŵ_j − ŵ| dx̂ + ∫_{K̂} |∇̂(ŵ_j − ŵ)| dx̂ )
                                  + 2√2 ( ∫_{K̂} |ŵ| dx̂ + ∫_{K̂} |∇̂ŵ| dx̂ )
              ≤ ‖û − ŵ_j‖_{L¹(s)} + 2√2 ‖ŵ_j − ŵ‖_{W^{1,1}(K̂)} + 2√2 ( ∫_{K̂} |ŵ| dx̂ + ∫_{K̂} |∇̂ŵ| dx̂ )
              → 2√2 ( ∫_{K̂} |ŵ| dx̂ + ∫_{K̂} |∇̂ŵ| dx̂ ),        (A.12)
as j → ∞. Finally, to complete the proof of the lemma, we must now convert back to the global coordinate system, as follows:

  ∫_{K̂} |ŵ| dx̂ = ∫_K |w| det(B)⁻¹ dx = (2 meas(K))⁻¹ ∫_K |w| dx ≤ (2 c_1 h_K²)⁻¹ ∫_K |w| dx,        (A.13)
where we have used (3.3b). Similarly, if we use inequalities (A.4) and (3.3b), we have that

  ∫_{K̂} |∇̂ŵ| dx̂ = ∫_K |B^T ∇w| det(B)⁻¹ dx ≤ |B^T| (2 meas(K))⁻¹ ∫_K |∇w| dx ≤ c_1⁻¹ h_K⁻¹ ∫_K |∇w| dx.        (A.14)

Finally, if we denote the vector (dξ/dt, dη/dt)^T by x̂_t, then, if the side s of K̂ corresponds to the side σ ⊂ ∂K, we have

  ∫_σ |w| ds = ∫_0^1 |ŵ| |B x̂_t| dt ≤ |B| ∫_s |ŵ| ds ≤ 2 h_K ∫_s |ŵ| ds,        (A.15)

where we have again used (A.4). Substituting (A.13), (A.14) and (A.15) into (A.12) gives the desired result. □
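As a sanity check, the canonical-triangle inequality (A.11) can be verified numerically for a sample smooth function; the sketch below uses a simple midpoint-rule quadrature on K̂ and on the edge ξ = 0, with the test function and resolution chosen purely for illustration.

```python
import numpy as np

# Sample smooth function on the canonical triangle K_hat and its gradient.
w  = lambda xi, eta: np.exp(xi) * np.sin(3.0 * eta) + xi * eta
wx = lambda xi, eta: np.exp(xi) * np.sin(3.0 * eta) + eta
wy = lambda xi, eta: 3.0 * np.exp(xi) * np.cos(3.0 * eta) + xi

n = 400
h = 1.0 / n
xi, eta = np.meshgrid((np.arange(n) + 0.5) * h, (np.arange(n) + 0.5) * h)
inside = xi + eta < 1.0  # midpoint rule over the triangle xi + eta <= 1

lhs = np.sum(np.abs(w(0.0, (np.arange(n) + 0.5) * h))) * h          # edge xi = 0 (side s_13)
area_w    = np.sum(np.abs(w(xi, eta))[inside]) * h * h
area_grad = np.sum(np.sqrt(wx(xi, eta) ** 2 + wy(xi, eta) ** 2)[inside]) * h * h
rhs = 2.0 * np.sqrt(2.0) * (area_w + area_grad)

print(lhs, rhs, lhs <= rhs)  # the trace integral is bounded as in (A.11)
```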
Appendix B Inverse estimate
Lemma B.1 Let n ∈ ℕ₀ and let K ∈ T^n, where T^n satisfies conditions (3.3). If w ∈ P₁(K), then there exists a positive constant C_inv such that

  |w|_{H¹(K)} ≤ C_inv h_K⁻¹ ‖w‖_{L²(K)},

where C_inv = 6/c_1.

Proof As in the proof of Lemma A.1, we first work in local coordinates on the canonical triangle K̂ (cf. Figure 13). Since P₁(K̂) is a finite-dimensional space, we have by equivalence of norms that

  ‖ŵ‖_{H¹(K̂)} ≤ C ‖ŵ‖_{L²(K̂)}   for all ŵ ∈ P₁(K̂),

where C is a positive constant. Further, from this relation, we deduce that

  |ŵ|²_{H¹(K̂)} ≤ C ‖ŵ‖²_{L²(K̂)}   for all ŵ ∈ P₁(K̂).        (B.1)

We now proceed to calculate the minimum C such that (B.1) holds. For ŵ ∈ P₁(K̂), we write ŵ = aξ + bη + d, where a, b, d ∈ ℝ. Hence, we have that

  |ŵ|²_{H¹(K̂)}  = (1/2) (a² + b²),                                   (B.2a)
  ‖ŵ‖²_{L²(K̂)}  = (1/12) (a² + ab + 4ad + 6d² + b² + 4bd).           (B.2b)

In order to minimise C, it is sufficient to maximise the function f(a, b, d) defined by

  f(a, b, d) = 6 (a² + b²) / (a² + ab + 4ad + 6d² + b² + 4bd).

We find that the global maximum of f is 36, which corresponds to the minimum value of the constant C in (B.1). As before, to prove the lemma we must now transform back to the global coordinate system: firstly, using (3.3b), we note that
  ‖ŵ‖_{L²(K̂)} = (2 meas(K))^{−1/2} ‖w‖_{L²(K)} ≤ h_K⁻¹ (2 c_1)^{−1/2} ‖w‖_{L²(K)}.        (B.3)

Also, using (3.3b) again and inequalities (A.4), we have

  |w|_{H¹(K)} = ( ∫_{K̂} |B^{−T} ∇̂ŵ|² det(B) dx̂ )^{1/2} ≤ (det(B))^{1/2} |B^{−T}| |ŵ|_{H¹(K̂)} ≤ (2/c_1)^{1/2} |ŵ|_{H¹(K̂)}.        (B.4)

Substituting (B.3) and (B.4) into (B.1) gives the desired result. □
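The claim that the global maximum of f is 36 amounts to a generalized eigenvalue computation: f is the ratio of the quadratic forms in (B.2a) and (B.2b), so its maximum is the largest eigenvalue of the corresponding matrix pencil. A quick numerical check (illustrative only; the matrices below are assembled from (B.2)):

```python
import numpy as np
from scipy.linalg import eigh

# Quadratic forms from (B.2): |w|^2_{H1} = v^T A v and ||w||^2_{L2} = v^T M v
# for v = (a, b, d)^T, so max f = max_v (v^T A v)/(v^T M v), i.e. the largest
# generalized eigenvalue of the pair (A, M).
A = 0.5 * np.diag([1.0, 1.0, 0.0])
M = (1.0 / 12.0) * np.array([[1.0, 0.5, 2.0],
                             [0.5, 1.0, 2.0],
                             [2.0, 2.0, 6.0]])
print(eigh(A, M, eigvals_only=True))  # approximately [0, 12, 36]: the maximum is 36
```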