finite element methods for elliptic and parabolic

1 downloads 0 Views 523KB Size Report
I have never intended to write a Lecture Note on this topic, as there are many ... Chapter 4 begins with a brief note on the finite dimensional approximation to the ...
Lecture Notes On

FINITE ELEMENT METHODS FOR ELLIPTIC AND PARABOLIC PROBLEMS 1

Amiya Kumar Pani

Industrial Mathematics Group Department of Mathematics Indian Institute of Technology, Bombay Powai, Mumbai-4000 76 (India).

IIT Bombay, June 2012 . 1 Advanced Level Training Programme in Differential Equations Differential Equations during 10th June - 30th June, 2012 under the National Programme on Differential Equations:Theory, Computation and Applications (NPDE-TCA).

‘ Everything should be made simple, but not simpler’. - Albert Einstein.

Contents Preface

I

1 Introduction

1

1.1

Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1

1.2

Finite Difference Methods (FDM). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

4

1.3

Finite Element Methods (FEM). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

7

2 Theory of Distribution and Sobolev Spaces 2.1

Review of L2 - Space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

8

2.2

A Quick Tour to the Theory of Distributions. . . . . . . . . . . . . . . . . . . . . . . . . .

10

2.3

Elements of Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

16

3 Abstract Elliptic Theory

4

23

3.1

Abstract Variational Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

24

3.2

Some Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

27

Elliptic Equations

39

4.1

Finite Dimensional Approximation to the Abstract Variational Problems. . . . . . . . . .

39

4.2

Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

41

4.3

Computational Issues

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

48

Adaptive Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

51

4.4 5

8

Heat Equation

55

5.1

Some Vector-Valued Function Spaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

55

5.2

Weak Formulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

56

5.3

Semidiscrete Galerkin Approximation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

57

5.4

Completely Discrete Schemes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

63

ii

Bibliography

68

HOW THESE NOTES CAME TO BE

I have never intended to write a Lecture Note on this topic, as there are many excellent books available in literature. But, when a group of enthusiastic faculty members in the Department of Mathematics at the Universidade Federale do Paran´a (UFPR) requested me to give some lectures on the theory of finite elements, I immediately rushed to the Central Library of UFPR to check some books on this topic. To my dismay (may be it is a boon in disguise), I could not find any relevant books in the library here. More over, in my first lecture, I realised the urgent need of supplying some notes in order to keep their interest on. However, thank to Professor Vidar Thom´ee, I had his notes entitled Lectures on Approximation of Parabolic Problems by Finite Elements [13] and thank to the netscape for downloading [7] from the home page of Professor D. Estep from CALTEC. So in an attempt to explain the lectures given by Professor Vidar at IIT, Bombay , first four chapters are written. Thank to LATEX and Adriana, she typed first chapter promptly and then the flow was on. So by the end of the series, we had these notes. At this point let me point out that the write up is influenced by the style of Mercier [10] (although the book is not available here, but it is more due to my experience of using this book for a course at IIT, Bombay). Sometimes back, I asked myself: Is there any novelty in the present approach? Well, (Bem!) the answer is certainly in negative. But for the following simple reason this one may differ from those books written by stalwarts of this field : Some questions are posed in the introductory chapter and throughout this series, attempts have been made to provide some answers. Moreover for finite element methods, the usefulness of Lax equivalence theorem has been highlighted throughout these notes and the role as well as the appropriate choice of intermediate or auxiliary projection has been brought out in the context of finite element approximations to parabolic problems. It seems to be a standard practice that in the Preface the author should give a brief description of each chapter of his / her book. In order to pay respect to our tradition, I (although first person is hardly used in writing articles) am making an effort to present some rudiments of each chapter of this lecture note. In the introductory chapter, some basic (!) questions are raised and in the subsequent chapters, some answers have been provided. To motivate the finite element methods, a quick description on the finite difference method to the Poisson’s equation is also given. Chapter 2 is devoted to a quick review of the theory of distribution, generalised derivatives and some basic elements of Sobolev spaces. Concept of trace and Poincar´e inequality are also discussed in this chapter. In chapter 3, a brief outline of abstract variational theory is presented . A proof of Lax-Milgram Theorem is also included. Several examples on elliptic problems are discussed and the question of solvability of their weak formulations are examined. Chapter 4 begins with a brief note on the finite dimensional approximation to the abstract variational problems. An introduction of finite element methods applied to two elliptic problems is also included. A priori error estimates are also derived. Based on Eriksson et al. [7], an adaptive procedure is outlined with a posteriori error estimates. 
Finally in chapter 5, a finite element Galerkin method is applied to the heat equation in spatial direction and for the resulting semidiscrete scheme, stability results for the approximate solution are derived. Since a direct comparision between the exact solution and the Galerkin approximation may not

,in general. yield optimal error estimates, in this chapter, concept of intermediate solution or projection is discussed and then stability results are used to derive semidiscrete error estimates. Finally, a full discretization is achieved by replacing the time derivative by difference quotients. Both (implicit) backward Euler and Crank-Nicolson schemes are described . As in semidiscrete case, optimal error bounds are also proved for these completely discrete schemes. I am indebted to Professors Bob Anderssen, Purna C. Das, Ian Sloan, Graeme Fairweather, Vidar Thom´ee and Lars Wahlbin for introducing me the beautiful facets of Numerical Analysis. The present set of notes is a part of the lectures given in the Department of Mathematics at Federal University of Parana, Curitiba. This would have not been possible without the support of Professor Jin Yun Yuan. He helped me in typing and correcting the manuscript. He is a good human being and a good host and I shall recommend the readers to visit him. I greately acknowledge the financial support provided by the UFPR. Thank to the GOROROBA group for entertaining me during lunch time and I also acknowledge the moral support provided by the evening caf´e club members (Carlos, Corr´ea Eidem, Fernando (he is known for his improved GOROROBA), Jose, Zeca). Perhapse, it would be incomplete if I do not acknowledge the support and encouragement given by my wife Tapaswini. She allowed me to come here in a crucial juncture of her life. Thanks to the email facility through which I get emails from my son Aurosmit and come to know more about my new born baby ‘ Anupam’ ( I am waiting eagerly to see him, he was born when I was away to Curitiba). I would like to express my heart felt thank to the family of Professor Jin Yun and Professor Beatriz for showing me some nice places in and around Curitiba. Finally, I am greately indebted to my friend Professor Kannan Moudgalya and his family for their help and moral support provided to my family during my abscence. Finally, I acknowledge the financial support provided by Professor Graeme Fairweather to attend the International Workshop and Conference on Analysis and Applications. Amiya Kumar Pani

Chapter 1

Introduction In this introductory chapter, we briefly describe various issues related to finite element methods for solving differential equations.

1.1

Background.

We shall now start with a mathematical model which describes the vibration of a drum. Let Ω ⊂ IR2 be a thin membrane fixed to the brim of a hallow wooden structure like: Given an external force f (as show in the figure) we are interested to find the displacement u at any point (x, y) in the domain Ω . The above model gives rise to the following partial differential equation: Find the displacement u(x, y) satisfying −∆u(x, y) = f (x, y), (x, y) ∈ Ω,

(1.1)

u(x, y) = 0, (x, y) ∈ ∂Ω,

∂2 ∂2 + 2 and ∂Ω is the boundary of the domain Ω. Since the boundary ∂Ω is fixed, there 2 ∂x ∂y is no displacement and hence, u(x, y) = 0 on the boundary ∂Ω. where ∆ =

By a solution (classical ) u of (1.1), we mean a twice continuously differentiable function u which satisfies the partial differential equation (1.1) at each point (x, y) in Ω and also the boundary condition. Note that f has to be continuous, if we are looking for the classical solutions of (1.1). For the present problem, the external force f may not be imparted continuously. In fact f can be a point force, i.e., a

f Ω

1

2

CHAPTER 1. INTRODUCTION

a

uo

x

force applied at a point in the domain. Thus a physically more relevant problem is to allow more general f , that is, f may have some discountinuities. Therefore, there is a need to generalize the concept of solutions. In early 1930, S.L. Sobolev came across with a similar situation while he was dealing with the following first order hyperbolic equation:

ut + aux = u(x, 0) =

0, t > 0, −∞ < x < ∞ u0 (x), −∞ < x < ∞.

(1.2)

Here, a is a real, positive constant and u0 is the initial profile. It is well know the exact solution u(x, t) = u0 (x − at). In this case, the solution preserves the shape of the initial profile. However, if u0 is not differentiable (say it has a kink) the solution u is still meaningful physically, but not in the sense of classical solution. These observations were instrumental for the development of the modern partial differential equations. Again comming back to our vibration of drum problem, we note that in mechanics or in physics, the same model is described through a minimization of total energy, say J(v), i.e., minimization of J(v) subjects to the set V of all possible admissible displacements, where J(v) =

1 2 |

Z



(|

∂v ∂v 2 | + | |2 ) dxdy − ∂x ∂y {z }

Kinetic Energy mass being unit

Z

f v dxdy | Ω {z }

(1.3)

P otential Energy

and V is a set of all possible displacements v such that the above integrals are meaningful and v = 0 on ∂Ω. More precisely, we cast the above problem as Find u ∈ V such that u = 0 on ∂Ω and u minimize J, that is, J(u) = M in J(v). (1.4) v∈V

The advantage of the second formulation is that the displacement u may be once continuously differentiable and the external force f may be of general form, i.e., f may be square integrable and may allow discontinuties. Further, it is observed that every solution u of (1.1) satisfies (1.4) (it is really not quite aparrent, but it is indeed possible, this point we shall closely examine later on in the course of my lectures). However, the converse need not be true, since in (1.4) the solution u is only once continuosly differentiable. We shall see subsequently that (1.4) is a weak formulation of (1.1), and it allows physically more relevant external forces, say even point force. Choice of the admissible space V . At this stage, it is worthwile to analyse the space V , which will motivate the introduction of Sobolev space in Chpater 2. With f ∈ L2 (Ω) ( Space of all square integrable functions), V may be considered as: Z Z ∂v ∂v {v ∈ C 1 (Ω) ∩ C(Ω) : |v|2 dxdy < ∞, (| |2 + | |2 ) dxdy < ∞ and v = 0 on ∂Ω}, ∂y Ω Ω ∂x

3

CHAPTER 1. INTRODUCTION

where C 1 (Ω), the space of one time continuously differentiable functions in Ω is such that C 1 (Ω) = {v : v, vx , vy ∈ C(Ω)} and C(Ω) is the space of continuous functions defined on Ω with Ω = Ω ∪ ∂Ω. Hence, u|∂Ω is properly defined for u ∈ C(Ω). Unfortunately, V is not complete with measurement (norm) given by sZ kuk1 = (|u|2 + |∇u|2 ) dxdy, Ω

∂u ∂u 2 ∂u 2 2 where ∇u = ( ∂u ∂x , ∂y ) and |∇u| = | ∂x | + | ∂y | . Roughly speaking, completeness means that all possible Cauchy sequences should find their limits inside that space. In fact, if V is not complete, we add the limits to make it complete. One is curious to know ‘why do we require completeness ?’ In practice, we need to solve the above problem by using some approximation schemes, i.e., we should like to approximate u by a sequence of approximate solutions {un }. Many times {un } forms a Cauchy sequence (that is a part of convergence analysis). Unless the space is complete, the limit may not be inside that space. Therefore, a more desirable space is the completion of V . Subsequently, we shall see that the completion of ∂v ∂v V is H01 (Ω). This is a Hilbert Sobolev Space and is stated as: {v ∈ L2 (Ω) : ∂x , ∂y ∈ L2 (Ω) and u(x, y) = 0, (x, y) ∈ ∂Ω}. Obviously, the square integrable function may not have partial derivatives in the usual sense. In order to attach a meaning, we shall generalize the concept of differentiation and that we shall discuss prior to the introduction of Sobolev spaces. One should note that the meaning of the H 1 -function v on Ω which satisfies v = 0 has to be understood in a general sense.

If we accept (1.4) as a more general formulation, and the equation (1.1) is its Euler form, then it is natural to ask: ‘Does every PDE have a weak form which is of the form (1.4) ?’ The answer is simply in negative. Say, for a flow problem with a transport or convective term : −∆u + ~b · ∇u = f, it does not have an energy formulation like (1.4). So, next question would be: ‘Under what conditions on PDE, such a minimization form exists ?’ More over, ‘Is it possible to have a more general weak formulation which in a particular situation coincides with minimization of the energy ?’ This is what we shall explore in the course of these lectures. If formally we multiply (1.1) by v ∈ V (space of admissible displacements) and apply Gauss divergence theorem, the contribution due to boundary terms becomes zero as v = 0 on ∂Ω. Then we obtain Z

∂u ∂v ∂u ∂v + ) dxdy = ( ∂y ∂y Ω ∂x ∂x

Z

f v dxdy.

(1.5)



R For flow problem with a tranport term ~b.∇u (1.5) we have an extra term Ω ~b.∇u v dxdy added to the left hand side of (1.5). This is a more general weak formulation and we shall also examine the relation between (1.4) and (1.5). Given such a weak formulation, ‘Is it possible to establish its wellposedness 1 ?’ Subsequently in Chapter 3, we shall settle this issue by using Lax-Milgram Lemma. Very often problem like (1.1) doesn’t admit exact or analytic solutions. For the problem (1.1), if the boundary is irregular, i.e.,Ω need not be a square or a circle, it is difficult to obtain an analytic solution. Even when analytic solution is known, it may contain complicated terms or may be an infinite series. In 1 The problem is said to be wellposed (in the sense of Hadamard) if it has a solution, the solution is unique and it depends continuously on the data

4

CHAPTER 1. INTRODUCTION

y

y_j

x_i

x

both the cases, one resorts to numerical approximations. One of the objectives of the numerical procedures for solving differential equations is to cut down the degrees of freedom (the solutions lie in some infinite dimensional spaces like the Hilbert Sobolev spaces described above) to a finite one so that the discrete problem can be solved by using computers. Broadly speaking, there are three numerical methods for solving PDE’s: Finite Difference Methods, Finite Element Procedures and Boundary Integral Techniques. In fact, we have also Spectral Methods, but we may prefer to give some comments on the class of spectral methods rather describing it as a separate class of methods, while dealing with finite element techniques. Below, we discuss only finite difference methods and present a comparision with finite element methods.

1.2

Finite Difference Methods (FDM).

Let us assume that Ω is a unit square,i.e., Ω = (0, 1) × (0, 1) in (1.1) In order to derive a finite difference scheme, we first discretize the domain and then discretize the 1 with mesh points equation. For fixed positive integers M and N , let h1 , the spacing in x-direction be M 1 xi = ih1 , i = 0, 1, . . . , M and let h2 := with mesh points or nodal points in the direction of y as N yj = jh2 , j = 0, 1, . . . , N . For simplicity of exposition, consider uniform partition in both directions, i.e., M = N and h = h1 = h2 . The set of all nodal points {(xi , yj ), 0 ≤ i, j ≤ M } forms a mesh in Ω. The finite difference scheme is now obtained by replacing the second derivatives in (1.1) at each nodal pints (xi , yj ) by central difference quotients. Let u at (xi , yj ) be called uij . For fixed j using Taylor series expansions of u(xi+1 , yj ) and u(xi−1 , yj ) around (xi , yj ), it may be easily found out that ui+1,j − 2uij + ui−1,j ∂2u |(xi ,yj ) = + O(h2 ). 2 ∂x h2

(1.6)

∂2u ui,j+1 − 2uij + ui,j−1 |(xi ,yj ) = + O(h2 ). 2 ∂y h2

(1.7)

Similarly for fixed i, replace

5

CHAPTER 1. INTRODUCTION

Now, we have at each interior points (xi , yj ), 1 ≤ i, j ≤ M − 1, −

1 [ui−1,j + ui+1,j + ui,j−1 + ui,j+1 − 4ui,j ] = fij + O(h2 ), h2

(1.8)

where fij = f (xi , yj ). However, it is not possible to solve the above system (because of the presence of O(h2 )-term). In order to drop this term, define Uij , an approximation of uij as solution of the following algebraic equations: Ui−1,j + Ui+1,j + Ui,j−1 + Ui,j+1 − 4Ui,j = −h2 fij , 1 ≤ i, j ≤ M − 1

(1.9)

with U0,j = 0 = UM,j , 0 ≤ j ≤ M and Ui,0 = Ui,M = 0, 0 ≤ i ≤ M . Writing a vector U in lexicographic ordering, i.e., U = [U1,1 , U2,1 , . . . , UM−1,1 , U1,2 , . . . , UM−1,2 , . . . , U1,M−1 , . . . , UM−1,M−1 ]T , it is easy to put (1.9) in a block diagonal form AU = F

(1.10)

where A = diag[B, . . . , B] and each block B = triad[1, −4, 1] is of size (M − 1) × (M − 1). The size of the matrix A is (M − 1)2 × (M − 1)2 . Note that the matrix A is very sparse (only three or five non-zero elements in each row of length (M − 1)2 ). This is an example of a large sparse system which is diagonally dominant (not strictly) 2 and symmetric. One may be curious to know (prior to the actual computation) : ‘Is this system (1.10) solvable?’ Since A is diagonally dominant and irreducible 3 , then A is invertible. How to compute U more efficiently is a problem related to Numerical Linear Algebra. However, it is to be noted that for large and sparse systems with large bandwidth, the iterative methods are more successful. One more thing which bothers a numerical analyst (even the user community) is the convergence of the discrete solution to the exact solution as the mesh size becoming smaller and smaller. Sometimes, it is important to know ‘how fast it converges’. This question is connected with the order of convergence. Higher rate of convergence implies faster computational process. Here, a measurement (norm) is used for quantification. At this stage, let us recall one of the important theorem called Lax- Ritchmyer Equivalence Theorem on the convergence of discrete solution. It roughly states that ‘For a wellposed linear problem, a stable numerical approximation is convergent if and only if it is consistent’. The consistency is somewhat related to the (local) truncation error. It is defined as the amount by which the exact or true solution u(xi , yj ) does not satisfy the difference scheme (1.9) by u(xi , yj ). From (1.8), the truncation error say τij at (xi , yj ) is O(h2 ). Note that this is the maximum rate of convergence we can expect once we choose this difference scheme. The concept of stability is connected with the propagation of round off or chopping off errors during the course of computation. We call a numerical method stable , of course with respect to some measurement, if small changes (like these accumulation of errors) in the data do not give rise to a drastic change in the solution. Roughly speaking, computations in different machines should not give drastic changes in the numerical solutions, i.e., stability will ensure that the method does not depend on a particular machine we use. Mostly for stability, we bound with respect to some measurements the changes in solutions (difference between unperturbed and pertubed solutions) by the perturbation in data. However, for linear discrete problems a uniform a priori bound P 2 |aij |, ∀i. If strict inequality holds , then A is strictly diagonally A matrix A = [aij ] is diagonally dominant if |aii | ≥ j dominant. 3 The matrix A is irreducible if it does not have a subsystem which can be solved independently.

6

CHAPTER 1. INTRODUCTION

(uniform with respect to the discretization parameter ) on the computed solution yields a stability of the system. For (1.10), the method is stable with respect some norm say k · k∞ if kU k∞ ≤ CkF k∞ , where C is independent of h and kU k∞ = max |Uij |. This is an easy consequence of invertibility of A and 0≤i,j≤M

boundedness of kA−1 k∞ independent of h. It is possible to discuss stability with respect to other norms like Euclidean norm. If u = [u1,1 , . . . , u2,1 , . . . , uM−1,1 , . . . , uM−1,M−1 ]T , then the error u − U satisfies A(u − U ) = Au − AU = Au − F = τ, where τ is a vector representing the local truncation errors. Hence, using stability ku − U k∞ ≤ Ckτ k ≤ Ch2 , where C depends on maximum norm of the 4th derivative of the exact solution u. It is to be remarked here that unless the problem is periodic it is impossible to achieve u ∈ C 4 (Ω), if Ω is a square (in this case at most u ∈ C 2 (Ω) ∩ C(Ω)). Therefore, the above error estimate or order of convergence looses its meaning unless the problem is periodic. On the other hand, if we choose boundary ∂Ω of Ω to be very nice (i.e., the boundary can be represented locally by the graph of a nice function), then u may be in C 4 (Ω), but we loose order of convergence as mesh points may not fall on the boundary of Ω and we may use some interpolations to define the boundary values on the grids. One possible plus point “why it is so popular amongst the user community” is that it is easy to implement and in the interior of the domain, it yields a reasonable good approximation. Some of the disadvantages, we would like to recall are: • It is difficult to apply FDM to the problems with irregular boundaries • It requires higher smoothness on the solutions for the same order of convergence (this would be clarified in the context of finite element method ). Nicer boundary may lead to smoother solutions, but there is a deterioration in the order of convergence near the boundary. • In order to mimick the basic properties of the physical system in the entire domain, it is desirable to refine the mesh only on the part of the domain, where there is a stiff gradient (i.e., the magnitude of ∇u is large) or a boundary layer. Unfortunately, for FDM, it is difficult to do refinements locally. Therefore, a finer refinement on the entire domain will dramatically increase the computational storage and cost. • The computed solution is obtained only on the grids. Therefore, for computation of solution on the points other than the grid points, we need to use some interpolation techniques. However, the finite element methods take care of the above points, i.e., it can be very well applied to problems with irregular domains, requires less smoothness to obtain the same order of convergence (say for O(h2 ), we need u ∈ H 2 ∩ H01 and this smoothless for u is easy to derive for a square domain); it is possible to refine the mesh locally and finally, the solution is computed at each and every point in the domain.

7

CHAPTER 1. INTRODUCTION

1.3

Finite Element Methods (FEM).

A basic step in FEM is the reformulation (1.5) of the original problem (1.1). Note that there is a reduction in the differentiation and this is a more desirable property for the numerical computation (roughly speaking, numerical differentiation brings ill conditioning to the system ). Then by discretizing the domain, construct a finite dimensional space Vh where h is the discretization parameter with property that the basis functions have small supports in Ω . Pose the reformulated problem in the finite dimensional setting as : Find uh ∈ Vh such that Z Z f vh dxdy (1.11) ∇uh · ∇vh dxdy = Ω



The question is to be answered ‘Is the above system (1.11) solvable ?’ Since (1.11) leads to a system of linear solutions, one appeals to the techiniques in numerical linear algebra to solve this system. Next question, we may like to pose as: ‘Does uh converge to u in some measurements as h −→ 0?’ If so, Is it possible to find its order of convergence ? We may wonder, whether we have used (Lax-Ritchmyer) equivalence theorem here. Consistency is related to the truncation of any v ∈ V by vh ∈ Vh . If V is a Hilbert space, then vh is the best approximation of v in Vh and it is possible quantify the error v − vh in the best approximation. The beauty of FEM is that stability is straightforward and it mimicks for the linear problems, the a priori bounds for the true solution. Note that the form of the weak formulation of the problem does not change for the discrete problem, therefore, the wellposedness of the original (linear) problem invariably yields the wellposedness of the discrete problem and hence, the the proof of stability becomes easier. These a priori error estimates u − uh in some measurements tell the user that the method works and we have confidence in computed numbers. However, many more methods were developed by user community without bothering about the convergence theory. But they have certain checks like if the computed solution stabilizes after some decimal points as h decreases, then computed results make sense. But this thumb rule may go wrong. One more useful problem, we may address is related to the following: “Given a tolerance (say ε ) and a measurement (say a norm k · k), CAN we compute a reliable and efficient approximate solution uh such that ku − uh k ≤ ε ?” By efficient approximate solution, we mean computed solution with minimal computational effort. This is a more relevant and practical question and I shall try my best to touch upon these points in the course of my lectures.

Chapter 2

Theory of Distribution and Sobolev Spaces In this chapter, we quickly review L2 -space, discuss distributional derivatives and present some rudiments of Sobolev spaces.

2.1

Review of L2 - Space.

To keep the presentation at an elementary level, we try to avoid introducing the L2 -space through measure theoretic approach. However, if we closely look at the present approach, the notion of integration is assumed tactically. Let Ω be a bounded domain in IRd (d = 1, 2, 3) with smooth boundary ∂Ω. All the functions, we shall consider in the remaining part of this lecture note, are assumed to be real valued. It is to be noted that most of the results developed here can be carried over to the complex case with appropriate modifications. Let L2 (Ω) be the space of all square integrable functions defined in Ω, i.e., Z 2 |v|2 dx < ∞}. L (Ω) := {v : Ω

2

2

Define a mapping (·, ·) : L (Ω) × L (Ω) 7→ IR by (v, w) :=

Z

v(x)w(x) dx. Ω

It satisfies all the properties of an innerproduct except for: (v, v) = 0 does not imply that v is identically equal to zero. For an example, consider Ω = (0, 1) and set for n > 1, vn (x) = n if x = n1 and is equal R1 to zero elsewhere. Now (vn , vn ) = 0 |vn |2 dx = 0, but vn 6= 0. Here, the main problem is that there are infinitely many functions whose square integrals are equal to zero, but the functions are not identically equaly to zero. It was Lebesgue who generalized the concept of identically equal to zero in early 20th century. If we wish to make L2 (Ω) an innerproduct space, then define an equivalence relation on it as follows: Z Z |w|2 dx. |v|2 dx = v ≡ w if and only if Ω



With this equivalence relation, the set L2 (Ω) is decomposed into disjoint classes of equivalent functions. In the language of measure theory, we call this relation almost equal everywhere (a.e.). Now (v, v) = 0 8

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

F_m

9

F_n

1/2−1/m 1/2−1/n

implies v = 0 a.e. This is indeed a generalization of the notion of “ identically equals to zero” to “ zero almost everywhere”. Note that when v ∈ L2 (Ω), it is a representer of that equivalence class. The induced L2 -norm is now given by Z  21 1 2 2 kvk = (v, v) = . |v| dx Ω

2

With this norm, L (Ω) is a complete (the proof of this is a nontrivial task and is usually proved using measure theoretic arguments) innerproduct space, i.e., it is a Hilbert space.

Some Properties. (i) The space C(Ω) ( space of all bounded and uniformly continuous functions in Ωis continuously imbedded 1 in L2 (Ω). In other words, it is possible to identify a uniformly continuous function in each equivalence class. Note that v ∈ L2 (Ω without being in C(Ω). For example the function v(x) = x−α , x ∈ (0, 1) is in L2 (0, 1) provided 0 < α < 2, but not in C[0, 1]. (ii) The space C(Ω) may not be a subspace of L2 (Ω). For example, let Ω = (0, 1) and set v(x) = x1 . Then, v ∈ C(Ω), but not in L2 (Ω). Of course, v be in L2 (Ω) without being in C(Ω). For example the function v(x) = 0 0 ≤ x < 21 and for x ∈ [ 12 , 1], v(x) = 1 is discontinuous at x = 21 , but it is not in L2 (0, 1) R (iii) Let L1 (Ω) = {v : Ω |v| dx < ∞}. For bounded domain Ω, the space L2 (Ω) is continuously imbedded in L1 (Ω). However for unbounded domains, the above imbedding may not hold. For example, when Ω = (1, ∞) , consider v(x) = x1 . Then, v does not belong to L1 (Ω), while it is in L2 (Ω). For bounded domains, there are functions which are in L1 (Ω), but not in L2 (Ω). One such example is v(x) = x1α , 1 > α ≥ 12 in (0, 1). R The space V = {v ∈ C(Ω) : Ω |v|2 dx < ∞} is not complete with respect to L2 -norm. We can now recall one example in our analysis course. Consider a sequence {fn }n≥2 of continuous functions in C(0, 1) as that is each fn ∈ C(0, 1) and fn (x) = 1 The



1, 12 ≤ x < 1 0, 0 < x ≤ 21 − n1 .

identity map i : C(Ω) 7→ L2 (Ω) is continuous that is kvk ≤ C maxx∈Ω |v(x)|.

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

10

Being a continuous function , the graph of fn is shown as in the above figure. It is apparently clear from the graphs of fn and fm that kfn − fm k =

Z



 21 |fn (x) − fm (x)|2 dx

can be made less than and equal to a preassigned ǫ > 0 by choosing a positive integer N with m, n > N . Therefore, {fn } forms a Cauchy sequence. However, fn converges to a discontinuous function f , where f (x) = 1 for 12 ≤ x < 1 and it is zero elsewhere. Thus, the space V is not complete. If we complete it with respect to L2 -norm, we obtain L2 (Ω)-space. Sometimes, we shall also use the space of all locally integrable functions in Ω denote byL1l oc(Ω) which is defined as L1loc (Ω) := {v ∈ L1 (K), for every compact set K with K ⊂ Ω}. Now, the space L1 (Ω) is continuously imbedded in L1loc (Ω). When Ω is unbounded say Ω = (0, ∞), constant functions, xα with α integers, ex are all in L1loc (0, ∞) with out being in L1 (0, ∞). Similarly, when Ω is bounded that is say Ω = (0, 1), the functions x1α , α ≥ 1, log x1 are all locally integrable but not in L1 (0, 1). Note that L1loc (Ω) contains all of C(Ω) without growth restriction. Similarly, when 1 ≤ p ≤ ∞, the space of all p-th integrable functions will be denoted by Lp (Ω). For 1 ≤ p < ∞, the norm is defined by kvkLp (Ω) := and for p = ∞

2.2

Z



1/p |v|p dx ,

kvkL∞ (Ω) := essupx∈Ω |v(x)|.

A Quick Tour to the Theory of Distributions.

We would like to generalize the concept of differentiation to the functions belonging to a really large space. One way to catch hold of such a space is to construct a smallest nontrivial space and then take its topological dual. As a preparation to choose such a space, we recall some notations and definitions. A multi-index α = (α1 , · · · , αd ) is a d-tuple with nonnegative integer elements. By an order of α, P we mean |α| = di=1 αi . The notion of multi-index is very useful for writing the higher order partial derivatives in compact form. Define αth order partial derivative of v as Dα v =

∂ |α| v αd 1 ∂xα 1 , · · · , ∂xd

, α = (α1 , · · · , αd ).

Example 2.1. When d = 3 and α = (1, 2, 1) with |α| = 4, the Dα v =

∂4v . ∂x1 ∂x22 ∂x3

In case |α| = 2, we have the following 6 possibilities :(2, 0, 0), (1, 1, 0), (0, 2, 0), (0, 1, 1), (0, 0, 2), (1, 0, 1) and Dα will represent all the partial derivatives of order 2. When α ≤ 3, Dα would mean all partial derivatives up and including 3. Let C m (Ω) := {v : Dα v ∈ C(Ω), |α| ≤ m}. Similarly, we define the space C m (Ω) as m-times continuously differentiable functions with uniformly continuous derivatives up to order m. By C ∞ (Ω),

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

11

we mean a space of infinitely differentiable functions in Ω, i.e., C ∞ (Ω) = ∩C m (Ω) . The support of a function v called supp v is defined by supp v := {x ∈ Ω : v(x) 6= 0}. If this set is compact ( i.e., if it is bounded in this case) and supp v ⊂⊂ Ω, then v is said to have ‘compact support’ with respect to Ω. Denote by C0∞ (Ω), a space of all infinitely differentiable functions with compact support in Ω. In fact, this is a nonempty set and it can be shown as has been done below that it has as many elements as the points in this set Ω. Example 2.2. For Ω = IR, consider the function  exp( |x|21−1 ), |x| < 1, φ(x) = 0, |x| ≥ 1. In order to show that φ ∈ C0∞ (Ω), it is enough to check the continuity and differentiability properties only at the points x = ±1. Apply L’Hopsital rule to conclude the result. Similarly, when Ω = IRd , consider

φ(x) =



exp( kxk12 −1 ), kxk < 1, 0, kxk ≥ 1,

Pd 2 where kxk2 = i=1 |xi | . Then, since the function is radial, we can in a similar manner prove that ∞ φ ∈ C0 (Ω). For all other points we can simply translate and show the existence of such a function. Exersise 2.1 Show that φ ∈ C0∞ (IRd )

When Ω is a bounded domain in IRd , consider for any point x0 ∈ Ω and small ǫ > 0 the following function  exp( kx−xǫ0 k2 −ǫ ), kx − x0 k < ǫ, φǫ (x) = 0, kx − x0 k ≥ ǫ.

This function has the desired property and can be constructed for each and every point in the domain. Since the domain Ω is open, it is possible to find ǫ for point x0 ∈ Ω so that the the ǫ-ball around x0 is inside the domain. Therefore, the space C0∞ (Ω) is a nontrivial vector space and this is the smallest space we are looking for. In order to consider the topological dual of this space, we need to equip C0∞ (Ω) with a topology with respect to which we can discuss convergence of a sequence. Since we shall be only using the concept of convergence of a sequence to discuss the continuity of the linear functionals defined on C0∞ (Ω), we refrain from defining the topology. We say :

‘There is a structure (topology) on C0∞ (Ω) with respect to which the convergence of a sequence {φn } to φ has the following meaning, i.e., φn 7→ φ in C0∞ (Ω) if the following conditions are satisfied (i) there is a common compact set K in Ω with K ⊂ Ω such that supp φn , supp φ ⊂ K. (ii) Dα φn 7→ Dα φ uniformly in K, for all multi-indices α. When C0∞ (Ω) is equipped with such a topology, we call it D(Ω), the space of test functions. Definition. A continuous linear functional on D(Ω) is called a distribution, i.e., T : D(Ω) 7→ IR is called a distribution if T is linear and T (φn ) 7→ T (φ) as φn 7→ φ in D(Ω). The space of all distributions will be denoted by D′ (Ω). This is the topological dual of D(Ω). We shall use < ·, · > for duality pairing between D′ (Ω) and D(Ω).

12

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

Example 2.3. Every integrable function defines a distribution. Let f ∈ L1 (Ω). Then for φ ∈ D(Ω), set Tf as Tf (φ) =

Z

f φ dx. Ω

We now claim that Tf ∈ D′ (Ω). Linearity of Tf is a consequence of additivity as well as homogeneity property of the integral. It, therefore, remains to show that Tf is continuous, i.e., as φn 7→ φ in D(Ω), we need to prove Tf (φn ) 7→ Tf (φ). Note that Z |Tf (φn ) − Tf (φ)| = f (φn − φ) dx . Ω

Since φn converges to φ in D(Ω), there is a common compact set K in Ω such that supp φn , supp φ ⊂ K and φn 7→ φ uniformly in K. Therefore, Z Z |f | dx. |f (φn − φ)| dx ≤ max |φn (x) − φ(x)| |Tf (φn ) − Tf (φ)| ≤ x∈K

K

Since

R





|f | dx < ∞, Tf (φn ) 7→ Tf (φ), as n 7→ ∞ and the result follows.

If f and g are in L1 (Ω) with f = g a.e., then Tf = Tg . Thus, we identify Tf = f and Tf (φ) =< f, φ >. Therefore, L1 (Ω) ⊂ D′ (Ω). Excercise 2.1. Show that every square integrable function defines a distribution. Excercise 2.2. Prove that every locally integrable function defines a distribution. Example 2.4. The Dirac delta δ function ( it is a misnomer and it is a functional, but in literature it is still called a function as P. A. M. Dirac called this a function when he discovered it) concentrated at the origin (0 ∈ Ω) is defined by δ(φ) = φ(0), ∀φ ∈ D(Ω). This, in fact, defines a distribution. Linearity is trivial. As φn 7→ φ in D(Ω), i.e., φn 7→ φ uniformly on a common compact support, then, δ(φn ) = φn (0) 7→ φ(0) = δ(φ), ∀φ ∈ D(Ω). Therefore, δ ∈ D′ (Ω) and we write δ(φ) =< δ, φ >= φ(0). Note that the Dirac delta function can not be generated by locally integrable function ( the proof can be obtained by using arguments by contradiction 2 ). Therefore, depending on whether the distribution is generated by locally integrable function or not, we have two types of distributions: Regular and Singular distributions. (i) Regular. The distributions which are generated by locally integrable functions. 2 Suppose

that the Dirac delta is genereted by a locally integreble function say f . Then we define the distribution Tf (φ) = δ(φ) that is Tf (φ) = φ(0) for φ ∈ D(IRd ). For ǫ > 0 it is possible to find φǫ ∈ D(IRd ) with suppφǫ ⊂ Bǫ (0), 0 ≤ φǫ ≤ 1 and φǫ ≡ 1 in Bǫ/2 (0). Note that δ(φǫ ) = 1. But on the other hand δ(φǫ ) = Tf (φǫ ) = Since f is localy integrable ,

R

Bǫ (0)

Z

IRd

f φǫ dx =

Z

f φǫ dx ≤

Bǫ (0)

Z

|f |dx. Bǫ (0)

|f |dx −→ 0 as ǫ −→ 0, that is δ(φǫ ) −→ 0. This leads to a contradiction and hence the

Direc delta function can not be genereted by locally integrable functions

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

13

(ii) Singular. The distributions is called singular if it is not regular. For example, the Dirac delta function is a singular distribution. Note that if T1 , T2 ∈ D′ (Ω) are two regular distributions satisfying < T1 , φ >=< T2 , φ > ∀φ ∈ D(Ω), then T1 = T2 . This we shall not prove , but like to use it in future3 We can also discuss the convergence of the sequence of distributions {Tn } to T in D′ (Ω). Say, {Tn } 7→ T in D′ (Ω) if < Tn , φ >7→< T, φ >, ∀φ ∈ D(Ω). As an example, we consider ǫ > 0

where A−1 =

R

kxk≤1 exp

ρǫ (x) =

(





−1 1−kxk2

Aǫ−d exp 0,



−ǫ2 ǫ2 −kxk2

dx. Note that

R

IRd



φ(x),

kxk < ǫ, kxk ≥ ǫ,

ρǫ dx = 1. It is easy to check that ρǫ ∈ D′ (IRd ).

We now claim that ρǫ −→ δ in D′ (IRd ) as ǫ −→ 0. For φ ∈ D(IRd ), we note that   Z −ǫ2 −d φ(x) dx. hρǫ , φi = Aǫ exp 2 ǫ − kxk2 IRd   Z −1 exp = A φ(ǫy) dy 1 − kyk2 IRd   Z −1 (φ(ǫy) − φ(0)) dy. = φ(0) + A exp 1 − kyk2 IRd

As ǫ −→ 0, using the Lebesgue dominated convergence theorem, we can take the limit under the integral sign and therefor, the integral term become zero. Thus we obtain limhρǫ , φi = φ(0) = hδ, φi. ǫ

Hence the result follows. Distributional Derivatives. In order to generalize the concept of differentiation, we note that for f ∈ C 1 ([a, b]) and φ ∈ D([a, b]), integration by parts yields Z b Z b x=b f ′ (x)φ(x) dx = f φ|x=a − f (x)φ′ (x) dx. a

a

Since supp φ ⊂ (a, b), φ(a) = φ(b) = 0. Thus, Z b Z b f (x)φ′ (x) dx = − < f, φ′ >, ∀φ ∈ D(Ω). f ′ (x)φ(x) dx = − a

a

We observe that the meaning of derivative of f can be attached through the term on the right hand side. However, the term on the right hand side is still be meaningful even if f is not differentiable. Note that f can be a distribution. This motivates us to define a derivative of a distribution. If T ∈ D′ (Ω) (Ω = (a, b)), then define DT as (DT )(φ) = − < T, Dφ >,

∀ φ ∈ D(Ω).

3 This is a cosequence of the following Lebesque lemma: Le T and T are two regular distribution genereted by locally g f integrable functions f and g, respectively. Then Tf = Tg if and only if f = g a.e.

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

14

Here, Dφ is simply the derivative of φ. Observe that (DT ) is a linear map. For φ1 , φ2 ∈ D(Ω) and a ∈ IR, = − < T, D(φ1 + aφ2 ) > = − < T, Dφ1 > −a < T, Dφ2 >

(DT )(φ1 + aφ2 )

= (DT )(φ1 ) + a(DT )(φ2 ).

For continuity, we first infer that as φn → φ in D(Ω), Dφn → Dφ on D(Ω). Now, (DT )(φn ) = − < T, Dφn >→ − < T, Dφ >= (DT )(φ). Therefore, DT ∈ D′ (Ω).

More precisely, if T is a distribution, i.e., T ∈ D′ (Ω), then αth order distributional derivative, say D T ∈ D′ (Ω), is defined by α

< Dα T, φ >= (−1)α < T, Dα φ >, φ ∈ D(Ω). Example 2.5. Let Ω = [−1, 1] and let f = |x| in Ω. Obviously, f ∈ D′ (−1, 1). To find its distributional derivative Df , write first for φ ∈ D(Ω) < Df, φ > = =

− < f, Dφ > Z 1 − f (x)φ′ (x)dx −1 0

=



Z

−1

(−x)φ′ (x)dx −

Z

1

xφ′ (x)dx.

0

On integrating by parts and using φ(1) = φ(−1) = 0, it follows that < Df, φ >

= = =

xφ]0−1

Z



1

Z

0

−1

φ(x)dx −

(−1)φ(x)dx +

−1 Z 1

xφ]10

Z

+

1

Z

1

φ(x)dx 0

1φ(x)dx

0

H(x)φ(x)dx

−1

= < H, φ >, where H(x) =

n

∀ φ ∈ D(Ω),

−1, −1 < x < 0 1. 0≤x=< H, φ >, ∀φ ∈ D(Ω), we obtain Df = H. The first distributional derivative of f is a Heaviside step function. Again to find DH, note that < DH, φ > = = = =

− < H, Dφ >= −

Z

0

−1

φ′ (x)dx −

Z

1

Z

1

H(x)φ′ (x)dx

−1

φ′ (x)dx

0

φ(0) − φ(−1) − φ(1) + φ(0)

2φ(0) = 2 < δ, φ > .

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

15

Therefore, DH = 2δ

or D2 f = 2δ.

In the above example, we observe that if a locally integrable function has a classical derivative a.e., which is also locally integrable, then then the classical derivative may not be equal to the distributional derivative. However, if a function is absolute continous, then its classical derivative coincides with its distributional derivative. Exercises: 1.) Find Df , when f (x) =



1, 0 ≤ x < 1 0, −1 < x < 0

f (x) = log x, 0 < x < 1, f (x) = (log x1 )k , k < 12 . 2.) Find Df , when f (x) = sin ( x1 ), 0 < x < 1 and f (x) = x1 , 0 < x < 1. 3.) Find D2 f when f (x) = x|x|, −1 < x < 1. ¯ and φ ∈ D(Ω). Then using integration by parts, we have For Ω ⊂ IRd , let f ∈ C 1 (Ω) Z Z ∂φ ∂f φdx = − f dx = − < f, Di φ > . Ω ∂xi Ω ∂xi As in one variable case, set the first generalized partial derivative of a distribution T ∈ D′ (Ω) as < Di T, φ >= − < T, Di φ >,

∀φ ∈ D(Ω).

It is easy to check that Di T ∈ D′ (Ω). Similarly, if T ∈ D′ (Ω) and α is a multi-index, then the αth order distributional derivative Dα T ∈ D′ (Ω) is defined by < Dα T, φ >= (−1)|α| < T, Dα φ >,

∀φ ∈ D(Ω).

Example 2.6. Let Ω = {(x1 , x2 ) ∈ IR2 : x21 + x22 < 1}. Set radial function f (r) = (log 1r )k , k < 12 , where p r = x21 + x22 < 1. This function is not continuous at the origin, but we shall show that it is in L2 (Ω). Now using coordinate transformation (transforming to polar coordinates), we find that Z



|f |2 dx1 dx2 = π

Z

0

1

|f (r)|2 r dr.

r(log( r1 ))2k

This integral is finite, because 7→ 0 as r 7→ 0. Hence, it defines a distribution. Its partial ∂f = −k(log( 1r ))k−1 rx2i i = 1, 2. distributional derivatives may be found out as Di f = ∂x i

The derivative map Dα : D′ → D′ is continuous. To prove this, let {Tn } a sequence in D′ converge to T in D′ . Then, for φ ∈ D < D α Tn , φ >

=

(−1)|α| < Tn , Dα φ >

→ (−1)|α| < T, Dα φ > =

Hence, the result follows.

< Dα T, φ > .

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

2.3

16

Elements of Sobolev Spaces

We define the (Hilbert) Sobolev space H 1 (Ω) as the set of all functions in L2 (Ω) such that all its first partial distributional derivatives are in L2 (Ω), i.e., H 1 (Ω) = {v ∈ L2 (Ω) :

∂v ∈ L2 (Ω), 1 ≤ i ≤ d}. ∂xi

Note that v being an L2 (Ω)-function, all its partial distribuitional derivatives exist. But for H 1 (Ω) what ∂v ∈ L2 (Ω), 1 ≤ i ≤ d. This is, indeed, an extra condition and it is not true that more we require is that ∂x i every L2 -function has its first distributional partial derivatives in L2 . In the second part of Example 2.5, we have shown that DH = 2δ ∈ / L2 (−1, 1), where as H ∈ L2 (−1, 1). Let us define a map (·, ·)1 : H 1 (Ω) × H 1 (Ω) → IR by (u, v)1 = (u, v) +

d X ∂u ∂v , ), ∀u, v ∈ H 1 (Ω). ( ∂x ∂x i i i=1

(2.1)

It is a matter of an easy exercise to check that (·, ·)1 forms an inner-product on H 1 (Ω), and (H 1 (Ω), (·, ·)1 ) is an inner-product space. The induced norm k · k1 on H 1 (Ω) is set as v u d u X p ∂u 2 k kuk1 = (u, u)1 = tkuk2 + k . (2.2) ∂xi i=1 In the next theorem, we shall prove that H 1 (Ω) is complete.

Theorem 2.1 The space H 1 (Ω) with k · k1 is a Hilbert space. Proof. Consider an arbitrary Cauchy sequence {un } in H 1 (Ω). We claim that it converges to an element n in H 1 (Ω). Given a Cauchy sequence {un } in H 1 (Ω), the {un } and { ∂u ∂xi }, i = 1, 2, . . . , d are Cauchy in 2 2 i L (Ω). Since L (Ω) is complete, we can find u and u , (i = 1, 2, . . . , d) such that un → u in L2 (Ω), ∂un → ui , 1 ≤ i ≤ d, in L2 (Ω). ∂xi To complete the proof, it is enough to show that ui = ∂un < ,φ > ∂xi

∂u ∂xi .

For φ ∈ D(Ω), consider

Z ∂φ ∂φ un = − < un , >= − dx ∂xi ∂x i Ω Z ∂φ ∂φ dx = − < u, > → − u ∂xi Ω ∂xi ∂u = < ,φ > . ∂xi

However,
∂xi

= →

Z

ZΩ



∂un φdx ∂xi ui φdx =< ui , φ > .

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

Thus,

∂u , φ >, ∂xi

< ui , φ >=< and we obtain

ui =

∀φ ∈ D(Ω),

∂u . ∂xi

Therefore, un → u in L2 (Ω)

17

∂u ∂un → in L2 (Ω), ∂xi ∂xi

and

i.e., un → u in H 1 (Ω).

Since {un } is arbitrary, we have shown the completeness of H 1 (Ω).

For positive integer m, we also define the higher order Sobolev spaces H m (Ω) as H m (Ω) := {v ∈ L2 (Ω) : Dα v ∈ L2 (Ω), |α| ≤ m}.

This is again a Hilbert space with respect to the inner product X (Dα u, Dα v), (u, v)m := |α|≤m

and the induced norm

X

kukm := (

|α|≤m

kDα uk2 )1/2 .

In fact, it is a matter of an easy induction on m to show that H m (Ω) is a complete innerproduct space. Why should we restrict ourselves to L2 -setting alone, we can now define Sobolev spaces W m,p (Ω) of order (m, p), 1 ≤ p ≤ ∞ by W m,p (Ω) := {v ∈ Lp (Ω) : Dα v ∈ Lp (Ω), |α| ≤ m}. This is a Banach space with norm 

and for p = ∞

kukm,p := 

X Z

|α|≤m



1/p

|Dα u|p dx

, 1 ≤ p < ∞,

kukm,∞ := max kDα ukL∞ (Ω) . |α|≤m

When p = 2, we shall call, for simplicity, W m,2 (Ω) as H m (Ω) and its norm k · km,2 as k · km . Exercise 2.2 Show that W 1,p (Ω), (1 ≤ p ≤ ∞) is a Banach space and the using mathematical induction show that W m,p (Ω) is also a Banach space.

Some Properties of H 1 - Space. Below, we shall discuss both negative and positive properties of H 1 Hilbert Sobolev space. Negative Properties.

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

18

(i) For d > 1, the functions in H 1 (Ω) may not be continuous. In Example 2.6 , we have noted that the function f (r) is not continuous at the origin, but it has distributional derivatives of all order. Now, ∂f 1 xi = −k(log( ))k−1 2 , i = 1, 2, ∂xi r r and hence, Z

∂f 2 | | dxdy ∂x 1 Ω

 2(k−1) 1 1 = 4 k log( ) cos2 θ drdθ r r 0 0  2(k−1) Z 1 1 = πk 2 r−1 log( ) dr. r 0 Z

π/2

Z

1

2

Here, we have used change of variables : x1 = r cos θ, x2 = sin θ, 0 ≤ θ ≤ π. It is easy to check ∂f ∈ L2 (Ω). that the integral is finite provided k < 12 (only care is needed at r = 0). Therefore, ∂x 1 ∂f Similarly, it can be shown that ∂x ∈ L2 (Ω) and hence, f ∈ H 1 (Ω). 2 When d = 1, i.e., Ω ⊂ IR, then every function in H 1 (Ω) is absolutely continuous. This can be checked easily as Z x Du(τ ) dτ, x0 , x1 ∈ Ω, u(x) − u(x0 ) = x0

where Du is the distributional derivative of u.

(ii) The space D(Ω) is not dense in H 1 (Ω). To see this, we claim that the orthogonal complement of D(Ω) in H 1 (Ω) is not a trivial space. Let u ∈ (D(Ω))⊥ in H 1 (Ω) and let φ ∈ D(Ω). Then 0 = (u, φ)1

d X ∂u ∂v , ) ( ∂x i ∂xi i=1

=

(u, φ) +

=

< u, φ > −

=

d X i=1


∂x2i

< −∆u + u, φ >, ∀φ ∈ D(Ω).

Therefore, −∆u + u = 0, in D′ (Ω). We shall show that there are enough functions satisfying the above equation. Let Ω be a unit ball in IRd . Consider u = er·x with r ∈ IRd . Note that  2 |r| u, ∆u = |r|2 er·x = u, |r| = 1. When d = 1, the solution u with r = ±1 belongs to (D(Ω))⊥ . For d > 1, there are infinitely many r’s lying on the boundary of the unit ball in IRd for which u ∈ (D(Ω))⊥ . Therefore, the space D(Ω) is not dense in H 1 (Ω). One then curious to know : ‘What is the closure of D(Ω) in H 1 (Ω)-space ?’

Then

‘Is it possible to characterize such a space?’ Let H01 (Ω) be the closure of D(Ω) in H 1 (Ω)-space. This is a closed subspace of H 1 (Ω). The characterization of such a space will require the concept of values of the functions on the boundary

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

19

or the restriction of functions to the boundary ∂Ω. Note that the boundary ∂Ω being a (d − 1)dimensional manifold is a set of measure zero, and hence, for any v ∈ H 1 (Ω), its value at the boundary or its restriction to the boundary ∂Ω does not make sense. When Ω is a bounded domain with Lipschitz continuous boundary ∂Ω ( i.e., the boundary can be parametrized by a Lipschitz function), then the space C 1 (Ω) (in fact C ∞ (Ω)) is dense in H 1 (Ω). The proof, we shall not pursue it here, but refer to Adams [1]. For v ∈ C 1 (Ω), its restriction to the boundary or value at the boundary is meaningful as v ∈ C(Ω). Since C 1 (Ω) is dense in H 1 (Ω), for every v ∈ H 1 (Ω) there is a sequence {vn } in C 1 (Ω) such that vn 7→ v in H 1 (Ω). Thus, for each vn , its restriction to the boundary ∂Ω that is vn |∂Ω makes sense and further, vn |∂Ω ∈ L2 (∂Ω). This sequence, infact, converges in L2 (∂Ω) to an element say γ0 v. This is what we call the trace of v. More precisely, we define the trace γ0 : H 1 (Ω) 7→ L2 (∂Ω) as a continuous linear map satisfying kγ0 vkL2 (∂Ω) ≤ Ckvk1 , and for v ∈ C 1 (Ω), γ0 v = v|∂Ω . This is, indeed, a generalization of the concept of the values of H 1 -functions on the boundary or the restriction to the boundary. The range of γ0 called H 1/2 (∂Ω) is a proper and dense subspace of L2 (∂Ω). The norm on this space is defined as kgk1/2,∂Ω = inf{kvk1 : v ∈ H 1 (Ω) and γ0 v = g}. Now the space H01 (Ω) is characterized by H01 (Ω) := {v ∈ H 1 (Ω) : γ0 v = 0}. For a proof, again we refer to Adams [1]. Sometimes, by abuse of language, we call γ0 v = 0 as simply v = 0 on ∂Ω.

Positive Properties of H01 and H 1 - Spaces. We shall see subsequently that the denseness of D(Ω) and C ∞ (Ω), respectively in H01 (Ω) and H 1 (Ω) with respect to H 1 -norm will play a vital role. Essentially, many important results are first derived for smooth functions and then using the denseness property are carried over to H01 or H 1 -spaces. In the following, we shall prove some inequalities. If v ∈ V := {w ∈ C 1 [a, b] : w(a) = w(b) = 0}, then for x ∈ [a, b] Z x dv v(x) = (s) ds. dx a

A use of H¨ older’s inequality yields

Z |v(x)| ≤ (x − a)1/2 (

b a

|

dv 2 | ds)1/2 . dx

On squaring and again integrating over x from a to b, it follows that Z

a

b

|v(x)|2 dx



Z b Z ( (x − a) dx)( a

2



(b − a) 2

Z

a

b

|

b a

|

dv 2 | ds) dx

dv 2 | dx. dx

Thus, kvkL2 (a,b) ≤ C(a, b)kv ′ kL2 (a,b) ,

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

20

√ . This is one dimensional version of Poincar´ e inequality. We note that where the constant C(a, b) = (b−a) 2 in the above result only v(a) = 0 is used. The same conclusion is also true when v(b) = 0. Further, it is easy to show that max |v(x)| ≤ (b − a)1/2 kv ′ kL2 (a,b) . x∈[a,b]

As a consequence of the Poincar´e inequality, kv ′ kL2 (a,b) defines a norm on V which is equivalent to H 1 (a, b)-norm, i.e., there are positive constants say C1 and C2 such that C1 kvk1 ≤ kv ′ kL2 (a,b) ≤ C2 kvk1 , 2

+ 1)−1/2 . Where C2 = 1 and C1 = ( (b−a) 2 We shall now generalize this to the functions in H01 (Ω), when Ω is a bounded domain in IRd . Theorem 2.3 (Poincar´e Inequality). Let Ω be a bounded domain in IRd . Then there exists constant C(Ω) such that Z Z 2 |v| dx ≤ C(Ω) |∇v|2 dx, ∀v ∈ H01 (Ω). Ω



Proof. Since D(Ω) is dense in H01 (Ω), it is enough to prove the above inequality for the functions in D(Ω). Now because of denseness, for every v ∈ H01 (Ω), there is a sequence {vn } in D(Ω) such that vn 7→ v in H 1 -norm. Suppose for every vn in D(Ω), the above result holds good,i.e., Z Z |∇vn |2 dx, vn ∈ D(Ω), (2.3) |vn |2 dx ≤ C(Ω) Ω



then taking limit as n 7→ ∞, we have kvn k 7→ kvk as well as k∇vn k 7→ k∇vk, and hence, the result follows for v ∈ H01 (Ω). It remains to prove (2.3). Since Ω is bounded, enclose it by a box in IRd , i.e., say Ω ⊂ Πdi=1 [ai , bi ]. Then extend each vn even alongwith its first partial derivatives to zero outside of Ω. This is possible as vn ∈ D(Ω). Now, we have (like in one dimensional case) for x = (x1 , · · · , xi , · · · , xd ) Z xi ∂vn vn (x) = (x1 , · · · , xi−1 , s, xi+1 , · · · , xd ) ds. ai ∂xi

Then using H¨ older’s inequality 1/2

|vn (x)| ≤ (xi − ai )

Z

bi ai

∂vn 2 | | ds ∂xi

!1/2

.

Squaring and then integrating both sides over the box, it is found that Z bd Z Z bd Z b1 ∂vn 2 (bi − ai )2 b1 | | dx. ··· |vn (x)| dx ≤ ··· 2 ∂xi ad a1 ad a1 Therefore the estimate (2.3) follows with the constant C = 21 (bi − ai )2 . This completes the rest of the proof. Remarks. The above theorem is not true if v ∈ H 1 (Ω) (say, when Ω = (0, 1) and v = 1, v ∈ H 1 (Ω) and it does not satisfy Poincar´e ineguality) or if Ω is not bounded. But when Ω is within a strip , the result still holds (the proof of theorem only uses that the domain is bounded in one direction that is in the 1 direction of xi ). The best constant C(Ω) can be λmin , where λmin is the minimum positive eigenvalue of the following Dirichlet problem −∆u = λu, x ∈ Ω

CHAPTER 2. THEORY OF DISTRIBUTION AND SOBOLEV SPACES

21

with Dirichlet boundary condition u = 0, on ∂Ω. Using the Poincar´e inequality , it is easy to arrive at the following result. Corollary 2.4 For v ∈ H01 (Ω), k∇vk is, infact, a norm on H01 (Ω)-space which is equivalent to H 1 -norm. Using the concept of trace and the denseness property, we can generalize the Green’s formula to the functions in Sobolev spaces. For details , see Kesavan [9] Lemma 2.5 Let u and v be in H 1 (Ω). Then for 1 ≤ i ≤ d, the following integration by parts formula holds Z Z Z ∂u ∂v uvνi ds, (2.4) dx = − v dx + u Ω ∂xi ∂Ω Ω ∂xi

where νi is the ith component of the outward normal ν to the boundary ∂Ω. Further, if u ∈ H 2 (Ω), then the following Green’s formula holds d Z X i=1



d Z d Z X X ∂u ∂v ∂u ∂2u νi γ0 ( dx = − v dx + )γ0 (v) ds, 2 ∂xi ∂xi ∂x ∂x i i i=1 Ω i=1 ∂Ω

or in usual vector notation (∇u, ∇v) = (−∆u, v) +

Z

∂Ω

∂u v ds. ∂ν

(2.5)

We now state another useful property of H 1 -space. For a proof, we may refer to Adams [1]. Theorem 2.6 ( Imbedding Theorem ). Let Ω be a bounded domain with Lipschitz boundary ∂Ω. Then H 1 (Ω) is continuously imbedded in Lp (Ω) for 1 ≤ p < q if d = 2 or p ≤ q, for d ≥ 2, where 1q = 21 − d1 . Further, the above imbedding is compact provided p < q.

Dual Spaces of H 1 and H01 . We denote by H −1 (Ω), the dual space of H01 (Ω), i.e., it consists of all continuous linear functionals on H01 (Ω). The norm on H −1 (Ω) is given by the standard dual norm and now is denoted by | < f, v > | kf k−1 = sup . (2.6) 1 kvk1 06=v∈H0 (Ω) The following Lemma characterizes the elements in the dual space H −1 (Ω). Lemma 2.7 A distribution f ∈ H −1 (Ω) if and only if there are functions fα ∈ L2 (Ω) such that X D α fα . f=

(2.7)

|α|≤1

For an example, the Dirac delta function δ belongs to H −1 (−1, 1), because there exists Heaviside step function  1, 0 < x < 1, H(x) = 0, −1 < x ≤ 0 in L2 (−1, 1) such that δ = DH.


Note that H01 (Ω) ⊂ L2 (Ω) ⊂ H −1 (Ω).

In fact, the imbedding is continuous in each case; this forms a Gelfand triplet. For f ∈ L²(Ω) and v ∈ H₀¹(Ω), we have the identification

    ⟨f, v⟩ = (f, v),      (2.8)

where ⟨·, ·⟩ is the duality pairing between H⁻¹(Ω) and H₀¹(Ω).

Similarly, we can define the dual space (H¹(Ω))′ of H¹(Ω). Its norm is again the dual norm, defined by

    ‖f‖_{(H¹(Ω))′} = sup_{0≠v∈H¹(Ω)} |⟨f, v⟩| / ‖v‖₁.      (2.9)

In this case, we may not have a characterization like the one in the previous Lemma.

Remark. For a positive integer m, we may define H₀^m(Ω) as the closure of D(Ω) with respect to the norm ‖·‖_m. For a smooth boundary ∂Ω, we can characterize the space H₀² by

    H₀²(Ω) := {v ∈ H²(Ω) : γ₀v = 0, γ₀(∂v/∂ν) = 0}.      (2.10)

Similarly, the dual space H^{−m}(Ω) is defined as the set of all distributions f such that there are functions f_α ∈ L²(Ω) with f = Σ_{|α|≤m} D^α f_α. The norm is given by

    ‖f‖_{−m} = sup_{0≠v∈H₀^m(Ω)} |⟨f, v⟩| / ‖v‖_m.      (2.11)

Note. There are some excellent books available on the theory of distributions and Sobolev spaces, such as Adams [1] and Dautray and Lions [5]. In this chapter, we have discussed only the bare rudiments needed for the development of the subsequent chapters.

Chapter 3

Abstract Elliptic Theory

We present in this chapter a brief account of the fundamental tools which are used in the study of linear elliptic partial differential equations. To motivate the material to come, we again start with the mathematical model of the vibration of a drum: Find u such that

    −∆u = f   in Ω,      (3.1)
    u = 0     on ∂Ω,     (3.2)

where Ω ⊂ IR^d (d = 2, 3) is a bounded domain with smooth boundary ∂Ω and f is a given function. Multiply (3.1) by a nice function v which vanishes on ∂Ω (say v ∈ D(Ω)) and then integrate over Ω to obtain

    ∫_Ω (−∆u) v dx = ∫_Ω f v dx.

Formally, a use of the Gauss divergence theorem on the left hand side yields

    ∫_Ω (−∆u) v dx = − ∫_∂Ω (∂u/∂ν) v ds + ∫_Ω ∇u · ∇v dx.

Since v = 0 on ∂Ω, the first term on the right hand side vanishes and hence we obtain

    ∫_Ω ∇u · ∇v dx = ∫_Ω f v dx.

Therefore, the problem (3.1) is reformulated as: Find a function u ∈ V such that

    a(u, v) = L(v)   ∀v ∈ V,      (3.3)

where a(u, v) = ∫_Ω ∇u · ∇v dx and L(v) = ∫_Ω f v dx. Here, the space V (the space of all admissible displacements for the problem (1.1)) has to be chosen appropriately. For the present case, one thing is quite apparent: every element of V should vanish on ∂Ω.

How does one choose V? The choice of V should be such that the integrals in (3.3) are finite. For f ∈ L²(Ω), the space V may be chosen so that ∇u ∈ L²(Ω) and u ∈ L²(Ω). The largest such space satisfying these conditions together with u = 0 on ∂Ω is, in fact, the space H₀¹(Ω). Thus, we choose V = H₀¹(Ω). Since D(Ω) is dense in H₀¹(Ω), the problem (3.3) makes sense for all v ∈ V. Now, observe that every (classical) solution of (3.1)–(3.2) satisfies (3.3). However, the converse need not be true. One is then curious to know whether, in some weaker sense, the solution of (3.3) satisfies (3.1)–(3.2). To see this, note that

    ∫_Ω ∇u · ∇v dx = ∫_Ω f v dx   ∀v ∈ D(Ω) ⊂ H₀¹(Ω).




Using the definition of distributional derivatives, it follows that

    ∫_Ω ∇u · ∇v dx = Σ_{i=1}^d ∫_Ω (∂u/∂xᵢ)(∂v/∂xᵢ) dx
                   = Σ_{i=1}^d ⟨∂u/∂xᵢ, ∂v/∂xᵢ⟩
                   = − Σ_{i=1}^d ⟨∂²u/∂xᵢ², v⟩
                   = ⟨−∆u, v⟩   ∀v ∈ D(Ω),

and hence ⟨−∆u, v⟩ = ⟨f, v⟩ for all v ∈ D(Ω). Obviously,

    −∆u = f   in D′(Ω).      (3.4)

Thus, the solution of (3.3) satisfies (3.1) in the sense of distribution, i.e., in the sense of (3.4) . Conversely, if u satisfies (3.4), then following the above steps backward, we obtain a(u, v) = L(v) ∀v ∈ D(Ω). As D(Ω) is dense in H01 (Ω), the above equation holds for v ∈ H01 (Ω) and hence, u is a solution of (3.3). Since most of the linear elliptic equations can be recast in the abstract form (3.3), we examine, below, some sufficient conditions on a(·, ·) and L so that the problem (3.3) is wellposed in the sense of Hadamard.

3.1 Abstract Variational Formulation

Let V and H be real Hilbert spaces with V ֒→ H = H′ ֒→ V′, where V′ is the dual space of V. Given a continuous linear functional L on V and a bilinear form a(·, ·) : V × V → IR, the abstract variational problem is to seek a function u ∈ V such that

    a(u, v) = L(v)   ∀v ∈ V.      (3.5)

In the following theorem, we shall prove the wellposedness of (3.5). The theorem is due to P.D. Lax and A.N. Milgram (1954) and hence is called the Lax-Milgram Theorem.

Theorem 3.1 Assume that the bilinear form a(·, ·) : V × V → IR is bounded and coercive, i.e., there exist constants M and α₀ > 0 such that

    (i)  |a(u, v)| ≤ M ‖u‖_V ‖v‖_V   ∀u, v ∈ V,

and

    (ii)  a(v, v) ≥ α₀ ‖v‖²_V   ∀v ∈ V.

Then, for a given L ∈ V′, the problem (3.5) has one and only one solution u in V. Moreover,

    ‖u‖_V ≤ (1/α₀) ‖L‖_{V′},      (3.6)

i.e., the solution u depends continuously on the given data L.


Remark: If, in addition, we assume that a(·, ·) is symmetric, then the wellposedness of (3.5) follows easily as a consequence of the Riesz Representation Theorem¹. Since a(·, ·) is symmetric, define an innerproduct on V by ((u, v)) = a(u, v); the induced norm is |||u||| = ((u, u))^{1/2} = (a(u, u))^{1/2}. The space (V, |||·|||) is a Hilbert space. Due to the boundedness and coercivity of a(·, ·), we have

    α₀ ‖v‖²_V ≤ ((v, v)) = |||v|||² ≤ M ‖v‖²_V.

Hence the two norms are equivalent, and both spaces have the same dual V′. The problem (3.5) can then be reformulated in this new setting as: Given L ∈ V′, find u ∈ (V, |||·|||) such that

    ((u, v)) = L(v)   ∀v ∈ V.      (3.7)

Now, an application of the Riesz Representation Theorem ensures the existence of a unique element u ∈ (V, |||·|||) satisfying (3.7). Further,

    √α₀ ‖u‖_V ≤ |||u||| = ‖L‖   (the dual norm taken with respect to |||·|||),

and hence the result follows. When a(·, ·) is not symmetric, we prove the result as follows.

Proof of Theorem 3.1. We shall first prove the estimate (3.6). Using the coercivity property (ii) of the bilinear form, we have

    α₀ ‖u‖²_V ≤ a(u, u) = L(u) ≤ |L(u)| ≤ ‖L‖_{V′} ‖u‖_V.

Therefore, the result (3.6) follows. For continuous dependence, let Lₙ → L in V′ and let uₙ be the solution of (3.5) corresponding to Lₙ. Then, using linearity,

    a(uₙ − u, v) = (Lₙ − L)(v).

Now, an application of the estimate (3.6) yields

    ‖uₙ − u‖_V ≤ (1/α₀) ‖Lₙ − L‖_{V′}.

Hence, Lₙ → L in V′ implies uₙ → u in V, and this completes the proof of the continuous dependence of u on L. Uniqueness is a straightforward consequence of the continuous dependence.

For existence, we shall put (3.5) in an abstract operator theoretic form and then apply the Banach contraction mapping theorem². For w ∈ V, define a functional l_w by

    l_w(v) = a(w, v)   ∀v ∈ V.

Note that l_w is a linear functional on V. Using the boundedness of the bilinear form, we obtain |l_w(v)| = |a(w, v)| ≤ M ‖w‖_V ‖v‖_V, and hence

    ‖l_w‖_{V′} = sup_{v∈V} |l_w(v)| / ‖v‖_V = sup_{v∈V} |a(w, v)| / ‖v‖_V ≤ M ‖w‖_V < ∞.

Thus, l_w ∈ V′. Since w ↦ l_w is a linear map, setting A : V → V′ by Aw = l_w, we have

    ⟨Aw, v⟩ = a(w, v)   ∀v ∈ V.

¹For every continuous linear functional l on a Hilbert space V, there exists one and only one element u ∈ V such that l(v) = (u, v)_V for all v ∈ V, and ‖u‖_V = ‖l‖_{V′}.
²A map J : V → V (V can even be a Banach space or a metric space) is said to be a contraction if there is a constant 0 ≤ β < 1 such that ‖Ju − Jv‖_V ≤ β‖u − v‖_V for u, v ∈ V. Every contraction mapping has a unique fixed point, i.e., there exists a unique u₀ ∈ V such that Ju₀ = u₀.


Indeed, w ↦ Aw is a linear map and A is continuous with ‖Aw‖_{V′} ≤ M ‖w‖_V. Let τ : V′ → V denote the Riesz mapping, so that for all f ∈ V′ and all v ∈ V, f(v) = (τf, v)_V, where (·, ·)_V is the innerproduct in V. The variational problem (3.5) is now equivalent to the operator equation

    τAu = τL   in V.      (3.8)

Thus, for the wellposedness of (3.5), it is sufficient to show the wellposedness of (3.8). For ρ > 0, define a map J : V → V by

    Jv = v − ρ (τAv − τL)   ∀v ∈ V.      (3.9)

If J in (3.9) is a contraction, then by the contraction mapping theorem there is a unique element u ∈ V such that Ju = u, i.e., u − ρ(τAu − τL) = u, and u satisfies (3.8). To complete the proof, we now show that J is a contraction. For v₁, v₂ ∈ V, we note that

    ‖Jv₁ − Jv₂‖²_V = ‖v₁ − v₂ − ρ(τAv₁ − τAv₂)‖²_V
                   = ‖v₁ − v₂‖²_V − 2ρ a(v₁ − v₂, v₁ − v₂) + ρ² a(v₁ − v₂, τA(v₁ − v₂))
                   ≤ ‖v₁ − v₂‖²_V − 2ρα₀ ‖v₁ − v₂‖²_V + ρ²M² ‖v₁ − v₂‖²_V.

Here, we have used the coercivity property (ii) and the boundedness of A. Altogether,

    ‖Jv₁ − Jv₂‖²_V ≤ (1 − 2ρα₀ + ρ²M²) ‖v₁ − v₂‖²_V,

and hence

    ‖Jv₁ − Jv₂‖_V ≤ √(1 − 2ρα₀ + ρ²M²) ‖v₁ − v₂‖_V.

With a choice of ρ ∈ (0, 2α₀/M²), we find that J is a contraction, and this completes the proof.
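The contraction argument is constructive: starting from any initial guess, the iteration u_{k+1} = u_k − ρ(τAu_k − τL) converges geometrically whenever ρ ∈ (0, 2α₀/M²). The following minimal sketch (Python) mimics this on a finite-dimensional surrogate; the symmetric positive definite matrix A and vector b standing in for τA and τL, the tolerance, and the particular choice ρ = α₀/M² are illustrative assumptions, not part of the notes.

```python
import numpy as np

def fixed_point_solve(A, b, alpha0, M, tol=1e-10, maxit=10000):
    """Richardson-type iteration u <- u - rho*(A u - b) with rho in (0, 2*alpha0/M^2)."""
    rho = alpha0 / M**2              # any rho in (0, 2*alpha0/M^2) works; this is a safe choice
    u = np.zeros_like(b)
    for _ in range(maxit):
        u_new = u - rho * (A @ u - b)    # apply the contraction J
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Illustrative use on a small SPD system (not taken from the notes):
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 0.0])
alpha0 = min(np.linalg.eigvalsh(A))      # coercivity constant of the discrete form
M = max(np.linalg.eigvalsh(A))           # boundedness constant
u = fixed_point_solve(A, b, alpha0, M)
```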

Remark. Other proofs are available in the literature. One depends on a continuation argument, looking at the deformation from the symmetric case, and another depends on the existence of a solution to an abstract operator equation. However, the present existential proof gives an algorithm for computing the solution as a fixed point of J. Based on the Faedo-Galerkin method, another proof is presented in Temam [11] (see pages 24–27).

Corollary 3.2 When the bilinear form a(·, ·) is symmetric, i.e., a(u, v) = a(v, u) for u, v ∈ V, and coercive, then the solution u of (3.5) is also the only element of V that minimizes the energy functional J on V, i.e.,

    J(u) = min_{v∈V} J(v),      (3.10)

where J(v) = ½ a(v, v) − L(v). Moreover, the converse is also true.

Proof. Let u be a solution of (3.5). Write v = u + w for some w ∈ V. Note that

    J(v) = J(u + w) = ½ a(u + w, u + w) − L(u + w)
         = ½ a(u, u) − L(u) + ½ (a(u, w) + a(w, u)) − L(w) + ½ a(w, w).

Using the definition of J and the symmetry of a(·, ·), we have

    J(v) = J(u) + (a(u, w) − L(w)) + ½ a(w, w) ≥ J(u).

Here, we have used the fact that u is a solution of (3.5) (so a(u, w) − L(w) = 0) and the coercivity (ii) of the bilinear form. Since v is arbitrary (it can be varied by varying w), u satisfies (3.10).


To prove the converse (direct proof), let u be a solution of (3.10). For fixed v ∈ V, define a scalar function for t ∈ IR as

    Φ(t) = J(u + tv).

Indeed, Φ has a minimum at t = 0. If Φ is differentiable, then Φ′(0) = 0. Note that

    Φ(t) = ½ a(u + tv, u + tv) − L(u + tv)
         = ½ a(u, u) + (t/2)(a(u, v) + a(v, u)) + (t²/2) a(v, v) − L(u) − t L(v).

This is a quadratic function of t and hence differentiable. Using the symmetry of a(·, ·), the derivative of Φ is

    Φ′(t) = a(u, v) − L(v) + t a(v, v),   ∀t.

Now Φ′(0) = 0 implies a(u, v) = L(v) for all v ∈ V, and hence the result follows.
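The finite-dimensional analogue of Corollary 3.2 is easy to check numerically: for a symmetric positive definite matrix A (standing in for a symmetric coercive bilinear form), the solution of Ax = b also minimizes J(x) = ½ xᵀAx − bᵀx. The following small sketch (Python) illustrates this; the matrix, vector and random perturbations are illustrative assumptions.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])      # symmetric positive definite
b = np.array([1.0, 2.0])
J = lambda x: 0.5 * x @ A @ x - b @ x       # discrete energy functional

x = np.linalg.solve(A, b)                   # solution of the linear system
rng = np.random.default_rng(0)
trials = [x + rng.normal(size=2) for _ in range(100)]
print(all(J(x) <= J(w) for w in trials))    # True: J(x) is minimal among the perturbations
```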

3.2 Some Examples

In this section, we discuss some examples of linear elliptic partial differential equations and derive the wellposedness of their weak formulations using the Lax-Milgram Theorem. In all the following examples, we assume that Ω is a bounded domain with Lipschitz continuous boundary ∂Ω.

Example 3.3 (Homogeneous Dirichlet Boundary Value Problem Revisited). For the problem (3.1)-(3.2), the weak formulation is given by (3.3) with V = H₀¹(Ω), equipped with the norm ‖v‖_V = ‖∇v‖. For wellposedness, the bilinear form a(·, ·) satisfies both required properties:

    a(v, v) = ∫_Ω |∇v|² dx = ‖v‖²_V,

and

    |a(u, v)| = | ∫_Ω ∇u · ∇v dx | ≤ ‖u‖_V ‖v‖_V.

The functional L is linear and, using the identification (2.8) of Chapter 2, we have L(v) = (f, v) = ⟨f, v⟩ for all v ∈ V. Hence, using the Poincaré inequality, it follows that

    ‖L‖_{V′} = sup_{0≠v∈V} |L(v)| / ‖v‖_V ≤ C‖f‖.

An application of the Lax-Milgram Theorem yields the wellposedness of (3.3). Moreover, ‖u‖₁ ≤ C‖f‖. Since f ∈ L²(Ω), it is straightforward to check that ‖∆u‖ ≤ ‖f‖, and the following regularity result can be shown to hold:

    ‖u‖₂ ≤ C‖f‖.

For a proof, see Kesavan [9].


Since the bilinear form is symmetric, by Corollary 3.2 the problem (3.3) is equivalent to the following minimization problem:

    J(u) = min_{v∈H₀¹(Ω)} J(v),

where

    J(v) = ½ ∫_Ω |∇v|² dx − ∫_Ω f v dx.



Example 3.4 (Nonhomogeneous Dirichlet Problem)

Given f ∈ H⁻¹(Ω) and g ∈ H^{1/2}(∂Ω), find a function u satisfying

    −∆u = f,   x ∈ Ω,      (3.11)

with the non-homogeneous Dirichlet boundary condition

    u = g,   x ∈ ∂Ω.

To put (3.11) in the abstract variational form, the choice of V is not difficult to guess. Since H^{1/2}(∂Ω) is the range of γ₀, where γ₀ : H¹(Ω) → H^{1/2}(∂Ω), for g ∈ H^{1/2}(∂Ω) there is an element, say u₀ ∈ H¹(Ω), such that γ₀u₀ = g. Therefore, we recast the problem as: Find u ∈ H¹(Ω) such that

    u − u₀ ∈ H₀¹(Ω),      (3.12)
    a(u − u₀, v) = ⟨f, v⟩ − a(u₀, v)   ∀v ∈ H₀¹(Ω),      (3.13)

where a(v, w) = ∫_Ω ∇v · ∇w dx. We obtain (3.13) by using the Gauss divergence theorem.

To check the conditions of the Lax-Milgram Theorem, note that V = H₀¹(Ω) and the bilinear form a(·, ·) satisfies

    |a(u, v)| ≤ ∫_Ω |∇u||∇v| dx ≤ ‖∇u‖‖∇v‖ = ‖u‖_V ‖v‖_V.

In H₀¹(Ω), ‖∇u‖ is in fact a norm (see Corollary 2.4) and is equivalent to the H¹(Ω)-norm. For coercivity,

    a(v, v) = ∫_Ω |∇v|² dx = ‖v‖²_V.

Since a(·, ·) is continuous, the mapping L : v ↦ ⟨f, v⟩ − a(u₀, v) is a continuous linear map on H₀¹(Ω) and hence belongs to H⁻¹(Ω). An application of the Lax-Milgram Theorem yields the wellposedness of (3.12)-(3.13). Further,

    ‖u‖_V − ‖u₀‖_V ≤ ‖u − u₀‖_V = ‖∇(u − u₀)‖ ≤ ‖L‖_{H⁻¹(Ω)} ≤ ‖f‖₋₁ + ‖∇u₀‖.

Therefore, using the equivalence of the norms on H₀¹(Ω) and H¹(Ω), we obtain

    ‖u‖₁ ≤ c(‖f‖₋₁ + ‖u₀‖₁)

for all u₀ ∈ H¹(Ω) such that γ₀u₀ = g. Taking the infimum over all u₀ ∈ H¹(Ω) for which γ₀u₀ = g, it follows that

    ‖u‖₁ ≤ C(‖f‖₋₁ + inf_{u₀∈H¹(Ω)} {‖u₀‖₁ : γ₀u₀ = g}) = C(‖f‖₋₁ + ‖g‖_{1/2,∂Ω})

(using the definition of the H^{1/2}(∂Ω)-norm). It remains to check in what sense u satisfies (3.11) and the boundary condition. Choose v ∈ D(Ω) in (3.13); this is possible as D(Ω) ⊂ H₀¹(Ω). Then

    a(u, v) = −⟨∆u, v⟩ = ⟨f, v⟩   ∀v ∈ D(Ω).


Since f ∈ H⁻¹(Ω), ∆u ∈ H⁻¹(Ω), and hence u satisfies

    u − u₀ ∈ H₀¹(Ω),      (3.14)
    −∆u = f   in H⁻¹(Ω).      (3.15)

The converse part is straightforward, as D(Ω) is dense in H₀¹(Ω). Moreover, u − u₀ ∈ H₀¹(Ω) if and only if γ₀u = γ₀u₀ = g, and hence the non-homogeneous boundary condition is satisfied.

Remark. It is even possible to show the following higher regularity result, provided f ∈ L²(Ω) and g ∈ H^{3/2}(∂Ω):

    ‖u‖₂ ≤ c(‖f‖ + ‖g‖_{3/2,∂Ω}),

where

    ‖g‖_{3/2,∂Ω} = inf_{v∈H²(Ω)} {‖v‖₂ : γ₀v = g}.

Since a(·, ·) is symmetric, u − u₀ minimizes J(v) over H₀¹(Ω), where

    J(v) = ½ a(v, v) − L(v),   with L(v) = ⟨f, v⟩ − a(u₀, v).

Example 3.5 (Neumann Problem): Find u such that

    −∆u + cu = f   in Ω,      (3.16)
    ∂u/∂ν = g      on ∂Ω.     (3.17)

Here we assume that f ∈ L²(Ω), g ∈ L²(∂Ω) and c > 0.

To find the space V, the bilinear form and the linear functional, we multiply (3.16) by a smooth function v and apply Green's formula:

    ∫_Ω ∇u · ∇v dx − ∫_∂Ω (∂u/∂ν) v ds + ∫_Ω cuv dx = ∫_Ω f v dx.

Using the Neumann boundary condition (3.17),

    ∫_Ω ∇u · ∇v dx + ∫_Ω cuv dx = ∫_Ω f v dx + ∫_∂Ω gv ds.      (3.18)

Therefore, set

    a(u, v) = ∫_Ω (∇u · ∇v + cuv) dx   and   L(v) = ∫_Ω f v dx + ∫_∂Ω gv ds.

The largest space for which these integrals are finite is the H¹(Ω)-space, so we choose V = H¹(Ω). To verify the sufficient conditions of the Lax-Milgram Theorem, note that the bilinear form a(·, ·) satisfies:

(i) the boundedness property, as

    |a(u, v)| = | ∫_Ω (∇u · ∇v + cuv) dx |
              ≤ ‖∇u‖‖∇v‖ + max_{x∈Ω̄} c(x) ‖u‖‖v‖
              ≤ max(1, max_{x∈Ω̄} c(x)) ‖u‖₁ ‖v‖₁;


(ii) the coercivity condition, since

    a(v, v) = ∫_Ω (|∇v|² + c|v|²) dx ≥ min(1, c) ‖v‖₁² = α ‖v‖₁².

Further, L is a linear functional and it satisfies

    |L(v)| ≤ ∫_∂Ω |g||v| ds + ∫_Ω |f||v| dx ≤ ‖g‖_{L²(∂Ω)} ‖γ₀v‖_{L²(∂Ω)} + ‖f‖‖v‖.

As ‖v‖ ≤ ‖v‖₁ and, from the trace result, ‖γ₀v‖_{L²(∂Ω)} ≤ C‖v‖₁, we obtain

    ‖L‖_{(H¹)′} = sup_{0≠v∈H¹(Ω)} |L(v)| / ‖v‖₁ ≤ C(‖f‖ + ‖g‖_{L²(∂Ω)}).

Therefore, L is a continuous linear functional on H¹(Ω). Apply the Lax-Milgram Theorem to obtain the wellposedness of the weak solution of (3.18). Moreover, the following bound also holds:

    ‖u‖₁ ≤ C(‖f‖ + ‖g‖_{L²(∂Ω)}).

Choose v ∈ D(Ω) (possible since D(Ω) ⊂ H¹(Ω)); using Green's formula, we find that a(u, v) = ⟨−∆u + cu, v⟩, and hence

    ⟨−∆u + cu, v⟩ = ⟨f, v⟩   ∀v ∈ D(Ω).

Therefore, the weak solution u satisfies the equation

    −∆u + cu = f   in D′(Ω).

Since f ∈ L²(Ω), −∆u + cu ∈ L²(Ω). Further, it can be shown that u ∈ H²(Ω). For the boundary condition, we note that for v ∈ H¹(Ω)

    ∫_Ω ∇u · ∇v dx = − ∫_Ω ∆u v dx + ∫_∂Ω (∂u/∂ν) v ds.

Since u ∈ H²(Ω), we obtain

    ∫_Ω (−∆u + cu) v dx = ∫_Ω f v dx   ∀v ∈ H¹(Ω).

Using Green's theorem, it follows that

    ∫_Ω (∇u · ∇v + cuv) dx = ∫_∂Ω (∂u/∂ν) v ds + ∫_Ω f v dx.

Comparing this with the weak formulation (3.18), we find that

    ∫_∂Ω (g − ∂u/∂ν) v ds = 0   ∀v ∈ H¹(Ω).

Hence,

    ∂u/∂ν = g   on ∂Ω.


Remark: If g = 0, i.e., ∂u/∂ν = 0 on ∂Ω, CAN we impose this condition on the H¹(Ω)-space as we have done in the case of the homogeneous Dirichlet problem? The answer is simply negative, because the space

    {v ∈ H¹(Ω) : ∂v/∂ν = 0 on ∂Ω}

is not closed. Since for the Neumann problem the boundary condition comes naturally when Green's formula is applied, we call such boundary conditions natural boundary conditions, and we do not impose homogeneous natural conditions on the basic space. In contrast, for second order problems the Dirichlet boundary condition is imposed on the solution space, and this condition is called an essential boundary condition. In general, for an elliptic operator of order 2m (m ≥ 1 an integer)³, the boundary conditions containing derivatives of order strictly less than m are called essential, whereas the boundary conditions containing derivatives of order at least m and up to 2m − 1 are called natural. As a rule, the homogeneous essential boundary conditions are imposed on the solution space.

³For higher order (order greater than one) partial differential equations, one essential condition for the equation to be elliptic is that it should be of even order.

Example 3.6 (Mixed Boundary Value Problems). Consider the following mixed boundary value problem in general form: Find u such that

    Au = f,         x ∈ Ω,      (3.19)
    u = 0           on ∂Ω₀,     (3.20)
    ∂u/∂ν_A = g     on ∂Ω₁,     (3.21)

where

    Au = − Σ_{i,j=1}^d ∂/∂x_j (a_ij ∂u/∂x_i) + a₀u,    ∂u/∂ν_A = Σ_{i,j=1}^d a_ij (∂u/∂x_i) ν_j,

the boundary ∂Ω = ∂Ω₀ ∪ ∂Ω₁ with ∂Ω₀ ∩ ∂Ω₁ = ∅, and ν_j is the j-th component of the external normal to ∂Ω₁. We shall assume the following conditions on the coefficients a_ij, a₀ and the forcing function f:

(i) The coefficients a_ij and a₀ are continuous and bounded by a common positive constant M, with a₀ > 0.

(ii) The operator A is uniformly elliptic, i.e., the associated quadratic form is uniformly positive definite. In other words, there is a positive constant α₀ such that

    Σ_{i,j=1}^d a_ij(x) ξ_i ξ_j ≥ α₀ Σ_{i=1}^d |ξ_i|²,   0 ≠ ξ = (ξ₁, …, ξ_d)ᵀ ∈ IR^d.      (3.22)

(iii) Assume, for simplicity, that f ∈ L²(Ω) and g ∈ L²(∂Ω₁).

Since on one part of the boundary, ∂Ω₀, we have a zero essential boundary condition, we impose it on our solution space and thus consider

    V = {v ∈ H¹(Ω) : v = 0 on ∂Ω₀}.

This is a closed subspace of H¹(Ω) and is itself a Hilbert space. The Poincaré inequality holds for V, provided the surface measure of ∂Ω₀ is positive (in the proof of the Poincaré inequality, we


need only that functions vanish on a part of the boundary; see the proof). For the weak formulation, multiply (3.19) by v ∈ V and integrate over Ω. A use of the integration by parts formula yields

    ∫_Ω Au v dx = − Σ_{i,j=1}^d ∫_∂Ω a_ij (∂u/∂x_i) ν_j v ds + Σ_{i,j=1}^d ∫_Ω a_ij (∂u/∂x_i)(∂v/∂x_j) dx + ∫_Ω a₀uv dx
                = − ∫_∂Ω₁ gv ds + a(u, v),

where

    a(u, v) := Σ_{i,j=1}^d ∫_Ω a_ij (∂u/∂x_i)(∂v/∂x_j) dx + ∫_Ω a₀uv dx.

Therefore, the weak formulation becomes

    a(u, v) = L(v),   v ∈ V,      (3.23)

where L(v) = ∫_Ω f v dx + ∫_∂Ω₁ gv ds.

For the wellposedness of (3.23), let us verify the sufficient conditions of the Lax-Milgram Theorem.

(i) For boundedness, note that

    |a(u, v)| = | Σ_{i,j=1}^d ∫_Ω a_ij (∂u/∂x_i)(∂v/∂x_j) dx + ∫_Ω a₀uv dx |
              ≤ M ( Σ_{i,j=1}^d ‖∂u/∂x_i‖ ‖∂v/∂x_j‖ + ‖u‖‖v‖ ).

Use Σ_{i=1}^d a_i b_i ≤ (Σ_{i=1}^d a_i²)^{1/2} (Σ_{i=1}^d b_i²)^{1/2} to obtain

    Σ_{i,j=1}^d ‖∂u/∂x_i‖ ‖∂v/∂x_j‖ ≤ d ( Σ_{i=1}^d ‖∂u/∂x_i‖² )^{1/2} ( Σ_{j=1}^d ‖∂v/∂x_j‖² )^{1/2} ≤ d ‖∇u‖‖∇v‖.

Since the Poincaré inequality holds, i.e., ‖v‖ ≤ c(Ω)‖∇v‖ for v ∈ V, the seminorm ‖∇v‖ is in fact a norm on V and is equivalent to the ‖·‖₁-norm. Thus,

    |a(u, v)| ≤ C(M, d) ‖u‖_V ‖v‖_V.

(ii) For coercivity, observe that

    a(v, v) = Σ_{i,j=1}^d ∫_Ω a_ij (∂v/∂x_i)(∂v/∂x_j) dx + ∫_Ω a₀|v|² dx.

For the last term, use a₀ > 0 so that ∫_Ω a₀|v|² dx ≥ 0. For the first term, a use of the uniform ellipticity (3.22) with ξ_i = ∂v/∂x_i and ξ_j = ∂v/∂x_j yields

    Σ_{i,j=1}^d ∫_Ω a_ij (∂v/∂x_i)(∂v/∂x_j) dx ≥ α₀ Σ_{i=1}^d ∫_Ω |∂v/∂x_i|² dx = α₀ ‖∇v‖² = α₀ ‖v‖²_V.

Altogether, we obtain a(v, v) ≥ α₀ ‖v‖²_V.


Finally, the linear functional L satisfies

    |L(v)| ≤ ∫_Ω |f v| dx + ∫_∂Ω₁ |g||γ₀v| ds ≤ ‖f‖‖v‖ + ‖g‖_{L²(∂Ω₁)} ‖γ₀v‖_{L²(∂Ω₁)}.

From the trace result, ‖γ₀v‖_{L²(∂Ω₁)} ≤ C‖v‖₁ ≤ C‖v‖_V, and using the Poincaré inequality, it follows that

    ‖L‖_{V′} = sup_{0≠v∈V} |L(v)| / ‖v‖_V ≤ C(‖f‖ + ‖g‖_{L²(∂Ω₁)}).

An appeal to the Lax-Milgram Theorem yields the wellposedness of the weak solution u ∈ V of (3.23). Moreover,

    ‖u‖₁ ≤ C(M, α₀, d, Ω)(‖f‖ + ‖g‖_{L²(∂Ω₁)}).

In order to explore in what sense the weak solution u of (3.23) satisfies (3.19)–(3.21), take v ∈ D(Ω) in (3.23). Since D(Ω) ⊂ V, using the definition of distributional derivatives, (3.23) becomes

    ⟨Au, v⟩ = ⟨f, v⟩   ∀v ∈ D(Ω).

Thus, Au = f in D′(Ω). For the boundary conditions, it is found that for v ∈ V,

    a(u, v) = ∫_Ω Au v dx + ∫_∂Ω (∂u/∂ν_A) v ds
            = ∫_Ω f v dx + ∫_∂Ω₁ (∂u/∂ν_A) v ds   (since v = 0 on ∂Ω₀),

and L(v) = ∫_Ω f v dx + ∫_∂Ω₁ gv ds. From (3.23), we obtain

    ∫_∂Ω₁ (∂u/∂ν_A) v ds = ∫_∂Ω₁ gv ds   ∀v ∈ V,

and hence the boundary conditions are satisfied. Note that, even if f and g are smooth, the weak solution may not be in H²(Ω), because of the common points or common curves between ∂Ω₀ and ∂Ω₁.

Since the bilinear form is symmetric (as a_ij = a_ji; if not, redefine a_ij as (a_ij + a_ji)/2, which is possible), the equation (3.23) is equivalent to the minimization of the energy functional J, where

    J(v) = ½ a(v, v) − L(v),   v ∈ V.

So far, all the problems we have discussed have symmetric bilinear forms and hence each of them leads to an equivalent energy formulation. The example considered below does not have an equivalent energy minimization counterpart.


Example 3.7 Consider the following fluid flow problem with a transport term:

    −∆u + ~b · ∇u + c₀u = f   in Ω,      (3.24)
    u = 0                     on ∂Ω,     (3.25)

where ~b = (b₁, …, b_d)ᵀ is a constant vector and c₀ ≥ 0. With the obvious choice of space V = H₀¹(Ω), we multiply the equation (3.24) by v ∈ H₀¹(Ω) and then integrate over Ω to obtain

    ∫_Ω (−∆u)v dx + ∫_Ω (~b · ∇u)v dx + ∫_Ω c₀uv dx = ∫_Ω f v dx,   v ∈ V.

For the first term apply Green's formula, and rewrite the weak formulation as

    a(u, v) = L(v),   v ∈ V,      (3.26)

where

    a(u, v) = ∫_Ω ∇u · ∇v dx + ∫_Ω (~b · ∇u)v dx + ∫_Ω c₀uv dx

and L(v) = ∫_Ω f v dx. Note that a(·, ·) is not symmetric, as

    a(u, v) − a(v, u) = ∫_Ω [(~b · ∇u)v − (~b · ∇v)u] dx ≠ 0.

It is straightforward to check that a(·, ·) is bounded (use the Cauchy-Schwarz inequality and the boundedness of ~b and c₀). For coercivity,

    a(v, v) = ∫_Ω |∇v|² dx + ∫_Ω (~b · ∇v)v dx + ∫_Ω c₀|v|² dx.

Note that

    ∫_Ω (~b · ∇v)v dx = Σ_{i=1}^d ∫_Ω b_i (∂v/∂x_i) v dx
                      = ½ Σ_{i=1}^d ∫_Ω b_i ∂(v²)/∂x_i dx   (~b is constant)
                      = ½ Σ_{i=1}^d ∫_∂Ω b_i v² ν_i ds = 0,   as v ∈ V.

Thus,

    a(v, v) = ∫_Ω |∇v|² dx + ∫_Ω c₀|v|² dx ≥ ‖∇v‖² = ‖v‖²_V.

Here, we have used ∫_Ω c₀|v|² dx ≥ 0 as c₀ ≥ 0. Note that when the vector ~b is not constant, it is still possible to verify coercivity of a(·, ·) under some restrictions on ~b. Using the Lax-Milgram Theorem, there exists a unique solution u of (3.26) and ‖u‖₁ ≤ c‖f‖.


Note that the minimization of the functional J (in this case, as ~b is constant), where

    J(v) = ½ ∫_Ω (|∇v|² + c₀|v|²) dx − ∫_Ω f v dx,

would yield a wrong solution. In general, when ~b is not constant or other boundary conditions are used, it is not even possible to write down a corresponding energy-like functional J.

Exercises. Find the admissible space and derive the weak formulation. Using the Lax-Milgram Theorem, prove the wellposedness of the weak solutions.

1.)
    − d/dx ( a(x) du/dx ) = f,   0 < x < 1,   u(0) = 0,   du/dx (1) = 0.
Here a(x) ≥ a₀ > 0 and f ∈ L²(0, 1).

2.)
    − d/dx ( a(x) du/dx ) + b(x) du/dx + c(x)u = f,   du/dx (0) = du/dx (1) = 0.
Please find appropriate conditions on b relating a and c. Here again a(x) ≥ a₀ > 0, c ≥ c₀ > 0, and a, b, c are bounded by a common constant M.

3.) The following Robin problem:
    −∆u + λu = f        in Ω,
    αu + ∂u/∂ν = 0      on ∂Ω,
where α ≥ 0 and λ > 0 are constants (sometimes this boundary condition is called the Fourier boundary condition).

In all the above cases, discuss in what sense the weak solutions satisfy the corresponding differential equations.

Example 3.8 (Biharmonic Problems). Consider the following fourth order partial differential equation:

    ∆²u = f       in Ω,      (3.27)
    u = 0         on ∂Ω,     (3.28)
    ∂u/∂ν = 0     on ∂Ω.     (3.29)

To choose an appropriate function space, multiply the partial differential equation by a smooth function v ∈ D(Ω), or by v ∈ C²(Ω̄) such that v = 0 and ∂v/∂ν = 0 on ∂Ω (both boundary conditions are essential for this fourth order problem), and integrate over Ω. Apply the Gauss divergence theorem to the first term:

    ∫_Ω ∆(∆u) v dx = − ∫_Ω ∇(∆u) · ∇v dx + ∫_∂Ω (∇(∆u) · ν) v ds
                   = ∫_Ω ∆u ∆v dx − ∫_∂Ω ∆u (∇v · ν) ds + ∫_∂Ω (∇(∆u) · ν) v ds.

Since both boundary terms vanish (as v = 0 and ∂v/∂ν = 0 on ∂Ω), we have

    ∫_Ω ∆²u v dx = (∆u, ∆v) = a(u, v).


Similarly,

    L(v) = ∫_Ω f v dx.

The largest space for which these integrals are meaningful is H²(Ω). Since the boundary conditions are of essential type, we impose them on the space. Thus, the admissible space is V = H₀²(Ω), where

    H₀²(Ω) = {v ∈ H²(Ω) : v = 0, ∂v/∂ν = 0 on ∂Ω}.

Since D(Ω) is dense in H₀²(Ω), we have the following weak formulation: Find u ∈ H₀²(Ω) such that

    a(u, v) = L(v)   ∀v ∈ H₀²(Ω).      (3.30)

Using the Poincaré inequality for ∇v (since ∇v vanishes on ∂Ω), we have

    ‖∇v‖² ≤ c(Ω) ‖∆v‖².

Similarly, as v = 0 on ∂Ω,

    ‖v‖² ≤ c(Ω) ‖∇v‖².

Therefore, ‖∆v‖ is a norm on H₀²(Ω) which is equivalent to the H²(Ω)-norm. Note that

    ‖v‖²_{H²(Ω)} = ‖v‖² + ‖∇v‖² + Σ_{|α|=2} ‖D^α v‖².

Since the last term contains not only ‖∆v‖² but also the cross derivatives, we need to interchange derivatives as follows. Consider v ∈ D(Ω) (density is then used to extend the result to H₀²(Ω)):

    ∫_Ω (∂²v/∂x_i∂x_j)² dx = ⟨D_ij v, D_ij v⟩ = − ⟨∂v/∂x_i, ∂³v/∂x_i∂x_j²⟩ = ⟨∂²v/∂x_i², ∂²v/∂x_j²⟩.

Using the density of D(Ω) in H₀²(Ω), we thus have

    ∫_Ω |∂²v/∂x_i∂x_j|² dx = ∫_Ω (∂²v/∂x_i²)(∂²v/∂x_j²) dx.

Taking the summation over i, j = 1, …, d, we obtain

    Σ_{|α|=2} ‖D^α v‖² = Σ_{i=1}^d Σ_{j=1}^d ∫_Ω |∂²v/∂x_i∂x_j|² dx = ∫_Ω |∆v|² dx.

A use of the Poincaré inequality yields ‖v‖²_{H²(Ω)} ≤ C‖∆v‖², where C is a generic positive constant. Thus

    ‖∆v‖² ≤ ‖v‖₂² ≤ C‖∆v‖²,

and these two norms are equivalent. In order to verify the conditions of the Lax-Milgram Theorem, we observe that

    |a(u, v)| ≤ ∫_Ω |∆u||∆v| dx ≤ ‖∆u‖‖∆v‖ = ‖u‖_V ‖v‖_V,

and

    a(v, v) = ∫_Ω |∆v|² dx = ‖v‖²_V.


Further,

    |L(v)| = | ∫_Ω f v dx | ≤ ‖f‖‖v‖ ≤ C‖f‖‖∆v‖ = C‖f‖‖v‖_V,

and hence

    ‖L‖_{V′} = ‖L‖₋₂ ≤ C‖f‖.

An application of the Lax-Milgram Theorem yields the wellposedness of the weak solution u ∈ H₀²(Ω). Moreover,

    ‖u‖₂ ≤ C‖∆u‖ ≤ C‖f‖.

In fact, it is an easy exercise to check that the weak solution u satisfies the PDE in the sense of distributions.

Example 3.9 (Steady State Stokes Equations). Consider the steady state Stokes system of equations: Given the vector f~, find a velocity vector ~u and a pressure p of an incompressible fluid such that

    −∆~u + ∇p = f~,   in Ω,      (3.31)
    ∇ · ~u = 0,       in Ω,      (3.32)
    ~u = 0,           on ∂Ω,     (3.33)

where Ω is a bounded domain with smooth boundary ∂Ω and ~u = (u₁, …, u_d). Here, the incompressibility condition is given by equation (3.32); it is this condition which makes the problem difficult to solve. For the variational formulation, we shall use the following function spaces. Let

    𝒱 := {~v ∈ (D(Ω))^d : ∇ · ~v = 0 in Ω},

and let V be the closure of 𝒱 with respect to the H₀¹-norm, where H₀¹ := (H₀¹(Ω))^d, i.e., the space of vector fields ~v ∈ (H¹(Ω))^d with ~v = 0 on ∂Ω. In fact,

    V := {~v ∈ H₀¹(Ω) : ∇ · ~v = 0 in Ω}.

For a proof, see Temam [11]. The norm on H₀¹(Ω) will be denoted by

    ‖~v‖_{H₀¹(Ω)} = ( Σ_{i=1}^d ‖∇v_i‖² )^{1/2}.

For the weak formulation, form an innerproduct of (3.31) with ~v ∈ 𝒱 and then integrate by parts the first term on the left hand side to obtain

    − ∫_Ω ∆~u · ~v dx = − Σ_{i,j=1}^d ∫_Ω (∂²u_j/∂x_i²) v_j dx
                      = − Σ_{i,j=1}^d ∫_∂Ω (∂u_j/∂x_i) ν_i v_j ds + Σ_{i,j=1}^d ∫_Ω (∂u_j/∂x_i)(∂v_j/∂x_i) dx.

Since ~v = 0 on ∂Ω, it follows that

    − ∫_Ω ∆~u · ~v dx = (∇~u, ∇~v).


For the second term on the left hand side, use integration by parts again to find that

    ∫_Ω ∇p · ~v dx = Σ_{i=1}^d ∫_Ω (∂p/∂x_i) v_i dx = Σ_{i=1}^d ∫_∂Ω p ν_i v_i ds − ∫_Ω p ∇ · ~v dx = 0.

Here, we have used ~v ∈ 𝒱. Now the admissible space is V. Since 𝒱 is dense in V, the weak formulation of (3.31)–(3.33) reads: Find ~u ∈ V such that

    a(~u, ~v) = L(~v)   ∀~v ∈ V,      (3.34)

where a(~u, ~v) = (∇~u, ∇~v) and L(~v) = (f~, ~v). It is now straightforward to check that the bilinear form is coercive as well as bounded on V, and moreover the linear form L is continuous. Note that the Poincaré inequality is also valid for functions in V. Therefore, apply the Lax-Milgram Theorem to conclude the wellposedness of the weak solution ~u of (3.34). Since the bilinear form is symmetric, (3.34) is equivalent to the following minimization problem:

    J(~u) = min_{~v∈V} J(~v),

where

    J(~v) = ½ ‖∇~v‖² − (f~, ~v).

Now suppose that ~u satisfies (3.34); we shall show that ~u satisfies (3.31)-(3.33) in some sense. First, ~u ∈ V implies that ∇ · ~u = 0 in the sense of distributions. Then, for ~v ∈ 𝒱, using integration by parts in (3.34), we obtain

    ⟨−∆~u − f~, ~v⟩ = 0   ∀~v ∈ 𝒱.      (3.35)

At this stage, let us recall a result from Temam [11] (see pp. 14–15).

Lemma 3.10 Let Ω be an open set in IR^d and F~ = (f₁, …, f_d) with each f_i ∈ D′(Ω), i = 1, …, d. A necessary and sufficient condition for F~ = ∇p for some p ∈ D′(Ω) is that

    ⟨F~, ~v⟩ = 0   ∀~v ∈ 𝒱.

Now, using the above Lemma with F~ = ∆~u + f~, we have

    −∆~u − f~ = −∇p   for some p ∈ D′(Ω),      (3.36)

that is, the pair (~u, p) satisfies (3.31) in the sense of distributions.

Note. For more details on regularity results, see Agmon [2], [5] and also Kesavan [9].

Chapter 4

Elliptic Equations

In the first part of this chapter, we discuss the finite dimensional approximation of the abstract variational formulations described in the third chapter and derive error estimates using Cea's Lemma. The later part is devoted to the discussion of some examples, the derivation of the finite element equations, and adaptive methods for elliptic equations.

4.1 Finite Dimensional Approximation to the Abstract Variational Problems.

Let us first recall the abstract variational problem discussed in the previous chapter. Given a real Hilbert space V and a linear functional L on it, find a function u ∈ V such that

    a(u, v) = L(v)   ∀v ∈ V,      (4.1)

where the bilinear form a(·, ·) and the linear functional L satisfy the conditions of the Lax-Milgram Theorem of Chapter 3. Let V_h be a finite dimensional subspace of V, parametrized by a parameter h. Let us pose the problem (4.1) in this finite dimensional setting as: seek a solution u_h ∈ V_h such that

    a(u_h, v_h) = L(v_h)   ∀v_h ∈ V_h.      (4.2)

In the literature, this formulation is known as the Galerkin method. We may be curious to know whether the problem (4.2) is wellposed. Since V_h is finite dimensional with dimension, say, N_h, we may write down a basis {φ_i}_{i=1}^{N_h} of V_h. Let

    u_h(x) = Σ_{i=1}^{N_h} α_i φ_i(x),

where the α_i's are unknowns. Similarly, represent v_h(x) = Σ_{j=1}^{N_h} β_j φ_j(x) for some β_j's. On substitution in (4.2), we obtain

    Σ_{i,j=1}^{N_h} α_i β_j a(φ_i, φ_j) = Σ_{j=1}^{N_h} β_j L(φ_j).

In matrix form,

    βᵀ A α = βᵀ b,      (4.3)


where the global stiffness matrix is A = [a_ij] with a_ij = a(φ_i, φ_j), and the global load vector is b = (b_j) with b_j = L(φ_j). Since the system (4.2) holds for all v_h ∈ V_h, the equation (4.3) holds for all β ∈ IR^{N_h}. Thus,

    Aα = b.      (4.4)

In other words, it is enough to consider the problem (4.2) as

    a(u_h, φ_j) = L(φ_j),   j = 1, …, N_h,

that is, replacing v_h by the basis functions φ_j. Since the bilinear form a(·, ·) is coercive, the matrix A is positive definite. To verify this, we note that for any vector 0 ≠ β ∈ IR^{N_h},

    βᵀAβ = Σ_{i,j=1}^{N_h} β_i β_j a(φ_i, φ_j) = a( Σ_{i=1}^{N_h} β_i φ_i, Σ_{j=1}^{N_h} β_j φ_j ) ≥ α₀ ‖ Σ_{i=1}^{N_h} β_i φ_i ‖² > 0,

and hence the linear system (4.4) has a unique solution α ∈ IR^{N_h}. In other words, the problem (4.2) has a unique solution u_h ∈ V_h. This can also be proved directly by applying the Lax-Milgram Theorem to (4.2), as V_h ⊂ V. When the bilinear form is symmetric, the matrix A is symmetric. Therefore, the problem (4.4) can be written equivalently as the minimization problem

    J(α) = min_{β∈IR^{N_h}} J(β),

where J(β) = ½ βᵀAβ − βᵀb, or simply

    J(u_h) = min_{v_h∈V_h} J(v_h),

where J(v_h) = ½ a(v_h, v_h) − L(v_h). Note that we identify the space V_h and IR^{N_h} through the basis {φ_i} of V_h and the natural basis {e_i} of IR^{N_h}, i.e., u_h is a solution of (4.2) if and only if α is a solution of Aα = b, where u_h = Σ_{i=1}^{N_h} α_i φ_i.

One of the attractive features of finite element methods is that the finite dimensional spaces are constructed with basis functions having small support. For example, if the bilinear form is given by an integral like

    a(u, v) = ∫_Ω ∇u · ∇v dx,

then a(φ_i, φ_j) = 0 if supp φ_i ∩ supp φ_j = ∅. Therefore, the small support of each φ_i entails the sparseness of the corresponding stiffness matrix A. For computational purposes, this is a desirable feature, as fast iterative methods are available for solving large sparse linear systems.

Below, we prove Cea's Lemma, which deals with the estimate of the error e = u − u_h committed in the process of approximation.

Lemma 4.1 (Cea's Lemma). Let u and u_h, respectively, be the solutions of (4.1) and (4.2). Let all the conditions of the Lax-Milgram Theorem hold. Then there is a constant C, independent of the discretization parameter h, such that

    ‖u − u_h‖_V ≤ C inf_{χ∈V_h} ‖u − χ‖_V.

Proof. Since V_h ⊂ V, we have for χ ∈ V_h

    a(u, χ) = L(χ)   and   a(u_h, χ) = L(χ).

Hence, taking the difference and using the linearity of the form a(·, ·), we obtain

    a(u − u_h, χ) = 0,   χ ∈ V_h.      (4.5)

Using the coercivity property of the bilinear form, it follows that

    α₀ ‖u − u_h‖²_V ≤ a(u − u_h, u − u_h)
                    = a(u − u_h, u − χ) + a(u − u_h, χ − u_h)
                    = a(u − u_h, u − χ).

Here, the second term in the last but one step is zero by (4.5). Since the bilinear form is bounded, we infer that

    ‖u − u_h‖²_V ≤ (M/α₀) ‖u − u_h‖_V ‖u − χ‖_V.

In case u = u_h, the result follows trivially. For u ≠ u_h, we can cancel one factor ‖u − u_h‖_V from both sides and then, taking the infimum over χ ∈ V_h (as χ is arbitrary), we obtain the desired estimate with C = M/α₀. This completes the rest of the proof.

Remark. In the literature, the property (4.5) is called Galerkin orthogonality. The space V_h, being a finite dimensional subspace of the real Hilbert space V, is closed in V, and hence the infimum is attained. In fact, the infimum is attained at the best approximation, say P₁u, of u onto V_h. Given V_h, the best truncation error which can be expected is ‖u − P₁u‖_V; this is what we call the consistency condition. The stability is a straightforward consequence of the boundedness and coercivity of the bilinear form: choose v_h = u_h in (4.2) to obtain

    α₀ ‖u_h‖²_V ≤ a(u_h, u_h) = L(u_h) ≤ ‖L‖_{V′} ‖u_h‖_V,

and hence

    ‖u_h‖_V ≤ ‖L‖_{V′} / α₀.

This bound is independent of the parameter h, and therefore the Galerkin method is stable with respect to the V-norm. Now we can apply the Lax–Richtmyer equivalence theorem to infer the convergence result directly. Note that when the bilinear form a(·, ·) is symmetric, the Galerkin approximation u_h is the best approximation with respect to the energy norm (a(v, v))^{1/2}, v ∈ V.

4.2 Examples.

In this section, we examine, through a couple of examples, the essential features of the finite element spaces which play a vital role in the error estimates, especially in the optimal rate of convergence. It may be difficult for us to spend more time on the construction of the finite element spaces and the derivation of their approximation properties, but we indicate the basic steps through the two examples given below; for more details, we refer to the excellent books by Ciarlet [4], Johnson [8], and Brenner and Scott [3].

Example 4.2 (Two point boundary value problems). Consider the following simple linear two point boundary value problem

    −(a(x)u′)′ + a₀(x)u = f(x),   x ∈ I := (0, 1),      (4.6)

with boundary conditions

    u(0) = 0,   u′(1) = 0,

where a ≥ α₀ > 0, a₀ ≥ 0, max_{x∈[0,1]}(|a|, |a₀|) ≤ M and f ∈ L²(I).


One of the basic steps in the finite element method is the weak or variational formulation of the original problem (4.6). Following the procedure of the last chapter, it is straightforward to guess the admissible space V as

    V := {v ∈ H¹(I) : v(0) = 0}.

Now form an innerproduct of (4.6) with v ∈ V and use integration by parts on the first term on the left hand side. The boundary term at x = 0 vanishes as v(0) = 0, while the other boundary term becomes zero because of the homogeneous Neumann condition u′(1) = 0. Then the weak form of (4.6) is to seek a function u ∈ V such that

    a(u, v) = (f, v)   ∀v ∈ V,      (4.7)

where

    a(u, v) = ∫₀¹ (a u′v′ + a₀uv) dx.

Construction of finite element spaces V_h. For any positive integer N, let {0 = x₀ < x₁ < ⋯ < x_N = 1} be a partition of the interval I into sub-intervals, called elements, I_i = (x_{i−1}, x_i), i = 1, …, N, with lengths h_i = x_i − x_{i−1}. Let h = max_{1≤i≤N} h_i be the parameter associated with the finite element mesh. The simplest finite element space defined on the above partition is the C⁰-piecewise linear spline space

    V_h := {v_h ∈ C⁰([0, 1]) : v_h|_{I_i} ∈ P₁(I_i), i = 1, 2, …, N, v_h(0) = 0},

where P₁(I_i) denotes the space of linear polynomials on I_i. On each subinterval or element I_i, v_h coincides with a linear polynomial a_i + b_i x, where a_i and b_i are unknowns. Therefore, we have 2N unknowns or degrees of freedom. But the continuity requirement at the internal nodal points or mesh points (x₁, …, x_{N−1}) imposes N − 1 additional conditions, and one more comes from the imposition of the left hand boundary condition. Thus, the total number of degrees of freedom, i.e., the dimension of V_h, is N. Now, the specification of values at the nodal points (x₁, …, x_N) uniquely determines the elements of V_h. Define the Lagrange interpolating functions {φ_i}_{i=1}^N as the elements of V_h which satisfy

    φ_i(x_j) = δ_ij,   1 ≤ i, j ≤ N,

where the Kronecker delta δ_ij = 1 when i = j and is zero for i ≠ j. It is an easy exercise to check that {φ_i}_{i=1}^N forms a basis for V_h. Moreover, for i = 1, …, N − 1,

    φ_i(x) = (x − x_{i−1})/h_i      for x_{i−1} ≤ x ≤ x_i,
    φ_i(x) = (x_{i+1} − x)/h_{i+1}  for x_i ≤ x ≤ x_{i+1},

(and φ_i(x) = 0 elsewhere), and for i = N,

    φ_N(x) = (x − x_{N−1})/h_N   on I_N.

Each of the hat functions φ_i, i = 1, …, N − 1, has support in Ī_i ∪ Ī_{i+1}, and supp φ_N = Ī_N.

As mentioned earlier, each element v_h ∈ V_h is uniquely determined by its values at the nodal points x_j, j = 1, …, N, that is,

    v_h(x) = Σ_{i=1}^N v_h(x_i) φ_i(x).

Given a smooth function v with v(0) = 0, it can be approximated by its nodal interpolant I_h v ∈ V_h defined by I_h v(x_i) = v(x_i), i = 1, …, N. Thus,

    I_h v(x) = Σ_{i=1}^N v(x_i) φ_i(x).
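The hat functions and the nodal interpolant just described are easy to realize on a computer. The following minimal sketch (Python) builds them on a uniform partition; the partition and the sample function v are illustrative assumptions, not part of the notes.

```python
import numpy as np

def hat(i, x, nodes):
    """Hat function phi_i associated with node i (1 <= i <= N); phi_N is the half hat on I_N."""
    xi, left = nodes[i], nodes[i - 1]
    right = nodes[i + 1] if i + 1 < len(nodes) else nodes[i]
    y = np.zeros_like(x)
    mask = (x >= left) & (x <= xi)
    y[mask] = (x[mask] - left) / (xi - left)          # rising part on [x_{i-1}, x_i]
    if right > xi:
        mask = (x > xi) & (x <= right)
        y[mask] = (right - x[mask]) / (right - xi)    # falling part on [x_i, x_{i+1}]
    return y

def interpolate(v, x, nodes):
    """Nodal interpolant I_h v(x) = sum_i v(x_i) phi_i(x), assuming v(0) = 0."""
    return sum(v(nodes[i]) * hat(i, x, nodes) for i in range(1, len(nodes)))

nodes = np.linspace(0.0, 1.0, 6)            # uniform partition with N = 5 elements
x = np.linspace(0.0, 1.0, 201)
v = lambda s: np.sin(np.pi * s / 2)         # smooth function with v(0) = 0
Ihv = interpolate(v, x, nodes)
```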

To estimate the error in the interpolation, we quickly prove the following Theorem.


Theorem 4.3 Let v be an element of H²(I) which vanishes at x = 0, and let I_h v be its nodal interpolant defined as above. Then the following estimates hold:

    ‖(v − I_h v)′‖ ≤ C₁ h ‖v″‖   and   ‖v − I_h v‖ ≤ C₂ h² ‖v″‖.

Proof. We shall first prove the result on individual elements and then sum up to obtain the result on the entire interval. On each element I_j, the interpolant I_h v is a linear function such that I_h v(x_{j−1}) = v(x_{j−1}) and I_h v(x_j) = v(x_j). Then we have the representation

    I_h v(x) = v(x_{j−1}) φ_{1,j}(x) + v(x_j) φ_{2,j}(x)   ∀x ∈ I_j,      (4.8)

where φ_{1,j} = (x_j − x)/h_j = 1 − φ_{2,j} and φ_{2,j} = (x − x_{j−1})/h_j are the local basis functions on I_j; φ_{i,j}, i = 1, 2, takes the value 1 at the left and right hand nodes, respectively, and zero at the other node. Using Taylor's formula for i = j − 1, j, we have

    v(x_i) = v(x) + v′(x)(x_i − x) + ∫_x^{x_i} (x_i − s) v″(s) ds.      (4.9)

Substitute (4.9) in (4.8) and use the relations φ_{1,j} + φ_{2,j} = 1 and (x_{j−1} − x)φ_{1,j} + (x_j − x)φ_{2,j} = 0 to obtain the following representation of the interpolation error:

    v(x) − I_h v(x) = − [ φ_{1,j} ∫_{x_{j−1}}^{x} (s − x_{j−1}) v″(s) ds + φ_{2,j} ∫_{x}^{x_j} (x_j − s) v″(s) ds ].      (4.10)

First squaring both sides and using (a + b)² ≤ 2(a² + b²), we integrate over I_j. Repeated applications of the Cauchy-Schwarz inequality, together with analytic evaluation of the L²-norms of the local basis functions, yield the desired L² estimate of the interpolation error on each subinterval I_j. Since ‖v − I_h v‖² = Σ_{j=1}^N ‖v − I_h v‖²_{L²(I_j)}, the result follows in the L²-norm. For the other estimate, we differentiate the error representation (4.10); note that differentiation of the integrals with respect to x gives values of v″ almost everywhere. Then repeat the above procedure to complete the rest of the proof.

Remark. When v ∈ W^{2,∞}(I), use (4.10) with some standard modifications to obtain the maximum norm error estimate

    ‖v − I_h v‖_{L∞(I)} + h ‖(v − I_h v)′‖_{L∞(I)} ≤ C₃ h² ‖v″‖_{L∞(I)}.

It is to be noted that a quasi-uniformity condition on the finite element mesh is needed for the above estimate, that is, h_j ≥ c h for all j and some positive constant c. The procedure remains the same for higher degree C⁰-piecewise polynomial spaces. However, instead of looking at each element, one concentrates on a master element, on which the construction of local basis functions is relatively simpler; then appropriate nonsingular transformations, which are easy to find, carry the required estimates back to the individual elements.
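The rates in Theorem 4.3 are easy to observe numerically. The following small sketch (Python, on uniform meshes) checks that halving h divides the L² interpolation error by about 4 and the derivative error by about 2; the smooth test function and the quadrature grid are illustrative assumptions.

```python
import numpy as np

v  = lambda x: np.sin(np.pi * x / 2)                  # smooth, v(0) = 0
dv = lambda x: (np.pi / 2) * np.cos(np.pi * x / 2)

def interp_errors(N):
    nodes = np.linspace(0.0, 1.0, N + 1)
    h = 1.0 / N
    xq = np.linspace(0.0, 1.0, 40 * N)                # fine grid for the norms
    Ihv = np.interp(xq, nodes, v(nodes))              # piecewise-linear interpolant
    slopes = np.diff(v(nodes)) / h                    # (I_h v)' is piecewise constant
    dIhv = slopes[np.minimum((xq / h).astype(int), N - 1)]
    dx = xq[1] - xq[0]
    eL2 = np.sqrt(np.sum((v(xq) - Ihv) ** 2) * dx)    # Riemann-sum approximation of the L2 norm
    eH1 = np.sqrt(np.sum((dv(xq) - dIhv) ** 2) * dx)
    return eL2, eH1

for N in (8, 16, 32, 64):
    print(N, interp_errors(N))   # eL2 ~ O(h^2), eH1 ~ O(h)
```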

With the finite dimensional space V_h, the Galerkin formulation is to seek a function u_h ∈ V_h such that

    a(u_h, v_h) = (f, v_h),   v_h ∈ V_h.      (4.11)


Theorem 4.4 Let u be the solution of (4.7) with u ∈ H²(I) ∩ V, and let u_h be the solution of (4.11). Then there is a constant C, independent of h, such that the error e = u − u_h satisfies

    ‖e′‖ ≤ C h ‖u″‖   and   ‖e‖ ≤ C h² ‖u″‖.

Proof. Note that

    |a(u, v)| ≤ M ‖u‖₁‖v‖₁ ≤ 2M ‖u′‖‖v′‖   and   a(u, u) ≥ α₀ ‖u′‖².

For the boundedness we have used the Poincaré inequality ‖v‖ ≤ C‖v′‖, which is valid on V since every v ∈ V vanishes at x = 0. Now apply Cea's Lemma to obtain

    ‖u − u_h‖_V = ‖(u − u_h)′‖ ≤ C(α₀) inf_{χ∈V_h} ‖(u − χ)′‖ ≤ C ‖(u − I_h u)′‖ ≤ C h ‖u″‖.

For the estimate in the L²-norm, we use the Aubin-Nitsche duality argument. Consider the following adjoint elliptic problem: for g ∈ L²(I), let φ be the solution of

    −(a(x)φ′)′ + a₀(x)φ = g,   x ∈ I,      (4.12)

with boundary conditions φ(0) = 0, φ′(1) = 0. The above problem satisfies the regularity result

    ‖φ‖₂ ≤ C‖g‖.      (4.13)

Multiply the equation (4.12) by e and integrate with respect to x from 0 to 1. Use the Neumann boundary condition for φ and e(0) = 0 to obtain

    (e, g) = a(e, φ) = a(e, φ − I_h φ).

Here, we have used the Galerkin orthogonality condition a(e, χ) = 0 for all χ ∈ V_h. A use of the boundedness of the bilinear form yields

    |(e, g)| ≤ C ‖e′‖ ‖(φ − I_h φ)′‖ ≤ C h ‖e′‖ ‖φ‖₂ ≤ C h ‖e′‖ ‖g‖.

In the last step we have employed the elliptic regularity result (4.13). Now the result follows, since ‖e‖ = sup_{0≠g∈L²(I)} |(e, g)| / ‖g‖.

Remark. We shall defer the computational procedure and the construction of the finite element equations to the next section.

Example 4.5 (The standard Galerkin method for elliptic equations). Let Ω be a domain in IR^d with smooth boundary ∂Ω and consider the following homogeneous Dirichlet boundary value problem

    −∆u + u = f,   in Ω,      (4.14)
    u = 0,         on ∂Ω,     (4.15)

where ∆ = Σ_{j=1}^d ∂²/∂x_j² is the Laplace operator.

For the Galerkin formulation of the boundary value problem (4.14)-(4.15), we first write the problem in weak or variational form: find u ∈ H₀¹(Ω) such that

    a(u, v) = (f, v)   ∀v ∈ H₀¹(Ω),      (4.16)


where a(u, v) = (∇u, ∇v) + (u, v). This has been done in the last chapter.

Construction of the finite element space. We consider the simplest finite element space V_h and then apply the Galerkin method to the variational form (4.16) using V_h. For easy exposition, we shall assume that the domain Ω is a convex polygon in IR² with boundary ∂Ω. Let T_h denote a partition of Ω into disjoint triangles K such that

(i) ∪_{K_i∈T_h} K̄_i = Ω̄ and K_i ∩ K_j = ∅ for K_i, K_j ∈ T_h with K_i ≠ K_j;
(ii) no vertex of any triangle lies in the interior of a side of another triangle.

Let h_K be the longest side of the triangle K ∈ T_h and let h = max_{K∈T_h} h_K denote the discretization parameter, i.e., the maximal length of a side of the triangulation T_h. Therefore, the parameter h decreases as the triangulation is made finer. We assume that the angles of the triangulations are bounded below, independently of h. Sometimes (mainly for maximum norm error estimates), we also assume that the triangulations are quasi-uniform in the sense that the triangles of T_h are essentially of the same size; this may be expressed by demanding that the area of each K in T_h is bounded below by ch², with c > 0 independent of h.

Let V_h denote the continuous functions on the closure Ω̄ of Ω which are linear on each triangle of T_h and vanish on ∂Ω, i.e.,

    V_h := {χ ∈ C⁰(Ω̄) : χ|_K ∈ IP₁(K) ∀K ∈ T_h, χ = 0 on ∂Ω}.

Apparently, it is not quite clear that V_h is a subspace of H₀¹(Ω). However, an easy exercise shows that V_h ⊂ H₀¹(Ω). (Hint: what needs to be shown is that ∂/∂x_i(v_h|_K) ∈ L²(Ω), K ∈ T_h; use the distributional derivative and the contributions on the interelement boundaries — try this!)

Let {P_j}_{1}^{N_h} be the interior vertices of T_h. A function in V_h is then uniquely determined by its values at the points P_j and thus depends on N_h parameters. Let Φ_j be the "pyramid function" in V_h which takes the value 1 at P_j but vanishes at the other vertices, that is, Φ_i(P_j) = δ_ij. Then {Φ_j}_{1}^{N_h} forms a basis for V_h, and every v_h in V_h can be expressed as

    v_h(x) = Σ_{j=1}^{N_h} α_j Φ_j(x)   with α_j = v_h(P_j).

Given a smooth function v on Ω which vanishes on ∂Ω, we can now, for instance, approximate it by its interpolant I_h v in V_h, which we define as the element of V_h that agrees with v at the interior vertices, i.e., I_h v(P_j) = v(P_j) for j = 1, …, N_h. The following error estimates for the interpolant just defined, in the case of a convex domain in IR², are well known:

    ‖I_h v − v‖ ≤ C h² ‖v‖₂   and   ‖∇(I_h v − v)‖ ≤ C h ‖v‖₂,   for v ∈ H²(Ω) ∩ H₀¹(Ω).

We shall briefly indicate the steps leading to the above estimates; for a proof, we refer to the excellent text by Ciarlet [4]. These results may be derived by showing the corresponding estimate for each K ∈ T_h and then adding the squares of these estimates. For an individual K ∈ T_h, the proof is achieved by means of the Bramble-Hilbert lemma, noting that I_h v − v vanishes for all v ∈ IP₁. We remark here that, instead of deriving the estimates for each individual element, a master element is first constructed, so that simple affine transformations can easily be found mapping it to the individual elements. The local basis functions, which play a crucial role in the error analysis, can be easily derived for the master element. The Bramble-Hilbert Lemma is then applied only to the transformed problems on the master element.


We now return to the general case of a domain Ω in IR^d and assume that we are given a family {V_h} of finite-dimensional subspaces of H₀¹(Ω) such that, for some integer r ≥ 2 and small h,

    inf_{χ∈V_h} {‖v − χ‖ + h‖∇(v − χ)‖} ≤ C h^s ‖v‖_s,   for 1 ≤ s ≤ r, v ∈ H^s ∩ H₀¹.      (4.17)

The above example of piecewise linear functions corresponds to d = r = 2. In the case r > 2, V_h most often consists of C⁰-piecewise polynomials of degree r − 1 on a triangulation T_h as above; for instance, r = 4 corresponds to piecewise cubic polynomial subspaces. In the general situation, estimates such as (4.17) with s > d/2 may often be obtained by exhibiting an interpolation operator I_h into V_h such that

    ‖I_h v − v‖ + h‖∇(I_h v − v)‖ ≤ C h^s ‖v‖_s,   for 1 ≤ s ≤ r.      (4.18)

Note that in the case s ≤ d/2 the point values of v are not well defined for v ∈ H^s, and the construction has to be somewhat modified; see Brenner and Scott [3]. When the domain Ω is curved and r > 2, there are difficulties near the boundary. However, the above situation may be handled by mapping a curved triangle onto a straight-edged one (isoparametric elements). We shall not pursue this further; the interested reader may refer to Ciarlet [4]. For our subsequent use, we remark that if the family of triangulations T_h is quasiuniform and V_h consists of piecewise polynomials, then one has the following "inverse" inequality:

    ‖∇χ‖ ≤ C h⁻¹ ‖χ‖,   ∀χ ∈ V_h.      (4.19)

This inequality follows easily from the corresponding inequality for each triangle K ∈ T_h, which in turn is obtained by a transformation to a fixed reference triangle and the fact that all norms on a finite dimensional space are equivalent; see, e.g., [4]. The optimal orders to which functions and their gradients may be approximated under our assumption (4.18) are O(h^r) and O(h^{r−1}), respectively, and we shall try to obtain approximations of these orders for the solution of the Dirichlet problem.

Galerkin procedure. With V_h given as above, the Galerkin approximation u_h ∈ V_h is determined as the function in V_h which satisfies

    a(u_h, χ) = (f, χ),   ∀χ ∈ V_h.      (4.20)

Since V_h ⊂ H₀¹, the error e = u − u_h satisfies the Galerkin orthogonality condition

    a(e, χ) = a(u, χ) − a(u_h, χ) = (f, χ) − (f, χ) = 0   ∀χ ∈ V_h.

Using a basis {Φ_j}_{1}^{N_h} for V_h, the problem (4.20) may be stated as: find the coefficients α_i in

    u_h(x) = Σ_{i=1}^{N_h} α_i Φ_i(x)

such that

    Σ_{i=1}^{N_h} α_i [(∇Φ_i, ∇Φ_j) + (Φ_i, Φ_j)] = (f, Φ_j),   j = 1, …, N_h.

In vector-matrix form, this may be expressed as Aα = F, where A = (a_ij) is the so-called global stiffness matrix with a_ij = (∇Φ_i, ∇Φ_j) + (Φ_i, Φ_j), F = (f_j) is the global load vector with entries f_j = (f, Φ_j), and α is the vector of unknowns α_i. The dimension of these arrays equals N_h, the dimension of V_h. Since the stiffness matrix A is positive definite, it is invertible, and hence the discrete problem has a unique solution.


When V_h consists of piecewise polynomial functions, the elements of the matrix A may be calculated exactly. However, unless f has a particularly simple form, the elements (f, Φ_j) of F have to be computed by some quadrature formula.

We now prove the following estimate for the error e = u − u_h between the exact (or true) solution of (4.16) and the solution of the discrete problem (4.20).

Theorem 4.6 Let u_h and u be the solutions of (4.20) and (4.16), respectively. Then, for 1 ≤ s ≤ r, the error e = u − u_h satisfies

    ‖e‖ ≤ C h^s ‖u‖_s   and   ‖∇e‖ ≤ C h^{s−1} ‖u‖_s.

Proof. In order to apply Cea's Lemma, we note that

    a(u, u) = ‖∇u‖² + ‖u‖² ≥ ‖∇u‖²,

so the bilinear form is coercive in the H₀¹-norm, and also, for u, v ∈ H₀¹(Ω),

    |a(u, v)| ≤ M ‖∇u‖‖∇v‖,

so it is bounded; here we have used the Poincaré inequality. Now an appeal to Cea's Lemma and (4.17) yields

    ‖∇e‖ = ‖∇(u − u_h)‖ ≤ C inf_{χ∈V_h} ‖∇(u − χ)‖ ≤ C h^{s−1} ‖u‖_s.

For the estimate in the L²-norm, we proceed by the Aubin-Nitsche duality argument. Let φ be arbitrary in L², and let ψ ∈ H² ∩ H₀¹ be the solution of

    −∆ψ + ψ = φ,   in Ω,
    ψ = 0,         on ∂Ω,

and recall the elliptic regularity estimate

    ‖ψ‖₂ ≤ C‖φ‖.      (4.21)

Then,

    (e, φ) = (u − u_h, −∆ψ + ψ) = a(u − u_h, ψ).

Using the Galerkin orthogonality, for χ ∈ V_h we have

    (e, φ) = a(e, ψ − χ) ≤ C ‖∇e‖ ‖∇(ψ − χ)‖,

and hence, using the approximation property (4.18) with s = 2 and the elliptic regularity (4.21), we obtain

    (e, φ) ≤ C h^{s−1} ‖u‖_s · h ‖ψ‖₂ ≤ C h^s ‖u‖_s ‖φ‖,

which completes the rest of the proof if we choose φ = e.

Because of the variational formulation of the Galerkin method, the natural error estimates are expressed in L²-based norms. Error analyses in other norms are also discussed in the literature. For example, we state, without proof, the following maximum-norm error estimate for C⁰-piecewise linear elements. For a proof, we refer to Brenner and Scott [3].

Theorem 4.7 Assume that V_h consists of C⁰-piecewise linear polynomials and that the triangulations T_h are quasiuniform. Let u_h and u be the solutions of (4.20) and (4.16), respectively. Then the error e in the maximum norm satisfies

    ‖e‖_{L∞(Ω)} ≤ C h² log(1/h) ‖u‖_{W^{2,∞}(Ω)}.

4.3 Computational Issues

Although we discuss the formation of the finite element equations only for the two examples considered above, the main theme remains the same for higher order elements applied to other elliptic boundary value problems. We hope to bring out the major steps leading to the linear algebraic equations.

Example 4.1 (Revisited). For C⁰-linear elements, the global basis functions are given by the hat functions described in Section 4.2. Since we have imposed the boundary condition on the finite element space V_h, we obtain the global stiffness (or Gram) matrix A = [a_ij]_{1≤i,j≤N} with

    a_ij = ∫₀¹ [a(x) φ_i′(x) φ_j′(x) + a₀ φ_i φ_j] dx = 0   for |i − j| > 1.

Therefore, each row has at most three non-zero entries, i.e., for i = j − 1, j, j + 1 the elements a_ij become

    a_ij = ∫_{x_{j−1}}^{x_{j+1}} [a(x) φ_i′(x) φ_j′(x) + a₀ φ_i φ_j] dx.

On [x_{j−1}, x_{j+1}] we have

    φ_j′     =  1/h_j     on (x_{j−1}, x_j),    −1/h_{j+1}   on (x_j, x_{j+1});
    φ_{j−1}′ = −1/h_j     on (x_{j−1}, x_j),     0           on (x_j, x_{j+1});
    φ_{j+1}′ =  0         on (x_{j−1}, x_j),     1/h_{j+1}   on (x_j, x_{j+1}).

Thus, for j = 2, …, N − 1,

    a_{j−1,j} = − (1/h_j²) ∫_{x_{j−1}}^{x_j} a(x) dx + ∫_{x_{j−1}}^{x_j} a₀(x) ((x_j − x)/h_j)((x − x_{j−1})/h_j) dx,

    a_{j,j}   =   (1/h_j²) ∫_{x_{j−1}}^{x_j} a(x) dx + (1/h_{j+1}²) ∫_{x_j}^{x_{j+1}} a(x) dx
              +   (1/h_j²) ∫_{x_{j−1}}^{x_j} a₀(x)(x − x_{j−1})² dx + (1/h_{j+1}²) ∫_{x_j}^{x_{j+1}} a₀(x)(x_{j+1} − x)² dx,

    a_{j+1,j} = − (1/h_{j+1}²) ∫_{x_j}^{x_{j+1}} a(x) dx + (1/h_{j+1}²) ∫_{x_j}^{x_{j+1}} a₀(x)(x_{j+1} − x)(x − x_j) dx.

When j = 1 or j = N, there are only two nonzero entries: a₁₁, a₂₁ for j = 1, and a_{N−1,N}, a_{N,N} for j = N.

For the right hand side, the load vector F = [f_j]_{j=1,…,N} is given by

    f_j = (f, φ_j) = ∫₀¹ f(x) φ_j(x) dx = ∫_{x_{j−1}}^{x_{j+1}} f(x) φ_j(x) dx
        = ∫_{x_{j−1}}^{x_j} f(x) (x − x_{j−1})/h_j dx + ∫_{x_j}^{x_{j+1}} f(x) (x_{j+1} − x)/h_{j+1} dx.
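As Remark (i) below points out, these entries become especially simple when a ≡ 1 and a₀ ≡ 0 on a uniform mesh. The following minimal sketch (Python) assembles and solves that special case; the simplifying assumptions a(x) ≡ 1, a₀(x) ≡ 0, the uniform mesh, the lumped approximation of the load integrals, and the choice of f are illustrative and not part of the notes.

```python
import numpy as np

def assemble_1d(N, f):
    """Tridiagonal stiffness matrix and load vector for -u'' = f, u(0)=0, u'(1)=0,
    with C^0-linear elements on a uniform mesh (a = 1, a_0 = 0 assumed)."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    A = np.zeros((N, N))                    # unknowns at x_1, ..., x_N (u(0) = 0 imposed)
    b = np.zeros(N)
    for j in range(1, N + 1):               # global node j -> row j - 1
        A[j - 1, j - 1] += 2.0 / h if j < N else 1.0 / h   # a_jj = 2/h inside, 1/h at the last node
        if j > 1:
            A[j - 1, j - 2] += -1.0 / h     # a_{j-1,j}
        if j < N:
            A[j - 1, j] += -1.0 / h         # a_{j+1,j}
        # lumped approximation of int f*phi_j dx ~ f(x_j) * int phi_j dx
        b[j - 1] = f(x[j]) * (h if j < N else h / 2)
    return A, b

f = lambda x: (np.pi**2 / 4) * np.sin(np.pi * x / 2)   # so u(x) = sin(pi*x/2) solves the problem
A, b = assemble_1d(32, f)
alpha = np.linalg.solve(A, b)                          # nodal values u_h(x_1), ..., u_h(x_N)
```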


Remarks.

(i) When a and a₀ are exactly integrable, or a and a₀ are constants, the global stiffness matrix A takes a particularly simple form. In the case a = 1, a₀ = 0 and h = h_j for j = 1, …, N, we obtain a tridiagonal matrix, which is similar to the one derived by a finite difference scheme (replacing the second derivative by a central difference quotient).

(ii) For higher order elements, say C⁰-piecewise cubic polynomials, it is difficult to obtain the global basis functions φ_i. But on a master element it is easy to construct local basis functions. Therefore, the computation of the local stiffness matrix as well as the local load vector is done on this master element and then, using affine transformations, these are obtained for each individual element. The affine transformations are easy to construct and hence make the computation of the local stiffness matrix as well as the local load vector much simpler. Once we have all the local stiffness matrices and local load vectors, we use an assembly procedure: by simply looking at the position of each element in the finite element mesh, the contributions at common vertices are added up. For the second example, we discuss these issues below.

Example 4.2 (Revisited). Since the supports of the Φ_i's involve several elements of T_h, it is difficult to derive these global basis functions, and hence it is not easy to compute the global stiffness matrix A directly. However, we may write

    a(u_h, v_h) = Σ_{K∈T_h} ∫_K (∇u_h · ∇v_h + u_h v_h) dx,

and

    (f, v_h) = Σ_{K∈T_h} ∫_K f(x) v_h(x) dx.

On each element K, let {P_{iK}}_{i=1}^3 denote the three vertices of K and let {Φ_i^K} denote the local basis functions with the property that

    Φ_i^K(P_{jK}) = δ_ij,   1 ≤ i, j ≤ 3.

Set

    u_h^K = u_h|_K = Σ_{i=1}^3 u_{iK} Φ_i^K,

and take v_h^K = Φ_j^K, where the u_{iK} = u_h(P_{iK}) are the unknowns. On substituting element by element, we obtain

    a(u_h, v_h) = Σ_{K∈T_h} Σ_{i,j=1}^3 u_{iK} a_ij^K,

where

    a_ij^K = ∫_K (∇Φ_i^K · ∇Φ_j^K + Φ_i^K Φ_j^K) dx.

Note that on each element we have the local stiffness matrix A^K = (a_ij^K)_{1≤i,j≤3}. Similarly, the local load vector b^K = (b_j^K), 1 ≤ j ≤ 3, is given by

    b_j^K = ∫_K f Φ_j^K dx,   1 ≤ j ≤ 3.

Then, by looking at the position of the elements in the finite element mesh, i.e., by establishing a connection between the global node numbering and the local node numbering (the relation between the {P_j}

CHAPTER 4.

50

ELLIPTIC EQUATIONS

and {PjK : K ∈ Th , 1 ≤ j ≤ 3}, we assemble the contributions of the local stiffness matrices and the local load vectors to obtain the global stiffness matrix and the global load vector. Assembling Algorithm. • Set A = 0. • For K ∈ Th compute AK and bK . • For i, j = 1, 2, 3, compute

aPjK ,PjK = aPiK ,PjK + aK ij

and bPjK = bPjK + bK . Note that the contribution of a vertex, which is common to many elements is added up. In general, computation of local basis function ΦK j may not be an easy task. Specially, for higher order elements like quadratic or cubic polynomial, it is cumbersome to find local basis functions. Therefore, one b with vertices (0, 0), (1, 0), (0, 1) and the affine mapping F : K b 7→ K, K ∈ introduces a master element K 2 Th which is of the form F (b x) = Bb x + d where, B is 2 × 2 matrix and d ∈ IR . Let u bh (b x) = uh (F (b x)) and vh (b b x) = vh (F (b x)) Then Z Z uh vh dx =

K

Further,

b K

u bh vbh det(B) db x.

∇b uh = B T ∇uh , b x F (b x)

where B T = Jacobian of F . Thus, Z Z (B −T ∇b u)(B −T ∇b vh ) det(B) db x. ∇uh · ∇vh dx = b K K

b b it is easy to construct the local basis functions ΦK On element K, i , that is, for the present case,

Now, set

and compute

b b b b b b K K K c2 . c1 , ΦK c1 − c x2 , ΦK ΦK 3 := Φ(0,1) = x 2 := Φ(1,0) = x i := Φ(0,0) = 1 − x

Z

K

u bh (b x) =

3 X i=1

∇uh · ∇vh dx =

b b bh = ΦK uiK ΦK j , j = 1, 2, 3, i and v

3 Z X i=1

b K

b b −T x. ∇ΦK uiK (B −T ∇ΦK j ) det(B) db i )(B

Now, the computation only involves the evaluation of integrals like Z i!j! j i x= c2 db x c1 x . (i + j + 2)! b K

Remarks.

CHAPTER 4.

ELLIPTIC EQUATIONS

51

(i) For simplicity of programming, the integrals are evaluated by using quadrature rules. One such simple rule is Z 3 X b ), f (b x) db x= wi f (PiK b K i=1

where wi ’s are the quadrature weights. Then programming become easier as we have to calculate only b ∂ΦK i b the values of ΦK i and ∂ xb at the vertices of K. j (ii) After performing assembly procedure, we can impose essential boundary conditions and then obtain a reduced nonsingular system Aα = b.

The global stiffness matrix being sparse and large, one may use iterative procedures to solve the system. However, there are direct methods (elimination methods ) like frontal and skyline methods which are used effectively along with the assembling procedure. But, we shall not dwell on this and refer the readers to Johnson [8]. Computation of Order of Convergence. The a priori error bound (the bound obtained prior to the actual computation) in section 4.2 dependes on the unknown solution, provided the exact solution has the required regularity properties. However, it tells us that the approximate solution converges as the mesh is refined. Therefore, we have confidence on the computed solution. Being an asymptotic estimate, i.e., uh converges to the exact solution u in appropriate norm as h 7→ 0, it may happen that after some refinements that is as h ≤ h0 for some h0 , the approximate solution is close to the true solution and this h0 is so small that the computers may consider it to be zero (otherwise very small h0 may give rise to a large system of equations which is virtually difficult to solve). In such situations, the a priori error estimate looses its meaning. Veryoften, the user community is confronted with the question “ How does one get confidence on the computed solution ?” It is costumary to check the computed solutions for various decreasing mesh sizes. If the trend is such that up to some decimal places, the solutions for various mesh sizes remain same, then one has confidence on the computed numbers. But, this may go wrong! Therefore, alongwith this trend if one computes the order of convergence that matches with the theoretical order of convergence, then this may be a better way of ensuing the confidence. To compute the order of convergence, we note that for h1 and h2 with 0 < h2 < h1 ku − uh1 k ≈ C(u)hα 1, and ku − uh2 k ≈ C(u)hα 2. Then the order of convergence α is computed as     h1 ku − uh1 k / log . α ≈ log ku − uh2 k h2 In the abscence of the exact solution, we may replace u by a more refined computed solution.

4.4

Adaptive Methods.

Very often the user is confronted with the following problem

CHAPTER 4.

52

ELLIPTIC EQUATIONS

“Given a tolerance (TOL) and a measurement say a norm k · k, how to find an approximate solution uh to the exact unknown solution u with minimal computational effort so that ku − uh k ≤ T OL ? ” For minimal computational effort, we mean that the computational mesh is not overly refined for the given accuracy. This is an issue in Adaptive methods. It is expected that successful computational algorithms will save substantial computational work for a given accuracy. These codes are now becoming a standard feature of the finite element software. An adaptive algorithm consists of • a stopping criterion which guarantees the error control within the preassigned error tolerance level, • a mesh modification strategy in case the stopping criterion is not satisfied. It is mostly based on sharp a posteriori error estimate in which the error is evaluated in terms of the computed solutions and the given data and a priori error estimate which predicts the error in terms of the exact (unknown) solutions ( the one which we have been discussing uptill now). While the stopping criterion is built solely on the a posteriori error estimate, the mesh modification strategy may depend on a priori error estimates. A posteriori error estimates. We shall discuss a posteriori error estimates first for the example 4.1 with a0 = 0 (for simplicity) and then take up the second example. In order to measure the size of the error e = u − uh , we shall use the following energy norm Z 1 1/2 ′ ′ 2 kvkE ≡ kv ka := a(x)|v (x)| dx . 0

Since a(x) ≥ α0 > 0 and v vanishes at x = 0, the above definition makes sense as a weighted norm. The a priori error estimate discussed in section 4.2 depends on the second derivative of the unknown solution u, where as the a posteriori error estimate, we shall describe below depends on the given data and the computed solution uh . Theorem 4.8 Let u be the exact solution of (4.7) with a0 = 0 and the corresponding Galerkin approximation be denoted by uh . Then, the following a posteriori error estimate holds kekE ≡ ku′ − u′h ka ≤ Ci khR(uh )k a1 , where R(uh ) , the residual on each interval Ii is defined as R(uh )|Ii := f +(au′h )′ and Ci is an interpolation constant depending on the maximum of the ratios (maxIi a/ minIi a) . Proof. Note that using Galerkin orthogonality, we have Z 1 ′ 2 ke ka = ae′ (e − Ih e)′ dx 0

=

Z

1

0

=

Z

0

au′ (e − Ih e)′ dx −

1

f (e − Ih e) dx −

Z

0

1

N Z X i=1

au′h (e − Ih e)′ dx

Ii

au′h (e − Ih e)′ dx.

Now integrate by parts the last term over each interval Ii and use the fact that (e − Ih e)(xi ) = 0, i = 0, 1, · · · , N to obtain ke′ k2a

=

N Z X i=1

Ii

[f + (au′h )′ ](e − Ih e) dx

CHAPTER 4.

53

ELLIPTIC EQUATIONS

=

Z

1

0



R(uh )(e − Ih e) dx

khR(uh )k a1 kh−1 (e − Ih e)ka .

A slight modification of Theorem 4.2 yields an interpolation error in the weighted L2 - norm and thus, we obtain kh−1 (e − Ih e)ka ≤ Ci ke′ ka , where the interpolation constant Ci depends on the maximum of the ratios of (maxIi a/ minIi a) . For the automatic control of error in the energy norm , we shall use the above a posteriori eror estimate and shall examine below an adaptive algorithm. Algorithm. • Chose an initial mesh Th(0) of the mesh size h(0) . • Compute the finite element solution uh(0) in Vh(0) . • Given uh(m) in Vh(m) on a mesh with mesh size h(m) , • Stop, if

Ci kh(m) R(uh(m) )k a1 ≤ T OL,

• otherwise determine a new mesh Th(m) with maximal mesh size h(m+1) such that Ci kh(m) R(uh(m) )k a1 = T OL.

(4.22)

• Continue. By Theorem 4.6, the error is controlled to the T OL in the energy norm, if the stopping criterion is reached with solution uh = uh(m) , otherwise, a new maximal mesh size is determined by solving (4.22). However, the maximality is obtained by the equidistribution of errors so that the contributions of error from each individual elements Ii are kept equal, i.e., the equidistribution give rise to (m)

[a(xi )]−1 (hi

(m)

R(uh(m−1) ))2 hi

=

(T OL)2 , i = 1, 2, · · · , N (m) , N (m)

where N (m) is the number of intervals in Th(m) . In practice , the above nonlinear equation is simplified by replacing N (m) by N (m−1) . With slight modification of our a priori eror estimate in section 4.2, we may rewrite ku − uh kE = ku′ − u′h ka ≤ k(u − Ih u)′ ka ≤ Ci khu′′ ka . Now it is a routine excercise to check that khR(uh )k a1 ≤ CCi khu′′ ka , where C ≈ 1. This indicates that the a posteriori eror estimate is optimal in the same sense as the a priori estimate. As a second example, we recall the Poisson equation with homogeneous Dirichlet boundary condition −∆u = f,

x ∈ Ω and u = 0, x ∈ ∂Ω.

CHAPTER 4.

54

ELLIPTIC EQUATIONS

For applying the adaptive methods, we shall define a mesh function h(x) as h(x) = hK , x ∈ K,where K is a triangle belonging to the family of triangulations Th , see the section 3 for this. With Vh consisting of continuous piecewise linear functions that vanish on the boundary ∂Ω, the Galerkin orthogonality relation takes the form (∇(u − uh ), χ) = 0 ∀χ ∈ Vh . We shall use the following interpolation error which can be obtained with some modifications of the standard interpolation error (see, Eriksson et al. [6] ). Lemma 4.9 There is a positive constant Ci depending on the minimum angle condition such that the piecewise interpolant Ih v in Vh of v satisfies khr Ds (v − Ih v)k ≤ Ci kh2+r−s D2 vk, r = 0, 1, s = 0, 1. Now , we shall derive the a posteriori error estimate in the energy norm. Theorem 4.10 The Galerkin approximation uh satisfies the following a posteriori error estimate for e = u − uh k∇(u − uh )k ≤ Ci khR(uh )k, Where R(uh ) = R1 (uh ) + R2 (uh ) with R(uh )|K = |f + ∆uh |

on K ∈ Th ,

and R2 (uh )|K = max |[ S⊂∂K

Here

h [ ∂u ∂νS ]

∂uh ]|. ∂νS

represents the jump in the normal derivative of the function uh in Vh .

Proof. Using Galerkin orthogonality, we have k∇ek2

= (∇(u − uh ), ∇e) = (∇u, ∇e) − (∇uh , ∇e)

= (f, e) − (∇uh , ∇e) = (f, e − Ih e) − (∇uh , ∇(e − Ih e)).

On integrating by parts the last term over each triangle K , it follows thar X Z X Z ∂uh k∇ek2 = (e − Ih e) ds (f + ∆uh )(e − Ih e) dx + ∂νK K∈Th K K∈Th ∂K X Z X Z ∂uh (e − Ih e)hS ds. = (f + ∆uh )(e − Ih e) dx + h−1 S ∂νK K S K∈Th

S∈Th

An application of interpolation inequality completes the rest of the proof. Note that if all the triangles are more or less isoscale and the trianglulation satisfies the minimum angle condition, then Ci ≈ 1. Compared to one dimensional situation, in the present case we have a contribution fron the element sides S involving the jump in the normal derivative of uh divided by the local mesh size that is R2 . In one dimensional case this term is zero, because the interpolation error vanishes at the nodal points.

Chapter 5

Heat Equation We first introduce the theoretical materials to tackle the time variable in an initial and boundary value problem and then derive a weak formulation for the heat equation. Spatial discretization is performed using finite element spaces discussed in the last chapter and we call this procedure the semidiscrete scheme. Stability as well as convergence analysis are carried out for this method. Since the direct comparision between the exact solution and the Galerkin approximation may not yield optimal error estimates, introduction of a suitable intermediate projection is examined. Finally, we discretize in time by replacing the time derivative by finite difference quotients and discuss a priori error bounds for the completely discrete schemes. For simplicity of exposition, we consider the heat equation. However, most of the analysis goes through for more general second order linear parabolic equations with appropriate modifications. Let Ω be a domain in IRd with smooth boundary ∂Ω, and consider the initial-boundary value problem ut − ∆u = f, x ∈ Ω, t ≥ 0,

(5.1)

with homogeneous Dirichlet boundary condition u = 0, x ∈ ∂Ω, t ≥ 0, and initial condition ∂u ∂t

u(x, 0) = u0 (x), x ∈ Ω,

and ∆ is the Laplace operator. Here, the source term f and the initial function u0 where ut denotes are given functions on their respective domain of definitions.

5.1

Some Vector-Valued Function Spaces.

For functions defined on both space and time like say u(x, t) in (5.1), it is convenient to separate the variables. For each fixed t, consider u(t) as a mapping x 7→ u(x, t), i.e., for each t, u(t) may belong to a function space defined on the spatial domain and hence, it is usually called vector-valued function. Let (a, b) be an interval with −∞ ≤ a < b ≤ ∞, and let X be a Banach space with norm k · kX . For 1 ≤ p ≤ ∞, we denote by Lp (a, b; X) the space of strongly measurable functions φ : (a, b) 7→ X ( that is t 7→ kφ(t)kX is measurable) such that !1/p Z b

kφkLp (a,b;X) :=

a

55

kφ(t)kpX dt

,

CHAPTER 5.

56

HEAT EQUATION

and for p = ∞

kφkL∞ (a,b;X) := ess supt∈(a,b)kφ(t)kX .

When −∞ < a < b < ∞, the space C([a, b]; X) := {φ : [a, b] 7→ X; φ continuous in [a, b]} is a Banach space equipped with the norm kφkC([a,b];X) := max kφ(t)kX . t∈[a,b]

When the interval [a, b] will be the time interval [0, T ], T > 0 fixed, we may conveniently use Lp (X) as Lp (0, T ; X) and C(X) as C(0, T ; X).

5.2

Weak Formulation.

Rewrite (5.1) for each t > 0 as −∆u = −ut + f (x, t), x ∈ Ω.

(5.2)

Now, we shall derive a weak formulation which is similar to the elliptic equation discussed in chapter 3. Let V = H01 (Ω) and form an L2 -innerproduct between (5.2) and v ∈ V . A Use of the Gauss divergence theorem yields (∇u, ∇v) = −(ut , v) + (f (t), v) ∀v ∈ V, t > 0, that is, (ut , v) + (∇u, ∇v) = (f (t), v) ∀v ∈ V, t > 0.

(5.3)

Therefore, the weak form of (5.1) which we plan to use for the Galerkin formulation is defined as a function u(t) ∈ V , for t > 0 such that u satisfies the equation (5.3) and initial condition (u(0), v) = (u0 , v) ∀v ∈ V. Diagonalization method– a step towards existence. We briefly describe the diagonalization process for the heat equation (5.1). Now, attach with (5.1), an eigenvalue problem as −∆v = λv,

x ∈ Ω,

(5.4)

with homogeneous Dirichlet boundary condition x ∈ ∂Ω.

v = 0,

Since −∆ is self-adjoint and positive definite ( look at the bilinear form, it is symmetric and coercive), it has countable number of eigenvalues {λi }∞ i=1 with 0 < λ1 ≤ · · · ≤ λi ≤ · · · and λi 7→ ∞ as i 7→ ∞. For a proof, we may refer to Kesavan [9]. Let {φi } be the corresponding orthonormal eigenvectors , i.e., −∆φi = λi φi , x ∈ Ω and φi = 0, x ∈ ∂Ω. Being an orthogonal basis in L2 or in V , we write u(x, t) =

∞ X

αi (t)φi (x),

i=1

where the Fourier coefficients αi = (u, φi ) are unknowns. Choose v = φi in (5.3) to obtain (ut , φi ) + (∇u, ∇φi ) = (f (t), φi ) t > 0.

CHAPTER 5.

57

HEAT EQUATION

Once more, we use the Gauss divergence theorem to the second term on the right hand side and find that (∇u, ∇φi ) = (u, −∆φi ) = λi (u, φi ). Thus,

d αi (t) + λi αi (t) = fi (t), i = 1, 2, · · · , dt with (u(0), φi ) = u0i . Here, fi = (f, φi ) and u0i = (u0 , φi ). Formally with αi (0) = u0i , we have an infinite system of ordinary differential equations in αi . Using variation of constant formula, we obtain Z t αi (t) = αi (0)e−λi t + e−λi (t−s) fi (s) ds. 0

Now it is possible to write down formally the solution u as u(x, t) =

∞ X i=0

αi (t)φi (x) =

∞ X

αi (0)e

−λi t

φi +

∞ Z X i=0

i=1

t

e−λi (t−s) fi (s)φi ds.

0

There are some basic questions which we have not dealt with ( but we have done all the calculation formally) like the convergence of the corresponding Fourier series to u and to u0 . More importantly, lim

t7→0

∞ X

αi (t)φi .

i=1

Here, the mode of convergence will detect the type of solutions, we are looking for. However, the above formal analysis gives us a procedure not only for existential analysis, but also for computing the approximate solution. If we consider the finite dimensional space Vk spanned by the first k eigenvectors, and pose the weak formulation on this space, the corresponding k-system of ODEs forms a base for the spectral Galerkin method, i.e., if we know the eigenvectors, we can compute the approximate solution by solving the scalar ordinary differential equation in each component. Remark. When f = 0, we can associate with homogeneous equation, a solution operator E(t) as u(x, t) = E(t)u0 =

∞ X

e−λi t (u0 , φi )φi .

(5.5)

i=1

More over, using Parseval’s identity we have ku(t)k ≤ e−λ1 t ku0 k, where λ1 is the minimum eigenvalue of the Laplacian with homogeneous Dirichlet boundary condition. This shows that the solution tends to zero exponentially as t 7→ ∞. In case f 6= 0, we write using variation of constant formula Z t u(x, t) = E(t)u0 + E(t − s)f (s) ds.

(5.6)

0

The analysis in this section can be easily extended to any self-adjoint positive definite operator in stead of −∆.

5.3

Semidiscrete Galerkin Approximation.

We now turn to the standard Galerkin finite element method for the heat equation (5.1). In the first step, we like to approximate u(x, t) by means of a function uh (x, t) which, for each fixed t, belongs to

CHAPTER 5.

58

HEAT EQUATION

a finite-dimensional linear space Vh of functions of x with approximation property (4.17) described in section 4. This function is defined as the solution of an initial value problem for a finite system of ordinary differential equations and is referred to as a semidiscrete solution of our problem. In the next section, we also discretize the time variable by replacing the time derivative by difference quotients to produce completely discrete schemes. For the purpose of defining the Galerkin approximation, we then pose the approximate problem to find uh (t) ∈ Vh for each t ≥ 0, such that (uh,t , χ) + (∇uh , ∇χ) = (f, χ), ∀χ ∈ Vh , t ≥ 0,

(5.7)

uh (0) = u0h , where u0h is some approximation of u0 in Vh . h If {Φj }N is a basis for Vh , the semidiscrete problem may be stated as : Find the coefficient αi (t) in 1

uh (x, t) =

Nh X

αi (t)Φi (x)

i=1

such that

Nh X i=1

α′i (t)(Φi , Φj ) +

Nh X i=1

αi (t)(∇Φi , ∇Φj ) = (f, Φj ), j = 1, ..., Nh , t ≥ 0,

and, with βj the components of the given initial approximation u0h = αi (0) = βi ,

i = 1, ..., Nh .

PNh

i=1

βi Φi (x),

In matrix notation, it may be expressed as Aα′ (t) + Bα(t) = F (t), t > 0, with α(0) = β, where A = [aij ]1≤i,j≤Nh is the mass matrix with elements aij = (Φi , Φj ), B = [bij ]1≤i,j≤Nh , the stiffness matrix with bij = (∇Φi , ∇Φj ), F = [fj ]1≤j≤Nh the vector with entries fj = (f, Φj ), α(t) the vector of unknowns αi (t), and β = (βj ). The dimension of all these arrays equals Nh which is the dimension of Vh . Since Φi ’s are linearly independent, the mass matrix A is invertible. Therefore, the above system of ordinary differential equations may be written as ˜ α′ (t) + Bα(t) = F˜ , t > 0, with α(0) = β, ˜ = A−1 B and F˜ = A−1 F . Using variation of constant formula, we write down the solution as where B Z t ˜ ˜ α(t) = e−Bt α(0) + e−B(t−s) F˜ (s) ds, 0

that is, the above system has a unique solution α(t) for all t > 0. When Vh consists of piecewise polynomial functions, the elements of the matrices A and B may be calculated exactly and once for all. Again, the elements (f, Φj ) of F have to be computed by some quadrature formula, unless f has a particularly simple form. We now derive the stability estimates in energy norms and then use these results to discuss optimal error bounds. Below, we prove a stability result in term of the first energy norm Z t 2 2 k|φ(t)|k = kφ(t)k + k∇φ(s)k2 ds, t ≥ 0. 0

CHAPTER 5.

59

HEAT EQUATION

Theorem 5.1 The semidiscrete solution uh of (5.7) is stable with respect to the first energy norm in the sense that Z t kf (s)k ds, t > 0. k|uh (t)|k ≤ kuh (0)k + 2 0

Proof. Choosing χ = uh in (5.7), we note that (uht , u) =

1 d kuh k2 , 2 dt

and hence, we obtain

1 d kuh k2 + k∇uh k2 ≤ kf kkuhk. 2 dt On integration in time from 0 to t, it follows that Z t k|uh (t)|k2 ≤ kuh (0)k2 + kf (s)kkuh(s)k ds. 0

Letting k|uh (t∗ )|k = max k|uh (s)|k 0≤s≤t



for some t ∈ [0, t], we obtain 2

k|uh (t)|k ≤ [kuh (0)k +

Z

t

0

kf (s)k ds] k|uh(t∗ )|k.

Taking maximum over [0, t], it is easy to see that Z t Z t∗ kf (s)k ds]. kkf (s)k ds] ≤ [kuh (0)k + k|uh (t∗ )|k ≤ [kuh (0)k + 0

0

Since k|uh (t)|k ≤ k|uh (t∗ )|k, the result follows. The stability result in the second energy norm k|φ(t)|k21

2

= k∇φ(t)k +

Z

0

t

kφt (s)k2 ds, t ≥ 0

is useful in deriving the error estimate in H 1 -norm. Theorem 5.2 Let uh be a solution of the semidiscrete equation (5.7). Then the following stability result holds Z t 1/2 2 k|uh (t)|k1 ≤ k∇uh (0)k + 2 kf (s)k ds , t > 0. 0

Proof. Choose χ = uht in (5.7) to obtain kuht k2 +

1 d k∇uh k2 ≤ kf kkuhtk. 2 dt

On integration with respect to the temporal variable, we find that k|uh (t)|k21

Z t 1/2 Z t 1/2 ≤ k∇uh (0)k2 + 2 kf (s)k2 ds kuht (s)k2 ds 0 0 " Z t 1/2 # ≤ k∇uh (0)k + 2 kf (s)k2 ds max k|uh (s)|k1 , 0

0≤s≤t

CHAPTER 5.

60

HEAT EQUATION

and this completes the rest of the proof as in Theorem 5.1. Let us represent the error between the exact solution u and the semidiscrete Galerkin approximation uh as e := u − uh . Now, a direct comparision between u and uh yields an error equation in e (et , χ) + (∇e, ∇χ) = 0

∀χ ∈ Vh , t > 0.

(5.8)

Since χ ∈ Vh and e ∈ V , so for deriving some estimates using energy arguments, we now split the error using an intermediate solution or projection say uˆh ∈ Vh as u − uh = (u − u ˆh ) + (ˆ uh − uh ) = η + θ. You may be curious to know ‘How do we choose the intermediate solution u ˆh ∈ Vh ?’ It is quite clear that the choice of u ˆh should be such that the estimate of η becomes easier. Therefore, CAN we choose u ˆh = Ih u, where Ih u is an interpolant of u in Vh ? With this choice, we observe that the estimate of kηk ≤ Chr kukr and k∇ηk ≤ Chr−1 kukr , see chapter 4. Then, the equation in θ may be written as (θt , χ) + (∇θ, ∇χ) = −(ηt , χ) − (∇η, ∇χ) ∀χ ∈ Vh , t > 0. (5.9) Since θ ∈ Vh , set χ = θ in (5.9) and obtain

1 d kθk2 + k∇θk2 2 dt



kηt kkθk + k∇ηkk∇θk



C[kηt k + k∇ηk]k∇θk 1 C[kηt k2 + k∇ηk2 ] + k∇θk2 . 2



Here, we have used the Poincar´e inequality and the inequality ab ≤ a2 + 14 b2 , a, b ≥ 0. On integration with respect to time, it now follows that Z t Z t kθ(t)k2 + k∇θ(s)k2 ds ≤ kθ(0)k2 + C (kηt (s)k2 + k∇η(s)k2 ) ds. 0

0

Since, Ih commutes with time differentiation, it is easy to verify that kηt k = kut − Ih ut k ≤ Chr kut kr and with uh (0) = Ih u0 , θ(0) = 0. Thus, Z t Z t Z t kθ(t)k2 + k∇θ(s)k2 ds ≤ C[h2r kut (s)k2r ds + h2(r−1) ku(s)k2r ds]. 0

0

0

The estimate of kθ(t)k = O(h(r−1) ) which is not optimal. The loss of one power of h is due to the presence of the term (∇η, ∇χ) in (5.9). Therefore, a proper choice of an intermediate projection or solution u ˆh ∈ Vh could be for which (∇(u − uˆh ), ∇χ) = 0

∀χ ∈ Vh ,

(5.10)

that is, uˆh is the Ritz or elliptic projection of u onto Vh . Note that the following estimates of η have been derived in Chapter 4: kηk + hk∇ηk ≤ Chr kukr , u ∈ H r ∩ H01 . (5.11) Differentiating the equation (5.10) with respect to time, we obtain similarly kηt k ≤ Chr kut kr .

(5.12)

Essentially, what has been done here is that the elliptic part is projected onto Vh to find the intermediate solution u ˆh in Vh . The concept of elliptic projection was first proposed and analyzed for the parabolic equation by M. F. Wheeler [14] in her seminal paper. Below, we derive the convergence results using both stability estimates.

CHAPTER 5.

61

HEAT EQUATION

Theorem 5.3 Let u and uh , respectively, be the solutions of (5.1) and (5.7). Then, Z t kuh (t) − u(t)k ≤ ku0 − u0h k + Chr {ku0 kr + kut (s)kr ds}, t ≥ 0. 0

Proof. Since u − uh = η + θ and the estimate of η is known from (5.11), it is enough to prove the L2 -estimate of θ. The equation in θ becomes (θt , χ) + (∇θ, ∇χ) = −(ηt , χ) ∀χ ∈ Vh , t > 0.

(5.13)

An appeal to stability Theorem 5.1 with uh replaced by θ and f by -ηt yields Z t kθ(t)k ≤ k|θ(t)|k ≤ kθ(0)k + kηt k ds. 0

Note that kθ(0)k = kˆ uh (0) − u0h k ≤ ku0 − u ˆh (0)k+ ≤ ku0 − u0h k ≤ ku0 − u0h k + Chr ku0 kr , and This completes the rest of the proof.

kηt k = kut − u ˆht k ≤ Chr kut kr .

It is to be noted that in the above proof, we have made use of the non-negativity of k∇θk2 . Now with an application of k∇θk2 ≥ λ1 kθk2 , θ ∈ Vh , we show, below, the effect of the initial data on the error for large t. Here, λ1 is the minimum eigenvalue of −∆ with homogeneous Dirichlet boundary condition. Thus, 1 d kθk2 + λ1 kθk2 ≤ kηt k · kθk, 2 dt and therefore,

d kθk + λ1 kθk ≤ kηt k. dt On integration with respect to time, we find that Z t e−λ1 (t−s) kηt (s)k ds kθ(t)k ≤ e−λ1 t kθ(0)k + 0



e

−λ1 t

r

ku0 − u0h k + Ch {e

−λ1 t

ku0 kr +

Z

t

0

e−λ1 (t−s) kut (s)kr ds}.

Since kη(t)k ≤ Chr ku(t)kr ,

we infer, for a suitable choice of u0h , that

kuh (t) − u(t)k ≤ Chr {e−λ1 t ku0 kr + ku(t)kr +

Z

0

t

e−λ1 (t−s) kut (s)kr ds}.

This estimate is valid for all t > 0. Assuming quasi-uniformity on the triangulation Th , we may obtain an estimate for the error in the gradient by means of the “inverse inequality” (4.19) from the result of the previous Theorem. In fact, using interpolation Ih of u, we may split the error ∇e as k∇e(t)k = k∇u(t) − uh (t)k ≤ k∇u(t) − ∇Ih u(t)k + k∇Ih u(t) − ∇u(t)k,

CHAPTER 5.

62

HEAT EQUATION

and using inverse hypothesis (4.19) with interpolation error (4.18) (s = r), we obtain k∇ek ≤ ≤ ≤

Ch−1 {kIh u(t) − uh (t)k + hk∇(u(t) − Ih u(t)k} Ch−1 ku(t) − uh (t)k + Ch−1 {ku(t) − Ih u(t)k + hk∇u(t) − ∇Ih u(t)k} Ch−1 ku(t) − uh (t)k + Chr−1 ku(t)kr .

Therefore, by Theorem 5.3 and with an appropriate choice of u0h , the following estimate Z t kut (s)kr ds) k∇uh (t) − ∇u(t)k ≤ Chr−1 (ku0 kr + 0

holds. However, without using quasi-uniform assumption on the finite element mesh, it is possible to obtain an estimate for the gradient of the error. This is what we are going to discuss below. Theorem 5.4 Under the assumptions of Theorem 5.3, there is a constant C independent of h such that the following estimate holds true for t ≥ 0 Z t r−1 k∇uh (t) − ∇u(t)k ≤ Ck∇u0 − ∇u0h k + Ch {ku0 kr + ku(t)kr + ( kut k2r−1 ds)1/2 }. 0

Proof. As in the previous Theorem, it is enough to estimate the gradient of θ. Apply the second stability Theorem 5.2 to (5.13) with replacing uh by θ and f by −ηt to find that k∇θ(t)k

≤ k|θ(t)|k1 ≤ k∇θ(0)k +

Z

0

t

kηt k2 ds

≤ (k∇(u0 − u0h )k + k∇η(0)k) +

Z

t

0

1/2

kηt k2 ds

1/2

.

Using (5.11)–(5.12), it follows that k∇θ(t)k ≤ Ck∇(u0 − u0h )k + Ch

r−1

{ku0 kr +

Z

t 0

kut k2r−1 ds

1/2

},

and this completes the proof. Note that if u0h = Ih u0 , or u0h as the elliptic projection of u0 , then k∇(u0 − u0h )k ≤ Chr−1 ku0 kr , so that the first term on the right hand side in Theorem 5.4 is dominated by the second. Remark. Choose u0h as the Ritz or elliptic projection of u0 so that θ(0) = 0 in the proof of Theorem 5.4. Then, it is easy to check that Z t Z t 2 1/2 r k∇θ(t)k ≤ C( kηt k ds) ≤ Ch ( kut k2r ds)1/2 . (5.14) 0

0

Hence, the gradient of θ is of order O(hr ), whereas the gradient of the total error is only O(hr−1 ), for small h. Therefore, ∇uh is a better approximation to ∇ˆ uh than is possible to ∇u. This is an example of a phenomenon which is sometimes called in the literature as superconvergence. As an application of the superconvergence result (5.14), we indicate briefly how it may be used to show an almost optimal order error bound in the maximum-norm. With Ω ⊂ IR2 convex and bounded,

CHAPTER 5.

63

HEAT EQUATION

Vh consisting of piecewise linear functions (d = r = 2) on a quasi-uniform triangulation Th of Ω, we have from Theorem 4.5, 1 kη(t)kL∞ (Ω) ≤ Ch2 log ku(t)kW 2,∞ (Ω) . h In two dimensions, the Sobolev’s inequality becomes kukL∞ (Ω) ≤ Cǫ kukH 1+ǫ (Ω) , for ǫ > 0. This one almost bounds the maximum-norm by the norm in H 1 (Ω). It is possible to show using an inverse assumption of a type similar to (4.19), that kχkL∞ (Ω) ≤ C(log Since θ ∈ Vh , apply (5.14) (with r = 2) to obtain kθ(t)kL∞ (Ω) ≤ C(log

1 1/2 ) k∇χk h

∀χ ∈ Vh .

Z 1 1/2 2 t kut k22 ds)1/2 . ) h ( h 0

Thus, the maximum error satisfies 1 kuh (t) − u(t)kL∞ (Ω) ≤ kη(t)kL∞ (Ω) + kθ(t)kL∞ (Ω) ≤ C(u, t)h2 log . h

5.4

Completely Discrete Schemes.

In this section, we discuss the discretization in the time variable by replacing the time derivative by difference quotient. Let us begin with the simplest backward Euler-Galerkin method. Let k be the time step and U n be the approximation in Vh of u(t) at t = tn = nk. For a function φ(t), define φn = φ(tn ) and set the backward difference quotient as (φn − φn−1 ) . ∂¯t φn = k The backward Euler approximation is determined as a function U n ∈ Vh satisfying (∂¯t U n , χ) + (∇U n , ∇χ) = (f (tn ), χ)

∀χ ∈ Vh , n ≥ 1,

(5.15)

U 0 = u0h . Given U n−1 , the function U n is defined implicitly by solving the elliptic finite element problem (U n , χ) + k(∇U n , ∇χ) = (U n−1 + kf (tn ), χ),

∀χ ∈ Sh , n ≥ 1.

With notations analogous to the semidiscrete case, the above problem may be written in vector-matrix form (A + kB)αn = Aαn−1 + kF (tn ), n ≥ 1,

where A + kB is positive definite . Since the matrix A + kB is invertible, the problem (5.15) has a unique solution for a given U n−1 .

As in the semidiscrete case , we shall first establish some stability results with respect to the following two discrete energy norms for the fully discrete approximation U n . Set the first discrete energy norm as k|U n |k2 = kU n k2 + k

n X

m=1

and the second energy norm k|U n |k21 = k∇U n k2 + k

k∇U m k2 ,

n X

m=1

k∂¯t U m k2 .

CHAPTER 5.

64

HEAT EQUATION

Theorem 5.5 Let U n be a solution on (5.15). Then the discrete solution is stable for J = 1, 2, · · · , in the following sense J X k|U J |k ≤ kU 0 k + k kf n k, n=0

and

J

0

k|U |k1 ≤ k∇U k +

k

J X

n=0

n 2

kf k

!1/2

.

Proof. Setting χ = U n in (5.15), we note that (∂¯t U n , U n ) =

1¯ k 1 ∂t kU n k2 + k∂¯t U n k2 ≥ ∂¯t kU n k2 , 2 2 2

and hence,

1¯ ∂t kU n k2 + k∇U n k2 ≤ kf n k · kU n k. 2 Multiplying by 2k and summing from n = 1 to J, it now follows that kU J k2 + 2k

J X

n=1

J X

k∇U n k2 ≤ kU 0 k2 + 2k

n=1

kf n k · kU n k.

As in semidiscrete case, we set k|U M |k = max k|U n |k, for some M, 0 ≤ M ≤ J. 0≤n≤J

Now, J

k|U |k ≤ k|U

M

0

|k ≤ kU k + 2k

and this completes the first part of the proof.

J X

n=0

kf n k,

For the second estimate, we proceed exactly similarly, now with χ = ∂¯t U n . This completes the remaining part of the proof. Using the above mentioned stability estimates, we prove the following Theorem. Theorem 5.6 Let u and U n , respectively, be the solution of (5.1) and (5.15). Then, there is a constant C independent of h and k such that Z tJ Z tJ J r kutt kds, J ≥ 0 kut kr ds} + k ku(tJ ) − U k ≤ ku0 − u0h k + Ch {ku0 kr + 0

0

holds. Proof. Splitting the error as u(tJ ) − U J = θJ + η J , we obtain kη J k ≤ Chr ku(tJ )kr ≤ Chr {ku0 kr + This time, the corresponding error equation in θ becomes (∂¯t θn , χ) + (∇θn , ∇χ) = −(τ n , χ),

Z

0

tJ

kut kr ds}.

∀χ ∈ Vh , n ≥ 1,

(5.16)

CHAPTER 5.

65

HEAT EQUATION

where

τ n = ∂¯t η(tn ) + (∂¯t u(tn ) − ut (tn )) = τ1n + τ2n .

An appeal to the first stability estimate in Theorem 5.5 yields kθJ k ≤ k|θJ |k ≤ kθ0 k + 2k

J X

n=1

kτ n k ≤ kθ0 k + k

J X

n=1

kτ1n k + k

J X

n=1

kτ2n k.

As before, we estimate kθ0 k ≤ ku0 − u0h k + Chr ku0 kr . Note that τ1n = k −1 and hence, k

J X

n=1

kτ1n k



J Z X

tn

tn

ηt ds,

tn−1

r

tn−1

n=1

Z

Ch kut kr ds = Ch

r

Z

tJ

kut kr ds.

0

More over, apply Taylor’s expansion so that τ2n = (u(tn ) − u(tn−1 ))/k − ut (tn ) = −k −1 and thus, k

J X

n=1

kτ2n k

Z J X ≤ k n=1

Z

tn

tn−1

(s − tn−1 )utt (s) ds,

tn

tn−1

(s − tn−1 )utt (s) dsk ≤ k

Z

0

tJ

kutt k ds.

Altogether, we complete the proof of the Theorem. Remark. The second stability estimate can be used to find out the superconvergence result in k∇θJ k. Then it is easy to obtain the maximum norm estimate for the total error. But due to the presence of the time step k, the order of convergence for two dimensional bounded, convex domain and for C 0 -piecewise linear elements becomes 1 ku(tJ ) − U J kL∞ (Ω) ≤ C(u) log (h2 + k). h Because of the non-symmetric choice of the discretization in time, the backward Euler-Galerkin method is only first order accurate in the temporal direction. In order to obtain higher order accuracy in time, we now turn to the Crank-Nicolson Galerkin method which is symmetric and second order in time. We now discretize the semidiscrete equation in a n n−1 ) symmetric manner around the point tn−1/2 = (n − 1/2)k. Setting φn−1/2 as (φ +φ , determine U n in 2 Vh , recursively, by (∂¯t U n , χ) + (∇U n−1/2 , ∇χ) = (f (tn−1/2 ), χ) ∀χ ∈ Vh , n ≥ 1,

(5.17)

U 0 = u0h . In matrix form, we write the equation in U n as 1 1 (A + kB)αn = (A − kB)αn−1 + kF n−1/2 . 2 2 Here, the matrix A + 21 kB is positive definite and therefore, there exists a unique solution to the problem (5.17).

CHAPTER 5.

66

HEAT EQUATION

For the stability, we set k|U n−1/2 |k2 = kU n k2 + k

n X

k∇U m−1/2 k2 .

m=1

The related stability in the first energy norm is proved in the following Theorem. Theorem 5.7 Let U n be a solution of (5.17). Then, for J = 1, 2, · · · , k|U J−1/2 |k ≤ kU 0 k + k

J X

n=1

kf n−1/2 k.

Proof. Letting χ = U n−1/2 in (5.17) and noting that 1¯ ∂t kU n k2 , 2

(∂¯t U n , (U n + U n−1 )/2) = we now obtain

1¯ ∂t kU n k2 + k∇U n−1 k2 ≤ kf n−1/2 k · kU n−1/2 k. 2 Multiplying by 2k and summing from n = 1 to J, we find that kU J k2 + 2k

J X

n=1

k∇U n−1/2 k2 ≤ kU 0 k2 + k

J X

n=1

kf n−1/2 k · (kU n k + kU n−1 k).

Setting for some M with 1 ≤ M ≤ J so that

  k|U M−1/2 |k = max kU 0 k, max k|U n−1/2 |k , 1≤n≤J

and following the proof of the semidiscrete case, we obtain the required estimate. Using the above stability result, we shall prove the following result. Theorem 5.8 With U n and u the solutions of (5.17) and (5.1), respectively, we have, for J > 0, ku(tJ ) − U J k ≤ ku0 − u0h k + Chr {ku0 kr +

Z

tJ 0

kut kr ds} + Ck 2

Z

tJ 0

(kuttt k + k∆utt k) ds.

Proof. It is sufficient to obtain an estimate for θJ . The error equation in θn may be written as (∂¯t θn , χ) + (∇θn−1/2 , ∇χ) = −(τ n−1/2 , χ), ∀χ ∈ Vh , n ≥ 1,

(5.18)

where τ n−1/2

= +

∂¯t η(tn ) + (∂¯t u(tn ) − ut (tn−1/2 )) 1 n−1/2 n−1/2 n−1/2 + τ2 + τ3 . ∆(u(tn−1/2 ) − (u(tn ) + u(tn−1 )) = τ1 2

An appeal to Theorem 5.5 yields kθJ k ≤ k|θJ−1/2 |k ≤ kθ0 k + k

J X

n−1/2

(kτ1

n=1

n−1/2

k + kτ2

n−1/2

k + kτ3

k).

CHAPTER 5.

67

HEAT EQUATION

Since θ0 may be estimated as in the Theorem 5.6, it remains to bound the latter sum. As before, we have Z tn n−1/2 r −1 ¯ kut kr ds. kτ1 k = k∂t η(tn )k ≤ Ch k tn−1

More over, using Taylor’s expansion we obtain n−1/2

kτ2

k = = ≤

k∂¯t u(tn ) − ut (tn−1/2 )k Z tn Z tn−1/2 1 −1 2 (s − tn−1 )2 uttt (s) dsk (s − tn−1 ) uttt (s) ds + k k 2 tn−1/2 tn−1 Z tn kuttt k ds. Ck tn−1

Again an application of Taylors expansion yields n−1/2 kτ3 k

1 = k∆(u(tn−1/2 ) − (u(tn ) + u(tn−1 )))k ≤ Ck 2

Z

tn

tn−1

k∆utt k ds.

On substitution, we find that k

J X

n−1/2

(kτ1

n=1

n−1/2

k + kτ2

n−1/2

k + kτ3

k) ≤ Chr

Z

0

tJ

kur kr ds + k 2

Z

0

tJ

(kuttt k + k∆utt k) ds,

and this completes the rest of the proof.

Note. For excellent exposition on the evolution equations, we may refer to Dautray and Lions [5]. The material of this chapter is greately influenced by Thom´ee [13]. For further reading on the finite element approximations to parabolic initial and boundary value problems, see Thom´ee [12].

Bibliography [1] R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975. [2] S. Agmon, Lectures on Elliptic Boundary Value Problems, New York, 1965. [3] K. E. Brennan and R. Scott, The Mathematical Theory of Finite Element Methods, Springer Verlag, Berlin, 1994. [4] P. G. Ciarlet, The Finite Element Method for Elliptic Problems, North Holland, Amsterdam, 1978. [5] R. Dautray, and J. L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology, Vol. 1–6, Springer Verlag, Berlin, 1993. [6] K. Eriksson, and C. Johnson, Adaptive finite element methods for parabolic problems. I: A linear Model problem SIAM J. Numer. Anal., 43–77, 1991. [7] K. Eriksson, D. Estep, P. Hanso, and C. Johnson, Introduction to daptive finite element methods for differential equations Acta Numerica , 105–158, 1995. [8] C. Johnson, Numerical Solution of Partial Differential Equations by Finite Element Methods Cambridge University Press, Cambridge, 1987. [9] S. Kesavan Topics in Functional Analysis and Applications, Wiley Eastern Ltd. , New Delhi, 1989. [10] B. Mercier, Lectures on Topics in Finite Element Solution of Elliptic Problems, TIFR Lectures on Mathematics, Vol. 63, Narosa Publi., New Delhi, 1979. [11] R. Temam, Navier Stokes Equation Studies in Mathematics and Its Applications, Vol. 2, North Holland, (Revised Edition) 1984. [12] V. Thom´ee Galerkin Finite Element Methods for Parabolic Problems, Lecture Notes in Mathematics No. 1054, Springer Verlag, 1984. [13] V. Thom´ee Lectures on Approximation of Parabolic Problems by Finite Elements, Lecture Notes No. 2, Department of Mathematics, Indian Institute of Technology, Bombay (India), 1994. [14] M. F . Wheeler A priori L2 error estimates for Galerkin approximations to parabolic partial differential equations, SIAM J. Numer. Anal., 10, 723–759, 1973.

68