Numerical Solution of the Cauchy Problem for the Helmholtz Equation

Fredrik Berntsson∗, Vladimir A. Kozlov∗, Lydie Mpinganzima∗†, Bengt Ove Turesson∗

April 2, 2014
Abstract

The Cauchy problem for the Helmholtz equation appears in applications related to acoustic or electromagnetic wave phenomena. The problem is ill–posed in the sense that the solution does not depend on the data in a stable way. In this paper we give a detailed study of the problem. Specifically, we investigate how the ill–posedness depends on the shape of the computational domain and on the wave number. Furthermore, we give an overview of standard techniques for dealing with ill–posed problems and apply them to the problem.
1. Introduction

The Helmholtz equation arises in applications related to acoustic and electromagnetic wave phenomena. In the case of acoustic waves, the solution u describes the pressure and the wave number is given by k = ω/c, where ω > 0 is the frequency and c is the speed of sound. For electromagnetic waves, the wave number k is given in terms of the electric permittivity ε and the magnetic permeability µ by k = ω√(εµ); see [2, 6]. The Cauchy problem for the Helmholtz equation has applications in optoelectronics [11] and in the characterization of sound sources [8, 13].

Let Ω ⊂ R² be a simply connected domain with a piecewise smooth boundary Γ. The boundary is divided into three parts, Γ1, Γ2 and Γ3. The solution is unknown on Γ1, and both function values and normal derivatives are available on Γ2. Either function values or normal derivatives are available on Γ3, i.e., on the rest of Γ. It is assumed that Γ1 is separated from Γ2. Note that we do not consider the case where Γ = Γ1 ∪ Γ2. The domain is illustrated in Figure 1.

∗ Department of Mathematics, Linköping University, SE-58183 Linköping, Sweden. Emails: [email protected], [email protected], [email protected], [email protected]
† Department of Applied Mathematics, University of Rwanda, Huye Campus, Box 56 Butare, Rwanda. Email: [email protected]
Figure 1: The domain Ω. The boundary consists of Γ1, where the function is unknown, Γ2, where Cauchy data is available, and Γ3 = Γ′3 ∪ Γ′′3, where Dirichlet data is available. The boundary pieces Γ1 and Γ2 are separated from each other.

The Cauchy problem for the Helmholtz equation can be formulated as follows: Find u ∈ C²(Ω) ∩ C¹(Ω̄) such that

    ∆u + k²u = 0,   in Ω,
    u = g,          on Γ2,
    ν · ∇u = 0,     on Γ2,        (1)
    u = 0,          on Γ3,

where k is the wave number, ν denotes the outward unit normal, and [u, ν · ∇u] = [g, 0] is the Cauchy data, available only on a part of the boundary Γ. We remark that the case of a non–zero normal derivative ν · ∇u = h on Γ2, or a non–zero function value u on Γ3, can be reduced to the above problem by first solving a well–posed boundary value problem and using the linearity of the equation.

The Cauchy problem (1) has been studied previously by many authors; see for instance [1, 11, 15, 16]. The problem is ill–posed in the sense that a small perturbation in the Cauchy data can cause an arbitrarily large change in the solution. This means that in the presence of measurement noise, or even computational errors, special regularization methods have to be used in order to find a stable solution to problem (1).

It is often useful to reformulate the Cauchy problem (1) as a linear operator equation. In order to do this we replace the boundary condition u|Γ2 = g in the above equation by the condition u|Γ1 = f. The resulting mixed boundary value problem is well–posed and can be used to define an operator K : f ↦ g = u|Γ2. This is useful since, after discretization, the operator K is approximated by a matrix, and standard linear algebra techniques for solving ill–conditioned linear systems, such as the truncated singular value decomposition (see [4]), can be used to solve the Cauchy problem efficiently and in a stable way. A variety of iterative methods for solving ill–conditioned linear systems can also be used.

The aim of this paper is to investigate the performance of standard regularization techniques when applied to problem (1). The structure of the paper is as follows: In Section 2 we reformulate problem (1) as a linear operator equation. We also use the singular value decomposition to provide a definition of the degree of ill–posedness of the formulated operator equation. In Section 3 we
give details of the finite difference implementation for solving a mixed–boundary value problem for the Helmholtz equation. We explain how to use the finite difference code to compute a matrix approximation of the linear operator. The degree of ill–posedness of the linear operator depends strongly on the size and shape of the domain Ω. This is studied further in Section 4. In Section 5 we illustrate the use of standard regularization techniques for solving the Cauchy problem for the Helmholtz equation. Both direct and iterative methods are considered. Finally we discuss in Section 6 our numerical results and draw some conclusions.
2. An ill–posed operator equation

In this section we reformulate the Cauchy problem (1) as a linear operator equation. In order to do this, we replace the boundary condition u|Γ2 = g in (1) by the condition u|Γ1 = f. The resulting mixed boundary value problem is well–posed and can thus be used to define an operator K : f ↦ g = u|Γ2. This is useful since, after discretization, the operator K is approximated by a matrix, so that standard linear algebra techniques, such as the truncated singular value decomposition, can be used to solve the Cauchy problem efficiently in a stable way.
2.1. The operator equation

As previously, let Ω ⊂ R² be a domain with a piecewise smooth boundary Γ such that Γ = Γ1 ∪ Γ2 ∪ Γ3. Consider the following mixed boundary value problem for the Helmholtz equation: Find u ∈ C²(Ω) ∩ C¹(Ω̄) such that

    ∆u + k²u = 0,   in Ω,
    u = f,          on Γ1,
    ν · ∇u = 0,     on Γ2,        (2)
    u = 0,          on Γ3.

The above problem is well–posed and defines an operator

    Kf = g,        (3)
where g = u|Γ2 and u is the unique solution of problem (2). The operator K is well–defined. The operator K is compact, provided that appropriate function spaces are introduced, and the solution to the operator equation (3) can be written in terms of the singular system {σn, un, vn}, n = 1, 2, .... It holds that

    f = Σ_{n=1}^{∞} ((un, g)/σn) vn,        (4)

where {σn, un, vn} is the singular system of the operator K. The singular values σn decrease monotonically towards zero. Hence the formula (4) is numerically unstable: small perturbations in the coefficients (un, g) can cause drastic changes in the computed solution f because they are divided by the small numbers σn. This observation leads us to the following definition [3, Section 2.2]:
Definition 2.1. Let {σn} be the singular values of the compact operator K. The degree of ill–posedness of the equation (3) is defined as follows: If the sequence {σn} decays as O(n^{−α}), for some α > 0, then the equation is mildly ill–posed. If, on the other hand, the sequence behaves as O(e^{−αn}), then the equation is severely ill–posed.

Remark 2.2. We mention here that we excluded the case where the boundary Γ = Γ1 ∪ Γ2, since we only consider the case where the operator K is compact.

If the problem is consistent, i.e., there is an exact solution f such that the exact data g satisfies g = Kf, then the series (4) is convergent and f = K⁻¹g exists. This is typically not the case for measured data gδ containing noise. Hence regularization methods are used to approximate K⁻¹. The following definition is often used:

Definition 2.3. A regularization of K⁻¹ is a family of operators Rλ such that

    Rλ gδ → K⁻¹g = f,   as gδ → g,

where λ := λ(δ, gδ, f) is a regularization parameter, ||g − gδ|| ≤ δ is the noise level, g is the exact data such that Kf = g, and gδ is the available data.
2.2. A concrete test problem

In order to examine the properties of the operator equation (3) more closely, we specify a concrete test case. Let the domain be Ω = [0, 1] × [0, L] and consider the following well–posed boundary value problem: Find u(x, y) such that

    ∆u + k²u = 0,           0 < x < 1, 0 < y < L,
    u_y(x, 0) = 0,          0 < x < 1,
    u(0, y) = u(1, y) = 0,  0 < y < L,        (5)
    u(x, L) = f(x),         0 < x < 1.

Using this boundary value problem we define the operator

    K1 : f(x) ↦ u(x, 0) = g(x).        (6)

Note that in this case both f(x) and g(x) can be taken from the same function space, e.g. C([0, 1]) or L²([0, 1]), and the operator K1 is self–adjoint. This property is naturally inherited by the matrix approximation, leading to simpler numerical methods.

Example 2.4. Let L = 0.2 in (5) and consider k² = 20.5 in the Helmholtz equation. By discretizing equation (5) on a grid (xi, yj) of size 400 × 81 we obtain a matrix approximation K1 ∈ R^{400×400}. In order to illustrate the properties of (6) in this case, we compute the singular value decomposition (SVD) of K1, given by K1 = UΣV^T. The singular values are displayed in Figure 3. We remark that the singular values decay exponentially, and only level off at the machine precision µ ≈ 1.1 · 10⁻¹⁶. By Definition 2.1, the problem is thus severely ill–posed. Further, we select an exact solution u(x, y) of the Helmholtz equation, from which we evaluate two functions f(x) and g(x) such that (K1 f)(x) = g(x). These functions are illustrated in Figure 2.
Figure 2: The function u(L, x) = f(x) is presented on the left. The corresponding function (K1 f)(x) = u(0, x) = g(x) is shown on the right. Note that the operator K1 is strongly smoothing: the wiggly features of f(x) do not appear in the solution g(x).
Figure 3: The first 100 singular values σn of the matrix K1 (left). The singular values decay exponentially and thus the operator equation is severely ill–posed. We also display the first 30 singular values together with the Fourier coefficients (un, gδ) (right). Initially the coefficients (un, gδ) tend to zero faster than the singular values σn. The coefficients level off at the noise level 10⁻⁵.
We observe that the function g(x) is very smooth compared to f(x). This smoothing property is typical for ill–posed problems. We finally add normally distributed noise with variance 10⁻⁵ to obtain simulated measurements gδ. We display in Figure 3 both the singular values σn and the coefficients (un, gδ). We know that if a solution to K1 f = g exists, then the series (4) is convergent. This is often referred to as the Picard criterion [3, 4]. This condition holds for the exact data g(x) but not for the noisy data gδ(x). We see from Figure 3 that initially the coefficients (un, gδ) decay faster than the singular values σn, but the coefficients level out at the noise level δ = 10⁻⁵. We conclude that we can only hope to accurately reconstruct at most 15–20 singular components of the solution.

Remark 2.5. Suppose now that the domain is Ω̃ = [0, L1] × [0, L2]. In this case a change of variables (x, y) ↦ (x/L1, y/L1) transforms the Helmholtz equation into

    u_xx + u_yy + (L1 k)² u = 0,   in Ω = [0, 1] × [0, L2/L1].

It is thus sufficient to consider the case where the interval in the x variable is [0, 1].
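The Picard check from Example 2.4 is easy to reproduce numerically once a matrix approximation of K1 is available (such a matrix is constructed in Section 3). The following NumPy sketch is our own illustration, not the paper's code: it assumes K and the exact data vector g are given as arrays, and it reads the stated "variance 10⁻⁵" literally as a variance, so the noise standard deviation is sqrt(1e-5).

```python
import numpy as np

# Assumed inputs (not defined in the paper's text): K, a matrix
# approximation of K1 as constructed in Section 3, and g, the exact
# data vector. Both are NumPy arrays.
rng = np.random.default_rng(0)
g_delta = g + np.sqrt(1e-5) * rng.standard_normal(g.shape)

U, s, Vt = np.linalg.svd(K)
coeffs = np.abs(U.T @ g_delta)      # Fourier coefficients |(u_n, g_delta)|

# Picard check: the series (4) behaves well while coeffs decays faster
# than s; once coeffs levels off at the noise level, the ratios
# coeffs / s blow up and those components are dominated by noise.
n_usable = int(np.sum(coeffs < s))
print("components not yet dominated by noise:", n_usable)
```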
3. Numerical implementation

In this section we present details of our numerical implementation. The well–posed boundary value problem (5) is discretized using finite differences, and the resulting code is used to create a matrix approximation of the linear operator (6).
3.1. The Helmholtz equation in a rectangle

In our implementation of the finite difference method for problem (5) we pick a regular grid

    {(xi, yj) : xi = i∆x, yj = j∆y, 0 ≤ i ≤ N, 0 ≤ j ≤ M}.

Let ui,j denote the numerical approximation of u(xi, yj). The finite difference discretization [14] of the Helmholtz equation at each interior point (xi, yj) simplifies to

    (1/∆x²)(ui−1,j + ui+1,j) + (k² − 2/∆x² − 2/∆y²) ui,j + (1/∆y²)(ui,j−1 + ui,j+1) = 0.

At the boundaries corresponding to x = 0 and x = 1, the value of the function u(x, y) is zero and the corresponding variables ui,j are explicitly set to zero. At the boundary y = L we have a Dirichlet condition,

    ui,M = f(xi),   i = 0, ..., N,

where u(x, L) = f(x) is the prescribed Dirichlet data at y = L. The discretization of the Neumann boundary condition u_y(x, 0) = 0 is given by

    (−3ui,1 + 4ui,2 − ui,3)(2∆y)⁻¹ = 0,   i = 0, ..., N.

This particular choice means that we approximate the Neumann condition with accuracy O(∆y²), and the resulting finite difference method is O(∆x² + ∆y²) accurate.

The finite difference formulas above lead to a large and sparse linear system of equations Au = b, where the matrix A is of size MN × MN and b is a vector that contains the boundary data. Since the matrix A contains roughly 5 non–zero elements on each row, the total number of non–zero elements is only nnz(A) ≈ 5MN, and the system can be solved efficiently using an iterative method such as GMRES. In our application we want to solve the same well–posed boundary value problem multiple times, with different data f(x). Since the matrix A depends only on the size of the domain, i.e., L, and the parameter k², not on the actual boundary values f(x), we can save the sparse LU factorization and reuse it. This significantly improves the computational speed for the methods we consider. Moreover, since the matrix A is banded, the number of non-zeros in the factors L and U remains relatively low.
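To make the assembly concrete, the following Python/SciPy sketch, which is our own and not the paper's code, builds the sparse system for problem (5) and factors it once with a sparse LU so the factor can be reused for many right-hand sides. The helper name make_solver, the zero-based indexing, and the bookkeeping that eliminates the x = 0, x = 1 and y = L boundary points are all assumptions of ours; the one-sided Neumann stencil is applied at the bottom row of unknowns.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_solver(N, M, L, k2):
    # Unknowns u[i, j], i = 1..N-1 (x = 0, 1 eliminated), j = 0..M-1
    # (the Dirichlet row j = M is moved to the right-hand side).
    dx, dy = 1.0 / N, L / M
    n = N - 1                                  # unknowns per grid row
    idx = lambda i, j: j * n + (i - 1)
    rows, cols, vals = [], [], []

    def add(r, c, v):
        rows.append(r); cols.append(c); vals.append(v)

    for i in range(1, N):
        # Neumann condition u_y(x, 0) = 0: one-sided O(dy^2) stencil,
        # multiplied through by 2*dy.
        add(idx(i, 0), idx(i, 0), -3.0)
        add(idx(i, 0), idx(i, 1), 4.0)
        add(idx(i, 0), idx(i, 2), -1.0)
        # Five-point Helmholtz stencil at the remaining rows.
        for j in range(1, M):
            r = idx(i, j)
            add(r, r, k2 - 2.0 / dx**2 - 2.0 / dy**2)
            if i > 1:
                add(r, idx(i - 1, j), 1.0 / dx**2)
            if i < N - 1:
                add(r, idx(i + 1, j), 1.0 / dx**2)
            add(r, idx(i, j - 1), 1.0 / dy**2)
            if j < M - 1:
                add(r, idx(i, j + 1), 1.0 / dy**2)
    A = sp.csc_matrix((vals, (rows, cols)), shape=(n * M, n * M))
    lu = spla.splu(A)                  # factor once, reuse for many f

    def solve(f):
        # f holds the Dirichlet values f(x_i), i = 1..N-1, at y = L.
        b = np.zeros(n * M)
        b[(M - 1) * n:] = -np.asarray(f) / dy**2
        u = lu.solve(b)
        return u[:n]                   # the trace g(x_i) = u(x_i, 0)

    return solve
```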
3.2. The matrix approximation

Recall that the linear operator K1, see (6), maps function values u|Γ1 = f onto function values u|Γ2 = g. In the previous section we explained the details of the finite difference implementation that allows us to evaluate the operator K1 by solving a well–posed boundary value problem. Suppose that n1 is the number of grid points located on the boundary Γ1 and n2 is the number of grid points located on Γ2. Then the functions f and g are represented by vectors F ∈ R^{n1} and G ∈ R^{n2}. Since the problem (2), and hence the corresponding operator, is linear, the operator can be approximated by a matrix:

    K̃ · F = G,   K̃ ∈ R^{n2×n1}.

Further, to obtain the matrix we simply pick a set of basis vectors {ei} for R^{n1} and evaluate the operator:

    Kei = ki,   K̃(:, i) = ki,   i = 1, ..., n1.

Thus, by solving the mixed boundary value problem (2) using the finite difference discretization, with discrete data F = ei, we obtain one column of the matrix K̃ approximating the operator K. In the rest of the paper we will use the symbol K to denote either the matrix or the operator.

Note that since we can reuse the sparse LU decomposition when evaluating K1 for the basis vectors ei, i = 1, ..., n1, the cost of computing the matrix K̃ is often not unreasonably large. This also means that if an iterative method requires more than n1 iterations, the same amount of computational resources could instead have been spent on building the matrix.
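As an illustration, here is a sketch (again our own, not the paper's code) that builds the matrix column by column using the hypothetical make_solver helper from the previous sketch. With the boundary points eliminated, the matrix here has n1 = N − 1 columns, whereas the paper reports a 400 × 400 matrix, so the exact bookkeeping differs slightly.

```python
import numpy as np

def build_matrix(N, M, L, k2):
    # One well-posed solve per basis vector e_i gives one column of K.
    solve = make_solver(N, M, L, k2)   # hypothetical helper, Section 3.1
    n1 = N - 1                         # unknowns representing f on Γ1
    K = np.zeros((n1, n1))
    for i in range(n1):
        e = np.zeros(n1)
        e[i] = 1.0                     # discrete basis vector e_i
        K[:, i] = solve(e)             # column k_i = K e_i
    return K

# Example: the test case of Example 2.4 (dy = 0.2/80 matches dx = 1/400).
K = build_matrix(N=400, M=80, L=0.2, k2=20.5)
```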
4. Numerical study of the Helmholtz equation

In this section we investigate, numerically, the properties of the operator K1 given by (6). We select a discretization of the x–axis of size N = 400. The discretization of the y–axis is selected so that ∆x = ∆y; for example, we choose M = 81 points in the case L = 0.2.
Figure 4: The first 50 singular values of the matrix K1, for k² = 20.5 and different values of L (left). We see that the degree of ill–posedness depends strongly on the distance L between Γ1 and Γ2. We also show the first 8 singular values for the case L = 0.3 and different values of k² (right). The first few singular values depend strongly on k², but the rate of decay, and hence the degree of ill–posedness, does not.

Thus the matrix representation of the operator will be a 400 × 400 matrix. In all cases the matrix is computed using the finite difference discretization presented in Section 3. Since the matrix K1 is rather small, we can compute its singular value decomposition (SVD) by dense matrix methods, for example Matlab's svd routine. The parameters that we investigate are L, i.e., the distance between Γ1 and Γ2, and the wave number k².

We first investigate the dependence of the singular values on the parameter L. Thus we computed the matrix approximation of the operator K1 for equally spaced values of L between 0.1 and 0.5. The results are displayed in Figure 4. We observe that for L = 0.5 only about 20 singular values are above the machine precision, and the problem is very challenging to solve with realistic noise levels. For L = 0.1 the singular values decay at a much slower rate, and about 140 singular values are above the machine precision.

We then investigate the dependence of the singular values on the wave number k². In this experiment we computed the matrix K1 for equally spaced values of k² between 8 and 28. In Figure 4 we see that only the first few singular values depend strongly on the wave number. The rate of decay, and thus the degree of ill–posedness of the problem, does not depend on the wave number at all. Hence the difficulty in solving the Cauchy problem should depend very little on the wave number.
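A sketch of this parameter study, using the hypothetical build_matrix helper from Section 3.2 (our own construction, not the paper's code):

```python
import numpy as np

# Singular value decay as a function of the distance L, with dx = dy.
N = 400
for L in np.linspace(0.1, 0.5, 5):
    M = int(round(L * N))               # dy = L/M equals dx = 1/N
    K = build_matrix(N, M, L, k2=20.5)
    s = np.linalg.svd(K, compute_uv=False)
    # Count singular values above (a relative) machine precision.
    print(f"L={L:.1f}: numerical rank {np.sum(s > 1e-16 * s[0])}")
```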
5. An overview of regularization methods

In this section we study methods for solving the ill–conditioned linear system

    K1 F = Gδ,        (7)

where Gδ is the discrete, and noisy, data vector. Since the original operator equation K1 f = g is ill–posed, this is often referred to as a discrete ill–posed problem [4].
Figure 5: The functions F and Gδ used for the test problem (left), and the solution u(x, y) (right). problem [4]. Methods for solving discrete ill–posed problems are either direct, i.e., they work directly with the matrix K1 and compute the solution using a formula based on a decomposition of the matrix, or iterative, i.e., they compute a sequence Fk that converges to the desired solution. Iterative methods only access the matrix indirectly by computing matrix vector products, or repeated evaluations of the operator K1 . Thus the matrix is not needed explicitly. A finite difference solver for the mixed–boundary value problem (5) can instead be used to evaluate a matrix–vector product K1 F , for instance. Iterative methods often require a subroutine for computing matrix–vector products involving both K and its transpose K T . An example is the Landweber method [7], Fn+1 = Fn + γK T (Gδ − KFn ),
n = 0, 1, 2, ... ,
which has been applied to the Cauchy problem the the Helmholtz equation [1]. Matrix–vector products involving K T are computed by solving an adjoint problem to (5). This means by solving a mixed–boundary value problem involving the same equation, but with a different set of boundary conditions, see [5] for details.
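In matrix form the Landweber iteration is only a few lines. The following NumPy sketch is ours, with discrepancy-principle stopping added as an assumption; in the matrix-free setting the products K @ F and K.T @ r would be replaced by calls to the direct and adjoint boundary value solvers.

```python
import numpy as np

def landweber(K, g_delta, delta, gamma, max_iter=5000):
    # F_{n+1} = F_n + gamma * K^T (G_delta - K F_n); for convergence
    # the step size should satisfy 0 < gamma < 2 / ||K||_2^2.
    F = np.zeros(K.shape[1])
    for n in range(max_iter):
        r = g_delta - K @ F
        if np.linalg.norm(r) <= delta:   # discrepancy principle stop
            break
        F += gamma * (K.T @ r)
    return F
```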
5.1. Direct regularization methods

In this section we investigate direct regularization methods for solving the discrete ill–posed system K1 F = Gδ, where Gδ represents the noisy data vector. For direct methods we assume that the matrix K1 is available, and of relatively small dimension, so that it is feasible to compute a singular value decomposition K1 = UΣV^T.

Example 5.1. Let Ω = [0, 1] × [0, 0.2] and let the wave number be k² = 20.5. As previously, we discretize the domain using a grid of size 400 × 81 and compute a matrix K1 of size 400 × 400. Since the matrix is small we can compute its SVD, K1 = UΣV^T. We also pick an exact solution F and G, satisfying K1 F = G, and add normally distributed noise of variance 10⁻⁴ to G to obtain Gδ. We display the functions F and Gδ, and also the function u(x, y), in Figure 5.
Figure 6: The TSVD solutions F^W_tsvd (blue solid curve) computed using W = 8 (top, left), W = 13 (top, right), and W = 18 (bottom, left). We also display the exact solution F (black dashed curve). The error ||F − F^W_tsvd||2, for W = 1, ..., 20, is also displayed (bottom, right). We see that the error is small for a reasonably large range of values W.
Since the ill–posedness is associated with the small singular values, an obvious solution is to truncate the series (4). Thus for a positive integer W we define the truncated singular value decomposition (TSVD) solution as

    F^W_tsvd = Σ_{n=1}^{W} (un^T Gδ / σn) vn.        (8)

The corresponding operator RW is bounded, and we have restored the stability of the solution with respect to measurement errors. The method includes the parameter W, i.e., the number of components to include in the solution. If the noise level ||G − Gδ||2 ≤ δ is known, then the Morozov discrepancy principle, see [10], picks the smallest W such that

    ||K1 F^W_tsvd − Gδ||2 ≤ δ.        (9)

In order to demonstrate the performance of the method we compute TSVD solutions for a range of W–values. In Figure 6 we display the TSVD solutions F^W_tsvd, for W = 8, 13, and 18. We also display the error ||F − F^W_tsvd||2 as a function of W. Using an appropriate level of regularization we manage to obtain a fairly good solution. For this example the discrepancy principle would suggest W = 11 as the appropriate regularization parameter.
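A compact NumPy sketch of TSVD with the discrepancy principle, (8)-(9); this is our own illustration, assuming K and the noisy data g_delta are available as arrays:

```python
import numpy as np

def tsvd_solution(K, g_delta, delta):
    # TSVD solution (8); W is chosen by the discrepancy principle (9)
    # as the smallest value with ||K F_W - G_delta||_2 <= delta.
    U, s, Vt = np.linalg.svd(K)
    c = U.T @ g_delta                  # coefficients u_n^T G_delta
    F = np.zeros(K.shape[1])
    for W in range(1, len(s) + 1):
        F += (c[W - 1] / s[W - 1]) * Vt[W - 1]
        if np.linalg.norm(K @ F - g_delta) <= delta:
            break
    return F, W
```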
Figure 7: The error ||F^λ_tik − F||2, for 10⁻⁴ ≤ λ ≤ 2 · 10⁻³ (left). Also the solution F^λ_tik, for λ = 3 · 10⁻⁴ (right). This demonstrates that the Tikhonov method can produce good solutions.
Another direct regularization method is Tikhonov regularization. The solution F^λ_tik is the unique minimizer of the Tikhonov functional

    ||K1 F − Gδ||2² + λ² ||F||2²,        (10)

where λ > 0 is the regularization parameter. The minimization problem represents a compromise between minimizing the residual ||K1 F − Gδ||2 and enforcing stability, i.e., keeping the solution norm ||F||2 small. Using the SVD of K1 we can write the Tikhonov solution as

    F^λ_tik = Σ_{i=1}^{N} (σi/(σi² + λ²)) (ui^T Gδ) vi.        (11)

The performance of the Tikhonov method is illustrated in Figure 7. We present the error as a function of the regularization parameter λ, and also the solution for a good choice of the regularization parameter. The discrepancy principle can be used to find an appropriate value for the parameter λ; for this example we obtain λ = 1.3 · 10⁻³.
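The filter-factor form (11) translates directly into code; another hedged sketch of ours, with the regularization parameter lam assumed given (e.g., from the discrepancy principle):

```python
import numpy as np

def tikhonov_solution(K, g_delta, lam):
    # Tikhonov solution (11): damp each SVD component by the filter
    # factor sigma_i / (sigma_i^2 + lambda^2) instead of 1 / sigma_i.
    U, s, Vt = np.linalg.svd(K)
    c = U.T @ g_delta                  # coefficients u_i^T G_delta
    return Vt.T @ (s / (s**2 + lam**2) * c)
```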
5.2. Iterative regularization methods

Iterative regularization methods typically access the matrix indirectly; it is assumed that a subroutine for evaluating matrix–vector products is available. This typically means that in each iteration a well–posed boundary value problem is solved. In our case the finite difference solver implemented for the problem (5) is used to evaluate the effect of applying the operator K1 to a vector F. If the iterative method involves the transpose K1^T, then an adjoint problem can be solved in order to evaluate the effect of applying K1^T to a vector; see e.g. [5] for details.

Krylov subspace methods [12] are based on seeking an approximate solution in the Krylov subspace

    Kn(A, r⁽⁰⁾) = span(r⁽⁰⁾, Ar⁽⁰⁾, ..., A^{n−1} r⁽⁰⁾),
Figure 8: The error ||F^n_cg − F||2 for the first 8 CG iterations (left) and the residuals ||K1 F^n_cg − Gδ||2 (right). This illustrates the fast convergence of the CG method.
where r⁽⁰⁾ is the residual corresponding to a starting guess x⁽⁰⁾. Such methods can be used as regularization methods, since it has been noted that the Krylov subspace Kn tends to approximate the subspace span(v1, ..., vn) spanned by the first n singular vectors, which correspond to the large singular values [4, p. 145]. Thus the solutions computed by Krylov subspace methods can be considered approximations of the TSVD solution. The two standard Krylov subspace methods are the generalized minimal residual method (GMRES) for general non–singular matrices, and the conjugate gradient method (CG) for symmetric and positive definite matrices. Both methods use matrix–vector multiplications to build an orthogonal basis for the Krylov subspace Kn and compute the solution by solving a least squares problem. Efficient implementations exist.

Example 5.2. In order to demonstrate the regularizing effects of Krylov subspace methods we once again consider the case detailed in Example 5.1. The matrix K1 is of size 400 × 400, symmetric and positive definite (if ignoring eigenvalues close to the machine precision). Therefore we attempt to solve the linear system K1 F = Gδ using the conjugate gradient method.

The residuals ||K1 F^n_cg − Gδ||2 and errors ||F^n_cg − F||2, for the first 8 CG iterations using the starting guess F⁰ = 0, are displayed in Figure 8. We see that the method converges very rapidly. The errors initially decrease, but start to increase after only a few iterations. This is often referred to as semiconvergence [4, p. 135]. It is important to stop the iterations before the error starts to increase, and the number of iterations used acts as a regularization parameter. For this particular experiment the discrepancy principle would have stopped the iterations at n = 6. We also display the solutions obtained after n = 5 and n = 10 iterations in Figure 9. Clearly, by using n = 10 iterations we approximate K1⁻¹ too accurately, and the ill–posedness of the problem shows up as a magnification of high frequency noise.
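A plain CG loop with discrepancy-principle stopping is sketched below; this is our own construction (the paper lists no code), and for a matrix-free variant the products K @ p would again be replaced by calls to the boundary value solver.

```python
import numpy as np

def cg_regularized(K, g_delta, delta, max_iter=20):
    # Plain conjugate gradients on K F = G_delta (K symmetric positive
    # definite up to rounding). Stopping by the discrepancy principle
    # makes the iteration count n act as the regularization parameter.
    F = np.zeros_like(g_delta)
    r = g_delta - K @ F
    p = r.copy()
    for n in range(1, max_iter + 1):
        Kp = K @ p
        alpha = (r @ r) / (p @ Kp)
        F = F + alpha * p
        r_new = r - alpha * Kp
        if np.linalg.norm(r_new) <= delta:   # stop before semiconvergence
            return F, n
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return F, max_iter
```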
Figure 9: The exact solution F (black, dashed) and the CG solution F^n_cg produced after n iterations (blue, solid). We use both n = 5 (left) and n = 10 (right) iterations. Clearly the case n = 10 gives an unphysical solution.
6. Concluding Remarks

In this paper we have presented a Cauchy problem for the Helmholtz equation. Since the problem is important for applications, we wanted to understand its properties and also investigate possible options for computing the solution. Since the problem is severely ill-posed, regularization methods are needed to restore stable dependence with respect to errors in the Cauchy data.

By rewriting the problem as an ill-posed operator equation and computing the SVD of the corresponding operator, we found that the ill-posedness of the problem depends strongly on the size and shape of the domain Ω, while the wave number k² does not influence the degree of ill-posedness. Also, evaluating the linear operator is equivalent to solving a well-posed mixed boundary value problem for the Helmholtz equation. By implementing a finite difference solver for the well-posed boundary value problem, we can approximate the linear operator by a matrix. Hence we can use standard linear algebra methods for solving discrete ill-posed problems. We tested both the TSVD and Tikhonov methods and managed to produce good results.

The drawback of using a method, such as TSVD, that relies on a decomposition of the matrix is that a large number of well-posed boundary value problems needs to be solved in order to first compute the matrix explicitly; to obtain a matrix K of size 400 × 400 we need to solve 400 well-posed boundary value problems. Hence we look for iterative methods that only rely on a subroutine for computing matrix-vector products. Typically, in each iteration one well-posed boundary value problem is solved. Thus, as long as the number of iterations remains low, such methods are vastly more efficient than direct methods. In the paper we applied the CG method and managed to obtain good solutions after only a small number of iterations. We observe that direct methods such as TSVD or Tikhonov usually produce better solutions and make it easier to select a suitable level of regularization. Hence, if an iterative method requires a large number of iterations, it is usually preferable to switch to a direct method.
References

[1] F. Berntsson, V. A. Kozlov, L. Mpinganzima, and B. O. Turesson. An alternating iterative procedure for the Cauchy problem for the Helmholtz equation. Inverse Problems in Science and Engineering, 22(1):45–62, 2014.
[2] D. Colton and R. Kress. Inverse Acoustic and Electromagnetic Scattering Theory. Springer–Verlag, 2nd edition, 1998.
[3] H. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer Academic Publishers, Dordrecht, the Netherlands, 1996.
[4] P. C. Hansen. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. Society for Industrial and Applied Mathematics, Philadelphia, 1997.
[5] D. N. Hào and D. Lesnic. The Cauchy problem for Laplace's equation via the conjugate gradient method. IMA Journal of Applied Mathematics, 65:199–217, 2000.
[6] D. S. Jones. Acoustic and Electromagnetic Waves. Clarendon Press, 1986.
[7] L. Landweber. An iteration formula for Fredholm integral equations of the first kind. Amer. J. Math., 73:615–624, 1951.
[8] C. Langrenne and A. Garcia. Data completion method for the characterization of sound sources. J. Acoust. Soc. Am., 130(4):2016–2023, 2011.
[9] D. Maxwell. Kozlov–Maz'ya iteration as a form of Landweber iteration. arXiv:1107.2194v1 [math.AP], 2011.
[10] V. A. Morozov. On the solution of functional equations by the method of regularization. Soviet Math. Dokl., 7:414–417, 1966.
[11] T. Regińska and K. Regiński. Approximate solution of a Cauchy problem for the Helmholtz equation. Inverse Problems, 22(3):975–989, 2006.
[12] Y. Saad. Iterative Methods for Sparse Linear Systems. International Thomson Publishing, 1996.
[13] A. Schuhmacher, J. Hald, H. B. Rasmussen, and P. C. Hansen. Sound source reconstruction using inverse boundary element calculations. J. Acoust. Soc. Am., 113(1):114–127, 2003.
[14] G. D. Smith. Numerical Solution of Partial Differential Equations. Oxford University Press, Oxford, third edition, 1985.
[15] Xiang-Tuan Xiong. A regularization method for a Cauchy problem of the Helmholtz equation. Journal of Computational and Applied Mathematics, 233(8):1723–1732, 2010.
[16] Xiang-Tuan Xiong and Chu-Li Fu. Two approximate methods of a Cauchy problem for the Helmholtz equation. Comput. Appl. Math., 26(2):285–307, 2007.