Journal of Computational and Applied Mathematics 320 (2017) 76–95


Regularization of the Cauchy problem for the Helmholtz equation by using Meyer wavelet

Milad Karimi^a, Alireza Rezaee^b,∗

^a Department of Mathematical Sciences, Isfahan University of Technology, Isfahan 84156-83111, Iran
^b Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran

Article info
Article history: Received 21 October 2015; Received in revised form 15 July 2016.
MSC: 35J05, 65F22, 65T60, 47J06

Abstract. In this paper, we investigate a Cauchy problem associated with the Helmholtz-type equation in an infinite ''strip''. This is a classical severely ill-posed problem: the solution (if it exists) does not depend continuously on the Cauchy data, and a small perturbation in the data can cause a dramatically large error in the solution for 0 < x ≤ 1. The stability of the solution is restored by using a wavelet regularization method. Moreover, some sharp stability estimates between the exact solution and its approximation in the H^r(R)-norm are provided, and numerical examples show that the method works effectively. © 2017 Elsevier B.V. All rights reserved.

Keywords: Cauchy problem; Helmholtz equation; Regularization; Meyer wavelet; Multiresolution analysis

1. Introduction

The Helmholtz equation is a special kind of elliptic equation and is especially important in some practical physical applications. It is often used to describe the vibration of a structure [1], the acoustic cavity problem [2], the radiation wave [3], the Poisson–Boltzmann equation [4], etc. For more information about the Cauchy problem for Helmholtz equations one can refer to [5,6]. The Cauchy problem for an elliptic equation is well known to be ill-posed in the sense of Hadamard. The direct problems for the Helmholtz equation, i.e., the Dirichlet, Neumann or mixed boundary value problems, have been studied extensively in the past century. However, in some practical problems, the boundary data on the whole boundary cannot be obtained. For computational aspects, the reader can consult Hào and Lesnic [7], Reinhardt et al. [8], Cheng and Yamamoto [9] and Hon and Wei [10]. For theoretical aspects, the reader can refer to Xiong [11], Xiong and Fu [12] and Qian et al. [13]. The Cauchy problem for the Helmholtz equation, which arises from inverse scattering problems [6], is an inverse problem and is severely ill-posed [14]. A number of numerical methods have been proposed to solve the problem. Marin et al. [15–17] have solved the Cauchy problem for the Helmholtz equation by employing the boundary element method (BEM) in conjunction with an iterative algorithm. A preliminary comparison between direct and iterative methods applied to solving the Cauchy problem for the Helmholtz equation has been undertaken by Marin et al. [18]. Several meshless methods,



Corresponding author. E-mail addresses: [email protected] (M. Karimi), [email protected] (A. Rezaee).

http://dx.doi.org/10.1016/j.cam.2017.02.005 0377-0427/© 2017 Elsevier B.V. All rights reserved.


including the method of fundamental solutions (MFS) [19,20], the boundary knot method (BKM) [21] and the plane wave method (PWM) [22], have been proposed for the efficient solution of some inverse problems for Helmholtz equations. Recently, the operator marching method [23], the modified Tikhonov method [24,11], the moment method [25] and the spectral method [11] have also been used to deal with the Cauchy problem for the Helmholtz equation. However, most numerical methods lack a stability analysis and error estimates.

The outline of the paper is as follows. In Section 2 we study the ill-posedness of the problem. In Section 3 we describe the Meyer wavelets and discuss the properties that make them useful for solving ill-posed problems. Some sharp error estimates between the exact solution and its approximation, as well as the choice of the regularization parameter, are given in Section 4. Finally, in Section 5 numerical examples verify the efficiency and accuracy of the proposed method.

2. Model problem and its ill-posedness

In this paper, we consider the Cauchy problem for the Helmholtz equation in a ''strip'' domain

$$\begin{cases}\nabla^2u(x,y)+k^2u(x,y)=0,& 0<x<1,\ y\in\mathbb{R},\\ u(0,y)=\varphi(y),& y\in\mathbb{R},\\ \partial_xu(0,y)=\psi(y),& y\in\mathbb{R},\end{cases}\tag{2.1}$$

where $\nabla^2=\partial_x^2+\partial_y^2$ is the two-dimensional Laplace operator and the constant $k>0$ is the wave number. Note that in the present paper all calculations are carried out for the one-dimensional problem, but they remain valid in higher dimensions. The solution $u(x,y)$ for $0<x<1$ is to be determined from the noisy data $\varphi_m(y)$ and $\psi_m(y)$ in $L^2(\mathbb{R})$, which satisfy $\|\varphi-\varphi_m\|_{L^2(\mathbb{R})}\le\delta$ and $\|\psi-\psi_m\|_{L^2(\mathbb{R})}\le\delta$. Let $\mathcal{S}(\mathbb{R})$ be the Schwartz space, defined as

$$\mathcal{S}(\mathbb{R}):=\Big\{\varphi\in C^\infty(\mathbb{R}):\ \sup_{x\in\mathbb{R}}\big|x^mD^n\varphi(x)\big|<\infty;\ m,n\in\mathbb{N}_0:=\mathbb{N}\cup\{0\}\Big\},$$
where $D^n:=\frac{d^n}{dx^n}$ is the differential operator, and let $\mathcal{S}'(\mathbb{R})$ be the dual space of $\mathcal{S}(\mathbb{R})$. For a function $\varphi\in\mathcal{S}(\mathbb{R})$, its Fourier transform $\hat\varphi$ is defined by
$$\hat\varphi(\omega):=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\varphi(t)e^{-i\omega t}\,dt,\tag{2.2}$$

while the Fourier transform of a tempered distribution f ∈ S ′ (R) is defined by

$$\langle\hat f,\varphi\rangle=\langle f,\hat\varphi\rangle,\qquad \varphi\in\mathcal{S}(\mathbb{R}),$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product. For $s\in\mathbb{R}$, the Sobolev space $H^s(\mathbb{R})$ consists of all tempered distributions $f\in\mathcal{S}'(\mathbb{R})$ for which $\hat f(\omega)(1+|\omega|^2)^{\frac{s}{2}}$ is a function in $L^2(\mathbb{R})$. The norm of this space is given by
$$\|f\|_{H^s}:=\bigg(\int_{\mathbb{R}}\big|\hat f(\omega)\big|^2(1+|\omega|^2)^s\,d\omega\bigg)^{\frac12}.\tag{2.3}$$
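As a small illustration of (2.3) only (this sketch and its grid choices are ours, not part of the original paper), the $H^s$-norm of a sampled function can be approximated by discretizing the frequency integral with the FFT, using the $1/\sqrt{2\pi}$ convention of (2.2):

```python
import numpy as np

def hs_norm(f_samples, dy, s):
    """Approximate the H^s(R) norm (2.3) of a function sampled with spacing dy."""
    n = f_samples.size
    # continuous Fourier transform approximated by the DFT scaled by dy / sqrt(2*pi)
    f_hat = np.fft.fft(f_samples) * dy / np.sqrt(2.0 * np.pi)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dy)   # angular frequencies
    dw = 2.0 * np.pi / (n * dy)                     # frequency spacing
    return np.sqrt(np.sum(np.abs(f_hat) ** 2 * (1.0 + omega ** 2) ** s) * dw)

# example: a Gaussian, whose H^s norms are finite for every s >= 0
y = np.arange(-20.0, 20.0, 0.01)
f = np.exp(-y ** 2)
print(hs_norm(f, 0.01, 0.0), hs_norm(f, 0.01, 1.0))
```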

It is easy to see that $H^0(\mathbb{R})=L^2(\mathbb{R})$ and $L^2(\mathbb{R})\subset H^s(\mathbb{R})$ for $s\le0$. Saying that the function $u(x,y)$ is a solution of problem (2.1) means that $u(x,y)$ satisfies (2.1) in the classical sense and $u(x,\cdot)\in L^2(\mathbb{R})$ for $0\le x\le1$. Now we are ready to examine (2.1) in frequency space. Suppose $u(x,y)$ is the solution of problem (2.1). The Fourier transform $\hat u(x,\omega)$ of $u(x,y)$ with respect to the variable $y$ satisfies

$$\begin{cases}\hat u_{xx}(x,\omega)+(k^2-|\omega|^2)\hat u(x,\omega)=0,& 0<x<1,\ \omega\in\mathbb{R},\\ \hat u(0,\omega)=\hat\varphi(\omega),& \omega\in\mathbb{R},\\ \hat u_x(0,\omega)=\hat\psi(\omega),& \omega\in\mathbb{R}.\end{cases}\tag{2.4}$$

It is easy to see that the solution of problem (2.4) is
$$\hat u(x,\omega)=\hat\varphi(\omega)\cosh\Big(x\sqrt{|\omega|^2-k^2}\Big)+\hat\psi(\omega)\,\frac{\sinh\Big(x\sqrt{|\omega|^2-k^2}\Big)}{\sqrt{|\omega|^2-k^2}},\tag{2.5}$$

or, equivalently, the solution of problem (2.1) is
$$u(x,y)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\Bigg[\hat\varphi(\omega)\cosh\Big(x\sqrt{|\omega|^2-k^2}\Big)+\hat\psi(\omega)\,\frac{\sinh\Big(x\sqrt{|\omega|^2-k^2}\Big)}{\sqrt{|\omega|^2-k^2}}\Bigg]e^{i\omega y}\,d\omega.\tag{2.6}$$
From the Taylor expansion of $\sinh\big(x\sqrt{|\omega|^2-k^2}\big)$ we know that the second term on the right-hand side of (2.5) has no singularity at $|\omega|=k$.

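For later reference (cf. Examples 5.4 and 5.5), one possible discrete realization of the representation (2.5)–(2.6) is sketched below: it applies the multipliers $\cosh(x\sqrt{\omega^2-k^2})$ and $\sinh(x\sqrt{\omega^2-k^2})/\sqrt{\omega^2-k^2}$ to the FFT of sampled Cauchy data. The grid sizes and the use of numpy are our own assumptions, not the authors' implementation.

```python
import numpy as np

def helmholtz_cauchy_forward(phi, psi, dy, x, k):
    """Evaluate u(x, .) from Cauchy data via the multipliers in (2.5)/(2.6).

    phi, psi: samples of u(0, .) and u_x(0, .) on a uniform grid with spacing dy.
    """
    n = phi.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dy)
    mu = np.sqrt((omega ** 2 - k ** 2).astype(complex))      # sqrt(|w|^2 - k^2)
    tiny = np.abs(mu) < 1e-12
    safe = np.where(tiny, 1.0, mu)
    # sinh(x*mu)/mu has a removable singularity at mu = 0, where it equals x
    sinh_term = np.where(tiny, x, np.sinh(x * safe) / safe)
    u_hat = np.fft.fft(phi) * np.cosh(x * mu) + np.fft.fft(psi) * sinh_term
    return np.real(np.fft.ifft(u_hat))

# keep the grid deliberately coarse: on finer grids the multipliers at high
# frequencies explode, which is exactly the instability discussed in the text
y = np.arange(-10.0, 10.0, 0.1)
u_half = helmholtz_cauchy_forward(np.exp(-y ** 2), np.zeros_like(y), 0.1, x=0.5, k=5.0)
print(u_half.max())
```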

However, the factors $\cosh\big(x\sqrt{|\omega|^2-k^2}\big)$ and $\sinh\big(x\sqrt{|\omega|^2-k^2}\big)$ in (2.5) and (2.6) increase rapidly, with exponential order, as $|\omega|\to\infty$. Therefore, (2.5) and (2.6) imply that $\hat\varphi(\omega)$ and $\hat\psi(\omega)$, the Fourier transforms of the exact data $\varphi(y)$ and $\psi(y)$, must decay rapidly. In practice, however, the data at $x=0$ are obtained from readings of physical instruments, denoted by $\varphi_m(y)$ and $\psi_m(y)$, and we cannot assume that they are given with absolute accuracy. Therefore, such a decay is not likely to occur in the Fourier transforms of the measured noisy data $\varphi_m(y)$ and $\psi_m(y)$.
Being measured data, $\varphi_m(y)$ and $\psi_m(y)$ have Fourier transforms $\hat\varphi_m(\omega)$ and $\hat\psi_m(\omega)$ that are, in general, merely in $L^2(\mathbb{R})$. A small perturbation in the data $\varphi(y)$ and $\psi(y)$ may cause a dramatically large error in the solution $u(x,y)$ for $0<x\le1$. We will now show how the Cauchy problem for the Helmholtz equation suffers from nonexistence and instability of the solution. To this end, we define an operator $K:C(\mathbb{R})\times C(\mathbb{R})\longrightarrow C(\mathbb{R})$ by $K(\varphi,\psi)(y):=\varphi(y)+\psi(y)$,

for all $\varphi,\psi\in C(\mathbb{R})$, where $C(\mathbb{R})$ is the space of continuous functions on $\mathbb{R}$ equipped with the supremum norm, and $K_m$ denotes the same operator evaluated at the measured data. Suppose that $\varphi(\cdot)$ and $\psi(\cdot)$ are exact data and that $\varphi_m(\cdot)$ and $\psi_m(\cdot)$ are the corresponding measured data. We set $\varphi_m(y):=\varphi(y)+\frac{\cos(my)}{m^2}$ and $\psi_m(y):=\psi(y)+\frac{\sin(my)}{m^2}$, so that the data error, for $0<x<1$, is
$$\begin{aligned}F(m):=\|K_m-K\|_\infty&=\sup_{y\in\mathbb{R}}\big|K_m(\varphi,\psi)(y)-K(\varphi,\psi)(y)\big|=\sup_{y\in\mathbb{R}}\big|\varphi_m(y)+\psi_m(y)-\varphi(y)-\psi(y)\big|\\&\le\sup_{y\in\mathbb{R}}\big|\varphi_m(y)-\varphi(y)\big|+\sup_{y\in\mathbb{R}}\big|\psi_m(y)-\psi(y)\big|=\sup_{y\in\mathbb{R}}\Big|\frac{\cos(my)}{m^2}\Big|+\sup_{y\in\mathbb{R}}\Big|\frac{\sin(my)}{m^2}\Big|=\frac{2}{m^2}.\end{aligned}$$

The solution of problem (2.1) corresponding to the data $\varphi_m$ and $\psi_m$ is
$$u_m(x,y)=u(x,y)+\frac{\cos(my)\cosh\big(x\sqrt{m^2-k^2}\big)}{m^2}+\frac{\sin(my)\sinh\big(x\sqrt{m^2-k^2}\big)}{m^2\sqrt{m^2-k^2}},$$

hence
$$\begin{aligned}O(m):=\|u_m(x,\cdot)-u(x,\cdot)\|_\infty&=\sup_{x\in(0,1),\,y\in\mathbb{R}}\big|u_m(x,y)-u(x,y)\big|\\&\le\sup_{x\in(0,1),\,y\in\mathbb{R}}\Bigg|\frac{\cos(my)\cosh\big(x\sqrt{m^2-k^2}\big)}{m^2}\Bigg|+\sup_{x\in(0,1),\,y\in\mathbb{R}}\Bigg|\frac{\sin(my)\sinh\big(x\sqrt{m^2-k^2}\big)}{m^2\sqrt{m^2-k^2}}\Bigg|\\&\le\frac{\cosh\big(\sqrt{m^2-k^2}\big)}{m^2}+\frac{\sinh\big(\sqrt{m^2-k^2}\big)}{m^2\sqrt{m^2-k^2}}.\end{aligned}$$

We note that
$$\lim_{m\to\infty}F(m)=\lim_{m\to\infty}\|K_m-K\|_\infty\le\lim_{m\to\infty}\frac{2}{m^2}=0,$$

and
$$\lim_{m\to\infty}O(m)=\lim_{m\to\infty}\|u_m(x,\cdot)-u(x,\cdot)\|_\infty\le\lim_{m\to\infty}\Bigg(\frac{\cosh\big(\sqrt{m^2-k^2}\big)}{m^2}+\frac{\sinh\big(\sqrt{m^2-k^2}\big)}{m^2\sqrt{m^2-k^2}}\Bigg)=\infty.$$
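This blow-up is easy to observe numerically. The toy computation below is our own sketch (values of $k$ and $x$ chosen for illustration): it evaluates the size $2/m^2$ of the data perturbation and the corresponding amplification in the solution for growing $m$.

```python
import numpy as np

k, x = 15.0, 0.5
for m in [30, 60, 120, 240]:
    data_err = 2.0 / m ** 2                                   # size of the data perturbation
    mu = np.sqrt(m ** 2 - k ** 2)
    sol_err = np.cosh(x * mu) / m ** 2 + np.sinh(x * mu) / (m ** 2 * mu)
    print(f"m={m:4d}  data error ~ {data_err:.2e}  solution error ~ {sol_err:.2e}")
```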

So, problem (2.1) is severely ill-posed and its numerical simulation is very difficult; thus some regularization method is needed. It is obvious that the ill-posedness of problem (2.1) is caused by the perturbation of high frequencies. The Meyer wavelet has a very good localization property in the frequency domain: for a fixed index $J$, the Fourier transforms of the scaling functions in $V_J$ and of the wavelet functions in $W_J$ have common compact supports, respectively. Problem (2.1) becomes well-posed on the scale spaces $V_J$. The Meyer wavelet will therefore be applied in Section 3 to formulate a regularized solution of problem (2.1) which, by an appropriate choice of $J$, converges to the exact one as the data error tends to zero. In order to simplify the analysis of problem (2.1) we make a decomposition. Let $u_1$ and $u_2$ be the solutions of the following two problems, respectively:

$$\begin{cases}\nabla^2u_1(x,y)+k^2u_1(x,y)=0,& 0<x<1,\ y\in\mathbb{R},\\ u_1(0,y)=\varphi(y),& y\in\mathbb{R},\\ \partial_xu_1(0,y)=0,& y\in\mathbb{R},\end{cases}\tag{2.7}$$


Fig. 1. (a) Exact solution u(0.5, ·); (b) unregularized solution reconstructed from ϕm and ψm for x = 0.5.

and

$$\begin{cases}\nabla^2u_2(x,y)+k^2u_2(x,y)=0,& 0<x<1,\ y\in\mathbb{R},\\ u_2(0,y)=0,& y\in\mathbb{R},\\ \partial_xu_2(0,y)=\psi(y),& y\in\mathbb{R}.\end{cases}\tag{2.8}$$

Then $u:=u_1+u_2$ is the solution of problem (2.1). Therefore we only need to solve problems (2.7) and (2.8) separately. From the above analysis we know that they are both severely ill-posed, and the regularization method for solving them will be developed in the following sections. In Fig. 1 we give the exact solution at $x=0.5$, that is, $u(0.5,y_1,y_2)$, and the solution $u^\delta(0.5,y_1,y_2)$ reconstructed from the noisy data $\varphi_m(y_1,y_2)$, $\psi_m(y_1,y_2)$ without regularization. We see that $u^\delta$ does not approximate the solution, and some regularization procedure is necessary.

3. Meyer wavelet and auxiliary results

In this section, placing problem (2.1) in an appropriate manner into Meyer's scale space $V_J$ or wavelet space $W_J$ amounts to

using a filter in frequency space which attenuates the high frequencies in $\hat\varphi_m(\omega)$ and $\hat\psi_m(\omega)$. This is precisely the idea of the following wavelet regularization method. The aim of this section is to present a technique for solving problem (2.1) which consists in applying a wavelet basis decomposition of the measured data. Let $\Phi$ be the Meyer scaling function, defined by its Fourier transform [26]
$$\hat\Phi(\omega)=\begin{cases}\dfrac{1}{\sqrt{2\pi}},& |\omega|\le\dfrac{2\pi}{3},\\[1mm] \dfrac{1}{\sqrt{2\pi}}\cos\!\Big(\dfrac{\pi}{2}\,\nu\!\Big(\dfrac{3}{2\pi}|\omega|-1\Big)\Big),& \dfrac{2\pi}{3}\le|\omega|\le\dfrac{4\pi}{3},\\[1mm] 0,&\text{otherwise},\end{cases}\tag{3.1}$$

where $\nu$ is a $C^k$ function ($0\le k\le\infty$) with
$$\nu(x):=\begin{cases}0,& x\le0,\\ 1,& x\ge1,\end{cases}$$
$\nu(x)+\nu(1-x)=1$, and, for instance, $\nu(x)=x^4(35-84x+70x^2-20x^3)$ on $[0,1]$. Then $\hat\Phi$ is a $C^k$ function and the corresponding wavelet $\Psi$ is given by
$$\hat\Psi(\omega)=\begin{cases}\dfrac{1}{\sqrt{2\pi}}\,e^{i\omega/2}\sin\!\Big(\dfrac{\pi}{2}\,\nu\!\Big(\dfrac{3}{2\pi}|\omega|-1\Big)\Big),& \dfrac{2\pi}{3}\le|\omega|\le\dfrac{4\pi}{3},\\[1mm] \dfrac{1}{\sqrt{2\pi}}\,e^{i\omega/2}\cos\!\Big(\dfrac{\pi}{2}\,\nu\!\Big(\dfrac{3}{4\pi}|\omega|-1\Big)\Big),& \dfrac{4\pi}{3}\le|\omega|\le\dfrac{8\pi}{3},\\[1mm] 0,&\text{otherwise},\end{cases}\tag{3.2}$$
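A direct way to work with (3.1)–(3.2) numerically is to tabulate $\hat\Phi$ and $\hat\Psi$ on a frequency grid. The sketch below is our own construction (it uses the polynomial $\nu$ given above) and is meant only to illustrate the compact frequency supports:

```python
import numpy as np

def nu(t):
    """Cutoff used in (3.1)-(3.2): 0 for t <= 0, 1 for t >= 1, polynomial in between."""
    t = np.clip(t, 0.0, 1.0)
    return t ** 4 * (35.0 - 84.0 * t + 70.0 * t ** 2 - 20.0 * t ** 3)

def meyer_phi_hat(w):
    """Fourier transform of the Meyer scaling function, formula (3.1)."""
    aw, c = np.abs(w), 1.0 / np.sqrt(2.0 * np.pi)
    out = np.zeros_like(aw)
    out[aw <= 2 * np.pi / 3] = c
    band = (aw > 2 * np.pi / 3) & (aw <= 4 * np.pi / 3)
    out[band] = c * np.cos(0.5 * np.pi * nu(3.0 * aw[band] / (2.0 * np.pi) - 1.0))
    return out

def meyer_psi_hat(w):
    """Fourier transform of the Meyer wavelet, formula (3.2) (complex valued)."""
    aw, c = np.abs(w), 1.0 / np.sqrt(2.0 * np.pi)
    out = np.zeros(aw.shape, dtype=complex)
    low = (aw >= 2 * np.pi / 3) & (aw <= 4 * np.pi / 3)
    high = (aw > 4 * np.pi / 3) & (aw <= 8 * np.pi / 3)
    out[low] = c * np.exp(1j * w[low] / 2) * np.sin(0.5 * np.pi * nu(3.0 * aw[low] / (2.0 * np.pi) - 1.0))
    out[high] = c * np.exp(1j * w[high] / 2) * np.cos(0.5 * np.pi * nu(3.0 * aw[high] / (4.0 * np.pi) - 1.0))
    return out

w = np.linspace(-10.0, 10.0, 2001)
print(np.max(np.abs(meyer_phi_hat(w))), np.max(np.abs(meyer_psi_hat(w))))  # both ~ (2*pi)**-0.5
```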


and the supports of $\hat\Phi$ and $\hat\Psi$ are
$$\operatorname{supp}\hat\Phi=\Big[-\frac{4\pi}{3},\frac{4\pi}{3}\Big],\qquad \operatorname{supp}\hat\Psi=\Big[-\frac{8\pi}{3},-\frac{2\pi}{3}\Big]\cup\Big[\frac{2\pi}{3},\frac{8\pi}{3}\Big].$$

From [26], we see that the functions
$$\Psi_{j,k}(t):=2^{j/2}\,\Psi(2^jt-k),\qquad j,k\in\mathbb{Z},$$
constitute an orthonormal basis of $L^2(\mathbb{R})$. This basis was constructed by Y. Meyer (1985) and is an example of a so-called wavelet basis. It is easy to see that
$$\hat\Psi_{j,k}(\omega)=2^{-j/2}\,e^{-ik2^{-j}\omega}\,\hat\Psi(2^{-j}\omega).$$

So
$$\operatorname{supp}\hat\Psi_{j,k}=\Big[-\frac{8\pi}{3}2^j,-\frac{2\pi}{3}2^j\Big]\cup\Big[\frac{2\pi}{3}2^j,\frac{8\pi}{3}2^j\Big],\qquad k\in\mathbb{Z}.\tag{3.3}$$

The multiresolution analysis (MRA) $\{V_j\}_{j\in\mathbb{Z}}$ of the Meyer wavelet is generated by
$$\Phi_{j,k}(t)=2^{j/2}\,\Phi(2^jt-k),\qquad V_j=\overline{\operatorname{span}}\{\Phi_{j,k}:k\in\mathbb{Z}\},\qquad j,k\in\mathbb{Z},$$
and
$$\operatorname{supp}\hat\Phi_{j,k}=\Big[-\frac{4\pi}{3}2^j,\frac{4\pi}{3}2^j\Big],\qquad k\in\mathbb{Z};\tag{3.4}$$

the orthogonal projection of a function $f$ onto the space $V_J$ is given by
$$P_Jf:=\sum_{k\in\mathbb{Z}}\langle f,\Phi_{J,k}\rangle\,\Phi_{J,k},\qquad f\in L^2(\mathbb{R}),\tag{3.5}$$
where $\langle\cdot,\cdot\rangle$ denotes the $L^2$-inner product, while
$$Q_Jf:=\sum_{k\in\mathbb{Z}}\langle f,\Psi_{J,k}\rangle\,\Psi_{J,k},\qquad f\in L^2(\mathbb{R}),\tag{3.6}$$

denotes the orthogonal projection onto the wavelet space $W_J$, where $V_{J+1}=V_J\oplus W_J$. We see that the projection $P_J$ can be considered as a low-pass filter: frequencies higher than $\frac{4}{3}\pi2^J$ are filtered away. Indeed, it is easy to see from (3.4) that
$$\widehat{P_Jf}(\omega)=0\quad\text{for }|\omega|\ge\frac{4}{3}\pi2^J.\tag{3.7}$$
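Property (3.7) is what the method exploits computationally: on $V_J$ the data are band-limited to $|\omega|<\frac{4}{3}\pi2^J$. A crude frequency-cutoff stand-in for $P_J$ (our own sketch; it is not the smoother Meyer projection, which the paper later realizes through the discrete Meyer transform) can be written as follows.

```python
import numpy as np

def lowpass_vj(f_samples, dy, J):
    """Zero out all frequencies with |omega| >= (4/3)*pi*2**J, cf. (3.7).

    A sharp cutoff that only mimics the Meyer projection P_J; it is an
    illustrative stand-in, not the DMT used in Section 5.
    """
    n = f_samples.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dy)
    f_hat = np.fft.fft(f_samples)
    f_hat[np.abs(omega) >= (4.0 / 3.0) * np.pi * 2 ** J] = 0.0
    return np.real(np.fft.ifft(f_hat))
```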

Moreover, for $j>J$ it follows from (3.3) that
$$\widehat{Q_jf}(\omega)=0\quad\text{for }|\omega|\le\frac{2}{3}\pi2^{j}.$$

$$\le 2e^{(1-x)k}\,e^{(x-1)2^J}\,2^{-J(s-r)}M.\tag{4.9}$$
Meanwhile, because $\cosh\big(x\sqrt{|\omega|^2-k^2}\big)=\cos\big(x\sqrt{k^2-|\omega|^2}\big)$ for $|\omega|<k$ and due to (3.9), Lemma 3.2, and noting that $Q_J\varphi\in W_J\subset V_{J+1}$, the integral $I_2$ in (4.8) satisfies
$$I_2=\bigg(\int_{|\omega|<\frac{4}{3}\pi2^J}\Big|\cosh\big(x\sqrt{|\omega|^2-k^2}\big)\,\widehat{Q_J\varphi}(\omega)\Big|^2(1+|\omega|^2)^r\,d\omega\bigg)^{1/2},$$
where the contribution over $|\omega|<k$ equals $\Big(\int_{|\omega|<k}\big|\cos\big(x\sqrt{k^2-|\omega|^2}\big)\widehat{Q_J\varphi}(\omega)\big|^2(1+|\omega|^2)^r\,d\omega\Big)^{1/2}$, while on $|\omega|\ge\frac{2}{3}\pi2^J>k$,
$$\bigg(\int_{|\omega|\ge\frac{2}{3}\pi2^J>k}\bigg|\frac{\hat h(\omega)}{\cosh\big(\sqrt{|\omega|^2-k^2}\big)}\bigg|^2(1+|\omega|^2)^r\,d\omega\bigg)^{1/2}\le 2\bigg(\int_{|\omega|\ge\frac{2}{3}\pi2^J>k}\frac{e^{-2\sqrt{|\omega|^2-k^2}}}{(1+|\omega|^2)^{s-r}}\,\big|\hat h(\omega)\big|^2(1+|\omega|^2)^s\,d\omega\bigg)^{1/2}$$
$$\le 2\sup_{|\omega|\ge\frac{2}{3}\pi2^J>k}\frac{e^{-\sqrt{|\omega|^2-k^2}}}{(1+|\omega|^2)^{\frac{s-r}{2}}}\,\|h\|_{H^s}\le 2\sup_{|\omega|\ge\frac{2}{3}\pi2^J>k}e^{-(|\omega|-k)}\,|\omega|^{-(s-r)}\,\|h\|_{H^s}\le 2e^{k-2^J}\,2^{-J(s-r)}M.$$

Therefore,
$$I_2\le\big(C\exp(2^Jx)+2\big)\,2e^{k-2^J}2^{-J(s-r)}M=2Ce^{k-(1-x)2^J}2^{-J(s-r)}M+4e^{k-2^J}2^{-J(s-r)}M.\tag{4.11}$$
Together with (4.9) we get
$$\|F_{x,J}\varphi-F_{x,J}\varphi_m\|_{H^r}\le 2\big(C+e^{-kx}\big)e^{k-(1-x)2^J}2^{-J(s-r)}M+4e^{k-2^J}2^{-J(s-r)}M.\tag{4.12}$$
Combining (4.12) with (4.7), we obtain
$$\|F_x\varphi-F_{x,J}\varphi_m\|_{H^r}\le C\exp(2^Jx)\delta+2\big(Ce^k+e^{k(1-x)}\big)e^{-(1-x)2^J}2^{-J(s-r)}M+4e^{k-2^J}2^{-J(s-r)}M+\delta,\tag{4.13}$$

where $C$ is the same constant appearing in Theorem 3.1. According to (4.13), an appropriate choice of $J$ yields a stable estimate of logarithmic type. To this end we choose $J^*$ as in (4.3). It is easy to see that
$$\log_2\Big(\sqrt{2}\,\ln\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)\Big)=\frac{1}{2}+\log_2\Big(\ln\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)\Big),$$
so we get
$$\exp\big(2^{J^*-1/2}x\big)\,\delta\le\exp\Big(x\ln\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)\Big)\,\delta=\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)^{x}\delta=\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}\delta^{1-x}M^x,$$
$$\exp\big((x-1)2^{J^*}\big)\,M\le\exp\Big((x-1)\ln\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)\Big)\,M=\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)^{x-1}M=\Big(\ln\frac{M}{\delta}\Big)^{(s-r)(1-x)}\delta^{1-x}M^{x},$$
$$2^{-J^*(s-r)}\le\Big(\ln\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)\Big)^{-(s-r)}=\Bigg(\frac{1}{\ln\frac{M}{\delta}+\ln\big(\ln\frac{M}{\delta}\big)^{-(s-r)}}\Bigg)^{s-r},$$
and
$$e^{-2^{J^*}}\le\exp\Big(-\ln\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)\Big)=\Big(\frac{M}{\delta}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\Big)^{-1}=\frac{\delta}{M}\Big(\ln\frac{M}{\delta}\Big)^{s-r}.$$

Combining these with (4.13), we get the final estimate
$$\begin{aligned}\|F_x\varphi-F_{x,J^*}\varphi_m\|_{H^r}&\le C\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}M^x\delta^{1-x}+2\big(Ce^k+e^{(1-x)k}\big)\Bigg(\frac{\ln\frac{M}{\delta}}{\ln\frac{M}{\delta}+\ln\big(\ln\frac{M}{\delta}\big)^{-(s-r)}}\Bigg)^{s-r}M^x\delta^{1-x}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}\\&\quad+4e^k\delta\Bigg(\frac{\ln\frac{M}{\delta}}{\ln\frac{M}{\delta}+\ln\big(\ln\frac{M}{\delta}\big)^{-(s-r)}}\Bigg)^{s-r}+\delta\\&=\Big(C+2\big(Ce^k+e^{(1-x)k}\big)\Big)M^x\delta^{1-x}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}\big(1+o(1)\big)\quad\text{for }\delta\to0\\&=\alpha(C,k,x)\,M^x\delta^{1-x}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}\big(1+o(1)\big)\quad\text{for }\delta\to0.\end{aligned}\tag{4.14}$$

Estimate (4.5) can be verified by a similar process. Owing to length limitations we omit the details, so
$$\begin{aligned}\|G_x\psi-G_{x,J^*}\psi_m\|_{H^r}&\le C\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}M^x\delta^{1-x}+\big((2C+1)e^{(1-x)k}+2e^k\big)\Bigg(\frac{\ln\frac{M}{\delta}}{\ln\frac{M}{\delta}+\ln\big(\ln\frac{M}{\delta}\big)^{-(s-r)}}\Bigg)^{s-r}M^x\delta^{1-x}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}\\&\quad+2e^k\delta\Bigg(\frac{\ln\frac{M}{\delta}}{\ln\frac{M}{\delta}+\ln\big(\ln\frac{M}{\delta}\big)^{-(s-r)}}\Bigg)^{s-r}+\delta\\&=\Big(C+(2C+1)e^{(1-x)k}+2e^k\Big)M^x\delta^{1-x}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}\big(1+o(1)\big)\quad\text{for }\delta\to0\\&=\beta(C,k,x)\,M^x\delta^{1-x}\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)x}\big(1+o(1)\big)\quad\text{for }\delta\to0.\end{aligned}\tag{4.15}$$

Theorem 4.1 suggests how to define a wavelet-regularized approximation of the disturbed Cauchy problem for the Helmholtz equation.
Corollary 4.2. For the case $k=0$, problem (2.7) is the Cauchy problem for the Laplace equation, which was analyzed by the authors of [28]. If we take $k=0$, then inequality (4.14) leads to the following inequality:

 M −(s−r )x   ∥Fx ϕ − Fx,J ∗ ϕm ∥H r ≤ (3C + 2)M x δ 1−x ln 1 + o(1) for δ → 0. δ


By taking $s=r=0$, we obtain
$$\|F_x\varphi-F_{x,J^*}\varphi_m\|_{L^2(\mathbb{R})}\le(3C+2)M^x\delta^{1-x}\big(1+o(1)\big)=C_1M^x\delta^{1-x}\big(1+o(1)\big)\quad\text{for }\delta\to0,$$
which is the final result in [28].
Remark 4.3. When $s=r=0$ in (4.4), we can obtain an $L^2$-estimate of Hölder type:

$$\|F_x\varphi-F_{x,J^*}\varphi_m\|_{L^2(\mathbb{R})}\le\Big(C+2\big(Ce^k+e^{(1-x)k}\big)\Big)M^x\delta^{1-x}\big(1+o(1)\big)=:\alpha(C,k,x)M^x\delta^{1-x}\big(1+o(1)\big)\quad\text{for }\delta\to0,\tag{4.16}$$
and
$$\|G_x\psi-G_{x,J^*}\psi_m\|_{L^2(\mathbb{R})}\le\Big(C+(2C+1)e^{(1-x)k}+2e^k\Big)M^x\delta^{1-x}\big(1+o(1)\big)=:\beta(C,k,x)M^x\delta^{1-x}\big(1+o(1)\big)\quad\text{for }\delta\to0.\tag{4.17}$$

But estimates (4.16) and (4.17) do not provide convergence of the approximate solutions $F_{x,J}\varphi$ and $G_{x,J}\psi$ at the boundary $x=1$; at $x=1$ they merely imply that the errors are bounded by $\alpha(C,k,1)M$ and $\beta(C,k,1)M$, respectively. These results cannot be improved in the $L^2$-sense. However, for $s-r>0$, using the Sobolev space $H^r(\mathbb{R})$, estimates (4.14) and (4.15) show that for $x\in[0,1)$ the regularized solution converges faster than in (4.16) and (4.17), and they still give convergence at $x=1$. Especially, at $x=1$ estimates (4.14) and (4.15) become
$$\|F_1\varphi-F_{1,J^*}\varphi_m\|_{H^r}=\|h-F_{1,J^*}\varphi_m\|_{H^r}\le\alpha(C,k,1)\,M\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\big(1+o(1)\big)\quad\text{for }\delta\to0,$$
and
$$\|G_1\psi-G_{1,J^*}\psi_m\|_{H^r}=\|g-G_{1,J^*}\psi_m\|_{H^r}\le\beta(C,k,1)\,M\Big(\ln\frac{M}{\delta}\Big)^{-(s-r)}\big(1+o(1)\big)\quad\text{for }\delta\to0.$$
Remark 4.4. The condition $s-r>0$ is not harsh. By the Sobolev imbedding theorem, the required smoothness of $h(\cdot):=u_1(1,\cdot)$ and $g(\cdot):=u_2(1,\cdot)$ is only slightly raised. The larger $s$ is, the more restrictive assumption (4.2) becomes. For example, taking $r=0$ and $s=\frac12$, $h(\cdot)$ and $g(\cdot)$ would be in $C^0(\mathbb{R})$.
Remark 4.5. In general, the a priori bound $M$ is not known exactly in practice. In this case, if we take
$$J^{**}:=\log_2\Big(\sqrt{2}\,\ln\Big(\frac{1}{\delta}\Big(\ln\frac{1}{\delta}\Big)^{-(s-r)}\Big)\Big),$$
then the following estimates hold:
$$\|F_x\varphi-F_{x,J^{**}}\varphi_m\|_{L^2(\mathbb{R})}\le\Big(C+2\big(Ce^k+e^{(1-x)k}\big)\Big)\delta^{1-x}\big(1+o(1)\big)=:\alpha(C,k,x)\delta^{1-x}\big(1+o(1)\big)\quad\text{for }\delta\to0,$$
and
$$\|G_x\psi-G_{x,J^{**}}\psi_m\|_{L^2(\mathbb{R})}\le\Big(C+(2C+1)e^{(1-x)k}+2e^k\Big)\delta^{1-x}\big(1+o(1)\big)=:\beta(C,k,x)\delta^{1-x}\big(1+o(1)\big)\quad\text{for }\delta\to0,$$
where $M$ is only assumed to be a bounded positive constant and need not be known exactly. This choice is helpful in concrete computation.
In Theorem 4.1 we merely chose a proper regularization parameter and obtained some sharp stability estimates between the exact and approximate solutions, without giving a procedure for obtaining the regularization parameter. To this end, we now apply another way of finding the regularization parameter $J$ and obtain stability estimates of Hölder and logarithmic type; for choosing a proper regularization parameter $J$ we use the following lemma, which appeared in [29]. In fact, this lemma answers the question of how to choose the regularization parameter $J^*$ such that the wavelet regularization solution $u^\delta_{J^*}$ is order optimal on the subspace $V_{J^*}$.


Lemma 4.6. Let the function $f:[0,a]\longrightarrow\mathbb{R}$ be given by
$$f(\lambda)=\lambda^b\Big(d\ln\frac{1}{\lambda}\Big)^{-c}\tag{4.18}$$
with a constant $c\in\mathbb{R}$ and positive constants $a<1$, $b$ and $d$. Then for the inverse function $f^{-1}(\lambda)$ one has
$$f^{-1}(\lambda)=\lambda^{\frac{1}{b}}\Big(\frac{d}{b}\ln\frac{1}{\lambda}\Big)^{\frac{c}{b}}\big(1+o(1)\big)\quad\text{for }\lambda\to0.\tag{4.19}$$
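Lemma 4.6 is easy to probe numerically. The following check is our own sketch (with arbitrary illustrative values of $b$, $c$, $d$): for small $\lambda$ it applies the leading term of (4.19) to $\mu=f(\lambda)$ and compares with $\lambda$; the ratio approaches $1$ only logarithmically slowly, as expected from the $(1+o(1))$ factor.

```python
import numpy as np

b, c, d = 1.0, 2.0, 3.0                                    # arbitrary positive b, d and real c
f = lambda lam: lam ** b * (d * np.log(1.0 / lam)) ** (-c)                       # (4.18)
f_inv = lambda mu: mu ** (1.0 / b) * ((d / b) * np.log(1.0 / mu)) ** (c / b)     # leading term of (4.19)

for lam in [1e-3, 1e-6, 1e-9, 1e-12]:
    mu = f(lam)
    print(f"lambda={lam:.0e}  recovered={f_inv(mu):.3e}  ratio={f_inv(mu)/lam:.3f}")
```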

Based on this lemma, we can choose the regularization parameter $J$ by minimizing the right-hand side of (4.13). Set $e^{-2^J}:=\lambda\in(0,1)$ and $E:=\eta(C,k,x)M$, where $\eta(C,k,x):=2(Ce^k+e^{k(1-x)})$. Let
$$C\lambda^{-x}\delta=\lambda^{1-x}\Big(\ln\frac{1}{\lambda}\Big)^{-(s-r)}E,\tag{4.20}$$
that is,
$$\frac{C\delta}{E}=\lambda\Big(\ln\frac{1}{\lambda}\Big)^{-(s-r)}.\tag{4.21}$$
Then by Lemma 4.6 we obtain
$$\lambda=\frac{C\delta}{E}\Big(\ln\frac{E}{C\delta}\Big)^{s-r}\big(1+o(1)\big)\quad\text{for }\frac{C\delta}{E}\to0.\tag{4.22}$$
Taking the principal part of $\lambda$ given by (4.22), and since $e^{-2^J}=\lambda$, we have
$$J=\log_2\Big(\ln\Big(\frac{E}{C\delta}\Big(\ln\frac{E}{C\delta}\Big)^{-(s-r)}\Big)\Big).\tag{4.23}$$
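In practice (4.23) can be evaluated directly. A minimal sketch (with illustrative values of $C$, $k$, $x$, $M$ and $s-r$ that are our own choices) is:

```python
import numpy as np

def reg_parameter(C, E, delta, s_minus_r):
    """Regularization level J from (4.23) and the integer level [J], cf. (4.25)."""
    t = np.log(E / (C * delta) * np.log(E / (C * delta)) ** (-s_minus_r))
    J = np.log2(t)
    return J, int(np.floor(J))

C, k, x, M = 1.0, 15.0, 0.5, 1.0
eta = 2.0 * (C * np.exp(k) + np.exp(k * (1.0 - x)))       # eta(C, k, x) as defined above
E = eta * M
for delta in [1e-1, 1e-2, 1e-3]:
    print(delta, reg_parameter(C, E, delta, s_minus_r=1.0))
```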

By similar computations we obtain the regularization parameter for problem (2.8):
$$\tilde J=\log_2\Big(\ln\Big(\frac{\tilde E}{C\delta}\Big(\ln\frac{\tilde E}{C\delta}\Big)^{-(s-r)}\Big)\Big),\tag{4.24}$$
where $\tilde E=\theta(C,k,x)M$ and $\theta(C,k,x)=(2C+1)e^{k(1-x)}+2e^k$.

Now, summarizing the above inference, we obtain the main result of the present paper.
Theorem 4.7. For $s\ge r$, suppose that conditions (4.1) and (4.2) hold, and take
$$J^*=[J]\quad\text{and}\quad\tilde J^*=[\tilde J],\tag{4.25}$$
where $J$ and $\tilde J$ are defined in (4.23) and (4.24), respectively, and $[a]$ denotes the largest integer less than or equal to $a\in\mathbb{R}$. Then the following stability estimates hold:

−(s−r )x  4ek C δ E ∥Fx ϕ − Fx,J ∗ ϕm ∥H r ≤ 2E x (C δ)1−x ln + Cδ η(C , k, x)  −(s−r )x E = 2E x (C δ)1−x ln (1 + o(1)) for δ → 0, Cδ

(4.26)

and

 ˜ −(s−r )x E 2ek C δ ∥Gx ψ − Gx,J˜∗ ψm ∥H r ≤ 2E˜ x (C δ)1−x ln + Cδ θ (C , k, x)  ˜ −(s−r )x E = 2E˜ x (C δ)1−x ln (1 + o(1)) for δ → 0. Cδ Remark 4.8. Taking s = r = 0, in (4.28) we can obtain an L2 -estimate

∥Fx ϕ − Fx,J ∗ ϕm ∥L2 (R) ≤ 2E x (C δ)1−x ,

(4.27)

88

M. Karimi, A. Rezaee / Journal of Computational and Applied Mathematics 320 (2017) 76–95

and

∥Gx ψ − Gx,J˜∗ ψm ∥L2 (R) ≤ 2E˜ x (C δ)1−x , we know our result is at least close to optimal or ‘‘order optimal’’ [30]. Therefore, we cannot expect to find a numerical method for approximating solution of (2.1) that satisfies a better estimate in L2 -sense, except for more delicate choice of coefficient C . This suggests that wavelets must be useful for solving the considered ill-posed problem. Remark 4.9. In general, the a priori bound E , E˜ and coefficient C are unknown exactly in practice. In this case, if we take



J ∗ = J˜∗ = log2 ln

  1 δ

ln

1 −(s−r )

δ



,

(4.28)

it holds that

 −(s−r )x 1 4ek δ ∥Fx ϕ − Fx,J ∗ ϕm ∥H r ≤ 2δ 1−x ln + δ η(1, k, x)  −(s−r )x 1 = 2δ 1−x ln (1 + o(1)) for δ → 0, δ

(4.29)

and

−(s−r )x  2ek δ 1 + ∥Gx ψ − Gx,J ∗ ψm ∥H r ≤ 2δ 1−x ln δ θ (1, k, x)  −(s−r )x 1 = 2δ 1−x ln (1 + o(1)) for δ → 0. δ

(4.30)

If we take vJδ := Fx,J ϕm and w˜δ := Gx,J˜ ϕm , then there hold J

−(s−r )x  4ek δ 1 + ∥v(x, ·) − vJδ∗ (x, ·)∥H r ≤ 2δ 1−x ln δ η(1, k, x) −(s−r )x  1 (1 + o(1)) for δ → 0, = 2δ 1−x ln δ

(4.31)

and

 −(s−r )x 1 2ek δ ∥w(x, ·) − wJ˜δ∗ (x, ·)∥H r ≤ 2δ 1−x ln + δ θ (1, k, x)  −(s−r )x 1 = 2δ 1−x ln (1 + o(1)) for δ → 0. δ

(4.32)

5. Numerical aspect

5.0.1. Numerical implementation
In this section we discuss some numerical aspects of the proposed method. We note that the arguments apply in higher dimensions as well, so we consider the case $n=2$. Suppose that the sequences $\{\varphi(y_{1,i},y_{2,j})\}_{i,j=1}^N$ and $\{\psi(y_{1,i},y_{2,j})\}_{i,j=1}^N$ represent samples of the functions $\varphi(y_1,y_2)$ and $\psi(y_1,y_2)$ on an equidistant grid in the square $[a,b]^2$, with $N$ even. We add a random perturbation to each data value and obtain the perturbed data

ϕm := ϕ + ϵ · RandomReal[NormalDistribution[·], Length[ϕ], Length[ϕ]],

(5.1)

ψm := ψ + ϵ · RandomReal[NormalDistribution[·], Length[ψ], Length[ψ]].

(5.2)

Then the total noise $\delta$ can be measured in the sense of the root mean square error according to
$$\delta:=\|\varphi_m-\varphi\|_{l^2}=\bigg(\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\big(\varphi_m(y_{1,i},y_{2,j})-\varphi(y_{1,i},y_{2,j})\big)^2\bigg)^{1/2},\tag{5.3}$$


Fig. 2. The case of n = 1 for Example 5.1 at x = 0.5: (a) Problem (5.9) (b) Problem (5.10) (c) Problem (5.8).

and
$$\delta:=\|\psi_m-\psi\|_{l^2}=\bigg(\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\big(\psi_m(y_{1,i},y_{2,j})-\psi(y_{1,i},y_{2,j})\big)^2\bigg)^{1/2},\tag{5.4}$$

where ''RandomReal[NormalDistribution[·], ·, ·]'' generates normally distributed random numbers with zero mean and unit standard deviation, and $\epsilon$ dictates the level of noise; ''RandomReal[NormalDistribution[·], Length[ϕ], Length[ϕ]]'' returns an array of random entries of the same size as $\varphi$ and $\psi$. For the noisy data $\varphi_m(y_1,y_2)$ and $\psi_m(y_1,y_2)$, we set

$$v^\delta_{J^*}:=F_{x,J^*}\varphi_m=F_xP_{J^*}\varphi_m,\tag{5.5}$$
and
$$w^\delta_{J^*}:=G_{x,J^*}\psi_m=G_xP_{J^*}\psi_m.\tag{5.6}$$
Hence, using these together with the level $J^*$ defined by
$$J^*:=\bigg[\log_2\Big(\sqrt{2}\,\ln\Big(\frac{1}{\delta}\Big(\ln\frac{1}{\delta}\Big)^{-(s-r)}\Big)\Big)\bigg],\tag{5.7}$$

we can obtain the approximate solution. We use DMT as a short form of the ''discrete Meyer (wavelet) transform''. These algorithms are based on the fast Fourier transform (FFT), and computing the DMT of a vector in $\mathbb{R}^N$ requires $O(N\log_2(2N))$ operations.

5.1. One-dimensional cases
In this section some numerical tests are presented to demonstrate the usefulness of the approach. The tests were performed using Mathematica software. Throughout this section, we set $\epsilon=10^{-3},10^{-2},10^{-1}$.
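A compact end-to-end version of this procedure for $n=1$ might look as follows. It is only a sketch under our own simplifications (and is not the authors' Mathematica code): the data are generated on a uniform grid, noise is added as in (5.1), $\delta$ is measured as in (5.3), $J^*$ is taken from (5.7), the discrete Meyer transform is replaced by the sharp frequency cutoff of (3.7), and the data are propagated with the multipliers of (2.5).

```python
import numpy as np

def forward_multipliers(omega, x, k):
    """cosh(x*sqrt(w^2-k^2)) and sinh(x*sqrt(w^2-k^2))/sqrt(w^2-k^2), cf. (2.5)."""
    mu = np.sqrt((omega ** 2 - k ** 2).astype(complex))
    tiny = np.abs(mu) < 1e-12
    safe = np.where(tiny, 1.0, mu)
    return np.cosh(x * mu), np.where(tiny, x, np.sinh(x * safe) / safe)

def regularized_solution(phi_m, psi_m, dy, x, k, J):
    """Cut the noisy data off at |omega| < (4/3)*pi*2**J (stand-in for P_J) and propagate."""
    omega = 2.0 * np.pi * np.fft.fftfreq(phi_m.size, d=dy)
    keep = np.abs(omega) < (4.0 / 3.0) * np.pi * 2 ** J
    cosh_t, sinh_t = forward_multipliers(omega, x, k)
    u_hat = (np.fft.fft(phi_m) * cosh_t + np.fft.fft(psi_m) * sinh_t) * keep
    return np.real(np.fft.ifft(u_hat))

# Example 5.1-type data; the parameter values below are our own choices
k, x, eps, s_minus_r = 15.0, 0.5, 1e-2, 1.0
alpha = k ** 2 + 1.0
y = np.linspace(-10.0, 10.0, 2048, endpoint=False)
dy = y[1] - y[0]
phi = np.exp(-np.abs(y))
psi = np.sqrt(alpha) * phi
rng = np.random.default_rng(0)
phi_m = phi + eps * rng.standard_normal(y.size)          # cf. (5.1)
psi_m = psi + eps * rng.standard_normal(y.size)          # cf. (5.2)
delta = np.sqrt(np.mean((phi_m - phi) ** 2))             # cf. (5.3)
t = np.log((1.0 / delta) * np.log(1.0 / delta) ** (-s_minus_r))
J_star = int(np.floor(np.log2(np.sqrt(2.0) * t)))        # cf. (5.7)
u_exact = phi * (np.cos(np.sqrt(alpha) * x) + np.sin(np.sqrt(alpha) * x))
u_reg = regularized_solution(phi_m, psi_m, dy, x, k, J_star)
print("J* =", J_star, " max error =", float(np.max(np.abs(u_reg - u_exact))))
```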


Fig. 3. The case n = 1 for problems (5.9), (5.10) and (5.8) at x = 0.8 and k = 300, corresponding to ϵ = 0.001, 0.01, 0.1, respectively: (a) J ∗ = 2, i.e., s − r = 6 (b) J ∗ = 3, i.e., s − r = 9 (c) J ∗ = 4, i.e., s − r = 13 (d) J ∗ = 6, i.e., s − r = 40.

Example 5.1. The function $u(x,y)=e^{-|y|}\big(\cos(\sqrt{\alpha}\,x)+\sin(\sqrt{\alpha}\,x)\big)$, $\alpha=k^2+1$, is the exact solution of problem (2.1) with weakly singular data; that is, $u(x,y)$ satisfies
$$\begin{cases}\nabla^2u(x,y)+k^2u(x,y)=0,& 0<x<1,\ y\in\mathbb{R},\\ u(0,y)=e^{-|y|},& y\in\mathbb{R},\\ \partial_xu(0,y)=\sqrt{\alpha}\,e^{-|y|},& y\in\mathbb{R}.\end{cases}\tag{5.8}$$
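This closed form is easy to verify directly. The short check below is our own sketch: it evaluates the discrete Helmholtz residual $\nabla^2u+k^2u$ with central differences at a point away from the line $y=0$, where the data are only weakly singular.

```python
import numpy as np

k = 15.0
alpha = k ** 2 + 1.0
u = lambda x, y: np.exp(-np.abs(y)) * (np.cos(np.sqrt(alpha) * x) + np.sin(np.sqrt(alpha) * x))

h = 1e-4
x0, y0 = 0.4, 1.3                       # any point with y0 != 0
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h) - 4 * u(x0, y0)) / h ** 2
print("Helmholtz residual:", lap + k ** 2 * u(x0, y0))      # ~ 0 up to O(h^2)
```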

According to the analysis in Section 2, we only need to solve the following two problems:

$$\begin{cases}\nabla^2u_1(x,y)+k^2u_1(x,y)=0,& 0<x<1,\ y\in\mathbb{R},\\ u_1(0,y)=e^{-|y|},& y\in\mathbb{R},\\ \partial_xu_1(0,y)=0,& y\in\mathbb{R},\end{cases}\tag{5.9}$$
and
$$\begin{cases}\nabla^2u_2(x,y)+k^2u_2(x,y)=0,& 0<x<1,\ y\in\mathbb{R},\\ u_2(0,y)=0,& y\in\mathbb{R},\\ \partial_xu_2(0,y)=\sqrt{\alpha}\,e^{-|y|},& y\in\mathbb{R}.\end{cases}\tag{5.10}$$

We fix the domain $\{(x,y)\,|\,0<x\le1,\ |y|<10\}$. To observe the effect of different noise levels $\epsilon$, we consider the case $k=15$ at $x=0.5$. Fig. 2 illustrates the comparison between the exact and regularized solutions with three different levels of noise added to both the Dirichlet and Neumann data (see problem (5.8)), with $s-r=0.4,0.4,0.3$ for problems (5.9), (5.10) and (5.8), respectively. We choose the regularization parameters $J^*=3,2,1$ corresponding to noise levels $\epsilon=10^{-3},10^{-2},10^{-1}$, respectively, which shows that the smaller the noise level, the better the approximation. From Fig. 2 it can be seen that as the magnitude of the noise decreases, the numerical solutions converge to the corresponding exact solution.

Example 5.2. Take $n=1$, $k=150$ and the functions $\varphi(y)=e^{\frac{\pi^2}{y^2-\pi^2}}$ and $\psi(y)=0$ on $y\in(-\pi,\pi)$ as exact data. Similarly, Fig. 3 displays the comparison between the exact solution and the solution reconstructed from the noisy data $\varphi_m$. It can be observed that, among several values of the regularization parameter, the regularized solution in the space $V_3$ is more accurate than the other regularized solutions.


Fig. 4. The case n = 1 for problems (5.9), (5.10) and (5.8) at x = 0.8 and k = 300, corresponding to ϵ = 0.001, 0.01, 0.1, respectively: (a) J ∗ = 2, i.e., s − r = 6 (b) J ∗ = 3, i.e., s − r = 9 (c) J ∗ = 4, i.e., s − r = 13 (d) J ∗ = 6, i.e., s − r = 40.

Example 5.3 (See [31]). It is easy to see that the function
$$u(x,y)=\frac{1}{\pi^2}\sin(\pi ky)\cosh\big(\sqrt{\pi^2-1}\,kx\big)$$
is the exact solution of problem (2.7) with $n=1$ and the boundary conditions $\varphi(y)=\frac{1}{\pi^2}\sin(\pi ky)$, $\psi(y)=0$. Although $u(x,\cdot)$ and $\varphi(\cdot)$ do not belong to $L^2(\mathbb{R})$, the comparison of the exact solution with the wavelet regularization solution for $k=300$ given in Fig. 4 shows that the computational effect for $|y|<10$ is still rather satisfactory. In Fig. 4, we choose the regularization parameters $J^*=2,3,4,6$, corresponding to $\epsilon=10^{-3},10^{-2},10^{-1}$, respectively.

5.2. Two-dimensional cases
In $n$ dimensions, the function $u(x,y)=e^{-|y|}\big(\cos(\sqrt{\alpha}\,x)+\sin(\sqrt{\alpha}\,x)\big)$, $\alpha=k^2+n$, $y=(y_1,y_2,\ldots,y_n)$, is the exact solution of problem (2.1) with weakly singular data; that is, $u(x,y)$ satisfies
$$\begin{cases}\nabla^2u(x,y)+k^2u(x,y)=0,& 0<x<1,\ y\in\mathbb{R}^n,\\ u(0,y)=e^{-|y|},& y\in\mathbb{R}^n,\\ \partial_xu(0,y)=\sqrt{\alpha}\,e^{-|y|},& y\in\mathbb{R}^n.\end{cases}\tag{5.11}$$

According to the analysis in Section 2, we only need to solve the following two problems:

$$\begin{cases}\nabla^2u_1(x,y)+k^2u_1(x,y)=0,& 0<x<1,\ y\in\mathbb{R}^n,\\ u_1(0,y)=e^{-|y|},& y\in\mathbb{R}^n,\\ \partial_xu_1(0,y)=0,& y\in\mathbb{R}^n,\end{cases}\tag{5.12}$$
and
$$\begin{cases}\nabla^2u_2(x,y)+k^2u_2(x,y)=0,& 0<x<1,\ y\in\mathbb{R}^n,\\ u_2(0,y)=0,& y\in\mathbb{R}^n,\\ \partial_xu_2(0,y)=\sqrt{\alpha}\,e^{-|y|},& y\in\mathbb{R}^n.\end{cases}\tag{5.13}$$


Fig. 5. The case n = 2 for problem (5.8) at x = 0.8, k = 15 and noise level ϵ = 0.001: (a) Exact solution u(0.8, y1 , y2 ); (b) Regularized solution for J ∗ = 2 and s − r = 6; (c) Regularized solution for J ∗ = 3 and s − r = 9; (d) Regularized solution for J ∗ = 6 and s − r = 36.

If we take $n=2$ and $\alpha=k^2+2$, then
$$u(x,y)=e^{-|y|}\big(\cos(\sqrt{\alpha}\,x)+\sin(\sqrt{\alpha}\,x)\big)$$
is the analytical solution of (5.11). For $n=2$, $k=15$ and $x=0.8$, Fig. 5 illustrates the comparison between the exact solution and its regularized solution defined by the regularization parameters $J^*=2,3,6$ for a noise of variance $0.001$. We can see that the regularized solution in the space $V_6$ is poor; possibly the noise in the functions $\varphi_m$ and $\psi_m$ is not damped enough by $P_6$, and thus the high frequencies of $\hat\varphi_m$ and $\hat\psi_m$ are so strongly magnified that they destroy the approximate solution. The approximation parameter $J^*=2$ seems to be the optimal choice for this example.
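The filtering idea carries over to $n=2$ verbatim with a two-dimensional FFT and $|\omega|^2=\omega_1^2+\omega_2^2$. The following sketch is again our own (a sharp cutoff standing in for the Meyer projection, with arbitrary grid choices), not the authors' implementation.

```python
import numpy as np

def regularized_solution_2d(phi_m, psi_m, dy, x, k, J):
    """2-D analogue of the filter-and-propagate step; cf. (2.5) with |w|^2 = w1^2 + w2^2."""
    n1, n2 = phi_m.shape
    w1 = 2.0 * np.pi * np.fft.fftfreq(n1, d=dy)
    w2 = 2.0 * np.pi * np.fft.fftfreq(n2, d=dy)
    W1, W2 = np.meshgrid(w1, w2, indexing="ij")
    wabs2 = W1 ** 2 + W2 ** 2
    mu = np.sqrt((wabs2 - k ** 2).astype(complex))
    tiny = np.abs(mu) < 1e-12
    safe = np.where(tiny, 1.0, mu)
    sinh_term = np.where(tiny, x, np.sinh(x * safe) / safe)
    keep = np.sqrt(wabs2) < (4.0 / 3.0) * np.pi * 2 ** J          # low-pass cutoff
    u_hat = (np.fft.fft2(phi_m) * np.cosh(x * mu) + np.fft.fft2(psi_m) * sinh_term) * keep
    return np.real(np.fft.ifft2(u_hat))

# usage sketch: 256x256 grid on [-10,10)^2 with (5.11)-type data for n = 2
y = np.linspace(-10.0, 10.0, 256, endpoint=False)
dy = y[1] - y[0]
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
r = np.sqrt(Y1 ** 2 + Y2 ** 2)
k = 15.0
alpha = k ** 2 + 2.0
u_reg = regularized_solution_2d(np.exp(-r), np.sqrt(alpha) * np.exp(-r), dy, x=0.8, k=k, J=2)
print(u_reg.shape)
```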

Example 5.4. Take $n=2$, $\varphi(y)=e^{-|y|^2}$ and $\psi(y)=\beta e^{-|y|^2}\in\mathcal{S}(\mathbb{R}^2)$, where $\beta=\sqrt{4+k^2}$ and $y=(y_1,y_2)$. Since $\hat\varphi(\omega),\hat\psi(\omega)\in\mathcal{S}(\mathbb{R}^2)$, $\omega=(\omega_1,\omega_2)$, decay rapidly, we can calculate $u(x,y)$ from the exact data $\varphi$ and $\psi$ by using formula (2.6) for $n=2$, that is,
$$u(x,y)=\frac{1}{2\pi}\int_{\mathbb{R}^2}\hat\varphi(\omega_1,\omega_2)\cosh\Big(x\sqrt{\omega_1^2+\omega_2^2-k^2}\Big)e^{i\omega_1y_1+i\omega_2y_2}\,d\omega_1d\omega_2+\frac{1}{2\pi}\int_{\mathbb{R}^2}\hat\psi(\omega_1,\omega_2)\,\frac{\sinh\Big(x\sqrt{\omega_1^2+\omega_2^2-k^2}\Big)}{\sqrt{\omega_1^2+\omega_2^2-k^2}}\,e^{i\omega_1y_1+i\omega_2y_2}\,d\omega_1d\omega_2.$$


Fig. 6. The case n = 2 for problem (5.8) at x = 0.7, k = 100 and noise level ϵ = 0.001: (a) Exact solution u(0.7, y1 , y2 ); (b) Regularized solution for J ∗ = 4, s − r = 13; (c) Regularized solution for J ∗ = 3 and s − r = 6; (d) Regularized solution for J ∗ = 2 and s − r = 5.

In Fig. 6 we can see that if $J^*$ is taken too large, the noise in the functions $\varphi_m$ and $\psi_m$ is not damped enough by $P_{J^*}$, and thus the high frequencies of $\hat\varphi_m$ and $\hat\psi_m$ are so severely magnified that they destroy the approximate solution. The approximation parameter $J^*=2$ seems to be the optimal choice for this test.
Example 5.5. Take $n=2$, $\psi(y_1,y_2)=0$, and

$$\varphi(y_1,y_2):=\begin{cases}\exp\Big(\dfrac{\pi^2}{y_1^2-\pi^2}\Big)\exp\Big(\dfrac{\pi^2}{y_2^2-\pi^2}\Big),& |y_1|<\pi,\ |y_2|<\pi,\\[1mm] 0,&\text{otherwise}.\end{cases}\tag{5.14}$$
Since $\hat\varphi(\omega)\in\mathcal{S}(\mathbb{R}^2)$, $\omega=(\omega_1,\omega_2)$, decays rapidly, we can calculate $u(x,y_1,y_2)$ from the exact data $\varphi$ and $\psi$ by using formula (2.6) for $n=2$, that is,
$$u(x,y_1,y_2)=\frac{1}{2\pi}\int_{(-\pi,\pi)^2}\hat\varphi(\omega_1,\omega_2)\cosh\Big(x\sqrt{\omega_1^2+\omega_2^2-k^2}\Big)e^{i\omega_1y_1+i\omega_2y_2}\,d\omega_1d\omega_2.$$

In Fig. 7 the regularized solutions defined by the regularization parameter J ∗ = 2, 3, 5 for a noise of variance 0.01 are presented. It can be observed that for too large J ∗ the difference between the exact and regularized solution again increases. In fact, in space V2 the approximation is poor since the frequencies are cut off excessively by the projection P2 . Similarly, if J ∗ is taken to be too large, the noise in the function ϕm is not damped enough by PJ ∗ , and thus the high frequencies of ϕˆ m are so severely magnified that they destroy the approximated solution.


Fig. 7. The case n = 2 for problem (5.8) at x = 1, k = 25 and noise level ϵ = 0.01: (a) Exact solution u(1, y1 , y2 ); (b) Regularized solution for J ∗ = 2, s − r = 6;(c) Regularized solution for J ∗ = 3 and s − r = 8; (d) Regularized solution for J ∗ = 5 and s − r = 20.

We can see that the smaller the noise level, the sharper the approximate regularized solution. So, the approximation parameter $J^*=3$ seems to be the optimal choice for this example.

6. Conclusion

In this paper, we consider the Cauchy problem for the Helmholtz equation in a ''strip'' domain. For this severely ill-posed problem, we deduce the optimal error bound with only nonhomogeneous Neumann data; according to this optimal error, one can judge whether a given regularization method is order optimal. As the regularization strategy, we propose a regularization method based on the Meyer wavelet, and for the choice of the regularization parameter we use an a priori bound. Some sharp stability estimates between the exact solution and its approximation are also provided. In the numerical experiments we use DMT as a short form of the discrete Meyer (wavelet) transform. Although we consider a ''strip'' domain, the solution of the problem in Example 5.1 is almost zero for $|y|>10$: since $e^{-|y|}\big(\cos(\sqrt{\alpha}\,x)+\sin(\sqrt{\alpha}\,x)\big)<2e^{-10}\approx10^{-4}$ for $|y|>10$, it is reasonable to carry out the numerical experiments on the finite rectangular domain $\{(x,y)\,|\,0<x\le1,\ |y|<10\}$ instead of the ''strip'' domain. The numerical examples demonstrate that our method works effectively.

Acknowledgments

The authors would like to offer their cordial thanks to the referees for their help and suggestions; without these suggestions there would be no present form of this paper. The authors also greatly thank Dr. F. Bahrami of Isfahan University of Technology and Dr. A. Molabahrami of Ilam University for their valuable and helpful suggestions.


References [1] D.E. Beskos, Boundary element method in dynamic analysis: part II (1986–1996), ASME Appl. Mech. Rev. 50 (1997) 149–197. [2] J.T. Chen, F.C. Wong, Dual formulation of multiple reciprocity method for the acoustic mode of a cavity with a thin partition, J. Sound. Vib. 217 (1998) 75–95. [3] I. Harari, P.E. Barbone, M. Slavutin, R. Shalom, Boundary infinite elements for the Helmholtz equation in exterior domains, Internat. J. Numer. Methods Engrg. 41 (1998) 1105–1131. [4] J. Liang, S. Subramaniam, Computation of molecular electrostatics with boundary element methods, Biophys. J. 73 (1997) 1830–1841. [5] L. Marin, An alternating iterative MFS algorithm for the Cauchy problem for the modified Helmholtz equation, Comput. Mech. 45 (2010) 665–677. [6] T. Regińska, K. Regiński, Approximate solution of a Cauchy problem for the Helmholtz equation, Inverse Problems 22 (2006) 975–989. [7] D.N. Hào, D. Lesnic, The Cauchy problem for Laplace’s equation using the conjugate gradient method, IMA J. Appl. Math. 65 (2000) 199–217. [8] H.J. Reinhardt, H. Han, D.N. Hào, Stability and regularization of a discrete approximation to the Cauchy problem of Laplace’s equation, SIAM J. Numer. Anal. 36 (1999) 890–905. [9] J. Cheng, M. Yamamoto, Unique continuation on a line for harmonic functions, Inverse Probl. 14 (1998) 869–882. [10] Y.C. Hon, T. Wei, Backus–Gilbert algorithm for the Cauchy problem of Laplace equation, Inverse Probl. 17 (2001) 261–271. [11] X.T. Xiong, C.L. Fu, Two approximate methods of a Cauchy problem for the Helmholtz equation, Appl. Math. Model. 26 (2007) 285–307. [12] X.T. Xiong, Central difference regularization method for the Cauchy problem of Laplace’s equation, Appl. Math. Comput. 181 (2006) 675–684. [13] Z. Qian, C.L. Fu, X.T. Xiong, Fourth-order modified method for the Cauchy problem for the Laplace equation, J. Comput. Appl. Math. 192 (2006) 205–218. [14] V. Isakov, Inverse Problems for Partial Differential Equation, Springer, New York, 1998. [15] L. Marin, L. Elliott, P.J. Heggs, D.B. Ingham, D. Lesnic, X. Wen, An alternating iterative algorithm for the Cauchy problem associated the Helmholtz equations, Comput. Methods Appl. Mech. Engrg. 31 (2003) 709–722. [16] L. Marin, L. Elliott, P.J. Heggs, D.B. Ingham, D. Lesnic, X. Wen, Conjugate gradient-boundary element solution to the Cauchy problem for Helmholtz-type equations, Comput. Mech. 3 (2003) 367–377. [17] L. Marin, L. Elliott, P.J. Heggs, D.B. Ingham, D. Lesnic, X. Wen, BEM solution for the Cauchy problem associated with Helmholtz-type equations by the Landweber method, Eng. Anal. Bound. Elem. 28 (2004) 1025–1034. [18] L. Marin, L. Elliott, P.J. Heggs, D.B. Ingham, D. Lesnic, X. Wen, Comparison of regularization methods for solving the Cauchy problem associated with the Helmholtz equation, Internat. J. Numer. Methods Engrg. 60 (2004) 1933–1947. [19] L. Marin, A meshless method for the numerical solution of the Cauchy problem associated with three-dimentional Helmholtz-type equations, Appl. Math. Comput. 165 (2005) 355–374. [20] L. Marin, D. Lesnic, The method of fundamental solutions for the Cauchy problem associated with two-dimensional Helmholtz-type equations, Comput. Struct. 83 (2005) 267–278. [21] B.T. Jin, Y. Zheng, Boundary knot method for some inverse problems associated with the Helmholtz equation, J. Numer. Methods Engrg. 62 (2005) 1636–1651. [22] B.T. Jin, L. Marin, The plane wave method for inverse problems associated with Helmholtz-type equations, Eng. Anal. Bound. Elem. 32 (2008) 245–252. [23] P. Li, Z. 
Chen, J. Zhu, An operator marching method for inverse problems in range-dependent waveguides, Comput. Methods Appl. Mech. Engrg. 197 (2008) 4077–4091. [24] H.H. Qin, T. Wei, R. Shi, Modified Tikhonov regularization method for the Cauchy problem of the Helmholtz equation, J. Comput. Appl. Math. 224 (2009) 39–53. [25] T. Wei, H.H. Qin, R. Shi, Numerical solution of an inverse 2D Cauchy problem connected with the Helmholtz equation, Inverse Problems 24 (2008) 035003–18. [26] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, PA, 1992. [27] D.N. Hào, H.J. Reinhardt, A. Schneider, Regularization of a non-characteristic Cauchy problem for a parabolic equation, Inverse Problems 35 (1995) 1247–1263. [28] C.Y. Qiu, C.L. Fu, Z. Qian, Wavelets and regularization of the Cauchy problem for the Laplace equation, Math. Anal. Appl. 338 (2008) 1440–1447. [29] C.L. Fu, X.L. Feng, Z. Qian, The Fourier regularization for solving the Cauchy problem for the Helmholtz equation, Appl. Numer. Math. 59 (2009) 2625–2640. [30] U. Tautenhahen, Optimal stable approximations for the sideways heat equation, J. Inverse Ill-Posed Probl. 5 (1997) 287–307. [31] R. Shi, A modified method for the Cauchy problems of the Helmholtz-type equation (Master thesis), Lanzhou university, P.R. China, 2008.