IAENG International Journal of Applied Mathematics, 42:3, IJAM_42_3_07

An Application of Newton Type Iterative Method for Lavrentiev Regularization for Ill-Posed Equations: Finite Dimensional Realization

Santhosh George and Suresan Pareth

Abstract—In this paper we consider a finite dimensional realization of a Newton type iterative method for Lavrentiev regularization of ill-posed equations. Precisely, we consider the ill-posed equation F(x) = f when the available data is f^δ with ‖f − f^δ‖ ≤ δ and the operator F : D(F) ⊆ X → X is a nonlinear monotone operator defined on a real Hilbert space X. The error estimate obtained under a general source condition on x_0 − x̂ (where x_0 is the initial guess and x̂ is the solution of F(x) = f) is of optimal order. The regularization parameter α is chosen according to the adaptive method considered by Perverzev and Schock (2005). An example is provided to show the efficiency of the proposed method.

Index Terms—quartic convergence, Newton Lavrentiev method, monotone operator, ill-posed problems, adaptive method.

I. INTRODUCTION

An iteratively regularized projection method has been considered for approximately solving the ill-posed operator equation

F(x) = f,   (1)

where F : D(F) ⊆ X → X is a nonlinear monotone operator (i.e., ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ D(F)) and X is a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖. It is assumed that (1) has a solution, namely x̂, and that F possesses a locally uniformly bounded Fréchet derivative F′(x) for all x ∈ D(F) (cf. [14]), i.e.,

‖F′(x)‖ ≤ C_F, x ∈ D(F),

for some constant C_F. In applications, usually only noisy data f^δ are available, with ‖f − f^δ‖ ≤ δ. The problem of recovering x̂ from the noisy equation F(x) = f^δ is then ill-posed, in the sense that a small perturbation in the data can cause a large deviation in the solution.

For solving (1) with monotone operators (see [7], [12], [14], [15]) one usually uses the Lavrentiev regularization method. In this method the regularized approximation x^δ_α is obtained by solving the operator equation

F(x) + α(x − x_0) = f^δ.   (2)

It is known (cf. [15], Theorem 1.1) that equation (2) has a unique solution x^δ_α for α > 0, provided F is Fréchet differentiable and monotone in the ball B_r(x̂) ⊂ D(F) with radius r = ‖x̂ − x_0‖ + δ/α. However, the regularized equation (2) remains nonlinear, and one may have difficulties in solving it numerically.

In [1], George and Elmahdy considered an iterative regularization method which converges linearly to x^δ_α, and its finite dimensional realization in [2]. Later, in [3], George and Elmahdy considered an iterative regularization method which converges quadratically to x^δ_α, and its finite dimensional realization in [4]. Recall that a sequence (x_n) in X with lim_n x_n = x* is said to be convergent of order p > 1 if there exist positive reals β, γ such that for all n ∈ ℕ,

‖x_n − x*‖ ≤ β e^{−γ p^n}.

If the sequence (x_n) has the property that ‖x_n − x*‖ ≤ β q^n with 0 < q < 1, then (x_n) is said to be linearly convergent. For an extensive discussion of convergence rates see [8].

Note that the convergence of the methods considered in [1], [2], [3] and [4] is proved using a suitably constructed majorizing sequence which depends heavily on the initial guess, and hence these methods are not well suited for practical use. In an attempt to avoid the majorizing sequence in the convergence proof, the authors considered in [5] a two step iterative method for solving (1) which converges linearly to x^δ_α. Later, in [11], the authors considered an application of a Newton type iterative method that converges quartically to x^δ_α. In this paper we consider the finite dimensional realization of the method considered in [11].

The organization of this paper is as follows. Section 2 describes the method and its convergence. Section 3 deals with the error analysis and parameter choice strategy. Section 4 gives the algorithm for implementing the proposed method. A numerical example and computational results are given in Section 5. Finally, in Section 6 we summarize the key points of the paper.

Santhosh George and Suresan Pareth, Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal, India-575025, e-mail: [email protected], [email protected]

II. THE METHOD AND ITS CONVERGENCE
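The convergence-order definition recalled in the introduction can be illustrated numerically. For an error sequence of the model form ‖x_n − x*‖ = β e^{−γ p^n} with β = 1, the ratio log‖x_{n+1} − x*‖ / log‖x_n − x*‖ recovers the order p exactly; the values of γ and p below are hypothetical, chosen only for illustration.

```python
import math

# Hypothetical error sequence of a quartically convergent method (p = 4),
# assuming the model err_n = exp(-gamma * p**n), i.e. beta = 1.
gamma, p = 0.5, 4.0
errs = [math.exp(-gamma * p**n) for n in range(5)]

# For this model, log(err_{n+1}) / log(err_n) = p exactly,
# since log(err_n) = -gamma * p**n.
orders = [math.log(errs[n + 1]) / math.log(errs[n]) for n in range(4)]
print(orders)
```

In practice β is unknown and the errors are only bounded by the model, so such ratios give an estimate of p rather than its exact value.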

Let {P_h}_{h>0} be a family of orthogonal projections on X. Our aim in this section is to obtain an approximation for x^δ_α in the finite dimensional space R(P_h), the range of P_h. For the results that follow, we impose the following conditions. Let

ε_h := ‖F′(x)(I − P_h)‖, ∀x ∈ D(F),

and let {b_h : h > 0} be such that lim_{h→0} ‖(I − P_h)x_0‖ / b_h = 0 and lim_{h→0} b_h = 0. We assume that ε_h → 0 as h → 0. The above assumption is satisfied if P_h → I pointwise and F′(x) is a compact operator. Further, we assume that ε_h ≤ ε_0, b_h ≤ b_0 and δ ∈ (0, δ_0].
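The condition ‖(I − P_h)x_0‖ → 0 is natural when P_h projects onto spaces of linear splines on finer and finer grids. As a rough illustration (using piecewise-linear interpolation as a stand-in for the orthogonal projection, and x_0(t) = sin(πt) as a hypothetical smooth element), the discretization error decays like O(n^{−2}):

```python
import numpy as np

def interp_error(n, m=4001):
    """Discrete L2 distance between x0 and its piecewise-linear
    interpolant on a uniform grid of n+1 nodes in [0, 1]."""
    t = np.linspace(0.0, 1.0, m)
    x0 = np.sin(np.pi * t)
    nodes = np.linspace(0.0, 1.0, n + 1)
    xh = np.interp(t, nodes, np.sin(np.pi * nodes))  # linear-spline interpolant
    return np.sqrt(np.mean((x0 - xh) ** 2))

errs = [interp_error(n) for n in (8, 16, 32)]
print(errs)  # each grid refinement shrinks the error by roughly a factor of 4
```

The factor-of-four decay per halving of the mesh width is the O(n^{−2}) behavior quoted for this example in Section V.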

(Advance online publication: 27 August 2012)

A. Projection Method

We consider the following sequence, defined iteratively by

y^{h,δ}_{n,α} = x^{h,δ}_{n,α} − R_α^{−1}(x^{h,δ}_{n,α}) P_h [F(x^{h,δ}_{n,α}) − f^δ + α(x^{h,δ}_{n,α} − x_0)]   (3)

and

x^{h,δ}_{n+1,α} = y^{h,δ}_{n,α} − R_α^{−1}(y^{h,δ}_{n,α}) P_h [F(y^{h,δ}_{n,α}) − f^δ + α(y^{h,δ}_{n,α} − x_0)],   (4)

where R_α(x) := P_h F′(x) P_h + α P_h and x^{h,δ}_{0,α} := P_h x_0, for obtaining an approximation for x^δ_α in the finite dimensional subspace R(P_h) of X. Note that the iterations (3) and (4) are the finite dimensional realization of the iterations (3) and (4) in [11]. We will select the parameter α = α_i from some finite set using the adaptive method considered by Perverzev and Schock in [12].

We need the following assumptions for the convergence analysis.

Assumption 1: (cf. [14], Assumption 3) There exists a constant k_0 ≥ 0 such that for every x, u ∈ D(F) and v ∈ X there exists an element Φ(x, u, v) ∈ X such that

[F′(x) − F′(u)]v = F′(u)Φ(x, u, v), ‖Φ(x, u, v)‖ ≤ k_0 ‖v‖ ‖x − u‖.

Assumption 2: There exists a continuous, strictly monotonically increasing function ϕ : (0, a] → (0, ∞) with a ≥ ‖F′(x̂)‖ satisfying:
(i) lim_{λ→0} ϕ(λ) = 0;
(ii) sup_{λ≥0} αϕ(λ)/(λ + α) ≤ c_ϕ ϕ(α), ∀λ ∈ (0, a];
(iii) there exists v ∈ X with ‖v‖ ≤ 1 (cf. [10]) such that x_0 − x̂ = ϕ(F′(x̂))v.

Let

e^{h,δ}_{n,α} := ‖y^{h,δ}_{n,α} − x^{h,δ}_{n,α}‖, ∀n ≥ 0,   (5)

and for 0 < k_0 < 2/(3(1 + ε_0/α_0)), let g : (0, 1) → (0, 1) be the function defined by

g(t) = (27 k_0^3 / 8)(1 + ε_0/α_0)^3 t^3, ∀t ∈ (0, 1).   (6)

Proof. Note that

‖R_α^{−1}(x) P_h F′(x)‖ = sup_{‖v‖≤1} ‖(P_h F′(x) P_h + α P_h)^{−1} P_h F′(x)(P_h + I − P_h)v‖ ≤ …
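A small check of the constant in (6): whenever 0 < k_0 < 2/(3(1 + ε_0/α_0)), the coefficient (27 k_0^3/8)(1 + ε_0/α_0)^3 is strictly less than 1, so g maps (0, 1) into (0, 1) and in fact g(t) < t^3 < t. The values of ε_0 and α_0 below are hypothetical, with k_0 taken at 90% of its allowed bound:

```python
# Hypothetical eps_0, alpha_0; k0 at 90% of the bound 2/(3(1 + eps0/alpha0)).
eps0, alpha0 = 0.5, 1.0
r = 1.0 + eps0 / alpha0
k0 = 0.9 * 2.0 / (3.0 * r)

# Coefficient of t**3 in (6); here it equals 0.9**3 = 0.729 < 1.
coeff = (27.0 * k0**3 / 8.0) * r**3
g = lambda t: coeff * t**3

assert coeff < 1.0
for t in (0.05, 0.3, 0.7, 0.95):
    assert 0.0 < g(t) < t**3 < t < 1.0  # g is a strict contraction toward 0
print(coeff)
```

Since g(t) < t^3 on (0, 1), repeated application of g drives an initial value in (0, 1) to zero at a cubic-per-application rate, which is the mechanism behind the rapid decay of the quantities e^{h,δ}_{n,α}.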

Hereafter we assume that δ ∈ (0, δ_0], where δ_0 < α_0 √(1 + …), and that b_0 < √(1 + ρ/…). Let

n_i := min{ n : e^{−γ_ρ 4^n} ≤ (δ + ε_h)/α_i }.

… ≤ δ/α + ε_h/α + C̃ ϕ(α) ≤ max{1, C̃} (ϕ(α) + (δ + ε_h)/α).

IV. IMPLEMENTATION OF ADAPTIVE CHOICE RULE

Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 1 involves the following steps:
• Choose α_0 > 0 such that δ_0 < α_0 and µ > 1.
• Choose α_i := µ^i α_0, i = 0, 1, 2, …, N.
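To make the geometric grid concrete, here is a toy sketch of the balancing idea behind the Perverzev–Schock rule: on the grid α_i = µ^i α_0, pick the first α at which the (increasing) approximation term ϕ(α) overtakes the (decreasing) data-propagation term (δ + ε_h)/α. The numbers δ + ε_h, µ and ϕ(λ) = λ are borrowed from the numerical example in Section V; the explicit crossing rule is a simplification of the actual pairwise-comparison stopping rule of the algorithm.

```python
# Toy balancing on the geometric grid alpha_i = mu**i * alpha0.
delta_eps = 0.0133          # delta + eps_h, as reported in Table I
alpha0 = 1.1 * delta_eps    # alpha0 = 1.1 * (delta + eps_h), as in Section V
mu = 1.1
phi = lambda a: a           # phi(lambda) = lambda, the source function used later

alphas = [alpha0 * mu**i for i in range(60)]

# First index where phi(alpha) >= (delta + eps_h)/alpha: there the two
# error contributions are (approximately) balanced.
i_star = next(i for i, a in enumerate(alphas) if phi(a) >= delta_eps / a)
alpha_star = alphas[i_star]
print(i_star, alpha_star)  # crossing occurs near sqrt(delta + eps_h)
```

For ϕ(λ) = λ the crossing point is √(δ + ε_h), which matches the expected rate O((δ + ε_h)^{1/2}) quoted in Section V.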


A. Algorithm

1. Set i = 0.
2. Choose n_i := min{ n : e^{−γ_ρ 4^n} ≤ (δ + ε_h)/α_i }.
3. Solve x_i := x^{h,δ}_{n_i,α_i} by using the iterations (3) and (4).
4. If ‖x_i − x_j‖ > 4C_0 (δ + ε_h)/α_j for some j < i, then take k = i − 1 and return x_k.
5. Else set i = i + 1 and go to step 2.

V. NUMERICAL EXAMPLE

In this section we consider the example considered in [14] to illustrate the algorithm of Section IV. We apply the algorithm by choosing a sequence of finite dimensional subspaces (V_n) of X with dim V_n = n + 1. Precisely, we choose V_n as the linear span of {v_1, v_2, …, v_{n+1}}, where the v_i, i = 1, 2, …, n+1, are the linear splines on a uniform grid of n + 1 points in [0, 1]. Note that x^{h,δ}_{n,α}, y^{h,δ}_{n,α} ∈ V_n, so y^{h,δ}_{n,α} = Σ_{i=1}^{n+1} ξ_i^n v_i and x^{h,δ}_{n,α} = Σ_{i=1}^{n+1} η_i^n v_i, where the ξ_i^n and η_i^n, i = 1, 2, …, n+1, are scalars. Then from (3) we have

(P_h F′(x^{h,δ}_{n,α}) + α)(y^{h,δ}_{n,α} − x^{h,δ}_{n,α}) = P_h [f^δ − F(x^{h,δ}_{n,α}) + α(x^{h,δ}_{0,α} − x^{h,δ}_{n,α})].   (27)

Observe that (y^{h,δ}_{n,α} − x^{h,δ}_{n,α}) is a solution of (27) if and only if (ξ^n − η^n) = (ξ_1^n − η_1^n, ξ_2^n − η_2^n, …, ξ_{n+1}^n − η_{n+1}^n)^T is the unique solution of

(Q_n + α B_n)(ξ^n − η^n) = B_n [μ̄_n − F_h^1 + α(X_0 − η̄_n)],   (28)

where

Q_n = [⟨F′(x^{h,δ}_{n,α}) v_i, v_j⟩], i, j = 1, 2, …, n+1,
B_n = [⟨v_i, v_j⟩], i, j = 1, 2, …, n+1,
μ̄_n = [f^δ(t_1), f^δ(t_2), …, f^δ(t_{n+1})]^T,
F_h^1 = [F(x^{h,δ}_{n,α})(t_1), F(x^{h,δ}_{n,α})(t_2), …, F(x^{h,δ}_{n,α})(t_{n+1})]^T,
X_0 = [x_0(t_1), x_0(t_2), …, x_0(t_{n+1})]^T,

and t_1, t_2, …, t_{n+1} are the grid points. Further, from (4) it follows that

(P_h F′(y^{h,δ}_{n,α}) + α)(x^{h,δ}_{n+1,α} − y^{h,δ}_{n,α}) = P_h [f^δ − F(y^{h,δ}_{n,α}) + α(x^{h,δ}_{0,α} − y^{h,δ}_{n,α})],   (29)

and hence (x^{h,δ}_{n+1,α} − y^{h,δ}_{n,α}) is a solution of (29) if and only if (η^{n+1} − ξ^n) = (η_1^{n+1} − ξ_1^n, η_2^{n+1} − ξ_2^n, …, η_{n+1}^{n+1} − ξ_{n+1}^n)^T is the unique solution of

(T_n + α B_n)(η^{n+1} − ξ^n) = B_n [μ̄_n − F_h^2 + α(X_0 − ξ̄_n)],   (30)

where

T_n = [⟨F′(y^{h,δ}_{n,α}) v_i, v_j⟩], i, j = 1, 2, …, n+1,
F_h^2 = [F(y^{h,δ}_{n,α})(t_1), F(y^{h,δ}_{n,α})(t_2), …, F(y^{h,δ}_{n,α})(t_{n+1})]^T.

Note that (28) and (30) are uniquely solvable, since Q_n and T_n are positive definite matrices (i.e., x Q_n x^T > 0 and x T_n x^T > 0 for every non-zero vector x) and B_n is an invertible matrix.

EXAMPLE 4: (see [14], Section 4.3) Let F : D(F) ⊆ L²(0, 1) → L²(0, 1) be defined by

F(u) := ∫₀¹ k(t, s) u³(s) ds,

where

k(t, s) = (1 − t)s if 0 ≤ s ≤ t ≤ 1, and k(t, s) = (1 − s)t if 0 ≤ t ≤ s ≤ 1.

Then for all x(t), y(t) with x(t) > y(t),

⟨F(x) − F(y), x − y⟩ = ∫₀¹ ( ∫₀¹ k(t, s)(x³ − y³)(s) ds ) (x − y)(t) dt ≥ 0.

Thus the operator F is monotone. The Fréchet derivative of F is given by

F′(u)w = 3 ∫₀¹ k(t, s) u²(s) w(s) ds.   (31)

Note that for u, v > 0,

[F′(v) − F′(u)]w = 3 ∫₀¹ k(t, s) u²(s) ds × ( ∫₀¹ k(t, s)[v²(s) − u²(s)] w(s) ds / ∫₀¹ k(t, s) u²(s) ds ) := F′(u) Φ(v, u, w),

where

Φ(v, u, w) = ∫₀¹ k(t, s)[v²(s) − u²(s)] w(s) ds / ∫₀¹ k(t, s) u²(s) ds
           = ∫₀¹ k(t, s)[u(s) + v(s)][v(s) − u(s)] w(s) ds / ∫₀¹ k(t, s) u²(s) ds.

Observe that Assumption 1 is therefore satisfied with k_0 ≥ ‖ ∫₀¹ k(t, s)[u(s) + v(s)] ds / ∫₀¹ k(t, s) u²(s) ds ‖.

In our computation we take f(t) = (6 sin(πt) + sin³(πt))/(9π²) and f^δ = f + δ. Then the exact solution is

x̂(t) = sin(πt).

We use

x_0(t) = sin(πt) + 3[tπ² − t²π² + sin²(πt)]/(4π²)

as our initial guess, so that the function x_0 − x̂ satisfies the source condition

x_0 − x̂ = ϕ(F′(x̂))v,

where ϕ(λ) = λ. For the operator F′(·) defined in (31), ε_h = O(n^{−2}) (cf. [6]). Thus we expect to obtain the rate of convergence O((δ + ε_h)^{1/2}). We choose α_0 = (1.1)(δ + ε_h), µ = 1.1, ρ = 0.11, γ_ρ = 0.7818 and g(γ_ρ) = 0.99. The results of the computation are presented in Table I. The plots of the exact solution and the approximate solution obtained are given in Figures 1 and 2.
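The whole computation can be sketched end-to-end in a few lines. The sketch below discretizes Example 4 by simple collocation with trapezoidal quadrature (a simplification of the spline/Galerkin systems (28) and (30)), fixes a single regularization parameter α instead of running the full adaptive rule, and iterates the two steps (3) and (4); the specific values of n, δ and α are illustrative assumptions, not the paper's settings.

```python
import numpy as np

n = 64
t = np.linspace(0.0, 1.0, n)

# Green's-function kernel k(t, s) of Example 4: (1-t)s for s <= t, (1-s)t otherwise.
S = t[None, :]   # s varies along columns
T = t[:, None]   # t varies along rows
K = np.where(S <= T, (1.0 - T) * S, (1.0 - S) * T)

# Trapezoidal quadrature weights on [0, 1].
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)

def F(u):
    """F(u)(t_i) = int_0^1 k(t_i, s) u(s)^3 ds (trapezoidal rule)."""
    return K @ (w * u**3)

def Fprime(u):
    """Jacobian of the discretized F at u: entries 3 k(t_i, s_j) w_j u_j^2."""
    return 3.0 * K * (w * u**2)

x_hat = np.sin(np.pi * t)                                       # exact solution
f = (6 * np.sin(np.pi * t) + np.sin(np.pi * t)**3) / (9 * np.pi**2)
delta = 0.01
f_delta = f + delta                                             # noisy data
x0 = x_hat + 3 * (t * np.pi**2 - t**2 * np.pi**2
                  + np.sin(np.pi * t)**2) / (4 * np.pi**2)      # initial guess

alpha = 0.1      # fixed regularization parameter (illustrative)
I = np.eye(n)
x = x0.copy()
for _ in range(8):   # two-step iteration (3)-(4)
    y = x - np.linalg.solve(Fprime(x) + alpha * I,
                            F(x) - f_delta + alpha * (x - x0))
    x = y - np.linalg.solve(Fprime(y) + alpha * I,
                            F(y) - f_delta + alpha * (y - x0))

residual = np.max(np.abs(F(x) + alpha * (x - x0) - f_delta))
error = np.sqrt(np.mean((x - x_hat)**2))
print(residual, error)
```

The iterate converges to the discrete regularized solution (the residual of F(x) + α(x − x_0) = f^δ drops to machine precision), while the distance to x̂ remains bounded by the regularization bias, consistent with the error behavior reported in Table I.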


Fig. 1. Curves of the exact and approximate solutions (exact solution vs. approximate solution on [0, 1]) for n = 8, 16, 32, 64.

Fig. 2. Curves of the exact and approximate solutions for n = 128, 256, 512, 1024.

TABLE I
ITERATIONS AND CORRESPONDING ERROR ESTIMATES

n    | k  | n_k | δ+ε_h  | α      | ‖x_k − x̂‖ | ‖x_k − x̂‖/(δ+ε_h)^{1/2}
8    | 2  | 4   | 0.0135 | 0.0180 | 0.0356     | 0.3065
16   | 2  | 4   | 0.0134 | 0.0178 | 0.0432     | 0.3736
32   | 2  | 4   | 0.0133 | 0.0178 | 0.0450     | 0.3897
64   | 2  | 4   | 0.0133 | 0.0177 | 0.0455     | 0.3938
128  | 2  | 4   | 0.0133 | 0.0177 | 0.0456     | 0.3948
256  | 2  | 4   | 0.0133 | 0.0177 | 0.0456     | 0.3950
512  | 19 | 5   | 0.0133 | 0.0897 | 0.0456     | 0.3951
1024 | 27 | 6   | 0.0133 | 0.1923 | 0.0456     | 0.3951
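As a quick arithmetic check, the last column of Table I is indeed ‖x_k − x̂‖/(δ + ε_h)^{1/2}, up to rounding of the printed four-digit entries:

```python
import math

# (delta + eps_h, error, reported ratio) for the first and last rows of Table I.
rows = [(0.0135, 0.0356, 0.3065), (0.0133, 0.0456, 0.3951)]
for d, err, reported in rows:
    ratio = err / math.sqrt(d)
    # agreement up to rounding of the 4-digit table entries
    assert abs(ratio - reported) < 2e-3
print("Table I ratios consistent")
```

The near-constant value of this ratio across n is what supports the claimed rate O((δ + ε_h)^{1/2}).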

VI. CONCLUSION

We have suggested and analyzed the finite dimensional realization of the iterative method considered in [11] for obtaining an approximate solution of the nonlinear ill-posed operator equation F(x) = f, where the operator F : D(F) ⊆ X → X, defined on a real Hilbert space X, is monotone and the available data is f^δ with ‖f − f^δ‖ ≤ δ. Using a general source condition on x_0 − x̂, we obtained an optimal order error estimate. The regularization parameter α is chosen according to the balancing principle considered by Perverzev and Schock (2005). The numerical results provided confirm the efficiency of the method.

ACKNOWLEDGMENT

S. Pareth thanks the National Institute of Technology Karnataka, India for financial support.

REFERENCES

[1] S. George and A. I. Elmahdy, "An analysis of Lavrentiev regularization for nonlinear ill-posed problems using an iterative regularization method", Int. J. Comput. Appl. Math., 2010, 5(3), pp. 369-381.
[2] S. George and A. I. Elmahdy, "An iteratively regularized projection method for nonlinear ill-posed problems", Int. J. Contemp. Math. Sciences, 2010, 5, no. 52, pp. 2547-2565.
[3] S. George and A. I. Elmahdy, "A quadratic convergence yielding iterative method for nonlinear ill-posed operator equations", Comput. Methods Appl. Math., 2012, 12(1), pp. 32-45.
[4] S. George and A. I. Elmahdy, "An iteratively regularized projection method with quadratic convergence for nonlinear ill-posed problems", Int. J. of Math. Analysis, 2010, 4, no. 45, pp. 2211-2228.
[5] S. George and S. Pareth, "Two-Step Modified Newton Method for Nonlinear Lavrentiev Regularization", ISRN Applied Mathematics, Volume 2012, Article ID 728627, 2012, doi:10.5402/2012/728627.
[6] C. W. Groetsch, J. T. King and D. Murio, "Asymptotic analysis of a finite element method for Fredholm equations of the first kind", in Treatment of Integral Equations by Numerical Methods, Eds.: C. T. H. Baker and G. F. Miller, Academic Press, London, 1982, pp. 1-11.
[7] J. Janno and U. Tautenhahn, "On Lavrentiev regularization for ill-posed problems in Hilbert scales", Numer. Funct. Anal. Optim., 2003, 24, no. 5-6, pp. 531-555.
[8] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, 1995.
[9] P. Mathe and S. V. Perverzev, "Geometry of linear ill-posed problems in variable Hilbert scales", Inverse Problems, 2003, 19, no. 3, pp. 789-803.
[10] M. T. Nair and P. Ravishankar, "Regularized versions of continuous Newton's method and continuous modified Newton's method under general source conditions", Numer. Funct. Anal. Optim., 2008, 29(9-10), pp. 1140-1165.
[11] S. Pareth and S. George, "Newton type methods for Lavrentiev regularization of nonlinear ill-posed operator equations", 2012, (communicated).
[12] S. V. Perverzev and E. Schock, "On the adaptive selection of the parameter in regularization of ill-posed problems", SIAM J. Numer. Anal., 2005, 43, pp. 2060-2076.
[13] A. G. Ramm, A. B. Smirnova and A. Favini, "Continuous modified Newton's-type method for nonlinear operator equations", Ann. Mat. Pura Appl., 2003, 182, pp. 37-52.
[14] E. V. Semenova, "Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators", Comput. Methods Appl. Math., 2010, no. 4, pp. 444-454.
[15] U. Tautenhahn, "On the method of Lavrentiev regularization for nonlinear ill-posed problems", Inverse Problems, 2002, 18, pp. 191-207.

Santhosh George received his Ph.D. in Mathematics from Goa University. He is currently an Associate Professor in the Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India. He has published several papers in reputed international journals and in proceedings of international conferences. His research interests include functional analysis, inverse and ill-posed problems, and their applications. He has supervised many M.Tech dissertations; two students have completed their Ph.D. under his guidance, and three more are currently pursuing their Ph.D. under his guidance.

Suresan Pareth received his M.Sc. degree in Mathematics from the Department of Mathematics, Calicut University, Kerala, India, as well as an MCA and an M.Tech in Systems Analysis and Computer Applications from the National Institute of Technology Karnataka, India. His areas of interest are analysis of algorithms and inverse and ill-posed problems and their applications. He has worked as an Assistant Director, Senior Software Developer, Assistant Professor and Associate Professor in Computer Science and Engineering. He is currently a research scholar in the Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India.
