Filomat 26:5 (2012), 1055–1063 DOI 10.2298/FIL1205055S

Published by Faculty of Sciences and Mathematics, University of Niš, Serbia. Available at: http://www.pmf.ni.ac.rs/filomat

On generalized Newton method for solving operator inclusions

D.R. Sahu, Krishna Kumar Singh
Department of Mathematics, Faculty of Science, Banaras Hindu University, Varanasi-221005, India

Abstract. In this paper, we establish an existence and uniqueness theorem for solving the generalized operator inclusion F(x) + G(x) + T(x) ∋ 0, where F is a Fréchet differentiable operator, G is a maximal monotone operator and T is a Lipschitzian operator defined on an open convex subset of a Hilbert space. Our results are improvements upon corresponding results of Uko [Generalized equations and the generalized Newton method, Math. Programming 73 (1996) 251–268].

1. Introduction

Let H be a real Hilbert space endowed with a norm ∥·∥ and an inner product ⟨·,·⟩. Throughout the paper, D denotes a closed convex subset of H with nonempty interior D0. For a set-valued map G from H to H, the set D(G) = {x ∈ H : Gx ≠ ∅} is called the domain of G, the set R(G) = ∪_{x∈H} Gx is called the range of G, the set G(G) = {(x, y) ∈ H × H : x ∈ D(G), y ∈ Gx} is called the graph of G, the set G⁻¹x = {y ∈ H : x ∈ Gy} is called the inverse image of x ∈ R(G) under G, and Br[x] designates the set {y ∈ H : ∥y − x∥ ≤ r}.

Let us recall some basic definitions.

Definition 1.1. Let H be a Hilbert space. An operator T : H → H is said to be
(i) monotone if ⟨Tx − Ty, x − y⟩ ≥ 0 for all x, y ∈ H;
(ii) strongly monotone if ⟨Tx − Ty, x − y⟩ ≥ k1∥x − y∥² for all x, y ∈ H and some k1 > 0;
(iii) Lipschitz continuous if ∥Tx − Ty∥ ≤ k2∥x − y∥ for all x, y ∈ H and some k2 > 0.

Definition 1.2. A multi-valued operator G : H → 2^H is said to be

2010 Mathematics Subject Classification. Primary 49M15
Keywords. Fréchet derivative, generalized Newton method, generalized equations, Lipschitzian mappings, maximal monotone operators, S-iteration process
Received: 05 January 2012; Accepted: 15 April 2012
Communicated by Qamrul Hasan Ansari and Ljubiša D.R. Kočinac
Email addresses: [email protected] (D.R. Sahu), [email protected] (Krishna Kumar Singh)


(i) monotone if ⟨y2 − y1, x2 − x1⟩ ≥ 0 for all x1, x2 ∈ H, y1 ∈ Gx1, y2 ∈ Gx2;
(ii) maximal monotone if it is monotone and there is no other monotone operator whose graph strictly contains the graph G(G) of G;
(iii) strongly monotone if there is some α > 0 such that ⟨y2 − y1, x2 − x1⟩ ≥ α∥x1 − x2∥² for all x1, x2 ∈ H, y1 ∈ Gx1, y2 ∈ Gx2.

In the sequel, we regard the statements [x, y] ∈ G, G(x) ∋ y, −y + G(x) ∋ 0 and y ∈ G(x) as synonymous. It is shown in [7] that if G is maximal monotone then G is closed in the sense that [xm, ym] ∈ G, lim_{m→∞} xm = x and lim_{m→∞} ym = y imply [x, y] ∈ G.

A well-known example (see [7, 8]) of a maximal monotone operator is the subgradient

∂ϕ(t) = {z ∈ H : ϕ(t) − ϕ(s) ≤ ⟨z, t − s⟩, ∀s ∈ H}

of a proper lower semicontinuous convex function ϕ : H → (−∞, ∞].

The following definition plays a crucial role in the paper.

Definition 1.3. ([2]) Let C be a nonempty convex subset of a normed space X and T : C → C an operator. The operator P : C → C is said to be the S-operator generated by α ∈ (0, 1) and T if P = T[(1 − α)I + αT], where I is the identity operator.

In this paper, we consider the following problem. Let F : D → H be an operator which is Fréchet differentiable at each point of D0, let G : H → 2^H be a maximal monotone operator and let T : D → H be a given operator. Find x ∈ H such that

Fx + Gx + Tx ∋ 0.   (1)

Examples of the variational inclusion (1):

(1) If G = ∂ϕ and T = 0, then problem (1) reduces to the following problem: find x ∈ H such that ⟨Fx, y − x⟩ ≥ ϕ(x) − ϕ(y) for all y ∈ H, which is called a nonlinear variational inequality and has been studied by many authors; see, for example, [3, 4, 9].

(2) Let G = ∂δK and T = 0, where δK is the indicator function of a nonempty, closed and convex subset K of H, defined by

δK(x) = 0 if x ∈ K, and δK(x) = ∞ otherwise.

In this case problem (1) reduces to the following problem: find x ∈ K such that ⟨Fx, y − x⟩ ≥ 0 for all y ∈ K, which is the classical variational inequality; see [5, 14].
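To make Definition 1.3 concrete, the following sketch applies the S-operator P = T[(1 − α)I + αT] to a simple contraction on the real line. The choice T x = cos x is an illustrative assumption, not an operator from the paper; it only shows that fixed points of P and T coincide and that iterating P approximates them.

```python
# Sketch of the S-operator of Definition 1.3: P = T[(1 - alpha)I + alpha*T].
# The map T below (T x = cos x, a contraction near its fixed point) is an
# illustrative choice, not taken from the paper.
import math

def s_operator(T, alpha):
    """Return the S-operator P x = T((1 - alpha) x + alpha T x)."""
    return lambda x: T((1 - alpha) * x + alpha * T(x))

T = math.cos
P = s_operator(T, 0.5)

# If T x = x then P x = T((1 - alpha) x + alpha x) = T x = x, so fixed points
# of P and T coincide; iterating P converges to the root of cos x = x.
x = 1.0
for _ in range(50):
    x = P(x)
print(x)  # close to 0.739085...
```

The construction P = T ∘ [(1 − α)I + αT] is exactly the composition used later to build the operator Aα from A.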


The generalized Newton method is given by

F′wn wn+1 + Gwn+1 ∋ F′wn wn − Fwn,  n = 0, 1, . . . ,   (2)

where F′w denotes the Fréchet derivative of F at the point w ∈ D0. Many results on the convergence of the generalized Newton method (2) can be found in [10, 11, 16, 17]. In method (2), the value of the inverse of the derivative is required at each iteration. This raises a natural question: how can the generalized Newton iteration (2) be modified so that the computation of the inverse of the derivative at each step is avoided? In [15], Uko approximated the solution of (1) for T ≡ 0 by the generalized Newton method

F′w0 wn+1 + Gwn+1 ∋ F′w0 wn − Fwn,  n = 0, 1, . . . ,   (3)
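Each step of (3) is computable whenever the inclusion F′w0 w + G(w) ∋ z can be solved for w. As a hedged illustration (a scalar instance chosen here, not taken from [15]), take H = R with F(x) = x² − 1 and G the identity; the inclusion step then reduces to a linear equation:

```python
# Scalar sketch of the simplified generalized Newton method (3):
#   F'(w0) w_{n+1} + G(w_{n+1}) = F'(w0) w_n - F(w_n).
# F and G below are illustrative choices (F x = x^2 - 1, G x = x), so each
# step solves (F'(w0) + 1) w_{n+1} = F'(w0) w_n - F(w_n) explicitly.

def newton_simplified(F, dF_w0, w0, n_steps):
    w = w0
    for _ in range(n_steps):
        # resolvent step: (dF_w0 * I + G)^{-1} is division by (dF_w0 + 1)
        # because G is the identity here
        w = (dF_w0 * w - F(w)) / (dF_w0 + 1.0)
    return w

F = lambda x: x * x - 1.0
w0 = 1.0
w = newton_simplified(F, 2.0 * w0, w0, 30)  # F'(w0) = 2 w0
print(w)  # tends to (sqrt(5) - 1) / 2 = 0.618033...
```

Note that the derivative is evaluated only once, at w0, which is exactly the computational saving (3) offers over (2).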

and gave the following theorem for the semilocal convergence analysis of (3) for solving the operator inclusion (1) with T = 0.

Theorem 1.4. Let F : D → H be a continuous operator which has a Fréchet derivative at each point of D0 and let G : H → 2^H be a maximal monotone operator. For some x0 ∈ D, assume that the operators F and G satisfy the following conditions:

(i) there exists y0 ∈ H such that y0 ∈ G(x0) and ∥F(x0) + y0∥ ≤ β, for some β > 0;
(ii) ⟨F′x0 x, x⟩ ≥ η∥x∥², for some η > 0;
(iii) ∥F′x − F′y∥ ≤ K∥x − y∥, for all x, y ∈ D0 and for some K > 0;
(iv) ⟨y2 − y1, x2 − x1⟩ ≥ K0∥x1 − x2∥², ∀x1, x2 ∈ H, y1 ∈ Gx1, y2 ∈ Gx2 and for some K0 > 0.

Assume that η + K0 > 0 and denote d = β/(η + K0) and h = Kd/(η + K0). Assume further that h < 1/2 and Br[x0] ⊆ D0, where r = 2d/(1 + √(1 − 2h)). Then:

(a) the operator inclusion (1) with T = 0 has a unique solution x∗ in Br1[x0] ∩ D, where r1 = 2d/(1 − √(1 − 2h));
(b) the sequence {wn} generated by (3) lies in Br[x0] and converges to x∗;
(c) the following error estimate holds:

∥wn+1 − x∗∥ ≤ (d/h) γⁿ⁺¹, ∀n ∈ N0,   (4)

where γ = 1 − √(1 − 2h).

Motivated by the S-iteration process [12, 13], we consider the following algorithm for solving (1):

Algorithm 1.5. For x0 ∈ D and fixed α ∈ (0, 1), define {xn} by

xn+1 = A[(1 − α)xn + αAxn],  n = 0, 1, . . . ,   (8)

where A is the operator defined by (9) below.
Lemma 2.1. Let U : H → H be an operator such that ⟨Ux − Uy, x − y⟩ ≥ u∥x − y∥² for all x, y ∈ H and let V : H → 2^H be a monotone operator such that ⟨y2 − y1, x2 − x1⟩ ≥ v∥x1 − x2∥² for all x1, x2 ∈ H, y1 ∈ Vx1, y2 ∈ Vx2, where u and v are real numbers with u + v > 0. Then the set (U + V)⁻¹x contains exactly one element for all x ∈ R(U + V).

Lemma 2.2. ([1, 6]) Let (X, d) be a complete metric space and F : X → X a contraction mapping. Then F has a unique fixed point in X.

Lemma 2.3. Let F : D → H be a continuous operator such that F has a Fréchet derivative at each point of D0 and let T : D → H be an operator. Further, let G : H → 2^H be a maximal monotone operator. For some x0 ∈ D0, assume the following:

(i) ⟨F′x0 x, x⟩ ≥ η∥x∥² for some real number η;
(ii) ⟨y2 − y1, x2 − x1⟩ ≥ K0∥x1 − x2∥² for all x1, x2 ∈ H, y1 ∈ Gx1, y2 ∈ Gx2 and for some K0 ≥ 0.

Assume that η + K0 > 0. Then the operator A : D → H defined by

A(x) = (F′x0 + G)⁻¹(F′x0 x − Fx − Tx),  x ∈ D,   (9)

is well defined on D.

Proof. It follows from Lemma 2.1 by taking U = F′x0, V = G, u = η and v = K0.
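The resolvent (F′x0 + G)⁻¹ appearing in (9) is explicitly computable in some important cases. For instance (an illustration assumed here, not stated in the paper), when G = ∂δK is the subdifferential of the indicator of a closed interval K ⊂ R and F′x0 acts as multiplication by a scalar a > 0, solving a·w + ∂δK(w) ∋ z amounts to projecting z/a onto K:

```python
# Resolvent (a*I + d(delta_K))^{-1} for K = [lo, hi] in R and a > 0:
# solving a*w + N_K(w) ∋ z gives w = P_K(z / a), the projection of z/a
# onto K.  The interval K and the scalar a are illustrative assumptions.

def resolvent_interval(z, a, lo, hi):
    """Solve a*w + d(delta_[lo,hi])(w) ∋ z; reduces to clipping z/a."""
    w = z / a
    return min(max(w, lo), hi)

print(resolvent_interval(5.0, 2.0, 0.0, 1.0))  # 2.5 clipped to 1.0
print(resolvent_interval(1.0, 2.0, 0.0, 1.0))  # 0.5 (interior point)
```

This is the case G = ∂δK of Example (2) in Section 1, where the operator A of (9) can be evaluated without any inversion of F′x0 beyond a scalar division.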


First, we establish the following fixed point theorem, which underlies the convergence analysis of Algorithm 1.5 and the existence of a unique solution of the operator equation (1).

Theorem 2.4. Let F : D → H be a continuous operator such that F has a Fréchet derivative at each point of D0 and let T : D → H be a Lipschitz continuous operator with Lipschitz constant L. Further, let G : H → 2^H be a maximal monotone operator. For some x0 ∈ D, assume the following:

(i) there exists y0 ∈ H such that y0 ∈ G(x0) and ∥F(x0) + T(x0) + y0∥ ≤ β, for some β > 0;
(ii) ⟨F′x0 x, x⟩ ≥ η∥x∥², for some real number η;
(iii) ∥F′x − F′y∥ ≤ K∥x − y∥, for all x, y ∈ Br[x0] and for some K > 0;
(iv) ⟨y2 − y1, x2 − x1⟩ ≥ K0∥x1 − x2∥², ∀x1, x2 ∈ H, y1 ∈ Gx1, y2 ∈ Gx2 and for some K0 ≥ 0.

Assume that η + K0 > 0 and denote c = L/(η + K0), d = β/(η + K0) and h = Kd/(η + K0). Assume further that h < (1 − c)²/2 and Br[x0] ⊆ D0, where r = 2d/(1 − c + √((1 − c)² − 2h)). Then, for fixed α ∈ (0, 1), we have the following:

(a) The operator A defined by (9) is well defined and a contraction self-operator on Br[x0] with Lipschitz constant γ = (Kr + L)/(η + K0), and the operator equation (1) has a unique solution in Br[x0].
(b) The S-operator Aα : Br[x0] → H generated by α and A is a contraction self-operator on Br[x0] with Lipschitz constant γ(1 − α + αγ).

Proof. (a) By Lemma 2.3, it is clear that the operator A is well defined on Br[x0]. First note that γ = (Kr + L)/(η + K0) = 1 − √((1 − c)² − 2h) < 1. For x ∈ Br[x0], by the monotonicity of G we have

K0∥Ax − x0∥² ≤ ⟨y0 + F(x) + T(x) + F′x0(Ax − x), x0 − Ax⟩
= ⟨F(x0) + T(x0) + y0 + F(x) − F(x0) − F′x0(x − x0) + T(x) − T(x0), x0 − Ax⟩ − ⟨F′x0(Ax − x0), Ax − x0⟩
≤ ∥F(x0) + T(x0) + y0∥∥Ax − x0∥ + (K/2)∥x − x0∥²∥Ax − x0∥ + L∥x − x0∥∥Ax − x0∥ − η∥Ax − x0∥².

Therefore, we get

∥Ax − x0∥ ≤ (1/(η + K0))∥F(x0) + T(x0) + y0∥ + (K/(2(η + K0)))∥x − x0∥² + (L/(η + K0))∥x − x0∥   (10)
≤ β/(η + K0) + Kr²/(2(η + K0)) + Lr/(η + K0)
= r.

For x, y ∈ Br[x0], by the monotonicity of G, we have

K0∥Ax − Ay∥² ≤ ⟨F′x0 x − F(x) − T(x) − F′x0 Ax − F′x0 y + F(y) + T(y) + F′x0 Ay, Ax − Ay⟩
= ⟨F(y) − F(x) − F′x0(y − x), Ax − Ay⟩ + ⟨T(y) − T(x), Ax − Ay⟩ − ⟨F′x0(Ax − Ay), Ax − Ay⟩
≤ ∥F(y) − F(x) − F′x0(y − x)∥∥Ax − Ay∥ + L∥x − y∥∥Ax − Ay∥ − η∥Ax − Ay∥².


Consequently, we get

∥Ax − Ay∥ ≤ (K/(η + K0)) max{∥x − x0∥, ∥y − x0∥}∥x − y∥ + (L/(η + K0))∥x − y∥
≤ ((Kr + L)/(η + K0))∥x − y∥
= γ∥x − y∥.

Hence, the operator A is a contraction self-operator on Br[x0] with Lipschitz constant γ.

(b) For x, y ∈ Br[x0], observe that

∥Aα x − Aα y∥ = ∥A[(1 − α)x + αAx] − A[(1 − α)y + αAy]∥
≤ γ∥(1 − α)(x − y) + α(Ax − Ay)∥
≤ γ(1 − α + αγ)∥x − y∥.

Hence, the S-operator Aα generated by α and A is a contraction self-operator on Br[x0] with Lipschitz constant γ(1 − α + αγ).

Now, we are ready to present the semilocal convergence analysis of (8).

Theorem 2.5. Let F : D → H be a continuous operator such that F has a Fréchet derivative at each point of D0 and let T : D → H be a Lipschitz continuous operator with Lipschitz constant L. Further, let G : H → 2^H be a maximal monotone operator. For some x0 ∈ D, assume the following:

(i) there exists y0 ∈ H such that y0 ∈ G(x0) and ∥F(x0) + T(x0) + y0∥ ≤ β, for some β > 0;
(ii) ⟨F′x0 x, x⟩ ≥ η∥x∥², for some real number η;
(iii) ∥F′x − F′y∥ ≤ K∥x − y∥, for all x, y ∈ Br[x0] and for some K > 0;
(iv) ⟨y2 − y1, x2 − x1⟩ ≥ K0∥x1 − x2∥², ∀x1, x2 ∈ H, y1 ∈ Gx1, y2 ∈ Gx2 and for some K0 ≥ 0.

Assume that η + K0 > 0 and denote c = L/(η + K0), d = β/(η + K0) and h = Kd/(η + K0). Assume further that h < (1 − c)²/2 and Br[x0] ⊆ D0, where r = 2d/(1 − c + √((1 − c)² − 2h)). Then, for fixed α ∈ (0, 1), we have the following:

(a) The operator equation (1) has a unique solution x∗ in Br1[x0] ∩ D, where r1 = 2d/(1 − c − √((1 − c)² − 2h)).
(b) The sequence {xn} generated by (8) is in Br[x0] and it converges to x∗.
(c) The following error estimate holds:

∥xn+1 − x∗∥ ≤ (d/h) ρⁿ⁺¹, ∀n ∈ N0,   (11)

where ρ = γ(1 − α + αγ).

Proof. (a) By the definition of the S-operator Aα, it is clear that a point is a fixed point of the operator Aα if and only if it is a solution of the operator equation (1). Since the operator Aα is a contraction self-operator on Br[x0], it follows from Lemma 2.2 that there exists a unique solution x∗ of the operator equation (1) in Br[x0]. Now, we show that this solution x∗ is unique in Br1[x0]. Suppose, for contradiction, that y∗ is another solution of (1) in Br1[x0]. Then y∗ is another fixed point of Aα. Using (10), we have

∥y∗ − x0∥ ≤ β/(η + K0) + (K/(2(η + K0)))∥y∗ − x0∥² + (L/(η + K0))∥y∗ − x0∥.   (12)

From inequality (12), we must have either ∥y∗ − x0∥ ≥ r1 or ∥y∗ − x0∥ ≤ r. Hence, if y∗ ∈ Br1[x0] then y∗ ∈ Br[x0]. Therefore, by the uniqueness of the fixed point of Aα in Br[x0], y∗ = x∗.


(b) By (8), we can write

∥xn+1 − x∗∥ = ∥Aα xn − Aα x∗∥
≤ γ(1 − α + αγ)∥xn − x∗∥
≤ γⁿ⁺¹(1 − α + αγ)ⁿ⁺¹∥x0 − x∗∥
≤ (d/h) ρⁿ⁺¹.

Therefore, xn → x∗ as n → ∞.

(c) It follows from part (b).

In order to compare our Theorem 2.5 with Theorem 1.4, we have the following corollary.

Corollary 2.6. Let F : D → H be a continuous operator such that F has a Fréchet derivative at each point of D0 and let G : H → 2^H be a maximal monotone operator. For some x0 ∈ D, assume the following:

(i) there exists y0 ∈ H such that y0 ∈ G(x0) and ∥F(x0) + y0∥ ≤ β, for some β > 0;
(ii) ⟨F′x0 x, x⟩ ≥ η∥x∥², for some real number η;
(iii) ∥F′x − F′y∥ ≤ K∥x − y∥, for all x, y ∈ Br[x0] and for some K > 0;
(iv) ⟨y2 − y1, x2 − x1⟩ ≥ K0∥x1 − x2∥², ∀x1, x2 ∈ H, y1 ∈ Gx1, y2 ∈ Gx2 and for some K0 > 0.

Denote d = β/(η + K0) and h = Kd/(η + K0). Assume that h < 1/2, η + K0 > 0 and Br[x0] ⊆ D0, where r = 2d/(1 + √(1 − 2h)). Then, for fixed α ∈ (0, 1), we have the following:

(a) The operator equation (1) has a unique solution x∗ in Br1[x0] ∩ D, where r1 = 2d/(1 − √(1 − 2h)).
(b) The sequence {xn} generated by (8) is in Br[x0] and it converges to x∗.
(c) The following error estimate holds:

∥xn+1 − x∗∥ ≤ (d/h) ρⁿ⁺¹, ∀n ∈ N0,   (13)

where ρ = γ(1 − α + αγ).

Proof. By putting T ≡ 0 in Theorem 2.5, we get the result.

Remark 2.7. One can observe from (13) and (4) that

ρ = γ(1 − α + αγ) < γ.   (14)

The strict inequality (14) shows that the error estimate in Corollary 2.6 is sharper than that of Theorem 1.4. Therefore, Corollary 2.6 is an improvement of Theorem 1.4.

Remark 2.8. Theorem 1.4 cannot be applied to the operator inclusion (1) when T is nonzero, whereas, by Theorem 2.5, Algorithm 1.5 can be applied successfully for solving (1).

In the following example, we show numerically that the sequence {xn} generated by (8) converges faster than the sequence {wn} generated by (3).

Example 2.9. Let H = R and D = [−1, 1]. Consider the problem: find x ∈ R such that x² + x − 1 = 0. For all x ∈ D, let F(x) = x² − 1, G(x) = x and T ≡ 0. Clearly, for x0 = 1, we have η = 2, β = 1, K = 2, K0 = 1 and c = 0.


Thus, h = 0.22 and therefore h < (1 − c)²/2. Hence, all the conditions of Theorem 2.5 are satisfied, so there exists a point x∗ ∈ Br[x0], where r = 0.3813, such that F(x∗) + G(x∗) + T(x∗) = 0. In particular, for α = 0.5, the following table shows that the sequence {xn} generated by (8) converges faster than the sequence {wn} generated by (3).

 n | Newton method {wn}  | SIP of generalized Newton-like {xn}
---|---------------------|------------------------------------
 1 | 1                   | 1
 2 | 0.666666666666667   | 0.657407407407407
 3 | 0.629629629629630   | 0.624058725602650
 4 | 0.620941929583905   | 0.618990116370879
 5 | 0.618771659750809   | 0.618186565541400
 6 | 0.618221650863616   | 0.618058357903157
 7 | 0.618081764043566   | 0.618037881467669
 8 | 0.618046153681308   | 0.618034610584751
 9 | 0.618037086427452   | 0.618034088084084
10 | 0.618034777551723   | 0.618034004617913
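The table of Example 2.9 can be reproduced with a short script. This is a sketch under the stated data F(x) = x² − 1, G(x) = x, T ≡ 0 and x0 = 1: since F′(x0) = 2 and G is the identity, the resolvent in (9) is division by 3, so A(x) = (2x − x² + 1)/3 in closed form, iteration (3) is wn+1 = A(wn), and iteration (8) is the S-step xn+1 = A((1 − α)xn + αA(xn)).

```python
# Reproduces the comparison in Example 2.9: Newton method (3) versus the
# S-iteration (8) built from A(x) = (F'(x0) + G)^{-1}(F'(x0) x - F(x)),
# which for F(x) = x^2 - 1, G(x) = x, x0 = 1 gives A(x) = (2x - x^2 + 1)/3.

def A(x):
    return (2.0 * x - (x * x - 1.0)) / 3.0

def s_step(x, alpha):
    # S-iteration step (8): x_{n+1} = A((1 - alpha) x + alpha A(x))
    return A((1.0 - alpha) * x + alpha * A(x))

w = x = 1.0
alpha = 0.5
print(f"{'n':>2}  {'Newton w_n':>18}  {'SIP x_n':>18}")
for n in range(1, 11):
    print(f"{n:>2}  {w:.15f}  {x:.15f}")
    w = A(w)              # iteration (3)
    x = s_step(x, alpha)  # iteration (8)
```

Both sequences approach x∗ = (√5 − 1)/2 ≈ 0.618034, the positive root of x² + x − 1 = 0, with the S-iteration ahead of the plain iteration at every step.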

In the following figure, the graph in red corresponds to the sequence {wn} generated by the generalized Newton method (3), while the graphs in blue, black and green correspond to the sequence {xn} generated by the SIP of generalized Newton-like method (8) for α = 0.25, α = 0.5 and α = 0.75, respectively. Clearly, the sequence {xn} generated by the SIP of generalized Newton-like method (8) always converges faster than the sequence {wn} generated by the generalized Newton method (3).

[Figure: xn versus n, n = 1, . . . , 8 — comparison of SIP of generalized Newton-like method with the generalized Newton method.]

References

[1] R.P. Agarwal, D. O'Regan, D.R. Sahu, Fixed Point Theory for Lipschitzian-type Mappings with Applications, Springer, 2009.
[2] R.P. Agarwal, D. O'Regan, D.R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex Anal. 8 (2007) 61–79.
[3] C. Baiocchi, A. Capelo, Variational and Quasi-Variational Inequalities, Applications to Free Boundary Problems, Wiley, New York-London, 1984.
[4] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer-Verlag, Berlin, 1984.
[5] P.T. Harker, J.S. Pang, Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications, Math. Prog. 48 (1990) 161–220.
[6] E. Kreyszig, Introductory Functional Analysis with Applications, Wiley Classics Library, 1989.
[7] G.J. Minty, Monotone (nonlinear) operators in Hilbert space, Duke Math. J. 29 (1962) 341–346.
[8] G.J. Minty, On the monotonicity of the gradient of a convex function, Pacific J. Math. 14 (1964) 243–247.


[9] M.A. Noor, Some recent advances in variational inequalities I, New Zeal. J. Math. 26 (1997) 53–80 (II, 229–255).
[10] S.M. Robinson, Generalized equations and their solutions, part I: basic theory, Math. Prog. Study 10 (1979) 128–141.
[11] S.M. Robinson, Generalized equations, in: A. Bachem, M. Grötschel, B. Korte (eds.), Mathematical Programming: The State of the Art, Springer, Berlin, 1982, pp. 346–367.
[12] D.R. Sahu, Applications of the S-iteration process to constrained minimization problems and split feasibility problems, Fixed Point Theory 12 (2011) 187–204.
[13] D.R. Sahu, Strong convergence of a new fixed point iteration process with applications, Proc. Internat. Conf. Recent Advances in Mathematical Science and Applications, 2009, pp. 59–72.
[14] G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris 258 (1964) 4413–4416.
[15] L.U. Uko, Generalized equations and the generalized Newton method, Math. Prog. 73 (1996) 251–268.
[16] L.U. Uko, The generalized Newton's method, J. Nig. Math. Soc. 10 (1991) 55–64.
[17] L.U. Uko, Remarks on the generalized Newton method, Math. Prog. 59 (1993) 405–412.
