J. Math. Anal. Appl. 337 (2008) 969–975 www.elsevier.com/locate/jmaa

Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A, η)-monotone mappings

Ram U. Verma

Department of Mathematics, University of Central Florida, FL 32816, USA

Received 14 September 2006
Available online 1 May 2007
Submitted by C.E. Chidume

Abstract

A new class of nonlinear set-valued variational inclusions involving (A, η)-monotone mappings in a Hilbert space setting is introduced, and then, based on the generalized resolvent operator technique associated with (A, η)-monotonicity, the existence and approximation solvability of solutions using an iterative algorithm is investigated.
© 2007 Elsevier Inc. All rights reserved.

Keywords: (A, η)-monotone mapping; Class of nonlinear set-valued variational inclusions; Resolvent operator method; Iterative algorithm

1. Introduction

There exists a vast literature [1–32] on the approximation solvability of nonlinear variational inequalities as well as nonlinear variational inclusions using projection-type methods, resolvent operator-type methods, or averaging techniques. In most resolvent operator methods, maximal monotonicity has played a key role, but the more recently introduced notions of A-monotonicity and H-monotonicity have not only generalized maximal monotonicity, they have also given a new edge to resolvent operator methods. In [31], the author generalized the recently introduced and studied notion of A-monotonicity to the case of (A, η)-monotonicity, while examining the sensitivity analysis for a class of nonlinear variational inclusion problems based on the generalized resolvent operator technique.

Resolvent operator techniques have been in use in the literature for a while, especially within the general framework of set-valued maximal monotone mappings, but they have been given new impetus by the recent developments of A-monotonicity [30] and H-monotonicity [10]. Furthermore, these developments add a new dimension to the existing notion of maximal monotonicity and its applications to several other fields, such as convex programming and variational inclusions. These notions can also be used to generalize the Douglas–Rachford splitting method [8] and the proximal point algorithm for maximal monotone mappings to the case of A-monotone and H-monotone mappings in Hilbert space settings. Most splitting methods use a resolvent operator of the form (I + λM)^{-1}, where M is a set-valued monotone mapping, λ is a positive constant, and I is the identity mapping. One way to get around the invertibility of I + λM is to split M into two maximal monotone mappings S and T such that M = S + T, and to work with I + λS and I + λT instead. It is then easier to construct an algorithm that uses only operators of the form (I + λS)^{-1} and (I + λT)^{-1}. In general, splitting methods are classified into four categories: the forward–backward method, the double-backward method, the Peaceman–Rachford method, and the Douglas–Rachford method.
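To fix ideas, the following minimal sketch (illustrative only, not part of the original analysis) applies the Douglas–Rachford scheme to M = S + T in the simplest case where S and T are the subdifferentials of the convex functions f(x) = ½‖x − b‖² and g(x) = μ‖x‖₁ on R^n; both resolvents then reduce to closed-form proximal mappings, and the choices of f, g, b and μ are assumptions made purely for illustration.

    import numpy as np

    # Illustrative assumption: S = grad f with f(x) = 0.5*||x - b||^2 and
    # T = subdifferential of g(x) = mu*||x||_1.  Both are maximal monotone,
    # and the resolvents (I + lam*S)^(-1), (I + lam*T)^(-1) are the proximal
    # mappings of f and g, available in closed form.

    def resolvent_S(z, lam, b):
        # (I + lam*S)^(-1)(z) solves x + lam*(x - b) = z
        return (z + lam * b) / (1.0 + lam)

    def resolvent_T(z, lam, mu):
        # (I + lam*T)^(-1)(z) is soft-thresholding with threshold lam*mu
        return np.sign(z) * np.maximum(np.abs(z) - lam * mu, 0.0)

    def douglas_rachford(b, mu, lam=1.0, iters=200):
        """Find x with 0 in S(x) + T(x) via Douglas-Rachford splitting."""
        z = np.zeros_like(b)
        for _ in range(iters):
            x = resolvent_S(z, lam, b)               # first resolvent step
            y = resolvent_T(2 * x - z, lam, mu)      # reflected resolvent step
            z = z + (y - x)                          # update the governing sequence
        return resolvent_S(z, lam, b)

    x_star = douglas_rachford(b=np.array([3.0, -0.2, 1.5]), mu=1.0)
    print(x_star)   # approximates a zero of S + T, i.e. a minimizer of f + g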


Eckstein and Bertsekas [8] demonstrated that the Douglas–Rachford splitting method is an application of the proximal point algorithm and, consequently, that much of the theory of the proximal point algorithm and related algorithms applies to Douglas–Rachford splitting and its special cases, including the alternating direction method of multipliers. As a result, Eckstein and Bertsekas [8] introduced the generalized Douglas–Rachford splitting method, which in turn allows the derivation of a new augmented Lagrangian method for convex programming, the generalized alternating direction method of multipliers, which turned out to be faster than the alternating direction method of multipliers in some applications because it allows approximate computation. Quite recently, significant advances have been made on the general framework of the proximal point algorithm considered in [8], based on the local convergence of the over-relaxed proximal point algorithm and multiplier methods for a general class of inclusion problems with local monotonicity. Furthermore, as an application, a convergence result for the over-relaxed proximal point method of multipliers for nonlinear (nonconvex) programming can be derived.

We intend in this paper to explore the approximation solvability of a general class of nonlinear variational inclusion problems based on the (A, η)-resolvent operator technique in a Hilbert space setting. The iterative procedure adopted for approximating the solution is general in nature and includes several iterative algorithms as special cases, for instance algorithms for nonlinear set-valued variational inclusions involving (H, η)-monotone mappings in Hilbert spaces.

2. Preliminaries

Let X be a real Hilbert space with norm ‖·‖ and inner product ⟨·,·⟩. Let 2^X and C(X) denote, respectively, the family of all nonempty subsets of X and the family of all closed subsets of X. Let us recall the following definitions and some auxiliary results.

Definition 2.1. A mapping η : X × X → X is said to be τ-Lipschitz continuous if there exists a constant τ > 0 such that

    ‖η(x, y)‖ ≤ τ‖x − y‖   for all x, y ∈ X.

Definition 2.2. Let η : X × X → X and A, H : X → X be single-valued mappings. A set-valued mapping M : X → 2^X is said to be:

(i) monotone if

    ⟨u − v, x − y⟩ ≥ 0   for all x, y ∈ X, u ∈ M(x), v ∈ M(y);

(ii) η-monotone if

    ⟨u − v, η(x, y)⟩ ≥ 0   for all x, y ∈ X, u ∈ M(x), v ∈ M(y);

(iii) strictly η-monotone if

    ⟨u − v, η(x, y)⟩ > 0   for all x, y ∈ X, u ∈ M(x), v ∈ M(y) with x ≠ y;

(iv) r-strongly η-monotone if there exists a positive constant r such that

    ⟨u − v, η(x, y)⟩ ≥ r‖x − y‖²   for all x, y ∈ X, u ∈ M(x), v ∈ M(y);

(v) η-firmly nonexpansive if

    ‖u − v‖² ≤ ⟨u − v, η(x, y)⟩   for all x, y ∈ X, u ∈ M(x), v ∈ M(y);

(vi) (m, η)-relaxed monotone if there exists a positive constant m such that

    ⟨u − v, η(x, y)⟩ ≥ −m‖x − y‖²   for all x, y ∈ X, u ∈ M(x), v ∈ M(y);


(vii) maximal monotone if M is monotone and (I + λM)(X) = X for all λ > 0, where I denotes the identity mapping on X;

(viii) maximal η-monotone if M is η-monotone and (I + λM)(X) = X for all λ > 0;

(ix) A-monotone if M is m-relaxed monotone and (A + λM)(X) = X for all λ > 0;

(x) (A, η)-monotone if M is (m, η)-relaxed monotone and (A + λM)(X) = X for all λ > 0;

(xi) H-monotone if M is monotone and (H + λM)(X) = X for all λ > 0;

(xii) (H, η)-monotone if M is η-monotone and (H + λM)(X) = X for all λ > 0.

Lemma 2.1. Let η : X × X → X be a single-valued mapping, let A : X → X be an (r, η)-strongly monotone mapping, and let M : X → 2^X be an (A, η)-monotone mapping. Then the mapping (A + λM)^{-1} is single-valued.

Definition 2.3. Let η : X × X → X be a single-valued mapping, let A : X → X be a strictly η-monotone mapping, and let M : X → 2^X be an (A, η)-monotone mapping. The resolvent operator R^{A,η}_{M,λ} : X → X is defined by

    R^{A,η}_{M,λ}(z) = (A + λM)^{-1}(z)   for all z ∈ X,

where λ > 0 is a constant.

Lemma 2.2. Let η : X × X → X be a τ-Lipschitz continuous mapping, let A : X → X be an (r, η)-strongly monotone mapping, and let M : X → 2^X be an (A, η)-monotone mapping. Then the resolvent operator R^{A,η}_{M,λ} : X → X is τ/(r − λm)-Lipschitz continuous, that is,

    ‖R^{A,η}_{M,λ}(x) − R^{A,η}_{M,λ}(y)‖ ≤ (τ/(r − λm))‖x − y‖   for all x, y ∈ X.
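To illustrate Definition 2.3 and Lemma 2.2 in the simplest possible setting, the following sketch (an illustrative assumption, not part of the paper) computes the generalized resolvent on X = R^n with A(x) = r x, η(x, y) = x − y (so τ = 1), and M = ∂(μ‖·‖₁), which is maximal monotone and hence (m, η)-relaxed monotone for every m > 0, so that it is (A, η)-monotone; in this case (A + λM)^{-1} reduces to a scaled soft-thresholding operation.

    import numpy as np

    # Illustrative assumption (not from the paper): X = R^n, A(x) = r*x,
    # eta(x, y) = x - y (so tau = 1), M = subdifferential of mu*||.||_1.

    def resolvent_A_eta(z, r=2.0, lam=0.5, mu=1.0):
        """Generalized resolvent R^{A,eta}_{M,lam}(z) = (A + lam*M)^{-1}(z).

        Componentwise, r*x + lam*mu*sign(x) must contain z, which gives a
        scaled soft-thresholding formula.
        """
        return np.sign(z) * np.maximum(np.abs(z) - lam * mu, 0.0) / r

    # Quick check of the Lipschitz estimate of Lemma 2.2: with tau = 1 and m
    # arbitrarily small, the bound is essentially 1/r.
    x = np.array([2.0, -0.3, 5.0])
    y = np.array([1.0, 0.4, -2.0])
    lhs = np.linalg.norm(resolvent_A_eta(x) - resolvent_A_eta(y))
    rhs = np.linalg.norm(x - y) / 2.0    # tau/(r - lam*m) with r = 2, m -> 0
    print(lhs <= rhs + 1e-12)            # expected: True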

We define a Hausdorff pseudo-metric D : 2^X × 2^X → (−∞, +∞) ∪ {+∞} by

    D(U, V) = max{ sup_{u∈U} inf_{v∈V} ‖u − v‖,  sup_{v∈V} inf_{u∈U} ‖u − v‖ }

for any given U, V ∈ 2^X. Note that if the domain of D is restricted to closed bounded subsets, then D is the Hausdorff metric.

Definition 2.4. A set-valued mapping A : X → 2^X is said to be D-Lipschitz continuous if there exists a constant η > 0 such that

    D(A(u), A(v)) ≤ η‖u − v‖   for all u, v ∈ X.
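For finite sets the pseudo-metric D above reduces to the classical Hausdorff distance, and a direct transcription of the displayed formula is straightforward; the following small sketch (illustrative only) computes it for finite subsets of R^n represented as arrays of points.

    import numpy as np

    def hausdorff_distance(U, V):
        """D(U, V) = max( sup_{u in U} inf_{v in V} ||u - v||,
                          sup_{v in V} inf_{u in U} ||u - v|| )
        for finite sets U, V given as arrays of shape (k, n)."""
        # Pairwise distance matrix: dists[i, j] = ||U[i] - V[j]||
        dists = np.linalg.norm(U[:, None, :] - V[None, :, :], axis=2)
        d_UV = dists.min(axis=1).max()   # sup over U of inf over V
        d_VU = dists.min(axis=0).max()   # sup over V of inf over U
        return max(d_UV, d_VU)

    U = np.array([[0.0, 0.0], [1.0, 0.0]])
    V = np.array([[0.0, 0.5], [3.0, 0.0]])
    print(hausdorff_distance(U, V))   # 2.0: the point (3, 0) is 2 away from U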

Theorem 2.1. Let η : X × X → X be a τ-Lipschitz continuous mapping, let A : X → X be an (r, η)-strongly monotone mapping, and let M : X → 2^X be an (A, η)-monotone mapping. Then the resolvent operator R^{A,η}_{M,ρ} : X → X is (τ²/(r − ρm), η)-firmly nonexpansive.

Proof. Write R = R^{A,η}_{M,ρ} for brevity. For any elements u, v ∈ X, we have

    (1/ρ)[u − A(R(u))] ∈ M(R(u))   and   (1/ρ)[v − A(R(v))] ∈ M(R(v)).

Since M is (A, η)-monotone (and hence (m, η)-relaxed monotone), we have

    (1/ρ)⟨u − v − [A(R(u)) − A(R(v))], η(R(u), R(v))⟩ ≥ −m‖R(u) − R(v)‖².   (2.1)

Consequently, using the (r, η)-strong monotonicity of A and the τ-Lipschitz continuity of η, we have

    ⟨u − v, η(R(u), R(v))⟩ ≥ ⟨A(R(u)) − A(R(v)), η(R(u), R(v))⟩ − ρm‖R(u) − R(v)‖²
        ≥ (r − ρm)‖R(u) − R(v)‖²
        ≥ ((r − ρm)/τ²)‖η(R(u), R(v))‖².   (2.2)

It follows from (2.2) that

    ‖η(R(u), R(v))‖² ≤ (τ²/(r − ρm))⟨u − v, η(R(u), R(v))⟩.   (2.3)

This completes the proof. □

3. Inclusions and convergence analysis

This section deals with a new class of set-valued variational inclusions involving (A, η)-monotone mappings [31] in a Hilbert space setting. Let X be a real Hilbert space and let K be a nonempty, closed and convex subset of X. Let S : X × X → X, A : X → X and η : X × X → X be nonlinear mappings, let U : X → 2^X be a set-valued mapping, and let M : X → 2^X be an (A, η)-monotone mapping. The class of nonlinear set-valued variational inclusions is described as follows: determine elements a ∈ X and u ∈ U(a) such that

    0 ∈ S(a, u) + M(a).   (3.1)

A special case of (3.1) is: determine elements a ∈ X and u ∈ U(a) such that

    0 ∈ S(a, u) + M(a),   (3.2)

where M : X → 2^X is an (H, η)-monotone mapping [12].

Next, using the resolvent operator method associated with (A, η)-monotone mappings, a new iterative algorithm for solving problem (3.1) is introduced, and the convergence of the iterative sequence generated by the algorithm is examined.

Theorem 3.1. For given a ∈ X and u ∈ U(a), the pair (a, u) is a solution of problem (3.1) if and only if (a, u) satisfies the relation

    a = R^{A,η}_{M,ρ}[A(a) − ρS(a, u)],   (3.3)

where ρ > 0 is a constant.

Proof. This follows directly from Definition 2.3: 0 ∈ S(a, u) + M(a) holds if and only if A(a) − ρS(a, u) ∈ (A + ρM)(a), that is, if and only if a = (A + ρM)^{-1}[A(a) − ρS(a, u)] = R^{A,η}_{M,ρ}[A(a) − ρS(a, u)]. □

The relation (3.3) and Nadler's result [26] allow us to suggest the following iterative algorithm.

Algorithm 3.2.

Step 1. Choose a_0 ∈ X and u_0 ∈ U(a_0).

Step 2. Set

    a_{n+1} = (1 − λ)a_n + λR^{A,η}_{M,ρ}[A(a_n) − ρS(a_n, u_n)],   (3.4)

where 0 < λ ≤ 1 is a constant.

Step 3. Choose u_{n+1} ∈ U(a_{n+1}) such that

    ‖u_{n+1} − u_n‖ ≤ (1 + (1 + n)^{-1}) D(U(a_{n+1}), U(a_n)),   (3.5)

where D(·,·) is the Hausdorff pseudo-metric on 2^X.

Step 4. If a_{n+1} and u_{n+1} satisfy (3.4) to a sufficient accuracy, stop; otherwise, set n := n + 1 and return to Step 2.
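To illustrate Algorithm 3.2 numerically, the following sketch (an illustrative assumption, not taken from the paper) specializes to X = R^n, the single-valued case U(a) = {a}, A(x) = r x, η(x, y) = x − y, M = ∂(μ‖·‖₁), and S(a, u) = a − b for a fixed vector b; the generalized resolvent then has the closed form used earlier, and each step of the algorithm can be written explicitly. All specific constants below are assumptions chosen only for the demonstration.

    import numpy as np

    # Illustrative special case (assumptions, not from the paper):
    #   X = R^n, U(a) = {a}, A(x) = r*x, eta(x, y) = x - y,
    #   M = subdifferential of mu*||.||_1, S(a, u) = a - b for fixed b.
    # The inclusion 0 in S(a, u) + M(a) then has the closed-form solution
    # a* = soft_threshold(b, mu), which the iteration should approximate.

    def resolvent(z, r, rho, mu):
        # R^{A,eta}_{M,rho}(z) = (A + rho*M)^{-1}(z): scaled soft-thresholding
        return np.sign(z) * np.maximum(np.abs(z) - rho * mu, 0.0) / r

    def algorithm_3_2(b, r=1.0, mu=0.5, rho=0.8, lam=1.0, tol=1e-10, max_iter=500):
        a = np.zeros_like(b)                      # Step 1: initial guess a_0
        for _ in range(max_iter):
            u = a                                 # U(a) = {a}, so u_n = a_n
            S = a - b                             # S(a_n, u_n)
            a_next = (1 - lam) * a + lam * resolvent(r * a - rho * S, r, rho, mu)  # Step 2, (3.4)
            if np.linalg.norm(a_next - a) < tol:  # Step 4: stopping test
                return a_next
            a = a_next                            # Step 3 is trivial here; continue
        return a

    b = np.array([2.0, -0.1, 1.0])
    print(algorithm_3_2(b))                                  # approx. [1.5, 0.0, 0.5]
    print(np.sign(b) * np.maximum(np.abs(b) - 0.5, 0.0))     # exact solution for comparison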

Algorithm 3.3.

Step 1. Choose a_0 ∈ X and u_0 ∈ U(a_0).

Step 2. Set

    a_{n+1} = (1 − λ)a_n + λR^{H,η}_{M,ρ}[H(a_n) − ρS(a_n, u_n)],   (3.6)

where 0 < λ ≤ 1 is a constant.

Step 3. Choose u_{n+1} ∈ U(a_{n+1}) such that

    ‖u_{n+1} − u_n‖ ≤ (1 + (1 + n)^{-1}) D(U(a_{n+1}), U(a_n)),   (3.7)

where D(·,·) is the Hausdorff pseudo-metric on 2^X.

Step 4. If a_{n+1} and u_{n+1} satisfy (3.6) to a sufficient accuracy, stop; otherwise, set n := n + 1 and return to Step 2.

Theorem 3.4. Let η : X × X → X be a τ-Lipschitz continuous mapping, let A : X → X be an (r, η)-strongly monotone and β-Lipschitz continuous mapping, and let M : X → 2^X be an (A, η)-monotone mapping. Let U : X → C(X) be D-Lipschitz continuous with constant γ. Let S : X × X → X be a nonlinear mapping such that, for any given element (a, b) ∈ X × X, S(·, b) is μ-strongly monotone with respect to A and α-Lipschitz continuous, and S(a, ·) is ζ-Lipschitz continuous. Furthermore, if there exists a constant ρ with 0 < ρ < r/m such that

    τ√(β² − 2ρμ + ρ²α²) + τζγ < r − ρm,   (3.8)

then problem (3.1) has a solution (a, u), and the sequences {a_n} and {u_n} generated by Algorithm 3.2 converge strongly to a and u, respectively.

Proof. It follows from (3.4) and Lemma 2.2 that

    ‖a_{n+1} − a_n‖ = ‖(1 − λ)a_n + λR^{A,η}_{M,ρ}[A(a_n) − ρS(a_n, u_n)] − (1 − λ)a_{n−1} − λR^{A,η}_{M,ρ}[A(a_{n−1}) − ρS(a_{n−1}, u_{n−1})]‖
        ≤ (1 − λ)‖a_n − a_{n−1}‖ + λ‖R^{A,η}_{M,ρ}[A(a_n) − ρS(a_n, u_n)] − R^{A,η}_{M,ρ}[A(a_{n−1}) − ρS(a_{n−1}, u_{n−1})]‖
        ≤ (1 − λ)‖a_n − a_{n−1}‖ + λ(τ/(r − ρm))‖A(a_n) − A(a_{n−1}) − ρ[S(a_n, u_n) − S(a_{n−1}, u_{n−1})]‖
        ≤ (1 − λ)‖a_n − a_{n−1}‖ + λ(τ/(r − ρm))[‖A(a_n) − A(a_{n−1}) − ρ[S(a_n, u_n) − S(a_{n−1}, u_n)]‖ + ‖S(a_{n−1}, u_n) − S(a_{n−1}, u_{n−1})‖].   (3.9)

Since A is β-Lipschitz continuous and S(·, b) is μ-strongly monotone with respect to A and α-Lipschitz continuous, we obtain

    ‖A(a_n) − A(a_{n−1}) − ρ[S(a_n, u_n) − S(a_{n−1}, u_n)]‖²
        = ‖A(a_n) − A(a_{n−1})‖² − 2ρ⟨S(a_n, u_n) − S(a_{n−1}, u_n), A(a_n) − A(a_{n−1})⟩ + ρ²‖S(a_n, u_n) − S(a_{n−1}, u_n)‖²
        ≤ (β² − 2ρμ + ρ²α²)‖a_n − a_{n−1}‖².   (3.10)


It follows from (3.5) and the assumptions on S(a, ·) and U that

    ‖S(a_{n−1}, u_n) − S(a_{n−1}, u_{n−1})‖ ≤ ζ‖u_n − u_{n−1}‖ ≤ ζγ(1 + n^{-1})‖a_n − a_{n−1}‖.   (3.11)

In light of (3.9)–(3.11), we have

    ‖a_{n+1} − a_n‖ ≤ [1 − λ + λ(τ/(r − ρm))√(β² − 2ρμ + ρ²α²)]‖a_n − a_{n−1}‖ + λ(τ/(r − ρm))ζγ(1 + n^{-1})‖a_n − a_{n−1}‖.   (3.12)

Now (3.12) implies that

    ‖a_{n+1} − a_n‖ ≤ [1 − λ + λ((τ/(r − ρm))√(β² − 2ρμ + ρ²α²) + (τ/(r − ρm))ζγ(1 + n^{-1}))]‖a_n − a_{n−1}‖
        ≤ (1 − λ + λθ_n)‖a_n − a_{n−1}‖,   (3.13)

where

    θ_n = (τ/(r − ρm))√(β² − 2ρμ + ρ²α²) + (τ/(r − ρm))ζγ(1 + n^{-1}).

Set

    θ = (τ/(r − ρm))√(β² − 2ρμ + ρ²α²) + (τ/(r − ρm))ζγ.   (3.14)

Then θ_n → θ as n → ∞, and it follows from condition (3.8) that 0 < θ < 1. Therefore, by (3.13) and 0 < λ ≤ 1, the sequence {a_n} is a Cauchy sequence, and so there exists a ∈ X such that a_n → a as n → ∞.

Next we prove that u_n → u ∈ U(a) as n → ∞. As a matter of fact, it follows from (3.5) that {u_n} is also a Cauchy sequence, so there exists u ∈ X such that u_n → u as n → ∞. Furthermore,

    d(u, U(a)) = inf{‖u − t‖ : t ∈ U(a)}
        ≤ ‖u − u_n‖ + d(u_n, U(a))
        ≤ ‖u − u_n‖ + D(U(a_n), U(a))
        ≤ ‖u − u_n‖ + γ‖a_n − a‖ → 0.

Hence, since U(a) is closed, we have u ∈ U(a). By continuity, a and u satisfy the relation

    a = R^{A,η}_{M,ρ}[A(a) − ρS(a, u)].

Then from Theorem 3.1 it follows that (a, u) is a solution to problem (3.1). This completes the proof. □
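As a rough numerical illustration of condition (3.8) and of the constant θ defined in (3.14), the following sketch evaluates θ as a function of ρ over the admissible range 0 < ρ < r/m and reports where the contraction requirement θ < 1 holds; the numerical values of the constants are purely illustrative assumptions, not data from the paper.

    import numpy as np

    def theta(rho, tau, r, m, beta, mu, alpha, zeta, gamma):
        """Contraction constant from (3.14); condition (3.8) amounts to theta < 1."""
        return (tau / (r - rho * m)) * (np.sqrt(beta**2 - 2*rho*mu + rho**2 * alpha**2)
                                        + zeta * gamma)

    # Purely illustrative constants (assumptions, not from the paper).
    tau, r, m = 1.0, 5.0, 0.5
    beta, mu, alpha = 2.0, 3.0, 2.5
    zeta, gamma = 0.5, 0.4

    rhos = np.linspace(0.01, r / m - 0.01, 500)     # admissible range 0 < rho < r/m
    vals = theta(rhos, tau, r, m, beta, mu, alpha, zeta, gamma)
    good = rhos[vals < 1.0]
    if good.size:
        print(f"theta < 1 for rho in roughly [{good.min():.3f}, {good.max():.3f}]")
    else:
        print("condition (3.8) fails for every rho in (0, r/m) with these constants")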

Corollary 3.5. Let η : X × X → X be a τ-Lipschitz continuous mapping, let H : X → X be an (r, η)-strongly monotone and β-Lipschitz continuous mapping, and let M : X → 2^X be an (H, η)-monotone mapping. Let U : X → C(X) be D-Lipschitz continuous with constant γ. Let S : X × X → X be a nonlinear mapping such that, for any given element (a, b) ∈ X × X, S(·, b) is μ-strongly monotone with respect to H and α-Lipschitz continuous, and S(a, ·) is ζ-Lipschitz continuous. Furthermore, if there exists a constant ρ > 0 such that

    τ√(β² − 2ρμ + ρ²α²) + τζγ < r,   (3.15)

then problem (3.2) admits a solution (a, u), and the sequences {a_n} and {u_n} generated by Algorithm 3.3 converge strongly to a and u, respectively.


References

[1] S. Adly, Perturbed algorithm and sensitivity analysis for a general class of variational inclusions, J. Math. Anal. Appl. 201 (1996) 609–630.
[2] R.P. Agarwal, N.J. Huang, Y.J. Cho, Generalized nonlinear mixed implicit quasi-variational inclusions with set-valued mappings, J. Inequal. Appl. 7 (6) (2002) 807–828.
[3] R. Ahmad, Q.H. Ansari, An iterative algorithm for generalized nonlinear variational inclusions, Appl. Math. Lett. 13 (5) (2002) 23–26.
[4] S.S. Chang, Y.J. Cho, H.Y. Zhou, Iterative Methods for Nonlinear Operator Equations in Banach Spaces, Nova Sci. Publ., New York, 2002.
[5] Y.J. Cho, Y.P. Fang, N.J. Huang, H.J. Hwang, Algorithms for systems of nonlinear variational inequalities, J. Korean Math. Soc. 41 (2004) 489–499.
[6] X.P. Ding, C.L. Luo, Perturbed proximal point algorithms for generalized quasi-variational-like inclusions, J. Comput. Appl. Math. 210 (2000) 153–165.
[7] J. Douglas, H.H. Rachford, On the numerical solution of heat conduction problems in two and three space variables, Trans. Amer. Math. Soc. 82 (1956) 421–439.
[8] J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Program. 55 (1992) 293–318.
[9] J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Program. 55 (1992) 293–318.
[10] Y.P. Fang, N.J. Huang, H-monotone operator and resolvent operator technique for variational inclusions, Appl. Math. Comput. 145 (2003) 795–803.
[11] Y.P. Fang, N.J. Huang, H-monotone operators and system of variational inclusions, Comm. Appl. Nonlinear Anal. 11 (1) (2004) 93–101.
[12] Y.P. Fang, N.J. Huang, H.B. Thompson, A new system of variational inclusions with (H, η)-monotone operators in Hilbert spaces, Comput. Math. Appl. 49 (2005) 365–374.
[13] F. Giannessi, A. Maugeri, Variational Inequalities and Network Equilibrium Problems, Plenum Press, New York, 1995.
[14] A. Hassouni, A. Moudafi, A perturbed algorithm for variational inequalities, J. Math. Anal. Appl. 185 (1994) 706–712.
[15] N.J. Huang, Generalized nonlinear variational inclusions with noncompact valued mapping, Appl. Math. Lett. 9 (3) (1996) 25–29.
[16] N.J. Huang, Nonlinear implicit quasi-variational inclusions involving generalized m-accretive mappings, Arch. Inequal. Appl. 2 (2004) 403–416.
[17] N.J. Huang, Mann and Ishikawa type perturbed iterative algorithms for generalized nonlinear implicit quasi-variational inclusions, Comput. Math. Appl. 35 (10) (1998) 1–7.
[18] N.J. Huang, A new completely general class of variational inclusions with noncompact valued mappings, Comput. Math. Appl. 35 (10) (1998) 9–14.
[19] N.J. Huang, Y.P. Fang, A new class of general variational inclusions involving maximal η-monotone mappings, Publ. Math. Debrecen 62 (2003) 83–98.
[20] M.M. Jin, Iterative algorithm for a new system of nonlinear set-valued variational inclusions involving (H, η)-monotone mappings, J. Inequal. Pure Appl. Math. 7 (2) (2006), Article 72.
[21] M.M. Jin, Q.K. Liu, Nonlinear quasi-variational inclusions involving generalized m-accretive mappings, Nonlinear Funct. Anal. Appl. 9 (3) (2004) 485–494.
[22] M.M. Jin, Generalized nonlinear implicit quasi-variational inclusions with relaxed monotone mappings, Adv. Nonlinear Var. Inequal. 7 (2) (2004) 173–181.
[23] G. Kassay, J. Kolumban, System of multi-valued variational inequalities, Publ. Math. Debrecen 56 (2000) 185–195.
[24] J.K. Kim, D.S. Kim, A new system of generalized nonlinear mixed variational inequalities in Hilbert spaces, J. Convex Anal. 11 (2004) 117–124.
[25] T.L. Magnanti, G. Perakis, Averaging schemes for variational inequalities and systems of equations, Math. Oper. Res. 22 (3) (1997) 568–587.
[26] S.B. Nadler, Multi-valued contraction mappings, Pacific J. Math. 30 (1969) 475–488.
[27] K.R. Kazmi, Mann and Ishikawa type perturbed iterative algorithms for generalized quasivariational inclusions, J. Math. Anal. Appl. 209 (1997) 572–584.
[28] R.U. Verma, Nonlinear A-monotone variational inclusion systems and the resolvent operator technique, J. Appl. Funct. Anal. 1 (1) (2006) 183–190.
[29] R.U. Verma, A-monotonicity and its role in nonlinear variational inclusions, J. Optim. Theory Appl. 129 (3) (2006) 457–467.
[30] R.U. Verma, General nonlinear variational inclusion problems involving A-monotone mappings, Appl. Math. Lett. 19 (9) (2006) 960–963.
[31] R.U. Verma, Sensitivity analysis for generalized strongly monotone variational inclusions based on the (A, η)-resolvent operator technique, Appl. Math. Lett. 19 (2006) 1409–1413.
[32] R.U. Verma, Averaging techniques and cocoercively monotone mappings, Math. Sci. Res. J. 10 (3) (2006) 79–82.
