Fixed Point Theory, 13(2012), No. 2, 403-422 http://www.math.ubbcluj.ro/~nodeacj/sfptcj.html

MANN TYPE HYBRID EXTRAGRADIENT METHOD FOR VARIATIONAL INEQUALITIES, VARIATIONAL INCLUSIONS AND FIXED POINT PROBLEMS

L.-C. CENG∗, Q. H. ANSARI∗∗, M. M. WONG∗∗∗ AND J.-C. YAO∗∗∗∗

∗ Department of Mathematics, Shanghai Normal University, Shanghai 200234, and Scientific Computing Key Laboratory of Shanghai Universities, China. E-mail: [email protected]

∗∗ Department of Mathematics, Aligarh Muslim University, Aligarh 202 002, India. E-mail: [email protected]

∗∗∗ Department of Applied Mathematics, Chung Yuan Christian University, Chung Li 32023, Taiwan. E-mail: [email protected]

∗∗∗∗ Center for General Education, Kaohsiung Medical University, Kaohsiung 80708, Taiwan. E-mail: [email protected]

Abstract. Recently, Nadezhkina and Takahashi [N. Nadezhkina, W. Takahashi, Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings, SIAM J. Optim. 16 (4) (2006) 1230-1241] introduced an iterative algorithm for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality in a real Hilbert space by combining two well-known methods: hybrid and extragradient. In this paper, we investigate the problem of finding a common solution of a variational inequality, a variational inclusion and a fixed point problem of a nonexpansive mapping in a real Hilbert space. Motivated by Nadezhkina and Takahashi's hybrid-extragradient method, we propose and analyze a Mann type hybrid-extragradient algorithm for finding a common solution. It is proven that the three sequences generated by this algorithm converge strongly to the same common solution under very mild conditions. Based on this result, we also construct an iterative algorithm for finding a common fixed point of three mappings, such that one of these mappings is nonexpansive while the other two are taken from the more general class of Lipschitz pseudocontractive mappings and from the more general class of strictly pseudocontractive mappings, respectively.

Key Words and Phrases: Variational inclusion, variational inequality, fixed point, nonexpansive mapping, inverse strongly monotone mapping, maximal monotone mapping, strong convergence.

2010 Mathematics Subject Classification: 49J40, 47J20, 47H10, 65K05, 47H09.

∗∗∗ Corresponding author. In this research, the first author was partially supported by the National Science Foundation of China (11071169), the Leading Academic Discipline Project of Shanghai Normal University (DZL707) and the Innovation Program of Shanghai Municipal Education Commission (09ZZ133). Part of the research of the second author was done during his visit to King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia; he is grateful to that university for providing excellent research facilities during his visit. The fourth author was partially supported by grant NSC 99-2221-E-037-007-MY3.


1. Introduction

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖. Let C be a nonempty closed convex subset of H and let PC be the metric projection from H onto C. A mapping A of C into H is called monotone if

⟨Au − Av, u − v⟩ ≥ 0,  ∀u, v ∈ C.

A mapping A of C into H is called k-Lipschitz continuous if there exists a constant k > 0 such that

‖Au − Av‖ ≤ k‖u − v‖,  ∀u, v ∈ C.

Let the mapping A from C to H be monotone and Lipschitz continuous. The variational inequality problem is to find u ∈ C such that

⟨Au, v − u⟩ ≥ 0,  ∀v ∈ C.      (1.1)

The solution set of the variational inequality (1.1) is denoted by VI(C, A). The variational inequality was first discussed by Lions [16] and is now well known; there are various approaches to solving this problem in finite-dimensional and infinite-dimensional spaces, and the research continues intensively. This problem has many applications in partial differential equations, optimal control, mathematical economics, optimization, mathematical programming, mechanics, and other fields; see, e.g., [10,20,31].

Meanwhile, to construct a mathematical model that is as close as possible to a real complex problem, we often have to use more than one constraint. In solving such problems, we seek a solution that is simultaneously the solution of two or more subproblems, or the solution of one subproblem on the solution set of another. These subproblems can be of different types. For example, Antipin considered a finite-dimensional variant of the variational inequality where the solution should satisfy some related constraint in inequality form [1] or some system of constraints in inequality and equality form [2]. Yamada [30] considered an infinite-dimensional variant, seeking the solution of the variational inequality on the fixed point set of some mapping.

A mapping A of C into H is called α-inverse strongly monotone if there exists a constant α > 0 such that

⟨Au − Av, u − v⟩ ≥ α‖Au − Av‖²,  ∀u, v ∈ C;

see [6,17]. It is obvious that an α-inverse strongly monotone mapping A is monotone and Lipschitz continuous. A mapping S of C into itself is called nonexpansive if

‖Su − Sv‖ ≤ ‖u − v‖,  ∀u, v ∈ C;

see [28]. We denote by F(S) the fixed point set of S; i.e., F(S) = {x ∈ C : Sx = x}. A set-valued mapping M with domain D(M) and range R(M) in H is called monotone if its graph G(M) = {(x, f) ∈ H × H : x ∈ D(M), f ∈ Mx} is a monotone set in H × H; i.e., M is monotone if and only if (x, f), (y, g) ∈ G(M) implies ⟨x − y, f − g⟩ ≥ 0. A monotone set-valued mapping M is called maximal if its graph G(M) is not properly contained in the graph of any other monotone mapping in H.
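These definitions are easy to probe numerically. The following sketch uses a toy example of our own (not from the paper): in H = R², the linear mapping A(x₁, x₂) = (2x₁, x₂) is α-inverse strongly monotone with α = 1/2, and the assertions illustrate the remark above that such a mapping is automatically monotone and Lipschitz continuous (with constant 1/α).

```python
# A small numerical check (our own toy example, not from the paper): in H = R^2,
# the linear mapping A(x) = (2*x1, x2) is alpha-inverse strongly monotone with
# alpha = 1/2, hence monotone and (1/alpha)-Lipschitz continuous.

import itertools, random

random.seed(0)
alpha = 0.5

def A(x):
    return (2.0 * x[0], x[1])

def dot(u, v): return u[0] * v[0] + u[1] * v[1]
def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def norm(u):   return dot(u, u) ** 0.5

pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
for u, v in itertools.combinations(pts, 2):
    du, dv = sub(u, v), sub(A(u), A(v))
    # alpha-inverse strong monotonicity: <Au - Av, u - v> >= alpha*||Au - Av||^2
    assert dot(dv, du) >= alpha * dot(dv, dv) - 1e-9
    # ... which implies monotonicity ...
    assert dot(dv, du) >= -1e-9
    # ... and, by Cauchy-Schwarz, Lipschitz continuity with constant 1/alpha
    assert norm(dv) <= (1.0 / alpha) * norm(du) + 1e-9
```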


Let Φ be a single-valued mapping of C into H and M be a multivalued mapping with D(M) = C. Consider the following variational inclusion: find u ∈ C such that

0 ∈ Φ(u) + Mu.      (1.2)

We denote by Ω the solution set of the variational inclusion (1.2). In particular, if Φ = M = 0, then Ω = C. In 1998, Huang [7] studied problem (1.2) in the case where M is maximal monotone and Φ is strongly monotone and Lipschitz continuous with D(M) = C = H. Subsequently, Zeng, Guu and Yao [13] studied problem (1.2) in a setting more general than Huang's [7] and obtained the same strong convergence conclusion; they also gave a geometric convergence rate estimate for the approximate solutions.

In 2003, for finding an element of F(S) ∩ VI(C, A) under the assumption that a set C ⊂ H is nonempty, closed and convex, a mapping S of C into itself is nonexpansive and a mapping A of C into H is α-inverse strongly monotone, Takahashi and Toyoda [29] introduced the following iterative algorithm:

xn+1 = αn xn + (1 − αn)S PC(xn − λn Axn),      (1.3)

for every n = 0, 1, 2, ..., where x0 = x ∈ C is chosen arbitrarily, {αn} is a sequence in (0, 1), and {λn} is a sequence in (0, 2α). They showed that if F(S) ∩ VI(C, A) is nonempty, then the sequence {xn} generated by (1.3) converges weakly to some z ∈ F(S) ∩ VI(C, A).

In 2006, to solve this problem (i.e., to find an element of F(S) ∩ VI(C, A)), Iiduka and Takahashi [12] introduced the following iterative scheme by a hybrid method:

  yn = αn xn + (1 − αn)S PC(xn − λn Axn),
  Cn = {z ∈ C : ‖yn − z‖ ≤ ‖xn − z‖},
  Qn = {z ∈ C : ⟨xn − z, x − xn⟩ ≥ 0},      (1.4)
  xn+1 = PCn∩Qn x,

for every n = 0, 1, 2, ..., where x0 = x ∈ C is chosen arbitrarily, 0 ≤ αn ≤ c < 1 and 0 < a ≤ λn ≤ b < 2α. They proved that if F(S) ∩ VI(C, A) is nonempty, then the sequence {xn} generated by (1.4) converges strongly to PF(S)∩VI(C,A) x. Generally speaking, the algorithm suggested by Iiduka and Takahashi is based on two well-known types of methods: the projection-type method for solving the variational inequality and the so-called hybrid or outer-approximation method for solving the fixed point problem. The idea of "hybrid" or "outer-approximation" methods was originally introduced by Haugazeau in 1968 and has been successfully generalized and extended in many papers; see, e.g., [3-5,8,18,25].

It is easy to see that the class of α-inverse strongly monotone mappings appearing in the above-mentioned problem of Takahashi and Toyoda [29] is quite an important class among the various classes of well-known mappings. While α-inverse strongly monotone mappings are tightly connected with the important class of nonexpansive mappings, they are also tightly connected with the more general, and also quite important, class of strictly pseudocontractive mappings. (A mapping T : C → C is called κ-strictly pseudocontractive if there exists a constant 0 ≤ κ < 1 such that ‖Tx − Ty‖² ≤ ‖x − y‖² + κ‖(I − T)x − (I − T)y‖² for


all x, y ∈ C.) That is, if a mapping T : C → C is nonexpansive, then the mapping I − T is 1/2-inverse strongly monotone; moreover, F(T) = VI(C, I − T) (see, e.g., [29]). At the same time, if a mapping T : C → C is κ-strictly pseudocontractive, then the mapping I − T is ((1 − κ)/2)-inverse strongly monotone and (2/(1 − κ))-Lipschitz continuous.

In 1976, for finding a solution of the variational inequality in the finite-dimensional Euclidean space Rⁿ under the assumption that a set C ⊂ Rⁿ is nonempty, closed and convex and a mapping A : C → Rⁿ is monotone and k-Lipschitz-continuous, Korpelevich [15] introduced the following so-called extragradient method:

  yn = PC(xn − λAxn),
  xn+1 = PC(xn − λAyn),      (1.5)

for every n = 0, 1, 2, ..., where x0 = x ∈ C is chosen arbitrarily and λ ∈ (0, 1/k). He showed that if VI(C, A) is nonempty, then the sequences {xn} and {yn} generated by (1.5) converge to the same point z ∈ VI(C, A). The idea of the extragradient iterative algorithm introduced by Korpelevich [15] has been successfully generalized and extended not only in Euclidean but also in Hilbert and Banach spaces; see, e.g., [9,11,19,24,26].

In 2006, by combining hybrid and extragradient methods, Nadezhkina and Takahashi [22] introduced an iterative algorithm for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of the variational inequality for a monotone, Lipschitz-continuous mapping in a real Hilbert space. They gave a strong convergence theorem for the three sequences generated by this algorithm.

Theorem 1.1 [22, Theorem 3.1] Let C be a nonempty closed convex subset of a real Hilbert space H. Let A : C → H be a monotone and k-Lipschitz-continuous mapping and let S : C → C be a nonexpansive mapping such that F(S) ∩ VI(C, A) ≠ ∅.
Let {xn}, {yn} and {zn} be the sequences generated by

  yn = PC(xn − λn Axn),
  zn = αn xn + (1 − αn)S PC(xn − λn Ayn),
  Cn = {z ∈ C : ‖zn − z‖ ≤ ‖xn − z‖},      (1.6)
  Qn = {z ∈ C : ⟨xn − z, x − xn⟩ ≥ 0},
  xn+1 = PCn∩Qn x,

for every n = 0, 1, 2, ..., where x0 = x ∈ C is chosen arbitrarily, {λn} ⊂ [a, b] for some a, b ∈ (0, 1/k) and {αn} ⊂ [0, c] for some c ∈ [0, 1). Then the sequences {xn}, {yn} and {zn} converge strongly to PF(S)∩VI(C,A) x.

On the other hand, the construction of fixed points of nonexpansive mappings via Mann's algorithm [14] has been extensively investigated in the literature (see, e.g., [32,33] and the references therein). Initializing with an arbitrary x0 ∈ C, Mann's algorithm generates a sequence by the recursion

xn+1 = αn xn + (1 − αn)Sxn,

(1.7)

for every n = 0, 1, 2, ..., where S : C → C is a nonexpansive mapping and {αn} is a real control sequence in the interval [0, 1]. If S is a nonexpansive mapping with a fixed point and the control sequence {αn} is chosen so that the series ∑ αn(1 − αn) diverges, then the sequence {xn} generated by Mann's algorithm (1.7) converges weakly to a


fixed point of S. (This is indeed true in a uniformly convex Banach space with a Fréchet differentiable norm [33].)

In this paper, let A : C → H be a monotone and k-Lipschitz-continuous mapping, Φ : C → H be an α-inverse strongly monotone mapping, M be a maximal monotone mapping with D(M) = C and S : C → C be a nonexpansive mapping such that F(S) ∩ Ω ∩ VI(C, A) ≠ ∅. By combining Nadezhkina and Takahashi's hybrid-extragradient algorithm (1.6) and Mann's algorithm (1.7), we introduce the following Mann type hybrid-extragradient algorithm:

  yn = PC(xn − λn Axn),
  tn = PC(xn − λn Ayn),
  t̂n = JM,µn(tn − µn Φ(tn)),
  zn = (1 − αn − α̂n)xn + αn t̂n + α̂n S t̂n,      (1.8)
  Cn = {z ∈ C : ‖zn − z‖ ≤ ‖xn − z‖},
  Qn = {z ∈ C : ⟨xn − z, x − xn⟩ ≥ 0},
  xn+1 = PCn∩Qn x

for every n = 0, 1, 2, ..., where JM,µn = (I + µn M)⁻¹, x0 = x ∈ C is chosen arbitrarily, {λn} ⊂ (0, 1/k), {µn} ⊂ (0, 2α] and {αn}, {α̂n} ⊂ (0, 1] are such that αn + α̂n ≤ 1. It is proven that under very mild conditions the three sequences {xn}, {yn}, {zn} generated by (1.8) converge strongly to the same point PF(S)∩Ω∩VI(C,A) x. It is worth pointing out that whenever Φ = M = 0, we have Ω = C; in this case, the problem of finding an element of F(S) ∩ Ω ∩ VI(C, A) reduces to that of finding an element of F(S) ∩ VI(C, A). Thus, our result improves and extends the corresponding result of Nadezhkina and Takahashi [22], i.e., Theorem 1.1 above. Based on our main result, we also construct an iterative algorithm for finding a common fixed point of three mappings, one of which is nonexpansive while the other two are taken from the more general class of Lipschitz pseudocontractive mappings and from the more general class of strictly pseudocontractive mappings, respectively.

2. Preliminaries

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖ and C be a nonempty closed convex subset of H.
We write xn → x to indicate that the sequence {xn} converges strongly to x and xn ⇀ x to indicate that {xn} converges weakly to x. Moreover, we use ωw(xn) to denote the weak ω-limit set of the sequence {xn}, i.e.,

ωw(xn) := {x : xni ⇀ x for some subsequence {xni} of {xn}}.

For every point x ∈ H, there exists a unique nearest point in C, denoted by PC x, such that

‖x − PC x‖ ≤ ‖x − y‖,  ∀y ∈ C.


PC is called the metric projection of H onto C. We know that PC is a firmly nonexpansive mapping of H onto C; that is, there holds the relation

⟨PC x − PC y, x − y⟩ ≥ ‖PC x − PC y‖²,  ∀x, y ∈ H.

Consequently, PC is nonexpansive and monotone. It is also known that PC is characterized by the following properties: PC x ∈ C and

⟨x − PC x, PC x − y⟩ ≥ 0,      (2.1)

‖x − y‖² ≥ ‖x − PC x‖² + ‖y − PC x‖²,      (2.2)

for all x ∈ H, y ∈ C; see [28,34] for more details. Let A : C → H be a monotone mapping. In the context of the variational inequality, this implies that

x ∈ VI(C, A)  ⇔  x = PC(x − λAx),  ∀λ > 0.      (2.3)

It is also known that H satisfies Opial's condition [21]. That is, for any sequence {xn} with xn ⇀ x, the inequality

lim infn→∞ ‖xn − x‖ < lim infn→∞ ‖xn − y‖      (2.4)

holds for every y ∈ H with y ≠ x. A set-valued mapping M : D(M) ⊂ H → 2^H is called monotone if for all x, y ∈ D(M), f ∈ Mx and g ∈ My imply ⟨f − g, x − y⟩ ≥ 0. A set-valued mapping M is called maximal monotone if M is monotone and (I + λM)D(M) = H for each λ > 0, where I is the identity mapping of H. We denote by G(M) the graph of M. It is known that a monotone mapping M is maximal if and only if, for (x, f) ∈ H × H, ⟨f − g, x − y⟩ ≥ 0 for every (y, g) ∈ G(M) implies f ∈ Mx.

Let A : C → H be a monotone, k-Lipschitz-continuous mapping and let NC v be the normal cone to C at v ∈ C, i.e.,

NC v = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ C}.

Define

T v = { Av + NC v, if v ∈ C;  ∅, if v ∉ C.

Then T is maximal monotone and 0 ∈ T v if and only if v ∈ VI(C, A); see [23]. Assume that M : D(M) ⊂ H → 2^H is a maximal monotone mapping. Then, for λ > 0, the resolvent operator JM,λ associated with M is defined as

JM,λ x = (I + λM)⁻¹ x,  ∀x ∈ H.

In terms of Huang [7] (see also [13]), the resolvent operator JM,λ : H → H has the following property.

Lemma 2.1 JM,λ is single-valued and firmly nonexpansive, i.e.,

⟨JM,λ x − JM,λ y, x − y⟩ ≥ ‖JM,λ x − JM,λ y‖²,  ∀x, y ∈ H.

Consequently, JM,λ is nonexpansive and monotone.

Lemma 2.2 [35] There holds the relation:

‖λx + µy + νz‖² = λ‖x‖² + µ‖y‖² + ν‖z‖² − λµ‖x − y‖² − µν‖y − z‖² − λν‖x − z‖²

for all x, y, z ∈ H and λ, µ, ν ∈ [0, 1] with λ + µ + ν = 1.
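For intuition about the resolvent, here is a sketch under assumptions of our own choosing (not from the paper): in H = R, take M = ∂f for f(x) = |x|, a maximal monotone mapping whose resolvent JM,λ is the classical soft-thresholding map; the assertions check the firm nonexpansiveness stated in Lemma 2.1 on a small grid of points.

```python
# Numerical sketch (our own illustration): in H = R, take the maximal monotone
# mapping M = subdifferential of f(x) = |x|.  Its resolvent J_{M,lam} = (I + lam*M)^{-1}
# is the soft-thresholding map, and Lemma 2.1 says it is firmly nonexpansive.

def resolvent_abs(x, lam):
    """J_{M,lam} x for M = the subdifferential of |.|: soft-thresholding at lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

lam = 0.7
pts = [-2.0, -0.3, 0.0, 0.5, 1.9]
for x in pts:
    for y in pts:
        jx, jy = resolvent_abs(x, lam), resolvent_abs(y, lam)
        # firm nonexpansiveness: <Jx - Jy, x - y> >= |Jx - Jy|^2
        assert (jx - jy) * (x - y) >= (jx - jy) ** 2 - 1e-12
```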


Lemma 2.3 Let M be a maximal monotone mapping with D(M) = C. Then for any given λ > 0, u ∈ C is a solution of problem (1.2) if and only if u ∈ C satisfies

u = JM,λ(u − λΦ(u)).

Proof. 0 ∈ Φ(u) + Mu ⇔ u − λΦ(u) ∈ u + λMu ⇔ u = (I + λM)⁻¹(u − λΦ(u)) ⇔ u = JM,λ(u − λΦ(u)). □

Lemma 2.4 [13] Let M be a maximal monotone mapping with D(M) = C and let V : C → H be a strongly monotone, continuous and single-valued mapping. Then for each z ∈ H, the equation z ∈ V x + λMx has a unique solution xλ for λ > 0.

Lemma 2.5 Let M be a maximal monotone mapping with D(M) = C and A : C → H be a monotone, continuous and single-valued mapping. Then (I + λ(M + A))C = H for each λ > 0. In this case, M + A is maximal monotone.

Proof. For each fixed λ > 0, put V = I + λA. Then V : C → H is a strongly monotone, continuous and single-valued mapping. In terms of Lemma 2.4, we obtain (V + λM)C = H; that is, (I + λ(M + A))C = H. It is clear that M + A is monotone. Therefore, M + A is maximal monotone. □

3. Strong Convergence Theorem

In this section we prove a strong convergence theorem by the Mann type hybrid-extragradient method for finding a common solution of a variational inequality, a variational inclusion and a fixed point problem of a nonexpansive mapping in a real Hilbert space.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let A : C → H be a monotone and k-Lipschitz-continuous mapping, Φ : C → H be an α-inverse strongly monotone mapping, M be a maximal monotone mapping with D(M) = C and S : C → C be a nonexpansive mapping such that F(S) ∩ Ω ∩ VI(C, A) ≠ ∅. For x0 = x ∈ C chosen arbitrarily, let {xn}, {yn} and {zn} be the sequences generated by

  yn = PC(xn − λn Axn),
  tn = PC(xn − λn Ayn),
  t̂n = JM,µn(tn − µn Φ(tn)),
  zn = (1 − αn − α̂n)xn + αn t̂n + α̂n S t̂n,
  Cn = {z ∈ C : ‖zn − z‖ ≤ ‖xn − z‖},
  Qn = {z ∈ C : ⟨xn − z, x − xn⟩ ≥ 0},
  xn+1 = PCn∩Qn x

for every n = 0, 1, 2, ..., where {λn} ⊂ [a, b] for some a, b ∈ (0, 1/k), {µn} ⊂ [ε, 2α] for some ε ∈ (0, 2α], and {αn}, {α̂n} ⊂ [c, 1] for some c ∈ (0, 1], such that αn + α̂n ≤ 1 for every n = 0, 1, 2, .... Then the sequences {xn}, {yn} and {zn} converge strongly to PF(S)∩Ω∩VI(C,A) x.


Proof. It is obvious that Cn is closed and Qn is closed and convex for every n = 0, 1, 2, .... As

Cn = {z ∈ C : ‖zn − xn‖² + 2⟨zn − xn, xn − z⟩ ≤ 0},

we also have that Cn is convex for every n = 0, 1, 2, .... As Qn = {z ∈ C : ⟨xn − z, x − xn⟩ ≥ 0}, we have ⟨xn − z, x − xn⟩ ≥ 0 for all z ∈ Qn and hence xn = PQn x by (2.1). We divide the remainder of the proof into several steps.

Step 1. We claim that F(S) ∩ Ω ∩ VI(C, A) ⊂ Cn ∩ Qn for every n = 0, 1, 2, .... Indeed, take a fixed u ∈ F(S) ∩ Ω ∩ VI(C, A) arbitrarily. From (2.2), the monotonicity of A, and u ∈ VI(C, A), we have

‖tn − u‖² ≤ ‖xn − λn Ayn − u‖² − ‖xn − λn Ayn − tn‖²
= ‖xn − u‖² − ‖xn − tn‖² + 2λn⟨Ayn, u − tn⟩
= ‖xn − u‖² − ‖xn − tn‖² + 2λn(⟨Ayn − Au, u − yn⟩ + ⟨Au, u − yn⟩ + ⟨Ayn, yn − tn⟩)
≤ ‖xn − u‖² − ‖xn − tn‖² + 2λn⟨Ayn, yn − tn⟩
= ‖xn − u‖² − ‖xn − yn‖² − 2⟨xn − yn, yn − tn⟩ − ‖yn − tn‖² + 2λn⟨Ayn, yn − tn⟩
= ‖xn − u‖² − ‖xn − yn‖² − ‖yn − tn‖² + 2⟨xn − λn Ayn − yn, tn − yn⟩.

Further, since yn = PC(xn − λn Axn) and A is k-Lipschitz-continuous, from (2.1) we have

⟨xn − λn Ayn − yn, tn − yn⟩ = ⟨xn − λn Axn − yn, tn − yn⟩ + ⟨λn Axn − λn Ayn, tn − yn⟩
≤ ⟨λn Axn − λn Ayn, tn − yn⟩
≤ λn k ‖xn − yn‖ ‖tn − yn‖.

So, we obtain

‖tn − u‖² ≤ ‖xn − u‖² − ‖xn − yn‖² − ‖yn − tn‖² + 2λn k ‖xn − yn‖ ‖tn − yn‖
≤ ‖xn − u‖² − ‖xn − yn‖² − ‖yn − tn‖² + λn² k² ‖xn − yn‖² + ‖yn − tn‖²
= ‖xn − u‖² + (λn² k² − 1)‖xn − yn‖²
≤ ‖xn − u‖².      (3.1)


Also, since zn = (1 − αn − α̂n)xn + αn t̂n + α̂n S t̂n, u = Su and u = JM,µn(u − µn Φ(u)), utilizing Lemma 2.2 we get from (3.1)

‖zn − u‖² = ‖(1 − αn − α̂n)(xn − u) + αn(t̂n − u) + α̂n(S t̂n − u)‖²
≤ (1 − αn − α̂n)‖xn − u‖² + αn‖t̂n − u‖² + α̂n‖S t̂n − u‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖t̂n − u‖² − αn α̂n‖t̂n − S t̂n‖²
= (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖JM,µn(tn − µn Φ(tn)) − JM,µn(u − µn Φ(u))‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖(tn − µn Φ(tn)) − (u − µn Φ(u))‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)[‖tn − u‖² + µn(µn − 2α)‖Φ(tn) − Φ(u)‖²] − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖tn − u‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)[‖xn − u‖² + (λn² k² − 1)‖xn − yn‖²] − αn α̂n‖t̂n − S t̂n‖²
= ‖xn − u‖² + (αn + α̂n)(λn² k² − 1)‖xn − yn‖² − αn α̂n‖t̂n − S t̂n‖²
≤ ‖xn − u‖²      (3.2)

for every n = 0, 1, 2, ... and hence u ∈ Cn. So, F(S) ∩ Ω ∩ VI(C, A) ⊂ Cn for every n = 0, 1, 2, ....

Next, let us show by mathematical induction that {xn} is well-defined and F(S) ∩ Ω ∩ VI(C, A) ⊂ Cn ∩ Qn for every n = 0, 1, 2, .... For n = 0 we have Q0 = C. Hence we obtain F(S) ∩ Ω ∩ VI(C, A) ⊂ C0 ∩ Q0. Suppose that xk is given and F(S) ∩ Ω ∩ VI(C, A) ⊂ Ck ∩ Qk for some integer k ≥ 0. Since F(S) ∩ Ω ∩ VI(C, A) is nonempty, Ck ∩ Qk is a nonempty closed convex subset of C. So, there exists a unique element xk+1 ∈ Ck ∩ Qk such that xk+1 = PCk∩Qk x. It is also obvious that ⟨xk+1 − z, x − xk+1⟩ ≥ 0 for every z ∈ Ck ∩ Qk. Since F(S) ∩ Ω ∩ VI(C, A) ⊂ Ck ∩ Qk, we have ⟨xk+1 − z, x − xk+1⟩ ≥ 0 for z ∈ F(S) ∩ Ω ∩ VI(C, A) and hence F(S) ∩ Ω ∩ VI(C, A) ⊂ Qk+1. Therefore, we obtain F(S) ∩ Ω ∩ VI(C, A) ⊂ Ck+1 ∩ Qk+1.

Step 2. We claim that

limn→∞ ‖xn+1 − xn‖ = limn→∞ ‖xn − zn‖ = 0.

Indeed, let l0 = PF(S)∩Ω∩VI(C,A) x. From xn+1 = PCn∩Qn x and l0 ∈ F(S) ∩ Ω ∩ VI(C, A) ⊂ Cn ∩ Qn, we have

‖xn+1 − x‖ ≤ ‖l0 − x‖      (3.3)

for every n = 0, 1, 2, .... Therefore, {xn} is bounded. From (3.1) and (3.2) we also obtain that {tn} and {zn} are bounded. Since xn+1 ∈ Cn ∩ Qn ⊂ Qn and xn = PQn x, we have ‖xn − x‖ ≤ ‖xn+1 − x‖ for every n = 0, 1, 2, .... Therefore, limn→∞ ‖xn − x‖ exists. Since xn = PQn x and xn+1 ∈ Qn, utilizing (2.2), we have

‖xn+1 − xn‖² ≤ ‖xn+1 − x‖² − ‖xn − x‖²


for every n = 0, 1, 2, .... This implies that

limn→∞ ‖xn+1 − xn‖ = 0.

Since xn+1 ∈ Cn, we have ‖zn − xn+1‖ ≤ ‖xn − xn+1‖ and hence

‖xn − zn‖ ≤ ‖xn − xn+1‖ + ‖xn+1 − zn‖ ≤ 2‖xn+1 − xn‖

for every n = 0, 1, 2, .... From ‖xn+1 − xn‖ → 0 it follows that limn→∞ ‖xn − zn‖ = 0.

Step 3. We claim that

limn→∞ ‖xn − yn‖ = limn→∞ ‖xn − tn‖ = limn→∞ ‖S t̂n − t̂n‖ = limn→∞ ‖t̂n − tn‖ = 0.

Indeed, for u ∈ F(S) ∩ Ω ∩ VI(C, A), we obtain from (3.2)

‖zn − u‖² ≤ ‖xn − u‖² + (αn + α̂n)(λn² k² − 1)‖xn − yn‖² − αn α̂n ‖t̂n − S t̂n‖².

Therefore, we have

‖xn − yn‖² + [c²/(1 − a²k²)] ‖t̂n − S t̂n‖²
≤ ‖xn − yn‖² + [αn α̂n/((αn + α̂n)(1 − λn²k²))] ‖t̂n − S t̂n‖²
≤ [1/((αn + α̂n)(1 − λn²k²))] (‖xn − u‖² − ‖zn − u‖²)
= [1/((αn + α̂n)(1 − λn²k²))] (‖xn − u‖ − ‖zn − u‖)(‖xn − u‖ + ‖zn − u‖)
≤ [1/((αn + α̂n)(1 − λn²k²))] (‖xn − u‖ + ‖zn − u‖) ‖xn − zn‖
≤ [1/(2c(1 − b²k²))] (‖xn − u‖ + ‖zn − u‖) ‖xn − zn‖.      (3.4)

Since ‖xn − zn‖ → 0 and the sequences {xn} and {zn} are bounded, we deduce that

limn→∞ ‖xn − yn‖ = limn→∞ ‖t̂n − S t̂n‖ = 0.

By the same process as in (3.1), we also have

‖tn − u‖² ≤ ‖xn − u‖² − ‖xn − yn‖² − ‖yn − tn‖² + 2λn k ‖xn − yn‖ ‖tn − yn‖
≤ ‖xn − u‖² − ‖xn − yn‖² − ‖yn − tn‖² + ‖xn − yn‖² + λn² k² ‖yn − tn‖²
= ‖xn − u‖² + (λn² k² − 1)‖yn − tn‖²
≤ ‖xn − u‖².


Then, in contrast with (3.2),

‖zn − u‖² = ‖(1 − αn − α̂n)(xn − u) + αn(t̂n − u) + α̂n(S t̂n − u)‖²
≤ (1 − αn − α̂n)‖xn − u‖² + αn‖t̂n − u‖² + α̂n‖S t̂n − u‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖t̂n − u‖² − αn α̂n‖t̂n − S t̂n‖²
= (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖JM,µn(tn − µn Φ(tn)) − JM,µn(u − µn Φ(u))‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖(tn − µn Φ(tn)) − (u − µn Φ(u))‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)[‖tn − u‖² + µn(µn − 2α)‖Φ(tn) − Φ(u)‖²] − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)‖tn − u‖² − αn α̂n‖t̂n − S t̂n‖²
≤ (1 − αn − α̂n)‖xn − u‖² + (αn + α̂n)[‖xn − u‖² + (λn² k² − 1)‖yn − tn‖²] − αn α̂n‖t̂n − S t̂n‖²
= ‖xn − u‖² + (αn + α̂n)(λn² k² − 1)‖yn − tn‖² − αn α̂n‖t̂n − S t̂n‖²
≤ ‖xn − u‖²

and, rearranging as in (3.4),

‖tn − yn‖² + [c²/(1 − a²k²)] ‖t̂n − S t̂n‖²
≤ ‖tn − yn‖² + [αn α̂n/((αn + α̂n)(1 − λn²k²))] ‖t̂n − S t̂n‖²
≤ [1/((αn + α̂n)(1 − λn²k²))] (‖xn − u‖² − ‖zn − u‖²)
= [1/((αn + α̂n)(1 − λn²k²))] (‖xn − u‖ − ‖zn − u‖)(‖xn − u‖ + ‖zn − u‖)
≤ [1/((αn + α̂n)(1 − λn²k²))] (‖xn − u‖ + ‖zn − u‖) ‖xn − zn‖
≤ [1/(2c(1 − b²k²))] (‖xn − u‖ + ‖zn − u‖) ‖xn − zn‖.

Since ‖xn − zn‖ → 0 and the sequences {xn} and {zn} are bounded, we deduce that

limn→∞ ‖tn − yn‖ = limn→∞ ‖t̂n − S t̂n‖ = 0.

As A is k-Lipschitz-continuous, we have ‖Ayn − Atn‖ → 0. From ‖xn − tn‖ ≤ ‖xn − yn‖ + ‖yn − tn‖ we also have ‖xn − tn‖ → 0. Since zn = (1 − αn − α̂n)xn + αn t̂n + α̂n S t̂n, we have

zn − xn = αn(t̂n − xn) + α̂n(S t̂n − xn)
= αn(t̂n − xn) + α̂n(S t̂n − t̂n + t̂n − xn)
= (αn + α̂n)(t̂n − xn) + α̂n(S t̂n − t̂n).

Then

2c‖t̂n − xn‖ ≤ (αn + α̂n)‖t̂n − xn‖ = ‖zn − xn − α̂n(S t̂n − t̂n)‖ ≤ ‖zn − xn‖ + α̂n‖S t̂n − t̂n‖ ≤ ‖zn − xn‖ + ‖S t̂n − t̂n‖

and hence ‖t̂n − xn‖ → 0. This, together with ‖xn − tn‖ → 0, implies that ‖t̂n − tn‖ → 0.

Step 4. We claim that ωw(xn) ⊂ F(S) ∩ Ω ∩ VI(C, A). Indeed, since {xn} is bounded, there is a subsequence {xni} of {xn} such that {xni} converges weakly to some u ∈ ωw(xn). We shall show that u ∈ F(S) ∩ Ω ∩ VI(C, A). First, we show


u ∈ VI(C, A). Since xn − tn → 0 and xn − yn → 0, we conclude that tni ⇀ u and yni ⇀ u. Let

T v = { Av + NC v, if v ∈ C;  ∅, if v ∉ C,

where NC v is the normal cone to C at v ∈ C. We have already mentioned that in this case the mapping T is maximal monotone, and 0 ∈ T v if and only if v ∈ VI(C, A); see [23]. Let G(T) be the graph of T and let (v, w) ∈ G(T). Then we have w ∈ T v = Av + NC v and hence w − Av ∈ NC v. So, we have ⟨v − t, w − Av⟩ ≥ 0 for all t ∈ C. On the other hand, from tn = PC(xn − λn Ayn) and v ∈ C we have ⟨xn − λn Ayn − tn, tn − v⟩ ≥ 0 and hence

⟨v − tn, (tn − xn)/λn + Ayn⟩ ≥ 0.

From ⟨v − t, w − Av⟩ ≥ 0 for all t ∈ C and tni ∈ C, we have

⟨v − tni, w⟩ ≥ ⟨v − tni, Av⟩
≥ ⟨v − tni, Av⟩ − ⟨v − tni, (tni − xni)/λni + Ayni⟩
= ⟨v − tni, Av − Atni⟩ + ⟨v − tni, Atni − Ayni⟩ − ⟨v − tni, (tni − xni)/λni⟩
≥ ⟨v − tni, Atni − Ayni⟩ − ⟨v − tni, (tni − xni)/λni⟩.

Hence, letting i → ∞, we obtain ⟨v − u, w⟩ ≥ 0. Since T is maximal monotone, we have u ∈ T⁻¹0 and hence u ∈ VI(C, A).

Secondly, let us show u ∈ F(S). Assume u ∉ F(S). Since ‖t̂n − xn‖ → 0 and xni ⇀ u, we have t̂ni ⇀ u. In terms of Opial's condition, we have from ‖t̂ni − S t̂ni‖ → 0

lim infi→∞ ‖t̂ni − u‖ < lim infi→∞ ‖t̂ni − Su‖
= lim infi→∞ ‖t̂ni − S t̂ni + S t̂ni − Su‖
≤ lim infi→∞ ‖S t̂ni − Su‖
≤ lim infi→∞ ‖t̂ni − u‖.

This is a contradiction. So, we obtain u ∈ F(S).

Next, let us show u ∈ Ω. Since Φ is α-inverse strongly monotone and M is maximal monotone, by Lemma 2.5 we know that M + Φ is maximal monotone. Take a fixed (y, g) ∈ G(M + Φ) arbitrarily. Then we have g ∈ My + Φ(y), so that g − Φ(y) ∈ My. Since t̂ni = JM,µni(tni − µni Φ(tni)) implies (1/µni)(tni − t̂ni − µni Φ(tni)) ∈ M t̂ni, we have

⟨y − t̂ni, g − Φ(y) − (1/µni)(tni − t̂ni − µni Φ(tni))⟩ ≥ 0,

which hence yields

⟨y − t̂ni, g⟩ ≥ ⟨y − t̂ni, Φ(y) + (1/µni)(tni − t̂ni − µni Φ(tni))⟩
= ⟨y − t̂ni, Φ(y) − Φ(tni)⟩ + ⟨y − t̂ni, (1/µni)(tni − t̂ni)⟩
≥ α‖Φ(y) − Φ(t̂ni)‖² + ⟨y − t̂ni, Φ(t̂ni) − Φ(tni)⟩ + ⟨y − t̂ni, (1/µni)(tni − t̂ni)⟩
≥ ⟨y − t̂ni, Φ(t̂ni) − Φ(tni)⟩ + ⟨y − t̂ni, (1/µni)(tni − t̂ni)⟩.      (3.5)


Observe that

|⟨y − t̂ni, Φ(t̂ni) − Φ(tni)⟩ + ⟨y − t̂ni, (1/µni)(tni − t̂ni)⟩|
≤ ‖y − t̂ni‖ ‖Φ(t̂ni) − Φ(tni)‖ + ‖y − t̂ni‖ ‖(1/µni)(tni − t̂ni)‖
≤ (1/α) ‖y − t̂ni‖ ‖t̂ni − tni‖ + (1/ε) ‖y − t̂ni‖ ‖tni − t̂ni‖
= (1/α + 1/ε) ‖y − t̂ni‖ ‖t̂ni − tni‖.

It follows from ‖tn − t̂n‖ → 0 that

limi→∞ |⟨y − t̂ni, Φ(t̂ni) − Φ(tni)⟩ + ⟨y − t̂ni, (1/µni)(tni − t̂ni)⟩| = 0.

Letting i → ∞, we get from (3.5) that ⟨y − u, g⟩ ≥ 0. This shows that 0 ∈ Φ(u) + Mu and hence u ∈ Ω. Therefore, u ∈ F(S) ∩ Ω ∩ VI(C, A).

Step 5. We claim that

limn→∞ ‖xn − l0‖ = limn→∞ ‖yn − l0‖ = limn→∞ ‖zn − l0‖ = 0,

where l0 = PF(S)∩Ω∩VI(C,A) x. Indeed, from l0 = PF(S)∩Ω∩VI(C,A) x, u ∈ F(S) ∩ Ω ∩ VI(C, A), and (3.3), we have

‖l0 − x‖ ≤ ‖u − x‖ ≤ lim infi→∞ ‖xni − x‖ ≤ lim supi→∞ ‖xni − x‖ ≤ ‖l0 − x‖.

So, we obtain

limi→∞ ‖xni − x‖ = ‖u − x‖.

From xni − x ⇀ u − x we have xni − x → u − x (due to the Kadec-Klee property of Hilbert spaces [34]) and hence xni → u. Since xn = PQn x and l0 ∈ F(S) ∩ Ω ∩ VI(C, A) ⊂ Cn ∩ Qn ⊂ Qn, we have

−‖l0 − xni‖² = ⟨l0 − xni, xni − x⟩ + ⟨l0 − xni, x − l0⟩ ≥ ⟨l0 − xni, x − l0⟩.

As i → ∞, we obtain −‖l0 − u‖² ≥ ⟨l0 − u, x − l0⟩ ≥ 0 by l0 = PF(S)∩Ω∩VI(C,A) x and u ∈ F(S) ∩ Ω ∩ VI(C, A). Hence we have u = l0. This implies that xn → l0. It is easy to see that yn → l0 and zn → l0. This completes the proof. □

Corollary 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let A : C → H be a monotone and k-Lipschitz-continuous mapping and S : C → C be a nonexpansive mapping such that F(S) ∩ VI(C, A) ≠ ∅. For x0 = x ∈ C chosen arbitrarily, let {xn}, {yn} and {zn} be the sequences generated by

  yn = PC(xn − λn Axn),
  zn = (1 − αn − α̂n)xn + αn PC(xn − λn Ayn) + α̂n S PC(xn − λn Ayn),
  Cn = {z ∈ C : ‖zn − z‖ ≤ ‖xn − z‖},
  Qn = {z ∈ C : ⟨xn − z, x − xn⟩ ≥ 0},
  xn+1 = PCn∩Qn x

for every n = 0, 1, 2, ..., where {λn} ⊂ [a, b] for some a, b ∈ (0, 1/k) and {αn}, {α̂n} ⊂ [c, 1] for some c ∈ (0, 1], such that αn + α̂n ≤ 1 for every n = 0, 1, 2, .... Then the sequences {xn}, {yn} and {zn} converge strongly to PF(S)∩VI(C,A) x.


Proof. Putting Φ = M = 0 in Theorem 3.1, we have Ω = C and F(S) ∩ Ω ∩ VI(C, A) = F(S) ∩ VI(C, A). Let α be any positive number in the interval (0, ∞) and take any sequence {µn} ⊂ [ε, 2α] for some ε ∈ (0, 2α]. Then Φ is α-inverse strongly monotone and we have

  tn = PC(xn − λn Ayn),
  t̂n = JM,µn(tn − µn Φ(tn)) = (I + µn M)⁻¹ tn = tn,
  zn = (1 − αn − α̂n)xn + αn t̂n + α̂n S t̂n
     = (1 − αn − α̂n)xn + αn tn + α̂n S tn
     = (1 − αn − α̂n)xn + αn PC(xn − λn Ayn) + α̂n S PC(xn − λn Ayn).

Therefore, by Theorem 3.1 we obtain the desired result. □

Remark 3.1 Compared with Theorem 3.1 of Nadezhkina and Takahashi [22], our Theorem 3.1 improves and extends [22, Theorem 3.1] in the following aspects: (a) Nadezhkina and Takahashi's hybrid-extragradient method in [22, Theorem 3.1] is extended to the Mann type hybrid-extragradient method of our Theorem 3.1; (b) the technique for proving strong convergence in our Theorem 3.1 is very different from that in [22, Theorem 3.1], because it depends on the properties of maximal monotone mappings and their resolvent operators (see, e.g., Lemmas 2.1, 2.3 and 2.5) and on the geometric properties of Hilbert spaces (see, e.g., Opial's condition and the Kadec-Klee property [34]); (c) our problem of finding an element of F(S) ∩ Ω ∩ VI(C, A) is more general than Nadezhkina and Takahashi's problem of finding an element of F(S) ∩ VI(C, A) in [22, Theorem 3.1].
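To make the role of the extragradient (predictor-corrector) step concrete, here is a minimal numerical sketch with toy data of our own choosing (not from the paper): the skew operator A(x₁, x₂) = (x₂, −x₁) on H = R² with C = R² is monotone and 1-Lipschitz but not inverse strongly monotone; the plain projection step diverges for it, while Korpelevich's extragradient step (1.5) converges to the unique solution 0 of VI(C, A).

```python
# Toy illustration (our own example, not from the paper) of Korpelevich's
# extragradient method (1.5) on H = R^2 with C = R^2 (so P_C is the identity)
# and the monotone, 1-Lipschitz skew operator A(x1, x2) = (x2, -x1).
# Here VI(C, A) = {(0, 0)}.  The plain projection step diverges for this A,
# while the extragradient step converges.

def A(x):
    return (x[1], -x[0])

def step(x, lam, d):
    return (x[0] - lam * d[0], x[1] - lam * d[1])

lam = 0.5                       # lam in (0, 1/k) with k = 1
x_plain = x_extra = (1.0, 0.0)
for _ in range(200):
    # plain projection method: x <- x - lam*A(x)
    x_plain = step(x_plain, lam, A(x_plain))
    # extragradient: predictor y, then corrector using A(y)
    y = step(x_extra, lam, A(x_extra))
    x_extra = step(x_extra, lam, A(y))

norm = lambda v: (v[0] ** 2 + v[1] ** 2) ** 0.5
assert norm(x_plain) > 1e6      # the plain method blows up
assert norm(x_extra) < 1e-6     # the extragradient method converges to 0
```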

4. Applications

Utilizing Theorem 3.1, we prove some strong convergence theorems in a real Hilbert space.

Theorem 4.1. Let C be a nonempty closed convex subset of a real Hilbert space H. Let A : C → H be a monotone and k-Lipschitz-continuous mapping, Φ : C → H be an α-inverse strongly monotone mapping and M be a maximal monotone mapping with D(M) = C such that Ω ∩ VI(C, A) ≠ ∅. For x_0 = x ∈ C chosen arbitrarily, let {x_n}, {y_n} and {z_n} be the sequences generated by

  y_n = P_C(x_n − λ_n A x_n),
  t_n = P_C(x_n − λ_n A y_n),
  z_n = (1 − β_n) x_n + β_n J_{M,µ_n}(t_n − µ_n Φ(t_n)),
  C_n = {z ∈ C : ‖z_n − z‖ ≤ ‖x_n − z‖},
  Q_n = {z ∈ C : ⟨x_n − z, x − x_n⟩ ≥ 0},
  x_{n+1} = P_{C_n ∩ Q_n} x


for every n = 0, 1, 2, ..., where {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/k), {µ_n} ⊂ [ε, 2α] for some ε ∈ (0, 2α] and {β_n} ⊂ [c, 1] for some c ∈ (0, 1]. Then the sequences {x_n}, {y_n} and {z_n} converge strongly to P_{Ω∩VI(C,A)} x.

Proof. In Theorem 3.1, putting S = I and α_n = α̂_n = β_n/2 for all n = 0, 1, 2, ..., we have

  t̂_n = J_{M,µ_n}(t_n − µ_n Φ(t_n)),
  z_n = (1 − α_n − α̂_n) x_n + α_n t̂_n + α̂_n S t̂_n = (1 − α_n − α̂_n) x_n + (α_n + α̂_n) t̂_n
      = (1 − β_n) x_n + β_n t̂_n = (1 − β_n) x_n + β_n J_{M,µ_n}(t_n − µ_n Φ(t_n)).

In this case, we know that F(S) ∩ Ω ∩ VI(C, A) = Ω ∩ VI(C, A). Therefore, by Theorem 3.1 we obtain the desired result. □

Theorem 4.2. [22, Theorem 4.2] Let C be a nonempty closed convex subset of a real Hilbert space H and let S : C → C be a nonexpansive mapping such that F(S) is nonempty. For x_0 = x ∈ C chosen arbitrarily, let {x_n} and {z_n} be the sequences generated by

  z_n = (1 − α̂_n) x_n + α̂_n S x_n,
  C_n = {z ∈ C : ‖z_n − z‖ ≤ ‖x_n − z‖},
  Q_n = {z ∈ C : ⟨x_n − z, x − x_n⟩ ≥ 0},
  x_{n+1} = P_{C_n ∩ Q_n} x

for every n = 0, 1, 2, ..., where {α̂_n} ⊂ [c, 1] for some c ∈ (0, 1]. Then the sequences {x_n} and {z_n} converge strongly to P_{F(S)} x.

Proof. Putting A = Φ = M = 0 in Theorem 3.1, we let k and α be any positive numbers and take any sequence {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/k) and any sequence {µ_n} ⊂ [ε, 2α] for some ε ∈ (0, 2α]. Then A is k-Lipschitz-continuous and Φ is α-inverse strongly monotone. In this case, we know that F(S) ∩ Ω ∩ VI(C, A) = F(S) and

  y_n = P_C(x_n − λ_n A x_n) = x_n,
  t_n = P_C(x_n − λ_n A y_n) = x_n,
  t̂_n = J_{M,µ_n}(t_n − µ_n Φ(t_n)) = t_n = x_n,
  z_n = (1 − α_n − α̂_n) x_n + α_n t̂_n + α̂_n S t̂_n = (1 − α̂_n) x_n + α̂_n S x_n.

Therefore, by Theorem 3.1 we obtain the desired result. □
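The scheme of Theorem 4.2 can be run directly, because when C = H the sets C_n and Q_n are half-spaces: ‖z_n − z‖ ≤ ‖x_n − z‖ rewrites as ⟨x_n − z_n, z⟩ ≤ (‖x_n‖² − ‖z_n‖²)/2, and Q_n as ⟨x − x_n, z⟩ ≤ ⟨x − x_n, x_n⟩, so P_{C_n ∩ Q_n} x can be computed exactly. The sketch below is built on our own illustrative assumptions, not on anything in the paper: H = ℝ², S a rotation by 2π/3 (nonexpansive, with F(S) = {0}) and α̂_n ≡ 1/2.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto the half-space {v : <a, v> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_two_halfspaces(x, h1, h2):
    """Exact projection onto the intersection of two half-spaces (assumed nonempty)."""
    (a1, b1), (a2, b2) = h1, h2
    if a1 @ x <= b1 and a2 @ x <= b2:   # already feasible
        return x.copy()
    p = proj_halfspace(x, a1, b1)       # only the first constraint active
    if a2 @ p <= b2 + 1e-12:
        return p
    p = proj_halfspace(x, a2, b2)       # only the second constraint active
    if a1 @ p <= b1 + 1e-12:
        return p
    # both constraints active: project onto the intersection of the two hyperplanes
    G = np.array([[a1 @ a1, a1 @ a2], [a2 @ a1, a2 @ a2]])
    lam = np.linalg.solve(G, np.array([a1 @ x - b1, a2 @ x - b2]))
    return x - lam[0] * a1 - lam[1] * a2

theta = 2 * np.pi / 3                   # S = rotation by 120 degrees, F(S) = {0}
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])

x0 = np.array([1.0, 0.0])
x, alpha_hat, norms = x0.copy(), 0.5, []
for _ in range(400):
    z = (1 - alpha_hat) * x + alpha_hat * (R @ x)   # z_n: Mann step
    if np.linalg.norm(x - z) < 1e-12:               # x_n is (numerically) a fixed point
        break
    h_C = (x - z, (x @ x - z @ z) / 2)              # C_n as <a, v> <= b
    a_Q = x0 - x
    if np.linalg.norm(a_Q) < 1e-12:                 # n = 0: Q_0 is the whole space
        a_Q, b_Q = np.zeros(2), 1.0
    else:
        b_Q = a_Q @ x
    x = proj_two_halfspaces(x0, h_C, (a_Q, b_Q))    # x_{n+1} = P_{C_n ∩ Q_n} x0
    norms.append(np.linalg.norm(x))

print(min(norms))                                    # approaches 0 = P_{F(S)} x0
```

The standard estimates for this class of methods give Σ‖x_{n+1} − x_n‖² ≤ ‖x_0‖² and ‖x_n‖ ≤ c‖x_{n+1} − x_n‖ for a constant c depending on the rotation angle, so min_n ‖x_n‖ is guaranteed to fall below roughly c/√N after N outer steps; in practice the decay here is much faster.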



Remark 4.1. Theorem 4.2 is originally due to Nakajo and Takahashi [18].

Theorem 4.3. Let H be a real Hilbert space. Let A : H → H be a monotone and k-Lipschitz-continuous mapping, Φ : H → H be an α-inverse strongly monotone mapping, M : H → 2^H be a maximal monotone mapping and S : H → H be a nonexpansive mapping such that F(S) ∩ Ω ∩ A^{−1}0 ≠ ∅. For x_0 = x ∈ H chosen arbitrarily, let {x_n} and {z_n} be the sequences generated by

  t_n = x_n − λ_n A(x_n − λ_n A x_n),
  z_n = (1 − α_n − α̂_n) x_n + α_n J_{M,µ_n}(t_n − µ_n Φ(t_n)) + α̂_n S J_{M,µ_n}(t_n − µ_n Φ(t_n)),
  C_n = {z ∈ H : ‖z_n − z‖ ≤ ‖x_n − z‖},
  Q_n = {z ∈ H : ⟨x_n − z, x − x_n⟩ ≥ 0},
  x_{n+1} = P_{C_n ∩ Q_n} x

for every n = 0, 1, 2, ..., where {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/k), {µ_n} ⊂ [ε, 2α] for some ε ∈ (0, 2α] and {α_n}, {α̂_n} ⊂ [c, 1] for some c ∈ (0, 1], such that α_n + α̂_n ≤ 1 for every n = 0, 1, 2, .... Then the sequences {x_n} and {z_n} converge strongly to P_{F(S)∩Ω∩A^{−1}0} x.

Proof. Putting C = H in Theorem 3.1, we have A^{−1}0 = VI(H, A) and P_C = P_H = I. In this case, we know that

  y_n = P_C(x_n − λ_n A x_n) = x_n − λ_n A x_n,
  t_n = P_C(x_n − λ_n A y_n) = x_n − λ_n A(x_n − λ_n A x_n),
  t̂_n = J_{M,µ_n}(t_n − µ_n Φ(t_n)),
  z_n = (1 − α_n − α̂_n) x_n + α_n t̂_n + α̂_n S t̂_n
      = (1 − α_n − α̂_n) x_n + α_n J_{M,µ_n}(t_n − µ_n Φ(t_n)) + α̂_n S J_{M,µ_n}(t_n − µ_n Φ(t_n)).

Therefore, by Theorem 3.1 we obtain the desired result. □



Let B : H → 2^H be a maximal monotone mapping. Then, for any x ∈ H and r > 0, consider J_{B,r} x = (I + rB)^{−1} x; the mapping J_{B,r} is called the resolvent of B.

Theorem 4.4. Let H be a real Hilbert space. Let A : H → H be a monotone and k-Lipschitz-continuous mapping, Φ : H → H be an α-inverse strongly monotone mapping and B, M : H → 2^H be two maximal monotone mappings such that A^{−1}0 ∩ B^{−1}0 ∩ Ω ≠ ∅. Let J_{B,r} be the resolvent of B for each r > 0. For x_0 = x ∈ H chosen arbitrarily, let {x_n} and {z_n} be the sequences generated by

  t_n = x_n − λ_n A(x_n − λ_n A x_n),
  z_n = (1 − α_n − α̂_n) x_n + α_n J_{M,µ_n}(t_n − µ_n Φ(t_n)) + α̂_n J_{B,r} J_{M,µ_n}(t_n − µ_n Φ(t_n)),
  C_n = {z ∈ H : ‖z_n − z‖ ≤ ‖x_n − z‖},
  Q_n = {z ∈ H : ⟨x_n − z, x − x_n⟩ ≥ 0},
  x_{n+1} = P_{C_n ∩ Q_n} x

for every n = 0, 1, 2, ..., where {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/k), {µ_n} ⊂ [ε, 2α] for some ε ∈ (0, 2α] and {α_n}, {α̂_n} ⊂ [c, 1] for some c ∈ (0, 1], such that α_n + α̂_n ≤ 1 for every n = 0, 1, 2, .... Then the sequences {x_n} and {z_n} converge strongly to P_{A^{−1}0 ∩ B^{−1}0 ∩ Ω} x.
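The resolvent just defined has a closed form for simple operators. As an illustration (our choice, not the paper's), take B = ∂|·| on H = ℝ: then J_{B,r} = (I + rB)^{−1} is the soft-thresholding map, it is firmly nonexpansive (as every resolvent of a maximal monotone operator is), and its fixed point set is exactly B^{−1}0 = {0} — the identity F(J_{B,r}) = B^{−1}0 that the proof of Theorem 4.4 relies on. A quick numerical check:

```python
def soft_threshold(x, r):
    """Resolvent J_{B,r} = (I + r * subdifferential(|.|))^{-1} on R."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

r = 0.7
pts = [i / 10 for i in range(-30, 31)]

# resolvents are firmly nonexpansive: |Jx - Jy|^2 <= (Jx - Jy)(x - y)
for x in pts:
    for y in pts:
        Jx, Jy = soft_threshold(x, r), soft_threshold(y, r)
        assert (Jx - Jy) ** 2 <= (Jx - Jy) * (x - y) + 1e-12

# F(J_{B,r}) = B^{-1}(0) = {0}: the only fixed point on the grid is 0
fixed = [x for x in pts if abs(soft_threshold(x, r) - x) < 1e-12]
assert fixed == [0.0]
print("resolvent checks passed")
```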


Proof. Putting C = H and S = J_{B,r} in Theorem 3.1, we know that P_H = I, A^{−1}0 = VI(H, A) and F(J_{B,r}) = B^{−1}0. In this case, we have

  y_n = P_C(x_n − λ_n A x_n) = x_n − λ_n A x_n,
  t_n = P_C(x_n − λ_n A y_n) = x_n − λ_n A(x_n − λ_n A x_n),
  t̂_n = J_{M,µ_n}(t_n − µ_n Φ(t_n)),
  z_n = (1 − α_n − α̂_n) x_n + α_n t̂_n + α̂_n S t̂_n
      = (1 − α_n − α̂_n) x_n + α_n J_{M,µ_n}(t_n − µ_n Φ(t_n)) + α̂_n J_{B,r} J_{M,µ_n}(t_n − µ_n Φ(t_n)).

Therefore, by Theorem 3.1 we obtain the desired result. □



It is well known that a mapping T : C → C is called pseudocontractive if

  ‖Tx − Ty‖² ≤ ‖x − y‖² + ‖(I − T)x − (I − T)y‖²  for all x, y ∈ C.

Moreover, whenever T : C → C is pseudocontractive and m-Lipschitz-continuous, the mapping I − T is monotone and (m + 1)-Lipschitz-continuous with F(T) = VI(C, I − T) (see, e.g., the proof of Theorem 4.5). This definition is equivalent to the following one: a mapping T : C → C is called pseudocontractive if

  ⟨Tx − Ty, x − y⟩ ≤ ‖x − y‖²    (4.1)

for all x, y ∈ C; see [6]. Obviously, the class of pseudocontractive mappings is more general than the class of nonexpansive mappings. Let us give an example of a Lipschitz-continuous pseudocontractive mapping. Let B : H → 2^H be a maximal monotone mapping and let J_{B,λ} be the resolvent of B for λ > 0. Define the Yosida approximation B_λ = (1/λ)(I − J_{B,λ}). Then the operator T = I − B_λ is Lipschitz-continuous and pseudocontractive (see, e.g., [28]).

In the meantime, we also recall a definition of a κ-strictly pseudocontractive mapping equivalent to the one given in the introduction: a mapping T : C → C is called κ-strictly pseudocontractive if there exists a constant 0 ≤ κ < 1 such that

  ⟨Tx − Ty, x − y⟩ ≤ ‖x − y‖² − ((1 − κ)/2) ‖(I − T)x − (I − T)y‖²

for all x, y ∈ C. It is clear that in this case the mapping I − T is (1 − κ)/2-inverse strongly monotone. From [27], we know that if T is κ-strictly pseudocontractive, then T is Lipschitz-continuous with constant (1 + κ)/(1 − κ), i.e., ‖Tx − Ty‖ ≤ ((1 + κ)/(1 − κ))‖x − y‖ for all x, y ∈ C. We denote by F(T) the fixed point set of T. It is obvious that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings, and the class of pseudocontractions strictly includes the class of strict pseudocontractions.

In the following theorem we introduce an iterative algorithm that converges strongly to a common fixed point of three mappings, one of which is nonexpansive while the other two are a Lipschitz-continuous pseudocontractive mapping and a κ-strictly pseudocontractive mapping, respectively.

Theorem 4.5. Let C be a nonempty closed convex subset of a real Hilbert space H. Let T : C → C be a pseudocontractive, m-Lipschitz-continuous mapping, Γ : C → C be a κ-strictly pseudocontractive mapping and S : C → C be a nonexpansive mapping


such that F(T) ∩ F(S) ∩ F(Γ) ≠ ∅. For x_0 = x ∈ C chosen arbitrarily, let {x_n}, {y_n} and {z_n} be the sequences generated by

  y_n = x_n − λ_n(x_n − T x_n),
  t_n = P_C(x_n − λ_n(y_n − T y_n)),
  t̂_n = t_n − µ_n(t_n − Γ t_n),
  z_n = (1 − α_n − α̂_n) x_n + α_n t̂_n + α̂_n S t̂_n,
  C_n = {z ∈ C : ‖z_n − z‖ ≤ ‖x_n − z‖},
  Q_n = {z ∈ C : ⟨x_n − z, x − x_n⟩ ≥ 0},
  x_{n+1} = P_{C_n ∩ Q_n} x

for every n = 0, 1, 2, ..., where {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/(m+1)), {µ_n} ⊂ [ε, 1 − κ] for some ε ∈ (0, 1 − κ] and {α_n}, {α̂_n} ⊂ [c, 1] for some c ∈ (0, 1], such that α_n + α̂_n ≤ 1 for every n = 0, 1, 2, .... Then the sequences {x_n}, {y_n} and {z_n} converge strongly to P_{F(T)∩F(S)∩F(Γ)} x.

Proof. Putting A = I − T, Φ = I − Γ and M = 0 in Theorem 3.1, we know that A is monotone and (m + 1)-Lipschitz-continuous and that Φ is α-inverse strongly monotone with α = (1 − κ)/2. Noticing {λ_n} ⊂ [a, b] ⊂ (0, 1/(m+1)), we see that {λ_n} ⊂ (0, 1) and hence (1 − λ_n)x_n + λ_n T x_n ∈ C. Also, noticing {µ_n} ⊂ [ε, 1 − κ] ⊂ (0, 1 − κ], we see that {µ_n} ⊂ (0, 1] and hence (1 − µ_n)t_n + µ_n Γ t_n ∈ C. This implies that

  y_n = P_C(x_n − λ_n A x_n) = P_C((1 − λ_n)x_n + λ_n T x_n) = x_n − λ_n(x_n − T x_n),
  t_n = P_C(x_n − λ_n A y_n) = P_C(x_n − λ_n(y_n − T y_n)),
  t̂_n = J_{M,µ_n}(t_n − µ_n Φ(t_n)) = t_n − µ_n(t_n − Γ t_n).

Now let us show that F(T) = VI(C, A). In fact, for λ > 0 we have

  u ∈ VI(C, A) ⇔ ⟨Au, y − u⟩ ≥ 0 for all y ∈ C
              ⇔ ⟨u − λAu − u, u − y⟩ ≥ 0 for all y ∈ C
              ⇔ u = P_C(u − λAu)
              ⇔ u = P_C(u − λu + λTu)
              ⇔ ⟨u − λu + λTu − u, u − y⟩ ≥ 0 for all y ∈ C
              ⇔ ⟨u − Tu, u − y⟩ ≤ 0 for all y ∈ C
              ⇔ u = Tu
              ⇔ u ∈ F(T).

Next let us show that Ω = F(Γ). In fact, noticing that M = 0 and Φ = I − Γ, we have

  u ∈ Ω ⇔ 0 ∈ Φ(u) + Mu ⇔ 0 = Φ(u) = u − Γu ⇔ u ∈ F(Γ).

Consequently, F(S) ∩ Ω ∩ VI(C, A) = F(T) ∩ F(S) ∩ F(Γ). Therefore, by Theorem 3.1 we obtain the desired result. □
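The example of a Lipschitz-continuous pseudocontractive mapping given before Theorem 4.5 (T = I − B_λ, with B_λ the Yosida approximation) can be spot-checked numerically against inequality (4.1). For the illustrative choice B = ∂|·| on ℝ (ours, not the paper's), the resolvent J_{B,λ} is soft-thresholding, so B_λ(x) = (x − J_{B,λ}x)/λ works out to the clipping map x ↦ max(−1, min(1, x/λ)):

```python
import random

def yosida_T(x, lam=0.5):
    """T = I - B_lam for B = subdifferential(|.|): B_lam(x) = clip(x/lam, -1, 1)."""
    b_lam = max(-1.0, min(1.0, x / lam))   # Yosida approximation of B at x
    return x - b_lam

random.seed(0)
for _ in range(10000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    # pseudocontractivity, inequality (4.1): <Tx - Ty, x - y> <= |x - y|^2
    assert (yosida_T(x) - yosida_T(y)) * (x - y) <= (x - y) ** 2 + 1e-12
    # B_lam is (1/lam)-Lipschitz, so T = I - B_lam is (1 + 1/lam)-Lipschitz
    assert abs(yosida_T(x) - yosida_T(y)) <= (1 + 1 / 0.5) * abs(x - y) + 1e-12
print("T = I - B_lam satisfies (4.1) on the sampled pairs")
```

The inequality holds because B_λ is monotone: ⟨Tx − Ty, x − y⟩ = ‖x − y‖² − ⟨B_λx − B_λy, x − y⟩ ≤ ‖x − y‖².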



References

[1] A.S. Antipin, Methods for solving variational inequalities with related constraints, Comput. Math. Math. Phys., 40(2000), 1239-1254.
[2] A.S. Antipin, F.P. Vasiliev, Regularized prediction method for solving variational inequalities with an inexactly given set, Comput. Math. Math. Phys., 44(2004), 750-758.


[3] H.H. Bauschke, P.L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces, Math. Oper. Res., 26(2001), 248-264.
[4] H.H. Bauschke, P.L. Combettes, Construction of best Bregman approximations in reflexive Banach spaces, Proc. Amer. Math. Soc., 131(2003), 3757-3766.
[5] R.S. Burachik, J.O. Lopes, B.F. Svaiter, An outer approximation method for the variational inequality problem, SIAM J. Control Optim., 43(2005), 2071-2088.
[6] F.E. Browder, W.V. Petryshyn, Construction of fixed points of nonlinear mappings in Hilbert space, J. Math. Anal. Appl., 20(1967), 197-228.
[7] N.J. Huang, A new completely general class of variational inclusions with noncompact valued mappings, Comput. Math. Appl., 35(10)(1998), 9-14.
[8] P.L. Combettes, Strong convergence of block-iterative outer approximation methods for convex optimization, SIAM J. Control Optim., 38(2000), 538-565.
[9] R. Gárciga Otero, A. Iusem, Proximal methods with penalization effects in Banach spaces, Numer. Funct. Anal. Optim., 25(2004), 69-91.
[10] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer-Verlag, New York, 1984.
[11] B.-S. He, Z.-H. Yang, X.-M. Yuan, An approximate proximal-extragradient type method for monotone variational inequalities, J. Math. Anal. Appl., 300(2004), 362-374.
[12] H. Iiduka, W. Takahashi, Strong convergence theorem by a hybrid method for nonlinear mappings of nonexpansive and monotone type and applications, Adv. Nonlinear Var. Inequal., 9(2006), 1-10.
[13] L.C. Zeng, S.M. Guu, J.C. Yao, Characterization of H-monotone operators with applications to variational inclusions, Comput. Math. Appl., 50(3-4)(2005), 329-337.
[14] W.R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 4(1953), 506-510.
[15] G.M. Korpelevich, An extragradient method for finding saddle points and for other problems, Ekonom. Mat. Metody, 12(1976), 747-756.
[16] J.L. Lions, Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires, Dunod, Paris, 1969.
[17] F. Liu, M.Z. Nashed, Regularization of nonlinear ill-posed variational inequalities and convergence rates, Set-Valued Anal., 6(1998), 313-344.
[18] K. Nakajo, W. Takahashi, Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups, J. Math. Anal. Appl., 279(2003), 372-379.
[19] M.A. Noor, New extragradient-type methods for general variational inequalities, J. Math. Anal. Appl., 277(2003), 379-394.
[20] J.T. Oden, Quantitative Methods on Nonlinear Mechanics, Prentice-Hall, Englewood Cliffs, NJ, 1986.
[21] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73(1967), 591-597.
[22] N. Nadezhkina, W. Takahashi, Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings, SIAM J. Optim., 16(4)(2006), 1230-1241.
[23] R.T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Trans. Amer. Math. Soc., 149(1970), 75-88.
[24] M.V. Solodov, Convergence rate analysis of iterative algorithms for solving variational inequality problem, Math. Program., 96(2003), 513-528.
[25] M.V. Solodov, B.F. Svaiter, Forcing strong convergence of proximal point iterations in a Hilbert space, Math. Program., 87(2000), 189-202.
[26] M.V. Solodov, B.F. Svaiter, An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions, Math. Oper. Res., 25(2000), 214-230.
[27] G. Marino, H.K. Xu, Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces, J. Math. Anal. Appl., 329(2007), 336-346.
[28] W. Takahashi, Nonlinear Functional Analysis, Yokohama Publishers, Yokohama, 2000.
[29] W. Takahashi, M. Toyoda, Weak convergence theorems for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl., 118(2003), 417-428.


[30] I. Yamada, The hybrid steepest-descent method for the variational inequality problem over the intersection of fixed-point sets of nonexpansive mappings, in: Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, S. Reich (eds.), Kluwer Academic Publishers, Dordrecht, 2001, 473-504.
[31] E. Zeidler, Nonlinear Functional Analysis and Its Applications, Springer-Verlag, New York, 1985.
[32] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, 20(2004), 103-120.
[33] S. Reich, Weak convergence theorems for nonexpansive mappings in Banach spaces, J. Math. Anal. Appl., 67(1979), 274-276.
[34] K. Goebel, W.A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press, Cambridge, 1990.
[35] M.O. Osilike, D.I. Igbokwe, Weak and strong convergence theorems for fixed points of pseudocontractions and solutions of monotone type operator equations, Comput. Math. Appl., 40(2000), 559-567.

Received: March 11, 2011; Accepted: August 2, 2011.