Acta Appl Math (2010) 110: 1211–1224 DOI 10.1007/s10440-009-9502-9

Modified Extragradient Methods for a System of Variational Inequalities in Banach Spaces

Yonghong Yao · Muhammad Aslam Noor · Khalida Inayat Noor · Yeong-Cheng Liou · Huma Yaqoob

Received: 5 January 2009 / Accepted: 16 March 2009 / Published online: 27 March 2009 © Springer Science+Business Media B.V. 2009

Abstract  In this paper, we introduce a new system of general variational inequalities in Banach spaces. We establish the equivalence between this system of variational inequalities and fixed point problems involving nonexpansive mappings. This alternative equivalent formulation is used to suggest and analyze a modified extragradient method for solving the system of general variational inequalities. Using the demi-closedness principle for nonexpansive mappings, we prove the strong convergence of the proposed iterative method under suitable conditions.

Keywords  Variational inequality · Iterative method · Nonexpansive mapping · Fixed point

Mathematics Subject Classification (2000)  47H05 · 47H10 · 47J25

The fourth author was partially supported by the Grant NSC 97-2221-E-230-017.

Y. Yao: Department of Mathematics, Tianjin Polytechnic University, Tianjin 300160, China; e-mail: [email protected]
M. Aslam Noor (corresponding author) · K. Inayat Noor · H. Yaqoob: Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan; e-mail: [email protected], [email protected], [email protected]
Y.-C. Liou: Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan; e-mail: [email protected]

1 Introduction

Variational inequality theory has emerged as an important tool in studying a wide class of obstacle, unilateral, free, moving, and equilibrium problems arising in several branches of pure


and applied sciences in a unified and general framework. This field is dynamic and is experiencing explosive growth in both theory and applications. Several numerical methods have been developed for solving variational inequalities and related optimization problems; see [1–45] and the references therein. For the physical formulation, applications, numerical methods and other aspects of variational inequalities, see [17–28] and the references therein.

To convey an idea of the variational inequality, let C be a closed and convex set in a real Hilbert space H. For a given operator A, we consider the problem of finding x∗ ∈ C such that

⟨Ax∗, x − x∗⟩ ≥ 0,  ∀x ∈ C,   (1.1)

which is known as the variational inequality, introduced and studied by Stampacchia [3] in 1964 in the field of potential theory. We remark that the minimum of a differentiable convex function f on the convex set C can be characterized by the above variational inequality with Ax∗ = f′(x∗), where f′(·) is the Fréchet differential of f.

We recall the following well-known result, which is called the best approximation result or the projection lemma.

Lemma 1.1  For a given z ∈ H, u ∈ C satisfies the inequality

⟨u − z, v − u⟩ ≥ 0,  ∀v ∈ C,

if and only if u = PC z, where PC is the projection of H onto the closed convex set C.

Using Lemma 1.1, one can easily show that the variational inequality is equivalent to a fixed point problem. In fact, we have the following result.

Lemma 1.2  x∗ ∈ C is a solution of the variational inequality (1.1) if and only if x∗ ∈ C satisfies the relation

x∗ = PC[x∗ − λAx∗],   (1.2)

where PC is the projection of H onto the closed convex set C and λ > 0 is a constant.

Lemma 1.2 implies that the variational inequality (1.1) is equivalent to the fixed point problem (1.2). This alternative equivalent formulation has played an important and fundamental part in establishing the existence of a solution and in developing several numerical iterative methods for solving variational inequalities and related optimization problems. The fixed point formulation is used to suggest and analyze the following iterative method.

Algorithm 1.1  For a given x0 ∈ H, find the approximate solution xn+1 by the iterative scheme

xn+1 = PC[xn − λAxn],  n = 0, 1, 2, . . . ,

where λ > 0 is a constant.
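Algorithm 1.1 is easy to test in the finite-dimensional Hilbert space ℝ². The sketch below is our illustration, not part of the paper: C is the closed unit ball (whose projection is a radial rescaling) and A(x) = x − b with a hypothetical shift b, so that A is strongly monotone and Lipschitz continuous and the solution of (1.1) is PC(b).

```python
import math

def proj_ball(x):
    # Projection onto the closed unit ball: radial rescaling when ||x|| > 1.
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

def A(x, b=(2.0, 1.0)):
    # A(x) = x - b is 1-strongly monotone and 1-Lipschitz.
    return (x[0] - b[0], x[1] - b[1])

lam = 0.5
x = (0.0, 0.0)
for _ in range(200):
    ax = A(x)
    x = proj_ball((x[0] - lam * ax[0], x[1] - lam * ax[1]))  # x_{n+1} = P_C[x_n - lam*A(x_n)]

# For this A, the solution of (1.1) is P_C(b) = b / ||b||.
b_norm = math.hypot(2.0, 1.0)
err = math.hypot(x[0] - 2.0 / b_norm, x[1] - 1.0 / b_norm)
```

With λ ∈ (0, 2), the map x ↦ PC[(1 − λ)x + λb] is a contraction, so the iterates converge linearly here; this matches the strong monotonicity requirement discussed below.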


We would like to point out that the convergence of Algorithm 1.1 requires the operator A to be strongly monotone and Lipschitz continuous. These strong conditions rule out applications of Algorithm 1.1 to several problems. We can use the fixed point formulation to suggest the following iterative method.

Algorithm 1.2  For a given x0 ∈ H, find the approximate solution xn+1 by the iterative scheme

xn+1 = PC[xn − λAxn+1],  n = 0, 1, 2, . . . ,

where λ > 0 is a constant.

Algorithm 1.2 is an implicit iterative method, which is itself very difficult to implement. In order to implement Algorithm 1.2, we use a predictor-corrector technique: Algorithm 1.1 serves as a predictor and Algorithm 1.2 as a corrector. In this way, we have the following iterative method for solving the variational inequality.

Algorithm 1.3  For a given x0 ∈ H, find the approximate solution xn+1 by the iterative schemes

yn = PC(xn − λAxn),
xn+1 = PC(xn − λAyn),   (1.3)

which is known as the extragradient method and was introduced by Korpelevich [1]. Using the technique of Noor [17], one can show that the convergence of the extragradient method requires only that the operator is pseudomonotone. Note that strong monotonicity implies monotonicity and monotonicity implies pseudomonotonicity, but the converses are not true. This clearly improves the previous results.

Iiduka, Takahashi and Toyoda [7] introduced another iterative method for solving the variational inequality (1.1) and proved the following weak convergence theorem.

Theorem ITT  Let C be a nonempty closed convex subset of a real Hilbert space H and let A be an α-inverse-strongly monotone operator of C into H with VI(C, A) ≠ ∅. Let {xn} be a sequence defined as follows:

x1 = x ∈ C,
yn = PC(xn − λn Axn),   (1.4)
xn+1 = PC(αn xn + (1 − αn)yn),

for every n = 1, 2, . . . , where PC is the metric projection from H onto C, {αn} is a sequence in [−1, 1], and {λn} is a sequence in [0, 2α]. If {αn} and {λn} are chosen so that αn ∈ [a, b] for some a, b with −1 < a < b < 1 and λn ∈ [c, d] for some c, d with 0 < c < d < 2(1 + a)α, then {xn} defined by (1.4) converges weakly to some element of VI(C, A).

Noor [27] has again modified the extragradient method by performing an additional forward step and a projection at each iteration in conjunction with the technique of updating the solution.
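The advantage of the extragradient step can be seen on a toy monotone, but not strongly monotone, problem. The following sketch is our illustration (not taken from [1]): A(x1, x2) = (−x2, x1) is a rotation on the closed unit ball of ℝ², monotone and 1-Lipschitz, and the unique solution of (1.1) is the origin. Algorithm 1.1 stalls on the boundary, while the extragradient scheme (1.3) drives the iterates to the solution.

```python
import math

def proj_ball(x):
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

def A(x):
    # Rotation by 90 degrees: monotone (skew) but not strongly monotone.
    return (-x[1], x[0])

def step(x, d, lam):
    return proj_ball((x[0] - lam * d[0], x[1] - lam * d[1]))

lam = 0.5
x_plain = x_extra = (1.0, 0.0)
for _ in range(1000):
    # Algorithm 1.1: x_{n+1} = P_C[x_n - lam*A(x_n)]
    x_plain = step(x_plain, A(x_plain), lam)
    # Algorithm 1.3 (extragradient): predictor y_n, corrector using A(y_n)
    y = step(x_extra, A(x_extra), lam)
    x_extra = step(x_extra, A(y), lam)

norm_plain = math.hypot(*x_plain)
norm_extra = math.hypot(*x_extra)
```

Korpelevich's analysis needs only λL < 1 with L the Lipschitz constant (here L = 1, λ = 0.5): the plain projected iteration keeps ‖xn‖ = 1 forever, while the extragradient iterates contract toward 0.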
Using this modification, Noor [17, 26, 27] has suggested a number of projection-splitting methods for solving several classes of variational inequalities. It has been shown that the convergence of these projection-splitting methods requires only that the operator is partially relaxed strongly monotone, which


is a much weaker condition than co-coercivity and monotonicity. For this reason, modified extragradient methods are very efficient as compared with extragradient methods and projection-type methods; see [17, 27]. To convey the main flavor of this technique, we rewrite the fixed point formulation (1.2) as

y = PC(x − λAx),
x = PC(y − μAy),

which is another fixed point formulation of (1.2). Thus, using Lemma 1.1, one can show that the problem of finding x ∈ C satisfying (1.1) is equivalent to finding x, y ∈ C such that

⟨λAx + y − x, v − y⟩ ≥ 0,  ∀v ∈ C,
⟨μAy + x − y, v − x⟩ ≥ 0,  ∀v ∈ C.

This system of variational inequalities was considered and studied by Noor [17, 27] using the auxiliary principle technique. Using Lemma 1.1, Noor [17, 27] has suggested and analyzed the following iterative method.

Algorithm 1.4  For a given x0 ∈ H, find the approximate solution xn+1 by the iterative schemes

yn = PC(xn − λAxn),
xn+1 = PC(yn − λAyn),   (1.5)

which is known as the modified extragradient method. Algorithm 1.4 is also known as the double projection or two-step forward-backward projection method for solving the variational inequality (1.1). For the convergence analysis and its applications, see [17, 23–30, 36–45] and the references therein.

This motivated Ceng, Wang and Yao [16] to introduce the following general system of variational inequalities involving two different operators. For given two operators A, B, consider the problem of finding x∗, y∗ ∈ C such that

⟨λAy∗ + x∗ − y∗, x − x∗⟩ ≥ 0,  ∀x ∈ C,
⟨μBx∗ + y∗ − x∗, x − y∗⟩ ≥ 0,  ∀x ∈ C,   (1.6)

where λ and μ are two positive real numbers. To illustrate the applications of this system, we consider the following example, which is due to Zhu and Marcotte [35].

Example 1.1  We consider the problem of finding x∗ ∈ C such that

⟨A(x∗), x − x∗⟩ ≥ 0,  ∀x ∈ E = C ∩ {x ∈ H : B(x) ≤ 0},   (1.7)

where A is strongly monotone on E and B(x) = (f1(x), f2(x), . . . , fm(x)) is a constraint mapping explicitly defined by the convex, Lipschitz continuous and continuously differentiable functions fi, i = 1, 2, . . . , m. Assume that there exists x0 ∈ C such that fi(x0) < 0,


i = 1, 2, . . . , m (Slater's constraint qualification). Then the variational inequality (1.7) is equivalent to the Kuhn-Tucker-like system

⟨A(x∗) + (∇B(x∗))ᵗ y∗, x − x∗⟩ ≥ 0,  ∀x ∈ C,
⟨−B(x∗), y − y∗⟩ ≥ 0,  ∀y ≥ 0,

which is exactly the system of variational inequalities (1.6).

Again using Lemma 1.1, one can show that this system of variational inequalities is equivalent to fixed point problems. This equivalence is used to suggest and analyze some iterative methods for solving the system of variational inequalities; see [16, 23–30] and the references therein. To the best of our knowledge, such systems of variational inequalities have not been considered in Banach spaces. This fact has motivated us to consider the system of variational inequalities in Banach spaces, and this is one of the main motivations of this paper. We establish the equivalence between the system of variational inequalities and fixed point problems involving nonexpansive mappings using the projection technique. This equivalence is used to suggest and analyze a modified extragradient method for solving the system of variational inequalities involving two different operators, which is the main result (Theorem 3.1) of this paper. We also discuss several special cases which can be obtained from our results. The results proved in this paper represent a refinement and improvement of previously known results in several directions. It is interesting to compare the efficiency and practicality of the proposed methods with other known methods.

Aoyama et al. [8] first considered the following generalized variational inequality problem in Banach spaces. Let C be a nonempty closed convex subset of a smooth Banach space X. Let A : C → X be an accretive operator. Find a point x∗ ∈ C such that

⟨Ax∗, j(x − x∗)⟩ ≥ 0,  ∀x ∈ C.   (1.8)

Remark 1.1  The problem (1.8) is very interesting as it is connected with the fixed point problem for nonlinear mappings, the problem of finding a zero point of an accretive operator, and so on. For the problem of finding a zero point of an accretive operator by the proximal point algorithm, see [9] and the references therein.

In order to find a solution of problem (1.8), Aoyama et al.
[8] introduced the following iterative algorithm in Banach spaces:

x1 = x ∈ C,
yn = QC(xn − λn Axn),   (1.9)
xn+1 = αn xn + (1 − αn)yn,  n ≥ 0,

where QC is a sunny nonexpansive retraction from X onto C. They proved a weak convergence theorem which simultaneously generalizes theorems of [2] and [6]. Hao [34] obtained a strong convergence theorem by using the following iterative algorithm:

yn = βn xn + (1 − βn)QC(I − λn A)xn,
xn+1 = αn u + (1 − αn)yn,  n ≥ 0.   (1.10)

We note that the iterative algorithms (1.3) and (1.4) in Hilbert spaces and (1.9) in Banach spaces only have weak convergence. This naturally brings us to the following questions:


Question 1.1
(1) Could we extend the systems of variational inequality problems (1.5) and (1.6) to Banach spaces, so as to include (1.6) as a special case?
(2) Could we construct an iterative algorithm for which strong convergence is guaranteed?
(3) Could we extend the iterative algorithm (1.10) to solve the problem (1.6) in Banach spaces?

In this paper, motivated and inspired by the ideas of Ceng et al. [16] and Noor [17, 27], we first introduce the following system of general variational inequalities in Banach spaces. Let C be a nonempty closed convex subset of a real Banach space X. For given two operators A, B : C → X, we consider the problem of finding (x∗, y∗) ∈ C × C such that

⟨Ay∗ + x∗ − y∗, j(x − x∗)⟩ ≥ 0,  ∀x ∈ C,
⟨Bx∗ + y∗ − x∗, j(x − y∗)⟩ ≥ 0,  ∀x ∈ C,   (1.11)

which is called the system of general variational inequalities in a real Banach space. In particular, if A = B and x∗ = y∗, then problem (1.11) reduces to problem (1.6) in a real Hilbert space. In this paper, for solving the problem (1.11), we first establish the equivalence between the system of variational inequalities (1.11) and a fixed point problem involving a nonexpansive mapping. This alternative equivalent formulation is used to suggest and analyze a modified extragradient method. Using the demi-closedness principle for nonexpansive mappings, we prove the strong convergence of the proposed iterative method under suitable conditions.

2 Preliminaries

Let X be a real Banach space and X∗ be the dual space of X. Let U = {x ∈ X : ‖x‖ = 1} denote the unit sphere of X. X is said to be uniformly convex if for each ε ∈ (0, 2], there exists a constant δ > 0 such that for any x, y ∈ U,

‖x − y‖ ≥ ε implies ‖(x + y)/2‖ ≤ 1 − δ.

The norm on X is said to be Gâteaux differentiable if the limit

lim t→0 (‖x + ty‖ − ‖x‖)/t   (2.1)

exists for each x, y ∈ U, and in this case X is said to be smooth. X is said to have a uniformly Fréchet differentiable norm if the limit (2.1) is attained uniformly for x, y ∈ U, and in this case X is said to be uniformly smooth. We define a function ρ : [0, ∞) → [0, ∞), called the modulus of smoothness of X, as follows:

ρ(τ) = sup{(‖x + y‖ + ‖x − y‖)/2 − 1 : x, y ∈ X, ‖x‖ = 1, ‖y‖ = τ}.

It is known that X is uniformly smooth if and only if lim τ→0 ρ(τ)/τ = 0. Let q be a fixed real number with 1 < q ≤ 2. Then a Banach space X is said to be q-uniformly smooth if


there exists a constant c > 0 such that ρ(τ) ≤ cτ^q for all τ > 0. For q > 1, the generalized duality mapping Jq : X → 2^(X∗) is defined by

Jq(x) = {f ∈ X∗ : ⟨x, f⟩ = ‖x‖^q, ‖f‖ = ‖x‖^(q−1)},  ∀x ∈ X.

In particular, if q = 2, the mapping J2 is called the normalized duality mapping and, usually, we write J2 = J. Further, we have the following properties of the generalized duality mapping Jq:

(1) Jq(x) = ‖x‖^(q−2) J2(x) for all x ∈ X with x ≠ 0.
(2) Jq(tx) = t^(q−1) Jq(x) for all x ∈ X and t ∈ [0, ∞).
(3) Jq(−x) = −Jq(x) for all x ∈ X.

It is known that if X is smooth, then J is single-valued; in this case it is denoted by j. Recall that the duality mapping j is said to be weakly sequentially continuous if for each {xn} ⊂ X with xn → x weakly, we have j(xn) → j(x) weakly-∗. We know that if X admits a weakly sequentially continuous duality mapping, then X is smooth. For details, see [31].

Let C be a nonempty closed convex subset of a smooth Banach space X. Recall that a mapping A : C → X is said to be accretive if

⟨Ax − Ay, j(x − y)⟩ ≥ 0 for all x, y ∈ C.

A mapping A : C → X is said to be α-strongly accretive if there exists a constant α > 0 such that

⟨Ax − Ay, j(x − y)⟩ ≥ α‖x − y‖² for all x, y ∈ C.

A mapping A : C → X is said to be α-inverse-strongly accretive if there exists a constant α > 0 such that

⟨Ax − Ay, j(x − y)⟩ ≥ α‖Ax − Ay‖² for all x, y ∈ C.

In the following, we also need the ensuing lemmas.

Lemma 2.1 ([11])  Let X be a q-uniformly smooth Banach space with 1 < q ≤ 2. Then

‖x + y‖^q ≤ ‖x‖^q + q⟨y, Jq(x)⟩ + 2‖Ky‖^q

for all x, y ∈ X, where K is the q-uniformly smooth constant of X.

Let D be a nonempty subset of C. A mapping Q : C → D is said to be sunny if

Q(Qx + t(x − Qx)) = Qx,

whenever Qx + t(x − Qx) ∈ C for x ∈ C and t ≥ 0. A mapping Q : C → D is called a retraction if Qx = x for all x ∈ D. Furthermore, Q is a sunny nonexpansive retraction from C onto D if Q is a retraction from C onto D which is also sunny and nonexpansive. A subset D of C is called a sunny nonexpansive retract of C if there exists a sunny nonexpansive retraction from C onto D.
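For orientation, in a Hilbert space (which is 2-uniformly smooth with constant K = 1/√2, so that 2‖Ky‖² = ‖y‖²) the inequality of Lemma 2.1 with q = 2 reads ‖x + y‖² ≤ ‖x‖² + 2⟨y, x⟩ + 2K²‖y‖² and in fact holds with equality. The short check below is our own illustration, assuming ℝ³ with the Euclidean inner product.

```python
import random

random.seed(0)
K2 = 0.5  # K^2 for a Hilbert space: 2*K^2*||y||^2 = ||y||^2

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

max_gap = 0.0
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = [random.uniform(-1, 1) for _ in range(3)]
    s = [a + b for a, b in zip(x, y)]
    lhs = dot(s, s)                                  # ||x + y||^2
    rhs = dot(x, x) + 2 * dot(y, x) + 2 * K2 * dot(y, y)
    max_gap = max(max_gap, abs(lhs - rhs))           # equality in the Hilbert case
```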
The following lemma concerns the sunny nonexpansive retraction.

Lemma 2.2 ([12, 32])  Let C be a closed convex subset of a smooth Banach space X. Let D be a nonempty subset of C and Q : C → D be a retraction. Then Q is sunny and nonexpansive if and only if

⟨u − Qu, j(y − Qu)⟩ ≤ 0

for all u ∈ C and y ∈ D.
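In a Hilbert space the sunny nonexpansive retraction QC coincides with the metric projection PC, and the characterization of Lemma 2.2 (with j the identity) is exactly the projection lemma ⟨u − PC u, y − PC u⟩ ≤ 0. The sketch below is our own illustration, with C the closed unit ball in ℝ².

```python
import math
import random

random.seed(1)

def proj_ball(x):
    # Metric projection onto the closed unit ball of R^2.
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

worst = -float("inf")
for _ in range(2000):
    u = (random.uniform(-3, 3), random.uniform(-3, 3))
    qu = proj_ball(u)
    # Sample y uniformly in C = closed unit ball.
    t, r = random.uniform(0, 2 * math.pi), math.sqrt(random.random())
    y = (r * math.cos(t), r * math.sin(t))
    val = (u[0] - qu[0]) * (y[0] - qu[0]) + (u[1] - qu[1]) * (y[1] - qu[1])
    worst = max(worst, val)  # Lemma 2.2 predicts val <= 0
```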


The first result regarding the existence of sunny nonexpansive retractions on the fixed point set of a nonexpansive mapping is due to Bruck [33].

Remark 2.1  If X is strictly convex and uniformly smooth and if T : C → C is a nonexpansive mapping having a nonempty fixed point set F(T), then there exists a sunny nonexpansive retraction of C onto F(T). Consequently, from Lemma 2.2, there is at most one sunny nonexpansive retraction from C onto D.

Lemma 2.3 ([13])  Let C be a nonempty bounded closed convex subset of a uniformly convex Banach space X and let T be a nonexpansive mapping of C into itself. If {xn} is a sequence of C such that xn → x weakly and xn − T xn → 0 strongly, then x is a fixed point of T.

Lemma 2.4 ([14])  Let {xn} and {zn} be bounded sequences in a Banach space X and let {αn} be a sequence in [0, 1] which satisfies the condition 0 < lim inf n→∞ αn ≤ lim sup n→∞ αn < 1. Suppose

xn+1 = αn xn + (1 − αn)zn,  n ≥ 0,

and

lim sup n→∞ (‖zn+1 − zn‖ − ‖xn+1 − xn‖) ≤ 0.

Then lim n→∞ ‖zn − xn‖ = 0.

Lemma 2.5 ([12])  Assume {an} is a sequence of nonnegative real numbers such that

an+1 ≤ (1 − γn)an + δn,  n ≥ 0,

where {γn} is a sequence in (0, 1) and {δn} is a sequence in ℝ such that

(i) ∑∞n=0 γn = ∞;
(ii) lim sup n→∞ δn/γn ≤ 0 or ∑∞n=0 |δn| < ∞.

Then lim n→∞ an = 0.
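Lemma 2.5 is the standard tool that turns a recursive estimate of the form an+1 ≤ (1 − γn)an + δn into convergence. A quick numerical illustration (ours, with the hypothetical choices γn = 1/n, which satisfies (i), and δn = γn/n, so that δn/γn → 0 as required by (ii)) shows the sequence being driven to zero even from a large starting value.

```python
a = 100.0
for n in range(1, 200001):
    gamma = 1.0 / n        # sum of gamma_n diverges (condition (i))
    delta = gamma / n      # delta_n / gamma_n = 1/n -> 0 (condition (ii))
    a = (1.0 - gamma) * a + delta  # worst case of a_{n+1} <= (1-gamma_n)a_n + delta_n
final_a = a
```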

3 Main results

In this section, we state and prove our main results.

Lemma 3.1  Let C be a nonempty closed convex subset of a real 2-uniformly smooth Banach space X. Let the mapping A : C → X be α-inverse-strongly accretive. Then we have

‖(I − A)x − (I − A)y‖² ≤ ‖x − y‖² + 2(K² − α)‖Ax − Ay‖².   (3.1)

In particular, if α ≥ K², then I − A is nonexpansive.

Proof  Indeed, for all x, y ∈ C, from Lemma 2.1, we have

‖(I − A)x − (I − A)y‖² = ‖(x − y) − (Ax − Ay)‖²
  ≤ ‖x − y‖² − 2⟨Ax − Ay, j(x − y)⟩ + 2K²‖Ax − Ay‖²
  ≤ ‖x − y‖² − 2α‖Ax − Ay‖² + 2K²‖Ax − Ay‖²
  = ‖x − y‖² + 2(K² − α)‖Ax − Ay‖².

It is clear that if α ≥ K², then I − A is nonexpansive. This completes the proof. □

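The estimate (3.1) can be sanity-checked numerically in the Hilbert-space specialization (K² = 1/2, j the identity). In the sketch below, which is our own illustration, A(x) = cx with 0 < c ≤ 1 is (1/c)-inverse-strongly accretive, so α = 1/c ≥ K² and Lemma 3.1 predicts that I − A is nonexpansive.

```python
import random

random.seed(2)
c = 0.4          # A(x) = c*x is (1/c)-inverse-strongly accretive
alpha = 1.0 / c  # here alpha = 2.5 >= K^2 = 0.5
K2 = 0.5

def norm2(u):
    return sum(t * t for t in u)

ok = True
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(3)]
    y = [random.uniform(-5, 5) for _ in range(3)]
    d = [a - b for a, b in zip(x, y)]              # x - y
    Ad = [c * t for t in d]                        # Ax - Ay
    IAd = [a - b for a, b in zip(d, Ad)]           # (I-A)x - (I-A)y
    lhs = norm2(IAd)
    rhs = norm2(d) + 2 * (K2 - alpha) * norm2(Ad)  # right-hand side of (3.1)
    if lhs > rhs + 1e-9 or lhs > norm2(d) + 1e-9:  # (3.1) and nonexpansiveness
        ok = False
```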
Lemma 3.2  Let C be a nonempty closed convex subset of a real 2-uniformly smooth Banach space X. Let QC be the sunny nonexpansive retraction from X onto C. Let the mappings A, B : C → X be α-inverse-strongly accretive and β-inverse-strongly accretive, respectively. Let G : C → C be a mapping defined by

G(x) = QC[QC(x − Bx) − AQC(x − Bx)],  ∀x ∈ C.

If α ≥ K² and β ≥ K², then G : C → C is nonexpansive.

Proof  For all x, y ∈ C, from Lemma 3.1, we have

‖G(x) − G(y)‖ = ‖QC[QC(I − B)x − AQC(I − B)x] − QC[QC(I − B)y − AQC(I − B)y]‖
  ≤ ‖(I − A)QC(I − B)x − (I − A)QC(I − B)y‖.   (3.2)

From Lemma 3.1, we note that (I − A)QC(I − B) is nonexpansive. Therefore, from (3.2), we obtain immediately that the mapping G is nonexpansive. This completes the proof. □

Lemma 3.3  Let C be a nonempty closed convex subset of a real smooth Banach space X. Let QC be the sunny nonexpansive retraction from X onto C. Let A, B : C → X be two possibly nonlinear mappings. For given x∗, y∗ ∈ C, (x∗, y∗) is a solution of problem (1.11) if and only if x∗ = QC(y∗ − Ay∗), where y∗ = QC(x∗ − Bx∗).

Proof  We note that we can rewrite (1.11) as

⟨x∗ − (y∗ − Ay∗), j(x − x∗)⟩ ≥ 0,  ∀x ∈ C,
⟨y∗ − (x∗ − Bx∗), j(x − y∗)⟩ ≥ 0,  ∀x ∈ C.   (3.3)

From Lemma 2.2, we can deduce that (3.3) is equivalent to

x∗ = QC(y∗ − Ay∗),
y∗ = QC(x∗ − Bx∗).

This completes the proof. □

Remark 3.1  From Lemma 3.3, we note that

x∗ = QC[QC(x∗ − Bx∗) − AQC(x∗ − Bx∗)],

which implies that x∗ is a fixed point of the mapping G. Throughout this paper, the set of fixed points of the mapping G is denoted by Ω.
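In the Hilbert-space specialization (QC = PC, K² = 1/2), Lemma 3.3 and Remark 3.1 can be illustrated numerically: iterate the nonexpansive map G of Lemma 3.2 until it settles, then check that the pair (x∗, y∗) satisfies the two projection equations. The sketch below is our own illustration, with C the closed unit ball in ℝ² and the co-coercive choices A(x) = 0.8(x − p), B(x) = 0.6(x − q) for hypothetical anchor points p and q.

```python
import math

def proj_ball(x):
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

# Hypothetical co-coercive operators: inverse-strongly accretive with
# alpha = 1/0.8 and beta = 1/0.6, both >= K^2 = 1/2.
p, q = (2.0, 0.0), (0.0, 1.5)
A = lambda x: (0.8 * (x[0] - p[0]), 0.8 * (x[1] - p[1]))
B = lambda x: (0.6 * (x[0] - q[0]), 0.6 * (x[1] - q[1]))

def G(x):
    y = proj_ball((x[0] - B(x)[0], x[1] - B(x)[1]))
    return proj_ball((y[0] - A(y)[0], y[1] - A(y)[1]))

x = (0.0, 0.0)
for _ in range(500):
    x = G(x)  # Picard iteration; here G is in fact a strict contraction

y = proj_ball((x[0] - B(x)[0], x[1] - B(x)[1]))    # y* = Q_C(x* - Bx*)
xb = proj_ball((y[0] - A(y)[0], y[1] - A(y)[1]))   # should reproduce x* (Lemma 3.3)
residual = math.hypot(x[0] - xb[0], x[1] - xb[1])
```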


For solving the general system of variational inequalities (1.11), we now introduce the following iterative algorithm.

Algorithm 3.1  Let C be a nonempty closed convex subset of a real smooth Banach space X. Let QC be the sunny nonexpansive retraction from X onto C. Let A, B : C → X be two mappings. For fixed u ∈ C and arbitrarily given x0 ∈ C, let the sequences {xn} and {yn} be generated iteratively by

yn = QC(xn − Bxn),
xn+1 = αn u + βn xn + γn QC(yn − Ayn),  n ≥ 0,   (3.4)

where {αn}, {βn} and {γn} are three sequences in (0, 1).

Now we prove the strong convergence of the algorithm (3.4) for solving problem (1.11).

Theorem 3.1  Let C be a nonempty closed convex subset of a uniformly convex and 2-uniformly smooth Banach space X which admits a weakly sequentially continuous duality mapping. Let QC be the sunny nonexpansive retraction from X onto C. Let the mappings A, B : C → X be α-inverse-strongly accretive with α ≥ K² and β-inverse-strongly accretive with β ≥ K², respectively. Suppose the set Ω of fixed points of the mapping G defined in Remark 3.1 is nonempty. For a given x0 ∈ C, let the sequence {xn} be generated iteratively by (3.4). Suppose the sequences {αn}, {βn} and {γn} satisfy the following conditions:

(i) αn + βn + γn = 1, ∀n ≥ 0;
(ii) lim n→∞ αn = 0 and ∑∞n=0 αn = ∞;
(iii) 0 < lim inf n→∞ βn ≤ lim sup n→∞ βn < 1.

Then {xn} defined by (3.4) converges strongly to Qu, where Q is the sunny nonexpansive retraction of C onto Ω.

Proof  Take x∗ ∈ Ω. It follows from Lemma 3.3 that

x∗ = QC[QC(x∗ − Bx∗) − AQC(x∗ − Bx∗)].

Put y∗ = QC(x∗ − Bx∗) and zn = QC(yn − Ayn). Then x∗ = QC(y∗ − Ay∗). From Lemma 3.1, we have

‖zn − x∗‖ = ‖QC(yn − Ayn) − QC(y∗ − Ay∗)‖
  ≤ ‖(yn − Ayn) − (y∗ − Ay∗)‖
  ≤ ‖yn − y∗‖
  = ‖QC(xn − Bxn) − QC(x∗ − Bx∗)‖
  ≤ ‖(xn − Bxn) − (x∗ − Bx∗)‖
  ≤ ‖xn − x∗‖.


Hence it follows that

‖xn+1 − x∗‖ = ‖αn u + βn xn + γn zn − x∗‖
  ≤ αn‖u − x∗‖ + βn‖xn − x∗‖ + γn‖zn − x∗‖
  ≤ αn‖u − x∗‖ + (1 − αn)‖xn − x∗‖
  ≤ max{‖u − x∗‖, ‖x0 − x∗‖}.

Therefore, {xn} is bounded. Hence {yn}, {zn}, {Ayn} and {Bxn} are also bounded. We observe that

‖zn+1 − zn‖ = ‖QC(yn+1 − Ayn+1) − QC(yn − Ayn)‖ ≤ ‖yn+1 − yn‖
  = ‖QC(xn+1 − Bxn+1) − QC(xn − Bxn)‖ ≤ ‖xn+1 − xn‖.   (3.5)

Set xn+1 = βn xn + (1 − βn)wn for all n ≥ 0. Then, we obtain

wn+1 − wn = (αn+1 u + γn+1 zn+1)/(1 − βn+1) − (αn u + γn zn)/(1 − βn)
  = (αn+1/(1 − βn+1) − αn/(1 − βn)) u + (γn+1/(1 − βn+1))(zn+1 − zn)
    + (γn+1/(1 − βn+1) − γn/(1 − βn)) zn.   (3.6)

Combining (3.5) and (3.6), we have

‖wn+1 − wn‖ − ‖xn+1 − xn‖
  ≤ |αn+1/(1 − βn+1) − αn/(1 − βn)| ‖u‖ + (γn+1/(1 − βn+1))‖xn+1 − xn‖
    + |γn+1/(1 − βn+1) − γn/(1 − βn)| ‖zn‖ − ‖xn+1 − xn‖
  ≤ |αn+1/(1 − βn+1) − αn/(1 − βn)| (‖u‖ + ‖zn‖).

This implies that

lim sup n→∞ (‖wn+1 − wn‖ − ‖xn+1 − xn‖) ≤ 0.

Hence, by Lemma 2.4, we obtain ‖wn − xn‖ → 0 as n → ∞. Consequently,

lim n→∞ ‖xn+1 − xn‖ = lim n→∞ (1 − βn)‖wn − xn‖ = 0.   (3.7)

From (3.4), we can write xn+1 − xn = αn(u − xn) + γn(zn − xn) and note that 0 < lim inf n→∞ γn ≤ lim sup n→∞ γn < 1 and lim n→∞ αn = 0. It follows from (3.7) that

lim n→∞ ‖xn − zn‖ = 0.


From Lemma 3.2, we know that G : C → C is nonexpansive. Thus, we have

‖zn − G(zn)‖ = ‖QC[QC(xn − Bxn) − AQC(xn − Bxn)] − G(zn)‖
  = ‖G(xn) − G(zn)‖ ≤ ‖xn − zn‖.

Thus lim n→∞ ‖zn − G(zn)‖ = 0. Therefore,

lim n→∞ ‖xn − G(xn)‖ = 0.   (3.8)

Let Q be the sunny nonexpansive retraction of C onto Ω. Now we show that

lim sup n→∞ ⟨u − Qu, j(xn − Qu)⟩ ≤ 0.   (3.9)

To show (3.9), since {xn} is bounded, we can choose a subsequence {xni} of {xn} which converges weakly to z such that

lim sup n→∞ ⟨u − Qu, j(xn − Qu)⟩ = lim i→∞ ⟨u − Qu, j(xni − Qu)⟩.   (3.10)

According to Lemma 2.3 and (3.8), we have z ∈ Ω. Now, from (3.10), Lemma 2.2 and the weak sequential continuity of the duality mapping j, we have

lim sup n→∞ ⟨u − Qu, j(xn − Qu)⟩ = lim i→∞ ⟨u − Qu, j(xni − Qu)⟩
  = ⟨u − Qu, j(z − Qu)⟩ ≤ 0.

Finally, from (3.4), we have

‖xn+1 − Qu‖² = ⟨αn u + βn xn + γn zn − Qu, j(xn+1 − Qu)⟩
  = αn⟨u − Qu, j(xn+1 − Qu)⟩ + βn⟨xn − Qu, j(xn+1 − Qu)⟩
    + γn⟨zn − Qu, j(xn+1 − Qu)⟩
  ≤ (βn/2)(‖xn − Qu‖² + ‖xn+1 − Qu‖²) + αn⟨u − Qu, j(xn+1 − Qu)⟩
    + (γn/2)(‖zn − Qu‖² + ‖xn+1 − Qu‖²)
  ≤ ((1 − αn)/2)(‖xn − Qu‖² + ‖xn+1 − Qu‖²) + αn⟨u − Qu, j(xn+1 − Qu)⟩,

which implies that

‖xn+1 − Qu‖² ≤ (1 − αn)‖xn − Qu‖² + 2αn⟨u − Qu, j(xn+1 − Qu)⟩.   (3.11)


Finally, by Lemma 2.5 and (3.11), we conclude that {xn} converges strongly to Qu. This completes the proof. □

Remark 3.2  Our result extends the main result of Ceng, Wang and Yao [16] from Hilbert spaces to Banach spaces. At the same time, our result includes the main result of Hao [34] as a special case.

Acknowledgement  The authors are very grateful to the referees for their careful reading, comments and suggestions, which improved the presentation of this article.
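In the Hilbert-space specialization the scheme (3.4) is straightforward to run. The sketch below is our own illustration, with C the closed unit ball in ℝ², QC = PC, hypothetical co-coercive operators A(x) = 0.8(x − p) and B(x) = 0.6(x − q), and the parameter choices αn = 1/(n + 2), βn = 1/2, γn = 1 − αn − βn, which satisfy conditions (i)–(iii) of Theorem 3.1. The iterates are checked against the fixed point of G computed independently (here G is a strict contraction, so Ω is a singleton and Qu is that single point for any u).

```python
import math

def proj_ball(x):
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

# Hypothetical co-coercive operators (alpha = 1/0.8, beta = 1/0.6 >= K^2 = 1/2).
p, q = (2.0, 0.0), (0.0, 1.5)
A = lambda x: (0.8 * (x[0] - p[0]), 0.8 * (x[1] - p[1]))
B = lambda x: (0.6 * (x[0] - q[0]), 0.6 * (x[1] - q[1]))

def G(x):
    y = proj_ball((x[0] - B(x)[0], x[1] - B(x)[1]))
    return proj_ball((y[0] - A(y)[0], y[1] - A(y)[1]))

# Reference solution via Picard iteration on the contraction G.
xs = (0.0, 0.0)
for _ in range(300):
    xs = G(xs)

u, x = (0.3, -0.4), (0.9, 0.1)   # fixed u in C and starting point x0 in C
for n in range(100000):
    an = 1.0 / (n + 2)            # alpha_n -> 0, sum alpha_n = infinity
    bn = 0.5                      # 0 < liminf beta_n <= limsup beta_n < 1
    gn = 1.0 - an - bn
    y = proj_ball((x[0] - B(x)[0], x[1] - B(x)[1]))      # y_n = Q_C(x_n - B x_n)
    z = proj_ball((y[0] - A(y)[0], y[1] - A(y)[1]))      # Q_C(y_n - A y_n)
    x = (an * u[0] + bn * x[0] + gn * z[0],
         an * u[1] + bn * x[1] + gn * z[1])              # scheme (3.4)

err = math.hypot(x[0] - xs[0], x[1] - xs[1])
```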

References

1. Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody 12, 747–756 (1976)
2. Browder, F.E., Petryshyn, W.V.: Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 20, 197–228 (1967)
3. Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)
4. Takahashi, W., Toyoda, M.: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417–428 (2003)
5. Zeng, L.C., Yao, J.C.: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 10, 1293–1303 (2006)
6. Gol'shtein, E.G., Tret'yakov, N.V.: Modified Lagrangians in convex programming and their generalizations. Math. Program. Study 10, 86–97 (1979)
7. Iiduka, H., Takahashi, W., Toyoda, M.: Approximation of solutions of variational inequalities for monotone mappings. Panam. Math. J. 14, 49–61 (2004)
8. Aoyama, K., Iiduka, H., Takahashi, W.: Weak convergence of an iterative sequence for accretive operators in Banach spaces. Fixed Point Theory Appl. 2006, 1–13 (2006)
9. Kamimura, S., Takahashi, W.: Weak and strong convergence of solutions to accretive operator inclusions and applications. Set-Valued Anal. 8, 361–374 (2000)
10. Takahashi, Y., Hashimoto, K., Kato, M.: On sharp uniform convexity, smoothness, and strong type, cotype inequalities. J. Nonlinear Convex Anal. 3, 267–281 (2002)
11. Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127–1138 (1991)
12. Reich, S.: Asymptotic behavior of contractions in Banach spaces. J. Math. Anal. Appl. 44, 57–70 (1973)
13. Browder, F.E.: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. Symp. Pure Math. 18, 78–81 (1976)
14. Suzuki, T.: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 305, 227–239 (2005)
15. Aslam Noor, M.: Projection-splitting algorithms for general mixed variational inequalities. J. Comput. Anal. Appl. 4, 47–61 (2002)
16. Ceng, L.C., Wang, C., Yao, J.C.: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375–390 (2008)
17. Aslam Noor, M.: Some developments in general variational inequalities. Appl. Math. Comput. 152, 199–277 (2004)
18. Yao, Y., Noor, M.A.: On viscosity iterative methods for variational inequalities. J. Math. Anal. Appl. 325, 776–787 (2007)
19. Noor, M.A.: New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 251, 217–229 (2000)
20. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
21. Glowinski, R.: Numerical Methods for Nonlinear Variational Problems. Springer, New York (1984)
22. Jaillet, P., Lamberton, D., Lapeyre, B.: Variational inequalities and the pricing of American options. Acta Appl. Math. 21, 263–289 (1990)
23. Aslam Noor, M., Inayat Noor, K.: Self-adaptive projection algorithms for general variational inequalities. Appl. Math. Comput. 151, 659–670 (2004)
24. Aslam Noor, M., Inayat Noor, K., Rassias, Th.M.: Some aspects of variational inequalities. J. Comput. Appl. Math. 47, 285–312 (1993)
25. Aslam Noor, M.: General variational inequalities. Appl. Math. Lett. 1, 119–121 (1988)
26. Aslam Noor, M.: Wiener–Hopf equations and variational inequalities. J. Optim. Theory Appl. 79, 197–206 (1993)
27. Aslam Noor, M.: Some algorithms for general monotone mixed variational inequalities. Math. Comput. Model. 29, 1–9 (1999)
28. Aslam Noor, M., Al-Said, E.A.: Wiener–Hopf equations technique for quasimonotone variational inequalities. J. Optim. Theory Appl. 103, 705–714 (1999)
29. Huang, Z., Aslam Noor, M.: An explicit projection method for a system of nonlinear variational inequalities with different (γ, r)-cocoercive mappings. Appl. Math. Comput. 190, 356–361 (2007)
30. Aslam Noor, M.: On iterative methods for solving a system of mixed variational inequalities. Appl. Anal. 87, 99–108 (2008)
31. Gossez, J.P., Lami Dozo, E.: Some geometric properties related to the fixed point theory for nonexpansive mappings. Pac. J. Math. 40, 565–573 (1972)
32. Bruck, R.E. Jr.: Nonexpansive retracts of Banach spaces. Bull. Am. Math. Soc. 76, 384–386 (1970)
33. Bruck, R.E.: Nonexpansive projections on subsets of Banach spaces. Pac. J. Math. 47, 341–355 (1973)
34. Hao, Y.: Strong convergence of an iterative method for inverse strongly accretive operators. J. Inequal. Appl. 2008, 420989 (2008). doi:10.1155/2008/420989
35. Zhu, D.L., Marcotte, P.: Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities. SIAM J. Optim. 6, 714–726 (1996)
36. Aslam Noor, M.: Differentiable nonconvex functions and general variational inequalities. Appl. Math. Comput. 199, 623–630 (2008)
37. Aslam Noor, M.: Extended general variational inequalities. Appl. Math. Lett. 22, 182–186 (2009)
38. Aslam Noor, M.: Sensitivity analysis of extended general variational inequalities. Appl. Math. E-Notes 9, 17–26 (2009)
39. Aslam Noor, M., Inayat Noor, K., Yaqoob, H.: On general mixed variational inequalities. Acta Appl. Math. (2009). doi:10.1007/s10440-008-9402-4
40. Aslam Noor, M.: Implicit Wiener–Hopf equations and quasi-variational inequalities. Alban. J. Math. 2, 15–25 (2008)
41. Aslam Noor, M.: Some iterative algorithms for extended general variational inequalities. Alban. J. Math. 3, 265–275 (2008)
42. Bnouhachem, A., Aslam Noor, M., Hao, Z.: Some new extragradient iterative methods for variational inequalities. Nonlinear Anal. 70, 1321–1329 (2009)
43. Aslam Noor, M.: On a class of general variational inequalities. J. Adv. Math. Stud. 1, 31–42 (2008)
44. Aslam Noor, M., Inayat Noor, K., Zainab, S.: On a predictor-corrector method for solving invex equilibrium problems. Nonlinear Anal. (2009). doi:10.1016/j.na.2009.01.235
45. Aslam Noor, M.: On iterative methods for solving a system of mixed variational inequalities. Appl. Anal. 87, 99–108 (2008)