Appl. Math. Mech. -Engl. Ed., 32(10), 1345–1356 (2011)
DOI 10.1007/s10483-011-1505-x
© Shanghai University and Springer-Verlag Berlin Heidelberg 2011
Applied Mathematics and Mechanics (English Edition)
Projected subgradient method for non-Lipschitz set-valued mixed variational inequalities∗

Guo-ji TANG, Nan-jing HUANG
(Department of Mathematics, Sichuan University, Chengdu 610064, P. R. China)
(Communicated by Shi-sheng ZHANG)
Abstract A projected subgradient method is proposed for solving a class of set-valued mixed variational inequalities (SMVIs) when the mapping is not necessarily Lipschitz. Under suitable conditions, it is proved that the sequence generated by the method converges strongly to the unique solution of the problem in Hilbert spaces.
Key words set-valued mixed variational inequality (SMVI), projected subgradient method, non-Lipschitz mapping, convergence
Chinese Library Classification O177.91, O178
2010 Mathematics Subject Classification 47J20, 47J25, 47H05
1 Introduction
Let H be a Hilbert space and K be a nonempty, closed, and convex subset of H. Let T : K → 2^H be a set-valued mapping and f : H → R ∪ {+∞} be a proper, convex, and lower semi-continuous (l.s.c.) function. We consider the following set-valued mixed variational inequality (SMVI(T, K) for short): find x∗ ∈ K and w∗ ∈ T(x∗) such that

⟨w∗, x − x∗⟩ + f(x) − f(x∗) ≥ 0,  ∀x ∈ K.  (1)
The SMVI problem (1) is encountered in many applications, in particular, in mechanical problems (see [1]) and equilibrium problems (see [2]). It is well-known that the problem (1) includes a large variety of problems as special cases. For example, if T is the subdifferential of a finite-valued convex continuous function ϕ defined on the Hilbert space H, then the problem (1) reduces to the following nondifferentiable optimization problem:

min_{x∈K} {f(x) + ϕ(x)}.
Furthermore, if f = 0, then the problem (1) reduces to the following set-valued variational inequality problem: find x∗ ∈ K and w∗ ∈ T(x∗) such that

⟨w∗, x − x∗⟩ ≥ 0,  ∀x ∈ K.  (2)
∗ Received Apr. 2, 2011 / Revised Jul. 7, 2011
Project supported by the Key Program of the National Natural Science Foundation of China (No. 70831005), the National Natural Science Foundation of China (No. 10671135), and the Fundamental Research Funds for the Central Universities (No. 2009SCU11096)
Corresponding author Nan-jing HUANG, Professor, Ph. D., E-mail: [email protected]
If T is single-valued and f = 0, then the problem (1) becomes the classical variational inequality problem: find x∗ ∈ K such that

⟨T(x∗), x − x∗⟩ ≥ 0,  ∀x ∈ K.  (3)
One of the most interesting and important problems in variational inequality theory is the development of efficient iterative algorithms for computing approximate solutions, together with the convergence analysis of these algorithms (see [3–25]). The simplest of the proposed methods is the projection-type method, which has been studied intensively by many researchers (see [3–4]). However, the classical projection method is not suitable for solving the SMVI (1). Therefore, some researchers have studied other easily implementable methods for solving the problem (1). In particular, using an idea similar to that of Alber et al.[26], Xia et al.[5] considered a projected subgradient method for solving the problem (1) in Hilbert spaces. At each step, they choose an ε_j-subgradient p^j of the function f and an element v^j of the set-valued mapping T, and then perform an orthogonal projection onto the feasible set. More precisely, they presented the following algorithm:

Algorithm A (see Algorithm 3.1 in [5])
Step (i) Select an initial x^0 ∈ K and v^0 ∈ T(x^0). Set j := 0.
Step (ii) If 0 ∈ T(x^j) + ∂f(x^j), stop; else, go to step (iii).
Step (iii) Let p^j ∈ ∂_{ε_j} f(x^j), η_j = max{1, ‖v^j + p^j‖}, and

x^{j+1} = P_K(x^j − (ρ_j/η_j)(v^j + p^j))  (4)

with ρ_j and ε_j satisfying

Σ_{j=0}^∞ ρ_j = +∞,  Σ_{j=0}^∞ ρ_j² < +∞,  ε_j ≤ μρ_j  (μ > 0).

Step (iv) Take v^{j+1} ∈ T(x^{j+1}) such that

‖v^{j+1} − v^j‖ ≤ (1 + 1/(j+1)) H(T(x^{j+1}), T(x^j)).

Let j := j + 1 and return to step (ii).
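For concreteness, the main projection step (4) of Algorithm A can be sketched as follows. This is only a minimal illustration in Python; the oracle names proj_K, subgrad_f, and select_T are assumptions (user-supplied projection, (ε_j-)subgradient, and selection of T), not notation from [5], and the termination tests of steps (ii) and (iv) are omitted.

```python
import numpy as np

def algorithm_A_iteration(x0, proj_K, subgrad_f, select_T, rho, max_iter=1000):
    """Sketch of the projection step (4) of Algorithm A.

    proj_K(x)    : orthogonal projection of x onto K
    subgrad_f(x) : an (eps_j-)subgradient p^j of f at x
    select_T(x)  : a selection v^j in T(x)
    rho(j)       : steps with sum rho_j = +inf, sum rho_j^2 < +inf
    """
    x = np.asarray(x0, dtype=float)
    for j in range(max_iter):
        v = select_T(x)
        p = subgrad_f(x)
        eta = max(1.0, np.linalg.norm(v + p))      # eta_j = max{1, ||v^j + p^j||}
        x = proj_K(x - (rho(j) / eta) * (v + p))   # iteration (4)
    return x
```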
Under the assumptions that T is paramonotone and weakly closed on K and Lipschitz continuous on bounded subsets of K, Xia et al.[5] proved that the sequence generated by Algorithm A converges weakly to a solution of (1). However, in general, it is not easy to compute the Lipschitz constant associated with T. To overcome this difficulty, Anh et al.[9] recently generalized the projection method for solving the strongly monotone set-valued variational inequality (2) in finite dimensional spaces, where T may not be Lipschitz. At each iteration, at most one projection onto the feasible set is needed. Motivated and inspired by the research mentioned above, in this paper, we propose a new projected subgradient method for solving the SMVI (1) when the mapping is not necessarily Lipschitz. Under a generalized monotonicity assumption on T that is different from the paramonotonicity used by Xia et al.[5], we prove that the sequence generated by the method converges strongly in Hilbert spaces, while only a weak convergence result was obtained in [5]. The results presented in this paper also generalize and improve some corresponding results of Anh et al.[9] from the set-valued variational inequality (2) to the SMVI (1) and from finite dimensional spaces to infinite dimensional spaces.
2 Preliminaries
Definition 2.1 Let T : K ⊂ H → 2^H be a set-valued mapping with nonempty values and f : K → R ∪ {+∞} be a proper, convex, and lower semicontinuous function. The mapping T is said to be
(i) strongly monotone on K iff there is β > 0 such that for all (x, x∗) and (y, y∗) in graph T,
⟨y∗ − x∗, y − x⟩ ≥ β‖y − x‖²;
(ii) monotone on K iff for all (x, x∗) and (y, y∗) in graph T,
⟨y∗ − x∗, y − x⟩ ≥ 0;
(iii) paramonotone on K iff T is monotone and ⟨y∗ − x∗, y − x⟩ = 0 with (x, x∗), (y, y∗) ∈ graph T implies that (x, y∗), (y, x∗) ∈ graph T;
(iv) pseudomonotone on K iff for all (x, x∗) and (y, y∗) in graph T,
⟨x∗, y − x⟩ ≥ 0 ⇒ ⟨y∗, y − x⟩ ≥ 0;
(v) strongly pseudomonotone on K iff there is β > 0 such that for all (x, x∗) and (y, y∗) in graph T,
⟨x∗, y − x⟩ ≥ 0 ⇒ ⟨y∗, y − x⟩ ≥ β‖y − x‖²;
(vi) f-pseudomonotone on K iff for all (x, x∗) and (y, y∗) in graph T,
⟨x∗, y − x⟩ + f(y) − f(x) ≥ 0 ⇒ ⟨y∗, y − x⟩ + f(y) − f(x) ≥ 0;
(vii) f-strongly pseudomonotone on K iff there is β > 0 such that for all (x, x∗) and (y, y∗) in graph T,
⟨x∗, y − x⟩ + f(y) − f(x) ≥ 0 ⇒ ⟨y∗, y − x⟩ + f(y) − f(x) ≥ β‖y − x‖²;
(viii) Lipschitz continuous on a subset B of K iff there exists L > 0 such that

H(T(x), T(y)) ≤ L‖x − y‖,  ∀x, y ∈ B,

where H(·, ·) is the Hausdorff metric on the nonempty, bounded, and closed subsets of H, i.e.,

H(T(x), T(y)) = max{ sup_{r∈T(x)} inf_{s∈T(y)} ‖r − s‖, sup_{s∈T(y)} inf_{r∈T(x)} ‖r − s‖ },  ∀x, y ∈ B.
Remark 2.1 (i) We illustrate the implications between monotonicity and some generalized monotonicity notions as follows:

Strong pseudomonotonicity ⇐ Strong monotonicity ⇒ f-strong pseudomonotonicity
          ⇓                          ⇓                          ⇓
  Pseudomonotonicity       ⇐     Monotonicity       ⇒     f-pseudomonotonicity
                                      ⇑
                              Paramonotonicity

(ii) If f ≡ constant, then f-pseudomonotonicity (f-strong pseudomonotonicity) reduces to pseudomonotonicity (strong pseudomonotonicity).
(iii) We would like to point out that f-pseudomonotone mappings were used to study F-complementarity problems in Banach spaces by Yin et al.[27], to study the stability of the Minty mixed variational inequality in Banach spaces by Zhong and Huang[28],
and to construct an algorithm for solving mixed variational inequalities in finite dimensional spaces by He[6], respectively. The relationship between pseudomonotonicity and f-pseudomonotonicity was discussed by Zhong and Huang[28]. Similarly, we discuss the relationship between strong pseudomonotonicity and f-strong pseudomonotonicity.
The following example shows that an f-strongly pseudomonotone mapping is not necessarily pseudomonotone. Thus, it follows from the diagram in (i) of Remark 2.1 that an f-strongly pseudomonotone mapping may not be strongly pseudomonotone (when f is not constant), strongly monotone, monotone, or paramonotone.
Example 2.1 Let H = R = (−∞, +∞) and K = [3, 5]. Let

f(x) = x², x ∈ K,  T(x) ≡ [−1, 1], x ∈ K.

It is easy to see that T is not pseudomonotone on K. However, we can show that T is f-strongly pseudomonotone on K. In fact, let (x, x∗), (y, y∗) ∈ graph T with

⟨x∗, y − x⟩ + f(y) − f(x) = (x∗ + y + x)(y − x) ≥ 0.

Then, the facts x∗, y∗ ∈ [−1, 1] and x, y ∈ [3, 5] imply that x∗ + y + x > 0 and y∗ + y + x > y − x, and so y − x ≥ 0. Thus, we have

⟨y∗, y − x⟩ + f(y) − f(x) = (y∗ + y + x)(y − x) ≥ (y − x)²,

which shows that T is f-strongly pseudomonotone with modulus 1 on K.
The following example shows that a strongly pseudomonotone mapping is not necessarily f-pseudomonotone. Thus, it follows from the diagram in (i) of Remark 2.1 that a strongly pseudomonotone mapping may not be f-strongly pseudomonotone, strongly monotone, monotone, or paramonotone.
Example 2.2 Let H = R = (−∞, +∞) and K = [−2, 2]. Let

f(x) = x², x ∈ K,  T(x) ≡ [2, 4], x ∈ K.
We first show that T is strongly pseudomonotone on K. In fact, let (x, x∗), (y, y∗) ∈ graph T with

⟨x∗, y − x⟩ = x∗(y − x) ≥ 0.

Then, the facts x∗, y∗ ∈ [2, 4] and x, y ∈ [−2, 2] imply that y∗ ≥ (1/2)(y − x) and y − x ≥ 0. Thus, we have

⟨y∗, y − x⟩ = y∗(y − x) ≥ (1/2)(y − x)²,

which implies that T is strongly pseudomonotone with modulus 1/2 on K. However, T is not f-pseudomonotone on K. Indeed, we take (x, x∗) = (−2, 4) and (y, y∗) = (−1, 2) in graph T. Then,

⟨x∗, y − x⟩ + f(y) − f(x) = (4 − 1 − 2)(−1 + 2) ≥ 0.

However,

⟨y∗, y − x⟩ + f(y) − f(x) = (2 − 1 − 2)(−1 + 2) < 0.

This implies that T is not f-pseudomonotone on K.
Definition 2.2 For a convex function f : H → (−∞, +∞], dom f = {x ∈ H : f(x) < +∞} denotes its effective domain. For any given x ∈ dom f,

∂f(x) = {p ∈ H : f(y) − f(x) ≥ ⟨p, y − x⟩, ∀y ∈ H}

denotes the subdifferential of f at x, and a point p ∈ ∂f(x) is called a subgradient of f at x.
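As a small illustration of Definition 2.2, the following sketch evaluates one subgradient of the convex function f(x) = |x| on R and checks the subgradient inequality numerically; the function name and the sample points are illustrative only and are not part of the paper.

```python
def subgrad_abs(x):
    """Return one subgradient of f(x) = |x| at x.

    For x != 0 the subdifferential is the singleton {sign(x)};
    at x = 0 it is the whole interval [-1, 1], so any value in
    [-1, 1] (here 0.0) is a valid subgradient.
    """
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0

# Subgradient inequality: f(y) - f(x) >= p * (y - x) for all y.
x, p = 0.0, subgrad_abs(0.0)
assert all(abs(y) - abs(x) >= p * (y - x) for y in (-2.0, -0.5, 0.3, 1.7))
```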
Suppose that K is a nonempty, closed, and convex subset of the Hilbert space H. The distance from z to K is defined by

dist(z, K) := inf_{x∈K} ‖z − x‖.

Let P_K(z) denote the projection of z onto K, i.e., P_K(z) satisfies the condition

‖z − P_K(z)‖ = dist(z, K).

Lemma 2.1[5] Let K be a nonempty, closed, and convex subset in the Hilbert space H. Then, the following properties hold:
(i) ⟨x − y, x − P_K(x)⟩ ≥ 0, ∀x ∈ H, y ∈ K;
(ii) ‖P_K(x) − P_K(y)‖ ≤ ‖x − y‖, ∀x, y ∈ H.
Lemma 2.2[9] Let {α_j} be a sequence of nonnegative numbers such that

α_{j+1} ≤ (1 − λ_j)α_j + ε_j  for all j,

where λ_j ∈ (0, 1) and ε_j > 0 for any j satisfy

Σ_{j=0}^∞ λ_j = +∞,  Σ_{j=0}^∞ ε_j < +∞,

respectively. Then, lim_{j→∞} α_j = 0.
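For intuition, the projection P_K and the two properties of Lemma 2.1 can be checked numerically when K is a simple set such as a box; the snippet below is only an illustration with an assumed box-shaped K and random test points, not part of the paper's framework.

```python
import numpy as np

def proj_box(z, lo, hi):
    """Projection onto the box K = [lo, hi] (componentwise clipping)."""
    return np.clip(z, lo, hi)

lo, hi = np.array([3.0, -1.0]), np.array([5.0, 1.0])
rng = np.random.default_rng(0)
x, z = rng.normal(size=2) * 10, rng.normal(size=2) * 10
y = proj_box(rng.normal(size=2) * 10, lo, hi)    # an arbitrary point of K

Px, Pz = proj_box(x, lo, hi), proj_box(z, lo, hi)
# Lemma 2.1 (i): <x - y, x - P_K(x)> >= 0 for x in H, y in K
assert np.dot(x - y, x - Px) >= -1e-12
# Lemma 2.1 (ii): the projection is nonexpansive
assert np.linalg.norm(Px - Pz) <= np.linalg.norm(x - z) + 1e-12
```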
3 New projected subgradient method
The projected subgradient method (see [26]) originated from the steepest descent method for optimization problems, i.e.,

min_{x∈K} f(x),  (5)

where K is a closed and convex subset of H, and f : K → R is a convex and continuous function. This method generates a sequence {x^j} by taking from x^j a step in the direction opposite to a subgradient of f at x^j and then orthogonally projecting the resulting vector onto K. More precisely, the iterative scheme is

x^{j+1} = P_K(x^j − α_j p^j),  (6)

where α_j denotes the stepsize, and p^j ∈ ∂f(x^j). When K = H and f is differentiable, it is just the steepest descent method. When the operator T is substituted for the subgradient in (6), the projected subgradient method is extended to solve the variational inequality problem (3). The corresponding iterative scheme becomes

x^{j+1} = P_K(x^j − α_j T(x^j)).  (7)

It is well-known that if T is Lipschitz continuous and strongly monotone, then the sequence generated by (7) converges to a solution to the problem (3). Recently, Xia et al.[5] generalized the projected subgradient method to solve the problem (1). The basic iterative scheme becomes

x^{j+1} = P_K(x^j − α_j(v^j + p^j)),  (8)

where v^j ∈ T(x^j) and p^j ∈ ∂f(x^j). In [5], the strong monotonicity of T was relaxed to paramonotonicity. However, the Lipschitz continuity of T was still required.
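A minimal sketch of the classical projected subgradient iteration (6) for the problem (5) (the schemes (7) and (8) only change the direction) is given below, assuming user-supplied oracles proj_K and subgrad_f; these names and the small test problem are illustrative, not taken from the paper.

```python
import numpy as np

def projected_subgradient(x0, proj_K, subgrad_f, alpha, max_iter=500):
    """Iteration (6): x^{j+1} = P_K(x^j - alpha_j * p^j), p^j in df(x^j)."""
    x = np.asarray(x0, dtype=float)
    for j in range(max_iter):
        p = subgrad_f(x)
        x = proj_K(x - alpha(j) * p)
    return x

# Example: minimize f(x) = |x - 4| over K = [0, 2]; the minimizer is x = 2.
x_min = projected_subgradient(
    x0=0.0,
    proj_K=lambda x: np.clip(x, 0.0, 2.0),
    subgrad_f=lambda x: np.sign(x - 4.0),
    alpha=lambda j: 1.0 / (j + 1),        # divergent-series stepsizes
)
print(x_min)  # close to 2.0
```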
Our goal now is to iteratively construct a sequence that converges strongly to a solution to the problem (1) without the Lipschitz continuity of T in Hilbert spaces. Given an iterate x^j ∈ K, we employ the idea in [9] to find a direction w^j, along which the next iterate x^{j+1} lies. It includes the direction −(v^j + p^j) as a special case. Therefore, the new method is more widely applicable than (8). The method can be described in detail as follows:

Algorithm 3.1
Step (i) Choose a sequence {ρ_j} such that

0 < ρ_j < 1,  Σ_{j=0}^∞ ρ_j = +∞,  Σ_{j=0}^∞ ρ_j² < +∞.  (9)

Let x^0 ∈ K, and set j := 0.
Step (ii) For any given p^j ∈ ∂f(x^j), take v^j ∈ T(x^j). If v^j = −p^j, then terminate. If v^j ≠ −p^j, then find w^j such that

⟨v^j, x − x^j⟩ + ⟨w^j, x − x^j⟩ + f(x) − f(x^j) ≥ 0,  ∀x ∈ K.  (10)

If w^j = 0, then stop. Otherwise, go to step (iii).
Step (iii) Set

x^{j+1} = P_K(x^j + ρ_j w^j).  (11)
Remark 3.1 The main subproblem in this algorithm is to find w^j ≠ 0 satisfying (10). It is easy to see that (10) holds if w^j = −(v^j + p^j). In fact, since p^j ∈ ∂f(x^j), i.e.,

f(x) − f(x^j) ≥ ⟨p^j, x − x^j⟩,  ∀x ∈ H,

we have

⟨v^j, x − x^j⟩ + ⟨w^j, x − x^j⟩ + f(x) − f(x^j) ≥ ⟨v^j, x − x^j⟩ + ⟨w^j, x − x^j⟩ + ⟨p^j, x − x^j⟩ = ⟨v^j + w^j + p^j, x − x^j⟩.

Taking w^j = −(v^j + p^j) yields

⟨v^j, x − x^j⟩ + ⟨w^j, x − x^j⟩ + f(x) − f(x^j) ≥ 0,  ∀x ∈ K.

This shows that Algorithm 3.1 is well-defined.
Remark 3.2 If H = R^n, f ≡ constant, and p^j = 0 ∈ ∂f(x^j), then Algorithm 3.1 reduces to Algorithm 2.1 in [9]. Thus, Algorithm 3.1 is a generalization of Algorithm 2.1 in [9].
Now, we analyze the convergence of the sequence generated by Algorithm 3.1 in Hilbert spaces.
Theorem 3.1 Suppose that the sequence {x^j} generated by Algorithm 3.1 is finite. Then, the last term is a solution to the problem (1).
Proof If the sequence is finite, then it must stop at step (ii) for some x^j. There are two possible cases.
Case 1 If v^j = −p^j, then, by p^j ∈ ∂f(x^j), we have

⟨v^j, x − x^j⟩ + f(x) − f(x^j) ≥ ⟨v^j, x − x^j⟩ + ⟨p^j, x − x^j⟩ = 0.

That is, x^j solves the problem (1).
Case 2 If w^j = 0, then it follows from (10) that x^j is a solution to the problem (1). This completes the proof.
From now on, we assume that the sequence {x^j} generated by Algorithm 3.1 is infinite.
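The following is a minimal Python sketch of Algorithm 3.1 with the default direction w^j = −(v^j + p^j) from Remark 3.1. The oracles proj_K, subgrad_f, and select_T are assumed to be supplied by the user; their names and the termination tolerance are illustrative, not from the paper.

```python
import numpy as np

def algorithm_3_1(x0, proj_K, subgrad_f, select_T, rho, max_iter=1000, tol=1e-10):
    """Sketch of Algorithm 3.1 with w^j = -(v^j + p^j) (see Remark 3.1).

    rho(j) must satisfy 0 < rho_j < 1, sum rho_j = +inf, sum rho_j^2 < +inf,
    e.g. rho(j) = 1 / (j + 2).
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for j in range(max_iter):
        p = subgrad_f(x)                    # p^j in df(x^j)
        v = select_T(x)                     # v^j in T(x^j)
        w = -(v + p)                        # a direction satisfying (10)
        if np.linalg.norm(w) <= tol:        # v^j = -p^j or w^j = 0: terminate
            break
        x = proj_K(x + rho(j) * w)          # iteration (11)
    return x
```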
Theorem 3.2 Let f : H → R ∪ {+∞} be a proper, convex, and l.s.c. function and T : K → 2^H be an f-strongly pseudomonotone mapping with modulus β > 0. Suppose that the solution set S of the problem (1) is nonempty. Then, the sequence {x^j} constructed by Algorithm 3.1 satisfies

‖x^{j+1} − x∗‖² ≤ (1 − 2βρ_j)‖x^j − x∗‖² + ρ_j²‖w^j‖²,  (12)

where x∗ is a solution to the problem (1). Moreover, if 0 < ρ_j < 1/(2β) for every j, and the sequence {w^j} is bounded, then x^j → x∗ as j → ∞, and x∗ is the unique solution to the problem (1).
Proof Let x∗ be any solution to the problem (1). From x^{j+1} = P_K(x^j + ρ_j w^j), x∗ = P_K(x∗), and Lemma 2.1, it follows that

‖x^{j+1} − x∗‖² = ‖P_K(x^j + ρ_j w^j) − P_K(x∗)‖² ≤ ‖x^j + ρ_j w^j − x∗‖² = ‖x^j − x∗‖² + 2ρ_j⟨w^j, x^j − x∗⟩ + ρ_j²‖w^j‖².  (13)

Applying the inequality (10) with x = x∗, we obtain

⟨v^j, x∗ − x^j⟩ + ⟨w^j, x∗ − x^j⟩ + f(x∗) − f(x^j) ≥ 0,

which implies

⟨w^j, x^j − x∗⟩ ≤ ⟨v^j, x∗ − x^j⟩ + f(x∗) − f(x^j).  (14)

Since x∗ ∈ K is a solution to the problem (1), there exists w∗ ∈ T(x∗) such that

⟨w∗, x^j − x∗⟩ + f(x^j) − f(x∗) ≥ 0.  (15)

By the f-strong pseudomonotonicity of T, we have

⟨v^j, x^j − x∗⟩ + f(x^j) − f(x∗) ≥ β‖x^j − x∗‖².  (16)

Substituting (16) into (14), we have

⟨w^j, x^j − x∗⟩ ≤ −β‖x^j − x∗‖².  (17)

Combining (13) and (17), we can deduce

‖x^{j+1} − x∗‖² ≤ ‖x^j − x∗‖² − 2ρ_jβ‖x^j − x∗‖² + ρ_j²‖w^j‖² = (1 − 2ρ_jβ)‖x^j − x∗‖² + ρ_j²‖w^j‖².

Let λ_j := 2ρ_jβ, α_j := ‖x^j − x∗‖², and ε_j := ρ_j²‖w^j‖² in Lemma 2.2. Since 0 < ρ_j < 1/(2β) for all j, and Σ_{j=0}^∞ ρ_j = +∞, we have that 0 < λ_j < 1 for all j, and Σ_{j=0}^∞ λ_j = +∞. Furthermore, since the sequence {‖w^j‖} is bounded, and Σ_{j=0}^∞ ρ_j² < +∞, we have Σ_{j=0}^∞ ε_j < +∞. Consequently, Lemma 2.2 implies that lim_{j→∞} ‖x^j − x∗‖ = 0. Since x∗ is taken as an arbitrary solution to the problem (1), it follows that x∗ is the unique solution to this problem. This completes the proof.
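The convergence mechanism in Theorem 3.2 is Lemma 2.2 applied to α_j = ‖x^j − x∗‖² with λ_j = 2βρ_j and ε_j = ρ_j²‖w^j‖². A quick numeric sanity check of this recursion, with the illustrative values β = 1, ‖w^j‖ ≤ 1, and ρ_j = 1/(2(j + 2)) (not taken from the paper), is sketched below.

```python
# Illustrate Lemma 2.2: alpha_{j+1} <= (1 - lambda_j) alpha_j + eps_j -> 0
# when sum(lambda_j) diverges and sum(eps_j) converges.
beta = 1.0
alpha = 10.0                  # plays the role of ||x^0 - x*||^2
for j in range(200000):
    rho = 0.5 / (j + 2)       # 0 < rho_j < 1/(2*beta), sum rho_j = +inf
    lam = 2.0 * beta * rho    # lambda_j = 2*beta*rho_j in (0, 1)
    eps = rho ** 2            # eps_j = rho_j^2 * ||w^j||^2 with ||w^j|| <= 1
    alpha = (1.0 - lam) * alpha + eps
print(alpha)                  # tends to 0 as the number of iterations grows
```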
In Theorem 3.2, if f = 0, then we obtain the following result.
Corollary 3.1 Let T : K → 2^H be a strongly pseudomonotone mapping with modulus β > 0. Suppose that the solution set S of the problem (2) is nonempty. Then, the sequence {x^j} constructed by Algorithm 3.1 with f = 0 and p^j = 0 for all j satisfies

‖x^{j+1} − x∗‖² ≤ (1 − 2βρ_j)‖x^j − x∗‖² + ρ_j²‖w^j‖²,

where x∗ is a solution to the problem (2). Moreover, if 0 < ρ_j < 1/(2β) for every j, and the sequence {w^j} is bounded, then x^j → x∗ as j → ∞, and x∗ is the unique solution to the problem (2).
Remark 3.3 Corollary 3.1 generalizes and improves Theorem 2.1 in [9] in the following aspects:
(i) The strong monotonicity assumption in [9] is relaxed to strong pseudomonotonicity.
(ii) Corollary 3.1 shows that the sequence generated by Algorithm 3.1 converges strongly to the unique solution to the problem (2) in Hilbert spaces. Hence, Corollary 3.1 generalizes Theorem 2.1 in [9] from finite dimensional spaces to infinite dimensional spaces.
In order to ensure the convergence of Algorithm 3.1, we have assumed that the sequence {w^j} is bounded in Theorem 3.2. Using an additional parameter τ_j and some assumptions on f and T, we can guarantee that the sequence {w^j} is automatically bounded. From now on, we adopt the following assumptions (H1)–(H3):
(H1) The solution set of the problem (1) is nonempty.
(H2) f : H → R ∪ {+∞} is a proper, convex, and lower semicontinuous function such that K ⊂ int(dom f) and ∂f is bounded on K.
(H3) T : K → 2^H is an f-strongly pseudomonotone mapping with modulus β > 0 on K, and T is bounded on K.
Remark 3.4 (i) The assumptions (H1)–(H2) are the same as those used in [5, 8]. It is well-known that if H is finite dimensional, then ∂f is always bounded on K. However, this is not true in a general Hilbert space. We know that a sufficient condition for ∂f to be bounded on K is that |f| is bounded on K (see Remark 3.1 (2) in [5]).
(ii) Compared with (A3) in [5], the assumption (H3) does not need the Lipschitz continuity of T.
The following example shows that assumption (H3) is reasonable.
Example 3.1 Let H = R = (−∞, +∞) and K = [3, 5]. Let

f(x) = x², x ∈ K,  T(x) = [−1, 1] if x ∈ [3, 4],  T(x) = [2, 3] if x ∈ (4, 5].
It is easy to see that T is bounded and nonmonotone on K. Using the same arguments as in Example 2.1, we can show that T is f-strongly pseudomonotone on K. However, T is not Lipschitz continuous on K. In fact, for any x ∈ [3, 4] and y ∈ (4, 5], we have

H(T(x), T(y)) = max{ sup_{r∈T(x)} inf_{s∈T(y)} ‖r − s‖, sup_{s∈T(y)} inf_{r∈T(x)} ‖r − s‖ } = 3.

Therefore, for any L > 0, there exist x0 ∈ [3, 4] and y0 ∈ (4, 5] with ‖x0 − y0‖ small enough that H(T(x0), T(y0)) = 3 > L‖x0 − y0‖. This implies that T is not Lipschitz continuous on K. Hence, the assumption (A3) in [5] does not hold.
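The Hausdorff distance in Example 3.1 can be verified numerically; the helper below discretizes the two intervals and is only an illustrative check, not part of the paper.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets A and B on the real line."""
    A, B = np.asarray(A), np.asarray(B)
    d = np.abs(A[:, None] - B[None, :])              # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# T(x) = [-1, 1] for x in [3, 4] and T(y) = [2, 3] for y in (4, 5] (Example 3.1)
Tx = np.linspace(-1.0, 1.0, 2001)
Ty = np.linspace(2.0, 3.0, 1001)
print(hausdorff(Tx, Ty))   # approximately 3, independent of |x - y|
```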
Now, we describe the modification of Algorithm 3.1, in which the parameters τ_j (j = 0, 1, 2, · · ·) are introduced, as follows:

Algorithm 3.1∗
Step (i) Let x^0 ∈ K, and set j := 0.
Step (ii) For any given p^j ∈ ∂f(x^j), take v^j ∈ T(x^j). If v^j = −p^j, then terminate. If v^j ≠ −p^j, then choose ρ_j ∈ (0, 1) and

0 < τ_j < min{1/(2βρ_j), 1/‖v^j + p^j‖}  (18)

such that the sequences {ρ_j} and {τ_j} satisfy

Σ_{j=0}^∞ ρ_j τ_j = +∞  (19)

and

Σ_{j=0}^∞ ρ_j² < +∞.  (20)

Step (iii) Find w^j with

‖w^j‖ ≤ τ_j‖v^j + p^j‖  (21)

such that

τ_j⟨v^j, x − x^j⟩ + ⟨w^j, x − x^j⟩ + τ_j(f(x) − f(x^j)) ≥ 0,  ∀x ∈ K.  (22)

If w^j = 0, then stop. Otherwise, go to step (iv).
Step (iv) Set

x^{j+1} = P_K(x^j + ρ_j w^j).  (23)
Remark 3.5 If H = R^n, f ≡ constant, and p^j = 0 ∈ ∂f(x^j), then Algorithm 3.1∗ reduces to the modification of Algorithm 2.1 in [9], which can be found in Remark 2.2 in [9].
Remark 3.6 We need to answer the following question: whether ρ_j and τ_j in step (ii) and w^j in step (iii) are well-defined. First, it is easy to check that (22) holds if w^j = −τ_j(v^j + p^j) (see Remark 3.1 for similar arguments). Second, Σ_{j=0}^∞ ρ_j² < +∞ if ρ_j = 1/j. Then, we will show that τ_j is well-defined. We need the following lemma.
Lemma 3.1 Suppose that the assumptions (H1)–(H3) hold. Then, the sequence {x^j} generated by Algorithm 3.1∗ is bounded without the condition (19).
Proof Let x∗ ∈ S, z^j = x^j + ρ_j w^j, and μ_j = ⟨w^j, x∗ − x^j⟩. It follows from (23) that x^j ∈ K and P_K(x^j) = x^j. Since ‖w^j‖ ≤ τ_j‖v^j + p^j‖ < 1 (by (18)), using (ii) of Lemma 2.1, we have

‖x^{j+1} − x^j‖ = ‖P_K(z^j) − P_K(x^j)‖ ≤ ‖z^j − x^j‖ = ρ_j‖w^j‖ ≤ ρ_j.  (24)

Then, it follows that

ρ_j² + ‖x^j − x∗‖² − ‖x^{j+1} − x∗‖²
≥ ‖x^{j+1} − x^j‖² + ‖x^j − x∗‖² − ‖x^{j+1} − x∗‖²  (by (24))
= 2⟨x^j − x∗, x^j − x^{j+1}⟩
= 2⟨x^j − x∗, x^j − z^j⟩ + 2⟨x^j − x∗, z^j − x^{j+1}⟩
= 2⟨x^j − x∗, −ρ_j w^j⟩ + 2⟨x^j − z^j, z^j − x^{j+1}⟩ + 2⟨z^j − x∗, z^j − x^{j+1}⟩
≥ 2ρ_j⟨x∗ − x^j, w^j⟩ + 2⟨x^j − z^j, z^j − x^{j+1}⟩  (by Lemma 2.1 (i))
= 2ρ_j⟨x∗ − x^j, w^j⟩ + 2⟨x^j − z^j, z^j − x^j⟩ + 2⟨x^j − z^j, x^j − x^{j+1}⟩
≥ 2ρ_j⟨x∗ − x^j, w^j⟩ − 2‖x^j − z^j‖² − 2‖x^j − z^j‖ ‖x^j − x^{j+1}‖
≥ 2ρ_j⟨x∗ − x^j, w^j⟩ − 2ρ_j²‖w^j‖² − 2ρ_j²‖w^j‖  (by (24))
≥ 2ρ_j⟨x∗ − x^j, w^j⟩ − 4ρ_j² = 2ρ_jμ_j − 4ρ_j².  (by ‖w^j‖ < 1)  (25)
Since x∗ ∈ S, there exists v∗ ∈ T(x∗) such that

⟨v∗, x − x∗⟩ + f(x) − f(x∗) ≥ 0,  ∀x ∈ K.  (26)

Taking x = x^j in (26), we have ⟨v∗, x^j − x∗⟩ + f(x^j) − f(x∗) ≥ 0. Then, it follows from the f-strong pseudomonotonicity of T that

⟨v^j, x^j − x∗⟩ + f(x^j) − f(x∗) ≥ β‖x^j − x∗‖² ≥ 0,  ∀v^j ∈ T(x^j).  (27)

Combining (22) and (27), we have

μ_j = ⟨w^j, x∗ − x^j⟩ ≥ τ_j(⟨v^j, x^j − x∗⟩ + f(x^j) − f(x∗)) ≥ 0.

Thus, (25) implies

0 ≤ 2ρ_jμ_j ≤ ‖x^j − x∗‖² − ‖x^{j+1} − x∗‖² + 5ρ_j²  for all j,

i.e.,

‖x^{j+1} − x∗‖² ≤ ‖x^j − x∗‖² + 5ρ_j²  for all j.

By induction, we conclude

‖x^{j+1} − x∗‖² ≤ ‖x^0 − x∗‖² + 5Σ_{i=0}^j ρ_i² ≤ ‖x^0 − x∗‖² + 5Σ_{j=0}^∞ ρ_j²,
which implies that the sequence {x^j} is bounded. This completes the proof.
Since {x^j} is bounded, by the assumptions (H2) and (H3), we know that both {p^j} and {v^j} are bounded. Hence, there exists γ > 0 such that ‖v^j + p^j‖ ≤ γ for all j. If ρ_j = 1/j, and

τ_j = 1/(4βρ_j)  if 1/(2βρ_j) ≤ 1/‖v^j + p^j‖,
τ_j = 1/(2γ)     if 1/(2βρ_j) > 1/‖v^j + p^j‖,  (28)

then (18)–(20) are satisfied. Therefore, Algorithm 3.1∗ is well-defined.
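A minimal Python sketch of Algorithm 3.1∗ with the default direction w^j = −τ_j(v^j + p^j) (Remark 3.6) and a parameter rule in the spirit of (28) is given below. The oracles proj_K, subgrad_f, and select_T, the constants beta and gamma, the stepsize choice ρ_j = 1/(j + 1), the termination tolerance, and the small test problem based on the data of Example 3.1 are all illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def algorithm_3_1_star(x0, proj_K, subgrad_f, select_T, beta, gamma,
                       max_iter=5000, tol=1e-12):
    """Sketch of Algorithm 3.1* with w^j = -tau_j (v^j + p^j).

    beta  : modulus of f-strong pseudomonotonicity of T
    gamma : an upper bound on ||v^j + p^j|| (see the discussion after Lemma 3.1)
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for j in range(1, max_iter + 1):
        p = subgrad_f(x)
        v = select_T(x)
        g = v + p
        if np.linalg.norm(g) <= tol:               # v^j = -p^j: terminate
            break
        rho = 1.0 / (j + 1)                        # rho_j in (0, 1), sum rho_j^2 < inf
        # parameter rule in the spirit of (28)
        if 1.0 / (2.0 * beta * rho) <= 1.0 / np.linalg.norm(g):
            tau = 1.0 / (4.0 * beta * rho)
        else:
            tau = 1.0 / (2.0 * gamma)
        w = -tau * g                               # satisfies (21) and (22)
        x = proj_K(x + rho * w)                    # iteration (23)
    return x

# Illustrative use on the data of Example 3.1 (K = [3, 5], f(x) = x^2,
# T(x) = [-1, 1] on [3, 4] and [2, 3] on (4, 5]); the solution is x* = 3.
x_star = algorithm_3_1_star(
    x0=5.0,
    proj_K=lambda x: np.clip(x, 3.0, 5.0),
    subgrad_f=lambda x: 2.0 * x,                        # f(x) = x^2
    select_T=lambda x: np.where(x <= 4.0, 0.0, 2.5),    # one selection of T(x)
    beta=1.0, gamma=12.5,                               # assumed bounds
)
print(x_star)  # should be close to 3.0
```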
Remark 3.7 We now make a comparison of Algorithm 3.1∗ and Algorithm A (i.e., Algorithm 3.1 in [5]). Indeed, if η_j = 1/τ_j, then (18)–(20) reduce to the following conditions:

η_j > max{2βρ_j, ‖v^j + p^j‖},  Σ_{j=0}^∞ ρ_j/η_j = +∞,  Σ_{j=0}^∞ ρ_j² < +∞.

Moreover, if w^j = −τ_j(v^j + p^j) = −(v^j + p^j)/η_j in (22), then (23) becomes (4). Thus, it is easy to see that Algorithm 3.1∗ is similar to Algorithm A.
Now, we analyze the convergence of the sequence generated by Algorithm 3.1∗ in Hilbert spaces.
Theorem 3.3 Suppose that the assumptions (H1)–(H3) hold. Then, the sequence {x^j} constructed by Algorithm 3.1∗ satisfies

‖x^{j+1} − x∗‖² ≤ (1 − 2βρ_jτ_j)‖x^j − x∗‖² + ρ_j²‖w^j‖²,  (29)

where x∗ is a solution to the problem (1). Furthermore, x^j → x∗ as j → ∞, and x∗ is the unique solution to the problem (1).
Proof It follows from (18) and (21) that ‖w^j‖ ≤ τ_j‖v^j + p^j‖ < 1. Let x∗ be any solution to the problem (1). Applying the inequality (22) with x = x∗, we get

τ_j⟨v^j, x∗ − x^j⟩ + ⟨w^j, x∗ − x^j⟩ + τ_j(f(x∗) − f(x^j)) ≥ 0.

This implies

⟨w^j, x^j − x∗⟩ ≤ τ_j⟨v^j, x∗ − x^j⟩ + τ_j f(x∗) − τ_j f(x^j),

which plays the same role as (14). Then, using the same arguments as in the proof of Theorem 3.2, we have

⟨w^j, x^j − x∗⟩ ≤ −βτ_j‖x^j − x∗‖².

Hence,

‖x^{j+1} − x∗‖² ≤ ‖x^j − x∗‖² − 2ρ_jβτ_j‖x^j − x∗‖² + ρ_j²‖w^j‖² ≤ (1 − 2ρ_jβτ_j)‖x^j − x∗‖² + ρ_j².

Using (18) again, we have 2ρ_jβτ_j < 1 for all j. Then, we can use Lemma 2.2 with λ_j := 2ρ_jβτ_j < 1, α_j := ‖x^j − x∗‖², and ε_j := ρ_j² to show that x^j → x∗ as j → ∞. Since x∗ is taken as an arbitrary solution to the problem (1), it follows that x∗ is the unique solution to this problem. This completes the proof.
Remark 3.8 Compared with Theorem 3.5 in [5], the following comments are in order:
(i) T is f-strongly pseudomonotone in Theorem 3.3, while T is a paramonotone mapping in Theorem 3.5 in [5]. The relationship between f-strong pseudomonotonicity and paramonotonicity can be found in Remark 2.1.
(ii) The condition that T is weakly closed on K and Lipschitz continuous on bounded subsets in the assumption (A3) in [5] is removed.
(iii) The strong convergence of the sequence is proved in Theorem 3.3, while only the weak convergence of the sequence could be obtained in Theorem 3.5 in [5].
References
[1] Han, W. and Reddy, B. On the finite element method for mixed variational inequalities arising in elastoplasticity. SIAM J. Numer. Anal., 32(6), 1778–1807 (1995)
[2] Cohen, G. Nash equilibria: gradient and decomposition algorithms. Large Scale Systems, 12(2), 173–184 (1987)
[3] Facchinei, F. and Pang, J. S. Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York (2003)
[4] Iusem, A. N. and Svaiter, B. F. A variant of Korpelevich's method for solving variational inequalities with a new search strategy. Optimization, 42(4), 309–321 (1997)
[5] Xia, F. Q., Huang, N. J., and Liu, Z. B. A projected subgradient method for solving generalized mixed variational inequalities. Oper. Res. Lett., 36(5), 637–642 (2008)
[6] He, Y. R. A new projection algorithm for mixed variational inequalities (in Chinese). Acta Math. Sci., 27A(2), 215–220 (2007)
[7] Konnov, I. A combined relaxation method for a class of nonlinear variational inequalities. Optimization, 51(1), 127–143 (2002)
[8] Maingé, P. E. Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. European J. Oper. Res., 205(3), 501–506 (2010)
[9] Anh, P. N., Muu, L. D., and Strodiot, J. J. Generalized projection method for non-Lipschitz multivalued monotone variational inequalities. Acta Math. Vietnam., 34(1), 67–79 (2009)
[10] Farouq, N. E. Pseudomonotone variational inequalities: convergence of the auxiliary problem method. J. Optim. Theory Appl., 111(2), 306–325 (2001)
[11] Hue, T. T., Strodiot, J. J., and Nguyen, V. H. Convergence of the approximate auxiliary problem method for solving generalized variational inequalities. J. Optim. Theory Appl., 121(1), 119–145 (2004)
[12] Konnov, I. Combined Relaxation Methods for Variational Inequalities, Springer-Verlag, Berlin (2001)
[13] Marcotte, P. A new algorithm for solving variational inequalities with application to the traffic assignment problem. Math. Program., 33(3), 339–351 (1985)
[14] Patriksson, M. Merit functions and descent algorithms for a class of variational inequality problems. Optimization, 41(1), 37–55 (1997)
[15] Patriksson, M. Nonlinear Programming and Variational Inequality Problems, Kluwer Academic Publishers, Dordrecht (1999)
[16] Salmon, G., Strodiot, J. J., and Nguyen, V. H. A bundle method for solving variational inequalities. SIAM J. Optim., 14(3), 869–893 (2003)
[17] Solodov, M. V. and Svaiter, B. F. A new projection method for variational inequalities. SIAM J. Control Optim., 37(3), 765–776 (1999)
[18] Taji, K. and Fukushima, M. A new merit function and a successive quadratic programming algorithm for variational inequality problems. SIAM J. Optim., 6(3), 704–713 (1996)
[19] Ding, X. P. Existence and algorithm of solutions for nonlinear mixed variational-like inequalities in Banach spaces. J. Comput. Appl. Math., 157(2), 419–434 (2003)
[20] Ding, X. P. Predictor-corrector iterative algorithms for solving generalized mixed variational-like inequalities. Appl. Math. Comput., 152(3), 855–865 (2004)
[21] Ding, X. P., Lin, Y. C., and Yao, J. C. Three-step relaxed hybrid steepest-descent methods for variational inequalities. Appl. Math. Mech. -Engl. Ed., 28(8), 1029–1036 (2007) DOI 10.1007/s10483-007-0805-x
[22] Zhang, S. S., Wang, X. R., Lee, H. W. J., and Chan, C. K. Viscosity method for hierarchical fixed point and variational inequalities with applications. Appl. Math. Mech. -Engl. Ed., 32(2), 241–250 (2011) DOI 10.1007/s10483-011-1410-8
[23] Chang, S. S., Lee, H. W. J., Chan, C. K., and Liu, J. A. A new method for solving a system of generalized nonlinear variational inequalities in Banach spaces. Appl. Math. Comput., 217(15-16), 6830–6837 (2011)
[24] Verma, R. U. General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett., 18(11), 1286–1292 (2005)
[25] Lan, H. Y., Cho, Y. J., and Verma, R. U. Nonlinear relaxed cocoercive variational inclusions involving (A, η)-accretive mappings in Banach spaces. Comput. Math. Appl., 51(9-10), 1529–1538 (2006)
[26] Alber, Y. I., Iusem, A. N., and Solodov, M. V. On the projected subgradient method for nonsmooth convex optimization in a Hilbert space. Math. Program., 81(1), 23–35 (1998)
[27] Yin, H. Y., Xu, C. X., and Zhang, Z. X. The F-complementarity problem and its equivalence with the least element problem (in Chinese). Acta Math. Sinica, 44(4), 679–686 (2001)
[28] Zhong, R. Y. and Huang, N. J. Stability analysis for Minty mixed variational inequality in reflexive Banach spaces. J. Optim. Theory Appl., 147(3), 454–472 (2010)