He and Wu Journal of Inequalities and Applications (2017) 2017:89 DOI 10.1186/s13660-017-1366-3
RESEARCH
Open Access
A modified subgradient extragradient method for solving monotone variational inequalities

Songnian He¹,²* and Tao Wu¹

*Correspondence: [email protected]
¹ College of Science, Civil Aviation University of China, Tianjin, 300300, China
² Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin, 300300, China
Abstract
In the setting of Hilbert space, a modified subgradient extragradient method is proposed for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function. Our iterative process is relaxed and self-adaptive; that is, each iteration involves only the calculation of two metric projections onto half-spaces containing the domain, and the step size can be selected in an adaptive way. A weak convergence theorem for our algorithm is proved. We also prove that our method has an O(1/n) convergence rate.

MSC: 47J20; 90C25; 90C30; 90C52

Keywords: variational inequalities; subgradient extragradient method; Lipschitz-continuous mapping; level set; half-spaces; convergence rate
1 Introduction
Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖. The variational inequality problem (VIP) is to find a point x∗ ∈ C such that

⟨f(x∗), x − x∗⟩ ≥ 0, ∀x ∈ C, (1.1)

where C is a nonempty closed convex subset of H and f : C → H is a given mapping. This problem and its solution set are denoted by VI(C, f) and SOL(C, f), respectively. We always assume that SOL(C, f) ≠ ∅.

The variational inequality problem VI(C, f) has received much attention due to its applications in a large variety of problems arising in structural analysis, economics, optimization, operations research and engineering sciences; see [1–12] and the references therein. It is well known that problem (1.1) is equivalent to the fixed point problem of finding a point x∗ ∈ C such that [1]

x∗ = P_C(x∗ − λf(x∗)), (1.2)

where λ is an arbitrary positive constant. Many algorithms for problem (1.1) are based on the fixed point formulation (1.2). Korpelevich [2] proposed an algorithm for solving problem (1.1) in Euclidean space Rⁿ, known as the extragradient method (EG):

x₀ ∈ C,
x̃_n = P_C(x_n − λf(x_n)), (1.3)
x_{n+1} = P_C(x_n − λf(x̃_n)), (1.4)

where λ is some positive number and P_C denotes the metric projection of H onto C. She proved that if f is κ-Lipschitz-continuous and λ is selected such that λ ∈ (0, 1/κ), then the two sequences {x_n} and {x̃_n} generated by the EG method converge to the same point z ∈ SOL(C, f). In 2006, Nadezhkina and Takahashi [3] generalized the EG method to general Hilbert spaces (including infinite-dimensional ones) and also established the weak convergence theorem.

In each iteration of the EG method, two projections onto C must be calculated in order to obtain the next iterate. But projections onto a general closed convex subset are not easily executed, and this may greatly affect the efficiency of the EG method. In order to overcome this weakness, Censor et al. developed the subgradient extragradient method in Euclidean space [4], in which the second projection in (1.4) onto C is replaced with a projection onto a specific constructible half-space, namely one of the subgradient half-spaces. Then, in [5, 6], Censor et al. studied the subgradient extragradient method for solving the VIP in Hilbert spaces. They also proved a weak convergence theorem under the assumption that f is a Lipschitz-continuous and monotone mapping.

The main purpose of this paper is to propose an improved subgradient extragradient method for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function [13], that is, C := {x ∈ H | c(x) ≤ 0}, where c : H → R is a convex function. In our algorithm, the two projections P_C in (1.3) and (1.4) are replaced with P_{C_k} and P_{T_k}, respectively, where C_k and T_k are half-spaces such that C_k ⊃ C and T_k ⊃ C. Here C_k is based on the subdifferential inequality, an idea first proposed by Fukushima [14], and T_k is the same half-space as in Censor's method [5]. It is also worth pointing out that the step size in our algorithm can be selected in an adaptive way; that is, there is no need to know or to estimate any information about the Lipschitz constant of f, so our algorithm is easily executed.

Our paper is organized as follows. In Section 2, we list some basic definitions, properties and lemmas. In Section 3, the improved subgradient extragradient algorithm and its corresponding geometrical intuition are presented. In Section 4, the weak convergence theorem for our method is proved. Finally, we prove that our algorithm has an O(1/n) convergence rate in the last section.
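To make the EG iteration (1.3)-(1.4) concrete, here is a minimal NumPy sketch (ours, not from the paper). The operator f(x) = Ax + b with A = B − Bᵀ + I and the feasible set C = Rⁿ₊ are assumed toy choices for which f is monotone and Lipschitz and P_C is a simple componentwise clipping.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B - B.T + np.eye(n)          # skew part + identity => monotone, Lipschitz
b = rng.standard_normal(n)

f = lambda x: A @ x + b
P_C = lambda x: np.maximum(x, 0.0)   # projection onto the nonnegative orthant

kappa = np.linalg.norm(A, 2)         # Lipschitz constant of f
lam = 0.5 / kappa                    # any lam in (0, 1/kappa) works in theory

x = P_C(rng.standard_normal(n))
for k in range(5000):
    x_tilde = P_C(x - lam * f(x))    # (1.3): extrapolation step
    x = P_C(x - lam * f(x_tilde))    # (1.4): correction step

# residual of the fixed point equation (1.2): zero iff x solves VI(C, f)
print(np.linalg.norm(x - P_C(x - lam * f(x))))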
2 Preliminaries
In this section, we list some basic concepts and lemmas, which are useful for constructing the algorithm and analyzing the convergence. Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, and let C be a closed convex subset of H. We write x_k ⇀ x and x_k → x to indicate that the sequence {x_k}_{k=0}^∞ converges weakly and strongly to x, respectively.

For each point x ∈ H, there exists a unique nearest point in C, denoted by P_C(x), such that

‖x − P_C(x)‖ ≤ ‖x − y‖, ∀y ∈ C. (2.1)

The mapping P_C : H → C is called the metric projection of H onto C. It is well known that P_C is characterized by the following inequalities:

⟨x − P_C(x), P_C(x) − y⟩ ≥ 0, (2.2)

‖x − y‖² ≥ ‖x − P_C(x)‖² + ‖y − P_C(x)‖², (2.3)

for all x ∈ H, y ∈ C [15, 16].
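A point worth making explicit (a standard fact, not stated in the paper): the metric projection onto a half-space {w ∈ H | ⟨a, w⟩ ≤ b}, a ≠ 0, has the closed form P(x) = x − (max{0, ⟨a, x⟩ − b}/‖a‖²)a, which is exactly why the method below, which only ever projects onto half-spaces, is cheap per iteration. A small sketch, with a numerical check of the characterization (2.2):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Metric projection of x onto {w : <a, w> <= b}, a != 0.

    If x already satisfies the constraint it is its own projection;
    otherwise we step back along a to the bounding hyperplane.
    """
    viol = np.dot(a, x) - b
    if viol <= 0.0:
        return x
    return x - (viol / np.dot(a, a)) * a

# quick check of (2.2) at a random point y of the half-space
rng = np.random.default_rng(1)
x = rng.standard_normal(4); a = rng.standard_normal(4); b = 0.3
p = project_halfspace(x, a, b)
y = rng.standard_normal(4)
y = y - max(0.0, a @ y - b) / (a @ a) * a     # force y into the half-space
assert (x - p) @ (p - y) >= -1e-12            # (2.2) holds
```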
A function c : H → R is said to be Gâteaux differentiable at x ∈ H if there exists an element, denoted by c′(x) ∈ H, such that

lim_{t→0} (c(x + tv) − c(x))/t = ⟨v, c′(x)⟩, ∀v ∈ H,

where c′(x) is called the Gâteaux differential of c at x. We say c is Gâteaux differentiable on H if c is Gâteaux differentiable at each x ∈ H.

A function c : H → R is said to be weakly lower semicontinuous (w-lsc) at x ∈ H if x_k ⇀ x implies c(x) ≤ lim inf_{k→∞} c(x_k). We say c is weakly lower semicontinuous on H if c is weakly lower semicontinuous at each x ∈ H. A function c : H → R is called convex if

c(tx + (1 − t)y) ≤ tc(x) + (1 − t)c(y)

for all t ∈ [0, 1] and x, y ∈ H. A convex function c : H → R is said to be subdifferentiable at a point x ∈ H if the set

∂c(x) := {d ∈ H | c(y) ≥ c(x) + ⟨d, y − x⟩, ∀y ∈ H} (2.4)

is not empty, where each element in ∂c(x) is called a subgradient of c at x, ∂c(x) is the subdifferential of c at x, and the inequality in (2.4) is said to be the subdifferential inequality of c at x. We say c is subdifferentiable on H if c is subdifferentiable at each x ∈ H. It is well known that if c is Gâteaux differentiable at x, then c is subdifferentiable at x and ∂c(x) = {c′(x)}, i.e., ∂c(x) is a singleton [17].

A mapping f : H → H is said to be Lipschitz-continuous if there exists a positive constant κ such that

‖f(x) − f(y)‖ ≤ κ‖x − y‖, ∀x, y ∈ H;

f is then also said to be a κ-Lipschitz-continuous mapping. A mapping f : H → H is said to be monotone on H if

⟨f(x) − f(y), x − y⟩ ≥ 0, ∀x, y ∈ H.
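The subdifferential inequality (2.4) is what later guarantees that the linearized level set C_k contains C. As a quick numerical illustration (our toy choice, not from the paper), take c(x) = ‖x‖² − 1, so that C is the closed unit ball and c′(x) = 2x:

```python
import numpy as np

c      = lambda x: x @ x - 1.0      # level-set description of the unit ball
c_grad = lambda x: 2.0 * x          # Gateaux derivative, 2-Lipschitz (M1 = 2)

rng = np.random.default_rng(2)
x_k = rng.standard_normal(3)

# C_k = {w : c(x_k) + <c'(x_k), w - x_k> <= 0} contains C: for w in C,
# the subdifferential inequality (2.4) gives
#   c(x_k) + <c'(x_k), w - x_k> <= c(w) <= 0.
for _ in range(1000):
    w = rng.standard_normal(3)
    w /= max(1.0, np.linalg.norm(w))            # w lies in the unit ball C
    assert c(x_k) + c_grad(x_k) @ (w - x_k) <= 1e-12
```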
Definition 2.1 (Normal cone) The normal cone of C at v ∈ C, denoted by N_C(v) [18], is

N_C(v) := {w ∈ H | ⟨w, y − v⟩ ≤ 0, ∀y ∈ C}. (2.5)

Definition 2.2 (Maximal monotone operator) Let T : H ⇉ H be a point-to-set operator defined on H. T is called a maximal monotone operator if T is monotone, i.e.,

⟨u − v, x − y⟩ ≥ 0, ∀u ∈ T(x) and ∀v ∈ T(y),

and the graph G(T) of T,

G(T) := {(x, u) ∈ H × H | u ∈ T(x)},

is not properly contained in the graph of any other monotone operator.

It is clear that a monotone mapping T is maximal if and only if, for any (x, u) ∈ H × H, ⟨u − v, x − y⟩ ≥ 0 for all (y, v) ∈ G(T) implies u ∈ T(x). Define

Tv := f(v) + N_C(v), if v ∈ C;    Tv := ∅, if v ∉ C.

Then T is maximal monotone and 0 ∈ Tv if and only if v ∈ SOL(C, f) [18].

The next property is known as the Opial condition, and all Hilbert spaces have this property [19].

Lemma 2.1 For any sequence {x_k}_{k=0}^∞ in H that converges weakly to x (x_k ⇀ x), the inequality

lim inf_{k→∞} ‖x_k − x‖ < lim inf_{k→∞} ‖x_k − y‖ (2.6)

holds for any y ∈ H with y ≠ x.

The following lemma was proved in [20].

Lemma 2.2 Let H be a real Hilbert space and let C be a nonempty, closed and convex subset of H. Let the sequence {x_k}_{k=0}^∞ ⊂ H be Fejér-monotone with respect to C, i.e., for any u ∈ C,

‖x_{k+1} − u‖ ≤ ‖x_k − u‖, ∀k ≥ 0. (2.7)

Then {P_C(x_k)}_{k=0}^∞ converges strongly to some z ∈ C.
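The equivalence 0 ∈ Tv ⇔ v ∈ SOL(C, f), combined with the fixed point formulation (1.2), yields a computable optimality check, often called the natural residual. The sketch below is our illustration (f(x) = x − p and the unit-ball C are assumed choices), using it to certify a solution:

```python
import numpy as np

def natural_residual(x, f, P_C, lam=1.0):
    """||x - P_C(x - lam * f(x))||: zero iff x lies in SOL(C, f), cf. (1.2)."""
    return np.linalg.norm(x - P_C(x - lam * f(x)))

# assumed toy data: f(x) = x - p and C the closed unit ball
p = np.array([2.0, 0.0])
f = lambda x: x - p
P_C = lambda x: x / max(1.0, np.linalg.norm(x))

x_star = P_C(p)                               # for this f, SOL(C, f) = {P_C(p)}
print(natural_residual(x_star, f, P_C))       # ~0: certificate of optimality
print(natural_residual(np.zeros(2), f, P_C))  # > 0: not a solution
```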
3 The modified subgradient extragradient method
In this section, we give our algorithm for solving VI(C, f) in the setting of Hilbert spaces, where C is a level set given as follows:

C := {x ∈ H | c(x) ≤ 0}, (3.1)

where c : H → R is a convex function. In the rest of this paper, we always assume that the following conditions are satisfied.

Condition 3.1 The solution set of VI(C, f), denoted by SOL(C, f), is nonempty.

Condition 3.2 The mapping f : H → H is monotone and Lipschitz-continuous on H (but there is no need to know or to estimate the Lipschitz constant of f).

Condition 3.3 The function c : H → R satisfies the following conditions:
(i) c(x) is a convex function;
(ii) c(x) is weakly lower semicontinuous on H;
(iii) c(x) is Gâteaux differentiable on H and c′(x) is an M₁-Lipschitz-continuous mapping on H;
(iv) there exists a positive constant M₂ such that ‖f(x)‖ ≤ M₂‖c′(x)‖ for any x ∈ ∂C, where ∂C denotes the boundary of C.

Next, we present the modified subgradient extragradient method.

Algorithm 3.1 (The modified subgradient extragradient method)
Step 0: select an initial guess x₀ ∈ H arbitrarily and set k = 0.
Step 1: given the current iterate x_k, construct the half-space

C_k := {w ∈ H | c(x_k) + ⟨c′(x_k), w − x_k⟩ ≤ 0}

and compute

y_k = P_{C_k}(x_k − β_k f(x_k)), (3.2)

where

β_k = σρ^{m_k}, σ > 0, ρ ∈ (0, 1), (3.3)

and m_k is the smallest nonnegative integer such that

β_k²‖f(x_k) − f(y_k)‖² + 2Mβ_k‖x_k − y_k‖² ≤ ν²‖x_k − y_k‖², (3.4)

where M = M₁M₂ and ν ∈ (0, 1).
Step 2: calculate the next iterate,

x_{k+1} = P_{T_k}(x_k − β_k f(y_k)), (3.5)
where

T_k := {w ∈ H | ⟨x_k − β_k f(x_k) − y_k, w − y_k⟩ ≤ 0}, (3.6)

which is the same half-space as in Censor's method [5]; then set k := k + 1 and return to Step 1.

[Figure 1: x_{k+1} is the subgradient extragradient projection of the point x_k − β_k f(y_k) onto the half-space T_k.]

Figure 1 illustrates the iterative steps of this algorithm.

At the end of this section, we state the alternating theorem [21, 22] for the solutions of VI(C, f), where C is given by (3.1). This result will be used to prove the convergence theorem of our algorithm in the next section.

Theorem 3.1 Assume that the solution set SOL(C, f) of VI(C, f) is nonempty and let x∗ ∈ C. Then x∗ ∈ SOL(C, f) if and only if either
1. f(x∗) = 0, or
2. x∗ ∈ ∂C and there exists a positive constant β such that f(x∗) = −βc′(x∗).
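Before turning to the convergence analysis, the following is a compact NumPy sketch of Algorithm 3.1 (a toy instantiation under Conditions 3.1-3.3, not production code). We assume c(x) = ‖x‖² − 1, so C is the unit ball, c′(x) = 2x and M₁ = 2; f(x) = Ax + b is an assumed monotone affine operator, and M₂ is bounded crudely via max_{‖x‖=1}‖f(x)‖/2 ≤ (‖A‖₂ + ‖b‖)/2. The parameters σ, ρ, ν are arbitrary admissible choices.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto {w : <a, w> <= b} (a = 0 means the whole space)."""
    nrm2 = np.dot(a, a)
    if nrm2 == 0.0:
        return x
    viol = np.dot(a, x) - b
    return x if viol <= 0.0 else x - (viol / nrm2) * a

def mod_subgrad_extragradient(f, c, c_grad, M, x0, sigma=1.0, rho=0.5,
                              nu=0.9, iters=2000):
    """Sketch of Algorithm 3.1 (assumed toy setting)."""
    x = x0.astype(float)
    for _ in range(iters):
        gx, cx, fx = c_grad(x), c(x), f(x)
        # search for the largest beta = sigma * rho**m satisfying (3.4)
        beta = sigma
        while True:
            # y_k = P_{C_k}(x_k - beta f(x_k)), C_k = linearization of c at x_k
            y = project_halfspace(x - beta * fx, gx, gx @ x - cx)
            d, e = x - y, fx - f(y)
            if beta**2 * (e @ e) + 2 * M * beta * (d @ d) <= nu**2 * (d @ d):
                break
            beta *= rho
        # x_{k+1} = P_{T_k}(x_k - beta f(y_k)), T_k from (3.6)
        a = x - beta * fx - y
        x = project_halfspace(x - beta * f(y), a, a @ y)
    return x

# assumed toy data: C = unit ball, f(x) = A x + b monotone and Lipschitz
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B - B.T + np.eye(3)
b = rng.standard_normal(3)
f = lambda x: A @ x + b
c = lambda x: x @ x - 1.0
c_grad = lambda x: 2.0 * x
M1 = 2.0                                               # Lipschitz constant of c'
M2 = (np.linalg.norm(A, 2) + np.linalg.norm(b)) / 2.0  # crude bound for (iv)
x = mod_subgrad_extragradient(f, c, c_grad, M1 * M2, rng.standard_normal(3))

P_C = lambda z: z / max(1.0, np.linalg.norm(z))
print(np.linalg.norm(x - P_C(x - f(x))))   # natural residual, ~0 at a solution
```

The inner while loop implements the search (3.3)-(3.4) for the smallest m_k; as shown in the remark after Lemma 4.1 below, it terminates after finitely many halvings with β_k ≥ σρ^m > 0.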
4 Convergence theorem of the algorithm
In this section, we prove the weak convergence theorem for Algorithm 3.1. First of all, we give the following lemma, which plays a crucial role in the proof of our main result.

Lemma 4.1 Let {x_k}_{k=0}^∞ and {y_k}_{k=0}^∞ be the two sequences generated by Algorithm 3.1. Let u ∈ SOL(C, f) and let β_k be selected as in (3.3) and (3.4). Then, under Conditions 3.1, 3.2 and 3.3, we have

‖x_{k+1} − u‖² ≤ ‖x_k − u‖² − (1 − ν²)‖x_k − y_k‖², ∀k ≥ 0. (4.1)

Proof Take u ∈ SOL(C, f) arbitrarily. For all k ≥ 0, using (2.3) and the monotonicity of f, we have

‖x_{k+1} − u‖²
≤ ‖x_k − β_k f(y_k) − u‖² − ‖x_k − β_k f(y_k) − x_{k+1}‖²
= ‖x_k − u‖² − ‖x_k − x_{k+1}‖² + 2β_k⟨f(y_k), u − x_{k+1}⟩
= ‖x_k − u‖² − ‖x_k − x_{k+1}‖² + 2β_k[⟨f(y_k) − f(u), u − y_k⟩ + ⟨f(u), u − y_k⟩ + ⟨f(y_k), y_k − x_{k+1}⟩]
≤ ‖x_k − u‖² − ‖x_k − x_{k+1}‖² + 2β_k[⟨f(u), u − y_k⟩ + ⟨f(y_k), y_k − x_{k+1}⟩]
= ‖x_k − u‖² − ‖x_k − y_k‖² − ‖y_k − x_{k+1}‖² − 2⟨x_k − y_k, y_k − x_{k+1}⟩ + 2β_k[⟨f(u), u − y_k⟩ + ⟨f(y_k), y_k − x_{k+1}⟩]
= ‖x_k − u‖² − ‖x_k − y_k‖² − ‖y_k − x_{k+1}‖² + 2⟨x_k − β_k f(y_k) − y_k, x_{k+1} − y_k⟩ + 2β_k⟨f(u), u − y_k⟩. (4.2)

By the definition of T_k (note x_{k+1} ∈ T_k), we get

⟨x_k − β_k f(y_k) − y_k, x_{k+1} − y_k⟩
= ⟨x_k − β_k f(x_k) − y_k, x_{k+1} − y_k⟩ + β_k⟨f(x_k) − f(y_k), x_{k+1} − y_k⟩
≤ β_k⟨f(x_k) − f(y_k), x_{k+1} − y_k⟩
≤ β_k‖f(x_k) − f(y_k)‖‖x_{k+1} − y_k‖. (4.3)

Substituting (4.3) into the last inequality of (4.2), and using 2ab − b² ≤ a², we obtain

‖x_{k+1} − u‖² ≤ ‖x_k − u‖² − ‖x_k − y_k‖² − ‖y_k − x_{k+1}‖² + 2β_k‖f(x_k) − f(y_k)‖‖x_{k+1} − y_k‖ + 2β_k⟨f(u), u − y_k⟩
≤ ‖x_k − u‖² − ‖x_k − y_k‖² + 2β_k⟨f(u), u − y_k⟩ + β_k²‖f(x_k) − f(y_k)‖². (4.4)
The subsequent proof is divided into the following two cases.

Case 1: f(u) ≠ 0. By Theorem 3.1, there exists β_u > 0 such that f(u) = −β_u c′(u). By the subdifferential inequality, we have

c(u) + ⟨c′(u), y_k − u⟩ ≤ c(y_k), ∀k ≥ 0. (4.5)

Noting that c(u) = 0 (since u ∈ ∂C), we have

⟨c′(u), y_k − u⟩ ≤ c(y_k), ∀k ≥ 0. (4.6)

Since −β_u < 0, it follows from (4.6) that

⟨−β_u c′(u), y_k − u⟩ ≥ −β_u c(y_k), ∀k ≥ 0,

which implies

⟨f(u), y_k − u⟩ ≥ −β_u c(y_k), ∀k ≥ 0,

or

⟨f(u), u − y_k⟩ ≤ β_u c(y_k), ∀k ≥ 0. (4.7)
By the definition of C_k, since y_k ∈ C_k, we have

c(x_k) + ⟨c′(x_k), y_k − x_k⟩ ≤ 0, ∀k ≥ 0;

using the subdifferential inequality again,

c(y_k) + ⟨c′(y_k), x_k − y_k⟩ ≤ c(x_k), ∀k ≥ 0.

Adding the above two inequalities, we obtain

c(y_k) ≤ ⟨c′(y_k) − c′(x_k), y_k − x_k⟩, ∀k ≥ 0. (4.8)

Combining (4.7) and (4.8), and using (iii) and (iv) of Condition 3.3, we have

⟨f(u), u − y_k⟩ ≤ β_u c(y_k) ≤ β_u⟨c′(y_k) − c′(x_k), y_k − x_k⟩ ≤ β_u M₁‖y_k − x_k‖² ≤ M‖y_k − x_k‖², (4.9)

where M = M₁M₂ is defined as before; here β_u ≤ M₂ because f(u) = −β_u c′(u) gives ‖f(u)‖ = β_u‖c′(u)‖, while (iv) gives ‖f(u)‖ ≤ M₂‖c′(u)‖. Substituting (4.9) into the last inequality of (4.4), we obtain

‖x_{k+1} − u‖² ≤ ‖x_k − u‖² − ‖x_k − y_k‖² + 2Mβ_k‖y_k − x_k‖² + β_k²‖f(x_k) − f(y_k)‖².

Finally, from the condition on β_k given by (3.4), we get

‖x_{k+1} − u‖² ≤ ‖x_k − u‖² − (1 − ν²)‖x_k − y_k‖².

Case 2: f(u) = 0. From (4.4), we can easily get

‖x_{k+1} − u‖² ≤ ‖x_k − u‖² − ‖x_k − y_k‖² + β_k²‖f(x_k) − f(y_k)‖². (4.10)

Obviously, (3.4) implies

β_k²‖f(x_k) − f(y_k)‖² ≤ ν²‖x_k − y_k‖². (4.11)

Thus, (4.1) follows from the combination of (4.10) and (4.11). □
We remark that the step sizes {β_k} are bounded away from zero. Indeed, substituting (3.3) into (3.4), we get

σ²ρ^{2m_k}‖f(x_k) − f(y_k)‖² + 2Mσρ^{m_k}‖x_k − y_k‖² ≤ ν²‖x_k − y_k‖². (4.12)

Let m be the smallest nonnegative integer such that

σ²ρ^{2m}κ² + 2Mσρ^{m} ≤ ν², (4.13)
where κ is the Lipschitz constant of f. Noting that ‖f(x_k) − f(y_k)‖ ≤ κ‖x_k − y_k‖, we assert from (4.12) and (4.13) that m_k ≤ m, which implies

β_k ≥ σρ^{m}, (4.14)

namely, inf_{k≥0} β_k > 0.

Theorem 4.1 Assume that Conditions 3.1-3.3 hold. Then the two sequences {x_k}_{k=0}^∞ and {y_k}_{k=0}^∞ generated by Algorithm 3.1 converge weakly to the same point z ∈ SOL(C, f); furthermore,

z = lim_{k→∞} P_{SOL(C,f)}(x_k). (4.15)

Proof By Lemma 4.1, ‖x_{k+1} − u‖ ≤ ‖x_k − u‖ for all k ≥ 0, so the limit a = lim_{k→∞} ‖x_k − u‖ exists and the sequence {x_k}_{k=0}^∞ is bounded. From (4.1), we have

‖x_k − y_k‖² ≤ (‖x_k − u‖² − ‖x_{k+1} − u‖²)/(1 − ν²).

Hence,

‖x_k − y_k‖ → 0 (k → ∞). (4.16)

In addition, ‖f(x_k) − f(y_k)‖ → 0 (k → ∞). Using the triangle inequality,

‖y_k‖ = ‖y_k − x_k + x_k‖ ≤ ‖y_k − x_k‖ + ‖x_k‖,

hence the sequence {y_k}_{k=0}^∞ is also bounded. Let ω(x_k) be the set of weak limit points of {x_k}_{k=0}^∞, i.e.,

ω(x_k) := {z | ∃{x_{k_j}}_{j=0}^∞ ⊂ {x_k}_{k=0}^∞ s.t. x_{k_j} ⇀ z}.

Since the sequence {x_k}_{k=0}^∞ is bounded, ω(x_k) ≠ ∅. Taking z ∈ ω(x_k) arbitrarily, there exists a subsequence {x_{k_j}}_{j=0}^∞ of {x_k}_{k=0}^∞ such that

x_{k_j} ⇀ z (j → ∞). (4.17)
Relation (4.16) together with (4.17) leads to

y_{k_j} ⇀ z (j → ∞). (4.18)

Since y_k ∈ C_k, the definition of C_k gives

c(x_k) − ⟨c′(x_k), x_k − y_k⟩ ≤ 0; (4.19)

then, using the Cauchy-Schwarz inequality,

c(x_k) ≤ ‖c′(x_k)‖‖x_k − y_k‖. (4.20)

According to (iii) in Condition 3.3, c′(x) is bounded on any bounded subset of H, so there exists M₃ > 0 such that ‖c′(x_k)‖ ≤ M₃ for all k ≥ 0, and then

c(x_k) ≤ M₃‖x_k − y_k‖ → 0 (k → ∞).

According to (ii) in Condition 3.3, we have

c(z) ≤ lim inf_{j→∞} c(x_{k_j}) ≤ 0.

Hence, z ∈ C. Now we turn to showing z ∈ SOL(C, f). Define

Tv := f(v) + N_C(v), if v ∈ C;    Tv := ∅, if v ∉ C,

where N_C(v) is defined by (2.5). Obviously, T is a maximal monotone operator. For arbitrary (v, w) ∈ G(T), we have w ∈ T(v) = f(v) + N_C(v), or equivalently, w − f(v) ∈ N_C(v). Setting y = z in (2.5), we get

⟨w − f(v), z − v⟩ ≤ 0. (4.21)

On the other hand, by the definition of y_k and (2.2) (note v ∈ C ⊆ C_k), we have

⟨x_k − β_k f(x_k) − y_k, y_k − v⟩ ≥ 0,

or

⟨(y_k − x_k)/β_k + f(x_k), v − y_k⟩ ≥ 0, (4.22)
for all k ≥ 0. Using (4.21), (4.22) and the monotonicity of f, we obtain

⟨w, v − z⟩ ≥ ⟨f(v), v − z⟩
≥ ⟨f(v), v − z⟩ − ⟨(y_{k_j} − x_{k_j})/β_{k_j} + f(x_{k_j}), v − y_{k_j}⟩
= ⟨f(v), v − y_{k_j}⟩ + ⟨f(v), y_{k_j} − z⟩ − ⟨(y_{k_j} − x_{k_j})/β_{k_j} + f(x_{k_j}), v − y_{k_j}⟩
= ⟨f(v) − f(y_{k_j}), v − y_{k_j}⟩ + ⟨f(y_{k_j}) − f(x_{k_j}), v − y_{k_j}⟩ − ⟨(y_{k_j} − x_{k_j})/β_{k_j}, v − y_{k_j}⟩ + ⟨f(v), y_{k_j} − z⟩
≥ ⟨f(v), y_{k_j} − z⟩ + ⟨f(y_{k_j}) − f(x_{k_j}), v − y_{k_j}⟩ − ⟨(y_{k_j} − x_{k_j})/β_{k_j}, v − y_{k_j}⟩. (4.23)

By virtue of (4.16), (4.18), Condition 3.2 and inf_{k≥0} β_k > 0, taking j → ∞ in (4.23), we have

⟨w, v − z⟩ ≥ 0. (4.24)

Since T is a maximal monotone operator, (4.24) means that 0 ∈ T(z), and consequently z ∈ T⁻¹(0) = SOL(C, f).

Now we are in a position to verify that x_k ⇀ z (k → ∞). In fact, if there exists another subsequence {x_{k_i}}_{i=0}^∞ of {x_k}_{k=0}^∞ such that x_{k_i} ⇀ z̄ ∈ SOL(C, f) but z̄ ≠ z, then, noting that {‖x_k − u‖}_{k=0}^∞ is decreasing (hence convergent) for every u ∈ SOL(C, f), we obtain by using Lemma 2.1

lim_{k→∞} ‖x_k − z‖ = lim_{j→∞} ‖x_{k_j} − z‖ < lim_{j→∞} ‖x_{k_j} − z̄‖
= lim_{k→∞} ‖x_k − z̄‖ = lim_{i→∞} ‖x_{k_i} − z̄‖
< lim_{i→∞} ‖x_{k_i} − z‖ = lim_{k→∞} ‖x_k − z‖. (4.25)

This is a contradiction, so z̄ = z. Consequently, we have x_k ⇀ z (k → ∞) and y_k ⇀ z (k → ∞).

Finally, we show that z = lim_{k→∞} P_{SOL(C,f)}(x_k). Put u_k = P_{SOL(C,f)}(x_k); using (2.2) again and z ∈ SOL(C, f),

⟨x_k − u_k, u_k − z⟩ ≥ 0. (4.26)
By Lemma 2.2, there exists u∗ ∈ SOL(C, f) such that u_k → u∗. Therefore, taking k → ∞ in (4.26), we have

⟨z − u∗, u∗ − z⟩ ≥ 0, (4.27)

that is, −‖z − u∗‖² ≥ 0; therefore z = u∗. The proof is complete. □
5 Convergence rate of the modified method
In this section, we establish the convergence rate of our modified subgradient extragradient method in the ergodic sense. The basis of the complexity proof is the characterization ([23, 24])

SOL(C, f) = ⋂_{u∈C} {z ∈ C | ⟨f(u), u − z⟩ ≥ 0}. (5.1)

In order to prove the convergence rate, we now give the key inequality of our method. Indeed, by an argument very similar to the proof of Lemma 4.1, it is not difficult to get the following result.

Lemma 5.1 Let {x_k}_{k=0}^∞ and {y_k}_{k=0}^∞ be the two sequences generated by Algorithm 3.1 and let β_k be selected as in (3.3) and (3.4). Assume that Conditions 3.1, 3.2 and 3.3 are satisfied. Then, for any u ∈ C, we have

‖x_{k+1} − u‖² ≤ ‖x_k − u‖² − (1 − ν²)‖x_k − y_k‖² + 2β_k⟨f(u), u − y_k⟩. (5.2)
Theorem 5.1 For any integer n > 0, there is a z_n ∈ H which satisfies z_n ⇀ z ∈ SOL(C, f) and

⟨f(u), z_n − u⟩ ≤ ‖x₀ − u‖²/(2ϒ_n), ∀u ∈ C, (5.3)

where

z_n = (Σ_{k=0}^{n} β_k y_k)/ϒ_n and ϒ_n = Σ_{k=0}^{n} β_k. (5.4)

Proof Using (5.2), we get

2β_k⟨f(u), y_k − u⟩ ≤ ‖x_k − u‖² − ‖x_{k+1} − u‖². (5.5)

Summing the inequality (5.5) over k = 0, 1, . . . , n, we get

2⟨f(u), Σ_{k=0}^{n} β_k y_k − (Σ_{k=0}^{n} β_k)u⟩ ≤ ‖x₀ − u‖², ∀u ∈ C.

From the notation of ϒ_n and z_n in (5.4), we derive

⟨f(u), z_n − u⟩ ≤ ‖x₀ − u‖²/(2ϒ_n), ∀u ∈ C.
On the other hand, since z_n is a convex combination of y₀, y₁, . . . , y_n, it is easy to see that z_n ⇀ z ∈ SOL(C, f), due to the fact that y_k ⇀ z ∈ SOL(C, f), proved in Theorem 4.1. The proof is complete. □

Let β = σρ^{m}. From (4.14), β_k ≥ β holds for all k ≥ 0, and this together with (5.4) leads to ϒ_n ≥ (n + 1)β; this means that the modified subgradient extragradient method has an O(1/n) convergence rate. In fact, for any bounded subset D ⊂ C and given accuracy ε > 0, our algorithm achieves

⟨f(u), z_n − u⟩ ≤ ε, ∀u ∈ D,

in at most

n = ⌈d²/(2βε)⌉

iterations, where z_n is defined in (5.4) and d = sup{‖x₀ − u‖ | u ∈ D}.
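Two practical remarks, sketched below with assumed constants (σ = 1, ρ = 1/2, κ = 2, M = 4, ν = 1/2; not values from the paper): the ergodic point z_n in (5.4) can be maintained as a running weighted average without storing the history, and the bounds (4.13), (4.14) and the iteration count above can be instantiated numerically.

```python
import math

# The running weighted average z_n = (sum_k beta_k y_k) / Upsilon_n can be
# updated in place after each iteration, without storing the history:
#   Upsilon += beta_k
#   z += (beta_k / Upsilon) * (y_k - z)

# worked instance of the complexity bound (assumed constants):
sigma, rho, kappa, M, nu = 1.0, 0.5, 2.0, 4.0, 0.5
m = 0
while sigma**2 * rho**(2 * m) * kappa**2 + 2 * M * sigma * rho**m > nu**2:
    m += 1                      # smallest m satisfying (4.13)
beta = sigma * rho**m           # lower bound on every beta_k, cf. (4.14)
d, eps = 1.0, 1e-3              # d = sup ||x_0 - u|| over the test set D
n = math.ceil(d**2 / (2 * beta * eps))
print(m, beta, n)               # -> 6 0.015625 32000
```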
6 Results and discussion
Since the modified subgradient extragradient method proposed in this paper is relaxed and self-adaptive, it is easily implemented. A weak convergence theorem for our algorithm is proved by means of the alternating theorem for the solutions of variational inequalities. Our results in this paper effectively improve the existing related results.

7 Conclusion
Although the extragradient methods and the subgradient extragradient methods have been widely studied, the existing algorithms all face the problem that the projection operator is hard to calculate. This problem is solved effectively by the modified subgradient extragradient method proposed in this paper, since the two projections onto the original domain are both replaced with projections onto half-spaces, which are very easily calculated. Besides, the step size can be selected in an adaptive way, which means that there is no need to know or to estimate the Lipschitz constant of the operator. Furthermore, we prove that our method has an O(1/n) convergence rate.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Acknowledgements
This work was supported by the Foundation of Tianjin Key Lab for Advanced Signal Processing (2016 ASP-TJ02).
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Received: 18 January 2017 Accepted: 17 April 2017
References
1. Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Society for Industrial and Applied Mathematics, Philadelphia (2000)
2. Korpelevich, GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747-756 (1976)
3. Nadezhkina, N, Takahashi, W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)
4. Censor, Y, Gibali, A, Reich, S: Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119-1132 (2012)
5. Censor, Y, Gibali, A, Reich, S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318-335 (2011)
6. Censor, Y, Gibali, A, Reich, S: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 26, 827-845 (2011)
7. Yao, YH, Postolache, M, Liou, YC, Yao, ZS: Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 10, 1519-1528 (2016)
8. Yao, YH, Liou, YC, Kang, SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 59, 3472-3480 (2010)
9. Yao, YH, Noor, MA, Liou, YC, Kang, SM: Iterative algorithms for general multivalued variational inequalities. Abstr. Appl. Anal. 2012, 768272 (2012)
10. Yao, YH, Noor, MA, Liou, YC: Strong convergence of a modified extragradient method to the minimum-norm solution of variational inequalities. Abstr. Appl. Anal. 2012, 817436 (2012)
11. Zegeye, H, Shahzad, N, Yao, YH: Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 64, 453-471 (2015)
12. Yao, YH, Shahzad, N: Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 6, 621-628 (2012)
13. He, S, Yang, C: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 942315 (2013)
14. Fukushima, M: A relaxed projection method for variational inequalities. Math. Program. 35, 58-70 (1986)
15. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
16. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry and Non-expansive Mappings. Dekker, New York (1984)
17. Hiriart-Urruty, JB, Lemaréchal, C: Fundamentals of Convex Analysis. Springer, Berlin (2001)
18. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)
19. Opial, Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591-597 (1967)
20. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417-428 (2003)
21. He, S, Xu, HK: Uniqueness of supporting hyperplanes and an alternative to solutions of variational inequalities. J. Glob. Optim. 57, 1375-1384 (2013)
22. Nguyen, HQ, Xu, HK: The supporting hyperplane and an alternative to solutions of variational inequalities. J. Nonlinear Convex Anal. 16, 2323-2331 (2015)
23. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vols. I and II. Springer Series in Operations Research. Springer, New York (2003)
24. Cai, XJ, Gu, GY, He, BS: On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators. Comput. Optim. Appl. 57, 339-363 (2014)