Hindawi Publishing Corporation Abstract and Applied Analysis Volume 2013, Article ID 604369, 8 pages http://dx.doi.org/10.1155/2013/604369
Research Article

Approximation for the Hierarchical Constrained Variational Inequalities over the Fixed Points of Nonexpansive Semigroups

Li-Jun Zhu

School of Information and Calculation, Beifang University of Nationalities, Yinchuan 750021, China

Correspondence should be addressed to Li-Jun Zhu; [email protected]

Received 22 December 2012; Accepted 6 February 2013

Academic Editor: Jen-Chih Yao

Copyright © 2013 Li-Jun Zhu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The purpose of the present paper is to study the hierarchical constrained variational inequality of finding a point $x^*$ such that
\[
x^* \in \Omega, \qquad \langle (A - \gamma f)x^* - (I - B)Sx^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \Omega,
\]
where $\Omega$ is the set of solutions of the variational inequality
\[
x^* \in \digamma, \qquad \langle (A - S)x^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \digamma,
\]
where $A$ and $B$ are two strongly positive bounded linear operators, $f$ is a $\rho$-contraction, $S$ is a nonexpansive mapping, and $\digamma$ is the set of common fixed points of a nonexpansive semigroup $\{T(s)\}_{s \ge 0}$. We present a double net which converges in norm to the element of $\digamma$ that solves the above hierarchical constrained variational inequality.
1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$, and let $C$ be a nonempty closed convex subset of $H$. Recall that a self-mapping $f$ of $C$ is said to be contractive if there exists a constant $\rho \in [0, 1)$ such that $\|f(x) - f(y)\| \le \rho \|x - y\|$ for all $x, y \in C$. A mapping $T : C \to C$ is called nonexpansive if
\[
\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C. \tag{1}
\]
We denote by $\operatorname{Fix}(T)$ the set of fixed points of $T$; that is, $\operatorname{Fix}(T) = \{x \in C : Tx = x\}$. A bounded linear operator $B$ is called strongly positive on $H$ if there exists a constant $\tilde{\gamma} > 0$ such that
\[
\langle Bx, x \rangle \ge \tilde{\gamma}\|x\|^2, \quad \forall x \in H. \tag{2}
\]

It is well known that the variational inequality for an operator $\varphi : H \to H$ over a nonempty, closed, and convex set $C \subset H$ is the problem of finding a point $x^* \in C$ with the property
\[
\langle \varphi(x^*), y - x^* \rangle \ge 0, \quad \forall y \in C. \tag{3}
\]
The set of solutions of the variational inequality (3) is denoted by $\operatorname{VI}(C, \varphi)$. If the mapping $\varphi$ is a monotone operator, then we say that (3) is a monotone variational inequality. It is well known that if $\varphi$ is Lipschitzian and strongly monotone, then for small enough $\delta > 0$ the mapping $P_C(I - \delta\varphi)$ is a contraction on $C$, and so the sequence $\{x_n\}$ of Picard iterates, given by $x_n = P_C(I - \delta\varphi)x_{n-1}$ ($n \ge 1$), converges strongly to the unique solution of (3). This approach to (3), with $\varphi$ strongly monotone and Lipschitzian, originates from Yamada [1]. However, if $\varphi$ is only monotone (not strongly monotone), these iterative methods do not apply. Many practical problems, such as signal processing and network resource allocation, are formulated as variational inequalities over the solution set of some nonlinear problem (e.g., the fixed point set of a nonexpansive mapping), and algorithms to solve such problems have been proposed. Iterative algorithms have been presented for the convex optimization problem with a fixed point constraint, along with proofs that these algorithms converge strongly to the unique solution when the underlying operator is strongly monotone; the strong monotonicity condition guarantees the uniqueness of the solution. For some related works on variational inequalities, please see [2–23] and the references therein. In particular, variational inequality problems over the fixed points of nonexpansive mappings have been considered; the reader can consult [16, 24]. On the other hand, we note that nonlinear ergodic theorems for nonexpansive semigroups have been studied by many authors; see, for example, [25–32].
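As a side illustration of the projected Picard iteration $x_n = P_C(I - \delta\varphi)x_{n-1}$ recalled above, the following sketch solves a small monotone variational inequality numerically. The concrete choices ($\varphi(x) = Qx - b$ with $Q$ positive definite, $C = [0,1]^n$, and the step size $\delta = m/L^2$) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative VI: phi(x) = Q x - b with Q symmetric positive definite, so phi is
# Lipschitz (constant L = ||Q||) and strongly monotone (constant m = smallest
# eigenvalue of Q). C = [0, 1]^n, whose metric projection P_C is a componentwise clip.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)
b = rng.standard_normal(n)

phi = lambda x: Q @ x - b
P_C = lambda x: np.clip(x, 0.0, 1.0)

L = np.linalg.norm(Q, 2)              # Lipschitz constant of phi
m = np.linalg.eigvalsh(Q)[0]          # strong monotonicity constant of phi
delta = m / L**2                      # small enough that P_C(I - delta*phi) is a contraction

x = np.zeros(n)
for _ in range(1000):                 # Picard iterates x_k = P_C(I - delta*phi) x_{k-1}
    x_new = P_C(x - delta * phi(x))
    if np.linalg.norm(x_new - x) < 1e-12:
        x = x_new
        break
    x = x_new

# x approximates the unique point x* in C with <phi(x*), y - x*> >= 0 for all y in C.
print(x)
```

When $\varphi$ is merely monotone, this contraction argument is unavailable, which is one motivation for the hierarchical constructions studied in this paper.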
In this paper, we consider a general variational inequality problem whose constraint set is itself the solution set of a variational inequality over the fixed points of a nonexpansive semigroup. The purpose of the present paper is to study the hierarchical constrained variational inequality of finding a point $x^*$ such that
\[
x^* \in \Omega, \qquad \langle (A - \gamma f)x^* - (I - B)Sx^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \Omega, \tag{4}
\]
where $\Omega$ is the set of solutions of the following variational inequality:
\[
x^* \in \digamma, \qquad \langle (A - S)x^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \digamma, \tag{5}
\]
where $A$ and $B$ are two strongly positive bounded linear operators, $f$ is a $\rho$-contraction, $S$ is a nonexpansive mapping, and $\digamma$ is the set of common fixed points of a nonexpansive semigroup $\{T(s)\}_{s \ge 0}$. We present a double net which converges in norm to the element of $\digamma$ solving the above hierarchical constrained variational inequality.

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying
\[
\|x - P_C x\| = \inf_{y \in C} \|x - y\| =: d(x, C). \tag{6}
\]
It is well known that $P_C$ is a nonexpansive mapping and satisfies
\[
\langle x - y,\, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2, \quad \forall x, y \in H. \tag{7}
\]
Moreover, $P_C$ is characterized by the following property:
\[
\langle x - P_C x,\, y - P_C x \rangle \le 0, \quad \forall x \in H,\ y \in C. \tag{8}
\]

Recall that a family $\{T(s)\}_{s \ge 0}$ of mappings of $H$ into itself is called a nonexpansive semigroup if it satisfies the following conditions:

(S1) $T(0)x = x$ for all $x \in H$;
(S2) $T(s + t) = T(s)T(t)$ for all $s, t \ge 0$;
(S3) $\|T(s)x - T(s)y\| \le \|x - y\|$ for all $x, y \in H$ and $s \ge 0$;
(S4) for all $x \in H$, $s \mapsto T(s)x$ is continuous.

We denote by $\operatorname{Fix}(T(s))$ the set of fixed points of $T(s)$ and by $\digamma$ the set of all common fixed points of $\{T(s)\}_{s \ge 0}$; that is, $\digamma = \bigcap_{s \ge 0} \operatorname{Fix}(T(s))$. It is known that $\digamma$ is closed and convex.

We need the following lemmas for proving our main results.

Lemma 1 (see [33]). Let $C$ be a nonempty bounded closed convex subset of a Hilbert space $H$ and let $\{T(s)\}_{s \ge 0}$ be a nonexpansive semigroup on $C$. Then, for every $h \ge 0$,
\[
\lim_{t \to \infty} \sup_{x \in C} \left\| \frac{1}{t}\int_0^t T(s)x\, ds - T(h)\!\left( \frac{1}{t}\int_0^t T(s)x\, ds \right) \right\| = 0. \tag{9}
\]

Lemma 2 (see [34]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $S : C \to C$ be a nonexpansive mapping. Then the mapping $I - S$ is demiclosed; that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \to x^*$ weakly and $(I - S)x_n \to y$ strongly, then $(I - S)x^* = y$.

Lemma 3 (see [35]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Assume that a mapping $F : C \to H$ is monotone and weakly continuous along segments (i.e., $F(x + ty) \to F(x)$ weakly as $t \to 0$ whenever $x + ty \in C$ for $x, y \in C$). Then the variational inequality
\[
x^* \in C, \qquad \langle Fx^*, x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{10}
\]
is equivalent to the dual variational inequality
\[
x^* \in C, \qquad \langle Fx, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{11}
\]

3. Main Results

Now we consider the following hierarchical variational inequality whose constraint is a variational inequality over the fixed point set of a nonexpansive semigroup $\{T(s)\}_{s \ge 0}$.

Problem 1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to H$ be a $\rho$-contraction with coefficient $\rho \in [0, 1)$ and let $S : C \to C$ be a nonexpansive mapping. Let $\{T(s)\}_{s \ge 0}$ be a nonexpansive semigroup on $C$ and let $A, B : H \to H$ be two strongly positive bounded linear operators with coefficients $\tilde{\lambda}$ ($1 \le \tilde{\lambda} < 2$) and $\tilde{\gamma}$ ($0 < \tilde{\gamma} < 1$), respectively. Let $\gamma$ be a constant satisfying $0 < \gamma\rho < \tilde{\gamma}$. Our objective is to find $x^*$ such that
\[
x^* \in \Omega, \qquad \langle (A - \gamma f)x^* - (I - B)Sx^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \Omega, \tag{12}
\]
where $\Omega := \operatorname{VI}(\digamma, A - S)$ is the set of solutions of the following variational inequality:
\[
x^* \in \digamma, \qquad \langle (A - S)x^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \digamma. \tag{13}
\]

We observe that $(A - \gamma f) - (I - B)S$ is strongly monotone and Lipschitz continuous. In fact, we have
\[
\begin{aligned}
&\langle (A - \gamma f)x - (I - B)Sx - [(A - \gamma f)y - (I - B)Sy],\, x - y \rangle \\
&\quad = \langle Ax - Ay, x - y \rangle - \gamma\langle f(x) - f(y), x - y \rangle - \langle (I - B)(Sx - Sy), x - y \rangle \\
&\quad \ge \tilde{\lambda}\|x - y\|^2 - \gamma\rho\|x - y\|^2 - (1 - \tilde{\gamma})\|x - y\|^2 = (\tilde{\lambda} - 1 + \tilde{\gamma} - \gamma\rho)\|x - y\|^2, \\[4pt]
&\| (A - \gamma f)x - (I - B)Sx - [(A - \gamma f)y - (I - B)Sy] \| \\
&\quad \le \|Ax - Ay\| + \gamma\|f(x) - f(y)\| + \|I - B\|\,\|Sx - Sy\| \\
&\quad \le \|A\|\,\|x - y\| + \gamma\rho\|x - y\| + (1 - \tilde{\gamma})\|x - y\| = (1 + \|A\| + \gamma\rho - \tilde{\gamma})\|x - y\|.
\end{aligned} \tag{14}
\]
Hence, the existence and the uniqueness of the solution to Problem 1 are guaranteed. In order to solve the above hierarchical constrained variational inequality, we present the following double net.

Algorithm 4. Set $\kappa = 1/(\tilde{\lambda} + \tilde{\gamma} - \gamma\rho)$. Then $0 < \kappa \le 1$. For each $(s, t) \in (0, \kappa) \times (0, 1)$, we define a double net $\{x_{s,t}\}$ implicitly by
\[
x_{s,t} = P_C\!\left[ s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right], \quad (s, t) \in (0, \kappa) \times (0, 1), \tag{15}
\]
where $\{\lambda_s\}_{s > 0}$ is a net of positive real numbers (the convergence argument below invokes Lemma 1, which implicitly requires $\lambda_s \to \infty$ as $s \to 0^+$).

Note that this implicit algorithm is well defined. In fact, define the mapping
\[
x \mapsto W_{s,t}(x) := P_C\!\left[ s\bigl(t\gamma f(x) + (I - tB)Sx\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x\, d\nu \right], \quad (s, t) \in (0, \kappa) \times (0, 1). \tag{16}
\]
This self-mapping is a contraction. As a matter of fact, we have
\[
\begin{aligned}
\|W_{s,t}(x) - W_{s,t}(y)\| &= \left\| P_C\!\left[ s\bigl(t\gamma f(x) + (I - tB)Sx\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x\, d\nu \right] \right. \\
&\qquad \left. - P_C\!\left[ s\bigl(t\gamma f(y) + (I - tB)Sy\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)y\, d\nu \right] \right\| \\
&\le st\gamma\|f(x) - f(y)\| + s\|I - tB\|\,\|Sx - Sy\| + \|I - sA\|\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x\, d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)y\, d\nu \right\| \\
&\le \bigl[st\gamma\rho + s(1 - t\tilde{\gamma}) + (1 - s\tilde{\lambda})\bigr]\|x - y\| = \bigl[1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st\bigr]\|x - y\|.
\end{aligned} \tag{17}
\]
Since $(s, t) \in (0, \kappa) \times (0, 1)$, we have $0 < 1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st < 1$. Hence, $W_{s,t}$ is a contraction and, by Banach's contraction principle, it has a unique fixed point, which we denote by $x_{s,t} \in C$. Next, we study the behavior of the net $\{x_{s,t}\}$ as $s \to 0$ and $t \to 0$ successively.
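To make the implicit scheme (15) concrete, here is a small numerical sketch that computes $x_{s,t}$ on a toy instance by running the inner Banach iteration for $W_{s,t}$ until it stabilizes. Every concrete choice below ($H = \mathbb{R}^2$, $C$ the closed unit ball, the rotation semigroup $T(\nu)$, $S = I$, $f(x) = 0.25x$, $A = 1.5I$, $B = 0.5I$, $\gamma = 1$, and $\lambda_s = 1/s$) is an illustrative assumption, not part of the paper.

```python
import numpy as np

# Toy instance (all choices are illustrative assumptions): H = R^2, C = closed unit
# ball, T(v) = rotation by angle v (a nonexpansive semigroup whose only common fixed
# point is the origin), S = I, f(x) = 0.25 x, A = 1.5 I, B = 0.5 I, gamma = 1.
# Then rho = 0.25, lambda_tilde = 1.5, gamma_tilde = 0.5, and 0 < gamma*rho < gamma_tilde.
gamma, rho, lam_tilde, gam_tilde = 1.0, 0.25, 1.5, 0.5

def T(v, x):                                   # rotation semigroup T(v)
    c, s_ = np.cos(v), np.sin(v)
    return np.array([c * x[0] - s_ * x[1], s_ * x[0] + c * x[1]])

def mean_T(x, lam, n_quad=400):                # (1/lam) * integral_0^lam T(v) x dv (midpoint rule)
    vs = (np.arange(n_quad) + 0.5) * (lam / n_quad)
    return np.mean([T(v, x) for v in vs], axis=0)

def P_C(x):                                    # projection onto the closed unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

f = lambda x: rho * x                          # rho-contraction
S = lambda x: x                                # nonexpansive mapping

def x_st(s, t, lam_s, tol=1e-10, iters=1000):
    """Compute the implicit net x_{s,t} of (15) by iterating the contraction W_{s,t}."""
    x = np.array([0.7, 0.2])
    for _ in range(iters):
        y = s * (t * gamma * f(x) + (1.0 - t * 0.5) * S(x)) \
            + (1.0 - 1.5 * s) * mean_T(x, lam_s)
        x_new = P_C(y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# kappa = 1/(lam_tilde + gam_tilde - gamma*rho) = 1/1.75, so s must stay below about 0.57.
for s in (0.2, 0.05, 0.01):
    print(s, x_st(s, t=0.5, lam_s=1.0 / s))
```

For this instance the only common fixed point of the rotation semigroup is the origin, so the printed points should shrink toward $0$ as $s$ decreases, which is the behavior described by the following theorem.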
Theorem 5. Assume that $\operatorname{VI}(\digamma, A - S) \ne \emptyset$. Then, for each fixed $t \in (0, 1)$, the net $\{x_{s,t}\}$ defined by (15) converges in norm, as $s \to 0^+$, to a point $x_t \in \digamma$. Moreover, as $t \to 0^+$, the net $\{x_t\}$ converges in norm to the unique solution $x^*$ of Problem 1.

Proof. We first show that, for each fixed $t$, the net $\{x_{s,t}\}$ is bounded. Take $y^* \in \digamma$. From (15) and the nonexpansivity of $P_C$, we have
\[
\begin{aligned}
\|x_{s,t} - y^*\| &= \left\| P_C\!\left[ s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right] - y^* \right\| \\
&\le \left\| s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - y^* \right\| \\
&= \left\| st\gamma\bigl(f(x_{s,t}) - f(y^*)\bigr) + s(I - tB)(Sx_{s,t} - Sy^*) + s\bigl(t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\bigr) \right. \\
&\qquad \left. + (I - sA)\left( \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - y^* \right) \right\| \\
&\le st\gamma\|f(x_{s,t}) - f(y^*)\| + s\|I - tB\|\,\|Sx_{s,t} - Sy^*\| + \|I - sA\|\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - y^* \right\| \\
&\qquad + s\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\| \\
&\le st\gamma\rho\|x_{s,t} - y^*\| + s(1 - t\tilde{\gamma})\|x_{s,t} - y^*\| + (1 - s\tilde{\lambda})\|x_{s,t} - y^*\| + s\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\| \\
&= \bigl[1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st\bigr]\|x_{s,t} - y^*\| + s\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\|.
\end{aligned} \tag{18}
\]
Hence,
\[
\|x_{s,t} - y^*\| \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\|. \tag{19}
\]
It follows that, for each fixed $t \in (0, 1)$, $\{x_{s,t}\}$ is bounded. Next, we show that $\lim_{s \to 0}\|T(\tau)x_{s,t} - x_{s,t}\| = 0$ for all $0 \le \tau < \infty$ and, consequently, that as $s \to 0^+$ the entire net $\{x_{s,t}\}$ converges in norm to a point $x_t \in \digamma$.

For each fixed $t \in (0, 1)$, set $R_t := \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\|$. It is clear that, for each fixed $t \in (0, 1)$, $\{x_{s,t}\} \subset B(y^*, R_t)$. Notice that
\[
\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - y^* \right\| \le \|x_{s,t} - y^*\| \le R_t. \tag{20}
\]
Moreover, we observe that if $x \in B(y^*, R_t)$, then
\[
\|T(s)x - y^*\| = \|T(s)x - T(s)y^*\| \le \|x - y^*\| \le R_t, \tag{21}
\]
that is, $B(y^*, R_t)$ is $T(s)$-invariant for all $s$. From (15), we deduce
\[
\begin{aligned}
\|T(\tau)x_{s,t} - x_{s,t}\| &\le \left\| T(\tau)x_{s,t} - T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right\| + \left\| T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right\| \\
&\qquad + \left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - x_{s,t} \right\| \\
&\le 2\left\| x_{s,t} - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right\| + \left\| T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right\| \\
&\le 2s\left\| t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t} - A\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right\| \\
&\qquad + \left\| T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right\|.
\end{aligned} \tag{22}
\]
Since $\{x_{s,t}\}$ is bounded, $\{f(x_{s,t})\}$ and $\{Sx_{s,t}\}$ are also bounded. Then, from Lemma 1, we deduce for all $0 \le \tau < \infty$ and fixed $t \in (0, 1)$ that
\[
\lim_{s \to 0}\|T(\tau)x_{s,t} - x_{s,t}\| = 0. \tag{23}
\]

Set $y_{s,t} = s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu$ for all $(s, t) \in (0, \kappa) \times (0, 1)$. We then have $x_{s,t} = P_C y_{s,t}$ and, for any $y^* \in \digamma$,
\[
\begin{aligned}
x_{s,t} - y^* &= x_{s,t} - y_{s,t} + y_{s,t} - y^* \\
&= x_{s,t} - y_{s,t} + st\gamma\bigl(f(x_{s,t}) - f(y^*)\bigr) + s(I - tB)(Sx_{s,t} - Sy^*) \\
&\qquad + s\bigl(t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\bigr) + (I - sA)\left( \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - y^* \right).
\end{aligned} \tag{24}
\]
Notice that, by the projection property (8),
\[
\langle x_{s,t} - y_{s,t},\, x_{s,t} - y^* \rangle \le 0. \tag{25}
\]
Thus, we have
\[
\begin{aligned}
\|x_{s,t} - y^*\|^2 &= \langle x_{s,t} - y_{s,t},\, x_{s,t} - y^* \rangle + st\gamma\langle f(x_{s,t}) - f(y^*),\, x_{s,t} - y^* \rangle + s\langle (I - tB)(Sx_{s,t} - Sy^*),\, x_{s,t} - y^* \rangle \\
&\qquad + \left\langle (I - sA)\left( \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - y^* \right),\, x_{s,t} - y^* \right\rangle + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\, x_{s,t} - y^* \rangle \\
&\le st\gamma\|f(x_{s,t}) - f(y^*)\|\,\|x_{s,t} - y^*\| + s\|I - tB\|\,\|Sx_{s,t} - Sy^*\|\,\|x_{s,t} - y^*\| \\
&\qquad + \|I - sA\|\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu - y^* \right\|\|x_{s,t} - y^*\| + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\, x_{s,t} - y^* \rangle \\
&\le st\gamma\rho\|x_{s,t} - y^*\|^2 + s(1 - t\tilde{\gamma})\|x_{s,t} - y^*\|^2 + (1 - s\tilde{\lambda})\|x_{s,t} - y^*\|^2 + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\, x_{s,t} - y^* \rangle \\
&= \bigl[1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st\bigr]\|x_{s,t} - y^*\|^2 + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\, x_{s,t} - y^* \rangle.
\end{aligned} \tag{26}
\]
So
\[
\|x_{s,t} - y^*\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\, x_{s,t} - y^* \rangle, \quad y^* \in \digamma. \tag{27}
\]
Assume $\{s_n\} \subset (0, 1)$ is such that $s_n \to 0$ as $n \to \infty$. By (27), we obtain immediately that
\[
\|x_{s_n,t} - y^*\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\, x_{s_n,t} - y^* \rangle, \quad y^* \in \digamma. \tag{28}
\]
Since $\{x_{s_n,t}\}$ is bounded, there exists a subsequence $\{s_{n_i}\}$ of $\{s_n\}$ such that $\{x_{s_{n_i},t}\}$ converges weakly to a point $x_t$. From (23) and Lemma 2, we get $x_t \in \digamma$. We can substitute $x_t$ for $y^*$ in (28) to get
\[
\|x_{s_n,t} - x_t\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,\langle t\gamma f(x_t) + (I - tB)Sx_t - Ax_t,\, x_{s_n,t} - x_t \rangle. \tag{29}
\]
The weak convergence of $\{x_{s_n,t}\}$ to $x_t$ therefore implies that $x_{s_n,t} \to x_t$ strongly. This proves the relative norm-compactness of the net $\{x_{s,t}\}$ as $s \to 0^+$ for each fixed $t \in (0, 1)$. Taking the limit as $n \to \infty$ in (28), we get
\[
\|x_t - y^*\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\, x_t - y^* \rangle, \quad \forall y^* \in \digamma. \tag{30}
\]
In particular, $x_t$ solves the following variational inequality:
\[
x_t \in \digamma, \qquad \langle Ay^* - t\gamma f(y^*) - (I - tB)Sy^*,\, y^* - x_t \rangle \ge 0, \quad \forall y^* \in \digamma. \tag{31}
\]
Note that the mapping $A - t\gamma f - (I - tB)S$ is monotone for all $t \in (0, 1)$, since
\[
\begin{aligned}
&\langle Ax - t\gamma f(x) - (I - tB)Sx - \bigl(Ay - t\gamma f(y) - (I - tB)Sy\bigr),\, x - y \rangle \\
&\quad = \langle Ax - Ay, x - y \rangle - t\gamma\langle f(x) - f(y), x - y \rangle - \langle (I - tB)(Sx - Sy), x - y \rangle \\
&\quad \ge \tilde{\lambda}\|x - y\|^2 - t\gamma\rho\|x - y\|^2 - (1 - t\tilde{\gamma})\|x - y\|^2 = \bigl[\tilde{\lambda} - 1 + (\tilde{\gamma} - \gamma\rho)t\bigr]\|x - y\|^2 \ge 0.
\end{aligned} \tag{32}
\]
By Lemma 3, (31) is equivalent to its dual variational inequality
\[
x_t \in \digamma, \qquad \langle Ax_t - t\gamma f(x_t) - (I - tB)Sx_t,\, y^* - x_t \rangle \ge 0, \quad \forall y^* \in \digamma. \tag{33}
\]

Next, we show that, as $s \to 0^+$, the entire net $\{x_{s,t}\}$ converges in norm to $x_t \in \digamma$. Assume $x_{s_n',t} \to x_t'$, where $s_n' \to 0$. By the same argument as above, we deduce that $x_t' \in \digamma$ solves the following variational inequality:
\[
x_t' \in \digamma, \qquad \langle Ax_t' - t\gamma f(x_t') - (I - tB)Sx_t',\, y^* - x_t' \rangle \ge 0, \quad \forall y^* \in \digamma. \tag{34}
\]
In (33), we take $y^* = x_t'$ to get
\[
\langle Ax_t - t\gamma f(x_t) - (I - tB)Sx_t,\, x_t' - x_t \rangle \ge 0. \tag{35}
\]
In (34), we take $y^* = x_t$ to get
\[
\langle Ax_t' - t\gamma f(x_t') - (I - tB)Sx_t',\, x_t - x_t' \rangle \ge 0. \tag{36}
\]
Adding up (35) and (36) yields
\[
\langle Ax_t - Ax_t', x_t - x_t' \rangle - t\gamma\langle f(x_t) - f(x_t'), x_t - x_t' \rangle - \langle (I - tB)(Sx_t - Sx_t'), x_t - x_t' \rangle \le 0. \tag{37}
\]
At the same time, we note that
\[
\begin{aligned}
&\langle Ax_t - Ax_t', x_t - x_t' \rangle - t\gamma\langle f(x_t) - f(x_t'), x_t - x_t' \rangle - \langle (I - tB)(Sx_t - Sx_t'), x_t - x_t' \rangle \\
&\quad \ge \bigl[\tilde{\lambda} - 1 + (\tilde{\gamma} - \gamma\rho)t\bigr]\|x_t - x_t'\|^2 \ge 0.
\end{aligned} \tag{38}
\]
Therefore, by (37) and (38), we deduce
\[
x_t' = x_t. \tag{39}
\]
Hence, the entire net $\{x_{s,t}\}$ converges in norm to $x_t \in \digamma$ as $s \to 0^+$.

Finally, we show that, as $t \to 0^+$, the net $\{x_t\}$ converges in norm to the unique solution $x^*$ of Problem 1. In (33), we take any $y^* \in \Omega$ to deduce
\[
\langle Ax_t - t\gamma f(x_t) - (I - tB)Sx_t,\, y^* - x_t \rangle \ge 0. \tag{40}
\]
By virtue of the monotonicity of $A - S$ and the fact that $y^* \in \Omega$, we have
\[
\langle Ax_t - Sx_t,\, y^* - x_t \rangle \le \langle Ay^* - Sy^*,\, y^* - x_t \rangle \le 0. \tag{41}
\]
We can rewrite (40) as
\[
\langle t\bigl[Ax_t - \gamma f(x_t) - (I - B)Sx_t\bigr] + (1 - t)(Ax_t - Sx_t),\, y^* - x_t \rangle \ge 0. \tag{42}
\]
It follows from (41) and (42) that
\[
\langle Ax_t - \gamma f(x_t) - (I - B)Sx_t,\, y^* - x_t \rangle \ge 0, \quad \forall y^* \in \Omega. \tag{43}
\]
Hence,
\[
\begin{aligned}
\|x_t - y^*\|^2 &\le \langle x_t - y^*, x_t - y^* \rangle + \langle \gamma f(x_t) + (I - B)Sx_t - Ax_t,\, x_t - y^* \rangle \\
&= \gamma\langle f(x_t) - f(y^*), x_t - y^* \rangle + \langle (I - B)(Sx_t - Sy^*), x_t - y^* \rangle + \langle Ay^* - Ax_t, x_t - y^* \rangle \\
&\qquad + \langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\, x_t - y^* \rangle \\
&\le \gamma\rho\|x_t - y^*\|^2 + (1 - \tilde{\gamma})\|x_t - y^*\|^2 - \tilde{\lambda}\|x_t - y^*\|^2 + \langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\, x_t - y^* \rangle \\
&= \bigl[1 - (\tilde{\lambda} + \tilde{\gamma} - \gamma\rho)\bigr]\|x_t - y^*\|^2 + \langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\, x_t - y^* \rangle.
\end{aligned} \tag{44}
\]
Therefore,
\[
\|x_t - y^*\|^2 \le \frac{1}{\tilde{\lambda} + \tilde{\gamma} - \gamma\rho}\,\langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\, x_t - y^* \rangle, \quad y^* \in \Omega. \tag{45}
\]
In particular,
\[
\|x_t - y^*\| \le \frac{1}{\tilde{\lambda} + \tilde{\gamma} - \gamma\rho}\,\|\gamma f(y^*) + (I - B)Sy^* - Ay^*\|, \quad \forall t \in (0, 1), \tag{46}
\]
which implies that $\{x_t\}$ is bounded.

We next prove that $\omega_w(x_t) \subset \Omega$; namely, if $(t_n)$ is a null sequence in $(0, 1)$ such that $x_{t_n} \to \hat{x}$ weakly as $n \to \infty$, then $\hat{x} \in \Omega$. To see this, we use (33) to get
\[
\langle (A - S)x_t,\, y^* - x_t \rangle \ge \frac{t}{1 - t}\,\langle \gamma f(x_t) + (I - B)Sx_t - Ax_t,\, y^* - x_t \rangle, \quad y^* \in \digamma. \tag{47}
\]
However, since $A - S$ is monotone,
\[
\langle (A - S)y^*,\, y^* - x_t \rangle \ge \langle (A - S)x_t,\, y^* - x_t \rangle. \tag{48}
\]
Combining the last two relations yields
\[
\langle (A - S)y^*,\, y^* - x_t \rangle \ge \frac{t}{1 - t}\,\langle \gamma f(x_t) + (I - B)Sx_t - Ax_t,\, y^* - x_t \rangle, \quad y^* \in \digamma. \tag{49}
\]
Letting $t = t_n \to 0^+$ as $n \to \infty$ in (49), we get
\[
\langle (A - S)y^*,\, y^* - \hat{x} \rangle \ge 0, \quad y^* \in \digamma. \tag{50}
\]
The equivalent dual variational inequality of (50) is
\[
\langle (A - S)\hat{x},\, y^* - \hat{x} \rangle \ge 0, \quad y^* \in \digamma. \tag{51}
\]
Namely, $\hat{x}$ is a solution of (13); hence $\hat{x} \in \Omega$.

We further prove that $\hat{x} = x^*$, the unique solution of (12). As a matter of fact, we have, by (45),
\[
\|x_{t_n} - \hat{x}\|^2 \le \frac{1}{\tilde{\lambda} + \tilde{\gamma} - \gamma\rho}\,\langle \gamma f(\hat{x}) + (I - B)S\hat{x} - A\hat{x},\, x_{t_n} - \hat{x} \rangle, \quad \hat{x} \in \Omega. \tag{52}
\]
Therefore, the weak convergence of $\{x_{t_n}\}$ to $\hat{x}$ actually implies that $x_{t_n} \to \hat{x}$ in norm. Now we can let $t = t_n \to 0$ in (45) to get
\[
\langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\, y^* - \hat{x} \rangle \le 0, \quad \forall y^* \in \Omega, \tag{53}
\]
which is equivalent to its dual variational inequality
\[
\langle \gamma f(\hat{x}) + (I - B)S\hat{x} - A\hat{x},\, y^* - \hat{x} \rangle \le 0, \quad \forall y^* \in \Omega. \tag{54}
\]
It turns out that $\hat{x} \in \Omega$ solves (12). By uniqueness, we have $\hat{x} = x^*$. This is sufficient to guarantee that $x_t \to x^*$ in norm as $t \to 0^+$. This completes the proof.
Corollary 6. For each $(s, t) \in (0, \kappa) \times (0, 1)$, let $\{x_{s,t}\}$ be the double net defined by
\[
x_{s,t} = P_C\!\left[ s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (1 - s)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right]. \tag{55}
\]
Then, for each fixed $t \in (0, 1)$, the net $\{x_{s,t}\}$ defined by (55) converges in norm, as $s \to 0^+$, to a point $x_t \in \digamma$. Moreover, as $t \to 0^+$, the net $\{x_t\}$ converges in norm to $x^*$, which solves the following variational inequality:
\[
x^* \in \Omega, \qquad \langle (I - \gamma f)x^* - (I - B)Sx^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \Omega, \tag{56}
\]
where $\Omega$ is the set of solutions of the following variational inequality:
\[
x^* \in \digamma, \qquad \langle (I - S)x^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \digamma. \tag{57}
\]
(Corollary 6 is Theorem 5 specialized to $A = I$.)

Corollary 7. For each $(s, t) \in (0, \kappa) \times (0, 1)$, let $\{x_{s,t}\}$ be the double net defined by
\[
x_{s,t} = P_C\!\left[ s(1 - t)Sx_{s,t} + (1 - s)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\, d\nu \right]. \tag{58}
\]
Then, for each fixed $t \in (0, 1)$, the net $\{x_{s,t}\}$ defined by (58) converges in norm, as $s \to 0^+$, to a point $x_t \in \digamma$. Moreover, as $t \to 0^+$, the net $\{x_t\}$ converges to the minimum-norm solution $x^*$ of the following variational inequality:
\[
x^* \in \digamma, \qquad \langle (I - S)x^*,\, x - x^* \rangle \ge 0, \quad \forall x \in \digamma. \tag{59}
\]

Proof. In (55), we take $f = 0$ and $B = I$. Then (55) reduces to (58). Hence, the net $\{x_t\}$ generated by (58) converges in norm to $x^* \in \Omega$ which satisfies
\[
x^* \in \Omega, \qquad \langle x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega. \tag{60}
\]
This indicates that
\[
\|x^*\|^2 \le \langle x^*, x \rangle \le \|x^*\|\,\|x\|, \quad \forall x \in \Omega, \tag{61}
\]
so that $\|x^*\| \le \|x\|$ for all $x \in \Omega$. Therefore, $x^*$ is the minimum-norm solution of (59). This completes the proof.
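As a quick numerical check of Corollary 7, the sketch below reuses the toy instance introduced after Algorithm 4 (all concrete choices there were illustrative assumptions, and the functions `S`, `mean_T`, and `P_C` are taken from that sketch) with $f = 0$, $B = I$, and $A = I$, so that the scheme (58) only mixes a damped copy of $S$ with the semigroup averages.

```python
def x_st_min_norm(s, t, lam_s, tol=1e-10, iters=1000):
    """Corollary 7 specialization of the earlier sketch: f = 0, B = I, A = I in (58)."""
    x = np.array([0.7, 0.2])
    for _ in range(iters):
        y = s * (1.0 - t) * S(x) + (1.0 - s) * mean_T(x, lam_s)
        x_new = P_C(y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# For the rotation semigroup the common fixed point set is {0}, so the minimum-norm
# solution is the origin; the output should be close to it for small s.
print(x_st_min_norm(s=0.01, t=0.5, lam_s=100.0))
```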
Acknowledgment

The author was supported in part by NSFC 71161001-G0105, NNSF of China (10671157 and 61261044), NGY2012097, and the Beifang University of Nationalities scientific research project.

References

[1] I. Yamada, "The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings," in Inherently Parallel Algorithms for Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds., pp. 473–504, Elsevier, Amsterdam, The Netherlands, 2001.
[2] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, "Relaxed extragradient iterative methods for variational inequalities," Applied Mathematics and Computation, vol. 218, no. 3, pp. 1112–1123, 2011.
[3] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, "Some iterative methods for finding fixed points and for solving constrained convex minimization problems," Nonlinear Analysis. Theory, Methods & Applications A, vol. 74, no. 16, pp. 5286–5302, 2011.
[4] L.-C. Ceng, S.-M. Guu, and J.-C. Yao, "A general iterative method with strongly positive operators for general variational inequalities," Computers & Mathematics with Applications, vol. 59, no. 4, pp. 1441–1452, 2010.
[5] S.-S. Chang, H. W. Joseph Lee, and C. K. Chan, "A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization," Nonlinear Analysis. Theory, Methods & Applications A, vol. 70, no. 9, pp. 3307–3319, 2009.
[6] S. S. Chang, H. W. Joseph Lee, C. K. Chan, and J. A. Liu, "A new method for solving a system of generalized nonlinear variational inequalities in Banach spaces," Applied Mathematics and Computation, vol. 217, no. 16, pp. 6830–6837, 2011.
[7] Y. J. Cho, I. K. Argyros, and N. Petrot, "Approximation methods for common solutions of generalized equilibrium, systems of nonlinear variational inequalities and fixed point problems," Computers & Mathematics with Applications, vol. 60, no. 8, pp. 2292–2301, 2010.
[8] Y. J. Cho, S. M. Kang, and X. Qin, "On systems of generalized nonlinear variational inequalities in Banach spaces," Applied Mathematics and Computation, vol. 206, no. 1, pp. 214–220, 2008.
[9] F. Cianciaruso, V. Colao, L. Muglia, and H.-K. Xu, "On an implicit hierarchical fixed point approach to variational inequalities," Bulletin of the Australian Mathematical Society, vol. 80, no. 1, pp. 117–124, 2009.
[10] V. Colao, G. L. Acedo, and G. Marino, "An implicit method for finding common solutions of variational inequalities and systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings," Nonlinear Analysis. Theory, Methods & Applications A, vol. 71, no. 7-8, pp. 2708–2715, 2009.
[11] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, New York, NY, USA, 1984.
[12] P. Kumam, N. Petrot, and R. Wangkeeree, "Existence and iterative approximation of solutions of generalized mixed quasi-variational-like inequality problem in Banach spaces," Applied Mathematics and Computation, vol. 217, no. 18, pp. 7496–7503, 2011.
[13] M. Aslam Noor, "Some developments in general variational inequalities," Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004.
[14] M. A. Noor, "An extragradient algorithm for solving general nonconvex variational inequalities," Applied Mathematics Letters, vol. 23, no. 8, pp. 917–921, 2010.
[15] H.-K. Xu, "Viscosity method for hierarchical fixed point approach to variational inequalities," Taiwanese Journal of Mathematics, vol. 14, no. 2, pp. 463–478, 2010.
[16] I. Yamada, N. Ogura, and N. Shirakawa, "A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems," in Inverse Problems, Image Analysis, and Medical Imaging, vol. 313 of Contemporary Mathematics, pp. 269–305, American Mathematical Society, Providence, RI, USA, 2002.
[17] Y. Yao, Y. J. Cho, and R. Chen, "An iterative algorithm for solving fixed point problems, variational inequality problems and mixed equilibrium problems," Nonlinear Analysis. Theory, Methods & Applications A, vol. 71, no. 7-8, pp. 3363–3373, 2009.
[18] Y. Yao, R. Chen, and H.-K. Xu, "Schemes for finding minimum-norm solutions of variational inequalities," Nonlinear Analysis. Theory, Methods & Applications A, vol. 72, no. 7-8, pp. 3447–3456, 2010.
[19] Y. Yao, Y. J. Cho, and Y.-C. Liou, "Iterative algorithms for hierarchical fixed points problems and variational inequalities," Mathematical and Computer Modelling, vol. 52, no. 9-10, pp. 1697–1705, 2010.
[20] Y. Yao and J.-C. Yao, "On modified iterative method for nonexpansive mappings and monotone mappings," Applied Mathematics and Computation, vol. 186, no. 2, pp. 1551–1558, 2007.
[21] H. Zegeye, E. U. Ofoedu, and N. Shahzad, "Convergence theorems for equilibrium problem, variational inequality problem and countably infinite relatively quasi-nonexpansive mappings," Applied Mathematics and Computation, vol. 216, no. 12, pp. 3439–3449, 2010.
[22] H. Zegeye and N. Shahzad, "A hybrid scheme for finite families of equilibrium, variational inequality and fixed point problems," Nonlinear Analysis. Theory, Methods & Applications A, vol. 74, no. 1, pp. 263–272, 2011.
[23] L.-C. Zeng and J.-C. Yao, "Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems," Taiwanese Journal of Mathematics, vol. 10, no. 5, pp. 1293–1303, 2006.
[24] I. Yamada and N. Ogura, "Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings," Numerical Functional Analysis and Optimization, vol. 25, no. 7-8, pp. 619–655, 2004.
[25] N. Buong, "Strong convergence theorem for nonexpansive semigroups in Hilbert space," Nonlinear Analysis. Theory, Methods & Applications A, vol. 72, no. 12, pp. 4534–4540, 2010.
[26] R. Chen and H. He, "Viscosity approximation of common fixed points of nonexpansive semigroups in Banach space," Applied Mathematics Letters, vol. 20, no. 7, pp. 751–757, 2007.
[27] R. Chen and Y. Song, "Convergence to common fixed point of nonexpansive semigroups," Journal of Computational and Applied Mathematics, vol. 200, no. 2, pp. 566–575, 2007.
[28] F. Cianciaruso, G. Marino, and L. Muglia, "Iterative methods for equilibrium and fixed point problems for nonexpansive semigroups in Hilbert spaces," Journal of Optimization Theory and Applications, vol. 146, no. 2, pp. 491–509, 2010.
[29] A. T. M. Lau, "Semigroup of nonexpansive mappings on a Hilbert space," Journal of Mathematical Analysis and Applications, vol. 105, no. 2, pp. 514–522, 1985.
[30] A. T.-M. Lau and W. Takahashi, "Fixed point properties for semigroup of nonexpansive mappings on Fréchet spaces," Nonlinear Analysis. Theory, Methods & Applications A, vol. 70, no. 11, pp. 3837–3841, 2009.
[31] A. T.-M. Lau and Y. Zhang, "Fixed point properties of semigroups of non-expansive mappings," Journal of Functional Analysis, vol. 254, no. 10, pp. 2534–2554, 2008.
[32] H. Zegeye and N. Shahzad, "Strong convergence theorems for a finite family of nonexpansive mappings and semigroups via the hybrid method," Nonlinear Analysis. Theory, Methods & Applications A, vol. 72, no. 1, pp. 325–329, 2010.
[33] T. Shimizu and W. Takahashi, "Strong convergence to common fixed points of families of nonexpansive mappings," Journal of Mathematical Analysis and Applications, vol. 211, no. 1, pp. 71–83, 1997.
[34] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.
[35] X. Lu, H.-K. Xu, and X. Yin, "Hybrid methods for a class of monotone variational inequalities," Nonlinear Analysis. Theory, Methods & Applications A, vol. 71, no. 3-4, pp. 1032–1041, 2009.