ALGORITHMS FOR SOLVING MONOTONE ...

3 downloads 0 Views 2MB Size Report
Apr 4, 2017 - Y. Censor, A. Gibali and S. Reich, Algorithms for the split variational ... Yair Censor for their devoted supervision and for helping me fulfill.
ALGORITHMS FOR SOLVING MONOTONE VARIATIONAL INEQUALITIES AND APPLICATIONS

Aviv Gibali

ALGORITHMS FOR SOLVING MONOTONE VARIATIONAL INEQUALITIES AND APPLICATIONS

Research Thesis

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Aviv Gibali

Submitted to the Senate of the Technion - Israel Institute of Technology

Sivan

5772

Haifa

June

2012

This research thesis was written under the supervision of Prof. Simeon Reich and Prof. Yair Censor in the Department of Mathematics Publications 1. Y. Censor, A. Gibali and S. Reich, Extensions of Korpelevich’s extragradient method for solving the variational inequality problem in Euclidean space, Optimization, accepted for publication. Impact factor: 0.755. 2. Y. Censor, A. Gibali and S. Reich, Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space, Optimization Methods and Software 26 (2011), 827–845. Impact factor: 0.785. 3. Y. Censor, A. Gibali and S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, Journal of Optimization Theory and Applications 148 (2011), 318–335. Impact factor: 1.011. 4. Y. Censor, A. Gibali and S. Reich, Algorithms for the split variational inequality problem, Numerical Algorithms 59 (2012), 301–323. Impact factor: 0.784. 5. Y. Censor, A. Gibali, S. Reich and S. Sabach, Common solutions to variational inequalities, Set-Valued and Variational Analysis 20 (2012), 229–247. Impact factor: 0.333. 6. Y. Censor, A. Gibali and S. Reich, A von Neumann alternating method for finding common solutions to variational inequalities, Nonlinear Analysis 75 (2012), 4596– 4603. Impact factor: 1.409.

The generous financial help of the Technion is gratefully acknowledged

I would like to thank my advisors Prof. Simeon Reich and Prof. Yair Censor for their devoted supervision and for helping me fulfill my dream of doing research at the frontier of science. I am grateful for their time, patience and guidance which provided useful tools for research. Their wise comments went even beyond the boundaries mere science. I would also like to thank Prof. Marc Teboulle for reading my dissertation and serving on my Ph. D. defense committee. I dedicate my thesis to all who believed in me and supported me on my way along the years. My parents, Sali and Vico, my brothers Shir and Daniel and all my friends. Last but not least, I thank my wonderful family, my wife Shimrit and my two daughters Noam and Shahar; this achievement is all yours.

Table of Contents List of Tables

ii

List of Figures

iii

Abstract

iv

List of Symbols

v

1 Introduction

1

2 Preliminaries 2.1 The variational inequality problem and other related problems . . . . . . . 3 Classical Variational Inequality Problems 3.1 Extensions of the Korpelevich extragradient algorithm . 3.2 Variational inequalities and fixed points . . . . . . . . . . 3.3 Strong convergence in Hilbert space . . . . . . . . . . . . 3.4 The constrained variational inequality problem . . . . . . 3.5 The multi-valued δ-algorithmic scheme . . . . . . . . . . 3.6 Variational inequality problems over the fixed point set of

4 19

24 . . . . . . . . . . 24 . . . . . . . . . . 41 . . . . . . . . . . 46 . . . . . . . . . . 57 . . . . . . . . . . 59 an FQNE operator 69

4 Split Inverse Problems 4.1 The split common null point problem . . . . . . . . . . . . . . . . . . . . . 4.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 The common solutions to variational inequalities problem . . . . . . . . . . 4.4 A von Neumann alternating method for finding common solutions to variational inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

108

Index

116

References

118

i

83 83 93 98

List of Tables 4.1 4.2

Geometric illustration of Algorithm 4.3.1 starting from (−1/2, 3) . . . . . . Geometric illustration of Algorithm 4.3.1 starting from (3, 3) . . . . . . . .

ii

107 108

List of Figures 2.1 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 3.10 3.11 3.12 3.13 3.14 4.1 4.2 4.3

Property of cutters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 The classical projection method for the VIP . . . . . . . . . . . . . . . . . 25 Korpelevich’s extragradient method . . . . . . . . . . . . . . . . . . . . . . 26 Fukushima’s method: Projecting onto super halfspaces . . . . . . . . . . . 28 Two-projection algorithms (Iusem & Svaiter) . . . . . . . . . . . . . . . . . 29 Two-projection algorithms (Solodov & Svaiter) . . . . . . . . . . . . . . . . 30 The subgradient extragradient algorithm . . . . . . . . . . . . . . . . . . . 32 The perturbed extragradient algorithm . . . . . . . . . . . . . . . . . . . . 37 The hybrid perturbed subgradient extragradient algorithm . . . . . . . . . 39 The two-subgradient extragradient algorithm . . . . . . . . . . . . . . . . . 40 The modified subgradient extragradient algorithm . . . . . . . . . . . . . . 42 The δ-algorithmic scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 Special cases of the δ-algorithmic scheme . . . . . . . . . . . . . . . . . . . 68 VIP over the fixed point set of a cutter operator . . . . . . . . . . . . . . . 71 The generalized convex feasible set . . . . . . . . . . . . . . . . . . . . . . 80 1 Geometric illustration of Algorithm 4.3.1 with the starting point x = (−1/2, 3)107 Geometric illustration of Algorithm 4.3.1 with the starting point x1 = (3, 3) 108 The alternating method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

iii

Abstract In this thesis we study iterative algorithms for solving the Variational Inequality Problem (VIP) in an infinite dimensional Hilbert space. Several results are also presented in Euclidean space. The thesis is divided into two major parts. In the first part we study Korpelevich’s extragradient method for solving the VIP. This method involves projecting twice onto the feasible set of the VIP at each iterative step. Our novelty is the development of several extensions of this method. We were able to replace the second orthogonal projection onto the feasible set of the VIP in Korpelevich’s extragradient method with a projection onto a specific half-space (subgradient half-space). Another extension allows projections onto the members of an infinite sequence of subsets which epi-converges to the feasible set of the VIP. Several other extensions are also presented and the convergence analysis of the algorithms is given. In the second part we propose a prototypical Split Inverse Problem (SIP). The SIP is a modeling paradigm for handling problems in which different parts of the model reside in different spaces that are linked by a known operator between the spaces. This approach already found use in intensity-modulated radiation therapy (IMRT) treatment planning and more recently in the area of adaptive filtering. Two new special cases of the SIP are the Split Common Null Point Problem (SCNPP) and the Split Variational Inequality Problem (SVIP). The SCNPP entails finding a zero of a maximal monotone mapping in one space, the image of which under a given bounded linear transformation is a zero of another maximal monotone mapping. The SVIP entails finding a solution of one VIP, the image of which under a given bounded linear transformation is a solution of another VIP. We present four iterative algorithms that solve such problems in Hilbert spaces, and establish weak convergence for one and strong convergence for the other three. Special cases of the SCNPP and the SVIP are discussed, some of which are new even in Euclidean space.

iv

List of Symbols Sets & Spaces N R R ∪ {∞} Rn Rn+ B H X∗ VIP(f, C) Sol(f, C) SOL EP(g, C) Fix(f ) dom(f ) range(f ) B(x, r) Cr Cr ∆ ΩΦ int(C) n 2R = P (Rn ) H(x, y) Hλ G(M ) H(x, C, δ) T (z) g≤0 NCCS(Rn )

the natural numbers the real line (−∞, ∞] the real n-dimensional Euclidean space the nonnegative orthant of Rn Banach space Hilbert space the topological dual space of the vector space X the variational inequality problem with the operator f and the feasible set C the solution set of VIP(f, C) a finite intersection of Sol(fi , Ci ) the equilibrium problem with the bifunction g and the feasible set C the fixed point set of an operator f the domain of an operator f the range of an operator f the ball of center x and radius r {u ∈ Rn | dist(u, C) ≥ r} {u ∈ Rn | dist(u, C) < r} the diagonal set the set of all minimizers of a function Φ over a closed and convex set Ω the interior of a set C the set of subsets of Rn the half-space {u ∈ H | hu − y, x − yi ≤ 0} H(x, yλ ) with yλ := λx + (1 − λ)y for any λ ∈ [0, 1] the graph of a mapping M the set of all hyperplanes separating the sets C and B(x, δ dist(x, C)) the half-space the bounding hyperplane of which separates the set C from the point z if z ∈ / int C. Otherwise T (z) = H n the sublevel set {x ∈ R | g(x) ≤ 0}, for g : Rn → R the family of all nonempty, closed and convex subsets of Rn

v

Functions dist(x, C) dγ (C1 , C2 ) Φ(x) IC

the the the the

Euclidean distance function between a point x and a set C γ-distance function between the sets PmC1 and C2 proximity function defined as 1/2 i=1 wi dist(x, Ci )2 indicator function of a set C

the the the the the the

projection operator onto the set C projection operator onto the set Fix(f ) projection onto the sublevel set of g≤0 identity operator resolvent of a mapping M gradient of a function g

Operators PC R := PFix(f ) Pg≤0 I JλM := (I + λM )−1 ∇g Classes of operators T Fν DCp

the class of firmly quasi-nonexpansive operators (FQNE) the class of ν-firmly quasi-nonexpansive operators the class of p-demicontractive operators

Mappings ∂g NC Tg Tg

the the the the

subdifferential of the function g normal cone of a set C saddle-point associated mapping equilibrium associated mapping

the the the the the the the the the the the the the the the the

Convex Feasibility Problem Variational Inequality Problem Split Inverse Problem Split Feasibility Problem Split Variational Inequality Problem Common Solution to Variational Inequalities Problem Constrained Multiple-Set Split Convex Feasibility Problem Equilibrium Problem Split Common Fixed Point Problem Split Common Null Point Problem Split Zero Problem Split Monotone Variational Inclusion Split Minimization Problem Split Saddle-Point Problem Split Minimax Problem Split Equilibrium Problem vi

Problems CFP VIP SIP SFP SVIP CSVIP CMSSCFP EP SCFPP SCNPP SZP SMVI SMP SSPP SMMP SEP

Chapter 1 Introduction In the first part of this thesis we study several modifications of the extragradient method for solving the Variational Inequality Problem (VIP) and introduce several related problems. Let H be a real Hilbert space with inner product h·, ·i and norm k · k. Given a nonempty, closed and convex subset C ⊂ H and an operator f : H → H, the Variational Inequality Problem (VIP) is to find a point x∗ such that x∗ ∈ C and hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C.

(1.0.1)

This problem, denoted by VIP(f, C), which is fundamental in Optimization Theory was introduced by Hartman and Stampacchia in [88]. Many algorithms for solving the VIP are projection algorithms that employ projections onto C or onto some related set in order to reach iteratively a solution. For an excellent treatise on variational inequality problems in finite-dimensional spaces, see the two-volume book by Facchinei and Pang [77]. The books by Konnov [104] and Patriksson [130] contain extensive studies of VIPs including applications, algorithms and numerical results. For a wide range of applications of VIPs, see, e.g., the book by Kinderlehrer and Stampacchia [106]. See also Auslender and Teboulle [7]. The importance of VIPs stems from the fact that several fundamental problems in Optimization Theory can be formulated as VIPs, as the following few examples show. Example 1.0.1. Constrained minimization. Let C ⊂ H be a nonempty, closed and convex subset and let g : H → R be a continuously differentiable function which is convex on C. Then x∗ is a minimizer of g over C if and only if x∗ solves the VIP h∇g(x∗ ), x − x∗ i ≥ 0 for all x ∈ C,

(1.0.2)

where ∇g is the gradient of g (see, e.g., [20, Proposition 3.1, p. 210]). When g is not differentiable, we get the VIP hu∗ , x − x∗ i ≥ 0 for all x ∈ C,

(1.0.3)

where u∗ ∈ ∂g(x∗ ) and ∂g is the (set-valued) subdifferential of g (see, e.g., [86, Chapter 4, Subsection 3.5]). Example 1.0.2. When the Hilbert space H is Rn and the set C is Rn+ , then the VIP is equivalent to the nonlinear complementarity problem: find a point x∗ ∈ Rn+ such that f (x∗ ) ∈ Rn+ and hf (x∗ ), x∗ i = 0. 1

2

Variational Inequalities

Indeed, let H be Rn and C = Rn+ . So, if x∗ solves (1.0.1) then there exists x∗ ∈ Rn+ such that hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ Rn+ . (1.0.4) In particular, if we take x = 0 we obtain hf (x∗ ), x∗ i ≤ 0 and if we take x = 2x∗ we obtain hf (x∗ ), x∗ i ≥ 0. Combining these two inequalities, we see that hf (x∗ ), x∗ i = 0. As a consequence, this yields hf (x∗ ), xi ≥ 0 for all x ∈ Rn+ (1.0.5) and hence f (x∗ ) ∈ Rn+ . Conversely, if x∗ solves the nonlinear complementarity problem, then hf (x∗ ), x − x∗ i = hf (x∗ ), xi ≥ 0 for all x ∈ Rn+ (since f (x∗ ) ∈ Rn+ ), which means that x∗ solves (1.0.1). Example 1.0.3. When the set C is the whole space H, then the VIP (1.0.1) is equivalent to the problem of finding zeros of an operator f , i.e., to find an element x∗ ∈ H such that f (x∗ ) = 0. When the Hilbert space H is Rn then the VIP enables us to solve a system of equations. Example 1.0.4. Let H1 and H2 be two real Hilbert spaces, and let C1 and C2 be two convex subsets of H1 and H2 , respectively. Given a bifunction g : H1 ×H2 → R, the Saddle-Point Problem (SDP) is to find a point (u∗1 , u∗2 ) ∈ C1 × C2 such that g(u∗1 , u2 ) ≤ g(u∗1 , u∗2 ) ≤ g(u1 , u∗2 ) for all (u1 , u2 ) ∈ C1 × C2 . This problem can be written as the VIP of finding (u∗1 , u∗2 ) ∈ C1 × C2 such that      ∗  ∇gu1 (u∗1 , u∗2 ) u1 u1 , − ≥ 0 for all (u1 , u2 ) ∈ C1 × C2 . ∗ ∗ −∇gu2 (u1 , u2 ) u2 u∗2

(1.0.6)

(1.0.7)

Many algorithms for solving the VIP are projection algorithms that employ projections onto C or onto some related set in order to reach iteratively a solution. See, e.g., Yang [167], Yamada and Ogura [161], Auslender and Teboulle [8] or Censor, Iusem and Zenios [54], to name but a (very) few papers out of the existing vast literature. In the first part of the thesis we focus on Korpelevich’s extragradient method, in which two orthogonal projections onto the set C (the feasible set of the VIP (1.0.1)) are calculated at each step. We present several modifications of this algorithm, where we mainly replace the second orthogonal projection onto C with a specific constructible subgradient projection. We also present an algorithm which finds a solution to a VIP which is also a fixed point of a given nonexpansive operator. This problem was widely studied; see, e.g., Ceng et al. [39] and the references therein. In the second part of the thesis we propose a prototypical Split Inverse Problem (SIP) and a new variational problem, called the Split Variational Inequality Problem (SVIP), which is an SIP. It entails finding a solution of one inverse problem (e.g., a Variational Inequality Problem (VIP)), the image of which under a given bounded linear transformation is a solution of another inverse problem such as a VIP. We present several iterative algorithms that solve such problems, under reasonable conditions, in Hilbert space. Following the connection between VIP and other optimization problem the SIP includes, for example, split minimization between two spaces so that the image of a solution point of one minimization problem, under a given bounded linear operator, is a solution point of

Algorithms for Solving Monotone Variational Inequalities and Applications

3

another minimization problem. An important special case of the SIP is the Split Feasibility Problem (SFP) which had already been studied and used in practice as a model in intensity-modulated radiation therapy (IMRT) treatment planning; see [42, 46].

Chapter 2 Preliminaries Throughout this thesis H is a real Hilbert space with inner product h·, ·i and norm k · k, and C is a nonempty, closed and convex subset of H. We write xk * x to indicate  that the ∞ ∞ k k sequence x k=0 converges weakly to x and x → x to indicate that the sequence xk k=0 converges strongly to x. Definition 2.0.5. A matrix A is a symmetric positive definite matrix if A = AT (T represents the transpose) and xT Ax ≥ 0, for all x 6= 0.

(2.0.1)

Definition 2.0.6. Given a symmetric positive definite matrix A, the A-norm of x ∈ Rn is defined by kxkA := hx, Axi1/2 for all x ∈ Rn . (2.0.2) We present several properties of sequences in Hilbert space. The first property is known as the Opial condition [128].  ∞ Condition 2.0.1. (Opial) For any sequence xk k=0 in H such that xk * x, we have lim inf kxk − xk < lim inf kxk − yk for all y 6= x. k→∞

k→∞

(2.0.3)

The second property is known as the Kadec-Klee property [83].  ∞ Definition 2.0.7. For any sequence xk k=0 in H, if xk * x and kxk k → kxk, then kxk − xk → 0. We also recall that in a real Hilbert space H, kλx + (1 − λ)yk2 = λkxk2 + (1 − λ)kyk2 − λ(1 − λ)kx − yk2

(2.0.4)

for all x, y ∈ H and λ ∈ [0, 1]. The next definition is due to Clarkson [60]. Definition 2.0.8. A Banach space B is said to be uniformly convex if to each ε ∈ (0, 2], there corresponds a positive δ(ε) such that the conditions kxk = kyk = 1 and kx − yk ≥ ε imply that k(x + y) /2k ≤ 1 − δ(ε). 4

Algorithms for Solving Monotone Variational Inequalities and Applications

5

we recall the Parallelogram Identity, that is 2kxk2 + 2kyk2 = kx + yk2 + kx − yk2 .

(2.0.5)

From this inequality if follows that every Hilbert space is uniformly convex. Next we present several more useful definitions and properties of sequences.  ∞ Definition 2.0.9. A sequence xk k=0 ⊂ H is called Fej´ er-monotone with respect to C if for every u ∈ C kxk+1 − uk ≤ kxk − uk for all k ≥ 0. (2.0.6) The next lemma is a well-known result (see, e.g., [123, Lemma 3.1]). ∞ Lemma 2.0.10. Let H be a real Hilbert a real sequence satisfying 0 < a ≤  kk} k=0  k ∞space, {α ∞ αk ≤ b < 1 for all k ≥ 0, and let v k=0 and w k=0 be two sequences in H such that for some σ ≥ 0, lim sup kv k k ≤ σ, (2.0.7) k→∞

lim sup kwk k ≤ σ

(2.0.8)

k→∞

and lim kαk v k + (1 − αk )wk k = σ.

(2.0.9)

lim kv k − wk k = 0.

(2.0.10)

k→∞

Then k→∞

The next lemma is taken from [80, Lemma 2]. ∞ Lemma 2.0.11. Let {ξk }∞ k=0 and {νk }k=0 be two sequences of non-negative numbers, and let µ ∈ [0, 1) be a constant. If the inequalities

ξk+1 ≤ µξk + νk for all k ≥ 0

(2.0.11)

hold and if lim νk = 0, then lim ξk = 0. k→∞

k→∞

Let C be a nonempty, closed and convex subset of H. Denote by dist(x, C) the Euclidean distance of a point x ∈ H to the set C, i.e., dist(x, C) := min{kx − zk | z ∈ C}.

(2.0.12)

It is well known that for a convex set C, the distance function dist(·, C) is convex. The following lemma is taken from [19, Lemma 3]. n Lemma 2.0.12. Let C ⊂ Rn be a closed subset and z ∈ / C. Let {z k }∞ k=0 ⊆ R be such k+1 k that limk→∞ kz − z k = 0 and both z and some point in C are cluster points of {z k }∞ k=0 . kj ∞ k ∞ Then there exist ς > 0 and a subsequence {z }j=0 of {z }k=0 such that   dist z kj+1 , C > dist z kj , C , (2.0.13)

and  dist z kj , C > ς.

(2.0.14)

6

Variational Inequalities Next we present several classes of operators.

Definition 2.0.13. Let H be a real Hilbert space. Let C ⊂ H be a subset of H and f : C → H be an operator from C to H. 1. f is called inverse strongly monotone (ISM) or (coercive) with constant α > 0 on C if hf (x) − f (y), x − yi ≥ αkf (x) − f (y)k2 for all x, y ∈ C.

(2.0.15)

Note that ISM is also known as the Dunn property [72, 136]. 2. f is called weakly co-coercive on C if there is a positive continuous function g(x, y) on C such that hf (x) − f (y), x − yi ≥ g(x, y)kf (x) − f (y)k2 for all x, y ∈ C.

(2.0.16)

3. f is called Lipschitz continuous with constant κ > 0 on C if kf (x) − f (y)k ≤ κkx − yk for all x, y ∈ C.

(2.0.17)

4. f is called nonexpansive on C if kf (x) − f (y)k ≤ kx − yk for all x, y ∈ C,

(2.0.18)

that is 1-Lipschitz. 5. f is called a strict contraction if it is Lipschitz continuous with constant κ < 1. 6. f is called firmly nonexpansive (FNE) [84] on C if hf (x) − f (y), x − yi ≥ kf (x) − f (y)k2 for all x, y ∈ C,

(2.0.19)

i.e., if it is 1-ISM. The FNE operators was first introduced by Browder [27, Definition 6] under the name firmly contractive operators.  ∞ 7. f is called demiclosed (closed in Rn ) at y ∈ Rn if for any sequence xk k=0 in C, we have:  xk * x ∈ C  (2.0.20) ⇒ (I − f )x = y.  (I − f )xk → y The Demiclosedness Principle [26] states that if f : C → H is a nonexpansive operator, then I − f is demiclosed at y ∈ H, in particular I − f is demiclosed at 0. 8. f is called monotone on C if hf (x) − f (y), x − yi ≥ 0 for all x, y ∈ C.

(2.0.21)

Algorithms for Solving Monotone Variational Inequalities and Applications

7

9. f is called psuedo-monotone on C if hf (y), x − yi ≥ 0 ⇒ hf (x), x − yi ≥ 0 for all x, y ∈ C.

(2.0.22)

10. f is called hemicontinuous if it is continuous along each line segment in C. 11. f is called asymptotically regular at x ∈ C [28] if lim (f k (x) − f k+1 (x)) = 0 for all x ∈ H,

k→∞

(2.0.23)

where f k denotes the k-th iterate of f . 12. f is called averaged [9] if there exists a nonexpansive operator N : C → H and a number c ∈ (0, 1) such that f = (1 − c)I + cN, (2.0.24) where I is the identity operator. In this case we also say that f is c-av [34]. 13. f is called odd if C is symmetric, i.e., C = −C, and if f (−x) = −f (x) for all x ∈ C.

(2.0.25)

14. We say that a nonexpansive operator f satisfies Condition (W) [73] if whenever k k k k k {xk − y k }∞ k=0 is bounded and kx − y k − kf (x ) − f (y )k → 0, it follows that ((x − y k ) − (f (xk ) − f (y k )) * 0. 15. The operator f is called strongly nonexpansive [29] if it is nonexpansive and whenk k k k ever {xk − y k }∞ k=0 is bounded and kx − y k − kf (x ) − f (y )k → 0, it follows that ((xk − y k ) − (f (xk ) − f (y k )) → 0. 16. Let N be the set of natural numbers, {f1 , f2 , . . .} be a sequence of operators, and r : N → N. An unrestricted (random) product of these operators is the sequence {Sn }n∈N defined by Sn = fr(n) fr(n−1) · · · fr(1) . Some of the relations between these classes of operators are given below. For more details, see Bruck and Reich [29], Baillon et al. [9], Goebel and Reich [84], and Byrne [34]. Remark 2.0.14. 1. It can be verified that if f is ν-ISM, then it is Lipschitz continuous with constant κ = 1/ν. 2. It is known that an operator f is averaged if and only if its complement I −f is ν-ISM for some ν > 1/2; see, e.g., [34, Lemma 2.1]. 3. The operator f is firmly nonexpansive if and only if its complement I − f is firmly nonexpansive. The operator f is firmly nonexpansive if and only if f is (1/2)-av (see [84, Proposition 11.2] and [34, Lemma 2.3]). 4. If f1 and f2 are c1 -av and c2 -av, respectively, then their composition S = f1 f2 is (c1 + c2 − c1 c2 )-av. See [34, Lemma 2.2].

8

Variational Inequalities 5. Every averaged operator is strongly nonexpansive and therefore satisfies Condition (W). For each point x ∈ H, there exists a unique nearest point in C, denoted by PC (x). That

is, kx − PC (x)k ≤ kx − yk for all y ∈ C.

(2.0.26)

The mapping PC : H → C is called the metric projection of H onto C. It is well known that PC is a nonexpansive mapping of H onto C; see, e.g., [93, Lemma 4.1]. The metric projection PC is characterized [84, Section 3] by the following two properties: PC (x) ∈ C

(2.0.27)

hx − PC (x) , PC (x) − yi ≥ 0 for all x ∈ H, y ∈ C,

(2.0.28)

and

and if C is a hyperplane, then (2.0.28) becomes an equality. It follows that kx − yk2 ≥ kx − PC (x)k2 + ky − PC (x)k2 for all x ∈ H, y ∈ C.

(2.0.29)

The next lemma was proved by Takahashi and Toyoda [149, Lemma 3.2]. In this connection, see also [133, Proposition 2.1]. Lemma 2.0.15. Let H be a real  Hilbert ∞ space and let C be a nonempty, closed and convex subset of H. Let the sequence xk k=0 ⊂ H be Fej´er-monotone with respect to C. Then   ∞ PC xk k=0 converges strongly to some z ∈ C. Next we present the concept of convergence of sets. Following [140], we denote by NCCS(Rn ) the family of all nonempty, closed and convex subsets of Rn . Definition 2.0.16. [4, Proposition 3.21] Let C and {Ck }∞ k=0 be a set and a sequence of sets in NCCS(Rn ), respectively. The sequence {Ck }∞ is said to epi-converge to the set k=0 epi C (denoted by Ck → C) if the following two conditions hold: (i) for every x ∈ C, there k k exists a sequence {xk }∞ k=0 such that x ∈ Ck for all k ≥ 0, and limk→∞ x = x. (ii) If xkj ∈ Ckj for all j ≥ 0, and limj→∞ xkj = x, then x ∈ C. Definition 2.0.17. Given C1 and C2 in NCCS(Rn ), the γ-distance between C1 and C2 , where γ ≥ 0, is defined by dγ (C1 , C2 ) := sup{kPC1 (x) − PC2 (x)k | kxk ≤ γ}.

(2.0.30)

n Proposition 2.0.18. Let C and {Ck }∞ k=0 be a set and a sequence of sets in NCCS(R ), respectively. The following statements are equivalent: (i) lim dγ (Ck , C) = 0 for all γ ≥ 0. (2.0.31) k→∞

(ii) epi

Ck → C.

(2.0.32)

Algorithms for Solving Monotone Variational Inequalities and Applications

9

Proof. See [140, 168]. The next proposition is [140, Proposition 7], but its Banach space variant already appears in [99, Proposition 7]. n Proposition 2.0.19. Let C and {Ck }∞ k=0 be a set and a sequence of sets in NCCS(R ), epi respectively. If Ck → C and limk→∞ xk = x, then

lim PCk (xk ) = PC (x).

k→∞

(2.0.33)

Definition 2.0.20. A function g : H → (−∞, +∞] is called (weak) lower semicontinuous if lim inf g(xk ) ≥ g(x) (2.0.34) k→∞

or (weak) upper semicontinuous if lim sup g(xk ) ≤ g(x)

(2.0.35)

k→∞

 ∞ for all sequences xk k=0 such that (xk * x) xk → x. Definition 2.0.21. A function g : H → (−∞, +∞] is called proper if it is not identically +∞. Definition 2.0.22. A function g : H → R is called convex if for any non-negative finite K K sequence {λk }K k=1 with Σk=1 λk = 1 and for every {xk }k=1 ⊂ H, we have ! K K X X λk g(xk ). (2.0.36) g λk xk ≤ k=1

k=1

A function g : H → R is called concave if the function −g is convex. Definition 2.0.23. Given a bifunction g : H × H → (−∞, +∞]. 1. g is called convex-concave if g(·, y) is convex and g(x, ·) is concave. 2. g is called monotone if g(x, y) + g(y, x) ≤ 0 for all x, y ∈ H.

(2.0.37)

Definition 2.0.24. Given an operator f : H → H. Denote by Fix(f ) the fixed point set of f , that is Fix(f ) := {x ∈ H | f (x) = x} . (2.0.38) Next we introduce classes of operators with properties with respect to their fixed point set. It is also known that every nonexpansive operator f : H → H satisfies, for all (x, y) ∈ H × H, the inequality h(x − f (x)) − (y − f (y)), f (y) − f (x)i ≤ (1/2)k(f (x) − x) − (f (y) − y)k2

(2.0.39)

and therefore we get, for all (x, q) ∈ H × Fix(f ), hx − f (x), y − f (x)i ≤ (1/2)kf (x) − xk2 ; see, e.g., [69, Theorem 3] and [68, Theorem 1].

(2.0.40)

10

Variational Inequalities

Lemma 2.0.25. [29, 34] If U : H → H an V : H → H are averaged operators and Fix (U ) ∩ Fix (V ) 6= ∅, then Fix (U ) ∩ Fix (V ) = Fix (U V ) = Fix (V U ) . Definition 2.0.26. Let H be a real Hilbert space. Let C ⊂ H be a subset of H and f : C → H be an operator from C to H. 1. Let α ≥ 0, f is called α-strongly quasi-nonexpansive (α-SQNE) if for all (x, q) ∈ H × Fix(f ) it holds kf (x) − qk2 ≤ kx − qk2 − α kx − f (x)k2 .

(2.0.41)

If α > 0 then we say that f is strongly quasi-nonexpansive (SQNE). 2. f is called quasi-nonexpansive if it is 0-SQNE, i.e., kf (x) − qk ≤ kx − qk for all (x, q) ∈ H × Fix(f ).

(2.0.42)

This class was denoted by Crombez [68, p. 161] by F 0 , the members of this class are also known as paracontracting operators. 3. f is called firmly quasi-nonexpansive (FQNE) if it is 1-SQNE. An important subset of F 0 , namely the T-class operators was introduced and investigated by Bauschke and Combettes [12], and by Combettes [62]. The operators in this class were named directed operators in Zaknoon [172] and further studied under this name by Segal [142] and by Censor and Segal [55, 56, 57, 58]. Cegielski [36, Def. 2.1] studied these operators under the name separating operators. Since both directed and separating are key words of other, widely-used, mathematical entities, Cegielski and Censor have recently introduced the term cutter operators [38], or, shortly, cutters. This class coincides with the class F ν for ν = 1 [68] and with the class DCp for p = −1 [117]. The term firmly quasinonexpansive (FQNE) for T-class operators was used by Yamada and Ogura [163, 162, Section B] and M˘aru¸ster [116]. This class of operators, is fundamental because it contains several types of operators commonly found in various areas of applied mathematics, such as the orthogonal projections, the subgradient projections and the resolvents of maximal monotone operators; see [12]. The formal definition of the T-class is as follows. Definition 2.0.27. An operator f : H → H is called cutter (f ∈ T) if Fix(f ) 6= ∅ and hf (x) − x, f (x) − qi ≤ 0 for all (x, q) ∈ H × Fix(f ).

(2.0.43)

If for x, y ∈ H we denote the half-space (x 6= y) H(x, y) := {u ∈ H | hu − y, x − yi ≤ 0} ,

(2.0.44)

then it is easy to see that f is cutter if and only if dom(f ) = H and Fix(f ) ⊂ H(x, f (x)) for all x ∈ H, where dom(f ) is the domain of f .

(2.0.45)

Algorithms for Solving Monotone Variational Inequalities and Applications

11

Figure 2.1: Property of cutters

This property is illustrated in Figure 2.1. Bauschke and Combettes [12] showed the following properties of cutters. (i) The set of all fixed points of a cutter operator with nonempty Fix(f ) is closed and convex because Fix(f ) = ∩x∈H H (x, f (x)) . (2.0.46) (ii) Let α ∈ [0, 1] and f ∈ T, then fα := (1 − α)I + αf ∈ T. Definition 2.0.28. Given a cutter operator f : H → H with Fix(f ) 6= ∅; for α ∈ [0, 2], fα is called an α-relaxed cutter. One can easily verify the following characterization of the α-relaxed cutter operator fα . α hfα (x) − x, q − xi ≥ kfα (x) − xk2 for all (x, q) ∈ H × Fix(f ),

(2.0.47)

which is equivalent to hfα (x) − x, fα (x) − qi ≤

α−1 kfα (x) − xk2 for all (x, q) ∈ H × Fix(f ). α

(2.0.48)

Theorem 2.0.29. Let α ∈ (0, 2]. An operator f : H → H with Fix(f ) 6= ∅ is an α-relaxed cutter if and only if f is ((2 − α) /α)-SQNE. See e.g., [69, Theorem 3.2(iii)]. Proof. See, e.g., [62, Proposition 2.3(ii)] and [37, Theorem 3.1.38]. A more general class of operators is the class of demicontractive operators (see, e.g., [117]). Definition 2.0.30. Let H be a real Hilbert space and let f : H → H be an operator.

12

Variational Inequalities

(i) f is called a demicontractive operator if there exists a number α ∈ [0, 1) such that kf (x) − qk2 ≤ kx − qk2 + α kx − f (x)k2 for all (x, q) ∈ H × Fix(f ).

(2.0.49)

This is equivalent to hx − f (x), x − qi ≥

1−α kx − f (x)k2 for all (x, q) ∈ H × Fix(f ). 2

(2.0.50)

Notation 2.0.1. Any closed and convex set C ⊂ H can be represented as C = {x ∈ H | g(x) ≤ 0} ,

(2.0.51)

where g : H → R is an appropriate convex function. Every convex set C can be described as in (2.0.51). For example we can take g(z) = dist(z, C); see, e.g., [92, Subsection 1.3(c)]. Definition 2.0.31. Let C ⊂ H. The indicator function of C at x is defined  0, if x ∈ C, IC (x) := (2.0.52) ∞, otherwise. Given a function g : H → (−∞, ∞] with dom(g), its (Fenchel-Moreau) subdifferential is defined as the mapping ∂g, where   {ξ ∈ H | g(y) ≥ g(x) + hξ, y − xi for all y ∈ H}, if x ∈ dom(g), ∂g(x) := (2.0.53)  ∅, if x ∈ / dom(g). In case that g is continuously differentiable then ∂g(x) = {∇g(x)}, this is the gradient of g. In addition, it is known that for any x ∈ H, the subgradient ∂g(x) of a convex function g : H → R is nonempty, compact and convex. For z ∈ H, take any ξ ∈ ∂g(z) and define ([11, Lemma 7.3])   {w ∈ H | g(z) + hξ, w − zi ≤ 0} , if ξ 6= 0, T (z) := (2.0.54)  H, if ξ = 0. Next we present two examples, besides the orthogonal projection, of cutters such that their complements are closed at 0. Example 2.0.32. Let g : Rn → R be a convex function such that the sublevel set g≤0 := {x ∈ Rn | g(x) ≤ 0} is nonempty. Define the operator   y − g(y) ξ, if g(y) > 0, Pg≤0 (y) := (2.0.55) kξk2  y, if g(y) ≤ 0, where ξ ∈ ∂g(y). The operator Pg≤0 is called a subgradient projector relative to g.

Algorithms for Solving Monotone Variational Inequalities and Applications

13

Lemma 2.0.33. Let g : Rn → R be a convex function and let z ∈ Rn . Assume that the sublevel set g≤0 6= ∅ and take any ξ ∈ ∂g(z). Then the following statements hold: 1. g≤0 ⊆ T (z). If ξ 6= 0, then T (z) is a half-space, otherwise T (z) = Rn . 2. Denoting by PT (z) (z) the orthogonal projection of z onto T (z), we have PT (z) (z) = Pg≤0 (z).

(2.0.56)

3. PT (z) − I is closed at 0. Proof. See [11, Lemma 7.3], [56, Lemma 2.4] and [37, Lemma 4.2.5, Corollary 4.2.6]. Next we present another example of a cutter operator whose complement is closed at 0. This class of operators was introduced by Aharoni et al. in [1] for solving the convex feasibility problem. Later Gibali [82] and Censor and Gibali [47] used them for solving variational inequalities. Let C ⊂ Rn be a nonempty, closed and convex set. Assume that C can be represented as a sublevel set of some convex function g : Rn → R as in (2.0.51). Given a point z ∈ Rn and a real number δ ∈ (0, 1], we define for z ∈ / C the ball B(z, δg(z)) := {x ∈ Rn | kx − zk ≤ δg(z)} .

(2.0.57)

Define Aδ (g(z)) := {(x, y) ∈ Rn × Rn | C ⊆ H(x, y) and int (B(z, δg(z))) ∩ H(x, y) = ∅} . (2.0.58) We also need to impose the following condition. Condition 2.0.2. Given a set C ⊂ Rn , described as in (2.0.51), we assume that for every z∈ /C B(z, δg(z)) ∩ C = ∅. (2.0.59) This condition always holds for g(z) = dist(z, C). Example 2.0.34. Given a nonempty, closed and convex subset C ⊂ Rn with the representation (2.0.51) and a real number δ ∈ (0, 1], such that Condition 2.0.2 holds, we define the operator TC,δ for any z ∈ Rn by  PH(x,y) (z), if z ∈ / C, TC,δ (z) := (2.0.60) z, if z ∈ C, where H(x, y) is any selection from Aδ (g(z)), and call it a C-δ operator. The fact that any C-δ operator is a cutter operator follows from its definition and the closedness of TC,δ − I at 0 follows from [56, Lemma 2.7]. It is also true that the subgradient projector Pg≤0 is a TC,δ operator; see [56, Lemma 2.8].

14

Variational Inequalities

Notation 2.0.2. Let C be a nonempty, closed and convex subset of Rn . For r ≥ 0 we define the sets C r := {u ∈ Rn | dist(u, C) ≥ r} , (2.0.61) and Cr := {u ∈ Rn | dist(u, C) < r}.

(2.0.62)

Since for a convex set C, the function dist(·, C) is convex, Cr is closed and convex as a sublevel set of a convex function. By the continuity of dist(·, C), the set C r is closed. Next we recall the definition of a quasi-shrinking operator ; see [161]. Definition 2.0.35. Let C ⊂ Rn be a closed and convex subset and let the quasi-nonexpansive operator f : Rn → Rn be such that Fix(f )∩C 6= ∅. The operator f is called quasi-shrinking on C if for any r ∈ [0, ∞), the function   inf {dist(x, Fix(f )) − dist(f (x), Fix(f )) | x ∈ (Fix(f ))r ∩ C} if (Fix(f ))r ∩ C 6= ∅, g(r) =  ∞, otherwise (2.0.63) satisfies g(r) = 0 ⇔ r = 0. Since dist(x, Fix(f )) = kx−PFix(f ) (x) k and dist(f (x), Fix(f )) = kf (x)−PFix(f ) (f (x)) k, if we denote by R := PFix(f ) we obtain the following equivalent definition. Definition 2.0.36. Let f and C be as in Definition 2.0.35. Then f is quasi-shrinking on C if for any {xk }∞ k=0 ⊂ C, the following implication holds:      lim kxk − R xk k − kf xk − Rf xk k = 0 ⇒ lim kxk − R xk k = 0. (2.0.64) k→∞

k→∞

The next proposition is inspired by [143, Lemma 1]. Proposition 2.0.37. Let C be a closed and convex (not necessarily bounded) subset of Rn and f : Rn → Rn be quasi-nonexpansive and such that Fix(f ) ∩ C 6= ∅. Then the function g : [0, ∞) → [0, ∞) defined by (2.0.63) has the following properties: (i) g(0) = 0 and g(r) ≥ 0 for all r ≥ 0, (ii) if r1 ≥ r2 > 0 then g(r1 ) ≥ g(r2 ), (iii) g(dist(x, Fix(f ))) ≤ kx − f (x)k for all x ∈ C. Proof. (i) Let r ≥ 0. We prove that g(r) ≥ 0. The inequality is clear if (Fix(f ))r ∩ C = ∅. Now suppose that (Fix(f ))r ∩ C 6= ∅. Then the definition of the metric projection and the quasi-nonexpansivity of f yield, for any x ∈ (Fix(f ))r ∩ C, kx − R(x)k − kf (x) − Rf (x)k ≥ kx − R(x)k − kf (x) − R(x)k ≥ 0.

(2.0.65)

Consequently, g(r) ≥ 0. Let x ∈ Fix(f ) ∩ C. By the quasi-nonexpansivity of f we have f (x) = x and g(0) ≤ kx − R(x)k − kf (x) − Rf (x)k = 0, (2.0.66)

Algorithms for Solving Monotone Variational Inequalities and Applications

15

which together with the first part proves that g(0) = 0. (ii) Let r1 ≥ r1 ≥ 0. Then, of course, (Fix(f ))r2 ⊆ (Fix(f ))r1 and the property is clear. (iii) Let x ∈ C and r = dist(x, Fix(f )). If r = 0 then f (x) = x and, by (i), the statement is obvious. Let r > 0. Then, of course, x ∈ (Fix(f ))r ∩ C and, by the definition of the metric projection and the triangle inequality, we have g(r) ≤ kx − R(x)k − kf (x) − Rf (x)k ≤ kx − Rf (x)k − kf (x) − Rf (x)k ≤ kx − f (x)k . (2.0.67) The proof is complete. We now prove that Definitions 2.0.35 and Definition 2.0.36 are equivalent. Proposition 2.0.38. Let C ⊂ Rn be a closed and convex subset and let a quasi-nonexpansive operator f : Rn → Rn be such that Fix(f ) ∩ C 6= ∅. Then Definitions 2.0.35 and 2.0.36 are equivalent. Proof. Let f be quasi-shrinking in the sense of Definition 2.0.35 and choose {xk }∞ k=0 ⊂ C. k k Suppose that limk→∞ kx − R x k 6= 0. Then there exists a constant ε > 0 and a kj k ∞ kj k > ε. We have subsequence {xkj }∞ j=0 ⊂ {x }k=0 such that kx − R x    inf (kxkj − R xkj k − kf xkj − Rf xkj k)

j≥0



inf

x∈(Fix(f ))ε ∩C

(kx − R (x) k − kf (x) − Rf (x) k)

= g(ε) > 0.

(2.0.68)

    Consequently, limk→∞ kxk − R xk k − kf xk − Rf xk k 6= 0 if it exists. On the other hand, let f be quasi-shrinking in the sense of Definition 2.0.36. Suppose r that g(r) = 0 for some r ≥ 0. Then there is a sequence {xk }∞ k=0 ⊂ (Fix(f )) ∩ C such that    lim (kxk − R xk k − kf xk − Rf xk k) = 0. (2.0.69) k→∞

So, by Definition 2.0.36, we have  r ≤ lim dist(xk , Fix(f )) = lim kxk − R xk k = 0, k→∞

k→∞

(2.0.70)

i.e., r = 0 and the proof is complete. Proposition 2.0.39. Let C ⊂ Rn be closed, bounded and convex and let the operator f : Rn → Rn with Fix(f ) ∩ C 6= ∅ be given. If f is SQNE (equivalently, an α-relaxed cutter for some α ∈ (0, 2)) and I − f is closed at 0, then f is quasi-shrinking on C. Proof. Denote R := PFix(f ) . Let r ≥ 0 and g(r) = 0 , i.e., there is a sequence {xk }∞ k=0 ⊂ r (Fix(f )) ∩ C such that lim (kxk − R(xk )k − kf (xk ) − Rf (xk )k) = 0.

k→∞

(2.0.71)

16

Variational Inequalities

By the quasi-nonexpansivity of f , the definition of the metric projection and by (2.0.71), we have 0 ≤ kxk − R(xk )k − kf (xk ) − R(xk )k ≤ kxk − R(xk )k − kf (xk ) − Rf (xk )k → 0.

(2.0.72)

0 ≤ kxk − R(xk )k − kf (xk ) − R(xk )k → 0.

(2.0.73)

Consequently, Since f is SQNE, there is α > 0 such that kf (xk ) − R(xk )k2 ≤ kxk − R(xk )k2 − αkf (xk ) − xk k2 .

(2.0.74)

k Let z ∈ Fix(f ). By the boundedness of {xk }∞ k=0 , there exists d > 0 such that kx − zk ≤ d for all k ≥ 0. Following the definition of the metric projection and the quasi-nonexpansivity of f , we obtain

kxk − R(xk )k + kf (xk ) − R(xk )k ≤ kxk − zk + kxk − R(xk )k ≤ 2kxk − zk ≤ 2d.

(2.0.75)

By (2.0.73), (4.3.24) and (2.0.75), we have 1 (kxk − R(xk )k2 − kf (xk ) − R(xk )k2 ) α 1 = (kxk − R(xk )k − kf (xk ) − R(xk )k)(kxk − R(xk )k + kf (xk ) − R(xk )k) α 2d ≤ (kxk − R(xk )k − kf (xk ) − R(xk )k) → 0. (2.0.76) α

kf (xk ) − xk k2 ≤

Consequently, lim kf (xk ) − xk k = 0.

k→∞

(2.0.77)

k ∞ kj ∞ Since {xk }∞ k=0 is bounded, there exists a subsequence {x }j=0 of {x }k=0 such that

lim xkj = x∗ .

j→∞

(2.0.78)

The closedness of I − f yields x∗ ∈ Fix(f ) and lim kxkj − R(xkj )k ≤ lim kxkj − x∗ k = 0.

j→∞

j→∞

(2.0.79)

Consequently, r=

inf

x∈(Fix(f ))r ∩C

dist(x, Fix(f )) ≤ lim kxkj − x∗ k = 0, j→∞

(2.0.80)

i.e., r = 0, which proves that f is quasi-shrinking. Remark 2.0.40. The converse to Proposition 2.0.39 is not true. To see this take C = {x ∈ Rn | ha, xi ≤ β} for some a 6= 0, β ∈ R and f = 2PC − I. Then f is quasi-shrinking but not SQNE.

Algorithms for Solving Monotone Variational Inequalities and Applications

17

The next lemma is quoted from [161, Lemma 1]. Lemma 2.0.41. Let g : [0, ∞) → [0, ∞) be a nondecreasing function such that g(r) = 0 ⇔ r = 0. Let {bk }∞ k=0 ⊂ [0, ∞) be such that lim bk = 0.

k→∞

(2.0.81)

Then any sequence {ak }∞ k=0 ⊂ [0, ∞) satisfying ak+1 ≤ ak − g(ak ) + bk+1

(2.0.82)

converges to 0. Next we present two known theorems, the Krasnosel’ski˘ı-Mann-Opial theorem [109, 113, 128] and the Halpern-Suzuki theorem [87, 147]. Theorem 2.0.42. [109, 113, 128] Let H be a real Hilbert space and C ⊂ H be a nonempty, closed and convex subset of H. Given an averaged operator f : C → C with Fix(f ) 6= ∅ and an arbitrary x0 ∈ C, the sequence generated by the recursion xk+1 = f (xk ), k ≥ 0, converges weakly to a point z ∈ Fix(f ). Remark 2.0.43. The convergence obtained in Theorem 2.0.42 is not strong in general [81, 17]. Theorem 2.0.44. [87, 147] Let H be a real Hilbert space and C ⊂ H be a closed and convex subset of H. Given an averaged operator f : C → C, and a sequence {αk }∞ k=0 ⊂ [0, 1] ∞ P 0 satisfying limk→∞ αk = 0 and αk = ∞, the sequence {xk }∞ k=0 generated by x ∈ C and k=0

xk+1 = αk x0 + (1 − αk )f (xk ), k ≥ 0, converges strongly to a point z ∈ Fix(f ). Now we discuss some properties of a mapping M : H → 2H . Definition 2.0.45. Let M : H → 2H . 1. The domain of a mapping M is the set dom(M ) := {x ∈ H | M (x) 6= ∅} .

(2.0.83)

2. The range of a mapping M is the set range(M ) = {u ∈ M (x) | x ∈ dom(M )} .

(2.0.84)

M (−x) = −M (x) for all x ∈ H.

(2.0.85)

hu − v, x − yi ≥ 0 for all u ∈ M (x), v ∈ M (y).

(2.0.86)

3. M is called odd if 4. M is called monotone if

18

Variational Inequalities 5. M is called psuedo-monotone if hv, x − yi ≥ 0 ⇒ hu, x − yi ≥ 0 for all u ∈ M (x), v ∈ M (y).

(2.0.87)

6. M is is called paramonotone, if it is monotone and whenever hu − v, x − yi = 0, u ∈ M (x), v ∈ M (y) it holds that u ∈ M (y), v ∈ M (x). 7. M is called a maximal monotone if and only if M is monotone, and the graph G(M ) of M, G(M ) := {(x, u) ∈ H × H | u ∈ M (x)} , (2.0.88) is not properly contained in the graph of any other monotone mapping. Remark 2.0.46. It is clear ([139, Theorem 3]) that a monotone mapping M is maximal if and only if, for any (x, u) ∈ H × H, if hu − v, x − yi ≥ 0 for all (v, y) ∈ G(M ), then it follows that u ∈ M (x). Remark 2.0.47. Let g : H → R ∪ {∞} be proper, lower semicontinuous and convex function. The subdifferential mapping, ∂g : H → 2H is a maximal monotone mapping. In addition, any c-av (averaged) operator f : H → H with c ∈ (0, 1/2] (in particular, firmly nonexpansive) is maximal monotone (see, e.g., [14, Example 20.27]). 8. The resolvent of M with parameter λ > 0 is denoted and defined by JλM := (I + λM )−1 . Remark 2.0.48. It is well known that for λ > 0, 1. M is monotone if and only if the resolvent JλM of M is single-valued and firmly nonexpansive. 2. M is maximal monotone if and only if JλM is single-valued, firmly nonexpansive and dom(JλM ) = H. 3. The following equivalence holds: 0 ∈ M (x∗ ) ⇔ x∗ ∈ Fix(JλM ).

(2.0.89)

4. Let C ⊆ H be a nonempty, closed and convex. The normal cone of a set C is the mapping NC : H → 2H defined at x as   {d ∈ H | hd, y − xi ≤ 0 for all y ∈ C}, if x ∈ C, NC (x) := (2.0.90)  ∅, otherwise. Observe that ∂IC = NC and the projection onto C is precisely the resolvent of the normal cone mapping, i.e., PC = JλNC . (2.0.91) In addition, it is known that NC is maximal monotone.

Algorithms for Solving Monotone Variational Inequalities and Applications

19

Now we present another known result; see, e.g., [120, Fact 2]. Remark 2.0.49. Let H be a real Hilbert space, and let a maximal monotone mapping M : H → 2H and an α-ISM operator f : H → H be given. Then the operator JλM (I − λf ) is averaged for each λ ∈ (0, 2α). Next we present a simple geometrical property which will be essential in the sequel. Claim 2.0.1. Given two points x and y in H, consider the half-space H(x, y) (see (2.0.44)). Denote by yλ := λx + (1 − λ)y for any λ ∈ [0, 1]. Then H(x, y) ⊆ H(x, yλ ) =: Hλ . Proof. Let z ∈ H(x, y). In order to show that z ∈ Hλ , λ ∈ [0, 1], we need to check that hx − yλ , z − yλ i ≤ 0. We have hx − yλ , z − yλ i = hx − (λx + (1 − λ) y) , z − (λx + (1 − λ) y)i = h(1 − λ) x − (1 − λ) y, (λz + (1 − λ) z) − (λx + (1 − λ) y)i = (1 − λ) hx − y, (1 − λ) (z − y) + λ (z − x)i = (1 − λ)2 hx − y, z − yi + (1 − λ) λ hx − y, z − xi = (1 − λ)2 hx − y, z − yi + (1 − λ) λ hx − y, y − xi + (1 − λ) λ hx − y, z − yi = (1 − λ) hx − y, z − yi − (1 − λ) λ kx − yk2 ≤ (1 − λ) hx − y, z − yi .

(2.0.92)

Since z ∈ H(x, y), we know that hx − y, z − yi ≤ 0. Hence z ∈ Hλ for any λ ∈ [0, 1], as claimed.

2.1

The variational inequality problem and other related problems

Let H be a real Hilbert space and C ⊂ H be a nonempty, closed and convex subset. Given an operator f : H → H, the Variational Inequality Problem (VIP) consists of finding a point x∗ ∈ C such that hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C. (2.1.1) This problem, denoted by VIP(f, C), is a fundamental problem in Variational Analysis and, in particular, in Optimization Theory. We denote by Sol(f, C) the solution set of the VIP(f, C). Now, following Rockafellar [139, Theorem 3], define the maximal monotone extension  f (w) + NC (w) , w ∈ C, M (w) := (2.1.2) ∅, w∈ / C. So in these circumstances, if f is a hemicontinues operator on C, then M is maximal monotone and 0 ∈ M (w) if and only if w ∈ Sol(f, C). In the next lemma we present an important property of the composed operator PC (I − λf ) that will be needed in the sequel (cf. [118]).

20

Variational Inequalities

Lemma 2.1.1. Let H be a real Hilbert space and let C ⊂ H be a nonempty, closed and convex subset. Let f : H → H be a β-ISM operator on H. If λ ∈ (0, 2β), then the operator PC (I − λf ) is averaged. Proof. We first prove that the operator I − λf is averaged. More precisely, we claim that if f is β-ISM, then the operator I −λf is averaged for λ ∈ (0, 2β). Indeed, take c ∈ (0, 1) such that c ≥ λ/(2β) and set N := I − λc f. Then I − λf = (1 − c)I + cN and N is nonexpansive: kx − yk2 − kN (x) − N (y)k2 λ = kx − yk − kx − yk − 2 hf (x) − f (y), x − yi + c  2 λ λ kf (x) − f (y)k2 = 2 hf (x) − f (y), x − yi − c c   λ λ 2 2 ≥ 2βkf (x) − f (y)k − kf (x) − f (y)k c c   λ λ = 2β − kf (x) − f (y)k2 ≥ 0. c c 2

2

!  2 λ 2 kf (x) − f (y)k c

(2.1.3)

Now, since the metric projection PC is averaged (see, e.g., [84, page 17]), so is the composition PC (I − λf ) (see Remark 2.0.14(4)). From this lemma, we deduce the following consequence. Lemma 2.1.2. Let C ⊂ H be a nonempty, closed and convex subset and let f : H → H be an β-ISM operator on H. If λ ∈ [0, 2β], then the operator PC (I − λf ) is nonexpansive on C. Based on the well-known result due to Eaves [74] we get another connection between the solution set of a variational inequality problem, Sol(f, C), and the fixed point set of the operator PC (I − λf ). That is, for any λ > 0, Sol(f, C) = Fix (PC (I − λf )) .

(2.1.4)

x∗ ∈ Fix (PC (I − λf )) ⇔ PC (x∗ − λf (x∗ )) = x∗

(2.1.5)

Indeed, and by the characterization of the metric projection (2.0.28) we have for all x ∈ C and λ > 0, 0 ≤ hx∗ − λf (x∗ ) − PC (x∗ − λf (x∗ )) , PC (x∗ − λf (x∗ )) − xi = hx∗ − λf (x∗ ) − x∗ , x∗ − xi = h−λf (x∗ ), x∗ − xi = λ hf (x∗ ), x − x∗ i .

(2.1.6)

Following this relation many iterative algorithms for solving the VIP(f, C) were developed; see, e.g., Auslender [5].

Algorithms for Solving Monotone Variational Inequalities and Applications

21

Remark 2.1.3. As mentioned before, the metric projection operator PC coincides with the resolvent of the normal cone, that is JλNC . But the composed operator PC (I − λf ) need not be a resolvent of a monotone mapping. Another useful result is the following lemma (see, e.g., [77, Proposition 1.5.9]). Lemma 2.1.4. Let H be a real Hilbert space and let C ⊂ H be nonempty, closed and convex subset. Let f : C → H. A point x belongs to Sol(f, C) if and only if there exists a point z such that x = PC (z) and f (PC (z)) + z − PC (z) = 0. Proof. If x belongs to Sol(f, C) then by (2.1.4) x = PC (x − f (x)). Denote by z = x − f (x) we get that x = PC (z) and f (PC (z)) + z − PC (z) = 0. On the other hand, if there exists a point z such that x = PC (z) and f (PC (z)) + z − PC (z) = 0, then z = x − f (x) and x = PC (x − f (x)),

(2.1.7)

and again by (2.1.4) we get that x belongs to Sol(f, C). Following Lemma 2.1.4, we get that Sol(f, C) = Fix (PC (I − λf (PC (I − λf )))) .

(2.1.8)

By translating this relation to iterative method for solving the VIP(f, C), we get the wellknown extragradient method of Korpelevich [105]. The operator f ◦ PC + I − PC is known as the normal operator and following Lemma 2.1.4 we see that Sol(f, C) = Fix (PC − f ◦ PC ) .

(2.1.9)

For more details see [23, Chapter 8] and [77, Chapter 1]. Now we present the relation of a VIP(f, C) with the set of zeros of the operator f [51]. Lemma 2.1.5. Let H be a real Hilbert space and let C ⊂ H be nonempty, closed and convex subset. Let f : H → H be an β-ISM operator. If C ∩ {x ∈ H | f (x) = 0} = 6 ∅, then ∗ ∗ x ∈ Sol(f, C) if and only if f (x ) = 0. Proof. It is clear that if that x∗ ∈ C and f (x∗ ) = 0, then x∗ ∈ Sol(f, C). In the other direction, assume that x∗ ∈ Sol(f, C). Applying (2.0.29), with (I − λf ) (x∗ ) ∈ H, for any λ ∈ (0, 2β], as x there, and q ∈ C ∩ Fix(I − λf ), with the same λ, as y there, we get kq − PC (I − λf ) (x∗ )k2 + k(I − λf ) (x∗ ) − PC (I − λf ) (x∗ )k2 ≤ k(I − λf ) (x∗ ) − qk2 .

(2.1.10)

Using the characterization of (2.1.4), we get kq − x∗ k2 + k(I − λf ) (x∗ ) − x∗ k2 ≤ k(I − λf ) (x∗ ) − qk2 .

(2.1.11)

By Lemma 2.1.2, the operator I − λf is nonexpansive for every λ ∈ [0, 2β], so with q ∈ C ∩ Fix(I − λf ), k(I − λf ) (x∗ ) − qk2 ≤ kx∗ − qk2 . (2.1.12) Combining the above inequalities, we obtain kq − x∗ k2 + k(I − λf ) (x∗ ) − x∗ k2 ≤ kx∗ − qk2 . Hence, k(I − λf ) (x∗ ) − x∗ k2 = 0. Since λ > 0, we get that f (x∗ ) = 0, as claimed.

(2.1.13)

22

Variational Inequalities

Next we recall the Equilibrium Problem (EP) and reformulate the VIP(f, C) in this terms. Problem 2.1.1. Let H be a real Hilbert space and let C ⊂ H be nonempty, closed and convex subset. Given a bifunction g : C × C → R such that 1. g(x, x) ≥ 0 for all x ∈ C. 2. g(x, ·) : C → R is convex and lower semicontinues for all x ∈ C. 3. g(·, y) : C → R is convex and upper semicontinues for all y ∈ C. The Equilibrium Problem, denoted by EP(g, C), consists of finding a point x∗ ∈ C such that g(x∗ , y) ≥ 0 for all y ∈ C. (2.1.14) So by taking g(x, y) = hf (x), y − xi, where g satisfies the above condition, we have EP(g, C) = VIP(f, C). Another known fact is that EP(g, C) is equivalent to finding a fixed point of the associated resolvent   1 Jλ (x) := w ∈ C | g(w, y) + hy − w, w − xi ≥ 0 for all y ∈ C . (2.1.15) λ Now consider the set-valued Variational Inequality Problem (VIP). Let C ⊂ H be a nonempty, closed and convex subset and given a mapping M : H → 2H . The VIP(M, C) consists in finding a point x∗ ∈ C such that there exists u∗ ∈ M (x∗ ) satisfying hu∗ , x − u∗ i ≥ 0 for all x ∈ C.

(2.1.16)

In next two lemmas we collect serval properties of maximal monotone and paramonotone mappings in Euclidean space. n

Lemma 2.1.6. Let M : Rn → 2R be a maximal monotone mapping. Then 1. M is locally bounded at any point in the interior of its domain. 2. G(M ) is closed. 3. M is bounded on bounded subsets of the interior of its domain. 4. If Sol(M, C) is nonempty, then it is closed and convex. Proof.

1. See [30, Theorem 4.6.1(ii)].

2. See [30, Theorem 4.2.1(ii)]. 3. Follows easily from (i). 4. See [18, Lemma 2.4(ii)].

Algorithms for Solving Monotone Variational Inequalities and Applications

23

The following lemma is taken from [19, Lemma 6]. n

Lemma 2.1.7. Let M : Rn → 2R be paramonotone and maximal monotone mapping. Let k ∞ { y k , uk }∞ k=0 ⊂ G(M ) be a bounded sequence such that all cluster points of {y }k=0 belong to C. Define the function γk : Sol(M, C) → R, as γk (x) := huk , y k − xi.

(2.1.17)

∞ If for some x ∈ Sol(M, C) there exists a subsequence {γkj (x)}∞ j=0 of {γk (x)}k=0 such that limj→∞ γkj (x) ≤ 0, then there exists a cluster point of {y kj }∞ j=0 belong to Sol(M, C).

Now we consider the problem of finding a zero of a maximal monotone mapping M : H → 2H ; that is: find a point x∗ ∈ H such that 0 ∈ M (x∗ ). (2.1.18) Many algorithms were developed for solving this problem. As we saw, minimizers of a convex, proper and lower semicontinuous function g are zeros of the maximal monotone mapping ∂g and solutions of VIPs are zeros of the maximal monotone extension (given by (2.1.2)). In addition, other problems can be formulated as zeros of an appropriate maximal monotone mappings, e.g., saddle-point problems [138], equilibrium problems [114] and others. By extending Lemma 2.1.5 from operators to mappings, we see that the setvalued variational inequality problem covers, among others, the problem of finding zeros of a maximal monotone mapping.

Chapter 3 Classical Variational Inequality Problems 3.1

Extensions of the Korpelevich extragradient algorithm

In this section we are concerned with the Variational Inequality Problem (VIP) in Euclidean space Rn . Given an operator f : Rn → Rn and a nonempty, closed and convex subset C ⊂ Rn . The VIP consists of finding a point x∗ such that x∗ ∈ C and hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C,

(3.1.1)

where h·, ·i denotes the inner product in Rn . This problem, denoted by VIP(f, C), which is a fundamental problem in Optimization Theory, was introduced by Hartman and Stampacchia in [88]. It was well-studied in the last decades, in particular, algorithmic approaches were comprehensively investigated, see, e.g., the book of Kinderlehrer and Stampacchia [106], the books by Konnov [104], Patriksson [130] the treatise of Facchinei and Pang [77], Auslender and Teboulle [7] and the review papers by Noor [127] and by Xiu and Zhang [155]. Many algorithms for solving the VIP are projection algorithms that employ projections onto the feasible set C of the VIP, or onto some related set, in order to iteratively reach a solution. In this work we study some of these algorithms, whose common idea is to employ projections of different types in order to generate a sequence of iterates that converges to a solution. Let us recall various iterative algorithms for solving the VIP. The book of Rockafellar [137], Hiriart-Urruty and Lemar´echal [91, 92] and Pang and Facchinei [77] serve as an excellent references for results on convex analysis. In order to present the classical iterative scheme, we recall the projected gradient method for constrained optimization, originally proposed by Goldstein [85] and Levitin and Polyak [110]. We are given a nonempty, closed and convex set C, and a continuously differentiable function g : Rn → R on C. Given the current iterate xk , calculate the next iterate xk+1 as follows. xk+1 = PC (xk − τk ∇g(x)),

(3.1.2)

where τk is a positive real number, see, e.g.; Bertsekas and Tsitsiklis [20, p. 212]. So, the relationship between the VIP(∇g, C) and the problem of minimizing g over C (Example 24

Algorithms for Solving Monotone Variational Inequalities and Applications

25

1.0.1) led to study the following method for solving the VIP(f, C) xk+1 = PC (xk − τk f (xk )).

(3.1.3)

The iterative step is illustrated in Figure 3.1. Convergence properties of such algorithms

Figure 3.1: The classical projection method for the VIP have been studied by a number of authors. In particular, Dafermos [70] shows that, if f is continuously differentiable and strongly monotone on C then the sequence {xk }∞ k=0 , generated by (3.1.3), when τk = τ > 0 is a constant for all k, is a globally converges to the unique solution of (3.1.1). Auslender [5] establishes global convergence of (3.1.3) with the assumptions that f is bounded and strongly monotone on C and τk = ρk /kf (xk )k, k = 0, 1, . . . ,

(3.1.4)

where {ρk }∞ k=0 is a sequence of positive numbers such that ∞ X k=1

ρk = ∞ and

∞ X

ρ2k < ∞.

(3.1.5)

k=0

If we remove the strong monotonicity assumption the situation becomes more complicated, and quite different from the case of convex optimization. In order to deal with this situation, Korpelevich proposed in [105] the extragradient method (see also Facchinei and Pang [77, Chapter 12]). In her method, in each iteration, in order to get the next iterate xk+1 , two orthogonal projections onto C are calculated, according to the following iterative step. Given the current iterate xk , calculate y k = PC (xk − τ f (xk )),

(3.1.6)

26

Variational Inequalities

xk+1 = PC (xk − τ f (y k )), (3.1.7) where τ is some positive number. Figure 3.2 illustrates the iterative step (3.1.6) and (3.1.7).

Figure 3.2: Korpelevich’s iterative step. The extragradient method takes its name from the extra evaluation of f (and the extra projection) that is called for in each iteration. The extra evaluation of f corresponds to an extra evaluation of the gradient, when solving variational inequality with a gradient of some function. If f is L-Lipschitz continuous and monotone on C, and Sol(f, C) is nonempty, then any sequence generated by (3.1.6)–(3.1.7), converges to a point in Sol(f, C) provided that the τ ∈ (0, 1/L). The iterative step (3.1.6)–(3.1.7) can be combined into the following form. xk+1 = PC (xk − τ f (PC (xk − τ f (xk )))). (3.1.8) Following Lemma 2.1.4, we see that if f is Lipschitz continues with constant L > 0 and τ ∈ (0, 1/L) then Sol(f, C) = Fix (PC (I − λf (PC (I − λf )))) . (3.1.9) Later on the monotonicity assumption was weakened to pseudo-monotonicity and τ was replaced by some sequence {τk }∞ k=0 . The literature on the VIP is vast and Korpelevich’s extragradient method has received great attention by many authors, who improved it in various ways; see, e.g., [95, 101, 144] and references therein, to name but a few. So, although convergence was proved in [105] under the assumptions of Lipschitz continuity and pseudo-monotonicity, there is still the need to calculate two projections onto the closed and convex set C. Projection methods are particularly useful, if the set C is simple enough that iterations of the form (3.1.3) are easily executed. If C is a general closed and convex subset, one has to solve the minimum norm problem min{kx − (xk − τk f (xk ))k | for all x ∈ C},

(3.1.10)

Algorithms for Solving Monotone Variational Inequalities and Applications

27

in order to obtain xk+1 by (3.1.3). In this case the efficiency of the projection methods might be seriously affected by the need to solve the optimization problem (3.1.10) at each iterative step. So, to overcome this problem Fukushima [80] proposed a method of projecting onto the subgradient half-spaces, containing the original set C. Under several assumptions, he proved that the sequence generated by such an algorithm converges to the unique solution of (3.1.1). The idea of Fukushima’s method is as follows. Suppose the set C is presented as a sublevel set of a function c (see 2.0.51). Let {τk }∞ k=0 be a sequence of positive numbers satisfying ∞ X lim τk = 0 and τk = ∞, (3.1.11) k→∞

k=0

and let the matrix A be a symmetric positive definite matrix. Algorithm 3.1.1. Fukushima’s method Step 0: Select an arbitrary starting point x0 and set k = 0. Step 1: Given the current iterate xk , choose ξ k ∈ ∂c(xk ) and construct the set ((2.0.54)) Tk := T (xk ) = {x ∈ Rn | c(xk ) + hξ k , x − xk i ≤ 0}.

(3.1.12)

Step 2: Let z k = xk − τk A−1 f (xk )/kf (xk )k, the next iterate xk+1 is the projection of z k onto Tk with respect to the A-norm, namely, xk+1 = PTk (z k ).

(3.1.13)

Step 3: If xk+1 = xk then stop. Otherwise, set k ← k + 1 and return to Step 1. The iterative step of this algorithm is illustrated in Figure 3.3. Yang [167] established global convergence of the sequence generated by [80] algorithm under even weaker conditions then those of [80]. Instead of assuming f to be strongly monotone on C, Yang assumed that f is weakly co-coercive. Iusem and Svaiter [95] and later on Solodov and Svaiter [144] proposed methods in which one of the orthogonal projection onto C is replaced by a projection onto a constrainable half-space. In addition they implement a backtracking search, sometimes called Armijotype search, see, Armijo [3] in order to evaluate the Lipschitz constant. The idea is to find an appropriate τk , starting with some τ k and reducing it until and τk is found such that y k given by (3.1.6) satisfies hf (y k ), xk − y k i > 0, or some related inequality. One of the methods with these characteristics is the modified Korpelevich method, see e.g., [101, 115, 150]. Given xk , the next iterate xk+1 is calculate by y k = PC (xk − γk f (xk )),

(3.1.14)

xk+1 = PC (xk − λk f (y k )).

(3.1.15)

In this method γk is found through a backtracking procedure, similar to the search of the above τk and λk satisfies λk = hf (y k ), y k − xk i/kf (y k )k2 . (3.1.16)

28

Variational Inequalities

Figure 3.3: Fukushima’s method: Projecting onto super halfspaces

This and other methods, that use this kind of characteristics, share a common feature. Namely, within the backtracking procedure, in order to determine whether a “candidate” stepsize τk satisfies the required inequality, it is necessary to evaluate PC (xk − τk f (xk )). This means that if the backtracking search at iteration k requires mk steps, then we need to evaluate mk projections onto C in order to find y k , plus one more in the computation of xk+1 . In [95], a method using some backtracking which requires just one projection onto C (instead of mk ) for calculating y k and another one for xk+1 , i.e., only two projections per major iteration, was established. The algorithm converges under the only assumptions of monotonicity and continuity of f and the nonemptiness of Sol(f, C). The algorithm e and requires the following hexogenous parameters δ ∈ (0, 1), βb and βe satisfying 0 < βb ≤ β, i b e a sequence {βk }∞ k=0 ⊂ β, β . Algorithm 3.1.2. The Iusem and Svaiter method Step 0: Select an arbitrary starting point x0 ∈ C and set k = 0. Step 1: Given the current iterate xk , claculate z k = xk − βk f (xk ).

(3.1.17)

If xk = PC (z k ) stop, otherwise, let 

j(k) := min j ≥ 0 | f (2−j PC (z k ) + (1 − 2−j )xk ), xk − PC (z k )

2 ≥ (δ/βk ) xk − PC (z k ) },

(3.1.18)

Algorithms for Solving Monotone Variational Inequalities and Applications

29

αk = 2−j(k) ,

(3.1.19)

y k = αk PC (z k ) + (1 − αk )xk ,

(3.1.20)

γk =

hf (y k ), xk − y k i , kf (y k )k2

(3.1.21)

wk = xk − γk f (y k ),

(3.1.22)

xk+1 = PC (wk ).

(3.1.23)

The iterative step of this algorithm is illustrated in Figure 3.4.

Figure 3.4: Two-projection algorithms (Iusem & Svaiter)

Solodov and Svaiter [144] established an algorithm that improves the method [95] in two ways, one is that the next iterate xk+1 is closer to the solution set Sol(f, C) then the next iterate xk+1 computed by the method of [95]. Second, in [95] the second projection step is onto C, while in [144] it is onto the intersection C ∩ Hk where Hk is a certain halfspace. The iterative step of Solodov’s and Svaiter’s algorithm is illustrated in Figure 3.5. The algorithm converges under the assumptions of continuity of f , nonemptiness of Sol(f, C) and pseudo-monotonicity of f . In some other developments, Iiduka, Takahashi and Toyoda [97] introduced an iterative method for solving the VIP(f, C) in Hilbert space, but again they have to calculate the projection onto C twice. The main difference between their method and Korpelevich’s method is that the second step (3.1.7) of Korpelevich’s method is replaced by xk+1 = PC (τk xk + (1 − τk )y k ), (3.1.24)

30

Variational Inequalities

Figure 3.5: Two-projection algorithms (Solodov & Svaiter)

for some sequence {τk }∞ k=0 ⊆ [−1, 1]. Noor [126, 127] suggested and analyzed an extension of the extragradient method which still employs two orthogonal projections onto C, but (3.1.7) is replaced by xk+1 = PC (y k − τ f (y k )).

(3.1.25)

So, Noor’s and all other extensions of Korpelevich’s method mentioned above, still require two projections onto C or that one projection is replaced by a projection onto a set which is the intersection of C with some hyperplane found through a line search. Following these ideas we present several extensions of Korpelevich’s extragradient method in Euclidean and Hilbert spaces. In our first algorithmic extension we replace the (second) projection (3.1.7) onto C by a projection onto a specific constructible half-space which is actually one of the subgradient half-spaces, as will be explained. We call this (Algorithm 3.1.3) the subgradient extragradient algorithm. In our second algorithmic extension we develop a projection method for solving VIP(f, C), with projections related to approximations of the set C. This extension allows projections onto the members of an infinite sequence of subsets {Ck }∞ k=0 of C which epi-converges to the feasible set C of the VIP. We call this extension (Algorithm 3.1.4) the perturbed extragradient algorithm. Our work is admittedly a theoretical development although its potential numerical advantages are obvious. The next subsections are organized as follows. In Subsection 3.1.1 the algorithmic extensions are presented. They are analyzed in Subsections 3.1.2 and 3.1.4, respectively. In Subsection 3.1.5 we present a hybrid of the two extensions (Algorithm 3.1.5) and a two-subgradient extragradient algorithm (Algorithm 3.1.6) about which we are able to prove only boundedness.

Algorithms for Solving Monotone Variational Inequalities and Applications

3.1.1

31

The Subgradient Extragradient Algorithm

In this subsection we present our first algorithmic extension which is a modification of the extragradient method. We call this modification the subgradient extragradient algorithm. The name derives from the replacement of the second projection onto C in (3.1.7) with a specific subgradient projection. Assume that the set C is presented as a sublevel set of a convex function c : Rn → Rn , that is, C = {x ∈ Rn | c(x) ≤ 0} ,

(3.1.26)

Next we present the subgradient extragradient algorithm in a real Hilbert space H [48]. Algorithm 3.1.3. The subgradient extragradient algorithm Step 0: Select a starting point x0 ∈ H and τ > 0, and set k = 0. Step 1: Given the current iterate xk , compute y k = PC (xk − τ f (xk )) construct the set Tk ,  

 w ∈ H | (xk − τ f (xk )) − y k , w − y k ≤ 0 , if xk − τ f (xk ) 6= y k , Tk :=  H, if xk − τ f (xk ) = y k .

(3.1.27)

(3.1.28)

and calculate the next iterate xk+1 = PTk (xk − τ f (y k )).

(3.1.29)

Step 2: If xk = y k then stop. Otherwise, set k ← (k + 1) and return to Step 1. Remark 3.1.1. Observe that if c is lower semicontinuous and Gˆateaux differentiable at y k ,   k k k k k k k k then { x − τ f (x ) − y } = ∂c(y ) = {∇c(y )}; otherwise x − τ f (x ) − y ∈ ∂c(y k ). See [11, Facts 7.2] and [76] for more details. Figure 3.6 illustrates the iterative step of this algorithm. For the convergence of the algorithm we assume the following conditions. Condition 3.1.1. The solution set of (3.1.1), that is, Sol(f, C), is nonempty. Condition 3.1.2. The operator f is monotone on C. Condition 3.1.3. The operator f is Lipschitz continuous on H with constant L > 0.

3.1.2

Convergence of the Subgradient Extragradient Algorithm

In this subsection we present a weak convergence theorem for Algorithm 3.1.3. First we show that the stopping criterion in Step 2 of Algorithm 3.1.3 is valid. Lemma 3.1.2. If xk = y k in Algorithm 3.1.3, then xk ∈ Sol(f, C).

32

Variational Inequalities

Figure 3.6: xk+1 is a subgradient projection of the point xk − τ f (y k ) onto Tk .

Proof. If xk = y k , then xk = PC (xk −τ f (xk )), so xk ∈ C. By the variational characterization of the metric projection onto C, we have

w − xk , (xk − τ f (xk )) − xk ≤ 0 for all w ∈ C, (3.1.30) which implies that

τ f (xk ), w − xk ≥ 0, for all w ∈ C.

(3.1.31)

Since τ > 0, inequality (3.1.31) implies that xk ∈ Sol(f, C). The next lemma is crucial for the proof of our convergence theorem. k ∞ Lemma 3.1.3. Let {xk }∞ k=0 and {y }k=0 be the two sequences generated by Algorithm 3.1.3 and let u ∈ Sol(f, C). Then, under Conditions 3.1.1–3.1.3, we have

k+1

2

2

2

x − u ≤ xk − u − (1 − τ 2 L2 ) y k − xk for all k ≥ 0. (3.1.32)

Proof. Since u ∈ Sol(f, C), y k ∈ C and f is monotone, we have

k f (y ) − f (u), y k − u ≥ 0 for all k ≥ 0.

(3.1.33)

This implies that f (y k ), y k − u ≥ 0 for all k ≥ 0.

(3.1.34)



f (y k ), xk+1 − u ≥ f (y k ), xk+1 − y k .

(3.1.35)

So,

By the definition of Tk and xk+1 , we have

k+1  x − y k , xk − τ f (xk ) − y k = 0

(3.1.36)

Algorithms for Solving Monotone Variational Inequalities and Applications for all k ≥ 0. Thus,

k+1

x − y k , (xk − τ f (y k )) − y k = xk+1 − y k , xk − τ f (xk ) − y k

+ τ xk+1 − y k , f (xk ) − f (y k )

= τ xk+1 − y k , f (xk ) − f (y k ) . Denoting z k = xk − τ f (y k ), we obtain

k+1

2

2

x − u = PTk (z k ) − u

= PTk (z k ) − z k + z k − u, PTk (z k ) − z k + z k − u

2

2

= z k − u + z k − PT (z k ) + 2 PT (z k ) − z k , z k − u . k

k

33

(3.1.37)

(3.1.38)

Since

2

2 z k − PTk (z k ) + 2 PTk (z k ) − z k , z k − u

= 2 z k − PTk (z k ), u − PTk (z k ) ≤ 0

(3.1.39)

for all k ≥ 0, we get

k





z − PT (z k ) 2 + 2 PT (z k ) − z k , z k − u ≤ − z k − PT (z k ) 2 k k k

(3.1.40)

for all k ≥ 0. Hence,

k+1

2

2

2

x − u ≤ z k − u − z k − PTk (z k )

2

2 = (xk − τ f (y k )) − u − (xk − τ f (y k )) − xk+1

2

2

= xk − u − xk − xk+1 + 2τ u − xk+1 , f (y k )

2

2

≤ xk − u − xk − xk+1 + 2τ y k − xk+1 , f (y k ) ,

(3.1.41)

where the last inequality follows from (3.1.35). So,

k+1

2

2

2



x − u ≤ xk − u − xk − xk+1 + 2τ y k − xk+1 , f (y k )

2

 = xk − u − xk − y k + y k − xk+1 , xk − y k + y k − xk+1

+ 2τ y k − xk+1 , f (y k )

2

2

2 = xk − u − xk − y k − y k − xk+1

+ 2 xk+1 − y k , xk − τ f (y k ) − y k , (3.1.42) and by (3.1.37),

k+1

2

2

2

2

x − u ≤ xk − u − xk − y k − y k − xk+1

+ 2τ xk+1 − y k , f (xk ) − f (y k ) . Using the Cauchy–Schwarz inequality and Condition 3.1.3, we obtain



2τ xk+1 − y k , f (xk ) − f (y k ) ≤ 2τ L xk+1 − y k xk − y k .

(3.1.43)

(3.1.44)

34

Variational Inequalities

In addition,



2 0 ≤ τ L xk − y k − y k − xk+1

2





2 = τ 2 L2 xk − y k − 2τ L xk+1 − y k xk − y k + y k − xk+1 .

(3.1.45)





2

2 2τ L xk+1 − y k xk − y k ≤ τ 2 L2 xk − y k + y k − xk+1 .

(3.1.46)

So, Combining the above inequalities and using Condition 3.1.3, we see that

k+1

2

2

2

2

x − u ≤ xk − u − xk − y k − y k − xk+1



+ 2τ L xk+1 − y k xk − y k

2

2

2 ≤ xk − u − xk − y k − y k − xk+1

2

2 + τ 2 L2 xk − y k + y k − xk+1

2

2

2 = xk − u − xk − y k + τ 2 L2 xk − y k .

(3.1.47)

k+1

2

2

2

x − u ≤ xk − u − (1 − τ 2 L2 ) y k − xk ,

(3.1.48)

Finally, we get which completes the proof. Theorem 3.1.4. Assume that Conditions 3.1.1–3.1.3 hold and let τ ∈ (0, 1/L). Then any k ∞ sequences {xk }∞ k=0 and {y }k=0 generated by Algorithm 3.1.3 weakly converge to the same ∗ solution u ∈ Sol(f, C) and furthermore, u∗ = lim PSol(f,C) (xk ). k→∞

(3.1.49)

Proof. Fix u ∈ Sol(f, C) and define ρ := 1 − τ 2 L2 . Since τ ∈ (0, 1/L), ρ ∈ (0, 1). By (3.1.48), we have

2

2 0 ≤ xk − u − ρ y k − xk , (3.1.50) or

2

2 ρ y k − xk ≤ xk − u .

(3.1.51)

Using (3.1.48) with k ← (k − 1), we get

or

k



x − u 2 ≤ xk−1 − u 2 − ρ y k−1 − xk−1 2 ,

(3.1.52)

2

2

2 ρ y k − xk + ρ y k−1 − xk−1 ≤ xk−1 − u .

(3.1.53)

Continuing, we get for all integers K ≥ 0, K X

k



y − xk 2 ≤ x0 − u 2 . ρ k=0

(3.1.54)

Algorithms for Solving Monotone Variational Inequalities and Applications nP

o K k k 2 Since the sequence is monotonically increasing and bounded, k=0 y − x

35

K≥0

ρ

∞ X

k



y − xk 2 ≤ x0 − u 2 .

(3.1.55)

k=0

Hence

lim y k − xk = 0.

(3.1.56)

k→∞

By Lemma 3.1.3, the sequence {xk }∞ k=0 is bounded. Therefore, it has at least one weak k ∞ accumulation point. If x¯ is a weak limit point of some subsequence {xkj }∞ j=0 of {x }k=0 , then xkj * x¯ and y kj * x¯ (3.1.57) Now define the maximal monotone mapping M as in (2.1.2) by  f (w) + NC (w) , w ∈ C, M (w) := ∅, w∈ / C.

(3.1.58)

If (v, w) ∈ G (M ), since w ∈ M (v) = f (v) + NC (v), we get w − f (v) ∈ NC (v). Then hw − f (v), v − yi ≥ 0 for all y ∈ C. On the other hand, by the definition of y k and (2.0.28),

k x − τ f (xk ) − y k , y k − v ≥ 0,

(3.1.59)

(3.1.60)

or 

y k − xk τ



k

+ f (x ), v − y

k

 ≥0

(3.1.61)

 ∞ for all k ≥ 0. Using (3.3.34) and applying (3.1.59) with y kj j=0 , we get

w − f (v), v − y kj ≥ 0.

(3.1.62)

Hence,





w, v − y kj ≥ f (v), v − y kj ≥ f (v), v − y kj    kj y − xkj kj kj + f (x ), v − y − τ



= f (v) − f (y kj ), v − y kj + f (y kj ) − f (xkj ), v − y kj  kj   y − xk j kj − ,v − y τ  kj  

kj y − xkj kj kj kj ≥ f (y ) − f (x ), v − y − ,v − y τ

(3.1.63)

36

Variational Inequalities

and

w, v − y

kj





kj

kj

≥ f (y ) − f (x ), v − y

kj



 −

y kj − xkj τ

 ,v − y

kj

 .

(3.1.64)

Taking the limit as j → ∞, we obtain hw, v − x¯i ≥ 0,

(3.1.65)

which implies, by the maximal monotonicity of M that x¯ ∈ M −1 (0) = Sol(f, C) . In order to shownthato the entire sequence weakly converges to x¯, assume that there is another ∞  ∞ 0 0 subsequence xkj of xk k=0 that weakly converges to some x¯ 6= x¯ and x¯ ∈ Sol(f, C) . j=0  ∞ Note that from Lemma 3.1.3 it follows that the sequence kxk − x¯k k=0 is decreasing for each u ∈ Sol(f, C). By the Opial condition we have 0

lim kxk − x¯k = lim inf kxkj − x¯k < lim inf kxkj − x¯ k

k→∞

j→∞

j→∞

0

0

= lim kxk − x¯ k = lim inf kxkj − x¯ k < lim inf kxkj − x¯k j→∞

k→∞

j→∞

k

= lim kx − x¯k, k→∞

0

and this is a contradiction, thus x¯ = x¯. This implies that the sequences  k ∞ y k=0 converge weakly to the same point x¯ ∈ Sol(f, C). Finally, put uk = PSol(f,C) (xk ),

(3.1.66)  k ∞ x k=0 and (3.1.67)

so by (2.0.28) and since x¯ ∈ Sol(f, C),

x¯ − uk , uk − xk ≥ 0. (3.1.68)  k ∞ By Lemma 2.0.15, u k=0 converges strongly to some u∗ ∈ Sol(f, C). Therefore h¯ x − u∗ , u∗ − x¯i ≥ 0

(3.1.69)

and hence u∗ = x¯.

3.1.3

The perturbed extragradient algorithm

Our next algorithmic extension is a modification of the extragradient method, which we call the perturbed extragradient algorithm. We now formulate the perturbed extragradient algorithm where H = Rn , i.e., Euclidean space. Algorithm 3.1.4. The perturbed extragradient algorithm epi n Step 0: Let {Ck }∞ k=0 be a sequence of sets in NCCS(R ) such that Ck → C. Select a starting point x1 ∈ C0 and τ > 0, and set k = 1. Step 1: Given the current iterate xk ∈ Ck−1 , compute y k = PCk (xk − τ f (xk ))

(3.1.70)

xk+1 = PCk (xk − τ f (y k )).

(3.1.71)

and Step 2: Set k ← (k + 1) and return to Step 1.

Algorithms for Solving Monotone Variational Inequalities and Applications

37

Figure 3.7: In the iterative step of Algorithm 3.1.4, xk+1 is obtained by performing the projections of the original Korpelevich method with respect to the set Ck .

Figure 3.7 illustrates the iterative step of this algorithm. We will need the following additional assumption for the convergence theorem. Condition 3.1.4. f is Lipschitz continuous on C with constant L > 0.

3.1.4

Convergence of the perturbed extragradient algorithm

First we observe that Lemma 3.1.3 holds for Algorithm 3.1.4 under Conditions 3.1.1–3.1.3. The following lemma uses, instead of Condition 3.1.3, Condition 3.1.4, which requires Lipschitz continuity on C and not on the whole space Rn . This entails the main difference between the proofs of Lemmata 3.1.3 and 3.1.5, which is that (3.1.36) becomes an inequality and this propagates down the rest of the proof. We give, however, the next proof in full for the convenience of the readers. epi

Lemma 3.1.5. Assume that Ck ⊆ Ck+1 ⊆ C for all k ≥ 0, that Ck → C, that Conditions k ∞ 3.1.1, 3.1.2 and Condition 3.1.4 hold. Let {xk }∞ k=0 and {y }k=0 be two sequences generated ∗ by Algorithm 3.1.4. Let x ∈ Sol(f, C). Then

k+1

2

2

2

x − x∗ ≤ xk − x∗ − (1 − τ 2 L2 ) y k − xk for all k ≥ 0.

(3.1.72)

Proof. Since x∗ ∈ Sol(f, C), y k ∈ Ck ⊆ C and f is pesudo-monotone with respect to Sol(f, C),

k k f (y ), y − x∗ ≥ 0 for all k ≥ 0. (3.1.73)

38

Variational Inequalities

So,



f (y k ), xk+1 − x∗ ≥ f (y k ), xk+1 − y k .

(3.1.74)

By the variational characterization of the projection with respect to Ck , we have

k+1  x − y k , xk − τ f (xk ) − y k ≤ 0. (3.1.75) Thus,



xk+1 − y k , (xk − τ f (y k )) − y k = xk+1 − y k , xk − τ f (xk ) − y k

+ τ xk+1 − y k , f (xk ) − f (y k )

≤ τ xk+1 − y k , f (xk ) − f (y k ) .

(3.1.76)

Denoting z k = xk − τ f (y k ), we obtain exactly equations (3.1.38)–(3.1.42) with PTk replaced by PCk . By (3.1.76) we obtain (3.1.43) for the present lemma too. Using the Cauchy– Schwarz inequality and Condition 3.1.4, we can repeat the remainder of the proof as in the proof of Lemma 3.1.3 and obtain finally

k+1

2

2

2

x − x∗ ≤ xk − x∗ − (1 − τ 2 L2 ) y k − xk , (3.1.77) which completes the proof. Next, we present our convergence theorem for the perturbed extragradient algorithm. epi

Theorem 3.1.6. Assume that Ck ⊆ Ck+1 ⊆ C for all k ≥ 0, that Ck → C, and that Conditions 3.1.1, 3.1.2 and Condition 3.1.4 hold. Let τ ∈ (0, 1/L). Then any sequence {xk }∞ k=0 , generated by Algorithm 3.1.4, converges to a solution of (3.1.1). Proof. Let x∗ ∈ Sol(f, C) and define ρ := 1 − τ 2 L2 . Using Lemma 3.1.5 instead of Lemma 3.1.3, we can obtain, by similar arguments, that here also

lim y k − xk = 0. k→∞

k ∞ So, if x¯ is the limit point of some subsequence {xkj }∞ j=0 of {x }k=0 , then

lim y kj = x¯.

j→∞

(3.1.78)

Using the continuity of f and PCk , and Proposition 2.0.19, we have x¯ = lim y kj = lim PCkj (xkj − τ f (xkj )) = PC (¯ x − τ f (¯ x)). j→∞

j→∞

(3.1.79)

As in the proof of Lemma 3.1.2, it follows that C). We now apply Lemma 3.1.5

k x¯ ∈ Sol(f, ∗ ∞

with x = x¯ to deduce that the sequence { x − x¯ }k=0 is monotonically decreasing and bounded, hence convergent. Since



lim xk − x¯ = lim xkj − x¯ = 0, (3.1.80) k→∞

the whole sequence {xk }∞ ¯. k=0 converges to x

j→∞

Algorithms for Solving Monotone Variational Inequalities and Applications

3.1.5

39

The hybrid perturbed subgradient extragradient

As a matter of fact, Algorithm 3.1.4 can be naturally modified by combining the two algorithmic extensions studied above into a hybrid perturbed subgradient extragradient algorithm, namely, to allow the second projection in Algorithm 3.1.4 to be replaced by a specific subgradient projection with respect to Ck . Algorithm 3.1.5. The hybrid perturbed subgradient extragradient algorithm Step 0: Select an arbitrary starting point x1 ∈ C0 and τ > 0, and set k = 1. Step 1: Given the current iterate xk , compute y k = PCk (xk − τ f (xk ))

(3.1.81)

construct the set Tk as in (3.1.28) and calculate the next iterate xk+1 = PTk (xk − τ f (y k )).

(3.1.82)

Step 2: Set k ← (k + 1) and return to Step 1. Figure 3.8 illustrates the iterative step of this algorithm.

Figure 3.8: In the iterative step of Algorithm 3.1.5, xk+1 is obtained by performing one subgradient projection and one projection onto the set Ck in each iterative step.

We proved the convergence of this algorithm by using similar arguments to those we employed in the previous proofs. Therefore we omit the proof. Another possibility is the following one. In Algorithm 3.1.3 we replaced the second projection onto C with a specific subgradient projection. It is natural to ask whether it is possible to replace the first projection onto C as well and, furthermore, if this could be done for any choice of a subgradient half-space. To accomplish this, one might consider the following algorithm. Algorithm 3.1.6. The two-subgradient extragradient algorithm Step 0: Select an arbitrary starting point x0 ∈ Rn and set k = 0.

40

Variational Inequalities  Step 1: Given the current iterate xk , choose ξ k ∈ ∂c(xk ), consider Tk := T xk as in (2.0.54), and then compute y k = PTk (xk − τ f (xk )) (3.1.83) and xk+1 = PTk (xk − τ f (y k )).

(3.1.84)

Step 2: If xk = y k , then stop. Otherwise, set k ← (k + 1) and return to Step 1. Figure 3.9 illustrates the iterative step of this algorithm.

Figure 3.9: In the iterative step of Algorithm 3.1.6, xk+1 is obtained by performing two subgradient projections in each iterative step.

We now observe that under Conditions 3.1.1–3.1.3, that Lemma 3.1.2 and 3.1.3 still k

k = 0. It is hold, that is, the generated sequence {xk }∞ k=0 is bounded and limk→∞ y − x ∗ still an open question whether these sequences converge to x ∈ Sol(f, C). First we show that Step 2 of Algorithm 3.1.6 is valid. Lemma 3.1.7. If xk = y k for some k in Algorithm 3.1.6, then xk ∈ Sol(f, C). Proof. If xk = y k , then xk = PTk (xk − τ f (xk )), so xk ∈ Tk . Therefore by the definition of Tk (see (2.0.54)), we have g(xk ) + hξ k , xk − xk i ≤ 0, so g(xk ) ≤ 0 and by the representation of the set C, xk ∈ C. By the variational characterization of the projection with respect to Tk , we have

w − y k , (xk − τ f (xk )) − y k ≤ 0 for all w ∈ Tk (3.1.85) and

τ w − xk , f (xk ) ≥ 0 for all w ∈ Tk .

(3.1.86)

Algorithms for Solving Monotone Variational Inequalities and Applications

41

Now we claim that C ⊆ Tk . Let x ∈ C, and consider ξ k ∈ ∂g(xk ). By the definition of the subdifferential set of c at a point xk (see (2.0.53)), we get for all y ∈ Rn , c(y) ≥ g(xk ) + hξ k , y − xk i, so, in particular, for x ∈ C ⊆ Rn , g(x) ≥ g(xk ) + hξ k , x − xk i.

(3.1.87)

By the representation of the set C (see (3.1.26)), we obtain 0 ≥ g(x) ≥ g(xk ) + hξ k , x − xk i

(3.1.88)

which means that x ∈ Tk and so C ⊆ Tk , as claimed. Since C ⊆ Tk , we have by (3.1.86),

τ w − xk , f (xk ) ≥ 0 for all w ∈ C. (3.1.89) Since τ > 0 and xk ∈ C, we finally get xk ∈ Sol(f, C). The proof of the next lemma is similar to that of Lemma 3.1.5 above. k ∞ Lemma 3.1.8. Let {xk }∞ k=0 and {y }k=0 be two sequences generated by Algorithm 3.1.6. Let x∗ ∈ Sol(f, C). Then under Conditions 3.1.1–3.1.3, we have for every k ≥ 0,

k+1

2

2

2

x − x∗ ≤ xk − x∗ − (1 − τ 2 L2 ) y k − xk . (3.1.90)

It is not difficult to show, by following the arguments given in Theorem 3.1.4, that under the conditions of this lemma and if τ ∈ (0, 1/L), then any sequence {xk }∞ k=0 generated by Algorithm 3.1.6 is bounded and

(3.1.91) lim y k − xk = 0. k→∞

The second algorithmic extension of Korpelevich’s extragradient method proposed here can be further studied along the lines of the following conjecture. Conjecture 3.1.1. The set inclusion condition Ck ⊆ Ck+1 ⊆ C for all k ≥ 0, which appears in our analysis of the perturbed extragradient algorithm, could probably be removed by employing techniques similar to those of [168], i.e., using the definition of the γ-distance (Definition 2.0.17).

3.2

Variational inequalities and fixed points

In this section we present a modified version of the subgradient extragradient algorithm which finds a solution of the VIP which is also a fixed point of a given nonexpansive mapping S : H → H. Let {αk }∞ k=0 ⊂ [c, d] for some c, d ∈ (0, 1). Algorithm 3.2.1. The modified subgradient extragradient algorithm Step 0: Select a starting point x0 ∈ H and τ > 0, and set k = 0. Step 1: Given the current iterate xk , compute y k = PC (xk − τ f (xk )),

(3.2.1)

construct the set Tk as in (3.1.28) and calculate the next iterate xk+1 = αk xk + (1 − αk )SPTk (xk − τ f (y k )). Step 2: Set k ← (k + 1) and return to Step 1.

(3.2.2)

42

Variational Inequalities

Figure 3.10: The iterative step of Algorithm 3.2.1.

Figure 3.10 illustrates the iterative step of this algorithm. For the convergence theorem we need to assume the following condition. Condition 3.2.1. Fix(S) ∩ Sol(f, C) 6= ∅.

3.2.1

Convergence of the modified subgradient extragradient algorithm

In this subsection we establish a weak convergence theorem for Algorithm 3.2.1. The outline of its proof is similar to that of [123, Theorem 3.1]. Theorem 3.2.1. that 3.1.2, 3.1.3 and 3.2.1 hold and τ < 1/L. Then  kAssume ∞  kConditions ∞ any sequences x k=0 and y k=0 generated by Algorithm 3.2.1 weakly converge to the same point u∗ ∈ Fix(S) ∩ Sol(f, C) and furthermore, u∗ = lim PFix(S)∩Sol(f,C) (xk ). k→∞

(3.2.3)

Proof. Denote tk := PTk (xk − τ f (y k )) for all k ≥ 0. Let u ∈ Fix(S) ∩ Sol(f, C). Applying (2.0.29) with C = Tk , x = xk − τ f (y k ) and y = u, we obtain

k



t − u 2 ≤ xk − τ f (y k ) − u 2 − xk − τ f (y k ) − tk 2

= kxk − uk2 − kxk − tk k2 + 2τ f (y k ), u − tk = kxk − uk2 − kxk − tk k2





 + 2τ f (y k ) − f (u), u − y k + f (u), u − y k + f (y k ), y k − tk .

(3.2.4)

By Condition 3.1.2,

f (y k ) − f (u), u − y k ≤ 0,

(3.2.5)

Algorithms for Solving Monotone Variational Inequalities and Applications

43

and since u ∈ Sol(f, C)

f (u), u − y k ≤ 0.

(3.2.6)

So,

k



t − u 2 ≤ kxk − uk2 − kxk − tk k2 + 2τ f (y k ), y k − tk

= kxk − uk2 − kxk − y k k2 − 2 xk − y k , y k − tk

− ky k − tk k2 + 2τ f (y k ), y k − tk = kxk − uk2 − kxk − y k k2 − ky k − tk k2

+ 2 xk − τ f (y k ) − y k , tk − y k .

(3.2.7)

By the definition of Tk , and tk , we obtain

 xk − τ f (xk ) − y k , tk − y k = 0,

(3.2.8)

so

k x − τ f (y k ) − y k , tk − y k



= xk − τ f (xk ) − y k , tk − y k + τ f (xk ) − τ f (y k ), tk − y k

= τ f (xk ) − τ f (y k ), tk − y k ≤ τ kf (xk ) − f (y k )kktk − y k k ≤ τ Lkxk − y k kktk − y k k,

(3.2.9)

where the last two inequalities follow from the Cauchy–Schwarz inequality and Condition 3.1.3. Therefore

k

t − u 2 ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + 2τ Lkxk − y k kktk − y k k.

(3.2.10)

Observe that 0 ≤ ktk − y k k − τ Lkxk − y k k

2

= ktk − y k k2 − 2τ Lkxk − y k kktk − y k k + τ 2 L2 kxk − y k k2 ,

(3.2.11)

2τ Lkxk − y k kktk − y k k ≤ ktk − y k k2 + τ 2 L2 kxk − y k k2 .

(3.2.12)

so, Thus

k

t − u 2 ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + ktk − y k k2 + τ 2 L2 kxk − y k k2 = kxk − uk2 − kxk − y k k2 + τ 2 L2 kxk − y k k2 = kxk − uk2 + (τ 2 L2 − 1)kxk − y k k2 ≤ kxk − uk2 ,

(3.2.13)

44

Variational Inequalities

where the last inequality follows from the fact that τ < 1/L. Using (2.0.4), we get  kxk+1 − uk2 = kαk xk + (1 − αk )S tk − uk2    = kαk xk − u + (1 − αk ) S tk − u k2  = αk kxk − uk2 + (1 − αk )kS tk − uk2    − αk (1 − αk )k xk − u − S tk − u k2  ≤ αk kxk − uk2 + (1 − αk )kS tk − uk2  = αk kxk − uk2 + (1 − αk )kS tk − S (u) k2 ≤ αk kxk − uk2 + (1 − αk )ktk − uk2 ≤ αk kxk − uk2 + (1 − αk ) kxk − uk2 + (τ 2 L2 − 1)kxk − y k k2 = kxk − uk2 + (1 − αk )(τ 2 L2 − 1)kxk − y k k2 ≤ kxk − uk2 ,

 (3.2.14)

so kxk+1 − uk2 ≤ kxk − uk2 .

(3.2.15)

lim kxk − uk = σ,

(3.2.16)

Therefore there exists k→∞

 ∞  ∞ and xk k=0 and tk k=0 are bounded. From the last relations it follows that (1 − αk )(1 − τ 2 L2 )kxk − y k k2 ≤ kxk − uk2 − kxk+1 − uk2 , or kxk − y k k2 ≤

kxk − uk2 − kxk+1 − uk2 . (1 − αk )(1 − τ 2 L2 )

(3.2.17)

(3.2.18)

Hence, lim kxk − y k k = 0.

k→∞

(3.2.19)

In addition, by the definition of y k and Tk , ky k − tk k2 = kPC (xk − τ f (xk )) − PTk (xk − τ f (y k ))k2 = kPTk (xk − τ f (xk )) − PTk (xk − τ f (y k ))k2 ≤ k(xk − τ f (xk )) − (xk − τ f (y k ))k2 = kτ f (y k ) − τ f (xk )k2 ≤ τ 2 L2 ky k − xk k2 ,

(3.2.20)

where the last inequality follows from Condition 3.1.3. So, ky k − tk k2 ≤ τ 2 L2 ky k − xk k2 ,

(3.2.21)

lim ky k − tk k = 0.

(3.2.22)

kxk − tk k ≤ kxk − y k k + ky k − tk k,

(3.2.23)

and by (3.2.19) we get k→∞

By the triangle inequality,

Algorithms for Solving Monotone Variational Inequalities and Applications

45

so by (3.2.19) and (3.2.22), we have lim kxk − tk k = 0.

k→∞

(3.2.24)

 ∞  ∞ Since xk k=0 is bounded, it has a subsequence xkj j=0 which weakly converges to some x ∈ H. We now show that x ∈ Fix(S)∩ Sol(f, C). Define the maximal monotone extension mapping M of f as in (2.1.2) and by using arguments similar to those used in the proof of Theorem 4.4.1, we get that x ∈ M −1 (0) = Sol(f, C) . It is now left to show that x ∈ Fix(S). To this end, let u ∈ Fix(S) ∩ Sol(f, C) as before. Since S is nonexpansive, we get from (3.3.21) that   (3.2.25) kS tk − uk = kS tk − S (u) k ≤ ktk − uk ≤ kxk − uk. By (3.3.27),  lim sup kS tk − uk ≤ σ.

(3.2.26)

k→∞

Furthermore,  lim kαk xk + (1 − αk )S tk − uk k→∞    = lim kαk xk − u + (1 − αk ) S tk − u k k→∞

= lim kxk+1 − uk = σ. k→∞

(3.2.27)

So applying Lemma 2.0.10, we obtain  lim kS tk − xk k = 0.

(3.2.28)

    kS xk − xk k = kS xk − S tk + S tk − xk k    ≤ kS xk − S tk k + kS tk − xk k  ≤ kxk − tk k + kS tk − xk k,

(3.2.29)

k→∞

Since

It follows from (3.3.42) and (3.2.28) that  lim kS xk − xk k = 0.

k→∞

Since S is nonexpansive on H, xkj * x and   lim k(I − S) xkj k = lim kxkj − S xkj k = 0, j→∞

k→∞

(3.2.30)

(3.2.31)

we obtain by the Demiclosedness Principle that (I − S)(x) = 0, which means that x ∈ Fix(S). Now, again by using similar arguments to those used in the proof of Theorem  3.1.4, ∞ we get that the entire sequence weakly converges to x. Therefore the sequences xk k=0  ∞ and y k k=0 weakly converge to x ∈ Fix(S) ∩ Sol(f, C). Finally, put uk = PFix(S)∩Sol(f,C) (xk ).

(3.2.32)

46

Variational Inequalities

Since x ∈ Fix(S) ∩ Sol(f, C), it follows from (2.0.28) that

x − uk , uk − xk ≥ 0. (3.2.33)  ∞ By Lemma 2.0.15, uk k=0 converges strongly to some u∗ ∈ Fix(S)∩ Sol(f, C). Therefore hx − u∗ , u∗ − xi ≥ 0

(3.2.34)

and hence x = u∗ . Remark 3.2.2. In Algorithm 3.2.1 we assumed that S was a nonexpansive mapping on H. If it is defined only on C we can replace it by Se = SPC , which is a nonexpansive mapping on C. In this case the iterative step is as follows: y k = PC (xk − τ f (xk )), construct the set Tk (3.1.28) and calculate the next iterate e T (xk − τ f (y k ). xk+1 = αk xk + (1 − αk )SP k

3.3

(3.2.35)

Strong convergence in Hilbert space

In this section we present two more modifications of the subgradient extragradient method, which enable us to obtain strong convergence theorems in a real Hilbert space. The first modification of the subgradient extragradient algorithm is inspired by Takahashi and Nadezhkina [124] and the second is inspired by Takahashi, Takeuchi and Kubota [148].We now present our first modification of the subgradient extragradient algorithm. Algorithm 3.3.1. The first modification of the subgradient extragradient algorithm Step 0: Select an arbitrary starting point x0 ∈ H and τ > 0, and set k = 0. Step 1: Given the current iterate xk , compute  k y = PC (xk − τ f (xk )),    k k k k    z = αk x + (1 − αk )PT k (x − τ f (y )), Ck = z ∈ H | z k − z ≤ xk − z , (3.3.1) 

k   0 k  Qk = z ∈ H | x − z, x − x ≥ 0 ,    k+1 x = PCk ∩Qk (x0 ) , where Tk is as in (3.1.28) and {αk }∞ k=0 ⊂ [0, α] for some α ∈ [0, 1). Step 2: Set k ← (k + 1) and return to Step 1.

3.3.1

Connection with Haugazeau’s method

In this subsection we describe the connection between our Algorithm 3.3.1 and the work of Haugazeau. His work was successfully generalized and extended in recent papers by Combettes [61], Solodov and Svaiter [145], Bauschke and Combettes [12, 13], and by Burachik,

Algorithms for Solving Monotone Variational Inequalities and Applications

47

Lopes and Svaiter [31]. Haugazeau presented an algorithm for solving the Best Approximation Problem (BAP) of finding the projection of a point onto the intersection of m closed convex subsets {Ci }m i=1 ⊂ H. Define, as in (2.0.44), for any pair x, y ∈ H the set H(x, y) := {u ∈ H | hu − y, x − yi ≤ 0},

(3.3.2)

and denoting by Q(x, y, z) the projection of x onto H(x, y) ∩ H(y, z), namely, Q(x, y, z) = PH(x,y)∩H(y,z) (x), he showed, see [89], that for an arbitrary starting point x0 ∈ H, any sequence {xk }∞ k=0 generated by the iterative step xk+1 = Q(x0 , xk , Pk(mod m)+1 (xk ))

(3.3.3)

converges strongly to the projection of x0 onto C = ∩m i=1 Ci . The operator Q requires projecting onto the intersection of two constructible half-spaces; this is not difficult to implement. In [89] Haugazeau introduced the operator Q as an explicit description of the projector onto the intersection of the two half-spaces H(x, y) and H(y, z). So, following, e.g., [15, Definition 3.1], denoting π = hx − y, y − zi , µ = kx − yk2 , ν = ky − zk2 and ρ = µν − π 2 , we have  z, if ρ = 0 and π ≥ 0,     x + 1 + πν (z − y), if ρ > 0 and πν ≥ ρ, Q(x, y, z) =    y + ν (π(x − y) + µ(z − y)), if ρ > 0 and πν < ρ. ρ

(3.3.4)

In our Algorithm 3.3.1 we may write



 Ck = z ∈ H | z k − z ≤ xk − z 

= z ∈ H | xk − (1/2)(xk + z k ), z − (1/2)(xk + z k ) ≤ 0 = H(xk , (1/2)(xk + z k ))

(3.3.5)

and 

Qk = z ∈ H | xk − z, x0 − xk ≥ 0 = H(x0 , xk ).

(3.3.6)

This leads to the following alternative phrasing of the iterative step of Algorithm 3.3.1:  k k k   y = PC (x − τ f (x )), z k = αk xk + (1 − αk )PTk (xk − τ f (y k )),   xk+1 = Q(x0 , xk , (1/2)(xk + z k )).

(3.3.7)

Observe that using the explicit description (3.3.4) of the operator Q and the projector onto Tk , the iterative step (3.3.1) of Algorithm 3.3.1 can be rewritten even more explicitly as

48

Variational Inequalities

follows.  k y = PC (xk − τ f (xk )),       denote ak := xk − τ f (xk ) − y k and wk := xk − τ f (y k ),         k ,w k −y k a   h i   wk − max 0, kak k2 ak , if ak 6= 0,  PT (wk ) := tk =  k    k  w , if ak = 0.      z k = α xk + (1 − α )tk ,   k 

0 kk denote πk := x − x , (1/2)(xk − z k ) , µk := kx0 − xk k2 ,   νk := k(1/2)(xk − z k )k2 and ρk := µk νk − (πk )2 .        Then,      (1/2)(xk + z k ), if ρk = 0 and πk ≥ 0,              x0 + 1 + πνkk (1/2)(z k − xk ), if ρk > 0 and πk νk ≥ ρk ,  xk+1 =           y k + ρνk (πk x0 − xk + µ2k (z k − xk )), if ρk > 0 and πk νk < ρk .  k

3.3.2

(3.3.8)

Convergence of the first modification of the subgradient extragradient algorithm

In this subsection we establish a strong convergence theorem for Algorithm 3.3.1. The outline of its proof is similar to that of [124, Theorem 3.1]. Theorem3.3.1. Conditions 3.1.1–3.1.3 hold and τ ∈ (0, 1/L). Then any ∞ Assume  that ∞ sequences xk k=0 and y k k=0 generated by Algorithm 3.3.1 strongly converge to the same point u∗ ∈ Sol(f, C) and furthermore, u∗ = PSol(f,C) (x0 ).

(3.3.9)

Proof. First observe that for all k ≥ 0, Qk is closed and convex. The set Ck is also closed and convex because n

2

2 o Ck = z ∈ H : z k − z ≤ xk − z n o

k

k k 2 k k

= z ∈H: z −x + 2 z − x ,x − z ≤ 0 . (3.3.10) By the definition of Qk and (2.0.28), we have xk = PQk (x0 ).

(3.3.11)

Denote tk := PTk (xk − τ f (y k )) for all k ≥ 0. Let u ∈ Sol(f, C). Applying (2.0.29) with the set C there as Tk , x = xk − τ f (y k ) and y = u, we obtain

k



t − u 2 ≤ xk − τ f (y k ) − u 2 − xk − τ f (y k ) − tk 2

= kxk − uk2 − kxk − tk k2 + 2τ f (y k ), u − tk = kxk − uk2 − kxk − tk k2





 + 2τ f (y k ) − f (u), u − y k + f (u), u − y k + f (y k ), y k − tk .

(3.3.12)

Algorithms for Solving Monotone Variational Inequalities and Applications

49

By Condition 3.1.2,

f (y k ) − f (u), u − y k ≤ 0,

(3.3.13)

f (u), u − y k ≤ 0.

(3.3.14)

and since u ∈ Sol(f, C), So,

k



t − u 2 ≤ kxk − uk2 − kxk − tk k2 + 2τ f (y k ), y k − tk

= kxk − uk2 − kxk − y k k2 − 2 xk − y k , y k − tk

− ky k − tk k2 + 2τ f (y k ), y k − tk = kxk − uk2 − kxk − y k k2 − ky k − tk k2

+ 2 xk − τ f (y k ) − y k , tk − y k .

(3.3.15)

By the definition of Tk ,

 xk − τ f (xk ) − y k , tk − y k ≤ 0,

(3.3.16)

so

k x − τ f (y k ) − y k , tk − y k



= xk − τ f (xk ) − y k , tk − y k + τ f (xk ) − τ f (y k ), tk − y k

≤ τ f (xk ) − τ f (y k ), tk − y k ≤ τ kf (xk ) − f (y k )kktk − y k k ≤ τ Lkxk − y k kktk − y k k,

(3.3.17)

where the last two inequalities follow from the Cauchy–Schwarz inequality and Condition 3.1.3. Therefore

k

t − u 2 ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + 2τ Lkxk − y k kktk − y k k. (3.3.18) Observe that 0 ≤ ktk − y k k − τ Lkxk − y k k

2

= ktk − y k k2 − 2τ Lkxk − y k kktk − y k k + τ 2 L2 kxk − y k k2 ,

(3.3.19)

2τ Lkxk − y k kktk − y k k ≤ ktk − y k k2 + τ 2 L2 kxk − y k k2 .

(3.3.20)

so, Thus

k

t − u 2 ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + ktk − y k k2 + τ 2 L2 kxk − y k k2 = kxk − uk2 − kxk − y k k2 + τ 2 L2 kxk − y k k2 = kxk − uk2 + (τ 2 L2 − 1)kxk − y k k2 ≤ kxk − uk2 ,

(3.3.21)

50

Variational Inequalities

where the last inequality follows from the fact that τ ∈ (0, 1/L). Now by the definition of z k and (2.0.4), we get kz k − uk2 = kαk xk + (1 − αk )tk − uk2   = kαk xk − u + (1 − αk ) tk − u k2 = αk kxk − uk2 + (1 − αk )ktk − uk2   − αk (1 − αk )k xk − u − tk − u k2 ≤ αk kxk − uk2 + (1 − αk )ktk − uk2 ≤ αk kxk − uk2 + (1 − αk ) kxk − uk2 + (τ 2 L2 − 1)kxk − y k k2



= kxk − uk2 + (1 − αk )(τ 2 L2 − 1)kxk − y k k2 ≤ kxk − uk2 ,

(3.3.22)

so u ∈ Ck and therefore Sol(f, C) ⊂ Ck for all k ≥ 0. Now we show, by induction, that ∞ the sequence xk k=0 is well-defined and Sol(f, C) ⊂ Ck ∩ Qk for all k ≥ 0. For k = 0 we have Q0 = H, so it follows that Sol(f, C) ⊂ C0 ∩ Q0 and therefore x1 = PC0 ∩Q0 (x0 ) is well-defined. Now suppose that xk is given and Sol(f, C) ⊂ Ck ∩ Qk for some k. By Condition 3.1.1, Ck ∩ Qk is nonempty, closed and convex, and therefore xk+1 = PCk ∩Qk (x0 ) is well-defined. By (2.0.28), we have

z − xk+1 , x0 − xk+1 ≤ 0 for all z ∈ Ck ∩ Qk . (3.3.23) Since Sol(f, C) ⊂ Ck ∩ Qk ,

u − xk+1 , x0 − xk+1 ≤ 0 for all u ∈ Sol(f, C),

(3.3.24)

which implies that u ∈ Qk+1 . Thus Sol(f, C) ⊂ Ck+1 ∩ Qk+1 , as required. Denote u∗ = PSol(f,C) (x0 ). It is clear that u∗ ∈ Sol(f, C). Since Sol(f, C) ⊂ Ck ∩ Qk , u∗ ∈ Sol(f, C) and xk+1 = PCk ∩Qk (x0 ) , we have kxk+1 − x0 k ≤ ku∗ − x0 k for all k ≥ 0. (3.3.25)  ∞ This implies, in particular, that xk k=0 is bounded, and it follows from (3.3.21) and  ∞  ∞ (3.3.22) that so are tk k=0 and z k k=0 . By the definition of xk+1 , we have xk+1 ∈ Qk and by the definition of Qk , xk = PQk (x0 ), so kxk − x0 k ≤ kxk+1 − x0 k for all k ≥ 0.

(3.3.26)

lim kxk − x0 k.

(3.3.27)

Hence there exists k→∞

Applying (2.0.29) with the set C there as Qk , x = x0 and y = xk+1 , we obtain kxk+1 − xk k2 ≤ kxk+1 − x0 k2 − kxk − x0 k2 for all k ≥ 0

(3.3.28)

lim kxk+1 − xk k = 0.

(3.3.29)

and so, k→∞

Algorithms for Solving Monotone Variational Inequalities and Applications

51

Since xk+1 ∈ Ck , kz k − xk+1 k ≤ kxk − xk+1 k, and therefore by the triangle inequality, kxk − z k k ≤ kxk − xk+1 k + kxk+1 − z k k ≤ 2kxk − xk+1 k,

(3.3.30)

lim kxk − z k k = 0.

(3.3.31)

kz k − uk2 ≤ kxk − uk2 + (1 − αk )(τ 2 L2 − 1)kxk − y k k2 ,

(3.3.32)

and so, k→∞

By (3.3.22), or kxk − uk2 − kz k − uk2 (1 − αk )(1 − τ 2 L2 )   1 k k k k = kx − uk − kz − uk kx − uk + kz − uk (1 − αk )(1 − τ 2 L2 )  k 1 k k kx − uk + kz − uk kx − z k k. (3.3.33) ≤ 2 2 (1 − αk )(1 − τ L )  ∞  ∞ By (3.3.31) and the boundedness of xk k=0 and z k k=0 , we obtain kxk − y k k2 ≤

lim kxk − y k k = 0.

(3.3.34)

lim kf (xk ) − f (y k )k = 0.

(3.3.35)

k→∞

By Condition 3.1.3, k→∞

Using a similar argument to the one following (3.3.18),

k

t − u 2 ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + 2τ Lkxk − y k kktk − y k k ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + kxk − y k k2 + τ 2 L2 ky k − tk k2 ≤ kxk − uk2 + (τ 2 L2 − 1)ky k − tk k2 .

(3.3.36)

Now by (3.3.22), kz k − uk2 ≤ αk kxk − uk2 + (1 − αk )ktk − uk2 ,

(3.3.37)

and by the last inequalities, kz k − uk2 ≤ αk kxk − uk2 + (1 − αk ) kxk − uk2 + (τ 2 L2 − 1)ky k − tk k2 = kxk − uk2 + (1 − αk )(τ 2 L2 − 1)ky k − tk k2 ≤ kxk − uk2 .

 (3.3.38)

Thus kxk − uk2 − kz k − uk2 (1 − αk )(1 − τ 2 L2 )   1 k k k k = kx − uk − kz − uk kx − uk + kz − uk (1 − αk )(1 − τ 2 L2 )  k 1 k k kx − uk + kz − uk kx − z k k. (3.3.39) ≤ (1 − αk )(1 − τ 2 L2 )

ky k − tk k2 ≤

52

Variational Inequalities

 ∞  ∞ By (3.3.31) and the boundedness of xk k=0 and z k k=0 , we obtain lim ky k − tk k = 0.

k→∞

(3.3.40)

By the triangle inequality, we also have kxk − tk k ≤ kxk − y k k + ky k − tk k,

(3.3.41)

and therefore lim kxk − tk k = 0. (3.3.42)  ∞  ∞  ∞ Since xk k=0 is bounded, there exists a subsequence xkj j=0 of xk k=0 which converges weakly to some x ∈ H. In order to show that x ∈ Sol(f, C) we again can use the maximal monotone extension M of f (2.1.2) and follows [139, Theorem 3] M −1 (0) = Sol(f, C). From u∗ = PSol(f,C) (x0 ), x¯ ∈ Sol(f, C), (3.3.25) and the weak lower semicontinuity of the norm it follows that k→∞

ku∗ − x0 k ≤ k¯ x − x0 k ≤ lim inf kxkj − x0 k j→∞

≤ lim sup kx − x0 k ≤ ku∗ − x0 k, kj

(3.3.43)

j→∞

so lim kxkj − x0 k = k¯ x − x0 k.

j→∞

(3.3.44)

Hence we have xkj − x0 * x¯ − x0 and kxkj − x0 k → k¯ x − x0 k, and so by the Kadec-Klee kj property of H ((2.0.7)), we obtain kx − x¯k → 0. Since xkj = PQkj (x0 ) and u∗ ∈ Qkj , we see that



−ku∗ − xkj k2 = u∗ − xkj , xkj − x0 + u∗ − xkj , x0 − u∗

≥ u∗ − xkj , x0 − u∗ . (3.3.45) Taking the limit as j → ∞, we obtain

− ku∗ − x¯k2 ≥ u∗ − x¯, x0 − u∗ ≥ 0,

(3.3.46)

lim xkj = x¯ = u∗ .

(3.3.47)

and therefore k→∞

 ∞  ∞ Since xkj j=0 is an arbitrary weakly convergent subsequence of xk k=0 , we conclude that  k ∞ x k=0 converges strongly to u∗ , i.e., limk→∞ xk = u∗ = PSol(f,C) (x0 ), as asserted.

3.3.3

The second modification of the subgradient extragradient algorithm

Takahashi, Takeuchi and Kubota [148] presented an algorithm for finding a fixed point of a nonexpansive mapping S in Hilbert space. Let C ⊆ H be a closed and convex subset, and S be a nonexpansive mapping of C into itself such that Fix(S) 6= ∅. Their iterative method, known as the shrinking projection method, is presented next.

Algorithms for Solving Monotone Variational Inequalities and Applications

53

Algorithm 3.3.2. Step 0: Select an arbitrary starting point x0 ∈ H, C1 = C, x1 = PC1 (x0 ), and set k = 1. Step 1: Given the current iterate xk , compute  k k  + (1 − αk )S(xk ),  y = αk x



 Ck+1 = z ∈ Ck | y k − z ≤ xk − z ,   xk+1 = P 0 Ck+1 (x ) ,

(3.3.48)

where {αk }∞ k=1 ⊂ [0, α] for some α ∈ [0, 1). Step 2: Set k ← (k + 1) and return to Step 1.

to

 ∞ They proved that any sequence xk k=0 generated by Algorithm 3.3.2 converges strongly u∗ = PFix(S) (x0 ).

(3.3.49)

A comment on implementability is in order here. Observe that in the TakahashiTakeuchi-Kubota algorithm, the sets Ck+1 become increasingly complicated because at every iteration another half-space is used to cut the set. This may render the algorithm unimplementable, unless one uses an inner-loop for calculating an approximation of the projection xk+1 = PCk+1 (x0 ) at each iterative step. Such an inner-loop can indeed be constructed from an iterative projection method that solves the Best Approximation Problem (BAP); see, e.g., [40] and the many references therein. These considerations also apply to our next algorithm. Inspired by Algorithm 3.3.2 and a recent development of Sudsukh [146], we now present the following algorithm for solving variational inequalities. Algorithm 3.3.3. The second modification of the subgradient extragradient algorithm Step 0: Select an arbitrary starting point x0 ∈ H, a constant τ > 0, C1 = C, x1 = PC1 (x0 ), and set k = 1. Step 1: Given the current iterate xk , compute  k y = PC (xk − τ f (xk )),     z k = α xk + (1 − α )P (xk − τ f (y k )), k

k k Tk k

  Ck+1 = z ∈ Ck | z − z ≤ x − z ,    k+1 x = PCk+1 (x0 ) ,

(3.3.50)

where {αk }∞ k=1 ⊂ [0, α] for some α ∈ (0, 1) and Tk is as in (3.1.28). Step 2: Set k ← (k + 1) and return to Step 1. We now prove a convergence theorem for this algorithm by using arguments which are quite similar to those we employed in the proof of Theorem 3.3.1.

54

3.3.4

Variational Inequalities

Convergence of the second modification of the subgradient extragradient algorithm

Theorem3.3.2. Assume Conditions 3.1.1–3.1.3 hold and τ ∈ (0, 1/L). Then any  k that ∞ k ∞ sequences x k=0 and y k=0 generated by Algorithm 3.3.3 strongly converge to the same point u∗ ∈ Sol(f, C) and furthermore, u∗ = PSol(f,C) (x0 ).

(3.3.51)

Proof. First we show that Ck is closed and convex for all k ≥ 1. C1 = C is clearly closed and convex by assumption. Now assume that Ck is closed and convex. Then Ck+1 is closed and convex asan intersection of Ck and a half-space. Now we prove, using induction, that ∞ the sequence xk k=0 is well-defined, by showing that Sol(f, C) ⊂ Ck for all k ≥ 1. For C1 = C this is clear. Now assume that Sol(f, C) ⊂ Ck . Denote tk := PTk (xk − τ f (y k )) for all k ≥ 0. Let u ∈ Sol(f, C). Applying (2.0.29) with the set C there as Tk , x = xk − τ f (y k ) and y = u, we obtain

k



t − u 2 ≤ xk − τ f (y k ) − u 2 − xk − τ f (y k ) − tk 2

= kxk − uk2 − kxk − tk k2 + 2τ f (y k ), u − tk = kxk − uk2 − kxk − tk k2





 + 2τ f (y k ) − f (u), u − y k + f (u), u − y k + f (y k ), y k − tk .

(3.3.52)

By Condition 3.1.2,

f (y k ) − f (u), u − y k ≤ 0,

(3.3.53)

f (u), u − y k ≤ 0.

(3.3.54)

and since u ∈ Sol(f, C), So,

k



t − u 2 ≤ kxk − uk2 − kxk − tk k2 + 2τ f (y k ), y k − tk

= kxk − uk2 − kxk − y k k2 − 2 xk − y k , y k − tk

− ky k − tk k2 + 2τ f (y k ), y k − tk = kxk − uk2 − kxk − y k k2 − ky k − tk k2

+ 2 xk − τ f (y k ) − y k , tk − y k .

(3.3.55)

By the definition of Tk ,

 xk − τ f (xk ) − y k , tk − y k ≤ 0,

(3.3.56)

so xk − τ f (y k ) − y k , tk − y k



= xk − τ f (xk ) − y k , tk − y k + τ f (xk ) − τ f (y k ), tk − y k

≤ τ f (xk ) − τ f (y k ), tk − y k ≤ τ kf (xk ) − f (y k )kktk − y k k



≤ τ Lkxk − y k kktk − y k k,

(3.3.57)

Algorithms for Solving Monotone Variational Inequalities and Applications

55

where the last two inequalities follow from the Cauchy–Schwarz inequality and Condition 3.1.3. Therefore

k

t − u 2 ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + 2τ Lkxk − y k kktk − y k k. (3.3.58) Observe that 0 ≤ ktk − y k k − τ Lkxk − y k k

2

= ktk − y k k2 − 2τ Lkxk − y k kktk − y k k + τ 2 L2 kxk − y k k2 ,

(3.3.59)

2τ Lkxk − y k kktk − y k k ≤ ktk − y k k2 + τ 2 L2 kxk − y k k2 .

(3.3.60)

so, Thus

k

t − u 2 ≤ kxk − uk2 − kxk − y k k2 − ky k − tk k2 + ktk − y k k2 + τ 2 L2 kxk − y k k2 = kxk − uk2 − kxk − y k k2 + τ 2 L2 kxk − y k k2 = kxk − uk2 + (τ 2 L2 − 1)kxk − y k k2 ≤ kxk − uk2 ,

(3.3.61)

where the last inequality follows from the fact that τ ∈ (0, 1/L). Now by the definition of z k and (2.0.4), we get kz k − uk2 = kαk xk + (1 − αk )tk − uk2   = kαk xk − u + (1 − αk ) tk − u k2 = αk kxk − uk2 + (1 − αk )ktk − uk2   − αk (1 − αk )k xk − u − tk − u k2 ≤ αk kxk − uk2 + (1 − αk )ktk − uk2 ≤ αk kxk − uk2 + (1 − αk ) kxk − uk2 + (τ 2 L2 − 1)kxk − y k k2



= kxk − uk2 + (1 − αk )(τ 2 L2 − 1)kxk − y k k2 ≤ kxk − uk2 ,

(3.3.62)

so u ∈ Ck+1 and therefore Sol(f, C) ⊂ Ck+1 . Denote u∗ = PSol(f,C) (x0 ). It is clear that u∗ ∈ Sol(f, C). Since Sol(f, C) ⊂ Ck+1 , u∗ ∈ Sol(f, C) and xk+1 = PCk+1 (x0 ) , we have kxk+1 − x0 k ≤ ku∗ − x0 k for all k ≥ 0. (3.3.63)  k ∞ This implies, in particular, that x k=0 is bounded, and it follows from (3.3.61) and  ∞  ∞ (3.3.62) that so are tk k=0 and z k k=0 . By the definition of the iterative step, xk = PCk (x0 ), so by (2.0.28) we have

k x − x0 , z − xk ≥ 0 for all z ∈ Ck . (3.3.64) Since Sol(f, C) ⊂ Ck , we have

k x − x0 , u − xk ≥ 0 for all u ∈ Sol(f, C).

(3.3.65)

56

Variational Inequalities

So by the Cauchy–Schwarz inequality,

0 ≤ xk − x0 , u − xk = xk − x0 , u − x0 + x0 − xk

= −kxk − x0 k2 + xk − x0 , u − x0 ≤ −kxk − x0 k2 + kxk − x0 kku − x0 k

(3.3.66)

and therefore kxk − x0 k ≤ ku − x0 k for all u ∈ Sol(f, C).

(3.3.67)

Now by the definition of our algorithm, xk = PCk (x0 ), xk+1 = PCk+1 (x0 ) ∈ Ck+1 ⊂ Ck and (2.0.28), we have

0 x − xk , xk − xk+1 ≥ 0 for all k ≥ 0. (3.3.68) Now by (3.3.68),

k



x − xk+1 2 = xk − x0 + x0 − xk+1 2

2

2

= xk − x0 + 2 xk − x0 , x0 − xk+1 + x0 − xk+1

2

2

= xk − x0 + 2 xk − x0 , x0 − xk + xk − xk+1 + x0 − xk+1

2

2

2

= xk − x0 − 2 xk − x0 + 2 xk − x0 , xk − xk+1 + x0 − xk+1

2

2

(3.3.69) ≤ − xk − x0 + x0 − xk+1 . Thus, kxk − x0 k ≤ kx0 − xk+1 k for all k ≥ 0, (3.3.70)  k ∞ and therefore the sequence kx − x0 k k=0 is increasing and since it is also bounded, it converges to some l ∈ R, i.e., lim kxk − x0 k = l. (3.3.71) k→∞

Apply (3.3.71) to (3.3.69), to obtain lim kxk+1 − xk k = 0.

k→∞

(3.3.72)

In addition, since xk+1 ∈ Ck+1 ⊂ Ck , kz k − xk+1 k2 ≤ kxk − xk+1 k2 ,

(3.3.73)

kxk − z k k ≤ kxk − xk+1 k + kxk+1 − z k k ≤ 2kxk − xk+1 k,

(3.3.74)

lim kxk − z k k = 0.

(3.3.75)

kz k − uk2 ≤ kxk − uk2 + (1 − αk )(τ 2 L2 − 1)kxk − y k k2 ,

(3.3.76)

and by the triangle inequality,

so k→∞

By (3.3.62),

Algorithms for Solving Monotone Variational Inequalities and Applications

57

or kxk − uk2 − kz k − uk2 (1 − αk )(1 − τ 2 L2 )   1 k k k k kx − uk − kz − uk kx − uk + kz − uk = (1 − αk )(1 − τ 2 L2 )  1 ≤ kxk − uk + kz k − uk kxk − z k k. (3.3.77) 2 2 (1 − αk )(1 − τ L )  ∞  ∞ By (3.3.75) and the boundedness of xk k=0 and z k k=0 , we get kxk − y k k2 ≤

lim kxk − y k k = 0.

(3.3.78)

lim kf (xk ) − f (y k )k = 0.

(3.3.79)

k→∞

By Condition 3.1.3, k→∞

The rest of the proof follows the lines of the proof of Theorem 3.3.1. Remark 3.3.3. In Theorems 3.3.1 and 3.3.2 we assume that f is Lipschitz continuous on H with constant L > 0 (Condition 3.1.3). If we assume that f is Lipschitz continuous only on C with constant L > 0, we can use a Lipschitz extension of f to H in order to evaluate the function at xk . Such an extension exists by Kirszbraun’s theorem [102], which states that there exists a Lipschitz continuous function f˜ : H → H that extends f and has the same Lipschitz constant L as f . Alternatively, we can take f˜ = f PC . In any case, the extension is not necessarily monotone on H but preserves monotonicity on C, which is all that we need in the proofs. Remark 3.3.4. Note that in the proofs of Theorems 3.3.1 and 3.3.2, once the fact that the weak cluster points are solutions is established, it is possible to refer to [61, Proposition 3.1(vi)] or to [12, Theorem 3.5 (iv)] and deduce the strong convergence to the projection onto the solution set of the initial point.

3.4

The constrained variational inequality problem

In Section 3.2 we tried to find a solution to a VIP which is also a fixed point of some nonexpansive operator. This idea led us to phrase a more general problem which we called the Constrained Variational Inequality Problem (CVIP). Let f : H → H, and let C and Ω be nonempty, closed and convex subsets of H. The Constrained Variational Inequality Problem (CVIP) is: find x∗ ∈ C ∩ Ω such that hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C.

(3.4.1)

The iterative algorithm for this CVIP, presented next, is inspired by our earlier work [48, 50] in which we modified the extragradient method of Korpelevich [105]. ∞ Let {λk }∞ k=0 ⊂ [a, b] for some a, b ∈ (0, 1/κ), and let {αk }k=0 ⊂ [c, d] for some c, d ∈ (0, 1). Then the following algorithm generates two sequences that converge to a point z ∈ Ω ∩ Sol(f, C), as the convergence theorem that follows shows.

58

Variational Inequalities

Algorithm 3.4.1. Step 0: Select an arbitrary starting point x0 ∈ H. Step 1: Given the current iterate xk , compute y k = PC (xk − λk f (xk )), construct the set Tk as in (3.1.28), that is  

 w ∈ H | (xk − τ f (xk )) − y k , w − y k ≤ 0 , if xk − τ f (xk ) 6= y k , Tk :=  H, if xk − τ f (xk ) = y k .

(3.4.2)

(3.4.3)

and then calculate the next iterate by  xk+1 = αk xk + (1 − αk )PΩ PTk (xk − λk f (y k )) .

(3.4.4)

Theorem 3.4.1. Let f : H → H, and let C and Ω be nonempty, closed andconvex sets. k ∞ Assume that Conditions 3.1.2–3.1.3 hold, and that Ω ∩ Sol(f, C) 6= ∅. Let x k=0 and  k ∞ ⊂ [a, b] for some y k=0 be any two sequences generated by Algorithm 3.4.1 with {λk }∞  ∞  k=0 ∞ k ∞ a, b ∈ (0, 1/κ) and {αk }k=0 ⊂ [c, d] for some c, d ∈ (0, 1). Then x k=0 and y k k=0 converge weakly to the same point z ∈ Ω ∩ Sol(f, C) and z = lim PΩ∩Sol(f,C) (xk ). k→∞

(3.4.5)

Proof. For the special case of fixed λk = τ for all k ≥ 0 this theorem is a direct consequence of our [50, Theorem 7.1] with the choice of the nonexpansive operator S there to be PΩ . However, a careful inspection of the proof of [50, Theorem 7.1] reveals that it also applies to a variable sequence {λk }∞ k=0 as used here. To relate our results to some previously published works we mention two lines of research related to our notion of the CVIP. Takahashi and Nadezhkina [123] proposed an algorithm for finding a point x∗ ∈ Fix(N )∩Sol(f, C), where N : C → C is a nonexpansive operator. The iterative step of their algorithm is as follows. Given the current iterate xk , compute y k = PC (xk − λk f (xk ))

(3.4.6)

 xk+1 = αk xk + (1 − αk )N PC (xk − λk f (y k )) .

(3.4.7)

and then The restriction PΩ |C of our PΩ in (3.4.4) is, of course, nonexpansive, and so it is a special case of N in [123]. But a significant advantage of our Algorithm 3.4.1 lies in the fact that we compute PTk onto a half-space in (3.4.4) whereas the authors of [123] need to project onto the convex set C. Various ways have been proposed in the literature to cope with the inherent difficulty of calculating projections (onto closed convex sets) that do not have a closed-form expression; see, e.g., He, Yang and Duan [90], or [49]. Bertsekas and Tsitsiklis [20, Page 288] consider the following problem in Euclidean space: given f : Rn → Rn , polyhedral sets C1 ⊂ Rn and C2 ⊂ Rm , and an m × n matrix A, find a point x∗ ∈ C1 such that A(x∗ ) ∈ C2 and hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C1 ∩ {y | A(y) ∈ C2 }.

(3.4.8)

Algorithms for Solving Monotone Variational Inequalities and Applications

59

Denoting Ω = A−1 (C2 ), we see that this problem becomes similar to, but not identical with a CVIP. While the authors of [20] seek a solution in Sol(f, C1 ∩ Ω), we aim in our CVIP at Ω∩Sol(f, C). They propose to solve their problem by the method of multipliers, which is a different approach than ours, and they need to assume that either C1 is bounded or AT A is invertible, where AT is the transpose of A.

3.5

The multi-valued δ-algorithmic scheme

In this section we are concerned with the Variational Inequality Problem (VIP) in the Euclidean space Rn with a mapping instead of an operator. Let C ⊆ Rn be a nonempty, closed and convex subset and T : Rn → P (Rn ) be a mapping. The VIP for T and C, denoted by VIP(T, C), is to find a point x∗ ∈ C such that there exists u∗ ∈ T (x∗ ) satisfying hu∗ , x − x∗ i ≥ 0 for all x ∈ C.

(3.5.1)

We denote the solution set of (3.5.1) by Sol(T, C). We present an iterative method for pointto-set paramonotone, maximal monotone operator T : Rn → P (Rn ). In [47] we introduce the δ-algorithmic scheme for the VIP with operators, inspired by this result we extend our scope here to VIP with mappings. Following the work of Iusem and Cruz [19] we introduce the δ-algorithmic scheme with point-to-set paramonotone, maximal monotone operator. It appears that this method includes as special cases some earlier algorithms appearing in the literature. In addition, this algorithm employs projections onto any user chosen separating hyperplanes, therefore, beside the “freedom” in choosing this hyperplanes, the computation of these projections can be easily made. Observe that in Algorithm 3.1.2 the bounding hyperplanes of the subgradiental halfspaces Ck , separate the current point z from the set C, the question again arises whether or not any other separating hyperplanes can be used in the algorithm while retaining the overall convergence to the solution. The answer to this question for the single-valued case is affirmative as it can be seen in our earlier work [47], and it holds under some not too restrictive conditions. Under these conditions, we showed that, as a matter of fact, the hyperplanes need to separate not just the point z from the feasible set of (3.5.1), but rather separate a “small” ball around z from C. This algorithm is called the δ-algorithmic scheme, our goal is to extend this algorithmic scheme for a point-to-set operators, i.e., solve (3.5.1). It appears that this structural algorithmic discovery for the point-to-set operator generalizes both Algorithms, the classical projection method (3.1.3) and Fukushima’s method (3.1.2) with a point-to-set operators as it was presented in [19]. Our work is admittedly a theoretical development and no numerical advantages are claimed at this point. The large “degree of freedom” of choosing the super-sets, onto which the projections of the algorithm are performed, from a wide family of half-spaces may include specific algorithms that have not yet been explored. The construction of a δ-algorithmic scheme was originally introduced by Aharoni, Berman and Censor [1] (δ − η algorithm) for the Convex Feasibility Problem (CFP), see also [59, Chapte 5]. It was also applied to the Best Approximation Problem (BAP) by Bregman et al. in [25].

60

Variational Inequalities

3.5.1

The δ-algorithmic scheme

Let T : Rn → P (Rn ) be paramonotone and maximal monotone operator and C ⊆ Rn be a nonempty, closed and convex subset. For the convergence of our δ-algorithmic scheme we assume the following conditions. Condition 3.5.1. Sol(T, C) 6= ∅. Condition 3.5.2. There exist y ∈ C and a bounded set D ⊂ Rn such that hu, x − yi ≥ 0, for all x ∈ / D, and for all u ∈ T (x).

(3.5.2)

Let {βk }∞ k=0 be a sequence of positive numbers satisfying ∞ X

βk = ∞, and

k=0

∞ X

βk2 < ∞.

(3.5.3)

k=0

Our δ-algorithmic scheme for solving (3.5.1) is as follows. Algorithm 3.5.1. Step 0: Choose a constant δ ∈ (0, 1], select a starting point x0 ∈ C and set k = 0.  k k Step 1: Given the current

stop. Otherwise,  k iterate x , if 0 ∈ T x , then k k k

and calculate the “shifted point” (1) take u ∈ T x , u 6= 0, choose ηk = max 1, u z k := xk −

βk k u . ηk

(3.5.4)

 (2) Choose any separating hyperplane Hk from Aδ dist(xk , C) ((2.0.58)) and calculate the next iterate xk+1 by  if xk ∈ int C,  xk , k+1 (3.5.5) x =  Pk (z k ), if xk ∈ / int C, where Pk is the projection operator onto the half-space Hk− whose bounding hyperplane is Hk . Step 3: If xk+1 = xk , stop, otherwise, set k ← (k + 1) and return to Step 1. The iterative step of this algorithmic scheme is illustrated in Figure 3.11. Remark 3.5.1. Observe that there is no need to calculate in practice the radius δ dist(xk , C) of the ball B(xk , δ dist(xk , C)). If there would have been a need to calculate this then it would, obviously, amount to preforming a projection of xk onto C, which is the very thing that we are trying to circumvent. All that is needed, when deriving from the algorithmic scheme a specific algorithm, is to show that the specific algorithm indeed “chooses” the hyperplanes in concert with the requirement of separating such B(xk , δ dist(xk , C)) balls from the feasible set of (3.5.1). We demonstrate this later on.

Algorithms for Solving Monotone Variational Inequalities and Applications

61

Figure 3.11: Illustration of the iterative step of Algorithm 3.5.1.

3.5.2

Convergence

In this section we establish the convergence theorem for Algorithm 3.5.1. We divide our proof into several lemmas, similar as in [19]. The next Lemma is taken from [47, Lemma 14]. Lemma 3.5.2. Let C ⊆ Rn be a nonempty, closed and convex subset, and let δ ∈ (0, 1]. Let W ⊆ Rn be a nonempty, convex and compact subset, let W \C := {x ∈ W | x ∈ / C} . Choose any separating hyperplane H from Aδ (δ dist(x, C)) and denote by θ(x) the projection of x onto the half-space H − . Then there exists a constant µ ∈ [0, 1) such that dist(θ(x), C) ≤ µ dist(x, C), for all x ∈ W \C.

(3.5.6)

Now we prove that if Algorithm 3.5.1 stops then it has reached a solution of the VIP (3.5.1). Theorem 3.5.3. If xk+1 = xk occurs for some k ≥ 0 in Algorithm 3.5.1, then xk ∈ Sol(T, C). Proof. Suppose that xk+1 = xk , then the radius of B(xk , C, δ) is zero which implies that xk ∈ C since δ > 0. By the characterization of the metric projection with respect to Hk− ((2.0.28)), we get    βk k k k+1 k+1 x − u − x ,w − x ≤ 0 for all w ∈ Hk− . (3.5.7) ηk By taking xk+1 = xk in (3.5.7), we obtain   βk k k − u , w − x ≤ 0 for all w ∈ Hk− . ηk

(3.5.8)

62

Variational Inequalities

Since βk > 0, ηk > 0 and C ⊆ Hk− we get that huk , w − xk i ≥ 0 for all w ∈ C,

(3.5.9)

meaning that xk ∈ Sol(T, C). In the remainder of this section we suppose that Algorithm 3.5.1 generates an infinite sequence {xk }∞ k=0 . The next lemmas are central for the convergence theorem of Algorithm 3.5.1, the proof follows similar lines as in [19]. Lemma 3.5.4. Let y and D be as in Condition 3.5.2, choose λ > 0 such that kx0 − yk ≤ λ, and D ⊆ B(y, λ). Then any sequence {xk }∞ k=0 generated by Algorithm 3.5.1 have the following properties. (i) if xk ∈ D then kxk+1 − yk2 ≤ λ2 + βk2 + 2βk λ, (ii) if xk ∈ / D then kxk+1 − yk2 ≤ kxk − yk2 + βk2 . Proof. Since y ∈ C, and C ⊆ Hk− it follows that y ∈ Hk− for all k ≥ 0, i.e., y = Pk (y). Due to the nonexpansivness of the operator Pk with respect to Hk− , we get



2

2 



k βk k βk k k+1 2 k

kx − yk = Pk x − u − Pk (y) ≤ x − u − y

ηk ηk  2

2 βk

uk 2 − 2 βk huk , xk − yi = xk − y + ηk ηk

k

2 β k (3.5.10) ≤ x − y + βk2 − 2 huk , xk − yi. ηk Consider the following two cases: (i) if xk ∈ D, apply the Cauchy-Schwartz inequality, the definition of ηk and the assumption that D ⊆ B(y, λ) to (3.5.10) and obtain that

2

βk kxk+1 − yk2 ≤ xk − y + βk2 + 2 uk xk − y ηk 2 2 ≤ λ + βk + 2βk λ.

(3.5.11)

(ii) if xk ∈ / D, by Condition 3.5.2, we get that huk , xk − yi ≥ 0. In addition, since βk /ηk > 0 we obtain from (3.5.10) that

k+1

2

2

x (3.5.12) − y ≤ xk − y + βk2 , as asserted. k ∞ Lemma 3.5.5. Assume that Condition 3.5.2 holds. Let {xk }∞ k=0 and {u }k=0 be any two sequences generated by Algorithm 3.5.1. Then, k ∞ (i) the sequences {xk }∞ k=0 and {u }k=0 are bounded, (ii) limk→∞ dist(xk , C) = 0. (iii) limk→∞ kxk+1 − xk k = 0. (iv) All cluster points of {xk }∞ k=0 belong to C.

Algorithms for Solving Monotone Variational Inequalities and Applications

63

Proof. (i) Let the point y and the set D be as in Condition 3.5.2, choose λ > 0 and β > 0 such that kx0 − yk ≤ λ, D ⊆ B(y, λ) and βk ≤ β, for all k ≥ 0. Observe that the existence p P∞ 2 2 of β is guaranteed by (3.5.3). Denote by σ := k=0 βk and λ := λ + σ + 2βλ. We will prove the boundedness of {xk }∞ k=0 by showing that {xk }∞ k=0 ⊆ B(y, λ).

(3.5.13)

Consider the two cases: (i) If xk ∈ B(y, λ) then xk ∈ B(y, λ) since λ > λ.  (ii) If xk ∈ / B(y, λ), denote by `(k) = max ` < k | x` ∈ B(y, λ) , which is well defined since kx0 − yk ≤ λ, i.e., x0 ∈ B(y, λ). Using Lemma 3.5.4(i) to obtain

`(k)+1

2 2 2

x − y ≤ λ2 + β`(k) + 2β`(k) λ ≤ λ2 + β`(k) + 2βλ.

(3.5.14)

Now, for `(k) + 1 < j ≤ k − 1, xj ∈ / D, we get

j+1



x − y 2 ≤ xj − y 2 + βj2 .

(3.5.15)

Summing up (3.5.14) with `(k) + 1 < j ≤ k − 1, k−1 X

k



x − y 2 ≤ x`(k)+1 − y 2 +

βj2 .

(3.5.16)

j=`(k)+1

Combining inequalities (4.3.10) and (4.3.21) yield ∞ k−1 X X

k

2 2

x − y 2 ≤ λ2 + βj2 + 2βλ βj + 2βλ ≤ λ + j=0

j=`(k) 2

2

= λ + σ + 2βλ = λ .

(3.5.17)

k ∞ Therefore, xk ∈ B(y, λ) which implies that the sequence {xk }∞ k=0 is bounded. Since {x }k=0 k ∞ is bounded, it follows by Lemma 2.1.6(iii) that so is {u }k=0 . (ii) Observe that





βk k k+1 k k k

kx − Pk (x )k = Pk x − u − Pk (x ) ηk βk ≤ kuk k ≤ βk (3.5.18) ηk

for all k ≥ 0. By Lemma 3.5.2 with W = B(y, λ) there exists µ e ∈ [0, 1) such that dist(Pk (x), C) ≤ µ e dist(x, C), for all x ∈ B(y, λ)\C.

(3.5.19)

So, for all k ≥ 0 such that xk ∈ / C, (3.5.13) implies that dist(Pk (xk ), C) ≤ µ e dist(xk , C).

(3.5.20)

64

Variational Inequalities

If xk ∈ C then µ e = 0 since C ⊆ Hk− . Denote by ck = PC Pk xk



, namely,

kPk (xk ) − ck k = dist(Pk (xk ), C).

(3.5.21)

Then, by the triangle inequality, we get kxk+1 − ck k = kxk+1 − Pk (xk ) + Pk (xk ) − ck k ≤ kxk+1 − Pk (xk )k + kPk (xk ) − ck k.

(3.5.22)

Since ck ∈ C, we have dist(xk+1 , C) ≤ kxk+1 − ck k.

(3.5.23)

It follows from (3.5.18), (3.5.20)–(3.5.23) that dist(xk+1 , C) ≤ kxk+1 − Pk (xk )k + dist(Pk (xk ), C) ≤ βk + µ e dist(xk , C).

(3.5.24)

Therefore, by applying Lemma 2.0.11 with ξk = dist(xk , C), νk = βk and µ = µ e, we get that lim dist(xk , C) = 0. (3.5.25) k→∞

(iii) By (3.5.18) and the triangle inequality, we have



kxk+1 − xk k ≤ xk+1 − Pk (xk ) + Pk (xk ) − xk ≤ βk + dist(xk , C).

(3.5.26)

Since limk→∞ βk = 0 by (ii) and (3.5.26) we get that limk→∞ kxk+1 − xk k = 0. (iv) Follows immediately from (ii). Theorem 3.5.6. Let T : Rn → P (Rn ) be a paramonotone mapping. Assume that Condition 3.5.1 holds and let {xk }∞ k=0 be any sequence generated by Algorithm 3.5.1. Then any k ∞ cluster point of {x }k=0 belongs to Sol(T, C). k ∞ Proof. Let {xk }∞ k=0 and {u }k=0 be any two sequences generated by Algorithm 3.5.1 and let the operator γk be as (2.1.17). Using the facts that C ⊆ Hk− , Pk is nonexpansivness and the definition of γk we have that for all x∗ ∈ Sol(T, C)



2 

2  

k βk k

βk k k ∗ ∗

kx − x k = Pk x − u − Pk (x ) ≤ x − u − x ηk ηk  2

βk

uk 2 − 2 βk huk , xk − x∗ i = kxk − x∗ k2 + ηk η  k ∗ γk (x ) ≤ kxk − x∗ k2 − βk 2 − βk . (3.5.27) ηk  ∞ k ∞ k k Since {xk }∞ k=0 and {u }k=0 are bounded, so is { x , u }k=0 . Therefore by Lemma 2.1.7 it is ∗ suffices to prove that {γk (x∗ )}∞ k=0 has a non-positive cluster point for some x ∈ Sol(T, C). k+1

∗ 2

Algorithms for Solving Monotone Variational Inequalities and Applications

65

Assume, by negation, that this is not true, and take any x ∈ Sol(T, C). Then there exists k ≥ 0 and ρ > 0 such that (3.5.28) γk (x) ≥ ρ for all k ≥ k. k Since {uk }∞ k=0 is bounded, there exists K > 1 such that ku k ≤ K for all k ≥ 0. Therefore  ηk = max 1, uk ≤ max {1, K} = K for all k ≥ 0. (3.5.29)

Thus, there exists ρ > 0 such that γk (x) γk (x) ≥ ≥ ρ, ηk K

(3.5.30)

applying this with x ∈ Sol(T, C) to (3.5.27) kxk+1 − xk2 ≤ kxk − xk2 − βk (2ρ − βk ) for all k ≥ k.

(3.5.31)

0

By (3.5.3) limk→∞ βk = 0, then there exists k ≥ k such that 0

So, we get for all k ≥ k

βk ≤ ρ for all k ≥ k.

(3.5.32)

ρβk ≤ kxk − xk2 − kxk+1 − xk2 .

(3.5.33)

0

0

Summing up (3.5.33) with m ≥ k ≥ k and deduce that ρ

m X

βk ≤

m X

kxk − xk2 − kxk+1 − xk2



k=k0

k=k0

0

0

≤ kxk − xk2 − kxm+1 − xk2 ≤ kxk − xk2 .

(3.5.34)

By taking the limit as m → ∞ in (3.5.34) we contradict (3.5.3). Therefore, there exists a cluster point of {xk }∞ k=0 belonging to Sol(T, C). Now in order to show that all cluster points of {xk }∞ k=0 belong to Sol(T, C), suppose that this is not true, i.e., there exists a cluster point z of {xk }∞ / Sol(T, C). By k=0 such that z ∈ Lemmas 2.1.6(iv) and 3.5.5(iii) we get that Sol(T, C) is closed and limk→∞ kxk+1 − xk k = 0, k ∞ so by using Lemma 2.0.12, we can obtain a subsequence {xkj }∞ j=0 of {x }k=0 and a real number ς > 0 such that   dist xkj +1 , Sol(T, C) > dist xkj , Sol(T, C) , (3.5.35) and  dist xkj , Sol(T, C) > ς.

(3.5.36)

Let the function γk (x) be as in (2.1.17) and define another function γ : Sol(T, C) → R as follows γ(x∗ ) := lim inf γkj (x∗ ). (3.5.37) j→∞



)}∞ k=0

By Lemma 3.5.5(ii), {γk (x is bounded. Next that we prove that γ is continuous and 0 actually γ : Sol(T, C) → (0, ∞). Take x∗ , x ∈ Sol(T, C). Note that 0

0

γkj (x∗ ) = hukj , xkj − x∗ i = hukj , xkj − x i + hukj , x − xi 0

0

≤ γkj (x ) + Kkx∗ − x k.

(3.5.38)

66

Variational Inequalities 0

0

Thus, γ(x∗ ) ≤ γ(x )+Kkx∗ −x k, where K is a upper bound of {kuk k}∞ k=0 . Now by reversing 0 ∗ the role of x , x , we obtain 0

0

| γ(x∗ ) − γ(x ) |≤ Kkx∗ − x k,

(3.5.39)

meaning that γ is continuous. Now, in order to show that γ(x∗ ) > 0 for all x∗ ∈ Sol(T, C), assume that this is not true, then by Lemma 2.1.7 {xkj }∞ j=0 has a cluster point in Sol(T, C), in contradiction with (3.5.36). Now, denote by U the set of cluster points of {xk }∞ k=0 . We prove that U ⊆ Sol(T, C). By the above arguments, U ∩ Sol(T, C) 6= ∅ and since {xk }∞ k=0 is bounded, the sets U and U ∩ Sol(T, C) are compact. By the continuity of γ, it follows that there exists x∗ ∈ U ∩ Sol(T, C) such that 0



0

γ(x ) ≥ γ(x ) > 0 for all x ∈ U ∩ Sol(T, C).

(3.5.40)

By (3.5.37) and (3.5.3) there exists b j such that for all indexes j ≥ b j, we have ∗

γ(x ) , γkj (x ) ≥ 2 0

and

(3.5.41)



βkj

γ(x ) . < K

(3.5.42)

In view of (3.5.27), using (3.5.41) and (3.5.42), we get, for all x ∈ U ∩ Sol(T, C) and all j ≥b j, kx

kj+1

  γkj (x∗ ) − x k ≤ kx − x k − βkj 2 − βkj ηkj   γ(x∗ ) kj ∗ 2 ≤ kx − x k − βkj − βkj < kxkj − x∗ k2 . K ∗ 2

kj

∗ 2

(3.5.43)

So, it follows that   dist xkj+1 , U ∩ Sol(T, C) ≤ dist xkj , U ∩ Sol(T, C) for all j ≥ b j, in contradiction with (3.5.35). Therefore all clusters points of {xk }∞ k=0 solve the VIP(T, C). In the next theorem we summarize the convergence sequence properties of Algorithm 3.5.1. Theorem 3.5.7. Let T : Rn → P (Rn ) be paramonotone and maximal monotone mapping. Assume that Condition 3.5.1 holds and let {xk }∞ k=0 be any sequence generated by Algorithm k ∞ k+1 3.5.1. Then {x }k=0 is bounded, limk→∞ kx − xk k = 0 and all cluster points of {xk }∞ k=0 belong to Sol(T, C). If the VIP(T, C) has a unique solution then the whole sequence {xk }∞ k=0 converge to it.

Algorithms for Solving Monotone Variational Inequalities and Applications

67

Remark 3.5.8. In [80] for the single-valued case, i.e., operators, convergence is proved under continuity, strongly monotonicity and the following condition on T : Rn → Rn . There exist y ∈ C, β > 0 and a bounded set D ⊂ Rn such that hT (x), x − yi ≥ βkT (x)k for all x ∈ / D.

(3.5.44)

So, it can be easily verified that our Condition 3.5.2 is weaker than (3.5.44) (see [19] for more details). In addition strong monotonicity imply uniqueness of the solution to VIP(T, C), and also paramonotonicity of T , therefore in this case, according to Theorem 3.5.7 any sequence {xk }∞ k=0 generated by Algorithm 3.5.1 converge to it.

3.5.3

Special cases of the δ-algorithmic scheme

We now recall the example given in [47] as an illustration that additional algorithms can be derived from Algorithm 3.5.1. This particular realization requires that (the interior) int(C) is nonempty. The idea of using an interior point as an anchor to generate a separating hyperplane appeared previously in [1] for the Convex Feasibility Problem and in [79] for an outer approximation method. Given a mapping T : Rn → P (Rn ) and C ⊆ Rn be a nonempty, closed and convex subset. Algorithm 3.5.2. Step 0: Let y ∈ int(C) be fixed and given. Select an arbitrary starting point x0 ∈ Rn and set k = 0.  Step 1: Given the current iterate xk , if 0 ∈ T xk , then stop. If xk ∈ C set xk+1 = xk and again stop. Otherwise,   (1) take uk ∈ T xk , uk 6= 0, and choose ηk = max 1, uk , (2) calculate the “shifted point” βk z k = xk − u k (3.5.45) ηk and construct the line Lk through the points xk and y. (3) Denote by wk the point closet to xk in the set Lk ∩ C. (4) Construct a hyperplane Hk separating xk from C and supporting C at wk . (5) Compute xk+1 = PH − (z k ), where Hk− is the half-space whose bounding hyperplane is k Hk and C ⊆ Hk− , set k = k + 1 and return to Step 1. The iterative step of this algorithm is illustrated in Figure 3.12. We show that Algorithm 3.5.2 generates sequences that converge to a solution of problem (3.5.1) by showing that it is a special case of Algorithm 3.5.1. Theorem 3.5.9. Let T : Rn → P (Rn ) be paramonotone and maximal monotone mapping, and assume that Conditions 3.5.1 and 3.5.2 hold and int(C) 6= ∅. Then any sequence ∗ {xk }∞ k=0 , generated by Algorithm 3.5.2, converges to x ∈ Sol(T, C). Proof. Algorithm 3.5.2 is obviously a special case of Algorithm 3.5.1 where we choose at each step a separating hyperplane which also supports C at the point wk . The stopping criterion is valid by Theorem 3.5.3. In order to invoke Theorem 3.5.7 we have to show that

68

Variational Inequalities

Figure 3.12: Illustration of the iterative step of Algorithm 3.5.2: Interior anchor point.

for such an algorithm δ ∈ (0, 1] always holds. By Lemma 3.5.5, {xk }∞ k=0 is bounded and, k since x ∈ / C, we have kPH − (xk ) − xk k = k

kxk − wk kky − PHk (y)k , ky − wk k

(3.5.46)

and we also have kxk − wk k ≥ dist(xk , C).

(3.5.47)

Defining d := dist(y, bd(C)), where bd(C) is the boundary of C. Since y ∈ int(C), ky − PHk (y)k ≥ d > 0.

(3.5.48)

From the boundedness of {xk }∞ k=0 we know that there exists a positive K such that ky − wk k ≤ K, for all k ≥ 0. Combining these inequalities with (3.5.46) implies that kPH − (xk ) − xk k ≥ k

d dist(xk , C), K

which shows that the algorithm is of the same type of Algorithm 3.5.1 with δ := d/K > 0. We can choose K big enough so that K > d and then δ ∈ (0, 1] as required.

3.5.4

Conclusions

The algorithmic scheme presented here, the δ-algorithmic scheme entails a large “degree of freedom” of choosing the half-spaces, onto which the projections of the algorithm are performed, from a wide family of half-spaces; this “degree of freedom”, besides including some existing results may include specific algorithms that have not yet been explored. The

Algorithms for Solving Monotone Variational Inequalities and Applications

69

convergence of our algorithmic scheme is guarantee under paramonotonicity and maximal monotonicity of the involved mapping T . It is known that there exist projection algorithms for set-valued variational inequality that require only pseudo-monotonicity of T in order of the whole sequence to converges to a solution, (see e.g., Bao and Khanh [10]) but these algorithms require projections on the C which in general is hard to obtain.

3.6

Variational inequality problems over the fixed point set of an FQNE operator

In this section we are interesting in a more general VIP where the feasible set is given as a fixed point set of some operator. Given operators f, h : Rn → Rn , the fixed point set of h, Fix(h) is closed and convex if h is quasi-nonexpansive. Observe that the feasible set C of the VIP in (3.1.1) can be presented as a fixed point set of some operator, for example, C = Fix (PC ). Following this idea Yamada [160] and Yamada and Ogura [161, 163] consider the VIP(f, Fix(h)), i.e., find a point x∗ ∈ Fix(h) such that hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ Fix(h).

(3.6.1)

In case h is quasi-nonexpansive and so-called quasi-shrinking (Definition 2.0.35), an algorithm for solving (3.6.1) in a real Hilbert space was presented in [161] under the conditions of Lipschitz continuity and strong monotonicity of f . The iterative step of the method is as follows: xk+1 = h(xk ) − λk+1 f (h(xk )), (3.6.2) where {λk }∞ k=0 is a non-negative sequence satisfies certain conditions. We present a method for solving the VIP(f, Fix(h)) when the operator h is quasi-nonexpansive. This method generalizes the earlier results of Auslender, Fukushima and the δ-algorithmic scheme with operators. In addition we present the relation between our and Yamada and Ogura method. The next subsections are as follows. The new algorithm is presented in Subsection 3.6.1 and it is analyzed in Subsection 3.6.3. At last in Subsection 3.6.4 we present application of problem (3.6.1).

3.6.1

The Algorithm

We need to assume the following conditions in order to prove convergence of our algorithm.

3.6.2

Conditions

Let ε > 0 and h : Rn → Rn be a FQNE operator. Assume that: Condition 3.6.1. f is continuous on (Fix(h))ε . Condition 3.6.2. f is α-strongly monotone on (Fix(h))ε . Condition 3.6.3. For some y ∈ Fix(h), there exist some β > 0 and a bounded set D ⊂ Rn such that hf (x), x − yi ≥ βkf (x)k for all x ∈ / D. (3.6.3)

70

Variational Inequalities

Condition 3.6.4. I − h is closed at 0. Remark 3.6.1. Observe that Conditions 3.6.2 and 3.6.3 guarantee the existence and uniqueness of a solution of VIP(f, Fix(h)). In addition, conditions of the type of Condition 3.6.3 are commonly used in Optimization Theory (see, e.g., [78]). A sufficient condition for Condition 3.6.3 to hold is that the vectors f (x) and x make an acute angle, which is uniformly bounded away from π/2, as kxk → ∞ (see [80]). While Conditions 3.6.1 and 3.6.2 concern the behavior of f on (Fix(h))ε , Condition 3.6.3 deals with a rather global behavior. Let {ρk }∞ k=0 be a sequence of positive numbers satisfying lim ρk = 0 and

k→∞

∞ X

ρk = +∞.

(3.6.4)

k=1

Algorithm 3.6.1. Step 0: Choose an arbitrary starting point x0 ∈ Rn and set k = 0. Step 1: Given the current iterate xk , (1) build the half-space (xk 6= h(xk )) Hk := H(xk , h(xk )) and calculate the “shifted point”  k x − ρk f (xk )/kf (xk )k, if f (xk ) 6= 0, k z := (3.6.5) xk , if f (xk ) = 0. (2) Choose αk ∈ [µ, 2 − µ] for some µ ∈ (0, 1) and calculate the next iterate as xk+1 = Pαk (z k ),

(3.6.6)

where Pαk = (1 − αk )I + αk PHk , and PHk is the projection operator onto Hk . (3) Set k ← (k + 1) and return to (1). Remark 3.6.2. Observe that (3.6.6) has an explicit form since it is a relaxation of the projection onto a half-space, therefore (  hzk −h(xk ),xk −h(xk )i k k x − h x , if z k ∈ / Hk , z k − αk k+1 k k k 2 kx −h(x )k x = Pαk (z ) = (3.6.7) zk , if z k ∈ Hk . Illustration of the iterative step of Algorithm 3.6.1 is given in Figure 3.13. Next we will prove the convergence of Algorithm 3.6.1.

3.6.3

Convergence

The following Lemma is a consequence of Theorem 2.0.29, where Pαk is a relaxation of PHk , both defined in Algorithm 3.6.1. Lemma 3.6.3. Let y ∈ Rn be arbitrary and αk ∈ (0, 2). Then in Algorithm 3.6.1 we have kPαk (y) − wk2 ≤ ky − wk2 −

2 − αk kPαk (y) − yk2 for all w ∈ Fix(h). αk

(3.6.8)

2 − αk kPαk (y) − yk2 . αk

(3.6.9)

Consequently, dist(Pαk (y), Fix(h))2 ≤ dist(y, Fix(h))2 −

Algorithms for Solving Monotone Variational Inequalities and Applications

71

Figure 3.13: Illustration of the iterative step of Algorithm 3.6.1

Proof. Let w ∈ Fix(h). Because Fix(h) ⊆ Hk , the characterization of the metric projection yields hy − PHk (y), y − wi ≥ kPHk (y) − yk2 =

1 kPαk (y) − yk2 2 αk

(3.6.10)

and we have kPαk (y) − wk2 = ky + αk (PHk (y) − y) − wk2 = ky − wk2 + αk2 kPHk (y) − yk2 − 2αk hy − PHk (y), y − wi 2 − αk ≤ ky − wk2 − kPαk (y) − yk2 . (3.6.11) αk If we set w = PFix(T ) (y) in (3.6.8), we obtain dist(Pαk (y), Fix(h))2 ≤ kPαk (y) − wk ≤ dist(y, Fix(h))2 −

2 − αk kPαk (y) − yk2 (3.6.12) αk

and the proof is completed. Lemma 3.6.4. Assume that Conditions 3.6.1–3.6.4 hold. Then any sequence {xk }∞ k=0 , generated by Algorithm 3.6.1, is bounded. Proof. The proof is structured along the lines of [80, Lemma 3]. First assume that f (xk ) 6= 0. Let y ∈ Fix(h) be a point for which Condition 3.6.3 holds and let K > 0 be such that kx − yk < K, for all x ∈ D, where D is a bounded set given in Condition 3.6.3. Lemma 3.6.3 implies that, for each z k ∈ Rn , kPαk (z k ) − yk2 ≤ kz k − yk2 .

(3.6.13)

72

Variational Inequalities

Therefore, kx

k+1

2



2

k f (xk )

− y = xk − y − yk ≤ x − ρk

k kf (x )k ρk ρ2k k k −2 hf (x ), x − yi + kf (xk )k2 . kf (xk )k kf (xk )k2 2

(3.6.14)

Thus, if kxk − yk ≥ K, then we have, by (3.6.14) and Condition 3.6.3, kxk+1 − yk2 ≤ kxk − yk2 − 2ρk β + ρ2k = kxk − yk2 − ρk (2β − ρk ).

(3.6.15)

Since limk→∞ ρk = 0, the last inequality implies kxk+1 − yk < kxk − yk,

(3.6.16)

provided that k is sufficiently large. On the other hand, by (3.6.13), the definition of z k ((3.6.5)) and the triangle inequality, we obtain

k

k

k+1

k f (x )

≤ x − y + ρk .

x (3.6.17) − y ≤ (x − y) − ρ k

kf (xk )k So, for all sufficiently large k we have

k+1



x − y ≤ xk − y + δ,

(3.6.18)

where δ > 0 is a small constant. Inequalities (3.6.18) and (3.6.16) imply that {xk }∞ k=0 is k bounded. If f (x ) = 0 we have by (3.6.5) and (3.6.13) kxk+1 − yk2 ≤ kxk − yk2 ,

(3.6.19)

which implies (3.6.16) and the rest follows. Lemma 3.6.5. Assume that Conditions 3.6.1–3.6.4 hold. Then any sequences {xk }∞ k=0 and {z k }∞ , generated by Algorithm 3.6.1 satisfies k=0

lim z k − xk = 0. (3.6.20) k→∞

Proof. If f (xk ) = 0 then by (3.6.5) xk = z k . On the other hand, if f (xk ) 6= 0,

k

k

k f (x ) k k

z − x = x − ρk − x

kf (xk )k

f (xk )

= ρk . = −ρ k

kf (xk )k Taking the limit with k → ∞, and using (3.6.4), we obtain the desire result.

(3.6.21)

Algorithms for Solving Monotone Variational Inequalities and Applications

73

Lemma 3.6.6. Assume that Conditions 3.6.1–3.6.4 hold. Then any sequence {xk }∞ k=0 , generated by Algorithm 3.6.1 satisfies lim dist(xk , Fix(h)) = 0.

k→∞

(3.6.22)

Proof. Denote by R := PFix(h) . First assume that f (xk ) 6= 0, then by (3.6.9) dist(xk+1 , Fix(h)) ≤ dist(z k , Fix(h)).

(3.6.23)

Following the triangle inequality



k f (xk ) k

− R(x ) dist(z , Fix(h)) ≤ kz − R(x )k = x − ρk

kf (xk )k

k

ρk f (xk ) k

≤ x − R(x ) + kf (xk )k = dist(xk , Fix(h)) + ρk .

(3.6.24)

dist(xk+1 , Fix(h)) ≤ dist(xk , Fix(h)) + ρk .

(3.6.25)

k

k

k

Therefore, k

Now define the sequence ak := dist(x , Fix(h)), then by Proposition 2.0.37 and Lemma 3.6.3 1 k kx − Pαk (xk )k2 αk2  2  2  k k dist(x , Fix(h)) − dist(Pαk x , Fix(h))

g 2 (ak ) = g 2 (dist(xk , Fix(h))) ≤ kxk − h(xk )k2 = ≤

1 αk (2 − αk )

(3.6.26) (3.6.27)

On the other hand, by the nonexpansivity of the operator Pαk we get kxk+1 − Pαk (xk )k2 = kPαk (z k ) − Pαk (xk )k2 ≤ kz k − xk k2

2

  k

k f (x ) k

= ρ2k . − x = x − ρ k

k kf (x )k

(3.6.28)

Therefore, Let sk = R Pαk x

 k

kxk+1 − Pαk (xk )k ≤ ρk .

(3.6.29)

kPαk (xk ) − sk k = dist(Pαk (xk ), Fix(h)),

(3.6.30)

, namely,

then, by the triangle inequality, we get kxk+1 − sk k = kxk+1 − Pαk (xk ) + Pαk (xk ) − sk k ≤ kxk+1 − Pαk (xk )k + kPαk (xk ) − sk k.

(3.6.31)

On the other hand, since sk ∈ Fix(h), we have dist(xk+1 , Fix(h)) ≤ kxk+1 − sk k.

(3.6.32)

74

Variational Inequalities

From the last three inequalities we get dist(xk+1 , Fix(h)) ≤ kxk+1 − Pαk (xk )k + dist(Pαk (xk ), Fix(h)) ≤ ρk + dist(Pαk (xk ), Fix(h))

(3.6.33)

or, equivalently, 2 2 dist(xk+1 , Fix(h)) ≤ ρk + dist(Pαk (xk ), Fix(h)) 2 = ρ2k + 2ρk dist(Pαk (xk ), Fix(h)) + dist(Pαk (xk ), Fix(h)) . (3.6.34) Therefore 2 2 − dist(Pαk (xk ), Fix(h)) ≤ − dist(xk+1 , Fix(h)) + ρ2k + 2ρk dist(Pαk (xk ), Fix(h)). (3.6.35) Using the above inequality for (3.6.27), we get for all k ≥ 0   2 1 a2k − dist(xk+1 , Fix(h)) + ρ2k + 2ρk dist(Pαk (xk ), Fix(h)) g 2 (ak ) ≤ αk (2 − αk )  1 = a2k − a2k+1 + ρ2k + 2ρk dist(Pαk (xk ), Fix(h)) . (3.6.36) αk (2 − αk ) ∞ Now, by Lemma 3.6.4 the sequence {xk }∞ k=0 is bounded and therefore so is {ak }k=0 . Since

2 2 − αk kPαk (xk ) − xk k2 dist(Pαk (xk ), Fix(h)) ≤ kPαk (xk ) − zk2 ≤ kxk − zk2 − αk ≤ kxk − zk2 , (3.6.37)  for all z ∈ Fix(h).Take z = PFix(h) xk in the above inequalities and we obtain dist(Pαk (xk ), Fix(h)) ≤ ak = dist(xk , Fix(h)).

(3.6.38)

Therefore, the sequence {dist(Pαk (xk ), Fix(h))}∞ k=0 is also bounded. Since αk ∈ [µ, 2 − µ] for some µ ∈ (0, 1), then we have 1/ (αk (2 − αk )) ≤ 1/µ2 . Denote by bk := ρ2k + 2ρk dist(Pαk (xk ), Fix(h)), and using (3.6.36) we get  1 2 2 a − a + b k k+1 µ2 k 1 1 = 2 (ak + ak+1 ) (ak − ak+1 ) + 2 bk . µ µ

g 2 (ak ) ≤

(3.6.39)

Since {ak }∞ k=0 is bounded , we have ak + ak+1 ≤ K1 for some K1 > 0. By the definition of ρk we have that limk→∞ bk = 0. Hence 1 2 1 g (ak ) ≤ ak − ak+1 + 2 bk K1 µ K1

(3.6.40)

and now we can apply Lemma 2.0.41 to deduce that limk→∞ dist(xk , Fix(h)) = 0. ∞ If f (xk ) = 0, we again can apply Lemma 2.0.41 with {bk+1 }∞ k=0 = {0}k=0 , and then we obtain the result.

Algorithms for Solving Monotone Variational Inequalities and Applications

75

Lemma 3.6.7. Assume that Conditions 3.6.1–3.6.4 hold. Then any sequence {xk }∞ k=0 , generated by Algorithm 3.6.1 satisfies lim kxk+1 − xk k = 0.

k→∞

(3.6.41)

Proof. If f (xk ) 6= 0, by the triangle inequality and the nonexpansivness of the operator Pαk , we obtain for all k ≥ 0 kxk+1 − xk k = kxk+1 − Pαk (xk ) + Pαk (xk ) − xk k ≤ kxk+1 − Pαk (xk )k + kPαk (xk ) − xk k = kPαk (z k ) − Pαk (xk )k + kPαk (xk ) − xk k ≤ kz k − xk k + kPαk (xk ) − xk k ≤ ρk + αk dist(xk , Hk ),

(3.6.42)

where the last inequality follows from (3.6.21) and the equality αk dist(xk , Hk ) = kPαk (xk ) − xk k.

(3.6.43)

Since for all k ≥ 0, Fix(h) ⊂ Hk , we have dist(xk , Hk ) ≤ dist(xk , Fix(h)),

(3.6.44)

kxk+1 − xk k ≤ ρk + αk dist(xk , Fix(h)).

(3.6.45)

thus, By Lemma 3.6.6 and (3.6.4), we obtain the required result. In the case f (xk ) = 0 we have kxk+1 − xk k = αk dist(xk , Hk ) ≤ αk dist(xk , Fix(h)),

(3.6.46)

and again by taking the limit with k → ∞ and using Lemma 3.6.6 the proof is complete. Theorem 3.6.8. Assume that Conditions 3.6.1–3.6.4 hold, then any sequence {xk }∞ k=0 generated by Algorithm 3.6.1 converges to the unique solution x∗ of problem (3.6.1). Proof. Let x∗ be the unique solution of problem (3.6.1). By Lemma 3.6.6, xk ∈ (Fix(h))ε , for all sufficiently large k, where (Fix(h))ε is the set given in Conditions 3.6.1 and 3.6.2 (without loss of generality the value of ε is common in both conditions). From Condition 3.6.2 we have hf (xk ) − f (x∗ ), xk − x∗ i ≥ αkxk − x∗ k2 , (3.6.47) and hf (xk ) − f (x∗ ), xk − x∗ i = hf (xk ) − f (x∗ ), xk − xk+1 + xk+1 − x∗ i = hf (xk ), xk − xk+1 i + hf (xk ), xk+1 − x∗ i − hf (x∗ ), xk − x∗ i.

(3.6.48)

76

Variational Inequalities

Combining the last two inequalities yields hf (xk ), xk+1 − x∗ i ≥ αkxk − x∗ k2 + hf (x∗ ), xk − x∗ i + hf (xk ), xk+1 − xk i.

(3.6.49)

Let λ be an arbitrary positive number. Since x∗ satisfies (3.6.1), it follows from Lemmas 3.6.4 and 3.6.6 that the following inequalities hold, for all sufficiently large k, hf (x∗ ), xk − x∗ i ≥ −λ.

(3.6.50)

Using the Cauchy-Schwarz inequality hf (xk ), xk+1 − xk i ≥ −kf (xk )kkxk+1 − xk k.

(3.6.51)

From the boundedness of the sequence {xk }∞ k=0 (Lemma 3.6.4), it follows due to the continuity of f (Condition 3.6.1), that the sequence {f (xk )}∞ k=0 is also bounded. Lemma 3.6.7 guarantees hf (xk ), xk+1 − xk i ≥ −λ,

(3.6.52)

for all sufficiently large k. Applying (3.6.50) and (3.6.52) to (3.6.49), we obtain hf (xk ), xk+1 − x∗ i ≥ αkxk − x∗ k2 − 2λ

(3.6.53)

for all sufficiently large k. Let us divide the indices of {xk }∞ k=0 as follows ¯ := {k ≥ 0 | f (xk ) 6= 0}. Γ := {k ≥ 0 | f (xk ) = 0} and Γ

(3.6.54)

Equation (3.6.53) implies, due to the arbitrariness of λ, that for k ∈ Γ lim xk = x∗ .

k→∞

(3.6.55)

∗ We will show that sequence {xk }∞ k=0 contains a subsequence which converges to x . To do ¯ and suppose that there exists a ζ > 0 and an integer this let us consider the indices in Γ k0 such that

¯ k ≥ k0 . kxk − x∗ k ≥ ζ for all k ∈ Γ,

(3.6.56)

Algorithms for Solving Monotone Variational Inequalities and Applications

77

By Lemma 3.5.5, 2 − αk kPαk (z k ) − z k k2 kxk+1 − x∗ k2 = kPαk (z k ) − x∗ k2 ≤ kz k − x∗ k2 − αk



2  k

k

f (x ) ∗

=

x − ρk kf (xk )k − x

 2  k

2 − αk f (x ) k+1 k

x

− − x − ρk

k αk kf (x )k 2 − αk k+1 ρk kx − x k k2 − 2 hf (xk ), xk − x∗ i αk kf (xk )k   (2 − αk ) ρk 2 − αk k k+1 k 2 −2 hf (x ), x − x i + ρk 1 − αk kf (xk )k αk ρ k ≤ kxk − x∗ k2 − 2 hf (xk ), xk − x∗ i kf (xk )k   (2 − αk ) ρk 2 − αk k k+1 k 2 hf (x ), x − x i + ρk 1 − −2 αk kf (xk )k αk ρk hf (xk ), xk+1 − x∗ i = kxk − x∗ k2 − 2 k kf (x )k   2 − αk ρk −2 −1 hf (xk ), xk+1 − xk i k α kf (x )k  k  2 − αk + ρ2k 1 − . (3.6.57) αk = kxk − x∗ k2 −

k Since {f (xk )}∞ k=0 is bounded, there exists a τ > 0 such that kf (x )k ≤ τ , therefore,



1 1 ≤ − , for all k ∈ Γ. kf (xk )k τ

(3.6.58)

By the arbitrariness of λ, we can assume that 1 2λ ≤ αζ 2 . 4

(3.6.59)

By similar arguments as in derivation of inequality (3.6.52) and, by boundedness of sequence {(2 − αk ) /αk }∞ k=0 , we can assume that, for all sufficiently large k,     1 2 2 − αk − αζ − 2λ ≤ − 1 hf (xk ), xk+1 − xk i. (3.6.60) 4 αk By ρk → 0 and again by the boundedness of (2 − αk ) /αk we can also assume that, for all sufficiently large k,   2 − αk αζ 2 ρk 1 − ≤ . (3.6.61) αk 2τ Applying (3.6.53), (3.6.60) and (3.6.61) to (3.6.57) we get

78

Variational Inequalities

ρk kxk+1 − x∗ k2 ≤ kxk − x∗ k2 − 2 (αζ 2 − 2λ) kf (xk )k   ρk 1 2 αζ 2 +2 αζ − 2λ + ρ k kf (xk )k 4 2τ 3 ρk αζ 2 k ∗ 2 2 = kx − x k − αζ + ρk . 2 kf (xk )k 2τ

(3.6.62)

Using (3.6.58) to (3.6.62) we get kxk+1 − x∗ k2 ≤ kxk − x∗ k2 − ρk

αζ 2 , τ

¯ such that for all sufficiently large k. Then there exists an integer k ∈ Γ kx

k+1

αζ 2 ¯ such that k ≥ k. − x k ≤ kx − x k − ρk for all k ∈ Γ τ ∗ 2

k

∗ 2

(3.6.63)

¯ we have By adding the inequalities (3.6.63) from k = k to k + `, over indices k ∈ Γ, kxk+`+1 − x∗ k2 ≤ kxk − x∗ k2 −

αζ 2 τ

k+` X

ρk ,

(3.6.64)

¯ k=k k∈Γ,

for any ` > 0. However, this is impossible due to (3.6.4), so there exists no ζ > 0 such ˆ⊂Γ ¯ converging that (3.6.56) is satisfied. Thus, {xk }k∈Γ¯ contains a subsequence {xk }k∈Γˆ , Γ ∗ k k ∞ to x , and so there is a subsequence {x }k∈Γ∪Γˆ of the whole sequence {x }k=0 converging ∗ to x∗ . In order to prove that the entire sequence {xk }∞ k=0 is convergent to x , suppose to k ∞ the contrary that there exists a subsequence of {x }k=0 converging to xˆ and xˆ 6= x∗ . By Lemma 3.6.7, limk→∞ kxk+1 − xk k = 0, then there exists ζ > 0 and an arbitrarily large ¯ such that integer j ∈ Γ kxj − x∗ k ≥ ζ and kxj+1 − x∗ k ≥ kxj − x∗ k.

(3.6.65)

However, if j is sufficiently large, we may apply an argument similar to that used to derive (3.6.63), and obtain the inequality kxj+1 − x∗ k < kxj − x∗ k,

(3.6.66)

which contradicts (3.6.65). Therefore, the sequence {xk }∞ k=0 must converge to the solution x∗ . Remark 3.6.9. In Yamada and Ogura [161] the operator f is assumed to be Lipschitz continuous and strongly monotone on the image of h (which is probably Rn ), while here f is assumed to be continuous only on (Fix(h))ε for some ε > 0.

Algorithms for Solving Monotone Variational Inequalities and Applications

3.6.4

79

Application

In this subsection we present two problems that can be transformed into an equivalent VIP over a fixed point set of some operator. The first problem is the Convexly Constrained Generalized Pseudoinverse and the second is the bi-level or hierarchical optimization problem. 1. Hard-Soft Constraints Convex Feasibility Problem Let H be a real Hilbert space. Given m nonempty, closed and convex subsets {Ci }m i=1 ⊆ H. The Convex Feasibility Problem (CFP) is formulate as follows: find a point x∗ such that x∗ ∈ ∩m i=1 Ci 6= ∅.

(3.6.67)

In real-world problems it may turn out that ∩m i=1 Ci = ∅ (inconsistent) for a variety of reasons. For example, in design problems, this situation typically results from the incorporation of specifications that are too demanding and therefore conflicting. Therefore it is natural to split the collection of constraints into hard and soft constraints. Hard constraints may, for instance, arise from imperative specifications in design problems, e.g., stability in filter design, or from reliable a priori information in estimation problems, e.g, non-negativity in image restoration. Thus, the problem is formulated as of finding a point, which satisfies the hard constraints and least violates in some suitable sense the soft ones, see [64] for more details. We are now follow Yamada [160, Section 4] and present several definitions and results which will be useful in the sequel. Given m + 1 nonempty, closed and convex subsets Ω, {Ci }m i=1 ⊆ H. We recall the proximity function Φ : H → R, which is defined as m

1X Φ(x) := wi dist(x, Ci )2 , 2 i=1

(3.6.68)

Pm with {wi }m i=1 wi = 1. So, the generalized convex feasible set, which was i=1 ⊂ (0, 1] and first introduced by [164, 165], is a subset ΩΦ ⊆ Ω such that: ΩΦ := {u ∈ Ω | Φ(u) = inf Φ(Ω)}.

(3.6.69)

m m It is easy to see that if Ω∩(∩m i=1 Ci ) 6= ∅, then ΩΦ = Ω∩(∩i=1 Ci ), but even if Ω∩(∩i=1 Ci ) = ∅, the subset ΩΦ is the set of all minimizers of Φ over a closed and convex set Ω. Illustration of the generalized convex feasible set is given in Figure 3.14. Following [164] (see also [160, Proposition 4.2]), we give two fixed point characterization of the subset ΩΦ , that is, a representation as a fixed point set of some nonexpansive operator h.

Proposition 3.6.10. Fixed point characterization of KΦ . 1. For any α 6= 0, it follows that ΩΦ = Fix (1 − α)I + PΩ

m X

! w i PC i

.

(3.6.70)

i=1

If α ∈ (0, 3/2], then the operator h := (1 − α)I + PΩ

Pm

i=1

wi PCi is nonexpansive.

80

Variational Inequalities 2. For any β > 0, it follows ΩΦ = Fix PΩ (1 − β)I + β

m X

!! wi PCi

.

(3.6.71)

i=1

If β ∈ (0, 2] then the operator h := (PΩ ((1 − β)I + β ([64]).

Pm

i=1

wi PCi ) is nonexpansive

In the proof of [160, Proposition 4.2], it was shown that the operator 0

Φ =

m X

wi (I − PCi ) = I −

i=1

m X

wi PCi

(3.6.72)

i=1

is firmly nonexpansive and monotone on H, and then it follows that 0

ΩΦ := Fix(PΩ (I − µΦ )) for all µ > 0.

(3.6.73)

0

Due to (2.1.4) this can be treated as the VIP(Φ , Ω).

Figure 3.14: The generalized convex feasible set

Now we are concentrate on the Convexly Constrained Generalized Pseudoinverse Problem (see e.g., [160, Problem 4.4]. It appears that this problem has many applications in different fields, such as numerical linear algebra, approximation theory and signal processing. Let Ω ⊂ H be a nonempty, closed and convex subset, a bounded linear operator A : H → Rm and a vector b ∈ Rm . Suppose that S := arginf k Ax − b k6= ∅, x∈Ω

(3.6.74)

Algorithms for Solving Monotone Variational Inequalities and Applications

81

and let Θ : H → R be some convex function. The Convexly Constrained Generalized Pseudoinverse Problem is formulated as follows. find a point x∗ ∈ arginf Θ(x).

(3.6.75)

x∈S

In [160, Theorem 4.7] the following function was defined m

1X 1 Φ(x) := k A(x) − b k2 = (< ai , x > −bi )2 2 2 i=1

(3.6.76)

0

and then it follows that S = ΩΦ , thus we obtain the VIP(Θ , ΩΦ ). 2. bi-level optimization problem Given the operator f : Rn → Rm , we would like to find its ”minimizers”. So we could not talk on an optimal solution as defined for a scalar optimization problem (m = 1). Therefore we need to define a priory, on which solution concept is chosen. One might consider the lexicographic order, denoted by L . This partial order is defined for x, y ∈ Rm as follows: x L y ⇔ x = y or xk < yk where k := min{i = 1, . . . , m | xi 6= yi }.

(3.6.77)

Now, we consider the case where m = 2, i.e., f : Rn → R2 , and denote by fi : Rn → R the i-th coordinate (i = 1, 2) function of f . Then, our goal is to minimize f with respect to L . This problem is referred also as two-stage, bi-level and hierarchical optimization problem. Remark 3.6.11. Let C be a closed and convex set. From optimality conditions for convex minimization (see, e.g., Bertsekas and Tsitsiklis [20, Proposition 3.1, page 210] and [86, Chapter 4, Subsection 3.5]), it is well known that, if a function g : H → R∪{+∞} is proper, lower semicontinuous and convex on a nonempty, closed and convex subset C ⊂ Rn , then x∗ minimizes g over C if and only if 0 ∈ ∂ (g + IC ) (x∗ ),

(3.6.78)

x∗ = argmin{g(x) | for all x ∈ C} ⇐⇒ 0 ∈ ∂g(x∗ ).

(3.6.79)

or equivalently As we saw in Example 1.0.1, this problem can be casted as the following VIP, find a point x∗ ∈ C such that u∗ ∈ ∂g(x∗ ) and hu∗ , x − x∗ i ≥ 0 for all x ∈ C.

(3.6.80)

if g is also continuously differentiable, i.e., ∂g(x) = {∇g} we obtain x∗ = argmin{g(x) | for all x ∈ C} ⇐⇒ x∗ solves VIP(∇g, C).

(3.6.81)

Remark 3.6.12. Following Remarks 2.0.47 and 2.0.48(3) 0 ∈ ∂g(x∗ ) ⇔ x∗ ∈ Fix(Jλ∂g ).

(3.6.82)

82

Variational Inequalities Now, back to the lexicographic optimization problem: min{f2 | argmin{f1 }}.

(3.6.83)

Under the assumptions of convexity and lower semicontinuity of fi : Rn → R for i = 1, 2, (3.6.83) can be represented as VIP(∇f2 , Fix(Jλ∂f1 )), that is, find a point x∗ ∈ Fix(Jλ∂f1 ) such that (3.6.84) h∇f2 (x∗ ), x − x∗ i ≥ 0 for all x ∈ Fix(Jλ∂f1 ). So, in case that the function f : Rn → R2 satisfies Conditions 3.6.1–3.6.4, we could apply Algorithm 3.6.1 for solving it. Next we present an example that can be translated into an appropriate VIP over the fixed point set of a cutter operator. Example 3.6.13. Let C ⊆ Rn be closed and convex set. Given a lower semicontinuous and convex function g : Rn → R on C, we are interested in minimizing g over C so that the solution has minimal p-th norm. This solution is called a p-minimal-norm solution. Define the operator f = (f1 , f2 ) : Rn → R2 by   g f= , (3.6.85) 1 k·kpp p P 1/p where k·kp denotes the p-th norm, i.e., kxkp := ( ni=1 |xi |p ) . Let p = 2. Then we obtain ∇( 21 kxk22 ) = x, i.e., ∇( 12 kxk22 ) = I which is Lipschitz continuous and strongly monotone on Rn . Therefore we can use Yamada and Ogura’s hybrid steepest descent algorithm (see [163, Section 4]) for solving VIP(I, Fix(Jλ∂g )) to obtain a 2-minimal-norm solution of g. Let now p > 2. In this case ∇( p1 kxkpp ) = (x1 |x1 |p−2 , x2 |x2 |p−2 , ..., xn |xn |p−2 ). One can easily check that ∇ p1 k·kpp is not Lipschitz continuous. Therefore, we cannot use Yamada and Ogura’s algorithm but we can use Algorithm 3.6.1 for solving VIP(∇( p1 k·kpp ), Fix(Jλ∂g )). Observe that the above argument will not work for f : Rn → Rm with m > 2; but we plan to study the general case.

Chapter 4 Split Inverse Problems 4.1

The split common null point problem

In this chapter we study the prototypical Split Inverse Problem (SIP) formulated in [51, Section 2]. It concerns a model in which there are given two vector spaces X and Y and a linear operator A : X → Y. In addition, two inverse problems are involved. The first one, denoted by IP1 , is formulated in the space X and the second one, denoted by IP2 , is formulated in the space Y. Given these data, the Split Inverse Problem (SIP) is formulated as follows: find a point x∗ ∈ X that solves IP1 and such that ∗ the point y = A(x∗ ) ∈ Y solves IP2 .

(4.1.1) (4.1.2)

Real-world inverse problems can be cast into this framework by making different choices of the spaces X and Y (including the case X = Y ), and by choosing appropriate inverse problems for IP1 and IP2 . The Split Convex Feasibility Problem (SCFP) [171, 45] is the first instance of an SIP. The two problems IP1 and IP2 there are of the Convex Feasibility Problem (CFP) type. This formulation was used for solving an inverse problem in radiation therapy treatment planning [46, 42]. The SCFP has been well studied for the last two decades both theoretically and practically; see, [33, 46, 67, 71, 119, 132, 141, 157, 158, 166, 173, 175] and the references therein. Two leading candidates for IP1 and IP2 are the mathematical models of the CFP and problems of constrained optimization. In particular, the CFP formalism is in itself at the core of the modeling of many inverse problems in various areas of mathematics and the physical sciences; see, e.g., [41] and references therein for an early example. Over the past four decades, the CFP has been used to model significant real-world inverse problems in sensor networks, radiation therapy treatment planning, resolution enhancement and in many others; see [43] for exact references to all of the above. More work on the CFP can be found in [32, 34, 44]. It is therefore natural to ask whether other inverse problems can be used for IP1 and IP2 , besides the CFP, and be embedded in the SIP methodology. For example, can IP1 = CFP in the space X and can a constrained optimization problem be IP2 in the space Y ? In our recent paper [35] we have made a step in this direction by formulating an SIP with a Split Common Null Point Problem (SCNPP) for maximal monotone mappings in 83

84

The split variational inequality problem and related problems

Hilbert spaces. As we explain below, this formulation includes the earlier formulation with VIPs and all its special cases such as the CFP and constrained optimization problems. Let H1 and H2 be two real Hilbert spaces. Given mappings Bi : H1 → 2H1 , 1 ≤ i ≤ p, and Fj : H2 → 2H2 , 1 ≤ j ≤ r, respectively, and bounded linear operators Aj : H1 → H2 , 1 ≤ j ≤ r, the SCNPP is formulated as follows: find a point x∗ ∈ H1 such that 0 ∈ ∩pi=1 Bi (x∗ ) and such that the points ∗ yj = Aj (x∗ ) ∈ H2 solve 0 ∈ ∩rj=1 Fj (yj∗ ).

(4.1.3) (4.1.4)

To further motivate our study, we now put our SCNPP in the context of other SIPs and related works. We first recall our Split Variational Inequality Problem (SVIP), which is an SIP with a VIP in each one of the two spaces [51]. Let H1 and H2 be two real Hilbert spaces, and assume that there are given two operators f : H1 → H1 and g : H2 → H2 , a bounded linear operator A : H1 → H2 , and nonempty, closed and convex subsets C ⊂ H1 and Q ⊂ H2 . The SVIP is then formulated as follows: find a point x∗ ∈ C such that hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C and such that ∗ ∗ the point y = A(x ) ∈ Q and solves hg(y ∗ ), y − y ∗ i ≥ 0 for all y ∈ Q.

(4.1.5) (4.1.6)

Denoting by Sol(f, C) and Sol(g, Q) the solution sets of the VIPs in (4.1.5) and (4.1.6), respectively, we can also write the SVIP in the following way: find a point x∗ ∈ Sol (f, C) such that A(x∗ ) ∈ Sol (g, Q) .

(4.1.7)

Taking in (4.1.5)–(4.1.6) C = H1 , Q = H2 , and choosing x := x∗ − f (x∗ ) ∈ H1 in (4.1.5) and y = A(x∗ ) − g(A(x∗ )) ∈ H2 in (4.1.6), we obtain the Split Zeros Problem (SZP) for two operators f : H1 → H1 and g : H2 → H2 , which we introduced in [51, Subsection 7.3]. It is formulated as follows: find a point x∗ ∈ H1 such that f (x∗ ) = 0 and g(A(x∗ )) = 0.

(4.1.8)

Recall that under a certain continuity assumption on f , Rockafellar in [139, Theorem 3] showed that finding zero of the maximal monotone extension M of f is a solution to the VIP, that is, M −1 (0) = Sol(f, C). Following this idea, Moudafi [120] introduced the Split Monotone Variational Inclusion (SMVI) which generalized the SVIP. Given two operators f : H1 → H1 and g : H2 → H2 , a bounded linear operator A : H1 → H2 , and two mappings B1 : H1 → 2H1 and F1 : H2 → 2H2 , the SMVI is formulated as follows: find a point x∗ ∈ H1 such that 0 ∈ f (x∗ ) + B1 (x∗ ) and such that the point ∗ ∗ y = A(x ) ∈ H2 solves 0 ∈ g(y ∗ ) + F1 (y ∗ ).

(4.1.9) (4.1.10)

Moudafi presented an algorithm that converged weakly to a solution of the SMVI under certain conditions. Asking if it is possible to obtain strong convergence under reasonable

Algorithms for Solving Monotone Variational Inequalities and Applications

85

assumptions, we show in this chapter that this is indeed the case. We note that our twooperator SZP (4.1.8) is obtained from (4.1.9) and (4.1.10) by letting B1 and F1 be the zero operators. For two mappings our SCNPP is a special case of Moudafi’s SMVI, with f = g = 0. The applications presented in [120] only deal with this situation. Although the SCNPP is a special case of Moudafi’s SMVI, we believe it is of interest to study this important special case. Moreover, we are mainly interested in strong convergence and the strongly convergent algorithms presented in this paper can be easily adapted to the SMVI. Before we introduce the case where more than two mappings and more than one bounded linear operator A are involved, we recall the problem Masad and Reich [118] called the Constrained Multiple-Set Split Convex Feasibility Problem (CMSSCFP). Let r and p be two natural numbers. Let Ci , 1 ≤ i ≤ p, and Qj , 1 ≤ j ≤ r, be closed and convex subsets of H1 and H2 , respectively; further, for each 1 ≤ j ≤ r, let Aj : H1 → H2 be a bounded linear operator. Finally, let Ω be another closed and convex subset of H1 . The CMSSCFP is formulated as follows: find a point x∗ ∈ Ω such that p ∗ x ∈ ∩i=1 Ci and Aj (x∗ ) ∈ Qj for each j = 1, 2, . . . , r.

(4.1.11) (4.1.12)

Motivated by this CMSSCFP, we suggest here our Split Common Null Point Problem (SCNPP) (see (4.1.3)–(4.1.4)), which is a generalization of the SZP. Another related split problem is the Split Common Fixed Point Problem (SCFPP), first introduced in Euclidean spaces in [56] and later extended by Moudafi [119] to Hilbert spaces. Given operators Ui : H1 → H1 , i = 1, 2, . . . , p, and Tj : H2 → H2 , j = 1, 2, . . . , r, with nonempty fixed points sets Ci := Fix(Ui ), i = 1, 2, . . . , p, and Qj = Fix(Tj ), j = 1, 2, . . . , r, respectively, and a bounded linear operator A : H1 → H2 , the SCFPP is formulated as follows: find a point x∗ ∈ C := ∩pi=1 Ci such that A(x∗ ) ∈ Q := ∩rj=1 Qj .

(4.1.13)

The purpose of this chapter is to introduce the SCNPP and present several algorithms for solving it. Following [118], [87] and [89], we are able to establish strong convergence of three of the algorithms that we propose. The rest of this chapter is organized as follows. In Subsection 4.1.1 we present an algorithm for solving the SCNPP and show its weak convergence. In Subsection 4.1.4 we present three additional algorithms for solving the SCNPP and present strong convergence theorems for them.

4.1.1

Weak convergence

In this subsection we first present an algorithm for solving the SCNPP for two maximal monotone mappings. Then, for the general case of more than two such mappings, we employ a product space formulation in order to transform it into an SCNPP for two maximal monotone mappings, in a similar fashion to what has been done in [56, Section 4] and [51, Subsection 6.1].

86

The split variational inequality problem and related problems

4.1.2

The SCNPP for two maximal monotone mappings

Consider the SCNPP (4.1.3)–(4.1.4) with p = r = 1. That is, given two mappings B1 : H1 → 2H1 and F1 : H2 → 2H2 , and a bounded linear operator A : H1 → H2 , we consider the following two-mapping SCNPP: find a point x∗ ∈ H1 such that 0 ∈ B1 (x∗ ) and 0 ∈ F1 (A(x∗ )).

(4.1.14)

Here is our algorithm for solving (4.1.14). Algorithm 4.1.1. Step 0: Let λ > 0 and select an arbitrary starting point x0 ∈ H1 . Step 1: Given the current iterate xk , compute  xk+1 = JλB1 xk − γA∗ (I − JλF1 )A(xk ) ,

(4.1.15)

where A∗ is the adjoint of A, L = kA∗ Ak and γ ∈ (0, 2/L). Our convergence theorem for this algorithm is presented next. In view of the connection between our SCNPP and Moudafi’s SMVI, this theorem can be considered a corollary of [120, Theorem 3.1]. Its proof is based on the Krasnosel’ski˘ı-Mann-Opial theorem [109, 113, 128] and is given here for the convenience of the readers. We denote by Γ the solution set of (4.1.14). Theorem 4.1.1. Let H1 and H2 be two real Hilbert spaces. Given two maximal monotone mappings B1 :H1 → 2H1 and F1 : H2 → 2H2 , and a bounded linear operator A : H1 → H2 , ∞ any sequence xk k=0 , generated by Algorithm 4.1.1, converges weakly to a point x∗ ∈ Γ, provided that Γ 6= ∅ and γ ∈ (0, 2/L), where L = kA∗ Ak. Proof. First we prove that the operator γA∗ (I − JλF1 )A is ν-ISM for some ν > 1/2 and therefore its complement I − γA∗ (I − JλF1 )A is averaged. By Remark 2.0.48(1), JλF1 is firmly nonexpansive and therefore (1/2)-av (Remark 2.0.14(3)). So, JλF1 =

(I + N ) 2

(4.1.16)

for some nonexpansive operator N : H2 → H2 . Since I − JλF1 = (I − N )/2, it follows that I − JλF1 is 1-ISM. Hence

(I − JλF1 )A(x) − (I − JλF1 )A(y), A(x) − A(y) ≥ k(I − JλF1 )A(x) − (I − JλF1 )A(y)k2 .

(4.1.17)

Now kA∗ (I − JλF1 )A(x) − A∗ (I − JλF1 )A(y)k2 = hA∗ (I − JλF1 )A(x) − A∗ (I − JλF1 )A(y), A∗ (I − JλF1 )A(x) − A∗ (I − JλF1 )A(y)i = hA∗ ((I − JλF1 )A(x) − (I − JλF1 )A(y)), A∗ ((I − JλF1 )A(x) − (I − JλF1 )A(y))i = h(I − JλF1 )A(x) − (I − JλF1 )A(y), AA∗ ((I − JλF1 )A(x) − (I − JλF1 )A(y))i ≤ Lk(I − JλF1 )A(x) − (I − JλF1 )A(y)k2 .

(4.1.18)

Algorithms for Solving Monotone Variational Inequalities and Applications

87

Combining the above inequalities, we obtain

∗ A (I − JλF1 )A(x) − A∗ (I − JλF1 )A(y), x − y

= (I − JλF1 )A(x) − (I − JλF1 )A(y), A(x) − A(y) ≥ k(I − JλF1 )A(x) − (I − JλF1 )A(y)k2

1 ≥ A∗ (I − JλF1 )A(x) − A∗ (I − JλF1 )A(y) . L

(4.1.19)

It follows that the operator A∗ (I − JλF1 )A is (1/L)-ISM and therefore the operator γA∗ (I − JλF1 )A is (1/(γL))-ISM. Now, γ ∈ (0, 2/L) implies that 1/(γL) > 1/2. Thus the operator I − γA∗ (I − JλF1 )A is averaged. Since both JλB1 and I − γA∗ (I − JλF1 )A are averaged operators, so is their composition F1 B1 ∗ Jλ I − γA ∞− Jλ )A (see Remark 2.0.14(4)). Therefore, by the Theorem 2.0.42, ∗the  k(I sequence x k=0 , generated by Algorithm 4.1.1, converges weakly to a fixed point x of  JλB1 I − γA∗ (I − JλF1 )A . It remains to show that x∗ ∈ Γ. Let z ∈ Γ, i.e., 0 ∈ B1 (z) and 0 ∈ F1 (A(z)). So, by (2.0.89), z ∈ Fix(JλB1 ) and A(z) ∈ Fix(JλF1 ). In addition, since (I − γA∗ (I − JλF1 )A)(z) = z − γA∗ (I − JλF1 )A(z) = z − γA∗ A(z) + γA∗ JλF1 (A(z)) = z − γA∗ A(z) + γA∗ A(z) = z,

(4.1.20)

we get z ∈ Fix(I − γA∗ (I − JλF1 )A).  Observe that any z ∈ Γ is a fixed point of the averaged B1 F1 ∗ operator Jλ I − γA (I − Jλ )A . Indeed, by the above equalities we get  JλB1 I − γA∗ (I − JλF1 )A (z) = JλB1 (z − γA∗ (I − JλF1 )A(z)) = JλB1 (z) = z.

(4.1.21)

Since Γ 6= ∅, we get from [34, Proposition 2.2] (see also [29, Lemma 2.1]), with the averaged operators I − γA∗ (I − JλF1 )A and JλB1 , that  Fix(JλB1 ) ∩ Fix(I − γA∗ (I − JλF1 )A) = Fix JλB1 I − γA∗ (I − JλF1 )A   = Fix I − γA∗ (I − JλF1 )A JλB1 .

(4.1.22)

 Since x∗ is a fixed point of JλB1 I − γA∗ (I − JλF1 )A , we have x∗ ∈ Fix(JλB1 ) and x∗ ∈ Fix(I − γA∗ (I − JλF1 )A). Now we need to show that A(x∗ ) ∈ Fix(JλF1 ). Indeed, from x∗ ∈ Fix(I − γA∗ (I − JλF1 )A), we get A∗ (I − JλF1 )A(x∗ ) = 0,

(4.1.23)

JλF1 (A(x∗ )) = A(x∗ ) + w,

(4.1.24)

or where A∗ (w) = 0. Since JλF1 (A(z)) = A(z), we get JλF1 (A(x∗ )) − JλF1 (A(z)) = A(x∗ ) + w − A(z).

(4.1.25)

88

The split variational inequality problem and related problems

So, kA(x∗ ) − A(z)k2 ≥ kJλF1 (A(x)∗ ) − JλF1 (A(z))k2 = kA(x∗ ) + w − A(z)k2 = kA(x∗ ) − A(z)k2 + 2 hA(x∗ ) − A(z), wi + kwk2 = kA(x∗ ) − A(z)k2 + 2 hx∗ − z, A∗ (w)i + kwk2 = kA(x∗ ) − A(z)k2 + kwk2 .

(4.1.26)

Hence w = 0, which means that JλF1 (A(x∗ )) = A(x∗ ). This completes the proof of Theorem 4.4.1. Remark 4.1.2. Let H be a real Hilbert space, and let B : H → 2H and F : H → 2H be two maximal monotone mappings. Consider the following problem: find a point x∗ ∈ H such that 0 ∈ B(x∗ ) + F (x∗ ).

(4.1.27)

Many algorithms were developed for solving this problem. An important class of such algorithms is sometimes referred to as splitting methods. References on splitting methods and their applications can be found in Eckstein’s Ph.D. thesis [75], in Tseng’s work [151, 152, 153] and more recently in Combettes [66]. One splitting method of interest is the following forward-backward algorithm: xk+1 = J B (I − f ) (xk ),

(4.1.28)

where f = F is single-valued. Combettes [63, Section 6] was interested in (4.1.28) under the assumption that B : H → 2H and f : H → H are maximal monotone, and βf is firmly nonexpansive (i.e., 1/2-av) for some β ∈ (0, ∞). He proposed the following algorithm:   xk+1 = xk + λk JγBk xk − γk (f (xk ) + bk ) + ak − xk , (4.1.29) ∞ ∞ where the sequence {γk }∞ k=0 is bounded and the sequences {ak }k=0 and {bk }k=0 are absolutely summable errors in the computation of the resolvents. It can be seen that the iterative step (4.1.15) is a special case of (4.1.28) with f = γA∗ (I − JλF1 )A. Following the convergence proof of Theorem 4.4.1 here, we see that f is 1/ (γL)-ISM and therefore, for β = (γL)−1 , the operator βγA∗ (I − JλF1 )A is 1-ISM, which is firmly nonexpansive. Now by [14, Example 20.27], this operator is maximal monotone. Therefore Algorithm 4.1.1 is a special case of (4.1.29) without relaxation and we need to calculate the exact resolvent. It is surprising that our SCNPP is formulated in two different spaces, while (4.1.27) is defined only in one space and still we arrive at the same algorithm. Further related results on proximal feasibility problems appear in Combettes and Wajs [67, Subsection 4.3].

4.1.3

The general SCNPP

In view of Remark 2.0.48(3) we can show, by applying similar arguments to those used in [56], that our SCNPP can be transformed into a split common fixed point problem (SCFPP) (see (4.1.13)) with two operators T and U in a product space. Next, we show how the general SCNPP can be transformed into an SCNPP for two maximal monotone mappings.

Algorithms for Solving Monotone Variational Inequalities and Applications

89

Consider the space H = H1p × H2r , and the maximal monotone mappings Z : H1 → 2H1 and F : H → 2H defined by Z(x) = {0} for all x ∈ H1 and F ((x1 , . . . , xp , y 1 , . . . , y r )) = B1 (x1 ) × . . . × Bp (xp ) × F1 (y 1 ) × . . . × Fr (y r ) for each (x1 , . . . , xp , y 1 , . . . , y r ) ∈ H. In addition, let the bounded linear operator A : H1 → H be defined by A (x) = (x, . . . , x, A1 (x), . . . , Ar (x)) for all x ∈ H1 . So the general SCNPP (4.1.3)–(4.1.4) is equivalent to find a point x ∈ H1 such that 0 ∈ Z(x) and 0 ∈ F (A(x)) . (4.1.30) When Algorithm 4.1.1 is applied to this two-sets problem in the product space H and then translated back to the original spaces, it takes the following form. Algorithm 4.1.2. Step 0: Select an arbitrary starting point x0 ∈ H1 . Step 1: Given the current iterate xk , compute xk+1 = xk + γ

p X

JλBi (xk ) − x

 k

+

i=1

where γ ∈ (0, 2/L), with L = p +

r X

! F

A∗j (Jλ j − I)Aj (xk ) ,

(4.1.31)

j=1

Pr

j=1

kAj k2 .

The convergence result follows immediately from Theorem 4.4.1. We also may introduce relaxation parameters into the above algorithm as has been done in the relaxed version of [119, equation 2.10]. From now on, we will focus on the SCNPP for two maximal monotone mappings, keeping in mind that for the general case we can always apply the above product space formulation and then translate back the algorithms to the original spaces.

4.1.4

Strong convergence

In this subsection we first present a strong convergence theorem for Algorithm 4.1.1 under an additional assumption. This result relies on the work of Browder and Petryshyn [28, Theorem 5], and on that of Baillon, Bruck and Reich [9, Theorem 1.1] (see also [118, Lemma 7]). Then we study a second algorithm which is a modification of Algorithm 4.1.1 that results in a Halpern-type algorithm. The third algorithm in this subsection is inspired by Haugazeau’s method [89]; see also [12].

4.1.5

Strong convergence of Algorithm 4.1.1

The next two theorems are needed for our proof of Theorem 4.1.5. We present their full proofs for the reader’s convenience. Theorem 4.1.3. [28, Theorem 5], [98] Let B be a uniformly convex Banach space. If the operator S : B → B is nonexpansive with a nonempty fixed point set Fix (S) 6= ∅, then for any given constant c ∈ (0, 1), the operator Sc := cI + (1 − c)S is asymptotically regular and has the same fixed points as S.

90

The split variational inequality problem and related problems

Proof. It is obvious that Fix (S) = Fix (Sc ) and that Sc is also a nonexpansive self-mapping of B. Let u ∈ Fix (Sc ) and for a given x ∈ B, let xk = Sck (x). Since Sc is nonexpansive and u ∈ Fix (Sc ) , it follows that

k+1



x − u ≤ xk − u for all k ≥ 0. (4.1.32)

Therefore there exists limk→∞ xk − u = ` ≥ 0. Assume that ` > 0. Then xk+1 − u = Sck+1 (x) − u = Sc (xk ) − u = (cI + (1 − c)S) (xk ) − u  = c(xk − u) + (1 − c) S(xk ) − u .

(4.1.33)

Since



lim xk − u = lim xk+1 − u = `

(4.1.34)

k+1





x − u = S(xk ) − u ≤ xk − u ,

(4.1.35)

k→∞

k→∞

and the uniform convexity of B implies that

  lim xk − u − S(xk ) − u = 0, k→∞

(4.1.36)

i.e., xk − S(xk ) → 0. Hence xk+1 − xk → 0, which means that Sc is asymptotically regular, as claimed. Theorem 4.1.4. [9, Theorem 1.1] Let B be a uniformly convex Banach space. If the operator S  :k B → ∞B is nonexpansive, odd and asymptotically regular at x ∈ B, then the sequence S (x) k=0 converges strongly to a fixed point of S. Proof. Since S is odd, S(0) = −S(0) and S(0) = 0. Since S is nonexpansive, we have by the triangle inequality,

k k k k

S (x) = S (x) − S (0) ≤ S (x) − S k (0)



≤ S k−1 (x) − S k−1 (0) = S k−1 (x) ≤ · · · ≤ kx − 0k = kxk , (4.1.37)  k ∞

S (x) which means that the sequence is decreasing and bounded. Therefore k=0

∞ the  k+i k k

S (x) + S (x) limit limk→∞ S (x) exists is k=0

k and,

for a fixed i, the sequence

decreasing. Let limk→∞ S (x) = d. Then by the triangle inequality,



2d ≤ 2S k (x) = S k (x) − S k+i (x) + S k+i (x) + S k (x)



≤ S k (x) − S k+i (x) + S k (x) + S k+i (x) . (4.1.38)

k

k+i

Since S is asymptotically regular at x, we get that lim k→∞ S (x) − S (x) = 0. Thus  ∞ limk→∞ S k (x) S k+i (x) ≥ 2d. But the sequence S k+i (x) + S k (x) k=0 is decreas +

ing, so that S k (x) + S k+i (x) ≥ 2d for all k and i. We now have limk→∞ S k (x) = d and limm,n→∞ kS n (x) + S m (x)k = 2d. The  kuniform ∞ convexity of the space B implies that n m limm,n→∞ kS (x) − S (x)k = 0, whence S (x) k=0 converges strongly to a fixed point of S.

Algorithms for Solving Monotone Variational Inequalities and Applications

91

In the next Theorem (Theorem 4.1.5) we need the resolvent JλB to be odd, which means that   (I + λB)−1 (−x) = − (I + λB)−1 (x) for all x ∈ H. (4.1.39) Denote

  (I + λB)−1 (−x) = y and (I + λB)−1 (x) = z.

(4.1.40)

− x ∈ y + λB(y) and x ∈ z + λB(z).

(4.1.41)

x ∈ −y + λB(−y).

(4.1.42)

Then If B is odd, then Hence −y = z, which is (4.1.39). Therefore we assume in the following theorem that both B1 and F1 are odd. Now we are ready to present the strong convergence theorem for Algorithm 4.1.1. Its proof relies on Theorem 4.1.4. Theorem 4.1.5. Let H1 and H2 be two real Hilbert spaces. Let two odd and maximal monotone mappings B1 : H1 → 2H1 and F1 : H2 → 2H2, and ∞ a bounded linear operator A : H1 → H2 be given. If γ ∈ (0, 2/L), then any sequence xk k=0 , generated by Algorithm 4.1.1, converges strongly to x∗ ∈ Γ.  Proof. The operator JλB1 I − γA∗ (I − JλF1 )A is averaged by the proof of Theorem 4.4.1 (see also [120, Theorem 3.1]). Therefore,  by [28, Theorem 5] and [98] (see Theorem 4.1.3), the operator JλB1 I − γA∗ (I − JλF1 )A is also asymptotically regular. Since B1 and F1 are  odd, so are their resolvents JλB1 and JλF1 , and therefore JλB1 I − γA∗ (I − JλF1 )A is odd. Finally, the strong convergence of Algorithm 4.1.1 is now seen to follow from Theorem 4.1.4. For the general SCNPP we can again employ a product space formulation as in Subsection 4.1.3 and under the additional oddness assumption also get strong convergence.

4.1.6

A Halpern-type algorithm

Next we consider a modification of Algorithm 4.1.1 inspired by the Halpern iterative method and prove its strong convergence. Let S : C → C be a nonexpansive operator, where C is a nonempty, closed and convex subset of a Banach space B. A classical way to study nonexpansive mappings is to use strict contractions to approximate S, i.e., for t ∈ (0, 1), we define the strict contraction St : C → C by Tt (x) = tu + (1 − t)T (x) for x ∈ C,

(4.1.43)

where u ∈ C is fixed. Banach’s Contraction Mapping Principle (see, e.g., [84]) guarantees that each St has a unique fixed point xt ∈ C. In case Fix(S) 6= ∅, Browder [26] proved that if B is a Hilbert space, then xt converges strongly as t → 0+ to the fixed point of S nearest to u. Motivated by Browder’s results, Halpern [87] proposed an explicit iterative scheme and proved its strong convergence to a point z ∈ Fix(S). In the last decades many authors modified Halpern’s iterative scheme and found necessary and sufficient conditions, concerning the control sequence, that guarantee the strong convergence of Halpern-type schemes (see, e.g., [111, 134, 154, 156, 147]). Our algorithm for the SCNPP with two maximal monotone mappings is presented next.

92

The split variational inequality problem and related problems

Algorithm 4.1.3. Step 0: Select some λ > 0 and an arbitrary starting point x0 ∈ H1 . Step 1: Given the current iterate xk , compute  xk+1 = αk x0 + (1 − αk )JλB1 I − γA∗ (I − JλF1 )A (xk ),

(4.1.44)

where γ ∈ (0, 2/L) with L = kA∗ Ak and the sequence {αk }∞ k=0 ⊂ [0, 1] satisfies limk→∞ αk = ∞ P 0 and αk = ∞. k=0

Here is our strong convergence theorem for this algorithm. Theorem 4.1.6. Let H1 and H2 be two real Hilbert spaces. Let there be given two maximal monotone mappings B1 : H1 → 2H1 and F1 : H2 → 2H2 , and a bounded linear operator A : H1 → H2 . If Γ 6= ∅, γ ∈ (0, 2/L) and {αk }∞ k=0 ⊂ [0, 1] satisfies limk→∞ αk = 0 and ∞  k ∞ P αk = ∞, then any sequence x k=0 , generated by Algorithm 4.1.3, converges strongly k=0

to x∗ ∈ Γ.  B1 F1 ∗ Proof. As proved in Theorem 4.4.1, the operator J I − γA (I − J )A is averaged. λ λ  k ∞ So, according to Theorem 2.0.44, any sequence x k=0 , generated by Algorithm 4.1.3,  converges strongly to a point x∗ ∈ Fix JλB1 I − γA∗ (I − JλF1 )A as long as this set is nonempty. Following the proof of Theorem 4.4.1, we conclude that x∗ ∈ Γ, as claimed. Remark 4.1.7. Sequences {αk }∞ k=0 like those in Algorithm 4.1.3 are sometimes referred to in the literature as vanishing parameters. It is well known that vanishing parameters lead to numerical instabilities, which make them of limited use in applications. Still, Theorem 4.1.6 is of theoretical interest.

4.1.7

Haugazeau-type algorithm

In Subsection 3.3.1 we introduced Haugazeau’s iterative method [89] for solving the Best Approximation Problem (BAP), i.e., finding the projection of a point onto the intersection of m closed convex subsets {Ci }m i=1 ⊂ H of a real Hilbert space. In the proof of the conver-  gence theorem of Algorithm 4.1.1 we showed that the operator S := JλB1 I − γA∗ (I − JλF1 )A is averaged and therefore nonexpansive. Now consider the firmly nonexpansive operator S1/2 := (I + S) /2, which according to Theorem 4.1.3 has the same fixed points as S. Following the “weak-to-strong convergence principle” [12], strong convergence (without additional assumptions) can be obtained by replacing the updating rule (4.1.15) in Algorithm 4.1.1 with  xk+1 = Q x0 , xk , S1/2 xk = PH (x0 ,xk )∩H (xk ,S1/2 (xk )) (x0 ).

(4.1.45)

A similar technique can also be applied to the forward-backward splitting method in [63, Subsection 6].

Algorithms for Solving Monotone Variational Inequalities and Applications

93

Remark 4.1.8. 1. According to Remark 2.0.49, the operator JλB (I − λf ) is averaged, where B : H → 2H is maximal monotone, the operator f : H → H is α-ISM and λ ∈ (0, 2α). Since our convergence theorems rely on the averagedness of the operators involved, we could modify our algorithms and obtain strong convergence for Moudafi’s SMVI ((4.1.9) and (4.1.10) above). In addition, our algorithms allow us to solve Moudafi’s SMVI with monotone and hemicontinuous operators f and g (which is a larger class than the class of inverse strongly monotone operators). 2. Assuming that the mappings B1 : H1 → 2H1 and F1 : H2 → 2H2 are maximal monotone, and f : H1 → H1 and g : H2 → H2 are ISM operators, Moudafi presented an algorithm that converged weakly to a solution of the SMVI. By [139, Theorem 3], the sum of a maximal monotone mapping and an ISM operator is maximal monotone. Therefore the SMVI reduces to our two-mapping SCNPP. In addition, we can phrase the set-valued SVIP for maximal monotone mappings in the following way. Given two maximal monotone mappings B1 : H1 → 2H1 and F1 : H2 → 2H2 , a bounded linear operator A : H1 → H2 , and nonempty, closed and convex subsets C ⊂ H1 and Q ⊂ H2 , the set-valued SVIP is formulated as follows: find a point x∗ ∈ C and a point u∗ ∈ B1 (x∗ ) such that hu∗ , x − x∗ i ≥ 0 for all x ∈ C, and such that ∗ the points y = A(x∗ ) ∈ Q and v ∗ ∈ F1 (y ∗ ) solve hv ∗ , y − y ∗ i ≥ 0 for all y ∈ Q.

(4.1.46)

By extending Lemma 2.1.5 to mappings we see that if the zeros of the mappings B1 and F1 are in C and Q, respectively, then they are solutions of the set-valued SVIP, but in general not all solutions are zeros.

4.2

Applications

Now we present several applications of the SCNPP. They are listed here because their analysis can benefit from our algorithms for the SCNPP and because known algorithms for their solution may be generalized in the future to cover the more general SCNPP. The list includes known problems such as the Split Feasibility Problem (SFP) and the Convex Feasibility Problem (CFP). In addition, we introduce new “split” problems that have, to the best of our knowledge, never been studied before. Observe that if H1 = H2 and Aj = I for all j = 1, 2, . . . , r, then we can deal applications with “Split” replaced by “Common”. We can even study mixtures of “split” and “common” applications.

4.2.1

The split feasibility and convex feasibility problems

The Split Feasibility Problem (SFP) in Euclidean space is formulated as follows: find a point x∗ such that x∗ ∈ C ⊆ Rn and A(x∗ ) ∈ Q ⊆ Rm ,

(4.2.1)

where C ⊆ Rn , Q ⊆ Rm are nonempty, closed and convex sets, and A : Rn → Rm is given. This problem was originally introduced in Censor and Elfving [45], observe that an earliest

94

The split variational inequality problem and related problems

special case of this formulation was also given by Youla in [171]. The SFP was later used in the area of intensity-modulated radiation therapy (IMRT) treatment planning; see [46, 42]. Obviously, it is formally a special case of the SCNPP obtained from (4.1.14) by setting B1 = NC and F1 = NQ . The Convex Feasibility Problem (CFP) in a Euclidean space is: find a point x∗ such that x∗ ∈ ∩m i=1 Ci 6= ∅,

(4.2.2)

where Ci , i = 1, 2, . . . , m, are nonempty, closed and convex sets in Rn . This, in its turn, becomes a special case of the SFP by taking in (4.2.1) n = m, A = I Q = Rn and C = ∩m i=1 Ci . Many algorithms for solving the CFP have been developed; see, e.g., [11, 59]. Byrne [32] established an algorithm for solving the SFP, called the CQ-Algorithm, with the following iterative step:  xk+1 = PC xk + γAT (PQ − I)A(xk ) , (4.2.3) which does not require calculation of the inverse of the operator A, as in [45], but needs only its transpose AT . Remark 4.2.1. Observe that the SFP can be phrased as the following minimization problem. 1 min (dist(A(x), Q))2 = min kPQ (A(x)) − A(x)k2 . (4.2.4) x∈C x∈C 2 Employing the projected gradient method ((3.1.2)), we obtain the CQ algorithm. A recent excellent paper on the multiple-sets SFP which contains many references that reflect the state-of-the-art in this area is [112]. It is of interest to note that looking at the SFP from the point of view of the SCNPP enables us to find the minimum-norm solution of the SFP, i.e., a solution of the form x∗ = argmin{kxk | x solves the SFP (4.2.1)}.

(4.2.5)

This is done, and easily verified, by solving (4.1.14) with B1 = I + NC and F1 = NQ .

4.2.2

The Split Variational Inequality Problem (SVIP)

We now recall our Split Variational Inequality Problem (SVIP), which is an SIP with a VIP in each one of the two spaces [51]. Let H1 and H2 be two real Hilbert spaces. Given operators f : H1 → H1 and g : H2 → H2 , a bounded linear operator A : H1 → H2 , and nonempty, closed and convex subsets C ⊆ H1 and Q ⊆ H2 , the SVIP is formulated as follows: find a point x∗ ∈ C such that hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C and such that ∗ ∗ the point y = A(x ) ∈ Q and solves hg(y ∗ ), y − y ∗ i ≥ 0 for all y ∈ Q.

(4.2.6) (4.2.7)

Following Rockafellar [139, Theorem 3], a zero of the maximal monotone extension M of f (2.1.2) is a solution of the VIP(f, C). Therefore we can apply all the algorithms for the SCNPP with the maximal monotone extensions of the involved operators. Another

Algorithms for Solving Monotone Variational Inequalities and Applications

95

observation is that the SVIP (4.2.6)–(4.2.7) can be casted as an equivalent CVIP in an appropriate product space. We look at the product space H = H1 × H2 and introduce in it the product set D := C × Q and the set V := {x = (x, y) ∈ H | A(x) = y}.

(4.2.8)

We adopt the notational convention that objects in the product space are represented in boldface type. We transform the SVIP (4.2.6)–(4.2.7) into the following equivalent CVIP in the product space: find a point x∗ ∈ D ∩ V , such that hh(x∗ ), x − x∗ i ≥ 0 for all x = (x, y) ∈ D,

(4.2.9)

where h : H → H is defined by h(x, y) = (f (x), g(y)).

(4.2.10)

A simple adaptation of the decomposition lemma [20, Proposition 5.7, page 275] shows that problems (4.2.6)–(4.2.7) and (4.2.9) are equivalent, and, therefore, we can apply Algorithm 3.4.1 to the solution of (4.2.9). Lemma 4.2.2. A point x∗ = (x∗ , y ∗ ) solves (4.2.9) if and only if x∗ and y ∗ solve (4.2.6)– (4.2.7). Proof. If (x∗ , y ∗ ) solves (4.2.6)–(4.2.7), then it is clear that (x∗ , y ∗ ) solves (4.2.9). To prove the other direction, suppose that (x∗ , y ∗ ) solves (4.2.9). Since (4.2.9) holds for all (x, y) ∈ D, we may take (x∗ , y) ∈ D and deduce that hg(A(x∗ )), y − A(x∗ )i ≥ 0 for all y ∈ Q.

(4.2.11)

Using a similar argument with (x, y ∗ ) ∈ D, we get hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C,

(4.2.12)

which means that (x∗ , y ∗ ) solves (4.2.6)–(4.2.7). Using this equivalence, we can now employ Algorithm 3.4.1 in order to solve the SVIP. The following conditions are needed for the convergence theorem. Condition 4.2.1. f is monotone on C and g is monotone on Q. Condition 4.2.2. f is Lipschitz continuous on H1 with constant L1 > 0 and g is Lipschitz continuous on H2 with constant L2 > 0. Condition 4.2.3. V ∩ Sol(h, D) 6= ∅. ∞ Let {λk }∞ k=0 ⊂ [a, b] for some a, b ∈ (0, 1/L), where L = min{L1 , L2 }, and let {αk }k=0 ⊂ [c, d] for some c, d ∈ (0, 1). Then the following algorithm generates two sequences that converge to a point z ∈ V ∩ Sol(h, D), as the convergence theorem given below shows.

96

The split variational inequality problem and related problems

Algorithm 4.2.1. Step 0: Select an arbitrary starting point x0 ∈ H. Step 1: Given the current iterate xk , compute y k = P D (xk − λk h(xk )),

(4.2.13)

construct the set T k (similar to (3.1.28)) and then calculate  xk+1 = αk xk + (1 − αk )P V P T k (xk − λk h(y k )) .

(4.2.14)

Our convergence theorem for Algorithm 4.2.1 follows from Theorem 3.4.1. Theorem 4.2.3. Consider f : H1 → H1 and g : H2 → H2 , a bounded linear operator A : H1 → H2 , and nonempty, closed and subsets C ⊆ H1 and Q ⊆ H2 . Assume that  k ∞  k convex ∞ Conditions 4.2.1–4.2.3 hold, and let x k=0 and y k=0 be any two sequences generated by Algorithm 4.2.1 with {λk }∞ 1/L), where k=0 ⊂ [a, b] for some a,b ∈ (0, L = min{L1 , L2 }, ∞ k ∞ k ∞ and let {αk }k=0 ⊂ [c, d] for some c, d ∈ (0, 1). Then x k=0 and y k=0 converge weakly to the same point z ∈ V ∩ Sol(h, D) and z = lim P V ∩Sol(h,D) (xk ). k→∞

(4.2.15)

The value of the product space approach, described above, depends on the ability to “translate” Algorithm 4.2.1 back to the original spaces H1 and H2 . Observe that due to [131, Lemma 1.1] for x = (x, y) ∈ D, we have P D (x) = (PC (x), PQ (y)) and a similar formula holds for P T k . The potential difficulty lies in P V of (4.2.14). In the finite-dimensional case, since V is a subspace, the projection onto it is easily computable by using an orthogonal basis. For example, if U is a k-dimensional subspace of Rn with the basis {u1 , u2 , ..., uk }, then for x ∈ Rn , we have k X hx, ui i PU (x) = u. (4.2.16) 2 i ku k i i=1

4.2.3

The Split Minimization Problem (SMP)

As mentioned several times in this thesis (see e.g., Example 1.0.1), minimizing a proper, lower semicontinuous and convex function g : H → R ∪ {+∞} on a closed and convex subset C ⊂ H is equivalent to finding zero of ∂ (g + IC ). Using this relation we propose a new problem which we call the Split Minimization Problem (SMP). Let H1 , H2 be two real Hilbert spaces and C ⊆ H1 and Q ⊆ H2 be nonempty closed and convex sets. In addition, given two proper, lower semicontinuous and convex functions g1 : H1 → R ∪ {+∞} and g2 : H2 → R ∪ {+∞} and a bounded linear operator A : H1 → H2 , the SMP is formulated as follows: find a point x∗ ∈ C such that x∗ = argmin{g1 (x) | x ∈ C} and such that ∗ ∗ the point y = A(x ) ∈ Q and solves y ∗ = argmin{g2 (y) | y ∈ Q}.

(4.2.17) (4.2.18)

Observe that the conditions on g1 and g2 guaranty that the mappings B1 = ∂ (g1 + IC ) and F1 = ∂ (g2 + IQ ) are maximal monotone.

Algorithms for Solving Monotone Variational Inequalities and Applications

97

Remark 4.2.4. In case where the functions g1 and g2 are continuously differentiable and convex function on C, Q respectively, the SMP is a special case of the SVIP (see Example 1.0.1). Remark 4.2.5. If H1 = H2 and A = I we get a new problem, which we call the Common Minimizer Problem (CMP).

4.2.4

The Split Saddle-Point Problem (SSPP)

As we show in Example 1.0.4 the Split Saddle-Point Problem can be casted as VIP, therefore it is natural to phrase the Split Saddle-Point Problem (SSPP) directly as a SVIP. We now show following Rockafellar [138] how the SSPP can be presented as a SCNPP. Consider the following Saddle-Point Problem. Let H1 and H2 be two real Hilbert spaces, and let C1 and C2 be two convex subsets of H1 and H2 , respectively. Given a convex-concave bifunction g : C1 × C2 → (−∞, +∞], the Saddle-Point Problem is to find a point (x∗ , y ∗ ) ∈ C1 × C2 such that g(x∗ , y) ≤ g(x∗ , y ∗ ) ≤ g(x, y ∗ ) for all (x, y) ∈ C1 × C2 . (4.2.19) This means that the point x∗ ∈ C1 is the minimizer of g(·, y ∗ ) and the point y ∗ ∈ C2 is the maximizer of g(x∗ , ·). Therefore the Saddle-Point Problem is also refereed as the Minmax Problem. Rockafellar [138] associated the mapping Tg := ∂1 (g) × ∂2 (−g), where ∂i (g), for i = 1, 2 stands for the subdifferential of g with respect to the i-th variable. Rockafellar proved that Tg is maximal monotone if and only if g is closed and proper (in the sense of [138]). Moreover, (x∗ , y ∗ ) is a saddle-point of g if and only if (0, 0) ∈ Tg (x∗ , y ∗ ).

(4.2.20)

Therefore we are able to define a Split Saddle-Point Problem which is a SCNPP with two maximal monotone mappings. Let C1 , C2 ⊆ H1 and Q1 , Q2 ⊆ H2 be convex sets. Given two convex-concave bifunctions g1 : C1 × C2 → (−∞, +∞] and g2 : Q1 × Q2 → (−∞, +∞] and a bounded linear operator A : H1 → H2 . The Split Saddle-Point Problem is formulated as follows. find a point (x∗ , y ∗ ) ∈ C1 × C2 such that (x∗ , y ∗ ) = arg min max g1 (x, y)

(4.2.21)

(x,y)∈H1

and such that the point (A(x ) = u∗ , A(y ∗ ) = v ∗ ) ∈ Q1 × Q2 satisfies (u∗ , v ∗ ) = arg min max g2 (u, v) ∗

(4.2.22)

(u,v)∈H2

Remark 4.2.6. If H1 = H2 and A = I we get a new problem, which we call the Common Saddle-Point Problem (CSPP).

4.2.5

Split Equilibrium Problem (SEP)

Now we present another special case of the SCNPP which is the Split Equilibrium Problem (SEP). We first recall the Equilibrium Problem (EP) Problem, that is, 2.1.14. Let H be

98

The split variational inequality problem and related problems

a real Hilbert spaces and C ⊂ H a nonempty, closed and convex subset. Consider the bifunction g : C × C → R, such that 1. g(x, x) ≥ 0 for all x ∈ C, 2. g(x, ·) : C → R is convex and lower semicontinues for all x ∈ C, 3. g(·, y) : C → R is convex and upper semicontinues for all y ∈ C. The Equilibrium Problem (EP) with respect to the bifunction g and the set C, denoted by EP(f, C) is to find a point x∗ ∈ C such that g(x∗ , y) ≥ 0 for all y ∈ C.

(4.2.23)

The equilibrium problem, was first introduced by Blum and Oettli [21] where they studied existence of solutions and its relation with other problems in optimization, such as minimization problems, Nash equilibrium problems in noncooperative game, saddle point problems, VIPs and many others. For more information about EPs and iterative methods for solving it see [94, 65] and the reference therein. Following [114], we define the mapping T g : C × C → 2H with respect to the bifunction g above as follows.   {w ∈ H | g(x, y) ≥ hw, y − xi for all y ∈ C} , if x ∈ C, g (4.2.24) T (x) :=  ∅, if x ∈ / C. Then the EP(f, C) is equivalent to finding a zero of the maximal monotone mapping T g . So now we are ready to present the Split Equilibrium Problem (SEP). Let H1 and H2 be two real Hilbert spaces and C ⊂ H1 and Q ⊂ H2 be nonempty, closed and convex subset. Given two bifunctions g1 : C × C → R and g2 : Q × Q → R and a bounded linear operator A : H1 → H2 . So, the Split Equilibrium Problem (SEP) is formulated as follows. find a point x∗ ∈ C such that g1 (x∗ , x) ≥ 0 for all x ∈ C and such that ∗ ∗ the point y = A(x ) ∈ Q such that g2 (y ∗ , y) ≥ 0 for all y ∈ Q

4.3

(4.2.25) (4.2.26) (4.2.27)

The common solutions to variational inequalities problem

We are still looking at applications to the SCNPP, but now, we are focus on the case where H1 = H2 and Aj = I for all j = 1, 2, . . . , r. In this case the “Split” are replaced by “Common”applications. A new problem, called the Common Solutions to Variational Inequalities Problem (CSVIP) has recently been introduced in [51, Subsection 7.2] and further studied in [52]. In [51] it was considered as a special case of the Split Variational Inequality Problem (SVIP) introduced therein. In [52] we study the CSVIP with mapping while here we are interesting with operators. Let us recall the Common Solutions to Variational Inequalities Problem (CSVIP). This problem is formulated as follows.

Algorithms for Solving Monotone Variational Inequalities and Applications

99

Problem 4.3.1. Let H be a real Hilbert space. Let there be given, for each i = 1,T2, . . . , N , an operator fi : H → H and a nonempty, closed and convex subset Ci ⊆ H, with N i=1 Ci 6= TN ∗ ∅. The CSVIP is to find a point x ∈ i=1 Ci such that, for each i = 1, 2, . . . , N hfi (x∗ ), x − x∗ i ≥ 0 for all x ∈ Ci , i = 1, 2, . . . , N.

(4.3.1)

Obviously, if N = 1 then the problem is nothing but the well-known Variational Inequality Problem (VIP), first introduced by Hartman and Stampacchia in 1966 (see [88]). The motivation for defining and studying such CSVIPs with N > 1 stems from the simple observation that if we choose all fi = 0, then the problem reduces to that of finding a point TN ∗ x ∈ i=1 Ci in the nonempty intersection of a finite family of closed and convex sets, which is the well-known Convex Feasibility Problem (CFP). If the sets Ci are the fixed point sets of a family of operators Ti : H → H, then the CFP is the Common Fixed Point Problem (CFPP). These problems have been intensively studied over the past decades both theoretically (existence, uniqueness, properties, etc. of solutions) and algorithmically (devising iterative procedures which generate sequences that converge, finitely or asymptotically, to a solution). Since the phrase “system of variational inequalities” has been extensively used in the literature for many different problems, as can be seen from the cases mentioned in Subsection 4.3.1 below, it seems natural to call our new problem the Common Solutions to Variational Inequalities Problem. The significance of studying the CSVIP lies in the fact that besides its enabling a unified treatment of such well-known problems as the CFP and the CFPP, the CSVIP also opens a path to a variety of new “common point problems” that are created from various special cases of the VIP. Our main goal is to present an iterative procedure for solving CSVIPs and prove its strong convergence. Our algorithm, besides generating a sequence which strongly converges to a solution, also solves the, so called, Best Approximation Problem (BAP), which consists of finding the nearest point projection of a point onto the (unknown) intersection of N closed and convex subsets (see, e.g., [40] and the references therein). More precisely, our algorithm generates a sequence which converges strongly to the nearest point projection of the starting point onto the solution set of the CSVIP. A special case of the CSVIP in the Euclidean space Rn was considered in [51, Subsection 7.2]. There we transformed that CSVIP into a constrained variational inequality problem (CVIP) in an appropriate product space, i.e., find a point x∗ ∈ C ∩ ∆ such that hf (x∗ ), x − x∗ i ≥ 0 1

2

N

for all x = (x , x , . . . , x ) ∈ C,

(4.3.2) (4.3.3)

Nn where C:= ΠN i=1 Ci , the diagonal set in R

∆ := {x ∈ RN n | x = (a, a, . . . , a), a ∈ Rn }

(4.3.4)

and f : RN n → RN n is defined by  f (x1 , x2 , . . . , xN ) = (f1 (x1 ), . . . , fN (xN )),

(4.3.5)

where xi ∈ Rn for all i = 1, 2, . . . , N. So, problem (4.3.2)–(4.3.3) can be solved by Algorithm 3.4.1 ([51, Algorithm 4.4]). So, here we propose an algorithm that does not require the transformation into a product space.

100

The split variational inequality problem and related problems

Next we describe the connections between the CSVIP and some earlier results. In addition we present new algorithm for solving the CSVIP and prove its strong convergence through a sequence of claims.

4.3.1

Relation with previous work

Several variants of systems of variational inequalities appeared during the last decades. We present some of them in detail and show their connection to the CSVIP. 1. Konnov [103] considers the following system of variational inequalities. Let C ⊆ Rn n be a nonempty, closed and convex set and let Mi : C → 2R , i = 1, 2, . . . , N , be N mappings. The problem is to find a point x∗ ∈ C such that for each i = 1, 2, . . . , N, there exists u∗i ∈ Mi (x∗ ) satisfying hu∗i , x − x∗ i ≥ 0 for all x ∈ C, i = 1, 2, . . . , N .

(4.3.6)

This means that Konnov solves a CSVIP with mappings but with H = Rn and Ci = C for all i = 1, 2, . . . , N . 2. Ansari and Yao [2] studied the following system of variational inequalities. Let I be an index set and for each i ∈ I, let Xi be a Hausdorff topological vector space with its topological dual Xi∗ . Let Ci , i ∈ I, be nonempty, closed and convex subsets of Xi . Q ∗ Let C = N i=1 Ci and let fi : C → Xi , i = 1, 2, . . . , N , be operators (see also [129] for more details). Ansari and Yao then consider the problem of finding a point x∗ ∈ C such that hfi (x∗ ), x − x∗ i ≥ 0 for all x ∈ Ci , i = 1, 2, . . . , N. (4.3.7) 3. Kassay and Kolumb´an [100] solve another system of two variational inequalities. Let X and Y be two reflexive real Banach spaces and let C1 ⊆ X and C2 ⊆ Y be nonempty, closed and convex sets. Denote by X ∗ and Y ∗ the dual spaces of X and ∗ ∗ Y , respectively. Consider two mappings M1 : C1 ×C2 → 2X and M2 : C1 ×C2 → 2Y . Kassay’s and Kolumb´an’s problem is to find a pair (x1 , x2 ) ∈ C1 × C2 such that sup

hw, x − x1 i ≥ 0 for all x ∈ C1 ,

w∈M1 (x1 ,x2 )

sup

hz, y − x2 i ≥ 0 for all y ∈ C2 .

(4.3.8)

z∈M2 (x1 ,x2 )

4. Recently, Zhao et al. [174] have considered the following system of two variational inequalities in Euclidean spaces. Let C1 and C2 be two closed and convex subsets of Rn and Rm , respectively. Let f1 : C1 × C2 → Rn and f2 : C1 × C2 → Rm be two operators. Then Zhao et al.’s problem is to find a point (u∗1 , u∗2 ) ∈ C1 × C2 such that hf1 (u∗1 , u∗2 ), u1 − u∗1 i ≥ 0 for all u1 ∈ C1 , hf2 (u∗1 , u∗2 ), u2 − u∗2 i ≥ 0 for all u2 ∈ C2 .

(4.3.9)

The main difference between problems (4.3.1) and (4.3.9) is that our system includes any finite number (not only two) of operators defined on different sets. In addition, in our case the problem is formulated in Hilbert space (not only in Euclidean space).

Algorithms for Solving Monotone Variational Inequalities and Applications

4.3.2

101

The algorithm

In this subsection we present a new algorithm for solving the CSVIP. Let {Ci }N i=1 be N N nonempty, closed and convex subsets of H. Let {fi : H → H}i=1 be a set of N operators. Denote by Sol(fi , Ci ) the solution set of the Variational Inequality Problem VIP(fi , Ci ) corresponding to the mapping fi and the set Ci . Algorithm 4.3.1. Step 0: Select an arbitrary starting point x1 ∈ H. Step 1: Given the current iterate xk , calculate the next iterate as follows:   k k k k y = P x − λ f x ,  C i i i i      k k k k   zi = PCi x − λi fi yi ,   

   C k = z ∈ H | xk − z k , z − xk − γ k z k − xk ≤ 0 , i i i i T k  Ck = N  i=1 Ci ,   

1   k k k  W = z ∈ H | x − x , z − x ≤ 0 ,     k+1 x = PC k ∩W k (x1 ) .

(4.3.10)

This algorithm is quite complex in comparison with more “direct” iterative methods. In order to calculate the next approximation to the solution of the problem, the latter only use a value of one main operator at the current approximation. On the other hand, Algorithm 4.3.1 generates strongly convergent sequences, as is proved below, and this important property apparently complicates the process. It seems natural to ask by how much and k k k k how difficult it is to calculate Cik , C k = ∩N i=1 Ci , W and C ∩ W . Our main interest here is not to develop a practical numerical method and whether our work can help in the design and analysis of more practical algorithms remains to be seen. we give some simple computational examples to demonstrate the practical difficulties. In order to prove our convergence theorem we assume that the following conditions hold. Condition 4.3.1. The operators {fi }N i=1 are monotone and Lipschitz continuous with constant αi . T Condition 4.3.2. The common solution set SOL := N i=1 Sol(fi , Ci ) is nonempty.  k ∞ Condition 4.3.3. The sequence λi k=1 ⊂ [a, b], i = 1, . . . , N , for some a and b with 0 < a < b < 1/α, where α := max1≤i≤N αi .  ∞ Condition 4.3.4. The sequence γik k=1 ⊂ [ε, 1/2] for each i = 1, . . . , N , where ε ∈ (0, 1/2].  ∞ Theorem 4.3.1. Assume that Conditions 4.3.1–4.3.4 hold. Then any sequences xk k=1 ,  k ∞  ∞ yi k=1 and zik k=1 , generated by Algorithm 4.3.1, converge strongly to PSOL (x1 ). Proof. We divide the proof into four claims.  ∞ Claim 4.3.1. The projection PSOL (x1 ) and the sequence xk k=1 are well-defined.

102

The split variational inequality problem and related problems

Proof. It is known that each Sol(fi , Ci ), i = 1, . . . , N , is a closed and convex subset of H (see, e.g., [18, Lemma 2.4(ii)]). Hence fi is nonempty (by Condition 4.3.2), closed and convex, so PSOL (x1 ) is well defined. Next, it is clear that both Cik and W k are closed half-spaces for all k ≥ 1. Therefore C k and C k ∩ W k are closed and convex for all k ≥ 1. It remains to be proved that C k ∩ W k is not empty for all k. When γik = 1/2 for all k ∈ N and for all i = 1, . . . , N, then the set Cik has the following form:



 ek := z ∈ H : z k − z ≤ xk − z . C i i From Claim 2.0.1 it follows that  

eik ⊂ z ∈ H : xk − zik , z − xk − γik zik − xk ≤ 0 = Cik . C

(4.3.11)

(4.3.12)

k ek e k = TN C ek Let C i=1 i . It is enough to show that SOL ⊆ C ∩ W for all n ∈ N. First we ek for all n ∈ N. To this end, let s ∈ SOL and let wi ∈ fi (s) for any prove that SOL ⊆ C i = 1, . . . , N . It now follows from (2.0.29) that

k



zi − s 2 = PC xk − λki fi yik − s 2 i

2

2   ≤ xk − λki fi yik − s − xk − λki fi yik − zik

2

2

 = xk − s − xk − zik + 2λki fi yik , s − zik

2

2 

 = xk − s − xk − zik + 2λki fi yik − wi , s − yik



  + wi , s − yik + fi yik , yik − zik

(4.3.13)

for any i = 1, . . . , N . Using the monotonicity of fi and the fact that s ∈ Sol(fi , Ci ) , we obtain from (4.3.13) that

k







zi − s 2 ≤ xk − s 2 − xk − zik 2 + 2λki fi yik , yik − zik

2

2

 = xk − s − xk − yik + yik − zik + 2λki fi yik , yik − zik

2

2

= xk − s − xk − yik − 2 xk − yik , yik − zik −

k



yi − zik 2 + 2λki fi yik , yik − zik

2

2

2 = xk − s − xk − yik − yik − zik +

 2 xk − λki fi yik − yik , zik − yik . (4.3.14) From (2.0.28) we have



k  x − λki fi yik − yik , zik − yik = xk − λki fi xk − yik , zik − yik

  + λki fi xk − fi yik , zik − yik

  ≤ λki fi xk − fi yik , zik − yik

(4.3.15)

and from the Cauchy-Schwarz inequality it follows that

 

k  x − λki fi yik − yik , zik − yik ≤ λki fi xk − fi yik zik − yik .

(4.3.16)

Algorithms for Solving Monotone Variational Inequalities and Applications

103

Each operator fi , 1, . . . , N , is Lipschitz continuous with constant αi . Therefore fi is obviously Lipschitz continuous with constant α. Using this fact, we obtain





k x − λki fi yik − yik , zik − yik ≤ λki α xk − yik zik − yik .

(4.3.17)

k





zi − s 2 ≤ xk − s 2 − xk − yik 2 − yik − zik 2 +



2λki α xk − yik zik − yik .

(4.3.18)



2 0 ≤ λki α xk − yik − zik − yik

2



2 = λki α xk − yik − 2λki α xk − yik zik − yik

2 + zik − yik ,

(4.3.19)

Hence

Since

we obtain that



2

2 2 2λki α xk − yik zik − yik ≤ λki α xk − yik + zik − yik .

(4.3.20)

Thus,

k





zi − s 2 ≤ xk − s 2 − xk − yik 2 − yik − zik 2 +

2

2 2 λki α xk − yik + zik − yik .

k

2  2  k k

x − yik 2 .

= x − s − 1 − λi α

(4.3.21)

2

2 ek . Consequently, Since λki < 1/α it follows that zik − s ≤ xk − s . Therefore s ∈ C  ek for all k ≥ 1. Now we prove by induction that the sequence xk ∞ is well SOL ⊆ C k=1 1 1 e e1 ∩ W 1 defined. Indeed, since SOL ⊆ C and SOL ⊆ W = H, it follows that SOL ⊆ C ek−1 ∩ W k−1 for and therefore x2 = PCe1 ∩W 1 (x1 ) is well defined. Now suppose that SOL ⊆ C ek and for any s ∈ SOL, it some k > 2. Let xk = PCek−1 ∩W k−1 (x1 ). Again we have SOL ⊆ C follows from (2.0.28) that

1

x − xk , s − xk = x1 − PCek−1 ∩W k−1 (x1 ), s − PCek−1 ∩W k−1 (x1 ) ≤ 0. (4.3.22) ek ∩ W k for any n ≥ 1, as required. This This implies that s ∈ W k . Therefore SOL ⊆ C k ∞ shows that the sequence x k=1 is indeed well defined. Claim 4.3.2. 1, . . . , N .

The sequences

 k ∞  k ∞  ∞ x k=1 , yi k=1 and zik k=1 are bounded for any i =

Proof. Since xk+1 = PC k ∩W k (x1 ), we have for any s ∈ C k ∩ W k ,

k+1



x − x1 ≤ s − x1 .

(4.3.23)

104

The split variational inequality problem and related problems

 ∞ Therefore xk k=1 is bounded. It follows from the definition of W k that xk = PW k (x1 ). Since xk+1 ∈ W k , it follows from (2.0.29) that

k+1

2

2

2

x − xk + xk − x1 ≤ xk+1 − x1 . (4.3.24)

∞  Thus the sequence xk − x 1 k=1 is increasing and bounded, hence convergent. This shows that limn→∞ xk − x1 exists. In addition, from (4.3.24) we get that

lim xn+1 − xk = 0. (4.3.25) n→∞

Since xk+1 ∈ Cik , i = 1, . . . , N , we have

k  x − zik , xk+1 − xk − γik zik − xk ≤ 0.

(4.3.26)

Thus

2

γik zik − xk ≤ xk − zik , xk − xn+1 .



Hence zik − xk ≤ xk − xk+1 and therefore

lim zik − xk = 0, for all i = 1, . . . , N. n→∞

(4.3.27)

(4.3.28)

 ∞ Thus zik k=1 is a bounded sequence for each i = 1, . . . , N . Using (4.3.21), we see that 



k

 −1  k

x − s 2 − zik − s 2

x − yik 2 ≤ 1 − λki α 2 







 2 −1 k k

x − s − zik − s xk − s + zik − s = 1 − λi α 





 2 −1 k k

x − zik xk − s + zik − s . ≤ 1 − λi α (4.3.29)  ∞  ∞ Since both xk k=1 and zik k=1 are bounded, Condition 4.3.3 and (4.3.28), imply that

lim xk − yik = 0 for all i = 1, . . . , N. (4.3.30) n→∞

 ∞ Therefore yik k=1 is a bounded sequence for each i = 1, . . . , N , which completes the proof of Claim 4.3.2.  ∞  ∞ Claim 4.3.3. Any weak accumulation point of the sequences xk k=1 , yik k=1 and  k ∞ zi k=1 belongs to SOL.  ∞  ∞ Proof. Since xk k=1 is bounded (see Claim 4.3.2), there exists a subsequence xkj j=1 of  k ∞ x k=1 which converges weakly to x∗ . Therefore it follows from (4.3.30) that there also n o∞  ∞ k of yik k=1 which converges to x∗ for each i = 1, . . . , N . exists a subsequence yi j j=1

Define the maximal extension mapping ([139, Theorem 3]) Mi as in (2.1.2), i.e.,  fi (r) + NCi (r) , r ∈ Ci , Mi (r) = ∅, otherwise,

(4.3.31)

Algorithms for Solving Monotone Variational Inequalities and Applications

105

where NCi (r) is the normal cone of Ci at r ∈ Ci . Let (r, w) ∈ G(Mi ) with r ∈ Ci and let kj pi ∈ fi (r). D Since w ∈ MEi (r) = fi (r) + NCi (r), we get w − pi ∈ NCi (r).  Since yi ∈ Ci , we k k k obtain w − pi , r − yi j ≥ 0. On the other hand, since yi j = PCi xkj − λi j fi xk , we also have D E  k k k xkj − λi j fi xk − yi j , r − yi j ≤ 0 (4.3.32) and thus

*

+

k

xkj − yi j k λi j

− fi x

 k

k

, r − yi j

≤ 0.

Therefore it follows from the monotonicity of the operator fi , i = 1, . . . , N , that D E D E k k w, r − yi j ≥ pi , r − yi j + * k D E  xkj − yi j kj kj k − fi x , r − y i ≥ pi , r − yi + k λi j D E D E    k k = pi − fi yik , r − yi j + fi yik − fi xk , r − yi j + * k xkj − yi j kj , r − yi + k λi j * + kj E D kj   x − y k k i ≥ fi yik − fi xk , r − yi j + , r − yi j . kj λi

(4.3.33)

(4.3.34)

From the Cauchy-Schwarz inequality and the Lipschitz continuity with constant α it follows that

D

k

w, r − yi j

E

kj kj



x − y

i

k

k k ≥ −α r − yi j xkj − yi j − r − yi j a





k j

xkj − yi

kj kj  , = −Ki α x − yi + a

(4.3.35)

o n

k where Ki = supj∈N r − yi j . Taking the limit as j → ∞ and using the fact that

o∞ n

kj is bounded, we see that hw, r − x∗ i ≥ 0. The maximality of Mi and

r − yi j=1

Remark 2.0.46 now imply that x∗ ∈ Mi−1 (0) = Sol(fi , Ci ). Hence x∗ ∈ SOL.  ∞  ∞  ∞ Claim 4.3.4. The sequences xk k=1 , yik k=1 and zik k=1 converge strongly to PSOL (x1 ). Proof. Since (4.3.23) holds for all s ∈ C k ∩ W k and SOL ⊆ C k ∩ W k by the proof of Claim 4.3.1, we get for s = PSOL (x1 ) that

k



x − x1 ≤ PSOL (x1 ) − x1 (4.3.36)

106

The split variational inequality problem and related problems

and furthermore,



lim xk − x1 ≤ PSOL (x1 ) − x1 . (4.3.37) n→∞  k ∞ Now, since the sequence x k=1 is bounded (see Claim 4.3.2), there exists a subsequence  k ∞  ∞ x j j=1 of xk k=1 which converges weakly to x∗ . From Claim 4.3.3 it follows that x∗ ∈ SOL. From the weak lower semicontinuity of the norm and (4.3.37) it follows that kx∗ − x1 k ≤ lim inf kxkj − x1 k j→∞

= lim kxk − x1 k = PSOL (x1 ) − x1 . k→∞

(4.3.38)

∗ 1 Since x∗ ∈ SOL, it follows that accumu kx ∞= PSOL (x ). So, since by Claim 4.3.3k any weak lation point of the sequence x k=1 belong to SOL, it follows that x * x∗ = PSOL (x1 ). Finally,

kx∗ − x1 k ≤ lim inf kxk − x1 k = lim kxk − x1 k = x∗ − x1 . (4.3.39) k→∞

k→∞

Since (xk −x1 ) * x∗ −x1 and limk→∞ kxk −x1 k = kx∗ −x1 k, it follows from the Kadec-Klee property of H that limk→∞ kxk − x∗ k = 0, as asserted. This completes the proof of Theorem 4.4.1.  Remark 4.3.2. The class of inverse strongly monotone operators is commonly used in variational inequality problems (see e.g., [96] and references therein). Following Remark 2.0.14(1) inverse strong monotonicity implies monotonicity and Lipschitz continuity, thus we can replace Condition 4.3.1 by the assumption that each one of the operators {fi }N i=1 is αi -ISM (αi > 0). So, Theorem 4.4.1 applies to this case too.

4.3.3

Implementation

We now demonstrate, using a simple low-dimensional example, the practical difficulties associated with the implementation of Algorithm 4.3.1 (see also our comment after the formulation of Algorithm 4.3.1). We consider a two-disc convex feasibility problem in R2 and provide an explicit formulation of our Algorithm 4.3.1, as well as some numerical results. More explicitly, let C1 = {(x, y) ∈ R2 | (x − a1 )2 + (y − b1 )2 ≤ r12 } and C2 = {(x, y) ∈ R2 | (x − a2 )2 + (y − b2 )2 ≤ r22 } with C1 ∩C2 6= ∅. Consider the problem of finding a point (x∗ , y ∗ ) ∈ R2 such that (x∗ , y ∗ ) ∈ C1 ∩ C2 . Observe that in this case f1 = f2 ≡ 0. For simplicity we choose γ1k = γ2k = 1/2. Given the current iterate xk = (u, v), the explicit formulation of the iterative step of Algorithm 4.3.1 becomes:     r1 (u − a1 ) r1 (v − b1 )  k k  y1 = PC1 x = a1 + , b1 + ,   k(u − a1 , v − b1 )k k(u − a1 , v − b1 )k         r2 (u − a2 ) r2 (v − b2 )  k k   y2 = PC2 x = a2 + , b1 + ,   k(u − a2 , v − b2 )k k(u − a2 , v − b2 )k   (4.3.40) C1k = z = (s, t) ∈ R2 | kz − y1k k ≤ kz − xk k ,       C2k = z = (s, t) ∈ R2 | kz − y2k k ≤ kz − xk k ,    

   W k = z ∈ R 2 | x1 − xk , z − xk ≤ 0 ,      k+1 x = PC1k ∩C2k ∩W k x1 .

Algorithms for Solving Monotone Variational Inequalities and Applications

107

In order to calculate xk+1 , we solve the following constrained minimization problem: ( min kx1 − zk2 , (4.3.41) such that z ∈ C1k ∩ C2k ∩ W k . In the case of the metric projection onto two half-spaces, an explicit formula can be found in [15, Definition 3.1] and in [49, Subsection 3.1]. Following the same technique, it is possible to obtain the solution to (4.3.41) even for more than three half-spaces, but there are many subcases in the explicit formula (two to the power of the number of half-spaces). Now we present some numerical results for the particular case with the two discs C1 = {(x, y) ∈ R2 | x2 + y 2 ≤ 1} and C2 = {(x, y) ∈ R2 | (x − 1)2 + y 2 ≤ 1}. We choose separately two starting points (−1/2, 3) and (3, 3), and for each starting point we present a table with the (x, y) coordinates for the first 10 iterations of Algorithm 4.3.1 (Tables 4.1 and 4.2). In addition, Figures 4.1 and 4.2 illustrate the geometry in each iterative step, i.e., the discs and the three half-spaces C1k , C2k and W k . Iteration Number 1 2 3 4 5 6 7 8 9 10

x-value −0.500000000 0.0263507717 0.2898391508 0.4211545167 0.4687763141 0.4862238741 0.4935428246 0.4968764116 0.4984644573 0.4992386397

y-value 3.0000000000 1.9471923798 1.4209450920 1.1576070220 1.0169184232 0.9429308114 0.9048859275 0.8855650270 0.8758239778 0.8709324060

Table 4.1: 10 iterations with the starting point x1 = (−1/2, 3)

Figure 4.1: Geometric illustration of Algorithm 4.3.1 with the starting point x1 = (−1/2, 3)

108

The split variational inequality problem and related problems Iteration Number 1 2 3 4 5 6 7 8 9 10

x-value 3.0000000000 1.8536075595 1.2802790276 0.9937807510 0.8503033752 0.7789970157 0.7423971596 0.7264747366 0.7115677773 0.7260458319

y-value 3.0000000000 1.8534992168 1.2803811470 0.9936561265 0.8505218683 0.7785224690 0.7434698006 0.7235683325 0.7205826742 0.6973591138

Table 4.2: 10 iterations with the starting point x1 = (3, 3)

Figure 4.2: Geometric illustration of Algorithm 4.3.1 with the starting point x1 = (3, 3)

4.4

A von Neumann alternating method for finding common solutions to variational inequalities

In this section we present several more algorithms for solving the CSVIP. First we concentrate on the two-sets CSVIP. The algorithm presented stems from the classical von Neumann alternating projections algorithm [125]. The results presented next are taken from [53]. For simplicity, we mainly confine our attention to the following two-set CSVIP, i.e., Problem 4.3.1 with N = 2.

Algorithms for Solving Monotone Variational Inequalities and Applications

109

Problem 4.4.1. Let H be a real Hilbert space, and let C and Q be two nonempty closed and convex subsets of H. Given two operators f and g from H into itself, the two-set CSVIP is to find a point x∗ ∈ C ∩ Q such that hf (x∗ ), x − x∗ i ≥ 0 for all x ∈ C and ∗ ∗ hg(x ), y − x i ≥ 0 for all y ∈ Q.

(4.4.1) (4.4.2)

If we denote by Sol(f, C) and Sol(Q, g) the solution sets of (4.4.1) and (4.4.2), respectively, then Problem 4.4.1 is to find a point x∗ ∈ Sol(f, C) ∩ Sol(Q, g). Our alternating method for solving the two-set CSVIP is inspired by von Neumann’s original alternating projections method. Von Neumann [125] presented a method for calculating the orthogonal projection onto the intersection of two closed subspaces in Hilbert space. Let H be a real Hilbert space, and let  kV 1∞and V2 be closed subspaces. Choose x ∈ H k ∞ and construct the sequences a k=0 and b k=0 by  0 b = x, (4.4.3) ak = PV1 (bk−1 ) and bk = PV2 (ak ), k = 1, 2, . . . , where PV1 and PV2 denote the orthogonal projection operators of H onto V1 and V2 , rek ∞ spectively. Von Neumann showed [125, Lemma 22, page 475] that both sequences a k=0  ∞ and bk k=0 converge strongly to PV1 ∩V2 (x). This algorithm is known as von Neumann’s alternating projections method. Observe that not only the sequences converge strongly, but also that their common limit is the nearest point to x in V1 ∩ V2 . For recent elementary geometric proofs of von Neumann’s result, see [107, 108]. In 1965 Bregman [24] established the weak convergence of the sequence of alternating nearest point mappings between two closed and convex intersecting subsets of a Hilbert space. See also [9, 29]. In 2005 Bauschke, Combettes and Reich [16] studied the alternating resolvents method for finding a common zero of two maximal monotone mappings (see also the recent paper of Boikanyo and Moro¸sanu [22]). We propose an alternating method which employes two averaged operators in the sense of [9]. In this connection, we note that not all averaged operators are resolvents (of monotone mappings). Next we present our alternating method for solving the two-set CSVIP and then extend it to the general CSVIP.

4.4.1

The Algorithm

Now we introduce our modified von Neumann alternating method for solving the two-set CSVIP (4.4.1) and (4.4.2). Let Γ := Γ(C, Q, f, g) := Sol(f, C)∩ Sol(g, Q). The following conditions are needed for our convergence theorem. Condition 4.4.1. The operators f : H → H and g : H → H are α1 -ISM and α2 -ISM, respectively. Condition 4.4.2. λ ∈ (0, 2α), where α := min{α1 , α2 }. Condition 4.4.3. Γ 6= ∅.

110

The split variational inequality problem and related problems

Algorithm 4.4.1. Step 0: Select an arbitrary starting point x0 ∈ H. Step 1: Given the current iterate xk , compute   y k = (PQ (I − λg)) xk and xk+1 = (PC (I − λf )) y k . Note that (4.4.4) is actually an alternating method, that is,  xk+1 = (PC (I − λf )) (PQ (I − λg)) xk   . = PC PQ xk − λg xk − λf PQ xk − λg xk

(4.4.4)

(4.4.5)

An illustration of the iterative step of Algorithm 4.4.1 is presented in Figure 4.3.

Figure 4.3: Illustration of the iterative step of Algorithm 4.4.1.

Theorem 4.4.1. Let H be a real Hilbert space, and let C, Q be two nonempty closed  and ∞ convex subsets of H. Assume that Conditions 4.4.1–4.4.3 hold. Then any sequence xk k=0 generated by Algorithm 4.4.1 converges weakly to a point x∗ ∈ Γ, and furthermore, x∗ = lim PΓ (xk ). k→∞

(4.4.6)

Proof. Let λ ∈ (0, 2α). By Lemma 2.1.1, the operators PC (I − λf ) and PQ (I − λg) are averaged and so is their composition (PC (I − λf )) (PQ (I − λg)) (Remark 2.0.14(4)). Since Γ Krasnosel’ski˘ı-Mann-Opial theorem (Theorem 2.0.42) guarantees that any sequence  6= ∅, ∞ xk k=0 generated by Algorithm 4.4.1 converges weakly to a point x∗ which belongs to

Algorithms for Solving Monotone Variational Inequalities and Applications

111

the fixed point set of the operator (PC (I − λf )) (PQ (I − λg)) . Combining the assumption Γ 6= ∅ with Lemma 2.0.25, we obtain Fix(PC (I − λf )) ∩ Fix(PQ (I − λg)) = Fix ((PC (I − λf )) (PQ (I − λg))) = Fix ((PQ (I − λg)) (PC (I − λf ))) ,

(4.4.7)

which means that x∗ ∈ Fix(PC (I − λf ))) and x∗ ∈ Fix(PQ (I − λg)), and therefore by (2.1.4) x∗ ∈ Γ. Finally, let z ∈ Γ, i.e., z ∈ Sol(f, C) ∩ Sol(Q, g). Then PQ (z − λg(z)) = PC (z − λf (z)) = z. Since the operators PC (I − λf ) and PQ (I − λg) are averaged, they are also nonexpansive. Thus

k+1

2

2

x − z = (PC (I − λf )) (PQ (I − λg))(xk ) − z

2 = (PC (I − λf )) (PQ (I − λg))(xk ) − PC (I − λf )(z)

2 ≤ PQ (xk − λg(xk )) − z

2 = PQ (xk − λg(xk )) − PQ (z − λg(z))

2 ≤ xk − z . (4.4.8) So

k+1

2

2

x − z ≤ xk − z , (4.4.9)  k ∞ which means that the sequence x k=0 is Fej´er-monotone with respect to Γ. Now, put uk = PΓ (xk ).

(4.4.10)

Since the operators PC (I − λf ) and PQ (I − λg) are nonexpansive, it follows from (2.1.4) that the sets Sol(f, C) and Sol(g, Q) are nonempty, closed and convex (see [84, Proposition 5.3, page 25]). In addition, since Γ 6= ∅, each uk is well defined. So, applying (2.0.28) with the set C there as Γ and x = xk , we get

  y − PΓ xk , xk − PΓ xk ≤ 0 for all k ≥ 0 and y ∈ Γ. (4.4.11) Taking, in particular, y = x∗ ∈ Γ, we obtain

∗ x − uk , xk − uk ≤ 0.  ∞ By Lemma 2.0.15, uk k=0 converges strongly to some u∗ ∈ Γ. Therefore hx∗ − u∗ , x∗ − u∗ i ≤ 0

(4.4.12)

(4.4.13)

and hence u∗ = x∗ , as asserted. Remark 4.4.2.

 ∞ 1. The sequence y k k=0 also converges weakly to x∗ ∈ Γ.

2. Under the additional assumptions that C and Q aresymmetric, and f and g are odd, k ∞ we get from [9, Corollary 2.1] that any sequence x k=0 , generated by Algorithm 4.4.1, converges strongly to a point x∗ ∈ Γ. 3. Strong convergence also occurs when either C or Q is compact.

112

The split variational inequality problem and related problems

4. According to [9, Corollary 2.2], if Γ = ∅, then limk→∞ xk = ∞. 5. When C and Q are closed subspaces and f = g = 0 in the two-set CSVIP (4.4.1) and (4.4.2), we get von Neumann’s original problem and then Algorithm 4.4.1 is the classical alternating projections method (4.4.3).

Following [135] and [73], we now present two more algorithms for solving the two-set CSVIP (4.4.1) and (4.4.2). Let H be a real Hilbert space, and let C and Q be two nonempty, closed and convex subsets of H. Recall the following two lemmata [135, Lemmata 1.3 and 1.4]. Lemma 4.4.3. A convex combination of strongly nonexpansive operators is also strongly nonexpansive. Lemma 4.4.4. Let h be a convex combination of the strongly nonexpansive mappings {hk | 1 ≤ k ≤ m}. If the set ∩ {Fix (hk ) | 1 ≤ k ≤ m} is not empty, then it equals Fix (h). Now we can propose the following parallel algorithm. Algorithm 4.4.2. Step 0: Select an arbitrary starting point x0 ∈ H, and let the numbers w1 and w2 be such that w1 , w2 > 0 and w1 + w2 = 1. Step 1: Given the current iterate xk , compute   xk+1 = w1 PC xk − λf xk + w2 PQ xk − λg xk . (4.4.14) Theorem 4.4.5. Let H be a real Hilbert space, and let C and Q be two nonempty, closed and convex subsets of H. Assume that Conditions 4.4.1–4.4.3 hold. Then any sequence  ∞ xk k=0 generated by Algorithm 4.4.2 converges weakly to a point x∗ ∈ Γ, and furthermore, x∗ = lim PΓ (xk ). k→∞

(4.4.15)

Proof. By Lemma 2.1.1, the operators PC (I − λf ) and PQ (I − λg) are averaged, hence strongly nonexpansive (see Remark 2.0.14(5)). According to Lemma 4.4.3, any convex combination  k ∞ of strongly nonexpansive mappings is also strongly nonexpansive. So the sequence x k=0 generated by Algorithm 4.4.2 is, in fact, an iteration of a strongly nonexpansive operator and therefore the desired result is obtained by [29] and Lemma 4.4.4. Remark 4.4.6. The convergence obtained in Theorem 4.4.5 is not strong in general [17]. Finally, we recall the following theorem [73, , Theorem 1]. Theorem 4.4.7. Let h1 : H → H and h2 : H → H be two nonexpansive operators which satisfy Condition (W), the fixed point sets of which have a nonempty intersection. Then any unrestricted product from T1 and T2 converges weakly to a common fixed point. Since every averaged operator is strongly nonexpansive and therefore satisfies Condition (W), we can apply the above theorem to obtain an algorithm for solving the two-set CSVIP by using any unrestricted product from PC (I − λf ) and PQ (I − λg). Any such unrestricted product converges weakly to a point in Γ.

Algorithms for Solving Monotone Variational Inequalities and Applications

4.4.2

113

The general CSVIP

In this subsection we extend our algorithm to two methods for solving the general CSVIP. Let H be a real Hilbert space. Let there be given, for each i = 1, 2, . .T . , N , an operator fi : H → H and a nonempty, closed and convex subset Ci ⊂ H, with N i=1 Ci 6= ∅. The T CSVIP is to find a point x∗ ∈ N C such that, for each i = 1, 2, . . . , N, i=1 i hfi (x∗ ), x − x∗ i ≥ 0 for all x ∈ Ci , i = 1, 2, . . . , N. T Denote Ψ :=SOL= N i=1 SOL(fi , Ci ).

(4.4.16)

Algorithm 4.4.3. Step 0: Select an arbitrary starting point x0 ∈ H. Step 1: Given the current iterate xk , compute the product x

k+1

=

N Y

(PCi (I − λfi )) (xk ).

(4.4.17)

i=1

Theorem 4.4.8. Let H be a real Hilbert space. For each i = 1, 2, . . . , N , let an operator fi : H → H and a nonempty, closed and convex subset Ci ⊂ H be given. Assume that T N 2, . . . , N, fi is αi -ISM. Set α := mini {αi } and take i=1 Ci 6= ∅, Ψ 6= ∅ and that fori = 1, k ∞ λ ∈ (0, 2α). Then any sequence x k=0 generated by Algorithm 4.4.3 converges weakly to a point x∗ ∈ Ψ, and furthermore, x∗ = lim PΨ (xk ). k→∞

(4.4.18)

Algorithm 4.4.4. Step 0: Select an arbitrary starting point x0 ∈ H and a positive finite sequence {wi }N i=1 N P such that wi = 1. i=1

Step 1: Given the current iterate xk , compute xk+1 =

N X

wi (PCi (I − λfi )) (xk ).

(4.4.19)

i=1

Theorem 4.4.9. Let H be a real Hilbert space. For each i = 1, 2, . . . , N , let an operator fTi : H → H and a nonempty, closed and convex subset Ci ⊂ H be given. Assume that N 1, 2, . . . , N, fi is αi -ISM. Set α := mini {αi } and take i=1 Ci 6= ∅, Ψ 6= ∅, and that fori = ∞ k λ ∈ (0, 2α). Then any sequence x k=0 generated by Algorithm 4.4.4 converges weakly to a point x∗ ∈ Ψ, and furthermore, x∗ = lim PΨ (xk ). k→∞

(4.4.20)

The proofs of Theorem 4.4.8 and 4.4.9 are analogous to those of Theorems 4.4.1 and 4.4.5, respectively, and therefore are omitted. Remark 4.4.10. Combettes [63] and Bauschke and Combettes [14] present several more general algorithms and convergence theorems which allow for computational errors and varying parameters. These can also be used to solve the CSVIP with any finite number of operators.

114

4.4.3

The split variational inequality problem and related problems

Applications

Since the CSVIP is a special case of the SCNPP, the applications presented there apply to the CSVIP as well. Here we present two more interesting applications of the CSVIP.

1. The Hierarchical Variational Inequality Problem

Next we present another variant of the CSVIP, namely, the Hierarchical Variational Inequality Problem (HVIP). Let $H$ be a real Hilbert space.

1. Let $C$ be a nonempty, closed and convex subset of $H$ and let $U : C \to C$ and $V : C \to C$ be two nonexpansive operators. Consider the operator $h := I - V$. Xu [159] studied the problem of finding a point $x^* \in \mathrm{Fix}(U)$ such that
\[
\langle h(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in \mathrm{Fix}(U). \tag{4.4.21}
\]

2. Yao and Liou [170] considered the following Hierarchical Variational Inequality Problem (HVIP). Let $C \subseteq H$ be a nonempty, closed and convex set. Given the operators $U : C \to H$ and $V : C \to H$, set $h := I - V$. Then the HVIP is to find a point $x^* \in \mathrm{SOL}(U, C)$ such that
\[
\langle h(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in \mathrm{SOL}(U, C). \tag{4.4.22}
\]

Since it is well known that $x^* \in \mathrm{SOL}(U, C)$ if and only if $P_C(x^* - \lambda U(x^*)) = x^*$ for all $\lambda > 0$, this problem is essentially a special case of Xu's problem. Both problems can be formulated as a CSVIP in the following way. Find a point $x^* \in H$ such that
\[
\langle (I - U)(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in H \tag{4.4.23}
\]
and
\[
\langle h(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in \mathrm{Fix}(U). \tag{4.4.24}
\]
This is a two-set CSVIP with the operators $f_1 = I - U$ and $f_2 = h$, and the sets $C_1 = H$ and $C_2 = \mathrm{Fix}(U)$. Observe that in Section 3.6, following Yamada [160] and Yamada and Ogura [161, 163], we consider the VIP$(f, \mathrm{Fix}(h))$, which is related to the above problems. Recently, hierarchical fixed point problems and hierarchical minimization problems have attracted attention because of their connections with some convex programming problems. See, e.g., [121, 122, 158, 169] and the references therein.

2. Variational Inequality Problem over the intersection of convex sets

Let $H$ be a real Hilbert space. Given $N$ nonempty, closed and convex subsets $C_i \subseteq H$, $i = 1, 2, \ldots, N$, with $\bigcap_{i=1}^{N} C_i \neq \emptyset$, we consider the CSVIP (4.3.1) with $f_i \equiv f$ for each $i = 1, 2, \ldots, N$. We obtain a single variational inequality problem over a nonempty intersection of $N$ nonempty, closed and convex subsets. More precisely, we have to find a point $x^* \in \bigcap_{i=1}^{N} C_i$ such that
\[
\langle f(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in C_i, \quad i = 1, 2, \ldots, N, \tag{4.4.25}
\]


and, in particular,
\[
\langle f(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in \bigcap_{i=1}^{N} C_i. \tag{4.4.26}
\]

This problem is closely related to the work of Yamada [160], who considered a variational inequality problem over the intersection of the fixed point sets of nonexpansive mappings, i.e., $C_i = \mathrm{Fix}(T_i)$ for $i = 1, 2, \ldots, N$.

Remark 4.4.11. Surprisingly, problem (4.4.25) consists of finding a point $x^* \in \bigcap_{i=1}^{N} C_i$ such that
\[
\langle f(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in \bigcup_{i=1}^{N} C_i. \tag{4.4.27}
\]
So, although in general the set $\bigcup_{i=1}^{N} C_i$ is not guaranteed to be convex, its special structure enables us to apply our projection algorithms for solving (4.4.26).
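The following toy check (with illustrative data throughout, not taken from the text) mirrors this remark: we run Algorithm 4.4.3 with a common operator $f$ and then verify numerically that the limit satisfies inequality (4.4.27) at sampled points of the nonconvex union.

```python
# Numerical check of Remark 4.4.11 under assumed data: the limit for
# f_i = f should satisfy the VIP over the union of the C_i.
import numpy as np

rng = np.random.default_rng(1)

def proj_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

A = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
B = [1.0, 1.0]                              # C_i = {x : <A[i], x> <= B[i]}
z = np.array([0.2, 0.3])                    # zero of f, inside both sets
f = lambda x: x - z                         # 1-ISM
lam = 0.5

x = np.array([4.0, 4.0])
for _ in range(100):                        # Algorithm 4.4.3 with f_i = f
    for a_i, b_i in zip(A, B):
        x = proj_halfspace(x - lam * f(x), a_i, b_i)

ok = True                                   # test (4.4.27) on the union
for _ in range(1000):
    i = rng.integers(2)
    y = rng.uniform(-5, 5, size=2)
    y = proj_halfspace(y, A[i], B[i])       # push y into C_i, hence the union
    ok &= f(x) @ (y - x) >= -1e-9
print(x, ok)                                # x ~ z and the inequality holds
```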


References

[1] R. Aharoni, A. Berman and Y. Censor, An interior points algorithm for the convex feasibility problem, Advances in Applied Mathematics 4 (1983), 479–489.
[2] Q. H. Ansari and J. C. Yao, A fixed point theorem and its applications to a system of variational inequalities, Bulletin of the Australian Mathematical Society 59 (1999), 433–442.
[3] L. Armijo, Minimization of functions having continuous partial derivatives, Pacific Journal of Mathematics 16 (1966), 1–3.
[4] H. Attouch, Variational Convergence for Functions and Operators, Pitman, London, 1984.
[5] A. Auslender, Optimisation: Méthodes Numériques, Masson, Paris, 1976.
[6] A. Auslender and M. Teboulle, Lagrangian duality and related multiplier methods for variational inequalities, SIAM Journal on Optimization 10 (2000), 1097–1115.
[7] A. Auslender and M. Teboulle, Asymptotic Cones and Functions in Optimization and Variational Inequalities, Springer Monographs in Mathematics, 2003.
[8] A. Auslender and M. Teboulle, Interior projection-like methods for monotone variational inequalities, Mathematical Programming, Series A, 104 (2005), 39–68.
[9] J.-B. Baillon, R. E. Bruck and S. Reich, On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces, Houston Journal of Mathematics 4 (1978), 1–9.
[10] T. Q. Bao and P. Q. Khanh, A projection-type algorithm for pseudo-monotone non-Lipschitzian multivalued variational inequalities, Nonconvex Optimization and Its Applications 77 (2005), 113–129.

[11] H. H. Bauschke and J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Review 38 (1996), 367–426.
[12] H. H. Bauschke and P. L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces, Mathematics of Operations Research 26 (2001), 248–264.
[13] H. H. Bauschke and P. L. Combettes, Construction of best Bregman approximations in reflexive Banach spaces, Proceedings of the American Mathematical Society 131 (2003), 3757–3766.
[14] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, New York, 2011.
[15] H. H. Bauschke, P. L. Combettes and D. R. Luke, A strongly convergent reflection method for finding the projection onto the intersection of two closed convex sets in a Hilbert space, Journal of Approximation Theory 141 (2006), 63–69.
[16] H. H. Bauschke, P. L. Combettes and S. Reich, The asymptotic behavior of the composition of two resolvents, Nonlinear Analysis 60 (2005), 283–301.
[17] H. H. Bauschke, E. Matoušková and S. Reich, Projection and proximal point methods: convergence results and counterexamples, Nonlinear Analysis 56 (2004), 715–738.
[18] J. Y. Bello Cruz and A. N. Iusem, A strongly convergent direct method for monotone variational inequalities in Hilbert spaces, Numerical Functional Analysis and Optimization 30 (2009), 23–36.
[19] J. Y. Bello Cruz and A. N. Iusem, Convergence of direct methods for paramonotone variational inequalities, Computational Optimization and Applications 46 (2010), 247–263.
[20] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Prentice-Hall International, Englewood Cliffs, NJ, USA, 1989.
[21] E. Blum and W. Oettli, From optimization and variational inequalities to equilibrium problems, The Mathematics Student 63 (1994), 123–145.

[22] O. A. Boikanyo and G. Moroşanu, On the method of alternating resolvents, Nonlinear Analysis 74 (2011), 5147–5160.
[23] J. M. Borwein and A. S. Lewis, Convex Analysis and Nonlinear Optimization: Theory and Examples, Springer-Verlag, New York, 2000.
[24] L. Bregman, The method of successive projection for finding a common point of convex sets, Soviet Mathematics Doklady 6 (1965), 688–692.
[25] L. M. Bregman, Y. Censor, S. Reich and Y. Zepkowitz-Malachi, Finding the projection of a point onto the intersection of convex sets via projections onto halfspaces, Journal of Approximation Theory 124 (2003), 194–218.
[26] F. E. Browder, Fixed point theorems for noncompact mappings in Hilbert space, Proceedings of the National Academy of Sciences USA 53 (1965), 1272–1276.
[27] F. E. Browder, Convergence theorems for sequences of nonlinear operators in Banach spaces, Mathematische Zeitschrift 100 (1967), 201–225.
[28] F. E. Browder and W. V. Petryshyn, The solution by iteration of nonlinear functional equations in Banach spaces, Bulletin of the American Mathematical Society 72 (1966), 571–575.
[29] R. E. Bruck and S. Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houston Journal of Mathematics 3 (1977), 459–470.
[30] R. S. Burachik and A. N. Iusem, Set-Valued Mappings and Enlargements of Monotone Operators, Springer, Berlin, 2008.
[31] R. S. Burachik, J. O. Lopes and B. F. Svaiter, An outer approximation method for the variational inequality problem, SIAM Journal on Control and Optimization 43 (2005), 2071–2088.
[32] C. L. Byrne, Iterative projection onto convex sets using multiple Bregman distances, Inverse Problems 15 (1999), 1295–1313.
[33] C. L. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Problems 18 (2002), 441–453.

[34] C. L. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems 20 (2004), 103–120.
[35] C. Byrne, Y. Censor, A. Gibali and S. Reich, The split common null point problem, Journal of Nonlinear and Convex Analysis, accepted for publication.
[36] A. Cegielski, Generalized relaxations of nonexpansive operators and convex feasibility problems, Contemporary Mathematics 513 (2010), 111–123.
[37] A. Cegielski, Iterative Methods for Fixed Point Problems in Hilbert Spaces, Lecture Notes in Mathematics 2057, Springer, Heidelberg, 2012.
[38] A. Cegielski and Y. Censor, Opial-type theorems and the common fixed point problem, in: H. Bauschke, R. Burachik, P. Combettes, V. Elser, R. Luke and H. Wolkowicz (Editors), Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer-Verlag, New York, NY, USA, 2011, 155–183.
[39] L. C. Ceng, M. Teboulle and J. C. Yao, Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed-point problems, Journal of Optimization Theory and Applications 146 (2010), 19–31.
[40] Y. Censor, Computational acceleration of projection algorithms for the linear best approximation problem, Linear Algebra and Its Applications 416 (2006), 111–123.
[41] Y. Censor, M. D. Altschuler and W. D. Powlis, On the use of Cimmino's simultaneous projections method for computing a solution of the inverse problem in radiation therapy treatment planning, Inverse Problems 4 (1988), 607–623.
[42] Y. Censor, T. Bortfeld, B. Martin and A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Physics in Medicine and Biology 51 (2006), 2353–2365.
[43] Y. Censor, W. Chen, P. L. Combettes, R. Davidi and G. T. Herman, On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints, Computational Optimization and Applications 51 (2012), 1065–1088.
[44] Y. Censor, R. Davidi and G. T. Herman, Perturbation resilience and superiorization of iterative algorithms, Inverse Problems 26 (2010), Article ID 065008.

[45] Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in product space, Numerical Algorithms 8 (1994), 221–239.
[46] Y. Censor, T. Elfving, N. Kopf and T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Problems 21 (2005), 2071–2084.
[47] Y. Censor and A. Gibali, Projections onto super-half-spaces for monotone variational inequality problems in finite-dimensional spaces, Journal of Nonlinear and Convex Analysis 9 (2008), 461–475.
[48] Y. Censor, A. Gibali and S. Reich, Extensions of Korpelevich's extragradient method for solving the variational inequality problem in Euclidean space, Optimization, accepted for publication.
[49] Y. Censor, A. Gibali and S. Reich, Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space, Optimization Methods and Software 26 (2011), 827–845.
[50] Y. Censor, A. Gibali and S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, Journal of Optimization Theory and Applications 148 (2011), 318–335.
[51] Y. Censor, A. Gibali and S. Reich, Algorithms for the split variational inequality problem, Numerical Algorithms 59 (2012), 301–323.
[52] Y. Censor, A. Gibali, S. Reich and S. Sabach, Common solutions to variational inequalities, Set-Valued and Variational Analysis 20 (2012), 229–247.
[53] Y. Censor, A. Gibali and S. Reich, A von Neumann alternating method for finding common solutions to variational inequalities, Nonlinear Analysis 75 (2012), 4596–4603.
[54] Y. Censor, A. N. Iusem and S. A. Zenios, An interior point method with Bregman functions for the variational inequality problem with paramonotone operators, Mathematical Programming 81 (1998), 373–400.

[55] Y. Censor and A. Segal, On the string averaging method for sparse common fixed points problems, International Transactions in Operational Research 16 (2009), 481–494.
[56] Y. Censor and A. Segal, The split common fixed point problem for directed operators, Journal of Convex Analysis 16 (2009), 587–600.
[57] Y. Censor and A. Segal, On the string averaging method for sparse common fixed point problems, International Transactions in Operational Research 16 (2009), 481–494.
[58] Y. Censor and A. Segal, On string-averaging for sparse problems and on the split common fixed point problem, Contemporary Mathematics 513 (2010), 125–142.
[59] Y. Censor and S. A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications, Oxford University Press, New York, NY, USA, 1997.
[60] J. A. Clarkson, Uniformly convex spaces, Transactions of the American Mathematical Society 40 (1936), 396–414.
[61] P. L. Combettes, Strong convergence of block-iterative outer approximation methods for convex optimization, SIAM Journal on Control and Optimization 38 (2000), 538–565.
[62] P. L. Combettes, Quasi-Fejérian analysis of some optimization algorithms, in: D. Butnariu, Y. Censor and S. Reich (Editors), Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Elsevier Science Publishers, Amsterdam, The Netherlands, 2001, 115–152.
[63] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization 53 (2004), 475–504.
[64] P. L. Combettes and P. Bondon, Hard-constrained inconsistent signal feasibility problem, IEEE Transactions on Signal Processing 47 (1999), 2460–2468.
[65] P. L. Combettes and S. A. Hirstoaga, Equilibrium programming in Hilbert spaces, Journal of Nonlinear and Convex Analysis 6 (2005), 117–136.

[66] P. L. Combettes and J.-C. Pesquet, Proximal splitting methods in signal processing, in: H. Bauschke, R. Burachik, P. Combettes, V. Elser, R. Luke and H. Wolkowicz (Editors), Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer-Verlag, New York, NY, USA, 2011, 185–212.
[67] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Modeling and Simulation 4 (2005), 1168–1200.
[68] G. Crombez, A geometrical look at iterative methods for operators with fixed points, Numerical Functional Analysis and Optimization 26 (2005), 157–175.
[69] G. Crombez, A hierarchical presentation of operators with fixed points on Hilbert spaces, Numerical Functional Analysis and Optimization 27 (2006), 259–277.
[70] S. Dafermos, Traffic equilibrium and variational inequalities, Transportation Science 14 (1980), 42–54.
[71] Y. Dang and Y. Gao, The strong convergence of a KM–CQ-like algorithm for a split feasibility problem, Inverse Problems 27 (2011), Article ID 015007.
[72] J. C. Dunn, Convexity, monotonicity, and gradient processes in Hilbert space, Journal of Mathematical Analysis and Applications 53 (1976), 145–158.
[73] J. M. Dye and S. Reich, Unrestricted iterations of nonexpansive mappings in Hilbert space, Nonlinear Analysis 18 (1992), 199–207.
[74] B. C. Eaves, On the basic theorem of complementarity, Mathematical Programming 1 (1971), 68–75.
[75] J. Eckstein, Splitting Methods for Monotone Operators with Applications to Parallel Optimization, Doctoral dissertation, Department of Civil Engineering, Massachusetts Institute of Technology; available as report LIDS-TH-1877, Laboratory for Information and Decision Sciences, MIT, 1989.
[76] I. Ekeland and R. Témam, Convex Analysis and Variational Problems, North-Holland, Amsterdam, 1976.

[77] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume I and Volume II, Springer-Verlag, New York, NY, USA, 2003.
[78] R. Fletcher, Practical Methods of Optimization, John Wiley, Chichester, 1987.
[79] M. Fukushima, On the convergence of a class of outer approximation algorithms for convex programs, Journal of Computational and Applied Mathematics 10 (1984), 147–156.
[80] M. Fukushima, A relaxed projection method for variational inequalities, Mathematical Programming 35 (1986), 58–70.
[81] A. Genel and J. Lindenstrauss, An example concerning fixed points, Israel Journal of Mathematics 22 (1975), 81–85.
[82] A. Gibali, Investigation of Iterative Algorithms for Solving the Variational Inequality Problem, M.Sc. Thesis, University of Haifa, Haifa, Israel, November 2007.
[83] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press, Cambridge, 1990.
[84] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, New York and Basel, 1984.
[85] A. A. Goldstein, Convex programming in Hilbert space, Bulletin of the American Mathematical Society 70 (1964), 709–710.
[86] E. G. Golshtein and N. V. Tretyakov, Modified Lagrangians and Monotone Maps in Optimization, John Wiley & Sons, Inc., New York, NY, USA, 1996.
[87] B. Halpern, Fixed points of nonexpanding maps, Bulletin of the American Mathematical Society 73 (1967), 957–961.
[88] P. Hartman and G. Stampacchia, On some non-linear elliptic differential-functional equations, Acta Mathematica 115 (1966), 271–310.
[89] Y. Haugazeau, Sur les Inéquations Variationnelles et la Minimisation de Fonctionnelles Convexes, Thèse, Université de Paris, Paris, France, 1968.

[90] S. He, C. Yang and P. Duan, Realization of the hybrid method for Mann iteration, Applied Mathematics and Computation 217 (2010), 4239–4247.
[91] J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms, Springer-Verlag, Berlin, Heidelberg, 1993.
[92] J.-B. Hiriart-Urruty and C. Lemaréchal, Fundamentals of Convex Analysis, Springer-Verlag, Berlin, Heidelberg, Germany, 2001.
[93] A. N. Iusem and L. R. Lucambio Pérez, An extragradient-type method for non-smooth variational inequalities, Optimization 48 (2000), 309–332.
[94] A. N. Iusem and W. Sosa, Iterative algorithms for equilibrium problems, Optimization 52 (2003), 301–316.
[95] A. N. Iusem and B. F. Svaiter, A variant of Korpelevich's method for variational inequalities with a new search strategy, Optimization 42 (1997), 309–321.
[96] H. Iiduka and W. Takahashi, Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings, Nonlinear Analysis 61 (2005), 314–350.
[97] H. Iiduka, W. Takahashi and M. Toyoda, Approximation of solutions of variational inequalities for monotone mappings, PanAmerican Mathematical Journal 14 (2004), 49–61.
[98] S. Ishikawa, Fixed points and iteration of a nonexpansive mapping in a Banach space, Proceedings of the American Mathematical Society 59 (1976), 65–71.
[99] M. M. Israel, Jr. and S. Reich, Extension and selection problems for nonlinear semigroups in Banach spaces, Mathematica Japonica 28 (1983), 1–8.
[100] G. Kassay and J. Kolumbán, System of multi-valued variational inequalities, Publicationes Mathematicae Debrecen 56 (2000), 185–195.
[101] E. N. Khobotov, Modification of the extra-gradient method for solving variational inequalities and certain optimization problems, USSR Computational Mathematics and Mathematical Physics 27 (1989), 120–127.

[102] M. D. Kirszbraun, Über die zusammenziehende und Lipschitzsche Transformationen, Fundamenta Mathematicae 22 (1934), 77–108.
[103] I. V. Konnov, On systems of variational inequalities, Russian Mathematics 41 (1997), 79–88.
[104] I. V. Konnov, Combined Relaxation Methods for Variational Inequalities, Springer-Verlag, Berlin, Heidelberg, 2001.
[105] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekonomika i Matematicheskie Metody 12 (1976), 747–756.
[106] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, NY, USA, 1980.
[107] E. Kopecká and S. Reich, A note on the von Neumann alternating projections algorithm, Journal of Nonlinear and Convex Analysis 5 (2004), 379–386.
[108] E. Kopecká and S. Reich, Another note on the von Neumann alternating projections algorithm, Journal of Nonlinear and Convex Analysis 11 (2010), 455–460.
[109] M. A. Krasnosel'skiĭ, Two remarks on the method of successive approximations (in Russian), Uspekhi Matematicheskikh Nauk 10 (1955), 123–127.
[110] E. S. Levitin and B. T. Polyak, Constrained minimization problems, USSR Computational Mathematics and Mathematical Physics 6 (1966), 1–50.
[111] P.-L. Lions, Approximation de points fixes de contractions, Comptes Rendus de l'Académie des Sciences 284 (1977), 1357–1359.
[112] G. López, V. Martín-Márquez and H.-K. Xu, Iterative algorithms for the multiple-sets split feasibility problem, in: Y. Censor, M. Jiang and G. Wang (Editors), Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Medical Physics Publishing, Madison, WI, USA, 2010, 243–279.
[113] W. R. Mann, Mean value methods in iteration, Proceedings of the American Mathematical Society 4 (1953), 506–510.

[114] M. Aït Mansour, Z. Chbani and H. Riahi, Recession bifunction and solvability of noncoercive equilibrium problems, Communications in Applied Analysis 7 (2003), 369–377.
[115] P. Marcotte, Application of Khobotov's algorithm to variational inequalities and network equilibrium problems, INFORM 29 (1991), 258–270.
[116] Ş. Măruşter, Quasi-nonexpansivity and the convex feasibility problem, Analele Ştiinţifice ale Universităţii "Alexandru Ioan Cuza" din Iaşi. Informatică 15 (2005), 47–56.
[117] Ş. Măruşter and C. Popirlan, On the Mann-type iteration and the convex feasibility problem, Journal of Computational and Applied Mathematics 212 (2008), 390–396.
[118] E. Masad and S. Reich, A note on the multiple-set split convex feasibility problem in Hilbert space, Journal of Nonlinear and Convex Analysis 8 (2007), 367–371.
[119] A. Moudafi, The split common fixed-point problem for demicontractive mappings, Inverse Problems 26 (2010), 1–6.
[120] A. Moudafi, Split monotone variational inclusions, Journal of Optimization Theory and Applications 150 (2011), 275–283.
[121] A. Moudafi and P. E. Maingé, Towards viscosity approximations of hierarchical fixed-point problems, Fixed Point Theory and Applications 2006 (2006), Article ID 95453.
[122] A. Moudafi and P. E. Maingé, Strong convergence of an iterative method for hierarchical fixed-point problems, Pacific Journal of Optimization 3 (2007), 529–538.
[123] N. Nadezhkina and W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings, Journal of Optimization Theory and Applications 128 (2006), 191–201.
[124] N. Nadezhkina and W. Takahashi, Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings, SIAM Journal on Optimization 16 (2006), 1230–1241.
[125] J. von Neumann, On rings of operators. Reduction theory, Annals of Mathematics 50 (1949), 401–485.

[126] M. A. Noor, Some algorithms for general monotone mixed variational inequalities, Mathematical and Computer Modelling 29 (1999), 1–9.
[127] M. A. Noor, Some developments in general variational inequalities, Applied Mathematics and Computation 152 (2004), 197–277.
[128] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bulletin of the American Mathematical Society 73 (1967), 591–597.
[129] J. S. Pang, Asymmetric variational inequality problems over product sets: applications and iterative methods, Mathematical Programming 31 (1985), 206–219.
[130] M. Patriksson, Nonlinear Programming and Variational Inequality Problems: A Unified Approach, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
[131] G. Pierra, Decomposition through formalization in a product space, Mathematical Programming 28 (1984), 96–115.
[132] B. Qu and N. Xiu, A note on the CQ algorithm for the split feasibility problem, Inverse Problems 21 (2005), 1655–1666.
[133] S. Reich, Nonlinear evolution equations and nonlinear ergodic theorems, Nonlinear Analysis 1 (1977), 319–320.
[134] S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, Journal of Mathematical Analysis and Applications 75 (1980), 287–292.
[135] S. Reich, A limit theorem for projections, Linear and Multilinear Algebra 13 (1983), 281–290.
[136] A. Renaud and G. Cohen, Conditioning and regularization of nonsymmetric operators, Journal of Optimization Theory and Applications 92 (1997), 127–148.
[137] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.
[138] R. T. Rockafellar, Monotone operators associated with saddle functions and minimax problems, in: F. E. Browder (Editor), Nonlinear Analysis, Part I, Symposia in Pure Mathematics 18 (1970), 397–407, AMS, Providence.

[139] R. T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Transactions of the American Mathematical Society 149 (1970), 75–88.
[140] P. S. M. Santos and S. Scheimberg, A projection algorithm for general variational inequalities with perturbed constraint set, Applied Mathematics and Computation 181 (2006), 649–661.
[141] F. Schöpfer, T. Schuster and A. K. Louis, An iterative regularization method for the solution of the split feasibility problem in Banach spaces, Inverse Problems 24 (2008), Article ID 055008.
[142] A. Segal, Directed Operators for Common Fixed Point Problems and Convex Programming Problems, Ph.D. Thesis, University of Haifa, September 2008.
[143] H. F. Senter and W. G. Dotson, Jr., Approximating fixed points of nonexpansive mappings, Proceedings of the American Mathematical Society 44 (1974), 375–380.
[144] M. V. Solodov and B. F. Svaiter, A new projection method for variational inequality problems, SIAM Journal on Control and Optimization 37 (1999), 765–776.
[145] M. V. Solodov and B. F. Svaiter, Forcing strong convergence of proximal point iterations in a Hilbert space, Mathematical Programming 87 (2000), 189–202.
[146] C. Sudsukh, Strong convergence theorems for fixed point problems, equilibrium problem and applications, International Journal of Mathematical Analysis 3 (2009), 1867–1880.
[147] T. Suzuki, A sufficient and necessary condition for Halpern-type strong convergence to fixed points of nonexpansive mappings, Proceedings of the American Mathematical Society 135 (2007), 99–106.
[148] W. Takahashi, Y. Takeuchi and R. Kubota, Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces, Journal of Mathematical Analysis and Applications 341 (2008), 276–286.
[149] W. Takahashi and M. Toyoda, Weak convergence theorems for nonexpansive mappings and monotone mappings, Journal of Optimization Theory and Applications 118 (2003), 417–428.

[150] F. Tinti, Numerical solution for pseudomonotone variational inequality problems by extragradient methods, Variational Analysis and Applications 79 (2005), 1101–1128.
[151] P. Tseng, Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming, Mathematical Programming 48 (1990), 249–263.
[152] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM Journal on Control and Optimization 29 (1991), 119–138.
[153] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM Journal on Control and Optimization 38 (2000), 431–446.
[154] R. Wittmann, Approximation of fixed points of nonexpansive mappings, Archiv der Mathematik 58 (1992), 486–491.
[155] N. Xiu and J. Zhang, Some recent advances in projection-type methods for variational inequalities, Journal of Computational and Applied Mathematics 152 (2003), 559–585.
[156] H.-K. Xu, Iterative algorithms for nonlinear operators, Journal of the London Mathematical Society 66 (2002), 240–256.
[157] H.-K. Xu, A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem, Inverse Problems 22 (2006), 2021–2034.
[158] H.-K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Problems 26 (2010), Article ID 105018.
[159] H.-K. Xu, Viscosity method for hierarchical fixed point approach to variational inequalities, Taiwanese Journal of Mathematics 14 (2010), 463–478.
[160] I. Yamada, The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings, in: D. Butnariu, Y. Censor and S. Reich (Editors), Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Elsevier Science Publishers, Amsterdam, The Netherlands, 2001, 473–504.

[161] I. Yamada and N. Ogura, Two generalizations of the projected gradient method for convexly constrained inverse problems: hybrid steepest descent method, adaptive projected subgradient method, Numerical Analysis and New Information Technology 2003 (NANIT'03), Research Institute of Mathematical Sciences, Kyoto University, December 2003 (published in "Kokyuroku" 1362 (2004), 88–94).
[162] I. Yamada and N. Ogura, Adaptive projected subgradient method for asymptotic minimization of sequence of nonnegative convex functions, Numerical Functional Analysis and Optimization 25 (2004), 593–617.
[163] I. Yamada and N. Ogura, Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings, Numerical Functional Analysis and Optimization 25 (2004), 619–655.
[164] I. Yamada, N. Ogura, Y. Yamashita and K. Sakaniwa, An extension of optimal fixed point theorem for nonexpansive operator and its application to set theoretic signal estimation, Technical Report of IEICE DSP96-106 (1996), 63–70.
[165] I. Yamada, N. Ogura, Y. Yamashita and K. Sakaniwa, Quadratic optimization of fixed points of nonexpansive mappings in Hilbert space, Numerical Functional Analysis and Optimization 19 (1998), 165–190.
[166] Q. Yang, The relaxed CQ algorithm solving the split feasibility problem, Inverse Problems 20 (2004), 1261–1266.
[167] Q. Yang, On variable-step relaxed projection algorithm for variational inequalities, Journal of Mathematical Analysis and Applications 302 (2005), 166–179.
[168] Q. Yang and J. Zhao, Generalized KM theorems and their applications, Inverse Problems 22 (2006), 833–844.
[169] Y. Yao and Y. C. Liou, Weak and strong convergence of Krasnoselski–Mann iteration for hierarchical fixed point problems, Inverse Problems 24 (2008), Article ID 015015.
[170] Y. Yao and Y. C. Liou, An implicit extragradient method for hierarchical variational inequalities, Fixed Point Theory and Applications 2011 (2011), Article ID 697248.

[171] D. C. Youla, Generalized image restoration by the method of alternating orthogonal projections, IEEE Transactions on Circuits and Systems 25 (1978), 694–702.
[172] M. Zaknoon, Algorithmic Developments for the Convex Feasibility Problem, Ph.D. Thesis, University of Haifa, Haifa, Israel, April 2003.
[173] W. Zhang, D. Han and Z. Li, A self-adaptive projection method for solving the multiple-sets split feasibility problem, Inverse Problems 25 (2009), Article ID 115001.
[174] Y. Zhao, Z. Xia, L. Pang and L. Zhang, Existence of solutions and algorithm for a system of variational inequalities, Fixed Point Theory and Applications 2010 (2010), Article ID 182539.
[175] J. Zhao and Q. Yang, Several solution methods for the split feasibility problem, Inverse Problems 21 (2005), 1791–1800.

Abstract

In this research we study projection algorithms for solving monotone variational inequalities in an infinite-dimensional Hilbert space $H$. Some of the results are presented in the special case of the finite-dimensional Euclidean space $\mathbb{R}^n$.

Let $H$ be an infinite-dimensional Hilbert space. Given an operator $f : H \to H$ and a nonempty, closed and convex subset $C \subset H$, the Variational Inequality Problem (VIP) is to find a point $x^* \in C$ such that
\[
\langle f(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in C.
\]
This problem, which is fundamental in optimization theory, was first introduced by Hartman and Stampacchia in 1966 for the purpose of solving partial differential equations in mechanics. Since then, over the last five decades, it has been studied extensively, both theoretically and practically. Its importance stems from the fact that a considerable number of fundamental problems in optimization theory can be formulated as appropriate variational inequality problems. For example, suppose that we wish to minimize a convex and continuously differentiable function $f : H \to \mathbb{R}$ over a set $C$; then, by the first-order optimality condition (a first-order Taylor expansion), minimizing $f$ over $C$ is equivalent to solving the variational inequality problem with the gradient operator $\nabla f$ and the set $C$. Similarly, solving the variational inequality problem over the whole space, that is, with $C = H$, is equivalent to an unconstrained minimization problem. Another important example is the solution of systems of equations and the search for zeros of operators: given a (not necessarily linear) operator $f : H \to H$, we seek a root of it, that is, a point $x^* \in H$ such that $f(x^*) = 0$. It is easy to see that this problem is equivalent to solving the variational inequality problem with respect to the whole space $H$. Beyond these examples, there are further problems from many diverse fields which can be cast in the form of a variational inequality problem.

Many of the algorithms developed for solving this problem are iterative projection algorithms. Such algorithms generate a sequence of points $\{x^k\}_{k=0}^{\infty}$ which converges to a solution $x^*$ of the problem. These algorithms, with which this research is concerned, employ operators composed of certain translations and projections. Given a point $x \in H$ and a nonempty, closed and convex set $C \subset H$, the projection of $x$ onto $C$ is the point $y \in C$ closest to $x$, that is,
\[
y = P_C(x) = \operatorname{argmin} \left\{ \|z - x\|^2 : z \in C \right\}.
\]
Projection algorithms are considered efficient when the set onto which one projects is simple, that is, when the projection onto it can be computed easily. Examples of sets onto which it is simple to project are hyperplanes, half-spaces, balls and several other sets. For a general convex set, however, an optimization problem must be solved at each step of the algorithm in order to compute the projection, which can affect the efficiency of the algorithm. Consequently, many different algorithms have been developed in order to overcome, on the one hand, the difficulty of computing the projection onto a general set and, on the other hand, to try to weaken the conditions imposed on the operator $f$ which guarantee the convergence of the algorithm.

One of the classical algorithms for solving constrained minimization problems is the projected gradient method. In view of the connection noted above between the minimization of a convex, continuously differentiable function over a nonempty, closed and convex set $C \subset H$ and the variational inequality problem, an iterative algorithm which computes a projection onto the set $C$ at each step was introduced. The convergence of this algorithm is guaranteed under certain monotonicity conditions on the operator $f$.

In order to overcome the projection barrier, Fukushima, for example, used subgradient projections, which are in fact projections onto half-spaces containing the feasible set $C$ of the variational inequality problem. Recall that the projection onto a half-space is given by a closed formula.
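As a concrete illustration of the closed formula just mentioned, here is a minimal Python sketch (with assumed data) of the projection onto a half-space:

```python
# The projection onto the half-space H = {z : <a, z> <= b} is given by
# P_H(x) = x - max(0, (<a, x> - b)/||a||^2) * a; the vectors below are
# illustrative assumptions only.
import numpy as np

def proj_halfspace(x, a, b):
    return x - max(0.0, (a @ x - b) / (a @ a)) * a

a, b = np.array([3.0, 4.0]), 10.0
x = np.array([6.0, 8.0])                    # <a, x> = 50 > b, so x is outside
p = proj_halfspace(x, a, b)
print(p, a @ p)                             # lands on the boundary: <a, p> = b
```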

In another direction, Korpelevich developed the extragradient method, in which two projections onto the feasible set $C$ must be computed at each step of the algorithm. The advantage of this method lies in the weaker conditions on the operator $f$ which guarantee its convergence.

In view of the above, the first part of this research concentrates on developing new algorithms and improving existing ones for solving the variational inequality problem in an infinite-dimensional Hilbert space and in the special case of the finite-dimensional Euclidean space $\mathbb{R}^n$. Exploiting the ideas behind Fukushima's algorithm and the extragradient algorithm, we present several algorithms in which one of the projections onto the set $C$ is replaced by a projection onto a specific half-space containing the feasible set $C$. In addition, we present an algorithm which employs a sequence of closed and convex sets $\{C_k\}_{k=0}^{\infty}$ that "approximate" the feasible set $C$, the computations of the algorithm being carried out with respect to this sequence of sets. For all these algorithms we provide convergence theorems with complete proofs.

Still in the first part of this research, we extend our discussion in two directions. The first is a discussion of the variational inequality problem with respect to multi-valued mappings $M : H \to 2^H$; in this situation the mapping $M$ assigns to each point $x \in H$ the set $M(x)$. A further extension of the variational inequality problem is based on the observation that every nonempty, closed and convex set $C$ can be represented as the fixed point set of some operator, for example, the fixed point set of the projection operator onto $C$, that is, $C = \mathrm{Fix}(P_C)$. Hence one may in fact consider the general situation in which the set $C$ is given as the fixed point set of some operator. In this situation explicit projection algorithms cannot be used, since the projection onto a set which is not given explicitly cannot be computed. We present an algorithm which uses projections onto half-spaces constructed by means of the given operator.

In the second part of this research we are interested in a general model, which we call the Split Inverse Problem (SIP). This model concerns two infinite-dimensional Hilbert spaces $H_1$ and $H_2$, where in each space there is an inverse problem; the problems are denoted by IP$_1$ and IP$_2$, respectively. In addition, there is a known linear relation between the two spaces, expressed by a linear operator $A$. The problem is formulated as follows: find a point $x^* \in H_1$ which solves IP$_1$ and such that its image $y^* = Ax^*$ solves IP$_2$. The first instance of a problem fitting this model is the Split Feasibility Problem (SFP), introduced in 1994 by Censor and Elfving. In this problem there are two nonempty, closed and convex subsets $C \subset H_1$ and $Q \subset H_2$, and a linear operator $A : H_1 \to H_2$; the problem is to find a point $x^* \in C$ such that $Ax^* \in Q$. Here both IP$_1$ and IP$_2$ are feasibility problems: one has to find a point in one set such that its image under a given linear mapping lies in another set, each set residing in a different space. This approach has already found use in intensity-modulated radiation therapy (IMRT) treatment planning and, recently, also in the field of adaptive filtering (AF).

It was therefore natural to ask whether the discussion can be extended beyond feasibility. It turns out that the answer is yes! By formulating the general model we show how one can, in fact, formulate various problems which are new even in a finite-dimensional space. One example is the SVIP, in which we choose both IP$_1$ and IP$_2$ to be variational inequality problems. It turns out that in this way we obtain, as a special case, a situation in which feasibility is replaced by minimization. As mentioned above, this situation, in which one needs to go beyond feasibility, has already been studied in IMRT treatment planning. We also find that, recently, further fields have been able to exploit the structure of this model in order to improve performance. One such example is adaptive filtering, where the problem is split among several domains in different spaces; researchers in this field report results indicating improved performance with the aid of this model. We believe that there is still much to investigate in this direction, both theoretically and practically.

Besides the formulation of "split" problems, one can also consider the situation in which $H_1 = H_2$, that is, there are not two spaces but a single one, and the goal is then to find a common solution to several problems. We present several such new problems, as well as algorithms for solving them in an infinite-dimensional Hilbert space; some of these problems are new even in the finite-dimensional Euclidean space $\mathbb{R}^n$.
