Chapter · January 2018
General inertial Mann algorithms and their convergence analysis for nonexpansive mappings

Qiao-Li Dong, Yeol Je Cho, Themistocles M. Rassias

Dedicated to Haïm Brezis and Louis Nirenberg in deep admiration
Abstract. In this article, we introduce general inertial Mann algorithms for finding fixed points of nonexpansive mappings in Hilbert spaces, which include several other algorithms as special cases. We reanalyze the accelerated Mann algorithm, which is in fact an inertial type Mann algorithm. We investigate the convergence of the general inertial Mann algorithm and, based on this analysis, eliminate the strict convergence condition imposed on the accelerated Mann algorithm. Also, we apply the general inertial Mann algorithm to show the existence of solutions of minimization problems by proposing a general inertial type gradient-projection algorithm. Finally, we present preliminary numerical experiments to illustrate the advantage of the accelerated Mann algorithm.
1 Introduction

Let H be a Hilbert space and C be a nonempty closed convex subset of H. A mapping T : C → C is said to be nonexpansive if

∥T x − T y∥ ≤ ∥x − y∥

Q.L. Dong, College of Science, Civil Aviation University of China, Tianjin 300300, China. e-mail: [email protected]
Y.J. Cho, Department of Mathematics Education and RINS, Gyeongsang National University, Jinju 660-701, Korea; Center for General Education, China Medical University, Taichung 40402, Taiwan. e-mail: [email protected]
T.M. Rassias, Department of Mathematics, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece. e-mail: [email protected]
for all x, y ∈ C, and Fix(T) := {x ∈ C : T x = x} denotes the set of fixed points of T. In this paper, we consider the following fixed point problem:

Problem 1. Suppose that T : C → C is a nonexpansive mapping with Fix(T) ≠ ∅. Find a point x* ∈ C such that T(x*) = x*.

Approximating fixed points of nonexpansive mappings has a variety of applications, since many problems, such as convex feasibility problems and monotone variational inequalities, can be cast as fixed point problems of nonexpansive mappings (see [3, 4] and the references therein). In 2011, Micchelli et al. [27] proposed a fixed-point framework in the study of the total-variation model for image denoising, and finding a fixed point of a nonexpansive mapping was embedded in their algorithms. Recently, in 2013 and 2016, Chen et al. [9, 10] showed the convergence of primal-dual fixed point algorithms with the aid of the fixed point theory of nonexpansive mappings. A great deal of literature on iteration methods for fixed point problems of nonexpansive mappings has been published (for example, see [11, 12, 15, 16, 19, 30, 31, 32, 35, 38]). One of the most used algorithms is the Mann algorithm [20, 22]:

xn+1 = αn xn + (1 − αn)T xn    (1)
for each n ≥ 0. The iterative sequence {xn} converges weakly to a fixed point of T provided that {αn} ⊂ [0, 1] satisfies ∑_{n=1}^∞ αn(1 − αn) = +∞. In general, the convergence rate of the Mann algorithm is very slow, especially for large scale problems. In 2014, Sakurai and Iiduka [36] pointed out that, to keep practical systems and networks (see, for example, [17, 18]) stable and reliable, the fixed point has to be found quickly. Hence there is increasing interest in the study of fast algorithms for approximating fixed points of nonexpansive mappings.

To the best of our knowledge, there are two main ways to speed up the Mann algorithm. One way is to combine conjugate gradient methods [29] and the Mann algorithm to construct the accelerated Mann algorithm (see [14]); we analyze the accelerated Mann algorithm further in Section 3. Another way is to combine inertial extrapolation with the Mann algorithm.

Consider the following minimization problem:

min φ(x)    (2)
for all x ∈ H, where φ(x) is differentiable. There are many methods to solve the problem (2), the two most popular of which are the steepest descent method and the conjugate gradient method; the latter is a popular acceleration of the former. To accelerate the convergence of such algorithms, multi-step methods have been proposed in the literature, which can usually be viewed as certain discretizations of the second-order dynamical system with friction:
ẍ(t) + γ ẋ(t) + ∇φ(x(t)) = 0,

where γ > 0 represents a friction parameter. One of the simplest such methods is the two-step heavy ball method, in which, given xn and xn−1, the next point xn+1 is determined via

(xn+1 − 2xn + xn−1)/h² + γ (xn − xn−1)/h + ∇φ(xn) = 0,

which results in an iterative algorithm of the form

xn+1 = xn + β(xn − xn−1) − α ∇φ(xn)    (3)
for each n ≥ 0, where β = 1 − γh and α = h². In 1964, Polyak [33] first used (3) to solve the minimization problem (2) and called it an inertial type extrapolation algorithm. In 1987, Polyak [34] also considered the relation between the heavy ball method and the following conjugate gradient method:

xn+1 = xn + βn(xn − xn−1) − αn ∇φ(xn)    (4)

for each n ≥ 0, where αn and βn can be chosen in different ways. It is obvious that the only difference between the heavy ball method (3) and the method (4) is the choice of the parameters. Following Polyak's work, inertial extrapolation algorithms were widely studied as an acceleration process. Especially recently, researchers have constructed many iterative algorithms by using inertial extrapolation, such as inertial forward-backward algorithms [2, 7, 21], inertial extragradient methods [13] and fast iterative shrinkage-thresholding algorithms (FISTA) (see [5, 8]).

The inertial extrapolation algorithm is a two-step iterative method whose main feature is that the next iterate is defined by making use of the previous two iterates. By using the technique of inertial extrapolation, in 2008, Mainge [23] introduced the classical inertial Mann algorithm:

yn = xn + αn(xn − xn−1),
xn+1 = (1 − λn)yn + λn T(yn)    (5)

for each n ≥ 1. He showed that {xn} converges weakly to a fixed point of T under the following conditions:
(B1) αn ∈ [0, α) for each n ≥ 1, where α ∈ [0, 1);
(B2) ∑_{n=1}^∞ αn∥xn − xn−1∥² < +∞;
(B3) inf_{n≥1} λn > 0 and sup_{n≥1} λn < 1.
To satisfy the summability condition (B2), one needs to calculate αn at each step (see [28]). In 2015, Bot and Csetnek [7] removed the condition (B2) and replaced (B1) and (B3) with the following conditions, respectively:
(C1) {αn} ⊂ [0, α] is nondecreasing with α1 = 0 and 0 ≤ α < 1;
(C2) for each n ≥ 1,

δ > (α²(1 + α) + ασ)/(1 − α²),   0 < λ ≤ λn ≤ (δ − α[α(1 + α) + αδ + σ])/(δ[1 + α(1 + α) + αδ + σ]),
where λ, σ, δ > 0.

In this paper, we introduce a general inertial Mann algorithm which includes the classical inertial Mann algorithm and the accelerated Mann algorithm as special cases. The numerical experiments show that the accelerated Mann algorithm behaves better than the other algorithms.

The structure of the paper is as follows. In Section 2, we present some lemmas which will be used in the main results. In Section 3, we first revisit the accelerated Mann algorithm and show that it is an inertial type algorithm; then we analyze the relationship between the general inertial Mann algorithm and some other algorithms. The weak convergence of the general inertial Mann algorithm is discussed in Section 4. We apply the general inertial Mann algorithm to minimization problems and propose a general inertial type gradient-projection algorithm in Section 5. In the final section, Section 6, some numerical results are provided, which compare different choices of the parameters in the general inertial Mann algorithm.
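To make the inertial idea concrete, here is a minimal numerical sketch (our own illustration, not from the paper) comparing the plain Mann iteration (1) with the classical inertial Mann iteration (5). The two-dimensional mapping and all parameter values (αn ≡ 0.3, λn ≡ 0.5, the iteration count) are illustrative assumptions.

```python
import numpy as np

# Toy nonexpansive (in fact contractive) mapping T(x) = A x, where A is 0.9
# times a planar rotation, so ||A|| = 0.9 < 1 and Fix(T) = {0}.
theta = 0.5
A = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
T = lambda x: A @ x

def mann(x0, lam=0.5, n_iter=300):
    """Plain Mann iteration (1): x_{n+1} = lam * x_n + (1 - lam) * T(x_n)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = lam * x + (1 - lam) * T(x)
    return x

def inertial_mann(x0, alpha=0.3, lam=0.5, n_iter=300):
    """Classical inertial Mann iteration (5):
    y_n = x_n + alpha * (x_n - x_{n-1}),
    x_{n+1} = (1 - lam) * y_n + lam * T(y_n)."""
    x_prev = x = x0.copy()
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)
        x_prev, x = x, (1 - lam) * y + lam * T(y)
    return x

x0 = np.array([5.0, -3.0])
print(np.linalg.norm(mann(x0)))           # both norms are tiny: the iterates
print(np.linalg.norm(inertial_mann(x0)))  # approach Fix(T) = {0}
```

Both variants converge on this toy contraction; the point of the inertial term is its behavior on large scale problems, which the toy example does not capture.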
2 Preliminaries

We use the following notation:
(1) ⇀ for weak convergence and → for strong convergence;
(2) ωw(xk) = {x : ∃ {xkj} with xkj ⇀ x} denotes the weak ω-limit set of {xk}.

The following identity will be used several times in the paper (see Corollary 2.14 of [3]):

∥αx + (1 − α)y∥² = α∥x∥² + (1 − α)∥y∥² − α(1 − α)∥x − y∥²    (6)
for all α ∈ R and (x, y) ∈ H × H.

Definition 1. A mapping T : H → H is called an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,

T = (1 − α)I + α S,    (7)

where α is a number in ]0, 1[ and S : H → H is a nonexpansive mapping. More precisely, when (7) holds, we say that T is α-averaged. It is obvious that every averaged mapping is nonexpansive.
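As a quick numerical sanity check of the identity (6), the following small script (ours, not part of the original text; the test vectors and the values of α are arbitrary) verifies it for several real α, including values outside [0, 1]:

```python
import numpy as np

# Numerical check of identity (6): for every real alpha and vectors x, y,
# ||alpha*x + (1-alpha)*y||^2
#   = alpha*||x||^2 + (1-alpha)*||y||^2 - alpha*(1-alpha)*||x - y||^2.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
for alpha in (-0.5, 0.3, 1.0, 1.7):     # note: alpha need not lie in [0, 1]
    lhs = np.linalg.norm(alpha * x + (1 - alpha) * y) ** 2
    rhs = (alpha * np.linalg.norm(x) ** 2
           + (1 - alpha) * np.linalg.norm(y) ** 2
           - alpha * (1 - alpha) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-10
```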
Lemma 1. [1] Let {ψn}, {δn} and {αn} be sequences in [0, +∞) such that ψn+1 ≤ ψn + αn(ψn − ψn−1) + δn for each n ≥ 1, ∑_{n=1}^∞ δn < +∞, and there exists a real number α with 0 ≤ αn ≤ α < 1 for all n ∈ N. Then the following hold:
(1) ∑_{n≥1} [ψn − ψn−1]+ < +∞, where [t]+ = max{t, 0};
(2) there exists ψ* ∈ [0, +∞) such that lim_{n→+∞} ψn = ψ*.

Lemma 2. [3] Let D be a nonempty closed convex subset of H and T : D → H be a nonexpansive mapping. Let {xn} be a sequence in D and x ∈ H such that xn ⇀ x and T xn − xn → 0 as n → +∞. Then x ∈ Fix(T).

Lemma 3. [3] Let C be a nonempty subset of H and {xn} be a sequence in H such that the following two conditions hold:
(i) for all x ∈ C, lim_{n→∞} ∥xn − x∥ exists;
(ii) every sequential weak cluster point of {xn} is in C.
Then the sequence {xn} converges weakly to a point in C.
3 The general inertial Mann algorithms In this section, first, we revisit the accelerated Mann algorithms. Then we propose the general inertial Mann algorithm and show that it includes some other algorithms as special cases.
3.1 Revisiting the accelerated Mann algorithm

In 2014, Sakurai and Iiduka [36] first proposed an acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Inspired by their work, by combining the Mann algorithm (1) and conjugate gradient methods [29], the authors [14] proposed the following accelerated Mann algorithm:

dn+1 := (1/γ)(T(xn) − xn) + βn dn,    (8)
yn := xn + γ dn+1,    (9)
xn+1 := λn xn + (1 − λn)yn    (10)
for each n ≥ 1, where γ > 0. The sequence {xn} converges weakly to a fixed point of T provided that the sequences {λn} and {βn} satisfy the following conditions:
(A1) ∑_{n=0}^∞ λn(1 − λn) = ∞;
(A2) ∑_{n=0}^∞ βn < ∞.
Moreover, the sequence {xn} is required to satisfy the following condition:
(A3) {T(xn) − xn} is bounded.
Remark 1. The condition (A3) is very strict. Sakurai and Iiduka [36] discussed it in two cases:
(1) Suppose that Fix(T) is bounded. Let C be a bounded closed convex set such that Fix(T) ⊂ C and PC can be easily computed (for example, C is a closed ball with a large enough radius). Then compute xn+1 := PC(λn xn + (1 − λn)yn) for each n ≥ 1 instead of the xn+1 in (10). The boundedness of C and the nonexpansivity of T imply that {xn} and {T(xn)} are bounded. Therefore, the condition (A3) holds.
(2) Suppose that Fix(T) is unbounded. Then one cannot choose a bounded set C satisfying Fix(T) ⊂ C, nor verify the boundedness of {T(xn) − xn}.

Next, we rewrite the accelerated Mann algorithm (8)–(10). Based on the new formula, its convergence will be reanalyzed in Section 4. Substituting (9) into (10), we have

xn+1 = λn xn + (1 − λn)(xn + γ dn+1) = xn + (1 − λn)γ dn+1    (11)

for each n ≥ 1, which implies that

dn+1 = (1/((1 − λn)γ))(xn+1 − xn)    (12)

for each n ≥ 1. Combining (8) and (9), we have

yn = T(xn) + γ βn dn = T(xn) + (βn/(1 − λn−1))(xn − xn−1)    (13)

for each n ≥ 1, where the second equality comes from (12). Substituting (13) into (10), we obtain

xn+1 = λn xn + (1 − λn)[T(xn) + (βn/(1 − λn−1))(xn − xn−1)]
     = λn xn + (1 − λn)T(xn) + (βn(1 − λn)/(1 − λn−1))(xn − xn−1)    (14)
     = λn [xn + (βn(1 − λn)/(λn(1 − λn−1)))(xn − xn−1)] + (1 − λn)T(xn)

for each n ≥ 1. Set

αn = βn(1 − λn)/(λn(1 − λn−1))    (15)

and

yn = xn + αn(xn − xn−1)    (16)

for each n ≥ 1. Then the formulas (8)–(10) can be rewritten as:

yn = xn + αn(xn − xn−1),
xn+1 = λn yn + (1 − λn)T xn    (17)

for each n ≥ 1.
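The equivalence of (8)–(10) and (17) can be checked numerically. The sketch below (our illustration, not from the paper) runs both recursions with a constant λn = λ, so that (15) gives a constant formula for αn, starting from d1 = 0 and x1 = x0, which is consistent with (12); the mapping, γ, λ, the summable βn, and the iteration count are arbitrary assumptions.

```python
import numpy as np

# Check numerically that the accelerated Mann iteration (8)-(10) produces the
# same iterates x_n as its inertial rewriting (17) with alpha_n from (15).
theta = 0.4
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
T = lambda x: A @ x

gamma, lam, n_iter = 2.0, 0.5, 30
betas = [0.9 / (k + 2) ** 2 for k in range(n_iter)]   # summable, as in (A2)

# --- form (8)-(10), started with d_1 = 0 (consistent with x_1 = x_0 via (12))
x = np.array([3.0, -1.0]); d = np.zeros(2)
acc = [x]
for beta in betas:
    d = (T(x) - x) / gamma + beta * d      # (8)
    y = x + gamma * d                      # (9)
    x = lam * x + (1 - lam) * y            # (10)
    acc.append(x)

# --- equivalent inertial form (17)
x_prev = x_cur = np.array([3.0, -1.0])
inr = [x_cur]
for beta in betas:
    alpha = beta * (1 - lam) / (lam * (1 - lam))   # (15) with constant lambda_n
    y = x_cur + alpha * (x_cur - x_prev)
    x_prev, x_cur = x_cur, lam * y + (1 - lam) * T(x_cur)
    inr.append(x_cur)

# The two iterate sequences agree up to floating-point rounding.
assert max(np.linalg.norm(a - b) for a, b in zip(acc, inr)) < 1e-10
```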
3.2 Algorithms

Now we present the general inertial Mann algorithm as follows:

yn = xn + αn(xn − xn−1),
zn = xn + βn(xn − xn−1),    (18)
xn+1 = (1 − λn)yn + λn T(zn)
for each n ≥ 1, where {αn}, {βn} and {λn} satisfy the following conditions:
(D1) {αn} ⊂ [0, α] and {βn} ⊂ [0, β] are nondecreasing with α1 = β1 = 0 and α, β ∈ [0, 1);
(D2) for each n ≥ 1,

δ > (αξ(1 + ξ) + ασ)/(1 − α²),   0 < λ ≤ λn ≤ (δ − α[ξ(1 + ξ) + αδ + σ])/(δ[1 + ξ(1 + ξ) + αδ + σ]),    (19)

where λ, σ, δ > 0 and ξ = max{α, β}.

Remark 2. In form, the general inertial Mann algorithm is the most general Mann algorithm with inertial effects that we are aware of. It is easy to show that the general inertial Mann algorithm includes other algorithms as special cases. The relations between the algorithm (18) and other work are as follows:
(1) αn = βn, i.e., yn = zn: this is the classical inertial Mann algorithm [23];
(2) βn = 0: this becomes the accelerated Mann algorithm [14];
(3) αn = 0: it becomes the following algorithm

zn = xn + βn(xn − xn−1),
xn+1 = (1 − λn)xn + λn T(zn)    (20)

for each n ≥ 1, which has not been studied before. Inspired by Malitsky [26] and Mainge [24, 25], we call the algorithm (20) the reflected Mann algorithm.
4 Convergence analysis

In this section, we prove the convergence of the general inertial Mann algorithm and then deduce the convergence of the other methods.

Theorem 1. Suppose that T : H → H is nonexpansive with Fix(T) ≠ ∅. Assume that the conditions (D1) and (D2) hold. Then the sequence {xn} generated by the general inertial Mann algorithm (18) converges weakly to a point of Fix(T).

Proof. Take p ∈ Fix(T) arbitrarily. From (6), it follows that

∥xn+1 − p∥² = (1 − λn)∥yn − p∥² + λn∥T zn − p∥² − λn(1 − λn)∥T zn − yn∥²
≤ (1 − λn)∥yn − p∥² + λn∥zn − p∥² − λn(1 − λn)∥T zn − yn∥².    (21)
Using (6) again, we have

∥yn − p∥² = ∥(1 + αn)(xn − p) − αn(xn−1 − p)∥²
= (1 + αn)∥xn − p∥² − αn∥xn−1 − p∥² + αn(1 + αn)∥xn − xn−1∥².    (22)
Similarly, we have

∥zn − p∥² = (1 + βn)∥xn − p∥² − βn∥xn−1 − p∥² + βn(1 + βn)∥xn − xn−1∥².    (23)

Combining (21), (22) and (23), we have

∥xn+1 − p∥² − (1 + θn)∥xn − p∥² + θn∥xn−1 − p∥²
≤ −λn(1 − λn)∥T zn − yn∥² + [(1 − λn)αn(1 + αn) + λn βn(1 + βn)]∥xn − xn−1∥²,    (24)

where

θn = αn(1 − λn) + βn λn.

From (D1), (D2) and λn ∈ (0, 1), it follows that {θn} ⊂ [0, ξ] is nondecreasing with θ1 = 0. Using (18), we have

∥T zn − yn∥² = ∥(1/λn)(xn+1 − xn) + (αn/λn)(xn−1 − xn)∥²
= (1/λn²)∥xn+1 − xn∥² + (αn²/λn²)∥xn−1 − xn∥² + (2αn/λn²)⟨xn+1 − xn, xn−1 − xn⟩
≥ (1/λn²)∥xn+1 − xn∥² + (αn²/λn²)∥xn−1 − xn∥² + (αn/λn²)(−ρn∥xn+1 − xn∥² − (1/ρn)∥xn−1 − xn∥²),    (25)

where we denote ρn := 1/(αn + δ λn). From (24) and (25), we can derive the inequality

∥xn+1 − p∥² − (1 + θn)∥xn − p∥² + θn∥xn−1 − p∥² ≤ ((1 − λn)(αn ρn − 1)/λn)∥xn+1 − xn∥² + µn∥xn − xn−1∥²,    (26)

where

µn = (1 − λn)αn(1 + αn) + λn βn(1 + βn) + αn(1 − λn)(1 − ρn αn)/(ρn λn) ≥ 0    (27)

since ρn αn ≤ 1 and λn ∈ (0, 1). Again, taking into account the choice of ρn, we have

δ = (1 − ρn αn)/(ρn λn)

and, from (27),

µn = (1 − λn)αn(1 + αn) + λn βn(1 + βn) + αn(1 − λn)δ ≤ ξ(1 + ξ) + αδ    (28)
for each n ≥ 1. In the following, we apply some techniques from [7, 2] adapted to our setting. Define the sequences ϕn := ∥xn − p∥2 for all n ∈ N and Ψn := ϕn − θn ϕn−1 + µn ∥xn − xn−1 ∥2 for all n ≥ 1. Using the monotonicity of {θn } and the fact that ϕn ≥ 0 for all n ∈ N, we have
Ψn+1 − Ψn ≤ ϕn+1 − (1 + θn )ϕn + θn ϕn−1 + µn+1 ∥xn+1 − xn ∥2 − µn ∥xn − xn−1 ∥2 . By (26), we know
Ψn+1 − Ψn ≤
( (1 − λ )(α ρ − 1) ) n n n + µn+1 ∥xn+1 − xn ∥2 . λn
(29)
(1 − λn )(αn ρn − 1) + µn+1 ≤ −σ λn
(30)
Now, we claim that
for each n ≥ 1. Indeed, by (27) and the monotonicity of {λn }, we have (1 − λn )(αn ρn − 1) + µn+1 ≤ −σ λn ⇐⇒ λn (µn+1 + σ ) + (1 − λn )(αn ρn − 1) ≤ 0
δ λn (1 − λn ) ≤0 αn + δ λ n ⇐⇒ (αn + δ λn )(µn+1 + σ ) + δ λn ≤ δ . ⇐⇒ λn (µn+1 + σ ) −
Employing (28), we have
(αn + δ λn)(µn+1 + σ) + δ λn ≤ (α + δ λn)[ξ(1 + ξ) + αδ + σ] + δ λn ≤ δ,

where the last inequality follows from the upper bound for λn in (19). Hence the claim in (30) is true. It follows from (29) and (30) that
Ψn+1 − Ψn ≤ −σ∥xn+1 − xn∥²    (31)

for each n ≥ 1. The sequence {Ψn}n≥1 is nonincreasing, and the boundedness of {θn}n≥1 delivers

−ξ ϕn−1 ≤ ϕn − ξ ϕn−1 ≤ Ψn ≤ Ψ1    (32)

for each n ≥ 1. Thus we obtain

ϕn ≤ ξ^n ϕ0 + Ψ1 ∑_{k=1}^{n−1} ξ^k ≤ ξ^n ϕ0 + Ψ1/(1 − ξ)    (33)

for each n ≥ 1, where we notice that Ψ1 = ϕ1 ≥ 0 (due to the relation θ1 = α1 = β1 = 0). Using (31)-(33), for all n ≥ 1, we have

σ ∑_{k=1}^{n} ∥xk+1 − xk∥² ≤ Ψ1 − Ψn+1 ≤ Ψ1 + ξ ϕn ≤ ξ^{n+1} ϕ0 + Ψ1/(1 − ξ),

which means that

∑_{n=1}^{∞} ∥xn+1 − xn∥² < +∞.    (34)

Thus we have

lim_{n→∞} ∥xn+1 − xn∥ = 0.    (35)
From (18), we have

∥yn − xn+1∥ ≤ ∥xn − xn+1∥ + αn∥xn − xn−1∥ ≤ ∥xn − xn+1∥ + α∥xn − xn−1∥,

which with (35) implies that

lim_{n→∞} ∥yn − xn+1∥ = 0.    (36)

Similarly, we obtain

lim_{n→∞} ∥zn − xn+1∥ = 0.    (37)

For an arbitrary p ∈ Fix(T), by (26), (28), (34) and Lemma 1, we derive that lim_{n→∞} ∥xn − p∥ exists (we also take into consideration λn ∈ (0, 1) in (26)). On the other hand, let x be a sequential weak cluster point of {xn}, that is, there exists a subsequence {xnk} which converges weakly to x. By (37), it follows that znk ⇀ x as k → ∞. Furthermore, from (18), we have
∥T zn − zn∥ ≤ ∥T zn − yn∥ + ∥yn − zn∥
≤ (1/λn)∥xn+1 − yn∥ + ∥yn − xn+1∥ + ∥zn − xn+1∥
≤ (1 + 1/λ)∥xn+1 − yn∥ + ∥zn − xn+1∥.

Thus, by (36) and (37), we obtain ∥T znk − znk∥ → 0 as k → ∞. Applying now Lemma 2 to the sequence {znk}, we conclude that x ∈ Fix(T). From Lemma 3, it follows that {xn} converges weakly to a point in Fix(T). This completes the proof.

Let αn = βn; then Theorem 1 becomes Theorem 5 in [7].

Theorem 2. Suppose that T : H → H is a nonexpansive mapping with Fix(T) ≠ ∅. Assume that the conditions (C1) and (C2) hold. Then the sequence {xn} generated by the classical inertial Mann algorithm (5) converges weakly to a point of Fix(T).

Let βn = 0; then we obtain another convergence condition for the accelerated Mann algorithm.

Theorem 3. Suppose that T : H → H is a nonexpansive mapping with Fix(T) ≠ ∅. Assume that {αn} ⊂ [0, α] is nondecreasing with α1 = 0 and 0 ≤ α < 1, and that {λn} satisfies

δ > (α²(1 + α) + ασ)/(1 − α²),   0 < λ ≤ λn ≤ (δ − α[α(1 + α) + αδ + σ])/(δ[1 + α(1 + α) + αδ + σ]),

where λ, σ, δ > 0. Then the sequence {xn} generated by the accelerated Mann algorithm (17) converges weakly to a point of Fix(T).

Remark 3. It is obvious that Theorem 3 does not need the strict condition (A3).

Let αn = 0; then we obtain the convergence theorem for the reflected Mann algorithm.

Theorem 4. Suppose that T : H → H is a nonexpansive mapping with Fix(T) ≠ ∅. Assume that {βn} ⊂ [0, β] is nondecreasing with β1 = 0 and 0 ≤ β < 1, and that {λn} satisfies

0 < λ ≤ λn ≤ 1/(1 + β(1 + β) + σ),

where λ, σ > 0. Then the sequence {xn} generated by the reflected Mann algorithm (20) converges weakly to a point of Fix(T).
5 Applications

Consider the following constrained convex minimization problem:

min_{x∈C} φ(x),    (38)

where C is a closed convex subset of a Hilbert space H and φ : C → R is a real-valued convex function. If φ(x) is differentiable, then the problem (38) is equivalent to the following fixed point problem:

x = PC(x − γ ∇φ(x)),    (39)

where γ > 0. Then the gradient-projection algorithm generates an iterative sequence via

xn+1 = PC(xn − γ ∇φ(xn))    (40)

for each n ≥ 1, where the initial guess x0 is taken from C arbitrarily, the parameter γ is a positive real number and PC is the metric projection from H onto C.

Remark 4. There are some inertial type algorithms for solving the minimization problem (38). We first review them as follows:
(1) In 2015, Bot and Csetnek [6] proposed the so-called inertial hybrid proximal extragradient algorithm, which includes the following algorithm as a special case:

yk = xk + αk(xk − xk−1),
xk+1 = PC(yk − λk ∇φ(xk))

for each k ≥ 1. They showed the convergence of the algorithm provided that ∇φ is γ-cocoercive and {αk} is nondecreasing with α1 = 0, 0 ≤ αk ≤ α and 0 < λ ≤ λk ≤ 2γσ² for any α, σ ≥ 0 such that α(5 + 4σ²) + σ² < 1.
(2) In 2015, Malitsky [26] proposed the projected reflected method:

xk+1 = PC(xk − λ ∇φ(2xk − xk−1))

for each k ≥ 1. In 2016, Mainge [24, 25] extended the above method to more general cases as follows:

yk = xk + αk(xk − xk−1),
xk+1 = PC(xk − λk ∇φ(yk))

for each k ≥ 1, where αk ≥ 0 and {λk} ⊂ [0, 1] satisfy some conditions. They proved the convergence of the method when ∇φ is Lipschitz continuous and monotone.
(3) In 2016, Dong et al. [13] introduced the extragradient method with inertial effects:

wk = xk + αk(xk − xk−1),
yk = PC(wk − τ ∇φ(wk)),    (41)
xk+1 = (1 − λk)wk + λk PC(wk − τ ∇φ(yk))

for each k ≥ 1. The numerical experiments show that the inertial algorithm (41) speeds up the extragradient method.
Assume that ∇φ is L-Lipschitz continuous, namely, there is a constant L > 0 such that

∥∇φ(x) − ∇φ(y)∥ ≤ L∥x − y∥    (42)

for all x, y ∈ C. In 2011, Xu [37] showed that the composite PC(I − γ∇φ) is ((2 + γL)/4)-averaged for 0 < γ < 2/L. So the composite PC(I − γ∇φ) is nonexpansive, and we can use the general inertial Mann algorithm (18) to construct the general inertial gradient-projection algorithm for (38) as follows:

yn = xn + αn(xn − xn−1),
zn = xn + βn(xn − xn−1),    (43)
xn+1 = (1 − λn)yn + λn PC(zn − γ ∇φ(zn))

for each n ≥ 1, where {αn}, {βn} and {λn} satisfy the conditions (D1) and (D2). Applying Theorem 1, we have the following convergence result:

Theorem 5. Assume that the minimization problem (38) is consistent and the gradient ∇φ satisfies the Lipschitz condition (42). Let γ be a number such that 0 < γ < 2/L. Then the sequence {xn} generated by the general inertial gradient-projection algorithm (43) converges weakly to a minimizer of the problem (38).
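As an illustration of the general inertial gradient-projection algorithm (43), the following sketch (ours, not from the paper) solves a toy instance with φ(x) = ½∥x − b∥² over a box; here L = 1, so γ = 0.5 < 2/L is admissible, and the inertial parameters are illustrative choices rather than values certified by (D2).

```python
import numpy as np

# Toy instance of (43): minimize phi(x) = 0.5*||x - b||^2 over the box
# C = [0, 1]^2.  grad phi(x) = x - b is 1-Lipschitz (L = 1).
b = np.array([2.0, -0.5])                # unconstrained minimizer, lies outside C
grad = lambda x: x - b
proj_C = lambda x: np.clip(x, 0.0, 1.0)  # metric projection onto the box
gamma, lam = 0.5, 0.5

x_prev = x = np.array([0.5, 0.5])
for n in range(1, 200):
    alpha = 0.0 if n == 1 else 0.2       # alpha_1 = beta_1 = 0 as in (D1)
    beta = 0.0 if n == 1 else 0.1
    diff = x - x_prev
    y = x + alpha * diff                 # inertial point y_n
    z = x + beta * diff                  # inertial point z_n
    x_prev, x = x, (1 - lam) * y + lam * proj_C(z - gamma * grad(z))

# The constrained minimizer is the projection of b onto C, namely (1, 0).
print(x)
```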
6 Numerical examples and conclusions

In this section, we present a numerical example to illustrate the choice of the parameters {αn} and {βn} in the general inertial algorithm (18). All the programs are written in Matlab version 7.0 and performed on a PC Desktop: Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz 2.30 GHz, RAM 4.00 GB.

Problem 2. (see [36]) Given nonempty closed convex sets Ci ⊂ R^N (i = 0, 1, · · · , m), find

x* ∈ C := ∩_{i=0}^m Ci,

where one assumes that C ≠ ∅.

Define a mapping T : R^N → R^N by

T := P0 ((1/m) ∑_{i=1}^m Pi),    (44)

where Pi = PCi (i = 0, 1, · · · , m) stands for the metric projection onto Ci. Since each Pi (i = 0, 1, · · · , m) is nonexpansive, the mapping T defined by (44) is also nonexpansive. Moreover, we find that
Fix(T) = Fix(P0) ∩ (∩_{i=1}^m Fix(Pi)) = C0 ∩ (∩_{i=1}^m Ci) = C.
In the experiment, we set each Ci (i = 0, 1, · · · , m) as a closed ball with center ci ∈ R^N and radius ri > 0. Thus Pi (i = 0, 1, · · · , m) can be computed by

Pi(x) := ci + (ri/∥ci − x∥)(x − ci) if ∥ci − x∥ > ri,
Pi(x) := x if ∥ci − x∥ ≤ ri.

We choose ri := 1 (i = 0, 1, · · · , m) and c0 := 0, and the centers ci ∈ (−1/√N, 1/√N)^N (i = 1, · · · , m) are randomly chosen. In the numerical results listed in the following tables, "Iter." denotes the number of iterations. We take E(xn) = ∥xn − xn−1∥ < 10^{−6} as the stopping criterion and test three initial values x0. In the general inertial algorithm, there are three parameters αn, βn, λn. To compare the different algorithms, we choose λn = 0.5 and test different choices of αn and βn.

Table 1 The general inertial Mann algorithm with αn = 0.4, λn = 0.5, N = 500, m = 300.

The initial value            βn:   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
100 × rand(N, 1)           Iter.     8    18    20  1367  1291  1195   796    32    36    41    47
(1, 1, · · · , 1)          Iter.     4    13    15    17  1111  1031   928    27    31    36    42
(1, −1, · · · , 1, −1)     Iter.     4    13    15    17  1429  1333  1217    27    31    36    42

Table 2 The general inertial Mann algorithm with βn = 0.0, λn = 0.5, N = 500, m = 700.

The initial value            αn:   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
100 × rand(N, 1)           Iter.  2181  4093  3786     4     8  2588    12    11    15    15    19
(1, 1, · · · , 1)          Iter.  4798  4557  4315  3585     4     3     3     8     8     7     7
(10, −10, · · · , 10, −10) Iter.  3608  3427  3249     5  3213     8     7    12    11    11    15
Table 1 illustrates that the number of iterations for the general inertial Mann algorithm is minimal when βn = 0.0, that is, the accelerated Mann algorithm performs best. From Table 2, we conclude that the number of iterations is small for the accelerated algorithm with αn ∈ [0.6, 1.0].
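The experiment of Problem 2 can be reproduced in outline as follows (a Python sketch of our own on a much smaller instance, with assumed dimensions N = 50, m = 30 and a fixed seed; the paper's experiments use Matlab with N = 500 and larger m):

```python
import numpy as np

# Small instance of Problem 2: C_i are closed unit balls, c_0 = 0, and
# T := P_0((1/m) * sum_i P_i).  We run the accelerated Mann case of (18)
# (beta_n = 0, alpha_n = 0.4, lambda_n = 0.5) with the paper's stopping rule
# ||x_n - x_{n-1}|| < 10^{-6}.
rng = np.random.default_rng(1)
N, m = 50, 30
centers = rng.uniform(-1 / np.sqrt(N), 1 / np.sqrt(N), size=(m, N))

def proj_ball(x, c, r=1.0):
    """Metric projection onto the closed ball B(c, r)."""
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

def T(x):
    avg = np.mean([proj_ball(x, c) for c in centers], axis=0)
    return proj_ball(avg, np.zeros(N))   # P_0 = projection onto B(0, 1)

alpha_n, lam = 0.4, 0.5
x_prev = x = 100 * rng.random(N)
for n in range(1, 10000):
    a = 0.0 if n == 1 else alpha_n       # alpha_1 = 0 as in (D1)
    y = x + a * (x - x_prev)
    x_prev, x = x, (1 - lam) * y + lam * T(x)   # z_n = x_n since beta_n = 0
    if np.linalg.norm(x - x_prev) < 1e-6:
        break
```

At termination the residual ∥T(x) − x∥ is small, so x is (approximately) a point of the intersection C; since the centers satisfy ∥ci∥ < 1, the origin lies in every ball and C is nonempty, as Problem 2 assumes.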
Acknowledgments The first author would like to thank Professor Chunlin Wu for the discussion and valuable suggestions. We wish to express our thanks to Professor Mihai Turinici for reading the paper and providing very helpful comments.
References

1. F. Alvarez, Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space, SIAM J. Optim. 14 (2004), 773–782.
2. F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal. 9 (2001), 3–11.
3. H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, Berlin (2011).
4. H. H. Bauschke, J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev. 38 (1996), 367–426.
5. A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. 2(1) (2009), 183–202.
6. R. I. Bot, E. R. Csetnek, A hybrid proximal-extragradient algorithm with inertial effects, Numer. Funct. Anal. Optim. 36 (2015), 951–963.
7. R. I. Bot, E. R. Csetnek, C. Hendrich, Inertial Douglas-Rachford splitting for monotone inclusion problems, Appl. Math. Comput. 256 (2015), 472–487.
8. A. Chambolle, Ch. Dossal, On the convergence of the iterates of the "Fast Iterative Shrinkage/Thresholding Algorithm", J. Optim. Theory Appl. 166 (2015), 968–982.
9. P. Chen, J. Huang, X. Zhang, A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration, Inverse Probl. 29 (2013), 025011 (33 pp.).
10. P. Chen, J. Huang, X. Zhang, A primal-dual fixed point algorithm for minimization of the sum of three convex separable functions, Fixed Point Theory Appl. 2016 (2016), 54.
11. Q. L. Dong, S. He, Y. J. Cho, A new hybrid algorithm and its numerical realization for two nonexpansive mappings, Fixed Point Theory Appl. 2015 (2015), 150.
12. Q. L. Dong, Y. Y. Lu, A new hybrid algorithm for a nonexpansive mapping, Fixed Point Theory Appl. 2015 (2015), 37.
13. Q. L. Dong, Y. Y. Lu, J. Yang, The extragradient algorithm with inertial effects for solving the variational inequality, Optimization 65(12) (2016), 2217–2226.
14. Q. L. Dong, H. Y. Yuan, Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping, Fixed Point Theory Appl. 2015 (2015), 125.
15. B. Halpern, Fixed points of nonexpanding maps, Bull. Amer. Math. Soc. 73 (1967), 957–961.
16. S. He, C. Yang, Boundary point algorithms for minimum norm fixed points of nonexpansive mappings, Fixed Point Theory Appl. 2014 (2014), 56.
17. H. Iiduka, Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation, SIAM J. Optim. 22 (2012), 862–878.
18. H. Iiduka, Fixed point optimization algorithms for distributed optimization in networked systems, SIAM J. Optim. 23 (2013), 1–26.
19. S. Ishikawa, Fixed points by a new iteration method, Proc. Amer. Math. Soc. 44 (1974), 147–150.
20. M. A. Krasnoselskii, Two remarks on the method of successive approximations, Usp. Mat. Nauk 10 (1955), 123–127.
21. D. A. Lorenz, T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis. 51 (2015), 311–325.
22. W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc. 4 (1953), 506–510.
23. P. E. Mainge, Convergence theorems for inertial KM-type algorithms, J. Comput. Appl. Math. 219 (2008), 223–236.
24. P. E. Mainge, Numerical approach to monotone variational inequalities by a one-step projected reflected gradient method with line-search procedure, Comput. Math. Appl. 72 (2016), 720–728.
25. P. E. Mainge, M. L. Gobinddass, Convergence of one-step projected gradient methods for variational inequalities, J. Optim. Theory Appl. 171 (2016), 146–168.
26. Yu. Malitsky, Projected reflected gradient method for variational inequalities, SIAM J. Optim. 25 (2015), 502–520.
27. C. A. Micchelli, L. Shen, Y. Xu, Proximity algorithms for image models: denoising, Inverse Probl. 27 (2011), 045009 (38 pp.).
28. A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math. 155 (2003), 447–454.
29. J. Nocedal, S. J. Wright, Numerical Optimization, 2nd edn., Springer Series in Operations Research and Financial Engineering, Springer, Berlin (2006).
30. P. M. Pardalos, P. G. Georgiev, H. M. Srivastava, Nonlinear Analysis, Stability, Approximation, and Inequalities: In Honor of Themistocles M. Rassias on the Occasion of his 60th Birthday, Springer, 2012.
31. P. M. Pardalos, Th. M. Rassias, Contributions in Mathematics and Engineering: In Honor of Constantin Caratheodory, Springer, 2016.
32. E. Picard, Memoire sur la theorie des equations aux derivees partielles et la methode des approximations successives, J. Math. Pures Appl. 6 (1890), 145–210.
33. B. T. Polyak, Some methods of speeding up the convergence of iteration methods, U.S.S.R. Comput. Math. Math. Phys. 4 (1964), 1–17.
34. B. T. Polyak, Introduction to Optimization, Optimization Software Inc., Publications Division, New York (1987).
35. Th. M. Rassias, L. Toth, Topics in Mathematical Analysis and Applications, Springer (2014).
36. K. Sakurai, H. Iiduka, Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping, Fixed Point Theory Appl. 2014 (2014), 202.
37. H. K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl. 150 (2011), 360–378.
38. C. Yang, S. He, General alternative regularization methods for nonexpansive mappings in Hilbert spaces, Fixed Point Theory Appl. 2014 (2014), 203.