RACSAM · Original Paper · https://doi.org/10.1007/s13398-018-0504-1

Shrinking projection methods involving inertial forward–backward splitting methods for inclusion problems

Suhel Ahmad Khan¹ · Suthep Suantai² · Watcharaporn Cholamjiak³

Received: 10 July 2017 / Accepted: 30 January 2018 © Springer-Verlag Italia S.r.l., part of Springer Nature 2018

Abstract  In this paper, we propose a modified forward–backward splitting method using the shrinking projection and the inertial technique for solving the inclusion problem of the sum of two monotone operators. We prove its strong convergence under suitable conditions in Hilbert spaces. We provide some numerical experiments, including a comparison, to show the implementation and the efficiency of our method.

Keywords  Shrinking projection method · Inertial method · Inclusion problem · Maximal monotone operator · Forward–backward algorithm

Mathematics Subject Classification  47H04 · 47H10

Watcharaporn Cholamjiak (corresponding author), [email protected] · Suhel Ahmad Khan, [email protected] · Suthep Suantai, [email protected]

¹ Department of Mathematics, BITS-Pilani, Dubai Campus, P.O. Box 345055, Dubai, UAE
² Center of Excellence in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
³ School of Science, University of Phayao, Phayao 56000, Thailand

1 Introduction

Let H be a Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖. We study the following inclusion problem: find x̂ ∈ H such that

0 ∈ Ax̂ + Bx̂,   (1.1)

where A : H → H is an operator and B : H → 2^H is a set-valued operator. We denote the solution set of (1.1) by (A + B)⁻¹(0). This problem has received much attention because a large variety of problems arising in convex programming, variational inequalities, split feasibility, and minimization can be cast in this form. More concretely, problems in machine learning, image processing, and linear inverse problems can be modeled by this formulation.

For solving problem (1.1), the forward–backward splitting method [4,6,7,14,15,20,28] is usually employed; it is defined in the following manner: x1 ∈ H and

xn+1 = (I + rB)⁻¹(xn − rAxn), n ≥ 1,   (1.2)

where r > 0. Each iteration thus involves only A, in the forward step, and B, in the backward step, never the sum of the two operators. This method includes, as special cases, the proximal point algorithm [25] and the gradient method.
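To fix ideas, here is a minimal Python sketch of iteration (1.2). It is ours, not code from the paper, and it assumes the resolvent (I + rB)⁻¹ is available in closed form; as test data we borrow the affine operators of Example 3.4 below, for which the zero of A + B is (−1/7, −2/7), consistent with the value reported there.

```python
import numpy as np

def forward_backward(x1, A, J_B, r, n_iter=100):
    """Iterate x_{n+1} = (I + rB)^{-1}(x_n - r A x_n), as in (1.2).

    A   : the single-valued (forward) operator, a callable x -> Ax
    J_B : the resolvent (I + rB)^{-1} of the (backward) operator B,
          supplied in closed form as a callable y -> J_B(y)
    r   : step size, r > 0
    """
    x = np.asarray(x1, dtype=float)
    for _ in range(n_iter):
        x = J_B(x - r * A(x))          # forward step, then backward step
    return x

# Operators of Example 3.4: Ax = 3x + (1, 2), Bx = 4x,
# so (I + rB)^{-1} y = y / (1 + 4r).
c = np.array([1.0, 2.0])
x_star = forward_backward(np.array([2.0, 3.0]),
                          A=lambda x: 3 * x + c,
                          J_B=lambda y: y / (1 + 4 * 0.1),
                          r=0.1)
print(x_star)   # approaches the zero of Ax + Bx = 7x + (1, 2), i.e. (-1/7, -2/7)
```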

In [13], Lions and Mercier introduced the following splitting iterative methods in a real Hilbert space:

xn+1 = (2J_r^A − I)(2J_r^B − I)xn, n ≥ 1,   (1.3)

and

xn+1 = J_r^A(2J_r^B − I)xn + (I − J_r^B)xn, n ≥ 1,   (1.4)

where J_r^T = (I + rT)⁻¹ with r > 0. The first is often called the Peaceman–Rachford algorithm [21] and the second the Douglas–Rachford algorithm [10]. We note that both algorithms are only weakly convergent in general [3,13].

Many problems can be formulated as a problem of the form (1.1). For instance, a stationary solution to the initial value problem of the evolution equation

0 ∈ ∂u/∂t − Fu, u(0) = u₀,   (1.5)

can be recast as (1.1) when the governing maximal monotone operator F is of the form F = A + B [13]. In optimization, one often needs [7] to solve a minimization problem of the form

min_{x∈H} f(x) + g(x),   (1.6)

where f, g are proper and lower semicontinuous convex functions from H to the extended real line R̄ = (−∞, ∞] such that f is differentiable with an L-Lipschitz continuous gradient, and the proximal mapping of g is

x ↦ argmin_{y∈H} { g(y) + ‖x − y‖²/(2r) }.   (1.7)
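As a quick sanity check of definition (1.7) (ours, not from the paper), one can evaluate the proximal mapping of g = |·| on R by direct numerical minimization and compare with its well-known closed form, soft-thresholding:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def prox_numeric(g, x, r):
    """Evaluate (1.7) for scalar x by minimizing g(y) + (x - y)^2 / (2r) directly."""
    return minimize_scalar(lambda y: g(y) + (x - y) ** 2 / (2 * r)).x

# prox of g = |.| with r = 0.5 at x = 1.3 should be sign(1.3) * max(|1.3| - 0.5, 0) = 0.8
print(prox_numeric(abs, 1.3, 0.5))   # ≈ 0.8
```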

In particular, if A := ∇f and B := ∂g, where ∇f is the gradient of f and ∂g is the subdifferential of g, defined by ∂g(x) := {s ∈ H : g(y) ≥ g(x) + ⟨s, y − x⟩, ∀y ∈ H}, then problem (1.1) becomes (1.6), and (1.2) becomes

xn+1 = prox_{rg}(xn − r∇f(xn)), n ≥ 1,   (1.8)

where r > 0 is the stepsize and prox_{rg} = (I + r∂g)⁻¹ is the proximity operator of g.

In 2001, Alvarez and Attouch [1] applied the heavy ball method studied in [22,23] to maximal monotone operators via the proximal point algorithm. The resulting algorithm is called the inertial proximal point algorithm, and it takes the following form:

yn = xn + θn(xn − xn−1),
xn+1 = (I + rnB)⁻¹ yn, n ≥ 1.   (1.9)


It was proved that if {rn} is non-decreasing and {θn} ⊂ [0, 1) with

∑_{n=1}^∞ θn‖xn − xn−1‖² < ∞,   (1.10)

then algorithm (1.9) converges weakly to a zero of B. In particular, condition (1.10) holds for θn < 1/3. Here θn is an extrapolation factor and the inertia is represented by the term θn(xn − xn−1). It is remarkable that the inertial term greatly improves the performance of the algorithm and yields nice convergence properties [8,9,19].

Recently, Moudafi and Oliny [16] proposed the following inertial proximal point algorithm for solving the zero-finding problem for the sum of two monotone operators:

yn = xn + θn(xn − xn−1),
xn+1 = (I + rnB)⁻¹(yn − rnAxn), n ≥ 1,   (1.11)

where A : H → H and B : H → 2^H. They obtained a weak convergence theorem provided that rn < 2/L, with L the Lipschitz constant of A, and that condition (1.10) holds. It is observed that, for θn > 0, algorithm (1.11) does not take the form of a forward–backward splitting algorithm, since the operator A is still evaluated at the point xn. Recently, Lorenz and Pock [15] proposed the following inertial forward–backward algorithm for monotone operators:

yn = xn + θn(xn − xn−1),
xn+1 = (I + rnB)⁻¹(yn − rnAyn), n ≥ 1,   (1.12)

where {rn} is a positive real sequence. Algorithm (1.12) differs from that of Moudafi and Oliny in that the operator A is evaluated at the inertial extrapolate yn.

The algorithms involving the inertial term mentioned above are only weakly convergent; however, in some applied disciplines, norm convergence is more desirable than weak convergence [3]. In this work, we aim to introduce an algorithm that ensures strong convergence. To this end, following the idea of Takahashi et al. [27], we employ the projection method defined by C1 = C, x1 = P_{C1} x0 and

yn = αn xn + (1 − αn)Txn,
Cn+1 = {z ∈ Cn : ‖yn − z‖ ≤ ‖xn − z‖},
xn+1 = P_{Cn+1} x0, ∀n ∈ N,   (1.13)

where 0 ≤ αn ≤ a < 1 for all n ∈ N. It was proved that the sequence {xn} generated by (1.13) converges strongly to a fixed point of a nonexpansive mapping T. This method is usually called the shrinking projection method (see also Nakajo and Takahashi [18]). We then establish a strong convergence result under suitable conditions. Finally, we report numerical experiments supporting our main results and compare our inertial projection method with the standard projection method; the examples show that our method has a good convergence rate.
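The difference between (1.11) and (1.12) is easy to see in code. The following minimal Python sketch (ours, not from the paper) implements the Lorenz–Pock iteration (1.12), assuming the resolvent of B is known in closed form; replacing A(y) by A applied to the previous iterate recovers the Moudafi–Oliny scheme (1.11). The test operators are again those of Example 3.4 below.

```python
import numpy as np

def inertial_forward_backward(x0, x1, A, J_B, r, theta, n_iter=100):
    """Lorenz-Pock inertial forward-backward iteration (1.12):
        y_n     = x_n + theta_n (x_n - x_{n-1})
        x_{n+1} = (I + r_n B)^{-1}(y_n - r_n A y_n)
    r and theta are callables n -> r_n and n -> theta_n."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        y = x + theta(n) * (x - x_prev)          # inertial extrapolation
        x_prev, x = x, J_B(y - r(n) * A(y))      # forward step at y, then backward step
        # Moudafi-Oliny (1.11) would use J_B(y - r(n) * A(x_prev)) instead
    return x

# Toy operators: Ax = 3x + (1, 2), Bx = 4x, (I + rB)^{-1} y = y / (1 + 4r)
c = np.array([1.0, 2.0])
sol = inertial_forward_backward(np.zeros(2), np.array([2.0, 3.0]),
                                A=lambda x: 3 * x + c,
                                J_B=lambda y: y / 1.4,
                                r=lambda n: 0.1,
                                theta=lambda n: 0.3)
print(sol)   # ≈ (-1/7, -2/7)
```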

2 Preliminaries and lemmas

Let C be a nonempty, closed and convex subset of a Hilbert space H. We write xn ⇀ x to indicate that the sequence {xn} converges weakly to x, and xn → x to indicate that {xn} converges strongly to x. A mapping T : C → C is said to be


1. nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C;
2. α-inverse strongly monotone if there exists α > 0 such that

⟨x − y, Tx − Ty⟩ ≥ α‖Tx − Ty‖², ∀x, y ∈ C.

The nearest point projection of H onto C is denoted by PC, that is, ‖x − PCx‖ ≤ ‖x − y‖ for all x ∈ H and y ∈ C. Such a PC is called the metric projection of H onto C. We know that the metric projection PC is firmly nonexpansive, i.e.,

‖PCx − PCy‖² ≤ ⟨PCx − PCy, x − y⟩

for all x, y ∈ H. Furthermore, ⟨x − PCx, y − PCx⟩ ≤ 0 holds for all x ∈ H and y ∈ C; see [26].

An operator A : H → 2^H is said to be monotone if ⟨x1 − x2, y1 − y2⟩ ≥ 0 whenever y1 ∈ Ax1 and y2 ∈ Ax2. A monotone operator A is said to be maximal if the graph of A is not properly contained in the graph of any other monotone operator. It is known that a monotone operator A is maximal if and only if R(I + rA) = H for every r > 0, where R(I + rA) is the range of I + rA. If A : H → 2^H is maximal monotone, then, for each r > 0, the mapping Tr : H → D(A) defined by Tr = (I + rA)⁻¹, where D(A) is the domain of A, is called the resolvent of A.

Lemma 2.1 [26] Let H be a real Hilbert space. Then the following hold:
(1) ‖x − y‖² = ‖x‖² − ‖y‖² − 2⟨x − y, y⟩ for all x, y ∈ H;
(2) ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩ for all x, y ∈ H;
(3) ‖tx + (1 − t)y‖² = t‖x‖² + (1 − t)‖y‖² − t(1 − t)‖x − y‖² for all t ∈ [0, 1] and x, y ∈ H.

Lemma 2.2 [12] Let C be a nonempty, closed and convex subset of a real Hilbert space H. Given x, y, z ∈ H and a ∈ R, the set {v ∈ C : ‖y − v‖² ≤ ‖x − v‖² + ⟨z, v⟩ + a} is convex and closed.

Lemma 2.3 [18] Let C be a nonempty, closed and convex subset of a real Hilbert space H and PC : H → C the metric projection from H onto C. Then the following inequality holds:

‖y − PCx‖² + ‖x − PCx‖² ≤ ‖x − y‖², ∀x ∈ H, ∀y ∈ C.

Lemma 2.4 [5] Let C be a nonempty closed convex subset of a uniformly convex Banach space X and T a nonexpansive mapping of C into itself with F(T) ≠ ∅. If {xn} is a sequence in C such that xn ⇀ x and (I − T)xn → y, then (I − T)x = y. In particular, if y = 0, then x ∈ F(T).

In what follows, we shall use the notation

T_r^{A,B} = J_r^B(I − rA) = (I + rB)⁻¹(I − rA), r > 0.   (2.1)
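The sets Cn+1 generated by the algorithms of Section 3 turn out to be intersections of half-spaces (their quadratic terms in z cancel; see the sketch after Example 3.4), so the metric projection PC there is computable. As a building block, here is a minimal sketch (ours, for illustration) of the closed-form projection onto a single half-space {z : ⟨a, z⟩ ≤ b}:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Metric projection of x onto the half-space {z : <a, z> <= b}.

    If x already satisfies the constraint, it is its own projection;
    otherwise we step from x along a until the boundary is reached."""
    viol = np.dot(a, x) - b
    if viol <= 0:
        return x
    return x - (viol / np.dot(a, a)) * a

# Example: project (3, 4) onto {z : z_1 + z_2 <= 1}
print(project_halfspace(np.array([3.0, 4.0]), np.array([1.0, 1.0]), 1.0))  # [0. 1.]
```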

Lemma 2.5 [11,14,17] Let H be a real Hilbert space. Let A : H → H be an α-inverse strongly monotone operator and B : H → 2^H a maximal monotone operator. Then:
(i) for r > 0, F(T_r^{A,B}) = (A + B)⁻¹(0);
(ii) for 0 < s ≤ r and x ∈ H, ‖x − T_s^{A,B}x‖ ≤ 2‖x − T_r^{A,B}x‖.

Lemma 2.6 [14] Let H be a real Hilbert space. Assume that A is an α-inverse strongly monotone operator. Then, given r > 0, we have

‖T_r^{A,B}x − T_r^{A,B}y‖² ≤ ‖x − y‖² − r(2α − r)‖Ax − Ay‖² − ‖(I − J_r^B)(I − rA)x − (I − J_r^B)(I − rA)y‖²,   (2.2)

for all x, y ∈ Br = {z ∈ H : ‖z‖ ≤ r}.

3 Main results

In this section, we introduce an inertial forward–backward splitting algorithm and prove a strong convergence theorem for our method in Hilbert spaces.

Theorem 3.1 Let H be a real Hilbert space. Let A : H → H be an α-inverse strongly monotone operator and B : H → 2^H a maximal monotone operator such that (A + B)⁻¹(0) ≠ ∅. Let {xn} be a sequence generated by x0, x1 ∈ C1 = H and

yn = xn + θn(xn − xn−1),
zn = αn xn + (1 − αn)J_{rn}^B(I − rnA)yn,
Cn+1 = {z ∈ Cn : ‖zn − z‖² ≤ ‖xn − z‖² + 2θn²‖xn − xn−1‖² − 2θn(1 − αn)⟨xn − z, xn−1 − xn⟩},
xn+1 = P_{Cn+1} x1, n ≥ 1,   (3.1)

where J_{rn}^B = (I + rnB)⁻¹, {rn} ⊂ (0, 2α), {θn} ⊂ [0, θ] for some θ ∈ [0, 1) and {αn} is a sequence in [0, 1]. Assume that the following conditions hold:

(i) ∑_{n=1}^∞ θn‖xn − xn−1‖ < ∞;
(ii) lim sup_{n→∞} αn < 1;
(iii) 0 < lim inf_{n→∞} rn ≤ lim sup_{n→∞} rn < 2α.

Then the sequence {xn} converges strongly to q = P_{(A+B)⁻¹(0)} x1.

Proof We split the proof into five steps.

Step 1. Show that P_{Cn+1} x1 is well-defined for every x1 ∈ H.
We know that (A + B)⁻¹(0) is closed and convex by Lemma 2.5. From the definition of Cn+1 and Lemma 2.2, Cn+1 is closed and convex for each n ≥ 1. For each n ∈ N, put Tn = J_{rn}^B(I − rnA) and let z ∈ (A + B)⁻¹(0). Then we have

‖zn − z‖² = ‖αn xn + (1 − αn)Tn yn − z‖²
≤ αn‖xn − z‖² + (1 − αn)‖Tn yn − z‖²
≤ αn‖xn − z‖² + (1 − αn)‖yn − z‖²
= αn‖xn − z‖² + (1 − αn)‖xn + θn(xn − xn−1) − z‖²
≤ ‖xn − z‖² + 2(1 − αn)θn⟨xn − xn−1, yn − z⟩
= ‖xn − z‖² + 2(1 − αn)θn⟨xn − xn−1, yn − xn⟩ − 2(1 − αn)θn⟨xn − z, xn−1 − xn⟩
≤ ‖xn − z‖² + 2θn‖xn − xn−1‖‖yn − xn‖ − 2(1 − αn)θn⟨xn − z, xn−1 − xn⟩
≤ ‖xn − z‖² + 2θn²‖xn − xn−1‖² − 2θn(1 − αn)⟨xn − z, xn−1 − xn⟩.   (3.2)

So z ∈ Cn+1, and thus (A + B)⁻¹(0) ⊂ Cn+1. Therefore P_{Cn+1} x1 is well-defined.

Step 2. Show that lim_{n→∞} ‖xn − x1‖ exists.
Since (A + B)⁻¹(0) is a nonempty, closed and convex subset of H, there exists a unique v ∈ (A + B)⁻¹(0) such that

v = P_{(A+B)⁻¹(0)} x1.   (3.3)

From xn = P_{Cn} x1, Cn+1 ⊂ Cn and xn+1 ∈ Cn for all n ≥ 1, we get

‖xn − x1‖ ≤ ‖xn+1 − x1‖, ∀n ≥ 1.   (3.4)

On the other hand, as (A + B)⁻¹(0) ⊂ Cn, we obtain

‖xn − x1‖ ≤ ‖v − x1‖, ∀n ≥ 1.   (3.5)

It follows that the sequence {‖xn − x1‖} is bounded and nondecreasing, and therefore lim_{n→∞} ‖xn − x1‖ exists.

Step 3. Show that xn → q as n → ∞.
For m > n, by the definition of Cn, we see that xm = P_{Cm} x1 ∈ Cm ⊂ Cn. By Lemma 2.3, we get

‖xm − xn‖² ≤ ‖xm − x1‖² − ‖xn − x1‖².   (3.6)

From Step 2, we obtain that {xn} is Cauchy. Hence xn → q as n → ∞.

Step 4. Show that q ∈ (A + B)⁻¹(0).
From Step 3, we get

lim_{n→∞} ‖xn+1 − xn‖ = 0.   (3.7)

By (i), we have

‖yn − xn‖ = θn‖xn − xn−1‖ → 0   (3.8)

as n → ∞. From (3.7) and (3.8), we obtain

‖yn − xn+1‖ ≤ ‖yn − xn‖ + ‖xn − xn+1‖ → 0   (3.9)

as n → ∞. Since xn+1 ∈ Cn+1 ⊂ Cn, we have

‖yn − zn‖ ≤ ‖yn − xn+1‖ + ‖xn+1 − zn‖
≤ ‖yn − xn+1‖ + √(‖xn − xn+1‖² + 2θn²‖xn − xn−1‖² − 2θn(1 − αn)⟨xn − xn+1, xn−1 − xn⟩)
≤ ‖yn − xn+1‖ + √(‖xn − xn+1‖² + 2θn²‖xn − xn−1‖² + 2θn(1 − αn)‖xn − xn+1‖‖xn−1 − xn‖)
→ 0   (3.10)

as n → ∞, by (i), (3.7) and (3.9). It follows from (3.8) and (3.10) that

‖zn − xn‖ ≤ ‖zn − yn‖ + ‖yn − xn‖ → 0   (3.11)

as n → ∞. By Lemmas 2.1 and 2.6, for z ∈ (A + B)⁻¹(0) we have

‖zn − z‖² = ‖αn(xn − z) + (1 − αn)(Tn yn − z)‖²
≤ αn‖xn − z‖² + (1 − αn)‖Tn yn − z‖²
≤ αn‖xn − z‖² + (1 − αn)[‖yn − z‖² − rn(2α − rn)‖Ayn − Az‖² − ‖yn − rnAyn − Tnyn + rnAz‖²]
≤ αn‖xn − z‖² + (1 − αn)[‖xn − z + θn(xn − xn−1)‖² − rn(2α − rn)‖Ayn − Az‖² − ‖yn − rnAyn − Tnyn + rnAz‖²]
≤ ‖xn − z‖² + 2(1 − αn)θn⟨xn − xn−1, yn − z⟩ − rn(1 − αn)(2α − rn)‖Ayn − Az‖² − (1 − αn)‖yn − rnAyn − Tnyn + rnAz‖².   (3.12)

By assumptions (i)–(iii), it follows from (3.11) and (3.12) that

lim_{n→∞} ‖Ayn − Az‖ = lim_{n→∞} ‖yn − rnAyn − Tnyn + rnAz‖ = 0.   (3.13)

This gives, by the triangle inequality, that

lim_{n→∞} ‖Tnyn − yn‖ = 0.   (3.14)

Since lim inf_{n→∞} rn > 0, there is r > 0 such that rn ≥ r for all n ≥ 1. Lemma 2.5(ii) yields

‖T_r^{A,B}yn − yn‖ ≤ 2‖Tnyn − yn‖.   (3.15)

Then, by (3.14), we obtain

lim sup_{n→∞} ‖T_r^{A,B}yn − yn‖ ≤ 2 lim_{n→∞} ‖Tnyn − yn‖ = 0.   (3.16)

It follows that

lim_{n→∞} ‖T_r^{A,B}yn − yn‖ = 0.   (3.17)

From (3.8), since xn → q, we also have yn → q. Since T_r^{A,B} is nonexpansive, by Lemma 2.4 and (3.17), we have q ∈ (A + B)⁻¹(0).

Step 5. Show that q = P_{(A+B)⁻¹(0)} x1.
Since xn = P_{Cn} x1 and (A + B)⁻¹(0) ⊂ Cn, we obtain

⟨x1 − xn, xn − z⟩ ≥ 0, ∀z ∈ (A + B)⁻¹(0).   (3.18)

By taking the limit in (3.18), we obtain

⟨x1 − q, q − z⟩ ≥ 0, ∀z ∈ (A + B)⁻¹(0).   (3.19)

This shows that q = P_{(A+B)⁻¹(0)} x1. ∎

By setting αn = 0 for all n ≥ 1, we immediately obtain the following result:

Corollary 3.2 Let H be a real Hilbert space. Let A : H → H be an α-inverse strongly monotone operator and B : H → 2^H a maximal monotone operator such that (A + B)⁻¹(0) ≠ ∅. Let {xn} be a sequence generated by x0, x1 ∈ C1 = H and

yn = xn + θn(xn − xn−1),
zn = J_{rn}^B(I − rnA)yn,
Cn+1 = {z ∈ Cn : ‖zn − z‖² ≤ ‖xn − z‖² + 2θn²‖xn − xn−1‖² − 2θn⟨xn − z, xn−1 − xn⟩},
xn+1 = P_{Cn+1} x1, n ≥ 1,   (3.20)


where J_{rn}^B = (I + rnB)⁻¹, {rn} ⊂ (0, 2α) and {θn} ⊂ [0, θ] for some θ ∈ [0, 1). Assume that the following conditions hold:

(i) ∑_{n=1}^∞ θn‖xn − xn−1‖ < ∞;
(ii) 0 < lim inf_{n→∞} rn ≤ lim sup_{n→∞} rn < 2α.

Then the sequence {xn} converges strongly to q = P_{(A+B)⁻¹(0)} x1.

Remark 3.3 We remark that condition (i) is easily implemented in numerical computation, since the value of ‖xn − xn−1‖ is known before choosing θn. Indeed, the parameter θn can be chosen such that 0 ≤ θn ≤ θ̄n, where

θ̄n = min{ωn/‖xn − xn−1‖, θ} if xn ≠ xn−1, and θ̄n = θ otherwise,

and {ωn} is a positive sequence such that ∑_{n=1}^∞ ωn < ∞.

We now give an example in the space R².

Example 3.4 Let H = R² and let A : R² → R² be defined by Ax = 3x + (1, 2), and let B : R² → R² be defined by Bx = 4x, where x ∈ R². We see that A is 1/3-inverse strongly monotone and B is maximal monotone. Moreover, by a direct calculation, we have for s > 0

J_s^B(x − sAx) = (I + sB)⁻¹(x − sAx) = ((1 − 3s)/(1 + 4s)) x − (s/(1 + 4s)) (1, 2),

where x ∈ R². Since α = 1/3, we can choose rn = 0.1 for all n ∈ N. Let αn = 1/(100n + 1) and

θn = min{1/(n²‖xn − xn−1‖), 0.5} if xn ≠ xn−1, and θn = 0.5 otherwise.   (3.21)

The stopping criterion is ‖xn+1 − xn‖ < 10⁻⁴. Choose x0 = (0, 1) and x1 = (2, 3). Then, by Theorem 3.1, we obtain the numerical results shown in Fig. 1, where the solution is seen to be approximately (−0.14253, −0.28604). We next provide a numerical comparison between our inertial forward–backward method (3.1) and the standard forward–backward method (i.e., θn = 0); from Fig. 2, it is observed that our inertial forward–backward method has better convergence behavior than the standard forward–backward method.
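For implementation it is worth noting (our observation, implicit in the appeal to Lemma 2.2 in Step 1) that the ‖z‖² terms cancel in the inequality defining Cn+1, so each new constraint is the half-space

⟨2(xn − zn) − 2θn(1 − αn)(xn−1 − xn), z⟩ ≤ ‖xn‖² − ‖zn‖² + 2θn²‖xn − xn−1‖² − 2θn(1 − αn)⟨xn, xn−1 − xn⟩,

and P_{Cn+1} x1 amounts to a small quadratic program. The following Python sketch of algorithm (3.1) is ours, not the authors' code; it reproduces Example 3.4 and, as an implementation convenience, computes the projection with scipy's SLSQP solver.

```python
import numpy as np
from scipy.optimize import minimize

def shrinking_inertial_fb(x0, x1, A, J_B, r, alpha, theta, tol=1e-4, max_iter=200):
    """Sketch of algorithm (3.1). C_{n+1} is kept as a list of half-spaces
    (a, b) meaning <a, z> <= b, and x_{n+1} = P_{C_{n+1}} x_1 is computed by
    minimizing ||z - x_1||^2 subject to those constraints."""
    x1 = np.asarray(x1, float)
    halfspaces = []                                   # C_1 = H: no constraints yet
    x_prev, x = np.asarray(x0, float), x1.copy()
    for n in range(1, max_iter + 1):
        th, al = theta(n, x, x_prev), alpha(n)
        y = x + th * (x - x_prev)                     # inertial step
        z = al * x + (1 - al) * J_B(y - r(n) * A(y), r(n))
        a = 2 * (x - z) - 2 * th * (1 - al) * (x_prev - x)
        b = (x @ x - z @ z + 2 * th**2 * (x - x_prev) @ (x - x_prev)
             - 2 * th * (1 - al) * x @ (x_prev - x))
        halfspaces.append((a, b))                     # half-space form of C_{n+1}
        cons = [{'type': 'ineq', 'fun': lambda v, a=a, b=b: b - a @ v}
                for (a, b) in halfspaces]
        x_next = minimize(lambda v: (v - x1) @ (v - x1), x,
                          method='SLSQP', constraints=cons).x
        if np.linalg.norm(x_next - x) < tol:          # stopping rule of Example 3.4
            return x_next
        x_prev, x = x, x_next
    return x

# Example 3.4: Ax = 3x + (1, 2), Bx = 4x, so (I + sB)^{-1} v = v / (1 + 4s)
c = np.array([1.0, 2.0])
sol = shrinking_inertial_fb(
    x0=[0.0, 1.0], x1=[2.0, 3.0],
    A=lambda v: 3 * v + c,
    J_B=lambda v, s: v / (1 + 4 * s),
    r=lambda n: 0.1,
    alpha=lambda n: 1.0 / (100 * n + 1),
    theta=lambda n, x, xp: 0.5 if np.allclose(x, xp)
          else min(1.0 / (n**2 * np.linalg.norm(x - xp)), 0.5))
print(sol)    # ≈ (-0.14253, -0.28604), matching Fig. 1
```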

Fig. 1 Convergence behavior of xn = (xn^1, xn^2) for Example 3.4.

Fig. 2 Error plotting of En for Example 3.4: inertial forward–backward algorithm versus forward–backward algorithm.

4 Application to the convex minimization problems

In this section, we apply our main result to the convex minimization problem and give some numerical results. Let F : H → R be a convex smooth function and G : H → R be a convex, lower semicontinuous and nonsmooth function. We consider the problem of finding x̂ ∈ H such that

F(x̂) + G(x̂) ≤ F(x) + G(x)   (4.1)

for all x ∈ H. By Fermat's rule, problem (4.1) is equivalent to the problem of finding x̂ ∈ H such that

0 ∈ ∇F(x̂) + ∂G(x̂),   (4.2)

where ∇F is the gradient of F and ∂G is the subdifferential of G. From this point of view, we can set A = ∇F and B = ∂G in Theorem 3.1: if ∇F is 1/L-Lipschitz continuous, then it is L-inverse strongly monotone [2, Corollary 10], and ∂G is maximal monotone [24, Theorem A]. So we obtain the following result.

Theorem 4.1 Let H be a real Hilbert space. Let F : H → R be a convex and differentiable function with 1/L-Lipschitz continuous gradient ∇F, and let G : H → R be a convex and lower semicontinuous function such that F + G attains a minimizer. Let {xn} be a sequence generated by x0, x1 ∈ C1 = H and

yn = xn + θn(xn − xn−1),
zn = αn xn + (1 − αn)J_{rn}^{∂G}(I − rn∇F)yn,
Cn+1 = {z ∈ Cn : ‖zn − z‖² ≤ ‖xn − z‖² + 2θn²‖xn − xn−1‖² − 2θn(1 − αn)⟨xn − z, xn−1 − xn⟩},
xn+1 = P_{Cn+1} x1, n ≥ 1,   (4.3)

where J_{rn}^{∂G} = (I + rn∂G)⁻¹, {rn} ⊂ (0, 2L), {θn} ⊂ [0, θ] for some θ ∈ [0, 1) and {αn} is a sequence in [0, 1]. Assume that the following conditions hold:

(i) ∑_{n=1}^∞ θn‖xn − xn−1‖ < ∞;
(ii) lim sup_{n→∞} αn < 1;
(iii) 0 < lim inf_{n→∞} rn ≤ lim sup_{n→∞} rn < 2L.

Then the sequence {xn} converges strongly to a minimizer of F + G.
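For the nonsmooth term used in Example 4.2 below, G = ‖·‖₁, the resolvent J_r^{∂G} = (I + r∂G)⁻¹ is the componentwise soft-thresholding operator given in that example. A minimal sketch (ours, for illustration):

```python
import numpy as np

def soft_threshold(x, r):
    """Resolvent (I + r d||.||_1)^{-1}: componentwise soft-thresholding,
    J_r(x)_i = sign(x_i) * max(|x_i| - r, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

# One forward-backward step for Example 4.2:
# F(x) = ||x||^2 + <(3, 5), x> + 9, G = ||.||_1, grad F(x) = 2x + (3, 5)
grad_F = lambda x: 2 * x + np.array([3.0, 5.0])
x = np.array([2.0, 3.0])
print(soft_threshold(x - 0.1 * grad_F(x), 0.1))   # prox_{rG}(x - r grad F(x)), r = 0.1
```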


Example 4.2 Solve the following minimization problem:

min_{x∈R²} ‖x‖₂² + ⟨(3, 5), x⟩ + 9 + ‖x‖₁,   (4.4)

where x = (x1, x2) ∈ R². Set F(x) = ‖x‖₂² + ⟨(3, 5), x⟩ + 9 and G(x) = ‖x‖₁ for all x ∈ R². We have, for x ∈ R² and r > 0,

∇F(x) = 2x + (3, 5) and J_r^{∂G}(x) = (max{|x1| − r, 0} sign(x1), max{|x2| − r, 0} sign(x2)).

We see that ∇F is 2-Lipschitz continuous and, consequently, 1/2-inverse strongly monotone. Choose rn = 0.1 for all n ∈ N. Let αn = 1/(100n + 1) and

θn = min{1/(n²‖xn − xn−1‖), 0.5} if xn ≠ xn−1, and θn = 0.5 otherwise.

The stopping criterion is ‖xn+1 − xn‖ < 10⁻⁴. Let x0 = (0, 1) and x1 = (2, 3). Then, by Theorem 4.1, we obtain the numerical results shown in Fig. 3, which indicate that the minimizer of F + G is approximately (−0.99958, −1.99973). Indeed, at x̂ = (−1, −2) we have ∇F(x̂) = (1, 1) and ∂G(x̂) = {(−1, −1)}, so 0 ∈ ∇F(x̂) + ∂G(x̂). We next provide a numerical comparison between our inertial forward–backward method (4.3) and the standard forward–backward method (i.e., θn = 0) (Fig. 4).

Fig. 3 Convergence behavior of xn = (xn^1, xn^2) for Example 4.2.

Fig. 4 Error plotting of En for Example 4.2: inertial forward–backward algorithm versus forward–backward algorithm.

Remark 4.3 Examples 3.4 and 4.2 show that, for solving the inclusion problem, our inertial forward–backward method has a good convergence speed and requires a smaller number of iterations than the standard forward–backward method.

Acknowledgements S.A. Khan would like to thank BITS-Pilani, Dubai Campus. S. Suantai was supported by Chiang Mai University. W. Cholamjiak would like to thank the Thailand Research Fund under Project MRG6080105 and the University of Phayao.


References

1. Alvarez, F., Attouch, H.: An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
2. Baillon, J.B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137–150 (1977)
3. Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26, 248–264 (2001)
4. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011)
5. Browder, F.E.: Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 54, 1041–1044 (1965)
6. Cholamjiak, P.: A generalized forward–backward splitting method for solving quasi inclusion problems in Banach spaces. Numer. Algorithms 71, 915–932 (2016)
7. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
8. Dang, Y., Sun, J., Xu, H.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. https://doi.org/10.3934/jimo.2016078
9. Dong, Q.L., Yuan, H.B., Cho, Y.J., Rassias, Th.M.: Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. https://doi.org/10.1007/s11590-016-1102-9
10. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82, 421–439 (1956)
11. Eshita, K., Takahashi, W.: Approximating zero points of accretive operators in general Banach spaces. J. Fixed Point Theory Appl. 2, 105–116 (2007)
12. Kim, T.H., Xu, H.K.: Strong convergence of modified Mann iterations for asymptotically nonexpansive mappings and semigroups. Nonlinear Anal. 64, 1140–1152 (2006)
13. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
14. López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Forward–backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236 (2012)
15. Lorenz, D., Pock, T.: An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
16. Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)
17. Nakajo, K., Takahashi, W.: Strong and weak convergence theorems by an improved splitting method. Commun. Appl. Nonlinear Anal. 9, 99–107 (2002)
18. Nakajo, K., Takahashi, W.: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372–379 (2003)
19. Nesterov, Y.: A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
20. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383–390 (1979)
21. Peaceman, D.H., Rachford, H.H.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3, 28–41 (1955)
22. Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987)
23. Polyak, B.T.: Some methods of speeding up the convergence of iterative methods. Zh. Vychisl. Mat. Mat. Fiz. 4, 1–17 (1964)
24. Rockafellar, R.T.: On the maximality of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
25. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
26. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
27. Takahashi, W., Takeuchi, Y., Kubota, R.: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276–286 (2008)
28. Tseng, P.: A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)